Is a Zero Footprint Viewer for a Radiologist’s Read the Right Tool for the Job?

A zero-footprint viewer is an image viewer that runs entirely in a web browser and requires nothing to be installed on the computer running it. This is in direct contrast to "thick" or "thin" client web viewers, or dedicated viewing workstations. There are, of course, benefits to a zero-footprint viewer. First, since there is nothing to install, the user does not need admin-level privileges. These viewers are also presumed to be browser and OS agnostic, meaning they should run on a PC or Mac, in Chrome, IE, Safari, and the like. Zero-footprint viewers are now capable of displaying full DICOM image sets, as well as various levels of compression from lossless to lossy. As vendors continue to add more tools to these viewers, they can satisfy the needs of a much larger group of physicians across a variety of specialties and image sets, including cardiology and visible light. The ability for physicians to view images anywhere, including on a mobile device, is a huge step forward for the industry and can potentially provide a dramatic improvement in patient care, as images are no longer "locked" at a specific location.

Where I diverge from trending industry-think is on the idea that every radiologist's viewer should be zero-footprint. Radiology standards for viewing and interpreting images are necessarily high. First and foremost, industry standards call for a radiologist to read on DICOM-calibrated monitors. For "diagnostic" monitors there should be a QA program in place to validate the calibration of the monitor and its ability to display the full depth of data in the radiology image. In addition to the need for diagnostic-quality monitors, a radiologist typically dictates into a voice recognition system to generate a report. There are indeed cloud-based dictation solutions and, while it is possible for a radiologist to type a report directly into the EMR, these are not the norm for primary interpretation. The dominant voice recognition system, used by the majority of radiologists, requires its application to be installed on a workstation running the Windows OS. It is very difficult to have this software running on one PC but pointing to two different versions or implementations; it is, in effect, one dictation client per PC.

Combined, these two factors generally limit radiologists, in my experience, to an average of two to three physical locations in which they dictate. This is important, because here is where we begin to see the technology trade-offs of a zero-footprint viewer. First, a web browser such as IE or Safari does not have a reliable way of determining how many monitors are attached to the workstation (a typical read configuration has three monitors), nor how to best utilize that real estate. Second, speed is paramount. When comparing viewers, if there is a compromise to be made between features and responsiveness, most radiologists I have worked with will, within reason, choose speed. Zero-footprint viewers tend to do well on a good network, but over a high-latency, low-bandwidth network it is very difficult to provide lightning-fast response times. In that situation, a client-based viewer can download full data sets in the background and pre-cache data for viewing, i.e., it is loading the next case. Additionally, while the zero-footprint viewer will probably beat the client in time-to-first-image, the client-based viewer will often win during significant image manipulation. Overall, the client is more resilient to the inherent variability of a network connection.

So, given that a radiologist is, more often than not, reading from a pre-defined set of locations that require specific physical hardware in terms of monitors and a dedicated dictation application, is there a superior advantage to the radiologist in having a zero-footprint viewer? I submit that, currently, there is not.
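
To make the pre-caching idea concrete, here is a minimal Python sketch of how a client-based viewer might pull the next cases on the worklist in the background while the radiologist reads the current one. The fetch_study function, the worklist, and the cache directory are hypothetical placeholders for illustration, not any particular vendor's API.

```python
import concurrent.futures
from pathlib import Path

CACHE_DIR = Path("viewer_cache")   # hypothetical local cache location
CACHE_DIR.mkdir(exist_ok=True)

def fetch_study(accession: str) -> Path:
    """Hypothetical stand-in for a full-fidelity study download.

    A real client would issue DICOM C-MOVE/C-GET or DICOMweb WADO-RS
    requests here and write the pixel data to local disk.
    """
    target = CACHE_DIR / f"{accession}.bin"
    target.write_bytes(b"")        # placeholder: no actual transfer happens
    return target

def precache_worklist(worklist: list[str], max_workers: int = 2) -> None:
    """Download upcoming cases in the background so that opening the next
    study is a local-disk read instead of a network round trip."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_study, acc): acc for acc in worklist}
        for future in concurrent.futures.as_completed(futures):
            print(f"pre-cached {futures[future]} -> {future.result()}")

# While the radiologist reads the current case, the next few are fetched:
precache_worklist(["ACC1001", "ACC1002", "ACC1003"])
```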

I believe that, as an industry, we need BOTH zero-footprint viewers of diagnostic quality and client-based viewers. Currently, clients provide a rich feature set, faster manipulation, and integration in a reading environment, while zero-footprint viewers provide flexibility of delivery and fast review of compressed data sets in any browser, anytime, anywhere. These are different tools for different needs. You wouldn't try to use a screwdriver to drive a nail, or vice versa.


Ultimately, the best solution will blend the advantages of both systems, depending on the needs of the user at a particular time and place.

Buying and Selling Big Data: A Practical Solution for AI and Clinical Research

Every now and then someone asks me about, or I read an article about, someone selling massive amounts of data to one of the big companies out there. When you have a lot of data, the obvious thought is: I want some of that free money! As a thought exercise, let's look at some of the realities of moving more than a petabyte of image data. A petabyte is 1,024 terabytes, or 1,048,576 gigabytes. Many, dare I say most, VNAs store data in a near-DICOM format, that is, close to but often not a straight .dcm file. This means that to get data out you can't simply copy the files; you have to do a DICOM transaction. Some VNAs do store straight DCM files, but even then there is still the issue of de-identification, so a DICOM store is not the end of the world.
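
To give a sense of why de-identification adds overhead to an export, here is a minimal sketch using pydicom that blanks a handful of obvious patient-identifying tags before a file is handed off. This is only an illustration; a production pipeline would follow the DICOM PS3.15 confidentiality profiles, handle private tags and burned-in annotations, and keep a key for re-linking if the data is ever sold again.

```python
import pydicom

# A small, illustrative subset of identifying attributes; a real
# de-identification profile covers far more than this.
TAGS_TO_BLANK = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "ReferringPhysicianName",
    "InstitutionName",
]

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for keyword in TAGS_TO_BLANK:
        if keyword in ds:
            setattr(ds, keyword, "")   # blank the value, keep the element
    ds.remove_private_tags()           # private tags often hide PHI
    ds.save_as(out_path)

deidentify("input.dcm", "deidentified.dcm")
```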

In my experience a single server tops out at somewhere around 15,000 studies per day, or roughly 500 GB. So, doing the simple math, ten servers dedicated to nothing but copying this data, ignoring any penalty for de-identification or additional compression, will take roughly 210 days to move 1 PB (1,048,576 GB ÷ 5,000 GB per day). I submit that this is not practical and that there is a better way.
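
The back-of-the-envelope math, written out (using the same assumptions as above: ~500 GB per server per day and ten dedicated servers):

```python
# Rough transfer-time estimate from the assumptions above.
PB_IN_GB = 1024 * 1024             # 1 PB = 1,048,576 GB
GB_PER_SERVER_PER_DAY = 500        # ~15,000 studies/day per server
SERVERS = 10

days = PB_IN_GB / (GB_PER_SERVER_PER_DAY * SERVERS)
print(f"{days:.0f} days to move 1 PB")   # ~210 days
```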

First, we are looking at the problem from the wrong end. Whether for clinical research or for training an AI engine, it is likely that the buyer doesn't want ALL the data; they are looking for very specific use cases. In particular, what diagnosis are they trying to research or train against? Instead of dumping billions of images on the buyer and letting them figure it out, perhaps a targeted approach is better. This begins at the report, not the images. Since I would want to have a long-term relationship and sell my data multiple times, I propose that, instead of answering a single question like "send me all chest x-rays with lung cancer," we prepare a system that can answer any question.

So, to do this we would build a database that holds all reports (not images) for the enterprise. Start by pulling an extract from the EMR for all existing reports, and then add an HL7 or FHIR connection to receive all new reports. With the reports parsed into the database, any future question or requirement can be answered. The output of such a query would be accession number, patient ID, date of service, and procedure description. Obviously, there SHOULD be a 1:1 relationship between the accession number on the report and the images in the VNA, but the other fields will help when Murphy happens, which he often does.
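
As a rough illustration of the kind of report index I have in mind, here is a minimal sketch using Python and SQLite. The table layout, the column names, and the assumption that the report fields arrive already parsed out of an HL7 ORU or FHIR DiagnosticReport feed are choices made for the example, not a prescription.

```python
import sqlite3

conn = sqlite3.connect("report_index.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS reports (
        accession_number      TEXT PRIMARY KEY,
        patient_id            TEXT,
        patient_sex           TEXT,
        patient_birth_date    TEXT,   -- YYYY-MM-DD
        date_of_service       TEXT,   -- YYYY-MM-DD
        procedure_description TEXT,
        report_text           TEXT    -- narrative parsed from HL7 ORU / FHIR upstream
    )
""")

def add_report(row: dict) -> None:
    """Insert one report extracted from the EMR dump or the live HL7/FHIR feed."""
    conn.execute(
        """INSERT OR REPLACE INTO reports VALUES
           (:accession_number, :patient_id, :patient_sex, :patient_birth_date,
            :date_of_service, :procedure_description, :report_text)""",
        row,
    )
    conn.commit()

add_report({
    "accession_number": "ACC12345",
    "patient_id": "MRN0001",
    "patient_sex": "M",
    "patient_birth_date": "1999-04-02",
    "date_of_service": "2023-06-15",
    "procedure_description": "XR CHEST 2 VIEWS",
    "report_text": "Findings consistent with lung cancer ...",
})
```

The output fields called out above (accession number, patient ID, date of service, procedure description) are exactly the columns a targeted query against this table would return.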

Armed with this export, a savvy VNA team can do a targeted export of the specific data that is needed. Instead of taking a dump truck and leaving all of the data in the parking lot, one can deliver the very specific set of data needed and set up a relationship that can be beneficial to both sides moving forward. Using this method, one could even prepare a sample set of, say, 1,000 exams for the buyer, against which the queries can be revised and refined to produce a better and better targeted data set.
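
On the export side, here is a hedged sketch of what a targeted pull from the VNA could look like using pynetdicom, driven by the accession numbers returned from the report index. The host, port, and AE titles are placeholders, and the destination "DEID_STORE" is assumed to be a store SCP that runs the de-identification step before anything leaves the building.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

# Placeholder connection details for the VNA and the receiving AE.
VNA_HOST, VNA_PORT, VNA_AET = "vna.example.org", 104, "VNA"
DESTINATION_AET = "DEID_STORE"   # assumed store SCP that de-identifies on receipt

def export_studies(accession_numbers: list[str]) -> None:
    ae = AE(ae_title="RESEARCH_SCU")
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)
    assoc = ae.associate(VNA_HOST, VNA_PORT, ae_title=VNA_AET)
    if not assoc.is_established:
        raise RuntimeError("Could not associate with the VNA")
    try:
        for acc in accession_numbers:
            query = Dataset()
            query.QueryRetrieveLevel = "STUDY"
            query.AccessionNumber = acc
            # Ask the VNA to C-MOVE the matching study to the de-identifying SCP.
            for status, _identifier in assoc.send_c_move(
                query, DESTINATION_AET, StudyRootQueryRetrieveInformationModelMove
            ):
                if status:
                    print(f"{acc}: move status 0x{status.Status:04X}")
    finally:
        assoc.release()

export_studies(["ACC12345", "ACC23456"])
```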

Now, instead of providing all chest x-rays with lung cancer, we can provide Hispanic non-smoker males between the ages of 15 and 30 with a lung cancer diagnosis. I am not a researcher, but I suspect that this type of targeted approach would be more beneficial to them, as well as much easier to service from the VNA; in effect, a win-win.
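
Building on the report table sketched earlier, that cohort might translate into a query along these lines. The ethnicity and smoking-status columns are assumptions; they would only exist if that demographic data were pulled into the index from the EMR alongside the reports, and a plain keyword match on the report text is a crude stand-in for coded diagnoses or NLP.

```python
import sqlite3

conn = sqlite3.connect("report_index.db")

# Assumes the reports table was extended with patient_ethnicity and
# smoking_status columns sourced from the EMR; those are not DICOM fields.
cohort = conn.execute("""
    SELECT accession_number, patient_id, date_of_service, procedure_description
    FROM reports
    WHERE report_text LIKE '%lung cancer%'
      AND patient_sex = 'M'
      AND patient_ethnicity = 'Hispanic'
      AND smoking_status = 'never'
      AND (julianday(date_of_service) - julianday(patient_birth_date)) / 365.25
          BETWEEN 15 AND 30
""").fetchall()

for row in cohort:
    print(row)
```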

Searching for Commitment Between PACS and VNA

Many moons ago, when most PACS were designed, the archive was local. It is the "A" in PACS, after all. Now that the industry is moving inexorably to a deconstructed model, or PACS as a service, the archive is rarely on the same LAN as the PACS. Not only is it not on the same LAN, but the fact that it is a separate application means that different rules may apply. For example, some systems accept DICOM studies with alpha characters in the study UID; others will allow series or images with the same SOP instance UID to be stored in two different studies. These variations in the interpretation or enforcement of the DICOM standard lead to problems when storing to the VNA. There are times when a DICOM store transaction is successful, but the study is not accepted into the VNA. There can also be a delay between the time a study is received by the VNA and when it is actually stored to disk, as many VNAs have some sort of inbound cache or holding pen while processing data. This discrepancy can create a situation where the PACS believes a study to be stored when it is not, which is of course heresy for an archive.
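
As a concrete example of the kind of validation a VNA (or a pre-archive gateway) can apply, here is a small sketch that checks whether a Study Instance UID is syntactically legal per DICOM PS3.5: numeric components separated by dots, no multi-digit component starting with a zero, and no more than 64 characters overall. It is only a syntax check, not a fix for duplicate SOP instance UIDs.

```python
import re

# Components are digits only; a multi-digit component may not start with 0.
_UID_RE = re.compile(r"^(0|[1-9][0-9]*)(\.(0|[1-9][0-9]*))*$")

def is_valid_uid(uid: str) -> bool:
    """Return True if uid is a syntactically valid DICOM UID."""
    return len(uid) <= 64 and bool(_UID_RE.match(uid))

print(is_valid_uid("1.2.840.10008.5.1.4.1.1.2"))   # True
print(is_valid_uid("1.2.840.ABC.1"))               # False: alpha characters
print(is_valid_uid("1.2.03.4"))                    # False: leading zero
```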

It turns out that there is an old-school, little-used solution for this very problem: the arcane process called DICOM Storage Commitment, and I highly recommend that every VNA owner enable it for all sources that support it. During the DICOM store transaction, each image should be acknowledged as received, and in theory any images that are not acknowledged would be resent by the PACS or other source system. In practice, there are a number of places where this does not occur. Storage commitment is a separate transaction that occurs after the DICOM store: the sending system generates a new request in which it lists every image that was sent, and the response includes a list of every image with a success or failure. If any image is listed as a failure, the source system can resend that image or the entire study; most tend to resend the entire study.
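
To make the mechanics concrete, here is a hedged pynetdicom sketch of the request side of storage commitment. The source sends an N-ACTION to the well-known Storage Commitment Push Model instance listing every SOP instance it just stored; the archive's answer comes back later as an N-EVENT-REPORT, often on a separate association, which this sketch does not show. Host, port, and AE titles are placeholders.

```python
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid
from pynetdicom import AE

# Well-known UIDs for the Storage Commitment Push Model (DICOM PS3.4, Annex J).
SC_PUSH_MODEL_CLASS = "1.2.840.10008.1.20.1"
SC_PUSH_MODEL_INSTANCE = "1.2.840.10008.1.20.1.1"

def request_commitment(sop_instances: list[tuple[str, str]]) -> None:
    """Ask the archive to commit to safekeeping of the listed instances.

    sop_instances is a list of (SOP Class UID, SOP Instance UID) pairs
    for the objects that were just sent via C-STORE.
    """
    ae = AE(ae_title="PACS_SCU")
    ae.add_requested_context(SC_PUSH_MODEL_CLASS)
    assoc = ae.associate("vna.example.org", 104, ae_title="VNA")  # placeholders
    if not assoc.is_established:
        raise RuntimeError("Could not associate with the VNA")
    try:
        ds = Dataset()
        ds.TransactionUID = generate_uid()  # links the later N-EVENT-REPORT to this request
        ds.ReferencedSOPSequence = []
        for class_uid, instance_uid in sop_instances:
            item = Dataset()
            item.ReferencedSOPClassUID = class_uid
            item.ReferencedSOPInstanceUID = instance_uid
            ds.ReferencedSOPSequence.append(item)

        # Action Type ID 1 = request storage commitment for the listed instances.
        status, _ = assoc.send_n_action(
            ds, 1, SC_PUSH_MODEL_CLASS, SC_PUSH_MODEL_INSTANCE
        )
        if status:
            print(f"N-ACTION status: 0x{status.Status:04X}")
    finally:
        assoc.release()
```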

One problem with using storage commitment is that many vendors have ignored the transaction for quite some time, with the result that it is often less than optimally designed or configured. Some systems have default timeouts, others batch up storage commitment messages, and still others will not archive anything else until the commit is received. Even with these limitations, it is worth it. The fundamental problem is that when a source believes a study has been archived, that study becomes available to be deleted or flushed from the cache. If for some reason it did not successfully archive, there will be data loss.