Searching for commitment between PACS and VNA

Many moons ago, when most PACS were designed, the archive was local; it is, after all, the A in PACS. Now that the industry is moving inexorably to a deconstructed model, or PACS as a service, the archive is rarely on the same LAN as the PACS. Not only is it not on the same LAN, but the fact that it is a separate application means that different rules may apply. For example, some systems accept DICOM studies with alpha characters in the study UID, and others will allow series or images to be stored in two different studies with the same SOP instance UID. These variations in interpretation or enforcement of the DICOM standard lead to problems when storing to the VNA. There are times when a DICOM store transaction is successful, but the study is not accepted into the VNA. There can also be a delay between the time a study is received by the VNA and when it is actually stored to disk, as many VNAs have some sort of inbound cache or holding pen while processing data. This discrepancy can create a situation where the PACS believes a study to be stored when it is not, which is of course heresy for an archive.
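As an aside, that study UID rule is easy to check mechanically: a legal DICOM UID is just numeric components joined by dots, with no leading zeros and a 64-character cap. A minimal sketch in Python (the regex and helper name are mine, not from any particular product):

```python
import re

# DICOM UID rules (PS3.5): components of digits only, no leading zeros
# (except a lone "0"), joined by dots, 64 characters maximum.
UID_RE = re.compile(r'^(0|[1-9]\d*)(\.(0|[1-9]\d*))+$')

def is_valid_uid(uid: str) -> bool:
    return len(uid) <= 64 and bool(UID_RE.match(uid))

print(is_valid_uid('1.2.840.10008.1.2'))  # True
print(is_valid_uid('1.2.ABC.4'))          # False -- alpha characters
```

A strict system rejects the second UID outright; a lax one stores it, and the mismatch only surfaces when the study moves to the VNA.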

It turns out that there is an old-school, little-used solution for this very problem: the arcane process called DICOM Storage Commitment, and I highly recommend that every VNA owner enable it for all sources that support it. During the DICOM store transaction each image should be acknowledged as received, and in theory any images that are not acknowledged would be resent by the PACS or other source system. In practice there are a number of places where this does not occur. The storage commitment is a separate transaction that occurs after the DICOM store. The sending system generates a new transaction in which it lists every image that was sent, and the response includes a list of every image with a success or failure. If any image is listed as a failure, the source system can resend that image or the entire study; most tend to resend the entire study.
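For the curious, here is roughly what the requesting side of that transaction looks like using the open-source pynetdicom library (a sketch, not production code: the host, port, AE titles, and UIDs are placeholders, and error handling is omitted):

```python
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid
from pynetdicom import AE
from pynetdicom.sop_class import StorageCommitmentPushModel

# Well-known SOP Instance UID for the Storage Commitment Push Model
COMMIT_INSTANCE = '1.2.840.10008.1.20.1.1'

# One item per image we sent and want the archive to commit to
item = Dataset()
item.ReferencedSOPClassUID = '1.2.840.10008.5.1.4.1.1.2'  # CT Image Storage
item.ReferencedSOPInstanceUID = '1.2.3.4.5'               # placeholder UID

req = Dataset()
req.TransactionUID = generate_uid()
req.ReferencedSOPSequence = [item]

ae = AE(ae_title='PACS_SCU')
ae.add_requested_context(StorageCommitmentPushModel)
assoc = ae.associate('vna.example.org', 104, ae_title='VNA_SCP')
if assoc.is_established:
    # N-ACTION with Action Type ID 1 asks the SCP to commit to storage;
    # the pass/fail answer arrives later as an N-EVENT-REPORT.
    status, _ = assoc.send_n_action(req, 1, StorageCommitmentPushModel,
                                    COMMIT_INSTANCE)
    assoc.release()
```

Note that a success status here only means the VNA accepted the request; the actual commitment result comes back asynchronously, image by image.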

One problem with using storage commitment is that many vendors have ignored this transaction for quite some time; the result is that it is often less than optimally designed or configured. Some systems have default timeouts, some batch up storage commitment messages, and others will not archive anything else until the commit is received. Even with these limitations it is worth it. The fundamental problem is that once a source believes a study has been archived, the study is available to be deleted or flushed from the cache. If for some reason it did not successfully archive, there will be data loss.

Which comes first, the PACS or the VNA?

 

This is a question that several years ago was philosophical and interesting but not terribly relevant. Today, as the landscape changes, the answer for your organization is vital to your overall success. Like all good questions, the answer is… It Depends!

First, what do we mean by PACS first or VNA first? It simply means: in your environment, after images are acquired, are they stored to a PACS, presumably for interpretation, and then archived to the VNA for storage, or are they sent to the VNA first and then routed elsewhere? As one might expect there are pros and cons to each strategy, and the determination really relates to how each is used. I hesitate to use the term workflow because it, like “train the trainer,” is one of the most overused terms in the industry.

A PACS-first orientation is the more classical approach to a VNA. The study is acquired by the modalities and typically reviewed by a technologist at a PACS workstation, where demographics are verified. There may be some study manipulation such as window leveling, deletion of images, and general image QA. Oftentimes additional information is added in the form of scanned documents, which can be anything from the insurance card to technologist notes and worksheets. Finally, the exam is marked as ready to be viewed by the physician or radiologist. When the study is interpreted and a report created, the study is marked complete or reported. At some point in this flow the study is put into the archive queue and sent on to the VNA. In this flow the VNA is acting primarily as the archive, and in some cases is called the deep archive or cold archive. If the study is ever needed again as a prior and it is not in local storage, the PACS will retrieve it as needed.

A VNA-first orientation is a different flow. After the images are acquired, they are sent to a technologist imaging system. At this step the image manipulation occurs; this can be done in a department-based system like a PACS, on a dedicated QC workstation, in a web system, or in components of the VNA itself. Then the study is sent to the VNA, which likely maintains a local cache but could be a cloud-based system. Once the study is in the VNA it is ready to be read. The study is then sent from the VNA to the reading station where the interpretation takes place.

One of the keys in the distinction between the two is how quickly the study is available in the VNA. In a VNA-first scenario the study is almost immediately available on the VNA. This becomes important when there are multiple consumers of the image, such as an EMR integration that is serviced by the VNA, not the PACS. In a PACS-first orientation the study is interpreted prior to archival, which means the likelihood of the images changing is very low. I would opine that the images should NOT change once they have been reported; if they do, then an addendum is warranted. This data flow also maintains a linear nature and is relatively simple. There is value in simplicity, and that should not be understated. The downside is the time required for the image to get to the VNA and the relative inflexibility of the system. If there is an issue with the PACS, or the study is “missed,” it will not be available to downstream systems.

In the VNA-first method there are multiple systems at play, any of which could be down. It is also a more complex workflow involving several steps. The benefit, however, is near-immediate access to the images in downstream systems, as well as significant flexibility to integrate multiple data flows and systems. A VNA-first architecture allows for a reduced PACS footprint, which can lower overall maintenance costs (often 15-20% annually of the PACS license cost). It also supports the integration of multiple viewing systems for referring physicians, specialist viewers, and outside contracted radiology groups. I would also argue that it better supports the transition to PACS as a service, “deconstructed PACS,” or PACS 3.0, whichever is your favorite term, as well as a multi-facility, multi-PACS environment in which a single study needs to “live” in many places at once.

So back to the question: which is better? It depends on the current imaging needs, in terms of access to images and how many systems are integrated, and on the future vision for the system. For simple systems, stick with PACS first (your PACS vendor will love it!). If the intent is to implement more exotic workflows, or there are multiple downstream systems, it would be worth investigating a VNA-first data flow.

 

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

That’s not a pencil, it’s a MEDICAL DEVICE!

Three years ago, I was visiting my primary care physician for an annual exam. My doctor was not fresh out of medical school and had been my family physician for a number of years. Dr. L did not like computers. He was writing on my chart in pencil (yes, a for-real paper chart!) when I noticed that the pencil was worn down so far that it would only write from one angle, and even so was more like a crayon. I looked at him and said, “You might want to sharpen that pencil.” He replied, “I can’t, this is a medical device.” Being the highly technical imaging person that I am, I said, “Forgive me, doctor, but that is not a medical device, it is just a pencil.” Slightly exasperated, he took off his glasses and looked at me, replying, “This is your chart, a medical record. Obviously, you can see I am making notes and documenting your diagnosis. You can’t do that with just any writing device, that would be illegal! I might be audited; you can only make a diagnosis with a medical device!” Not taking the hint, I said, “Well, at least sharpen it, you can barely write with that.” Now clearly ticked off, Dr. L replied, “Were you not listening?! This pencil is a medical device. If I were to sharpen it, I would have to have a licensed carpenter come in, charging me $400 an hour to sharpen it! You can’t go messing with a medical device unless you have FDA clearance!”

Sooooooo, maybe there is a hint of sarcasm in my story, but let’s talk about what a medical device is and what the FDA really says. I was at one time a vendor, and while I was, I said many of the same things about my system. Medical device… can’t patch… blah blah… FDA certification… I truly believed everything I said. I had been told that by my company, and I had never read any FDA filings (at the time), so I was retelling what was, for me, the truth. Like my former self, many vendors have never read, nor do they understand, the FDA process.

 

The FDA defines a medical device as “…an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is: recognized in the official National Formulary, or the United States Pharmacopoeia, or any supplement to them, intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals, and which does not achieve any of its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of any of its primary intended purposes.” (Syring, 2018)

 

From that definition we could assume that yes, a pencil is indeed a medical device. Or could we? Did the pencil do anything? Did it assist in the diagnosis? Not really; it assisted in recording it. Similarly, we have to look at the distinction between things that are used in the diagnosis and things that are merely supporting. Is a CT or ultrasound a medical device? Yes, no question. What about PACS? The software is considered a medical device, but the hardware it is running on likely is not. Let’s examine a real 510(k) letter for a PACS. By the way, if you want to look up the certification for your vendor, which I strongly encourage, you can do so on the FDA website:

https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm

Back to Vendor X……  “PACS X is medical image and information management software that is intended to receive, transmit, store, archive, retrieve, manage, display, print and process digital medical images, digital medical video and associated patient and medical information.  PACS X includes a suite of standalone, web-enabled software components, and is intended for installation and use with off-the-shelf hardware that meets or exceeds minimum specifications.” (emphasis added)

 

What this means is that the software is a medical device, and when the SOFTWARE is patched it must be tested in accordance with the General Principles of Software Validation, linked here (Food and Drug Administration (FDA), 2001). The hardware that it runs on, however, does not. You can run PACS X on any hardware that meets or exceeds specs, and it has no impact on the FDA certification whatsoever! A vendor is well within their rights to provide an approved hardware list, but this is a support issue and not an FDA issue. This distinction is very important!

 

Because the computer and operating system that run PACS software are not part of the 510(k) certification, there is no requirement for the FDA to review security patches.

“Medical device manufacturers can always update a medical device for cybersecurity. In fact, the FDA does not typically need to review changes made to medical devices solely to strengthen cybersecurity.” (Food and Drug Administration, 2018)

There is a one-page fact sheet that is very clearly written, and I also encourage you to read it here.

In summary, your PACS software IS a medical device; however, what it RUNS on likely is not. Especially given security concerns, it behooves us all to read the FDA guidance and hold our vendors accountable to make sure that our devices are patched and up to date. No one wants to report to the CEO or CIO that their system was responsible for a virus or ransomware attack on the enterprise. Also surprising to me was that, for all the secrecy and mystery surrounding medical devices and their subsequent maintenance, the FDA website is surprisingly clear and easy to understand.

 

Thank you for reading; please post comments and questions!

Kyle Henson

 

 

References

Food and Drug Administration (FDA). (2001, 02 25). Information for Healthcare Organizations about FDA’s “Guidance for Industry: Cybersecurity for Networked Medical Devices Containing Off-The-Shelf (OTS) Software”. Retrieved from Food and Drug Administration Website: https://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070634.htm

Food and Drug Administration. (2018, 02 02). Information for Healthcare Organizations about FDA’s “Guidance for Industry: Cybersecurity for Networked Medical Devices Containing Off-The-Shelf (OTS) Software”. Retrieved from Food and Drug Administration: https://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070634.htm

Food and Drug Administration. (2018, 02 07). The FDA’s Role in Medical Device Cybersecurity. Retrieved from Food and Drug Administration: https://www.fda.gov/downloads/MedicalDevices/DigitalHealth/UCM544684.pdf

Syring, G. (2018, 02 25). Overview: FDA Regulation of Medical Devices. Retrieved from Quality and Regulatory Associates: http://www.qrasupport.com/FDA_MED_DEVICE.html

 

 

 

The DICOM is in the Details! Part 2: the Query/Retrieve

 

Given the apparent interest in some of the details of DICOM store transactions (thank you to all who read it!), I thought I would add a brief description of Query/Retrieve, and then next week I will write about my favorite, Storage Commit.

A DICOM Query/Retrieve transaction is fairly simple: first there is a query, and then a retrieve; luckily the standards team didn’t go crazy with the names. The first part is of course the query, which is a C-FIND transaction. In a C-FIND we again have a service class user (SCU) and a service class provider (SCP). The provider is the “server” and the user the “client,” or the one making the request. The query can be for a study or for a patient. However, it does not have to be for only one. The query could be for all patients, or all studies done on a certain date, or, if you get a wild hair, all DEXA studies completed on Friday the 13th for patients whose first names begin with the letter Q.

No matter what the C-FIND attributes (the specifics of the query) are, the user sends the query to the SCP (provider), and the provider then issues a C-FIND response. The response is the list of studies that meet the criteria. Different systems have built-in mechanisms to deal with large C-FIND requests: some will reject the request if it is too broad, others will limit the number of responses to an arbitrary number such as 300, while still others don’t mind at all and simply send back a very long list of matches.
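A study-level C-FIND is compact enough to sketch. Here is what the SCU side looks like with the open-source pynetdicom library (host, port, and AE titles are placeholders; empty attributes are “return keys” the SCP fills in):

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title='MY_SCU')
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = 'STUDY'
query.PatientName = 'Q*'        # wildcard: names beginning with Q
query.StudyDate = '20180413'    # a Friday the 13th
query.StudyInstanceUID = ''     # return key -- SCP fills this in per match

assoc = ae.associate('vna.example.org', 104, ae_title='VNA_SCP')
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        # 0xFF00 / 0xFF01 are "pending" statuses: each one is a match
        if status and status.Status in (0xFF00, 0xFF01):
            print(identifier.StudyInstanceUID)
    assoc.release()
```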

The client, or SCU, now has a list of studies and may decide to retrieve them. The command to retrieve a study is typically not “send me study X”; it is usually a C-MOVE command, which roughly translates to “send study X over there,” with “there” usually being the requester. This is mostly semantics, but interesting to me. The C-MOVE command consists of what study is to be sent and where it is to be sent. The “where” is an Application Entity (AE) title. Once the C-MOVE provider has this information, it begins a DICOM store transaction with the requested AE title. For info on the DICOM store,

see The DICOM is in the Details!

(Yes, shameless plug for clicks)

One interesting note here: in the C-MOVE command the only destination is the AE title; it does not include the IP address or port! This gets complex because almost every PACS and modality has a standard AE title that the vendor uses for EVERY SINGLE INSTALLATION. I won’t call out a single vendor, because they all do it. This was not a problem back in the day, because relatively few systems queried each other, and they were often from different vendors. Now, however, when you are building an enterprise system like a VNA, it is not uncommon at all to have many PACS or CPACS from the same vendor, which brings AE uniqueness into play.
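Here is the same thing in code form, again a pynetdicom sketch with placeholder values. Notice that the destination is nothing but an AE title; the SCP has to map 'DEST_AE' to an IP and port in its own configuration, which is exactly why duplicate AE titles cause grief:

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

ae = AE(ae_title='MY_SCU')
ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)

req = Dataset()
req.QueryRetrieveLevel = 'STUDY'
req.StudyInstanceUID = '1.2.3.4.5.6789'   # placeholder study UID

assoc = ae.associate('vna.example.org', 104, ae_title='VNA_SCP')
if assoc.is_established:
    # "Send this study over there" -- 'DEST_AE' is only a name; the SCP
    # looks up its address and opens a brand-new C-STORE association to it.
    for status, _ in assoc.send_c_move(
            req, 'DEST_AE', StudyRootQueryRetrieveInformationModelMove):
        pass
    assoc.release()
```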

Some PACS have the ability to use multiple AE titles, so you can simply add a new AE for your VNA to send back to and not change the modalities. Other PACS will only support one AE title, and you may have to reconfigure all modalities sending to it. One last tangential point on Query/Retrieve: this process of C-FIND and C-MOVE is pretty much what all data migration companies do. They simply do a lot of transactions!

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

When a picture ISN’T worth a thousand words: where do reports fit into VNAs and Enterprise Imaging?

 

In traditional imaging systems like radiology and cardiology PACS, the report is always with the images. In radiology, the dictation system sends a copy of the report to the PACS via HL7, which works since it is text. In cardiology it is either a textual report, or the cardiology system creates the report and therefore has a copy. As we get outside the walls of those two systems, where does the report really live?

For those that don’t read to the end… the answer is DICOM SR in your VNA, but please keep reading!

In an environment where all users are logged into the EMR and launching images from there, it is not an issue, as the EMR is now the system of record for the reports and will have a copy. Now, IMAGINE A WORLD (cue deep commercial voice) where images are sent for reading to various physician groups who are not logged into the EMR. Reading the newest image is not an issue, but what about priors? In some teleradiology workflows prior reports are faxed, in others they are copied and pasted from the EMR, and in still others the radiologists simply read what is in front of them.

I submit that there is a better way. As we move forward with outsourcing reads, and as facilities are divested and acquired regularly, it makes no sense whatsoever not to keep reports with the images. The two are intrinsically linked and are important for different reasons as part of the patient record. Luckily there are several mechanisms to resolve this; surprisingly, I don’t often see them implemented.

Let’s start with the low-hanging fruit: cardiology. Since most CPACS have reporting modules within the system, the report is already with the images before the images are archived and/or sent elsewhere. While I am all for FHIR and emerging solutions, I prefer to stick with what I can implement today, now, and yes, there are options. The simplest is to do an HL7 export to the EMR. This provides the text but no images. Oftentimes a CPACS will generate a PDF report, but that ends up being imported into the EMR as a separate, unlinked document. There are actually three options to export a content-rich report besides emailing the PDF.

The first is to utilize HL7 and the encapsulated document (ED) standard. The standard does exist, and it can be done, but I have not seen it nor talked to anyone who has tried. The second is to store the PDF document in XDS; I am all about standards and a big believer in XDS. The problem is that first you have to HAVE an XDS repository, which many don’t, and second you need a system to act as the XDS source, which many (most) imaging systems don’t do. There is a very easy answer to this problem, and one that has been around for a very long time; it just isn’t used.

The easy answer is to DICOM-encapsulate the PDF report and store it with the images as another series. Many CPACS do this natively; it is as simple as clicking a button in the configuration to “archive report with images.” Why this is not done more often is a mystery to me. This is a very good option for CPACS, which commonly produce PDFs as the report product, but for other systems that rely more on plain text, is the PDF the way to go?
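If your system does not do it natively, encapsulating a PDF is surprisingly little work. A minimal sketch with the pydicom library (2.x-style API; the file names are placeholders, and in real life you would copy the patient and study attributes from the imaging study rather than generating fresh UIDs):

```python
from pathlib import Path

from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

ENCAPSULATED_PDF = '1.2.840.10008.5.1.4.1.1.104.1'  # Encapsulated PDF Storage

pdf_bytes = Path('report.pdf').read_bytes()
if len(pdf_bytes) % 2:             # DICOM element values must be even length
    pdf_bytes += b'\x00'

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = ENCAPSULATED_PDF
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset('report.dcm', {}, file_meta=meta,
                 preamble=b'\x00' * 128, is_implicit_VR=False)
ds.SOPClassUID = ENCAPSULATED_PDF
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = 'DOC'
ds.MIMETypeOfEncapsulatedDocument = 'application/pdf'
ds.EncapsulatedDocument = pdf_bytes

# Placeholders: reuse the imaging study's UIDs so the report lands in the
# same study as the images, as just another series.
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()

ds.save_as('report.dcm', write_like_original=False)
```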

There are several options for textual reports as well. HL7 interfaces between systems are an option, but HL7 tends to be more of an all-or-nothing proposition. Again, XDS offers several opportunities; we stored the text reports as CDA objects in XDS, however this shares some of the previously stated limitations of XDS, namely the lack of adoption so far. Still, there is an old-school solution to this problem: the DICOM Structured Report (SR). By using the DICOM SR one can store the report with the images; any time the images are viewed or sent to another location, the report goes with them with no additional steps.
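A complete SR carries a lot of required module content, but the core of a Basic Text SR is just a container of text content items. A heavily abbreviated pydicom skeleton (the file-meta and save boilerplate follow the same pattern as the PDF sketch above; the coded concept names a real SR template requires are omitted):

```python
from pydicom.dataset import Dataset

BASIC_TEXT_SR = '1.2.840.10008.5.1.4.1.1.88.11'  # Basic Text SR Storage

sr = Dataset()
sr.SOPClassUID = BASIC_TEXT_SR
sr.Modality = 'SR'
sr.CompletionFlag = 'COMPLETE'
sr.VerificationFlag = 'UNVERIFIED'

# The document is a tree: a root CONTAINER holding one TEXT content item
sr.ValueType = 'CONTAINER'
sr.ContinuityOfContent = 'SEPARATE'

text = Dataset()
text.RelationshipType = 'CONTAINS'
text.ValueType = 'TEXT'
text.TextValue = 'IMPRESSION: No acute findings.'  # the report body
sr.ContentSequence = [text]

# As with the PDF example, reuse the study's real StudyInstanceUID so the
# SR travels with the images it describes.
```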

I did this with my VNA from the beginning, and it has been a huge success. My EMR viewer can process the SR, so when looking at priors for history, the report is available for review without the hospitalist having to go back and forth to the EMR to view the interpretation that goes with the images. Similarly, any time images are requested by another facility or need to be shared for patient care, the report is always with the images, either as a DICOM SR or an encapsulated PDF. See, that was worth reading to the end, wasn’t it?

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

The DICOM is in the Details! But how does it work?

Most of us use DICOM every day; we smell it, we live it, and we talk about it. However, the deep dark secret is often that we don’t really know how it works. What is a SOP? What is a transfer syntax? And why do the engineers keep talking about endians?

To begin, let’s quickly review how a DICOM store occurs. The sending system initiates the transaction; it is the user of the DICOM store service (Service Class User, or SCU), and the receiver is the provider of the DICOM store service (Service Class Provider, or SCP). The user says, “I have this study that I want to store.” The provider (receiver) says, “Great, here I am.” Then the user says, “I want to send a breast tomosynthesis image.”

*Nerd alert: the type of image to be sent is defined by the SOP class, where SOP stands for Service-Object Pair: the Information Object Definition (the image type) plus the DICOM Service Elements (the DICOM wrapper). The SOP class for breast tomo is 1.2.840.10008.5.1.4.1.1.13.1.3, which was added in a supplement.

https://www.dicomlibrary.com/dicom/sop/

At this point the provider replies with “yes, no problem” or “no, I don’t know what that is.” If the answer is yes and the receiver (SCP) supports that SOP (see how you are starting to get the lingo!), the two also negotiate which of the languages the sender proposed will be used. We are all pretty familiar by now with the three types of compression: uncompressed, commonly called DICOM; lossless compressed, which is compressed but still OK for reading; and lossy compressed, in which image data is lost but the file is much smaller. Each of these, along with several others, is called in DICOM-speak a transfer syntax.

Once the sender and receiver have agreed on what will be stored, they settle on how: the sender (SCU) proposes the transfer syntaxes it can speak, the receiver accepts those it supports, and the sender then selects one of the accepted syntaxes to send the image. Thus it is the sender that ultimately decides whether or not the image is sent compressed. Implicit VR Little Endian is the default DICOM transfer syntax and is therefore supported by ALL vendors. Because of this, many vendors take the easy road and simply accept the default. This is… OK… within a LAN, but when the data is stored or transferred over a WAN, compression becomes very important.

https://www.dicomlibrary.com/dicom/transfer-syntax/
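In code, the negotiation is just the list of contexts you propose before associating. A pynetdicom sketch (placeholder addresses and file name; CT chosen arbitrarily):

```python
from pydicom import dcmread
from pydicom.uid import ImplicitVRLittleEndian, JPEG2000Lossless
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

ae = AE(ae_title='SENDER')
# Propose CT storage in two "languages": a compressed syntax first, plus
# the default uncompressed syntax every vendor must support.
ae.add_requested_context(CTImageStorage,
                         [JPEG2000Lossless, ImplicitVRLittleEndian])

assoc = ae.associate('vna.example.org', 104, ae_title='VNA_SCP')
if assoc.is_established:
    # Once a context has been accepted, the store itself is one call
    status = assoc.send_c_store(dcmread('ct_slice.dcm'))
    print(f'C-STORE status: 0x{status.Status:04X}')   # 0x0000 = success
    assoc.release()
```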

Now that the SCU and SCP have agreed on what is to be sent and how it will be sent, the data transmission begins. The transmission can be at the instance level, which refers to individual images, or at the study level, in which many images are sent on the same association. Once the association is complete, the sender may initiate a storage commitment, which I highly recommend when sending to a VNA across a WAN.

Briefly, in a storage commitment message the sender reaches back out to the provider and sends a list of all the individual images that were sent. The provider then responds, either positively that ALL images were received, or negatively that something wasn’t. In the negative case the entire study is considered a failure and will be resent, which takes up a lot of your bandwidth.
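Because the commit result arrives on the VNA’s schedule, the source has to be listening for it. The exact handler details vary by library version, but in pynetdicom it looks roughly like this sketch (port and AE title are placeholders):

```python
from pynetdicom import AE, evt
from pynetdicom.sop_class import StorageCommitmentPushModel

def handle_event_report(event):
    # Event Type 1 = everything committed; Event Type 2 = some failures
    info = event.event_information
    if event.event_type == 2:
        for item in info.FailedSOPSequence:
            print('not committed, resend:', item.ReferencedSOPInstanceUID)
    return 0x0000, None  # status, optional event reply

ae = AE(ae_title='PACS_SCU')
ae.add_supported_context(StorageCommitmentPushModel)
# The VNA may open a brand-new association to deliver the result, so the
# "sender" must also run as a small server.
ae.start_server(('0.0.0.0', 11112),
                evt_handlers=[(evt.EVT_N_EVENT_REPORT, handle_event_report)])
```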

Please like, share and comment. I would love to know what topics are of interest to the imaging people out there!

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

What the heck is Enterprise Imaging…… Really?

 

According to SIIM, Enterprise Imaging is

“a set of strategies, initiatives and workflows implemented across a healthcare enterprise to consistently and optimally capture, index, manage, store, distribute, view, exchange, and analyze all clinical imaging and multimedia content to enhance the electronic health record.” (SIIM Enterprise Imaging Workgroup, 2018)

Healthcare Informatics says

“The foundations for building an effective EI program can be leveraged from traditional imaging and IT best-practice fundamentals. The following lays out the deliberate, innovative alignment between “imaging & imaging IT Fundamentals,” such as acquisition, management, distribution, archive, and governance, with evolving industry conditions impacting imaging and IT, such as the demand for centralization, standardization, interoperability, data integrity, and governance.” (Pittman, 2015)

While there is a lot of good information in both of those, they are a bit unwieldy. So, let’s say Enterprise Imaging is a plan to efficiently use all images to better treat the patient. We all want to provide better care, and who can argue with efficiency in the form of physician time or hospital resources? Some may be thinking: I have images in my EMR, so I have Enterprise Imaging, box checked. I would challenge that there are many more images out there than we realize. The obvious departments that come to mind are radiology and cardiology. There are also images generated in surgery, the ED, dermatology, GI, the lab, and countless other places. A rheumatology clinic I work with very often uses ultrasound for needle placement for injections. Before we get to questions like how do I optimize workflows and where do I put the images, I would first ask: what is the purpose of the image? Does it need to be kept? If so, for how long? Just because we can store it forever doesn’t mean we should, but that is an entirely different discussion.

Once we have identified all of the images that are acquired and determined the regulatory constraints and useful clinical relevance, we can look to apply workflow and best practices. Radiology has some of the most developed rules and processes for acquiring, moving, using, and storing images, and being a longtime PACS person, all images look like X-rays to me. However, they are not all the same. Radiology best practices can and should be applied to other imaging areas, but only where it makes sense given the need. Orders make sense in the radiology context because they provide a mechanism to attach a result, which is a separate billable component. In many instances outside of radiology, images are supplemental or supporting information that belongs with other clinical notes regarding the procedure. There are other workflows that can appropriately store this information in the EMR without the order/accession number process.

Probably the most important lesson we can learn from radiology workflows is the importance of categorizing the information. Hand-entered demographics don’t work. Hand-entered descriptions of the data don’t work. Whatever workflow is developed, it must include selecting the patient from a list, typically derived from admissions, and then describing the data by how and where it was acquired, again using a discrete set of procedures or descriptions of what the images represent. Without these two things the images are relatively useless: it is highly unlikely they will ever be viewed again, it will be difficult to identify what they are, and they may or may not be accepted into the EMR.

The next lesson we can learn is around data standards. Regardless of the department, vendors love to store your data in a proprietary format that is only accessible in their system. This is self-preservation and future revenue streams. No vendor wants to make it easy for you to share data and read it in another system, nor do they want to make it easy to manage your own data. This is what I like to call stickiness, because you are stuck with that vendor. A more economic term would be high barriers to exit, or switching costs. If it costs $250,000 just to move the data to another system, you may choose to stay with an inferior product due to the additional costs of changing vendors. So it is critical when purchasing a system to demand that the data is stored in standard formats and that the local team has the ability to access the raw data and move it to other systems, in the event the departmental team chooses a different system in the future.

To return to the question: what is Enterprise Imaging? I would opine (yes, it is a word) that it means taking the appropriate best practices in IT and imaging and developing a plan for how to apply them to all images that are acquired, with the ultimate goal of improving patient care.

 

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

References

Pittman, D. (2015, April 15). Enterprise Imaging: The “New World” of Clinical Imaging & Imaging IT. Retrieved from Healthcare Informatics: https://www.healthcare-informatics.com/article/enterprise-imaging-new-world-clinical-imaging-imaging-it

SIIM Enterprise Imaging Workgroup. (2018, 01 28). What is Enterprise Imaging? Retrieved from Society for Imaging Informatics in Medicine: http://siim.org/page/enterprise_imaging

How do you architect your VNA?

 

 

Before you can design your VNA, you need to identify what exactly it is that you want to build. Saying the requirement is simply to be the long-term archive for images is about the same as saying “I want a building to live in,” or “build me an office building downtown.” While there are many, many requirements and concepts, I would like to focus on two for now. The first is the concept of location: where are the images, where are the consumers, and where will the images be stored? The second is how to organize data within your VNA.

It seems somewhat anachronistic to be talking about physical location in a virtual world, but location matters because it affects latency, which relates to how fast you can move the data. In a perfect world there are gigabit pipes everywhere with no latency, and data moves almost instantaneously. However, in my world there are some pretty slow and saturated networks. So, where are the consumers of imaging data? They are typically the radiologists and referring physicians. These users are likely in relatively close physical proximity to the origin of the images, at least in a metropolitan area; nighthawk and teleradiology are a different subject altogether. To provide the fastest access to data, some images should be near the consumers. This is typically a local cache or short-term storage (STS), often provided by the primary PACS. However, in many circumstances the VNA now supplies the EMR image viewer directly, in which case some subset of images should be in close proximity to the EMR servers. Depending on the configuration, you may need to set up a component of the VNA to act as that local short-term storage. This is critical in a VNA-first workflow, which is part of the “deconstructed PACS” concept. Also important is the location of the data center, which may be in the building, across town, or across the country. The size of the cache varies based on workflows and user needs; primary diagnostics may require a cache of up to two years’ worth of data on site, or none at all. By two years’ worth I simply mean enough storage to hold all data acquired in the last two years.

The second and more interesting idea is how to organize data within your VNA. Each vendor has their own terminology, so I will refer to them as “buckets.” How you put data IN to your VNA greatly affects how you get data OUT of your VNA. As we all know, business cycles are cyclical, so there will likely be a period of acquisitions and growth followed by a period of divestitures. The smart VNA team will plan for both. The simplest way to organize the data is in one big bucket; hey, that’s a VNA, right? With all data stored in one bucket, something like the accession number, or more likely the study Unique Identifier (UID), which is by definition and standard unique throughout the world, is how data is separated. To find data in your bucket you need to query on a number of fields, like patient name, DOB, study type, or, in a perfect world, the UID. This is simple to set up and easy to get data in, but hard to get data out. The other extreme of the spectrum is to create a bucket for everything. Data can be split into endless buckets; I have heard of facilities that have one bucket per modality, per hospital, per year, meaning they have Hospital A X-ray 2017, CT 2017, MR 2017, then Hospital B X-ray, CT, etc. This is difficult to get data in, but very easy to find data and get it out.

Why would one separate data? It makes reporting much easier, as many VNAs (and PACS) don’t do the best job of analytics. It is also logical when looking at the facility level. The trick is that when you expand the view up to an enterprise of, say, 20-40 hospitals and related imaging centers, the problem becomes more complex and too many buckets become unsustainable. Having worked with many enterprises through the years, I have settled into the “sweet spot”: basically, I plan for an eventual divestiture. This is typically done not at the modality level but at the facility level. No one has ever asked me to separate out and deliver only one modality, but oftentimes an imaging center is bought or sold, as is a hospital. This level allows for adequate reporting and tracking but also facilitates a smooth transition during a divestiture.

In practical terms I have found that four walls define a facility, as does a business relationship. Hospital A may have a women’s care center in house, an imaging department, and an off-campus urgent care center (UCC). I would create a VNA bucket for each, as well as for each source system. PACS, CPACS, ophthalmology, surgery, and eventually pathology are unique entities that should have separate data, as they will likely be replaced and upgraded on separate schedules. Therefore, they each get their own bucket. That does of course bring up the question of mammography. When there is a separate departmental system for mammography, it is often connected into the PACS. If there is a workflow reason to store these images into the PACS, then I would not create a separate VNA bucket. If there is no reason to, then have the mammo system archive to the VNA directly.
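Whatever your vendor calls its buckets, the mapping itself can stay dead simple. A hypothetical routing table along the lines I describe, one bucket per facility and source system (all names invented for illustration):

```python
# Hypothetical: calling AE title -> VNA bucket, one per facility + source
BUCKETS = {
    'HOSP_A_PACS':    'hospital-a/radiology',
    'HOSP_A_CPACS':   'hospital-a/cardiology',
    'HOSP_A_WOMENS':  'hospital-a/womens-center',
    'HOSP_A_UCC':     'hospital-a/urgent-care',
    'HOSP_B_PACS':    'hospital-b/radiology',
    'HOSP_B_MAMMO':   'hospital-b/mammography',
}

def bucket_for(calling_ae: str) -> str:
    # Fail loudly on an unknown source so a new system gets a bucket
    # assigned deliberately instead of landing in a catch-all.
    return BUCKETS[calling_ae]
```

When Hospital B is sold, everything the new owner is entitled to sits under 'hospital-b/', and nothing else does.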

One last point: there is effort and complexity that goes into breaking up imaging streams into separate buckets. This setup cost must be weighed against the benefit of reporting and possible long-term divestiture. I can confidently say that when you go through the offloading and divestiture process, you will be very glad you have the data broken out, because it makes the process significantly easier. In that environment the facility has already been sold, and therefore it is difficult to justify resources to the process. You will want to be able to point the new owner to the data and let them retrieve it, having confidence that the data is separated in your VNA such that they can’t see or accidentally get to other data.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

 

 

How Big is Your Pipe and Does it Matter?

 

Let me start by saying I am not a network analyst and never have been one, but I did stay at a Holiday Inn Express last night. I want to look at WANs, networks, and bandwidth from the perspective of how they affect archiving of images (big sets of data) across the WAN. From any truly smart network folk, I appreciate comments and corrections. Now to the meat of things.

Networks are like a highway, and you are driving in a car or pickup truck. There are two main factors that are important: the number of lanes and the speed you are driving. If you are on an old country road, it is likely two lanes, and you may be driving 40-50 MPH. Not bad, and if it’s just you, you don’t need more than two lanes and you get there just fine. The number of lanes is bandwidth, the speed you are driving is latency, and the road is the connection, sometimes called the pipe.

If more people start driving on the same road, it starts to slow down; you can add lanes or increase the speed limit. It makes sense to expand the road to two, three, or even four lanes. At a certain point expanding the lanes doesn’t help. Why? Because it costs a lot of money and you get only incremental benefits in speed. Sure, you may go from 50 MPH to 60 or even 70, but you don’t get much faster than that, even if you have 12 lanes. Obviously, even if you have a ton of bandwidth, if you are driving slowly you are unhappy.

Latency, or the speed that data is flowing, can be dramatically affected by the route you take. Say you are driving from Dallas to Chicago. According to Google Maps, you take 75 North, then 69 North, and finally I-44; it is 927 miles and should take just under 14 hours. Let’s say that there is a wreck on I-44 and your trusty phone re-routes you through Charlotte, NC. Your trip is now 1,785 miles and will take 27 hours… no bueno. This is EXACTLY how data gets routed in and around the internet. In this case your latency just went from 14 hours to 27 hours. It really doesn’t matter how many lanes the highway has; you have a long drive ahead of you. This is known in the networking world as, coincidentally, the route. It is often measured by the number of “hops,” which basically equates to the number of cities between you and your destination. Four or five hops is good; 10 or 12 is bad. The more hops, the longer it will take and the more likely you got routed through Atlanta or Charlotte on your way to Chicago.

Now, to add insult to injury, suppose you have a big load to send; let’s say a 100 MB file, or a 600 MB breast tomo exam. To continue the analogy, let’s say a ton of bricks. You can only fit a portion of those bricks in your trusty pickup (I really am from Dallas). Given that your truck can only fit 1/10 of the bricks at a time, you need to make 10 trips. Now you can see that the latency adds up very quickly, because of course your truck has to make two trips across the network for each load. Your network does this as well: you send some data, and the other side sends back a verification of what it received. This is where someone will say, “AH HA! I will just send 10 trucks at once! I do need more bandwidth!” Unfortunately, it just doesn’t work that way; you can’t put all the data on the wire at once. As the file is broken up, due to constraints in the systems themselves, each computer is limited to, say, 3 trucks. Let’s say that is state law, a limited number of trucks in America, sun spots, I don’t know… it just is.
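For the arithmetically inclined, the trucks have a name: the bandwidth-delay product. A back-of-the-envelope sketch (the window size and round-trip time are assumptions, not measurements):

```python
# With a TCP receive window of W bytes and a round trip of RTT seconds,
# one connection cannot exceed W / RTT bytes per second -- no matter how
# many lanes the highway has.
window_bytes = 64 * 1024      # assumed default TCP window
rtt_seconds = 0.080           # assumed 80 ms round trip

ceiling = window_bytes / rtt_seconds            # bytes per second
print(f'throughput ceiling: {ceiling / 1e6:.2f} MB/s')   # ~0.82 MB/s

study_bytes = 600e6           # the 600 MB breast tomo exam from above
print(f'transfer time: {study_bytes / ceiling / 60:.0f} minutes')  # ~12
```

Double the round trip (the detour through Charlotte) and the ceiling halves, with not a single lane removed.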

If you have stayed with me through all of this, you should see that there is a balancing act going on. You want to have enough bandwidth so that you are not constrained by one lane, but at a certain point the constraint tips, and it is not bandwidth but latency that is slowing down your data transfer. So, what can be done? That, my friends, I will leave to the network people, but I think it has something to do with point-to-point connections and dedicated routes through “the cloud.”

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

How Big is a Mammography Study??

 

How big is a mammo study?

It depends. I was recently asked what the average size of a mammography study is. I asked for clarification: what do you mean? I received a somewhat strange look and the response, “Mammography, you know, breast imaging…”

Bottom line up front: somewhere between 20 MB and 1 GB per exam.

The problem is that there is no easy answer to that question because, well, it depends. For starters there is, as we all know, a huge difference between tomosynthesis and plain-film mammo, so averaging the two would vary greatly depending on the ratio of tomo to mammo. If you look in your VNA they will often share the modality code MG, so how do you tell the difference? Number of images? You could assume that anything over 10 images is tomo. OK, so the next way to tell would be to get into your database and do a query for the SOP class Breast Tomosynthesis Image Storage (1.2.840.10008.5.1.4.1.1.13.1.3); however, you will find that within a study you will have a few tomo images and plain mammo images stored together. Then again, you may be working with a popular vendor who stores the tomo images in a proprietary and much smaller format using the secondary capture SOP class (1.2.840.10008.5.1.4.1.1.7). Easy, right? All you need to do is isolate the 2D exams from the exams containing a 3D image, then find out if you are using the BTO SOP class or the secondary capture SOP, THEN you can average your exams and get the average study size, right? Well, sort of.

NOW we need to determine whether they are compressed or not. To figure that out you need to look at the transfer syntax. Many modalities will default to Implicit VR Little Endian, which is transfer syntax 1.2.840.10008.1.2; this is uncompressed. Many PACS will take the syntax that the modality sent and refuse to change it for fear of impacting image quality. Therefore, the study is stored on disk, and in the long-term archive or VNA, in the same format (unless you get into inbound and outbound compression, which is a whole different topic). There are of course many different transfer syntaxes with varying compression, but we will take the other common one, JPEG 2000 lossless (1.2.840.10008.1.2.4.90). Either compression can be applied to any of the SOP classes described above.
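If you would rather measure than guess, the sorting logic is straightforward to script. A rough sketch with pydicom over a hypothetical export directory (flagging a study as tomo when it contains a BTO instance is sound; treating secondary capture as tomo, as here, only makes sense if you know your vendor uses it that way):

```python
from collections import defaultdict
from pathlib import Path

from pydicom import dcmread

BTO = '1.2.840.10008.5.1.4.1.1.13.1.3'         # Breast Tomosynthesis Image
SECONDARY_CAPTURE = '1.2.840.10008.5.1.4.1.1.7'

study_bytes = defaultdict(int)
is_tomo = defaultdict(bool)

for path in Path('/exports/mammo').rglob('*.dcm'):   # hypothetical path
    ds = dcmread(path, stop_before_pixels=True)      # headers only
    uid = ds.StudyInstanceUID
    study_bytes[uid] += path.stat().st_size          # size as stored on disk
    if ds.SOPClassUID in (BTO, SECONDARY_CAPTURE):
        is_tomo[uid] = True

tomo = [size for uid, size in study_bytes.items() if is_tomo[uid]]
flat = [size for uid, size in study_bytes.items() if not is_tomo[uid]]
for label, sizes in (('tomo', tomo), ('2D', flat)):
    if sizes:
        print(f'{label}: {sum(sizes) / len(sizes) / 1e6:.0f} MB average')
```

Because the file size on disk reflects the stored transfer syntax, this automatically captures the compressed-versus-uncompressed question as well.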

So the question stands: what type of mammo do you mean? Standard format or proprietary (but very common)? Compressed or uncompressed? How you ask the question will skew the answer dramatically. Given the trend in the market, tomo is growing, so the average in December 2017 is very different from the average in December 2016.

If you have read all the way through this: in my data, the breast tomo format, lossless compressed, averaged out to 711 MB per study, while the secondary capture format, also lossless compressed, averaged 194 MB.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org