When a picture ISN’T worth a thousand words, where do reports fit into VNAs and Enterprise Imaging?


In traditional imaging systems like Radiology and Cardiology PACS, the report is always with the images. In radiology, the dictation system sends a copy of the report to PACS via HL7, which works well since the report is plain text. In cardiology, the report is either textual or is generated by the cardiology system itself, which therefore keeps a copy. But as we get outside the walls of those two systems, where does the report really live?

For those that don’t read to the end … the answer is a DICOM SR in your VNA, but please keep reading!

In an environment where all users are logged into the EMR and launch images from there, this is not an issue: the EMR is the system of record for reports and will have a copy. Now, IMAGINE A WORLD (cue deep commercial voice) where images are sent for reading to various physician groups who are not logged into the EMR. Reading the newest image is not an issue, but what about priors? In some teleradiology workflows prior reports are faxed, in others someone copies and pastes prior reports from the EMR, and still others simply read what is in front of them.

I submit that there is a better way. As reads are increasingly outsourced, and facilities are divested and acquired regularly, it makes no sense whatsoever not to keep reports with the images. The two are intrinsically linked and are important for different reasons as part of the patient record. Luckily, there are several mechanisms to resolve this; surprisingly, I don’t often see them implemented.

Let’s start with the low-hanging fruit: cardiology. Since most CPACS have reporting modules within the system, the report is already with the images before the images are archived and/or sent elsewhere. While I am all for FHIR and emerging solutions, I prefer to stick with what I can implement today, now, and yes, there are options. The simplest is an HL7 export to the EMR. This provides the text but no images. Often a CPACS will generate a PDF report, but that ends up imported into the EMR as a separate, unlinked document. There are actually three options to export a content-rich report besides emailing the PDF.

The first is to utilize HL7 and the encapsulated document (ED) data type. The standard does exist and it can be done, but I have not seen it in the wild, nor talked to anyone who has tried. The second is to store the PDF document in XDS. I am all about standards and a big believer in XDS; the problem is that first you have to HAVE an XDS repository, which many don’t, and second you need a system to act as the XDS source, which many (most) imaging systems can’t. There is a very easy answer to this problem, and one that has been around for a very long time; it just isn’t used.

The easy answer is to DICOM-encapsulate the PDF report and store it with the images as another series. Many CPACS do this natively; it is as simple as clicking a configuration checkbox to “archive report with images”. Why this is not done more often is a mystery to me. This is a very good option for CPACS, which commonly produce PDFs as the report product, but for other systems that rely more on plain text, is the PDF the way to go?
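
To make the mechanics concrete, here is a minimal sketch of encapsulating a PDF with pydicom. The file names and patient values are placeholders I made up; in practice you would copy the patient and study attributes from an image in the same study so the report lands there as a new series.

```python
# A minimal sketch of DICOM-encapsulating a PDF report with pydicom.
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

ENCAPSULATED_PDF = "1.2.840.10008.5.1.4.1.1.104.1"  # Encapsulated PDF Storage

with open("report.pdf", "rb") as f:
    pdf_bytes = f.read()

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = ENCAPSULATED_PDF
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.SOPClassUID = ENCAPSULATED_PDF
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "DOC"
ds.MIMETypeOfEncapsulatedDocument = "application/pdf"
ds.EncapsulatedDocument = pdf_bytes

ds.PatientName = "DOE^JANE"            # placeholder: copy from the study
ds.PatientID = "123456"                # placeholder: copy from the study
ds.StudyInstanceUID = generate_uid()   # placeholder: use the study's real UID
ds.SeriesInstanceUID = generate_uid()  # a new series within that study

ds.is_little_endian = True
ds.is_implicit_VR = False
ds.save_as("encapsulated_report.dcm", write_like_original=False)
```

Once saved, the object is just another DICOM instance; it can be C-STOREd to the VNA alongside the images it describes.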

There are several options for textual reports as well. HL7 interfaces between systems are an option, but HL7 tends to be an all-or-nothing proposition. Again, XDS offers several opportunities; we stored text reports as CDA objects in XDS, though this shares the previously stated limitation of XDS, namely the lack of adoption so far. Still, there is an old-school solution to this problem: the DICOM Structured Report (SR). By using a DICOM SR, one can store the report with the images; any time the images are viewed or sent to another location, the report goes with them with no additional steps.
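
The SR object follows the same pattern as the PDF sketch above. Production SRs are built from coded templates; this hedged fragment only shows the shape of a Basic Text SR:

```python
# Same pattern as the PDF sketch above, but as a Basic Text SR.
from pydicom.dataset import Dataset

BASIC_TEXT_SR = "1.2.840.10008.5.1.4.1.1.88.11"  # Basic Text SR Storage

ds = Dataset()
ds.SOPClassUID = BASIC_TEXT_SR
ds.Modality = "SR"
ds.ValueType = "CONTAINER"            # the root of the SR content tree
ds.ContinuityOfContent = "SEPARATE"
ds.CompletionFlag = "COMPLETE"
ds.VerificationFlag = "UNVERIFIED"

item = Dataset()
item.RelationshipType = "CONTAINS"
item.ValueType = "TEXT"
item.TextValue = "IMPRESSION: No acute findings."  # the report text
ds.ContentSequence = [item]
# ...then set file_meta, patient/study UIDs, and save exactly as in the
# PDF sketch, so the SR joins the study as its own series.
```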

I did this with my VNA from the beginning and it has been a huge success. My EMR viewer can process the SR, so when a hospitalist looks at priors for history, the report is available for review without going back and forth to the EMR to find the interpretation that goes with the images. Similarly, any time images are requested by another facility or need to be shared for patient care, the report is always with the images, either as a DICOM SR or an encapsulated PDF. See, that was worth reading to the end, wasn’t it?

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

The DICOM is in the Details! But how does it work?

Most of us use DICOM every day; we smell it, we live it, and we talk about it. However, the deep dark secret is often that we don’t really know how it works. What is a SOP? What is a transfer syntax? And why do the engineers keep talking about Endians?

To begin, let’s quickly review how a DICOM store occurs. The sending system initiates a transaction: the sender is the user of the DICOM Store service (Service Class User, or SCU) and the receiver is the provider of DICOM Store (Service Class Provider, or SCP). The user says, “I have a study that I want to store.” The provider (receiver) says, “Great, here I am.” Then the user says, “I want to send a Breast Tomosynthesis image.”

*Nerd alert* The type of image to be sent is defined by the SOP Class. SOP stands for Service-Object Pair, which is the Information Object Definition (the image type) plus the DICOM Service Elements (the DICOM wrapper). The SOP Class UID for Breast Tomo is 1.2.840.10008.5.1.4.1.1.13.1.3, which was added in a supplement.

https://www.dicomlibrary.com/dicom/sop/

At this point the provider replies with “yes, no problem” or “no, I don’t know what that is.” If the answer is yes and the receiver (SCP) supports that SOP Class (see how you are starting to get the lingo!), it will also send back the list of languages it speaks. We are all pretty familiar by now with the three types of compression: uncompressed, commonly just called DICOM; lossless compressed, which is smaller but still acceptable for reading; and lossy compressed, in which some image data is lost but the result is much smaller. Each of these, along with several others, is called in DICOM-speak a transfer syntax.

Once the sender and receiver have agreed on what will be stored, the receiver sends back the list of languages it speaks, or transfer syntaxes. The sender (SCU) then selects one of these to send the image. Thus, it is the sender that decides whether or not the image is sent compressed. Implicit VR Little Endian is the default DICOM transfer syntax and is therefore supported by ALL vendors. Because of this, many vendors take the easy road and simply accept the default. This is … OK … within a LAN, but when data is stored or transferred over a WAN, compression becomes very important.

https://www.dicomlibrary.com/dicom/transfer-syntax/
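
For the curious, here is a hedged sketch of that negotiation using the pynetdicom library. The host, port, and AE titles are invented for illustration:

```python
# Proposing a SOP Class with two transfer syntaxes, then printing
# what the SCP actually accepted.
from pydicom.uid import ImplicitVRLittleEndian, JPEG2000Lossless
from pynetdicom import AE

BREAST_TOMO = "1.2.840.10008.5.1.4.1.1.13.1.3"  # Breast Tomosynthesis Image Storage

ae = AE(ae_title="MY_SCU")
# "I want to send Breast Tomo, and I speak these transfer syntaxes."
ae.add_requested_context(BREAST_TOMO, [ImplicitVRLittleEndian, JPEG2000Lossless])

assoc = ae.associate("vna.example.org", 11112, ae_title="VNA_SCP")
if assoc.is_established:
    # The SCP answers with the contexts (and syntaxes) it accepted.
    for cx in assoc.accepted_contexts:
        print(cx.abstract_syntax, "->", cx.transfer_syntax[0])
    assoc.release()
else:
    print("Association rejected or failed")
```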

Now that the SCU and SCP have agreed on what is to be sent and how it will be sent, the data transmission begins. Transmission can happen at the instance level, which refers to individual images, or at the study level, in which many images are sent over the same association. Once the association is complete, the sender may initiate a Storage Commit, which I highly recommend when sending to a VNA across a WAN.
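
In code, a study-level send is just a loop of C-STOREs over one association. A minimal pynetdicom sketch, with a made-up folder and endpoint:

```python
# Study-level send: many C-STOREs, one association.
from pathlib import Path
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

ae = AE(ae_title="MY_SCU")
ae.requested_contexts = StoragePresentationContexts  # common storage SOP classes

assoc = ae.associate("vna.example.org", 11112, ae_title="VNA_SCP")
if assoc.is_established:
    for path in sorted(Path("study_folder").glob("*.dcm")):
        ds = dcmread(path)
        status = assoc.send_c_store(ds)  # one C-STORE per instance
        ok = status and status.Status == 0x0000
        print(path.name, "stored" if ok else "failed")
    assoc.release()
```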

Briefly, in a storage commit message the sender reaches back out to the provider with a list of every individual image that was sent. The provider then responds either positively, that ALL images were received, or negatively, meaning something is missing. In the negative case the entire study is considered a failure and will be resent, which takes up a lot of your bandwidth.
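
The request side of that exchange is an N-ACTION to the Storage Commitment Push Model SOP Class. A hedged sketch, again with invented endpoints and UIDs; note the SCP’s answer arrives later as an N-EVENT-REPORT, often on a separate association:

```python
# Sketch of a storage-commitment request with pynetdicom.
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid
from pynetdicom import AE

STORAGE_COMMIT_PUSH = "1.2.840.10008.1.20.1"    # Storage Commitment Push Model
WELL_KNOWN_INSTANCE = "1.2.840.10008.1.20.1.1"  # its well-known SOP Instance

# One ReferencedSOPSequence item per image we just C-STOREd.
item = Dataset()
item.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.13.1.3"
item.ReferencedSOPInstanceUID = "1.2.3.4.5"      # placeholder instance UID

req = Dataset()
req.TransactionUID = generate_uid()              # ties the request to the reply
req.ReferencedSOPSequence = [item]

ae = AE(ae_title="MY_SCU")
ae.add_requested_context(STORAGE_COMMIT_PUSH)
assoc = ae.associate("vna.example.org", 11112, ae_title="VNA_SCP")
if assoc.is_established:
    # Action Type 1 = "Request Storage Commitment"
    status, _ = assoc.send_n_action(req, 1, STORAGE_COMMIT_PUSH, WELL_KNOWN_INSTANCE)
    assoc.release()
```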

Please like, share and comment. I would love to know what topics are of interest to the imaging people out there!

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

What the heck is Enterprise Imaging … Really?


According to SIIM, Enterprise Imaging is

“a set of strategies, initiatives and workflows implemented across a healthcare enterprise to consistently and optimally capture, index, manage, store, distribute, view, exchange, and analyze all clinical imaging and multimedia content to enhance the electronic health record.” (SIIM Enterprise Imaging Workgroup, 2018)

Healthcare Informatics says

“The foundations for building an effective EI program can be leveraged from traditional imaging and IT best-practice fundamentals. The following lays out the deliberate, innovative alignment between “imaging & imaging IT Fundamentals,” such as acquisition, management, distribution, archive, and governance, with evolving industry conditions impacting imaging and IT, such as the demand for centralization, standardization, interoperability, data integrity, and governance.” (Pittman, 2015)

While there is a lot of good information in both of those, they are a bit unwieldy. So, let’s say Enterprise Imaging is a plan to efficiently use all images to better treat the patient. We all want to provide better care, and who can argue with efficiency in the form of physician time or hospital resources? Some may be thinking: I have images in my EMR, so I have Enterprise Imaging, box checked. I would challenge that there are many more images out there than we realize. The obvious departments that come to mind are Radiology and Cardiology, but images are also generated in surgery, the ED, dermatology, GI, the lab, and countless other places. A rheumatology clinic I work with very often uses ultrasound for needle placement for injections. Before we get to questions like how do I optimize workflows and where do I put the images, I would first ask: what is the purpose of the image? Does it need to be kept? If so, for how long? Just because we can store it forever doesn’t mean we should, but that is an entirely different discussion.

Once we have identified all of the images that are acquired and determined the regulatory constraints and useful clinical relevance, we can look to apply workflow and best practices. Radiology has some of the most developed rules and processes for acquiring, moving, using, and storing images, and being a long-time PACS person, all images look like X-rays to me. However, they are not all the same. Radiology best practices can and should be applied to other imaging areas, but only where they make sense given the need. Orders make sense in the radiology context because they provide a mechanism to attach a result to, which is a separate billable component. In many instances outside of radiology, images are supplemental or supporting information that belongs with other clinical notes regarding the procedure. There are other workflows that can appropriately store this information in the EMR without the order/accession number process.

Probably the most important lesson we can learn from radiology workflows is the importance of categorizing the information. Hand-entered demographics don’t work. Hand-entered descriptions of the data don’t work. Whatever workflow is developed, it must include selecting the patient from a list, typically derived from admissions, and then describing the data by how and where it was acquired, again using a discrete set of procedures or descriptions of what the images represent. Without these two things the images are relatively useless: it is highly unlikely they will ever be viewed again, since it will be difficult to identify what they are, and they may or may not be accepted into the EMR.

The next lesson we can learn is around data standards. Regardless of the department, vendors love to store your data in a proprietary format that is only accessible in their system. This is self-preservation and future revenue streams. No vendor wants to make it easy for you to share data and read it in another system, nor do they want to make it easy to manage your own data. This is what I like to call stickiness, because you are stuck with that vendor; a more economic term would be high barriers to exit, or switching costs. If it costs $250,000 just to move the data to another system, you may choose to stay with an inferior product due to the additional costs of changing vendors. So it is critical when purchasing a system that you demand the data be stored in standard formats and that the local team have the ability to access the raw data and move it to other systems in the event the departmental team chooses a different system in the future.

To return to the question, what is Enterprise Imaging? I would opine (yes, it is a word) that it means taking the appropriate best practices in IT and imaging and developing a plan to apply them to all images that are acquired, with the ultimate goal of improving patient care.


Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

References

Pittman, D. (2015, April 15). Enterprise Imaging: The “New World” of Clinical Imaging & Imaging IT. Retrieved from Healthcare Informatics: https://www.healthcare-informatics.com/article/enterprise-imaging-new-world-clinical-imaging-imaging-it

SIIM Enterprise Imaging Workgroup. (2018, 01 28). What is Enterprise Imaging? Retrieved from Society for Imaging Informatics in Medicine: http://siim.org/page/enterprise_imaging

How do you architect your VNA?


Before you can design your VNA, you need to identify what exactly it is that you want to build. Saying the requirement is simply “be the long-term archive for images” is about the same as saying “I want a building to live in” or “build me an office building downtown.” While there are many, many requirements and concepts, I would like to focus on two for now. The first is the concept of location: where are the images, where are the consumers, and where will the images be stored? The second is how to organize data within your VNA.

It seems somewhat anachronistic to be talking about physical location in a virtual world, but location matters because it affects latency, which determines how fast you can move the data. In a perfect world there are gigabit pipes everywhere with no latency, and data moves almost instantaneously. In my world, however, there are some pretty slow and saturated networks. So, where are the consumers of imaging data? Typically the radiologists and referring physicians. These users are likely in relatively close physical proximity to the origin of the images, at least in a metropolitan area; nighthawk and teleradiology are a different subject altogether. To provide the fastest access to data, there should be some amount of images near the consumers. This is typically a local cache or short-term storage (STS), often provided by the primary PACS. However, in many circumstances the VNA now supplies the EMR image viewer directly, in which case some subset of images should be in close proximity to the EMR servers. Depending on the configuration, you may need to set up a component of the VNA to act as that local short-term storage; this is critical in a VNA-first workflow, which is part of the “deconstructed PACS” concept. Also important is the location of the data center, which may be in the building, across town, or across the country. The size of the cache varies based on workflows and user needs: primary diagnostics may require a cache of up to two years’ worth of data on site, or none at all. When I say two years’ worth, I simply mean enough storage to hold all data acquired in the last two years.
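
Sizing that cache is simple arithmetic once you know your volumes. A back-of-the-envelope sketch with made-up numbers:

```python
# "Two years' worth" just means enough disk for everything acquired
# in two years. All figures below are illustrative assumptions.
ANNUAL_STUDIES = 180_000  # assumption: studies acquired per year
AVG_STUDY_MB = 85         # assumption: blended average study size in MB
YEARS = 2

cache_tb = ANNUAL_STUDIES * AVG_STUDY_MB * YEARS / 1_000_000
print(f"Short-term storage needed: {cache_tb:.1f} TB")  # -> 30.6 TB
```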

The second and more interesting idea is how to organize data within your VNA. Each vendor has their own terminology, so I will refer to “buckets”. How you put data INTO your VNA greatly affects how you get data OUT of your VNA. As we all know, business runs in cycles, so there will likely be a period of acquisitions and growth followed by a period of divestitures; the smart VNA team plans for both. The simplest way to organize the data is one big bucket; hey, that’s a VNA, right? With all data stored in one bucket, something like the accession number, or more likely the Study Instance UID, which is by definition and standard unique throughout the world, is how data is separated. To find data in your bucket you query on a number of fields, like patient name, DOB, study type, or, in a perfect world, the UID. This is simple to set up and easy to get data in, but hard to get data out. The other extreme of the spectrum is to create a bucket for everything. Data can be split into endless buckets; I have heard of facilities that have one bucket per modality, per hospital, per year, meaning they have Hospital A X-ray 2017, CT 2017, MR 2017, then Hospital B X-ray, CT, and so on. This is difficult to get data in, but very easy to find data and get it out.

Why would one separate data? It makes reporting much easier, as many VNAs (and PACS) don’t do the best job of analytics. It is also logical when looking at the facility level. The trick is that when you expand the view up to an enterprise of, say, 20-40 hospitals and related imaging centers, the problem becomes more complex and too many buckets become unsustainable. Having worked with many enterprises through the years, I have settled into the “sweet spot”: basically, I plan for an eventual divestiture. This is typically done not at the modality level but at the facility level. No one has ever asked me to separate out and deliver only one modality, but an imaging center is often bought or sold, as is a hospital. This level allows for adequate reporting and tracking but also facilitates a smooth transition during a divestiture.

In practical terms I have found that four walls define a facility, as does a business relationship. Hospital A may have a women’s care center in house, an imaging department, and an off-campus urgent care center (UCC). I would create a VNA bucket for each, as well as for each source system. PACS, CPACS, ophthalmology, surgery, and eventually pathology are unique entities that should have separate data, as they will likely be replaced and upgraded on separate schedules; therefore, they each get their own bucket. That does, of course, bring up the question of mammography. When there is a separate departmental system for mammography, it is often connected into PACS. If there is a workflow reason to store those images in PACS, then I would not create a separate VNA bucket; if there is no reason to, then have the mammo system archive to the VNA directly.
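
To illustrate the idea, here is a hedged sketch of a facility-plus-source-system routing scheme. Vendors use their own terms (tenant, partition, organizational entity), and the bucket names and AE titles here are invented:

```python
# (calling AE title) -> VNA bucket, one bucket per facility per source system
ROUTING = {
    "HOSPA_PACS":  "HospitalA_Radiology",
    "HOSPA_CPACS": "HospitalA_Cardiology",
    "HOSPA_UCC":   "HospitalA_UCC",
    "HOSPB_PACS":  "HospitalB_Radiology",
}

def bucket_for(calling_ae: str) -> str:
    """Pick the destination bucket for an inbound study."""
    # Quarantine anything unrecognized rather than polluting a bucket.
    return ROUTING.get(calling_ae, "UNASSIGNED_REVIEW")

print(bucket_for("HOSPA_CPACS"))  # HospitalA_Cardiology
```

The payoff comes at divestiture time: handing over “everything in the HospitalA_* buckets” is a far cleaner operation than carving studies out of one giant pool.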

One last point: there is effort and complexity that goes into breaking imaging streams into separate buckets, and this setup cost must be weighed against the benefit of reporting and a possible long-term divestiture. I can confidently say that when you go through the offloading and divestiture process you will be very glad you have the data broken out, because it makes the process significantly easier. In that environment the facility has already been sold, and it is therefore difficult to justify resources for the process. You will want to be able to point the new owner to the data and let them retrieve it, with confidence that the data is separated in your VNA such that they can’t see or accidentally get to anything else.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org


How Big is Your Pipe and Does it Matter?


Let me start by saying I am not a network analyst and have never been one, but I did stay at a Holiday Inn Express last night. I want to look at WANs, networks, and bandwidth from the perspective of how they affect archiving of images (big sets of data) across the WAN. For any truly smart network folk out there, I appreciate comments and corrections. Now to the meat of things.

Networks are like a highway, and you are driving a car or pickup truck. Two main factors matter: the number of lanes and the speed you are driving. If you are on an old country road, it is likely two lanes and you may be driving 40-50 MPH. Not bad, and if it’s just you, you don’t need more than two lanes and you get there just fine. The number of lanes is bandwidth, the speed you are driving corresponds to latency (the slower you go, the higher the latency), and the road is the connection, sometimes called the pipe.

If more people start driving on the same road, it starts to slow down, so you can add lanes or increase the speed limit. It makes sense to expand the road to two, three, or even four lanes. At a certain point, though, expanding the road doesn’t help. Why? Because it costs a lot of money and yields only incremental gains in speed. Sure, you may go from 50 MPH to 60 or even 70, but you don’t get much faster than that, even with 12 lanes. And obviously, even with a ton of bandwidth, if you are driving slow you are unhappy.

Latency, or the delay in how data flows, can be dramatically affected by the route you take. Say you are driving from Dallas to Chicago. According to Google Maps, you take 75 North, then 69 North, and finally I-44; it is 927 miles and should take just under 14 hours. Let’s say there is a wreck on I-44 and your trusty phone re-routes you through Charlotte, NC. Your trip is now 1,785 miles and will take 27 hours … no bueno. This is EXACTLY how data gets routed in and around the internet; in this case your latency just went from 14 hours to 27 hours. It really doesn’t matter how many lanes the highway has, you have a long drive ahead of you. This is known in the networking world, coincidentally, as the route. It is often measured by the number of “hops,” which basically equates to the number of cities between you and your destination. Four or five hops is good; 10 or 12 is bad. The more hops, the longer it will take, and the more likely you got routed through Atlanta or Charlotte on your way to Chicago.

Now, to add insult to injury, suppose you have a big load to send, say a 100 MB file or a 600 MB breast tomo exam. To continue the analogy, let’s call it a ton of bricks. You can only fit a portion of those bricks in your trusty pickup (I really am from Dallas). Given that your truck can only fit a tenth of the bricks at a time, you need to make ten trips. Now you can see that the latency adds up very quickly, because of course your truck has to cross the network twice for each load: you send some data, and the other side sends back a verification of what it received. This is where someone will say, AH HA! I will just send ten trucks at once! I do need more bandwidth! Unfortunately, it just doesn’t work that way; you can’t put all the data on the wire at once. As the file is broken up, constraints in the systems themselves limit each connection to, let’s say, three trucks at a time. Call it state law, a limited number of trucks in America, sun spots, I don’t know … it just is.
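
That “limited number of trucks” is essentially the TCP window: only so much data can be in flight per round trip, so effective throughput is roughly window size divided by round-trip time. A back-of-the-envelope sketch (the numbers are illustrative, not measurements):

```python
# Why latency caps throughput regardless of bandwidth:
# one window "in flight" per round trip.
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-connection throughput, in megabits/sec."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

window = 64 * 1024            # a common default TCP window: 64 KB
for rtt in (1, 10, 50, 100):  # LAN vs. cross-country WAN round trips
    print(f"RTT {rtt:>3} ms -> {max_throughput_mbps(window, rtt):8.1f} Mbps")

# RTT   1 ms ->    524.3 Mbps
# RTT  10 ms ->     52.4 Mbps
# RTT  50 ms ->     10.5 Mbps
# RTT 100 ms ->      5.2 Mbps
```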

If you have stayed with me through all of this, you should see that there is a balancing act going on. You want enough bandwidth that you are not constrained to one lane, but at a certain point the constraint tips and it is not bandwidth but latency that is slowing down your data transfer. So, what can be done? That, my friends, I will leave to the network people, but I think it has something to do with point-to-point connections and dedicated routes through “the cloud”.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

How Big is a Mammography Study?

It depends. I was recently asked, “What is the average size of a mammography study?” I asked for clarification: what do you mean? I received a somewhat strange look and the response, “Mammography, you know, breast imaging…”

Bottom line up front: somewhere between 20 MB and 1 GB per exam.

The problem is that there is no easy answer to that question because, well, it depends. For starters, there is, as we all know, a huge difference between breast tomosynthesis and plain 2D mammo, so an average of the two varies greatly with the ratio of tomo to mammo. If you look in your VNA they will often share the modality code MG, so how do you tell the difference? Number of images? You could assume that anything over 10 images is tomo. The next way to tell would be to get into your database and query for the SOP Class Breast Tomosynthesis Image Storage (1.2.840.10008.5.1.4.1.1.13.1.3); however, you will find that within a study you will have a few tomo images stored alongside plain mammo images. Then again, you may be working with a popular vendor that stores the tomo images in a proprietary, and much smaller, format using the Secondary Capture SOP Class (1.2.840.10008.5.1.4.1.1.7). Easy, right? All you need to do is isolate the 2D exams from the exams containing a 3D image, then find out whether you are using the BTO SOP Class or Secondary Capture, THEN you can average your exams and get the average study size, right? Well, sort of…
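
If you have file-level access to an export, you can do the classification yourself. A hedged sketch with pydicom, reading headers from a hypothetical dump folder:

```python
# Sorting MG studies into tomo vs. everything else by SOP Class UID.
from collections import defaultdict
from pathlib import Path
from pydicom import dcmread

BTO = "1.2.840.10008.5.1.4.1.1.13.1.3"  # Breast Tomosynthesis Image Storage
SC = "1.2.840.10008.5.1.4.1.1.7"        # Secondary Capture (proprietary tomo)

studies = defaultdict(lambda: {"sop_classes": set(), "bytes": 0})
for f in Path("mg_exports").rglob("*.dcm"):   # made-up folder name
    ds = dcmread(f, stop_before_pixels=True)  # header only, so it's fast
    rec = studies[ds.StudyInstanceUID]
    rec["sop_classes"].add(str(ds.SOPClassUID))
    rec["bytes"] += f.stat().st_size

for uid, rec in studies.items():
    kind = "tomo (BTO)" if BTO in rec["sop_classes"] else "2D or SC-stored tomo"
    print(uid, kind, f"{rec['bytes'] / 2**20:.1f} MB")
```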

NOW we need to determine whether the images are compressed or not. To figure that out you need to look at the transfer syntax. Many modalities default to Implicit VR Little Endian, transfer syntax 1.2.840.10008.1.2, which is uncompressed. Many PACS will keep whatever syntax the modality sent and refuse to change it for fear of impacting image quality; therefore the study is stored on disk, and in the long-term archive or VNA, in the same format (unless you get into inbound and outbound compression, which is a whole different topic). There are of course many transfer syntaxes with varying compression, but we will take the other common one, JPEG 2000 Lossless (1.2.840.10008.1.2.4.90). Either can be applied to any of the SOP Classes described above.
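
Checking this per file is a one-liner’s worth of pydicom; the file name is a placeholder:

```python
# Quick check of how a file on disk is actually encoded.
from pydicom import dcmread

ds = dcmread("image.dcm", stop_before_pixels=True)  # header only
ts = ds.file_meta.TransferSyntaxUID
print(ts, "-", ts.name, "-", "compressed" if ts.is_compressed else "uncompressed")
```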

So the question stands: what type of mammo do you mean? Standard format or proprietary (but very common)? Compressed or uncompressed? How you ask the question will skew the answer dramatically. And given the trend in the market, tomo is growing, so the average in December 2017 is very different from the average in December 2016.

If you have read all the way through this, here is the payoff: the breast tomo format, lossless compressed, averaged out to be 711 MB, while the secondary capture format, also lossless compressed, averaged in at 194 MB.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org