What the heck is Enterprise Imaging… Really?


According to SIIM, Enterprise Imaging is

“a set of strategies, initiatives and workflows implemented across a healthcare enterprise to consistently and optimally capture, index, manage, store, distribute, view, exchange, and analyze all clinical imaging and multimedia content to enhance the electronic health record.” (SIIM Enterprise Imaging Workgroup, 2018)

Healthcare Informatics says

“The foundations for building an effective EI program can be leveraged from traditional imaging and IT best-practice fundamentals. The following lays out the deliberate, innovative alignment between “imaging & imaging IT Fundamentals,” such as acquisition, management, distribution, archive, and governance, with evolving industry conditions impacting imaging and IT, such as the demand for centralization, standardization, interoperability, data integrity, and governance.” (Pittman, 2015)

While there is a lot of good information in both of those definitions, they are a bit unwieldy. So, let's say Enterprise Imaging is a plan to efficiently use all images to better treat the patient. We all want to provide better care, and who can argue with efficiency in the form of physician time or hospital resources? Some may be thinking: I have images in my EMR, so I have Enterprise Imaging, box checked.

I would challenge that there are many more images out there than we realize. The obvious departments that come to mind are Radiology and Cardiology. There are also images generated in surgery, the ED, dermatology, GI, the lab, and countless other places. A rheumatology clinic I work with very often uses ultrasound for needle placement for injections. Before we get to questions like how do I optimize workflows and where do I put the images, I would first ask: what is the purpose of the image? Does it need to be kept? If so, for how long? Just because we can store it forever doesn't mean we should, but that is an entirely different discussion.

Once we have identified all of the images that are acquired and determined the regulatory constraints and useful clinical relevance, we can look to apply workflows and best practices. Radiology has some of the most developed rules and processes for acquiring, moving, using, and storing images, and being a longtime PACS person, all images look like X-rays to me. However, they are not all the same. Radiology best practices can and should be applied to other imaging areas, but only where it makes sense given the need. Orders make sense in the radiology context because they provide a mechanism to attach a result to, and that result is a separately billable component. In many instances outside of radiology, images are supplemental or supporting information that belongs with the other clinical notes regarding the procedure. There are other workflows that can appropriately store this information in the EMR without the order/accession number process.

Probably the most important lesson we can learn from radiology workflows is the importance of categorizing the information. Hand-entered demographics don't work. Hand-entered descriptions of the data don't work. Whatever workflow is developed, it must do two things: select the patient from a list, typically derived from admissions, and describe the data by how and where it was acquired, using a discrete set of procedures or descriptions of what the images represent. Without these two things the images are relatively useless: it is highly unlikely they will ever be viewed again, because it will be difficult to identify what they are, and they may or may not be accepted into the EMR.
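
To make that concrete, here is a minimal sketch in Python of what a discrete, validated image record might look like. The department and procedure lists are hypothetical; the point is that the patient comes from the admissions-derived list and the description comes from a governed picklist, never from free text.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical picklists -- in practice these come from your ADT feed
    # and a governed procedure catalog, not free text.
    DEPARTMENTS = {"RAD", "CARD", "DERM", "GI", "SURG"}
    PROCEDURES = {"US GUIDED JOINT INJECTION", "WOUND PHOTO", "ENDOSCOPY CLIP"}

    @dataclass
    class ImageRecord:
        mrn: str          # selected from the ADT patient list, never hand-keyed
        patient_name: str
        dob: date
        department: str   # where and how the image was acquired
        procedure: str    # discrete description of what the images represent

        def validate(self) -> None:
            if self.department not in DEPARTMENTS:
                raise ValueError(f"unknown department: {self.department}")
            if self.procedure not in PROCEDURES:
                raise ValueError(f"procedure not in catalog: {self.procedure}")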

The next lesson we can learn is around data standards. Regardless of the department, vendors love to store your data in a proprietary format that is only accessible in their system. This is self-preservation and future revenue streams. No vendor wants to make it easy for you to share data and read it in another system, nor do they want to make it easy to manage your own data. This is what I like to call stickiness, because you are stuck with that vendor. A more economic term would be high barriers to exit, or switching costs. If it costs $250,000 just to move the data to another system, you may choose to stay with an inferior product due to the additional costs of changing vendors. So it is critical when purchasing a system to demand that the data is stored in standard formats and that the local team has the ability to access the raw data and move it to other systems, in the event the departmental team chooses a different system in the future.

To return to the question: what is Enterprise Imaging? I would opine (yes, it is a word) that it means taking the appropriate best practices in IT and imaging and developing a plan to apply them to all images that are acquired, with the ultimate goal of improving patient care.


Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

References

Pittman, D. (2015, April 15). Enterprise Imaging: The “New World” of Clinical Imaging & Imaging IT. Retrieved from Healthcare Informatics: https://www.healthcare-informatics.com/article/enterprise-imaging-new-world-clinical-imaging-imaging-it

SIIM Enterprise Imaging Workgroup. (2018, January 28). What is Enterprise Imaging? Retrieved from Society for Imaging Informatics in Medicine: http://siim.org/page/enterprise_imaging

How do you architect your VNA?


Before you can design your VNA, you need to identify what exactly it is that you want to build. Saying the requirement is simply “be the long-term archive for images” is about the same as saying “I want a building to live in,” or “build me an office building downtown.” While there are many, many requirements and concepts, I would like to focus on two for now. The first is the concept of location: where are the images, where are the consumers, and where will the images be stored? The second is how to organize data within your VNA.

It seems somewhat anachronistic to be talking about physical location in a virtual world, but location matters because it affects latency, which determines how fast you can move the data. In a perfect world there are gigabit pipes everywhere with no latency, and data moves almost instantaneously. However, in my world there are some pretty slow and saturated networks. So, where are the consumers of imaging data? Typically the radiologists and referring physicians. These users are likely in relatively close physical proximity to the origin of the images, at least in a metropolitan area; Nighthawk and teleradiology are a different subject altogether. To provide the fastest access to data, there should be some amount of images near the consumers. This is typically a local cache or short-term storage (STS), often provided by the primary PACS. However, in many circumstances the VNA is now supplying the EMR image viewer directly, in which case there should be some subset of images in close proximity to the EMR servers. Depending on the configuration, you may need to set up a component of the VNA to act as that local short-term storage. This is critical in a VNA-first workflow, which is part of the “deconstructed PACS” concept. Also important is the location of the data center, which may be in the building, across town, or across the country. The size of the cache varies based on workflows and user needs; primary diagnostics may require a cache of up to 2 years' worth of data on site, or none at all. When I say 2 years' worth, I simply mean enough storage to hold all data that was acquired in the last two years.
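
For a rough sense of what 2 years' worth means in terabytes, here is a back-of-envelope calculation; the study volumes and sizes below are illustrative only, so substitute your own numbers:

    # Back-of-envelope STS cache sizing -- all numbers are illustrative.
    studies_per_day = 800        # across the facilities served by this cache
    avg_study_size_gb = 0.25     # CT/MR-heavy mix; tomo pushes this up fast
    retention_days = 365 * 2     # the "2 years' worth" rule of thumb

    cache_size_tb = studies_per_day * avg_study_size_gb * retention_days / 1024
    print(f"STS cache needed: ~{cache_size_tb:.0f} TB")  # ~143 TB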

The second and more interesting idea is how to organize data within your VNA. Each vendor has their own terminology, so I will refer to them as “buckets”. How you put data IN to your VNA greatly affects how you get data OUT of your VNA. As we all know, business runs in cycles, so there will likely be a period of acquisitions and growth, followed by a period of divestitures. The smart VNA team will plan for both. The simplest way to organize the data is in one big bucket; hey, that's a VNA, right? With all data stored in one bucket, something like the accession number, or more likely the Study Instance Unique Identifier (UID), which is by definition and by standard unique throughout the world, would be how data is separated. To find data in your bucket you need to query on a number of fields, like patient name, DOB, and study type, or in a perfect world the UID. This is simple to set up and easy to get data in, but hard to get data out. The other extreme of the spectrum is to create a bucket for everything. Data can be split into endless buckets; I have heard of facilities that have one bucket per modality, per hospital, per year, meaning they have Hospital A X-ray 2017, CT 2017, MR 2017, then Hospital B X-ray, CT, and so on. This is difficult to get data in, but very easy to find data and get it out.
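
To see why one big bucket is easy in but hard out, note that every retrieval becomes a metadata query, and a divestiture means running such queries for every patient the buyer owns. A toy sketch with made-up study records:

    # One big bucket: the only way out is a query across everything.
    studies = [
        {"uid": "1.2.840.1.1", "mrn": "12345", "facility": "HOSP_A", "modality": "CT"},
        {"uid": "1.2.840.1.2", "mrn": "67890", "facility": "HOSP_B", "modality": "MR"},
    ]

    def find(studies, **criteria):
        return [s for s in studies if all(s.get(k) == v for k, v in criteria.items())]

    print(find(studies, facility="HOSP_A"))  # trivial for 2 studies, painful for 10 million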

Why would one separate data? It makes reporting much easier, as many VNAs (and PACS) don't do the best job of analytics. It is also logical when looking at the facility level. The trick is that when you expand the view up to an enterprise of, say, 20-40 hospitals and related imaging centers, the problem becomes more complex and too many buckets becomes unsustainable. Having worked with many enterprises through the years, I have settled into the “sweet spot”: basically, I plan for an eventual divestiture. This is typically not done at the modality level but at the facility level. No one has ever asked me to separate out and deliver only one modality, but oftentimes an imaging center is bought or sold, as is a hospital. This level allows for adequate reporting and tracking but also facilitates a smooth transition during a divestiture.

In practical terms, I have found that four walls define a facility, as does a business relationship. Hospital A may have a women's care center in house, an imaging department, and an off-campus urgent care center (UCC). I would create a VNA bucket for each, as well as for each source system. PACS, cardiology PACS (CPACS), ophthalmology, surgery, and eventually pathology are unique entities that should have separate data, as they will likely be replaced and upgraded on separate schedules. Therefore, they each get their own bucket. That does, of course, bring up the question of mammography. When there is a separate departmental system for mammography, it is often connected into PACS. If there is a workflow reason to store these images in PACS, then I would not create a separate VNA bucket. If there is no reason to, then have the mammo system archive to the VNA directly.
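
Put together, the sweet spot amounts to a simple naming convention: one bucket per facility, per source system. A sketch with hypothetical facility and system codes:

    def bucket_name(facility: str, source_system: str) -> str:
        # One bucket per facility, per source system -- never per modality or year.
        return f"{facility}_{source_system}".upper()

    print(bucket_name("hosp_a", "pacs"))      # HOSP_A_PACS
    print(bucket_name("hosp_a_ucc", "pacs"))  # HOSP_A_UCC_PACS, the off-campus UCC
    print(bucket_name("hosp_a", "cpacs"))     # HOSP_A_CPACS, cardiology kept separate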

One last point: there is effort and complexity that goes into breaking up imaging streams into separate buckets. This setup cost must be weighed against the benefit of reporting and a possible long-term divestiture. I can confidently say that when you go through the offloading and divestiture process, you will be very glad you have the data broken out, because it makes the process significantly easier. In that environment the facility has already been sold, and therefore it is difficult to justify resources for the process. You will want to be able to point the new owner to the data and let them retrieve it, having confidence that the data is separated in your VNA such that they can't see or accidentally get to other data.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org


How Big is Your Pipe and Does it Matter?


Let me start by saying I am not a network analyst and have never been one, but I did stay at a Holiday Inn Express last night. I want to look at WANs, networks, and bandwidth from the perspective of how they affect archiving of images (big sets of data) across the WAN. From any truly smart network folks, I welcome comments and corrections. Now to the meat of things.

Networks are like a highway, and you are driving a car or a pick-up truck. There are two main factors that are important: the number of lanes and the speed you are driving. If you are on an old country road, it is likely 2 lanes and you may be driving 40-50 MPH. Not bad, and if it's just you, you don't need more than 2 lanes and you get there just fine. The number of lanes is bandwidth, the speed you are driving is latency, and the road is the connection, sometimes called the pipe.

If more people start driving on the same road, it starts to slow down, so you can add lanes or increase the speed limit. It makes sense to expand the road to 2, 3, or even 4 lanes. At a certain point, though, expanding the lanes doesn't help. Why? Because it costs a lot of money and you get only incremental benefits in speed. Sure, you may go from 50 MPH to 60 or even 70, but you don't get much faster than that, even if you have 12 lanes. Obviously, even if you have a ton of bandwidth, if you are driving slowly you are unhappy.

Latency, or the speed that data is flowing, can be dramatically affected by the route you take. Say you are driving from Dallas to Chicago. According to Google Maps, you take 75 North, then 69 North, and finally I-44; it is 927 miles and should take just under 14 hours. Let's say there is a wreck on I-44 and your trusty phone re-routes you through Charlotte, NC. Your trip is now 1,785 miles and will take 27 hours. No bueno. This is EXACTLY how data gets routed in and around the internet; in this case your latency just went from 14 hours to 27 hours. It really doesn't matter how many lanes the highway has, you have a long drive ahead of you. This is known in the networking world, coincidentally, as the route. It is often measured by the number of “hops”, which basically equates to the number of cities between you and your destination. 4 or 5 hops is good; 10 or 12 is bad. The more hops, the longer it will take, and the more likely you got routed through Atlanta or Charlotte on your way to Chicago.
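
If you want to see your own hop count, traceroute (tracert on Windows) will show it to you. As a rough, no-privileges-needed stand-in for ping, here is a small Python sketch that times a TCP handshake to a host; the host name is just an example, and each sample also includes a DNS lookup, so treat the numbers as approximate:

    import socket
    import time

    def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
        """Average round-trip latency estimated from TCP connect time."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass  # connection established: roughly one round trip
            total += time.perf_counter() - start
        return total / samples * 1000

    print(f"avg RTT: {tcp_rtt_ms('example.com'):.1f} ms")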

Now, to add insult to injury, suppose you have a big load to send: a 100 MB file, or a 600 MB breast tomo exam. To continue the analogy, let's call it a ton of bricks. You can only fit a portion of those bricks in your trusty pick-up (I really am from Dallas). Given that your truck can only fit 1/10 of the bricks at a time, you need to make 10 trips. Now you can see that the latency adds up very quickly, because of course your truck has to make 2 trips across the network for each load: out with the data, and back with the receipt. Your network does this as well; you send some data, and the other side sends back a verification of what it received. This is where someone will say, AH HA! I will just send 10 trucks at once! I do need more bandwidth! Unfortunately, it just doesn't work that way; you can't put all the data on the wire at once. As the file is broken up, constraints in the systems themselves limit each computer to, say, 3 trucks at a time. Let's say that is state law, a limited number of trucks in America, sun spots, I don't know… it just is.
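
You can put numbers on the trucks. With a fixed window of data allowed in flight per stream and a limited number of parallel streams, throughput tops out at roughly the window size divided by the round-trip time, no matter how big the pipe is. All numbers below are illustrative:

    # Effective throughput on a fat pipe with high latency -- illustrative numbers.
    link_mbps = 1000    # a 1 Gb/s pipe: the 12-lane highway
    rtt_ms = 60         # round trip, e.g. Dallas to Chicago and back
    window_kb = 64      # data each stream may have "on the road" per round trip
    streams = 3         # the 3-truck limit

    per_stream_mbps = (window_kb * 8 / 1024) / (rtt_ms / 1000)   # window / RTT
    effective_mbps = min(link_mbps, per_stream_mbps * streams)
    print(f"effective: ~{effective_mbps:.0f} Mb/s of a {link_mbps} Mb/s pipe")  # ~25

    study_mb = 600      # the breast tomo exam
    print(f"600 MB study: ~{study_mb * 8 / effective_mbps:.0f} s")  # ~192 s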

If you have stayed with me through all of this, you should see that there is a balancing act going on. You want enough bandwidth that you are not constrained to one lane, but at a certain point the constraint tips, and it is not bandwidth but latency that is slowing down your data transfer. So, what can be done? That, my friends, I will leave to the network people, but I think it has something to do with point-to-point connections and dedicated routes through “the cloud”.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org