That’s not a pencil, it’s a MEDICAL DEVICE!

Three years ago, I was visiting my primary care physician for an annual exam.  My doctor was not fresh out of medical school and had been my family physician for a number of years.  Dr. L did not like computers.  He was writing on my chart in pencil (yes, a for-real paper chart!).  I noticed that the pencil was worn down so far that it would only write from one angle, and even then it was more like a crayon.  I looked at him and said, “You might want to sharpen that pencil.”  He replied, “I can’t, this is a medical device.”  Being the highly technical imaging person that I am, I said, “Forgive me, Doctor, but that is not a medical device, it is just a pencil.”  Slightly exasperated, he took off his glasses and looked at me, replying, “This is your chart, a medical record.  Obviously, you can see I am making notes and documenting your diagnosis.  You can’t do that with just any writing device, that would be illegal!  I might be audited; you can only make a diagnosis with a medical device!”  Not taking the hint, I said, “Well, at least sharpen it, you can barely write with that.”  Now clearly ticked off, Dr. L replied, “Were you not listening?!  This pencil is a medical device.  If I were to sharpen it, I would have to have a licensed carpenter come in, charging me $400 an hour to sharpen it!  You can’t go messing with a medical device unless you have FDA clearance!”

Sooooooo, maybe there is a hint of sarcasm in my story, but let’s talk about what a medical device is and what the FDA really says.  I was at one time a vendor, and while I was, I said many of the same things about my system.  Medical device… can’t patch… blah blah… FDA certification…  I truly believed everything I said.  I had been told that by my company, and I had never read any FDA filings (at the time), so I was retelling what was, for me, the truth.  Like my former self, many vendors have never read, nor do they understand, the FDA process.

 

The FDA defines a medical device as “…an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is: recognized in the official National Formulary, or the United States Pharmacopoeia, or any supplement to them, intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals, and which does not achieve any of its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of any of its primary intended purposes.” (Syring, 2018)

 

From that definition we could assume that yes, a pencil is indeed a medical device, or could we?  Did the pencil do anything?  Did it assist in the diagnosis?  Not really; it assisted in recording it.  So we have to look at the distinction between what is used in the diagnosis and what merely supports it.  Is a CT or ultrasound a medical device?  Yes, no question.  What about PACS?  The software is considered a medical device, but the hardware it is running on likely is not.  Let’s examine a real 510(k) letter for a PACS.  By the way, if you want to look up the certification for your vendor, which I strongly encourage, you can do so on the FDA website.

https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm

Back to Vendor X……  “PACS X is medical image and information management software that is intended to receive, transmit, store, archive, retrieve, manage, display, print and process digital medical images, digital medical video and associated patient and medical information.  PACS X includes a suite of standalone, web-enabled software components, and is intended for installation and use with off-the-shelf hardware that meets or exceeds minimum specifications.” (emphasis added)

 

What this means is that the software is a medical device, and when the SOFTWARE is patched it must be tested in accordance with the General Principles of Software Validation, linked here (Food and Drug Administration (FDA), 2001).  The hardware that it runs on, however, does not.  You can run PACS X on any hardware that meets or exceeds specs, and it has no impact on the FDA certification whatsoever!  A vendor is well within their rights to provide an approved hardware list, but this is a support issue and not an FDA issue.  This distinction is very important!

 

Because the computer and operating system that run PACS software are not part of the 510(k) certification, there is no requirement for the FDA to review security patches.

“Medical device manufacturers can always update a medical device for cybersecurity. In fact, the FDA does not typically need to review changes made to medical devices solely to strengthen cybersecurity.” (Food and Drug Administration, 2018)

There is a one-page fact sheet that is very clearly written, and I encourage you to read it here.

In summary, your PACS software IS a medical device; however, what it RUNS on likely is not.  Especially given security concerns, it behooves us all to read the FDA guidance and hold our vendors accountable to make sure that our devices are patched and up to date.  No one wants to report to the CEO or CIO that their system was responsible for a virus or ransomware attack on the enterprise.  Also surprising to me was that, for all the secrecy and mystery surrounding medical devices and their maintenance, the FDA website is surprisingly clear and easy to understand.

 

Thank you for reading, please post comments and questions!

Kyle Henson

 

 

References

Food and Drug Administration (FDA). (2001, 02 25). Information for Healthcare Organizations about FDA’s “Guidance for Industry: Cybersecurity for Networked Medical Devices Containing Off-The-Shelf (OTS) Software”. Retrieved from Food and Drug Administration Website: https://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070634.htm

Food and Drug Administration. (2018, 02 02). Information for Healthcare Organizations about FDA’s “Guidance for Industry: Cybersecurity for Networked Medical Devices Containing Off-The-Shelf (OTS) Software”. Retrieved from Food and Drug Administration: https://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070634.htm

Food and Drug Administration. (2018, 02 07). THE FDA’S ROLE IN MEDICAL DEVICE CYBERSECURITY. Retrieved from Food and Drug Administration: https://www.fda.gov/downloads/MedicalDevices/DigitalHealth/UCM544684.pdf

Syring, G. (2018, 02 25). Overview: FDA Regulation of Medical Devices. Retrieved from Quality and Regulatory Associates: http://www.qrasupport.com/FDA_MED_DEVICE.html

 

 

 

The DICOM is in the Details! Part 2: Query / Retrieve

 

Given the apparent interest in the details of DICOM store transactions (thank you to all who read it!), I thought I would add a brief description of Query / Retrieve, and then next week I will write about my favorite, Storage Commit.

A DICOM Query Retrieve transaction is fairly simple: first there is a query, and then a retrieve; luckily the standards team didn’t go crazy with the names. The first part is of course the query, which is a C-FIND transaction. In a C-FIND we again have a service class user (SCU) and a service class provider (SCP). The provider is going to be the “server” and the user the “client,” the one making the request. The query can be for a study or for a patient. However, it does not have to be for only one. The query could be for all patients, or all studies done on a certain date, or, if you get a wild hair, all DEXA studies completed on Friday the 13th for patients whose first name begins with the letter Q.
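To make that concrete, here is a minimal sketch of a C-FIND identifier modeled as a plain Python dict. A real toolkit such as pydicom/pynetdicom would build an actual DICOM dataset instead; the attribute names below are real DICOM attributes, but the helper function itself is purely illustrative.

```python
# Sketch only: a C-FIND identifier is just a set of matching attributes.
# Convention from the standard: an empty string means "return this field
# in the response"; a value is a match criterion ("*" is a wildcard,
# dates may be single values or ranges like "20170101-20171231").
def build_cfind_identifier(patient_name="*", study_date="", modality=""):
    return {
        "QueryRetrieveLevel": "STUDY",   # could also be PATIENT, SERIES, IMAGE
        "PatientName": patient_name,     # e.g. "Q*" -> names starting with Q
        "StudyDate": study_date,         # e.g. "20170113" (a Friday the 13th)
        "ModalitiesInStudy": modality,   # e.g. "OT" or "MG"
        "StudyInstanceUID": "",          # empty: ask the SCP to return it
    }

# Example: the "wild hair" query from the text, minus the DEXA filter
ident = build_cfind_identifier(patient_name="Q*", study_date="20170113")
```

The SCU would send this identifier on the association; the SCP answers with one C-FIND response per matching study.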

No matter what the C-FIND attributes (the specifics of the query) are, the user will send the query to the SCP (provider), and the provider will then issue a C-FIND response. The response is the list of studies that meet the criteria.  Different systems have built-in mechanisms to deal with large C-FIND requests: some will reject the request if it is too broad, others will limit the number of responses to an arbitrary number such as 300, while still others don’t mind at all and simply send back a very long list of matches.

The client or SCU now has a list of studies and may decide to retrieve them. The command to retrieve a study is typically not “send me X study”; it is often a C-MOVE command, which roughly translates to “send X study over there,” with “there” usually being the requester. This is mostly a semantic point, but interesting to me. The C-MOVE command consists of what study is to be sent and where it is to be sent. The “where” is an Application Entity or AE title. Once the C-MOVE provider has this information, it then begins a DICOM Store transaction with the requested AE title.  For info on the DICOM Store, see The DICOM is in the Details!

(Yes, shameless plug for clicks)

One interesting note here: in the C-MOVE command the only destination is the AE title; it does not include the IP address or port! This gets complex because almost every PACS and modality has a standard AE title that the vendor uses for EVERY SINGLE INSTALLATION. I won’t call out a single vendor, because they all do it. This was not a problem back in the day, because relatively few systems queried each other, and they were often from different vendors. Now, however, when you are building an enterprise system like a VNA, it is not at all uncommon to have many PACS or CPACS from the same vendor, which brings AE title uniqueness into play.
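A sketch of why this matters: since the C-MOVE request carries only an AE title, the C-MOVE provider must map that title to a host and port from its own configuration before it can open the store. All names and addresses below are made up for illustration.

```python
# Hypothetical provider-side AE title configuration. In a real system
# this table lives in the PACS/VNA admin console; a duplicated or
# missing AE title leaves the provider with nowhere unambiguous to send.
AE_CONFIG = {
    "VNA_ARCHIVE": ("10.0.1.20", 104),
    "PACS_EAST":   ("10.0.2.30", 11112),
}

def resolve_move_destination(ae_title):
    """Return the (host, port) the provider will open a DICOM Store to."""
    try:
        return AE_CONFIG[ae_title]
    except KeyError:
        # Unknown destination: the C-MOVE fails before any image moves.
        raise ValueError(f"Unknown C-MOVE destination: {ae_title}")
```

This is exactly why reusing a vendor's default AE title across many installations breaks down once those systems share an enterprise archive.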

Some PACS have the ability to use multiple AE titles, so you can simply add a new AE for your VNA to send back to and not change the modalities. Other PACS will only support one AE title, and you may have to reconfigure all modalities sending to it. One last tangential point on Query / Retrieve: this process of C-FIND and C-MOVE is pretty much what all data migration companies do. They simply do a lot of transactions!

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

When a picture ISN’T worth a thousand words, where do reports fit into VNAs and Enterprise Imaging?

 

In traditional imaging systems like Radiology and Cardiology PACS, the report is always with the images. In Radiology, the dictation system sends a copy of the report to PACS via HL7, which works since it is text. In Cardiology, it is either a textual report, or the cardiology system creates the report and therefore has a copy. As we get outside the walls of those two systems, where does the report really live?

For those that don’t read to the end… the answer is DICOM SR in your VNA, but please keep reading!

In an environment where all users are logged into the EMR and launching images from there, it is not an issue, as the EMR is now the system of record for the reports and will have a copy. Now, IMAGINE A WORLD (cue deep commercial voice) where images are sent for reading to various physician groups who are not logged into the EMR.  Reading the newest image is not an issue, but what about priors? In some teleradiology workflows prior reports are faxed, in others someone copies and pastes prior reports from the EMR, and still others simply read what is in front of them.

I submit that there is a better way. As we move forward with outsourcing reads, and facilities are divested and acquired regularly, it makes no sense whatsoever not to keep reports with the images. The two are intrinsically linked and are important for different reasons as part of the patient record. Luckily, there are several mechanisms to resolve this. Surprisingly, I don’t see them implemented often.

Let’s start with the low-hanging fruit, cardiology. Since most CPACS have reporting modules within the system, the report is already with the images before the images are archived and/or sent elsewhere. While I am all for FHIR and emerging solutions, I prefer to stick with what I can implement today, now, and yes, there are options. The simplest is to do an HL7 export to the EMR. This will provide the text but no images. Oftentimes CPACS will generate a PDF report, but that ends up being imported as a separate document into the EMR and not linked. There are actually three options to export a content-rich report besides emailing the PDF.

The first is to utilize HL7 and the encapsulated document (ED) standard. The standard does exist, and it can be done, but I have not seen it nor talked to anyone who has tried it. The second is to store the PDF document in XDS; I am all about standards and a big believer in XDS. The problem is that first you have to HAVE an XDS repository, which many don’t, and secondly you need a system to act as the XDS source, which many (most) imaging systems don’t do. There is a very easy answer to this problem, and one that has been around for a very long time; it just isn’t used.

The easy answer is to DICOM-encapsulate the PDF report and store it with the images as another series. Many CPACS do this natively; it is as simple as clicking a button in the configuration to “archive report with images.”  Why this is not done more often is a mystery to me. This is a very good option for CPACS, which commonly produce PDFs as the report product, but for other systems that rely more on plain text, is the PDF the way to go?
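For the curious, here is roughly what an Encapsulated PDF object carries, sketched as a plain dict rather than a real DICOM toolkit dataset. The SOP Class UID is the real Encapsulated PDF Storage UID; the helper function, series number, and field selection are illustrative assumptions.

```python
# Real UID for Encapsulated PDF Storage (DICOM PS3.4 / PS3.6):
ENCAPSULATED_PDF_SOP = "1.2.840.10008.5.1.4.1.1.104.1"

def wrap_report_as_series(pdf_bytes, study_uid, series_number=999):
    """Sketch of the key attributes that keep the report with the study.

    Sharing the Study Instance UID is what makes the report travel with
    the images any time the study is viewed or sent elsewhere.
    """
    return {
        "SOPClassUID": ENCAPSULATED_PDF_SOP,
        "StudyInstanceUID": study_uid,    # same study UID as the images
        "SeriesNumber": series_number,    # stored as just another series
        "Modality": "DOC",                # encapsulated documents use DOC
        "MIMETypeOfEncapsulatedDocument": "application/pdf",
        "EncapsulatedDocument": pdf_bytes,
    }
```

A real implementation would of course populate the full patient and equipment modules as well; the point is only that the report becomes one more series inside the study.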

There are several options for textual reports as well. HL7 interfaces between systems are an option, but HL7 tends to be more of an all-or-nothing proposition. Again, XDS offers several opportunities; we stored the text reports as CDA objects in XDS, however this shares some of the previously stated limitations of XDS, namely the lack of adoption so far. Still, there is an old-school solution to this problem: the DICOM Structured Report (SR).  By using the DICOM SR, one can store the report with the images; any time the images are viewed or sent to another location, the report goes with them with no additional steps.

I did this with my VNA from the beginning, and it has been a huge success. My EMR viewer can process the SR, so when looking at priors for history, the report is available for review without the hospitalist having to go back and forth to the EMR to view the interpretation that goes with the images. Similarly, any time images are requested by another facility or need to be shared for patient care, the report is always with the images, either as a DICOM SR or an encapsulated PDF. See, that was worth reading to the end, wasn’t it?

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

The DICOM is in the Details! but how does it work?

Most of us use DICOM every day; we smell it, we live it, and we talk about it. However, often the deep dark secret is that we don’t really know how it works. What is a SOP? What is a transfer syntax? And why do the engineers keep talking about Endians?

To begin with, let’s quickly review how a DICOM Store occurs. The sending system initiates a transaction. The sending system is the USER of DICOM Store (Service Class User or SCU), and the receiver is the PROVIDER of DICOM Store (Service Class Provider or SCP). The user says, “I have this study that I want to store.” The provider (receiver) says, “Great, here I am.” Then the user says, “I want to send a Breast Tomosynthesis Image.”

*Nerd alert: the type of image to be sent is defined by the SOP Class. SOP stands for Service-Object Pair, which is the Information Object Definition (image type) plus the DICOM Service Elements (the DICOM wrapper).   The SOP Class UID for Breast Tomo is 1.2.840.10008.5.1.4.1.1.13.1.3, which was added in a supplement.

https://www.dicomlibrary.com/dicom/sop/

At this point the provider will reply back with “yes, no problem” or “no, I don’t know what that is.” If the answer is yes and the receiver (SCP) supports that SOP (see how you are starting to get the lingo!), it will also send back the list of languages it speaks. We are all pretty familiar by now with the three types of compression: uncompressed, commonly called DICOM; lossless compressed, which is compressed but still OK for reading; and lossy compressed, in which image data is lost but the result is much smaller. Each of these, along with several others, is called in DICOM-speak a transfer syntax.

Once the sender and receiver have agreed on what will be stored, the receiver sends back a list of languages it speaks, or transfer syntaxes. The sender or SCU will then select one of these to send the image. Thus, it is the sender that decides whether or not the image is sent compressed. Implicit VR Little Endian is the default DICOM transfer syntax and is therefore supported by ALL vendors.  Because of this, many vendors take the easy road and simply accept the default. This is… OK… within a LAN, but when the data is stored or transferred over a WAN, compression becomes very important.
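The negotiation can be sketched like this; the UIDs are real DICOM transfer syntax UIDs, but the selection logic is deliberately simplified (real negotiation happens per presentation context during association setup).

```python
# Real DICOM transfer syntax UIDs:
IMPLICIT_VR_LE    = "1.2.840.10008.1.2"       # uncompressed default, universal
JPEG2000_LOSSLESS = "1.2.840.10008.1.2.4.90"  # lossless compressed

def choose_transfer_syntax(scu_preference_order, scp_accepted):
    """The SCU picks its most-preferred syntax among those the SCP accepted."""
    for ts in scu_preference_order:
        if ts in scp_accepted:
            return ts
    return None  # nothing in common: this presentation context fails

# A WAN-conscious SCU prefers compression and falls back to the default:
chosen = choose_transfer_syntax(
    [JPEG2000_LOSSLESS, IMPLICIT_VR_LE],
    {IMPLICIT_VR_LE, JPEG2000_LOSSLESS},
)
```

This is why "just accept the default" is the lazy path: if the SCU only ever proposes Implicit VR Little Endian, everything moves uncompressed, even across the WAN.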

https://www.dicomlibrary.com/dicom/transfer-syntax/

Now that the SCU and SCP have agreed on what is to be sent and how it will be sent, the data transmission begins. The transmission can be at the instance level, which refers to individual images, or at the study level, in which many images are sent on the same association. Once the association is complete, the sender may initiate a Storage Commit, which I highly recommend when sending to a VNA across a WAN.

Briefly, in a storage commit message the sender reaches back out to the provider and sends a list of all individual images that were sent. The provider then responds back either positively, that ALL images were received, or negatively, if something wasn’t. In the case of a negative response the entire study is considered a failure and will be resent, which takes up a lot of your bandwidth.
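Conceptually, the provider's side of the check looks something like the sketch below. This is a simplification: the real service uses N-ACTION and N-EVENT-REPORT messages with more detail, but the core logic is "did I safely store every referenced instance?"

```python
def storage_commit_response(requested_uids, stored_uids):
    """Simplified provider-side storage commitment check.

    requested_uids: SOP Instance UIDs the sender claims it sent.
    stored_uids: instances the provider actually has safely stored.
    """
    stored = set(stored_uids)
    missing = [uid for uid in requested_uids if uid not in stored]
    # Success only if EVERY referenced instance was received.
    return {"success": not missing, "failed_uids": missing}
```

A single missing instance flips the whole commit to failure, which is exactly why, as noted above, a failed commit means resending the entire study.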

Please like, share and comment. I would love to know what topics are of interest to the imaging people out there!

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

How do you architect your VNA?

 

 

Before you can design your VNA you need to identify what exactly it is that you want to build. Saying the requirement is simply “be the long-term archive for images” is about the same as saying “I want a building to live in” or “build me an office building downtown.” While there are many, many requirements and concepts, I would like to focus on two for now. The first is the concept of location: where are the images, where are the consumers, and where will the images be stored. The second is how to organize data within your VNA.

It seems somewhat anachronistic to be talking about physical location in a virtual world, but location matters because it affects latency, which determines how fast you can move the data. In a perfect world there are gigabit pipes everywhere with no latency, and data moves almost instantaneously. However, in my world there are some pretty slow and saturated networks. So, where are the consumers of imaging data? This is typically the radiologists and referring physicians. These users are likely in relatively close physical proximity to the origin of the images, at least in a metropolitan area; Nighthawk and teleradiology are a different subject altogether. To provide the fastest access to data, there should be some portion of the images near the consumers. This is typically a local cache or short-term storage (STS), often provided by the primary PACS system. However, in many circumstances the VNA is now supplying the EMR image viewer directly, in which case there should be some subset of images in close proximity to the EMR servers. Depending on the configuration, you may need to set up a component of the VNA to act as that short-term storage locally. This is critical in a VNA-first workflow, which is part of the “deconstructed PACS” concept. Also important is the location of the data center, which may be in the building, across town, or across the country.   The size of the cache varies based on workflows and user needs; primary diagnostics may require a cache of up to two years’ worth of data on site, or none at all. When I say two years’ worth, I simply mean enough storage to hold all data that was acquired in the last two years.
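As a back-of-the-envelope example of sizing that cache: "two years' worth" is just annual volume times average study size times two. The numbers you plug in are your own; the figures in the usage comment are illustrative assumptions, not benchmarks.

```python
def cache_size_tb(studies_per_year, avg_study_mb, years=2):
    """Rough short-term storage estimate, in TB (decimal: 1 TB = 1,000,000 MB)."""
    return studies_per_year * avg_study_mb * years / 1_000_000

# Hypothetical site: 100,000 studies/year at an average of 50 MB each
# needs roughly a 10 TB cache to hold two years of acquisitions.
needed = cache_size_tb(100_000, 50)
```

The same arithmetic works for "none at all": set years to whatever the workflow actually requires.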

The second and more interesting idea is how to organize data within your VNA. Each vendor has their own terminology, so I will refer to them as “buckets.” How you put data INTO your VNA greatly affects how you get data OUT of your VNA. As we all know, business runs in cycles, so there will likely be a period of acquisitions and growth followed by a period of divestitures. The smart VNA team will plan for both.  The simplest way to organize the data is in one big bucket; hey, that’s a VNA, right? With all data stored in one bucket, something like the accession number, or more likely the study Unique Identifier (UID), which is by definition and standard unique throughout the world, would be how data is separated. To find data in your bucket you need to query on a number of fields, like patient name, DOB, study type, or, in a perfect world, the UID. This is simple to set up and easy to get data in, but hard to get data out. The other extreme of the spectrum is to create a bucket for everything. Data can be split into endless buckets; I have heard of some facilities that have one bucket per modality, per hospital, per year. Meaning they have Hospital A x-ray 2017, CT 2017, MR 2017, then Hospital B x-ray, CT, etc. This is difficult to get data in, but very easy to find data and get it out.

Why would one separate data? It makes reporting much easier, as many VNAs (and PACS) don’t do the best job of analytics. It is also logical when looking at the facility level. The trick is that when you expand the view up to an enterprise of, say, 20-40 hospitals and related imaging centers, the problem becomes more complex and too many buckets become unsustainable. Having worked with many enterprises through the years, I have settled into the “sweet spot”: basically, I plan for an eventual divestiture. This is typically not done at the modality level but at the facility level. No one has ever asked me to separate out and deliver only one modality, but oftentimes an imaging center is bought or sold, as is a hospital. This level allows for adequate reporting and tracking but also facilitates a smooth transition during a divestiture.

In practical terms, I have found that four walls define a facility, as does a business relationship. Hospital A may have a women’s care center in house, an imaging department, and an off-campus urgent care center (UCC). I would create a VNA bucket for each, as well as for each source system. PACS, CPACS, ophthalmology, surgery, and eventually pathology are unique entities that should have separate data, as they will likely be replaced and upgraded on separate schedules. Therefore, they each get their own bucket. That does, of course, bring up the question of mammography. When there is a separate departmental system for mammography, it is often connected into PACS. If there is a workflow reason to store these images in PACS, then I would not create a separate VNA bucket.  If there is no reason to, then have the mammo system archive to the VNA directly.
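A hypothetical routing rule for this "bucket per facility and source system" approach might look like the sketch below. The bucket naming scheme and study metadata fields are purely illustrative; every VNA has its own configuration mechanism for this.

```python
def bucket_for(facility, source_system):
    """One bucket per (facility, source system) pair, per the text above."""
    return f"{facility}_{source_system}".upper()

def route_study(study):
    """study: dict of metadata extracted from the inbound association
    (in practice derived from calling AE title, institution name, etc.)."""
    return bucket_for(study["facility"], study["source"])

# Hospital A's in-house PACS and its off-campus UCC land in different
# buckets, so a future divestiture of the UCC is a clean cut.
bucket = route_study({"facility": "HospitalA", "source": "PACS"})
```

The payoff, as described below, comes years later: the divested facility's data is already isolated, and the new owner can retrieve it without touching anyone else's studies.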

One last point: there is effort and complexity that goes into breaking up imaging streams into separate buckets. This setup cost must be weighed against the benefit of reporting and a possible long-term divestiture. I can confidently say that when you go through the offloading and divestiture process, you will be very glad you have the data broken out, because it makes the process significantly easier. In that environment the facility has already been sold, and therefore it is difficult to justify resources to the process. You will want to be able to point the new owner to the data and let them retrieve it, having confidence that the data is separated in your VNA such that they can’t see or accidentally get to other data.

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org

 

 

How Big is a Mammography Study??

 

How big is a mammo study?

It depends. I was recently asked what the average size of a mammography study is. I asked for clarification: what do you mean? I received a somewhat strange look and the response, “mammography, you know, breast imaging….”

Bottom line up front: somewhere between 20 MB and 1 GB per exam.

The problem is that there is no easy answer to that question because, well, it depends. For starters, there is, as we all know, a huge difference between tomography and plain-film mammo, so averaging the two would vary greatly depending on the ratio of tomo to mammo. If you look in your VNA they will often share the modality code MG, so how do you tell the difference? Number of images? You could assume that anything over 10 images is tomo. OK, so the next way to tell would be to get into your database and do a query for the SOP Class Breast Tomosynthesis Image Storage (1.2.840.10008.5.1.4.1.1.13.1.3); however, you will find that within a study you will have a few tomo images and plain mammo images stored. Then again, you may be working with a popular vendor who stores the tomo images in a proprietary and much smaller format using the Secondary Capture SOP Class (1.2.840.10008.5.1.4.1.1.7). Easy, right? All you need to do is isolate the 2D exams from the exams containing a 3D image, then find out if you are using the BTO SOP Class or the Secondary Capture SOP, THEN you can average your exams and get the average study size, right? Well, sort of…
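If you were scripting this against metadata pulled from your database, the classification step might look like the sketch below. Both SOP Class UIDs are the real ones quoted above; the caveat in the comment is the same vendor caveat from the text.

```python
# Real SOP Class UIDs quoted in the text:
BTO_SOP = "1.2.840.10008.5.1.4.1.1.13.1.3"  # Breast Tomosynthesis Image Storage
SC_SOP  = "1.2.840.10008.5.1.4.1.1.7"       # Secondary Capture Image Storage

def is_tomo(study_sop_classes):
    """Treat a study as tomo if any image in it uses the BTO SOP class.

    Caveat from the text: vendors that store tomo as proprietary
    Secondary Capture objects will slip past this check entirely.
    """
    return BTO_SOP in set(study_sop_classes)
```

Run against a mixed study (a few BTO images plus plain MG images), this correctly flags it as tomo, but the secondary-capture vendors are exactly why SOP class alone doesn't settle the question.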

NOW we need to determine whether they are compressed or not. To figure that out you need to look at the transfer syntax. Many modalities will default to Implicit VR Little Endian, which is transfer syntax 1.2.840.10008.1.2; this is uncompressed. Many PACS will take the syntax that the modality sent and refuse to change it for fear of impacting image quality. Therefore, the study is stored on disk and in the long-term archive or VNA in the same format, unless you get into inbound and outbound compression, which is a whole different topic. There are of course many different transfer syntaxes with varying compression, but we will take the other common one, JPEG 2000 Lossless (1.2.840.10008.1.2.4.90). Either can be applied to any of the SOP classes described above.

So, the question stands: what type of mammo do you mean? Standard format or proprietary (but very common), and compressed or uncompressed? How you ask the question will skew the answer dramatically.  Given the trend in the market, tomo is growing, so the average in Dec 2017 is very different from the average in Dec 2016.

If you have read all the way through this: the breast tomo format, lossless compressed, averaged out to 711 MB, while the secondary capture format, also lossless compressed, averaged 194 MB.
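For illustration, the averaging itself is trivial once the studies are grouped by storage format. The per-study sizes below are made-up sample values chosen only to land near the averages quoted above; they are not real measurements.

```python
from statistics import mean

# Hypothetical per-study sizes in MB, already grouped by format.
# Grouping (BTO vs. secondary capture, compressed vs. not) is the hard
# part, as the rest of this post argues; the arithmetic is the easy part.
sizes_mb = {
    "BTO_lossless": [650, 720, 763],  # illustrative, averages to ~711 MB
    "SC_lossless":  [180, 194, 208],  # illustrative, averages to ~194 MB
}

averages = {fmt: mean(values) for fmt, values in sizes_mb.items()}
```

The point of the sketch: "average mammo study size" is only meaningful per group, never as one blended number.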

Kyle Henson

Please let me know what topics you would like to discuss

Kyle@kylehenson.org