Is the VNA Dead?

While at RSNA this November, I heard more than one person comment that VNAs are dead. Every time I heard these statements uttered with such certainty, I couldn’t help but think of Mark Twain’s amusing quip: “the reports of my death are greatly exaggerated.”

It won’t come as a surprise to those who have read my previous articles that I remain in favor of the VNA. Not because I am wedded to the idea philosophically, but because the reasons that I, and many others, have implemented VNAs over the last ten years still exist. VNAs are kind of like the tires on your car: they’re underneath it all, helping everything run smoothly. And no, a VNA isn’t sexy, it’s not super fun, and it doesn’t come with AI and blockchain promising to make you coffee while printing money. However, a VNA is still a foundational part of an enterprise imaging strategy.

One alternative being thrown around is an “enterprise PACS”. And, while it might seem the ensuing debate would be enterprise imaging vs enterprise PACS, the reality is that this is just a rebranding of the old ‘single-vendor vs best-of-breed’ debate. There are, of course, advantages to each. Two of the most common reasons cited for adopting a single-vendor strategy are reduced integrations and reduced interoperability challenges.

There are several companies in the market today offering this single-vendor experience. They are buying or building many of the components of enterprise imaging: an archive, a physician worklist, a viewer; and packaging it all as a one-stop shop. As the argument (sales pitch) goes, if they have all the things you need, why would you shop anywhere else? Well, the answer to that question depends on what your organization’s priorities are.

I tend to lean toward the ‘best-of-breed’ side for a few reasons. In my experience, hospital systems tend to run through cycles of buying and selling hospitals, imaging centers, and/or urgent care centers. Each of these acquisitions and divestitures (A&D) comes with imaging systems attached. So, thanks to A&D, there are always migrations and integrations. This makes the goal of ‘one PACS to rule them all’ difficult to execute and maintain in the real world.

Secondly, while hospitals may desire to move toward centralization, each radiology group, which is very often an outsourced service provider, is looking to customize its operations and achieve efficiency. That is, rad groups want to use their internal PACS and not have to read from a different PACS for each hospital. Beyond rad groups, there are various specialists who want to integrate imaging into their operations. Patient portals are now looking to integrate imaging. Even AI itself is often integrated with a particular viewer. Each of these instances invariably involves a specialty viewer.

Quite clearly, the need to integrate with multiple specialty viewers and support a wide variety of workflows is not abating; quite the opposite, this requirement is growing. This is where the VNA, as the foundational layer of an enterprise imaging solution, starts to show its true value. A best-of-breed VNA really shines when it is supporting migrations into and out of the organization, and when it serves as the routing and integration tier that consolidates multiple image sources, providing images to specialty viewers and radiologists as needed. Each user expects a longitudinal patient record regardless of the source of the data, which is exactly what quality VNAs are good at.

There are, of course, organizations that are successful with a single-vendor strategy. Usually this is an environment with little turnover, relatively homogeneous physician needs and a strong relationship with the vendor. For other organizations with a high degree of A&D and a diverse physician population with unique requirements, I still find that the VNA creates the strongest foundation on which a complex imaging ecosystem can, and should, be built.

Building My Own Better Way

One of the more frustrating things about running my VNA was taking help desk calls. Not because I had to be on call evenings and weekends (let’s be honest, no one loves that part), and not because an end-user might need a bit of education. No, the frustration occurred whenever one of my systems was down and I hadn’t known about it. Even worse, when those calls came in it often meant that a doctor or clinician was unable to use the system to provide their patients with the highest possible level of care. Which is really why we’re all here, isn’t it: to support optimized patient care?

So, when a ticket did come in, the first thing I would try to find out was the scope of the incident: did it involve a single study, was the entire system down, was it isolated to “only” one hospital, or did we have a full network of facilities down? Assuming I could contact the end user, they were often involved in direct patient care and unable to stay on the phone with me or the team as we went through the information-gathering process. Thus, the troubleshooting procedures would begin, starting with the EMR viewer and image availability, then on to the EMR to see if reports were populating and image links active. Next, the VNA: is it up, are images populating, are they recent or significantly behind? All of this, of course, is just rapid triage, primarily looking for system-wide outages.

I always took it personally when one of my systems (VNA, enterprise viewer or image sharing) was down. It is true that we had active monitoring, as do many facilities; however, that “active monitoring” inevitably turned into a flood of email notifications throughout the day. Many of the emails offered fantastic information about the system: RAM utilization, free drive space, the number of jobs in queue on server 23, etc. But the incessant ping of noncritical emails eventually becomes deafening, and I must admit I, like most of my peers, tuned them out. One issue with most of the systems I was responsible for is that services almost always stay running and the logs rarely fill up the disk drive. This means that many of the purely “IT triggers” that were monitored didn’t turn red, even when a system wasn’t functioning.

The problem is that the monitoring tools we have are not designed to conduct functional testing, nor are they designed around integration testing. Each system is designed to fulfill a role or set of functions. The VNA gets studies sent to it but does not know what should be there. An interface engine sends a message and receives an acknowledgement but does not know if it was processed. The most insidious of all is the “DICOM error: peer aborted association” (or some variant of that language), in which vendor A enthusiastically determines that vendor B is the problem. Anyone who has ever been on the phone with two vendors while troubleshooting knows well how this conversation goes. The net result of all of this is that it can take the team hours to determine which system has the problem; only then can the concrete problem-solving begin.
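To make the distinction concrete, here is a minimal sketch (in Python, with made-up names and thresholds) of what a functional check looks like, as opposed to a CPU-or-disk check: instead of asking “is the service running?”, it asks “has a study actually arrived recently?”

```python
from datetime import datetime, timedelta

def feed_is_healthy(last_study_received, expected_interval_minutes, now=None):
    """A functional check, not an infrastructure check: flag a feed as
    unhealthy when no study has arrived within the expected interval,
    regardless of whether the service process is still 'running'."""
    now = now or datetime.now()
    lag = now - last_study_received
    return lag <= timedelta(minutes=expected_interval_minutes)

# Illustrative numbers: a busy CT feed should see a study at least every 30 min.
now = datetime(2023, 11, 1, 12, 0)
healthy = feed_is_healthy(datetime(2023, 11, 1, 11, 45), 30, now=now)  # 15 min ago
stale = feed_is_healthy(datetime(2023, 11, 1, 9, 0), 30, now=now)      # 3 hours ago
```

A real deployment would feed `last_study_received` from the archive’s database or an HL7/DICOM listener, but the point stands: the trigger is the absence of expected clinical traffic, not a red light on a server metric.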

These are only some of the many real-world challenges that made me stop and say, “there must be a better way!” After a recent career change I was able to put on my thinking cap and build that better way, by the name of VNA Heartbeat. It is specifically designed to monitor every link in the imaging chain and immediately notify the support team when system A (or B, or C) is down. We even let you know when an image doesn’t go from vendor A to vendor B, or if a system is operating so slowly that it may as well be down. Instead of adding to the slew of unread emails, VNA Heartbeat notifies your team by the one thing that we’re all plugged into 24x7: text messaging. When your PACS or VNA archive has “fallen and it can’t get up,” we make sure you’re alerted within minutes so that your team can be proactive instead of reactive. I’ll be at RSNA, so let’s schedule an appointment to see if VNA Heartbeat is a good fit for your systems; otherwise, feel free to message or call me to schedule a quick demo!

Fall Cleaning – Imaging Style (Part 2)

Welcome back to the Fall Cleaning series! Hope you found some helpful takeaways in part 1. As you might remember we explored several admin-level ideas for keeping your “imaging house” in top running order. If you missed Fall Cleaning – Imaging Style (Part 1) be sure to give it a read and make your 4th Quarter your most productive yet!

Today I’m going to share some of the more hands-on, systems-level tasks that I recommend tackling while you’re reaping the benefit of the Thanksgiving, RSNA and Christmas lull. So, without further ado, let’s dive into some systems work.

Disaster Recovery (DR) – When was the last time you did a DR failover … On purpose? With lower volume and some reduction of project-work this is the perfect time to schedule a failover and validate your procedures. Note: I don’t suggest you do this ON a holiday week, but before or after. Likely, there will be a few gremlins that need to be addressed and you will want to have some support staff around to help you sort these out.  This is also the perfect time to update your process documentation.

Configuration audit – If it ain’t broke, don’t fix it, right? Well… it could be “broke,” but you just don’t know it yet. As systems become more and more complex, you may have things misrouted or misconfigured, but the redundancy in your system covers it up. This is an ideal time to dedicate some resources to giving system configurations a once-over to make sure all is as it should be. I would check routing, pre-fetch rules, storage locations, any prefixing or data manipulation (tag management), HL7 interfaces, compression, archive queues, unarchive counts, etc. As an example, a fast network may mask the fact that prefetching was turned off years ago; no biggie until you add in 5 tomo units and everything changes!

Storage, storage, storage – We all know that image sizes get bigger each year (tomo…), but what about the system storage? Do you have cache storage where you need it? Are studies storing in the right location? How fast are you adding data versus how much you have, i.e., what is your run rate? If you are storing 5 TB per year but only have 2 TB of free storage, you will want to look at this because, in addition to the new data, priors will be pulled back locally. Run your checks and make sure you have budgeted for storage both long term and short term!
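The run-rate math is simple enough to sketch; the function and numbers below are illustrative only:

```python
def months_until_full(free_tb, annual_growth_tb):
    """Rough run-rate math: months until current free space is consumed.
    Ignores priors pulled back locally, which only shortens the runway."""
    if annual_growth_tb <= 0:
        raise ValueError("annual growth must be positive")
    return free_tb / annual_growth_tb * 12

# Storing 5 TB/year with only 2 TB free leaves under five months of headroom.
months = months_until_full(free_tb=2, annual_growth_tb=5)
```

Plugging in your own growth rate and free capacity turns a vague worry into a budget line item with a date attached.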

Monitor backlight hours – If you’re not running an enterprise automated solution that preemptively alerts you, it’s likely that you have little insight into the backlight hours of your diagnostic monitors. They do wear out! How your screensaver is set up makes an enormous difference; many times the backlight runs all night, which reduces the monitor’s useful life because the monitor is still “on” when no one is reading. Take the time to run a good preventative check on your diagnostic monitors and order replacements if you need to.

QC issues – Data quality is a huge issue; in my experience I’ve found about 2% of studies didn’t match an order. Whether you are looking at PACS or VNA, there will be data sets that do not match. I have run into sites with thousands, and even hundreds of thousands, of studies flagged with QA issues that were left unresolved. You want the data to match as a prior and there is no benefit to leaving these issues to sit indefinitely. So, while it’s not necessarily the most fun of tasks, the 4th quarter tends to be a good time to dedicate 1-2 hours per day (possibly per person depending on your volume) to weed through and fix these data quality issues. 

Cable management – Most of us have been in a new server room: you start with good clean racks and tidy cable management; then, as servers get moved in and out, the whole space starts to resemble a consortium of octopi. Eventually, you have network cables tied to power cables tied to KVM cables strung between racks. Yes, it happens, but why not schedule a time to fail over to the DR system and clean up those racks? Anyone who has ever had to do any work in your server room will thank you immensely!

Now that you’re armed with even more tips to a tidy and productive 4th quarter it’s time to get to work! As always, I greatly welcome your comments, thoughts and ideas so reach out on the blog, email or LinkedIn. And, if you’re going to be at RSNA this year and would like to connect in person let me know!

Enterprise Imaging Teams: Who, What, When and How?

I’m more and more frequently being asked ‘what does an Enterprise Imaging team look like?’  Given that the term is relatively new, and that there will be variances based on the individual organization’s needs, there are no hard-and-fast rules for creating an Enterprise Imaging (EI) team.  However, there are many components to keep in mind, and in this article I will share some key considerations for EI team creation.  I believe that your EI team is a separate entity from the traditional PACS/CPACS team.  Enterprise imaging teams have a very different tempo and focus when compared to the more organizationally static PACS team.  Another foundational aspect of EI team creation is to understand that it is ultimately an operational team, not a project team.  There are certainly some project elements, but the majority of effort is spent maintaining operations rather than running discrete projects.  The size of the EI team will vary depending on the size and complexity of the organization, but irrespective of size the ideal EI department will include the following skillsets.

Clinical workflow:  As the EI team works directly with clinical departments it is critical that there is an understanding of how things operate in the ED, Radiology, Cardiology and imaging centers.  Having a deep understanding of workflow will help to keep the EI team patient-focused and able to respond appropriately to the clinician’s needs.

Application administration:  The applications administrators keep the wheels turning and green lights blinking.  This is the hands-on monitoring of VNA queues, image viewer servers, HL7 interfaces etc.  Each system will have some amount of daily work that has to be done to keep them in good health.

Infrastructure: EI systems use a tremendous amount of infrastructure, both in terms of storage and network bandwidth.  If not managed correctly this can impact other departments, to the detriment of patient care.  The EI team should have a thorough understanding of the infrastructure technology at its most fundamental level, and be able to translate application system requirements back to the infrastructure team.

Project Management:  Even though the team is operational, there are always projects that impact the systems.  PM experience on the team allows the EI group to effectively schedule upgrade and deployment work while interfacing with other project teams.

Training:  As systems are rolled out within the EI department the team will ideally have skilled trainers that can go onsite to instruct local PACS teams and/or physicians in the use of clinic-facing applications.  Not having to outsource all training to the vendor saves the organization substantial money, and allows for the self-service of deployments.

Troubleshooting: The EI team should be consistently working toward the ability to operate as tier 1 and tier 2 support for the applications they support.  This goes beyond the basics of navigating the interface and should include a deep understanding of the back end of each application.  Vendors rarely support your system with the same passion and timeline as your internal resources will!

Data migration: A VNA is only as valuable as the data in it, which means that for the vast majority of systems connected to the VNA there is a data migration to be run.  The team needs to be expert in the mechanics of data migrations, particularly in the reconciliation and verification that occurs at the end of a migration.  In general, the EI team should be running data migrations, as most organizations will realize significant cost savings keeping this task in-house.

PACS Administration: Configuring and supporting EI systems requires a thorough understanding of the systems that they interface with, namely PACS and CPACS.  Many of the problems that EI teams face result from eccentricities of the source systems.  Knowing how each source system is deployed and how they operate is critical to configuring and troubleshooting.

Department head: Ultimately, the team leader should be at an appropriate level within the organization to garner the resources needed to operate this critical set of applications.  While all the typical skills are necessary, such as managing direct reports, running meetings, budgets, etc., they must also be comfortable operating as Chief Evangelization Officer of the EI team, spreading the word of EI to the rest of the organization as more departments are included.

Depending on the size and scope of the organization, EI team members may be asked to wear multiple hats.  Delegation of duties notwithstanding, to create an efficient Enterprise Imaging team each one of the skill-sets discussed should be represented.  To go one step further and create an EI team of excellence, start with your foundation and hire the best of the best!  The more effectively your EI team does its job, the more effectively physicians can do theirs.  Great EI team = great patient care!


Back to Basics

Five years into my VNA experience, the one thing I can say is that the KISS principle applies.  The only way to easily maintain your system, after it is up and running, is if the build is very orderly from the beginning.  To that end, it cannot be stressed enough that sticking to the basics of system design and deployment is critical.  One way this is achieved is by utilizing well-thought-out naming conventions and time-proven, standardized processes.

Before the first system is brought online the team should focus (obsessively so) on the naming conventions.  These naming conventions need to be easily identifiable and baked into every nook and cranny of your system, from AE titles all the way to network shares.  Typical things that one should consider including in naming conventions are the site name, the type of system and the function.  Bearing in mind that AE titles are limited to 16 characters, it is important that the names be easily and consistently identifiable.  During troubleshooting, the VNA team will be looking for these names in the logs, so ease of use is paramount.  An example might be HOSA_FS_R_AE, standing for hospital A, Fuji Synapse, Radiology, AE title.  The last two characters could be changed to DB for database, ST for storage, or any other internal component of each particular system.  Once the naming convention is set it needs to be passionately enforced, as there is often enticement to loosen the standard to fit this site or that site.  Although there might be a circumstance that truly requires it, in 99% of situations you should not give in to this temptation!  Consistency in troubleshooting and the training of new VNA team members are only two of the many reasons why uniformity is key when making naming-convention decisions.
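As an illustration of how a convention like this can be enforced rather than merely documented, here is a hypothetical helper (names and parts are my own, not from any product) that composes the pieces and rejects anything over the 16-character DICOM AE title limit:

```python
def make_ae_title(site, system, dept, component):
    """Compose an AE title like HOSA_FS_R_AE from the convention's parts
    and enforce the 16-character DICOM limit at build time."""
    title = "_".join([site, system, dept, component]).upper()
    if len(title) > 16:
        raise ValueError(f"AE title {title!r} exceeds 16 characters")
    return title

# Hospital A, Fuji Synapse, Radiology -- AE title and database component.
title = make_ae_title("HOSA", "FS", "R", "AE")
db_title = make_ae_title("HOSA", "FS", "R", "DB")
```

Generating names from the convention, instead of typing them by hand at each site, is one simple way to make “site 100 looks like site 1” the path of least resistance.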

Similar to the idea of consistency in naming conventions is the creation of standard operating procedures.  The team needs to create a checklist for regularly occurring actions, such as adding systems to the VNA, testing new VNA releases, starting and finishing data migrations, and verifying during PACS upgrades.  Each of these line-item tasks needs to be simple, straightforward, and DOCUMENTED.  There are many tools available in the VNA, and there may be a temptation to get fancy with tag mapping/morphing, sophisticated routing rules, or any one of the myriad features and functionalities that are available.  While these features are useful, remember that customization is the antithesis of supportability.  The team should determine what features and functions to use, and site 100 should be configured the same way as site 1.  This is not to say that there are no circumstances under which customizations should be used; however, they should be build-necessary customizations, not made simply because one wants to play with all the cool tools in the toolbox.

There are myriad things to consider when planning and architecting a VNA, of which naming conventions and SOPs are but two.  And, while this back-to-basics approach is neither new nor specific to VNA design, it is an intelligent and proven method to use when laying the foundation upon which to build your Enterprise Imaging system.

What ever happened to “deconstructed PACS”? More to the point, what would it take to achieve?

Several years ago, “deconstructed PACS” was the hot topic, also called PACS as a service, PACS 2.0, PACS 3.0, de-coupled PACS, etc.  During that time many organizations made purchases based on the idea, often a Vendor Neutral Archive (VNA) and a Zero Footprint Viewer (ZFV) for web and EMR viewing of radiology images.  Dictation systems have remained constant and have, for the most part, always been separate from the PACS.  There are several global worklist companies in the market that have been around for a long time.  For the most part these worklist systems have been focused on the needs and requirements of teleradiology and large radiology groups.  That said, they do a very good job of consolidating multiple sources of data, usually many hospitals and EMRs, into a single worklist and assignment engine to get the right radiologist to read the study at the right time.  With all of this technology, why have we not seen the panacea of a deconstructed PACS come to fruition?

The answer, surprisingly, is that as an industry we have focused on many of the hard problems and forgotten that we still need a system to do what the PACS of yore did: provide a departmental workflow system for the technologists.  There are several functions that need to occur before image data can be submitted into the machinery of the enterprise imaging systems.  These are, while seemingly mundane, extremely important: DICOM Modality Worklist, document scanning, quality control (QC) and image management, and order validation; all tasks that are completed by the traditional PACS.  We will explore each of these in turn.

DICOM Modality Worklist (DMWL) is essential, as it ensures that images match an order and all demographics are correct.  In my belief, a manual process is a broken process, and typing in a 16-character accession number correctly every time is a recipe for disaster.  In short, something must provide DMWL, and it should be used in all cases.

Document scanning is an area I struggle with, as in this day and age we should be able to remove all paper from the process.  I firmly believe that the only documents to be scanned, or electronically completed, are documents that are required to be viewed by the radiologist and will alter the diagnosis.  These MUST be stored as a series with the images.  I submit that there is probably a better place for the data (the EMR), but that is another topic.

QC and image manipulation is simply verifying that the images are good and possibly adjusting the window/level.  There is sometimes a need for splitting, merging, deleting and moving images, but this has declined over the years, and most of it is done at the modality.  The main point of this step is that it is the last opportunity for a technologist to review the study.  As soon as the image is released it becomes “ready to read” and will be picked up by a radiologist somewhere in the ether.  Given that in the new model it is unlikely that the tech can walk down the hallway and speak to the radiologist, this step gains significance.  Like a bad picture on the internet, once it is out, it can’t be taken back.

The final task is order validation.  With consistent use of DMWL this step should not be necessary, but in the real world things happen.  In a distributed system where the worklist is driving reads from HL7, any study that does not match an order will not be read.  It can’t be, as the dictation system needs to respond to an order with the report, and the viewer can’t launch it because the accession number does not match an order.  This last verification step is a gatekeeper to ensure that every study has an order; as my old coach said, “no pass, no play.”
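The gatekeeper logic itself is trivial; the hard part is the discipline of running it on every study.  A hypothetical sketch, with invented field names:

```python
def unmatched_studies(studies, orders):
    """Gatekeeper check: return studies whose accession number has no
    matching order -- in a worklist-driven system these cannot be read,
    dictated against, or launched from the EMR until reconciled."""
    order_accessions = {o["accession"] for o in orders}
    return [s for s in studies if s["accession"] not in order_accessions]

orders = [{"accession": "A1001"}, {"accession": "A1002"}]
studies = [
    {"accession": "A1001", "desc": "CT CHEST"},
    {"accession": "A9999", "desc": "XR HAND"},  # hand-typed, no matching order
]
stuck = unmatched_studies(studies, orders)
```

In practice the two lists would come from the HL7 order feed and the archive’s study table, but the shape of the check is the same: no pass, no play.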

Again, PACS does all of these tasks, so what happens in a deconstructed model?  These tasks could be performed by the incumbent PACS; however, most of the old guard has an inflated pricing structure and huge service maintenance contracts.  They tend not to be interested in this market (or even the imaging center market).  Most of the smaller PACS companies are fixated on the diagnostic reading license and will not unbundle the software at a price point that services this need.  We need a vendor or vendors to supply a lightweight, low-cost departmental workflow system to feed into the enterprise imaging stack.  The only thing holding us back from deconstructing PACS is performing the simplest tasks that PACS does, at the right price.  My challenge to the vendors out there is: who wants to fill this need and gain a footprint in almost every large organization?  Bear in mind that, as a component of enterprise imaging, this could be a foot in the door to then service one of the other components.  I would think that would be an attractive proposition, but the market has not responded.

Is a Zero Footprint Viewer for a Radiologist’s Read the Right Tool for the Job?

A zero-footprint viewer is an image viewer that runs completely in a web browser and does not require anything to be installed on the computer running it.  This is in direct contrast to “thick” or “thin” client web viewers, or dedicated viewing workstations.  There are, of course, benefits to a zero-footprint viewer.  Firstly, as there is nothing to install, it does not require the user to have admin-level privileges.  Also, these viewers are presumed to be browser and OS agnostic, meaning they should run on PC or Mac; Chrome, IE, Safari, and the like.  Zero-footprint viewers are now capable of displaying full DICOM image sets, as well as various levels of compression from lossless to lossy.  As vendors continue to add more and more tools to these viewers, they can satisfy the needs of a much larger group of physicians, encompassing a variety of specialties and image sets, including cardiology and visible light.  The ability for physicians to view images anywhere, including on a mobile device, is a huge step forward for the industry and can potentially provide dramatic improvement in patient care, as images are no longer “locked” at a specific location.

Where I diverge from trending industry-think is in the idea that every radiologist’s viewer should be zero-footprint.  Radiology standards for viewing and interpreting images are necessarily high.  First and foremost, when following industry standards, a radiologist requires DICOM-calibrated monitors.  For “diagnostic” monitors there should be a QA program in place to validate the calibration of the monitor and its ability to display the full depth of data in the radiology image.  In addition to the need for diagnostic-quality monitors, a radiologist typically dictates into a voice recognition system to generate a report.  There are indeed cloud-based dictation solutions and, while it is possible for a radiologist to type a report directly into the EMR, these are not the norm for primary interpretation.  The industry-dominant voice recognition system, used by the majority of radiologists, requires its application to be installed on a workstation running Windows.  It is very difficult to have this software running on one PC while pointing to two different versions or implementations; it is, in effect, one dictation client per PC.  Combined, these two factors generally limit radiologists, in my experience, to an average of 2-3 physical locations in which they dictate.  This is important, as here is where we begin to see the technology trade-offs of a zero-footprint viewer.  The first is that a web browser, such as IE or Safari, does not have a reliable way of determining how many monitors are attached to the workstation (a typical reading configuration has 3 monitors), nor how best to utilize that real estate.  The second is that speed is paramount.  When comparing viewers, if there is a compromise to be made between features and responsiveness, most radiologists that I’ve worked with will, within reason, choose speed.
The zero-footprint viewers tend to do well on a good network, but over a high-latency, low-bandwidth network it is very difficult to provide lightning-fast response times.  In this type of instance, a client-based viewer can download full data sets in the background and pre-cache data for viewing, i.e., it is loading the next case.  Additionally, while the zero-footprint viewer will probably beat the client in time-to-first-image, the client-based viewer will often win during significant image manipulation.  Overall, the client is more resilient to the inherent variability of a network connection.  So, given that a radiologist is, more often than not, reading from a pre-defined set of locations, which require specific physical hardware in terms of monitors and a dedicated dictation application, is there a superior advantage to the radiologist in having a zero-footprint viewer?  I submit that, currently, there is not.

I believe that, as an industry, we need BOTH zero-footprint viewers of diagnostic quality and client-based viewers.  Currently, clients provide a rich feature set, faster manipulation and integration in a reading environment, while zero-footprint viewers provide flexibility of delivery and fast review of compressed data sets in any browser, anytime, anywhere.  These are different tools for different needs.  You wouldn’t try to use a screwdriver to drive a nail, or vice versa.


Ultimately, the best solution will blend the advantages of both systems, depending on the needs of the user at a particular time and place.

Buying and Selling Big Data, A Practical Solution for AI and Clinical Research

Every now and then someone asks me about, or I read an article about, someone selling massive amounts of data to one of the big companies out there.  When you have a lot of data the obvious thought is: I want some of that free money!  As a thought exercise, let’s look at some of the realities in moving more than a petabyte of image data.  A petabyte is 1,024 terabytes, or 1,048,576 gigabytes.  Many, dare I say most, VNAs store data in a near-DICOM format; that is, close, but often not a straight .dcm file.  This means that to get data out you can’t simply copy the file but have to do a DICOM transaction.  There are some that do store straight DCM but, even so, there is still the issue of de-identification, so a DICOM store is not the end of the world.

In my experience a single server tops out at somewhere around 15,000 studies per day, or ~500 GB.  So, doing the simple math, 10 servers dedicated to nothing but copying this data, ignoring any penalty for de-identification or additional compression, will move 1 PB in 209 days.  I submit that this is not practical and that there is a better way.
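The back-of-the-envelope math, for anyone who wants to plug in their own numbers (the throughput figures here are my estimates, not benchmarks):

```python
def migration_days(total_gb, servers, gb_per_server_per_day):
    """Days to move a data set at a given aggregate per-day throughput."""
    return total_gb / (servers * gb_per_server_per_day)

PETABYTE_GB = 1024 * 1024  # 1 PB = 1,048,576 GB

# 10 servers, each pushing ~500 GB/day of DICOM transactions.
days = migration_days(PETABYTE_GB, servers=10, gb_per_server_per_day=500)
```

At 5 TB per day aggregate, the petabyte takes roughly 210 days of continuous copying, before any de-identification or compression penalty.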

First, we are looking at the problem from the wrong end.  Whether for clinical research or training an AI engine, it is likely that the buyer doesn’t want ALL the data; they are looking for very specific use cases.  In particular, what diagnosis are they trying to research or train for?  Instead of dumping billions of images on the buyer and letting them figure it out, perhaps a targeted approach is better.  This begins at the report, not the images.  As I would want to have a long-term relationship and sell my data multiple times, I propose that instead of answering a single question, like “send me all chest x-rays with lung cancer,” we prepare a system that can answer any question.

So, to do this we would build a database that holds all reports (not images) for the enterprise.  Start by pulling an extract from the EMR for all existing reports, and then add an HL7 or FHIR connection to capture all new reports.  With the reports parsed into the database, any future questions or requirements can be answered.  The output of a query would be accession number, patient ID, date of service, and procedure description.  Obviously, there SHOULD be a 1-1 relationship between the accession number on the report and the images in the VNA, but the other fields will help if Murphy strikes, which he often does.
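As a sketch of what that report index might look like, here is a minimal version using SQLite; the schema, field names, and sample reports are hypothetical, not taken from any particular EMR or VNA:

```python
import sqlite3

# Toy report index: all reports (not images) for the enterprise,
# queryable by report text and procedure.  Schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reports (
        accession_number TEXT,
        patient_id       TEXT,
        date_of_service  TEXT,
        procedure_desc   TEXT,
        report_text      TEXT
    )""")
conn.executemany(
    "INSERT INTO reports VALUES (?, ?, ?, ?, ?)",
    [("ACC001", "MRN123", "2019-03-01", "XR CHEST 2 VIEWS",
      "Findings suspicious for lung cancer."),
     ("ACC002", "MRN456", "2019-03-02", "CT HEAD W/O CONTRAST",
      "No acute intracranial abnormality.")])

# Answer a buyer's question: the output is exactly the four fields
# needed to drive a targeted export from the VNA.
rows = conn.execute("""
    SELECT accession_number, patient_id, date_of_service, procedure_desc
    FROM reports
    WHERE report_text LIKE '%lung cancer%'
      AND procedure_desc LIKE 'XR CHEST%'""").fetchall()
print(rows)  # [('ACC001', 'MRN123', '2019-03-01', 'XR CHEST 2 VIEWS')]
```

The point is not the technology choice but the shape of the answer: a short list of accessions, not a petabyte of pixels.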

Armed with this export, a savvy VNA team can do a targeted export of the specific data that is needed.  Instead of taking a dump truck and leaving all of the data in the parking lot, one can deliver a very specific set of data and set up a relationship that can be very beneficial to both sides moving forward.  Using this method, one could even prepare a sample set of, say, 1,000 exams for the buyer, against which the queries can be revised and updated to produce a better and better targeted data set.

Now, instead of providing all chest x-rays with lung cancer, we can provide Hispanic non-smoking males between the ages of 15 and 30 with a lung cancer diagnosis.  I am not a researcher, but I suspect that this type of targeted approach would be more beneficial to them as well as much easier to service from the VNA; in effect, a win-win.
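To make the refinement concrete, here is a toy illustration of that narrower cohort query; every field name and record below is invented for illustration:

```python
# Hypothetical cohort refinement: filter candidate exams down to the
# buyer's specific request.  All fields and records are made up.

cohort_request = {
    "diagnosis": "lung cancer",
    "sex": "M",
    "ethnicity": "Hispanic",
    "smoker": False,
    "age_range": (15, 30),
}

candidates = [
    {"accession": "ACC010", "sex": "M", "ethnicity": "Hispanic",
     "smoker": False, "age": 24, "diagnosis": "lung cancer"},
    {"accession": "ACC011", "sex": "M", "ethnicity": "Hispanic",
     "smoker": True, "age": 28, "diagnosis": "lung cancer"},
    {"accession": "ACC012", "sex": "F", "ethnicity": "Hispanic",
     "smoker": False, "age": 22, "diagnosis": "lung cancer"},
]

def matches(rec, req):
    """True when a record satisfies every criterion in the request."""
    lo, hi = req["age_range"]
    return (rec["diagnosis"] == req["diagnosis"]
            and rec["sex"] == req["sex"]
            and rec["ethnicity"] == req["ethnicity"]
            and rec["smoker"] == req["smoker"]
            and lo <= rec["age"] <= hi)

targeted = [r["accession"] for r in candidates if matches(r, cohort_request)]
print(targeted)  # ['ACC010']
```

Only the exams matching every criterion go in the export queue, which is a far smaller and far more valuable delivery than the dump truck.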

Searching for commitment between PACS and VNA

Many moons ago, when most PACS were designed, the archive was local.  It is, after all, the A in PACS.  Now that the industry is moving inexorably to a deconstructed model, or PACS as a service, the archive is rarely on the same LAN as the PACS.  Not only is it not on the same LAN, but the fact that it is a separate application means that different rules may apply.  For example, some systems accept DICOM studies with alpha characters in the study UID; others will allow series or images to be stored in two different studies with the same SOP instance UID.  These variations in the interpretation or enforcement of the DICOM standard lead to problems when storing to the VNA.  There are times when a DICOM store transaction is successful, but the study is not accepted into the VNA.  There can also be a delay between the time a study is received by the VNA and when it is actually stored to disk, as many VNAs have some sort of inbound cache or holding pen while processing data.  This discrepancy can create a situation where the PACS believes a study to be stored when it is not, which is of course heresy for an archive.

It turns out that there is an old-school, little-used solution for this very problem: the arcane process called DICOM Storage Commit, and I highly recommend that every VNA owner enable it for all sources that support it.  During the DICOM store transaction, each image should be acknowledged as received, and in theory any images that are not acknowledged would be resent by the PACS or other source system.  In practice, there are a number of places where this does not occur.  Storage commit is a separate transaction that occurs after the DICOM store: the sending system generates a new transaction in which it lists every image that was sent, and the response includes a list of every image with a success or failure.  If any image is listed as a failure, the source system can resend the image or the entire study; most tend to resend the entire study.
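The reconciliation logic at the heart of the transaction can be sketched in a few lines.  This is a toy model of the request/response bookkeeping, not a wire-level implementation; a real one would use a DICOM toolkit, and the UIDs here are made up:

```python
# Sketch of the reconciliation step in DICOM Storage Commitment.
# The source lists what it believes it stored; the archive answers
# with per-image success or failure; failures get resent.

def commit_request(sent_sop_uids):
    """The source lists every SOP Instance UID it believes it stored."""
    return {"ReferencedSOPInstanceUIDs": list(sent_sop_uids)}

def commit_response(request, committed_on_disk):
    """The VNA reports, per image, whether it is truly on disk."""
    uids = request["ReferencedSOPInstanceUIDs"]
    return {
        "success": [u for u in uids if u in committed_on_disk],
        "failure": [u for u in uids if u not in committed_on_disk],
    }

sent = {"1.2.3.1", "1.2.3.2", "1.2.3.3"}
on_disk = {"1.2.3.1", "1.2.3.3"}   # one image never made it past the cache

resp = commit_response(commit_request(sent), on_disk)
to_resend = resp["failure"]        # the source resends these (or the study)
print(to_resend)                   # ['1.2.3.2']
```

Without that second transaction, the image stuck in the inbound cache is invisible to the source, which is exactly the gap storage commit closes.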

One problem with using storage commit is that many vendors have ignored this transaction for quite some time; the result is that it is often less than optimally designed or configured.  Some systems have default timeouts, some batch up storage commit messages, and others will not archive anything else until the commit is received.  Even with these limitations, it is worth it.  The fundamental problem is that once a source believes a study has been archived, that study is available to be deleted or flushed from the cache.  If for some reason it did not successfully archive, there will be data loss.

Which comes first, the PACS or the VNA?


This is a question that several years ago was philosophical and interesting, but not terribly relevant.  Today, as the landscape changes, the answer for your organization is vital to your overall success.  Like all good questions, the answer is… it depends!

First, what do we mean by PACS first or VNA first?  It simply means: in your environment, after images are acquired, are they stored to a PACS, presumably for interpretation, and then archived to the VNA for storage, or are they sent to the VNA first and then routed elsewhere?  As one might expect, there are pros and cons to each strategy, and the determination really relates to how each is used.  I hesitate to use the term workflow because it, like “train the trainer,” is one of the most overused terms in the industry.

A PACS-first orientation is the more classical approach to a VNA.  The study is acquired on the modality and typically reviewed by a technologist at a PACS workstation, where demographics are verified.  There may be some study manipulation such as window leveling, deleting of images, and general image QA.  Oftentimes additional information is added in the form of scanned documents, which can be anything from the insurance card to technologist notes and worksheets.  Finally, the exam is marked as ready to be viewed by the physician or radiologist.  When the study is interpreted and a report created, the study is marked complete or reported.  At some point in this flow the study is put into the archive queue and sent on to the VNA.  In this flow the VNA is acting primarily as the archive, and in some cases is called the deep archive or cold archive.  If the study is ever needed again as a prior and it is not in local storage, the PACS will retrieve it as needed.

A VNA-first orientation is a different flow.  After the images are acquired, they are sent to a technologist imaging system.  At this step the image manipulation occurs; this can be done in a department-based system like a PACS, on a dedicated QC workstation, in a web system, or with components in the VNA itself.  Then the study is sent to the VNA, which likely maintains a local cache but could be a cloud-based system.  After the study is in the VNA it is ready to be read.  The study is then sent from the VNA to the reading station, where the interpretation takes place.

One of the keys in the distinction between the two is how quickly the study is available in the VNA.  In a VNA-first scenario the study is almost immediately available in the VNA.  This becomes important when there are multiple consumers of the images, such as an EMR integration that is serviced by the VNA, not the PACS.  In a PACS-first orientation the study is interpreted prior to archival, which means the likelihood of the images changing is very low.  I would opine that the images should NOT change once they have been reported; if they do, then an addendum is warranted.  This data flow also maintains a linear nature and is relatively simple.  There is value in simplicity, and that should not be understated.  The downside is the time required for the images to get to the VNA and the relative inflexibility of the system.  If there is an issue with the PACS, or the study is “missed,” it will not be available to downstream systems.

In the VNA-first method there are multiple systems at play, any of which could be down.  It is also a more complex workflow involving several steps.  The benefit, however, is near-immediate access to the images in downstream systems, as well as significant flexibility to integrate multiple data flows and systems.  A VNA-first architecture allows for a reduced PACS footprint, which can lower overall maintenance costs (often 15-20% annually of the PACS license cost).  It also supports the integration of multiple viewing systems for referring physicians, specialist viewers, and outside contracted radiology groups.  I would also argue that it better supports the transition to PACS as a service, “deconstructed PACS,” or PACS 3.0, whichever is your favorite term, as well as a multi-facility, multi-PACS environment in which a single study needs to “live” in many places at once.

So, back to the question: which is better?  It depends on the current imaging needs, in terms of access to images and how many systems are integrated, and on the future vision for the system.  For simple systems, stick with PACS first (your PACS vendor will love it!).  If the intent is to implement more exotic workflows, or there are multiple downstream systems, it is worth investigating a VNA-first data flow.


Kyle Henson

Please let me know what topics you would like to discuss.