Many moons ago, when most PACS were designed, the archive was local. It is, after all, the "A" in PACS: Picture Archiving and Communication System. Now that the industry is moving inexorably toward a deconstructed model, or PACS as a service, the archive is rarely on the same LAN as the PACS. And because the archive is now a separate application, different rules may apply. For example, some systems accept DICOM studies with alpha characters in the Study Instance UID, while others allow series or images to be stored in two different studies with the same SOP Instance UID. These variations in how the DICOM standard is interpreted or enforced lead to problems when storing to a VNA. There are times when a DICOM store transaction succeeds, yet the study is never accepted into the VNA. There can also be a delay between the time a study is received by the VNA and when it is actually written to disk, since many VNAs have some sort of inbound cache or holding pen where data is processed. This discrepancy creates a situation where the PACS believes a study is stored when it is not, which is of course heresy for an archive.
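To make the "alpha characters in the UID" problem concrete, here is a minimal sketch of a conformance check a receiving system might apply. It encodes the UID rule from DICOM PS3.5 (dot-separated numeric components, no leading zeros except a lone "0", at most 64 characters); the function name is my own, not from any particular product.

```python
import re

# Per DICOM PS3.5: a UID is dot-separated numeric components, a component
# may not have a leading zero (unless it is exactly "0"), and the whole
# string is at most 64 characters long.
_UID_RE = re.compile(r"^(0|[1-9][0-9]*)(\.(0|[1-9][0-9]*))*$")

def is_valid_uid(uid: str) -> bool:
    """Return True if `uid` is a conformant DICOM UID."""
    return len(uid) <= 64 and bool(_UID_RE.match(uid))
```

A permissive PACS might happily store a study whose UID fails this check, while a stricter VNA rejects the same study at ingest, which is exactly the mismatch described above.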
It turns out there is an old-school, little-used solution for this very problem: the arcane process called DICOM Storage Commitment, and I highly recommend that every VNA owner enable it for all sources that support it. During the DICOM store transaction, each image should be acknowledged as received, and in theory any image that is not acknowledged would be resent by the PACS or other source system. In practice there are a number of places where this does not happen. Storage commitment is a separate transaction that occurs after the DICOM store: the sending system generates a new request listing every image that was sent, and the response lists every image with a success or failure status. If any image is listed as a failure, the source system can resend that image or the entire study; most tend to resend the entire study.
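The reconciliation step described above can be sketched in a few lines. This is an illustrative model, not a real DICOM toolkit call: the function and parameter names are my own, and the sets stand in for the referenced-SOP and failed-SOP lists carried in the actual commitment messages.

```python
def plan_resend(sent_uids, committed_uids, failed_uids, resend_whole_study=True):
    """Decide what to resend after a Storage Commitment result.

    sent_uids:      SOP Instance UIDs sent in the original store batch.
    committed_uids: UIDs the archive reported as safely stored.
    failed_uids:    UIDs the archive reported as failed.
    Any sent UID that is neither committed nor failed is treated as failed,
    since the archive never vouched for it.
    """
    unaccounted = set(sent_uids) - set(committed_uids) - set(failed_uids)
    problems = set(failed_uids) | unaccounted
    if not problems:
        return set()  # everything committed; safe to consider archived
    # Most sources resend the whole study rather than individual images.
    return set(sent_uids) if resend_whole_study else problems
```

The `resend_whole_study` flag mirrors the behavior noted above: per-image resend is possible, but resending the entire study is the common (and simpler) choice.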
One problem with using storage commitment is that many vendors have ignored this transaction for quite some time, and as a result it is often less than optimally designed or configured. Some systems ship with default timeouts, others batch up storage commitment messages, and still others will not archive anything else until the commit is received. Even with these limitations, it is worth it. The fundamental problem is that once a source believes a study has been archived, that study becomes eligible to be deleted or flushed from the local cache. If for some reason it did not actually archive successfully, the result is data loss.
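The cache-flush rule that prevents that data loss can be sketched as follows. This is a hypothetical illustration, assuming a simple in-memory tracker; the class and method names are not from any real PACS or VNA. The key invariant: a study is never flushable until a commit has been confirmed, and a study whose commit never arrives is flagged for resend rather than deleted.

```python
import time

class CommitGuardedCache:
    """Track sent studies and only allow flushing after a confirmed commit."""

    def __init__(self, commit_timeout_s=3600):
        self.commit_timeout_s = commit_timeout_s
        self._studies = {}  # study_uid -> (sent_at, committed)

    def mark_sent(self, study_uid, now=None):
        """Record that a study was stored to the archive (commit pending)."""
        self._studies[study_uid] = (now if now is not None else time.time(), False)

    def mark_committed(self, study_uid):
        """Record a successful storage commitment for the study."""
        sent_at, _ = self._studies[study_uid]
        self._studies[study_uid] = (sent_at, True)

    def flushable(self, study_uid):
        """Only studies with a confirmed commit may be deleted from cache."""
        return self._studies[study_uid][1]

    def overdue(self, study_uid, now=None):
        """Commit never arrived within the timeout: flag for resend, never delete."""
        sent_at, committed = self._studies[study_uid]
        now = now if now is not None else time.time()
        return (not committed) and (now - sent_at > self.commit_timeout_s)
```

Without a rule like `flushable`, a source that trusts the store acknowledgment alone will happily delete studies the archive never actually committed, which is the data-loss scenario described above.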