Saturday, November 29, 2008

RSNA 2008 RFID Tracking of Attendees

Summary: RSNA is tracking attendees in the vendors' exhibit areas with RFID tags, with very little notice to the attendees; if you value your privacy, opt out or destroy the RFID tag in the back of your badge.

Long version:

I rarely duplicate an entire post from something that I have contributed to another forum, Aunt Minnie on this occasion, but in this case I feel strongly enough to reproduce the material in its entirety here.

Last year I got a bit annoyed that RSNA had deployed RFID tags in the attendees' badges, for the purpose of piloting the tracking of attendance in the technical exhibits (i.e., vendors' booths), after Dalai pointed this out in his blog. See "http://www.auntminnie.com/forum/tm.aspx?m=120792". Mostly I was concerned that it was not made very clear to folks that this was going on, rather than because there was anything particularly nefarious about it.

This year, RSNA is again using this technology, and if you look for example at the back of my badge, you can see it taped underneath a label that identifies it:



In the RSNA Pocket Guide, the subject is also specifically mentioned, with instructions on where to go to "opt out" if you want:


Here is an article from RSNA 2008 for the exhibitors, entitled "Increasing Revenue with RFID Exhibit Booth Tracking", which puts the objectives in perspective. Note that this is not a totally clandestine effort, and though in my opinion the notice to registrants is hardly prominent, it was mentioned in the "November RSNA News", which contains similar text to what is in the pocket guide. What really bothers me is that there seems to be no mention of it at all in the "Registration Materials", at least as far as I can find (please correct me if I am wrong about this).

Now, whilst I am happy for RSNA to know that I attended, and happy for them to know which scientific sessions I participated in to help their planning, I am not at all happy about providing that information to the vendors. So, whilst I do not yet know what their "opt out" mechanism is, I suspect it is to record your details so that they are excluded from the reports sent to the vendors (they did that on request last year in my case).

So this year I am going to be proactive and remove or destroy the RFID tag that is in my badge. This is actually easier said than done, because it turns out they are tough little f..rs. The sticky label on the back of the badge will not peel off cleanly. Attacking the chip or antenna with a scalpel reveals that they are very hard, and without any way of confirming that the device is actually no longer working, doing a really good job (e.g., on the chip with a hammer) is going to make a mess of the badge. A Google search (see for example "How to kill your RFID chip") reveals that a short time in a microwave oven does the job, though at the risk of starting a fire, which doesn't sound cool. Also, most attendees won't have a microwave in their hotel room. I tried it on my wife's badge first (!), and when that didn't catch fire, did my own, then whacked the chip with a hammer, nailed it with a punch a couple of times, and cut the antenna. That said, I would still rather have peeled the whole thing off, had it not looked like the whole badge would tear apart.

Anyway, if you value your privacy, as I do, then I suggest you find a way to deactivate the device before you go wandering around, and if you forget, make sure to go and opt out to prevent the information from being disseminated.

David

PS. Another thing that bothered me last year was that the signage that notifies attendees that this sort of monitoring is going on was not terribly prominent. I will update this post as I wander around and investigate.

Saturday, November 22, 2008

The DICOM Exposure attribute fiasco

Summary: The original ACR-NEMA standard specified ASCII numeric data elements for Exposure, Exposure Time and X-Ray Tube Current that could be decimal values; for no apparent reason DICOM 3.0 in 1993 constrained these to be integers, which for some modalities and subjects are too small to be sufficiently precise; CPs and supplements have been adding new data elements ever since to fix this, with different scaling factors and encodings, so now receivers are faced with confusion; ideally receivers should look for all possible data elements and choose to display the most precise. Next time we do DICOM, we will do it right :)

Long Version:

Just how difficult can those of us who write standards for a living actually make an implementer's life ? Pretty difficult, is the answer, though largely this occurs as we strive to avoid breaking the installed base of existing applications that might never be upgraded.

Today I was responding to a question from a software engineer at a vendor of veterinary radiology equipment who had come to realize that the "normal" attribute for encoding Exposure Time was insufficiently precise, given that it was restricted to being an Integer String, and small things, like cats, may have exposure times that are not a whole number of milliseconds. I say "normal attribute", because the original CR IOD, and most other IODs since, have used this and other attributes with similarly constrained encoding to describe X-Ray technique, and in some cases made these attributes mandatory or conditional. The attributes I am talking about are:

  • Exposure (0018,1152), which is IS VR
  • Exposure Time (0018,1150), which is IS VR
  • X-Ray Tube Current (0018,1151), which is IS VR
This problem was recognized not long after the standard was published, and the fix was published as final text in 1996 as CP 77, entitled "Wrong VR for exposure parameters". So what's the problem, you might ask; it's fixed, right ? Well, the problem is the nature of the fix.

A naive approach would be to just change the VR for the existing data element, say from Integer String (IS) to Decimal String (DS), which would then allow fractional values. The problem with this solution would be that recipients that expected a string formatted in a particular manner might fail, for example if the parser, or display text field or database column did not expect decimal values. I.e., existing implementations might be broken, which is something we always try to avoid when "correcting" the standard.
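To make that failure mode concrete, here is a minimal sketch in Java (purely illustrative, not taken from any real toolkit) of what happens when a parser written for Integer String values is handed a perfectly legal Decimal String value:

    // Hypothetical illustration of why silently changing an IS element to DS
    // would break un-upgraded receivers: a parser written for integer strings
    // fails (or worse, silently mangles the value) when it sees "12.5".
    public class IntegerStringParsingExample {
        public static void main(String[] args) {
            String integerStringValue = "12";     // legal IS value
            String decimalStringValue = "12.5";   // legal DS value, illegal IS value

            System.out.println(Integer.parseInt(integerStringValue));  // works: 12

            try {
                System.out.println(Integer.parseInt(decimalStringValue));
            } catch (NumberFormatException e) {
                // This is the sort of failure an existing implementation might hit.
                System.out.println("Receiver expecting IS cannot parse: " + decimalStringValue);
            }
        }
    }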

You might well ask why the standard makes the distinction between integer strings and decimal strings in the first place, or indeed allows for both binary and string encoding of integers and floating point values. For example, a number might be encoded as an integer string (IS), decimal string (DS), unsigned 16 bit short (US) or 32 bit long (UL) or signed 16 bit (SS) or signed 32 bit (SL) binary integer, or as a 32 bit (FL) or 64 bit (FD) IEEE floating point binary value. The original ACR-NEMA standard had fewer and less specific encoding choices; it specified only four choices for value representation, 16 bit binary (BI), 32 bit binary (BD), ASCII numeric (AN) and ASCII text (AT). Note that there was no distinction between signed and unsigned binary values, and no distinction between integer and decimal string numeric values, and no way to encode floating point values in a binary form (indeed the standard for encoding binary floating point values, IEEE 754, was released in the same year as the first ACR-NEMA standard, 1985, and certainly was not universally adopted for many years). Anyway, if you review the list of data elements, the authors of the ACR-NEMA standard seem to have taken the approach of encoding:
  • structural elements related to the encoding of the message (like lengths and offsets) and pixel-related values (rows, columns, bits allocated) as binary (16 or 32 bit as appropriate),
  • "real world" things as ASCII numeric, even things that could have been binary integers, like counts of numbers of images, etc.
In ACR-NEMA, there was no indication of whether ASCII numeric values could be integers or decimal values, or whether one or the other made sense. The authors of DICOM, in attempting to maintain some semblance of backward compatibility with ACR-NEMA and at the same time apply more precise constraints, re-defined all ACR-NEMA data elements of VR AN as either IS or DS, the former being the AN integer numbers (with new size constraints), and the latter being the AN fixed point and floating point numbers. In the process of categorizing the old data elements into either IS or DS, not only were the obvious integers (like counts of images and other things) made into integers, but it appears that any "real world" attribute that in somebody's expert opinion did not need greater precision than a whole integer was so constrained as well. If you look at the original 1993 Part 6 Data Dictionary, you will see a surprising number of these, not just the exposure-related data elements, but also other things like cine rates, R-R intervals, generator power, focal distance, velocities, depths of scan field, etc. It is hard to know what drove the decisions to constrain these, but perhaps it was related to the fact that many of the data elements were literal translations of what vendors already included in their own proprietary image file formats, and if some engineer in pre-historic times had allocated an integer rather than a fixed or floating point value for something, that arbitrary constraint found its way into the standard without much further evaluation or consideration. Alternatively, the authors may have been of the common mindset that it was helpful to recipients to constrain the size, length and value range of data elements to the greatest extent possible, something that now seems counter-productive in a world of nearly unlimited bandwidth, storage capacity and computing power, but in the recent past could have been perceived as a significant performance benefit, even in an interchange standard.

Unfortunately, even though the DICOM standard introduced the concept of sending not only the value of a data element but also its type in the message, using the so-called "explicit value representation" transfer syntaxes, the new standard continued to support, and indeed require as the default, the "implicit value representation" that was equivalent to the way some vendors had implemented the ACR-NEMA standard over the network. Requiring only explicit VR would have allowed recipients to use the VR transmitted to decide what to do with the value, and opened the door to "fixing" incorrect VRs in the data dictionary. One could have required that recipients check and use the explicit VR. Unfortunately, by permitting implicit VR transfer syntaxes, the VR has to remain fixed forever, otherwise receivers have no way of knowing what to do with a value that is of an unexpected form. I am told that there was significant discussion of this issue with respect to the 1992 RSNA demonstration, and that implicit VR was allowed for the demonstration to maximize participation, with the intent that it not be included in the standard published in 1993, but there was not sufficient support to follow through with this improvement after all. In hindsight it is easy to criticize this short-sighted decision. On interchange media, added in 1995, only explicit VR transfer syntaxes are permitted, but by then it was too late.
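To see why allowing implicit VR freezes the data dictionary, consider this deliberately simplified sketch (hypothetical, not a real parser; byte ordering, lengths and the special explicit VR forms are all glossed over). With implicit VR the receiver has to supply the VR from its own private copy of the data dictionary, so a "corrected" VR in a later edition of the standard is invisible to it; with explicit VR, it could simply honour whatever the sender declared:

    import java.util.Map;

    public class VrLookupExample {

        // The receiver's private data dictionary; an old receiver will have IS
        // here for (0018,1150) forever, whatever a later edition might say.
        static final Map<String,String> DICTIONARY = Map.of(
            "(0018,1150)", "IS",   // Exposure Time
            "(0018,8150)", "DS");  // Exposure Time in uS

        // Implicit VR: the stream carries only tag + length + value,
        // so the VR must come from the local dictionary.
        static String vrForImplicit(String tag) {
            return DICTIONARY.getOrDefault(tag, "UN");
        }

        // Explicit VR: the stream carries the VR itself, so the receiver
        // could honour whatever the sender declared, even a corrected one.
        static String vrForExplicit(String tag, String vrFromStream) {
            return vrFromStream;
        }

        public static void main(String[] args) {
            System.out.println(vrForImplicit("(0018,1150)"));        // IS, always
            System.out.println(vrForExplicit("(0018,1150)", "DS"));  // whatever was sent
        }
    }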

So what does all this mean for our exposure-related attributes ? Given that one cannot reasonably change the VR of an existing data element, the only option was to add a new one. So this is what CP 77 did:
  • it described the problem with all three data elements
  • it described the historic lack of constraints in ACR-NEMA
  • it only fixed the problem for one of the data elements (Exposure (0018,1152)), without further explanation as to why only that one was addressed
  • it added a new data element, Exposure in μAs (0018,1153), to the data dictionary and added it as an optional attribute in the CR Image Module
  • it defined the new attribute to have a scaling factor 1,000 times different from the original attribute, which was defined to be in mAs (as is normally displayed to the user)
  • it gave the new attribute a VR of IS
You might well ask
  • why CP 77 didn't just make the new data element a DS and keep the same units that were used previously, which are the units in which a user expects to see the value displayed ?
  • why not just call the data element something like Exposure (Decimal), or indeed use the same name and rename the old one to Exposure (Retired) or similar ?
  • why was the old attribute in the CR Image Module not simply retired or deprecated in some other way ?
I have no good answers to these questions, but unfortunately the CP 77 approach set a precedent for all subsequent changes of this type, including the data elements listed in but not fixed by CP 77, which is perhaps why we have ended up with:
  • Exposure Time in μS (0018,8150), which is DS VR
  • Exposure in μAs (0018,1153), which is IS VR
  • X-Ray Tube Current in μA (0018,8151), which is DS VR
Thankfully, CP 187, which introduced the new data elements, did not repeat the same mistake of using an IS rather than DS VR, but did perpetuate the notion of adding a different scaling factor to disambiguate the new data element from the old. I have to take responsibility for this particular piece of stupidity, since I was doing the editing for the DX supplement and probably this CP also at the time. Surprisingly, and I can't remember why (probably an oversight on my part), though Exposure in μAs (0018,1153) got propagated into the CR and CT IODs, Exposure Time in μS (0018,8150) and X-Ray Tube Current in μA (0018,8151) did not, which often causes implementers reading PS 3.3 not to realize that these can be used to solve any precision problems for time and current as well as exposure. Another CP on this subject is probably in order.

There are several other problems, besides the VR and the scaling factor, with this approach of fixing inappropriate VRs by adding optional attributes that mean the same thing as the ones they are intended to "replace", without actually retiring and removing the old attribute. Specifically:
  • How is a poor receiver to know which to use if it receives both (the sensible answer is to use the more precise one instead of the less precise one, but the standard does not require that) ?
  • What about an old receiver that has never heard of the new attribute (it will display the old less precise one) ?
  • Should a sender send both a less precise and a precise value, just to be able to allow such old receivers to display something rather than nothing (almost certainly yes) ?
If you think this is unfortunate, guess what, with the new Enhanced IODs we decided to make things even "better" by introducing yet more new attributes, this time with a more conventional scaling factor but an FD value representation. These are used in the Enhanced CT IOD, as well as the new Enhanced XA/XRF, 3D X-Ray and similar IODs:
  • Exposure Time in ms (0018,9328), which is FD VR
  • X-Ray Tube Current in mA (0018,9330), which is FD VR
  • Exposure in mAs (0018,9332), which is FD VR
Note that this is not nearly as bad as it sounds, because these new attributes only occur nested inside the per-frame and shared functional group sequences, and hence will not occur in the "top level" dataset in a manner that might confuse receivers. Receivers of enhanced IOD images need to extract all their technique, positioning and other frame-specific annotation information from such sequences, and hence should always use the new attributes and never need to worry about encountering the old ones. These attributes are also mandatory, as is the convention with all of the Enhanced family of objects. The use of FD (or FL) rather than DS, by the way, has been the policy of WG 6 for some time now when introducing new non-integer numeric data elements, since the use of binary IEEE floats eliminates any ambiguity in encoding or parsing funky string values that are not described for DS, like infinity or NaN.
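As a small illustration of the DS versus FD point (plain Java, nothing toolkit-specific): an 8-byte IEEE 754 double has a perfectly well-defined bit pattern even for the special values, which is exactly what an FD element carries, whereas DS has no defined spelling for those values at all:

    public class FloatingPointEncodingExample {
        public static void main(String[] args) {
            double[] values = { 2.5, Double.NaN, Double.POSITIVE_INFINITY };
            for (double v : values) {
                // This 64-bit pattern is what an FD element would carry.
                long fdBits = Double.doubleToLongBits(v);
                System.out.printf("FD bits for %s = 0x%016X%n", v, fdBits);
            }
            // A DS value, by contrast, is a string like "2.5"; strings such as
            // "NaN" or "Infinity" are not defined for DS, so whether a given
            // parser accepts, rejects or mangles them is anybody's guess.
        }
    }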

The problem with these new data elements is that, now that they are in the data dictionary, some creative implementers of non-enhanced images have started to stuff them into the "old" IODs in order to send values with greater precision, instead of sending the intended CP 77 and CP 187 data elements. Strictly speaking this is legal as a so-called "Standard Extended SOP Class", but it creates an even greater problem for the receivers. When I first encountered someone doing this, I added a specific check to my dciodvfy validator to display an error if these attributes are present in the DX IOD, where they should not be, and I have subsequently added the check to other "old" IODs as well, including CR, XA/XRF and CT; I also implemented some limited consistency checking when multiple attributes for the same concept are present, since I encountered examples where completely different values were present that made no sense at all. As more and more modalities implement the Enhanced family of objects, however, and include the ability to "fall back" to sending the "old" objects if the SCP does not support the new ones, and do so by copying the "new" attributes from the functional group sequences into the top level datasets of old IOD objects rather than converting them to the "old" attributes, we may see further proliferation of the multitude of different data elements in which the exposure parameters might be encoded.

So, back to the problem of what a poor receiver of non-enhanced IOD images is to do ? The bottom line in my opinion is that a modern receiver should check for the presence of any of the alternative attributes that encode the exposure parameters, and use whatever it finds, preferring the most precise. I implemented this rather crudely recently in the com.pixelmed.display.DemographicAndTechniqueAnnotations class in my PixelMed toolkit, if you are interested in taking a look at one approach to this; look for the use of the getOneOfThreeNumericAttributesOrNull() method.
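To show what I mean by preferring the most precise alternative, here is a self-contained sketch for exposure time (this is not the PixelMed code itself; the Map merely stands in for a parsed dataset whose values have already been decoded to strings, keyed by tag):

    import java.util.Map;

    public class ExposureTimeSelectionExample {

        // Returns the exposure time in ms, or null if absent, preferring the
        // more precise alternatives when more than one is present.
        static Double exposureTimeInMilliseconds(Map<String,String> dataset) {
            String fdMs = dataset.get("(0018,9328)");   // Exposure Time in ms (FD, Enhanced)
            if (fdMs != null) return Double.valueOf(fdMs);

            String dsUs = dataset.get("(0018,8150)");   // Exposure Time in uS (DS, CP 187)
            if (dsUs != null) return Double.valueOf(dsUs) / 1000.0;

            String isMs = dataset.get("(0018,1150)");   // Exposure Time (IS, whole ms)
            if (isMs != null) return Double.valueOf(isMs);

            return null;
        }

        public static void main(String[] args) {
            // Both the old integer value and the newer microsecond value are
            // present; the more precise one wins (2.5 ms rather than 3 ms).
            Map<String,String> dataset = Map.of(
                "(0018,1150)", "3",
                "(0018,8150)", "2500");
            System.out.println(exposureTimeInMilliseconds(dataset) + " ms");
        }
    }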

If the foregoing sounds a little critical and sarcastic, it is intended to be. I continue to amaze myself with my own poor expedient decisions, lack of consistency and frequent carelessness when working on corrections and additions to the DICOM standard, and so this missive is intended to be as self-deprecating as it is critical of my contemporaries and predecessors. Much as we would like to change DICOM to make it "perfect", the need to correct problems and add functionality without breaking things that already work, and without raising the implementation hurdle too high to be realistic, is overriding; the result of compromise is significant "impurity".

If we ever had the chance to start DICOM all over again and "do it right", I am sure that despite our best intentions we would still manage to screw it up in equally egregious ways. We sometimes joke about doing a new standard called just "4", so-called because it would be the successor to DICOM 3.0, would not necessarily be just about images, and would be an opportunity to skip past the morass that is HL7 version 3. I doubt that we would really do much better, and we would no doubt encounter Fred Brooks' "second system syndrome". Indeed, DICOM 3.0, being the successor to ACR-NEMA, already suffers in that respect, perhaps being accurately described as an "elephantine, feature-laden monstrosity". From what little I know about HL7 v3, it is not exempt either.

David

Sunday, November 16, 2008

Basic CD viewer requirements; extending PDI; software for sending images on CD media

Summary: IHE is defining requirements for basic CD viewers; PDI is being extended to add DVD, USB, compression and encryption; IHE PDI and DICOM CD media require viewers and importers to understand what is on the media; as compression, encryption and new types of images are used, receiving software struggles to keep up; this can be alleviated by executable software on the media that can decompress, decrypt and convert new image types to whatever has been negotiated with the recipient and then transmit them via the local DICOM network.

Long Version:

Since the cardiology community first began standardizing, promoting and adopting DICOM CDs as a means of interchange of images in the early 1990's, and radiology rapidly caught up, CDs have proven to be wildly successful despite legitimate complaints about interoperability and ease of use. The PDI promotion effort by IHE initially focused on reducing confusion by insisting on only uncompressed images on CD, to reduce the burden on any device or software that the recipient may have installed. Dependence on on-board viewers was somewhat discouraged by IHE, both because of the potential security risk of executing externally supplied code and because of the variation in features that such viewers support.

As I have discussed previously, referring physicians who are the victims of a multitude of different viewers are "encouraging" us to improve the situation, both by endorsing the use of PDI as opposed to proprietary media, and by joining with IHE to develop standards for what viewers are required to be able to do, in a manner that makes them intuitive to use. This latter effort is the Basic Image Review Profile. Last week we had our first Radiology Technical Committee meeting to discuss the requirements for this profile. The involvement of the users who are interested in this was extremely encouraging ... no fewer than three neurosurgeons attended the meeting to contribute! We discussed what features any basic viewer should have with respect to loading studies, navigating through them using thumbnails, comparing series side-by-side with synchronized scrolling, panning, zooming and windowing, making simple distance and angle measurements, displaying any report if present, and printing. We also discussed hardware and software requirements for such a viewer, agreeing that it had to run on Windows (blech, but that's reality), and, more controversially, to what extent elements of the user interface could be standardized in appearance to make unfamiliar viewers intuitively easy to use. Tooltips are one obvious means to assist with ease of use, but we also agreed to at least attempt to define what tools should be visible in the main interface and what they should look like (e.g., hand for pan, magnifying glass for zoom, etc.). We know there is a balance between consistency across vendors and the added value of proprietary look and feel, but hope that some consensus can be achieved on general principles. One item that everyone seems agreed on is the concept that the "basic" interface should be uncluttered, and "advanced" features should not be visible until they are called for, so the profile may well end up specifying what shall not be there in addition to what shall.

In the same meeting we also discussed extensions to PDI. For some time many applications have been limited by the size of datasets relative to the capacity and speed of uncompressed CD media. Accordingly, after our informal interoperability tests of DVD readability earlier this year at the Connectathon, the idea of extending PDI to support DVD as well as CD has been accepted, and at the same time it makes sense to add support for compression (as DICOM requires for DVD support) as well as for faster media like USB memory sticks and the like. The fuss about encryption of portable media makes this an opportune time to deal with that issue as well, to make sure that there is not a proliferation of proprietary alternatives to the DICOM secure media standard.

Yet extending PDI raises the bar for recipients that want to use their own pre-installed software or devices to display or to import media that may be compressed or encrypted in a manner that older software does not support. At the same time, we are well aware that any media may contain a multitude of different types of images, presentation states, key object selection and structured report documents, and IHE does not constrain this. What this means in practice is that though a viewer or importer (such as a PACS) may well support most of the image types, there may be content that is not successfully displayed or imported, the consequences of which may be unfortunate. The Basic Image Review Profile will address this for on-board viewers by adopting the fundamental principle that a compliant viewer on the media shall be able to view all the DICOM content on the media. That is a "no-brainer", but it doesn't help the pre-installed viewer or importer.

A solution that I have proposed, which may help, is to introduce the concept of "sending software" on the media. That is, even if one does not want to view the content on the media using an on-board viewer, which may or may not be present, easy to use, or even possible to execute on one's hardware, it may be possible to execute software that helps to import the content into one's own locally installed software. The requirements that I have drafted so far for the PDI extensions supplement include the ability to:
  • allow the user to enter the recipient's network location (IP, port, AET)
  • read all the content of the media via the DICOMDIR
  • select what to send
  • coerce patient & study identifiers using local values supplied by the user
  • decrypt content if encrypted using the password supplied by the user
  • decompress content if the receiving device doesn't support compression
  • convert instances whose SOP classes the receiver does not support to one that it does
  • transfer everything
The idea started out as a means to get around the fact that most of the installed base does not support encryption, or some of the more advanced compression schemes, but in fleshing this out it seemed reasonable to also address the fact that more recent SOP Classes like Digital X-Ray and Enhanced CT or MR are still not widely supported, and the same "fall back" strategy of converting SOP Classes that modalities use to deal with primitive PACS in this regard could be used on media too. A typical DX modality may, for instance, attempt to send a DX object that the PACS doesn't support, and can fall back to sending CR or even Secondary Capture instead if that is all that can be negotiated on the network; modern MR devices that support Enhanced MR do something similar. The key to this is the DICOM Association Negotiation mechanism that allows this to be figured out on the fly, rather than being pre-configured.
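A rough sketch of that fall-back selection (hypothetical, and not tied to any real toolkit's association negotiation API) might look like the following: given the set of SOP Class UIDs that the receiver accepted during negotiation, send the richest object it will take:

    import java.util.List;
    import java.util.Set;

    public class SopClassFallbackExample {

        static final String DX_FOR_PRESENTATION = "1.2.840.10008.5.1.4.1.1.1.1";
        static final String CR                  = "1.2.840.10008.5.1.4.1.1.1";
        static final String SECONDARY_CAPTURE   = "1.2.840.10008.5.1.4.1.1.7";

        // Preference order: the richest object first, the lowest common
        // denominator last.
        static final List<String> PREFERENCE =
            List.of(DX_FOR_PRESENTATION, CR, SECONDARY_CAPTURE);

        static String chooseSopClass(Set<String> acceptedByReceiver) {
            for (String uid : PREFERENCE) {
                if (acceptedByReceiver.contains(uid)) {
                    return uid;   // convert the object to this class and send it
                }
            }
            return null;          // nothing usable was negotiated
        }

        public static void main(String[] args) {
            // A primitive PACS that only accepted CR and Secondary Capture.
            Set<String> accepted = Set.of(CR, SECONDARY_CAPTURE);
            System.out.println("Send as: " + chooseSopClass(accepted));
        }
    }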

Ideally, the "sending software" present on the media would be multi-platform, and it is certainly possible to do that (say with Java and on-board JRE's for the popular platforms in case they are not already installed). But at the bare minimum, given the prevalence of Windows, the requirements are that it executes:
  • from the media without installation
  • on desktop Windows operating systems (XP or later)
  • without requiring the presence of or installation of supporting frameworks (e.g., .NET or JRE), other than to be able to execute them from the media if required
  • without requiring administrative privileges
Obviously, the requirements for "sending software" could be satisfied by an on-board viewer that had such functionality embedded within it, and in many cases that may be simpler than adding separate standalone software. That said, very few viewers supplied by commercial producers of CDs include any kind of networking capability, at least not yet.

A potential problem is the need for the user to supply network parameters for the recipient (in the absence of self-discovery support, something not very widespread, unfortunately), and at the other end for the receiving PACS or workstation to be willing to accept inbound objects from a strange source (some are "promiscuous" in this respect, others are not). In the case where the media sending software is executed on the same machine as the "workstation" (or pre-installed viewer) into which the images are going to be imported, this should be less of a problem. Indeed defaulting to sending to port 104 or 11112 on the localhost with a pre-defined AET might well work for this and we could consider defining that in the IHE PDI profile option.

Anyway, though obviously the "sending software" option is not something ordinary users such as referring physicians will want to have to deal with, since their pre-installed or on-board Basic Image Review Profile viewer should cope most of the time, it provides a means of "last resort", if you will, for support personnel to extract content from media that for some reason is unreadable locally through normal means. It also provides a means of helping the enterprise-to-enterprise interchange use-case, when the receiving PACS does not support the more modern DICOM objects that advanced modalities produce, more modern compression techniques such as JPEG 2000, or the encryption that is being mandated by some jurisdictions specifically for this use-case.

David

Friday, November 7, 2008

UK Encryption Update

Summary: Encryption is not required for CDs given to patients in the UK

Long Version:

In the discussion on AuntMinnie on this subject, Brandon Bertolli from London provided an update of the UK situation that clarifies when encryption is expected to be used, or not used. Specifically, a note in a letter from NHS Chief Executive David Nicholson to the president of the British Orthopaedic Association, dated 29 October 2008, includes important statements:
  • "Patients can continue to be given their own images on CD to carry away with them ... provided that the CDs are given directly to the patient, they are made aware of the risks and they take responsibility for their safekeeping, there is no fundamental problem if these are not encrypted."
  • "If ... a CD needs to be used, which is possibly the case if the X-Ray is taken in a non acute setting ... then it should be encrypted ... alternatively it can be given to the patient and therefore encryption would not be necessary."
For those of us involved in teaching and research, there is another very important clarification:
  • "Naturally images will need to continue to be used for teaching, and the system for protecting data on CDs should not prevent entirely legitimate teaching activities ... if the teaching is outside the clinical environment then as long as the data on the CD contains no patient identifiable information then there is no need for it to be encrypted."
These are very important and sensible clarifications, which should ease the concerns that some folks have had about the potential negative impact of privacy protection in the UK on safety and convenience, and the practicality of long term accessibility of password based encrypted media.

It seems very clear that the NHS is taking action primarily for transfers between organizations and between providers, which is as it should be. But the need for encryption can still not be dismissed lightly and is described in the letter as "good practice" even for CDs for patients. So we do need to make sure that we promote the appropriate standards for media creation vendors to implement so as to avoid the NHS or anybody else needing to adopt proprietary schemes for such transfers.

But the sky over Britain's CD users is not falling after all.

David

PS. Here is the scanned in text of the letter and the accompanying note (with thanks to Miss Clare Marx, who kindly provided a copy of the entire letter):

Wednesday, November 5, 2008

CD Encryption Revisited - UK Leads the Charge

Summary: UK NHS demands encryption of image CDs; should we use device or file-based encryption, standard or proprietary, password or public-key based ?

Long Version:

In a previous post I talked about Media Security and Encrypted DICOM CDs, and this topic has also come up on Aunt Minnie. Whilst there has been a general concern that the threat to privacy is small and the risk to usability high, it seems that in the UK at least, this discussion has been pre-empted by a decision by the NHS to require encryption, outlined in a letter from the NHS Chief Executive, David Nicholson. I quote from this letter:
  • "You are aware that there is a mandatory requirement that all removable data, including laptops, CDs, USB Pens etc must be encrypted."
  • "The encryption mandate applies equally to PACS images whether on CD or back-up tapes."
  • "There could be occasional exceptions on patient safety grounds ..."
  • "The CD and the password MUST be transferred by different routes."
Note that the NHS is not mandating any particular encryption scheme, though they have procured a proprietary piece of software from McAfee (SafeBoot, now Endpoint Encryption) for this purpose. It is unfortunate that the NHS has not chosen to promote a standard, interoperable and vendor-neutral solution, but perhaps that is because they have not been able to find one, or at least not one that is widely adopted, or adopted at all.

Regardless, it would seem that the writing is on the wall for encryption of DICOM media, and solutions will need to be provided, even though the inconvenience and risk to patient safety will likely be significant. Accordingly, we have been considering a number of strategies to address this need: specifically, the encryption of an entire set of files (or an entire device), such as the open-source cross-platform TrueCrypt approach, or the encryption of individual files, such as by using the Cryptographic Message Syntax (CMS) that was designed for secure email (S/MIME) and which is already included in the DICOM standard for secure media. Further, one needs to make a choice between a password-based mechanism (so-called Password Based Encryption (PBE)) and a scheme that depends on the use of public keys and certificates and so forth, dependent on there being a Public Key Infrastructure (PKI) for senders and recipients.

The primary advantage of encrypting the entire file set or device would seem to be that one could do that, then present the encrypted set as if it were an ordinary filesystem, and the effect would be completely transparent to applications like DICOM viewers and PACS importers, once the decryption had been activated by the user entering a password or the appropriate private key being matched. Unfortunately, great as this sounds, it turns out that one needs to install some software into the operating system (like a device driver) to actually make this happen, and this requires administrative privileges. Either recipients need to have software pre-installed on their machine by someone appropriately authorized, or they need to have the right to do this themselves, for example when auto-running such a tool from the media itself. The latter is indeed supported by TrueCrypt, for example, but how likely is it that the average doctor receiving media will have such privileges, and how safe would it be (in terms of the risk of viruses) to allow them to do so ? This may be a showstopper for what otherwise seems on the face of it like the most expedient solution. There is also the matter that TrueCrypt is not a standard per se, nor is it included in other standards like DICOM, but the latter could easily be rectified since the format is fully documented and free from intellectual property restrictions.

By contrast, what seems like a more complex approach, inclusion of support for encryption directly into the DICOM viewing or importing software, may actually be a more effective solution, since it requires no additional permissions or privileges on the part of the user. Since often a viewer is supplied on the media anyway, that viewer can support the encryption mechanism used for the files. As long as the encryption scheme is a standard one, then other software can also view or import the media, if that other software also supports the standard scheme. In the interim, whilst other viewers and importers are being "upgraded" to support encryption, one could add to the on-board viewer the capability to not only decrypt and view the files, but also to send the decrypted images over a DICOM network to a PACS or workstation (preferably allowing editing of the Patient ID field to allow for reconciliation of different sites' identifiers in the process).

As mentioned, DICOM already defines the use of CMS for this purpose for secure media, though to my knowledge this feature has never been implemented in a commercial product. Further, in anticipation of this need we have been working on adding a standard password-based mechanism to augment the public-key approach used in the existing standard, specifically in DICOM CP 895, so that we now have the option of using either PBE or a PKI as the situation warrants. There are free and open-source encryption libraries that have support for CMS as well as the underlying encryption schemes like AES, for example the excellent Bouncy Castle libraries, and I and others have begun work on testing this concept using these libraries. Indeed, you can download from here a small test dataset that I created and encrypted using the DICOM Secure Media profile with the CP 895 mechanism.
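For what it is worth, the password-based flavour can be illustrated with nothing more than the standard Java cryptography classes. The sketch below derives an AES key from a password and encrypts some bytes; note that this is only a generic illustration of PBE, and not the actual CMS enveloped-data structure that the DICOM secure media profile and CP 895 define (Bouncy Castle or a similar library would be used to produce that), and the password, salt and iteration count shown are arbitrary:

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public class PasswordBasedEncryptionExample {
        public static void main(String[] args) throws Exception {
            char[] password = "correct horse battery staple".toCharArray();
            byte[] salt = new byte[16];
            byte[] iv = new byte[16];
            SecureRandom random = new SecureRandom();
            random.nextBytes(salt);
            random.nextBytes(iv);

            // Derive an AES key from the password (PBKDF2), then encrypt with AES/CBC.
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            byte[] keyBytes = factory.generateSecret(
                new PBEKeySpec(password, salt, 10000, 128)).getEncoded();
            SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
            byte[] ciphertext = cipher.doFinal("some DICOM file bytes".getBytes("UTF-8"));

            // The salt, IV and iteration count have to travel with the data;
            // the password has to travel by some other route.
            System.out.println("Encrypted " + ciphertext.length + " bytes");
        }
    }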

Regardless of which technical approach prevails, in all likelihood the simpler password-based mechanisms will be deployed, if only because of the complete lack of an existing PKI in most health care environments. Obviously, the privacy protection from encryption is only as good as the password chosen. Though security folks talk about long and complex passwords and phrases to improve protection, one does have to wonder how in reality imaging centers will choose passwords, and to what extent they will be based on well-known information that is memorable and predictable to simplify use, balanced against the relatively low perceived likelihood and consequences of a security breach. Further, there has yet to be discussion of good security practices and procedures for exchanging the media and the passwords separately, and what the recipient should do in this regard. For example, should the password be included in the printed report that is faxed or emailed to the intended recipient ? Should the patient have a copy of this for their long term use ? I would certainly expect so, but inevitably the patient will store the report with the CD, which rather defeats the point !

None of these mechanisms address the concern that if a password is lost or not transmitted, or the recipient cannot for some reason run the on-board viewer, then the patient's safety and convenience are potentially at risk. In a network-based scenario, emergency access can be granted on demand, perhaps simply recording an auditable event that such emergency access by an authenticated but otherwise unauthorized individual was granted. With physical media, however, the sender and recipient are decoupled; indeed the recipient may not even be known a priori, such as when a patient takes their images for a second opinion, or for use as priors at a subsequent event. In such cases, loss or lack of access to the password becomes problematic. The problem is exacerbated in regions where it is not traditional for the imaging facility to provide long-term archival of images, such as Australia. One could imagine a scenario in which a woman has her screening mammogram recorded on an encrypted CD, the radiology center does not archive the images, and next year the images cannot be used as priors because she has forgotten or lost the password.

Conceivably one could use a more complex form of encryption that allowed for escrow of additional keys that would allow recovery from some central authority perhaps, but such escrow schemes have been widely unpopular in the security community for many reasons. In the absence of an infrastructure to support this, all CDs could include the use of an additional key that was "well known" to some central authority, but of course eventually someone might be able to compromise such a key (consider the DVD Content Scramble System (CSS), for example).

So, though we do not yet have broad consensus on the standard mechanism that the industry should adopt, globally and not just in the UK, we are making some progress. Next week we will be meeting as the IHE Radiology Technical Committee, and encryption is one of the topics for discussion for this year's extensions to PDI. The agenda is here, in case you are interested in attending.

Though improving interoperability and reducing the barriers to viewing images on media has always been our primary goal, and encryption has the potential to threaten that objective, hopefully we will have a clear technical direction shortly for those folks who may no longer have the option of avoiding media encryption.

David