UK Imaging Informatics Group
Body part mapping of exam codes
Joel Lidstrom posted on Monday, May 26, 2014 - 03:47 am
Excellent question, David. We do have a reason for using our own codes. I'll see if I can make sense of where we're heading with them.

But first let me say that I am in awe of your grasp of these difficult ontologies, and your knowledge of their complexities. I don't know if I could do it, even if I had the intellectual acumen. But at the same time, you'll come to see that I am working from the other end of the horse. I am trying to tease out rigorously defined radiologic information from slap-dash, free-texted entries by techs and administrators predisposed to entering diagnoses or billing information rather than anatomic and functional definitions.

Now to your question. The body part mapping I sent was a "down and dirty" output from our Related Procedures server. We upload spreadsheets of proc codes; it maps exam descriptions to create data that PACS can use for automated hanging of priors. For example, the body part "Artery Abdomen Aorta/FOB" enables a range of variously described abdominal angio studies to hang as priors. It is not meant as more than a study type designation, but perhaps it and other terms gave the wrong impression.

We have a normalization engine that will programmatically assess any incoming exam code description, and tag it with what we believe to be a "standard" term of "standard" granularity. Underlying those terms are RadLex identifiers. A connection to SNOMED, et al, is merely a matter of doing the leg work to enter synonyms for each of those standards, and something that we will certainly do.
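To make that concrete, here is a rough sketch in Java of what the tagging step amounts to (the lexicon fragment and the RadLex-style identifiers are illustrative placeholders, not our actual tables):

import java.util.*;

// Minimal sketch: tag the tokens of a free-text exam description with
// normalized terms and RadLex-style identifiers (placeholders here).
public class Normalizer {
    static final Map<String,String> LEXICON = new HashMap<>();
    static {
        LEXICON.put("ABD",     "Abdomen|RID-a");   // illustrative identifiers
        LEXICON.put("ABDOMEN", "Abdomen|RID-a");
        LEXICON.put("PELV",    "Pelvis|RID-b");
        LEXICON.put("CT",      "CT|RID-c");
    }

    public static List<String> normalize(String examDescription) {
        List<String> tags = new ArrayList<>();
        for (String token : examDescription.toUpperCase().split("[^A-Z0-9]+")) {
            String tag = LEXICON.get(token);
            if (tag != null && !tags.contains(tag)) tags.add(tag);
        }
        return tags;
    }

    public static void main(String[] args) {
        // -> [CT|RID-c, Abdomen|RID-a, Pelvis|RID-b]
        System.out.println(normalize("CT abd/pelv w contrast"));
    }
}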

We began this endeavor as a means of programmatically finding correlates among radiologic studies of disparate terminology. Unlike the NHS, which has adopted a standard with a rigorous set of editorial principles and with an underlying structure of SNOMED terms, we are working in the 'Wild West' where everyone and his sister is creating their own descriptions, whether for the RIS or directly at the modality. Our software would assess these text strings, and tag them with a normalized term and concomitant RadLex identifier for each significant attribute it encountered.

Given that capability, in the early going of the ACR's DIR study we were asked to map the cacophony of exam descriptions that CT scanners were sending. But the ACR wanted more than attribute identifiers or our terminology; they wanted a correlation to the RadLex Playbook. I realized that if I were to map unstructured exam descriptions and the Playbook to common terms, I could force the computer to do what it is unhappy to do—find similarities—by getting it to do what it does so well—find exact matches.
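The trick, in miniature (toy lexicon, and the RPID below is a stand-in, not a real Playbook code): reduce both the local description and the Playbook entry to the same canonical key, and a plain hash lookup then does the "similarity" work.

import java.util.*;

// Reduce both sides to one canonical key so exact matching can stand in
// for similarity matching. Lexicon and Playbook entry are stand-ins.
public class ExactMatchJoin {
    static final Map<String,String> LEXICON = Map.of(
        "CT", "CT", "ABD", "ABDOMEN", "ABDOMEN", "ABDOMEN",
        "PELV", "PELVIS", "PELVIS", "PELVIS");

    static String canonicalKey(String description) {
        TreeSet<String> tags = new TreeSet<>();   // sorted, de-duplicated
        for (String t : description.toUpperCase().split("[^A-Z0-9]+")) {
            String norm = LEXICON.get(t);
            if (norm != null) tags.add(norm);
        }
        return String.join("+", tags);
    }

    public static void main(String[] args) {
        Map<String,String> playbook = new HashMap<>();
        playbook.put(canonicalKey("CT ABDOMEN PELVIS"), "RPID-x (stand-in)");
        System.out.println(playbook.get(canonicalKey("ct abd/pelv")));
    }
}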

At the same time I realized that my textual output was going to be a “Playbook” in itself, but with greater malleability. Why should I force my spelling of “fetus” on those who prefer “foetus” when the underlying identifier is identical? Why should I always express a CT UROGRAPHY as a CT ABDOMEN PELVIS UROGRAPHY just to promulgate the RadLex Playbook’s penchant for adding unnecessary hierarchies? And so our work took on two dimensions. I sought to accurately define radiologic exams by correlating salient and implied terms to the RadLex ontology (and by extension to other important standards), while retaining the ability to output them in ways that fit various types of “common use”.

It was a little disconcerting, however, when we discovered that despite mapping unstructured procedure codes with a 99.5% accuracy, when we correlated them to a standard list like LOINC or the Playbook, two things were happening. Either there would be a large number of exams that had no correlate, i.e., the "standard" list was simply not robust enough, or many studies would find multiple correlates in the standard list, i.e., there were redundancies in the standard. It seemed that it wasn't sufficient to define a standard list, despite committees of experts spending months—even years—on its creation.

It was then that I reasoned that our ability to map exam descriptions meant that if we were to see enough studies, a quarter million perhaps, we would have a list of all radiologic studies that are being performed. And if we were to count their occurrence after normalizing them, sorting them from most to least frequent, we would have a list from which a truly standard list could be extrapolated, with no omissions or redundancies. It is simply a matter of “how far down the list do we go?".
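The counting is the easy part once normalization has been done; schematically (with stand-in data):

import java.util.*;
import java.util.stream.*;

// Count already-normalized studies and sort most-to-least frequent, so a
// cut-off ("how far down the list do we go?") can be chosen.
public class FrequencyList {
    public static void main(String[] args) {
        List<String> normalized = List.of(        // stand-ins for ~250k studies
            "CT+ABDOMEN+PELVIS", "CT+HEAD", "CT+ABDOMEN+PELVIS",
            "XR+CHEST", "XR+CHEST", "XR+CHEST");

        Map<String,Long> counts = normalized.stream()
            .collect(Collectors.groupingBy(s -> s, Collectors.counting()));

        counts.entrySet().stream()
            .sorted(Map.Entry.<String,Long>comparingByValue().reversed())
            .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
    }
}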

Though the need to adopt a standard is universally acknowledged, it is a hard sell in the U.S. It will not come from the top down as it has with the NHS, and it needs to be painless. It isn't enough to provide “a” standard. True, someone with your facility with term mapping could reasonably create SNOMED correlates for a 1000-procedure RIS list; but it is beyond the interest or even capability of most radiology departments. And what of their database of 65,000 unique proc codes? RadMapps can do it in fifteen minutes.

So that is what we are hoping to do, with our terminology yes, but with theirs as well, and always underpinned with RadLex, ultimately SNOMED, and other standards that prove useful.
David Clunie posted on Monday, May 26, 2014 - 02:34 pm
Hi Joel

One of the things that I have found in doing such mapping work (whether it be from text strings or standard codes or local codes) is the importance of extracting no more or less than is necessary for a specific use case, which seems to be consistent with your observations.

There are different approaches in the various standard schemes like SNOMED, LOINC and indeed RadLex with respect to representing this.

For example, in LOINC one has the multi-axial hierarchy, which for radiology specifies anatomical relationships among generic LOINC Parts, down to the anatomically specific procedure level, and then specializes each of those with other "attributes" (e.g., with the Method attribute). E.g., for CT Abdomen and Pelvis one has:

LP34218-5.LP30643-8.LP33481-0,1,LP33481-0,LP133024-2,Abdomen and Pelvis | Computerized tomography (CT)
LP34218-5.LP30643-8.LP33481-0.LP133024-2,1,LP133024-2,44115-4,Abd+Pelvis CT
LP34218-5.LP30643-8.LP33481-0.LP133024-2,2,LP133024-2,42274-1,Abd+Pelvis CT W+WO contr IV
LP34218-5.LP30643-8.LP33481-0.LP133024-2,3,LP133024-2,36813-4,Abd+Pelvis CT W contr IV
LP34218-5.LP30643-8.LP33481-0.LP133024-2,4,LP133024-2,72250-4,Abd+Pelvis CT W contrast PO+IV
LP34218-5.LP30643-8.LP33481-0.LP133024-2,5,LP133024-2,36952-0,Abd+Pelvis CT WO contr

LP133024-2 is UMLS C1980561, by the way.
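For anyone wanting to work with such rows programmatically, the fields as pasted above are PATH_TO_ROOT, SEQUENCE, IMMEDIATE_PARENT, CODE and CODE_TEXT, so grouping the specialized children under their generic parent Part is straightforward; a minimal sketch:

import java.util.*;

// Parse rows of the shape shown above and group specialized children
// (contrast variants etc.) under their generic parent Part.
public class MultiAxial {
    public static void main(String[] args) {
        String[] rows = {
            "LP34218-5.LP30643-8.LP33481-0.LP133024-2,1,LP133024-2,44115-4,Abd+Pelvis CT",
            "LP34218-5.LP30643-8.LP33481-0.LP133024-2,3,LP133024-2,36813-4,Abd+Pelvis CT W contr IV"
        };
        Map<String,List<String>> childrenOf = new HashMap<>();
        for (String row : rows) {
            String[] f = row.split(",", 5);       // last field may contain commas
            String parent = f[2], code = f[3], text = f[4];
            childrenOf.computeIfAbsent(parent, k -> new ArrayList<>())
                      .add(code + " (" + text + ")");
        }
        System.out.println(childrenOf.get("LP133024-2"));
    }
}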

I mention this since in doing the sort of mapping you are describing you are concerned about ambiguity (one of your concepts maps to multiple possible target concepts), the answer to which I would argue is that each target scheme should (usually +/- ideally) have a more general concept that exactly matches yours, though it may also have more specialized variants of it. I.e., the target scheme needs to be treated not as a flat list, but as it is intended, with hierarchical or component-based relationships, and varying degrees of specificity.

This is certainly true of RadLex, in which for any specific concept, there will be (or can be) a more general concept that does not have a particular attribute (modifier) value.
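Schematically (codes from the LOINC example above; the attribute names are just my own shorthand), the "ambiguity" disappears once one matches on attribute sets rather than on a flat list of strings:

import java.util.*;

// A source study matches the target concept whose attribute set it equals
// exactly; more specialized variants (extra attributes) are not ambiguous
// matches, they are children of the match.
public class HierarchicalMatch {
    record Concept(String code, Set<String> attributes) {}

    public static void main(String[] args) {
        List<Concept> target = List.of(
            new Concept("44115-4", Set.of("CT", "ABDOMEN", "PELVIS")),
            new Concept("36813-4", Set.of("CT", "ABDOMEN", "PELVIS", "IV_CONTRAST")));

        Set<String> source = Set.of("CT", "ABDOMEN", "PELVIS");  // contrast unspecified
        target.stream()
              .filter(c -> c.attributes().equals(source))
              .forEach(c -> System.out.println("exact match: " + c.code()));
    }
}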

I also think it is important not to obsess about the resultant string representation. I.e., not to be concerned about whether a concept appears as "CT UROGRAPHY" or "CT ABDOMEN PELVIS UROGRAPHY" but rather what it "means", since one can always use synonyms as opposed to insisting on some supposedly canonical string representation. I.e., how a coded concept is defined is more important to me than how it is represented.

As I am sure you know, this artifact of the construction of RadLex synonyms is a consequence of it literally translating two separate attributes of each concept, the Body Region and the Anatomic Focus, which it was decided to model separately in RadLex.

Not to say that variants of string representation (synonyms) are not important for lexical (as opposed to semantic) matching, and I do find it very convenient when given a string like "CT Urography" to find an exact match, for example, amongst the SNOMED synonyms:

419017005 2 Computed tomography urography (procedure) XUdjP P5-08096 1

2574018018 8 419017005 Computed tomography urography (procedure) 0 3 en
2577327017 8 419017005 CT urography 1 1 en
2580076012 8 419017005 CT urogram 1 2 en
2580077015 8 419017005 Computed tomography urography 0 2 en

even though that turns out to be a retired (duplicate) concept, which is replaced by:

419084009 0 Computed tomography of urinary tract (procedure) XUdhP P5-080C0 0

2575262011 0 419084009 Computed tomography of urinary tract (procedure) 0 3 en
2578703015 0 419084009 CT of urinary tract 1 1 en
2580740017 0 419084009 Computed tomography of urinary tract 0 2 en

To your point about completeness though, there is no similar correlate of this concept in LOINC, and in RadLex this is modeled "incorrectly" in my opinion, as a "modality modifier" as opposed to an "anatomic focus" (i.e., a urogram is an anatomic focus on the urinary tract, arguably not a technique per se, unless one gets into specifying intravenous, anterograde or retrograde variants, which might be legitimate). As a matter of expediency, each "standard" scheme has made various choices in this regard when faced with scenarios that do not fit their model conveniently (and models are refined over time).

This example also highlights the fact that all coding schemes evolve, as in this case a concept (initially added as a US national extension to SNOMED I believe) was added then made redundant as the modeling improved. Or to put it another way, if you don't like it, it can be changed/fixed.
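A lexical lookup therefore wants a redirect table alongside the synonym table; in miniature (the maps below are toy stand-ins for the SNOMED release files, using the identifiers above):

import java.util.*;

// Match a string against synonyms, then follow the retirement redirect
// from the duplicate concept (419017005) to its replacement (419084009).
public class SynonymLookup {
    public static void main(String[] args) {
        Map<String,String> conceptBySynonym = new HashMap<>();
        conceptBySynonym.put("ct urography", "419017005");
        conceptBySynonym.put("ct urogram", "419017005");
        conceptBySynonym.put("ct of urinary tract", "419084009");

        Map<String,String> replacedBy = Map.of("419017005", "419084009");

        String concept = conceptBySynonym.get("CT Urography".toLowerCase());
        while (concept != null && replacedBy.containsKey(concept)) {
            concept = replacedBy.get(concept);    // follow the retirement chain
        }
        System.out.println(concept);              // 419084009
    }
}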

All of the existing standard coding schemes seem to have been developed in a similar manner to yours though, by iterative refinement based on procedures encountered in the real world (based on the needs of local lists), and when the effort is put into adequately describing each concept by its attributes and mapping the models of each scheme, as well as dealing with discrepancies introduced by inadequate or incorrect modeling, the result should be the union of all necessary concepts, from the general to the specific, and there should be no further "ambiguity" or "incompleteness".

Anyway, to the extent that your internal "standard" set of concepts represents yet another pass through the domain of all possible procedure codes, it would be useful to evaluate it, if you can share it. I have looked at your web pages on previous occasions, and found your "Normalized RIS+" page to be particularly interesting (http://www.radmapps.com/NormalizedRIS+.html).

It would also be an interesting validation of your text recognition methodology to feed all the various text string synonyms in SNOMED, LOINC, RadLex and NICIP into your mapping system as if they were used at a customer's site, and see how well the matching does, if it detects the same concepts for synonyms, etc.

As for sorting by frequency and establishing a threshold, I am not sure that I would not want to do everything … even if a procedure is rarely performed at most sites, or even rarely within a single site, it is still important to encode it. E.g., not everyone orders or performs a "Wada test", but when you need to, you need to. It is certainly true though, that there will always be "local" codes needed for exotic or novel stuff, since no standard can ever anticipate every change in the real world, and will always lag behind.

David

PS. Since you mentioned the ACR Dose Index Registry, this was indeed an excellent test case, and it led to a significant change in direction for RadLex Playbook. The choice of RadLex Playbook for this use case was as much political as practical, involved a certain amount of ignorance about what was already available in LOINC and SNOMED (not to mention parochialism), and, given the relatively small subset of procedures being processed initially, was arguably overkill anyway. Fortunately, the exercise highlighted major deficiencies in the earlier releases of RadLex Playbook that were later mitigated (especially excessive permutation of the attributes beyond what was practical or necessary in the real world). And the issue may be moot in future as ACR Common (yet another coding scheme) emerges from behind the curtain in Reston.

PPS. One of the challenges that you might not have taken on yet is dealing with languages other than (US) English … in both my international clinical trials and radiation dose work I have played around with recognizing other language synonyms in text descriptions, and had quite good success even with very crude approaches, both for fully elucidated words as well as abbreviations (see com.pixelmed.anatproc package).

E.g., to find "ankle", I matched:

"Ankle",
"Tobillo"/*ES*/,
"Knöchel"/*DE*/,
"Enkel"/*NL*/,
"Cheville"/*FR*/,
"Tornozelo"/*PT*/,
"αστράγαλος"/*GR*/,
"足首"/*JP*/,
"발목"/*KR*/,
"лодыжка"/*RU*/

or "cervical spine":

"CS",
"CWK"/*NL*/,
"CWZ"/*NL*/,
"HWS"/*DE*/,
"H Rygg"/*SE*/,
"Cspine",
"C spine",
"Spine Cervical",
"Cervical",
"Cervic"/*abbrev*/,
"Kaelalülid"/*EE*/,
"KRÈNÍ OBRATLE"/*CZ*/,
"Halswervels"/*NL*/,
"Vertebrae cervicalis"/*NL*/,
"Wervel hals"/*NL*/,
"Kaulanikamat"/*FI*/,
"Rachis cervical"/*FR*/,
"Vertèbre cervicale"/*FR*/,
"Vertèbres cervicales"/*FR*/,
"COLONNE CERVICALE"/*FR*/,
"CERVICALE"/*FR*/,
"Halswirbel"/*DE*/,
"Vertebrae cervicales"/*DE*/,
"Vertebre cervicali"/*IT*/,
"頚椎"/*JP*/,
"頸椎"/*JP*/,
"Vértebras Cervicais"/*PT*/,
"ШЕЙНЫЕ ПОЗВОНКИ"/*RU*/,
"columna cervical"/*ES*/,
"columna cerv"/*ES abbrev*/,
"columna espinal cervical"/*ES*/,
"columna vertebral cervical"/*ES*/,
"vértebras cervicales"/*ES*/,
"Cervikalkotor"/*SE*/,
"Halskotor"/*SE*/,
"Halsrygg"/*SE*/,
"Cervicale wervelzuil"/*BE*/,
"C chrbtica"/*SK*/

(Sorry the Kanji, Hangul and Cyrillic characters get mapped to entities in this forum system, but you get the idea).

As you can imagine, this was not a terribly robust approach, but had a good enough retrieval rate for my purposes to be worthwhile, and as you can see from the examples, grew empirically.
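Boiled down, the approach is little more than this (a trimmed, hypothetical stand-in for what com.pixelmed.anatproc actually does, with only a few of the synonyms above):

import java.util.*;

// Crude case-insensitive containment against per-concept synonym lists.
// A real matcher would try longer synonyms first and respect word
// boundaries, so that e.g. "c spine" cannot fire spuriously.
public class CrudeMatcher {
    static final Map<String,List<String>> SYNONYMS = Map.of(
        "Ankle", List.of("ankle", "tobillo", "knöchel", "cheville", "足首"),
        "Cervical spine", List.of("cervical spine", "c spine", "cspine",
                                  "hws", "halswirbel", "rachis cervical"));

    static String match(String description) {
        String d = description.toLowerCase();
        for (Map.Entry<String,List<String>> e : SYNONYMS.entrySet())
            for (String syn : e.getValue())
                if (d.contains(syn)) return e.getKey();
        return null;
    }

    public static void main(String[] args) {
        System.out.println(match("RX RACHIS CERVICAL F+P"));  // Cervical spine
    }
}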
Joel Lidstrom posted on Tuesday, May 27, 2014 - 04:26 pm
Thanks, David, for all the good food for thought. Let me digest what I can, and then send something that might be a starting point for assessing our ability to map "...the various text string synonyms in SNOMED, LOINC, RadLex and NICIP..."

It is the loveliest day in Minnesota!
Joel Lidstrom posted on Monday, June 02, 2014 - 10:17 pm
Hi David,

I hope I’ve not wandered too far into the pea patch here, but I wanted to reply to your last response to me.

There is an interesting difference in perspective between the vast and invaluable work that has been done, whether in SNOMED-CT, LOINC, NICIP, or RadLex, and the humble work that RadMapps is doing. I described it as ‘the other end of the horse’, and so I believe it to be. While what you say is true, “….the target scheme needs to be treated not as a flat list, but as it is intended, with hierarchical or component based relationships, and varying degrees of specificity”, our challenge in designing a tool to normalize procedure code descriptions was that connections to an overarching structure couldn’t be made until the mapping process was complete. When a human maps an “Abdomen” procedure, we begin with “Abdomen” and follow the hierarchy to a desired conclusion. But to our mapping algorithm, an incoming text string is an unknown; RadMapps has no reason to look first to Abdomen, or any other body region. The best it can do is map every term it finds, assemble them, resolve conflicts, and eliminate redundancies. Once complete, the value that it returns should be a precise representation of the study that was performed, one which finally can be positioned hierarchically.
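Schematically, the order of operations is: tag everything found, assemble, then resolve (the single redundancy rule below is made up purely for illustration, not our actual logic):

import java.util.*;

// Tag every recognized term first; only then resolve conflicts and drop
// redundancies, with no hierarchy assumed up front.
public class AssembleThenResolve {
    public static void main(String[] args) {
        // Tags as already extracted from, say, "CT ABDOMEN PELVIS UROGRAPHY".
        Set<String> tags = new LinkedHashSet<>(
            List.of("CT", "ABDOMEN", "PELVIS", "UROGRAPHY"));

        // Illustrative rule: UROGRAPHY implies abdomen+pelvis coverage,
        // so the bare regions are redundant for output purposes.
        if (tags.contains("UROGRAPHY")) {
            tags.remove("ABDOMEN");
            tags.remove("PELVIS");
        }
        System.out.println(String.join(" ", tags));  // CT UROGRAPHY
    }
}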

Of course there is an obvious difference between this “retrospective” mapping of procedures, and mapping for current use. For example, when our software encounters a ‘CT ABDOMEN’ in a PACS database, that is as far as we can go to describe it. But if the same ‘CT ABDOMEN’ were to be the proposed scan submitted for clinical decision support, the complete hierarchy would need to be applied so that a more definitive choice can be made. Even “CT ABDOMEN WO/W” may not be as valuable a description as “Triphasic Liver study”, or “Renal Mass Protocol”. I can only hope that clearly expressed exam descriptions will become the standard in the future, positioned among a hierarchy of exams and encoded to define optimal hanging of related prior exams.

I am encouraged that you believe that “As for sorting by frequency and establishing a threshold, I am not sure that I would not want to do everything.” And your reference to a Wada test made me smile. In our database we have several “Wada” exam descriptions. I had mapped them to “Angio” and “Brain”, but for clarification I will add “Wada” to our exam codes attributes table.

I passionately believe that a complete list cannot be easily made by individuals or committees. And I confess that it is completely serendipitous that we can do it. Never did I envision that, having engineered the ability to normalize a large database of procedures, it would give us the ability to create an all-encompassing list of orderables, accurately and uniformly expressed.

Attached are two spreadsheets for your review. The first is simply our normalization of LOINC radiologic exam descriptions. (I didn’t include NM studies because our lexicon is still expanding to accommodate that modality’s wealth of descriptors.) RadMapps did best in CT and MR and pretty well in US; I’ll use this exercise to augment its plain film and nuc med capability. (You’ll no doubt find a smattering of errors. I’m always on the lookout for incorrect mappings; I should live so long to eradicate them all.) The second correlates a large collection of exams to the RadLex Playbook. I’ll run the same for LOINC and NICIP codes after making sure my LOINC lexicon is a bit improved and my NICIP data is up to date. I’ll be very appreciative of your, or anyone else’s, critique.

And then, having wandered far, I do have some simple questions about body part mapping with respect to comparison hanging. I’ll follow up with those on another day.

Attachment: LOINC Mappings.xls (671.2 k)
Attachment: Many PCs mapped to RadLex.xls (6955.5 k)
David Clunie posted on Tuesday, June 03, 2014 - 12:43 am
Thanks Joel ... I will take a look at your files.

FYI ... http://link.springer.com/article/10.1007/s10278-013-9663-y

David
Joel Lidstrom posted on Tuesday, June 03, 2014 - 04:55 pm
And wouldn't you know it. Right before I ran those lists I tried to get fancy with a "Right" mapping statement to catch some funky ones, and a misplaced parenthesis completely trashed it. The many "RT [body part]" procedures that didn't map to Right are fixed...
Joel Lidstrom posted on Wednesday, June 04, 2014 - 07:39 pm
David, here's a look at more structured exam codes (from miscellaneous RIS lists), mapped to LOINC with a RadMapps-generated exam description for all that find no LOINC correlate.

Keep in mind that the granularity of output is both important and easily altered. For example, I notice that a "CT Radial Head" is generating its own exam description instead of finding a match with "Elbow" exams. We easily create granularity definitions based on application; in this case Radial Head would better have been mapped as an Elbow. I doubt that LOINC uses the term.
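A granularity definition in this sense is hardly more than a roll-up table; for illustration (the table entries are made up):

import java.util.*;

// Roll a fine-grained body part up to its coarser parent when the target
// scheme lacks the finer term.
public class GranularityRollUp {
    static final Map<String,String> ROLL_UP = Map.of(
        "RADIAL HEAD", "ELBOW",
        "ODONTOID", "CERVICAL SPINE");

    public static void main(String[] args) {
        String part = "RADIAL HEAD";
        System.out.println(ROLL_UP.getOrDefault(part, part));  // ELBOW
    }
}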

Ciao.
Attachment: Misc RIS lists to LOINC.xlsx (485.9 k)
Simon Hadley posted on Tuesday, June 17, 2014 - 08:21 am
All,

At the request of others, I enclose the body parts list from GOSH in Excel format this time.

Thanks,
Simon
Attachment: NICIP_Code_Body_Region_RIS_Mapping_GOSH_20140522_v1.2.xlsx (109.7 k)
Ed McDonagh posted on Tuesday, June 17, 2014 - 11:06 am
Hint: if you try to download an .xlsx file from this forum, the file-type suffix will be changed to '.unk', standing for unknown.

If you don't have 'hide extensions for known file types' checked in your file browser, then you can simply change the .unk to .xlsx and all will be fine.

If you do have that checked, then you can open the file from Excel (make sure you change the file type in the dialogue from Excel files to All files). You will get a message about the extension not matching, but proceed and it opens normally.

Plea for forum administrators: please fix this! I think the file type extensions dictionary needs extending.
Neelam Dugar posted on Thursday, June 19, 2014 - 10:50 pm
http://dclunie.blogspot.co.uk/2013/07/pre-fetching-zombie-apocalypse-or.html

I found David's blog interesting. What came to mind was that, in practice, radiologists DO NOT use body parts to sort or filter their studies. So why such a fascination with body parts among our PACS vendors, now finding its way into XDS-I as well? The answer is simple: prefetching. PACS systems try to predict which studies radiologists will want to review, instead of presenting all studies and letting radiologists make the choice.
Body part is tied to pre-fetch rules; it loses its importance if data is locally hosted.
Andrew Downie posted on Thursday, June 19, 2014 - 11:07 pm
Not so. My colleagues wanted it for better display protocols, which is why I resurrected the thread. However, I'm still uncertain whether it is worth the effort.