"Whilst artificial intelligence (AI) is unlikely to replace radiologists any time soon, a new breed of machine learning-based software applications is poised to take on many of their tedious, repetitive, and time-consuming tasks -- improving productivity and freeing up more time to focus on value-added activities.
In most countries, radiologists are already operating at, or near, capacity; any further gains in efficiency are likely to be derived from the use of "intelligent" workflow software tools."
"For the present, at least, the consensus is that AI will aid radiologists, but not replace them. As Lundstrom says: "The role of the radiologist will change but it will be more like a Formula One driver kind of role. You have this extremely powerful machine but you need a very competent driver to know where to go and how to get there in the most effective and efficient way."
We need PACS vendors to integrate with CAD/AI so that the output is presented as a DICOM SR report in a separate series in PACS. This will transform how radiologists work!
For me this is a really interesting area. Imagine, over all the years I have worked in medical imaging, and with my background in signal processing, the number of times I have thought, "a computer could do a better job of pattern analysis in that image than a radiologist". Almost a heretical thought - right?
Of course, pattern analysis is one thing, but comparing a pattern to a wealth of experience (a list of remembered similar cases) and seeing the pattern in the context of other clinical information (which sub-sets of the list of similar patterns are relevant to the clinical history) .. is entirely another. Or is it?
Yes, it is early days. The 2-d comparison in the previous paragraph may appear to be quantifiable (codified), or able to be made sense of using Natural Language Processing (or a weighted combination of the two). But I still think we are in early days. Early and exciting.
posted on Wednesday, April 26, 2017 - 03:57 pm
It's not just PACS vendors that are getting involved; the biggest breakthrough will come via data science teams working with the reporting radiographers and radiologists. My team was recently involved in investigating ways of improving the early detection of lung cancer, which we'll see the results of soon. FYI, my team learnt a lot from all the other competitors, but didn't make it all the way.
Once the results are available, the methodologies and code will be freely available to adapt, as we had to use open source/standards when submitting our entries.
posted on Wednesday, April 26, 2017 - 04:16 pm
CAD software that outperforms fellowship-trained breast imaging radiologists in classifying malignant and benign calcifications was shared at ECR 2017, so little by little these tools are being developed to help the reporting radiographers and radiologists.
In the short term it will be specific detection techniques like these, which will be built up over time to detect more.
A large number of "Computer Vision AI" applications are appearing in the market--also called CAD! These are exciting times ahead! Moving from rule-based algorithms to machine learning and deep learning algorithms is making detection more accurate with fewer false positives. NLP allows the machine learning algorithms to learn from radiologists' narrative reports and improve.
Robin, you are right--the combination of Computer Vision AI and Natural Language Processing AI is very powerful, and will be the Uber-style transformation of radiology. Radiologists will be only too glad for the help that AI is going to provide them!
Interoperability standards are already here. A CAD server can send DICOM SR objects to a PACS image archive. These will then be displayed on the PACS display as a separate series with the CAD markers, which can be toggled on and off by the radiologist.
To take advantage of AI, a PACS must be able to store and display DICOM SR objects from a CAD server.
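To make the integration concrete, here is a rough sketch (plain Python data structures, not a real DICOM implementation) of the kind of content a CAD server would encode in a DICOM SR object before sending it to the PACS archive. The field names mirror DICOM attributes but the structure is illustrative only; a production system would build the actual object with a toolkit such as pydicom or dcmtk.

```python
# Simplified stand-in for a CAD SR document: one content item per
# finding, carried as a separate "SR" series the PACS can display
# and toggle as CAD markers. Field names are illustrative.

def make_cad_sr(study_uid, findings):
    """Build a simplified model of a Mammography CAD SR style object.

    findings: list of (x, y, label) tuples from the CAD detector.
    """
    return {
        "SOPClassUID": "1.2.840.10008.5.1.4.1.1.88.50",  # Mammography CAD SR Storage
        "StudyInstanceUID": study_uid,
        "Modality": "SR",
        "SeriesDescription": "CAD findings",  # appears as a separate series in PACS
        "ContentSequence": [
            {
                "ValueType": "SCOORD",     # spatial coordinates content item
                "GraphicType": "POINT",
                "GraphicData": (x, y),
                "ConceptName": label,      # e.g. "Calcification cluster"
            }
            for (x, y, label) in findings
        ],
    }

sr = make_cad_sr("1.2.3.4.5", [(120.0, 340.5, "Calcification cluster")])
print(len(sr["ContentSequence"]))  # one content item per detected finding
```

The PACS side then only needs to recognise the SR SOP class, archive it against the same StudyInstanceUID, and render the SCOORD items as overlay markers.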
A long time ago (2002), for my Master's degree research project, I used advanced systems models to detect fractures using texture analysis. At the time my tutor asked me if such a system could replace a radiologist, and I said maybe someday, but probably not in my lifetime.
My reasoning was as follows: you can split image understanding into two types based on the two forms of information, i.e. syntactic understanding and semantic understanding. Syntactic understanding is bottom-up pattern recognition, and is mostly done close to the pixel level of image information. This is exemplified in the early CAD systems that were developed for breast imaging. Ultimately it's mathematical abnormality detection at the pixel level, and importantly, the machine knows nothing. It's just maths.
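To illustrate what "just maths" means here, a toy sketch of purely syntactic, pixel-level detection: flag image blocks whose local variance deviates far from the image-wide average. The window size and threshold factor are arbitrary choices for the example; the point is that the machine attaches no meaning to the flagged regions.

```python
# Toy texture-style anomaly detector: arithmetic on pixel values only.

def local_variances(image, win=2):
    """Variance of each win x win block of a 2-D list of pixel values."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            block = [image[r + i][c + j] for i in range(win) for j in range(win)]
            mean = sum(block) / len(block)
            out.append(sum((p - mean) ** 2 for p in block) / len(block))
    return out

def flag_anomalies(image, factor=2.0):
    """Return indices of blocks whose variance exceeds factor * average."""
    vs = local_variances(image)
    avg = sum(vs) / len(vs)
    return [i for i, v in enumerate(vs) if v > factor * avg]

smooth = [[10] * 4 for _ in range(4)]
smooth[2][2] = 200                     # one bright outlier pixel
print(flag_anomalies(smooth))          # -> [3]: only the block containing the outlier
```

Early breast-imaging CAD was far more sophisticated than this, but of the same character: no knowledge of anatomy or pathology, only statistics of pixel neighbourhoods.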
On the other hand, a radiologist performs very high-level semantic understanding. Although this starts off as bottom-up syntactic understanding in the visual centres of the brain, it is the top-down semantic analysis that is uniquely significant. The radiologist knows what they are looking at, and understands the complex relationships between the visual entities in the image and other sources of information, such as the clinical history and their radiological knowledge.
So at the time it was really obvious that computers could not do semantic image understanding in the way that a radiologist is required to do.
However, although I'm not involved in that kind of research now, I have tried to keep up to date. So about two or three years ago I started hearing about how good the systems at DeepMind and other places were getting at performing semantic image understanding, and to be honest I was shocked. The research has clearly gone into the exponential part of the curve in terms of the speed of development. I believe it was Carl Sagan who said we overestimate technological development in the short term, and grossly underestimate it in the medium to long term. I certainly underestimated how quickly they would get to this point. For example, I'm going to see a talk from someone at Google in a couple of weeks' time about how their systems are learning about the world from YouTube videos. And that is a massive jump from what was possible when I did my MSc.
I think the key issue now will be how these systems will be integrated with the radiologist's cognitive workflow. So guess what I'm researching for my PhD? What I'm currently trying to do is develop a basic workflow model of radiologists' workstation attentional behaviours by using eye-tracking while they work on live patient data. If I can use Claes's analogy: if you are driving a Formula One car you don't want to have to read a full set of architect's drawings in order to take the next turn at the optimum speed. Similarly, if an AI is going to tell you something important, it needs to be done in the right way and at the right time with respect to the radiologist's reporting workflow.
Anyway, that was meant to be a couple of short comments. What I'm trying to say is that the field is developing a lot quicker than you might be aware of.
P.S. If anybody knows of any research-oriented radiology departments out there, I'm still looking for two more clinical sites to get the rest of my data. I need five consultants doing twenty mins of chest reporting at each site. All the research is done with full NHS Ethics approval.
Thanks Simon. Very interesting. Hope you get some volunteers.
What has really changed since the early CAD days: 1. Machine learning and deep learning algorithms, which result in better accuracy with fewer false positives (CADe). 2. Use of Natural Language Processing AI, which is already part of daily life (Siri, Alexa, etc.). NLP reads and understands the request card and the other clinical information that radiologists use, and is being used to guide the judgements made by radiologists (CADx).
Radiologists' report content: 1. Answer the clinical question. 2. When an abnormality is seen, provide a tentative or differential diagnosis, and advise on the next step of management.
AI well integrated with the PACS workflow will transform how we work and make us better.
I am also interested in another form of AI: CAST (Computer Aided Simple Triage). Here the CAD detects an abnormality and raises the priority of reporting. This requires HL7 messaging with the ability to send a high-priority update in the ORM message to the RIS.
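As a hedged sketch of that messaging step: when the CAD flags an abnormality, an HL7 v2 ORM message goes to the RIS with the order priority raised to STAT. Placing the priority in ORC-7 component 6 follows common HL7 v2.x usage of the quantity/timing field, but real interfaces vary by site and version, and the application names and IDs below are made up.

```python
# Minimal pipe-delimited ORM^O01 builder for a CAST priority update.
# "XO" (change order) in ORC-1 and the field layout are illustrative;
# site interface specifications should be consulted for a real feed.

def build_orm(patient_id, accession, stat=False):
    """Assemble a minimal ORM^O01 message, optionally with STAT priority."""
    priority = "S" if stat else "R"               # S = stat, R = routine
    segments = [
        "MSH|^~\\&|CADSRV|RAD|RIS|RAD|202304261600||ORM^O01|MSG0001|P|2.3",
        f"PID|||{patient_id}",
        f"ORC|XO|{accession}|||||^^^^^{priority}",  # ORC-7.6 = priority
    ]
    return "\r".join(segments)                     # HL7 v2 segment separator

msg = build_orm("12345", "ACC001", stat=True)
print("^^^^^S" in msg)  # priority component set to STAT
```

On receipt, the RIS would re-sort its reporting worklist so the flagged study surfaces first.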
NLP could also be used for developing deep learning algorithms, with the CAD server reading and understanding the radiologist's final narrative report and corroborating it against its own report.
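A much-simplified sketch of that feedback loop: compare the CAD server's own findings against the radiologist's final narrative report to measure concordance. A real system would need proper NLP (negation detection, ontology mapping to handle synonyms); here a bag-of-words overlap stands in for that step, and all names are illustrative.

```python
# Crude concordance check between CAD findings and a narrative report.

def tokens(text):
    """Lowercased words with trailing punctuation stripped."""
    return set(w.strip(".,;:").lower() for w in text.split())

def concordance(cad_findings, report_text):
    """Fraction of CAD finding phrases whose words all appear in the report."""
    found = tokens(report_text)
    hits = [f for f in cad_findings if tokens(f) <= found]
    return len(hits) / len(cad_findings) if cad_findings else 1.0

cad = ["pulmonary nodule", "pleural effusion"]
report = "There is a small pulmonary nodule and a small left pleural effusion."
print(concordance(cad, report))  # -> 1.0
```

Discordant cases (CAD findings the radiologist did not mention, or vice versa) would then be the most valuable training examples for improving the detector.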
AI will make radiologists' work much more interesting and professionally satisfying!
Yes, there are a number of areas where I could see AI systems adding value in the near future:
- Examination vetting
- Selection of structured reporting templates
- Diagnostic triage/abnormality detection
- Filtering and presentation of pertinent material from other information sources such as EPRs
- Providing relevant just-in-time decision support and local protocols
- Report quality control
But I still can't see them replacing radiologists for quite some time. E.g. aeroplanes have very sophisticated autopilots these days, but they also have two pilots.
posted on Friday, April 28, 2017 - 08:55 am
Your analogy to aeroplanes is well made. To be honest, I don't know quite how far off we are from seeing this technology in clinical practice. What it does show, though, is that there is very real promise here to improve patient care, if used wisely. Most planes land on autopilot, with pilots having to land manually in certain conditions and/or with a frequency sufficient to maintain their skills should the autopilot fail. I could see a scenario where most initial reports are generated by AI (i.e. a first read) but all are checked by a radiologist/reporter, who would need to do a number of reports, as at present, to maintain their skills. If AI can deliver, you would expect capacity to go up - something everyone needs!
I don't think the analogy to aircraft is really that helpful, either in terms of the engineering involved or the human factors interaction.
Auto-pilots are relatively "dumb" systems that follow specific signals to manipulate the flight controls to achieve a specific flight path. The "deep learning" hype that we are hearing about is targeted at way more sophisticated applications.
Monitoring the autopilot controlled flight path has very little in common with anything a radiologist does ... perhaps the closest analogy to an autopilot are hanging protocols and prefetching of relevant priors - something you could do manually but the machine can do for you, perhaps better.
Also, AFAIK, most planes do NOT land on autopilot (hard to find a good reference for statistics on this, but for example, "only a minute proportion of approaches have the combination of crew qualification, airport equipment, aircraft systems and a specific subset of weather conditions that result in pilot-supervised automated landings" "http://www.picma.org.uk/?q=content/landing-visual-cues").
And they don't "auto-taxi" (turn off the runway after slowing down in the landing rollout, e.g. see "https://www.airlinepilotforums.com/technical/65056-autoland-cat-3-landing.html"), probably not because they couldn't follow a path on the airport like the pilot follows the airport diagram and ground controller instructions, but because the sophistication to avoid running into other aircraft, ground vehicles, temporary obstructions, etc., is a level beyond what is currently implemented.
I.e., "autoland" (CAT III) systems are not "smart", they are just "precise" (and hopefully "accurate").
It's great to see that the AI/Machine Learning debate is becoming more mature and sophisticated. I was getting a little bored of unsubstantiated spin from companies with huge marketing budgets when all I really wanted was to understand: what can you do now? What's in the near future? What mechanism are you developing to allow cross-supplier integration?