The great AI debate has largely missed the point. In a radiology context, some of the earliest discussions set the tone for a conversation that still appears to be skewed in the wrong direction. Perhaps the most infamous contribution came from Professor Geoffrey Hinton, who stated: “It’s quite obvious that we should stop training radiologists.” There are many reasons why this statement hasn’t aged well (and many articles that outline them), but it is perhaps more worthwhile to explore the less-debated parts of the conversation.
The wrong question: “Should AI replace rads?”
Although this article will give a resounding answer of ‘no’, the very act of asking such a question suggests a poor understanding of the difference between an artificially intelligent machine and a human being. While it is certainly true that AI can process certain tasks more quickly and for longer than a human, it cannot provide any level of insight beyond its original brief, nor can it deal with an unfamiliar or evolving situation.1
The right question: “How can AI help rads?”
In answering this question, we prefer to talk in terms of ‘intelligence amplification’. If we see machine learning tools as an enhancement rather than a replacement, the conversation becomes more rational. Instead of trying to “judge” medical images and determine pathology, computer vision can be leveraged to find and sort medical images, making comparative studies easier and faster. The radiologist can also be spared menial, repetitive tasks, e.g. through anatomic navigation that highlights and auto-labels anatomical structures, and natural language processing that improves reporting speed, accuracy, and readability. The amplification occurs because the radiologist gains the time and cognitive capacity to perform tasks that cannot be automated. When AI is applied intuitively, rads will have more time for diagnostic reasoning, and more opportunity to build collaborative relationships with referring physicians.
When AI doesn’t add value
To add value, any tool must be accurate to the point where it doesn’t produce false or ambiguous readings (particularly relevant for diagnostic tools), and it must integrate smoothly into the rads’ natural workflow. Mammography CAD is an example of a tool found lacking in both areas: it was described as having ‘no appreciable effect on the accuracy of radiologists’, with ‘alert fatigue’ and ‘false positives’ cited as reasons.2 It is perhaps telling that very few AI software devices are currently in routine clinical use, despite over 130 approvals by the FDA.3
The right debate: The interface between man and machine
Finding the optimum balance of human judgment and computerized automation is essential for any AI system to succeed. How do you achieve the right level of automation to radically improve efficiency, while enabling enough human influence to guarantee safety in a process that is a matter of life and death? The quality of the interface is key. An illustration of this came when certain grandmaster chess players were defeated by amateurs in an experiment where both sides had access to a chess computer: the winning players were those who developed the most effective interface with the machine. For radiology specifically, an important first step is to ensure the workstation is streamlined as far as possible, i.e. a unified workspace that links together the previously disparate functions of viewer, reporter and worklist.
The right answer: “Radiologists who use AI will replace radiologists who don’t”
We resoundingly agree with Dr Curt Langlotz. Patients will receive the best care when we combine the strongest elements of man and machine, rather than replacing one with the other. By shifting how we think about ‘intelligence’, away from the artificial and towards amplification, we can empower radiologists to deliver their very best work.