New AI diagnostic tool knows when to defer to a human, MIT researchers say

Machine learning researchers at MIT’s Computer Science and Artificial Intelligence Lab, or CSAIL, have developed a new AI diagnostic system they say can do two things: make a decision or diagnosis based on its digital findings or, crucially, recognize its own limitations and turn to a carbon-based lifeform who might make a more informed decision.

WHY IT MATTERS
As it learns, the technology can also adapt how often it defers to human clinicians based on their availability and levels of experience, according to CSAIL.

“Machine learning systems are now being deployed in settings to [complement] human decision makers,” write CSAIL researchers Hussein Mozannar and David Sontag in a new paper recently presented at the International Conference on Machine Learning. The work touches not just on clinical applications of AI, but also on areas such as content moderation on social media sites such as Facebook and YouTube.

“These models are either used as a tool to help the downstream human decision maker, with judges relying on algorithmic risk assessment tools and risk scores being used in the ICU, or instead these learning models are solely used to make the final prediction on a selected subset of examples.”

In healthcare, they point out, “deep neural networks can outperform radiologists in detecting pneumonia from chest X-rays, however, many obstacles are limiting complete automation, an intermediate step to automating this task will be the use of models as triage tools to complement radiologist expertise.

“Our focus in this work is to give theoretically sound approaches for machine learning models that can either predict or defer the decision to a downstream expert to complement and augment their capabilities.”

THE LARGER TREND
Among the tasks the machine learning system was trained on was assessing chest X-rays to potentially diagnose conditions such as lung collapse (atelectasis) and enlarged heart (cardiomegaly).

Importantly, the system was developed with two parts, according to MIT researchers: a so-called “classifier,” designed to predict a certain subset of tasks, and a “rejector,” which decides whether a specific task should be handled by the classifier or by a human expert.
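For readers curious how such a predict-or-defer setup can be wired together, here is a minimal sketch in Python/PyTorch. It is not the authors' exact formulation: a shared backbone feeds a classifier head over the diagnostic labels and a rejector head whose score decides whether a case is routed to a human. All names, layer sizes and the threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact formulation) of a predict-or-defer model:
# a shared backbone feeds a "classifier" head over the diagnostic labels and a
# "rejector" head that scores whether the case should go to a human expert.
import torch
import torch.nn as nn

class ClassifierWithRejector(nn.Module):
    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(num_features, 64), nn.ReLU())
        self.classifier = nn.Linear(64, num_classes)  # predicts the diagnosis
        self.rejector = nn.Linear(64, 1)              # scores "defer to expert"

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), self.rejector(h).squeeze(-1)

def predict_or_defer(model, x, defer_threshold=0.0):
    """Return (prediction, defer_flag) for each example in the batch."""
    with torch.no_grad():
        logits, defer_score = model(x)
        preds = logits.argmax(dim=-1)
        defer = defer_score > defer_threshold  # True -> route case to a human
    return preds, defer

# Example usage on random features standing in for, e.g., chest X-ray embeddings.
model = ClassifierWithRejector(num_features=32, num_classes=2)
x = torch.randn(4, 32)
preds, defer = predict_or_defer(model, x)
```

In a full system the two heads would be trained jointly, so the rejector learns when the expert is likely to outperform the model; the hard threshold above is only a stand-in for that learned behavior.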

In experiments focused on medical diagnosis and text and image classification, the team showed that its approach not only achieves better accuracy than baselines, but does so at a lower computational cost and with far fewer training data samples.

While researchers say they haven’t yet tested the system with human experts, they did develop “synthetic experts” to enable them to tweak parameters such as experience and availability.
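As a rough illustration of what a tunable “synthetic expert” might look like, the sketch below simulates an expert whose correctness is governed by an experience parameter and whose responses are withheld with some probability to mimic limited availability. The function, its parameters and the probabilities are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a "synthetic expert": correctness is controlled by an
# "experience" probability, and responses are withheld with some probability to
# mimic limited availability. Parameter names are illustrative, not the paper's.
import random

def synthetic_expert(true_label, num_classes, experience=0.9, availability=0.8, rng=random):
    if rng.random() > availability:
        return None  # expert unavailable for this case
    if rng.random() < experience:
        return true_label  # experienced expert answers correctly
    # otherwise return a random incorrect label
    wrong = [c for c in range(num_classes) if c != true_label]
    return rng.choice(wrong)

# Example: an expert who is right 95% of the time but reviews only half the cases.
opinions = [synthetic_expert(1, num_classes=2, experience=0.95, availability=0.5) for _ in range(10)]
```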

They note that for the machine learning program to work with a new human expert, the algorithm would “need some minimal onboarding to get trained on the person’s particular strengths and weaknesses.”

Interestingly, in the case of cardiomegaly, researchers found that a human-AI hybrid model performed 8% better than either could on its own.

Going forward, Mozannar and Sontag plan to study how the tool works with human experts such as radiologists. They also hope to learn more about how it will process biased expert data, and work with several experts at once.

ON THE RECORD
“In medical environments where doctors don’t have many extra cycles, it’s not the best use of their time to have them look at every single data point from a given patient’s file,” said Mozannar, in a statement. “In that sort of scenario, it’s important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.”

“Our algorithms allow you to optimize for whatever choice you want, whether that’s the specific prediction accuracy or the cost of the expert’s time and effort,” added Sontag. “Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa.”

“There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability,” said Sontag. “We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.”

Twitter: @MikeMiliardHITN
Email the writer: [email protected]

Healthcare IT News is a publication of HIMSS Media.
