AMA Board Chair on AI's opportunities for optimizing healthcare operations

At the Connected Health Conference in Boston next Wednesday, a half-day event will explore the real-world, practical implications of artificial intelligence in healthcare: how it can improve quality, how it can lower costs, how it can optimize physician and patient experience, and how it should be regulated.

The sessions, collectively titled Getting Real With AI, will explore, among other topics, applied AI in clinical practice, how it can be put to work for patient engagement and for addressing social determinants of health, policy implications around safety and algorithmic transparency, and its potential to help providers move more quickly toward value-based reimbursement.

Speakers throughout the morning will include leaders from the Duke Institute for Health Innovation, Conversa Health, Enlightening Results, VisualDx, Kognition.ai and other organizations.

Also taking part in the event, which takes place on Oct. 16 at the Seaport World Trade Center, will be American Medical Association Board Chair Dr. Jesse Ehrenfeld, who spoke to Healthcare IT News this week about AI’s potential for transforming clinical practice and shifting the paradigm toward accountable care.

Q. Philosophically speaking, how does AMA view AI in general – with trepidation or with optimism?

A. Totally optimistically. We see it as an opportunity for technology to become an asset to practice rather than a burden, and to put us in a position to advance our efforts to improve the health of the nation.

Q. When I covered the AI policy recommendations AMA released a few months back, one thing I noticed was the distinction you make between augmented intelligence and AI.

A. We think it’s really important, because the goal is not to replace any individual or group of individuals, physicians or members of the healthcare team – but rather to bring technology into our workflows and clinical environments in a way that enhances the capability of physicians and other healthcare workers. It’s also about how machine learning can become a more effective force working on problems, whether that’s detecting disease, keeping patients healthy or setting up systems that enable more effective care delivery.

Q. As AMA advocates for physicians, what are some of the areas that are top of mind with regard to ensuring that AI is deployed safely, effectively and efficiently within a physician’s workflow?

A. It’s really important that developers include physicians early in the design process for the technologies they’re trying to bring into the marketplace. We have all suffered through the electronic health record experience, where we have seen poor usability, products that don’t adhere to basic user-centered design principles, and tools that are difficult to use and frustrating.

We don’t want that to persist as AI tools are brought into the marketplace. So, a lot of our policy recommendations promote the idea of having thoughtfully designed, high-quality, validated healthcare AI that’s reproducible and transparent and promotes appropriate privacy and security frameworks.

Q. The technology is asserting itself in healthcare in a way that it wasn’t even five years ago. It’s fast-moving. Are you generally pleased with the way you’re seeing it evolve and the feedback you get from physicians?

A. I think it’s too early to tell. There’s tremendous movement in the developer community, as you can see from the amount of venture capital going into supporting the development of various tools and products. But we’ve only seen one FDA-approved device. Personally, I think that was a really thoughtful way to bring a technology into the marketplace, in terms of how they developed and validated their particular product.

But there are dozens of others in the pipeline, and I think they run the gamut. There are certainly exemplars out there, and there are other tools that I don’t think are being developed in a way that will allow them to be successful, because they aren’t incorporating physician perspectives or being very thoughtful about transparency and integration into the clinical workflow.

So I think the jury’s out on what will succeed and what will fail. But we know that when we ask physicians about technology adoption, they ask some very basic questions before they’re willing to incorporate a technology: Does it work? Will I get sued? Who has the liability coverage for a tool whose inner workings I may not understand because it’s a black box? Will it work in my practice, and will I get reimbursed or paid for it?

Those are the kinds of things that developers really need to be conscious of as they are thinking about how their product will be useful in the marketplace.

Q. What are some areas where you think AI has the biggest potential in the near term to improve quality and safety from a clinical standpoint?

A. A lot of people focus on the diagnostic capabilities, and with image recognition and some of the other things I’ve seen, there already are tools that will be very valuable in helping us do interpretation and supporting the diagnostic capabilities of clinicians.

That’s the most obvious thing people are thinking about. And certainly those tools are coming into the marketplace, but there’s a whole set of opportunities around optimizing healthcare operations that I think is less talked about but will be equally valuable.

Frankly, I think there will be tremendous opportunities to use AI to rethink how we engage patients in their own health, and to enable patients to take ownership and control of their health in ways that don’t really happen very naturally today in our encounter-based care system, where the focus is often on illness rather than promoting health.

Q. At Healthcare IT News in October we’re focusing on the theme of Reducing the Cost of Care. One of AMA’s proposals this June was that AI should “enable physicians to prepare for and transition to new care delivery models.” How can it help with that?

A. I think it can help us reimagine the work. I cannot tell you how frustrating it is, every time I’m taking care of patients, when I find myself doing things, interacting with systems, where I know there should be automated solutions that can make that busy clerical work go away. Whether that’s finding information that’s buried in an EMR, or transcribing information into electronic systems, there are obvious places the tools can, again, easily help clinicians make technology an asset, not a burden. And there’s a lot of promise for what AI can offer.

Q. In Boston at the Connected Health Conference you’ll be participating in a roundtable discussion about regulating AI. Given that the technology is evolving so quickly, what are the keys to striking a balance – light enough to enable innovation, but strong enough to ensure that safety remains paramount?

A. One of the things that is really important is that tools adhere to leading standards around reproducibility and transparency. And that’s a moving target, particularly when you think about what transparency means for an AI algorithm.

But it’s really critical. For a system to be adopted and used by a clinician, the clinician has to have trust and faith in the system and its performance characteristics. So understanding how systems are developed and validated, and where they do and don’t work, is just as important as the other considerations. I think that gets lost in some of the conversations now when it comes to the regulatory framework.

There are a number of issues that we’ve been very vocal about in terms of making a distinction between autonomous systems that are locked versus those that are continuously learning. Because we think they carry a different level of risk, and when one is looking at the right regulatory framework for approving and bringing products into the marketplace, understanding that risk is really paramount.

Q. One of your predecessors as AMA Board Chair, Dr. Gerald E. Harmon, said: “We have to make sure the technology does not get ahead of our humanity and creativity as physicians.” What are the keys to balancing the promise of AI with the human factors that make healthcare what it is?

A. I’m very optimistic that technology can continue to support physicians and all healthcare workers to elevate all of our abilities to care for patients. We have seen some instances where the technologies have not lived up to the hype, and in spite of that I remain optimistic that AI will be a suite of tools and applications, in very different parts of the healthcare system, that will ultimately allow us to function more effectively to deliver higher-quality and more cost-effective care to a greater number of patients.

Twitter: @MikeMiliardHITN
Email the writer: [email protected]

Healthcare IT News is a publication of HIMSS Media.
