AI adoption in the clinical care setting has been slow due to patient safety concerns, regulatory restrictions and the risks associated with the technology.
Artificial intelligence in healthcare has proved valuable — to a point. Part one of this two-part series explored areas where AI has had great success in healthcare. Here, learn why AI adoption in direct clinical care is lagging behind its nonclinical care counterparts.
Artificial intelligence is pervading every industry, and healthcare is no exception. But AI adoption in direct clinical care is happening more slowly than in other industries, and more slowly than in other areas of healthcare, for that matter.
That likely comes as no surprise, given the significant regulatory restrictions, ethical concerns and patient safety risks that direct clinical care settings face, not to mention the immaturity of the technology itself. It's true that artificial intelligence has been around for decades, but it is still proving itself as a technology worth the kind of investment required in some areas of healthcare.
AI is, as one expert put it, “a mixed bag” for healthcare organizations, where promise greatly outpaces reality. And while the possibilities may seem endless, the probability of AI replacing a physician is still science fiction.
What’s hindering AI adoption in direct clinical care
If AI in healthcare is going to move deeper into the healthcare organization, it's going to have to overcome some major hurdles. A 2018 study co-authored by Robert Challen, M.D., a researcher at the University of Exeter in the U.K., found that the full clinical value of AI tools has yet to be realized, partly because of unresolved patient safety and ethical concerns about the technology.
For example, deep learning algorithms, like those used in image and speech recognition, generally operate in what's called a black box, making it difficult to know how an algorithm arrived at a conclusion, according to Challen's "Artificial intelligence, bias and clinical safety."
Challen believes there is a lack of communication between the AI research community and healthcare professionals that can result in “naïve” assumptions about the usefulness of products in a real healthcare setting. For AI to reach its potential in direct clinical care, Challen advocated for better interaction between clinicians and AI researchers so that AI research is clinically led rather than driven by advances in the technology.
But the hurdle for AI adoption in clinical care is bigger than that. Alexander Lennox-Miller, senior research analyst at healthcare research company Chilmark Inc., pointed out that the steep regulatory restrictions healthcare organizations have to adhere to make it hard to adopt cutting-edge technology. As a technology that changes based on the new data it encounters, AI could be a particularly hard sell in the clinical setting.
“The central, fundamental core of machine learning technology is it’s iterative and gets better quickly by doing things over and over again,” Lennox-Miller said. “That’s not something that sells well to physicians. Physicians want to know they’re doing it right the first time.”
Chilmark director of research Brian Murphy takes a more conservative approach when it comes to AI in healthcare, seeing it as more hype than reality right now. Healthcare organizations are just beginning to get comfortable with the idea that they’ve got vast amounts of data available to them, and they’re still working out how to best use that data, putting them behind other industries, he said.
Yet AI product vendors see healthcare as a business opportunity. Murphy described this as an example of a “solution in search of a problem” for AI vendors. Take sepsis, for example.
“When you start to talk to them, you get a lot of the same examples of potential uses,” Murphy said. “Probably the one that jumps out is sepsis detection, and just generally dealing with sepsis earlier than conventional techniques would let people do with it. That makes me a little nervous when I hear kind of the same anecdotes from different vendors.”
Still, Murphy sees promise in the technology. “I think there are a lot of things that could be done with AI and machine learning,” he said. “I’m not sure there’s enough trust in AI, at this point, to kind of turn those things on. There is a lot of opportunity, but it feels like it’s such early days at this point.”
Even after healthcare organizations adopt AI tools, Jeff Becker, senior analyst at Forrester, said they'll have a long road ahead of them to bring meaningful change with AI to scale. He said the promise of AI to achieve the triple aim of healthcare — better population health, better patient care and better value — could be significant. But the ROI is still lacking and won't materialize quickly.
“I would say AI has a mixed bag in healthcare,” he said. “It’s not going to be a groundswell in healthcare that changes our ability to care for patients overnight. That’s just not what we have in front of us.”
The future of AI adoption in healthcare
The future role of AI adoption in healthcare is anyone’s guess, Challen said.
The key to adopting and implementing AI lies in finding simple applications where AI tools will make a big difference, he said.
For Becker, the future is rife with possibilities. “I think that there are too many problems in care delivery today for anyone to take the gas off of AI,” he said.
Becker said he thinks there will continue to be a push toward reducing administrative burden and getting physicians back to the patient bedside — and that this is an area that may benefit from AI adoption. On average, physicians spend about two hours doing administrative tasks for every one hour they spend on direct patient care, a model Becker said is not sustainable.
Health IT vendors, such as EHR vendor Cerner and speech recognition company Nuance, are hoping to help. They want to use AI to automate the note-taking process physicians have to go through as a way of alleviating physician administrative burden, he said.
“You’ll continue to see an erosion of the administrative tasks physicians need to do through iterative improvement of AI capabilities,” Becker said. “You see that in medical image analysis, you see it in the way that we are tackling physician notes … those are all forward-looking.”
While some fear AI in healthcare will replace the physician altogether, or at least take some tasks out of the physician's hands, experts like Lennox-Miller stress augmentation rather than automation. Presenting AI tools as an assistive mechanism, as is happening in radiology, could ease that concern, he said.
“As you start to see AI presented more as physician assistance, more as clinical decision support tools, those are going to get faster uptake and become more widely accepted sooner,” Lennox-Miller said.
Date: May 14, 2019
Source: Search HealthIT