AI algorithms in healthcare currently regulated by the FDA are static and require manual updates. The FDA is now looking into what it would take to regulate AI algorithms that are dynamic.
In April, the Food and Drug Administration announced steps it’s taking to consider a new regulatory framework for AI-based medical devices.
The FDA has already authorized AI-based devices for detecting diabetic retinopathy and alerting providers to potential strokes in patients. In a statement, Scott Gottlieb, former commissioner for the FDA, called the devices a “harbinger of progress.”
But the AI-based devices the FDA has approved so far are limited. The approved AI algorithms are what the FDA calls “locked” algorithms, meaning they’re static and don’t continually learn and update. Instead, locked algorithms are updated at intervals by the device manufacturer.
The new regulatory framework would allow for what the FDA calls “adaptive algorithms,” or AI algorithms that learn and adapt through real-world use and wouldn’t need manual updates. This would be done while ensuring the safety of the medical devices, according to Gottlieb’s statement.
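The distinction can be illustrated with a minimal sketch (hypothetical, not drawn from any FDA-cleared device): a locked model applies fixed parameters until the manufacturer ships an update, while an adaptive model adjusts its own parameters as it encounters new real-world data.

```python
# Hypothetical sketch of "locked" vs. "adaptive" algorithms.
# The threshold classifier and learning rule are illustrative only.

class LockedModel:
    """Parameters are fixed until the manufacturer pushes an update."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, value):
        return value > self.threshold


class AdaptiveModel:
    """Parameters shift with each new labeled case seen in the field."""
    def __init__(self, threshold, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, value):
        return value > self.threshold

    def learn(self, value, label):
        # Nudge the threshold in the direction that reduces the error
        # on this case: a missed positive lowers it, a false alarm raises it.
        error = (1 if label else 0) - (1 if self.predict(value) else 0)
        self.threshold -= self.learning_rate * error


locked = LockedModel(threshold=0.5)
adaptive = AdaptiveModel(threshold=0.5)

# Both start identical; only the adaptive model changes through use.
adaptive.learn(0.4, label=True)   # a positive case the model missed
print(locked.threshold)           # unchanged until a manual update
print(adaptive.threshold)         # lowered, so similar cases get flagged
```

The regulatory question the FDA raises is visible even in this toy: the locked model a regulator reviewed is the model patients get, while the adaptive model in the field is no longer the one that was reviewed.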
As part of the announcement, the FDA issued a discussion paper with a list of questions asking for insight from stakeholders, including medical device manufacturers, on how best to regulate software that isn’t static but continually learns from new data it encounters. The comment period is open until June 3.
Framework will drive adoption
Alexander Lennox-Miller, a senior research analyst at Chilmark Inc., a healthcare technology consultancy, believes an AI regulatory framework will drive adoption of AI in healthcare.
Lennox-Miller said the EHR experience soured healthcare systems on big expenditures for new technology, particularly technology that vendors promise will be transformative. But a regulatory framework for AI-based medical devices that verifies what a product can do and establishes trust in the AI’s ability to learn and adapt would take some of that burden off healthcare systems, he said.
“Having a regulatory system in place that providers and physicians trust — and providers trust the FDA — it provides good results,” Lennox-Miller said. “The sooner that happens, the better in terms of adoption of the technology.”
Kate Borten, a health IT and information security expert, believes the FDA’s move to regulate AI in medical devices is a positive one, saying it both “totally changes the game” when it comes to software regulation and raises new questions that will need to be answered.
“It really is quite a different challenge for the FDA and for the country and the world to introduce software that essentially changes,” Borten said. “It’s dynamic and, depending on the risk level, it could mean life or death to a patient if something goes wrong with it.”
Although AI in healthcare has been around for years, its use has grown rapidly in recent ones. Borten said it’s not something the FDA has tried to regulate before, and she believes the time is right to figure out how to do so.
“The FDA is probably doing the best they can with this document and seeking a lot of feedback and asking a lot of questions, which I hope and expect they’ll get from the manufacturers,” Borten said. “It is such a different ball game once you get into AI.”
Christopher McCann, CEO of AI-based remote monitoring device company Current Health, echoed Borten’s and Lennox-Miller’s enthusiasm for regulation. He believes the FDA is taking a good first step in working out how to create rules for software that learns.
“One of the challenges in medical device regulation is regulators want a product to be static, so it’s not changing,” he said. “That is inherently challenging in the machine learning space where you are building models that may look for some kind of learning over time, so they learn from the environment they’re in from new data they’re collecting. They’re constantly evolving.”
Borten emphasized the importance of regulation when it comes to software as a medical device, citing common experiences with predictive text as an example of what could go wrong with algorithms that interpret and predict.
“Let’s say I’m writing about going to California and I write CA, and the software thinks I’m talking about cancer instead,” Borten said. “Computers, they’re very smart in some ways, and they’re very dumb in other ways. They are totally literal. And if they’re programmed to say ‘If I see this I change it to that,’ without sufficient context or human understanding, we have to be very wary of this and understand the risk.”
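The failure mode Borten describes can be sketched in a few lines (a hypothetical, context-free expansion rule, not any real clinical system): a literal “if I see this, I change it to that” substitution misfires because it carries no understanding of context.

```python
# Hypothetical sketch of a naive, context-free expansion rule of the
# kind Borten warns about. The shorthand mapping is invented for
# illustration: "CA" is assumed here to expand to "cancer".
EXPANSIONS = {"CA": "cancer"}

def expand(text):
    # Substitutes word-by-word, with no awareness of surrounding context.
    return " ".join(EXPANSIONS.get(word, word) for word in text.split())

print(expand("Patient traveling to CA next week"))
# The intended meaning (California) is lost, because the rule is
# totally literal and applies regardless of context.
```

In a low-stakes setting this is a nuisance; in clinical software, as Borten notes, the same literalness can carry real risk.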
The closer AI tools get to direct patient care, the more important it is to ensure the products are safe and operating properly, something that will be challenging for the FDA, Borten said.
Date: May 07, 2019
Source: TechTarget