46 participants spoke common drug names to Google Assistant, Siri and Alexa, and results varied. More R&D is needed, experts say.
A new study on voice assistant comprehension finds that platforms such as Google Assistant, Apple’s Siri and Amazon’s Alexa need to improve the accuracy of their speech recognition for healthcare applications.
WHY IT MATTERS
The research, conducted by Klick Health and published in npj Digital Medicine, found Google Assistant’s comprehension of the most commonly dispensed medication names in the U.S. was, on average, almost twice as accurate as that of Alexa or Siri.
Voice recordings of 46 participants (12 of whom had a foreign accent in English) speaking the brand and generic names of the 50 most dispensed medications in the United States were played back to Alexa, Google Assistant and Siri.
Google Assistant posted the highest overall comprehension rates, with nearly 92 percent accuracy on brand-name medications and 84 percent on generic names.
In the same testing, however, Siri managed just 58 percent accuracy on brand names and 51 percent on generics, while Amazon’s Alexa fared slightly worse, at 54.6 percent on brand names and less than half (45 percent) on generics.
Although all three platforms demonstrated strong understanding of the over-the-counter pain relievers aspirin and Tylenol (accuracy rates of 90 percent or higher), the study found Alexa was significantly weaker at recognizing Advil (just 2 percent) and its generic equivalent, ibuprofen (only 4 percent).
Siri and Alexa also posted lower overall comprehension rates for research participants with audible foreign accents, while Google Assistant appeared to be unaffected by accents.
THE LARGER TREND
“This was an important learning given the multicultural makeup in North America, and it means that many people are likely to have trouble obtaining accurate medical information from voice assistants if they choose to use them for gathering health-related data,” Adam Palanica, behavioral scientist at Klick and co-author of the study, told Healthcare IT News.
Yan Fossat, vice president of Klick Labs and study co-author, told Healthcare IT News that the research indicated companies developing digital voice assistants need more R&D to enhance their AI’s ability to recognize this type of speech.
“Technology is always getting better, so it’s just a matter of time, but the work needs to be done on the back-end to improve the algorithms,” he said.
Fossat explained that, in terms of real-world implications, it is crucial to develop accurate speech recognition for medical information.
Though encouraged by some of the findings, Fossat emphasized that more work needs to be done to ensure voice assistants understand what people are saying to them, especially on matters as important as their health.
ON THE RECORD
“If a voice assistant gives you irrelevant or potentially dangerous information related to a drug name that you just asked about, the consequences could be serious, maybe even fatal,” Fossat explained.
“Since these voice assistants have primarily been developed for non-medical related uses,” he concluded, “this study demonstrates that more research and development is needed to enhance the speech recognition abilities of these platforms specifically for medically relevant information.”
Date: June 24, 2019
Source: Healthcare IT News