Great technology is invisible. There’s always a push toward it becoming less and less obvious to the user. The less a user notices, the better it’s working. Voice-assisted technology is one place we see this push today. But while it’s ideal for many uses in health care, there are a few pitfalls to dodge.
Zero UI (Zero User Interface) is an interface without a physical presence — no screen, no buttons, no keys. Instead, a user moves, talks, looks or thinks, and artificial intelligence (AI) interprets that interaction and responds. It sounds simple, but Zero UI is an intensely complex effort requiring extremely robust capabilities.
Voice-assisted tech is a great example of Zero UI, whether you’re telling Domino’s Pizza’s AI, Dom, what you want on your pizza, asking Bank of America’s Erica for your credit score or telling your Mercedes “I’m too cold.”
Most technologies begin as more gimmick than godsend, but they become more “serious” as they improve. Gartner predicts that between 2019 and 2023, the share of at-work interactions with apps that happen by voice will grow from 3% to 25%. Moreover, they predict that “speech capabilities will rapidly become standard within most healthcare applications” to save time and improve “digital dexterity.” It’s practical. But it’s emotional, too. Think about it: How often do you anthropomorphize your search bar the way you do Alexa or Siri? That combination of utility and feeling makes voice ideal for health care.
But voice-assisted tech is still limited in the pharmaceutical industry today, as companies remain slow to invest in technology this complex under the hood. Nonetheless, opportunities abound for voice to help in patient access, patient support and medical liaison activities. Current tools that offer education, symptom tracking or reminders — or phone services that handle questions about dosing or coverage — could level up with voice technology triaging or bolstering the existing services.
Date: August 14, 2019