Glen Tullman, CEO of Allscripts, believes the practice of medicine is about to change. “Your iPad, your voice, and your hands will be the new input devices for the EHR of the future,” according to Tullman. In an interview at the HIMSS conference, he said we’re reaching the point where physicians will soon be able to talk to their computer, get immediate access to all the patient data they need, and even pull up the latest clinical trials. “That’s coming within 12 months. It’s doable today,” he said.
So the future of medicine is about to arrive. What exactly will it look like? According to Juergen Fritsch, co-founder of M*Modal, it will include not just voice recognition software, which companies such as Nuance (maker of Dragon) have already mastered, but voice recognition combined with natural language processing. That marriage, as Fritsch explained in a recent article for Advances, will not only convert a physician’s spoken words into text, but will also generate meaningful, structured information that can populate allergy checkboxes in an EHR, for example, thereby speeding up the clinical documentation process.
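To make the idea concrete, here is a deliberately toy sketch of that second step: scanning already-transcribed dictation for allergy mentions and emitting structured fields an EHR could consume. The allergen vocabulary, function names, and matching logic are all invented for illustration; production systems like M*Modal’s rely on far more sophisticated language understanding.

```python
import re

# Hypothetical vocabulary for illustration only
KNOWN_ALLERGENS = {"penicillin", "latex", "peanuts", "sulfa"}

def extract_allergies(transcript: str) -> dict:
    """Return structured allergy data from free-form dictation text."""
    text = transcript.lower()
    found = sorted(a for a in KNOWN_ALLERGENS if a in text)
    # "NKDA" / "no known drug allergies" is a common dictation phrase
    nkda = bool(re.search(r"\bno known (drug )?allergies\b", text))
    return {"allergies": found, "no_known_allergies": nkda and not found}

print(extract_allergies("Patient reports an allergy to penicillin and latex."))
# -> {'allergies': ['latex', 'penicillin'], 'no_known_allergies': False}
```

The point of the sketch is the shape of the output: free-form speech goes in, discrete checkbox-ready fields come out, which is what lets the documentation step happen as a byproduct of dictation rather than a separate data-entry task.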
Equally impressive is the ability of voice recognition/natural language processing to let a clinician’s speech activate a clinical documentation system, or a picture archive and communications system (PACS), or even put data into these systems with free-form dictation. Think: “Go to allergies checklist,” or “create a new office visit,” or “insert standard review of systems.”
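Under the hood, that kind of voice control amounts to routing recognized utterances: known command phrases trigger actions, and anything else falls through to free-form dictation. The sketch below uses the article’s own example commands; the handler functions and return strings are hypothetical stand-ins for real EHR/PACS integrations.

```python
# Hypothetical handlers standing in for real EHR/PACS actions
def go_to_allergies():       return "navigated to allergies checklist"
def new_office_visit():      return "created new office visit note"
def insert_ros_template():   return "inserted standard review of systems"

# Command phrases taken from the article's examples
COMMANDS = {
    "go to allergies checklist": go_to_allergies,
    "create a new office visit": new_office_visit,
    "insert standard review of systems": insert_ros_template,
}

def dispatch(recognized_speech: str) -> str:
    """Route a transcribed utterance to a command handler, or
    fall back to treating it as free-form dictation."""
    phrase = recognized_speech.strip().lower()
    handler = COMMANDS.get(phrase)
    return handler() if handler else f"dictation: {recognized_speech}"

print(dispatch("Go to allergies checklist"))
# -> navigated to allergies checklist
```

Real systems match commands far more flexibly (tolerating rephrasings and recognition errors), but the command-versus-dictation split is the essential design choice this sketch shows.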
But perhaps the most futuristic capability of such “collaborative intelligence” tools is their ability to keep doctors fully informed of relevant patient data already in the electronic records system.