Artificial intelligence will have as transformational an impact as the internet did, predicts Jay Bellissimo, managing partner at IBM Services. Every company needs an AI strategy, he says, and it is not a problem only for IT. Data creates insight, and insight creates knowledge. Mining the 80% of data that is not searchable via the internet is your company’s data advantage.
Bellissimo spoke at the AI World Conference & Expo earlier this month in Boston. A broad range of talks at the event covered artificial intelligence applied to a host of industries and applications. While some speakers focused on autonomous AI—for instance, fully independent self-driving cars—in the clinical sessions the emphasis was on how AI can assist—not replace—humans.
“Augmented intelligence” is the term Paul Bleicher from OptumLabs prefers. People have great ideas about what they can predict, but don’t know how they’ll use it, he said. Will it be cost efficient? Will it save someone a serious or fatal outcome? What data will be required to address that? AI can help steer those decisions, he said.
Many speakers advocated for starting with the data and a good question.
For example, Kamal Jethwani, senior director for connected health innovation at Partners HealthCare, had strong datasets in hospital records as well as data from Philips Lifeline, a medical alert system worn by seniors to call for help if needed. He wanted to know whether comparing the two datasets would help predict older adult hospital admissions, a question that aligned well with Partners’ AI priorities: risk stratification, personalization of interventions, and better use of wearables to predict health events earlier.
The fascinating finding, Jethwani said, was in data that Philips was throwing away. When looking at when and how seniors used the medical alert devices, Philips found a percentage of alerts that were “false alarms”. When the service called to check on the users, they were told the alert was an accident.
Only when those data were compared with Partners HealthCare admissions did Jethwani find that the “false alarms” were the most predictive dataset for ambulance transport within three days. Those were the seniors who were feeling unwell or uneasy, and they were testing their call devices.
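The linkage Jethwani describes can be illustrated with a minimal sketch: join each alert to any subsequent ambulance transport for the same patient within a three-day window, then compare transport rates for “false alarms” versus genuine alerts. The record layouts, dates, and patient IDs below are hypothetical, invented purely for illustration; they are not the Philips Lifeline or Partners HealthCare data.

```python
from datetime import date

# Hypothetical, simplified records -- illustrative only, not real data.
# alerts: (patient_id, alert_date, was_false_alarm)
alerts = [
    (1, date(2018, 3, 1), True),
    (2, date(2018, 3, 2), False),
    (3, date(2018, 3, 5), True),
]
# admissions: (patient_id, ambulance_transport_date)
admissions = [
    (1, date(2018, 3, 3)),
    (3, date(2018, 3, 10)),
]

def transported_within(alert, admissions, days=3):
    """True if the same patient had an ambulance transport
    between 0 and `days` days after the alert."""
    pid, alert_date, _ = alert
    return any(pid == adm_pid and 0 <= (adm_date - alert_date).days <= days
               for adm_pid, adm_date in admissions)

# Compare transport rates for false alarms vs. genuine alerts.
false_alarms = [a for a in alerts if a[2]]
fa_rate = (sum(transported_within(a, admissions) for a in false_alarms)
           / len(false_alarms))
print(f"false-alarm transport rate within 3 days: {fa_rate:.0%}")
```

With these toy records, patient 1’s false alarm is followed by a transport two days later while patient 3’s is not, so the script reports a 50% rate. A real analysis would, of course, need far more records and a proper statistical comparison against the baseline alert population.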
That type of prediction can be very impactful for both hospitals and patients, but it took input from different sources to uncover the findings.
The same multi-disciplinary approach is crucial to facilitating adoption of new tools and approaches by researchers and clinicians as well, said Danielle Ciofani, director of data strategy and alliances at the Broad Institute. We can’t neglect change management and other cultural shifts, Ciofani said. Getting buy-in from all stakeholders is the only way to avoid building “shelfware”.
Yin Aphinyanaphongs, assistant professor at NYU Langone Medical Center, reported that nearly every clinician he works with is excited about what AI can bring; he finds there is tacit buy-in. Medications are black boxes, Aphinyanaphongs said; physicians already understand black boxes. AI presents a model that will help you understand what outcomes to expect. He recommends working closely with clinicians and sending them predictions as models are being developed. It gives them confidence that the models are working, he said.
But even a working model is still a long way from replacing doctors with AI. Broad’s Ciofani believes the pathway to healing is still provider care, human-to-human care, she emphasized. But the treatment pathways of medicine are rules-based, she said, making them appropriate for technological intervention.
Catherine Kreatsoluas of the Harvard School of Public Health echoed that point. In the clinic, AI can help doctors, but it shouldn’t supplant them, Kreatsoluas said. Statistical significance is not the same as clinical significance, she said. It’s not enough just to crunch data and come up with findings. She emphasized the practical outcomes of data findings: How will a physician change her practice? What are the ethics involved in an AI-predicted treatment option?
There are also important questions of workflow, pointed out David Ledbetter, lead data scientist at Children’s Hospital Los Angeles. How do you ensure that you’re taking physician workflow into account and getting buy-in from the clinical team?
Robert Bogucki, CTO at deepsense.ai, cautioned that the data must come first. Start with the questions you can solve in the near term, not future hypotheticals.
In an executive roundtable, he was asked how the development process happens in an AI or analytics project. First, he advised, make sure you have the data. “If I’m just thinking about collecting the data now, maybe I should wait on this project until I have the data sources.”
But he wasn’t opposed to a bit of exploration. Having a platform lets team members who are less savvy at data science play with the data.
Anju Gupta, head of sustainability campaign at Syngenta, agreed. When deciding between centralized and distributed analytics, Syngenta developed a matrix model: a platform accessible to laypeople, but one that any developer could use. “We found that once the platform was shared, there were business problems I wouldn’t have applied machine learning to that users did,” she said.
“Democratizing” AI might lead to new applications, but it also raises a logistical question: Where does your AI or data science group sit? Gupta believes the group should certainly sit outside of IT. Analytics serves far more business functions than just IT, she said. Machine learning is of no use if I don’t have the charter to launch new projects based on what I’ve learned. Bogucki agreed, but stressed the need for collaboration with IT. Norbert Monfort, VP at Assurant, a provider of specialty, niche-market insurance products, said that at Assurant the data analytics group is a peer of the CIO, while data engineering sits within IT. Having analytics outside of other groups, with independent influence, is a good solution, he has found.
Date: December 26, 2018