Artificial intelligence is a nonspecific buzzword rapidly approaching the summit of the hype cycle. A recent opinion article in the New York Times raised concern that A.I. may exacerbate disparities in medicine and healthcare.
While understanding the limitations of any new technology is a critical due diligence step, suggesting A.I. may worsen health disparities is an unfounded, potentially dangerous notion to propagate when the evidence suggests the opposite. A.I. applications offer the prospect of improving the clinical practice of medicine while leveling the playing field by allowing doctors to spend more time with patients and leveraging tailored solutions to direct care plans.
The excitement surrounding A.I. in medicine is not because it could replace physician tasks, but because it could take over non-clinical ones: providing administrative support, automating redundancies in electronic medical records, tailoring care to patients with differential needs, and even proposing more equitable patient-specific payment models. With physicians spending only 27% of their time face-to-face with patients and more than 50% in front of a screen, there is both room and an overwhelming desire to automate redundant, non-clinical tasks while triaging clinical tasks to the medical team.
An appropriate concern when extrapolating medical insights from A.I. is data quality. Yes, A.I. depends on valid, unbiased data. However, this is a principle that has held true for every evidence-based medical report ever published. A.I. performs the data analysis, but we as humans provide the data. A.I. doesn’t suffer from fatigue, distractions, or moods, nor does it have conflicts of interest to misrepresent the data. Instead, we should carefully scrutinize the manner in which we collect and apply data.
Similarly, including patients from all demographics and backgrounds is not a new issue in medicine. However, if we fault A.I. for being less precise with minorities, then we must also question the precision of accepted medical guidelines for minorities, since both rely upon the same evidence-based data.
Amazon infamously struggled with data quality using an A.I. recruiting tool that propagated gender biases against female job applicants since the historical training data for the algorithm largely consisted of males. A superficial takeaway may be that “A.I. is bad,” but the real story is that Amazon personnel scrutinized the algorithm and corrected the bias because nascent algorithms require human supervision.
Not unlike the transformation of the industrial revolution, when factory machines automated tasks, we hope to see the machine once again transform a health sector fraught with administrative costs and non-clinical inefficiencies. That today’s production line requires a quality-control foreman to oversee appropriate assembly does not mean factory machines should be mistrusted or discarded. Instead, we should seek what Eric Topol describes as “high performance medicine [through the] convergence of human and artificial intelligence.”
Beyond the role of automating non-clinical tasks to improve physician efficiency and decrease physician burnout, medical treatments and population health solutions can be targeted and personalized. As a society, we are accumulating an unprecedented volume of personal “small data,” from the sensors on our smartphones to the genomes we are able to sequence from a drop of blood. Through projects like Google’s DeepVariant, we are closer than we have ever been to tailored medicine. How we apply these data under cost constraints will determine whether we perpetuate or reverse health inequities.
There are three pragmatic considerations that portend an optimistic future for A.I. in medicine.
First, we have already been using the technology for over a decade, and we have only advanced, with no sign of an apocalyptic robot takeover. Imaging modalities have improved with machine learning-based enhancements that produce organ-specific MRIs, among other breakthroughs. For over a decade, we have been shifting from laparoscopic to robot-assisted surgery. In late 2018, Dr. Kaouk at the Cleveland Clinic broke new ground by removing a diseased prostate with the assistance of an advanced robot through a single small incision. Surgeries like these, in which robots applying A.I. algorithms improve surgical accuracy and decrease soft-tissue trauma and blood loss, represent great strides.
Second, medicine is highly regulated and slow to introduce new drugs, products, or technologies. Moreover, truly changing clinical workflow requires buy-in across the complicated relationship among patients, physicians, insurance companies, and hospital administrators, no matter how elegant or brilliant the idea seems. Ask any innovator who has sought FDA approval or patient data for a new product.
Third, the history and physical will always reign supreme in dictating diagnoses and plans for the patient. If there’s one thing the machine needs, it’s data. Without the physician present as the gatekeeper and gatherer of the most valuable clinical data, there is no machine to guide clinical practice.
No one is advocating unilateral adoption of an algorithm’s output. In today’s practice of medicine, decisions are made after taking into account all available data: the history, the physical exam, the labs, the imaging, and the expertise of consultants. While “unchecked A.I.” is a theoretical possibility, decision-making rarely hinges on a single data point.
Thus, the crux of the issue centers on how and where we use A.I. in the clinical workflow and the greater healthcare ecosystem. Clinically, the physician must always be present to gather the critical data from the history and physical. But to make meaningful use of all available evidence-based medicine and provide the best care to our patients, we must be humble and willing to accept help. If we remember how IBM Watson in 2015 identified a rare leukemia from the genome of a Japanese patient within minutes, after cross-referencing the literature of over 20 million oncology reports, why wouldn’t we want to accept this support? As evidence-based medicine and data continue to accumulate beyond the expected knowledge capacity of a single physician, we must be willing to embrace A.I.-based clinical and administrative support. Moreover, welcoming these techniques to automate redundancies in documentation and to navigate electronic medical records will be key to reducing burnout and safeguarding the doctor-patient relationship. Finally, recognizing the opportunity to provide a patient-specific approach with all available data is more likely to mitigate, not exacerbate, disparities.
Date: February 5, 2019