COVID-19 has placed a new focus on how YouTube, Twitter and others are identifying and promoting valid sources of health information, writes VuMedi CEO Roman Giverts.
COVID-19 has accelerated our reliance on digital platforms for news and information. As businesses and communities shut down over the last three months in the name of social distancing, more people turned to their phones and laptops for the latest developments. Even before the health crisis, a third of Americans considered social media an important source of health and science news. Then in March, when the World Health Organization declared a global pandemic, the volume of Twitter conversations about COVID-19 doubled week over week. The problem was that Twitter users shared links to misinformation sites at nearly the same rate as links to credible health websites.
The spread of misinformation and disinformation isn’t new, but what has been different during this pandemic is the response by tech platforms. In late March, Twitter announced that it was working with global health authorities to identify experts and verify accounts. This shift in philosophy around validating critical information was solidified in May, when the platform labeled Trump’s political tweets “potentially misleading” for the first time, sparking a national debate on freedom of speech.
Facebook, on the other hand, chose a different route. It had internal research showing that its algorithms created divisiveness, and it chose to ignore that research in order to continue maximizing page views. As our nation grapples with how to respond to global health and social justice issues, it has become increasingly urgent that tech platforms reevaluate their policies on verification. Business models that value click-through rates over validation will continue to promote sensationalism and controversy over thoughtful dialogue and accuracy.
Take YouTube, for instance: the UCSF School of Medicine is unvalidated because it has fewer than the 100,000 followers the platform requires for an account to be considered “prominent.” This approach leaves hospitals and medical institutions out of the verification system. Meanwhile, other organizations create medical content without even naming the authors who contributed to a video, yet manage to be validated because they meet the follower threshold. This “more eyeballs” popularity methodology biases the system toward organizations adept at growth marketing and SEO rather than toward medical accuracy.
This dichotomy also illustrates the issues that arise when companies that specialize in one area, such as entertainment, are thrust into a new genre, like medical education, where their systems are not set up for success. Entertainment success is defined by fame and followers, but medical education success depends on a verification system built on medical credentials and clinically accurate information.
So the question becomes: how exactly should tech platforms redesign their algorithms, and how will they apply those rules across different kinds of discourse? Twitter recently announced further changes to its process for requesting verification. The hope is that platforms rewrite the rules to prioritize credible content creators and truthful information. This approach is essential for medicine and health. We certainly wouldn’t choose our physicians based solely on the number of followers they have.
Social platforms, which are increasingly becoming primary sources of news and information, must reprioritize human curation in their verification systems. Technology and algorithms alone aren’t enough in today’s climate of opinion news. In “Zero to One,” Peter Thiel described how PayPal overcame enormous challenges with fraud not only because it had the best technology, but also because it had the best human analysts detecting fraud.
When it comes to protecting our money or protecting our health, we need more than the best algorithm. In medicine in particular, we need subject-matter experts who can identify accurate and clinically relevant content. Such experts can evaluate information as it becomes available and anticipate the relevance of new findings before all of the data are in. This serves as a check and balance on metrics that a machine may score as positive but that have significantly negative effects on society, like disinformation about COVID-19 treatments and vaccines.
Ten years in my current role have shown me that blindly following metrics does not necessarily correlate with delivering authentic information, let alone the most valuable user experience. That common sense seems to be ignored when an overreliance on page-view-generating algorithms leads to major universities being considered “unvalidated” content creators. In many ways, it’s no longer about accuracy or eyeballs; it’s a matter of ethics, and of how we as tech leaders build products that actually contribute to an educated, healthy, informed society.
Source: Mobihealth News