Facebook has released a new report detailing how it uses artificial intelligence to enforce its community standards and keep hate speech at bay. With a combination of artificial intelligence, human moderators, and fact-checkers, the company is trying to curb hate speech and make the platform more reliable and community friendly.
Facebook employs AI to enforce its community standards
The Community Standards Enforcement Report released by Facebook covers data from the past three to six months and showcases how the company uses artificial intelligence to control hate speech. The report stresses that the company is increasingly relying on software and AI to moderate content on the platform, a workload that can be overwhelming for human auditors and fact-checkers.
Using technology to combat hate speech and misinformation
The company is using technology to moderate its networks and to prevent and control misinformation during the COVID-19 crisis. Facebook is relying more on AI and less on third-party moderation firms, whose workers cannot access Facebook data from their home networks.
A highly stressful job for human moderators
The company has long been under scrutiny for using sensitive user data and sharing it with third parties. It has also paid several million dollars to compensate moderators for the stress disorders they developed during their strenuous work moderating content on the Facebook platform.
Facebook's VP of Integrity writes a blog post
Guy Rosen, the company’s vice president of integrity, said in a blog post,
“This report includes data only through March 2020, so it does not reflect the full impact of the changes we made during the pandemic. We anticipate we’ll see the impact of those changes in our next report, and possibly beyond, and we will be transparent about them.”
The need for accurate similarity systems to combat misinformation
“It’s essential that these similarity systems be as accurate as possible because a mistake can mean taking action on content that doesn’t violate our policies. This is particularly important because for each piece of misinformation a fact-checker identifies, there may be thousands or millions of copies.
Using AI to detect these matches also enables our fact-checking partners to focus on catching new instances of misinformation rather than near-identical variations of content they’ve already seen.”
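To make the idea of a similarity system concrete, here is a minimal sketch of near-duplicate text matching using character shingles and Jaccard similarity. This is an illustrative toy, not Facebook's actual method (which is not public); the threshold and shingle size are arbitrary assumptions, and the example posts are hypothetical.

```python
def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-character shingles of a whitespace-normalized, lowercased text."""
    t = " ".join(text.lower().split())
    return {t[i:i + k] for i in range(max(len(t) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_near_duplicate(candidate: str, known: str, threshold: float = 0.8) -> bool:
    """Flag a candidate post if it closely matches a known piece of misinformation."""
    return jaccard(shingles(candidate), shingles(known)) >= threshold

# Hypothetical example posts:
debunked = "Drinking hot water cures the virus, doctors confirm"
variant = "Drinking hot water cures the virus, doctors confirm!!"
unrelated = "Local bakery wins award for best sourdough bread"

print(is_near_duplicate(variant, debunked))    # near-identical copy: True
print(is_near_duplicate(unrelated, debunked))  # unrelated post: False
```

Once a fact-checker labels one post, a system like this can automatically sweep up near-identical variants, which is the workload reduction the quote above describes. Production systems typically use scalable approximations such as hashing or learned embeddings rather than exact set comparison.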