Facebook says AI has fueled a hate speech crackdown

Facebook says it is proactively detecting more hate speech using artificial intelligence. A new transparency report released on Thursday offers greater detail on hate speech across its platforms following policy changes earlier this year, although it leaves some big questions unanswered.

Facebook’s quarterly report includes new information about hate speech prevalence. The company estimates that 0.10 to 0.11 percent of what Facebook users see violates hate speech rules, equating to “10 to 11 views of hate speech for every 10,000 views of content.” That’s based on a random sample of posts and measures the reach of content rather than pure post count, capturing the effect of hugely viral posts. It hasn’t been evaluated by external sources, though. On a call with reporters, Facebook VP of integrity Guy Rosen said the company is “planning and working toward an audit.”
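Because the estimate is based on sampled views rather than posts, each view counts once, so a post seen a million times weighs a million times more than one seen once. A minimal sketch of such a view-weighted estimate, using hypothetical function names and synthetic data rather than anything from Facebook’s actual pipeline:

import random

def estimate_prevalence(view_log, is_violating, sample_size=10_000):
    # view_log has one entry per view, so viral content appears many
    # times and is weighted by its reach, as the report describes.
    sample = random.sample(view_log, sample_size)
    hits = sum(1 for content_id in sample if is_violating(content_id))
    return 10_000 * hits / sample_size  # violating views per 10,000

# Synthetic demo: 1 in 1,000 views is of violating content, so the
# estimate should land near 10 per 10,000, i.e. 0.10 percent.
views = [i % 1_000 for i in range(1_000_000)]
print(estimate_prevalence(views, lambda c: c == 0))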

Facebook insists that it removes most hate speech proactively before users report it. It says that over the past three months, around 95 percent of Facebook and Instagram hate speech takedowns were proactive.

Chart: Facebook hate speech detection rates over time

That’s a dramatic jump from its earliest efforts: in late 2017, it made only around 24 percent of takedowns proactively. It has also ramped up hate speech takedowns: around 645,000 pieces of content were removed in the last quarter of 2019, compared with 6.5 million in the third quarter of 2020. Organized hate groups fall into a separate moderation category, which saw a much smaller increase, from 139,900 to 224,700 takedowns.
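For clarity, the proactive rate is simply the share of all takedowns that Facebook’s systems flagged before any user report. A toy illustration, with counts invented to match the reported percentages:

def proactive_rate(proactive, user_reported):
    # Share of takedowns flagged by automated systems before any
    # user report, expressed as a percentage.
    return 100 * proactive / (proactive + user_reported)

print(f"{proactive_rate(24, 76):.0f}%")  # late 2017: 24%
print(f"{proactive_rate(95, 5):.0f}%")   # past three months: 95%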

Some of those takedowns, Facebook says, are powered by improvements in AI. Facebook launched a research competition in May for systems that can better detect “hateful memes.” In its latest report, it touted its ability to analyze text and pictures in tandem, catching content like the image macro (created by Facebook) below.

An example image macro created by Facebook: a grave with the text “IF THE ELECTION DOESN’T GO OUR WAY / YOUR ETHNIC GROUP WILL END UP HERE.” It illustrates hate speech that can only be detected by analyzing the image and text together.
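Analyzing text and pictures in tandem is what machine learning researchers broadly call multimodal fusion: encode the image and the overlaid text separately, then classify the combined representation, since neither signal alone is hateful. The sketch below, in PyTorch, is a hypothetical illustration of that idea, not Facebook’s system or its hateful-memes baseline, and the layer sizes are arbitrary:

import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=256, hidden=128):
        super().__init__()
        # Stand-ins for real encoders, e.g. a vision model and a
        # language model producing fixed-size feature vectors.
        self.image_encoder = nn.Linear(2048, image_dim)
        self.text_encoder = nn.Linear(768, text_dim)
        self.classifier = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, image_features, text_features):
        # Fuse both modalities before classifying: a gravestone photo
        # and the overlaid text are each benign in isolation.
        fused = torch.cat([self.image_encoder(image_features),
                           self.text_encoder(text_features)], dim=-1)
        return torch.sigmoid(self.classifier(fused))

model = MemeClassifier()
score = model(torch.randn(1, 2048), torch.randn(1, 768))
print(f"violation score: {score.item():.2f}")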

This approach has clear limitations. As Facebook notes, “a new piece of hate speech might not resemble previous examples” because it references a new trend or news story. It depends on Facebook’s ability to analyze many languages and catch country-specific trends, as well as how Facebook defines hate speech, a category that has shifted over time. Holocaust denial, for instance, was only banned last month.

It also won’t necessarily help Facebook’s moderators, despite recent changes that use AI to triage complaints. The coronavirus pandemic disrupted Facebook’s normal moderation practices because the company won’t let moderators review some highly sensitive content from home. Facebook said in its quarterly report that its takedown numbers are returning “to pre-pandemic levels,” in part thanks to AI.

But some employees have complained that they’re being forced to return to work before it’s safe, with 200 content moderators signing an open request for better coronavirus protections. In that letter, moderators said that automation had failed to address serious problems. “The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter — and risky content, like self-harm, stayed up,” they said.

Rosen disagreed with their assessment and said that Facebook’s offices “meet or exceed” safe workspace requirements. “These are incredibly important workers who do an incredibly important part of this job, and our investments in AI are helping us detect and remove this content to keep people safe,” he said.

Facebook’s critics, including American lawmakers, will likely remain unconvinced that it’s catching enough hateful content. Last week, 15 US senators pressed Facebook to address posts attacking Muslims worldwide, requesting more country-specific information about its moderation practices and the targets of hate speech. Facebook CEO Mark Zuckerberg defended the company’s moderation practices in a Senate hearing, indicating that Facebook might include that data in future reports. “I think that that would all be very helpful so that people can see and hold us accountable for how we’re doing,” he said.

Zuckerberg suggested that Congress should require all web companies to follow Facebook’s lead, and policy enforcement head Monika Bickert reiterated that idea today. “As you talk about putting in place regulations, or reforming Section 230 [of the Communications Decency Act] in the United States, we should be considering how to hold companies accountable for acting on harmful content before it gets seen by a lot of people. The numbers in today’s report can help inform that conversation,” Bickert said. “We think that good content regulation could create a standard like that across the entire industry.”
