
Facebook apologizes after report shows inconsistencies in removing hate speech


Facebook has taken plenty of criticism over the algorithms designed to keep content within its community guidelines, but a new round of investigative reporting suggests the company’s team of human reviewers could use some improvement, too. In a study of 900 posts, ProPublica reports that Facebook’s review staff was inconsistent in handling posts containing hate speech, removing some while leaving up others with similar content.

Facebook apologized for some of those decisions, saying that of the 49 posts highlighted by the non-profit investigative organization, reviewers made the wrong call on 22. The social media platform defended its decisions in 19 other instances, while eight were excluded because of incorrect flags, user deletions, or a lack of information. The study was crowdsourced, with Facebook users sharing the posts with the organization.


Justin Osofsky, Facebook’s vice president of Global Operations and Media Partnerships, said that the social media platform will expand its review staff to 20,000 people next year. “We’re sorry for the mistakes we have made — they do not reflect the community we want to help build,” he said in response to the ProPublica investigation. “We must do better.”

ProPublica said Facebook is inconsistent in its treatment of hate speech, citing two different statements that both essentially wished death on an entire group of people — only one of which was removed after being flagged. The second post was taken down only after the ProPublica investigation.

“Based on this small fraction of Facebook posts, its content reviewers often make different calls on items with similar content, and don’t always abide by the company’s complex guidelines,” ProPublica said. “Even when they do follow the rules, racist or sexist language may survive scrutiny because it is not sufficiently derogatory or violent to meet Facebook’s definition of hate speech.”

On the flip side, the report also found posts that were removed but shouldn’t have been. In one example, the image contained a swastika, but the caption asked viewers to stand up against a hate group.

The study is far from the first time ProPublica has called out Facebook’s practices this year. This fall, Facebook changed its ad targeting after a report showed that when enough users typed racial slurs into their own bio fields, those slurs could become categories for targeted ads. Just a week ago, ProPublica demonstrated that employers could discriminate by age using those same ad tools. In the first case, Facebook apologized and immediately paused the ad tool until the slip-up could be fully corrected; in the second, it defended its practices.

Monitoring content on the largest social media network, with more than 2 billion monthly active users, isn’t an easy task, and it is one Facebook approaches with both artificial intelligence algorithms and human reviewers. Social media networks generally attempt to strike a balance between removing hateful content and preserving free speech. Osofsky says the platform deletes 66,000 instances of hate speech every week.

The move to a review staff of 20,000 is fairly significant: when Facebook announced in May that it would add 3,000 more review staff members, that expansion brought the team to just 7,500 people.

ProPublica says the investigation is important “because hate groups use the world’s largest social network to attract followers and organize demonstrations.”

Hillary K. Grigonis