Whistleblowers have accused tech giants Meta and TikTok of making decisions that allowed more harmful content to spread on their platforms after internal research showed that outrage-driven posts boosted engagement.
According to more than a dozen insiders who spoke to the BBC, the two companies became locked in an “algorithm arms race” after TikTok’s rapid growth. A former Meta engineer said leadership pushed teams to allow more “borderline” harmful content, such as misogyny and conspiracy theories, because it kept users engaged and boosted revenue. Internal research also reportedly showed that emotional or anger-inducing posts were more likely to be promoted by the companies’ algorithms.
At TikTok, a whistleblower from the trust and safety team claimed that moderation priorities were sometimes skewed. Cases involving politicians were allegedly given higher priority than reports involving harm to teenagers, including bullying and sexual exploitation. The insider said this was partly to maintain relationships with governments and avoid regulatory threats.
Former employees also described how safety teams struggled to keep up, especially as companies focused heavily on growth. At Meta, hundreds of staff were assigned to expand features like Instagram Reels, while requests for additional safety staff were reportedly denied. Research shared with the BBC suggested Reels had significantly higher levels of harmful content—such as bullying, hate speech, and violent posts—compared to other parts of the platform.
Engineers highlighted that recommendation systems are complex and difficult to control, often treating content as data points rather than evaluating meaning. This made it harder to prevent harmful material from being promoted once it began gaining engagement. Some users, including teenagers, said they continued to be shown violent or hateful content even after trying to block or avoid it.
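To illustrate the point the engineers made, consider a deliberately simplified, hypothetical ranking sketch (the post names, weights, and scoring function below are invented for illustration and do not reflect any company's actual system). It scores posts purely on interaction counts, never inspecting what the content says, so a harmful but high-engagement post can outrank benign material:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Weighted sum of interaction counts. Note that no term here
    # examines post.text: the content is treated as a data point,
    # not evaluated for meaning. Weights are arbitrary assumptions.
    return post.likes + 2 * post.shares + 3 * post.comments

feed = [
    Post("Cute dog photo", likes=120, shares=5, comments=10),
    Post("Outrage-bait conspiracy post", likes=90, shares=60, comments=80),
]

# Rank the feed by engagement alone: the anger-inducing post,
# with more shares and comments, surfaces first.
ranked = sorted(feed, key=engagement_score, reverse=True)
for post in ranked:
    print(f"{engagement_score(post):>5.0f}  {post.text}")
```

In this toy model the conspiracy post scores 450 against the dog photo's 160, so it is ranked first. Real systems are vastly more complex, but the sketch captures the dynamic described above: once engagement signals dominate the objective, nothing in the scoring step distinguishes outrage from quality.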
Both companies strongly denied the allegations. Meta said it does not amplify harmful content for profit and has invested heavily in user safety, while TikTok described the claims as misleading and said it has strict moderation systems and protections for younger users.