T4K3.news
UK online safety rules fail to curb teen exposure to self-harm content
Molly Rose Foundation finds teens still see self-harm content on Instagram Reels and TikTok despite new safety laws.

A Molly Rose Foundation study used dummy accounts posing as a 15-year-old girl to test how the new safety rules work in practice. The research, carried out between November 2024 and March 2025, found that 97% of the recommended videos watched on Instagram Reels and 96% of recommendations on TikTok were harmful. More than half of the harmful posts on TikTok’s For You page referenced suicide or self-harm ideation, and 16% referenced suicide methods, including some the researchers had not seen before. These posts reached large audiences: roughly one in ten harmful TikTok videos had at least one million likes, and one in five harmful Instagram Reels had been liked by more than 250,000 users.
The Foundation argues that the Online Safety Act’s measures to curb algorithmic harm do not go far enough and that platforms can still game the rules. It also warns that even where negative feedback options exist, a platform may treat that interaction as engagement and serve more of the same content if harm is defined too narrowly. The researchers found that while platforms have begun to limit searches for dangerous content, personalised recommender systems can still amplify harmful material once it has been watched. Ofcom says its new safety codes will tame toxic algorithms and require platforms to take stronger action. TikTok and Meta defended their protections, citing teen account safeguards and the ongoing removal of dangerous content. Regulators say change is underway, pointing to investigations into dozens of sites and to discussions about expanding proactive technology to shield children from self-harm content.
Key Takeaways
"Harmful algorithms continue to bombard teenagers with shocking levels of harmful content, and on the most popular platforms for young people this can happen at an industrial scale."
Andy Burrows, Molly Rose Foundation chief executive
"Change is happening. Since this research was carried out, our new measures to protect children online have come into force. These will make a meaningful difference to children"
Ofcom spokesperson
"Teen accounts on TikTok have 50+ features and settings designed to help them safely express themselves, discover and learn, and parents can further customise 20+ content and privacy settings through Family Pairing."
TikTok spokesperson
"We disagree with the assertions of this report and the limited methodology behind it. Tens of millions of teens are now in Instagram Teen Accounts, which offer built-in protections"
Meta spokesperson
Two positions in this debate are now clear. Regulators say the Online Safety Act is evolving and will bend the curve toward safer feeds, while critics say the pace and scope of reform are not keeping up with fast-moving platforms. The data show that engagement metrics still drive visibility for harmful content, raising the question of whether safety is a feature or a standard. The tension between protecting young users and allowing platforms to monetise engagement remains at the heart of this story.
The wider implication is simple: policy and design must align. If lawmakers want to prevent harm, they must enforce stronger, independent checks and give regulators real teeth. Platforms must prove that safety is built into how feeds operate, not just added as an afterthought. The public and parents will watch closely to see whether promises translate into consistent protections or fade under the pressure of growth and profit.
Highlights
- Safety should be more than a feature
- If feeds sell risk, who pays the bill?
- Teens deserve feeds that inform them, not trap them
- Policy without teeth is theatre
Budget and political pressure threaten online safety enforcement
The findings highlight gaps that could widen if budgets stay tight and political momentum stalls, risking weaker protections for teens.
Policy momentum must translate into practical, enforceable protections for young users.