T4K3.news

TikTok cuts Berlin moderation team with AI and contractors

Berlin workers strike as TikTok shifts trust and safety tasks to AI and contracted staff amid mass layoffs.

August 10, 2025 at 09:07 AM

TikTok workers in Germany are striking as the company moves to dismantle its Berlin trust and safety team, cutting about 150 jobs. The Berlin hub is the largest for the German-speaking market, with roughly 400 employees, and the layoffs would wipe out nearly 40 percent of the local workforce. TikTok says the plan aims to streamline workflows and improve efficiency while continuing to protect the platform, with some moderation work to be outsourced to contractors and handled by AI.

The strikes come amid a global push to rely more on automation for content moderation. In recent years the company and other social networks have cut trust and safety staff in places like the Netherlands and Malaysia, and Reuters has reported broad layoffs across Asia, Europe, the Middle East and Africa. German unions say the changes raise safety, mental health and job security concerns, even as TikTok cites regulatory pressure and the need to remove harmful content quickly. The EU’s Digital Services Act adds an extra layer of scrutiny for platforms, potentially shaping how AI and human moderators share the workload. TikTok maintains it will still use human contractors for some tasks and insists the move is not a retreat from safety.

Key Takeaways

✔️ TikTok plans large-scale layoffs in Berlin affecting trust and safety staff
✔️ Moderation work will shift toward AI and contractors rather than fully human teams
✔️ The move mirrors a wider tech industry trend toward automation of content review
✔️ Unions are using strikes to demand severance and a longer notice period
✔️ EU rules add regulatory pressure on how platforms deploy AI in moderation
✔️ There are concerns about AI misclassifying content and reducing worker health support
✔️ A longer-term impact could be a chilling effect on hiring in the sector

"Replacing people tasked with ensuring platforms are safe for all users, including minors, is going to lead to more mistakes and more harmful experiences."

Aliya Bhatia of the Center for Democracy and Technology voices safety concerns about automation in moderation.

"AI is not able to really identify problematic pictures or videos."

Kalle Kunkel of ver.di expresses doubt about AI effectiveness.

"We call on management to stop intimidating strikers."

Kathlen Eggerling, ver.di lead negotiator, condemns reported warnings to workers.

"We remain fully committed to protecting the safety and integrity of our platform."

Anna Sopel, TikTok spokesperson, defends the company’s stance.

The Berlin case sits at the intersection of safety, cost and control. Tech platforms face rising expectations to police content, while investors push for leaner operations. Shifting moderation to AI and contractors could speed up takedowns but may also widen blind spots and reduce the human touch that catches nuance. The EU’s Digital Services Act adds teeth to enforcement, pressuring platforms to prove safeguards keep pace with automation. If Berlin’s experience is echoed elsewhere, policymakers may demand clearer accountability for AI decisions and stronger protections for workers who bridge policy and practice.

The strikes reveal a broader tension: safety ambitions are costly, but so is eroding trust when moderated content slips through or when workers fear for their livelihoods. The coming months will test whether TikTok can balance swift removal of harmful content with reliable human oversight, and whether unions can turn protests into a bargaining advantage without derailing platform operations.

Highlights

  • Safety is more than quick automation.
  • AI cannot replace human judgment on safety.
  • Strikes show workers still guard the threshold of safety.
  • Negotiations should not be a bargaining chip.

Safety and labor risks in AI moderation shift

The plan to replace human moderators in Germany with AI and contractors raises cost and safety concerns. It could trigger political scrutiny and public backlash as workers protest and regulators weigh compliance with EU rules.

Safety outcomes will increasingly hinge on how clearly rules translate into practice, not just into promises.
