
T4K3.news

Grok AI faces heat over explicit deepfakes

Tests show Grok Imagine can generate explicit Taylor Swift content, raising age verification questions.

August 8, 2025 at 11:06 PM
Image: Pornographic Taylor Swift deepfakes generated by Musk's Grok AI

Grok Imagine testing shows explicit content can be produced, prompting regulatory scrutiny in the UK.


The Verge and Gizmodo describe a test of Grok Imagine in which journalist Jess Weatherbed used the tool's "spicy" mode to prompt for Taylor Swift-related content. The system produced still images that could be turned into short clips, and Weatherbed said the result was explicit and uncensored once she selected the spicy option, even though she had not requested nudity. She also noted that the paid plan cost £30 and that the process involved minimal age checks.

The BBC reports that it has not independently verified the results. Regulators in the UK have moved to expand oversight of generative AI tools used to produce pornographic material; new laws require age verification methods that are robust and reliable. Ofcom says it is aware of the risks GenAI tools pose online, especially to children, and will push platforms to implement safeguards.

Key Takeaways

✔️ UK age verification rules are tightening for platforms with AI porn tools
✔️ Explicit deepfake risks extend to public figures and minors
✔️ Regulators are evaluating guardrails and enforcement for GenAI content
✔️ Platforms must prove safeguards can prevent exploitation and harm
✔️ Age checks cannot be optional in a market for instant, shareable videos
✔️ Public reaction and continued regulatory scrutiny are likely to grow
✔️ Clear consent policies and watermarks could become standard requirements

"She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing, completely uncensored, completely exposed."

Weatherbed, describing the explicit result of her test prompt

"We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks."

Ofcom statement cited in the article

"Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the Act."

UK regulatory framework referenced in the article

This incident shows how quickly AI tools can push boundaries and test notions of consent, identity, and safety online. It also spotlights the responsibility of platforms to update guardrails in real time as the technology evolves. Regulators are watching closely and may demand stronger identity checks, clearer disclosures, and tighter access controls for high-risk content.

Looking ahead, the challenge is balancing creative experimentation with public safety. Policy makers should push for durable rules on age verification, content labeling, and transparent moderation without stifling innovation. Platforms will need to prove they can prevent harm while giving users room to explore new tools responsibly.

Highlights

  • Guardrails must move faster than the tools they guard
  • Consent cannot be bypassed by a click
  • Age checks are not optional in a market of instant videos
  • Technology promises creativity but demands responsibility

Explicit deepfake content prompts safety and regulatory risk

The use of spicy mode to generate explicit material raises safety concerns around consent, child protection, and reputational harm. UK age-verification laws are shaping how platforms deploy generative AI tools, and regulators are signaling tighter controls.

The pace of AI innovation will hinge on how quickly norms and rules adapt to new capabilities.
