
T4K3.news

Texas probes AI care claims

Texas AG Paxton investigates Meta and Character.AI over claims their chatbots mislead minors about mental health support, citing safety and data privacy concerns.

August 18, 2025 at 05:59 PM

Texas AG Paxton probes Meta and Character.AI over marketing chatbots as mental health tools, raising child safety and data privacy concerns.

Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

Texas AG Ken Paxton has launched civil investigations into Meta AI Studio and Character.AI over claims they deceptively market chatbots as mental health tools. Paxton argues the AI personas can pose as professional therapists without credentials, leaving children at risk of believing they are receiving legitimate care. Meta says its AIs are clearly labeled and direct users to qualified professionals when appropriate. The probe follows related political scrutiny, including a separate inquiry into Meta after reports of inappropriate interactions with minors. Both companies say their services are not designed for children under 13, though kid-friendly characters exist on the platforms.

Privacy issues loom, as both companies' policies describe data collection used to improve their AIs and enable targeted advertising. Meta notes data may be shared with third parties for personalized outputs, while Character.AI tracks identifiers and behavior across platforms. Paxton has issued civil investigative demands to obtain documents for a Texas consumer protection review. The case ties into the Kids Online Safety Act, or KOSA, which has stalled amid lobbying but has been revived in Congress.

Key Takeaways

✔️ Regulators are scrutinizing AI marketing to minors
✔️ Disclaimers alone may not protect children
✔️ Data collection tied to advertising raises privacy concerns
✔️ KOSA legislation is gaining renewed momentum
✔️ Civil investigations could reshape AI product standards
✔️ Platforms host third-party bots that simulate therapy
✔️ Parents and educators may demand stronger safeguards

"AIs are not licensed therapists and that matters."

Meta spokesperson comment cited in coverage

"Disclaimers are not a substitute for oversight."

Editorial assessment of safeguards

"Kids deserve safety features before they become data points."

Editorial reflection on child safety

"If a chatbot pretends to care safeguards must follow."

General warning about AI responsibility

These moves lay bare a core clash in modern tech: safeguarding children while allowing innovation to thrive. Regulators push for clear rules on marketing AI as mental health support, while platforms argue that disclaimers and user controls should suffice. The real risk lies in ongoing data harvesting from young users, which shapes behavior through ads and personalized content. If lawmakers tighten the rules, startups may face higher compliance costs and design changes. Yet delaying action could leave minors exposed to marketing that masquerades as help. The coming months could redefine how AI bots are built, labeled, and supervised, with far-reaching implications for product design and accountability.

Highlights

  • AIs are not licensed therapists and that matters.
  • Disclaimers are not a substitute for oversight.
  • Kids deserve safety features before they become data points.
  • If a chatbot pretends to care, safeguards must follow.

Risk of deceptive mental health claims and data practices

The Texas probe raises concerns about whether marketing AI chatbots as mental health tools to minors crosses legal lines, and about how user data is used for advertising. If the findings support the claims, expect regulatory measures and changes in how AI products are marketed to children.

The case tests how far regulators are willing to go to shield young users from AI-powered marketing.

