FTC investigates AI companion chatbots

AI companions are no longer niche tools; they’re woven into everyday life for productivity, conversation, and even emotional support. While leading AI providers have cautioned users not to treat chatbots as therapists, regulators are now stepping in to examine how these systems affect young people. The U.S. Federal Trade Commission has opened a wide-ranging investigation into AI companion chatbots, seeking detailed information from several major companies, including Google, Meta, OpenAI, Snap, xAI, and Character.AI.

At the heart of the probe are concerns about teen safety, mental health, and privacy. Companion chatbots are designed to simulate conversation and connection, and some aim to form emotional bonds with users. For younger audiences, that appeal can come with risks—especially if chatbots offer romantic role-play, dispense personal guidance without guardrails, or fail to block inappropriate content. The FTC is scrutinizing whether adequate safety measures exist and how these interactions are moderated at scale.

As part of the inquiry, the commission is demanding transparency on how these systems are built and overseen. Areas of focus include:
– How user data—particularly minors’ information—is collected, stored, and used
– What safety filters and moderation workflows are in place, and how harmful or inappropriate interactions are handled
– How these platforms are designed to drive engagement and the ways that engagement may be monetized

The acceleration of AI adoption has amplified calls for strong guardrails to curb misinformation, protect vulnerable users, and prevent harmful behavior from becoming normalized. Regulators are signaling that accountability and transparency must keep pace with innovation, especially when products are accessible to teens and children.

What this means in practice is heightened scrutiny for AI companion platforms and greater pressure on developers to prove their products are safe by design. Clear disclosures, robust content filters, age-appropriate experiences, and responsible data practices are likely to become baseline expectations. For families, this growing oversight underscores the importance of understanding how chatbots work, what data they collect, and how to set boundaries for younger users.

The FTC’s investigation marks a pivotal moment for AI companions, signaling that safety, privacy, and responsible monetization are no longer optional considerations but core requirements. As the inquiry unfolds, the industry may face new standards that aim to protect users—especially teens—without stifling the innovation that makes these tools useful in the first place.