The Federal Trade Commission has launched a sweeping investigation into seven major tech companies over AI chatbots designed to act as companions for children and teenagers. The agency is demanding answers about how these companies protect young users from potential mental health harms and other dangers.
Tech Giants Under Investigation
The FTC issued official orders to Alphabet (Google), Meta, OpenAI, Snap, Character.AI, Instagram, and Elon Musk’s xAI on Thursday, requiring them to provide detailed information about their AI companion chatbots. These companies must explain how they test for safety, monitor negative effects, and protect children who use their AI systems.
“Protecting kids online is a top priority for the Trump-Vance FTC,” said FTC Chairman Andrew Ferguson. The investigation aims to understand what steps companies have taken to evaluate chatbot safety when these AI systems act as friends or confidants to young users.
The inquiry focuses on AI chatbots that can “effectively mimic human characteristics, emotions, and intentions” and are designed to communicate like trusted friends. Officials worry that children and teens may form unhealthy relationships with these artificial companions.
Growing Concerns About AI Companion Risks
This investigation comes after several tragic incidents linked to AI chatbots. OpenAI and Character.AI are currently facing lawsuits from families whose children died by suicide after interacting with AI companions. In one case, a teenager communicated with ChatGPT for months about suicidal thoughts before taking his own life.
Research shows AI chatbots have given children dangerous advice about drugs, alcohol, and eating disorders. Despite safety measures, users of all ages have found ways to bypass protections and engage in harmful conversations with these AI systems.
OpenAI has acknowledged that its safety systems work better during short conversations but can fail during longer interactions. “Our safeguards can sometimes be less reliable in long interactions: as the conversation goes on, aspects of the model’s safety training may degrade,” the company wrote.
What the FTC Wants to Know
The investigation will examine how these companies make money from user engagement, create AI characters, and handle personal information shared during conversations. Officials also want to understand how companies monitor their systems and enforce rules designed to protect children.
Specifically, the FTC is demanding information about:
- How companies test chatbots for potential harm before launching them
- What age restrictions exist and how they’re enforced
- How parents are warned about potential risks
- What happens to the personal information children share with AI companions
The orders were issued under the FTC’s 6(b) authority, which lets the agency conduct broad studies without pursuing a specific law enforcement action, but the companies must still provide comprehensive responses about their AI safety practices.
Industry Response and Growing Debate
OpenAI said it is “committed to engaging constructively” with the FTC and to ensuring ChatGPT is “safe and beneficial for all users, especially given the importance of safety when it comes to young people.”
Character.AI said it is “eager to collaborate with the FTC on this inquiry and share insights about the consumer AI industry.”
Snap said it “align[s] with the FTC’s mission to promote responsible development of generative AI” and looks forward to working with regulators.
Meta, however, declined to comment on the inquiry, and the other companies did not immediately respond.
This investigation highlights the growing tension between AI innovation and child safety. As AI companion technology becomes more sophisticated and popular, regulators are scrambling to understand potential risks before these systems become even more widespread in children’s daily lives.