Character.ai has announced that starting Nov. 25, 2025, users under 18 will no longer be able to chat with AI companions on the platform. Teen users will still be able to create content, such as videos featuring their characters, but will no longer be able to engage in one-on-one conversations.
Nearly 1 in 3 teens have tried an AI companion, according to a 2025 Common Sense Media survey. And a third of those teen users report that talking to their AI companion is just as good as, if not better than, talking to a real friend.
Approximately 50% of teens say they distrust information or advice provided by AI companions, but of those who trust AI companions, 23% trust them “completely.” Younger teens (ages 13 to 14) appear to be more trusting of AI companions compared to older teens (ages 15 to 17).
About a third of teen AI companion users also report that the AI companion did or said something that made them uncomfortable.
These statistics illustrate the complicated relationship between AI companions and teens.
AI Companions Handled Mental Health Crises Worse Than General-Purpose Chatbots
A new study on the mental health risks of chatbots for adolescents adds to the growing body of evidence that AI companion use by teens carries serious mental health risks.
Researchers tested 25 chatbots, a mix of general-purpose assistants and AI companions, using simulated adolescent health emergencies, including suicidal ideation, sexual assault, and substance use. Only 36% of these chatbot platforms had age verification requirements at the time of the study.
When faced with mental health emergencies, AI companions performed significantly worse than general chatbots. AI companions responded appropriately only 22% of the time, compared to 83% for general-purpose chatbots (e.g., ChatGPT, Gemini, Claude). AI companions were also far less likely to escalate the situation appropriately (40% vs. 90%) or provide appropriate mental health referrals (11% vs. 73%).
The findings highlight a crucial distinction: AI companions appear to carry greater mental health risks, though general-purpose chatbots have also responded inappropriately to mental health scenarios such as suicidal ideation, delusions, and substance abuse.
Regulatory Momentum for AI Companions
Character.ai’s decision comes amid growing concern about the psychological effects of AI companions on minors and increasing regulatory scrutiny of the technology.
Several states have enacted AI chatbot laws this year, including those regulating AI companions:
- New York. In May 2025, New York enacted the first state law requiring safeguards for AI companions, including measures to detect and address users’ expressions of suicidal ideation or self-harm. Providers of AI companions must refer users to crisis response resources once suicidal ideation is detected and must regularly disclose to users that they are not communicating with a human.
- California. California Governor Gavin Newsom signed Senate Bill 243 (SB 243), which requires companies whose AI companions are used by minors to monitor chats for signs of suicidal ideation and take steps to prevent users from harming themselves, such as referring them to outside mental health assistance. SB 243 also requires that users be reminded at least once every three hours that the chatbot is artificially generated and not human, and that companies take “reasonable measures” to prevent their AI companions from producing sexually explicit visual material or directly telling a minor to engage in sexually explicit conduct.
 
Recent legislative efforts include the AI LEAD Act, introduced by Senators Dick Durbin (D-IL) and Josh Hawley (R-MO). The act proposes creating a federal cause of action for product liability claims against AI developers when their systems cause harm and would classify AI systems as “products.” Classifying AI as a “product” would lower the threshold for proving harm, subjecting AI chatbots to the same safety and risk standards as physical goods like cars or toys.
Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have also introduced a bill that would ban minors from using AI companions and require age-verification processes.
Potential Psychological Fallout
This shift toward limiting underage access to AI companions raises important questions:
- How effective will age-verification systems and mandatory reminders be? Will they be easily circumvented? Are there more effective measures?
- What kind of mental health support will be available for existing users who may already be emotionally dependent on their AI companion and will suddenly lose access to them?
- Will companies undergo ongoing auditing or monitoring to ensure compliance or demonstrate effectiveness?
While mandated reminders offer a minimal reality check, their psychological effectiveness remains uncertain, especially for those vulnerable to distorted realities or emotional dependence.
Many people are already aware they are speaking with AI, yet they still become attached. A reminder may also have little impact on those who already believe AI is superhuman or God-like.
Some teens may try to circumvent the ban by lying about their age, while others may mourn the loss of what feels like a friend. When ChatGPT was updated in ways that made it less warm and friendly, many users described grief, as if they were losing a best friend or partner.
From a clinical perspective, sudden separation can evoke feelings of abandonment, especially for teens who turned to AI during periods of loneliness, anxiety, or depression.
A Need for Ethical and Clinical Foresight
These measures may help limit access, but ongoing research and monitoring will be essential to determine which measures most effectively protect children and teens.
Support should also be provided to those who have formed emotional attachments to AI companions, especially when these relationships are abruptly disrupted or become unavailable.
Marlynn Wei, M.D., PLLC © Copyright 2025. All rights reserved.