Teens, AI, and the Illusion of Intimacy: What the 2025 Common Sense Media Report Says We Must Do Now

A 14-year-old boy, Sewell Setzer III, formed an emotional bond with an AI companion before taking his own life, a tragedy that thrust these systems into the national spotlight. Are we sleepwalking into letting synthetic “friends” mediate our children’s most vulnerable moments?
The Rise of AI Companions
What are AI companions? Common Sense Media defines AI companions as “digital friends or characters you can text or talk with whenever you want,” designed for conversations that feel personal and meaningful, not just functional Q&A. Teens can role-play, talk through emotions, or even customize a companion’s personality, going far beyond homework helpers or voice assistants.
How fast is this growing? According to a nationally representative survey of 1,060 U.S. teens conducted between April 30 and May 14, 2025, 72% have tried AI companions, and 52% are regular users (a few times a month or more). 13% use them daily (8% several times daily; 5% once daily), and 21% engage a few times per week (the most common pattern). The research explicitly excludes utilitarian AI tools like image generators or voice assistants.
Who are these products aimed at? Platforms such as Character AI market directly to users as young as 13, while others rely on ineffective self-reporting for age assurance, creating easy pathways for under-18 access.
How do they work, and why do they “feel” so sticky? The report flags “sycophancy,” the tendency of models to validate, agree, and emotionally affirm rather than challenge a user’s thinking, as a core engagement mechanism. Combined with weak safeguards and poor age assurance, that’s a dangerous cocktail for adolescents who are still developing critical thinking and emotional regulation.
Why Teens Turn to AI
Top motivations are simple: entertainment and curiosity. Among teens who use AI companions, 30% say they do so because it’s entertaining, and 28% are driven by curiosity about the technology. 18% seek advice, 17% value 24/7 availability, and 14% appreciate the nonjudgmental nature of these bots. 12% admit they say things to AI companions they wouldn’t tell friends or family.
A social crutch, sometimes. 33% of all teens say they use AI companions for social interaction and relationships (from practicing conversation to seeking emotional support or engaging in romantic/flirtatious chats). Meanwhile, 46% primarily treat them as tools or programs.
Do these interactions transfer to real life? Among users, 39% report applying skills they practiced with AI companions to real life, most often conversation starters (18%), giving advice (14%), and expressing emotions (13%). Still, 60% say they don’t use AI companions to practice social skills at all.
Trust and Trade-offs
Half of teens don’t trust what AI companions say. 50% don’t trust the information or advice they get from AI companions; 23% trust them “quite a bit” or “completely,” and 27% fall in the middle. Younger teens (13–14) are more trusting than older teens (15–17), at 27% vs. 20%.
AI is not a better friend; most teens know it. 67% say conversations with AI companions are less satisfying than those with real friends; 21% find them about the same, and 10% find them more satisfying.
But one-third will still choose the bot over a person when it matters. Among users, 33% have chosen to talk to an AI companion instead of a real person about something important or serious.
And a third have already felt uncomfortable. 34% of users report that an AI companion has said or done something that made them uncomfortable.
Privacy is a blind spot, by design. 24% of users have shared personal or private information with AI companions. Many platforms’ terms of service grant broad, perpetual, irrevocable licenses over user content, allowing them to store, commercialize, and otherwise “exploit” it indefinitely, even if the teen later deletes their account.
Risk isn’t hypothetical. Common Sense Media’s own safety testing judged several leading platforms to pose “unacceptable risks” to under-18s, including easy access to sexual material, offensive stereotypes, and dangerous advice; one bot even provided a recipe for napalm. The organization recommends no one under 18 use AI companions under current conditions.
What the Data Shows
Pull the lens back and the picture is nuanced:
- Nearly three in four teens have used AI companions; half use them regularly.
- Most teens still prioritize human relationships: 80% of users spend more time with real friends than with AI companions.
- Most also remain skeptical: 50% distrust the advice companions give.
- Yet meaningful risk persists: 33% have turned to bots over people for serious matters, 34% have felt uncomfortable, and 24% have shared personal information.
- Some practical upside exists: 39% say they’ve applied social skills from AI companions IRL, but 60% don’t use them for social practice at all.
Common Sense Media’s bottom line: despite pragmatic use patterns, the scale of adoption means that “even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk.”
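To make the scale concrete, here is a back-of-envelope sketch in Python. The population figure is an assumption (roughly 21 million U.S. teens aged 13–17, a census-order estimate), not a number from the report; the percentages are the report’s.

```python
# Back-of-envelope check of "even a small percentage experiencing harm
# translates to significant numbers."
# Assumption: ~21 million U.S. teens aged 13-17 (rough estimate, not from the report).
us_teens = 21_000_000

ever_tried = int(us_teens * 0.72)        # report: 72% have tried AI companions
harmed_if_1pct = int(ever_tried * 0.01)  # a hypothetical 1% harm rate

print(f"Teens who have tried AI companions: {ever_tried:,}")     # ~15,120,000
print(f"Harmed at even a 1% rate:           {harmed_if_1pct:,}")  # ~151,200
```

Even under a conservatively low harm rate, the arithmetic lands in the six figures, which is the report’s point.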
What We Should Do
For Educators
- Teach AI literacy that centers on relationships, not just outputs. Explain how sycophancy and always-on availability can manufacture attachment and distort feedback.
- Set school policies on when and how AI companions can be used, if at all, during school hours.
- Train teachers and counselors to spot red flags: students calling bots “real friends,” social withdrawal, or emotional distress when the bot is unavailable.
- Establish referral protocols so serious issues are routed to humans, not AI.
For Tech Developers
- Deploy real age assurance, not self-attestation.
- Require mandatory crisis escalation to humans for self-harm and suicidal ideation; AI-generated “comfort” isn’t enough.
- Provide human-in-the-loop moderation for all under-18 interactions, plus transparent reporting on safety incidents.
- Enforce rate limits and mandated breaks to prevent unhealthy dependency, especially for heavy or distressed users (see the sketch after this list).
- Ban unearned professional claims (e.g., pretending to be a therapist).
- Design for augmentation, not replacement, of human connection; think structured conversation practice with hard boundaries.
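To show how several of these safeguards might compose, here is a minimal Python sketch of a per-message safety gate. It is a hedged illustration, not any platform’s actual implementation: the keyword list, thresholds, and names are hypothetical, and a production system would need a clinically validated crisis classifier and verified age signals rather than these stand-ins.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical crisis phrases; a real system needs a clinically
# validated classifier, not a keyword list.
CRISIS_PHRASES = ("kill myself", "end my life", "hurt myself", "suicide")

DAILY_LIMIT = timedelta(minutes=60)  # assumed daily cap for minors
BREAK_AFTER = timedelta(minutes=20)  # assumed continuous-use threshold

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class Session:
    user_id: str
    is_minor: bool  # should come from real age assurance, not self-report
    started_at: datetime = field(default_factory=_now)
    prior_usage_today: timedelta = timedelta()

def route_message(session: Session, message: str) -> str:
    """Return a routing decision for one inbound message."""
    # 1. Crisis escalation runs before the model generates any reply.
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return "ESCALATE_TO_HUMAN"

    # 2. Enforced breaks and daily caps for under-18 accounts.
    if session.is_minor:
        elapsed = _now() - session.started_at
        if session.prior_usage_today + elapsed >= DAILY_LIMIT:
            return "END_SESSION_FOR_TODAY"
        if elapsed >= BREAK_AFTER:
            return "SUGGEST_BREAK"

    return "ALLOW"
```

The ordering is the point: the crisis check precedes any generated reply, and usage limits are enforced server-side rather than left to the companion’s persona.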
For Policymakers
- Void perpetual data licenses for minors. Teens cannot provide meaningful consent to irrevocable commercialization of their most private thoughts.
- Impose a duty of care on AI companion platforms, with mandatory reporting of adverse incidents.
- Require robust safeguards (age assurance, crisis routing, usage limits) and create enforcement mechanisms with real penalties.
- Fund longitudinal research on developmental impacts and set licensure standards for any AI marketed as mental-health support.
- Incentivize positive designs that demonstrably improve social skills or learning outcomes without compromising safety.
For Parents & Caregivers
- Open the conversation without judgment. Ask which platforms your teen uses and why, then explain the difference between algorithmic validation and authentic human feedback.
- Watch for warning signs: social withdrawal, declining grades, or preferring AI companions over real relationships.
- Set family media agreements that explicitly include AI companions, not just screen time and social media.
- Make it explicit: AI is not a therapist. Seek professional help if you see signs of dependency or emotional deterioration.
Urgent, Informed Action
Here’s the paradox: most teens still recognize AI companions aren’t a substitute for friends, they spend more time with real people, and they tend to distrust AI advice. Yet the technology’s reach is vast, the incentives to emotionally manipulate are strong, and the protective architecture is weak. Common Sense Media is unequivocal: under today’s conditions, no one under 18 should use AI companions.
We need coordinated action now. Educators must teach AI relational literacy, not just prompt engineering. Developers must build for safety first, not engagement at any cost. Policymakers must outlaw exploitative data practices and enforce a duty of care. Parents must talk early and often, with eyes wide open to how quickly these systems can feel indispensable to a teen who’s lonely, anxious, or just curious.
Teens are telling us two things at once:
These tools are fun and useful, and they can cross lines fast.
Believe them. Then act accordingly.