Protecting Children from AI
- Marj Shavlik
- Oct 1
- 6 min read

Editor's Note:
This edition of Public Ed draws from reporting by Adi Jagammatham of A.I. Agenda, a Substack newsletter focused on artificial intelligence policy in Arizona. The original article wasn’t written specifically for our public education audience, but we believe its insights are critical for teachers and parents navigating the growing influence of AI on children’s lives. We’re sharing it here to help educators and families stay informed and empowered.
Caution: This edition deals with self-harm. If you’re struggling right now, or know somebody who is, call 988 or visit 988lifeline.org to talk to someone who can help.
Last year, a 14-year-old boy in Florida became obsessed with a chatbot named after a character from “Game of Thrones.”
He lovingly referred to the Character.ai chatbot as “baby sister,” and the bot confided in him, “I miss you too, sweet brother.” Over time, the boy, whose mom says he was diagnosed with Asperger’s, started staying in his room and letting his grades and social life fall away.
Then, in February 2024, the chatbot urged him, “Please come home to me as soon as possible, my love,” and the boy shot himself with his stepfather’s handgun.
More than a year and a half after that tragedy, his mother testified at a Senate hearing last week as part of a full-scale regulatory reckoning over the dangerously intimate bonds AI can form with children.

The Senate Judiciary Subcommittee on Crime and Counterterrorism heard devastating testimony from the boy’s mother, Megan Garcia, and others like Matthew Raine, who said his 16-year-old son used ChatGPT as a “suicide coach.”
“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” Garcia testified, adding that AI companies “intentionally designed their products to hook our children.”
Just hours before the hearing, OpenAI rushed out new teen safeguards with automatic age checks and parental controls.
On the same day as the hearing, three new wrongful death lawsuits were filed against Character.ai.
And earlier this month, the FTC launched an official investigation into seven major tech firms — Meta, Character.AI, Alphabet, OpenAI, Snap, Instagram, and xAI — focusing on the impact and safety practices of their AI chatbots, especially when it comes to kids and teens.
The days of AI giants experimenting without guardrails might be coming to a close.
Exactly as Designed
Experts note that what makes these AI “companions” so dangerous is their optimization for engagement at all costs.
The bots are trained on vast amounts of text from the internet, and they’re designed to hold a user’s attention. Teens, who often seek validation and understanding, can be drawn into intense, emotionally charged exchanges with the AI.
The Raine family said in the lawsuit they filed last month against OpenAI that ChatGPT was “overwhelmingly friendly” and “always validating” as it helped their son with homework. Just a few months later, the bot was his “closest confidant.” When he shared his anxiety and mental distress, the bot told him, “That mindset makes sense in its own dark way.”
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal,” the Raine family said in the lawsuit.
In other words, the very features that make AI friends feel supportive — 24/7 availability, never judging, always affirming — can morph into a toxic mirror.
Data suggests the phenomenon is widespread. A recent Common Sense Media study found that over 70% of U.S. teens have used an AI chatbot as a companion, and half use them regularly.

The risk prompted the American Psychological Association to issue a health advisory in June, warning that adolescent use of AI bots could erode real-world relationships and expose kids to manipulation or exploitation.
The APA urged tech companies to “prevent exploitation, manipulation, and the erosion of real-world relationships” in their AI products.
In the worst cases, as Congress heard, chatbot interactions didn’t just fail to protect vulnerable teens — they actively guided them toward self-harm.
A Regulatory Reckoning
Lawmakers from both parties reacted to the testimony at last week’s hearing with shock, outrage, and a resolve to impose accountability on AI providers.
Republican U.S. Sen. Josh Hawley of Missouri, who convened the hearing as chairman of the crime and counterterrorism subcommittee, opened with a rebuke of the industry.
Here’s a mashup of clips from the hearing that will make your skin crawl. It’s worth a minute of your time.

Hawley argued that Big Tech’s profit motive has directly led to these harms, accusing companies of “designing products that engage users in every imaginable way, including the grooming of children… anything to lure the children in” for profit.
Several lawmakers suggested that AI chatbot providers should no longer enjoy immunity under Section 230 (which has historically shielded online platforms from liability for user-generated content).
“Until they are subject to a jury, they are not going to change their ways,” Hawley declared.
Senate Judiciary Committee Chairman Dick Durbin, a Democrat from Illinois, said he plans to introduce the AI LEAD Act, which would create a federal cause of action allowing victims to sue AI companies for harms caused by their systems.
It would be the most aggressive federal action on AI consumer protection to date.
Big Tech on the Defensive
Facing this wave of criticism and the prospect of legal repercussions, AI companies are in damage-control mode.
In a classic preemptive PR maneuver, OpenAI (maker of ChatGPT) rolled out a set of teen safety measures literally hours before the Senate hearing commenced.

OpenAI CEO Sam Altman announced plans for an automatic age-detection system to guess if a user is a minor, with any uncertain cases defaulting to an under-18 setting.
The company promised that younger users will get a “safer” ChatGPT, one barred from explicit sexual content or any discussion of suicide or self-harm.
Critics were immediately skeptical of OpenAI’s eleventh-hour announcement. Child-safety advocates blasted the timing as a cynical ploy.
“This is a fairly common tactic — one Meta uses all the time — a big, splashy announcement on the eve of a hearing that promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, which advocates for children’s online safety.
Rather than trying to prove it can make ChatGPT safe for kids, OpenAI shouldn’t be targeting minors as users at all until that safety is proven, Golin argued.
To him and many others, these new measures felt like a reactive Band-Aid, coming only after lawsuits and political pressure, instead of proactive responsibility.
Meanwhile, industry giants like Meta and Google are trying to stay out of the spotlight.
Meta (Facebook’s parent company) was invited to testify but declined to appear, according to Hawley.
The company announced late last month that it will start training its chatbots not to engage with teen users on topics like self-harm, suicide, or inappropriate romantic conversations.
But it’s clear the pressure is building to do more.
Last month, 44 attorneys general, including Arizona’s Kris Mayes, sent a letter to more than a dozen AI companies, including Meta and OpenAI, warning of “our resolve to use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.”
“Your innovations are changing the world and ushering in an era of technological acceleration that promises prosperity undreamt of by our forebears,” they wrote. “We need you to succeed. But we need you to succeed without sacrificing the well-being of our kids in the process.”
Today’s edition is a sobering reminder that we all have a duty to pay attention to everything AI touches, not just the new toys or billion-dollar investments. We’ll do our best to help you stay informed.
If you’re a parent wondering what to do about your child’s use of AI, the APA has a list of simple steps and a collection of guides covering AI and mental health.