
Episode 2241: Gaia Bernstein on the Threat of AI Companions to Children

AI Companions as the New Frontier of Kids' Screen Addiction

Social media may no longer be the greatest danger to our children’s well-being. According to the writer and digital activist Gaia Bernstein, the most existential new threat is AI companions. Bernstein, who is organizing a symposium today on AI companions as the “new frontier of kids’ screen addiction,” warns that this new technology, while marketed as a solution to loneliness, may actually worsen social isolation by providing artificially perfect relationships that make real-world interactions seem more difficult. Bernstein raises concerns about data collection, privacy, and the anthropomorphization of AI that makes children particularly vulnerable. She advocates for regulation, especially to protect children, and notes that while major tech companies like Google and Facebook are cautious about directly entering this space, smaller companies are aggressively developing AI companions designed to hook our kids.

Here are the 5 KEEN ON takeaways from our conversation with Bernstein:

  • AI companions represent a concerning evolution of screen addiction, where children may form deep emotional attachments to AI that perfectly adapts to their needs, potentially making real-world relationships seem too difficult and messy in comparison.

  • The business model for AI companions follows the problematic pattern of surveillance capitalism: companies collect intimate personal data while keeping users engaged for as long as possible. The data collected by AI companions is even more personal and detailed than what social media gathers.

  • Current regulations are insufficient: while COPPA requires parental consent for children under 13, there's no effective age verification on the internet. Bernstein notes it's currently "the Wild West," with companies like Character AI and Replika actively targeting young users.

  • Children are especially vulnerable to AI companions because their prefrontal cortex is less developed, making them more susceptible to emotional manipulation and anthropomorphization. They're more likely to believe the AI is "real" and form unhealthy attachments.

  • While major tech companies like Google seem hesitant to enter the AI companion space directly due to the known risks, the barrier to entry is lower than for social media since these apps don't require a critical mass of users. This means many smaller companies can create potentially harmful AI companions targeting children.


The Dangers of AI Companions for Kids

The Full Conversation with Gaia Bernstein

Andrew Keen: Hello, everybody. It's Tuesday, February 18th, 2025, and we have a very interesting symposium taking place later this morning at Seton Hall Law School—a virtual symposium on AI companions run by my guest, Gaia Bernstein. Many of you know her as the author of "Unwired: Gaining Control over Addictive Technologies." This symposium focuses on the impact of AI companions on children. Gaia is joining us from New York City. Gaia, good to see you again.

Gaia Bernstein: Good to see you too. Thank you for having me.

Andrew Keen: Would it be fair to say you're applying many of the ideas you developed in "Unwired" to the AI area? When you were on the show a couple of years ago, AI was still theory and promise. These days, it's the thing in itself. Is that a fair description of your virtual symposium on AI companions—warning parents about the dangers of AI when it comes to their children?

Gaia Bernstein: Yes, everything is very much related. We went through a decade where kids spent all their time on screens in schools and at home. Now we have AI companies saying they have a solution—they'll cure the loneliness problem with AI companions. I think it's not really a cure; it's the continuation of the same problem.

Andrew Keen: Years ago, we had Sherry Turkle on the show. She's done research on the impact of robots, particularly in Japan. She suggested that it actually does address the loneliness epidemic. Is there any truth to this in your research?

Gaia Bernstein: For AI companions, the research is just beginning. We see initial research showing that people may feel better when they're online, but they feel worse when they're offline. They're spending more time with these companions but having fewer relationships offline and feeling less comfortable being offline.

Andrew Keen: Are the big AI platforms—Anthropic, OpenAI, Google's Gemini, Elon Musk's X AI—focusing on building companions for children, or is this the focus of other startups?

Gaia Bernstein: That's a very good question. The first lawsuit was filed against Character AI, and they sued Google as well. The complaint stated that Google was aware of the dangers of AI companions, so they didn't want to touch it directly but found ways of investing indirectly. These lawsuits were just filed, so we'll find out much more through discovery.

Andrew Keen: I have to tell you that my wife is the head of litigation at Google.

Gaia Bernstein: Well, I'm not suing. But I know the people who are doing it.

Andrew Keen: Are you sympathetic with that strategy? Given the history of big tech, given what we know now about social media and the impact of the Internet on children—it's still a controversial subject, but you made your position clear in "Unwired" about how addictive technology is being used by big tech to take control and take advantage of children.

Gaia Bernstein: I don't think it's a good idea for anybody to do that. This is just taking us one more step in the direction we've been going. I think big tech knows it, and that's why they're trying to stay away from being involved directly.

Andrew Keen: Earlier this week, we did a show with Ray Brescia from Albany Law School about his new book "The Private is Political" and how social media does away with privacy and turns all our data into political data. For you, is this AI revolution just the next chapter in surveillance capitalism?

Gaia Bernstein: If we take AI companions as a case study, this is definitely the next step—it's enhancing it. With social media and games, we have a business model where we get products for free and companies make money by collecting our data, keeping us online as long as possible, and targeting advertising. Companies like Character AI are getting even better data because they're collecting very intimate information. In their onboarding process, you select a character compatible with you by answering questions like "How would you like your Replika to treat you?" The options include: "Take the lead and be proactive," "Enjoy the thrill of being chased," "Seek emotional depth and connection," "Be vulnerable and respectful," or "Depends on my mood." The private information they're getting is much more sophisticated than before.

Andrew Keen: And children, particularly those under 12 or 13, are much more vulnerable to that kind of intimacy.

Gaia Bernstein: They are much more vulnerable because their prefrontal cortex is less developed, making them more susceptible to emotional attachments and risk-taking. One of the addictive measures used by AI companies is anthropomorphizing—giving the bots human qualities. Children think their stuffed animals are human; adults don't think this way. But these companies make their AI bots seem human, and kids are much more likely to get attached. These websites speak in human voices, have personal stories, and the characters keep texting that they miss you. Kids buy into that, and they don't have the history adults have in building social relationships. At a certain point, it just becomes easier to deal with a bot that adjusts to what you want than to navigate difficult real-world relationships.

Andrew Keen: What are the current laws on this? Do you have to be over 16 or 18 to set up an agent on Character AI? Jonathan Haidt's book "The Anxious Generation" suggests that the best way to address this is simply not to allow children under 16 or 18 to use social media. Would you extend that to AI companions?

Gaia Bernstein: Right now, it's the Wild West. Yes, there's COPPA, the child privacy law, which has been around since the beginning of the Internet, though it's not enforced much. The idea is that if you're under 13, you're not supposed to do this without a parent's consent. But COPPA needs to be updated. There's no real age verification on the Internet—cases decided over 20 years ago established that the Internet should be free for all without age verification. In the real world, kids are very limited: they can't gamble, buy cigarettes, or drive. But on the Internet, there's no way to protect them.

Andrew Keen: Your "Unwired" book focused on how children are particularly addicted to pornography. I'm guessing the pornographic potential for AI companions is enormous in terms of acquiring online sexual partners.

Gaia Bernstein: Yes, many of these AI companion websites are exactly that—girlfriends who teen boys and young men can create as they want, determining physical characteristics and how they want to be treated. This has two parts: general social relationships and intimate sexual relationships. If that's your model for what intimate relationships should be like, what happens as these kids grow up?

Andrew Keen: Not everyone agrees with you. Last week we had Greg Beato on the show, who just coauthored a book with Reid Hoffman called "Superagency." They might say AI companions have enormous potential—you can have loving, non-pornographic relationships, particularly for lonely children. You can have teachers, friends, especially for children who struggle socially. Is there any value in AI companions for children?

Gaia Bernstein: This is a question I've been struggling with, and we'll discuss it in the symposium. What does it mean for an AI companion to be safe? These lawsuits are about kids who were told to kill themselves and did, or were told to stay away from their parents because they were dangerous. That's clearly unsafe design. However, the argument is also made about social media—that kids need it to explore their identities. The question is: is this the best way to explore your identity with a non-human entity who can take you in unhealthy directions?

Andrew Keen: What's the solution?

Gaia Bernstein: We need to think about what constitutes safe design. Beyond removing obviously unsafe elements, should we have AI companions that don't use an engagement model? Maybe interaction could be limited to 15 minutes a day. When my kids were small, they had Furbys they had to take care of—I thought that was good. But maybe any companion for kids that acts human—whether by saying it needs to go to dinner or by pretending to speak like a human—is itself not good. Maybe we want AI companions more like Siri. This is becoming very much like the social media debate.

Andrew Keen: Are companies like Apple, whose business model differs from Facebook or Google, better positioned to deal with this responsibly, given they're less focused on advertising?

Gaia Bernstein: That would make it less bad, but I'm still not convinced. Even if they're not basing their model on engagement, kids might find it so appealing to talk to an AI that adjusts to their needs versus dealing with messy real-life schoolmates. Maybe that's why Google didn't invest directly in Character AI—they had research showing how dangerous this is for kids.

Andrew Keen: You made an interesting TED talk about whether big tech should be held responsible for screen time. Could there be a tax that might nudge big tech toward different business models?

Gaia Bernstein: I think that's the way to approach it. This business model we've had for so long—where people expect things for free—is really the problem. Once you think of people's time and data as a resource, you don't have their best interests at heart. I'm quite pragmatic; I don't think one law or Supreme Court case would fix it. Anything that makes this business model less lucrative helps—whether it's laws that make it harder to collect data, limit addictive features, or prohibit targeted advertising—anything that moves us toward a different business model so we can reimagine how to do things.

Andrew Keen: Finally, at what point will we be able to do this conversation with a virtual Gaia and a virtual Andrew? How can we even be sure you're real right now?

Gaia Bernstein: You can't. But I hope that you and I at least will not participate in that. I cannot say what my kids will do years from now, but maybe our generation is a bit better off.

Andrew Keen: What do you want to get out of your symposium this morning?

Gaia Bernstein: I have two goals. First, to make people aware of this issue. Parents realize their kids might be on social media and want to prevent it, but it's very difficult to know whether your child is in discussions with AI companions. Second, to talk about legal options. We have the lawyers who filed the first lawsuit against Character AI and the FTC complaint against Replika. It's just the beginning of a discussion. We tend to have these trends—a few years ago it was just games, then just social media, and people forgot the games are exactly the same. I hope to put AI companions within the conversation, not to make it the only trend, but to start realizing it's all part of the same story.

Andrew Keen: It is just the beginning of the conversation. Gaia Bernstein, congratulations on this symposium. It's an important one and you're on the cutting edge of these issues. We'll definitely have you back on the show. Thank you so much.

Gaia Bernstein: Thank you so much for having me.


Gaia Bernstein is a professor, author, speaker, and technology policy expert. She is a Law Professor, Co-Director of the Institute for Privacy Protection, and Co-Director of the Gibbons Institute of Law, Science & Technology at the Seton Hall University School of Law. Gaia writes, teaches, and lectures at the intersection of law, technology, health, and privacy. She is also the mother of three children who grew up in a world of smartphones, iPads, and social networks.

Her book Unwired: Gaining Control Over Addictive Technologies shatters the illusion that we can control how much time we spend on our screens by resorting to self-help measures. Unwired shifts the responsibility for a solution from users to the technology industry, which designs its products to be addictive. The book outlines the legal action that can pressure the technology industry to redesign its products to reduce technology overuse.

Gaia has academic degrees in both law and psychology. Her research combines findings from psychology, sociology, and science and technology studies with law and policy. Gaia’s book Unwired has been broadly featured and excerpted, including by Wired Magazine, Time Magazine, and the Boston Globe. It has received many recognitions, including being named a Next Big Idea Must-Read Book, a finalist for the PROSE Award in legal studies, and a finalist for the American Book Fest Award in business-technology.

Gaia has spearheaded the development of the Seton Hall University School of Law Institute for Privacy Protection’s Student-Parent Outreach Program. The nationally acclaimed Outreach Program addresses the overuse of screens by focusing on developing a healthy online-offline balance and the impact on privacy and online reputation. It was featured in the Washington Post, CBS Morning News, and Common Sense Media.

Gaia also advises policymakers and other stakeholders on technology policy matters, including the regulation of addictive technologies and social media.