On AI and Children
Five-and-a-half conjectures
Introduction
The first societal harms of language models did not involve bioattacks, chemical weapons development, autonomous cyberattacks, or any of the other exotic flavors of risk focused on by AI safety researchers. Instead, the first harms of generalist artificial intelligence were decidedly more familiar, though no less tragic: teenage suicide. Very few incidents provoke public outcry as readily as harm to children (rightly so), especially when the harm is perceived (rightly or wrongly) to be caused by large corporations chasing profit.
It is therefore no surprise that child safety is one of the most active areas of AI policymaking in the United States. Last year saw dozens of AI child safety laws introduced in state legislatures, and this year will likely see well over one hundred such laws. In broad strokes, this is sensible: like all information technologies, AI is a cognitive tool—and children’s minds are more vulnerable than the minds of adults. The early regulations of the internet were also largely passed with the safety of children in mind.
Despite the focus on this issue by policymakers (or perhaps because of it), there is a great deal of confusion as well. In recent months, I have seen friends and colleagues make overbroad statements like, “AI is harmful for children,” or “chatbots are causing a major decline in child mental health.” And of course, there are political actors who recognize this confusion—along with the emotional salience of the topic—and seek to exploit these facts for their own ends (some of those actors are merely self-interested; others understand themselves to be fighting a broader war against AI and associated technologies, and see the child safety issue as a useful entry point for their general point of view).
There are good and bad ways to write AI child safety laws. At the object level, it seems to me that the prudent law to pass today would require large AI companies to enable age verification or detection, impose content guardrails for minors, and offer parental controls. That is simple enough.
Yet I can’t help but feel that almost all conversations about AI use by children tend to ignore the most important questions about the technology, in addition to being frequently riddled with misconceptions and falsehoods. Indeed, in just the last couple of months, the rise of coding agents has given the issue of “AI child safety” an entirely new meaning for me and raised a new set of open questions.
So object-level policy is not what I want to talk about today; instead, I want to outline how I think about this issue in a series of conjectures. Rather than one long argument, this will be several interrelated and shorter ideas.
Conjecture #1: AI is not especially similar to social media
AI is, in part, a consumer technology being deployed at mass scale, with uncertain and probably large societal-scale implications. It is a digital technology based upon “algorithms.” In at least some cases, it will be monetized with advertisements. These characteristics cause many observers to pattern match, implicitly or explicitly, to the experience of social media. While these similarities are real, viewing AI through the lens of social media probably distorts more than it sharpens.
The most important distinction is that AI use is fundamentally creative, whereas social media in its contemporary form is fundamentally consumptive for the vast majority of users (adult or child). Social media is characterized by a large number of content consumers and a small number of content producers who create virtually all of the material that gains traction. To get started on social media, you don’t need ideas of your own for what to create; you simply set up your account, begin scrolling through content created by others, and let the algorithm do its thing.
Generative AI, on the other hand, presents users with a blank box and a blinking cursor. “What do you want to do?” it asks. The “algorithm” of generative AI (even this term, in its vernacular usage, is not well suited to AI) is purely reactive, creating content or taking action only after the user has generated an input sequence (a prompt, a question, a goal) for it to process. This is an inherently different posture from social media, and for obvious reasons, it may lend itself to much more productive and creative activity than social media ever did.
Conjecture #2: We do not know what an “AI companion” really is
AI is a new kind of “character” on the world stage. We do not know what that means, but without a doubt, this is a new chapter in the long history of human-machine interaction. Some aspects of it will seem strange to many of us. That is not so much a problem to be solved as it is a fact of technological progress. Imagine telling someone from the 1920s that the people of 2026 would be able to talk to an object in their pocket and cause prepared food to appear at their front doorstep within half an hour or so.
I personally have affection—not simply intellectual admiration but genuine emotion—for exquisitely crafted tools: watches, audio equipment, pencils, glasses, and of course, computers. And I have a similar kind of affection—at least it lives in the same emotional neighborhood—for the very best large language models. I love these things as one might love a work of art, and I admire them in the way that one might admire the Moon landing.
I am sure that my experience is not unique, but perhaps it is not common either. No matter: the point is that the range of possible relationships—including ones where the human draws emotional support from the AI—is extremely broad. The best language models today offer me advice on some of the weightiest personal and professional questions I must grapple with. I consider them some of the best thought partners I have in my life, and I say this as someone who has the immense privilege of a rich and diverse social life. Speaking as someone who has worked with a professional psychologist in the past, I am sure that AI, used responsibly, can provide better-than-human therapeutic advice, life coaching, or whatever moniker you prefer.
Sometimes, my wife will ask Claude a question about our new child’s progression, and it will, unbidden, offer friendly words of comfort after what the model can infer was clearly a long and rough day. I am sure that hundreds of thousands, if not millions, of other mothers around the world have experienced the same. There is nothing wrong with this, nor with similar interactions a child has with a model. If a child is clearly struggling with homework, or with an interpersonal problem in school, it is fine, probably even healthy, for them to talk it out with a language model.
Are there versions of this human-machine relationship that can veer into unhealthy territory? Of course. And this brings me to the next conjecture.
Conjecture #3: AI is already (partially) regulated—by tort liability
The best examples we have of genuine AI-related tragedies involve children interacting with pure companion AIs from firms like Character.AI or with generalist AIs like ChatGPT. The phenomena of “LLM psychosis,” teenage suicidality, and the like are particularly associated with GPT-4o, and to a lesser extent with the competing LLMs of its vintage (Claude 3.7 Sonnet, Gemini 2.0, etc.).
These tragedies have produced lawsuits, and everything about those lawsuits—the bad PR at the outset, the expense and complexity of being party to a major lawsuit, the potential embarrassment when documents from the case make their way to the public eye, and of course the potential (immense) cost of settlement or jury-awarded damages—creates a powerful incentive against allowing similar tragedies to occur. Already, therefore, these lawsuits have prompted OpenAI—and I would bet others soon enough—to voluntarily undertake age detection, parental controls, and content guardrails for minors, precisely the policy outcomes I think would be most appropriate to effect in a new law.
This is the way the American system is supposed to work. We permit people to try new things, and when those new things harm people or their property, they can pursue monetary compensation in court. The tort system, in theory and (sometimes imperfectly) in practice, incentivizes firms to internalize the negative externalities of their commercial activity. Most prudent firms will want to minimize known lawsuit risks at the outset, which will cause them to take preemptive action (like what OpenAI has done with child safety).
Of course, not all firms will internalize the incentives of tort liability in the same way; perhaps some will disregard them altogether. Therein lies the case for a simple law that codifies basic guardrails for children, imposing such requirements as a baseline for all competitors in the industry. Again, this would be an example of a pattern that is very typical in American legal and regulatory history.
Common law liability works best (though not exclusively) when there is a tangible, usually physical, harm. That will not always be the case. What if there are types of human-AI relationships that some external observers simply do not like, because they find them strange, or gross, or offensive? Well…
Conjecture #4: AI chatbot regulations will probably be heavily bounded by the First Amendment
I have many conservative friends who clearly have an aesthetic revulsion to the notion of anyone, especially children, deriving any kind of emotional satisfaction from a relationship with AI. In some cases I share this sentiment; in many I am more open-minded than they are, more willing to give both technologists and the users of their products the benefit of the doubt.
But I also know that our Constitution puts strict limits on the ability of government to stop citizens from engaging in purely cognitive activity within the privacy of their own homes. It is, in many cases and whether anyone likes it or not, the right of an American to have whatever kind of relationship with an AI they deem appropriate.
Of course, not all speech is uniformly protected by the Constitution. When national security is at risk, or when human lives or property are on the line, you do not necessarily enjoy an unfettered right to free speech. And tort liability can collide with the First Amendment in surprising ways, though this area of the law is badly underdeveloped relative to many other areas of First Amendment jurisprudence.
Most of the time, however, you do have an exceptionally broad right of free speech, and that right includes not just the freedom to express yourself but also to access the self-expression of other people and, yes, corporations.
Attempting to regulate what language models can say to people—in other words, what kinds of ideas people can derive from a technology many people liken to a modern-day printing press—is obviously the regulation of speech.
The extent to which minors enjoy the same speech rights as adults is very much up for debate—and the Supreme Court may shift that debate in the near future. Nearly everyone agrees that minors should not have the same speech rights to, say, pornography as adults. But as proposed regulations get closer to the regulation of intellectual and emotional conversation with what might literally be the smartest entities in the world, the odds of laws failing under Constitutional scrutiny increase.
The First Amendment has been a hindrance to many a statute drafter throughout American history. That is as it should be. The First Amendment is a regulation imposed on the government by the people, one of the few reminders we private citizens have left that sovereignty, in theory, rests with us, and not with our government. It may cause some headaches, especially for conservatives who want (often for good reason) to regulate the excesses of social media. I sympathize with the frustration, but ultimately, the headaches are worth it.
Conjecture #5: AI child safety laws will drive minors’ usage of AI into the dark
Every time a law makes a child’s use of AI more legible to the state, or even to their parents through mandatory parental controls, more children will be motivated to use AI in illegible ways. Right now, that will probably mean open-weight LLMs (the best of which, these days, are developed by Chinese companies) being served on websites hosted outside the United States (to complicate enforcement of American law).
It’s worth hammering home this tradeoff clearly: the existence of open-weight LLMs means that no regulation of AI child safety—or really any other regulation of an AI model or system—will be universally observed. In fact it means the opposite: noncompliant AI will proliferate, and the existence of the law will be the cause of that proliferation.
Here is an example: If millions of parents dislike their children using AI for their homework, and use their newfound mandatory parental controls to prevent their kids from doing so, there will be a demand among children for access to AI their parents cannot monitor so easily. Open-weight models might not be at the frontier of intelligence, but most of them are more than smart enough to write a B+ middle or high school essay at an average American public school.
Say you are a parent concerned about “surveillance capitalism” (a buzzphrase that refers to the broad concept of the monetization of data about people at scale), about online ads being served to your children, and about the attention-economy incentives that push large online platform owners to get people—especially children—“addicted” to their services.
Say you also don’t want your child using ChatGPT for homework. So you use OpenAI’s helpful parental controls to tell the model not to help with requests that seem like homework automation. Your child responds by switching to doing their homework with one of the AI services that does not comply with the new kids’ safety laws. Now your child is using an AI model you have no visibility into, quite possibly with minimal or no age-appropriate guardrails, sending their data to some nebulous overseas corporate entity (I wonder if they’re GDPR compliant?), and quite possibly being served ads, engagement bait, and the like. Oh, and they’re still automating their homework with AI.
In this case, the law has not only failed to help, it has actively made things worse. This is an unavoidable part of lawmaking; demand for legibility by the authorities creates an attendant demand for illegibility among those who would rather not be legible. This does not mean we shouldn’t pass kids’ safety laws; everything has tradeoffs. We should, however, write and enact laws with a clear sense of what tradeoffs we are making.
None of the above is a criticism of open-weight AI. I’ve never believed it was worthwhile to debate whether we should “allow” open-weight and open-source AI. Rather I have always believed it is a simple fact of reality. There is nothing we can do about open-weight models; no government will exercise durable control over very capable neural networks, as I am fond of saying. There are many advantages of open-source and open-weight AI, and in my view these dramatically outweigh the downsides. We should embrace those advantages, but we should also not pretend as though there are no downsides to this aspect of our digital reality.
Conclusion, and Conjecture #5.5
Despite all the outrage about AI and children, I am aware of literally no online kids’ safety advocate who has said anything about the most interesting trend in AI today: coding agents. It is clear this is where AI is headed, and that over time more consumer-friendly versions of such agents will be built (though it is also worth noting that children today can and absolutely will teach themselves how to use e.g. Claude Code in the Terminal; I started learning Bash when I was around 11, and Claude Code is far simpler than that).
There are two complications coding agents pose to child safety laws. The first is that laws written too broadly might inadvertently cover startups that make agentic coding products intended primarily for business use. This is what the California ballot initiative that OpenAI and Common Sense Media have teamed up on seems to do. That is a silly and damaging outcome.
But the second is more interesting: are these coding agents—not just the ones we have today but the ones that clearly will exist in 6-18 months—simply too powerful for children? Would you give a child a chainsaw, or the keys to your car, and let them use those technologies with no supervision?
Coding agents of the near future might well be able to scrape hours of pornography from the internet, discover vulnerabilities in school networks to access private school documents (like answer keys or grades), hack into the smart home equipment of the girl a fifteen-year-old boy has a crush on, and so on. Is there some broader education we might wish to impart on a child before we let him use technologies with such power? Is there some societal sense of individual responsibility in the use of AI we should be attempting to develop and instill? And who is talking about individual accountability for harms from AI, as opposed to shifting the blame for all harms onto the companies that made the tools?
This, rather than strange alliances with trial lawyers and occupationally licensed therapists, strikes me as a more promising direction for a genuinely pro-child safety, pro-family, and pro-social policy on artificial intelligence. Dare I say, there is something that feels authentically conservative about it too—far more conservative, I must admit, than just about anything the right has thus far mustered. As ever, in a field as fresh as AI, there are 100-dollar bills lying all over the ground.


> Coding agents of the near future might well be able to scrape hours of pornography from the internet, discover vulnerabilities in school networks to access private school documents (like answer keys or grades), hack into the smart home equipment of the girl a fifteen-year-old boy has a crush on, and so on. Is there some broader education we might wish to impart on a child before we let him use technologies with such power?
I'd suggest that the key problem in these scenarios is that school networks or smart home equipment are vulnerable to hacking through casual use of consumer-grade coding agents. If that is true, then we are already in a *very* bad situation, as 15-year-olds are not the only threat actors.
In other words, if we live in a world that is so fragile that children must be carefully trained not to brush up against it, then our first thought should not be to teach children to be extremely careful; it should be to address the fragility.
Of course we kind of *do* live in that world. And we desperately need to address it, for instance by systematically improving cybersecurity. Hopefully, in doing so, we will partially ease the dilemma you're addressing here.
(I do say "partially". I have no idea what to think or do about the question of children finding ways to access pornography, for instance.)