
Congress is quietly drifting toward an ID-verified internet. The Children Harmed by AI Technology (CHAT) Act of 2025 is Washington's newest attempt to regulate speech through mandatory age checks. Rather than protecting kids, it would normalize showing a government ID for basic online speech.
The CHAT Act tries to target fictional role-play bots you've likely heard horror stories about. The problem is that the bill defines a chatbot so broadly that anything that "simulates emotional interaction" could be restricted. By the CHAT Act's standards, ChatGPT, some video game characters, or even a customer service bot could all require users to upload a government ID just to log in.
Large language models (LLMs) learn to write by training on billions of real conversations and stories. That makes their output naturally resemble human dialogue, including its emotional tone. Forcing developers to strip out "interpersonal" behaviors and scrub every trace of emotional tone or dialogue could mean gutting much of the training data itself.
If the CHAT Act were to pass, developers would face two terrible choices. They could impose ID verification across their platforms or censor outputs so aggressively that American AI products could become unusable. Just as China's DeepSeek censors references to Tiananmen Square, the CHAT Act could force U.S. developers toward a similar censorious model of compliance. The result could be an industry-wide unforced error that would hobble innovation relative to foreign competitors.
And it wouldn't stop with chatbots. AI now runs through everyday products: Duolingo's language tutor, Alexa's music suggestions, and video game NPCs offering advice. Under the CHAT Act, any of them could require a government ID. Lawyers and developers, unsure where Congress's ill-defined lines will fall, could slow or suspend AI integrations altogether.
On the internet, the danger extends even deeper. As Google search increasingly leverages its AI chatbot Gemini, as OpenAI builds its new browser "Atlas," and as queries increasingly take place through LLMs instead of search engines, the CHAT Act brings us closer to an ID verification layer across tomorrow's internet.
What's more, the bill wouldn't protect vulnerable users. Age-verification laws are prone to backfire.
Requiring users to upload government IDs may sound simple, but it creates a massive honeypot for hackers. Once those databases are inevitably breached, millions of Americans—including minors—could have their most sensitive personal data stolen in the name of "safety."
Other obstacles, such as ID portals or geoblocking, push tech-savvy users toward VPNs that spoof their location to countries with fewer restrictions. When the UK implemented similar age-verification laws, VPN usage spiked by up to 1,400 percent as users flocked to unregulated platforms abroad. Without U.S. guardrails or basic consumer protections, those foreign platforms could expose children to more dangerous and explicit content.
U.S. lawmakers should avoid driving young Americans toward foreign platforms with weaker protections and lower accountability.
The United States has a long history of consumer protection laws rooted in evidence and precedent, rather than preemptive panic over emerging technologies. Consumer protection frameworks evolved through tested case law rather than reactionary moral legislation.
With the earliest cases still ongoing or only just filed, it is too early to know how courts will treat AI-related harms or whether existing laws can address them. But those cases are likely to yield a far clearer picture of any gaps in current law. Congress needs to stop legislating out of fear and start learning how the technology works.
The panic around "AI harm" has pushed Congress into reactionary policymaking that risks rewriting the rules of online speech without meaningfully protecting kids. By copying the censorious and restrictive internet frameworks of China and the UK, lawmakers could end up creating more danger by forcing children into darker corners of the web.
The post The CHAT Act Won't Protect Kids, But it Might Break the Internet. appeared first on Reason.com.