Fortune
Catherina Gioino

Education experts to Mamdani: why are you foisting AI on our kids?

(Credit: Jose CABEZAS/AFP via Getty Images)

Researchers, doctors, and child development experts have studied what generative AI does to developing brains. Their conclusion: it shouldn’t be anywhere near a classroom, and action needs to happen fast.

“We just don’t want to waste another 10 years in which our kids’ education is undermined,” Leonie Haimson, executive director of the Parent Coalition for Student Privacy, told Fortune. “It took more than 10 years to ban cell phones from schools. We can’t afford that again.”

Boston-based child advocacy nonprofit Fairplay is leading a coalition of more than 250 experts and organizations in calling for a five-year moratorium on all student-facing generative AI products in Pre-K through 12 schools in the U.S. and Canada. The coalition, made up of mental health experts, parents, educators, and groups dedicated to protecting children online, warned that any product that fails safety testing during that pause should be permanently banned. The report, shared exclusively with Fortune, will be released the same day advocates plan to rally in front of New York City’s City Hall to push for a two-year ban in the city’s public schools specifically.

Fairplay last month led a similar coalition of experts in penning a letter to YouTube and its parent company Alphabet urging them to stop the spread of “AI slop” in YouTube Kids videos. The new report was co-authored by members of the Screen Time Action Network’s Screens in Schools Work Group, including Emily Cherkin, a screen time consultant on the faculty of the University of Washington’s Evans School of Public Policy, along with other online safety and mental health experts.

“It’s an unproven, untested product, and we’re giving it to children in the name of improving education or equity or cognition, when none of those things have been proven,” Cherkin told Fortune. “If a local children’s hospital told parents, ‘We’ve got this new drug, it has potential to save lives, just trust us,’ people would be horrified. We have vetting processes for all kinds of industries, and yet somehow we’re allowing generative AI companies access to our most vulnerable population.”

The experts’ core finding is that AI doesn’t just distract children: it actively interferes with the developmental work they need to do. The human brain isn’t fully formed until the mid-20s, and the prefrontal cortex, used in planning, reasoning, emotion regulation, and critical thinking, is among the last regions to mature. “The problem with giving children generative AI is not just that they will cognitively offload the skill building,” Cherkin said. “It’s that they will displace the building of those skills even in the first place. If they’re never building skills, they have none to offload.”

The report pointed to a joint MIT and Harvard study finding that AI use accumulates “cognitive debt,” impairing independent thinking over time. Similarly, OECD research found that students who use ChatGPT as a study tool actually perform worse on tests than peers without access, even when the AI tutor has been programmed not to provide direct answers.

The mental health findings are equally stark. Google and Character.AI are currently facing lawsuits alleging the latter’s chatbots contributed to user suicides and induced children to harm family members. The American Psychological Association has issued a health advisory on AI and adolescent well-being. The report notes that teachers, therapists, and counselors must maintain licensure and follow ethics codes to work with children, but generative AI products face none of those requirements, and have been found to violate ethical standards when providing mental health support.

Under-resourced schools are more likely to rely on AI as a substitute for human teachers, while well-resourced schools retain them. Because AI training datasets contain historical bias, the report warns, these products are likely to amplify existing educational inequities rather than close them. A February 2026 Pew Research Center survey found that 60% of teenagers say students at their school use chatbots to cheat “very often” or “somewhat often.”

The report is also pointed about what remains unknown. There is no proven educational benefit to generative AI in schools: it is marketed purely on “potential,” which the authors define as “literally what something is not.” Long-term effects on children’s cognitive and social-emotional development are entirely uncharted. “Giving children untested generative AI products based on future potential is dangerous,” the report states.

“The precautionary principle must be employed,” Cherkin said. “The best preparation for a digital future is an analog childhood. If we want kids to navigate generative AI someday, we should be doubling down on the skills that help them think critically, and that’s not happening at all.”

In New York City, Haimson, who is also a member of the DOE’s own AI working group, said Mayor Zohran Mamdani has failed to deliver the break from the previous administration that advocates were promised. “We were hoping for a new attitude in the mayor’s office and at DOE, and we just don’t see it,” she told Fortune. “We see basically the same people running the show. Many of them EdTech enthusiasts, many of them Google fellows. We’re basically seeing our kids’ futures being sold out to EdTech.”

She had stark words for the new mayor, who recently celebrated 100 days in office. “He said he himself doesn’t use AI, which is good, but why is he foisting it on New York City public school students?”

Haimson said the DOE’s AI working group was stonewalled. Officials refused to provide a list of AI products currently in use in city schools, citing NDAs with vendors, and denied requests for teacher training materials. The AI guidance that finally emerged in March was reportedly produced by Accenture, the consulting firm, with no meaningful input from privacy experts or parents. The advisory council that shaped the guidance, she said, was stacked with industry representatives — a legacy of the Eric Adams era and former Chancellor David Banks, who resigned amid an FBI investigation.

“Our initial AI guidance released in March is just the beginning of our work to ensure these tools are used appropriately and safely, with the right guardrails in place to protect students and staff,” read a New York City Department of Education statement to Fortune. “The reality is that many of our students and educators are already engaging with AI in their lives. In order to meet this moment, we know we must design educational spaces that never lose sight of original thinking or human centered learning while advancing AI literacy safely and responsibly, ultimately preparing our students for the workplace of the future.”

The coalition is also raising a structural contradiction at the heart of the industry’s school push: AI companies prohibit minors in their own terms of service while simultaneously marketing to schools. Anthropic’s Terms of Use bar users under 18, yet MagicSchool AI, one of the most widely used K-12 platforms in the country, is built on Anthropic’s models.

The five-year pause, advocates say, would allow time for independent third-party audits of AI platforms, a vetting process for new products, a public registry of every AI tool currently used in schools, and regulatory frameworks that don’t yet exist. Any product that fails that process, the coalition says, should not get a second chance.
