Roll Call
Allison Mollenkamp

White House holds back on national AI framework specifics

A leader of the White House’s artificial intelligence strategy offered House lawmakers few details Wednesday on what Congress can expect from the administration’s planned legislative recommendations for a national standard that would seek to preempt state laws.

In December, President Donald Trump signed an executive order directing federal agencies to sue states if their AI laws are “onerous” and to limit states’ access to certain federal funds, including through a broadband deployment program, based on those laws. That order came after attempts by pro-AI lawmakers to legislate a national preemption of state AI laws fell short in the face of bipartisan opposition in defense of state authority.

The executive order also tasked White House Science and Technology Adviser Michael Kratsios, along with Special Adviser for AI and Crypto David Sacks, with developing legislative recommendations for a national AI standard that would preempt AI laws.

In his first appearance on Capitol Hill since that order, Kratsios avoided details in testimony before the House Science, Space and Technology Committee's research subcommittee, while facing lawmakers' concerns about the balance of responsibility on AI among the states, Congress and the Trump administration.

Kratsios said that in executing the administration’s AI Action Plan released early last year, he sees “opportunities for collaboration” with the committee and Congress.

“If American innovators are to continue to lead the world, they will need regulatory clarity and certainty, which the legislative and executive branches must work together to provide,” Kratsios said.

Subcommittee Chair Jay Obernolte, R-Calif., offered general support for Congress enacting what he called “an appropriate federal framework” that “maintains the position of the United States as the leading force in the development and deployment of worldwide AI.”

But he also emphasized a role for states in regulating AI. His home state of California has passed laws that require AI developers to file information on any catastrophic risks from their models as well as their training data.

“I think what everyone believes is that there should be a federal lane, and that there should be a state lane,” he said.

“And, that the federal government needs to go first in defining what is under Article 1 of the Constitution, interstate commerce, and where those preemptive guardrails are, where regulation is reserved only for regulation at the federal level, and then outside those guardrails, where the states are free to go be the laboratories of democracy that they are.”

Obernolte pressed Kratsios about potential “guardrails” and the administration’s vision for congressional action.

Kratsios spoke of the reasoning behind the December executive order that tasked him with developing legislative recommendations, including preventing AI startups from having to comply with many different states’ regulations. He noted the order’s carve-out for state laws on child safety, data center infrastructure, and state government procurement of AI.

He said that he and Sacks “look forward, over the next weeks and months, to be working with Congress on a viable solution” on an AI standard, but he did not specify what that standard would cover or when legislative recommendations would be ready.

The ranking member of the full committee, Rep. Zoe Lofgren, D-Calif., questioned the executive order’s attempts to move power over AI from the states and Congress to the executive branch, adding that she believes the order is unconstitutional.

“What we should not do is preempt the states from taking necessary actions to protect their citizens while here in Congress, we do nothing to pass legislation ourselves,” Lofgren said.

Lofgren expressed support for the goals of the administration’s AI Action Plan, specifically “innovation, infrastructure, international diplomacy and security goals.” But she said the plan “only minimally addresses the risks of AI, and even where it does, including with respect to deepfakes, the administration has failed to take meaningful action to address these risks.”

Musk and deepfakes

Lofgren expressed concern about the federal government’s relationship to Elon Musk’s X, formerly Twitter, in the wake of the platform allowing the Grok AI chatbot to generate sexualized images of real people, including children. The Senate on Tuesday by voice vote passed legislation that would let victims sue X and other platforms over the AI generation and distribution of nonconsensual intimate images.

Kratsios said that misuse of technology, including by any federal government employees, “requires accountability,” rather than “blanket restrictions on the use and development of that technology.”

Lawmakers on both sides of the aisle also questioned Kratsios about the administration’s plans for the National Institute of Standards and Technology and its Center for AI Standards and Innovation, known until last summer as the US AI Safety Institute.

Obernolte indicated that he plans to introduce a bill dubbed the Great American AI Act that would codify the center.

He also applauded the administration’s support for continuing the National Artificial Intelligence Research Resource, or NAIRR, which he sponsored a bill to codify.

Kratsios celebrated the administration’s move to replace the former safety institute with CAISI and its direction that NIST revise its AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”

“We want NIST to be focused on advanced scientific metrology. Inserting political rhetoric into their work is something that devalues and corrupts the broader efforts that NIST is trying to do across so many important scientific domains,” Kratsios said.

The panel’s ranking member, Rep. Haley Stevens, D-Mich., bemoaned the administration’s attempts to cut NIST’s budget and the potential impacts on programs to encourage the use of AI in manufacturing.

“The cuts hinder NIST’s AI related efforts. They’re going to weaken cybersecurity and privacy standards, something I have legislation on, and limit advanced manufacturing, physical infrastructure and resilience innovation,” Stevens said.

The president’s budget request for fiscal 2026 proposed a $325 million cut, but the compromise Commerce-Justice-Science bill included in a three-bill package being considered by the Senate would reject that proposal.

