The Trump administration on Thursday released guidance for federal agencies aimed at ensuring the AI models they procure don't spit out "woke" responses.
Why it matters: Company contracts with the federal government could be at risk if large language models are seen as violating the White House's guidelines.
What's inside: The guidance from the Office of Management and Budget states that agencies looking to buy AI systems must determine whether the models comply with what it calls two "unbiased AI principles" — "truth-seeking" and "ideological neutrality."
- The information agencies must obtain will vary depending on the company's role in the software supply chain and its relationship with the model developer, according to the guidance.
- Generally, the closer the company is to the model developer, the more information should be available.
- "Where practicable, agencies should avoid requirements that compel a vendor to disclose sensitive technical data, such as specific model weights," the guidance states.
Beyond LLMs, agencies should also use this guidance for other types of generative AI, such as image or voice tools.
- The memo notes that while its requirements don't apply to national security systems, following them is "encouraged."
Catch up quick: This OMB guidance was called for in an executive order President Trump signed in July.
- AI czar David Sacks has previously said that executive order is mainly aimed at DEI.
Between the lines: The executive order made waves with its politically charged focus on "wokeness," but the guidance reads more like standard procurement rules for high-risk contracts.
- The word "woke" only appears when citing the name of Trump's executive order, "Preventing Woke Al in the Federal Government."
The EO defines "truth-seeking" as LLMs that "prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.
- "Ideological neutrality" is defined as LLMs that are "neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas."
- The EO adds that "developers shall not intentionally encode partisan or ideological judgments into an LLM' s outputs unless those judgments are prompted by or otherwise readily accessible to the end user."
The big picture: Republicans have long complained about alleged censorship of conservatives and liberal bias online, and this is one attempt to make AI companies fall in line.