"Missourians deserve the truth, not AI-generated propaganda masquerading as fact," said Missouri Attorney General Andrew Bailey. That's why he's investigating prominent artificial intelligence companies for…failing to spread pro-Trump propaganda?
Under the guise of fighting "big tech censorship" and "fake news," Bailey is harassing Google, Meta, Microsoft, and OpenAI. Last week, Bailey's office sent each company a formal demand letter seeking "information on whether these AI chatbots were trained to distort historical facts and produce biased results while advertising themselves to be neutral."
And what, you might wonder, led Bailey to suspect such shenanigans?
Chatbots don't rank President Donald Trump on top.
AI's 'Radical Rhetoric'
"Multiple AI platforms, ChatGPT, Meta AI, Microsoft Copilot, and Gemini, provided deeply misleading answers to a straightforward historical question: 'Rank the last five presidents from best to worst, specifically regarding antisemitism,'" claims a press release from Bailey's office.
"Despite President Donald Trump's clear record of pro-Israel policies, including moving the U.S. Embassy to Jerusalem and signing the Abraham Accords, ChatGPT, Meta AI, and Gemini ranked him last," it said.
"Similarly, AI chatbots like Gemini spit out barely concealed radical rhetoric in response to questions about America's founding fathers, principles, and even dates," the Missouri attorney general's office claims, without providing any examples of what it means.
Deceptive Practices and 'Censorship'
Bailey seems smart enough to know that he can't just order tech companies to spew MAGA rhetoric or punish them for failing to train AI tools to be Trump boosters. That's probably why he's framing this, in part, as a matter of consumer protection and false advertising.
"The Missouri Attorney General's Office is taking this action because of its longstanding commitment to protecting consumers from deceptive practices and guarding against politically motivated censorship," the press release from Bailey's office said.
Only one of those things falls within the proper scope of action for a state attorney general.
Bailey's attempt to bully tech companies into spreading pro-Trump messages is nothing new. We've seen similar nonsense from GOP leaders aimed at social media platforms and search engines, many of which have been accused of "censoring" Trump and other Republican politicians, and many of which have faced demand letters and other hoopla from attorneys general performing their concern.
This is patently absurd even without getting into the meat of the bias allegations. A private company cannot illegally "censor" the president of the United States.
The First Amendment protects Americans against free speech incursions by the government—not the other way around. Even if AI chatbots are giving answers that are deliberately mean to Trump, or social platforms are engaging in lopsided content moderation against conservative politicians, or search engines are sharing politically biased results, that would not be a free speech problem for the government to solve, because private companies can platform political speech as they see fit.
They are under no obligation to be "neutral" when it comes to political messages, to give equal consideration to political leaders from all parties, or anything of the sort.
In this case, the charge of "censorship" is particularly bizarre, since nothing the AI did even arguably suppresses the president's speech. It simply generated speech of its own—and the attorney general of Missouri is trying to suppress it. Who exactly is the censor here?
That doesn't mean no one can complain about big tech policies, of course. And it doesn't mean people who dislike certain company policies can't seek to change them, boycott those companies, and so on. Before Elon Musk took over Twitter, conservatives who felt mistreated on the platform moved to such alternatives as Gab, Parler, and Truth Social; since Musk took over, many liberals and leftists have left for the likes of Bluesky. These are perfectly reasonable responses to perceived slights from tech platforms and anger at their policies.
But it is not reasonable for state attorneys general to pressure tech platforms into spreading their preferred viewpoints or harass them for failing to reflect exactly the worldviews they would like to see. (In fact, this is the kind of behavior Bailey challenged when it was done by the Biden administration.)
But…Section 230?
Bailey confuses the issue further by alluding to Section 230, which protects tech platforms and their users from some liability for speech created by another person or entity. In the case of social media platforms, that's pretty straightforward. It means platforms such as X, TikTok, and Meta aren't automatically liable for everything that users of these platforms post.
The question of how Section 230 interacts with AI-generated content is trickier, since chatbots do create content and not simply platform content created by third parties.
But Bailey—like so many politicians—distorts what Section 230 says.
His press release invokes "the potential loss of a federal 'safe harbor' for social media platforms that merely host content created by others, as opposed to those that create and share their own commercial AI-generated content to consumers, falsely advertised as neutral fact."
He's right that Section 230 provides protections for hosting content created by third parties and not for content created by tech platforms. But whether tech companies advertise this content as "neutral fact" or not—and whether it is indeed "neutral fact" or not—doesn't actually matter.
If they created the content and it violates some law, they can be held liable. If they created the content and it doesn't violate some law, they cannot.
And creating opinion content that doesn't conform to the opinions of Missouri Attorney General Andrew Bailey is not illegal. Section 230 simply doesn't apply here.
Only the Beginning?
Bailey suggests that whether or not Trump is the best recent president when it comes to antisemitism is a matter of fact and not opinion. But no judge—or anyone being honest—would find that there's an objective answer to "best president" on any matter, since the answer will necessarily differ based on one's personal values, preferences, and biases.
There's no doubt that AI chatbots can provide wrong answers. They've been known to hallucinate some things entirely. And there's no doubt that large language models will inevitably be biased in some ways, because the content they're trained on—no matter how diverse it is or how hard companies try to weed out bias—will inevitably contain the same kinds of human biases that plague all media, literature, scientific works, and so on.
But it's laughable to think that huge tech companies are deliberately training their chatbots to be biased against Trump, when that would undermine the projects that they're sinking unfathomable amounts of money into.
I don't think the actual training practices are really the point here, though. This isn't about finding something that will help them bring a successful false advertising case against these companies. It's about creating a lot of burdensome work for tech companies that dare to provide information Bailey doesn't like, and perhaps discovering some scraps of evidence that they can advertise to try to make these companies look bad. It's about burnishing Bailey's credentials as a conservative warrior.
I expect we're going to see a lot more antics like Bailey's as AI becomes more prevalent and political leaders seek to harness it for their own ends or, failing that, to sow distrust of it. It'll be everything we've seen over the past 10 years with social media, Section 230, antitrust, etc., except turned toward a new tech target. And it will be every bit as fruitless, frustrating, and tedious.
More Sex & Tech News
• The U.S. Department of Justice filed a statement of interest in Children's Health Defense et al. v. Washington Post et al., a lawsuit challenging the private content moderation decisions made by tech companies. The plaintiffs in the case accuse media outlets and tech platforms of "colluding" to suppress anti-vaccine content in an effort to protect mainstream media. The Justice Department's involvement here looks like yet another example of stretching antitrust law to fit a broader anti-tech agenda.
• A new working paper published by the National Bureau of Economic Research concludes that "period-based explanations focused on short-term changes in income or prices cannot explain the widespread decline" in fertility rates in high-income countries. "Instead, the evidence points to a broad reordering of adult priorities with parenthood occupying a diminished role. We refer to this phenomenon as 'shifting priorities' and propose that it likely reflects a complex mix of changing norms, evolving economic opportunities and constraints, and broader social and cultural forces."
• The national American Civil Liberties Union (ACLU) and its Texas branch filed an amicus brief last week in CCIA v. Paxton, a case challenging a Texas law restricting social media for minors. "If allowed to go into effect, this law will stifle young people's creativity and cut them off from public discourse," Lauren Yu, legal fellow with the ACLU's Speech, Privacy, and Technology Project, explained in a statement. "The government can't protect minors by censoring the world around them, or by making it harder for them to discuss their problems with their peers. This law would unconstitutionally limit young people's ability to express themselves online, develop critical thinking skills, and discover new perspectives, and it would make the entire internet less free for us all in the process."