
Chatbots trigger next misinformation nightmare

New generative AI tools like OpenAI's ChatGPT, Microsoft's BingGPT and Google's Bard, which have stoked a tech-industry frenzy, are also capable of unleashing a vast flood of online misinformation.

Why it matters: Regulators and technologists were slow to address the dangers of misinformation spread on social media and are still playing catch-up with imperfect and incomplete policy and product solutions.


  • Now, experts are sounding the alarm faster as real-life examples of inaccurate or erratic responses from generative AI bots circulate.
  • “It’s getting worse and getting worse fast,” Gary Marcus, a professor emeritus of psychology and neural science at New York University and an AI skeptic, told Axios.

The big picture: Generative AI programs like ChatGPT don't have a clear sense of the boundary between fact and fiction. They're also prone to making things up as they try to satisfy human users' inquiries.

  • Google-parent Alphabet faced embarrassment (and a $100 billion hit to its stock price) two weeks ago after its Bard tool bungled a fact about the James Webb Space Telescope in a public marketing video meant to tout the sophistication of the tool.

Be smart: For now, experts say the biggest generative AI misinformation threat is bad actors leveraging the tools to spread false narratives quickly and at scale.

  • "I think the urgent issue is the very large number of malign actors, whether it's Russian disinformation agents or Chinese disinformation agents," Gordon Crovitz, co-founder of NewsGuard, a service that uses journalists to rate news and information sites, told Axios.

What we're watching: Misinformation can flow into AI models as well as from them. That means at least some generative AI will be subject to "injection attacks," where malicious users teach lies to the programs, which then spread them.

The misinformation threat posed by everyday users unintentionally spreading falsehoods from the tools' flawed answers is also huge, but not as pressing.

  • "The technology is impressive, but not perfect… whatever comes out of the chatbot should be approached with the same kind of scrutiny you might have approaching a random news article," said Jared Holt, a senior research manager at the Institute for Strategic Dialogue.
  • "Chatbots are designed to please the end consumer — so what happens when people with bad intentions decide to apply it to their own efforts?" Holt adds.

Between the lines: Tech firms are trying to get ahead of the possible regulatory and industry concerns around AI-generated misinformation by developing their own tools to detect falsehoods and using feedback to train the algorithms in real time.

  • OpenAI, the creator of ChatGPT, released a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine, Axios's Ina Fried reported.
  • Last week, Google issued guidance to web publishers, warning them that it will use extra caution when elevating health, civic or financial information in its search results.

Researchers are already creating tools to slow the spread of disinformation from generative AI.

  • NewsGuard last week introduced a new tool for training generative artificial intelligence services to prevent the spread of misinformation.
  • NewsGuard assembles data on the most authoritative sources of information and the most significant false narratives spreading online. Generative AI providers can then use the data to better train their algorithms to elevate quality news sources and avoid false narratives.
  • Microsoft, a backer of NewsGuard, already licenses NewsGuard’s data and uses it for BingGPT.

How it works: At Microsoft, user feedback is considered a key component of making its ChatGPT-powered Bing work better.

  • "The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing," the company posted on its blog on Feb. 15, a week after Bing with ChatGPT rolled out.
  • Microsoft's Responsible AI team is working through mitigations for thorny issues like making sure the chatbot responds to suicide inquiries with help resources, company officials told reporters in Washington this month. Officials also said the bot will rely heavily on footnotes for fact-checking.

Yes, but: "The challenge for an end user is that they may not know which answer is correct, and which one is completely inaccurate," Chirag Shah, a professor at the Information School at the University of Washington, told Axios.

  • "So we're seeing a lot of use cases where misinformation is being presented as if it's validated," said Shah. "Because it's coming in a very natural language modality, people tend to trust it because they see that it has been constructed for them in the moment."
  • Bias is another issue average users need to look out for, said Shah, and it is especially tough to discern in ChatGPT-generated answers because there is a less direct link to where the information in the box is coming from.
  • A lack of transparency and "explainability" (i.e., explaining to users where the information comes from and what precautions to take when using the chatbot) will ultimately hurt user trust, he added.

Go deeper: Read more in Axios' AI Revolution series.

Editor's note: This story has been corrected to show that Google's stock lost $100 billion in value, not $100 million, after publicity around a mistake made by its Bard tool.
