PC Gamer
Jorge Jimenez

Google employees reportedly begged the company not to release 'pathological liar' AI chatbot Bard


According to internal documents reviewed by Bloomberg, several Google employees raised concerns that the Bard AI chatbot was not ready for its March release, citing low-quality, incorrect, and potentially dangerous answers. Two phrases employees allegedly used to describe Bard: "pathological liar" and "cringe-worthy."

Bard is Google's answer to OpenAI's ChatGPT. CEO Sundar Pichai said it "combines the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models." But Bloomberg reports that Google rushed Bard out the door in order to compete with ChatGPT in what the company allegedly called a competitive "code red." 

According to Bloomberg's report, an employee sent a message to an internal message group that was viewed by 7,000 employees saying, “Bard is worse than useless: please do not launch.” Right before the launch, Google's AI governance lead, Jen Gennai, reportedly overruled a risk evaluation by her own team when they said that Bard's answers could potentially be harmful.

Bloomberg reports a couple of examples: suggestions on landing a plane that would have resulted in a crash, and scuba diving instructions that could have led to "serious injury or death."

Meredith Whittaker, a former manager at Google, told Bloomberg that "AI ethics has taken a back seat" at the company.

Google Bard says not to rely on its advice for "important decisions." (Image credit: Future)

ChatGPT has its own issues with the truth and proper sourcing of information it scrapes for answers. Currently, Google refers to Bard as an "experiment" and, in Bloomberg's words, maintains that "responsible AI is a top priority" for the company.

As an experiment, I asked Google Bard if its advice is potentially dangerous, and it replied with the following response:

"It is possible for my advice to be dangerous, especially if it is about health or other sensitive topics. I am still under development, and I am not always able to distinguish between good and bad advice."

It also told me not to rely on its advice for "important decisions" and that it does "not have the same level of understanding and knowledge as a human being." 
