International Business Times
AFP News

Google CEO Slams 'Completely Unacceptable' Gemini AI Errors

Google CEO Sundar Pichai has said the company is working 'around the clock' to fix problems with its Gemini AI app (Credit: AFP)

Google CEO Sundar Pichai on Tuesday slammed "completely unacceptable" errors by its Gemini AI app, after gaffes such as images of ethnically diverse World War II Nazi troops forced it to stop users from creating pictures of people.

The controversy emerged within weeks of Google's high-profile rebranding of its ChatGPT-style AI to "Gemini", giving the app unprecedented prominence in its products as it competes with OpenAI and its backer Microsoft.

Social media users mocked and criticized Google for the historically inaccurate Gemini-generated images, such as depictions of ethnically diverse US senators from the 1800s that included women.

"I want to address the recent issues with problematic text and image responses in the Gemini app," Pichai wrote in a letter to staff, which was published by the news website Semafor.

"I know that some of its responses have offended our users and shown bias -- to be clear, that's completely unacceptable and we got it wrong."

A Google spokesperson confirmed to AFP that the letter was authentic.

Pichai said Google's teams were working "around the clock" to fix these issues but did not say when the image-generating feature would be available again.

"No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes," he wrote.

Tech companies see generative artificial intelligence models as the next big step in computing and are racing to infuse them into everything from searching the internet and automating customer support to creating music and art.

But AI models, and not just Google's, have long been criticized for perpetuating racial and gender biases in their results.

Google said last week that the problematic responses from Gemini were a result of the company's efforts to remove such biases.

Gemini was calibrated to show a diverse range of people but did not account for prompts where that was clearly inappropriate, and it also became overly cautious with some otherwise harmless requests, Google's Prabhakar Raghavan wrote in a blog post.

"These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," he said.

Many concerns about AI have emerged since the explosive success of ChatGPT.

Experts and governments have warned that AI also carries the risk of major economic upheaval, especially job displacement, and industrial-scale disinformation that can manipulate elections and spur violence.
