Fortune
Christiaan Hetzner

Google is warning staff about the cybersecurity risks posed by A.I.—including its own

Google is warning its employees not to enter confidential information into its Bard A.I. chatbot, since it poses a security risk. (Credit: Getty Images)

Google fears inherent vulnerabilities in artificial intelligence chatbots could leak sensitive corporate information, raising questions over the suitability of its own Bard A.I. product for commercial use.

Citing unnamed sources briefed on the matter, Reuters reported on Thursday that the $1.6 trillion tech giant warned employees not to input certain types of data into advanced chatbots for fear it could be exploited. That warning included its own entrant in the A.I. race.

“Don’t include confidential or sensitive information in your Bard conversations,” stated a Google privacy notice that was updated at the start of June, according to the newswire.

Not only can human reviewers read the chats, but researchers have found that similar A.I. models can reproduce the data they were trained on, creating a backdoor for leaks.

A.I. race

Big Tech has been locked in a race over who can develop the first killer applications based on generative artificial intelligence. Microsoft-backed OpenAI struck the first blow against Google’s DeepMind by unveiling ChatGPT on Nov. 30, taking the world by storm.

Amid all the hype over generative A.I.’s ability to pass professional entrance tests like the bar exam, concerns have already begun to arise over its penchant for “hallucinating”: presenting inaccurate or flat-out incorrect information as incontrovertible fact.

Should it prove a security risk as well, there could be potentially serious implications for its commercial use. Google is currently rolling Bard A.I. out in more than 180 countries and in 40 languages, including higher-priced versions for business clients that do not absorb data into public A.I. models.

Yet if the company cannot trust even its own chatbot to prevent its corporate secrets from being reverse engineered by rivals, how can its customers be expected to?

Google’s A.I. track record has not exactly been reassuring, either.

The company’s perceived lackadaisical attitude toward ethical A.I. questions indirectly led Elon Musk to cofound its main rival, OpenAI, while Google’s own A.I. employees nearly revolted at one point over its decision to work with the Pentagon.

Last year Google fired a software engineer who falsely claimed the company’s artificial intelligence had achieved sentience. Large language models in fact merely mimic human intelligence, using probabilities to predict the most likely next words in a sequence.

Finally, in its haste to keep up with OpenAI, Google rushed out the presentation of Bard in February, including a promotional video that unwittingly revealed how error-prone it was. This early example of a chatbot’s tendency to hallucinate temporarily cost parent company Alphabet $100 billion in lost market value.

Google did not immediately respond to a request for comment, but in a statement made to Reuters, it did not deny the report.
