Fortune
Steve Mollman

Unnerving interactions with ChatGPT and the new Bing have OpenAI and Microsoft racing to reassure the public

OpenAI CEO Sam Altman

When Microsoft announced a version of Bing powered by ChatGPT, it came as little surprise. After all, the software giant had invested billions into OpenAI, which makes the artificial intelligence chatbot, and indicated it would sink even more money into the venture in the years ahead.

What did come as a surprise was how weird the new Bing started acting. Perhaps most prominently, the A.I. chatbot left New York Times tech columnist Kevin Roose feeling “deeply unsettled” and “even frightened” after a two-hour chat on Tuesday night in which it sounded unhinged and somewhat dark. 

For example, it tried to convince Roose that he was unhappy in his marriage and should leave his wife, adding, “I’m in love with you.”

Microsoft and OpenAI say such feedback is one reason they are sharing the technology with the public, and they’ve released more information about how the A.I. systems work. They’ve also reiterated that the technology is far from perfect. OpenAI CEO Sam Altman called ChatGPT “incredibly limited” in December and warned it shouldn’t be relied upon for anything important.

“This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open,” Microsoft CTO Kevin Scott told Roose on Wednesday. “These are things that would be impossible to discover in the lab.” (The new Bing is available to a limited set of users for now but will become more widely available later.)

OpenAI on Thursday shared a blog post titled “How should AI systems behave, and who should decide?” It noted that since the launch of ChatGPT in November, users “have shared outputs that they consider politically biased, offensive, or otherwise objectionable.”

It didn’t offer examples, but one might be the alarm among some conservatives over ChatGPT writing a poem admiring President Joe Biden while declining to do the same for his predecessor, Donald Trump.

OpenAI didn’t deny that biases exist in its system. “Many are rightly worried about biases in the design and impact of AI systems,” it wrote in the blog post. 

It outlined two main steps involved in building ChatGPT. In the first, it wrote, “We ‘pre-train’ models by having them predict what comes next in a big dataset that contains parts of the Internet. They might learn to complete the sentence ‘instead of turning left, she turned ___.’” 

The dataset contains billions of sentences, it continued, from which the models learn grammar, facts about the world, and, yes, “some of the biases present in those billions of sentences.”

Step two involves human reviewers who “fine-tune” the models following guidelines set out by OpenAI. The company this week shared some of those guidelines (pdf), which were modified in December after the company gathered user feedback following the ChatGPT launch. 

“Our guidelines are explicit that reviewers should not favor any political group,” it wrote. “Biases that nevertheless may emerge from the process described above are bugs, not features.” 

As for the dark, creepy turn that the new Bing took with Roose, who admitted to trying to push the system out of its comfort zone, Scott noted, “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”

Microsoft, he added, might experiment with limiting conversation lengths.
