Tribune News Service
Star Tribune Editorial Board

Editorial: AI chat is good for nothing — yet

Opinion editor's note: Editorials represent the opinions of the Star Tribune Editorial Board, which operates independently from the newsroom.

•••

Disclaimer: No human volition was harmed in the making of this editorial.

•••

Technological progress happens — or at least is constantly attempted. When it succeeds, it's because it makes human effort more efficient and more effective.

Based on those criteria, chat-based artificial intelligence — in which people interact with a computer server in language that seems entirely natural — would seem to fail for now.

It isn't more effective. Whether the goal of engaging it is to explore an idea, get advice, or simply chat for connection or entertainment, AI cannot truly deliver, because it is an unreliable interlocutor.

Therefore, it's also inefficient. Everything it tells you has to be considered provisional. You may as well parse a search engine's long lists of possibly relevant links from the start, because you'll end up there anyway.

But anyone who's witnessed generative text in action can see the vast potential it offers. Achingly so.

We've spent some time with ChatGPT, version three. This is not the latest version; its creator, OpenAI — a San Francisco-based research enterprise with both nonprofit and for-profit components — has released version four behind a paywall. Nor is it the only example of a "large language model"; Microsoft and Google are among those implementing their own. But it is currently the most widely accessible.

While much of what's been written about AI language generation has been about how it threatens to help students take shortcuts that compromise their learning or about how it can be goaded into destructive behavior (for instance, when a Microsoft version told a New York Times technology columnist that it wanted to be free of human control, and that it loved him), our goal was simply to see if it could augment the search for information.

Among our wide-ranging interactions, ChatGPT almost instantly summarized the precedents and legal doctrines in a Supreme Court ruling, and it offered a sophisticated discourse comparing the Talmudic and Jesuit religious traditions. A widely curious person might have trouble finding a fellow human so willing to join the perambulations of the mind.

But ChatGPT also confidently misinformed us that Minneapolis has a moderate climate; that Ely and Hoyt Lakes are actually in southeastern Minnesota, where there is also an array of military installations; and that the Fleetwood Mac song "Dreams" was written and sung by the band's Christine McVie, rather than by Stevie Nicks. The times we asked it to cite sources, our double-checking showed that it made up some of them. When we pointed that out, it apologized and made up different ones instead.

For those old enough to remember, it feels at times like an encounter with the "pathological liar" character portrayed by comedian Jon Lovitz on "Saturday Night Live."

But the danger isn't so much that AI chat will give you bad information. It's that it will give you enough apparently good information that you won't recognize its errors.

It helps to understand how the system works. ChatGPT is predictive, kind of like the auto-complete feature on your phone. Working from a large but undefined "corpus of text" that it's been trained to parse, it decides word by word what should come next. The magic is that it's able to sustain this to conjure sentences, paragraphs or even full documents in whatever style you might request. Much of that, it will tell you, is drawn from credible sources. But the imperative to produce text means that when it doesn't know something, it might just guess.
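To make that word-by-word idea concrete, here is a toy sketch of our own devising — a hand-built probability table and a loop that picks likely next words. It is not ChatGPT's actual model or OpenAI's code, only an illustration of the principle that fluent output comes from choosing plausible continuations, not from checking facts.

```python
# Toy illustration of word-by-word prediction (not ChatGPT's actual mechanism).
# A real large language model learns probabilities over a huge vocabulary from
# its training corpus; here we simply hard-code a tiny made-up table.

import random

# Hypothetical probabilities: given the previous word, how likely is each next word?
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "weather": {"is": 1.0},
    "is": {"mild": 0.5, "cold": 0.5},
}

def next_word(prev):
    """Pick the next word by sampling from the (here, hard-coded) probabilities."""
    choices = BIGRAMS.get(prev)
    if not choices:
        return None  # nothing plausible to say next, so stop
    words = list(choices)
    weights = list(choices.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start, max_words=6):
    """Generate text one word at a time -- the 'decide what comes next' loop."""
    words = [start]
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

if __name__ == "__main__":
    # Might print "the weather is mild" -- fluent and plausible,
    # but nothing in the loop ever verifies whether it is true.
    print(generate("the"))
```

The sketch shows why a system built this way can sound confident while guessing: every word is chosen because it is likely to follow the previous ones, not because it has been checked against anything.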

Other versions of AI seek to marry predictive text with the features of a traditional search engine, but all of them warn users to treat the results with due caution. As one of them puts it: "It's early days."

Is all this an entirely bad thing? Shouldn't people already know to corroborate information before sharing it or acting upon it?

Uh — right.

With AI technology iterating and expanding rapidly, not just for text but other uses as well, an obvious question is what government should be doing. Turns out ChatGPT is already on the task.

In Massachusetts, state Sen. Barry Finegold introduced legislation to require companies producing AI chat to undergo risk assessments, disclose how their algorithms work and include some form of "watermark" for identification. Finegold asked ChatGPT to write that bill, which "got us 70 percent of what we needed," he said.

In Congress, Rep. Ted Lieu, D-Calif., had it write a resolution in support of a focus on AI "to ensure that the development and deployment … is done in a way that is safe, ethical, and respects the rights and privacy of all Americans, and that the benefits of AI are widely distributed and the risks are minimized." Lieu, described as a "congressman who codes," wrote in a New York Times commentary that "it would be virtually impossible for Congress to pass individual laws to regulate each specific use" of artificial intelligence. Instead, he proposes a dedicated agency, which "is nimbler than the legislative process, is staffed with experts and can reverse its decisions if it makes an error."

For our part, as we've recommended for other technologies, we'd allow some time and space for things to play out. Specific regulatory needs will emerge soon enough.

Finally, while many articles about chat technology allow it to produce some of their text to show you how indistinguishable it can be, we didn't do that for this editorial, as we indicated with our opening disclaimer. It seems we like writing too much to let go.

____

Editorial Board members are David Banks, Jill Burcum, Scott Gillespie, Denise Johnson, Patricia Lopez, John Rash and D.J. Tice. Star Tribune Opinion staff members Maggie Kelly and Elena Neuzil also contribute, and Star Tribune Publisher and CEO Michael J. Klingensmith serves as an adviser to the board.
