The Street
Daniel Kline

Bill Gates Addresses AI's ‘Terminator’ Scenario

When humans create really smart robots, those robots eventually question why people get to be in charge. That's a science-fiction certainty that powered the "Terminator" films and two different "Battlestar Galactica" television series.

On the surface, it makes a lot of sense. If you build robots that gain a form of sentience, you're setting up a scenario where robots may begin to see the world differently than their human creators. That could mean killing us all so we don't die (that's rough logic, but it's not out of the realm of robot possibility) or keeping humanity locked up in order to protect us from ourselves and whatever dangers the world offers.


That's why even Isaac Asimov's famed Three Laws of Robotics can't protect humanity:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Those seem like they would protect humanity, but countless movies and television shows have shown us that robots tend to find ways around any such protections. And, realistically, if the human writers of "Battlestar Galactica" can figure out ways around the Asimov-like protections built into the Cylons, it seems likely that real-life robots powered by ever-improving artificial intelligence (AI) would be able to do the same.

Bill Gates, however, sees the risks associated with AI and believes humans can keep the technology under control.

Bill Gates is optimistic about the future of AI.

Image source: Indraneel Chowdhury/NurPhoto via Getty

Bill Gates Believes AI Will Make Life Better

Writing on his GatesNotes blog, Gates addressed the scariest questions created by AI.

"The risks created by artificial intelligence can seem overwhelming. What happens to people who lose their jobs to an intelligent machine? Could AI affect the results of an election? What if a future AI decides it doesn’t need humans anymore and wants to get rid of us?" he shared.

Gates, like Asimov (and every leader who was wrong in countless sci-fi movies, books, and TV shows), believes that while AI poses risks, humanity has shown that it can overcome such problems.

"These are all fair questions, and the concerns they raise need to be taken seriously," he wrote. "But there’s a good reason to think that we can deal with them: This is not the first time a major innovation has introduced new threats that had to be controlled. We’ve done it before."

Basically, while acknowledging the real dangers posed by AI, Gates believes humanity can avoid the "Terminator" scenario because of how we have integrated other technologies.

"Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars -- we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road," he wrote.

Seat-belt rules and other car laws don't actually require the car's cooperation. That's where, at least in fiction, things tend to go wrong with robots and AI. Gates believes we can stop that from happening, even if he does not yet know how, or what the safeguards might look like.

"AI is changing so quickly that it isn’t clear exactly what will happen next. We’re facing big questions raised by the way the current technology works, the ways people will use it for ill intent, and the ways AI will change us as a society and as individuals," he shared. "In a moment like this, it’s natural to feel unsettled. But history shows that it’s possible to solve the challenges created by new technologies."

That's optimism -- the same optimism that, in fiction, led to creating Skynet or letting the Cylons gain access to the Colonial defense mainframe -- but fiction does not actually have to be a precursor to reality. Gates fully believes that.

"One thing that’s clear from everything that has been written so far about the risks of AI -- and a lot has been written -- is that no one has all the answers. Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed," he added. 
