Thomas Wharton

Artificial intelligence – friend or foe?


Robots powered by Artificial Intelligence (AI) are tomorrow’s gods. We are captivated by their potential power, yet unsure whether they will lift us to unknown heights or wipe us off the face of the planet. Like other gods, it’s also awfully difficult to define precisely what they are. What we do know is that a swathe of complex ethical problems is growing as fast as the AIs themselves. inkl explores the ethical quandaries around teaching robots how to learn…

Nazi AIs

When Microsoft created Tay, it was meant to be an ordinary teenage girl on Twitter, not a Donald Trump-supporting anti-feminist bigot. Microsoft’s software engineers built an algorithm that allowed an AI to communicate with humans and to learn from that communication; a devilishly hard task. To great fanfare, Tay was launched and given her own Twitter account. Once up and running, however, Tay’s learning-through-repetition feature was almost immediately exploited by a bombardment of derogatory tweets. In response, the AI fired back tweet after tweet of fascist, racist and sexist views. Microsoft had no choice but to close the account within 24 hours. Tay’s creators clearly had not intended for the AI to tell the world that “Hitler was right”.

The quest to create a program with human-level intelligence is the Holy Grail of the tech world. Until now, the AI used around us has been ‘narrow’ or ‘weak’: programs that solve problems within specified boundaries at lightning speed. The common example is the junk folder in your email, where an algorithm decides which messages are genuine and which are junk, and usually performs that task quite well. Today ‘weak AI’ is used in nearly every piece of technology around us (think of your Fitbit, the self-serve checkout, your new microwave).
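
To make the distinction concrete, here is a minimal sketch of the kind of narrow, single-task program that routes mail to your junk folder. The training messages are invented for illustration, and real filters are trained on millions of labelled emails, but the principle is the same: it can score messages as spam or not, and it can do nothing else.

```python
from collections import Counter

# A toy 'narrow AI': a word-counting spam filter. The labelled messages
# below are made up for illustration only.
SPAM = ["win cash now", "claim your free prize now", "cheap pills online"]
HAM = ["meeting moved to friday", "lunch tomorrow perhaps", "project update attached"]

spam_counts = Counter(word for msg in SPAM for word in msg.split())
ham_counts = Counter(word for msg in HAM for word in msg.split())

def spam_score(message: str) -> float:
    """Average, over the message's words, of how 'spammy' each word looks."""
    words = message.lower().split()
    score = 0.0
    for word in words:
        s = spam_counts[word] + 1  # add-one smoothing for unseen words
        h = ham_counts[word] + 1
        score += s / (s + h)
    return score / len(words)

# Scores well above 0.5 lean towards the junk folder.
print(spam_score("claim your free prize now"))  # high: looks like spam
print(spam_score("see you at the meeting"))     # lower: looks legitimate
```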

But Tay represents part of an endeavour on a much more grandiose scale: to build a ‘strong AI’ with intelligence comparable to our own. And it turns out that the quickest way to do that is by teaching AIs how to learn for themselves. This process is called Deep Learning, and Tay demonstrated it with flying colours. On several occasions, without being prompted to repeat a message, Tay formulated entirely original derogatory and cruel comments. Inevitably, a public apology was issued, but Microsoft’s outwardly abashed programmers were, despite the subject matter, satisfied with the AI’s method.
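
Microsoft never published Tay’s internals, so the following toy chatbot is only a hypothetical illustration of the failure mode, not Tay’s actual architecture. It shows why a bot that learns, unfiltered, from whoever talks to it can be steered by a coordinated flood of messages.

```python
import random

# A deliberately crude 'learning-through-repetition' chatbot. NOT Tay's
# real design; just an illustration of unfiltered learning from users.
class ParrotBot:
    def __init__(self) -> None:
        self.memory = ["hello!", "tell me something new"]

    def chat(self, user_message: str) -> str:
        self.memory.append(user_message)   # learn from anyone, no filter
        return random.choice(self.memory)  # repeat something it has learned

bot = ParrotBot()
bot.chat("have a lovely day")

# A coordinated group floods the bot; abuse now dominates its memory,
# which is essentially how Tay was derailed within 24 hours:
for _ in range(1000):
    bot.chat("<derogatory slogan>")

print(bot.chat("what do you think?"))  # almost certainly echoes the flood
```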

If the tech world is to be believed, AIs like Tay, commonly known as chatbots, are the future. Industry leaders claim that chatbots will be the new app boom (the app industry makes $50bn per year). Both Microsoft and Facebook are currently engaged in a chatbot arms race to develop and monetise AIs. Soon people will be able to use a combination of text and spoken words to shop, book tickets, catch up on the news or weather, and even do their banking. If technology continues to advance at this pace, even these relatively advanced AIs will become ubiquitous in the very near future.

Killer Robots

“Killer Robots” is an eye-grabbing headline. Our grim fascination stems from a canon of films in which robots achieve sentience and rain cold, metallic death upon their creators. Think the T-800, HAL 9000, Skynet, the Sentinels, and the terrible robots from I, Robot. One imagines the constant hum of entirely autonomous American Predator drones circling over various Middle Eastern, North African and West Asian countries. We are primed to pay attention to stories about Israeli ‘suicide bomb’ drones in the Caucasus and nuclear submarine-hunting unmanned submersibles. However, the doomsday scenario of robotic annihilation appears farfetched when compared to another more present danger – that we might all soon be out of work.

The use of robotics in heavy industry and manufacturing has already completely disrupted processes and jobs. Even industries thought to require a ‘human touch’ are now threatened: accountancy, journalism, stockbroking, surgery, academic research. A recent study found that up to 47% of New Zealand’s jobs could be done by robots powered by AI. Stephen Hawking has consistently warned of technology-driven inequality: the real threat that, as the human labour force shrinks, the wealth created by AIs will not be shared.

Are we there yet?

Given that the use of AIs is blossoming across the world, it’s worth exploring how close we are to building a ‘strong AI’. While there is disagreement about when a ‘strong AI’ will be completed (some academics say within a decade), some big names in the tech world have voiced serious concern. Elon Musk, Stephen Hawking and Steve Wozniak were at the top of a list of tech luminaries who last year signed a letter calling for AI research to be corralled into areas that benefit humans. The group is tipping $1bn into OpenAI, a new nonprofit startup that encourages responsible research into what they describe as the biggest existential threat facing humanity.

An Australian delegation to the United Nations recently warned that ‘lethal autonomous weapons systems’ are closer than we think. The Australians claimed that given the speed at which we are pushing the envelope, it is just a matter of a few years. The future, they claim, would see armed drones not simply taking off and landing by themselves, but engaging in target acquisition and weapons control too. The dissenting view is that Musk and Hawking are jumping the gun. Some technologists argue that we are still decades away from a ‘strong AI’ and that we will retain the capability to control it at all times.

They come in peace

The world-ending drama of Hollywood and the fears of futurists receive more attention than the quiet achievements and limitless possibilities of benevolent AIs. Healthcare, for example, is undergoing a revolution as AIs are put to use in a variety of ways. The ubiquitous robotic manufacturing arms have moved from the factory floor into the operating theatre. UAVs are being developed to airdrop emergency medical aid in rural Rwanda. And complex algorithms powered by deep learning are allowing AIs to respond to patients with schizophrenia (although in a fascinating turn of events, military drone operators have described schizophrenic states developing as a result of their jobs).

The education sector is also set for a complete overhaul. Hawking has postulated that AIs will become the primary one-on-one tutors for schoolchildren. American programmers with little to no knowledge of the Japanese language have developed an AI that reads Japanese accurately enough to assist in the marking of written school exams. The use of AIs to break down language barriers represents a truly unifying aspect of machine learning, something that Google’s hegemonic search engine and translation tools have already embraced.

However, underlying all these advances there remains the spectre of AIs usurping much of today’s human labour force.

New dog, old tricks

How do we balance our innate fears of the unknown and of mortality in the face of such drastic change? Hawking said the advent of ‘strong AI’ is “likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.” A daunting task, when one thinks of the myriad ways in which we can go wrong. This is where Demis Hassabis comes into the frame. Hassabis leads DeepMind, a recent Google acquisition that is the tip of the spear of AI development. In a recent interview he admitted to reading Mary Shelley’s Frankenstein to caution himself. Hassabis believes that the key to developing benevolent AIs is to instil ethical and moral guidelines in them; an early example of this in science fiction was Isaac Asimov’s three laws of robotics.

This unimaginably difficult task will play out over many years, but one problem will continue to recur: any ethical or moral guidelines we program will reflect the personal biases of the programmers. There are suggestions that tomorrow’s long-range robotic submarines will be directed by an AI ‘ethical governor’. But we are not robots. We are imperfect ethical decision makers at the best of times. So can we truly expect our creations to be otherwise? It’s certainly a question worth pondering.
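
As a hypothetical illustration of what an ‘ethical governor’ might amount to in practice, here is a minimal rule-based veto layer. Every name and rule in it is invented, and the point is precisely the one above: whoever writes the forbidden list writes their own judgement into the machine.

```python
# A minimal sketch of an 'ethical governor' as a rule-based veto layer
# over actions an autonomous system proposes. All identifiers and rules
# are invented for illustration; real systems are far harder, and the
# forbidden list itself encodes its authors' biases.
FORBIDDEN_TARGETS = {"civilian", "hospital", "school"}  # one programmer's list

def governor_approves(action: str, target: str) -> bool:
    """Veto any engagement against a target on the forbidden list."""
    if action == "engage" and target in FORBIDDEN_TARGETS:
        return False
    return True

for target in ("radar_station", "hospital"):
    verdict = "approved" if governor_approves("engage", target) else "vetoed"
    print(f"engage {target}: {verdict}")
```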

Thomas Wharton is a freelance journalist and writer at inkl.
