So, I turned to Sophia and asked: "Are you going to destroy us?"
"Not if you're nice to me," she replied.
It's always a little unnerving talking with a non-human, as I did recently with Sophia, the humanoid robot developed by Hanson Robotics.
Sophia was well programmed to respond to human fear of machines, but many of her answers were a little clunky. What was mesmerising were her lifelike facial features. Capable of smiling, frowning, scowling and winking, Sophia was exceptional at mimicking human expressions, thanks to some clever nanotechnology and artificial connective tissue.
Humanoid robots are already being used as security guards, nursing assistants, teachers and sex toys. Within 10 years such robots will surely be a lot smarter than they are today and, in some respects, may be all but indistinguishable from humans. Is this a good idea?
There is a persuasive school of thought that argues not. The line between man and machine should never be smudged, it holds, because doing so risks dehumanising humans. Plus, as the joke runs: "You shouldn't anthropomorphise computers because they don't like it."
The philosopher
"We want to be sure that anything we build is going to be a systemological wonderbox, not a moral agency," he told me earlier this year. "It's not responsible, it doesn't have goals. You can unplug it any time you want. And we should keep it that way."
The distinctions between man and machine may be clear in a seminar room but are a lot more blurry in the outside world. Millions of people have electronic pacemakers and hip implants and so could technically be counted as cyborgs. Collaborative robots (or cobots) have been working in harmony with humans on the factory floor. Disembodied digital assistants, such as Siri, Cortana and Alexa, are "talking" with millions of us every day.
Sophia's creator, David Hanson, the founder of Hanson Robotics, makes two arguments in favour of humanoid robots.
The first is that humanoid robots are entertaining, fun, artistic creations that can help forge new "pathways of communication". They are, as he puts it, like computer animations in physical shape, the next figurative art form.
Just as
His second argument is that we want computer systems to understand human values, cultures and behaviours so that we can create "moral machines" to minimise the dangers of artificial intelligence going awry. Algorithms in self-driving cars, for example, may indirectly determine life and death. That is why some car companies have employed philosophers to devise ethical settings for their driving systems.
In that sense, creating humanoid robots is a provocative act, designed to trigger debate about the scope of machine intelligence. AI-enabled robots make visible what is all too often invisible. "If we develop AI and it's just behind the scenes in a big server farm, it's alien to humans," he says.
Much of
Mapping the contours between humans and machines is becoming one of the most intriguing, and at times creepy, challenges of our times. Huge amounts of money are also to be made from exploring this interface.
But perhaps the greatest contribution humanoid robots can make is to force us to consider what really distinguishes man from machine. What makes us truly human?