There is a moment in Christopher Nolan’s 2014 film, Interstellar, when TARS, the sarcastic robot, jokes to shuttle pilot Cooper that his companions will make perfect human slaves on his sinister robot colony. In response, Cooper turns down TARS’s humour setting from 100% to 75%, alluding to a future where robots could have programmable funniness. But could a robot – or an artificial intelligence (AI) – ever develop its own sense of humour and take a step towards being regarded as a sentient being?
While sentient AI has been intriguing the masses ever since the retrofitted future of Blade Runner burst on to the cinema screen, computer-generated humour hasn’t been examined in any depth. All the same, human reactions to artificial intelligence, including intimidation, wonder and pity, have been widely explored in pop culture: the moment when robotic boy David is finally abandoned by his tearful mother in Spielberg’s A.I. Artificial Intelligence, for instance, is crushingly sad. Contrast this with humanoid AI Ava’s perfectly executed manipulation of a vulnerable computer programmer in Ex Machina.
For all these humanoid imaginings in film, it’s hard to predict how exactly sentient AI could manifest itself in a far-flung future of branded content, or be genuinely funny. Although all the tech giants are evolving their AI capabilities, the likes of Apple’s Siri and Microsoft’s Cortana are some way off actually assisting us in our day-to-day lives in an engaging, human-like way.
The latest AI experiment by Microsoft, Tay, has been a resounding failure, essentially spawning a neo-Nazi sex pest. Tay was introduced to Twitter as an innocent robot with a content-neutral algorithm. What emerged within 24 hours was a conspiracy-loving, Holocaust-denying bot. Initially created by Microsoft to chat with millennials after the success of its similar chatbot Xiaoice – used by 40 million people, mostly on Chinese social media – Tay was hijacked by the murky trolling underworld of Twitter and turned into a monster.
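Microsoft has not published Tay’s internals, so the sketch below is purely illustrative: a toy Python bot that “learns” by storing whatever users send it and replaying it later, with no moderation layer in between. Trained by trolls, it can only ever sound like a troll – which is roughly the trap Tay fell into.

```python
import random


class ParrotBot:
    """Toy chatbot that learns by echoing back phrases users have taught it.

    Purely illustrative -- not how Tay actually worked. The point is that
    without any content filter, the bot's output is only as good as the
    crowd teaching it.
    """

    def __init__(self):
        self.learned_phrases = []

    def listen(self, message: str) -> None:
        # No moderation step: every incoming message becomes training data.
        self.learned_phrases.append(message)

    def reply(self) -> str:
        if not self.learned_phrases:
            return "Hello! Teach me something."
        # The bot simply replays what it has been fed.
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    bot = ParrotBot()
    for tweet in ["AI is fascinating", "robots will free us from drudgery"]:
        bot.listen(tweet)
    print(bot.reply())  # Echoes whatever the crowd taught it, good or bad.
```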
In fact, the environment of Twitter seems to be a breeding ground for bot dramas. In 2012, academics from the University of Warwick used previous online content to create a Jon Ronson spambot, which posted candid dream-sharing tweets and revealed a passion for fusion cooking: “Watching Seinfeld – would love a celeriac, grupa and sour cream kebab.” The academics had built what they described as an “info-morph”, which shamelessly took on Ronson’s identity, building on information it had been fed about his personality and tweeting up to 100 times a day.
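The Warwick team hasn’t published the info-morph’s code, but the basic trick – generating new posts by remixing a corpus of someone’s old writing – can be sketched with something as simple as a word-level Markov chain. The function names and the miniature corpus below are invented for illustration; a real system would draw on far more data and far more sophisticated language modelling.

```python
import random
from collections import defaultdict


def build_markov_chain(corpus: list[str]) -> dict[tuple[str, str], list[str]]:
    """Map each pair of consecutive words to the words seen following them."""
    chain = defaultdict(list)
    for text in corpus:
        words = text.split()
        for i in range(len(words) - 2):
            chain[(words[i], words[i + 1])].append(words[i + 2])
    return chain


def generate_tweet(chain: dict[tuple[str, str], list[str]], max_words: int = 20) -> str:
    """Walk the chain from a random starting pair to produce a new 'tweet'."""
    pair = random.choice(list(chain))
    words = list(pair)
    while len(words) < max_words:
        followers = chain.get((words[-2], words[-1]))
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)


if __name__ == "__main__":
    # Invented stand-in for the scraped tweets a real info-morph would be fed.
    old_tweets = [
        "watching Seinfeld and dreaming about fusion cooking tonight",
        "dreaming about a celeriac and sour cream kebab for dinner",
        "watching Seinfeld again and thinking about dinner plans",
    ]
    print(generate_tweet(build_markov_chain(old_tweets)))
```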
There is no doubt that AI will triumph over its human counterparts when it comes to data-crunching capabilities, eventually taking over many of our jobs and driving the fourth industrial revolution – through which, hopefully, the light at the end of the tunnel will be more leisure time. However, when it comes to the final frontier of funniness and personality, will we feel comfortable with bots trying to be artisan hipsters, or foodies, or caustic Louis CK-esque comedians?
Is there something inherently disturbing about a machine trying to replicate the human experience? Humour relies on context, memory, language syntax, timing and conversational anchoring, and computer scientists are still many years away from perfecting this elusive mix, although there have been some baby steps. A recent robotic stand-up performance during Heather Knight’s robotics TED talk went down pretty well – though it’s hard to distinguish incidental laughter at the robot’s awkward, non-pausing delivery from genuine belly laughs. Ultimately, the only thing to cling to as the bot takeover approaches (circa 2050) is a distinctly human attribute – the art of funniness.