It is very difficult to read the words “Defense Department” and “robots” and not immediately come up with the phrase “robot army”, but if this weekend’s contestants at the Darpa Robotics Challenge in Pomona, California, invaded your town, the damage would be about what a gang of arthritic 90-year-olds could do, if those 90-year-olds also kept forgetting where they were and what they were trying to accomplish.
These robots stumbled, they broke, they stood motionless for half an hour, they couldn't get out of the car. And this was the exciting version: the 2013 trials in this competition were "like watching paint dry", according to one Darpa worker.
The robots may be coming slowly, and with a lot of stops and starts, and they often have to be repaired, reworked and disassembled over long periods of time. But they are definitely coming, and probably for your jobs.
Christopher Atkeson, the ursine roboticist from Carnegie Mellon University on whom the villain in Disney’s Marvel Comics sci-fi cartoon Big Hero 6 was partially based, is very much in favor of better robots, soon. He wants them to be healthcare professionals like Baymax, the robot in that movie. Getting there, he told the Guardian, is “a long, hard slog”.
“So you have Moore’s law for microprocessors,” said Atkeson, wearing a Build Baymax T-shirt of his own design and staring out over the Pomona fairgrounds where his team was about to field a challenger in the DRC. Moore’s law, he explained, means that the number of transistors on an integrated circuit doubles about every 24 months – essentially, microchips are twice as good every two years. Digital brains are constantly getting smarter, and digital perception, like reading road signs and lane lines the way the new generation of self-driving cars does, has progressed by leaps and bounds with innovations like Lidar.
“That capacity is going to explode,” said Atkeson. “But when we start to talk about things that move, it’s mostly hydraulics. That hasn’t changed since world war two. Perception is going to be dirt cheap, but movement is going to be expensive.”
“What’s going on in the car business is mostly perception,” Atkeson explained – the fruit of development work on the self-driving cars that came out of the Darpa Urban Challenge. Much of the software for the current project was already written before the humanoid robots even showed up; the same kinds of perception used by self-driving vehicles serve as the eyes and ears of the competitors in Pomona.
Atkeson is something of a rock star here, and a veteran: 11 years ago, when Darpa offered its first million-dollar prize to the team of engineers who could make a car successfully navigate an obstacle course the way a human driver would, no one won. Darpa didn’t change the goals for the next year’s edition – they just increased the prize money, and that did the trick. Five teams completed the course. Last month, Daimler’s self-driving 18-wheelers became street legal in the state of Nevada, and Uber and Google are edging ever closer to publicly available self-driving vehicles.
Last week, three human-sized robots (one made by the Google-owned Boston Dynamics) successfully drove a car, not with a complex computer interface, but with a steering wheel. Things are moving forward, literally.
Robots that can operate in human environments are incredibly difficult to make, largely because the raw materials that make up an organic body are so much more sophisticated than anything manmade. It’s frustrating for scientists like Atkeson, who’ve seen electronics far outstrip mechanics. Workarounds aren’t very practical, but they are abundant – you can build your own cyborg cockroaches from a kit these days, cockroach not included (please email me if this is a problem for you – I have extras).
That difficulty is why Darpa offered $2m to the winner of the challenge this year. Prizes have a long pedigree, DRC head Gill Pratt pointed out: in 1795, decades before germ theory, Napoleon offered 12,000 francs to whoever could devise the best method of food preservation, and the winning answer – sealing food in jars – kept it from spoiling. People and robots are natural partners, Pratt said: “The person is great at thinking strategically about what needs to be done. What is the robot good at? Working in a really dangerous situation.”
Of course, robots can also work in non-dangerous situations, especially now. Automation has been diminishing the number of employees required on, say, a car assembly line for decades. Now it’s becoming possible to automate easier work. “I don’t think there’s necessarily anything more sophisticated about combing through records as a junior partner at a law firm than there is about working on an assembly line,” said Peter Frase, editor at leftist politics magazine Jacobin.
“The technical frontier has moved from ‘How do we make one piece of metal and move it to another piece of metal?’ and become more about boiling down a lot of information into something that is comprehensible and useful.”
IBM’s Watson supercomputer, for example, notable for crushing its human opponents under its electronic boot heel on Jeopardy!, is now being used to sort healthcare information.
To be sure, robots are lousy novelists, and they’re not very good musicians, either. But the question, Frase said, is ultimately one of fundamental human politics. “It may be possible in principle to have a society where automation means that we all have more leisure time, but people who are promoters of this tend to talk about it as something that will inevitably happen,” he said. “It most certainly will not inevitably happen. We could easily have a future where a few people get rich and everyone else is unemployed and miserable.”