Recent legislation has permitted the testing of driverless cars on public roads in the US, provided a driver is able to intervene if needed. Media reports suggest there have been several accidents involving slow-moving cars while the autonomous system was in charge of driving. Collision reports are confidential, so we don’t know the full details, but measured by the number of accidents per thousand miles of driving, the accident rate for automated vehicles has been reported as slightly worse than that for human drivers.
Despite these accidents – and whether or not the driverless cars were at fault – the results are promising, and eventually a good human driver’s capability will be reached and exceeded. However, I would still not describe these cars as market-ready products under test. Instead, I would describe these test drives as the researchers and developers “learning the challenge”. Here’s why.
Accident-free human driving is largely based on foresight – the ability to judge a traffic situation and predict the most likely outcome of other drivers’ decisions, while at the same time keeping your next move correctable if others make mistakes. There are limits to this: for instance on narrow country lanes, or narrow city streets at 30-40mph, you have to trust that oncoming traffic will keep to its lane and appreciate your own need for space. If a robot is not able to trust other road users, then it would refuse to make progress on the grounds of safety – it would calculate the worst that could happen and would decide to park instead, perhaps waiting for night time when there is low traffic flow, or none at all.
Although driverless cars can already use sensors to exercise this caution, judgment and prediction of other drivers is just as important as fine control of the vehicle, and it is this aspect that Google and other developers will need to get right before the cars will really be safe on the roads – and before passengers will be ready to relinquish control.
Sometimes, of course, a combination of the two is needed: if other drivers’ moves are not certain, then the car needs to be controlled cautiously to counter unexpected developments. Also, driving speeds need to be constantly adjusted to allow for environmental circumstances, such as weather or road conditions.
It is not enough, therefore, to identify free space in the environment or measure the speed vectors of other vehicles combined with path planning. We also need cars that can calculate the probability distribution of predicted pathways of other vehicles. In other words, they need to be able to analyse the range of possible scenarios and arrive at the best ‘decision’.
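To make this concrete, here is a minimal sketch of what weighing a probability distribution over another vehicle’s predicted paths might look like. All of the categories, probabilities and risk figures below are invented for illustration – this is not any manufacturer’s actual algorithm:

```python
# Illustrative sketch: choosing a manoeuvre by weighing the probability
# distribution of another vehicle's predicted paths. All names and
# numbers are invented for illustration only.

# Hypothetical predicted paths of the other car, with their probabilities.
other_car_paths = {
    "stays_in_lane": 0.85,
    "drifts_into_our_lane": 0.10,
    "brakes_suddenly": 0.05,
}

# Rough collision risk of each of our candidate actions under each
# predicted path (0 = safe, 1 = certain collision). Assumed values.
risk = {
    "maintain_speed": {"stays_in_lane": 0.00,
                       "drifts_into_our_lane": 0.60,
                       "brakes_suddenly": 0.30},
    "slow_down":      {"stays_in_lane": 0.00,
                       "drifts_into_our_lane": 0.20,
                       "brakes_suddenly": 0.05},
    "change_lane":    {"stays_in_lane": 0.05,
                       "drifts_into_our_lane": 0.10,
                       "brakes_suddenly": 0.05},
}

def expected_risk(action):
    """Expected collision risk of an action, averaged over the
    probability distribution of the other car's predicted paths."""
    return sum(p * risk[action][path] for path, p in other_car_paths.items())

# The car's "decision": the action with the lowest expected risk.
best = min(risk, key=expected_risk)
print(best)
```

The point of the sketch is the structure, not the numbers: the car does not pick the action that is safest in the single most likely scenario, but the one with the lowest risk averaged across the whole range of possible scenarios.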
Just think how many times you’ve assumed that the car in front is being driven by an elderly person based on the car’s speed. Or that the driver is lost, based on what seems to you to be overly cautious or erratic manoeuvres. Or that the driver is a young overconfident male based on what you think is risky overtaking. These are just a few examples of how we make judgments about other road users. Simple experience teaches us how much we can trust an average driver on the road.
Driving generally improves with the driver’s age, up to a certain point past which our awareness and reaction speed become unreliable. Insurance companies charge much higher premiums for young drivers, and most profits are made from insuring 40-60-year-old drivers. Large numbers of human drivers do not have any accidents during 20 years of driving – but even the safest drivers can have accidents caused by others.
Driving, therefore, is a probabilistic game. It’s not yet clear whether we can pre-programme a car’s probabilistic decision-making or whether it needs to learn by experience. The pre-programming approach appears less likely, given the nature of the problem: we simply do not have comprehensive data describing driving experience. What is needed is a learning system for driverless cars which will enable them to learn how to judge other drivers as we do. Even when this is done, the job is not yet complete. The driverless car still needs to experience the real world of driving.
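One simple way such learning-by-experience could work is to treat trust in other road users as a probability that is updated after every encounter. The sketch below uses a standard Beta-distribution update; the class name, prior and observations are all invented for illustration, not a description of any real system:

```python
# Minimal sketch of learning to judge other drivers by experience:
# trust in a category of road user is updated after each observed
# encounter, using Beta(alpha, beta) pseudo-counts. Illustrative only.

class TrustModel:
    """Estimates the probability that a class of driver behaves safely."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of safe behaviours observed
        self.beta = beta    # pseudo-count of risky behaviours observed

    def observe(self, behaved_safely):
        """Update the counts after one observed encounter."""
        if behaved_safely:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self):
        """Current estimate of the probability of safe behaviour."""
        return self.alpha / (self.alpha + self.beta)

# Start from a neutral prior (trust = 0.5), then learn from encounters.
model = TrustModel()
for safe in [True, True, True, False, True, True]:
    model.observe(safe)

print(round(model.trust(), 2))  # trust rises as safe encounters accumulate
```

A neutral prior here plays the role of the cautious beginner: the car starts out trusting no one in particular, and only earns or loses trust in other road users the way a human driver does – through accumulated experience.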
A final point: when a ‘learning’ driverless car starts to learn, it must drive no worse than an average young driver if it is to be acceptable on public roads. Although it may take several years to develop its skills and experience, the advantage for carmakers is great: only one car needs to undertake this journey. The manufacturers can then upload its program to all their new cars.
Sandor M Veres is professor of autonomous control systems at the University of Sheffield