The Guardian - UK

Driverless car crashes and data theft: law experts predict the court cases of the future


The rise of technologies such as driverless cars, the Internet of Things (IoT) and smart cities will result in a proliferation of legal cases to establish who is responsible for automated, intelligent devices, while hackers and fraudsters take advantage of such innovations to find new ways to pry money out of people and companies. Meanwhile, in a bid to keep pace, regulators are writing new laws that require interpretation, while the courts re-imagine existing laws for the connected age.

Here, experts in the law and new technology predict the court cases of tomorrow, from class-action data-breach suits to liability for failures across smart homes, the IoT and self-driving cars. Technology is progressing at what seems like an ever-increasing rate. So, is the law as it stands able to provide clarity in this brave – and complicated – new world?


A car crash waiting to happen
Driverless cars are hurtling into the present, promising safer roads without inattentive humans behind the wheel. But there’s still work to do: on the same day that Google’s Waymo announced its driverless cars had been approved for public testing without a human behind the wheel, a Navya driverless shuttle in Las Vegas took no evasive action to prevent a lorry from reversing into it.

In the UK, driverless vehicles are already being tested in Milton Keynes, Greenwich and elsewhere, with varying levels of automation. While it’s likely to be many years until fully driverless cars take over, UK transport secretary Chris Grayling believes completely self-driving cars will be on British roads by 2021.

Their arrival could be a boon for road safety around the world. According to the National Highway Traffic Safety Administration, 94% of crashes in the US are due to human error. Worldwide, says the World Health Organization, 1.25 million people die each year as a result of traffic accidents.

Despite this, one of the most common debates about driverless cars centres on accidents: when one occurs, how do we decide who is at fault? It may not be as difficult as it sounds, says Joseph Raczynski, legal technologist and applications integrator with Thomson Reuters.

“Driverless cars with hundreds of sensors will capture everything that occurred with massive volumes of data, audio and video, which will tell a pretty exact story of the incident,” he says. “This brings about the need for lawyers to be able to tap into this data, contract with experts who can extract the data, and understand the full picture.” This means they’ll need to understand how code works, in order to understand what technical experts are telling them.

Then there’s the so-called “trolley problem”, exploring how a car would “decide” whom to hit. “What if the car’s algorithm has to make a decision between crashing to one side or another of a single lane road, ‘choosing’ to hit an older person rather than a child on the sidewalk?” says Raczynski. “Certainly these are cases that we will see argued with the mass adoption of these new transports.”

The degree of automation in a car will affect such cases. Automotive standards body SAE International defines six levels of driving automation, from level 0 (no automation beyond warnings), through level 3 (the driver can take their eyes off the road but may need to intervene), to level 5 (no steering wheel necessary).

Mid-level systems that let drivers take their hands off the wheel at certain times could prove more contentious than fully automated level 5 cars. “The interesting liability arises if the vehicle has said: ‘Switch back to manual mode’, and the driver doesn’t pick that up quickly enough. Who’s responsible in that scenario?” asks Emma Wright, commercial technology partner at Kemp Little and contributor to Thomson Reuters Practical Law. “It’s going to be difficult to prove what failed and who was responsible.”

That’s further complicated by external influences, from software bugs and cyber-attacks, to obscured street signs – who is at fault, says Raczynski, if a self-driving car can’t “read” a stop sign because of graffiti?


When technology talks
Connected devices have already started to arrive in our homes – from thermostats to voice assistants – as well as our cities, with Google designing an entire “smart” district in Toronto. But while half of Britons own some sort of connected home device, most of those are TVs or other entertainment devices, with smart appliances and lighting less popular.

That does not mean that the Internet of Things (IoT) revolution has stopped. Instead, expect gradual evolution, with smart features popping up in devices as and when they’re replaced. The global IoT market is expected to reach $724bn (£550bn) by 2023, with more than 20bn connected devices globally by 2020, according to Gartner.

Consumers are already protected if a smart appliance goes wrong, says Wright. “What is the difference between a connected thermostat and a tumble dryer causing a fire?” she asks.

That said, the complex network of companies designing and supporting IoT devices could make liability difficult to ascertain, says Kate Chandler, senior counsel in disputes and investigations at Taylor Wessing, noting that software developers, manufacturers, services providers and even consumers themselves could be found at fault. “The consumer might contribute to the damage if they fail to follow the instructions for use and warnings properly, or did not maintain the product adequately by installing software updates.”

Manufacturers might turn to the “state of the art” defence, arguing that their product was as good as it could be at release – though, if a bug could be patched and wasn’t, that may not hold water.

The largest source of lawsuits is likely to be security, says Raczynski, pointing to a string of hacks against security cameras and baby monitors. “All of this happened because most of these IoT devices lacked proper security protocols to protect the device and the home network it sits on,” he says. “This area is undoubtedly the most prime area for suits in the next few years.”

The legal liability issues with IoT become more complex as devices start communicating with one another. In a smart home, a connected thermostat might turn on a plug that powers a floor heater, or an alarm clock might trigger a coffee maker – all without human interaction. If one sends flawed instructions or spreads a virus, determining who is at fault will involve unpicking a network of systems, services and software, says Angus Finnegan, head of communications in Information Technology at Taylor Wessing. “Responsibility for such damage or interference will need to be looked at on a case-by-case basis to determine whether any of the suppliers involved were at fault – through negligence or otherwise.”

Even if users don’t connect their own homes, they’ll still come face-to-face with IoT in cities, offices and even their residences, as landlords future-proof buildings and look to save on costs with smart tech, says Clare Harman Clark, professional support lawyer at Taylor Wessing. “Immediate legal questions arise concerning rights to install technology, as enshrined in existing lease arrangements,” she says, adding that data protection and privacy must be considered in all smart-city implementations. “IoT allows for the collection of considerable data by landlords – by accident or design.”

Barry Jennings, legal director in Bird & Bird’s Tech & Comms sector group, adds that privacy concerns and the potential for tracking and surveillance increase with the addition of automation. “Technologies are using artificial intelligence and machine learning to automate decision-making, reducing or removing human control,” he says. In other words, much of our lives will be overseen and affected by machines that don’t ask us for permission first.


Phish and chips
Fraud isn’t new, but technology is helping it to spread from localised incidents to worldwide phishing and targeted scams. The latest reports from the Office for National Statistics (ONS) show British adults were hit by 3.3m fraud attacks in the year to June 2017, with 57% of those computer-related – suggesting fraud is shifting online.

The figures make fraud the most common criminal offence in the UK, the ONS said, with 75% of incidents relating to bank accounts and credit cards, followed by online shopping or fake computer service calls, at 22%. And it’s costing the UK more than £1bn annually, according to KPMG – up 55% in 2016 from the year before.

Morag Rea, head of business crime and investigations for Thomson Reuters Practical Law, says that the growth of social media as a selling platform is putting consumers at risk of intellectual property crime and product safety issues, while increasingly connected devices leave users open to data theft.

One challenge continues to be accessing the necessary data to track fraud rings. “It is recognised that law enforcement agencies, banks, financial institutions, internet service providers and telecom operators face obstacles to addressing cyberfraud because of data privacy laws, information-sharing restrictions, multi-jurisdiction issues and a lack of resources,” Rea says. “Agreement must be reached to allow access to the right data from private and public partners, within agreed parameters that are mutually respected.”

For example, Rea notes that financial regulators are already cracking down on fraudulent transactions: the introduction of the Market Abuse Regulation has resulted in 3,730 suspicious transaction reports being filed to the Financial Conduct Authority in the first nine months of this year. Further tightening of the rules is set to follow with the Markets in Financial Instruments Directive update (MiFID II). “When MiFID II comes into force on 3 January, suspected market manipulation will become even easier to detect as the FCA will be able to scrutinise more and more data,” Rea says.

However, she adds that this is only the first step for one particular type of fraud: “Although uncovering it may become simpler, the opportunities presented for committing market abuse through cyber-attacks, fake news and ‘spoofing’ are also multiplying. Implementing the systems and controls to keep up with what is required to prevent attack is a huge task.”

Indeed, the rise of technology – in particular social media and messaging – makes it easier for fraudsters to find avenues of attack and gather enough information to seem legitimate to their victims. “It’s been going on for hundreds of years, there are just easier ways to do it now – more ways in, and more ways to be credible,” says Paul Glass, partner in disputes and investigations at Taylor Wessing.

Just as email led to phishing, every new technology opens the door to a new fraud. “There’s all sorts of ‘fun’ things around cryptocurrencies,” says Glass. “What if someone manages to access my digital wallet and steals my 100 bitcoins – at current prices, that’s close to £100,000. What have they actually stolen? The law is rather behind on this … that will be tested in the next five to 10 years.”

Another ripe target is smart contracts based on the blockchain, which are decentralised, automated systems for managing agreements. It may sound niche, but the smart contracts and blockchain market will be worth more than $2bn by 2021, according to industry reports. For example, Glass says, a delivery could be managed by smart contract via GPS; once the package is at the desired address, the payment is automatically triggered. “What if someone accesses the code and manages to send the payment somewhere else, or spoofs the GPS signal, so it looks like goods have arrived when they have not? Who’s liable in that situation? It’s quite difficult.

“Because the technology is so new, we can guess, but we don’t know what the answers are yet.”
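Glass’s delivery example can be made concrete with a toy sketch. This is not a real blockchain contract – the class name, coordinates and distance threshold are all illustrative assumptions – but it shows why a single spoofed GPS fix is enough to release payment with no human in the loop:

```python
# A minimal, illustrative sketch of the delivery scenario Glass describes:
# payment is released automatically once the reported GPS position matches
# the destination. Names and thresholds are hypothetical, not a real API.
from dataclasses import dataclass
from math import hypot

@dataclass
class DeliveryContract:
    dest: tuple           # (lat, lon) agreed in the contract
    payment_due: float    # amount released on delivery
    paid: bool = False

    def report_position(self, pos: tuple) -> bool:
        """Position callback: trigger payment if the package is 'at' the address."""
        # Naive check: within roughly 0.001 degrees of the destination.
        close_enough = hypot(pos[0] - self.dest[0], pos[1] - self.dest[1]) < 0.001
        if not self.paid and close_enough:
            self.paid = True  # payment is irrevocably released
        return self.paid

contract = DeliveryContract(dest=(51.5074, -0.1278), payment_due=500.0)
contract.report_position((51.5074, -0.1279))  # a genuine -- or spoofed -- GPS fix
```

The sketch trusts whatever position is reported: nothing in the code can distinguish a genuine fix from a spoofed one, which is precisely the liability gap Glass describes.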


The corporate cybercrime crunch
Quantifying the effects of cybercrime is difficult, but the figures can be staggering. Government reports suggest as many as 46% of UK companies were hit by a cyber-attack last year, with industry research suggesting hacks cost UK businesses £29bn in the same period.

There’s always a new trend among hackers: ransomware attacks were up 250% in the first few months of the year, according to Kaspersky Lab. As hackers develop new techniques to target data, companies will also have to be proactive about security, automating their hunt for infiltrators and their response using artificial intelligence and machine-learning tools.

Just ask NHS bosses. Several hospital trusts were knocked offline this year following the WannaCry ransomware attack. Hardly a day goes by without a high-profile company admitting a data breach.

This is a problem for businesses, damaging both their reputations and their bottom lines. “A failure to take reasonable security precautions when storing customer data is likely to amount to negligence and potentially an award of damages,” says Tim Gunn, editor for Thomson Reuters Practical Law. “Actions for misuse of private information, breach of confidence and under financial services legislation by investors are also possible in appropriate cases.”

At the moment, while there are exceptions for certain sectors and a recommendation from the regulator to report serious breaches, there is no general mandatory duty in the UK to notify in the case of a data breach. But changes coming in via the EU General Data Protection Regulation (GDPR) and the Network and Information Security Directive in May mean companies will be required to notify authorities – and potentially customers – of a breach, with different rules depending on sector.

“Significant litigation risk arises out of such incidents and this is likely to increase once the EU’s mandatory reporting regime comes into force next year,” says Gunn. “Certain GDPR breaches contemplate fines of up to €20m or 4% of annual worldwide turnover – whichever is higher.”
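The cap Gunn quotes is a simple “whichever is higher” rule. As a sketch (the function name and sample turnovers are illustrative; the €20m and 4% figures are those quoted above):

```python
def gdpr_max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious GDPR breaches:
    the higher of EUR 20m or 4% of annual worldwide turnover."""
    return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

gdpr_max_fine(100_000_000)    # turnover EUR 100m: 4% is EUR 4m, so the EUR 20m floor applies
gdpr_max_fine(1_000_000_000)  # turnover EUR 1bn: 4% is EUR 40m, which exceeds the floor
```

For small firms the flat €20m figure dominates; for large multinationals the 4% turnover term is what makes the exposure potentially company-threatening.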

Mandatory reporting means we’ll be hearing about more and more hacks and breaches – and that means more legal action. “These mandatory notification requirements will increase awareness among large pools of potential litigants,” says Gunn. “Businesses that are not cyber-ready should beware.”

That could lead to class-action suits over data loss. The first such case hit Morrisons, whose staff took the grocery retailer to court after an apparently disgruntled employee leaked their bank records and criminal background checks, among other data. “That’s the first significant class-action for that type of claim in this country,” says Glass.

Up to 90% of data breaches are thought not to have been made public, so GDPR’s rules mean there will be more people finding out about successful hacks – and more people who can sue. “It’s a death by a thousand cuts,” Glass says. “One person’s claim of £2,000 doesn’t mean very much, but 100,000 people or a million people, that’s suddenly potentially company threatening.”

In the future, we can expect major firms to go out of business after a data breach. Because of this, companies will be more careful about who they send data to or what customer data they share with partners, so businesses must expect to be open about their security protections. “They’ll be going in and doing proper audits, some already do this, but it will become a bigger issue,” Glass says. “I think there’ll be a lot more of that happening – at least, I hope so.”
