Why the Tesla Recall Matters

Caroline Mimbs Nyce, The Atlantic

Tesla is voluntarily recalling more than 350,000 vehicles after the National Highway Traffic Safety Administration raised concerns about their self-driving-assistance software—but this isn’t your typical recall. The fix will be shipped “over the air,” meaning the software will be updated remotely and the hardware won’t need to be touched.

Missy Cummings sees the voluntary nature of the recall as a positive sign that Tesla is willing to cooperate with regulators. Cummings, a professor in the computer-science department at George Mason University and a former NHTSA regulator herself, has at times argued that the United States should proceed more cautiously on autonomous vehicles, drawing the ire of Elon Musk, who has accused her of being biased against his company.

Cummings also sees this recall as a software story: NHTSA is entering an interesting—perhaps uncharted—regulatory space. “If you release a software update—that’s what’s about to happen with Tesla—how do you guarantee that that software update is not going to cause worse problems? And that it will fix the problems that it was supposed to fix?” she asked me. “If Boeing never had to show how they fixed the 737 Max, would you have gotten into their plane?”

Cummings and I discussed that and more over the phone.

Our conversation has been condensed and edited for clarity.

Caroline Mimbs Nyce: What was your reaction to this news?

Missy Cummings: I think it’s good. I think it’s the right move.

Nyce: Were you surprised at all?

Cummings: No. It’s a really good sign—not just because of the specific news that they’re trying to make self-driving safer. It’s also an important signal that Tesla is starting to grow up and realize that it’s better to work with the regulatory agency than against it.

Nyce: So you’re seeing the fact that the recall was voluntary as a positive sign from Elon Musk and crew?

Cummings: Yes. Really positive. Tesla is realizing that, just because something goes wrong, it’s not the end of the world. You work with the regulatory agency to fix the problems. Which is really important, because that kind of positive interaction with the regulatory agency is going to set them up for a much better path for dealing with problems that are inevitably going to come up.

That being said, I do think that there are still a couple of sticky issues. The list of problems and corrections that NHTSA asked for was quite long and detailed, which is good—except I just don’t see how anybody can actually get that done in two months. That time frame is a little optimistic.

It’s kind of the Wild West for regulatory agencies in the world of self-certification. If Tesla comes back and says, “Okay, we fixed everything with an over-the-air update,” how do we know that it’s been fixed? Because we let companies self-certify right now, there’s no clear mechanism to ensure that the fix has actually happened. Every time you try to make software to fix one problem, it’s very easy to create other problems.
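
The verification gap she describes has a small-scale analogue in everyday software practice: the minimum bar for trusting a fix is a regression suite that replays the original failure and re-checks behavior that was previously correct. Here is a minimal sketch of that idea in Python; the plan_speed routine is invented for illustration and stands in for any safety-critical function an over-the-air update might touch.

```python
# Hypothetical sketch: plan_speed stands in for any safety-critical routine
# an over-the-air update might change. A regression suite replays the known
# failure and re-checks prior behavior, so a fix can be demonstrated rather
# than merely asserted.

def plan_speed(posted_limit_mph: int, school_zone: bool) -> int:
    """Toy planner: choose a target speed given a posted limit."""
    target = posted_limit_mph
    if school_zone:
        target = min(target, 25)  # the shipped "fix": cap speed in school zones
    return target

def test_original_failure_is_fixed():
    # Replay the exact scenario that triggered the complaint.
    assert plan_speed(posted_limit_mph=45, school_zone=True) == 25

def test_old_behavior_still_holds():
    # Guard against the fix creating new problems elsewhere.
    assert plan_speed(posted_limit_mph=45, school_zone=False) == 45
    assert plan_speed(posted_limit_mph=25, school_zone=True) == 25

if __name__ == "__main__":
    test_original_failure_is_fixed()
    test_old_behavior_still_holds()
    print("regression suite passed")
```

The sticking point Cummings identifies is that, under self-certification, nothing currently obliges a manufacturer to show regulators evidence of this kind at the scale of a whole vehicle fleet.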

Nyce: I know there’s a philosophical question that’s come up before: How much of this technology should be out in the wild, knowing that there are going to be bugs? Do you have a stance?

Cummings: I mean, you can have bugs. Every type of software—even software in safety-critical systems in cars, planes, nuclear reactors—is going to have bugs. I think the real question is, How robust can you make that software to be resilient against inevitable human error inside the code? So I’m okay with bugs being in software that’s in the wild, as long as the software architecture is robust and allows room for graceful degradation.

Nyce: What does that mean?

Cummings: It means that if something goes wrong—for example, if you’re on a highway and you’re going 80 miles an hour and the car commands a right turn—there’s backup code that says, “No, that’s impossible. That’s unsafe, because if we were to take a right turn at this speed … ” So you basically have to create layers of safety within the system to make sure that that can’t happen.

This isn’t just a Tesla problem. These are pretty mature coding techniques, and they take a lot of time and a lot of money. And I worry that the autonomous-vehicle manufacturers are in a race to get the technology out. And anytime you’re racing to get something out, testing and quality assurance always get thrown out the window.   
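
Her 80-miles-an-hour example describes a well-known defensive pattern: a guard layer that checks each commanded maneuver against a physical-plausibility envelope before it ever reaches the actuators. The sketch below is a generic illustration of that layered-safety idea, with invented names and numbers, not a description of Tesla’s actual design.

```python
# Illustrative "layers of safety" guard, not Tesla's design. A wrapper
# validates each steering command against a speed-dependent envelope;
# implausible commands are rejected and the system degrades gracefully
# instead of executing them.

from dataclasses import dataclass

@dataclass
class SteeringCommand:
    angle_deg: float  # requested steering angle

def max_safe_angle_deg(speed_mph: float) -> float:
    """Toy envelope: the allowable steering angle shrinks as speed grows."""
    if speed_mph <= 0:
        return 45.0
    return min(45.0, 400.0 / speed_mph)  # about 5 degrees at 80 mph

def validate(cmd: SteeringCommand, speed_mph: float) -> SteeringCommand:
    """Reject implausible commands; degrade gracefully instead of obeying."""
    limit = max_safe_angle_deg(speed_mph)
    if abs(cmd.angle_deg) > limit:
        # A real system might also alert the driver or hand back control;
        # here we simply clamp the command into the safe envelope.
        clamped = max(-limit, min(limit, cmd.angle_deg))
        return SteeringCommand(angle_deg=clamped)
    return cmd

# At 80 mph, a commanded hard right turn is refused and clamped:
print(validate(SteeringCommand(angle_deg=30.0), speed_mph=80.0))
```

The design choice worth noticing is that the guard sits outside the planning code it protects, so a bug in the planner cannot silently disable the check.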

Nyce: Do you think we’ve gone too fast in green-lighting the stuff that’s on the road?

Cummings: Well, I’m a pretty conservative person. It’s hard to say what green-lighting even means. In a world of self-certification, companies are allowed to green-light themselves. The Europeans have a preapproval process, where technology is approved before it’s let loose in the real world.

In a perfect world—if Missy Cummings were the king of the world—I would have set up a preapproval process. But that’s not the system we have. So I think the question is, Given the system in place, how are we going to ensure that, when manufacturers push over-the-air updates to safety-critical systems, each update fixes the problems it was supposed to fix and doesn’t introduce new safety-related issues? We don’t know how to do that. We’re not there yet.

In a way, NHTSA is wading into new regulatory waters. This is going to be a good test case for: How do we know when a company has successfully fixed recall problems through software? How can we ensure that that’s safe enough?

Nyce: That’s interesting, especially as we put more software into the things around us.

Cummings: That’s right. It’s not just cars.

Nyce: What did you make of the problem areas that were flagged by NHTSA in the self-driving software? Do you have any sense of why these things would be particularly challenging from a software perspective?

Cummings: Not all, but a lot are clearly perception-based.

The car needs to be able to detect objects in the world correctly so that it can execute, for example, the right rule for taking action. This all hinges on correct perception. If you’re going to correctly identify signs in the world—I think there was an issue where the cars sometimes recognized speed-limit signs incorrectly—that’s clearly a perception problem.

What you have to do is a lot of under-the-hood retraining of the computer-vision algorithm. That’s the big one. And I have to tell you, that’s why I was like, “Oh snap, that is going to take longer than two months.” I know that theoretically they have some great computational abilities, but in the end, some things just take time. I’m just so grateful I’m not under the gun there.
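
The misread-speed-limit issue she mentions also shows why perception outputs are typically wrapped in plausibility checks rather than trusted raw. Here is a generic sketch: cross-check the camera’s reading against an independent source and fall back when the two disagree wildly. The map prior and the 15 mph threshold are illustrative assumptions, not anything from Tesla’s stack.

```python
# Generic plausibility check for a perceived speed limit; the map-prior
# fusion and the 15 mph threshold are illustrative assumptions, not a
# description of any production system.

from typing import Optional

def fuse_speed_limit(vision_mph: Optional[int],
                     map_prior_mph: int,
                     max_delta_mph: int = 15) -> int:
    """Return a trusted speed limit from a camera reading plus a map prior."""
    if vision_mph is None:
        return map_prior_mph   # no detection: fall back to the map
    if abs(vision_mph - map_prior_mph) > max_delta_mph:
        return map_prior_mph   # implausible reading: likely a misread sign
    return vision_mph          # plausible: trust the fresher camera reading

# A sign misread as 85 mph on a mapped 35 mph road is rejected:
assert fuse_speed_limit(vision_mph=85, map_prior_mph=35) == 35
# A plausible reading (say, a road recently re-signed to 45) is accepted:
assert fuse_speed_limit(vision_mph=45, map_prior_mph=35) == 45
```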

Nyce: I wanted to go back a bit—if it were Missy’s world, how would you run the regulatory rollout on something like that?

Cummings: I think in my world we would do a preapproval process for anything with artificial intelligence in it. I think the system we have right now is fine if you take AI out of the equation. AI is a nondeterministic technology. That means it never performs the same way twice. And it’s based on software code that can just be rife with human error. So anytime that you’ve got this code that touches vehicles that move in the world and can kill people, it just needs more rigorous testing and a lot more care and feeding than if you’re just developing a basic algorithm to control the heat in the car.

I’m kind of excited about what just happened today with this news, because it’s going to make people start to discuss how we deal with over-the-air updates when it touches safety-critical systems. This has been something that nobody really wants to tackle, because it’s really hard. If you release a software update—that’s what’s about to happen with Tesla—how do you guarantee that that software update is not going to cause worse problems? And that it will fix the problems that it was supposed to fix?

What should a company have to prove? So, for example, if Boeing never had to show how they fixed the 737 Max, would you have gotten into their plane? If they just said, “Yeah, I know we crashed a couple and a lot of people died, but we fixed it, trust us,” would you get on that plane?
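
The nondeterminism Cummings raises is easy to demonstrate at toy scale: train the same model on the same data twice, and the random initialization and data ordering that real training pipelines rely on yield different weights each run. A minimal Python sketch, with a deliberately tiny linear model invented for illustration:

```python
# Toy demonstration of training nondeterminism: identical code and data,
# but random initialization and data ordering (as in real pipelines)
# produce different learned weights on every run.

import numpy as np

def train_tiny_model(seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([0.0, 1.0, 1.0, 1.0])
    w = rng.normal(size=2)                    # random initialization
    for _ in range(5):                        # a few passes over the data
        for i in rng.permutation(len(X)):     # random visit order
            w -= 0.1 * ((X[i] @ w) - y[i]) * X[i]
    return w

w0, w1 = train_tiny_model(seed=0), train_tiny_model(seed=1)
print(w0, w1)                  # same data, same code, different weights
assert not np.allclose(w0, w1)
```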

Nyce: I know you’ve experienced some harassment over the years from the Musk fandom, but you’re still on the phone talking to me about this stuff. Why do you keep going?

Cummings: Because it’s really that important. We have never been in a more dangerous place in automotive-safety history, except for maybe right when cars were invented and we hadn’t figured out brake lights and headlights yet. I really do not think people understand just how dangerous a world of partial autonomy with distraction-prone humans is.

I tell people all the time, “Look, I teach these students. I will never get in a car that any of my students have coded, because I know just what kinds of mistakes they introduce into the system.” And these aren’t exceptional mistakes; they’re the mistakes humans just make. And I think the thing that people forget is that humans create the software.
