
Talking self-driving safety and regulation with Philip Koopman, Associate Professor at Carnegie Mellon University


Koopman on self-driving safety in 2024: UK is adult in the room, US is Wild West


With the Automated Vehicles Bill passing Parliament, and attention turning to secondary legislation, we go deep on regulation with one of the world’s preeminent self-driving safety experts – Philip Koopman, Associate Professor in the Department of Electrical and Computer Engineering (ECE) at Carnegie Mellon University, Pennsylvania.

In his 2022 book “How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety”, aimed at engineers, policy stakeholders and technology enthusiasts, Koopman deconstructs the oft-quoted metric of being “at least as safe as a human driver”, and urges greater focus on what is “acceptably safe for real-world deployment”.

Self-driving safety expert, Philip Koopman

You’ve described the UK as “the adult in the room” when it comes to self-driving regulation – why? 

To be clear, the context was a general statement about safety, not necessarily specific to any particular regulation or standard. It’s a cultural statement, rather than a technical one.

Let’s talk about the US, the UK and Europe, because I can separate those out. In Europe, there’s type approval, whereas in the US there is no requirement to follow any standards at all. People point to the Federal Motor Vehicle Safety Standards (FMVSS), but that’s about things like airbags and dashboard warning lights, not automated vehicle features.

In the UK, you have the ALARP principle, which applies to all health and safety law. It is not required anywhere else, other than perhaps Australia, which is also doing a good job on safety. Under ALARP, companies are required to have a safety case demonstrating they have reduced risks to a level that is ‘As Low As Reasonably Practicable’.

That’s a reflection of UK culture valuing and emphasising safety – industrial safety systems as well as occupational safety. Other countries don’t do that to the same degree, so that was the basis for my ‘adult in the room’ statement.

You British actually have research funding for safety! There’s a bit of that in the EU, but in the US there’s essentially none. I’ve succeeded, as has Professor Leveson at MIT, but it’s a very small handful. In the UK, you have the York Institute for Safe Autonomy, you have Newcastle University, and there’s government funding for safety which you just don’t see in the US.

What about self-driving vehicle manufacturers – how do they approach safety?

The car companies had functional safety people, and some of them ended up looking at autonomy, but it was often pretty crude. You need to differentiate between traditional motor vehicle safety and the computer-based safety required for self-driving.

Ultimately, it comes down to culture. The car safety people have historically had a human driver to blame when things go bad – and this is baked into standards such as ISO 26262, the classic automotive safety standard for electronic systems.

In private, some US self-driving companies will say ‘yeah, we read it, but it’s not for us’. In public, they use words written by lawyers for other lawyers – the large print giveth and the fine print taketh away.

In other standards, risk is a combination of probability and severity – the riskier it is, the more engineering effort you need to put in to mitigate that risk.

In automotive, they say it’s controllability, severity and exposure. They take credit every time a driver cleans up a technical malfunction, until they don’t – then they blame driver error. Google the Audi 5000 Unintended Acceleration Debacle, a famous case from the 1980s. The point is car companies are used to blaming the humans for technical malfunctions.
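To make that contrast concrete, here is a deliberately simplified sketch of the two framings – a generic probability-times-severity model versus a severity/exposure/controllability model in the spirit of ISO 26262. The scales and combination rules below are invented for illustration and do not reproduce the standard’s actual ASIL tables.

```python
# Illustrative sketch only: simplified stand-ins for the two risk framings
# described above. The scales and combination rules are invented for clarity
# and do NOT reproduce the actual ISO 26262 ASIL determination tables.

def generic_risk(probability: int, severity: int) -> int:
    """Generic framing: risk grows with probability and severity (1=low, 3=high)."""
    return probability * severity

def automotive_risk(severity: int, exposure: int, controllability: int) -> int:
    """Automotive framing (severity, exposure, controllability): assuming a
    human driver can 'clean up' the malfunction (high controllability) lowers
    the assessed risk, and with it the required engineering effort."""
    return severity * exposure // controllability  # invented combination rule

# The same malfunction, assessed two ways:
print(generic_risk(probability=3, severity=3))                      # 9 - high risk
print(automotive_risk(severity=3, exposure=3, controllability=3))   # 3 - credit taken for the driver
# Remove the assumed human supervisor and controllability collapses:
print(automotive_risk(severity=3, exposure=3, controllability=1))   # 9 - high risk again
```

The toy numbers make the point above: the moment you can no longer take credit for a supervising human, the assessed risk – and the engineering effort owed – goes back up.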

In self-driving you also have the robot guys, who are used to making cool demos to get the next tranche of funding. Their idea of safety is a big red button. I’ve worked with them, they’re smart and they’re gonna learn on the job, but they historically had zero skills in mass production or safety at scale on public roads.

Both these cultures made sense in their previous operating environments. In traditional automotive, I have a problem with some driver blaming but, holistically, one fatality per 100 million miles is pretty impressive. With the robot guys, the Silicon Valley ‘move fast and break things’ model falls down if what you’re breaking is a person, particularly a road user who didn’t sign up for the risk.

Oh, and they’re also now using machine learning, which means the functional safety people will struggle to apply their existing toolsets. That’s the challenge. It’s complicated and there are lots of moving parts.

Koopman’s 2022 book on self-driving safety: How Safe Is Safe Enough?

Which brings us to the need for regulation…

In the US, it’s like we’ve been purposely avoiding regulating software for decades. Look at the National Highway Traffic Safety Administration (NHTSA) investigations into Tesla crashes – it always seems to be about the driver not paying attention, rather than about Tesla making it easy for them not to pay attention.

Now we have the likes of Cruise, Waymo and Zoox – computers driving the car, no human backup, and basically self-certification. Jump through the bureaucratic hoops, get insurance, and you can just put this stuff on the road.

The US is the Wild West for vehicle automation. There are no rules. The NHTSA might issue a recall for something particularly egregious. If there’s a bad crash in California, the Department of Motor Vehicles (DMV) might yank a permit.

Our social contract is supposedly supported by strong tort and product defect laws. But what good is that if it takes five years and a million dollars of legal fees to pursue a car company in the event of a fatal crash? In some states the computer is said to be responsible for driving errors, but is not a legal person, so there is literally nobody to sue.

That’s why I’m working with William H. Widen, Professor at the University of Miami School of Law – to find ways to reduce the expense and improve accessibility.

Expanding this to hands-free driving, you’re no fan of using the SAE levels for regulation?

Whether you like them or not, the SAE levels are the worst idea ever for regulation – they make for bad law. The mythical Level 5 is just an arbitrary point on a continuum! Also, testing – beta versus not beta – matters a lot and SAE J3016 is really weak on that.

That’s why I’ve proposed a different categorisation of driving modes: testing, autonomous, supervisory and conventional. L2 and L3 are supervisory; L4 and L5 are autonomous.
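As a rough illustration of that categorisation (the mapping simply restates the interview; the code structure and the conventional/L0–L1 pairing are assumptions, not Koopman’s formal proposal):

```python
from enum import Enum

# Rough illustration only: the four driving modes described above, with an
# assumed mapping from SAE J3016 levels. Note that "testing" has no SAE level
# of its own - one of the gaps in J3016 mentioned in the interview.

class DrivingMode(Enum):
    CONVENTIONAL = "human drives; no automation engaged"
    SUPERVISORY = "human must supervise and be ready to take over"
    AUTONOMOUS = "computer driver holds the duty of care"
    TESTING = "immature feature under trial, regardless of nominal level"

SAE_LEVEL_TO_MODE = {
    0: DrivingMode.CONVENTIONAL,  # assumed pairing, not stated in the interview
    1: DrivingMode.CONVENTIONAL,  # assumed pairing, not stated in the interview
    2: DrivingMode.SUPERVISORY,
    3: DrivingMode.SUPERVISORY,
    4: DrivingMode.AUTONOMOUS,
    5: DrivingMode.AUTONOMOUS,
}

print(SAE_LEVEL_TO_MODE[3].name)  # SUPERVISORY
```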

The car accepting the button press to engage self-driving transfers the duty of care to a fictional entity called the computer driver, for whom the manufacturer is responsible. That’s not incompatible with your Law Commission’s user in charge (UIC) and no user in charge (NUIC) categories.

The next question is: how do you give the duty of care back to the human driver? I say by giving them at least a 10-second warning, more if appropriate. In a lot of cases, 30 or 40 seconds might be required, depending on the circumstance.

It’s not perfect, but it’s got simplicity on its side. The car companies can then do whatever the heck they want, held accountable under tort law.
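Purely as a hypothetical sketch of what that hand-back rule could look like in logic – the names, thresholds and structure below are invented for illustration, not taken from Koopman’s proposal or any standard:

```python
from dataclasses import dataclass

MIN_WARNING_S = 10.0  # "at least a 10-second warning"

@dataclass
class HandBackRequest:
    """Hypothetical model of handing the duty of care back to the human."""
    warning_elapsed_s: float    # time since the take-over warning was issued
    required_warning_s: float   # context-dependent; may be 30-40 s in some cases

    def duty_of_care_transferred(self) -> bool:
        # The computer driver keeps the duty of care until the human has had
        # an adequate warning - never less than the 10-second floor.
        required = max(MIN_WARNING_S, self.required_warning_s)
        return self.warning_elapsed_s >= required

# Only 8 seconds of warning: the duty of care stays with the computer driver.
print(HandBackRequest(warning_elapsed_s=8.0, required_warning_s=10.0).duty_of_care_transferred())   # False
# A situation judged to need 30 seconds, with 35 elapsed: the hand-back stands.
print(HandBackRequest(warning_elapsed_s=35.0, required_warning_s=30.0).duty_of_care_transferred())  # True
```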

For further info, including links to Philip Koopman’s books and Safe Autonomy blog, visit koopman.us


Author: Neil Kennett

Neil is MD of Featurebank Ltd. He launched Carsofthefuture.co.uk in 2019.