Self-driving straight-talking: Missy Cummings wows PAVE UK Luncheon in London, 10 October 2025
Completing a great doubleheader, the day after the Zenzic CAM Pathfinder Launch, we made our way to the Royal Automobile Club in London for a PAVE UK Luncheon with Professor Missy Cummings.
Following a brief welcome by Prof. Kerry Kirwan, of the University of Warwick, came the headline act. As most readers will surely know, Cummings is a former US Navy fighter pilot, now a close colleague of our 2024 Self-Driving Industry Legend, Professor Phil Koopman.
Currently Director of Autonomy and Robotics at George Mason University in Fairfax, Virginia, having previously been a senior adviser to the US National Highway Traffic Safety Administration (NHTSA), she is best-known as an outspoken critic of Tesla’s reliance on camera-based vision.
Not full self-driving
“The world’s richest man doesn’t like me very much,” she said, regarding a 2025 court case in which Tesla was ordered to pay US$243m in damages following a fatal crash involving its Autopilot system back in 2019.
She subsequently received death threats from Elon Musk supporters online, but she’s not backing down. Maximising automated mobility safety is her life’s mission.
Much of the content was common to her recent presentation at Safecomp 2025 – see video below – augmented with aspects unique to the UK. This included praise for our Euro NCAP vehicle assessments, the standard of our driving test, and our more tightly regulated approach to on-road rollout.
“In America we run right ahead and do dumb stuff when sometimes it is better to be Number Two,” she said. “You’re about to benefit from us being first.”
Presenting data on US AV crashes involving Cruise, Waymo and Zoox vehicles, she highlighted an increased risk of rear-end shunts compared to human-driven cars.

“How generalisable is computer vision trained in the US, for the UK? No one knows!” she said. “My first question would be: How much training have you done on our roads?”
The badass quotes kept on coming. Leading UK operator Wayve was “kindergarten compared to Waymo”. Is there even such a thing as a self-driving car, given that every one has a human operator, either on-board or remote?
Giving a graphic description of the screams of a pedestrian being heard in a control room, she emphasised the need for a “big red button”, and criticised long-distance remote driving operations, noting that milliseconds of delay can be crucial.
Cruise, of course, folded, but might be making a comeback under one of Cummings’ former students.
Highlighting a recent case where a Waymo car turned across a lane and froze – ‘bricked’, in American parlance – she wondered how long it would take for road rage to kick in if such a scenario were repeated in London.
Then there’s Unexpected Actions by Others (UAOs), which self-driving cars are bad at responding to because “AI does not think, is not capable of judgment under uncertainty, is just linear algebra on steroids… which is why you will always need human oversight.”

Something as simple as a rotating light, for example, causes big problems for AI, because it is constantly changing, leading to aleatoric uncertainty – inherent noise in the measurements – not to mention epistemic uncertainty – gaps in the model’s knowledge.
“All neural nets hallucinate,” she warned, on the subject of phantom braking – advanced cars automatically slamming on the anchors for no apparent reason.
Self-driving Q&A
During the Q&A session, including questions by Dr Richard Saldanha of Queen Mary University and Dr Jack Stilgoe of University College London [Was everyone there an eminent academic?!], Cummings asserted that, while communication with infrastructure could be highly beneficial, “successful AVs must be able to be fully self-contained at all times.”
There followed a fireside chat, hosted by PAVE UK’s Prof. Siddartha Khastgir, and featuring Prof. Sarah Sharples, former Chief Scientific Adviser at the Department for Transport.
Sharples relayed her own ‘scary’ experience of phantom braking, saying it was currently impossible to say which was safer – Level 4 autonomous cars or Level 2++ cars, driven by humans aided by advanced driver assistance systems (ADAS) – due to a lack of real-world data.
“We need that data to be shared because at the moment we do not have the evidence base,” she said.
Summing up her pragmatic approach, Cummings concluded: “I’m a big fan of technology but in the US it’s not ’til you hit someone in the wallet that they start listening.”