DeKort slams on-road self-driving: L3 shouldn’t exist and L4 unviable
In the week that saw an undisputed heavyweight champion crowned for the first time this century, Cars of the Future interviewed one of self-driving’s most vocal critics, Michael DeKort. Like Oleksandr Usyk, we never duck a challenge!
A winner of the IEEE Barus Ethics Award, and member of the SAE On-Road Autonomous Driving Validation & Verification Task Force, DeKort shot to prominence in America in 2006, when, as an engineering project manager at Lockheed Martin, he posted a whistleblowing video about the company’s Deepwater system.
“I’m not against automotive autonomy, I’m against incompetent and unsafe autonomy,” he began. “If somebody wants to have legitimate and safe self-driving, actually, I’m your best friend.”
An unexpected start. “So, you’re a fan of the slower, more sensible approach we’re taking here in the UK?” we queried.
“No, the UK is just doing less of a very bad thing. I believe there are use cases for autonomy, maybe helping people who can’t or shouldn’t drive, or cutting down on the number of vehicles on the road, and the military side, but let’s try not to injure or kill people needlessly trying to get to level 4. And, while we’re at it, level 3 shouldn’t even exist.
“There’s a huge over-reliance on AI when all we have is pattern recognition, no matter how you slice it. In a lot of cases, it’s at a pixel level. To recognise something, the system has to experience a huge number of variations of objects in various scenarios.
“There’s no general artificial intelligence, there’s no inference, it is basically trial and error. In the meantime, you’re using humans as guinea pigs when, actually, a human who’s not drunk, not distracted, not asleep, is pretty darn good at driving.
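DeKort’s point about pattern recognition versus inference can be illustrated with a deliberately crude sketch. This is not any vendor’s system: it is a hypothetical nearest-match classifier over tiny binary “images”, showing that a pure pattern matcher labels only what closely resembles its training examples and draws a blank on unseen variations.

```python
# Toy illustration (hypothetical, not any real perception stack):
# a pixel-level pattern matcher recognises only what it has seen.

def hamming(a, b):
    """Count differing pixels between two equal-length binary images."""
    return sum(x != y for x, y in zip(a, b))

def classify(image, training_set, max_distance=1):
    """Return the label of the nearest training example, or None if
    nothing is within max_distance pixels: no match, no inference."""
    best_label, best_dist = None, max_distance + 1
    for label, example in training_set:
        d = hamming(image, example)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Two memorised variations of a "pedestrian" pattern (4-pixel images).
training = [
    ("pedestrian", (1, 1, 0, 0)),
    ("pedestrian", (1, 1, 1, 0)),
]

print(classify((1, 1, 0, 0), training))  # seen before -> "pedestrian"
print(classify((0, 0, 1, 1), training))  # unseen variation -> None
```

Covering every unseen variation means adding ever more training examples, which is the trial-and-error treadmill DeKort describes.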
Off-road self-driving
“If you do mining or farming with autonomous vehicles then fine, because you can get through the use cases in testing. If they detect an object that shouldn’t be there, they just stop, and probably then somebody remote controls them out of the way.”
“What about shuttles on dedicated lanes?” we asked. “That’s basically a monorail without the rail – it has infrastructure around it – a cordoned-off area where only certain people are allowed to go, and that’s factored in.
“The public road network is different. There are countless variations in the environment – material differences, colour differences, blah, blah. The point is you can’t get the testing workload down enough.
“You can pick any spot near your house and you will never see level 4 there in your lifetime because there are too many variables. A city like London, forget it! And don’t even get me started on autonomous aircraft without a pilot’s seat, that’s insane.
Self-driving’s perfect storm
“Right now, the autonomous vehicle industry is in a perfect storm. They can’t get enough real-world testing data, and the simulations can’t run sufficiently complex models.
“Most autonomous vehicle makers don’t even model the perception system, they skip over it and feed their own data into the planning system. That’s not proper systems engineering.
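The shortcut DeKort criticises can be sketched in a few lines. This is an illustrative assumption, not any company’s actual test harness: a toy planner fed perfect ground truth always makes the right call, while routing the same ground truth through even a crude perception-error model surfaces failures the shortcut never shows. All function names and numbers here are hypothetical.

```python
import random

def planner(perceived_obstacle_m):
    """Toy planning rule: brake if an obstacle is perceived within 30 m."""
    if perceived_obstacle_m is not None and perceived_obstacle_m < 30:
        return "brake"
    return "cruise"

def noisy_perception(true_distance_m, miss_rate=0.1, noise_m=5.0):
    """Crude perception model: occasionally misses the object entirely,
    otherwise reports a noisy distance estimate."""
    if random.random() < miss_rate:
        return None  # missed detection
    return true_distance_m + random.uniform(-noise_m, noise_m)

random.seed(0)
true_distance = 25.0  # ground truth: obstacle 25 m ahead

# Shortcut: planner always receives perfect ground truth -> always brakes.
print(planner(true_distance))  # "brake" every time

# With a perception model in the loop, some runs miss the obstacle.
decisions = [planner(noisy_perception(true_distance)) for _ in range(1000)]
print(decisions.count("cruise"))  # failures the shortcut never surfaces
```

Skipping the perception model makes every simulated run an easy pass, which is why DeKort calls it improper systems engineering.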
“Crash scenarios are not automatically edge cases. That is nonsense. They use it as an excuse, as if these things were so rare. There’s been a lot of coverage about how robotaxis have problems making unprotected left turns. That’s not an edge case. It’s higher risk, but it’s something human drivers do all the time.
“Look at all the executives from the failed autonomous vehicle makers who have left the industry. Why? Because they realise they can’t get to where they want to go: level 4. So they move on to different use cases.
“A lot of senior people in this industry have blocked me. They don’t address my point that this technology is not viable, and by putting it on the road we risk harming people for no reason. Tell me why I’m incorrect.”
DeKort has thrown down the gauntlet. Would anyone care to pick it up?
Thanks to Attentie Attentie for the boxing ring pic.