2023 Self-Driving Industry Award winner, Dr Nick Reed, on Digital Commentary Driving

#SDIA23 Update: Research award winner Dr Nick Reed developing Digital Commentary Driving concept to assess self-driving safety

Welcome to #SDIA23 Updates, a new series exploring what our reigning Self-Driving Industry Award champions have been working on recently. First up: winner of the 2023 award for Research, Reed Mobility.

In this category, the judges were looking for examples of exceptional academic studies and/or market analysis. Funded by the Rees Jeffreys Road Fund, Reed Mobility led a project exploring public attitudes and expectations towards the ethical behaviours of self-driving vehicles.

Since the awards in November, Reed Mobility founder, Dr Nick Reed, has continued his work as Chief Road Safety Adviser to National Highways – including plans to reduce deaths and serious injuries on England’s strategic road network (SRN), and supporting activity in relation to connected and automated mobility (CAM), smart motorways and cybersecurity.

He recently became a founding member of the Department for Transport’s College of Experts, was appointed a trustee to the Road Safety Trust, and joined the Advisory Board of Partners for Automated Vehicle Education (PAVE) UK.

As if all that weren’t enough, Dr Reed also found time to update BSI’s CAM Vocabulary …

BSI self-driving definition: automated driving

… and continued his work with colleagues there to develop a technique for assessing automated vehicle (AV) safety performance – Digital Commentary Driving (DCD) – as he explains here:

Self-driving data

NR: The 20th century statistician W. Edwards Deming is quoted as saying “In God we trust; all others must bring data”. This captures his view that when it comes to important decisions, gut feel and belief are not enough; objective evidence in the form of data is necessary to support decision-making.

My work that won the research category of the Self-Driving Industry Awards 2023 identified that trust was the most important value to the public in their appreciation of automated vehicles and that this trust is encapsulated by four key attributes. AVs should:

  • Be governed by a clear, legal framework;
  • Be at least as safe as a good human driver;
  • Protect other road users at least as well as they protect their occupants;
  • Share data with stakeholders to improve safety.

These principles are enshrined in the recently passed Automated Vehicles Act 2024, but what will be the data that enables us to trust that they will be safe? There are many ways that this question can be approached. The Act provides some signposts, starting with two key principles:

  1. authorised automated vehicles will achieve a level of safety equivalent to, or higher than, that of careful and competent human drivers, and
  2. road safety in Great Britain will be better as a result of the use of authorised automated vehicles on roads than it would otherwise be.

These make intuitive sense, but what objective data would satisfy Deming and show that AVs are adhering to these principles? Principle (1) is challenging because there is no agreed definition of careful and competent driving (although DVSA’s National Standards for Driving are a good start); it is not clear how we could determine that an automated vehicle is behaving carefully and competently, and therefore what data we should be collecting in order to prove it.

For principle (2), the answer appears more straightforward. We could look at collision rates of AVs (i.e. crashes per distance travelled) and compare that to collision rates for human drivers in similar vehicles on similar journeys and, if AVs achieve a lower crash rate, we can say that road safety is better. However, there are nuances here too. Let’s say AVs were found to be 10% safer than human-driven vehicles – would we consider road safety to have improved if utilisation went up by 20%?

Although each individual AV trip would be relatively safer than that completed by a human driver, the overall level of exposure would mean an increase in the absolute number of crashes. Furthermore, an AV service might attract customers who previously completed a similar journey by train. Rail travel is estimated to be 20× safer than human driving so shifting to AVs would increase the global risk of injury even if the AVs were significantly safer than human drivers on the same trip. 
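As a rough illustration of this exposure effect, the arithmetic can be sketched as follows (the baseline mileage and crash rate below are hypothetical numbers chosen for the example, not figures from the article):

```python
# Illustrative sketch: a 10% per-mile safety improvement can still raise the
# absolute number of crashes if total exposure (miles driven) grows enough.

def expected_crashes(miles, crashes_per_million_miles):
    """Expected crash count given total exposure and a per-mile crash rate."""
    return miles * crashes_per_million_miles / 1_000_000

# Hypothetical baseline: 100 million miles driven by humans,
# at 1 crash per million miles.
human_rate = 1.0
baseline = expected_crashes(100_000_000, human_rate)   # 100 expected crashes

# AVs are 10% safer per mile, but utilisation rises by 20%.
av_rate = human_rate * 0.9
with_avs = expected_crashes(120_000_000, av_rate)      # 108 expected crashes

print(baseline, with_avs)
```

Even though each AV mile is safer, the 20% rise in exposure outweighs the 10% per-mile improvement, so the absolute crash count goes up.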

My work with colleagues from BSI to develop a potential technique for assessing AV safety performance may offer a solution. We looked at the ways we assess the safety of advanced human drivers and the metrics used to assess the safe performance of mobile robots. Bridging these worlds, we proposed the concept of Digital Commentary Driving (DCD): a standardised protocol for collecting from AVs the data they must already be using in order to drive safely.

This includes the current status of the vehicle (e.g. speed, steering angle, brake application, accelerator application, current heading, software version etc.), perception of the surrounding environment (e.g. fixed objects, moving objects) and predictions of their future movement (e.g. desired future heading, desired speed etc.).
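Purely as an illustration of the kind of record described above, one DCD data point might look something like the sketch below. The field names and structure are hypothetical assumptions for this example; the article does not specify the BSI protocol’s actual format.

```python
# Hypothetical sketch of one Digital Commentary Driving (DCD) record.
# Field names and structure are illustrative, not the actual BSI protocol.
from dataclasses import dataclass, field

@dataclass
class PerceivedObject:
    object_id: str
    kind: str                     # e.g. "pedestrian", "vehicle", "fixed"
    position_m: tuple             # (x, y) relative to the AV, in metres
    predicted_heading_deg: float  # predicted future heading of the object
    predicted_speed_mps: float

@dataclass
class DCDRecord:
    timestamp: float              # seconds since epoch
    software_version: str
    speed_mps: float              # current vehicle status...
    steering_angle_deg: float
    brake_applied: bool
    accelerator_pct: float
    heading_deg: float
    desired_heading_deg: float    # ...and the AV's intended future state
    desired_speed_mps: float
    perceived_objects: list = field(default_factory=list)

record = DCDRecord(
    timestamp=1720343100.0, software_version="2.4.1",
    speed_mps=13.4, steering_angle_deg=-2.0, brake_applied=False,
    accelerator_pct=12.0, heading_deg=90.0,
    desired_heading_deg=90.0, desired_speed_mps=13.4,
    perceived_objects=[
        PerceivedObject("obj-17", "pedestrian", (4.2, 1.1), 270.0, 1.3)
    ],
)
print(record.perceived_objects[0].kind)
```

Note that nothing in such a record says *how* the AV arrived at its perceptions or decisions, which is consistent with DCD’s hardware- and software-agnostic intent.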

Since DCD data covers the essential features necessary for careful and competent driving, it cannot compromise commercially sensitive information about the way a vehicle is being controlled. There is no presumption over how this data is arrived at by the AV systems. DCD does not prescribe the hardware (e.g. an AV might use one or more of cameras, lidar, radar, ultrasound, V2X communication etc.) or the software (e.g. end-to-end deep learning or rules-based approaches etc.) involved – it only requires the sharing of standardised data regarding the perceptions, decisions and actions of the AV.

Furthermore, AV companies would only be required to share data on the performance of their vehicles with an authorised regulator, who would hold it securely for safety analysis.

Of course, the collection of DCD data in itself does not tell us what it means for an AV to be a careful and competent driver. However, it does start to provide a consistent dataset that will enable objective analysis of driving performance by AVs from all developers, and allow benchmarks to be established that set expectations around what it means to drive safely.

This may start the process of building the trust so valued by the public – and would perhaps satisfy Deming’s expectation for objective data to support critical decision making.  

Self-driving expert Dr Nick Reed

Self-driving critic Michael DeKort pulls no punches on L3 and L4 on public roads.

DeKort slams on-road self-driving: L3 shouldn’t exist and L4 unviable

In the week which saw an undisputed heavyweight champion crowned for the first time this century, Cars of the Future interviewed one of self-driving’s most vocal critics, Michael DeKort. Like Oleksandr Usyk, we never duck a challenge!

A winner of the IEEE Barus Ethics Award, and member of the SAE On-Road Autonomous Driving Validation & Verification Task Force, DeKort shot to prominence in America in 2006, when, as an engineering project manager at Lockheed Martin, he posted a whistleblowing video about the company’s Deepwater system.

On-road self-driving critic Michael DeKort

“I’m not against automotive autonomy, I’m against incompetent and unsafe autonomy,” he began. “If somebody wants to have legitimate and safe self-driving, actually, I’m your best friend.”

An unexpected start. “So, you’re a fan of the slower, more sensible approach we’re taking here in the UK?” we queried.

“No, the UK is just doing less of a very bad thing. I believe there are use cases for autonomy, maybe helping people who can’t or shouldn’t drive, or cutting down on the number of vehicles on the road, and the military side, but let’s try not to injure or kill people needlessly trying to get to level 4. And, while we’re at it, level 3 shouldn’t even exist.

“There’s a huge over-reliance on AI when all we have is pattern recognition, no matter how you slice it. In a lot of cases, it’s at a pixel level. In order to recognise something, the system has to experience a huge number of variations of objects in various scenarios.

“There’s no general artificial intelligence, there’s no inference, it is basically trial and error. In the meantime, you’re using humans as guinea pigs when, actually, a human that’s not drunk, not distracted, not asleep, is pretty darn good at driving.

Off-road self-driving

“If you do mining or farming with autonomous vehicles then fine, because you can get through the use cases in testing. If they detect an object that shouldn’t be there, they just stop, and probably then somebody remote controls them out of the way.”

“What about shuttles on dedicated lanes?” we asked.

“That’s basically a monorail without the rail – it has infrastructure around it – a cordoned-off area where only certain people are allowed to go, and that’s factored in.

“The public road network is different. There are countless variations in the environment – material differences, colour differences blah, blah. The point is you can’t get the testing workload down enough.

“You can pick any spot near your house and you will never see level 4 there in your lifetime because there are too many variables. A city like London, forget it! And don’t even get me started on autonomous aircraft without a pilot’s seat, that’s insane.

Self-driving’s perfect storm

“Right now, the autonomous vehicle industry is in a perfect storm. They can’t get enough real-world testing data, and the simulations can’t run complex enough models.

“Most autonomous vehicle makers don’t even model the perception system, they skip over it and feed their own data into the planning system. That’s not proper systems engineering.

“Crash scenarios are not automatically edge cases. That is nonsense. They use it as an excuse, like this thing is so rare. There’s been a lot of coverage about how robotaxis have problems making unprotected left turns. That’s not an edge case. It’s higher risk, but it’s something that human drivers do all the time.

“Look at all the executives from the failed autonomous vehicle makers who have left the industry. Why? Because they realise they can’t get to where they want to, to level 4. So, they go off to different use cases.

“A lot of senior people in this industry have blocked me. They don’t address my point that this technology is not viable, and by putting it on the road we risk harming people for no reason. Tell me why I’m incorrect.”

DeKort has thrown down the gauntlet, would anyone care to pick it up?

Thanks to Attentie Attentie for the boxing ring pic.

Self-driving on track! Exclusive video of Indy Autonomous Challenge testing ahead of UK competitive debut at Festival of Speed 2024

Indy Autonomous Challenge self-driving tests ahead of Festival of Speed 2024 record attempt

The Indy Autonomous Challenge (IAC) race team invited Cars of the Future to Goodwood on Sunday 7 July to witness final testing ahead of its self-driving hill climb record attempt at the Festival of Speed 2024.

Working closely with Vodafone for on-site connectivity, a new version of the 192mph PoliMOVE car – the reigning autonomous land speed world record holder – successfully completed a series of increasingly rapid test runs, as you can see here…

Self-driving on track: Indy Autonomous Challenge at Festival of Speed 2024

Sponsored by Bridgestone and developed by Politecnico di Milano, the University of Alabama and Michigan State University, PoliMOVE now boasts a Dallara AV-24 chassis, 4-cylinder Honda engine, upgraded hardware (including 4 Luminar Iris lidar units, 6 cameras and 2 GPS), and a significantly improved software stack.

With safety paramount and weather permitting, the team expects to beat the current Festival of Speed (FOS) mark of 66.96s, achieved by the now defunct Roborace team back in 2018.

Self-driving on track - IAC PoliMOVE car at Festival of Speed 2024

Self-driving challenge

Commenting on the unique demands of the narrow track, Paul Mitchell, IAC President, said: “Unlike the familiar ovals and F1 road courses, Goodwood’s famous Hillclimb will challenge the precision of sensor perception, GPS localisation, vehicle dynamics, and path planning in new ways, providing a historical backdrop to showcase the future of high-speed autonomous mobility.”

The event forms part of Goodwood’s FOS Tech strategy, bringing together all future mobility content, along with science, technology, engineering, and mathematics (STEM) learning programmes for 11-16 year-olds.

FOS founder, The Duke of Richmond, said: “This year, with our new FOS Tech ethos, visitors to the Festival of Speed can experience the work of groundbreaking innovators and their vision for tomorrow’s world.”

PoliMOVE has provisionally been allocated a slot in Batch 2 on each of the four days. For scheduled track times, see the Festival of Speed website.

Self-driving expert Peter Hafmar of Scania talks automated mining and hub-to-hub transport

Why Scania is focused on self-driving hub-to-hub and mining

Following his talk on safe and responsible self-driving rollout at the SMMT’s Connected 2024 event, we asked Peter Hafmar, Head of Autonomous Solutions at Scania, to expand on his groundbreaking work.

Self-driving expert Peter Hafmar of Scania

PH: “I started in the people transport area, more bus-related, at the height of self-driving hype in 2015-16, when everyone believed it was all going to happen immediately! Then I moved into the taxi pod side of things, and for the last three years I’ve been at Scania, working on the two closest to market segments – on-road hub-to-hub goods transport and mining applications.

Self-driving mining

“In mining, we have a development agreement with British-Australian multinational, Rio Tinto, for an open pit rigid tractor solution. We’ve been working on it for quite a while and have come so far we’re now opening up the order books to other customers, first in Australia and then other continents, probably South America next.

Scania self-driving mining

“A lot of mines are very remote, so there’s complexity in getting people able to operate vehicles there, but the main push is safety. Mining customers are extremely safety conscious. If you can remove people from hazardous situations, you do so. We’ve also found we can downsize the tractors and mine in a more efficient way.

Self-driving hub-to-hub

“For the hub-to-hub, we’ve been driving on public roads since 2021, so our development curve is quite steep. We’ve been driving in different countries, but our main drive is now in Sweden, working with our US-based software stack partner, Plus.

“We’re in dialogue with several customers about starting operations, moving their goods from one of their terminals to another completely autonomously, but with a safety driver. We’re getting to a point with the technology now where we need to focus on the integrations you need in the customer’s data flows – what communications you want to have with their management system so everything is integrated in the right way.

“The biggest difference for us between hub-to-hub and mining is that with the on-road project we have a software stack partner, whereas for the mining segment we’re doing everything ourselves. The technology in the confined area is more rules based, whereas on-road involves more AI solutions – how you interact with the hardware, sensors and computing side. There’s a lot of learning we can bring in from the mining segment about full self-driving operations, without a safety driver.

Scania self-driving

“Scania is a well-established European company with a really good set of values, traditionally very focused on truck hardware. Today, we also develop state-of-the-art software, and are embracing all these transformative modern technologies. It’s pretty cool.

“As to how deployment will happen, people services in cities like London or Stockholm will take time, not only because of the complexity of the infrastructure, but also the difficulties in accounting for free will. How do you take payment? How do you get people to sit if they insist on standing?  How do you stop them holding a door open when they shouldn’t? There’s so much to consider. That’s why industrial applications and hub-to-hub freight will come first.”

New for summer 2024! Your favourite self-driving news in print.

Cars of the Future – the UK’s No.1 for Self-Driving – in print for MOVE

Well, this is exciting; visitors to the MOVE event in London will be the first to enjoy Cars of the Future – the UK’s No.1 for Self-Driving – in print.

Copies will be available in Theatre 6 (the AV stage), where I’ll be hosting on Wednesday afternoon. If you can’t make it, fear not, you can always email us to request one.

Or there’s this digital copy: Cars of the Future, summer 2024

A brief look at common objections to self-driving. Do they stack up?

AV Myth-Busting: From Self-Driving Denial To Terrorist Hacks

“It’s clear that many people are still not sure whether self-driving vehicles will be safer than human drivers, and don’t know whether they will improve travel or who will benefit most,” concluded the 2021 Myth-Busting Self-Driving Vehicles paper by the road safety charity, Brake.

Compared to bizarre but persistent urban legends like “dogs can’t look up”, these sound like reasonable doubts which can and should be addressed. That said, it is widely accepted that the automated vehicle (AV) industry has a public perception mountain to climb.

To move the debate on, we’ve divided the vocally anti-self-driving into three groups: 1) deniers, 2) opponents, and 3) catastrophisers. Let’s take their concerns in turn and see if they stack up.

Self-driving deniers

The it’ll never happen brigade – those living, wilfully or not, in denial of the capabilities of modern transport technologies. Exhibit A: The headline “Self-driving cars are another Silicon Valley fantasy that will never work” in The Telegraph last September.

Being as balanced as we can, the industry has somewhat brought this upon itself by overpromising. In 2018, Elon Musk felt “very confident” that Tesla owners would be sending their cars out as robotaxis the following year. That didn’t happen.

Fast forward to 2024, however, and AVs are on the road. In America, Waymo says it has conducted “7+ million miles of rider-only driving”. In Scotland, Project CAVForth – using a specially modified fleet of Stagecoach buses – has been taking fares daily since May 2023, giving tens of thousands of UK passengers their first taste of self-driving public transport.

Self-driving opponents

The I don’t like it mob – fair enough, that is their prerogative, but it is often extended to an assertion that nobody loves AVs. Exhibit B: The headline “Maybe People Don’t Want Self-Driving Cars After All” in Jalopnik last October.

Ok, why should they? Last summer, The Self-Driving All-Party Parliamentary Group published a well-informed policy paper to make the case, starting with some pretty eye-catching economic and safety benefits.

“The UK has a unique opportunity for leadership in an industry that could be worth £750 billion globally by 2035,” it said. “The Government’s analysis of the sector showed that it could potentially generate £42 billion and 38,000 jobs for the UK economy by 2035.”

On the safety impact, it listed the four leading causes of road accidents – driver error, reckless behaviour, disobeying traffic laws and driver impairment – saying: “Research from the insurance industry shows that self-driving vehicles could save the NHS £2.3 billion annually in medical and ambulance costs by eliminating the 85% of accidents where human error is a contributory factor.”

Another oft-quoted benefit is improved accessibility. While urging the industry to engage more with the community, Gordon McCullough, CEO of the Research Institute for Disabled Consumers (RiDC), said recently: “Self-driving can clearly be a transformative technology for a lot of disabled people.”

Self-driving catastrophisers

The nightmare scenario obsessives – sometimes quite knowledgeable, who focus on the worst potential impacts of automation. Exhibit C: The headline “Terrorists could hack into driverless cars to use as weapons” in The Mail last October.

Cybersecurity has been one of the hottest automotive topics for a decade now, with increasingly frequent and sophisticated attacks met by ever more advanced defences. It was highlighted at the SMMT’s Connected 2024 event that we don’t invest as heavily as the banking sector. Maybe we should.

Another go-to for catastrophisers is the trolley problem – the question of who to save in no-win crash situations. Exhibit D: Jeremy Clarkson’s “Driverless cars are pointless – and they have built-in instructions to kill you” headline in The Sun.

That isn’t how perception software works, and, as Elliot Hemes, of IPG Automotive UK, says: “99% of the time, great brakes will get you out of trolley problem scenarios.”

Please note: a version of this article was first published in the Institute of the Motor Industry’s MotorPro magazine.

Talking self-driving safety and regulation with Philip Koopman, Associate Professor at Carnegie Mellon University

Koopman on self-driving safety in 2024: UK is adult in the room, US is Wild West

With the Automated Vehicles Bill passing Parliament, and attention turning to secondary legislation, we go deep on regulation with one of the world’s preeminent self-driving safety experts – Philip Koopman, Associate Professor in the Department of Electrical and Computer Engineering (ECE) at Carnegie Mellon University, Pennsylvania.

In his 2022 book “How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety”, aimed at engineers, policy stakeholders and technology enthusiasts, Koopman deconstructs the oft-quoted metric of being “at least as safe as a human driver”, and urges greater focus on what is “acceptably safe for real-world deployment”.

Self-driving safety expert, Philip Koopman

You’ve described the UK as “the adult in the room” when it comes to self-driving regulation – why? 

To be clear, the context was a general statement about safety, not necessarily specific to any particular regulation or standard. It’s a cultural statement, rather than a technical one.

Let’s talk about the US, the UK and Europe, because I can separate those out. In Europe, there’s type approval, whereas in the US there is no requirement to follow any standards at all. People point to the Federal Motor Vehicle Safety Standards (FMVSS), but that’s about things like airbags and dashboard warning lights, not automated vehicle features.

In the UK, you have the ALARP principle, which applies to all health and safety law. It is not required anywhere else, other than perhaps Australia, which is also doing a good job on safety. Under ALARP, companies are required to have a safety case that demonstrates they have mitigated risks ‘As Low As Reasonably Practicable’.

That’s a reflection of UK culture valuing and emphasising safety – industrial safety systems as well as occupational safety. Other countries don’t do that to the same degree, so that was the basis for my ‘adult in the room’ statement.

You British actually have research funding for safety! There’s a bit of that in the EU, but in the US there’s essentially none. I’ve succeeded, as has Professor Leveson at MIT, but we’re a very small handful. In the UK, you have the York Institute for Safe Autonomy, you have Newcastle University, and there’s government funding for safety which you just don’t see in the US.

What about self-driving vehicle manufacturers – how do they approach safety?

The car companies had functional safety people, and some of them ended up looking at autonomy, but it was often pretty crude. You need to differentiate between traditional motor vehicle safety and the computer-based safety required for self-driving.

Ultimately, it comes down to culture. The car safety people have historically had a human driver to blame when things go bad – and this is baked into the standards such as ISO 26262, the classic automotive safety standard for electronic systems.

In private, some US self-driving companies will say ‘yeah, we read it, but it’s not for us’. In public, they use words written by lawyers for other lawyers – the large print giveth and the fine print taketh away.

In other standards, risk is a combination of probability and severity – the riskier it is, the more engineering effort you need to put in to mitigate that risk.

In automotive, they say it’s controllability, severity and exposure. They take credit every time a driver cleans up a technical malfunction, until they don’t – then they blame driver error. Google the Audi 5000 Unintended Acceleration Debacle, a famous case from the 1980s. The point is car companies are used to blaming the humans for technical malfunctions.

In self-driving you also have the robot guys, who are used to making cool demos to get the next tranche of funding. Their idea of safety is a big red button. I’ve worked with them, they’re smart and they’re gonna learn on the job, but they historically had zero skills in mass production or safety at scale on public roads.

Both these cultures made sense in their previous operating environments. In traditional automotive, I have a problem with some driver blaming but, holistically, one fatality per 100 million miles is pretty impressive. With the robot guys, the Silicon Valley ‘move fast and break things’ model falls down if what you’re breaking is a person, particularly a road user who didn’t sign up for the risk.

Oh, and they’re also now using machine learning, which means the functional safety people will struggle to apply their existing toolsets. That’s the challenge. It’s complicated and there’s lots of moving parts.

Koopman's 2022 book on self-driving safety: How Safe Is Safe Enough?

Which brings us to the need for regulation…

In the US, it’s like we’ve been purposely avoiding regulating software for decades. Look at the National Highway Traffic Safety Administration (NHTSA) investigations into Tesla crashes – it always seems to be about the driver not paying attention, rather than Tesla made it easy for them not to pay attention.

Now we have the likes of Cruise, Waymo and Zoox – computers driving the car, no human backup, and basically self-certification. Jump through the bureaucratic hoops, get insurance, and you can just put this stuff on the road.

The US is the Wild West for vehicle automation. There are no rules. The NHTSA might issue a recall for something particularly egregious. If there’s a bad crash in California, the Department of Motor Vehicles (DMV) might yank a permit.

Our social contract is supposedly supported by strong tort and product defect laws. But what good is that if it takes five years and a million dollars of legal fees to pursue a car company in the event of a fatal crash? In some states the computer is said to be responsible for driving errors, but is not a legal person, so there is literally nobody to sue.

That’s why I’m working with William H. Widen, Professor at the University of Miami School of Law – to find ways to reduce the expense and improve accessibility.

Expanding this to hands-free driving, you’re no fan of using the SAE levels for regulation?

Whether you like them or not, the SAE levels are the worst idea ever for regulation – they make for bad law. The mythical Level 5 is just an arbitrary point on a continuum! Also, testing – beta versus not beta – matters a lot and SAE J3016 is really weak on that.

That’s why I’ve proposed a different categorisation of driving modes: testing, autonomous, supervisory and conventional. L2 and L3 are supervisory; L4 and L5 are autonomous.

The car accepting the button press to engage self-driving transfers the duty of care to a fictional entity called the computer driver, for whom the manufacturer is responsible. That’s not incompatible with your Law Commission’s user in charge (UIC) and no user in charge (NUIC).

The next question is: how do you give the duty of care back to the human driver? I say by giving them at least a 10 second warning, more if appropriate. In a lot of cases, 30 or 40 seconds might be required, depending on the circumstance.

It’s not perfect, but it’s got simplicity on its side. The car companies can then do whatever the heck they want, held accountable under tort law.

For further info, including links to Philip Koopman’s books and Safe Autonomy blog, visit koopman.us

Attention turns to secondary legislation as landmark self-driving Bill passes UK Parliament.

Self-driving insurance and skills issues as AV Bill awaits royal assent

Following consideration of Commons amendments in the Lords on 8 May 2024, the Automated Vehicles (AV) Bill has successfully passed through Parliament.

The landmark self-driving legislation now awaits only the rubber stamp of royal assent, with some speculating this could be given within days.

We’ve covered the passage of the Bill extensively on Cars of the Future, from its inclusion in the 2023 King’s Speech to the excellent Self-Driving Vehicles All-Party Parliamentary Group (APPG) media briefing at Wayve.

For the sake of posterity, let us record here that its long title is: “A Bill to regulate the use of automated vehicles on roads and in other public places; and to make other provision in relation to vehicle automation.”

It was sponsored by Lord Davies of Gower, Parliamentary Under Secretary of State at the Department for Transport (DfT), and Secretary of State for Transport, Mark Harper, both Conservatives, but secured cross-party support.

Progress of the AV Bill on the Parliament website

Self-driving scrutiny

Lord Davies of Gower said: “My Lords, I extend my gratitude to colleagues across the House for their supportive comments on and contributions to this Bill. Your Lordships’ careful and considered scrutiny has been hugely valuable.

“Over the coming months, we will launch a comprehensive programme of secondary legislation, building the new regulatory framework piece by piece.

“This will incorporate several statutory instruments, including guidance in the form of the statement of safety principles. Among the first elements to be consulted on will be regulations on misleading marketing, as these can apply before the authorisation system has been established.”

What might this mean for Tesla’s Full Self-Driving (FSD) package, we wonder? Tellingly, prominent early reactions came from the automotive and insurance industries.

Industry reaction

Jonathan Fong, of the Association of British Insurers (ABI), said: “We’re delighted the Automated Vehicles Bill will soon receive royal assent – putting the UK on the road to being a world leader in AV technology.

“While this Bill represents a significant step forward, further consideration is needed to address concerns around safety and cybersecurity. It’s critical that insurers have access to relevant data in order to support the adoption of this technology.”

Tara Foley, CEO of AXA UK&I, agreed: “AXA UK is delighted that the Bill has now become law, paving the way for self-driving vehicles to improve road safety, boost the UK economy and enhance mobility for people with limited transport options, including the disabled and elderly.

“It’s now crucial that secondary legislation is quickly passed to address issues such as cybersecurity, data sharing and the safety principles for commercial deployment.”

Meanwhile, the Institute of the Motor Industry (IMI) was quick to highlight the technical upskilling required to service fleets of self-driving vehicles.

IMI highlights self-driving training need

Hayley Pells, Policy Lead at the IMI, said: “The Automated Vehicles Bill 2024 addresses the liability issues of automated vehicles for manufacturers and insurers, and provides a positive pathway for the introduction of this new form of mobility that could be empowering for so many.

“Clearly this is just the first step, and the IMI is keen to ensure that future legislation also takes into account the skills that will be crucial in the aftermarket for safe use of automated vehicles.  

“Failure to maintain and update these high-tech systems, many of which are designed to keep road users safe, really could be a matter of life and death.

“To ensure checks are carried out accurately, we desperately need more technicians to be trained to work on vehicles with this technology. We are therefore urging government and policymakers to ensure there’s the funding and infrastructure to support the essential upskilling.”

Responding to an update on the AV Bill by transport technology lawyer Alex Glassbrook, Ben Gardner of Shoosmiths suggested that royal assent could be given as soon as next week – then on to secondary legislation.

As Nelson Mandela observed in Long Walk to Freedom: “After climbing a great hill, one only finds that there are many more hills to climb.”

New York City to start trials of self-driving cars with safety drivers.

Safety drivers required for Big Apple self-driving trials

Flagship local news provider NBC New York has highlighted plans to trial self-driving cars with safety drivers in the most populous city in the United States – “autonomous but not driverless”, as reporter Andrew Siff puts it.

2024 NBC New York report on automated driving testing in NYC

Self-driving NYC

NYC Department of Transportation Commissioner, Ydanis Rodriguez, emphasised that all companies applying for testing permits will have to go through a rigorous approval process.

They must also agree to share data relating to any occasions when the safety driver has intervened.

This approach is broadly in line with what we’re doing here in the UK – certainly more cautious than the freewheeling approach of California.

The report notably contains footage of robotaxis operated by both Cruise and Waymo.

Cruise and Waymo self-driving cars

Thanks to Ian Dooley on Unsplash for the cool black and white NYC photo.

VW deepens strategic partnership with Mobileye to deliver self-driving ID Buzz

VW links with Mobileye promising large scale self-driving EV production by 2026

In what it claims is a first for a global vehicle manufacturer, Volkswagen Group has partnered with self-driving technology specialist, Mobileye, to develop a Level 4 electric van for “large-scale production”.

The agreement will see Mobileye supplying software, hardware and digital maps for the self-driving ID Buzz, in particular a self-driving system based on the Mobileye Drive platform.

Further key components include two independent high-performance computers, 13 cameras, nine lidar units and five radar units, plus a constant online connection to cloud services providing “swarm data from other road users about the traffic situation”.

VW partners with Mobileye to push self-driving

As well as automated driving, Volkswagen Commercial Vehicles (VWCV) is the lead brand within VW for Mobility-as-a-Service (MaaS). It has been working on the self-driving ID Buzz since 2021 and, via Volkswagen ADMT GmbH, plans to have it ready by 2026.

Pushing self-driving

“Bringing autonomous shuttles on the road in large quantities requires cooperation from strong partners,” said Christian Senger, member of the Board of Management at VWCV. “We are developing the first fully autonomous large-scale production vehicle, and Mobileye brings its digital driver on board.”

In the longer term, VW aims to develop its own in-house system, leveraging its partnerships with Bosch and Qualcomm, as well as Horizon Robotics in China.

“Our goal is to offer our customers throughout the world outstanding products with cutting-edge technology,” said Oliver Blume, CEO of VW and Porsche.

VW CEO, Oliver Blume, backs self-driving

“New automated driving functions will significantly boost convenience and safety. These functions, which will be tailored to our brands and products, will make every trip a personal, individual experience. In Mobileye, we have an additional first-class partner to shape this automotive future together.”

Prof. Amnon Shashua, President and CEO of Mobileye, added: “We are proud to work closely with Volkswagen Group to make the future of driving safer, more automated and more rewarding.”