Lucas Noldus Ph.D. details the latest high-tech ways to measure driver behaviour in ADAS-equipped and self-driving vehicles

Connected and self-driving car safety: Noldus keeps more than an eye on distracted driving

Isn’t LinkedIn marvellous? I met Lucas Noldus Ph.D., Founder & CEO of Netherlands-based Noldus Information Technology, after he liked my interview with his Global Partnership on Artificial Intelligence (GPAI) colleague, Inma Martinez.

A few messages flew back and forth, and it transpired that he’s an expert in measuring driver behaviour, particularly driver-vehicle interactions in ADAS-equipped and self-driving vehicles. That was music to my ears, so we arranged a Zoom. What follows is the highly insightful result.

Lucas Noldus Ph.D., Founder of Noldus Information Technology

LN: “The future starts here. The world is changing. We see people living longer and there are more and more interactive devices – telephones, tablets, dashboards – with which we can interact, leading to greater risk of distraction while driving. I know personally how tempting it is to use these devices, even when you’re trying to keep your eyes on the road.

“We already have fascinating developments in connected driving and now, with self-driving, the role of the driver changes significantly. That has triggered research institutes, universities, OEMs and tier one suppliers to pay more attention to the user experience for both drivers and passengers.

“All these experiences are important because how people perceive safety and comfort will influence their buying decisions, and their recommendations to other potential users.

“For autonomous driving, how far will we go towards Level 5? What happens at the intermediate stages? Over the coming decades, driving tasks will gradually diminish but, until full autonomy, the driver will have to remain on standby, ready to take over in certain situations. How will the vehicle know the driver is available? How quickly can he take over? These are the topics we’re involved in as a technology company.

“We make tools to allow automotive researchers to keep the human in the loop. Traditionally, automotive research focused exclusively on improving the vehicle – better engines, drivetrains etc. Until recently, nobody paid much attention to the human being (with a brain, skeletal system, muscles, motor functions), who needs to process information through his sensory organs, draw the right conclusions and take actions.

“Now, these aspects are getting more attention, especially in relation to reduced capacity, whether due to a distracting device, drugs, alcohol or neurodegeneration. As you get older your response time becomes longer, your eyesight and hearing abilities reduce, as does the speed at which you can process information.

“These are the challenges that researchers in automotive are looking at concerning the role of the driver, now and in the future. If the automated or semi-automated system wants to give control back to the driver because its AI algorithms decide a situation is too complex, can the driver safely take over when he’s been doing something else, such as reading or taking a nap? How many milliseconds does the brain need to be alert again?

NK: “Draft legislation seems to be proceeding on a 10-second rule, but some studies say at least 30 seconds is required.”

LN: “Situational awareness – that’s a key word in this business. Not only where am I geographically, but in what situation. Oh, I’m in a situation where the road surface is very wet, there’s a vehicle just in front of me, the exit I need is near and I’m in the wrong lane. Understanding a situation like that takes time.

“If we take a helicopter view, from our perspective as a technology company, what should be measured to understand the driver behaviour? Which sensors should we use to pick up that information? If we use a microphone, a video camera, a heartbeat monitor and a link to the ECU, how do we synchronise that?

“That’s not trivial because one sensor may be sampling at 300Hz while another delivers 25 frames per second. That’s something my company has specialised in over the years. We’re very good at merging data from different sources, whether it’s a driving simulator continuously spitting out data, a real car, or sensors mounted in the infrastructure.
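
To make the synchronisation challenge concrete, here is a minimal sketch of the kind of time alignment such a platform performs, assuming each stream carries its own timestamps. The function name and rates are illustrative, not DriveLab’s actual API.

```python
import numpy as np

def resample_to_common_timebase(t_a, x_a, t_b, x_b, rate_hz=100.0):
    """Align two sensor streams (e.g. a 300Hz physiology channel and a
    25fps video-derived channel) onto one shared clock by interpolation."""
    t0 = max(t_a[0], t_b[0])              # use the overlapping window only
    t1 = min(t_a[-1], t_b[-1])
    t = np.arange(t0, t1, 1.0 / rate_hz)  # common timebase
    return t, np.interp(t, t_a, x_a), np.interp(t, t_b, x_b)

# 300Hz physiology stream and 25Hz video stream over the same 10 seconds
t_ecg = np.arange(0, 10, 1 / 300); ecg = np.sin(2 * np.pi * 1.2 * t_ecg)
t_cam = np.arange(0, 10, 1 / 25);  cam = np.cos(2 * np.pi * 0.3 * t_cam)
t, ecg_rs, cam_rs = resample_to_common_timebase(t_ecg, ecg, t_cam, cam)
```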

“You then need to analyse that data and pull out meaningful quantitative units that give you actionable insights. Generating large matrices is no big deal, making sense of that information is the real challenge.

“For example, in dashboard design, a manufacturer might be comparing two or three displays of road quality. A driver behaviour study with our tools will give the designer a clear answer on which design leads to the least cognitive workload, the least confusion.

Noldus DriveLab

“This same technical challenge can be applied to a vast number of design objectives. The vehicle manufacturer might be looking to make incremental improvements to, say, the readability of the dashboard under certain light conditions. Or they might be working on a completely new feature, like an intelligent personal in-car assistant. A number of brands are working on that, but the concept is still relatively new.

“You cannot test every scenario on the road, it’s just too dangerous, so we work with simulator manufacturers too. On the road or in the lab, we can measure a driver’s actions with eye-tracker, audio, video, face-reader and physiology in one.”

NK: “Back to LinkedIn again, I saw a post by Perry McCarthy, the F1 driver and original Stig on Top Gear, who said something like: Simulators are getting so good these days, when you make a mistake they drop three tonnes of bricks on your legs!”

LN: “You have so-called high fidelity and low fidelity simulators – the higher the fidelity, the closer you get to the real vehicle behaviour on the road, and there are all sorts of metrics to benchmark responsiveness.

“You have simple fixed-base simulators right up to motion-based simulators which can rotate, pitch and roll, move forward, backwards, sideways and up and down. For the best ones you’re talking about 10 million euros.

“We work with OEMs, tier-one suppliers, research institutes and simulator manufacturers to build in our DriveLab software platform. We also advise on which sensors are recommended depending on what aspects of driver behaviour they want to study.

“We try to capture all the driver-vehicle interactions, so if he pushes a pedal, changes gear or turns the steering wheel, that’s all recorded and fed into the data stream. We can also record their body motion, facial expression, what they’re saying and how they’re saying it – it all tells us something about their mental state.

Multi-camera eye tracker (Smart Eye)

“Eye tracking measures the point of gaze – what your pupils are focused on. In a vehicle, that might be the left, right and rear mirrors, through the windscreen or windows, around the interior, even looking back over your shoulders. To capture all that you need multiple eye-tracking cameras. If you just want to look at, for example, how the driver perceives distance to the car in front, you can make do with just two cameras rather than six.

“Eye tracking generates all sorts of data. How long the eyes have been looking at something is called dwell time. Then there’s what direction the eyes are looking in and how fast the eyes move from one fixed position to another – that’s the saccade. People doing eye tracking research measure saccades in milliseconds.

“Another important metric is pupil diameter. If the light intensity goes up, the pupil diameter decreases. Given a stable light condition, the diameter of your pupil says something about the cognitive load on your brain – the harder you have to think, the wider your pupils will open. If you’re tired, your blink rate will go up. In a fully awake person there’s a natural blink rate that refreshes the fluid on your eyes, but if you’re falling asleep the blink rate changes. It’s a very useful instrument.
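
As a rough illustration of how one of these metrics falls out of gaze data, the sketch below accumulates dwell time per area of interest (AOI) from labelled samples. The data format is an assumption made for the example, not any particular eye tracker’s output.

```python
from collections import defaultdict

def dwell_times(samples, dt=1 / 60):
    """Sum gaze time per area of interest from a stream of
    (timestamp, aoi_label) samples taken at a fixed rate of dt seconds."""
    totals = defaultdict(float)
    for _, aoi in samples:
        totals[aoi] += dt
    return dict(totals)

# Synthetic 10-second gaze stream at 60Hz: mostly road, glances at a mirror
gaze = [(i / 60, "road" if i % 10 < 8 else "mirror_left") for i in range(600)]
print(dwell_times(gaze))  # {'road': 8.0, 'mirror_left': 2.0} (seconds)
```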

“Then there are body-worn sensors that measure physiology. It’s harder to do in-car, but in a lab people don’t mind wearing electromyography (EMG) sensors to measure muscle tension. If you’re a designer and you want to know how easy it is for an 80-year-old lady to operate a gearshift, you need to know how much muscle power she has to exert.

“We also measure the pulse rate with a technique called photoplethysmography (PPG), like in a sports watch. From the PPG signal you can derive the heart rate (HR). However, a more accurate method is an electrocardiogram (ECG), which is based on the electrical activity of the heart.
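For readers curious how a heart rate is derived from a PPG waveform, here is a minimal peak-detection sketch, assuming SciPy is available; the thresholds and test signal are illustrative, not a production algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_ppg(ppg, fs):
    """Estimate heart rate (bpm) from a PPG waveform: each detected peak
    is one pulse, so HR = 60 / mean inter-beat interval in seconds."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # caps HR near 150 bpm
    ibi = np.diff(peaks) / fs                           # seconds between beats
    return 60.0 / ibi.mean()

fs = 100
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # ~72 bpm
print(round(heart_rate_from_ppg(ppg, fs)))  # ≈ 72
```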


GSR (EDA) measurement

“Further still, we measure galvanic skin response (GSR), also called electrodermal activity (EDA), the level of sweating of your skin. The more nervous you get, the more you sweat. If you’re a bit late braking approaching a traffic jam, your GSR level will jump up. A few body parts are really good for capturing GSR – the wrist, palm, fingers, and the foot.

“We also measure oxygen saturation in the blood with near infrared spectroscopy (NIRS) and brain activity with an electroencephalogram (EEG). Both EEG and NIRS show which brain region is activated.

“Another incredibly useful technique is face reading. Simply by pointing a video camera at someone’s face we can plot 500 points – the contours of the eyebrows, the eyelids, the nose, chin, mouth and lips. We feed this into a neural network model and classify it against a database of tens of thousands of annotated images, allowing us to identify basic emotions – happy, sad, angry, surprised, disgusted, scared or neutral. You can capture that from one photograph. For other states, like boredom or confusion, you need a series of images.
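
The classification step can be pictured with a toy stand-in: flatten the landmark coordinates and compare them against per-emotion references derived from an annotated database. A real face reader such as the one described uses a trained neural network, so this nearest-centroid sketch is purely illustrative.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "scared", "neutral"]

def classify_emotion(landmarks, centroids):
    """Toy stand-in for a face-reading model: flatten ~500 (x, y) facial
    landmarks and pick the nearest per-emotion reference vector."""
    v = np.asarray(landmarks, float).ravel()
    dists = {emo: np.linalg.norm(v - ref) for emo, ref in centroids.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
centroids = {e: rng.normal(size=1000) for e in EMOTIONS}  # 500 points x 2 coords
face = rng.normal(size=(500, 2))                          # dummy landmark set
print(classify_emotion(face, centroids))
```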

“These days we can even capture the heart rate just by looking at the face – tiny changes in colour resulting from the pulsation of the blood vessels in the skin. This field of face reading is evolving every year and I dare to claim that we are leading the pack with our tool.

“Doing this in the lab is one thing, doing it in a real car is another challenge, being able to keep your focus on the driver’s face and deal with variable backgrounds. Of course, cars also drive at night so the next question is can you do all this in darkness? We turned our company van into an instrumented vehicle and my sons agreed to be the guinea pigs.

“It took some work – overcoming the issue of light striking the face and causing sharp shadows, for instance – but we can now use infrared illuminators with our FaceReader software to make these measurements in full darkness.

“The turning of the head is also very important in studying distraction, for example, if the driver looks sideways for too long, or nods their head in sleepiness. When something shocks someone, we see the face change and the blood pressure rise, and these readings are synchronised in DriveLab.

“It is well proven that even things like changing radio station can be very distracting. Taking your eyes off the road for just a few seconds is dangerous. As we move to more and more connected devices, touchscreens and voice commands, minimising distraction is vital to ensure safety.”

NK: “I absolutely love this tech but what I actually drive is a 7-year-old Suzuki Swift Sport with a petrol engine and a manual gearbox, and I quite like it that way.”

LN: “I’m doing research on cars of the future with my software but I am personally driving a 30-year-old soft-top Saab 900. That’s my ultimate relaxation, getting away from high tech for a moment.

“At Noldus, we’re constantly pushing the boundaries of research, working with top level organisations in automotive – Bosch, Cat, Daimler, Fiat, Honda, Isuzu, Land Rover, Mazda, Nissan, Scania, Skoda, Toyota, Valeo and Volvo, to name just a few – and also with the Netherlands Aerospace Centre (NLR) and the Maritime Research Institute Netherlands (MARIN).

“Our aim is to make it so that the client doesn’t have to worry about things like hardware-to-software connections – we do that for them so they can focus on their research or design challenge.”

For further info see noldus.com

Bill McKinley of Keysight Technologies explains how C-V2X and DSRC enable higher levels of self-driving

Keysight at forefront of self-driving safety standards and certification

Ahead of a flagship product launch later this week, Bill McKinley, Automotive Strategic Planner at Keysight Technologies, gives his thoughts on self-driving and the fast-changing connected and autonomous vehicle (CAV) landscape.

Avid readers may remember that Bill was on the panel I hosted at the Small Cells World Summit in May. He’s got 30+ years’ experience in wireless communications and his current focus is developing test solutions for the automotive sector.

BM: “The UK, in line with other nations around the world, is investing heavily in connectivity and electrification – both the vehicles themselves and the charging infrastructure. Connected vehicles have been demonstrated to enhance safety via cellular vehicle to everything (C-V2X) and dedicated short-range communication (DSRC).

“These technologies allow for more efficient driving, for example, by routing to avoid accidents or poor road conditions. They also enable higher levels of automation, all of which can lead to an improved overall driving experience.

“It is likely that the first fully automated vehicles will be delivery vehicles, controlled environment shuttle type services, and buses on specific routes. With the gradual introduction of robotaxis, we will no doubt start to see Mobility as a Service (MaaS) become more common over the next 10-15 years.

“From a Keysight perspective, we play a significant role at the very leading edge of connected and automated mobility. We participate in various global organisations developing the standards, test procedures and certification for the industry, including the 5G Automotive Association (5GAA), the Car 2 Car Communication Consortium (C2C CC), the China Academy of Information and Communications Technology (CAICT), the OmniAir Consortium and the Society of Automotive Engineers (SAE).

“Keysight was the first test and measurement company to be awarded Global Certification Forum (GCF) validation for C-V2X RF conformance. We have industry leading validated test cases for the C-V2X protocol conformance test, and we were the first to be awarded OmniAir Qualified Test Equipment (OQTE) status. 

“Cybersecurity will play a critical role in connected mobility and Keysight is working with leading companies and organisations in this space to develop solutions to ensure vehicular communications remain safe and robust against attacks. 

“Clearly, the main risks associated with self-driving vehicles are around the safety aspects, which in turn will heavily influence public acceptance of the technology. We are all very familiar with some of the headlines about Tesla vehicles.  

“It remains incredibly challenging to overcome the complexities of urban automated driving, but things are improving all the time. Our autonomous driving emulator (ADE) system is designed with this in mind – to test many autonomous drive systems in a rich and authentic environment within the lab, before moving testing out into the field.”

More on that to follow soon. For further info see keysight.com

Navtech Radar puts figures on the benefits of port automation including reduced operating expenses and labour costs

Navtech builds the business case for automation

Regular readers will recognise the name Navtech Radar from our recent update on Oxbotica. In May, the two Oxfordshire-based companies joined forces to launch Terran360, promoted as the world’s first all-weather radar localisation solution for industrial autonomous vehicles.

While self-driving cars await a legislative framework, this ground-breaking technology is already being deployed in off-road settings. Ports are a good example and Madelen Shepherd, Growth Marketing Manager at Navtech, sets out a strong business case.

MS: “Ports are complicated operations and automation can massively improve efficiency, so we’ve been doing some financial analysis on the quantification of value. The benefits fall into three main areas: 1) reduced operating expenses; 2) reduced labour requirements; and 3) productivity increases.”

According to Navtech’s research, benefits resulting from port automation include a 31% reduction in operating expenses, a 40% reduction in labour costs and a 21% increase in productivity.

Automation at ports delivers significant cost savings

MS: “This kind of financial modelling is important for Navtech to demonstrate that our products are viable, but it also provides a compelling argument for automation in general.

“The findings are based on averages from multiple quotes, although there was quite a large range on the reduction in operating expenses, from around 25% up to 50%.

“Currently, only 3% of the world’s ports are automated, but the rate of growth is now exponential. Key drivers for this include the rise of megaships and increasing next day deliveries.

“About 80% of the world’s goods go through ports. There’s already time pressure on everything and the increasing global population equals ever increasing demand.  

“New ports are a massive investment. For example, the first phase of the Tuas project in Singapore, which will create the world’s largest container terminal, is nearly complete and has already cost $1.76bn. There are three more phases to come.

“Of course, any cost benefit analysis must also include risks. If you’re retrofitting an existing port, how much is installation going to disrupt operations? What about the social impact of job losses or a shift in employment profile? Are the new jobs higher paid or more secure? How much time and money would an infrastructure-free solution save in operational downtime during installation compared to an infrastructure dependent solution?

“Automation has created so-called ghost ports, which are largely human-free, so there are clear safety benefits. And with automation you get remote operation, so maybe one person can now operate two straddle carriers.

“Also, operating bulky vehicles like terminal tractors can require an additional member of staff to supervise the movement. By using technological solutions – installing sensors which act beyond human capabilities – that’s no longer necessary.

“Terran360, an infrastructure-free localisation solution, delivers a detailed 360-degree map made up of around 400 slices and uploads this to a cloud-based server. The vehicle drives down a route continually scanning all these different landmarks.
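
As a purely illustrative sketch of how radar map matching can work (this is not Navtech’s actual algorithm), localisation can be pictured as comparing a live 360-degree range scan against stored keyframe scans and returning the best-matching map position:

```python
import numpy as np

def localise(scan, keyframes):
    """Toy radar localisation: compare a live 400-bin, 360-degree range
    scan against stored keyframe scans and return the best-matching map
    position. Real scan matching also estimates the rotation offset."""
    best, best_err = None, np.inf
    for pos, ref in keyframes.items():
        err = np.mean((scan - ref) ** 2)
        if err < best_err:
            best, best_err = pos, err
    return best

rng = np.random.default_rng(1)
keyframes = {(x, 0.0): rng.uniform(5, 100, 400) for x in range(0, 50, 5)}
live = keyframes[(20, 0.0)] + rng.normal(0, 0.5, 400)  # noisy rescan
print(localise(live, keyframes))  # -> (20, 0.0)
```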

“We’re always looking for new partners in the shipping world and other industrial settings. This kind of radar is perfect for self-driving cars too, so that’s another exciting growth area.”

Dr Basu issues stark warning on need to earn public trust in self-driving technology.

UK lawyer claims dangerous lack of evidence on safe driverless car to driver handover

Dr Subhajit Basu, of The University of Leeds’ School of Law, is a lawyer with impeccable credentials and a strong sense of public duty… and he’s got serious concerns about “handover” – the moment when a self-driving vehicle transfers control back to a human driver.

An editor at The International Review of Law, a Fellow of the Royal Society of Arts (RSA), and Chair of The British and Irish Law Education Technology Association (BILETA), he recently supervised research into “Legal issues in automated vehicles: critically considering the potential role of consent and interactive digital interfaces”.

The report, first published in the prestigious Nature journal, concluded that: “An urgent investigation is needed into the technology that allows self-driving cars to communicate with their operators”. Why? Because the “digital interfaces may be unable to adequately communicate safety and legal information, which could result in accidents”.

That is a stark warning indeed and Dr Basu believes the Government and the automotive industry need to be much more up-front about the issues.

Legal issues in automated vehicles report

SB: “The main safety messages surround the extreme difficulty most drivers will encounter when an autonomous vehicle suddenly transfers the driving back to them. Even if a driver responds quickly, they may not regain enough situational awareness to avoid an accident.

“The general public is not aware of their vulnerability, and it is doubtful that an interface in an automated vehicle will communicate this point with sufficient clarity.

“The article in Nature was part of a multidisciplinary international project, PAsCAL, funded by the EU’s Horizon 2020, into public acceptance of connected and autonomous vehicles (CAVs).

“My expertise is in the regulation of emerging technologies. I’m one of those people who sees autonomous vehicles not as a disruptive force, but as something which can improve human life. However, in order to do that, we have to put public safety, public service and public trust before profit. I always emphasise that transparency is paramount, but the autonomous vehicle industry can be extremely secretive.

“The overall goal of PAsCAL was to create a guide to autonomy – a set of guidelines and recommendations to deliver a smooth transition to this new system of transport, particularly with regards to the behaviour of the driver, or user in charge, of an autonomous vehicle. 

“You have to recognise that an Automated Lane Keeping System (ALKS) is basically an evolution of the lane departure warning systems that lots of cars already have, but in general self-driving cars are not an evolution but a revolution – they will change our way of life.

“We want to understand not just how the technology works, but also how people see it. The aim is to capture the public’s acceptance and attitudes – not just the users in charge, but pedestrians and other road users too – and to take their concerns into consideration.

“With any new technology there has to be a proper risk assessment. Take the example of smart motorways – it’s a brilliant idea in theory and it works in other countries, but there has been a lack of understanding in the UK. We didn’t create enough stopping places and the cameras weren’t good enough to monitor all the cars in real time. You need an artificial-intelligence-driven system which can identify a slowing car in a fraction of a second.

“Similarly with autonomous vehicles, if you want to deliver something like this you should have the right technologies in place. In this case, that means the human machine interface. The vehicle manufacturers (VMs) will basically give responsibility to the driver, the user in charge, saying “when you are warned, you should take over, okay?”.

“In our report, we argue that there will not be enough time for an individual to understand the legal complexities, what they are accepting liability for. The communication of that risk will not be easy for the user in charge to understand. Honestly, how many people have read the terms and conditions of Facebook?

“In autonomous vehicles, the human machine interface will communicate very important safety information and legally binding information, with civil or criminal implications if the driver fails to adequately respond.

“If you look at the proposed change to the Highway Code, it assumes that the driver will be able to take back control when prompted by the vehicle. We are concerned that even the most astute and conscientious driver may not be able to take back control in time. The effectiveness of the human machine interface is one limiting factor and then there is the driver – every driver has different cognitive abilities and different skill levels.

“Human beings are all different, they react differently to different circumstances, so defining the right timeframe for a handover is a difficult balance to strike. Are you going to assess people on their cognitive abilities, on the speed of their reflexes?

“In some circumstances, I have doubts about whether it is fair to have a handover even within 30 to 40 seconds. Certainly, there is nothing I have found where scientifically they have viewed 10 seconds as an adequate time. Cognitively, a blanket 10 seconds simply may not be possible – that’s my major concern.

“This is something we have been talking about for quite some time now. The UK government seems to be very much in favour of pushing ahead with this technology quickly, because it fits with the “Build Back Better” tagline. There is a huge risk that we are disregarding safety in the name of innovation.

“I think the automotive industry has a responsibility here. When you are travelling in a self-driving car, the manufacturer is responsible for your safety, for ensuring that the technology is up to standard.

“The industry also has a responsibility to ensure that drivers are adequately trained, adequately educated. The argument that accidents happen and can be used for development is vulgar. Go and tell that to the person who has lost a relative – that this is a learning process.

“I am not against autonomous vehicles. What I am saying is that we need evidence-based conclusions. We need to be sure that the reaction time is well-founded and supported, so we don’t create a system which will fail.

“Personally, I propose that we should first create a comprehensive legal framework which should mandate additional driver training for the safe use of self-driving systems. The automotive industry could take a lead on this, actively push for it.

“At the end of the day, this is about road safety, it is about saving human lives. I believe that autonomous vehicles can reduce congestion, can be good for the climate, but they also have the potential to become deathtraps because we are getting over-reliant on the technology working perfectly and on human ability, without the evidence-based research to find out whether we can react within the stipulated time.

“As a lawyer, it is my responsibility to uphold public safety, to highlight the risks. If the government and the automotive industry don’t face these issues, then people will lose trust in this amazing technology.”

For more, you can read the full Nature article here.

The warning lights are flashing on draft guidance to drivers in driverless cars.

The AV safety case: UK motor law expert expresses “real doubts” about proposed Highway Code changes

Barrister Alex Glassbrook specialises in road transport and has written two books on UK autonomous vehicle (AV) law. An expert in the law of advanced, automated and electric vehicles, serious personal injury, motor insurance and high-value vehicle damages cases, he begins by highlighting three recent developments:

  1. The Automated and Electric Vehicles Act 2018 coming into force on 21 April 2021;
  2. The government announcement on 28 April that it isn’t yet publishing a list of AVs under Section 1 of the Act, but that it does expect to list vehicles equipped with Automated Lane Keeping Systems (ALKS) as “automated”; and
  3. Also on 28 April, the publication of proposed amendments to the Highway Code related to AVs.
Barrister Alex Glassbrook specialises in the law of connected and autonomous vehicles.

AG: “My work overwhelmingly involves car accidents as the source of serious injury, so the AEV Act coming into force was an historic moment. Immediately though, it was clear there was something missing: the list of automated vehicles under Section 1 of the Act, which the Secretary of State is required to publish as soon as it is prepared. There was a presumption that the Act and the list would come together, but they didn’t. We have the Act but no list. In traffic light terms, we’ve gone past amber but there’s no green. What’s going on?

“That question was answered a week later with the Centre for Connected and Autonomous Vehicles’ publication of its paper for the Department for Transport on whether vehicles equipped with ALKS would be listed as automated. In summary, it said the list is not yet being published because we’re waiting to find out if these vehicles will get Whole Vehicle Type Approval from the Vehicle Certification Agency (VCA). If that happens, then the Secretary of State does expect to list them as automated under the AEV Act.

“This has huge implications for liability because it brings into effect a new line of motor insurance. Currently, under the Road Traffic Act, the motor insurer is effectively the body that will satisfy any judgment against a liable driver, or indeed can be sued directly under the direct rights against insurers regulations.

“The new AEV Act does something very different, something particular to AVs: it makes the insurer of the vehicle directly liable. This brings two important changes. One is the direct liability, which is slightly different from the direct rights regs. Second, it attaches to the vehicle rather than the driver, which is quite a radical step.

“There are obviously practical considerations behind this. Would publishing the list before the vehicles get Type Approval be putting the cart before the horse? Even so, it’s a little bit curious because the Act has already come into effect. Moreover, it’s not yet certain that ALKS-equipped vehicles will be classed as automated. The Secretary of State could change his mind.

“Running alongside this, we have the proposed amendments to the Highway Code. They’re quite eye-catching. The current Highway Code reiterates the orthodoxy, that the driver must at all times be in control of the vehicle and must understand the manufacturer’s instructions. The new proposed version is currently out for consultation, but the consultation period is very short, with a deadline of 28 May.

Proposed amendments to the Highway Code published on 28 April 2021.

“The key section reads: “On the basis of responses to the call for evidence, and the step-change that the expected introduction of the first legally recognised automated vehicles represents, we have decided to make a more ambitious amendment to The Highway Code, coinciding with the code’s 90th year anniversary.” To me, the fact it is 90 years since the Highway Code was first published in 1931 is neither here nor there. What is notable is the reference to “more ambitious”, because that implies there was an earlier draft.

“The next sentence has the wow factor. It says: “Automated vehicles no longer require the driver to pay attention to the vehicle or the road when in automated mode, except to resume control in response to a transition demand in a timely manner.” The implications of those words are immense.

“The document continues: “Automated vehicles are vehicles that are listed by the Secretary of State for Transport. While an automated vehicle is driving itself, you are not responsible for how it drives, and you do not need to pay attention to the road.”

“Well, we don’t have that list yet, and what follows is really quite striking. It proposes an instruction in the Highway Code, the official guidance to drivers, to do nothing – to pay no attention to how the vehicle is driving or what’s happening on the road. It positively advises drivers to switch off their attention.

“The next paragraph sets some parameters: “If the vehicle is designed to require you to resume driving after being prompted to, while the vehicle is driving itself, you MUST remain in a position to be able to take control. For example, you should not move out of the driving seat.”

“So, you shouldn’t get out of the driving seat – that’s quite a low standard. This appears to be saying it’s fine to watch a movie, it’s fine to go on Instagram, it’s fine to read and respond to business emails. All these tasks absorb concentration entirely and take time to disengage from.

“I’ve done a lot of trials in which I’ve asked witnesses about their appreciation of time during a crash and heard expert evidence about what can happen within a short window of time. Particularly when you’ve got three or four lanes of motorway, multiple vehicles, an awful lot can happen in 10 seconds.

“There are two fairly well-known exceptions to driver control recognised in the law. One is a medical emergency, if a driver is suddenly incapacitated. The other is moments of peril, sometimes known as agony of the moment – when it is such a difficult situation that a driver causing injury by their evasive manoeuvre is not to be judged by the usual demanding standard.

“So, the common law has formed exceptions to liability, but in this case it’s more complex. First, it introduces, for want of a better phrase, artificial intelligence (AI) into the picture. Adjudicating the actions of AI is still a very undeveloped area of law. Second, it brings into the picture something that has been manufactured, namely a computer and sensor system within a moving vehicle. Again, the laws of product liability are at a very early stage of development in relation to AI and new technologies. A notorious example of that is over-the-air (OTA) software, which is not clearly classed as goods in law.

“From a legal perspective, it is vitally important not to have guidance which leaves open very obvious questions. Unfortunately, these proposed changes to the Highway Code do just that. On the one hand, the Highway Code might say it’s perfectly fine to completely distract yourself from driving. But on the other hand, it’s not okay to do things like climbing out of the driving seat. That leaves open a very broad set of situations and the courts are going to find themselves dealing with some very difficult problems.

“Of course, what the court has to deal with is very much secondary, the primary question must be: what is safe? There are plenty of lessons in the history of motor vehicles when innovation has overreached. Famously, Ralph Nader’s book, Unsafe At Any Speed (published in the USA in 1965), highlighted rear suspension which lost traction when going round corners. That changed product liability law across all sectors.

“I’m not a road traffic engineer but, as an observer of many road traffic accident cases over many years, I have real doubts as to the safety of this guidance.”

Alex’s new book, Advanced, Automated and Electric Vehicle Law is coming soon.

“We will be running autonomous buses this year. That’s an incredible milestone.”

The future is here, 2021: CAVForth buses will put UK on the driverless map

Our Zenzic CAM Creator series continues with Jim Hutchinson, CEO of Fusion Processing.

As a partner in the ambitious CAVForth project, predicted by the Scottish Mail on Sunday to make Edinburgh “the most ‘driverless’ city in the world”, Fusion Processing is delivering on its promise to design and build world leading systems for the automation of vehicles. Here, CEO Jim Hutchinson talks ADAS, cyclist detection and autonomous vehicle safety, explaining how CAVForth is set to make a major mark on the global self-driving map.

Jim Hutchinson, CEO of Fusion Processing

Please can you outline Fusion Processing’s work on connected and automated mobility?

JH: “We’ve been going since 2012. We set up to develop automated vehicle systems with the ultimate goal of being fully autonomous – able to do anything that a human-driven vehicle can. We knew from the start there were a lot of steps along the way and, for essentially a commercial company, we needed to have products along those steps rather than trying for a ‘Level 5 or nothing’ approach.

“We developed the CAVstar platform as a scalable solution – a drive system we could put into pretty much any vehicle, from small cars up to HGVs. Along the way we’ve been involved in some great schemes like the Venturer project, one of the original three UK AV projects.

“Then, more or less in parallel with that, we were involved in the GATEway project in London. We provided the autonomous drive system for the pods that drove along the Thames path. That was a big trial with random members of the public – some who came along specifically to experience it, and many others who just wanted to get from the O2 to the other end of the route. The pods encountered various other people on the route – the vehicles had to be mindful of dog walkers and cyclists. The feedback was by and large very positive, and it was a good proof point for us of how our system can be used off-highway.

“It also led to other things, notably our partnerships with Stagecoach and Alexander Dennis. First off, we were exploring using autonomy in bus depots. Every night a lot of operations have to happen involving the movement of vehicles – they have to be fuelled, washed, made ready for the morning, so we put together a system which could automate that. The concept was based on a fleet manager directing all this from a control tower once the bus arrives back at the depot.

“The system proved very successful, demonstrating operating efficiency and improved safety for those working in the depot, so that led to CAVForth – an autonomous bus service. Again, we’re working with Stagecoach and Alexander Dennis, joined by Transport Scotland, Bristol Robotics Laboratory and Napier University.

Fusion Processing’s CAVstar

“The intent is to put into service a number of Level 4 autonomous buses between the Fife Park & Ride and the Hermiston Gait Interchange. It’s a commuter route so we’re expecting a large number of daily commuters who want to travel to the Hermiston Gait Interchange, where they can transfer on to trams for the city centre, the airport or the rail network. We expect tourists will want to use it too to reach the Forth Road Bridge, a UNESCO heritage site.

“It’s a useful service, running every day of the week, and the hope is that it will go from a pilot service to a full service. It’s being registered as a new route, providing a service that wasn’t previously there, and Stagecoach anticipate around 10,000 journeys a week.

“The route includes a mix of road environments – motorway, bus lanes, roundabouts, signalled interchanges – so from our point of view it makes for a great demonstration of capability. There’s the technology side, which Fusion is focussed on, but there’s also key research around public acceptance and uptake. That’s really exciting too.

“The launch date isn’t set in stone due to Covid uncertainties, and the point at which they start taking passengers is still to be determined, but we will be running autonomous buses this year. That’s an incredible milestone, absolutely huge. It will be a very significant achievement to demonstrate a Level 4 capability on that class of vehicle – a big thing for the UK which will be noticed around the world.

“There are one or two other groups working on similar projects, but I haven’t seen anything with this level of ambition, this level of complexity, or length of route. It’ll obviously be fantastic for us and our CAVForth partners, but also for the UK autonomous vehicle industry as a whole. It will really put us on the worldwide map.”

Please can you outline Fusion Processing’s work on driver assistance?

JH: “CycleEye is an important product for us. We identified a need for collision avoidance technology. There are lots of collisions with cyclists and quite often they occur because the bus driver doesn’t know the cyclist is there. CycleEye is like a subsystem of CAVstar in a lot of ways – one of those steps to get some proof points on bits of the technology. It recognises and classifies different types of vehicle, and the driver gets an alert when there’s a cyclist in the danger zone. It is currently being used in a few cities around the UK, including on the Bristol Metrobus. It’s a good system. Whenever it has been evaluated against other cyclist detection systems it has always come out on top.

“We’re particularly excited about the next incarnation of CycleEye, evolving it to become a camera mirror system. It’s legal now to use cameras instead of mirrors, so we can provide that functionality too – monitors in the driver’s cab instead of mirrors. That has several benefits. Mirrors, on buses particularly, can be a bit of a liability – they quite often get knocked and sometimes they knock people. They stick out and head strikes are unfortunately quite common. They also get smashed, putting the bus out of service, which is an inconvenience and an operational cost. We think that being able to offer this camera mirror with CycleEye functionality is going to prove attractive to a lot of operators.”

Van with Fusion Processing technology

Over what timescale do you expect Level 4 and 5 autonomy to be achieved in the UK and which sectors will be early adopters?

JH: “With CAVForth we’ll be running Level 4 autonomous vehicles, where you’ve got a restricted operational design domain (ODD), in the UK this year. Restricting these vehicles to particular routes or environments lends itself very well to public service, where the vehicles are maintained by an operator. That’s very achievable right now. As well as passenger service vehicles, other service vehicle fleets are easy wins, as well as off-highway stuff like industrial sites. Then you’ve got delivery vehicles.

“When it comes to true Level 5 – go anywhere, do anything vehicles – repair and maintenance is an issue. We know that with privately owned cars, some people maintain them exactly as they should, and other people don’t. There are other complications too – things that people perhaps don’t do that often but like their vehicles to be able to do, like parking in a farmer’s field at a festival – that’s a little bit further out still.

“If you just roll back slightly from true Level 5, if people want a city car or a comfortable car for a long motorway journey, nothing off-road, there’s a case for vehicles which have an autonomous mode. That certainly appeals to me.”

Can you address the concerns about ADAS, particularly handover of control, driver concentration levels and driver deskilling?

JH: “I’m not a big fan of Level 3. If you haven’t been driving for an hour to suddenly be asked to take the wheel because the car has encountered something it can’t handle, it’s just unrealistic. Whereas a Level 4 system, which can put itself into a safe state when it reaches the limits of its ODD – perhaps ready to be restarted in a manual mode when the driver wants to take control – that’s much more practical.

“If there are circumstances when the driver needs to take over then clearly the driver needs to be of a standard that they can drive safely. Once you have widespread adoption of autonomous systems, and people are not driving routinely, there is a risk of driver deskilling. If that were the case you’d really need to look at greater regulation of drivers.

“That said, you can sometimes envisage problems that don’t really transpire. We’ve had cruise control and adaptive cruise control for a while now and I don’t think they’ve had the effect of particularly deskilling drivers. So, with Lane Keep, maybe it’s not such a big deal. Once you get to the point where cars are properly self-driving, there is a danger. If you haven’t got anything to do your mind will wander, that’s human nature, so it is a concern.”

For further info, visit fusionproc.com

AI and IoT expert Karim Jaser presents a resolute defence of the trolley problem.

Ethics in self-driving: The trolley problem strikes back

The trolley problem – the question of who to save, or kill, in no-win crash situations – continues to divide opinion like no other subject in the driverless world.

I must admit to flip-flopping on it myself. From being quite taken with it in 2018’s Autonomous now: the shift to self-driving, through 2019’s The driverless dilemma: touchstone or red herring?, to last month’s Self-driving experts across the world agree: the trolley problem is a nonsense.

That, I thought, was conclusion reached, the end of the matter. Far from it! In response to the latter article, Karim Jaser, Senior Product Manager specialising in artificial intelligence (AI) and internet of things (IoT) for blue chip companies, posted a resolute defence of the much-maligned thought experiment on our LinkedIn page.

“I do agree that humans don’t go through the trolley problem evaluation in a split-second decision, however I also think not all experts agree it is a nonsense,” he said. “It is for society as a whole to discuss these ethical problems. From the point of view of self-driving technology, this can be solved in many ways, with probability theory and estimations on minimal loss, but it is not up to developers or self-driving experts alone to decide how to tackle the point. It needs the involvement of regulators, governments and the industry.”

Karim Jaser, Senior Product Manager specialising in AI and IoT.

Well, with our mission to encourage debate about all aspects of autonomous vehicles, how could we resist? We asked Karim if he’d be up for an interview. He kindly agreed and here we present his thoughtful and cohesive opinion.

KJ: “I was always fascinated by robot intelligence, so at university I studied telecommunication engineering. There were lots of exams on probability theory, system control, software engineering. I was also involved in coding in my spare time, and later did it as a job.

“Self-driving is a control problem first and foremost. There are elements of robotics, including perception, state estimation and trajectory planning, but also software, hardware and AI working together.

“The interest grew stronger when I started studying machine learning and AI about four years ago. When I was at university in the 90s, AI was not really a popular subject. It was a topic I picked up later in my career. As a senior product manager at a high technology company, AI is everywhere now – it’s an essential part of the skills necessary to perform and innovate, from biometric scans and image recognition to automated travel.

“AI has a lot of potential to have a beneficial impact on society – fewer accidents, better mobility, less pollution, more autonomy for people with disabilities – but it doesn’t come without challenges, for example, cyber threats, and also ethical and regulatory issues, which is why I got involved with the trolley problem.”

“It’s not straightforward. If we take a step back, we need to understand how self-driving cars take decisions. They’re using supervised learning, reinforcement learning, convolutional neural networks (CNN) and recurrent neural networks (RNN), and deep learning for computer vision and prediction. Specifically, reinforcement and inverse reinforcement learning are very tightly linked to the way driverless vehicles behave through means of policies.

“Policies are related to the distribution of probabilities, but the trolley problem is an ethical choice, so I understand why a lot of people in the industry dismiss it. It’s not the way autonomous vehicles take decisions, going through philosophical considerations in a split second, so it might seem irrelevant, right? Like the Turing Test and Asimov’s Robot Rules, the trolley problem can be perceived as a distraction from more practical considerations.

“It can be distracting for two reasons: first, these considerations are corner cases – there are other priorities, more likely scenarios still to be addressed; second, autonomous vehicles will not be given ethical guidelines to link with probabilities.

“With regards to the first objection, as Patrick Lin (director of the Ethics and Emerging Sciences Group at California Polytechnic State University) has pointed out, it shouldn’t matter if these scenarios are impossible, because the job of these thought experiments is to force us to think more carefully about ethical priorities, not to simulate the realities.

“The second objection is related to self-driving cars taking decisions through distribution of probabilities. The actions of these vehicles are linked not to hard coding but to statistical contextual information, and that makes each scenario difficult to interpret. You can potentially have millions of mini trolley problems in different contexts.
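
To see what “estimations on minimal loss” can look like in code, here is a toy expected-loss decision with made-up probabilities and costs; real planners work over far richer state spaces, and no ethical weighting is implied by these numbers.

```python
def min_expected_loss(actions):
    """Pick the action with the smallest expected cost, given per-action
    lists of (probability, cost) outcomes. The vehicle weighs risky
    manoeuvres numerically rather than via explicit ethical rules."""
    def expected(outcomes):
        return sum(p * cost for p, cost in outcomes)
    return min(actions, key=lambda a: expected(actions[a]))

# Illustrative numbers only: costs encode outcome severity; probabilities
# would come from the vehicle's perception and prediction stack.
actions = {
    "brake_hard":  [(0.90, 0.0), (0.10, 30.0)],   # small rear-end risk
    "swerve_left": [(0.70, 0.0), (0.30, 80.0)],   # kerb / object risk
}
print(min_expected_loss(actions))  # -> 'brake_hard'
```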

“The trolley problem is a reminder that corner cases and autonomous vehicle behaviours are not a technical irrelevance. This is an issue that belongs to society and should be discussed in the same way as other AI pitfalls like privacy and bias.

“Actually, the trolley problem is more related to the third pitfall of AI, replicability. When trying to understand why and how an autonomous vehicle takes a decision, it is important to note that most autonomous vehicle developers are taking ethical considerations into account.

“In 2017 in America, Apple commended the National Highway Traffic Safety Administration (NHTSA) for including ethical considerations in its Federal Automated Vehicles Policy. It even highlighted three particular areas: 1) the implications of algorithmic decisions for the safety, mobility and legality of automated vehicles and their occupants; 2) the challenge of ensuring privacy and security in the design of automated vehicles; and 3) the impact of automated vehicles on the public good, including their consequences for employment and public spaces.

“The automotive industry has also approached the issue of accidents caused by autonomous vehicles in relation to ethics. For example, Volvo stated in 2015 that it would take responsibility for all Volvo self-driving car accidents. This is an ethical decision, because it did so without regulation forcing it to do so.

“We will see what happens. If there are no ethical decisions by the industry, the regulators will step in. On a fun note, looking to the past, horses were not considered responsible for their actions, the rider was. Whereas in this case, responsibility for the autonomous vehicle will lie not with the owner but with the carmaker.

“So, to conclude, automakers and AV developers are taking ethical and regulatory matters into account, which underlines the importance of these discussions. We cannot just dismiss the trolley problem because it’s not the way an autonomous vehicle decides, or because it distracts from technical development.

“The way to deal with this is to discuss the implications in the right context, being aware of how autonomous vehicles are developed without scaring the public with sensationalist articles. The trolley problem might be perceived as a Terminator-style situation, and that’s where it gets on the nerves of a lot of people that are developing and testing AI. It’s not black and white, it’s a grey area, and that takes us to the path of discussions.

“The trolley problem forces us to consider ethics in vehicle development and confront the fact that ethical principles differ around the world, as documented by the Massachusetts Institute of Technology (MIT) simulation.

“Are we at the point where discussing the trolley problem should be a priority? I believe that would be beneficial to the success of the self-driving industry, guiding us in the thinking process of building the right mix of safeguards and transparency.”

CGA’s simulations train autonomous vehicles to deal with environments specific to the UK.

Self-driving and smart cities: stop wishcasting and get real with predictive simulation

Our Zenzic CAM Creator series continues with Liverpool-based Jon Wetherall, Managing Director of CGA Simulation, and Max Zadow, Director of Future Coders.

By applying gaming knowledge to real-world mobility questions, CGA has created engaging simulations to study autonomous driving and smart city solutions.

JW: “My background is gaming. I used to work for the company that did Wipeout and F1 games. We made a racing game called Space Ribbon and one day, about five years ago, we got a call from The Department for Transport (DfT). They were doing a research project on virtual reality (VR) in the testing and training of drivers, specifically hazard awareness.

“We turned it into a game and it worked – people said their attitudes changed as a result of our simulations. The hardest scenario came early in the game – a parked lorry with a big blind spot – and a lot of people crashed. VR feels so visceral, the experience can be quite vivid and shocking. Of course, smarter cars will hopefully fix these types of situations.”

CGA Simulation junction and forecourt

To pursue this goal, CGA received a grant from Innovate UK to create an artificial learning environment for autonomous driving (ALEAD).

JW: “The aim was to make these cars safer and we stayed true to our computer game history. We didn’t have the resources to lidar scan the whole area, so we did our own thing using mapping data. We made a digital twin of Conwy in north Wales and unlike other simulations we kept all the ‘noise’ in – things like rain. This was important because it is now well-understood that noise is a big challenge for autonomous vehicles (AVs).

“Modern autonomous driving stacks have 20 different subsystems and we generally focus on only one or two, to do with perception. There’s been massive progress in this area over recent years, to the extent that artificial intelligence (AI) can identify an individual by their gait. What’s more, you can now do this on a computer you can put in a car – this is one of the cornerstones of driverless.

“It’s not the first time people have been excited about AI. In the 50s they were saying it was only a few years away. It has taken much longer than people thought, but major problems have now been solved.

“We are lucky to have one of the world’s leading experts in radar on our doorstep, Professor Jason Ralph of The University of Liverpool, and he helped us develop the simulation. You have to feed the car’s brain, a computer, all the information it will need – from sensors, cameras, GNSS – and you can do all that in the software.”
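
A toy version of feeding that “brain” noisy sensor data might look like the sketch below, where rain increases range jitter and causes dropouts; all names and magnitudes are invented for illustration, not CGA’s model.

```python
import numpy as np

def simulate_lidar_in_rain(true_ranges, rain_rate, rng):
    """Toy sensor-noise model for a simulated lidar: rain adds range
    jitter and causes random dropouts (returns reported as no-echo)."""
    jitter = rng.normal(0, 0.02 * (1 + rain_rate), true_ranges.shape)
    dropout = rng.random(true_ranges.shape) < 0.05 * rain_rate
    ranges = true_ranges + jitter
    ranges[dropout] = np.nan          # lost returns in heavy rain
    return ranges

rng = np.random.default_rng(42)
clean = np.full(360, 25.0)            # a 25m wall all round, one ray per degree
noisy = simulate_lidar_in_rain(clean, rain_rate=3.0, rng=rng)
print(int(np.isnan(noisy).sum()), "dropped returns out of 360")
```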

MZ: “In particular, The University of Liverpool were interested in how weather affects things, right down to different types of rain and mist. In California, if an AV encounters conditions it can’t handle, like heavy rain, it pulls to the side of the road. That’s ok for San Francisco but not for Manchester!

“A few years ago, everyone seemed to be using the example of an AV encountering a kangaroo. How would it cope? The point is you can use our simulations to train cars, to create algorithm antibodies for once in a lifetime events and regular things in different environments. That remains an essential part of what’s needed to make AVs a reality.

“We picked Conwy partly because it has very different patterns of land use to America. An early use case for AVs is predicted to be taxis, but in the UK these are most frequently used by people who don’t own their own car, and they often live in high density housing or narrow streets. The operational design domains (ODDs) are going to have to deal with environments specific to this country – steep hills, roads which twist and turn, and changeable weather.”

Mobility Mapper

Wetherall and Zadow’s latest collaboration is Mobility Mapper, a project to create greener and more intelligently designed transport hubs. The technology underpinning Mobility Mapper has previously been used by the team to model Covid-19 spread and autonomous vehicle technology, and by the Liverpool 5G Create project (funded by DCMS as part of its 5G Testbeds and Trials Programme).

JW: “E-hubs are basically an extension of what used to be called transport hubs – train or bus stations. They’ll provide charging facilities and access to different modes of transport, for example, you can drop off an e-scooter and hop into a shared autonomous car.

“Here in Liverpool, there was a big trial of e-scooters, big in international terms not just UK. The worry was that a lot of them would end up in the canal, but that didn’t happen. The trial was incredibly successful. It’s all about linking that movement and nudging people away from car ownership.”

MZ: “We were already thinking about how Jon’s technology could be used for mobility as a service (MaaS) when we attended a virtual future transport conference in LA with the Centre for Connected and Autonomous Vehicles (CCAV).

“That was an influence, as was an Intelligent Transportation Systems (ITS) trade show in Copenhagen, where we saw an autonomous tram system designed to take bicycles. It was a small step from there to imagining autonomous trams carrying autonomous delivery pods.

“This is classic smart city stuff but you need to know how these e-hubs are likely to be used, with no track record, nothing to go on. We need simulated environments to make best guesses in. That’s Mobility Mapper.”
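
With no usage track record to draw on, one plausible way a tool like Mobility Mapper could make “best guesses” is Monte Carlo simulation. The sketch below is entirely our own assumption, not the project’s method: it draws random e-scooter drop-off arrivals and a fixed transfer rate into shared cars, just to bound how busy an e-hub might get.

```python
# Hypothetical demand sketch for an e-hub: exponential inter-arrival
# times for scooter drop-offs, a fixed chance of transferring to a
# shared car. All rates are invented for illustration.
import random

def simulate_day(mean_arrivals_per_hr=12.0, transfer_rate=0.3, hours=16.0):
    """One simulated operating day; returns (drop_offs, car_transfers)."""
    t, drop_offs, car_transfers = 0.0, 0, 0
    while True:
        t += random.expovariate(mean_arrivals_per_hr)   # hours to next arrival
        if t > hours:
            break
        drop_offs += 1
        if random.random() < transfer_rate:
            car_transfers += 1                          # hops into shared AV
    return drop_offs, car_transfers

days = [simulate_day() for _ in range(1000)]
peak = max(d for d, _ in days)
print(f"Plan for up to {peak} scooter drop-offs/day (worst of 1,000 simulated days)")
```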

JW: “It is early days, still in the development phase, but the authorities in both Manchester and Liverpool have agreed there’s a need for such a predictive simulation tool.”

As we wrap up a thoroughly enjoyable interview, Max dons his Director of Digital Creativity in Disability hat: “Autonomous delivery bots are basically electric wheelchairs without a person, so there’s clearly a potential benefit, but there needs to be less wishcasting and more real work on how accessibility will be affected.”

For further info, visit CGAsimulation.com

The UK’s National Physical Laboratory is working on a framework for virtual sensor testing.

Developing test frameworks which build a bridge of trust to driverless cars in the UK

Our Zenzic CAM Creator series continues with Andre Burgess, digital sector strategy leader at the National Physical Laboratory (NPL).

NPL is the UK’s National Metrology Institute, responsible for developing and maintaining the national primary measurement standards. For over a century, it has worked to translate scientific expertise into economic prosperity, skilled employment and improved quality of life, covering everything from cancer treatments to quantum computing. In the self-driving sector, Andre Burgess’s focus is on test frameworks to support the deployment of safe and reliable autonomous transport on land, sea and air.

Andre Burgess, digital sector strategy leader at NPL.

AB: “We’re all about measurement and how it can be applied to the autonomous vehicle space. Artificial intelligence (AI) and machine learning represent a great transformation. Whereas in the past we’ve developed tests for whether a human is fit to do something, in this new world we need a new set of tests to assure autonomous systems and build a bridge of trust. This is not a one-off test; it is ongoing work to develop new methodologies and support the development of new standards.

“One of the key things this country has developed is Testbed UK, a collaboration between government and industry which has delivered a formidable testing environment – a network of safe, highly controlled environments increasingly linked to virtual testing.

“Working with the Met Office on behalf of the Centre for Connected and Autonomous Vehicles (CCAV) over the last year, we have focused on the usability and reliability of sensors in different weather conditions. How do you know if sensors are performing well? How do you validate the decision-making? How do you apply metrics and KPIs to this? Having undertaken a proof of concept for a testing framework, we are confident this can be delivered and deployed throughout the industry.
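
One simple form such a KPI could take – hypothetical data and field names, not NPL’s actual framework – is a per-condition detection rate computed over logged test runs:

```python
# Hedged sketch: per-weather detection rate from logged runs.
# The run log and conditions below are invented for illustration.
from collections import defaultdict

# (condition, detected?) pairs as they might come from logged test runs
runs = [("clear", True), ("clear", True), ("rain", True),
        ("rain", False), ("fog", False), ("fog", True), ("clear", True)]

totals, hits = defaultdict(int), defaultdict(int)
for condition, detected in runs:
    totals[condition] += 1
    hits[condition] += detected

for condition in totals:
    recall = hits[condition] / totals[condition]
    print(f"{condition:>6}: detection rate {recall:.0%} over {totals[condition]} runs")
```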

“There is much talk about pass/fail tests, but our focus is confidence – improving confidence in the outputs and building confidence in the system. We collaborate across the board with regulators, testers and developers, engaging with them to understand their requirements. Our approach is to provide tools which help reduce the barriers to innovation without compromising regulation and safety assurance. Striking the right balance between reliability and usability is key. Our work will support validation and help the UK to influence international standards.
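
Expressing confidence rather than a bare pass/fail can be as simple as putting an interval around an observed success rate. The Wilson score interval below is a textbook choice we have picked for illustration; it is not NPL’s published method, and the figures are invented.

```python
# Illustrative: 95% Wilson score interval around an observed detection rate,
# turning a single pass/fail percentage into a statement of confidence.
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a binomial success rate (z=1.96 -> 95%)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(successes=470, trials=500)   # hypothetical test campaign
print(f"Observed 94.0% detection; 95% confidence interval [{lo:.1%}, {hi:.1%}]")
```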

“The biggest transformation in road transport over the next decade will be emissions reduction, with self-driving vehicles and smart mobility systems as key drivers. It will require changes to infrastructure and changes in habits – batteries or hydrogen will be critical, perhaps a need to drive more slowly, maybe less private car ownership. The impact of Covid has led to a move away from trains and buses, so a resurgence of public transport is vital.

“In terms of self-driving, I envisage there will be personally driven vehicles and on-demand vehicles. Increasingly, I expect we’ll see a transition to smaller public transport vehicles, perhaps for 8-10 people, in continuous use. There’s real value in getting to places that don’t have bus stops, and there’ll be benefits from autonomous safety features too. It won’t be everywhere, but I hope within 10 years there’ll be good examples of that in the UK. The question is: will we be ahead of or behind the curve? In some more authoritarian countries implementation might be faster, but maybe not better.

“We’ll also start to see autonomous low-level aviation and autonomous shipping – for example, short-sea cargo freight. Combined, these things will make roads less congested. Key transport stakeholders have expressed the need to integrate, to pursue the most efficient way to get goods into and around the UK.

“For our part, we are focused on the framework for virtual sensor testing, and also on integration between virtual and physical testing. Giving an accurate level of confidence requires understanding the common metrics and the areas of uncertainty. The human factor is so important too – for example, what about the people whose cars don’t have this tech? How do they respond?”

For further info visit www.npl.co.uk.

Autonomous vehicle software specialist set to become a major UK success story.

Oxbotica secures huge BP investment and targets anything that moves people or goods

Oxford University spin-out, Oxbotica, has been on our must-speak-to list for a while, and on Friday we got some Zoom time with the top people – CEO, Ozgur Tohumcu, and co-founder and CTO, Professor Paul Newman.

It’s three weeks since the autonomous vehicle software specialist announced a US$47m Series B investment led by bp ventures. Yes, that BP. The press release asserts that this will accelerate the deployment of Oxbotica’s platform “across multiple industries and key markets”, but Prof. Newman is quick to emphasise this is not about robotaxis, not even about cars.

Prof Paul Newman, Oxbotica co-founder and CTO.

“We’ve been deploying our software in industrial settings – mines, airports – for six years now, and not only in the UK but also in Europe, North America and Australia,” he says. “Everyone talks about cars but all vehicles are game for us – anything that requires moving people or goods. That’s the advantage of being pure software.

“We’re a global business and raising this kind of money during a pandemic speaks volumes. We have clear water behind and blue sky ahead. Having these new investors and strategic partners will really allow us to drive home the opportunities that came last year. Vehicles are common but software of our standard is not. We’re showing that great IP can be generated everywhere, not just Silicon Valley, and that’s very refreshing.”

While Prof. Newman focuses on the vision, Tohumcu provides the detail. “Since the funding announcement, the exchange rate means it’s actually worth closer to $50m, so that’s not bad,” he says. “We’ve just conducted a review of the business and it was pleasing to see that we achieved exactly what we said we’d do two years ago – delivering results against measurable goals.

Ozgur Tohumcu, Oxbotica CEO.

“We’ve done a lot of planning recently – some well-defined, other things we’re still making choices about. We’ve been approached by new companies interested in using our tech and there are exciting deals in the pipeline, deals that come with investment. We’ll be making further announcements over the coming weeks and months.”

Make no mistake, Oxbotica is set to become a major UK success story… just don’t mention driverless cars!