Lucas Noldus Ph.D. details the latest high tech ways to measure driver behaviour in ADAS-equipped and self-driving vehicles

Connected and self-driving car safety: Noldus keeps more than an eye on distracted driving

Isn’t LinkedIn marvellous? I met Lucas Noldus Ph.D., Founder & CEO of Netherlands-based Noldus Information Technology, after he liked my interview with his Global Partnership on Artificial Intelligence (GPAI) colleague, Inma Martinez.

A few messages flew back and forth, and it transpired that he’s an expert in measuring driver behaviour, particularly driver-vehicle interactions in ADAS-equipped and self-driving vehicles. That was music to my ears, so we arranged a Zoom. What follows is the highly insightful result.

Lucas Noldus
Lucas Noldus Ph.D., Founder of Noldus Information Technology

LN: “The future starts here. The world is changing. We see people living longer and there are more and more interactive devices – telephones, tablets, dashboards – with which we can interact, leading to greater risk of distraction while driving. I know personally how tempting it is to use these devices, even while trying to keep your eyes on the road.

“We already have fascinating developments in connected driving and now, with self-driving, the role of the driver changes significantly. That has triggered research institutes, universities, OEMs and tier one suppliers to pay more attention to the user experience for both drivers and passengers.

“All these experiences are important because how people perceive safety and comfort will influence their buying decisions, and their recommendations to other potential users.

“For autonomous driving, how far will we go towards level five? What happens at the intermediate stages? Over the coming decades, driving tasks will gradually diminish but, until full autonomy, the driver will have to remain on standby, ready to take over in certain situations. How will the vehicle know the driver is available? How quickly can he take over? These are the topics we’re involved in as a technology company.

“We make tools to allow automotive researchers to keep the human in the loop. Traditionally, automotive research focused exclusively on improving the vehicle – better engines, drivetrains etc. Until recently, nobody paid much attention to the human being (with a brain, skeletal system, muscles, motor functions), who needs to process information through his sensory organs, draw the right conclusions and take actions.

“Now, these aspects are getting more attention, especially in relation to reduced capacity, whether due to a distracting device, drugs, alcohol or neurodegeneration. As you get older your response time becomes longer, your eyesight and hearing abilities reduce, as does the speed at which you can process information.

“These are the challenges that researchers in automotive are looking at concerning the role of the driver, now and in the future. If the automated or semi-automated system wants to give control back to the driver because its AI algorithms decide a situation is too complex, can the driver safely take over if he’s been doing something like reading or taking a nap? How many milliseconds does the brain need to be alert again?

NK: “Draft legislation seems to be proceeding on a 10-second rule, but some studies say at least 30 seconds is required.”

LN: “Situational awareness – that’s a key word in this business. Not only where am I geographically, but in what situation. Oh, I’m in a situation where the road surface is very wet, there’s a vehicle just in front of me, the exit I need is near and I’m in the wrong lane. Understanding a situation like that takes time.

“If we take a helicopter view, from our perspective as a technology company, what should be measured to understand the driver behaviour? Which sensors should we use to pick up that information? If we use a microphone, a video camera, a heartbeat monitor and a link to the ECU, how do we synchronise that?

“That’s not trivial because one sensor may be sampling at 300Hz and another at 25 frames per second. That’s something my company has specialised in over the years. We’re very good at merging data from different sources, whether it’s a driving simulator continuously spitting out data, a real car, or sensors mounted in the infrastructure.
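
Noldus’s actual merging pipeline is proprietary, but the core idea – resampling streams that tick at different rates onto one shared clock – can be sketched in a few lines. The 100Hz target rate and linear interpolation below are illustrative choices, not DriveLab’s method:

```python
import numpy as np

def align_streams(t_a, x_a, t_b, x_b, rate_hz=100.0):
    """Resample two sensor streams onto one shared time base.

    t_a/x_a: timestamps (s) and samples of stream A (e.g. 300 Hz physiology)
    t_b/x_b: timestamps and samples of stream B (e.g. a 25 fps video feature)
    Returns the common time base and both signals interpolated onto it.
    """
    t0 = max(t_a[0], t_b[0])              # use only the overlapping window
    t1 = min(t_a[-1], t_b[-1])
    t = np.arange(t0, t1, 1.0 / rate_hz)  # the shared clock
    return t, np.interp(t, t_a, x_a), np.interp(t, t_b, x_b)

# Two seconds of a 300 Hz signal and a 25 fps signal, aligned at 100 Hz
t_a = np.arange(0, 2, 1 / 300)
t_b = np.arange(0, 2, 1 / 25)
t, a, b = align_streams(t_a, np.sin(2 * np.pi * t_a),
                        t_b, np.cos(2 * np.pi * t_b))
```

Real pipelines must also cope with clock drift, dropped samples and event-based (non-uniform) streams, which is where the hard engineering lives.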

“You then need to analyse that data and pull out meaningful quantitative units that give you actionable insights. Generating large matrices is no big deal; making sense of that information is the real challenge.

“For example, in dashboard design, a manufacturer might be comparing two or three displays of road quality. A driver behaviour study with our tools will give the designer a clear answer on which design leads to the least cognitive workload, the least confusion.

Noldus DriveLab
Noldus DriveLab

“This same technical challenge can be applied to a vast number of design objectives. The vehicle manufacturer might be looking to make incremental improvements to, say, the readability of the dashboard under certain light conditions. Or they might be working on a completely new feature, like an intelligent personal in-car assistant. A number of brands are working on that, but the concept is still relatively new.

“You cannot test every scenario on the road – it’s just too dangerous – so we work with simulator manufacturers too. On the road or in the lab, we can measure a driver’s actions with eye-tracker, audio, video, face-reader and physiology in one.”

NK: “Back to LinkedIn again, I saw a post by Perry McCarthy, the F1 driver and original Stig on Top Gear, who said something like: Simulators are getting so good these days, when you make a mistake they drop three tonnes of bricks on your legs!”

LN: “You have so-called high fidelity and low fidelity simulators – the higher the fidelity, the closer you get to the real vehicle behaviour on the road, and there are all sorts of metrics to benchmark responsiveness.

“You have simple fixed-base simulators right up to motion-based simulators which can rotate, pitch and roll, move forward, backwards, sideways and up and down. For the best ones you’re talking about 10 million euros.

“We work with OEMs, tier-one suppliers, research institutes and simulator manufacturers to build in our DriveLab software platform. We also advise on which sensors are recommended depending on what aspects of driver behaviour they want to study.

“We try to capture all the driver-vehicle interactions, so if he pushes a pedal, changes gear or turns the steering wheel, that’s all recorded and fed into the data stream. We can also record their body motion, facial expression, what they’re saying and how they’re saying it – it all tells us something about their mental state.

Noldus eye-tracker
Multi-camera eye tracker (Smart Eye)

“Eye tracking measures the point of gaze – what your pupils are focused on. In a vehicle, that might be the left, right and rear mirrors, through the windscreen or windows, around the interior, even looking back over your shoulders. To capture all that you need multiple eye-tracking cameras. If you just want to look at, for example, how the driver perceives distance to the car in front, you can make do with just two cameras rather than six.

“Eye tracking generates all sorts of data. How long the eyes have been looking at something is called dwell time. Then there’s what direction the eyes are looking in and how fast the eyes move from one fixed position to another – that’s the saccade. People doing eye tracking research measure saccades in milliseconds.

“Another important metric is pupil diameter. If the light intensity goes up, the pupil diameter decreases. Given a stable light condition, the diameter of your pupil says something about the cognitive load to your brain – the harder you have to think, the wider your pupils will open. If you’re tired, your blink rate will go up. There’s a normal natural blink rate to refresh the fluid on your eyes with a fully awake person, but if you’re falling asleep the blink rate changes. It’s a very useful instrument.
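
These metrics fall out of raw gaze samples fairly directly. As an illustration only – thresholds and algorithms vary by vendor – the common velocity-threshold (I-VT) approach classifies any sample moving faster than a cut-off as saccadic, with the remainder counting towards fixation (dwell) time; the 30 deg/s threshold here is a typical textbook value, not a Noldus parameter:

```python
import numpy as np

def detect_saccades(t, gaze_deg, vel_thresh=30.0):
    """Split a 1-D gaze-angle trace into saccade and fixation samples.

    t: timestamps in seconds; gaze_deg: gaze angle in degrees.
    Samples whose angular velocity exceeds vel_thresh (deg/s) are treated
    as saccadic; everything else counts towards fixation/dwell time.
    """
    vel = np.abs(np.gradient(gaze_deg, t))        # angular velocity, deg/s
    is_saccade = vel > vel_thresh
    dwell_s = np.sum(~is_saccade) * np.median(np.diff(t))
    return is_saccade, dwell_s

# One second of 300 Hz fixation data with a single abrupt 10-degree jump
t = np.arange(0, 1, 1 / 300)
gaze = np.zeros_like(t)
gaze[150:] = 10.0
is_saccade, dwell_s = detect_saccades(t, gaze)
```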

“Then there are body-worn sensors that measure physiology. It’s harder to do in-car, but in a lab people don’t mind wearing electromyography (EMG) sensors to measure muscle tension. If you’re a designer and you want to know how easy it is for an 80-year-old lady to operate a gearshift, you need to know how much muscle power she has to exert.

“We also measure the pulse rate with a technique called photoplethysmography (PPG), like in a sports watch. From the PPG signal you can derive the heart rate (HR). However, a more accurate method is an electrocardiogram (ECG), which is based on the electrical activity of the heart.
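
As a toy illustration of the PPG-to-heart-rate step – commercial wearables add heavy filtering and artefact rejection on top of this – the rate can be estimated by counting pulse peaks in the trace:

```python
import numpy as np

def heart_rate_from_ppg(signal, fs):
    """Estimate heart rate (bpm) from a PPG trace by naive peak counting.

    signal: raw PPG samples; fs: sampling rate in Hz.
    A sample counts as a pulse peak if it exceeds both neighbours
    and the trace mean.
    """
    s = np.asarray(signal, dtype=float) - np.mean(signal)
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1] and s[i] > 0]
    return 60.0 * len(peaks) / (len(s) / fs)

# Ten seconds of a clean 1.2 Hz (72 bpm) synthetic pulse sampled at 100 Hz
t = np.arange(0, 10, 1 / 100)
bpm = heart_rate_from_ppg(np.sin(2 * np.pi * 1.2 * t), 100)
```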


Noldus physiological data
GSR (EDA) measurement

“Further still, we measure galvanic skin response (GSR), also called electrodermal activity (EDA), the level of sweating of your skin. The more nervous you get, the more you sweat. If you’re a bit late braking approaching a traffic jam, your GSR level will jump up. A few body parts are really good for capturing GSR – the wrist, palm, fingers, and the foot.

“We also measure oxygen saturation in the blood with near infrared spectroscopy (NIRS) and brain activity with an electroencephalogram (EEG). Both EEG and NIRS show which brain region is activated.

“Another incredibly useful technique is face reading. Simply by pointing a video camera at someone’s face we can plot 500 points – the surroundings of the eyebrows, the eyelids, the nose, chin, mouth, lips. We feed this into a neural network model and classify it against a database of tens of thousands of annotated images, allowing us to identify basic emotions – happy, sad, angry, surprised, disgusted, scared or neutral. You can capture that from one photograph. For other states, like boredom or confusion, you need a series of images.

“These days we can even capture the heart rate just by looking at the face – tiny changes in colour resulting from the pulsation of the blood vessels in the skin. This field of face reading is evolving every year and I dare to claim that we are leading the pack with our tool.
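
The principle behind camera-based pulse measurement (remote PPG) is simple to sketch, even though FaceReader’s production algorithm is far more sophisticated: average the green channel over the face region in each frame, then take the dominant frequency in the plausible pulse band:

```python
import numpy as np

def remote_heart_rate(green_means, fps, lo=0.7, hi=3.0):
    """Estimate pulse (bpm) from per-frame mean green intensity of a face ROI.

    The dominant spectral peak between lo and hi Hz (42-180 bpm) is taken
    as the pulse. Real rPPG adds motion and lighting compensation.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                              # remove the DC skin tone
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)          # plausible pulse range
    return 60.0 * freqs[band][np.argmax(power[band])]

# Ten seconds of 25 fps video with a faint 1.1 Hz (66 bpm) colour pulsation
t = np.arange(0, 10, 1 / 25)
bpm = remote_heart_rate(100 + 0.5 * np.sin(2 * np.pi * 1.1 * t), 25)
```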

“Doing this in the lab is one thing; doing it in a real car is another challenge – keeping your focus on the driver’s face and dealing with variable backgrounds. Of course, cars also drive at night so the next question is: can you do all this in darkness? We turned our company van into an instrumented vehicle and my sons agreed to be the guinea pigs.

“It took some work – overcoming the issue of light striking the face and causing sharp shadows, for instance – but we can now use infrared illuminators with our FaceReader software to make these measurements in full darkness.

“The turning of the head is also very important in studying distraction, for example, if the driver looks sideways for too long, or nods their head in sleepiness. When something shocks someone, we see the face change and the blood pressure rise, and these readings are synchronised in DriveLab.

“It is well proven that even things like changing radio station can be very distracting. Taking your eyes off the road for just a few seconds is dangerous. As we move to more and more connected devices, touchscreens and voice commands, minimising distraction is vital to ensure safety.”

NK: “I absolutely love this tech but what I actually drive is a 7-year-old Suzuki Swift Sport with a petrol engine and a manual gearbox, and I quite like it that way.”

LN: “I’m doing research on cars of the future with my software but I am personally driving a 30-year-old soft-top Saab 900. That’s my ultimate relaxation, getting away from high tech for a moment.

“At Noldus, we’re constantly pushing the boundaries of research, working with top level organisations in automotive – Bosch, Cat, Daimler, Fiat, Honda, Isuzu, Land Rover, Mazda, Nissan, Scania, Skoda, Toyota, Valeo and Volvo, to name just a few – and also with the Netherlands Aerospace Centre (NLR) and the Maritime Research Institute Netherlands (MARIN).

“Our aim is to make it so that the client doesn’t have to worry about things like hardware-to-software connections – we do that for them so they can focus on their research or design challenge.”

For further info see noldus.com

Bold predictions about our driverless future by petrolhead Clem Robertson.

Meet the maverick radar expert of UK drones and driverless

Welcome to a new series of interviews with our fellow Zenzic CAM Creators. First up, Clem Robertson, CEO of R4dar Technologies.

As a keen cyclist who built his own Cosworth-powered Quantum sportscar from scratch, it’s no surprise that the founder of Cambridge-based R4dar takes a unique approach to self-driving. Indeed, his involvement can be traced directly to one shocking experience: driving down a local country lane one night, he had a near miss with a cyclist with no lights. He vividly remembers how a car came the other way, illuminating the fortunate rider in silhouette and enabling an emergency stop. It proved to be a light bulb moment.

R4dar urban scene tags
R4dar urban scene tags

What does R4dar bring to connected and automated mobility (CAM)? 

CR: “I’d been working in radar for five or six years, developing cutting-edge radar for runways, when the incident with the cyclist got me thinking: Why could my cruise control radar not tell me something was there and, importantly, what it was? This kind of technology has been around for years – in World War II we needed to tell the difference between a Spitfire and a Messerschmitt. They placed a signal on the planes which gave this basic information, but things can be much more sophisticated these days. Modern fighter pilots use five different methods of identification before engaging a potential bogey, because one or more methods might not work and you can’t leave it to chance whether to blow someone out of the sky. The autonomous vehicle world is doing something similar with lidar, radar, digital mapping etc. Each has its shortcomings – GPS is no good in tunnels; the cost of 5G can be prohibitive and coverage is patchy; cameras aren’t much good over 100 metres or in the rain; lidar is susceptible to spoofing or misinterpretation; digital maps struggle with temporary road layouts – but together they create a more resilient system.”

How will your solutions improve the performance of self-driving cars?

CR: “Radar only communicates with itself, so it is cyber-resilient, and our digital tags can be used on smart infrastructure as well as vehicles – everything from platooning lorries to digital high vis jackets, traffic lights to digital bike reflectors. They can tell you three things: I am this, I am here and my status is this. For example, I’m a traffic light up ahead and I’m going to turn red in 20 seconds. Radar works in all weathers. It is reliable up to 250-300m and very good at measuring range and velocity, while the latest generation of radars are getting much better at differentiating between two things side-by-side. We are working with CAM partners looking to use radar in active travel, to improve safety and traffic management, as well as with fleet and bus operators. We are also working with the unmanned aerial vehicle (UAV) industry to create constellations of beacons that are centimetre-accurate, so that delivery drones can land in a designated spot in the garden and not on the dog!”
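
R4dar’s actual over-the-air encoding is proprietary, but the “I am this, I am here and my status is this” triple can be modelled as a tiny message structure – all field names below are hypothetical, purely for illustration:

```python
import json

def make_tag_message(kind, lat, lon, status):
    """Build the 'I am this / I am here / my status is this' triple."""
    return json.dumps({"kind": kind,
                       "position": {"lat": lat, "lon": lon},
                       "status": status})

# The traffic-light example from the interview: turning red in 20 seconds
msg = make_tag_message("traffic_light", 52.2053, 0.1218,
                       {"state": "green", "turns_red_in_s": 20})
decoded = json.loads(msg)
```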

R4dar cyclists in fog
R4dar cyclists in fog

What major developments do you expect over the next 10-15 years?

CR: “Fully autonomous vehicles that don’t carry passengers will come first. There are already little robots on the streets of Milton Keynes and, especially with Covid, you will see a big focus on autonomous last mile delivery – both UAVs and unmanned ground vehicles (UGVs). You never know, we might see delivery bots enacting a modern version of the computer game Paperboy. More and more people in urban areas with only roadside parking will realise that electric cars are tricky to charge, unless you put the chargers in the road, which is expensive. If you only need a car one or two days a month, or even for just a couple of hours, there will be mobility as a service (MAAS) solutions for that. Why would you bother with car ownership? E-scooters are one to keep an eye on – once they’re regulated they will be a useful and independent means of getting around without exercising. Town centres will change extensively once MAAS and CAM take off. There will be improved safety for vulnerable road users, more pedestrianisation, and you might see segmented use at certain times of day.”

Do you see any downsides in the shift to self-driving?

CR: “Yes! I love driving, manual gearboxes, the smell of petrol, the theatre, but you can see already that motorsport, even F1, is becoming a dinosaur in its present form. People are resistant to change and autonomous systems prompt visions of Terminator, but it is happening and there will be consequences. Mechanics are going to have less work and will have to retrain because electric motors have fewer moving parts. Courier and haulage driving jobs will go. Warehouses will be increasingly automated. MAAS will mean fewer people owning their own cars and automotive manufacturers will have to adapt to selling fewer vehicles – it’s a massive cliff and it’s coming at them much faster than they thought – that’s why they’re all scrambling to become autonomous EV manufacturers; it’s a matter of survival.”

R4dar lights in fog
R4dar lights in fog

So, to sum up…

CR: “Fully autonomous, go-anywhere vehicles are presented as the utopia, but there’s a realisation that this is a difficult goal, or at least a first world problem. There might always be a market for manned vehicles in more remote locations. A lot of the companies in this industry specialise in data, edge processing and enhanced geospatial awareness, and that will bring all kinds of benefits. How often have you driven in fog unable to see 10m in front of you? Self-driving technology will address that and many other dangers.”

Hearing bold predictions like these from a petrolhead like Clem, suddenly Zenzic’s ambitious 10-year plan seems eminently achievable.

For further info, visit the R4dar website.

Driverless car laws and insurance

The Law Commission of England and Wales is currently undertaking a far-reaching review of the legal framework for driverless cars… and insurers are keen to contribute.

The deadline for submissions to the preliminary consultation paper passed last week and AXA Insurance has highlighted what it hopes will be key themes:

1) Access to data and a transparent framework for effective data governance is fundamental for establishing liability and accurate risk modelling.

2) The legal and regulatory framework must clearly define the responsibilities of the users of autonomous vehicles (AVs) and any changes to the current road safety regime.

3) Consumers must be educated on their responsibilities, how the equipment should be used and the regulations attached to them.

Noting the Government’s recent announcement on the advanced trials for self-driving vehicles, David Williams, managing director of underwriting and technical services at AXA, said: “We are only in February but the world of driverless has started 2019 at a blistering pace.

“It might not sound as exciting as trials and tech, but as driverless cars are rapidly becoming a reality, it is right now that we need to think about the legal aspects of this technology. The consultation had 46 detailed questions on areas ranging from the responsibilities of a human user to the need for data retention.”

In its submission, the International Underwriting Association (IUA), which represents many of the world’s largest insurance companies, argued that accident data should be automatically retained.

Chris Jones, IUA director of legal and market services, said: “The technology surrounding driverless cars is developing rapidly. It is essential, therefore, that an effective framework is established governing their operation. Insurers have a vital role to play in this process.

“In order for liability to be established, vehicle data must be recorded and made available. This will include, for example, the status of the automated system, whether engaged or disengaged, the speed of the vehicle and any camera footage from the time of the accident.

“As information expands and usage grows, we are likely to see potential vulnerabilities highlighted and new risk areas emerge. We anticipate that the technology will be capable of self-reporting system errors, defects and other issues affecting road worthiness.”

In a sign of things to come, Bloomberg reports that entrepreneur Dan Peate has launched Avinew, with $5m in seed funding, offering an insurance product which monitors drivers’ use of autonomous features in cars made by Tesla, Nissan, Ford and Cadillac.

Discounts will be determined based on how the features are used, after the customer has given permission for their driving data to be accessed.

This seems a logical next step in telematics or ‘black box’ insurance, which tracks the way you drive and links it to the amount you pay.

In terms of what happens in the event of an accident, a story in the Daily Express explained how a fraudulent claim worth £6,000 was prevented using telematics.

A Renault Clio driver facing a whiplash claim was cleared by data showing that the incident occurred at under 5mph. Martyne Miller, associate director of Coverbox, said: “The data was able to successfully refute a substantial claim, saving both the motorist and the insurer money.”
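
Coverbox hasn’t published its method, but conceptually the check reduces to looking up the logged speed at the moment of the alleged incident – a sketch with hypothetical data and field names:

```python
import bisect

def speed_at(timestamps, speeds_mph, t_incident):
    """Return the most recent logged speed at or before the incident time.

    timestamps: sorted sample times (epoch seconds); speeds_mph: matching
    speed readings from the black box. Purely illustrative.
    """
    i = bisect.bisect_right(timestamps, t_incident) - 1
    return speeds_mph[max(i, 0)]

# Hypothetical telematics log around the alleged collision
ts = [100, 101, 102, 103]
mph = [12.0, 8.0, 4.5, 0.0]
incident_speed = speed_at(ts, mph, 102.5)   # vehicle was doing under 5 mph
```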

Once cars are fully autonomous, Rodney Parker, associate professor of operations management at Indiana University, predicts that “liability is likely to migrate from the individual to the manufacturer and the licensers of the software that drives the AV.”

There’s also the possibility that motorists could be encouraged out of driving via the prohibitive cost of insurance.

The Law Commission was asked to look at the legal framework for driverless cars by the UK’s Centre for Connected and Autonomous Vehicles (CCAV), a joint Department for Business, Energy & Industrial Strategy (BEIS) and Department for Transport (DfT) policy team.

If these insurer submissions are anything to go by, the focus will be at least as much on the connected elements as the autonomous ones.

Will it have anything to say about who to save in no-win crash situations or who should be the data controller?

The final report is due in March 2021.

Online teach-out gives bite-sized answers to driverless car questions

If you’ve got a couple of hours to digest important driverless car questions, try this online course from the University of Michigan: Self-Driving Cars Teach-Out.

The university’s Ann Arbor campus is home to the 32-acre Mcity test facility, the first purpose-built proving ground for connected and automated vehicles (CAVs).

Carrie Morton, deputy director of Mcity, describes it as “the ultimate sandbox”, a place to foster collaboration with industry, government and academic partners.

Following a quick overview of the key on-board technologies – sensors, lidar, GPS etc – the university’s experts get into the nitty gritty of their specialisms.

Liz Gerber, professor of public policy, sets the scene, saying: “The promise of driverless vehicles is super exciting for communities and for society. We talk about the promise of reduced congestion, increased mobility options and enhanced safety and convenience.”

Professor Matthew Johnson-Roberson discusses the fragility of artificial intelligence (AI) in dealing with new systems, the challenge of getting from 95% to 99.99% accuracy, and the importance of failing gracefully in the event of an error.

Professor Dan Crane looks at balancing competition, differentiation and standardisation, asserting that we should encourage “a thousand flowers to bloom”, because no one yet knows which technologies will work best.

Ian Williams, inaugural fellow for the Law & Mobility Program, addresses privacy concerns and the ability to change settings. He also raises the possibility of motorists being encouraged out of driving via the prohibitive cost of insurance.

Big picture thinking comes from Alex Murphy, assistant professor in sociology, who considers the profound impacts of a lack of transportation – from the kinds of jobs people can take to the schools they can access. “It has huge implications for inequality,” she says.

Lionel Robert, associate professor in the School of Information, predicts that we’ll see level five, fully autonomous, go anywhere CAVs “in our lifetime”. He focusses on giving consumers “accurate trust” in the technology, not under- or over-trust.

One reassuring point which crops up time and again is the continuing need for humans – from John the safety conductor on the Mcity Shuttle, to roles variously described as truck operators, fleet attendants, concierges and guides.

This evolution could potentially help to offset the fear that driverless technology will immediately put people out of a job, a belief which has been blamed for attacks on self-driving test cars.

The potential of CAVs to help the blind community was also particularly thought-provoking.

CASE study: connected, autonomous, something and electric

The motor industry is notoriously fond of an acronym and here’s a new one which might just catch on: CASE.

In this case, C stands for connected, A for autonomous and E for electric, but there’s disagreement about what the S should stand for.

Vehicle manufacturer Daimler goes for connected, autonomous, shared and electric, although if you dig a bit deeper into their website they keep their options open with “shared and services”.

“Each of these has the power to turn our entire industry upside down,” said Dr Dieter Zetsche, chairman of the board of Daimler AG. “But the true revolution is in combining them in a comprehensive, seamless package.”

Over at car parts maker ZF, Andy Whydell, vice president of systems product planning for active and passive safety, goes for connected, autonomous, safe and electric.

For explanations of other vehicle-related terms and acronyms, see our Cars of the Future glossary.