Supported by the Rees Jeffreys Road Fund and Zenzic, Eloy’s multi-vehicle coordination is making parking and driving more efficient. Next, self-driving.

80% car parking timesaving? That’s intelligent CAM decision making

In this Cars of the Future exclusive, the co-founders of Hertfordshire-based Eloy, Anna Corp and Damian Horton, explain how their connected car services make parking, driving and self-driving safer and more efficient.

Congratulations on your Zenzic CAM Scale-Up success. How did you get into connected vehicle tech?

DH: “Our story goes back to 2004, when I was a maths undergrad at Oxford. I ended up doing a thesis on bifurcation theory in swarms, based on simulations like those seen in computer games – little army men running together, and how they interact with each other. It involved a lot of traffic modelling, but at the time there were no jobs in driverless cars, so I went into investment banking and started a couple of businesses. When I moved back to the UK from Australia in 2018, I saw that the connected and automated mobility (CAM) market was finally happening. I met Anna at a start-up in London and, together with an old school friend of mine, Marcus Robbins, who’d done a lot of geospatial work, we decided to give it a crack.”

AC: “My background is in marketing, user experience and customer insights. One issue I’m very familiar with is companies not thinking about real-life human problems. At Eloy, we’re all about solving problems for all road users, not just car and lorry drivers – cyclists, pedestrians, horse riders, everybody. We take a much more holistic approach to making roads safer and more efficient. There’s a big push to move people on to active travel and public transport, but is that what people really want? Shared robotaxis are often presented as a utopia, but why would a mum use one when she has her own car with all the baby stuff already in the back? It’s hard to force behaviour change. A better way is to give people options and tools which they see value in, which make their lives better. Then they’ll adopt.”

Which brings us to your app…

DH: “We worked out early on that the best way to get into the connected car space was to provide a sat nav, before building in any new experiences to make roads flow better. On Boxing Day 2020, we got the email from Apple saying the sat nav component had been accepted for CarPlay.”

AC: “We joke that we’re the smallest sat nav company in the world, but it’s a prerequisite for all we plan to do. We had to get into existing vehicles.”

DH: “We’re obsessed with the situations you get into as a driver – sitting waiting to make a turn across a blocked carriageway, queuing at a mini-roundabout while everyone waits for each other. How can we make these small things better? The missing piece over and over again was multi-vehicle coordination (MVC). So we got super focused on niche use cases, like getting in and out of car parks and passing on country lanes.”

AC: “Smartphones are a good example of a product which has morphed into so much more. The best thing is we can offer connected vehicle solutions now, to provide good advice for human drivers, to prove high efficacy, and then apply them to higher level automated driving.”

And you’re already testing at UTAC’s Millbrook Proving Ground…

DH: “Yes. In October, we demonstrated our narrow road warning solution, which reduces the need for reversing to find a passing point, at The Transport Technology Forum at UTAC. That involved just two vehicles. The next phase is to get it working with 20 vehicles in a controlled environment, then up to 100, and scale from there. We’re looking for the right partners, ranging from ports and farms to construction traffic, freight and public transport – probably fleets initially.

“Early simulations indicate a 20% timesaving from MVC for country lane passing, and up to 80% for car park entry and exit. Internet connectivity is an issue (that’s for someone else to solve!), but we can deploy on sections of road where a good signal is almost guaranteed. Then it’s a question of making sure the intervention – the beeping or flashing or messaging – doesn’t outweigh the benefit. The big question is always: does it improve safety?”

Sorry, did you just say an 80% timesaving for car parking?

DH: “Yes, by using very similar modelling to how you fill an empty aeroplane. For years, it was a free-for-all, so the airlines tried to get organised by filling in order from row one. Mathematically, that turned out to be the slowest way, because everyone has to wait for the person in front of them. So they got clever and started filling from row 30 and working backwards. That’s actually the second slowest way, because you end up with the same problem of everyone waiting. Eventually they worked out that the best method is a structured filling pattern. You send in rows 30, 25 and 20, then rows 15, 10 and 5. They all have space to stow their luggage, and what you have is a lot more manoeuvres per second. That gives you an 80% reduction in filling time.

“We looked at high density car parking in the same way. Think Silverstone on grand prix weekend, when there’s traffic chaos. If every car has an allotted parking bay and follows guidance from a sat nav, you can apply those same principles of more simultaneous manoeuvres. There are potentially further gains too, for example, by connecting live data to the local traffic lights to disperse the traffic more efficiently. The challenge is coordination, between the event organisers, the local authority, the car park operator and the attendees. The rationale is economic benefit, reduced journey times for everyone, which brings you to infrastructure investment decisions – the cost per mile benefits of these intelligent systems compared to building more roads.”
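
To make the idea concrete, here is a toy, back-of-envelope boarding simulation – emphatically not Eloy’s model, and the tick counts it prints depend entirely on the assumptions baked in (a single aisle, three passengers per row, a fixed stowing delay that blocks the aisle). What it does show is where the gain comes from: the staggered pattern lets several rows stow at the same time.

```python
from collections import deque

def simulate_boarding(order, stow_time=4):
    """Toy single-aisle model: each passenger walks one row per tick towards
    their target row, then blocks the aisle for `stow_time` ticks while
    stowing. Returns the number of ticks until everyone is seated."""
    queue = deque(order)        # target rows still waiting to board, in order
    aisle = {}                  # aisle position -> [target_row, stow_remaining]
    ticks = 0
    while queue or aisle:
        for pos in sorted(aisle, reverse=True):     # front-most passengers first
            target, stow = aisle[pos]
            if pos == target:
                if stow == 1:
                    del aisle[pos]                  # seated; aisle position freed
                else:
                    aisle[pos][1] = stow - 1        # still stowing
            elif pos + 1 not in aisle:
                aisle[pos + 1] = aisle.pop(pos)     # walk forward one row
        if queue and 1 not in aisle:                # next passenger steps aboard
            aisle[1] = [queue.popleft(), stow_time]
        ticks += 1
    return ticks

def passengers(rows, per_row=3):
    """Expand a row sequence into a boarding queue, per_row passengers per row."""
    return [r for r in rows for _ in range(per_row)]

front_to_back = passengers(range(1, 31))
back_to_front = passengers(range(30, 0, -1))
staggered     = passengers([r - off for off in range(5) for r in range(30, 0, -5)])

for name, order in [("front-to-back", front_to_back),
                    ("back-to-front", back_to_front),
                    ("staggered", staggered)]:
    print(f"{name:14s} {simulate_boarding(order)} ticks")
```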

AC: “Once people see that the system works, they’ll quickly learn to trust it. I see huge opportunities in business parking for big employers. If they could save each employee 10 minutes a day, think of the extra productivity. Over a year, suddenly the business would have gained a lot of time and a lot of money.”

Eloy virtual sign: country lane solution

And you’re using artificial intelligence to optimise this?

DH: “Yes. Using SUMO simulation software, we’ve created full digital twins for car parks and certain road segments. Then we add a reward function. The AI basically tries to get the most points, a bit like the 1980s computer game Frogger. It’s a type of reinforcement learning that tells cars what to do in different circumstances. We’re training it for a road layout at Millbrook at the moment.

“The holy grail is getting 100% of cars using the software, transmitting and receiving information and following the instructions. In the meantime, there are questions around gaps in the data – how much knowledge you can infer from modelling. Then there are the dynamics of network effects. An interesting one, going back to our car park efficiency, is what happens if someone decides to break the rules, perhaps by stealing someone else’s slot. You can probably use a financial incentive to overcome that.”
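
Eloy has not published its reward design, so the snippet below is no more than a sketch of what a “points” function for a parking-coordination policy might look like. The vehicle fields and the weights are invented for illustration; a real system would tune them against the digital twin.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    # Hypothetical per-vehicle observation; the field names are illustrative only.
    waiting_time: float   # seconds spent stationary during this step
    manoeuvring: bool     # currently executing a parking manoeuvre
    reached_bay: bool     # arrived at its allotted bay this step
    broke_rule: bool      # e.g. took a bay allotted to another vehicle

def step_reward(vehicles: list[VehicleState]) -> float:
    """Per-step reward for a multi-vehicle coordination policy: score points for
    completed manoeuvres and for keeping many vehicles moving at once, lose
    points for queueing and for rule-breaking (the arcade-game analogy)."""
    reward = 0.0
    reward += 10.0 * sum(v.reached_bay for v in vehicles)   # throughput bonus
    reward += 0.5 * sum(v.manoeuvring for v in vehicles)    # simultaneous manoeuvres
    reward -= 0.1 * sum(v.waiting_time for v in vehicles)   # queueing penalty
    reward -= 20.0 * sum(v.broke_rule for v in vehicles)    # bay "stealing" deterrent
    return reward
```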

AC: “One reason we really like the country lane passing solution is that it’s self-reinforcing. It’s a win-win without needing a monetary incentive. Both drivers benefit from additional information, and overall traffic flow improves as a result.”

Sounds good to me. For further info, visit the Eloy website

Zenzic CAM – connected and self-driving – Scale-Up winners all get UK government funding

Backed for self-driving success: Zenzic CAM Scale-Up Winners 2022

On 6 October, the UK self-driving organisation, Zenzic, announced the seven winners of its 2022 CAM Scale-Up Programme: Axitech; Calyo; Dromos; Eloy; Gaist; Oxford RF; and PolyChord.

The future of self-driving: Zenzic CAM Scale-Up Winners 2022

The selected start-ups and SMEs each win a share of UK government funding through the Centre for Connected and Autonomous Vehicles (CCAV), access to the world-class testing facilities of CAM Testbed UK, and investment support from delivery partner Plug and Play.

They follow in the footsteps of six 2021 winners: Albora; Exeros; Grayscale AI; R4DAR; Xtract 360; and Route Konnect (celebrated at the brilliant CAM Innovators event in March this year). And five 2020 winners: Angoka; Beam Connectivity; Eatron Technologies; Helix Technologies; and RoboK. Will there be eight winners next year?!

Connected / self-driving

Here’s a bit about this year’s cohort:

Leeds-based Axitech for its Connected Collision Management Platform “empowering automotive organisations to deliver transformational customer and claims experiences”.

Bristol-based Calyo for its next-generation AI-enabled perception system, offering “an unprecedented combination of high performance, flexibility and low cost for smart mobile robots and autonomous vehicles”.

German company Dromos – partnered in the UK with designer PriestmanGoode and engineering firms Buro Happold and RLB – for its “high-density urban passenger & freight transport” offering the “highest passenger convenience” at “half the cost/space/time”.

Hertfordshire-based Eloy – a connected and autonomous vehicle software business “focused on multi-vehicle coordination”.

Skipton-based Gaist, “Leading the way in road scape and highways information”.

Oxfordshire-based Oxford RF Solutions, offering “breakthrough radar vision for autonomy”.

And, finally, Cambridge University spinout PolyChord for its “uniquely powerful data science technology”.

The online event also featured presentations by other shortlisted companies: Conigital, Delivers.ai, Imperium, Megasets, Streetscope and Teragence.

Throw in an intro by the CCAV’s Michael Talbot, a fireside chat with Kirsty Lloyd-Dukes of Waymo and Ben Peters of FiveAI, and a closing keynote by UK Automotive Council CAM Working Group chair David Skipp, and it really was an action-packed couple of hours.

As Zenzic programme director Mark Cracknell said: “These companies are the future that’s happening now.”

Lucas Noldus Ph.D. details the latest high tech ways to measure driver behaviour in ADAS-equipped and self-driving vehicles

Connected and self-driving car safety: Noldus keeps more than an eye on distracted driving

Isn’t LinkedIn marvellous? I met Lucas Noldus Ph.D., Founder & CEO of Netherlands-based Noldus Information Technology, after he liked my interview with his Global Partnership on Artificial Intelligence (GPAI) colleague, Inma Martinez.

A few messages flew back and forth, and it transpired that he’s an expert in measuring driver behaviour, particularly driver-vehicle interactions in ADAS-equipped and self-driving vehicles. That was music to my ears, so we arranged a Zoom. What follows is the highly insightful result.

Lucas Noldus Ph.D., Founder of Noldus Information Technology

LN: “The future starts here. The world is changing. We see people living longer and there are more and more interactive devices – telephones, tablets, dashboards – with which we can interact, leading to greater risk of distraction while driving. I know personally how tempting it is to use these devices, even while trying to keep your eyes on the road.

“We already have fascinating developments in connected driving and now, with self-driving, the role of the driver changes significantly. That has triggered research institutes, universities, OEMs and tier one suppliers to pay more attention to the user experience for both drivers and passengers.

“All these experiences are important because how people perceive the safety and comfort will influence their buying decisions, and their recommendations to other potential users.

“For autonomous driving, how far will we go towards level five? What happens at the intermediate stages? Over the coming decades, driving tasks will gradually diminish but, until full autonomy, the driver will have to remain on standby, ready to take over in certain situations. How will the vehicle know the driver is available? How quickly can he take over? These are the topics we’re involved in as a technology company.

“We make tools to allow automotive researchers to keep the human in the loop. Traditionally, automotive research focused exclusively on improving the vehicle – better engines, drivetrains etc. Until recently, nobody paid much attention to the human being (with a brain, skeletal system, muscles, motor functions), who needs to process information through his sensory organs, draw the right conclusions and take actions.

“Now, these aspects are getting more attention, especially in relation to reduced capacity, whether due to a distracting device, drugs, alcohol or neurodegeneration. As you get older your response time becomes longer, your eyesight and hearing abilities reduce, as does the speed at which you can process information.

“These are the challenges that researchers in automotive are looking at concerning the role of the driver, now and in the future. If the automated or semi-automated system wants to give control back to the driver because its AI algorithms decide a situation is too complex, can the driver safely take over when he’s been doing something like reading or taking a nap? How many milliseconds does the brain need to be alert again?”

NK: “Draft legislation seems to be proceeding on a 10-second rule, but some studies say at least 30 seconds is required.”

LN: “Situational awareness – that’s a key word in this business. Not only where am I geographically, but in what situation. Oh, I’m in a situation where the road surface is very wet, there’s a vehicle just in front of me, the exit I need is near and I’m in the wrong lane. Understanding a situation like that takes time.

“If we take a helicopter view, from our perspective as a technology company, what should be measured to understand the driver behaviour? Which sensors should we use to pick up that information? If we use a microphone, a video camera, a heartbeat monitor and a link to the ECU, how do we synchronise that?

“That’s not trivial, because one sensor may be sampling at 300Hz and another at 25 frames per second. That’s something my company has specialised in over the years. We’re very good at merging data from different sources, whether it’s a driving simulator continuously spitting out data, a real car, or sensors mounted in the infrastructure.
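
DriveLab’s internals are proprietary, but the alignment problem Noldus describes can be sketched in a few lines of Python. Here two synthetic streams with their own clocks – a 300Hz physiological signal and 25fps video-derived values – are interpolated onto one master timeline, so every row of the merged table refers to the same instant.

```python
import numpy as np

# Two independently clocked streams (synthetic data for illustration):
t_physio = np.arange(0, 10, 1 / 300)               # 300 Hz timestamps, seconds
physio   = np.sin(2 * np.pi * 1.2 * t_physio)      # e.g. a pulse-like signal

t_video = np.arange(0, 10, 1 / 25)                 # 25 fps timestamps, seconds
video   = np.random.default_rng(0).random(t_video.size)  # e.g. per-frame gaze metric

# Resample onto a single master timeline (here the slower stream) by linear
# interpolation, so each row of the merged table refers to the same instant.
t_master       = t_video
physio_aligned = np.interp(t_master, t_physio, physio)
merged         = np.column_stack([t_master, physio_aligned, video])

print(merged.shape)   # (250, 3): time, physiological value, video metric
```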

“You then need to analyse that data and pull out meaningful quantitative units that give you actionable insights. Generating large matrices is no big deal; making sense of that information is the real challenge.

“For example, in dashboard design, a manufacturer might be comparing two or three displays of road quality. A driver behaviour study with our tools will give the designer a clear answer on which design leads to the least cognitive workload, the least confusion.

Noldus DriveLab

“This same technical challenge can be applied to a vast number of design objectives. The vehicle manufacturer might be looking to make incremental improvements to, say, the readability of the dashboard under certain light conditions. Or they might be working on a completely new feature, like an intelligent personal in-car assistant. A number of brands are working on that, but the concept is still relatively new.

“You cannot test every scenario on the road, it’s just too dangerous, so we work with simulator manufacturers too. On the road or in the lab, we can measure a driver’s actions with eye-tracker, audio, video, face-reader and physiology in one.”

NK: “Back to LinkedIn again, I saw a post by Perry McCarthy, the F1 driver and original Stig on Top Gear, who said something like: Simulators are getting so good these days, when you make a mistake they drop three tonnes of bricks on your legs!”

LN: “You have so-called high fidelity and low fidelity simulators – the higher the fidelity, the closer you get to the real vehicle behaviour on the road, and there are all sorts of metrics to benchmark responsiveness.

“You have simple fixed-base simulators right up to motion-based simulators which can rotate, pitch and roll, move forward, backwards, sideways and up and down. For the best ones you’re talking about 10 million euros.

“We work with OEMs, tier one suppliers, research institutes and simulator manufacturers to build in our DriveLab software platform. We also advise on which sensors are recommended, depending on what aspects of driver behaviour they want to study.

“We try to capture all the driver-vehicle interactions, so if he pushes a pedal, changes gear or turns the steering wheel, that’s all recorded and fed into the data stream. We can also record their body motion, facial expression, what they’re saying and how they’re saying it – it all tells us something about their mental state.

Noldus eye-tracker
Multi-camera eye tracker (Smart Eye)

“Eye tracking measures the point of gaze – what your pupils are focused on. In a vehicle, that might be the left, right and rear mirrors, through the windscreen or windows, around the interior, even looking back over your shoulders. To capture all that you need multiple eye-tracking cameras. If you just want to look at, for example, how the driver perceives distance to the car in front, you can do that with just two cameras rather than six.

“Eye tracking generates all sorts of data. How long the eyes have been looking at something is called dwell time. Then there’s what direction the eyes are looking in and how fast the eyes move from one fixed position to another – that’s the saccade. People doing eye tracking research measure saccades in milliseconds.

“Another important metric is pupil diameter. If the light intensity goes up, the pupil diameter decreases. Given a stable light condition, the diameter of your pupil says something about the cognitive load to your brain – the harder you have to think, the wider your pupils will open. If you’re tired, your blink rate will go up. There’s a normal natural blink rate to refresh the fluid on your eyes with a fully awake person, but if you’re falling asleep the blink rate changes. It’s a very useful instrument.
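
As a rough illustration of how such metrics fall out of the raw data (this is not Noldus code; the sampling rate and area-of-interest labels are invented), dwell time and blink rate can be pulled from a labelled gaze stream like this:

```python
from collections import Counter

SAMPLE_RATE_HZ = 60   # assumed eye-tracker sampling rate

# Toy gaze stream: one area-of-interest label per sample (None = eyes closed).
gaze = (["road"] * 150 + [None] * 6 + ["left_mirror"] * 30 +
        ["road"] * 90 + [None] * 5 + ["dashboard"] * 45)

# Dwell time: total time the gaze rested on each area of interest.
dwell_s = {aoi: n / SAMPLE_RATE_HZ
           for aoi, n in Counter(s for s in gaze if s is not None).items()}

# Blink rate: count closed-eye episodes (runs of None), scaled to per minute.
blinks = sum(1 for prev, cur in zip([""] + gaze, gaze)
             if cur is None and prev is not None)
duration_min = len(gaze) / SAMPLE_RATE_HZ / 60
print(dwell_s, round(blinks / duration_min, 1), "blinks/min")
```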

“Then there are body-worn sensors that measure physiology. It’s harder to do in-car, but in a lab people don’t mind wearing electromyography (EMG) sensors to measure muscle tension. If you’re a designer and you want to know how easy it is for an 80-year-old lady to operate a gearshift, you need to know how much muscle power she has to exert.

“We also measure the pulse rate with a technique called photoplethysmography (PPG), like in a sports watch. From the PPG signal you can derive the heart rate (HR). However, a more accurate method is an electrocardiogram (ECG), which is based on the electrical activity of the heart.


Noldus physiological data
GSR (EDA) measurement

“Further still, we measure galvanic skin response (GSR), also called electrodermal activity (EDA), the level of sweating of your skin. The more nervous you get, the more you sweat. If you’re a bit late braking approaching a traffic jam, your GSR level will jump up. A few body parts are really good for capturing GSR – the wrist, palm, fingers, and the foot.

“We also measure oxygen saturation in the blood with near infrared spectroscopy (NIRS) and brain activity with an electroencephalogram (EEG). Both EEG and NIRS show which brain region is activated.

“Another incredibly useful technique is face reading. Simply by pointing a video camera at someone’s face we can plot 500 points – the surroundings of the eyebrows, the eyelids, the nose, chin, mouth, lips. We feed this into a neural network model and classify it against a database of tens of thousands of annotated images, allowing us to identify basic emotions – happy, sad, angry, surprised, disgusted, scared or neutral. You can capture that from one photograph. For other states, like boredom or confusion, you need a series of images.
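
FaceReader itself is proprietary, so the skeleton below only mirrors the pipeline described above – landmark points in, basic-emotion probabilities out. Both functions are stand-ins rather than real models, and the weights are random.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "scared", "neutral"]

def detect_landmarks(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a facial landmark detector: returns ~500 (x, y) points.
    A real system would run a trained model; here we only fake the shape."""
    return np.zeros((500, 2))

def classify_emotion(landmarks: np.ndarray, weights: np.ndarray) -> dict:
    """Stand-in classifier: maps the flattened landmark vector to a probability
    per basic emotion via a linear layer followed by a softmax."""
    logits = weights @ landmarks.ravel()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(EMOTIONS, probs.round(3)))

frame = np.zeros((480, 640, 3))                  # one (dummy) video frame
weights = np.random.default_rng(1).normal(size=(len(EMOTIONS), 500 * 2))
print(classify_emotion(detect_landmarks(frame), weights))
```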

“These days we can even capture the heart rate just by looking at the face – tiny changes in colour resulting from the pulsation of the blood vessels in the skin. This field of face reading is evolving every year and I dare to claim that we are leading the pack with our tool.

“Doing this in the lab is one thing, doing it in a real car is another challenge, being able to keep your focus on the driver’s face and deal with variable backgrounds. Of course, cars also drive at night so the next question is can you do all this in darkness? We turned our company van into an instrumented vehicle and my sons agreed to be the guinea pigs.

“It took some work – overcoming the issue of light striking the face and causing sharp shadows, for instance – but we can now use infrared illuminators with our FaceReader software to make these measurements in full darkness.

“The turning of the head is also very important in studying distraction, for example, if the driver looks sideways for too long, or nods their head in sleepiness. When something shocks someone, we see the face change and the blood pressure rise, and these readings are synchronised in DriveLab.

“It is well proven that even things like changing radio station can be very distracting. Taking your eyes off the road for just a few seconds is dangerous. As we move to more and more connected devices, touchscreens and voice commands, minimising distraction is vital to ensure safety.”

NK: “I absolutely love this tech, but what I actually drive is a 7-year-old Suzuki Swift Sport with a petrol engine and a manual gearbox, and I quite like it that way.”

LN: “I’m doing research on cars of the future with my software, but I personally drive a 30-year-old soft-top Saab 900. That’s my ultimate relaxation, getting away from high tech for a moment.

“At Noldus, we’re constantly pushing the boundaries of research, working with top level organisations in automotive – Bosch, Cat, Daimler, Fiat, Honda, Isuzu, Land Rover, Mazda, Nissan, Scania, Skoda, Toyota, Valeo and Volvo, to name just a few – and also with the Netherlands Aerospace Centre (NLR) and the Maritime Research Institute Netherlands (MARIN).

“Our aim is to make it so that the client doesn’t have to worry about things like hardware-to-software connections – we do that for them so they can focus on their research or design challenge.”

For further info see noldus.com

Bold predictions about our driverless future by petrolhead Clem Robertson.

Meet the maverick radar expert of UK drones and driverless

Welcome to a new series of interviews with our fellow Zenzic CAM Creators. First up, Clem Robertson, CEO of R4dar Technologies.

As a keen cyclist who built his own Cosworth-powered Quantum sportscar from scratch, it’s no surprise that the founder of Cambridge-based R4dar takes a unique approach to self-driving. Indeed, his involvement can be traced directly to one shocking experience: driving down a local country lane one night, he had a near miss with a cyclist with no lights. He vividly remembers how a car came the other way, illuminating the fortunate rider in silhouette and enabling an emergency stop. It proved to be a light bulb moment.

R4dar urban scene tags

What does R4dar bring to connected and automated mobility (CAM)? 

CR: “I’d been working in radar for five or six years, developing cutting-edge radar for runways, when the incident with the cyclist got me thinking: Why could my cruise control radar not tell me something was there and, importantly, what it was? This kind of technology has been around for years – in World War II we needed to tell the difference between a Spitfire and a Messerschmitt. They placed a signal on the planes which gave this basic information, but things can be much more sophisticated these days. Modern fighter pilots use five different methods of identification before engaging a potential bogey, because one or more methods might not work and you can’t leave it to chance whether to blow someone out of the sky. The autonomous vehicle world is doing something similar with lidar, radar, digital mapping etc. Each has its shortcomings – GPS is no good in tunnels; the cost of 5G can be prohibitive and coverage is patchy; cameras aren’t much good over 100 metres or in the rain; lidar is susceptible to spoofing or misinterpretation; digital maps struggle with temporary road layouts – but together they create a more resilient system.”

How will your solutions improve the performance of self-driving cars?

CR: “Radar only communicates with itself, so it is cyber-resilient, and our digital tags can be used on smart infrastructure as well as vehicles – everything from platooning lorries to digital high vis jackets, traffic lights to digital bike reflectors. They can tell you three things: I am this, I am here and my status is this. For example, I’m a traffic light up ahead and I’m going to turn red in 20 seconds. Radar works in all weathers. It is reliable up to 250-300m and very good at measuring range and velocity, while the latest generation of radars are getting much better at differentiating between two things side-by-side. We are working with CAM partners looking to use radar in active travel, to improve safety and traffic management, as well as with fleet and bus operators. We are also working with the unmanned aerial vehicle (UAV) industry to create constellations of beacons that are centimetre-accurate, so that delivery drones can land in a designated spot in the garden and not on the dog!”
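
R4dar has not published its message format, so treat the following as a sketch of the three pieces of information Clem describes – “I am this, I am here, my status is this” – rather than the company’s actual encoding.

```python
from dataclasses import dataclass
from enum import Enum

class TagType(Enum):
    TRAFFIC_LIGHT = 1
    CYCLIST = 2
    HIGH_VIS_JACKET = 3
    PLATOONING_LORRY = 4

@dataclass
class TagMessage:
    """Illustrative 'I am this / I am here / my status is this' payload."""
    tag_type: TagType   # what the tagged object is
    position: tuple     # where it is, e.g. (latitude, longitude)
    status: str         # what its status is, e.g. a phase-change countdown

# e.g. a traffic light ahead announcing it will turn red in 20 seconds
msg = TagMessage(TagType.TRAFFIC_LIGHT, (52.2053, 0.1218), "turning red in 20 s")
print(msg)
```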

R4dar cyclists in fog

What major developments do you expect over the next 10-15 years?

CR: “Fully autonomous vehicles that don’t carry passengers will come first. There are already little robots on the streets of Milton Keynes and, especially with Covid, you will see a big focus on autonomous last-mile delivery – both UAVs and unmanned ground vehicles (UGVs). You never know, we might see delivery bots enacting a modern version of the computer game Paperboy. More and more people in urban areas with only roadside parking will realise that electric cars are tricky to charge, unless you put the chargers in the road, which is expensive. If you only need a car one or two days a month, or even for just a couple of hours, there will be mobility as a service (MAAS) solutions for that. Why would you bother with car ownership? E-scooters are one to keep an eye on – once they’re regulated they will be a useful and independent means of getting around without exercising. Town centres will change extensively once MAAS and CAM take off. There will be improved safety for vulnerable road users, more pedestrianisation, and you might see segmented use at certain times of day.”

Do you see any downsides in the shift to self-driving?

CR: “Yes! I love driving, manual gearboxes, the smell of petrol, the theatre, but you can see already that motorsport, even F1, is becoming a dinosaur in its present form. People are resistant to change and autonomous systems prompt visions of Terminator, but it is happening and there will be consequences. Mechanics are going to have less work and will have to retrain, because electric motors have fewer moving parts. Courier and haulage driving jobs will go. Warehouses will be increasingly automated. MAAS will mean fewer people owning their own cars, and automotive manufacturers will have to adapt to selling fewer vehicles – it’s a massive cliff and it’s coming at them much faster than they thought. That’s why they’re all scrambling to become autonomous EV manufacturers; it’s a matter of survival.”

R4dar lights in fog

So, to sum up…

CR: “Fully autonomous, go-anywhere vehicles are presented as the utopia, but there’s a realisation that this is a difficult goal, or at least a first world problem. There might always be a market for manned vehicles in more remote locations. A lot of the companies in this industry specialise in data, edge processing and enhanced geospatial awareness, and that will bring all kinds of benefits. How often have you driven in fog unable to see 10m in front of you? Self-driving technology will address that and many other dangers.”

Hearing bold predictions like these from a petrolhead like Clem, suddenly Zenzic’s ambitious 10-year plan seems eminently achievable.

For further info, visit the R4dar website.

Driverless car laws and insurance

The Law Commission of England and Wales is currently undertaking a far-reaching review of the legal framework for driverless cars… and insurers are keen to contribute.

The deadline for submissions to the preliminary consultation paper passed last week and AXA Insurance has highlighted what it hopes will be key themes:

1) Access to data and a transparent framework for effective data governance are fundamental for establishing liability and accurate risk modelling.

2) The legal and regulatory framework must clearly define the responsibilities of the users of autonomous vehicles (AVs) and any changes to the current road safety regime.

3) Consumers must be educated on their responsibilities, how the equipment should be used and the regulations attached to them.

Noting the Government’s recent announcement on the advanced trials for self-driving vehicles, David Williams, managing director of underwriting and technical services at AXA, said: “We are only in February but the world of driverless has started 2019 at a blistering pace.

“It might not sound as exciting as trials and tech, but as driverless cars are rapidly becoming a reality, it is right now that we need to think about the legal aspects of this technology. The consultation had 46 detailed questions on areas ranging from the responsibilities of a human user to the need for data retention.”

In its submission, the International Underwriting Association (IUA), which represents many of the world’s largest insurance companies, argued that accident data should be automatically retained.

Chris Jones, IUA director of legal and market services, said: “The technology surrounding driverless cars is developing rapidly. It is essential, therefore, that an effective framework is established governing their operation. Insurers have a vital role to play in this process.

“In order for liability to be established, vehicle data must be recorded and made available. This will include, for example, the status of the automated system, whether engaged or disengaged, the speed of the vehicle and any camera footage from the time of the accident.

“As information expands and usage grows, we are likely to see potential vulnerabilities highlighted and new risk areas emerge. We anticipate that the technology will be capable of self-reporting system errors, defects and other issues affecting road worthiness.”

In a sign of things to come, Bloomberg reports that entrepreneur Dan Peate has launched Avinew, with $5m in seed funding, offering an insurance product which monitors drivers’ use of autonomous features in cars made by Tesla, Nissan, Ford and Cadillac.

Discounts will be determined based on how the features are used, after the customer has given permission for their driving data to be accessed.

This seems a logical next step in telematics or ‘black box’ insurance, which tracks the way you drive and links it to the amount you pay.

In terms of what happens in the event of an accident, a story in the Daily Express explained how a fraudulent claim worth £6,000 was prevented using telematics.

A Renault Clio driver facing a whiplash claim was cleared by data showing that the incident occurred at under 5mph. Martyne Miller, associate director of Coverbox, said: “The data was able to successfully refute a substantial claim, saving both the motorist and the insurer money.”

Once cars are fully autonomous, Rodney Parker, associate professor of operations management at Indiana University, predicts that “liability is likely to migrate from the individual to the manufacturer and the licensers of the software that drives the AV.”

There’s also the possibility that motorists could be encouraged out of driving via the prohibitive cost of insurance.

The Law Commission was asked to look at the legal framework for driverless cars by the UK’s Centre for Connected and Autonomous Vehicles (CCAV), a joint Department for Business, Energy & Industrial Strategy (BEIS) and Department for Transport (DfT) policy team.

If these insurer submissions are anything to go by, the focus will be at least as much on the connected elements as the autonomous ones.

Will it have anything to say about who to save in no-win crash situations or who should be the data controller?

The final report is due in March 2021.

Online teach-out gives bite-sized answers to driverless car questions

If you’ve got a couple of hours to digest important driverless car questions, try this online course from the University of Michigan: Self-Driving Cars Teach-Out.

The university’s Ann Arbor campus is home to the 32-acre Mcity test facility, the first purpose-built proving ground for connected and automated vehicles (CAVs).

Carrie Morton, deputy director of Mcity, describes it as “the ultimate sandbox”, a place to foster collaboration with industry, government and academic partners.

Following a quick overview of the key on-board technologies – sensors, lidar, GPS etc – the university’s experts get into the nitty gritty of their specialisms.

Liz Gerber, professor of public policy, sets the scene, saying: “The promise of driverless vehicles is super exciting for communities and for society. We talk about the promise of reduced congestion, increased mobility options and enhanced safety and convenience.”

Professor Matthew Johnson-Roberson discusses the fragility of artificial intelligence (AI) in dealing with new systems, the challenge of getting from 95% to 99.99% accuracy, and the importance of failing gracefully in the event of an error.

Professor Dan Crane looks at balancing competition, differentiation and standardisation, asserting that we should encourage “a thousand flowers to bloom”, because no one yet knows which technologies will work best.

Ian Williams, inaugural fellow for the Law & Mobility Program, addresses privacy concerns and the ability to change settings. He also raises the possibility of motorists being encouraged out of driving via the prohibitive cost of insurance.

Big picture thinking comes from Alex Murphy, assistant professor in sociology, who considers the profound impacts of a lack of transportation – from the kinds of jobs people can take to the schools they can access. “It has huge implications for inequality,” she says.

Lionel Robert, associate professor in the School of Information, predicts that we’ll see level five, fully autonomous, go anywhere CAVs “in our lifetime”. He focusses on giving consumers “accurate trust” in the technology, not under- or over-trust.

One reassuring point which crops up time and again is the continuing need for humans – from John the safety conductor on the Mcity Shuttle, to roles variously described as truck operators, fleet attendants, concierges and guides.

This evolution could potentially help to offset the fear that driverless technology will immediately put people out of a job, a belief which has been blamed for attacks on self-driving test cars.

The potential of CAVs to help the blind community was also particularly thought-provoking.

CASE study: connected, autonomous, something and electric

The motor industry is notoriously fond of an acronym and here’s a new one which might just catch on: CASE.

In this case, C stands for connected, A for autonomous and E for electric, but there’s disagreement about what the S should stand for.

Vehicle manufacturer Daimler goes for connected, autonomous, shared and electric, although if you dig a bit deeper into their website they keep their options open with “shared and services”.

“Each of these has the power to turn our entire industry upside down,” said Dr Dieter Zetsche, chairman of the board of Daimler AG. “But the true revolution is in combining them in a comprehensive, seamless package.”

Over at car parts maker ZF, Andy Whydell, vice president of systems product planning for active and passive safety, goes for connected, autonomous, safe and electric.

For explanations of other vehicle-related terms and acronyms, see our Cars of the Future glossary.