The key to the future of self-driving is education, education, education, says Millbrook’s Stoker.

On track and in virtual space, Millbrook tests cars of the future

Our Zenzic CAM Creator series continues with Peter Stoker, Chief Engineer for Connected and Autonomous Vehicles at Millbrook.

Part of CAM Testbed UK, Millbrook Proving Ground in Bedfordshire boasts 700 acres of private roads on which to develop and test connected and autonomous vehicle (CAV) technologies. As Chief Engineer, Peter Stoker is right at the forefront of self-driving in the UK.

Peter Stoker
Peter Stoker, Chief Engineer for Connected and Autonomous Vehicles at Millbrook

Please can you outline Millbrook’s work on connected and automated mobility?

“My primary role is to bring focus to two testbeds, our CAV testbed and our 5G testbed. We are not a purpose-built CAV testbed – we have safety, propulsion and conventional vehicle test facilities too – so CAV is something we’ve blended into the existing business.

“For the CAV testbed, we partnered with the UK Atomic Energy Authority (UKAEA), particularly the Remote Applications in Challenging Environments (RACE) division, to provide a controlled urban environment. We have three open source StreetDrone vehicles and miles of track with targets for very precise measurements, accurate to 1-2cm. We offer safety driver training and also have a simulation environment for driver-in-the-loop and hardware-in-the-loop testing. The whole idea is to fail in private, not in public, and to progress, to evolve out of the testbeds and on to open roads.

“The 5G testbed is a completely separate consortium, backed by the Department for Digital, Culture, Media and Sport (DCMS). We have 59 masts looking at all types of connectivity and I’d say the millimetre wave at 70GHz is currently the most interesting.”

Millbrook graphic
Millbrook Proving Ground graphic

What major shifts in UK road transport do you expect over the next 10 years? 

“Getting the crystal ball out, I see increased use of connectivity in existing vehicles and some very interesting new use cases – buses connected to city networks, video analytics from cameras, smart ambulances streaming live data, autonomous deliveries on campuses. What I don’t see within 10 years is millions of privately owned driverless cars. That will start in the luxury sector but to begin with it will be more about transporting goods.”

How do you see the testing framework for CAVs developing?

“There’s a lot of simulation in the automotive world – crash testing, fatigue testing, computational fluid dynamics. These days, manufacturers are developing whole vehicles before building a prototype. You have to have a good simulation on a good simulator and there’s an interesting shift that needs to happen on regulation. It’s early days on that, but it’s essential.

“The strength of virtual space is that you can run hundreds of scenarios in machine time – not only set up complicated scenarios that would take days with real cars, but actually speed up the process so it runs faster than real time. The national scenario database is already really good and regulation will move to being a mixture of real and virtual certification – global, European, UK and perhaps even city-specific. We are happy to advise, but don’t set policy.”

What are the biggest challenges in the shift to self-driving and how can these risks be mitigated?

“The key to the future of self-driving is education, education, education – for everyone, the public, vehicle manufacturers, the aftermarket, recovery operators. We have to work on the terminology – autonomous, driverless, CAV, CAM – it’s confusing, even to people who know what they’re talking about.

“At the moment, we’re making it harder to understand, not easier. We’re in a really grey area of transition with different trade names for systems. There’s a lot of groundwork needed to prepare people; the brilliant website mycardoeswhat.org, for example, does a great job of trying to explain it.

“If you get into a hire car, you need to have the right expectation of what it does and what it doesn’t do. If you buy a new car, you should read the manual, but how many people do? Especially with Covid, more cars are being delivered with minimal interaction – it’s a case of ‘there’s the key, where’s the station?’. Too often, the customer handover just isn’t there.

“How are garages, the aftermarket and the amber light sector going to deal with all this? Basic questions like: how do you put it in neutral? ADAS has already led to huge changes in training and skill sets – how to calibrate and monitor these systems.

“We haven’t talked about over-the-air (OTA) updates, cameras embedded in the tarmac or even electrification – there’s a huge amount of things! How do you learn about them? Hopefully in testing rather than in crash situations.”

For further info, visit www.millbrook.co.uk

Humanising Autonomy uses behavioural psychology and computer algorithms to make cities safer for pedestrians and cyclists.

Using cameras and AI to protect vulnerable road users

Our Zenzic CAM Creator series continues with Raunaq Bose, co-founder of Humanising Autonomy.

Before establishing predictive artificial intelligence (AI) company Humanising Autonomy in 2017, Raunaq Bose studied mechanical engineering at Imperial College London and innovation design engineering at the Royal College of Art. Focusing on the safety of vulnerable road users, Humanising Autonomy aims to redefine how machines and people interact, making cities safer for pedestrians, cyclists and drivers alike.

RB: “Our model is a novel mix of behavioural psychology, deep learning and computer algorithms. We work with OEMs and Tier 1 suppliers on the cameras on vehicles, with the aftermarket on retrofitted dashcams, and also with infrastructure. Our software works on any camera system to look for interactions between vulnerable road users, vehicles and infrastructure in order to prevent accidents and near misses. While most AI companies use black box systems where you can’t understand why decisions are made, we set out to make our models more interpretable, ethically compliant and safety friendly.

“When it comes to questions like ‘Is this pedestrian going to cross the road?’, we look at body language and factors like how close they are to the edge of the pavement. We then put a percentage on the intention. Take distraction, for example: we cannot see it, but we can infer it. Are they on the phone? Are they looking at the oncoming vehicle? Is their view blocked? These are all behaviours you can see and our algorithm identifies them and puts a numerical value on them. So we can say, for example, we’re 60% sure that this pedestrian is going to cross. This less binary approach is important in building trust – you don’t want lots of false positives, for the system to be pinging all the time.
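As a rough illustration of the approach Bose describes – observable cues scored numerically and combined into a single, non-binary crossing probability – here is a minimal Python sketch. The cue names, weights and logistic combination are assumptions for illustration, not Humanising Autonomy’s actual model.

```python
import math

# Illustrative weights for observable cues (assumed, not Humanising Autonomy's real model)
CUE_WEIGHTS = {
    "near_kerb_edge": 1.2,       # standing close to the edge of the pavement
    "facing_road": 0.8,          # body/head oriented towards the carriageway
    "on_phone": -0.6,            # distraction cue: looking at a phone
    "looking_at_vehicle": -0.9,  # has already seen the oncoming vehicle
}

def crossing_probability(cues: dict[str, bool], bias: float = -0.5) -> float:
    """Combine boolean behavioural cues into a crossing-intention score in [0, 1]."""
    score = bias + sum(w for cue, w in CUE_WEIGHTS.items() if cues.get(cue))
    return 1 / (1 + math.exp(-score))  # logistic squashing to a percentage-style output

# Example: pedestrian at the kerb, facing the road, not visibly distracted
p = crossing_probability({"near_kerb_edge": True, "facing_road": True})
print(f"Crossing intention: {p:.0%}")  # prints "Crossing intention: 82%"
```

A percentage-style output like this is what lets the sensitivity be tuned so the system isn’t “pinging all the time”.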

“One of the main things we’re likely to see over the next decade is increased use of micromobility, such as cycling and e-scootering. At the same time you will see more communication between these different types of transportation, and also with vehicles and infrastructure. The whole point of ADAS is to augment the driver’s vision, to reduce blind spots and, if necessary, take control of the vehicle to avoid a shunt. Then there’s the EU agreement that by 2022 all buses and trucks must have safety features to detect and warn of vulnerable road users.

“We currently only look at what’s outside the vehicle, but with self-driving there will be monitoring of the cabin. In terms of privacy, we have a lot of documentation about our GDPR processes and how we safeguard our data. Importantly, we never identify people; for example, we never track a particular individual between camera streams. We look to the future with autonomous cars but for now we’re focused on what’s on the road today.”

For further info, visit humanisingautonomy.com.

Dr Charlie Wartnaby says there’s an industry consensus that Level3 self-driving is not reasonable if it requires quick driver intervention.

Self-driving world first: multi-car cooperative crash avoidance

Our Zenzic CAM Creator series continues with Dr Charlie Wartnaby, chief engineer at Applus IDIADA.

Way back in 2019 we covered IDIADA’s role in the construction of the new CAVWAY testing facility, part of the company’s continuing investment in CAV. With a PhD in physical chemistry from the University of Cambridge, Charlie Wartnaby was technical lead for the ground-breaking Multi-Car Collision Avoidance (MuCCA) project.

Charlie Wartnaby, chief engineer at Applus IDIADA
Charlie Wartnaby, chief engineer at Applus IDIADA

CW: “Certainly the funding from the Centre for Connected and Autonomous Vehicles (CCAV) for MuCCA and CAVWAY was a big win for us. Traditionally, we’d focused on automotive electrics and engine management, but we could see there was all this exciting CAV work. Now we’re working with an OEM I can’t name to run a field operational test using our IDAPT development tool – a high-performance computer with GPS and car-to-car communications – as a spin-off from MuCCA.

“With the MuCCA project, we think we achieved a world first by having multiple full-sized vehicles do real-time cooperative collision avoidance. We still have the cars for further R&D when time, budget and Covid allow.

IDIADA’s Multi-Car Collision Avoidance (MuCCA) project

“In the UK, we’re focussed on building a new proving ground (CAVWAY) near Oxford, which should open in 2021. There’s also our CAVRide Level4 taxi project, at our headquarters near Barcelona. CAVRide shares some of the technology developed for MuCCA and they’ve done some really interesting vehicle-in-the-loop testing, having the real vehicle avoid virtual actors in a simulation environment.

“In the short term, we’re really working hard on the C in CAV. Connected vehicles offer massive safety and efficiency improvements, for example, by warning about stopped vehicles or advising on speed to get through traffic lights on green. There’s a bit of a VHS versus Betamax situation, with both WiFi-based short-range communications and the C-V2X 5G-based protocol, so we’ve upgraded IDAPT to support both.
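Supporting two rival standards side by side, as Wartnaby describes for IDAPT, typically means hiding both radios behind a common interface so application code stays protocol-agnostic. This is a hypothetical Python sketch of that pattern, not IDIADA’s actual software:

```python
from abc import ABC, abstractmethod

class V2XRadio(ABC):
    """Common interface over rival V2X standards, so application code is radio-agnostic."""
    @abstractmethod
    def broadcast(self, payload: bytes) -> None: ...

class WifiV2XRadio(V2XRadio):
    """Stands in for the WiFi-based short-range communications stack."""
    def broadcast(self, payload: bytes) -> None:
        print(f"[WiFi-based] sending {len(payload)} bytes")

class CV2XRadio(V2XRadio):
    """Stands in for the cellular C-V2X stack."""
    def broadcast(self, payload: bytes) -> None:
        print(f"[C-V2X] sending {len(payload)} bytes")

def warn_stopped_vehicle(radios: list[V2XRadio]) -> None:
    """Send the same warning on every fitted stack until a 'winner' emerges."""
    msg = b"STOPPED_VEHICLE lane=2"
    for radio in radios:
        radio.broadcast(msg)

warn_stopped_vehicle([WifiV2XRadio(), CV2XRadio()])
```

Until the VHS-versus-Betamax question resolves itself, broadcasting on both stacks is the safe engineering hedge.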

“Personally I think that while heroic work by the big players shows robotaxi applications are feasible, economic viability is a long way off, 2030 maybe. Watch the latest Zoox and Waymo videos from America – they’re mesmerising! No way is that kind of tech going to be installed in private cars any time soon because it’s eye-wateringly expensive. Think about the costs involved in making every taxi driverless – it’d be out of all proportion to the driver salaries saved, especially considering backup teleoperators and maintenance and charging personnel.

“These big self-driving companies aren’t operating in the UK yet, but we do have very successful smaller players with intellectual property to sell. The UK government has been supporting a good number of R&D projects, via the CCAV and UK Research and Innovation (UKRI), and the regulatory environment has been reasonably friendly so far.

“I feel the first practical applications are likely to be low-speed shuttle buses and small autonomous delivery droids, but trucking is a very important area. If lorry drivers were permitted to stop their tachographs while napping in the back of the cab once on the motorway – only clocking up hours for parts of long journeys – that would make a viable economic case for a Level4 operating design domain (ODD) of ‘just motorways’, which is harder to justify merely as a convenience feature in private cars.

“In terms of current tech, emergency lane keeping (ELK) systems, which stop you drifting, are a major breakthrough, requiring cameras, sensors and autonomous steering. I welcome the road safety benefits. However, if drivers engage automation systems like ALKS (automated lane keeping) by habit, for sure their skills will be affected. Perhaps there’s a case for the system enforcing some periods of manual driving, just as airline pilots perform manual landings to stay in practice even in planes that can land themselves.

“Concerns about timely handover are well-founded and I think there’s an industry consensus now that Level3 is not reasonable if it requires quick driver intervention. We see up to 20 seconds before some unprepared drivers are properly in control when asked to resume unexpectedly. It really requires that the vehicle can get itself into (or remain in) a safe state by itself, or at least there needs to be a generous takeover period. The difference between L3 and L4 is that the latter must always be able to achieve that safe state.”

For further info, visit www.idiada.com

Vivacity Labs founder backs the citizen-first vision of 21st century privacy.

Time for a grown-up conversation about cameras, AI, traffic flow and privacy

Our Zenzic CAM Creator series continues with the founder of Vivacity Labs, Mark Nicholson.

Vivacity uses sensors, cameras and artificial intelligence (AI) to provide “up-to-the-minute data on urban movement”, helping local councils to promote active travel, improve safety and reduce congestion. Big Brother, you say? Well, it’s 2020 not 1984 and CEO Mark Nicholson is very happy to have that debate.

MN: “As the transport network becomes more complicated, local authorities need more powerful tools. Tech giants have invaded the ecosystem, and when you’re dealing with Uber and driverless cars, sending someone out with a clipboard just isn’t going to cut it. We bring new technology which tells them about their transport, so they can adapt and gain control over the ecosystem.

“We started with sensors and then video-based sensors, generating huge data sets and better quality data. We’ve looked at everything from cyclists undertaking to lockdown journey times and asked: how can we use this data to make the road system more efficient? The next phase is autonomous vehicles, because that ecosystem needs to work with both infrastructure and other road users.

“Privacy is not just a key issue in self-driving but in the whole smart city. There are basically two visions – the Chinese and the European. The Chinese vision is very invasive, it’s 1984 and that’s the point. The alternative is the European vision, with the General Data Protection Regulation (GDPR). For a while it looked like there might be a third, a corporate American vision. Google were running a smart city project in Canada, but it didn’t work out so we’re back to two models.”

If you don’t know about the Quayside project in Toronto, a much-shared Guardian article from 2019 warned of surveillance capitalism, data harvesting and the possibility that algorithms could be used to nudge behaviour in ways that favour certain businesses. You can read it here or, er, Google it.

MN: “We’re very much on the European, privacy-centric, citizen-first side – an ecosystem that gives the benefits of mass data without the costs to privacy. All our data is anonymised at source, everything. Each camera or sensor unit has its own processor on board which uses AI to extract information – for example, what are the road users? The imagery is discarded within a few milliseconds; all we keep is the data. We recently looked at how socially distanced people were in Kent and, although no personal data was collected, it caused a bit of controversy.”

It did indeed. “Big Brother is watching your social distancing: Fury as traffic flow cameras are secretly switched to monitor millions of pedestrians in government-backed Covid project”, screamed the headline in the Daily Mail. We’d better get back to self-driving.

MN: “Over the last couple of years the hype around driverless cars has died down. There’s been a recognition that autonomous vehicles are not as close as Elon Musk promised. The technology is progressing though. They can drive quite well on motorways and in quiet areas, but in busy, congested areas they struggle.

“What would happen if you rolled out driverless cars today? My suspicion is they would probably perform to about the same level as human drivers. The question is: Are we happy with systemic risk rather than personal risk? Can we engineer out that risk? Can we make the infrastructure intelligent enough so it works with vehicles in even the most challenging situations?

“The best way to end the no-win scenario is to have enough data to dodge it. Most of these incidents come about due to an unforeseen element, such as a pedestrian stepping out, a cyclist skipping a red light or someone speeding round a corner. If the vehicle knows about it in advance, the trolley problem never occurs. For me it’s about having the data earlier, and how we, as representatives of infrastructure, can help to give cars that information.”
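As a technical aside, the anonymise-at-source design Nicholson described earlier – on-device AI extracts aggregate information and the imagery is discarded within milliseconds – can be sketched roughly as below. The detector and categories are stand-ins, not Vivacity’s actual code.

```python
from collections import Counter

def process_frame(frame, detector) -> Counter:
    """Run detection on the sensor unit itself and return only aggregate counts.
    The raw frame never leaves the device."""
    detections = detector(frame)   # e.g. ["pedestrian", "cyclist", "car", ...]
    counts = Counter(detections)   # anonymous aggregate: no images, no identities
    del frame                      # drop the reference so the raw imagery can be freed immediately
    return counts

# Example with a stub detector standing in for the on-board AI model
stub_detector = lambda frame: ["pedestrian", "pedestrian", "cyclist"]
print(process_frame(object(), stub_detector))  # Counter({'pedestrian': 2, 'cyclist': 1})
```

Only the counts ever reach the transport authority, which is what makes the mass data useful without the privacy cost.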

For further info, visit vivacitylabs.com.

Thomas Sors says connectivity is the essential foundation for autonomous vehicles.

Putting the C in Connected and Automated Mobility

Our Zenzic CAM Creator series continues with Beam Connectivity CEO, Thomas Sors.

Having previously led Dyson’s Connected Vehicle programme, Thomas Sors launched Beam Connectivity in January this year. It might be one of the newest cogs in the UK automotive wheel, but its Connected Vehicle as a Service (CVaaS) product is already attracting interest from car, freight and public transport manufacturers.

TS: “When it comes to connected and automated mobility (CAM) and connected and autonomous vehicles (CAVs), we see a lot of attention on the ‘A’ part, but not so much on the ‘C’, which is our focus. Connectivity is the essential foundation for automation later on, but at the moment it often doesn’t perform very well. For example, OEM apps sometimes get two point something star ratings due to problems with the initial connection and latency.

“Our CVaaS solution provides a better user experience and can unlock the value of data generated by vehicle fleets. It offers a new way of getting data from vehicles to the cloud and back-end, or of sending data into the vehicle. Because we’re brand new, there are no issues with legacy software – privacy by design and security by design are embedded all the way through our process, not an afterthought or a bolt-on. That starts with ensuring that we fulfil General Data Protection Regulation (GDPR) access rights, including the right to be forgotten.
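As a toy illustration of one of those GDPR obligations – the right to be forgotten – here is a minimal Python sketch of an erasure routine run across back-end stores. The store and its API are hypothetical, not Beam Connectivity’s platform.

```python
class TelemetryStore:
    """Toy in-memory stand-in for one back-end table of vehicle data."""
    def __init__(self, name: str):
        self.name = name
        self.records: list[tuple[str, dict]] = []  # (owner_id, payload) pairs

    def delete_where(self, owner_id: str) -> int:
        """Remove every record belonging to owner_id; return how many were purged."""
        before = len(self.records)
        self.records = [(o, p) for o, p in self.records if o != owner_id]
        return before - len(self.records)

def forget_user(user_id: str, stores: list[TelemetryStore]) -> None:
    """Honour a GDPR erasure request by purging the user from every store."""
    for store in stores:
        print(f"{store.name}: purged {store.delete_where(user_id)} record(s)")

trips, diagnostics = TelemetryStore("trips"), TelemetryStore("diagnostics")
trips.records.append(("user-42", {"km": 12.3}))
forget_user("user-42", [trips, diagnostics])
```

Designing this in from day one is far easier than bolting it onto legacy software later, which is the point Sors makes above.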

“I’ve seen quotes that by 2030 all cars will have some form of connectivity. eCall [the EU initiative to enable cars to automatically contact the emergency services in the event of a serious accident] is mandatory for new cars, and that’s just the start. It’s about transparency and explaining the benefits. If you give people the option to say ‘yes, take this data in order for me to get feature X’, then that builds trust.

“From the manufacturer or fleet operator perspective, prognostics is an interesting area – fixing things before they go wrong. Then there’s the ability to understand usage patterns and perform over the air (OTA) updates. Another thing we’re already seeing is support to improve the driving experience, for example, vehicle to infrastructure communications being used to reduce congestion. We expect that to build up quickly over the next 2-4 years.

“We’re only a few months in but we’ve already deployed an end-to-end system to several vehicles and we’re looking to do more and more. It’s not unusual for manufacturers to spend 12-18 months building a connected vehicle solution, so our platform can really speed up their development lifecycle. Why build a connectivity team when we’ve already done it very effectively?

“As to self-driving, the technology is leading the way and moving along quickly, so the focus needs to be on standards, legislation and public acceptance.”

For further info, visit beamconnectivity.com.

Autoura’s Bainbridge says China has won the self-driving engineering race and Level4 is near-term in the UK.

UK urged to concede the self-driving engineering race and focus on the business opportunity

Our Zenzic CAM Creator series continues with the CEO of Autoura, Alex Bainbridge.

Since selling online reservation service TourCMS in 2015, tourism entrepreneur Alex Bainbridge has been working on his next industry gamechanger: Sahra the sightseeing robot – a digital assistant, concierge and tour guide. Sahra is already available as an app for tourists on foot, but combine her with a driverless car and you get an AI holiday rep and your own personal tour bus in one, all completely human-free. Offering a different perspective from our other Zenzic CAM Creators, the affable Bainbridge has words of wisdom and some brutal home truths for the UK self-driving industry.

AB: “Over the last 20 years we’ve seen web, mobile and social dramatically change the sightseeing industry. These inventions were forced upon us and we’ve had to grapple with them. Self-driving is next and governments around the world are rushing to legalise it. A lot of the focus here is still on engineering, but China has already won that race. The faster we all accept that, the sooner we as a nation can shift to winning the commercialisation race. We’re driven by the money-making opportunity.

Sightseeing Autonomous Hospitality Robot by Autoura – Sahra

“50% of sightseeing is by vehicle and these new automated forms of transport will bring change, whether it’s an e-scooter for a city tour, or a self-driving car for a vineyard visit or road trip. I’m interested in the pure leisure uses and the customer experience, not deliveries or getting from A to B. We’ve built a digital platform that can work on any robo-taxi. Google, Apple, Amazon and Baidu will all run self-driving fleets, and they’re going to have to compete with Uber and Lyft. We want the customer experience layer.

“Most urban vehicle-based sightseeing is currently done by hop-on hop-off buses, but major cities are beginning to ban them – either directly, by closing roads, or indirectly, by not allowing them to park. The transition to autonomous will start with CAVs running routes like buses. This means we can start trading from Level4, and we only need a few vehicles to begin with. We’re a step away from the hardware but look at Waymo in Phoenix and Cruise in San Francisco – this is near-term and we’re going to see some big changes in the second half of 2021.”

For further info on Autoura’s “in-destination travel experiences”, see autoura.com.

California-based Xona Space is working on new generation Low Earth Orbit GPS for self-driving cars.

Next generation: self-driving GPS is out of this world

Our Zenzic CAM Creator series continues with the Co-Founder and CEO of Xona Space, Brian Manning.

Compared to the familiar British reserve, California-based Xona Space is from a different planet. This self-declared “group of space ninjas, engineers, GPS nerds, motorcycle racers and adventurers” has helped to put over 50 vehicles in space and has published over 50 scientific papers on navigation technology. That’s handy, because today’s sat navs are creaking under the sky-high requirements of self-driving cars. Brian Manning says his company’s new Pulsar positioning, navigation and timing (PNT) service will provide the necessary security, availability and accuracy.

BM: “We’re primarily working on new generation GPS from Low Earth Orbit (LEO) – something much more secure, precise and resilient. It will shore up a lot of issues. GPS has been phenomenal – it has given a lot of value for a long time – but people are now trying to use it for applications it wasn’t designed for. It’s tough to get where you’re going when you don’t know where you are.”

A reference perhaps to the GPS spoofing incident at the 2019 Geneva Motor Show, when cars from a host of manufacturers displayed their location as Buckingham, England, in 2036! Apparently Americans also do sarcasm now. We swiftly move on to realistic timescales for the SAE levels of driving automation.

BM: “Ubiquitous Level5 is probably still far off, but personally I think we’ll start seeing deployments in contained environments within five years. I came from SpaceX so I know that with the right team you can get an amazing amount done in a very short time. A big part of Xona’s focus is to get Level5 tech out of the contained environments and also to work in bad weather and more rural environments, where current systems struggle. Rather than which sectors will be early adopters, I look more geographically – to highways with autonomous lanes. That said, it will probably be more on the freight side first because there are more safety standards involved when you have passengers on board.”

We were wondering which might come first, Level5 or a winner in the Presidential election, but that’s all sorted now, isn’t it?

For further info, visit Xonaspace.com

Bold predictions about our driverless future by petrolhead Clem Robertson.

Meet the maverick radar expert of UK drones and driverless

Welcome to a new series of interviews with our fellow Zenzic CAM Creators. First up, Clem Robertson, CEO of R4dar Technologies.

A keen cyclist who built his own Cosworth-powered Quantum sportscar from scratch, the founder of Cambridge-based R4dar unsurprisingly takes a unique approach to self-driving. Indeed, his involvement can be traced directly to one shocking experience: driving down a local country lane one night, he had a near miss with a cyclist with no lights. He vividly remembers how a car came the other way, illuminating the fortunate rider in silhouette and enabling an emergency stop. It proved to be a light bulb moment.

R4dar urban scene tags
R4dar urban scene tags

What does R4dar bring to connected and automated mobility (CAM)? 

CR: “I’d been working in radar for five or six years, developing cutting-edge radar for runways, when the incident with the cyclist got me thinking: Why could my cruise control radar not tell me something was there and, importantly, what it was? This kind of technology has been around for years – in World War II we needed to tell the difference between a Spitfire and a Messerschmitt. They placed a signal on the planes which gave this basic information, but things can be much more sophisticated these days. Modern fighter pilots use five different methods of identification before engaging a potential bogey, because one or more methods might not work and you can’t leave it to chance whether to blow someone out of the sky. The autonomous vehicle world is doing similar with lidar, radar, digital mapping etc. Each has its shortcomings – GPS is no good in tunnels; the cost of 5G can be prohibitive and coverage is patchy; cameras aren’t much good over 100 metres or in the rain; lidar is susceptible to spoofing or misinterpretation; digital maps struggle with temporary road layouts – but together they create a more resilient system.”

How will your solutions improve the performance of self-driving cars?

CR: “Radar only communicates with itself, so it is cyber-resilient, and our digital tags can be used on smart infrastructure as well as vehicles – everything from platooning lorries to digital high vis jackets, traffic lights to digital bike reflectors. They can tell you three things: I am this, I am here and my status is this. For example, I’m a traffic light up ahead and I’m going to turn red in 20 seconds. Radar works in all weathers. It is reliable up to 250-300m and very good at measuring range and velocity, while the latest generation of radars are getting much better at differentiating between two things side-by-side. We are working with CAM partners looking to use radar in active travel, to improve safety and traffic management, as well as with fleet and bus operators. We are also working with the unmanned aerial vehicle (UAV) industry to create constellations of beacons that are centimetre-accurate, so that delivery drones can land in a designated spot in the garden and not on the dog!”
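The three-part message Robertson describes – “I am this, I am here and my status is this” – maps naturally onto a small structured payload. This is a hypothetical sketch of such a tag message, not R4dar’s actual format:

```python
from dataclasses import dataclass

@dataclass
class RadarTagMessage:
    """Hypothetical payload for a digital radar tag: identity, position, status."""
    tag_type: str  # "I am this" – e.g. traffic light, cyclist, high-vis jacket
    lat: float     # "I am here"
    lon: float
    status: str    # "my status is this"

msg = RadarTagMessage(tag_type="traffic_light", lat=52.205, lon=0.119,
                      status="turning red in 20s")
print(msg)
```

However it is encoded on the radar return, the appeal is that the same three fields cover everything from a platooning lorry to a drone landing beacon.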

R4dar cyclists in fog
R4dar cyclists in fog

What major developments do you expect over the next 10-15 years?

CR: “Fully autonomous vehicles that don’t carry passengers will come first. There are already little robots on the streets of Milton Keynes and, especially with Covid, you will see a big focus on autonomous last mile delivery – both UAVs and unmanned ground vehicles (UGVs). You never know, we might see delivery bots enacting a modern version of the computer game Paperboy. More and more people in urban areas with only roadside parking will realise that electric cars are tricky to charge, unless you put the chargers in the road, which is expensive. If you only need a car one or two days a month, or even for just a couple of hours, there will be mobility as a service (MAAS) solutions for that. Why would you bother with car ownership? E-scooters are one to keep an eye on – once they’re regulated they will be a useful and independent means of getting around without exercising. Town centres will change extensively once MAAS and CAM take off. There will be improved safety for vulnerable road users, more pedestrianisation, and you might see segmented use at certain times of day.”

Do you see any downsides in the shift to self-driving?

CR: “Yes! I love driving, manual gearboxes, the smell of petrol, the theatre, but you can see already that motorsport, even F1, is becoming a dinosaur in its present form. People are resistant to change and autonomous systems prompt visions of Terminator, but it is happening and there will be consequences. Mechanics are going to have less work and will have to retrain because electric motors have fewer moving parts. Courier and haulage driving jobs will go. Warehouses will be increasingly automated. MAAS will mean fewer people owning their own cars and automotive manufacturers will have to adapt to selling fewer vehicles – it’s a massive cliff and it’s coming at them much faster than they thought. That’s why they’re all scrambling to become autonomous EV manufacturers; it’s a matter of survival.”

R4dar lights in fog
R4dar lights in fog

So, to sum up…

CR: “Fully autonomous, go-anywhere vehicles are presented as the utopia, but there’s a realisation that this is a difficult goal, or at least a first world problem. There might always be a market for manned vehicles in more remote locations. A lot of the companies in this industry specialise in data, edge processing and enhanced geospatial awareness, and that will bring all kinds of benefits. How often have you driven in fog unable to see 10m in front of you? Self-driving technology will address that and many other dangers.”

Hearing bold predictions like these from a petrolhead like Clem, suddenly Zenzic’s ambitious 10-year plan seems eminently achievable.

For further info, visit the R4dar website.

Aside from recognising Cars of the Future as a CAM Creator, Zenzic’s new Roadmap features other notable developments…

Zenzic unveils updated UK Connected and Automated Mobility Roadmap

On Tuesday 20 October 2020, Zenzic unveiled the latest (second) version of its UK Connected and Automated Mobility Roadmap to 2030.

Bringing together government, industry and academia, Zenzic is tasked with establishing the UK as a world leader in self-driving.

Aside from the headline news that Cars of the Future was recognised as an official CAM Creator (sorry, had to get that in), there were notable developments in relation to regulation, safety and public perception.

During a virtual launch event (due to the ongoing Covid plague), Daniel Ruiz, CEO of Zenzic, outlined the “phenomenal progress” made in the 12 months since the launch of the first Roadmap – for instance, the first self-driving vehicle testing safety standards milestone is on track to be reached by the end of this year.

He also highlighted the increased support for local governments on connected vehicles and emphasised the “need to continue to invest”.

Ruiz then handed over to Mark Cracknell, head of technology at Zenzic and architect of the Roadmap, who praised the UK’s collaborative approach over that of other countries where tech companies push the agenda.

“The Roadmap details the route to delivering the vision,” he said. “We are only one year into a 10-year plan and we are in a great position, with activity and progress reflected in the real world.”

Cracknell then joined a panel discussion, moderated by Alex Kreetzer of Auto Futures, with Imogen Pierce, head of experience strategy at Arrival (formerly of Jaguar), and Dr Richard Fairchild, operations director at Aurrigo. Given the participants, it understandably focussed on mobility as a service (MAAS) and first and last mile transport solutions.

It was unfortunate that this event coincided with Bauer’s Virtual Smart Transport Conference. Surely the driverless highway is not yet so congested that organisations have to tread on each other’s toes?

Anyway, if you’d like to explore the new Roadmap, you are very welcome to do so here.

Space age navigation for driverless cars

In a fascinating new article, published on 18 September 2020, NASA explained how its laser-based lunar landing technology could be adopted by self-driving cars.

Facing many of the same navigational and hazard avoidance challenges, NASA brought sensors, cameras, algorithms and high-performance computers together under the Safe and Precise Landing Integrated Capabilities Evolution (SPLICE) project.

Considering Mars is approximately 34 million miles from Earth at its closest, and NASA successfully landed the Curiosity rover within a 12×4 mile target area, autonomous vehicle developers would be wise to pay attention.

What’s more, NASA intends to be even more precise in future, with a new variation called Navigation Doppler Lidar (NDL), which detects the movement and velocity of distant objects, as well as a spacecraft’s own motion relative to the ground.
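For context, Doppler lidar recovers speed from the frequency shift of the reflected beam: for wavelength λ and measured shift f_d, the line-of-sight speed is v = f_d × λ / 2, the factor of two arising because the light travels out and back. A quick illustrative calculation in Python, with assumed example values:

```python
# Line-of-sight speed from a Doppler lidar frequency shift: v = f_d * wavelength / 2
wavelength = 1.55e-6   # metres – a common lidar wavelength (illustrative value)
doppler_shift = 2.0e6  # Hz – measured shift of the returned beam (illustrative value)

speed = doppler_shift * wavelength / 2
print(f"{speed:.2f} m/s")  # ≈ 1.55 m/s, roughly walking pace for a pedestrian
```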

Steve Sandford, former director at NASA’s Langley Research Center and now Chief Technology Officer at Psionic, said: “Doppler lidar’s high resolution can distinguish between objects that are only several inches apart and even at a distance of several hundred feet.” Potentially perfect for detecting, for instance, a pedestrian crossing a road.

For further info, read the original NASA article.