
Steven Spieczny of Kognic gives us the lowdown on sensor-fusion annotation for ADAS and self-driving…


Self-driving safety accelerator: Kognic turns sensor-fusion into datasets you can trust


Launched five years ago and already working with household-name OEMs, Gothenburg-based Kognic is very much one to watch in the fast-growing self-driving perception software sector.

Cars of the Future spoke to Vice President of Marketing, Steven Spieczny, to find out about the sensor-fusion annotation platform everyone’s talking about…

Steven Spieczny is Vice President of Marketing at Kognic

“Kognic was founded by two technologists from the machine learning space to address the need for accurate training data for Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS).

“We have quickly built up a diverse customer base, including global vehicle manufacturers such as Volvo Cars/Zenseact, Tier 1 suppliers like Bosch, Continental and Qualcomm, and some very innovative start-ups such as Kodiak, a leader in autonomous commercial trucking.

“We process data from cars, vans, trucks, drones and robots, and feed it into our cloud-based software platform. Information from cameras, lidar and radar gets pulled into one comprehensive dataset.

“From there, everything that can be sensed is defined and labelled – that’s a road sign, that’s a pedestrian sitting on a bench, that’s a big truck straight ahead!
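To make the idea concrete, here is a minimal sketch of what one such multi-sensor label might look like as a data structure. The class names, fields and values are illustrative assumptions, not Kognic's actual schema: the point is simply that one object (say, a pedestrian) carries both a 2D camera box and a 3D lidar cuboid, linked by a shared track ID.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Box2D:
    # Pixel-space bounding box from a camera image
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class Cuboid3D:
    # Metric cuboid fitted to the lidar point cloud
    center: Tuple[float, float, float]  # (x, y, z) in metres
    size: Tuple[float, float, float]    # (length, width, height) in metres
    yaw: float                          # heading in radians

@dataclass
class Annotation:
    # One labelled object, linked across sensors by a shared track id
    track_id: int
    label: str                          # e.g. "pedestrian", "truck", "road_sign"
    camera_box: Optional[Box2D] = None
    lidar_cuboid: Optional[Cuboid3D] = None

# A pedestrian seen by both the camera and the lidar
ped = Annotation(
    track_id=17,
    label="pedestrian",
    camera_box=Box2D(412.0, 220.5, 468.0, 390.0),
    lidar_cuboid=Cuboid3D(center=(14.2, -1.1, 0.9), size=(0.6, 0.7, 1.8), yaw=0.0),
)
```

An object seen by only one sensor (a sign outside lidar range, say) would simply leave the other field empty.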

Kognic annotation

“This tagging process is called annotation. You see it quite a bit in healthcare, for example, to automatically flag broken bones on scans. In automotive, the level of complexity is much higher because the data we’re capturing is constantly changing, literally a moving picture.

“We help our customers to manage and curate this data so they can, in turn, use these datasets to power their AI products through model validation and tracking of performance.

“For ADAS, it started with lane marking recognition, where there are a lot of variables. Then you expand the data domain, which gets you these rare occurrences. For instance, light source object detection (LSOD) is a crucial use case where the reflection of a vehicle must be distinguished from an actual vehicle on the road.

Kognic point cloud for ADAS and self-driving with pedestrians and vehicles

“Obviously, in the AV industry, there’s been a fair bit of turmoil over the last few years for consumer vehicle applications. This gave way to a parallel focus on commercial trucking.

“One of the early assessments was that long-haul trucking was the perfect use case – long straight roads, no pedestrians. It turns out this is actually a really hard dynamic to get right – high speeds, sensor range limitations and long stopping distances, especially when fully loaded, contribute to a similarly complex situation.

“Kodiak is one of our trucking customers in the US. They’re doing about 70,000 autonomous miles a month now, all the way from California down through the Southwest into Texas.

“They’re a success story in a sector which, like robotaxis, has seen a lot of ups and downs. Kodiak supplies a top-to-bottom autonomous stack, and we sit behind that, pushing and pulling all this sensor data to enable their machine learning to make better decisions.

Kognic pre-annotation

“The ascension of the data scientist is important here, along with the new depth of technology around self-supervised learning, all these very geeky things.

“Data is the fuel for Machine Learning Operations (MLOps) – this idea of programming with data, rather than traditional coding. Our software enables the fusion of data from various sensors and the way we pre-annotate helps to make the whole process more efficient and cost-effective. That’s our USP.
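The efficiency gain from pre-annotation can be sketched in a few lines. This is a generic illustration of the idea, not Kognic's implementation: a model proposes labels with confidence scores, high-confidence proposals are accepted automatically, and only uncertain ones are routed to a human annotator. The function name and threshold are assumptions for the example.

```python
def triage_proposals(proposals, accept_threshold=0.9):
    """Split model pre-annotations into auto-accepted labels and
    items routed to a human annotator for review or correction."""
    auto_accepted, needs_review = [], []
    for p in proposals:
        if p["score"] >= accept_threshold:
            auto_accepted.append(p)
        else:
            needs_review.append(p)
    return auto_accepted, needs_review

# Hypothetical proposals from a pre-annotation model
proposals = [
    {"label": "truck", "score": 0.97},
    {"label": "pedestrian", "score": 0.62},  # uncertain: a reflection? send to a human
    {"label": "road_sign", "score": 0.91},
]
accepted, review = triage_proposals(proposals)
```

Humans then spend their time only on the hard cases, which is where the cost saving comes from.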

Kognic pre-annotation for ADAS and self-driving

“In the future, we believe, as many do, that everything that moves will have some form of autonomy. The whole world of AI is very dynamic, particularly with regards to self-driving cars because of the vast amount of data involved.

“For something like ChatGPT, 80% accuracy might be OK. For something as safety-critical as self-driving, it has to be 99.9%. The principle is the same though – more inputs in order for the machine to learn on its own and be smarter about the outputs.

Sensor-fusion for self-driving

“We agree with Wayve that Embodied AI is the great North Star, but we’re not there yet. It’s unrealistic for the market to assume that we’re quickly going to jump to level 5 autonomy. We’re going to have to build up the capability, and that’s a big challenge.

“Concurrent to all this is the transition to the software defined vehicle (SDV). There’s a Mercedes model which is approved for level 3 in certain conditions in Germany. In the UK, Ford’s BlueCruise assisted driving system enables hands-off on some motorways.

“In a nutshell, the better your data, the better your models will be, and that will ultimately result in better user experiences. We call this alignment to expectation, and safety is the biggest issue.

“The self-driving industry needs a way to accurately calibrate and merge sensor data to provide the machine with a very specific picture of what it is seeing at any given time. Kognic has the annotation platform to produce what is needed.”

For further info see Kognic.com


Author: Neil Kennett

Neil is MD of Featurebank Ltd. He launched Carsofthefuture.co.uk in 2019.