
ADAS & AUTONOMOUS VEHICLE INTERNATIONAL
The international review of autonomous vehicle technologies: from conception to manufacture to implementation
January 2023
Published by UKi Media & Events

Synthetic Data
How generative models are helping develop AV technology

INTELLIGENT SPEED ASSIST: In-depth analysis of the technology and challenges behind ISA – now mandatory in Europe – and how it could enable higher levels of automated driving
TESTING STANDARDS: Experts from ASAM, the European Commission and testing organizations share their views on the emerging regulatory landscape for AVs
BAIDU APOLLO RT6: The inside story on the Apollo Go sixth-generation robotaxi, plus details of Baidu’s plans for further expansion
CONTENTS

22 Cover story: Synthetic data
How generative models such as generative adversarial networks and neural radiance fields continue to help develop the latest self-driving technology

04 Tech insider: Safety Pool
WMG’s head of verification and validation, Siddartha Khastgir, on the decision to grant public access to the world’s largest scenario database for AVs

08 Tech insider: NCSU
A new cooperative distributed algorithm can help autonomous vehicles better navigate highway merges

10 Project brief: AgXeed 2.055W4
‘AgBots’ continue to grow in popularity: check out this 75hp dual-tracked driverless tractor for hoeing

14 Intelligent speed assist
Europe has mandated intelligent speed assistance systems, paving the way for more advanced automated driving features, while also highlighting the challenges ahead, including issues surrounding harmonized vehicle speed signaling and driver acceptance

28 Testing standards
Regulators and experts share their understanding of the emerging ADAS/AV industry regulatory landscape, and the implications for testing standards

34 Machine learning
A guide to some of the interesting architectures being used in the AV machine learning stack, including ViDAR, which can detect depth and motion from camera frames alone

40 Interview: Baidu Apollo RT6
A senior expert in the operation management department within Baidu’s Intelligent Driving Group reveals further details following the launch of the company’s new low-cost flagship robotaxi, the RT6

46 Audi C-V2X
Pom Malhotra, senior director of connected services at Audi of America, on how C-V2X not only offers a means to drastically reduce VRU injuries and fatalities but also helps clear the way for autonomous driving



CONTENTS (continued)

52 Supplier interview: Wynne Jones IP
Dr Elliott Davies, partner and patent attorney at Wynne Jones IP, examines the implications of the EU’s new Unified Patent Court (UPC)

54 Electric power steering
Allegro MicroSystems on how hardware selection affects driver experience in EPS

56 Simulation
The integration of Ansys AVxcelerate sensors within aSR’s simulation framework allows efficient virtual testing and validation of sensor technology for autonomous driving

58 In-cabin monitoring
dSpace’s RTMaps multisensor development framework offers a block-based approach that lets users easily integrate in-cabin sensors with relevant vehicle buses in their setups

59 ADAS testing
VBox Automotive’s all-new VBox 3i ADAS, designed to make ADAS testing easier

60 Datalogging
Xylon’s instrumental fix for real-time test and validation challenges

61 Log data management
Applied Intuition on how ADAS and AV programs can create simulation test cases from real-world logs

62 Data acquisition
How InoNet’s Mayflower-B17-LiQuid helps to master the masses of data captured during ADAS development and verification

63 Cabin monitoring
4activeSystems’ solutions for child presence detection

64 Have you met…?
Hemant Sikaria – CEO and co-founder, Sibros

Welcome

In the few months since I wrote the intro for the September 2022 issue, the UK, where AAVI is based, has seen three prime ministers, as many chancellors of the exchequer, and a daily deluge of ever more gloomy economic forecasts, with the tech sector in particular suffering record losses.
Little did I know back then that we were also about to witness VW and Ford pulling the plug on their investment in Argo AI, whose CTO, Brett Browning, was the subject of an exclusive interview in the September issue. Argo’s sudden and unexpected demise has spawned some rather sensationalist headlines announcing the death of the autonomous vehicle industry. Once valued at more than US$7bn, the company’s closure is undoubtedly a cause for concern, but it’s far from the end of the road for self-driving. It just so happens that on the same day that VW and Ford exited Argo AI, Mobileye went public, with its shares surging by more than 37%.
For many, Argo’s decline was in keeping with a general trend toward overall market consolidation and a preference among traditional automotive OEMs for L2/L3 ADAS technology that they can make a return on immediately – given current tough trading conditions – rather than more ambitious L4/L5 self-driving capabilities.
“There’s a huge opportunity right now for Ford to give time – the most valuable commodity in modern life – back to millions of customers while they’re in their vehicles,” said Ford CEO Jim Farley, in the auto maker’s Q3 earnings report, in which it announced that it was ditching Argo. “It’s mission-critical for Ford to develop great and differentiated L2+ and L3 applications that at the same time make transportation even safer. We’re optimistic about a future for L4 ADAS, but profitable, fully autonomous vehicles at scale are a long way off and we won’t necessarily have to create that technology ourselves.”
Just how far away self-driving vehicles remain is a topic of hot debate. Many believe the big tech firms outside the traditional auto sector will push on regardless, as they have the most to gain from disrupting the status quo. Some of these are already heavily invested in driverless trucks for the ‘middle mile’, avoiding the complications wrought by congested cities, with services likely to ramp up in the next year or two. Meanwhile, others champion C-V2X as a key enabler in delivering the communal intelligence and safety that could speed the urban deployment of AVs.
Baidu is one such company. AAVI was granted a rare interview with the internet giant, in which it made clear its plans for growth. “Baidu will continue to expand the scale of its driverless autonomous driving commercial operation,” says Qiong Wu, a senior expert in Baidu’s Intelligent Driving Group, on page 44. “It is expected that in 2025, autonomous driving technology will enter the stage of commercialization at scale.”
But it might not have to wait until then to make a profit. Baidu recently reported sales of its Apollo ADAS and autonomous driving solutions to auto makers had exceeded ¥11.4bn (US$1.6bn), with a major Chinese OEM choosing its Apollo Navigation Pilot, Automated Valet Parking and HD Maps for one of its most popular car models.
Anthony James
Editor, ADAS & Autonomous Vehicle International
anthony.james@ukimediaevents.com

Editor: Anthony James
Web editor: Callum Brook-Jones
Production editor: Alex Bradley
Sub editors: Sarah Lee, Alasdair Morton, Mary Russell
Art director: Craig Marshall
Art editor: Nicola Turner
Head of data and production: Lauren Floyd
Production assistants: Dylan Botting, Amy Moreland
Divisional sales director, magazines: Rob Knight, rob.knight@ukimediaevents.com
Publication manager: Sam Figg
CEO: Tony Robinson
Managing director, magazines: Anthony James
General manager: Ross Easterbrook

Published by UKi Media & Events, a division of UKIP Media & Events Ltd
Contact us at: ADAS & Autonomous Vehicle International, Abinger House, Church Street, Dorking, Surrey, RH4 1DF, UK
Tel: +44 1306 743744 | Email: avi@ukimediaevents.com | Twitter: @AVImagazine | Web: www.ukimediaevents.com | autonomousvehicleinternational.com
Printed by William Gibbons & Sons, PO Box 103, 26 Planetary Rd, Willenhall, West Midlands, WV13 3XT, UK
ISSN 2753-6483 (print) | ISSN 2753-6491 (online)

The views expressed in the articles and technical papers are those of the authors and are not necessarily endorsed by the publisher. While every care has been taken during production, the publisher does not accept any liability for errors that may have occurred. This publication is protected by copyright ©2023.
ADAS & Autonomous Vehicle International is brought to you by the publisher of Automotive Testing Technology International, Crash Test Technology International, Automotive Powertrain Technology International and Tire Technology International. The company also organizes Automotive Interiors Expo, Automotive Testing Expo, ADAS & Autonomous Vehicle Technology Expo and Tire Technology Expo. Please visit www.ukimediaevents.com to find out more.
Moving on? To amend your details, or to be removed from our circulation list, please email datachanges@ukimediaevents.com. For more information about our GDPR-compliant privacy policy, please visit www.ukimediaevents.com/policies.php#privacy. You can also write to UKi Media & Events, Abinger House, Church Street, Dorking, RH4 1DF, UK, to be removed from our circulation list or request a copy of our privacy policy.
Search for ADAS & Autonomous Vehicle International to find and follow us on LinkedIn!
Follow us on Twitter for news, views and more! @AVImagazine





TECH INSIDER: SAFETY POOL

Public works

Dr Siddartha Khastgir, head of verification and validation at WMG, University of Warwick, on the decision to grant public access to the Safety Pool Scenario Database – the world’s largest scenario database for automated vehicles
By Anthony James

In September, leading experts at the UK’s WMG, University of Warwick, and Deepen AI opted to provide credit-based access to the largest public store of scenarios for testing automated vehicles. The Safety Pool Scenario Database features more than 250,000 scenarios.
To date, more than 500 organizations have enrolled in the database, which provides scenarios in different operational design domains that can be leveraged by governments, industry and academia to test and benchmark automated driving systems. Scenarios covered include urban environments, highways and those representing varied environmental conditions, where vehicles perform different maneuvers such as cut-ins and overtaking.
The scenarios are generated using two novel scenario generation methods – knowledge-based and data-based – where scenarios are focused on uncovering failures in automated vehicles as they capture those edge case scenarios. Use cases supported include automated lane keeping systems, low-speed shuttles, urban Level 4, highway ADAS and more.
Users are rewarded with credits for submitting scenarios to the database. Contributions are scored based on the uniqueness of the scenarios and their validity, and corresponding credits are awarded to the contributing organization. These credits can be redeemed to gain access to more scenarios. This system encourages users to contribute to grow the database and make more scenarios available to the self-driving-vehicle community.

“WE HAVE CREATED THE WORLD’S LARGEST PUBLIC STORE OF SCENARIOS, INTENTIONALLY DEVELOPED TO BE USED GLOBALLY, NOT JUST AT A NATIONAL LEVEL”
Dr Siddartha Khastgir, head of verification and validation, WMG

[Image: Some of the Safety Pool team back in September 2022, when public access to the database was announced]

How do scenario databases support the development of ADAS and automated driving?
If we try to prove that automated driving systems (ADS) are safer than human drivers just by driving real-world miles, we would need to drive over 11 billion miles. To put that into perspective, this would be equivalent to driving to the moon and back 20,000 times. This is a lot of work to demonstrate a 20% improvement in safety, and would not be commercially feasible.
As a result, scenario-based testing has quickly become a key method to achieve and demonstrate the safety of ADAS and ADS. Among the pillars of scenario-based testing are the test scenarios used in the testing process. Given the number of scenarios that will be required to demonstrate safety, a scenario database is an essential part of the safety process for an ADAS or ADS.
A scenario database is a repository of test scenarios that can be used by ADAS and ADS developers to verify and validate these systems.
Some of the most popular scenario databases are the Safety Pool Scenario Database (developed in the UK but with global relevance), Sakura scenario database (Japan), Pegasus scenario database (Germany), Streetwise database (Netherlands) and ADScene (France).




[Image: WMG 3D simulator running a test scenario]

What are the main criteria for AV developers to consider when selecting a scenario database?
The main criteria are ease of use, relevance to the operational design domain (ODD) of the system under test and diversity in test scenarios with respect to ODD. Each of these factors is key to the success and uptake of a scenario database. Having a database with lots of content should not be the target for the ADS ecosystem. As a community, we need to agree on quality requirements for scenario databases (for each of the three criteria listed above). Such discussions have begun at various international regulatory levels.

What makes Safety Pool different?
The Safety Pool Scenario Database is underpinned by the philosophy that the automated driving ecosystem should not compete on safety. This is a mission that WMG and Deepen AI have in common, and it is what brought the two organizations together to collaborate and create the Safety Pool Scenario Database.

SAFETY POOL ENABLES PUBLIC ACCESS TO MORE THAN 250,000 SCENARIOS VIA A CREDIT SYSTEM

[Image: Screenshots from the scenario database]

To promote the pre-competitive aspect of testing the safety of ADS, one of the key features of the Safety Pool Scenario Database is its credit-based scenario exchange mechanism. We have created an ecosystem where organizations are incentivized to share test scenarios with each other. Initially, organizations have access to the public section of the database. When they contribute a scenario, we score the contribution for quality and quantity of the scenarios, and assign credits to the contributing organizations. They can then use these credits to buy other scenarios from the restricted section of the database. No organization can buy their way into the database financially; they must contribute scenarios.
Safety Pool has scenarios covering a diverse set of ODDs (rural, urban, semi-urban, motorway, weather conditions, different actors, etc). This makes it relevant to varied types of ADS developers. Coupled with an ability to search scenarios based on the ODD of the ADS, the testing process becomes more efficient and relevant.
Safety Pool was conceived to meet the needs of a diverse set of stakeholders – not only ADS developers but also OEMs and Tier 1 suppliers. Positioned as a tool to help regulators and type approval bodies as well as researchers, Safety Pool provides ODD-based searching and human-intuitive scenario definition for the test scenarios, making the database more accessible.
With a wide variety of stakeholders in the ADAS and ADS ecosystem, alignment with international standards is a prerequisite to ensure common understanding. The Safety Pool Scenario Database is unique in the sense that it is compliant with the relevant international standards such as ASAM OpenLABEL, ISO 34503 and ASAM OpenSCENARIO. The team at WMG and Deepen AI have played a key leadership role in the creation of these standards.
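As a thought experiment, the credit-based exchange Khastgir describes can be pictured as a small ledger. Everything in the sketch below (class names, scoring weights, credit costs) is invented purely for illustration; it is not Safety Pool's implementation.

```python
from dataclasses import dataclass

@dataclass
class Organization:
    name: str
    credits: int = 0

@dataclass
class Scenario:
    scenario_id: str
    uniqueness: float   # 0..1, how different it is from existing entries
    validity: float     # 0..1, how well-formed and executable it is
    restricted: bool = False  # restricted scenarios cost credits to access

class ScenarioExchange:
    """Toy model of a credit-based scenario exchange (illustrative only)."""

    def __init__(self):
        self.scenarios: dict[str, Scenario] = {}

    def contribute(self, org: Organization, scenario: Scenario) -> int:
        # Contributions are scored on uniqueness and validity; the resulting
        # credits are assigned to the contributing organization.
        earned = round(10 * scenario.uniqueness * scenario.validity)
        self.scenarios[scenario.scenario_id] = scenario
        org.credits += earned
        return earned

    def redeem(self, org: Organization, scenario_id: str, cost: int = 5) -> Scenario:
        # Credits, not money, unlock the restricted section of the database.
        scenario = self.scenarios[scenario_id]
        if scenario.restricted:
            if org.credits < cost:
                raise PermissionError("contribute scenarios to earn more credits")
            org.credits -= cost
        return scenario
```

The key property the mechanism enforces, as described above, is that access to the restricted pool can only be earned by contributing, never bought.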




“EVERY ADS WILL REQUIRE A DIFFERENT SET OF TEST SCENARIOS DEPENDING ON ITS ODD AND DESIGN”

[Image: Virtual miles run on scenarios are more effective and cost-efficient for ADAS safety improvement than burning rubber on real roads]

What are some of the common misconceptions about scenario databases?
It is erroneously assumed that all types of users (ADS developers, regulators, researchers) will use the scenario database in a similar manner. The reality is that user journeys for different types of users will be very different and have varied sets of requirements. For example, a type approval authority or regulator would like to read/understand scenarios (stored in the scenario database) in a human-readable format. In this regard, WMG has led the British Standards Institution (BSI) activity on creation of the BSI Flex 1889 standard, which provides a language for this purpose. [On the other hand,] an ADS developer performing simulation-based testing would like to have the scenarios in a machine-readable format that can be executed in a simulation platform for virtual testing. Thus, we have a competing set of demands placed on a scenario database. For Safety Pool, we created a concept that can cater to both sets of users and requirements by having two abstraction levels for a language to describe scenarios.
Another misconception is that all ADS need to be tested for the same set of test scenarios, but in fact every ADS will require a different set of test scenarios depending on its ODD and design. There will be no ‘golden set’ of scenarios for all ADS. We need to create a scenario database that has diversity in scenarios and represents different ODDs – for example, urban, motorway, rural and weather conditions. With more than 250,000 scenarios, Safety Pool covers a wide variety of ODDs to meet this industry need.
In addition to misconceptions, there is still huge confusion in the ADS industry on the meaning/definition of the term ‘test scenarios’ and what constitutes a test scenario. In simple terms, a test scenario should define the behavior of various actors (vehicles, cyclists, pedestrians, etc). At WMG, we have been championing a drive for the industry to agree on terminology, and we actively contribute to various international standards to this end. However, more efforts are needed to drive industry to a consensus to avoid confusion.
The biggest challenge currently is the need to ensure that the scenarios contained in the database offer a representative set of complete scenarios for an ODD of the ADS. Research at WMG currently focuses on creating metrics on scenario completeness as well as requirements on the quality of scenario databases.

OVER 200 ORGANIZATIONS WORLDWIDE HAVE ALREADY ENROLLED IN SAFETY POOL

What scenarios are of most interest?
Based on our research at WMG, we have coined a concept of hazard-based testing, which focuses on testing how a system fails rather than how a system works. Our focus is on identifying failures.
As a result, we believe that the scenarios that offer the most value are the ones that reveal failures. We have proposed a hybrid approach to scenario generation: 1) data-based scenario generation; 2) knowledge-based scenario generation.
Data-based scenario generation analyzes accident databases, insurance claim records and real-world collected data to identify trends that lead to incidents or near-misses. Knowledge-based scenario generation analyzes the system to understand how its design could lead to failures. Our scenario generation approach has also been incorporated in the EU regulation on automated driving systems that was adopted on August 5, 2022.
To ensure randomization, we treat scenarios at a logical scenario level: all scenarios have parameters, and each parameter has a value range. Depending on the ODD definition of the ADS, we first identify the relevant scenarios (at the logical level) and have intelligent test case generation algorithms that assign values to the parameters, focusing on revealing failures.
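A minimal sketch of what that logical-scenario idea looks like in code follows. The scenario name, parameter ranges and the naive failure-seeking loop are invented for illustration; Safety Pool's actual test case generators are far more sophisticated.

```python
import random

# A logical scenario: fixed structure, parameters given as value ranges.
logical_cut_in = {
    "ego_speed_kph": (50.0, 130.0),
    "cut_in_gap_m": (5.0, 40.0),
    "cutting_vehicle_speed_kph": (40.0, 120.0),
}

def sample_concrete(logical: dict, rng: random.Random) -> dict:
    """Draw one concrete test case from a logical scenario's value ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in logical.items()}

def generate_test_cases(logical, run_test, budget=100, seed=42):
    """Naive failure-seeking loop: keep only the cases that reveal failures.

    Real 'intelligent' generators use optimization and coverage metrics;
    plain random sampling stands in for them here.
    """
    rng = random.Random(seed)
    failures = []
    for _ in range(budget):
        case = sample_concrete(logical, rng)
        if not run_test(case):   # run_test returns True when the ADS passes
            failures.append(case)
    return failures

# Hypothetical usage, with my_simulator standing in for a real test harness:
# failures = generate_test_cases(logical_cut_in, run_test=my_simulator)
```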
How are you working to enhance Safety Pool?
We are continuously working with stakeholders across the ADS ecosystem in the UK and internationally to capture new requirements and user journeys. Currently, we are working with more than 500 organizations worldwide on various user journeys using the Safety Pool Scenario Database.
Regulators, researchers and ADS developers in different countries have a lot of common requirements but also many bespoke needs. Being a public database funded by the UK government allows us to work closely with these partners to understand and implement the enhancements and therefore grow the reach of Safety Pool. ‹



TECH INSIDER: NCSU

A problem shared
Researchers from North Carolina
State University (NCSU) have
developed a technique that helps
autonomous vehicles better
navigate tricky highway merges
By Anthony James

In August 2022, NCSU researchers published a paper titled Distributed cooperative trajectory and lane changing optimization of connected automated vehicles: Freeway segments with lane drop, in the journal Transportation Research Part C. The paper introduced a new technique that allows autonomous vehicle software to make calculations related to complex traffic interactions more quickly – improving both traffic and safety in simulated autonomous vehicle systems.
“To the best of our knowledge, this is the first time a cooperative distributed algorithm has been used to help with merging highway traffic of connected automated vehicles,” says Ali Hajbabaie, corresponding author of the paper and an associate professor of civil, construction and environmental engineering at NC State. “Cooperative distributed algorithms are used by our team, too, to help the movement of connected automated vehicles in intersections and roundabouts. The big challenge here was to identify which vehicles needed to communicate with each other to agree on how to proceed with a lane change.”
Until now, the programs designed to help autonomous vehicles navigate lane changes have relied on making problems computationally simple enough to resolve quickly, so the vehicle can operate in real time. However, simplifying the problem too much can actually create a new set of problems, as real-world scenarios are rarely simple.
“There are different challenges, but the specific one that our study addresses is the complexity of the decision-making process,” explains Hajbabaie. “We have broken up this process into smaller pieces, and by doing so, we have significantly reduced the complexity. At the same time, we have created a coordination scheme among decision makers so that their decisions are not selfish and improve traffic operations on the freeway for all vehicles. To summarize, the lack of capability to make decisions that are better for all vehicles in a short amount of time is a common problem that this study has addressed.”

IN THE USA, MOST STATES PLACE THE RESPONSIBILITY OF MERGING SOLELY ON THE TRAFFIC IN THE LANE THAT IS ENDING

Project background
The team at NCSU has been researching connected automated vehicles for the past six years. “Our focus has been on developing models, methods and methodologies that make the CAV decision-making process more efficient, make their decisions more effective and integrate them into traffic control methods,” Hajbabaie says. “We aim to assist CAVs in making real-time decisions that improve traffic operations and safety in collaboration with other traffic control devices.”
In their past work, the team came to realize just how complex this decision-making process is and how often research focuses only on longitudinal control of CAVs. Hence they decided it was time for a different approach. “In this research, we aimed to develop an efficient method that controls both the longitudinal and lateral movement of CAVs when they go through a lane drop and have to merge,” Hajbabaie explains.



“Our approach was to use a distributed algorithm to cope with the computational complexity – otherwise we would not be able to find control decisions in real time, and so would only know what to do when it was too late. At the same time, we do not want CAVs to behave selfishly; as such, we created a cooperative environment where CAVs negotiate with each other and reach a consensus on how and when to make a lane change.”
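The paper's full formulation is a distributed optimization problem; purely as an illustration of the decompose-and-negotiate idea, the hypothetical sketch below lets each simulated vehicle solve only its own small subproblem (adjusting its speed) while exchanging arrival-time estimates pairwise. It is not the authors' algorithm, and the vehicle fields, headway target and iteration scheme are invented.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    position_m: float   # distance remaining to the lane drop
    speed_mps: float
    lane: int           # 1 = continuing lane, 0 = ending lane

def eta(v: Vehicle) -> float:
    """Estimated arrival time at the merge point."""
    return v.position_m / max(v.speed_mps, 0.1)

def negotiate_merge_order(vehicles, rounds=10, step_mps=0.5):
    """Toy cooperative negotiation: over several rounds, each vehicle
    concedes a little speed so arrival times at the lane drop are spaced
    out, instead of every vehicle selfishly racing for the same gap.
    Only local, pairwise information (arrival estimates) is exchanged.
    """
    min_headway_s = 1.5
    for _ in range(rounds):
        order = sorted(vehicles, key=eta)
        for ahead, behind in zip(order, order[1:]):
            if eta(behind) - eta(ahead) < min_headway_s:
                # The follower's local subproblem: yield slightly.
                behind.speed_mps = max(behind.speed_mps - step_mps, 5.0)
    return [v.vid for v in sorted(vehicles, key=eta)]

# Example: two vehicles converging on a lane drop agree on an order.
cars = [Vehicle(1, 200, 25, lane=1), Vehicle(2, 205, 26, lane=0)]
print(negotiate_merge_order(cars))
```

The essential point the sketch captures is that no central computer solves the whole problem: each agent's adjustment is tiny, and consensus emerges from repeated local exchanges.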

Test results
So far, the researchers have only tested their approach in
simulations, where the subproblems are shared among
different cores in the same computing system. However,
if autonomous vehicles ever use the approach on the road,
the vehicles would network with each other and share the
computing subproblems.
During this proof-of-concept testing, the researchers
looked at two things: whether their technique allowed
autonomous vehicle software to solve merging problems in
real time, and how the new ‘cooperative’ approach affected
traffic and safety compared with an existing model for
navigating autonomous vehicles.
“We tested this approach on a simulated freeway with
a lane drop from four to three, three to two and two to one
lanes,” explains Hajbabaie. “We tested three speed limits
of 60mph, 65mph and 70mph [96.5km/h, 104.5km/h and
112.5km/h], and three traffic demand levels of 900, 1,500
and 2,400 vehicles per hour per lane. One unexpected
outcome was that we did not see a breakdown (or traffic
congestion, in other words) even when the traffic level was
2,400 vehicles per hour per lane, and we initially had two
lanes that dropped to one.
“This means that a single freeway lane could process
4,800 vehicles per hour per lane, which is at least double
its theoretical capacity. This is a huge increase in capacity
under idealistic conditions in a simulated environment, and
it is possible partly because the methodology was fast
enough to perform all the computations.”

[Image: Screenshot from the simulation used to evaluate the new technique designed to better manage highway merges]

Overall, the researchers found their approach enabled autonomous vehicles to navigate complex freeway lane merging scenarios in real time in moderate and heavy traffic, with ‘spottier performance’ only when traffic volumes got particularly high.
“When the traffic volumes are high, we have spottier performance – we see congestion forming and growing on the freeway using existing methods. The existing approaches show poor performance, while our proposed approach shows traffic can be handled on the freeway. We have observed reductions in travel time of up to 86.4% when the traffic volumes are high.”
NCSU is now working on several ways to further its research. “One is to test the approach in a 1:18 scale automated testbed,” notes Hajbabaie. “The other is to bring communication loss and delay into the approach and see how it will influence the performance. Another angle is to consider non-cooperative driving behavior and its effects on the improvements we have observed.
“We are researching how to improve traffic operations and safety using automated vehicles as mobile controllers. In such a setting, an automated vehicle is assumed to collaborate with traffic lights and other traffic control systems to improve traffic operations and safety. We have developed this concept for a signalized intersection and a roundabout and are looking into other highway facilities.” ‹

“THIS IS THE FIRST TIME A COOPERATIVE DISTRIBUTED ALGORITHM HAS BEEN USED TO HELP WITH MERGING HIGHWAY TRAFFIC OF CONNECTED AUTOMATED VEHICLES”
Ali Hajbabaie, North Carolina State University



PROJECT BRIEF

Light relief

Dutch driverless tractor manufacturer AgXeed has revealed a lightweight version of its popular AgBot that is even kinder to soil
By Anthony James

THE CONCEPT
The 2.055W4, a 75hp dual-tracked driverless tractor ideal for hoeing applications, was first revealed at the UK’s Cereals trade show earlier this year.
“Hoeing is highly needed to reduce chemical crop protection, but it is time-consuming – and that is what farmers don’t have,” explains Philipp Kamps, product manager at AgXeed. “Autonomy is part of the answer to this conflict – to achieve a high workload, AgXeed AgBots are not limited to specific tasks.”
Featuring a standard hitch for conventional implements, the 2.055W4 is highly versatile according to Kamps. “Beyond hoeing, the machine is also suitable for seedbed preparation, seeding and crop care applications,” he explains. “It is also ideal for grassland applications, being perfect for mowers, tedders and swathers.”

THE TEAM
The driverless tractor has been developed by Dutch company AgXeed, which describes itself as a provider of full autonomy systems with scalable and customizable hardware, cloud-based planning tools and value-generating data models. “Our solutions serve not only as a replacement for conventional tractors and harvesting machines, but they empower the farmer to approach his business and farm management in a completely different way,” explains Kamps.
Beyond efficiency, Kamps also emphasizes the environmental credentials of AgBots: “All our AgBots stay under the irreversible soil compaction threshold, eliminating further degradation of the soil in comparison with conventional, increasingly heavy machinery, leading to healthier crops and higher yields,” he says. “In this way we contribute in a structural way to improving productivity and yields while minimizing the impact on the environment.”
However, it’s the financial gains that remain most attractive: “No less important than ecology and productivity, we provide economy – of time, labor, energy, fertilizer, seeds, etc,” continues Kamps. “As a single unit or as part of a fleet, our AgBots are able to execute all agricultural tasks, completely independently and in a safe and efficient way due to robotic precision. Being autonomous, the AgBots will provide customers with ample time to focus on their key activities and add true added value to the food supply chain.”

THE VEHICLE
The model features a 55kW power source, coming out of a 2.9-liter four-stroke diesel engine, connected to a generator to provide electrical power to the rear wheels. A wide range of track widths are available, from 1.5m up to 3m. Autonomous operation relies on extremely detailed mapping and location data. “We first record the field boundaries with the help of an RTK-corrected GNSS, so we know where the physical boundaries are, as well as obstacles such as power lines, ditches or trees,” explains Kamps.
During operation, a lidar, radar and ultrasonic sensors are used to detect any obstacles that were not present during the initial recording of the field and any fixed obstacles. “You can think of people, animals or just a car that was parked tight on the border of the field,” Kamps says. Meanwhile, a 4G connection provides live access to a 360° camera view and machine and process data, as well as service access for updates and remote services.

REAL-TIME KINEMATIC (RTK) GNSS ENSURES PRECISE GUIDANCE AND POSITIONING OF UP TO ±2.5CM

PROJECT STATUS
The machine continues to be shown at leading agricultural trade shows and demonstrations, with deliveries scheduled to begin in spring 2023. The AgBot is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement number 970619.
“In 2022 we were active in eight European countries, offering our machines through our distribution partners,” explains Anastasia Laska, CCO at AgXeed. “We are preparing for a hockey stick-shaped sales curve for 2023, scaling our production, distribution and service efforts accordingly.” ‹

[Image: An electric drivetrain offers a speed range up to 13.5km/h]
[Image: The robot drives in reverse when fitted with a disc mower]



INTELLIGENT SPEED ASSIST

Sign language

Mandatory intelligent speed assistance systems in Europe could lay the foundations for more advanced automated driving features – but they’re also highlighting the challenges ahead
By Alex Grant

[Image: EU legislation made ISA mandatory for all new vehicles from 2022, and mandatory for all existing car lines by 2024]


The European Union has its sights set on virtually eliminating road deaths by 2050 and, although there’s no single solution to deliver that goal, it is all too aware of the most common causes. Inappropriate speed is a key factor in one-third of road fatalities and a contributor to almost all collisions across the region, the European Commission says, adding that 40-50% of drivers still admit to breaking legal limits. In addition to contributing to improved road safety, the resulting regulatory change could be an important test for technologies that are vital for higher levels of automated driving.

ISA TECHNOLOGY IS ESTIMATED TO REDUCE ACCIDENTS BY 30% AND DEATHS BY 20% (Source: ETSC)

[Image: Advanced digital maps containing verified speed limit data can help cars see beyond their camera range and perform in all conditions]
[Image: Mobileye’s Road Experience Management (REM) provides a rich supplemental layer of information on the driving environment]

Intelligent speed assistance (ISA) became mandatory for all M (passenger) and N (goods) class vehicle type approvals from July 2022, as part of the EU’s General Vehicle Safety Regulation 2019/2144 (known as GSR2), and will be applied to vehicle registrations from July 2024. GSR2 requires systems that can recognize correct speed limits for at least 90% of a 400km test route, with 15% or more carried out in darkness. None of the route can be repeated, and urban, rural and highway sections must each account for at least 25% of the total, with an 80% compliance rate for each road type. Compliance covers explicit and implicit limits, including residential areas, schools and city borders where there is no numerical sign.
However, the regulation doesn’t specify how those features should be delivered. Vehicles can use cameras, maps or a combination of both, and intervention can be passive, such as a display and audible or haptic warnings, or the system can actively step in to reduce the vehicle’s speed. The main requirement is that the system should work with the driver, who must be able to override it and who ultimately remains responsible for staying within the speed limit.
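The GSR2 thresholds quoted above lend themselves to a simple composition check. The toy checker below encodes only the figures named in this article (400km minimum route, 90% overall recognition, at least 15% in darkness, at least 25% of distance in each of the urban/rural/highway categories, 80% recognition per road type); the real type-approval procedure involves far more conditions, and the record format here is invented.

```python
def meets_gsr2_thresholds(route):
    """route: list of dicts such as
    {"road_type": "urban", "km": 120.0, "dark_km": 20.0,
     "signs_total": 300, "signs_recognized": 285}
    Returns (passed, reasons)."""
    reasons = []
    total_km = sum(s["km"] for s in route)
    if total_km < 400:
        reasons.append(f"route too short: {total_km:.0f}km < 400km")

    dark_km = sum(s["dark_km"] for s in route)
    if dark_km < 0.15 * total_km:
        reasons.append("less than 15% of the route driven in darkness")

    for road_type in ("urban", "rural", "highway"):
        segs = [s for s in route if s["road_type"] == road_type]
        km = sum(s["km"] for s in segs)
        if km < 0.25 * total_km:
            reasons.append(f"{road_type} below 25% of total distance")
        recognized = sum(s["signs_recognized"] for s in segs)
        total = sum(s["signs_total"] for s in segs)
        if total and recognized / total < 0.80:
            reasons.append(f"{road_type} recognition below 80%")

    recognized = sum(s["signs_recognized"] for s in route)
    total = sum(s["signs_total"] for s in route)
    if total and recognized / total < 0.90:
        reasons.append("overall recognition below 90%")

    return (not reasons), reasons
```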

Safety in numbers
Mobileye is supplying EyeQ system-on-chip technology to help
systems recognize traffic signs more accurately, based on 200PB
of data gathered to train the algorithms. However, the company
believes safety-critical applications require robust redundancy and
crowdsourced data. Mobileye’s Road Experience Management (REM)
supplements real-time vision systems by gathering data from vehicles
and layering it over the company’s maps.
Adriano Palao of Euro NCAP also believes crowdsourced data will be
an important mechanism for updating explicit limits and filling in gaps
where lane markings are missing. He notes the European Data for Road
Safety (DFRS) project as an important step, encouraging the industry to
share critical information about conditions ahead, such as poor traction,
obstacles and pedestrians, so vehicles can adjust.
Euro NCAP will reward this sort of harmonization, he says. “The
biggest improvement will come when OEMs decide to share safety-
relevant data – i.e. a standardized back end, leveraged by Vehicle-To-
Network where all vehicles can write and read data. This is currently not
100% harmonized, though we are starting to see big improvements.”




Better by design
Ford believes the most advanced ISA technology could alleviate the need for physical signposts, which add roadside clutter, can be confusing and are prone to being obscured by foliage and poor weather. In March the company began a 12-month trial in Cologne, Germany, using geofencing to create a virtual boundary for different speed limits.
The project is a collaboration between the Ford City Engagement team, its software engineers in Palo Alto in California and officials from Cologne and Aachen in Germany. It uses Ford E-Transit electric vans, equipped with a system that recognizes 30km/h and 50km/h areas of the city center and elsewhere. Drivers are given information about new limits on the dashboard display, and the system will then slow the van appropriately – though it can be overridden.
Ford claims the system could be rolled out to future commercial and passenger vehicles, with benefits for safety but also for avoiding inadvertent speeding fines. The technology can adapt to local hazards, roadworks and different times of day, and fleet operators can geofence private facilities such as depots to enforce a lower speed limit. That infrastructure has wider benefits too: testing builds on geofencing trials in 2020 using the Transit Plug-in Hybrid, enabling it to switch to battery-electric mode for low-emission zones.
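Conceptually, a geofence of the kind used in the Cologne trial reduces to point-in-polygon tests against zone outlines. The self-contained sketch below uses a plain ray-casting test and invented zone coordinates; production systems rely on proper geodetic libraries and map matching.

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside the polygon?
    polygon is a list of (lon, lat) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical 30km/h zone near a city center (coordinates invented).
zones = [
    {"limit_kph": 30, "polygon": [(6.95, 50.93), (6.97, 50.93),
                                  (6.97, 50.95), (6.95, 50.95)]},
]

def geofenced_limit(lon, lat, default_kph=50):
    """Return the tightest zone limit containing the position, else the default."""
    limits = [z["limit_kph"] for z in zones
              if point_in_polygon(lon, lat, z["polygon"])]
    return min(limits, default=default_kph)

print(geofenced_limit(6.96, 50.94))  # -> 30
```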

That lack of specificity attracted criticism from the European Transport Safety Council (ETSC), despite the organization advocating ISA for over a decade. The ETSC’s communications manager, Dudley Curtis, says highly accurate solutions are technically feasible today but they require a fusion of vision and mapping technologies and are most effective when they actively limit speed. However, manufacturers aren’t always fitting them.
“The car industry, represented by ACEA [European Automobile Manufacturers’ Association], fought long and hard to ensure that they could get away with fitting the cheapest-possible ISA system – i.e. one that doesn’t require a digital map of speed limits and only uses the front-facing camera already used by other in-car systems on most cars today. Why is this important? Because a camera cannot see a speed sign that isn’t there,” Curtis explains, noting implicit and conditional limits as particular challenges for such systems.
“ETSC argued strongly for conditional limits to be taken into account by camera systems – but ACEA argued against, together with CLEPA [European Association of Automotive Suppliers], which represents the car supplier industry. There are advanced vision systems out there that can cope with these sorts of signs, and we strongly encourage manufacturers to fit them in order to deliver optimum performance.”
The standards set by GSR2 have required close attention to the background map data. TomTom’s maps and Virtual Horizon software are maintained by a fleet of survey vehicles augmented by half a billion connected devices, third-party sources and sensor data from partners. Sebastien Salles, product manager for ISA, sees mapping as complementary to cameras, helping vehicles ‘see’ further ahead than their sensors allow, and in all conditions, while also assisting with the 60% of European roads that have implicit limits.
“We know that a combination of cameras and maps provides the best accuracy for ADAS and allows car manufacturers to meet the approval thresholds set by the EU regulator,” Salles says. “Maps can provide speed limit information irrespective of driving conditions, if regularly updated. On connected cars, always fresh and up-to-date speed limits can be delivered directly to the vehicle via cloud.”
Similar work is underway at Here, which launched a specific ISA Map data set in a range of data formats in 2021. Maps are updated using the Here True fleet, anonymized data from vehicle sensors and official sources. Philip Hubertus, the company’s director of product management for automated driving, says the latest stereo camera systems are significantly better than previous technologies, but adds that maps offer an extra layer of robustness.

“MAPS CAN PROVIDE SPEED LIMIT INFORMATION IRRESPECTIVE OF DRIVING CONDITIONS, IF REGULARLY UPDATED”
Sebastien Salles, ISA product manager, TomTom

Edge cases and hidden costs
Speed limits change on 10% of Europe’s roads each year – 55,000km were updated in March 2022 alone – and there are edge cases. Notably, cameras have been found to read the Fiat logo as a 100km/h limit, and have mistaken maximum speed stickers on trucks as roadside signs. Algorithms informing updates are being trained to avoid this sort of false positive.




However, unless managed properly, streaming this level of live data introduces additional costs.

[Image: Here’s ISA map data is used by over 30 brands across 15 global auto makers]

“The automotive industry typically had a fixed cost to build the vehicle. Now they have ongoing cost for that data transfer, and it depends on how frequently that vehicle is driven and where it’s from,” Hubertus explains. “Think of some of these smaller box vans – they drive all over the place and are often budget vehicles. You can incur a lot of ongoing costs for a cloud-based solution.
“What we see typically are monthly updates. What I’m proposing a lot is tile-based streaming and caching, so you get little parts of the map. Our tiles are, on the highest granularities, about 2.5 x 2.5km, and there are also options to just stream a path of data so you don’t need to download a full tile. For example, when you’re driving down the motorway, you don’t need all the side roads and speed limits. We also have a mechanism to just bring down a path of data, and that is minimizing the data consumption.”
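Tile-based streaming and caching of the sort Hubertus describes can be pictured as keying map data by a coarse spatial grid and downloading only the cells a planned path crosses. The sketch below is an assumption-laden illustration: the grid function, fetch placeholder and cache policy are invented, and Here's actual tiling scheme and API differ.

```python
from functools import lru_cache

TILE_DEG = 0.0225  # roughly 2.5km of latitude; crude stand-in for a real scheme

def tile_id(lat: float, lon: float) -> tuple:
    """Map a position to the grid cell ('tile') that contains it."""
    return int(lat // TILE_DEG), int(lon // TILE_DEG)

@lru_cache(maxsize=256)  # keep recently used tiles; evict the rest
def fetch_tile(row: int, col: int) -> dict:
    """Placeholder for a network request to a hypothetical map service,
    returning speed-limit data for one tile. Cached, so each tile is
    downloaded at most once while it stays in the cache."""
    print(f"downloading tile {(row, col)}")
    return {"speed_limits": {}, "tile": (row, col)}

def limits_along_path(path):
    """Stream only the tiles that a planned path actually crosses,
    rather than a whole region, to minimize data transfer."""
    seen = []
    for lat, lon in path:
        t = tile_id(lat, lon)
        if t not in seen:
            seen.append(t)
            fetch_tile(*t)
    return seen
```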
Research commissioned by consumer organization FIA Region I suggests those costs will be necessary to keep systems reliable. Its 2018 report was far from an endorsement of ISA, highlighting that lack of standardized or well-maintained signage, poor weather conditions and inaccurate GPS technology all significantly affect performance. The organization’s director general, Laurianne Krid, says those problems persist four years later, with low-cost camera-only systems producing a 20-30% false positive rate and challenges with ensuring maps are up to date. GSR2 requires suppliers to support map updates for 14 years, but they only have to be provided free of charge for seven.
Krid comments, “Systems are continuously upgraded and improved, but costly. Consumers are only ensured five or six years of map updates. Then they have to pay themselves and are in the hands of the industry whether the map and camera system becomes technically obsolete, even if the consumer is prepared to spend a reasonable amount to keep the system up and running.”

Expanded test protocol
Regulations aren’t the only stimulus for more advanced ISA features. Euro NCAP introduced a test protocol for speed assistance systems in 2015, which assesses features over a 100km route. Adriano Palao, the organization’s technical manager for ADAS and AD, says the focus is on driving innovation rather than complying with a minimum requirement. From 2023, the protocol will be expanded to reward systems that respond to upcoming corners, roundabouts and junctions; the 2025 roadmap proposes incentives for recognizing other safety-critical traffic signs, such as ‘yield’, ‘stop’ and ‘one way’.
Functionality is vital. “The biggest concern with such systems is expected to be driver acceptance,” Palao explains. “To ensure drivers consider these systems a companion rather than a hassle, and therefore don’t eventually switch them off, there are two main points to consider. The first is human factors – the channel through which the information is transferred to the driver. This needs to be straightforward and intuitive, and acoustic warnings are far from that. The second is system accuracy: if a system is not accurate in telling the driver what the road speed limit is at a certain point in time, trust in the system will be damaged, and thus the driver will refrain from using it.”

“THE CHANNEL THROUGH WHICH THE INFORMATION IS TRANSFERRED TO THE DRIVER NEEDS TO BE STRAIGHTFORWARD AND INTUITIVE”
Adriano Palao, ADAS/AD technical manager, Euro NCAP

[Image: Drivers can override the system by pushing on the accelerator]

Tom Leggett, a vehicle technology specialist at Thatcham Research, agrees that it’s important to build on the baseline requirements set out in GSR2 – especially for conditional speed limits. He believes that camera and map interactions have improved to offer more reliable results, adding that Euro NCAP recommends a combination of both, but the latest map-only systems are now capable of 90-95% accuracy, which bodes well as a basis for future applications.
“The background map data has been getting better and better, and we are now moving into HD maps,” Leggett explains. “This is not about better navigation but centimeter accuracy of the entire road network across the world. That information is going to be so important because when we move into the realm of automated vehicles, you can’t have 95% accuracy on your speed limit. You have to have 100% accuracy, or close to it, so they need that back-end data.
“Self-driving and automated vehicles are not reliant on the driver’s ability to understand the road type and the road architecture and the speed limits.



“If it’s raining, I like to think a lot of people slow down on the roads they’re not too sure on. A self-driving car has to do the same sort of thing, so it has to have a much greater understanding of the road. I think that’s the development we’re going to see in the next five years.”
Those changes are being delivered collaboratively. The Navigation Data Standard (NDS) Association, a global alliance of OEMs, vendors and map data providers, is co-developing a specification for storing and interacting with HD maps, with a focus on improved ADAS capability. The latest version, NDS.Live, is organized into building blocks, which optimizes how the data is transmitted to the vehicle to avoid excessive data use.

Infrastructure investment
Not all of the problems are delivered at a vehicle level. The FIA’s Krid believes wider vehicle-to-vehicle and vehicle-to-infrastructure systems will enable ISA and other ADAS technologies to make use of artificial intelligence and edge computing. However, she adds that the infrastructure could create additional costs for consumers and taxpayers, and the basic infrastructure is still problematic.
“The very first issue to resolve is member states agreeing on harmonized vehicle speed signaling (it is a national responsibility, not at the EU level), complying with the Vienna 1968 Convention on Road Signs and Signals, spending sufficient resources on maintenance and putting road signaling on those places where proper ISA detection is deemed critical. Next is further innovations in affordable vehicle technology,” she explains.
“[Other regions can learn from Europe by] monitoring whether member states are prepared to significantly improve road signaling and traffic signs and especially whether they will reserve sufficient resources to put such advanced infrastructure in place. We hope that ISA one day will be sufficiently robust and trustworthy so that drivers and traffic participants can reap the benefits, outweighing the flaws that all systems inherently have, despite innovation and technical progress.”
Here’s Hubertus believes ISA could be as much about consumer familiarity as it is technological development: “What I like about this technology is that it starts bringing in a system that will hopefully help people to trust technology to support them in their driving.
“Apart from just displaying the speed limit, I think a logical next step – and that’s a technology that exists today – is something like advanced cruise control, where it will prompt you and say there’s a higher or lower speed limit coming up and you press a button and accept that. Then the vehicle adjusts and saves fuel, drives according to the speed limit so it’s safer, it’s more comfortable for drivers. It’s very interesting.”
Salles is similarly optimistic, noting that vehicles capable of understanding road curvature and topology are a useful stimulus for other features. “ISA fits into a broader trend for ADAS popularization on new vehicles, driven both by regulators and customers’ demands for more safety and driving comfort,” he says.
“ISA will accelerate this trend, incentivizing the auto makers to think longer term about the scalability and performance of the ADAS solutions adopted by their fleets across different vehicle segments, platforms and geographies.” ‹

“A LOGICAL NEXT STEP … IS SOMETHING LIKE ADVANCED CRUISE CONTROL, WHERE IT WILL PROMPT YOU AND SAY THERE’S A HIGHER OR LOWER SPEED LIMIT COMING UP”
Philip Hubertus, director of product management for automated driving, Here Technologies

[Image: Research shows motorists drive more slowly when using ISA]
[Image: HD map data used for ISA could also help trucks save fuel]

Commercial sense
The General Safety Regulation applies not only to cars and vans but to trucks and buses too. Here’s Philip Hubertus says this means that in addition to understanding speed limits, ISA has to know how these apply to different vehicles and conditions. The upshot, he adds, is it could help cost-sensitive fleet operators save money.
“We are working on a system that allows the truck drivers to drive without their feet. It is adjusting the speed limits like adaptive cruise control, and not only on motorways but also on the smaller roads. That helps manufacturers sell a vehicle that is much more fuel efficient,” he explains.
“Knowing what the speed limit is up ahead, and the curvature and the slope of the road, allows them to sail into situations where the speed limit goes down, then accelerate correctly when the vehicle goes uphill, sail again when it goes downhill. That is, according to them [fleet operators], probably going to reduce fuel consumption by about 10%, which is huge for logistics.”



SYNTHETIC DATA

Manufactured

[Image: AI city view render from Synthesis AI, which produces AI-generated drivers and environments for ML training]

How generative models are helping the development of AV technology
By Ben Dickson

Like all systems that rely on machine learning, autonomous vehicles need a large amount of annotated training data. And like all machine learning systems that are designed to work in the real world, collecting training data remains one of the biggest challenges of AV systems.
When the data becomes too hard and expensive to come by, why not generate your own?
As the market for self-driving cars continues to mature, many companies and researchers are leveraging advances in synthetic data and generative machine learning models to complement real-world data. Recent developments in this area show how innovation in different fields can come together to solve some of the toughest challenges of the autonomous vehicle industry.

Costly labels
Machine learning models are best known for prediction
and classification tasks. They receive visual, audio, textual
or tabular data as input and return a prediction, such as
the probability of an image containing a certain object,
the sentiment of a social media post or the future price of
a stock. In AVs, machine learning models perform various
tasks, such as detecting objects and predicting the speed
and trajectory of other cars.
Without synthetic data, all the training examples must
be collected from the real world, which imposes extra
costs on AV companies.
“Current methods require manufacturers to build
and deploy cars that are loaded with sensors and cameras
to potentially drive thousands – if not hundreds of
thousands – of miles before car makers obtain enough
data to label,” says Yashar Behzadi, CEO and founder of
Synthesis AI.
An additional challenge is the labeling of training
examples. Many machine learning models used in
autonomous vehicles are ‘supervised’, meaning that they
require their training examples to be annotated for
ground truth information.
“The process of labeling it is a monumental task that
requires careful extraction of specific events to identify
useful information for developing computer vision
machine learning models,” Behzadi says.
A recent study by Synthesis AI found that data labeling
costs organizations upward of US$2.3m annually, and that
it can take up to 16 weeks to conduct supervised learning
on new projects.




Not only is real-world data hard to come by but it is also not representative of all the situations that the AV will face.
“Real-world data is often biased as the data collection
platforms (the vehicles) often run on selected routes and
typologies. Therefore, there is an inherent bias toward such
routes, speeds, typologies and actors for all developed
ML models using such data,” says Mayuresh Savargaonkar, a researcher at the Informatics, Reliability & Data Analytics (IRDA) lab at the University of Michigan.

Synthetic humans and driving scenarios for ML model training, from Synthesis AI
These are some of the areas
where generative models and
synthetic data can help,
complementing real-world
data while avoiding safety and
regulatory complications.

Synthetic data and generative models


For safety-critical applications like autonomous driving,
synthetic data fills the gaps in real-world data. Synthetic data
also enables companies to test rare events and edge cases to
ensure safe and robust performance. This can include
dangerous scenarios that are hard to capture in the real world,
such as near-collisions and pedestrians dashing into the street.
Synthetic data can be obtained in different ways. One popular method is to use game engines and virtual environments. Another, which is becoming increasingly popular along with simulators, is to use ‘generative models’ to create training data for the AV machine learning systems.
Generative models are machine learning systems that create data similar to their training examples. For example, if a generative model is trained on images of faces, it will be able to generate realistic faces that don’t exist in reality. Generative models come in various types and are used in different applications. In the case of autonomous vehicles, generative models can help create realistic road scenes and sensor data, and have become an important part of the toolbox of AV machine learning engineers.
“We, as ML engineers, are specifically interested in replicating real-world data distributions and expanding beyond those to expose the developed models to high-risk edge cases,” adds Abdallah Chehade, a professor of industrial engineering at the University of Michigan. “Furthermore, using synthetic maps with varying typologies allows us to get beyond this bias observed in real-world data. Finally, the power of simulation tools and synthetic data generation techniques allows us to collect data for high-risk edge cases.”

“REAL-WORLD DATA IS OFTEN BIASED AS THE COLLECTION PLATFORMS OFTEN RUN ON SELECTED ROUTES AND TYPOLOGIES”
Mayuresh Savargaonkar, researcher, University of Michigan

StyleGAN transfers the style of one image to another

Generative adversarial networks
Generative adversarial networks (GAN) are one of the most popular types of generative models. GAN started out as a research project undertaken by a group of scientists in Montreal, Canada, in 2014 and quickly became a very popular technique used in applied machine learning. In essence, a GAN comprises a pair of competing deep neural networks – a ‘generator’ and a ‘discriminator’. The generator tries to create data that passes as authentic examples, and the
discriminator tries to tell real data from that created by the
generator. The two neural networks are trained together and
against each other, and each becomes better at its respective
task as they go through more epochs of training. With enough
training, the generator can create realistic examples, including
images of roads and cars.
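To make the adversarial setup concrete, here is a minimal sketch in PyTorch. The layer sizes, the flattened 64x64 ‘road image’ input and the real_batch argument are illustrative assumptions for this article, not any production AV pipeline:

import torch
import torch.nn as nn

# Illustrative dimensions: 64x64 RGB images, flattened. Real systems use
# convolutional networks and far larger inputs.
LATENT, IMG = 128, 64 * 64 * 3

generator = nn.Sequential(
    nn.Linear(LATENT, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),           # outputs a fake 'image'
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                        # real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):                   # real_batch: (B, IMG) tensor
    b = real_batch.size(0)
    # 1) Discriminator: label real images 1, generated images 0
    fake = generator(torch.randn(b, LATENT)).detach()
    loss_d = bce(discriminator(real_batch), torch.ones(b, 1)) + \
             bce(discriminator(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Generator: try to make the discriminator call its fakes real
    loss_g = bce(discriminator(generator(torch.randn(b, LATENT))), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

The alternating two-step update is the essence of the technique: each network improves only because the other keeps improving.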
“Generative models such as GANs are being increasingly
investigated for synthetic data generation,” Savargaonkar says.
“Although simulation-based tools offer synthetic data, it is
essential to remember that the data (especially perception-based
data) is only as good as the simulator’s rendering capabilities.”
To improve on these limitations, researchers are investigating
how different types of GANs can bridge the gap between
simulation environments and real-world data. For example,
StyleGAN is a variation of the model that can transfer the style
of one image to another. It can be used to project the realistic
shading of real-world images onto the frames rendered by
a simulated environment.




SurfelGAN, a technique developed by researchers at Waymo, Google Brain and the
University of Texas at Austin, uses generative
models to create realistic sensor data. The
researchers use camera and lidar data to
reconstruct environments that self-driving
cars navigate. The SurfelGAN model is then
used to remove artifacts and inaccuracies
caused by the reconstruction process. The
result is synthetic data that is much closer
to the real world.
Savargaonkar and Chehade were part of a team that helped develop VTrackIt, a synthetic driving data set that encourages integrating intelligent infrastructure for safer and more robust autonomous vehicles. The team also developed InfraGAN, a module that uses VTrackIt data to perform informed trajectory predictions.

Above: SurfelGAN helps generate novel images from recorded trajectories
Below: InfraGAN uses V2V and V2I data to generate trajectory predictions
“Due to limitations in existing technologies, little work has
been done to gauge the advantages of vehicle-to-infrastructure
technology. This data presents an unprecedented opportunity
for increased safety and reliability of SDCs,” says Chehade.
VTrackIt provides explicit annotations for several vehicle-to-
infrastructure (V2I) and vehicle-to-vehicle (V2V) information
variables. It consists of more than 600 sequences that collect
such data under a wide distribution of speeds, typologies, traffic
and weather conditions.
“Our results overwhelmingly conclude that the infrastructure
and pooled vehicle information provided by VTrackIt indeed
support the development of informed trajectory prediction
models since they experience a significantly lower number of
edge cases,” explains Savargaonkar.

Nvidia’s Neural Reconstruction Engine


To ensure AI drivers and systems meet the requirements for on-road use, Nvidia has created the Neural Reconstruction Engine, a new AI toolset for the Nvidia Drive Sim simulation platform, which uses AI networks to turn recorded video data into simulation.
Nvidia’s solution uses AI to automatically extract the key components needed for simulation, which include the environment, 3D assets and scenarios. Once extracted, the individual pieces are reconstructed into simulations that have an extremely high level of realism but are now reactive and can be changed as engineers see fit. The solution streamlines the process of creating simulated worlds, making it more cost-effective when using the toolset.
Real-world 2D video data is converted by the AI pipeline into a 3D digital twin that can then be loaded into Nvidia’s Drive Sim. Other 3D assets can also be reconstructed and placed into a library of assets for use in other simulations.
Just like in the real world, a multitude of scenarios take place during a simulation in an environment combined with assets. Nvidia’s new Neural Reconstruction Engine works to assign AI-based behaviors to actors within the scene, so when these events or scenarios occur, the actors behave just like they did in the real drive. Additionally, because of the AI behavior model, figures in the simulation can respond and react to scene, element or vehicle changes. Furthermore, because of the scenarios occurring in a simulated world, developers can change the time and location of events, in addition to incorporating new elements.
Within Drive Sim, developers can also adjust dynamic and static objects, a vehicle’s path and the location, orientation and parameters of vehicle sensors. To further improve and train perception systems, the same scenes in Drive Sim can be used to generate pre-labeled synthetic data, and by using real-world data to build scenes, the sim-to-real gap is reduced.
The Neural Reconstruction Engine will be integrated into future releases of Drive Sim, and enables both physics-based and neural-driven simulation on the same cloud-based platform.




Block-NeRF uses neural radiance fields to recreate entire city blocks from previously recorded videos

The market for synthetic AV data
“With the rise of synthetic data, companies of all sizes can easily develop or acquire the necessary information to power AI applications at a fraction of the time and cost of hand-labeling methods,” Behzadi says.
The market for synthetic data is growing. Companies such as Synthesis AI are providing autonomous vehicle manufacturers with synthetic data sets with richly annotated examples which can be used to train and test AV machine learning models under different terrain, road, weather and lighting conditions.
“Synthetic data will be a valuable asset for developing driver safety systems and autonomous technology in the mobility space,” Behzadi says. “The massive demand for high-quality synthetic data will continue to push the envelope to meet the demands of the modern connected vehicle.” ‹

“INSTEAD OF HUMAN ARTISTS, ASSETS CAN BE AUTOMATICALLY LEARNED FROM DATA SETS THROUGH NERF MODELS”
Kashyap Chitta, researcher, Max Planck Institute for Intelligent Systems

Neural radiance fields
A more recent generative model that is becoming popular for different applications is the neural radiance field (NeRF). Introduced in 2020, NeRF is a neural network that can take several 2D images of an object and render the same object from new angles.
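The mechanics behind that definition can be sketched briefly. The toy PyTorch example below is a simplified illustration (it omits NeRF’s positional encoding and hierarchical sampling, and is not Waymo’s or anyone’s production code): a small network maps a 3D point and viewing direction to color and density, and a renderer composites samples along a camera ray into one pixel.

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    # Maps a 3D point and viewing direction to (RGB color, volume density).
    def __init__(self, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)
        self.color = nn.Sequential(nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
                                   nn.Linear(hidden // 2, 3), nn.Sigmoid())

    def forward(self, xyz, view_dir):
        h = self.body(xyz)
        sigma = torch.relu(self.density(h))                 # density >= 0
        rgb = self.color(torch.cat([h, view_dir], dim=-1))  # view-dependent color
        return rgb, sigma

def render_ray(model, origin, direction, n_samples=64, near=0.1, far=10.0):
    # Sample points along the ray and alpha-composite their colors.
    t = torch.linspace(near, far, n_samples).unsqueeze(-1)
    points = origin + t * direction
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)     # opacity per sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                 # contribution per sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)         # final pixel color

model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))

Training then simply minimizes the difference between pixels rendered this way and the corresponding pixels of the captured 2D images.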

“NeRFs are a tool for data-driven rendering,” says Kashyap Chitta, a doctoral researcher at Max Planck Institute for Intelligent Systems in Germany. “Most existing simulators require human artists to build assets, and yet still do not produce photorealistic data. NeRFs could contribute to fixing these two problems in the long term. Instead of human artists, assets can be automatically learned from data sets through NeRF models.”
NeRFs are useful for several applications, including object reconstruction, background removal and interpolation between different types of objects.
NeRFs have also found their way into the self-driving car industry. A very interesting example is Waymo’s Block-NeRF – a deep learning model that can synthesize full blocks of a city based on recordings captured by the cameras of a vehicle. The technique was used to reconstruct a square-kilometer neighborhood of San Francisco, California, based on video frames captured by an autonomous vehicle during multiple trips. The recordings were done across three months, at different times of day and under different lighting and weather conditions. Once the Block-NeRFs were created, the researchers could create photorealistic image sequences of new pathways that were not traveled by the autonomous vehicle during the original recordings.
Under the hood, NeRFs create latent representations of the scene, which can then be used for different tasks. In fact, Chitta and his colleagues at Max Planck Institute and the University of Tübingen used the same kind of technique to create neural attention fields (NEATs), a deep learning architecture that can be used in AVs during the self-driving task.
“In NEAT, we use implicit representations (like NeRF) as our model output as it lets us model very high resolutions in space and time with a fixed amount of memory usage,” Chitta says.
NEAT can compress high-resolution images into intermediate representations that can then be used for trajectory prediction and planning.
“I think an even more important role that generative models will play is in the testing of AVs,” Chitta says. “Demonstrating reliability purely through real-world driving demonstrations is probably not going to be sufficient given the high safety standards. This means that high-quality simulation with critical scenarios, using generative models, will be crucial to showcase or certify the robustness of self-driving systems.”

Simulation-based sensor selection and packaging
Synthetic data can also support the selection and packaging of sensors to optimize perception performance, according to Chris Gundling, head of sensor simulation at Applied Intuition. “Typically, sensor selection and packaging [i.e. a sensor’s mounting hardware and placement on the vehicle] consume time and resources,” he says. “ADAS and AV teams need to acquire physical sensor samples from the manufacturer, mount them on vehicles or test rigs and evaluate their performance through real-world testing. Sample sensors often take months to arrive and are unavailable until sensors have entered production. Real-world test methods constrain the number of configurations teams can test.
“With sensor simulation, teams can simulate specific sensors packaged in any combination of ways and quantitatively evaluate the resulting synthetic data. This method is fast, cost-effective and deterministic, enabling teams to rapidly identify a list of sensors and packaging that might maximize their perception system’s performance. Teams can then validate the system with a small number of real-world tests.
“This simulation-first workflow enables teams to evaluate a larger number of sensors and packaging while simultaneously saving resources, thus ensuring they select the optimal sensors for their perception system. For example, teams can evaluate which Ouster lidar best suits their needs by leveraging lidar models created in a direct partnership between Applied Intuition and Ouster.”



TESTING STANDARDS

Leading regulators and experts share their understanding of the emerging regulatory landscape for the ADAS/AV industry
By Paul Willis

The year 2022 was “very important” in the evolution of AV regulations, according to Alain Piperno, a senior expert in autonomous vehicle testing at the French vehicle testing organization UTAC. Piperno’s opinion is based on the publication of a number of significant regulations related to advanced driving systems, and their testing.
The first was an EU draft regulation laying down
the technical specifications, assessment and test
requirements for the type approval of fully automated
vehicles, paving the way for the full deployment of
robotaxis, autonomous shuttles and last-mile delivery
vehicles in European cities in the coming years.
The draft regulation was released in April, and the
final version was adopted and published into law by
the European Union in August.
Maria Cristina Galassi, project leader – Safety of Connected and Automated Vehicles at the European Commission’s Joint Research Center, called the regulation “a milestone for the automotive sector and the EU”.
“It is the first comprehensive regulation worldwide allowing the type approval of fully driverless (Level 4) vehicles,” says Galassi, who provided technical support during the development of the regulation.

IN 2022 ASAM PUBLISHED A CONCEPT FOR A NEW AV SAFETY STANDARD ENABLING TESTING FOR ROAD READINESS
The second development, announced
in June, is a UN regulation extending the
maximum speed for automated lane keeping
systems (ALKS) in passenger cars and light-duty
vehicles up to 130 km/h on motorways and physically separated dual carriageways, and allowing fully
automated lane changes.
This regulation, which was guided by the UN’s
Economic Commission for Europe’s framework on
autonomous vehicles, represents a significant step
forward in Level 3 vehicle automation and came after
a UN ruling last November extending ALKS to heavy
vehicles including trucks, buses and coaches.
Meanwhile, in July 2022, the EU’s Vehicle General
Safety Regulation went into force, a large-scale ruling
that helps establish the legal framework for the approval
of automated and fully driverless vehicles in the EU.




UTAC’s TEQMO test center, which is focused on CAV development and homologation testing

Ordinance survey

The Silicon Valley split


In the drive to develop global standards and regulations for the emerging autonomous car industry, a divide has opened up between the traditional automotive industry and Silicon Valley-based tech companies. While the auto makers have pushed the case for global standards, the tech companies have been more reticent, preferring to develop their standards in-house, says Foretellix’s Gil Amid.
“If you look at companies like Waymo or Zoox or Aurora, each one of them will have its own solution for how to do scenario-based testing,” he explains. “And these companies don’t exchange a lot with the others, unlike what you see in the EU and with the classical OEMs.”
The tech companies’ reluctance comes largely from the operational differences between the tech industry and the automotive industry that have developed over time because of the differing market realities, according to Sven Beiker of Silicon Valley Mobility.
“In Silicon Valley, standards often come through pure market share and size,” says Beiker. “A big tech company develops a new cell phone operating system and because everybody uses their specification, it becomes the de facto standard.
“Whereas in the traditional automotive industry, it’s more collaborative because you don’t often find a situation where one player has 70-80% of the market share.”
Another important factor that influences the divide between tech startups and legacy OEMs is the difference between the regulatory environments in Europe and the US. In Europe, certification is overseen by national regulators whereas in the US the companies themselves are responsible for self-certifying their own products.
“In Europe you need to show that you comply with certain regulatory categories and requirements, while in the US, in the first instance, everything that is not prohibited is basically allowed,” says Beiker.
The clearest example of the divide between tech startups and traditional auto makers is the differing approaches of Tesla and Mercedes, says ASAM’s Ben Engel. While Mercedes’ approach to the rollout of its self-driving capabilities may look conservative compared with that of Tesla, Engel explains that both companies’ ways of operating are “prescribed by the regulatory environments in which they find themselves”.

As Piperno succinctly puts it, “The regulation is here, and now we are waiting for vehicles.”
How long third-party testing houses like UTAC will have to wait
before auto makers begin certifying their vehicles under the new
regulations remains to be seen. Because although the new regulations
are something of a milestone, they have highlighted the ongoing
difficulties of creating a regulatory framework around something
as complex as AV technology.
The ALKS regulation, for example, has already “showcased a few
problems”, according to Ben Engel, CTO at the Association for
Standardization of Automation and Measuring Systems (ASAM),
the automotive standards agency based in Germany. Following the
publication of the regulation, ASAM members – many of them
major car makers – tried to specify the regulation using
ASAM’s open-standard scenario-based software.
“The frustrating thing was that almost all of the
participants came up with different and conflicting
variants,” says Engel. “So it means that if the regulation
can be misinterpreted, of course that’s an issue.”
One reason the new regulations might be misinterpreted
is the lack of specificity in how they are written, says Gil Amid,
co-founder and chief regulatory affairs officer at Foretellix, an
Israeli tech company that develops software for AV testing.
Amid says that the EU’s ADS regulation that was approved this
summer asks OEMs “for various data and documentation, but it is
not articulating exactly what documentation they require”.
Amid continues, “The regulation asks the OEM to file the set of simulations and tests they have carried out in order to ensure safety but it doesn’t say what you should test.”

AV testing underway at UTAC
However, although this lack of specifics leaves the new rules open
to misinterpretation, Engel believes that too much detail can also lead
to problems. “The risk for [the regulators] in specifying in too much detail what scenarios to test is that the OEMs will test exactly that and then declare their function safe,” he says. “And the likelihood of that being sufficient is at the moment just too difficult to be able to clearly state because the methodologies are still being developed.”
Consequently, the regulators and the OEMs are caught in what Engel characterizes as “a cat-and-mouse game”.
He continues, “The regulators are hoping that someone can tell them what to regulate and how to quantify it, and so are the OEMs, but no one wants to take the first step.

“IF YOU LOOK AT COMPANIES LIKE WAYMO OR ZOOX OR AURORA, EACH ONE OF THEM WILL HAVE ITS OWN SOLUTION FOR HOW TO DO SCENARIO-BASED TESTING”
Gil Amid, co-founder and chief regulatory affairs officer, Foretellix




Above: Diagram showing a ‘cut-in’ test procedure for AEB
Right: Cut-in test performed at AVL ZalaZone in Hungary

“THE FRUSTRATING THING WAS THAT ALMOST ALL OF THE PARTICIPANTS CAME UP WITH DIFFERENT AND CONFLICTING VARIANTS”
Ben Engel, CTO, ASAM

“For the OEMs, if they take the first step and get it wrong, then the house falls down around them. And with respect to the regulators, if they regulate too intensely and too strictly, then the same thing will probably happen.”
Despite these uncertainties, there is broad agreement between regulators and OEMs that the certification of self-driving vehicles will primarily involve scenario-based testing. This refers to testing a vehicle’s response to various predetermined scenarios that it might encounter on the road. Although some of this testing can be physical (on either public roads or private testing grounds) most of it is simulation based.
Foretellix and ASAM have jointly developed the OpenScenario simulator-based software for use in scenario testing. They hope that the software will become the standard language for scenario-based testing. According to Amid, it is already being used by more than 20 companies developing AV technology, including Volvo. “It is a standard that enables you to specify and articulate scenarios in a specific language that can be fed into computer programs,” Amid explains.
But even with a standardized software in place to run scenario-based tests, OEMs and regulators are left with the thorny question of which scenarios to test. Amid points to three sources currently used to identify scenarios.
The first is data derived from analyzing real-world road traffic data, including driver behavior and traffic patterns. The second source is scenarios specified in standards and regulations. For example, Euro NCAP, the European car safety assessment program, has published a set of protocols that can be represented in OpenScenario “and immediately you have many more scenarios to test”, says Amid.
The third source is so-called edge cases: scenarios that are rare but nevertheless must be tested against to ensure a vehicle can handle all eventualities.
But once scenarios have been selected, OEMs and regulators still need to decide on the degree to which they need to be tested, according to Dr Sven Beiker, who is an external advisor to the US-based standards agency SAE International as well as the managing director of Silicon Valley Mobility, an independent consulting and advisory firm.

How self-driving cars will be certified in the EU
The certification process for self-driving cars in the EU is based on the Whole Vehicle Type-Approval (WVTA) system already in place in Europe for traditional automotives, according to Maria Cristina Galassi at the European Commission’s Joint Research Center.
“Under the WVTA, a manufacturer can obtain certification for a vehicle type in one EU country and market it EU-wide without further tests,” she explains. “The certification is issued by a type approval authority and the tests are carried out by the designated technical services.”
A technical service could be a third-party testing center or a conformity assessment agency approved to carry out tests or inspections on behalf of the national regulator. EU member states are required to notify the European Commission of all test centers approved as designated technical services.
“The same system will apply for AVs,” says Galassi. “However, in addition to the testing for type approval done at EU level, rules concerning the safety of operation of those vehicles in local transportation services will be implemented at the national/local level. Therefore, close cooperation with member state authorities is important.”
For this reason, says Galassi, in the first phase of the new legislation implementation the EU is encouraging member states’ authorities and technical services to “share their experience” of how the new testing and certification regime is unfolding.

THE USDOT’S NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION ISSUED A FIRST-OF-ITS-KIND FINAL RULE RELATING TO THE SAFETY OF OCCUPANTS IN AUTONOMOUS VEHICLES IN 2022
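To illustrate what scenario-based testing means in practice, here is a minimal sketch in plain Python rather than in the OpenScenario language itself; the parameter values and the run_simulation hook are invented for illustration. One logical ‘cut-in’ scenario expands into a matrix of concrete test cases:

import itertools
from dataclasses import dataclass

@dataclass
class CutInScenario:
    # Parameters a test house might sweep for a cut-in test
    ego_speed_kph: float       # speed of the vehicle under test
    target_speed_kph: float    # speed of the cutting-in vehicle
    cut_in_gap_m: float        # longitudinal gap at the moment of cut-in
    road_friction: float       # 1.0 = dry asphalt, lower = wet or icy

def generate_cut_in_matrix():
    ego_speeds = [50, 80, 110, 130]           # km/h, up to the new ALKS limit
    target_speeds = [40, 60, 80]
    gaps = [10.0, 20.0, 40.0]
    frictions = [1.0, 0.6]
    for ego, tgt, gap, mu in itertools.product(ego_speeds, target_speeds, gaps, frictions):
        if tgt < ego:                          # only cases where the cut-in is slower
            yield CutInScenario(ego, tgt, gap, mu)

# Each case would be handed to a simulator; 'run_simulation' is a
# hypothetical stand-in for whatever execution backend a team uses.
# results = [run_simulation(s) for s in generate_cut_in_matrix()]
print(sum(1 for _ in generate_cut_in_matrix()), "concrete cases from one logical scenario")

The open question the experts describe is exactly how far such parameter sweeps must extend before a function can be declared safe.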




“THE EU TYPE APPROVAL


WILL REPRESENT ONLY
THE FIRST STEP FOR
MARKET INTRODUCTION OF
AUTOMATED VEHICLES”
Maria Cristina Galassi, project leader –
Safety of Connected and Automated
Vehicles, European Commission

The VVM project aims to determine reliable validation and verification methods

Taking as an example the scenario of a pedestrian carelessly crossing the street, Beiker says a testing house could simulate 99% of the possible variations of how that might play out. “But we already know that 99% is not good enough,” he continues. “So the question is, how many nines do you need to put after the dot?
“You can basically start anywhere and almost all of it would be relevant but the real question is when do you stop; when do you say, ‘Now we’re done’?”
The question of how much testing is enough speaks to the high stakes involved in the certification of self-driving vehicles. Since no one in the nascent industry wants to run the risk of repeating the mistakes of Uber, whose self-driving car business imploded after one of its test vehicles was involved in a fatal car crash in Arizona in 2018, most players are erring heavily on the side of caution.
One innovative method that the industry has developed to help the regulatory process along is the growing use of operational design domains (ODDs). Seen as a way of managing the risk of putting ADS-enabled vehicles on public roads, ODDs rely on the concept of ‘informed safety’. This means that rather than attempting to realize the impossible goal of making a product totally safe, companies make its users aware of what a system can and cannot do. When it comes to autonomous vehicles, ODDs can be used to specify the conditions in which a particular autonomous system is capable of operating.
“They quantify in a non-ambiguous and clear fashion where the autonomous function is intended to drive,” says Engel.
An obvious area where quantifying ODDs could prove useful from a regulatory standpoint is the ability of autonomous systems to handle different weather conditions, Engel continues: “If we take the US as an example, there are quite a few AVs on the roads but the majority of them are in very dry, sunny areas. So their operating domain when it comes to weather is dry and clear weather, which is much easier than if you’re trying to develop the same function for rain or snow.”
According to Galassi, the new EU regulations don’t cover ODD-specific testing: “The EU type approval will represent only the first step for market introduction of automated vehicles. Some general scenarios will be tested already at this stage, aiming at verifying the overall capabilities of the system, but ODD-specific testing will be possible only at the national/local level,” she concludes. “That is one of the reasons why an open regulation approach was needed, with high-level requirements, compared with the conventional approach based on physical testing in very specific conditions.” ‹

Systematically testing autonomous vehicles
The German research project VVM – Verification and Validation Methods for Automated Vehicles at Level 4 and 5 – which began in July 2019, is funded by the Federal Ministry of Economics and Energy and led by Robert Bosch GmbH and BMW Group, with support from 23 industry and research partners. Its aim is to develop test methods and provide systems and methods to prove that automated vehicles are safe.
The project reached its halfway point in March this year, with partners presenting their initial results and methods. Representatives from politics, industry and academia stressed the importance of the project, stating that autonomous driving can be safe only if reliable validation and verification methods are used. dSPACE contributed its expertise in software- and hardware-in-the-loop simulation for the analysis and evaluation of project requirements, and also provided prototype implementations.
Results were presented for three main areas: the requirements for the test methods, orchestration and validation of the test infrastructures, and the data flow and tools used in the project. dSPACE successfully tested the methods developed in the project for practicality with two demonstrations on criticality analysis and sensor model validation.
The second half of the project will focus on refining the demonstrated results and implementing more concept demonstrators. While the criticality analysis and sensor model validation will remain as important as before, test orchestration, i.e. the distribution of test cases to the most suitable test equipment, and the required continuity of the test infrastructure between software- and hardware-in-the-loop testing will be another focal point of dSPACE’s efforts in the project. For more information, please visit https://www.vvm-projekt.de/en/
specific conditions.” ‹



MACHINE LEARNING

ARCHITECTURAL digest
The evolution of the AV machine learning stack
By Ben Dickson





In the late 1980s, scientists at Carnegie Mellon University in Pennsylvania developed ALVINN, a self-driving car system that was powered by a deep-learning model. The model – a three-layer deep neural network – processed input from a color camera and a laser range finder and outputted steering and acceleration commands to the vehicle.
Machine learning models like the one used in ALVINN have become a mainstay of the autonomous vehicle industry. But as the cars, sensing hardware and processors have evolved, so have the models. Today’s neural networks still retain some of the components used in ALVINN. But they have also become much more complex and can handle many more tasks. Here are some of the interesting architectures that are being used in the AV machine learning stack.

Transformers
Perception is a key component of self-driving cars. AVs must be able to detect cars, pedestrians, traffic signs, objects, street lanes and more by analyzing video and lidar data. Until recently, the main machine learning architecture used for this task was the convolutional neural network (CNN), a deep learning architecture that has been around since the late 1980s and has played a prominent role in computer vision applications.




The ‘transformer’ model arrived in 2017 – a deep learning architecture that became very successful in natural language processing (NLP). Transformers have become the main architectures used in large language models such as GPT-3. One of the key features of the transformer is the ‘attention mechanism’, which enables the model to find important relations in long sequences of data. Another advantage is its ability to learn from unlabeled data, also known as ‘unsupervised learning’ or ‘self-supervised learning’.
The success of transformers in natural language processing inspired the adoption of transformer models in other domains, including computer vision. The ‘vision transformer’ is a variant of the original transformer that has been modified to process visual data.
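The attention computation at the heart of these models is compact enough to show directly. The PyTorch sketch below uses illustrative token counts and embedding sizes, not those of any production network:

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # Each token attends to every other token; weights sum to 1 per query.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v

# A 224x224 image cut into 16x16 patches gives 196 patch tokens.
tokens = torch.randn(1, 196, 768)             # (batch, tokens, embedding dim)
wq = torch.nn.Linear(768, 768)
wk = torch.nn.Linear(768, 768)
wv = torch.nn.Linear(768, 768)

out = scaled_dot_product_attention(wq(tokens), wk(tokens), wv(tokens))
print(out.shape)   # torch.Size([1, 196, 768]) – same tokens, now context-aware

Note that the score matrix grows with the square of the number of tokens, which is exactly the memory cost discussed below.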
“Generally speaking, with a vision transformer, an image is partitioned into patches, which are often referred to as tokens. These tokens interact together as a so-called attention mechanism to understand the context of the image,” says Le An, senior deep learning software engineer at Nvidia.
The attention mechanism relates tokens from the input and gives more weight to the parts that are more relevant across the image. This mechanism overcomes the limitation of conventional convolution filters in the CNN, which only consider local contexts. Vision transformers have resulted in notable improvements in computer vision tasks such as image classification, object detection and semantic segmentation.
“The transformer-based architectures, which have flourished in the vision community, have already been actively researched and applied in the field of AV. Almost all tasks in a perception module can benefit from a transformer architecture,” An says. “Due to its superior performance, the transformer architecture is rapidly becoming the ideal choice for various tasks.”
Beyond camera input, the transformer architecture can be applied to lidar point clouds and other types of sensor data gathered by the AV.
Despite its benefits, however, An says that the transformer still has a few flaws that need to be addressed to make the most of it in AVs. The memory complexity of the transformer grows quadratically with respect to the size of the input. Moreover, it performs memory-hungry operations that can cause latency.
“Particularly when the model is deployed on embedded systems, these can be a hindrance for AV production, as latency is among the most critical metrics to evaluate network practicality,” An says. “With these limitations in mind, our intention was to approach network design with the consideration of memory complexity reduction and operation efficiency on hardware characteristics. Ultimately, we aimed for production deployment on embedded systems such as the Nvidia Drive Orin system-on-a-chip.”
An was part of a team at Nvidia that created a transformer architecture that achieves state-of-the-art accuracy in multiple tasks at only a fraction of the cost in terms of model size and inference latency compared with other convolutional or transformer-based methods.
“This is not the end of our journey, and we continue to investigate novel transformer architectures that are lightweight, faster and more accurate,” An says.

“ALMOST ALL TASKS IN A PERCEPTION MODULE CAN BENEFIT FROM A TRANSFORMER ARCHITECTURE”
Le An, senior deep learning software engineer, Nvidia

ALVINN HAD AN OPERATING SYSTEM OF 100 MILLION FLOATING-POINT OPERATIONS PER SECOND

Above: ALVINN – the first AV technology stack that used deep learning
Below: The vision transformer uses the successful transformer model to process visual data

Worldly wise
From an early age, humans and other animals start building internal representations, or models, of the world through observation and interaction. This accumulated knowledge of the world (‘common sense’) enables us to navigate effectively in unfamiliar situations. These ‘world’ models that describe the evolution of the environment around us are paramount in how we act in our everyday lives.
Imitation learning is a method that allows machine learning models to learn to mimic human behavior on a given task. An example is learning to drive a vehicle in an urban environment from expert demonstrations collected by human drivers. Data collected by human drivers implicitly contains common knowledge of the world. To account for this knowledge, many AI developers believe that incorporating world models into driving models is key to enabling them to properly understand the human decisions they are learning from and ultimately generalize to more real-world situations.
London-based Wayve, a startup pioneering a scalable way to bring AVs to the UK and beyond, recently presented a paper titled Model-Based Imitation Learning for Urban Driving at NeurIPS 2022 (a conference on neural information processing systems). The paper detailed an approach, named MILE, that combines world modeling with imitation learning. MILE jointly learns a model of the world and a driving policy from an offline corpus of driving data. It can imagine and visualize diverse and plausible futures and use this ability to imagine how to plan its future actions. Wayve reports that its model achieved state-of-the-art performance in the open-source CARLA driving simulator. The biggest improvement upon previous methods (35% increase in driving score) was observed when deploying MILE in new towns and under previously unseen weather conditions, highlighting how it can achieve better generalization than previous methods.
“MILE is another critical step toward achieving a more generalizable driver,” explains Gianluca Corrado, lead applied scientist at Wayve. “It offers great potential for improving motion planning by giving us an accurate model of the environment and the dynamic agents acting within it. Through MILE our driving models increase their capability to drive when entering new, unfamiliar settings. This ability to adapt to new driving environments is key to scaling autonomous driving technology to multiple cities worldwide.”



ViDAR
The machine learning models of autonomous vehicles must
solve several challenges, including 3D scene construction,
depth estimation and optical flow prediction. These tasks
usually require heavy use of lidars.
In 2020, scientists at Waymo and Google proposed ViDAR,
a neural network architecture that can detect depth and
motion from camera frames alone.
“The ViDAR’s architecture is a deep structure from motion
(SfM) framework that, given a set of consecutive images, can
predict the depth and the flow maps together,” says Jeremy
Cohen, AV engineer and founder of Think Autonomous.
The ViDAR network takes several consecutive image frames
obtained from AV cameras and predicts the scene depth map,
which is a visual representation of the scene where each pixel
represents the distance to the camera. By putting several
consecutive depth maps next to each other, the ViDAR then
tracks the movements of vehicles and objects through time.
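The general recipe (stacked camera frames in, per-pixel depth out, lidar as the training signal) can be sketched as follows. The tiny network and tensor shapes below are illustrative assumptions and not Waymo’s actual ViDAR:

import torch
import torch.nn as nn

class DepthFromFrames(nn.Module):
    # Takes N consecutive RGB frames stacked on the channel axis and
    # regresses a per-pixel depth map, in the spirit of deep SfM.
    def __init__(self, n_frames=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * n_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Softplus(),  # depth > 0
        )

    def forward(self, frames):                  # (B, 3*N, H, W)
        return self.net(frames)                 # (B, 1, H, W) depth map

model = DepthFromFrames()
frames = torch.randn(2, 9, 128, 256)            # batch of 3-frame clips
lidar_depth = torch.rand(2, 1, 128, 256) * 80   # projected lidar range, meters
mask = torch.rand_like(lidar_depth) < 0.05      # lidar is sparse: supervise only hit pixels

pred = model(frames)
loss = torch.abs(pred - lidar_depth)[mask].mean()  # lidar acts as ground truth
loss.backward()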
“A ViDAR isn’t about depth perception and it isn’t about optical flow – it’s about fusing both into a single 4D output (3D for depth, 4D for time),” Cohen says.
One challenge with the ViDAR is teaching the neural network to detect depth from flat images. For this, the machine learning engineers need the depth data that corresponds to the video frames they use for training the model. This depth data becomes the ground truth that the model uses to evaluate its predictions and adjust its parameters during training. Fortunately for Waymo, the company had collected large amounts of aligned camera and lidar data from the millions of miles that its AVs had previously traveled. The lidar data acted as the labels used to train the depth estimator.
In addition to depth and flow prediction, ViDARs can perform other tasks such as uncertainty estimation, where they provide a color map that represents the reliability of the depth estimates. This feature is particularly important in certain conditions such as darkness and foggy weather.

Above: ViDAR is a deep learning architecture that can estimate depth and flow from visual data
Below: A Wayve test vehicle in London
Although Waymo is still using lidars in its self-driving vehicles,
the development of ViDAR has helped make its machine learning
stack much more robust.
“Using this new approach, Waymo can rely more on cameras
and get a richer scene,” Cohen says. “Sending videos to a model,
they can also predict the flow directly from the model, rather
than calculating an object’s displacement from frame to frame.”

Multitask neural networks


One of the problems with classic machine learning models is
that they are designed to perform a single task. This becomes a
challenge in autonomous vehicles, which require separate deep
learning models for object detection, image segmentation, depth
estimation and other tasks.
“A self-driving car might need 20-30 models, all running in
parallel,” Cohen says. “These models require significantly high
memory and perform unnecessary repeat operations.”
“A ViDAR ISN’T ABOUT DEPTH PERCEPTION AND IT ISN’T ABOUT OPTICAL FLOW – IT’S ABOUT FUSING BOTH INTO A SINGLE 4D OUTPUT”
Jeremy Cohen, founder, Think Autonomous

Interestingly, the overlap between neural networks can be turned into an advantage. Where neural networks process the same kind of data (for example, video feeds and lidar cloud points), they tend to learn and use the same features in their early layers. This provides an opportunity to create shared neural network layers that can be used across different deep learning models.
This is how multitask neural networks operate. They include an encoder module, which extracts the relevant features of the
input data. The output of the encoder is then passed on to several other networks that perform different tasks on it, such as depth prediction and semantic segmentation. This way, feature extraction is only performed once. This shared architecture reduces the costs of training and running the neural networks.
“Using one neural network, we can create several outputs, use fewer computations and thus improve efficiency,” Cohen says.
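The shared-encoder pattern is simple to sketch. The PyTorch example below is an illustrative stand-in, not Tesla’s or any production stack:

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    # One shared encoder feeds several lightweight task heads, so the
    # expensive feature extraction runs once per image.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            "detection": nn.Conv2d(64, 8, 1),      # e.g. per-cell objectness scores
            "segmentation": nn.Conv2d(64, 19, 1),  # per-pixel class logits
            "depth": nn.Conv2d(64, 1, 1),          # per-pixel depth
        })

    def forward(self, image):
        features = self.encoder(image)             # computed once
        return {name: head(features) for name, head in self.heads.items()}

net = MultiTaskNet()
outputs = net(torch.randn(1, 3, 256, 512))
for task, tensor in outputs.items():
    print(task, tuple(tensor.shape))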
“USING ONE NEURAL NETWORK, WE CAN CREATE SEVERAL OUTPUTS, USE FEWER COMPUTATIONS AND THUS IMPROVE EFFICIENCY”
Jeremy Cohen, founder, Think Autonomous

One notable example of a multitask neural network is Tesla’s HydraNet, named after the mythical beast that has several heads. HydraNet takes sequences of inputs from the vehicle’s eight surrounding cameras (Tesla does not use lidars). A series of neural networks process each image and extract its features. A second set of neural networks fuse these feature maps spatially and temporally to account for features across sequences of inputs. The spatiotemporal feature data is then shared across smaller networks that perform object detection, semantic segmentation, depth detection and other tasks.
Not only is HydraNet’s modular architecture both memory- and compute-efficient, it is also easier to manage. Each component can be developed, trained and updated completely independently and by different teams.
“Tesla has been the first to publicly talk about HydraNets. From my research, most companies were using several networks before. Today, many companies are using a network that can do as much as possible,” Cohen says.

THE SIZE OF A REFRIGERATOR, ALVINN’S CPU WAS POWERED BY A 5kW GENERATOR

Graph neural networks
Graph neural networks (GNNs) are a class of deep learning models that specialize in finding relations between different objects in a graph. Graphs are composed of nodes and edges. Each node is represented by a set of features, and the edges represent the relationship between nodes.
The GNN layers process node and edge data and perform ‘message passing’, which is basically a set of operations that combine the features of each node and its neighbors. The output of the GNN is a ‘graph embedding’, which encodes the information of each node and its neighbors into a lower dimensional vector space.
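A single round of message passing can be sketched in a few lines. In the PyTorch example below, mean aggregation over an adjacency matrix is one simple design choice among many, and the four-agent traffic scene is invented:

import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    # One round of message passing: each node aggregates its neighbors'
    # features (mean), then mixes them with its own representation.
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, node_feats, adjacency):
        # adjacency: (N, N) 0/1 matrix; row i marks the neighbors of node i
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        messages = adjacency @ node_feats / deg            # mean over neighbors
        return self.update(torch.cat([node_feats, messages], dim=-1))

# Toy traffic scene: 4 agents (ego + 3 others), edges link nearby agents
feats = torch.randn(4, 16)                 # per-agent state embeddings
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)

layer = MessagePassingLayer(16)
embeddings = layer(feats, adj)             # one embedding per agent
print(embeddings.shape)                    # torch.Size([4, 16])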
Graph neural networks have important applications in domains such as e-commerce and social networking. For example, in social networks, GNNs can examine a user’s social graph and provide friend suggestions based on the similarity of the user’s graph embedding to those of other users.
GNNs have also found their way into autonomous driving. For example, in object tracking, GNNs can help to improve the accuracy of object tracking across time by finding and relating objects that have similar graph embeddings. GNNs can also find relationships between objects and predict how they will affect each other’s trajectories.
Zoox uses GNNs to encode the relationships between all the agents in the scene and the AV, and predict how these might develop in the future.

Above: GNNs can improve some functions of the AV machine learning stack, including trajectory prediction
Below: Tesla’s HydraNet is a multitask network

The fascinating world of neural networks
Neural networks have come a long way in the three
decades since they were first applied to autonomous
vehicles. In fact, there are so many types of neural
networks used in AVs that it would be impossible to cover
all of them in a single article. Self-driving cars still have
a way to go, but what seems sure is that neural networks
will remain one of their key components.
“It’s gratifying to see that what at the time (circa 1990)
was a far-out idea – namely that the newly developed
backpropagation algorithm for training neural networks
could be leveraged to train a car to drive itself rather than
hand-coding algorithms to interpret camera images of
the road ahead – has turned out to be the right approach,”
says Dean Pomerleau, the computer scientist who designed
ALVINN in 1989.
“It has been a long time coming,” he continues, “and
complete autonomy in the full range of driving situations
and conditions is still at least 5 to 10 years off, but it isn’t
really that far from the ‘in 20 years’ prediction for full
automation we were making back in 1997.” ‹



BAIDU APOLLO RT6

The world’s largest robotaxi service provider, Apollo Go, continues to expand across China, following the launch of a low-cost flagship vehicle, the RT6
By Anthony James

Ford and VW’s decision in late 2022 to pull the plug on Argo AI continues to reverberate around the robotaxi sector. However, Chinese internet giant Baidu Inc. shows no sign of slowing down in its efforts to prove the viability of such technology, having unveiled its latest fully autonomous vehicle, the Apollo RT6, in July.
An all-electric, production-ready vehicle, complete with a detachable steering wheel, Apollo RT6 will be put into operation in China in 2023 on Apollo Go, Baidu’s autonomous ride-hailing service. With a per unit cost of just ¥250,000 (US$37,000), the RT6’s economy of scale should help reassure investors who, given current market turbulence, could be forgiven for questioning if they will ever see a return.
“This massive cost reduction will enable us to deploy tens of thousands of AVs across China,” said Robin Li, co-founder and CEO of Baidu, at Baidu World 2022, the company’s flagship technology conference and the scene of the vehicle’s launch. “We are moving toward a future where taking a robotaxi will be half the cost of taking a taxi today.”
As Baidu’s sixth-generation AV, Apollo RT6 marks a distinct departure from its predecessors. Its flat floor and removable paddle ensure a more spacious interior with plenty of room for extra seating, vending machines, desktops and even gaming consoles.
It is also the first vehicle built on Xinghe – Baidu’s self-developed automotive E/E architecture developed for fully autonomous driving. Baidu claims the vehicle is 100% “automotive grade”, offering full redundancy throughout its hardware and autonomous driving software.
The RT6’s L4 autonomous driving system, powered by dual computing units capable of up to 1,200 TOPS, receives inputs from 38 sensors, including eight lidars and 12 cameras, to ensure accurate, long-range detection on all sides. A huge treasure trove of data captured from nine years of real-world testing and operation also informs its decision-making and perception capabilities, further boosting safety.
In fact, during the RT6’s launch event, Zhenyu Li, senior corporate vice president of Baidu and general manager of its Intelligent Driving Group (IDG), claimed the RT6’s autonomous driving capability is equivalent to that of a skilled driver with 20 years of experience.




Baidu Apollo RT6’s sleek, steering-wheel-free interior

Driving




Safer than humans?


Apollo, first introduced in 2017, is Baidu IDG’s open-source autonomous driving tech platform. It is one of the world’s most active open platforms for autonomous driving, with 700,000+ lines of source code, 80,000+ developers and 210+ industry partners.
For Qiong Wu, a senior expert in the operation management department of Baidu’s IDG, “high safety, high quality and low cost” are the three elements essential to realizing the large-scale commercial operation of autonomous vehicles across China in the next few years.
“Our autonomous vehicles must be safer than human drivers,” she says. “At present, most robotaxi service providers equip their vehicles with safety operators. When designing the autonomous driving system, most vendors tend to pay great attention to AI algorithms but little to the system’s ability to troubleshoot and react to accidents, so the safety operators are usually the ‘scapegoat’ responsible for most mistakes or accidents.
“Once you enter the fully driverless phase, simply relying on AI algorithms and retrofitting autonomous car models won’t resolve many system malfunctions. Apollo RT6 has added braking, steering, power supply, sensors and other redundant designs to the vehicle. If the main system fails, the backup can step in. The autonomous driving system in RT6 is also designed and developed in accordance with functional safety [see Three-layer safety system, below] to improve safety and realize fully autonomous operation.”
To this end, all Apollo vehicles include a vehicle-end collision-prevention system, a vehicle-end safety-redundancy control system and a 5G cloud-enabled safety driving officer. And unlike robotaxi firms that take existing models and retrofit them with the necessary sensors and components for automated driving, Apollo continues to work directly with some of China’s biggest auto makers to install its autonomous driving hardware during the production phase.
“On top of the well-established quality-management system of the modern automobile industry, we have achieved high-level automotive-grade mass production for our fourth-, fifth- and now sixth-generation autonomous driving vehicles, from parts selection to the design integration and manufacturing process,” says Wu. “Apollo RT6 has been completely integrated into the quality-management system of the vehicle from design to manufacturing, and the resulting reliability and consistency represent a great leap forward, which is almost impossible for other modified vehicles.”

BY THE END OF Q3 2022, APOLLO GO HAD COMPLETED 1.4 MILLION RIDES ON PUBLIC ROADS – MORE THAN ANY OTHER ROBOTAXI SERVICE PROVIDER

Three-layer safety system
All Baidu Apollo-equipped vehicles follow a three-layer safety system design, according to Qiong Wu. “Baidu Apollo’s self-developed preinstalled OS ensures the vehicle reacts faster than a human driver,” she says. “The preinstalled detection system has abundant sensors to ensure accurate detection so as to identify and respond to changes.
“Additionally, the decision-planning system ensures the vehicle behaves according to transportation laws while completing the theoretical calculations to ensure the safety of the vehicle behavior.
“As well as single-vehicle intelligence, vehicle-road [V2X] collaboration can independently provide a complete closed-loop capability of over-the-horizon perception, such as non-motor vehicles, blind spots, construction roads, low obstacles and more, which is also a supplement to the perception of single-vehicle [robotaxi] intelligence.”
A further layer is provided via full hardware and vehicle redundancy, so that “the failure of any single component of the set won’t affect safety”, according to Wu.
Finally, Baidu Apollo also deploys 5G remote driving, which is capable of one-to-many control of autonomous vehicles on the road. “This approach leverages the advantages of the network and road infrastructure in China and accelerates the popularization of driverless vehicles,” continues Wu. “In addition to the functional safety methodology and three-layer safety system, more than 90% of the new type issues occurring in car accidents can be repaired every half year by using a quick simulation verification.”

The RT6 is designed to deliver a more versatile in-car experience

Cost equation
However, it is the long-term cost benefits that result from such
an approach that should offer most reassurance to investors,
according to Wu. “We make mass-production-ready cars,”
she says. “We don’t work with just one supplier for key parts.”
Wu notes that the cost of traditional taxi and online
ride-hailing services is made up of three parts – driver cost,
vehicle depreciation and daily operation costs such as refueling
or charging, daily maintenance, cleaning and insurance.
For the robotaxi model to work, costs must be low. Wu
maintains that Apollo can turn a profit. “On one hand, with the
gradual scale-up of fully autonomous business operations, it is
expected that the cost of human drivers will be scaled down,”
she says. “On the other hand, we should also reduce the cost of
the vehicle. Even if millions of vehicles are fully driverless, the
business model won’t make sense if they still cost too much.”
Conversely, robotaxi companies that rely on retrofitting or
modifying vehicles will struggle with costs, as the original
vehicles have already been bought, with many featuring
expensive, luxury options that are unnecessary in a taxi setting,
such as a large dashboard display and screen for the driver.




With 8 lidars and 12 cameras, the RT6 is capable of full L4 operation

Driverless operation in Wuhan, Chongqing

“RT6 IS A COMPLETE REDESIGN, PUTTING THE NEEDS OF PASSENGERS FIRST. IT REDUCES THE COST SIGNIFICANTLY AND DELIVERS A BETTER EXPERIENCE AND VALUE FOR TAXI USERS”
Qiong Wu, senior expert, Baidu IDG

“Any retrofitted or modified vehicle cannot be mass-produced at scale,” says Wu, “so the cost of each car cannot be reduced. On the contrary, Apollo RT6 is a complete redesign of an autonomous driving vehicle, putting the needs of passengers first. It reduces the cost significantly and delivers a better experience and more value to taxi users.”

World view
Baidu also enjoys another key advantage: precision-honed training data derived from a near decade of real-world driving. Baidu IDG’s test fleet of more than 600 autonomous vehicles has accumulated more than 36,000,000km, drawn from operations in 10 high-density cities across China, as well as driverless testing in California, while the number of passengers carried on Apollo Go has passed one million.
There have been a few bumps along the way – but Wu won’t be drawn on details. “Autonomous driving does not equal zero accidents,” she says. “It is almost certain that accidents will happen during development. But we should not deny its progress and achievements because of accidents that happened in the early stage of development.”
Fortunately, the Chinese authorities seem to agree. In April 2022, Apollo Go secured the first-ever permits in China to provide driverless ride-hailing services to the public, on open roads in Beijing. The regulatory approval applies to Beijing’s 60km² High-level Automated Driving (BJHAD) demonstration zone in Yizhuang district and allows the removal of a safety operator from the driver’s seat. However, operations are limited to daylight hours and all cars must have a staff member in the vehicle.
“We will gradually roll out 24-hour operation based on regulatory approval,” says Wu. “To meet the needs of China’s rapidly developing autonomous driving technology, Beijing’s Economic and Technological Development Zone has started the construction of the Beijing Intelligent Connected Vehicle Policy Pilot Zone, following the establishment of the BJHAD demonstration area,” she continues.
“It has issued the first batch of night-time and special weather autonomous driving test qualifications to Baidu Apollo and other companies. It will slowly open more policy support for testing autonomous driving in extreme conditions. Since May 2, Apollo Go has started a trial night-time operation, extending its hours from 08:00-16:00 to 07:00-23:00.”
Such approval and investment suggest an openness among Chinese authorities to embrace a fully driverless future. Sure enough, in August 2022 Baidu secured further permits for Apollo Go to offer fully driverless commercial operations, with no safety driver in the vehicle at all, in Chongqing and Wuhan; and as AAVI went to press (November 2022), it also received Beijing’s first permits to test robotaxis without a front-row safety driver in the BJHAD demo zone, where its first batch of 10 ‘Apollo Moon’ fifth-generation robotaxis have begun testing.
“Chongqing and Wuhan are the first two cities to allow fully driverless autonomous vehicles, which provides a sound policy foundation for companies like Baidu to pilot and apply its innovative technologies,” comments Wu. “It builds an environment that encourages industry pioneers to run pilot


programs while they operate. Removing the safety operator from the car is a major move to help explore the commercialization of robotaxis and will accelerate the development of China's autonomous driving core technology."

Apollo Go has expanded to more than 10 cities in China since its launch in 2020
For Wu, the data from such dynamic and diverse locations is
priceless, with each Apollo Go vehicle in Beijing, Shanghai and
Guangzhou completing more than 15 rides per day, on average,
according to Baidu’s latest (Q3 2022) earnings report. “The density
of urban road participants in Beijing is 15 times that of California,”
adds Wu. "Among all the typical urban road scenes with commercial potential in the world, it can be said that China's are the most complex, which means that to realize driverless operation on those roads we have to face considerable challenges."
Fortunately Baidu, as China's leading search engine, can call upon some impressive resources to help it crunch through all that accumulated data. "Cloud computing is the foundation of AI, and Baidu's powerful technology in this area can accelerate the iteration of autonomous driving technology," says Wu.

"THE DENSITY OF URBAN ROAD PARTICIPANTS IN BEIJING IS 15 TIMES THAT OF CALIFORNIA"
Networking opportunity
Baidu also benefits from a government keen to provide the regulatory framework and spectrum allocation to enable cellular vehicle-to-everything (C-V2X) – a key technology for fully driverless L4 or L5 operation.
"While our robotaxis are able to handle 99.99% of complex urban roads, C-V2X infrastructure can provide extra redundancy for long-tail problems and edge cases, such as driving during tough weather, bad lighting or poor environmental conditions," says Wu.
Many Chinese cities have partnered with local wireless network operators to build the necessary infrastructure, while C-V2X is expected to be included in approximately half of new cars in China by 2025, by which time China also plans to begin testing even more advanced 5G NR-V2X technology.
The BJHAD demo zone in Beijing's Yizhuang district already enables V2X across an area of 60km², spanning 225km of open roads. As of May 2022, 332 intersections within the zone had been upgraded with smart traffic lights, helping to achieve a 30.3% reduction in the length of traffic congestion at intersections.
These intersections also feature the world's first and only V2X technology that enables a car without sensor equipment to perform L4 autonomous driving on open public roads, using only roadside sensing empowered by 5G and V2X wireless communication. The technology was developed in 2021 by Baidu as part of the Apollo Air project in partnership with the Institute for AI Industry Research (AIR) at Tsinghua University. Meanwhile, in April 2022 Baidu opened up a full-stack open system for C-V2X, named Kailu.
All Apollo Go robotaxis are equipped with an onboard V2X unit that can communicate with roadside sensors, while V2X is part of Baidu IDG's ACE (Autonomous Driving, Connected Roads, Efficient Mobility) smart transportation solution, which has been adopted by 63 cities.
In previous public statements, Baidu has outlined how V2X and AI technology can help cities improve traffic congestion, road safety and air quality, with V2X infrastructure serving as "an intelligent vehicle-road coordination platform". The company believes such a platform can provide connected autonomous vehicles with information on surrounding traffic and road conditions, and "thus defines the standards for traffic-related applications, which in turn drive industry adoption".
Meanwhile, Baidu has released the DAIR-V2X data set – the first large-scale, multimodality, multiview data set from real scenarios for VICAD (vehicle-infrastructure cooperative autonomous driving). DAIR-V2X comprises 71,254 lidar frames and 71,254 camera frames, all captured from real scenes with 3D annotations.
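In practice, working with such a multimodal data set reduces to pairing lidar and camera frames by index or timestamp and attaching the 3D labels. The sketch below assumes a hypothetical directory layout and annotation schema purely for illustration – it is not Baidu's published loader, whose actual format is documented with the DAIR-V2X release:

```python
import json
from pathlib import Path

# Hypothetical layout: one JSON label file per frame index, alongside
# matching lidar and camera files. The real DAIR-V2X schema differs;
# consult the official release for the actual structure.
DATA_ROOT = Path("dair_v2x_sample")

def load_frame(index: int) -> dict:
    """Pair a lidar frame and camera frame with their 3D annotations."""
    annotation = json.loads((DATA_ROOT / "labels" / f"{index:06d}.json").read_text())
    return {
        "lidar_path": DATA_ROOT / "lidar" / f"{index:06d}.pcd",
        "image_path": DATA_ROOT / "camera" / f"{index:06d}.jpg",
        "boxes_3d": annotation["boxes_3d"],   # assumed key for the 3D boxes
        "source": annotation.get("source"),   # vehicle- or infrastructure-side view
    }

frame = load_frame(0)
print(frame["image_path"], len(frame["boxes_3d"]), "annotated objects")
```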

Map maker
As one of the only companies licensed to provide high-precision map solutions in China, Baidu has over 300 survey vehicles equipped with its self-developed data collection technology. The system automatically processes point clouds and images with AI technology, and its solutions combine map data and cloud services, to enable ADAS on production vehicles and guide its robotaxis.
"Cloud services include an OTA download service and road reliability service [RRS]," explains Wu. "Baidu's high-precision map integrates the two features together to support lane-level applications, lane-level navigation and high-precision fusion positioning as well as other autonomous driving applications."
Example of a 'lane-level' HD map

Future forecast
According to Public Security and Transportation document No. 97, recently released by the Ministry of Industry and Information Technology of China, a number of Chinese cities have rolled out autonomous driving-related tests and pilot work. The subsequent release of the Notice on the Pilot Work of Access and Road Use of Intelligent Internet-Connected Vehicles (Draft) has provided a further positive policy signal, according to Wu. "The release of the draft will further encourage the iteration and innovation of local policies," she says.
"Wuhan, Chongqing, Shenzhen, Shanghai and other cities have been closely following the international development trend and are actively rolling out local practices," she continues. "They have also issued relevant policies or regulations on demonstration operations for autonomous driving and Baidu will continue to expand the scale of its driverless autonomous driving commercial operation. It is expected that in 2025 autonomous driving technology will enter the stage of commercialization at scale." ‹



Powered by Data.
Driven by Insights.
With software-connected test systems from NI, today’s
EV makers are using data to transform the auto industry.
Our lifecycle analytics tools reveal insights that improve
performance and help predict development roadblocks.
At NI, we provide EV innovators with bold technologies to
engineer a better future, faster.

C-V2X

Audi believes that C-V2X not only offers the


means to drastically reduce VRU injuries
and fatalities, but also helps clear a pathway
toward autonomous driving
By Anthony James

Crash course

Audi recently
conducted a
C-V2X trial
designed to
improve driver
awareness of
cyclists


Five scenarios involving bikes and cars


are currently responsible for most
roadway fatalities among vulnerable
road users (VRUs) in both the US and
Europe. The common theme across
all five is the driver’s inability to spot the cyclist. Sadly,
such incidents are on the rise, with NHTSA reporting 846
bicycle fatalities from motor-vehicle-related accidents in
the US in 2019 – a 36% increase since 2010. Year-on-year,
NHTSA reports that on-road cycling injuries increased
4.3% to 49,000 in the US in 2019.
In addition, the National Roadway Safety Strategy
released by the US Department of Transportation in
January 2022 reported that ‘fatalities among pedestrians
and bicyclists have been increasing faster than roadway
fatalities overall in the past decade’.
However, connected vehicle-to-everything (C-V2X)
technology may offer a solution. German car maker Audi,
Swiss bicycle manufacturer BMC and US tech startup
Spoke Safety undertook a day-long demo back in October
to show how C-V2X could deliver a range of visual and
audible warnings alerting drivers to the presence of a
cyclist and thereby helping prevent avoidable accidents.
“There are about 50 million bicycles on the road in the
US right now,” notes Pom Malhotra, senior director of
connected services at Audi of America, which has been
experimenting with V2X to improve road user safety
since 2006, culminating in trials (see First responder,
opposite) focused on construction zones and workers,
and school bus stops and children.

Preventing collisions
LTE-enabled connected mobility technology already
supports Audi’s Traffic Light Information (TLI) service,
launched in the US in 2016, which helps the OEM’s
drivers catch a 'green wave'. A cockpit display tells the driver what speed is required to reach the next traffic light on green; if that is not possible within the permitted speed limit, it shows a countdown to the next green phase.
However, Audi is keen to explore the possibilities offered by PC5 C-V2X technology, which exploits the ITS 5.9GHz cellular band licensed by the Federal Communications Commission for traffic safety to enable direct 'user equipment to user equipment' communication without the need for cellular towers.
"Spoke Safety is partnering with manufacturers to embed C-V2X technology into bicycle frames, as well as working with manufacturers of devices that go on bicycles – tail-lights, GPS navigation and location systems," explains Malhotra. "They've also created a device that you can buy separately and just push into your backpack, which can directly alert vehicles when there is a cyclist present. What we have been able to do in all these trials is modify what is shown to the driver so that they know exactly what's going on – whether it's a cyclist, a construction worker or a school bus."

"WHAT WE HAVE BEEN ABLE TO DO IN ALL THESE TRIALS IS MODIFY WHAT IS SHOWN TO THE DRIVER SO THAT THEY KNOW EXACTLY WHAT'S GOING ON"
Pom Malhotra, senior director of connected services, Audi of America

During the recent Oceanside trial, Audi focused on five use cases with the greatest potential to reduce collisions between vehicles and cyclists:
• Proximity warning/front and rear collision warning: When a vehicle and cyclist come closer to one another, a notification appears showing where a possible collision may occur
• Cross-traffic alert: Vehicle detects if a bicycle is on a possible collision path when approaching from the left or right up ahead
• Parallel parking departure alert: When pulling out of its curbside spot, a parallel-parked vehicle detects if a bicycle is approaching from behind
• Right-turn assist ('right hook'): Driver gives turn signal indication to turn right while cyclist drives straight through
• Left-turn assist ('left cross'): Vehicle gives left turn indication and receives notification if a cyclist is approaching from the opposite direction and potentially entering the turn of the vehicle
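Each of these use cases reduces to the same underlying question: will two broadcast tracks conflict in the next few seconds? A minimal sketch of that geometry is shown below, with invented thresholds standing in for Audi's production logic:

```python
import math

# Estimate the time and distance of closest approach between a car and
# a bicycle from the position, speed and heading each one broadcasts.
# The 5s/3m thresholds are illustrative only; real alert logic is far
# more sophisticated.

def closest_approach(p_car, v_car, p_bike, v_bike):
    """Return (time_s, distance_m) of closest approach, assuming constant velocities."""
    rx, ry = p_bike[0] - p_car[0], p_bike[1] - p_car[1]   # relative position
    vx, vy = v_bike[0] - v_car[0], v_bike[1] - v_car[1]   # relative velocity
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Car heading east at 10m/s; bike approaching the junction from the south at 5m/s
t, d = closest_approach((0, 0), (10, 0), (40, -20), (0, 5))
if t < 5.0 and d < 3.0:  # example alert thresholds
    print(f"cross-traffic alert: conflict in {t:.1f}s at {d:.1f}m separation")
```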


First responder
Audi’s recent Oceanside demo was not the company’s
first foray into actively exploring the safety benefits of
C-V2X technology. In October 2021 in Northern
Virginia, it teamed up with the Virginia Tech Transportation
Institute, Virginia Department of Transportation and Qualcomm
Technologies on a C-V2X-enabled program to inform vehicles
that they were approaching a construction zone, alert drivers of
the workzone speed limit when entering construction zones
and alert roadside workers when vehicles were close to
construction zones via a connected safety vest.
In May 2021, Audi collaborated with Applied Information and
the Fulton County School System, Qualcomm Technologies
and others, on a demo in Alpharetta, Georgia, to show how
C-V2X can connect cars with school buses to identify when
children are boarding or exiting buses, and show vehicle drivers
when they are entering active school zones.
“In the near future, we’ll show how we have fine-tuned the
technology to enable two-way communication, where the bus
driver gets information to help them decide [whether] to open
the door or not,” reveals Malhotra. “The technology will alert
them if a vehicle is not stopping after the stop sign is extended.
We’re also looking to demonstrate how ambulances can better
communicate to cars.”

BY AUGUST 2021, AUDI'S TLI WAS IN OPERATION AT OVER 22,000 INTERSECTIONS ACROSS 60 CITIES AND 20 MAJOR US METROPOLITAN AREAS

This scenario would trigger a parallel parking departure alert dashboard warning
Left: Images depicting what the cyclist would see on a C-V2X-enabled device

The bicycle frames and devices interact with hardware and software fitted to Audi's all-electric Audi e-tron Sportback test vehicle. The vehicle is dedicated to examining the possibilities offered by direct communication using short-range signals that do not rely on a cellular network, as well as LTE signals that use cellular towers to send and receive signals.
As a result, the vehicle can read its surroundings to alert the driver to construction workers, ambulances and other emergency vehicles, or – as in the most recent trial – when bicycles are nearby, even when obstructed from a driver's view.
The location of the day-long demo in Oceanside in October earlier this year was no accident. "Oceanside is close to the city of Carlsbad, which recently had to declare an emergency because it had seen such an increase in cyclist collisions with motor vehicles," explains Malhotra. E-bike and bicycle collisions in the city increased from 30 in 2019 to 100 in 2021.
To ensure repeatability and safety, the Oceanside demo was undertaken off the public highway. "We chose a parking lot so we were in full control – we had our own driver at the wheel, and the rider on the bike was fully versed and knew what to expect," continues Malhotra, adding that the demo would have worked just as well on a public road. "It was demonstrated in a closed parking lot only for the safety of conducting the demo and the attendees watching."
The test vehicle featured a modified dashboard and antenna, and the parking lot was modeled to replicate the scenarios responsible for most accidents. "Over 80% of fatalities are linked to five situations that cyclists experience on US roads. In most of these cases, what happens is that the driver is not able to actually see the cyclist until the very last minute," explains Malhotra.


This scenario would


prompt a cross-
traffic alert on the
digital display

Public purse
To realize the true benefits of C-V2X, governments need to invest in more intelligent infrastructure, while car makers will pass the price on to consumers. "Typically, for safety-related technologies, the investment will get priced into the vehicle," says Audi of America's Pom Malhotra. "But we don't expect to charge a subscription [for C-V2X features] because it is a core safety requirement.
"However, where the vehicle talks to infrastructure such as road signs, traffic cones, traffic lights – that's where city investment comes in. The decisions made by the US government to deploy massive amounts of funding for infrastructure upgrades can really benefit this space. Specifically, intelligent transportation is part of the allocation. Many states have plans to use that money to upgrade their traffic infrastructure. Statistics show that 70% of traffic lights around the country need some form of enhancement. As those improvements happen, connectivity is part of the overall value proposition being considered."

This scenario would prompt a cross-traffic alert on the digital display

To recreate those situations in a reliable and predictable manner, Audi placed barriers in certain places in the line of sight of the driver so that the only information received was through the C-V2X channel, displayed via the digital dashboard.
The trial showed that the technology can enable drivers to recognize dangerous situations much sooner than if they were driving without the C-V2X-enabled prompts. "Direct and deterministic communication ensures the alert comes much sooner – so much so, you can count on the driver to react in time."

Dilemma zone
According to Malhotra, the company's Traffic Light Information service first alerted Audi to the potential to improve reaction
times by helping drivers with their decision making to avoid
the dreaded ‘dilemma zone’.
"Without traffic light information, you could be half a mile away, approaching an intersection, and you can clearly see a green light, but it isn't until you get much closer that it turns amber – leaving you with a decision to make: stop or go through," he explains. "Traffic engineers call this the 'dilemma zone'. But with the system, the countdown pops up in your dashboard to give you an early warning – it changes how you approach the intersection, where you most likely take your foot off the accelerator, coast and come to a stop."

BICYCLIST FATALITIES ARE ON AN UPWARD TREND, WITH AN UPTICK OF 5% FROM 2020 TO 2021, TOTALING 985 FATALITIES
Source: CDC

Unlike the traffic light countdown, which is powered by slightly slower LTE technology and is only offered as a "comfort and convenience" feature, Malhotra says Audi sees an even bigger opportunity to use PC5-enabled C-V2X to improve road safety.
"We took this learning and looked to apply it to safety-related use cases," he explains. "And this is where PC5 C-V2X becomes so much more important because it doesn't need a tower but can still communicate 10 times a second across a good range and at high speeds, ensuring the car will notice a bicyclist in the area, even when it's up to a quarter of a mile away."

C-V2X ensures the driver is aware of the cyclist before turning right

Audi designed the user interface so that as the cyclist comes closer, the dashboard display icon changes from yellow to orange


to a popup alert to warn of an imminent collision, which also sets off an audible 'chiming' alert. "The icon also shows on which specific side of the vehicle it is," adds Malhotra.
Overall, Malhotra sees C-V2X as a potential game-changer. "Cars communicating with their environment can have a lot of benefits that have to do with not just improving the experience of driving and owning a vehicle but also overall safety in the transportation space," he says. "But we must have some form of more direct communication between vehicles and between vehicles and infrastructure. C-V2X technology allows devices to 'see' other non-line-of-sight (NLOS) road users behind obstacles or around the corner, while other sensors [camera, lidar, etc] are unable to. C-V2X tech adds another layer of information required for full autonomous driving."

"C-V2X ALLOWS DEVICES TO 'SEE' OTHER NON-LINE-OF-SIGHT (NLOS) ROAD USERS BEHIND OBSTACLES OR AROUND THE CORNER, WHILE OTHER SENSORS [CAMERA, LIDAR, ETC] ARE UNABLE TO"

OEM excitement about C-V2X peaked following a US Federal Communications Commission (FCC) ruling in November 2020, published in May 2021, that saw the FCC agree to allocate a portion of the ITS 5.9GHz cellular band for C-V2X applications for the first time. The decision allowed C-V2X for the exchange of standardized communications between vehicles and between vehicles and infrastructure.
"Both the car and the bicycle are basically saying to each other, 'I'm a car, here's where I am' and 'I'm a bicycle, here's where I am' – and that's it," explains Malhotra. "And this is done in broadcast mode – it's not like you're trying to direct it to a particular address. So any C-V2X receiver in the area will get that same message."
This is a key advantage – and one Malhotra feels could be crucial to the successful rollout of autonomous vehicles: "An AV has sensors and a computer in the vehicle to transcribe the data captured to understand if it's a child, a deer, a bicyclist or a hedge, whereas C-V2X just says 'I'm a cyclist and I'm here'. All that additional effort that has to go into making sure that you're accurately interpreting what you're seeing doesn't have to happen. Instead, it's telling you: 'I'm a cyclist, I'm here, I'm around you, be careful what you're doing'."
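The broadcast pattern Malhotra describes can be illustrated in a few lines. The sketch below uses JSON over UDP purely for illustration; real PC5 C-V2X radios exchange standardized SAE J2735 Basic Safety Messages directly over the 5.9GHz band, with no IP network involved:

```python
import itertools
import json
import socket
import time

# Each road user repeatedly announces "what I am and where I am" to
# anyone listening; there is no addressing. Port and fields are invented.
BROADCAST = ("255.255.255.255", 47000)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

for seq in itertools.islice(itertools.count(), 100):
    msg = {
        "type": "bicycle",                      # "I'm a bicycle..."
        "lat": 33.1959, "lon": -117.3795,       # "...here's where I am"
        "speed_mps": 5.2,
        "heading_deg": 270.0,
        "seq": seq,
    }
    sock.sendto(json.dumps(msg).encode(), BROADCAST)
    time.sleep(0.1)  # PC5 C-V2X transmits roughly 10 times a second
```

Because the message is broadcast rather than addressed, any receiver in range – car, roadside unit or another bike – consumes the same packet, which is exactly the property Malhotra highlights.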

The time is now


Through its own internal research, as well as publicly
available data, Audi estimates there will be 5,300,000
vehicles, workzones, railway crossings, bicycles and
other devices that will be able to connect using C-V2X
by 2025. By the end of the decade, it is possible that
number will increase to 61,000,000 connected devices,
including as many as 20,000 crosswalks, 60,000 school
zones, 216,000 school buses and 45 million cyclists
and smart devices.
Audi believes C-V2X connected mobility is the
gateway to dramatic improvements in road safety, and
now is the time to allow its rapid deployment. Earlier this
year, it joined several other auto makers, tech innovators,
traffic equipment manufacturers, states and localities in
seeking a Federal Communications Commission waiver
to put C-V2X on US roads immediately. The public
comments on this request were overwhelmingly positive,
and Audi is now calling on the FCC to grant the waivers.
Malhotra says Audi and others now await final FCC
rule-making that will ensure that the transportation
wireless spectrum remains viable for direct and urgent
V2X communications.
“The FCC needs to finish the job – formalizing the
decision but also clearing up specifications around
interference management, particularly with below-band
interference. That’s not stopping us though. We have
applications to allow what we call deployment waiver
requests. These have come in through Audi, Ford, Jaguar
Land Rover and Applied Information, as well as several
state agencies, including the Virginia DOT and Utah
DOT, as well as from Spoke itself for New York City,
Georgia, Florida, Ohio, Maryland and Dallas Fort Worth.
Once all those waivers are approved and the FCC completes its rule-making on the interference levels, production deployment is clear. Our plan is to introduce our next-generation premium-platform electric architecture mid-decade." ‹



SUPPLIER INTERVIEW: WYNNE-JONES IP

Patent protection
The innovations behind self-driving
vehicles require suitable patent protection.
Dr Elliott Davies, partner and patent
attorney at Wynne-Jones IP, examines
the implications of the EU’s new
Unified Patent Court

Interview by Anthony James

What is the UPC?


The Unified Patent Court (UPC) is a new court
that has jurisdiction to hear matters relating to
enforcement and validity of national patents derived from
European patents and also the new Unitary Patent (UP).
Innovative companies in the autonomous vehicle (AV)
sectors commit huge resources in a race to develop new
technology. To safeguard their innovations and to provide a
return for their investment in R&D, companies aim to secure
robust intellectual property rights before launching their
technology. In 2018, the European Patent Office conducted
a review of patent trends based on technology sectors
and the result showed a clear upward trend in the
number of patent filings in the AV sector. With
this increasing demand for patent protection, it is
important that companies are aware of the imminent,
significant change to the European patent landscape, namely the introduction of the new UP and the UPC.
How is patent protection achieved Once the validation stage is complete, the single
in Europe? European patent effectively becomes a bundle of
The principal mechanism used by companies in the AV standalone, national patents. Proprietors wishing to
sector for obtaining patent protection for an invention across enforce their national patents are required to do so at the
various European states involves the use of the European national level using the tried-and-tested national courts.
patent system. This is because the European system provides Similarly, if a party wishes to challenge a national patent
applicants with a more cost-effective means of obtaining then this will need to be done at the national level. So as
a granted patent compared with filing national patent you can see, separate national proceedings are required
applications. However, a shortcoming of the European system in different states if proprietors wish to take action to
is that applicants, or more particularly proprietors at that stop an infringer across a number of states. Similarly,
point, are required to select those states in which they would a party desirous of avoiding the risks associated with
like to make the patent effective. This so-called validation patent infringement in different states would need to
stage can include significant costs, as for some states the validation process can include a requirement to submit a translation of the European patent into the local language.

What is changing?
"A DECISION FROM THE GERMAN SEAT OF THE UPC, FOR EXAMPLE, WILL HAVE A BINDING EFFECT ON OTHER MEMBER STATES TO THE UPCA"
Dr Elliott Davies, partner and patent attorney, Wynne-Jones IP

Sometime early in 2023 the new Unified Patent Court Agreement (UPCA) is expected to come into force, at which point existing national patents that have been obtained via the European patent system will instead fall under the jurisdiction of the new UPC.
The establishment of the new court will also be accompanied by the introduction of the UP, which will


enable proprietors of European patents to cover a new


pseudo-region of (currently) 17 EU states. The UP will fall
under the exclusive jurisdiction of the UPC and could
result in fewer post-grant translation charges than the
traditional validation stage.

What are the considerations?


The questions for companies in the AV sector to consider,
given their appetite to secure patent protection across
Europe, are whether or not to opt out their national
patents from the jurisdiction of the UPC, and whether
or not to make use of the UP.
The UPC will have a seat in most member states to
the UPCA, but the judges appointed to hear the case
may not be local to that particular state. Moreover,
as there is no case law for this new court, it
will be difficult to predict how patent infringement and validity matters will be assessed. However, decisions from this court will have a cross-border binding effect and so a decision from the German seat of the UPC, for example, will have a binding effect on other member states to the UPCA. So the UPC can be leveraged to obtain a decision that will take effect in a number of states without [the proprietor] having to conduct separate expensive, national legal proceedings.

UBER TECHNOLOGIES INC PAID WAYMO US$245M TO SETTLE AN AV PATENT DISPUTE

On the flip side, though, a decision by the UPC that a national patent is invalid will also be binding on other national patents in the family. In this case it is possible to lose protection across a number of states by a single decision of the UPC.

Europe leads the world in patents for self-driving vehicles, accounting for more than 33% of all patent applications around the world, says the European Automobile Manufacturers Association (ACEA)

Proprietors who wish to shield their portfolio from this 'central attack' can elect to opt out of the jurisdiction of the UPC, in which case the national patents will fall under the jurisdiction of the national courts, just like
they do currently. However, proprietors will need to be
proactive in opting out as the default position is that the
UPC will take jurisdiction over national patents.
Now, while there is no specific deadline for filing
requests to opt out, if an action is commenced at the UPC
before an opt out request is filed, then the proceedings
must continue at the UPC. So proprietors who wish to
shield some of their strategically important patents may
wish to opt out early.
One of the benefits of the UPC, however, is that it is
possible for proprietors of national patents to undertake
an element of forum shopping to assess which UPC seat
may be best to launch an action for patent infringement
(provided an infringement takes place in the territory
covered by that seat). For example, a proprietor may be
able to take advantage of local legal peculiarities to
support their action. A potential infringer who wants to
start an independent revocation action or an action for a
declaration of non-infringement does not have a choice
and must go to the central division of the UPC which is
based in Paris and Munich. For this reason, certain critics
classify the UPC as being patent proprietor-friendly.
It is clear there are a lot of things to consider.
Ultimately, it is important to align the patent strategy
with the business strategy and Wynne-Jones IP has a
number of experts who can assist and guide businesses
through the forthcoming changes. ‹



ELECTRIC POWER STEERING

Reduce torque ripple
How hardware selection affects driver experience in electric power steering systems
By Zach Nelson, segment application engineer, Allegro MicroSystems
Continuous innovation in the automotive market is inevitable. Auto makers are tasked with maximizing the performance and efficiency of their end systems. For that reason, electric power steering (EPS) systems have become popular in automotive manufacturing, displacing their mechanical counterparts because they save space and reduce weight, emissions and fit complexity.
While the electrical architectures of EPS systems are similar, there are areas where designers can differentiate their designs. By selecting the right hardware and taking advantage of enhanced features, auto makers can maximize the driver experience in a compact and functionally safe design.
The electrical architecture for conventional EPS designs can be broken down into the following subsystems: EPS motor drive, steering wheel interface, DC-DC conversion, battery management, digital processing and communication (Figure 1).

ALLEGRO RECOMMENDED SOLUTIONS
SUBSYSTEM | SOCKET | PART NUMBER
EPS motor drive | BLDC motor driver | A4918, AMT49106
EPS motor drive | Phase current sensing | ACS72981
EPS motor drive | Motor position sensing | AAS33001
EPS motor drive | Motor phase disconnect | A6861
Hand-wheel interface | Hand-wheel angle sensor | A33002
Hand-wheel interface | Steering torque sensor | A31102
DC-DC conversion | System power conversion | ARG81402, ARG81407
Battery management | Battery return current sensing | ACS72981

There are trade-offs to be made within each subsystem, which can affect the performance and efficiency of the system. At its most basic functionality, the steering wheel interface uses a hand-wheel angle sensor and steering torque sensor to register input force and direction; the EPS motor drive uses a highly integrated gate driver to drive a torque-assist motor fixed to the column or rack to provide the steering force. How these devices are selected can affect the driver experience.

Maximizing the driver experience
Significant effort has been expended to minimize unwanted by-products inherent in a motor-assisted electric power steering system – reducing torque ripple and improving system response. Although usually driven using a sophisticated control algorithm, these systems still rely on a responsive gate driver plus a control network to feed back high-quality data. Selecting the right hardware can be the key to creating a consistent, smooth steering experience.
Reducing the torque ripple inherent in brushless direct current (BLDC) control is the highest priority. Torque ripple is the phenomenon where the force applied by the motor to the load oscillates, and it can lead to a feeling of counter-steering or vibration in the wheel. If employing a permanent magnet synchronous motor (PMSM) with field-oriented control (FOC), the designer can achieve very minimal torque ripple in the end system, as illustrated in Figure 2.¹
Torque ripple can be greatly improved by using the level of detail the controller has about the positioning of the motor. The ability to detect within 1° of accuracy is necessary to prevent large oscillations in the assist motor that will be felt on the column. Motor positioning and handwheel feedback can be sensed using Allegro's low-noise, low-error angle sensors (Figure 3).
Another way to reduce torque ripple in the system is to improve the accuracy and reduce the error of the current sensing mechanism. Torque applied by a motor is directly proportional to the current through the windings.
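To make the FOC relationship concrete, the controller rotates the measured phase currents into the rotor frame, where torque tracks the quadrature component. The sketch below is a textbook Clarke-Park transform – not Allegro's firmware – showing how a rotor-angle error propagates into the torque-producing current:

```python
import numpy as np

def clarke_park(i_a: float, i_b: float, i_c: float, theta: float) -> tuple[float, float]:
    """Transform three-phase currents into the rotor frame (i_d, i_q).
    In FOC, torque is proportional to i_q, so noisy current or angle
    measurements show up directly as torque ripple."""
    # Clarke transform: three-phase -> stationary two-phase (alpha, beta)
    i_alpha = (2 * i_a - i_b - i_c) / 3
    i_beta = (i_b - i_c) / np.sqrt(3)
    # Park transform: rotate into the rotor frame using rotor angle theta
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

theta_true = 0.5                   # rotor angle, radians
i_a, i_b, i_c = 10.0, -4.0, -6.0   # balanced phase currents, amps
print(clarke_park(i_a, i_b, i_c, theta_true))
print(clarke_park(i_a, i_b, i_c, theta_true + np.radians(1.0)))  # 1 deg sensor error
```

Running the sketch shows the quadrature current shifting under just one degree of angle error – the kind of deviation a driver would feel as torque ripple if the error varied with rotor position.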


The two methods most typically used to measure current are low-side sensing and in-phase sensing, each with their own benefits and limitations (Figure 4). In-phase current sensing provides accurate data regarding the currents flowing in each phase, which reduces software complexity. Due to the nature of the design, current sensors will need to support ≥100A maximum current with low noise and high accuracy. Allegro Hall-effect current sensors can provide accurate measurements, unaffected by common-mode transients on the phase nodes that standard shunt current sense amplifiers (CSA) may struggle with.

Figure 1: EPS block diagram. Figure 2: Torque ripple in PMSM/BLDC with FOC. Figure 3: Average angle error. Figure 4: Low-side and in-phase current sensing. Figure 5: Miller region control. Figure 6: Settling time for low-side sampling. Figure 7: Current sensor response

Low-side sensing is an alternative method to measure current but can only sense while the low-side field-effect transistor (FET) is on, and will require additional firmware development to recreate a complete phase current. Using CSAs that are integrated in Allegro motor drivers can enable designers to save space and reduce cost compared with discrete in-phase sensing.

System response
Another component of user experience is the ability to respond instantaneously to driver input. To achieve such response in an EPS system, the feedback loop for driver input requires low latency and can benefit from a higher motor drive frequency. All pulse-width modulation (PWM) switching topologies are susceptible to rise- and fall-time dV/dt transients, which can cause system faults and force designers to lengthen blanking times.
To reduce switching transients on phase nodes, designers should select a gate driver that supports Miller region current control to dictate the V_DS (drain-source voltage) slew rate on the switching FET and prevent overshoot and shoot-through events (Figure 5). Motor controllers offered by Allegro are designed to give designers control over these and other output stage characteristics.
When sensing in-phase currents, the bandwidth needs to be sufficiently high – 5-10 times the PWM frequency as a rule of thumb – for sufficient sampling within the averaging window. Gate drivers selected for EPS systems typically switch near or above 20kHz, striving to get above the audible range of humans. Allegro Hall-effect current sensors use a differential sensing mechanism to minimize common-mode noise and enable designers to sense in-phase currents comfortably with motor driver PWM switching frequencies near 20kHz.
In the case of low-side sensing, devices may need higher bandwidth to reliably sense the current during the t_ON of the low-side FET. When the low-side FET has a short on-cycle, the current has a settling time of about 1µs, which calls for a bandwidth upward of 20-40 times the PWM frequency to guarantee the current is sampled accurately (Figure 6).
The accuracy of this data is critical for closing the FOC current loop. Current sense amplifiers that are integrated into Allegro motor drivers support high bandwidth to enable designers to switch at higher frequencies, and offer programmable gain and offset for calibration.
System response can also be hindered by slow sensor feedback. Allegro Hall-effect current sensors boast an extremely fast response (Figure 7). Hand-wheel, motor position and torque sensors offered by Allegro also grant quick response, which is necessary to support complex algorithms.
Through effective management of torque ripple and system response, designers can continue to improve the driver's steering experience. Selecting the optimal hardware can improve the quality of the data, reduce the computational overhead required for BLDC control and enhance the system's overall control over the assist motor.
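The bandwidth rules of thumb above are easy to sanity-check. The sketch below simply applies the article's multipliers to a nominal 20kHz switching frequency:

```python
# Rule-of-thumb current-sensor bandwidth check for an EPS motor drive,
# using the multipliers quoted in this article. Always validate against
# the actual sensor datasheet.

PWM_FREQ_HZ = 20_000  # typical EPS switching frequency, above the audible range

def required_bandwidth_hz(pwm_freq_hz: float, method: str) -> tuple[float, float]:
    """Return the (min, max) recommended sensor bandwidth for a sensing method."""
    multipliers = {
        "in_phase": (5, 10),   # sample well within the PWM averaging window
        "low_side": (20, 40),  # must settle (~1us) during the low-side FET on-time
    }
    lo, hi = multipliers[method]
    return lo * pwm_freq_hz, hi * pwm_freq_hz

for method in ("in_phase", "low_side"):
    lo, hi = required_bandwidth_hz(PWM_FREQ_HZ, method)
    print(f"{method}: {lo/1e3:.0f}-{hi/1e3:.0f} kHz sensor bandwidth")
```

At 20kHz this gives roughly 100-200kHz of bandwidth for in-phase sensing and 400-800kHz for low-side sensing.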
Conclusion
Implementing an EPS system with exceptional driver experience is a very difficult task, but it is necessary to enable EPS manufacturers to differentiate themselves. Manufacturers can realize the full potential of their EPS systems by using the right components to maximize driver experience and achieve functional safety standards with flexible implementation. ‹

References
1) K Sumitra and Prof. M K Giridharan, 2015, Torque Ripple Reduction using Field Oriented Control (FOC) – A Comparison Between BLDCM and PMSM, Int'l Journal of Science Technology & Engineering 194

CONTACT
Allegro MicroSystems | inquiry no. 101
To learn more about this advertiser, please visit: www.ukimediaevents.com/info/avi



SIMULATION

Tool connectivity
Integrating Ansys AVxcelerate Sensors within aSR's Simulation Framework enables efficient virtual testing and validation of sensor technology for autonomous driving
By Simon Gimpel, CTO, aSR

With multiple automotive companies racing to introduce SAE Level 3 (L3) and above autonomous driving, this change is expected to have far-reaching consequences across the entire mobility industry. With L3 and above, there is an enormous increase in the driver's comfort level, because at L3 the driver is allowed to temporarily hand over complete driving control to the vehicle and is free to perform other secondary tasks.
According to German law, secondary tasks such as checking emails or reading a newspaper while driving are permitted – provided the driver stays alert enough to assume control of the steering wheel again after being warned by the autonomous driving system. In contrast, in SAE Level 2, the driver is always responsible during the journey and must constantly monitor the assistance systems.
Predicting the future is a challenge and has a profound impact on the design of driver assistance systems, adding another layer of complexity to the simulation techniques needed to develop these advanced systems. This is easier said than done: just precisely predicting the vehicle driving environment for the next few seconds is an enormous challenge.
As a solution, real-time Doppler evaluation of the radar sensor with the aSR driving simulator, 3D environment simulation and error-free detection of obstacles via cameras, ultrasound, radar and laser systems is gaining widespread acceptance. Sensor simulation (simulation of the 'eyes and ears' of the car) is becoming an elementary component of valid virtual testing of ADAS or autonomous driving (AD) capabilities.
Virtual testing of ADAS/AD functions requires massive changes in the conventional development methods. Introducing simulation early in the design cycle, rather than waiting to introduce it later at the physical prototype stage, helps not only with the system integration but also with testing the relevant vehicle components. Individual departmental simulation models must be coordinated with each other because


Below: Exemplary software architecture within the aSR Simulation Framework
Left: Real-time Doppler evaluation of the radar sensor with aSR's driving simulator

THE GLOBAL ADAS MARKET IS PROJECTED TO REACH US$40.2BN BY 2029
Source: Meticulous Research

vehicle dynamics significantly influence the perspective of the sensors through chassis movements such as nodding/swaying and compression/suspension.
However, until now, it has only been possible to couple two CAE tools together in firmly defined version pairs, and with a very limited number of tool combinations allowed. The problem is intensified by the involvement of external developers and/or suppliers.

Best-of-breed approach
Ansys AVxcelerate Sensors has proved itself in testing and validation of sensor technologies for self-driving cars through its real-time physics-based simulation. With Ansys AVxcelerate Sensors, the user can effectively investigate different sensor types covering different frequency ranges, such as radar, lidar and visible/thermal cameras. Physical effects and disturbance factors such as signal noise, weather conditions and light influences or interfering signals can also be considered at any time. Simple adaptation to user-specific test tracks and vehicle types, such as single-person vehicles and external traffic, succeeds effortlessly. Finally, the real-time capability of Ansys AVxcelerate Sensors should be particularly emphasized because other, non-real-time-capable simulations prevent HIL tests and thereby interrupt process chains in virtual development.

Fully integrated solution
The use of standalone tools leads to weaknesses that are encountered with conventional development. A high degree of maturity of the simulation requires the simultaneous inclusion of other domains in the analysis, rather than just bilateral coupling with other tools.
By using middleware-based co-simulation within the aSR Simulation Framework, maximum flexibility in tool connectivity can be achieved in terms of horizontal scalability of the simulation. For the concept phase, this means the restriction to basic simulation models of the mandatory domains such as driving dynamics, environment (route, traffic, etc) and basic functions is possible. For the product development process, the integration of analysis software from other areas such as powertrain or ADAS/AD functions is possible at any time.
With regard to vertical scalability of the simulation, the middleware-based co-simulation in the aSR Simulation Framework enables the user to substitute simplified concept models with more complex ones, and also substitute the software in use. For example, by upgrading a software version or substituting simulation models (MIL) with control unit codes (SIL) or a test bench (HIL), only the relevant model is affected by the planned change, while the rest of the tool infrastructure remains unaffected. Consequently, there is no renewed system integration effort required.
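The middleware idea is simple to sketch: models publish and subscribe to named signals through a broker, so swapping one model (MIL to SIL to HIL) leaves the others untouched. The following is a toy illustration, not the aSR Simulation Framework API:

```python
from collections import defaultdict
from typing import Callable

# Minimal publish/subscribe broker: each simulation model only knows
# the signal names it produces or consumes, never its peers.
class Broker:
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, value) -> None:
        for callback in self.subscribers[topic]:
            callback(value)

broker = Broker()
# A vehicle-dynamics model publishes chassis pitch; a sensor model
# consumes it to tilt its field of view, without knowing the source.
broker.subscribe("chassis.pitch_deg", lambda p: print(f"radar model: tilt FOV by {p} deg"))
broker.publish("chassis.pitch_deg", 1.8)
```

Horizontal scalability then amounts to adding more publishers and subscribers; vertical scalability to re-binding a topic to a more detailed model.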
Conclusion
To sum up, these two types of scalability enable the introduction of co-simulation in existing digital development environments. All calculation models already in use can continue to be used in their corresponding software and will also work while interacting with models of other departments. Compared with the usual substitution of development tools, no software need be replaced in a possibly long migration period – the new software is just added to the established ones as needed.
This can be demonstrated by the illustrated setup, using the aSR Co-Simulation to connect a distributed simulation mixing IPG CarMaker, Antemotion Midgard and Ansys AVxcelerate Sensors. This results in a sophisticated simulator test run with photorealistic visualization and a physics-based radar evaluation for rapid and flexible maneuver-variant testing in the design of autonomous driving functions. Thus, repositioning of a sensor, which would normally be very time-consuming to edit, is accomplished in a very short time. The use of virtual drivers in the scenarios offers little flexibility, whereas in the aSR simulator, the simulation engineer simply assumes the role to get a picture of detected objects in a specific sensor configuration. The collaboration between aSR and Ansys provides the much-needed flexibility for simulation engineers to design and test autonomous driving functions with the highest level of safety. ‹

CONTACT
aSR | inquiry no. 102
To learn more about this advertiser, please visit: www.ukimediaevents.com/info/avi



IN-CABIN MONITORING

Developing a DSM
application requires
synchronous evaluation
of different sensors and
vehicle functions, all
competently handled
by RTMaps

Multisensor driver monitoring
The RTMaps multisensor development framework offers a block-based approach that lets users easily integrate in-cabin sensors with relevant vehicle buses in their setup
By Dr-Ing. Dominik Dörr, lead product manager, dSPACE

Level 3 and up automated driving requires monitoring of the driver and passengers in the vehicle cabin. Features such as driver status monitoring (DSM) will be an integral part of the European New Car Assessment Program (Euro NCAP) for passenger cars as of 2023. DSM systems are designed to detect driver distraction, fatigue, unresponsiveness and vital signs to ensure that the driver can take over the steering wheel at any time and that passengers remain in their seats. A promising approach to meet robustness requirements is the utilization of deep learning algorithms and specific training data.
For their development, the RTMaps multisensor framework offers a block-based approach that lets users easily integrate in-cabin sensors such as camera or radar along with relevant vehicle buses in their setup. They can drag a wide range of sensor types and bus components from the library, connect them with their DMS application and execute them with just a few clicks. Users can request or even integrate any sensors that are not in the library by using a documented API and implementation examples.
RTMaps provides unique built-in capabilities that expose the resulting asynchronous input data streams of different types to the user's DMS algorithm in a time-correlated manner. This enables data fusion as a key prerequisite for efficient driver status detection. The development framework natively supports the Python scripting language, which is widely used for the development of AI algorithms and enables users to quickly integrate deep learning functions to meet the challenging requirements for highly robust driver monitoring functions.
Other features included in RTMaps allow users to perform validation tasks. For example, they can use data replay to feed an AI-driven DMS function with a host of real or realistic synthetic data to test its robustness, all using just one tool.
monitoring of the driver and passengers multisensor framework offers a block-based
in the vehicle cabin. Features such as approach that lets users easily integrate in-cabin OTA validation of radar sensors
driver status monitoring (DSM) will be sensors such as camera or radar along with The radar sensors used for in-cabin applications
an integral part of the European New relevant vehicle buses in their setup. They can operate in the V-band at 60GHz. Flexible, precise
Car Assessment Program (Euro NCAP) for drag a wide range of sensor types and bus test systems are required to check whether the
passenger cars as of 2023. DSM systems are components from the library, connect them with radar sensor reliably detects the occupants in
designed to detect driver distraction, fatigue, their DMS application and execute them with just various test scenarios. In particular, if the entire
unresponsiveness and vital signs to ensure that a few clicks. Users can request or even integrate chain of effects of the radar sensor is to be
the driver can take over the steering wheel at any any sensors that are not in the library by using a considered under vehicle cabin conditions,
time and that passengers remain in their seats. documented API and implementation examples. over-the-air (OTA) testing is required.
A promising approach to meet robustness RTMaps provides unique built-in capabilities To test radars operating in the V-band, the
requirements is the utilization of deep learning that expose the resulting asynchronous input dSPACE Automotive Radar Test System (DARTS)
algorithms and specific training data. data streams of different types to the user’s DMS equipped with the HBC-7066V converter enables
the simulation of radar targets for in-cabin
applications. The high precision of DARTS
enables users to validate functions such as
monitoring and detection of drivers and
passengers, including children in infant car
seats. The DARTS hardware even meets the
requirements for the detection of vital signs
using micro-Doppler. The convenient OTA
validation method of DARTS enables validation
of the entire sensor transmission channel. In
addition, the small size of the radar front ends
makes them ideal for convenient in-cabin use. ‹

DARTS operating at 60GHz can be used in all key development phases, from chip design to sensor development to end-of-line testing of in-cabin applications
In-cabin monitoring is a prerequisite for autonomous driving and goes as far as detecting vital signs

CONTACT
dSPACE | inquiry no. 103
To learn more about this advertiser, please visit: www.ukimediaevents.com/info/avi



ADAS TESTING

Datalogger for ADAS testing
VBox Automotive has launched the all-new VBox 3i ADAS, designed to make ADAS testing easier
By Kevin Bursnall, technical sales director, Racelogic

VBox 3i ADAS offers outstanding RTK performance on the test track and the open road

Advances in adaptive safety and
autonomous vehicle technologies are
demanding that test and validation
solutions meet the complex needs
of test programs but are still quick
to set up and easy to use. VBox 3i ADAS meets
these demands by offering outstanding RTK
performance for high-accuracy position data on
the test track and the open road. Intuitive setup
and analysis software, capable of evaluating
complex ADAS test scenarios, makes the most
of available testing time and resources.
VBox 3i ADAS features a 100Hz GNSS multi-constellation, dual-frequency engine that can use GPS, Galileo, GLONASS and BeiDou constellations. This delivers RTK accuracy even in challenging GNSS conditions. When satellite signals are obscured, VBox 3i ADAS combines wheel speed data from the vehicle's CANbus with GNSS and inertial data from a VBox IMU to maintain the accuracy of speed and position.
For the evaluation of complex ADAS scenarios, VBox 3i ADAS enables users to customize the test setup, including multitarget and multilane configurations, to comply with Euro NCAP requirements. The vehicle under test can simultaneously reference any combination of up to three moving targets, two static targets, three road lines and 99 signposts.
When a full scenario analysis is required, additional data including CANbus can be logged simultaneously. For example, the activation of light and audio sensors used in ADAS such as lane departure warning, blind spot detection and collision warning can be event marked to provide analysis of sensor activation in relation to the wider test scenario.
Test data can be logged directly in the VBox 3i ADAS for post-test analysis, or it can be connected to a laptop running VBox Test Suite software to view and analyze real-time results in the field. Alternatively, data can be sent live via a CAN or serial connection for integration with third-party systems.
VBox 3i ADAS is equally at home on a test track or the open road. Its RTK GNSS receiver can obtain correction data via either an on-site base station or, for open road testing, the use of an NTRIP modem.
There is also a growing need to complete ADAS testing and sensor validation within an environmentally controlled space, usually in an indoor facility. This enables year-round testing and sensor-specific assessments including sensor flare, fog, mist and water films. VBox 3i ADAS can be used in combination with VBox Indoor Positioning System (VIPS), which measures real-time, dynamic 3D position, speed and attitude (pitch/roll/yaw), to achieve RTK-equivalent centimeter-level accuracy in areas where GNSS is not available.

The all-new VBox 3i ADAS – designed to make ADAS testing easier

Dedicated to ADAS testing in any environment, VBox 3i ADAS is set to be a valuable addition to any test program. For further information, visit vboxautomotive.co.uk/3iadas. ‹
information, visit vboxautomotive.co.uk/3iadas. ‹

CONTACT
Racelogic | inquiry no. 104
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



DATALOGGING

The right tool for the job
Instrumental fix for real-time
test and validation challenges
By Gordan Galic, technical marketing director, Xylon

ADAS and AV embedded electronics
systems are made of complex
PCBs that challenge the laws of
physics, chips that tick in the
gigahertz range and interfaces
that carry terabytes of data at blazing speeds
through a vehicle’s nervous system.
This makes non-intrusive datalogging of
test and diagnostic data, as well as proper
ECU stimulation in HIL research, extremely
challenging. To ensure the coherence of the
test and validation process, only minimal or
no impact on the regular behavior of systems
under test is permissible for test tools.
An example system configuration for datalogging on the road and HIL testing in a lab. All sensors and ECUs are interconnected through a single logiRECORDER unit that enables not previously seen real-time data manipulation to ensure the coherence of the test and validation process

In most cases these tools use PC platforms and customized software, which quickly show deficiencies at a real-time performance level, a lack of appropriate interfaces and a need for additional electronic boards to complete system integration. Such limitations can ultimately be overcome by fully customized hardware platforms, which often imply customization at the chip level. With production costs of customized chips reaching tens of millions of dollars, that hurdle seems too high.
Luckily there is a solution available at an immeasurably lower cost, and it comes in the form of programmable-logic FPGA and SoC chips. FPGAs can be designed for specific functionality and manufactured within weeks or, at the latest, months.
Programmable FPGA chips form the basis of Xylon's logiRECORDER, a single-box device for datalogging and HIL simulations. The logiRECORDER integrates all automotive interfaces and, due to its ultimate hardware configurability, it can quickly be tuned for specific requirements.
To answer some of the latest field requirements, Xylon has just introduced a brand-new modular CPU acceleration card that enables not previously seen datalogging and HIL features, best described through an example test configuration.
A forward-looking camera generates three GMSL2 video streams that route directly through the logiRECORDER, and connects to an ECU with minimal latency. The ECU controls the camera through a back-channel I2C bus tunneled through the logger.
Camera metadata and TAPI are delivered via Ethernet and analyzed in real time by the CPU card. FPGAs implement Ethernet TCP offloading engines (TOE) and the fastest possible Ethernet TAP, which make external routers unnecessary.
A CANbus connects additional sensors, such as radar and GPS, and carries various diagnostic data, and an acceleration card runs Universal Measurement and Calibration Protocol (XCP) to monitor and configure the ECU.
Reference cameras for datalogging monitoring are connected via a GigE Vision protocol supported by the card. FPGA accelerators eliminate timestamp jitter in compressed video streams and enable extraction of individual video frames.
FPGA TOEs can be configured in different ways, for example to demultiplex a 10GbE data link with all sensory data from a driving computer, and store data from each sensor in a separate MDF4 or ROS session.
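Conceptually, that demultiplexing step looks like the toy sketch below, with an invented packet header standing in for the real framing – the logiRECORDER performs this in FPGA logic at 10GbE line rate and writes MDF4 or ROS sessions:

```python
import struct
from collections import defaultdict

# Split a single multiplexed sensor stream into one byte buffer per
# sensor. The 8-byte header (sensor id + payload length) is invented
# purely for illustration.
HEADER = struct.Struct("<II")  # (sensor_id, payload_len)

def demux(stream: bytes) -> dict[int, bytes]:
    """Return payload bytes grouped per sensor id."""
    sessions: dict[int, bytearray] = defaultdict(bytearray)
    offset = 0
    while offset < len(stream):
        sensor_id, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        sessions[sensor_id] += stream[offset:offset + length]
        offset += length
    return {sid: bytes(data) for sid, data in sessions.items()}

packets = HEADER.pack(1, 4) + b"LIDA" + HEADER.pack(2, 3) + b"CAM"
print(demux(packets))  # {1: b'LIDA', 2: b'CAM'}
```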
logiRECORDER integrates all automotive as radar and GPS, and carries various diagnostic camera data with recordings, including fully
interfaces and, due to its ultimate hardware data, and an acceleration card runs Universal synthetic video or camera error injections.
configurability, it can quickly be tuned for Measurement and Calibration Protocol (XCP) Via the XCP protocol, the acceleration card
specific requirements. to monitor and configure the ECU. can load calibration data into the ECU and
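As a purely illustrative sketch of what that demultiplexing step does (the record framing, sensor IDs and file naming below are assumptions for illustration, not Xylon's actual formats), a multiplexed capture could be split into per-sensor sessions along these lines, shown here in Python:

    import struct
    from pathlib import Path

    # Assumed record header for illustration: sensor ID (u16) + payload length (u32)
    HEADER = struct.Struct(">HI")

    def demux_capture(capture_path: str, out_dir: str) -> None:
        """Split one multiplexed capture into a separate session file per sensor."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        sessions = {}  # sensor_id -> open session file
        with open(capture_path, "rb") as stream:
            while (header := stream.read(HEADER.size)) and len(header) == HEADER.size:
                sensor_id, length = HEADER.unpack(header)
                payload = stream.read(length)
                if sensor_id not in sessions:
                    sessions[sensor_id] = open(out / f"sensor_{sensor_id:04x}.bin", "wb")
                sessions[sensor_id].write(payload)  # one session per sensor, as with MDF4/ROS
        for handle in sessions.values():
            handle.close()

In the logiRECORDER itself this routing decision is made in FPGA fabric at line rate; a software loop like the one above is only a functional model of the same behavior.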
A practically identical system can be used for a HIL simulation setup. The logiRECORDER's video routing, paired with I2C tunneling, enables the ECU to run production firmware and communicate with cameras without any stalling resulting from unanswered commands. At the same time, the logiRECORDER can inject stored and simulated videos instead of videos from cameras. FPGA accelerators enable real-time video manipulations, such as merging of real camera data with recordings, including fully synthetic video or camera error injections. Via the XCP protocol, the acceleration card can load calibration data into the ECU and monitor its internal states. ‹

CONTACT
Xylon | inquiry no. 105
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



LOG DATA MANAGEMENT

Log data for simulation tests
How ADAS and AV programs can create simulation test cases from real-world logs

By Vijaysai Patnaik, head of product, Applied Intuition

Log-based test-case creation enables ADAS and AV programs to create simulation test cases from real-world log data

Log data management is one of the most
important tasks every ADAS and AV
program needs to master. Test fleets
collect on average 4TB of log data per
vehicle per day. Production fleets
(vehicles purchased by individual consumers)
can generate millions of events daily. This
firehose of data has enormous potential to power
an autonomy program’s development efforts.

What is log data?
In ADAS and AV development, log data is any collected real-world data corresponding to the autonomous task at hand. It ranges from raw sensor inputs to wheel actuation commands. All log files go through a complex lifecycle. First, an autonomous system collects the log file. Next, data processing pipelines distribute the file, and different teams explore and use it according to their needs. Finally, the log file lands in long-term storage.

Log-based test-case creation
Log data powers various workflows for different teams within ADAS and AV development. One of these workflows is log-based test-case creation. Creating test cases from real-world data is an effective way to resolve long-tail issues found during real-world testing, and protect the stack against future regressions. It helps perception, localization and motion planning teams improve their respective modules. The workflow involves the following steps: creating a test case to reproduce the issue, making a code change to resolve it, using the test case to confirm the issue's resolution, and finally adding the test case to a regression suite to ensure the issue does not reappear.

Reproducing an issue
There are two ways to create test cases from a log: scenario extraction and log re-simulation. Scenario extraction creates a synthetic test with actor behaviors sampled from perception outputs in the log. Log re-simulation replays the original logged data to the autonomy stack without any synthetic signals.
Both methods have strengths and weaknesses. With scenario extraction, the extracted actor behaviors are typically robust when it comes to stack changes and are portable between programs (for example, L2 and L4 autonomy programs within the same organization). Log re-simulation has higher fidelity and can 'losslessly' recreate the exact timing and content of signals sent to the autonomy stack. Later-stage autonomy programs typically create the majority of their simulated miles from logs using re-simulation. Scenario extraction usually suffices for simple test cases, but log re-simulation may be more affordable in the case of perception or object detection issues. When reproducing long-tail issues caused by noise, latency or hardware, re-simulation is the only available choice.
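The contrast between the two methods can be sketched schematically in Python; the class, attribute and function names below are invented for illustration and do not refer to any specific tool's API:

    from dataclasses import dataclass

    @dataclass
    class ActorTrack:
        actor_id: int
        states: list  # (time, x, y, heading) samples taken from perception output

    def extract_scenario(log):
        """Scenario extraction: build a synthetic test from perceived actor behavior.

        Only the perceived tracks are kept, so the scenario survives stack
        changes and ports between programs, but raw-signal detail is lost.
        """
        return {"actors": [ActorTrack(t.actor_id, t.states) for t in log.perception_tracks]}

    def resimulate(log, stack):
        """Log re-simulation: replay the original logged messages into the stack.

        No synthetic signals are introduced, so the exact timing and content
        of the recorded inputs are recreated 'losslessly'.
        """
        for timestamp, message in log.raw_messages:
            stack.inject(timestamp, message)
        return stack.result()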
Resolving the issue
Once autonomy programs have reproduced an issue by creating a test case and receiving a failing result, they can now resolve the issue locally by using real data from the log to improve their autonomy stack. After reproducing and resolving the issue locally, they can add the created test case to a regression test suite that regularly executes comprehensive tests in continuous integration (CI).
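In practice such a regression suite is often a set of ordinary CI tests. A minimal sketch using pytest might look like the following, with the imported helper module and its functions as placeholders for whatever the program actually uses:

    import pytest

    # Hypothetical helpers: load the test case created from the on-road log
    # and run it against the current build of the autonomy stack.
    from regression_helpers import load_test_case, run_simulation

    @pytest.mark.regression  # custom marker; registered in pytest.ini in a real setup
    def test_cut_in_issue_from_road_log():
        """Once the fix lands, this log-derived test must keep passing in CI."""
        test_case = load_test_case("cut_in_issue_from_road_log")
        result = run_simulation(test_case)
        assert result.passed, result.failure_reason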
How RideFlux uses log-based test-case creation
The self-driving technology provider RideFlux is
using log-based test-case creation to power its AV
development efforts. The company operates three
commercial robotaxi services on Jeju Island in
South Korea. Using re-simulation, the RideFlux
team can debug on-road events with the
autonomy stack in the loop, test different versions
of the stack on past drives, and catch regressions
before they are deployed to the vehicle. ‹
Log-based test-case creation enables autonomy programs to use real-world data to improve their autonomy stack

CONTACT
Applied Intuition | inquiry no. 106
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



DATA ACQUISITION

Voyage of discovery
How InoNet Computer's Mayflower-B17-LiQuid helps engineers to master the masses of data emerging in ADAS development and verification
By Janina Jonker & Manuel Deuter, InoNet Computer GmbH

The Mayflower-B17-LiQuid unit with InoNet QuickTray
Above: Liquid cooling for GPU and CPU
Below: From SIL to HIL to VIL

To react as quickly as possible in extreme situations, modern vehicles with a wide range of driver assistance systems require the appropriate hardware solution in the background. Advanced driver assistance systems (ADAS) make a major contribution to both safety and a comfortable driving experience. To ensure that these solutions work accurately, it is essential to provide OEMs with high-performance data acquisition and processing platforms in the development and testing phases. These platforms must enable constant data acquisition with high bandwidth and high processing performance for evaluation of parallel data streams in real time, as well as providing high computational power for AI applications.

Digital platform on wheels
Due to an increasing number of sensors, actuators and modules inside and outside the vehicle that communicate with the ECU, vehicles are increasingly becoming digital platforms on wheels. With the current shift from SIL to HIL and from HIL to VIL, as well as the validation of ADAS and AD developments, data sets need to be as close to reality as possible.
During the car development and testing phase, numerous high-resolution sensors are needed to monitor and record the driving environment. In addition to cameras, the interaction of sensor types such as lidar, radar and infrared detection is essential. Challenges in vehicle development are therefore signal conditioning and processing, and especially sensor fusion. Networks for communication protocols such as CAN, FlexRay and XCP interfaces, as well as expansion cards for real-time operating systems, are also essential. Event measurement data must be captured in real time and transferred from the vehicle to the evaluation system after a test drive. This requires high write rates and a robust data carrier for fast, flexible exchange, such as the InoNet QuickTray.
Due to the previously mentioned numerous sensors and communication between the control units, large amounts of data are generated at once, which are recorded for simulation, validation and optimization of the systems. For example, if a single camera recording at 8MP is considered for a car with SAE Level 3 and 25 modules (radar, lidar, cameras), it can easily require a transmission rate of 500MB/s. As video sensors usually have the highest raw data rate, we can extrapolate the 500MB/s to our 25 modules and have up to a 12.5GB/s pure write rate for raw data. At this rate we arrive at up to 360TB of raw data in an eight-hour working day.
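The arithmetic is easy to verify (using decimal units, 1GB = 10^9 bytes, as the article implicitly does); a few lines of Python reproduce the figures:

    per_camera_rate = 500e6                 # 500MB/s from one 8MP camera stream
    modules = 25                            # radar, lidar and camera modules combined
    write_rate = per_camera_rate * modules  # aggregate raw write rate
    shift_seconds = 8 * 3600                # one eight-hour working day

    total_bytes = write_rate * shift_seconds
    print(f"{write_rate / 1e9:.1f}GB/s -> {total_bytes / 1e12:.0f}TB per shift")
    # prints: 12.5GB/s -> 360TB per shift

Note that treating every module at the camera's 500MB/s is the article's own worst-case extrapolation, since video sensors dominate the raw data rate.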
Recording, storing and evaluating this amount of measurement data under real driving conditions in a vehicle is a major challenge for automotive manufacturers. Consequently, ADAS/AD development requires a rugged system that provides enormous computing power, high write rates and huge storage capacity.

GPU performance for AI under extreme conditions
High CPU and GPU performance is not only required for compressing video sequences. To create or simulate realistic traffic scenarios with object classification and AI-trained models, exceptionally high processor performance is required for algorithm computation. This can only be achieved by using powerful CPUs in combination with high-performance graphic cards bundled into a co-processor. To ensure performance even under extreme temperatures, the datalogger requires not only industrial components but also a special heat dissipation and cooling system. InoNet's Mayflower-B17-LiQuid addresses all these challenges by providing constant high performance with a dedicated liquid cooling system. ‹

CONTACT
InoNet | inquiry no. 107
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



PRODUCTS AND SERVICES

Child presence detection

Child deaths caused by heatstroke can be the tragic result when leaving children unattended in a parked car, particularly when the vehicle is exposed to the sun. Existing and future technical solutions can help prevent these fatalities by offering different levels of warnings, beginning with an initial warning to the vehicle user to more persistent escalating warnings and, as a final step, interventions such as informing third parties if the vehicle user has ignored the previous warnings and if critical temperatures can be detected.

A 4activeOD dummy, ready for child presence detection testing

Consumer test organizations will reward vehicles that offer such solutions (for example, Euro NCAP in its 2023 Child Presence Detection Test and Assessment Protocol) to boost the integration of these technologies in vehicle fleets worldwide. For the ratings it is essential to have reproducible test environments as obviously it is not possible to use real children for the test setup. Therefore, there is a strong need for test tools that can represent children of different ages.
As a market leader in advanced testing technologies for active vehicle safety, 4activeSystems has long experience in developing dummy objects such as pedestrians, bicyclists, powered two-wheelers and car targets to test ADAS systems outside the vehicle. The scope of the development is always the same: providing industry-accepted, globally harmonized test tools with very high similarity to real objects regarding the technologies used for the intended functionalities. This approach was also used when starting an open consortium project supported by industry partners and Euro NCAP in 2021, aiming to develop human surrogates that can be used to develop and test child presence detection systems inside the vehicle.
Out of the project came the 4activeOD dummies, which start from newborns as a particularly vulnerable group, going up to one, three and six year olds. The dummies have a realistic response to technologies such as camera systems, NIR systems, radar systems and wi-fi sensing. This is achieved by using representative materials and implementing realistic movement of different areas of the dummy. For example, the most challenging object to detect is a sleeping baby in a child seat covered by a blanket, where only very small movements of the chest and abdomen are produced through respiration. In such cases, realistic breathing patterns can be activated by a user-friendly software interface, while other movements (limbs, head movement, etc) can be triggered individually. These 'surrogates' are very robust and have a modular design so defective parts can be exchanged quickly.
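4activeSystems' control software is not public, so the following is a purely hypothetical Python sketch of what scripted actuation of such a surrogate could look like; every name in it is invented for illustration:

    # Hypothetical surrogate-control API, for illustration only.
    class SurrogateDummy:
        def __init__(self, age_group: str):
            self.age_group = age_group    # e.g. "newborn", "1yo", "3yo" or "6yo"
            self.active_patterns = []

        def start_breathing(self, rate_bpm: float, chest_rise_mm: float) -> None:
            # Subtle, repeatable chest/abdomen movement for the sleeping-baby case
            self.active_patterns.append(("breathing", rate_bpm, chest_rise_mm))

        def trigger_movement(self, body_part: str) -> None:
            # Individually triggered movements such as limbs or head
            self.active_patterns.append(("movement", body_part))

    dummy = SurrogateDummy("newborn")
    dummy.start_breathing(rate_bpm=40.0, chest_rise_mm=3.0)
    dummy.trigger_movement("head")

The point of such an interface is reproducibility: the same breathing pattern can be replayed identically across test runs, which real children obviously cannot provide.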
'Help those who cannot help themselves' – this perfectly describes the motivation behind 4activeSystems' decision to develop products that will help prevent any harm that might befall children left behind in vehicles. ‹

CONTACT
4activeSystems | inquiry no. 108
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi

INDEX TO ADVERTISERS
4activeSystems GmbH – 18
ADAS & Autonomous Vehicle Technology Expo California 2023 – Inside back cover
ADAS & Autonomous Vehicle Technology Expo Stuttgart 2023 – 11, 12, 13
Allegro MicroSystems – 21
Applied Intuition – Inside front cover
ASR Advanced Simulated Reality – 56, 57
dSpace GmbH – Outside back cover
Gentex Corporation – 3
InoNet Computer GmbH – 18
Mechanical Simulation – 27
National Instruments Corp – 45
Racelogic – 33
Sibros – 39
Wynne-Jones IP – 7
LEARN MORE ABOUT OUR ADVERTISERS NOW! FREE SERVICE!
Visit www.ukimediaevents.com/info/avi to request exclusive and rapid information about the latest technologies and services featured in this issue
HAVE YOU MET…?

Hemant Sikaria
CEO and co-founder of Sibros
AVI catches up with Hemant Sikaria, CEO and co-founder
of Sibros, which provides a vehicle-to-cloud management
platform for orchestrating smart updates, data collection
and fleet analytics on any vehicle or fleet, across the
product lifecycle
By Anthony James
Under Hemant Sikaria's direction, Silicon Valley-based Sibros continues to unite the complexities of embedded vehicle software and data with cloud-native technologies.
Sibros powers the connected vehicle ecosystem with its Deep Connected Platform (DCP) for full vehicle OTA software updates, data collection and diagnostics in one vertically integrated system. DCP supports any vehicle architecture – from ICE, hybrid and EV to fuel cell – while also meeting rigorous safety, security and compliance standards.
By combining powerful automotive software and data management tools in one platform, Sibros empowers OEMs to realize hundreds of connected vehicle use cases, spanning fleet management, predictive maintenance, data monetization, paid feature upgrades and beyond.
As CEO and co-founder, Sikaria has played a pivotal role in setting the gold standard for automotive software quality and building a world-class team, while ensuring the happiness of his staff, customers and partners.

Career in brief
Before Sibros, Sikaria spent five and a half years at Tesla, joining the team as an early engineer. At Tesla, Sikaria contributed to the design, implementation and deployment of the very first large-scale OTA software update system. This system currently handles updates for the entire Tesla fleet and has completed millions so far. Prior to exiting Tesla, Sikaria managed the body and chassis firmware teams. As part of his role, he oversaw the design, implementation and integration of systems such as the Model X falcon-wing doors, air suspension, seat controls, self-presenting door handles, key fob, security and vehicle authentication.

How did you get into the world of connected vehicle software systems and why did you start Sibros?
I've always wanted to solve problems. My involvement in connected vehicles started at Tesla. I worked on the software management systems there, when it was in its earlier stages. Back then I didn't have a Tesla and my family members had cars from other manufacturers. Together, we experienced three recalls within around an 18- to 24-month period. It was then that I thought, "Why, if we created a robust OTA solution at Tesla, have other manufacturers not sorted this yet?" There was an opportunity for an agnostic third party to solve the problem. So, we did. That was back in 2018, with me and my co-founder working in a shared office, when Sibros as you know it now was founded. We're now 140 people in four countries.

Elevator-pitch time. What do you offer?
Sibros offers integrated connected vehicle management that gives auto makers everything required to realize any use case for any vehicle type. From providing software updates for two-wheelers to haulage, we also deliver new connected apps and services to address software/firmware defects and critical updates, as well as all industry compliances for safety, cybersecurity and data protection – entirely over the air – at a programmatic scale.

You just announced your partnership with e.GO at CES. How many other OEMs do you work for?
e.GO is a really good example of how Sibros can propel smaller mobility startups to the next level and it was great to showcase e.GO at CES. Our solution is an out-of-the-box system that plugs and plays with the original telematics hardware in vehicles. If you [as a manufacturer] have a telematics unit already installed, we can use that unit to provide a full and robust digital ecosystem for orchestrating scalable updates, data collection and remote commands, based on intelligent vehicle data and software twins. This includes a cloud-native back-end portal, and in-vehicle firmware that sits on the telematics in the hardware of the vehicle. e.GO is an exciting partnership, like our work with Sono Motors, Volta Trucks and Bajaj Auto two-wheelers in India. We deploy the exact same product on all those client vehicles – even tractors, earthmovers and construction equipment, showing how one system can enable a range of different vehicles.
The names I mention are public. As with most companies in our space, acting as a white-label solution means we have a huge majority of clients that we can't discuss due to customer confidentiality.

Care to share any future trends that you see on the horizon?
If I take a critical but realistic look at humanity, and the way we move and operate, everyone is always connected and on the go and aware of our impact on the environment. The automotive industry must adopt technology enablers that make our mobility experiences seamlessly connected, safer, efficient and sustainable so humanity can lead happier and productive lives. ‹

For more information, visit www.sibros.com



SEPTEMBER 20 & 21, 2023
SANTA CLARA | CALIFORNIA

SAVE THE DATES!

The latest ADAS technologies + full autonomy solutions + simulation, test and development

TESTING TOOLS SENSING AND AI SIMULATION SOFTWARE

GET YOUR FREE EXHIBITION ENTRY PASS: SCAN QR CODE TO REGISTER NOW!
www.adas-avtexpo.com/california
#AVTExpoCA
Get involved online!
YOUR PARTNER IN SIMULATION AND VALIDATION

Ready, set,
done.
Preparation, simulation and validation made fast and easy.
SIMPHERA. Enter simpliCity.

Welcome to SIMPHERA. Enter a whole new world and discover the latest web-based solution to bring forward autonomous driving. Experience seamless testing on SIL and HIL platforms. Or in other words: Enter simpliCity. simphera.dspace.com

