THE INTERNATIONAL REVIEW OF AUTONOMOUS VEHICLE TECHNOLOGIES: FROM CONCEPTION TO MANUFACTURE TO IMPLEMENTATION
JANUARY 2023
Synthetic Data
How generative models are helping develop AV technology
PUBLISHED BY UKi MEDIA & EVENTS
22 COVER STORY: SYNTHETIC DATA
How generative models such as generative adversarial networks and neural radiance fields continue to help develop the latest self-driving technology

04 Tech insider: Safety Pool
WMG’s head of verification and validation, Siddartha Khastgir, on the decision to grant public access to the world’s largest scenario database for AVs
08 Tech insider: NCSU
A new cooperative distributed algorithm can
help autonomous vehicles better navigate
highway merges
10 Project brief: AgXeed 2.055W4
‘AgBots’ continue to grow in popularity: check out this 75hp dual-tracked driverless tractor for hoeing

14 Intelligent speed assist
Europe has mandated intelligent speed
assistance systems, paving the way for more
advanced automated driving features, while
also highlighting the challenges ahead,
including issues surrounding harmonized
vehicle speed signaling and driver acceptance
28 Testing standards
Regulators and experts share their
understanding of the emerging ADAS/AV
industry regulatory landscape, and the
implications for testing standards
34 Machine learning
A guide to some of the interesting architectures
being used in the AV machine learning stack,
including ViDAR, which can detect depth and
motion from camera frames alone
40 Interview: Baidu Apollo RT6
A senior expert in the operation management
department within Baidu’s Intelligent Driving
Group reveals further details following the
launch of the company’s new low-cost flagship
robotaxi, the RT6
46 Audi C-V2X
Pom Malhotra, senior director of connected services at Audi of America, on how C-V2X not only offers a means to drastically reduce VRU injuries and fatalities but also helps clear the way for autonomous driving
52 Supplier interview: Wynne Jones IP
Dr Elliott Davies, partner and patent attorney at Wynne Jones IP, examines the implications of the EU’s new Unified Patent Court (UPC)

54 Electric power steering
Allegro MicroSystems on how hardware selection affects driver experience in EPS

56 Simulation
The integration of Ansys AVxcelerate sensors within aSR’s simulation framework allows efficient virtual testing and validation of sensor technology for autonomous driving

58 In-cabin monitoring
dSpace’s RTMaps multisensor development framework offers a block-based approach that lets users easily integrate in-cabin sensors with relevant vehicle buses in their setups

59 ADAS testing
VBox Automotive’s all-new VBox 3i ADAS, designed to make ADAS testing easier

60 Datalogging
Xylon’s instrumental fix for real-time test and validation challenges

61 Log data management
Applied Intuition on how ADAS and AV programs can create simulation test cases from real-world logs

62 Data acquisition
How InoNet’s Mayflower-B17-LiQuid helps to master the masses of data captured during ADAS development and verification

63 Cabin monitoring
4activeSystems’ solutions for child presence detection

64 Have you met…?
Hemant Sikaria – CEO and co-founder, Sibros

Welcome
In the few months since I wrote the intro for the September 2022 issue, the UK, where AAVI is based, has seen three prime ministers, as many chancellors of the exchequer, and a daily deluge of ever more gloomy economic forecasts.
Little did I know back then that we were also about to witness VW and Ford pulling the plug on their investment in Argo AI, whose CTO, Brett Browning, was the subject of an exclusive interview in the September issue. Argo’s sudden and unexpected demise has spawned some rather sensationalist headlines announcing the death of the autonomous vehicle industry. Once valued at more than US$7bn, the company’s closure is undoubtedly a cause for concern, but it’s far from the end of the road for self-driving. It just so happens that on the same day that VW and Ford exited Argo AI, Mobileye went public, with its shares surging by more than 37%.
For many, Argo’s decline was in keeping with a general trend toward overall market consolidation and a preference among traditional automotive OEMs for L2/L3 ADAS technology that they can make a return on immediately – given current tough trading conditions – rather than more ambitious L4/L5 self-driving capabilities.
“There’s a huge opportunity right now for Ford to give time – the most valuable commodity in modern life – back to millions of customers while they’re in their vehicles,” said Ford CEO Jim Farley, in the auto maker’s Q3 earnings report, in which it announced that it was ditching Argo. “It’s mission-critical for Ford to develop great and differentiated L2+ and L3 applications that at the same time make transportation even safer. We’re optimistic about a future for L4 ADAS, but profitable, fully autonomous vehicles at scale are a long way off and we won’t necessarily have to create that technology ourselves.”
Just how far away self-driving vehicles remain is a topic of hot debate. Many believe the big tech firms outside the traditional auto sector will push on regardless, as they have the most to gain from disrupting the status quo. Some of these are already heavily invested in driverless trucks for the ‘middle mile’, avoiding the complications wrought by congested cities, with services likely to ramp up in the next year or two. Meanwhile, others champion C-V2X as a key enabler in delivering the communal intelligence and safety that could speed the urban deployment of AVs.
Baidu is one such company. AAVI was granted a rare interview with the internet giant, in which it made clear its plans for growth. “Baidu will continue to expand the scale of its driverless autonomous driving commercial operation,” says Qiong Wu, a senior expert in Baidu’s Intelligent Driving Group, on page 44. “It is expected that in 2025, autonomous driving technology will enter the stage of commercialization at scale.”
But it might not have to wait until then to make a profit. Baidu recently reported sales of its Apollo ADAS and autonomous driving solutions to auto makers had exceeded ¥11.4bn (US$1.6bn), with a major Chinese OEM choosing its Apollo Navigation Pilot, Automated Valet Parking and HD Maps for one of its most popular car models.
Anthony James
Editor, ADAS & Autonomous Vehicle International
anthony.james@ukimediaevents.com
Public works
Dr Siddartha Khastgir, head of
verification and validation at WMG,
University of Warwick, on the
decision to grant public access to
the Safety Pool Scenario Database
– the world’s largest scenario
database for automated vehicles
By Anthony James
WMG 3D simulator running a test scenario
What are the main criteria for
AV developers to consider when
selecting a scenario database?
The main criteria are ease of use, relevance to the
operational design domain (ODD) of the system under
test and diversity in test scenarios with respect to ODD.
Each of these factors is key to the success and
uptake of a scenario database. Having a database with
lots of content should not be the target for the ADS
ecosystem. As a community, we need to agree on
quality requirements for scenario databases (for each of
the three criteria listed above). Such discussions have
begun at various international regulatory levels.
What are some of the common misconceptions about scenario databases?
It is erroneously assumed that all types of users (ADS developers, regulators, researchers) will use the scenario database in a similar manner. The reality is that user journeys for different types of users will be very different and have varied sets of requirements. For example, a type approval authority or regulator would like to read/understand scenarios (stored in the scenario database) in a human-readable format. In this regard, WMG has led the British Standards Institution (BSI) activity on creation of the BSI Flex 1889 standard, which provides a language for this purpose.
[On the other hand,] an ADS developer performing simulation-based testing would like to have the scenarios in a machine-readable format that can be executed in a simulation platform for virtual testing.
Thus, we have a competing set of demands placed on a scenario database. For Safety Pool, we created a concept that can cater to both sets of users and requirements by having two abstraction levels for a language to describe scenarios.
Another misconception is that all ADS need to be tested for the same set of test scenarios, but in fact every ADS will require a different set of test scenarios depending on its ODD and design. There will be no ‘golden set’ of scenarios for all ADS. We need to create a scenario database that has diversity in scenarios and represents different ODDs – for example, urban, motorway, rural and weather conditions. With more than 250,000 scenarios, Safety Pool covers a wide variety of ODDs to meet this industry need.
In addition to misconceptions, there is still huge confusion in the ADS industry on the meaning/definition of the term ‘test scenarios’ and what constitutes a test scenario. In simple terms, a test scenario should define the behavior of various actors (vehicles, cyclists, pedestrians, etc). At WMG, we have been championing a drive for the industry to agree on terminology, and we actively contribute to various international standards to this end. However, more efforts are needed to drive industry to a consensus to avoid confusion.
The biggest challenge currently is the need to ensure that the scenarios contained in the database offer a representative set of complete scenarios for an ODD of the ADS. Research at WMG currently focuses on creating metrics on scenario completeness as well as requirements on the quality of scenario databases.

OVER 200 ORGANIZATIONS WORLDWIDE HAVE ALREADY ENROLLED IN SAFETY POOL

What scenarios are of most interest?
Based on our research at WMG, we have coined a concept of hazard-based testing, which focuses on testing how a system fails rather than how a system works. Our focus is on identifying failures.
As a result, we believe that the scenarios that offer the most value are the ones that reveal failures. We have proposed a hybrid approach to scenario generation: 1) data-based scenario generation; 2) knowledge-based scenario generation.
Data-based scenario generation analyzes accident databases, insurance claim records and real-world collected data to identify trends that lead to incidents or near-misses. Knowledge-based scenario generation analyzes the system to understand how its design could lead to failures. Our scenario generation approach has also been incorporated in the EU regulation on automated driving systems that was adopted on August 5, 2022.
To ensure randomization, we treat scenarios at a logical scenario level: all scenarios have parameters, and each parameter has a value range. Depending on the ODD definition of the ADS, we first identify the relevant scenarios (at the logical level) and have intelligent test case generation algorithms that assign values to the parameters, focusing on revealing failures.

How are you working to enhance Safety Pool?
We are continuously working with stakeholders across the ADS ecosystem in the UK and internationally to capture new requirements and user journeys. Currently, we are working with more than 500 organizations worldwide on various user journeys using the Safety Pool Scenario Database.
Regulators, researchers and ADS developers in different countries have a lot of common requirements but also many bespoke needs. Being a public database funded by the UK government allows us to work closely with these partners to understand and implement the enhancements and therefore grow the reach of Safety Pool. ‹
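The logical-scenario idea Khastgir describes – scenario families whose parameters each carry a value range, from which concrete test cases are sampled for a given ODD – can be sketched as follows. This is an illustrative toy only: all scenario names, parameters and ranges here are hypothetical and are not drawn from the Safety Pool database.

```python
import random

# Hypothetical 'logical scenarios': each has an ODD tag and parameters
# with value ranges. Concrete test cases are sampled from these ranges.
LOGICAL_SCENARIOS = {
    "cut_in_motorway": {
        "odd": "motorway",
        "params": {
            "ego_speed_kph": (80.0, 130.0),
            "cut_in_gap_m": (5.0, 40.0),
            "rain_intensity": (0.0, 1.0),
        },
    },
    "pedestrian_crossing_urban": {
        "odd": "urban",
        "params": {
            "ego_speed_kph": (20.0, 50.0),
            "pedestrian_speed_mps": (0.5, 2.5),
        },
    },
}

def concrete_scenarios(odd, n_per_scenario, seed=42):
    """Sample concrete test cases from logical scenarios matching the ODD."""
    rng = random.Random(seed)
    relevant = {k: v for k, v in LOGICAL_SCENARIOS.items() if v["odd"] == odd}
    cases = []
    for name, spec in relevant.items():
        for _ in range(n_per_scenario):
            cases.append(
                {"scenario": name,
                 **{p: rng.uniform(lo, hi) for p, (lo, hi) in spec["params"].items()}}
            )
    return cases

cases = concrete_scenarios("motorway", 3)
print(len(cases), cases[0]["scenario"])  # 3 cut_in_motorway
```

In a real toolchain the uniform sampler would be replaced by the kind of intelligent test case generation the interview mentions, biasing parameter choices toward combinations likely to reveal failures.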
A problem
shared
Researchers from North Carolina
State University (NCSU) have
developed a technique that helps
autonomous vehicles better
navigate tricky highway merges
By Anthony James
Test results
So far, the researchers have only tested their approach in
simulations, where the subproblems are shared among
different cores in the same computing system. However,
if autonomous vehicles ever use the approach on the road,
the vehicles would network with each other and share the
computing subproblems.
During this proof-of-concept testing, the researchers
looked at two things: whether their technique allowed
autonomous vehicle software to solve merging problems in
real time, and how the new ‘cooperative’ approach affected
traffic and safety compared with an existing model for
navigating autonomous vehicles.
“We tested this approach on a simulated freeway with
a lane drop from four to three, three to two and two to one
lanes,” explains Hajbabaie. “We tested three speed limits
of 60mph, 65mph and 70mph [96.5km/h, 104.5km/h and
112.5km/h], and three traffic demand levels of 900, 1,500
and 2,400 vehicles per hour per lane. One unexpected
outcome was that we did not see a breakdown (or traffic
congestion, in other words) even when the traffic level was
2,400 vehicles per hour per lane, and we initially had two
lanes that dropped to one.
“This means that a single freeway lane could process
4,800 vehicles per hour per lane, which is at least double
its theoretical capacity. This is a huge increase in capacity
under idealistic conditions in a simulated environment, and
it is possible partly because the methodology was fast
enough to perform all the computations.”

Above: Screenshot from the simulation used to evaluate the new technique designed to better manage highway merges

Overall, the researchers found their approach enabled autonomous vehicles to navigate complex freeway lane merging scenarios in real time in moderate and heavy traffic, with ‘spottier performance’ only when traffic volumes got particularly high.
“When the traffic volumes are high, we have
spottier performance – we see congestion forming
and growing on the freeway using existing methods.
The existing approaches show poor performance,
while our proposed approach shows traffic can be
handled on the freeway. We have observed reductions in
travel time of up to 86.4% when the traffic volumes are high.”
NCSU is now working on several ways to further its research. “One is to test the approach in a 1:18 scale automated testbed,” notes Hajbabaie. “The other is to bring communication loss and delay into the approach and see how it will influence the performance. Another angle is to consider non-cooperative driving behavior and its effects on the improvements we have observed.
“We are researching how to improve traffic operations and safety using automated vehicles as mobile controllers. In such a setting, an automated vehicle is assumed to collaborate with traffic lights and other traffic control systems to improve traffic operations and safety. We have developed this concept for a signalized intersection and a roundabout and are looking into other highway facilities.” ‹

“THIS IS THE FIRST TIME A COOPERATIVE DISTRIBUTED ALGORITHM HAS BEEN USED TO HELP WITH MERGING HIGHWAY TRAFFIC OF CONNECTED AUTOMATED VEHICLES”
Ali Hajbabaie, North Carolina State University
Light relief
Dutch driverless tractor manufacturer AgXeed has revealed a lightweight version of its popular AgBot that is even kinder to soil
By Anthony James

An electric drivetrain offers a speed range up to 13.5km/h

THE CONCEPT
The 2.055W4, a 75hp dual-tracked driverless tractor ideal for hoeing applications, was first revealed at the UK’s Cereals trade show earlier this year.
“Hoeing is highly needed to reduce chemical crop protection, but it is time-consuming – and that is what farmers don’t have,” explains Philipp Kamps, product manager at AgXeed. “Autonomy is part of the answer to this conflict – to achieve a high workload, AgXeed AgBots are not limited to specific tasks.”
Featuring a standard hitch for conventional implements, the 2.055W4 is highly versatile according to Kamps. “Beyond hoeing, the machine is also suitable for seedbed preparation, seeding and crop care applications,” he explains. “It is also ideal for grassland applications, being perfect for mowers, tedders and swathers.”

THE TEAM
The driverless tractor has been developed by Dutch company AgXeed, which describes itself as a provider of full autonomy systems with scalable and customizable hardware, cloud-based planning tools and value-generating data models. “Our solutions serve not only as a replacement for conventional tractors and harvesting machines, but they empower the farmer to approach his business and farm management in a completely different way,” explains Kamps.
Beyond efficiency, Kamps also emphasizes the environmental credentials of AgBots: “All our AgBots stay under the irreversible soil compaction threshold, eliminating further degradation of the soil in comparison with conventional, increasingly heavy machinery, leading to healthier crops and higher yields,” he says. “In this way we contribute in a structural way to improving productivity and yields while minimizing the impact on the environment.”
However, it’s the financial gains that remain most attractive: “No less important than ecology and productivity, we provide economy – of time, labor, energy, fertilizer, seeds, etc,” continues Kamps. “As a single unit or as part of a fleet, our AgBots are able to execute all agricultural tasks, completely independently and in a safe and efficient way due to robotic precision. Being autonomous, the AgBots will provide customers with ample time to focus on their key activities and add true added value to the food supply chain.”

THE VEHICLE
The model features a 55kW power source, coming out of a 2.9-liter four-stroke diesel engine, connected to a generator to provide electrical power to the rear wheels. A wide range of track widths are available, from 1.5m up to 3m. Autonomous operation relies on extremely detailed mapping and location data. “We first record the field boundaries with the help of an RTK-corrected GNSS, so we know where the physical boundaries are, as well as obstacles such as power lines, ditches or trees,” explains Kamps.
During operation, a lidar, radar and ultrasonic sensors are used to detect any obstacles that were not present during the initial recording of the field. “You can think of people, animals or just a car that was parked tight on the border of the field,” Kamps says. Meanwhile, a 4G connection provides live access to a 360° camera view and machine and process data, as well as service access for updates and remote services.

REAL-TIME KINEMATIC (RTK) GNSS ENSURES PRECISE GUIDANCE AND POSITIONING OF UP TO ±2.5CM

PROJECT STATUS
The machine continues to be shown at leading agricultural trade shows and demonstrations, with deliveries scheduled to begin in spring 2023. The AgBot is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement number 970619.
“In 2022 we were active in eight European countries, offering our machines through our distribution partners,” explains Anastasia Laska, CCO at AgXeed. “We are preparing for a hockey stick-shaped sales curve for 2023, scaling our production, distribution and service efforts accordingly.” ‹

The robot drives in reverse when fitted with a disc mower
INTELLIGENT SPEED ASSIST
Sign language
Mandatory intelligent speed assistance
systems in Europe could lay the
foundations for more advanced
automated driving features – but they’re
also highlighting the challenges ahead
By Alex Grant
combination of both, and intervention can be passive, such as a display and audible or haptic warnings, or the system can actively step in to reduce the vehicle’s speed. The main requirement is that the system should work with the driver, who must be able to override it and who ultimately remains responsible for staying within the speed limit.

ISA COULD REDUCE COLLISIONS BY 30% AND DEATHS BY 20%
Source: ETSC

Below: Mobileye’s Road Experience Management (REM) provides a rich supplemental layer of information on the driving environment
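The passive/active/override behavior described above can be sketched as simple control logic. The function name, messages and single-step structure are illustrative assumptions, not any production ISA implementation; the one rule taken from the text is that the driver's explicit override always wins.

```python
# Minimal sketch of ISA intervention modes (illustrative only).
def isa_action(speed_kph, limit_kph, active=True, driver_override=False):
    """Return (target_speed_kph, warning) for one control step."""
    if speed_kph <= limit_kph:
        return speed_kph, None
    if driver_override:
        return speed_kph, "limit exceeded (driver override)"
    if active:
        return limit_kph, "reducing speed to limit"   # active intervention
    return speed_kph, "audible/haptic warning"        # passive intervention

print(isa_action(58, 50, active=True))   # (50, 'reducing speed to limit')
print(isa_action(58, 50, active=False))  # passive: warn but do not slow
print(isa_action(58, 50, driver_override=True))  # override respected
```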
Safety in numbers
Mobileye is supplying EyeQ system-on-chip technology to help
systems recognize traffic signs more accurately, based on 200PB
of data gathered to train the algorithms. However, the company
believes safety-critical applications require robust redundancy and
crowdsourced data. Mobileye’s Road Experience Management (REM)
supplements real-time vision systems by gathering data from vehicles
and layering it over the company’s maps.
Adriano Palao of Euro NCAP also believes crowdsourced data will be
an important mechanism for updating explicit limits and filling in gaps
where lane markings are missing. He notes the European Data for Road
Safety (DFRS) project as an important step, encouraging the industry to
share critical information about conditions ahead, such as poor traction,
obstacles and pedestrians, so vehicles can adjust.
Euro NCAP will reward this sort of harmonization, he says. “The
biggest improvement will come when OEMs decide to share safety-
relevant data – i.e. a standardized back end, leveraged by Vehicle-To-
Network where all vehicles can write and read data. This is currently not
100% harmonized, though we are starting to see big improvements.”
Better by design
Ford believes the most advanced ISA technology could alleviate the need for physical signposts, which add roadside clutter, can be confusing and are prone to being obscured by foliage and poor weather. In March the company began a 12-month trial in Cologne, Germany, using geofencing to create a virtual boundary for different speed limits.
The project is a collaboration between the Ford City Engagement team, its software engineers in Palo Alto in California and officials from Cologne and Aachen in Germany. It uses Ford E-Transit electric vans, equipped with a system that recognizes 30km/h and 50km/h areas of the city center and elsewhere. Drivers are given information about new limits on the dashboard display, and the system will then slow the van appropriately – though it can be overridden.
Ford claims the system could be rolled out to future commercial and passenger vehicles, with benefits for safety but also for avoiding inadvertent speeding fines. The technology can adapt to local hazards, roadworks and different times of day, and fleet operators can geofence private facilities such as depots to enforce a lower speed limit. That infrastructure has wider benefits too: testing builds on geofencing trials in 2020 using the Transit Plug-in Hybrid, enabling it to switch to battery-electric mode for low-emission zones.
Research shows motorists drive more slowly when using ISA

a lot of people slow down on the roads they’re not too sure on. A self-driving car has to do the same sort of thing, so it has to have a much greater understanding of the road. I think that’s the development we’re going to see in the next five years.”
Those changes are being delivered collaboratively. The Navigation Data Standard (NDS) Association, a global alliance of OEMs, vendors and map data providers, is co-developing a specification for storing and interacting with HD maps, with a focus on improved ADAS capability. The latest version, NDS.Live, is organized into building blocks, which optimizes how the data is transmitted to the vehicle to avoid excessive data use.

Infrastructure investment
Not all of the problems are delivered at a vehicle level. The FIA’s Krid believes wider vehicle-to-vehicle and vehicle-to-infrastructure systems will enable ISA and other ADAS technologies to make use of artificial intelligence and edge computing. However, she adds that the infrastructure could create additional costs for consumers and taxpayers, and the basic infrastructure is still problematic.
“The very first issue to resolve is member states agreeing on harmonized vehicle speed signaling (it is a national responsibility, not at the EU level), complying with the Vienna 1968 Convention on Road Signs and Signals, spending sufficient resources on maintenance and putting road signaling on those places where proper ISA detection is deemed critical. Next is further innovations in affordable vehicle technology,” she explains.
“[Other regions can learn from Europe by] monitoring whether member states are prepared to significantly improve road signaling and traffic signs and especially whether they will reserve sufficient resources to put such advanced infrastructure in place. We hope that ISA one day will be sufficiently robust and trustworthy so that drivers and traffic participants can reap the benefits, outweighing the flaws that all systems inherently have, despite innovation and technical progress.”

Commercial sense
The General Safety Regulation applies not only to cars and vans but to trucks and buses too. HERE’s Philip Hubertus says this means that in addition to understanding speed limits, ISA has to know how these apply to different vehicles and conditions. The upshot, he adds, is it could help cost-sensitive fleet operators save money.
“We are working on a system that allows the truck drivers to drive without their feet. It is adjusting the speed limits like adaptive cruise control, and not only on motorways but also on the smaller roads. That helps manufacturers sell a vehicle that is much more fuel efficient,” he explains.
“Knowing what the speed limit is up ahead, and the curvature and the slope of the road, allows them to sail into situations where the speed limit goes down, then accelerate correctly when the vehicle goes uphill, sail again when it goes downhill. That is, according to them [fleet operators], probably going to reduce fuel consumption by about 10%, which is huge for logistics.”

HD map data used for ISA could also help trucks save fuel
SYNTHETIC DATA
Manufactured
AV technology
By Ben Dickson
Costly labels
Machine learning models are best known for prediction
and classification tasks. They receive visual, audio, textual
or tabular data as input and return a prediction, such as
the probability of an image containing a certain object,
the sentiment of a social media post or the future price of
a stock. In AVs, machine learning models perform various
tasks, such as detecting objects and predicting the speed
and trajectory of other cars.
Without synthetic data, all the training examples must
be collected from the real world, which imposes extra
costs on AV companies.
“Current methods require manufacturers to build
and deploy cars that are loaded with sensors and cameras
to potentially drive thousands – if not hundreds of
thousands – of miles before car makers obtain enough
data to label,” says Yashar Behzadi, CEO and founder of
Synthesis AI.
An additional challenge is the labeling of training
examples. Many machine learning models used in
autonomous vehicles are ‘supervised’, meaning that they
require their training examples to be annotated for
ground truth information.
“The process of labeling it is a monumental task that
requires careful extraction of specific events to identify
useful information for developing computer vision
machine learning models,” Behzadi says.
A recent study by Synthesis AI found that data labeling
costs organizations upward of US$2.3m annually, and that
it can take up to 16 weeks to conduct supervised learning
on new projects.
Block-NeRF uses
neural radiance
fields to recreate
entire city blocks
from previously
recorded videos
Simulation−based
artists to build assets, and yet still do not produce photorealistic
data. NeRFs could contribute to fixing these two problems in the
long term. Instead of human artists, assets can be automatically
learned from data sets through NeRF models.”
NeRFs are useful for several applications, including object sensor selection
and packaging
reconstruction, background removal and interpolation between
different types of objects.
NeRFs have also found their way into the self-driving car
industry. A very interesting example is Waymo’s Block-NeRF – a Synthetic data can also
deep learning model that can synthesize full blocks of a city based support the selection and
on recordings captured by the cameras of a vehicle. The technique packaging of sensors to
was used to reconstruct a square-kilometer neighborhood of San optimize perception performance,
Francisco, California, based on video frames captured by an according to Chris Gundling, head
autonomous vehicle during multiple trips. The recordings were done of sensor simulation at Applied
across three months, at different times of day and under different lighting and weather conditions. Once the Block-NeRFs were created, the researchers could create photorealistic image sequences of new pathways that were not traveled by the autonomous vehicle during the original recordings.
Under the hood, NeRFs create latent representations of the scene, which can then be used for different tasks. In fact, Chitta and his colleagues at Max Planck Institute and the University of Tübingen used the same kind of technique to create neural attention fields (NEATs), a deep learning architecture that can be used in AVs during the self-driving task.
“In NEAT, we use implicit representations (like NeRF) as our model output as it lets us model very high resolutions in space and time with a fixed amount of memory usage,” Chitta says.
NEAT can compress high-resolution images into intermediate representations that can then be used for trajectory prediction and planning.
“I think an even more important role that generative models will play is in the testing of AVs,” Chitta says. “Demonstrating reliability purely through real-world driving demonstrations is probably not going to be sufficient given the high safety standards. This means that high-quality simulation with critical scenarios, using generative models, will be crucial to showcase or certify the robustness of self-driving systems.”

…Intuition. “Typically, sensor selection and packaging [i.e. a sensor’s mounting hardware and placement on the vehicle] consume time and resources,” he says. “ADAS and AV teams need to acquire physical sensor samples from the manufacturer, mount them on vehicles or test rigs and evaluate their performance through real-world testing. Sample sensors often take months to arrive and are unavailable until sensors have entered production. Real-world test methods constrain the number of configurations teams can test.
“With sensor simulation, teams can simulate specific sensors packaged in any combination of ways and quantitatively evaluate the resulting synthetic data. This method is fast, cost-effective and deterministic, enabling teams to rapidly identify a list of sensors and packaging that might maximize their perception system’s performance. Teams can then validate the system with a small number of real-world tests.
“This simulation-first workflow enables teams to evaluate a larger number of sensors and packaging while simultaneously saving resources, thus ensuring they select the optimal sensors for their perception system. For example, teams can evaluate which Ouster lidar best suits their needs by leveraging lidar models created in a direct partnership between Applied Intuition and Ouster.”
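Chitta's point about implicit representations — a network queried at continuous (x, y, t) coordinates, so the memory holding the scene is fixed no matter what resolution you render at — can be illustrated with a minimal sketch. The tiny random MLP and its Fourier-feature encoding below are illustrative toys, not NEAT's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit representation: a tiny random MLP f(x, y, t) -> value.
# The "scene" lives entirely in these fixed-size weights; the output
# resolution is chosen only at query time.
W1 = rng.normal(size=(64, 6)); b1 = rng.normal(size=64)
W2 = rng.normal(size=(1, 64)); b2 = rng.normal(size=1)

def encode(coords):
    # NeRF-style positional (Fourier) features help MLPs fit high frequencies
    return np.concatenate([np.sin(2 * np.pi * coords),
                           np.cos(2 * np.pi * coords)], axis=-1)

def query(coords):
    h = np.tanh(encode(coords) @ W1.T + b1)   # hidden layer
    return h @ W2.T + b2                      # scalar output per coordinate

def render(res):
    # Sample the SAME fixed-size model on a res x res grid at one time slice
    xs, ys = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    t = np.full_like(xs, 0.5)
    coords = np.stack([xs, ys, t], axis=-1).reshape(-1, 3)
    return query(coords).reshape(res, res)

low, high = render(16), render(256)
n_params = W1.size + b1.size + W2.size + b2.size
print(low.shape, high.shape, n_params)  # memory for the scene: 513 params either way
```

The point of the sketch is the last line: rendering at 16×16 or 256×256 queries the same 513 parameters, which is the fixed-memory property Chitta describes.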
TESTING STANDARDS

Ordinance survey

ADAS & Autonomous Vehicle International January 2023

Above: Diagram showing a ‘cut-in’ test procedure for AEB

The USDOT’s National Highway Traffic Safety Administration issued a first-of-its-kind final rule relating to the safety of occupants in autonomous vehicles in 2022

How self-driving cars will be certified in the EU
The certification process for self-driving cars in the EU is based on the Whole Vehicle Type-Approval (WVTA) system already in place in Europe for traditional automobiles, according to Maria Cristina Galassi at the European Commission’s Joint Research Center.
“Under the WVTA, a manufacturer can obtain certification for a vehicle type in one EU country and market it EU-wide without further tests,” she explains. “The certification is issued by a type approval authority and the tests are carried out by the designated technical services.”
A technical service could be a third-party testing center or a conformity assessment agency approved to carry out tests or inspections on behalf of the national regulator. EU member states are required to notify the European Commission of all test centers approved as designated technical services.
“The same system will apply for AVs,” says Galassi. “However, in addition to the testing for type approval done at EU level, rules concerning the safety of operation of those vehicles in local transportation services will be implemented at the national/local level. Therefore, close cooperation with member state authorities is important.”
For this reason, says Galassi, in the first phase of the new legislation implementation the EU is encouraging member states’ authorities and technical services to “share their experience” of how the new testing and certification regime is unfolding.

“For the OEMs, if they take the first step and get it wrong, then the house falls down around them. And with respect to the regulators, if they regulate too intensely and too strictly, then the same thing will probably happen.”
Despite these uncertainties, there is broad agreement between regulators and OEMs that the certification of self-driving vehicles will primarily involve scenario-based testing. This refers to testing a vehicle’s response to various predetermined scenarios that it might encounter on the road. Although some of this testing can be physical (on either public roads or private testing grounds), most of it is simulation based.
Foretellix and ASAM have jointly developed the OpenScenario simulator-based software for use in scenario testing. They hope that the software will become the standard language for scenario-based testing. According to Amid, it is already being used by more than 20 companies developing AV technology, including Volvo. “It is a standard that enables you to specify and articulate scenarios in a specific language that can be fed into computer programs,” Amid explains.
But even with a standardized software in place to run scenario-based tests, OEMs and regulators are left with the thorny question of which scenarios to test. Amid points to three sources currently used to identify scenarios.
The first is data derived from analyzing real-world road traffic data, including driver behavior and traffic patterns. The second source is scenarios specified in standards and regulations. For example, Euro NCAP, the European car safety assessment program, has published a set of protocols that can be represented in OpenScenario “and immediately you have many more scenarios to test”, says Amid.
The third source is so-called edge cases: scenarios that are rare but nevertheless must be tested against to ensure a vehicle can handle all eventualities.
But once scenarios have been selected, OEMs and regulators still need to decide on the degree to which they need to be tested, according to Dr Sven Beiker, who is an external advisor to the US-based standards agency SAE International as well as the managing director of Silicon Valley Mobility, an independent consulting and advisory firm.
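The scale of that decision becomes clear once an abstract scenario is discretized into concrete test cases: even a short list of parameters multiplies quickly. A sketch of the combinatorics — the parameter names, ranges and step sizes below are invented for illustration; real scenario catalogs (e.g. ASAM OpenSCENARIO files) define them far more richly:

```python
import itertools

# Illustrative parameter space for one abstract "pedestrian crossing" scenario.
pedestrian_speed_ms = [0.5, 1.0, 1.5, 2.0, 2.5]   # walking to jogging
crossing_angle_deg  = [30, 60, 90, 120, 150]      # path relative to the road
trigger_distance_m  = [5, 10, 15, 20, 25, 30]     # range when pedestrian steps out
ego_speed_kmh       = [20, 30, 40, 50]
lighting            = ["day", "dusk", "night"]
road_state          = ["dry", "wet"]

# Every combination is one concrete test case.
variations = list(itertools.product(
    pedestrian_speed_ms, crossing_angle_deg, trigger_distance_m,
    ego_speed_kmh, lighting, road_state))

print(len(variations))  # 3,600 concrete tests from one abstract scenario
```

Six coarse parameters already yield 3,600 runs; finer steps, more actors or continuous sampling push the count far higher, which is exactly the "when do you stop" problem Beiker raises.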
Taking as an example the scenario of a pedestrian carelessly crossing the street, Beiker says a testing house could simulate 99% of the possible variations of how that might play out. “But we already know that 99% …
“You can basically start anywhere and almost all of it would be relevant but the real question is when do you stop; when do you say, ‘Now we’re done’?”
The question of how much testing is enough speaks to the high stakes involved in the certification of self-driving vehicles. Since no one in the nascent industry wants to run the risk of repeating the mistakes of Uber, whose self-driving car business imploded after one of its test vehicles was involved in a fatal car crash in Arizona in 2018, most players are erring heavily on the side of caution.
One innovative method that the industry has developed to help the regulatory process along is the growing use of operational design domains (ODDs). Seen as a way of managing the risk of putting ADS-enabled vehicles on public roads, ODDs rely on the concept of ‘informed safety’. This means that rather than attempting to realize the impossible goal of making a product totally safe, companies make users aware of what a system can and cannot do. When it comes to autonomous vehicles, ODDs can be used to specify the conditions in which a particular autonomous system is capable of operating.
“They quantify in a non-ambiguous and clear fashion where the autonomous function is intended to drive,” says Engel.
An obvious area where quantifying ODDs could prove useful from a regulatory standpoint is the ability of autonomous systems to handle different weather conditions, Engel continues: “If we take the US as an example, there are quite a few AVs on the roads but the majority of them are in very dry, sunny areas. So their operating domain when it comes to weather is dry and clear weather, which is much easier than if you’re trying to develop the same function for rain or snow.”
According to Galassi, the new EU regulations don’t cover ODD-specific testing: “The EU type approval will represent only the first step for market introduction of automated vehicles. Some general scenarios will be tested already at this stage, aiming at verifying the overall capabilities of the system, but ODD-specific testing will be possible only at the national/local level,” she concludes. “That is one of the reasons why an open regulation approach was needed, with high-level requirements, compared with the conventional approach based on physical testing in very specific conditions.” ‹

Systematically testing vehicles
The German research project VVM – Verification and Validation Methods for Automated Vehicles at Level 4 and 5 – which began in July 2019, is funded by the Federal Ministry of Economics and Energy and led by Robert Bosch GmbH and BMW Group, with support from 23 industry and research partners. Its aim is to develop test methods and provide systems and methods to prove that automated vehicles are safe.
The project reached its halfway point in March this year, with partners presenting their initial results and methods. Representatives from politics, industry and academia stressed the importance of the project, stating that autonomous driving can be safe only if reliable validation and verification methods are used. dSPACE contributed its expertise in software- and hardware-in-the-loop simulation for the analysis and evaluation of project requirements, and also provided prototype implementations.
Results were presented for three main areas: the requirements for the test methods, orchestration and validation of the test infrastructures, and the data flow and tools used in the project. dSPACE successfully tested the methods developed in the project for practicality with two demonstrations on criticality analysis and sensor model validation.
The second half of the project will focus on refining the demonstrated results and implementing more concept demonstrators. While the criticality analysis and sensor model validation will remain as important as before, test orchestration, i.e. the distribution of test cases to the most suitable test equipment, and the required continuity of the test infrastructure between software- and hardware-in-the-loop testing will be another focal point of dSPACE’s efforts in the project. For more information, please visit https://www.vvm-projekt.de/en/
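An ODD expressed as data can be checked mechanically before (and during) every trip. A minimal sketch — the condition names, categories and thresholds below are invented for illustration and are not taken from any regulation or standard:

```python
# Hypothetical ODD for a system approved only for dry, daylight, urban driving.
ODD = {
    "weather": {"clear", "overcast"},       # no rain, snow or fog
    "lighting": {"daylight"},
    "max_wind_ms": 12.0,
    "road_types": {"urban", "suburban"},
}

def within_odd(conditions: dict) -> bool:
    """Return True only if every observed condition falls inside the ODD."""
    return (
        conditions["weather"] in ODD["weather"]
        and conditions["lighting"] in ODD["lighting"]
        and conditions["wind_ms"] <= ODD["max_wind_ms"]
        and conditions["road_type"] in ODD["road_types"]
    )

ok = within_odd({"weather": "clear", "lighting": "daylight",
                 "wind_ms": 4.2, "road_type": "urban"})
bad = within_odd({"weather": "rain", "lighting": "daylight",
                  "wind_ms": 4.2, "road_type": "urban"})
print(ok, bad)  # True False — rain falls outside this hypothetical ODD
```

This is the machine-readable counterpart of Engel's "non-ambiguous and clear fashion": the same structure can gate both deployment decisions and, as Galassi notes, the ODD-specific tests run at national or local level.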
Architectural digest

The evolution of the AV machine learning stack

By Ben Dickson
…used in ALVINN have become a mainstay of the autonomous vehicle industry. But as the cars, sensing hardware and processors have evolved, so have the models. Today’s neural networks still retain some of the components used in ALVINN.

Transformers
Perception is a key component of self-driving cars. AVs must be able to detect cars, pedestrians, traffic signs, objects, street lanes and more by analyzing video and lidar data. Until recently, the main machine learning architecture used for this task was the convolutional neural network (CNN), a deep learning architecture that has been around since the late 1980s and has played a prominent role in computer vision applications.
“…as a so-called attention mechanism to understand the context of the image,” says Le An, senior deep learning software engineer at Nvidia.
The attention mechanism relates tokens from the input and gives more weight to the parts that are more relevant across the image. This mechanism overcomes the limitation of conventional convolution filters in the CNN, which only consider local contexts. Vision transformers have resulted in notable improvements in computer vision tasks such as image classification, object detection and semantic segmentation.

…technology stack that used deep learning
Below: The vision transformer uses the successful transformer model to process visual data

Worldly wise
From an early age, humans and other animals start building internal representations, or models, of the world through observation and interaction. This accumulated knowledge of the world (‘common sense’) enables us to navigate effectively in unfamiliar situations. These ‘world’ models that describe the evolution of the environment around us are paramount in how we act in our everyday lives.
Imitation learning is a method that allows machine learning
models to learn to mimic human behavior on a given task. An
example is learning to drive a vehicle in an urban environment
from expert demonstrations collected by human drivers. Data
collected by human drivers implicitly contains common
knowledge of the world. To account for this knowledge, many
AI developers believe that incorporating world models into
driving models is key to enabling them to properly understand
the human decisions they are learning from and ultimately
generalize to more real-world situations.
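Imitation learning in its simplest form is behavior cloning: fit a function from observed states to the expert's actions. A toy sketch with a linear policy and synthetic "expert" data — everything here is invented for illustration, and MILE itself is a far richer, model-based approach:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "expert demonstrations": state = (lateral offset, heading error);
# the hidden expert rule is steering = -0.8*offset - 1.5*heading, plus noise.
states = rng.normal(size=(500, 2))
actions = states @ np.array([-0.8, -1.5]) + 0.01 * rng.normal(size=500)

# Behavior cloning: least-squares fit of a linear policy to the demonstrations.
policy, *_ = np.linalg.lstsq(states, actions, rcond=None)
print(policy.round(2))  # recovers roughly [-0.8, -1.5], the expert's rule

# The cloned policy can now act in states the expert never visited...
new_state = np.array([0.5, -0.2])
steer = float(new_state @ policy)
# ...but, as the article notes, it has no world model: nothing tells it *why*
# the expert steered that way, which is what limits generalization.
```

The limitation shown in the last comment is exactly the motivation the article gives for adding world models to imitation-learned driving policies.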
London-based Wayve, a startup pioneering a scalable way
to bring AVs to the UK and beyond, recently presented a paper
titled Model-Based Imitation Learning for Urban Driving at
NeurIPS 2022 (a conference on neural information processing
systems). The paper detailed an approach, named MILE, that
ViDAR
The machine learning models of autonomous vehicles must
solve several challenges, including 3D scene construction,
depth estimation and optical flow prediction. These tasks
usually require heavy use of lidars.
In 2020, scientists at Waymo and Google proposed ViDAR,
a neural network architecture that can detect depth and
motion from camera frames alone.
“The ViDAR’s architecture is a deep structure from motion
(SfM) framework that, given a set of consecutive images, can
predict the depth and the flow maps together,” says Jeremy
Cohen, AV engineer and founder of Think Autonomous.
The ViDAR network takes several consecutive image frames
obtained from AV cameras and predicts the scene depth map,
which is a visual representation of the scene where each pixel
represents the distance to the camera. By putting several
consecutive depth maps next to each other, the ViDAR then
tracks the movements of vehicles and objects through time.
Above: ViDAR is a deep learning architecture that can estimate depth and flow from visual data
Below: A Wayve test vehicle in London

“A ViDAR isn’t about depth perception and it isn’t about optical flow – it’s about fusing both into a single 4D output (3D for depth, 4D for time),” Cohen says.
One challenge with the ViDAR is teaching the neural network to detect depth from flat images. For this, the machine learning engineers need the depth data that corresponds to the video frames they use for training the model. This depth data becomes the ground truth that the model uses to evaluate its predictions and adjust its parameters during training. Fortunately for Waymo, the company had collected large amounts of aligned camera and lidar data from the millions of miles that its AVs had previously traveled. The lidar data acted as the labels used to train the depth estimator.
In addition to depth and flow prediction, ViDARs can perform other tasks such as uncertainty estimation, where they provide a color map that represents the reliability of the depth estimates.
This feature is particularly important in certain conditions such
as darkness and foggy weather.
Although Waymo is still using lidars in its self-driving vehicles,
the development of ViDAR has helped make its machine learning
stack much more robust.
“Using this new approach, Waymo can rely more on cameras
and get a richer scene,” Cohen says. “Sending videos to a model,
they can also predict the flow directly from the model, rather
than calculating an object’s displacement from frame to frame.”
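The "depth plus time" fusion Cohen describes can be illustrated with basic pinhole-camera geometry: back-project each pixel of a depth map into 3D, then difference consecutive frames to get per-point motion. A toy sketch — the intrinsics and depth values are made up, and a real ViDAR learns all of this end to end rather than computing it analytically:

```python
import numpy as np

# Assumed pinhole intrinsics for a toy 4x4 "camera" (fx, fy, cx, cy invented).
fx = fy = 2.0
cx = cy = 1.5
H = W = 4

def backproject(depth):
    """Lift a depth map (meters per pixel) to 3D points in camera coordinates."""
    v, u = np.mgrid[0:H, 0:W]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)     # (H, W, 3) point cloud

# Two consecutive depth maps: the scene at 10m, then at 9.5m (approaching).
depth_t0 = np.full((H, W), 10.0)
depth_t1 = np.full((H, W), 9.5)

# Differencing the back-projected clouds gives per-pixel 3D motion -
# the "4D output" (3D structure evolving over time) in miniature.
flow_3d = backproject(depth_t1) - backproject(depth_t0)
print(flow_3d[2, 2])  # near-central pixel: motion mostly toward the camera
```

In the learned setting, the aligned lidar sweeps Waymo collected play the role of `depth_t0`/`depth_t1` ground truth during training, and the network predicts both quantities directly from camera frames.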
Driving driverless

…operation in Wuhan, Chongqing

“RT6 IS A COMPLETE REDESIGN, PUTTING THE NEEDS OF … AND VALUE FOR TAXI USERS”
Qiong Wu, senior expert, Baidu IDG

…Yizhuang district and allows the removal of a safety operator from the driver’s seat. However, operations are limited to daylight hours and all cars must have a staff member in the vehicle.
“We will gradually roll out 24-hour operation based on
“Any retrofitted or modified vehicle cannot be mass-produced regulatory approval,” says Wu. “To meet the needs of China’s
at scale,” says Wu, “so the cost of each car cannot be reduced. On rapidly developing autonomous driving technology, Beijing’s
the contrary, Apollo RT6 is a complete redesign of an autonomous Economic and Technological Development Zone has started
driving vehicle, putting the needs of passengers first. It reduces the construction of the Beijing Intelligent Connected Vehicle
the cost significantly and delivers a better experience and more Policy Pilot Zone, following the establishment of the BJHAD
value to taxi users.” demonstration area,” she continues.
“It has issued the first batch of night-time and special weather
World view autonomous driving test qualifications to Baidu Apollo and other
Baidu also enjoys another key advantage: precision-honed training data derived from a near decade of real-world driving. Baidu IDG’s test fleet of more than 600 autonomous vehicles has accumulated more than 36,000,000km, drawn from operations in 10 high-density cities across China, as well as driverless testing in California, while the number of passengers carried on Apollo Go has passed one million.
There have been a few bumps along the way – but Wu won’t be drawn on details. “Autonomous driving does not equal zero accidents,” she says. “It is almost certain that accidents will happen during development. But we should not deny its progress and achievements because of accidents that happened in the early stage of development.”
Fortunately, the Chinese authorities seem to agree. In April 2022, Apollo Go secured the first-ever permits in China to provide driverless ride-hailing services to the public, on open roads.
Such approval and investment suggest an openness among Chinese authorities to embrace a fully driverless future. Sure enough, in August 2022 Baidu secured further permits for Apollo Go to offer fully driverless commercial operations, with no safety driver in the vehicle at all, in Chongqing and Wuhan; and as AAVI went to press (November 2022), it also received Beijing’s first permits to test robotaxis without a front-row safety driver in the BJHAD demo zone, where its first batch of 10 ‘Apollo Moon’ fifth-generation robotaxis have begun testing.
“Chongqing and Wuhan are the first two cities to allow fully driverless autonomous vehicles, which provides a sound policy foundation for companies like Baidu to pilot and apply its innovative technologies,” comments Wu. “It builds an environment that encourages industry pioneers to run pilot programs while they operate. Removing the safety operator from the car is a major move to help explore the commercialization of robotaxis and will accelerate the development of China’s autonomous driving core technology.”

Apollo Go has expanded to more than 10 cities in China since its launch in 2020
For Wu, the data from such dynamic and diverse locations is
priceless, with each Apollo Go vehicle in Beijing, Shanghai and
Guangzhou completing more than 15 rides per day, on average,
according to Baidu’s latest (Q3 2022) earnings report. “The density
of urban road participants in Beijing is 15 times that of California,”
adds Wu. “Among all the typical urban road scenes with
commercial potential in the world, it can be said that China’s are
the most complex, which means that to realize driverless operation
on those roads we have to face the considerable challenges.”
Fortunately Baidu, as China’s leading search engine, can call
upon some impressive resources to help it crunch through all
that accumulated data. “Cloud computing is the foundation of AI, and Baidu’s powerful technology in this area can accelerate the iteration of autonomous driving technology,” says Wu.
Networking opportunity
Baidu also benefits from a government keen to provide the regulatory framework and spectrum allocation to enable cellular vehicle-to-everything (C-V2X) – a key technology for fully driverless L4 or L5 operation.
“While our robotaxis are able to handle 99.99% of complex urban roads, C-V2X infrastructure can provide extra redundancy for long-tail problems and edge cases, such as driving during tough weather, bad lighting or poor environmental conditions,” says Wu.
Many Chinese cities have partnered with local wireless network operators to build the necessary infrastructure, while C-V2X is expected to be included in approximately half of new cars in China by 2025, by which time China also plans to begin testing even more advanced 5G NR-V2X technology.
The BJHAD demo zone in Beijing’s Yizhuang district already enables V2X across an area of 60km², spanning 225km of open roads. As of May 2022, 332 intersections within the zone had been upgraded with smart traffic lights, helping to achieve a 30.3% reduction in the length of traffic congestion at intersections.
These intersections also feature the world’s first and only V2X technology that enables a car without sensor equipment to perform L4 autonomous driving on open public roads, using only roadside sensing empowered by 5G and V2X wireless communication. The technology was developed in 2021 by Baidu as part of the Apollo Air project in partnership with the Institute for AI Industry Research (AIR) at Tsinghua University. Meanwhile, in April 2022 Baidu opened up a full-stack open system for C-V2X, named Kailu.
All Apollo Go robotaxis are equipped with an onboard V2X unit that can communicate with roadside sensors, while V2X is part of Baidu IDG’s ACE (Autonomous Driving, Connected Roads, Efficient Mobility) smart transportation solution, which has been adopted by 63 cities.
In previous public statements, Baidu has outlined how V2X and AI technology can help cities improve traffic congestion, road safety and air quality, with V2X infrastructure serving as “an intelligent vehicle-road coordination platform”. The company believes such a platform can provide connected autonomous vehicles with information on surrounding traffic and road conditions, and “thus defines the standards for traffic-related applications, which in turn drive industry adoption”.
Meanwhile, Baidu has released the DAIR-V2X data set – the first large-scale, multimodality, multiview data set from real scenarios for VICAD (vehicle-infrastructure cooperative autonomous driving). DAIR-V2X comprises 71,254 lidar frames and 71,254 camera frames, all captured from real scenes with 3D annotations.

Map maker
C-V2X

Crash course
Audi recently conducted a C-V2X trial designed to improve driver awareness of cyclists
LTE-enabled connected mobility technology already supports Audi’s Traffic Light Information (TLI) service, launched in the US in 2016, which helps the OEM’s drivers catch a ‘green wave’. A cockpit display tells the driver what speed is required to reach the next traffic light on green; if that is not possible within the permitted speed limit, it shows a countdown to the next green phase.
However, Audi is keen to explore the possibilities offered by PC5 C-V2X technology, which exploits the ITS 5.9GHz cellular band licensed by the Federal Communications Commission for traffic safety to enable direct ‘user equipment to user equipment’ communication without the need for cellular towers.
“Spoke Safety is partnering with manufacturers to embed C-V2X technology into bicycle frames, as well as working with manufacturers of devices that go on bicycles – tail-lights, GPS navigation and location systems,” explains Pom Malhotra, senior director of connected services, Audi of America. “They’ve also created a device that you can buy separately and just push into your backpack, which can directly alert vehicles when there is a cyclist present. What we have been able to do in all these trials is modify what is shown to the driver so that they know exactly what’s going on – whether it’s a cyclist, a construction worker or a school bus.”

Preventing collisions between vehicles and cyclists
During the recent Oceanside trial, Audi focused on five use cases with the greatest potential to reduce collisions between vehicles and cyclists:
• Proximity warning/front and rear collision warning: When a vehicle and cyclist come closer to one another, a notification appears showing where a possible collision may occur
• Cross-traffic alert: Vehicle detects if a bicycle is on a possible collision path when approaching from the left or right up ahead
• Parallel parking departure alert: When pulling out of its curbside spot, a parallel-parked vehicle detects if a bicycle is approaching from behind
• Right-turn assist (‘right hook’): Driver gives turn signal indication to turn right while cyclist drives straight through
• Left-turn assist (‘left cross’): Vehicle gives left turn indication and receives notification if a cyclist is approaching from the opposite direction and potentially entering the turn of the vehicle
First responder
Audi’s recent Oceanside demo was not the company’s
first foray into actively exploring the safety benefits of
C-V2X technology. In October 2021 in Northern
Virginia, it teamed up with the Virginia Tech Transportation
Institute, Virginia Department of Transportation and Qualcomm
Technologies on a C-V2X-enabled program to inform vehicles
that they were approaching a construction zone, alert drivers of
the workzone speed limit when entering construction zones
and alert roadside workers when vehicles were close to
construction zones via a connected safety vest.
In May 2021, Audi collaborated with Applied Information and
the Fulton County School System, Qualcomm Technologies
and others, on a demo in Alpharetta, Georgia, to show how
C-V2X can connect cars with school buses to identify when
children are boarding or exiting buses, and show vehicle drivers
when they are entering active school zones.
“In the near future, we’ll show how we have fine-tuned the
technology to enable two-way communication, where the bus
driver gets information to help them decide [whether] to open
the door or not,” reveals Malhotra. “The technology will alert
them if a vehicle is not stopping after the stop sign is extended.
We’re also looking to demonstrate how ambulances can better
communicate to cars.”
To recreate those situations in a reliable and predictable manner, Audi placed barriers in certain places in the line of sight of the driver so that the only information received was through the C-V2X channel, displayed via the digital dashboard.
The trial showed that the technology can enable drivers to recognize dangerous situations much sooner than if they were driving without the C-V2X-enabled prompts. “Direct and deterministic communication ensures the alert comes much sooner – so much so, you can count on the driver to react in time.”

Public purse
To realize the true benefits of C-V2X, governments need to invest in more intelligent infrastructure, while car makers will pass the price on to consumers. “Typically, for safety-related technologies, the investment will get priced into the vehicle,” says Audi of America’s Pom Malhotra. “But we don’t expect to charge a subscription [for C-V2X features] because it is a core safety requirement.
“However, where the vehicle talks to infrastructure such as road signs, traffic cones, traffic lights – that’s where city investment comes in. The decisions made by the US government to deploy massive amounts of funding for infrastructure upgrades can really benefit this space. Specifically, intelligent transportation is part of the allocation. Many states have plans to use that money to upgrade their traffic infrastructure. Statistics show that 70% of traffic lights around the country need some form of enhancement. As those improvements happen, connectivity is part of the overall value proposition being considered.”

Dilemma zone
According to Malhotra, the company’s Traffic Light Information service first alerted Audi to the potential to improve reaction times by helping drivers with their decision making to avoid the dreaded ‘dilemma zone’.
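The dilemma zone can be expressed with textbook kinematics: a driver seeing the amber can stop if the braking distance fits before the stop line, or clear if the distance plus intersection width is covered within the amber time; the zone is where neither holds. A sketch with illustrative numbers — the reaction time, deceleration and signal timing below are made up, not Audi's or any traffic agency's values:

```python
def dilemma_zone(v_ms, amber_s=4.0, react_s=1.0, decel_ms2=3.0, width_m=20.0):
    """Return (stop_dist, clear_dist) for a driver at speed v_ms.

    A driver farther out than stop_dist can brake comfortably; a driver
    closer than clear_dist can exit the intersection before red. If
    clear_dist < stop_dist, the gap between them is the dilemma zone.
    """
    stop_dist = v_ms * react_s + v_ms**2 / (2 * decel_ms2)  # reaction + braking
    clear_dist = v_ms * amber_s - width_m                   # travel minus width
    return stop_dist, clear_dist

v = 50 * 1000 / 3600                                        # 50 km/h ≈ 13.9 m/s
stop_dist, clear_dist = dilemma_zone(v)
print(f"stop needs {stop_dist:.0f}m, can clear from within {clear_dist:.0f}m")
if clear_dist < stop_dist:
    print(f"dilemma zone: {clear_dist:.0f}m to {stop_dist:.0f}m from the line")
```

With these toy values a 50km/h driver caught between roughly 36m and 46m out can neither stop nor clear comfortably — precisely the band a countdown like TLI lets the driver avoid by coasting down early.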
“Without traffic light information, you could be half a mile away, approaching an intersection, and you can clearly see a green light, but it isn’t until you get much closer that it turns amber – leaving you with a decision to make: stop or go through,” he explains. “Traffic engineers call this the ‘dilemma zone’. But with the system, the countdown pops up in your dashboard to give you an early warning – it changes how you approach the intersection, where you most likely take your foot off the accelerator, coast and come to a stop.”

Bicyclist fatalities are on an upward trend, with an uptick of 5% from 2020 to 2021, totaling 985 fatalities (Source: CDC)

Unlike the traffic light countdown, which is …
Patent protection

The innovations behind self-driving vehicles require suitable patent protection. Dr Elliott Davies, partner and patent attorney at Wynne-Jones IP, examines the implications of the EU’s new Unified Patent Court

Interview by Anthony James

“A DECISION FROM THE GERMAN SEAT OF THE UPC, FOR EXAMPLE, WILL HAVE A BINDING EFFECT ON OTHER MEMBER …”

What is changing?
Sometime early in 2023 the new Unified Patent Court Agreement (UPCA) is expected to come into force, at which point existing national patents that have been obtained via the European patent system will instead
Reduce torque ripple

… driver experience in electric power steering systems

By Zach Nelson, segment application engineer, Allegro MicroSystems
C
ontinuous innovation in the Maximizing the driver experience permanent magnet synchronous motor (PMSM)
automotive market is inevitable. Auto Significant effort has been expended to with field-oriented control (FOC), the designer
makers are tasked with maximizing minimize unwanted by-products inherent in a can achieve very minimal torque ripple in the
the performance and efficiency of motor-assisted electric power steering system, end system, illustrated in Figure 2.1
their end systems. For that reason, such as reducing torque ripple and improving Torque ripple can be greatly improved by
electric power steering (EPS) systems have system response. Although usually driven using the level of detail the controller has about the
become popular in automotive manufacturing, a sophisticated control algorithm, they still rely positioning of the motor. The ability to detect
displacing their mechanical counterparts because they save space and reduce weight, emissions and fit complexity.
While the electrical architectures of EPS systems are similar, there are areas where designers can differentiate their designs. By selecting the right hardware and taking advantage of enhanced features, auto makers can maximize the driver experience in a compact and functionally safe design.
The electrical architecture for conventional EPS designs can be broken down into the following subsystems: EPS motor drive, steering wheel interface, DC-DC conversion, battery management, digital processing and communication (Figure 1).
There are trade-offs to be made within each subsystem, which can affect the performance and efficiency of the system. At its most basic functionality, the steering wheel interface uses a hand-wheel angle sensor and steering torque sensor to register input force and direction; the EPS motor drive uses a highly integrated gate driver to drive a torque-assist motor fixed to the column or rack to provide the steering force. How these devices are selected can affect the driver experience. A smooth steering feel calls for a responsive gate driver plus a control network to feed back high-quality data. Selecting the right hardware can be the key to creating a consistent, smooth steering experience.
Reducing the torque ripple inherent in brushless direct current (BLDC) control is the highest priority. Torque ripple is the phenomenon where the force applied by the motor to the load oscillates, and it can lead to a feeling of counter-steering or vibration in the wheel. If employing a motor position sensor, accuracy to within 1° is necessary to prevent large oscillations in the assist motor that would be felt on the column. Motor positioning and handwheel feedback can be sensed using Allegro's low-noise, low-error angle sensors (Figure 3).
Another way to reduce torque ripple in the system is to improve the accuracy and reduce the error of the current sensing mechanism. Torque applied by a motor is directly proportional to the current through the windings. The two methods most

ALLEGRO RECOMMENDED SOLUTIONS

Subsystem            | Socket                          | Part number
EPS motor drive      | BLDC motor driver               | A4918, AMT49106
EPS motor drive      | Phase current sensing           | ACS72981
EPS motor drive      | Motor position sensing          | AAS33001
EPS motor drive      | Motor phase disconnect          | A6861
Hand-wheel interface | Hand-wheel angle sensor         | A33002
Hand-wheel interface | Steering torque sensor          | A31102
DC-DC conversion     | System power conversion         | ARG81402, ARG81407
Battery management   | Battery return current sensing  | ACS72981
Tool connectivity

Integrating Ansys AVxcelerate Sensors within aSR's Simulation Framework enables efficient virtual testing and validation of sensor technology for autonomous driving
By Simon Gimpel, CTO, aSR

With multiple automotive companies racing to introduce SAE Level 3 (L3) and above autonomous driving, this change is expected to have far-reaching consequences across the entire mobility industry. With L3 and above, there is an enormous increase in the driver's comfort level, because at L3 the driver is allowed to temporarily hand over complete driving control to the vehicle and is free to perform other secondary tasks.
According to German law, secondary tasks such as checking emails or reading a newspaper while driving are permitted – provided the driver stays alert enough to assume control of the steering wheel again after being warned by the autonomous driving system. In contrast, at SAE Level 2 the driver is always responsible during the journey and must constantly monitor the assistance systems.
Predicting the future is a challenge and has a profound impact on the design of driver assistance systems, adding another layer of complexity to the simulation techniques needed to develop these advanced systems. This is easier said than done: just precisely predicting the vehicle's driving environment for the next few seconds is an enormous challenge.
As a solution, real-time Doppler evaluation of the radar sensor with the aSR driving simulator, 3D environment simulation and error-free detection of obstacles via cameras, ultrasound, radar and laser systems is gaining widespread acceptance. Sensor simulation (simulation of the 'eyes and ears' of the car) is becoming an elementary component of valid virtual testing of ADAS or autonomous driving (AD) capabilities.
Virtual testing of ADAS/AD functions requires massive changes in conventional development methods. Introducing simulation early in the design cycle, rather than waiting to introduce it later at the physical prototype stage, helps not only with system integration but also with testing the relevant vehicle components. Individual departmental simulation models must be coordinated with each other because

Below: Exemplary software architecture within the aSR Simulation Framework
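The real-time Doppler evaluation mentioned above rests on the standard two-way radar Doppler relation, f_d = 2v/λ. A minimal sketch (generic radar physics, not aSR's implementation; the 77GHz carrier is an assumed typical automotive frequency):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(radial_speed_mps: float, carrier_hz: float) -> float:
    """Two-way Doppler shift for a monostatic radar: f_d = 2*v/lambda."""
    wavelength = C / carrier_hz
    return 2.0 * radial_speed_mps / wavelength

# A target closing at 30 m/s (108 km/h) seen by a 77 GHz automotive radar
fd = doppler_shift_hz(30.0, 77e9)
print(f"{fd:.0f} Hz")  # ~15.4 kHz
```

A radar simulator has to reproduce shifts of this magnitude per target, per frame, in real time, which is what makes Doppler-accurate sensor simulation demanding.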
Multisensor driver monitoring

The RTMaps multisensor development framework offers a block-based approach that lets users easily integrate in-cabin sensors with relevant vehicle buses in their setup
By Dr-Ing. Dominik Dörr, lead product manager, dSPACE

Developing a DSM application requires synchronous evaluation of different sensors and vehicle functions, all competently handled by RTMaps

Level 3 and up automated driving requires monitoring of the driver and passengers in the vehicle cabin. Features such as driver status monitoring (DSM) will be an integral part of the European New Car Assessment Program (Euro NCAP) for passenger cars as of 2023. DSM systems are designed to detect driver distraction, fatigue, unresponsiveness and vital signs to ensure that the driver can take over the steering wheel at any time and that passengers remain in their seats. A promising approach to meeting these robustness requirements is the use of deep learning algorithms and specific training data.
For their development, the RTMaps multisensor framework offers a block-based approach that lets users easily integrate in-cabin sensors such as cameras or radar, along with relevant vehicle buses, in their setup. They can drag a wide range of sensor types and bus components from the library, connect them with their DMS application and execute them with just a few clicks. Users can request or even integrate any sensors that are not in the library by using a documented API and implementation examples.
RTMaps provides unique built-in capabilities that expose the resulting asynchronous input data streams of different types to the user's DMS algorithm in a time-correlated manner. This enables data fusion as a key prerequisite for efficient driver status detection. The development framework natively supports the Python scripting language, which is widely used for the development of AI algorithms and enables users to quickly integrate deep learning functions to meet the challenging requirements for highly robust driver monitoring functions.
Other features included in RTMaps allow users to perform validation tasks. For example, they can use data replay to feed an AI-driven DMS function with a host of real or realistic synthetic data to test its robustness, all using just one tool.

OTA validation of radar sensors
The radar sensors used for in-cabin applications operate in the V-band at 60GHz. Flexible, precise test systems are required to check whether the radar sensor reliably detects the occupants in various test scenarios. In particular, if the entire chain of effects of the radar sensor is to be considered under vehicle cabin conditions, over-the-air (OTA) testing is required.
To test radars operating in the V-band, the dSPACE Automotive Radar Test System (DARTS) equipped with the HBC-7066V converter enables
the simulation of radar targets for in-cabin
applications. The high precision of DARTS
enables users to validate functions such as
monitoring and detection of drivers and
passengers, including children in infant car
seats. The DARTS hardware even meets the
requirements for the detection of vital signs
using micro-Doppler. The convenient OTA
validation method of DARTS enables validation
of the entire sensor transmission channel. In
addition, the small size of the radar front ends
makes them ideal for convenient in-cabin use. ‹
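The time-correlated exposure of asynchronous sensor streams described above can be illustrated with a generic nearest-timestamp pairing. This is a plain-Python sketch of the idea, not the RTMaps API:

```python
import bisect

def correlate(camera, radar, max_dt=0.05):
    """Pair each camera sample with the radar sample nearest in time.

    camera/radar: lists of (timestamp_s, payload), sorted by timestamp.
    Pairs further apart than max_dt seconds are dropped.
    """
    radar_ts = [t for t, _ in radar]
    pairs = []
    for t, frame in camera:
        i = bisect.bisect_left(radar_ts, t)
        # candidate neighbours: the radar samples just before and after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(radar)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(radar_ts[k] - t))
        if abs(radar_ts[j] - t) <= max_dt:
            pairs.append((frame, radar[j][1]))
    return pairs

cam = [(0.00, "f0"), (0.033, "f1"), (0.066, "f2")]
rad = [(0.01, "r0"), (0.05, "r1"), (0.30, "r2")]
print(correlate(cam, rad))  # [('f0', 'r0'), ('f1', 'r1'), ('f2', 'r1')]
```

Pairing like this is the minimal prerequisite for the camera/radar data fusion a DMS function performs; a framework such as RTMaps handles it for arbitrarily many streams and data types.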
Datalogger for ADAS testing

VBox Automotive's VBox 3i ADAS offers outstanding RTK performance

The all-new VBox 3i ADAS – designed to make ADAS testing easier

Advances in adaptive safety and autonomous vehicle technologies are demanding that test and validation solutions meet the complex needs of test programs while remaining quick to set up and easy to use. VBox 3i ADAS meets these demands by offering outstanding RTK performance for high-accuracy position data on the test track and the open road. Intuitive setup and analysis software, capable of evaluating complex ADAS test scenarios, makes the most of available testing time and resources.
VBox 3i ADAS features a 100Hz GNSS multi-constellation, dual-frequency engine that can use the GPS, Galileo, GLONASS and BeiDou constellations. This delivers RTK accuracy even in challenging GNSS conditions. When satellite signals are obscured, VBox 3i ADAS combines wheel speed data from the vehicle's CANbus with GNSS and inertial data from a VBox IMU to maintain the accuracy of speed and position.
For the evaluation of complex ADAS scenarios, VBox 3i ADAS enables users to customize the test setup, including multitarget and multilane configurations, to comply with Euro NCAP requirements. The vehicle under test can simultaneously reference any combination of up to three moving targets, two static targets, three road lines and 99 signposts.
When a full scenario analysis is required, additional data including CANbus can be logged simultaneously. For example, the activation of light and audio sensors used in ADAS such as lane departure warning, blind spot detection and collision warning can be event marked to provide analysis of sensor activation in relation to the wider test scenario.
Test data can be logged directly in the VBox 3i ADAS for post-test analysis, or the unit can be connected to a laptop running VBox Test Suite software to view and analyze real-time results in the field. Alternatively, data can be sent live via a CAN or serial connection for integration with third-party systems.
VBox 3i ADAS is equally at home on a test track or the open road. Its RTK GNSS receiver can obtain correction data via either an on-site base station or, for open-road testing, an NTRIP modem.
There is also a growing need to complete ADAS testing and sensor validation within an environmentally controlled space, usually an indoor facility. This enables year-round testing and sensor-specific assessments including sensor flare, fog, mist and water films. VBox 3i ADAS
Positioning System (VIPS), which measures
real-time, dynamic 3D position, speed and
attitude (pitch/roll/yaw), to achieve RTK-
equivalent centimeter-level accuracy in areas
where GNSS is not available.
Dedicated to ADAS testing in any
environment, VBox 3i ADAS is set to be a
valuable addition to any test program. For further
information, visit vboxautomotive.co.uk/3iadas. ‹
CONTACT
Racelogic | inquiry no. 104
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi
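Bridging a GNSS outage with wheel speed and inertial data, as described above, amounts to dead reckoning. A simplified 2D illustration, not Racelogic's actual algorithm (sample format and rates are assumed):

```python
import math

def dead_reckon(x, y, heading_deg, samples, dt=0.01):
    """Propagate a 2D position through a GNSS outage.

    samples: list of (wheel_speed_mps, yaw_rate_dps) pairs, e.g. wheel
    speed from the CAN bus and yaw rate from an IMU, at 100 Hz (dt=0.01 s).
    """
    h = math.radians(heading_deg)
    for speed, yaw_rate_dps in samples:
        h += math.radians(yaw_rate_dps) * dt   # integrate heading
        x += speed * math.cos(h) * dt          # integrate position
        y += speed * math.sin(h) * dt
    return x, y, math.degrees(h)

# One second of straight driving at 20 m/s on a 90-degree heading
x, y, h = dead_reckon(0.0, 0.0, 90.0, [(20.0, 0.0)] * 100)
print(round(x, 3), round(y, 3), round(h, 3))  # 0.0 20.0 90.0
```

Integration error grows with outage duration, which is why the real system re-anchors to RTK GNSS as soon as satellite signals return.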
ADAS and AV embedded electronics systems are made of complex PCBs that challenge the laws of physics, chips that tick in the gigahertz range and interfaces that carry terabytes of data at blazing speeds through a vehicle's nervous system.
This makes non-intrusive datalogging of test and diagnostic data, as well as proper ECU stimulation in HIL research, extremely challenging. To ensure the coherence of the test and validation process, test tools are permitted only minimal or no impact on the regular behavior of the systems under test.
In most cases these tools use PC platforms and customized software, which quickly show deficiencies in real-time performance, a lack of appropriate interfaces and a need for additional electronic boards to complete system integration. Such limitations can ultimately be overcome by fully customized hardware platforms, which often imply customization at the chip level. With production costs of customized chips reaching tens of millions of dollars, that hurdle seems too high.
Luckily there is a solution available at an immeasurably lower cost, and it comes in the form of programmable logic FPGA and SoC chips. FPGAs can be designed for specific functionality and manufactured within weeks or, at the latest, months.
Programmable FPGA chips form the basis of Xylon's logiRECORDER, a single-box device for datalogging and HIL simulations. The logiRECORDER integrates all automotive interfaces and, thanks to its ultimate hardware configurability, it can quickly be tuned for specific requirements. To answer some of the latest field requirements, Xylon has just introduced a brand-new modular CPU acceleration card that enables previously unseen datalogging and HIL features, best described through an example test configuration.
A forward-looking camera generates three GMSL2 video streams that route directly through the logiRECORDER and connect to an ECU with minimal latency. The ECU controls the camera through a back-channel I2C bus tunneled through the logger.
Camera metadata and TAPI are delivered via Ethernet and analyzed in real time by the CPU card. FPGAs implement Ethernet TCP offloading engines (TOEs) and the fastest possible Ethernet TAP, which make external routers unnecessary.
A CANbus connects additional sensors, such as radar and GPS, and carries various diagnostic data, while the acceleration card runs the Universal Measurement and Calibration Protocol (XCP) to monitor and configure the ECU. Reference cameras for monitoring are connected via the GigE Vision protocol supported by the card. FPGA accelerators eliminate timestamp jitter in compressed video streams and enable extraction of individual video frames.
FPGA TOEs can be configured in different ways, for example to demultiplex a 10GbE data link carrying all sensory data from a driving computer and store the data from each sensor in a separate MDF4 or ROS session.
A practically identical system can be used for a HIL simulation setup. The logiRECORDER's video routing, paired with I2C tunneling, enables the ECU to run production firmware and communicate with cameras without any stalling resulting from unanswered commands. At the same time, the logiRECORDER can inject stored and simulated videos instead of videos from cameras. FPGA accelerators enable real-time video manipulations, such as merging real camera data with recordings, including fully synthetic video or camera error injections. Via the XCP protocol, the acceleration card can load calibration data into the ECU and monitor its internal states. ‹

An example system configuration for datalogging on the road and HIL testing in a lab. All sensors and ECUs are interconnected through a single logiRECORDER unit that enables previously unseen real-time data manipulation, ensuring the coherence of the test and validation process

CONTACT
Xylon | inquiry no. 105
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi
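The per-sensor demultiplexing described above can be sketched generically. The record format and names here are illustrative stand-ins, not Xylon's actual data layout:

```python
from collections import defaultdict

def demultiplex(records):
    """Split a combined sensor stream into one session per sensor.

    records: iterable of (sensor_id, timestamp, payload) tuples, standing
    in for packets demultiplexed from a single 10GbE link. The grouping
    mirrors storing each sensor in its own MDF4/ROS session.
    """
    sessions = defaultdict(list)
    for sensor_id, ts, payload in records:
        sessions[sensor_id].append((ts, payload))
    return dict(sessions)

stream = [("cam0", 0.00, b"frame"), ("radar", 0.01, b"det"),
          ("cam0", 0.03, b"frame"), ("gps", 0.05, b"fix")]
per_sensor = demultiplex(stream)
print(sorted(per_sensor))       # ['cam0', 'gps', 'radar']
print(len(per_sensor["cam0"]))  # 2
```

Doing this in an FPGA TOE rather than in software is what keeps the routing non-intrusive at 10GbE line rate.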
Applied Intuition
Test cases from real-world log data

Log data management is one of the most important tasks every ADAS and AV program needs to master. Test fleets collect on average 4TB of log data per vehicle per day. Production fleets (vehicles purchased by individual consumers) can generate millions of events daily. This firehose of data has enormous potential to power an autonomy program's development efforts.

What is log data?
In ADAS and AV development, log data is any collected real-world data corresponding to the autonomous task at hand. It ranges from raw sensor inputs to wheel actuation commands. All log files go through a complex lifecycle. First, an autonomous system collects the log file. Next, data processing pipelines distribute the file, and different teams explore and use it according to their needs. Finally, the log file lands in long-term storage.

Log-based test-case creation
Log data powers various workflows for different teams within ADAS and AV development. One of these workflows is log-based test-case creation. Creating test cases from real-world data is an effective way to resolve long-tail issues found during real-world testing and protect the stack against future regressions. It helps perception, localization and motion planning teams improve their respective modules. The workflow involves the following steps: creating a test case to reproduce the issue, making a code change to resolve it, using the test case to confirm the issue's resolution, and finally adding the test case to a regression suite to ensure the issue does not reappear.

Reproducing an issue
There are two ways to create test cases from a log: scenario extraction and log re-simulation. Scenario extraction creates a synthetic test with actor behaviors sampled from perception outputs in the log. Log re-simulation replays the original logged data to the autonomy stack without any synthetic signals.
Both methods have strengths and weaknesses. With scenario extraction, the extracted actor behaviors are typically robust to stack changes and are portable between programs (for example, L2 and L4 autonomy programs within the same organization). Log re-simulation has higher fidelity and can 'losslessly' recreate the exact timing and content of signals sent to the autonomy stack. Later-stage autonomy programs typically create the majority of their simulated miles from logs using re-simulation. Scenario extraction usually suffices for simple test cases, but log re-simulation may be more affordable in the case of perception or object detection issues. When reproducing long-tail issues caused by noise, latency or hardware, re-simulation is the only available choice.

Resolving the issue
Once autonomy programs have reproduced an issue by creating a test case and receiving a failing result, they can resolve the issue locally by using real data from the log to improve their autonomy stack. After reproducing and resolving the issue locally, they can add the created test case to a regression test suite that regularly executes comprehensive tests in continuous integration (CI).

How RideFlux uses log-based test-case creation
The self-driving technology provider RideFlux is using log-based test-case creation to power its AV development efforts. The company operates three commercial robotaxi services on Jeju Island in South Korea. Using re-simulation, the RideFlux team can debug on-road events with the autonomy stack in the loop, test different versions of the stack on past drives, and catch regressions before they are deployed to the vehicle. ‹

Log-based test-case creation enables autonomy programs to use real-world data to improve their autonomy stack

CONTACT
Applied Intuition | inquiry no. 106
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi
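The regression-suite step of the workflow above can be sketched generically. The stack and test-case structures below are hypothetical stand-ins to show the shape of the loop, not Applied Intuition's API:

```python
def run_regression_suite(stack, test_cases):
    """Replay each logged test case through the stack and collect failures.

    stack: callable mapping a logged input sequence to a result;
    test_cases: list of (name, logged_inputs, check), where check is a
    predicate on the stack's output. In a real pipeline this would run
    in CI against recorded drive logs.
    """
    failures = []
    for name, logged_inputs, check in test_cases:
        if not check(stack(logged_inputs)):
            failures.append(name)
    return failures

# Toy stack: plans a stop if any logged frame contains a pedestrian
stack = lambda frames: "stop" if any("pedestrian" in f for f in frames) else "go"
cases = [
    ("ped_crossing_log_0042", ["clear", "pedestrian"], lambda r: r == "stop"),
    ("empty_road_log_0007",   ["clear", "clear"],      lambda r: r == "go"),
]
print(run_regression_suite(stack, cases))  # [] -> no regressions
```

A failing name in the returned list is exactly the "failing result" the article describes: the signal to fix the stack locally and re-run before merging.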
Voyage of discovery

Above: Liquid cooling for GPU and CPU
Below: From SIL to HIL to VIL

To react as quickly as possible in extreme situations, modern vehicles with a wide range of driver assistance systems require the appropriate hardware solution in the background. Advanced driver assistance systems (ADAS) make a major contribution to both safety and a comfortable driving experience. To ensure that these solutions work accurately, it is essential to provide OEMs with high-performance data acquisition and processing platforms in the development and testing phases. These platforms must enable constant data acquisition with high bandwidth and high processing performance for evaluation of parallel data streams in real time, as well as providing high computational power for AI applications.

Digital platform on wheels
Due to an increasing number of sensors, actuators and modules inside and outside the vehicle that communicate with the ECU, vehicles are increasingly becoming digital platforms on wheels. With the current shift from SIL to HIL and from HIL to VIL, as well as the validation of ADAS and AD developments, data sets need to be as close to reality as possible.
During the car development and testing phase, numerous high-resolution sensors are needed to monitor and record the driving environment. In addition to cameras, the interaction of sensor types such as lidar, radar and infrared detection is essential. The challenges in vehicle development are therefore signal conditioning and processing, and especially sensor fusion.
For example, if a single camera recording at 8MP is considered for a car with SAE Level 3 and 25 modules (radar, lidar, cameras), it can easily require a transmission rate of 500MB/s. As video sensors usually have the highest raw data rate, we can extrapolate the 500MB/s to our 25 modules and arrive at up to a 12.5GB/s pure write rate for raw data. At this rate we accumulate up to 360TB of raw data in an eight-hour working day.
Recording, storing and evaluating this amount of measurement data under real driving conditions in a vehicle is a major challenge for automotive manufacturers. Consequently, ADAS/AD development requires a rugged system that provides enormous computing power, high write rates and huge storage capacity.
Due to the previously mentioned numerous sensors and the communication between the control units, large amounts of data are generated at once, which are recorded for simulation, validation and optimization of the systems. Networks for communication protocols such as CAN, FlexRay and XCP interfaces, as well as expansion cards for real-time operating systems, are also essential. Event measurement data must be captured in real time and transferred from the vehicle to the evaluation system after a test drive. This requires high write rates and a robust data carrier for fast, flexible exchange, such as the InoNet QuickTray.

GPU performance for AI under extreme conditions
High CPU and GPU performance is not only required for compressing video sequences. To create or simulate realistic traffic scenarios with object classification and AI-trained models, exceptionally high processor performance is required for algorithm computation. This can only be achieved by using powerful CPUs in combination with high-performance graphics cards bundled into a co-processor. To ensure performance even under extreme temperatures, the datalogger requires not only industrial components but also a special heat dissipation and cooling system. InoNet's Mayflower-B17-LiQuid addresses all these challenges by providing constant high performance with a dedicated liquid cooling system. ‹

CONTACT
InoNet | inquiry no. 107
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi
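The article's data-rate arithmetic can be checked directly; a minimal sketch using its own round figures (decimal units, 1GB = 1,000MB):

```python
def daily_raw_data_tb(per_sensor_mb_s=500, sensors=25, hours=8):
    """Reproduce the article's estimate: 25 modules at ~500 MB/s each.

    Returns terabytes of raw data accumulated per working day.
    """
    total_gb_s = per_sensor_mb_s * sensors / 1000   # 12.5 GB/s aggregate
    return total_gb_s * hours * 3600 / 1000         # TB per working day

print(daily_raw_data_tb())  # 360.0
```

The 360TB/day figure is the driver for the high write rates and swappable storage (such as the QuickTray) discussed above.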
Child deaths caused by heatstroke can be the tragic result of leaving children unattended in a parked car, particularly when the vehicle is exposed to the sun. Existing and future technical solutions can help prevent these fatalities by offering different levels of warning, beginning with an initial warning to the vehicle user, moving to more persistent, escalating warnings and, as a final step, interventions such as informing third parties if the vehicle user has ignored the previous warnings and critical temperatures are detected.
Consumer test organizations will reward vehicles that offer such solutions (for example, Euro NCAP in its 2023 Child Presence Detection Test and Assessment Protocol) to boost the integration of these technologies in vehicle fleets worldwide. For the ratings it is essential to have reproducible test environments, as it is obviously not possible to use real children for the test setup. Therefore, there is a strong need for test tools that can represent children of different ages.
As a market leader in advanced testing technologies for active vehicle safety, 4activeSystems has long experience in developing dummy objects such as pedestrians, bicyclists, powered two-wheelers and car targets to test ADAS systems outside the vehicle. The scope of the development is always the same: providing industry-accepted, globally harmonized test tools with very high similarity to real objects regarding the technologies used for the intended functionalities. This approach was also used when starting an open consortium project, supported by industry partners and Euro NCAP in 2021, aiming to develop human surrogates that can be used to develop and test child presence detection systems inside the vehicle.
Out of the project came the 4activeOD dummies, which start from newborns as a particularly vulnerable group and go up to one, three and six year olds. The dummies have a realistic response to technologies such as camera systems, NIR systems, radar systems and wi-fi sensing. This is achieved by using representative materials and implementing realistic movement of different areas of the dummy. For example, the most challenging object to detect is a sleeping baby in a child seat covered by a blanket, where only very small movements of the chest and abdomen are produced through respiration. In such cases, realistic breathing patterns can be activated by a user-friendly software interface, while other movements (limbs, head movement, etc) can be triggered individually. These 'surrogates' are very robust and have a modular design, so defective parts can be exchanged quickly.
'Help those who cannot help themselves' – this perfectly describes the motivation behind 4activeSystems' decision to develop products that will help prevent any harm that might befall children left behind in vehicles. ‹

A 4activeOD dummy, ready for child presence detection testing

CONTACT
4activeSystems | inquiry no. 108
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi
INDEX TO ADVERTISERS
4activeSystems GmbH — 18
ADAS & Autonomous Vehicle Technology Expo California 2023 — Inside back cover
ADAS & Autonomous Vehicle Technology Expo Stuttgart 2023 — 11, 12, 13
Allegro MicroSystems — 21
Applied Intuition — Inside front cover
ASR Advanced Simulated Reality — 56, 57
dSPACE GmbH — Outside back cover
Gentex Corporation — 3
InoNet Computer GmbH — 18
Mechanical Simulation — 27
National Instruments Corp — 45
Racelogic — 33
Sibros — 39
Wynne-Jones IP — 7
ADAS & Autonomous Vehicle International January 2023
Hemant Sikaria
CEO and co-founder of Sibros

AVI catches up with Hemant Sikaria, CEO and co-founder of Sibros, which provides a vehicle-to-cloud management platform for orchestrating smart updates, data collection and fleet analytics on any vehicle or fleet, across the product lifecycle
By Anthony James

Career in brief
Before Sibros, Sikaria spent five and a half years at Tesla, joining the team as an early engineer. At Tesla, he contributed to the design, implementation and deployment of the very first large-scale OTA software update system. This system currently handles updates for the entire Tesla fleet and has completed millions of updates so far. Prior to exiting Tesla, Sikaria managed the body and chassis firmware teams. As part of his role, he oversaw the design, implementation and integration of systems such as the Model X falcon-wing doors, air suspension, seat controls, self-presenting door handles, key fob, security and vehicle authentication.

Under Hemant Sikaria's direction, Silicon Valley-based Sibros continues to unite the complexities of embedded vehicle software and data with cloud-native technologies. Sibros powers the connected vehicle ecosystem with its Deep Connected Platform (DCP) for full vehicle OTA software updates, data collection and diagnostics in one vertically integrated system. DCP supports any vehicle architecture – from ICE, hybrid and EV to fuel cell – while also meeting rigorous safety, security and compliance standards.
By combining powerful automotive software and data management tools in one platform, Sibros empowers OEMs to realize hundreds of connected vehicle use cases, spanning fleet management, predictive maintenance, data monetization, paid feature upgrades and beyond. As CEO and co-founder, Sikaria has played a pivotal role in setting the gold standard for automotive software quality and building a world-class team, while ensuring the happiness of his staff, customers and partners.

How did you get into the world of connected vehicle software systems and why did you start Sibros?
I've always wanted to solve problems. My involvement in connected vehicles started at Tesla. I worked on the software management systems there, when the company was in its earlier stages. Back then I didn't have a Tesla and my family members had cars from other manufacturers. Together, we experienced three recalls within around an 18- to 24-month period. It was then that I thought, "Why, if we created a robust OTA solution at Tesla, have other manufacturers not sorted this yet?" There was an opportunity for an agnostic third party to solve the problem. So, we did. That was back in 2018, with me and my co-founder working in a shared office, when Sibros as you know it now was founded. We're now 140 people in four countries.

Elevator-pitch time. What do you offer?
Sibros offers integrated connected vehicle management that gives auto makers everything required to realize any use case for any vehicle type. From providing software updates for two-wheelers to haulage, we also deliver new connected apps and services to address software/firmware defects and critical updates, as well as all industry compliances for safety, cybersecurity and data protection – entirely over the air – at a programmatic scale.

You just announced your partnership with e.GO at CES. How many other OEMs do you work for?
e.GO is a really good example of how Sibros can propel smaller mobility startups to the next level, and it was great to showcase e.GO at CES. Our solution is an out-of-the-box system that plugs and plays with the original telematics hardware in vehicles. If you [as a manufacturer] have a telematics unit already installed, we can use that unit to provide a full and robust digital ecosystem for orchestrating scalable updates, data collection and remote commands, based on intelligent vehicle data and software twins. This includes a cloud-native back-end portal, and in-vehicle firmware that sits on the telematics hardware in the vehicle. e.GO is an exciting partnership, like our work with Sono Motors, Volta Trucks and Bajaj Auto two-wheelers in India. We deploy the exact same product on all those client vehicles – even tractors, earthmovers and construction equipment – showing how one system can enable a range of different vehicles.
The names I mention are public. As with most companies in our space, acting as a white-label solution means we have a huge majority of clients that we can't discuss due to customer confidentiality.

Care to share any future trends that you see on the horizon?
If I take a critical but realistic look at humanity, and the way we move and operate, everyone is always connected and on the go, and aware of our impact on the environment. The automotive industry must adopt technology enablers that make our mobility experiences seamlessly connected, safer, more efficient and sustainable so humanity can lead happier and more productive lives. ‹

For more information, visit www.sibros.com
GET YOUR FREE EXHIBITION ENTRY PASS – SCAN QR CODE TO REGISTER NOW!
www.adas-avtexpo.com/california
#AVTExpoCA
Get involved online!

YOUR PARTNER IN SIMULATION AND VALIDATION
Ready, set, done.
Preparation, simulation and validation made fast and easy.
SIMPHERA. Enter simpliCity.