
Lecture Notes in Mobility

Jörg Dubbert · Beate Müller · Gereon Meyer, Editors

Advanced
Microsystems
for Automotive
Applications 2018
Smart Systems for Clean,
Safe and Shared Road Vehicles
Lecture Notes in Mobility

Series editor
Gereon Meyer, Berlin, Germany

More information about this series at http://www.springer.com/series/11573
Editors

Jörg Dubbert
VDI/VDE Innovation + Technik GmbH
Berlin, Germany

Beate Müller
VDI/VDE Innovation + Technik GmbH
Berlin, Germany

Gereon Meyer
VDI/VDE Innovation + Technik GmbH
Berlin, Germany

ISSN 2196-5544 ISSN 2196-5552 (electronic)


Lecture Notes in Mobility
ISBN 978-3-319-99761-2 ISBN 978-3-319-99762-9 (eBook)
https://doi.org/10.1007/978-3-319-99762-9

Library of Congress Control Number: 2018952245

© Springer Nature Switzerland AG 2019


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

Self-driving and electric on-demand taxis and shuttle buses are widely considered
to be the optimal means of future urban transport. They seem to provide solutions for
the most pressing current issues in the mobility sector, such as road fatalities,
climate change, and pollution, as well as land use for transport. While those
vehicles are first being tested in controlled environments around the world today,
they may rapidly reach maturity due to the disruptive character of the underlying
innovations: According to the roadmaps of the European Technology Platforms in
the automotive domain, advancements in smart sensors, control and communication
systems will enable the implementation of high-degree connected and automated
driving (i.e., SAE levels 3 and above) on the motorway and in urban environments
in the 2020–25 time frame. This coincides with the projected beginning of a broad
market introduction of electric vehicles: Due to fast progress in battery and
powertrain systems’ performance in combination with economies of scale, an up to ten
percent market share of such vehicles has been predicted for 2020, quickly rising to
40 percent by 2025.
The two technical fields of automation and electrification are highly interlinked
due to similarities in (a) the electronics and data architecture of control, (b) the
cooperation in energy matters, and (c) the systemic character of the operating
environment. In an ideal world, a self-driving car, for example, would no longer
require any passive safety systems, as it would be safe per se. Consequently, such a
vehicle would be much lighter and, if electrified, much more energy-efficient, thus
providing a longer driving range. It should be noted, however, that due to its higher
level of convenience, a self-driving car may be used more intensively. This and the
increase in computing power and sensor equipment could lead to the reverse effect
of using more energy, counteracting the advantages of electric vehicles in terms of
energy savings and climate protection. A joint study by a number of National
Laboratories in the USA recently found that these two opposite effects counterbalance
each other: While the energy consumption per km may decline to one third, the
overall energy consumption may increase by a factor of three.
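The interplay of these two factors can be made concrete with a line of arithmetic. This is our own illustration using the factors quoted above; the implied rise in distance travelled is an inference, not a figure from the cited study:

```python
# Total energy = energy per km x distance travelled. Using the quoted factors
# (per-km energy down to one third, total energy up threefold), the implied
# growth in distance travelled follows directly. Illustrative only.
per_km_factor = 1.0 / 3.0   # per-km energy of the automated EV vs. baseline
total_factor = 3.0          # overall energy consumption vs. baseline

distance_factor = total_factor / per_km_factor  # distance travelled vs. baseline
print(f"Implied increase in distance travelled: {distance_factor:.0f}x")
```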


True synergies of electric, connected, and automated driving may be unlocked in
combination with shared mobility, though. Car sharing as a systemic mobility
service offering would reduce the total cost of ownership of automated and electric
vehicles, facilitate the management of battery charging, and reduce the number of
parked cars. And ride-sharing, provided, e.g., by self-driving and electric on-demand
shuttles, would in addition be highly cost-efficient and customer-oriented, and it
could potentially reduce the overall number of cars on the streets. The exploitation of
such synergies may accelerate innovation at both the enabling-technology and
application levels, which would be essential for fully realizing the benefits of connected,
automated, and electrified vehicles.
The International Forum on Advanced Microsystems for Automotive
Applications (AMAA) has been covering the progress in connected, automated, and
electrified vehicles and the enabling technologies for many years. In view of the
above-mentioned considerations, the topic of the 22nd edition of the AMAA, held in
Berlin on September 11–12, 2018, was “Smart Systems for Clean, Safe and Shared
Road Vehicles.” The 2018 AMAA conference also marked the transition from one
supporting Coordination and Support Action to the next: from “Safe and Connected
Automation in Road Transport” (SCOUT) of the connectedautomateddriving.eu
initiative to “Coherent Support for Mobility.E Strategy” (COSMOS) of the ECSEL
Joint Undertaking. The AMAA organizers, VDI/VDE
Innovation + Technik GmbH together with the European Technology Platform on
Smart Systems Integration (EPoSS), gratefully acknowledge the continuous funding
of their efforts by the European Union.
The chapters of this volume of the Lecture Notes in Mobility book series by
Springer have been authored by engineers and researchers who attended the AMAA
2018 conference to report on their recent research and innovation activities. The
papers presented were selected by the members of the AMAA Steering
Committee and are also available through academic libraries worldwide. In our
roles as the organizers and the chairman of the AMAA 2018, we would like to
express our sincere gratitude to all the authors for their excellent contributions
to the conference and to this book. We are also deeply grateful for the
tremendous support we received from our colleagues at VDI/VDE-IT,
particularly from Ms. Monika Curto Fuentes of the AMAA office.

Berlin, July 2018

Jörg Dubbert
Beate Müller
Gereon Meyer
Supporters and Organizers

Funding Authority

European Commission

Supporting Organisations

European Council for Automotive R&D (EUCAR)


European Association of Automotive Suppliers (CLEPA)
Strategy Board Automobile Future (eNOVA)
Advanced Driver Assistance Systems in Europe (ADASE)
Zentralverband Elektrotechnik- und Elektronikindustrie e.V. (ZVEI)
Mikrosystemtechnik Baden-Württemberg e.V.

Organisers

European Technology Platform on Smart Systems Integration (EPoSS)


VDI/VDE Innovation + Technik GmbH

Steering Committee

Mike Babala TRW Automotive, Livonia MI, USA


Serge Boverie Continental AG, Toulouse, France
Geoff Callow Technical & Engineering Consulting,
London, UK
Kay Fürstenberg Sick AG, Hamburg, Germany
Wolfgang Gessner VDI/VDE-IT, Berlin, Germany
Roger Grace Roger Grace Associates, Naples FL, USA


Riccardo Groppo Ideas & Motion, Cavallermaggiore, Italy


Jochen Langheim ST Microelectronics, Paris, France
Steffen Müller NXP Semiconductors, Hamburg, Germany
Roland Müller-Fiedler Robert Bosch GmbH, Stuttgart, Germany
Andy Noble Ricardo Consulting Engineers Ltd.,
Shoreham-by-Sea, UK
Pietro Perlo IFEVS, Sommariva del Bosco, Italy
Christian Rousseau Renault SA, Guyancourt, France
Jürgen Valldorf VDI/VDE-IT, Berlin, Germany
David Ward MIRA Ltd., Nuneaton, UK

Conference Chair

Gereon Meyer VDI/VDE-IT, Berlin, Germany


Contents

Smart Sensors
All-Weather Vision for Automotive Safety: Which Spectral Band? . . . . 3
Nicolas Pinchon, Olivier Cassignol, Adrien Nicolas,
Frédéric Bernardin, Patrick Leduc, Jean-Philippe Tarel,
Roland Brémond, Emmanuel Bercier, and Johann Brunet
Machine Learning Based Automatic Extrinsic Calibration
of an Onboard Monocular Camera for Driving Assistance
Applications on Smart Mobile Devices . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Razvan Itu and Radu Danescu

Driver Assistance and Vehicle Automation


Towards Collaborative Perception for Automated Vehicles
in Heterogeneous Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Saifullah Khan, Franz Andert, Nicolai Wojke, Julian Schindler,
Alejandro Correa, and Anton Wijbenga
Real Time Recognition of Non-driving Related Tasks
in the Context of Highly Automated Driving . . . . . . . . . . . . . . . . . . . . . 43
Timo Pech, Stephan Enhuber, Bernhard Wandtner,
Gerald Schmidt, and Gerd Wanielik
Affordable and Safe High Performance Vehicle Computers
with Ultra-Fast On-Board Ethernet for Automated Driving . . . . . . . . . . 56
Martin Hager, Przemyslaw Gromala, Bernhard Wunderle,
and Sven Rzepka
The Disrupters: The First to Market Automation
Technologies to Revolutionize Mobility . . . . . . . . . . . . . . . . . . . . . . . . . 69
Adriano Alessandrini


TrustVehicle – Improved Trustworthiness and Weather-Independence
of Conditionally Automated Vehicles in Mixed Traffic Scenarios . . . . . . . . . 75
Pamela Innerwinkler, Ahu Ece Hartavi Karci, Mikko Tarkiainen,
Micaela Troglia, Emrah Kinav, Berzah Ozan, Eren Aydemir,
Cihangir Derse, Georg Stettinger, Daniel Watzenig, Sami Sahimäki,
Norbert Druml, Caterina Nahler, Steffen Metzner, Sajin Gopi,
Philipp Clement, Georg Macher, Johan Zaya, Riccardo Groppo,
and Samia Ahiad
Adaptation Layer Based Hybrid Communication Architecture:
Practical Approach in ADAS&ME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Prachi Mittal, Emily Bourne, and Tim Leinmueller
Assistance and Mitigation Strategies in Case of Impaired
Motorcycle Riders: The ADAS&ME Case Study . . . . . . . . . . . . . . . . . . 97
Luca Zanovello, Stella Nikolaou, Ioannis Symeonidis,
and Marco Manuzzi

Data, Clouds and Machine Learning


Towards a Privacy-Preserving Way of Vehicle Data
Sharing – A Case for Blockchain Technology? . . . . . . . . . . . . . . . . . . . 111
Christian Kaiser, Marco Steger, Ali Dorri, Andreas Festl,
Alexander Stocker, Michael Fellmann, and Salil Kanhere
Challenges and Opportunities of Artificial Intelligence
for Automated Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Benjamin Wilsch, Hala Elrofai, and Edgar Krune

Electric Vehicles
Light Electric Vehicle Design Tailored to Human Needs . . . . . . . . . . . . 139
Diana Trojaniello, Alessia Cristiano, Alexander Otto, Elvir Kahrimanovic,
Aldo Sorniotti, Davide Dalmasso, Gorazd Lampic, Paolo Perelli,
Alberto Sanna, Reiner John, and Riccardo Groppo
DCCS-ECU an Innovative Control and Energy Management
Module for EV and HEV Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Bartłomiej Kras, Paweł Irzmański, and Maciej Kwiatkowski
Connectivity Design Considerations for a Dedicated Shared
Mobility Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Jörg Kottig, Dirk Macke, and Michael Pielen

Innovation Strategy
Trends and Challenges of the New Mobility Society . . . . . . . . . . . . . . . 175
Sakuto Goda
Roadmap for Accelerated Innovation in Level 4/5 Connected
and Automated Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Jörg Dubbert, Benjamin Wilsch, Carolin Zachäus, and Gereon Meyer
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Smart Sensors
All-Weather Vision for Automotive Safety:
Which Spectral Band?

Nicolas Pinchon1, Olivier Cassignol2, Adrien Nicolas2,
Frédéric Bernardin3, Patrick Leduc4, Jean-Philippe Tarel5,
Roland Brémond5, Emmanuel Bercier6 (corresponding author), and Johann Brunet7
1 VALEO Comfort and Driving Assistance, 34 rue Saint André, 93012 Bobigny Cedex, France
  nicolas.pinchon@valeo.com
2 SAFRAN Electronics & Defense, 21 Avenue du Gros chêne, 95610 Éragny Sur Oise, France
  {olivier.cassignol,adrien.nicolas}@safrangroup.com
3 CEREMA, 8-10 rue Bernard Palissy, 63017 Clermont-Ferrand Cedex 2, France
  frederic.bernardin@cerema.fr
4 CEA/Léti, 17 rue des Martyrs, 38054 Grenoble Cedex 9, France
  patrick.leduc@cea.fr
5 University Paris-Est, IFSTTAR, Cosys/Lepsis, 14-20 Boulevard Newton, 77420 Champs-sur-Marne, France
  {Jean-Philippe.Tarel,Roland.Bremond}@ifsttar.fr
6 ULIS, 364 Route de Valence, Actipole – CS 10027, 38113 Veurey-Voroize, France
  e.bercier@ulis-ir.com
7 NEXYAD, 95 Rue Pereire, 78100 Saint-Germain-en-Laye, France
  jbrunet@nexyad.net

Abstract. The AWARE (All Weather All Roads Enhanced vision) French
publicly funded project aims at the development of a low-cost sensor meeting
automotive and aviation requirements and enabling vision in all poor-visibility
conditions, such as night, fog, rain and snow.
In order to identify the technologies providing the best all-weather vision, we
evaluated the relevance of four different spectral bands: Visible RGB, Near-
Infrared (NIR), Short-Wave Infrared (SWIR) and Long-Wave Infrared (LWIR).
Two test campaigns were carried out, in outdoor natural conditions and in an
artificial fog tunnel, with the four cameras recording simultaneously.
This paper presents the detailed results of this comparative study, focusing on
pedestrian, vehicle, traffic sign and lane detection.

Keywords: Vision · ADAS · Visibility · Adverse weather · Fog ·
Infrared · Thermal

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 3–15, 2019.
https://doi.org/10.1007/978-3-319-99762-9_1

1 Introduction

In the automotive industry, New Car Assessment Programs (NCAP) are increasingly
pushing car manufacturers to improve performances of Advanced Driver Assistance
Systems (ADAS), and especially autonomous emergency braking for vulnerable road
users (VRU). For instance, the 2018 Euro NCAP roadmap is moving towards
pedestrian and pedal-cyclist detection in day and night conditions.
This trend matches accidentology figures, like those provided by the French Road
Safety Observatory (Table 1).

Table 1. 2014 French accidentology data in adverse conditions [1]


Condition         Injury casualties   Fatalities
Night             32%                 41%
Wet road          20%                 20%
Adverse weather   21%                 23%

In the longer term, after automated parking and highway driving, all weather and
city driving will be the main technical challenge in the automated driving roadmap.
Current ADAS sensors such as visible cameras or lidars meet the functional
requirements of VRU and obstacle detection in normal conditions (day or night).
However, these technologies show limited performance in adverse weather conditions
such as fog or rain.
The automotive industry is thus facing the new challenge of detecting the vehicle
environment in all conditions, and especially in poor visibility conditions such as
night, fog, rain and snow.
This topic has been addressed in the framework of the French publicly funded
AWARE project, which aims at the development of a sensor enabling vision in all
poor visibility conditions. This paper presents an experimental comparative study of
four different spectral bands: Visible RGB, Near-Infrared (NIR), Short-Wave Infrared
(SWIR) and Long-Wave Infrared (LWIR). Sensors and field tests are described in
Sects. 2 and 3. Experimental results are detailed in Sect. 4, focusing on pedestrian,
vehicle, traffic sign and lane detection.

2 Sensors

In this project, we focus only on camera technologies, not on distance measurement
systems like LIDARs or RADARs. It is, however, well known that both technologies
are complementary and necessary to bring redundancy for improving the detection
system’s reliability and accuracy [2].
Four cameras were tested during the project. Table 2 shows their characteristics.

Table 2. Camera characteristics

Camera          Sensor                              Spectral band   Optics
Visible RGB     CMOS – SXGA (1280 × 966),           0.4–0.65 µm     HFOV = 54°, VFOV = 40°,
                3 × 8 bits, pitch 4.2 µm                            F-number = 2
Extended NIR    CMOS – SXGA (1280 × 1024),          0.4–1 µm        HFOV = 39°, VFOV = 31°,
                10 bits, pitch 5.3 µm                               F-number = 2.9
Extended SWIR   InGaAs – VGA (640 × 512),           0.6–1.7 µm      HFOV = 39°, VFOV = 31°,
                14 bits, pitch 15 µm                                F-number = 1.8
LWIR            Microbolometer – VGA (640 × 482),   8–12 µm         HFOV = 44°, VFOV = 33°,
                14 bits, pitch 17 µm                                F-number = 1.2
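The field-of-view and resolution figures in Table 2 fix each camera's angular resolution, i.e. the instantaneous field of view (IFOV) of a single pixel, a basic figure of merit for long-range detection. A small sketch using the table's numbers (the 0.5 m target width is our own illustrative choice, not a value from the study):

```python
import math

# IFOV per pixel from Table 2 (horizontal FOV / horizontal pixel count),
# and the range at which a 0.5 m wide target still spans one pixel.
cameras = {
    "Visible RGB":   (54, 1280),
    "Extended NIR":  (39, 1280),
    "Extended SWIR": (39, 640),
    "LWIR":          (44, 640),
}

for name, (hfov_deg, h_pixels) in cameras.items():
    ifov_rad = math.radians(hfov_deg) / h_pixels
    one_pixel_range_m = 0.5 / ifov_rad  # small-angle approximation
    print(f"{name:13s}: IFOV = {ifov_rad * 1e3:.2f} mrad, "
          f"0.5 m target spans 1 pixel at {one_pixel_range_m:.0f} m")
```

The VGA sensors (SWIR, LWIR) pay for their lower pixel count with a coarser IFOV, which the fog-piercing advantage of their spectral bands has to outweigh.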

The visible RGB CMOS camera is used here as a reference for the test.
The extended NIR camera uses monochrome CMOS photodiodes with a cut-off
wavelength close to 1 µm. It detects the reflected visible and NIR light from the scene.
It thus requires illumination by the sun, the moon, night glow, or an illuminator
positioned on the vehicle.
The extended SWIR camera is based on InGaAs III-V material and covers wavelengths
from 0.6 µm, red to the human eye, up to 1.7 µm in the SWIR infrared band. The SWIR
spectral band is typically used for active (reflective) vision in very dark conditions,
with good contrast, as many materials are more reflective in the SWIR band than in
the visible.
The LWIR sensor is an array of microbolometers. It detects the thermal radiation in the
spectral band extending from 8 µm to 14 µm. Any object emits radiation that
depends on its temperature. For a human or an animal at ambient temperature, the
emission maximum corresponds to a wavelength close to 10 µm. LWIR is used for
the detection of a temperature contrast and does not require an illuminator.
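The ~10 µm emission peak quoted for a body at ambient temperature follows from Wien's displacement law, λ_max = b/T with b ≈ 2898 µm·K; a quick check (our own, not part of the paper):

```python
# Wien's displacement law: the blackbody emission peak sits at b / T.
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in µm·K

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength (µm) of peak blackbody emission at the given temperature."""
    return WIEN_B_UM_K / temperature_k

# Human skin at roughly 310 K peaks near 9.3 µm, well inside the LWIR band.
print(f"{peak_wavelength_um(310.0):.1f} um")
```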
Mid-Wave Infrared (MWIR) was not included in the study for reasons of cost and
practicality, due to the cooling system required by its detectors.

3 Field Tests

3.1 Outdoor Test Campaign


The four cameras were installed for one month in 2015 along the French motorway
A75/E11 at the site named La Fageole. A weather station located near the cameras was
equipped with a diffusiometer (meteorological visibility), a rain gauge, a luxmeter
(ambient light) and a temperature and humidity sensor (ambient air) (Fig. 1).
More than 33 different scenarios were recorded, depending on weather conditions
(rain type and intensity, ambient temperature), ambient light, and human-eye visibility
distance. Table 3 shows the most interesting recorded scenarios, all with class 3 fog
(visibility distance < 100 m):

Fig. 1. Outdoor test campaign

Table 3. Most interesting recorded outdoor scenarios


Scenario   Weather                             Visibility distance   Ambient light   Temperature
7          Day, heavy fog, light snow          75 m                  5032 lx         −1 °C
11         Day, heavy fog and snow             69 m                  1096 lx         −1 °C
22         Day low light, heavy fog and snow   75 m                  104 lx          −0.8 °C
26         Night, heavy fog                    99 m                  0 lx            +1.5 °C

3.2 Tunnel Test Campaign


The tests were carried out in the CEREMA fog tunnel in Clermont-Ferrand (30 m long,
5.5 m wide and 2.5 m high). Artificial fog and rain are reproduced and controlled:
fog and rain drop size, meteorological visibility of the fog, and rain intensity. Two fog
classes are available: a unimodal drop size distribution (DSD) centred around 1 µm and
a bimodal DSD centred around 1.5 and 10 µm. 74 scenarios were performed by varying
the type of adverse weather conditions (fog and rain), scene illumination (night, day,
automotive lighting, aircraft lighting) and glare (facing vehicle). The scene in front of the
cameras contained road pavement, road marking, pedestrians and lights (see Fig. 2).

Fig. 2. CEREMA fog tunnel



4 Experimental Results

In this section, we describe the detection and recognition range performances that were
measured in this study. It is important to keep in mind that these results reflect not only
the intrinsic characteristics of the spectral bands but also the capability of the chosen
cameras. The cameras were selected to be representative of the typical current state-of-
the-art.
In order to prevent any detection algorithm artefact, a visual analysis has been
performed by two different human observers. As expected, exact detection range values
differed from one observer to the other, but the relative values were consistent. In all
cases, brightness and contrast were carefully tuned in order to optimize ranges.
For each camera channel, i.e. each of the four spectral bands, a video database was
created by remotely recording videos of relevant scenes for each listed scenario.
Fig. 3 provides a sample of the video database (outdoor campaign):

Fig. 3. Example of snapshots recorded by LWIR (top left), Visible (top right), SWIR (bottom
left) and NIR (bottom right) cameras

4.1 Pedestrian Detection


Pedestrian detection tests were performed in the Cerema fog tunnel using real
human test subjects, as illustrated in Fig. 4.
Objects moving in the fog generate significant transmission inhomogeneities. In
order to avoid errors due to this effect, we recorded films of human test subjects
standing still at the end of the tunnel while the fog cleared over time. Detection was
declared successful when the outline of the chest of the test subject became visible
against the background. This way, detection ranges were measured at the height at
which the transmission meter was set. The test subjects were about 25 m away from
the cameras. One of the test subjects wore a high-visibility jacket and another wore
dark clothes.

Fig. 4. Pedestrian detection test setup in the Cerema fog tunnel: LWIR (top left), Visible (top
right), SWIR (bottom left) and NIR (bottom right) cameras

As expected, Visible, NIR and SWIR detection performances were better
for the subject wearing the high visibility jacket (even though this improvement is less
pronounced for thicker fog). For this study, the case of the subject in dark clothes was
deemed more relevant.
Table 4 gives the fog density, expressed as the standard visibility range, at
which the pedestrian becomes visible. A lower visibility range indicates successful
pedestrian detection in a thicker fog, and hence a better capability to see through the
fog. Cases with glare are not included (Table 4).

Table 4. Fog thickness for pedestrian detection at 25 m with the different cameras
Camera Fog density for pedestrian detection
Visible RGB Moderate (visibility range = 47 ± 10 m)
Extended NIR High (visibility range = 28 ± 7 m)
Extended SWIR High (visibility range = 25 ± 3 m)
LWIR Extreme (visibility range = 15 ± 4 m)

Error ranges mostly reflect the dispersion between the different scenarios used in
the study.
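The visibility ranges in Table 4 can be related to how much contrast survives the fog: meteorological visibility V is conventionally tied to the atmospheric extinction coefficient β by Koschmieder's law, V = 3.912/β (5% contrast convention), and the apparent contrast of a target at distance d decays as exp(−βd). The paper does not spell this out; the sketch below is our own illustration:

```python
import math

def extinction_coefficient(visibility_m: float) -> float:
    """Koschmieder's law: beta = 3.912 / V (5 % contrast convention)."""
    return 3.912 / visibility_m

def apparent_contrast(intrinsic_contrast: float, distance_m: float,
                      visibility_m: float) -> float:
    """Contrast of a target at distance_m, seen through uniform fog."""
    beta = extinction_coefficient(visibility_m)
    return intrinsic_contrast * math.exp(-beta * distance_m)

# A unit-contrast pedestrian at 25 m (the tunnel geometry) in fog at the
# Table 4 threshold for the visible camera (visibility ~47 m):
print(f"{apparent_contrast(1.0, 25.0, 47.0):.3f}")
```

At the 47 m threshold, the apparent contrast at 25 m comes out around 0.12, i.e. only a small fraction of the intrinsic contrast survives, which is at least consistent with this being the detection limit for the visible camera.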
Conclusions are the following:
• The LWIR camera has a better capability to see through fog than the NIR and
SWIR ones. The visible camera has the lowest fog piercing capability.
• The LWIR camera is the only one that allows pedestrian detection in full darkness.

• The LWIR camera also proved more resilient to glare caused by facing headlamps
in the fog. Other cameras sometimes missed a pedestrian because she or he was
hidden by the glare (Fig. 5).

Fig. 5. Example of images recorded in the fog tunnel with the four different cameras

4.2 Vehicle Detection and Recognition


Vehicle detection ranges were measured in the outdoor test campaign.
Similarly to military range performance tests, we define two tasks of interest:
detection and recognition. Detection means that the presence of an object on the road
can be acknowledged, even if the type of object cannot be assessed. Recognition means
that the detected object can be classified into a category such as truck, car, motorcycle,
bicycle, pedestrian, animal, or static obstacle. In particular, VRU can be distinguished
from vehicles. Fig. 6 illustrates detection and recognition in two different spectral
bands (the images are from different scenarios).
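The detection/recognition split echoes the classical Johnson criteria used in military range performance testing: roughly 2 resolved line pairs (cycles) across a target's critical dimension for detection and about 8 for recognition, at 50% probability. Combined with a camera's IFOV this gives geometric task ranges in clear air. The figures below are a textbook-style sketch with illustrative numbers, not values from this study:

```python
import math

def johnson_range_m(target_size_m: float, cycles: float,
                    hfov_deg: float, h_pixels: int) -> float:
    """Max range at which `cycles` line pairs fit across the target.

    One cycle requires two pixels (Nyquist). Johnson criteria (textbook,
    50 % probability): ~2 cycles for detection, ~8 for recognition.
    """
    ifov_rad = math.radians(hfov_deg) / h_pixels
    return target_size_m / (2.0 * cycles * ifov_rad)

# Illustrative: a 1.5 m critical dimension (car) with the LWIR camera of Table 2.
detection = johnson_range_m(1.5, 2.0, 44, 640)
recognition = johnson_range_m(1.5, 8.0, 44, 640)
print(f"detection ~{detection:.0f} m, recognition ~{recognition:.0f} m")
```

These clear-air geometric limits are upper bounds only; atmospheric extinction in fog shortens the actual ranges reported below.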
Distances were calibrated within the cameras’ field of view using the road markings
(which are clearly visible for all cameras in images recorded on a sunny day) and an
aerial map of the area. That method gives a distance measurement precision on the
order of a few meters simply by noting the vehicle position within an image. The
maximal distance that could be reliably assessed by this method was on the order of
150 m (Fig. 7).
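Mapping image position to distance, as done here with road markings and an aerial map, can also be sketched with a flat-road pinhole model: for a camera at height h looking along the road, a ground point imaged n pixels below the horizon lies at d = f·h/n, with f in pixel units. All numbers below are hypothetical and for illustration only, not the paper's calibration:

```python
def ground_distance_m(row_px: float, horizon_row_px: float,
                      focal_px: float, camera_height_m: float) -> float:
    """Flat-road pinhole model: distance to the ground point imaged at
    row_px. Valid only for rows below the horizon (row_px > horizon_row_px)."""
    pixels_below_horizon = row_px - horizon_row_px
    if pixels_below_horizon <= 0:
        raise ValueError("row must lie below the horizon")
    return focal_px * camera_height_m / pixels_below_horizon

# Hypothetical calibration: focal length 1000 px, camera 1.5 m above the road.
for row in (490, 500, 550):  # image rows below a horizon at row 480
    print(f"row {row}: {ground_distance_m(row, 480, 1000, 1.5):.1f} m")
```

In this toy setup, ten pixels below the horizon already corresponds to 150 m, which gives an intuition for why distances become hard to assess precisely from image position at long range.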
Average detection and recognition ranges are given in Figs. 8 and 9 for the
different spectral bands and for each of the four scenarios of interest. A total of 22
vehicles were observed. Error bars give the dispersion between the different vehicles
observed within a given scenario.
In VIS, NIR or SWIR, detection was performed using the vehicle headlamps. The
case of a vehicle driving with headlamps off while in adverse conditions was not
encountered in this study. Should it happen, however, detection ranges in VIS, NIR and
SWIR would be on the order of the recognition ranges. In LWIR, detection relied on
the observation of hot vehicle parts: the wheels, the motor or the exhaust system.

Fig. 6. Examples of the detection and recognition tasks in the VIS and LWIR spectral bands

Fig. 7. Range reference based on the site map analysis and the T1 type of lane marking
In some foggy instances, recognition proved difficult or even impossible in VIS,
NIR or SWIR because the vehicle remained entirely hidden by the glare of its own
headlamps. When this happened, the vehicles were not taken into account in the
average ranges given in Fig. 9. Images illustrating this phenomenon are given in
Fig. 10. SWIR here presents more glare than the other bands, but this is only due to
the camera settings; it is not an intrinsic characteristic of the spectral band.
The visual observation of the videos also confirmed the well-known fact that the
exploitation of movement by the human visual system greatly increases detection
capabilities: the success of the detection task is much higher when performed while
watching a film than when observing still images taken from the very same film.
This is due to the human visual cortex implementing advanced spatiotemporal
denoising, and should inspire detection software developers.

Fig. 8. Vehicle detection ranges

Fig. 9. Vehicle recognition ranges (except cases with the vehicle hidden by glare)

Fig. 10. Images showing the glare effect in the VIS, NIR and SWIR spectral bands (panels from
left to right: VIS, NIR, SWIR, LWIR)
In some recordings, wild animals are visible on the side of the road in the LWIR
band. These animals are visible in none of the other spectral bands.

The conclusions are the following:


• For vehicles with headlights on, detection in adverse conditions is better in SWIR.
NIR comes next and then VIS and LWIR.
• For the recognition task in the same conditions, the conclusions are inverted.
• The VIS, NIR and SWIR bands are sensitive to glare in foggy conditions, making
the recognition task impossible in many cases.
• For both tasks, the LWIR camera gives much more reproducible results than all the
others. In particular, its performance is independent of the vehicle’s headlights
being on or off.
• The LWIR camera was also the only one allowing the detection of hot blooded
animals on the side of the road in adverse conditions.

4.3 Road Marking Detection


Detection ranges of road markings were evaluated in the outdoor test campaign by
using the road lines of the highway. The same calibration as for vehicle detection was
used to measure the distances.
Table 5 gives the average maximal detection distance for the four scenarios of
interest:

Table 5. Road line detection ranges

Scenario   Visibility [m]   Ambient light [lx]   Maximal detection range [m]
                                                 VIS   NIR   SWIR   LWIR
7          75               5032                 53    63    66     –
11         69               1096                 50    60    63     –
22         75               104                  53    60    63     –
26         99               0                    53    53    53     –

Conclusions are the following:

• Observation in the LWIR spectral band depends only on the road marking’s thermal
emissivity. In 15 of the 21 scenarios recorded in LWIR, road markings are not
visible. The visibility depends on the weather: rain cleans the lines while sun
exposure enhances them (see Fig. 11). Road marking observation in the LWIR
spectral band is thus not relevant.
• Visible, NIR and SWIR all required additional lighting to detect road markings at
night. Detection in NIR and SWIR is equivalent, and slightly better than in VIS. This
is due in part to a broader overall spectral band, monochromaticity (no RGB filter),
larger pixels and a higher bit depth.

Fig. 11. Road marking observation in the LWIR spectral band on a sunny day (left) and on a
rainy day (right)

Fig. 12. Images of a traffic sign acquired in different spectral bands (columns: VIS, NIR,
SWIR). The first row was recorded on a sunny day; the second row is from scenario 7 (fog
class 3 with snow)

4.4 Traffic Signs Recognition


The traffic sign located 38 m from the cameras on the La Fageole test site was used
for the comparative study. Fig. 12 shows the sign in two different weather
conditions.
Conclusions of the analysis of all the scenarios are the following:
• The SWIR sensor does not allow the identification of the traffic sign, even during
daylight. This is due to the fact that the difference of reflectivity between the letters
and the background is very low in the SWIR band. Traffic signs are indeed designed
for the visible spectrum and not for the other bands.
• The NIR camera provides good vision and a better SNR than the RGB visible
camera, especially in adverse weather conditions. This is due to the same reasons as
for road markings detection.
• As expected, the traffic sign is never identifiable in the LWIR band. Letters and
background have indeed the same temperature and emissivity.

Table 6. Comparison of the performance of ADAS functions using Visible, NIR, SWIR or
LWIR camera technologies (night fog, using headlights)

ADAS function                           VIS   NIR   SWIR   LWIR
Pedestrian, bicycle, animal detection   +     ++    ++     ++++
Vehicle shape recognition               -     -     --     ++
Vehicle lights detection                +     ++    +++    --
Traffic sign recognition                +     ++    -      --
Road marking detection                  +     ++    ++     -

Table 7. Comparison of the performance of ADAS functions using Visible, NIR, SWIR or
LWIR camera technologies (day fog, using headlights)

ADAS function                           VIS   NIR   SWIR   LWIR
Pedestrian, bicycle, animal detection   +     ++    ++     ++++
Vehicle shape recognition               +     ++    ++     ++
Traffic sign recognition                +     ++    -      --
Road marking detection                  +     ++    ++     -

5 Conclusion

As they are complementary to other distance measurement systems like LIDARs or
RADARs, the AWARE project focused its experiments on camera technologies, which
are necessary to bring the redundancy and complementary characteristics that improve
an ADAS detection system’s reliability and accuracy.
In order to detect pedestrians, vehicles and road markings, or to recognize traffic
signs, the relevance of four different spectral bands was evaluated under adverse
weather conditions: Visible RGB, Near-Infrared (NIR), Short-Wave Infrared (SWIR)
and Long-Wave Infrared (LWIR).
Tables 6 and 7 summarize the relative performance of the ADAS functions for the
four different cameras.
The experimental results clearly show that:
• In addition to the visible spectral band, only the LWIR spectral band provides an
outstanding benefit. Targets can be detected with or without additional light, and
LWIR detection is not sensitive to any dazzle.
• NIR and SWIR provide equivalent performances, mainly because the experiments
used extended visible-to-NIR and visible-to-SWIR cameras. However, as scattering
in fog is reduced when the wavelength increases, a NIR-only or a SWIR-only
camera would certainly have provided much better performances than the extended
spectral bands.

All-Weather Vision for Automotive Safety 15
• Visible RGB extended to NIR (or Red-Clear sensors), combined with LWIR,
provides the best spectral band combination to improve ADAS performance for
detection (vehicles, pedestrians, bicycles, animals, road markings) and recognition
(traffic signs).
• Caution should be exercised when using LED headlight technology to provide
additional light. Pulsed LED technology could reduce the detection reliability of
systems based on Visible, NIR and SWIR cameras.

Acknowledgement. The authors acknowledge the contribution of their colleagues to this work:
P. Morange, J-L. Bicard and all the pedestrians from CEREMA, A. Picard from Sagem and
B. Yahiaoui from Nexyad.

Glossary

ADAS: Advanced Driver Assistance Systems
AWARE: All Weather All Roads Enhanced vision
CMOS: Complementary Metal Oxide Semiconductor
LIDAR: LIght Detection And Ranging
LWIR: Long-Wave InfraRed
NCAP: New Car Assessment Programs
NIR: Near InfraRed
RADAR: RAdio Detection And Ranging
RGB: Red-Green-Blue
SNR: Signal-to-Noise Ratio
SWIR: Short-Wave InfraRed
VRU: Vulnerable Road User

Machine Learning Based Automatic Extrinsic
Calibration of an Onboard Monocular Camera
for Driving Assistance Applications on Smart
Mobile Devices

Razvan Itu and Radu Danescu

Technical University of Cluj-Napoca,
Str. Memorandumului nr. 28, Cluj-Napoca, Romania
{razvan.itu,radu.danescu}@cs.utcluj.ro

Abstract. Smart mobile devices can be easily transformed into driving assis-
tance tools or traffic monitoring systems. These devices are placed behind the
windshield such that the camera is facing forward to observe the traffic. For the
visual information to be useful, the camera must be calibrated, and a proper
calibration is laborious and difficult to perform for the average user. In this
paper, we propose a calibration technique that requires no input from the user
and is able to estimate the extrinsic parameters of the camera: yaw, pitch and roll
angles and the height of the camera above the road. The calibration algorithm is
based on detecting vehicles using CNN based classifiers, and using statistics
about their size and position in the image to estimate the extrinsic parameters via
Extended Kalman filters.

Keywords: Automatic camera calibration · Monocular vision ·
Convolutional neural networks · Smart mobile devices

1 Introduction

Smart mobile devices are omnipresent and, because they come equipped with cameras
of ever-increasing resolution and frame rate, with increasing processing power, and
with additional sensors and connection capabilities, they can easily be transformed
into driving assistance or traffic monitoring/analysis
tools. The driver can easily mount such a device behind the windshield, so that the
camera faces forward to observe the traffic. However, in order to relate the features seen
by the camera with the 3D features of the real world, calibration must be performed,
and this is a step that most users would rather skip.
Automatic camera calibration is crucial for obtaining robust and accurate computer
vision based driver assistance systems. Correlation between the 3D world and the 2D
image scene is required in systems that sense and measure the surrounding environ-
ment. Monocular vision applications are easier to use and to deploy, and they are more
cost effective than stereovision based systems. The downside of using a single camera
is the missing depth information that stereo systems have. The monocular systems must
rely on constraints imposed on the environment geometry, such as a flat road, standard
object sizes, and so on, but they still require calibration [1]. Traditionally, the
calibration process is performed in controlled environments and in laboratories,
usually by using known objects manually placed in the observed scene and accurately
measured. If we address the scenario of the user simply mounting the phone behind
the windshield and driving off, these constraints cannot be satisfied, and therefore
automatic calibration must be performed.

© Springer Nature Switzerland AG 2019
J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 16–28, 2019.
https://doi.org/10.1007/978-3-319-99762-9_2
Camera calibration represents an active research area in the context of driving
assistance. An important step towards achieving automatic on-board calibration is to
determine the point where the parallel lines in the 3D world scene intersect, also called
vanishing point (VP). This point can be used to determine the extrinsic parameters of
the camera system. Similar work has been proposed to automatically estimate the
camera orientation using the VP since the 1990s [2].
Conventional methods for determining the vanishing point usually take advantage
of geometric or texture features existing in the scene, such as lane lines or side-walk
lines. These methods extract the relevant features, then apply a voting scheme, and
finally extract VP candidates. These approaches make use of the image space, but
methods based on the Gaussian sphere have been presented previously as well [3].
Gaussian unit sphere methods map the parallel line vectors (2D image data) onto a
Gaussian sphere and process the resulting great circles.
a solution that uses RANSAC for orthogonal vanishing point detection. Our previous
research [5] has further simplified the classic approaches by using a convolutional
neural network (CNN) that takes an image as input and predicts the vanishing point
x and y coordinates as output. Using our own dataset we have found that this method
works well and with high accuracy.
Monocular calibration methods may also make use of additional sensors mounted
on the ego-vehicle, such as laser or radar based sensors. LIDAR-camera calibration has
been more widely used in the research community as well as in production. The
existing work generally follows the same steps, based on the correlation between 3D
LIDAR points and features or edges in the images from the monocular camera.
The LIDAR frame is aligned to camera images by using contour matching. The edges
from the 3D LIDAR frame are projected into the image. Calibration is performed by
adjusting the extrinsic parameters until these 3D points projected into the image are
aligned with the 2D contours detected from the original camera image. Similar
approaches have been presented in [6, 7]. However, the usage of external sensors represents
an additional cost factor, and reduces the mobility and portability of the monocular
vision system. Also, the LIDAR sensor requires a calibration of its own, with its own
methodology and constraints.
Most of the automatic calibration techniques use features painted on the road, such
as lane markings. However, there may be scenarios where lane markings are not
available, or they are poorly drawn, dirty, or they simply cannot be seen due to the
overwhelming presence of obstacles. This paper proposes a technique for extrinsic
parameter calibration that does not require the presence and detection of lane markings,
but instead works on obstacles alone. The main idea is to use a Convolutional Neural
Network (CNN) vehicle detector that does not require calibration, and will generate a
bounding box of the vehicle in the image. Using the detected bounding boxes, and
geometrical constraints, the camera’s height above the ground plane, and the three
rotation angles, can be calibrated.
The proposed calibration technique does not require any action from the user,
besides simply driving the car in normal traffic. The only constraint is that the system
has to observe enough objects, so that individual detection errors can cancel each other
out, and a robust estimation of the camera parameters can be achieved.

2 Vehicle Detection Using Convolutional Neural Networks

The main features used for camera calibration are the bounding box of the obstacles in
the image plane. In order to extract these features, we need a detector that is fast enough
to work in real time on the mobile phone’s computing resources (Fig. 1), and which
does not require calibration, meaning that it will detect the obstacle no matter its size,
orientation, position in the image, etc.

Fig. 1. The mobile device placed behind the windshield, detecting vehicles

The solution chosen for obstacle detection is based on a convolutional neural
network, pre-trained on a desktop machine using pairs of images and obstacles
expressed as bounding boxes (the rectangle coordinates of the boxes). We have used
the TensorFlow object detection API [8] to simplify the training approach and we have
retrained the existing single shot detector (SSD) MobileNet CNN [9] as it features a
reduced number of parameters needed to train (training doesn’t require a lot of time)
and is specifically designed to run on mobile devices.
The MobileNet CNN architecture features a full convolution as the first layer, while
the other layers are built using depthwise separable convolutions. A standard convo-
lution is split (factorized) into a depthwise convolution and pointwise convolution also
called a 1 × 1 convolution. In MobileNet, a single filter is applied to each channel
initially by the depthwise convolution, whereas the pointwise 1 × 1 convolution
combines the outputs. The reduced computational time is achieved by this factoriza-
tion. Each layer of the network is followed by batch normalization and uses ReLU for
nonlinearity. The final layer doesn’t use ReLU and is fed into a softmax layer for final
classification. We have used an SSD MobileNet that was initially trained using the
COCO dataset [10]. Our network has been retrained using the KITTI obstacle dataset
[11] and the Udacity dataset [12], and we have limited the obstacle classes to passenger
cars only, as they are the features used for calibration. Training is done using gradient
descent and two loss functions: one for detection and another for classification.
Smoothed L1 loss is used for localization and the weighted sigmoid loss is used for
classification. For our setup, we have used input images that are resized to 300 × 300
pixels. The TensorFlow Object detection API handles negative examples using online
hard-negative mining.
On the mobile device, the network generates, in real time, bounding boxes around
the detected vehicles (Fig. 2). Furthermore, we can take advantage of the official
TensorFlow Android tracking algorithm for the bounding boxes. The TF tracker uses
the FAST features [13] generated by the pyramidal Lucas Kanade optical flow method.
The median movement of features is analyzed at each frame and the bounding box
tracker will drop the current bounding box when the cross-correlation with the original
detection drops below a fixed threshold. The current bounding box can also be updated
if the new detection has a large overlap (a fixed threshold).

Fig. 2. Vehicle detection result, provided by the CNN-based classifier

Using tracking has pros and cons. On the one hand, the results are more stable; on the
other hand, tracking may generate false data, due to the inertia of the update. In our calibration
method, we can work with or without tracking. The obstacle bounding boxes do not
have to be perfect, or complete. The calibration methodology only requires a statisti-
cally representative set of bounding boxes, for different distances, and for multiple
vehicle sizes.

3 Camera Calibration

The camera calibration algorithm will estimate the height of the camera above the
ground, and the three rotation angles, pitch, yaw and roll.

3.1 Extended Kalman Filter for Camera Height and Pitch Estimation
The first two parameters to be estimated are the camera height above the ground, h, and
the pitch angle θ. The intrinsic parameters of the camera are assumed to be known: the
principal point is assumed to be in the center of the image, and the focal length of the
camera is read from the mobile device’s camera API. Thus, the intrinsic camera matrix
can be written as:
A = \begin{pmatrix} f & 0 & W/2 \\ 0 & f & H/2 \\ 0 & 0 & 1 \end{pmatrix}    (1)

The parameters H and W are the image’s height and width, in pixels, and the focal
length is expressed also in pixels. The camera’s position in the world coordinate system
is determined by the height alone:
T_{CW} = \begin{pmatrix} 0 \\ h \\ 0 \end{pmatrix}    (2)

The rotation matrix between the world and the camera is, at this point, assumed to
depend on the pitch angle alone:
R_{WC} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}    (3)

The projection matrix that will project a 3D point (X, Y, Z) in the world coordinate
system to an image point (u, v) is computed as:

P = A \left[ R_{WC} \; T_{WC} \right]    (4)

TWC is the translation vector between the world and the camera coordinate system:

T_{WC} = -R_{WC} T_{CW}    (5)
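Equations (1)–(5) can be combined into a small projection helper. The sketch below (NumPy; the numeric values used later are illustrative, not taken from the paper) builds the 3 × 4 matrix P and projects a world point:

```python
import numpy as np

def projection_matrix(f, W, H, h, pitch):
    """Build P = A [R_WC | T_WC] from the focal length f, image size W x H,
    camera height h and pitch angle (radians), following Eqs. (1)-(5)."""
    A = np.array([[f, 0, W / 2],
                  [0, f, H / 2],
                  [0, 0, 1]], dtype=float)            # Eq. (1)
    T_CW = np.array([0.0, h, 0.0])                    # Eq. (2): camera position
    c, s = np.cos(pitch), np.sin(pitch)
    R_WC = np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]], dtype=float)         # Eq. (3)
    T_WC = -R_WC @ T_CW                               # Eq. (5)
    return A @ np.hstack([R_WC, T_WC[:, None]])       # Eq. (4): 3 x 4 matrix

def project(P, X, Y, Z):
    """Project a 3D world point (X, Y, Z) to pixel coordinates (u, v)."""
    u, v, w = P @ np.array([X, Y, Z, 1.0])
    return u / w, v / w
```

Any axis-sign conventions beyond what Eqs. (1)–(5) state are assumptions of this sketch.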

Given any two image rows, v1 and v2, and a typical car width L, we can predict the
width of the car in the image plane on these two lines, assuming that the vehicle is on
the road and is viewed from behind, in a quasi-central position in the image. The lines
and the car width are assumed to be fixed, and the size of the car width in the image
space will depend only on two parameters, h and θ, which form the parameter vector X,
to be estimated:

X = \begin{pmatrix} h \\ \theta \end{pmatrix}    (6)

The algorithm for obtaining the car widths for the given rows v1 and v2 is the following:
1. Assuming that the vehicle is on the road, in a central position, the sides of the
vehicle are given by the points (−L/2, 0, Z) and (L/2, 0, Z), L being the width of the
car and Z being the distance from the camera. By taking two distances, Z1 and Z2,
four 3D points are generated – 2 on the left side, and 2 on the right side of the
Z axis.
2. The four points are projected into the image plane using the projection matrix
P. The resulted projection points are (uL,1, vL,1), (uL,2, vL,2), (uR,1, vR,1) and (uR,2,
vR,2).
3. The two lines formed by the points on the left side, and the points on the right side,
are intersected with the horizontal lines defined by the given row coordinates v1 and
v2. The intersection points will have the column coordinates uL,1, uL,2, uR,1 and uR,2.
4. The two widths are computed as w1 = uR,1 − uL,1 and w2 = uR,2 − uL,2.
We can define the width projection function g as:

g_{v_1,v_2,L}(X) = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}    (7)
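The four steps and Eq. (7) can be sketched as follows (a self-contained illustration; the default focal length, image size and distances Z1, Z2 are placeholder assumptions, not values from the paper):

```python
import numpy as np

def predict_widths(h, pitch, v1, v2, L=1.75, f=500.0, W=640, H=480,
                   Z1=10.0, Z2=30.0):
    """Predict the image-space width of a car of width L, centred on the
    road, on the two image rows v1 and v2 (steps 1-4 above)."""
    # Projection matrix from Eqs. (1)-(5).
    A = np.array([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1]], dtype=float)
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)
    T = -R @ np.array([0.0, h, 0.0])
    P = A @ np.hstack([R, T[:, None]])

    def proj(X, Y, Z):
        u, v, w = P @ np.array([X, Y, Z, 1.0])
        return u / w, v / w

    widths = []
    for v_row in (v1, v2):
        side_u = []
        for X in (-L / 2, L / 2):                  # step 1: car sides at Y = 0
            (ua, va), (ub, vb) = proj(X, 0, Z1), proj(X, 0, Z2)  # step 2
            # Step 3: intersect the projected side line with the row v_row
            # (assumes the rows lie below the horizon so vb != va).
            side_u.append(ua + (ub - ua) * (v_row - va) / (vb - va))
        widths.append(side_u[1] - side_u[0])       # step 4: w = uR - uL
    return np.array(widths)                        # the vector g(X), Eq. (7)
```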

We can denote the output of the function g as Z, the measurement vector. Now the
problem can be stated as an estimation problem: having the measurement vector Z, and
the measurement function g, one needs to estimate the unknown parameter vector X. For
the estimation of this vector, we can use the equations of the Extended Kalman Filter.
We will perform multiple iterations, starting from an initial guess for X, X0, with an
initial diagonal covariance matrix P0, which will be large enough to cover all possible
values for pitch and height.
For each iteration k, the following steps will be executed:
1. Prediction of the measurement vector:

Z'_k = g_{v_1,v_2,L}(X_{k-1})    (8)

2. Computation of the measurement matrix (Jacobian of g). This step will be achieved
by numerical differentiation, by varying the height and pitch angle by small
amounts around the values predicted by the current Xk.
M_k = \begin{pmatrix} \partial w_1 / \partial h & \partial w_1 / \partial\theta \\ \partial w_2 / \partial h & \partial w_2 / \partial\theta \end{pmatrix}    (9)

3. Computation of the Kalman gain:

K_k = P_k M_k^T \left( M_k P_k M_k^T + R \right)^{-1}    (10)

R is the measurement covariance matrix, a diagonal matrix encoding the uncertainty
of the vehicle width measurement in the image space (in pixels).

4. Refinement of the X vector. Using an actual measurement vector Z, extracted from
measurement data (a method that will be described in Sect. 3.2), an updated version
of the X vector can be computed:

X_k = X_{k-1} + K_k \left( Z - Z'_k \right)    (11)

We will perform 10 iterations. Usually the values for height and pitch converge
after about 5 iterations. The initial values, for X0, are a height of 1 m, and a pitch of 0°.
The standard deviation for height (in P0) is 400 mm, and for pitch is 3°. The standard
deviation for the width pixel error (for the matrix R) is 5 pixels.
The Extended Kalman Filter has an additional step, the updating of the state
covariance matrix P. We have found, through experiments, that we can use the same
P = P0 for all iterations, without affecting the convergence or the final results.
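The iteration of Eqs. (8)–(11) can be written generically, with the state covariance held at P0 throughout as described above. Here `g` stands for the width-projection function of Eq. (7), passed in as a callable; this is a sketch, not the authors' implementation:

```python
import numpy as np

def ekf_calibrate(g, Z, X0, P0, R, iterations=10, eps=1e-4):
    """Estimate the parameter vector X = (h, pitch) by iterating
    Eqs. (8)-(11).

    g:  measurement function mapping X to the predicted widths, Eq. (7)
    Z:  actual measurement vector (the two car widths, in pixels)
    P0: initial (and, as in the text, fixed) state covariance
    R:  measurement covariance
    """
    X = np.asarray(X0, dtype=float)
    for _ in range(iterations):
        Zpred = g(X)                                   # Eq. (8)
        # Eq. (9): Jacobian of g by central numerical differentiation.
        M = np.zeros((len(Zpred), len(X)))
        for j in range(len(X)):
            dX = np.zeros_like(X)
            dX[j] = eps
            M[:, j] = (g(X + dX) - g(X - dX)) / (2 * eps)
        # Eq. (10): Kalman gain, with P held at P0 for every iteration.
        K = P0 @ M.T @ np.linalg.inv(M @ P0 @ M.T + R)
        X = X + K @ (Z - Zpred)                        # Eq. (11)
    return X
```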

3.2 Extracting Measurements Data for Camera Calibration


According to the previous section, in order to calibrate the pitch angle and the camera
height above the road plane, we only need two car widths (in pixels), on two separate
image rows. For that, we can use the CNN based vehicle detector, described in Sect. 2,
which will produce rectangles indicating the image regions containing cars. The bottom
line of the rectangle is the one we need, as it shows the contact point of the car with the
road, so it will match the assumption that the height coordinate Y is zero.
Unfortunately, we cannot simply take two detected cars, on two image lines, and
apply the EKF. There are several reasons why this approach will fail:
– The detected vehicles are not always seen from behind, and therefore their per-
ceived width may be larger (see the red vehicle in Fig. 2)
– The detector may not always generate the perfect bounding box for the vehicle (see
the vehicle in front of us in Fig. 2), which means the incorrect image width and the
incorrect image row are detected.
– The tracked vehicle has an unknown width. It may be close to the average, or it may
be a very large or a very narrow vehicle.
For these reasons, the measurement vector Z will be generated from a statistical
analysis of the detected car rectangles, using image sequences acquired while driving
for several minutes. Figure 3 shows the detected widths, for different image rows, for a
sequence acquired in about 8 min of driving through Cluj-Napoca.
Ideally, the relation between the car width w and the image row v should be a line.
As multiple outliers are present, the RANSAC method [14] is used for fitting a line to
the noisy data. The resulted line, also shown in Fig. 3, will cross the w = 0 axis when
v matches the vanishing row of the image (the cars become points on the horizon).
Having the line fitted to the detection data, we can intersect this line with two
vertical lines of our choice, corresponding to the image rows v1 and v2 used in the
previously described algorithm. For an image size of 640 × 480, we have chosen
v1 = 310, and v2 = 360. The average car width was set at 1.750 m.
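The statistical extraction step can be sketched as follows (hypothetical data layout: parallel lists of bottom-line rows and detected widths; the RANSAC inner loop uses a simple two-point hypothesis and the inliers are refined by least squares):

```python
import random
import numpy as np

def fit_width_line(rows, widths, n_iter=200, tol=3.0, seed=0):
    """Fit w = a*v + b to noisy (row, width) detections with RANSAC,
    then refine a and b by least squares over the inlier set."""
    rng = random.Random(seed)
    pts = list(zip(rows, widths))
    best_inliers = []
    for _ in range(n_iter):
        (v1, w1), (v2, w2) = rng.sample(pts, 2)
        if v1 == v2:
            continue                      # degenerate hypothesis
        a = (w2 - w1) / (v2 - v1)
        b = w1 - a * v1
        inliers = [(v, w) for v, w in pts if abs(a * v + b - w) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    v_in = np.array([v for v, _ in best_inliers], dtype=float)
    w_in = np.array([w for _, w in best_inliers], dtype=float)
    a, b = np.polyfit(v_in, w_in, 1)
    return a, b

# Usage: the measurement vector Z and the vanishing row v0 follow from
# the fitted line, e.g. (with the rows chosen in the text):
#   a, b = fit_width_line(rows, widths)
#   Z = np.array([a * 310 + b, a * 360 + b])   # widths at v1 = 310, v2 = 360
#   v0 = -b / a                                # row where the width reaches 0
```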

Fig. 3. Vehicle detection results: car widths with respect to image rows

Now we have everything we need for running the EKF algorithm. In Fig. 4 the
results of 10 iterations are shown, for height estimation (in millimeters) and for pitch
estimation (in degrees). The camera (phone camera) height above the road was mea-
sured at 1250 mm, but the pitch angle was not measured. However, the accuracy of
pitch angle estimation can be validated by its effect in the projection matrix.

Fig. 4. Height and pitch estimation iterations

The effects of the new computed parameters are seen in Fig. 5. Using the projection
matrix generated from the new extrinsic parameters using Eqs. (1–5), we can project a
distant point in the image space and find its row coordinate, which must match the
horizon line (Fig. 5, left). We can also use the new projection matrix to generate a bird-
eye view of the scene (the Inverse Perspective Mapping, IPM image), as seen in Fig. 5,
right. The IPM image has the pixel coordinates proportional to the 3D coordinates X
and Z (lateral and longitudinal distances), with a scale factor of 50 mm for 1 pixel. If
the height and pitch parameters are correct, the IPM image should show the lane
markings to be parallel, and the pixel distance between them should correspond to a
valid lane width in the 3D world. The width of the lane, in pixels, in the IPM image, is
65, which corresponds to 3.250 m. The lane in that area was measured at 3.200 m.
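Generating the IPM image from the estimated projection matrix can be sketched as a per-cell lookup (nearest-neighbour sampling; the lateral/longitudinal ranges are illustrative, and the 50 mm-per-pixel scale follows the text):

```python
import numpy as np

def inverse_perspective_map(img, P, x_range=(-5.0, 5.0), z_range=(5.0, 30.0),
                            scale=0.05):
    """Build a bird's-eye view: one output pixel per `scale` metres on the
    road plane Y = 0, sampled from `img` through the projection matrix P."""
    H, W = img.shape[:2]
    nx = int((x_range[1] - x_range[0]) / scale)
    nz = int((z_range[1] - z_range[0]) / scale)
    ipm = np.zeros((nz, nx) + img.shape[2:], dtype=img.dtype)
    for iz in range(nz):
        Z = z_range[1] - iz * scale           # far rows appear at the top
        for ix in range(nx):
            X = x_range[0] + ix * scale
            u, v, w = P @ np.array([X, 0.0, Z, 1.0])
            u, v = int(round(u / w)), int(round(v / w))
            if 0 <= u < W and 0 <= v < H:     # keep only visible cells
                ipm[iz, ix] = img[v, u]
    return ipm
```

If the calibration is correct, parallel lane markings in `img` come out as parallel vertical stripes in the returned image.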

Fig. 5. Vanishing row (horizon line) resulted from the pitch angle (left), and the Inverse
Perspective Mapping image generated using the estimated projection matrix (right)

From Fig. 5, we can see that the estimation of the pitch angle, and the camera
height above the road, corresponds to the real camera parameters. However, we can
also see that the lane in Fig. 5 does not match our direction of travel, which it should,
as we are driving in a straight line. This means that the camera has a yaw angle with
respect to the car reference frame, and this angle should be estimated.

3.3 Estimation of the Yaw Angle


In order to estimate the yaw angle (the heading angle of the camera with respect to the
longitudinal axis of the car), we need to find the vanishing point of the scene. This is
the point where the parallel lines of the road (such as the lane markings) meet in the
image space. Even though we can use road markings when available, we can also use
the detected obstacles.
If an obstacle is tracked, and this obstacle is a vehicle following the road, the
positions of the lower left and lower right rectangle corners in the image space will
form curves that will meet in the vanishing point, as seen in Fig. 6, left. Our solution is
to not use tracking, but instead form intersecting lines from detected obstacles that are
present in consecutive images, and not too far from each other. Obviously, false lines
are generated, and their intersection will lead to a false vanishing point. We will filter
out the vanishing point candidates that do not fall within a radius of 15 pixels from the
already estimated vanishing row v0. The remaining intersections will be sorted based on
their u coordinate, and the median u value will be selected as the vanishing column u0
(Fig. 6, right).
From the position of the vanishing point column, we can compute the yaw angle
using the following equation, where ψ is the yaw angle, W is the image width, and f is
the focal length in pixels:

\tan\psi = \frac{u_0 - W/2}{f}    (12)
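The vanishing-column search and Eq. (12) can be sketched as follows (hypothetical input: line segments built from the lower rectangle corners of the same obstacle in consecutive frames; the segment intersection uses the standard two-line formula):

```python
import math
import numpy as np

def yaw_from_detections(lines, v0, f, W, radius=15.0):
    """Estimate the yaw angle ψ, Eq. (12), from corner trajectories.

    lines: list of ((u1, v1), (u2, v2)) segments from the lower rectangle
           corners of the same obstacle in consecutive frames
    v0:    previously estimated vanishing row
    """
    candidates = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (a, b), (c, d) = lines[i], lines[j]
            x1, y1 = a; x2, y2 = b; x3, y3 = c; x4, y4 = d
            # Intersect the two infinite lines through the segments.
            den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(den) < 1e-9:
                continue                      # parallel lines, no candidate
            t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
            u = x1 + t * (x2 - x1)
            v = y1 + t * (y2 - y1)
            if abs(v - v0) <= radius:         # keep candidates near row v0
                candidates.append(u)
    u0 = float(np.median(candidates))         # vanishing column (needs >= 1)
    return math.atan((u0 - W / 2) / f)        # Eq. (12)
```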

Fig. 6. Finding a vanishing point candidate from the trajectory of an obstacle (left), and finding
the vanishing point as a median of the candidates (right)

Having the yaw angle, we can re-compute the rotation matrix as:
R_{WC} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \cos\psi & 0 & \sin\psi \\ 0 & 1 & 0 \\ -\sin\psi & 0 & \cos\psi \end{pmatrix}    (13)

Using the new rotation matrix, we can re-compute the projection matrix. Using this
projection matrix to generate the Inverse Perspective image, we obtain the bird-eye
view image with the road aligned with our car axis (Fig. 7).

Fig. 7. Comparison between the IPM image with the yaw angle assumed to be zero (left), and
with the yaw angle estimated correctly (right)

3.4 Estimation of the Roll Angle


The presence of a roll angle can cause the image of the scene to appear rotated around
its middle point. We will not include the roll angle in the camera model, but we will
detect it and rotate the images to compensate.

For detecting the roll angle, we’ll use the object detection rectangles, the same input
that we have used for all the calibration steps in this work. If there is no roll, the objects
are expected to be in an upright position, which, taking into consideration that the
objects of interest are cars, means a lot of their edges are either horizontal or vertical.
This means that the orientation of the gradient must be mostly at 90° (for horizontal
edges), or at 0° (for the vertical edges). In order to assess the orientation of the object,
we’ll compute the Histogram of Oriented Gradients (HOG), with 360 histogram bins,
for all objects in the sequence that are near the center of the image (Fig. 8).

Fig. 8. Histogram of oriented gradients. Blue: the HOG of an upright scene. Red: the HOG of
the same scene rotated by 5°

It can be seen that the histogram has strong maxima at 90 and 270°, and weaker
maxima at 0 and 180°. This means that the horizontal edges of the vehicles are
much stronger, and their angle’s dispersion is lower, while the vertical edges are
smaller, and with a much more variable angle (some vehicle sides are round, many
edges of rear windows are diagonal, etc.). If a roll angle is present, the strong peaks will
shift with the amount of degrees of the roll angle. Therefore, in order to detect the roll
angle, we’ll detect the shift of the histogram peaks from their default positions (90, 180,
and so on).
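The roll estimate can be sketched as a gradient-orientation histogram whose strongest peak is compared against its nearest default position (a simplified whole-image stand-in for the per-object HOG described above; precision is limited by the 1° bin width):

```python
import numpy as np

def gradient_orientation_histogram(img, bins=360):
    """360-bin histogram of gradient orientations, weighted by magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 360.0), weights=mag)
    return hist

def estimate_roll(img):
    """Roll = shift of the strongest histogram peak from its nearest
    default position (0, 90, 180 or 270 degrees)."""
    hist = gradient_orientation_histogram(img)
    peak = int(np.argmax(hist)) + 0.5          # bin centre, in degrees
    nearest = round(peak / 90.0) * 90.0
    return peak - nearest
```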

4 Results and Discussion

The calibration algorithm was tested on several sequences acquired by driving in Cluj-
Napoca. The obstacle detection works in real time, at about 10 frames per second on a
Samsung Galaxy S8+ smartphone. The system stores the detection data in a file, and
the calibration algorithm is called by the user. The best results are obtained for longer
sequences, obtained by driving for more than 5 min. The pitch and yaw angles are
accurately estimated (errors less than 0.1°) for most images. The roll angle was tested
only as simulation (the images are rotated by an artificial angle), because we cannot
extract a ground truth value for this angle, and its effect on the IPM image is not always
clear (the road may itself be tilted more than the camera). The system estimated the
artificial roll angle with an error of less than 0.2°.
The camera height estimation is more sensitive, as sometimes we can have errors of
more than 10 cm. The cause of these errors is simply that there are too few vehicles
detected, and they lack diversity. This can happen in the following scenarios:
– A sequence may contain only one vehicle, in front of us, that we follow. If this
vehicle is narrow, or wide, and does not fit the 1.75 m average width, the height
estimation will fail.
– Most of the detected vehicles are on the side of the road, and they will be detected
as larger boxes in the image, including their side view.
The solution for overcoming these problems is to simply collect more data, with
diverse obstacles in front of us.

5 Conclusion and Future Work

We have presented an algorithm which, based on the results of vehicle detection from a
CNN classifier, is able to estimate the extrinsic parameters of a monocular camera with
respect to a vehicle-bound reference frame. The system works accurately when suffi-
cient and diverse data is available, which means longer sequences, in diverse traffic
situations. As the quality and diversity of the vehicle detection results directly impacts
the calibration results, future work will focus on better data generation (better classi-
fication, better generation of the bounding rectangle), better filtering of the resulted
rectangles, and possible fusion with other visual cues.

Acknowledgment. This work was supported by a grant of Ministry of Research and Innovation,
CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2016-0440, within PNCDI III.

References
1. Danescu, R., Itu, R., Petrovai, A.: Generic dynamic environment perception using smart
mobile devices. Sensors 16, 1–21 (2016). Article no. 1721
2. Caprile, B., Torre, V.: Using vanishing points for camera calibration. Int. J. Comput. Vis. 4,
127–139 (1990)
3. Magee, M., Aggarwal, J.: Determining vanishing points from perspective images. Comput.
Vis. Graph. Image Process. 26, 256–267 (1984)
4. Bazin, J., Pollefeys, M.: 3-line RANSAC for orthogonal vanishing point detection. In: 2012
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4282–
4287 (2012)
5. Itu, R., Borza, D., Danescu, R.: Automatic extrinsic camera parameters calibration using
Convolutional Neural Networks. In: 2017 IEEE 13th International Conference on Intelligent
Computer Communication and Processing (ICCP 2017), pp. 273–278 (2017)
6. Bileschi, S.: Fully automatic calibration of LiDAR and video streams from a vehicle. In:
IEEE International Conference on Computer Vision Workshops (ICCV), pp. 1457–1464
(2009)
7. Levinson, J., Thrun, S.: Automatic online calibration of cameras and lasers. In: Robotics
Science Systems Conference, pp. 1–8 (2013)
8. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems.
arXiv:1603.04467 (2016) (preprint)
9. Howard, A., et al.: MobileNets: efficient convolutional neural networks for mobile vision
applications. arXiv:1704.04861 (2017) (preprint)
10. Lin, T., et al.: Microsoft COCO: common objects in context. In: European Conference on
Computer Vision (ECCV), pp. 740–755 (2014)
11. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? In: Computer
Vision and Pattern Recognition Conference (CVPR), pp. 3354–3361 (2012)
12. Udacity Vehicle Dataset: https://github.com/udacity/self-driving-car/tree/master/annotations
13. Rosten, E., Porter, R., Drummond, T.: Faster and better: a machine learning approach to
corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 32, 105–119 (2010)
14. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with
applications to image analysis and automated cartography. Commun. ACM 24, 381–395
(1981)
Driver Assistance and Vehicle
Automation
Towards Collaborative Perception
for Automated Vehicles
in Heterogeneous Traffic

Saifullah Khan1, Franz Andert2, Nicolai Wojke2,
Julian Schindler1, Alejandro Correa3, and Anton Wijbenga4

1 Institute of Transportation Systems, German Aerospace Center (DLR),
Lilienthalpl. 7, 38108 Braunschweig, Germany
{saifullah.khan,julian.schindler}@dlr.de
2 Institute of Transportation Systems, German Aerospace Center (DLR),
Rutherfordstr. 2, 12489 Berlin, Germany
{franz.andert,nikolai.wojke}@dlr.de
3 Universidad Miguel Hernandez de Elche, 03202 Elche, Spain
acorrea@umh.es
4 MAP Traffic Management, Ptolemaeuslaan 54, 3528 BP Utrecht, Netherlands
anton.wijbenga@maptm.nl

Abstract. In the near future, Automated Vehicles (AVs) will be part of the
vehicular traffic on the roads. In general, all automation levels will be allowed
on the road, depending on the road situation, but challenging situations will
still exist that AVs will not be able to handle safely and efficiently. AVs driving
at a high automation level may step down to a lower automation level and hand
over partial or full control to the driver when the automation system reaches its
functional system limits or encounters unexpected situations. This paper briefly
explains the H2020 TransAID project, which covers the transition phases
between different levels of automation. It reviews related work and introduces
the concept used to investigate automation level changes. Furthermore, the
collective sensor data processing architecture used for the demonstrators and
the selected use cases are presented.

Keywords: TransAID · Cooperative automated vehicles · Connected vehicles ·
Automation · Transition of Control · Infrastructure

1 Background

Vehicle automation is considered to be a major step towards a more efficient road
system, both in terms of producing a more stable traffic flow that reduces the risk of
congestion and in terms of improved fuel efficiency due to gains in aerodynamic
performance [1]. It is also expected to have an influence on overall network
performance and capacity [2, 3]. This will especially be the case at higher market
penetration rates of automated vehicles [4, 5].
Automated vehicles alone will not be able to solve traffic congestion on mixed
roads. Next to the ability of automated vehicles to anticipate the

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 31–42, 2019.
https://doi.org/10.1007/978-3-319-99762-9_3

behaviour of manually driven vehicles with their uncertainties (i.e. backwards compatibility [1]), current research is also focusing on traffic management procedures [6, 7], cooperative driving [8] and artificial intelligence [9]. This includes the use of communication among vehicles (V2V) and between vehicles and infrastructure (V2I). Many organizations are trying to enable Cooperative Intelligent Transport Systems (C-ITS) on their major roads, albeit mostly in pilot trials, as explained in [10]. There, they typically equip roadside units with communication and interaction facilities based on the European CAR 2 CAR ITS-G5 standard [11]. There are various overarching projects, such as C-ITS Corridor, InterCor, Compass4D, Talking Traffic, or even the C-Roads Platform signing a Memorandum of Understanding (MoU) for closer collaboration between the automotive industry and road infrastructure providers/managers. This will, in turn, facilitate the uptake of the so-called Day 1 and Day 1.5 services: the former are typical hazardous location notifications, and the latter contain more specific mobility-related information [12].
Within cooperative automation, collective perception is one of the challenging tasks. This service is performed by Connected Autonomous Vehicles (CAVs), Connected Vehicles (CVs) and Road-Side Units (RSUs) to enhance the situational awareness of the driving environment by sharing information collected from their cameras and sensors (i.e. radar or lidar sensors). All objects that are not present in a station's own perception range, such as non-connected vehicles, other road participants and obstacles, are forwarded by the CAVs and CVs, which thereby extends the perception boundary. The concept of sharing collective perception information is studied in different projects and scientific papers, which primarily focus on object sensing and filtering techniques [13–16] or credibility maps [17–19]. Recently, the ETSI Intelligent Transport Systems (ITS) standardization has also been working on standardising collective perception information. Publications such as [17, 20] introduce the so-called Collective Perception Message (CPM), its fusion architecture and the network impact. The results show that CPM messages have a higher transmission latency than the existing Cooperative Awareness Message (CAM) [21]. To reduce the latency, it is suggested to adopt an event-triggered generation rule for the CPM [22], which is close to the standardization draft.
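Such an event-triggered rule can be sketched as a simple inclusion test: a perceived object goes into the next CPM only when its state has changed noticeably or a maximum silence interval has elapsed. The thresholds, data layout and function names below are illustrative assumptions, not values or interfaces from the ETSI draft.

```python
import math

# Hypothetical thresholds for an event-triggered CPM inclusion rule.
POS_THRESHOLD_M = 4.0      # resend if the object moved more than this
SPEED_THRESHOLD_MS = 0.5   # ... or its speed changed more than this
MAX_INTERVAL_S = 1.0       # ... or this much time passed since last inclusion

def objects_to_include(tracked, last_sent, now):
    """Select which tracked objects go into the next CPM.

    tracked:   dict id -> (x, y, speed), current fused object state
    last_sent: dict id -> (x, y, speed, t), state when last transmitted
    """
    selected = []
    for obj_id, (x, y, v) in tracked.items():
        if obj_id not in last_sent:
            selected.append(obj_id)  # never announced before
            continue
        lx, ly, lv, lt = last_sent[obj_id]
        moved = math.hypot(x - lx, y - ly) > POS_THRESHOLD_M
        changed_speed = abs(v - lv) > SPEED_THRESHOLD_MS
        stale = (now - lt) > MAX_INTERVAL_S
        if moved or changed_speed or stale:
            selected.append(obj_id)
    return selected
```

Compared with a fixed-rate broadcast, this keeps channel load low for static scenes while still bounding the worst-case staleness of each object by MAX_INTERVAL_S.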

2 Introduction to TransAID

As the introduction of automated vehicles becomes feasible, even in urban areas, it will be necessary to investigate their impacts on traffic safety and efficiency. This is particularly true during the early stages of market introduction, when automated vehicles of all SAE levels, as discussed in [23], connected vehicles able to communicate via V2X, and conventional vehicles will share the same roads with varying penetration rates. There will be areas and situations on the roads where high automation can be granted, and others where it is not allowed or not possible due to missing sensor inputs, highly complex situations, etc. In these areas many automated vehicles will change their level of automation. We refer to these areas as Transition Areas, as shown in Fig. 1. If a transition fails, a so-called Minimum Risk Maneuver (MRM, e.g. soft braking until stop) is intended; however, this has to be avoided due to its negative impact on traffic flow.

Fig. 1. Areas on the road with Transitions of Control when automation is difficult, not possible,
or restricted. This includes both increase and decrease of automation level.

The EC-funded project TransAID [24] develops and demonstrates traffic management procedures and protocols to enable smooth coexistence of automated, connected, and conventional vehicles, especially at Transition Areas. A hierarchical approach, as depicted in Fig. 2, is followed where control actions are implemented at different layers including centralized traffic management, infrastructure, and vehicles.

Fig. 2. Hierarchical traffic management in TransAID. The infrastructure will integrate the
acquired information at the Traffic Management System (TMS). The TMS will generate
progression plans for the vehicles which are taken over by the infrastructure and communicated
to the vehicles, either by I2V communication or (in case of non-equipped vehicles) by e.g.
variable message signs.

The TransAID project includes simulations to find optimal infrastructure-assisted management solutions to control connected, automated, and conventional vehicles at Transition Areas, taking into account traffic safety and efficiency metrics. Then, and this will be the focus of this paper, measures to detect and inform conventional vehicles are also addressed. The most promising solutions are then implemented as real-world prototypes and demonstrated under real urban conditions. Finally, guidelines for advanced infrastructure-assisted driving are presented. These guidelines also include a roadmap defining activities and needed upgrades of road infrastructure for the upcoming fifteen years in order to guarantee a smooth coexistence of conventional, connected, and automated vehicles.

3 Collective Sensor Data Processing Architectures

Within TransAID, collective perception includes the CVs/CAVs and RSUs in a collective perception service loop to enhance detection capabilities. This is beneficial since moving vehicles may be limited in sensing environmental information due to their own sensor range, sensor mounting positions and various other physical constraints. In this section we describe the sensor setup of our experimental vehicle platform and a camera-based RSU which we use during the experimental evaluation.

3.1 Vehicle-Based Sensor Processing Architecture

Cooperative sensor fusion will be implemented on board the CAV, whose architecture is shown in Fig. 3. It basically consists of the proposed architecture for automated vehicles, enriched by project-specific parts. Alongside such complex CAVs, a reduced architecture is designed for CVs, where the C2X box is linked to a smartphone application.

Fig. 3. CAV architecture implemented on the test vehicles



Next to field testing with real communication and other traffic, the architecture allows the integration of virtual vehicles for tests within virtual or augmented reality. During those tests, common traffic simulation software [25, 26], which is also used in driving simulator experiments, provides realistic behavior of the virtual vehicles. These vehicles are of course not seen by any sensor on the real vehicle and therefore need to be added as input to the sensor data fusion.
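Such an injection step can be sketched as follows: objects coming from the traffic simulation are transformed from the simulation's world frame into the ego vehicle frame and appended to the fused object list, tagged as virtual. The coordinate conventions, field names and the flat 2D pose model are assumptions for illustration, not the project's actual interfaces.

```python
import math

def inject_virtual_vehicles(fused_objects, sim_vehicles, ego_pose):
    """Append simulated vehicles to the real sensor fusion output (sketch).

    fused_objects: list of object dicts from the real sensor fusion
    sim_vehicles:  list of (x, y, heading) in the simulation's world frame
    ego_pose:      (x, y, heading) of the ego vehicle in the same frame
    Virtual objects are tagged so downstream modules can tell them apart.
    """
    ex, ey, eh = ego_pose
    cos_h, sin_h = math.cos(-eh), math.sin(-eh)
    merged = list(fused_objects)
    for sx, sy, sh in sim_vehicles:
        dx, dy = sx - ex, sy - ey
        # rotate the world-frame offset into the ego vehicle frame
        merged.append({
            "x": cos_h * dx - sin_h * dy,
            "y": sin_h * dx + cos_h * dy,
            "heading": sh - eh,
            "virtual": True,  # not seen by any real sensor
        })
    return merged
```

Keeping the "virtual" tag in the object list lets the same fusion and planning code run unchanged while evaluation scripts can still separate real from simulated traffic.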

3.2 Road Side Unit with Sensors


The initial design of the infrastructure architecture is shown in Fig. 4. The infrastructure basically consists of the RSU part and the sensor part. The RSU part manages the calculation of traffic management measures and the provision of related advice using G5 communication. The sensor part interfaces with the available sensors (here, especially a camera) and performs a sensor data fusion. Connected to the RSU, the C2X box receives all necessary data from other vehicles.

Fig. 4. Road Side Unit infrastructure with sensors and communication modules

For the field tests, we will use real RSUs with variable message signs during the project. Each of the signs has an individual power supply and a road side unit offering ITS-G5 communication. The signs are shown in Fig. 5(a); a similar pole is going to be equipped with a hemispheric camera such as the Samsung PNM-9020V. To make sensor data fusion possible, a high-performance computing server is installed to run a state-of-the-art detection and tracking chain as explained in [27, 28] and shown in Fig. 5(b).
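The tracking side of such a chain links detections in consecutive camera frames to existing tracks. Full trackers like the one described in [28] combine appearance features with optimal assignment; purely as an illustration of the association idea, a greedy intersection-over-union (IoU) matching can be sketched like this:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedy IoU matching of track boxes to detection boxes (sketch only).

    Returns (track_index, detection_index) pairs, best overlaps first.
    Real trackers such as [28] add appearance features and the Hungarian
    algorithm; this minimal version just illustrates the matching step.
    """
    pairs = [(iou(t, d), ti, di)
             for ti, t in enumerate(tracks)
             for di, d in enumerate(detections)]
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in sorted(pairs, reverse=True):
        if score < min_iou:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

Unmatched detections would then start new tracks, and tracks unmatched for several frames would be dropped.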

Fig. 5. (a) Mobile variable message signs equipped with a road side unit and the corresponding
antennas, (b) Data fusion output for a virtual perspective of the hemispheric camera

4 Vehicular Communication

TransAID is developing traffic management schemes that assume automated vehicles have communication capabilities. Therefore, the prevention, management and distribution of Transitions of Control (ToC) [29] at transition areas require communication among vehicles and infrastructure (V2I/I2V). The communication performance relies on the use of the currently available ETSI ITS-G5 communication stack. The overall communication system design of TransAID consists of Road Side Infrastructure (RSI) and vehicles, as depicted in Fig. 6.

Fig. 6. TransAID communication system design. The RSI (collective perception, ITS-G5 V2I/I2V/V2V, road authority, TMC/TCC, measures, road sensors, services, smartphone services, VMS panel) interacts with CAVs, AVs, CVs and LVs via direct communication (e.g. ITS-G5), signaling (e.g. VMS, road signs, traffic lights), and signaling from automated to other vehicles (e.g. lights on a (C)AV).

The different arrows represent different types of communication. The solid arrows indicate direct communication via ITS-G5. It is based on an extension of the ETSI ITS standards to transmit vehicle and road advisory related information and supports the definition and execution of traffic management policies. The dotted blue arrows represent conventional signaling measures such as Variable Message Sign (VMS) panels and possible new measures to reach AVs. The dotted green arrows are more exclusive to TransAID and/or automated driving developments. Those arrows represent measures to convey information from AVs to other vehicles such as Legacy Vehicles (LVs), for example light indicators on the back of the vehicle.
To exchange traffic information via V2V and V2I, selected communication messages are intended to be used in TransAID, such as CAM [21], CPM [22], MAP [30], IVI [31], etc. Each message serves a specific function. For example, Cooperative Awareness Messages (CAMs) [21] are distributed within the ITS-G5 network to share surrounding information. This information includes the presence, position and basic status of the neighboring ITS stations that are reachable within a single-hop distance. All participating ITS stations within the V2X network have the means to generate and share their state vector information (time, position, direction, etc.) within their neighborhood region. Upon reception, the relevance of the messages and information can be evaluated to support different ITS applications. For example, by comparing the status of the originating ITS station with its own status, a receiving ITS station is able to estimate the collision risk with the originating station and, if necessary, may inform the vehicle's driver or any available vehicle automation.
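As an illustration of such a risk check (not part of any standard), the following sketch extrapolates two CAM-style state vectors under a constant-velocity assumption and returns the time and distance of closest approach; a small distance at a near-future time would flag a potential conflict. The field names are hypothetical.

```python
import math

def time_to_closest_approach(own, other):
    """Estimate when two ITS stations get closest, from CAM-style state
    vectors {x, y, vx, vy}, assuming constant velocity.

    Returns (t, dist): the time of closest approach (clamped to >= 0,
    i.e. "now" if the stations are already separating) and the predicted
    separation distance at that time.
    """
    rx, ry = other["x"] - own["x"], other["y"] - own["y"]
    vx, vy = other["vx"] - own["vx"], other["vy"] - own["vy"]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return 0.0, math.hypot(rx, ry)  # no relative motion
    t = max(0.0, -(rx * vx + ry * vy) / v2)
    dist = math.hypot(rx + vx * t, ry + vy * t)
    return t, dist
```

An application would compare dist against a safety radius and t against a reaction-time horizon before warning the driver or the automation.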
Another message, the Collective Perception Message (CPM) [22], aims to share driving environment information among ITS stations. To this end, the Collective Perception Service (CPS) provides abstract descriptions of other road participants, obstacles, etc. Collective perception helps to minimize an ITS station's uncertainty about the current environment, as other stations contribute context information. The specification includes the syntax and semantics of the CPS and of the data, along with message handling, to increase awareness in a cooperative manner. Furthermore, to increase traffic safety, information about perceived objects is included in the CPM and shared with other ITS stations, where it is used by safety applications on the receiving side. Objects relevant for traffic safety are either static or dynamic; the latter are located on the driving lanes or have the ability to move. The objective of transmitting object information as part of the CPM is not to share or compare traffic-regulation information such as traffic signs and traffic light information. Instead, data about objects that would otherwise not be available to other ITS stations is provided. These could, e.g., be objects that are only temporarily present, such as traffic participants or temporary obstacles, that nevertheless require priority. The MAP message is intended to serve as a basis for other messages by providing a topological reference. The topological information defines an intersection where nodes are placed on lanes. These nodes can have attributes and a location that can be converted into latitude/longitude coordinates according to WGS84. The IVI (In-Vehicle Information) message will also be used in TransAID, conveying information about infrastructure-based traffic services which are needed for the implementation of road safety and traffic efficiency measures.
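The node-to-WGS84 conversion mentioned for the MAP message can be illustrated with a flat-earth approximation: node offsets (here assumed to be centimetres east/north of the intersection reference position, following the SAE J2735 convention) are turned into latitude/longitude deltas. This is a sketch that is only adequate at intersection scale; the function name and argument layout are invented.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def node_offset_to_wgs84(ref_lat_deg, ref_lon_deg, dx_cm, dy_cm):
    """Convert a MAP-style node offset (cm east/north of the intersection
    reference point) into WGS84 latitude/longitude in degrees.

    Flat-earth approximation: one radian of latitude spans one Earth
    radius; longitude is additionally scaled by cos(latitude).
    """
    dlat_rad = (dy_cm / 100.0) / EARTH_RADIUS_M
    dlon_rad = (dx_cm / 100.0) / (
        EARTH_RADIUS_M * math.cos(math.radians(ref_lat_deg)))
    return (ref_lat_deg + math.degrees(dlat_rad),
            ref_lon_deg + math.degrees(dlon_rad))
```

Sub-metre node spacing at a few hundred metres from the reference point stays well within the error of this approximation, which is why compact offsets are sufficient for intersection topologies.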

5 Selected Use Cases with Potential Benefits of V2X

Next to cooperative awareness messaging, collective perception is expected to increase safety in ad-hoc situations such as congestion and other incidents, especially in mixed traffic with non-cooperative vehicles. With collective perception, CAVs can adapt their path planning in advance, and e.g. hard maneuvers are prevented.
To evaluate the benefits of collaborative perception, the following exemplary scenarios are investigated [32]. First, if regular lanes are blocked in case of an incident, as shown in Fig. 7, obstacle avoidance is achieved by e.g. using a bus lane or passing through other accessible terrain. Collective perception input comes from other vehicles and, especially at hazard spots, also from possible RSIs with their own sensors. Next to the incident itself, CVs/CAVs may not be able to avoid the obstacle and will stop in front of it, which can also be detected and subsequently avoided. Related use cases are blockage avoidance at intersections or foresighted driving towards the tail end of a traffic jam.

Fig. 7. Example use case with a blocked road, where the vehicle drives on the bus lane, which is usually not allowed

On motorway merge and diverge segments, lane changing can be critical in high traffic density, as shown in Fig. 8. Collective perception can provide advice for cooperative lane changes, e.g. to generate free space for merging. Other related use cases on such road segments are optimized CAV platoon driving, handling queues at exit lanes, or early traffic separation for diverging.

Fig. 8. Example use case with merging lanes on motorway and dense traffic

Another scenario deals with situations where the road is impassable for automated vehicles (e.g. where they are not able to drive safely, or where automation is not allowed), as shown in Fig. 9. In such cases, CAVs might be guided to safe spots (e.g. side lanes or a parking area). At congestion or hazard spots, an RSI may also be available for monitoring free-space areas and for providing them through collaborative perception.

Fig. 9. Example use case with impassable road segment due to bad visual conditions

For these and some further use cases, the TransAID project is going to measure the impact of collaborative perception (and other measures) on ToC maneuvers, which may fail and lead to MRMs that in turn generate further traffic congestion. It is expected that the ToC probability can be decreased with the developed measures, which will be tested in simulation and real driving experiments.

6 Demonstration Setup

Within TransAID, the above described traffic situations will be replicated in simulation [25] and in reproducible full-scale demonstration tests. For the latter outdoor tests, it is planned to use multiple vehicles with different capabilities to act as LV, CV, AV or CAV. One of the advanced test demonstrators, named FasCar-E [10], will be used, which is capable of drive-by-wire control and advanced automation. It is equipped with a combination of standard sensors used by the manufacturer for adaptive cruise control and lane departure warnings, as well as more expensive non-standard sensors which provide more accurate ego-localization and object tracking capabilities.
FasCar-E, as shown in Fig. 10, is equipped with four Ibeo LUX laser scanners for close and mid-range obstacle detection and tracking. The laser scanners operate at a frequency of 25 Hz and cover a horizontal field of view of approximately 180 degrees to the front and 85 degrees to the rear of the vehicle. A Bosch RADAR with a detection range of up to 160 m and two SMS RADARs with a detection range of 70 m are available for object detection at mid and long range. A NovAtel SPAN-CPT provides fused GPS/DGPS and IMU position data at 100 Hz.
The vehicle can drive automatically by commanding the demanded values via the dSPACE AutoBox to the vehicle CAN bus and from there to the corresponding vehicle subsystems, like the ACC, lane keeping and park assistance systems. It also has a widescreen display installed in place of the instrument cluster for interaction with the driver. For TransAID, the vehicle uses a Cohda Wireless MK5 On-Board Unit for communication. Next to this specific car, at least one further vehicle is being equipped in a similar manner.
Next to initial driving tests on the roads and parking spaces of the DLR campus in Braunschweig, large-scale driving is planned at the closed Peine-Eddesse Air Field 20 km to the northwest, as shown in Fig. 11. The test track on the 900 m runway can be equipped with virtual road markings visible to the vehicles only. This makes the air field very flexible, and it can be used for longitudinal and lateral vehicle automation.

Fig. 10. Experimental vehicle FasCar-E with the equipped sensors.

Fig. 11. Peine-Eddesse Air Field with virtual lanes and intersections (Image source: Google
Earth)

7 Conclusion

The paper presents the project TransAID, which aims to develop infrastructure-based traffic management procedures and guidelines for a smooth coexistence between automated, connected and conventional vehicles during the market introduction phase of ICT technologies for automated driving. The paper focuses on the sensor and communication architecture to be integrated into vehicles and stationary road side units. With that, specific use cases are addressed where automated driving is difficult and where cooperative sensing is expected to reduce automation failures and transitions of control.

Acknowledgement. This work has been supported by the EC within the Horizon 2020
Framework Programme, Project TransAID under Grant Agreement No. 723390.

References
1. Van, R.J., Martens, M.H.: Automated driving and its effect on the safety ecosystem: how do
compatibility issues affect the transition period? Procedia Manuf. 3, 3280–3285 (2015)
2. Atkins Ltd.: Research on the impacts of connected and autonomous vehicles (CAVs) on
traffic flow. Summary Report, Version 1.1, Department of Transport (2016)
3. Hoogendoorn, R., Van, B., Hoogendoorn, S.: Automated driving, traffic flow efficiency and
human factors: a literature review. In: 93rd Transportation Research Board Annual Meeting,
USA (2014)
4. Aria, E., Olstam, J., Schwietering, C.: Investigation of automated vehicle effects on driver’s
behavior and traffic performance. Transp. Res. Procedia 15, 761–770 (2016)
5. Mahmassani, H.S.: Autonomous vehicles and connected vehicle systems: flow and
operations considerations. Transp. Sci. 50(4), 1140–1162 (2016)
6. Baskar, L.D., de Schutter, B., Hellendoorn, J., Papp, Z.: Traffic control and intelligent
vehicle highway systems: a survey. IET Intell. Transp. Syst. 5(1), 38–52 (2011)
7. Birnie, J.: Can regional operational traffic management stand on its own after fifteen years?
NM Mag. 10(1), 8–13 (2015). (in Dutch)
8. Van Waes, F., van der Vliet, H.: The road to c-its and automated driving. NM Mag. 12(2),
16–17 (2017). (in Dutch)
9. Cheng, H.: Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation.
Springer, Berlin (2011)
10. Kaschwich, C., Wölfel, L.: Experimental vehicles FASCar-II and FASCar-E. J. Large Scale
Res. Facil. 3 (2017). A111. http://doi.org/10.17815/jlsrf-3-147
11. http://www.car-2-car.org/
12. European Commission: A European strategy on cooperative intelligent transport systems, a
milestone towards cooperative, connected and automated mobility (2016). https://ec.europa.
eu/transport/sites/transport/files/com20160766_en.pdf. Accessed 13 June 2018
13. Karam, N., Chausse, F., Aufrere, R., Chaupuis, R.: Cooperative multi-vehicle localization.
In: IEEE Intelligent Vehicles Symposium, pp. 564–570 (2006)
14. Kim, S.W., et al.: Multivehicle cooperative driving using cooperative perception: design and
experimental validation. IEEE Trans. Intell. Transp. Syst. 16(2), 663–680 (2015)
15. Mourllion, B., Lambert, A., Gruyer, D., Aubert, D.: Collaborative perception for collision
avoidance. In: IEEE International Conference on Networking, Sensing and Control, pp. 880–
885 (2004)
16. Zhu, H., Mihaylova, K.V., Leung, H.: Overview of environment perception for intelligent
vehicles. IEEE Trans. Intell. Transp. Syst. 18(10), 2581–2601 (2017)
17. Nguyen, T.N., Michaelis, B., Al-Hamadi, A., Tomow, M., Meinecke, M.M.: Stereo-camera
based urban environment perception using occupancy grid and object tracking. IEEE Trans.
Intell. Transp. Syst. 13(1), 154–165 (2012)
18. Sivaraman, S., Trivedi, M.M.: Dynamic probabilistic drivability maps for lane change and
merge driver assistance. IEEE Trans. Intell. Transp. Syst. 15(5), 2063–2073 (2014)
19. Zhao, X., Mu, K., Hui, F., Prehofer, C.: A cooperative vehicle-infrastructure based urban
driving environment perception method using a D-S theory-based credibility map. Opt. Int.
J. Light Electron Opt. 138, 407–415 (2017)

20. Rauch, A., Klanner, F., Rasshofer, R., Dietmayer, K.: Car2x-based perception in a high-level
fusion architecture for cooperative perception systems. In: IEEE Intelligent Vehicles
Symposium, pp. 270–275 (2012)
21. ETSI: Intelligent Transport System (ITS); Vehicular Communications; Basic Set of
Applications; Part 2: Specification of Cooperative Awareness Basic Service. Draft TS 302
637–2 V1.3.2 (2014)
22. ETSI: Intelligent Transport System (ITS); Vehicular Communications; Basic Set of
Applications; Specification of the Collective Perception Service. Draft TS 103 324
V0.0.12 (2017)
23. SAE International: Taxonomy and definitions for terms related to driving automation
systems for on-road motor vehicles (2018). http://standards.sae.org/j3016_201806/.
Accessed 18 June 2018
24. Lu, M., et al.: Transition areas for infrastructure-assisted driving. www.transaid.eu/.
Accessed 13 June 2018
25. http://www.ict-itetris.eu/simulator/
26. Fischer, M., et al.: Modular and scalable driving simulator hardware and software for the
development of future driver assistance and automation systems. In: Driving Simulator
Conference, pp. 223–229 (2014). https://www.researchgate.net/publication/265908625_
Modular_and_Scalable_Driving_Simulator_Hardware_and_Software_for_the_
Development_of_Future_Driver_Assistence_and_Automation_Systems
27. Smeulders, A., et al.: Visual tracking: an experimental survey. IEEE Trans. Pattern Anal.
Mach. Intell. 36(7), 1442–1468 (2013)
28. Wojke, N., Bewley, A., Paulus, D.: Simple online and realtime tracking with a deep
association metric. In: IEEE International Conference on Image Processing (ICIP),
pp. 3645–3649 (2017)
29. Lu, Z., Happee, R., Cabrall, C.D., Kyriakidis, M., de Winter, J.: Human Factors of
Transitions in Automated Driving: A General Framework and Literature Survey.
Transp. Res. Part F Traffic Psychol. Behav. 43, 183–198 (2016). https://www.
researchgate.net/publication/304624338_Human_Factors_of_Transitions_in_Automated_
Driving_A_General_Framework_and_Literature_Survey
30. http://www.smartmobilitycommunity.eu
31. Eco-AT Consortium: SWAP 2.1 use cases, in-vehicle information, WP2—system definition,
version 4. http://eco-at.info. Accessed 13 June 2018
32. TransAID: Scenarios, Use Cases and Requirements; Deliverable D2.1 (2018)
Real Time Recognition of Non-driving Related
Tasks in the Context of Highly
Automated Driving

Timo Pech1(&), Stephan Enhuber1(&), Bernhard Wandtner2(&),
Gerald Schmidt2(&), and Gerd Wanielik1(&)
1 Chemnitz University of Technology, Chemnitz, Germany
{timo.pech,stephan-michael.enhuber,gerd.wanielik}@etit.tu-chemnitz.de
2 Opel Automobile GmbH, Rüsselsheim am Main, Germany
{bernhard.wandtner,gerald.schmidt}@opel.com

Abstract. With the continuous development and improvement of advanced driver assistance systems up to highly automated driving functions, the driving task is changing. There is no need for the driver to permanently supervise automated driving functions of SAE J3016 level 3 and 4; the driver is allowed to engage in non-driving related tasks temporarily. However, if the automated vehicle reaches its limitations, the driver needs to react appropriately to a take-over request. Driver state monitoring systems might enable adaptive take-over concepts to support the driver in such situations. In order to recognize the currently performed non-driving related task with a technical system, it is necessary to fuse different features from several measurement signals. Main features of non-driving related tasks include the driver's visual orientation and the position of his or her hands. In this paper, a methodology is presented to detect non-driving related tasks using Hidden Markov Models to represent the temporal relationships of characteristic features. Measurement data was obtained from participants in a driving simulator and used to train and evaluate the presented system with various non-driving related tasks.

Keywords: Automated driving · Driver monitoring · Driver's action detection · Non-driving related tasks classification

1 Introduction

1.1 Background
The active role of the driver in vehicle guidance changes to a more passive one with the continuous development of highly automated driving functions. In highly automated driving, longitudinal and lateral vehicle guidance is performed by a system within given application areas (e.g. highway roads). The driver does not need to permanently monitor the driving environment and could theoretically turn his attention

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 43–55, 2019.
https://doi.org/10.1007/978-3-319-99762-9_4

to non-driving related (NDR) tasks, like phoning or checking e-mails. However, in some situations the automated vehicle reaches its limitations and issues a take-over request to the driver. To hand the responsibility for vehicle guidance back to the driver, the driver's state is important information for creating an adaptive, safe and optimal transition concept.
Generally, the driver state refers to the physical and mental conditions of the driver that can be described by the interaction of different factors. The most important aspects include driver alertness (impaired by fatigue/drowsiness) on the one hand and driver attention (impaired by NDR-tasks) on the other hand, cf. [1]. The present approach focuses mainly on the driver's attentional state. A driver monitoring system (DMS) was implemented to recognise the driver's currently performed activity, which is described in the next chapters. This is due to the fact that it is more practical in a first stage to describe the driver's concrete actions by sensor measurements than her or his attentional state. These NDR-tasks can be seen as a representation of the driver's attentional state. Each NDR-task investigated in this paper constitutes an exemplary distinct combination of auditory, visual, vocal or manual distraction.

1.2 State of the Art


In the past several years, different applications of driver state monitoring have been studied in research [2]. The focus of early DMS was on the driver's vigilance and fatigue [3–5]. Current DMS research concentrates on a wide range of driver actions in the context of manual and automated driving. The information about the driver's behaviour is used to estimate probable manoeuvre actions [6] and to recognise in-vehicle actions related to the driving task, like gear shifting [7], or not related to the driving task, like video watching, cf. [8].
For these applications, different machine learning methodologies are used, like Support Vector Machines [8], Hidden Markov Models [9] and newer deep learning techniques like Convolutional Neural Networks, cf. [10, 11]. What these DMS approaches have in common are the used information sources and the aim to classify phenomenologically described activities. Most attention is paid to head and eye related information to identify NDR activities, e.g. mailing, video watching or reading, cf. [8]. In addition, hand location [9, 12] and body pose estimation [7] are essential features to detect activities like phoning, eating, texting or reaching behind [10, 11]. The regions where the driver's interactions take place are used as an indication for inferring the current activity. In that regard, real-time tracking and an appropriate combination of head, eye and hand patterns are especially important.

2 Classification of NDR-Tasks

During automated driving, the driver does not need to continuously attend to the driving task. The driver has his hands free and can focus his attention on NDR-tasks like reading or using a laptop, tablet or mobile phone. In comparison to other human activity recognition problems, the set of in-vehicle tasks is restricted by the geometry and design of the car but, in contrast to the manual driving task, the driver has many more degrees of freedom.
To investigate how potential NDR-tasks can be classified, measurement data of 44 test persons was acquired in a driving simulator with the sensor setup described in Sect. 2.2. All participants completed an approximately one-hour ride in the driving simulator using a highly automated mode. During the drive, each participant executed different NDR-tasks. Table 1 shows the NDR-tasks that should be detected by the DMS approach presented in this paper. For the detection task, it is necessary to define a baseline state of the driver. This condition is defined as monitoring the driving situation in a normal upright position on the driver's seat. This state is referred to as "baseline" in the following sections.

Table 1. Set of used non-driving related tasks with description of their realization

Repeating spoken text: Auditory presented sentences, repeating verbally
Reading out text: Written sentences presented on tablet computer (attached in center console), reading out aloud
Texting (mounted): Transcribing text on tablet computer, attached in center console
Texting (handheld): Transcribing text on tablet computer, performed handheld
Reaching for object: passenger seat: Searching for specific Lego bricks and placing these in a box on the passenger seat
Cell-phone talk (handheld): Receiving a call from the experimenter

2.1 Approach
NDR-tasks can hardly be measured directly from a single signal: the set of possible
tasks is too large, and individual tasks can have complex characteristics. Hence, it is
necessary to fuse features from several measurement signals to infer the task the driver
is currently executing. To identify these features, a task can be divided into sub-tasks,
see Fig. 1. For example, the NDR-task “cell-phone talk” can be divided temporally into
sub-tasks, beginning with the first glance at the phone and ending when the driver puts
the phone back into its place.

Fig. 1. Sub-tasks of example non-driving related task “cell-phone talk”


46 T. Pech et al.

Analysing such an activity makes task-characteristic features clearly visible. The
actions of the driver can be described by two main features: visual orientation and
position of the hands. Our approach to estimating NDR-tasks is therefore primarily
based on features describing regions of interest (ROI) for task-related glance and hand
positions, see Fig. 2.

Fig. 2. Identified glance (left) and hand (right) positions of the defined non-driving related tasks

In addition to the main features, glance and hand area, secondary features are
needed to distinguish between tasks with similar visual and biomechanical
characteristics, such as “repeating spoken text” and “baseline”. In this approach, mouth
movement is analysed to estimate whether the driver is speaking. It is added as a
further feature to describe the characteristics of NDR-tasks like “repeating spoken
text” or “reading out text” (see Table 1) in more detail.

2.2 System Overview


Figure 3 shows an overview of the system used to estimate NDR-tasks. On the sensor
level, 2D and 3D cameras provide the raw measurement data. The 2D cameras located
in front of the driver (Fig. 4) are used for head tracking at VGA resolution and 30 Hz
frame rate. To determine the current hand position around the driver, a 3D sensor with
a resolution of 176 × 132 pixels at 25 Hz is applied.

Fig. 3. Overview of the system architecture for estimating non-driving related tasks in the
context of automated driving.

Fig. 4. Sensorial setup for detecting non-driving related tasks of the driver (left) and example
images from the 2D (C1 & C2) and 3D (C3) cameras (right).

2.3 Determining the Driver’s Glance Area


The preprocessing modules extract features from the raw measurement data, see Fig. 3.
The determination of the driver’s current glance area is based on head-tracking
measurements, which were chosen for their unobtrusiveness and robustness. For large
head rotations, the eyes are not reliably detectable without a head-mounted eye tracker
or expensive and complex sensor configurations. Moreover, compared to eye
movements, head orientation is a more stable basis for estimating a region of interest,
because the eyes perform high-frequency fixation shifts. Furthermore, glasses that may
cover the driver’s eyes decrease the accuracy of eye detection. Therefore, an approach
was chosen that estimates the driver’s current glance area from the measured head
position and orientation with a Bayes classifier, cf. [13]. The classifier was trained with
a random subset of the test persons’ data acquired in the driving simulator, as described
above. The created training data contains the feature vector [x_Hpos, y_Hpos, z_Hpos,
φ_x,Hrot, ϑ_y,Hrot], which indicates the 3D position and rotation (pitch and yaw angle)
of the head, together with the label information of the annotated glance areas, see
Fig. 2. Testing the classifier with the evaluation data set shows a mean accuracy of 80%.
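As an illustration of this classification step, a naive Bayes classifier with Gaussian per-feature likelihoods over the head-pose vector can be sketched as follows; the two glance areas and all class statistics are made-up placeholders, not the parameters trained in the paper.

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density for a single head-pose feature."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify_glance(features, model):
    """Return the glance area maximizing log prior + sum of log likelihoods.
    `features` is [x, y, z, pitch, yaw]; `model` maps area -> (prior, means, vars)."""
    best_area, best_score = None, float("-inf")
    for area, (prior, means, variances) in model.items():
        score = math.log(prior) + sum(
            math.log(gaussian_pdf(f, m, v))
            for f, m, v in zip(features, means, variances)
        )
        if score > best_score:
            best_area, best_score = area, score
    return best_area

# Illustrative model with two glance areas; all statistics are invented.
model = {
    "windshield":     (0.6, [0.0,  0.0, 0.0,   0.0,  0.0], [1.0, 1.0, 1.0, 25.0, 25.0]),
    "center_console": (0.4, [0.0, -0.1, 0.1, -30.0, 20.0], [1.0, 1.0, 1.0, 25.0, 25.0]),
}
print(classify_glance([0.0, -0.1, 0.1, -28.0, 18.0], model))  # center_console
```

In practice the per-area means and variances would be estimated from the annotated simulator data rather than set by hand.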

2.4 Detecting Whether the Driver Is Speaking


Some of the predefined NDR-tasks are characterized by speaking. For instance, it is
not possible to distinguish between the NDR-tasks “repeating spoken text” and
“baseline” using only information about the position of the hands and the orientation
of the driver’s head, since these actions do not exclusively allocate the visual and
manual channels. Therefore, a feature is needed that indicates whether or not the driver
is speaking. This can be realized by several methodologies. Based on the information
already available from the head-tracking algorithm, a simple approach for detecting
whether the driver is speaking was implemented. This method analyses the temporal
mouth movement using facial landmark points: the distance between the upper and
lower lip landmark points is determined and normalized with the estimated head-camera offset.
Within a sliding window, the variance of the distance values over time is calculated.
Finally, the decision whether the driver is speaking is made by applying an empirical
threshold to this variance.
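The described variance test can be sketched as follows; the window length and threshold are illustrative placeholders, not the empirically tuned values of the paper.

```python
from collections import deque
from statistics import pvariance

class SpeakingDetector:
    """Thresholded sliding-window variance of the normalized lip distance."""

    def __init__(self, window_len=10, threshold=5e-6):
        self.window = deque(maxlen=window_len)
        self.threshold = threshold  # empirical in the paper; placeholder here

    def update(self, lip_distance, head_camera_offset):
        # Normalize the lip landmark distance by the head-camera offset so the
        # feature does not depend on how close the driver sits to the camera.
        self.window.append(lip_distance / head_camera_offset)
        if len(self.window) < self.window.maxlen:
            return False  # not enough samples in the window yet
        return pvariance(self.window) > self.threshold

det = SpeakingDetector()
still = [det.update(0.020, 0.6) for _ in range(10)]                      # closed mouth
talking = [det.update(0.020 + 0.004 * (i % 2), 0.6) for i in range(10)]  # moving lips
print(still[-1], talking[-1])  # False True
```

A static mouth keeps the variance near zero, while oscillating lip distances push it above the threshold after the window fills.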

2.5 Determining the Driver’s Hand Position


To determine the hand position of the driver, the vehicle interior is divided into a set of
regions of interest (ROI), see Fig. 3. For each ROI, an occupancy grid [14] is applied to
determine whether the driver’s hand occupies the ROI. The depth data extracted from
the 3D camera (see Fig. 4) updates the related grid cells. For this, a binary Bayes filter
is used to calculate the log-odds l^t_{x,y} as a measure of whether the cell m at the
coordinates x, y is occupied at time t, given the depth data z_t:

    l^t_{x,y} = l^{t-1}_{x,y} + log10[ p(m_{x,y} | z_t) / (1 - p(m_{x,y} | z_t)) ] - l^0_{x,y}    (1)

In Eq. (1), l^{t-1}_{x,y} denotes the previous cell state and l^0_{x,y} the initial cell state. This
update function, characteristic of occupancy grids, stabilises the result over a sequence
of sensor data and thereby suppresses measurement errors. The occupancy probability
p(m_{x,y} | z_t) of cell m given a measurement z is unknown; therefore, a defined inverse
sensor model is utilised to determine the probability that a cell is occupied, given a
depth measurement. Figure 5 shows this calculation in a practical example. Due to the
position of the 3D camera, the static vehicle interior plausibly yields larger range
measurements than dynamic objects in the foreground. In this approach it is assumed
that the dynamically changing values are caused by the driver. With the knowledge of
the static values, which stem from the vehicle interior, the above-mentioned inverse
sensor model can be parameterized: it reinforces depth data caused by dynamic objects
and weakens depth values that are likely caused by static surfaces. Finally, the average
value of all cells inside an ROI indicates the occupation of this ROI. Owing to the
characteristics of a specific driver action, the resulting vector of hand ROI states
x_h = [h_1, h_2, …, h_i]^T with h_i ∈ {0, 1} represents a typical pattern of several
occupied ROIs. With the same method, the tablet position is determined, indicating
whether the tablet is in its holder; it is added as an additional feature tp.
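The per-cell update of Eq. (1), combined with a simple inverse sensor model, can be sketched as follows; the probability values and the depth margin are illustrative assumptions, not the parameterization used in the paper.

```python
import math

def update_cell(l_prev, p_occ, l_init=0.0):
    """Log-odds update of Eq. (1): l_t = l_{t-1} + log10(p / (1 - p)) - l_0."""
    return l_prev + math.log10(p_occ / (1.0 - p_occ)) - l_init

def inverse_sensor_model(depth, static_depth, margin=0.05):
    """Illustrative inverse sensor model: a depth reading clearly in front of
    the known static interior suggests an occupied cell (hand present)."""
    return 0.8 if depth < static_depth - margin else 0.3

# A hand entering the cell (0.4 m, against a 0.9 m static background) drives
# the log-odds up only after several consistent frames, suppressing noise.
l = 0.0
for depth in [0.9, 0.9, 0.4, 0.4, 0.4]:
    l = update_cell(l, inverse_sensor_model(depth, static_depth=0.9))
print(l > 0)  # True: cell considered occupied
```

Averaging such cell values over an ROI then yields the binary entries h_i of the hand state vector.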

2.6 Hidden Markov Models for NDR-Task Classification


A Hidden Markov Model (HMM) is a structured probabilistic methodology for
creating a statistical model of a process with a set of emitting states and transitions
between these states. In this approach, discrete HMMs are used. They are described by
a random variable X_t, whose realization corresponds to the states of a finite set
{S_1, S_2, …, S_n}, n ∈ ℕ, and by corresponding observations represented by the
matrix B, cf. [15]. For the classification of NDR-tasks, the HMM models the temporal
sequence of an action with all its single fragments, cf. Fig. 1. The task sequence is
abstractly divided into a set of states and organised in an HMM structure. A left-right
structure with n = 12 states was empirically determined as the most suitable structure
for these temporal processes. Figure 6 shows the graphical representation of an exemplary

Fig. 5. Example data for calculating the occupancy of a hand ROI; black – the occurring depth
values of the static environment of the vehicle interior; white – the occurring depth values if a
hand activity is executed in the ROI.

Fig. 6. Exemplary left-right structure of an HMM

left-right HMM, in which only transitions to higher states are allowed. The nodes in
this graph correspond to the states of the variable X_t, and the connections between
these nodes represent the possible transitions between the states.
To estimate which subtask in the sequence is the current one (cf. Fig. 1), only the
measurements in the feature vector are available. It is, e.g., not directly observable
whether the driver takes the phone from the co-driver seat; only the current and
previous hand positions are known, and the state inside the task sequence must be
inferred. This state is referred to as hidden, and assumptions about the current state of
X_t can only be made using measurable observations. The transition between the states
of X_t is described by an n × n state transition matrix A. The element A_ij gives the
conditional probability of the transition from state S_i to S_j, cf. [15].

    A_ij = P(X_t = S_j | X_{t-1} = S_i),   0 < i, j ≤ n,   i, j, n ∈ ℕ    (2)

In general, the observations of an HMM are described by probabilities, which are
combined in the observation matrix B [15]. The element B_ij describes the conditional
probability of a particular feature vector, represented as observation j, in state i.
Through this probabilistic description, uncertainties arising from inter-individual
differences in the execution of the NDR-tasks and from potential measurement errors
in the preprocessing modules can be taken into account.
For a complete description, the a priori probabilities of the states {S_1, S_2, …, S_n}
are also required for the initialization time step t = t_0. They are summarized in the
vector Π, cf. [15]:

    Π_i = P(X_0 = S_i),   0 < i ≤ n,   i, n ∈ ℕ    (3)

The input feature vector for the HMM contains the results of the preprocessing
modules: the detected glance area of the driver, the hand positions, the tablet position
and the estimate of whether the driver is speaking:

    x_t = [g_t, h_{1,t}, h_{2,t}, …, h_{8,t}, tp_t, s_t]^T    (4)

The discrete variable g_t represents the calculated glance area of the driver at time t.
h_{1,t}, …, h_{8,t} contain the binary occupation states of the defined possible hand
regions. In contrast to the glance area g_t, the occupancy information of the hand ROIs
is not represented in a single variable because, due to the characteristics of the related
NDR-task, more than one region can be occupied at a time t. tp_t indicates whether the
tablet is inside the holder, and the last feature, s_t, indicates the binary state of whether
the driver is speaking.
To apply HMMs for real-time NDR-task classification, a training step is needed
first, i.e. learning the model parameters A, B and Π from a given training data set. The
training dataset is created from a subset of the measurement data acquired in the
driving simulator, where participants executed the defined NDR-tasks, see Sect. 2.
These are labelled in an annotation process to obtain various sequences from different
persons. For each NDR-task defined in Table 1, one corresponding HMM was trained
using the Baum-Welch algorithm [15].
For real-time detection of NDR-tasks within the trained task set (Table 1), a sliding
window is applied to generate an input sequence O. Each trained HMM λ_i is fed with
the currently observed temporal sequence of the feature vector (see Eq. 4). The
probability of the sequence given the model description, P(O|λ_i), is calculated for all
trained HMMs λ_i via the backward algorithm (cf. [15]) and compared among the
models. Following maximum a posteriori (MAP) estimation, the HMM with the
highest observation probability fits the observed sequence best; the corresponding
NDR-task i forms the result of the classification.
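The MAP decision over task-specific HMMs can be sketched as below. The paper evaluates P(O|λ_i) with the backward algorithm; the forward recursion used here computes the same quantity. The two toy models and the binary observation alphabet are invented for illustration.

```python
def likelihood(obs, pi, A, B):
    """Forward algorithm: P(O | lambda) for a discrete HMM lambda = (pi, A, B)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]          # initialization
    for o in obs[1:]:                                         # induction
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)                                         # termination

def classify(obs, models):
    """MAP decision: the task whose HMM explains the observation sequence best."""
    return max(models, key=lambda name: likelihood(obs, *models[name]))

# Two toy 2-state left-right HMMs over one binary symbol
# (0 = hand in steering-wheel region, 1 = hand in phone region); values made up.
models = {
    "baseline":        ([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]],
                        [[0.9, 0.1], [0.8, 0.2]]),
    "cell_phone_talk": ([1.0, 0.0], [[0.5, 0.5], [0.0, 1.0]],
                        [[0.6, 0.4], [0.1, 0.9]]),
}
print(classify([0, 1, 1, 1, 1], models))  # cell_phone_talk
print(classify([0, 0, 0, 0, 0], models))  # baseline
```

In the real system each observation would be the full feature vector of Eq. (4), taken from the sliding window, and one model per Table 1 task would compete.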

3 Results

To evaluate the performance of NDR-task detection using one trained HMM per task,
the set of sequences of each single participant is left out of the training step and then
presented as a test set. This leave-one-out cross-validation, separated by participants,
provides the following true positive and false positive detection rates. Figure 7 depicts
the performance of the detection of the defined NDR-tasks
(Table 1) as well as of “baseline”, which represents free monitoring of the driving
situation without performing any specific task.
It should be noted that the detection rate is relative to the execution time of the
respective task. Thus, if the driver is phoning for two minutes, the system on average
detects nearly one minute of that correctly as “cell-phone talk”. Due to the sliding
window approach, the detection result of the HMMs has a delay of at least the defined
sliding window length, which explains why the complete execution time of a task
cannot be detected.
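The participant-wise leave-one-out split used for this evaluation can be sketched as follows; the participant IDs and sequence labels are placeholders.

```python
def leave_one_participant_out(sequences):
    """Yield (held_out, train, test) splits where the test set holds all
    sequences of exactly one participant. `sequences` is a list of
    (participant_id, sequence) pairs."""
    participants = sorted({pid for pid, _ in sequences})
    for held_out in participants:
        train = [s for pid, s in sequences if pid != held_out]
        test = [s for pid, s in sequences if pid == held_out]
        yield held_out, train, test

data = [("p1", "seqA"), ("p1", "seqB"), ("p2", "seqC"), ("p3", "seqD")]
splits = list(leave_one_participant_out(data))
print(len(splits))  # 3 folds, one per participant
```

Splitting by participant rather than by sequence ensures that the HMMs are always tested on a person they have never seen during training.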
The results show that the detection of NDR-tasks with characteristic glance and
hand areas, e.g. “texting (handheld)” or “reaching for object: passenger seat”, performs
much better than that of tasks with no concrete allocation of the visual and manual
channels, like “repeating spoken text” or “baseline”. In this respect, the true positive
rate of the task “cell-phone talk (handheld)” represents an anomaly, because there is no
reliable detection of whether the driver holds the phone to the head: the field of view of
the 3D sensor only covers the right side of the driver, so use of the phone with the left
hand cannot be detected.
Given the true positive and false positive rates from Fig. 7, the sensitivity index
d′ = Z(true positive rate) − Z(false positive rate), introduced by signal detection theory
(cf. [16]), evaluates the performance for each task. A value of d′ = 0 indicates that the
system detects the currently executed task at chance level, whereas d′ = 4 corresponds
to a hit rate of 0.95 at a false alarm rate of 0.01.
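Computing d′ from the two rates only needs the inverse of the standard normal CDF; the 0.95/0.01 pair reproduces the d′ ≈ 4 case mentioned in the text.

```python
from statistics import NormalDist

def d_prime(true_positive_rate, false_positive_rate):
    """Sensitivity index d' = Z(TPR) - Z(FPR), Z being the standard normal quantile."""
    z = NormalDist().inv_cdf
    return z(true_positive_rate) - z(false_positive_rate)

print(round(d_prime(0.95, 0.01), 2))  # 3.97, i.e. roughly the d' = 4 case
print(round(d_prime(0.5, 0.5), 2))    # 0.0: detection at chance level
```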

Fig. 7. True positive and false positive rates for each task from the leave-one-out cross-validation.

Thus, the resulting d′ values, listed in Table 2, show varying performance
depending on the respective task, ranging from 1.31 (“cell-phone talk (handheld)”) up
to 3.78 (“reaching for object: passenger seat”). With regard to the sensitivity index d′,
the approach shows very good performance for the NDR-tasks “reaching for object:
passenger seat”, “texting (mounted)” and “texting (handheld)”, but reaches its limits in
distinguishing among the tasks “cell-phone talk (handheld)”, “repeating spoken text”,
“reading out text” and “baseline”.

Table 2. d′ values for task detection.

  Task                                  d′
  Cell-phone talk (handheld)            1.31
  Reaching for object: passenger seat   3.78
  Repeating spoken text                 1.38
  Reading out text                      1.69
  Texting (mounted)                     2.70
  Texting (handheld)                    3.30
  Baseline                              1.52

The applied methodology depends on finding an appropriate sliding window
length. As an example, Fig. 8 visualizes the probabilities of the trained HMMs for
“cell-phone talk” and “baseline” during an executed cell-phone call, using sliding
windows of five and ten seconds. Note that for HMMs longer sequences yield lower
probabilities, so the relative values are what matter. The plot starts when the driver
begins to take the call; the black bar indicates hanging up. First of all, the sliding
window needs to gather enough information about the scene before the related
NDR-task can be inferred. Short windows react quickly to new input but cause
unstable peaks in the probabilities, whereas an overly long window introduces a delay
that grows with its length. The applied sliding window length of ten seconds is thus a
compromise between minimizing the delay of NDR-task detection on the one hand
and generating stable probability distributions for clear decisions on the other.

Fig. 8. Logarithmic probability of an exemplary observation sequence for baseline (dashed
orange line) versus cell-phone talk (continuous blue line), comparing applied sliding windows of
length (a) 5 s and (b) 10 s.

4 Conclusion

In this paper, a methodology for the detection and classification of NDR-tasks in the
context of automated driving was described. The presented results show that it is
feasible to detect the NDR-task currently executed by the driver using HMMs. It has
also been shown that the used feature vector contains sufficient information to model
the predefined tasks in Table 1. In our approach, hand and glance position appear to be
stable features for distinguishing between NDR-tasks with different visual and manual
characteristics. The results in Fig. 7 show that NDR-tasks with an allocated manual
and visual channel have higher detection rates than tasks with uncertain hand or glance
positions, like “repeating spoken text”. However, this is also a weak point: without
accurate and complete preprocessing of the measurement data, missing information
can lead to recognition mistakes. This can be observed for the NDR-task “cell-phone
talk”: the typical hand position of this task is not available if the driver uses the phone
with the left hand, in which case “cell-phone talk” would be misclassified as
“repeating spoken text”. Conversely, the accuracy of NDR-task detection increases
with more distinctive task characteristics, i.e. a defined area where the task is executed
and a clearly occupied human sensory channel. This is visible, e.g., in the detection
rates of “reaching for object: passenger seat” and “texting (handheld)”. Because of the
low true positive rates for tasks with a vocal component, like “repeating spoken text”
or “cell-phone talk”, further investigations at the feature level are necessary to stabilise
the detection of whether the driver is speaking.
Based on the available results, it is recommended to group tasks with similar
characteristics regarding their resource demands. For example, actions like “repeating
spoken text” or “reading out text” can be grouped by their vocal component.
Research has shown that drivers’ take-over performance is highly influenced by the
specific NDR-task (e.g. [17, 18]). Information about the currently performed NDR-task
might be used to enable task-adaptive HMI concepts that support the driver in
take-over situations. Possible adaptations include earlier or more urgent warnings as
well as the use of modalities (e.g. visual, auditory or haptic) that are not compromised
by the current NDR-task.

Acknowledgment. This work results from the joint project Ko-HAF - Cooperative Highly
Automated Driving and has been funded by the Federal Ministry for Economic Affairs and
Energy based on a resolution of the German Bundestag.

References
1. Rauch, N., Kaussner, A., Boverie, S., Giralt, A.: HAVEit: the future of driving. Deliverable
32.1. Report on driver assessment methodology. In: 7th Framework Programme ICT-
2007.6.1 (2009)
2. Dong, Y., Hu, Z., Uchimura, K., Murayama, N.: Driver inattention monitoring system for
intelligent vehicles: a review. IEEE Trans. Intell. Transp. Syst. 12(2), 596–614 (2011)
3. Bachmann, T., Bujnoch, S.: ConnectedDrive - driver assistance systems for the future.
Technical report, BMW (2002)
4. Bekiaris, E.: System for effective assessment of driver vigilance and warning according to
traffic risk estimation (AWAKE). Technical report, National Center for Research and
Technology Hellas (CERTH) Hellenic Institute of Transport (HIT) (2002)
5. Brandt, T., Stemmer, R., Mertsching, B., Rakot, A.: Affordable visual driver monitoring
system for fatigue and monotony. Int. Conf. Syst. Man Cybern. 7, 6451–6456 (2004).
https://doi.org/10.1109/icsmc.2004.1401415
6. Leonhardt, V., Pech, T., Wanielik, G.: Fusion of driver behaviour analysis and situation
assessment for probabilistic driving manoeuvre prediction. In: Bengler, K., Hoffmann, S.,
Manstetten, D., Neukum, A., Drüke, J. (eds.) UR:BAN Human Factors in Traffic.
Approaches for Safe, Efficient and Stressfree Urban Traffic, pp. 223–244. Springer,
Wiesbaden (2017). https://doi.org/10.1007/978-3-658-15418-9_11
7. Cheng, S., Park, S., Trivedi, M.: Multiperspective thermal IR and video arrays for 3D body
tracking and driver activity analysis. In: Proceedings of the 2005 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition—Workshops. IEEE (2005).
https://doi.org/10.1109/cvpr.2005
8. Braunagel, C., Stolzmann, W., Kasneci, E., Rosenstiel, W.: Driver-activity recognition in the
context of conditionally autonomous driving. In: IEEE 18th International Conference on
Intelligent Transportation Systems, Las Palmas, pp. 1652–1657 (2015). https://doi.org/10.
1109/itsc.2015.268
9. Cheng, S., Park, S., Trivedi, M.: Multi-spectral and multi-perspective video arrays for driver
body tracking and activity analysis. Comput. Vis. Image Underst. 106, 245–257 (2007).
https://doi.org/10.1016/j.cviu.2006.08.010
10. Yan, S., Teng, Y., Smith, J., Zhang, B.: Driver behavior recognition based on deep
convolutional neural networks. In: 12th International Conference on Natural Computation,
Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, pp. 636–641 (2016).
https://doi.org/10.1109/fskd.2016.7603248
11. Cronje, J., Engelbrecht, A.: Training convolutional neural networks with class based data
augmentation for detecting distracted drivers. In: Proceedings of the 9th International
Conference on Computer and Automation Engineering (ICCAE 2017), Sydney, pp. 126–130
(2017). https://doi.org/10.1145/3057039.3057070
12. Ohn-Bar, E., Martin, S., Tawari, A., Trivedi, M.: Head, eye, and hand patterns for driver
activity recognition. In: 22nd International Conference on Pattern Recognition, Stockholm,
pp. 660–665 (2014). https://doi.org/10.1109/icpr.2014.124
13. Pech, T., Lindner, P., Wanielik, G.: Head tracking based glance area estimation for driver
behaviour modelling during lane change execution. In: 2014 IEEE 17th International
Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, pp. 655–660
(2014)
14. Elfes, A.: Occupancy grids: a probabilistic framework for mobile robot perception and
navigation. Ph.D. thesis, Electrical and Computer Engineering Dept./Robotics Inst.,
Carnegie Mellon Univ. (1989)
15. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech
recognition. Proc. IEEE 77(2), 257–286 (1989)
16. Macmillan, N.A., Creelman, C.D.: Detection Theory: A User’s Guide. Taylor & Francis,
Abingdon (2004). ISBN 9781410611147
17. Petermann-Stock, I., Hackenberg, L., Muhr, T., Mergl, C.: Wie lange braucht der Fahrer?
Eine Analyse zu Übernahmezeiten aus verschiedenen Nebentätigkeiten während einer
hochautomatisierten Staufahrt [How long does it take for the driver? Analysis of takeover
times with different secondary tasks in a highly automated traffic jam assist]. Paper presented
at 6th Tagung Fahrerassistenz [6th Conference on Driving Assistance], München, Germany
(2013)
18. Wandtner, B., Schömig, N., Schmidt, G.: Effects of non-driving related task modalities on
takeover performance in highly automated driving. Hum. Factors (2018). https://doi.org/10.
1177/0018720818768199
Affordable and Safe High Performance Vehicle
Computers with Ultra-Fast On-Board Ethernet
for Automated Driving

Martin Hager¹(✉), Przemyslaw Gromala², Bernhard Wunderle³, and Sven Rzepka⁴

¹ AE/ECC-ME, Robert Bosch GmbH, Markwiesenstraße 46, 72703 Reutlingen, Germany
martin.hager@de.bosch.com
² AE/EDT3, Robert Bosch GmbH, Tübinger Str. 121, 72703 Reutlingen, Germany
PrzemyslawJakub.Gromala@de.bosch.com
³ Fakultät ET und IT, Technische Universität Chemnitz, Reichenhainer Str. 70, 09126 Chemnitz, Germany
bernhard.wunderle@etit.tu-chemnitz.de
⁴ Micro Materials Center, Fraunhofer ENAS, Technologie-Campus 3, 09126 Chemnitz, Germany
sven.rzepka@enas.fraunhofer.de

Abstract. Autonomous driving at level 5 requires high-performance vehicle
computers (HPVC) to perform the multitude of complex functions, such as
comprehensive vision processing, object recognition, intelligent traffic systems,
and task dispatch between different ECUs in the car. Today, HPVC systems are
available as development platforms but are not yet ready to be deployed under
the harsh operating conditions of real vehicles. The paper assesses the
requirements, discusses the current state of the art, and introduces architectural
and design solutions for a robust and safe automotive-grade HPVC platform.
The required high computational power and all the necessary communication
interfaces will be provided at costs allowing the production of passenger cars
for the broad public.

Keywords: Autonomous driving · High performance computing ·
Automobile Ethernet · High-speed communication · Thermal management ·
Reliability · Functional safety · Automotive electronics · Smart sensors

1 Introduction

Today’s vehicle mobility and transportation suffer from time-consuming traffic
operations, too many accidents and fatalities, and high energy inefficiency of
individual transport. The latest vehicle generation already supports the mobility user
and the individual self-driver with systems like distance and traffic-jam assistants,
automated parking and panorama view as well as real-time traffic information systems.
Still, these driver assistance systems are not able to replace the driver and do not provide

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 56–68, 2019.
https://doi.org/10.1007/978-3-319-99762-9_5
Affordable and Safe High Performance Vehicle Computers 57

highly flexible, time-optimized and safe mobility. Neither is it yet possible to optimize
the individual traffic volume.
To address these issues and to enable a new quality of driving comfort, autonomous
driving (AD) will be a key capability of future vehicle generations. Ultimately, level 5,
i.e. full autonomy, will be approached. Such AD cars require high-performance vehicle
computers (HPVC) in order to perform the multitude of complex functions, such as
comprehensive vision processing, object recognition, intelligent traffic systems, and
task dispatch between the different ECUs in the car. The HPVC system must be safely
capable of handling all driving situations autonomously at all times. Currently, major
semiconductor manufacturers are making strong efforts to develop powerful processors
according to these needs. In fact, first components have been announced to become
available this year on functional development platforms, which will allow first
assessments and software developments (Fig. 1).

Fig. 1. Functional HPVC development platform [1]

However, these platforms are not designed for actual use in the harsh conditions of
real vehicles. Automotive-grade HPVC modules and systems are yet to be created. For
this, essential technological obstacles need to be overcome before the solutions can
qualify as regular products at affordable prices. These obstacles result from the
following requirements:
• Level 5 AD cars need, compared to today’s vehicles, much more computational
power; otherwise, the human driver cannot be replaced. The most modern solutions
offer 320 TOPS and claim to be sufficient for AD at level 5. They use graphics
processors, each of which dissipates about 300 W. Accounting for redundancy and
all necessary periphery, the total system power can be estimated to reach 1 kW.
• Level 5 AD cars need comprehensive perception of the surrounding environment in
real time. This can only be achieved by deploying multiple video, radar, lidar and
ultrasonic sensors in the car, which will generate much more data than in today’s
vehicles. Final data fusion will be done in the centralized HPVC units. Therefore,
the on-board communication network needs to support data rates of 10 Gbit/s and
guarantee the highest quality of service (QoS). New connectors and wiring harness
solutions as well as communication chips and AD converters are needed.
• The on-board HPVC and communication systems of AD cars need to show very
high reliability, security, and safety in order to protect human life in daily
58 M. Hager et al.

routine traffic as well as in difficult traffic situations. In fact, reliability and
functional safety of AD electronic systems must be increased substantially: the
active human driver, who constantly monitors the driving behavior of the car, will
turn into a passive passenger who leaves all control functions to the electronic
system. In addition, the evolving use and business-case scenarios based on
car-sharing approaches, rather than on cars privately owned by individual people,
will multiply the operating time in the service life of the electronic systems of AD
cars from the current 8,000 h.
All these requirements and challenges need to be tackled simultaneously.

2 State of the Art

The high-performance development platform ‘Pegasus’ announced by Nvidia [1],
providing 320 TOPS, is seen to meet the requirements of level 5 AD solutions with
respect to computational power. Hence, this platform allows holistic software
development, several testing options, and proper validation campaigns. Involving
schemes of artificial intelligence, self-learning, and other sophisticated algorithms, it
will bring up the programs for data fusion, decision making and comprehensive
driving control as needed for AD cars in the upcoming years. The development
platforms will allow prototype implementation of the respective software and
demonstration of full-scale realistic problems under laboratory conditions. This
corresponds to a technology readiness level (TRL) of six. For higher TRLs, hardware
needs to be made available that is operational in regular vehicles and their actual target
environments. This requires substantial improvements in all three domains: (i) thermal
performance, (ii) electrical data communication as well as (iii) mechanical robustness
and safety, based on a thorough analysis of the current state of the art.

2.1 Computation - Thermal Domain


Automotive ECUs with convection cooling are feasible only below 100 W system
power. Typically, BGA-type processors are mounted on (heavily Cu-loaded) PCB or
LTCC substrates that are thermally connected to the casing for heat removal. In
high-power units (e.g., converters), the coarser power dies are directly mounted on
DCB substrates that are soldered to base plates, which are connected to the car’s water
cooling circuit through a manifold or pin-fin array. This allows rejecting immense
power losses (several kW/l) [2]. Such rough mounting is considered prohibitive for
processor dies due to the stress sensitivity of their fine structures (with respect to both
mechanical and electronic parameters).
Today, high-performance computing enjoys office or data-center thermal boundary
conditions, where either forced air cooling, in conjunction with heat-pipe-enhanced
folded-fin heat sinks, or water-cooled solutions are found. Along the heat path
downstream from the semiconductor, the first thermal interface material (TIM) is a
major bottleneck. For the large chips of multi-core CPUs and GPUs (die size
>300 mm²), thermal grease or liquid metal/phase change materials come into play. High-end

materials are used for substrate/interposers and spreader (sometimes also built as a
metal cap) to assure low thermal mismatch and planarity of the assembly. Again, air-
cooling is found on single chips with powers up to 100 W whereas liquid cooling is
necessary for any higher power. So far, only single chip-level cold plates (internally
pin-fins or µ-channels) have been marketed for high reliability applications (e.g., within
blade servers [3]). There, the liquid is strictly sealed away from the electronics. Lab
solutions have demonstrated 370 W/cm2 with jet impingement (50 k nozzles) [4] or in
3D interlayer cooling 1000 W/cm2 [5] in stacked silicon. Recently, top-mounted
polymer manifold jet impingement coolers were realized with 330 W/cm2K [6] at
comparable thermal budget. These solutions allow higher heat transfer coefficients due
to missing thermal interfaces, but bring the cooling liquid very close to the chip, which
is thought to be a reliability problem. For automotive applications, avoidance strategies
that safely prevent any leakage will be of high importance.
In the future, the liquid cooling circuit will have a maximum temperature of 65 °C
(today 55 °C), which further reduces the thermal budget for keeping the junction
temperature at 80…95 °C. Processor de-rating is not an option for thermal management,
as full operational performance needs to be available at all times during driving.
In sum, there is presently no cooling solution available for high performance
computing with the targeted system power of up to 1 kW for highly reliable automotive
deployment.
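The tightness of this budget can be illustrated with a simple steady-state estimate. The coolant and junction temperatures are taken from the text above; the 300 W per-die power is an assumed split, used only for illustration:

```python
# Rough steady-state thermal budget for a liquid-cooled HPVC unit.
# Figures from the text: coolant up to 65 degC, junction limit 80..95 degC,
# system power up to 1 kW. The per-die power split is an assumption.

T_coolant = 65.0      # degC, future automotive liquid cooling circuit
T_junction = 85.0     # degC, mid-range junction target (80..95 degC)
P_system = 1000.0     # W, targeted HPVC system power

# Maximum junction-to-coolant thermal resistance for the whole unit
R_system = (T_junction - T_coolant) / P_system   # K/W
print(f"System-level budget: {R_system * 1000:.0f} mK/W")

# Hypothetical single GPU die dissipating 300 W (assumed split):
P_die = 300.0
R_die = (T_junction - T_coolant) / P_die
print(f"Die-level budget:    {R_die * 1000:.1f} mK/W")
```

With these numbers, only about 20 mK/W of junction-to-coolant resistance is admissible at the system level, far beyond what convection-cooled ECU designs can reach.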

2.2 Communication - Electrical Domain


Today, the data transfer systems used in the vehicle for driver assistance functions still
rely on legacy communication protocols such as LIN, CAN, or CAN-FD, which are
limited to data rates of 10 Mbit/s and less. The first technology evolution for trans-
mitting 100 Mbit/s over an affordable unshielded single twisted pair connection started
with the proprietary BroadR-Reach standard, which was then established by the OPEN
Alliance SIG consortium. The current Ethernet communication PHYs and switches in
automotive use rely on the IEEE-standardized 100BASE-T1 to reach the 100 Mbit/s
data rate. Current luxury cars use these features with unsealed technologies inside the
cabin and with sealed technologies in the harsh-environment part of the vehicle. For
higher data rates, i.e., ≥1 Gbit/s, only single-line and non-automotive connector
interface solutions are available, and these are mainly restricted to the cabin
environment. According to current forecasts, this will not satisfy the scalability and
future needs of the next generations of vehicles. In particular, AD cars are expected to
target data rates of 10 Gbit/s and more, as a single lidar or radar sensor can already
easily send 0.5–1 Gbit/s, and the vehicle usually needs in the order of 20 such radar,
lidar, ultrasound, and video sensors for a comprehensive 360-degree real-time
perception of its surrounding environment. A common setup is to have 3 to 9 radar
sensors, whose outputs are fused together with the outputs of camera and lidar sensors
in the central ECU. There are tendencies towards further increasing the number of
sensors for an even more accurate perception. This situation requires data interfaces
providing 10 Gbit/s and higher data rates as the standard solution for on-board
communication. They need to be reliably available for both cabin and harsh
environments. State-of-the-art solutions are unable to cover this demand, neither in terms of connector
60 M. Hager et al.

technologies nor with the wiring harness solutions. Thus, a number of automotive
OEMs expect Ethernet to become the core technology as it offers the right level of
flexibility, scalability, and cost, especially when combined with the proper protocols.

Fig. 2. Envisioned set of sensor systems for level 5 autonomous driving [7]

2.3 Integration - Mechanical Domain


Multi-core processors have become available in BGA packages, which allow
accommodating high numbers of I/Os at affordable cost. Such BGA components with
1000 pins and more are well in service for desktop and laptop computers. However,
they have not yet been used in automotive applications. The risk of solder ball fatigue
and failure is seen as too high, caused by the strong mechanical and thermo-mechanical
loads that unavoidably occur in automotive environments. Hence, measures need to be
developed now which can permanently stabilize the BGA solder balls so that they
become mechanically robust even for a significantly longer operation time than in
current personal cars (see above).
Furthermore, the conventional design-for-reliability optimization will not be sufficient
for the new AD solutions. It only covers the ultimate lifetime on the statistical
average and cannot predict the specific remaining useful lifetime (RUL) within the
individual car. However, the passengers will entrust their lives to that electronic system
when driving autonomously. Hence, measures for assuring functional safety are
mandatory. Currently, ISO 26262 responds to these needs by requiring redundancy at
multiple levels: ASIC, component, module, and system. In autonomous vehicles, the
number of safety-relevant parts will massively increase. In particular, the required
system redundancy (always based on different working principles, since identical parts
would also fail at almost the same time) will then dramatically add to the total cost
of the AD car. In order to stay affordable and competitive, smarter solutions must be
implemented, all the more so as the current approach cannot overcome the statistical
dilemma: its estimates are only valid on average, so it remains unknown for how long
the spare system will survive the failure of the first one in each individual case.

3 Architectural and Design Solutions

The new HPVC and communication technologies enabling level 5 AD face immense
challenges in each of the three domains: thermal, electrical, and mechanical. The proper
module and system integration will need to build on significant improvements at the
component and even the material level in order to arrive at architecture and design
solutions that comprehensively meet the requirements of all domains simultaneously.
The issues concern the thermal budget, the multi-modal connectivity and signalling, the
reliability and functional safety in the harsh automotive environment as well as the
form factor and the cost. Tackling them necessitates several key innovations in design,
technology, and test concepts along the full heat path from silicon to system, in all parts
of the electrical communication network, and in the various heterogeneous integration
efforts. The following sections discuss key innovations in each of the three domains.
Ultimately, they need to be merged into a true system approach. After thorough
analysis and planning, thermal, electrical, and mechanical co-design must be considered
on an equal footing right from the beginning of the product development.

3.1 Computation - Thermal Domain


The engineering framework of the highly innovative and integrated approach to the
cooling concept is depicted schematically in Fig. 3, which highlights the intrinsic
thermal and mechanical challenges in detail.
On the chip level, the heat needs to be spread first in order to manage hot spots of
>40–60 W/cm² on the multi-core GPU. This requires exceptionally performant TIMs
featuring high thermal conductivity and low interface resistance while enabling a
low-stress bond. The candidates include phase change materials (PCM), pastes,
adhesives, and solder materials. Advanced methods need to be applied that provide
characterization results valid under the real application conditions. In particular, the
thermal interface behaviour needs to be captured in addition to the bulk conductance.
This part addresses a crucial bottleneck for the entire heat path. It shall be engineered
in close cooperation with the thermo-mechanical design.
At the component level, four innovative approaches are envisaged for improving the
thermal management. The concepts differ in the way the HPVC casing is thermally
enhanced for superior heat transfer:
(1) by integrated (low CTE-) spreaders (e.g. CuMo, CuGr),
(2) by integrated heat-pipes,
(3) by integrated liquid coolers (µ-channel, µ-pin fin) and
(4) by a direct chip-mounted jet impingement manifold for each power die to bring the
advantages of each to fruition (simplicity, reliability, performance, form factor).
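The role of the first TIM as the dominant bottleneck along this heat path can be sketched with a one-dimensional series resistance model; all resistance values below are illustrative assumptions, not measurements of any of the four concepts:

```python
# One-dimensional series heat path from die to coolant, to show why the
# first TIM is the bottleneck. All resistance values are illustrative
# assumptions, not measured data for any of the four concepts.

die_power = 300.0   # W, assumed GPU dissipation
T_coolant = 65.0    # degC, from the text

# Junction -> TIM1 -> spreader -> TIM2 -> cold plate -> coolant
heat_path = {
    "TIM1 (die to spreader)":   0.030,  # K/W, often the largest single term
    "spreader (e.g. CuMo)":     0.010,  # K/W
    "TIM2 (spreader to plate)": 0.015,  # K/W
    "cold plate + convection":  0.012,  # K/W
}

# Walk the path from the coolant up to the junction
T = T_coolant
for R in reversed(list(heat_path.values())):
    T += die_power * R
print(f"Estimated junction temperature: {T:.1f} degC")

share = heat_path["TIM1 (die to spreader)"] / sum(heat_path.values())
print(f"TIM1 share of total resistance: {share:.0%}")
```

Even with these optimistic assumed values, the single TIM layer accounts for nearly half of the total junction-to-coolant resistance, which motivates the emphasis on TIM engineering above.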

Fig. 3. Schematic of targeted innovative thermal management concepts: (a) with system-level
manifold, (b) with chip-level manifold including thermal and thermo-mechanical challenges. ▲
(Thermal) 1. TIM (first bottleneck), 2. Interface to spreader, 3. Heat transfer into manifold,
4. Thermal enhancement of substrate, 5. Heat path encapsulated packages, 6. Heat pipe
performance, 7. µ-channel cooler performance, 8. Jet impingement cooler performance,
9. Secondary liquid loop with heat exchanger and pump. ★ (Thermo-mechanical) 1. System
tolerance, CPI, 2. UF delamination, solder joint reliability, 3. Interface toughness/low stress
bond, 4. & 5. System sealing, 6. Fixation mechanical decoupling, 7. Leak proof µC cooler &
tubing connections, 8. Leak proof jet impingement, 9. Reliable & corrosion-free secondary
cooling circuit.

Concept (4) is truly visionary. It promises the best thermal performance but
involves complex and innovative processes like 3D printing of the micro-cooling
channels according to the optimization through advanced thermal computational fluid
dynamics (CFD) simulations [8]. Hence, realizations of concepts (1) and (2) are seen as
more realistic for the first generation of HPVC units enabling level 5 AD.
At the system level, the thermal management involves embedding the unit into the
liquid cooling circuit of the car by attaching a manifold to the back of the unit, e.g.,
in the form of shower jets or staggered pin fins for concepts (1) and (2). For concepts
(3) and (4), fluid redistribution and a secondary liquid loop with heat exchanger and
pump need to be provided.
The study of all these concepts will be facilitated by thermal test chips (TCC),
which encompass a sufficiently fine hot-spot granularity and distributed temperature
sensing for localized TIM performance and degradation monitoring. The TCC may be
constructed with utmost simplicity: a single metal layer for the matrix of heater and
thermal sensor elements as well as for the daisy chains and the 4-point structures
allows all thermal performance tests as well as the reliability assessments and the
structural health monitoring when following the 3-omega approach according to recent
developments [9]. A matrix of these 3-omega structures, which are combinations of
metal-line-based heaters and temperature sensors that form so-called Thixels (thermal
pixels), can be used to unequivocally detect delamination with a working spatial
resolution of the order of typically 100 µm. They can universally be integrated within
the given chip technology as simple conductor lines and offer further advantages, such
as being very robust and not showing any cross-sensitivity to potentially parasitic
effects like temperature, stress, or moisture.
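The delamination screening enabled by such a Thixel matrix can be sketched as a simple map comparison; the resistance values and the threshold below are hypothetical, chosen only to illustrate the principle:

```python
# Sketch of delamination screening with a Thixel matrix (hypothetical data).
# Each thixel yields a local thermal resistance from its 3-omega response;
# a delaminated interface underneath raises that resistance. Values and
# threshold are illustrative assumptions, not data from the cited chip.

baseline = [  # K/W per thixel, healthy reference map (4 x 4 excerpt)
    [1.00, 1.02, 1.01, 1.00],
    [1.01, 1.00, 1.02, 1.01],
    [1.00, 1.01, 1.00, 1.02],
    [1.02, 1.00, 1.01, 1.00],
]
measured = [row[:] for row in baseline]
measured[0][3] = 1.35   # corner thixel: delamination typically starts here
measured[1][3] = 1.18

THRESHOLD = 0.10  # relative rise flagged as delamination onset (assumed)

flags = [
    (i, j)
    for i, row in enumerate(measured)
    for j, r in enumerate(row)
    if (r - baseline[i][j]) / baseline[i][j] > THRESHOLD
]
print("Suspect thixels (row, col):", flags)
```

Tracking such a map over operating life is what turns the Thixel matrix into a structural health monitor rather than a one-off characterization tool.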

3.2 Communication - Electrical Domain


As seen in Fig. 2, the typical configuration for AD cars comprises multiple radar, lidar,
video, and ultrasonic sensors for obtaining comprehensive information. They work
with ultra-fast analogue-to-digital converters, whose sampling rates will soon reach
80 MHz and more. This way, a data stream of about 2 Gbit/s is generated per single
radar sensor. After the local processor preselects the data from the areas of interest,
the rate sent to the central HPVC unit is reduced to 0.5–1 Gbit/s. Still, the on-board
communication backbone needs to provide a minimum bandwidth of 10 Gbit/s at all
times in order to accommodate all the sensor data. In addition, the external
communication (soon based on 5G) as well as the control of and interactions with the
other ECUs need to be handled.
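These figures are consistent with a simple back-of-envelope calculation. The 12-bit I/Q sample format assumed below is illustrative, chosen only because it reproduces the ~2 Gbit/s raw rate quoted above:

```python
# Back-of-envelope check of the data rates quoted in the text. The 12-bit
# I/Q sample format is an assumption used to reproduce the ~2 Gbit/s figure.

f_sample = 80e6        # samples/s per radar ADC (from the text)
bits_per_sample = 12   # assumed ADC resolution
iq_channels = 2        # assumed complex (I/Q) sampling

raw_rate = f_sample * bits_per_sample * iq_channels / 1e9
print(f"Raw stream per radar:  {raw_rate:.2f} Gbit/s")

# After local preselection of areas of interest (rates from the text):
preselected = 0.75     # Gbit/s, midpoint of the quoted 0.5-1 Gbit/s
sensors = 20           # radar, lidar, ultrasound, and video sensors
backbone_load = sensors * preselected
print(f"Aggregate sensor load: {backbone_load:.0f} Gbit/s")
```

The aggregate already lands in the "10 Gbit/s and more" regime even after local preselection, which is why the backbone requirement cannot be met by today's 1 Gbit/s links.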
Ethernet is an established high-speed communication technology. It has just recently
been adopted for the on-board network in premium vehicles. So far, data rates of
100–1000 Mbit/s have been shown to be feasible on twisted pair cables and in
accordance with the automotive standards (100BASE-T1, 1000BASE-T1). Figure 4
shows an example of a control unit with both of these Ethernet standards.
When increasing the data rate as needed for AD applications, the wiring harness
shall still be based on unshielded twisted pair (UTP), shielded twisted pair (STP), or
shielded parallel pair (SPP) cables for cost and weight savings, while coaxial or optical
cables will be assessed in addition, primarily to provide a benchmark.
Besides the wiring harness, the high-speed data connectors of the ECUs need to be
improved. Currently, only single connectors for each Ethernet port are available on the
market. They are costly, cover large PCB areas, and lead to high numbers of plug-in
mounts during assembly, which hampers efficiency in the production lines of the
OEMs. Therefore, developments of new highly integrated and scalable connector
interface systems are currently underway, which shall support the transfer of up to a
dozen high-speed communication channels and thereby minimize the number of
connectors. Design assessments by simulations and physical tests are conducted to
meet the electromagnetic compatibility requirements.

Fig. 4. Control unit with Ethernet capability (here: 2 × 1 Gbit/s) [10]
Finally, the new on-board communication system also needs the respective Ethernet
chips. They may start at standards-compliant data rates of 1000 Mbit/s but shall soon
improve towards 10 Gbit/s in order to satisfy the AD needs.

3.3 Integration - Mechanical Domain


The large BGA components with their 1000+ balls are seen as most critical with
respect to thermo-mechanical reliability. A permanent stabilization needs to be
provided. Organic materials are one option: they can fill the remaining space around
the solder joints in the gap between component and module board. However, the usual
processes are driven by capillary forces and are applied in flip-chip technology, where
the typical dies are smaller than 1 cm². They would not be able to meet the efficiency
and throughput expectations of the automotive industry when expanded to BGAs of
almost 20 cm² in size. Therefore, new processes are currently under investigation.
They are based on transfer and injection molding, which allow short process times.
Furthermore, potting and overmolding are explored to encapsulate the entire region
around the component so that the structure becomes very rigid and mechanically
robust. Because all these measures will also strongly influence the thermal situation,
they need to be co-optimized with the heat removal strategy chosen for the high
performance processor and all other components in the HPVC unit. The system
solutions need to meet the requirements of both domains, the mechanical and the
thermal, at the same time. In
addition, there are new use conditions to be considered. On the one hand, a substantial
increase in the operational time per day is expected for car-sharing usage modes instead
of individual ownerships. On the other hand, the electronic hardware shall be prepared
for easy upgrades within the lifetime of a car in order to allow the users to benefit from
the high innovation rate in computation technologies. The new algorithms based on
artificial intelligence will soon need yet more computing power. In order to meet both
of these trends, the electronic systems shall be designed to endure several 10⁴ h of
operation as well as to allow easy replacement after just 3–4 years. This requires
modular concepts with clearly defined interfaces for all the primary aspects like
mechanical, electrical, and functional parameters, but also for related concerns like
thermal management. Then, the individual modules can be designed and optimized
separately, i.e., by various suppliers and with customer-specific performance
parameters, but would still fit seamlessly into the overall system architecture.
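Such a modular concept can be sketched as an explicit interface contract between a vehicle-side slot and a replaceable compute module; all field names and limits below are hypothetical:

```python
# Sketch of a modular HPVC interface contract enabling mid-life upgrades.
# The fields and limits are hypothetical; the point is that mechanical,
# electrical, and thermal parameters are pinned down per module slot.

from dataclasses import dataclass

@dataclass(frozen=True)
class SlotSpec:          # what the vehicle-side slot guarantees/accepts
    max_power_w: float   # thermal budget the slot can remove
    supply_v: float      # supply voltage
    ports_10g: int       # 10 Gbit/s Ethernet ports provided
    length_mm: float     # mechanical envelope

@dataclass(frozen=True)
class Module:            # what a (future) compute module requires
    power_w: float
    supply_v: float
    ports_10g: int
    length_mm: float

def fits(slot: SlotSpec, module: Module) -> bool:
    """An upgrade module fits if it stays inside every interface limit."""
    return (module.power_w <= slot.max_power_w
            and module.supply_v == slot.supply_v
            and module.ports_10g <= slot.ports_10g
            and module.length_mm <= slot.length_mm)

slot = SlotSpec(max_power_w=1000, supply_v=48.0, ports_10g=4, length_mm=300)
gen1 = Module(power_w=500, supply_v=48.0, ports_10g=2, length_mm=250)
gen2 = Module(power_w=1200, supply_v=48.0, ports_10g=4, length_mm=250)

print(fits(slot, gen1))  # within all limits
print(fits(slot, gen2))  # exceeds the thermal budget
```

A supplier-independent contract of this kind is what allows a later compute generation to replace the original module without requalifying the whole vehicle integration.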
With the development of AD vehicles, the concerns of functional safety further
increase substantially. A new strategy has been proposed that uses knowledge-based
prognostics and health management to meet them in an affordable way [11]. The
approach is based on monitoring the condition of the electronic systems that are
actually in use so comprehensively that costly redundancy by second sets of full-scale
systems can be avoided. The remaining redundancies at lower levels (module,
component, and feature redundancies like double pins) would then be sufficient for full
safety assurance. Implementing this policy is a strategic goal for the next decade. It
starts with the identification of key failure indicators (KFI) that allow detecting the
onset of degradation safely and well ahead of any critical failure. This can be achieved
by monitoring preceding effects or by using so-called canary features¹, for example.
Delamination of mold compound from the die adhesive and from the die surface is a
known wear-out phenomenon in electronic components. It is caused by the difference
in thermal expansion between these materials. It is usually initiated at one corner or
edge of the die and propagates inwards. Ultimately, it may lead to wire bond lifts. The
delamination causes the mechanical situation to change, which can be detected by
strain-sensing elements well ahead of the electrical failure (Fig. 5).

Fig. 5. Delamination propagation during thermal cycle test (−40 °C ↔ 125 °C). iForce stress
sensor: difference in normal in-plane stress components grows [12]

Similarly, canary features can trigger the calls for maintenance and repair before the
occurrence of a failure. These few KFI features are foreseen in addition to the
functional structures and are placed at positions where they are exposed to higher loads
during the operation cycles than their functionally essential counterparts, and/or they
are intentionally designed to be weaker. Hence, the canary feature fails before the
essential part. Its failure provides the individual calibration point that allows the
prediction of the specific remaining useful lifetime (RUL) of this particular electronic
system. This way, the few additional canary features pave the path to the avoidance of
the costly system redundancy as they indicate the onset of degradation in the actual
system and provide a
quantitative estimate of the RUL. Examples of canary features are small passives like
SMD resistors, whose solder pads are reduced in size, or the corner joints of
components like QFP, BGA, CSP, flip chips, etc., which are known to be stressed the
most. The difference in lifetime expectancy between the canary and the functional
features, i.e., the RUL after canary failure, can very well be determined in reliability
tests and numerical simulations during the product development of the electronic
system.

¹ The expression goes back to the canary birds used in coal mines in the old days.
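The calibration logic behind the canary approach can be sketched as follows; the cycle counts are hypothetical, standing in for values that would come from the reliability tests and simulations mentioned above:

```python
# Sketch of the canary-based RUL calibration described above. The cycle
# counts are hypothetical; in practice the lifetime gap between canary and
# functional feature comes from reliability tests and simulation.

# From qualification (assumed): mean cycles to failure under the mission
# thermal profile, with the canary intentionally the weaker structure.
N_canary = 8_000       # cycles, e.g. SMD resistor with reduced solder pads
N_functional = 20_000  # cycles, e.g. BGA corner joints of the processor

cycles_per_day = 6     # assumed thermal cycles per operating day

def rul_days(canary_failed_at: int) -> float:
    """Remaining useful lifetime once the canary fails in the field.

    The canary failure gives the individual calibration point: the
    functional lifetime is scaled by the same field-to-test aging ratio
    that this unit's canary exhibited.
    """
    aging_ratio = canary_failed_at / N_canary          # field vs. test
    remaining_cycles = N_functional * aging_ratio - canary_failed_at
    return remaining_cycles / cycles_per_day

# A harshly used car whose canary failed after only 6,000 field cycles:
print(f"RUL estimate: {rul_days(6_000):.0f} days")
```

An early canary failure thus shortens the predicted functional lifetime proportionally, giving each individual unit its own maintenance horizon instead of a fleet-average one.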
The final estimation of the RUL in field applications, based on the information from
the sensors, canary features, and devices, will be realized utilizing the concept of a
digital twin. A digital twin is a mathematical model of the physical system that
evaluates data from the system under investigation in situ (e.g., from a temperature or
moisture sensor) and compares it with the expected response (e.g., temperature or
humidity) using meta-models. Different patterns can be stored in the digital twin
model, and based on the answer of the meta-model, the wear-out can be estimated. The
RUL will be estimated in two ways: for known failure modes and mechanisms, physics
of failure will be used; for failure modes whose mechanisms cannot be described
explicitly, a data-driven approach will be used. The combination of these two
approaches is known as the hybrid PHM approach. The digital twin will be a very
important feature of future automotive HPVC electronic systems and will allow for
continuous analysis of the system's state of health. As a result, the RUL of the
individual system can be estimated accurately utilizing a "clone" of that system, so that
predictive maintenance can be realized in practice.
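A minimal sketch of this hybrid PHM idea combines a physics-of-failure estimate for a known mechanism (here a Coffin-Manson-type law with assumed coefficients) with a data-driven residual check against the twin's expected response:

```python
# Minimal sketch of the hybrid PHM idea: a physics-of-failure model for a
# known failure mechanism plus a data-driven residual check against the
# digital twin's expected response. All coefficients are illustrative.

def cycles_to_failure(delta_T: float, C: float = 1e7, n: float = 2.0) -> float:
    """Coffin-Manson-type law for solder fatigue: N_f = C * dT^-n.

    C and n are assumed here; a real physics-of-failure model fits them
    per package and solder alloy from accelerated tests."""
    return C * delta_T ** (-n)

def damage(cycle_history: list[float]) -> float:
    """Miner's rule: accumulated damage over the recorded thermal cycles."""
    return sum(1.0 / cycles_to_failure(dT) for dT in cycle_history)

# Known mechanism, physics-of-failure path:
history = [40.0] * 2000 + [80.0] * 500   # degC swings logged by the twin
print(f"Accumulated damage: {damage(history):.3f}  (1.0 = end of life)")

# Unknown mechanism, data-driven path: compare a sensor reading with the
# meta-model's expected response and flag a large residual.
expected_temp, measured_temp = 71.0, 78.5   # degC, hypothetical values
anomaly = abs(measured_temp - expected_temp) > 5.0
print(f"Twin residual anomaly: {anomaly}")
```

The physics branch tracks wear-out for mechanisms that are understood explicitly, while the residual branch catches deviations whose mechanism the model cannot describe, which is exactly the split the hybrid approach makes.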

4 Automotive HPVC and Communication Platform

The strong market pull towards level 5 AD cars has triggered massive efforts in the
development of automotive HPVC and communication platforms. Regarding the
HPVC components, the upcoming Nvidia system ‘Pegasus’ is the best-known example
[1]. It is announced to provide 320 TOPS of computational power, to be capable of
interacting with 4× 10 Gbit/s, 8× 1 Gbit/s, and 16× 100 Mbit/s Ethernet links, and to
dissipate some 500 W of thermal power. While the communication and computation
performance figures would fit level 5 AD cars, the practical parameters of the
development platform will not yet meet the automotive requirements. Based on the
analysis explained in the preceding sections, the following improvements are seen as
most essential to make the platform operational in regular automotive products:
• Computation - Thermal Domain
The challenge of heat removal from the graphic processors and the peripheral
components will be tackled along two distinct tracks:
(a) Low-CTE on-die heat spreaders and integrated heat pipe solutions.
(b) Implementation of micro-channel and direct fluid cooling concepts.
While solutions according to track (a) are expected to enter regular production
within the coming 3–5 years, the next-generation systems according to track (b)
offer at least 2× higher cooling efficiency. Their production may start after 2025,
once the fundamental reliability concerns have been resolved.

• Communication - Electrical Domain


Ethernet will become the dominating technology for the high-speed on-board
communication. In order to handle the numerous ports, new standards will be
developed for the automotive Ethernet connectors, wiring harness, and driver chip
solutions. The connectors will be capable of providing multiple high-speed ports
(10 Gbit/s and 1 Gbit/s) so that the number of plug-in mounts during assembly at
the car OEM is minimized. The wiring harness solutions will provide high signal
and power integrity at minimum complexity. The Ethernet PHY chips, switches and
AD converters will be designed for the high speed at lowest power and maximum
functional safety.
• Integration - Mechanical Domain
The favored approach to achieving the necessary permanent mechanical stabilization
of the large processor and other electronic components is seen in the use of organic
materials like particle-filled epoxy systems. They will be applied by newly developed
transfer or injection molding processes, which are optimized for high throughput.
Most likely, these molding processes will also finalize the integration of the thermal
and connector features, leading to a truly compact and robust HPVC and
communication unit.
In order to assure functional safety, a few sensors, canary features, and further KFI
will be integrated, which allow detecting the onset of physical degradation and enable
a reliable RUL estimation of the individual HPVC system based on the digital twin
approach.
This combination of adequate computational and communication performance with
innovative solutions assuring highly efficient thermal management, maximum yield and
throughput in production as well as compact, robust and safe design will provide all
means for the fabrication of truly automotive grade HPVC platforms as regular
products affordable to the broader public.

Acknowledgement. The authors would like to thank Stefania Fontanella, Hubert Straub, and
Abdessamad El Ouardi (Robert Bosch GmbH) for their valuable input. We are looking forward
to the work in ‘HiPer’, the PENTA project supported by BMBF/VDI (Germany), RVO
(Netherlands), and VLAIO (Belgium).

References
1. Nvidia—Drive PX Xavier … Pegasus. https://www.nvidia.de/self-driving-cars/drive-px/
2. Bret, C.L.: Power electronics for EV/HEV—market, innovations and trends, Yole
Development report (2016)
3. Michel, B.: Roadmap towards efficiency—zero-emission datacenters; IBM-Research,
Advanced Thermal Packaging/Micro Integration 02 June 2015. https://www.zurich.ibm.
com/st/energy_efficiency/zeroemission.html
4. Brunschwiler, T., Paredes, S., Drechsler, U., Michel, B., Cesar, W., Leblebici, Y., Wunderle,
B., Reichl, H.: Heat-removal performance scaling of interlayer cooled chip stacks. In:
Proceedings of 12th Itherm Conference, Las Vegas, USA, 2–5 June 2010

5. Madhour, Y., Zervas, M., Schlottig, G., Brunschwiler, T., Leblebici, Y., Thome, J.R.,
Michel, B.: Integration of intra chip stack fluidic cooling using thin-layer solder bonding. In:
Proceedings on 3DIC Conference, San Francisco, CA, USA, 2–4 October 2013
6. Tiwei, T., Oprins, H., Cherman, V., Van der Plas, G., De Wolf, I., Beyne, E., Baelmans, M.:
High efficiency direct liquid jet impingement cooling of high power devices using a 3D-
shaped polymer cooler. In: Proceedings on IEDM Conference 2013
7. Buller, W.T.: Benchmarking sensors for vehicle computer vision systems. Michigan Tech
Research Institute, Ann Arbor, MI (2017). http://mtri.org/automotivebenchmark.html
8. Gupta, M.P., Kumar, S.: Thermal Management of many-core processors using power
multiplexing. Electronics Cooling (2013)
9. Wunderle, B., May, D., Abo Ras, M., Keller, J.: Non-destructive in-situ monitoring of
delamination of buried interfaces by a thermal pixel (Thixel) chip. In: Proceedings on 16th
Itherm Conference, Orlando, USA, May 30–June 2 2017
10. Hager, M., Lock, A.: Future electrical architectures and their effects on automotive ECU and
connector systems (Zukünftige Bordnetzarchitekturen und deren Auswirkungen auf
automotive Steuergeräte und Steckerssysteme). In: 6th International Congress on Automo-
tive Wire Harness, Ludwigsburg, 13–14 March 2018
11. Rzepka, S., Gromala, P.J.: Integrated smart features assure the high functional safety of the
electronics systems as required for fully automated vehicles. In: Advanced Microsystems for
Automotive Applications. Lecture Notes in Mobility, pp. 167–178. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-66972-4_14
12. Schindler-Saefkow, F., Rost, F., Otto, A., Faust, W., Wunderle, B., Michel, B., Rzepka, S.:
Stress chip measurements of the internal package stress for process characterization and
health monitoring. In: 13th International Conference on EuroSimE 2012, Proceedings, Art.
No. 6191746
The Disrupters: The First to Market
Automation Technologies to Revolutionize
Mobility

Adriano Alessandrini

DICeA – Dipartimento di Ingegneria Civile e Ambientale,


Università degli Studi di Firenze, Via Santa Marta 3, 50123 Florence, Italy
adriano.alessandrini@unifi.it

Abstract. Road Vehicle Automation will revolutionise transport. Such a revolution
will effectively start when new transport services with significant added
value with respect to today’s can be deployed.
This paper presents two transport services, one based on private vehicle
ownership and one on public and shared transport vehicles, which will become
the first available disruptive “new modes of transport”.
Such modes will have different impacts on society and the economy and,
depending on which is favored, they will have a long-term influence on our society.
Political intervention is needed to evaluate the effects of this revolution
beforehand and to shape policies that guide the future, so as to prevent automation
from backfiring.

Keywords: Road vehicle automation · Public transport · Private transport ·
Automation functions · Parking Garage Pilot · High speed highway chauffeur ·
CyberCars · High speed automated busses on dedicated corridors

1 Background

Road Vehicle Automation will revolutionise transport. Such a revolution will
effectively start when new transport services with significant added value with respect
to today’s can be deployed. The possibility of relocating empty (or almost empty)
vehicles on the roads will give private transport unprecedented flexibility, making
private road vehicles a different transport mode with respect to conventional private
vehicles. Similarly, in public (and shared) transport services, it is empty vehicle
relocation that will make economically viable new transport services which today are
too expensive to be widely implemented.
Such new services and new ways of using private cars enabled by automation will
lead to a paradigm shift in vehicle usage and to consequent impacts. The prevailing
business model for the new transport services enabled by automation will depend on
many factors; time to market is first and foremost.
On the one hand, automation will allow for a more convenient use of the personal
automobile, allowing the time otherwise spent driving to be used differently (e.g., for
working) and relieving drivers of the burden of seeking parking, thus pushing a more
widespread use of the private vehicle.

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 69–74, 2019.
https://doi.org/10.1007/978-3-319-99762-9_6
70 A. Alessandrini

On the other hand, driverless shared transport services will complement mass
transit and add the flexibility and capillarity that are not economically feasible for
conventional public transport, making the new public transport more attractive and
financially self-sufficient.
According to the VRA roadmap [1] adopted by ERTRAC, between 2019 and 2022
key technologies to enable these new and disruptive transport modes will be released
on the market, and it is expected that by 2025 at the latest they will become sufficiently
diffused to have an impact on citizens’ mobility.
Figure 1 shows how between 2019 and 2022 some crucial automation functions
will hit the market. For passenger cars, they will be the highway chauffeur (SAE level 3
[4]) and valet parking (SAE level 4 [4]). For shared mobility systems, fully automated
last mile vehicles (SAE level 4) and high-speed buses on dedicated corridors (SAE
level 4) will be available. These two sets of two functions will enable completely new
services for public and private transport.

Fig. 1. Elaboration on VRA ERTRAC road maps to automation to identify next to market
automation functions

Using the consolidated ERTRAC roadmap, there are two scenarios likely to
happen in 2022:
• In the public transport domain
CyberCars (Fully automated last mile public transport services on carefully
selected, certified and designated infrastructures)
The CityMobil2 project [2] demonstrated the technical feasibility of such systems
in 2014, and last mile shuttles to complement mass transit have since been slowly
deployed in those states where full road automation has meanwhile been made legal.
However important last mile shuttles are, they are not the universal solution to
public transport problems. But the technology and the urban integration approach
used for such shuttles can become so by extending this kind of automation to bus
platooning, car-sharing combined with ride-sharing, and automated empty vehicle
relocation. The combination of these three services, all enabled by the same
automation techniques already demonstrated and currently on the market, with
conventional mass transit will revolutionize public transport [3].
High speed busses on dedicated corridors
This technology is already available today but still has some legal problems and
requires some infrastructural adaptations (nothing different from the BRT lines which
are built every day). Between now and 2022, their diffusion can become sufficient to
have significant impacts.
• In the private vehicle domain
The SAE level 3 high speed highway chauffeur allows the individual car-driver to
dedicate her full attention to other tasks while travelling on the main road
infrastructures, and
the SAE level 4 low speed Parking Garage Pilot allows individual car-drivers to
alight from their cars at their destinations and leave the cars to search for parking
and park themselves; the car will not be capable of driving far or returning to the trip
origin, but the owner will no longer need to seek parking.
The combination of these two functions will make commuting by private individual
cars much more attractive than before; even longer journeys could be more
productive, and parking will no longer be a problem.
According to the roadmaps merged in Fig. 1 (taken from ERTRAC but almost
unanimously shared, with differences only on the arrival year of SAE level 5 [4] vehicles [5]),
these automation functions will all be on the market between 2019 and 2022 and will,
for the first time, allow the automation revolution to start. They will allow the rapid take-up
of two new transport “modes”: (i) the “perfect” private transport mode, in which
automation will enormously favor private individual trips by making them easier and
more comfortable, and (ii) the “perfect” public transport mode, obtained by
ubiquitously deploying in cities last mile capillary shared transport services together
with fast and effective longer-distance transport on corridors.
72 A. Alessandrini

2 The First to Market Disruptive Service for Private Road Transport

A commuter living in the outskirts of a big European city will decide to invest her
money in a new vehicle, and use it to shift her daily transport mode from (fully or
partly) public transport to fully private transport, when the new vehicle ensures a few
features:
1. it must relieve her of the driving task, allowing her to perform other tasks
while commuting, freeing her time and making longer commutes sitting in traffic
acceptable;
2. it must relieve her of the search for parking, dropping her off at the destination
while it goes to park itself.
The commuter drives her vehicle only for the first mile, from her house to the first
motorway junction; then the vehicle drives itself with the highway chauffeur
automation function, allowing her to work on her computer, shop online, entertain
herself, or whatever else. With the vehicle in SAE level 3 automation mode, she
cannot sleep but can perform other tasks [4]. When reaching the end of the motorway
stretch of her trip, she will need to resume control and drive from the last motorway
junction to her destination.
She will drive to the front door of her destination and alight there from the vehicle.
Using the SAE level 4 low-speed parking garage pilot automation function, the vehicle
will then go and park itself at very low speed.
This new mode of transport will solve two of the main issues discouraging the use
of private cars today: the time wasted in congestion and the time needed to seek
parking. At first (up to 2025 at least) it will work only for those trips for which
parking is available not too far from the destination and most of the journey runs
on motorway stretches. It will not be applicable to trips destined for the city centres
of most European cities, but it will attract most of the trips from periphery to
periphery.
The main impact to expect from the take-up of this new mode of transport [6] is a
shift toward private car use from other modes for most commuting trips in the
outskirts of large cities and throughout smaller ones; a longer-term effect to expect is
the choice of living locations further away from the cities, now that commuting is more
comfortable.

3 The First to Market Disruptive Service for Shared and Public Transport

CityMobil2 shuttles [3] have proven that the technology for last mile services is already
available; however, this technology still faces two main obstacles to full deployment:
the legal framework to allow driverless vehicles on all roads, and the business case,
which is not very positive when a very costly vehicle (the shuttle) is used only for
low-speed last mile services.

The same enabling technology can, however, be used for two services which can
solve both issues.
Going in the morning from home to a train station, commuters could share rides. If,
instead of being fully automated, the vehicle were driven by one of the passengers, the
service would be the same but with no legal or vehicle-cost problems; naturally, the
vehicle would then need to be relocated to avoid it making just one trip a day. If there
is no counterflow demand to relocate the vehicle to where it can be used for the next trip,
such relocation needs to be made by automation. Either the low-speed automation already
demonstrated for shuttles (empty vehicles can travel at low speed with far fewer
drawbacks) or a “platooning relocation” would do. Platooning could pose fewer legal
problems than full automation, since it keeps one driver in the first vehicle of the platoon
and requires less technology; and, allowing a higher speed, it should also solve the
shuttle business-case problem.
The trips to and from train stations (or conventional high-quality public transport
stops) would then be solved, but high-speed platooning on corridors can also easily
improve the main public transport network. Small electric buses (30 to 45 places and 6
metres in length) can provide a feeder service in medium-demand areas to and from the
main corridors; when the bus reaches a corridor to or from the city centre, the driver
would alight, leaving the bus to automatically join the first passing platoon on the
corridor.
The combination of these two services allows new ubiquitous forms of public
transport to be deployed quickly and cost-effectively, and these will constitute a new
mode of transport.
Depending on the quality of the ride, its comfort and their ease of use, these new
services can become very popular even at prices higher than the minimal public
transport ticket commonly subsidised in Europe, finally allowing public transport
to become financially self-sufficient.
This can revolutionise mobility in the direction of public transport.
A commuter would book a ride to work the evening before; the next morning she
would either find a vehicle to drive or someone to give her a ride at the given time
(all real-time communication via suitable apps) and reach the train station perfectly
on time to board the train on which she has a reserved seat. The vehicle she left at the
station would be driven back in a platoon by a professional driver and left parked close
to the home of the customer driving the next pool.
If not to the closest train station, the same last mile vehicle (which can be a 9-seater
minibus) can go to a high-speed corridor, where it would join a high-speed platoon to
the city.
Such new services will induce a modal shift toward public transport [6], an
emancipation of public transport from subsidies, a possible decrease in car-ownership
rate and opportunities for new businesses related to managing transport and deliveries.

4 Which is Preferable and Which Will Win

Both new “modes” are expected to become available at the same time. It will be a
matter of political (and industrial) will to decide which to push.

Private automated road transport has many advantages for users, many for the
economy, and maybe some environmental and infrastructural drawbacks; on the other
hand, the shared and public automated transport mode is expected to have better
environmental and social impacts but many disruptive economic effects, with no clear
winner or loser, and it induces (and needs) a strong mind shift from users.
Whichever one wins this race will influence and shape mobility and the economy for
the next decades.
Even if the final point will be fully automated vehicles, hopefully shared, and
available (depending on the place and time) from door to door or in a multimodal fashion,
the “route” chosen to reach this longer-term future will influence people’s behaviour.
As shown in Fig. 2 below, even if the technology in the longer-term scenario will
be the same, the dominant mobility services might be very different depending on the
paths we choose today to reach full automation.

Fig. 2. From short-medium to long term scenarios and the “importance of the path”

References
1. VRA Vehicle Road Automation Project Deliverable D1.1.3
2. Mercier-Handisyde, P.: CityMobil2 an EC funded project. In: Implementing Automated Road
Transport Systems in Urban Settings. Elsevier, New York (2017)
3. Alessandrini, A., Stam, D.: ARTS—automated road transport systems. In: Alessandrini, A.
(ed.) Implementing Automated Road Transport Systems in Urban Settings. Elsevier,
New York (2017)
4. SAE: Taxonomy and definitions for terms related to on-road motor vehicle automated driving
systems. SAE standard J3016 (2014)
5. Shladover, S.E.: The truth about “Self-Driving” cars. Sci. Am. 314, 52–57 (2016). https://doi.
org/10.1038/scientificamerican0616-52
6. Sessa, C., Alessandrini, A., Flament, M., Hoadley, S., Pietroni, F., Stam, D.: The socio-
economic impact of urban road automation scenarios: CityMobil2 participatory appraisal
exercise. Road Veh. Autom. 3, 163–186 (2016)
TrustVehicle – Improved Trustworthiness
and Weather-Independence of Conditionally
Automated Vehicles in Mixed Traffic Scenarios

Pamela Innerwinkler1(✉), Ahu Ece Hartavi Karci2, Mikko Tarkiainen3,
Micaela Troglia4, Emrah Kinav5, Berzah Ozan5, Eren Aydemir5,
Cihangir Derse6, Georg Stettinger1, Daniel Watzenig1, Sami Sahimäki7,
Norbert Druml8, Caterina Nahler8, Steffen Metzner9, Sajin Gopi9,
Philipp Clement9, Georg Macher9, Johan Zaya10, Riccardo Groppo11,
and Samia Ahiad12

1 Virtual Vehicle Research Center, Inffeldgasse 21a, 8010 Graz, Austria
{pamela.innerwinkler,georg.stettinger,daniel.watzenig}@v2c2.at
2 University of Surrey, Guildford, Surrey GU2 7XH, UK
a.hartavikarci@surrey.ac.uk
3 VTT Technical Research Centre of Finland Ltd., P.O. Box 1300, 33101 Tampere, Finland
mikko.tarkiainen@vtt.fi
4 CISC Semiconductor GmbH, Lakeside B07, 9020 Klagenfurt, Austria
m.troglia@cisc.at
5 Ford Otosan, Akpinar Mah Hasan Basri Cad No 2, 34885 Sancaktepe, Istanbul, Turkey
{Ekinav,eaydemi4}@ford.com.tr
6 Tofas Turk Otomobil Fabrikasi A.S., Istanbul Cad. No: 574 Osmangazi, 16369 Bursa, Turkey
cihangir.derse@tofas.com.tr
7 Linkker OY, Koritie 2, 15540 Villahde, Finland
sami.sahimaki@linkkerbus.com
8 Infineon Technologies Austria AG, Babenberger Straße 10, 8020 Graz, Austria
Norbert.druml@infineon.com, caterina.nahler2@infineon.com
9 AVL List GmbH, Hans List Platz 1, 8020 Graz, Austria
{steffen.metzner,sajin.gopi,Philipp.clement,georg.macher}@avl.com
10 Volvo Car Corporation, Södra Porten 2, Flöjelbergsgatan 2a, 431 35 Mölndal, Sweden
johan.zaya@volvocars.com
11 Ideas & Motion s.r.l., Via Moglia 19, 12062 Cherasco, CN, Italy
Riccardo.groppo@ideasandmotion.com
12 Valeo Vision SAS, Rue Saint Andre 34, 93012 Bobigny, France
samia.ahiad@valeo.com

© Springer Nature Switzerland AG 2019
J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 75–89, 2019.
https://doi.org/10.1007/978-3-319-99762-9_7
76 P. Innerwinkler et al.

Abstract. The introduction of automated vehicles to the market raises various
questions and problems. One of those problems is the trustworthiness of the
automated systems and, in this connection, the user’s perception and acceptance.
The user’s perception is especially important during SAE level 3 automated
driving (L3AD), where the driver has to be able to resume vehicle control, and
during the initial deployment of automated systems, where mixed traffic
situations occur, in which automated and human-driven vehicles share the same
road space. The Horizon 2020 project TrustVehicle aims at investigating critical
scenarios, especially in mixed traffic situations and under harsh weather
conditions, and at improving the trustworthiness and availability of L3AD
functionalities through a user-centric approach.

Keywords: Level 3 automated driving · Reliability · Driver-centric approach ·
Co-simulation · HMI

1 Introduction

Automated vehicle technology has the potential to be a game changer on the roads,
altering the face of driving as we experience it today. Many benefits are expected,
ranging from improved safety, reduced congestion and lower stress for car occupants
to social inclusion, lower emissions, and better road utilization due to optimal
integration of private and public transport. Many cars sold today are already capable
of some level of automation, while more highly automated prototype vehicles are
continuously tested on public roads, especially in the United States, Europe, and Japan.
Automated vehicle technology has arrived rapidly on the market, and its deployment is
expected to accelerate over the next years. As a matter of fact, most of the core
technologies required for fully automated driving (SAE level 5) are available today;
however, reliability, robustness, and ultimately trustworthiness have to be significantly
improved to achieve end-user acceptance. System and human driver uncertainty pose a
significant challenge in the development of trustable and fault-tolerant automated
driving controllers, especially for conditional automation (SAE level 3) in mixed
traffic scenarios under unexpected weather conditions. The TrustVehicle consortium
gathers key European partners who cover the entire vehicle value chain and form a
European ecosystem: OEMs, Tier 1 suppliers, the semiconductor industry, and
software, engineering, and research partners, working together to enhance the safety
and user-friendliness of level 3 automated driving (L3AD) systems.
In Sect. 2 the overall goals of the TrustVehicle project, as well as the approach to
reach them, are summarized. Section 3 gives an overview of the respective work
packages and their content. Finally, in Sect. 4 some results from the first project year
are presented, and the paper is concluded with a brief summary in Sect. 5.

2 Ambition

In order to accelerate the deployment of automated vehicle technology on the market,
several requirements have to be met to ensure user acceptance and an adequate level of
road and system safety (Sect. 2.1). Based on these requirements, the TrustVehicle
objectives listed in Sect. 2.2 were defined.
TrustVehicle – Improved Trustworthiness and Weather-Independence 77

2.1 Requirements Regarding User Acceptance and Safety

(a) Reliability and safety
Especially in the beginning of deployment, mixed traffic poses a challenge,
since it induces additional uncertainty. The reliability level for all road users
therefore has to be enhanced, with a focus on mixed traffic situations and difficult
operating conditions, including e.g. road-works scenarios, very low friction
conditions caused by snow, or adverse weather situations resulting in visibility
problems. Related to this is the enhancement of the overall level of active safety,
including under low tire-road friction conditions, through the integration of steering
system control, drivetrain control and individual friction brake control.
(b) Hand-over process
Since L3AD systems are not designed to handle every driving situation, inclusion
of the driver is crucial. This requires the timely identification of scenarios that
cannot be handled by the automated driving system and the definition of
intuitive, safe, trustable and user-friendly methods for managing the hand-over
phases from automated driving to human driving and vice versa. The whole
range of vehicle users must be considered here, including the aging population,
which is not particularly familiar with the latest generation of advanced
information and communication technologies.
(c) User acceptance
For the deployment of automated systems, a general improvement of automated
vehicle user acceptance is still needed. This can consist of several subtle aspects,
e.g., slightly modifying the vehicle trajectory to increase the lateral distance from a
lorry travelling in the opposite direction. In fact, experiments showed some strain
in automated vehicle test users in these situations, as current automated driving
systems tend to keep the same lateral distance from a vehicle moving in the
opposite direction, independently of its size. Human drivers, on the contrary, tend
to modify the trajectory in these conditions, thus increasing the lateral distance
from large vehicles travelling in adjacent lanes.
(d) Assessment of automated driving systems
The development of simulation/co-simulation and post-processing tools for the
systematic, cost- and time-effective assessment of automated driving system
performance is needed. This extends the current practice adopted for fine-tuning
conventional automotive controllers, such as the anti-jerk controller, the anti-lock
braking system and the vehicle stability controller.
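The hand-over process of requirement (b) can be pictured as a small state machine: the automation issues a take-over request, monitors the driver's response within a time budget, and falls back to a minimal-risk manoeuvre if the driver does not respond in time. The sketch below only illustrates that logic; the class, state and parameter names are invented for this illustration and are not taken from the project:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()
    TAKEOVER_REQUESTED = auto()
    MANUAL = auto()
    MINIMAL_RISK = auto()   # e.g. a controlled stop at low speed

class HandOverManager:
    """Toy model of L3AD hand-over logic: after a take-over request the
    driver has a fixed time budget to confirm; otherwise the vehicle
    initiates a minimal-risk manoeuvre."""

    def __init__(self, time_budget_s=10.0):
        self.mode = Mode.AUTOMATED
        self.time_budget_s = time_budget_s
        self._elapsed = 0.0

    def request_takeover(self):
        """Called when the system detects a scenario it cannot handle."""
        if self.mode is Mode.AUTOMATED:
            self.mode = Mode.TAKEOVER_REQUESTED
            self._elapsed = 0.0

    def step(self, dt, driver_confirmed):
        """Advance by dt seconds; driver_confirmed stands for e.g.
        hands-on-wheel plus eyes-on-road from driver monitoring."""
        if self.mode is Mode.TAKEOVER_REQUESTED:
            if driver_confirmed:
                self.mode = Mode.MANUAL
            else:
                self._elapsed += dt
                if self._elapsed >= self.time_budget_s:
                    self.mode = Mode.MINIMAL_RISK
        return self.mode

# Example: request at t = 0; the driver confirms after 4.5 s.
mgr = HandOverManager(time_budget_s=10.0)
mgr.request_takeover()
mgr.step(4.0, driver_confirmed=False)   # still TAKEOVER_REQUESTED
mgr.step(0.5, driver_confirmed=True)    # now MANUAL
```

In a real system the confirmation signal would combine several cues (hands-on-wheel detection, the gaze direction reported by the driver monitoring unit), and the time budget would depend on the scenario; the fixed scalar used here is purely illustrative.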

2.2 Objectives
O1. Systematic identification of critical road scenarios for the currently available
AD systems
The focus here is on the uncertainty associated with the behaviour of other road
users and with the sensor fusion system of the ego vehicle. This objective is addressed
through:

• Catalogue of safety-critical scenarios for current automated driving systems. The
scenarios are classified into critical scenarios solvable through enhanced driving
controllers and critical scenarios solvable through the safe transition from auto-
mated to human driving. These scenarios are used as TrustVehicle test cases.
• Catalogue of assessment criteria for the critical scenarios.
• Correlation between objective performance indicators and subjective user
assessment for each of the identified scenarios.
O2. Controllers and sensor fusion systems for enhanced road safety
The controllers and sensor fusion systems shall be capable of dealing with complex,
uncertain and variable road scenarios. This objective is addressed through:
• Vehicle demonstration of robust non-linear stochastic model predictive control as a
systematic and safe vehicle control approach for dealing with the complex,
uncertain situations identified in O1.
• Machine-learning techniques for the continuous variation of the control system
response based on the sensed environment (e.g., weather conditions) and human
driver behaviour.
• Integration of the steering-based automated driving functions with the stability
control functions actuated through the friction brakes, for increased safety during
automated driving in any tyre-road friction condition.
• Implementation and demonstration of new sensors and sensor functions (e.g., self-
cleaning cameras by Valeo [1] and time-of-flight cameras by Infineon [2],
providing 3D images and robust infrared images).
• Demonstration of self-diagnostic tools within the sensor fusion system for the safe
and timely identification of the conditions requiring the transition from automated to
human driving.
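Model predictive control, named in the first bullet above, works by repeatedly optimizing a short control horizon against a vehicle model and applying only the first input of the optimal sequence. The project targets robust non-linear stochastic formulations; the following is merely a toy deterministic illustration on a one-dimensional lateral (lane-keeping) model, with invented weights and a brute-force search over discretized inputs instead of a real solver:

```python
import itertools

def predict(y, v, u_seq, dt=0.1):
    """Roll a double-integrator lateral model forward: u is lateral
    acceleration, v lateral velocity, y lateral offset from lane centre."""
    traj = []
    for u in u_seq:
        v += u * dt
        y += v * dt
        traj.append((y, v))
    return traj

def mpc_step(y, v, horizon=5, u_choices=(-1.0, 0.0, 1.0)):
    """One MPC iteration: enumerate every input sequence over the horizon,
    score it with a quadratic cost on offset, velocity and control effort,
    and return the first input of the cheapest sequence."""
    best_u, best_cost = 0.0, float("inf")
    for u_seq in itertools.product(u_choices, repeat=horizon):
        cost = sum(yk ** 2 + 0.1 * vk ** 2 for yk, vk in predict(y, v, u_seq))
        cost += 0.01 * sum(u ** 2 for u in u_seq)
        if cost < best_cost:
            best_cost, best_u = cost, u_seq[0]
    return best_u

# Closed loop: start 1 m off-centre; the controller steers back toward 0.
y, v = 1.0, 0.0
for _ in range(100):
    u = mpc_step(y, v)
    v += u * 0.1
    y += v * 0.1
```

A real automated-driving MPC would use a full vehicle model, continuous inputs, state and actuator constraints, and an explicit or online QP solver; the brute-force enumeration here only makes the receding-horizon principle visible.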
O3. Development and demonstration of intuitive human-machine interfaces
An intuitive HMI is crucial for the safe management of the transition phase between
purely automated driving and human driving. User acceptance and gender-specific
aspects shall therefore also be taken into account. The output is represented by:
• Open-access publication of the results of the TrustVehicle user questionnaires,
aimed at defining requirements and expectations.
• Scenario-based analysis of user behaviour, with recording of user reactions and
involvement of the human scientists of the consortium.
• Definition of intuitive, reassuring and trustable ways of informing and guiding the
users to behave appropriately in the traffic scenarios defined in O1.
• Simulation-based and experimental assessment of the TrustVehicle HMI, with
careful selection of the driver sample (e.g., in terms of age, gender, cultural
background, education, employment, accident history, current vehicle) and by using
the evaluation criteria defined in O1.
• Fault-tolerant procedures in the case of incorrect driver behaviour during the hand-
over phase from automated to human driving.

O4. Development and demonstration of new tools for the cost- and time-effective
assessment of vehicle and driver behaviour in complex mixed traffic scenarios
This objective relates to the entire development and validation chain. It aims at
assessing vehicle and driver behaviour as well as drastically reducing development
and test time.
• Enhanced simulation tool integrating traffic, vehicle powertrain, chassis, controllers,
sensor fusion system (see O2) and driver behaviour to assess complex scenarios
(such as those defined in O1) and their impact on drivability, safety and the
acceptance of different road users.
• Validation of the simulation tool against real-world data (>4 validation cases) and in
terms of re-usability for the different vehicle platforms of the involved OEMs, with
focus on fail-operational behaviour and hand-/take-over scenarios (driver-in-the-
loop and driver-off-the-loop).
• 30% reduction of the time required for the simulation-based assessment of complex
road scenarios during the development phases of novel controllers for automated
driving.
• New tool for the objective assessment of vehicle behaviour during complex L3AD
scenarios. This will be based on data logging during real vehicle operation, along
the same principle as the commercially available AVL tools (e.g., AVL DRIVE) for
drivability assessment of conventional vehicles. The TrustVehicle tool will include
machine-learning capabilities, i.e., the scenario catalogue of the tool will auto-
matically evolve to support agile validation.
• 70% reduction of the post-processing and analysis time after experimental vehicle
testing in complex road scenarios.
O5. Evaluation of L3AD functions
• Evaluation and tailoring of selected L3 functions on real vehicles from three dif-
ferent road transport markets relying on the framework proposed in TrustVehicle.

3 Methodology

The TrustVehicle consortium (Fig. 1) consists of 12 partners from the whole
vehicle value chain, working together to comprehensively address the objectives
stated above. This section describes the overall approach to addressing the objectives
stated in Sect. 2 as well as the main aims of the TrustVehicle project.

Fig. 1. The TrustVehicle consortium

3.1 Overall Approach

Four OEMs provide four different use cases, addressing different vehicle classes,
namely commercial/heavy goods vehicles (Ford Otosan), electric buses (Linkker), light
commercial vehicles (LCVs) (Tofaş) and passenger vehicles (Volvo). On the basis of
these vehicle classes and use cases, the requirements and objectives for TrustVehicle
are further refined, and research results are demonstrated in simulation as well as in
real-world demonstrations. TrustVehicle’s approach is the integrated and driver-centric
engineering approach shown in Fig. 2, where users’ expectations and experiences
influence every part of the development and the verification and validation process.
Objective 1 is mainly addressed by the comprehensive definition of the driving
scenario catalogue, with focus on the limitations of existing AD control systems and
deep consideration of actual users’ needs, including subtle and subjective aspects.
Regarding objective 2, advanced self-diagnostics of the sensor fusion system, for the
trustable identification of the driving conditions (e.g., bad weather) requiring the
transition from automated driving to human driving, are investigated and ultimately
demonstrated on Volvo’s demonstrator vehicle. The integration of new sensor
functionalities is also planned. Examples are: (i) self-cleaning cameras based on the
Valeo AquaBlade system, which will be further developed to remove frost in winter and
bugs in summer; (ii) time-of-flight cameras by Infineon, for extending the operative
range of the automated driving systems and continuously monitoring the driver’s
attention level in the case of a hand-over scenario; (iii) the driver cockpit activity
assessment unit of VTT, which, based on the driver monitoring camera unit and
sensors, will be able to assess his/her gaze direction. Robust model predictive
controllers, combining enhanced performance with respect to conventional automotive
controllers with computational efficiency with respect to conventional model predictive
controllers thanks to the systematic adoption of explicit formulations, are developed
and validated in the respective relevant environment.

Fig. 2 The TrustVehicle driver-centric engineering approach to L3AD. The approach combines
a learning database, HMI development, co-simulation, virtual assessment as well as real world
demonstration, while integrating users’ expectations and experiences
Objective 3 is met through the development of an HMI concept for the safe hand-
over between manual and automated mode. The HMI will take into account driver
behaviour and support a seamless transition between automated and manual vehicle
control. Special attention will be paid to the HMI design parameters.
New simulation and post-processing tools are developed, allowing a significant
enhancement of the simulation capabilities for complex road scenarios and the
objective in-vehicle evaluation of automated driving system performance.
Ultimately, experimental demonstrations of the relevant new functions and devices
will be performed on the existing case-study vehicles of the consortium, ranging from
passenger cars to light-duty vehicles, buses and commercial/heavy goods vehicles.
All these measures will contribute to (i) reducing the number of accidents caused
by human error; (ii) maintaining the leadership position in the respective field;
(iii) proper validation systems for automated driving systems; (iv) taking into
consideration user requirements, expectations and concerns; and (v) addressing
gender issues.

3.2 Main Aims

3.2.1 Identification of Critical Transition Scenarios
TrustVehicle’s main focus is on mixed traffic situations in unpredictable scenarios, to
protect both occupants and vulnerable road users (VRUs) in urban and rural areas. The
L3AD level requires significant innovations in the architecture (SW and HW) and
surveillance of the complete vehicle system to support fail-safe operation along with
high user acceptance. A thorough analysis of critical scenarios and user expectations
is needed:
• Analysis of traffic injuries, considering age, gender, etc.
• Identification of critical transition scenarios and precise manoeuvring scenarios for
urban driving.
• Definition of adverse scenarios and use cases for redundant operation:
– Intra-vehicle (e.g. component failure, out of alignment (crash), occlusion (dirt),
power and communication failure).
– Extra-vehicle (e.g. atmospheric conditions, road conditions, illumination levels).
• Definition of the requirements for a fail-safe redundant sensing system for L3AD.
• Definition of the KPIs for user acceptance.

3.2.2 Comprehensive Real-Time Simulation and Validation Framework
A modular co-simulation framework is built, and planners and controllers for L3AD
are developed:
• Build a comprehensive real-time simulation and validation framework by con-
necting state-of-the-art domain-specific simulation and measurement tools with an
independent co-simulation framework. This framework schedules the execution of
the simulation tools and is responsible for transferring signals between them.
• Development of trajectory planners and controllers for safe and precise automated
driving in urban environments.
• Validation and testing case studies set up in a generic, tool-independent manner.
Test scenarios will be developed analytically for both static objects and/or dynamic
vehicles and obstacles.
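The role of the co-simulation master described in the first bullet can be illustrated with a minimal fixed-step scheduler: at every macro step it calls each coupled tool in a fixed order and routes the outputs onto a shared signal bus read by the tools that follow. The tool stubs and signal names below are invented for illustration; the project couples dedicated domain-specific simulation tools:

```python
class SimTool:
    """Stand-in for a domain-specific simulator (traffic, vehicle
    dynamics, sensor model, ...) wrapped behind a uniform stepping API."""

    def __init__(self, name, step_fn):
        self.name = name
        self.step_fn = step_fn

    def do_step(self, t, dt, inputs):
        """Advance the tool from t to t + dt and return its outputs."""
        return self.step_fn(t, dt, inputs)


class CoSimMaster:
    """Fixed-step co-simulation master: per macro step it executes the
    tools in a fixed order and updates a shared signal bus after each
    tool (a simplified Gauss-Seidel-style coupling)."""

    def __init__(self, tools, dt):
        self.tools = tools
        self.dt = dt
        self.signals = {}  # shared signal bus: name -> latest value

    def run(self, n_steps):
        for k in range(n_steps):
            t = k * self.dt
            for tool in self.tools:
                outputs = tool.do_step(t, self.dt, dict(self.signals))
                self.signals.update(outputs)
        return self.signals


# Illustrative coupling: a traffic model publishes a lead-vehicle speed
# and a (trivial) ego controller tracks it within the same macro step.
traffic = SimTool("traffic", lambda t, dt, s: {"lead_speed": 10.0 + t})
ego = SimTool("ego", lambda t, dt, s: {"ego_speed": s.get("lead_speed", 0.0)})

master = CoSimMaster([traffic, ego], dt=0.1)
final = master.run(n_steps=10)
```

In a real framework each tool would wrap an FMU or a tool-specific API, the master could iterate each macro step until the exchanged signals converge, and individual tools would be free to use smaller internal micro steps.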

3.2.3 Intuitive, Safe Human-Machine Interface and Control
The intuitive and novel HMI concept developed in this project will have a generic
basis and enable tailoring to different vehicle types (passenger car, truck, bus). The
focus of the HMI development is high user acceptance and a smooth transition from
autonomous driving mode to manual mode and vice versa. It aims at:
• Generating and adapting the in-vehicle HMI to enable assessment of user-friendly
human interaction with automated vehicle functions.
• Optimizing the transition periods between autonomous and manual mode using
driver modelling.
• Generating special software tools for adapting the HMI concept to different
vehicles (city bus, passenger car and trucks).
• Creating harmonized interfaces to the in-vehicle systems for gathering validation
and evaluation data.
• Evaluating user acceptance and potential safety impacts of the improved HMI
concept.

3.2.4 Adaptive Vehicle Assessment
A self-diagnostics/self-learning concept for level 3 automated vehicles will be devel-
oped. The main focus of this self-diagnostics will be end-customer acceptance of the
driving behaviour of level 3 automated vehicles, with special attention to driving
comfort and perceived safety. Driving simulator tests are planned, using the feedback
of a variety of test persons during a field study in different driving situations. More
specifically, the objectives are to:
• Define sufficient measurement equipment to characterize human “feeling” and
mental stress.
• Evaluate subjective impressions during various traffic situations using question-
naires and measurement equipment.
• Automatically assess when the level 3 system is active and its performance in
various traffic situations (upgrade of the AVL DRIVE tool).
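The last objective, automatically assessing when the level 3 system is active, can be illustrated on logged data: locate the segments where an L3-active flag is set and compute a simple comfort indicator per segment (here RMS jerk, a crude stand-in for the actual AVL DRIVE metrics; all signal names are invented):

```python
def active_segments(l3_active):
    """Return (start, end) index pairs where the L3-active flag is set
    (end exclusive), from a per-sample boolean log."""
    segments, start = [], None
    for i, flag in enumerate(l3_active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(l3_active)))
    return segments

def rms_jerk(accel, dt):
    """RMS of the numerical derivative of acceleration (jerk), used here
    as a crude drivability/comfort score for one segment."""
    jerk = [(a1 - a0) / dt for a0, a1 in zip(accel, accel[1:])]
    if not jerk:
        return 0.0
    return (sum(j * j for j in jerk) / len(jerk)) ** 0.5

# Example log: 8 samples at 10 Hz, L3 active twice.
l3 = [False, True, True, True, True, False, False, True]
acc = [0.0, 0.1, 0.3, 0.2, 0.2, 0.5, 0.1, 0.0]
for s, e in active_segments(l3):
    print((s, e), rms_jerk(acc[s:e], dt=0.1))
```

In a real assessment tool the logs would also carry the traffic-scenario labels, so each score could be binned per scenario type as the text describes.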

3.2.5 Implementation and Vehicle Demonstration
Finally, the L3AD components investigated and developed within the project will be
demonstrated on demonstrator vehicles. To this end, the following tasks will be
performed:
• Integration of the TrustVehicle components and sub-systems into the vehicles.
• Implementation of the algorithms on the vehicles.
• Validation of the components, sub-systems and algorithms.
• User-acceptance tests.
• Final demonstration of the results.

4 Current Status

This section gives a brief summary of the activities performed in the first project year of
TrustVehicle.

4.1 Traffic Injury Analysis

Three deliverables have already been completed within this work package. The
analysis started with a traffic injury analysis, in which injuries and fatalities on
European roads in the last few years were investigated [3, 4, 5]. The research revealed
considerable differences in road traffic injuries among countries, especially in relation
to the gross domestic product (GDP) of the countries. The IRTAD Safety Report
underlines that a full 90% of casualties occur in low- and middle-GDP countries.
Considerable reductions in traffic injuries were achieved in Spain and Portugal
(around 70%). Many other countries also had reductions of more than 50%, namely
Denmark, France, Slovenia and Lithuania, whereas most non-European countries
achieved reductions below the average. The lowest road mortality rates are found
in Sweden and the United Kingdom, where fewer than 3 fatalities per 100 000 inhabi-
tants were recorded in 2013. In some member states, however, this rate still
exceeds 10 fatalities per 100 000 inhabitants.
Overall, more men than women were injured in road accidents in the last few years,
the majority of them as drivers of the involved vehicles. Women, on the other hand,
were more often injured when riding as passengers, while most female fatalities
were pedestrians.
Considering TrustVehicle’s focus on VRUs and urban scenarios, it was revealed
that 55% of the moped fatalities investigated occurred in urban areas. Fatalities at
junctions also occur more often within than outside urban environments, and nearly
55% of these fatalities were VRUs. Overall, nearly 30% of all road fatalities were
pedestrians (21%) and cyclists (8%). Elderly people also represent an unexpectedly
high risk category.
Researching the most
common causes for accidents,
the following were found:
excessive speed, premature-,
late- or no action, incorrect
direction and inadequate plan-
Fig. 3 Fatalities by main cause of accident and environ-
ning. However distraction, ment in Austria 2016
psychotropic substances, tired-
ness and handheld mobile
telephone usage are also significant factors. Minor factors are education level, personality,
aggression, social deviance, experience and previous involvement in motor accidents,
which was not further investigated.
84 P. Innerwinkler et al.

As an example, causes of fatalities within and outside urban areas in Austria in 2016
are depicted in Fig. 3. The findings mentioned above are confirmed: inadequate speed
is mainly a problem outside urban areas, while priority-related injuries are most common in
urban areas.
Based on these findings, the TrustVehicle focus is further refined:
• Young males and elderly people
• In urban areas
– Junction
– Conflict among users when sharing road space (cars, cyclists, heavy goods
vehicles and buses)
– Pedestrians and mopeds
• In suburban areas
– Intersections
– Motorcycles and cars

4.2 User Expectations


The next step dealt with the analysis of user expectations. The aim was to explore
what people think about automated vehicles and how comfortable they feel with this
new technology. Therefore, a questionnaire addressing these issues was prepared and
issued to volunteering participants. The questionnaire was compiled from two existing
questionnaires that had already been validated in field tests. These
questionnaires were adapted to TrustVehicle’s needs following partner inputs and
feedback from focus groups involving participants from Argentina, the UK, South Korea,
Iraq, Italy and Austria. The participants were between 24 and 49 years old with a high level of
education (levels 6 and 7 of the International Standard Classification of Education). At the
end of the focus groups and the interviews, the data was collected and integrated with the
feedback from the partners, so that the first questionnaire could be created and delivered
to the TrustVehicle consortium.

4.3 Critical Driving Scenarios for the TrustVehicle Use Cases


A broad overview of critical driving scenarios developed through the TrustVehicle
project has been compiled, with special focus on the use cases represented by the
OEMs’ aims within the project, namely Ford Otosan’s truck/trailer backing
manoeuvres, Tofaş’ LCV scenarios for narrow urban streets, Linkker’s charging
station use case and Volvo’s sensor monitoring system.
Truck/Trailer
A loading dock (Fig. 4) is a recessed bay in a building, where trucks are loaded and
unloaded, as they can be found at manufacturing plants, warehouses and other industrial
buildings. Back parking is a frequent activity of a truck driver going in and out of a
docking station.

Fig. 4 Docking station back parking with a heavy-duty commercial vehicle

In a busy
environment, the parking process needs special care, considering other vehicles and
pedestrians in the area. The autonomous back parking scenario starts with the selection of
the available docking platform and the initial positioning of the vehicle. Once the
self-parking feature is initiated, the truck will approach the desired position, applying proper
actuation and perceiving the environment for any dangerous situation without any
driver intervention.
Construction sites along crowded city roads in particular demand a large effort from
drivers entering and leaving the site. Careful manoeuvring with special attention to
other vehicles and pedestrians is needed. The autonomous back-approach feature will
first detect the road situation and the manoeuvre possibilities. The driver will
observe the recommended manoeuvre and initiate the approach. This scenario calls for
simultaneous perception of the situation as well as the ability to update the
manoeuvre accordingly. The manoeuvre finishes with the entry of the truck into the
site and the final approach to the loading position. Possible handover cases will be
considered for this scenario, e.g. changing road restrictions or dangers requiring
immediate driver attention.
Light Commercial Vehicle (LCV)
The automated door-to-door delivery concept is a strategic priority for light commercial
vehicles (Fig. 5). N1-type vehicles undertake a high percentage of urban deliveries thanks to
their dimensional advantage when the delivery mission does not require high payload
capacities. Logistics companies have a special focus on the automation of delivery and
on reducing their expenses. Tofaş will implement two urban traffic scenarios in the
TrustVehicle project:
Scenario 1: Backward and forward approach to the delivery point.
Scenario 2: Fine-tuned low-speed manoeuvres for narrow-street delivery duties in
mixed urban traffic. Handover situations will be implemented for high-density
urban traffic scenarios.

Fig. 5 Tofaş’ LCV. A mule vehicle is collecting real world measurements for the virtual test
within the Tofaş use case

Electric Bus
The electric bus under consideration (Fig. 6) should drive automatically towards
electric charging points at the bus stop. In this scenario, the bus approaches the
bus stop and charging spot in manual driving mode, while the system provides driving
instructions (e.g. speed and distance to the area enabled for automated driving). Then, when
the bus enters the automated driving area, the preconditions for automated driving are
checked, e.g. current speed, position, heading and angle of the steering wheel as well as
sensor capabilities and weather restrictions. Then the automated driving system
calculates the driving path and checks possible obstacles. If automated driving can be
activated, it is offered to the driver via HMI. The driver turns the automated driving ON
or acknowledges the transition from manual driving to automatic driving. The system
is now in transition mode. It confirms that automated driving mode is activated (HMI).

Fig. 6 Automated public transportation with Linkker’s electric bus
While driving in automated driving mode, the system shows the driver only relevant
information, e.g. the current location of the bus, the planned driving path, obstacles, the
destination and the distance to the destination. At the charging point the system indicates to
the driver when the bus has been parked and the charging can be activated. Automated
driving mode is then deactivated (shown in the HMI).
A related scenario is when the bus drives automatically out of the bus stop after
charging.
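The mode-transition sequence described above is essentially a small state machine. The sketch below illustrates it in Python; the state names, the 20 km/h speed threshold and the `BusStatus` fields are invented for illustration and are not taken from the project's specification.

```python
from dataclasses import dataclass

# Hypothetical preconditions checked before offering automated driving.
# The text lists speed, position, heading, steering angle, sensor
# capabilities and weather restrictions; the fields and the threshold
# below are simplifications invented for this sketch.
@dataclass
class BusStatus:
    speed_kmh: float
    in_automated_area: bool
    sensors_ok: bool
    weather_ok: bool

MANUAL, TRANSITION, AUTOMATED = "manual", "transition", "automated"

def preconditions_met(s: BusStatus) -> bool:
    return (s.in_automated_area and s.speed_kmh <= 20.0
            and s.sensors_ok and s.weather_ok)

def step(mode: str, s: BusStatus, driver_ack: bool) -> str:
    """One update of the mode logic: offer automated driving via HMI,
    wait for the driver's acknowledgement, then confirm activation."""
    if mode == MANUAL and preconditions_met(s) and driver_ack:
        return TRANSITION          # driver accepted the HMI offer
    if mode == TRANSITION:
        return AUTOMATED           # system confirms activation via HMI
    if mode == AUTOMATED and not preconditions_met(s):
        return MANUAL              # hand back control if conditions degrade
    return mode
```

For example, with `BusStatus(15.0, True, True, True)` and a driver acknowledgement, the mode moves from manual to transition and then to automated on the next step.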
Passenger Cars
For automated driving scenarios with passenger vehicles, the focus will be on
degraded sensor functionality and sensor data output. In order to conduct safe
automated vehicle manoeuvres, vehicle controllers are highly dependent on
trustworthy data from the vehicle sensors (Fig. 7). The sensor output must ensure that
lines, objects, VRUs and so on within the
driving path are detected and reported. However, ensuring a 100% detection rate is
unlikely, and much of this is due to external disturbance and not necessarily sensor
performance. Degradation of the sensor output can result in false, late or even missed
detections, which may lead to crashes. The degradation of sensor output also impacts
the availability of the intended function, i.e. the automated driving function is
available much less than expected. The use cases for potential sensor output
degradation are split into the scenarios listed below.

Fig. 7 Volvo’s surround view for passenger cars
Scenario 1: Functional behaviour – sensor faults already reported by diagnostics
Scenario 2: Environmental conditions – harsh weather reducing sensor visibility
Scenario 3: Misalignment – changes in sensor position or reduced calibration
accuracy
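The three degradation scenarios can be thought of as a simple classification over monitored quantities. The sketch below illustrates this; the `classify_degradation` helper, its inputs and its thresholds (100 m visibility, 1° misalignment) are purely hypothetical and not part of the project's monitoring design.

```python
def classify_degradation(diag_fault: bool, visibility_m: float,
                         misalignment_deg: float) -> list[str]:
    """Classify potential sensor-output degradation into the three
    scenario classes named in the text. Thresholds are illustrative."""
    causes = []
    if diag_fault:
        causes.append("functional")       # Scenario 1: reported by diagnostics
    if visibility_m < 100.0:
        causes.append("environmental")    # Scenario 2: harsh weather
    if abs(misalignment_deg) > 1.0:
        causes.append("misalignment")     # Scenario 3: calibration drift
    return causes
```

A monitor of this kind could then reduce the trust placed in the sensor output, or limit availability of the automated function, depending on which causes are present.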
According to the findings of the traffic injuries analysis conducted in WP2, cyclists
formed 8.1% of all road fatalities in the EU in 2014. Therefore, cyclists sharing road
space with other traffic participants is a crucial scenario. The selected scenario
involving a VRU, depicted in Fig. 8, shows a cyclist driving in the same direction as
the ego-vehicle on the right half of the ego-vehicle’s lane. The cyclist is driving
considerably slower than the ego-vehicle. Without action from either the ego-vehicle
or the cyclist, a collision would occur at the CP.
The scenarios were further refined by the definition of respective key performance
indicators (KPIs), also taking into account the results from the questionnaire, where
over 120 responses from Europe and the US could be gathered.

Fig. 8 Traffic scenario involving a cyclist in the ego vehicle’s own driving lane

4.4 Co-simulation and Modular Use Case Architecture

Combining different subsystems using various software tools and hardware compo-
nents can lead to highly complex systems. The management of different configurations
as well as testing scenarios and testing environments can consume a lot of time and
effort. Model.CONNECT™ and its co-simulation support the user in the organization
of such system model variants. In the TrustVehicle project, modular architectures for
the respective use cases are set up. An exemplary setup is given in Fig. 9, where
Volvo’s sensor monitoring use case is depicted. The subsystem specifics here are
intended for use on the AVL driver simulator, which uses Vires VTD for the
environment simulation and AVL VSM for the vehicle dynamics. The sensor fusion and
driving functions provided by VIF as well as Volvo’s sensor monitoring and the HMI
are connected via Model.CONNECT™ to the driver simulator, as depicted in Fig. 10.
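The kind of coupling Model.CONNECT™ orchestrates can be illustrated by a toy co-simulation master that advances two subsystem stand-ins with a fixed macro step and exchanges their signals at each communication point. Both models below are invented placeholders, not the project's environment simulation or vehicle dynamics models.

```python
# Toy co-simulation master: two subsystem "models" are advanced with a
# fixed macro step and exchange signals at each communication point,
# analogous to the coupling between e.g. environment simulation and
# driving functions. Both models are invented stand-ins.

def environment_model(t: float) -> dict:
    # pretend perception output: distance to an obstacle shrinking over time
    return {"obstacle_distance_m": max(0.0, 50.0 - 10.0 * t)}

def driving_function(inputs: dict) -> dict:
    # pretend AD function: brake when the obstacle gets close
    return {"brake": inputs["obstacle_distance_m"] < 15.0}

def run_cosimulation(t_end: float, dt: float) -> list[dict]:
    trace, t = [], 0.0
    while t < t_end:
        env_out = environment_model(t)          # step subsystem 1
        ctrl_out = driving_function(env_out)    # feed its outputs to subsystem 2
        trace.append({"t": round(t, 3), **env_out, **ctrl_out})
        t += dt
    return trace
```

The value of the modular setup is that any stand-in can be swapped for a real subsystem (or a hardware-in-the-loop interface) without changing the coupling loop.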
First tests on the driver simulator using this specific setup, but without Volvo’s
sensor monitoring, are scheduled for summer 2018. Here several test persons will pass
various scenarios on the driver simulator, where most of the driving will be performed
by the AD functions. The aim is to test models or functions in a safe environment,
including the interaction between the model and the driver (e.g. handover processes
from human driver to AD function and vice versa). Later on, the test persons will be
asked for their subjective impressions and comments, especially regarding comfort and
perceived safety, in order to further improve L3AD functionalities within the
TrustVehicle project.

Fig. 9 Subsystem architecture for Volvo’s sensor monitoring use case on the AVL driver
simulator with VIF’s sensor fusion and advanced driving functions

4.5 HMI
The TrustVehicle project aims at the development of a human-centered L3AD system,
based on the identification of risky conditions by combining driver state estimators
with good knowledge of the environment around the vehicle. The general HMI concept
focuses on the Ford Otosan and Linkker demonstration scenarios, both of which target
low-speed automated driving scenarios in specific areas.

Fig. 10 Possible co-simulation setup as used for the AVL driver simulator

The following features can be listed for the TrustVehicle general HMI concept:
• HMI supports safe transitions between automated and manual driving modes
during low-speed manoeuvring in mixed traffic situations
• Adaptive & intuitive HMI
• Measuring the driver state
• Identifying risky conditions by combining driver state estimators with the
information about the environment and other road users around the vehicle
• Prioritizing and adapting the information given to the driver
In the first phase of the HMI concept development, requirements and preliminary
specifications, including the architecture for the general HMI concept, have been
addressed. The requirements for the general HMI concept have been collected and each
HMI requirement has been analysed as to whether it is applicable to the various driving
modes: Manual, Transition and Automated driving.
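This requirement-versus-mode analysis can be captured as a simple applicability matrix, as sketched below; the three requirements listed are invented placeholders, not TrustVehicle's actual requirement catalogue.

```python
# Sketch of the requirement-vs-driving-mode analysis: each HMI
# requirement is marked as applicable (or not) in the Manual, Transition
# and Automated modes. The requirements themselves are invented examples.
MODES = ("manual", "transition", "automated")

hmi_requirements = {
    "show takeover request":        {"manual": False, "transition": True,  "automated": True},
    "display planned driving path": {"manual": False, "transition": True,  "automated": True},
    "monitor driver state":         {"manual": True,  "transition": True,  "automated": True},
}

def applicable_in(mode: str) -> list[str]:
    """List the requirements that apply in a given driving mode."""
    return [req for req, modes in hmi_requirements.items() if modes[mode]]
```

Such a table makes it easy to check, per mode, which information the HMI must prioritize and which can be suppressed.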

5 Summary

This presentation of the TrustVehicle project has described the user-centric approach
for improving the trustworthiness and availability of L3AD functions. The driver’s
impressions and feelings are crucial for L3AD driving, since he/she should be able to
resume vehicle control if needed. Therefore, they are strongly taken into account in the
whole development process of the different components that constitute the automated
system. Questionnaires and tests on the driver simulator are some of the measures taken
within TrustVehicle to assure this involvement of the user in the development process,
whether for planners and controllers, sensors and sensor monitoring, or the HMI.
A modular co-simulation approach assures the flexibility needed within the development process.
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 723324.

References
1. Valeo: Aquablade (2018). https://www.valeo.com/en/aquablade/. Accessed 25 June 2018
2. Infineon: In-cabin sensing applications (2018). https://www.infineon.com/cms/en/applications/
automotive/chassis-safety-and-adas/adas/in-cabin-sensing-by-time-of-flighttof. Accessed 25
June 2018
3. Troglia, M., et al.: TrustVehicle D2.1 Report on traffic road injuries (2017). http://www.trustvehicle.eu/downloads
4. IRTAD: Road Safety Annual Report (2015)
5. Statistik Austria (2017). http://www.statistik.at/web_de/statistiken/energie_umwelt_innovation_mobilitaet/verkehr/strasse/unfaelle_mit_personenschaden/index.html. Accessed 16 Aug 2017
Adaptation Layer Based Hybrid
Communication Architecture: Practical
Approach in ADAS&ME

Prachi Mittal(&), Emily Bourne, and Tim Leinmueller

DENSO Automotive Deutschland GmbH, Freisinger Strasse 21-23, Eching, Germany
{p.mittal,e.bourne,t.leinmueller}@denso-auto.de

Abstract. Connected vehicles are an essential part of Intelligent Transport
Systems (ITS). A number of communication technologies exist that can
complement each other or serve as an alternative. Keeping that in mind, to get the
most benefit for a connected vehicle, a hybrid communication paradigm where
multiple communication technologies are used in parallel is considered as the
natural way forward.
To achieve this in practice, it is important to design vehicular communication
architectures to be flexible but also reliable. Building upon an established theory
of adaptation layer based architecture in literature, this paper discusses the
specifics of a practical implementation of such architecture. This practical
environment is provided within the European research project ADAS&ME.
The paper discusses the practical assumptions while using an adaptation layer
based hybrid communication architecture and presents the benefits of it while
also critically mentioning the shortcomings.

Keywords: Hybrid communication · Communication architecture · Adaptation layer

1 Introduction

Communication is an essential component of automated vehicles. Direct connectivity to
other vehicles over V2X and connectivity to cloud services over cellular communication
are just some examples of the communication requirements of automated vehicles.
A number of communication technologies exist, namely 802.11p based ITS-G5, cellular
2G/3G/4G, Mobile Edge Computing (MEC) etc., to serve these requirements. Thus, a
hybrid communication approach is foreseen in the vehicles so that they can make the
most of all these technologies.
Hybrid Communication needs to address a number of challenges, e.g. to manage
incoming data over multiple communication technologies (possibly from different
sources), and to manage the communication resources for outgoing data. Tailored
communication architectures can provide a solution for these requirements. While the
ETSI ITS architecture [1] in Fig. 1 is agnostic of access layer technologies, it doesn’t
make provisions for using multiple access technologies in parallel. That was the
starting point for our proposed Adaptation layer based solution in [2]. There, we
© Springer Nature Switzerland AG 2019
J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 90–96, 2019.
https://doi.org/10.1007/978-3-319-99762-9_8
presented detailed requirements of a hybrid communication system and an architecture
using an ‘adaptation layer’ that addresses these requirements. We also discussed merits
and shortcomings of three different configurations of such an architecture and chose the
most beneficial one, which is shown in Fig. 2.

Fig. 1. ETSI ITS communication architecture

While our previous work [2] presented a comprehensive account of the merits of an
adaptation layer based approach, it didn’t cover any practical aspects of implementing
such an architecture. In the present paper, we discuss practical aspects of implementing
an adaptation layer based architecture for a hybrid communication system in the context
of the European project ADAS&ME.
The remainder of this paper is organized as follows. The next section provides
an overview of the project ADAS&ME, the communication technologies used, and the
corresponding communication requirements. It is followed by a description of the
practical approach taken to implement the adaptation layer based architecture in the
project. The paper then discusses the advantages and shortcomings of this practical
approach and finally concludes by presenting possible future work.

2 ADAS&ME Hybrid Communication


2.1 The Project
ADAS&ME [3] is a European publicly funded research project focusing on developing
ADAS (Advanced Driver Assistance Systems) that incorporate driver/rider state and
situational/environmental context. Additionally, ADAS&ME will develop an adaptive
HMI to automatically hand over between different levels of automation and thus ensure
safer and more efficient road usage for all vehicle types (conventional and electric car,
truck, bus, motorcycle).
To determine the situational/environmental context, ADAS&ME employs, along
with vehicle on-board sensors, a multitude of communication technologies as described
in the next section.
The project is funded under the H2020 scheme of the European Commission. It
started in September 2016 and will run for a duration of 42 months.
Fig. 2. Adaptation layer based architecture

2.2 ADAS&ME Communication Technologies


From a communication point of view, ADAS&ME uses two paradigms for situational
and environmental context detection as shown in Fig. 3 – direct V2X communication
between vehicles and road infrastructure over ITS G5/IEEE 802.11p, and cellular
mobile communication for exchanging information with cloud services. Together, they
serve to complement and support the vehicles’ onboard sensing systems.

Fig. 3. ADAS&ME communication paradigm

2.3 ADAS&ME Communication Requirements


Detailed communication requirements of the project are as follows.
• Cooperative awareness of vehicles on the road using cooperative awareness mes-
sages (CAMs) [4] (over ITS G5)
• Cooperative exchange of (critical) events using decentralized environmental noti-
fication messages (DENMs) [5] (over ITS G5)
• Sensor data sharing between vehicles using a Collective Perception Message
(CPM) – related to what is currently being investigated in ETSI work item
DTS/ITS-00167 [6] (over ITS G5)
• Coordination and negotiation to carry out coordinated maneuvers between vehicles
using coordination and negotiation messages – related to what is currently being
investigated in ETSI work item DTS/ITS-00184 [7] (over ITS G5)
• Receive map updates from and notify about map changes to the digital infras-
tructure in the cloud (over cellular communication)
• Receive real time traffic information from and provide updates on the current sit-
uation to the digital infrastructure in the cloud (over cellular communication)
• Receive the ‘driving intensity’ or the ‘stressfulness’ of the driven road segment
from the digital infrastructure in the cloud (over cellular communication)

3 Adaptation Layer Approach in ADAS&ME

This section presents features of the practical implementation of the adaptation layer
architecture in ADAS&ME. It includes the description of the ADAS&ME communi-
cation architecture, implementation aspects resulting from the hardware used and
existing communication stacks, and a brief description of the implemented adaptation
functions for incoming and outgoing data.

3.1 ADAS&ME Communication Architecture


ADAS&ME considers two communication technologies – 802.11p/ITS-G5 for V2X
and cellular for connectivity with digital infrastructure (cloud). This is why only a
subset of Fig. 2 is applicable, as shown in Fig. 4.

Fig. 4. ADAS&ME adaptation layer based communication architecture


3.2 Implementation
Part of the above communication stack was implemented using existing hardware for
802.11p V2X, in the form of the Denso Wireless Safety Unit (WSU). The rest of the stack
was implemented on a PC-based platform (Raspberry Pi), as depicted in Fig. 5. This
reuse of hardware once again emphasizes the importance of the modular structure of
the architecture.

Fig. 5. Hardware implementation of ADAS&ME communication stack

To ease in-house testing, a virtual cellular/backend connectivity was implemented
within the project, without actually using a cellular modem or requiring a cellular
network. A simulation of the corresponding stack was implemented using an embedded
PC (Raspberry Pi). The same setup but with an actual cellular modem is planned to be
used in real world demonstrations and tests.
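Such a virtual backend can be as simple as an in-process loopback that answers "cloud" requests locally. The sketch below is a minimal illustration under that assumption; the `VirtualCellularLink` class and the message format are invented, not ADAS&ME's actual interface.

```python
import queue

class VirtualCellularLink:
    """In-process stand-in for the cellular modem/backend: messages sent
    'to the cloud' are answered locally, so the rest of the stack can be
    exercised without a modem or a network. Purely illustrative."""

    def __init__(self):
        self._downlink = queue.Queue()

    def send(self, message: dict) -> None:
        # emulate a backend that answers map-update requests with a canned payload
        if message.get("type") == "map_update_request":
            self._downlink.put({"type": "map_update", "tiles": ["tile_42"]})

    def receive(self):
        # non-blocking read of the simulated downlink
        try:
            return self._downlink.get_nowait()
        except queue.Empty:
            return None
```

Because the upper layers only see `send`/`receive`, the same code path can later be bound to a real cellular modem for the demonstrations, as the text describes.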

3.3 Adaptation Functions


For the incoming data, the following adaptation functions have been implemented.
[a1] Adapting data received over V2X to a common data format that is used by
other systems in the vehicle
[a2] Adapting data received from the cloud to a common data format that is used by
other systems in the vehicle
[a3] For data sets that can be received over both V2X and cellular (e.g. roadworks
information), matching the overlapping information before adapting it into a
common data format
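A minimal sketch of these incoming adaptation functions, assuming invented field names for the V2X and cloud payloads and a naive position-based match for [a3]:

```python
# Sketch of the incoming adaptation functions [a1]-[a3]: V2X and cloud
# payloads are mapped into one common event format, and overlapping
# roadworks information received over both channels is merged. All field
# names here are assumptions made for illustration.

def adapt_v2x(denm: dict) -> dict:           # [a1]
    return {"kind": "roadworks", "position": denm["eventPosition"],
            "source": "v2x"}

def adapt_cloud(msg: dict) -> dict:          # [a2]
    return {"kind": "roadworks", "position": (msg["lat"], msg["lon"]),
            "source": "cloud"}

def merge_overlapping(events: list[dict], tol: float = 1e-3) -> list[dict]:
    """[a3]: collapse events that refer to (nearly) the same position,
    keeping one record and remembering both sources."""
    merged: list[dict] = []
    for ev in events:
        for m in merged:
            if (abs(m["position"][0] - ev["position"][0]) < tol and
                    abs(m["position"][1] - ev["position"][1]) < tol):
                m["source"] = m["source"] + "+" + ev["source"]
                break
        else:
            merged.append(dict(ev))
    return merged
```

With this shape, the rest of the vehicle never sees channel-specific formats: a roadworks event reported over both ITS-G5 and cellular arrives upstream as a single record.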
For the outgoing data, the following adaptation functions have been implemented.
[a4] Choosing one communication medium over the other based on capabilities
(802.11p for sending out V2X messages and cellular communication for
sending out the cloud messages)
[a5] Adapting object data received from the sensor data fusion system into a data
format suitable for sending out in collective perception messages.
[a6] Note: Sensor data fusion is a system in the vehicle responsible for fusing the
data from on-board sensors (Camera, Radar, LIDAR etc.). The data exchange
between this system and the communication stack takes place through the
management and applications layers.
The adaptation function [a5] was not envisioned in the original theoretical study of
the adaptation layer [2] but was deemed necessary in this practical implementation
within the context of ADAS&ME. This means that the Adaptation layer must perform
adaptations for the incoming/outgoing data (over the communication channel) but also
for the data from/to upper layers.
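The outgoing side can be sketched similarly; the routing table in [a4] follows the requirements listed in Sect. 2.3, while the CPM-like structure produced by the hypothetical `objects_to_cpm` helper is illustrative only and does not follow the actual ETSI message definition.

```python
# Sketch of the outgoing adaptation functions: [a4] routes each message
# type to the medium it is specified for (CAM/DENM/CPM over ITS-G5,
# cloud updates over cellular), and [a5] repacks fused sensor objects
# into a CPM-like structure. Message and field names are assumed.

V2X_MESSAGES = {"CAM", "DENM", "CPM"}

def select_medium(msg_type: str) -> str:                 # [a4]
    return "its_g5" if msg_type in V2X_MESSAGES else "cellular"

def objects_to_cpm(fused_objects: list[dict]) -> dict:   # [a5]
    """Repack objects from the sensor-data-fusion system into an
    illustrative CPM-like payload (not the ETSI-defined format)."""
    return {"type": "CPM",
            "perceivedObjects": [
                {"id": o["id"], "distance": o["range_m"],
                 "speed": o["speed_mps"]} for o in fused_objects]}
```

Note how [a5] adapts data coming from an upper layer rather than from a communication channel, which is exactly the extension of the adaptation layer's role discussed above.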
As ADAS&ME employs 802.11p V2X and cellular communication to address
non-overlapping requirements, coordinated sending of data over multiple media in
parallel was not implemented for the outgoing data. Similarly, failover in
case one communication medium becomes unavailable was not implemented.

4 Conclusions

Communication using multiple communication technologies in parallel is now
seen as an essential feature for (fully) automated vehicles. To benefit from the
capabilities of different communication technologies in the most efficient way, an
integrated architecture based approach is considered the most effective solution. Much
research, including our previous work [2], has been carried out in this particular field.
This work builds upon the theoretical work presented in [2] and provides the
specifics of a practical implementation of an adaptation layer based hybrid communication
architecture in the European research project ADAS&ME. It begins by providing the
communication requirements of ADAS&ME and establishing the need for hybrid
communication. In describing the practical implementation of the hybrid
communication architecture, it confirms the benefits of the modular structure of such an
architecture by re-using existing hardware. Moreover, this work establishes the
fundamentals of the practical use of an adaptation layer based architecture and brings us
one step closer to unified communication systems.
In the future, this work is planned to be extended by taking into account additional
communication technologies that complement or serve as alternatives to the ones
used in this implementation; LTE-V2X, 5G and edge computing are a few examples.
Additionally, this work is planned to be extended by adding top-down dynamic
communication resource management elements. Furthermore, we have already started
to integrate this system into a simulation environment.
References
1. ETSI EN 302 665, Intelligent Transport Systems (ITS); Communications Architecture
2. Mittal, P., Leinmueller, T., Spaanderman, P.: Adaptation layer based architecture for vehicular
hybrid communication. In: ITS World Congress 2017, Montreal, Canada, 29 October–2
November 2017
3. Webpage for EU project ADAS&ME. http://www.adasandme.com/. Accessed 15 June 2017
4. ETSI EN 302 637-2 V1.3.2, Intelligent Transport Systems (ITS); Vehicular Communications;
Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service,
ETSI Std., November 2014
5. ETSI EN 302 637-3 V1.2.2, Intelligent Transport Systems (ITS); Vehicular Communications;
Basic Set of Applications; Part 3: Specifications of Decentralized Environmental Notification
Basic Service, ETSI Std., November 2014
6. ETSI work item DTS/ITS-00167, Intelligent Transport Systems (ITS); Collective Perception
Service
7. ETSI work item DTS/ITS-00184, Intelligent Transport Systems (ITS); Vehicular Commu-
nications; Basic Set of Applications; Maneuver Coordination Service
Assistance and Mitigation Strategies in Case
of Impaired Motorcycle Riders:
The ADAS&ME Case Study

Luca Zanovello1(&), Stella Nikolaou2, Ioannis Symeonidis3, and Marco Manuzzi4
1 Ducati Motor Holding S.p.A., Via Cavalieri Ducati, 3, 40132 Bologna, Italy
luca.zanovello@ducati.com
2 Centre for Research and Technology Hellas/Hellenic Institute of Transport,
52, Egialias Str., 15125 Marousi, Athens, Greece
snikol@certh.gr
3 Centre for Research and Technology Hellas/Hellenic Institute of Transport,
6th Km Charilaou-Thermi Rd, 570 01 Thermi, Thessaloníki, Greece
ioannis.sym@certh.gr
4 Dainese S.p.A., Via Dell’Economia, 91, 36100 Vicenza, Italy
marco.manuzzi@dainese.com

Abstract. Riding a motorcycle requires both physical and mental effort. These
requirements are amplified by factors like long riding hours, high or low tem-
peratures, high relative humidity levels or rain. Besides exposing the rider to the
external environment, the vehicle cannot offer full aerodynamic protection and
constrains him/her to a fixed position, which is less comfortable than that of a car.
Furthermore, physical effort is required to steer and actively balance the
motorcycle. Such factors may induce impairing states like physical fatigue,
distraction and stress. The work carried out within the ADAS&ME project aims
to create a system able to detect, and possibly in extreme conditions
prevent, these states, then to provide adequate assistance to the rider during
long touring travels and, if the situation becomes safety critical, to actively enable
intervention functions with an embedded ad-hoc safety strategy.

Keywords: Fatigue · Stress · Distraction · Advanced rider assistance system · HMI · Assistance · Mitigation

1 Introduction

The riding task is a complex and demanding activity from both a physical and a mental
point of view. Riders are, as a matter of fact, directly exposed to environmental and weather
conditions, such as high/low temperatures (at times extreme), high humidity levels
and atmospheric agents like rain, wind and fog. Besides that, the motorcycle itself gen-
erates noise and vibrations that are difficult to attenuate, since the rider sits a few centimetres
above the engine. The motorcycle’s ergonomics are closely related to the PTW (Powered
Two-Wheeler) typology, but the knee flexion angle is generally greater than 90°.

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 97–107, 2019.
https://doi.org/10.1007/978-3-319-99762-9_9
Furthermore, the motorcycle has complex 3D dynamics in which the rider has an active role:
as an example, to set a curve trajectory the rider does not merely steer in the direction of the
curve but counter-steers and then moves his/her body to increase the lean angle while
continuously controlling the throttle [1]. This is even more tiring if the vehicle is fully
loaded and/or is carrying a passenger.
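The physical effort of cornering can be quantified with the standard idealisation that, in a steady-state curve, the centripetal and gravitational moments balance, giving tan(θ) = v²/(g·r). The sketch below applies this textbook formula; it neglects tyre width, rider offset and gyroscopic effects, so it is an approximation rather than a full motorcycle dynamics model.

```python
import math

def lean_angle_deg(speed_mps: float, radius_m: float, g: float = 9.81) -> float:
    """Idealised steady-state lean angle for a curve: tan(theta) = v^2 / (g * r).
    Neglects tyre width, rider body offset and gyroscopic effects."""
    return math.degrees(math.atan(speed_mps ** 2 / (g * radius_m)))
```

At 90 km/h (25 m/s) through a 100 m radius curve this gives a lean angle of roughly 32.5°, which illustrates why sustained cornering, combined with counter-steering and body movement, is physically demanding over long rides.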
These factors have a direct influence on the riding experience, reducing comfort
and inducing states like fatigue, stress or distraction. Kuschefski et al. [2] have iden-
tified climate, posture and noise as the highest sensory strains for riders. Also, other
studies correlate hot climate conditions with accidents [3, 4]. The MAIDS (Motorcycle
Accidents In Depth Study) [5] analysed 921 motorcycle crashes that occurred between 1999
and 2000 in Europe, and showed that human factors were the primary cause in 37.4%
of the cases; in 10.6% of these, the main factor was “attention failure”, a general term
which includes distraction, stress and other related rider impairments.
Two dedicated Use Cases within the ADAS&ME European project are specifically
addressing PTWs, with the target to create an effective assistance and mitigation strategy
in such circumstances and, ultimately, develop an adaptive HMI system based on current
rider state, providing customised and personalised support at different rider incapacity
levels.

2 The ADAS&ME Case Study

2.1 Overview
ADAS&ME is a research project funded by the EC under the Horizon 2020
framework programme. The project addresses several vehicle types: truck,
conventional car, electric car, bus and motorcycle. The general aim of the project is to
develop advanced driver/rider assistance system functions and an adaptive HMI
(Human Machine Interface) that take into account the driver/rider state and the situ-
ational and environmental context, to ensure safer and more efficient road usage [6].

2.2 Use Cases Definition


Two UCs (Use Cases) within ADAS&ME are dedicated to motorcycles:
• UC E: “Long Range Attentive Touring with Motorbike”;
• UC F: “Rider Faint”.
Use Case E addresses situations typically experienced during long leisure
tours, where riders ride for several hundred kilometres over more than one day, usually
in groups, with few stops and resting periods. This can be due to their wish to retain
contact with the other riders of the group, to avoid reaching a selected destination after
dark, or even to challenge themselves to high or extreme mileage coverage. In such cases
motorcyclists tend to endure fatigue, distraction and stress, further enhanced by related
side effects such as hypo-/hyperthermia, dehydration and low sleep quantity/quality.
Use Case F covers an extreme case of the previous Use Case (UC E), occurring
when an impaired rider ignores all preventive warnings and rather decides to keep
Assistance and Mitigation Strategies 99

riding despite the fact that his/her ability to control the vehicle has significantly
deteriorated. A focus group study with riders [7] confirmed that this can happen
especially when motorcyclists are close to their final (for the day) destination, and the
relatively small remaining distance discourages them from stopping at an intermediate rest area.
The above Use Cases were communicated to end users, using both an on-line
survey and a focus group session, and further discussed and analysed by stakeholders
and experts through an online survey and a dedicated workshop, held in April 2017 in
Brussels [7, 8]. The top ranked scenarios for UC E were:
• E2: Assistance, during long range touring, in case of inattention;
• E1: Assistance, during long range touring, in case of tiredness;
• E3: Assistance during long range touring, in case of stress;
• E4: Activation of active systems, if the rider is more and more tired and ignoring
assistance;
while for UC F were:
• F1: Activation of active systems if the rider is fainting;
• F2: Activation of active systems if the rider is going to faint and ignoring assistance.
UC F was kept separate and not merged into UC E, with Scenario E4 representing a
bridge between them. This reflects end-users’ feedback: the motorcyclists’ community
is traditionally sceptical towards innovation and the advantages of active support systems,
confirmed by the fact that the highest ranked scenarios in UC E do not foresee inter-
vention, but rather milder “assistance” and mitigation advice, which
the rider can ignore at any time; however, riders may accept an active intervention if the
situation becomes safety critical, e.g. in case of an (imminent) loss of control.
The focus group also provided very useful feedback for refining the rider states
under consideration. In particular, for fatigue, riders identified as a study of potential
interest the combination of muscular fatigue caused by vibrations, a fixed riding posture
kept for hours and demanding manoeuvres at bends, together with the exposure to high
temperatures and extreme sunlight. As a result, for this state specifically, the term
“physical fatigue” was introduced and is currently in focus within ADAS&ME.

3 Rider Monitoring and Adaptive HMI Strategies

3.1 Rider State Monitoring


The Rider State Monitoring Subsystem includes a wide set of sensors integrated in the
PPE (Personal Protective Equipment), as depicted in Fig. 1. In more detail, the
wearable platform includes five main elements with integrated sensors:
• Helmet:
– 8 temperature sensors, to monitor head skin temperature. They are distributed
over the internal surface of the helmet, along the air vent channels and in the
temporal area;
– 1 6-axis IMU (Inertial Measurement Unit), to measure head orientation;
100 L. Zanovello et al.

Fig. 1. Monitoring sensors used in ADAS&ME project

• Right glove:
– 1 air temperature and relative humidity sensor;
– 1 EDA (Electro-Dermal Activity) sensor, to measure skin conductance;
– 1 UV (UltraViolet) sensor, to measure the exposure to solar radiation;
– 1 6-axis IMU, to measure vibrations and hand orientation;
• Left glove:
– 1 air temperature and relative humidity sensor;
– 1 PPG (PhotoPlethysmoGram) sensor, to monitor PR (Pulse Rate);
– 1 UV sensor, to measure the exposure to solar radiation;
– 1 6-axis IMU, to measure vibrations and hand orientation;
• Undershirt:
– 1 3-electrodes ECG (ElectroCardioGram) sensor, to measure HR (Heart Rate)
and HRV (Heart Rate Variability);
– 1 chest strap, to monitor RD (Respiration Depth) and RR (Respiratory Rate);
– 1 Temperature and relative humidity sensor, to measure torso skin temperature
and humidity;
• Back Protector:
– 1 GPS unit, to monitor the rider’s trip and speed;
– 1 altimeter unit, able to monitor air pressure;
– 1 9-axis IMU to measure torso orientation;
– the back protector also includes a control unit, which manages all the afore-
mentioned sensors, and which communicates with the motorcycle through an RF
(Radio Frequency) channel.
On board the vehicle there are other complementary sensors, which provide
information about the vehicle dynamics and environmental/situational data, in
particular:

• 1 5-axis IMU, able to estimate the roll and pitch angles, the roll and pitch rates and
the three linear accelerations;
• the ABS unit, sharing information about the vehicle speed and brake usage;
• 1 air temperature sensor;
• 1 navigation unit, connected with the motorcycle through a BT (BlueTooth) channel
sending information about surrounding traffic.
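Conceptually, the monitoring logic consumes time-aligned samples fused from the wearable platform and the vehicle bus. The sketch below illustrates such a fused record; the field names and dictionary keys are illustrative placeholders, not the actual ADAS&ME signal interface (the real signal set is larger, as the lists above show):

```python
from dataclasses import dataclass

@dataclass
class RiderSample:
    """One fused sample from the wearable platform and the vehicle bus.

    Field names are illustrative; the actual ADAS&ME signal set is larger.
    """
    timestamp_s: float
    head_skin_temp_c: float      # mean of the 8 helmet temperature sensors
    heart_rate_bpm: float        # from the 3-electrode ECG in the undershirt
    skin_conductance_us: float   # EDA sensor in the right glove
    uv_index: float              # UV sensors in the gloves
    speed_kmh: float             # from the ABS unit on the vehicle
    lean_angle_deg: float        # from the on-board 5-axis IMU
    air_temp_c: float            # on-board air temperature sensor

def fuse(wearable: dict, vehicle: dict, t: float) -> RiderSample:
    """Combine the two asynchronous sources into one time-aligned sample."""
    return RiderSample(
        timestamp_s=t,
        head_skin_temp_c=sum(wearable["helmet_temps"]) / len(wearable["helmet_temps"]),
        heart_rate_bpm=wearable["hr"],
        skin_conductance_us=wearable["eda"],
        uv_index=max(wearable["uv_left"], wearable["uv_right"]),
        speed_kmh=vehicle["speed"],
        lean_angle_deg=vehicle["lean"],
        air_temp_c=vehicle["air_temp"],
    )
```

In practice, such fusion would also have to handle the differing sampling rates of the RF-connected back-protector control unit and the vehicle bus.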
For the development of the rider state monitoring system, experiments with vol-
unteers were conducted at CERTH premises in Thessaloniki, during December 2017
and January 2018 [9]. The objective of the experiments was to collect data for training
the state monitoring algorithms, as well as for the evaluation of the accuracy of the
integrated wearable sensors, relative to medical reference equipment. The experiments
included both on road testing and simulation tests using a motorcycle simulator placed
in a specifically adapted environmental chamber. Selected simulator scenarios and
environmental conditions relative to the addressed UCs were examined.
The volunteers were instrumented with both the wearable platform and the refer-
ence medical equipment and additionally they responded to self-assessment ques-
tionnaires regarding their condition. The data captured during the experiments was used
for training three different rider state detection algorithms, addressing physical fatigue,
distraction and stress, based on machine learning classifiers, currently in development.
For each state, different parameters are used, and the algorithms have a different output
logic, as presented below.
• Physical Fatigue
Physical fatigue comprises two sub-states: muscular fatigue and thermal impair-
ment. The muscular fatigue due to riding can be broadly attributed to two major
activities: maintaining the riding posture and generating the forces required to
control the motorcycle. Thermal impairment during hot weather is often first
experienced as hyperthermia driven by dehydration; in extreme cases the rider may
subsequently faint. The environmental chamber was used to induce the state of
thermal impairment by simulating hot weather conditions with high humidity, high
temperatures and strong heat radiation, while muscular fatigue was induced by
riding for one hour on the road and then continuing for 30 min on the simulator.
The main parameters used to address this state are: skin temperature in different
body regions, respiration depth, respiration rate, heart rate, heart rate variability,
skin wettedness, riding time, vehicle dynamics information (speed, lean angle, …),
air temperature, air relative humidity and UV index. From these inputs, three levels
of the rider condition are identified (uncritical, critical, risky), combined with a
confidence level.
• Distraction
Distraction can be seen as a subset of inattention, where the mismatch of applied
resources [10] to the driving task is caused by visual, auditory, biomechanical
(physical) or cognitive distraction. For the needs of Use Case E, only visual
distraction was studied, e.g. looking away from traffic too often or for too long [11].

For the experiment, a bright light was placed on the dashboard of the bike simulator
and on the left side of the rider to attract his/her attention.
The main parameters used to address this state are: head and torso translations and
rotations, vehicle dynamics information (speed, lean angle, …) and riding time,
while the identified state comprises two levels (not distracted, distracted) along
with a confidence level.
• Workload Stress
In [12], the definition of workload is based on the amount of resources required
by a set of concurrent tasks, distinguished into visual, motor and mental workload.
During the Use Case E experiment, workload stress was induced in volunteers
while riding the simulator, using methods and tools discussed in [13–15].
The main inputs for this state are: heart rate, heart rate variability, respiratory rate
and skin conductance, while three detected levels are planned (normal, increased,
high), combined with their respective confidence levels.
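The level-plus-confidence output logic shared by the three detectors can be sketched as follows. This is a placeholder rule-based stand-in for the machine-learning classifiers under development, shown for physical fatigue; all thresholds, feature keys and the confidence formula are hypothetical:

```python
# Illustrative output logic for a rider-state detector: each algorithm
# reports a discrete level plus a confidence value. The thresholds below
# are placeholders, not the trained ADAS&ME models.

FATIGUE_LEVELS = ("uncritical", "critical", "risky")

def classify_physical_fatigue(features: dict) -> tuple:
    """Map input features to one of three levels plus a confidence score.

    In the project this is a machine-learning classifier trained on the
    experimental data; here a simple score over three of the listed
    inputs stands in for it.
    """
    score = 0.0
    if features["riding_time_h"] > 2.0:
        score += 0.4          # long fixed posture and vibrations
    if features["air_temp_c"] > 32.0:
        score += 0.4          # hot-weather thermal strain
    if features["heart_rate_bpm"] > 110.0:
        score += 0.2          # elevated physiological load
    if score >= 0.8:
        level = "risky"
    elif score >= 0.4:
        level = "critical"
    else:
        level = "uncritical"
    confidence = min(1.0, 0.5 + score / 2)  # placeholder confidence model
    return level, confidence
```

The distraction and workload-stress detectors would follow the same pattern with their respective feature sets and level schemes (two levels for distraction, three for workload stress).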

3.2 Design Process for Assistance and Mitigation Strategies


The design process for the assistance and mitigation strategies was based on previous
European research work conducted within SAFERIDER EU project, which developed
and evaluated different HMI elements and combined information/warning outputs
(acoustic, haptic, visual) to achieve a safe and rider-friendly communication for several
Advanced Rider Assistance Systems and On-Bike Information Systems functions
[16–18]. The state-of-the-art knowledge of SAFERIDER is adapted and extended to
serve both HMI design innovation and the priority Use Cases scenarios within
ADAS&ME.
The strategy (and incremental state levels) will therefore be facilitated through the
use of appropriate combinations of HMI elements (see Fig. 2), selected to guarantee an
ample range of different feedback. The available HMI elements on the vehicle include
a 5″ TFT dashboard, where icons and written messages can be displayed; a
navigation unit, to display and, if needed, re-route the trip; and the hazard lights, which
can be used to warn other road users when the rider’s condition is risky. The
wearable platform (PPE) offers further elements: a headset, providing the possibility
to include tones, auditory icons and/or voice messages in the strategy; an Info LED or
Info Helmet, i.e. an add-on module to be placed under the helmet visor, including an
LED strip whose colours, light intensity and flashing frequency can be controlled;
haptic vibration motors integrated in the helmet and in the gloves to deliver haptic
feedback; and finally an LED strip mounted on the rear of the helmet, under the
spoiler, which can be used together with the hazard lights to alert other road users.
With regard to the assistance and mitigation strategy design, an adaptive approach
was chosen, following basic guidelines retrieved from previous experience [17, 18],
riders’ and stakeholders’ feedback, and insight from the development team. The key
identified design guidelines include:

Fig. 2. HMI elements used in ADAS&ME project

• To convey low-urgency information, temporary icons are suggested. Flashing should
be reserved for demanding/critical situations; in other circumstances it has a dis-
tracting effect [19, 20];
• Riders are sceptical towards new technologies, and overly frequent feedback would
probably encourage them to turn off the whole system. Furthermore, the strategy
should take into account the distance to the final destination; no rider would stop to
rest if the target location is nearby [7, 16, 17];
• A multimodal approach is suggested. This was requested by riders during the focus
group [7]; it is useful as redundancy in case of an HMI element failure and, above
all, multimodal feedback has been shown to generate shorter reaction times [19];
• Motorcycles, as vehicles, have specificities: there is no cabin, the dashboard has a
different size and position in the vehicle, the rider wears a helmet [21];
• The HMI elements available for motorcycles are far more limited in space and
capacity compared to those available for cars [16]. This should be taken into
account during the definition of the feedback. Nevertheless, some possibilities for
the rider to customise the HMI experience should be present;
• Advanced rider assistance systems should be employed only when the impaired
rider is no longer able to fully control the vehicle [7, 16, 17];
• Customisation and personalisation are necessary to achieve both user acceptance
and comfort. To offer riders the possibility to customise the way the feedback is
conveyed, a specific menu is added to the dashboard, where they can set their
preferences (e.g. turn-off the haptic feedback in the gloves and in the helmet).
Furthermore, it is possible to change the Info Helmet settings through a mobile app,
developed for iOS.
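The interplay between rider preferences and the multimodal guideline can be sketched as a channel-selection step; the preference keys, urgency names and channel identifiers below are illustrative placeholders, not the actual dashboard menu:

```python
# Sketch of rider-customisable HMI preferences applied when choosing
# feedback channels. Keys and defaults are illustrative.

DEFAULT_PREFS = {
    "haptic_gloves": True,
    "haptic_helmet": True,
    "audio_voice_messages": True,
    "info_helmet_led": True,
}

def active_channels(prefs: dict, urgency: str) -> list:
    """Select feedback channels for a message, honouring rider preferences.

    The dashboard is always used; optional channels are added only if the
    rider has not turned them off, and haptics only from 'warning' urgency
    upward, following the multimodal-but-not-intrusive guideline.
    """
    channels = ["dashboard_icon"]
    if prefs.get("audio_voice_messages", True):
        channels.append("headset_audio")
    if urgency != "information":
        if prefs.get("haptic_gloves", True):
            channels.append("glove_vibration")
        if prefs.get("haptic_helmet", True):
            channels.append("helmet_vibration")
    if prefs.get("info_helmet_led", True):
        channels.append("info_helmet_led")
    return channels
```

A rider who, for instance, disables the glove haptics would still receive the remaining modalities, preserving the redundancy argued for above.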

3.3 Active Functions to Support Rider Assistance and Vehicle Control


Two new advanced rider assistance systems are in development specifically for this
project: the “Recovery Mode” and the “Capsize Control”. The first limits the
motorcycle performance by controlling the torque delivered by the engine and the
vehicle speed. The limit values of these two parameters can be varied without sig-
nificant issues, and the function is designed to handle transients. Furthermore, in case
other on-board functions or strategies, like traction control, request a change in the
delivered torque or speed, the Recovery Mode handles the situation smoothly, avoiding
conflicts. The first sub-plot of Fig. 3 shows the requested torque and the
delivered one (thick line), while the second sub-plot shows the vehicle
speed reaching the target set by the function.
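The behaviour described above can be sketched as a rate-limited clamp with min-based arbitration between concurrent torque requests. The actual implementation is not public; all names, parameters and values below are illustrative assumptions:

```python
# Minimal sketch of a Recovery-Mode-style limiter: the delivered torque is
# the smallest of all concurrent requests (rider, traction control,
# recovery mode), and the recovery-mode limit itself is ramped to handle
# transients smoothly. Parameter values are illustrative.

def recovery_mode_step(requested_nm: float,
                       other_limits_nm: list,
                       current_limit_nm: float,
                       target_limit_nm: float,
                       ramp_nm_per_step: float = 5.0) -> tuple:
    """Return (delivered torque, updated recovery-mode limit) for one cycle."""
    # Move the recovery-mode limit toward its target at a bounded rate,
    # so engaging or releasing the function never causes a torque step.
    if current_limit_nm > target_limit_nm:
        new_limit = max(target_limit_nm, current_limit_nm - ramp_nm_per_step)
    else:
        new_limit = min(target_limit_nm, current_limit_nm + ramp_nm_per_step)
    # Arbitration: honour whichever function requests the lowest torque,
    # so e.g. a traction-control request is never overridden.
    delivered = min([requested_nm, new_limit] + other_limits_nm)
    return delivered, new_limit
```

Called once per control cycle, the limit ramps from its current value toward the Recovery Mode target while any stricter request (here modelled as `other_limits_nm`) always wins.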

Fig. 3. ADAS&ME recovery mode control [DUCATI]

The “Capsize Control”, on the other hand, is a subsystem able to increase directional
stability at low speeds, when the rider risks losing control of the vehicle. It is based on
the torque generated, through the so-called gyroscopic effect, by a pair of counter-
rotating gyroscope flywheels mounted on the rear of the motorcycle. Its functionality
supports the stabilisation of the bike during a safe-stop manoeuvre.
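As a rough illustration of the physics involved (the textbook gyroscopic-precession relation, not the project's actual control law), the roll torque produced by one gimballed flywheel can be written as:

```latex
% Roll torque from precessing a spinning flywheel (textbook approximation)
\tau_{\mathrm{roll}} = I_f \, \omega_f \, \dot{\delta}
```

where $I_f$ is the flywheel's moment of inertia about its spin axis, $\omega_f$ its spin rate and $\dot{\delta}$ the gimbal (precession) rate. A common rationale for using two counter-rotating flywheels is that steering their gimbals in opposite directions lets the useful roll torques add while the net stored angular momentum of the pair largely cancels.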

3.4 Implementation of the Adaptive HMI for Rider Monitoring


All of the above-mentioned guidelines and functions form the input for the imple-
mentation of the adaptive HMI system for rider monitoring. Figures 4 and 5 show two
excerpts from the tentative UML (Unified Modelling Language) diagram for scenarios
E1 and E4. A closer look highlights the following:
• at first, the strategy is designed only to inform/warn the rider: an icon representing
the state will be displayed on the dashboard, while an auditory icon will suggest
taking a rest. The effectiveness of the haptic feedback on the gloves and in the
helmet will also be investigated, as will the feedback duration, in order to avoid
irritating or frustrating the rider (Fig. 4);

Fig. 4. UML diagram detail: information level

Fig. 5. UML diagram detail: warning level

• if the rider decides to ignore the warnings and his/her state deteriorates further,
the strategy will follow a stronger approach: a text message will be displayed along
with the icon, the auditory feedback will also include a vocal message, and
vibrations will become more intense. The possibility of showing rest area locations
and distances via the navigator will be explored, and even of performing an
automatic re-routing that selects one of those locations as an intermediate
destination (Fig. 5). If the rider still ignores the more intensive warning, the system
will perform a safety check for a few seconds (e.g. there is low traffic, the
motorcycle is not approaching an intersection or a tight curve) and will then
activate the Recovery Mode function, which will limit, as previously said, the
motorcycle performance. This active function aims to convince the rider to stop.
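The escalation steps in the UML excerpts can be summarised as a small decision function; the state names, levels and checks below are illustrative placeholders for the project's actual diagram:

```python
# Sketch of the escalation logic described above:
# information -> warning -> active intervention, with a safety check
# before Recovery Mode is engaged. Names are illustrative.

def next_action(level: int, warnings_ignored: bool, safety_check_ok: bool) -> str:
    """Choose the assistance step for the current rider-state level.

    level: 0 = uncritical, 1 = critical, 2 = risky.
    """
    if level == 0:
        return "none"
    if level == 1:
        return "inform"          # dashboard icon + auditory suggestion to rest
    if not warnings_ignored:
        return "warn"            # icon + text, vocal message, stronger haptics
    if safety_check_ok:          # e.g. low traffic, no intersection/tight curve
        return "recovery_mode"   # limit torque and speed to convince a stop
    return "warn"                # keep warning until intervention is safe
```

Note how the active function is reached only when the rider has ignored the milder warnings *and* the situational safety check passes, mirroring the end-user feedback discussed in Sect. 2.2.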
At the moment the assistance and mitigation strategy is being implemented in the
motorcycle demonstrators. First tests with the vehicle are expected from October 2018.

4 Conclusions

Long motorcycle touring is a demanding activity, even if it is nearly always done for
leisure or lifestyle. The physical effort it requires, in combination with the exposure to
the external environment and potentially adverse conditions, can induce impairing states
like physical fatigue, distraction and potentially stress. In this paper, the work performed
in the ADAS&ME project to assist the rider in these situations and mitigate possible
negative consequences is presented. The paper describes how the scenarios were
identified and the on-going implementation of the rider monitoring system, which
consists of the sensors integrated in the protective gear and on the vehicle, as well as the
rider state monitoring logic. Furthermore, the design process for the assistance and
mitigation strategies for the addressed Use Cases is described: this includes the HMI
elements and the advanced rider assistance systems under development, a list of
guidelines for the effective implementation of such strategies and, finally, the description
of the UML diagram defined for two scenarios. The assistance and mitigation strategies
will be tested from Autumn 2018, but already represent a promising approach to deal
with potentially risky situations triggered by rider impairment. The first version of the
overall adaptive HMI rider monitoring system prototype is expected to be ready by
Spring 2019 and to be tested on both open roads and test tracks in Barcelona in Summer 2019.

Acknowledgements. The ADAS&ME project has received funding from the European Union’s
Horizon 2020 research and innovation programme under grant agreement No 688900.

References
1. Cossalter, V.: Motorcycle Dynamics, 2nd edn. LuLu Enterprises Inc., Morrisville (2006)
2. Kuschefski, A., Haasper, M., Vallese, A.: Advanced rider systems for powered-two-
wheelers (ADVANCED RIDER ASSISTANCE SYSTEM-PTW). In: Proceedings of 8th
Institut für Zweiradsicherheit Conference, Köln, Germany, 04–05 April 2010
3. De Rome, L., Senserrick, T.: Factors associated with motorcycle crashes in New South
Wales, Australia, 2004–2008. J. Transp. Res. Board 2265, 54–61 (2011)

4. Haworth, N., Rowden, P.: Proceedings of Australasian Road Safety Research, Policing and
Education Conference, 2006, Gold Coast, Queensland (2006)
5. MAIDS: In-depth investigations of accidents involving powered two wheelers, 2009,
Bruxelles, Belgium. www.maids-study.eu
6. www.adasandme.com. Accessed 2016
7. Dukic Willstrand, T., Anund, A., Strand, N., Nikolaou, S., Touliou, K., Gemou, M., Faller,
F.: Deliverable 1.2—Driver/Rider models. Use Cases and implementation scenarios,
ADAS&ME Project (2017). www.adasandme.com/dissemination
8. Dukic Willstrand, T., Anund, A., Pereira Cocron, M., Griesche, S., Strand, N., Troberg, S.,
Zanovello, L.: Collecting end-users needs regarding driver state-based automation in the
ADAS&ME project. In: Proceedings of 7th Transport Research Arena TRA 2018, Vienna,
Austria, 16–19 April 2018
9. Symeonidis, I., Nikolaou, S., Touliou, K., Gaitatzi, O., Chrysochoou, E., Xochelli, A.,
Manuzzi, M., Guseo, T., Zanovello, L., Georgoulas, G., Bekiaris, E.: ADAS&ME:
experiments for the development of a rider condition monitoring system. In: International
Motorcycle Conference (2018, accepted)
10. Young, K., Regan, M.: Driver distraction: a review of the literature. In: Faulks, I.J., Regan,
M., Stevenson, M., Brown, J., Porter, A., Irwin, J.D. (eds.) Distracted Driving, pp. 379–405.
Australasian College of Road Safety, Sydney (2007)
11. Kircher, K., Ahlström, C.: Issues related to the driver distraction detection algorithm AttenD.
In: First International Conference on Driver Distraction and Inattention, 2009, Gothenburg,
Sweden (2009)
12. Hoedemaker, M.: Summary description of workload indicators: workload measures, human
machine interface and the safety of traffic in Europe growth project. s.l.: HASTE. Institute
for Transport Studies. University of Leeds. Leeds (2002)
13. Kirchner, W.K.: Age differences in short-term retention of rapidly changing information.
J. Exp. Psychol. 55(4), 352–358 (1958)
14. Brouwer, A.M., Hogervorst, M.A.: A new paradigm to induce mental stress: the Sing-a-Song
Stress Test (SSST). Front. Neurosci. 8, 224 (2014)
15. Stansfeld, S.A., Matheson, M.P.: Noise pollution: non-auditory effects on health. Br. Med.
Bull. 68(1), 243–257 (2003)
16. Touliou, K., Margaritis, D., Spanidis, P., Nikolaou, S., Bekiaris, E.: Evaluation of Rider’s
support systems in power two wheelers (PTWs). Procedia Soc. Behav. Sci. 48, 632–641
(2009)
17. Bekiaris, E., Nikolaou, S., et al.: SAFERIDER-advanced telematics for enhancing the safety
and comfort of motorcycle riders. In: 17th ITS World Congress, Japan (2010)
18. Pauzie, A., Nikolaou, S.: Ergonomic inspection of IVIS for riders: recommendations for
design and safety. Paper 4040 Presented at the 16th ITS World Congress, 21–25 September
2009, Stockholm, Sweden (2009)
19. Politis, I., Brewster, S., Pollick, F.: Evaluating multimodal driver displays of varying
urgency. In: Proceedings of the 5th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (AutomotiveUI’13), October 28–30, 2013, Eind-
hoven, The Netherlands (2013)
20. Campbell, J.L., Richard, C.M., Brown, J.L., McCallum, M.: Crash Warning System
Interfaces: Human Factors Insights and Lessons Learned. US DoT Report No HS-810 697
(2007)
21. Broughton, P.S., Fuller, R., Stradling, S., Gormley, M., Kinnear, N., O’Dolan, C., Hannigan,
B.: Conditions for speeding behavior: a comparison of car drivers and powered two wheeled
riders. Transp. Res. Part F 12, 417–427 (2009)
Data, Clouds and Machine Learning
Towards a Privacy-Preserving Way of Vehicle
Data Sharing – A Case
for Blockchain Technology?

Christian Kaiser1(✉), Marco Steger1, Ali Dorri2, Andreas Festl1,
Alexander Stocker1, Michael Fellmann3, and Salil Kanhere2
1 Virtual Vehicle Research Center, Inffeldgasse 21/A, 8010 Graz, Austria
{christian.kaiser,andreas.festl,alexander.stocker}@v2c2.at, marco.st1987@gmail.com
2 School of Computer Science and Engineering, University of New South Wales,
K17, Barker Street, Kensington, Sydney, NSW 2052, Australia
{ali.dorri,salil.kanhere}@unsw.edu.au
3 University of Rostock, Albert-Einstein-Straße 22 (Konrad Zuse Haus),
18059 Rostock, Germany
michael.fellmann@uni-rostock.de

Abstract. Vehicle data is a valuable source for digital services, especially with
a rising degree of driving automatization. Although data protection regulation
has become stricter with Europe’s GDPR, we argue that the exchange of
vehicle and driving data will massively increase. We therefore raise the question
of what a privacy-preserving way of vehicle data exploitation would be.
Blockchain technology could be an enabler, as it is associated with privacy-
friendly concepts including transparency, trust, and decentralization. Hence, we
launch the discussion on unsolved technical and non-technical issues and
provide a concept for an Open Vehicle Data Platform that respects the privacy
of both the vehicle owner and the driver, using Blockchain technology.

Keywords: Blockchain technology · Vehicle data sharing ·
Automotive security and privacy · Open Vehicle Data Platform ·
Privacy settings · Quantified vehicles

1 Introduction and Scope

1.1 Introduction and Motivation


Future smart vehicles will provide advanced autonomous driving functions and will be
highly connected to other vehicles, roadside infrastructure and to various cloud ser-
vices. The information gained through these wireless interconnections will be used by
any smart vehicle to enrich its own information gathered by built-in sensors such as
cameras and radar sensors to further increase the reliability of its autonomous driving
functions. However, it will also help solve automotive research topics like the detection
of driver fatigue or driver distraction. These research topics will receive additional

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 111–122, 2019.
https://doi.org/10.1007/978-3-319-99762-9_10
112 C. Kaiser et al.

focus once autonomously driven vehicles face real-world problems on the street and
have to force the driver to take over. However, the data collected within
current vehicles of limited smartness can be used beyond assisting their drivers in
driving. Moreover, vehicle data is valuable for third parties [1–3] including e.g. vehicle
manufacturers (i.e., OEMs), suppliers, and traffic managers, to name three stakeholders,
although there are still many open issues connected to the exchange of vehicle usage
data. One dominant challenge for vehicle and driving data exploitation is how to
safeguard the privacy of the driver. Although privacy regulation has become stricter in
Europe with the General Data Protection Regulation (GDPR) [4], we argue that the
exchange of vehicle usage data will increase substantially in the future due to two recent
developments: tech startups pushing artificial intelligence technologies and the rising
interest of the automotive industry in fostering the automated driving paradigm.
Shortcomings of current vehicle data provisioning approaches are: Data, informa-
tion, and services are mostly exchanged within proprietary closed environments, as
collected vehicle usage data is usually directly sent from the smart vehicle to a single
service provider (e.g., by a device connected to the OBD-II interface of the vehicle or
via the drivers’ smartphone). As a result, a vehicle owner willing to share data with
multiple service providers will have to provide the data multiple times while collecting
the data with different devices in parallel. This can be critical due to the large amount of
data collected by smart vehicles (up to 4 TB of data per day are expected [5]), and
because a significant portion of current service providers (e.g., Automile and Zubie) is
using dedicated OBD-II dongles to gather data from smart vehicles. Thus, it is currently
not feasible, or at least not practical, to use several services at the same time. Moreover,
these closed systems certainly disrespect the vehicle owner’s privacy, as they make
transparent neither how they further monetize the gathered data nor with whom they
share it. They typically do not allow the end user to control what data is transferred and
shared, and most of them have a lock-in effect, i.e. they use the vehicle data for their
own purposes. Finally, their business models do not scale yet, as their user community
is still composed mostly of early adopters [1].

1.2 Contributions and Structure


Sharing data always holds the risk of violating one’s privacy. So, what is a privacy-
preserving way of vehicle data exploitation? Can Blockchain technology act as an
enabler?
Blockchain technology is currently revolutionizing the way smart contracts
between parties are managed, due to its outstanding advantages, namely decentralization
and transparency by design. The application of Blockchains as a solid basis
for a secure data exchange platform seems promising to solve the challenge of
monetizing vehicle usage data while protecting the data owner’s privacy. In contrast to
closed systems, a so-enabled Open Vehicle Data Platform for vehicle usage data based
on smart contracts maintained within Blockchains would allow the user to choose
which service providers can access certain vehicle data for which exploitation purpose.
Thus, end users can make use of services from various service providers at the same
time, while remaining in full control of the collected data, which will also be crucial for
autonomous driving. Full control can be achieved by employing privacy settings for

each authorized service provider. The user can decide whether to share only anon-
ymized data (e.g., as required by traffic management systems), vehicle-specific data
(e.g., for OEMs for continuous improvement), or even user-specific data (e.g., as
required by insurance companies to provide flexible insurance rates in Pay-As-You-
Drive (PAYD) models [6]). Such a platform will be able to support a wide range of
service providers and allow different benefit/business models advantageous for both the
users and the service providers.
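The three sharing levels described above can be sketched as a per-provider filter applied before any record leaves the platform; provider names, field names and the default policy are illustrative assumptions, not a specification of the proposed platform:

```python
# Sketch of per-provider privacy settings with the three sharing levels
# discussed above (anonymized / vehicle-specific / user-specific).

LEVELS = ("anonymized", "vehicle_specific", "user_specific")

privacy_settings = {
    "traffic_management": "anonymized",       # e.g. traffic management systems
    "oem_diagnostics": "vehicle_specific",    # e.g. OEM continuous improvement
    "payd_insurance": "user_specific",        # e.g. Pay-As-You-Drive insurers
}

def filter_record(record: dict, provider: str) -> dict:
    """Strip identifying fields according to the provider's sharing level."""
    level = privacy_settings.get(provider, "anonymized")  # default: strictest
    shared = dict(record)
    if level in ("anonymized", "vehicle_specific"):
        shared.pop("driver_id", None)   # no user-specific data
    if level == "anonymized":
        shared.pop("vin", None)         # no vehicle-specific data either
    return shared

record = {"vin": "WXX123", "driver_id": "alice", "speed_kmh": 87.0}
```

In the envisioned platform, such settings would be bound to smart contracts so that the filtering policy itself is transparent and auditable by the data owner.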
Towards proposing a concept for an Open Vehicle Data Platform, in Sect. 1 we
review existing solutions for vehicle data sharing, highlight strengths and weak-
nesses, and particularly focus on potential privacy issues. Thereafter, in Sect. 2, we
provide related work and background for Blockchain technology in the automotive
domain and for connected vehicles. Consequently, we discuss the actors and roles of a
vehicle data sharing ecosystem, the underlying privacy challenge and propose possible
privacy setting schemes protecting the privacy of the involved users, followed by a
concept for a Blockchain-based Open Vehicle Data Platform in Sect. 3. In the latter,
Blockchain technology ensures a trustworthy data exchange between all involved
entities and users. After providing a description of a conceptual workflow, we discuss
open issues and related aspects required to realize the proposed data sharing platform
and thereby conclude the paper with a discussion and outlook in Sect. 4.

2 Related Work and Background

2.1 Blockchain Technology (in Automotive)


Blockchains were first introduced as the underlying technology of Bitcoin in 2008 [7]. In
this initial form, single transactions are used to describe a cash flow from one entity to
another. Every new transaction is distributed to the entire Blockchain system; in a
subsequent step a predefined number of these transactions is compiled into a block, and
this block is then stored in the Blockchain.
database, where blocks are immutably chained to each other. The immutable property
on block and on transaction level is ensured by using cryptographic hash functions and
digital signatures. Every entity within the Blockchain system can easily verify a
transaction as well as a block without requiring any trusted party within the system.
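The immutable chaining described above can be illustrated with a few lines of code: each block stores the hash of its predecessor, so altering any block invalidates every later one. This is only the core idea; real Blockchains add consensus, signatures and Merkle trees on top:

```python
# Minimal hash-chained block structure illustrating immutability.

import hashlib
import json
from typing import Optional

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev: Optional[dict]) -> dict:
    """Create a block referencing the hash of the previous block."""
    return {
        "prev_hash": block_hash(prev) if prev else "0" * 64,
        "transactions": transactions,
    }

def chain_valid(chain: list) -> bool:
    """Verify that every block references the hash of the one before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block([{"from": "A", "to": "B", "amount": 5}], None)
second = make_block([{"from": "B", "to": "C", "amount": 2}], genesis)
```

Tampering with a transaction in `genesis` changes its hash, so `second`'s stored `prev_hash` no longer matches and the whole chain fails verification — this is what lets every entity verify blocks without a trusted party.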
Newer versions of Blockchain allow, besides the exchange of simple transactions,
also the creation of smart contracts. The latter can be seen as executable “if-then”
conditions which are stored on the Blockchain and can e.g. be used to trigger a cash flow
on an event (e.g., transfer the flat rent to the landlord on the 1st day of each month).
Besides such simple examples, smart contracts also allow describing more complex
relations between companies, governmental bodies, etc., and are thus a promising
technology to realize a wide range of distributed services and applications in various
industrial domains, especially w.r.t. IoT solutions.
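The rent-transfer example above can be written as a toy "if-then" contract evaluated against a ledger state. A real smart contract would be compiled and executed on-chain; the names and amounts here are purely illustrative:

```python
# Toy version of the "if-then" smart-contract condition described above:
# if it is the 1st of the month, transfer the rent to the landlord.

def rent_contract(state: dict, day_of_month: int) -> dict:
    """Apply the contract to a ledger state, returning the new state."""
    new_state = dict(state)  # contracts produce a new state, never mutate
    if day_of_month == 1 and new_state["tenant_balance"] >= new_state["rent"]:
        new_state["tenant_balance"] -= new_state["rent"]
        new_state["landlord_balance"] += new_state["rent"]
    return new_state

state = {"tenant_balance": 1000, "landlord_balance": 0, "rent": 600}
```

Because the condition and its effect are deterministic functions of the recorded state, every participant can re-execute the contract and agree on the outcome without trusting a central party — the property the vehicle-data platform proposed later relies on.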
Thus, Blockchains, and especially smart contracts, can potentially be used to solve
certain open issues in the automotive industry due to their capability to preserve privacy;
in particular w.r.t. long-term research topics like the detection of driver attention/fatigue
and current topics like utilizing vehicles as distributed, comprehensive environmental
sensors, thereby connecting vehicles more and more to each other (V2V) as well as to
the surrounding infrastructure (V2I).
As a result, Blockchain technology has attracted enormous attention in research,
academia and industry. Various projects and initiatives covering different industrial
domains were started in recent months with the goal of identifying real business
opportunities for the use of Blockchain in future products, or even of developing concrete
(distributed) applications where the use of Blockchain technology can be beneficial;
this includes the automotive industry, which has identified potential areas for the use of
Blockchains. Recently, the car manufacturers BMW, GM, Ford and Renault
started the Mobility Open Blockchain Initiative (MOBI) together with other industrial
and academic partners such as Bosch, Blockchain at Berkeley, Hyperledger, Fetch.ai,
IBM and IOTA [8]. Also, other vehicle manufacturers are evaluating Blockchains or
are already working on concrete projects: In 2017, Daimler started a project where
Blockchain technology is used to manage financial transactions [9]. Furthermore, the
automotive supplier ZF teamed up with IBM and UBS to work on a Blockchain-based
automotive platform called Car eWallet, with the goal of paving the way for
autonomous vehicles by allowing automatic payments and by providing other
convenience features [10].
Hence, Blockchain has clearly gained attention in the automotive industry. However,
concrete ideas, products and services are needed to show that Blockchain is more than a
hyped technology and actually enables new business cases.

2.2 Connected Vehicles and Data Exploitation


Future vehicles will communicate with each other as well as with the surrounding
road infrastructure to collect valuable information about road conditions and to sense
the current traffic situation (e.g., highly relevant in traffic intersection scenarios).
Furthermore, vehicles will increasingly be connected to the Internet to provide a wide
range of convenience services to users, to gather the latest traffic and map information
or the current city traffic strategy, or even to report an accident (i.e., eCall).
This Internet connection could of course also be used to transfer environmental data
collected by the vehicle itself (e.g., camera, radar, or lidar data) to the cloud. Intel
recently released a statement saying that future (self-driving) vehicles will collect up to
4 TB of data each day [5]. A wide range of different service providers (not restricted to
automotive) would be interested in using the collected data in various ways. Sharing
the collected data could and should also be beneficial for the owner/driver of the
vehicle (see Sect. 3.1) but, on the downside, will raise serious privacy issues, as the
exchanged information could be used, e.g., to track down the user's location or to
analyze the user's behavior (see also Sect. 3.2).
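A back-of-envelope check of what 4 TB per day implies as a sustained upload rate (assuming decimal terabytes and uniform collection over 24 h):

```python
# Back-of-envelope: average data rate implied by 4 TB per vehicle per day.
TB = 10**12                      # terabyte in bytes (decimal convention)
seconds_per_day = 24 * 60 * 60   # 86,400 s

rate_bytes_per_s = 4 * TB / seconds_per_day
rate_mb_per_s = rate_bytes_per_s / 10**6
print(f"{rate_mb_per_s:.1f} MB/s sustained")  # 46.3 MB/s sustained
```

Roughly 46 MB/s on average, which is why on-board preprocessing and selective sharing of data (rather than raw upload) are commonly assumed.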
Several tech startups such as Automile, Dash, and Zendrive, as well as large
initiatives driven by vehicle manufacturers such as AutoMat (coordinated by
Volkswagen), aim to collect and utilize data from single vehicles up to entire fleets
for different purposes [1]:
(i) Provide specific services in order to generate a benefit for the driver or the
vehicle/fleet owner in return for sharing data.
Towards a Privacy-Preserving Way of Vehicle Data Sharing 115

(ii) Create value by monetizing the collected data coming from a mass of vehicles to
third parties, which in turn use it as input for algorithms.
(iii) Further improve the business offerings of service providers and develop new
services.
Furthermore, amid the automotive industry's shift towards digitalization, the need
to manage different SAE levels of autonomous driving on the road simultaneously, and
the Internet of Things with its ever more connected sensors, the automotive industry
is still trying to solve many long-known problems. These include, for example, the
detection of driver distraction, fatigue and trust, or vehicle security and safety; such
tasks will increasingly be performed in the cloud by feeding the algorithms with
sensitive, privacy-relevant data from vehicle usage.
The ownership of vehicle sensor data still seems to be unclear from a legal
perspective. The driver, the vehicle owner, passengers, and the vehicle manufacturer
may all claim rights to certain data. In the AutoMat project, coordinated by
Volkswagen, it is argued that, as is usual in other domains such as the music business,
"the copyright is distributed proportionally among the members of the value chain"
[11]. This copyright distribution would give vehicle manufacturers the right to use the
data a driver produces without charge, and would thus put vehicle manufacturers in the
profitable role of data platform provider (as they can easily integrate a data interface in
their cars). From a driver's, vehicle owner's or passenger's perspective, however,
copyright should not be distributed, as there would not be any data without them
driving the vehicle. This is common in many domains (e.g. digital camera
manufacturers hold no copyright on the photos produced), and a competitive market
with open data platforms would foster innovative solutions and offer more benefits to
the data owner to attract data provision.

3 Towards Privacy-Preserving Vehicle Data Sharing


3.1 A Vehicle Data Sharing Ecosystem
A series of stakeholders including vehicle developers, vehicle manufacturers, insurers,
and even smart cities could benefit greatly from an open privacy-preserving vehicle
data sharing platform, and thus participate in a vehicle data sharing ecosystem.
Figure 1 sketches such an ecosystem and highlights the connections between the
different stakeholders. The figure illustrates stakeholders and
advantages for their businesses (based on shared vehicle data), as well as advantages
for vehicle owners (using the service the stakeholder provides based on their shared
data). Thereby different connection types and privacy levels are envisaged, as different
stakeholders are interested in different aspects of the data collected by connected
vehicles.
As indicated in Fig. 1, certain service providers such as city planners or map
providers are not interested in who is driving (i.e., do not need driver-specific
information) or in the specific type of vehicle (i.e., do not need vehicle-specific
information such as brand, color, or model). Thus, these services can be satisfied with
anonymized vehicle usage data. Other (automotive) services targeting the vehicle
development lifecycle (e.g., predictive maintenance or wear-out of vehicle components)
will only require vehicle-specific data, whereas other services will mainly be interested
in user-specific information (i.e., who is/was driving).

Fig. 1. Vehicle usage data can be used for various services and by different entities and bring
advantages to both the vehicle owner/user as well as the service provider/data consumer.
The proposed Open Vehicle Data Platform will address the fact that different
services require different kinds of data and will allow specifying which components of
the collected data are shared to enable services. Privacy is thereby explicitly addressed,
as a connected vehicle will not necessarily have to share an entire dataset with a service
provider but rather only the data which is really needed by the service provider to
provide a specific service. In the simplified model of a vehicle data ecosystem, four
types of data sharing can be distinguished: sharing anonymous data, driver-specific
data, vehicle-specific data, or a combination of them.
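The four sharing types can be sketched as a simple field filter; the field names below are invented for illustration and do not reflect a proposed schema:

```python
# Minimal sketch (assumed field names): filter a collected dataset so that
# only the categories covered by the chosen sharing type leave the vehicle.
DRIVER_FIELDS = {"driver_id", "fatigue_level"}
VEHICLE_FIELDS = {"vin", "brand", "model"}

def share(dataset: dict, driver_specific: bool, vehicle_specific: bool) -> dict:
    """Return the subset of `dataset` permitted by the sharing type.

    Neither flag set -> anonymous usage data only."""
    blocked = set()
    if not driver_specific:
        blocked |= DRIVER_FIELDS
    if not vehicle_specific:
        blocked |= VEHICLE_FIELDS
    return {k: v for k, v in dataset.items() if k not in blocked}

sample = {"driver_id": "d42", "vin": "WVW123", "speed_kmh": 87}
print(share(sample, driver_specific=False, vehicle_specific=False))
# anonymous sharing keeps only {'speed_kmh': 87}
```

A combination of the two flags yields the four sharing types named above.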
From a more abstract point of view, a vehicle data sharing ecosystem can have
several types of actors linked by value flows, as indicated in the e3value model in
Fig. 2. For instance, a driver can share driving and vehicle data with a gateway
provider who then forwards this data to a data platform provider. In return the driver may
receive money but will probably have to mount a vehicle data gateway device in his
vehicle. A service provider may use driving data from the data market/platform to
establish a preventive maintenance service for drivers. While drivers may pay service
providers a fee for consuming this service, the data market receives another fee from
the service provider for providing the technical data, which is the baseline for this
service.
Consequently, the ecosystem has mutual dependencies and thus allows scenarios
where, e.g., a driver uses an attractive service offered for free because an
organizational consumer (in current market scenarios usually without the driver's
knowledge) pays the service provider in the background for the development and
provision of the service, in order to obtain the data or access to a valuable service
based on this data.

Fig. 2. Actors and value flows (e3value model) of a vehicle data sharing ecosystem.

3.2 The Privacy Challenge for Data Sharing


As discussed before, service providers will monetize data collected by connected
vehicles and thus should reward the drivers providing the data with certain benefits. If
the exchange of data between the connected vehicle and the service provider is
insecure [12] (or the service provider itself is compromised or acts maliciously),
privacy issues ranging from tracking down the user to stealing sensitive information
can arise. Hence, security and privacy must be addressed when designing a vehicle
usage data platform, and, as a general rule, a service provider should only be allowed
to access the data collected by a connected vehicle that is relevant for providing a
specific service.
A driver may exhibit driving behavior that could be interpreted negatively and
might not be willing to share the resulting driving data with others, as this could imply
legal, social or ethical consequences. For instance, aggressive driving behavior might
cause social consequences (if shared with friends while benchmarking) or even legal
ones (if captured by the police). Drivers who become aware of this may not want to
contribute to any data sharing platform at all if their shared vehicle data could cause
negative consequences for them. This fact is also reflected in current studies and
surveys where users are asked about trust and privacy w.r.t.
connected vehicles. In one of these studies, Walter and Abendroth [13] detail user
concerns regarding connected vehicles and highlight the need for a privacy-aware
data sharing mechanism.
Defining a privacy configuration mechanism w.r.t. usability and transparency brings
up different opportunities:
One approach is a distinction between vehicle-specific and driver-specific data,
where one can opt to share both of them (either anonymized or not), just one, or none.
Another approach would be four easily understandable levels of decreasing
privacy: (i) don't share, where simply no data is shared at all; (ii) private, where data
is provided e.g. to calculate some basic individual statistics but cannot be used for
anything else; (iii) anonymized for public usage, where data can be used as in the
private level and is additionally provided to the public in an anonymized way; and
(iv) public, where all data is provided to the public. However, this approach would
require raising drivers' awareness, and service providers would have to adopt the
concept; hence it limits possibilities, perhaps opens legal loopholes and, at the end of
the day, lacks transparency about which specific data a service has access to.
Therefore, we argue for adopting the approach of Android smartphone applications,
which clusters access to certain data into topics (e.g., an app needs access to one's
contacts and images). The level of detail is a decisive factor for such clusters:
emission values can be grouped under a broad topic named vehicle sensor data or
treated as an individual emission values category, while very granular categories
would require a basic technical understanding from every user. The authors still see
room for improvement, as this solution borders on information overload, comparable
to terms and conditions that no one really reads carefully.
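The topic-clustering idea can be sketched as a lookup from topics to the signals they cover; the topic and signal names here are assumptions for illustration, not a proposed standard:

```python
# Illustrative only: cluster individual signals into Android-style
# permission topics; a service requests topics, not raw signal names.
TOPICS = {
    "vehicle_sensor_data": {"emission_nox", "emission_co2", "engine_rpm"},
    "location": {"gps_lat", "gps_lon"},
    "driver_state": {"fatigue_level", "heart_rate"},
}

def accessible_signals(granted_topics: set) -> set:
    """Union of all signals covered by the topics the user granted."""
    signals = set()
    for topic in granted_topics:
        signals |= TOPICS.get(topic, set())
    return signals

print(sorted(accessible_signals({"location"})))
# ['gps_lat', 'gps_lon']
```

The granularity question discussed above is exactly the question of how many keys this table should have and how large each signal set should be.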

3.3 A Concept for a Blockchain-Based Open Vehicle Data Platform


The concept provided in this section sketches a privacy-preserving Open Vehicle Data
Platform. Instead of going into detail and arguing for certain tools and architectures,
we convey the idea by describing the workflow.
A vehicle is capable of acquiring a lot of valuable data and the driver of the
connected vehicle shall be able to decide if and how this data is shared with service
providers, as discussed earlier. In the proposed concept and as indicated in Fig. 3,
smart contracts based on Blockchain technology are used to specify whether a service
is allowed to access data from a certain vehicle and also which kind of data will be
shared.
Once an agreement between the connected vehicle and the service provider (i.e., a
smart contract) is signed, Blockchain technology is exploited to (i) ensure that the
smart contract cannot be tampered with and (ii) make the smart contract available to
so-called Brokers. A Broker provides online storage where data collected by
connected vehicles is stored securely, and it is also responsible for handling the access
of a specific service to data stored on its online storage according to existing smart
contracts. Furthermore, the Broker maintains secure data connections between its
online storage and connected vehicles as well as service providers, using suitable
protection mechanisms (e.g., TLS).

Fig. 3. Data exchange between origin (vehicle) and target (service providers) is managed by a
broker using Blockchain technology for smart contracts.

In the proposed concept, several Brokers will take over the aforementioned tasks,
thereby also allowing connected vehicles to switch between different Brokers or even
to store data in different locations. The Blockchain thereby fulfills two essential tasks.
Firstly, it provides tamper-proof storage for smart contracts as well as other
transactions; secondly, it provides a way to ensure the authenticity of data collected by
a connected vehicle and stored on an online storage, as the hash of a collected dataset
is integrated in a transaction and then stored on the Blockchain. Such a transaction can
also be seen as a trigger informing service providers about the latest available dataset.
Please note that storing data directly on the Blockchain is not advisable from a
technological point of view. Also note that existing contracts on the Blockchain can
simply be revoked or changed by filing a new contract between the connected vehicle
and the concerned service provider.
The proposed concept will rely on two different entities which are stored on the
Blockchain, namely
(i) Smart contracts, describing which data is shared with a certain service provider
and specifying the corresponding reward. Each contract contains information about
the Broker used to store the collected data, and the timespan in which a certain
service is allowed to access the collected data. Each smart contract is signed by
the connected vehicle (its owner) and the service provider before it is stored on
the Blockchain;
(ii) Dataset transactions, containing the hash of a dataset stored on the online storage
of a Broker. Every transaction is signed by the connected vehicle (or its owner),
and also by the Broker once the dataset has been successfully transferred to (and
verified on) its online storage.
The proposed concept is able to securely interconnect connected vehicles and
service providers in a privacy-preserving way, by utilizing the Blockchain as a
tamper-proof, decentralized database, as well as by using dedicated Brokers providing
secure online storage and handling access control w.r.t. the stored data. In the
following, we summarize seven steps required to share data between a connected
vehicle and a service provider and use this example to highlight the benefits of the
proposed vehicle data sharing platform:
1. Initially, the owner of a connected vehicle wants to use a certain service and
therefore gets into contact with the responsible service provider. In this initial
step, the user is informed about the type of data the service provider requires to
provide a specific service.
2. If the user agrees to these terms, a smart contract specifying the relation between the
connected vehicle, its owner, and the service provider is created and signed by the
vehicle owner (representing the connected vehicle) and the service provider.
3. Once the smart contract is finalized, it will be stored on the Blockchain.
4. While being used, the connected vehicle will continuously collect valuable data,
which is divided into datasets (e.g., after a predefined time or once a certain amount
of data is collected) and sent encrypted to the online storage of the Broker. Each
transferred dataset is accompanied by a dataset transaction containing the hash of
the dataset as well as the digital signature of the connected vehicle (its owner).
5. Hence, the Broker can, on the one hand, verify that the dataset was not altered in
transit and, on the other hand, is prevented from changing the dataset itself, as this
would invalidate the digital signature already included in the dataset transaction.
Once the received dataset is verified, the Broker adds its signature (thus
completing the transaction) and broadcasts the transaction on the Blockchain
network.
6. Service providers can monitor the Blockchain and will be directly notified about the
latest available dataset by looking for relevant dataset transactions. Once such a
transaction is found, the service provider requests the dataset by establishing a
connection with the Broker.
7. The Broker then looks for a suitable smart contract on the Blockchain and provides
access to the data as specified in the smart contract, or declines the request if no
smart contract is found or it has been revoked.
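The Broker-side check in step 5 can be sketched as follows. HMAC stands in for the vehicle's digital signature purely for illustration (a real deployment would use asymmetric signatures, so the Broker could verify without holding the owner's secret):

```python
import hashlib
import hmac

# Sketch of step 5: HMAC is a stand-in for the vehicle owner's signature.
VEHICLE_KEY = b"vehicle-owner-secret"  # assumed shared secret, illustration only

def sign(payload: bytes, key: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def broker_verify(dataset: bytes, tx_hash: str, tx_signature: str) -> bool:
    """Accept the dataset only if it matches the hash in the transaction
    and the transaction was signed by the vehicle (its owner)."""
    if hashlib.sha256(dataset).hexdigest() != tx_hash:
        return False  # dataset altered in transit
    expected = sign(tx_hash.encode(), VEHICLE_KEY)
    return hmac.compare_digest(expected, tx_signature)

data = b"speed=87;rpm=2100"
h = hashlib.sha256(data).hexdigest()
print(broker_verify(data, h, sign(h.encode(), VEHICLE_KEY)))       # True
print(broker_verify(b"tampered", h, sign(h.encode(), VEHICLE_KEY)))  # False
```

Because the owner's signature covers the hash, the Broker cannot modify the dataset afterwards without invalidating the already-broadcast transaction, which is exactly the property step 5 relies on.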

4 Conclusion, Discussion and Outlook

This paper aimed to launch the discussion on how Blockchain technology may help
to establish an open vehicle data sharing platform that respects the privacy of both
the vehicle owner and the vehicle driver. Smart contracts are introduced as a means
to fully digitize the data sharing relationship between a consumer (e.g. a driver who
provides his data in order to use services) and a service provider (e.g. a provider of a
preventive maintenance service). They describe what kind of data will be provided,
by whom, and for what data exploitation purpose. While these smart contracts are
stored on the Blockchain to increase the trust between the stakeholders of the vehicle
data sharing ecosystem, the shared data itself will not be stored on the Blockchain,
but for instance on a separate data platform and data market.
However, a series of issues and research topics remain open and will be targeted in
future work:

The proposed concept presumes certain prerequisites on the vehicle side. For
example, a standardized vehicle data interface across manufacturers, through which in
general all vehicle data can be exported (stored on an SD card or hard drive for private
purposes, or sent to online destinations), would ease data acquisition. Only data
marked to be stored or sent somewhere should be captured; all other data should be
deleted or continuously overwritten.
In order to participate, users need to be able to authenticate themselves to the
vehicle and the Broker (e.g. to use their privacy settings in every vehicle they use), so
they need to register and have an identity.
Using Blockchain technology enables a privacy-preserving way to securely share
data from the vehicle with the service provider. A service provider who gets access to
one's data is not allowed to resell it unless this is explicitly permitted in the contract.
In practice, however, such reselling cannot be prevented by the presented concept,
and thus privacy cannot be fully ensured.
As mentioned in Sect. 3.2, how to cluster data into useful groups, and at which
granularity, is a topic for future research. An initial version could be as follows:
– Emission data
– Vehicle data (e.g. base weight, number of passengers, year of manufacture, type,
brand)
– Environment data (e.g. road topography, temperature outside, rain)
– Traffic data (e.g. detected entities around the vehicle including humans and vehi-
cles, information about the streets throughput rate)
– Driver data (e.g. Driver ID, music channel, mood, fatigue level, driving score, heart
rate)
– Ride data (e.g. GPS position, temperature inside, start datetime, target)
– Other data
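Written out as a lookup table, the initial grouping above might look as follows; the concrete signal names are assumptions for illustration, not a fixed schema:

```python
# Initial clustering sketch: category -> example signals (assumed names).
DATA_CATEGORIES = {
    "emission_data": ["nox", "co2"],
    "vehicle_data": ["base_weight", "passenger_count", "year", "type", "brand"],
    "environment_data": ["road_topography", "outside_temp", "rain"],
    "traffic_data": ["detected_entities", "street_throughput"],
    "driver_data": ["driver_id", "music_channel", "mood", "fatigue_level",
                    "driving_score", "heart_rate"],
    "ride_data": ["gps_position", "inside_temp", "start_datetime", "target"],
    "other_data": [],
}

def signals_for(granted: list) -> list:
    """Flatten the signals a smart contract grants access to."""
    return [s for cat in granted for s in DATA_CATEGORIES.get(cat, [])]

print(signals_for(["emission_data", "environment_data"]))
# ['nox', 'co2', 'road_topography', 'outside_temp', 'rain']
```

A smart contract would then reference category names rather than individual signals, which keeps the user-facing consent dialog short.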

Acknowledgment. This work is partially funded by the SCOTT project (http://www.scott-project.eu).
SCOTT has received funding from the Electronic Component Systems for European
Leadership (ECSEL) Joint Undertaking under grant agreement No 737422. This joint
undertaking receives support from the European Union's Horizon 2020 research and innovation
program and Austria, Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands,
Belgium, Norway. SCOTT is also funded by the Austrian Federal Ministry of Transport,
Innovation and Technology (BMVIT) under the program “ICT of the Future” between May 2017
and April 2020. The authors also acknowledge the financial support of the COMET K2 Program
of the Austrian Federal Ministries BMVIT and BMWFW, the Province of Styria, and the Styrian
Business Promotion Agency (SFG).

References
1. Stocker, A., Kaiser, C., Fellmann, M.: Quantified vehicles - novel services for vehicle
lifecycle data. J. Bus. Inf. Syst. Eng. 59(2), 125–130 (2017)
2. Stocker, A., Kaiser, C.: Quantified car: potentials, business models and digital ecosystems.
e & i Elektrotechnik und Informationstechnik 133(7), 334–340 (2016)
3. Kaiser, C., Stocker, A., Festl, A., Lechner, G., Fellmann, M.: A research agenda for vehicle
information systems. In: Proceedings of European Conference on Information Systems
(ECIS) (2018, will be published)
4. European Commission: Data protection in the EU (2018). https://ec.europa.eu/info/law/law-
topic/data-protection/data-protection-eu_en
5. Krzanich, B.: Data is the new oil in the future of automated driving (2016). https://
newsroom.intel.com/editorials/krzanichthe-future-of-automated-driving/
6. Husnjak, S., Peraković, D., Forenbacher, I., Mumdziev, M.: Telematics system in usage
based motor insurance. In: Proceedings of the 25th DAAAM International Symposium on
Intelligent Manufacturing and Automation, DAAAM 2014, pp. 816–825. Elsevier (2015)
7. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system. Whitepaper (2008). http://
www.bitcoin.org/bitcoin.pdf
8. Russell, J.: BMW, GM, Ford and Renault launch blockchain research group for automotive
industry. Techcrunch, May 2018
9. Dotson, K.: Daimler and LBBW issue $114 M corporate bond using blockchain.
SiliconAngle, June 2017
10. Kilbride, J.: Secure Payments “On The Go” With Blockchain Technology From ZF, UBS
and IBM. IBM, September 2017
11. AutoMat Project: Connected car data - the unexcavated treasure. YouTube (2018).
https://www.youtube.com/watch?v=uRjvnahJ-9o
12. Valasek, C., Miller, C.: Remote Exploitation of an Unaltered Passenger Vehicle, White
Paper, p. 93 (2015)
13. Walter, J., Abendroth, B.: Losing a Private Sphere? A Glance on the User Perspective on
Privacy in Connected Cars (2018)
Challenges and Opportunities of Artificial
Intelligence for Automated Driving

Benjamin Wilsch1(&), Hala Elrofai2, and Edgar Krune1

1 VDI/VDE Innovation + Technik GmbH, Berlin, Germany
{Benjamin.Wilsch,Edgar.Krune}@vdivde-it.de
2 TNO, Helmond, The Netherlands
Hala.Elrofai@tno.nl

Abstract. The advancement of automated driving (AD) depends on a multitude
of influencing factors; however, achieving higher levels of automation
fundamentally hinges on the capabilities of Artificial Intelligence (AI) to perform
driving tasks. Improvements in AI hardware and the availability of large
amounts of data (Big Data) have fueled the rapid increase in AD-related research
and development activities over the past decade and are thus also the key
indicators for future development. The shift from humans to AI in vehicle
control unlocks many of the well-established potentials of AD, but is also the
root of many non-technical issues that affect its introduction. Starting from the
state of the art of AI for AD, this chapter discusses key challenges and
opportunities that mark the development path.

Keywords: Artificial Intelligence · Machine Learning · Training ·
Validation · Hardware · Big Data · Vehicle automation · Autonomous driving ·
SCOUT · CARTRE · European Commission

1 State of the Art

Although the possibility of using computers to control cars was already proposed in the
late 1960s [1] and a suitable software algorithm for lane recognition based on Artificial
Neural Networks (ANNs) was developed as early as 1989 [2], research and development
for automated vehicles (AVs) did not become a prime and widespread interest until the
late 2000s. This was principally due to the fact that the use of ANNs for image/object
recognition (via classification or prediction) requires both sufficiently efficient hardware
for the parallelized execution of matrix multiplications and adequate amounts of data for
training ANNs. Over the course of two decades these restrictions were gradually
alleviated. The CPU performance initially increased in line with Moore’s Law, but,
more importantly, it was possible to fundamentally boost the performance of relevant
algorithms with the switch to GPUs in 2009 [3]. The functional and widespread
application of ANNs in Machine Learning (ML) was further enabled by the availability
of large amounts of training data (Big Data), which has increased in an unprecedented
manner with the introduction of digital and mobile devices as well as corresponding
storage and communication technologies. The subsequent success of ML in several

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 123–135, 2019.
https://doi.org/10.1007/978-3-319-99762-9_11
124 B. Wilsch et al.

fields, e.g. speech recognition, image analysis and machine translation, has made
this subdomain of AI methods the dominant solution for practical applications.
For the analysis of the role of AI for AD it proves meaningful to take a step back
and analyze the landscape of current research and development efforts in the field.
While Advanced Driver-Assistance Systems (ADAS) support human drivers in
certain driving tasks, e.g. by maintaining a specified velocity or keeping the vehicle in
lane, increasing automation with the successive transfer of driving responsibilities to AI
requires a complete set of capabilities spanning environment recognition as well as
motion planning and control. The methods and hardware that can be employed to meet
these requirements are detailed in the following sections.

1.1 AI Methods for Automated Driving


The concept of Artificial Intelligence (AI) originates from the development of
artificially intelligent robots in the first half of the 20th century. In 1950 the logical
framework of AI was formulated [4]. The academic research field of AI was founded
in 1956 at the Dartmouth Summer Research Project on AI. Scientists, mathematicians,
and philosophers anticipated that AI would make machines as intelligent as humans.
ML was the root of the successful development of AI-based applications over the
past decade, but it is important to note that it only represents one subcategory of AI that
emerged three years after AI became a recognized academic research topic. At the time,
ML was pursued in parallel to a knowledge-based approach and built on the theory of
neural networks developed in the 1940s as well as statistical reasoning to allow
decision-making based on probabilistic models rather than relying on explicit
programming.
In ML, a knowledge representation is deduced from a given training data set that
describes an application lacking an analytical model. The computer system thus gains
experience which can subsequently be applied to previously unseen data to make
predictions, recommendations or decisions [5]. The insufficient efficiency of neural
networks led to a decades-long standstill in the field of ML, which was only overcome
in the 1980s with the development of backpropagation as a method to optimize the
weights in an ANN. Backpropagation is a method used primarily in supervised
learning, in which the computer system is presented with inputs and desired outputs
and the model is adapted to reproduce the desired behaviour. More specifically, input values
are processed in multiple intermediate and hidden layers (forming a Deep Neural
Network [DNN]) to produce the output values, with the underlying principle inspired
by the current understanding of the human brain. Weights are then assigned to the
individual nodes of the layers and, in backpropagation, the gradient of the error in the
output (the loss function) is used to adjust the weights of the nodes. As a result, ANNs
can be employed to approximate complex non-linear functions and successively
improve performance on new data.
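The weight-update idea described above can be illustrated in a minimal, self-contained form: a single sigmoid "neuron" (deliberately not a deep network) trained by gradient descent on the logical AND function. Learning rate, epoch count and the random seed are arbitrary choices for the sketch:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # two input weights
b = 0.0                                        # bias
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

for _ in range(5000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the squared error, propagated through the sigmoid:
        delta = (out - target) * out * (1 - out)
        w[0] -= 0.5 * delta * x[0]
        w[1] -= 0.5 * delta * x[1]
        b -= 0.5 * delta

print([round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data])
# [0, 0, 0, 1]
```

In a DNN the same gradient signal is propagated backwards layer by layer via the chain rule, which is where the name backpropagation comes from.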
ML also encompasses supervised and unsupervised learning methods, which can be
used to identify patterns in labelled and unlabelled data respectively. Reinforcement
learning is closely related to supervised learning but may rely solely on the attribution
of a reward in response to an output rather than requiring labelled data. The three ML
methods can be applied for classification, regression and clustering tasks based on large
data sets and the specific algorithms include linear and logistic regression, decision
trees (e.g. iterative dichotomiser 3 or random forests), support vector machines and
Bayesian models. Amongst the ML methods and algorithms DNNs have been the focus
of ML-related research efforts over the past decade, yielding variations adapted to
specific learning tasks and algorithms, including feedforward networks, convolutional
neural networks, recurrent networks, generative adversarial networks (GANs) and long
short-term memory (LSTM) [5].
For AD, deep learning methods using ANNs are of fundamental importance for
environment recognition (object detection based on image classification), which
provides the basis for motion planning and control. As discussed above, the application of
these methods was enabled by significant advances in computer hardware, which will
be detailed in the following section.

1.2 AI Hardware
Training an ANN requires High-Performance Computing (HPC) to process big
data. Therefore, compute-intensive technology became essential for progress in AI,
making companies with the corresponding know-how and infrastructure the drivers of
AI technology. Moreover, inference also requires high computing power and is thus
not easily performed on edge devices with restrictions on the energy supply. Additionally,
some applications require real-time capabilities. Fortunately, strong hardware
improvement is possible by optimizing the chip architecture for the arithmetic
operations of ANNs, which correspond mainly to matrix multiplications. The basic
optimization strategy is parallelization, exploiting the so-called "embarrassingly
parallel" workload. A straightforward solution was to perform general-purpose
computation on GPUs to accelerate the training significantly. GPUs enable much
higher data throughput compared to CPUs and reduce the power consumption at the
same time. Another hardware solution is based on Field-Programmable Gate Arrays
(FPGAs), which enable designers to reprogram the underlying hardware architecture
to support parallel computing operations. Application-Specific Integrated Circuits
(ASICs) outperform FPGAs since they are specifically designed and optimized for a
certain task. Such multi-processor Systems-on-Chip (SoCs) incorporate GPUs, CPUs
as well as accelerator cores optimized for certain operations like image processing.
Their big disadvantages are their inflexibility and high development costs. Today's
off-the-shelf hardware is not optimized for ML; therefore, there is a high demand for
hardware innovations. Fortunately, there are several approaches to increase
the computing power and to minimize the power consumption [6].
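The claim that ANN workloads reduce mainly to matrix multiplications can be illustrated with a minimal NumPy sketch of a fully connected forward pass (layer sizes are arbitrary and chosen only for illustration):

```python
import numpy as np

def dense_forward(x, weights, biases):
    """Forward pass through fully connected layers: each layer is one
    matrix multiplication followed by an element-wise nonlinearity."""
    a = x
    for W, b in zip(weights, biases):
        a = np.maximum(a @ W + b, 0.0)  # ReLU; the matmul dominates the cost
    return a

# A batch of 64 inputs through two layers: every output element of each
# matmul can be computed independently, hence "embarrassingly parallel".
rng = np.random.default_rng(0)
weights = [rng.standard_normal((128, 256)), rng.standard_normal((256, 10))]
biases = [np.zeros(256), np.zeros(10)]
out = dense_forward(rng.standard_normal((64, 128)), weights, biases)
print(out.shape)  # (64, 10)
```

Accelerators exploit exactly this independence: the same multiply-accumulate pattern is mapped onto thousands of parallel arithmetic units.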
The enormous potential impact on all industry segments led to a race for more
efficient chips between IC vendors, tech giants, IP vendors and various start-ups. It is
remarkable that various start-ups try to compete with the big IC giants in such a cost-intensive branch of industry. Designing an ASIC can cost up to hundreds of millions of dollars and requires a large team of experienced engineers. The long design process (typically 2–3 years) necessitates a large number of chip sales, and regular improvement is needed to keep pace with fast-changing software development. The early state of AI technology in particular may lead to significant changes in hardware development in the upcoming years. Only the firm conviction that new chips tailored for AI applications can strongly outperform state-of-the-art hardware can justify such investments and the confidence to compete with highly experienced IC giants.
126 B. Wilsch et al.
In the automotive sector and for CAD developments in particular, there is a high
demand for better hardware and various innovations are expected in the near future. As
an indication, several trends and developments are provided in the following:
• MobilEye introduced its fifth-generation SoC “EyeQ5” for fully autonomous driving at CES 2018; series production is planned for 2020. The performance target is 24 trillion operations per second (TOPS) at a power consumption of 10 W. The most advanced TSMC 7 nm-FinFET process is considered for production to address the performance targets. Intel plans to combine the
EyeQ5 with its “Intel Atom” processor and to develop an AI computing platform
for autonomous driving. Intel and MobilEye claim that two EyeQ5 SoCs and an
Intel Atom processor will be sufficient to enable fully autonomous driving.
• The automotive supplier ZF built the “ZF ProAI” supercomputing self-driving
system which is based on the “Nvidia DRIVE PX 2 AI” computing platform. ZF
claims to follow a modular and scalable system architecture that can be applied to
any vehicle and tailored according to the application, the available hardware and the
desired automation level. Audi uses this self-driving system in the world’s first level 3 vehicle, in which self-driving capability is available in highway traffic jams up to a speed of 60 km/h. Baidu cooperates with ZF and has announced that it will use the “ZF ProAI” for automated parking.
• Nvidia introduced its new SoC “Xavier” at the CES 2018 which will offer up to
30 TOPS under a power consumption of 30 W. The chip will be fabricated by the
TSMC 12 nm-FinFET process, with series production starting in 2019. True level 5 autonomous vehicles will need at least two such chips to provide sufficient
computing power. Therefore, Nvidia’s new “DRIVE Pegasus AI” computing
platform will incorporate two “Xavier” SoCs and two discrete GPUs. It will enable
320 TOPS and consume up to 500 W. According to Nvidia the computing power
should be sufficient for fully autonomous driving.
• NXP developed its “BlueBox” autonomous driving platform. It incorporates an
automotive vision and sensor fusion processor capable of processing AI applica-
tions. The performance is stated as 90,000 Dhrystone million instructions per
second (DMIPS) under a power consumption of 40 W.
• Renesas has a similar automotive computing platform with its “R-Car” SoCs which
achieve 40,000 DMIPS.
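For comparison, the TOPS figures quoted above can be reduced to a common efficiency metric (a rough sketch using only the numbers stated in the text; the DMIPS-based platforms are omitted because DMIPS and TOPS are not directly comparable):

```python
# Rough energy-efficiency comparison of the platforms quoted above.
# (TOPS, watts) pairs as stated in the text.
platforms = {
    "MobilEye EyeQ5":       (24.0, 10.0),
    "Nvidia Xavier":        (30.0, 30.0),
    "Nvidia DRIVE Pegasus": (320.0, 500.0),
}

for name, (tops, watts) in platforms.items():
    # Higher TOPS/W means more computation per unit of energy.
    print(f"{name}: {tops / watts:.2f} TOPS/W")
```

By this crude measure, the EyeQ5 targets roughly 2.4 TOPS/W, while the Pegasus platform trades efficiency for raw throughput at about 0.64 TOPS/W.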
More general hardware solutions are necessary due to the demand for higher computing power, lower power consumption and cost reduction, and more sensors will be attached to cars in the future. Within the frame of the ImageNet contest, object-detection performance was increased in recent years by means of higher model complexity, a tendency that implies a growing number of ANN parameters. Since safety is a crucial issue for the breakthrough of self-driving cars, more complex models will be presented to increase the robustness of object detection and inference. This translates directly into more complex AI algorithms and a growing demand for computing power and energy efficiency.
Challenges and Opportunities of Artificial Intelligence 127
In the automotive domain, the new SoCs tailored for machine learning tend to be more complex, since high data throughput is necessary and moving data between different chips degrades performance. Moore’s law still promises a continuous increase in the number of transistors integrated on a chip, so the size of future optimized SoCs should scale up. For example,
Nvidia’s new “Xavier” SoC is one of the most complex systems to date with more than
9 billion transistors. Both market leaders, MobilEye/Intel and Nvidia, are planning the first series production of their new SoCs and have already announced the development of next SoC generations (Nvidia’s “Orin”, MobilEye’s “EyeQ6”). It is important to note that Nvidia’s Xavier SoC architecture was recently certified by TÜV SÜD with ASIL-D, the highest safety rating of the automotive industry’s functional safety standard ISO 26262. This is an important step, since autonomous driving requires maximum safety.
Standardization of an open automotive AI platform can increase competition
between IC manufacturers and make OEMs and Tier1s more independent from IC
giants. Another possibility is close cooperation between IC manufacturers, OEMs and Tier1s, leading to distinct solutions for automotive AI computing platforms. In such a
scenario e.g. an “Intel Inside” label could be a unique selling point if the performance
differs significantly between IC manufacturers. One argument for distinct solutions
could be a higher efficiency thanks to a hardware-software co-design process.
Today, Nvidia and Intel/MobilEye offer hardware as well as software solutions. But
both market leaders also offer these separately, which enables modular hardware integration in open platforms such as Baidu’s “Apollo”. Both approaches can be successful, and at this point it is not obvious which will ultimately find widespread application.
On the basis of the current state of AI hardware and methodology for AD, which
was established above, it is now possible to examine the opportunities and challenges
intertwined with their application in the following section.

2 Opportunities

Besides its decisive role in enabling automated driving, AI in the form of ML also
provides the key capabilities for the interaction between the driver, who will succes-
sively transition into a user of autonomous services, and the vehicle. In this field of
human-computer interaction, applications can draw directly from the success of ML in
the fields of natural language processing and facial recognition. Such functionality can initially be employed to enhance safety via driver assistance systems in low-level automation, e.g. by detecting driver fatigue and alerting the driver; it may then serve gesture recognition to enhance driver comfort in higher-level automation, before eventually enabling the provision of new services to users of autonomous
vehicles. For example, a face scan could be used to access a vehicle and the integration
of digital assistants in vehicles paves the way for various new service offers for
drivers/users increasingly freed from driving obligations.
The use of brain-machine interfaces further unlocks potential in vehicle operation by providing alternatives to mechanical controls such as gas pedals or steering wheels, and thus grants access to people who are unable to operate the established control systems [7]. Such applications present prime examples of the potential of AI to
reproduce and eventually surpass human capabilities in a given task. In autonomous vehicles, the effect of the AI-enabled augmented social inclusion in mobility will be complete, since all humans will have access to mobility services, independent of
physical and mental capabilities or age. If the current progress can be sustained, then AI
driving systems will ultimately not only succeed in emulating human capabilities but
will achieve superhuman driving capabilities, thereby eliminating human error as the
dominant cause of road accidents.
AI can further shape the future of mobility by enabling improved traffic flow and
vehicle usage, a field that is ideally suited to the application of ML and which thus naturally hinges on the availability of large data sets on vehicle distribution and movement. Optimized management of both individual vehicles and fleets can further
be achieved by employing predictive maintenance to increase the vehicle service time.
The potentials of AD discussed above, which are unlocked by the use of AI
methods, present the central motivation for pursuing its development. In many cases,
precautions must be taken to ensure that the objectives are achieved and not negated,
e.g. if achieved levels of social inclusion in terms of physical capabilities are thwarted
by limiting access to mobility services based on price and thus income. While such
circumstances are marked by a strong uncertainty and are thus difficult to foresee, the
development of AD based on AI faces some key technical and non-technical challenges
that must be resolved before high-level automation can be achieved. These are
examined in the following section.

3 Challenges

Utilizing AI to achieve vehicle automation presents a fundamental shift in mobility, which cannot be adequately addressed by focusing solely on the technical development. The EU Coordination and Support Actions (CSAs) CARTRE and SCOUT,
which started in 2016 and will end in 2018, attempted to tackle the complexity of
Connected and Automated Driving (CAD) development by establishing a category
framework to guide discussions.
In SCOUT, hurdles and accelerators along CAD roadmap paths were analyzed
using a five-layer model differentiating between technical, legal, human, economic and
societal factors. The majority of the non-technical aspects that may hinder CAD
deployment can be traced back to the transfer of driving responsibility from humans to
AI. Questions pertaining to e.g. ethics or liability essentially concern the appropriate
and regulated application of AI. User acceptance further critically hinges on the societal
understanding and expectations of AI. It is thus important to remember that CAD is one
of the various applications of AI, which has the potential to disrupt mobility as it is
known today and which must be approached with great caution given its critical role in
the safety of all road users. For certain issues of CAD development, such as user acceptance, it will thus be possible to learn valuable lessons from other AI applications; conversely, successful and widespread implementation of AI-controlled vehicles can pave the
way and serve as a model for other AI applications, e.g. in health, where ethics and user
acceptance are equally critical aspects. It is clear that humans are far more forgiving of the more easily comprehensible errors of drivers that cause accidents than of computer errors. This skewed perception is further amplified
by the disparity in media attention attributed to rare but entirely new and unusual
accidents caused by computer error in comparison to common human errors. In con-
sequence, research indicates that AI-controlled cars would need to outperform humans
by one to three orders of magnitude to ensure user acceptance [8].
The CARTRE project tackled the complexity of CAD using eleven topical categories, one of which was dedicated to “Big Data, AI and their application”, thus reflecting the central importance of each of these two fields for the successful application of the other. During the course of the project, links to other topics were
established and again showed clearly that even if CAD cannot be equated with AI,
many of the issues that must be resolved for its implementation are direct consequences
of the use of computer intelligence. The input for the EU research agenda that was
presented in the respective CARTRE position papers1 for Big Data and AI covers legal
(regulation and insurance) and ethical aspects as well as requirements for data avail-
ability and testing and validation methods. Although the development of new AI-based
CAD functionalities will always have to occur within the ethical and legal framework,
it is the questions concerning data availability, AI training and validation and the
traceability of AI-based decision-making that are at the root of the discussion of ethical
and legal aspects and will further be decisive for future improvement of vehicle
intelligence and CAD implementation. These issues will thus be examined in more
detail below.

3.1 Data Availability


AI performance relies strongly on the available training data. This data has to cover all traffic scenes under different weather conditions in sufficient quantity to assure appropriate action of the AI-driven vehicle in every situation. The lead in the development of AD is often measured by the accumulated data, although synthetic data has gained more and more importance in recent years. It should be noted that both traffic and driver behaviour differ strongly between countries. Therefore, car manufacturers and suppliers are running their own vehicle fleets, collecting data in different countries that correspond to important markets. One of the largest data sets was collected by Waymo and currently amounts to 11.3 million driven km.
There are also open data sets such as KITTI or Cityscapes, but they are very limited and cannot compete with the data collected by the companies. Recently, ApolloScape was released as the largest open data set within the frame of Baidu’s AD platform Apollo; its volume is 10 times larger than that of any other open data set. Moreover, open data sets
created in a virtual environment such as Synthia are also available. But these are also
orders of magnitude smaller than the 8 billion km simulated by Waymo.
In general, it is questionable what amount of data is necessary to ensure that
autonomous vehicles cause fewer accidents than humans. Statistical estimations sug-
gest that data from up to hundreds of billions of driven kilometres have to be collected
to provide an actual advantage of AD over human capabilities. For example, according

1 Available online at www.connectedautomateddriving.eu.
to a Rand Corporation report [16], 100 vehicles would have to drive non-stop for 500 years to achieve driving capabilities that are 20% safer than those of humans.
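The Rand-style figure can be put into perspective with back-of-envelope arithmetic; the average speed below is an assumption, since the text does not state one:

```python
# Back-of-envelope check of the fleet estimate quoted above:
# 100 vehicles driving non-stop for 500 years. The average speed is an
# assumption (not given in the text); 40 km/h is a plausible mixed-traffic value.
vehicles = 100
years = 500
avg_speed_kmh = 40  # assumed

hours = years * 365 * 24
total_km = vehicles * avg_speed_kmh * hours
print(f"{total_km / 1e9:.1f} billion km")  # ≈ 17.5 billion km
```

Even under this simple assumption the fleet accumulates on the order of tens of billions of kilometres, consistent with the statistical estimates of up to hundreds of billions cited above.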

3.2 Training and Validation


If AI is to replace humans in driving, it must be able to accurately and reliably detect
the location and environment of the vehicle and then make the correct decision for
action based on this information. As discussed, in ML the correct interpretation of
visual or radar images is achieved by training the AI system on large amounts of data
so that it can subsequently also correctly categorize previously unseen images. The
collection of relevant data can be achieved by driving a car equipped with the typical
sensors around and the total distance travelled in such test drives can serve as a first
quantitative indicator of R&D activity in the field. The test drive distance does,
however, not allow conclusions about the quality and completeness of the training data,
i.e. on the one hand a test drive could take place on private grounds with no or limited
traffic or in dense urban traffic, and on the other hand, critical situations are, by design,
extremely rare. Even extensive test drives will thus not provide sufficient data on edge
cases to ensure that these can be handled by AI. Critical situations, i.e. those that are
dangerous for vehicle passengers or other road users, can, furthermore, not be recreated
for test drives. It is thus necessary to supplement real-world test drives with AI training
on synthetic data generated from simulations of real-world scenarios. For example,
Waymo has produced training data from over eleven million kilometers of real-world
testing and combined this with over three billion virtual kilometers, while e.g. Nvidia
has provided a simulation environment (AutoSIM) to developers. Synthetic data also
offers the advantage that it is, by nature, already labeled, while labeling is an additional
step that must, at least to a given extent, be performed manually for data acquired by
imaging sensors.
Even after combining real-world and virtual training data, the question remains whether this allows AI to react appropriately in rare but critical situations. This is the
central prerequisite for the validation of AI-based driving functions. An answer is
proposed by the GENESIS project [9] of the German Research Center for Artificial
Intelligence (DFKI) and the German certification body TÜV Süd and relies on the
assertion that it will be possible to define the comprehensive scope of driving scenarios,
to translate this into constraints for the virtual training scenarios and to then run a large
number of simulations with incremental variations until the testing regime may be
considered complete. Such a procedure could allow for a benchmarking of the
development process using reproducible and standardized test scenarios, scalable and
fast simulations and, ideally, open architectures for the integration of different models
and simulations. The ultimate objective would be to establish a virtual homologation
agency for AI driving functions.
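The idea of incrementally varying simulation scenarios until a testing regime may be considered complete can be sketched as enumerating a constrained scenario space (a minimal illustration; the parameter names and ranges are invented, not taken from the GENESIS project):

```python
from itertools import product

# Illustrative sketch of scenario coverage via incremental parameter
# variation: every combination of the constrained parameters below
# defines one virtual test scenario.
scenario_space = {
    "ego_speed_kmh":    range(30, 131, 20),   # 30, 50, ..., 130
    "weather":          ["clear", "rain", "fog"],
    "pedestrian_count": range(0, 3),
}

scenarios = [dict(zip(scenario_space, values))
             for values in product(*scenario_space.values())]

print(len(scenarios))  # 6 speeds x 3 weathers x 3 counts = 54 variations
```

A real scenario catalogue would of course be far richer, but the principle is the same: completeness of testing is defined relative to an explicitly enumerable, standardized scenario space.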

3.3 Traceability of AI-Based Decision-Making


Due to the lack of an analytical model linking the inputs and outputs of the ANNs used in deep learning, a decision-making process relying on their application is intrinsically not traceable, which raises fundamental questions concerning the use of AI for AD.
Since the given inputs and monitored outputs of an ANN are linked via a complex deep
layer structure in which the weights between nodes have been adjusted in an extensive
training process until the desired behavior is obtained, it is eventually impossible to
deduce how a specific decision was made. ANNs thus essentially present a black box
for which only the inputs and outputs are known, while the process by which outputs
are produced can only be defined in terms of linkage weights. For AD, a system with
this lack of traceability poses problems concerning the liability in case of accidents and
also presents questions relating to the ethicality of entrusting ML-based AI with
potentially life-threatening tasks. Moreover, if an autonomous vehicle is trained as an
end-to-end system, the inability to model the decision-making process results in a lack
of modularity, since individual components of the AD system cannot be replaced
without necessitating a renewal of the entire training process to once again translate
given inputs into desired outputs (decisions) [9]. While the limited traceability of ML-based decision-making is not an insurmountable hurdle for the introduction of AI, it
does require the development of specific solutions, e.g. module-specific training
algorithms, and is also an intrinsic characteristic of ML, which has led researchers to
pursue alternative AI methods (see Sect. 5.1).
The previous sections have highlighted the central challenges faced by ML-based
AI applications for AD, which can present significant roadblocks on the way to its
deployment. The question of how these problems can be resolved is, however, usually
accompanied by the question of where AD will be introduced first. Influencing factors that may decide this race for AD, and which ultimately come down to questions about AI capabilities, are thus discussed below.

4 International Competitiveness

Since extensive competences in AI are essential to achieve high-level automated driving, they can also be used to assess which country or region provides the best breeding
ground and is thus likely to be the frontrunner in AD introduction. From a historical
viewpoint, the U.S. has long been a lone leader in the field, but past years have seen the
emergence of China as a serious competitor, driven not least by substantial government
support and funding. In 2017 the Chinese government presented its plan to become the
world’s primary AI innovation center in both research and applications by 2030, which
it backed, e.g., with a 1.8 billion Euro investment in an AI technology park in Beijing.
By that time, Chinese researchers and developers had already succeeded in closing the
gap on the U.S., ranking second in patents filed on AI-related topics and first in the
number of research papers on deep learning (since 2014) [10]. While the U.S. also
presented a strategic plan on AI in 2016, the EU is yet to present a strategy2 that assures
its competitiveness in the field and thus currently relies primarily on national strategies,
as have been introduced recently in France or Finland. The need for imminent action
has, however, been identified and several initiatives were launched in the first half of
2018, primarily: the signing of a Declaration of cooperation on AI by 27 EU member

2 A coordinated plan has been announced for the end of 2018.
states and Norway (as of May 2018) followed by a call for private and public invest-
ments in AI amounting to at least 20 billion Euro by the end of 20203. The latter cannot
match the venture capital provided to companies in China, where around 425 billion
Euro of funds were expected to be raised via Government Guidance Funds in 2016 with
another 250 billion Euro coming from private funds [11], and in the U.S.
In the outline for a European approach, the European Commission (EC) also
acknowledged the need to modernize education and training systems to establish a
talent pool that can advance AI technologies. Currently, the availability of ML experts
cannot match the demand, resulting in a significant surge in salaries and strong
international competition over available talent (including a significant brain drain from
China to the U.S.). The U.S. clearly leads the world in terms of the size and average
experience of the workforce [12], a field where China is also trying to catch up, after
the first undergraduate course in AI was established as recently as 2004. The advantage held today by the U.S. is in large part a result of substantial investments in STEM education in the 1960s, which should thus also be a priority for governments today. As an
example, the German Federal Association for AI has included a call for data science
education starting in third grade as part of its 9-step plan to advance AI in Germany
[13], an initiative that has already been implemented by the Chinese Ministry of
Education with both a plan for increased education in coding starting in primary
schools and an “AI Innovation Action Plan for Colleges and Universities”. To respond to the expected spike in demand for AI talent, the EC planned to invest 2.3 billion Euro specifically in digital skills between 2014 and 2020.
The non-technical implications of AI and the way in which these are approached and
handled will also have a significant effect on international competitiveness. Specifically, regulations concerning data protection and privacy, which impede access to data as the fuel of ML, as well as the comprehensive discussion of ethical issues, can have a restrictive
effect on the speed of innovation in AI. With the introduction of the General Data
Protection Regulation (GDPR) and the planned presentation of ethical guidelines for AI
development by the EC by the end of 2018, researchers in the EU certainly face the
strongest constraints. It must, however, be noted that given the fundamental societal
transformation that a widespread application of AI technologies could trigger as well as
the potential threats of AI, a cautious and balanced approach is justified.

5 Outlook

AD can unquestionably only be achieved by developing AI that is capable of reliable and safe vehicle control, therein matching, if not exceeding, human capabilities. The
explanation of the role, opportunities and challenges of AI for AD above has, however,
also underlined that, although major advances have been achieved over the past decade,
comprehensive development efforts will also be required over the next one and that
substantial non-technical issues must also be resolved in the process. Some potentials

3 1.5 billion as part of the Horizon 2020 programme, 2.5 billion from public-private partnerships and over 0.5 billion via the European Fund for Strategic Investment.
that either serve to accelerate development or provide alternatives if, e.g., legal or
ethical problems prove to be substantial roadblocks, are presented as an outlook in the
following two sections.

5.1 Alternative Methods


As explained above, the current success of AI in various applications and particularly
in the field of AD rests entirely on ML. Other AI methods should, however, not be
disregarded, especially if they provide the opportunity to circumvent the non-technical
issues. Some research efforts are thus directed towards the development of “explainable
AI” methods that provide full traceability and an in-depth understanding of the
decision-making process. For example, so-called “grey-box solutions” are intended to integrate physical models into existing algorithms in order to increase control over the intermediate steps of decision-forming.
An interpretable, mathematical model for safety assurance is proposed by MobilEye, in which rule-based driving results from an analytical description of the driving state: as long as the vehicle fulfills certain conditions, it should not be able to cause an accident [8]. This could result in a fully retraceable white-box solution and enable validation and certification of self-driving capabilities. In this case planning will be explainable, while sensing the environment will still rely on classification by AI; sensor redundancy (camera, lidar, radar) should, however, assure reliability. A key argument to
follow this path is that safe and reliable AD cannot be assured just by collecting
training data: according to statistical estimations, both the necessary amount of collected data and the energy required to process it are too high [14]. Sensitive public reactions to accidents of self-driving cars demand explanations of their cause, and it is highly unlikely that users would accept a black-box explanation.
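The rule-based safety conditions of this model can be illustrated with its safe longitudinal following distance, following the formulation in [8]; the parameter values below (response time, acceleration and braking limits) are illustrative assumptions, not prescribed values:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max=3.0, b_min=4.0, b_max=8.0):
    """Minimum safe following distance (m), as in the model of [8]:
    the rear car may accelerate at a_max during its response time rho,
    then brakes at only b_min, while the front car brakes at up to b_max.
    Speeds in m/s; all parameter values here are illustrative."""
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + (v_rear + rho * a_max) ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(d, 0.0)  # the rule is vacuous when the front car is far faster

# Both cars at 100 km/h (~27.8 m/s): keeping at least this gap means the
# rear vehicle cannot cause a rear-end collision under the stated assumptions.
d = rss_safe_longitudinal_distance(27.8, 27.8)  # ≈ 99.6 m
```

Because the condition is a closed-form inequality over observable quantities, compliance can be checked, and certified, analytically, which is precisely the white-box property argued for above.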
Other approaches include solutions inspired by cognitive science, which intend to
imitate the way in which humans understand and learn models of the physical world
and which may include methods such as detecting intentions of road users from facial
expressions. For example, the MIT spin-off iSee aims to mimic human common sense,
thereby reducing the dependency on large amounts of data and enabling traceable
decision-making that is easier to validate. Perceptive Automata unites neuroscientists
and computer scientists to develop autonomous vehicle software with an intuitive
understanding of driving scenarios. If successful, such AI methods could significantly
boost the system’s ability to cope with unfamiliar situations.

5.2 New Hardware Development


The aforementioned chip designs correspond to a software implementation of ANNs and their execution on conventional von Neumann chip architectures. Here, a major factor contributing to the power consumption and the training or inference duration is the data transfer between the memory and the processing units. As the complexity of more powerful ANNs increases, so does the number of weights representing the synapses.
These have to be transferred between memory and the processing units while processed
data input propagates through the ANN. The latencies and bandwidth limitations associated with this data transfer are known as the “von Neumann bottleneck” and restrict the data throughput. Moreover, the volume of the weights can be too large to be stored in local on-chip memory.
An alternative way is to implement ANNs directly in hardware by means of neu-
romorphic chips. Here, the memory and the processors are not separated. Every arti-
ficial neuron represents a processing unit and has its own memory so that the
computing is performed at the data location by means of the neuron connections.
Furthermore, the neuron communication is not controlled by a central clock. The
communication is only initiated if the corresponding neurons are stimulated. This is a
much better imitation of biological neural networks. The elimination of data transfer between memory and processing units, together with the asynchronous communication concept, offers the potential to reduce power consumption significantly. A vast variety of implementation concepts can be found in the literature [15].
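The asynchronous, event-driven principle can be sketched with a simplified leaky integrate-and-fire neuron, which communicates downstream only when its threshold is crossed; all numerical values here are illustrative:

```python
# Simplified event-driven (spiking) neuron, illustrating the asynchronous
# principle described above: downstream communication happens only when the
# membrane potential crosses a threshold, not on every clock tick.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak          # per-step decay of the membrane potential
        self.potential = 0.0

    def receive(self, weighted_input):
        """Integrate an incoming event; emit a spike (True) on threshold crossing."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # spike event propagated to connected neurons
        return False              # silent: no data moved, (almost) no energy spent

neuron = LIFNeuron()
spikes = [neuron.receive(x) for x in [0.4, 0.4, 0.4, 0.0, 0.4]]
print(spikes)  # [False, False, True, False, False]
```

In hardware, the silent steps cost essentially nothing, which is the source of the large power savings claimed for neuromorphic chips.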
So far, this approach is mainly investigated by academia and is widely ignored by
the industry. IBM was the first company investigating neuromorphic computing and
presented its “TrueNorth” chip in 2011 before the actual breakthrough of deep learning
and the resurgence of convolutional neural networks (CNNs) in 2012. In 2016 it was
shown that a trained ANN can be mapped to such a neuromorphic chip and approach
state-of-the-art classification accuracy [16]. The huge advantage was the very low
power consumption of only 275 mW while processing 2600 frames/s. Currently, Intel
is working on its own neuromorphic chip “Loihi”. Here, the signal processing is based
on asynchronous spiking similar to biological neurons. According to Intel this chip
combines training and inference, supports different ANN topologies including recurrent
neural networks (RNN), can be used for supervised as well as for reinforcement
learning and is continuously learning. Intel calls it a test chip and is going to share it
with universities and research institutions. Samsung announced collaboration with
leading Korean universities to develop a neuromorphic chip. In Europe, neuromorphic computing has been investigated within the frame of the Human Brain Project since 2013. The Belgian research institute Imec introduced its own neuromorphic chip in 2017.
This technology is very young and a lot of research has to be done to explore its full
potential and to verify its capabilities. The claimed potential gains are orders of magnitude in power efficiency and orders of magnitude in learning speed. If even half of these promises hold true, neuromorphic computing should attract strong industry interest in the future. Neuromorphic chips are ideal
for classification tasks but not for precise calculations like conventional processors.
Therefore, they have to be embedded in conventional hardware that deals with rule-based navigation in traffic. Furthermore, new software has to be designed to integrate such chips into conventional hardware systems.

6 Conclusion

Based on the experience gained in the European projects SCOUT and CARTRE, the objective of this chapter was to highlight the role of AI in the development of AD. Besides an overview of current AI hardware and ML-focused methodology, key opportunities and challenges for the application of AI have been discussed, which may, in the case of non-technical issues, also serve as examples for the application of AI in other fields. Future development paths and alternative methods that may help to resolve
specific non-technical issues have also been explored. Due to the central importance of
AI for AD, future development and international competitiveness in particular will be
closely related to AI-specific capabilities.

Acknowledgements. The authors are grateful for fruitful cooperation with the contractual
partners of the Coordination and Support Actions “Safe and Connected Automation of Road
Transport” (SCOUT) and “Coordination of Automated Road Transport Deployment for Europe”
(CARTRE). The SCOUT and CARTRE projects have received funding from the EU’s Horizon
2020 programme under grant agreements No. 713843 and 724086, respectively. The section on
AI hardware further draws from investigations carried out as part of the SCORE project, which
has also received funding under the EU’s Horizon 2020 programme.

References
1. McCarthy, J.: Computer Controlled Cars, Essay (1969)
2. Touretzky, D., Pomerleau, D.: What’s hidden in the hidden layers? BYTE 14, 227–233
(1989)
3. Raina, R., Madhavan, A., Ng, A.: Large-scale deep unsupervised learning using graphics
processors. In: Proceedings of the 26th Annual International Conference on Machine
Learning, ICML 2009, pp. 873–880 (2009)
4. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
5. Döbel, I., Leis, M., Vogelsang, M.M., et al.: Machine Learning - Competencies,
Applications and Research Needs. Fraunhofer Society (2018). (in German)
6. Dally, W.: High-performance hardware for machine learning. NIPS Tutorial (2015)
7. Göhring, D., Latotzky, D., Wang, M., Rojas, R.: Semi-autonomous car control using brain
computer interfaces. Advances in Intelligent Systems and Computing, vol. 94, pp. 393–408
(2013)
8. Shalev-Shwartz, S., Shammah, S., Shashua, A.: On a formal model of safe and scalable self-
driving cars (2018). arXiv:1708.06374v5
9. Slusallek, P.: Understanding the world with AI: training & validating autonomous vehicles
with synthetic data. Talk Presented at Interactive Symposium on Research and Innovation
for CAD in Europe at Tech Gate, Vienna, 20 April 2018
10. Probst, L., Pedersen, B., Lefebvre, V., Dakkak-Arnoux, L.: USA-China-EU plans for AI:
where do we stand? Digital Transformation Monitor of the European Commission (2018)
11. Ding, J.: Deciphering China’s AI Dream, Governance of AI Program. University of Oxford
(2018)
12. Churchill, O.: China’s AI dreams. Nature 553, S10–S12 (2018). https://doi.org/10.1038/
d41586-018-00539-y
13. KI Bundesverband e.V.: Artificial Intelligence: State of the Art and Catalogue of Measures
(2018). (in German)
14. Kalra, N., Paddock, S.M.: Driving to safety: how many miles of driving would it take to
demonstrate autonomous vehicle reliability? RAND Corporation, Santa Monica (2016).
https://www.rand.org/pubs/research_reports/RR1478.html
15. Schuman, C.D., et al.: A survey of neuromorphic computing and neural networks in
hardware (2017). arXiv:1705.06963v1
16. Esser, S.K., et al.: Convolutional networks for fast, energy-efficient neuromorphic
computing. PNAS 113(41), 11441–11446 (2016)
Electric Vehicles
Light Electric Vehicle Design Tailored
to Human Needs

Diana Trojaniello1(&), Alessia Cristiano1, Alexander Otto2,
Elvir Kahrimanovic3, Aldo Sorniotti4, Davide Dalmasso5,
Gorazd Lampic6, Paolo Perelli7, Alberto Sanna1, Reiner John8,
and Riccardo Groppo9

1 Fondazione Centro San Raffaele, Via Olgettina, 60, 20132 Milan, Italy
trojaniello.diana@hsr.it
2 Fraunhofer Institute for Electronic Nano Systems ENAS,
Technologie-Campus 3, 09126 Chemnitz, Germany
Alexander.Otto@enas.fraunhofer.de
3 Infineon Technologies Austria AG,
Siemensstrasse 2, Bau 07.4.19, 9500 Villach, Austria
elvir.kahrimanovic@infineon.com
4 University of Surrey, Guildford, Surrey GU2 7XH, UK
a.sorniotti@surrey.ac.uk
5 M.T.M. s.r.l, Via La Morra, 1, 12062 Cherasco, CN, Italy
D.Dalmasso@brc.it
6 Elaphe Propulsion Technologies Ltd.,
Litostrojska Cesta 44c, 1000 Ljubljana, Slovenia
Gorazd@elaphe-ev.com
7 JAC-ITALY Design Center S.r.l., Via Torino 21 B, 10044 Pianezza, TO, Italy
p.perelli@jac-italy.com
8 Infineon Technologies AG, Am Campeon 1-12, 85579 Neubiberg, Germany
Reiner.John@infineon.com
9 Ideas & Motion Srl, Via Moglia 19, 12062 Cherasco, CN, Italy
riccardo.groppo@ideasandmotion.com

Abstract. The SilverStream project has developed and demonstrated a new
light and affordable vehicle concept (L-category) tailored to the needs of the ageing
population. The project has combined ergonomic concepts conceived for
elderly people with advanced automotive technologies for improved driveability
and energy efficiency. It has focused on the development of a comprehensive
set of technologies covering the whole vehicle, driven by a team of
experts in the medical and cognitive science domains through a
top-down approach. These technologies have then been integrated into a
vehicle demonstrator running in a realistic test environment. This paper
describes the experimental activities carried out over the whole project
to verify elderly users’ acceptance of, and satisfaction with, the proposed
vehicle and the integrated technologies.

Keywords: Human centered design · Elderly needs · LEV · HMI ·
User experience · Usability · Ergonomics · Comfort

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 139–152, 2019.
https://doi.org/10.1007/978-3-319-99762-9_12

1 Introduction

The increase in average life expectancy is surely among the greatest achievements of the past
decades. A gradual transformation has indeed taken place, mainly in the western world
and other advanced societies. The increase in average age has mostly been achieved
through better medical, nutritional and lifestyle factors, all resulting in increasing
longevity. It is estimated that, by 2050, 29.9% of the European population will be over
the age of 65, with the proportion of the oldest people (aged 80 years or more) by that
time being highest in Italy (14.4%), Germany (13.6%) and Spain (12.8%) [1].
Strategies and actions are needed to make urban spaces, houses and transport more
accessible and affordable for these people. It is crucial to include older people in society,
so that they are considered not “second-class” citizens, but an active and necessary part of
the community.
European senior citizens consider car driving a stressful activity, due to age-related
motor, cognitive, perceptual and emotional decline [2]. A prerequisite for driving is
the integration of high-level cognitive functions with perception and motor functions.
Ageing, per se, does not necessarily impair driving or increase the crash risk [3, 4].
However, medical conditions such as cognitive impairments and dementia, and ageing-
related decline, become more prevalent with advancing age and may contribute to poor
driving performance and an increased crash risk [5, 6]. For many seniors, driving a car
is crucial for keeping their independence, their social life and their wellbeing. Older drivers
often self-regulate their driving habits, for example by restricting driving to known
routes or by avoiding driving during rush hours and at night, without giving up
driving altogether.
Products for older people (e.g. a car) should, as far as possible, be adaptable and
easily customizable to suit the skills of the elderly. They will have to adapt to the
changing needs of lifestyles in a discreet manner and, as far as possible, should meet
the aesthetic and functional needs of mature users without limiting the possibility of
self-expression.
In this context, the SilverStream project represents a unique approach to urban
mobility, in which a stylish Light Electric Vehicle (L6e category) integrates a compre-
hensive set of automotive technologies tailored to the needs of an urban and ageing
population. Innovative technologies such as a new HMI based on gesture recognition, a
lightweight seat, and an assisted rear lift and crane, specifically conceived to meet elderly
people’s needs, have been designed and tested in both in-lab and out-lab environments.
In research contexts that envisage the development of innovative technological
solutions and advanced services for different classes of users, one key issue is to
understand the real needs and expectations of the possible end-users and to examine the
dynamics between the users and the environment around them. Understanding
the real user needs guarantees not only the effectiveness of the technological
solution developed, but also that the end-users will interact with the
system according to their existing behaviors, motivations and social/cultural back-
ground. End-user involvement is a practice that should be considered in each
phase of the design, implementation, experimentation and evaluation of an innovative
solution such as the SilverStream vehicle.

The present study describes all the experimental activities performed with end-
users during the three-year project, from the testing of single components
to the final integrated vehicle validation.

2 SilverStream Users Testing Framework

The characteristics of the SilverStream vehicle (Fig. 1), tailored to the specific needs of elderly
people, have been verified over the three-year project by involving representative
samples of older adults in various experimental sessions. In particular, the most user-
relevant subsystems (e.g. seats, HMI, etc.) have been tested, before their final integration
in the vehicle prototype, in laboratory settings through specifically designed
validation studies.

Fig. 1. The SilverStream vehicle

In particular, the sustainable ergonomics, the perceived comfort and the adaptive HMI for
minimum-fatigue vehicle operation have been assessed by testing the following
components:
• An innovative HMI based on gesture recognition, simplifying the operation of the
auxiliary systems and featuring an on-board display design based on advanced cognitive
science studies (Fig. 2a);
• A lightweight seat (e-Seat) specifically designed for (a) an optimal posture, including
lumbar and neck support, for comfortable and low-fatigue driving, and (b) easy ingress
and egress through a 90° swivel function (Fig. 2b);
• An assisted rear e-lift (30 kg payload) and crane for easy loading and unloading of the
car (Fig. 2b).
Two different strategies have been used to assess how well the tested subsystems
meet the elderly needs:

Fig. 2. SilverStream vehicle main components: intelligent control system based on gesture
recognition (a), e-Seat & Rear e-Lift and Crane (b)

(1) Instrumental evaluation. Tests have been performed in in-lab environments, i.e.
a motion analysis laboratory, with the involvement of physical therapists and bio-
engineers, to measure muscle activity, joint motion, forces and pressure distribu-
tion while the subjects perform different motor tasks (i.e. seating, ingress-egress,
etc.). From these measures, information about the subject’s comfort, fatigue
and muscle activity could be gathered and used to evaluate the acceptability of the
tested sub-system prototypes.
(2) Qualitative evaluation. Questionnaires and interviews have been specifically
designed to investigate the subjects’ perception of the tested sub-
system prototypes. In particular, the comfort and ergonomics of specific sub-systems,
such as the e-Seat, as well as the usability, acceptability and feasibility of the HMI,
have been investigated with specifically designed tools.

Fig. 3. SilverStream users testing framework

The validation plan (Fig. 3) with final end-users has been structured in two phases.

The first phase (single components users’ testing) consisted of validating the single
components (e-Seat, intelligent control system based on gesture recognition, Rear e-
Lift and Crane) developed during the project, through controlled experiments in an in-lab
environment, mainly at the San Raffaele Hospital (HSR) facilities. These experiments were
performed with real users over 65 years of age. Their main output consisted
in assessing the validity of the developed technologies for the target population and in
collecting user feedback aimed at improving those technologies. According to the
results obtained, a number of refinements were made to the developed tech-
nologies (e.g. refinement of the e-Seat conformation, re-design of the HMI, etc.).
The second phase (validation in realistic environment with end-users) instead
consisted of validating the overall solution under real scenario conditions, in an out-lab
environment. These experiments were also performed with real users over 65 years of age,
and their main output consisted in an overall evaluation of the proposed solution by
the target population.

2.1 Intelligent Control System with Gesture Recognition


A prototype in-vehicle human machine interface (HMI) system based on gesture
control has been realized within the SilverStream project, allowing the subjects to interact
with the vehicle sub-systems (i.e. the vehicle navigation tool) while avoiding ambiguous
and problematic interactions, in order to ensure fewer distractions while driving. The
interaction with the proposed system, whose development was the responsibility of one of the
project partners (JAC), occurs through hand movements, thanks to a
hand-tracking device (Leap Motion controller, Leap Motion, Inc., USA) which repre-
sents the core of the system. It consists of a small USB peripheral device, which uses
two monochromatic IR cameras and three infrared LEDs to track hand gestures and
recognize finger movements.
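The paper does not detail the HMI software, but a core ingredient of any gesture interface is a mapping from recognized gestures to commands, gated by a recognition-confidence threshold to suppress the ambiguous interactions mentioned above. A hypothetical sketch follows; the gesture names and command strings are invented, and the real SilverStream HMI and Leap Motion SDK differ.

```python
# Hypothetical sketch: mapping recognized hand gestures to vehicle
# auxiliary-system commands. Gesture names and command strings are
# illustrative, not taken from the SilverStream system.

GESTURE_COMMANDS = {
    "swipe_left":  "navigation.previous",
    "swipe_right": "navigation.next",
    "circle":      "volume.toggle",
    "tap":         "select",
}

def dispatch(gesture, confidence, threshold=0.8):
    """Forward a gesture to a command only when recognition confidence
    is high enough; uncertain gestures are ignored rather than guessed."""
    if confidence < threshold:
        return None
    return GESTURE_COMMANDS.get(gesture)

print(dispatch("swipe_left", 0.93))   # -> navigation.previous
print(dispatch("circle", 0.55))       # -> None
```

Keeping the mapping small and explicit mirrors one of the study's later findings: fewer gestures are easier for elderly users to remember.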
In the first phase of the validation (Single components users’ testing), the developed
in-vehicle system has been tested at HSR facilities, in lab-controlled conditions, where
the elderly users have been invited to try the system.
In the second phase of the validation (Validation in realistic environment with end-
users), the HMI system, fine-tuned according to the results gained in the previous
experiments, has been mounted inside the SilverStream vehicle and tested in real
scenarios with target end-users.

2.1.1 Single Component Users Testing: The HMI Study


Thirty elderly subjects took part in the “HMI study” [7]. The aim of the study was to
investigate, through interviews and questionnaires, the appropriateness of the proposed
HMI for use by elderly people in terms of usability, user experience and mental
workload, thereby identifying potential critical aspects. A properly
furnished room equipped with the hand-tracking device, a 3D mouse (3Dconnexion
SpaceNavigator), a monitor (Lilliput FA1013/S 10.1’’) and a computer (i.e. the devices
needed to reconstruct the entire experience with the HMI system) was chosen as the
setting of the experimental procedure (Fig. 4).

Fig. 4. HMI study

The results of the study showed that the perceived difficulty of the HMI
increases as familiarity with technology decreases; moreover, learning to use the HMI
requires a lot of time, a sizeable mental demand, especially for learning and remem-
bering the gestures, and good executive functions to reproduce the gestures properly.
At the end of the test, only a small part (27%) of the population judged the HMI usable, and a
very small part (13%) of the sample considered the HMI easy to use. As suggested by
participants, to become a useful tool for elderly people the tested HMI had to be
integrated with other interaction modalities such as voice and touch control. In addi-
tion, the user interface (UI) had to be further simplified to be more suitable for
gesture-based interaction, and the number of gestures had to be
reduced to make them easier to remember.

2.1.2 Improvements After the First Experimental Phase


Based on the “HMI study” results, the interface of the intelligent control system has
been redesigned to make it easier and more intuitive. The system has also been
integrated with voice and touch control, whereas the gesture control has been simplified as
suggested by the participants of the first experimental phase. Some gestures have been
removed to reduce the mental demand required of elderly people,
and, at the same time, the sensitivity of the hand-tracking device has been increased to
cope with the difficulties, observed during the tests, in reproducing the gestures properly
once learned.

2.1.3 Validation in Realistic Environment with End-Users


Twenty-two elderly subjects participated in the HMI validation in the second phase of
the validation plan. The study was held at the HSR facilities with the involvement of a team of
experts, with the aim of evaluating, with target end-users, the subjective perception of the
benefit of the intelligent control system developed and integrated in the vehicle. During
the test (Fig. 5), participants were invited to carry out a number of tasks commonly
performed in a vehicle (i.e. insertion of a new destination or regulation of the
temperature) through the system, using all its modalities of interaction (i.e. voice, touch
and gestures). Once the tasks were completed, subjects were asked to answer
structured questionnaires investigating their user experience.

Fig. 5. Validation of HMI in realistic scenarios

The results obtained showed that the main strength of the system was the UI design
(50%): elderly subjects appreciated the display dimensions as well as the character size.
Its integration with the vehicle and the easy access to auxiliary functions were also
noticed. However, some critical issues were observed: 5% of the sample population
considered the position of the screen and its angulation unsuitable; a different
angulation of the screen would be required to reduce the chance of being distracted
while driving. Among the HMI interaction modalities, voice control proved to be
the favorite one; however, its limitation to a few functionalities, as well as its
lack of feedback, were reported. Gesture control, instead, proved to be the
least favorite interaction modality and the most distracting one. Unfortunately,
because of the high level of gesture control malfunction observed during the test, it was
difficult to analyze the contribution of gesture control to the HMI system objectively.
Touch control, finally, was considered the most familiar and immediate way to
control the interface.

2.2 e-Seat: Design, Ergonomics and In/Egress


During the SilverStream project, a seat prototype specifically designed for elderly
people has been realized in collaboration with two Italian companies (MTM, a project
partner, and Sparco, a leader in the field of automotive seats). The shapes of the seat
curves (cushion and backrest) have been designed according to the indications provided
by the Transportation Research Institute of the University of Michigan [8]. In addition,
interchangeable cushion and backrest paddings (i.e. pillows of different density and
height, attached to the seat through a Velcro system to easily switch between the different
solutions) have been shaped.

In the first phase of the validation (single components users’ testing), the seat has
been tested in terms of perceived comfort within the “e-Seat comfort evaluation”. The
prototype e-Seat has then been integrated with a roto-translating platform to help
older people enter and exit the vehicle. Tests have been performed at the
motion analysis lab at the HSR facilities to evaluate the efficacy of the proposed system in
lowering the physical requirements of vehicle ingress/egress for elderly people
(“Ingress-egress biomechanical analysis”).
The second phase of the validation (validation in realistic environment with end-
users) was instead mostly devoted to testing the fine-tuned version of the e-Seat in the
SilverStream vehicle in real scenarios, in terms of both perceived comfort and improved
vehicle accessibility.

2.2.1 Single Component Users Testing: The e-Seat Study

(a) e-Seat comfort evaluation


Thirteen elderly subjects took part in this study [9]. The aim of the study was to
evaluate the proposed e-Seat, with its three interchangeable central
paddings of cushion and backrest differing in thickness and lift (SA, SB, SC), by comparing it with a
standard seat in terms of perceived comfort. The study was held in an appropriately
furnished room at HSR, where both a subjective evaluation, by means of selected surveys
and checklists, and an objective assessment, through a pressure distribution
evaluation using a pressure mat (i.e. the X-Sensor system), of the standard seat and the
proposed one were performed (Fig. 6).

Fig. 6. e-Seat comfort study
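The paper does not state which quantities were derived from the pressure mat, but typical seat-pressure measures are contact area, mean pressure and peak pressure. A minimal sketch under that assumption (the noise floor, cell area and data are invented):

```python
# Sketch of common pressure-mat metrics (contact area, mean and peak
# pressure). These are standard seat-comfort measures; the study's actual
# metric definitions are not given in the paper.
import numpy as np

def pressure_metrics(mat, cell_area_cm2=1.0, noise_floor=0.5):
    """mat: 2-D array of cell pressures (e.g. in kPa) from the sensing mat."""
    contact = mat > noise_floor                 # cells actually loaded
    area = contact.sum() * cell_area_cm2        # contact area in cm^2
    mean_p = mat[contact].mean() if contact.any() else 0.0
    peak_p = mat.max()
    return {"contact_area_cm2": float(area),
            "mean_pressure": float(mean_p),
            "peak_pressure": float(peak_p)}

mat = np.array([[0.0, 2.0, 4.0],
                [1.0, 6.0, 2.0],
                [0.0, 1.0, 0.0]])
print(pressure_metrics(mat))
```

A more even distribution (larger contact area, lower peak pressure) is generally read as higher seating comfort, which is how such data can corroborate questionnaire results.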

Among the four examined seats, according to the “Seat features assessment checklist” (SFAC)
[10, 11], the SA configuration (cushion and backrest thickness = 16 mm, cushion
lift = 23 daN and backrest lift = 38 daN) proved the best one, with the only exception of
the backrest. The SC seat (cushion and backrest thickness = 43 mm, cushion lift =
23 daN and backrest lift = 38 daN) instead proved to have the best backrest. Fur-
thermore, based on the “Body part discomfort assessment checklist” (BPDAC) [12–14]
results, the SA seat emerged as the most comfortable, even if a light level of discomfort
at the neck was noticed.
An important aspect to consider is the level of comfort that the seat is able to
guarantee in the lumbar area.
According to the “Psychophysical discomfort questionnaire” (PDQ) [15] results, the
SA and SB seats were rated equally (with the same score) as the most
comfortable: the only difference lay in the number of subjects who judged the cushion
very comfortable (3 vs 1). The seat judged most comfortable by the sample
population was therefore SA. This result was confirmed by the objective evaluation
(pressure distribution) too. Nevertheless, the sample population suggested some
improvements to the backrest to guarantee a higher level of perceived comfort.
Based on these results, only the SA seat has been used in the subsequent study
evaluating the ingress/egress (I/E) task.
(b) Ingress-egress biomechanical analysis
The efficacy of the remotely controlled e-Seat in assisting elderly subjects during the car I/E
task has been assessed with thirty elderly subjects in a car-like wooden setup (Fig. 7),
by comparison with the standard I/E mode (without rototranslation).

Fig. 7. e-Seat ingress/egress using roto-translating movements study

Both user experience measures and biomechanical data were acquired during
the study, performed at the HSR motion analysis lab [16]. Most of the participants con-
sidered the car I/E task easier with the rototranslating movements of the e-Seat than in
the standard mode. Moreover, the remote control was considered easy to use. The
analysis of the biomechanical data showed that the I/E task performed through the rototranslating
movements requires lower muscle activation and smaller knee and trunk ranges of
motion, properties that reduce the physical load sustained by the
elderly in accomplishing the task. The e-Seat is therefore able to facilitate the car I/E task
for elderly subjects whose age-related or impairment-related motor limitations make the
I/E movements particularly difficult. However, some characteristics of the proposed e-
Seat had to be improved: the height from the ground when the e-Seat was rotated was
judged too high, while the velocity of the system was judged too slow.
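The knee and trunk comparison above reduces to each joint's range of motion (ROM), i.e. the difference between the maximum and minimum angle recorded over the task. A sketch with invented angle data (the real study used motion-capture time series, not these values):

```python
# Sketch: computing a joint's range of motion (ROM) from a recorded angle
# time series, as used to compare assisted vs. standard ingress/egress.
# All angle values below are invented for illustration.

def range_of_motion(angles_deg):
    """ROM = max - min of the joint angle over the task (degrees)."""
    return max(angles_deg) - min(angles_deg)

knee_standard = [5, 20, 60, 95, 70, 30]   # standard ingress (illustrative)
knee_assisted = [5, 15, 40, 55, 35, 20]   # with the rototranslating e-Seat

print(range_of_motion(knee_standard))  # -> 90
print(range_of_motion(knee_assisted))  # -> 50
# A smaller ROM indicates a lower physical demand on the joint.
```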

2.2.2 Improvements After the First Experimental Phase


According to the results of the e-Seat studies, some modifications have been applied to
the SA seat with the aim of improving it and thus increasing the level of comfort perceived
by the final end-users. In particular, the redefined e-Seat combines the cushion of the SA seat,
with a reduced length for an easier vehicle egress, and the backrest of the SC seat,
with a reduced stiffness and a lowered contour. Regarding the rototranslating platform,
the use of one button enabling the ingress movement of the seat and of a second
button activating the egress movement has been confirmed, the automatic operation
mode has been preferred over the manual one, and the design of the remote control has been
revised for better understandability.

2.2.3 Validation in Realistic Environment with End-Users


Twenty-two elderly subjects participated in the e-Seat validation in the second phase of
the validation plan. The study was held at the HSR facilities with the involvement of a
team of experts, with the aim of evaluating, with target end-users, the improved accessi-
bility and comfort (i.e. in terms of reduction of driving-related backache) provided
by the rotating seat, its stiffness and its adaptability. During the test, participants
were invited to enter the SilverStream vehicle using the remotely controlled rototranslating
system and to adjust the seat according to their own preferences (Fig. 8).

Fig. 8. Validation of rototranslating e-Seat in realistic scenarios

Validated questionnaires, already used in the first experimental phase (i.e. SFAC
and BPDAC), have been administered to characterize the final e-Seat in terms of
conformation and comfort provided.

The participants were then asked to exit the vehicle through the rototranslating
movements of the seat and encouraged to express their impressions of their sub-
jective experience.
Based on the results gained, the main strengths of the e-Seat were its comfort
(82%) and its easy ingress (64%) and egress (59%), while its main weaknesses were
the insufficient space for the lower limbs (59%) and the considerable height from the
ground when the seat is completely rotated (27%).
Access to the vehicle through the SilverStream e-Seat thus proved to be
improved and comfortable, even if some critical points remain. It is important to note that
most of these aspects are due not to the properties of the seat but rather to the
characteristics of the final SilverStream vehicle: its small dimensions and its traditional door are, in fact,
the main reasons for the critical issues found. Most likely, the same SilverStream e-Seat,
integrated with the same rototranslating mechanism and controlled by the same remote
control but mounted on a larger vehicle chassis, would produce fewer negative opinions
than those reported by the elderly during the test.

2.3 Rear e-Lift and Crane


A lifting platform (Rear e-Lift) and a mechanical assistant (Crane) to support elderly
people in loading and unloading weights have been developed and installed in the trunk
of the SilverStream vehicle.
The Rear e-Lift is characterized by a container (740 × 490 mm) in which users can
load boxes or luggage, and which can be lowered by 400 mm from the trunk level to help
the elderly load and unload bulky and heavy goods. The platform is activated for
loading with one button and for unloading with a second one, while its
movement can be stopped in the desired position simply by releasing the button. The Rear e-
Lift can lift up to 30 kg, and the total weight of this tool is 45 kg.
The Crane has an electric winch with a rope and hook, which makes it particularly
useful in the case of loads with handles. The mechanical assistant is positioned by moving
its arm manually, while lifting is electrically assisted. The Crane can lift up
to 30 kg, and the total weight of this tool is 8 kg.
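The hold-to-run behaviour described above — movement only while a button is held, stopping on release, within the 400 mm travel — can be sketched as a simple per-tick control function. The 400 mm limit follows the paper; the tick size and button names are illustrative.

```python
# Sketch of the Rear e-Lift's hold-to-run control: one button lowers the
# platform, another raises it, and releasing the button stops it where it
# is. The 400 mm travel is from the paper; step size is invented.

TRAVEL_MM = 400          # platform can be lowered 400 mm from trunk level

def step(position_mm, lower_pressed, raise_pressed, step_mm=5):
    """Advance the platform by one control tick; released buttons = stop."""
    if lower_pressed and not raise_pressed:
        return min(position_mm + step_mm, TRAVEL_MM)   # move down
    if raise_pressed and not lower_pressed:
        return max(position_mm - step_mm, 0)           # move up
    return position_mm                                 # hold position

pos = 0
for _ in range(10):                 # operator holds "lower" for 10 ticks
    pos = step(pos, lower_pressed=True, raise_pressed=False)
print(pos)   # -> 50, then the platform stays put once the button is released
```

Hold-to-run controls of this kind are a common safety pattern for powered lifts: the operator's finger, not software state, decides when motion continues.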
In the first phase of the validation (Single components users’ testing), the Rear e-
Lift and the Crane of the SilverStream vehicle trunk have been tested within the “Rear
e-Lift & Crane study”.
In the second phase of the validation (Validation in realistic environment with end-
users), the fine-tuned version of the proposed trunk with its remote control completely
renewed has been tested in real scenarios with target end-users.

2.3.1 First Year Experimental Study: The Rear e-Lift & Crane Study
Fourteen elderly subjects took part in the “Rear e-Lift & Crane study”. The aim of the
study was to assess the efficacy of the SilverStream trunk (comprising the Rear e-Lift and
the Crane) in supporting elderly people during the loading and unloading (L/UL) task,
by comparing the Rear e-Lift and Crane movements with the standard L/UL
task. The tests were carried out in an appropriately furnished area at MTM in
Cherasco, where questionnaires and interviews were administered to the partici-
pants for the evaluation of the SilverStream trunk (Fig. 9). Both the Rear e-Lift, operated
by a remote control, and the Crane, activated by buttons on the crane arm, proved easy to use
and were well accepted by the users. However, some critical issues were highlighted for
both devices: the slow speed of the Rear e-Lift and the uncontrolled load swings
of the crane were reported.

Fig. 9. Rear e-Lift & Crane study

2.3.2 Improvements After the First Experimental Phase


No improvements have been implemented for the Rear e-Lift and Crane except for the re-
design of the remote control that activates the Rear e-Lift. Within an integration and
optimization process, the trunk and e-Seat remote controls have been merged. This
merging required revising the Rear e-Lift’s modes of operation, which no longer use
one dedicated button for the activation of loading and another single button for the
activation of unloading. The final version of the remote control, which operates both
the e-Seat and the Rear e-Lift, is in fact composed of four buttons: one combination of
two buttons activates loading, while another combination of two buttons activates
unloading. The remote control has, however, been equipped with its own legend to
improve its comprehensibility for elderly people.

2.3.3 Validation in Realistic Environment with End-Users


Twenty-two elderly subjects participated in the Rear e-Lift and Crane validation in the
second phase of the validation plan. The study was held at the HSR facilities with the
involvement of a team of experts, with the aim of evaluating the enhanced performance
provided by the Rear e-Lift and Crane during actual vehicle operations. During the
test, participants were asked to pick up a bulky load (i.e. a trolley), to place it inside the
luggage compartment of the SilverStream vehicle and then to unload it, first with the
platform and then with the mechanical assistant. Once the tasks were completed, a
structured questionnaire was administered to investigate the participants’ user experience
with the components of the SilverStream trunk (Fig. 10).

Fig. 10. Validation of Rear e-Lift & Crane study in realistic scenarios

Loading and unloading objects proved strongly improved and easier when per-
formed with the Rear e-Lift. However, some participants considered the time necessary
for the L/UL task using the platform excessive. Only a small part of the tested
population considered the Rear e-Lift unnecessary. Finally, a more intuitive design of
the remote control is needed, since the main source of the difficulties met by the elderly in
using that system was understanding it.
Regarding the Crane, 91% of participants claimed that this device, like the Rear
e-Lift, is very useful for loading and unloading weights. Nevertheless, some sugges-
tions were made for further improvements, such as replacing the hook with a
carabiner (9%), making the buttons more visible (9%) and adding a lock to avoid
uncontrolled load swings (5%). The lack of an automatic opening of the trunk door was
noted by 9% of participants.

3 Conclusion

As a result of the studies performed during the SilverStream project and summarized in
the present paper, it is possible to state that the SilverStream final demonstrator can be
considered a valid solution for supporting elderly drivers and, consequently, enhancing
their driving experience.

Acknowledgments. The research leading to these results has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant agreement
No. 653861 (SILVERSTREAM).
152 D. Trojaniello et al.

DCCS-ECU an Innovative Control and Energy
Management Module for EV and HEV
Applications

Bartłomiej Kras, Paweł Irzmański, and Maciej Kwiatkowski(&)

Impact Clean Power Technology S.A.,
Aleje Jerozolimskie 424 A, 00-116 Pruszków, Poland
{bk,pi,mk}@icpt.pl

Abstract. Impact Clean Power Technology S.A. (ICPT S.A.) has recently
developed an innovative, universal, and scalable electronic control unit for
electric (EV) and hybrid (HEV) vehicles which fulfils intelligent management
functions. One of the main problems of modern EVs is energy management. The
proposed ECU (Electronic Control Unit) addresses this issue by optimising
energy consumption, improving power performance, and distributing power in
real time, which results in an extended vehicle range.

Keywords: DCCS-ECU · HEV · EV · Scalable · Redundant CAN

1 Introduction

The development of electric vehicles brings the necessity of using a new generation of
electronic systems. An on-board ECU (Electronic Control Unit) computer is destined to
manage the operation of the battery and the electric propulsion system in an electric or
hybrid car. Electric and hybrid vehicles offered on the market typically contain dedicated,
expensive, and complex ECU computers, which are either not available to new market
players currently investing in electric vehicles (because, as a common practice, they are
reserved for large automotive concerns) or not fully suitable for use in new applications,
so that their implementation and integration would be complicated and costly.
Admittedly, the in-house development of such systems is economically justified from the
point of view of a company seeking success by delivering one product or product family
to the e-mobility market. The unit presented here responds to the mentioned problem and,
due to its outstanding features, offers a universal, high-quality solution. As part of this
project, the authors aimed at delivering to the market an easily available, intelligent, and
scalable management unit with the widest possible range of target applications within the
electric vehicle industry, at very low prices even for small batches (fewer than 100 units).
The electronic board presented in Fig. 1 has been designed to reduce the period of
integration in a given application. This way, the time reserved for the product launch onto
the market is significantly shortened, and the stages of development, testing, and software
preparation are less time-consuming.

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 153–161, 2019.
https://doi.org/10.1007/978-3-319-99762-9_13

Fig. 1. Photograph of DCCS-ECU computer

The implementation of ECU computers dedicated especially to EV (electric vehicle)
and PHEV (plug-in hybrid electric vehicle) cars allows optimization of energy
consumption and flexible management of the vehicle’s power. As a consequence, the
driving range of such a vehicle is comparatively enhanced. Furthermore, the introduced
device is fitted to match a variety of possible user systems and to take over different
functions in the target application. For example, it could either be a module performing
DCU (Door Control Unit) functions under the supervision of another on-board
computer or become the vehicle’s main computer itself. An example of such a
distributed implementation is the Mitsubishi i-MiEV electric vehicle: its control
platform is distributed and consists of as many as five dedicated, independent ECU
computers. In the remainder of this paper the ECU module is referred to as DCCS-ECU
(Distributed Computer Control System—Electronic Control Unit).

2 Communication Buses in Contemporary Vehicles

Contemporary vehicles have several dozen and often even a few hundred electronic
controllers connected to one another by fast digital buses. The global leader among
currently installed digital buses is the CAN (Controller Area Network) standard, developed
by Robert Bosch GmbH in 1986. The CAN standard [2] refers to both the bus and the
data transmission protocols. CAN is a broadcast bus, which does not have a
discrete superior unit.
Together with the ISO 11898 and SAE J2284 standards, the CAN protocol became an
international norm applied in passenger cars.
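Because no node is superior, simultaneous transmissions on a CAN bus are resolved by bitwise arbitration: a dominant bit (logic 0) overrides a recessive bit (logic 1), so the frame with the numerically lowest identifier wins bus access. A minimal sketch of this rule (the identifiers are illustrative, not taken from the paper):

```python
def arbitrate(pending_ids):
    """CAN bitwise arbitration: since a dominant bit (0) overrides a
    recessive bit (1), the frame with the numerically lowest identifier
    wins bus access; all other nodes back off and retry later."""
    if not pending_ids:
        return None
    return min(pending_ids)

# Example: a propulsion frame (0x1A0) beats two higher-ID frames.
winner = arbitrate([0x3C8, 0x1A0, 0x2F0])
```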
Figure 2 presents a typical topology of a contemporary vehicle. Notably, multiple CAN
communication buses are present, each performing a different function [3]. One of them
is the propulsion system CAN bus, which allows communication among the controllers
responsible for the drive system and safety, such as the motor controller and the ABS
system controller. The comfort CAN bus is responsible for communication among
multimedia devices such as the radio, navigation, or on-board entertainment system.
The body CAN bus connects cabin controllers with one another, e.g. the electric
window controller with the seat controller.

Fig. 2. Typical EV/HEV topology: the ECU connects the propulsion CAN bus (ESP, ABS, and
engine controllers), the comfort CAN bus (radio and navigation controllers), the body CAN bus
(door, AC, and dashboard display controllers), and the diagnostic CAN bus

Additionally, there is a separate CAN network used for the diagnostics of the vehicle’s
electronic boards. Because the transferred data differ in character, the networks operate
at different speeds and sometimes with different application-layer standards such as
CANopen or J1939. Since the ECU system uses data sent via all buses, it is recommended
that it has the highest possible number of independently operating CAN bus interfaces.
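The resulting gateway role can be sketched as a routing table keyed by source bus and frame identifier, with each interface modelled independently at its own bitrate. The bus names, identifiers, and the class itself are illustrative assumptions, not part of the DCCS-ECU firmware:

```python
from collections import defaultdict

class CanGateway:
    """Sketch of an ECU bridging several independent CAN buses."""

    def __init__(self):
        self.buses = {}                  # bus name -> bitrate in bit/s
        self.routes = defaultdict(list)  # (src bus, CAN id) -> [dst buses]
        self.delivered = []              # log of (dst bus, CAN id, data)

    def add_bus(self, name, bitrate):
        self.buses[name] = bitrate

    def add_route(self, src, can_id, dst):
        self.routes[(src, can_id)].append(dst)

    def on_frame(self, src, can_id, data):
        # Forward a received frame to every bus routed for this ID.
        for dst in self.routes.get((src, can_id), []):
            self.delivered.append((dst, can_id, data))

gw = CanGateway()
gw.add_bus("propulsion", 500_000)            # 500 kbit/s drive bus
gw.add_bus("comfort", 125_000)               # slower comfort bus
gw.add_route("propulsion", 0x1A0, "comfort")  # e.g. speed for the display
gw.on_frame("propulsion", 0x1A0, b"\x10\x27")
```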

3 DCCS-ECU Structure

Due to the device’s universal character, it offers multiple hardware configurations.
Naturally, not all of these options can be served by the microprocessor at once. It was
assumed that part of the system’s external peripherals would be activated with additional
passive components, which enables extra hardware optimization at the production stage.

3.1 Control MCU Unit


The heart of the DCCS-ECU was initially planned to be an Infineon TC27x
microcontroller based on the TriCore 1.6 core of the Aurix family. This complex and
advanced microcontroller forms a robust and functionally flexible computing platform.
However, due to high launching costs and its level of complexity, an alternative has been
considered for this purpose as well: a solution by NXP (formerly Freescale) within a
more attractive price bracket. Other factors were taken into account, too:
– systems and tools produced by NXP are easily available on the market even for
entities working with smaller production batches;

– NXP (formerly Freescale) is, next to Renesas, Bosch, and Infineon, one of the largest
and most important manufacturers of automotive solutions.
Interestingly, in their presentations Infineon compares their solution mainly with that
of NXP, which may lead to the conclusion that both products, although dedicated to
different market segments, offer similar functionality. The final choice was a dual-core
unit of the S12XEP series. This system enables the upload of a real-time operating
system (RTOS) in one of two possible forms:
– with a free licence, or
– with an additional licence fee, where the system is equipped with functional safety
features.
Thus, individual users decide which safety level of the controller they prefer.
The second core, which is twice as fast as the main core, allows real-time encryption of
information transferred via the internal data buses, using the AES (Advanced Encryption
Standard) cipher algorithm with a minimum 128-bit key.
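The dataflow of such bus encryption can be sketched as follows. Python’s standard library contains no AES implementation, so a SHA-256-based counter-mode keystream stands in for the AES-128 engine mentioned above; the sketch therefore illustrates only the structure of the operation, not the actual cipher:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream derived from SHA-256 -- a stand-in for the
    AES-128 engine described in the text, used here only to show the
    encrypt-on-the-fly dataflow."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(key: bytes, nonce: bytes, payload: bytes) -> bytes:
    """XOR the payload with the keystream; applying it twice decrypts."""
    ks = keystream(key, nonce, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))

key = b"\x00" * 16            # 128-bit key, as in the text
msg = b"torque=120Nm"         # illustrative internal-bus payload
ct = protect(key, b"\x01", msg)
assert protect(key, b"\x01", ct) == msg   # round trip restores the payload
```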

3.2 Power Supply


In contemporary all-electric vehicles, the voltage drops during starting that are typical
for combustion engine vehicles are not present at all. Nevertheless, phenomena such as
voltage fluctuations may occur in hybrid applications. In response to such events, it has
been decided to tailor the DCCS-ECU device for short-duration operation (up to 15 s) at
supply voltages reduced even to 6 V. The maintenance of the device’s full functionality
is not required for this purpose—some circuits may be deactivated for the duration of
this procedure; the system, however, should remain stable and predictable. The overall
current permitted to flow into the ECU, derived from assumptions concerning the
current capability of the outputs, is 52 A (excluding non-potential outputs). In reality, a
situation where all outputs are fully loaded simultaneously is not feasible; moreover,
preparing the device for continuous operation at such high currents would be
complicated and costly. The board should therefore be designed so that the main supply
traces carry 75% of the summed maximum output current, i.e. 39 A.
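These supply assumptions can be captured in two small helpers. The 6 V and 15 s ride-through limits, the 52 A total, and the 75% simultaneity factor come from the text; the 9 V threshold separating normal from degraded operation and the state names are our illustrative assumptions:

```python
def design_current(total_output_current_a, simultaneity=0.75):
    """Main-trace design current: the text assumes at most 75% of the
    summed output currents flow simultaneously."""
    return total_output_current_a * simultaneity

def supply_state(voltage_v, undervoltage_s, nominal_min_v=9.0):
    """Undervoltage ride-through policy sketched from the text: between
    6 V and nominal, the ECU keeps running for at most 15 s with
    non-essential circuits shed."""
    if voltage_v >= nominal_min_v:
        return "normal"
    if voltage_v >= 6.0 and undervoltage_s <= 15.0:
        return "degraded"      # deactivate non-essential outputs
    return "shutdown"

print(design_current(52.0))    # reproduces the paper's 39 A figure
```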

3.3 Communication
The device is equipped with a minimum of four independent CAN interfaces. Such a
high number of interfaces is convenient and serves well in case the DCCS-ECU is used
to convert a vehicle: fairly often, simulating the components removed from the
combustion vehicle (such as the engine controller) and achieving the required
functionality of the remaining components involves separating the CAN line and
creating discrete connections.
Each interface is independent and capable of working at the various speeds defined in
the communication protocol specification. Embedding LIN and FlexRay interfaces in
the main body of the computer has been abandoned due to their low popularity; by
doing so, the total material costs have been reduced.

3.4 Other Features


As it is not feasible to anticipate all possible client requirements for the various electric
vehicle applications, and in order to enable the addition of omitted or potential new
functionalities, the device has been equipped with a UEXT-type connector [5]. Within
the enclosure there is space reserved for an additional adapter module, which allows the
implementation of components such as the currently popular eCall.

4 Functions in Target System

In this section, an example functionality of the DCCS-ECU in an electric vehicle is
presented: the device is shown at work in cooperation with the energy storage system of
a converted FIAT 500 car, and on this basis the authors present the operation of the
computer. Please refer to Fig. 3 for the exact module location in the target system
(Fig. 4).

Fig. 3. Reference diagram of electric vehicle system which may implement DCCS-ECU [4]

Energy storage systems of contemporary electric vehicles are usually rather complex.
Typically, the energy is stored in lithium-ion cells, which are known for their high
energy density, high power density, high number of possible operating cycles, and
maintenance-free character. Lithium-ion cells require appropriate management of the
charging and discharging processes. This function is usually performed by a Battery
Management System (BMS) built into the battery. The BMS handles a number of tasks
connected with the measurement of electrical values in the battery pack: among others,
it is necessary to measure the voltage and temperature of each single cell inside the
battery, as well as the current of the battery circuit. Based on the collected measurement
data and embedded algorithms, the BMS can determine the parameters of the storage
system, such as the State of Charge (SOC) and the allowed charge and discharge
currents. These parameters are transmitted in real time via the CAN line. The ECU uses
this information when adjusting the torque settings for the propulsion system, and also
during the charging process, when the ECU mediates the information exchange between
the BMS and the on-board or external charger.

Fig. 4. Propulsion system torque calculation mechanism: the ECU combines the accelerator and
brake pedal positions (digital I/O), the characteristic of the selected drive mode, and the
limitation calculated by the BMS (via CAN) with the speed and drive direction reported by the
ABS controller (via CAN) into a torque setting for the motor controller
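The torque arbitration of Fig. 4 can be sketched as below. The parameter names and the linear current-to-torque conversion are illustrative assumptions; only the clipping of the driver request against the BMS limit reflects the mechanism described in the text:

```python
def torque_command(pedal_pos, drive_mode_gain, max_motor_torque_nm,
                   bms_discharge_limit_a, motor_nm_per_amp):
    """Clip the driver torque request (pedal position scaled by the
    selected drive mode) to what the BMS discharge-current limit allows.
    A simple linear motor model converts current to torque."""
    requested = pedal_pos * drive_mode_gain * max_motor_torque_nm
    bms_limit = bms_discharge_limit_a * motor_nm_per_amp
    return min(requested, bms_limit)

# Full pedal in an "eco" mode while the BMS restricts discharge current:
t = torque_command(1.0, 0.6, 200.0, 100.0, 0.9)
```

Here the 120 Nm driver request is reduced to the 90 Nm the battery can currently supply, which is exactly the kind of real-time limitation the BMS broadcasts over CAN.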

5 Summary and Conclusions

The project work on the DCCS-ECU module resulted in a device which, due to the
popularity of the CAN bus, can be connected to literally any contemporary EV/HEV
vehicle. Major decisions have been made regarding limiting the availability of the LIN
and FlexRay buses in the primary version of the device; for more demanding
applications, the supplementary adapter ports can be used without difficulty to
implement the features mentioned earlier.
Based on the above characteristics, the delivered DCCS-ECU computer can be
described using the information presented in Table 1.

Table 1. DCCS-ECU module parameters



As a complement, the results of the electromagnetic emission analysis conducted under
the device’s typical operating conditions, i.e. during discharge (controlled with a
converter), are shown in Figs. 5 and 6. The tests were successful and met the
requirements of the 2004/1008/EC regulation.

Fig. 5. Electromagnetic radiation emission measurement during device operation within the
frequency range 30–200 MHz

Fig. 6. Electromagnetic radiation emission during device operation within the frequency range
200 MHz–1 GHz

References
1. ICPT S.A.: Development of universal electronic control unit for electric and hybrid vehicles.
http://icpt.pl/innovations.aspx#tab1
2. Bosch: CAN Specification Version 2.0 (1991). www.can.bosch.com
3. Michna, M., Adamczyk, D., Kut, F., Ronkowski, M., Bernatt, J., Pistelok, P., Król, E.,
Kucharski, Ł., Kwiatkowski, M., Byrski, Ł., Kozioł, M.: Koncepcja, modelowanie i
symulacja układu napędowego prototypu samochodu elektrycznego “Elv001”. Zeszyty
Problemowe—Maszyny Elektryczne Nr 92/2011
4. Xue, X.D., Cheng, K.W.E., Cheung, N.C.: Selection of electric motor drives for electric
vehicles. Department of Electrical Engineering, The Hong Kong Polytechnic University,
Hung Hom, Kowloon, Hong Kong, China
5. http://en.wikipedia.org/wiki/UEXT
Connectivity Design Considerations
for a Dedicated Shared Mobility Vehicle

Jörg Kottig1(&), Dirk Macke1, and Michael Pielen2


1 FEV Europe GmbH, Neuenhofstrasse 181, 52078 Aachen, Germany
{kottig,macke}@fev.com
2 share2drive GmbH, Krefelder Straße 147, 52070 Aachen, Germany
Michael.Pielen@share2drive.com

Abstract. With shared mobility features such as keyless entry and cloud-stored
user profiles informing and guiding the vehicle design in many areas, such as the
E/E architecture, new challenges arise on how to approach the early stages of a
vehicle development process. Car connectivity, as the enabler of many shared
mobility features, is the focus of the presented approach; however, integration
aspects into the whole system design are considered as well.
For an entrepreneurial project, where a vehicle is conceptualized from the
idea through a prototype to a serial product, requirement specifications and
function specifications are not the right tools to start with. In this paper,
share2drive and FEV share their design approach of deriving a prioritized fea-
ture set for a new vehicle class, the Personal Public Vehicle (PPV), dedicated to
use in shared mobility concepts, with end-user satisfaction and User
Experience (UX) as guiding principles.

Keywords: Agile · Architecture · Automotive · Blockchain · C-ITS ·
Connected car · Connected vehicle · Connectivity · Design concept ·
Digital services · Multimodal · Shared mobility · Urban mobility · V2X

1 What Is Connectivity?

If we want to design a “connected car”, the first step is to understand the needs and
expectations of its intended users.
As the rise of today’s ubiquitous social media channels and platforms has shown,
“connectivity” is significantly more than just the technical means to exchange infor-
mation between classic communication theory’s sender and receiver. As every
smartphone user will confirm, “being connected” is as important as the actual exchange
of information. “Connectivity” today refers as much to one’s ability to connect socially
as it describes an ecosystem enabling such connections.
Another aspect to keep in mind is that “connectivity” today is inevitably linked
with the expectation of “information at your fingertips”. Bill Gates’ vision [1] has
become a reality, and any connected car will need to deliver on this expectation. This
implies not only the availability of digital services users will take for granted—most
notably delivery of audio and video content and messaging—but also access to
personal or personalized content without the need for additional devices or complicated
authentication procedures.

© Springer Nature Switzerland AG 2019
J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 162–172, 2019.
https://doi.org/10.1007/978-3-319-99762-9_14
A connected vehicle will show innovation through novel ways of combining digital
services and making these accessible in-vehicle. The technical solutions for doing so
will give center stage to the user experience (Fig. 1).

Fig. 1. A connected car will have to deliver “information at your fingertips”

2 SVEN—A Public Personal Vehicle

The vehicles used in urban mobility services to date can fulfill their deployment
purpose only to a limited extent, since they were originally designed to be owned by an
individual user. Fleet operators carry out only minor modifications, which focus on
access to the vehicles themselves. In the course of use within an urban shared mobility
concept, the specific requirements of shared mobility operators and vehicle users
become equally important. At the same time, vehicles in new mobility service scenarios
must be understood as mobile devices in a multimodal world. As a result, the
requirement profile of an ideal shared vehicle should be based not only on a
conventional customer analysis, but also on the requirements of a mobility concept and
a business model.
SVEN—Shared Vehicle Electric Native—(see Fig. 2) is a pure electric vehicle
designed for urban shared mobility with focus on car sharing and fleet management.
The Unique Selling Points (USPs) addressed by SVEN are:
• Designed for shared mobility and for short distances
• Zero emissions (pure electric vehicle)
• Ease of use—easy to clean—easy to maintain

Fig. 2. The Public Personal Vehicle SVEN (Shared Vehicle Electric Native)

• 1 + 2 seater at 2.5 m length, so that perpendicular parking is possible
• Fully connected vehicle
• Ready for multimodal transport—integration between public transportation and
individual transport
By combining three design principles SVEN is a vehicle optimized for the needs of
shared mobility:
1. Design for best cost per km
Each design decision is evaluated towards the Total Cost for Ride (TCR) effects in a
sharing use case.
2. Design for sharing convenience
Focus on ease of use, cleaning between rides, maintenance & connectivity—
compromising on comfort.
3. Design for further car sharing revenue streams
The vehicle IT architecture is designed for maximum use of vehicle data, enabling
operator-specific solutions.
SVEN offers a dynamic and functional design. The use of front and rear body
panels allows easy customization to match an operator’s branding, as well as easy
handling for upgrades.
Lights, cameras, and sensors are placed on a functional band connecting the
greenhouse with the body. A sliding door allows the driver to comfortably get out in
tight parking situations. The generous glass surfaces ensure an open and bright
atmosphere as well as an optimized view of the car’s surroundings.
SVEN offers a functional interior design (Fig. 3), optimized for sharing and for use
by up to three people. With regard to the ease-of-use approach – a sub-item of design
principle two – all controls can be operated via touch or voice control. Smartphone and
tablet connection, an inductive charging terminal, mirror cameras, and displays such as
the instrument cluster underline the advanced mobility claim and support the driver in
the safe handling of the vehicle. Easy-to-clean surfaces allow quick and easy
reprocessing after a usage cycle.

Fig. 3. Interior Design Study for SVEN

The technical design of the powertrain is based on the predominant use in urban
traffic. SVEN integrates a 20 kWh battery pack to ensure an 80 km range—even under
extreme conditions. The 24 kW rear motor allows a maximum speed of 120 km/h.

3 Connectivity Demands on a Car Dedicated to Sharing

The previous chapters have pointed out that the requirements on the Public Personal
Vehicle (PPV) need to fulfil the demands of the Operator and the Individual User alike,
and that besides the Business Model they also need to consider the Mobility Concept.
This is illustrated in Fig. 4. However, while the previous section described the overall
requirements on a shared vehicle, this chapter focuses on the connectivity aspects:
For a number of the demands on a shared vehicle, connectivity will be the enabler.
The business model relies on the dependable delivery of digital services, which in turn
need a reliable connection to internet and cloud services. These services can be
downloading “apps” from a “vehicle application store”, but can also be, for example, an
optional driver assistance package allowing for assisted parking or Adaptive Cruise
Control (ACC).

Fig. 4. Demands on a Public Personal Vehicle (PPV)

A multimodal mobility concept works best if the various mobile devices are in
sync with each other. Future urban concepts for Smart Cities and Cooperative Intel-
ligent Transport Systems (C-ITS) [2] can leverage the full potential of connected and
interconnected vehicles only with cars ready for V2X [3, 4].
Car sharing operators need to monitor and maintain their fleet. This requires the
vehicles to connect to a central backend service (cloud) in order to realize essential
networked functionality such as a booking service or service charging. When a
connected car is ready for digital services, operators can offer instant service provi-
sioning for a completely new User Experience. For example, they could reward users for
positive driving behavior by the minute, not just at the end of a ride. Blockchain
technology can be an enabler for this, if it keeps its promise to allow peer-to-peer
transactions within seconds. FEV is partnering with Nano to elaborate the potentials of
the Nano cryptocurrency [5].
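Minute-wise rewarding can be sketched as a simple ledger of micro-credits. This is purely illustrative: it does not model the Nano protocol or any real payment API, and the score scale and rate are our assumptions:

```python
def minute_rewards(driving_scores, rate_per_point=0.01):
    """Settle one micro-credit per minute, proportional to a
    driving-behaviour score between 0 and 1 for that minute."""
    ledger = []
    balance = 0.0
    for minute, score in enumerate(driving_scores):
        credit = round(score * rate_per_point, 6)
        balance += credit
        ledger.append((minute, credit))   # one settled transaction/minute
    return ledger, balance

# A four-minute ride: two perfect minutes, one average, one harsh-braking.
ledger, total = minute_rewards([1.0, 0.5, 0.0, 1.0])
```

The point of the per-minute granularity is that the user sees the reward accrue during the ride rather than in one lump sum at the end.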
Last but not least, a car needs to create a relationship with each user individually for
them to bond with the vehicle or the service. The car must literally connect to its users
to deliver individualized functions and behavior: as a user, I should be able to access
the car as if it were mine, personalize it as if it were mine, and access my personal
media and data just like on a smartphone.

4 Connectivity - Technical Aspects

Among the multitude of technical aspects of connectivity, the most obvious requirement
is the need for a permanent Internet connection, for instance to access personal data, to
stream music, or to let the user subscribe to digital services. A personalized shared car
also requires the vehicle to be able to work with user profiles, for example configuring
the car’s seating and mirror positions, or pre-setting the favorite music channels.
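Applying a cloud-stored profile on vehicle unlock can be sketched like this; the field names and the flat car-state dictionary are illustrative assumptions, not an actual vehicle API:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Cloud-stored personalization record for one shared-mobility user."""
    seat_position_mm: int
    mirror_tilt_deg: float
    favorite_channels: list

def apply_profile(profile: UserProfile, car_state: dict) -> dict:
    """Push a downloaded profile into the car's settings on unlock."""
    car_state["seat_mm"] = profile.seat_position_mm
    car_state["mirror_deg"] = profile.mirror_tilt_deg
    car_state["presets"] = list(profile.favorite_channels)
    return car_state

# The same car configures itself differently for each authenticated user.
state = apply_profile(UserProfile(240, 12.5, ["Jazz24"]), {})
```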
Connected cars will also enable more intelligent traffic management: drivers can
receive real-time traffic updates and routing recommendations. The unique concept
of a vehicle designed for shared mobility will allow its operator both to harness such
swarm data for its services and to broker such data.
A connected car should be ready for connected and cooperative Advanced Driver
Assistance Systems (ADAS) [6] in order to be future-proof and to support Autonomous
Driving. It should also meet the demands of Cooperative Intelligent Transport Services
(C-ITS); thus it should be ready for the standard approaches of Vehicle-to-Everything
(V2X) communication.

Fig. 5. There are two different approaches to standardizing V2X communication today

It is not yet obvious which of the technologies will win the race for a worldwide
standard. While the US market is currently targeting the WLAN-based technology, the
European market and China seem to favor the cellular-based solutions (see Fig. 5).
Further required radio interfaces are WLAN, Bluetooth, and NFC: the car shall act
as a WLAN hotspot to allow its users WLAN access on the ride. Bluetooth capabilities
are required, for example, for music streaming from the user’s smartphone to the car’s
infotainment system, and Near Field Communication (NFC) is required as a fallback
solution to open the car in case a user does not own a smartphone for keyless entry.
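The resulting access fallback chain (smartphone key first, NFC card as the fallback for users without a smartphone) can be sketched as follows; the return labels and the booking check are illustrative assumptions:

```python
def resolve_entry(booking_ok, smartphone_key_present, nfc_card_present):
    """Decide how to unlock a shared car: keyless entry via smartphone is
    preferred, an NFC card is the fallback, and without a valid booking
    no credential grants access."""
    if not booking_ok:
        return "denied"
    if smartphone_key_present:
        return "unlock:smartphone"
    if nfc_card_present:
        return "unlock:nfc"
    return "denied"
```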
There are a number of solutions available on the market offering hardware and
software to turn any car into a ‘ready to share’ car. The solutions for our concept need
to be flexible enough to support a broad bandwidth of use cases: from being integrated
into a big car fleet to being a privately owned car that can be shared with others.

5 Challenges

The previous chapters hint at some of the challenges of conceptualizing and designing
a car dedicated to sharing. The overarching theme is that the car needs to feel personal
to its user, and at the same time it needs to integrate seamlessly into the big fleet of a
car sharing operator. The technology needs to allow a wide range of integration: from a
fleet down to a private car offered for sharing.

“Connectivity is the capability to connect not only technically, but also in a social
aspect” [7]. Therefore, for the connectivity concept we need to anticipate the future
behavior and demands of the car sharing community. The presence of digital services
will be taken for granted. The services inside a car will be expected to work as on
smartphones: individual content is always available, and software upgrades work in the
background, ideally without user interaction or attention.
We want to make the car ready for future technologies like driver assistance,
autonomous driving, driving within a smart city, and being conducted by an Intelligent
Transport System (ITS). Today there are two different approaches to standardizing
V2X communication: the first is based on WLAN technology and the second uses the
cellular infrastructure, including 5G technology; the technology decision—ITS-G5/
DSRC vs. Cellular-V2X—has not been concluded yet. A connectivity design concept
needs to account for such uncertainties.
Two bigger topics, Cyber Security and Privacy, are admittedly a challenge for a
connected car. Examples and analyses of cyber-attacks on cars can be found in the
recently published Keen report [8] and in the research paper from Computest [9].
However, this is a topic of its own and thus could not be considered in the course of
this paper.

6 The Concept Approach

Car manufacturing has been known for more than a hundred years by now. In 1910,
Henry Ford created the first mass production process for the Ford Model T, and over
time the engineering processes have matured. Automotive engineers became used to
working along the V-Model [10], which typically starts with a comprehensive
requirement specification, from which the system design is derived and further broken
down into component design and so on.
For automotive start-ups, where a car is conceptualized and designed from a business
idea, the engineering process starts significantly earlier than the requirement speci-
fication for the actual vehicle. Connected services usually require specifications of
which the actual vehicle is only a part (or sub-system). Customer and solution
engineers have to work in a phase where uncertainty is high and knowledge is low.
FEV and share2drive found the third Agile Manifesto principle, “customer col-
laboration over contract negotiation” [11], a very useful approach to cope with such
uncertainty. Like the Agile software development methods, which approach com-
plexity with an iterative process, we accepted that changes will happen in an early
stage of a production design, and we therefore tackled complex tasks in iterations, as
illustrated in Fig. 6.

Fig. 6. Approaching Uncertainties with Iterations

We formed teams where project and solution engineers work together with busi-
ness representatives from the customer. The first phase’s goal was understanding the
business ideas: we talked about Unique Selling Points (USPs), revenue streams, and
story boards [12]. The purpose was to create a common understanding of the product
and its users. The engineers learned what the customer envisions, and the customer
benefitted from the questions of the experts.
With the knowledge gained from the first iteration, managing further iterations
became significantly simpler. The use case definitions can be a diligent but routine
piece of work. As before, in this phase experts worked together with customer
representatives. The outcome was a number of descriptions of how the product will be
used, covering activities and data flows. A user can be anyone interacting
with the car. The goal of this phase was to generate a documented mutual
understanding of the functions and features of the final product. An example of a
template for such a use case description can be found in Fig. 7.
Next, we started creating a product feature list and prioritized it. Priorities can be
driven by various factors, for example the uniqueness of a feature; other features
are mandated by legislation. Priorities are influenced by the price of a feature, too, so
we had to start adding a price tag to features quite early in the process.
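As an illustration of this prioritization step, features could be ranked by a simple scoring rule that puts legally mandated features first and then trades uniqueness against price. The feature names, scores and weights below are purely hypothetical and not taken from the SVEN project.

```python
# Hypothetical sketch of the feature prioritization described above.
# Feature names, scores and prices are invented for illustration only.

features = {
    # name: (uniqueness 0-5, mandated_by_law, estimated_price_eur)
    "remote door unlock":   (3, False, 40),
    "eCall emergency call": (1, True,  80),
    "in-car media sharing": (4, False, 120),
}

def priority(item):
    """Mandated features sort first; otherwise trade uniqueness against price."""
    name, (uniqueness, mandated, price) = item
    return (not mandated, -uniqueness + price / 100)

ranked = [name for name, _ in sorted(features.items(), key=priority)]
print(ranked)  # mandated feature ranks first, then best uniqueness/price trade-off
```

In practice the weights would come from the workshops with the customer representatives; the point of the sketch is only that legislation, uniqueness and price can be combined into one sortable score.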
170 J. Kottig et al.

Fig. 7. Example for a Use Case description template in textual format.

Since we did not intend to develop all solutions ourselves, market research was
required to deliver input on pricing and potential solutions.
The three steps above helped to funnel business ideas to a list of most valuable
features for a first Minimum Viable Product (MVP) [13].
The final iteration, the “Function Decomposition”, was necessary because the
features need to be integrated into an overall architectural concept, spanning various
functional domains such as battery and powertrain design. The overall system design
requires a function- and component decomposition from each sub-system, so that it can
define the interfaces and relations of all components.
As said, the approach sketched so far is described for the sub-system “Connec-
tivity”. But connectivity is only one sub-system out of many. There are, for instance,
also ADAS features, a battery solution and a powertrain that need to integrate
smoothly into the overall product (Fig. 8). To maintain the system view from
the beginning, all sub-systems need to be integrated frequently. The whole process of
funneling to feature and component lists and integrating into the overall concept was
again executed in iterations, so that the learnings from each iteration could feed the
next one.

Fig. 8. Connectivity is only one sub-system out of many—all sub-systems need to integrate
smoothly into the overall product

7 Conclusion

In this paper we have outlined our approach of defining a “Connected Car” and have
introduced the business model for “SVEN”, a car dedicated to sharing. Connectivity
has been shown to be a key enabler for turning cars into mobile IT devices, equipped
with digital services similar to those on smartphones.
The engineering approach for such a novel vehicle design needs to be significantly
different from one applicable to a well-specified traditional vehicle, because the
early phases in a start-up business are dynamic and new ideas need the chance to find
their way into the concept. We have described our approach to funnel ideas into use
cases and into a prioritized feature list for the connectivity sub-system. For scaling the
approach to many sub-systems, we sketched how feature and component
decomposition enables the integration into the overall system: a Public Personal
Vehicle (PPV).

References
1. Comdex Keynote Speech: Information at your fingertips, Bill Gates (1995)
2. C-ITS Platform: Final Report (2016). https://ec.europa.eu/transport/sites/transport/files/
themes/its/doc/c-its-platform-final-report-january-2016.pdf
3. “5G V2X, The automotive use-cases for 5G”, Dino Flore, 5GAA Director General. http://
www.3gpp.org/ftp/Information/presentations/Presentations_2017/A4Conf010_Dino%
20Flore_5GAA_v1.pdf
4. Rebbeck, T., Stewart, J., Lacour, H.-A., Killeen, A. (Analysys Mason), McClure, D.,
Dunoyer, A. (SBD Automotive): Socio-Economic Benefits of Cellular V2X
(2017). http://5gaa.org/wp-content/uploads/2017/12/Final-report-for-5GAA-on-cellular-
V2X-socio-economic-benefits-051217_FINAL.pdf
5. Technical Paper, Nano: A Feeless Distributed Cryptocurrency Network, Colin LeMahieu
(2017). https://nano.org/en/whitepaper
6. The Case for Cellular V2X for Safety and Cooperative Driving: 5G Automotive Association
(2016). http://5gaa.org/wp-content/uploads/2017/10/5GAA-whitepaper-23-Nov-2016.pdf
7. “brand eins” economy magazine 04/2018 “Geht doch!”, Bernd Heinrichs, Executive VP &
Chief Digital Officer Automotive, Bosch (2018)
8. Experimental Security Assessment of BMW Cars: A Summary Report. Keen Security Lab
(2018)
9. Research Paper, The Connected Car—Ways to get unauthorized access and potential
implications, Computest (2018)
10. V-Model XT, Das deutsche Referenzmodell für Systementwicklungsprojekte: Verein zur
Weiterentwicklung des V-Modell XT e.V. (Weit e.V.), Version 2.2
11. Manifesto for Agile Software Development: http://agilemanifesto.org/
12. Agile Scenarios and Storyboards: Roman Pichler (2013). https://www.romanpichler.com/
blog/agile-scenarios-and-storyboards/
13. MVP—Minimum Viable Product: Frank Robinson, Syncdev (2016). http://www.syncdev.
com/minimum-viable-product/
Innovation Strategy
Trends and Challenges of the New
Mobility Society

Sakuto Goda

Nomura Research Institute, Ltd., Otemachi Financial City Grand Cube,
1-9-2 Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan
s-goda@nri.co.jp

Abstract. This article outlines market trends, customer needs and challenges
that the automotive industry will face in achieving electric, autonomous and
shared mobility. For policy makers and the automotive industry, the trend
towards electrification seems to be agreed among stakeholders; however, there
are still major challenges, for example shortages of electricity, batteries and
production equipment in some regions. The other topic is autonomous driving.
According to a worldwide consumer survey conducted by NRI, the acceptance and
needs of customers vary from society to society. The spread of shared mobility
also depends on the maturity of the taxi industry. While the coming transformation
will be significant and affect the global market, regional, cultural and social
issues need to be considered.

Keywords: Autonomous driving · Electrification · Car sharing

1 Electrification

This section outlines the market trends and challenges of electrification.

1.1 Market Development of xEVs and Growth Drivers


In 2016, more than 2 million xEVs (HEV, PHEV, and pure EV) were sold across the
globe (Fig. 1). The major growth drivers were the market recovery of HEVs after the
launch of the new Prius by Toyota and the significant growth of EV sales especially in
China, today's biggest EV market globally. In 2017, the Chinese government announced
that it is considering regulating or banning internal combustion engines.
At the same time, many European countries such as the Netherlands, Norway,
and France declared that they will ban cars with internal combustion engines (ICE) in
the future. Although the detailed scope of these regulations is often not yet specified,
the market share of xEVs will grow, driven by legislation.

1.2 Challenges
This section outlines the challenges related to the introduction of electric vehicles
across the value chain, focusing especially on the pure electric vehicle.

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 175–182, 2019.
https://doi.org/10.1007/978-3-319-99762-9_15
176 S. Goda

Fig. 1. Global xEV market development. Source: NRI

According to the survey, in the customers' perception EVs are considered an eco-
friendly but also a pricey vehicle option, while the driving mileage is seen as a factor of
minor importance for the consumer's decision to buy or not to buy an EV (see Fig. 2).

Fig. 2. The reasons why consumers want/do not want to buy an electric car. Source: NRI
consumer survey 2017

Although the cost of batteries can be reduced as production volumes increase,
there are foreseeable and critical issues across the value chain to
overcome for turning electrified vehicles into market reality.

1.2.1 Shortage of Battery Supply


A stable and reliable supply chain of batteries including production equipment and raw
materials is one of the biggest challenges for electrification.
If the projected xEV sales numbers published by the industry are realistic, the
manufacturing capacity for automotive batteries required in 2025 will be around eight
times as big as in 2015, namely around 480 GWh/year, which is equivalent to more
than 10 million pure electric vehicles (see Fig. 3). This means that the automotive
industry needs to build up, every year, the same capacity as the consumer electronics
industry has built since 1991, about 50 GWh. In other words, the optimistic market
penetration of xEVs requires tens of giga-factories every year.
Besides, battery production equipment is often manufactured by medium-sized
companies located in Asian countries. The required investment is at least as significant
a burden for those manufacturers as for auto makers and battery suppliers
(Fig. 3).
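The orders of magnitude above can be checked with a short back-of-envelope calculation. The implied per-vehicle pack size is inferred from the “480 GWh equals more than 10 million pure electric vehicles” relation stated in the text, not a separately sourced figure.

```python
# Back-of-envelope check of the battery-capacity figures cited above.
# The capacity numbers come from the text; the per-vehicle pack size is
# derived from the "480 GWh ~ 10 million EVs" statement.

capacity_2025_gwh = 480.0                    # projected annual demand in 2025
capacity_2015_gwh = capacity_2025_gwh / 8    # "eight times as big as in 2015"
ramp_up_years = 10                           # 2015 to 2025

# Capacity that must be added per year on average:
added_per_year_gwh = (capacity_2025_gwh - capacity_2015_gwh) / ramp_up_years
print(f"Average capacity build-up: {added_per_year_gwh:.0f} GWh/year")  # -> 42

# Implied battery size if 480 GWh corresponds to 10 million pure EVs:
implied_kwh_per_ev = capacity_2025_gwh * 1e6 / 10e6   # GWh -> kWh, per vehicle
print(f"Implied pack size: {implied_kwh_per_ev:.0f} kWh per EV")  # -> 48
```

The resulting 42 GWh of yearly build-up is indeed in the order of the roughly 50 GWh the consumer electronics industry accumulated since 1991.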

Fig. 3. Required battery capacity (unit: GWh). Source: [1]

1.2.2 Charging Time


The results of the consumer survey show that customers tend to expect more than 300
miles of driving range for electric vehicles. A simple method to extend the driving
mileage is to load more batteries onto the vehicle; however, there is a trade-off: the
charging time becomes longer if the car is equipped with a bigger battery capacity. This
would affect the convenience of pure electric vehicles, as their owners
would have to wait longer than the owners of internal combustion engine cars. As the
fastest charging stations have 350 kW of charging power, and an electric vehicle with 300
miles of driving range has around 75 kWh of battery capacity, ca. 13 min would be the
estimated quick-charging time for an EV with 300 miles of driving range, i.e. around
500 km (see Fig. 4). This is still longer than the waiting time for fueling a car with an
internal combustion engine at the gas station, which is approximately 5 min even for a
full tank. This might not be of concern for those who value the ecological aspects of the
electric vehicle; for the average consumer, however, it might be stressful.
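The ca. 13 min figure follows from simple arithmetic; a minimal sketch, assuming the charger sustains its full rated power for the whole session (real charging curves taper, so actual times are longer):

```python
# Minimal sketch of the quick-charging estimate above. It assumes the
# charger delivers its full rated power for the whole session; real
# charging curves taper, so actual charging times are longer.

battery_kwh = 75    # pack size for ~300 miles (~500 km) of range
charger_kw = 350    # power of today's fastest charging stations

charge_time_min = battery_kwh / charger_kw * 60
print(f"Estimated full charge: {charge_time_min:.1f} min")  # -> 12.9 min
```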

1.2.3 Energy Infrastructure


The more EVs are on the road, the more electricity is required. One can
estimate how many EVs can be introduced for a given share of electricity dedicated to
electric vehicles (see Fig. 5).

Fig. 4. Consumers’ need for driving mileage of pure electric vehicles. Source: NRI consumer
survey 2017

Fig. 5. Energy share of EV vs share of EV in total vehicle in operation. Source: NRI



The curves differ from country to country; however, in a world where 100% of the
vehicles on the road are pure electric vehicles, more than twice the current power
generation capacity would be needed. For the 20% EV penetration case, with
4.4 million EVs in operation, balancing electricity supply and demand seems more
realistic.

1.3 Conclusion
Many stakeholders, governments and industry alike, now declare that they are moving
toward electrification; however, the challenges concern not only car manufacturing
and the cost of batteries [2], but the entire eco-system and supply chain. A holistic
approach and significant effort as well as investment across the value chain are required.

2 Autonomous Driving and Shared Mobility

This section introduces some findings from a consumer survey on autonomous driving
and shared mobility.

2.1 Who Wants Self-driving Cars?


According to the survey, more than 70% of customers in China are willing to buy an
autonomous driving car (automation above level 3 according to SAE), around 30% in
the United States and in Germany, and around half in Japan.
In Japan, non-car owners are more willing to buy autonomous vehicles than car
owners (more than 60%); surprisingly, for the older generation, above 65 years old, the
figure rises to more than 80%. These customers would also be willing to pay more:
more than half of the elderly people answered that they would pay more than 300,000
JPY for autonomous driving, and one tenth of them would pay more than 800,000 JPY
(approx. 130 JPY equal 1 EUR).
This shows the value of the freedom to move for those who do not want to, cannot,
or can no longer drive cars. At the same time, there are many applications and meanings
of autonomous cars, such as efficient urban mobility or cost reduction.

2.2 What Do You Do Inside an Autonomous Vehicle?


According to the survey, most people do not expect anything special from autonomous
driving, although tendencies differ among regions. Customers can hardly imagine what
they would do inside self-driving cars, and many consumers would even hold the
steering wheel; providers of autonomous driving cars should educate customers by
proposing what passengers could do while the car is in self-driving mode, in order to
deliver new added value (Fig. 6).

Fig. 6. Ranking of what consumers would want to do inside an autonomous driving car.
Source: NRI consumer survey 2017

2.3 How Powerful Will Shared Mobility Services Be?


The acceptance of shared mobility services, especially ride sharing, depends on how
satisfied consumers are with the incumbent mobility services. In general, the need
for new mobility services is comparably low in countries like Japan and Germany,
where the existing taxi industry is well organized and regulated (Fig. 7).

Fig. 7. Acceptance of shared mobility services. Source: NRI consumer survey 2017

2.4 Conclusion


Autonomous driving and shared mobility are discussed as “disruptive” technologies or
services; however, the different markets show diverse customer needs, partly due to
cultural as well as societal differences. As these trends require significant changes in
how cars are used and in ways of living, these subtle differences should be taken into
account.

3 Conclusion

As seen in the findings from the consumer survey and analysis, the entire journey
toward the new generation of the mobility society requires extensive effort to solve the
issues across the value chain and the society.

4 Overview of the Survey Methodology



References
1. Kazama, T., Suzuki, K., Zhang, D., Yoshihashi, S.: Electrification and its impact on the
supporting industries. Knowl. Creat. Integr. 25, 14 (2017)
2. Goda, S., Fujita, A., Hirano, Y., Suzuki, K.: Development of automotive battery for new
generation vehicles. NRI Knowl. Insight 34, 2–3 (2014)
Roadmap for Accelerated Innovation in Level
4/5 Connected and Automated Driving

Jörg Dubbert, Benjamin Wilsch, Carolin Zachäus, and Gereon Meyer

Department Future Technologies and Europe,
VDI/VDE Innovation + Technik GmbH, Steinplatz 1, 10623 Berlin, Germany
{joerg.dubbert,benjamin.wilsch,carolin.zachaeus,
gereon.meyer}@vdivde-it.de

Abstract. This chapter summarizes the findings of the EU-funded Coordination
and Support Action “Safe and Connected Automation in Road
Transport” (SCOUT) that has established a comprehensive and structured
roadmap to describe innovation paths towards an accelerated development and
deployment of high degree automated driving, i.e. particularly at SAE levels 4
and 5. With the involvement of a multitude of experts, the project assessed a
number of use cases and development trends, identified societal goals and
challenges, and formulated a future vision for connected and automated driving
(CAD). It also analysed the state of play in technologies and business models,
and it identified gaps and risks. Hurdles for achieving the vision have been
recognized, actions to overcome those hurdles have been found at technical,
societal, economical, human factors and legal layers, and interlinks between
those actions have been described. Finally, opportunities to leapfrog hurdles for
innovation in level 4/5 automated driving by a coordinated interplay of actions
have been described for five specific use cases: automated on-demand shuttle,
truck platooning, valet parking, delivery robot, and traffic-jam chauffeur.

Keywords: Connected car · Vehicle automation · Autonomous driving ·
Roadmap · Level 4/5 automation · Automated shuttle · Truck platooning ·
Automated valet parking · Traffic jam chauffeur · SCOUT · CARTRE ·
EPoSS · ERTRAC · European Commission

1 Introduction

Field operational tests and pilot projects with vehicles capable of fully automated
driving or self-driving functionalities have started in cities and regions all around
Europe and the world. In particular, autonomous on-demand shuttles and robot taxis
are popular among policy makers and city planners, both in the U.S. [1] and in Europe
[2]. The reasons are manifold: Such vehicles may provide a cost-efficient opportunity to
fulfill obligations in public transport, particularly for the last mile, they use road space
more efficiently, and thus reduce the number of cars on the road. Furthermore, they
show the way towards an IT-enabled future of shared transportation of people, goods,

© Springer Nature Switzerland AG 2019


J. Dubbert et al. (Eds.): AMAA 2018, LNMOB, pp. 183–194, 2019.
https://doi.org/10.1007/978-3-319-99762-9_16
184 J. Dubbert et al.

and probably equipment and services. Therefore, it can be expected that such vehicles
will have a high disruptive innovation potential in mobility [3].
Equipped with advanced systems for environment perception and decision making,
automated vehicles conventionally follow a reactive bottom-up safety paradigm. Like
humans, such systems may fail. There are opportunities for making an automated car
close to 100% safe by a more proactive, communication-based approach [4]: One could
equip the infrastructure with sensors that “look around the corner” and tell the car what
they see, and one could further advance the artificial intelligence of the control system
to better understand particular traffic scenes, e.g. whether a pedestrian standing at the
curb will cross a road or not. One could also aim for a top-down safety concept, limit
the use of automated vehicles to fenced lanes, or apply control from a central traffic
manager. Whether and when a specific solution will be feasible depends more on
economics and regulations than on the technical concept.
The purpose of this paper is to report on the findings concerning the interplay of
technical and non-technical factors of innovation in level 4/5 automated driving made
by the Coordination and Support Action entitled “Safe and connected automation in
Road Transport” (SCOUT) that the European Commission funded between July 2016
and June 2018 [5]. The project’s objectives comprised:
• To identify pathways for an accelerated proliferation of safe and connected high-
degree automated driving (SAE 3-5).
• To take into account user needs and expectations, technical and non-technical gaps
and risks, viable business models as well as international cooperation and
competition.
• To help the automotive, telecommunication and digital sectors join
forces and agree on a common roadmap.
The consortium, which was coordinated by VDI/VDE-IT, included Renault, FCA,
BMW, Bosch, NXP, Telecom Italia, NEC, RWTH, Fraunhofer, CLEPA, and Sernauto.
A number of public expert workshops with external stakeholders representing supply
and demand side of technology development, and particularly individual user groups
were organized, and steps towards a comprehensive roadmap were taken. For the
creation of the roadmap, a story mapping process was applied that started from
analyzing the innovation context, then defined a future vision, analyzed the state of the art,
and finally recognized opportunities and hurdles as well as ways to close the “gap”
between state of the art and vision with concrete actions. It can be expected that the
SCOUT project, by its structured and comprehensive approach, will add cohesion and
insight to the diverse landscape of activities for building a common European Strategy on
CAD [6].

2 Future Vision on CAD

User-centric approaches have recently proven to be particularly effective for developing
visions and roadmaps on the future of transportation [7]. Implications of CAD are
specific for each use case and business model, and for all partners in the value creation
specific for each use case and business model, and for all partners in the value creation
process. In the discussions with societal stakeholders representing different user
Roadmap for Accelerated Innovation in Level 4/5 185

perspectives, the SCOUT project found a number of high, but common expectations,
though: zero fatalities, no traffic jams, productive travel time, social inclusion, reduced
operation costs, and vanishing borders between the transport modes. Consequently,
when asked about their future vision on CAD, users sketched an ambitious picture.
From their point of view, the basic idea of CAD is strongly connected with the concept
of seamless mobility of people and goods on demand. Ideally, the implementation of
such a concept should ensure that no compromises are made on safety, solutions are
effective and affordable, and save or free up time for the user. Asked about specific
solutions that would embody the key elements of the vision, users referred to a great
number of advanced ideas, including robot taxi, universally designed vehicles and
services, logistic hubs as well as connected traffic systems and more. Putting those
potential solutions on a simplified map of application scopes, starting from urban via
suburban, rural and interurban environments towards the international area, the great
diversity of use cases becomes evident. Actually, there are four areas of particular
interest, namely mobility as a service, passenger transport, goods delivery and
infrastructure. It turns out that in level 4 and 5 automated driving, the essence of the
common future vision lies in the different use cases. The technical challenges are
very similar, though, and may be solved by smart systems that combine sensing with
connectivity and intelligent decision-making [8]. However, due to a complex interplay
of technical and non-technical issues, advanced automated or self-driving cars have not
yet reached full maturity, oftentimes miss a viable business case, and are not yet
allowed on public roads. Hence, the process of roadmap development could be expected
to be particularly troublesome.

3 State of the Art

The analysis of the state of the art for high level connected and automated driving
carried out by the SCOUT project was structured with reference to a five layers model:
Besides the technical layer as a basis for connected and automated driving functions,
further layers describe the relevant non-technical issues, i.e. human factors, economics,
legal, and societal aspects. The layers are strongly interlinked and they each are cov-
ering three interrelated topics, the driver (or passenger), the vehicle and the
environment.
The in-depth analysis primarily focussed on the technical, the legal and the
economic layers, as reported elsewhere [9], though all layers were covered by the
project’s activities. Regarding the state of the art of CAD on the technical layer, the
SCOUT project distinguished three major functional domains, environment perception
(“sense”), decision making (“think”), and control (“act”). It was concluded that tech-
nical solutions have been found for most issues already, even though some significant
challenges remain, e.g. sensing under adverse weather and lighting conditions, decision
making fully acknowledging intentions of people on the road, and control with fail
operational capabilities. Moreover, the availability of digital infrastructure for con-
nectivity and communication turned out to be critical for making CAD a safe product,

even though discussions whether it would be a necessary rather than just a sufficient
condition, particularly in complex urban environments, are on-going. It was also
concluded that awareness of cyber security issues of CAD exists, as for level 4/5 all control
functions are safety critical; concepts for a long-term protection are missing, though.
For the state of the art of CAD in the legal layer, it was concluded that the Vienna
Convention, which most European Countries have ratified and turned into national law,
due to an amendment that entered into force in early 2016 [10], now is covering level 3
automation, but not yet levels 4 and 5. National regulations may grant exceptions,
however, e.g. for testing.
On the state of the art of CAD at the economic layer, a number of use cases of CAD
were analysed regarding value proposition, value creation partners, and monetization
potential, e.g. valet parking, truck platooning and automated on-demand shuttles.

4 Comprehensive Roadmap Approach

Aiming to map out the paths towards the users’ ambitious future vision on CAD while
acknowledging the state of the art, the SCOUT project took a structured and compre-
hensive story mapping approach of roadmap development: The five-layers model that
already was found to be appropriate for a description of the state of the art, was applied
to build an action plan on level 4/5 automated driving. At two public workshops with the
involvement of dedicated experts for all the five layers (technical, social, economic,
human factors, legal), gaps between state of the art and vision were recognized and
actions were identified for each layer, linked to actions in other layers, and aligned on
the time scale. While the outcome was a close-to-complete list of research, innovation
and framework needs that complemented one another, it lacked coherence completely.
On the contrary, the links that the experts indicated between the actions revealed that
technical and non-technical challenges are highly related to each other, with many
actions requiring the outcome of others before they can start. The many
interdependencies led to locked-in situations, creating a kind of Gordian knot. This indicates that
the development and deployment of level 4/5 CAD may be heavily delayed if it is not
comprehensively coordinated. This is a typical feature of complex innovation processes
that comprise a number of technical and nontechnical dimensions. The SCOUT project
consortium therefore concluded that for delivering useful indications, the roadmap
approach needed to be distinct not just for the five layers but for specific use cases, and
focused on well-defined milestones on the way towards the vision. Supposedly, such use
case-specific and targeted roadmaps could help to anticipate roadblocks and highlight
agile shortcuts, enabling an accelerated innovation process.

5 Use Case Specific Roadmaps

In order to properly address the complexity of the comprehensive innovation planning
process for level 4/5 connected and automated driving, the SCOUT project thus
developed a simplified and use case specific roadmap template covering (a) a story map
with hurdles and opportunities on the way from state of the art to future vision,
(b) goals in terms of milestones on the timeline towards the vision, and (c) a plan of
time-sequenced actions in the five layers, back-cast from one of the milestones. It is
assumed that the actions in the roadmap trigger each other, e.g. by an invention,
customer demand, business model, user needs, product design, norm or regulation. As
this helps to anticipate time sinks and risks for delays in the innovation process,
opportunities for taking agile shortcuts between the layers should be incorporated into
the design of the actions, e.g. demonstrations, sandbox approaches, co-creation
sessions, and living labs [6].
Taking into consideration the expert inputs on gaps and necessary actions gathered
at the public project workshops, the template has been used to establish roadmaps for
five different use cases and specific milestones of level 4/5 CAD, namely:
• Automated on-demand shuttle (Fig. 1)
• Truck platooning (Fig. 2)
• Automated valet parking (Fig. 3)
• Delivery robot (Fig. 4)
• Traffic jam chauffeur (Fig. 5)
These roadmaps have been validated at an additional workshop with experts for all
five layers, and were presented at the Automated Vehicles Symposium 2018 in San
Francisco, CA (USA) [11].

Fig. 1. Roadmap for the automated on-demand shuttles use case.



Fig. 2. Roadmap for the truck platooning use case.



Fig. 3. Roadmap for the (automated) valet parking use case.



Fig. 4. Roadmap for the delivery robot use case.



Fig. 5. Roadmap for the traffic jam chauffeur use case.



6 Conclusions and Outlook

The SCOUT project succeeded in solving the Gordian knot of locked-in interdependencies
between required actions that emerged when attempting to describe the innovation path
towards level 4/5 connected and automated driving in terms of a comprehensive
roadmap covering technical, social, economic, human factors and legal aspects. For this,
a clear distinction of use cases and a focus on milestones were key. The roadmaps on
automated on-demand shuttles, truck platooning, delivery robots, valet parking, and
traffic jam chauffeur resulting from the SCOUT project are highly relevant in view of the
European Commission's ambition to become a world leader in connected and automated
driving, as stated in a strategy communication that was recently launched with the 3rd
mobility package [12]. According to that strategy, e.g. low-speed self-driving urban
shuttles and delivery vehicles may be available on European streets from 2020 on,
though further development of those technologies will take yet another decade. Even
though the SCOUT roadmap is not able to be more specific on the actual time line, it
points out the necessary actions on the five layers of the plan, and highlights opportu-
nities for accelerated innovation. Thereby, it will be an important input to the on-going
process of building an implementation plan of the Strategic Transport Research and
Innovation Agenda (STRIA) on Connected and Automated Driving that the European
Commission has launched. The methodology and the results of the project may be
applied to related topics in the near future, e.g. on assessing the potential synergies of
electrification and automation at technology and application levels, and on describing
the options of technology transfer from the 2-dimensional road transport domain to the
3-dimensional world of taxi and delivery drones.

Acknowledgements. The authors are grateful for fruitful cooperation with the contractual
partners of the Coordination and Support Action “Safe and Connected Automation of Road
Transport” (SCOUT), i.e. Luisa Andreone and Leandro D’Orazio (CRF), Franz Geyer (BMW),
Yves Page (Renault), Roland Galbas and Andi Winterboer (Bosch), Steven von Bargen (NXP),
Giovanna Larini (TIM), Roberto Baldessari and Francesco Alesiani (NEC Europe), Devid Will
and Adrian Zlocki (RWTH), Heiko Hahnenwald and Thilo Bein (Fraunhofer LBF) and Beatrice
Tomassini and Alessandro Coda (CLEPA). Important inputs were provided by Jochen Langheim,
Benjamin von Bodungen, Wolfgang Gruel, Suzanne Hoadley, Stella Nikolaou, Natasha Merat,
Alizee Stappers, Klemen Kozelj, Wolfgang Schulz as well as members of the CARTRE project
and EPoSS and ERTRAC. Roadmap designs were created by Juliane Lenz from Berlin.
The SCOUT project has received funding from the EU’s Horizon 2020 programme under grant
agreement No 713843.

References
1. Smart Cities Challenge. U.S. Department of Transportation (2017)
2. Meyer, G.: Policy Trends from the Proposals Under the Topic of Urban Mobility. Urban
Innovative Actions, Lille (2017)
3. Meyer, G., Shaheen, S. (eds.): Disrupting Mobility—Impacts of Sharing Economy and
Innovative Transportation on Cities. Springer, Cham (2017)
4. Safer Roads with Automated Vehicles. International Transport Forum, OECD (2018)

5. www.connectedautomateddriving.eu/about-us/scout/
6. Meyer, G.: European roadmaps, programs, and projects for innovation in connected and
automated road transport. In: Meyer, G., Beiker, S. (eds.) Road Vehicle Automation 5.
Springer, Cham (2018)
7. Müller, B., Meyer, G. (eds.): Towards User-Centric Transport in Europe. Springer, Cham
(2018)
8. Dokic, J., Müller, B., Meyer, G. (eds.): European Roadmap Smart Systems for Automated
Driving. European Technology Platform on Smart Systems Integration (EPoSS) (2015)
9. Will, D., Eckstein, L., van Bargen, S., et al.: State of the art analysis for connected and
automated driving within the SCOUT project. ITS World Congress (2017)
10. UNECE Paves the Way for Automated Driving by Updating UN International Convention.
Press Release, UNECE, 23 March 2016
11. Zachäus, C., Wilsch, B., Dubbert, J., Meyer, G.: A comprehensive roadmap for level 4/5
connected and automated driving in Europe. Poster. In: Automated Vehicles Symposium
(2018)
12. On the Road to Automated Mobility: An EU Strategy for Mobility of the Future. European
Commission, COM(2018) 283 (2018)
Author Index

A
Ahiad, Samia, 75
Alessandrini, Adriano, 69
Andert, Franz, 31
Aydemir, Eren, 75

B
Bercier, Emmanuel, 3
Bernardin, Frédéric, 3
Bourne, Emily, 90
Brémond, Roland, 3
Brunet, Johann, 3

C
Cassignol, Olivier, 3
Clement, Philipp, 75
Correa, Alejandro, 31
Cristiano, Alessia, 139

D
Dalmasso, Davide, 139
Danescu, Radu, 16
Derse, Cihangir, 75
Dorri, Ali, 111
Druml, Norbert, 75
Dubbert, Jörg, 183

E
Elrofai, Hala, 123
Enhuber, Stephan, 43

F
Fellmann, Michael, 111
Festl, Andreas, 111

G
Goda, Sakuto, 175
Gopi, Sajin, 75
Gromala, Przemyslaw, 56
Groppo, Riccardo, 75, 139

H
Hager, Martin, 56

I
Innerwinkler, Pamela, 75
Irzmański, Paweł, 153
Itu, Razvan, 16

J
John, Reiner, 139

K
Kahrimanovic, Elvir, 139
Kaiser, Christian, 111
Kanhere, Salil, 111
Karci, Ahu Ece Hartavi, 75
Khan, Saifullah, 31
Kinav, Emrah, 75
Kottig, Jörg, 162
Kras, Bartłomiej, 153
Krune, Edgar, 123
Kwiatkowski, Maciej, 153

L
Lampic, Gorazd, 139
Leduc, Patrick, 3
Leinmueller, Tim, 90


M
Macher, Georg, 75
Macke, Dirk, 162
Manuzzi, Marco, 97
Metzner, Steffen, 75
Meyer, Gereon, 183
Mittal, Prachi, 90

N
Nahler, Caterina, 75
Nicolas, Adrien, 3
Nikolaou, Stella, 97

O
Otto, Alexander, 139
Ozan, Berzah, 75

P
Pech, Timo, 43
Perelli, Paolo, 139
Pielen, Michael, 162
Pinchon, Nicolas, 3

R
Rzepka, Sven, 56

S
Sahimäki, Sami, 75
Sanna, Alberto, 139
Schindler, Julian, 31
Schmidt, Gerald, 43
Sorniotti, Aldo, 139
Steger, Marco, 111
Stettinger, Georg, 75
Stocker, Alexander, 111
Symeonidis, Ioannis, 97

T
Tarel, Jean-Philippe, 3
Tarkiainen, Mikko, 75
Troglia, Micaela, 75
Trojaniello, Diana, 139

W
Wandtner, Bernhard, 43
Wanielik, Gerd, 43
Watzenig, Daniel, 75
Wijbenga, Anton, 31
Wilsch, Benjamin, 123, 183
Wojke, Nicolai, 31
Wunderle, Bernhard, 56

Z
Zachäus, Carolin, 183
Zanovello, Luca, 97
Zaya, Johan, 75
