Jörg Dubbert · Beate Müller
Gereon Meyer Editors
Advanced
Microsystems
for Automotive
Applications 2018
Smart Systems for Clean,
Safe and Shared Road Vehicles
Lecture Notes in Mobility
Series editor
Gereon Meyer, Berlin, Germany
More information about this series at http://www.springer.com/series/11573
Editors
Jörg Dubbert
VDI/VDE Innovation + Technik GmbH
Berlin, Germany

Gereon Meyer
VDI/VDE Innovation + Technik GmbH
Berlin, Germany
Beate Müller
VDI/VDE Innovation + Technik GmbH
Berlin, Germany
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Self-driving and electric on-demand taxis and shuttle buses are widely considered
as the optimal means of future urban transport. They seem to provide solutions for
the most pressing current issues in the mobility sector, such as road fatalities,
climate change, and pollution, as well as land use for transport. While those
vehicles are first being tested in controlled environments around the world today,
they may rapidly reach maturity due to the disruptive character of the underlying
innovations: According to the roadmaps of the European Technology Platforms in
the automotive domain, advancements in smart sensors, control and communication
systems will enable the implementation of highly connected and automated
driving (i.e., SAE levels 3 and above) on the motorway and in urban environments
in the 2020-25 time frame. This coincides with the projected beginning of a broad
market introduction of electric vehicles: Due to fast progress in battery and
powertrain system performance, in combination with economies of scale, a market
share of up to ten percent has been predicted for such vehicles by 2020, quickly
rising to 40 percent by 2025.
The two technical fields of automation and electrification are highly interlinked
due to similarities in (a) the electronics and data architecture of control, (b) the
cooperation in energy matters, and (c) the systemic character of the operating
environment. In an ideal world, a self-driving car, e.g., would no longer require any
passive safety systems, as it would be safe per se. Consequently, such a vehicle
would be much lighter and, if electrified, could be much more energy efficient, thus
providing a longer driving range. It should be noted, however, that due to its higher
level of convenience, a self-driving car may be used more intensively. This and the
increase in computing power and sensor equipment could lead to the reverse effect
of using more energy, counteracting the advantages of electric vehicles in terms of
energy savings and climate protection. A joint study by a number of National
Laboratories in the USA recently found that these two opposite effects counterbalance
each other: While the energy consumption per km may decline to one-third, the
overall energy consumption may increase by a factor of three.
Funding Authority
European Commission
Supporting Organisations
Organisers
Steering Committee
Conference Chair
Smart Sensors
All-Weather Vision for Automotive Safety: Which Spectral Band? . . . . 3
Nicolas Pinchon, Olivier Cassignol, Adrien Nicolas,
Frédéric Bernardin, Patrick Leduc, Jean-Philippe Tarel,
Roland Brémond, Emmanuel Bercier, and Johann Brunet
Machine Learning Based Automatic Extrinsic Calibration
of an Onboard Monocular Camera for Driving Assistance
Applications on Smart Mobile Devices . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Razvan Itu and Radu Danescu
Electric Vehicles
Light Electric Vehicle Design Tailored to Human Needs . . . . . . . . . . . . 139
Diana Trojaniello, Alessia Cristiano, Alexander Otto, Elvir Kahrimanovic,
Aldo Sorniotti, Davide Dalmasso, Gorazd Lampic, Paolo Perelli,
Alberto Sanna, Reiner John, and Riccardo Groppo
DCCS-ECU an Innovative Control and Energy Management
Module for EV and HEV Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Bartłomiej Kras, Paweł Irzmański, and Maciej Kwiatkowski
Connectivity Design Considerations for a Dedicated Shared
Mobility Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Jörg Kottig, Dirk Macke, and Michael Pielen
Innovation Strategy
Trends and Challenges of the New Mobility Society . . . . . . . . . . . . . . . 175
Sakuto Goda
Roadmap for Accelerated Innovation in Level 4/5 Connected
and Automated Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Jörg Dubbert, Benjamin Wilsch, Carolin Zachäus, and Gereon Meyer
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Smart Sensors
All-Weather Vision for Automotive Safety:
Which Spectral Band?
Abstract. The AWARE (All Weather All Roads Enhanced vision) French
publicly funded project aims at the development of a low-cost sensor that fits
automotive and aviation requirements and enables vision in all poor visibility
conditions, such as night, fog, rain and snow.
In order to identify the technologies providing the best all-weather vision, we
evaluated the relevance of four different spectral bands: Visible RGB, Near-
Infrared (NIR), Short-Wave Infrared (SWIR) and Long-Wave Infrared (LWIR).
Two test campaigns were carried out, in natural outdoor conditions and in an
artificial fog tunnel, with four cameras recording simultaneously.
This paper presents the detailed results of this comparative study, focusing on
pedestrians, vehicles, traffic signs and lanes detection.
1 Introduction
In the automotive industry, New Car Assessment Programs (NCAP) are increasingly
pushing car manufacturers to improve the performance of Advanced Driver Assistance
Systems (ADAS), and especially autonomous emergency braking for vulnerable road
users (VRU). For instance, the 2018 Euro NCAP roadmap moves towards the detection
of pedestrians and pedal cyclists in both day and night conditions.
This trend matches accidentology figures, like those provided by the French Road
Safety Observatory [1] (Table 1).
In the longer term, after automated parking and highway driving, all weather and
city driving will be the main technical challenge in the automated driving roadmap.
Current ADAS sensors such as visible cameras or lidars meet the functional
requirements of VRU and obstacle detection in normal conditions (day or night).
However, these technologies show limited performance in adverse weather conditions
such as fog or rain.
The automotive industry is thus facing the new challenge of sensing the vehicle
environment in all conditions, and especially in poor visibility conditions, such as
night, fog, rain and snow.
This topic has been addressed in the framework of the AWARE French publicly
funded project, which aims at the development of a sensor enabling vision in all poor
visibility conditions. This paper presents an experimental comparative study of four
different spectral bands: Visible RGB, Near-Infrared (NIR), Short-Wave Infrared
(SWIR) and Long-Wave Infrared (LWIR). Sensors and field tests are described in
Sects. 2 and 3. Experimental results are detailed in Sect. 4, focusing on pedestrians,
vehicles, traffic signs and lanes detection.
2 Sensors
In this project, we focus only on camera technologies, not on distance measurement
systems such as lidars or radars. It is well known, however, that the two technologies
are complementary, and that both are needed to provide the redundancy required to
improve the reliability and accuracy of the detection system [2].
Four cameras were tested during the project. Table 2 shows their
characteristics.
The visible RGB CMOS camera is used here as a reference for the test.
The extended NIR camera uses monochrome CMOS photodiodes with a cut-off
wavelength close to 1 µm. It detects the visible and NIR light reflected by the scene.
It thus requires illumination by the sun, the moon, night glow, or an illuminator
mounted on the vehicle.
The extended SWIR camera is based on InGaAs III-V material and extends from a
wavelength of 0.6 µm, red to the human eye, to 1.7 µm in the SWIR band. The SWIR
spectral band is typically used for active (reflective) vision in very dark conditions,
with good contrast, as SWIR light is generally reflected more strongly than visible light.
The LWIR sensor is an array of microbolometers. It detects thermal radiation in the
spectral band extending from 8 µm to 14 µm. Any object emits radiation that depends
on its temperature. For a human or an animal at ambient temperature, the
maximum of emission corresponds to a wavelength close to 10 µm. LWIR is used for
the detection of a temperature contrast and does not require an illuminator.
Mid-Wave Infrared (MWIR) was not included in the study for reasons of cost and
capacity, due to the cooling system required for the detectors.
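The 10 µm peak-emission figure quoted above follows from Wien's displacement law; a minimal sketch (the skin-temperature value used here is an illustrative assumption, not from the paper):

```python
# Wien's displacement law: lambda_max = b / T
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength of maximum thermal emission, in micrometres."""
    return WIEN_B / temperature_k * 1e6

# Human skin surface at roughly 305 K (~32 degC)
print(round(peak_wavelength_um(305.0), 1))  # ~9.5, inside the 8-14 um LWIR band
```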
3 Field Tests
4 Experimental Results
In this section, we describe the detection and recognition range performances that were
measured in this study. It is important to keep in mind that these results reflect not only
the intrinsic characteristics of the spectral bands but also the capabilities of the chosen
cameras. The cameras were selected to be representative of the current state of the art.
To prevent any detection-algorithm artefact, a visual analysis was performed
by two different human observers. As expected, the exact detection range values
differed from one observer to the other, but the relative values were consistent. In all
cases, brightness and contrast were carefully tuned in order to optimize the ranges.
For each camera channel in the four spectral bands, a video database was created by
remotely recording videos of relevant scenes for each listed scenario. Fig. 3 provides
a sample of the video database (outdoor campaign):
Fig. 3. Example of snapshots recorded by LWIR (top left), Visible (top right), SWIR (bottom
left) and NIR (bottom right) cameras
Fig. 4. Pedestrian detection test setup into Cerema fog tunnel, LWIR (top left), Visible (top
right), SWIR (bottom left) and NIR (bottom right) cameras
dark clothes. As expected, Visible, NIR and SWIR detection performances were better
for the subject wearing the high visibility jacket (even though this improvement is less
pronounced for thicker fog). For this study, the case of the subject in dark clothes was
deemed more relevant.
The following table gives the fog density, expressed as standard visibility ranges, at
which the pedestrian becomes visible. A reduced visibility range indicates a successful
pedestrian detection in a thicker fog, and hence a better capability to see through fog.
Cases with glare are not included (Table 4).
Table 4. Fog thickness for pedestrian detection at 25 m with the different cameras
Camera Fog density for pedestrian detection
Visible RGB Moderate (visibility range = 47 ± 10 m)
Extended NIR High (visibility range = 28 ± 7 m)
Extended SWIR High (visibility range = 25 ± 3 m)
LWIR Extreme (visibility range = 15 ± 4 m)
Error ranges mostly reflect the dispersion between the different scenarios used in
the study.
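The standard visibility ranges in Table 4 can be converted into a fog extinction coefficient using Koschmieder's law with the usual 5% contrast threshold (a common meteorological convention, not stated in the paper); a sketch using the Table 4 midpoints:

```python
import math

# Koschmieder's law with the usual 5% contrast threshold:
# V = -ln(0.05) / beta, hence beta = -ln(0.05) / V (roughly 3 / V)
CONTRAST_THRESHOLD = 0.05

def extinction_coefficient(visibility_m: float) -> float:
    """Fog extinction coefficient (1/m) from meteorological visibility."""
    return -math.log(CONTRAST_THRESHOLD) / visibility_m

# Midpoint visibility ranges from Table 4
for camera, vis in [("Visible RGB", 47), ("NIR", 28), ("SWIR", 25), ("LWIR", 15)]:
    print(f"{camera}: beta = {extinction_coefficient(vis):.3f} 1/m")
```

A lower visibility range thus corresponds to a denser fog (larger beta) that the camera can still see through.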
Conclusions are the following:
• The LWIR camera has a better capability to see through fog than the NIR and
SWIR ones. The visible camera has the lowest fog piercing capability.
• The LWIR camera is the only one that allows pedestrian detection in full darkness.
• The LWIR camera also proved more resilient to glare caused by facing headlamps
in the fog. Other cameras sometimes missed a pedestrian because she or he was
hidden by the glare (Fig. 5).
Fig. 5. Example of images recorded in the fog tunnel with the four different cameras
Fig. 7. Reference of range based on site map analysis and T1 lanes type of road marking
SWIR would be on the order of the recognition ranges. In LWIR, detection relied on
the observation of hot vehicle parts: the wheels, the motor or the exhaust system.
In some foggy instances, recognition proved difficult or even impossible in VIS,
NIR or SWIR because the vehicle remained entirely hidden by the glare of its own
headlamps. When this happened, the vehicles were not taken into account in the
average ranges given in Fig. 9. Images illustrating this phenomenon are given in
Fig. 10. SWIR here presents more glare than the other bands, but this is only due to
the camera settings; it is not an intrinsic characteristic of the spectral band.
The visual observation of the videos also confirmed the well-known fact that the
exploitation of movement by the human visual system greatly increases detection
capabilities: the success of the detection task is much higher when performed while
Fig. 9. Vehicle recognition ranges (except cases with vehicle hidden by glare)
Fig. 10. Images showing the glare effect in the VIS, NIR and SWIR spectral bands
watching a film than when performed by observing still images taken from the very
same film. This is because the human visual cortex implements advanced
spatiotemporal denoising, and it should inspire detection software developers.
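This spatiotemporal intuition can be mimicked in software with a simple temporal filter; a toy sketch (not the AWARE processing chain) showing how a pixel-wise median over consecutive frames suppresses uncorrelated noise:

```python
import numpy as np

def temporal_median(frames):
    """Pixel-wise median over a short stack of consecutive frames.

    Static scene content is preserved while uncorrelated per-frame noise
    (e.g. sensor noise amplified in fog) is suppressed."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Toy example: a constant 'scene' corrupted by independent noise per frame
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(9)]
denoised = temporal_median(frames)
# The 9-frame median is much closer to the true scene than any single frame
print(np.abs(denoised - scene).mean() < np.abs(frames[0] - scene).mean())
```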
In some recordings, wild animals are visible on the side of the road in the LWIR
band. These animals are visible in none of the other spectral bands.
Fig. 11. Road marking observation in the LWIR spectral band during day sunny condition (left)
and day rainy condition (right)
Fig. 12. Images of a traffic sign acquired in the different spectral bands. The first line shows a
sunny day; the second line is from scenario 7 (fog class 3 with snow)
Table 6. Comparison of the performance of ADAS functions using Visible, NIR, SWIR or LWIR
camera technologies (night fog, using headlights)

ADAS function (night fog, headlights)      VIS   NIR   SWIR   LWIR
Pedestrian, bicycle, animal detection      +     ++    ++     ++++
Vehicle shape recognition                  -     -     --     ++
Vehicle lights detection                   +     ++    +++    --
Traffic sign recognition                   +     ++    -      --
Road marking detection                     +     ++    ++     -
Table 7. Comparison of the performance of ADAS functions using Visible, NIR, SWIR or LWIR
camera technologies (day fog, using headlights)

ADAS function (day fog, headlights)        VIS   NIR   SWIR   LWIR
Pedestrian, bicycle, animal detection      +     ++    ++     ++++
Vehicle shape recognition                  +     ++    ++     ++
Traffic sign recognition                   +     ++    -      --
Road marking detection                     +     ++    ++     -
5 Conclusion
only camera would certainly have provided much better performances than extended
spectral bands.
• Visible RGB extended to NIR (or Red-Clear sensors) combined with LWIR provide
the best spectral bands combination to improve ADAS performances of detection
such as vehicle, pedestrian, bicycle, animals or road marking, and recognition such
as traffic signs.
• Caution is required when using LED headlight technology to provide additional
light: pulsed LED technology could reduce the detection reliability of systems
based on Visible, NIR and SWIR cameras.
Acknowledgement. The authors acknowledge the contribution of their colleagues to this work:
P. Morange, J-L. Bicard and all the pedestrians from CEREMA, A. Picard from Sagem and
B. Yahiaoui from Nexyad.
Glossary
References
1. French Road Safety Observatory (ONISR): Les accidents corporels de la circulation 2014 –
Recueil de données brutes (2015)
2. Premebida, C., Ludwig, O., Nunes, U.: Lidar and vision-based pedestrian detection system.
J. Field Robot. 26(9), 696–711 (2009)
Machine Learning Based Automatic Extrinsic
Calibration of an Onboard Monocular Camera
for Driving Assistance Applications on Smart
Mobile Devices
Abstract. Smart mobile devices can be easily transformed into driving assis-
tance tools or traffic monitoring systems. These devices are placed behind the
windshield such that the camera is facing forward to observe the traffic. For the
visual information to be useful, the camera must be calibrated, and a proper
calibration is laborious and difficult to perform for the average user. In this
paper, we propose a calibration technique that requires no input from the user
and is able to estimate the extrinsic parameters of the camera: yaw, pitch and roll
angles and the height of the camera above the road. The calibration algorithm is
based on detecting vehicles using CNN based classifiers, and using statistics
about their size and position in the image to estimate the extrinsic parameters via
Extended Kalman filters.
1 Introduction
Smart mobile devices are omnipresent. Because they come equipped with cameras of
ever-increasing resolution and frame rate, with growing processing power, and with
additional sensors and connection capabilities, they can easily be transformed into
driving assistance or traffic monitoring/analysis tools. The driver can easily mount
such a device behind the windshield, so that the
camera faces forward to observe the traffic. However, in order to relate the features seen
by the camera with the 3D features of the real world, calibration must be performed,
and this is a step that most users would rather skip.
Automatic camera calibration is crucial for obtaining robust and accurate computer
vision based driver assistance systems. Correlation between the 3D world and the 2D
image scene is required in systems that sense and measure the surrounding environ-
ment. Monocular vision applications are easier to use and to deploy, and they are more
cost effective than stereovision based systems. The downside of using a single camera
is the missing depth information that stereo systems have. The monocular systems must
rely on constraints imposed on the environment geometry, such as flat road, standard
object sizes, and so on, but they still require calibration [1]. Traditionally, the cali-
bration process is performed in controlled environments and in laboratories, usually by
measuring known objects manually placed in the observed scene, and accurately
measured. If we address the scenario of the user simply mounting the phone behind the
windshield and driving off, these constraints are impossible to be satisfied, and
therefore automatic calibration must be performed.
Camera calibration represents an active research area in the context of driving
assistance. An important step towards achieving automatic on-board calibration is to
determine the point where the parallel lines in the 3D world scene intersect, also called
vanishing point (VP). This point can be used to determine the extrinsic parameters of
the camera system. Similar work has been proposed to automatically estimate the
camera orientation using VP since the 1990’s [2].
Conventional methods for determining the vanishing point usually take advantage of
existing geometric or texture features in the scene, such as lane lines or sidewalk
lines. These methods extract the relevant features, then apply a voting scheme, and
finally extract VP candidates. These approaches make use of the image space, but
methods based on the Gaussian sphere have also been presented [3]. Gaussian unit
sphere methods map the parallel line vectors (2D image data) onto a Gaussian sphere,
and process the resulting great circles. In [4] the authors present
a solution that uses RANSAC for orthogonal vanishing point detection. Our previous
research [5] has further simplified the classic approaches by using a convolutional
neural network (CNN) that takes an image as input and predicts the vanishing point
x and y coordinates as output. Using our own dataset we have found that this method
works well and with high accuracy.
Monocular calibration methods may also make use of additional sensors mounted
on the ego-vehicle, such as laser or radar based sensors. LIDAR-camera calibration has
been more widely used in the research community as well as in production. The
existing work generally follows the same steps, based on the correlation between 3D
LIDAR points and features or edges in the images from the monocular camera.
The LIDAR frame is aligned to camera images by using contour matching. The edges
from the 3D LIDAR frame are projected into the image. Calibration is performed by
adjusting the extrinsic parameters until these 3D points projected into the image are
aligned with the 2D contours detected from the original camera image. Similar
approaches have been presented in [6, 7]. However, the usage of external sensors
represents an additional cost factor, and reduces the mobility and portability of the monocular
an additional cost factor, and reduces the mobility and portability of the monocular
vision system. Also, the LIDAR sensor requires a calibration of its own, with its own
methodology and constraints.
Most of the automatic calibration techniques use features painted on the road, such
as lane markings. However, there may be scenarios where lane markings are not
available, or they are poorly drawn, dirty, or they simply cannot be seen due to the
overwhelming presence of obstacles. This paper proposes a technique for extrinsic
parameter calibration that does not require the presence and detection of lane markings,
but instead works on obstacles alone. The main idea is to use a Convolutional Neural
Network (CNN) vehicle detector that does not require calibration, and will generate a
bounding box of the vehicle in the image. Using the detected bounding boxes, and
geometrical constraints, the camera’s height above the ground plane, and the three
rotation angles, can be calibrated.
The proposed calibration technique does not require any action from the user,
besides simply driving the car in normal traffic. The only constraint is that the system
has to observe enough objects, so that individual detection errors can cancel each other
out, and a robust estimation of the camera parameters can be achieved.
The main features used for camera calibration are the bounding boxes of the obstacles in
the image plane. In order to extract these features, we need a detector that is fast enough
to work in real time on the mobile phone’s computing resources (Fig. 1), and which
does not require calibration, meaning that it will detect the obstacle no matter its size,
orientation, position in the image, etc.
Fig. 1. The mobile device placed behind the windshield, detecting vehicles
cars only, as they are the features used for calibration. Training is done using gradient
descent and two loss functions: one for detection and another for classification.
Smoothed L1 loss is used for localization and the weighted sigmoid loss is used for
classification. For our setup, we have used input images that are resized to 300 × 300
pixels. The TensorFlow Object Detection API handles negative examples using online
hard-negative mining.
On the mobile device, the network generates, in real time, bounding boxes around
the detected vehicles (Fig. 2). Furthermore, we can take advantage of the official
TensorFlow Android tracking algorithm for the bounding boxes. The TF tracker uses
FAST features [13], tracked with the pyramidal Lucas-Kanade optical flow method.
The median movement of features is analyzed at each frame and the bounding box
tracker will drop the current bounding box when the cross-correlation with the original
detection drops below a fixed threshold. The current bounding box can also be updated
if the new detection has a large overlap (a fixed threshold).
Using tracking has pros and cons. On the one hand, the results are more stable; on the
other hand, tracking may generate false data, due to the inertia of the update. In our calibration
method, we can work with or without tracking. The obstacle bounding boxes do not
have to be perfect, or complete. The calibration methodology only requires a statisti-
cally representative set of bounding boxes, for different distances, and for multiple
vehicle sizes.
3 Camera Calibration
The camera calibration algorithm will estimate the height of the camera above the
ground, and the three rotation angles, pitch, yaw and roll.
3.1 Extended Kalman Filter for Camera Height and Pitch Estimation
The first two parameters to be estimated are the camera height above the ground, h, and
the pitch angle θ. The intrinsic parameters of the camera are assumed to be known: the
principal point is assumed to be in the center of the image, and the focal length of the
camera is read from the mobile device's camera API. Thus, the intrinsic camera matrix
can be written as:

A = \begin{pmatrix} f & 0 & W/2 \\ 0 & f & H/2 \\ 0 & 0 & 1 \end{pmatrix}   (1)
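In code, the matrix of Eq. (1) can be assembled directly from the focal length and image size; a sketch (the numeric values are placeholders):

```python
import numpy as np

def intrinsic_matrix(f_px: float, width: int, height: int) -> np.ndarray:
    """Intrinsic matrix A of Eq. (1): principal point at the image centre,
    focal length given in pixels."""
    return np.array([[f_px, 0.0, width / 2.0],
                     [0.0, f_px, height / 2.0],
                     [0.0, 0.0, 1.0]])

# Placeholder values: a 1920x1080 frame, focal length 1400 px
A = intrinsic_matrix(f_px=1400.0, width=1920, height=1080)
print(A[0, 2], A[1, 2])  # 960.0 540.0
```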
The parameters H and W are the image's height and width, in pixels, and the focal
length is also expressed in pixels. The camera's position in the world coordinate system
is determined by the height alone:

T_{CW} = \begin{pmatrix} 0 \\ h \\ 0 \end{pmatrix}   (2)
The rotation matrix between the world and the camera is, at this point, assumed to
depend on the pitch angle alone:
R_{WC} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}   (3)
The projection matrix that will project a 3D point (X, Y, Z) in the world coordinate
system to an image point (u, v) is computed as:
TWC is the translation vector between the world and the camera coordinate system:
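Equations (4) and (5) are not reproduced in this extraction; under the conventions of Eqs. (1)-(3), the projection can be sketched as P = A [R_WC | T_WC]. The sign conventions and all numeric values below are assumptions for illustration:

```python
import numpy as np

# Assumed intrinsics and extrinsics (illustrative values only)
f, W, H = 1400.0, 1920, 1080
h, theta = 1.25, np.radians(2.0)

A = np.array([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1.0]])    # Eq. (1)
R_wc = np.array([[1, 0, 0],
                 [0, np.cos(theta), -np.sin(theta)],
                 [0, np.sin(theta),  np.cos(theta)]])        # Eq. (3)
T_wc = np.array([[0.0], [h], [0.0]])                         # Eq. (2)

P = A @ np.hstack([R_wc, T_wc])   # 3x4 projection matrix

def project(X, Y, Z):
    """Map a 3D world point to homogeneous-normalised pixel coordinates."""
    u, v, w = P @ np.array([X, Y, Z, 1.0])
    return u / w, v / w

# A ground point 20 m ahead on the camera axis
u, v = project(0.0, 0.0, 20.0)
print(u, v)  # u at the image centre column; v below the horizon row
```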
Given any two image rows, v1 and v2, and a typical car width L, we can predict the
width of the car in the image plane on these two lines, assuming that the vehicle is on
the road and is viewed from behind, in a quasi-central position in the image. The lines
and the car width are assumed to be fixed, and the size of the car width in the image
space will depend only on two parameters, h and θ, which form the parameter vector X,
to be estimated:

X = \begin{pmatrix} h \\ \theta \end{pmatrix}   (6)
The algorithm for obtaining the car widths for the given rows v1 and v2 is the following:
1. Assuming that the vehicle is on the road, in a central position, the sides of the
vehicle are given by the points (−L/2, 0, Z) and (L/2, 0, Z), L being the width of the
car and Z being the distance from the camera. By taking two distances, Z1 and Z2,
four 3D points are generated – 2 on the left side, and 2 on the right side of the
Z axis.
2. The four points are projected into the image plane using the projection matrix
P. The resulted projection points are (uL,1, vL,1), (uL,2, vL,2), (uR,1, vR,1) and (uR,2,
vR,2).
3. The two lines formed by the points on the left side, and the points on the right side,
are intersected with the horizontal lines defined by the given row coordinates v1 and
v2. The intersection points will have the column coordinates uL,1, uL,2, uR,1 and uR,2.
4. The two widths are computed as w1 = uR,1-uL,1 and w2 = uR,2-uL,2.
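The four steps above can be sketched as follows, reusing the projection conventions assumed for Eqs. (1)-(3); the focal length, image size, row values and the distances Z1, Z2 are illustrative assumptions:

```python
import numpy as np

def predicted_widths(h, theta, v1, v2, L=1.75, f=1400.0, W=1920, H=1080,
                     Z1=10.0, Z2=40.0):
    """Steps 1-4 above: predicted car widths (pixels) on image rows v1, v2
    as a function of camera height h (m) and pitch angle theta (rad)."""
    A = np.array([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1.0]])
    R = np.array([[1, 0, 0],
                  [0, np.cos(theta), -np.sin(theta)],
                  [0, np.sin(theta),  np.cos(theta)]])
    P = A @ np.hstack([R, np.array([[0.0], [h], [0.0]])])

    def proj(X, Z):
        u, v, w = P @ np.array([X, 0.0, Z, 1.0])
        return u / w, v / w

    widths = []
    for v_row in (v1, v2):
        cols = []
        for side in (-L / 2, L / 2):        # steps 1-2: project both sides
            (uA, vA), (uB, vB) = proj(side, Z1), proj(side, Z2)
            # step 3: intersect the side line with the horizontal row v_row
            cols.append(uA + (uB - uA) * (v_row - vA) / (vB - vA))
        widths.append(cols[1] - cols[0])    # step 4: width on this row
    return widths

w1, w2 = predicted_widths(h=1.25, theta=np.radians(2.0), v1=700, v2=600)
print(w1, w2)  # the lower (nearer) row v1 yields the larger width
```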
We can define the width projection function g as:

g_{v_1, v_2, L}(X) = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}   (7)
We can denote the output of the function g as Z, the measurement vector. Now the
problem can be stated as an estimation problem: having the measurement vector Z, and
the measurement function g, one needs to estimate the unknown parameter vector X. For
the estimation of this vector, we can use the equations of the Extended Kalman Filter.
We will perform multiple iterations, starting from an initial guess for X, X0, with an
initial diagonal covariance matrix P0, which will be large enough to cover all possible
values for pitch and height.
For each iteration k, the following steps will be executed:
1. Prediction of the measurement vector:
2. Computation of the measurement matrix (Jacobian of g). This step will be achieved
by numerical differentiation, by varying the height and pitch angle by small
amounts around the values predicted by the current Xk.
M_k = \begin{pmatrix} \partial w_1/\partial h & \partial w_1/\partial\theta \\ \partial w_2/\partial h & \partial w_2/\partial\theta \end{pmatrix}   (9)
We will perform 10 iterations. Usually the values for height and pitch converge
after about 5 iterations. The initial values, for X0, are a height of 1 m, and a pitch of 0°.
The standard deviation for height (in P0) is 400 mm, and for pitch is 3°. The standard
deviation for the width pixel error (for the matrix R) is 5 pixels.
The Extended Kalman Filter has an additional step, the updating of the state
covariance matrix P. We have found, through experiments, that we can use the same
P = P0 for all iterations, without affecting the convergence or the final results.
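The iteration loop can be sketched as follows; the measurement function g_toy below is a stand-in for the real width-projection function of Eq. (7), chosen only to make the example self-contained, and all its coefficients are illustrative assumptions:

```python
import numpy as np

def ekf_calibrate(g, z_meas, x0, P0, R_noise, iters=10, eps=1e-4):
    """Iterated EKF-style refinement of x = (h, theta) from the measured
    car widths z = (w1, w2), following Sect. 3.1 (P held at P0 throughout,
    as the authors report)."""
    x = np.asarray(x0, dtype=float)
    z_meas = np.asarray(z_meas, dtype=float)
    for _ in range(iters):
        z_pred = g(x)                       # 1. predict the measurement
        M = np.zeros((2, 2))                # 2. numerical Jacobian of g
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = eps
            M[:, j] = (g(x + dx) - g(x - dx)) / (2 * eps)
        S = M @ P0 @ M.T + R_noise          # 3. innovation covariance,
        K = P0 @ M.T @ np.linalg.inv(S)     #    Kalman gain, state update
        x = x + K @ (z_meas - z_pred)
    return x

# Toy measurement model standing in for the real width-projection g:
# widths grow as 1/h and shift with pitch (illustrative coefficients only)
def g_toy(x):
    h, theta = x
    return np.array([300.0 / h + 800.0 * theta, 150.0 / h + 200.0 * theta])

true_x = np.array([1.25, np.radians(1.0)])
z = g_toy(true_x)
x_hat = ekf_calibrate(g_toy, z, x0=[1.0, 0.0],
                      P0=np.diag([0.4 ** 2, np.radians(3.0) ** 2]),
                      R_noise=np.diag([5.0 ** 2, 5.0 ** 2]))
print(x_hat)  # approaches the true height (1.25 m) and pitch (~0.0175 rad)
```

The initial covariance P0 uses the paper's standard deviations (400 mm for height, 3° for pitch), and R uses the 5-pixel width error.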
Fig. 3. Vehicle detection results: car widths with respect to image rows
Now we have everything we need for running the EKF algorithm. In Fig. 4 the
results of 10 iterations are shown, for height estimation (in millimeters) and for pitch
estimation (in degrees). The camera (phone camera) height above the road was mea-
sured at 1250 mm, but the pitch angle was not measured. However, the accuracy of
pitch angle estimation can be validated by its effect in the projection matrix.
The effects of the new computed parameters are seen in Fig. 5. Using the projection
matrix generated from the new extrinsic parameters using Eqs. (1–5), we can project a
distant point in the image space and find its row coordinate, which must match the
horizon line (Fig. 5, left). We can also use the new projection matrix to generate a
bird's-eye view of the scene (the Inverse Perspective Mapping, IPM, image), as seen in
Fig. 5, right. The IPM image has the pixel coordinates proportional to the 3D coordinates X
and Z (lateral and longitudinal distances), with a scale factor of 50 mm for 1 pixel. If
the height and pitch parameters are correct, the IPM image should show the lane
markings to be parallel, and the pixel distance between them should correspond to a
valid lane width in the 3D world. The width of the lane, in pixels, in the IPM image, is
65, which corresponds to 3.250 m. The lane in that area was measured at 3.200 m.
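An IPM image of the kind shown in Fig. 5 can be generated by projecting each ground-plane cell (X, 0, Z) through the estimated matrix and sampling the source image at that location; a sketch under the projection conventions assumed earlier, with the text's 50 mm-per-pixel scale (the blank input image stands in for a camera frame):

```python
import numpy as np

def ipm(image, P, m_per_px=0.05, out_w=100, out_h=200, z_min=2.0):
    """Bird's-eye (IPM) view: each output pixel is a ground-plane cell
    (X, 0, Z), filled by sampling the source image at the cell's
    projection through P. One output pixel = 50 mm, as in the text."""
    h_img, w_img = image.shape[:2]
    out = np.zeros((out_h, out_w), dtype=image.dtype)
    for row in range(out_h):
        Z = z_min + (out_h - 1 - row) * m_per_px   # far rows at the top
        for col in range(out_w):
            X = (col - out_w / 2) * m_per_px
            u, v, w = P @ np.array([X, 0.0, Z, 1.0])
            u, v = int(round(u / w)), int(round(v / w))
            if 0 <= u < w_img and 0 <= v < h_img:
                out[row, col] = image[v, u]
    return out

# Assumed projection matrix (same illustrative conventions as earlier)
f, W, H, h, theta = 1400.0, 1920, 1080, 1.25, np.radians(2.0)
A = np.array([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1.0]])
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])
P = A @ np.hstack([R, np.array([[0.0], [h], [0.0]])])

bev = ipm(np.zeros((H, W)), P)
print(bev.shape)  # (200, 100)
```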
Fig. 5. Vanishing row (horizon line) resulted from the pitch angle (left), and the Inverse
Perspective Mapping image generated using the estimated projection matrix (right)
From Fig. 5, we can see that the estimation of the pitch angle, and the camera
height above the road, corresponds to the real camera parameters. However, we can
also see that the lane in Fig. 5 does not match our direction of travel, which it should,
as we are driving in a straight line. This means that the camera has a yaw angle with
respect to the car reference frame, and this angle should be estimated.
\tan\Psi = \frac{u_0 - W/2}{f}   (12)
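Eq. (12) translates directly into code; a sketch (the vanishing-point column and intrinsics are illustrative values):

```python
import math

def yaw_from_vanishing_point(u0: float, width: int, f_px: float) -> float:
    """Yaw angle (radians) from the vanishing-point column, Eq. (12):
    tan(psi) = (u0 - W/2) / f."""
    return math.atan2(u0 - width / 2.0, f_px)

# Illustrative values: VP found 40 px to the right of the image centre
psi = yaw_from_vanishing_point(u0=1000.0, width=1920, f_px=1400.0)
print(round(math.degrees(psi), 2))  # ~1.64 degrees
```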
Fig. 6. Finding a vanishing point candidate from the trajectory of an obstacle (left), and finding
the vanishing point as a median of the candidates (right)
Having the yaw angle, we can re-compute the rotation matrix as:
R_{WC} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \cos\Psi & 0 & \sin\Psi \\ 0 & 1 & 0 \\ -\sin\Psi & 0 & \cos\Psi \end{pmatrix}   (13)
Using the new rotation matrix, we can re-compute the projection matrix. Using this
projection matrix to generate the Inverse Perspective image, we obtain the bird's-eye
view image with the road aligned with our car axis (Fig. 7).
Fig. 7. Comparison between the IPM image with the yaw angle assumed to be zero (left), and
with the yaw angle estimated correctly (right)
For detecting the roll angle, we’ll use the object detection rectangles, the same input
that we have used for all the calibration steps in this work. If there is no roll, the objects
are expected to be in an upright position, which, taking into consideration that the
objects of interest are cars, means a lot of their edges are either horizontal or vertical.
This means that the orientation of the gradient must be mostly at 90° (for horizontal
edges), or at 0° (for the vertical edges). In order to assess the orientation of the object,
we’ll compute the Histogram of Oriented Gradients (HOG), with 360 histogram bins,
for all objects in the sequence that are near the center of the image (Fig. 8).
Fig. 8. Histogram of oriented gradients. Blue: the HOG of an upright scene. Red: the HOG of
the same scene rotated by 5°
It can be seen that the histogram has strong maxima at 90° and 270°, and weak
maxima at 0°, 180° and 360°. This means that the horizontal edges of the vehicles are
much stronger, and their angle's dispersion is lower, while the vertical edges are
weaker, with a much more variable angle (some vehicle sides are rounded, many
edges of rear windows are diagonal, etc.). If a roll angle is present, the strong peaks will
shift by the amount of the roll angle. Therefore, in order to detect the roll angle, we
will detect the shift of the histogram peaks from their default positions (90°, 270°,
and so on).
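The peak-shift idea can be sketched as below; the synthetic gradient data and the nearest-default-peak search are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def orientation_histogram(gx, gy, bins=360):
    """360-bin histogram of gradient orientations (degrees), magnitude-weighted."""
    ang = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 360.0), weights=mag)
    return hist

def roll_from_histogram(hist):
    """Roll angle (degrees) as the shift of the strongest histogram peak
    from the nearest default peak position (0, 90, 180 or 270 degrees)."""
    peak = int(np.argmax(hist))
    shifts = [((peak - d + 180) % 360) - 180 for d in (0, 90, 180, 270)]
    return float(min(shifts, key=abs))

# Synthetic gradients of horizontal edges in a scene rolled by 5 degrees:
# the dominant gradient orientation moves from 90 to about 95 degrees.
theta = np.radians(95.5)  # bin center, to avoid bin-edge effects
gx, gy = np.full(100, np.cos(theta)), np.full(100, np.sin(theta))
print(roll_from_histogram(orientation_histogram(gx, gy)))  # -> 5.0
```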
The calibration algorithm was tested on several sequences acquired by driving in Cluj-
Napoca. The obstacle detection works in real time, at about 10 frames per second on a
Samsung Galaxy S8+ smartphone. The system stores the detection data in a file, and
the calibration algorithm is called by the user. The best results are obtained for longer
sequences, obtained by driving for more than 5 min. The pitch and yaw angles are
accurately estimated (errors less than 0.1°) for most images. The roll angle was tested
only as simulation (the images are rotated by an artificial angle), because we cannot
extract a ground truth value for this angle, and its effect on the IPM image is not always
clear (the road may itself be tilted more than the camera). The system estimated the
artificial roll angle with an error of less than 0.2°.
The camera height estimation is more sensitive, as sometimes we can have errors of
more than 10 cm. The cause of these errors is simply that there are too few vehicles
detected, and they lack diversity. This can happen in the following scenarios:
– A sequence may contain only one vehicle, in front of us, that we follow. If this
vehicle is unusually narrow or wide and does not fit the 1.75 m average width, the
height estimation will fail.
– Most of the detected vehicles are on the side of the road, and they will be detected
as larger boxes in the image, including their side view.
The solution for overcoming these problems is to simply collect more data, with
diverse obstacles in front of us.
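Under the additional simplifying assumptions of a pinhole camera, a flat road and zero pitch (all names and numbers below are illustrative, not the paper's exact formulation), the height estimate based on the 1.75 m average vehicle width can be sketched as:

```python
# Minimal sketch: monocular camera height from one detected vehicle box,
# assuming a pinhole camera, flat road, zero pitch, and the 1.75 m average
# vehicle width mentioned in the text.
AVG_VEHICLE_WIDTH_M = 1.75

def camera_height(box_width_px, box_bottom_row, focal_px, principal_row):
    # Distance from similar triangles: Z = f * W_real / w_px
    z = focal_px * AVG_VEHICLE_WIDTH_M / box_width_px
    # The box bottom touches the road plane, so h = Z * (v - v0) / f
    return z * (box_bottom_row - principal_row) / focal_px

# A 175-px-wide box whose bottom row is 150 px below the principal point,
# seen with f = 1000 px, implies Z = 10 m and h = 1.5 m.
print(camera_height(175.0, 650.0, 1000.0, 500.0))  # -> 1.5
```

This also makes the failure mode above visible: a vehicle 10% narrower than the assumed average biases Z, and hence h, by the same 10%.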
We have presented an algorithm which, based on the results of vehicle detection from a
CNN classifier, is able to estimate the extrinsic parameters of a monocular camera with
respect to a vehicle-bound reference frame. The system works accurately when suffi-
cient and diverse data is available, which means longer sequences, in diverse traffic
situations. As the quality and diversity of the vehicle detection results directly impact
the calibration results, future work will focus on better data generation (better
classification, better generation of the bounding rectangles), better filtering of the
resulting rectangles, and possible fusion with other visual cues.
Acknowledgment. This work was supported by a grant of the Ministry of Research and Innovation,
CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2016-0440, within PNCDI III.
References
1. Danescu, R., Itu, R., Petrovai, A.: Generic dynamic environment perception using smart
mobile devices. Sensors 16, 1–21 (2016). Article no. 1721
2. Caprile, B., Torre, V.: Using vanishing points for camera calibration. Int. J. Comput. Vis. 4,
127–139 (1990)
3. Magee, M., Aggarwal, J.: Determining vanishing points from perspective images. Comput.
Vis. Graph. Image Process. 26, 256–267 (1984)
4. Bazin, J., Pollefeys, M.: 3-line RANSAC for orthogonal vanishing point detection. In: 2012
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4282–
4287 (2012)
5. Itu, R., Borza, D., Danescu, R.: Automatic extrinsic camera parameters calibration using
Convolutional Neural Networks. In: 2017 IEEE 13th International Conference on Intelligent
Computer Communication and Processing (ICCP 2017), pp. 273–278 (2017)
6. Bileschi, S.: Fully automatic calibration of LiDAR and video streams from a vehicle. In:
IEEE International Conference on Computer Vision Workshops (ICCV), pp. 1457–1464
(2009)
28 R. Itu and R. Danescu
7. Levinson, J., Thrun, S.: Automatic online calibration of cameras and lasers. In: Robotics
Science Systems Conference, pp. 1–8 (2013)
8. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems.
arXiv:1603.04467 (2016) (preprint)
9. Howard, A., et al.: MobileNets: efficient convolutional neural networks for mobile vision
applications. arXiv:1704.04861 (2017) (preprint)
10. Lin, T., et al.: Microsoft COCO: common objects in context. In: European Conference on
Computer Vision (ECCV), pp. 740–755 (2014)
11. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? In: Computer
Vision and Pattern Recognition Conference (CVPR), pp. 3354–3361 (2012)
12. Udacity Vehicle Dataset: https://github.com/udacity/self-driving-car/tree/master/annotations
13. Rosten, E., Porter, R., Drummond, T.: Faster and better: a machine learning approach to
corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 32, 105–119 (2010)
14. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with
applications to image analysis and automated cartography. Commun. ACM 24, 381–395
(1981)
Driver Assistance and Vehicle
Automation
Towards Collaborative Perception
for Automated Vehicles
in Heterogeneous Traffic
Abstract. In the near future, Automated Vehicles (AVs) will be part of the
vehicular traffic on the roads. Normally, automation levels will be granted on
the road based on the different road situations, but challenging situations will
still exist that AVs will not be able to handle safely and efficiently. AVs driving
at a high automation level may step down to a lower automation level and
hand over partial or full control to the driver when the automation system
reaches its functional system limits or encounters unexpected situations. This
paper briefly explains the H2020 TransAID project, which covers the transition
phases between different levels of automation. It reviews related work and
introduces the concept used to investigate automation level changes. Furthermore, the
collective sensor data processing architecture used for the demonstrators and the
selected use cases are presented.
1 Background
behaviour of manually driven vehicles with its uncertainties (i.e. backwards compati-
bility [1]), current research is also focusing on traffic management procedures [6, 7],
cooperative driving [8] and artificial intelligence [9]. This includes the use of com-
munication among the vehicles (V2V) and between vehicles and infrastructure (V2I).
Many organizations are trying to enable Cooperative Intelligent Transport
Systems (C-ITSs) on their major roads, albeit mostly in pilot trials, as explained in [10].
There, the roadside units are typically equipped with communication and
interaction facilities based on the European CAR 2 CAR ITS-G5 standard [11]. There
are various overarching projects, such as C-ITS Corridor, InterCor, Compass4D,
Talking Traffic, and the C-Roads Platform, whose members signed a Memorandum of
Understanding (MoU) for closer collaboration between the automotive industry and road
infrastructure providers/managers. This will, in turn, facilitate the uptake of the so-
called Day 1 and Day 1.5 services: the former are typical hazardous location
notifications, and the latter contain more specific mobility-related information [12]. Within
cooperative automation, collective perception is one of the challenging tasks. This
service will be performed by Connected Autonomous Vehicles (CAVs), Connected
Vehicles (CVs) and Road-Side Units (RSUs) to enhance the situational awareness of
the driving environment by sharing information about perceived objects collected
from their cameras and sensors (e.g. radar or lidar sensors).
All objects that are not present in a vehicle's own perception range, such as
non-connected vehicles, other road participants and obstacles, are forwarded by the
CAVs and CVs, which thereby extends the perception boundary. The concept of
sharing collective perception information is studied in different projects and scientific
papers, which primarily focus on the domain of object sensing and on filtering
techniques such as filtering methods [13–16] or credibility maps [17–19]. Recently, the
ETSI standards for Intelligent Transport Systems (ITS) have also been working on
standardising collective perception information. Publications such as [17, 20] introduce the
so-called Collective Perception Message (CPM), its fusion architecture and the network
impact. The results show that CPM messages have a higher transmission latency than
the existing Cooperative Awareness Message (CAM) [21]. To avoid this latency, it has been
suggested to adopt an event-triggered message for the CPM [22], which is close to
standardization.
2 Introduction to TransAID
As the introduction of automated vehicles becomes feasible, even in urban areas, it will
be necessary to investigate their impacts on traffic safety and efficiency. This is
particularly true during the early stages of market introduction, when automated vehicles
of all SAE levels (as discussed in [23]), connected vehicles able to
communicate via V2X, and conventional vehicles will share the same roads with varying
penetration rates. There will be areas and situations on the roads where high automation
can be granted, and others where it is not allowed or not possible due to missing sensor
inputs, high-complexity situations, etc. In these areas, many automated vehicles will
change their level of automation. We refer to these areas as Transition Areas, as shown
in Fig. 1. If a transition fails, a so-called Minimum Risk Maneuver (MRM, e.g. soft
braking until stop) is initiated; however, this has to be avoided due to its negative
impact on traffic flow.
Fig. 1. Areas on the road with Transitions of Control when automation is difficult, not possible,
or restricted. This includes both increase and decrease of automation level.
The EC-funded project TransAID [24] develops and demonstrates traffic man-
agement procedures and protocols to enable smooth coexistence of automated, con-
nected, and conventional vehicles, especially at Transition Areas. A hierarchical
approach as depicted in Fig. 2 is followed where control actions are implemented at
different layers including centralized traffic management, infrastructure, and vehicles.
Fig. 2. Hierarchical traffic management in TransAID. The infrastructure will integrate the
acquired information at the Traffic Management System (TMS). The TMS will generate
progression plans for the vehicles which are taken over by the infrastructure and communicated
to the vehicles, either by I2V communication or (in case of non-equipped vehicles) by e.g.
variable message signs.
this will be the focus of this paper, measures to detect and inform conventional vehicles
are also addressed. The most promising solutions are then implemented as real world
prototypes and demonstrated under real urban conditions. Finally, guidelines for
advanced infrastructure assisted driving are presented. These guidelines also include a
roadmap defining activities and needed upgrades of road infrastructure in the upcoming
fifteen years in order to guarantee a smooth coexistence of conventional, connected,
and automated vehicles.
Within TransAID, collective perception includes the CVs/CAVs and RSUs inside the
collective perception service loop to enhance the detection capabilities. This is
beneficial since moving vehicles may have limitations on sensing environmental
information due to their own sensor range, sensor mounting positions and various other
physical limitations. In this section we describe the sensor setup of our experimental
vehicle platform and a camera-based RSU which we use during the experimental
evaluation.
In addition to field testing with real communication and other traffic, the architecture
allows the integration of virtual vehicles for tests within virtual or augmented reality.
During those tests, common traffic simulation software [25, 26], which is also used in
driving simulator experiments, is providing realistic behavior of virtual vehicles. These
vehicles are of course not seen by any sensor on the real vehicle, and therefore need to
be added as input to the sensor data fusion.
Fig. 4. Road Side Unit infrastructure with sensors and communication modules
For the field tests, we will use real RSUs with variable message signs during the
project. Each of the signs has an individual power supply and a road side unit offering
ITS-G5 communication. The signs are shown in Fig. 5(a); a similar pole is going to be
equipped with a hemispheric camera such as the Samsung PNM-9020V. To make sensor
data fusion possible, a high performance computing server is installed to run the latest
state-of-the-art detection and tracking chain as explained in [27, 28] and shown in
Fig. 5(b).
36 S. Khan et al.
Fig. 5. (a) Mobile variable message signs equipped with a road side unit and the corresponding
antennas, (b) Data fusion output for a virtual perspective of the hemispheric camera
4 Vehicular Communication
Fig. 6. Communication concept: the road side infrastructure (RSI) provides collective perception, ITS-G5 (V2I, I2V, V2V), road authority and TMC/TCC connections, measures, road sensors, services, smartphone services and a VMS panel, and interacts with CAVs, AVs and CVs
The different arrows represent different types of communication. The solid arrows
indicate direct communication considering ITS-G5. It is based on an extension of the
ETSI ITS standards to transmit vehicle and road advisory related information. It
supports the definition and the execution of traffic management policies. The dotted
blue arrows represent conventional signaling measures such as Variable Message Sign
(VMS) panels and possible new measures to reach AVs. The dotted green arrows are
more exclusive to TransAID and/or automated driving developments. Those arrows
represent measures to convey information from AVs to other vehicles such as Legacy
Vehicles (LVs), for example, light indicators on the back of the vehicle.
To exchange traffic information via V2V and V2I, selected communication
messages are intended to be used in TransAID, such as CAM [21], CPM [22],
MAP [30], IVI [31], etc. Each message is
served according to its functionality. For example, Cooperative Awareness Messages
(CAMs) [21] are distributed within the ITS-G5 network and are capable of sharing the
surrounding information. This information includes the presence, position and basic status
of the neighboring ITS stations that are reachable within a single-hop distance. All
participating ITS stations within the V2X network have the means to generate and share their
state vector information (time, position, direction, etc.) within their neighborhood region.
Upon reception, reasonable efforts can be taken to evaluate the relevance of the
messages and information, to support different ITS applications to act accordingly. For
example, by comparing the status of the originating ITS station with its own status, a
receiving ITS station is able to estimate the collision risk with the originating station
and, if necessary, may inform the vehicle's driver or any available vehicle automation.
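The collision-risk comparison of two exchanged state vectors can be illustrated with a constant-velocity closest-approach computation; this is a generic sketch, not the ETSI-specified algorithm, and all names are illustrative:

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two stations with
    positions p = (x, y) in m and constant velocities v = (vx, vy) in m/s,
    e.g. taken from the state vectors shared via CAMs."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    # Minimize |dp + dv * t|; clamp to t >= 0 (the future only).
    t = 0.0 if dv2 == 0.0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    return t, math.hypot(dx + dvx * t, dy + dvy * t)

# Two stations on a head-on course, 100 m apart at 10 m/s each,
# reach zero separation after 5 s.
t, d = closest_approach((0.0, 0.0), (10.0, 0.0), (100.0, 0.0), (-10.0, 0.0))
print(t, d)  # -> 5.0 0.0
```

A small closest-approach distance within a short horizon would then trigger a warning to the driver or to the vehicle automation.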
Another message, the Collective Perception Message (CPM) [22], aims to
share the driving environment information among the ITS stations. To this end, the
Collective Perception Service (CPS) provides the related data regarding other road
participants, obstacles, etc. in abstract descriptions. Collective perception helps to
minimize the ambient uncertainty of ITS stations about the up-to-date environment, as
other stations contribute context information. This includes the syntax and semantics of
the CPS and the specification of the data, along with the message and message handling, to
increase the awareness in a cooperative manner. Furthermore, to increase traffic
safety, object information is also included in the CPM to be shared with other ITS
stations. This object information is then used by the safety applications on the
receiving side. Objects relevant for traffic safety are either static or dynamic; the latter
are located on the driving lanes or have the ability to move. The objective of
transmitting object information as part of the CPM is not to share and compare traffic-
regulation information such as traffic signs and traffic light information. Instead, data
about objects which are not available to other ITS stations will be provided. This
could e.g. be objects that are only temporarily present, such as traffic participants or
temporary obstacles that require priority. The MAP message is intended to serve as a
basis for other messages to have a topological reference. The topological information is
defined as an intersection where nodes are deployed on lanes. These nodes can have
attributes and a location that can be converted into latitude/longitude coordinates
according to WGS84. The IVI (In-Vehicle Information) message will also be used in
TransAID, conveying information about infrastructure-based traffic services which are
needed for the implementation of IVI road safety and traffic efficiency.
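The object-level content of a CPM can be illustrated with a simplified data structure; the actual message is specified in ASN.1 in draft TS 103 324 and carries many more fields, so the field names below are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from typing import List

# Simplified, illustrative stand-in for the CPM content described above;
# not the ETSI ASN.1 schema. All field names are assumptions.
@dataclass
class PerceivedObject:
    object_id: int
    x_m: float          # position relative to the sending station
    y_m: float
    speed_mps: float
    heading_deg: float
    dynamic: bool       # on a driving lane or able to move

@dataclass
class CollectivePerceptionMessage:
    station_id: int
    timestamp_ms: int
    objects: List[PerceivedObject] = field(default_factory=list)

cpm = CollectivePerceptionMessage(station_id=42, timestamp_ms=0)
cpm.objects.append(PerceivedObject(1, 12.5, -3.0, 8.3, 90.0, True))
print(len(cpm.objects))  # -> 1
```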
Fig. 7. Example use case with a blocked road, where vehicles drive on the bus lane, which is
usually not allowed
On motorway merge and diverge segments, lane changing can be critical within
high traffic density, as shown in Fig. 8. Collective perception can provide advice for
cooperative lane changes, e.g. to generate free space for merging. Other related use cases
on such road segments are optimized CAV platoon driving, handling queues at exit
lanes, or early traffic separation for diverging.
Fig. 8. Example use case with merging lanes on motorway and dense traffic
Another scenario deals with possible situations where the road is impassable for
automated vehicles (e.g. they are not able to drive safely, or where automation is not
allowed) as shown in Fig. 9. In such cases, CAVs might be guided to safe spots (e.g.
side lanes or parking areas). At congestion or hazard spots, RSI may also be available
for monitoring free-space areas and for providing them through collaborative
perception.
Fig. 9. Example use case with impassable road segment due to bad visual conditions
For these and some further use cases, the TransAID project is going to measure the
impact of collaborative perception (and other measures) with respect to ToC
maneuvers, which may fail and lead to MRMs that can generate further
traffic congestion. It is expected that the ToC probability can be decreased with the
developed measures, which is going to be tested in simulation and real driving
experiments.
6 Demonstration Setup
Within TransAID, the above described traffic situations will be replicated in simulation
[25] and in reproducible full-scale demonstration tests. For the latter outdoor tests, it
is planned to use multiple vehicles with different capabilities to act as LV, CV, AV or
CAV. One of the advanced test demonstrators, FASCar-E [10], will be used, which is
capable of drive-by-wire control and advanced automation. It is equipped with a
combination of standard sensors used by the manufacturer for adaptive cruise control
and lane departure warnings, as well as more expensive non-standard sensors which
provide more accurate ego-localization and object tracking capabilities.
FASCar-E, as shown in Fig. 10, is equipped with four Ibeo LUX laser scanners for
close and mid-range obstacle detection and tracking. The laser scanners operate at a
frequency of 25 Hz and cover a horizontal field of view of approximately 180 degrees to
the front and 85 degrees to the rear of the vehicle. A Bosch RADAR with a detection
range of up to 160 m and two SMS RADAR with detection range of 70 m are available
for object detection at the mid and long-range. A Novatel SPAN-CPT provides fused
GPS/DGPS and IMU position data at 100 Hz.
The vehicle can drive automatically by sending the demanded
values via the dSPACE AutoBox to the vehicle CAN bus and from there to the
corresponding vehicle subsystems, such as the ACC, lane keeping and park assistance
systems. It also has a widescreen display installed in place of the instrument cluster for
interaction with the driver. For TransAID, the vehicle is using a Cohda Wireless MK5
On-Board Unit for communication. Next to this specific car, at least one further vehicle
is being equipped in a similar manner.
In addition to initial driving tests on the roads and parking spaces of the DLR campus
in Braunschweig, large-scale driving is planned at the closed Peine-Eddesse Air Field
20 km northwest as shown in Fig. 11. The test track on the 900 m runway can be
equipped with virtual road markings visible to the vehicles only. This makes the air
field very flexible, and it can be used for longitudinal and lateral vehicle automation.
Fig. 11. Peine-Eddesse Air Field with virtual lanes and intersections (Image source: Google
Earth)
7 Conclusion
The paper presents the project TransAID, which aims to develop infrastructure-based
traffic management procedures and guidelines for a smooth coexistence between
automated, connected and conventional vehicles during the market introduction phase of
ICT technologies for automated driving. The paper focuses on the sensor and
communication architecture to be integrated into vehicles and stationary road side units.
With that, specific use cases are addressed where automated driving is difficult and
where cooperative sensing is expected to reduce automation failures and transitions of
control.
Acknowledgement. This work has been supported by the EC within the Horizon 2020
Framework Programme, Project TransAID under Grant Agreement No. 723390.
References
1. Van, R.J., Martens, M.H.: Automated driving and its effect on the safety ecosystem: how do
compatibility issues affect the transition period? Procedia Manuf. 3, 3280–3285 (2015)
2. Atkins Ltd.: Research on the impacts of connected and autonomous vehicles (CAVs) on
traffic flow. Summary Report, Version 1.1, Department of Transport (2016)
3. Hoogendoorn, R., Van, B., Hoogendoorn, S.: Automated driving, traffic flow efficiency and
human factors: a literature review. In: 93rd Transportation Research Board Annual Meeting,
USA (2014)
4. Aria, E., Olstam, J., Schwietering, C.: Investigation of automated vehicle effects on driver’s
behavior and traffic performance. Transp. Res. Procedia 15, 761–770 (2016)
5. Mahmassani, H.S.: Autonomous vehicles and connected vehicle systems: flow and
operations considerations. Transp. Sci. 50(4), 1140–1162 (2016)
6. Baskar, L.D., de Schutter, B., Hellendoorn, J., Papp, Z.: Traffic control and intelligent
vehicle highway systems: a survey. IET Intell. Transp. Syst. 5(1), 38–52 (2011)
7. Birnie, J.: Can regional operational traffic management stand on its own after fifteen years?
NM Mag. 10(1), 8–13 (2015). (in Dutch)
8. Van Waes, F., van der Vliet, H.: The road to c-its and automated driving. NM Mag. 12(2),
16–17 (2017). (in Dutch)
9. Cheng, H.: Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation.
Springer, Berlin (2011)
10. Kaschwich, C., Wölfel, L.: Experimental vehicles FASCar-II and FASCar-E. J. Large Scale
Res. Facil. 3 (2017). A111. http://doi.org/10.17815/jlsrf-3-147
11. http://www.car-2-car.org/
12. European Commission: A European strategy on cooperative intelligent transport systems, a
milestone towards cooperative, connected and automated mobility (2016). https://ec.europa.
eu/transport/sites/transport/files/com20160766_en.pdf. Accessed 13 June 2018
13. Karam, N., Chausse, F., Aufrere, R., Chaupuis, R.: Cooperative multi-vehicle localization.
In: IEEE Intelligent Vehicles Symposium, pp. 564–570 (2006)
14. Kim, S.W., et al.: Multivehicle cooperative driving using cooperative perception: design and
experimental validation. IEEE Trans. Intell. Transp. Syst. 16(2), 663–680 (2015)
15. Mourllion, B., Lambert, A., Gruyer, D., Aubert, D.: Collaborative perception for collision
avoidance. In: IEEE International Conference on Networking, Sensing and Control, pp. 880–
885 (2004)
16. Zhu, H., Yuen, K.V., Mihaylova, L., Leung, H.: Overview of environment perception for intelligent
vehicles. IEEE Trans. Intell. Transp. Syst. 18(10), 2581–2601 (2017)
17. Nguyen, T.N., Michaelis, B., Al-Hamadi, A., Tomow, M., Meinecke, M.M.: Stereo-camera
based urban environment perception using occupancy grid and object tracking. IEEE Trans.
Intell. Transp. Syst. 13(1), 154–165 (2012)
18. Sivamaran, S., Trivedi, M.M.: Dynamic probabilistic drivability maps for lane change and
merge driver assistance. IEEE Trans. Intell. Transp. Syst. 15(5), 2063–2073 (2014)
19. Zhao, X., Mu, K., Hui, F., Prehofer, C.: A cooperative vehicle-infrastructure based urban
driving environment perception method using a D-S theory-based credibility map. Opt. Int.
J. Light Electron Opt. 138, 407–415 (2017)
20. Rauch, A., Klanner, F., Rasshofer, R., Dietmayer, K.: Car2x-based perception in a high-level
fusion architecture for cooperative perception systems. In: IEEE Intelligent Vehicles
Symposium, pp. 270–275 (2012)
21. ETSI: Intelligent Transport System (ITS); Vehicular Communications; Basic Set of
Applications; Part 2: Specification of Cooperative Awareness Basic Service. Draft TS 302
637–2 V1.3.2 (2014)
22. ETSI: Intelligent Transport System (ITS); Vehicular Communications; Basic Set of
Applications; Specification of the Collective Perception Service. Draft TS 103 324
V0.0.12 (2017)
23. SAE International: Taxonomy and definitions for terms related to driving automation
systems for on-road motor vehicles (2018). http://standards.sae.org/j3016_201806/. Acces-
sed 18 June 2018
24. Lu, M., et al.: Transition areas for infrastructure-assisted driving. www.transaid.eu/.
Accessed 13 June 2018
25. http://www.ict-itetris.eu/simulator/
26. Fischer, M., et al.: Modular and scalable driving simulator hardware and software for the
development of future driver assistance and automation systems. In: Driving Simulator
Conference, pp. 223–229 (2014). https://www.researchgate.net/publication/265908625_
Modular_and_Scalable_Driving_Simulator_Hardware_and_Software_for_the_
Development_of_Future_Driver_Assistence_and_Automation_Systems
27. Smeulders, A., et al.: Visual tracking: an experimental survey. IEEE Trans. Pattern Anal.
Mach. Intell. 36(7), 1442–1468 (2013)
28. Wojke, N., Bewley, A., Paulus, D.: Simple online and realtime tracking with a deep
association metric. In: IEEE International Conference on Image Processing (ICIP),
pp. 3645–3649 (2017)
29. Lu, Z., Happee, R., Cabrall, C.D., Kyriakidis, M., de Winter, J.: Human Factors of
Transitions in Automated Driving: A General Framework and Literature Survey.
Transp. Res. Part F Traffic Psychol. Behav. 43, 183–198 (2016). https://www.
researchgate.net/publication/304624338_Human_Factors_of_Transitions_in_Automated_
Driving_A_General_Framework_and_Literature_Survey
30. http://www.smartmobilitycommunity.eu
31. Eco-AT Consortium: SWAP 2.1 use cases, in-vehicle information, WP2—system definition,
version 4. http://eco-at.info. Accessed 13 June 2018
32. TransAID: Scenarios, Use Cases and Requirements; Deliverable D2.1 (2018)
Real Time Recognition of Non-driving Related
Tasks in the Context of Highly
Automated Driving
1 Introduction
1.1 Background
With the continuous development of highly automated driving functions, the active
role of the driver in vehicle guidance changes to a more passive one. In highly
automated driving, longitudinal and lateral vehicle guidance is performed by a system
within given application areas (e.g. highway roads). The driver does not need to
permanently monitor the driving environment and could theoretically turn his attention
to non-driving related (NDR) tasks.
2 Classification of NDR-Tasks
During automated driving, the driver does not need to continuously take care of the
driving task. The driver has his hands free and can focus his attention on NDR-tasks
like reading, or using a laptop, tablet or mobile phone. In comparison to other human
activity recognition problems, the set of in-vehicle tasks is restricted by the geometry and
design of the car but, in contrast to the manual driving task, the driver has many more
degrees of freedom.
To investigate how potential NDR-tasks can be classified, measurement data of 44
test persons was acquired in a driving simulator with the sensor setup described in
Sect. 2.2. All participants completed an approximately one-hour ride in the driving
simulator using a highly automated mode. During the drive each participant executed
different NDR-tasks. Table 1 shows the NDR-tasks that should be detected by the
DMS approach presented in this paper. For the detection task, it is necessary to define
a baseline state of the driver. This condition is defined as monitoring the driving
situation in a normal upright position on the driver's seat. This state is referred to as
"baseline" in the following sections.
Table 1. Set of used non-driving related tasks with description of their realization
– Repeating spoken text: auditorily presented sentences, repeated verbally
– Reading out text: written sentences presented on a tablet computer (attached in the center console), read out aloud
– Texting (mounted): transcribing text on a tablet computer, attached in the center console
– Texting (handheld): transcribing text on a tablet computer, performed handheld
– Reaching for object (passenger seat): searching for specific Lego bricks and placing these in a box on the passenger seat
– Cell-phone talk (handheld): receiving a call from the experimenter
2.1 Approach
There is only a limited possibility to directly measure NDR-tasks from a single signal.
The set of possible tasks is too large, and single tasks can have a complex characteristic.
Hence, it is necessary to fuse different features from several measurement signals to
infer the currently executed task of the driver. To identify these features, a task can be
divided into sub-tasks, see Fig. 1. For example, the NDR-task "cell-phone talk" can be
divided temporally into subtasks, beginning with the first glance at the phone and
ending when the driver puts the phone back in its place.
When analysing the activity, task-characteristic features become clearly visible. The
actions of the driver can be described by the main features visual orientation and
position of the hands. Because of this, our approach to estimating NDR-tasks is primarily
based on features describing regions of interest (ROI) for task-related glance and hand
positions, see Fig. 2.
Fig. 2. Identified glance- (left) and hand- (right) positions of defined non-driving related tasks
In addition to the main features, glance and hand area, secondary features are
needed to distinguish between tasks with similar visual and biomechanical
characteristics like "repeating spoken text" and "baseline". In this approach, mouth movement is
analysed to estimate whether the driver is speaking. It is added as a further feature to
gain the ability to describe the characteristics of NDR-tasks like "repeating spoken text"
or "reading out text" (see Table 1) in more detail.
Fig. 3. Overview of the system architecture for estimating non-driving related tasks in the
context of automated driving.
Fig. 4. Sensorial setup for detecting non-driving related tasks of the driver (left) and example
images from the 2D (C1 & C2) and 3D (C3) cameras (right).
Within a sliding window, the variance of the distance values over time is calculated.
Finally, the decision whether the driver is speaking is made by applying an empirical
threshold parameter to the variance value.
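The sliding-window variance test can be sketched as follows; the window size and threshold are illustrative placeholders for the empirical parameter mentioned above:

```python
import statistics

def is_speaking(mouth_distances, window=10, threshold=0.5):
    """Decide whether the driver is speaking from the variance of the
    mouth-opening distance in the most recent sliding window.
    Window size and threshold are assumed, illustrative values."""
    if len(mouth_distances) < window:
        return False
    return statistics.variance(mouth_distances[-window:]) > threshold

still = [10.0] * 12          # closed mouth: no variance
talking = [10.0, 14.0] * 6   # oscillating mouth opening
print(is_speaking(still), is_speaking(talking))  # -> False True
```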
Fig. 5. Example data for calculating the occupancy of a hand ROI; black – the occurring depth
values of the static environment of the vehicle interior; white – the occurring depth values if a
hand activity is executed in the ROI.
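The depth-based ROI occupancy illustrated in Fig. 5 can be sketched as a comparison against a static background model; the function name and the delta/min_fraction parameters below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def roi_occupied(depth_roi, background_depth, delta=50.0, min_fraction=0.2):
    """Binary occupancy of one hand ROI from 3D-camera depth values.

    A pixel counts as foreground (hand) when it lies at least delta mm
    closer to the camera than the learned static interior; the ROI is
    occupied when the foreground fraction is large enough.  delta and
    min_fraction are illustrative assumptions.
    """
    foreground = depth_roi < (background_depth - delta)
    return bool(foreground.mean() > min_fraction)
```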
The task model is a left-right HMM, where only transitions to higher states are allowed. The nodes in this
graph correspond to the states of the variable Xt, and the connections between
these nodes represent the possible transitions between the states.
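The left-right structure can be illustrated by constructing such a transition matrix directly; the self-transition probability and the uniform split of the remaining mass over higher states are illustrative choices, not parameters from the paper:

```python
import numpy as np

def left_right_transitions(n_states, self_prob=0.8):
    """n x n transition matrix A of a left-right HMM.

    Only self-transitions and transitions to higher states receive
    probability mass; self_prob is an illustrative choice, and the
    remaining mass is spread uniformly over the higher states.
    """
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        if i == n_states - 1:
            A[i, i] = 1.0                  # last subtask absorbs
        else:
            A[i, i] = self_prob            # stay in current subtask
            A[i, i + 1:] = (1.0 - self_prob) / (n_states - 1 - i)
    return A
```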
To estimate which subtask in the sequence is the current one (cf. Fig. 1), only the
measurements in the feature vector are available. It is, for example, not directly
observable whether the driver takes the phone from the co-driver seat; only the current
and last hand positions are known, and the state inside the task sequence must be
inferred. This state is therefore referred to as hidden, and assumptions on the current
state of X_t can only be made using measurable observations. The transitions between
the states of X_t are described by an n × n state transition matrix A, whose elements
A_{ij} give the conditional probability of the transition from state S_i to state S_j, cf. [15]:

A_{ij} = P(X_t = S_j | X_{t-1} = S_i),  0 < i, j ≤ n;  i, j, n ∈ ℕ  (2)
The input feature vector used for the HMM contains the results of the preprocessing
modules: the detected glance area of the driver, the hand positions, the tablet position,
and the estimate of whether the driver is speaking:

x_t = (g_t, h_{1,t}, h_{2,t}, ..., h_{8,t}, tp_t, s_t)^T.  (4)
The discrete variable g_t represents the calculated glance area of the driver at time t.
h_{1,t}, ..., h_{8,t} contain the binary occupancy states of the defined possible hand regions.
In contrast to the glance area g_t, the occupancy information of the hand ROIs is not
represented in a single variable, because at a given time t more than one region can be
occupied, depending on the characteristics of the related NDR-task. tp_t indicates whether
the tablet is inside the holder. The last feature indicates the binary state of whether the driver is speaking.
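Assembling the feature vector of Eq. (4) from the preprocessing outputs might look as follows; all function and argument names are illustrative:

```python
import numpy as np

def build_feature_vector(glance_area, hand_rois, tablet_in_holder, speaking):
    """Assemble x_t = (g_t, h_1..h_8, tp_t, s_t) as in Eq. (4).

    glance_area is the discrete glance-area index g_t; hand_rois holds
    the eight binary ROI occupancy states; names are illustrative.
    """
    if len(hand_rois) != 8:
        raise ValueError("eight hand ROIs are defined")
    return np.array([glance_area, *hand_rois,
                     int(tablet_in_holder), int(speaking)])
```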
To apply HMMs to real-time NDR-task classification, a training step is needed
first, i.e. learning the model parameters A, B, and Π from a given training data
set. The training data set is created from a subset of the measurement data acquired in
the driving simulator, where participants executed defined NDR-tasks, see Sect. 2.
These data are labelled in an annotation process to obtain various sequences from different
persons. For each NDR-task defined in Table 1, one corresponding HMM was trained
using the Baum–Welch algorithm [15].
For real-time detection of NDR-tasks that are inside the trained task set (Table 1),
a sliding window is applied to generate an input sequence O. Each trained HMM λ_i is
fed with the currently observed temporal input sequence of the feature vector (see
Eq. 4). The probability of the sequence given the model description, P(O|λ_i), is
calculated for all trained HMMs λ_i via the backward algorithm (cf. [15]) and compared
across models. Following the maximum a posteriori (MAP) estimation, the HMM
with the highest observation probability fits best to the observed sequence. Thus, the
corresponding NDR-task i forms the result of the classification.
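A minimal numpy sketch of this MAP decision rule, implemented here with the scaled forward algorithm (which computes the same quantity P(O|λ) as the backward recursion of [15]); the toy models below merely stand in for the trained HMMs:

```python
import numpy as np

def sequence_log_likelihood(obs, pi, A, B):
    """log P(O | lambda) for a discrete HMM via the scaled forward
    algorithm; obs are observation symbol indices, B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()                # rescaling avoids underflow
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

def classify_ndr_task(obs, models):
    """MAP decision: the task whose HMM scores the observed window
    highest wins (uniform task priors assumed)."""
    return max(models, key=lambda k: sequence_log_likelihood(obs, *models[k]))
```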
3 Results
To evaluate the performance of the NDR-task detection using a trained HMM for each
task, the set of sequences from each single participant is in turn left out of the training
step and then used as the test set. This leave-one-out cross-validation, separated by
participants, provides the following true positive and false positive detection rates.
Figure 7 depicts the performance of the detection of the defined NDR-tasks
(Table 1) as well as of the “baseline”, which represents the free monitoring of the driving
situation without performing any specific task.
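The participant-wise splitting described above can be sketched as follows, assuming the labelled sequences are grouped by a participant id (the data layout is illustrative):

```python
def leave_one_participant_out(sequences_by_participant):
    """Yield (held_out, train, test) splits, leaving each participant's
    labelled sequences out of training exactly once."""
    for held_out in sequences_by_participant:
        train = [s for p, seqs in sequences_by_participant.items()
                 if p != held_out for s in seqs]
        test = list(sequences_by_participant[held_out])
        yield held_out, train, test
```

Each split trains the task HMMs on all other participants and evaluates on the held-out one, so the reported rates reflect generalisation to unseen drivers.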
It should be noted that the detection rate is relative to the execution time of the
respective tasks. Thus, if the driver is on the phone for two minutes, the system on
average detects nearly one minute of that time correctly as “cell phone call”. Due to the
sliding window approach, the detection result of the HMMs has a delay of at least the
defined length of the sliding window. This explains the observed behavior that a
detection of the complete execution time of the task cannot be achieved.
The results show that the detection of NDR-tasks with characteristic glance and
hand areas, e.g. “texting handheld” or “reaching for object on passenger seat”, performs
much better than that of tasks with no concrete allocation of the visual and manual channel,
like “repeating spoken text” or “baseline”. In this respect, the true positive rates of
the task “cell-phone talk handheld” represent an anomaly, because there is no reliable
detection of whether the driver holds the phone to the head: the field of view of the 3D
sensor only covers the right side of the driver, so if the driver uses the phone with the
left hand, it cannot be detected.
Given the true positive and false positive rates from Fig. 7, the sensitivity index
d′ = Z(true positive rate) − Z(false positive rate) introduced by signal detection theory
(cf. [16]) evaluates the performance for each task. A value of d′ = 0 indicates that the
system detects the currently executed task only at chance level, whereas d′ = 4
corresponds to a hit rate of 0.95 at a false alarm rate of 0.01.
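The sensitivity index can be computed directly from the inverse standard normal CDF; a minimal sketch:

```python
from statistics import NormalDist

def d_prime(true_positive_rate, false_positive_rate):
    """Sensitivity index d' = Z(TPR) - Z(FPR), with Z the inverse of
    the standard normal CDF (signal detection theory, cf. [16])."""
    z = NormalDist().inv_cdf
    return z(true_positive_rate) - z(false_positive_rate)
```

Equal hit and false-alarm rates give d′ = 0 (chance performance), and the (0.95, 0.01) pair mentioned in the text gives a value close to 4.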
Fig. 7. True positive and false positive rates for each task from the leave-one-out cross-
validation.
52 T. Pech et al.
Fig. 8. Logarithmic probability of exemplary observation sequence for baseline (dashed orange
line) versus cell-phone talk (continuous blue line) for comparison of applied sliding window of
length (a) 5 s and (b) 10 s.
4 Conclusion
In this paper, a methodology for the detection and classification of NDR-tasks in the
context of automated driving has been described. The presented results show that it is
feasible to detect the NDR-task currently executed by the driver using HMMs. It has
also been shown that the used feature vector contains sufficient information to model
the predefined tasks in Table 1. In our approach, hand and glance positions appear to
be stable features for distinguishing between the considered NDR-tasks with different
visual and manual characteristics. The results in Fig. 7 show that NDR-tasks with an
allocated manual and visual channel have higher detection rates than tasks with
uncertain hand or glance positions, like “repeating spoken text”. However, this is also
a weak point: without accurate and complete preprocessing of the measurement data,
missing information can lead to recognition mistakes. This is observed for the NDR-task
“cell phone call”. The typical hand position of this task is not available if the driver uses
the phone with the left hand; in that case, “cell phone call” is misclassified as
“repeating spoken text”. Accordingly, the accuracy of the NDR-task detection increases
the more distinctive the task characteristics are, as represented by a defined area where
the task is executed and by the occupied human sensory channels. This is visible, for
example, in the detection rates of “reaching for object: passenger seat” and “texting
handheld”. Because of the low true positive rates for tasks with a vocal attribute, like
“repeating spoken text” or “speaking on cell phone”, further investigations at the
feature level are necessary to stabilise the detection of whether the driver is speaking.
Based on the available results, it is recommended to group tasks with similar
characteristics regarding their resource demands. For example, actions like “repeating
spoken text” or “reading out text” can be grouped by their vocal attribute.
Research has shown that drivers’ take-over performance is highly influenced by the
specific NDR-task (e.g. [17, 18]). The information about the currently performed NDR-
task might be used to enable task-adaptive HMI concepts that support the driver in take-
over situations. Possible adaptations include earlier or more urgent warnings as well as
the use of modalities (e.g. visual, auditory, or haptic) that are not compromised by the
current NDR-task.
Acknowledgment. This work results from the joint project Ko-HAF - Cooperative Highly
Automated Driving and has been funded by the Federal Ministry for Economic Affairs and
Energy based on a resolution of the German Bundestag.
References
1. Rauch, N., Kaussner, A., Boverie, S., Giralt, A.: HAVEit: the future of driving. Deliverable
32.1. Report on driver assessment methodology. In: 7th Framework Programme ICT-
2007.6.1 (2009)
2. Dong, Y., Hu, Z., Uchimura, K., Murayama, N.: Driver inattention monitoring system for
intelligent vehicles: a review. IEEE Trans. Intell. Transp. Syst. 12(2), 596–614 (2011)
3. Bachmann, T., Bujnoch, S.: ConnectedDrive - driver assistance systems for the future.
Technical report, BMW (2002)
4. Bekaris, A.: System for effective assessment of driver vigilance and warning according.
Technical report, National Center for Research and Technology Hellas (CERTH) Hellenic
Institute of Transport (HIT) (2002)
5. Brandt, T., Stemmer, R., Mertsching, B., Rakot, A.: Affordable visual driver monitoring
system for fatigue and monotony. Int. Conf. Syst. Man Cybern. 7, 6451–6456 (2004).
https://doi.org/10.1109/icsmc.2004.1401415
6. Leonhardt, V., Pech, T., Wanielik, G.: Fusion of driver behaviour analysis and situation
assessment for probabilistic driving manoeuvre prediction. In: Bengler, K., Hoffmann, S.,
Manstetten, D., Neukum, A., Drüke, J. (eds.) UR:BAN Human Factors in Traffic.
Approaches for Safe, Efficient and Stressfree Urban Traffic, pp. 223–244. Springer,
Wiesbaden (2017). https://doi.org/10.1007/978-3-658-15418-9_11
7. Cheng, S., Park, S., Trivedi, M.: Multiperspective thermal IR and video arrays for 3D body
tracking and driver activity analysis. In: Proceedings of the 2005 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition—Workshops. IEEE (2005).
https://doi.org/10.1109/cvpr.2005
8. Braunagel, C., Stolzmann, W., Kasneci, E., Rosenstiel, W.: Driver-activity recognition in the
context of conditionally autonomous driving. In: IEEE 18th International Conference on
Intelligent Transportation Systems, Las Palmas, pp. 1652–1657 (2015). https://doi.org/10.
1109/itsc.2015.268
9. Cheng, S., Park, S., Trivedi, M.: Multi-spectral and multi-perspective video arrays for driver
body tracking and activity analysis. Comput. Vis. Image Underst. 106, 245–257 (2007).
https://doi.org/10.1016/j.cviu.2006.08.010
10. Yan, S., Teng, Y., Smith, J., Zhang, B.: Driver behavior recognition based on deep
convolutional neural networks. In: 12th International Conference on Natural Computation,
Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, pp. 636–641 (2016).
https://doi.org/10.1109/fskd.2016.7603248
11. Cronje, J., Engelbrecht, A.: Training convolutional neural networks with class based data
augmentation for detecting distracted drivers. In: Proceedings of the 9th International
Conference on Computer and Automation Engineering (ICCAE 2017), Sydney, pp. 126–130
(2017). https://doi.org/10.1145/3057039.3057070
12. Ohn-Bar, E., Martin, S., Tawari, A., Trivedi, M.: Head, eye, and hand patterns for driver
activity recognition. In: 22nd International Conference on Pattern Recognition, Stockholm,
pp. 660–665 (2014). https://doi.org/10.1109/icpr.2014.124
13. Pech, T., Lindner, P., Wanielik, G.: Head tracking based glance area estimation for driver
behaviour modelling during lane change execution. In: 2014 IEEE 17th International
Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, pp. 655–660
(2014)
14. Elfes, A.: Occupancy grids: a probabilistic framework for mobile robot perception and
navigation. Ph.D. thesis, Electrical and Computer Engineering Dept./Robotics Inst.,
Carnegie Mellon Univ. (1989)
15. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech
recognition. Proc. IEEE 77(2), 257–286 (1989)
16. Macmillan, N.A., Creelman, C.D.: Detection Theory: A User’s Guide. Taylor & Francis,
Abingdon (2004). ISBN 9781410611147
17. Petermann-Stock, I., Hackenberg, L., Muhr, T., Mergl, C.: Wie lange braucht der Fahrer?
Eine Analyse zu Übernahmezeiten aus verschiedenen Nebentätigkeiten während einer
hochautomatisierten Staufahrt [How long does it take for the driver? Analysis of takeover
times with different secondary tasks in a highly automated traffic jam assist]. Paper presented
at 6th Tagung Fahrerassistenz [6th Conference on Driving Assistance], München, Germany
(2013)
18. Wandtner, B., Schömig, N., Schmidt, G.: Effects of non-driving related task modalities on
takeover performance in highly automated driving. Hum. Factors (2018). https://doi.org/10.
1177/0018720818768199
Affordable and Safe High Performance Vehicle
Computers with Ultra-Fast On-Board Ethernet
for Automated Driving
1 Introduction
highly flexible, time optimized and safe mobility. Neither is it possible yet to optimize
the individual traffic volume.
To address these issues and to enable a new quality of driving comfort, autonomous
driving (AD) will be a key capability for future vehicle generations. Ultimately,
level 5, full autonomy, will be approached. These AD cars thus require high-
performance vehicle computers (HPVC) in order to perform a multitude of complex
functions, such as comprehensive vision processing, object recognition, intelligent
traffic system functions, and task dispatch between the different ECUs in the car. The HPVC
system must be capable of safely handling all driving situations autonomously at all
times. Currently, major semiconductor manufacturers are making strong efforts in
developing powerful processors according to these needs. In fact, first components
have been announced to become available this year on functional development
platforms. They will allow first assessments and software developments (Fig. 1).
However, they are not designed for the actual use in the harsh conditions of real
vehicles. Automotive grade HPVC modules and systems are yet to be created. For this,
essential technological obstacles need to be overcome before the solutions can qualify
as regular products at affordable prices. These obstacles result from the following
requirements:
• Level 5 AD cars need much more computational power than today’s vehicles.
Otherwise, the human driver cannot be replaced. The most modern solutions
offer 320 TOPS and claim to be sufficient for AD at level 5. They use graphics
processors, each of which dissipates about 300 W. Accounting for redundancy and all
necessary periphery, the system power can be estimated to reach 1 kW in total.
• Level 5 AD cars need comprehensive perception of the surrounding environment in
real time. This can only be achieved by deploying multiple video/radar/lidar/
ultrasonic sensors in the car. They will generate much more data than in the vehicles
today. Final data fusion will be done in the centralized HPVC units. Therefore, the
on-board communication network needs to allow data rates of 10 Gbit/s and
guarantee highest quality of service (QOS). New connectors, wiring harness solu-
tions as well as communication chips and AD converters are needed.
• The on-board HPVC and communication systems of AD cars need to show very
high reliability, security, and safety in order to protect human life in daily
routine traffic as well as in difficult traffic situations. In fact, the reliability and
functional safety of AD electronic systems must be increased substantially: the active
human driver, who constantly monitors the driving behavior of the car, will
change into a passive passenger who leaves all control functions to the electronic
system. In addition, the evolving use and business case scenarios, based on car-
sharing approaches rather than on cars privately owned by individual people, will
increase the in-service operating time of the electronic systems of AD cars several
times beyond the current 8,000 h.
58 M. Hager et al.
All these requirements and challenges need to be tackled simultaneously.
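The power figure of the first requirement can be reproduced with a rough back-of-envelope budget; the ~300 W per graphics processor is stated above, while the redundancy factor and the periphery share are assumptions chosen only to illustrate how a ~1 kW total can arise:

```python
def hpvc_power_estimate(processor_w=300.0, redundancy=2, periphery_w=400.0):
    """Back-of-envelope system power for a level 5 HPVC.

    processor_w (~300 W per graphics processor) comes from the text;
    the redundancy factor and periphery share are illustrative
    assumptions, not figures from the text.
    """
    return processor_w * redundancy + periphery_w
```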
materials are used for substrate/interposers and spreader (sometimes also built as a
metal cap) to assure low thermal mismatch and planarity of the assembly. Again, air-
cooling is found on single chips with powers up to 100 W whereas liquid cooling is
necessary for any higher power. So far, only single chip-level cold plates (internally
pin-fins or µ-channels) have been marketed for high reliability applications (e.g., within
blade servers [3]). There, the liquid is strictly sealed away from the electronics. Lab
solutions have demonstrated 370 W/cm² with jet impingement (50 k nozzles) [4] or
1000 W/cm² with 3D interlayer cooling in stacked silicon [5]. Recently, top-mounted
polymer manifold jet impingement coolers were realized with 330 W/cm²K [6] at a
comparable thermal budget. These solutions allow higher heat transfer coefficients due
to the absence of thermal interfaces, but bring the cooling liquid very close to the chip,
which is considered a reliability problem. For automotive applications, avoidance
strategies that safely prevent any leakage will be of high importance.
In the future, the liquid cooling circuit will have a maximum temperature of 65 °C
(today 55 °C), which further reduces the thermal budget for keeping the junction
temperature at 80…95 °C. Processor de-rating is not an option for thermal management,
as full operational performance needs to be available at all times during driving.
In sum, there is presently no cooling solution available for high-performance
computing with the targeted system power of up to 1 kW for highly reliable automotive
deployment.
technologies nor with the wiring harness solutions. Thus, a number of automotive
OEMs expect Ethernet to become the core technology as it offers the right level of
flexibility, scalability, and cost, especially when combined with the proper protocols.
Fig. 2. Envisioned set of sensor systems for level 5 autonomous driving [7]
of the AD car. In order to stay affordable and competitive, smarter solutions shall be
implemented—even more as the current approach cannot overcome the statistical
dilemma. Its estimates are only valid on average. Hence, it is still unknown for how
long the spare system will survive the failure of the first one in each individual case.
The new HPVC and communication technologies enabling level 5 AD pose immense
challenges in each of the three domains: thermal, electrical, and mechanical. The proper
module and system integration will need to build on significant improvements at
component and even material level in order to arrive at architecture and design
solutions that comprehensively meet the requirements of all domains simultaneously.
The issues concern the thermal budget, the multi-modal connectivity and signalling, the
reliability and functional safety in the harsh automotive environment, as well as the
form factor and the cost. Tackling them necessitates several key innovations in design,
technology, and test concepts along the full heat path from silicon to system, in all parts
of the electrical communication network, and in the various heterogeneous integration
efforts. The following sections discuss key innovations in each of the three domains.
Ultimately, they need to be merged into a true system approach: after thorough
analysis and planning, thermal, electrical, and mechanical co-design must be considered
on an equal footing right from the beginning of the product development.
Fig. 3. Schematic of targeted innovative thermal management concepts: (a) with system-level
manifold, (b) with chip-level manifold including thermal and thermo-mechanical challenges. ▲
(Thermal) 1. TIM (first bottleneck), 2. Interface to spreader, 3. Heat transfer into manifold,
4. Thermal enhancement of substrate, 5. Heat path encapsulated packages, 6. Heat pipe
performance, 7. µ-channel cooler performance, 8. Jet impingement cooler performance,
9. Secondary liquid loop with heat exchanger and pump. ★ (Thermo-mechanical) 1. System
tolerance, CPI, 2. UF delamination, solder joint reliability, 3. Interface toughness/low stress
bond, 4. & 5. System sealing, 6. Fixation mechanical decoupling, 7. Leak proof µC cooler &
tubing connections, 8. Leak proof jet impingement, 9. Reliable & corrosion-free secondary
cooling circuit.
(3) and (4), fluid redistribution and the secondary liquid loop with heat exchanger and
pump need to be foreseen.
The study of all these concepts will be facilitated by thermal test chips (TCC),
which encompass a sufficiently fine hot-spot granularity and distributed temperature
sensing for localized TIM performance and degradation monitoring. The TCC may be
constructed with utmost simplicity: a single metal layer for the matrix of heater and
thermal sensor elements as well as for the daisy chains and the 4-point structures allows
all thermal performance tests as well as the reliability assessments and the structural
health monitoring when following the 3-omega approach according to recent
developments [9]. A matrix of these 3-omega structures, which are combinations of
metal-line-based heaters and temperature sensors that form so-called Thixels (thermal
pixels), can be used to unequivocally predict delamination with a working spatial
resolution of typically the order of 100 µm. They can universally be integrated within
the given chip technology as simple conductor lines and offer further advantages, such
as being very robust and not showing any cross-sensitivity to potentially parasitic
effects such as temperature, stress, or moisture.
management. Then, the individual modules can be designed and optimized separately,
i.e., by various suppliers and with customer-specific performance parameters, but
would still fit seamlessly into the over-all system architecture.
With the development of AD vehicles, the concerns of functional safety further increase
substantially. A new strategy has been proposed that uses knowledge-based prognostics
and health management to meet them in an affordable way [11]. The approach is based on
monitoring the condition of the electronic systems that are actually in use so
comprehensively that costly redundancy by second sets of full-scale systems can be
avoided. The remaining redundancies at lower levels (module, component, and feature
redundancies like double pins) would then be sufficient for full safety assurance.
Implementing this policy is a strategic goal for the next decade. It starts with the
identification of key failure indicators (KFI) that allow detecting the onset of degra-
dation safely and well ahead of any critical failure. This can be achieved by monitoring
preceding effects or by using so-called canary features¹, for example. Delamination of
mold compound from the die adhesive and from the die surface is a known wear-out
phenomenon in electronic components. It is caused by the difference in thermal
expansion between these materials. It is usually initiated at one corner or edge of the
die and propagates inwards. Ultimately, it may lead to wire bond lifts. The delamination
changes the mechanical situation, which can be detected by strain
sensing elements well ahead of the electrical failure (Fig. 5).
Fig. 5. Delamination propagation during thermal cycle test (−40 °C ↔ 125 °C). iForce stress
sensor: difference in normal in-plane stress components grows [12]
Similarly, canary features can trigger calls for maintenance and repair before the
occurrence of a failure. These few KFI features are foreseen in addition to the func-
tional structures and are placed at positions where they are exposed to higher loads during
the operation cycles than the functionally essential counterparts, and/or they are inten-
tionally designed to be weaker. Hence, the canary feature fails before the essential part. Its
failure provides the individual calibration point that allows the prediction of the specific
remaining useful lifetime (RUL) of this particular electronic system. In this way, the few
additional canary features pave the path to the avoidance of costly system redun-
dancy, as they indicate the onset of degradation in the actual system and provide a
quantitative estimate of the RUL. Examples of canary features are small passives like
SMD resistors, whose solder pads are reduced in size, or the corner joints of compo-
nents like QFP, BGA, CSP, flip chips, etc., which are known to be stressed the most.

¹ The expression goes back to the canary birds used in coal mines in the old days.
The difference in lifetime expectancy between the canary and the functional features,
respectively, i.e., the RUL after canary failure, can very well be determined in relia-
bility tests and numerical simulations during the product development of the electronic
system.
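The RUL prediction from a canary failure then reduces to simple arithmetic once the canary-to-functional lifetime ratio is known from qualification; a minimal sketch with invented numbers:

```python
def remaining_useful_lifetime(canary_failure_cycles, lifetime_ratio):
    """RUL of the functional feature once its canary has failed.

    lifetime_ratio is the functional-to-canary lifetime ratio obtained
    from reliability tests and simulation during product development;
    all numbers here are purely illustrative.
    """
    functional_lifetime = canary_failure_cycles * lifetime_ratio
    return functional_lifetime - canary_failure_cycles
```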
The final estimation of the RUL in field applications, based on the information from the
sensors and canary features, will be realized utilizing the concept of a digital
twin. A digital twin is a mathematical model of the physical system that evaluates
data from the system under investigation in situ (e.g. from a temperature or moisture
sensor) and compares them with the expected response (e.g. temperature or humidity)
using meta-models. Different patterns can be stored in the digital twin model, and the
wear-out can be estimated based on the response of the meta-model. The RUL will be
estimated in two ways: for known failure modes and mechanisms, physics of failure
will be used; for failure modes whose mechanisms cannot be described explicitly, a
data-driven approach will be used. The combination of these two approaches is known
as the hybrid PHM approach. The digital twin will be a very important feature of future
automotive HPVC electronic systems and will allow for continuous analysis of the
system’s state of health. As a result, the RUL of the individual system can be estimated
accurately utilizing a “clone” of that system, so that predictive maintenance can be
realized in practice.
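The digital-twin comparison of measured data against the meta-model's expected response can be sketched as a residual check; the tolerance and the persistence count are assumed calibration parameters, not values from the text:

```python
def wear_out_detected(measured, expected, tolerance=2.0, min_hits=3):
    """Digital-twin style check: compare in-situ sensor readings with
    the meta-model's expected response and flag wear-out once the
    residual exceeds the tolerance for min_hits consecutive samples.
    tolerance and min_hits are assumed calibration parameters.
    """
    streak = 0
    for m, e in zip(measured, expected):
        streak = streak + 1 if abs(m - e) > tolerance else 0
        if streak >= min_hits:
            return True
    return False
```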
The strong market pull towards level 5 AD cars has triggered massive efforts in the
development of automotive HPVC and communication platforms. Regarding the
HPVC components, the upcoming Nvidia system ‘Pegasus’ is the best-known example
[1]. It is announced to provide 320 TOPS of computational power, to be capable of
interacting with 4× 10 Gbit/s, 8× 1 Gbit/s, and 16× 100 Mbit/s Ethernet links, and to
dissipate some 500 W of thermal power. While the communication and computation
performance figures would fit level 5 AD cars, the practical parameters of the
development platform do not yet meet the automotive requirements. Based on the
analysis explained in the preceding sections, the following improvements are seen as most
essential to make the platform operational in regular automotive products:
• Computation - Thermal Domain
The challenge of heat removal from the graphic processors and the peripheral
components will be tackled along two distinct tracks:
(a) Low-CTE on-die heat spreaders and integrated heat pipe solutions.
(b) Implementation of micro-channel and direct fluid cooling concepts.
While solutions according to track (a) are expected to enter regular production
within the coming 3–5 years, the next-generation systems according to track
(b) offer at least 2× higher cooling efficiency. Their production may start after 2025,
due to fundamental reliability concerns yet to be resolved.
Acknowledgement. The authors would like to thank Stefania Fontanella, Hubert Straub, and
Abdessamad El Ouardi (Robert Bosch GmbH) for their valuable input. We are looking forward
to the work in ‘HiPer’, the PENTA project supported by BMBF/VDI (Germany), RVO
(Netherlands), and VLAIO (Belgium).
References
1. Nvidia—Drive PX Xavier … Pegasus. https://www.nvidia.de/self-driving-cars/drive-px/
2. Bret, C.L.: Power electronics for EV/HEV—market, innovations and trends, Yole
Development report (2016)
3. Michel, B.: Roadmap towards efficiency—zero-emission datacenters; IBM-Research,
Advanced Thermal Packaging/Micro Integration 02 June 2015. https://www.zurich.ibm.
com/st/energy_efficiency/zeroemission.html
4. Brunschwiler, T., Paredes, S., Drechsler, U., Michel, B., Cesar, W., Leblebici, Y., Wunderle,
B., Reichl, H.: Heat-removal performance scaling of interlayer cooled chip stacks. In:
Proceedings of 12th Itherm Conference, Las Vegas, USA, 2–5 June 2010
5. Madhour, Y., Zervas, M., Schlottig, G., Brunschwiler, T., Leblebici, Y., Thome, J.R.,
Michel, B.: Integration of intra chip stack fluidic cooling using thin-layer solder bonding. In:
Proceedings on 3DIC Conference, San Francisco, CA, USA, 2–4 October 2013
6. Tiwei, T., Oprins, H., Cherman, V., Van der Plas, G., De Wolf, I., Beyne, E., Baelmans, M.:
High efficiency direct liquid jet impingement cooling of high power devices using a 3D-
shaped polymer cooler. In: Proceedings on IEDM Conference 2013
7. Buller, W.T.: Benchmarking sensors for vehicle computer vision systems. Michigan Tech
Research Institute, Ann Arbor, MI (2017). http://mtri.org/automotivebenchmark.html
8. Gupta, M.P., Kumar, S.: Thermal Management of many-core processors using power
multiplexing. Electronics Cooling (2013)
9. Wunderle, B., May, D., Abo Ras, M., Keller, J.: Non-destructive in-situ monitoring of
delamination of buried interfaces by a thermal pixel (Thixel) chip. In: Proceedings on 16th
Itherm Conference, Orlando, USA, May 30–June 2 2017
10. Hager, M., Lock, A.: Future electrical architectures and their effects on automotive ECU and
connector systems (Zukünftige Bordnetzarchitekturen und deren Auswirkungen auf
automotive Steuergeräte und Steckersysteme). In: 6th International Congress on Automo-
tive Wire Harness, Ludwigsburg, 13–14 March 2018
11. Rzepka, S., Gromala, P.J.: Integrated smart features assure the high functional safety of the
electronics systems as required for fully automated vehicles. In: Advanced Microsystems for
Automotive Applications. Lecture Notes in Mobility, pp. 167–178. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-66972-4_14
12. Schindler-Saefkow, F., Rost, F., Otto, A., Faust, W., Wunderle, B., Michel, B., Rzepka, S.:
Stress chip measurements of the internal package stress for process characterization and
health monitoring. In: 13th International Conference on EuroSimE 2012, Proceedings, Art.
No. 6191746
The Disrupters: The First to Market
Automation Technologies to Revolutionize
Mobility
Adriano Alessandrini(&)
1 Background
Road vehicle automation will revolutionise transport. This revolution will effectively
start when new transport services with significant added value over today’s
ones can be deployed. The possibility of relocating empty (or almost empty) vehicles
on the roads will give private transport unprecedented flexibility and turn private road
vehicles into a different transport mode compared with conventional private
vehicles. Similarly, in public (and shared) transport services, it is empty-vehicle
relocation that will make economically viable new transport services that today are
too expensive to be widely implemented.
Such new services and new ways of using private cars enabled by automation will
lead to a paradigm shift in vehicle usage and to consequent impacts. The prevailing
business model for the new transport services enabled by automation will depend on
many factors; time to market is first and foremost.
On the one hand, automation will allow a more convenient use of the personal auto-
mobile: the time otherwise spent driving can be used differently (e.g. for working),
and drivers are relieved of the burden of searching for parking, which will push a more
widespread use of the private vehicle.
On the other hand, driverless shared transport services will complement mass
transit and add the flexibility and capillarity that are not economically feasible for
conventional public transport, making the new public transport more attractive and
financially self-sufficient.
According to the VRA roadmap [1] adopted by ERTRAC, the key technologies enabling
these new and disruptive transport modes will be released on the market between 2019
and 2022, and by 2025 at the latest they are expected to become sufficiently
widespread to have an impact on citizens' mobility.
Figure 1 shows how some crucial automation functions will hit the market between 2019
and 2022. For passenger cars these will be the highway chauffeur (SAE level 3 [4])
and valet parking (SAE level 4 [4]). For shared mobility systems, fully automated
last-mile vehicles (SAE level 4) and high-speed buses on dedicated corridors (SAE
level 4) will be available. These two pairs of functions will enable completely new
services for public and private transport.
Fig. 1. Elaboration on the VRA and ERTRAC roadmaps to automation, identifying
next-to-market automation functions
Based on the consolidated ERTRAC roadmap, two scenarios are likely to materialise by
2022:
• In the public transport domain
CyberCars (fully automated last-mile public transport services on carefully
selected, certified and designated infrastructures)
The CityMobil2 project [2] demonstrated the technical feasibility of such systems
in 2014, and last-mile shuttles complementing mass transit have since been slowly
deployed in those states where full road automation has meanwhile been made legal.
However important last-mile shuttles are, they are not the universal solution to
public transport problems. But the technology and the urban integration approach
used for such shuttles can become so by extending this kind of automation to bus
platooning, car-sharing combined with ride-sharing, and automated empty-vehicle
relocation. The combination of these three services, all enabled by the same
automation techniques already demonstrated and currently on the market, with
conventional mass transit will revolutionise public transport [3].
High-speed buses on dedicated corridors
This technology is already available today but still faces some legal problems and
requires some infrastructural adaptations (nothing different from the BRT lines
which are built every day). Between now and 2022 their diffusion could be
sufficient to have significant impacts.
• In the private vehicle domain
The SAE level 3 high-speed highway chauffeur allows the individual car driver to
dedicate her full attention to other tasks while travelling on the main road
infrastructures, and
the SAE level 4 low-speed parking garage pilot allows individual car drivers to
alight from their cars at their destinations and leave the cars to search for
parking and park themselves; the car will not be capable of driving far or of
returning to the trip origin, but the owner will no longer need to seek a parking
space.
The combination of these two functions will make commuting by private individual
car much more attractive than before; even longer journeys could be more
productive, and parking will no longer be a problem.
According to the roadmaps merged in Fig. 1 (taken from ERTRAC but almost
unanimously shared, with differences only on the arrival year of SAE level 5 [4]
vehicles [5]), these automation functions will all be on the market between 2019 and
2022 and will, for the first time, allow the automation revolution to start. They
will enable the rapid take-up of two new transport "modes": (i) the "perfect"
private transport mode, in which automation will enormously favour private
individual trips by making them easier and more comfortable, and (ii) the "perfect"
public transport mode, obtained by deploying ubiquitously in cities last-mile
capillary shared transport services together with fast and effective longer-distance
transport on corridors.
A commuter living in the outskirts of a big European city will decide to invest her
money in a new vehicle, and use it to shift her daily transport mode from (entirely
or partly) public transport to fully private transport, when the new vehicle ensures
a few features:
1. it must relieve her of the driving task, allowing her to perform other tasks
while commuting, freeing her time and allowing her to accept a longer commute
sitting in traffic;
2. it must relieve her of the search for parking, dropping her off at her
destination while it goes to park itself.
The commuter drives her vehicle only for the first mile, from her house to the first
motorway junction; then the vehicle drives itself with the highway chauffeur
automation function, allowing her to work on her computer, shop online, entertain
herself, or whatever else. With the vehicle in an SAE level 3 automation mode she
cannot sleep, but she can perform other tasks [4]. When reaching the end of the
motorway stretch of her trip she will need to resume control and drive from the last
motorway junction to her destination.
She will drive to the front door of her destination and alight there from the
vehicle. Using the SAE level 4 low-speed parking garage pilot automation function,
the vehicle will then go and park itself at very low speed.
This new mode of transport will solve two of the main issues that today discourage
the use of private cars: the time wasted in congestion and the time needed to search
for parking. At first (up to 2025 at least) it will work only for trips for which
parking is available not too far from the destination and most of the journey is on
motorway stretches. It will not be applicable to trips bound for the city centres of
most European cities, but it will attract most of the trips from periphery to
periphery.
The main impact to expect from the take-up of this new mode of transport [6] is a
shift from other modes toward private car use for most commuting trips in the
outskirts of large cities and throughout smaller ones; a longer-term effect to
expect is the choice of living locations further away from the cities, once
commuting has become more comfortable.
CityMobil2 shuttles [3] have proven that the technology for last-mile services is
already available; however, such technology still faces two main obstacles to full
deployment: the legal framework needed to allow driverless vehicles on all roads,
and the business case, which is not very positive when a very costly vehicle (the
shuttle) is used only for low-speed last-mile services.
The same enabling technology can, however, be used for two services which can solve
both issues.
Going from home to a train station in the morning, commuters could share rides. If,
instead of being fully automated, the vehicle were driven by one of the passengers,
the service would be the same but with no legal or vehicle-cost problems; naturally
the vehicle would then need to be relocated so that it does not make just one trip a
day. If there is no counterflow demand to relocate the vehicle to where it can be
used for the next trip, such relocation needs to be done by automation. Either the
low-speed automation already demonstrated for shuttles (empty vehicles can travel at
low speed with far fewer drawbacks) or a "platooning relocation" would do.
Platooning could pose fewer legal problems than full automation, since one driver
stays in the first vehicle of the platoon, and it requires less technology; by
allowing a higher speed, it should also solve the shuttle business-case problem.
Trips to and from train stations (or conventional high-quality public transport
stops) would then be covered, but high-speed platooning on corridors can also easily
improve the main public transport network. Small electric buses (30 to 45 places,
6 metres long) can provide a feeder service in medium-demand areas to and from the
main corridors; when the bus reaches a corridor to or from the city centre, the
driver would alight, leaving the bus to automatically join the first passing platoon
on the corridor.
The combination of these two services allows new, ubiquitous forms of public
transport to be deployed quickly and cost-effectively, constituting a new mode of
transport.
Depending on the quality of the ride, its comfort, and the ease of use of these new
services, they can become very popular even at prices higher than the minimal public
transport ticket commonly subsidised in Europe, finally allowing public transport to
become financially self-sufficient.
This can revolutionise mobility in the direction of public transport.
A commuter would book a ride to work the evening before for the next morning, and at
the given time she will either find a vehicle to drive or someone to give her a ride
(with all real-time communication via suitable apps), reaching the train station
perfectly on time to board the train on which she has a reserved seat. The vehicle
she left at the station would be driven back in a platoon by a professional driver
and parked close to the home of the customer driving the next pool.
If not going to the closest train station, the same last-mile vehicle (which can be
a 9-seater minibus) can go to a high-speed corridor, where it would join a
high-speed platoon to the city.
Such new services will induce a modal shift toward public transport [6], an
emancipation of public transport from subsidies, a possible decrease in car-ownership
rate and opportunities for new businesses related to managing transport and deliveries.
Both new “modes” are expected to become available at the same time. It will be a
matter of political (and industrial) will to decide which to push.
Private automated road transport has many advantages for users, many for the
economy, and perhaps some environmental and infrastructural drawbacks; on the other
hand, the shared and public automated transport mode is expected to have better
environmental and social impacts but many disruptive economic effects, with no clear
winner or loser, and it induces (and needs) a strong mind shift from users.
Whichever wins this race will influence and shape mobility and the economy for the
coming decades.
Even if the end point will be fully automated vehicles, hopefully shared and
available (depending on place and time) from door to door or in a multimodal
fashion, the "route" chosen to reach this longer-term future will influence people's
behaviour.
As shown in Fig. 2 below, even if the technology in the longer-term scenario will be
the same, the dominant mobility services might be very different depending on the
paths we choose today to reach full automation.
Fig. 2. From short-/medium-term to long-term scenarios, and the "importance of the path"
References
1. VRA Vehicle Road Automation Project Deliverable D1.1.3
2. Mercier-Handisyde, P.: CityMobil2 an EC funded project. In: Implementing Automated Road
Transport Systems in Urban Settings. Elsevier, New York (2017)
3. Alessandrini, A., Stam, D.: ARTS—automated road transport systems. In: Alessandrini, A.
(ed.) Implementing Automated Road Transport Systems in Urban Settings. Elsevier,
New York (2017)
4. SAE: Taxonomy and definitions for terms related to on-road motor vehicle automated driving
systems. SAE standard J3016 (2014)
5. Shladover, S.E.: The truth about “Self-Driving” cars. Sci. Am. 314, 52–57 (2016). https://doi.
org/10.1038/scientificamerican0616-52
6. Sessa, C., Alessandrini, A., Flament, M., Hoadley, S., Pietroni, F., Stam, D.: The socio-
economic impact of urban road automation scenarios: CityMobil2 participatory appraisal
exercise. Road Veh. Autom. 3, 163–186 (2016)
TrustVehicle – Improved Trustworthiness
and Weather-Independence of Conditionally
Automated Vehicles in Mixed Traffic Scenarios
1 Introduction
Automated vehicle technology has the potential to be a game changer on the roads,
altering the face of driving as we experience it today. Many benefits are expected,
ranging from improved safety, reduced congestion, and lower stress for car occupants
to social inclusion, lower emissions, and better road utilization through the
optimal integration of private and public transport. Many cars sold today are
already capable of some level of automation, while more highly automated prototype
vehicles are continuously being tested on public roads, especially in the United
States, Europe, and Japan. Automated vehicle technology has arrived rapidly on the
market and deployment is expected to accelerate over the next years. In fact, most
of the core technologies required for fully automated driving (SAE level 5) are
available today; however, reliability, robustness, and ultimately trustworthiness
have to be significantly improved to achieve end-user acceptance. System and
human-driver uncertainty pose a significant challenge in the development of
trustworthy and fault-tolerant automated driving controllers, especially for
conditional automation (SAE level 3) in mixed traffic scenarios under unexpected
weather conditions. The TrustVehicle consortium gathers key European partners who
cover the entire vehicle value chain and form a European ecosystem: OEMs, Tier 1
suppliers, the semiconductor industry, and software, engineering, and research
partners, with the goal of enhancing the safety and user-friendliness of level 3
automated driving (L3AD) systems.
Section 2 summarizes the overall goals of the TrustVehicle project as well as the
approach to reach them. Section 3 gives an overview of the respective work packages
and their content. Finally, Sect. 4 presents some results from the first project
year, and the paper concludes with a brief summary in Sect. 5.
2 Ambition
2.2 Objectives
O1. Systematic identification of critical road scenarios for the currently available
AD systems
The focus here is on the uncertainty associated with the behaviour of other road
users and with the sensor fusion system of the ego vehicle. This objective is
addressed through:
78 P. Innerwinkler et al.
O4. Development and demonstration of new tools for the cost- and time-effective
assessment of vehicle and driver behaviour in complex mixed traffic scenarios
This objective relates to the entire development and validation chain. It aims at
assessing vehicle and driver behaviour as well as at drastically reducing
development and test time.
• Enhanced simulation tool integrating traffic, vehicle powertrain, chassis,
controllers, sensor fusion system (see O2) and driver behaviour to assess complex
scenarios (such as those defined in O1) and their impact on drivability, safety and
acceptance by different road users.
• Validation of the simulation tool against real-world data (>4 validation cases) and in
terms of re-usability for the different vehicle platforms of the involved OEMs, with
focus on fail-operational behaviour and hand-/take-over scenarios (driver-in-the-
loop and driver-off-the-loop).
• 30% reduction of the time required for the simulation-based assessment of complex
road scenarios during the development phases of novel controllers for automated
driving.
• New tool for the objective assessment of vehicle behaviour during complex L3AD
scenarios. This will be based on data logging during real vehicle operation,
following the same principle as the commercially available AVL tools (e.g., AVL
DRIVE) for drivability assessment of conventional vehicles. The TrustVehicle tool
will include machine-learning capabilities, i.e., the scenario catalogue of the tool
will automatically evolve to support agile validation.
• 70% reduction of the post-processing and analysis time after experimental vehicle
testing in complex road scenarios.
O5. Evaluation of L3AD functions
• Evaluation and tailoring of selected L3 functions on real vehicles from three dif-
ferent road transport markets relying on the framework proposed in TrustVehicle.
3 Methodology
Fig. 2 The TrustVehicle driver-centric engineering approach to L3AD. The approach
combines a learning database, HMI development, co-simulation, virtual assessment as
well as real-world demonstration, while integrating users' expectations and
experiences
• Validation and testing case studies set up in a generic, tool-independent manner.
Test scenarios will be developed analytically for both static objects and dynamic
vehicles and obstacles.
4 Current Status
This section gives a brief summary of the activities performed in the first project year of
TrustVehicle.
As an example, the causes of fatalities within and outside urban areas in Austria in
2016 are depicted in Fig. 3. The findings mentioned above are confirmed: inadequate
speed is mainly a problem outside urban areas, while priority-related accidents are
most common in urban areas.
Based on these findings, the TrustVehicle focus is further refined:
• Young males and elderly people
• In urban areas
– Junction
– Conflict among users when sharing road space (cars, cyclists, heavy goods
vehicles and buses)
– Pedestrians and mopeds
• In suburban areas
– Intersections
– Motorcycles and cars
Fig. 5 Tofaş' LCV. A mule vehicle is collecting real-world measurements for the
virtual test within the Tofaş use case
Electric Bus
The electric bus under consideration (Fig. 6) should drive automatically towards the
electric charging points at the bus stop. In this scenario, the bus approaches the
bus stop and charging spot in manual driving mode, while the system provides driving
instructions (e.g. speed and distance to the area enabled for automated driving).
Then when
4.5 HMI
The TrustVehicle project aims at the devel-
opment of a human centered L3AD system,
based on identification of risky conditions by
combining driver state estimators with a
good knowledge of the environment around
the vehicle. The general HMI concept is
focusing on Ford Otosan and Linkker
demonstration scenarios, both of which are
targeting on low-speed automated driving
scenarios in specific areas. The following Fig. 10 Possible co-simulation setup as
features can be listed for the TrustVehicle used for the AVL driver simulator
general HMI concept:
• HMI supports safe transitions between automated and manual driving modes during
low-speed manoeuvring in mixed traffic situations
• Adaptive & intuitive HMI
• Measuring the driver state
• Identifying risky conditions by combining driver state estimators with the
information about the environment and other road users around the vehicle
• Prioritizing and adapting the information given to the driver
In the first phase of the HMI concept development, requirements and preliminary
specifications, including the architecture for the general HMI concept, have been
addressed. The requirements for the general HMI concept have been collected, and
each HMI requirement has been analysed to determine whether it is applicable to the
various driving modes: manual, transition and automated driving.
5 Summary
In this presentation of the TrustVehicle project, the user-centric approach for
improving the trustworthiness and availability of L3AD functions is described. The
driver's impressions and feelings are crucial for L3AD driving, since he/she should
be able to resume vehicle control if needed. They are therefore strongly taken into
account throughout the development process of the different components that
constitute the automated system. Questionnaires and tests on the driver simulator
are some of the measures taken within TrustVehicle to ensure this involvement of the
user in the development process, whether for planners and controllers, sensors and
sensor monitoring, or the HMI. A modular co-simulation approach ensures the
flexibility needed within the development process.
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 723324.
References
1. Valeo: Aquablade (2018). https://www.valeo.com/en/aquablade/. Accessed 25 June 2018
2. Infineon: In-cabin sensing applications (2018). https://www.infineon.com/cms/en/applications/
automotive/chassis-safety-and-adas/adas/in-cabin-sensing-by-time-of-flighttof. Accessed 25
June 2018
3. Troglia, M., et al.: TrustVehicle D2.1 Report on traffic road injuries (2017).
http://www.trustvehicle.eu/downloads
4. IRTAD: Road Safety Annual Report (2015)
5. Statistik Austria: (2017). http://www.statistik.at/web_de/statistiken/energie_
umwelt_innovation_mobilitaet/verkehr/strasse/unfaelle_mit_personenschaden/index.
html. Accessed 16 Aug 2017
Adaptation Layer Based Hybrid
Communication Architecture: Practical
Approach in ADAS&ME
1 Introduction
While our previous work [2] presented a comprehensive account of the merits of an
adaptation-layer-based approach, it did not cover any practical aspects of
implementing such an architecture. In the present paper, we discuss practical
aspects of implementing an adaptation-layer-based architecture for a hybrid
communication system in the context of the European project ADAS&ME.
The remainder of this paper is organized as follows. The next section provides an
overview of the ADAS&ME project, the communication technologies used, and the
corresponding communication requirements. It is followed by a description of the
practical approach taken to implement the adaptation-layer-based architecture in the
project. The paper then discusses the advantages and shortcomings of this practical
approach and finally concludes by presenting possible future work.
This section presents features of the practical implementation of the adaptation
layer architecture in ADAS&ME. It includes a description of the ADAS&ME
communication architecture, implementation aspects resulting from the hardware used
and existing communication stacks, and a brief description of the implemented
adaptation functions for incoming and outgoing data.
3.2 Implementation
Part of the above communication stack was implemented using existing hardware for
802.11p V2X in the form of the Denso Wireless Safety Unit (WSU). The rest of the
stack was implemented on a PC-based platform (Raspberry Pi), as depicted in Fig. 5.
This reuse of hardware once again emphasizes the importance of the modular structure
of the architecture.
[a4] Choosing one communication medium over the other based on their capabilities
(802.11p for sending out V2X messages and cellular communication for sending out
cloud messages).
[a5] Adapting object data received from the sensor data fusion system into a data
format suitable for sending out in collective perception messages.
[a6] Note: sensor data fusion is the system in the vehicle responsible for fusing
the data from on-board sensors (camera, radar, LIDAR, etc.). The data exchange
between this system and the communication stack takes place through the
management and applications layers.
The adaptation function [a5] was not envisioned in the original theoretical study of
the adaptation layer [2] but was deemed necessary in this practical implementation
within the context of ADAS&ME. This means that the adaptation layer must perform
adaptations not only for the incoming/outgoing data (over the communication channel)
but also for the data from/to the upper layers.
As ADAS&ME employs 802.11p V2X and cellular communication to address non-overlapping
requirements, coordinated sending of data over multiple media in parallel was not
implemented for outgoing data. Similarly, failover in case one communication medium
becomes unavailable was not implemented.
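The medium-selection and data-adaptation functions described above can be sketched in simplified form. The following is an illustrative sketch only, not the project code; the class names, field layout and the simplified collective-perception encoding are all hypothetical:

```python
# Illustrative sketch (not the ADAS&ME implementation) of adaptation
# functions [a4] and [a5]: routing outgoing messages to a medium by message
# type, and adapting fused object data into a collective-perception-style
# structure. All names and encodings here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class FusedObject:          # assumed shape of sensor-data-fusion output
    obj_id: int
    x_m: float              # position relative to the ego vehicle, metres
    y_m: float
    speed_mps: float

def select_medium(msg_type: str) -> str:
    """[a4]: choose a medium by capability -- 802.11p for local V2X
    messages, cellular for cloud-bound messages."""
    v2x_types = {"CAM", "DENM", "CPM"}   # ETSI message sets [4, 5, 6]
    return "802.11p" if msg_type in v2x_types else "cellular"

def objects_to_cpm(objects: List[FusedObject]) -> dict:
    """[a5]: adapt fused object data into a data format suitable for a
    collective perception message (simplified, hypothetical encoding)."""
    return {
        "messageType": "CPM",
        "perceivedObjects": [
            {"objectID": o.obj_id,
             "distance": {"x": round(o.x_m * 100),   # centimetres
                          "y": round(o.y_m * 100)},
             "speed": round(o.speed_mps * 100)}      # cm/s
            for o in objects
        ],
    }

cpm = objects_to_cpm([FusedObject(1, 12.5, -3.0, 8.2)])
assert select_medium(cpm["messageType"]) == "802.11p"
assert select_medium("cloud-status") == "cellular"
```

Routing by message type mirrors the non-overlapping requirements noted above: local safety-related messages go over 802.11p, cloud-bound data over cellular, so no per-message coordination across media is needed.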
4 Conclusions
References
1. ETSI EN 302 665, Intelligent Transport Systems (ITS); Communications Architecture
2. Mittal, P., Leinmueller, T., Spaanderman, P.: Adaptation layer based architecture for vehicular
hybrid communication. In: ITS World Congress 2017, Montreal, Canada, 29 October–2
November 2017
3. Webpage for EU project ADAS&ME. http://www.adasandme.com/. Accessed 15 June 2017
4. ETSI EN 302 637-2 V1.3.2, Intelligent Transport Systems (ITS); Vehicular Communications;
Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service,
ETSI Std., November 2014
5. ETSI EN 302 637-3 V1.2.2, Intelligent Transport Systems (ITS); Vehicular Communications;
Basic Set of Applications; Part 3: Specifications of Decentralized Environmental Notification
Basic Service, ETSI Std., November 2014
6. ETSI work item DTS/ITS-00167, Intelligent Transport Systems (ITS); Collective Perception
Service
7. ETSI work item DTS/ITS-00184, Intelligent Transport Systems (ITS); Vehicular Commu-
nications; Basic Set of Applications; Maneuver Coordination Service
Assistance and Mitigation Strategies in Case
of Impaired Motorcycle Riders:
The ADAS&ME Case Study
Abstract. Riding a motorcycle requires both physical and mental effort. These
requirements are amplified by factors like long riding hours, high or low
temperatures, high relative humidity levels or rain. Besides exposing the rider
to the external environment, the vehicle cannot offer full aerodynamic protection
and constrains him/her to a fixed position, which is less comfortable than that
of a car. Furthermore, physical effort is required to steer and actively balance
the motorcycle. Such factors may induce impairing states like physical fatigue,
distraction and stress. The work carried out within the ADAS&ME project aims to
create a system able to detect, and possibly in extreme conditions prevent, these
states, and then to provide adequate assistance to the rider during long touring
travels and, if the situation becomes safety critical, to activate intervention
functions with an embedded ad hoc safety strategy.
1 Introduction
The riding task is a complex and demanding activity from both a physical and a
mental point of view. Riders are in fact directly exposed to environmental and
weather conditions, such as high/low temperatures (sometimes extreme), high humidity
levels and atmospheric agents like rain, wind and fog. Besides that, the motorcycle
itself generates noise and vibrations that are difficult to attenuate, since the
rider sits a few centimetres above the engine. Motorcycle ergonomics are closely
related to the PTW (Powered Two-Wheeler) typology, but the knee flexion angle is
generally greater than 90°. Furthermore, the motorcycle has complex 3D dynamics in
which the rider plays an active role: for example, to set a curve trajectory the
rider does not merely steer in the direction of the curve but counter-steers and
then moves his/her body to increase the lean angle while continuously controlling
the throttle [1]. This is even more tiring if the vehicle is fully loaded and/or is
carrying a passenger.
These factors have a direct influence on the riding experience, reducing comfort and
inducing states like fatigue, stress or distraction. Kuschefski et al. [2]
identified climate, posture and noise as the highest sensory strains for riders.
Other studies also correlate hot climate conditions with accidents [3, 4]. The MAIDS
(Motorcycle Accidents In Depth Study) [5] analysed 921 motorcycle crashes that
occurred in Europe between 1999 and 2000 and showed that human factors were the
primary cause in 37.4% of the cases; in 10.6% of these the main factor was
"attention failure", a general term which includes distraction, stress and other
related rider impairments.
Two dedicated Use Cases within the ADAS&ME European project specifically address
PTWs, with the goal of creating an effective assistance and mitigation strategy in
such circumstances and, ultimately, developing an adaptive HMI system based on the
current rider state, providing customised and personalised support at different
levels of rider incapacity.
2.1 Overview
ADAS&ME is a research project funded by the EC under the Horizon 2020 framework
programme. The project addresses five different vehicle types: truck, conventional
car, electric car, bus and motorcycle. The general aim of the project is to develop
advanced driver/rider assistance system functions and an adaptive HMI (Human Machine
Interface) that take into account the driver/rider state and the situational and
environmental context, to ensure safer and more efficient road usage [6].
riding despite the fact that his/her ability to control the vehicle has
significantly deteriorated. A focus group study with riders [7] confirmed that this
can happen especially when motorcyclists are close to their final destination for
the day, and the relatively short remaining distance pushes them to avoid stopping
at an intermediate rest area.
The above Use Cases were communicated to end users, using both an on-line
survey and a focus group session, and further discussed and analysed by stakeholders
and experts through an online survey and a dedicated workshop, held in April 2017 in
Brussels [7, 8]. The top ranked scenarios for UC E were:
• E2: Assistance, during long range touring, in case of inattention;
• E1: Assistance, during long range touring, in case of tiredness;
• E3: Assistance during long range touring, in case of stress;
• E4: Activation of active systems, if the rider is more and more tired and ignoring
assistance;
while for UC F were:
• F1: Activation of active systems if the rider is fainting;
• F2: Activation of active systems if the rider is going to faint and ignoring assistance.
UC F was kept separate and not merged into UC E, with Scenario E4 representing a
bridge between them. This reflects end-users' feedback: the motorcyclists' community
is traditionally sceptical towards innovation and the advantages of active support
systems, as confirmed by the fact that the highest-ranked scenarios in UC E do not
foresee intervention, but rather milder "assistance" and mitigation advice, which
the rider can ignore at any time; however, riders may accept an active intervention
if the situation becomes safety critical, e.g. in case of an (imminent) loss of
control.
The focus group also provided very useful feedback for refining the rider states
under consideration; in particular for fatigue, riders identified as a topic of
potential interest the combination of muscular fatigue caused by vibrations, a fixed
riding posture kept for hours and demanding manoeuvres at bends, together with
exposure to high temperatures and extreme sunlight. As a result, and for this state
specifically, the term "physical fatigue" was introduced and is currently the focus
within ADAS&ME.
• Right glove:
– 1 air temperature and relative humidity sensor;
– 1 EDA (Electro-Dermal Activity) sensor, to measure skin conductance;
– 1 UV (UltraViolet) sensor, to measure the exposure to solar radiation;
– 1 6-axis IMU, to measure vibrations and hand orientation;
• Left glove:
– 1 air temperature and relative humidity sensor;
– 1 PPG (PhotoPlethysmoGram) sensor, to monitor PR (Pulse Rate);
– 1 UV sensor, to measure the exposure to solar radiation;
– 1 6-axis IMU, to measure vibrations and hand orientation;
• Undershirt:
– 1 3-electrodes ECG (ElectroCardioGram) sensor, to measure HR (Heart Rate)
and HRV (Heart Rate Variability);
– 1 chest strap, to monitor RD (Respiration Depth) and RR (Respiratory Rate);
– 1 temperature and relative humidity sensor, to measure torso skin temperature
and humidity;
• Back Protector:
– 1 GPS unit, to monitor the rider’s trip and speed;
– 1 altimeter unit, able to monitor air pressure;
– 1 9-axis IMU to measure torso orientation;
– the back protector also includes a control unit, which manages all the afore-
mentioned sensors, and which communicates with the motorcycle through an RF
(Radio Frequency) channel.
On board the vehicle there are other complementary sensors, which provide
information about the vehicle dynamics and environmental/situational data, in
particular:
Assistance and Mitigation Strategies 101
• 1 5-axis IMU, able to estimate roll and pitch angles, roll and pitch rates and the three
linear accelerations;
• the ABS unit, sharing information about the vehicle speed and brake usage;
• 1 air temperature sensor;
• 1 navigation unit, connected with the motorcycle through a BT (Bluetooth) channel,
sending information about surrounding traffic.
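The wearable and on-board streams listed above can be fused into one time-stamped sample for the monitoring algorithms. Below is a minimal sketch of such a record; all field names and values are illustrative assumptions, not the project's actual data model:

```python
from dataclasses import dataclass

@dataclass
class RiderSample:
    """One time-stamped fusion of wearable and on-board sensor readings.

    Field names are illustrative, not ADAS&ME's actual data model.
    """
    timestamp_s: float
    heart_rate_bpm: float        # ECG in the undershirt
    hrv_rmssd_ms: float          # heart rate variability
    respiration_rate_bpm: float  # chest strap
    skin_conductance_us: float   # EDA sensor in the right glove
    torso_skin_temp_c: float     # undershirt temperature/humidity sensor
    uv_index: float              # UV sensors in the gloves
    speed_kmh: float             # from the ABS unit
    lean_angle_deg: float        # roll angle from the 5-axis IMU
    air_temp_c: float            # on-board air temperature sensor
    riding_time_min: float       # accumulated from the GPS trip log

sample = RiderSample(
    timestamp_s=0.0, heart_rate_bpm=78.0, hrv_rmssd_ms=42.0,
    respiration_rate_bpm=16.0, skin_conductance_us=4.2,
    torso_skin_temp_c=34.5, uv_index=6.0, speed_kmh=90.0,
    lean_angle_deg=12.0, air_temp_c=33.0, riding_time_min=75.0,
)
print(sample.speed_kmh)  # -> 90.0
```

Such a flat record is only one possible design; in practice the wearable platform and the vehicle bus run at different rates and would need resampling before fusion.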
For the development of the rider state monitoring system, experiments with vol-
unteers were conducted at CERTH premises in Thessaloniki, during December 2017
and January 2018 [9]. The objective of the experiments was to collect data for training
the state monitoring algorithms, as well as for the evaluation of the accuracy of the
integrated wearable sensors, relative to medical reference equipment. The experiments
included both on-road testing and simulation tests using a motorcycle simulator placed
in a specifically adapted environmental chamber. Selected simulator scenarios and
environmental conditions relevant to the addressed UCs were examined.
The volunteers were instrumented with both the wearable platform and the refer-
ence medical equipment and additionally they responded to self-assessment ques-
tionnaires regarding their condition. The data captured during the experiments was used
for training three different rider state detection algorithms, addressing physical fatigue,
distraction and stress, based on machine learning classifiers, currently in development.
For each state, different parameters are used, and the algorithms have a different output
logic, as presented below.
• Physical Fatigue
Physical fatigue comprises two sub-states: muscular fatigue and thermal impair-
ment. The muscular fatigue due to riding can be broadly divided into two major
activities: maintaining the riding posture and generating the required forces to
control the motorcycle. Thermal impairment in hot weather typically begins with
hyperthermia driven by dehydration; in extreme cases the rider may subsequently
faint. The environmental chamber was used to
induce the state of thermal impairment by simulating hot weather conditions with
high humidity, high temperatures and strong heat radiation, while muscular
fatigue was induced by riding for one hour on the road and then continuing for
30 min on the simulator.
The main parameters used to address this state are: skin temperature in different
body regions, respiration depth, respiration rate, heart rate, heart rate variability,
skin wettedness, riding time, vehicle dynamics information (speed, lean angle,…),
air temperature, air relative humidity and UV index. From these inputs, three levels
of the rider condition are identified (uncritical, critical, risky), combined with a
confidence level.
• Distraction
Distraction can be seen as a subset of inattention in which the mismatch of applied
resources [10] to the driving task is caused by visual, auditory, biomechanical
(physical) or cognitive distraction. For the needs of Use Case E, only visual
distraction was studied, e.g. looking away from traffic too often or for too long [11].
102 L. Zanovello et al.
For the experiment a bright light was placed on the dashboard of the bike simulator
and on the left side of the rider to attract his/her attention.
The main parameters used to address this state are: head and torso translations and
rotations, vehicle dynamics information (speed, lean angle,…) and riding time. The
identified state comprises two levels (not distracted, distracted), along with a
confidence level.
• Workload Stress
In [12] the definition of workload is based on the amount of resources required by a
set of concurrent tasks, distinguished into visual, motor and mental workload.
During the Use Case E experiment, workload stress was induced in volunteers
while riding the simulator, using methods and tools discussed in [13–15].
The main inputs for this state are: heart rate, heart rate variability, respiratory rate
and skin conductance. Three detection levels are planned (normal, increased, high),
combined with their respective confidence levels.
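All three detectors share the same output contract: a discrete state level plus a confidence value. The toy function below illustrates that contract for physical fatigue only; the thresholds and the scoring rule are invented for illustration, whereas ADAS&ME trains machine-learning classifiers on the experimental data instead:

```python
def classify_physical_fatigue(riding_time_min, skin_temp_c, hrv_ms):
    """Toy stand-in for the fatigue classifier: returns (level, confidence).

    Thresholds are invented; the real system uses trained ML classifiers
    over many more inputs (respiration, skin wettedness, UV index, ...).
    """
    score = 0
    if riding_time_min > 60:  # prolonged riding
        score += 1
    if skin_temp_c > 36.0:    # possible thermal impairment
        score += 1
    if hrv_ms < 30.0:         # depressed heart rate variability
        score += 1
    # Map the crude score onto the three levels used in the project.
    level = ("uncritical", "critical", "risky")[min(score, 2)]
    confidence = min(0.5 + 0.15 * score, 1.0)  # crude monotone confidence
    return level, confidence

print(classify_physical_fatigue(90, 36.5, 25.0)[0])  # -> risky
```

The same (level, confidence) shape would apply to the distraction detector (two levels) and the workload-stress detector (three levels), which keeps the downstream assistance logic uniform.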
• To convey low-urgency information, temporary icons are suggested. Flashing should
be reserved for demanding/critical situations; in other circumstances it has a
distracting effect [19, 20];
• Riders are sceptical towards new technologies, and too-frequent feedback would
probably encourage them to turn off the whole system. Furthermore, the strategy
should take into account the distance to the final destination; no rider would stop to
rest if the target location is nearby [7, 16, 17];
• A multimodal approach is suggested. This was requested by riders during the focus
group [7]; it provides redundancy in case of an HMI element failure and, above
all, multimodal feedback is proven to generate shorter reaction times [19];
• Motorcycles, as vehicles, have specificities: there is no cabin, the dashboard has a
different size and position in the vehicle, the rider wears a helmet [21];
• The HMI elements available for motorcycles are far more limited in space and
capacity than those available for cars [16]. This should be taken into account when
defining the feedback. Nevertheless, the rider should have some possibilities to
customise the HMI experience;
• Advanced rider assistance systems should be employed only when the impaired
rider is no longer able to fully control the vehicle [7, 16, 17];
• Customisation and personalisation are necessary to achieve both user acceptance
and comfort. To offer riders the possibility to customise the way the feedback is
conveyed, a specific menu is added to the dashboard, where they can set their
preferences (e.g. turn off the haptic feedback in the gloves and in the helmet).
Furthermore, it is possible to change the Info Helmet settings through a mobile app,
developed for iOS.
The “Capsize Control”, on the other hand, is a subsystem able to increase directional
stability at low speeds, when the rider risks losing control of the vehicle. It is based on
the torque generated, through the so-called gyroscopic effect, by a pair of counter-
rotating gyroscopic flywheels mounted on the rear of the motorcycle. Its functionality
supports the stabilisation of the bike during a safe-stop manoeuvre.
• if the rider decides to ignore the warnings and his/her state deteriorates further,
the strategy will follow a stronger approach: along with the icon, a text will be
displayed, the auditory feedback will also include a vocal message and vibrations will
become more intense. The possibility of showing rest-area locations and the distance
to reach them via the navigator will be explored, and even of performing an automatic
re-routing that selects one of those locations as an intermediate destination (Fig. 5). If
the rider still ignores the more intensive warning, the system will perform a safety
check for a few seconds (e.g. there is low traffic, the motorcycle is not approaching an
intersection or a tight curve) and will then activate the recovery-mode function that
will limit, as previously said, the motorcycle performance. This active function
aims to convince the rider to stop.
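The escalating strategy described above is essentially a small state machine: mild warning, intensified warning, then an active intervention gated by a safety check. A minimal sketch, with invented stage and action names:

```python
# Hypothetical sketch of the escalating warning strategy:
# icon -> icon + text + voice + stronger vibration -> safety check -> recovery mode.
# Stage numbers, action names and the safety check are illustrative, not project code.

def next_action(warning_stage, rider_still_impaired, safety_check_ok):
    """Return the next HMI/vehicle action for the current escalation stage."""
    if not rider_still_impaired:
        return "dismiss_warnings"
    if warning_stage == 0:
        return "show_icon_and_haptic"              # mild warning
    if warning_stage == 1:
        return "show_text_voice_strong_vibration"  # intensified warning
    # Rider ignored both warnings: intervene only if the situation is safe,
    # e.g. low traffic, no intersection or tight curve ahead.
    if safety_check_ok:
        return "activate_recovery_mode"            # limit motorcycle performance
    return "repeat_intensified_warning"

print(next_action(2, True, True))  # -> activate_recovery_mode
```

Keeping the intervention behind an explicit safety gate mirrors the focus-group finding that riders accept active intervention only in safety-critical situations.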
At the moment the assistance and mitigation strategy is being implemented in the
motorcycle demonstrators. First tests with the vehicle are expected from October 2018.
4 Conclusions
Long motorcycle touring is a demanding activity, even though it is nearly always done
for leisure or lifestyle. The physical effort it requires, in combination with exposure to the
external environment and potential adverse conditions, can induce impairing states like
physical fatigue, distraction and potentially stress. In this paper, the work performed in
the ADAS&ME project to assist the rider in these situations and mitigate possible
negative consequences is presented. The paper describes how the scenarios were
identified and the on-going implementation of the rider monitoring system, which
consists of the sensors integrated in the protective gear and on the vehicle, as well as the
rider state monitoring logic. Furthermore, the design process for the assistance and
mitigation strategies for the addressed Use Cases is described: this includes the HMI
elements and the advanced rider assistance systems under development, a list of
guidelines for the effective implementation of such a strategy and, finally, the description
of the UML diagram defined for two scenarios. The assistance and mitigation strategies
will be tested from Autumn 2018, but already represent a promising approach to dealing
with potentially risky situations triggered by a rider impairment. The first version of the
overall Adaptive HMI rider monitoring system prototype is expected to be ready by
Spring 2019 and to be tested on both open roads and test tracks in Barcelona in Summer 2019.
Acknowledgements. The ADAS&ME project has received funding from the European Union’s
Horizon 2020 research and innovation programme under grant agreement No 688900.
References
1. Cossalter, V.: Motorcycle Dynamics, 2nd edn. LuLu Enterprises Inc., Morrisville (2006)
2. Kuschefski, A., Haasper, M., Vallese, A.: Advanced rider systems for powered-two-
wheelers (ADVANCED RIDER ASSISTANCE SYSTEM-PTW). In: Proceedings of 8th
Institut für Zweiradsicherheit Conference, Köln, Germany, 04–05 April 2010
3. De Rome, L., Senserrick, T.: Factors associated with motorcycle crashes in New South
Wales, Australia, 2004–2008. J. Transp. Res. Board 2265, 54–61 (2011)
4. Haworth, N., Rowden, P.: Proceedings of Australasian Road Safety Research, Policing and
Education Conference, 2006, Gold Coast, Queensland (2006)
5. MAIDS: In-depth investigations of accidents involving powered two wheelers, 2009,
Bruxelles, Belgium. www.maids-study.eu
6. www.adasandme.com. Accessed 2016
7. Dukic Willstrand, T., Anund, A., Strand, N., Nikolaou, S., Touliou, K., Gemou, M., Faller,
F.: Deliverable 1.2—Driver/Rider models. Use Cases and implementation scenarios,
ADAS&ME Project (2017). www.adasandme.com/dissemination
8. Dukic Willstrand, T., Anund, A., Pereira Cocron, M., Griesche, S., Strand, N., Troberg, S.,
Zanovello, L.: Collecting end-users needs regarding driver state-based automation in the
ADAS&ME project. In: Proceedings of 7th Transport Research Arena TRA 2018, Vienna,
Austria, 16–19 April 2018
9. Symeonidis, I., Nikolaou, S., Touliou, K., Gaitatzi, O., Chrysochoou, E., Xochelli, A.,
Manuzzi, M., Guseo, T., Zanovello, L., Georgoulas, G., Bekiaris, E.: ADAS&ME:
experiments for the development of a rider condition monitoring system. In: International
Motorcycle Conference (2018, accepted)
10. Young, K., Regan, M.: Driver distraction: a review of the literature. In: Faulks, I.J., Regan,
M., Stevenson, M., Brown, J., Porter, A., Irwin, J.D. (eds.) Distracted Driving, pp. 379–405.
Australasian College of Road Safety, Sydney (2007)
11. Kircher, K., Ahlström, C.: Issues related to the driver distraction detection algorithm AttenD.
In: First International Conference on Driver Distraction and Inattention, 2009, Gothenburg,
Sweden (2009)
12. Hoedemaker, M.: Summary description of workload indicators: workload measures, human
machine interface and the safety of traffic in Europe growth project. s.l.: HASTE. Institute
for Transport Studies. University of Leeds. Leeds (2002)
13. Kirchner, W.K.: Age differences in short-term retention of rapidly changing information.
J. Exp. Psychol. 55(4), 352–358 (1958)
14. Brouwer, A.M., Hogervorst, M.A.: A new paradigm to induce mental stress: the Sing-a-Song
Stress Test (SSST). Front. Neurosci. 8, 224 (2014)
15. Stansfeld, S.A., Matheson, M.P.: Noise pollution: non-auditory effects on health. Br. Med.
Bull. 68(1), 243–257 (2003)
16. Touliou, K., Margaritis, D., Spanidis, P., Nikolaou, S., Bekiaris, E.: Evaluation of Rider’s
support systems in power two wheelers (PTWs). Procedia Soc. Behav. Sci. 48, 632–641
(2009)
17. Bekiaris, E., Nikolaou, S., et al.: SAFERIDER-advanced telematics for enhancing the safety
and comfort of motorcycle riders. In: 17th ITS World Congress, Japan (2010)
18. Pauzie, A., Nikolaou, S.: Ergonomic inspection of IVIS for riders: recommendations for
design and safety. Paper 4040 Presented at the 16th ITS World Congress, 21–25 September
2009, Stockholm, Sweden (2009)
19. Politis, I., Brewster, S., Pollick, F.: Evaluating multimodal driver displays of varying
urgency. In: Proceedings of the 5th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (AutomotiveUI’13), October 28–30, 2013, Eind-
hoven, The Netherlands (2013)
20. Campbell, J.L., Richard, C.M., Brown, J.L., McCallum, M.: Crash Warning System
Interfaces: Human Factors Insights and Lessons Learned. US DoT Report No HS-810 697
(2007)
21. Broughton, P.S., Fuller, R., Stradling, S., Gormley, M., Kinnear, N., O’Dolan, C., Hannigan,
B.: Conditions for speeding behavior: a comparison of car drivers and powered two wheeled
riders. Transp. Res. Part F 12, 417–427 (2009)
Data, Clouds and Machine Learning
Towards a Privacy-Preserving Way of Vehicle
Data Sharing – A Case
for Blockchain Technology?
Abstract. Vehicle data is a valuable source for digital services, especially with
a rising degree of driving automatization. Although regulation on data protection
has become stricter with Europe’s GDPR, we argue that the exchange of
vehicle and driving data will increase massively. We therefore raise the question
of what a privacy-preserving way of vehicle data exploitation would be.
Blockchain technology could be an enabler, as it is associated with privacy-
friendly concepts including transparency, trust, and decentralization. Hence, we
launch the discussion on unsolved technical and non-technical issues and
provide a concept for an Open Vehicle Data Platform that respects the privacy
of both the vehicle owner and the driver using Blockchain technology.
focus at the time autonomously driven vehicles will face real-world problems on the
street and will have to force the driver to take over. However, the data collected within
current vehicles of limited smartness can be used beyond assisting their drivers in
driving. Moreover, vehicle data is valuable for third parties [1–3] including e.g. vehicle
manufacturers (i.e., OEMs), suppliers, and traffic managers to name three stakeholders,
although, there are still many open issues connected to the exchange of vehicle usage
data. One dominant challenge for vehicle and driving data exploitation is how to
safeguard the privacy of the driver. Although privacy regulation in Europe has become
stricter with the General Data Protection Regulation (GDPR) [4], we argue that the
exchange of vehicle usage data will increase substantially in the future due to two recent
developments: tech startups pushing artificial intelligence technologies and the rising
interest of the automotive industry in fostering the automated driving paradigm.
Current vehicle data provisioning approaches have several shortcomings. Data,
information, and services are mostly exchanged within proprietary closed environments, as
collected vehicle usage data is usually directly sent from the smart vehicle to a single
service provider (e.g., by a device connected to the OBD-II interface of the vehicle or
via the drivers’ smartphone). As a result, a vehicle owner willing to share data with
multiple service providers will have to provide the data multiple times while collecting
the data with different devices in parallel. This can be critical due to the large amount of
data collected by smart vehicles (up to 4 TB of data per day are expected [5]), and
because a significant portion of current service providers (e.g., Automile and Zubie) is
using dedicated OBD-II dongles to gather data from smart vehicles. Thus, it is currently
not feasible, or at least not practical, to use several services at the same time. Moreover,
these closed systems disrespect the vehicle owner’s privacy, as they make it transparent
neither how they further monetize the gathered data nor with whom they share it. They
typically do not allow the end user to control what data is transferred and shared, and
most of them have a lock-in effect, i.e. they use the vehicle data for their own purposes.
Finally, their business models do not scale yet, as their user community is still composed
mostly of early adopters [1].
each authorized service provider. The user can decide whether to share only
anonymized data (e.g., as required by traffic management systems), vehicle-specific data
(e.g., for OEMs for continuous improvement), or even user-specific data (e.g., as
required by insurance companies to provide flexible insurance rates in Pay-As-You-
Drive (PAYD) models [6]). Such a platform will be able to support a wide range of
service providers and allow different benefit/business models advantageous for both the
users and the service providers.
Towards proposing a concept for an Open Vehicle Data Platform, in Sect. 1 we
review existing solutions for vehicle data sharing, highlight their strengths and
weaknesses, and particularly focus on potential privacy issues. Thereafter, in Sect. 2, we
provide related work and background on Blockchain technology in the automotive
domain and for connected vehicles. Subsequently, we discuss the actors and roles of a
vehicle data sharing ecosystem and the underlying privacy challenge, and propose
possible privacy setting schemes protecting the privacy of the involved users, followed
by a concept for a Blockchain-based Open Vehicle Data Platform in Sect. 3. In the latter,
Blockchain technology ensures a trustworthy data exchange between all involved
entities and users. After providing a description of a conceptual workflow, we discuss
open issues and related aspects required to realize the proposed data sharing platform,
and thereby conclude the paper with a discussion and outlook in Sect. 4.
sensors, thereby connecting vehicles more and more to each other (V2V) as well as to
the surrounding infrastructure (V2I).
As a result, Blockchain technology has attracted enormous attention in research,
academia and industry. Various projects and initiatives covering different industrial
domains have been started in recent months with the goal of identifying real business
opportunities for the use of Blockchain in future products, or even of developing
concrete (distributed) applications where Blockchain technology can be beneficial;
the automotive industry, too, has identified potential areas for the use of Blockchains.
Recently, the car manufacturers BMW, GM, Ford and Renault
started the Mobility Open Blockchain Initiative (MOBI) together with other industrial
and academic partners such as Bosch, Blockchain at Berkeley, Hyperledger, Fetch.ai,
IBM and IOTA [8]. Also, other vehicle manufacturers are evaluating Blockchains or
are already working on concrete projects: In 2017, Daimler started a project where
Blockchain technology is used to manage financial transactions [9]. Furthermore, the
automotive supplier ZF teamed up with IBM and UBS to work on a Blockchain-based
automotive platform called Car eWallet, with the goal of paving the way for
autonomous vehicles by allowing automatic payments and by providing other convenience
features [10].
Hence, Blockchain has definitely gained attention in the automotive industry. However,
concrete ideas, products and services are needed to show that Blockchain is more than a
hyped technology and actually enables new business cases.
(ii) Create value by selling the data collected from a mass of vehicles to third
parties, which in turn use it as input for algorithms.
(iii) Further improve the business offerings of service providers and develop new
services.
Furthermore, in times of a shift of the automotive industry towards digitalization, of
managing different SAE levels of autonomous driving on the road simultaneously,
and of the Internet of Things, where sensors are increasingly connected to the
Internet, the automotive industry still tries to solve many long-known problems.
These include, for example, the detection of the driver’s distraction, fatigue and
trust, or the vehicle’s security and safety, which will increasingly be done in the
cloud by feeding the algorithms with sensitive, privacy-relevant data from vehicle
usage.
Data ownership of vehicle sensor data still seems to be unclear from a legal
perspective. The driver, vehicle owner, passengers, and the vehicle manufacturer may
all claim a right to certain data. In the AutoMat project, coordinated by Volkswagen, it
is argued that, as usual in other domains, e.g. in the music business, “the copyright is
distributed proportionally among the members of the value chain” [11]. This copyright
distribution would give vehicle manufacturers the right to use the data a driver
produces without charge, and would thus put vehicle manufacturers in the profitable
data platform provider role (as they can easily integrate a data interface in their cars).
However, from the driver’s/vehicle owner’s/passenger’s perspective, copyright should
not be distributed, as there would not be any data without them driving the vehicle.
This is usual in many domains: digital camera manufacturers, for example, do not hold
copyright on the photos produced, and a competitive market with open data platforms
will force innovative solutions and offer more benefits to the data owner to attract data
provision.
Fig. 1. Vehicle usage data can be used for various services and by different entities and bring
advantages to both the vehicle owner/user as well as the service provider/data consumer.
Fig. 2. Actors and value flows (e3value model) of a vehicle data sharing ecosystem.
organizational consumer (in current scenarios from the market usually without the
knowledge of the driver) pays the service provider for the development and service
provision in the background, in order to get the data or access to a valuable service
based on this data.
connected vehicles. In one of these studies, Walter et al. [13] detail the user concerns
regarding connected vehicles and highlight the need for a privacy-aware data sharing
mechanism.
Defining a privacy configuration mechanism w.r.t. usability and transparency opens
up different options:
One approach is a distinction between vehicle-specific and driver-specific data,
where one can opt to share both (either anonymized or not), just one, or none.
Another approach would be to have four easily understandable levels with
decreasing privacy: (i) don’t share, where simply no data is shared at all; (ii) private,
where data is provided e.g. to calculate some basic individual statistics, but cannot be
used for anything else; (iii) anonymized for public usage, where data can be used as in
the private level and is additionally provided to the public in an anonymized way; and
(iv) public, where all data is provided to the public. However, this approach would
require drivers to be made aware of it and service providers to adopt the concept; it
limits possibilities, perhaps opens legal loopholes and, at the end of the day, lacks
transparency about which specific data a service has access to.
Therefore, we argue that it is feasible to adopt the approach of Android smartphone
applications, which clusters access to certain data into topics (e.g., an app needs
access to one’s contacts and images). The level of detail is a decisive factor for such
clusters: emission values can be clustered under a broad topic named vehicle sensor
data or treated as an individual emission values category, while using very granular
categories would require a basic technical understanding from every user. The authors
still see potential for improvement, as this solution carries a touch of information
overload, comparable to terms and conditions that no one really reads carefully.
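The topic-based permission idea can be sketched in a few lines. The topic names below are invented placeholders (granularity is, as noted, an open question), and the service identifiers are purely illustrative:

```python
# Illustrative sketch of Android-style topic permissions for vehicle data.
# Topic names and service identifiers are invented for this example.
VALID_TOPICS = {"emission", "vehicle", "environment", "traffic", "driver", "ride"}

class PrivacySettings:
    """Per-service grants: which data topics a service may access."""

    def __init__(self):
        self.granted = {}  # service id -> set of granted topics

    def grant(self, service, topics):
        unknown = set(topics) - VALID_TOPICS
        if unknown:
            raise ValueError(f"unknown topics: {unknown}")
        self.granted.setdefault(service, set()).update(topics)

    def may_access(self, service, topic):
        return topic in self.granted.get(service, set())

settings = PrivacySettings()
settings.grant("insurance_app", {"ride", "vehicle"})
print(settings.may_access("insurance_app", "driver"))  # -> False
```

A real implementation would additionally need revocation and an audit trail, which is exactly where the Blockchain-based concept discussed next comes in.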
Fig. 3. Data exchange between origin (vehicle) and target (service providers) is managed by a
broker using Blockchain technology for smart contracts.
In the proposed concept, several Brokers will take over the aforementioned tasks,
and thereby also allow connected vehicles to switch between different Brokers or even
to store data on different locations. The Blockchain will thereby fulfill two essential
tasks. Firstly, the Blockchain provides tamperproof storage for smart contracts as well
as other transactions, and secondly also provides a way to ensure the authenticity of
data collected by a connected vehicle and stored on an online storage, as the hash of a
collected dataset is integrated in a transaction and then stored on the Blockchain. Such
a transaction can also be seen as a trigger for service providers informing them about
the latest available dataset.
Please note that storing data directly on the Blockchain is not advisable from a
technological point of view. Also note that existing contracts on the Blockchain can
simply be revoked or changed by filing a new contract between the connected vehicle
and the concerned service provider.
The proposed concept will rely on two different entities which are stored on the
Blockchain, namely
(i) Smart contracts, describing which data is shared with a certain service provider
and specifying the corresponding reward. Each contract will contain information
about the Broker that is used to store the collected data, and the timespan in which
a certain service is allowed to access the collected data. Each smart contract will
be signed by the connected vehicle (its owner) and the service provider before it is
stored on the Blockchain;
(ii) Dataset transactions, containing the hash of a dataset stored on the online storage
of a Broker. Every transaction is signed by the connected vehicle (or its owner),
and also by the Broker once the dataset has been successfully transferred to its
online storage and verified.
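The two on-chain entities can be sketched as plain data structures. This is a minimal illustration under several assumptions: field names are invented, signatures are placeholder strings rather than real cryptographic signatures, and SHA-256 stands in for whatever hash the chain would use:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class SmartContract:
    """On-chain contract: which topics a provider may access, via which
    Broker, for which timespan and reward. All fields are illustrative."""
    vehicle_id: str
    service_provider: str
    data_topics: list
    broker: str
    valid_until: str
    reward: str
    signatures: dict = field(default_factory=dict)

@dataclass
class DatasetTransaction:
    """On-chain pointer: only the hash of the off-chain dataset is stored."""
    vehicle_id: str
    dataset_hash: str
    signatures: dict = field(default_factory=dict)

def hash_dataset(dataset: bytes) -> str:
    return hashlib.sha256(dataset).hexdigest()

contract = SmartContract(
    vehicle_id="veh-001", service_provider="maintenance-svc",
    data_topics=["vehicle", "ride"], broker="broker-A",
    valid_until="2019-12-31", reward="per-dataset micropayment",
)
contract.signatures["owner"] = "sig-owner-placeholder"
contract.signatures["provider"] = "sig-provider-placeholder"

raw = json.dumps({"speed_kmh": [88, 91, 90]}).encode()
tx = DatasetTransaction(vehicle_id="veh-001", dataset_hash=hash_dataset(raw))
tx.signatures["owner"] = "sig-owner-placeholder"
print(len(tx.dataset_hash))  # -> 64 (hex digits of a SHA-256 digest)
```

Storing only the hash keeps the payload off-chain while still letting any party verify later that the dataset held by the Broker is the one the vehicle announced.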
The proposed concept is able to securely interconnect connected vehicles and
service providers in a privacy-preserving way, by utilizing the Blockchain as a tamperproof,
decentralized database, as well as by using dedicated Brokers providing a secure online
storage and handling access control w.r.t. the stored data. In the following, we
120 C. Kaiser et al.
summarize the seven steps required to share data between a connected vehicle and a
service provider and use this example to highlight the benefits of the proposed vehicle
data sharing platform:
1. Initially, the owner of a connected vehicle wants to use a certain service and
consequently gets into contact with the responsible service provider. In this initial
step, the user is informed about the type of data the service provider requires to
provide a specific service.
2. If the user agrees to these terms, a smart contract specifying the relation between the
connected vehicle, its owner, and the service provider is created and signed by the
vehicle owner (representing the connected vehicle) and the service provider.
3. Once the smart contract is finalized, it will be stored on the Blockchain.
4. While being used, the connected vehicle will continuously collect valuable data,
which is divided into datasets (e.g., after a predefined time or once a certain amount
of data is collected) and sent encrypted to the online storage of the Broker. Each
transferred dataset is accompanied by a dataset transaction containing the hash of
the dataset as well as the digital signature of the connected vehicle (its owner).
5. Hence, the Broker can on the one hand verify that the dataset was not altered while
being transferred, and is on the other hand prevented from changing the dataset
itself, as this would invalidate the digital signature already included in the dataset
transaction. Once the currently received dataset is verified, the Broker will add its
signature (thus completing the transaction) and broadcast it on the Blockchain
network.
6. Service providers can monitor the Blockchain and will directly learn about the
latest available dataset by looking for relevant dataset transactions. Once such a
transaction is found, the service provider requests the dataset by establishing a
connection with the Broker.
7. Next, the Broker looks for a suitable smart contract on the Blockchain and provides
access to the data as specified in the smart contract, or declines the request if no
smart contract is found or it has been revoked.
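The Broker's verification in step 5 can be sketched in miniature: recompute the hash of the received dataset, compare it with the hash announced in the dataset transaction, and only then co-sign. Signature checking is mocked as a boolean here, and the function name is an invented illustration:

```python
import hashlib

def broker_accept_dataset(dataset: bytes, announced_hash: str,
                          owner_signature_present: bool) -> bool:
    """Step 5 in miniature: the Broker recomputes the dataset hash and
    compares it with the hash in the dataset transaction before adding
    its own signature. Real signature verification is mocked as a flag."""
    if not owner_signature_present:
        return False  # no owner signature, nothing to co-sign
    # Tamper check: the off-chain payload must match the on-chain hash.
    return hashlib.sha256(dataset).hexdigest() == announced_hash

data = b'{"speed_kmh": [88, 91, 90]}'
h = hashlib.sha256(data).hexdigest()
print(broker_accept_dataset(data, h, True))          # -> True
print(broker_accept_dataset(data + b"x", h, True))   # -> False (tampered)
```

Because the owner's signature covers the announced hash, a Broker that altered the stored dataset would invalidate its own co-signed transaction, which is what makes the scheme tamper-evident.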
This paper aimed to launch the discussion on how Blockchain technology may
help to establish an open vehicle data sharing platform respecting the privacy of both
the vehicle owner and the vehicle driver. Smart contracts are introduced as a means to
fully digitize the data sharing relationship between a consumer (e.g. a driver who
provides his data in order to use services) and a service provider (e.g. a provider of a
preventive maintenance service). They describe what kind of data will be provided, by
whom, and for what data exploitation purpose. While these smart contracts are stored
on the Blockchain to increase trust between the vehicle data sharing ecosystem’s
stakeholders, the shared data itself will not be stored on the Blockchain, but for
instance on a separate data platform or a data market.
However, a series of issues and research topics remain open and will be targeted in
future work:
Vehicles would need to meet certain prerequisites for the proposed concept. For
example, a standardized vehicle data interface across manufacturers, through which in
general all vehicle data can be exported (to be stored on an SD card or hard drive if
used for private purposes, or to be sent to online destinations), would ease data
acquisition. Only data which is marked to be stored or sent somewhere should be
captured; all other data should be deleted or continuously overwritten.
In order to participate, users need to be able to authenticate themselves to the
vehicle and the Broker (e.g. to use their privacy settings in every vehicle they use), so
they need to register and have an identity.
Using Blockchain technology ensures a privacy-preserving way to securely share
data from the vehicle with the service provider. If a service provider gets access to
one’s data, it is not allowed to resell it unless this is explicitly stated in the contract.
In practice, however, reselling cannot be prevented with the presented concept, so
privacy cannot be fully ensured.
As mentioned in Sect. 3.2, how to cluster data into useful groups and at which
granularity is a topic for future research. An initial version could be as follows:
– Emission data
– Vehicle data (e.g. base weight, number of passengers, year of manufacture, type,
brand)
– Environment data (e.g. road topography, temperature outside, rain)
– Traffic data (e.g. detected entities around the vehicle, including humans and
vehicles, and information about the street’s throughput rate)
– Driver data (e.g. Driver ID, music channel, mood, fatigue level, driving score, heart
rate)
– Ride data (e.g. GPS position, temperature inside, start datetime, target)
– Other data
References
1. Stocker, A., Kaiser, C., Fellmann, M.: Quantified vehicles - novel services for vehicle
lifecycle data. J. Bus. Inf. Syst. Eng. 59(2), 125–130 (2017)
2. Stocker, A., Kaiser, C.: Quantified car: potentials, business models and digital ecosystems.
e & i Elektrotechnik und Informationstechnik 133(7), 334–340 (2016)
3. Kaiser, C., Stocker, A., Festl, A., Lechner, G., Fellmann, M.: A research agenda for vehicle
information systems. In: Proceedings of European Conference on Information Systems
(ECIS) (2018, to be published)
4. European Commission: Data protection in the EU (2018). https://ec.europa.eu/info/law/law-
topic/data-protection/data-protection-eu_en
5. Krzanich. B.: Data is the new oil in the future of automated driving (2016). https://
newsroom.intel.com/editorials/krzanichthe-future-of-automated-driving/
6. Husnjak, S., Perakovi, D., Forenbacher, I., Mumdziev, M.: Telematics system in usage based
motor insurance. 100, 816–825 (2015). Elsevier Ltd. Conference of 25th DAAAM
International Symposium on Intelligent Manufacturing and Automation, DAAAM 2014
7. Nakamoto, S., Bitcoin: a peer-to-peer electronic cash system, Whitepaper (2008). http://
www.bitcoin.org/bitcoin.pdf
8. Russel, J.: BMW, GM, Ford and Renault launch blockchain research group for automotive
industry. Techcrunch, May 2018
9. Dotson, K.: Daimler and LBBW issue $114 M corporate bond using blockchain.
SiliconAngle, June 2017
10. Kilbride, J.: Secure Payments “On The Go” With Blockchain Technology From ZF, UBS
and IBM. IBM, September 2017
11. AutoMat-Project. Automat: Connected car data - the unexcavated treasure. Youtube (2018).
https://www.youtube.com/watch?v=uRjvnahJ-9o
12. Valasek, C., Miller, C.: Remote Exploitation of an Unaltered Passenger Vehicle, White
Paper, p. 93 (2015)
13. Walter, J., Abendroth, B.: Losing a Private Sphere? A Glance on the User Perspective on
Privacy in Connected Cars (2018)
Challenges and Opportunities of Artificial
Intelligence for Automated Driving
Although the possibility of using computers to control cars was already proposed in the
late 1960s [1] and a suitable software algorithm for lane recognition based on Artificial
Neural Networks (ANNs) was developed as early as 1989 [2], research and development
for automated vehicles (AVs) did not become a prime and widespread interest until the
late 2000s. This was principally due to the fact that the use of ANNs for image/object
recognition (via classification or prediction) requires both sufficiently efficient hardware
for the parallelized execution of matrix multiplications and adequate amounts of data for
training ANNs. Over the course of two decades, these restrictions were gradually
alleviated. CPU performance initially increased in line with Moore's Law, but,
more importantly, it was possible to fundamentally boost the performance of relevant
algorithms with the switch to GPUs in 2009 [3]. The functional and widespread
application of ANNs in Machine Learning (ML) was further enabled by the availability
of large amounts of training data (Big Data), which has increased in an unprecedented
manner with the introduction of digital and mobile devices as well as corresponding
storage and communication technologies. The subsequent success of ML in several
fields, e.g. speech recognition, image analysis and machine translation, has
made this subdomain of AI methods the dominant solution for practical applications.
For an analysis of the role of AI in AD, it is instructive to take a step back
and survey the landscape of current research and development efforts in the field.
While Advanced Driver-Assistance Systems (ADAS) support human drivers in
certain driving tasks, e.g. by maintaining a specified velocity or keeping the vehicle in
lane, increasing automation with the successive transfer of driving responsibilities to AI
requires a complete set of capabilities spanning environment recognition as well as
motion planning and control. The methods and hardware that can be employed to meet
these requirements are detailed in the following sections.
methods can be applied for classification, regression and clustering tasks based on large
data sets; specific algorithms include linear and logistic regression, decision
trees (e.g. Iterative Dichotomiser 3 or random forests), support vector machines and
Bayesian models. Among these ML methods and algorithms, deep neural networks (DNNs) have been the focus
of ML-related research efforts over the past decade, yielding variations adapted to
specific learning tasks and algorithms, including feedforward networks, convolutional
neural networks, recurrent networks, generative adversarial networks (GANs) and long
short-term memory (LSTM) [5].
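As a minimal illustration of one of the classical ML algorithms named above, the following pure-Python sketch trains a one-dimensional logistic regression by batch gradient descent; the toy data and hyperparameters are illustrative assumptions:

```python
import math

# Minimal logistic regression trained by batch gradient descent, a
# pure-Python sketch of one of the classical ML algorithms listed above.
# Toy data and hyperparameters are illustrative assumptions.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # prediction error
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)             # batch gradient step
        b -= lr * gb / len(xs)
    return w, b

# Toy 1-D task: the label is 1 whenever x > 2.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 if sigmoid(w * x + b) > 0.5 else 0
```

DNN training generalizes this scheme: many such weighted units are stacked in layers and their gradients are computed jointly by backpropagation.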
For AD, deep learning methods using ANNs are of fundamental importance for
environment recognition (object detection based on image classification) which pro-
vides the basis for motion planning and control. As discussed above, the application of
these methods was enabled by significant advances in computer hardware, which will
be detailed in the following section.
1.2 AI Hardware
Training an ANN requires high-performance computing (HPC) to process big data.
Compute-intensive technology therefore became essential for progress in AI,
making companies with the corresponding know-how and infrastructure the drivers of
AI technology. Moreover, inference also requires high computing power and thus cannot
easily be performed on edge devices with a restricted energy supply. Additionally,
some applications require real-time capabilities. Fortunately, substantial hardware
improvement is possible by optimizing the chip architecture for the
arithmetic operations of ANNs, which consist mainly of matrix multiplications.
The basic optimization strategy relies on parallelization, exploiting the so-called
"embarrassingly parallel" workload. A straightforward solution was to perform
general-purpose computation on GPUs to accelerate training significantly. GPUs
enable much higher data throughput compared to CPUs and reduce the power con-
sumption at the same time. Another hardware solution is based on Field-Programmable
Gate Arrays (FPGAs) which enable designers to reprogram the underlying hardware
architecture to support the parallel computing operations. Application-Specific Inte-
grated Circuits (ASICs) outperform FPGAs since they are specifically designed and
optimized for a certain task. Such ASICs are often multi-processor Systems-on-Chip (SoCs)
that incorporate GPUs and CPUs as well as accelerator cores optimized for certain
operations like image processing. Their main disadvantages are inflexibility and high development
costs. Today, off-the-shelf hardware is not optimized for ML. Therefore, there is a high
demand for hardware innovations. Fortunately, there are several approaches to increase
the computing power and to minimize the power consumption [6].
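The connection between ANNs and matrix multiplication mentioned above can be made concrete: a fully connected layer is a matrix-vector product in which every output element can be computed independently, which is exactly the "embarrassingly parallel" structure that GPUs exploit. A pure-Python sketch with illustrative sizes and values:

```python
# A fully connected ANN layer is a matrix-vector product plus bias followed
# by a nonlinearity. Each output element can be computed independently of
# the others -- the "embarrassingly parallel" structure that GPUs exploit.
# Sizes and values below are illustrative.
def matvec(W, x):
    # Each row's dot product is independent and could run on its own core.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

W = [[0.2, -0.5, 1.0],
     [0.7,  0.3, -0.1]]   # 2x3 weight matrix
b = [0.1, -0.2]           # bias vector
x = [1.0, 2.0, 3.0]       # input activations

layer_out = relu([h + bi for h, bi in zip(matvec(W, x), b)])
```

A GPU evaluates many such independent dot products simultaneously, which is why the switch from CPUs yielded such a large speed-up for ANN workloads.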
The enormous potential impact on all industry segments led to a race for more
efficient chips between IC vendors, tech giants, IP vendors and various start-ups. It is
remarkable that various start-ups try to compete with the big IC players in such a
cost-intensive branch of industry. Designing an ASIC can cost up to hundreds of millions of
dollars and requires a large team of experienced engineers. The long design process
(typically 2–3 years) necessitates a large number of chip sales, and regular improvement
is necessary to keep up with fast-changing software development. The early stage
of AI technology in particular may lead to significant changes in hardware development in the
126 B. Wilsch et al.
upcoming years. Only the conviction that new chips tailored to AI
applications can strongly outperform state-of-the-art hardware justifies such
investments and the confidence to compete with highly experienced IC giants.
In the automotive sector and for CAD developments in particular, there is a high
demand for better hardware and various innovations are expected in the near future. As
an indication, several trends and developments are provided in the following:
• MobilEye introduced its fifth generation SoC “EyeQ5” for fully autonomous
driving at the CES 2018 which will be in series production by 2020. The perfor-
mance target is to achieve 24 trillion operations per second (TOPS) under a power
consumption of 10 W. The most advanced TSMC 7 nm-FinFET process is con-
sidered for production to address the performance targets. Intel plans to combine the
EyeQ5 with its “Intel Atom” processor and to develop an AI computing platform
for autonomous driving. Intel and MobilEye claim that two EyeQ5 SoCs and an
Intel Atom processor will be sufficient to enable fully autonomous driving.
• The automotive supplier ZF built the “ZF ProAI” supercomputing self-driving
system which is based on the “Nvidia DRIVE PX 2 AI” computing platform. ZF
claims to follow a modular and scalable system architecture that can be applied to
any vehicle and tailored according to the application, the available hardware and the
desired automation level. Audi uses this self-driving system in the world's
first level 3 vehicle, in which self-driving capabilities are available in traffic jams on
motorways at speeds of up to 60 km/h. Baidu cooperates with ZF and has announced
that it will use the "ZF ProAI" for automated parking.
• Nvidia introduced its new SoC “Xavier” at the CES 2018 which will offer up to
30 TOPS under a power consumption of 30 W. The chip will be fabricated by the
TSMC 12 nm-FinFET process, with series production starting in 2019. True level 5
autonomous vehicles will need at least two such chips to provide sufficient
computing power. Therefore, Nvidia’s new “DRIVE Pegasus AI” computing
platform will incorporate two “Xavier” SoCs and two discrete GPUs. It will enable
320 TOPS and consume up to 500 W. According to Nvidia the computing power
should be sufficient for fully autonomous driving.
• NXP developed its “BlueBox” autonomous driving platform. It incorporates an
automotive vision and sensor fusion processor capable of processing AI applica-
tions. The performance is stated as 90,000 Dhrystone million instructions per
second (DMIPS) under a power consumption of 40 W.
• Renesas has a similar automotive computing platform with its “R-Car” SoCs which
achieve 40,000 DMIPS.
More general hardware solutions are necessary due to the demand for higher
computing power, lower power consumption and cost reduction. More sensors will be
attached to the car in the future. Within the frame of the ImageNet contest, the
performance of object detection has been increased in recent years by means of higher
model complexity. This tendency implies a growing number of ANN parameters. Safety
is a crucial issue for the breakthrough of self-driving cars. Therefore, more complex
models will be presented to increase the robustness of object detection and inference.
This corresponds directly to more complex AI algorithms and a growing demand for
computing power and higher energy efficiency. In the automotive domain, the new SoCs tailored
for machine learning tend to be more complex since high data throughput is necessary
and moving data between different chips deteriorates performance. Moore's law
still assures a continuous increase in the number of transistors integrated on a
chip. Therefore, the size of future optimized SoCs should scale up. For example,
Nvidia’s new “Xavier” SoC is one of the most complex systems to date with more than
9 billion transistors. Both market leaders MobilEye/Intel and Nvidia plan the first series
production of their new SoCs and already mentioned the development of next SoC
generations (Nvidia's "Orin", MobilEye's "EyeQ6"). It is important to note that
Nvidia's Xavier SoC architecture was recently certified by TÜV SÜD with ASIL-D,
the highest safety rating of the automotive industry's functional safety standard
ISO 26262. This is an important step since autonomous driving requires maximum safety.
Standardization of an open automotive AI platform can increase competition
between IC manufacturers and make OEMs and Tier 1s more independent from IC
giants. Another possibility is close cooperation between IC manufacturers, OEMs and
Tier 1s, leading to distinct solutions for automotive AI computing platforms. In such a
scenario e.g. an “Intel Inside” label could be a unique selling point if the performance
differs significantly between IC manufacturers. One argument for distinct solutions
could be a higher efficiency thanks to a hardware-software co-design process.
Today, Nvidia and Intel/MobilEye offer hardware as well as software solutions.
However, both market leaders also offer their hardware separately, which enables
modular hardware integration in open platforms such as Baidu's "Apollo". Both
approaches can be successful, and at this point it is not obvious which will
ultimately find widespread application.
On the basis of the current state of AI hardware and methodology for AD established
above, it is now possible to examine the opportunities and challenges intertwined
with their application in the following sections.
2 Opportunities
Besides its decisive role in enabling automated driving, AI in the form of ML also
provides the key capabilities for the interaction between the driver, who will succes-
sively transition into a user of autonomous services, and the vehicle. In this field of
human-computer interaction, applications can draw directly from the success of ML in
the fields of natural language processing and facial recognition. Such functionality
can initially be employed to enhance safety in low-level automation, e.g. by detecting
driver fatigue and alerting the driver; it may then be employed for gesture recognition
to enhance driver comfort in higher-level automation, before eventually enabling the
provision of new services to users of autonomous
vehicles. For example, a face scan could be used to access a vehicle and the integration
of digital assistants in vehicles paves the way for various new service offers for
drivers/users increasingly freed from driving obligations.
The use of brain-machine interfaces further unlocks potential in vehicle operation
by providing alternatives to mechanical controls such as gas pedals or steering wheels,
thus granting access to people who are not capable of operating the established control
system [7]. Such applications present prime examples of the potential of AI to
3 Challenges
cause accidents than with computer errors. This skewed perception is further amplified
by the disparity in media attention attributed to rare but entirely new and unusual
accidents caused by computer error in comparison to common human errors. In con-
sequence, research indicates that AI-controlled cars would need to outperform humans
by one to three orders of magnitude to ensure user acceptance [8].
The CARTRE project tackled the complexity of CAD using eleven topical categories,
one of which was dedicated to "Big Data, AI and their application", thus reflecting
the central importance of each of these two fields for the successful application
of the other. During the course of the project, links to other topics were
established and again showed clearly that even if CAD cannot be equated with AI,
many of the issues that must be resolved for its implementation are direct consequences
of the use of computer intelligence. The input for the EU research agenda that was
presented in the respective CARTRE position papers¹ for Big Data and AI covers legal
(regulation and insurance) and ethical aspects as well as requirements for data avail-
ability and testing and validation methods. Although the development of new AI-based
CAD functionalities will always have to occur within the ethical and legal framework,
it is the questions concerning data availability, AI training and validation and the
traceability of AI-based decision-making that are at the root of the discussion of ethical
and legal aspects and will further be decisive for future improvement of vehicle
intelligence and CAD implementation. These issues will thus be examined in more
detail below.
¹ Available online at www.connectedautomateddriving.eu.
to a RAND Corporation report [14], 100 vehicles would have to drive non-stop for 500
years to achieve 20% safer driving capabilities than humans.
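The scale of such an estimate can be checked with simple arithmetic. The average speed used below is an illustrative assumption, not a figure from the report, which reasons in required test miles:

```python
# Rough arithmetic behind the scale of the estimate quoted above. The
# average speed is an illustrative assumption and not a figure from the
# RAND report, which reasons in required test miles.
vehicles = 100
years = 500
hours_per_year = 365 * 24          # 8760 hours of non-stop driving per year
avg_speed_kmh = 40                 # assumed average speed

total_hours = vehicles * years * hours_per_year
total_km = total_hours * avg_speed_kmh
# roughly 17.5 billion vehicle-kilometres of continuous driving
```

Numbers of this magnitude are why statistical validation by road testing alone is widely considered infeasible, motivating simulation-based and scenario-based validation approaches.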
Since the given inputs and monitored outputs of an ANN are linked via a complex deep
layer structure in which the weights between nodes have been adjusted in an extensive
training process until the desired behavior is obtained, it is ultimately impossible to
deduce how a specific decision was made. ANNs thus essentially present a black box
for which only the inputs and outputs are known, while the process by which outputs
are produced can only be defined in terms of linkage weights. For AD, a system with
this lack of traceability poses problems concerning the liability in case of accidents and
also presents questions relating to the ethicality of entrusting ML-based AI with
potentially life-threatening tasks. Moreover, if an autonomous vehicle is trained as an
end-to-end system, the inability to model the decision-making process results in a lack
of modularity, since individual components of the AD system cannot be replaced
without necessitating a renewal of the entire training process to once again translate
given inputs into desired outputs (decisions) [9]. While the limited traceability of
ML-based decision-making is not an insurmountable hurdle for the introduction of AI, it
does require the development of specific solutions, e.g. module-specific training
algorithms, and is also an intrinsic characteristic of ML, which has led researchers to
pursue alternative AI methods (see Sect. 5.1).
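The black-box character described above can be illustrated with a toy network: the input-output mapping is fully determined by the weight matrices, yet the individual weights offer no human-readable rationale for a decision. All values below are arbitrary illustrative numbers:

```python
import math

# Toy two-layer network illustrating the "black box" point: the decision is
# fully determined by the weight matrices below, yet the individual numbers
# carry no human-readable rationale. All values are arbitrary illustrations.
W1 = [[0.9, -1.2],
      [0.4,  0.8]]        # hidden-layer weights
W2 = [1.1, -0.7]          # output-layer weights

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

# The only available "explanation" of this output is the weights themselves.
decision = "brake" if forward([0.3, 0.6]) > 0 else "continue"
```

In a real AD stack the same structure holds at vastly larger scale, with millions of weights between input and decision, which is precisely what makes post-hoc explanation of individual decisions so difficult.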
The previous sections have highlighted the central challenges faced by ML-based
AI applications for AD, which can present significant roadblocks on the way to its
deployment. The question of how these problems can be resolved is, however, usually
accompanied by the question of where AD will be introduced first. Influencing factors
that may affect this race for AD, and which ultimately come down to questions about
AI capabilities, are thus discussed below.
4 International Competitiveness
2
A coordinated plan has been announced for the end of 2018.
states and Norway (as of May 2018), followed by a call for private and public
investments in AI amounting to at least 20 billion Euro by the end of 2020³. The latter cannot
match the venture capital provided to companies in China, where around 425 billion
Euro of funds were expected to be raised via Government Guidance Funds in 2016 with
another 250 billion Euro coming from private funds [11], and in the U.S.
In the outline for a European approach, the European Commission (EC) also
acknowledged the need to modernize education and training systems to establish a
talent pool that can advance AI technologies. Currently, the availability of ML experts
cannot match the demand, resulting in a significant surge in salaries and strong
international competition over available talent (including a significant brain drain from
China to the U.S.). The U.S. clearly leads the world in terms of the size and average
experience of the workforce [12], a field where China is also trying to catch up, after
the first undergraduate course in AI was established as recently as 2004. The advantage
held today by the U.S. is in large part a result of substantial investments in STEM education
in the 1960s, which should thus also be a priority of governments today. As an
example, the German Federal Association for AI has included a call for data science
education starting in third grade as part of its 9-step plan to advance AI in Germany
[13], an initiative that has already been implemented by the Chinese Ministry of
Education with both a plan for increased education in coding starting in primary
schools and an “AI Innovation Action Plan for Colleges and Universities”. To respond
to the expected spike in demand for AI talent, the EC planned to invest 2.3 billion Euro
specifically in digital skills between 2014–2020.
The non-technical implications of AI and the way in which these are approached and
handled will also have a significant effect on international competitiveness. Specifically,
regulations concerning data protection and privacy, which impede access to data as
the fuel of ML, and the comprehensive discussion of ethical issues can have a restrictive
effect on the speed of innovation in AI. With the introduction of the General Data
Protection Regulation (GDPR) and the planned presentation of ethical guidelines for AI
development by the EC by the end of 2018, researchers in the EU certainly face the
strongest constraints. It must, however, be noted that given the fundamental societal
transformation that a widespread application of AI technologies could trigger as well as
the potential threats of AI, a cautious and balanced approach is justified.
5 Outlook
³ 1.5 billion as part of the Horizon 2020 programme, 2.5 billion from public-private partnerships and
over 0.5 billion via the European Fund for Strategic Investment.
that either serve to accelerate development or provide alternatives if, e.g., legal or
ethical problems prove to be substantial roadblocks, are presented as an outlook in the
following two sections.
the data throughput. Moreover, the amount of weight data can be too large to be
stored in a local on-chip memory.
An alternative way is to implement ANNs directly in hardware by means of
neuromorphic chips. Here, memory and processors are not separated: every artificial
neuron represents a processing unit with its own memory, so that computation is
performed at the data location via the neuron connections. Furthermore, neuron
communication is not controlled by a central clock; communication is initiated only
when the corresponding neurons are stimulated. This imitates biological neural
networks much more closely. The absence of data transfer between memory and
processing units, together with the asynchronous communication concept, offers the
potential to reduce power consumption significantly. A vast variety of
implementation concepts can be found in the literature [15].
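One widely used abstraction from this literature is the leaky integrate-and-fire neuron, which captures the event-driven, clock-free behavior described above: the neuron communicates only when accumulated stimulation crosses a threshold. A minimal sketch with illustrative constants:

```python
# Minimal leaky integrate-and-fire neuron, a common abstraction in the
# neuromorphic literature: the neuron only communicates (spikes) when its
# membrane potential crosses a threshold, matching the event-driven,
# clock-free behavior described above. All constants are illustrative.
def lif_run(inputs, leak=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for i in inputs:
        v = v * leak + i          # leaky integration of the input current
        if v >= threshold:        # fire only when stimulated enough...
            spikes.append(1)
            v = 0.0               # ...then reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Constant weak drive: the neuron stays silent until charge accumulates.
out = lif_run([0.3] * 10)
```

Because output events are sparse, energy is consumed only when spikes occur, which is the source of the large power-efficiency claims for this technology.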
So far, this approach has mainly been investigated by academia and has been widely
ignored by industry. IBM was the first company to investigate neuromorphic computing
and presented its "TrueNorth" chip in 2011, before the actual breakthrough of deep learning
and the resurgence of convolutional neural networks (CNNs) in 2012. In 2016 it was
shown that a trained ANN can be mapped to such a neuromorphic chip and approach
state-of-the-art classification accuracy [16]. The huge advantage was the very low
power consumption of only 275 mW while processing 2600 frames/s. Currently, Intel
is working on its own neuromorphic chip “Loihi”. Here, the signal processing is based
on asynchronous spiking similar to biological neurons. According to Intel this chip
combines training and inference, supports different ANN topologies including recurrent
neural networks (RNNs), can be used for supervised as well as reinforcement
learning, and learns continuously. Intel calls it a test chip and is going to share it
with universities and research institutions. Samsung announced collaboration with
leading Korean universities to develop a neuromorphic chip. In Europe, neuromorphic
computing has been investigated within the framework of the Human Brain Project since
2013. The Belgian research institute Imec introduced its own neuromorphic chip in 2017.
This technology is very young, and much research remains to be done to explore its full
potential and verify its capabilities. The claimed potential gains are orders of
magnitude higher power efficiency and orders of magnitude faster learning. If these
promises are even half true, neuromorphic computing should attract strong industry
interest in the future. Neuromorphic chips are ideal for classification tasks but,
unlike conventional processors, not for precise calculations. They therefore have to
be embedded in conventional hardware that handles rule-based navigation in traffic.
Furthermore, new software has to be designed to integrate such chips into
conventional hardware systems.
6 Conclusion
Based on the experience from work in the European projects SCOUT and CARTRE, the
objective of this chapter was to highlight the role of AI in the development of AD.
Besides an overview of current AI hardware and ML-focused methodology, key
opportunities and challenges for the application of AI have been discussed and may, in
the case of non-technical issues, also serve as examples for the application of AI in
other fields. Future development paths and alternative methods that may help to resolve
specific non-technical issues have also been explored. Due to the central importance of
AI for AD, future development and international competitiveness in particular will be
closely related to AI-specific capabilities.
Acknowledgements. The authors are grateful for fruitful cooperation with the contractual
partners of the Coordination and Support Actions “Safe and Connected Automation of Road
Transport” (SCOUT) and “Coordination of Automated Road Transport Deployment for Europe”
(CARTRE). The SCOUT and CARTRE projects have received funding from the EU’s Horizon
2020 programme under grant agreements No. 713843 and 724086, respectively. The section on
AI hardware further draws from investigations carried out as part of the SCORE project, which
has also received funding under the EU’s Horizon 2020 programme.
References
1. McCarthy, J.: Computer Controlled Cars, Essay (1969)
2. Touretzky, D., Pomerleau, D.: What's hidden in the hidden layers? BYTE 14, 227–233
(1989)
3. Raina, R., Madhavan, A., Ng, A.: Large-scale deep unsupervised learning using graphics
processors. In: Proceedings of the 26th Annual International Conference on Machine
Learning (ICML 2009), pp. 873–880 (2009)
4. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
5. Döbel, I., Leis, M., Vogelsang, M.M., et al.: Machine Learning - Competencies,
Applications and Research Needs. Fraunhofer Society (2018). (in German)
6. Dally, W.: High-performance hardware for machine learning. NIPS Tutorial (2015)
7. Göhring, D., Latotzky, D., Wang, M., Rojas, R.: Semi-autonomous car control using brain
computer interfaces. Advances in Intelligent Systems and Computing, vol. 94, pp. 393–408
(2013)
8. Shalev-Shwartz, S., Shammah, S., Shashua, A.: On a formal model of safe and scalable self-
driving cars (2018). arXiv:1708.06374v5
9. Slusallek, P.: Understanding the world with AI: training & validating autonomous vehicles
with synthetic data. Talk Presented at Interactive Symposium on Research and Innovation
for CAD in Europe at Tech Gate, Vienna, 20 April 2018
10. Probst, L., Pedersen, B., Lefebvre, V., Dakkak-Arnoux, L.: USA-China-EU plans for AI:
where do we stand? Digital Transformation Monitor of the European Commission (2018)
11. Ding, J.: Deciphering China’s AI Dream, Governance of AI Program. University of Oxford
(2018)
12. Churchill, O.: China's AI dreams. Nature 553, S10–S12 (2018). https://doi.org/10.1038/
d41586-018-00539-y
13. KI Bundesverband e.V.: Artificial Intelligence: State of the Art and Catalogue of Measures
(2018). (in German)
14. Kalra, N., Paddock, S.M.: Driving to safety: how many miles of driving would it take to
demonstrate autonomous vehicle reliability? RAND Corporation, Santa Monica (2016).
https://www.rand.org/pubs/research_reports/RR1478.html
15. Schuman, C.D., et al.: A survey of neuromorphic computing and neural networks in
hardware (2017). arXiv:1705.06963v1
16. Esser, S.K., et al.: Convolutional networks for fast, energy-efficient neuromorphic
computing. PNAS 113(41), 11441–11446 (2016)
Electric Vehicles
Light Electric Vehicle Design Tailored
to Human Needs
1 Introduction
The increase in average life expectancy is surely among the greatest achievements of
the past decades. Indeed, a gradual transformation has taken place, mainly in the
western world and other advanced societies. This increase has mostly been achieved
through better medical, nutritional and lifestyle factors, all resulting in growing
longevity. It is estimated that, by 2050, 29.9% of the European population will be over
the age of 65, with the proportion of the eldest people (aged 80 years or more) by this
time being highest in Italy (14.4%), Germany (13.6%) and Spain (12.8%) [1].
Strategies and actions are needed to make urban spaces, houses and transport more
accessible and affordable to these people. It is crucial to include older people in society,
so that they are not considered "second-class" citizens, but an active and necessary
part of the community.
European senior citizens consider car driving a stressful activity, due to age-related
motor, cognitive, perceptual and emotional decline [2]. A prerequisite for driving is
the integration of high-level cognitive functions with perception and motor functions.
Ageing, per se, does not necessarily impair driving or increase the crash risk [3, 4].
However, medical conditions, such as cognitive impairments and dementia, and ageing
related decline become more prevalent with advancing age and may contribute to poor
driving performances and an increased crash risk [5, 6]. For many seniors, driving a car
is crucial for keeping their independence, their social life and wellbeing. Older drivers
often self-regulate their driving habits, for example by restricting driving to known
routes or by avoiding driving during rush hours and at night, without necessarily
giving up driving altogether.
Products for older people (e.g. a car) should, as far as possible, be adaptable and
easily customizable to suit the skills of the elderly. They will have to adapt to
changing lifestyle needs in a discreet manner and, as far as possible, should meet
the aesthetic and functional needs of mature users without limiting the possibility
of self-expression.
In this context, the SilverStream project represents a unique approach to urban
mobility in which a stylish Light Electric Vehicle (L6e category) integrates a
comprehensive set of automotive technologies tailored to the needs of an urban and
ageing population. Innovative technologies such as a new HMI based on gesture
recognition, a lightweight seat, and an assisted rear lift and crane, specifically
conceived to meet elderly people's needs, have been designed and tested in both
in-lab and out-of-lab environments.
In research contexts that envisage the development of innovative technological
solutions and advanced services for different classes of users, one key issue is to
understand the real needs and expectations of the possible end-users and to examine
the dynamics between them and the environment around them. Understanding the real
user needs guarantees not only the effectiveness of the technological solution
developed, but also that the end-users' interaction with the system fits their
existing behaviors, motivations and social/cultural background. The involvement of
the end-user is a practice that should be considered in each phase of the design,
implementation, experimentation and evaluation of an innovative solution such as
the SilverStream vehicle.
The present study addresses all the experimental activities performed with end-users
during the three-year project, from the testing of single components to the final
validation of the integrated vehicle.
The characteristics of the SilverStream vehicle (Fig. 1) tailored to the specific needs
of elderly people have been verified during the three-year project by involving
representative samples of older adults in various experimental sessions. The most
user-relevant subsystems (e.g. seats, HMI) have been tested in laboratory settings
through specifically designed validation studies before their final integration into
the vehicle prototype.
In particular, the sustainable ergonomics, perceived comfort and adaptive HMI for
minimum fatigue vehicle operation have been assessed by testing the following
components:
• Innovative HMI based on gesture recognition, simplifying the operation of the
auxiliary systems and featuring an on-board display design based on advanced
cognitive science studies (Fig. 2a);
• Lightweight seat (e-Seat) specifically designed for (a) optimal posture including
lumbar and neck support for comfortable and low fatigue driving; (b) easy ingress
and egress through 90 deg swivel function (Fig. 2b);
• Assisted rear e-lift (30 kg payload) and crane for easy loading and unloading of the
car (Fig. 2b).
Two different strategies have been used to assess how well the tested subsystems meet elderly users' needs:
142 D. Trojaniello et al.
Fig. 2. SilverStream vehicle main components: intelligent control system based on gesture
recognition (a), e-Seat & Rear e-Lift and Crane (b)
(1) Instrumental evaluation. Tests have been performed in an in-lab environment, i.e. a motion analysis laboratory, with the involvement of physical therapists and bioengineers, to measure muscle activity, joint motion, forces and pressure distribution while the subjects performed different motor tasks (e.g. seating, ingress-egress). From these measures, information about the subjects' comfort, fatigue and muscle activity could be gathered and used to evaluate the acceptability of the tested subsystem prototypes.
(2) Qualitative evaluation. Questionnaires and interviews have been specifically designed to investigate the subjects' perception of the tested subsystem prototypes. In particular, the comfort and ergonomics of specific subsystems such as the e-Seat, as well as the usability, acceptability and feasibility of the HMI, have been investigated with specifically designed tools.
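The instrumental measures in (1) are typically summarized per trial, for example as the root-mean-square (RMS) amplitude of an EMG signal and the range of motion of a joint. The following is a minimal sketch of such summaries on invented numbers, not the project's actual analysis pipeline:

```python
import math

def rms(signal):
    """Root-mean-square amplitude of an EMG signal (one trial)."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def range_of_motion(joint_angles_deg):
    """Range of motion: span between max and min joint angle."""
    return max(joint_angles_deg) - min(joint_angles_deg)

# Synthetic example: the same motor task in two conditions
emg_standard = [0.12, 0.35, 0.40, 0.31, 0.15]   # mV, standard seat
emg_eseat    = [0.08, 0.20, 0.22, 0.18, 0.10]   # mV, e-Seat assisted
knee_standard = [5, 30, 85, 95, 60, 10]          # degrees during ingress
knee_eseat    = [5, 20, 55, 60, 40, 10]

print(f"EMG RMS  standard: {rms(emg_standard):.3f} mV, e-Seat: {rms(emg_eseat):.3f} mV")
print(f"Knee ROM standard: {range_of_motion(knee_standard)} deg, e-Seat: {range_of_motion(knee_eseat)} deg")
```

Lower RMS activation and a smaller range of motion in the assisted condition are the kind of evidence reported later for the e-Seat.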
The validation plan (Fig. 3) with final end-users has been structured in two phases.
The first phase (single-component user testing) consisted of validating the single components developed during the project (e-Seat, intelligent control system based on gesture recognition, Rear e-Lift and Crane) through controlled experiments in an in-lab environment, mainly at the San Raffaele Hospital (HSR) facilities. Those experiments were performed with real users over 65 years old. Their main output consisted in assessing the validity of the developed technologies for the target population and in collecting users' feedback aimed at improving them. Based on the results obtained, a number of refinements have been made to the developed technologies (e.g. refinement of the e-Seat conformation, redesign of the HMI).
The second phase (validation in a realistic environment with end users) consisted instead of validating the overall solution under real scenario conditions, in an out-of-lab environment. Those experiments were also performed with real users over 65 years old, and their main output was an overall evaluation of the proposed solution by the target population.
The results of the study showed that the perceived difficulty of the HMI increases as familiarity with technology decreases. Moreover, learning to use the HMI requires considerable time, a sizeable mental demand, especially for learning and remembering the gestures, and good executive function to reproduce the gestures properly. At the end of the test, a small part (27%) of the population judged the HMI usable and a very small part (13%) of the sample considered it easy to use. As suggested by the participants, to become a useful tool for elderly people the tested HMI had to be integrated with other interaction modalities such as voice and touch control. In addition, the user interface (UI) had to be further simplified to be more suitable for gesture control, and the number of gestures had to be reduced to make them easier to remember.
temperature) through the system using all its modalities of interaction (i.e. voice, touch and gestures). Once the assignments were completed, subjects were asked to answer structured questionnaires investigating their user experience.
The results obtained showed that the main strength of the system was the UI design (50%): elderly subjects appreciated the display dimensions as well as the character size. Its integration with the vehicle and the easy access to auxiliary functions were also noted. However, some critical issues were observed: 5% of the sample population considered the position and angulation of the screen unsuitable, and a different screen angulation would be required to reduce the chance of distraction while driving. Among the HMI interaction modalities, voice control turned out to be the favorite, although its limited use, restricted to a few functionalities, and the lack of feedback were reported. Gesture control, instead, turned out to be the least favorite interaction modality and also the most distracting one. Unfortunately, because of the high level of gesture control malfunction observed during the test, it was difficult to analyze the contribution of gesture control to the HMI system objectively. Touch control, finally, was considered the most familiar and immediate way to control the interface.
In the first phase of the validation (single-component user testing), the seat was tested in terms of perceived comfort within the "e-Seat comfort evaluation". Then, the prototype e-Seat was integrated with a roto-translating platform to help older people enter and exit the vehicle. Tests were performed at the motion analysis lab at the HSR facilities to evaluate the efficacy of the proposed system in lowering the physical demands of vehicle ingress/egress for elderly people ("Ingress-egress biomechanical analysis").
The second phase of the validation (validation in a realistic environment with end users) was instead mostly devoted to testing the fine-tuned version of the e-Seat in the SilverStream vehicle in real scenarios, in terms of both perceived comfort and improved vehicle accessibility.
Among the four examined seats, according to the "Seat features assessment checklist" (SFAC) [10, 11], the SA conformation (cushion and backrest thickness = 16 mm, cushion lift = 23 daN and backrest lift = 38 daN) proved the best, with the only exception of the backrest. The SC seat (cushion and backrest thickness = 43 mm, cushion lift = 23 daN and backrest lift = 38 daN) proved, instead, to have the best backrest. Furthermore, based on the "Body part discomfort assessment checklist" (BPDAC) [12–14] results, the SA seat proved the most comfortable, even if a slight level of discomfort at the neck was noticed.
An important aspect to consider is the level of comfort that the seat can guarantee in the lumbar area.
According to the "Psychophysical discomfort questionnaire" (PDQ) [15] results, the SA and SB seats were rated equally (with the same score) as the most comfortable; the only difference lay in the number of subjects who judged the cushion very comfortable (3 vs 1). The seat judged more comfortable by the sample population was therefore SA. This result was also confirmed by the objective evaluation (pressure distribution). Nevertheless, the sample population suggested some improvements to the backrest to guarantee a higher level of perceived comfort. Based on these results, only the SA seat was used in the subsequent study evaluating the ingress/egress (I/E) task.
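The objective pressure-distribution evaluation mentioned above is usually summarized with a few standard metrics such as peak pressure, mean loaded pressure and contact area. The following is a minimal sketch, assuming a pressure mat that delivers a 2-D grid of kPa values; the grid and threshold are invented, not the study's actual data:

```python
def pressure_metrics(mat, threshold=1.0):
    """Summarize a seat pressure-mat reading (2-D grid of kPa values)."""
    active = [p for row in mat for p in row if p >= threshold]
    if not active:
        return {"peak_kpa": 0.0, "mean_kpa": 0.0, "contact_cells": 0}
    return {
        "peak_kpa": max(active),                # pressure hot spot
        "mean_kpa": sum(active) / len(active),  # average loaded pressure
        "contact_cells": len(active),           # contact-area proxy
    }

# Hypothetical 4x4 cushion reading: lower, more even pressure
# (smaller peak, larger contact area) suggests better comfort.
seat_a = [[0, 2, 2, 0], [3, 5, 5, 3], [3, 6, 6, 3], [0, 2, 2, 0]]
print(pressure_metrics(seat_a))
```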
(b) Ingress-egress biomechanical analysis
The efficacy of the remotely controlled e-Seat in assisting elderly subjects during the car I/E task has been assessed with thirty elderly subjects in a car-like wooden setup (Fig. 7), by comparison with the standard I/E mode (without roto-translation).
Both user experience measures and biomechanical data have been acquired during the study, performed at the HSR motion analysis lab [16]. Most of the participants considered the car I/E task easier through the roto-translating movements of the e-Seat than in the standard mode. Moreover, the remote control was considered easy to use. The analysis of biomechanical data showed that the I/E task through the roto-translating movements requires lower muscle activation and smaller knee and trunk ranges of motion, properties that reduce the physical load sustained by the elderly in accomplishing the task. The e-Seat is therefore able to facilitate the car I/E task in elderly subjects whose age-related or impairment-related motor limitations make the I/E movements particularly difficult. However, some characteristics of the proposed e-Seat had to be improved: the height from the ground when the e-Seat was rotated was judged too high, while the speed of the system was considered too slow.
Validated questionnaires already used in the first experimental phase (i.e. SFAC and BPDAC) have been administered to characterize the final e-Seat in terms of conformation and comfort provided.
The participants were then asked to exit the vehicle through the roto-translating movements of the seat and encouraged to express their impressions of their subjective experience.
Based on the results, the main strengths of the e-Seat were its comfort (82%) and the easy ingress (64%) and egress (59%). Meanwhile, its main weaknesses were the insufficient space for the lower limbs (59%) and the excessive height from the ground when the seat is completely rotated (27%).
Accessibility to the vehicle through the SilverStream e-Seat thus proved improved and comfortable, even if some critical issues remain. It is important to note that most of those issues are due not to the properties of the seat but rather to the characteristics of the final SilverStream vehicle: its small dimensions and its traditional door are, in fact, the main reasons for the critical issues found. Most likely, the same SilverStream e-Seat, integrated with the same roto-translating mechanism and controlled by the same remote control, would produce fewer negative opinions on a larger motor vehicle chassis than those reported by the elderly during the test.
2.3.1 First Year Experimental Study: The Rear e-Lift & Crane Study
Fourteen elderly subjects took part in the "Rear e-Lift & Crane study". The aim of the study was to assess the efficacy of the SilverStream trunk (comprising the Rear e-Lift and Crane) in supporting elderly people during the loading and unloading (L/UL) task, by comparing the Rear e-Lift and Crane movements with the standard L/UL task. The tests were carried out in an appropriately furnished area at MTM in Cherasco, where questionnaires and interviews were administered to the participants for the evaluation of the SilverStream trunk (Fig. 9). Both the Rear e-Lift, operated by a remote control, and the Crane, activated by buttons on the crane arm, proved easy to use and well accepted by the users. However, some critical issues were highlighted for both devices: the slow speed of the Rear e-Lift and the uncontrolled load swings of the crane were reported.
Fig. 10. Validation of Rear e-Lift & Crane study in realistic scenarios
Loading and unloading objects proved strongly improved and easier when performed with the Rear e-Lift. However, some participants considered the time necessary for the L/UL task using the platform excessive. Only a small part of the tested population considered the Rear e-Lift unnecessary. Finally, a more intuitive design of the remote control is needed, since the main difficulty the elderly met in using the system was understanding how to operate it.
Regarding the Crane, 91% of participants claimed that this device, like the Rear e-Lift, is very useful for loading and unloading weights. Nevertheless, some suggestions for further improvement were made, such as replacing the hook with a carabiner (9%), making the buttons more visible (9%) and adding a lock to avoid uncontrolled load swings (5%). The lack of an automatic trunk-door opening was noted by 9% of participants.
3 Conclusion
As a result of the studies performed during the SilverStream project and summarized in the present paper, the SilverStream final demonstrator can be considered a valid solution for supporting elderly drivers and, consequently, enhancing their driving experience.
Acknowledgments. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 653861 – SILVERSTREAM.
References
1. European Communities: Eurostat Database (2005). http://epp.eurostat.cec.eu.int
2. Wagner, J.T., Müri, R.M., Nef, T., Mosimann, U.P.: Cognition and driving in older persons.
Swiss Med. Wkly. 140, w13136 (2011). https://doi.org/10.4414/smw.2011.13136
3. Logsdon, R.G., Teri, L., Larson, E.B.: Driving and Alzheimer’s disease. J. Gen. Intern. Med.
7, 583–588 (1992)
4. Lesikar, S.E., Gallo, J.J., Rebok, G.W., Keyl, P.M.: Prospective study of brief neuropsy-
chological measures to assess crash risk in older primary care patients. J. Am. Board Fam.
Pract. 15, 11–19 (1995)
5. Ball, K., Owsley, C., Sloane, M.E., Roenker, D.L., Bruni, J.R.: Visual attention problems as
a predictor of vehicle crashes in older drivers. Invest. Ophthalmol. Vis. Sci. 34, 3110–3123
(1993). http://www.ncbi.nlm.nih.gov/pubmed/8407219
6. Brown, L.B., Ott, B.R.: Driving and dementia: a review of the literature. J Geriatr.
Psychiatry Neurol. 17, 232–240 (2004). https://doi.org/10.1177/0891988704269825.
17/4/232 [pii]
7. Trojaniello, D., Cristiano, A., Musteata, S., Sanna, A.: Evaluating real-time hand gesture
recognition for automotive applications in elderly population: cognitive load, user
experience and usability degree. In: HEALTHINFO, Nice, France (2018)
8. Reed, M.P., Schneider, L.W.: Design criteria for automobile seatbacks based on preferred
driver postures. Technical report Documentation, p. 42 (1995)
9. Trojaniello, D., Cristiano, A., Oleari, E., Tettamanti, A., Sanna, A.: Car seat comfort
assessment based on objective and subjective measurements in elderly population. In:
Proceedings of the 7th Transport Research Arena TRA 2018, April 16–19, 2018, Vienna,
Austria (2018)
10. Kolich, M.: Ergonomic modeling and evaluation of automobile seat comfort (2000)
11. Deros, B.M., Daruis, D., Mohd Nor, M.J.: Evaluation of car seat using reliable and valid
vehicle seat discomfort survey. Ind. Eng. Manag. Syst. 8, 121–130 (2009)
12. Karuppiah, K., Salit, M.S., Ismail, M.Y., Ismail, N., Tamrin, S.B.M.: Evaluation of
motorcyclist’s discomfort during prolonged riding process with and without lumbar support.
An. Acad. Bras. Cienc. 84, 1169–1188 (2012). https://doi.org/10.1590/S0001-
37652012000400031
13. Lin, C.: Ergonomic assessment of excavator seat. Int. J. Appl. Sci. Eng. 9(2), 99–109 (2011)
14. Velagapudi, S.P., Ray, G.G.: Reliability and validity of seat interface pressure to quantify
seating comfort in motorcycles, pp. 1–8 (2015)
15. Kolich, M., Taboun, S.M.: Combining psychophysical measures of discomfort and
electromyography for the evaluation of a new automotive seating concept. Int.
J. Occup. Saf. Ergon. 8, 483–496 (2002). https://doi.org/10.1080/10803548.2002.
11076549
16. Cristiano, A., Corbetta, D., Tettamanti, A., Sanna, A., Trojaniello, D.: Validation study of a
roto-translating seat to support elderly drivers during car ingress/egress: a biomechanical
analysis. In: MEMEA, Roma, Italy (2018)
DCCS-ECU an Innovative Control and Energy
Management Module for EV and HEV
Applications
Abstract. Impact Clean Power Technology S.A. (ICPT S.A.) has recently developed an innovative, universal, and scalable electronic control unit for electric (EV) and hybrid (HEV) vehicles which fulfils intelligent management functions. One of the main problems of modern EVs is energy management. The proposed ECU (Electronic Control Unit) addresses this issue by optimising energy consumption, improving power performance and distributing power in real time, which results in an extended vehicle range.
1 Introduction
The development of electric vehicles requires the use of a new generation of electronic systems. An on-board ECU (Electronic Control Unit) computer is intended to manage the operation of the battery and the electric propulsion system in an electric or hybrid car. Electric and hybrid vehicles offered on the market typically contain dedicated, expensive and complex ECU computers, which are either not available to new market players currently investing in electric vehicles (because, as a common practice, they are reserved for large automotive groups) or not fully suitable for use in new applications, so that their implementation and integration would be complicated and costly. Developing such systems in-house is economically justifiable only for a company seeking success with a single product or product family for the e-mobility market. The unit presented here responds to this problem and, due to its outstanding features, offers a universal, high-quality solution. As part of this project, the authors aimed to deliver to the market an easily available, intelligent and scalable management unit with the widest possible range of target applications within the electric vehicle industry, at very low prices even for small batches (<100 pcs). The electronic board presented in Fig. 1 has been designed to shorten the integration period in a given application. In this way the time needed for product launch onto the market is significantly reduced, and the stages of development, testing and software preparation become less time-consuming.
2 Vehicle Communication Networks
Contemporary vehicles have several dozen, and often even a few hundred, electronic controllers connected to one another by fast digital buses. The global leader among currently installed digital buses is the CAN (Controller Area Network) standard, developed by Robert Bosch GmbH in 1986. The CAN standard [2] covers both the bus and the data transmission protocol. The CAN bus is a broadcast bus without a dedicated master unit.
Together with the ISO 11898 and SAE J2284 standards, the CAN protocol became an international standard applied in passenger cars.
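The broadcast, masterless character of CAN follows from its arbitration scheme: when several nodes start transmitting at the same time, the frame with the lowest identifier (highest priority) wins bit by bit and the other nodes back off. A minimal sketch, with identifiers and payloads invented for illustration:

```python
def arbitrate(pending_frames):
    """CAN arbitration sketch: among frames offered to the bus at the same
    moment, the lowest identifier (dominant bits win) gets through first."""
    return min(pending_frames, key=lambda f: f["id"])

# Three nodes try to transmit at once; no master decides -- the IDs do.
frames = [
    {"id": 0x280, "data": b"\x10\x27", "src": "engine rpm"},
    {"id": 0x1A0, "data": b"\x00\x01", "src": "ABS wheel speed"},
    {"id": 0x5C0, "data": b"\x16",     "src": "cabin temperature"},
]
winner = arbitrate(frames)
print(winner["src"])  # the lowest-ID (safety-critical) frame wins
```

Losing nodes simply retry in the next arbitration round, which is why assigning low identifiers to safety-critical messages matters in network design.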
Figure 2 presents the typical topology of a contemporary vehicle. Notably, multiple CAN communication buses are present, each performing a different function [3]. One of them is the powertrain CAN bus, which allows communication among controllers responsible for the drive system and safety, such as the motor controller and the ABS controller. The comfort CAN bus handles communication among multimedia devices such as the radio, navigation or on-board entertainment system. The body CAN bus connects cabin controllers with one another, e.g. the electric window controller with the seat controller.
(Fig. 2 node labels: dashboard display controller, door controller, AC controller.)
Additionally, there is a separate CAN network used for the diagnostics of the vehicle's electronic boards. Because of the varied nature of the transferred data, the networks operate at different speeds and sometimes with different application-layer standards such as CANopen or J1939. Since the ECU uses data sent via all buses, it should have the highest possible number of independently operating CAN bus interfaces.
3 DCCS-ECU Structure
– NXP (formerly Freescale) is, alongside Renesas, Bosch and Infineon, one of the largest and most important manufacturers of automotive solutions.
Interestingly, in its presentations Infineon compares its solution mainly with that of NXP, which suggests that both products, although dedicated to different market segments, offer similar functionality. The final choice was a dual-core unit of the S12XEP series. This system allows a real-time operating system (RTOS) to be loaded in one of two possible forms:
– with a free licence, or
– with an additional licence fee, where the system is equipped with functional safety features.
Thus, individual users decide which controller safety level they prefer.
Running at twice the speed of the main core, the second core allows real-time encryption of information transferred via the internal data buses, using the AES (Advanced Encryption Standard) cipher with a key of at least 128 bits.
3.3 Communication
The device is equipped with a minimum of four independent CAN interfaces. Such a high number of interfaces is convenient and serves well when the DCCS-ECU is used to convert a vehicle. Quite often, simulating the components removed from the combustion vehicle (such as the engine controller) and achieving the required functionality of the remaining components involves splitting the CAN line and creating separate connections.
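Such component simulation can be sketched as a small gateway that forwards frames between the separated segments while fabricating the periodic frames the removed controller used to send. All identifiers and scaling factors below are invented for illustration, not taken from an actual vehicle database:

```python
# Sketch: when converting a combustion vehicle, the CAN line is split and the
# removed engine controller is simulated so the remaining nodes keep working.
# The IDs and the payload layout are invented for illustration.

SIMULATED_IDS = {0x280}  # ID the (removed) engine controller used to broadcast

def simulate_engine_frame(rpm):
    """Fabricate the periodic 'engine status' frame other nodes expect."""
    raw = int(rpm / 0.25) & 0xFFFF          # invented scaling: 0.25 rpm/bit
    return {"id": 0x280, "data": bytes([raw >> 8, raw & 0xFF])}

def gateway(frame, out_bus):
    """Forward frames between segments, dropping those we now simulate."""
    if frame["id"] not in SIMULATED_IDS:
        out_bus.append(frame)

out = []
gateway({"id": 0x1A0, "data": b"\x00"}, out)   # forwarded unchanged
gateway({"id": 0x280, "data": b"\xff"}, out)   # dropped: we simulate this ID
out.append(simulate_engine_frame(800))          # idle rpm for the dashboard
print([hex(f["id"]) for f in out])
```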
Each interface is independent and capable of working at the various bit rates defined in the communication protocol specification. Embedded LIN and FlexRay interfaces in the main body of the computer have been abandoned due to their low popularity, which also reduces total material costs.
Fig. 3. Reference diagram of an electric vehicle system which may implement the DCCS-ECU [4]
(Figure labels: accelerator and brake pedal position via digital I/O; characteristic of the selected drive mode; limitation calculated by the BMS via CAN; torque setting for the motor controller via CAN.)
The project work on the DCCS-ECU module resulted in a device which, thanks to the popularity of the CAN bus, can be connected to virtually any contemporary EV/HEV vehicle. A major design decision was to limit the availability of the LIN and FlexRay buses in the primary version of the device; for more demanding applications, these features can be implemented without difficulty via the supplementary adapter ports.
Based on the above characteristics, the delivered DCCS-ECU computer can be described using the information presented in the table below (Table 1).
Fig. 6. Electromagnetic radiation emission during device operation within the frequency range 200 MHz–1 GHz
References
1. ICPT SA: Development of universal electronic control unit for electric and hybrid vehicles. http://icpt.pl/innovations.aspx#tab1
2. Bosch: CAN Specification Version 2.0 (1991). www.can.bosch.com
3. Michna, M., Adamczyk, D., Kut, F., Ronkowski, M., Bernatt, J., Pistelok, P., Król, E., Kucharski, Ł., Kwiatkowski, M., Byrski, Ł., Kozioł, M.: Koncepcja, modelowanie i symulacja układu napędowego prototypu samochodu elektrycznego "Elv001" [Concept, modelling and simulation of the drive system of the "Elv001" electric car prototype]. Zeszyty Problemowe—Maszyny Elektryczne Nr 92/2011
4. Xue, X.D., Cheng, K.W.E., Cheung, N.C.: Selection of Electric Motor Drives for Electric Vehicles. Department of Electrical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
5. http://en.wikipedia.org/wiki/UEXT
Connectivity Design Considerations
for a Dedicated Shared Mobility Vehicle
Abstract. With shared mobility features, such as keyless entry and cloud-stored user profiles, informing and guiding the vehicle design in many areas like the E/E architecture, new challenges arise in how to approach the early stages of a vehicle development process. Car connectivity, as the enabler of many shared mobility features, is the focus of the presented approach; however, integration aspects into the whole system design are also considered.
For an entrepreneurial project, where a vehicle is conceptualized from the idea through a prototype to a series product, requirement and function specifications are not the right tools to start with. In this paper, share2drive and FEV share their design approach of deriving a prioritized feature set for a new vehicle class, the Personal Public Vehicle (PPV), dedicated to use in shared mobility concepts, with end-user satisfaction and User Experience (UX) as guiding principles.
1 What Is Connectivity?
If we want to design a “connected car”, the first step is to understand the needs and
expectations of its intended users.
As the rise of today's ubiquitous social media channels and platforms has shown, "connectivity" is significantly more than just the technical means to exchange information between classic communication theory's sender and receiver. As every smartphone user will confirm, "being connected" is as important as the actual exchange of information. "Connectivity" today refers as much to one's ability to connect socially as it describes an ecosystem enabling such connections.
Another aspect to keep in mind is that "connectivity" today is inevitably linked with the expectation of "information at your fingertips". Bill Gates' vision [1] has become a reality, and any connected car will need to deliver on this expectation. This implies not only the availability of the digital services users take for granted, most notably the delivery of audio and video content and messaging, but also access to personal or personalized content without the need for additional devices or complicated authentication procedures.
A connected vehicle will show innovation through novel ways of combining digital services and making these accessible in-vehicle. The technical solutions for doing so will give center stage to the user experience (Fig. 1).
The vehicles used in urban mobility services to date can fulfill their purpose only to a limited extent, since they were originally designed to be owned by an individual user. Fleet operators carry out only minor modifications, which focus on access to the vehicles themselves. When used within an urban shared mobility concept, the specific requirements of shared mobility operators and vehicle users become equally important. At the same time, vehicles in new mobility service scenarios must be understood as mobile devices in a multimodal world. As a result, the requirement profile of an ideal shared vehicle should be based not on a conventional customer analysis alone, but also on the requirements of a mobility concept and a business model.
SVEN—Shared Vehicle Electric Native—(see Fig. 2) is a pure electric vehicle
designed for urban shared mobility with focus on car sharing and fleet management.
The Unique Selling Points (USPs) addressed by SVEN are:
• Designed for shared mobility and for short distances
• Zero emissions (pure electric vehicle)
• Ease of use—easy to clean—easy to maintain
164 J. Kottig et al.
Fig. 2. The Public Personal Vehicle SVEN (Shared Vehicle Electric Native)
displays like an instrument cluster underline the advanced mobility claim and support
the driver in the safe handling of the vehicle. Easy-to-clean surfaces allow quick and
easy reprocessing after a usage cycle.
The technical design of the powertrain is based on predominant use in urban traffic. SVEN integrates a 20 kWh battery pack to ensure an 80 km range, even under extreme conditions. The 24 kW rear motor allows a maximum speed of 120 km/h.
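These figures imply a worst-case energy budget of 20 kWh / 80 km = 250 Wh/km, which can be checked directly; the milder consumption value in the second call is an assumed figure for illustration, not a SVEN specification:

```python
def range_km(battery_kwh, consumption_wh_per_km):
    """Achievable range for a given battery capacity and average consumption."""
    return battery_kwh * 1000 / consumption_wh_per_km

# SVEN's quoted figures imply 20 kWh / 80 km = 250 Wh/km under extreme conditions
print(range_km(20, 250))   # 80.0 km
print(range_km(20, 150))   # range at an assumed milder urban consumption
```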
The previous chapters have pointed out that the requirements on the Public Personal Vehicle (PPV) need to satisfy operator and individual user demands alike, and that besides the business model they also need to consider the mobility concept. This is illustrated in Fig. 4. However, while the previous section described the overall requirements on a shared vehicle, this chapter focuses on the connectivity aspects: for a number of the demands on a shared vehicle, connectivity will be the enabler.
The business model relies on the reliable delivery of digital services, which in turn need a reliable connection to internet and cloud services. These services can be downloading "apps" from a "vehicle application store", but can also be, for example, an optional driver assistance package allowing for assisted parking or Adaptive Cruise Control (ACC).
A multimodal mobility concept works best if the various mobile devices are in sync with each other. Future urban concepts for Smart Cities and Cooperative Intelligent Transport Systems (C-ITS) [2] can leverage the full potential of connected and interconnected vehicles only with cars ready for V2X [3, 4].
Car sharing operators need to monitor and maintain their fleet. This requires the vehicles to connect to a central backend service (cloud) in order to realize essential networked functionality like a booking service or service charging. When a connected car is ready for digital services, operators can offer instant service provisioning for a completely new User Experience. For example, they could reward users for positive driving behavior by the minute, not just at the end of a ride. Blockchain technology can be an enabler for this if it keeps its promise to allow peer-to-peer transactions within seconds. FEV is partnering with Nano to explore the potential of the Nano cryptocurrency [5].
Last but not least, a car needs to create a relationship with each of its users individually for them to bond with the vehicle or the service. The car must literally connect to its users to deliver individualized functions and behavior: as a user, I should be able to access the car as if it were mine, personalize it as if it were mine, and access my personal media and data just like on a smartphone.
Among the multitude of technical aspects of connectivity, the most obvious requirement is a permanent Internet connection, for instance to access personal data, stream music, or let the user subscribe to digital services. A personalized shared car also needs to be able to work with user profiles, for example configuring the car's seating and mirror positions or pre-setting the favorite music channels.
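Such profile handling could be sketched as applying cloud-stored settings when a user checks into the car. The field names, units and defaults below are assumptions for illustration, not SVEN's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Cloud-stored preferences applied when a user checks into a shared car.
    Field names and units are illustrative assumptions."""
    seat_position_mm: int = 120
    mirror_tilt_deg: float = 4.0
    favorite_stations: list = field(default_factory=lambda: ["news", "jazz"])

def apply_profile(vehicle_state, profile):
    """Configure the car from the profile; unknown users get the defaults."""
    vehicle_state["seat_mm"] = profile.seat_position_mm
    vehicle_state["mirror_deg"] = profile.mirror_tilt_deg
    vehicle_state["radio_presets"] = list(profile.favorite_stations)
    return vehicle_state

car = apply_profile({}, UserProfile(seat_position_mm=150, mirror_tilt_deg=2.5))
print(car["seat_mm"], car["radio_presets"])
```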
Connected cars will also enable more intelligent traffic management: users can receive real-time traffic updates and routing recommendations. The unique concept of a vehicle designed for shared mobility will allow its operators both to harness such swarm data for their services and to broker such data.
A connected car should be ready for connected and cooperative Advanced Driver Assistance Systems (ADAS) [6] to be future-proof and to support autonomous driving. It should also meet the demands of Cooperative Intelligent Transport Services (C-ITS), and thus be ready for the standard approaches to Vehicle-to-Everything (V2X) communication.
It is not yet obvious which technology will win the race for a worldwide standard. While the US market is currently targeting the WLAN-based technology, the European market and China seem to favor cellular-based solutions (see Fig. 5).
Further required radio connectivity covers WLAN, Bluetooth and NFC: the car shall act as a WLAN hotspot to give its users WLAN access during the ride; Bluetooth capabilities are required, for example, for music streaming from the user's smartphone to the car's infotainment system; and Near Field Communication (NFC) is required as a fallback solution to open the car in case a user does not own a smartphone for keyless entry.
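One common pattern for keyless entry is a signed, time-limited access grant issued by the fleet backend, which the vehicle can verify offline. The following is a minimal sketch using an HMAC signature; the key handling and message format are assumptions for illustration, not the actual SVEN protocol:

```python
import hmac
import hashlib
import time

SECRET = b"per-vehicle key provisioned by the fleet backend"  # assumption

def issue_token(user_id, valid_until, key=SECRET):
    """Backend side: sign a time-limited door-open grant for one user."""
    msg = f"{user_id}|{valid_until}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).hexdigest()

def vehicle_accepts(msg, tag, now, key=SECRET):
    """Vehicle side: verify the signature and the expiry without going online."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # tampered or wrong key
    _, valid_until = msg.decode().split("|")
    return now <= int(valid_until)

msg, tag = issue_token("user42", valid_until=1_900_000_000)
print(vehicle_accepts(msg, tag, now=int(time.time())))
```

An NFC card could carry such a grant for users without a smartphone, matching the fallback role described above.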
There are a number of solutions available on the market offering hardware and software to turn any car into a "ready to share" car. The solutions for our concept need to be flexible enough to support a broad range of use cases: from integration into a big car fleet to a privately owned car that can be shared with others.
5 Challenges
The previous sections hint at some of the challenges of conceptualizing and designing
a car dedicated to sharing. The overarching theme is that the car needs to feel personal
to its user, and at the same time it needs to integrate seamlessly into the big fleet of a
car sharing operator. The technology needs to allow a wide range of integration: from a
fleet down to a private car offered for sharing.
168 J. Kottig et al.
“Connectivity is the capability to connect not only technically, but also in a social
aspect” [7]. Therefore, for the connectivity concept, we need to anticipate the future
behavior and demands of the car sharing community. The presence of digital
services will be taken for granted. The services inside a car will be expected to work as
on smartphones: individual content is always available, and software upgrades run in
the background, ideally without user interaction or attention.
We want to make the car ready for future technologies like driver assistance,
autonomous driving, driving within a smart city, and being conducted by an Intelligent
Transport System (ITS). Today, there are two different approaches to standardizing V2X
communication: the first is based on WLAN technology and the second uses the
cellular infrastructure, including 5G technology. The technology decision
(ITS-G5/DSRC vs. Cellular-V2X) has not yet been concluded, and a connectivity
design concept needs to account for such uncertainties.
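One common way to keep a design open while the ITS-G5/DSRC vs. C-V2X decision is pending is to hide the radio technology behind a common interface, so that only the lowest layer changes once the standard is settled. The sketch below is illustrative only; the class and method names are our own and do not come from any real V2X stack:

```python
from abc import ABC, abstractmethod

class V2XRadio(ABC):
    """Technology-agnostic access layer; concrete radios are interchangeable."""

    @abstractmethod
    def broadcast(self, message: bytes) -> str:
        """Send a message and return a short transmission log entry."""

class ItsG5Radio(V2XRadio):
    def broadcast(self, message: bytes) -> str:
        return f"ITS-G5/DSRC: sent {len(message)} bytes"

class CellularV2XRadio(V2XRadio):
    def broadcast(self, message: bytes) -> str:
        return f"C-V2X: sent {len(message)} bytes"

def send_status(radio: V2XRadio, payload: bytes) -> str:
    # Application code depends only on the abstract interface, so the
    # ITS-G5 vs. C-V2X decision can be deferred or revised later.
    return radio.broadcast(payload)
```

With this design choice, swapping `ItsG5Radio` for `CellularV2XRadio` leaves all application code untouched.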
Two bigger topics, cyber security and privacy, are admittedly a challenge for a
connected car. Examples and analyses of cyber-attacks on cars can be found in the
recently published Keen report [8] and in the research paper from Computest [9].
However, this is a topic of its own and thus could not be considered in the course of this
paper.
Car manufacturing has been established for more than a hundred years by now. In 1910,
Henry Ford created the first mass production process for the Ford Model T, and over time
the engineering processes have matured. Automotive engineers became used to working
along the V-Model [10], which typically starts with a comprehensive requirement
specification, from which the system design is derived and further broken down into
component design, and so on.
For automotive start-ups, where a car is conceptualized and designed from a business
idea, the engineering process starts significantly earlier than the requirement
specification for the actual vehicle. Connected services usually require specifications
of which the actual vehicle is only a part (or sub-system). Customer and solution
engineers have to work in a phase where uncertainty is high and knowledge is low.
FEV and share2drive found the third Agile Manifesto value, “customer col-
laboration over contract negotiation” [11], a very useful approach to cope with such
uncertainty. Like the Agile Software Development methods, which approach com-
plexity with an iterative process, we have accepted that changes will happen in the early
stages of a product design, and we therefore tackled complex tasks in iterations, as
illustrated in Fig. 6.
We formed teams in which project and solution engineers work together with
business representatives from the customer. The goal of the first phase was to understand
the business ideas: we talked about Unique Selling Points (USPs), revenue streams and
storyboards [12]. The purpose was to create a common understanding of the product
and its users. The engineers learned what the customer envisions, and the customer
benefitted from the questions of the experts.
With the knowledge gained from the first iteration, managing further iterations
became significantly simpler. The use case definition can be a diligent but routine
piece of work. As before, experts worked together with customer representatives in
this phase. The outcome was a number of descriptions of how the product will be
used by its users, covering activities and data flow. A user can be anyone interacting
with the car. The goal of this phase was to generate a common and documented mutual
understanding of the functions and features of the final product. An example of a
template for such a use case description can be found in Fig. 7.
Next, we started creating a product feature list and prioritized it. Priorities can be
driven by various factors: for example, by the uniqueness of a feature, while other
features are mandated by legislation. Priorities are influenced by the price of a feature,
too, so we had to start adding a price tag to features quite early in the process.
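The prioritization described above can be sketched as a simple scoring rule: legally mandated features always rank first, and the remaining ones trade uniqueness against price. The feature names, scales and weighting below are purely illustrative and were not taken from the project:

```python
# Hypothetical feature records: uniqueness on a 1-5 scale, price in arbitrary units.
features = [
    {"name": "WLAN hotspot", "uniqueness": 2, "legal_must": False, "price": 3},
    {"name": "eCall", "uniqueness": 1, "legal_must": True, "price": 2},
    {"name": "NFC fallback entry", "uniqueness": 4, "legal_must": False, "price": 1},
]

def priority(feature: dict) -> float:
    if feature["legal_must"]:
        return float("inf")  # mandated by legislation: always on top
    # higher uniqueness raises the priority, a higher price tag lowers it
    return feature["uniqueness"] / feature["price"]

ranked = sorted(features, key=priority, reverse=True)
```

Here the mandated feature sorts first, followed by the cheap-but-unique one; in practice, the scoring function would be negotiated with the customer in each iteration.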
Since we did not intend to develop all solutions ourselves, market research was
required to deliver input on pricing and potential solutions.
The three steps above helped to funnel business ideas into a list of the most valuable
features for a first Minimum Viable Product (MVP) [13].
The final iteration, the “Function Decomposition”, was necessary because the
features need to be integrated into an overall architectural concept spanning various
functional domains such as battery and powertrain design. The overall system design
requires a function and component decomposition of each sub-system, so that the
interfaces and relations of all components can be defined.
As said, the approach sketched so far is described for the sub-system “Connec-
tivity”. But connectivity is only one sub-system out of many: there are, for instance,
also ADAS features, a battery solution and a powertrain that need to integrate
smoothly into the overall product (Fig. 8). To maintain the system view from
the beginning, all sub-systems need to be integrated frequently. The whole process of
funneling into feature and component lists and integrating into the overall concept was
again executed in iterations, so that the learnings from each iteration could feed the
next one.
Fig. 8. Connectivity is only one sub-system out of many: all sub-systems need to integrate
smoothly into the overall product
7 Conclusion
In this paper we have outlined our approach to defining a “Connected Car” and have
introduced the business model for “SVEN”, a car dedicated to sharing. Connectivity
has been shown to be a key enabler for turning cars into mobile IT devices, equipped
with digital services similar to those on smartphones.
The engineering approach for such a novel vehicle design needs to be significantly
different from an approach applicable to a well-specified traditional one, because the
early phases in a start-up business are dynamic and new ideas need the chance to find
their way into the concept. We have described our approach to funnel ideas into use
cases and into a prioritized feature list for the connectivity sub-system. For scaling the
approach to many sub-systems, we sketched how feature and component
decomposition enables the integration into the overall system: a Public Personal
Vehicle (PPV).
References
1. Gates, B.: Information at Your Fingertips. Comdex Keynote Speech (1995)
2. C-ITS Platform: Final Report (2016). https://ec.europa.eu/transport/sites/transport/files/themes/its/doc/c-its-platform-final-report-january-2016.pdf
3. Flore, D. (5GAA Director General): 5G V2X, The Automotive Use-Cases for 5G. http://www.3gpp.org/ftp/Information/presentations/Presentations_2017/A4Conf010_Dino%20Flore_5GAA_v1.pdf
4. Rebbeck, T., Stewart, J., Lacour, H.-A., Killeen, A. (Analysys Mason), McClure, D., Dunoyer, A. (SBD Automotive): Socio-Economic Benefits of Cellular V2X (2017). http://5gaa.org/wp-content/uploads/2017/12/Final-report-for-5GAA-on-cellular-V2X-socio-economic-benefits-051217_FINAL.pdf
5. LeMahieu, C.: Nano: A Feeless Distributed Cryptocurrency Network. Technical Paper (2017). https://nano.org/en/whitepaper
6. 5G Automotive Association: The Case for Cellular V2X for Safety and Cooperative Driving (2016). http://5gaa.org/wp-content/uploads/2017/10/5GAA-whitepaper-23-Nov-2016.pdf
7. Heinrichs, B. (Executive VP & Chief Digital Officer Automotive, Bosch): “Geht doch!”. brand eins economy magazine 04/2018 (2018)
8. Keen Security Lab: Experimental Security Assessment of BMW Cars: A Summary Report (2018)
9. Computest: The Connected Car—Ways to get unauthorized access and potential implications. Research Paper (2018)
10. Verein zur Weiterentwicklung des V-Modell XT e.V. (Weit e.V.): V-Modell XT, Das deutsche Referenzmodell für Systementwicklungsprojekte, Version 2.2
11. Manifesto for Agile Software Development. http://agilemanifesto.org/
12. Pichler, R.: Agile Scenarios and Storyboards (2013). https://www.romanpichler.com/blog/agile-scenarios-and-storyboards/
13. Robinson, F. (Syncdev): MVP—Minimum Viable Product (2016). http://www.syncdev.com/minimum-viable-product/
Innovation Strategy
Trends and Challenges of the New
Mobility Society
Sakuto Goda
Abstract. This article outlines market trends, customer needs and challenges
that the automotive industry will face in achieving electric, autonomous and
shared mobility: For policy makers and the automotive industry, the trend towards
electrification seems to be agreed among stakeholders; however, there are still
major challenges, for example shortages of electricity, batteries and production
equipment in some regions. The other topic is autonomous driving: according
to a worldwide consumer survey conducted by NRI, the acceptance and needs of
customers vary from society to society. The spread of shared mobility also
depends on the maturity of the taxi industry. While the coming transformation
will be significant and affect the global market, regional, cultural and social issues
need to be considered.
1 Electrification
1.2 Challenges
This chapter outlines the challenges related to the introduction of electric vehicles
across the value chain, focusing especially on the pure-electric vehicle.
According to the survey, in customers’ perception EVs are considered an eco-
friendly but also a pricey vehicle option, while the driving mileage is seen as a factor of
minor importance for the consumer’s decision to buy or not buy an EV (see Fig. 2).
Fig. 2. The reasons why consumers want/do not want to buy an electric car. Source: NRI
consumer survey 2017
Although the cost of batteries can be reduced as production volumes increase,
there are foreseeable and critical issues across the value chain to overcome before
electrified vehicles become a market reality.
industry has built since 1991, 50 GWh. In other words, the optimistic market penetration of xEV requires tens of giga-factories every year.
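The order of magnitude of this claim can be checked with a back-of-envelope calculation. Note that the production volume and pack size below are our own illustrative assumptions; only the roughly 50 GWh of cumulative industry capacity comes from the text:

```python
# Illustrative assumptions (not figures from the paper, except the ~50 GWh):
evs_per_year = 20e6        # assumed annual xEV production at high penetration
kwh_per_pack = 50          # assumed average battery pack size in kWh
factory_gwh_per_year = 50  # one "giga-factory" output, comparable to the
                           # ~50 GWh the industry built up since 1991

demand_gwh = evs_per_year * kwh_per_pack / 1e6   # annual cell demand in GWh
factories_needed = demand_gwh / factory_gwh_per_year
# 1000 GWh of cells per year, i.e. the annual output of about 20 such factories
```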
Besides, the production equipment for batteries is often manufactured by medium-sized
companies located in Asian countries. The required investment is at least as
significant a burden for those manufacturers as for auto makers and battery suppliers
(Fig. 3).
Fig. 4. Consumers’ need for driving mileage of pure electric vehicles. Source: NRI consumer
survey 2017
The lines differ from country to country; however, in a world where 100% of the
vehicles on the road are pure electric vehicles, more than twice the current power
generation capacity would be needed. For the 20% EV penetration case, where 4.4
million EVs are in operation, it seems more realistic to balance the supply and demand
of electricity.
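A rough estimate can illustrate why the 20% case looks manageable. The mileage and consumption figures below are our own assumptions; only the 4.4 million fleet size comes from the text:

```python
fleet_size = 4.4e6     # EVs in operation (20% penetration case from the text)
km_per_year = 10_000   # assumed annual mileage per vehicle
kwh_per_km = 0.18      # assumed average consumption

annual_demand_twh = fleet_size * km_per_year * kwh_per_km / 1e9
# roughly 8 TWh per year, a small fraction of a large country's annual generation
```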
1.3 Conclusion
Many stakeholders, governments and industry alike, now declare that they are moving
toward electrification; however, the challenges concern not only car manufac-
turing and the cost of batteries [2], but the entire eco-system and supply chain. A holistic
approach and significant effort, as well as investment across the value chain, are required.
This section introduces some findings from a consumer survey on autonomous driving
and shared mobility.
Fig. 6. Ranking of what consumers would want to do inside autonomous driving cars.
Source: NRI consumer survey 2017
Fig. 7. Acceptance of shared mobility services. Source: NRI consumer survey 2017
3 Conclusion
As seen in the findings from the consumer survey and analysis, the entire journey
toward the new mobility society requires extensive effort to solve issues across the
value chain and society.
References
1. Kazama, T., Suzuki, K., Zhang, D., Yoshihashi, S.: Electrification and its impact on the
supporting industries. Knowl. Creat. Integr. 25, 14 (2017)
2. Goda, S., Fujita, A., Hirano, Y., Suzuki, K.: Development of automotive battery for new
generation vehicles. NRI Knowl. Insight 34, 2–3 (2014)
Roadmap for Accelerated Innovation in Level
4/5 Connected and Automated Driving
1 Introduction
Field operational tests and pilot projects with vehicles capable of fully automated
driving or self-driving functionalities have started in cities and regions all around
Europe and the world. In particular, autonomous on-demand shuttles and robot taxis
are popular among policy makers and city planners, both in the U.S. [1] and in Europe
[2]. The reasons are manifold: such vehicles may provide a cost-efficient opportunity to
fulfill obligations in public transport, particularly for the last mile; they use road space
more efficiently and thus reduce the number of cars on the road. Furthermore, they
show the way towards an IT-enabled future of shared transportation of people, goods,
and probably equipment and services. Therefore, it can be expected that such vehicles
will have a high disruptive innovation potential in mobility [3].
Equipped with advanced systems for environment perception and decision making,
automated vehicles conventionally follow a reactive bottom-up safety paradigm. Like
humans, such systems may fail. There are opportunities for making an automated car
close to 100% safe by a more proactive, communication based approach [4]: One could
equip the infrastructure with sensors that “look around the corner” and tell the car what
they see, and one could further advance the artificial intelligence of the control system
to better understand particular traffic scenes, e.g. whether a pedestrian standing at the
curb will cross a road or not. One could also aim for a top-down safety concept, limit
the use of automated vehicles to fenced lanes, or apply control from a central traffic
manager. Whether and when a specific solution will be feasible depends more on
economics and regulations than on the technical concept.
The purpose of this paper is to report on the findings concerning the interplay of
technical and non-technical factors of innovation in level 4/5 automated driving made
by the Coordination and Support Action entitled “Safe and connected automation in
Road Transport” (SCOUT) that the European Commission funded between July 2016
and June 2018 [5]. The project’s objectives comprised:
• To identify pathways for an accelerated proliferation of safe and connected high-
degree automated driving (SAE 3-5).
• To take into account user needs and expectations, technical and non-technical gaps
and risks, viable business models as well as international cooperation and
competition.
• To help the automotive, telecommunication and digital sectors join forces and
agree on a common roadmap.
The consortium, which was coordinated by VDI/VDE-IT, included Renault, FCA,
BMW, Bosch, NXP, Telecom Italia, NEC, RWTH, Fraunhofer, CLEPA, and Sernauto.
A number of public expert workshops with external stakeholders representing supply
and demand side of technology development, and particularly individual user groups
were organized, and steps towards a comprehensive roadmap were taken. For the
creation of the roadmap, a story mapping process was applied that started from
analyzing the innovation context, then defined a future vision, analyzed the state of
the art, and finally identified opportunities and hurdles as well as ways to close the
“gap” between state of the art and vision with concrete actions. It can be expected that
the SCOUT project, by its structured and comprehensive approach, will add cohesion
and insight to the diverse landscape of initiatives for building a common European
Strategy on CAD [6].
perspectives, the SCOUT project nevertheless found a number of high but common
expectations: zero fatalities, no traffic jams, productive travel time, social inclusion,
reduced operation costs, and vanishing borders between the transport modes. Consequently,
when asked about their future vision on CAD, users sketched an ambitious picture.
From their point of view, the basic idea of CAD is strongly connected with the concept
of seamless mobility of people and goods on demand. Ideally, the implementation of
such a concept should ensure that no compromises are made on safety, that solutions
are effective and affordable, and that they save or free time for the user. Asked about
specific solutions that would embody the key elements of the vision, users referred to a
great number of advanced ideas, including robot taxis, universally designed vehicles
and services, logistic hubs, as well as connected traffic systems and more. Putting those
potential solutions on a simplified map of application scopes, starting from urban via
suburban, rural and interurban environments towards the international area, the great
diversity of use cases becomes evident. Actually, there are four areas of particular
interest, namely mobility as a service, passenger transport, goods delivery and
infrastructure. It turns out that in level 4 and 5 automated driving, the essence of the
common future vision lies in the different use cases. The technical challenges are
very similar, though, and may be solved by smart systems that combine sensing with
connectivity and intelligent decision-making [8]. However, due to a complex interplay
of technical and non-technical issues, advanced automated or self-driving cars have not
yet reached full maturity, oftentimes lack a viable business case, and are not yet
allowed on public roads. Hence, the process of roadmap development could be expected
to be particularly troublesome.
The analysis of the state of the art of high-level connected and automated driving
carried out by the SCOUT project was structured with reference to a five-layer model:
besides the technical layer as a basis for connected and automated driving functions,
further layers describe the relevant non-technical issues, i.e. human factors, economic,
legal, and societal aspects. The layers are strongly interlinked, and each covers
three interrelated topics: the driver (or passenger), the vehicle, and the
environment.
The in-depth analysis primarily focused on the technical, the legal and the
economic layer, as reported elsewhere [9], though all layers were covered by the
project’s activities. Regarding the state of the art of CAD on the technical layer, the
SCOUT project distinguished three major functional domains: environment perception
(“sense”), decision making (“think”), and control (“act”). It was concluded that
technical solutions have already been found for most issues, even though some significant
challenges remain, e.g. sensing under adverse weather and lighting conditions, decision
making that fully acknowledges the intentions of people on the road, and control with
fail-operational capabilities. Moreover, the availability of digital infrastructure for
connectivity and communication turned out to be critical for making CAD a safe product,
even though discussions about whether it is a necessary rather than just a sufficient
condition, particularly in complex urban environments, are ongoing. It was also
concluded that awareness of the cyber security issues of CAD exists, as for level 4/5 all
control functions are safety critical; concepts for long-term protection are missing, though.
For the state of the art of CAD in the legal layer, it was concluded that the Vienna
Convention, which most European countries have ratified and turned into national law,
now covers level 3 automation due to an amendment that entered into force in early
2016 [10], but not yet levels 4 and 5. National regulations may grant exceptions,
however, e.g. for testing.
On the state of the art of CAD at the economic layer, a number of CAD use cases
were analysed regarding value proposition, value creation partners, and monetization
potential, e.g. valet parking, truck platooning and automated on-demand shuttles.
Aiming to map out the paths towards the users’ ambitious future vision of CAD while
acknowledging the state of the art, the SCOUT project took a structured and
comprehensive story mapping approach to roadmap development: the five-layer model
that had already been found appropriate for describing the state of the art was applied
to build an action plan for level 4/5 automated driving. At two public workshops involving
dedicated experts for all five layers (technical, social, economic,
human factors, legal), gaps between the state of the art and the vision were recognized,
and actions were identified for each layer, linked to actions in other layers, and aligned on
the time scale. While the outcome was a close-to-complete list of research, innovation
and framework needs that complemented one another, it completely lacked coherence.
On the contrary, the links that the experts indicated between the actions revealed that
technical and non-technical challenges are highly related to each other, with many
actions requiring the outcome of others before they can start. The many interdepen-
dencies lead to locked-in situations, creating a kind of Gordian knot. This indicates that
the development and deployment of level 4/5 CAD may be heavily delayed if it is not
comprehensively coordinated. This is a typical feature of complex innovation processes
that comprise a number of technical and non-technical dimensions. The SCOUT project
consortium therefore concluded that, to deliver useful indications, the roadmap
approach needed to be distinct not just for the five layers but for specific use cases, and
focused on well-defined milestones on the way towards the vision. Supposedly, such use
case-specific and targeted roadmaps could help to anticipate roadblocks and highlight
agile shortcuts, enabling an accelerated innovation process.
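The “Gordian knot” the project observed can be made concrete: if the actions and their “requires the outcome of” links are treated as a directed graph, a dependency cycle means no action can be scheduled first. A small sketch using Kahn’s topological sort; the action names are invented for illustration and are not taken from the SCOUT roadmaps:

```python
from collections import defaultdict, deque

def schedulable_order(edges):
    """Kahn's topological sort; returns None if the dependencies form a cycle."""
    graph, indegree, nodes = defaultdict(list), defaultdict(int), set()
    for before, after in edges:   # 'after' requires the outcome of 'before'
        graph[before].append(after)
        indegree[after] += 1
        nodes |= {before, after}
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order if len(order) == len(nodes) else None

# A circular dependency between hypothetical roadmap actions: a locked-in knot.
knot = [("adapt regulation", "run pilots"),
        ("run pilots", "prove business case"),
        ("prove business case", "adapt regulation")]
assert schedulable_order(knot) is None  # no action can start on its own
```

Restricting the graph to one use case, or inserting a milestone that breaks a cycle, makes the remaining actions schedulable again, which mirrors the project’s conclusion.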
The SCOUT project succeeded in solving the Gordian knot of locked-in interdependencies
between required actions that occurred when it tried to describe the innovation path
towards level 4/5 connected and automated driving in terms of a comprehensive
roadmap covering technical, social, economic, human factors and legal aspects. For this,
a clear distinction of use cases and a focus on milestones were key. The roadmaps on
automated on-demand shuttles, truck platooning, delivery robots, valet parking, and
traffic jam chauffeur resulting from the SCOUT project are highly relevant in view of the
European Commission’s ambition to become a world leader in connected and automated
driving as stated in a strategy communication that was launched with the 3rd mobility
package recently [12]. According to that strategy, low-speed self-driving urban
shuttles and delivery vehicles, for example, may be available on European streets from
2020 on, though further development of those technologies will take yet another decade.
Even though the SCOUT roadmap cannot be more specific on the actual timeline, it
points out the necessary actions on the five layers of the plan and highlights
opportunities for accelerated innovation. Thereby, it will be an important input to the ongoing
process of building an implementation plan of the Strategic Transport Research and
Innovation Agenda (STRIA) on Connected and Automated Driving that the European
Commission has launched. The methodology and the results of the project may be
applied to related topics in the near future, e.g. to assessing the potential synergies of
electrification and automation at technology and application levels, and to describing
the options of technology transfer from the 2-dimensional road transport domain to the
3-dimensional world of taxi and delivery drones.
Acknowledgements. The authors are grateful for fruitful cooperation with the contractual
partners of the Coordination and Support Action “Safe and Connected Automation of Road
Transport” (SCOUT), i.e. Luisa Andreone and Leandro D’Orazio (CRF), Franz Geyer (BMW),
Yves Page (Renault), Roland Galbas and Andi Winterboer (Bosch), Steven von Bargen (NXP),
Giovanna Larini (TIM), Roberto Baldessari and Francesco Alesiani (NEC Europe), Devid Will
and Adrian Zlocki (RWTH), Heiko Hahnenwald and Thilo Bein (Fraunhofer LBF) and Beatrice
Tomassini and Alessandro Coda (CLEPA). Important inputs were provided by Jochen Langheim,
Benjamin von Bodungen, Wolfgang Gruel, Suzanne Hoadley, Stella Nikolaou, Natasha Merat,
Alizee Stappers, Klemen Kozelj, Wolfgang Schulz as well as members of the CARTRE project
and EPoSS and ERTRAC. Roadmap designs were created by Juliane Lenz from Berlin.
The SCOUT project has received funding from the EU’s Horizon 2020 programme under grant
agreement No 713843.
References
1. Smart Cities Challenge. U.S. Department of Transportation (2017)
2. Meyer, G.: Policy Trends from the Proposals Under the Topic of Urban Mobility. Urban
Innovative Actions, Lille (2017)
3. Meyer, G., Shaheen, S. (eds.): Disrupting Mobility—Impacts of Sharing Economy and
Innovative Transportation on Cities. Springer, Cham (2017)
4. Safer Roads with Automated Vehicles: International Transport Forum, OECD (2018)
5. www.connectedautomateddriving.eu/about-us/scout/
6. Meyer, G.: European roadmaps, programs, and projects for innovation in connected and
automated road transport. In: Meyer, G., Beiker, S. (eds.) Road Vehicle Automation 5.
Springer, Cham (2018)
7. Müller, B., Meyer, G. (eds.): Towards User-Centric Transport in Europe. Springer, Cham
(2018)
8. Dokic, J., Müller, B., Meyer, G. (eds.): European Roadmap Smart Systems for Automated
Driving. European Technology Platform on Smart Systems Integration (EPoSS) (2015)
9. Will, D., Eckstein, L., van Bargen, S., et al.: State of the art analysis for connected and
automated driving within the SCOUT project. ITS World Congress (2017)
10. UNECE Paves the Way for Automated Driving by Updating UN International Convention.
Press Release, UNECE, 23 March 2016
11. Zachäus, C., Wilsch, B., Dubbert, J., Meyer, G.: A comprehensive roadmap for level 4/5
connected and automated driving in Europe. Poster. In: Automated Vehicles Symposium
(2018)
12. On the Road to Automated Mobility: An EU strategy for mobility of the future. European
Commission, COM 2018 (283)
Author Index

A
Ahiad, Samia, 75
Alessandrini, Adriano, 69
Andert, Franz, 31
Aydemir, Eren, 75

B
Bercier, Emmanuel, 3
Bernardin, Frédéric, 3
Bourne, Emily, 90
Brémond, Roland, 3
Brunet, Johann, 3

C
Cassignol, Olivier, 3
Clement, Philipp, 75
Correa, Alejandro, 31
Cristiano, Alessia, 139

D
Dalmasso, Davide, 139
Danescu, Radu, 16
Derse, Cihangir, 75
Dorri, Ali, 111
Druml, Norbert, 75
Dubbert, Jörg, 183

E
Elrofai, Hala, 123
Enhuber, Stephan, 43

F
Fellmann, Michael, 111
Festl, Andreas, 111

G
Goda, Sakuto, 175
Gopi, Sajin, 75
Gromala, Przemyslaw, 56
Groppo, Riccardo, 75, 139

H
Hager, Martin, 56

I
Innerwinkler, Pamela, 75
Irzmański, Paweł, 153
Itu, Razvan, 16

J
John, Reiner, 139

K
Kahrimanovic, Elvir, 139
Kaiser, Christian, 111
Kanhere, Salil, 111
Karci, Ahu Ece Hartavi, 75
Khan, Saifullah, 31
Kinav, Emrah, 75
Kottig, Jörg, 162
Kras, Bartłomiej, 153
Krune, Edgar, 123
Kwiatkowski, Maciej, 153

L
Lampic, Gorazd, 139
Leduc, Patrick, 3
Leinmueller, Tim, 90

M
Macher, Georg, 75
Macke, Dirk, 162
Manuzzi, Marco, 97
Metzner, Steffen, 75
Meyer, Gereon, 183
Mittal, Prachi, 90

N
Nahler, Caterina, 75
Nicolas, Adrien, 3
Nikolaou, Stella, 97

O
Otto, Alexander, 139
Ozan, Berzah, 75

P
Pech, Timo, 43
Perelli, Paolo, 139
Pielen, Michael, 162
Pinchon, Nicolas, 3

R
Rzepka, Sven, 56

S
Sahimäki, Sami, 75
Sanna, Alberto, 139
Schindler, Julian, 31
Schmidt, Gerald, 43
Sorniotti, Aldo, 139
Steger, Marco, 111
Stettinger, Georg, 75
Stocker, Alexander, 111
Symeonidis, Ioannis, 97

T
Tarel, Jean-Philippe, 3
Tarkiainen, Mikko, 75
Troglia, Micaela, 75
Trojaniello, Diana, 139

W
Wandtner, Bernhard, 43
Wanielik, Gerd, 43
Watzenig, Daniel, 75
Wijbenga, Anton, 31
Wilsch, Benjamin, 123, 183
Wojke, Nicolai, 31
Wunderle, Bernhard, 56

Z
Zachäus, Carolin, 183
Zanovello, Luca, 97
Zaya, Johan, 75