02 EXPERIMENTS
03 ALGORITHMS
04 RESULTS
Page 2
INTRODUCTION
Driving Anger And Distraction Recognition And Analysis
2018.07-2018.12
Collaborators: BMW China, Chinese Academy of Sciences
Research Objectives
Based on Tongji's 8-DOF driving simulator, record driving behaviour, facial expressions, and voice
features that are sensitive to changes in driver state, including driving anger, cell phone
usage, and cognitive distraction;
Train machine-learning-based algorithms for the detection and recognition of driver attention
and emotion
EXPERIMENTS
30 PARTICIPANTS
PRE-QUESTIONNAIRES
Driving Behavior (BYNDS): Violations, Risky Driving Behavior, Situation Awareness,
Judgment and Emotion Control Abilities
QUESTIONNAIRE ANALYSIS (GENDER)
EXPERIMENT
Map: two-lane rural freeway with four tunnels marked in red (白尖山一号, 白尖山二号, 龙洞冲, 狮子庵)
EXPERIMENT
3 drives, 12 min each, speed 60-100 km/h:
Baseline (no subtasks)
Distraction (answer questions, read messages, answer a phone call)
Anger (recall anger experiences; self-report an anger score every 2 min)
Classification pipeline:

Model 1: Distracted? -- Y --> Model 3: Which sub-task?
                              (Texting / Cognitive distraction)
                     -- N --> Model 2: Anger? -- Y --> Anger
                                              -- N --> Normal

Model 1: Distraction or Not
Model 2: Anger or Not
Model 3: Which Type of Distraction
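The cascade above can be sketched in Python; the classifier objects, the feature vector, and the scikit-learn-style predict interface are assumptions for illustration, not the project's actual code:

```python
# Hypothetical sketch of the three-model cascade.
# model1: distracted vs. not (1 = distracted)
# model3: distraction sub-task (e.g. "texting" or "cognitive")
# model2: angry vs. not (1 = angry)
def classify_frame(features, model1, model2, model3):
    if model1.predict([features])[0] == 1:
        # Distracted: Model 3 resolves the distraction sub-task.
        return model3.predict([features])[0]
    elif model2.predict([features])[0] == 1:
        # Not distracted but angry.
        return "anger"
    else:
        # Neither distracted nor angry.
        return "normal"
```

Only frames flagged as non-distracted by Model 1 ever reach Model 2, which mirrors the branching in the flowchart.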
MACHINE LEARNING METHODS
SUPPORT VECTOR MACHINE
Voice Feature
Facial Pictures
Deep Learning
MODEL DEVELOPMENT (DRIVING BEHAVIOR)
SELECTED VARIABLES
Number Variable
1 Speed
2 Speed X
3 Speed Y
4 Longitudinal Acceleration
5 Calculated Lateral Acceleration
6 Gas Pedal
7 Brake Pedal Force
8 Pitch Ground
9 Yaw Ground
10 Yaw Acceleration
11 Yaw Speed
12 Steering Wheel Speed
13 Lateral Shift
14 Banking
PROJECT DELIVERABLES
APPROACH 1: DRIVING BEHAVIOR FEATURE EXTRACTION
The 11 newly generated feature types were combined with the 14 original features, so 25 features
were eventually used.
PROJECT DELIVERABLES
APPROACH 1: MODEL RESULTS
Support Vector Machine was selected after comparison with Random Forest.
A linear kernel function was selected after comparison with the polynomial, radial basis function, and sigmoid kernels.
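The kernel comparison can be reproduced in outline with scikit-learn; the synthetic data below is only a stand-in for the real 25 driving-behaviour features, and the cross-validation setup is an assumption:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 25 driving-behaviour features.
X, y = make_classification(n_samples=600, n_features=25, random_state=0)

# Compare the four candidate kernels by cross-validated accuracy.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel:8s} mean CV accuracy: {scores.mean():.3f}")
```

Standardising before the SVM matters here, since the raw driving signals (speed, pedal force, yaw rate) live on very different scales.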
Experiment                           Training set  Test set  Model    Accuracy
Experiment 1                         1600 ± 184    320 ± 39  Model 1  95.7%-99.1%
(individual participant)                                     Model 2  91.3%-97.8%
                                                             Model 3  84.2%-97.5%
Experiment 2                         51739         10348     Model 1  98.3%
(stratified sampling of all                                  Model 2  95.9%
participants, 5 train : 1 test)                              Model 3  87.1%
Experiment 3                         50952         11135     Model 1  92.5%
(25 participants as train set,                               Model 2  78.0%
5 participants as test set)                                  Model 3  75.6%

Model 1 (Distraction); Model 2 (Anger); Model 3 (Distraction Types: Cognitive, Call, Message)
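Experiment 3 holds out whole participants rather than individual samples, which is a group-wise split. A sketch with scikit-learn's GroupShuffleSplit, on synthetic data with assumed sample counts:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_per_participant = 100                                      # assumed, for illustration
participants = np.repeat(np.arange(30), n_per_participant)   # 30 participants
X = rng.normal(size=(len(participants), 25))                 # 25 features per sample

# Hold out 5 of the 30 participants entirely, as in Experiment 3.
splitter = GroupShuffleSplit(n_splits=1, test_size=5, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=participants))

train_ids = set(participants[train_idx])
test_ids = set(participants[test_idx])
assert train_ids.isdisjoint(test_ids)   # no participant leaks across the split
print(len(train_ids), "train participants,", len(test_ids), "test participants")
```

The accuracy drop from Experiment 2 to Experiment 3 in the table is typical of this kind of split: the model must generalise to unseen drivers instead of unseen samples from known drivers.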
MODEL VARIABLES
Finally, 25 variables were put into the model.
Feature                     Abbr.       Definition                                                      No.
Sliding time windows        w1, w2, w3  Sliding windows at different scales, defined by time            15-17
                                        intervals of 1 s (w1), 5 s (w2), and 10 s (w3); each window
                                        returns time-series data
Difference                  dif         Behaviour and interaction between the subject vehicle and the   18
                                        preceding vehicle: D_{i-1}(t) - D_i(t)
Percentage change           pct         Percentage change of a variable per second:                     19
                                        (D(t) - D(t-1)) / D(t-1) * 100
Log ratio                   logr        Calculated as log(D(t) / D(t-1))                                20
Simple moving average;      sma; msd    Mean MA_i(t) and standard deviation of the time-series data     21; 22
moving standard deviation               within a moving window [t - w, t]
Relative standard deviation rsd         Relative variability, a unitised measure: the ratio of the      23
                                        standard deviation to the mean
Bias ratio                  emar        (X - MA) / MA, where MA is the exponential moving average       24
                                        under different time windows
Dynamic time warping        dtw         DTW algorithm measuring the similarity between two temporal     25
                                        sequences
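Most of the window-based features in the table map directly onto pandas rolling and exponentially weighted operations. A sketch on a synthetic speed signal; the window length and signal are illustrative, not the project's data:

```python
import numpy as np
import pandas as pd

# Illustrative speed signal (km/h) sampled at 1 Hz.
d = pd.Series(np.linspace(60, 100, 120) +
              np.random.default_rng(0).normal(0, 1, 120))

w = 10                          # 10 s window (w3 in the table)
sma  = d.rolling(w).mean()      # simple moving average
msd  = d.rolling(w).std()       # moving standard deviation
rsd  = msd / sma                # relative standard deviation (std / mean)
pct  = d.pct_change() * 100     # (D(t) - D(t-1)) / D(t-1) * 100
logr = np.log(d / d.shift(1))   # log ratio log(D(t) / D(t-1))
ema  = d.ewm(span=w).mean()     # exponential moving average
emar = (d - ema) / ema          # bias ratio (X - MA) / MA
```

The dif and dtw features need two signals (subject and preceding vehicle), so they are omitted here; DTW would typically come from a dedicated library rather than pandas.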
RESULTS
DEEP LEARNING ALGORITHMS
4 types of pictures are labelled:
Normal, Phone Call, Message, Anger
PROJECT DELIVERABLES
APPROACH 2: METHODOLOGY
The deep learning library Keras was used, running on top of TensorFlow.
Algorithms ran on a Quadro P4000 GPU and an 8th-generation Intel i7 CPU.
A very deep convolutional neural network (CNN) with 22 layers was
implemented.
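A minimal Keras CNN classifier in this style, as a sketch only: the exact 22-layer architecture is not given here, so the depth, input size, and layer widths below are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative Conv/Pool stack ending in a 4-way softmax
# (Normal / Phone Call / Message / Anger). All sizes are assumptions.
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With integer class labels, sparse_categorical_crossentropy avoids one-hot encoding the four picture classes.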
PROJECT DELIVERABLES
APPROACH 2: RECOGNITION RESULTS
Figure: model fitting accuracy (y axis) vs. number of epochs (x axis)

                    Baseline  Anger    Distraction
Training dataset    340 × 5   340 × 5  340 × 5
Validation dataset  340       340      340
Test dataset        340       340      340

Accuracy: Model 1: 99.7%; Model 2: 99.8%; Model 3: 98.2%
• Model 1 (Distraction)
• Model 2 (Anger)
• Model 3 (Distraction sub-class: Cognitive, Call, Message)
THANK YOU FOR YOUR ATTENTION.
2018.12.30