
PhD Thesis Proposal: Collaborative Data Fusion for Augmented Perception
Keywords: vehicle perception, multi-sensor, collaborative perception, local representation,
unified representation, augmented representation, fusion with externally processed data,
detection, occlusions.

Thesis supervisors: Dr. Julien Moreau(1), Dr. Elwan Héry(2) and Prof. Vincent Frémont(2)

Laboratories
The candidate will split their time between two laboratories, Heudiasyc(1) and LS2N(2), following a
predefined schedule, with the periods spent in each lab arranged at the candidate's convenience
(a minimum of 12 months in each lab).

Expected beginning and duration
The PhD thesis will start in September 2022 for a period of 36 months.

Project
The candidate will be involved in the ANNAPOLIS ANR project (AutoNomous Navigation
Among Personal mObiLity devIceS).

PhD thesis description
Urban centers are increasingly occupied by new Powered Personal Mobility Platforms
(PPMPs: electric scooters, hoverboards, gyro-wheels, etc.), which are directly or indirectly the
source of unpredictable behaviors in the traffic environment. In such a context, autonomous
vehicles suffer from their limited perception, obtained only from on-board sensors (which are
constrained by the vehicle's own motion) and sometimes restricted in their measurement field by
bulky obstacles (buses, trucks, etc.) or an occluding environment (buildings or other urban
structures). In such situations, unforeseen and unexpected events arise from the presence of new
electric mobility systems, or from pedestrians with unstable behaviors, using (or not) new PPMPs
and respecting (or not) the traffic rules.

In this thesis, the main objective is to build a hybrid 2D/3D representation of the vehicle
environment containing different abstraction levels, including geometric (metric, topological and
occupancy grid maps), dynamic (multi-object detection and tracking) and semantic data [1, 2, 3,
4, 5, 6, 7], while compensating for intrinsic communication delays through information prediction
[8]. This digital representation will fuse, within a specific collaborative data structure, all the
information coming from the sensors and algorithms on board the vehicle and from remote
intelligent Road Side Units (RSUs). It will also contain specific fields to be used in the
decision-making and path-planning tasks.
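To make the idea concrete, here is a minimal Python sketch of what such a collaborative data structure could look like; all class and field names are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class DynamicObject:
    """One tracked road user (fields are illustrative assumptions)."""
    track_id: int
    position: np.ndarray      # 3D position in a shared map frame, metres
    velocity: np.ndarray      # 3D velocity, m/s
    semantic_class: str       # e.g. 'pedestrian', 'e-scooter', 'car'

@dataclass
class EnvironmentModel:
    """Hybrid 2D/3D representation mixing abstraction levels:
    a geometric layer (occupancy grid), a dynamic layer (tracked
    objects) and per-source timestamps, so that data arriving with
    a delay from remote RSUs can later be predicted forward."""
    occupancy_grid: np.ndarray                         # 2D grid of P(occupied)
    objects: List[DynamicObject] = field(default_factory=list)
    source_stamps: Dict[str, float] = field(default_factory=dict)
    # e.g. source_stamps = {'ego': 12.30, 'rsu_07': 12.18}
```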
The scientific challenges will focus on:

● Data structure and formalization
● Deep-learning-based multi-object tracking: trajectory prediction and velocity profiles
under communication constraints (delays); a toy sketch of delay compensation follows this list
● Multi-sensor (LiDAR/camera/IMU) data fusion architectures, both data-driven and
model-based.
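As a toy illustration of the second challenge, the sketch below forward-predicts a remote track with a constant-velocity model before fusion; the actual thesis would rely on learned prediction in the spirit of [8], and all names here are hypothetical:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class RemoteTrack:
    """Track received from an RSU (hypothetical message layout)."""
    position: np.ndarray   # (x, y) in a common map frame, metres
    velocity: np.ndarray   # (vx, vy), m/s
    stamp: float           # acquisition time at the RSU, seconds

def compensate_delay(track: RemoteTrack, ego_time: float) -> np.ndarray:
    """Predict the remote track forward to the ego clock so that
    delayed RSU detections can be fused with fresh on-board data."""
    delay = ego_time - track.stamp      # communication + processing delay
    return track.position + delay * track.velocity

# Example: a track 150 ms old, moving at 2 m/s along x,
# is shifted 0.3 m before being fused.
track = RemoteTrack(np.array([10.0, 5.0]), np.array([2.0, 0.0]), stamp=12.15)
print(compensate_delay(track, ego_time=12.30))   # -> [10.3  5. ]
```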

The experiments will be carried out both in simulation, with the CARLA simulator, and on real
platforms running ROS provided by the project's partners. In both laboratories, the candidate will
have access to robotised Renault Zoe cars and test tracks: the APAChE vehicles on the SEVILLE
test track at Heudiasyc, and the Zoe cars with the ICARS software environment at LS2N.
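For the simulation side, a minimal CARLA Python API sketch is shown below, assuming a CARLA server running locally on the default port; the chosen blueprint and sensor attributes are only illustrative:

```python
import carla

# Connect to a locally running CARLA server (default port 2000).
client = carla.Client('localhost', 2000)
client.set_timeout(5.0)
world = client.get_world()

# Spawn an ego vehicle at the first available spawn point.
blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.find('vehicle.tesla.model3')
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach a roof-mounted LiDAR, roughly matching an on-board setup.
lidar_bp = blueprints.find('sensor.lidar.ray_cast')
lidar_bp.set_attribute('channels', '32')
lidar_bp.set_attribute('range', '50')
lidar_tf = carla.Transform(carla.Location(z=2.5))
lidar = world.spawn_actor(lidar_bp, lidar_tf, attach_to=vehicle)

# Stream point clouds; a real pipeline would feed them to the fusion stack.
lidar.listen(lambda data: print(f'frame {data.frame}: {len(data.raw_data)} bytes'))
```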

Candidate’s profile
The candidate should hold a Master's degree (M2 level). Advanced knowledge is required in the
fields of robotics, computer vision, deep learning and multi-sensor data fusion, as well as an
advanced level in programming, e.g. Matlab, Python, C/C++ and the ROS middleware. Scientific
curiosity, strong autonomy, the ability to learn independently and deductive skills are also expected.

How to apply
Please send your application (CV, cover letter, contact details of your previous supervisors, one
or two recommendation letters, as well as your transcript with the grades of your Master's or
engineering degree) to Julien Moreau <julien.moreau@hds.utc.fr>, Elwan Héry
<elwan.hery@ec-nantes.fr> and Vincent Frémont <vincent.fremont@ec-nantes.fr>.

References
[1] R. Drouilly, P. Rives and B. Morisset, "Hybrid Metric-Topological-Semantic Mapping in
Dynamic Environments," in IROS, 2015.
[2] R. Drouilly, P. Rives and B. Morisset, "Semantic Representation For Navigation In
Large-Scale Environments," in ICRA, 2015.
[3] M. Meilland, A. Comport and P. Rives, "Dense omnidirectional RGB-D mapping of large
scale outdoor environments for real-time localisation and autonomous navigation," Journal of
Field Robotics, vol. 32, no. 4, pp. 474-503, 2015.
[4] R. Martins, E. Fernandez-Moral and P. Rives, "Dense Accurate Urban Mapping from
Spherical RGB-D Images," in IROS, Hamburg, Germany, 2015.
[5] T. Gokhool, R. Martins, P. Rives and N. Despre, "A Compact Spherical RGBD
Keyframe-based Representation," in ICRA, 2015.
[6] A. Loukkal, V. Frémont, Y. Grandvalet and Y. Li, "Improving Semantic Segmentation in Urban
Scenes with a Cartographic Information," in ICARCV, Singapore, 2018.
[7] V. Frémont, S. A. R. Florez and B. Wang, "Mono-vision based moving object detection in
complex traffic scenes," in IEEE Intelligent Vehicles Symposium (IV), 2017.
[8] T.-H. Wang, S. Manivasagam, M. Liang, B. Yang, W. Zeng and R. Urtasun, "V2VNet:
Vehicle-to-Vehicle Communication for Joint Perception and Prediction," in ECCV, 2020.
(1) Heudiasyc, UMR CNRS 7253, SYRI Team
Université de technologie de Compiègne
CS 60319 - 57 avenue de Landshut
60203 COMPIEGNE CEDEX - FRANCE
(2) LS2N, UMR CNRS 6004, ARMEN Team
Ecole Centrale de Nantes
1 rue de la Noë
44321 NANTES CEDEX - FRANCE
