
AHSAN HABIB

Address: Kirkland, WA 98033 · Mobile: (407) 495-0675 · Email: mdahsanhabib26@gmail.com


LinkedIn: https://www.linkedin.com/in/ahsanhabibphd

SUMMARY
Specialties: Computer Vision, Sensor Fusion, Machine Learning, Robotics, 3D Math
Fluent in: C++, OpenCV, Point Cloud Library, MATLAB, ROS, Gazebo, TensorFlow
Experienced in: Python, C#, LabVIEW, CANalyzer, SolidWorks, Keras
Immigration Status: Green Card holder

EDUCATION
University of Texas at Arlington, Arlington, TX, 2017
Ph.D. in Electrical Engineering; GPA: 3.778

EXPERIENCE
Huawei Technologies Co., Ltd., Research Engineer, Bellevue, Washington, Feb 2019 – Present
Proposed and developed new applications for the Android device platform that use the device camera and IMU
• Researched and prototyped a vehicle relocalization algorithm comprising Visual Odometry (VO), Simultaneous Localization and Mapping (SLAM), Structure-from-Motion (SfM) based 3D reconstruction, and place recognition components
• Prototyped SLAM in the Android C++ native layer for feature-based, real-time 6-DOF localization
• Researched and prototyped a deep learning based approach that takes RGB images as input and directly regresses the 6-DOF camera pose
• Adopted TensorFlow best practices for data pipelines, model deployment, optimization, and augmentation

Honda R&D Americas, Research Engineer, Ann Arbor, Michigan, Jan 2018 – Jan 2019
Developed and tested software and algorithms for autonomous and connected vehicles on a ROS/C++ platform, including:
• Sensor fusion and sensor error analysis
• Low-level communication and processing of RADAR, LiDAR, GPS, IMU, and camera data
• Designed and prototyped a lane line visualizer for adverse weather conditions using SLAM and lane line mapping
• Improved data association for cooperative localization of connected vehicles by 22% in congested scenarios
• Probabilistic state estimation (Kalman Filter variants, Particle Filter, etc.)
• Created a test plan, carried out on-road testing, and performed data analysis to develop the sensor fusion algorithm

University of Texas at Arlington, Research Assistant, Arlington, Texas, 2013 – 2017
Developed a fully automated algorithm that works on 3D LiDAR data to first detect and then segment out all objects (natural and artificial) present in the scene
• Used ArcGIS, MATLAB, C++, and the Point Cloud Library to visualize and process LiDAR point cloud data
• Performed object-based image analysis
• Used a graph-based algorithm for hierarchical segmentation

Designed and developed SkinSim: A Simulation Environment for Multimodal Robotic Skin
• Implemented using the ROS/Gazebo simulation infrastructure and C++
• Designed a multi-element spring-mass-damper system to simulate robot skin integrated with tactile sensors
• Implemented the system model in simulation and validated it against real-world robot interaction scenarios
• Contributed to the SkinSim open-source ROS repository

Designed and developed an algorithm that maps human facial expressions to an Android robot
• Implemented in C#; facial features extracted using the Microsoft Kinect SDK
• Learned the mapping from facial feature space to actuator space using a shallow neural network
