UTCN Computer Science Department

Artificial vision for autonomous driving
2009/2010, Master AIV

Sergiu Nedevschi, PhD
Professor in Computer Science
Technical University of Cluj-Napoca
15 C. Daicoviciu str., Cluj-Napoca, RO 400 020, Romania
Phone: (+40 264) 401 219; (+40 264) 401 456
Fax: (+40 264) 594491
E-mail: Sergiu.Nedevschi@cs.utcluj.ro

Each student has to choose two topics (one applicative and one theoretical). The topics will be prepared and presented by the students during classes on the scheduled dates. Please check periodically the Artificial Vision document in the Topics directory. Students who have not yet been scheduled are invited to request topics via e-mail.

Applicative Topics (references are available in the References folder):
01. Sensorial system and perception
02. Epipolar geometry
03. Object detection in disparity space
04. Object detection in 3D space
05. Good features to track
06. Optical flow and motion field
07. Ego motion estimation
08. Object tracking
09. Sensor fusion
10. Environment representation
11. Robot motion
12. Measurements
13. Mobile robot localization
14. Navigation path planning and obstacle avoidance

Theoretical Topics, from "Probabilistic Robotics" by S. Thrun, W. Burgard, D. Fox:
1 INTRODUCTION
2 RECURSIVE STATE ESTIMATION
3 GAUSSIAN FILTERS
4 NONPARAMETRIC FILTERS
5 ROBOT MOTION
6 MEASUREMENTS
7 MOBILE ROBOT LOCALIZATION
8 GRID AND MONTE CARLO LOCALIZATION
9 OCCUPANCY GRID MAPPING
10 SIMULTANEOUS LOCALIZATION AND MAPPING
11 THE EXTENDED INFORMATION FORM ALGORITHM
12 THE SPARSE EXTENDED INFORMATION FILTER
13 MAPPING WITH UNKNOWN DATA ASSOCIATION
14 FAST INCREMENTAL MAPPING ALGORITHMS
15 MARKOV DECISION PROCESSES
16 PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES

Reference textbooks:
1. E. Trucco, A. Verri, "Introductory Techniques for 3-D Computer Vision", Prentice Hall, 1998.
2. S. Thrun, W. Burgard, D. Fox, "Probabilistic Robotics", MIT Press.
3. R. Siegwart, I. Nourbakhsh, "Autonomous Mobile Robots", MIT Press, 2004.
