
Lahore University of Management Sciences

EE566 Optimal Control Theory

Fall 2017
Instructor Dr. Abubakr Muhammad
Room No. 9-351A
Office Hours Monday: TBA
Wednesday: TBA
Telephone +92(42)35608132
Lab Engineer/TA TBA
TA Office Hours TBA
Course URL (if any)

Course Basics
Credit Hours 03
Lecture(s): 1 per week, 150 min each
Recitation/Lab (per week): N/A
Tutorial (per week): N/A

Course Distribution
Elective for Electrical Engineering majors.
Open for Student Category: All
Close for Student Category: None

Course Description
The course prepares students for independent work at the frontiers of systems theory and control engineering. It builds on standard linear systems theory to explore issues of optimization, estimation, and adaptation. Students will learn to formulate and appreciate fundamental limitations in control, filtering, and estimation.
Topics include a review of linear control systems, static constrained optimization, calculus of variations, dynamic optimization, Bellman's principle of optimality, the maximum principle, two-point boundary value problems and Riccati equations, the linear quadratic regulator (LQR), learning and adaptation in controllers, and policy- and value-iteration.
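As a small taste of the discrete-time LQR material, the steady-state Riccati equation can be solved by simple backward iteration. The sketch below uses a scalar system in pure Python; the values a = b = q = r = 1 are illustrative assumptions, not taken from the course:

```python
# Scalar discrete-time LQR via backward Riccati iteration.
# System: x[k+1] = a*x[k] + b*u[k];  cost: sum of q*x^2 + r*u^2.
# All numerical values here are illustrative only.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

p = q  # initialize with the terminal cost weight
for _ in range(100):
    # Discrete-time Riccati recursion (scalar form)
    p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)

# Steady-state feedback gain for u = -k*x
k = (a * p * b) / (r + b * p * b)
print(p, k)  # p converges to (1+sqrt(5))/2, k to p/(1+p)
```

For this particular choice of weights the fixed point satisfies p^2 - p - 1 = 0, so the cost-to-go converges to the golden ratio; the same recursion, in matrix form, is what the course develops in weeks 3-4.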

Course Prerequisite(s)
EE561 Digital Control Systems, or by permission of the instructor.

Course Objectives
1. Understand fundamental limits of control & estimation

2. Use advanced mathematical techniques to formulate and solve control problems

3. Appreciate issues of robustness, optimality, architecture and uncertainty in control


4. Identify practical challenges in posing control problems

Learning Outcomes
At the end of the course, students will be able to:

1. Interpret, reproduce and create deep mathematical results for advanced control engineering.

2. Formulate various engineering design problems as mathematical optimization programs.

3. Numerically solve dynamic optimization problems using a computer package.

Grading Breakup and Policy

Mid-term Exam 1x 30%
Homework 4x 40%
Final Exam 1x 30%

Examination Detail

Midterm Exam
Yes/No: Yes
Combine/Separate:
Duration: 120 min
Preferred Date:
Exam Specifications:

Final Exam
Yes/No: Yes
Combine/Separate:
Duration: 180 min
Exam Specifications:


Week  Topic(s)  Reference(s)

1  Constrained optimization, Lagrange multiplier method. Parameter optimization. Examples.  Bryson.
2  Review of pole placement techniques, full-state feedback, observer design, reference tracking.  Franklin.
3  Discrete-time optimal control (LQR) as a constrained optimization problem, Riccati equation, origins of the two-point boundary value problem.  Franklin, Bryson.
4  Discrete-time optimal control (contd.). LQR steady-state optimal control, symmetric root-locus interpretation for optimality. Examples.  Franklin, Bryson.
5  Differentiability and calculus of variations.  Bamieh.
6  Constrained optimization in the Hilbert space.  Bamieh.
7  Optimal control and Pontryagin's maximum principle.  Bamieh.
8  Optimal control and Pontryagin's maximum principle (contd.).  Bamieh.
Eid Break
9  Two-point boundary value problems (TPBVP).  Bamieh.
10  Midterm Exam
11  A relook at continuous-time LQR.  Bamieh.
12  A relook at continuous-time LQR (contd.).  Bamieh.
13  Applied control. Trajectory generation and tracking, differential flatness, two-degrees-of-freedom design. Examples.  Murray.
14  A short encounter with receding horizon control: inverse optimality, control Lyapunov functions.  Murray.
15  Bellman's principle of optimality revisited, approximate dynamic programming, connections to reinforcement learning. Policy-iteration and value-iteration algorithms.  Lewis.
16  Paper presentations.  Lewis.
Final Exams
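As a small illustration of the value-iteration algorithm covered near the end of the schedule, the sketch below runs the Bellman backup on a hypothetical two-state, two-action MDP; all transition probabilities and rewards are made up for illustration and are not part of the course material:

```python
# Value iteration on a tiny hypothetical MDP (states 0 and 1).
# P[s][a]: list of (next_state, probability); R[s][a]: immediate reward.
P = {0: {'stay': [(0, 1.0)], 'move': [(1, 1.0)]},
     1: {'stay': [(1, 1.0)], 'move': [(0, 1.0)]}}
R = {0: {'stay': 0.0, 'move': 1.0},
     1: {'stay': 2.0, 'move': 0.0}}
gamma = 0.9  # discount factor

# Repeated Bellman backup: V(s) <- max_a [ R(s,a) + gamma * E[V(s')] ]
V = {0: 0.0, 1: 0.0}
for _ in range(500):
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s])
         for s in V}

# Greedy policy extracted from the converged value function
policy = {s: max(P[s], key=lambda a: R[s][a]
                 + gamma * sum(p * V[s2] for s2, p in P[s][a]))
          for s in V}
print(V, policy)  # V[1] -> 2/(1-0.9) = 20, V[0] -> 1 + 0.9*20 = 19
```

Here state 1 collects reward 2 forever by staying, so its value converges to 2/(1-gamma) = 20, and the greedy policy moves state 0 toward it. Policy iteration, also covered in week 15, replaces the max-backup loop with alternating policy-evaluation and policy-improvement steps.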

Textbook(s)/Supplementary Readings
The course will be taught from a combination of textbooks and course notes. The following books and course notes will
be used as reference.
Dynamic Optimization by Arthur Bryson. Addison Wesley. 1999.

Optimization Based Control. Lecture notes by Richard Murray. 2010.

Optimal Control and Linear Quadratic Problems. Lecture notes by Bassam Bamieh. 2001.

Reinforcement Learning and Approximate Dynamic Programming by Frank Lewis, Derong Liu. 2013.