An Introduction to Mathematical Optimal Control Theory
Version 0.2

By Lawrence C. Evans
Department of Mathematics
University of California, Berkeley

Chapter 1: Introduction
Chapter 2: Controllability, bang-bang principle
Chapter 3: Linear time-optimal control
Chapter 4: The Pontryagin Maximum Principle
Chapter 5: Dynamic programming
Chapter 6: Game theory
Chapter 7: Introduction to stochastic control theory
Appendix: Proofs of the Pontryagin Maximum Principle
Exercises
References
 
PREFACE
These notes build upon a course I taught at the University of Maryland during the fall of 1983. My great thanks go to Martino Bardi, who took careful notes, saved them all these years and recently mailed them to me. Faye Yeager typed up his notes into a first draft of these lectures as they now appear. Scott Armstrong read over the notes and suggested many improvements: thanks, Scott. Stephen Moye of the American Math Society helped me a lot with AMSTeX versus LaTeX issues. My thanks also to Atilla Yilmaz for spotting lots of typos and errors, which I have corrected.

I have radically modified much of the notation (to be consistent with my other writings), updated the references, added several new examples, and provided a proof of the Pontryagin Maximum Principle. As this is a course for undergraduates, I have dispensed in certain proofs with various measurability and continuity issues, and as compensation have added various critiques as to the lack of total rigor.

This current version of the notes is not yet complete, but meets I think the usual high standards for material posted on the internet. Please email me at evans@math.berkeley.edu with any corrections or comments.
 
CHAPTER 1: INTRODUCTION
1.1. The basic problem
1.2. Some examples
1.3. A geometric solution
1.4. Overview
1.1 THE BASIC PROBLEM.

DYNAMICS. We open our discussion by considering an ordinary differential equation (ODE) having the form

$$
\dot{x}(t) = f(x(t)) \quad (t > 0), \qquad x(0) = x^0. \tag{1.1}
$$

We are here given the initial point $x^0 \in \mathbb{R}^n$ and the function $f : \mathbb{R}^n \to \mathbb{R}^n$. The unknown is the curve $x : [0, \infty) \to \mathbb{R}^n$, which we interpret as the dynamical evolution of the state of some "system".
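To make the dynamics concrete, here is a minimal numerical sketch (illustrative only, not part of the notes): a forward-Euler discretization of the ODE above, for the hypothetical choice $f(x) = -x$ with state dimension $n = 1$, whose exact solution is $x(t) = e^{-t} x^0$.

```python
# Illustrative sketch, not from the notes: forward-Euler integration of
# x'(t) = f(x(t)), x(0) = x0.  The right-hand side f(x) = -x is hypothetical.
import math

def euler(f, x0, T, n):
    """Approximate x(T) by n forward-Euler steps of x' = f(x) on [0, T]."""
    h = T / n
    x = x0
    for _ in range(n):
        x = x + h * f(x)
    return x

f = lambda x: -x                     # here the state dimension is n = 1
approx = euler(f, 1.0, 1.0, 10_000)  # approximate x(1) for x0 = 1
exact = math.exp(-1.0)               # exact solution: x(t) = e^{-t}
print(abs(approx - exact) < 1e-3)    # True
```

Forward Euler is only a first-order scheme; the point is merely that (1.1) together with the initial condition determines the whole trajectory.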
CONTROLLED DYNAMICS. We generalize a bit and suppose now that $f$ depends also upon some "control" parameters belonging to a set $A \subseteq \mathbb{R}^m$; so that $f : \mathbb{R}^n \times A \to \mathbb{R}^n$. Then if we select some value $a \in A$ and consider the corresponding dynamics:

$$
\dot{x}(t) = f(x(t), a) \quad (t > 0), \qquad x(0) = x^0,
$$

we obtain the evolution of our system when the parameter is constantly set to the value $a$.
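As a quick sketch (again an assumed example, not from the notes), one can integrate the controlled dynamics with the parameter frozen at a single value $a$; for the hypothetical choice $f(x, a) = a - x$ the state relaxes toward $a$.

```python
# Sketch with an assumed right-hand side f(x, a) = a - x (not from the notes).
# With the control frozen at a, the exact solution is
# x(t) = a + (x0 - a) e^{-t}, so x(t) -> a as t grows.
def euler_controlled(f, x0, a, T, n):
    """Forward-Euler approximation of x(T) for x' = f(x, a), x(0) = x0."""
    h = T / n
    x = x0
    for _ in range(n):
        x = x + h * f(x, a)
    return x

f = lambda x, a: a - x
x_T = euler_controlled(f, 0.0, 2.0, 10.0, 100_000)
print(abs(x_T - 2.0) < 1e-3)  # True: by t = 10 the state has nearly reached a = 2
```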
The next possibility is that we change the value of the parameter as the system evolves. For instance, suppose we define the function $\alpha : [0, \infty) \to A$ this way:

$$
\alpha(t) =
\begin{cases}
a_1 & 0 \le t \le t_1 \\
a_2 & t_1 < t \le t_2 \\
a_3 & t_2 < t \le t_3
\end{cases}
\quad \text{etc.}
$$

for times $0 < t_1 < t_2 < t_3 < \cdots$ and parameter values $a_1, a_2, a_3, \dots \in A$; and we then solve the dynamical equation

$$
\dot{x}(t) = f(x(t), \alpha(t)) \quad (t > 0), \qquad x(0) = x^0. \tag{1.2}
$$

The picture illustrates the resulting evolution. The point is that the system may behave quite differently as we change the control parameters.
