Concepts: optimization, minimum, maximum, quadratic forms, optimal control problem
An optimization problem is the problem of finding the best solution from all feasible solutions. In everything in life there are trade-offs, so one cannot get everything one wants. Therefore, one tries to maximize a desired quantity while minimizing the effects of the undesirable parts. In a control system, for example, one tries to minimize the steady-state error and reduce the percentage overshoot by choosing an optimal gain.
An optimization problem involves the determination of the maxima and minima of a given function. Many applications involve solutions at the minimum and maximum values of a function, f(.). At an extremum point the derivative vanishes, df/dx = 0. The different types of extremum points are:
Absolute maximum: a point where the function attains its greatest possible value.
Absolute minimum: a point where the function attains its least possible value.
Relative maximum: a point where the function changes direction from increasing to decreasing but is not the absolute maximum.
Relative minimum: a point where the function changes direction from decreasing to increasing but is not the absolute minimum.
Functions which have extremum points are normally of quadratic form. Considering a function of two variables, x and y, the necessary (first-order) conditions for an extremum are:

Fx = ∂f/∂x = 0,  Fy = ∂f/∂y = 0 .......... (1)

The type of extremum is obtained from the second-order conditions: if the second derivative is less than zero the point is a maximum; if it is greater than zero, a minimum; and if it is zero, the critical point is an inflexion point. In second-order systems this corresponds to a saddle point.
Fxx = ∂²f/∂x² > 0 : minimum
Fyy = ∂²f/∂y² < 0 : maximum
second derivative = 0 : inflexion
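The second-order test can be sketched numerically with central finite differences. The helper and test functions below are illustrative choices, not from the notes; a full classification would also examine the cross derivative f_xy (the full Hessian):

```python
def classify(f, x, y, h=1e-5):
    """Classify a critical point of f using the second-order conditions.

    Uses central finite differences for f_xx and f_yy. This is a sketch:
    a complete test would also use the cross term f_xy (the Hessian).
    """
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    if fxx > 0 and fyy > 0:
        return "minimum"
    if fxx < 0 and fyy < 0:
        return "maximum"
    return "saddle/inflexion"

# x^2 + y^2 has a minimum at the origin; its negative has a maximum there
print(classify(lambda x, y: x**2 + y**2, 0.0, 0.0))      # minimum
print(classify(lambda x, y: -(x**2) - y**2, 0.0, 0.0))   # maximum
print(classify(lambda x, y: x**2 - y**2, 0.0, 0.0))      # saddle/inflexion
```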
Unconstrained Optimization
Example 1.
Consider f(x, y) = 3x² + 6y² − xy (the same objective used in the constrained example below).

Fx = ∂f/∂x = 6x − y,  Fy = ∂f/∂y = 12y − x

Setting Fx = Fy and rearranging gives 7x = 13y, i.e. x = (13/7)y. There are many values of x and y that satisfy this single equation; only the origin satisfies Fx = Fy = 0 simultaneously.
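These derivatives can be checked with a few lines of plain Python (a sketch, not part of the notes):

```python
# First-order partial derivatives of f(x, y) = 3x^2 + 6y^2 - x*y
Fx = lambda x, y: 6*x - y
Fy = lambda x, y: 12*y - x

# Any point on the line 7x = 13y makes Fx equal to Fy,
# but only the origin makes both partial derivatives zero.
y = 7.0
x = 13.0 * y / 7.0
assert Fx(x, y) == Fy(x, y)
assert Fx(0.0, 0.0) == 0.0 and Fy(0.0, 0.0) == 0.0
print(x, y, Fx(x, y))   # 13.0 7.0 71.0
```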
Constrained Optimization
Sometimes, there are constraint which restricts from freely choosing variables:
The constrained optimization gives a conditions, g1(x,y) and g2(x,y), in addition to the quadratic
function, f(x,y) to be optimized. The constraints gives the allowable limits where the optimization
function is required to be true. The constraints may be given:
i) g1(x,y)=0 : an equation
ii) 0< g1(x,y)< umax : within two limits
We shall consider the first case, where the equation of the constraint equation is given:
Example 2
We consider the constrained optimization problem: minimize f(x, y) = 3x² + 6y² − xy subject to x + y = 20. There are several methods used in optimization, which include, among others:
1. Substitution method
2. Lagrange multipliers
3. Total differential
4. Dynamic programming
We shall illustrate the first two methods and discuss the Lagrange multiplier method in detail.
Substitution method: substituting x = 20 − y into the objective,

f(x, y) = 3(20 − y)² + 6y² − (20 − y)y = 1200 − 140y + 10y²

which is proportional to 120 − 14y + y². Setting the derivative of the reduced function to zero:

∂f/∂y = 2y − 14 = 0, and x + y = 20;  y = 7, x = 13

Is the extremum a maximum or a minimum? With Fx = 6x − y and Fy = 12y − x,

∂²f/∂x² = 6 > 0;  ∂²f/∂y² = 12 > 0

Both second derivatives are positive, so the point is a minimum.
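As a sketch (plain Python, no libraries), the substitution-method result can be checked numerically using the reduced quadratic 1200 − 140y + 10y²:

```python
# Substitution method: with x = 20 - y the objective reduces to a
# one-variable quadratic in y.
f = lambda y: 1200 - 140*y + 10*y**2
df = lambda y: -140 + 20*y       # derivative of the reduced function

y = 7                            # root of df(y) = 0
x = 20 - y
assert df(y) == 0
assert f(y) < f(y - 1) and f(y) < f(y + 1)   # second derivative 20 > 0: minimum
print(x, y, f(y))   # 13 7 710
```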
Lagrange Multipliers
The function to be optimized is called the objective function, which is in quadratic form, and the constraint can take different forms. The problem can be reformulated so that the function and the constraint become part of a single problem in the following steps:
Steps
1. Introduce a parameter, the multiplier λ; this gives the multiple of the constraint that is added to the objective function:

g(x, y) = 20 − x − y = 0

2. Combine the function and the constraint as one function, called the Lagrangian, L(x, y, λ):

L(x, y, λ) = 3x² + 6y² − xy + λ(20 − x − y)

3. Differentiate L(x, y, λ) to obtain the critical points:

∂L/∂x = 0,  ∂L/∂y = 0,  ∂L/∂λ = 0
4. Solve for the stationary points.
The solution corresponding to the original constrained optimization is always a saddle point of the Lagrangian function, which can be identified among the stationary points from the definiteness of the bordered Hessian matrix.
5. Subtracting the first two conditions, (6x − y − λ) − (12y − x − λ) = 7x − 13y = 0.
6. Substituting x = 20 − y from the constraint gives y = 7, x = 13 and λ = 6x − y = 71, so (x*, y*, λ*) = (13, 7, 71).
NB:
The optimal point is (x*, y*, λ*) = (13, 7, 71).
The first-order derivatives vanish there: Lx = Ly = Lλ = 0.
The optimal value of the Lagrangian is L(x*, y*, λ*) = 710.
The Lagrange multiplier is also taken to be a variable, and we have to determine its optimal value. It is sometimes called the co-state vector.
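Since the three stationarity conditions are linear in (x, y, λ), they can be solved exactly. The minimal Python sketch below (exact rational arithmetic, no libraries; the small solver helper is an illustration, not from the notes) recovers the optimum:

```python
from fractions import Fraction

# Stationarity of L(x, y, l) = 3x^2 + 6y^2 - x*y + l*(20 - x - y):
#   dL/dx:  6x -  y - l = 0
#   dL/dy:  -x + 12y - l = 0
#   dL/dl:  20 - x - y  = 0   ->  x + y = 20
A = [[Fraction(6),  Fraction(-1), Fraction(-1)],
     [Fraction(-1), Fraction(12), Fraction(-1)],
     [Fraction(1),  Fraction(1),  Fraction(0)]]
b = [Fraction(0), Fraction(0), Fraction(20)]

def solve3(A, b):
    """Gauss-Jordan elimination for a small exact linear system (illustrative helper)."""
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

x, y, lam = solve3(A, b)
L_opt = 3*x**2 + 6*y**2 - x*y + lam*(20 - x - y)
print(x, y, lam, L_opt)   # 13 7 71 710
```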
Example 3
Minimize the function f(x1, x2) = x1 + x2² subject to g1(x1): x1 ≥ 1 and g2(x1, x2): x1² + x2² ≤ 1.
The solution is (x1, x2) = (1, 0), the only feasible point.
In this case we form:

L(x1, x2, λ) = f(x1, x2) + λ1 g1(x1, x2) + λ2 g2(x1, x2)

Writing out the expression:

L(x1, x2, λ) = x1 + x2² + λ1(x1 − 1) + λ2(1 − x1² − x2²)

Differentiating with respect to each variable:

Lx1 = ∂L/∂x1 = 1 + λ1 − 2λ2 x1 = 0
Lx2 = ∂L/∂x2 = 2x2 − 2λ2 x2 = 0
Lλ1 = x1 − 1 = 0,  Lλ2 = 1 − x1² − x2² = 0
Total Differential Method
Setting the total differentials of the objective F and the constraint G to zero:

dF(x, y) = (∂F/∂x)dx + (∂F/∂y)dy = 0, i.e. Fx dx + Fy dy = 0
dG(x, y) = (∂G/∂x)dx + (∂G/∂y)dy = 0, i.e. Gx dx + Gy dy = 0

And:
1. dx/dy = −Fy/Fx = −Gy/Gx
2. Fy/Fx = Gy/Gx, i.e. Fy/Gy = Fx/Gx
For objective f(x1, x2) and constraint g(x1, x2) = 0:

df = (∂f/∂x1)dx1 + (∂f/∂x2)dx2

0 = (∂g/∂x1)dx1 + (∂g/∂x2)dx2, so dx2 = −(∂g/∂x1)/(∂g/∂x2) dx1

Substituting for dx2:

df = [∂f/∂x1 − (∂f/∂x2)/(∂g/∂x2) (∂g/∂x1)] dx1

Defining the ratio as the multiplier, (∂f/∂x2)/(∂g/∂x2) = λ:

df = (∂f/∂x1 − λ ∂g/∂x1) dx1
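A quick numeric check of this ratio at the optimum of the earlier example, writing the constraint as g = x + y − 20 = 0 so the signs line up (a sketch, not part of the notes):

```python
# Multiplier as a ratio of partial derivatives at the optimum (x, y) = (13, 7)
# of f = 3x^2 + 6y^2 - x*y, with the constraint written g = x + y - 20 = 0.
x, y = 13, 7
fx, fy = 6*x - y, 12*y - x    # partials of f (both equal 71 here)
gx, gy = 1, 1                 # partials of g
lam = fy / gy                 # lambda = f_x2 / g_x2
assert fx / gx == lam == 71   # the same ratio in both coordinates
print(lam)   # 71.0
```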
Example
Find the maximum and minimum of the function f(x1, x2) = x1² + x2² − 2x1 − 6x2 + 8 subject to the constraint x1 + x2 = 10.
The Lagrangian: L = x1² + x2² − 2x1 − 6x2 + 8 + λ(10 − x1 − x2)

∂f/∂x1 = 2x1 − 2
∂f/∂x2 = 2x2 − 6 .......... 1)

∂g/∂x1 = ∂g/∂x2 = −1 .......... 2)

x1 + x2 = 10

Setting ∂L/∂x1 = ∂L/∂x2 = 0 gives 2x1 − 2 = λ = 2x2 − 6. Subtracting one condition from the other: x2 − x1 = 2, and putting this back into the constraint, x1 + (x1 + 2) = 10, so x1 = 4 and x2 = 6.
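The first-order conditions of this example can be verified in a few lines of plain Python (a minimal sketch):

```python
# Stationarity of L = x1^2 + x2^2 - 2*x1 - 6*x2 + 8 + lam*(10 - x1 - x2):
#   2*x1 - 2 = lam and 2*x2 - 6 = lam  ->  x2 - x1 = 2
# With x1 + x2 = 10 this gives x1 = 4, x2 = 6.
x1, x2 = 4, 6
lam = 2*x1 - 2
assert 2*x2 - 6 == lam and x1 + x2 == 10
f = x1**2 + x2**2 - 2*x1 - 6*x2 + 8
print(x1, x2, lam, f)   # 4 6 6 16
```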
Example
Maximize the function F = x1 x2 + 2x1 subject to g: 60 − 4x1 − 2x2 = 0.

L = x1 x2 + 2x1 + λ(60 − 4x1 − 2x2)
∂L/∂x1 = x2 + 2 − 4λ = 0
∂L/∂x2 = x1 − 2λ = 0

Substituting back into the constraint: λ = x1/2 and x2 = 2x1 − 2, so 60 − 4x1 − 2(2x1 − 2) = 0, giving x1 = 8, x2 = 14, λ = 4.
Hamilton's Principle
Hamilton's principle gives a minimization problem expressed as an integral, again in quadratic form; it involves quadratic energy functions.
“The motion of the system from time t1 to time t2 is such that the line integral (called the action or action integral)

A = ∫ (t0 to tf) L dt

has a stationary value for the actual path of motion.”
OR
“The path followed by a mechanical system during some time interval [t1, t2] is the path that makes the integral of the difference between the kinetic and the potential energy, L = T − U, stationary.”
“Of all possible paths along which a dynamical system may move from one point to another in a given time interval (consistent with the constraints), the actual path followed is the one which minimizes the time integral of the difference between the KE and the PE.”
That is, the one which makes the variation of the following integral equal to zero:

δ ∫ (t0 to tf) (T − V) dt = δ ∫ (t0 to tf) L dt = 0
L = T − V is the Lagrangian of the system; T and V are respectively the kinetic and potential energies of the system. For a mass–spring system, for example,

L(x, ẋ, t) = (1/2)mẋ² − (1/2)kx²

The principle gives the minimizing quantity (time, path, energy, ...) for the system: every system moves in such a way as to make the action stationary.

δ ∫ (t0 to tf) (1/2)(mẋ² − kx²) dt = 0
The principle of least action is used in defining the optimal control so as to minimize various forms of action on the state, the input, or the time required to move from one point to another. The minimizing solution is given by the Euler–Lagrange equation used in the calculus of variations:

d/dt (∂L/∂ẋ) − ∂L/∂x = 0
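For the mass–spring Lagrangian above, the Euler–Lagrange equation gives mẍ + kx = 0. As an illustrative sketch (the values m = 1, k = 4 and the step size are assumptions, not from the notes), this can be integrated numerically and compared with the exact cosine solution:

```python
import math

# Euler-Lagrange for L = (1/2)*m*xdot^2 - (1/2)*k*x^2 gives m*xddot + k*x = 0.
# Integrate with semi-implicit Euler and compare with the exact solution
# x(t) = cos(w*t) for x(0) = 1, v(0) = 0.
m, k = 1.0, 4.0
w = math.sqrt(k / m)
x, v = 1.0, 0.0
dt, steps = 1e-4, 10_000          # simulate up to t = 1.0 s
for _ in range(steps):
    v += (-(k / m) * x) * dt      # acceleration from the EL equation
    x += v * dt
t = dt * steps
print(abs(x - math.cos(w * t)))   # small discretization error
```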
A performance index is defined using the input and state vectors as a quadratic function, an energy function, or a form of Lyapunov function that can be optimized, i.e. that has a minimum or a maximum.
J(x, u) = qx² + ru²;

L(x, u) = xᵀQx + uᵀRu;  J(x, u) = ∫ L(x, u) dt

1) ẋ = Ax + bu;  y = Cx
2) L = xᵀQx + uᵀRu

This forms the required optimization problem, with the state equation ẋ = Ax + bu as the constraint and J = qx² + ru² as the objective function. The corresponding augmented Lagrangian, which includes the state equation, is:

3) L = xᵀQx + uᵀRu + λᵀ(Ax + bu − ẋ)
Using these tools the control problem may be solved. The performance measure is normally expressed as an integral equation:

J = S(x, tf) + ∫ (t0 to tf) (xᵀQx + uᵀRu) dt

where S(x, tf) refers to the final-state (terminal) cost and the integral term to the dynamic part.
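In the scalar case, J = ∫ (qx² + ru²) dt with ẋ = ax + bu, the optimal feedback follows from the scalar algebraic Riccati equation. The sketch below (illustrative numbers, assumed rather than taken from the notes) solves it in closed form:

```python
import math

# Scalar LQR sketch: plant xdot = a*x + b*u, cost J = integral of q*x^2 + r*u^2.
# The optimal feedback is u = -K*x with K = b*p/r, where p > 0 solves the
# scalar algebraic Riccati equation  2*a*p - (b**2/r)*p**2 + q = 0.
a, b, q, r = 1.0, 1.0, 1.0, 1.0    # illustrative numbers, not from the notes
p = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2   # positive root
K = b * p / r
closed_loop = a - b * K            # closed-loop pole; negative means stable
assert abs(2*a*p - (b**2 / r) * p**2 + q) < 1e-9      # Riccati residual ~ 0
print(round(p, 6), round(K, 6), round(closed_loop, 6))   # 2.414214 2.414214 -1.414214
```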
The optimal control may be classified according to the parameter to be optimized as:
8.2.3 The State Regulator Problem
Transfers the system from an initial state x0 = xr to the final state, xf, with minimum integral squared error:

J(x, t) = ∫ (0 to tf) (x − xr)ᵀ(x − xr) dt = ∫ (0 to tf) ‖x − xr‖² dt = ∫ (0 to tf) e² dt
With a change of coordinates so that the system starts from the origin, this may be written in the more general infinite-time form as:

J(x, t) = ∫ (0 to ∞) xᵀQx dt,  Q = diag(q1, q2, ..., qn)
This is the normal problem with the performance criterion defined as a Lyapunov function.
The complete regulator problem defines both the state and control weighting matrices. For example, with Q the 2 × 2 identity matrix,

xᵀQx = [x1 x2] [1 0; 0 1] [x1; x2] = x1² + x2²

and

J(x, u) = ∫ (0 to tf) (xᵀQx + uᵀRu) dt = ∫ (0 to tf) ((x1² + x2²) + u²) dt

where R is a diagonal, positive definite matrix associated with the input, u(t).
8.2.4 Minimum Control Effort
The minimum-control-effort problem uses only the input term:

J(u, t) = ∫ (0 to tf) u² dt
8.2.5 Minimum Time
The objective is to transfer the system from an initial state, x(0), to the final state in minimum time:

J(x, t) = ∫ (0 to tf) 1 dt = tf