
Introduction to Control Theory Including Optimal Control

Nguyen Tan Tien - 2002.5

________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

C.11 Bang-bang Control


11.1 Introduction

This chapter deals with control subject to restrictions: the control is bounded and may well have discontinuities. To illustrate some of the basic concepts involved when controls are bounded and allowed to have discontinuities, we start with a simple physical problem: derive a controller such that a car moves a distance a in minimum time. The equation of motion of the car is

d²x/dt² = u    (11.1)

where

u = u(t),  −β ≤ u ≤ α    (11.2)

represents the applied acceleration or deceleration (braking), and x is the distance travelled. The problem can be stated as: minimize

T = ∫₀ᵀ 1 dt    (11.3)

subject to (11.1) and (11.2) and the boundary conditions

x(0) = 0,  ẋ(0) = 0,  x(T) = a,  ẋ(T) = 0    (11.4)

The methods we developed in the last chapter would be appropriate for this problem, except that they cannot cope with inequality constraints of the form (11.2). We can change this constraint into an equality constraint by introducing another control variable, v, where

v² = (u + β)(α − u)    (11.5)

Since v is real, u must satisfy (11.2). We introduce the usual state variable notation x1 = x, so that

ẋ1 = x2,  x1(0) = 0, x1(T) = a    (11.6)
ẋ2 = u,  x2(0) = 0, x2(T) = 0    (11.7)

We now form the augmented functional

T* = ∫₀ᵀ {1 + p1(x2 − ẋ1) + p2(u − ẋ2) + λ[v² − (u + β)(α − u)]} dt    (11.8)

where p1, p2 and λ are Lagrange multipliers associated with the constraints (11.6), (11.7) and (11.5) respectively. Writing F for the integrand, the Euler equations ∂F/∂y − (d/dt)(∂F/∂ẏ) = 0 for the state variables x1, x2 and the control variables u, v give

ṗ1 = 0    (11.9)
ṗ2 = −p1    (11.10)
p2 = λ(α − β − 2u)    (11.11)
2λv = 0    (11.12)

From (11.12), either λ = 0 or v = 0. We consider these two cases.

(i) λ = 0: then (11.11) gives p2 = 0, and (11.10) gives p1 = 0, so that all the multipliers vanish, which is impossible.

(ii) v = 0: then from (11.5), (u + β)(α − u) = 0, so that u = −β or u = α. Hence

ẋ2 = α for 0 ≤ t ≤ τ,  ẋ2 = −β for τ < t ≤ T    (11.13)

the switch taking place at time τ. Integrating and using the boundary conditions on x2,

x2 = αt for 0 ≤ t ≤ τ,  x2 = −β(t − T) for τ < t ≤ T

Integrating again and using the boundary conditions on x1,

x1 = ½αt² for 0 ≤ t ≤ τ,  x1 = −½β(t − T)² + a for τ < t ≤ T    (11.14)

Since both the distance, x1, and the velocity, x2, are continuous at t = τ, we must have

ατ = −β(τ − T),  ½ατ² = −½β(τ − T)² + a

Eliminating T gives the switching time as

τ = [2aβ/(α(α + β))]^(1/2)    (11.15)

and the final time is

T = [2a(α + β)/(αβ)]^(1/2)    (11.16)

The problem is now completely solved, and the optimal control is specified by

u = α for 0 ≤ t < τ,  u = −β for τ < t ≤ T    (11.17)

This is illustrated in Fig. 11.1.

Fig. 11.1 Optimal control: u against time t, equal to α up to the switch at t = τ and to −β thereafter

The figure shows that the control:
- has a switch (discontinuity) at time t = τ;
- only takes its maximum and minimum values.

This type of control is called bang-bang control.
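The solution above is easy to check numerically. The sketch below (with hypothetical example values for a, α and β) evaluates the switching and final times (11.15)–(11.16) and propagates the closed-form trajectory (11.14) across the switch, confirming that the car arrives at x = a with zero velocity.

```python
import math

# Hypothetical example data: distance a, acceleration bound alpha,
# braking bound beta, so that -beta <= u <= alpha.
a, alpha, beta = 100.0, 2.0, 1.0

# Switching time (11.15) and final time (11.16)
tau = math.sqrt(2 * a * beta / (alpha * (alpha + beta)))
T = math.sqrt(2 * a * (alpha + beta) / (alpha * beta))

# State at the switch, from (11.14): x1 = alpha*t^2/2, x2 = alpha*t
x1_tau, x2_tau = 0.5 * alpha * tau**2, alpha * tau

# Continue with u = -beta from (x1_tau, x2_tau) up to t = T
dt = T - tau
x1_T = x1_tau + x2_tau * dt - 0.5 * beta * dt**2
x2_T = x2_tau - beta * dt

print(tau, T)       # switching and final times
print(x1_T, x2_T)   # should be (a, 0): target reached with zero velocity
```

The velocity-continuity condition ατ = β(T − τ) can be checked the same way; it holds by construction of (11.15) and (11.16).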
11.2 Pontryagin's Principle (early 1960s)

Problem: we are seeking extremum values of the functional

J = ∫₀ᵀ f0(x, u, t) dt    (11.18)

subject to the state equations

ẋi = fi(x, u, t)  (i = 1, 2, …, n)    (11.19)

initial conditions x(0) = x0, final conditions on x1, x2, …, xq (q ≤ n), and subject to u ∈ U, the admissible control region. For example, in the previous problem the admissible control region is defined by U = {u : −β ≤ u ≤ α}. As in section 10.4, we form the augmented functional

J* = ∫₀ᵀ [f0 + Σ pi(fi − ẋi)] dt    (11.20)

the sum running from i = 1 to n, and define the Hamiltonian

H = f0 + Σ pi fi    (11.21)

For simplicity, we assume that the Hamiltonian is a function of the state vector x, the control vector u and the adjoint vector p, that is, H = H(x, u, p). We can then express J* as

J* = ∫₀ᵀ [H − Σ pi ẋi] dt    (11.22)

and evaluating the Euler equations for the xi we obtain, as in section 10.4, the adjoint equations

ṗi = −∂H/∂xi  (i = 1, 2, …, n)    (11.23)

The Euler equations for the control variables ui do not follow as in section 10.4, since there may be discontinuities in the ui, and so we cannot assume that the partial derivatives ∂H/∂ui exist. On the other hand, we can apply the free end point condition (9.23), ∂F/∂ẋi = 0, to obtain

pk(T) = 0  (k = q + 1, …, n)    (11.24)

that is, the adjoint variable is zero at every end point where the corresponding state variable is not specified. As before, we refer to (11.24) as the transversality conditions.

Our difficulty now lies in obtaining the analogue of ∂H/∂ui = 0 for continuous controls. For the moment, let us assume that we can differentiate H with respect to u, and consider a small variation δu in the control u such that u + δu still belongs to U, the admissible control region. Corresponding to the small change in u, there will be a small change in x, say δx, and in p, say δp. The change in the value of J* will be δJ*, where

δJ* = δ∫₀ᵀ [H − Σ pi ẋi] dt

The small change operator δ obeys the same sort of properties as the differential operator d/dx. Assuming we can interchange the small change operator and the integral sign, we obtain

δJ* = ∫₀ᵀ δ[H − Σ pi ẋi] dt = ∫₀ᵀ [δH − Σ (ẋi δpi + pi δẋi)] dt

Using the chain rule for partial differentiation,

δH = Σ (∂H/∂uj) δuj + Σ (∂H/∂xi) δxi + Σ (∂H/∂pi) δpi

with j running from 1 to m and i from 1 to n. Since ṗi = −∂H/∂xi from (11.23), and ∂H/∂pi = fi = ẋi from (11.21) and (11.19), the δpi terms cancel and we are left with

δJ* = ∫₀ᵀ [Σ (∂H/∂uj) δuj − Σ (ṗi δxi + pi δẋi)] dt
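As a quick concrete check of the adjoint equations (11.23), the sketch below takes the Hamiltonian H = 1 + p1·x2 + p2·u of the minimum-time problem of section 11.1 and approximates ∂H/∂x1 and ∂H/∂x2 by central differences; the state and adjoint values are arbitrary test numbers.

```python
# Hamiltonian of the minimum-time problem: H = 1 + p1*x2 + p2*u
def H(x1, x2, u, p1, p2):
    return 1 + p1 * x2 + p2 * u

# Arbitrary test values for the state, control and adjoint variables
x1, x2, u, p1, p2, h = 0.5, 1.2, 2.0, 0.3, -0.8, 1e-6

# Central-difference approximations to the partial derivatives
dH_dx1 = (H(x1 + h, x2, u, p1, p2) - H(x1 - h, x2, u, p1, p2)) / (2 * h)
dH_dx2 = (H(x1, x2 + h, u, p1, p2) - H(x1, x2 - h, u, p1, p2)) / (2 * h)

p1_dot = -dH_dx1   # (11.23): zero, so p1 is constant, as in (11.9)
p2_dot = -dH_dx2   # (11.23): equals -p1, as in (11.10)
print(p1_dot, p2_dot)
```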


The last sum is Σ d(pi δxi)/dt, so

δJ* = ∫₀ᵀ Σ (∂H/∂uj) δuj dt − ∫₀ᵀ Σ [d(pi δxi)/dt] dt

We can now integrate the second part of the integrand to yield

δJ* = −[Σ pi δxi]₀ᵀ + ∫₀ᵀ Σ (∂H/∂uj) δuj dt    (11.25)

At t = 0, the xi (i = 1, …, n) are specified, so δxi(0) = 0. At t = T, the xi (i = 1, …, q) are fixed, so δxi(T) = 0, while for i = q + 1, …, n the transversality conditions (11.24) give pi(T) = 0. Hence pi(T) δxi(T) = 0 for i = 1, 2, …, q, q + 1, …, n, and we now have

δJ* = ∫₀ᵀ Σ (∂H/∂uj) δuj dt

where δuj is the small variation in the jth component of the control vector u. Since all these variations are independent, and we require δJ* = 0 for a turning point when the controls are continuous, we conclude that

∂H/∂uj = 0  (j = 1, 2, …, m)    (11.26)

But this is only valid when the controls are continuous and unconstrained. In our present case u ∈ U, the admissible control region, and discontinuities in u are allowed. The arguments presented above follow through in the same way, except that (∂H/∂uj) δuj must be replaced by

H(x; u1, …, uj + δuj, …, um; p) − H(x, u, p)

We thus obtain

δJ* = ∫₀ᵀ Σ [H(x; u1, …, uj + δuj, …, um; p) − H(x, u, p)] dt

In order for u to be a minimizing control, we must have δJ* ≥ 0 for all admissible controls u + δu. This implies that

H(x; u1, …, uj + δuj, …, um; p) ≥ H(x, u, p)    (11.27)

for all admissible δuj and for j = 1, …, m. So we have established that on the optimal control H is minimized with respect to the control variables u1, u2, …, um. This is known as Pontryagin's minimum principle.

We first illustrate its use by examining a simple problem. We require to minimize

J = ∫₀ᵀ 1 dt

subject to ẋ1 = x2, ẋ2 = u, where −β ≤ u ≤ α, with x1(T) = a, x2(T) = 0 and x1(0) = x2(0) = 0. Introducing the adjoint variables p1 and p2, the Hamiltonian is given by

H = 1 + p1 x2 + p2 u

We must minimize H with respect to u, where u ∈ U = [−β, α], the admissible control region. Since H is linear in u, it clearly attains its minimum on the boundary of the control region, that is, either at u = −β or at u = α; this is illustrated in Fig. 11.3. In fact, we can write the optimal control as

u = −β if p2 > 0,  u = α if p2 < 0

Fig. 11.3 The case p2 > 0: H increases with u, so its minimum over [−β, α] is attained at u = −β

But p2 will vary in time, and satisfies the adjoint equations

ṗ1 = −∂H/∂x1 = 0,  ṗ2 = −∂H/∂x2 = −p1

Thus p1 = A, a constant, and p2 = −At + B, where B is a constant. Since p2 is a linear function of t, there will be at most one switch in the control (p2 has at most one zero), and from the physical situation there must be at least one switch. So we conclude that
(i) the control is u = α or u = −β, that is, bang-bang control;
(ii) there is one and only one switch in the control.
Again, it is clear from the basic problem that initially u = α, followed by u = −β at the appropriate time.
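The one-switch conclusion can be tested by forward simulation. The sketch below (forward Euler with an assumed step size, and the hypothetical values a = 100, α = 2, β = 1) applies the pointwise rule u = α when p2 < 0 and u = −β when p2 > 0, taking an adjoint solution p2 that vanishes at the switching time τ of (11.15), and checks that the state reaches (a, 0) at the final time T of (11.16).

```python
import math

a, alpha, beta = 100.0, 2.0, 1.0   # assumed example data
tau = math.sqrt(2 * a * beta / (alpha * (alpha + beta)))   # switch time (11.15)
T = math.sqrt(2 * a * (alpha + beta) / (alpha * beta))     # final time (11.16)

# Any adjoint solution p2 = -A*t + B that is negative before tau and
# positive after it gives the same control; p2 = t - tau is one choice.
p2 = lambda t: t - tau

x1 = x2 = t = 0.0
dt = 1e-4
while t < T:
    u = alpha if p2(t) < 0 else -beta   # pointwise minimization of H
    x1 += x2 * dt                        # Euler step for x1' = x2
    x2 += u * dt                         # Euler step for x2' = u
    t += dt

print(x1, x2)   # approximately (a, 0)
```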

11.3 Switching Curves

In the last section we met the idea of a switch in a control. The time (and position) of switching from one extreme value of the control to the other depends, of course, on the initial starting point in the phase plane. By considering a specific example we shall show how these switching positions define a switching curve.

Suppose a system is described by the state variables x1, x2, where

ẋ1 = −x1 + u    (11.28)
ẋ2 = u    (11.29)

Here u is the control variable, which is subject to the constraints −1 ≤ u ≤ 1. Given that at t = 0, x1 = a, x2 = b, we wish to find the optimal control which takes the system to x1 = 0, x2 = 0 in minimum time; that is, we wish to minimize

J = ∫₀ᵀ 1 dt    (11.30)

while moving from (a, b) to (0, 0) in the x1-x2 phase plane, subject to (11.28), (11.29) and

−1 ≤ u ≤ 1    (11.31)

Following the procedure outlined in section 11.2, we introduce the adjoint variables p1 and p2 and the Hamiltonian

H = 1 + p1(−x1 + u) + p2 u = u(p1 + p2) + 1 − p1 x1

Since H is linear in u, and |u| ≤ 1, H is minimized with respect to u by taking

u = +1 if p1 + p2 < 0,  u = −1 if p1 + p2 > 0

So the control is bang-bang, and the number of switches will depend on the sign changes of p1 + p2. As the adjoint equations, (11.23), are

ṗ1 = −∂H/∂x1 = p1,  ṗ2 = −∂H/∂x2 = 0

we have

p1 = Ae^t,  p2 = B, where A and B are constants

and p1 + p2 = Ae^t + B, which has at most one sign change. So we know that from any initial point (a, b) the optimal control will be bang-bang, that is u = ±1, with at most one switch in the control.

Now suppose u = k, where k = ±1; the state equations for the system are

ẋ1 = −x1 + k,  ẋ2 = k

We can integrate each equation to give

x1 = Ae^(−t) + k,  x2 = kt + B, where A and B are constants

The x1-x2 plane trajectories are found by eliminating t, giving

x2 − B = −k log|(x1 − k)/A|

Now if u = 1, that is k = 1, the trajectories are of the form x2 − B = −log|(x1 − 1)/A|, that is,

x2 = −log|x1 − 1| + C    (11.32)

where C is a constant. The curves for different values of C are illustrated in Fig. 11.4.

Fig. 11.4 Trajectories for u = 1: curves with x2 increasing, approaching the line x1 = 1; the curve through the origin is AO

Following the same procedure for u = −1 gives

x2 = log|x1 + 1| + C    (11.33)

and the curves are illustrated in Fig. 11.5.

Fig. 11.5 Trajectories for u = −1: curves with x2 decreasing, approaching the line x1 = −1; the curve through the origin is BO
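Equation (11.32) can be verified numerically: along any solution of (11.28)–(11.29) with u = 1, the combination x2 + log|x1 − 1| should remain constant. The short forward-Euler integration below (starting point and step size chosen arbitrarily) confirms that the drift is negligible.

```python
import math

# Integrate x1' = -x1 + u, x2' = u with u = 1 from an arbitrary start
x1, x2, u, dt = 3.0, 0.0, 1.0, 1e-5
C0 = x2 + math.log(abs(x1 - 1))      # the constant C in (11.32)
for _ in range(200000):              # integrate for 2 time units
    x1 += (-x1 + u) * dt
    x2 += u * dt
C1 = x2 + math.log(abs(x1 - 1))
print(abs(C1 - C0))                  # small: the invariant is conserved
```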


The basic problem is to reach the origin from an arbitrary initial point. All the possible trajectories are illustrated in Figs. 11.4 and 11.5, and we can see that there are only two paths which reach the origin, namely AO in Fig. 11.4 and BO in Fig. 11.5.

Combining the two diagrams we obtain Fig. 11.6. The curve AOB is called the switching curve. For initial points below AOB, we take u = +1 until the switching curve is reached, followed by u = −1 until the origin is reached. Similarly, for initial points above AOB, we take u = −1 until the switching curve is reached, followed by u = +1 until the origin is reached. So we have solved the problem of finding the optimal trajectory from an arbitrary starting point. The switching curve has equation

x2 = log(1 + x1) for x1 > 0 (the branch OB, traversed with u = −1)
x2 = −log(1 − x1) for x1 < 0 (the branch OA, traversed with u = +1)    (11.34)

Fig. 11.6 The switching curve AOB, formed from the two trajectories through the origin

11.4 Transversality Conditions

To illustrate how the transversality conditions (pi(T) = 0 if xi(T) is not specified) are used, we consider the problem of finding the optimal control u, with |u| ≤ 1, for the system described by

ẋ1 = x2    (11.35)
ẋ2 = u    (11.36)

which takes the system from an arbitrary initial point x1(0) = a, x2(0) = b to any point on the x2-axis; that is, x1(T) = 0 but x2(T) is not given. We minimize

J = ∫₀ᵀ 1 dt    (11.37)

subject to (11.35), (11.36), the above boundary conditions on x1 and x2, and such that

−1 ≤ u ≤ 1    (11.38)

Following the usual procedure, we form the Hamiltonian

H = 1 + p1 x2 + p2 u    (11.39)

We minimize H with respect to u, where −1 ≤ u ≤ 1, which gives

u = +1 if p2 < 0,  u = −1 if p2 > 0

The adjoint variables satisfy

ṗ1 = −∂H/∂x1 = 0, so p1 = A
ṗ2 = −∂H/∂x2 = −p1, so p2 = −At + B

Since x2(T) is not specified, the transversality condition becomes p2(T) = 0. Hence 0 = −AT + B, and p2 = A(T − t). For 0 < t < T, there is no change in the sign of p2, and hence no switch in u. Thus either u = +1 or u = −1 throughout, with no switches. Since x2 dx2 = u dx1 along a trajectory, we have

x2² = 2(x1 − B) when u = 1    (11.40)
x2² = −2(x1 − B) when u = −1    (11.41)

where B is a constant. These trajectories are illustrated in Fig. 11.7, the direction of the arrows being determined from ẋ2 = u.

Fig. 11.7 Possible optimal trajectories: parabolas opening to the right (u = 1, x2 increasing) and to the left (u = −1, x2 decreasing)

We first consider initial points (a, b) for which a > 0.

Fig. 11.8 An initial point (a, b) below the curve OA: two possible trajectories reach the x2-axis, taking times T+ (with u = +1) and T− (with u = −1)
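A similar numerical check applies to (11.40): along a trajectory of (11.35)–(11.36) with u = +1, the quantity x2² − 2x1 should stay constant. The sketch below integrates with forward Euler from an arbitrary starting point (step size assumed).

```python
# Integrate x1' = x2, x2' = u with u = +1; by (11.40), x2^2 - 2*u*x1
# is constant along the trajectory (the same form covers (11.41) for u = -1).
u, dt = 1.0, 1e-5
x1, x2 = 1.0, -2.0                  # arbitrary starting point
c0 = x2**2 - 2 * u * x1
for _ in range(100000):             # integrate for 1 time unit
    x1, x2 = x1 + x2 * dt, x2 + u * dt
c1 = x2**2 - 2 * u * x1
print(abs(c1 - c0))                 # small drift only
```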


For points above the curve OA there is only one trajectory, with u = −1, which reaches the x2-axis, and this must be the optimal curve. For points below the curve OA, there are two possible curves, as shown in Fig. 11.8.

From (11.36), ẋ2 = u, that is ẋ2 = ±1, and integrating between 0 and T gives

x2(T) − x2(0) = ±T

that is,

T = |x2(T) − x2(0)|    (11.42)

Hence the modulus of the difference between the final and initial values of x2 gives the time taken. This is shown in Fig. 11.8 as T+ for u = +1 and T− for u = −1. The complete set of optimal trajectories is illustrated in Fig. 11.9.

Fig. 11.9 Optimal trajectories to reach the x2-axis in minimum time
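For an initial point (a, b) below OA, both controls reach the x2-axis, and (11.42) selects the faster one. The sketch below uses the hypothetical point (a, b) = (1, −2) and computes both arrival times from the parabolas (11.40) and (11.41).

```python
import math

# Hypothetical initial point with a > 0, below OA, so both controls work
a, b = 1.0, -2.0

# u = +1: x2^2 = 2(x1 - B), with B = a - b^2/2 (must be < 0 to reach x1 = 0);
# the axis is crossed on the lower branch, x2 = -sqrt(-2B)
B_plus = a - b**2 / 2
x2_plus = -math.sqrt(-2 * B_plus)
T_plus = abs(x2_plus - b)            # time taken, from (11.42)

# u = -1: x2^2 = -2(x1 - B), with B = a + b^2/2; crossing at x2 = -sqrt(2B)
B_minus = a + b**2 / 2
x2_minus = -math.sqrt(2 * B_minus)
T_minus = abs(x2_minus - b)

print(T_plus, T_minus, min(T_plus, T_minus))   # the smaller time is optimal
```

For this starting point T− < T+, so u = −1 is the faster of the two admissible trajectories.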

11.5 Extension to the Bolza problem

