
Adaptive Control

Karl Johan Åström and Anders Rantzer
Department of Automatic Control, LTH
Lund University

January 20, 2021

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
7. Learning and Dual Control
8. Applications
9. Related Fields
10. Summary

Introduction

◮ Tune: to adjust for proper response
◮ Adapt: to adjust to a specified use or situation
◮ Autonomous: independence, self-governing
◮ Learn: to acquire knowledge or skill by study, instruction or experience
◮ Reason: the intellectual process of seeking truth or knowledge by inferring from either fact or logic
◮ Intelligence: the capacity to acquire and apply knowledge

In Automatic Control
◮ Gain scheduling – adjust controller parameters based on direct measurement of system and environmental parameters
◮ Automatic tuning – tuning on demand
◮ Adaptation – continuous adjustment of controller parameters based on regular measured signals

A Brief History of Adaptive Control

◮ Adaptive Control: Learn enough about a process and its environment for control – restricted domain, prior info
◮ Development similar to neural networks: many ups and downs, lots of strong egos
◮ Early work driven by adaptive flight control 1950–1970.
  The brave era: Develop an idea, hack a system, simulate and fly!
  Several adaptive schemes emerged, no analysis.
  Disasters in flight tests – the X-15 crash Nov 15, 1967.
  Gregory, P. C., ed., Proc. Self Adaptive Flight Control Systems. Wright-Patterson Air Force Base, 1959.
◮ Emergence of adaptive theory 1970–1980.
  Model reference adaptive control emerged from flight control and stability theory – a servo problem.
  The self-tuning regulator emerged from process control and stochastic control theory – a regulation problem.
◮ Microprocessor-based products 1980
◮ Robust adaptive control 1990
◮ Machine Learning and Adaptation 2020

Publications in Scopus

[Figure: publications per year 1920–2020 on a logarithmic scale; blue: control, red: adaptive control]

Flight Control – Servo Problem

P. C. Gregory, March 1959. Proceedings of the Self Adaptive Flight Control Systems Symposium. Wright Air Development Center, Wright-Patterson Air Force Base, Ohio. Summary of Air Force sponsored research started in 1956.

Most of you know that with the advent a few years ago of hypersonic and supersonic aircraft, the Air Force was faced with a control problem. This problem was two-fold; one, it was taking a great deal of time to develop a flight control system; and two, the systems in existence were not capable of fulfilling future Air Force requirements. These systems lacked the ability to control the aircraft satisfactorily under all operating conditions.

Contracts on two important test flights
◮ Honeywell will test the self-oscillating adaptive system on the X-15
◮ Lear will test the MIT model reference adaptive system on the F-101A

Mishkin, E. and Braun, L., Adaptive Control Systems. McGraw-Hill, New York, 1961.

IFAC Symposia on Adaptive Control

1. Rome, Italy 1962
2. Teddington, England 1965, Theory of Self-Adaptive Control Systems
3. San Francisco, USA 1983, Adaptive Control and Signal Processing ACOSP
4. Lund, Sweden 1986
5. Glasgow 1989
6. Grenoble 1992
7. Budapest 1995
8. Glasgow 1998
9. Cernobbio-Como, Italy 2001, Adaptation and Learning in Control and Signal Processing ALCOSP
10. Yokohama, Japan 2004
11. Saint Petersburg, Russia 2007
12. Antalya, Turkey 2010
13. Caen, France 2013
14. Eindhoven, Holland 2016
15. Winchester, England 2019

Process Control – Regulation Problem

◮ What can be achieved?
◮ What are the benefits?
  Reduce variations
  Move set points closer to constraints
  Moisture control
  Small improvements of 1% can have large economic consequences

Some contributions
◮ Early pneumatic systems
◮ Kalman's self-optimizing controller
◮ The self-tuning regulator – moving average controller
◮ Relay auto-tuning

Some Landmarks

◮ Early flight control systems 1955
◮ Dynamic programming, Bellman 1957
◮ Dual control, Feldbaum 1960
◮ System identification 1965
◮ Learning control, Tsypkin 1971
◮ Algorithms MRAS STR 1970
◮ Stability analysis 1980
  Lyapunov – Tsypkin
  Passivity – Popov, Landau
  Augmented error – Monopoli
◮ PID auto-tuning 1980
◮ Industrial products 1980
◮ Robustness 1985
◮ Automatic tuning 1985
◮ Autonomous Control 1995
◮ Adaptation and Learning – a renaissance

Yakov Z. Tsypkin 1919–1997

◮ BS, Moscow Electrical Engineering Institute 1941
◮ BS, Moscow State University 1943
◮ MS Engineering, Moscow State University 1945
◮ PhD, Moscow State University 1948
◮ Engineer, senior engineer, Chief of Department, Research Institute of Aircraft Equipment 1941–1949
◮ Senior researcher, Institute of Control Sciences, Moscow 1950–1957
◮ Head of laboratory, Institute of Control Sciences, Moscow, since 1957
◮ Lenin Prize 1960
◮ Quazza Medal, IFAC
◮ Hartley Medal, IMC 1985
◮ Rufus Oldenburger Medal, ASME 1989
Books:
◮ Yakov Z. Tsypkin and C. Constanda, Relay Control Systems
◮ Sampling Systems Theory and Its Application, Volumes 1 and 2 (Macmillan, New York, 1964; translated from Russian by A. Allen...)
◮ Foundations of the Theory of Learning Systems (1973)

Richard Bellman 1920–1984

◮ BA math Brooklyn College 1941
◮ MA University of Wisconsin
◮ Los Alamos Theoretical Physics
◮ PhD Princeton (Lefschetz) 1946
◮ RAND Corporation
◮ Founding editor, Mathematical Biosciences
◮ Brain tumor 1973
◮ 619 papers, 39 books
◮ Dynamic Programming
◮ Bellman Equation, HJB
◮ Curse of dimensionality
◮ Bellman-Ford algorithm
◮ John von Neumann Theory Prize (1976)
◮ IEEE Medal of Honor (1979)

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Systems
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
7. Dual Control
8. Applications
9. Related Fields
10. Summary

The Self-Oscillating Adaptive System – H. Schuck, Honeywell 1959

It is rather hard to tell when we at Honeywell first became interested in adaptive control. ... In retrospect, it seems that we first consciously articulated the need in connection with our early work in automatic approach and landing. ...
Let us look at the adaptive flight control system that was proven in the flight tests on the F-94C. Conceptually it is simple, deceptively so. The input is applied to a model whose dynamic performance is what we wish the dynamic performance of the aircraft to be. The actual response is compared with the response of the model and the difference is used as an input to the servo. If the gain of the servo is sufficiently high, the response of the aircraft will be identical to that of the model, no matter what the elevator effectiveness, so long as it is finite and has the right direction.
Design of the model is fairly simple. ... The big problem comes in connection with the need to make the gain of the servo sufficiently high. An ordinary linear servo loop will not do. It simply cannot be given a sufficiently high gain and still be stable. So we go in for non-linearity, the most extreme form of non-linearity, in fact – the bang-bang type. Full available power is applied one way, or the other, depending on the direction of the switching order.
Now it is well known that a simple bang-bang system is oscillatory. And we don't want an oscillatory aircraft. ... So we look for ways to tame it down, keeping its high-gain characteristic while reducing its oscillatory activity. ...

The Self-Oscillating Adaptive System

[Block diagram: model output ym; command through a filter G_f(s), gain changer and relay with dither to the process G_p(s) with output y; error e = y − ym]

◮ Shape behavior by the block called Model
◮ Make the inner loop as fast as possible
◮ Relay feedback automatically adjusts the gain margin for low frequency signals to gm = 2! Dual input describing functions!
◮ Relay amplitude adjusted by logic
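The self-adjusting gain can be illustrated with a describing-function calculation. The sketch below, assuming an ideal relay of amplitude d and the process P(s) = k/(s(s+1)²) used in the simulations that follow, predicts the limit-cycle frequency and amplitude from the harmonic balance N(a)|P(jω)| = 1 with the relay describing function N(a) = 4d/(πa); this is the standard single-input approximation, not the dual-input analysis mentioned above.

```python
import numpy as np

def limit_cycle(k=1.0, d=1.0):
    """Describing-function prediction of the relay-feedback limit cycle
    for P(s) = k/(s(s+1)^2) with an ideal relay of amplitude d."""
    # arg P(jw) = -pi/2 - 2*arctan(w); the oscillation sits where it equals -pi.
    f = lambda w: np.pi/2 - 2*np.arctan(w)   # phase condition, zero at the -180 deg frequency
    lo, hi = 1e-3, 1e3
    for _ in range(100):                     # bisection (f > 0 below the crossing)
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    w180 = 0.5*(lo + hi)                     # equals 1 rad/s for this process
    gain = k/(w180*(1 + w180**2))            # |P(j w180)|
    # Harmonic balance N(a)|P| = 1 with N(a) = 4d/(pi a)
    a = 4*d*gain/np.pi                       # predicted oscillation amplitude
    return w180, a
```

For k = 1, d = 1 this predicts ω ≈ 1 rad/s and amplitude 2/π. Because the relay's equivalent gain 4d/(πa) falls as the amplitude grows, the loop gain self-adjusts, which is the mechanism behind the gm = 2 property claimed above.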

SOAS Simulation P(s) = k/(s(s+1)²)

[Figure: output y with model output ym, control u, and error e over 0–50 s]

Gain k nominally 1, increased to k = 5 at time t = 25. The system with a linear controller is unstable for k > 2.

SOAS Simulation – Adding Lead Network

[Figure: output y with model output ym, control u, and error e over 0–50 s]

Gain increases by a factor of 5 at time t = 25.

SOAS Simulation – Adding Gain Changer

[Figure: output y with model output ym, control u, and error e over 0–50 s]

Gain increases by a factor of 5 at time t = 25.

The X-15 with MH-96 Autopilot – Crash Nov 15, 1967

[Photograph of the X-15]

The logic for the gain change did not increase the gain fast enough.

Dydek, Zachary, Anuradha Annaswamy, and Eugene Lavretsky. "Adaptive Control and the NASA X-15-3 Flight Revisited." IEEE Control Systems Magazine 30.3 (2010): 32–48.

Adaptive Control
K. J. Åström

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
   The MIT rule – sensitivity derivatives
   Direct MRAS – update controller parameters directly
   Indirect MRAS – update parameters of a process model
   L1 adaptive control – avoid dividing by estimates
4. Estimation and Excitation
5. Self-Tuning Regulators
6. Learning and Dual Control
7. Applications
8. Related Fields
9. Summary

Global Stability

A long story
◮ MRAS and the MIT rule 1959
◮ Empirical evidence of instability
◮ Analysis
  Butchart and Shackcloth 1965
  Passivity theory
  Parks 1966
  Landau 1969
◮ The augmented error, Monopoli 1974
◮ Counterexamples, Feuer and Morse 1978
◮ Stability proofs
  Egardt 1979
  Goodwin, Ramadge, Caines 1980
  Narendra 1980
  Morse 1980
  Many others

Model Reference Adaptive Control – P. Whitaker, MIT 1959

We have further suggested the name model-reference adaptive system for the type of system under consideration. A model-reference system is characterized by the fact that the dynamic specifications for a desired system output are embodied in a unit which is called the model-reference for the system, and which forms part of the equipment installation. The command signal input to the control system is also fed to the model. The difference between the output signal of the model and the corresponding output quantity of the system is then the response error. The design objective of the adaptive portion of this type of system is to minimize this response error under all operational conditions of the system. Specifically the adjustment is done by the MIT Rule.

The MIT rule
  dθ/dt = −γ e ∂e/∂θ
Many other versions exist.

Model Reference Adaptive Control – MRAS

[Block diagram: command uc drives both the model (output ym) and the controller; the adjustment mechanism sets the controller parameters from the error e = y − ym; the controller output u drives the plant with output y]

Linear feedback from e = y − ym is not adequate for parameter adjustment!

A First Order System

Process
  dy/dt = −ay + bu
Model
  dym/dt = −am ym + bm uc
Controller
  u(t) = θ1 uc(t) − θ2 y(t)
Ideal controller parameters
  θ1 = θ1⁰ = bm/b,   θ2 = θ2⁰ = (am − a)/b
Find a feedback that changes the controller parameters so that the closed loop response is equal to the desired model.

MIT Rule – Sensitivity Derivatives

The error
  e = y − ym,   y = (bθ1/(p + a + bθ2)) uc
Sensitivity derivatives
  ∂e/∂θ1 = (b/(p + a + bθ2)) uc
  ∂e/∂θ2 = −(b²θ1/(p + a + bθ2)²) uc = −(b/(p + a + bθ2)) y
Approximate
  p + a + bθ2 ≈ p + am
Hence
  dθ1/dt = −γ [(am/(p + am)) uc] e
  dθ2/dt = γ [(am/(p + am)) y] e
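As a sketch, the MIT rule for this example can be simulated with forward Euler. The values a = 1, b = 0.5, am = bm = 2 are from the slides; the step size, horizon, square-wave command and γ = 1 are illustrative choices.

```python
import numpy as np

def mit_rule(T=100.0, dt=0.01, a=1.0, b=0.5, am=2.0, bm=2.0, gamma=1.0):
    """Forward-Euler simulation of the MIT-rule MRAS for the first-order example."""
    n = int(T/dt)
    y = ym = th1 = th2 = 0.0
    f1 = f2 = 0.0                 # filtered signals am/(p+am) uc and am/(p+am) y
    e_log = np.zeros(n)
    for i in range(n):
        t = i*dt
        uc = 1.0 if (t % 20) < 10 else -1.0   # square-wave command, period 20 s
        u = th1*uc - th2*y
        e = y - ym
        e_log[i] = e
        f1 += dt*(-am*f1 + am*uc)             # filter am/(p+am) on uc
        f2 += dt*(-am*f2 + am*y)              # filter am/(p+am) on y
        th1 += dt*(-gamma*f1*e)               # MIT rule updates
        th2 += dt*( gamma*f2*e)
        y  += dt*(-a*y + b*u)                 # process
        ym += dt*(-am*ym + bm*uc)             # reference model
    return e_log, th1, th2
```

With these numbers the ideal gains are θ1 = bm/b = 4 and θ2 = (am − a)/b = 2; the tracking error shrinks as the parameters adapt, faster for larger γ as in the simulation figures.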

Block Diagram

[Block diagram of the MRAS: the command uc drives the model Gm(s) and, multiplied by θ1, the process G(s); the error e = y − ym is filtered by am/(s + am), multiplied by the filtered uc and y, and integrated with gain γ/s to produce θ1 and θ2]

  dθ1/dt = −γ [(am/(p + am)) uc] e
  dθ2/dt = γ [(am/(p + am)) y] e

Simulation a = 1, b = 0.5, am = bm = 2

[Figure: outputs y and ym, control u, and parameter estimates θ1 and θ2 for γ = 0.2, 1 and 5 over 0–100 s]

Adaptation Laws from Lyapunov Theory

Replace ad hoc designs with designs that give guaranteed stability.

◮ Lyapunov function V(x) > 0 positive definite, with
    dx/dt = f(x),   dV/dt = (dV/dx)(dx/dt) = (dV/dx) f(x) < 0
◮ Determine a controller structure
◮ Derive the error equation
◮ Find a Lyapunov function
◮ Show dV/dt ≤ 0, use Barbălat's lemma
◮ Determine an adaptation law

First Order System

Process model and desired behavior
  dy/dt = −ay + bu,   dym/dt = −am ym + bm uc
Controller and error
  u = θ1 uc − θ2 y,   e = y − ym
Ideal parameters
  θ1 = bm/b,   θ2 = (am − a)/b
The derivative of the error
  de/dt = −am e − (bθ2 + a − am) y + (bθ1 − bm) uc
Candidate for Lyapunov function
  V(e, θ1, θ2) = (1/2) [e² + (1/(bγ)) (bθ2 + a − am)² + (1/(bγ)) (bθ1 − bm)²]

Derivative of Lyapunov Function

  V(e, θ1, θ2) = (1/2) [e² + (1/(bγ)) (bθ2 + a − am)² + (1/(bγ)) (bθ1 − bm)²]

Derivative of error and Lyapunov function
  de/dt = −am e − (bθ2 + a − am) y + (bθ1 − bm) uc
  dV/dt = e de/dt + (1/γ)(bθ2 + a − am) dθ2/dt + (1/γ)(bθ1 − bm) dθ1/dt
        = −am e² + (bθ2 + a − am) [(1/γ) dθ2/dt − ye] + (bθ1 − bm) [(1/γ) dθ1/dt + uc e]

Adaptation law
  dθ1/dt = −γ uc e,   dθ2/dt = γ ye   ⇒   dV/dt = −am e²

The error will always go to zero; what about the parameters? Barbălat's lemma!

Comparison with MIT Rule

[Block diagrams: in the Lyapunov rule the signals uc and y enter the parameter updates directly; in the MIT rule they are first filtered by am/(s + am)]
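A minimal sketch of the Lyapunov rule for the same first-order example (slide parameters a = 1, b = 0.5, am = bm = 2; the step size, horizon, square-wave command and γ = 1 are illustrative). Since the true a and b are known in simulation, V can be monitored directly and should decrease along trajectories.

```python
import numpy as np

def lyap_mras(T=100.0, dt=0.01, a=1.0, b=0.5, am=2.0, bm=2.0, gamma=1.0):
    """Forward-Euler simulation of the Lyapunov-rule MRAS; returns V(t)."""
    n = int(T/dt)
    y = ym = th1 = th2 = 0.0
    V = np.zeros(n)
    for i in range(n):
        t = i*dt
        uc = 1.0 if (t % 20) < 10 else -1.0   # square-wave command
        u = th1*uc - th2*y
        e = y - ym
        # Lyapunov function from the slide (true a, b used for monitoring only)
        V[i] = 0.5*(e**2 + (b*th2 + a - am)**2/(b*gamma)
                    + (b*th1 - bm)**2/(b*gamma))
        th1 += dt*(-gamma*uc*e)               # adaptation law: unfiltered uc and y
        th2 += dt*( gamma*y*e)
        y  += dt*(-a*y + b*u)                 # process
        ym += dt*(-am*ym + bm*uc)             # reference model
    return V
```

Since dV/dt = −am e², V dissipates exactly while the tracking error is nonzero; this is the guarantee that the ad hoc MIT rule lacks.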

Simulation – Dotted Parameters MIT Rule

[Figure: outputs y and ym, control u, and parameter estimates θ1 and θ2 for γ = 0.2, 1 and 5 over 0–100 s; dotted curves show the MIT rule]

Indirect MRAS

[Block diagram: parameter adjustment block estimates process parameters and computes the controller parameters; regular feedback loop with setpoint, controller, control signal, plant and output]

Two loops
◮ Regular feedback loop
◮ Parameter adjustment loop

Schemes
◮ Model Reference Adaptive Control MRAS
◮ Self-tuning Regulator STR
◮ Learning and Dual Control

Indirect MRAS – Estimate Process Model

Process and estimator
  dx/dt = ax + bu,   dx̂/dt = âx̂ + b̂u
Nominal controller gains: kx = kx⁰ = (a − am)/b, kr = kr⁰ = bm/b.
The estimation error e = x̂ − x has the derivative
  de/dt = âx̂ + b̂u − ax − bu = ae + (â − a)x̂ + (b̂ − b)u = ae + ãx̂ + b̃u
where ã = â − a and b̃ = b̂ − b. Lyapunov function
  2V = e² + (1/γ)(ã² + b̃²)
Its derivative becomes
  dV/dt = e de/dt + (1/γ)(ã dã/dt + b̃ db̃/dt) = ae² + ã (ex̂ + (1/γ) dã/dt) + b̃ (eu + (1/γ) db̃/dt)

Indirect MRAS

Process and estimator
  dx/dt = ax + bu,   dx̂/dt = âx̂ + b̂u
Control law
  u = −((â − am)/b̂) x + (bm/b̂) r
Very bad to divide by b̂!

L1 Adaptive Control – Hovakimyan and Cao 2006

Replace
  u = −((â − am)/b̂) x + (bm/b̂) r,   i.e.   b̂u + (â − am)x − bm r = 0
with the differential equation
  du/dt = K (bm r − (â − am)x − b̂u)
This avoids division by b̂ and can be interpreted as sending the signal bm r + (am − â)x through a first order filter with the pole s = −K b̂.

L1 has been tested on several airplanes.

I. M. Gregory, C. Cao, E. Xargay, N. Hovakimyan and X. Zou, L1 Adaptive Control Design for NASA AirSTAR Flight Test Vehicle. AIAA Guidance, Navigation, and Control Conference, Portland, Oregon, August 2011.

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
7. Learning and Dual Control
8. Applications
9. Related Fields
10. Summary

The Least Squares Method

The problem: the orbit of Ceres
The problem solver: Carl Friedrich Gauss
The principle: Therefore, that will be the most probable system of values of the unknown quantities, in which the sum of the squares of the differences between the observed and computed values, multiplied by numbers that measure the degree of precision, is a minimum.
In conclusion, the principle that the sum of the squares of the differences between the observed and computed quantities must be a minimum may be considered independently of the calculus of probabilities.
An observation: Other criteria could be used. But of all these principles ours is the most simple; by the others we should be led into the most complicated calculations.

The Book

[Figure: Gauss's book]

Recursive Least Squares

For the model
  y(t+1) = −a1 y(t) − a2 y(t−1) + · · · + b1 u(t) + · · · + e(t+1) = φ(t)ᵀθ + e(t+1)
  φ(t) = [−y(t) −y(t−1) · · · u(t) u(t−1) · · ·]
  θ = [a1 a2 · · · b1 b2 · · ·]
the parameter estimates are given by
  θ̂(t) = θ̂(t−1) + K(t) (y(t) − φ(t)ᵀ θ̂(t−1))
  K(t) = P(t−1) φ(t) (λ + φ(t)ᵀ P(t−1) φ(t))⁻¹
  P(t) = (P(t−1) − K(t) φ(t)ᵀ P(t−1)) / λ
The parameter λ controls how quickly old data is discounted; many versions exist: directional forgetting, square root filtering, etc.

Persistent Excitation PE

Introduce
  c(k) = lim(t→∞) (1/t) Σ(i=1..t) u(i) u(i−k)
and the matrix
  Cn = | c(0)    c(1)    . . .  c(n−1) |
       | c(1)    c(0)    . . .  c(n−2) |
       |  ...                          |
       | c(n−1)  c(n−2)  . . .  c(0)   |
A signal u is called persistently exciting (PE) of order n if the matrix Cn is positive definite.
Equivalently, u is persistently exciting of order n if and only if
  U = lim(t→∞) (1/t) Σ(k=1..t) (A(q) u(k))² > 0
for all nonzero polynomials A of degree n − 1 or less.
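A sketch of the recursive least squares update with forgetting factor, tried on a simple illustrative system y(t) = 0.8 y(t−1) + 0.5 u(t−1) + e(t) with a PRBS-like input; the system, noise level and seed are assumptions for the demonstration, not from the slides.

```python
import numpy as np

def rls(phi, y, lam=0.99, p0=100.0):
    """Recursive least squares with forgetting factor lam."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = p0*np.eye(n)
    for t in range(len(y)):
        f = phi[t]
        k = P @ f / (lam + f @ P @ f)           # gain K(t)
        theta = theta + k*(y[t] - f @ theta)    # update with prediction error
        P = (P - np.outer(k, f @ P)) / lam      # covariance update with discounting
    return theta

rng = np.random.default_rng(0)
N = 500
u = np.sign(rng.standard_normal(N))             # PRBS-like input (persistently exciting)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8*y[t-1] + 0.5*u[t-1] + 0.01*rng.standard_normal()
phi = np.array([[y[t-1], u[t-1]] for t in range(1, N)])
theta = rls(phi, y[1:], lam=1.0)                # lam = 1: no forgetting
```

With λ = 1 all data are weighted equally; λ < 1 lets the estimator track slowly varying parameters at the price of a larger variance.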

Persistent Excitation – Examples

◮ A step is PE of order 1: (q − 1) u(t) = 0
◮ A sinusoid is PE of order 2: (q² − 2q cos ωh + 1) u(t) = 0
◮ White noise is PE of any order
◮ PRBS
◮ Physical meaning
◮ Mathematical meaning

Lack of Identifiability due to Feedback

Consider
  y(t) + a y(t−1) = b u(t−1) + e(t),   u(t) = −k y(t)
Multiply the feedback law by α and add; hence
  y(t) + (a − αk) y(t−1) = (b + α) u(t−1) + e(t)
The same I/O relation holds for all â and b̂ such that
  â = a − αk,   b̂ = b + α
[Figure: in the (â, b̂) plane all such pairs lie on a line of slope −1/k through the true value (a, b)]
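The PE orders claimed above (a step is PE of order 1 but not of order 2; a sinusoid is PE of order 2) can be checked numerically by forming the matrix Cn from a long record. This is an illustrative sketch with finite-sample averages in place of the limit.

```python
import numpy as np

def pe_matrix(u, n):
    """Empirical covariance matrix Cn with c(k) = mean of u(i) u(i-k)."""
    N = len(u)
    c = [np.mean(u[k:]*u[:N-k]) for k in range(n)]
    return np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

N = 10000
step = np.ones(N)                  # step input
sine = np.sin(0.5*np.arange(N))    # sinusoid with omega*h = 0.5
```

For the step, C1 = [1] is positive definite but C2 = [[1, 1], [1, 1]] is singular; for the sinusoid, C2 has smallest eigenvalue roughly (1 − cos 0.5)/2 > 0, confirming PE of order 2.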

Lack of Identifiability due to Feedback

[Block diagram: reference r, controller C, disturbance v entering at the process input, process P, output y]

  Y(s) = (P(s)C(s)/(1 + P(s)C(s))) R(s) + (P(s)/(1 + P(s)C(s))) V(s)
  U(s) = (C(s)/(1 + P(s)C(s))) R(s) − (C(s)P(s)/(1 + P(s)C(s))) V(s)
Hence Y(s) = P(s) U(s) if v = 0, and Y(s) = −(1/C(s)) U(s) if r = 0.
Identification will then give the negative inverse of the controller transfer function! Any signal entering between u and y will influence closed loop identification severely. A good model can only be obtained if v = 0, or if v is much smaller than r!

Example

Model
  y(t) + a y(t−1) = b u(t−1) + e(t)
Parameters
  a = −0.9, b = 0.5, σ = 0.5, θ̂(0) = 0, P(0) = 100 I
Excitation
  ◮ Unit pulse at t = 50
  ◮ Square wave of unit amplitude and period 100
Two cases
  a) u(t) = −0.2 y(t)
  b) u(t) = −0.32 y(t−1)
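The degeneracy can be demonstrated directly: with exact proportional feedback u(t) = −k y(t) the regressors [−y(t−1), u(t−1)] are collinear, so â and b̂ cannot be separated. A minimal sketch using the example's parameters a = −0.9, b = 0.5, σ = 0.5 and case (a) feedback (seed illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma, k, N = -0.9, 0.5, 0.5, 0.2, 500
y = np.zeros(N); u = np.zeros(N)
for t in range(1, N):
    # y(t) + a y(t-1) = b u(t-1) + e(t) with feedback u(t) = -k y(t)
    y[t] = -a*y[t-1] + b*u[t-1] + sigma*rng.standard_normal()
    u[t] = -k*y[t]

Phi = np.column_stack([-y[:-1], u[:-1]])   # regressors for (a, b)
rank = np.linalg.matrix_rank(Phi)          # 1: the normal equations are singular
```

Since the second regressor column is exactly k times the first, least squares can only pin down the combination along the line â = a − αk, b̂ = b + α, exactly as in the figure.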

Example ...

a) u(t) = −0.2 y(t)   b) u(t) = −0.32 y(t−1)

[Figure: estimates (â, b̂) for the two cases]

No convergence with constant feedback of compatible structure; slow convergence to a low dimensional subspace with irregular feedback!

T. Hägglund and KJÅ, Supervision of adaptive control algorithms. Automatica 36 (2000) 1171–1180.

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
   Motivation
   Stochastic control theory
   A model structure
   System identification
   The control algorithm
6. Self-Tuning Regulators
7. Learning and Dual Control
8. Applications
9. Related Fields
10. Summary

The Billerud–IBM Project

◮ IBM and Computer Control
  IBM dominated the computer market totally in the late 1950s
  Saw a big market in the process industry
  Started a research group in the math department of IBM Research; hired Kalman, Bertram and Koepcke 1958
  Bad experience with an installation in the US paper industry
  IBM Nordic Laboratory 1959, hired KJÅ Jan 1960
◮ Billerud
  Visionary manager Tryggve Bergek
  Had approached Datasaab earlier for computer control
◮ Project Goals
  Billerud: Exploit computer control to improve quality and profit!
  IBM: Gain experience in computer control, recover prestige and find a suitable computer architecture!
◮ Schedule
  Start April 1963, computer installed December 1964
  System identification and on-line control March 1965
  Full operation September 1966
  A 40 man-year effort in about 3 years

The Billerud Plant

[Photograph of the plant]

The Drying Section

[Photograph of the drying section]

Steady State Regulation of Basis Weight and Moisture Content

[Figure: regulation records]

Small improvements of 1% are very valuable.

The Scene of 1960

◮ Servomechanism theory 1945
◮ IFAC 1956 (50 year jubilee in 2006)
◮ Widespread education and industrial use of control
◮ The First IFAC World Congress, Moscow 1960
◮ Exciting new ideas
  Dynamic Programming, Bellman 1957
  Maximum Principle, Pontryagin 1961
  Kalman Filtering, ASME 1960
◮ Exciting new developments
  The space race (Sputnik 1957)
  Computer Control (Port Arthur 1959)
  Saab R System
  IBM and IBM Nordic Laboratory 1960
  Computerized Process Control

The RAND Corporation

Set up as an independent non-profit research organization (think tank) for the US Air Force by Douglas Aircraft Corporation in 1945.

◮ Richard Bellman
◮ George Dantzig (LP)
◮ Henry Kissinger
◮ John von Neumann
◮ Condoleezza Rice
◮ Donald Rumsfeld
◮ Paul Samuelson

Stochastic Control Theory

Process model
  dx = Ax dt + Bu dt + dv
  dy = Cx dt + de
Kalman filtering, quadratic control, separation theorem:
  dx̂ = Ax̂ dt + Bu dt + K (dy − Cx̂ dt)
  u = L(xm − x̂) + uff
A natural approach for regulation of industrial processes.

Model Structures

Much redundancy: z = Tx + noise model. Start by transforming to the innovations representation, where ε is a Wiener process:
  dx̂ = Ax̂ dt + Bu dt + K (dy − Cx̂ dt) = (A − KC)x̂ dt + Bu dt + K dε
  dy = Cx̂ dt + dε
Transform to observable canonical form:
  dx̂ = | −a1    1 0 . . . 0 |        | b1 |        | k1 |
       | −a2    0 1 . . . 0 |        | b2 |        | k2 |
       |  ...               | x̂ dt + | .. | u dt + | .. | dε
       | −an−1  0 0 . . . 1 |        |    |        |    |
       | −an    0 0 . . . 0 |        | bn |        | kn |
  dy = [1 0 0 . . . 0] x̂ dt + dε
Input-output representation:
  Y = ((b1 s^(n−1) + b2 s^(n−2) + · · · + bn)/(s^n + a1 s^(n−1) + · · · + an)) U
      + (1 + (k1 s^(n−1) + k2 s^(n−2) + · · · + kn)/(s^n + a1 s^(n−1) + · · · + an)) E
Notice the symmetries:
◮ y can be computed from e, dynamics A
◮ e can be computed from y, dynamics A − KC (observer dynamics)
◮ The Kalman filter gains ki appear explicitly in the model
◮ The ai give the characteristic polynomial of the system; the estimator dynamics are the eigenvalues of A − KC

The Sampled Model

The basic sampled model for a stochastic SISO system is
  A(q⁻¹) y(t) = B(q⁻¹) u(t) + C(q⁻¹) e(t)

Modeling from Data (Identification)

The likelihood function (Bayes rule):
  log p(YN | θ) = −(1/2) Σ(t=1..N) ε²(t)/σ² − (N/2) log 2πσ²
  θ = (a1, . . . , an, b1, . . . , bn, c1, . . . , cn)
  A y(t) = B u(t) + C e(t),   C ε(t) = A y(t) − B u(t)
  ε = one step ahead prediction error
Efficient computations:
  ∂J/∂ak = Σ(t=1..N) ε(t) ∂ε(t)/∂ak,   C (∂ε(t)/∂ak) = q⁻ᵏ y(t)
◮ Good match between identification and control: the prediction error is minimized in both cases!

Practical Issues

◮ Sampling period
◮ To perturb or not to perturb
◮ Open or closed loop experiments
◮ Model validation
◮ 20 min for two-pass compilation of a Fortran program!
◮ Control design
◮ Skills and experience

KJÅ and T. Bohlin, Numerical Identification of Linear Dynamic Systems from Normal Operating Records. In Hammond, Theory of Self-Adaptive Control Systems, Plenum Press, January 1966.

Minimum Variance Control – Example

Consider the first order system
  y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)
Consider the situation at time t; we have
  y(t+1) = −a y(t) + b u(t) + e(t+1) + c e(t)
The control signal u(t) can be chosen, the underlined terms are known at time t, and e(t+1) is independent of all data available at time t. The controller that minimizes E y² is thus given by
  b u(t) = a y(t) − c e(t)
and the control error is y(t+1) = e(t+1), i.e. the one step prediction error. The control law becomes u(t) = ((a − c)/b) y(t).

Minimum Variance (Moving Average) Control

Process model
  A y(t) = B u(t) + C e(t)
Factor B = B⁺B⁻ and solve (minimum degree solution)
  A F + B⁻ G = C
Then
  C y = A F y + B⁻ G y = F (B u + C e) + B⁻ G y = C F e + B⁻ (B⁺ F u + G y)
Control law and output are given by
  B⁺ F u(t) = −G y(t),   y(t) = F e(t)
where deg F ≥ pole excess of B/A. True minimum variance control minimizes V = E (1/T) ∫₀ᵀ y²(t) dt.
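As a sketch, the first-order example can be simulated: since y(t) → e(t) in the minimum variance loop, the law b u(t) = a y(t) − c e(t) reduces to u(t) = ((a − c)/b) y(t), and the controlled output becomes white. The parameter values and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, N = -0.9, 0.5, 0.3, 5000
e = rng.standard_normal(N + 1)

def simulate(controlled):
    """y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t), with or without MV control."""
    y = np.zeros(N + 1)
    for t in range(N):
        u = (a - c)/b*y[t] if controlled else 0.0   # minimum variance law
        y[t+1] = -a*y[t] + b*u + e[t+1] + c*e[t]
    return y

var_mv = simulate(True)[100:].var()    # ~ sigma^2: output is the prediction error
var_ol = simulate(False)[100:].var()   # ARMA(1,1) variance, much larger
```

In closed loop y(t+1) − e(t+1) = −c (y(t) − e(t)), so the output converges geometrically to the one-step prediction error and its variance to σ², easy to validate as a moving average.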

Properties of Minimum Variance Control

◮ The output is a moving average
    y = F e,   deg F ≤ deg A − deg B⁺
  Easy to validate!
◮ Interpretation for B⁻ = 1 (all process zeros canceled): y is a moving average of degree npz = deg A − deg B. It is equal to the error in predicting the output npz steps ahead.
◮ The closed loop characteristic polynomial is
    B⁺ C z^(deg A − deg B)
◮ The sampling period is an important design variable!
◮ Sampled zeros depend on the sampling period. For a stable system all zeros are stable for sufficiently long sampling periods.

KJÅ, P. Hagander, J. Sternby, Zeros of sampled systems. Automatica 20 (1), 31–38, 1984.

Minimum Variance Control

Process model
  y(t) + a1 y(t−1) + · · · = b1 u(t−k) + · · · + e(t) + c1 e(t−1) + · · ·
  A y(t) = B u(t−k) + C e(t)
◮ Ordinary difference equation with time delay
◮ Disturbances are stationary stochastic processes with rational spectra
◮ The prediction horizon: the true delay plus one sampling period
◮ Control law R u = −S y
◮ The output becomes a moving average of white noise, y(t+k) = F e(t), which is easy to validate!
◮ Robustness and tuning

Performance (B⁻ = 1) and Sampling Period

Plot the prediction error as a function of the prediction horizon Tp = Td + Ts, where Td is the time delay and Ts is the sampling period. Decreasing Ts reduces the variance but decreases the response time.

[Figure: prediction error variance σ²pe versus prediction horizon Tp]

A Robustness Result

A simple digital controller for systems with monotone step response (design based on the model y(k+1) = b u(k)):
  u(k) = K (ysp − y(k)) + u(k−1),   K < 2/g(∞)
Stable if g(Ts) > g(∞)/2.

KJÅ: Automatica 16, 1980, pp 313–315.
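A sketch of the robustness result in action: the incremental controller applied to an illustrative first-order plant y(k+1) = 0.5 y(k) + 0.5 u(k), which has a monotone step response and static gain g(∞) = 1; the chosen gain K = 0.5 respects the bound K < 2/g(∞).

```python
import numpy as np

def simulate(K=0.5, ysp=1.0, N=200):
    """Incremental controller u(k) = u(k-1) + K*(ysp - y(k)) on a first-order plant."""
    y = u = 0.0
    for _ in range(N):
        u = u + K*(ysp - y)     # incremental (integrating) control law
        y = 0.5*y + 0.5*u       # plant with monotone step response, g(inf) = 1
    return y
```

The incremental form gives integral action, so the output settles exactly at the setpoint despite the crude one-parameter design model.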

Back to Billerud – Performance of Minimum Variance Control

[Figure: regulation performance with minimum variance control]

IBM Scientific Symposium on Control Theory and Applications 1964

KJÅ, Computer control of a paper machine—an application of linear stochastic control theory. IBM Journal of Research and Development 11 (4), 389–405, 1967.

Summary

◮ Regulation can be done effectively by minimum variance control
◮ Easy to validate because the regulated output is a moving average of white noise!
◮ Robustness depends critically on the sampling period
◮ The sampling period is the design variable!
◮ The Harris index and related criteria
◮ OK to assess, but how about adaptation?

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
   The self-tuning regulator
   Properties
7. Learning and Dual Control
8. Applications
9. Related Fields
10. Summary

Rudolf Emil Kalman 1930–2016

◮ Born in Budapest 1930
◮ BS MIT 1953
◮ MS MIT 1954
◮ PhD Columbia University NY 1957
◮ IBM Research Yorktown Heights 1957–58
◮ RIAS Baltimore 1958–1964
◮ Professor Stanford 1964–1971
◮ Professor University of Florida 1971–1992
◮ Professor ETH 1973–2016

Kalman's Self-Optimizing Regulator 1

R. E. Kalman, Design of a self-optimizing control system. Trans. ASME 80, 468–478 (1958). Inspired by work at IBM Research and DuPont.

Repeat the following two steps at each sampling instant:
Step 1: Estimate the parameters a1, a2, . . . , an, b1, b2, . . . , bn in the model (8).
Step 2: Use a control law that gives the shortest settling time for a step change in the reference signal.

Remark: Many other methods can be used for parameter estimation and control design.

Kalman's Self-Optimizing Regulator 2

R. E. Kalman, Design of a self-optimizing control system. Trans. ASME 80, 468–478 (1958).
Remark on computations:
In practical applications, however, a general-purpose digital computer is an expensive, bulky, extremely complex, and sometimes awkward piece of equipment. Moreover, the computational capabilities (speed, storage capacity, accuracy) of even smaller commercially available general-purpose digital computers are considerably in excess of what is demanded in performing the computations listed in the Appendix. For these reasons, a small special-purpose computer was constructed which could be externally digital and internally analog.

Columbia University had a computer of this type. Unfortunately Kalman's regulator never worked!

The Self-Tuning Regulator 1

[Block diagram: estimation block determines process parameters from input and output; controller design computes the controller parameters from the specification; the controller acts on the process]

◮ Certainty Equivalence – design as if the estimates were correct (Simon)
◮ Many control and estimation schemes

The Self-Tuning Regulator 2

Estimates of parameters in
  y(t) + a1 y(t−1) + · · · + an y(t−n) = b0 u(t−k) + · · · + bm u(t−m) + e(t) + c1 e(t−1) + · · · + cn e(t−n)
are unbiased if ci = 0. Surprisingly the self-tuner works with ci ≠ 0. Discovered empirically in simulations by Wieslander and Wittenmark and independently by Peterka.

Johan Wieslander and Björn Wittenmark, An approach to adaptive control using real time identification. Proc 2nd IFAC Symposium on Identification and Process Parameter Estimation, Prague 1970.
Vaclav Peterka, Adaptive digital regulation of noisy systems. Proc 2nd IFAC Symposium on Identification and Process Parameter Estimation, Prague 1970.

The Self-Tuning Regulator 3

Process model, estimation model and control law:
  y(t) + a1 y(t−1) + · · · + an y(t−n) = b0 u(t−k) + · · · + bm u(t−m) + e(t) + c1 e(t−1) + · · · + cn e(t−n)
  y(t+k) = s0 y(t) + s1 y(t−1) + · · · + sm y(t−m) + r0 (u(t) + r1 u(t−1) + · · · + rℓ u(t−ℓ))
  u(t) + r̂1 u(t−1) + · · · + r̂ℓ u(t−ℓ) = −(ŝ0 y(t) + ŝ1 y(t−1) + · · · + ŝm y(t−m))/r0
If the estimates converge and 0.5 < r0/b0 < ∞, then
  ry(τ) = 0, τ = k, k+1, · · ·, k+m+1
  ryu(τ) = 0, τ = k, k+1, · · ·, k+ℓ
If the degrees are sufficiently large, ry(τ) = 0 for all τ ≥ k.
◮ Essentially Kalman's algorithm
◮ Converges to minimum variance control even if ci ≠ 0. Surprising!
◮ Automates identification and minimum variance control in about 30 lines of code
◮ The controller drives covariances to zero

KJÅ and B. Wittenmark, On Self-Tuning Regulators. Automatica 9 (1973), 185–199.
Björn Wittenmark, Self-Tuning Regulators. PhD Thesis, April 1973.
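A minimal direct self-tuner in the spirit of the slide, well under 30 lines: recursive least squares estimates the one-step predictor y(t+1) = s0 y(t) + r0 u(t), and certainty equivalence gives u(t) = −(ŝ0/r̂0) y(t). The process parameters, seed, initial covariance and the guard keeping r̂0 away from zero are illustrative choices, not from the slides.

```python
import numpy as np

# Process (unknown to the controller): y(t+1) = -a y(t) + b u(t) + e(t+1) + c e(t)
rng = np.random.default_rng(3)
a, b, c, N = -0.9, 0.5, 0.3, 4000
e = rng.standard_normal(N + 1)

theta = np.array([0.0, 0.5])        # estimates [s0, r0], illustrative initial guess
P = 100*np.eye(2)                   # RLS covariance
y = np.zeros(N + 1); u = np.zeros(N + 1)
for t in range(N):
    r0 = max(theta[1], 0.1)         # keep the divisor away from zero (safeguard)
    u[t] = -theta[0]/r0*y[t]        # certainty-equivalence minimum variance law
    y[t+1] = -a*y[t] + b*u[t] + e[t+1] + c*e[t]
    phi = np.array([y[t], u[t]])    # regressors of the estimation model
    k = P @ phi / (1 + phi @ P @ phi)
    theta = theta + k*(y[t+1] - phi @ theta)
    P = P - np.outer(k, phi @ P)

var_str = y[1000:].var()            # after tuning, close to the noise variance
```

Despite c ≠ 0 the tuner settles near the minimum variance controller, as the convergence results state, so the regulated output approaches white noise with unit variance.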

Test at Billerud 1973

[Figure: results from the industrial test]

U. Borisson and B. Wittenmark, An Industrial Application of a Self-Tuning Regulator. 4th IFAC/IFIP Symposium on Digital Computer Applications to Process Control, 1974.
U. Borisson, Self-Tuning Regulators – Industrial Application and Multivariable Theory. PhD thesis, LTH 1975.

Convergence Proof

Process model Ay = Bu + Ce:
  y(t) + a1 y(t−1) + · · · + an y(t−n) = b0 u(t−k) + · · · + bm u(t−m) + e(t) + c1 e(t−1) + · · · + cn e(t−n)
Estimation model:
  y(t+k) = s0 y(t) + s1 y(t−1) + · · · + sm y(t−m) + r0 (u(t) + r1 u(t−1) + · · · + rℓ u(t−ℓ))
Theorem: Assume that
◮ the time delay k of the sampled system is known,
◮ upper bounds of the degrees of A, B and C are known,
◮ the polynomial B has all its zeros inside the unit disc,
◮ the sign of b0 is known.
Then the sequences u(t) and y(t) are bounded and the parameters converge to the minimum variance controller.

G. C. Goodwin, P. J. Ramadge, P. E. Caines, Discrete-time multivariable adaptive control. IEEE Trans AC-25, 1980, 449–456.

Convergence Proof

Markov processes and differential equations:
  dx = f(x) dt + g(x) dw,   ∂p/∂t = −∂(fp)/∂x + (1/2) ∂²(g²p)/∂x²
The recursive update θ(t+1) = θ(t) + γ(t) φ e is associated with the ODE dθ/dτ = f(θ) = E[φ e].
This gives a method for convergence analysis of recursive algorithms. Global stability of the STR (Ay = Bu + Ce) if G(z) = 1/C(z) − 0.5 is SPR.

L. Ljung, Analysis of Recursive Stochastic Algorithms. IEEE Trans AC-22 (1977) 551–575.

Converges locally if ℜ C(zk) > 0 for all zk such that B(zk) = 0.

Jan Holst, Local Convergence of Some Recursive Stochastic Algorithms. 5th IFAC Symposium on Identification and System Parameter Estimation, 1979.

General convergence conditions:

Lei Guo and Han-Fu Chen, The Åström–Wittenmark Self-tuning Regulator Revisited and ELS-Based Adaptive Trackers. IEEE Trans AC-36:7, 802–812.

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
7. Learning and Dual Control
8. Applications
9. Related Fields
10. Summary

Dual Control – Alexander Aronovich Fel'dbaum (1913–1969)

Control should be probing as well as directing.
◮ Dual control theory I. A. A. Feldbaum, Avtomat. i Telemekh., 1960, 21:9, 1240–1249
◮ Dual control theory II. A. A. Feldbaum, Avtomat. i Telemekh., 1960, 21:11, 1453–1464
◮ R. E. Bellman, Dynamic Programming. Academic Press 1957
◮ Stochastic control theory – adaptive control
◮ Decision-making under uncertainty – economics
◮ Optimization: Hamilton-Jacobi-Bellman

Dual Control – Feldbaum

[Block diagram: nonlinear control law driven by the command uc and the hyperstate; the hyperstate is calculated from the process input u and output y]

◮ No certainty equivalence
◮ Control should be directing as well as investigating!
◮ Intentional perturbation to obtain better information
◮ Conceptually very interesting
◮ Unfortunately very complicated
Is it time for a second look? (Helmersson, KJÅ)

The Problem - Integrator with Unknown Gain

Consider the system

y_{t+1} = y_t + b u_t + e_{t+1}

where e_t is a sequence of independent normal (0, σ²) random variables and b a constant but unknown parameter with a normal (b̂_0, P_0) prior, or a random walk.
Find a control law u_t, based on the information available at time t, that minimizes the cost function

V = E Σ_{k=1}^T y²(k)

KJÅ and A. Helmersson. Dual Control of an Integrator with Unknown Gain. Computers and Mathematics with Applications 12:6A, pp 653–662, 1986.

The Hamilton-Jacobi-Bellman Equation

The solution to the problem is given by the Bellman equation

V_t(X_t) = E_{X_t} min_{u_t} E[ y²_{t+1} + V_{t+1}(X_{t+1}) | X_t ]

The state is X_t = {y_t, y_{t−1}, y_{t−2}, ..., y_0, u_{t−1}, u_{t−2}, ..., u_0}. The derivation is general, applies also to

x_{t+1} = f(x_t, u_t, e_t),   y_t = g(x_t, u_t, v_t),   min E Σ q(x_t, u_t)

How to solve the optimization problem?
The curse of dimensionality: X_t has high dimension

A Sufficient Statistic - Hyperstate

It can be shown that a sufficient statistic for estimating future outputs is y_t and the conditional distribution of b given X_t. In our setting the conditional distribution is Gaussian N(b̂_t, P_t):

b̂_t = E(b | X_t),   P_t = E[(b̂_t − b)² | X_t]

b̂_{t+1} = b̂_t + K_t [y_{t+1} − y_t − b̂_t u_t] = b̂_t + K_t e_{t+1}
K_t = u_t P_t / (σ² + u_t² P_t)
P_{t+1} = [1 − K_t u_t] P_t = σ² P_t / (σ² + u_t² P_t)

In our particular case the conditional distribution depends only on y, b̂ and P - a significant reduction of dimensionality!

The Bellman Equation

V_t(X_t) = E_{X_t} min_{u_t} E[ y²_{t+1} + V_{t+1}(X_{t+1}) | X_t ]

Use the hyperstate to replace X_t = {y_t, y_{t−1}, y_{t−2}, ..., y_0, u_{t−1}, u_{t−2}, ..., u_0} with (y_t, b̂_t, P_t). Introduce

V_t(y_t, b̂_t, P_t) = min_{U_t} E[ Σ_{k=t+1}^T y_k² | y_t, b̂_t, P_t ]

y_{t+1} = y_t + b̂_t u_t + e_{t+1},   b̂_{t+1} = b̂_t + K_t e_{t+1},   P_{t+1} = σ² P_t / (σ² + u_t² P_t)

and the Bellman equation becomes

V_t(y, b̂, P) = min_u E[ y²_{t+1} + V_{t+1}(y_{t+1}, b̂_{t+1}, P_{t+1}) | y, b̂_t, P_t ]
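The hyperstate recursion is just a scalar Kalman filter for the unknown gain b; a direct transcription (the values b = 0.8, σ = 0.5 and the random input are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
b_true, sigma = 0.8, 0.5                       # unknown gain and noise level (illustrative)
b_hat, P = 0.0, 10.0                           # prior N(b_hat, P)
y = 0.0
for t in range(400):
    u = rng.standard_normal()                  # any exciting input works for estimation
    y_new = y + b_true * u + sigma * rng.standard_normal()
    e = y_new - y - b_hat * u                  # innovation e_{t+1}
    K = u * P / (sigma**2 + u**2 * P)          # K_t = u_t P_t / (sigma^2 + u_t^2 P_t)
    b_hat += K * e                             # b_hat_{t+1} = b_hat_t + K_t e_{t+1}
    P = sigma**2 * P / (sigma**2 + u**2 * P)   # P_{t+1}
    y = y_new
print(b_hat, P)   # b_hat should approach b_true and P should shrink
```

Note that P shrinks faster the larger u_t² is - this is exactly the coupling between control and information that dual control exploits.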

Short Time Horizon - One Step Ahead

Consider the situation at time T − 1 and look one step ahead:

V_{T−1}(y, b̂, P) = min_u E y_T²,   y_T = y_{T−1} + b u_{T−1} + e_T

We know y and have an estimate b̂ of b with covariance P:

min_u E y_T² = min_u [ (y + b̂u)² + u²P + σ² ]
             = min_u [ y² + 2y b̂ u + u²(b̂² + P) + σ² ] = σ² + P y² / (b̂² + P)

where the minimum occurs for

u = − b̂ y / (b̂² + P)   →   u = − y / b̂   as P → 0

These control laws are called cautious control and certainty equivalence control (Herbert Simon).

Controller Gain - Cautious Control

u = − b̂ y / (b̂² + P) = K y,   η = y/σ,   β = b̂/√P
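The difference between the two one-step laws is easy to see numerically: the cautious gain backs off toward zero as the parameter uncertainty P grows relative to b̂². A small sketch with illustrative numbers, also checking the closed-form minimizer against a brute-force search over u:

```python
import numpy as np

def cautious_u(y, b_hat, P):
    """One-step cautious control u = -b_hat*y/(b_hat^2 + P)."""
    return -b_hat * y / (b_hat**2 + P)

def one_step_cost(u, y, b_hat, P, sigma=1.0):
    """E y_T^2 = (y + b_hat*u)^2 + u^2*P + sigma^2."""
    return (y + b_hat * u) ** 2 + u**2 * P + sigma**2

y, b_hat = 1.0, 1.0
grid = np.linspace(-3, 3, 6001)
for P in [0.0, 0.5, 5.0]:
    u_c = cautious_u(y, b_hat, P)
    # brute-force check that u_c minimizes the one-step cost
    u_star = grid[np.argmin(one_step_cost(grid, y, b_hat, P))]
    print(P, u_c, u_star)   # gain shrinks from -1 toward 0 as P grows
```

At P = 0 the cautious law coincides with the certainty equivalence law u = −y/b̂; for large P it hardly acts at all, which is why cautious control can "turn off" when the parameters drift.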

The Solution and Scaling

V_t(y, b̂, P) = min_u [ (y + b̂u)² + σ² + u²P + E V_{t+1}(y_{t+1}, b̂_{t+1}, P_{t+1}) ]
V_{T−1}(y, b̂, P) = σ² + P y² / (b̂² + P)

Iterate backward in time. Scale variables:

η = y/σ,   β = b̂/√P,   µ = u√P/σ

The scaled Bellman equation:

W_t(η, β) = min_µ U_t(η, β, µ),   φ(x) = (1/√(2π)) e^{−x²/2}

U_t(η, β, µ) = (η + βµ)² + 1 + µ² + ∫_{−∞}^{∞} W_{t+1}( η + βµ + ε√(1 + µ²), β√(1 + µ²) + µε ) φ(ε) dε

Solving the minimization gives the control law µ = Π(η, β), where η is the scaled control error and β is the relative parameter uncertainty. Two functions: the value function W(η, β) and the policy function Π(η, β).

Understanding Probing - Policy function has multiple minima

Notice the jump!!
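One backward step of the scaled Bellman equation can be computed directly: with one step to go the minimization is analytic, W_1(η, β) = 1 + η²/(1 + β²), and the two-step cost follows by Gauss-Hermite quadrature over ε. A sketch, assuming the hyperstate update reconstructed above; the evaluation point (η, β) = (1, 0.2) and the grids are illustrative:

```python
import numpy as np

# Probabilists' Gauss-Hermite quadrature for expectations over eps ~ N(0, 1)
gh_x, gh_w = np.polynomial.hermite_e.hermegauss(7)
gh_w = gh_w / gh_w.sum()

MU = np.linspace(-3.0, 3.0, 241)       # search grid for the scaled control mu

def W1(eta, beta):
    """One step to go: min_mu (eta + beta*mu)^2 + 1 + mu^2 = 1 + eta^2/(1 + beta^2)."""
    return 1.0 + eta**2 / (1.0 + beta**2)

def U2(eta, beta, mu):
    """Two-step scaled cost for a given mu, using the scaled hyperstate update."""
    s = np.sqrt(1.0 + mu**2)
    # next scaled state: eta' = eta + beta*mu + eps*s, beta' = beta*s + mu*eps
    ew = np.sum(gh_w * W1(eta + beta * mu + gh_x * s, beta * s + mu * gh_x))
    return (eta + beta * mu) ** 2 + 1.0 + mu**2 + ew

eta, beta = 1.0, 0.2                   # beta small: large relative parameter uncertainty
mu1 = -beta * eta / (1.0 + beta**2)    # one-step (cautious) minimizer
costs = [U2(eta, beta, m) for m in MU]
mu2 = MU[int(np.argmin(costs))]
W2 = min(costs)
print(mu1, mu2, W2)
```

Longer horizons are handled the same way on an (η, β) grid; comparing µ over the grid is what reveals the probing behavior and the jumps in the policy where minima of U exchange roles.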

Controller Gain - 3 Steps

Controller Gain - 30 Steps
K(η, β) larger than 3 not shown

Cautious Control - Drifting Parameters

Dual Control - Drifting Parameters

Comparison

[Figures: Cautious Control vs. Dual Control]

Summary

◮ Use dynamic programming (move the optimization inside the integral)

min_{U_t} E Σ_{k=t+1}^T y_k² = min_{U_t} E_{X_t} E[ Σ_{k=t+1}^T y_k² | X_t ]
  = E_{X_t} min_{U_t} E[ Σ_{k=t+1}^T y_k² | X_t ] = E_{X_t} min_{u_t} min_{U_{t+1}} E[ Σ_{k=t+1}^T y_k² | X_t ]
  = E_{X_t} min_{u_t} E[ y²_{t+1} + min_{U_{t+1}} Σ_{k=t+2}^T y_k² | X_t ]

◮ State reduction (y, b̂, P) → η = y/σ, β = b̂/√P
◮ Certainty equivalence, cautious and dual control
◮ K. J. Åström. Control of Markov Chains with Incomplete State Information. JMAA, 10, pp. 174–205, 1965.
◮ K. J. Åström and A. Helmersson. Dual Control of an Integrator with Unknown Gain. Computers and Mathematics with Applications 12:6A, pp 653–662, 1986.

Limitations of the Stochastic Approach

The stochastic formulations of dual control have severe limitations:
◮ Prohibitive complexity of the dynamic programming approach
◮ Poor robustness to structural assumptions (compare H2 vs. H∞)
Is there a counterpart to "adversarial learning"?

[Block diagram: dual controller in feedback with the plant x+ = Ax + Bu + w, driven by the disturbance w and measuring (x, u)]

Minimax Adaptive Control - State Feedback

Let Q, R ≻ 0. Given (A_1, B_1), ..., (A_N, B_N) and a number γ > 0, find a control law µ that attains the infimum

inf_µ sup_{x_0, w, i} Σ_{t=0}^∞ ( |x_t|²_Q + |u_t|²_R − γ² |w_t|² )

when |x_0| = 1 and the supremum is taken over all solutions to

x_{t+1} = A_i x_t + B_i u_t + w_t,   t ≥ 0
u_t = µ_t(x_0, ..., x_t, u_0, ..., u_{t−1})

◮ If N = 1, then this gives standard H∞ control.
◮ γ quantifies robustness to unmodeled dynamics.
◮ In general, nonlinear feedback with memory is needed.
◮ Early work by [Didinsky/Basar, CDC 1994]
◮ Extension to output feedback difficult

Example: Double Integrator with Unknown Sign

The double integrator y_{t+1} = 2y_t − y_{t−1} ± u_{t−1} can be written as

x_{t+1} = [ 2 −1 1 ; 1 0 0 ; 0 0 0 ] x_t ± [ 0 ; 0 ; 1 ] u_t = A x_t ± B u_t

where the state is x_t = (y_t, y_{t−1}, u_{t−1})^T. Convex optimization gives the minimax controller

u_t = −K x_t   if Σ_{τ=0}^{t−1} (x_{τ+1} − A x_τ)^T B u_τ ≥ 0
u_t =  K x_t   otherwise
K = [ 1.786  −1.288  1.288 ]

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
7. Learning and Dual Control
8. Applications
9. Related Fields
10. Summary
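The switching rule can be simulated directly: in the noise-free case x_{τ+1} − Ax_τ = ±Bu_τ, so each term of the accumulated correlation equals ±u_τ² and its sign reveals the unknown input sign after one informative step. A noise-free sketch using the matrices and gain from the example:

```python
import numpy as np

A = np.array([[2., -1., 1.],
              [1.,  0., 0.],
              [0.,  0., 0.]])
B = np.array([0., 0., 1.])
K = np.array([1.786, -1.288, 1.288])   # minimax gain from the example

def simulate(sign, steps=40):
    """Run the minimax switching controller on x+ = A x + sign * B u."""
    x = np.array([1., 0., 0.])
    s = 0.0                            # accumulated (x+ - A x)^T B u
    for _ in range(steps):
        u = -K @ x if s >= 0 else K @ x
        x_new = A @ x + sign * B * u
        s += (x_new - A @ x) @ B * u   # equals sign * u^2 here
        x = x_new
    return np.linalg.norm(x)

print(simulate(+1), simulate(-1))      # both plants are driven to (near) zero
```

If the controller guesses wrong at t = 0 the sum immediately goes negative, the sign flips, and from then on the closed loop is A − BK for either plant, which is Schur stable for this K.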

Control of Ore Crusher 1973

Forget physics! - Hope an STR can work!
Power increased from 170 kW to 200 kW

R. Syding. Undersökning av Honeywells adaptiva reglersystem (Investigation of Honeywell's adaptive control system). MS Thesis LTH 1975
U. Borisson and R. Syding. Self-Tuning Control of an Ore Crusher. Automatica 1976, 12:1, 1–7

Control over Long Distance 1973

Plant in Kiruna, computer in Lund. Distance Lund-Kiruna 1400 km, home-made modem (Leif Andersson), supervision over phone, sampling period 20 s.

Results

Significant improvement of production - 15%!

Steermaster

Kockums
◮ Ship dynamics
◮ SSPA, Kockums
◮ Full scale tests on ships in operation

Ship Steering - 3% less fuel consumption

[Figures: STR vs. conventional autopilot]

C. Källström. Identification and Adaptive Control Applied to Ship Steering. PhD Thesis, Department of Automatic Control, Lund University, Sweden, April 1979.
C. Källström, KJÅ, N. E. Thorell, J. Eriksson, L. Sten. Adaptive Autopilots for Tankers. Automatica, 15 1979, 241–254

ABB

◮ ASEA Innovation 1981
◮ DCS system with STR
◮ Grew quickly to 30 people and 50 MSEK in 1984
◮ Strong group
◮ Wide range of applications
◮ Adaptive feedforward
◮ Incorporated in ABB Master 1984 and later in ABB 800xA
◮ Difficult to transfer to standard sales and commissioning workforce

Arthur D. Little. Innovation at ASEA. 1985

First Control

◮ Gunnar Bengtsson
◮ Founder of ABB Novatune
◮ Rolling mills
◮ Continuous casting
◮ Semiconductor manufacturing
◮ Microcontroller XC05IX
  Raspberry Pi, Linux
  Robust adaptive control
  Graphical programming
  Modelica simulation

Relay Auto-Tuner - Tore Hägglund

◮ One-button tuning
◮ Automatic generation of gain schedules
◮ Adaptation of feedback and feedforward gains
◮ Many versions
  Single loop controllers
  DCS systems
◮ Robust
◮ Excellent industrial experience
◮ Large numbers

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Minimum Variance Control
6. Self-Tuning Regulators
7. Learning and Dual Control
8. Applications
9. Related Fields
   ◮ Adaptive Signal Processing
   ◮ Neural Networks
   ◮ Boxes
10. Summary

The Perceptron - Rosenblatt

◮ PhD Experimental Psychology, Cornell 1956; broad interests: mathematics ...
◮ Cornell Aeronautical Lab 1957-59
◮ Probabilistic model for information storage in the brain
◮ One layer neural network classifier

Rosenblatt, Frank (1957). "The Perceptron - a perceiving and recognizing automaton". Report 85-460-1. Cornell Aeronautical Laboratory.

Adaline - Bernard Widrow & Ted Hoff (Intel 4004)

◮ Adaline - ADAptive LInear Neuron - single layer neural network
  y = sign( Σ_{j=1}^m w_j u_j ),  Least Mean Squares (LMS)
◮ B. Widrow and M. E. Hoff, "Adaptive Switching Circuits," 1960 IRE WESCON Convention Record, 1960, pp. 96–104.
◮ Analog implementation with potentiometers
◮ Separating hyperplane
◮ Madaline - a multilayer version

BOXES 1960 - Donald Michie

◮ Cryptography, Bletchley Park
◮ Director, Department of Machine Intelligence and Perception, University of Edinburgh (previously the Experimental Programming Unit, 1965)
◮ Quantize the state in boxes
◮ Each box stores information about control actions taken and the performance
◮ Local demon and global demons
◮ Used for playing games like Tic-Tac-Toe and broom balancing

D. Michie and R. A. Chambers, "BOXES: An experiment in adaptive control," in Machine Intelligence 2, E. Dale and D. Michie, Eds. Edinburgh: Oliver and Boyd, (1968), pp. 137–152.
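The LMS rule that trains the Adaline weights is only a few lines: the weight vector is nudged along the input in proportion to the error between the desired label and the linear output (before the sign). A sketch learning a separating hyperplane; the data, the true boundary 2u1 − u2 + 0.5 = 0 and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# linearly separable data: label = sign(2*u1 - u2 + 0.5)
X = rng.uniform(-1, 1, size=(500, 2))
X = np.hstack([X, np.ones((500, 1))])          # append a bias input
d = np.sign(X @ np.array([2.0, -1.0, 0.5]))    # desired outputs, +/- 1

w = np.zeros(3)
for epoch in range(50):
    for x, target in zip(X, d):
        y = w @ x                               # linear output before the sign
        w += 0.05 * (target - y) * x            # LMS update: reduce squared error

errors = int(np.sum(np.sign(X @ w) != d))
print(w, errors)   # most training points end up on the correct side
```

Unlike the perceptron rule, LMS minimizes the squared error of the linear output, so the learned hyperplane approximates the least-squares fit rather than an exact separator; a few points near the boundary may still be misclassified.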

Neural Networks

Development of Learning

Development of Image Recognition

Very rapid development because of good test data and competing algorithms

Adaptive Control

1. Introduction
2. Self-oscillating Adaptive Control
3. Model Reference Adaptive Control
4. Estimation and Excitation
5. Self-Tuning Regulators
6. Learning and Dual Control
7. Applications
8. Related Fields
9. Summary

Summary

◮ A glimpse of an interesting and useful field of control
◮ Nonlinear systems - not trivial to analyse and design
◮ Several good algorithms: self-oscillating, MRAS and STR
  Why is the STR not more widely used?
  A natural candidate for regulation in noisy environments?
  Control of computer systems?
◮ A number of successful industrial applications
◮ Currently renewed interest because of connections to learning

KJÅ and B. Wittenmark. Adaptive Control. Second Edition. Dover 2008.
