
The Self Tuning Regulator

K. J. Åström
Department of Automatic Control, LTH
Lund University

February 23, 2021

1. Introduction
2. Minimum Variance Control
3. System Identification
4. The Self Tuning Regulator
5. Properties
6. Examples
7. Summary

Process Control – Steady State Regulation

◮ Reduce variations
◮ Move set points closer to constraints
◮ What are the benefits? Small improvements (1%) can have large economic consequences
◮ Some elements
    ◮ What is a suitable control algorithm?
    ◮ What is a suitable estimation algorithm?
    ◮ Kalman's self-optimizing controller
    ◮ The self-tuning regulator

Snapshots of Process Control. IEEE Control Systems Magazine 20:4 2006

The Billerud-IBM Project

◮ IBM and computer control
    IBM dominated the computer market totally in the late 1950s
    Saw a big market in the process industry (TRW-Texaco 1956-59)
    Started a research group in the math department of IBM Research; hired Kalman, Bertram and Koepcke 1958
    Bad experience with an installation in the US paper industry
    IBM Nordic Laboratory 1959; Kai Kinberg hired KJÅ January 1960
◮ Billerud
    Visionary manager Tryggve Bergek
    Had approached Datasaab earlier for computer control
◮ Project goals
    Billerud: Exploit computer control to improve quality and profit!
    IBM: Gain experience in computer control, recover prestige and find a suitable computer architecture!
◮ Schedule
    Start April 1963, computer installed December 1964
    System identification and on-line control March 1965
    Full operation September 1966
    40 man-years of effort in about 3 years

The Billerud Plant

The Drying Section

Steady State Regulation of Basis Weight and Moisture Content

Benefits of Improved Regulation - 1970 data

Production rate for one machine:

    Q [ton/min] = BW [g/m²] × W [m] × V [m/min]
                = 70 × 8 × 500 = 280 000 [g/min] = 280 [kg/min]

Production time per year: 500 000 min
One year's production: 1.4 × 10⁸ kg = 140 000 ton
Value of one year's production: 145 $/ton × 140 000 ton ≈ 20 M USD
A 1% increase of moisture content corresponds to 200 k USD per year
Cost of the Measurex automation system: 280 k USD

Many thanks to Nils Leffler (Measurex and ABB) and Olle Alsholm (Billerud)

Small improvements (1%) are very valuable

The Self-Tuning Regulator

1. Introduction
2. Minimum Variance Control
3. System Identification
4. Self Tuning Control
5. Properties
6. Examples
7. Summary

The Scene of 1960

◮ Servomechanism theory 1945
◮ IFAC 1956 (50 year jubilee in 2006)
◮ Widespread education and industrial use of control
◮ The First IFAC World Congress, Moscow 1960
◮ Exciting new ideas
    Dynamic Programming, Bellman 1957
    Maximum Principle, Pontryagin 1961
    Kalman Filtering, ASME 1960
◮ Exciting new developments
    The space race (Sputnik 1957)
    Computer control, Port Arthur Texas + TRW 1956-1959
◮ IBM and IBM Nordic Laboratory 1960: computerized process control

Stochastic Control Theory

Process model

    dx = Ax dt + Bu dt + dv
    dy = Cx dt + de

Kalman filtering, quadratic control, separation theorem.

Controller

    dx̂ = Ax̂ dt + Bu dt + K(dy − Cx̂ dt)
    u = L(x_m − x̂) + u_ff

A natural approach for regulation of industrial processes.

Model Structures

Much redundancy: any state transformation z = Tx plus a new noise model gives an equivalent description. Start by transforming to the innovations representation, where ε is a Wiener process:

    dx̂ = Ax̂ dt + Bu dt + K(dy − Cx̂ dt) = (A − KC)x̂ dt + Bu dt + K dε
    dy = Cx̂ dt + dε

Transform to observable canonical form:

    d\hat{x} = \begin{pmatrix} -a_1 & 1 & 0 & \cdots & 0 \\ -a_2 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -a_{n-1} & 0 & 0 & \cdots & 1 \\ -a_n & 0 & 0 & \cdots & 0 \end{pmatrix} \hat{x}\,dt + \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} u\,dt + \begin{pmatrix} k_1 \\ k_2 \\ \vdots \\ k_n \end{pmatrix} d\varepsilon

    dy = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \end{pmatrix} \hat{x} + d\varepsilon

Model Structures (continued)

For the observable canonical form above, notice the symmetries:

◮ y can be computed from e, dynamics A
◮ e can be computed from y, dynamics A − KC (observer dynamics)

Input-output representation

    Y = (b₁sⁿ⁻¹ + b₂sⁿ⁻² + ⋯ + bₙ)/(sⁿ + a₁sⁿ⁻¹ + ⋯ + aₙ) U + (1 + (k₁sⁿ⁻¹ + k₂sⁿ⁻² + ⋯ + kₙ)/(sⁿ + a₁sⁿ⁻¹ + ⋯ + aₙ)) E

◮ The Kalman filter gains kᵢ appear explicitly in the model: K(s) = C(s) − A(s)
◮ The aᵢ give the dynamics of the system; the estimator dynamics are the eigenvalues of A − KC

Corresponding sampled system

    A(q⁻¹)y(t) = B(q⁻¹)u(t) + C(q⁻¹)e(t)

The Sampled Model

The basic sampled model for a stochastic SISO system is

    A(q⁻¹)y(t) = B(q⁻¹)u(t) + C(q⁻¹)e(t)
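The sampled model is just a linear difference equation and can be simulated directly. A minimal sketch (the function name and the coefficient conventions A = 1 + a₁q⁻¹ + …, B = b₁q⁻¹ + …, C = 1 + c₁q⁻¹ + … are my own, not from the slides):

```python
import numpy as np

def simulate_armax(a, b, c, u, e):
    """Simulate A(q^-1) y = B(q^-1) u + C(q^-1) e with
    A = 1 + a[0] q^-1 + ..., B = b[0] q^-1 + ..., C = 1 + c[0] q^-1 + ..."""
    y = np.zeros(len(e))
    for t in range(len(e)):
        y[t] = e[t]
        y[t] += sum(c[j] * e[t - 1 - j] for j in range(len(c)) if t - 1 - j >= 0)
        y[t] += sum(b[j] * u[t - 1 - j] for j in range(len(b)) if t - 1 - j >= 0)
        y[t] -= sum(a[j] * y[t - 1 - j] for j in range(len(a)) if t - 1 - j >= 0)
    return y

# sanity check: A = 1 - 0.5 q^-1, B = 0, C = 1 gives an AR(1) impulse response
y = simulate_armax(a=[-0.5], b=[], c=[], u=np.zeros(4), e=np.array([1.0, 0, 0, 0]))
assert np.allclose(y, [1.0, 0.5, 0.25, 0.125])
```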

Minimum Variance Control - Example

Consider the first order system

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

Consider the situation at time t; we have

    y(t+1) = −a y(t) + b u(t) + e(t+1) + c e(t)

The control signal u(t) can be chosen, the terms −a y(t) and c e(t) are known at time t, and e(t+1) is independent of all data available at time t. The controller that minimizes E y² is thus given by

    b u(t) = a y(t) − c e(t)

and the control error is y(t+1) = e(t+1), i.e. the one step prediction error. Since y(t) = e(t) in the closed loop, the control law becomes

    u(t) = ((a − c)/b) y(t)

Minimum Variance (Moving Average) Control

Process model

    A y(t) = B u(t) + C e(t)

Factor B = B⁺B⁻ and solve (minimum degree solution in G)

    A F + B⁻ G = C

Then

    Cy = AFy + B⁻Gy = F(Bu + Ce) + B⁻Gy = CFe + B⁻(B⁺Fu + Gy)

Control law and output are given by

    B⁺F u(t) = −G y(t),    y(t) = F e(t)

where deg F ≥ pole excess of B/A.

True minimum variance control minimizes V = E (1/T) ∫₀ᵀ y²(t) dt
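The first order example is easy to check numerically: with the control law u(t) = ((a − c)/b) y(t) the closed-loop output reproduces the one step prediction error exactly. A sketch (parameter values borrowed from the simulation example later in the deck):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = -0.9, 3.0, -0.3            # plant parameters, as in the simulation slide
N = 500
e = rng.standard_normal(N + 1)       # white noise e(0), ..., e(N)

y = np.zeros(N + 1)
y[0] = e[0]                          # start on the minimum variance trajectory
for t in range(N):
    u = (a - c) / b * y[t]           # minimum variance control law
    y[t + 1] = -a * y[t] + b * u + e[t + 1] + c * e[t]

# the closed-loop output is exactly the one step prediction error
assert np.allclose(y[1:], e[1:])
```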

Properties of Minimum Variance Control

◮ Output is a moving average y = Fe, deg F ≤ deg A − deg B⁺
◮ Easy to assess (log the output, compute prediction errors) and validate
◮ Interpretation for B⁻ = 1 (all process zeros canceled): y is a moving average of degree npz = deg A − deg B. It is equal to the error in predicting the output npz steps ahead.
◮ Closed loop characteristic polynomial is

    z^(deg A − deg B) B⁺(z) C(z),   where deg B = deg B⁺ + deg B⁻

◮ The sampling period and the prediction horizon are important design variables!
◮ Sampled zeros depend on the sampling period. For a stable system all zeros are stable for sufficiently long sampling periods.

KJÅ, P Hagander, J Sternby. Zeros of sampled systems. Automatica 20 (1), 31-38, 1984

Minimum Variance Control

Process model

    yt + a1 yt−1 + ... = b1 ut−k + ... + et + c1 et−1 + ...
    A yt = B ut−k + C et

◮ Ordinary differential equation with time delay
◮ Disturbances are stationary stochastic processes with rational spectra
◮ The prediction horizon: true delay and one sampling period
◮ Control law Ru = −Sy
◮ Output becomes a moving average of white noise yt+k = F et, which is easy to validate!
◮ Robustness and tuning

Performance (B⁻ = 1) and Sampling Period

Plot the prediction error variance σ²ₚₑ as a function of the prediction horizon Tₚ.

[Figure: σ²ₚₑ versus Tₚ, with the horizon Tₚ = T_d + T_s marked.]

T_d is the time delay and T_s is the sampling period. Decreasing T_s reduces the variance and the response time.

A Robustness Result

A simple digital controller for systems with a monotone step response g(t), designed from the model y(k+1) = b u(k):

    u_k = u_{k−1} + k (y_sp − y_k),    k < 2/g(∞)

Stable if g(T_s) > g(∞)/2.

KJÅ: Automatica 16 (1980), pp 313–315.
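The stability condition can be illustrated in simulation. A sketch with an assumed first-order plant with dead time (T = 1, L = 0.2, unit static gain — values invented for the illustration): with Ts = 1 the sampled step response satisfies g(Ts) = 1 − e^(−0.8) ≈ 0.55 > g(∞)/2, and the incremental controller with gain just below 2/g(∞) settles at the set point.

```python
import numpy as np

# assumed plant: first-order lag (T = 1) with dead time L = 0.2, static gain 1
T, L, Ts = 1.0, 0.2, 1.0
a = np.exp(-Ts / T)                    # pole of the sampled plant
b1 = 1 - np.exp(-(Ts - L) / T)         # zero-order-hold coefficients for a
b2 = np.exp(-(Ts - L) / T) - a         # fractional delay L < Ts
g_inf, g_Ts = 1.0, b1                  # g(Ts) ~ 0.55 > g(inf)/2 = 0.5

K, r = 1.9, 1.0                        # gain just below 2/g(inf) = 2; set point
y = u_prev = 0.0
for _ in range(200):
    u = u_prev + K * (r - y)           # u_k = u_{k-1} + K (y_sp - y_k)
    y = a * y + b1 * u + b2 * u_prev   # plant update
    u_prev = u

assert g_Ts > g_inf / 2                # the sufficient condition holds ...
assert abs(y - r) < 1e-6               # ... and the loop settles at the set point
```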

Summary

◮ Regulation can be done very effectively by minimum variance control
◮ Easy to estimate possible performance a priori: measure fluctuations in normal operation, calculate the prediction error for different prediction horizons, estimate the process time delay.
◮ Easy to validate because the regulated output is a moving average of white noise!
◮ Robustness depends critically on the sampling period
◮ Sampling period and prediction horizon are the design variables!

The Self Tuning Regulator

1. Introduction
2. Minimum Variance Control
3. System Identification
4. The Self Tuning Regulator
5. Properties
6. Examples
7. Summary

Modeling from Data (Identification)

The likelihood function (Bayes rule):

    p(Y_N, θ) = p(y(N) | Y_{N−1}, θ) ⋯ ,    log p = −(1/2) Σ_{t=1}^{N} ε²(t)/σ² − (N/2) log 2πσ²

    θ = (a1, …, an, b1, …, bn, c1, …, cn, ε(1), …)

    A y(t) = B u(t) + C e(t),    C ε(t) = A y(t) − B u(t)

ε is the one step ahead prediction error.

Efficient computations:

    ∂J/∂ak = Σ_{t=1}^{N} ε(t) ∂ε(t)/∂ak,    C ∂ε(t)/∂ak = q^{−k} y(t)

◮ Good match between identification and control: the prediction error is minimized in both cases!

KJÅ and T. Bohlin, Numerical Identification of Linear Dynamic Systems from Normal Operating Records. In Hammond (ed.), Theory of Self-Adaptive Control Systems, Plenum Press, January 1966.

Practical Issues

◮ Sampling period
◮ To perturb or not to perturb
◮ Open or closed loop experiments
◮ Model validation
◮ 20 min for a two-pass compilation of the Fortran program!
◮ Control design
◮ Skills and experience

Back to Billerud - Minimum Variance Control

Before and After

KJÅ, Computer control of a paper machine—An application of linear stochastic control theory. IBM Journal of Research and Development 11 (4), 389-405, 1967

IBM Scientific Symposium Control Theory and Applications 1964

Great Inspiration for University Research

◮ Many unresolved problems in system identification and adaptive control
◮ Industry will be our laboratory
◮ Building up competence from scratch
◮ Good problems for MS and PhD students
    ◮ Algorithms
    ◮ Convergence and efficiency
    ◮ Stability
◮ Possibility to do real experiments
◮ Exchange of people
◮ Many new problems, e.g. production planning

Some Theses - System Identification and Adaptive Control

Self-Tuning Regulators, Björn Wittenmark 1973 PhD
Stochastic Convergence of Algorithms for Identification and Adaptive Control, Lennart Ljung 1974 PhD
Identification of Industrial Process Dynamics, Ivar Gustavsson 1974 PhD
Self-Tuning Regulators - Industrial Application and Multivariable Theory, Ulf Borisson 1975 PhD
Identification and Dead-Beat Control of a Heat Diffusion Process, Bo Leden 1975 PhD
Self-Tuning Control of the Dissolved Oxygen Concentration in an Activated Sludge System, Lars Rundqwist 1976 Lic
Adaptive Prediction and Recursive Estimation, Jan Holst 1977 PhD
Topics in Dual Control, Jan Sternby 1977 PhD
Stability of Model Reference Adaptive and Self-Tuning Regulators, Bo Egardt 1978 PhD
Interaction in Computer Aided Analysis and Design of Control Systems, Johan Wieslander 1979 PhD
Stabilisation of Uncertain Systems, Per Molander 1979 PhD
Identification and Adaptive Control Applied to Ship Steering, Claes Källström 1979 PhD
Adaptive Start-up Control, Matz Lenells 1982 PhD
Multivariable Adaptive Control, Rolf Johansson 1983 PhD
New Estimation Techniques for Adaptive Control, Tore Hägglund 1983 PhD
Adaptive Stabilization, Bengt Mårtensson 1986 PhD
Modelling and Control of Fermentation Processes, Jan Peter Axelsson 1989 PhD

The Self-Tuning Regulator

1. Introduction
2. Minimum Variance Control
3. The Self Tuning Regulator
4. Properties
5. Examples
6. Summary

Motivation

◮ Regulation problems are common in industry
◮ A reasonable system model is A y(t) = B u(t) + C e(t)
◮ Minimum variance control is a suitable strategy
◮ The model can be obtained using system identification but the procedure is tedious
◮ Can we replace identification and control with a single adaptive control algorithm?

Rudolf Emil Kalman 1930-2016

◮ Born in Budapest 1930
◮ BS MIT 1953
◮ MS MIT 1954
◮ PhD Columbia University NY 1957
◮ IBM Research Yorktown Heights 1957-58
    ◮ DuPont case study
◮ RIAS Baltimore 1958-1964
◮ Professor Stanford 1964-1971
◮ Professor University of Florida 1971-1992
◮ Professor ETH 1973-2016
◮ New view of control: state feedback, Kalman filter
◮ Adaptive control

Kalman's Self-Optimizing Regulator

R. E. Kalman, Design of a self optimizing control system. Trans. ASME 80, 468–478 (1958)

Inspired by work at IBM Research and DuPont.

Repeat the following two steps at each sampling instant:

Step 1: Estimate the parameters a1, a2, …, an, b1, b2, …, bn in the model (8)
Step 2: Use a control law that gives the shortest settling time for a step change in the reference signal

Remark: Many other methods can be used for parameter estimation and control design.

Remark on computations:

    In practical applications, however, a general-purpose digital computer is an expensive, bulky, extremely complex, and sometimes awkward piece of equipment. Moreover, the computational capabilities (speed, storage capacity, accuracy) of even smaller commercially available general-purpose digital computers are considerably in excess of what is demanded in performing the computations listed in the Appendix. For these reasons, a small special-purpose computer was constructed which could be externally digital and internally analog.

Columbia University had a computer of this type.

Unfortunately Kalman's regulator never worked!

A later planned attempt at IBM Research with hybrid computing (Dick Koepcke) was canceled because the group moved from Yorktown to San Jose.

Self-Tuning Regulators

[Block diagram: an outer loop (parameter estimation and controller design, driven by a specification) adjusts the controller parameters; the inner loop is the usual controller–process loop with reference, input and output.]

◮ Certainty equivalence - design as if the estimates were correct (Simon)
◮ Many control and estimation schemes

Parameter Estimation - Recursive Least Squares

    y(t+1) = −a1 y(t) − a2 y(t−1) + ⋯ + b1 u(t) + ⋯ + e(t+1) = φ(t)ᵀθ + e(t+1)
    φ(t) = [−y(t)  −y(t−1)  …  u(t)  u(t−1)  …]
    θ = [a1  a2  …  b1  b2  …  bm]

The parameter estimates are given by

    θ̂(t+1) = θ̂(t) + K(t)(y(t+1) − φ(t)ᵀθ̂(t))
    K(t) = P(t)φ(t)(λ + φ(t)ᵀP(t)φ(t))⁻¹
    P(t+1) = (P(t) − K(t)φ(t)ᵀP(t))/λ

The parameter λ controls how quickly old data is discounted; many alternatives: directional forgetting, square root filtering, etc.
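The recursion fits in a few lines. A sketch (variable names are mine; the example regression is invented):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct by the prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

# estimate theta in y = phi^T theta + noise from simulated data
rng = np.random.default_rng(1)
true_theta = np.array([0.5, -1.2])
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for _ in range(2000):
    phi = rng.standard_normal(2)
    y = phi @ true_theta + 0.1 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)

assert np.abs(theta - true_theta).max() < 0.05
```

With λ < 1 old data is discounted exponentially, which lets the estimator track slowly drifting parameters at the price of noisier estimates.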

Parameter Estimation

Recursive least squares is a good method for

    y(t) + a1 y(t−1) + a2 y(t−2) + ⋯ + an y(t−n)
        = b0 u(t−k) + b1 u(t−k−1) + ⋯ + bm u(t−k−m) + e(t)

where e is white noise. Our model

    y(t) + a1 y(t−1) + a2 y(t−2) + ⋯ + an y(t−n)
        = b0 u(t−k) + b1 u(t−k−1) + ⋯ + bm u(t−k−m)
        + e(t) + c1 e(t−1) + c2 e(t−2) + ⋯ + cn e(t−n)

has colored residuals and the estimate will then be biased.

◮ Many methods tested: extended least squares, ...
◮ Serendipity, an unplanned fortunate discovery, is common in the history of product invention and scientific discovery
◮ Proc 2nd IFAC Symposium on Identification and Process Parameter Estimation, Prague 1970: V. Peterka, Adaptive digital regulation of noisy systems; J. Wieslander and B. Wittenmark, An approach to adaptive control using real time identification.

The Algorithm

Regression model and control law:

    y(t+k) = s0 y(t) + s1 y(t−1) + ⋯ + sn y(t−n) + b0 u(t) + r1 u(t−1) + ⋯ + rm u(t−m) = φ(t)ᵀθ

    u(t) = −(1/b0)( s0 y(t) + s1 y(t−1) + ⋯ + sn y(t−n) + r1 u(t−1) + ⋯ + rm u(t−m) )

1. Measure y(t)
2. Update the estimate: θ̂(t) = θ̂(t−1) + K(t)(y(t) − ŷ(t))
3. Compute the control u(t) from the control law above
4. Set the output u(t)
5. Update φ(t), P(t) and K(t)
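The bias under colored residuals is easy to reproduce: fit an AR(1) model by least squares to data from an ARMA(1,1) process. A sketch with invented parameters; for y(t) = 0.5 y(t−1) + e(t) + 0.5 e(t−1) the least squares estimate converges to r(1)/r(0) = (1 + ac)(a + c)/(1 + c² + 2ac) ≈ 0.71, not to the true a = 0.5:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
e = rng.standard_normal(N)

# ARMA(1,1): y(t) = 0.5 y(t-1) + e(t) + 0.5 e(t-1) -- colored residuals for AR(1)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * y[t - 1] + e[t] + 0.5 * e[t - 1]

# least squares fit of the AR(1) model y(t) = a y(t-1) + residual
a_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

assert abs(a_hat - 0.714) < 0.03       # biased towards r(1)/r(0) ...
assert a_hat - 0.5 > 0.15              # ... far from the true value 0.5
```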

A Simulation - Surprised!!

Process:

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

Parameters: a = −0.9, b = 3, c = −0.3, (a − c)/b = −0.2.
Direct self-tuner based on the model y(t+1) = b0 u(t) + s0 y(t), assuming white noise.

[Figure: the estimate ŝ0/b0 versus time, and the accumulated loss for self-tuning and minimum-variance control.]

Explain the Result

The estimate and the control law are

    θ̂(t) = Σ_{k=0}^{t−1} y(k)(y(k+1) − u(k)) / Σ_{k=0}^{t−1} y²(k),    u(t) = −θ̂(t) y(t)

Properties:

    (1/t) Σ_{k=0}^{t−1} y(k+1) y(k) = θ̂(t) (1/t) Σ_{k=0}^{t−1} y²(k) + (1/t) Σ_{k=0}^{t−1} u(k) y(k)
                                    = (1/t) Σ_{k=0}^{t−1} (θ̂(t) − θ̂(k)) y²(k)

Assume that the estimate converges. Then

    r̂y(1) = lim_{t→∞} (1/t) Σ_{k=0}^{t−1} y(k+1) y(k) = 0
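The whole experiment takes a dozen lines. A sketch of the direct self-tuner above (starting the denominator sum at a positive value is my own addition, to avoid division by zero at start-up):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, c = -0.9, 3.0, -0.3              # process parameters from the slide
N = 100_000
e = rng.standard_normal(N + 1)

y = np.zeros(N + 1)
y[0] = e[0]
theta, num, den = 0.0, 0.0, 10.0       # den > 0 regularizes the first steps
for t in range(N):
    u = -theta * y[t]                  # control law u(t) = -theta_hat(t) y(t)
    y[t + 1] = -a * y[t] + b * u + e[t + 1] + c * e[t]
    num += y[t] * (y[t + 1] - u)       # running least squares sums
    den += y[t] ** 2
    theta = num / den                  # theta_hat(t)

# converges to the minimum variance gain -(a - c)/b = 0.2 although b0 = 1 != b
assert abs(theta - 0.2) < 0.1
```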

The Self-Tuning Regulator

Process model, estimation model and control law:

    y(t) + a1 y(t−1) + ⋯ + an y(t−n) = b0 u(t−k) + ⋯ + bm u(t−k−m) + e(t) + c1 e(t−1) + ⋯ + cn e(t−n)
    y(t+k) = s0 y(t) + s1 y(t−1) + ⋯ + sm y(t−m) + r0 (u(t) + r1 u(t−1) + ⋯ + rℓ u(t−ℓ))
    u(t) + r̂1 u(t−1) + ⋯ + r̂ℓ u(t−ℓ) = −(ŝ0 y(t) + ŝ1 y(t−1) + ⋯ + ŝm y(t−m))/r0

If the estimates converge and 0.5 < r0/b0 < ∞, then

    ry(τ) = 0,   τ = k, k+1, …, k+m+1
    ryu(τ) = 0,  τ = k, k+1, …, k+ℓ

If the degrees are sufficiently large, ry(τ) = 0 for all τ ≥ k.

◮ Converges to minimum variance control even if ci ≠ 0
◮ Automates identification and minimum variance control in about 35 lines of code
◮ The controller that drives covariances to zero

KJÅ and B. Wittenmark. On Self-Tuning Regulators. Automatica 9 (1973), 185-199
Björn Wittenmark, Self-Tuning Regulators. PhD Thesis, April 1973

STR Integrator with Time Delay

    A(q) = q(q − 1)
    B(q) = (h − τ)q + τ = (h − τ)(q + τ/(h − τ))
    C(q) = q(q + c)

Minimum-phase if τ < h/2. Controller with d = 1; τ changed from 0.4 to 0.6 at time 100.

Integrator with Time Delay

Minimum-phase if τ < h/2. Controller with d = 1; τ changed from 0.4 to 0.6 at time 100.

[Figure (a): output y and input u versus time for the controller with d = 1.]

Controller with d = 2

[Figure (b): output y and input u versus time for the controller with d = 2.]

Test at Billerud 1973

U. Borisson and B. Wittenmark. An Industrial Application of a Self-Tuning Regulator. 4th IFAC/IFIP Symposium on Digital Computer Applications to Process Control, 1974.
U. Borisson. Self-Tuning Regulators - Industrial Application and Multivariable Theory. PhD thesis, LTH 1975.

Convergence Proof

Process model A y = B u + C e:

    y(t) + a1 y(t−1) + ⋯ + an y(t−n) = b0 u(t−k) + ⋯ + bm u(t−k−m) + e(t) + c1 e(t−1) + ⋯ + cn e(t−n)

Estimation model:

    y(t+k) = s0 y(t) + s1 y(t−1) + ⋯ + sm y(t−m) + r0 (u(t) + r1 u(t−1) + ⋯ + rℓ u(t−ℓ))

Theorem: Assume that
◮ the time delay k of the sampled system is known,
◮ upper bounds of the degrees of A, B and C are known,
◮ the polynomial B has all its zeros inside the unit disc,
◮ the sign of b0 is known.
Then the sequences ut and yt are bounded and the parameters converge to those of the minimum variance controller.

G. C. Goodwin, P. J. Ramadge, P. E. Caines, Discrete-time multivariable adaptive control. IEEE Trans AC-25 (1980), 449–456

Convergence Proof (continued)

Markov processes and differential equations:

    dx = f(x) dt + g(x) dw,    ∂p/∂t = −∂(fp)/∂x + (1/2) ∂²(g²p)/∂x²

    θ(t+1) = θ(t) + γ(t) φ e,    dθ/dτ = f(θ) = E φ e

A method for proving convergence of recursive algorithms. Global stability of the STR (A y = B u + C e) if G(z) = 1/C(z) − 0.5 is SPR.

L. Ljung, Analysis of Recursive Stochastic Algorithms. IEEE Trans AC-22 (1977), 551–575.

Converges locally if ℜ C(zk) > 0 for all zk such that B(zk) = 0.

Jan Holst, Local Convergence of Some Recursive Stochastic Algorithms. 5th IFAC Symposium on Identification and System Parameter Estimation, 1979

General convergence conditions:

Lei Guo and Han-Fu Chen, The Åström-Wittenmark Self-tuning Regulator Revisited and ELS-Based Adaptive Trackers. IEEE Trans AC-36:7, 802–812.

References

J. Wieslander and B. Wittenmark, An approach to adaptive control using real time identification. Proc 2nd IFAC Symposium on Identification and Process Parameter Estimation, Prague 1970.
V. Peterka, Adaptive digital regulation of noisy systems. Proc 2nd IFAC Symposium on Identification and Process Parameter Estimation, Prague 1970.
KJÅ and B. Wittenmark, On Self Tuning Regulators. 5th IFAC World Congress, Paris 1972.
KJÅ and B. Wittenmark, On Self-Tuning Regulators. Automatica 9 (1973), 185-199.
Lennart Ljung, Stochastic Convergence of Algorithms for Identification and Adaptive Control. PhD Thesis, Department of Automatic Control, Lund University, Sweden, April 1974.
Jan Holst, Local Convergence of Some Recursive Stochastic Algorithms. 5th IFAC Symposium on Identification and System Parameter Estimation, 1979.
G. C. Goodwin, P. J. Ramadge, P. E. Caines, Discrete-time multivariable adaptive control. IEEE Trans AC-25 (1980), 449–456.
Lei Guo and Han-Fu Chen, The Åström-Wittenmark Self-tuning Regulator Revisited and ELS-Based Adaptive Trackers. IEEE Trans AC-36:7, 802–812.

The Self-Tuning Regulator

1. Introduction
2. Minimum Variance Control
3. System Identification
4. The Self Tuning Regulator
5. Properties
6. Examples
7. Summary

Control of Orecrusher 1973

Forget physics! - Hope an STR can work!

Power increased from 170 kW to 200 kW

R. Syding, Undersökning av Honeywells adaptiva reglersystem (Investigation of Honeywell's adaptive control system). MS Thesis, LTH 1975
U. Borisson and R. Syding, Self-Tuning Control of an Ore Crusher. Automatica 12:1 (1976), 1–7

Control over Long Distance 1973

Plant in Kiruna, computer in Lund. Distance Lund-Kiruna 1400 km, home-made modem (Leif Andersson), supervision over phone, sampling period 20 s.

Results

Significant improvement of production: 15%!

Steermaster (Kockums)

◮ Ship dynamics
◮ SSPA Kockums
◮ Full scale tests on ships in operation

Ship Steering - 3% less fuel consumption

[Figure: steering records for the STR and a conventional autopilot.]

C. Källström, Identification and Adaptive Control Applied to Ship Steering. PhD Thesis, Department of Automatic Control, Lund University, Sweden, April 1979.
C. Källström, KJÅ, N. E. Thorell, J. Eriksson, L. Sten, Adaptive Autopilots for Tankers. Automatica 15 (1979), 241-254

Gunnar Bengtsson

◮ PhD Automatic Control Lund 1974
◮ Visiting professor, University of Toronto
◮ Visiting professor, University of Florida (Kalman)
◮ ASEA Innovation 1977
◮ First Control Systems AB 1985
◮ IEEE CSS Technology Award 1991

ABB

◮ ASEA Innovation 1981
◮ DCS system with STR
◮ Grew quickly to 30 people and 50 MSEK in 1984
◮ Strong group
◮ Wide range of applications
◮ Adaptive feedforward
◮ Incorporated in ABB Master 1984 and later in ABB 800xA
◮ Difficult to transfer to the standard sales and commissioning workforce (You must know control!)

Arthur D. Little, Innovation at ASEA. 1985

First Control

◮ Gunnar Bengtsson, founder of ABB Novatune
◮ Rolling mills
◮ Continuous casting
◮ Semiconductor manufacturing
◮ Microcontroller XC05IX, Raspberry Pi, Linux
◮ Robust adaptive control
◮ Graphical programming
◮ Modelica simulation

The Self-Tuning Regulator

1. Introduction
2. Minimum Variance Control
3. System Identification
4. The Self Tuning Regulator
5. Properties
6. Examples
7. Summary

Summary

◮ Process control - steady state regulation
◮ A very simple natural algorithm for regulation problems
◮ Kalman could have done it in 1958
◮ Usefulness demonstrated in many industrial applications
◮ I don't understand why it is not more widely used!
◮ How about trying it on some computer related tasks?
