$$y(n) = \sum_{l=0}^{I-1} h(l)\,u(n-l) \qquad (2.2)$$

where the input signal u(n) is filtered by the coefficients h(l) of the filter. The filter coefficients of the FIR filter can be seen as the response of the system to a unit impulse input.
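As a concrete illustration, equation 2.2 can be evaluated directly; the coefficients and input below are illustrative values, not taken from this report:

```python
import numpy as np

# Illustrative FIR coefficients h(l) and input signal u(n)
h = np.array([0.5, 0.3, 0.1])
u = np.array([1.0, 0.0, 0.0, 2.0, 1.0])

# Direct evaluation of y(n) = sum_l h(l) * u(n - l), with u(n) = 0 for n < 0
def fir_filter(h, u):
    y = np.zeros_like(u)
    for n in range(len(u)):
        for l in range(len(h)):
            if n - l >= 0:
                y[n] += h[l] * u[n - l]
    return y

y = fir_filter(h, u)
# The same result follows from truncated convolution
assert np.allclose(y, np.convolve(h, u)[:len(u)])
```

Feeding a unit impulse through `fir_filter` returns the coefficients h themselves, which is the impulse-response interpretation given above.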
Equation 2.2 can alternatively be written using the unit delay operator $q^{-1}$, which is defined as follows:

$$q^{-l}\,u(n) = u(n-l) \qquad (2.3)$$

which results in:

$$y(n) = H(q^{-1})\,u(n) \qquad (2.4)$$
An advantage of an FIR filter is that it is inherently stable, because all its poles lie at the origin. However, if a lightly damped system is described by an FIR filter, the filter may require many coefficients to describe the system sufficiently. Therefore, another way of describing a system is by means of a state space model (SSM):
$$H: \begin{cases} x(n+1) = A\,x(n) + B\,u(n) \\ y(n) = C\,x(n) + D\,u(n) \end{cases} \qquad (2.5)$$
2.3. Optimal feedforward control 7
In this way the state of the system at time step n is described by the state vector x(n). The state matrices A, B, C and D indicate the contribution of the states and input signal to the future states and output signal. A state space description is a numerically robust and computationally efficient way of describing the model of a system [8].
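As a sketch of equation 2.5, the state space recursion can be simulated directly; the matrices below describe an arbitrary illustrative second order system, not one from this report. The impulse response of the recursion yields the coefficients D, CB, CAB, ... that an FIR description of the same system would need:

```python
import numpy as np

# Illustrative state matrices for a lightly damped 2nd-order system
A = np.array([[0.0, 1.0], [-0.5, 1.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate_ssm(A, B, C, D, u):
    # x(n+1) = A x(n) + B u(n),  y(n) = C x(n) + D u(n)
    x = np.zeros((A.shape[0], 1))
    y = []
    for un in u:
        y.append((C @ x + D * un).item())
        x = A @ x + B * un
    return np.array(y)

# Impulse response = Markov parameters D, CB, CAB, CA^2B, ...
impulse = np.zeros(6)
impulse[0] = 1.0
g = simulate_ssm(A, B, C, D, impulse)
```

For a lightly damped A the sequence g decays slowly, which is exactly why an equivalent FIR description would need many coefficients.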
Throughout this report, $H(q^{-1})$ denotes a time domain model described as an FIR model (FIRM). A state space description is denoted as H, without the unit delay operator. In the transform domain this distinction is not made, and the model description H(z) should follow logically from the text. Furthermore, time domain signals are denoted italic, whereas transform domain signals are denoted regular.
Having identified the different signals and systems of block diagram 2.1, the objective is now to find the controller that maximally attenuates the error signal. Ideally, the optimal controller which entirely cancels the error signal would be of the form:

$$W = -S^{-1}P \qquad (2.6)$$
However, a physical realization of W can only be given if the controller is stable and causal. When equation 2.6 is used, the resulting controller is likely to be unstable due to the non-minimum phase¹ zeros of S. In that case the controller W will be unstable, provided no zeros of P cancel out any poles of $S^{-1}$ outside the unit circle. Therefore another way of determining the controller is needed. A widely used approach is to base the controller W on the minimisation of a predefined criterion J. A suitable criterion appears to be the expected value of the squared error signal. This criterion can be seen as a measure for the energy contained in the residual error signal [2].
$$J = E[e^T(n)\,e(n)] \qquad (2.7)$$
Minimisation of this criterion as a function of all possible causal, stable controllers leads to the optimal controller W:

$$W_{opt} = \arg\min_W J(W) \qquad (2.8)$$
In the following sections, an expression for the optimal controller will be derived in the time domain as well as in the transform domain, using criterion 2.8.
2.3 Optimal feedforward control
2.3.1 Introduction
A feedforward arrangement suggests that signals upstream in the system are fed forward to the controller to better predict the outcome of the disturbance signal. In section 2.2 a general way of finding the optimal controller was presented. In this section an expression for the fixed gain controller in a feedforward arrangement will be derived. First this will be done in the time domain; subsequently the transform domain will be treated. The purpose and effect of regularisation will be discussed in the appropriate subsections.
¹A system is said to be minimum phase if all its poles and zeros lie within the unit circle.
2.3.2 Time domain controller
A general schematic overview of the feedforward optimal controller can be seen in figure 2.2. In this figure $x(n) \in \mathbb{R}^K$, $u(n) \in \mathbb{R}^M$ and $d(n), y(n), e(n) \in \mathbb{R}^L$ represent the reference, steering, disturbance, anti-disturbance and error signals respectively. The primary path P consists of K inputs and L outputs, the secondary path S consists of M inputs and L outputs, and the controller $W(q^{-1})$ consists of K inputs and M outputs, where $M \geq K$ and $L \geq M$.
To derive the time domain controller according to the defined criterion 2.7, we will assume that the controller consists of FIR filters, whose coefficients are stacked in a single vector w operating on the regression vector $\bar{x}(n)$ of delayed reference samples.

Figure 2.2: Block diagram of the optimal feedforward controller.

$$\bar{X}(n) = \begin{bmatrix} \bar{x}(n) & 0 & \cdots & 0 \\ 0 & \bar{x}(n) & & \\ \vdots & & \ddots & \\ 0 & & & \bar{x}(n) \end{bmatrix}^T \in \mathbb{R}^{M \times MKI} \qquad (2.16)$$
The vector of delayed input signals $\bar{x}(n)$ is called the regression vector. The anti-disturbance signal y(n) can be calculated as follows:

$$y(n) = S\,W(q^{-1})\,x(n) \qquad (2.17)$$
$$= S\left[\bar{X}(n)\,w\right] \qquad (2.18)$$
Assuming time invariant models, the multiplication order of S and $W(q^{-1})$ may be interchanged:

$$y(n) = \left[S \otimes \bar{x}^T(n)\right] w \qquad (2.19)$$

where $\otimes$ denotes the Kronecker tensor product [5], and

$$R(n) = S \otimes \bar{x}^T(n) \in \mathbb{R}^{L \times MKI} \qquad (2.20)$$

is the matrix of past reference signals x(n) filtered by the secondary path S. It consists of LMK sequences of filtered reference signals $r_{l,mk}$:

$$r_{l,mk}(n) = S_{lm}\,\bar{x}_k(n) \qquad (2.21)$$
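For a single reference and a single secondary path, the construction above reduces to stacking delayed samples of the filtered reference signal; the sketch below is a deliberately scalar illustration (all signals and the path are assumed values, not from this report):

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.array([1.0, 0.4])          # assumed secondary path impulse response
I = 3                              # assumed controller filter length
x = rng.standard_normal(64)        # reference signal

# Filtered reference r(n) = S x(n)
r = np.convolve(s, x)[:len(x)]

def regressor(r, n, I):
    # Row vector [r(n), r(n-1), ..., r(n-I+1)], with zeros before n = 0
    return np.array([r[n - i] if n - i >= 0 else 0.0 for i in range(I)])

# R stacks one regressor row per time step, so that e(n) = R(n) w + d(n)
R = np.vstack([regressor(r, n, I) for n in range(len(x))])
```

With a MIMO secondary path the same rows would be tiled per sensor/actuator pair, which is what the Kronecker product in equation 2.20 expresses compactly.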
Using equation 2.19, the error signal can now be expressed in terms of the filtered reference signal:

$$e(n) = R(n)\,w + d(n) \qquad (2.22)$$
where R(n)w denotes a matrix-vector product. Subsequently, the criterion J can be expressed in terms of the coefficients of the optimal controller $W_{opt}$:

$$\begin{aligned} J = E[e^T(n)e(n)] &= E\left[\left(w^T R^T(n) + d^T(n)\right)\left(R(n)\,w + d(n)\right)\right] \\ &= w^T E\left[R^T(n)R(n)\right] w + 2\,w^T E\left[R^T(n)\,d(n)\right] + E\left[d^T(n)\,d(n)\right] \end{aligned} \qquad (2.23)$$
This is a quadratic expression in each of the FIR filter coefficients of the controller $w^{(i)}_{m,k}$, and it is minimized by setting the derivative of the criterion J with respect to each filter coefficient to zero:

$$\frac{\partial J}{\partial w^{(i)}_{m,k}} = 0 \qquad (2.24)$$
which leads to the following expression for the optimal controller in the time domain:

$$W_{opt} = -\left(E\left[R^T(n)R(n)\right]\right)^{-1} E\left[R^T(n)\,d(n)\right] \qquad (2.25)$$
The matrix $E\left[R^T(n)R(n)\right]$ is the autocorrelation matrix of the filtered reference signal, and $E\left[R^T(n)d(n)\right]$ is the cross-correlation between the filtered reference signal and the disturbance signal:

$$R_{rr} = E\left[R^T(n)R(n)\right] \qquad (2.27)$$
$$R_{rd} = E\left[R^T(n)\,d(n)\right] \qquad (2.28)$$
Regularisation

Criterion 2.7 is optimal in the sense that it minimises the energy of the error signal. It does not take into account the energy of the steering signals. In practical situations this steering signal can be limited for various reasons: the actuator output can be restricted to a maximum output level, and, to comply with the linearity assumption, the displacements caused by the actuators may have to be limited. So in general it is often desirable to modify the cost function by adding an effort weighting term to the error weighting term. Such a cost function can be defined as follows:
$$J = E\left[e^T(n)\,e(n) + \beta\,u^T(n)\,u(n)\right] \qquad (2.29)$$
However, using this cost function to determine the filter coefficients in an adaptive way turns out to be computationally rather inefficient, as will be mentioned in the next chapter. Using the fixed gain solution only for analysis purposes, the following cost function will be introduced:

$$J = E\left[e^T(n)\,e(n) + \beta\,w^T w\right] \qquad (2.30)$$
This cost function additionally weighs the squared filter coefficients, multiplied by a weighting factor β. The factor β determines how strongly the coefficients are weighed compared to the error signal. If the reference signal is a white noise sequence, cost functions 2.29 and 2.30 can be considered the same. Combining equations 2.30 and 2.23 leads to the following expression for the optimal filter coefficients:

$$W_{opt} = -\left\{R_{rr} + \beta I\right\}^{-1} R_{rd} \qquad (2.31)$$
The new optimal solution can simply be computed by adding a term β to the main diagonal of the autocorrelation matrix. Another advantage of this regularized solution is that adding a small term to the main diagonal increases the eigenvalues, and therefore makes a potentially poorly conditioned matrix easier to invert. The performance of the regularized solution will be less optimal compared to the solution without regularization. It turns out, however, that introducing a small value β has only a slight impact on the performance of the controller, while a significant reduction in effort energy can be obtained. Furthermore, regularization of the filter coefficients also has a positive influence on the stability and convergence properties of the adaptive solution, as will be discussed in the next chapter.
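A minimal numerical sketch of equations 2.25 and 2.31, with sample averages replacing the expectations; the paths, signals and the value of β below are illustrative assumptions, not taken from this report:

```python
import numpy as np

rng = np.random.default_rng(1)
N, I = 4000, 8
x = rng.standard_normal(N)                     # white reference signal
p = np.array([0.0, 0.8, 0.3])                  # assumed primary path
s = np.array([1.0, 0.5])                       # assumed secondary path

d = np.convolve(p, x)[:N]                      # disturbance d(n) = P x(n)
r = np.convolve(s, x)[:N]                      # filtered reference r(n) = S x(n)

# Stack delayed filtered-reference samples into R, then estimate the
# correlation matrices by sample averaging
R = np.vstack([[r[n - i] if n - i >= 0 else 0.0 for i in range(I)]
               for n in range(N)])
Rrr = R.T @ R / N                              # estimate of E[R^T R]
Rrd = R.T @ d / N                              # estimate of E[R^T d]

beta = 1e-2
w_opt = -np.linalg.solve(Rrr, Rrd)                       # equation 2.25
w_reg = -np.linalg.solve(Rrr + beta * np.eye(I), Rrd)    # equation 2.31
```

Regularisation shrinks the coefficient vector (less effort) at the cost of a slightly larger residual error, and it improves the conditioning of the matrix to be inverted.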
2.3.3 Frequency domain controller

In the previous subsections the fixed gain controller was derived in the time domain. In the following subsections an expression for the fixed gain controller in the frequency domain will be given. The minimization of the cost function in the time domain can be viewed as the minimization of the H2 norm of the system in the frequency domain. To illustrate this, the cost function is first represented in the frequency domain by use of Parseval's theorem:
$$J = E\left[e^T(n)\,e(n)\right] = \mathrm{tr}\,E\left[e(n)\,e^T(n)\right] \qquad (2.32)$$

$$= \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{tr}\,S_{ee}(e^{j\omega T})\, d\omega T \qquad (2.33)$$

where in the second equation T denotes the sample time and $S_{ee}(e^{j\omega T})$ denotes the power spectral density of the error signal:
$$\begin{aligned}
S_{ee}(e^{j\omega T}) &= E\left[e(e^{j\omega T})\,e^T(e^{j\omega T})\right] \\
&= E\left[\left(d(e^{j\omega T}) + y(e^{j\omega T})\right)\left(d(e^{j\omega T}) + y(e^{j\omega T})\right)^T\right] \\
&= E\left[\left(Px + SWx\right)\left(Px + SWx\right)^T\right] \\
&= \left(P + SW\right) E\left[x\,x^T\right] \left(P^T + W^T S^T\right)
\end{aligned} \qquad (2.34)$$

where in the last two lines the argument $e^{j\omega T}$ has been omitted for brevity.
If the reference signal is a white noise sequence with unit variance, then the power spectral density of the reference signal is equal to unity:

$$E\left[x(e^{j\omega T})\,x^T(e^{j\omega T})\right] = I \qquad (2.35)$$
By defining the H2 norm of the system H(z) as:

$$\left\|H(z)\right\|_2 = \sqrt{\frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{tr}\left[H(e^{j\omega T})\,H^T(e^{j\omega T})\right] d\omega T} \qquad (2.36)$$
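By Parseval's theorem this frequency domain definition agrees, for an FIR system, with the 2-norm of the impulse response; the sketch below checks this numerically for an illustrative scalar system (the coefficients h are an assumption):

```python
import numpy as np

h = np.array([1.0, -0.6, 0.25])   # assumed FIR impulse response of H(z)

# Time domain: ||H||_2 = sqrt(sum_n h(n)^2)
h2_time = np.sqrt(np.sum(h ** 2))

# Frequency domain: ||H||_2 = sqrt((1/2pi) * int |H(e^{jw})|^2 dw),
# approximated on a dense uniform frequency grid
w = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(len(h)))) for wk in w])
h2_freq = np.sqrt(np.mean(np.abs(H) ** 2))

assert np.isclose(h2_time, h2_freq)
```

For an FIR system the integrand is a trigonometric polynomial, so the uniform grid average reproduces the integral essentially exactly.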
the cost function 2.7 can alternatively be written as the square of the H2 norm of the system P(z) + S(z)W(z):

$$E\left[e^T(n)\,e(n)\right] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{tr}\left[\left(P(e^{j\omega T}) + S(e^{j\omega T})W(e^{j\omega T})\right)\left(P^T(e^{j\omega T}) + W^T(e^{j\omega T})S^T(e^{j\omega T})\right)\right] d\omega T \qquad (2.37)$$

$$= \left\|P(z) + S(z)W(z)\right\|_2^2 \qquad (2.38)$$
Because solving the optimization problem according to 2.7 is equivalent to minimizing the squared H2 norm of the system, the minimization problem can now be stated as follows:

$$W(z) = \arg\min_W \left\|P(z) + S(z)W(z)\right\|_2^2 \qquad (2.39)$$
The derivation of the solution, which is known as the model based causal Wiener solution, can be found in [3]; the solution is given directly:

$$W(z) = -S_o^{-1}(z)\left\{S_i^T(z^{-1})\,P(z)\right\}_+ \qquad (2.40)$$
The solution is obtained by performing an inner-outer factorization on the secondary path. The inner-outer factorization is defined in appendix A.1. Because the outer factor has the property that it is minimum phase, it has a stable inverse. The anticausal² transposed system, also known as the adjoint system, of the causal inner factor is denoted by $S_i^T(z^{-1})$. The causality operator is denoted by $\{\cdot\}_+$, which is defined as taking the causal part of the total system between the brackets. The advantage of this solution, in contrast to the solution described by correlation matrices, is that a purely model based solution is obtained. The latter has the disadvantage that an autocorrelation matrix, with potentially huge dimensions, has to be inverted. Another disadvantage is that an error is made by using a finite data length to calculate the correlation matrix. However, in order to obtain the model based solution, the models of the primary and secondary path must be available.
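For a scalar FIR secondary path, the idea behind the inner-outer factorisation can be sketched by reflecting the zeros outside the unit circle; this is a simplified scalar illustration of the matrix factorisation of appendix A.1, and the coefficients of S are an illustrative assumption:

```python
import numpy as np

S = np.array([1.0, -2.5, 1.0])   # assumed S(z) = 1 - 2.5 z^{-1} + z^{-2}
zeros = np.roots(S)               # zeros at 2.0 (outside) and 0.5 (inside)

# Outer factor: reflect outside zeros into the unit circle, scale the gain
# so the magnitude response on the unit circle is preserved
outer_zeros = [z if abs(z) <= 1 else 1 / np.conj(z) for z in zeros]
gain = S[0] * np.prod([abs(z) for z in zeros if abs(z) > 1])
So = gain * np.poly(outer_zeros)

# On the unit circle |S_o| = |S|, so S_i = S / S_o is all-pass (inner)
w = np.linspace(0, np.pi, 512)
mag = lambda c: np.abs(np.polyval(c, np.exp(1j * w)))
assert np.allclose(mag(S), mag(So))
```

The outer factor So is minimum phase and therefore stably invertible, while all the "delay-like" non-minimum phase behaviour of S is collected in the all-pass inner factor.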
Regularisation

To regularize the solution, the cost function can be extended as in equation 2.30:

$$J = E\left[e^T(n)\,e(n) + \beta\,w^T w\right] \qquad (2.41)$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathrm{tr}\left[\left(P + SW\right)\left(P^T + W^T S^T\right) + \left(\sqrt{\beta}\,I_M\,W\right)\left(\sqrt{\beta}\,I_M\,W\right)^T\right] d\omega T \qquad (2.42)$$

where the argument $e^{j\omega T}$ has been omitted for brevity.
This can be written, using the H2 norm, as:

$$W(z) = \arg\min_W \left\|P_{aug}(z) + S_{aug}(z)W(z)\right\|_2^2 \qquad (2.43)$$
with

$$P_{aug}(z) = \begin{bmatrix} P(z) \\ 0_{M\times K} \end{bmatrix}, \qquad S_{aug}(z) = \begin{bmatrix} S(z) \\ \sqrt{\beta}\,I_{M\times M} \end{bmatrix} \qquad (2.44)$$

the augmented $(L+M)\times K$ primary and the augmented $(L+M)\times M$ secondary path. The structure of the optimization problem 2.43 is now the same as that of the general optimization problem in equation 2.39. Therefore the regularised transform domain fixed gain solution can be written directly, using the general structure of the solution of the general optimization problem:
$$W(z) = -S_{aug,o}^{-1}(z)\left\{S_{aug,i}^T(z^{-1})\,P_{aug}(z)\right\}_+ \qquad (2.45)$$
²A system H(z) is said to be anticausal, denoted $H(z^{-1})$, if its output depends on future input values.
The above equation can be simplified using the fact that the inner factor of the augmented secondary path can be split into two parts:

$$S_{aug,i}(z) = \begin{bmatrix} S_{aug,i,1}(z) \\ S_{aug,i,2}(z) \end{bmatrix} \qquad (2.46)$$
with $S_{aug,i,1}(z)$ an $L \times M$ system and $S_{aug,i,2}(z)$ an $M \times M$ system. The product inside the causality operator can now be rewritten as:

$$\begin{bmatrix} S_{aug,i,1}^T(z^{-1}) & S_{aug,i,2}^T(z^{-1}) \end{bmatrix} \begin{bmatrix} P(z) \\ 0_{M\times K} \end{bmatrix} = S_{aug,i,1}^T(z^{-1})\,P(z) \qquad (2.47)$$
This shows that the primary path does not have to be augmented, which leads to a simplified and computationally more efficient way of determining the regularized controller:

$$W(z) = -S_{aug,o}^{-1}(z)\left\{S_{aug,i,1}^T(z^{-1})\,P(z)\right\}_+ \qquad (2.48)$$
2.4 Optimal feedback control

2.4.1 Introduction

In the previous section the fixed gain feedforward controller was derived. It was assumed that the reference signal is known a priori. However, if this knowledge is not available, the controller can be arranged in a feedback configuration. By using the principle of internal model control (IMC), an estimate of the disturbance signal can be obtained, which drives the controller like a newly obtained reference signal. The principle of IMC will be discussed in the next section. Following the same structure as in the previous section, expressions for the fixed gain controller in the time domain and frequency domain will subsequently be derived. Because a feedback structure does not have the property of being inherently stable, as was the case for the feedforward structure, a separate section will cover the stability properties of the feedback solution. In that section the regularized solutions will also be treated.
2.4.2 Internal model control

A block diagram of a feedback configuration using IMC is shown in figure 2.3. In this configuration the disturbance signal d(n) is estimated by subtracting an estimate $\hat{y}(n)$ of the output signal from the error signal. This estimated output signal is obtained by filtering the steering signal u(n) by the internal model $\hat{S}$, which is an estimate of the real secondary path. In this way an estimate of the disturbance signal is achieved:

$$\hat{y}(n) = \hat{S}\,u(n) \qquad (2.49)$$
Figure 2.3: Block diagram of the feedback configuration in an IMC arrangement

$$\hat{d}(n) = e(n) - \hat{y}(n) \qquad (2.50)$$

As shown in figure 2.3, the feedback arrangement can be represented in the simplified form shown in figure 2.4, where the transfer function from the error signal to the steering signal is defined as:

$$H(z) = \frac{W(z)}{1 + \hat{S}(z)\,W(z)} \qquad (2.51)$$
Using expression 2.51, the resulting sensitivity function G(z) of this arrangement is given by:

Figure 2.4: Simplified block diagram of the feedback configuration in an IMC arrangement

$$G(z) = \frac{e(z)}{d(z)} = \frac{1 + \hat{S}(z)\,W(z)}{1 - \left(S(z) - \hat{S}(z)\right)W(z)} \qquad (2.52)$$
If perfect plant knowledge is assumed, the sensitivity function reduces to a form which is linear in the filter coefficients:

$$G(z) = 1 + S(z)\,W(z) \qquad (2.53)$$

and criterion 2.7, which is quadratic in the filter coefficients, can be applied. Minimization of the criterion with respect to each of its coefficients leads to a global minimum, giving the coefficients of the optimal controller according to criterion 2.7. So using IMC and assuming perfect plant knowledge leads to an equivalent feedforward minimization problem, as is also shown in figure 2.5: the block diagram of the feedback system is now given by an entirely feedforward structure.

Figure 2.5: Block diagram of the feedback configuration with perfect plant knowledge assumed
The strategy of using IMC and assuming perfect plant knowledge will be applied in the
next sections to calculate the optimal feedback controller.
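The mechanism behind IMC with perfect plant knowledge can be sketched numerically: with $\hat{S} = S$, subtracting the internal model output from the error signal recovers the disturbance exactly. All paths and signals below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
s = np.array([1.0, 0.5])          # secondary path S; internal model S_hat = S
d = rng.standard_normal(N)        # disturbance signal
u = rng.standard_normal(N)        # arbitrary steering signal

y = np.convolve(s, u)[:N]         # y(n) = S u(n)
e = d + y                         # error measured at the sensor
y_hat = np.convolve(s, u)[:N]     # internal model output S_hat u(n)
d_hat = e - y_hat                 # estimated disturbance, equation 2.50

assert np.allclose(d_hat, d)      # perfect model: d_hat recovers d exactly
```

With a model mismatch ($\hat{S} \neq S$) the recovered signal would contain a component of the steering signal, which is precisely the internal feedback loop discussed in the stability section.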
2.4.3 Time domain controller

Using the IMC arrangement discussed in the previous subsection, the optimal feedback time domain controller is derived in the same way as in section 2.3.2. Assuming perfect plant knowledge, the estimated disturbance signal equals the original disturbance signal, which plays the role of the reference signal in the feedforward case. The matrix R(n) is now obtained from the Kronecker tensor product between the secondary path S and the disturbance signal d(n):
$$\bar{d}_l(n) = \left[d_l(n),\, d_l(n-1),\, \ldots,\, d_l(n-i),\, \ldots,\, d_l(n-I+1)\right] \in \mathbb{R}^{1\times I} \qquad (2.54)$$

$$\bar{d}(n) = \left[\bar{d}_1(n),\, \bar{d}_2(n),\, \ldots,\, \bar{d}_l(n),\, \ldots,\, \bar{d}_L(n)\right]^T \in \mathbb{R}^{LI\times 1} \qquad (2.55)$$

$$R(n) = S \otimes \bar{d}^T(n) \in \mathbb{R}^{L\times MLI} \qquad (2.56)$$
Having specified the matrix R(n), the optimal time domain controller, consisting of the LM FIR filters, can be obtained using equation 2.25:

$$W_{opt} = -\left(E\left[R^T(n)R(n)\right]\right)^{-1} E\left[R^T(n)\,d(n)\right] \qquad (2.57)$$
In case perfect plant knowledge may not be assumed, the controller designed above may be suboptimal. A better controller may then be derived by using the estimated disturbance signal, which actually acts as the input of the controller, instead of the real disturbance signal. However, by designing the optimal controller using the estimated disturbance, the dependence of the estimated disturbance on the coefficients is ignored in order to obtain a quadratic minimization problem. It is therefore not guaranteed to give a better performance compared to the controller obtained using perfect plant knowledge.
2.4.4 Transform domain controller

A transform domain solution can also be obtained using the internal model control implementation and assuming perfect plant knowledge. Referring to cost function 2.33, the power spectral density function of the error signal can be written as:
$$\begin{aligned}
S_{ee}(e^{j\omega T}) &= E\left[e(e^{j\omega T})\,e^T(e^{j\omega T})\right] \\
&= E\left[\left(d(e^{j\omega T}) + y(e^{j\omega T})\right)\left(d(e^{j\omega T}) + y(e^{j\omega T})\right)^T\right] \\
&= E\left[\left(Px + SWPx\right)\left(Px + SWPx\right)^T\right] \\
&= \left(P + SWP\right) E\left[x\,x^T\right] \left(P^T + P^T W^T S^T\right)
\end{aligned} \qquad (2.58)$$

where in the last two lines the argument $e^{j\omega T}$ has been omitted for brevity.
The problem can now be restated as minimising the following H2 norm:

$$W(z) = \arg\min_W \left\|P(z) + S(z)W(z)P(z)\right\|_2^2 \qquad (2.59)$$
which has the following solution [3]:

$$W(z) = -S_o^{-1}(z)\left\{S_i^T(z^{-1})\,P(z)\,P_{ci}^T(z^{-1})\right\}_+ P_{co}^{\dagger}(z) \qquad (2.60)$$

where $\dagger$ denotes the left inverse operation, and $P_{co}$ and $P_{ci}$ are the co-outer and co-inner factor, respectively, of the primary path, as explained in appendix A.1.
Noting that:

$$P(z)\,P_{ci}^T(z^{-1}) = P(z)\,P_{ci}^{-1}(z) = P_{co}(z) \qquad (2.61)$$

expression 2.60 can be simplified, leading to the following solution for the optimal Wiener feedback controller:

$$W(z) = -S_o^{-1}(z)\left\{S_i^T(z^{-1})\,P_{co}(z)\right\}_+ P_{co}^{\dagger}(z) \qquad (2.62)$$
2.4.5 Stability

By assuming perfect plant knowledge, the feedback loop is completely ignored. In fact this feedback loop is still present, as can be seen in block diagram 2.6, which is just a rearranged version of the block diagram shown in figure 2.3. In this figure the loop transfer path can be defined as:

Figure 2.6: Block diagram of the IMC arrangement with internal feedback loop present

$$L(z) = W(z)\left[S(z) - \hat{S}(z)\right] \qquad (2.63)$$
In order to have a stable feedback transfer path, it is required that the loop gain, which may be expressed by the norm of the loop transfer path, is less than unity:

$$\left\|L(z)\right\|_\infty < 1 \qquad (2.64)$$

Analogous to the feedforward case, the regularised transform domain cost function for the feedback arrangement becomes:

$$J = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathrm{tr}\left[\left(P + SWP\right)\left(P^* + P^* W^* S^*\right) + \left(\sqrt{\beta}\,I_M\,W\right)\left(\sqrt{\beta}\,I_M\,W\right)^*\right] d\omega T \qquad (2.66)$$
where the argument $e^{j\omega T}$ has been omitted for clarity and $*$ denotes the complex conjugate transpose. Because the controller W is prefactorised with the primary path, the two terms in the integral cannot be combined. Therefore the transform domain cost function cannot be obtained in the same general form as was derived in equation 2.38, and the solution cannot easily be generalized. For analysis purposes only the time domain regularised solution will be used.
2.5 Summary

In this chapter a general method was presented to attenuate unwanted vibrations. This method was mainly based on the design of a time independent controller, which uses the minimisation of the energy contained in the error signal to determine the optimal controller. A static time domain controller, commonly denoted in literature as the class of Wiener controllers, was derived by use of the statistical properties of the reference and measured signals. By reformulating the minimisation of a criterion as the minimisation of an H2 norm, a transform domain solution could be derived. After the feedforward solution was derived, a method was described to reduce the steering signals, which are often constrained to a certain maximum level. Subsequently the feedback problem was dealt with. By using an internal model arrangement, it was shown that the feedback problem could be treated as an equivalent feedforward problem, provided perfect plant knowledge is available. The time and transform domain feedback controllers were therefore derived using standard feedforward control laws. Finally it was shown how imperfect plant knowledge may lead to stability problems, and regularization was presented as a method to improve the stability of the feedback system.
Chapter 3
Adaptive control
3.1 Introduction
The fixed gain Wiener controller derived in the previous chapter is optimised only for input signals whose statistical properties are stationary, and can become less optimal if these properties change over time. Such a change can be caused by a slowly varying primary path; a change in the statistics of the reference signal may also result in a less optimal controller. Under these conditions, using an adaptive controller may lead to a better reduction of the disturbance signal, because it has the possibility to adapt towards a more optimal controller when the correlation properties of the input signals change over time. It requires, however, the calculation of a new set of control filters every sample period, and therefore an important design criterion will be the efficiency of the algorithm. Another important requirement of an adaptive controller is that it has to converge to a stable controller, where a fast convergence is desirable so that the disturbance signal may be attenuated over a short time period. The actuators are driven by the output of the controller, yet the output energy of the actuators available to attenuate the disturbance signal is often restricted. This leads to the requirement that the adaptive controller should also be efficient in the sense that maximal reduction of the disturbance energy is achieved with a minimal amount of control effort. These demands lead to the design of an adaptive controller, arranged in a feedforward or a feedback structure, which will be the subject of this chapter.
The organisation of this chapter is as follows. First a basic structure for an adaptive controller will be laid out, and principles to determine the stability and convergence speed will be discussed. Subsequently the adaptive feedforward controller will be derived, first in the form of the FxLMS algorithm. It will be shown how the computational efficiency can be enhanced, together with an increase in convergence speed, by using the principle of inner-outer factorization. In the following subsection it will be shown how the robustness to model uncertainties can be enhanced by using regularization. In the final subsection, the adjoint-LMS algorithm is introduced, leading to a further decrease of the computational load. In the following section the adaptive controller in feedback arrangement is derived. The design can mainly be generalized from the feedforward arrangement; yet, referring to the design of the fixed gain feedback controller, special attention will be given to the stability requirements.
[Figure: Block diagram of the adaptive feedforward arrangement, in which the coefficients of the control filter $W(q^{-1}, n)$ are adapted by the LMS algorithm.]

$$\frac{\partial J}{\partial w} = 2\,E\left[\bar{X}^T(n)\,e(n)\right] \qquad (3.4)$$
Because of the expectation operation involved, the computational load of calculating the derivative in the update equation may be rather large. Therefore the use of an instantaneous estimate of the derivative is proposed. The update equation for the control filter coefficients, with step size µ, can then be given as:

$$w(\text{new}) = w(\text{old}) - \mu\,\bar{X}^T(n)\,e(n) \qquad (3.5)$$
The adaptation algorithm associated with this update equation appears to be simple and numerically robust, and is commonly known as the least mean squares (LMS) algorithm. Compared to an actual steepest descent method, it differs in that the instantaneous gradient may deviate from the gradient according to the MSE criterion, so the path on the error surface may differ from the one obtained using the MSE criterion. However, it can be shown that when the adaptation process takes place slowly over time, i.e. the amount by which the filter coefficients change is small during the time the filter's impulse response needs to decay sufficiently, the coefficients of the adaptive filter will converge in the mean to the coefficients of the optimal Wiener solution [2].
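A minimal LMS sketch of update 3.5; the step size, the signals and the underlying response w_true are illustrative assumptions, chosen so that the optimal Wiener coefficients are known in advance:

```python
import numpy as np

rng = np.random.default_rng(3)
N, I = 5000, 4
x = rng.standard_normal(N)                 # white reference signal
w_true = np.array([0.9, -0.4, 0.2, 0.05])  # assumed underlying FIR response
d = -np.convolve(w_true, x)[:N]            # disturbance to be cancelled

w = np.zeros(I)
mu = 0.01                                  # step size, well below 2/lambda_max
for n in range(I, N):
    xn = x[n - np.arange(I)]               # regression vector [x(n),...,x(n-I+1)]
    e = xn @ w + d[n]                      # instantaneous error e(n)
    w = w - mu * xn * e                    # LMS update, instantaneous gradient
```

Because the error can be driven to zero exactly here, the coefficients converge to w_true, the optimal Wiener solution of this toy problem.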
Stability condition

The stability of the LMS algorithm can be conveniently analyzed by considering the averaged behaviour of the algorithm. This is expressed by taking the mean value of the different terms of the update step of the algorithm over a number of trials:

$$E[w(n+1)] = E[w(n)] - \mu\,E\left[\bar{X}^T(n)\,d(n)\right] - \mu\,E\left[\bar{X}^T(n)\bar{X}(n)\,w(n)\right] \qquad (3.6)$$
When the reference signal is considered statistically independent of the filter coefficients, the last term may be split into two factors:

$$E\left[\bar{X}^T(n)\bar{X}(n)\,w(n)\right] = E\left[\bar{X}^T(n)\bar{X}(n)\right] E\left[w(n)\right] \qquad (3.7)$$
This independence assumption is only valid for a slowly varying filter, i.e. the coefficients of the filter can be considered constant over the length of the filter. By defining the normalized coefficient error as:

$$\epsilon(n) = E[w(n)] - w_{opt} \qquad (3.8)$$

and recalling the expression 2.26 for the optimal filter coefficients, where $R_{xx}$ is defined as the autocorrelation matrix of the reference signal, equation 3.6 can be substituted in equation 3.8, yielding:

$$\epsilon(n+1) = \left[I - \mu R_{xx}\right]\epsilon(n) \qquad (3.9)$$
which represents a set of coupled equations describing the evolution of the normalized filter coefficients over time. By using an eigenvalue decomposition of the autocorrelation matrix:

$$R_{xx} = Q\,\Lambda\,Q^T \qquad (3.10)$$

equation 3.9 can be written as:

$$v(n+1) = \left[I - \mu\Lambda\right]v(n) \qquad (3.11)$$
forming a set of I independent equations:

$$v_i(n+1) = \left(1 - \mu\lambda_i\right)v_i(n) \qquad (3.12)$$

where the i-th normalized, averaged, rotated filter coefficient $v_i(n)$ is defined through:

$$v(n) = Q^T\left[E[w(n)] - w_{opt}\right] \qquad (3.13)$$
The independent coefficients $v_i(n)$ are also referred to as the different modes in which the adaptive algorithm converges. To guarantee stable convergence, every independent mode has to converge to zero. This leads to the following condition on the step size for each independent mode i:

$$\left|1 - \mu\lambda_i\right| < 1 \quad\Leftrightarrow\quad 0 < \mu < 2/\lambda_i \qquad (3.14)$$
From this condition it becomes clear that the mode associated with the highest eigenvalue $\lambda_{max}$ will be the first mode to become unstable. Therefore, the maximum step size is bounded by the highest eigenvalue, and the condition can be written more specifically as:

$$0 < \mu < 2/\lambda_{max} \qquad (3.15)$$

In practical situations, however, the independence assumption 3.7 is often not achievable because of a fast developing filter. In that case a smaller value of the step size is required to obtain a stable adaptation process.
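The bound 3.15 can be illustrated numerically by iterating the decoupled recursion 3.12 for a stable and an unstable step size; the eigenvalues below are assumed values:

```python
import numpy as np

lam = np.array([0.1, 1.0, 5.0])     # assumed eigenvalues of R_xx
v0 = np.ones(3)                     # initial rotated coefficient errors

def iterate(mu, steps=200):
    # v_i(n+1) = (1 - mu * lambda_i) v_i(n), equation 3.12
    v = v0.copy()
    for _ in range(steps):
        v = (1 - mu * lam) * v
    return v

mu_stable = 0.9 * 2 / lam.max()     # just inside the bound 2/lambda_max
mu_unstable = 1.1 * 2 / lam.max()   # just outside the bound

assert np.all(np.abs(iterate(mu_stable)) < 1.0)     # every mode decays
assert np.any(np.abs(iterate(mu_unstable)) > 1.0)   # fastest mode diverges
```

The mode with the largest eigenvalue is indeed the first to diverge once the step size exceeds $2/\lambda_{max}$, while the mode with the smallest eigenvalue is the slowest to decay.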
Convergence speed

The speed of convergence of each independent mode can be described by an exponential decay [2] with time constant:

$$\tau_i = \frac{1}{2\mu\lambda_i} \quad \text{(samples)} \qquad (3.16)$$

The speed of the convergence process is determined by the mode with the largest time constant, which corresponds to the mode with the smallest eigenvalue. The convergence process is also influenced by the step size: a larger step size results in a faster convergence. A good measure of the overall convergence process therefore appears to be the ratio between the largest and smallest eigenvalue, $\lambda_{max}/\lambda_{min}$. A fast overall convergence behaviour requires a small eigenvalue spread.
3.2.3 Presenting the secondary path: FxLMS algorithm

In the previous subsection the general LMS adaptive filter problem was discussed. It was assumed that the output of the control filter affects the disturbance signal measured at the error sensors without any change in gain or delay, i.e. the influence of the secondary path was neglected. This assumption is not valid for most practical situations, where there will be a noticeable transfer path between the output of the control filter and the place where its effect is measured. Therefore the effect of the secondary path has to be incorporated in the LMS algorithm.
In the derivation of the controller, the assumption is made that the control filter $W(q^{-1}, n)$ changes only slowly compared to the timescale of the system dynamics of the secondary path.

[Figure: Block diagram of the FxLMS adaptive filter problem, in which the reference signal is filtered by a model of the secondary path to form $\hat{R}(n)$.]

The update equation then becomes:

$$w(\text{new}) = w(\text{old}) - \mu\,\hat{R}^T(n)\,e(n) \qquad (3.19)$$
Provided the algorithm is stable, the implication of using the modified update equation is that the coefficients will in the mean converge to a suboptimal solution described by:

$$w_\infty = -\left(E\left[\hat{R}^T(n)\,R(n)\right]\right)^{-1} E\left[\hat{R}^T(n)\,d(n)\right] \qquad (3.20)$$

which differs from the optimal Wiener solution described by equation 2.26.
The stability criterion for the FxLMS algorithm can be derived in a similar way as was described for the general LMS algorithm, leading to an expression for the theoretical maximum step size:

$$0 < \mu < \frac{2\,\mathrm{Re}(\lambda_{max})}{\left|\lambda_{max}\right|^2} \qquad (3.21)$$

where the potentially complex eigenvalues are taken from the cross-correlation matrix $\hat{R}_{rr} = E[\hat{R}^T(n)R(n)]$, consisting of the reference signal filtered by the estimated and by the real secondary path.
It may also be noted that if at least one of the eigenvalues has a negative real part, the associated independent mode $v_i$, described by equation 3.12, will grow exponentially, leading to an unstable adaptation process. If perfect plant knowledge is assumed, the matrix $E[\hat{R}^T(n)R(n)]$ is guaranteed to be positive definite, having only positive eigenvalues, provided the reference signal persistently excites the filter.
A sufficient condition to guarantee stability if perfect plant knowledge is not assumed [2] may then be given by:

$$\mathrm{eig}\left[\hat{S}^H(e^{j\omega T})\,S(e^{j\omega T}) + S^H(e^{j\omega T})\,\hat{S}(e^{j\omega T})\right] > 0 \quad \text{for all } \omega T \qquad (3.22)$$
where $(\cdot)^H$ denotes the complex conjugate transpose operator. Besides using a more accurate plant model, another way to stabilize the controller is to make use of regularization. This will be discussed in section 3.2.5.
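A compact FxLMS sketch of update 3.19; all paths, the step size and the signals are illustrative assumptions, and for simplicity the secondary path model equals the true path:

```python
import numpy as np

rng = np.random.default_rng(4)
N, I = 20000, 8
x = rng.standard_normal(N)
p = np.array([0.0, 0.8, 0.3])          # assumed primary path
s = np.array([1.0, 0.5])               # secondary path = its model here

d = np.convolve(p, x)[:N]              # disturbance d(n) = P x(n)
r = np.convolve(s, x)[:N]              # filtered reference S_hat x(n)

w = np.zeros(I)
mu = 0.005
e_hist = np.zeros(N)
ybuf = np.zeros(len(s))                # recent controller outputs u(n), u(n-1)
for n in range(I, N):
    xn = x[n - np.arange(I)]
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = xn @ w                   # u(n) = W(q^-1, n) x(n)
    e = d[n] + s @ ybuf                # e(n) = d(n) + S u(n)
    rn = r[n - np.arange(I)]           # filtered-reference regressor
    w = w - mu * rn * e                # FxLMS update, equation 3.19
    e_hist[n] = e
```

After convergence the residual error energy is far below the disturbance energy, since an 8-tap W can closely approximate $-S^{-1}P$ for these assumed paths.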
3.2.4 Reducing the computational load and increasing the convergence speed: IO factorization

In the previous subsection it was shown that the convergence properties of the FxLMS algorithm depend on the size of the eigenvalue spread of the cross-correlation matrix $\hat{R}_{rr}$, consisting of the reference signal filtered by the estimated and the real secondary path. If this eigenvalue spread approaches unity, faster convergence of the adaptive filter coefficients to the fixed gain solution is achieved. A low eigenvalue spread is therefore desirable for fast convergence. Using the FxLMS algorithm, the eigenvalue spread is limited by the dynamic range of the power spectrum of the reference signal combined with the dynamic range of the frequency response of the secondary path [2]. Considering a single input single output system and assuming perfect plant knowledge, this may be written as:
$$\frac{\lambda_{max}}{\lambda_{min}} \leq \frac{\left[\,\left|S(e^{j\omega T})\right|^2 S_{xx}(e^{j\omega T})\,\right]_{max}}{\left[\,\left|S(e^{j\omega T})\right|^2 S_{xx}(e^{j\omega T})\,\right]_{min}} \qquad (3.23)$$
If the reference signal is assumed to be a white noise sequence, it has a power spectrum of unity over the whole frequency range. The eigenvalue spread is then bounded only by the ratio of the maximum and minimum values of the gain of the secondary path. To obtain a fast convergence it is therefore desirable to keep this ratio as small as possible.

Figure 3.4: Block diagram of the FxLMS adaptive filter problem with postconditioning applied
Now let us consider that the output signal of the adaptive controller is prefiltered by the inverse of the outer factor of the secondary path. Referring to the properties of the inner and outer factor explained in appendix A.1, this is allowed because the outer factor is a minimum phase system and thus has a stable inverse. The error signal may then be expressed as:

$$e(n) = S\,S_o^{-1}\,W(q^{-1}, n)\,x(n) + d(n) \qquad (3.24)$$
$$= S_i\,W(q^{-1}, n)\,x(n) + d(n) \qquad (3.25)$$
and the block diagram of the FxLMS algorithm using IO factorization, which is also referred to as postconditioning, is shown in figure 3.4. The resulting secondary path may now be recognized as the inner factor of the original secondary path. Because the inner factor of a system is by definition an allpass system, it has a frequency response magnitude of unity over the whole frequency range, and therefore the power spectral density of the reference signal filtered with the inner factor remains unaffected. Consequently, when inner-outer factorization is applied, the eigenvalue spread of the filtered reference signal is limited by the dynamic range of the power spectrum of the reference signal only.
The price to pay for the postconditioned FxLMS algorithm is that the output signal of the adaptive filter has to be filtered by the inverse of the outer factor, which is an extra operation. However, noticing that the order of the inner factor equals the number of zeros of the secondary path outside the unit circle, the order of the model used to filter the reference signal may be greatly reduced. When the computational load of the FxLMS algorithm is examined, it appears that the Kronecker tensor product between the reference signal and the secondary path, S̃ ⊗ x(n), requires the major part. Filtering the reference signal using a reduced order secondary path may therefore lead to a far more efficient algorithm than the traditional FxLMS algorithm.
A comparison of the number of additions and multiplications, referred to as floating point operations (flops), between the two algorithms is given in table 3.1. The total number of flops per sample required by each version of the FxLMS consists of the flops of a number of separate operations, which can be categorized as:
3.2. Adaptive feedforward control 27
1. The filtering of the reference signal by the adaptive control filter.
2. The calculation of the Kronecker tensor product between the reference signal and the secondary path.
3. The calculation of the new vector of filter coefficients.
4. For the postconditioned FxLMS algorithm, the filtering of the output signal of the adaptive control filter by the inverse of the outer factor.
In the table, N denotes the order of the secondary path and N_i denotes the order of the inner factor. The order of the outer factor is by definition equal to the order of the system itself. K, M and L denote the number of reference, steering and error signals respectively. Further, the number of taps of the control filter is denoted by I.
Table 3.1: Comparison of the number of floating point operations required by the FxLMS algorithm with and without postconditioning

Operation | Traditional FxLMS algorithm | FxLMS with postconditioning
Filtered reference signal | LMK[2N² + 3N + 1] | LMK[2N_i² + 3N_i + 1]
Filter update | 2MKIL + L | 2MKIL + L
Filtering controller output by outer factor | 0 | 2N[N + M + L] + 2ML − N − L
Filtering reference signal with adaptive filter | 2IKM − M | 2IKM − M
Total number of flops | LMK[2N² + 3N + 2I + 1] + 2MKI − M + L | LMK[2N_i² + 3N_i + 2I + 1] + 2MKI − M + 2N[N + M + L] + 2ML − N

To get an idea of the practical reduction in the number of flops, an inner-outer factorization of an identified secondary path of the 6-DOF hybrid isolation vibration setup is performed. The identified system has an order of 100, yielding an inner factor of order 25, and the adaptive control filter is assumed to contain I = 200 taps. The total number of flops of the FxLMS algorithm without inner-outer factorization equals 747,636. Using postconditioning yields a total of 86,902 flops, which is less than 12 percent of the number of flops required by the traditional FxLMS algorithm.
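The totals of table 3.1 can be evaluated directly. The sketch below reproduces the flop counts of the example; the channel counts K = 1, M = L = 6 are assumptions that are consistent with the 6-DOF setup and reproduce the quoted totals:

```python
def fxlms_flops(N, I, K, M, L):
    """Total flops/sample of the traditional FxLMS (Table 3.1, left column)."""
    return L * M * K * (2 * N**2 + 3 * N + 2 * I + 1) + 2 * M * K * I - M + L

def fxlms_post_flops(N, N_i, I, K, M, L):
    """Total flops/sample of the postconditioned FxLMS (Table 3.1, right column)."""
    return (L * M * K * (2 * N_i**2 + 3 * N_i + 2 * I + 1)
            + 2 * M * K * I - M
            + 2 * N * (N + M + L) + 2 * M * L - N)

# Assumed channel counts for the 6-DOF setup: K = 1 reference, M = L = 6.
K, M, L = 1, 6, 6
print(fxlms_flops(N=100, I=200, K=K, M=M, L=L))               # 747636
print(fxlms_post_flops(N=100, N_i=25, I=200, K=K, M=M, L=L))  # 86902
```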
3.2.5 Decreasing the steering signals: regularized solution
The energy required by the actuators to obtain optimal reduction of the disturbance signal can be considerably high when the adaptive controller is designed to minimize the mean square error only. Therefore, as was discussed in the previous chapter, it may be desirable to make use of a regularized solution, which also tries to minimize the output of the controller. In this subsection the implications of regularization for the adaptive controller are discussed.
Regularization of the controller output
In order to derive the regularized adaptive controller, first the cost function described by equation 2.29 is considered. In addition to the mean squared error, a term proportional to the mean squared actuator input is to be minimized. First the adaptive controller without postconditioning is considered. The error and actuator input signal may then be written as:

e(n) = R(n)w(n) + d(n)    (3.26)
u(n) = X̃(n)w(n)    (3.27)
Taking the instantaneous derivative of this cost function with respect to each of the filter coefficients yields the following update equation for the vector of filter coefficients:

w(n + 1) = w(n) − α[ R̃ᵀ(n)e(n) + β X̃ᵀ(n)u(n) ]    (3.28)

where α denotes the stepsize and β the weighting of the control effort.
This results in an extra 3MKI flops per sample compared to the nonregularized solution. Next the regularized solution using IO factorization is derived. The filtered reference signal is obtained by taking the Kronecker tensor product between the input signal and the inner factor of the secondary path, as was defined in the previous subsection. The input signal can now be defined as follows:

u(n) = S_o^{-1}[ X̃(n)w(n) ]    (3.29)
     = Q̃(n)w(n),  with  Q̃(n) = S_o^{-1} X̃(n)    (3.30)
Taking again the instantaneous derivative of the cost function with respect to each of the filter coefficients, the update equation is now given by:

w(n + 1) = w(n) − α[ R̃ᵀ(n)e(n) + β Q̃ᵀ(n)u(n) ]    (3.31)

The problem arises with the calculation of the Kronecker tensor product, which incorporates an extra LMK[2N² + 3N − 1] multiplications and additions and leads to a relatively large increase of the computational load.
Regularization of the control filter coefficients
Therefore another approach is taken when applying postconditioning. Instead of weighting the controller output signals, weighting the sum of the squared coefficients of the adaptive controller in the cost function is proposed. By doing so, the coefficients of the adaptive FIR filter of the controller are minimized in addition to the mean square error, according to the following cost function:

J = E[ eᵀ(n)e(n) + β wᵀ(n)w(n) ]    (3.32)

This results in the following expression for the update equation of the filter coefficients:

w(n + 1) = [1 − αβ]w(n) − α R̃ᵀ(n)e(n)    (3.33)
where the factor [1 − αβ] is called the leakage factor; this algorithm is accordingly also known as the leaky LMS algorithm, because the coefficients leak away if the error signal approaches zero. The extra flops involved with the leaky LMS algorithm are just MKI + 2 operations.
When a stability analysis similar to that of section 3.2.2 is performed for this algorithm, it appears that the stability and convergence properties are now determined by the eigenvalues of the matrix E[ R̃ᵀ(n)R(n) + βI ], which effectively means that a term β is added to each of the eigenvalues. So besides decreasing the controller output, another advantage of introducing a coefficient weighting factor is that eigenvalues that would otherwise have a small negative real part may now become positive. In general, adding a small value to the eigenvalues increases the smallest eigenvalue by a relatively large amount, thereby reducing the eigenvalue spread. The leaky LMS algorithm can therefore be used to decrease the steering signals and to increase the robustness as well as the convergence speed of the FxLMS algorithm. The drawback of using a coefficient weighting factor in the cost function is that the solution converges to 2.31, which gives a suboptimal performance compared with the optimal solution given by equation 2.26. However, experimental results have shown that even a small value of β can yield a major decrease of the steering signals with only a slightly decreased performance.
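A minimal single-channel sketch of the leaky update (3.33) is given below; the stepsize α, weighting β, and the toy primary and secondary paths are arbitrary assumptions, and the secondary path is taken as perfectly known:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.02, 0.01      # stepsize and coefficient weighting (assumed)
leak = 1.0 - alpha * beta     # leakage factor [1 - alpha*beta] of eq. (3.33)

s = np.array([0.5, 0.25])     # toy secondary-path FIR (assumed known)
I = 4                         # adaptive-filter taps
w = np.zeros(I)

n_samp = 4000
x = rng.standard_normal(n_samp)                   # white reference
d = -np.convolve(x, [0.4, 0.1, 0.05])[:n_samp]    # toy primary disturbance
xf = np.convolve(x, s)[:n_samp]                   # filtered reference s * x

xbuf, ubuf, rbuf = np.zeros(I), np.zeros(len(s)), np.zeros(I)
err = np.empty(n_samp)
for n in range(n_samp):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    ubuf = np.roll(ubuf, 1); ubuf[0] = w @ xbuf   # controller output u(n)
    err[n] = d[n] + s @ ubuf                      # e(n) = S u(n) + d(n)
    rbuf = np.roll(rbuf, 1); rbuf[0] = xf[n]
    w = leak * w - alpha * rbuf * err[n]          # leaky FxLMS update (3.33)

# The converged filter still attenuates the disturbance despite the leakage.
assert np.mean(err[-500:] ** 2) < 0.5 * np.mean(d ** 2)
```

With β = 0 this reduces to the ordinary FxLMS update; the small leakage keeps the coefficients bounded at the cost of a slight bias.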
Regularization of the outer factor
It may be noted that using regularization in combination with postconditioning leads only to the minimization of the adaptive part of the controller. This may be seen by considering the complete controller as consisting of two systems. The first system can be recognized as the adaptive controller, represented by the FIR filter W(q⁻¹, n); the second part equals the inverse outer factor of the secondary path, S_o^{-1}. The complete controller may then be represented by the following expression:
W_combined = S_o^{-1} W(q^{-1}, n)    (3.34)

where S_o^{-1} is the fixed part and W(q^{-1}, n) the adaptive part.
When a regularization term is included in the cost function, the sum of squared coefficients of the adaptive filter is minimized together with the mean square error. As a result the gain of the adaptive part of the controller will decrease at its peak values. However, the gain of the fixed part of the controller remains unaffected, which may still lead to a considerably large gain of the combined controller at those frequencies corresponding to the peak values of the frequency response of the inverse outer factor.
If it is desired to decrease the gain of the total controller at the mentioned frequencies as well, the outer factor may also be regularized. This is done by adding a small value to the frequency response of the outer factor, which may be expressed as [2]:
S̄_oᵀ(z^{-1}) S̄_o(z) = S_oᵀ(z^{-1}) S_o(z) + βI    (3.35)
where I denotes an M × M identity matrix, β a small regularization constant, and M the number of input signals of S(z). The gain of the inverse of the outer factor is largest where the gain of the outer factor is smallest. By adding a small value to the gain of the outer factor, it is increased most where its value is smallest, thereby decreasing the gain of the inverse of the outer factor most at its peak values. It was observed that by lowering the gain at the peak values of the inverse outer factor, the gain of the adaptive part of the controller increased at the same frequencies. The gain at those frequencies may then be more efficiently reduced by the adaptive controller using the standard regularization according to equation 3.32.
Instead of filtering the error signal by S̃_i(z), the error signal now needs to be filtered by S̃(z) S̄_o^{-1}(z). The additional computational load is determined by the increased length of the impulse response of the latter model compared to the impulse response of S̃_i(z). Yet the advantage of this solution, with respect to for example introducing a frequency dependent weighting term in the cost function, is that the additional computational load required may be much smaller.
The convergence properties are determined by the eigenvalues of the autocorrelation matrix obtained by filtering the reference signal by the combined model S̃(z) S̄_o^{-1}(z). Because this system is no longer an allpass system, an increase in the eigenvalue spread will result.
3.2.6 Reducing the computational load: adjoint-LMS algorithm
A disadvantage of the FxLMS algorithm is that the computational load is relatively high, especially when multiple inputs and outputs are considered. As was mentioned earlier, this is mainly caused by the Kronecker tensor product involved in filtering the reference signal by the secondary path, which requires a relatively large number of flops. In this subsection it will be shown that a computationally much more efficient algorithm, called the adjoint-LMS algorithm, can be obtained by filtering the error signal instead of the reference signal.
The algorithm will be derived considering a time averaged approach. First consider the general cost function:

J = E[eᵀe]    (3.36)
Taking the derivative of this cost function with respect to the filter coefficients results in:

∂J/∂w = 2E[ R̃ᵀ(n)e(n) ]    (3.37)

Writing out the expectation operation, this derivative can be expressed for each coefficient separately as:

∂J/∂w^{(i)}_{m,k} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} Σ_{l=1}^{L} S̃_{m,l} x_k(n − i) e_l(n)    (3.38)
When approximating the state space model S̃_{m,l} by a FIR model with a sufficient number J of coefficients s̃^{(j)}_{m,l}, the above derivative can be written as:

∂J/∂w^{(i)}_{m,k} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} Σ_{l=1}^{L} Σ_{j=0}^{J−1} s̃^{(j)}_{m,l} x_k(n − i − j) e_l(n)    (3.39)
By introducing the dummy variable n′ = n − j, this may be written as:

∂J/∂w^{(i)}_{m,k} = lim_{N→∞} (2/N) Σ_{n′+j=−N}^{N} Σ_{l=1}^{L} Σ_{j=0}^{J−1} s̃^{(j)}_{m,l} e_l(n′ + j) x_k(n′ − i)    (3.40)
Considering that taking the mean of the derivative from n = −N − j until n = N − j is the same as taking the mean from −N until N as N goes to infinity, n′ + j may be replaced by n in the expectation operation:

∂J/∂w^{(i)}_{m,k} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} Σ_{l=1}^{L} Σ_{j=0}^{J−1} s̃^{(j)}_{m,l} e_l(n + j) x_k(n − i)    (3.41)
By defining the filtered error signal as:

f_m(n) = Σ_{l=1}^{L} Σ_{j=0}^{J−1} s̃^{(j)}_{m,l} e_l(n + j)    (3.42)
the gradient may be regarded as a multiplication of the reference and the filtered error signal:

∂J/∂w^{(i)}_{m,k} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} f_m(n) x_k(n − i)    (3.43)

The time averaged behaviour of an adaptation algorithm using the gradient based on a filtered error signal will be the same as that of a steepest descent algorithm based on a filtered reference signal [15, 2].
When taking the instantaneous version of the gradient defined by equation 3.43 and examining the expression for the filtered error, it becomes apparent that this expression is not causal, because a time advanced error signal is required. To make the expression causal, a delay of J − 1 samples is introduced to the error and reference path. By defining j′ = J − 1 − j, the delayed filtered error signal becomes:

f_m(n − J + 1) = Σ_{l=1}^{L} Σ_{j′=0}^{J−1} s̃^{(J−1−j′)}_{m,l} e_l(n − j′)    (3.44)

leading to the following expression for the update step:

w(n + 1) = w(n) − (α/2) ∂J/∂w(n)    (3.45)
         = w(n) − α X̃ᵀ(n − J + 1) f(n − J + 1)    (3.46)
Figure 3.5: Block diagram of the adaptive filter problem using the adjoint-LMS algorithm; Δ denotes a delay of J − 1 samples
where X̃(n) is defined as in equation 2.16 and f(n) is defined as the vector of M × 1 filtered error signals. It may be noticed that the filtering of the error signal actually occurs by a delayed, time reversed impulse response of the secondary path model, whose z-transform can be written as:

z^{−J+1} S̃_{m,l}(z^{−1}) = Σ_{j=0}^{J−1} s̃^{(j)}_{m,l} z^{j−J+1}    (3.47)
The filter S̃ᵀ(z^{−1}) is called the adjoint of S̃(z) and is defined as its anticausal transposed counterpart. Therefore this algorithm is also known as the adjoint-LMS algorithm. In order to implement a stable and causal approximation of the adjoint of the secondary path, the adjoint system is described by a finite impulse response. The length of the filter is governed by the number of samples within which the impulse response of the secondary path is contained.
The block diagram of the adjoint-LMS algorithm is shown in figure 3.5. The stability behaviour of the FxLMS algorithm was described for a slowly varying filter. Because the gradient estimate of the adjoint-LMS algorithm will be similar to the gradient of the FxLMS in the limit of a slow adaptation process, the stability conditions described for the FxLMS algorithm also apply to the adjoint-LMS algorithm [2, 1]. Yet the convergence behaviour will be somewhat slower because of the delay introduced to make the adjoint filter causal. Because both algorithms converge to the same optimal solution, the amount of reduction obtained at the disturbance signal will ultimately be similar for both algorithms.
Having specified the algorithm, the most important reason for using the adjoint-LMS algorithm is that it is much more efficient. The reason lies in the fact that the Kronecker tensor product incorporated in the filtering of the reference signal by the FxLMS algorithm can be avoided. This may save a large number of flops per sample, especially when multiple input and output channels are involved. The difference in the number of flops for the traditional FxLMS and the adjoint-LMS algorithm is denoted in table 3.2, where the symbols are defined as in section 3.2.4 and J denotes the number of taps by
Table 3.2: Comparison of the number of floating point operations required by the FxLMS algorithm and the adjoint-LMS algorithm

Operation | FxLMS algorithm | Adjoint-LMS algorithm
Filtered reference/error signal | LMK[2N² + 3N + 1] | 2MJI − M
Filter update | 2MKIL + L | 2MKI + M
Filtering reference signal with adaptive filter | 2IKM − M | 2IKM − M
Total number of flops | LMK[2N² + 3N + 2I + 1] + 2MKI − M + L | M[2JI + 4KI − 1]
which the impulse response of the secondary path is contained. Using the practical example mentioned before, and taking J = 200 taps, the total number of flops of the FxLMS algorithm equals 747,636. The total number of flops required by the adjoint-LMS algorithm amounts to 484,794, which is only 65 percent compared to the traditional
FxLMS algorithm.
Figure 3.6: Block diagram of the adaptive filter problem using the adjoint-LMS algorithm with postconditioning applied; Δ denotes a delay of J − 1 samples
Finally it should be noted that when the adjoint-LMS algorithm, like the FxLMS algorithm, is arranged with postconditioning as shown in figure 3.6, it takes advantage of the more contained impulse response of the inner factor of the secondary path compared with the impulse response of the secondary path itself. However, the reduction in the number of flops between the FxLMS and the adjoint-LMS algorithm is less pronounced when postconditioning is used. The inner factor of the secondary path may now be sufficiently described by a FIR filter containing 20 taps. The flops required by the postconditioned FxLMS algorithm amount to 86,902, compared to 75,160 flops required by the postconditioned adjoint-LMS algorithm.
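The total of table 3.2 can again be evaluated directly; as before, K = 1 and M = L = 6 are assumed channel counts for the 6-DOF setup, the 20-tap inner factor is taken from the text, and reusing the outer-factor filtering term of table 3.1 for the postconditioned case is an assumption that reproduces the quoted total:

```python
def adjoint_lms_flops(I, J, K, M):
    """Total flops/sample of the adjoint-LMS algorithm (Table 3.2)."""
    return M * (2 * J * I + 4 * K * I - 1)

def outer_filtering_flops(N, M, L):
    """Filtering the controller output by the inverse outer factor (Table 3.1)."""
    return 2 * N * (N + M + L) + 2 * M * L - N - L

K, M, L = 1, 6, 6                                   # assumed channel counts
print(adjoint_lms_flops(I=200, J=200, K=K, M=M))    # 484794

# Postconditioned variant: adjoint filtering with the 20-tap inner factor
# plus the extra outer-factor filtering of the controller output.
total_post = (adjoint_lms_flops(I=200, J=20, K=K, M=M)
              + outer_filtering_flops(N=100, M=M, L=L))
print(total_post)                                   # 75160
```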
3.3 Adaptive feedback control
3.3.1 Introduction
In the previous section the design of an adaptive feedforward controller was discussed. It was mentioned in the chapter on the design of the fixed gain controller that a feedback arrangement can be considered as a feedforward arrangement using the principle of internal model control and assuming perfect plant knowledge. In this way the optimal feedback controller could be obtained using standard feedforward design theory. This strategy will also be applied in the design of the adaptive feedback controller, as will be shown in the first subsection of this section. In the following subsection the influence of model uncertainty on the stability and convergence properties of the adaptive controller will be described. Finally the adaptive controller using postconditioning is presented, which concludes this section on adaptive feedback control.
3.3.2 Design of the adaptive controller
Closely following the discussion on the design of the fixed gain feedback controller in section 2.4, the adaptive feedback controller is designed according to the adjoint-LMS algorithm discussed in the previous section.
First, perfect plant knowledge is not assumed, and the corresponding block diagram of the adaptive feedback controller is shown in figure 3.7. As reference signal, the estimated
Figure 3.7: Block diagram of the adaptive feedback filter problem using the adjoint-LMS algorithm; Δ denotes a delay of J − 1 samples
disturbance signal d̂(n) is used. The error signal and the estimated disturbance signal can then be written as:

e(n) = S W(q^{-1}, n) d̂(n) + d(n)    (3.50)
d̂(n) = [ I + S̃ W(q^{-1}, n) ]^{-1} e(n)    (3.51)

Substituting the expression for the estimated disturbance signal in equation 3.48, it becomes clear that the error signal becomes a nonlinear function of the filter coefficients w^{(i)}_{m,l}, resulting in an expression for the mean square error which is no longer quadratic in the filter coefficients.
The design of an adaptive controller minimizing the mean square error using a gradient descent algorithm may now become more complicated, because the algorithm is not guaranteed to converge to the global minimum of the mean square error, but may converge to local minima as well.
When considering a single input single output model, the sensitivity function may be expressed in the transform domain as:

G(z) = e(z)/d(z) = [ 1 + S̃(z)W(z) ] / ( 1 − [ S(z) − S̃(z) ]W(z) )    (3.52)
From this expression it can be concluded that the nonlinear component vanishes if the transfer [ S(z) − S̃(z) ]W(z) is zero, which is the case when perfect plant knowledge is assumed. Let D̃(n) be defined according to the matrix of stacked reference signals 2.16:

D̃(n) = blockdiag[ d̂(n), d̂(n), …, d̂(n) ]ᵀ ∈ R^{M×MLI}    (3.53)
The update algorithm to calculate the coefficients of the ML FIR filters of the adaptive feedback controller can then be determined referring to equation 3.46:

w(n + 1) = w(n) − (α/2) ∂J/∂w(n)    (3.54)
         = w(n) − α D̃ᵀ(n − J + 1) f(n − J + 1)    (3.55)
The fixed gain controller which optimally minimizes the mean square error is specified by equation 2.57. This controller was derived considering the theoretical situation in which perfect plant knowledge may be assumed. Yet, due to the model uncertainty of the identified model of the secondary path, which is used for internal model control and to obtain the filtered error signal, the set of control filters actually converges to a different set of fixed gain control filters. The vector of fixed gain FIR filter coefficients to which the adaptive FIR filter coefficients actually converge can be obtained by taking the model uncertainty of the secondary path into account. Let R(n) and R̃(n) be defined as the matrices of filtered estimated disturbance signals, obtained by filtering the estimated disturbance signal by the actual and the estimated secondary path respectively, using the Kronecker tensor product:

R(n) = S ⊗ d̂(n)    (3.56)
R̃(n) = S̃ ⊗ d̂(n)    (3.57)
The set of filters to which the adaptive controller will converge can now be found by writing out the expression for the derivative of the cost function with respect to the coefficients:

∂J/∂w(n) = 2E[ R̃ᵀ(n)e(n) ]    (3.58)

Setting this derivative to zero and writing out the error signal according to equation 3.48, the resulting fixed gain solution to which the adaptive coefficients converge becomes:

w_∞ = −E[ R̃ᵀ(n)R(n) ]^{-1} E[ R̃ᵀ(n)d(n) ]    (3.59)

which may indeed differ from the optimal control filter coefficients as defined in 2.57.
3.3.3 Stability and convergence properties of the adaptive feedback controller
Having specified the design of the adaptive feedback controller, its stability will be discussed in this subsection. As mentioned in the chapter on optimal feedback
control, the stability of the feedback loop is determined by the loop gain of the feedback transfer path. Given the amount of uncertainty between the secondary path and the model of the secondary path, the gain of the control filter at each frequency is restricted by the requirement that the loop gain has to be less than unity. This provides a strict boundary on the maximum values of the filter coefficients in order for the feedback controller to be stable.
Instability may also occur if the algorithm itself becomes unstable. Provided the error may be considered a linear function of the filter coefficients, the stability requirement of the algorithm may be derived analogously to the derivation for the feedforward arrangement.
Together with the assumptions mentioned earlier of a filter varying slowly with respect to the system dynamics of the secondary path and the length of the filter itself, the stability of the algorithm may be analysed in the mean over a number of trials. Carrying out this analysis leads to the requirement that each of the independent modes of the filter has to converge. According to this requirement, the expression for the theoretical maximum stepsize may be written as:
0 < α < 2Re(λ_max) / |λ_max|²    (3.60)

where the potentially complex eigenvalues λ are taken from the cross-correlation matrix consisting of the estimated disturbance signal filtered by the estimated and the real secondary path, E[ R̃ᵀ(n)R(n) ].
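The bound follows from requiring each mode pole 1 − αλ to lie inside the unit circle; a quick scalar check (λ is an arbitrary assumed complex eigenvalue):

```python
lam = 0.9 + 0.4j                     # hypothetical eigenvalue of E[R~^T R]
alpha_max = 2 * lam.real / abs(lam)**2

# A mode converges iff its pole magnitude |1 - alpha*lam| is below one;
# the boundary is exactly alpha = 2 Re(lam)/|lam|^2 as in eq. (3.60).
for alpha, stable in [(0.5 * alpha_max, True), (1.5 * alpha_max, False)]:
    assert (abs(1 - alpha * lam) < 1) == stable
```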
The adaptive filter may be made more robust by using a regularization term in the cost function. By doing so, the eigenvalues defining the convergence properties of each of the individual modes of the algorithm are taken from the modified correlation matrix E[ R̃ᵀ(n)R(n) + βI ] and accordingly become larger, making potentially negative eigenvalues positive, as was already mentioned in section 3.2.5. On the other hand, by introducing a coefficient weighting term in the cost function, the gain of the controller will also be decreased, which stabilizes the internal feedback transfer path as was mentioned in section 2.4.5.
3.3.4 Postconditioning
Inner-outer factorization may also be applied in a feedback arrangement, as shown in figure 3.8, using again the adjoint-LMS algorithm. The secondary path then reduces to the inner factor of the secondary path, which is of much lower order and has a more contained impulse response. The advantage of a reduced eigenvalue spread becomes less apparent, because the reference signal may not be assumed to be a white noise signal as was considered in the feedforward case. The eigenvalue spread is now mainly determined by the dynamic range of the power spectrum of the disturbance signal. Theoretically it might be possible to filter the estimated disturbance signal with the inverse of the outer factor of the primary path, which may be obtained by performing an outer-inner factorization on the primary path. When perfect plant knowledge may be assumed, this may lead to an eigenvalue spread of one if the input signal to the primary path is a white noise sequence. Filtering the estimated
Figure 3.8: Block diagram of the adaptive feedback filter problem using the adjoint-LMS algorithm with postconditioning; Δ denotes a delay of J − 1 samples
disturbance signal by the outer factor of the primary path is also known as preconditioning. Preconditioning is however not considered in this report, because the considerable performance degradation caused by the extra computational load required to filter the estimated disturbance signal by the inverse outer factor of the primary path does not outweigh the advantage of a potentially smaller eigenvalue spread leading to a faster convergence.
The controller to which the complete feedback controller should converge, assuming perfect plant knowledge, is given in the transform domain by equation 2.62. It may be noted that applying postconditioning actually splits the controller into two parts. The part to which the adaptive part of the complete feedback controller has to converge is defined by:

[ S̃_iᵀ(z^{-1}) P_co(z) ]₊ P_co^{-1}(z)    (3.61)
The second part of the controller consists of the inverse outer factor of the secondary path, which does not change over time. When applying regularization to a postconditioned adaptive controller, the effect on the gain of the complete controller becomes less pronounced, because the regularization term only affects the minimization of the gain of the adaptive part of the controller. So it may be the case that, using a regularization term, the frequency response of the controller at the peak values of the frequency response of the adaptive part is largely reduced, while the frequency response of the controller at the peak values of the frequency response of the fixed part is only marginally affected. When the loop gain of the internal feedback transfer path is larger than one at the frequencies coinciding with these marginally affected peak values of the fixed part of the controller, the controller may be difficult to stabilize using a regularization term in the cost function. One solution is to use a large regularization term β; another solution is to regularize the outer factor as well, as described in section 3.2.5.
3.4 Summary
In this chapter the design of an adaptive controller was discussed. The advantage of an adaptive controller is that it may anticipate changes in the statistics of the disturbance signal. First a general description of how to design an adaptive controller was given by means of the LMS algorithm. Again the controller was first designed for the case in which the reference signal is assumed to be known, resulting in the design of an adaptive feedforward controller. Using the FxLMS algorithm, the effect of the secondary path could be taken into account. It was shown that the eigenvalue spread of the autocorrelation matrix of the filtered reference signal is an important measure for the convergence speed of the algorithm. In order to enhance the convergence speed, the output of the controller was prefiltered by the inverse of the outer factor of the secondary path. An additional advantage of postconditioning was shown to be a far more efficient algorithm. In order to further improve the efficiency of the adaptive algorithm, the adjoint-LMS algorithm was introduced. In contrast to the FxLMS algorithm, which filters the reference signal, the adjoint-LMS algorithm requires the error signal to be filtered by the secondary path. In this way the Kronecker tensor operation can be avoided, which saves much computational load when multiple input and output signals are involved. Analogous to the derivation of the fixed gain controller, a method to reduce the steering signals was given by using a regularization term in the cost function. It was also shown that regularization results in a more robust controller by increasing the eigenvalues of the autocorrelation matrix of the filtered reference signals. The spread in eigenvalues is also reduced, resulting in a faster convergence. The downside of using regularization is that a slightly suboptimal performance is obtained when the controller is fully converged.
Next the design of the adaptive feedback controller was treated. It was shown that when the sensitivity function may be assumed to be a linear function of the filter coefficients, and by using the principle of internal model control analogous to the design of the fixed gain feedback controller of the previous chapter, the adaptive feedback controller could be designed using the same theory as used in the design of an adaptive feedforward controller. Subsequently, the implications of using an imperfect plant model for the stability properties of the adaptive feedback controller were described. Together with the constraint of a stable internal feedback transfer path, it was concluded that the design of an adaptive feedback controller is more sensitive to model uncertainties than that of an adaptive feedforward controller. It was shown that a regularization term could again be a solution to make the controller more robust to model uncertainties. Finally it was shown how the use of postconditioning could reduce the stabilizing effect of the regularization term. A solution to this problem will be given in the next chapter.
Chapter 4
Experimental results
4.1 Introduction
In the previous chapters the theory for the design of a fast and computationally efficient algorithm has been derived, in order to reduce a disturbing signal emitted by a certain source structure. If the statistics of the disturbance vary slowly over time, it has been shown that the controller can be implemented in an adaptive form. When knowledge of the reference signal driving the source structure is not available, the controller can be arranged in a feedback structure. In this chapter the results obtained by simulating and implementing the designed controller according to the different architectures will be presented and discussed.
This chapter is outlined as follows: first, the identification method used to obtain the models of the secondary and primary path is briefly discussed in section 4.2. Next, the results obtained with the feedforward controller are discussed in section 4.3. The results obtained with the feedback controller are subsequently presented in section 4.4. In the final section 4.5 the results of both the feedback and the feedforward controller are summarized.
4.2 Identification
For design and simulation purposes, a model of the secondary as well as the primary path has to be available. The models are obtained in state space form using subspace model identification (SMI) techniques; a detailed discussion of the identification procedure can be found in [14].
The frequency range of interest for active control is up to 1 kHz. Higher frequency disturbance signals may be sufficiently attenuated using passive control techniques. Having specified the frequency range of interest, the sample frequency is set to 2 kHz in order to circumvent aliasing problems. The identification is performed using a white noise sequence of 10,000 samples, i.e. 5 seconds; validation of the model is performed with another 10,000 samples. The accuracy of the model may be determined by comparing the plant output validation data y with the model output validation data ŷ. The variance
41
42 4. Experimental results
accounted for (VAF) value is used as a measure for the accuracy of the identied model
and is dened as:
    \mathrm{VAF} = \frac{1}{L} \sum_{l=1}^{L} \left( 1 - \frac{\operatorname{var}(y_l - \hat{y}_l)}{\operatorname{var}(y_l)} \right) \cdot 100\%    (4.1)
where L denotes the number of output signals of the identified model. The primary and secondary path were identified separately. A model of the primary path was obtained having an order of 103 and a VAF value of 99.93. For reasons to be explained later on, two models of the secondary path were estimated: a 97th- and a 58th-order model, with VAF values of 99.95 and 99.87 respectively. The identified models are used in the design of the different controllers, as will be discussed in the next sections.
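The VAF criterion of equation 4.1 is straightforward to evaluate numerically. The following sketch (Python/NumPy rather than the Matlab used for the actual experiments; signal names are illustrative) computes the VAF averaged over the L output channels:

```python
import numpy as np

def vaf(y, y_hat):
    """Variance accounted for (equation 4.1), in percent.

    y, y_hat: arrays of shape (N_samples, L) holding the measured plant
    output and the model output on the validation data."""
    per_channel = 1.0 - np.var(y - y_hat, axis=0) / np.var(y, axis=0)
    return 100.0 * np.mean(per_channel)
```

A perfect model yields 100%; values around 99.9 as quoted above indicate that the identified models capture nearly all of the output variance.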
4.3 Feedforward control
In this section the results obtained by the feedforward controller are presented. First the results obtained by simulating the controller in Matlab/Simulink are presented; the controller was simulated both as a fixed-gain and as an adaptive controller. In the next subsection the results obtained by implementing the controller on the real-time setup are given.
4.3.1 Simulation results
Fixed gain control
The fixed gain controller was derived according to the model-based design technique discussed in section 2.3.3, assuming perfect plant knowledge. First the unregularized controller is calculated according to equation 2.40. The causality operation was performed in state space using the discrete-time algorithm proposed in [6]. A state-space realization of the inner and outer factor is obtained using the Matlab function iofact.m [9]. This resulted in a 200th-order state-space model for the fixed gain feedforward controller. The controller was simulated with a broadband white noise reference sequence for 20 seconds. To be consistent with the real-time results, measurement noise, approximated by a Gaussian white noise sequence, was added to each of the error sensors. For each sensor, the obtained reduction of the disturbance signal can be expressed in decibels by the performance reduction coefficient, defined for error sensor l as:
    10 \log_{10} \left( \frac{1}{N} \sum_{n=1}^{N} \frac{e_l^2(n)}{d_l^2(n)} \right) \ \mathrm{dB}    (4.2)
where N denotes the data length over which the reduction is determined and l specifies the error sensor. The disturbance reduction obtained from this simulation is presented in table 4.1.
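Equation 4.2 translates directly into code; a minimal sketch (Python/NumPy, signal names illustrative) for a single error sensor:

```python
import numpy as np

def reduction_db(e, d):
    """Performance reduction coefficient of equation 4.2 for one sensor.

    e: error signal with control, d: disturbance without control.
    Negative values indicate attenuation; the tables in this chapter
    quote the attenuation as a positive number of dB."""
    return 10.0 * np.log10(np.mean(e**2 / d**2))
```

For example, an error signal at half the disturbance amplitude everywhere gives 10 log10(0.25) ≈ −6.0 dB.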
The performance at the first error sensor is shown in figure 4.1; the other sensors showed similar behaviour and are omitted for brevity.

Table 4.1: Reduction in dB on the six error sensor outputs obtained by simulating the unregularized fixed gain feedforward controller.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
13.8      14.0      10.2      8.1       10.3      12.9      11.5

The solid line represents the output of
the error sensor when the controller is switched off. The dashed-dotted line shows the output of the error sensor when the controller is switched on. The meaning of the dotted line will be explained later on.

Figure 4.1: Spectral density plot of the first error signal obtained by simulating the fixed gain feedforward controller with and without regularization.

In order to have an indication of the convergence properties when this controller is implemented adaptively, the eigenvalue spread of the autocorrelation matrix of the filtered reference signal was calculated, which equals 1.2 × 10⁵.
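The influence of such a large eigenvalue spread, and the effect of adding a regularization term βI to the autocorrelation matrix, can be illustrated with a small numerical experiment. The sketch below (Python with NumPy/SciPy; the lightly damped second-order filter is an arbitrary stand-in for the actual filtered-reference path) estimates the autocorrelation matrix of a filtered white-noise reference and compares the eigenvalue spreads:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)
u = rng.standard_normal(200_000)            # white-noise reference
# illustrative lightly damped resonance (not the thesis model)
r = lfilter([1.0], [1.0, -1.9, 0.95], u)    # filtered reference signal

m = 32                                      # adaptive-filter length
N = len(r)
acf = np.array([r[: N - l] @ r[l:] for l in range(m)]) / N
R = toeplitz(acf)                           # estimate of E[r(n) r^T(n)]

lam = np.linalg.eigvalsh(R)                 # ascending eigenvalues
spread = lam[-1] / lam[0]
beta = 1e-2 * lam[-1]                       # regularization term
spread_reg = (lam[-1] + beta) / (lam[0] + beta)
print(spread, spread_reg)                   # regularization shrinks the spread
```

Since LMS convergence is governed by the ratio of the largest to the smallest eigenvalue, shrinking the spread by adding βI directly improves the convergence behaviour, in line with the much smaller spread reported below for the modified autocorrelation matrix.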
When trying to implement the obtained fixed-gain feedforward controller on the real-time setup, two problems arise. First, the low-frequency gain of the controller output exceeds the maximum allowable low-frequency input level of the actuators. This is due to the fact that the inverse outer factor has a high low-frequency gain. Therefore a regularization term is incorporated in the cost function, as defined in section 2.3.3. The regularization term was tuned to β = 2.5 × 10⁻⁵. A second problem arises from the computational load incurred by implementing the obtained 200th-order state-space model. To reduce the computational load, a new reduced-order controller was identified. This was achieved by performing a new identification experiment using SMI. By doing so a reduced controller was obtained having an order of 135, with a resulting VAF value of 99.9990, indicating a controller nearly as accurate as the 200th-order controller. The performance obtained by simulating the newly obtained regularized, reduced-order controller is indicated by the dotted line in figure 4.1. The reduction obtained, measured by each of the error sensors, is indicated in table 4.2.

Table 4.2: Reduction in dB on the six error sensor outputs obtained by simulating the fixed gain controller including a regularization parameter β = 2.5 × 10⁻⁵.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
11.7      12.7      8.9       8.7       9.4       11.2      10.4

The eigenvalue spread is now determined over the modified autocorrelation matrix E[R^T(n)R(n) + βI], with the reference filtered through the regularized inverse outer factor S̃_o^{-1}(z), where S̃_o(z) denotes the regularized outer factor; the resulting eigenvalue spread equals 100.

Adaptive control

The adaptive part of the controller is regularized with a term of 0.0003, and the outer factor is regularized with a term of 0.06. Both regularization terms were obtained from the design of the corresponding feedforward fixed-gain controller, discussed earlier. Next, the stepsize during the simulations was set equal to the value that stabilizes the equivalent adaptive controller on the real-time setup, 0.01. The controller was then simulated in Simulink using a white noise reference signal for 300 seconds. During the simulation, measurement noise was added to be consistent with the real-time results.

Figure 4.2: Spectral density plot of the first error signal. Shown are the simulation results obtained by simulating the fixed-gain and adaptive feedforward controller. The controller was regularized with a term of 0.0003 together with regularizing the outer factor by a term of 0.06. (a) Performance obtained by the fixed gain controller. (b) Performance obtained by the adaptive controller.
The reduction of the disturbance signal, determined over the last 20 seconds of the simulation, is shown in table 4.4. The power spectral density of the first error signal is shown in figure 4.2 on the right side.

Table 4.4: Reduction in dB on the six error sensor outputs obtained by simulating the adaptive feedforward controller regularized with a term of 0.0003; the outer factor was regularized with a term of 0.06.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
11.7      12.6      8.9       8.4       9.1       11.1      10.3

It appears that the reduction obtained by simulating the adaptive controller is in good correspondence with the reduction obtained by simulating the fixed gain controller.
4.3.2 Real-time results
Fixed-gain control
First the fixed-gain controller derived earlier, using regularization of the combined controller with a term β = 2.5 × 10⁻⁵, is considered. The controller was implemented on the real-time setup and run for 20 seconds. The power spectral density obtained at the first error sensor is shown in figure 4.3; in table 4.5 the reduction measured by the different error sensors is shown.

Figure 4.3: Power spectral density plot obtained at the first error sensor by implementing the fixed gain feedforward controller in real-time. Regularization of the combined controller was applied with a term β = 2.5 × 10⁻⁵.

Table 4.5: Reduction in dB on the six error sensor outputs obtained by implementing the fixed gain feedforward controller in real-time. Regularization of the combined controller was applied with a term β = 2.5 × 10⁻⁵.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
10.7      11.0      7.3       7.3       8.6       10.2      9.1

Comparing the results obtained in real-time with the simulation results of table 4.2, it may be concluded that the performance in real-time has dropped by 1.3 dB.
Next the fixed gain controller derived earlier, using both regularization of the controller and regularization of the outer factor, is implemented on the real-time experimental setup. Note that only the adaptive part of the controller is regularized by the term of 0.0003. The controller was then run using a white noise sequence of 20 seconds. The resulting reduction is shown in table 4.6; a power spectral density plot of the obtained disturbance reduction measured by the first error sensor is shown in figure 4.4 on the left side.

Table 4.6: Reduction in dB on the six error sensor outputs obtained by implementing the fixed gain feedforward controller on the real-time 6-DOF setup. The controller was regularized with a term of 0.0003; the outer factor was regularized with a term of 0.06.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
10.5      10.7      7.1       7.2       8.5       9.9       9.0

Obviously the performance obtained in real-time is less than that obtained in simulation.
Adaptive control
In the previous subsection a postconditioned regularized adaptive controller was derived and simulated. The same controller is now implemented on the real-time setup. The model used to obtain the filtered error signal is again specified as a set of 36 FIR filters, each consisting of 100 taps. The stepsize was tuned to 0.01 in order to obtain a stable convergence process. The controller was run for 300 seconds using a white noise reference signal. The resulting disturbance rejection is shown in table 4.7.

Table 4.7: Reduction in dB on the six error sensor outputs obtained by implementing the adaptive feedforward controller, regularized with a term of 0.0003, on the real-time experimental setup; the outer factor was regularized with a term of 0.06.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
11.3      11.4      7.3       7.1       8.6       10.5      9.4

The power spectral density of the residual signal measured by the first error sensor is shown in figure 4.4 on the right side. From this figure it may be seen that the performance obtained by the fixed gain controller and that obtained by the adaptive controller are in good correspondence with each other. The difference found at some frequencies might be explained by the existing model uncertainty: according to equation 3.20, the adaptive controller converges to a different controller when perfect plant knowledge may not be assumed, whereas the fixed-gain controller is optimized for the identified models of the real-time systems and may therefore give a less optimal performance when implemented on the real-time experimental setup.
Figure 4.4: Spectral density plot of the first error signal. Shown are the real-time results obtained by implementing the fixed-gain and adaptive feedforward controller on the 6-DOF setup. The controller was regularized with a term of 0.0003 together with regularizing the outer factor by a term of 0.06. (a) Performance obtained by the fixed gain controller. (b) Performance obtained by the adaptive controller.
By calculating the averaged performance reduction coefficient as a function of time, the reduction obtained by the adaptation process may be visualized by a learning curve [7]. In figure 4.5, the learning curve obtained by simulating the adaptive controller is shown on the left; the real-time result is shown on the right.

Figure 4.5: Feedforward adaptation process visualized by a learning curve. (a) Learning curve obtained in simulation. (b) Learning curve obtained in real-time.

From this figure it can be seen that convergence is achieved 15 seconds after the adaptive controller is switched on. It may also be noticed that the convergence behaviour of the simulated controller is in good correspondence with the real-time convergence.
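Such a learning curve can be computed from the recorded signals by block-wise averaging of the reduction over all error sensors. A hedged sketch (Python/NumPy; the block-averaged form below is one common choice, and all names are illustrative):

```python
import numpy as np

def learning_curve(e, d, fs, block_s=1.0):
    """Averaged reduction in dB per time block.

    e, d: arrays of shape (N_samples, L) with the error signals and the
    uncontrolled disturbance signals; fs: sample rate in Hz."""
    n = int(fs * block_s)
    curve = []
    for k in range(e.shape[0] // n):
        sl = slice(k * n, (k + 1) * n)
        ratio = np.sum(e[sl] ** 2, axis=0) / np.sum(d[sl] ** 2, axis=0)
        curve.append(np.mean(10.0 * np.log10(ratio)))  # average over sensors
    return np.array(curve)
```

After convergence the curve settles at the steady-state reduction, which is how a convergence time such as the 15 seconds above can be read off.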
4.4 Feedback control
In this section the results of the controller implemented in a feedback arrangement are presented. Following the same structure as the previous section, the first subsection presents the results obtained by simulating the adaptive and fixed gain controller in Matlab/Simulink. Subsequently, the real-time results of feedback control are discussed.
4.4.1 Simulation results
Fixed gain control
In the previous section, the fixed-gain feedforward controller was designed as a model-based state-space controller. In the feedback arrangement, the corresponding model-based controller can be designed by performing an outer-inner factorization on the primary path, as explained in section 2.4.4. However, the computation of this factorization led to computational problems, and therefore the fixed-gain controller is designed using the correlation properties of the disturbance and filtered disturbance signals, as described in section 2.4.3.
In order to be able to implement the controller in real-time, the fixed gain controller had to be regularized to guarantee a stable internal feedback loop. The regularization terms were derived during the implementation of the adaptive controller on the experimental setup, as will be discussed in further detail later on. It appeared that both the controller and the outer factor needed to be regularized, with terms of 0.003 and 0.01 respectively. The fixed-gain controller can now be obtained according to:
    W_{\mathrm{opt}} = -\left( E\left[ R^{T}(n) R(n) + \beta I \right] \right)^{-1} E\left[ R^{T}(n) \tilde{d}(n) \right]    (4.3)
where the filtered disturbance signal R(n) is calculated according to:

    R(n) = \left[ S \tilde{S}_o^{-1} \right] \tilde{d}^{T}(n)    (4.4)

in which S̃_o denotes the regularized outer factor as defined in equation 3.35, and d̃(n) is defined as in equation 2.55. This resulted in a fixed-gain feedback controller approximated by a 6 × 6 FIR filter structure, each filter consisting of 60 taps. The obtained fixed gain feedback controller
was then simulated in Matlab for 20 seconds, assuming perfect plant knowledge. To be consistent with the real-time measurements, sensor noise was added. The reduction obtained from the simulation is summarized in table 4.8; the power spectral density of the residual signal measured by the first error sensor is shown in figure 4.6 on the left.

Table 4.8: Reduction in dB on the six error sensor outputs obtained by simulating the fixed gain feedback controller. The controller was regularized with a term of 0.003; the outer factor was regularized with a term of 0.01.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
3.6       5.3       2.4       2.9       2.8       3.8       3.5
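With the expectations in equation 4.3 replaced by sample averages, the fixed-gain controller amounts to solving a set of regularized normal equations. A minimal single-channel sketch (Python/NumPy; the thesis computes the multichannel version, and all signal names here are illustrative):

```python
import numpy as np

def wiener_fir(r, d, n_taps, beta=0.0):
    """Regularized FIR Wiener solution in the spirit of equation 4.3:
    w = -(R^T R / N + beta*I)^{-1} (R^T d / N), with the columns of R
    holding the delayed filtered-disturbance samples r(n - l)."""
    N = len(r)
    R = np.column_stack([np.concatenate([np.zeros(l), r[: N - l]])
                         for l in range(n_taps)])
    A = R.T @ R / N + beta * np.eye(n_taps)
    return -np.linalg.solve(A, R.T @ d / N)
```

With beta = 0 this is the unregularized solution; increasing beta trades some performance for a better-conditioned problem, which is the trade-off observed in the tables.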
Adaptive control
Without regularization, the fixed gain feedback controller could not be derived, because inverting the autocorrelation matrix of the filtered disturbance signal led to computational problems. In order to have an idea of the maximum reduction attainable by an adaptive feedback controller, the unregularized controller was simulated adaptively. The unregularized adaptive feedback controller was designed according to the adjoint-LMS algorithm. Assuming perfect plant knowledge, the controller was simulated with a length of 200 taps. To speed up the convergence, postconditioning was applied, using a stepsize of 0.1. The resulting reduction after 300 seconds is shown in table 4.9.

Table 4.9: Reduction in dB on the six error sensor outputs obtained by simulating the unregularized adaptive feedback controller.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
7.0       8.6       3.9       3.8       6.4       5.1       5.8

Next the adaptive controller is made robust to plant uncertainties. The regularization parameters were obtained from the real-time experiments and are the same as those used in the simulation of the regularized fixed-gain controller. The number of taps of each of the FIR filters describing the adaptive controller was set to 60. The model used to obtain the filtered error signal was described by 36 FIR filters, each consisting of 40 taps. The obtained adaptive feedback controller was simulated assuming perfect plant knowledge, using a white noise reference signal for 20 seconds; again, measurement noise was added. The obtained reduction is shown in table 4.10, and the power spectral density of the residual signal measured by the first error sensor is shown in figure 4.6 on the right, indicated by the dashed-dotted line. The dotted line indicates the reduction obtained by the unregularized adaptive controller.

Table 4.10: Reduction in dB on the six error sensor outputs obtained by simulating the adaptive feedback controller regularized with a term of 0.003; the outer factor was regularized with a term of 0.01.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
3.7       5.3       2.5       2.9       2.9       3.9       3.5

From this figure it may be seen that the results obtained by simulating the adaptive feedback controller are in good correspondence with those obtained by simulating the fixed gain feedback controller. However, a good disturbance rejection is only achieved by the regularized controller at the resonance peaks.
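To make the adjoint-LMS idea concrete, the following is a deliberately simplified single-channel feedforward sketch (Python/NumPy): the error is filtered through the time-reversed secondary-path model and the update uses a correspondingly delayed reference, with an optional leakage factor (set to zero here) playing the role of the regularization term. The toy paths, stepsize, and filter lengths are all illustrative, not the thesis' multichannel postconditioned implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4000
u = rng.standard_normal(N)             # reference signal
p = np.array([0.0, 0.25, 0.1])         # toy primary path
s = np.array([0.0, 0.5])               # toy secondary path; model s_hat = s
M = len(s)
w = np.zeros(4)                        # adaptive FIR controller taps
mu, leak = 0.05, 0.0                   # stepsize; leakage (regularization)

d = np.convolve(u, p)[:N]              # disturbance at the error sensor
ubuf = np.zeros(len(w) + M)            # recent reference samples
ybuf = np.zeros(M)                     # recent controller outputs
ebuf = np.zeros(M)                     # recent errors for adjoint filtering
e_log = np.empty(N)

for n in range(N):
    ubuf = np.roll(ubuf, 1); ubuf[0] = u[n]
    y = w @ ubuf[:len(w)]                         # controller output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[n] - s @ ybuf                           # residual error
    ebuf = np.roll(ebuf, 1); ebuf[0] = e
    e_adj = s[::-1] @ ebuf                        # adjoint-filtered error
    # leaky update with the reference delayed by M-1 samples
    w = (1.0 - mu * leak) * w + mu * e_adj * ubuf[M - 1: M - 1 + len(w)]
    e_log[n] = e

print(np.mean(e_log[:500] ** 2), np.mean(e_log[-500:] ** 2))
```

Compared with filtered-x LMS, the adjoint form filters the error signal per sensor instead of every reference-actuator pair, which is what makes the multichannel algorithm computationally cheap.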
Figure 4.6: Spectral density plot of the first error signal. Shown are the results obtained by simulating the fixed-gain and adaptive feedback controller in Matlab/Simulink. The controller was regularized with a term of 0.003 together with regularizing the outer factor by a term of 0.01. Also shown is the result obtained by the unregularized adaptive feedback controller. (a) Performance obtained by the fixed gain controller. (b) Performance obtained by the adaptive controller; the dashed-dotted line indicates the regularized controller, the dotted line the unregularized controller.
4.4.2 Real-time results
Fixed gain control
The fixed gain feedback controller is now implemented on the real-time six degrees-of-freedom experimental setup. Due to the limited computational power available, a reduced-order secondary path model was estimated using SMI: the 58th-order secondary path model specified in section 4.2 is used as the internal model to determine the estimated disturbance signal in the real-time experiments. The reduction obtained during an experiment of 20 seconds is listed in table 4.11; a power spectral density plot of the first error signal is shown in figure 4.7 on the left.

Table 4.11: Reduction in dB on the six error sensor outputs obtained by implementing the fixed gain feedback controller on the real-time 6-DOF setup. The controller was regularized with a term of 0.003; the outer factor was regularized with a term of 0.01.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
3.8       5.2       2.4       2.9       2.7       4.0       3.5
Adaptive control
In order to be able to implement the adaptive controller on the real-time setup, a reduced-order internal model has been used, as was explained earlier. Implementing the controller without regularization led to immediate instability, and attempts to regularize the controller without also regularizing the outer factor were equally unsuccessful. From the output data obtained in the identification process it was observed that the model uncertainty between the internal model and the actual secondary path was largest in the low-frequency range. Due to the high low-frequency gain of S_o^{-1}(z) together with the high low-frequency model uncertainty, the internal feedback loop became unstable. Therefore, in order to efficiently reduce the low-frequency gain of the controller, the outer factor was regularized. The value of this term was determined by an iterative process: choosing a value for the outer-factor term, implementing the resulting adaptive feedback controller on the experimental setup, and subsequently increasing the regularization parameter of the adaptive controller until the adaptation process remained stable. This resulted in a regularization of the outer factor with a term of 0.01; the adaptive part of the controller was regularized with a term of 0.003. The stepsize was then tuned to 0.01, after which the obtained controller was run on the real-time setup for 300 seconds. The obtained performance is summarized in table 4.12; a power spectral density plot of the first error signal is shown in figure 4.7 on the right.

Table 4.12: Reduction in dB on the six error sensor outputs obtained by implementing the adaptive feedback controller, regularized with a term of 0.003, on the real-time experimental setup; the outer factor was regularized with a term of 0.01.

Sensor 1  Sensor 2  Sensor 3  Sensor 4  Sensor 5  Sensor 6  Average
3.7       5.3       2.4       2.9       2.8       4.0       3.5

Figure 4.7: Spectral density plot of the first error signal. Shown are the real-time results obtained by implementing the fixed-gain and adaptive feedback controller on the 6-DOF setup. The controller was regularized with a term of 0.003 together with regularizing the outer factor by a term of 0.01. (a) Performance obtained by the fixed gain controller. (b) Performance obtained by the adaptive controller.

From the data given in the different tables it may be noticed that the performance obtained during the simulations is in close correspondence with the performance obtained during the real-time experiments. Also, the results obtained by the fixed gain controller show a behaviour similar to the results obtained by the adaptive controller.
In figure 4.8 the learning curves for feedback control are shown; the left figure shows the learning curve obtained in simulation. From this figure it may be noticed that adaptive control shows a slightly better performance than fixed-gain control. This is not what may be expected when simulating with perfect plant knowledge. The explanation for this behaviour may be found in the fact that sensor noise was added during the simulations, while the fixed-gain feedback controller was designed to give an optimal result without the presence of sensor noise.

Figure 4.8: Feedback adaptation process visualized by a learning curve. (a) Learning curve obtained in simulation. (b) Learning curve obtained in real-time.

It may also be observed that the reduction obtained by adaptive and fixed-gain control in the real-time experiments differs by a small amount. This may be explained by the fact that, besides the presence of sensor noise, the model uncertainty also degrades the performance of the fixed gain controller.²
4.5 Summary
In this chapter, the results obtained by implementing the controller in a feedforward and in a feedback arrangement were presented. The controller was designed both as a fixed-gain controller and in adaptive form. To speed up the convergence process, postconditioning was applied in both the feedforward and the feedback arrangement. The feedforward controller needed to be regularized in order to reduce the controller output, preventing the actuators from saturating. To efficiently reduce the low-frequency gain of the controller, the outer factor was regularized as well. As a result of the applied regularization, a performance degradation in the low-frequency range was observed.
² The discrepancy between the reduction shown by the learning curves and the data given in the different tables is a result of round-off errors made by taking the average over the different error sensors.
Also the feedback controller needed to be regularized, to guarantee a stable internal feedback loop; again the outer factor was regularized as well. In table 4.13 the averaged performance of the four different regularized control schemes is summarized. As expected, the feedforward controller gives a better performance than the feedback controller.

Table 4.13: Results summarized.

            Feedforward              Feedback
            Fixed-gain   Adaptive    Fixed-gain   Adaptive
Simulation  10.5         10.3        3.5          3.5
Real-time   9.0          9.4         3.5          3.5
Chapter 5
Conclusions & recommendations
The topic of this report was the design and implementation of an adaptive feedback controller to reduce broadband [0 - 1 kHz] disturbances on a six degrees-of-freedom vibration isolation setup. To compare the results obtained by the adaptive feedback controller, the controller was also designed and implemented in a feedforward arrangement. The results obtained by both feedforward and feedback control were compared with the results obtained by the corresponding fixed-gain controllers. The adaptive controllers were designed according to the adjoint least mean squares algorithm, using the postconditioning technique to improve the convergence properties.
5.1 Conclusions
The fixed-gain feedforward controller was designed as a state-space based causal Wiener filter [6]. In order to decrease the actuator input, the controller as well as the outer factor had to be regularized. An average reduction of 10.5 dB was achieved in simulation, while 9.0 dB was achieved in the real-time experiments. The adaptive feedforward controller was also designed using regularization of both the controller and the outer factor in order to decrease the steering signals. By adaptive feedforward control a reduction of 10.3 dB was achieved in simulation, and real-time implementation resulted in an averaged reduction of 9.4 dB. That the adaptive controller gives a better performance than the fixed-gain controller in real-time may be explained by the fact that the adaptive controller compensates for uncertainty in the primary path model.
The fixed-gain feedback controller was implemented by a finite impulse response filter structure. Internal model control was applied, assuming perfect plant knowledge. In order to stabilise the controller on the real-time setup, regularization was applied to the controller as well as the outer factor. A reduction of 3.5 dB was obtained in real-time as well as in simulation; the reduction is mainly obtained at the resonance peaks. The adaptive feedback controller was designed using the principle of internal model control. The adaptive feedback controller was successfully implemented on the six degrees-of-freedom setup by regularizing both the controller and the outer factor. By adaptive feedback control a reduction of 3.5 dB was obtained in real-time; simulating the same controller with perfect plant knowledge assumed also resulted in a reduction of 3.5 dB. The difference seen in the figures of the learning curves may be explained as a result of round-off errors.
In feedback as well as in feedforward arrangement, the performance obtained by fixed-gain control was found to be in good correspondence with the performance obtained by the corresponding adaptive control.
Given the current experimental setup, the adaptive feedforward controller has proven to give a better performance than the adaptive feedback controller.
5.2 Recommendations
By examining the results obtained in this report, some recommendations may be made in order to achieve a better performance given the current experimental setup. These recommendations are summarized as follows.
Instead of regularizing the outer factor, another approach might be to apply robust control by adding a frequency-dependent effort weighting term to the cost function. When the design of the adaptive controller is not constrained by the available computational power, a better performance might be obtained by including such a term in the cost function.
The reduction obtained by the feedforward controller is largely degraded in the low-frequency range because regularization had to be applied. A better performance might be obtained by designing a feedforward controller specifically for the frequency range above 100 Hz.
Appendix A
Inner-outer factorization
A.1 Inner-outer factorization
Given an asymptotically stable state-space system H(z), with M inputs and L outputs, the system may be factorized into an inner factor H_i(z) and an outer factor H_o(z) [10]:

    H(z) = H_i(z) H_o(z)    (A.1)

where H_i(z) is an L × min(L, M) system with the property that it is an all-pass system:

    H_i^{T}(z^{-1}) H_i(z) = I_{\min(L,M)}    (A.2)

which implies that H_i^{T}(z^{-1}) is a left inverse of H_i(z). Further, H_o(z) is of dimensions min(L, M) × M and is factorized in such a way that it has a stable right inverse; it is said to be minimum-phase. Factorizing H_i(z) as an all-pass system has the implication that:

    H^{T}(z^{-1}) H(z) = H_o^{T}(z^{-1}) H_o(z)    (A.3)

which can be interpreted as signals filtered by the outer factor having the same spectral energy as signals filtered by the original system; filtering by the inner factor only results in a delayed signal.
A simple example will explain the inner-outer factorization more clearly. Consider the following third-order single-input single-output system:

    H(z) = \frac{(z-2)(z+0.5i)(z-0.5i)}{(z+0.1)(z+0.3i)(z-0.3i)}    (A.4)

To determine the inner and outer factor of this system, it is first split into a minimum-phase system H_{mp}(z) and a non-minimum-phase system H_{nmp}(z), where the non-minimum-phase system will become the inner factor and the minimum-phase system will become the outer factor:

    H_{mp}(z) = \frac{(z+0.5i)(z-0.5i)}{(z+0.1)(z+0.3i)(z-0.3i)} \cdot \frac{(z-a)}{b}    (A.5)

    H_{nmp}(z) = b \, \frac{(z-2)}{(z-a)}    (A.6)
Here an extra pole is added to the non-minimum-phase system in order to make it proper, and a zero is added to the minimum-phase system to cancel the pole of the non-minimum-phase system. To make the non-minimum-phase system an all-pass system, the product of H_{nmp}(z) and its anticausal counterpart H_{nmp}(z^{-1}) has to be unity for all frequencies:

    H_{nmp}(z) H_{nmp}(z^{-1}) = 1    (A.7)

    b \, \frac{(z-2)}{(z-a)} \cdot b \, \frac{(z^{-1}-2)}{(z^{-1}-a)} = 1    (A.8)

Because this equation should hold for all values of z, the pole a and the factor b are determined by writing out the equation and equating the coefficients of the different powers of z:

    -2b^2 + a = 0    (A.9)

    5b^2 - 1 - a^2 = 0    (A.10)

which yields a = 0.5 and b = 0.5. The inner and outer factor of the system H(z) are now fully
determined:
    H_o(z) = \frac{1}{0.5} \cdot \frac{(z+0.5i)(z-0.5i)(z-0.5)}{(z+0.1)(z+0.3i)(z-0.3i)}    (A.11)

    H_i(z) = 0.5 \, \frac{(z-2)}{(z-0.5)}    (A.12)
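The factorization can be verified numerically: on the unit circle the inner factor must have unit magnitude, the outer factor must have the same magnitude as H(z), and their product must recover H(z). A quick Python/NumPy check of equations A.4, A.11 and A.12:

```python
import numpy as np

z = np.exp(1j * np.linspace(0.01, np.pi, 500))   # points on the unit circle

H = (z - 2) * (z + 0.5j) * (z - 0.5j) / ((z + 0.1) * (z + 0.3j) * (z - 0.3j))
Ho = (1 / 0.5) * (z + 0.5j) * (z - 0.5j) * (z - 0.5) \
     / ((z + 0.1) * (z + 0.3j) * (z - 0.3j))
Hi = 0.5 * (z - 2) / (z - 0.5)

print(np.max(np.abs(np.abs(Hi) - 1.0)))       # inner factor is all-pass
print(np.max(np.abs(np.abs(Ho) - np.abs(H)))) # outer factor preserves |H|
print(np.max(np.abs(Hi * Ho - H)))            # product recovers H(z)
```

All three printed values are at machine-precision level, confirming that a = b = 0.5 indeed yields an all-pass inner factor.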
The pole-zero plot of the system H(z) is shown in figure A.1; the pole-zero maps of the inner and outer factor of H(z) are shown in figure A.2.

Figure A.1: Pole-zero map of the system H(z).

It may be noticed that the outer factor is indeed the minimum-phase part of the original system, with the zero outside the unit circle mirrored in the unit circle. The inner factor consists of the non-minimum-phase part of the original system together with its mirrored inverse.
[Figure A.2: An inner-outer factorization performed on H(z), shown by the pole-zero maps of (a) the all-pass inner factor H_{i}(z) and (b) the minimum-phase outer factor H_{o}(z).]
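The factorization can also be verified numerically by evaluating the system and its factors on the unit circle. The sketch below uses only the pole/zero data from equations (A.5), (A.6), (A.11) and (A.12); the helper names h, h_outer and h_inner are illustrative.

```python
import cmath


def h(z):
    # Original system: zeros at +/-0.5i and 2, poles at -0.1 and +/-0.3i.
    return ((z + 0.5j) * (z - 0.5j) * (z - 2)) / (
        (z + 0.1) * (z + 0.3j) * (z - 0.3j))


def h_outer(z):
    # Minimum-phase factor: the zero at 2 is replaced by its mirror at 0.5.
    return (1 / 0.5) * ((z + 0.5j) * (z - 0.5j) * (z - 0.5)) / (
        (z + 0.1) * (z + 0.3j) * (z - 0.3j))


def h_inner(z):
    # All-pass factor carrying the non-minimum-phase zero at 2.
    return 0.5 * (z - 2) / (z - 0.5)


for k in range(16):
    z = cmath.exp(2j * cmath.pi * k / 16)
    assert abs(h(z) - h_outer(z) * h_inner(z)) < 1e-12  # H = H_o H_i
    assert abs(abs(h_inner(z)) - 1.0) < 1e-12           # inner is all-pass
```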
A.2 Outer-inner factorization
Given an asymptotically stable state space system H(z) with M inputs and L outputs, this system may be factorized into a co-outer factor H_{co}(z) and a co-inner factor H_{ci}(z):

H(z) = H_{co}(z) H_{ci}(z)    (A.13)
Here H_{ci}(z) is a \min(L, M) \times M system factorized such that it is an all-pass system:

H_{ci}(z) H_{ci}^{T}(z^{-1}) = I_{\min(L,M)}    (A.14)
which implies that H_{ci}^{T}(z^{-1}) is a right inverse of H_{ci}(z). Furthermore, H_{co}(z) is of dimensions L \times \min(L, M) and is factorized such that it has a stable left inverse; it is said to be minimum-phase. Factorizing H_{ci}(z) as an all-pass system has the implication that:
H(z) H^{T}(z^{-1}) = H_{co}(z) H_{co}^{T}(z^{-1})    (A.15)
which can be interpreted as the co-outer factor having the same spectral density as the original system. When the original system is a SISO system, both factorizations result in the same inner and outer factors.
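For a SISO system given by its zeros, poles and gain, this outer (minimum-phase) factor can be sketched directly by mirroring every zero outside the unit circle to its reciprocal conjugate and rescaling the gain so that the magnitude response on the unit circle is preserved. The helpers outer_factor and evaluate below are illustrative, not part of any library; applied to the example of section A.1 they reproduce the outer factor of equation (A.11).

```python
import cmath


def outer_factor(zeros, poles, gain):
    """Zeros/poles/gain of the minimum-phase factor with the same
    magnitude response on the unit circle (gain treated as a scale)."""
    new_zeros, g = [], gain
    for q in zeros:
        if abs(q) > 1:
            new_zeros.append(1 / q.conjugate())  # mirror zero into unit circle
            g *= abs(q)                          # keep |H| on the unit circle
        else:
            new_zeros.append(q)
    return new_zeros, poles, g


def evaluate(zeros, poles, gain, z):
    val = gain
    for q in zeros:
        val *= (z - q)
    for p in poles:
        val /= (z - p)
    return val


# Example system of section A.1: the zero at 2 mirrors to 0.5, gain 1 -> 2.
zeros = [0.5j, -0.5j, 2 + 0j]
poles = [-0.1 + 0j, 0.3j, -0.3j]
oz, op, og = outer_factor(zeros, poles, 1.0)

for k in range(12):
    z = cmath.exp(2j * cmath.pi * k / 12)
    assert abs(abs(evaluate(zeros, poles, 1.0, z))
               - abs(evaluate(oz, op, og, z))) < 1e-12
```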
Bibliography
[1] S.C. Douglas. Fast implementations of the filtered-X LMS and LMS algorithms for multichannel active noise control. IEEE Transactions on Speech and Audio Processing, 7(4), July 1999.
[2] S. Elliott. Signal Processing for Active Control. Academic Press, 2001.
[3] R. Fraanje. Robust and fast schemes in broadband active noise and vibration control.
PhD thesis, University of Twente, 2004.
[4] J.T. Hofman. Feedback control of broadband disturbances on experimental vibration isolation setups. Master's thesis, 2004.
[5] S.M. Kuo and D.R. Morgan. Active Noise Control Systems. John Wiley and Sons, 1996.
[6] A.P. Leemhuis. Design of the causal Wiener controller for active vibration isolation control: a state-space approach. Master's thesis, 2004.
[7] G. Nijsse, H. Super, J. van Dijk, and J.B. Jonker. Feedforward control of broadband disturbances on a six-degrees-of-freedom vibration isolation setup. Active 2004: The 2004 International Symposium on Active Control of Sound and Vibration, 2004.
[8] G. Nijsse, M. Verhaegen, B. De Schutter, D. Westwick, and N. Doelman. State space modelling in multichannel active control systems. Active 1999: The 1999 International Symposium on Active Control of Sound and Vibration, pages 909-920, December 1999.
[9] G. Nijsse and R. Fraanje. iofact function for Matlab, 2002.
[10] C. Oară and A. Varga. The general inner-outer factorization problem for discrete-time systems. Karlsruhe, 1999.
[11] C.D. Petersen. Controlling broadband disturbances on a one-degree-of-freedom experimental vibration isolation setup. Master's thesis, 2003.
[12] M.K. Post. Algorithms for suppression of tonal disturbances on a six-degrees-of-freedom isolation setup. Master's thesis, 2004.
[13] H. Super, G. Nijsse, J. van Dijk, and J. Jonker. Active 2004: The 2004 International Symposium on Active Control of Sound and Vibration, 2004.
[14] P. van Overschee and B. De Moor. Subspace Identification for Linear Systems. Kluwer Academic Publishers, 1996.
[15] E. Wan. Adjoint LMS: an efficient alternative to the filtered-X LMS and multiple error LMS algorithms. Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, 3:1842-1845, May 1996.