
University of Ghent

Department of Control Engineering and Automation

Nonlinear Predictive Control

Nonlinear controller based on the EPSAC approach

Research Report

by
Mircea Lazăr

Promotor
Prof. Dr. Ir. Robin De Keyser
Acknowledgements

I wish to express my appreciation and gratitude to my promoters Prof. Dr. Ir. Robin De
Keyser from University of Ghent and Prof. Dr. Ir. Octavian Păstrăvanu from the Technical
University “Gh. Asachi” of Iaşi for their help and time over the research period. Also, to Lulu
for sharing his experience.
Contents

1. Introduction 1

2. The EPSAC Approach to Nonlinear Predictive Control 3

2.1 MBPC and NPC 3


2.2 Linear EPSAC 6
2.3 Nonlinear EPSAC 10

3. DC-DC Power Converters 16

3.1 Basic notions about DC-DC converters 16


3.2 Models for the Buck-Boost converter 17
3.2.1 Switched models 17
3.2.2 Averaged models 21
3.3 Control structures for DC-DC converters 23

4. Linear EPSAC versus Nonlinear EPSAC 28

4.1 Matlab implementation 28


4.2 Start-Up behavior and setpoint tracking 29

5. Simulink Model Design and Nonlinear EPSAC Implementation 37

5.1 Simulink model design and implementation of the Buck-Boost converter 37


5.2 Simulation experiments for the nonlinear EPSAC performed in Simulink 43
5.2.1 Simulink implementations 43
5.2.2 Simulink simulation experiments 45
5.3 Convergence analysis 51

6. Neural Model Based Predictive Controller for Nonlinear Systems 55

6.1 Neural models and predictors 55


6.2 Predictive control using neural networks 59
6.3 Simulation study 61
6.4 Neural predictive controller for nonlinear systems based on the EPSAC
approach 68
Conclusions 70

References 71
1. Introduction

Model Based Predictive Control (MBPC) is a control strategy that uses an optimizer to solve
for the control trajectory over a future time horizon based on a dynamic model of the process.
In the last decades, MBPC has become an important, distinctive part of control theory and has
been used in over 2000 industrial applications in the refining, petrochemical, chemical, pulp
and paper, and food processing industries. Until recently, industrial applications of MBPC
have relied on linear dynamic models even though most processes are nonlinear. Predictive
control based on linear models is acceptable when the process operates at a single setpoint
and the primary use of the controller is disturbance rejection. Still, many processes must
operate at different setpoints depending on the grade of the product to be produced. Because
these transitions drive the process across its nonlinear operating range, linear MBPC
frequently results in poor control performance. In order to properly control such plants, a
Nonlinear Predictive Control (NPC) technology is needed.

Many researchers have developed nonlinear models using various technologies, but most have
not been practical for wide-scale industrial application. Until recently, empirically based
nonlinear models had also seen limited industrial use. Over the past five years, however,
several software tools for building, simulating and implementing nonlinear MBPC controllers
have been created.

This diploma project, within the above research frame, investigates NPC focusing on the
Extended Predictive Self-Adaptive Control (EPSAC) algorithm for nonlinear systems. The
EPSAC approach to NPC differs from other attempts by introducing a straightforward way to
calculate the optimal control that does not depend on the type of the nonlinear process
model. The present work
gives a detailed analysis of the nonlinear EPSAC algorithm covering both theoretical and
practical aspects. The control algorithm for nonlinear systems derives from the linear version
of EPSAC, using the concepts of base response and optimised response to calculate the future
outputs. In order to illustrate the applicability of the nonlinear EPSAC approach to NPC, a
DC-DC power converter was considered as the nonlinear plant to be controlled. This plant is
highly nonlinear due to the incorporated switch, which causes the process to change
dramatically within one operating cycle.

The diploma project aims to study the most important issues of the EPSAC approach to NPC
related to its application to process control. In order to achieve the proposed objectives the
text is organized as follows.

Chapter 2 describes the theoretical aspects of the EPSAC algorithm for both the linear and the
nonlinear versions of the control law. Starting from the original EPSAC concepts regarding
the representation of the future plant response as being the cumulative result of two effects
(base response and optimizing response) the nonlinear EPSAC algorithm is presented.

Chapter 3 gives details about the Buck-Boost DC-DC converter, the plant used to illustrate
the validity of the EPSAC algorithm for nonlinear systems. Several linear and nonlinear,
continuous and discrete plant models are presented together with the main control structures
utilized for DC-DC power converters.

Chapter 4 further investigates the EPSAC algorithm in order to extend the validity of the
theoretical aspects presented in Chapter 2 to the field of simulation experiments. Linear and
nonlinear EPSAC are simulated here in closed loop, with the plant presented in Chapter 3 as
the controlled process, to demonstrate the soundness of the EPSAC algorithm for nonlinear
systems. The aim is to compare the performances of the linear and nonlinear EPSAC
algorithms on the same nonlinear plant.

In order to thoroughly test the nonlinear EPSAC algorithm, a more realistic model of the
Buck-Boost DC-DC converter is developed in Chapter 5 using Simulink. Examples dealing
with the Simulink-implemented plant model and the nonlinear EPSAC controller are given to
test the disturbance rejection, the robustness and the tracking performances.

Finally, Chapter 6 analyzes a neural network based nonlinear predictive controller, which
eliminates the most significant obstacles to NPC implementation by facilitating the
development of nonlinear models and providing a rapid, reliable solution of the control
algorithm. This controller is compared with the EPSAC controller using the same neural
network based model, in order to underline the speed of the latter.

The results obtained were very good, extending the theoretical validity of the nonlinear
EPSAC to the field of simulation experiments, a fact that encourages our future plan of
approaching the full complexity of a real-life implementation.

2. The EPSAC Approach to Nonlinear Predictive
Control

The Extended Prediction Self-Adaptive Control algorithm is presented in this chapter,
starting from the simplified version for linear systems; then, the EPSAC for nonlinear
systems is thoroughly analysed. The aim is to give a strong theoretical background for the
nonlinear EPSAC algorithm, which results in an accessible approach to NPC. The control
problem is to calculate a sequence of inputs that will take the process from its current state to
a desired steady state, which is determined by a local steady state optimization based on a cost
function. The optimal steady state must be recalculated at each time instant because
disturbances affecting the plant may change the location of the optimal operating point. In
principle, the NPC is limited to those problems for which a global optimal solution to the
dynamic optimization can be found between one control execution and the next.

The nonlinear EPSAC, due to its original approach to NPC, turns out to be an appropriate way
to solve the problems that come with nonlinear predictive control.

2.1 MBPC and NPC


MBPC methodology

Model Based Predictive Control (MBPC) is not a specific control strategy but rather an
ample range of methods developed around certain common principles. These design
methods lead to linear controllers, which have practically the same structure. The ideas
appearing to a greater or lesser degree in the whole predictive control family are basically
the following:
- explicit use of a model to predict the process output at future time instants;
- calculation of a control sequence minimizing a certain objective function;
- receding horizon strategy, so that at each instant the horizon is displaced towards the
future, which involves the application of the first control signal of the sequence
calculated at each step.

The various MBPC algorithms only differ amongst themselves in the model used to represent
the process, the noises and the cost function to be minimized. Predictive control is of an
open nature, within which many works have been developed in the academic world,
including (De Keyser et al., 1985; Garcia et al., 1989; De Keyser, 1991; Qin and
Badgwell, 1997; Camacho and Bordons, 1999; Rawlings, 2000). There are also many
industrial applications of predictive control, successfully in use at present time in several
fields such as the cement industry, drying towers, robot arms, distillation columns, PVC
plants, steam generators, servos, etc. The good performance of these applications shows the
capacity of the MBPC to achieve highly efficient control systems able to operate during long
periods of time with no intervention at all. MBPC presents a series of advantages over other
methods as follows:
- intuitive concepts and relatively easy tuning make it attractive to staff with limited
control background;
- it can be used for a wide range of processes, including systems with time delay, non-
minimum-phase systems and even unstable ones;
- the multivariable case can easily be dealt with;
- it intrinsically compensates for dead times;
- its extension to the treatment of constraints is conceptually simple, and constraints can be
systematically included during the design process.

Besides these convincing benefits, which gradually attracted more members into the MBPC
club, the industrial dissemination of MBPC technology was also made possible by the fact
that certain MBPC-enabling technologies have now reached a certain state of maturity:
- modeling and identification: the techniques are now diverse and powerful, able to work
in an adverse environment, even with a poor signal/noise ratio;
- digital computers: they are fast, reliable and affordable, able to run complex algorithms
such as constrained MBPC on-line.

The methodology of all the controllers belonging to the MBPC family is characterized by the
following strategy, represented in Fig. 2.1.

Fig. 2.1 The MBPC strategy

At each ‘current’ moment t, the process output y(t+k) is predicted over a time horizon
k = 1...N2. The predicted values are indicated by y(t+k/t) and the value N2 is called the
prediction horizon. The prediction is done by means of a model of the process; it is assumed
that this model is available. The forecast depends on the past inputs and outputs, but also on
the future control scenario {u(t+k/t), k = 0...N2−1} (i.e. the control actions that we intend to
apply from the present moment t on);

A reference trajectory {r(t+k/t), k = 1...N2}, starting at r(t/t) = y(t), is defined over the
prediction horizon, describing how we want to guide the process output from its current
value y(t) to its setpoint w(t).

The control vector {u(t+k/t), k = 0...N2−1} is calculated in order to minimize a specified
cost function, depending on the predicted control errors {[r(t+k/t) − y(t+k/t)], k = 1...N2};
also, in most methods there is some structuring of the future control law
{u(t+k/t), k = 0...N2−1}, and there might also be constraints on the process variables; these
are all very important concepts of MBPC, which will be explained later on. The 1st element
u(t/t) of the optimal control vector is actually applied to the real process. All other elements
of the calculated control vector can be forgotten, because at the next sampling instant all
time-sequences are shifted, a new output measurement y(t+1) is obtained and the whole
procedure is repeated; this leads to a new control input u(t+1/t+1), which is generally
different from the previously calculated u(t+1/t); this principle is called the ‘receding
horizon’ strategy.

In the above strategy, some important elements characterizing MBPC can be recognized:
- prediction by means of a process model;
- specification of a reference trajectory;
- structuring of the future (postulated) control law;
- definition of a cost function (and constraints);
- calculation of the optimizing control scenario.
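The elements above fit a simple receding-horizon loop. The sketch below is illustrative only; `plant_read`, `plant_write` and `optimize` are hypothetical callables standing for the measurement channel, the actuator and the cost-function minimization over the horizon:

```python
# Receding-horizon skeleton of the MBPC strategy (illustrative sketch).
def mbpc_loop(plant_read, plant_write, optimize, steps):
    applied = []
    for t in range(steps):
        y = plant_read()           # new output measurement y(t)
        u_seq = optimize(y)        # optimal scenario {u(t+k/t), k >= 0}
        plant_write(u_seq[0])      # apply only the first element u(t/t)
        applied.append(u_seq[0])   # the rest is discarded; horizon recedes
    return applied
```

At every sampling instant the optimization is redone with the fresh measurement, which is exactly the receding-horizon principle described above.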

The MBPC strategy can be visualized in the following block-scheme (Fig. 2.2):

Fig. 2.2 Basic structure of MBPC (past inputs and outputs feed the model; the predicted
outputs are compared with the reference trajectory; the optimizer derives the future inputs
from the future errors, subject to the cost function and the constraints)

Consequently, the process model plays a decisive role in the controller. The model must be
capable of capturing the process dynamics so as to precisely predict the future output as well
as being simple to implement and understand.

NPC methodology

The first predictive algorithms developed utilized only linear models, which were
considered to bring most of the benefits of MBPC. A large number of algorithms have appeared in
literature including model predictive heuristic control (Richalet et al., 1978), dynamic matrix
control (Cutler and Ramaker, 1980), extended prediction self-adaptive control (De Keyser and
van Cauwenberghe, 1985), generalized predictive control (Clarke et al., 1987) and unified
predictive control (Soeterboek et al., 1990). However, because superior performances and
control quality were required, the use of nonlinear models became attractive (Sommer, 1994;
Rawlings et al., 1994; De Keyser, 1998; Liu et al., 1998; Qin and Badgwell, 1998; Allgöwer
et al., 1999; Piché et al., 2000). As stated before, the MBPC strategy results in a linear
controller, and the use of a nonlinear model within the control law may appear strange, but
this intuitive modification proved to give good results.

Once this step was taken, a new concept emerged: Nonlinear Predictive Control (NPC),
defined as a nonlinear programming problem of optimising a predictive multistep cost
function, with the nonlinear differential equations of the process model cast as equality
constraints. In the first approaches to NPC, analytic methods and specialized optimisation techniques
built within MBPC were extended to nonlinear model representations like Hammerstein
models, Wiener models, Volterra series and bilinear models (Sommer, 1994) or neural
network models (Hunt et al., 1992). Also, an analytic solution was given in (Liu et al., 1992),
using Neural NARMAX models in the case of nonlinear affine processes. Even so, these
solutions meant to extend MBPC to NPC remain valid only in particular situations, when the
process model is one of the types mentioned above. In the case of plants with high
nonlinearities included and subject to frequent disturbances, or in the case of servo problems,
when the operating point is frequently changing, a true NPC methodology valid for a wide
class of nonlinear processes is required. A possible way to solve this problem is introduced in
(De Keyser, 1998). The EPSAC algorithm for nonlinear systems is capable of obtaining (in an
iterative way) the optimal control signal, not depending on the model type. In this way, the
use of complex optimising techniques, which result in a complicated solution unsuitable for
real-time implementation, can be avoided.

The present chapter gives a detailed analysis of the nonlinear EPSAC algorithm covering both
theoretical and practical aspects.

2.2 Linear EPSAC


One of the earliest predictive controllers, Extended Predictive Self-Adaptive Controller
(EPSAC) introduced in (De Keyser and van Cauwenberghe, 1985), has the advantage that the
process model to be used by the control law can be either linear or nonlinear. Therefore, the
process (possibly nonlinear) is modelled as
y(t) = x(t) + n(t)    (2.1)
which is illustrated in Fig. 2.3, with:
- y(t): (measured) process output;
- u(t): process input;
- x(t): model output;
- n(t): process/model disturbance.

Fig. 2.3 Process model (the input u drives the MODEL, producing x; the disturbance n is
added to give the measured output y)

The disturbance n(t) includes all effects in the measured output y(t) that do not come from
the model output x(t). These non-measurable disturbances have a stochastic character with a
non-zero average value and can be modelled by a coloured noise process:

n(t) = [C(q⁻¹) / D(q⁻¹)] · e(t)    (2.2)
where:
- e(t): uncorrelated white noise with zero mean value;
- C(q⁻¹), D(q⁻¹): monic polynomials in the backward shift operator q⁻¹, of orders nC and nD.

The model output x(t) represents the effect of the process input u(t) on the process output
y(t); it is also a non-measurable signal. The relationship between u(t) and x(t) is given by the
generic dynamic model:

x(t) = f[x(t−1), x(t−2), ..., u(t−1), u(t−2), ...]    (2.3)

where f[.] represents a known function and t denotes the discrete time index. In other words,
model (2.3) can be a linear or a nonlinear difference equation.

The fundamental step in the MBPC methodology consists in the prediction of the process
output at time instant t, {y(t+k/t), k = 1...N2}, over the prediction horizon N2, based on:
- measurements available at time instant t: {y(t), y(t−1), ..., u(t−1), u(t−2), ...};
- future (postulated) values of the input signal: {u(t/t), u(t+1/t), ...}.

Using the generic process model (2.1), the predicted values of the output are:
y(t+k/t) = x(t+k/t) + n(t+k/t)    (2.4)
The prediction of x(t) can be done by recursion of the process model. There are two possible
implementation configurations, illustrated below in Fig. 2.4 and Fig. 2.5 for a 3rd order model:

Parallel Model: the model computes x(t+k/t) from its own previous outputs x(t+k−1/t),
x(t+k−2/t), x(t+k−3/t) and the inputs u(t+k−1/t), u(t+k−2/t), u(t+k−3/t).

Fig. 2.4 Parallel model

Series-Parallel Model: the model computes x(t+k/t) from the process outputs y(t+k−1/t),
y(t+k−2/t), y(t+k−3/t) and the inputs u(t+k−1/t), u(t+k−2/t), u(t+k−3/t).

Fig. 2.5 Series-Parallel model
The parallel model (sometimes called independent model) can only be used for stable
processes. The series-parallel model (sometimes called realigned model) can be used also for
unstable processes.

The difference between both implementation structures is also illustrated in the following
figure (Fig. 2.6):

Fig. 2.6 Implementation of Parallel and Series-Parallel models (the P-Model runs in parallel
with the process, driven only by u; the S/P-Model is realigned with the measured process
output y; in both cases the disturbance estimate is n = y − x)

For the sequel we assume a P-model, but the strategy is completely similar for an S/P-model.
At each sampling instant t, the recursion is started with k = 0 and x(t/t) is computed using the
model input vector [x(t−1) x(t−2) x(t−3) u(t−1) u(t−2) u(t−3)], which contains values from
the past, thus known at time t (available in our database). Notice that x(t) ≡ x(t/t) and that
this value has to be saved in the database for further use at the next sampling instants. Then
for k = 1, the previously computed x(t/t) is used at the model input to compute x(t+1/t), etc.
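The P-model recursion just described can be sketched in a few lines; `f` stands for the generic model (2.3) of a 3rd order process, and the function name and argument layout are illustrative assumptions, not the report's implementation:

```python
# Parallel-model recursion: predict x(t+k/t) for k = 0..N2 (3rd order model).
def predict_parallel(f, x_past, u_past, u_future, N2):
    """x_past = [x(t-1), x(t-2), x(t-3)], u_past likewise (most recent first);
    u_future[k] = u(t+k/t) is the postulated input scenario."""
    xs, us = list(x_past), list(u_past)
    preds = []
    for k in range(N2 + 1):
        xk = f(xs, us)                  # x(t+k/t) from the model
        preds.append(xk)
        xs = [xk] + xs[:-1]             # feed back the MODEL output (P-model)
        us = [u_future[k]] + us[:-1]    # shift in the postulated input
    return preds                        # [x(t/t), x(t+1/t), ..., x(t+N2/t)]
```

For the S/P-model, the only change would be that measured outputs y(·) replace the fed-back model outputs wherever they are available.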

Following a similar procedure, the future values of n(t) can be predicted. At time t, we can
compute x(t) using the data [x(t−1) x(t−2) x(t−3) u(t−1) u(t−2) u(t−3)] in the model. Using
the measured process output y(t), we then compute the current value of the disturbance with
the generic process model: n(t) = y(t) − x(t). Notice that we also have the previous values
n(t−1), n(t−2), ... available in the database.
The filtered disturbance signal is computed using the following relation:

n_f(t) = [D(q⁻¹) / C(q⁻¹)] · n(t)    (2.5)

with the difference equation:

n_f(t) = −c1·n_f(t−1) − c2·n_f(t−2) − ... + n(t) + d1·n(t−1) + d2·n(t−2) + ...    (2.6)

Considering the disturbance model given in (2.2), we conclude that the signal n_f is white
noise: n_f(t) = e(t).

As white noise is by definition uncorrelated, its best prediction is its mean value, which is
zero:

n_f(t+k/t) ≡ 0,  k = 1...N2    (2.7)

In this way, the best prediction of the disturbance is obtained from:

n(t+k/t) = [C(q⁻¹) / D(q⁻¹)] · n_f(t+k/t)    (2.8)

computed with the difference equation:

n(t+k/t) = −d1·n(t+k−1/t) − d2·n(t+k−2/t) − ... + n_f(t+k/t) + c1·n_f(t+k−1/t) + ...    (2.9)

The recursion goes from k = 1...N2. For k = 1, the signal values in the right-hand side n(t/t),
n(t−1/t), ..., n_f(t/t), n_f(t−1/t), ... are known, while n_f(t+1/t) = 0. The computed value
n(t+1/t) is then used in the right-hand side, together with n_f(t+2/t) = 0, in order to compute
n(t+2/t), etc.
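The recursion (2.9), with the future filtered disturbance set to zero, can be sketched as follows (a hypothetical helper, assuming the histories of n and n_f are at least as long as the orders of D and C and are kept in the database as described):

```python
# Predict n(t+k/t), k = 1..N2, via eq. (2.9) with nf(t+k/t) = 0 for k >= 1.
def predict_disturbance(c, d, n_hist, nf_hist, N2):
    """c = [c1, c2, ...], d = [d1, d2, ...]: coefficients of the monic
    C(q^-1) and D(q^-1); n_hist = [n(t), n(t-1), ...], nf_hist likewise."""
    ns, nfs = list(n_hist), list(nf_hist)
    preds = []
    for k in range(1, N2 + 1):
        nk = (-sum(di * ns[i] for i, di in enumerate(d))
              + sum(ci * nfs[i] for i, ci in enumerate(c)))  # + nf(t+k/t) = 0
        preds.append(nk)
        ns = [nk] + ns[:-1]       # shift the predicted disturbance in
        nfs = [0.0] + nfs[:-1]    # future filtered disturbance is zero
    return preds
```

For example, with C = 1 and D = 1 − 0.9q⁻¹ the prediction decays geometrically, n(t+k/t) = 0.9ᵏ·n(t), as expected for a first-order disturbance model.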

Theoretically, the future response can be considered as the cumulative result of two effects, a
typical characteristic of most MBPC algorithms:

y(t+k/t) = y_free(t+k/t) + y_forced(t+k/t)    (2.10)

The two contributions have different origins: while the free response is the effect of past
control, of a default future control scenario and of future disturbances, the forced response is
the effect of future control moves. The component y_forced is the cumulative effect of a
sequence of step inputs {Δu(t+k/t), k = 0...Nu−1}:

y_forced(t+k/t) = g_k·Δu(t/t) + g_{k−1}·Δu(t+1/t) + ... + g_{k−Nu+1}·Δu(t+Nu−1/t)    (2.11)

The parameters g1, g2, ..., g_k, ..., g_N2 are the coefficients of the unit step response of the
system, i.e. the response of the system output to a stepwise change of the system input (with
amplitude 1). Note that g0 = g−1 = g−2 = ... ≡ 0.

For a linear system, the unit step response does not depend on the operating point, and the
coefficients of the unit step response can be calculated once, offline, using the process
model. In the case of a nonlinear system, the unit step response is different at each operating
point. For this reason, when the process is nonlinear, the unit step response coefficients are
calculated at each sampling period, using a procedure that will be detailed in the next
paragraph. The forced response can be expressed in matrix notation:
Y_forced = G·U    (2.12)

leading to the key MBPC equation:

Y = Ȳ + G·U    (2.13)

with the notations:

Y = [ y(t+N1/t)  ...  y(t+N2/t) ]^T
Ȳ = [ y_free(t+N1/t)  ...  y_free(t+N2/t) ]^T
U = [ Δu(t/t)  ...  Δu(t+Nu−1/t) ]^T

      ⎡ g_N1      g_N1−1    g_N1−2    ...   ...        ⎤
      ⎢ g_N1+1    g_N1      g_N1−1    ...   ...        ⎥
G  =  ⎢ ...       ...       ...       ...   ...        ⎥    (2.14)
      ⎣ g_N2      g_N2−1    g_N2−2    ...   g_N2−Nu+1  ⎦
The controller is obtained by minimizing the cost function:

J = Σ_{k=N1..N2} [r(t+k/t) − y(t+k/t)]² + λ · Σ_{k=0..Nu−1} [Δu(t+k/t)]²    (2.15)

with Δu(t+k/t) ≡ 0 for k ≥ Nu, and:

Δu(t+k/t) = u(t+k/t) − u(t+k−1/t)
r(t+k/t) = α·r(t+k−1/t) + (1−α)·w(t+k/t),  k = 1...N2,  r(t/t) = y(t)
with the notations:
- N1...N2: coincidence horizon (default N1=time-delay);
- λ : weight parameter (default 0);
- α : filter parameter (default 0);
- w: setpoint;
- r: reference trajectory.
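The reference trajectory above is a simple first-order filter from the current output towards the setpoint; a sketch, assuming for simplicity that the setpoint w is held constant over the horizon:

```python
# r(t+k/t) = alpha*r(t+k-1/t) + (1-alpha)*w, k = 1..N2, with r(t/t) = y(t).
def reference_trajectory(y_t, w, alpha, N2):
    r, traj = y_t, []
    for k in range(1, N2 + 1):
        r = alpha * r + (1.0 - alpha) * w   # first-order approach to w
        traj.append(r)
    return traj
```

With the default α = 0 the reference jumps immediately to the setpoint; values of α closer to 1 prescribe a smoother approach.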
Utilizing the matrix notation for the cost function (2.15) and minimizing it w.r.t. U, the
optimal control signal is obtained:

U* = (G^T·G + λ·I)^−1 · G^T · (R − Ȳ)    (2.16)

where:

R = [ r(t+N1/t)  ...  r(t+N2/t) ]^T

Only the first element Δu(t/t) of U* is needed in order to compute the actual control input:

u(t) = u(t−1) + Δu(t/t)    (2.17)

At the next sampling instant t+1, the whole procedure is repeated, taking into account the
new measurement information y(t+1), in accordance with the receding-horizon principle.
The matrix [G^T·G + λ·I], which has to be inverted, has dimension Nu × Nu. For the default
case Nu = 1, this results in a scalar control law.
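For the default case Nu = 1, G reduces to a single column g = [g_N1 ... g_N2]^T and (2.16)-(2.17) collapse to a scalar law; a sketch under the assumption that the free response over the coincidence horizon has already been computed:

```python
# Scalar EPSAC law for Nu = 1: du = g^T (R - Ybar) / (g^T g + lambda).
def epsac_nu1(g, r, y_free, u_prev, lam=0.0):
    num = sum(gk * (rk - yk) for gk, rk, yk in zip(g, r, y_free))
    den = sum(gk * gk for gk in g) + lam
    du = num / den            # the single control move, eq. (2.16)
    return u_prev + du        # u(t) = u(t-1) + du, eq. (2.17)
```

Note how the weight λ only enlarges the denominator, damping the control move without changing its direction.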

2.3 Nonlinear EPSAC

The original EPSAC approach (De Keyser and van Cauwenberghe, 1985) had a slightly
different formulation compared to the one described in Section 2.2. In fact, the concepts of free
response and forced response, as presented before, correspond to the popular Generalized
Predictive Control (GPC) approach. When dealing with linear systems, both approaches result
in identical optimal control actions. However, the original EPSAC concepts of base response
and optimizing response are more general. They are presented here, mainly because they open
new perspectives in the field of NPC.

Indeed, the GPC concept that the future response y(t+k/t) can be considered as the
cumulative result of two effects (see (2.10)) is theoretically only valid for linear systems,
because it is based on the superposition principle. The 2nd term y_forced(t+k/t) never
becomes zero, except in the steady-state case, because it depends on Δu(t+k/t). In EPSAC,

the future response is considered as being the cumulative result of two other effects:

y(t+k/t) = y_base(t+k/t) + y_optimize(t+k/t)    (2.18)

in which the 2nd term can optionally be made equal to zero in an iterative way for nonlinear
systems. This then results in the optimal solution, also for nonlinear systems, because the
superposition principle is no longer involved.
The two contributions have the following origins:

y_base(t+k/t):

- effect of past control {u(t−1), u(t−2), ...};
- effect of a basic future control scenario, called u_base(t+k/t), k ≥ 0, which is defined a
priori; some ideas on how to choose u_base(t+k/t) will be presented later;
- effect of future disturbances n(t+k/t).

The component y_base(t+k/t) can easily be obtained by taking u_base(t+k/t) as the model
input: u(t+k/t) = u_base(t+k/t).

Notice that the GPC approach is a special case of the above EPSAC concept; it corresponds
to the choice {u_base(t+k/t) ≡ u(t−1), k ≥ 0}, resulting in the free response y_free(t+k/t),
valid only for linear systems.

y_optimize(t+k/t):

- effect of the optimizing future control actions {δu(t/t), δu(t+1/t), ..., δu(t+Nu−1/t)}, with
δu(t+k/t) = u(t+k/t) − u_base(t+k/t).

Fig. 2.7 illustrates the concept.

Fig. 2.7 The nonlinear EPSAC concept (example with Nu = 4: the postulated input u(t+k/t)
is the sum of the base scenario u_base(t+k/t) and the optimizing control actions δu(t+k/t))

The figure also shows that the component y_optimize(t+k/t) is the cumulative effect of a
series of impulse inputs and a step input:

- an impulse with amplitude δu(t/t) at time t (Fig. 2.8), resulting in a contribution
h_k·δu(t/t) to the process output at time t+k (k sampling periods later);

Fig. 2.8 Impulse response of the process model at time t (an impulse input δu(t/t) at time t
contributes h_k·δu(t/t) to the output k samples later, at t+k)

- an impulse with amplitude δu(t+1/t) at time t+1 (Fig. 2.9), resulting in a contribution
h_{k−1}·δu(t+1/t) to the predicted process output at time t+k (k−1 sampling periods later);

Fig. 2.9 Impulse response of the process model at time t+1 (an impulse input δu(t+1/t) at
time t+1 contributes h_{k−1}·δu(t+1/t) to the output at t+k)

- etc.;
- finally, a step with amplitude δu(t+Nu−1/t) at time t+Nu−1, resulting in a contribution
g_{k−Nu+1}·δu(t+Nu−1/t) to the predicted process output at time t+k (k−Nu+1 sampling
periods later).

The cumulative effect of all impulses and the step is:

y_optimize(t+k/t) = h_k·δu(t/t) + h_{k−1}·δu(t+1/t) + ... + g_{k−Nu+1}·δu(t+Nu−1/t)    (2.19)

The parameters h1, h2, ..., h_k, ..., h_N2 are the coefficients of the unit impulse response of
the system. Note that the impulse response coefficients can easily be calculated from the
step response coefficients: h_k = g_k − g_{k−1}.
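Since h_k = g_k − g_{k−1} with g_0 = 0, the impulse-response coefficients follow from the step-response coefficients in a single pass (an illustrative helper, not from the report):

```python
# h_k = g_k - g_{k-1}, with g_0 = 0; g[k-1] holds g_k for k = 1..N2.
def impulse_from_step(g):
    return [gk - gp for gp, gk in zip([0.0] + g[:-1], g)]
```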

Using the matrix notation introduced in Section 2.2, the structure of the key MBPC equation
remains the same:

Y = Ȳ + G·U    (2.20)

but it involves new definitions of the matrices:

Ȳ = [ y_base(t+N1/t)  ...  y_base(t+N2/t) ]^T
U = [ δu(t/t)  ...  δu(t+Nu−1/t) ]^T
                                                            (2.21)
      ⎡ h_N1      h_N1−1    h_N1−2    ...   g_N1−Nu+1  ⎤
      ⎢ h_N1+1    h_N1      h_N1−1    ...   ...        ⎥
G  =  ⎢ ...       ...       ...       ...   ...        ⎥
      ⎣ h_N2      h_N2−1    h_N2−2    ...   g_N2−Nu+1  ⎦

Notice that a simple relationship exists between the control actions Δu and δu:

Δu(t/t) = u(t/t) − u(t−1) = u_base(t/t) + δu(t/t) − u(t−1)
Δu(t+1/t) = u(t+1/t) − u(t/t) = u_base(t+1/t) + δu(t+1/t) − u_base(t/t) − δu(t/t)    (2.22)

After converting this to matrix notation, a simple linear matrix expression is obtained:

⎡ Δu(t/t)        ⎤     ⎡ δu(t/t)        ⎤
⎢ Δu(t+1/t)      ⎥     ⎢ δu(t+1/t)      ⎥
⎢ ...            ⎥ = A·⎢ ...            ⎥ + b    (2.23)
⎣ Δu(t+Nu−1/t)   ⎦     ⎣ δu(t+Nu−1/t)   ⎦

with matrix A and vector b given by:

      ⎡  1    0    0   ...   0 ⎤
      ⎢ −1    1    0   ...   0 ⎥
A  =  ⎢ ...  ...  ...  ...  ...⎥
      ⎣  0    0   ...  −1    1 ⎦
                                                 (2.24)
      ⎡ u_base(t/t) − u(t−1)                     ⎤
      ⎢ u_base(t+1/t) − u_base(t/t)              ⎥
b  =  ⎢ ...                                      ⎥
      ⎣ u_base(t+Nu−1/t) − u_base(t+Nu−2/t)      ⎦
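Matrix A of (2.24) is lower bidiagonal (1 on the diagonal, −1 just below it) and b collects the increments of the base scenario; a small sketch with hypothetical names:

```python
# Build A and b of eq. (2.24), so that Delta_u = A*delta_u + b (eq. 2.23).
def build_A_b(u_base, u_prev):
    Nu = len(u_base)
    A = [[1 if i == j else (-1 if j == i - 1 else 0) for j in range(Nu)]
         for i in range(Nu)]
    b = [u_base[0] - u_prev] + [u_base[i] - u_base[i - 1]
                                for i in range(1, Nu)]
    return A, b
```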

It is now straightforward to derive the NPC solution in a similar way as presented in Section
2.2, taking into account the slightly different interpretation of the control vector U. The cost
function (2.15) is again a quadratic form in U, having the following matrix form:

[R − Ȳ − G·U]^T·[R − Ȳ − G·U] + λ·(A·U + b)^T·(A·U + b)    (2.25)

which after minimization leads to the solution:

U* = [G^T·G + λ·A^T·A]^−1 · [G^T·(R − Ȳ) − λ·A^T·b]    (2.26)

The actual control action applied to the process is:

u(t) = u_base(t/t) + δu(t/t) = u_base(t/t) + U*(1)    (2.27)
The EPSAC procedure, valid for linear as well as nonlinear systems, can be summarized as
follows:

- first, a selection of {u_base(t+k/t), k = 1...N2} is required; in the case of linear models the
choice is irrelevant for the solution, some simple examples being u_base(t+k/t) ≡ 0 or
u_base(t+k/t) ≡ u(t−1);
- for nonlinear models, the objective is to find (in an iterative way) a control policy
u_base(t+k/t) which is as close as possible to the optimal strategy u(t+k/t) (thus bringing the
optimizing control actions δu(t+k/t) and the term y_optimize(t+k/t) practically to zero). In
order to minimize the number of iterations, it is wise to select a good initial value for
u_base(t+k/t). A simple but effective choice is to start with u_base(t+k/t) ≡ u(t+k/t−1),
which is the optimal control policy derived at the previous sample;
- calculation of δu(t+k/t) and of the control signals u(t+k/t) = u_base(t+k/t) + δu(t+k/t).
For a linear model these are the optimal controls; for a nonlinear model they are suboptimal,
but converge iteratively to the optimal controls.

The validity of the EPSAC approach to NPC is mainly connected to the convergence rate of
the control algorithm. Thus, the design procedure of the controller for nonlinear systems must
include a convergence analysis. The first intuitive measure to be taken is to specify the
precision within which the control signal is considered optimal. This means that instead of
bringing (in an iterative way) yoptimized to zero, when a magnitude of certain order (e.g. 1e-4) is
reached, yoptimized can be considered equal to zero. In this way, keeping the optimal solution in
a safe range increases the convergence robustness of the control algorithm. The safe range for
the optimal control signal, which must be established depending on the process characteristics
and the imposed performances, guarantees that the algorithm will converge to the specified
optimum.

The next condition to be imposed is a limit on the number of iterations needed to reach the
optimal control signal. The maximum number of iterations allowed depends on the duration of
the calculations performed during one iteration and on the duration of a sampling period. In
this way, not only is the algorithm guaranteed to converge to the specified optimum, but the
time it takes to converge is also guaranteed to be less than one sampling period. This results
in a controller for nonlinear systems that is fully applicable to a real process.
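The iterative procedure, together with its two safeguards (an optimality tolerance on δu and a cap on the number of iterations per sampling period), can be sketched as follows. This is a minimal Python illustration, not the Matlab implementation; `solve_for_delta_u` is a hypothetical stand-in for the linear-MPC computation performed around the current base control.

```python
def nonlinear_epsac_control(u_previous, solve_for_delta_u,
                            tol=1e-4, max_iterations=50):
    """One sampling period of the iterative (nonlinear) EPSAC scheme.

    solve_for_delta_u(u_base) stands for the linear-MPC step that returns
    the optimizing correction delta_u computed from predictions generated
    with the nonlinear model around u_base.  The tolerance on delta_u and
    the iteration cap are the two convergence safeguards described above.
    """
    # Start from the optimal control policy derived at the previous sample.
    u_base = u_previous
    for _ in range(max_iterations):
        delta_u = solve_for_delta_u(u_base)
        u_base = u_base + delta_u
        # A correction within the tolerance means yoptimized is practically zero.
        if abs(delta_u) < tol:
            break
    return u_base
```

For a linear model `solve_for_delta_u` returns the exact correction in one pass, so the loop terminates almost immediately; for a nonlinear model it repeats until the correction falls inside the safe range or the iteration budget is exhausted.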

The added features described above, meant to increase the convergence robustness of the
EPSAC algorithm, are of a computational nature. In order to solve the convergence problem
completely (a drawback of the control algorithm), an analytic criterion would be required. Yet,
in the case of EPSAC, and of NPC algorithms in general, this is not likely to be achieved. The
main reason is that the optimisation method used to find the optimal controls is based on a
classical gradient search, which also requires the Hessian matrix of the cost function to be
minimized in order to guarantee convergence to the optimum. When nonlinear systems are
involved, it is rather difficult and sometimes impossible to obtain an analytical relation for
calculating the optimal controls. This is why the computational safeguards above are
recommended for good performance of the nonlinear EPSAC.

The following chapters present the nonlinear plant to be controlled by the nonlinear EPSAC,
the implementation details and the performance of the control algorithm.

3. DC-DC Power Converters

Power electronic circuits are widely used in the computer industry and in consumer electronics,
including cameras. These applications require power from a wall plug or a battery to perform
their primary function, which is to process information. The primary function of power circuits,
on the other hand, is to process energy: they change the character of the electrical energy, from
DC to AC, from one voltage level to another, or in some other way, and are generally referred
to as converters.

3.1 Basic notions about DC-DC converters


Some of the most important circuits within the family of power circuits are the DC-DC
converters. They are extensively used in power supplies for electronic equipment to control the
energy flow between two DC systems. The DC-DC converter takes the DC output of an AC-DC
converter and transforms it to the different voltages required. They are mainly used in fields such
as:
- switch-mode DC power supplies;
- DC motor drive applications;
- solar cell supplies;
- electro-chemical systems.

The main reasons why switching-mode power supplies, such as DC-DC converters, are
difficult to stabilize, and thus hard to control, are the following:
- the power supply can operate in different modes, such as the continuous and the discontinuous
current mode;
- load variations affect the location of the poles and zeros associated with the output filter;
- switching noise can affect stability and the signal measurements.

The highly nonlinear behaviour of these power circuits is caused by the presence of a switch. The
switch can be any electronic switching device, such as a transistor or a thyristor. Depending on
the state of the switch (On/Off), the plant structure changes dramatically, resulting in a severe
nonlinearity.

All types of DC-DC converters are meant to provide a constant DC output voltage obtained
from an input voltage source. The most often encountered DC-DC converters are:
- the step-down (Buck) converter;
- the step-up (Boost) converter;
- the step-down/up (Buck-Boost) converter.
These converters differ in the value of the output voltage: if it is lower than the supply voltage,
the circuit is a Buck circuit, and if it is higher than the supply voltage, it is a Boost circuit. The
user, depending on the specific application, specifies the desired value of the output voltage.

The most complex DC-DC converter is the Buck-Boost circuit, which can provide an output
voltage either lower or higher than the supply voltage, as needed. Thus, this type of converter
deals with both the problems encountered for a Buck circuit and those for a Boost circuit.
Because of this, any conclusion regarding the Buck-Boost circuit will certainly apply to the
Buck and the Boost converters as well. All three converters are in fact RLC electric circuits, and
their configurations differ by the position of the switch (transistor) and of the diode.

In the following paragraphs, only the Buck-Boost DC-DC converter will be analysed, for the
reasons stated above. The dynamics of DC-DC converters and the control problems encountered
for this type of power circuit will be further investigated, considering the Buck-Boost converter
as the plant to be controlled.

3.2 Models for the Buck-Boost converter


3.2.1 Switched models

Continuous state space model

Control of a DC-DC converter power circuit is based, explicitly or implicitly, on a model that
describes how control actions and disturbances are expected to affect the future behaviour of the
plant. Usually, the control problem consists in defining the desired nominal operating condition,
and then regulating the circuit so that it stays close to the nominal, when the plant is subject to
disturbances and modelling errors that cause its operation to deviate from the nominal.

The use of an appropriate model is of major importance to the control design procedure. In
practice, the most appropriate model is the one that gives the best prediction of the future
behaviour of the process. For a DC-DC converter, state-space models provide a much more
general and powerful basis for dynamic modelling. They include switched and averaged circuit
models as special cases and they play an important part in simulation, analysis and control of
both steady state and transient behaviour. A lot of power circuit models have only linear and time
invariant components and ideal switches. The analysis of such a circuit in each switch
configuration is as simple as the analysis of a linear and time invariant circuit.

This paragraph presents the nonlinear state-space models for both continuous time and discrete-
time variables with the specification that the discrete time model is the result of the converter’s
cyclic operation, which is a natural way to model samples of the circuit variables taken once per
cycle. The Buck-Boost (Up/Down) DC-DC converter is shown in Fig. 3.1, where R represents the
resistance of the load resistor, C is the capacitance, L is the inductance of the coil and RC is the
equivalent series resistance (ESR) of the capacitor. The goal is to keep the average output
voltage v0 within 3% of the nominal or reference desired value Vref, despite step changes in the
input voltage vin from a nominal DC value Vin and variations of the load resistor R. The transistor
is periodically turned On and operated with a duty ratio D. The switching function q(t) represents
the switch status, with q(t) = 1 when the transistor is On and q(t) = 0 when it is Off. The
notations D' and q'(t) will denote from now on 1-D and 1-q(t), respectively.

[Circuit diagram: source vin, switch q(t), inductor L (current iL, voltage vL), capacitor C with
series resistance Rc (voltage vc, current ic), load resistor R, output voltage v0]

Fig. 3.1 The Buck-Boost converter circuit

In order to obtain the state-space model of the circuit, the state variables were naturally chosen as
iL (inductor current) and vC (capacitor voltage). The inputs are the source voltage vin(t) and the
control signal q(t).
For q(t) = 1 it is obtained:

    vL(t) = L diL/dt = vin(t)
    ic(t) = C dvc/dt = -1/(R + Rc) vc(t)                                   (3.1)

and when q(t) = 0:

    vL(t) = L diL/dt = R/(R + Rc) [-Rc iL(t) + vc(t)]
    ic(t) = C dvc/dt = -1/(R + Rc) [R iL(t) + vc(t)]                       (3.2)

Combining (3.1) and (3.2) and introducing the switching function q(t), the desired state-space
model is achieved:

    diL/dt = R/(L(R + Rc)) [-Rc q'(t) iL(t) + q'(t) vc(t)] + (1/L) q(t) vin(t)
    dvc/dt = -1/(C(R + Rc)) [R q'(t) iL(t) + vc(t)]                        (3.3)

Considering (without affecting the generality of the model) Rc = 0 and introducing the notations
x1 = iL, x2 = vC, the simplified switched model is obtained:
    [dx1/dt]   [    0       q'(t)/L ] [x1(t)]   [q(t)/L]
    [dx2/dt] = [-q'(t)/C   -1/(RC) ] [x2(t)] + [  0   ] vin(t)             (3.4)

or in matrix notation:

    dx(t)/dt = Aq(t) x(t) + Bq(t) vin(t)                                   (3.5)

where:

    Aq(t) = [    0       q'(t)/L ]        Bq(t) = [q(t)/L]
            [-q'(t)/C   -1/(RC) ]                 [  0   ]
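As an illustration, the switched model (3.5) can be integrated numerically with a fixed-duty PWM driving q(t). The Python sketch below uses plain Euler integration with the component values introduced in Chapter 4 (R = 165 Ω, L = 4.2 mH, C = 2200 µF); it is an illustrative simulation, not the Simulink implementation described later.

```python
def simulate_switched(duty=0.6, R=165.0, L=4.2e-3, C=2200e-6, vin=15.0,
                      f_sw=10e3, n_cycles=200, steps_per_cycle=20):
    """Euler integration of the switched model (3.5) with Rc = 0:
    x1' = q'(t)/L * x2 + q(t)/L * vin,  x2' = -q'(t)/C * x1 - x2/(RC)."""
    T = 1.0 / f_sw                 # switching period
    dt = T / steps_per_cycle       # Euler integration step
    x1, x2 = 0.0, 0.0              # iL and vC, starting from rest
    for k in range(n_cycles * steps_per_cycle):
        # PWM: the transistor is On during the first duty*T of every period.
        q = 1.0 if (k % steps_per_cycle) < duty * steps_per_cycle else 0.0
        qp = 1.0 - q               # q'(t)
        dx1 = qp / L * x2 + q / L * vin
        dx2 = -qp / C * x1 - x2 / (R * C)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2  # (inductor current, capacitor voltage)
```

With a constant duty ratio, the trajectory oscillates toward the averaged steady state of (3.15); because the circuit is only lightly damped, many switching cycles are needed before it settles.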

Discrete-time state-space model

The discrete model required for the implementation of the predictive control algorithm is
developed assuming that the switching period T is much smaller than the time constants
associated with the circuit. As a result, the inductor current and the capacitor voltage waveforms
in each switch configuration are essentially straight-line segments. Using this assumption, a
discrete-time model for the DC-DC converter can be obtained, starting with the continuous-time
model (3.4).

Suppose the transistor is turned On every T seconds and turned Off a time d(k)T later in the kth
cycle, so that d(k) is the duty ratio in the kth cycle, and also assume that the source voltage vin has
the constant value vin(k). Setting q(t) = 1 in (3.4) to represent the interval when the transistor is
On and using the forward Euler approximation, it is obtained:

    x(kT + d(k)T) ≈ (I + d(k)T A1) x(kT) + d(k)T B1 vin(k)                 (3.6)

Setting q(t) = 0 in (3.4) for the interval when the transistor is Off and letting d'(k) = 1 - d(k) gives:

    x(kT + T) ≈ (I + d'(k)T A0) x(kT + d(k)T) + d'(k)T B0 vin(k)           (3.7)

Combining the above relations and introducing the notations:

    Ad(k) = d(k) A1 + d'(k) A0,    Bd(k) = d(k) B1 + d'(k) B0              (3.8)

where Ad(k) and Bd(k) are the averages of Aq(t) and Bq(t) obtained by replacing q(t) with d(k), the
plant model to be used by the control algorithm becomes:

    x(k+1) = (I + T Ad(k)) x(k) + T Bd(k) vin(k) + T^2 d'(k) d(k) A0 (A1 x(k) + B1 vin(k))   (3.9)

The model given by (3.9) is nonlinear, time invariant, discrete in time and in state-space form,
with inputs d(k) and vin(k). In the following, the term involving T^2 is omitted without significant
loss of accuracy, resulting in the final model:

    [x1(k+1)]   [     1        T d'(k)/L  ] [x1(k)]   [T d(k)/L]
    [x2(k+1)] = [-T d'(k)/C   1 - T/(RC) ] [x2(k)] + [    0    ] vin(k)    (3.10)
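The discrete-time model (3.10) translates directly into a one-step update function. The Python sketch below mirrors the model used by the predictive controller for prediction, with the Chapter 4 component values assumed as defaults:

```python
def buck_boost_step(x1, x2, d, vin, R=165.0, L=4.2e-3, C=2200e-6, T=1e-4):
    """One step of the discrete-time model (3.10).
    x1 = iL(k), x2 = vC(k), d = duty ratio d(k), vin = source voltage."""
    dp = 1.0 - d  # d'(k)
    x1_next = x1 + T * dp / L * x2 + T * d / L * vin
    x2_next = -T * dp / C * x1 + (1.0 - T / (R * C)) * x2
    return x1_next, x2_next
```

A quick sanity check: for d(k) = D the fixed point of (3.10) coincides with the averaged steady state (3.15); for example, with D = 0.5 and Vin = 15 V the state (IL, VC) = (30/R, -15) reproduces itself.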

Generalized state space model

The combination of a state-space model containing auxiliary variables with associated non-
dynamic constraints that determine these variables is called a generalized state-space model. In
the case of the Buck-Boost converter, the constraint is imposed for current-mode control. In
current-mode control, the controller specifies a peak switch current in each cycle or,
equivalently, a peak inductor current, rather than the duty ratio. The switch may be regularly
turned On every T seconds, but it is turned Off when the inductor current iL reaches the specified
upper threshold value ith. This threshold value is the primary control variable, and the duty ratio
becomes an indirectly determined auxiliary variable. The constraint that determines d(k) in
terms of ith can be written as:

    iL(kT + d(k)T) = ith(kT + d(k)T)                                       (3.11)
The threshold ith is usually chosen as the sum of two signals:
- a slowly varying signal iP, elaborated by the controller on the basis of the error between the
actual and the nominal average output voltages;
- a regular ramp of slope S at the switching frequency.

The signals are represented in Fig. 3.2. Thus, ith is obtained by superimposing a ramp on the
signal iP generated by the controller.

[Waveform sketch: the threshold ith(t), built from the controller signal ip(t) plus a ramp of
slope S, and the inductor current iL(t), over the switching periods T, 2T, 3T]

Fig. 3.2 Inductor current waveform under current-mode control

At the time instant kT, the inductor current iL will increase until it reaches the value of ith, when
the transistor is turned Off and the current starts to decrease. In order to obtain current-mode
control, the following relation is imposed, according to Fig. 3.2:

    ith(kT + d(k)T) = ip(k) - S d(k)T                                      (3.12)

Substituting (3.12) in (3.11) and taking into account the expression of the inductor current from
(3.6) yields:

    iL(k) + d(k)T vin(k)/L = ip(k) - S d(k)T                               (3.13)

For current-mode control, the plant model is represented by (3.10), with (3.13) added as a
constraint.
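The constraint (3.13) can be solved explicitly for the duty ratio implied by a commanded peak current. A small Python sketch follows; the clamp of the result to [0, 1] is an added physical restriction on the duty ratio, not part of (3.13) itself:

```python
def duty_from_peak_current(iL, ip, vin, S=0.0, L=4.2e-3, T=1e-4):
    """Duty ratio solved from the current-mode constraint (3.13):
    iL(k) + d(k) T vin(k)/L = ip(k) - S d(k) T."""
    d = (ip - iL) / (T * (vin / L + S))
    return min(max(d, 0.0), 1.0)  # a duty ratio is physically confined to [0, 1]
```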

3.2.2 Averaged models

An averaged circuit is constructed as follows. First, all instantaneous voltages and currents in the
circuit are replaced by their averages; linear and time invariant components remain the same.
However, nonlinear or time-varying components (e.g. switches) in the original circuit do not
map into similar components in the averaged circuit. The use of the switching function q(t)
and its average q̄(t) = d(t) (continuous duty ratio) is natural and convenient in the analysis and
control design of DC-DC power converters. Averaged-circuit models have been derived mainly
for high-frequency switching DC-DC converters, usually through an averaging process applied to
state-space models. Thus, nonlinear circuits are obtained that describe the average behaviour of
various power circuits. For nonlinear models in circuit form, linearization techniques are applied
in order to obtain linear models, which approximately describe small deviations or perturbations
around the nominal operating point of the process.

Continuous-time nonlinear averaged model

This model is obtained by "averaging" (3.4), i.e. by replacing the instantaneous variables with
their averages: iL, vc for the states and q = d, vin for the inputs (Kassakian et al., 1992):

    [diL/dt]   [    0       d'(t)/L ] [iL(t)]   [d(t)/L]
    [dvc/dt] = [-d'(t)/C   -1/(RC) ] [vc(t)] + [  0   ] vin(t)             (3.14)

where the state matrix is denoted Ad(t) and the input matrix Bd(t).

With a constant duty ratio, d(t) = D and vin(t) = Vin, the steady-state solution is found
considering the nominal values of the two states, X1 = IL, X2 = VC. Setting the derivatives in
(3.14) equal to zero, the following nominal values are obtained:

    X = [IL] = -AD^(-1) BD Vin = [D/(R D'^2)] Vin                          (3.15)
        [Vc]                     [  -D/D'   ]
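A direct Python transcription of (3.15) is useful as a numerical check of the nominal operating point; the component values of Chapter 4 are assumed as defaults:

```python
def nominal_operating_point(D, vin=15.0, R=165.0):
    """Steady-state solution (3.15) of the averaged model:
    IL = D/(R D'^2) * Vin,  VC = -(D/D') * Vin."""
    Dp = 1.0 - D  # D'
    IL = D / (R * Dp ** 2) * vin
    VC = -(D / Dp) * vin
    return IL, VC
```

For Vin = 15 V, a duty ratio D = 0.4 gives VC = -10 V (step-down operation), while D = 0.6 gives VC = -22.5 V (step-up operation), confirming the Buck-Boost behaviour.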

Continuous-time linearized averaged model

In order to compute the linearized model, small deviations from the nominal operating point are
denoted with the superscript ~:

    x = X + x~,    vin = Vin + vin~,    d = D + d~                         (3.16)

and using (3.14) and (3.15), the linearized averaged model corresponding to variations around
the nominal operating point is obtained:

    dx~/dt = AD x~(t) + BD vin~(t) + J d~(t)                               (3.17)

where J is expressed in terms of the matrices A1, A0, B1 and B0 as:

    J = (A1 - A0) X + (B1 - B0) Vin = [(Vin - Vc)/L]                       (3.18)
                                      [   IL/C    ]
Considering the linearized averaged model (3.17), taking Laplace transforms and solving for
x~(s) yields:

    x~(s) = [iL~(s); vc~(s)] = (sI - AD)^(-1) [J d~(s) + BD vin~(s)]

          = [  s       -D'/L    ]^(-1) ( [(Vin - Vc)/L]         [D/L]          )
            [ D'/C   s + 1/(RC) ]      ( [   IL/C    ] d~(s) +  [ 0 ] vin~(s)  )     (3.19)

          = (1/aD(s)) [ s + 1/(RC)   D'/L ] ( [(Vin - Vc)/L]         [D/L]          )
                      [   -D'/C       s   ] ( [   IL/C    ] d~(s) +  [ 0 ] vin~(s)  )

where:

    aD(s) = s^2 + s/(RC) + D'^2/(LC)

From (3.19) it is determined that:

    v0~(s) = vc~(s) = (IL/C)(s - Vin/(L IL)) / (s^2 + s/(RC) + D'^2/(LC)) d~(s)
                      - (D D'/(LC)) / (s^2 + s/(RC) + D'^2/(LC)) vin~(s)           (3.20)

The above equation represents a linearized averaged input-output model that can be utilised for
duty-ratio control.

In the case of current-mode control, a description of the circuit that does not involve the duty
ratio d is needed. In order to achieve this, in (Kassakian et al., 1992) the "power" balance
following directly from Tellegen's theorem is applied to the averaged variables, resulting in:

    v0~(s) = vc~(s) = (Vin - sL IP) R / [sRC(V0 - Vin) + (2V0 - Vin)] iP~(s)
                      + (V0 + R IP) / [sRC(V0 - Vin) + (2V0 - Vin)] vin~(s)        (3.21)

Using the steady-state relationships V0 = -(D/D')Vin and IP D' = -V0/R to simplify the above
expression, it is obtained:

    v0~(s) = a0 (1 - s Tz)/(1 + s Tc) iP~(s) + b0 /(1 + s Tc) vin~(s)      (3.22)
where:

    Tc = RC/(1 + D),    Tz = L IP/Vin = D L/(D'^2 R)
                                                                           (3.23)
    a0 = -D' R/(1 + D),    b0 = -D^2/(D'(1 + D))

The equations (3.22) and (3.23) represent the simplified averaged model for the Buck-Boost
converter under current-mode control.
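The parameters (3.23) are straightforward to evaluate numerically. The Python sketch below (Chapter 4 component values as defaults) also makes visible the right-half-plane zero at 1/Tz implied by the (1 - sTz) factor in (3.22), a well-known non-minimum-phase feature of boost-type converters:

```python
def current_mode_parameters(D, R=165.0, L=4.2e-3, C=2200e-6):
    """Parameters (3.23) of the simplified averaged model (3.22)
    for the Buck-Boost converter under current-mode control."""
    Dp = 1.0 - D  # D'
    Tc = R * C / (1.0 + D)            # dominant time constant
    Tz = D * L / (Dp ** 2 * R)        # right-half-plane zero time constant
    a0 = -Dp * R / (1.0 + D)          # peak-current-to-voltage DC gain
    b0 = -D ** 2 / (Dp * (1.0 + D))   # input-voltage-to-output DC gain
    return Tc, Tz, a0, b0
```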

Discrete-time averaged models

The starting point is the linearization of the discrete model (3.9), considering that the steady-state
operating point is obtained by fixing d(k) = D, vin = Vin and having x(k+1) = x(k) = X for all k,
given by:

    x(k) = X = [IL] = -AD^(-1) BD Vin                                      (3.24)
               [Vc]
The term in T^2 is neglected and the model used for duty-ratio control is:

    [iL~(k+1)]   [    1        T D'/L   ] [iL~(k)]   [T D/L]             [T(Vin - Vc)/L]
    [vc~(k+1)] = [-T D'/C   1 - T/(RC) ] [vc~(k)] + [  0  ] vin~(k)  +  [   T IL/C    ] d~(k)   (3.25)
For current-mode control, the constraint (3.13) is linearized around the nominal operating point,
resulting in:

    0 = [1 0] x~(k) + (DT/L) vin~(k) - ip~(k) + T(S + Vin/L) d~(k)         (3.26)

or, after some elementary calculations:

    d~(k) = [-(DT/L) vin~(k) + ip~(k) - iL~(k)] / [T(S + Vin/L)]           (3.27)

Substituting d~(k) from (3.27) in (3.25), the model for current-mode control is obtained.

3.3 Control structures for DC-DC converters


In the sequel, classical control structures for power circuits will be presented, as well as newer
control systems using advanced control technologies that result in better performance. The
typical control system configuration is depicted in Fig. 3.3. In simple open loop control, the
controller is not given any information about the system during operation, although the open loop
controller may be constructed on the basis of a priori information or models. Open loop control
with feed-forward utilizes measurements of disturbances affecting the system. Using feed-
forward, the controller can attempt to cancel the anticipated effects of measured disturbances.
Feed-forward alone is usually insufficient for obtaining satisfactory performance in power
electronic circuits.

[Block diagram: the controller receives the desired nominal operating point (reference) and
drives the power circuit; a feedforward path carries measured disturbances, a feedback path
carries measured outputs corrupted by measurement noise; other outputs of interest leave the
power circuit]

Fig. 3.3 Typical control system configuration

An improved strategy is for the controller to also use measurements that reveal the circuit's
present behaviour. This strategy is referred to as closed loop or feedback control. In both cases,
open loop and closed loop, PID controllers are utilized, which can be designed using classical
methodologies based on the Nyquist stability criterion and Bode plots, or tuned with Ziegler-
Nichols techniques. The controller must keep the DC-DC converter within a certain percentage of
the specified nominal operating point in the presence of disturbances and modelling errors. In
order to achieve this, one of the following control approaches can be used:
- duty-ratio control;
- current-mode control.

It is possible to use these techniques separately or together, in a cascade control structure. A
duty-ratio control system for a DC-DC converter is given in Fig. 3.4.
[Block diagram: a feedforward compensation block computes the nominal duty ratio D from vin;
the controller processes the error between Vref and v0 and produces the correction d~, which is
added to D to form the duty ratio applied to the Up/Down converter]

Fig. 3.4 Feedback/Feedforward control system for a DC-DC converter

The feedforward controller calculates the nominal value of the duty ratio D, and the feedback
controller processes the deviation of the average output voltage v0 from the desired value Vref,
giving the control correction d~.

If the DC-DC converter is described using the model (3.22), the current-mode control system can
be implemented as shown in Fig. 3.5.
[Block diagram: the current controller and compensator process the error between V0~ = 0 and
v0~ and produce iP~, which drives the plant a0(1 - sTz)/(1 + sTc); the disturbance path
b0/(1 + sTc), driven by vin~, adds to the output v0~]

Fig. 3.5 Current-mode control for a DC-DC converter

In order to obtain the best performance for DC-DC converters, duty-ratio control and current-
mode control are used together in a cascade control structure, represented in Fig. 3.6.

[Block diagram: a voltage loop controller compares v0~ with v0ref~ = 0 V and generates ip~;
the current loop controller uses iL~ + IL and produces the correction d~, which is added to the
calculated nominal duty ratio D and applied to the converter circuit]

Fig. 3.6 Cascade structure for inductor current and output voltage control

Current-mode control results in an inner loop that regulates the inductor current iL at
approximately iP. The controller in the outer loop specifies the value of iP needed to regulate the
average output voltage v0 at a desired value. The outer loop is designed to have a much
smaller bandwidth than the inner loop. Under these assumptions, the offset and the dynamics of
the inner loop can be neglected in the initial design of the outer loop controller. This approach
yields a simplified model for the outer loop and makes the design of its controller much
easier.

The inner loop design is illustrated in Fig. 3.7 for a Buck-Boost DC-DC converter, starting from
the plant model:

    iL~(s) = (1/a(s)) [b1(s) d~(s) + b2(s) vin~(s)]
                                                                           (3.28)
where:

    a(s) = s^2 + s/(RC) + D'^2/(LC),
    b1(s) = (Vin/(L D'))(s + (1 + D)/(RC)),    b2(s) = (D/L)(s + 1/(RC))
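For controller design it is convenient to have the polynomials of (3.28) as coefficient vectors (highest power of s first), e.g. to feed into a frequency-response or root-locus routine. A Python sketch, again with the Chapter 4 component values assumed as defaults:

```python
def inner_loop_polynomials(D, vin=15.0, R=165.0, L=4.2e-3, C=2200e-6):
    """Coefficient lists (highest power of s first) for the inner-loop
    plant (3.28): iL~(s) = [b1(s) d~(s) + b2(s) vin~(s)] / a(s)."""
    Dp = 1.0 - D  # D'
    a = [1.0, 1.0 / (R * C), Dp ** 2 / (L * C)]
    b1 = [vin / (L * Dp), vin / (L * Dp) * (1.0 + D) / (R * C)]
    b2 = [D / L, D / L / (R * C)]
    return a, b1, b2
```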

[Block diagram: the current loop controller takes ip~ and produces d~; the plant paths b1(s)
(from d~) and b2(s) (from vin~) are summed and filtered by 1/a(s) to give iL~]

Fig. 3.7 Inner loop

Considering the inner loop much faster than the outer loop and having a steady-state gain equal
to one, the outer loop design is shown in Fig. 3.8. The plant model for the voltage loop controller
is given by (3.22) and (3.23).

[Block diagram: the voltage loop controller compares v0ref~ = 0 with v0~ and produces ip~;
the inner loop is approximated by unity gain, so iL~ = ip~, which drives a0(1 - sTz)/(1 + sTc);
the disturbance path b0, driven by vin~, adds to the output v0~]

Fig. 3.8 Outer loop

As can be observed from the figures presented in this paragraph, there are no specifications
regarding the controller. Theoretically, the controller can be of any type, such as a fuzzy
controller, a predictive controller or a PID controller. Up to the present time, only P, PI or PID
controllers have been used in practice for DC-DC converters. The aim of the thesis is to
demonstrate that predictive control algorithms can also be applied to this type of power circuit,
resulting in a stable closed loop system with better performance.

The following chapter will illustrate how the EPSAC algorithm, described in Chapter 2, can be
applied to DC-DC converters and which plant model is appropriate for the predictive controller in
this case. For all the following experiments, the duty-ratio control structure was preferred. The
other control structures, such as the current mode controller or the cascade controller, remain to
be dealt with in future research work.

4. Linear EPSAC versus Nonlinear EPSAC

This chapter further investigates the EPSAC algorithm in order to extend the validity of the
theoretical aspects presented in Chapter 2 to the field of simulation experiments. Linear and
nonlinear EPSAC are simulated here to demonstrate the soundness of the EPSAC algorithm
for nonlinear systems. The aim is to compare the performance of the linear and the nonlinear
EPSAC algorithms on the same nonlinear plant.

4.1 Matlab implementation


The linear and the nonlinear EPSAC, as described in Chapter 2, were implemented in Matlab
together with the plant model (3.10). It is a particular version of the EPSAC control
algorithm, with N1 = 1, Nu = 1 and λ = 0, which results in a scalar control law. The
user must specify the prediction horizon N2, which is utilized as a tuning parameter, as well as
the desired reference voltage and the stop time of the simulation experiment.
For each version of EPSAC there is a main routine built on the same sequence of operations:

- the routine reads the process outputs, the inductor current and the output voltage, from
the software-implemented plant model (3.10);

- the process model, together with old values of the control input and of the process
outputs, is used to calculate the present value of the noise;

- the secondary routine responsible for the calculation of the G matrix is called;

This routine uses the process model to compute the impulse and step response coefficients,
which are then grouped as in (2.21) to form the G matrix. In this particular case, with Nu = 1
(considered to bring most benefits in practice), the G matrix contains only the step
response coefficients.

- the secondary routine responsible for the calculation of the prediction error is called;

This routine computes the predictions of x(t) and of n(t) as described in Chapter 2, using the
process model (3.10) and the colored noise model (2.2), respectively. Having the predictions
of the process output and the desired reference voltage, the prediction error is obtained.

- the control law (2.26) with relation (2.27) is used to calculate the control signal to be
applied to the process;

- Linear EPSAC: the linear version of the control algorithm sends the previously calculated
control signal to the software-implemented plant model and the operation cycle is
repeated at the next sampling period.

- Nonlinear EPSAC: the nonlinear version takes the previously calculated control signal
and sends it again to the process model, instead of sending it to the software-
implemented plant model, according to the procedure presented in Section 2.3. Then,
the subroutines responsible for the calculation of the G matrix and of the prediction
error are called and the control signal is computed once more, until the solution of
equation (2.26) (the control law) becomes zero. This means that yoptimized was
successfully brought to zero, and the last value of the control signal obtained with
(2.27), before the solution of (2.26) became zero, is applied to the software-
implemented plant model.

These extra operations, performed at each sampling period in order to compute the optimal
control signal, are referred to as iterations. Their number is limited to a certain value so that
the duration of the sampling period will not be exceeded, and the solution of (2.26) is brought
to zero within a specified range (precision, e.g. 1e-4), as mentioned in Section 2.3.

All the routines are M-files included in the attached diskette and they are also listed in
Appendix 1.

The system to be controlled is the Buck-Boost converter model (3.10), chosen for its highly
nonlinear characteristic (see Chapter 3). In this way, it is possible to compare the performance
obtained with the linear predictive controller (linear EPSAC), which uses a nonlinear model,
and that obtained with the nonlinear predictive controller (nonlinear EPSAC) on the same
nonlinear plant.

4.2 Start-Up behavior and setpoint tracking


The parameters of the Buck-Boost power circuit depicted in Fig. 3.1 are R = 165 Ω, C =
2200 µF and L = 4.2 mH, and it is operated at a frequency of 10 kHz (RC is considered equal to
zero without affecting the generality of the model). The goal is to keep the average output
voltage v0 within 3% of the desired nominal or reference value Vref, despite step changes in
the input voltage vin from a nominal DC value Vin = 15 V and variations of the load resistor R.
First, the start-up behavior of the closed loop system will be observed and the ability of the
controllers to track the specified setpoint will be analyzed.

The following results, shown in Fig. 4.1 and Fig. 4.2, were obtained for the desired reference
voltage Vref = -10 V, considering initial conditions of the plant model (3.10) equal to zero and
a sampling period of 0.1 ms (corresponding to the DC-DC converter operating frequency of
10 kHz). Thus, the start-up behavior of the closed loop system can be observed, using the same
parameters of the control law for both the linear and the nonlinear EPSAC:

    N1 = 1,    N2 = 8,    Nu = 1                                           (4.1)

with the mention that the number of iterations per sampling period for the nonlinear EPSAC
is limited to 50 and the solution is considered optimal (a precision of order 1e-4) when:

    δu(t/t) = U*(1) ∈ (-0.0001, 0.0001)                                    (4.2)

[Plot: inductor current (Amperes) versus time (seconds), 0 to 0.2 s]

Fig. 4.1 a) Linear EPSAC - the inductor current

[Plot: output voltage (Volts) versus time (seconds), 0 to 0.2 s]

Fig. 4.1 b) Linear EPSAC - the output voltage (solid line); the reference voltage (dotted line)

[Plot: inductor current (Amperes) versus time (seconds), 0 to 0.2 s]

Fig. 4.2 a) Nonlinear EPSAC - the inductor current

[Plot: output voltage (Volts) versus time (seconds), 0 to 0.2 s]

Fig. 4.2 b) Nonlinear EPSAC - the output voltage (solid line); the reference voltage (dotted line)

As can be seen, the linear EPSAC, although it uses the nonlinear model (3.10), causes
instability, whereas the nonlinear EPSAC manages to diminish the amplitude of the
oscillations and then eliminates them completely.

The unusually high values of the output voltage and of the inductor current result from the
fact that the current through the inductor is not limited. In the case of the real electric circuit
depicted in Fig. 3.1, the transistor is switched Off when the inductor current exceeds 5 A, and
when the current becomes 0, the DC-DC converter works in discontinuous conduction mode.
This current limitation was applied to the software-implemented plant model (3.10) to achieve
a more realistic behavior. The results obtained for the desired reference voltage Vref = -20 V
with N2 = 10 are presented in Fig. 4.3 and Fig. 4.4.
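Per sampling period, the current limitation described above reduces to a simple guard on the duty ratio. A minimal Python sketch of the logic applied to the software-implemented plant model (the 5 A threshold is the value quoted in the text):

```python
def apply_current_limit(d, iL, i_max=5.0):
    """If the inductor current exceeds i_max, keep the transistor Off for
    the next sampling period (duty ratio forced to zero) so the current
    drops; otherwise pass the commanded duty ratio through unchanged."""
    return 0.0 if iL > i_max else d
```

Whenever the current crosses the threshold, the duty ratio is zeroed for one period and the current falls back, which is why the limited current ends up oscillating around the threshold.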

[Plot: inductor current (Amperes) versus time (seconds), 0 to 0.2 s]

Fig. 4.3 a) Linear EPSAC - the inductor current

[Plot: output voltage (Volts) versus time (seconds), 0 to 0.2 s]

Fig. 4.3 b) Linear EPSAC - the output voltage (solid line); the reference voltage (dotted line)

[Plot: inductor current (Amperes) versus time (seconds), 0 to 0.2 s]

Fig. 4.4 a) Nonlinear EPSAC - the inductor current

[Plot: output voltage (Volts) versus time (seconds), 0 to 0.2 s]

Fig. 4.4 b) Nonlinear EPSAC - the output voltage (solid line); the reference voltage (dotted line)
This time, the oscillations around the nominal operating point are constant in amplitude for
the linear EPSAC, due to the limitation applied to the inductor current. When the current
exceeds 5 A, the transistor is switched Off for the next sampling period (the duty ratio
becomes zero), so the current drops. As a direct result, the current oscillates around 5 A.

Still, the difference in performance is obvious. The results of the linear EPSAC can be
improved if the prediction horizon is increased. For example, if N2 is increased to 25, a better
start-up behavior is obtained (Fig. 4.5).

[Plot: inductor current (Amperes) versus time (seconds), 0 to 0.2 s]

Fig. 4.5 a) Linear EPSAC - the inductor current

[Plot: output voltage (Volts) versus time (seconds), 0 to 0.2 s]

Fig. 4.5 b) Linear EPSAC - the output voltage (solid line); the reference voltage (dotted line)

Even so, when a disturbance occurs, the linear EPSAC with an increased prediction horizon
causes the system to oscillate, proving to be an inadequate control strategy for a highly
nonlinear plant such as the DC-DC converter. Fig. 4.6 illustrates the incapacity of the linear
EPSAC (with an increased prediction horizon) to reject a load disturbance (at the time instant
0.1 s, the load resistance drops to 33 Ω).

[Plot: inductor current (Amperes) versus time (seconds), 0 to 0.2 s]

Fig. 4.6 a) Linear EPSAC: inductor current - load disturbance rejection

[Plot: output voltage (Volts) versus time (seconds), 0 to 0.2 s]

Fig. 4.6 b) Linear EPSAC: output voltage (solid line); reference voltage (dotted line) - load
disturbance rejection

On the other hand, the nonlinear EPSAC with N2 = 20 manages to reject the disturbance with
good performance (as seen in Fig. 4.7), proving to be an appropriate predictive control
strategy for nonlinear systems.
[Fig. 4.7 a) Nonlinear EPSAC, load disturbance rejection: the inductor current; axes: Amperes vs. time in seconds]

[Fig. 4.7 b) Nonlinear EPSAC, load disturbance rejection: the output voltage (solid line) and the reference voltage (dotted line); axes: Volts vs. time in seconds]

Other experiments, regarding the behavior of the closed-loop system in the presence of
several disturbances, together with a thorough analysis of the algorithm's convergence rate,
will be presented in what follows.

The excellent results obtained with the nonlinear EPSAC fully support the validity of the
theoretical aspects presented in Section 2.3 and offer a possible breakthrough in
Nonlinear Predictive Control.
In order to further test the robustness, the tracking performance and the disturbance
rejection ability of the nonlinear EPSAC, a more sophisticated plant model implemented in
Simulink will be used as the process to be controlled. The next Chapter presents the
characteristics of the new, more realistic plant model, the Simulink implementation details
and the results obtained for several tests.
5. Simulink Model Design and
Nonlinear EPSAC Implementation

In order to thoroughly test the nonlinear EPSAC algorithm, a more realistic model of the
Buck-Boost DC-DC converter was required. The model used for the Matlab version was an
averaged model, which differs in some ways from the real circuit. The Simulink software
environment that comes with Matlab proves to be an adequate tool for implementing and
testing a switching model of the power converter, which is a close replica of the real circuit.
This time all parasitic components of the circuit are taken into account, while the nonlinear
EPSAC uses the averaged model (3.10) as the process model. Thus, a control structure
much closer to the real situation is obtained, in which the process (switched model) differs
from the process model used by the control algorithm (averaged model (3.10)).

Note: Up to now, the difference between the process and the process model, both implemented in Matlab,
consisted in the colored noise added to the process model. Without the noise, the equations were identical.

5.1 Simulink model design and implementation of the Buck-Boost converter


The switched state-space model

The model of an electric circuit is generally obtained by applying the Kirchhoff laws, which
yield a set of equations describing the circuit. Using this set of equations and appropriately
choosing the state variables, a switching state-space model is obtained. This procedure was
already shown in Chapter 3, where the switching model (3.4) was derived for the power
circuit depicted in Fig. 3.1. The DC-DC converter was treated there as an ideal electric
circuit, all parasitic components being neglected, including the ESR (Equivalent Series
Resistance). In reality, an electric circuit has many parasitic components, which must be
considered if a reliable circuit model is needed, as in real-time control.

A Buck-Boost converter including all the elements present in a real circuit is obtained starting
from the circuit shown in Fig. 3.1. Applying the Kirchhoff laws to it leads to the desired
switched state-space model.

The component values were taken from a real Buck-Boost DC-DC converter at the Department
of Control Engineering, Ghent University. The parasitic elements and
their values are the following:

- Diode:
  - equivalent voltage source, Ed = 1.2 V;
  - parasitic resistance, Rd = 0.06 Ω.
- Transistor:
  - shunt resistance, Rt = 0.5 Ω.
- Inductor:
  - inductance, L = 4.2e-4 H (0.42 mH);
  - parasitic resistance, Rind = 0.42 Ω.
- Capacitor:
  - capacitance, C = 2200e-6 F (2200 µF);
  - parasitic resistance (ESR), RC = 0.05 Ω.

The switching model equations are obtained as described in sub-section 3.2.1, by applying the
Kirchhoff laws and introducing the same state variables x1 = iL, x2 = vC.

- q(t) = 1 – the ON state of the converter:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -(R_{ind}+R_t)/L & 0 \\ 0 & -1/((R+R_C)C) \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 1/L \\ 0 \end{bmatrix} v_{in}(t) \quad (5.1)$$

- q(t) = 0 – the OFF state of the converter:

  - continuous conduction mode:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -(R_{ind}+R_d)/L & -1/L \\ \dfrac{R}{R+R_C}\left(\dfrac{1}{C} + \dfrac{R_C(R_{ind}+R_d)}{L}\right) & -\dfrac{R}{R+R_C}\left(\dfrac{R_C}{L} + \dfrac{1}{RC}\right) \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} -1/L \\ -\dfrac{R R_C}{L(R+R_C)} \end{bmatrix} E_d \quad (5.2)$$

  - discontinuous conduction mode:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & -1/((R+R_C)C) \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} v_{in}(t) \quad (5.3)$$

The above equations describe exactly the behavior of a real Buck-Boost DC-DC converter:

- at the beginning of the sampling period the transistor is switched On (q(t) = 1) and the
circuit evolution is given by model (5.1);
- at a certain time instant during the current sampling period, the transistor is switched
Off (q(t) = 0) and the circuit evolution is given by model (5.2).

The normal operating mode of the DC-DC converter in this case is the continuous conduction
mode, in which the inductor current does not drop to zero. However, in practice a problematic
situation occurs if the transistor stays in the Off state long enough for the inductor
current to become zero. When this happens, the circuit model changes again, because the
inductor current no longer affects the value of the output voltage, and the circuit evolution is
given by (5.3). Note that the output voltage x2(t), referred to as the circuit output, is the
controlled output.

The transistor is switched On or Off by an electronic device called the Pulse Width Modulator
(PWM). The PWM generates a ramp voltage signal during each sampling period, which is
its first input signal. The second input signal is the controller output. At the beginning of the
sampling period the transistor is switched On, and the PWM then compares the value of the
ramp voltage with the value of the control signal. The transistor is kept On until the ramp
voltage reaches the value of the control signal, at which point it is switched Off
and remains in the Off state for the rest of the sampling period. In this way, the desired duty-
ratio is achieved for each sampling period, which corresponds to the duty-ratio control
structure principles presented in Chapter 3.

The Simulink implementation

The equations presented above were implemented in Simulink, using an S-function block. This
is a special block provided by the Simulink software package that gives the user the
possibility to define any type of nonlinear process or control algorithm. The nonlinear process
(equations (5.1), (5.2) and (5.3)) was implemented in a C-file, which was then compiled using
a MEX compiler, and the resulting DLL-file was given as a parameter to the S-function block.

The S-function blocks, the listed C-file and the MDL-file are presented in Appendix 2.

Put simply, the S-function block should be seen as a black box with inputs, outputs and a set
of parameters that must be specified by the user. The inputs are processed according to the
supplied parameters and the block generates the calculated outputs.

There are two types of parameters:

- a required parameter, the name of the DLL-file to be called. This file is essential
because it defines the states, the working sampling period, the number of inputs, the
number of outputs and the procedure that calculates the outputs;
- a set of optional parameters that can be specified depending on each application.

The circuit model implemented in Simulink is shown in Fig. 5.1.

The S-function block implementing the converter has three inputs and three outputs:

- inputs:

- the supply (source) voltage vin;


- the value of the duty-cycle (specifies when the transistor is turned Off during each
sampling period);
- the value of the load resistor R.

[Fig. 5.1 The Simulink model of the Buck-Boost converter: constant blocks Vin = 15 and R = 165 and the Duty-Cycle input feed, through a Mux, the S-function "pspicebbu" implementing the converter; a Demux routes the outputs to the output voltage, inductor current and Duty-Cycle displays, with a Clock and a To Workspace block for logging]

The values of the source voltage and of the load resistor were defined as inputs in order to
simulate (in a simple way) step disturbances, as shown later.

- outputs:

- the output voltage (the controlled output);


- the inductor current (necessary for the state-space model used by the controller);
- the duty-cycle (utilized together with the inductor current for choosing the model used
to calculate the outputs).

The parameters of the S-function block are:

- the name of the DLL-file: pspicebbu.dll (the source file written in C language is listed
in Appendix 2);
- the values of the inductance and of the capacitance, together with all the parasitic
components mentioned before, which must be given by the user as optional parameters of
the S-function block.

A clock block is used to synchronize the operations performed by all the blocks in the model.
A very important aspect in the analysis of DC-DC converters is the On and Off response,
showing how the circuit behaves if the transistor is kept only in the On state and,
respectively, only in the Off state. The On and Off responses of the Buck-Boost model
depicted in Fig. 5.1 are shown in Fig. 5.2 and Fig. 5.3, considering initial values of the states
iL = 0.4 A and vout = -10 V. As can be seen, the On response is quite similar to the
controlled output's Off response (the output voltage): in both cases the voltage drops to
zero, and only in the Off state does the voltage first rise for three sampling periods before
falling to zero. This behavior of the Buck-Boost converter (for the Buck and the Boost
circuits there is a clear difference: the output voltage rises in the On state and falls in the
Off state) makes the control problem more difficult when the output voltage must be
increased (e.g. at Start-Up).

[Fig. 5.2 The On response versus the Off response of the converter: the inductor current (zero for the Off response); axes: Amperes vs. time in seconds]
[Fig. 5.3 a) The On response versus the Off response of the converter: the output voltage; axes: Volts vs. time in seconds]

In the above figure the two responses appear to be the same, which is why a zoom on the
output voltages is needed in order to illustrate the difference.
[Fig. 5.3 b) Zoom on the On response versus the Off response of the converter: the output voltage; axes: Volts vs. time in seconds]
In order to consider the model of the DC-DC converter implemented in Simulink a good
replica of a real electric circuit, it must be validated. For this purpose, a Buck-Boost DC-DC
converter was implemented in the PSpice software environment (specially built for testing
electric circuits) with the same component values as the Simulink model.

The responses of the two models were compared for a constant duty-cycle of 40% (meaning
that, for a sampling period of 1 s, the transistor would be switched Off after 0.4 s). The output
voltage and the inductor current of the Simulink model are shown in Fig. 5.4.

[Fig. 5.4 a) The inductor current for a duty-cycle of 40%; axes: Amperes vs. time in seconds]

[Fig. 5.4 b) The output voltage for a duty-cycle of 40%; axes: Volts vs. time in seconds]
The results obtained do not match exactly the ones obtained for the circuit implemented
in PSpice, but it is important that they exhibit the same dynamics. In the case of an ideal
Buck-Boost converter operated with a constant duty-cycle D, there is a formula for
calculating the steady-state value of the output voltage:

$$V_{out} = -\frac{D}{1-D} V_{in} = -10\,\mathrm{V} \quad (5.4)$$

In Fig. 5.4 one can observe that the output voltage does not stabilize around -10 V but
somewhere around -9 V, a behavior similar to the PSpice model. The voltage drop across the
parasitic components explains this phenomenon, and its presence demonstrates that the
implementation of the power circuit in Simulink is a good replica of the real electric circuit.
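Formula (5.4) is easy to verify numerically; the sketch below (hypothetical helper name) evaluates the ideal steady-state output for the 40% duty-cycle used above:

```python
def ideal_buck_boost_vout(duty, vin):
    """Ideal steady-state output voltage of a Buck-Boost converter, eq. (5.4)."""
    return -duty / (1.0 - duty) * vin

# D = 0.4 and Vin = 15 V give the -10 V predicted by (5.4); the parasitic
# components of the Simulink model explain the roughly -9 V actually observed.
v = ideal_buck_boost_vout(0.4, 15.0)   # approximately -10 V
```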

For the following simulation experiments, the Buck-Boost model implemented in Simulink
will be the process to be controlled.

5.2 Simulation experiments for the nonlinear EPSAC performed in Simulink

5.2.1 Simulink implementations

Simulink implementation of the nonlinear EPSAC

Following a procedure similar to the one described in Section 5.1, an S-function block was
used to implement the nonlinear EPSAC algorithm in the Simulink software environment.
The major difference lies in the file that describes the functioning of the S-function block,
which this time is an M-file. The advantage of using an M-file instead of a C-file is that it
does not need to be compiled into a DLL-file, because the S-function block accepts
strings as parameters (representing the name of the M-file). The only requirement is that the
M-file must be written and organized in an imposed format, specially designed for the S-
function block, otherwise it will not be accepted as a parameter.

The other general features remain the same as presented in Section 5.1. The S-function block
implementing the nonlinear EPSAC (see Fig. 5.5) has the following structure:
- inputs:
  - the output voltage (the controlled variable);
  - the inductor current (used with the output voltage for updating the process model);
  - the past value of the output voltage;
  - the past value of the inductor current;
  - the past value of the control signal.

The last three inputs are used for noise calculation and for process model updating. These
values are provided as inputs to avoid the use of buffers, which would increase the time
needed for computing the control signal. If this number of inputs is not available in practice
(e.g. when the controller is a two-input DSP), the last three inputs can be eliminated and the
necessary data can be stored in buffers, from which it can be fetched at the next sampling
instant. The reference voltage Vref can also be an input for the controller if the reference
trajectory is a staircase trajectory.

- outputs:
  - the control signal that is applied to the PWM.

The control signal is limited according to the duty-cycle limitations (10% - 80%) so that the
allowed physical range of the controls is not violated. Moreover, a duty-cycle larger than 80%
would result in unreliable values of the impulse or step response coefficients used for
calculating the controls. This restriction is consistent with the properties of a real DC-DC
converter, whose duty-cycle does not exceed 60% in most cases.
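The saturation described above amounts to a simple clamp; a minimal sketch (hypothetical helper, assuming the 10%-80% limits quoted above):

```python
D_MIN, D_MAX = 0.10, 0.80   # duty-cycle limits used by the controller

def clamp_duty(u):
    """Limit the computed control signal to the allowed duty-cycle range."""
    return max(D_MIN, min(D_MAX, u))

# Control signals outside [0.10, 0.80] are saturated before reaching the PWM:
d = clamp_duty(0.95)   # saturated to 0.80
```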

- parameters:
  - the name of the M-file implementing the nonlinear EPSAC algorithm (the M-file is
listed in Appendix 2);
  - some optional parameters, such as the prediction horizon, the limit on the number of
iterations allowed per sampling period or the precision imposed on the optimal control.

The optional parameters can be specified in the parameter window of the S-function block, or
alternatively by editing the M-file that implements the nonlinear EPSAC.

Simulink implementation of the Closed Loop Control Structure

The final Simulink model, including the process model, the PWM and the nonlinear EPSAC
controller is presented in Fig. 5.5.

[Fig. 5.5 Simulink Closed Loop Control Structure: Step blocks allow step disturbances on the source voltage Vin = 15 and on the load resistor R = 165 of the Buck-Boost S-function "pspicebbu"; Unit Delay blocks feed the past output voltage, inductor current and control signal to the S-function "epsacnonlin" implementing the nonlinear EPSAC algorithm; a Pulse Generator and an S-R Flip-Flop subsystem implement the PWM driving the converter; displays show the output voltage, inductor current, duty-cycle and control signal]

The S-function blocks implementing the Buck-Boost converter model and the nonlinear
EPSAC algorithm are the ones described previously. The subsystem implementing the PWM is
shown in Fig. 5.6.

[Fig. 5.6 Simulink implementation of the PWM: the ramp from a Repeating Sequence block and the In1 control signal are summed and drive a Relay; its output passes through a Saturation block to Out1, with a Mux and a Display for monitoring]

The Repeating Sequence block generates the ramp voltage signal, referred to earlier as the first
input of the PWM. The In1 block carries the control signal generated by the nonlinear EPSAC
controller. When the sum of the two signals reaches zero (the ramp voltage has become equal to the
control signal), the relay switches, changing the output of the PWM.

At the beginning of each sampling period the Pulse Generator block sets the Flip-Flop, which
turns the transistor On; then, when the ramp voltage becomes equal to the control
signal, the PWM resets the Flip-Flop, which turns the transistor Off (w.r.t. Beginning of On-time
Modulation).

Fig. 5.5 also illustrates how step disturbances on the input voltage or on the load resistor can
be simulated using Step blocks.

All the blocks in the Simulink scheme are synchronized and perform their operations w.r.t. the
working frequency of the Buck-Boost DC-DC converter, 10 kHz (a sampling period of
0.1 ms).

5.2.2 Simulink simulation experiments

The closed loop control structure containing a nonlinear process model and a nonlinear
predictive controller is used to test the tracking performance, the robustness and the
disturbance rejection ability of the nonlinear EPSAC. The parameters of the Buck-Boost
converter are the ones presented in Section 4.2, with the parasitic components specified in
Section 5.1 added. The sampling period is the same for all experiments, 0.1 ms. The nonlinear
EPSAC algorithm has the same parameters as in Chapter 4:

$$N_1 = 1,\; N_u = 1,\; \lambda = 0 \quad (5.4)$$

with N2 (the prediction horizon) as a tuning parameter, the number of iterations per sampling
period limited to 50, and the precision imposed on the optimal control of order 1e-4:

$$\delta u(t|t) = U^*(1) \in (-0.0001,\, 0.0001) \quad (5.5)$$
Start-Up behavior and Setpoint tracking

The following results, shown in Fig. 5.7 and Fig. 5.8, were obtained for the desired reference
voltages Vref = -10 V and Vref = -20 V, respectively, considering zero initial conditions for the
Simulink DC-DC converter model and N2 = 20. At time instant 0.1 s a step up of
2 V is applied on the reference voltage and at time instant 0.2 s a step down of 2 V is applied
on the reference voltage.

[Fig. 5.7 a) Start-Up behavior and Setpoint tracking (-10 V): the inductor current; axes: Amperes vs. time in seconds]
[Fig. 5.7 b) Start-Up behavior and Setpoint tracking (-10 V): the output voltage tracks the reference voltage; axes: Volts vs. time in seconds]
[Fig. 5.8 a) Start-Up behavior and Setpoint tracking (-20 V): the inductor current; axes: Amperes vs. time in seconds]

[Fig. 5.8 b) Start-Up behavior and Setpoint tracking (-20 V): the output voltage tracks the reference voltage; axes: Volts vs. time in seconds]

As can be seen in the figures presented above, the start-up behavior and the tracking
performance of the nonlinear EPSAC are very good, considering that the control algorithm
still uses as the process model the state-space averaged model (3.10) with colored noise
added, which is different from the plant (the Simulink Buck-Boost model). Moreover, the model
used by the controller does not include any of the parasitic components present in the Simulink
model. This problem is solved by estimating the value of the load resistor and of the input
voltage at each sampling period. In this way, the algorithm actually estimates the value of the
load resistor R as being R + RC, thus compensating for the absence of the parasitic elements
from the converter model and resulting in a good performance of the control algorithm. The
fact that good results are obtained with a model that differs from the nonlinear process
demonstrates the robustness of the nonlinear EPSAC. The experiments were performed for
Vref = -10 V and Vref = -20 V in order to test the controller in both possible working situations
of a Buck-Boost converter (the level of the output voltage can be higher or lower than the
source voltage of 15 V).
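The report does not give the estimation formula itself, so the following sketch is purely illustrative: in continuous conduction the average diode current (1 - d)·iL supplies the load in steady state, so the load resistance can be approximated from the measured signals. All names here are hypothetical:

```python
def estimate_load(v_out, i_l, duty):
    """Rough steady-state estimate of the load resistance (illustrative).

    Assumes continuous conduction: the average diode current (1-d)*iL
    supplies the load, so R ~= |v_out| / ((1-d)*iL).  An estimate of this
    kind naturally absorbs the capacitor ESR (R + RC), compensating for
    parasitic elements missing from the controller model.
    """
    i_load = (1.0 - duty) * i_l
    return abs(v_out) / i_load

# Example: vout = -10 V, iL = 0.303 A, d = 0.4 -> roughly 55 ohm
r_est = estimate_load(-10.0, 0.303, 0.4)
```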

Disturbance rejection

Next, in order to further test the nonlinear EPSAC algorithm, the response of the closed loop
control system is shown when the process is subject to a step change in the input voltage vin
from the nominal DC value Vin = 15 V, or to a variation of the load resistor R.

The first experiment (see Fig. 5.9) simulates a 7.5 V step down applied on the input voltage at
time instant 0.1 s and a 7.5 V step up applied on the input voltage at time instant 0.2 s.
[Fig. 5.9 a) The inductor current, source voltage disturbance; axes: Amperes vs. time in seconds]
[Fig. 5.9 b) The output voltage tracking the reference voltage (Vref = -20 V), source voltage disturbance; axes: Volts vs. time in seconds]
The effect on the output voltage of the huge steps (7.5 V is 50% of the nominal value
Vin = 15 V) applied to the input voltage at time instants 0.1 s and 0.2 s is
insignificant. Disturbances on the input voltage are rejected after a transient during which
some variations of the output voltage occur. These variations are minor with respect to the
imposed performance, which consists mainly in keeping the output voltage within 3% of the
desired nominal or reference value Vref (between -19.4 V and -20.6 V).

Next, the load disturbance rejection is tested, as shown in Fig. 5.10: at time
instant 0.1 s the value of the load resistor drops to 33 Ω and at time instant 0.2 s it
returns to 165 Ω.

[Fig. 5.10 a) The inductor current, load disturbance rejection; axes: Amperes vs. time in seconds]

[Fig. 5.10 b) The output voltage tracking the reference voltage (Vref = -20 V), load disturbance rejection; axes: Volts vs. time in seconds]
As can be seen in Fig. 5.10, the nonlinear EPSAC does not completely eliminate the
oscillations produced by the severe load disturbance, because the prediction horizon N2 = 20
is not large enough. Still, the results do not violate the imposed performance, which is
acceptable in practice. Yet the inductor current oscillations are somewhat too large in
amplitude, so a prediction horizon N2 = 30 is needed in order to obtain excellent
performance (see Fig. 5.11).

[Fig. 5.11 a) The inductor current, load disturbance rejection; axes: Amperes vs. time in seconds]

[Fig. 5.11 b) The output voltage tracking the reference voltage (Vref = -20 V), load disturbance rejection; axes: Volts vs. time in seconds]
5.3 Convergence analysis
The nonlinear EPSAC calculates the optimal control signal by iteratively driving the
optimizing component of the predicted output (yoptimize) to zero, which means that the
number of iterations per sampling period is related to the value of the cost function (2.15).
The amplitude of the control error determines this value, while the matrix G acts as a
weighting factor.

The connection between the control error and the number of iterations is illustrated in
Fig. 5.12 and Fig. 5.13.
[Fig. 5.12 Number of iterations per sampling period needed to reach the optimal control signal, load disturbance rejection (N2 = 20); axes: number of iterations vs. sampling periods]

[Fig. 5.13 Number of iterations per sampling period needed to reach the optimal control signal, load disturbance rejection (N2 = 30); axes: number of iterations vs. sampling periods]
During the start-up, the error between the setpoint and the process output is large, so the value
of the cost function is also large, which causes the nonlinear EPSAC to perform the maximum
number of iterations allowed. The fact that the optimal control signal cannot be reached is
reasonable, because the optimum that would take the output voltage from zero to Vref exceeds
the maximum control value permitted as input for the process.

During the steady state, the output voltage oscillates around the nominal operating point,
resulting in a non-zero control error. Depending on the control error, which takes different
values according to the disturbances affecting the converter (compare Fig. 5.10 b) with
Fig. 5.12, and Fig. 5.11 b) with Fig. 5.13), a smaller or a larger number of iterations is
required.

The extra operations performed by the nonlinear EPSAC are counted starting from zero, so
when zero iterations are performed it means that the linear EPSAC already obtained good
results. In fact, the nonlinear EPSAC performing no (zero) iterations reduces, in the
particular case (5.4), exactly to the linear EPSAC.

The conclusions drawn in Chapter 4 are confirmed:

- when the prediction horizon is N2 = 20, the linear EPSAC is not able to obtain
reasonable performance (as supported by the theoretical aspects presented in Chapter
2), not even during the steady state, as seen in Fig. 5.12 (the number of iterations is
not zero);
- when the prediction horizon is N2 = 30 (increasing the prediction horizon brings
the linear EPSAC closer to reasonable performance; see Chapter 4), the linear EPSAC is
able to solve the control problem during the steady state, but not to reject the
disturbance (when the load disturbance occurs, the number of iterations increases), as
illustrated in Fig. 5.13.

These results confirm the validity of the EPSAC algorithm for nonlinear systems and
demonstrate that the EPSAC approach to NPC is successful. The next step is to implement the
nonlinear EPSAC on a real process, but in order to achieve this some adjustments must be
made.

A useful idea is to lower the iteration limit after the start-up, obtaining a faster
control algorithm, appropriate for practical implementation.

In order to reduce the number of iterations while maintaining the same control performance,
the prediction horizon N2 must be increased. Even though N2 is raised, keeping the number of
iterations low results in a much shorter overall computation time. If the limit of 50, necessary
for good behavior at start-up, is kept constant with N2 = 20, the nonlinear EPSAC will be up
to 50 times slower than the linear one due to the extra 50 optimizations performed at each
sampling period. If N2 is made 25, then after the steady state is reached the limit can be
lowered to 10 and the performance remains very good, as seen in Fig. 5.14 b), while the
algorithm is only 10 times slower than the linear EPSAC.

The nonlinear EPSAC with the limit equal to 50 and N2 = 20 performs 1000 extra
calculations, whereas with the limit equal to 10 and N2 = 25 it performs only 250 extra
calculations, becoming four times faster. This improvement is of major importance in a real
application, where the actual sampling period (0.1 ms for the Buck-Boost DC-DC
converter) must be taken into account.
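The figures quoted above follow from a simple operation count, roughly proportional to the iteration limit times the prediction horizon:

```python
def extra_calculations(iter_limit, n2):
    """Approximate number of extra predictions per sampling period performed
    by the nonlinear EPSAC (iteration limit times horizon length)."""
    return iter_limit * n2

# 50 iterations with N2 = 20 versus 10 iterations with N2 = 25:
a = extra_calculations(50, 20)   # 1000 extra calculations
b = extra_calculations(10, 25)   # 250 extra calculations, four times fewer
```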

[Fig. 5.14 a) Disturbance on the input voltage: the inductor current; axes: Amperes vs. time in seconds]
[Fig. 5.14 b) Disturbance on the input voltage: the output voltage tracking the reference voltage; axes: Volts vs. time in seconds]
[Fig. 5.15 Number of iterations per sampling period; axes: number of iterations vs. sampling periods]
The results presented in Fig. 5.14 and Fig. 5.15 were obtained for N1 = 1, N2 = 25, Nu = 1 and a
maximum of 10 iterations allowed during each sampling period, after the steady state
was reached.

The nonlinear EPSAC successfully rejects the 7.5 V step down applied on the input voltage
Vin = 15 V at time instant 0.1 s (and the 7.5 V step up applied at time instant 0.2 s), keeping
the output voltage of the converter within the imposed range of 3% of the desired voltage Vref
(equal to -20 V).

The correlation between the control error (the difference between the setpoint and the
controlled output) and the number of iterations shown in Fig. 5.15 is now clearly seen. When
the disturbance occurs (0.1 s), the oscillations of the output voltage around the nominal
operating point increase in amplitude, resulting in a larger control error, so more
iterations are needed to minimize the cost function (2.15).

6. Neural Model Based Predictive Controller
for Nonlinear Systems

In this Chapter, a neural network based predictive controller design method is studied for non-
linear control systems. The advantages of using neural networks for modeling non-linear
processes are shown together with the construction of neural predictors. The implementation
algorithm of the neural predictive controller eliminates the most significant obstacles
encountered in non-linear predictive control applications by facilitating the development of
non-linear models and providing a rapid, reliable solution of the control algorithm. Controller
design and implementation are illustrated for a plant frequently referred to in the literature.
Results are given for simulation experiments where the closed-loop dynamics is governed by
the neural based predictive controller.

Finally, a short example with a Neural Model based EPSAC controller will be presented in
order to compare the performance of the two nonlinear predictive controllers.

6.1 Neural models and predictors


Recently, neural networks have become an attractive tool in the construction of models for
complex non-linear systems, because of their inherent ability to learn and approximate non-
linear functions. A large number of control and identification structures based on neural
networks have been presented in the literature (e.g. Chen et al., 1990, Narendra and
Parthasarathy, 1990, Hunt et al., 1992, Fabri and Kadirkamanathan, 1996, Liu et al., 1998).
Most non-linear predictive control algorithms involve the minimization of a cost function by
means of non-linear programming techniques in order to obtain the optimal command for the
process. The implementation of non-linear predictive control algorithms becomes very
difficult for real-time control, because the minimization algorithm must converge to the
optimum solution in all cases and in a very short time (less than a second).

Neural Network based Models

The use of neural networks for non-linear process modeling and identification is justified by
their capacity to approximate the dynamics of non-linear systems, including those with strong
non-linearities or with dead time. In order to estimate the non-linear process, the neural
network must be trained until the optimal values of the weight vectors (i.e. weights and biases
organized in vector form) are found. In most applications, feedforward neural networks
are used, because the training algorithm is less complicated. The problem of obtaining the
model of a dynamic system can be solved in two ways: using a static neural network with
external dynamics, or using a dynamic neural network. Usually, static neural networks are
preferred, which requires that filters be applied on the network inputs in order to capture the
real system dynamics. The filters usually consist of delay elements, and in this case the
network is called a time delay neural network - TDNN (Dumitrache et al., 1999). The order
of the non-linear system must be known with a certain approximation, so that a reliable non-
linear model is achieved. For simplicity, only the case of non-linear systems without dead-
time is going to be considered; the presence of dead-time does not affect the overall scenario
to be developed in the sequel. The structure of a TDNN is presented in Fig. 6.1. The neural
network output y(k) is a function of the filtered input and output signals (u and y):

$$y(k) = F[u(k-1), u(k-2), \ldots, y(k-1), y(k-2), \ldots] \quad (6.1)$$

Fig. 6.1 The structure of a Time Delay Neural Network: delayed values of the input u(k) and of
the output y(k), obtained through q^-1, q^-2, … filters, feed a static neural network that
produces y(k)

When it comes to non-linear models, the most general model, covering the widest class of
non-linear processes, is the NARMAX model (Chen and Billings, 1989), given by:

y(k) = F[ u(k−1), u(k−2), …, u(k−m), y(k−1), y(k−2), …, y(k−n) ]    (6.2)

where F[·] is some non-linear function and n and m are the orders of the non-linear system
model.

By properly adjusting the filter elements of a TDNN, the neural non-linear model
corresponding to the NARMAX model is obtained. The neural NARMAX model, with the noise not
considered, is represented schematically in Fig. 6.2. In this case, the neural network
output is described by:

y(k) = F_N[ u(k−1), y(k−1) ]    (6.3)

Fig. 6.2 The neural NARMAX model: the vectors u(k−1) and y(k−1) feed a neural network that
produces y(k)

where F_N represents the neural network which replaces the non-linear function F, and
u(k−1), y(k−1) are the vectors containing the m and n delayed elements, respectively, of u
and y starting from the time instant k−1, i.e.:

u(k−1) = [ u(k−1), u(k−2), …, u(k−m) ]^T
y(k−1) = [ y(k−1), y(k−2), …, y(k−n) ]^T    (6.4)

The neural NARMAX model corresponds to a recurrent neural network, because some of the
network inputs are past values of the network output. By expanding equation (6.3) for a
two-layer network, the following expression is obtained for the network output at an
arbitrary discrete-time instant:

y(k) = ∑_{j=1}^{N} w_j σ_j( u(k−1)^T w_j^u + y(k−1)^T w_j^y + b_j ) + b    (6.5)

where:
- N - the number of neurons in the hidden layer;
- σ_j - the activation function of the j-th hidden neuron;
- w_j^u - the weight vector of the j-th hidden neuron with respect to the inputs stored in u(k−1);
- w_j^y - the weight vector of the j-th hidden neuron with respect to the inputs stored in y(k−1);
- b_j - the bias of the j-th hidden neuron;
- w_j - the output-layer weight corresponding to the j-th hidden neuron;
- b - the bias of the output layer.
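As an illustration of (6.5), the network output can be computed in a few lines. The report's
implementation uses the Matlab Neural Network Toolbox; the Python sketch below is an
illustration only, and assumes tanh as the sigmoidal activation:

```python
import numpy as np

def tdnn_output(u_past, y_past, Wu, Wy, b_hidden, w_out, b_out):
    """Output of the two-layer network in (6.5).

    u_past   -- [u(k-1), ..., u(k-m)]      shape (m,)
    y_past   -- [y(k-1), ..., y(k-n)]      shape (n,)
    Wu, Wy   -- hidden-layer weights       shapes (N, m) and (N, n)
    b_hidden -- hidden-layer biases        shape (N,)
    w_out    -- output-layer weights       shape (N,)
    b_out    -- output-layer bias (scalar)
    """
    # hidden activations sigma_j(u^T w_j^u + y^T w_j^y + b_j);
    # tanh is assumed here as the sigmoidal activation function
    h = np.tanh(Wu @ u_past + Wy @ y_past + b_hidden)
    # linear output layer: sum_j w_j h_j + b
    return float(w_out @ h + b_out)
```

With all weights zero the hidden activations vanish and the output reduces to the output-layer
bias b, which is a quick sanity check on the implementation.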

Such structures with a single hidden layer are considered satisfactory in most cases. In
order to obtain the model of a non-linear process, the vector u(k−1) defined by (6.4) is
applied as input to the process. The plant output is stored in a vector, which is used as
the target vector for the neural network. The target vector, together with an input vector
containing the input values applied to the plant, is used to train the neural network. The
training procedure consists of sequentially adjusting the network weight vectors so that the
error between the desired response (the values from the target vector) and the network
output is minimized; when a certain stopping criterion is satisfied, the training algorithm
yields the optimum values of the weight vectors. Several training algorithms are implemented
in the Matlab Neural Network Toolbox (Demuth and Beale, 1998), such as: the backpropagation
algorithm (trainbp), the backpropagation algorithm with momentum (trainbpm), the
backpropagation algorithm with adaptive learning (trainbpa), the fast backpropagation
algorithm (trainbpx) and the Levenberg-Marquardt backpropagation algorithm (trainlm).

The TDNN with the weight vectors fixed at the values obtained after training represents the
non-linear model of the system. Before this model is used to build the neural predictors,
validation of the neural based model is necessary. Two validation methods are recommended:
- a what-if test: a time-validation test consisting of a comparison between the output of
the neural network and the output of the non-linear system for an input signal different
from the one used to train the network. This test uses the following error index to assess
the quality of the model (with N the number of samples):

error index = ∑_{k=1}^{N} ( y_n(k) − y_p(k) )² / ∑_{k=1}^{N} y_p(k)²    (6.6)

where y_p is the process output and y_n is the neural model output.

- a correlation test (Billings and Voon, 1986) which is based on five correlation criteria.

If the neural network passes all these tests, it is accepted as a good model of the non-
linear system.
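The error index (6.6) is straightforward to compute; a small Python helper (illustrative
only, the report uses Matlab) is:

```python
import numpy as np

def error_index(y_model, y_process):
    """Error index (6.6): ratio of the squared prediction-error energy
    to the squared process-output energy over the validation samples."""
    y_model = np.asarray(y_model, dtype=float)
    y_process = np.asarray(y_process, dtype=float)
    return float(np.sum((y_model - y_process) ** 2) / np.sum(y_process ** 2))
```

A perfect model gives 0; a model predicting zero everywhere gives 1, so values well below 1
(such as the 0.0108 reported later in Section 6.3) indicate a good fit.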

Neural Predictors

The predictors are necessary for predicting the future values of the plant output used in
the predictive control strategy. They are constructed starting from the model of the
process. The following i-step-ahead predictor was introduced for the NARMAX model
(Leontaritis and Billings, 1985):

y(k+i) = Q[ y(k+i−1), …, y(k), u(k+i), …, u(k) ]    (6.7)
where Q[.] is some non-linear function and i depends on the prediction horizon.

Neural predictors rely on the neural based model of the process (Tan and De Keyser, 1994;
Liu et al., 1998). In order to obtain the model of the non-linear system, the same neural
network structure given by (6.5) is considered. The advantage of neural predictors is that,
instead of Q[·], they use the same non-linear function F which describes the model of the
process; the non-linear function F is in this case the neural network itself. A sequential
algorithm based on the current values of u and y and on the neural-network system model
gives the i-step-ahead neural predictor. From equation (6.5), the network output at the
time instant k+1 can be derived:

y(k+1) = ∑_{j=1}^{N} w_j σ_j( u(k)^T w_j^u + y(k)^T w_j^y + b_j ) + b    (6.8)

where:
u(k) = [ u(k), u(k−1), …, u(k+1−m) ]^T
y(k) = [ y(k), y(k−1), …, y(k+1−n) ]^T    (6.9)
This is the expression of the one-step-ahead predictor, with the notations introduced in
equation (6.5). Fig. 6.3 shows schematically how the neural predictors are constructed:
copies of the one-step model are cascaded, each copy receiving the planned input and the
previously predicted output. Extending equation (6.8) one step further ahead,

Fig. 6.3 The structure of the neural predictors: a cascade of neural network copies, the
i-th copy mapping u(k+i−1) and y(k+i−1) to y(k+i)

y(k+2) can be obtained and, generally, the i-step-ahead predictor can be derived:

y(k+i) = ∑_{j=1}^{N} w_j σ_j( u(k+i−1)^T w_j^u + y(k+i−1)^T w_j^y + b_j ) + b    (6.10)

where:
u(k+i−1) = [ u(k+i−1), u(k+i−2), …, u(k+i−m) ]^T
y(k+i−1) = [ y(k+i−1), y(k+i−2), …, y(k+i−n) ]^T    (6.11)
The predictive control algorithm uses the neural predictors to calculate the future control
signal, and thus neural predictive control is achieved.
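The cascade in Fig. 6.3 amounts to feeding the one-step model back on itself. A minimal
Python sketch of this recursion (model-agnostic: `one_step` stands for any trained one-step
predictor, and the names are illustrative):

```python
def neural_predict(one_step, u_plan, u_hist, y_hist, steps):
    """Multi-step prediction by cascading the one-step model
    (Fig. 6.3, eqs. (6.8)-(6.11)): each predicted output is fed
    back as a regressor for the next step.

    one_step -- callable(u_reg, y_reg) -> next output
    u_plan   -- planned inputs [u(k), u(k+1), ...], length >= steps
    u_hist   -- [u(k-1), ..., u(k-m+1)]  (m-1 values, most recent first)
    y_hist   -- [y(k), ..., y(k-n+1)]    (n values, most recent first)
    """
    u_hist, y_hist = list(u_hist), list(y_hist)
    n = len(y_hist)
    preds = []
    for i in range(steps):
        u_reg = [u_plan[i]] + u_hist        # [u(k+i), ..., u(k+i-m+1)]
        y_next = one_step(u_reg, y_hist)    # predicted y(k+i+1)
        preds.append(y_next)
        u_hist = u_reg[:-1]                 # shift the input regressors
        y_hist = ([y_next] + y_hist)[:n]    # feed the prediction back
    return preds
```

For a toy linear "model" one_step(u, y) = u[0] + y[0] with y(k) = 1 and a constant planned
input of 1, the recursion yields the predictions 2, 3, 4, … as each output is fed back.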

6.2 Predictive control using neural networks


The neural predictive control strategy uses the neural predictors to calculate the future plant
output and a cost function based on the error between the predicted process output and the
reference trajectory. The cost function, which may be different from case to case, is
minimized in order to obtain the optimum control input that is applied to the non-linear plant.
In most predictive control algorithms a quadratic cost function is utilized:

J = ∑_{i=N1}^{N2} [ y(k+i) − r(k+i) ]² + λ ∑_{i=1}^{N2} u²(k+i−1)    (6.12)

with the additional requirement:

∆u(k+i−1) = 0,  N1 ≤ Nu < i ≤ N2    (6.13)

where the following notations were used:
- Nu - the control horizon;
- N1 - the minimum prediction horizon;
- N2 - the prediction horizon;
- i - the order of the predictor;
- r - the reference trajectory;
- λ - the weight factor;
- ∆ - the differencing operator.

The cost function is often used with the weight factor λ = 0. The requirement expressed by
(6.13) also holds for the i-step-ahead predictor through the vector u(k+i−1) given by
(6.11). Depending on the control horizon, it affects the elements of the vector u(k+i−1),
and consequently the i-step-ahead predictor, as follows:

u(k+i−l) = u(k+i−l),  for i−l < Nu
u(k+i−l) = u(k+Nu),   for i−l ≥ Nu,   l = 1, …, m    (6.14)

This means that the control signal is kept constant at the value u(k+Nu) once the
prediction horizon exceeds the control horizon.
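The clamping in (6.14) is easy to express in code; the hedged Python fragment below (the
function name is illustrative) expands the Nu free moves into the full input sequence over
the prediction horizon:

```python
def expand_moves(u_moves, N2):
    """Expand the Nu free control moves u(k), ..., u(k+Nu-1) into a
    length-N2 input sequence: beyond the control horizon the input is
    held constant at its last free value, per (6.13)-(6.14)."""
    Nu = len(u_moves)
    return [u_moves[min(i, Nu - 1)] for i in range(N2)]
```

For example, with Nu = 2 free moves and N2 = 4, the last move is simply repeated over the
remaining prediction steps.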

The optimal controller output sequence over the prediction horizon is obtained by minimizing
the cost function J with respect to the vector u. This can be achieved by setting:
∂J/∂u = 0,  u = [ u(k), u(k+1), …, u(k+Nu−1) ]^T    (6.15)
Proceeding further with the calculation, an important problem occurs: differentiating the
neural network means differentiating its activation functions, which are usually sigmoidal,
and this does not lead to an analytical solution for the optimum. This is why a non-linear
programming technique is preferred for the minimization of the cost function J. In order to
apply the minimization method, the algorithm must always converge to the minimum, and the
time needed for the calculations must be suitable for real-time implementation. There are
cases, such as affine non-linear systems, where an analytical solution is obtained by
differentiating the neural network, if the activation functions are of radial basis type
(Liu et al., 1998).

The computation of the optimal control signal at the discrete-time instant k can be achieved
with the following algorithm:

- the previous iteration, i.e. the minimization procedure performed at the moment k−1,
gives the command vector:

u = [ u(k−1), u(k), …, u(k+Nu−2) ]^T    (6.16)

At the first iteration, the control input vector contains initial values provided by the
user; the number of values introduced must be equal to the control horizon.

- the step-ahead predictors of orders between N1 and N2 are calculated using the vectors
u(k+N1−1), y(k+N1−1) through u(k+N2−1), y(k+N2−1), described in Section 6.1, as well as
the neural network based process model.

- minimizing the cost function J with respect to the command vector yields the optimal
control sequence:

u = [ u(k), u(k+1), …, u(k+Nu−1) ]^T    (6.17)
Two routines were created to implement this algorithm in the Matlab environment. The main
routine solves the first and the third step of the algorithm presented above and applies
the command to the non-linear system; it also creates two vectors in which the output
command and the process response are stored at each iteration. The other routine performs
the computations required by the second step of the algorithm and constructs the cost
function J, which is minimized in the main routine. For the minimization of the cost
function, the Matlab Optimization Toolbox functions lsqnonlin and fmincon were used. Both
functions implement minimization algorithms for non-linear multivariable functions. Unlike
the lsqnonlin function, fmincon allows imposing constraints with respect to the value of the
control input such as upper or lower bounds, which are often required in practice. The cost
function J is given as an input parameter for the functions mentioned above together with the
initial values for the control input vector and some options regarding the minimization
method (the maximum number of iterations, the minimum error value, the use of analytical
gradients, etc). In the case of fmincon the constraints must also be given as input parameters
in a matrix form.
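As a language-neutral illustration of this receding-horizon loop (warm-start from the
previous solution, minimize J over the free moves, apply only the first input), the
following Python sketch replaces lsqnonlin/fmincon with a deliberately crude coordinate
search under box constraints; the solver and all names are illustrative only, not the
report's implementation:

```python
def minimize_box(cost, u0, lo, hi, iters=200, step0=0.5):
    """Crude coordinate search with box constraints: a stand-in for
    fmincon, used only to make the receding-horizon loop concrete.
    Any constrained NLP solver could be substituted."""
    u = [min(max(v, lo), hi) for v in u0]
    step = step0
    for _ in range(iters):
        improved = False
        for j in range(len(u)):
            for d in (step, -step):
                cand = list(u)
                cand[j] = min(max(cand[j] + d, lo), hi)  # respect bounds
                if cost(cand) < cost(u):
                    u, improved = cand, True
        if not improved:
            step *= 0.5                                  # refine the step
    return u

def mpc_step(cost, u_prev, lo, hi):
    """One controller iteration: warm-start from the shifted previous
    sequence, minimize J over the Nu free moves, apply only u(k)."""
    u0 = u_prev[1:] + [u_prev[-1]]    # shift the previous optimal sequence
    u_opt = minimize_box(cost, u0, lo, hi)
    return u_opt[0], u_opt
```

When the unconstrained optimum lies outside the box, the returned move sits on the bound,
which mirrors the role of the magnitude constraints passed to fmincon.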

The advantage of this non-linear neural predictive controller lies in the implementation
method, which solves the main problems of non-linear MBPC. The implementation itself is
simple, easy to use and satisfies the requirements regarding the minimization algorithm.
The parameters of the neural predictive controller such as the prediction horizon, the control
horizon as well as the necessary constraints can be modified.

6.3 Simulation study
In order to test the proposed implementation algorithm, a simulation study was carried out.
The following non-linear plant, taken from the literature, was considered (Chen and Khalil,
1995):

y(k) = 2.5 y(k−1) y(k−2) / ( 1 + y²(k−1) + y²(k−2) )
       + 0.3 cos( 0.5 ( y(k−1) + y(k−2) ) ) + 1.2 u(k−1)    (6.18)

where y is the output of the plant and u is the plant input.
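The plant (6.18) is a one-line difference equation; the Python sketch below (the report's
experiments were run in Matlab) simply evaluates it:

```python
import math

def plant(y1, y2, u1):
    """The benchmark plant (6.18): returns y(k) given
    y(k-1) = y1, y(k-2) = y2 and u(k-1) = u1."""
    return (2.5 * y1 * y2 / (1.0 + y1 ** 2 + y2 ** 2)
            + 0.3 * math.cos(0.5 * (y1 + y2))
            + 1.2 * u1)
```

From rest (zero past outputs and zero input) the next output is 0.3, because the cosine term
does not vanish at the origin; this matches the zero initial condition used in the tracking
experiments below.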

The procedure described in Section 6.1 was followed in order to obtain the neural network
based model of the process given by (6.18). The neural network structure is presented in Fig.
6.4, with m = 1 and n = 2 (the notations are those introduced in Section 6.1).

Fig. 6.4 The neural network model: inputs u(k−1), y(k−1), y(k−2); output y(k)

A neural network structure with ten neurons in the hidden layer was used. Fig. 6.5 shows
the input signal applied to plant (6.18) and the corresponding response, which were used to
train the neural network of Fig. 6.4 according to a standard series-parallel scheme
(Narendra and Parthasarathy, 1990). The input signal is white noise with power 3, seed 3341
and sample time 8. The output signal is calculated without considering the noise.
Fig. 6.5 a) Signals used in the Neural Network training: the input signal applied to the plant
Fig. 6.5 b) Signals used in the Neural Network training: the plant response

The neural network was trained with the fast backpropagation algorithm (trainbpx). In Fig.
6.6 the model is validated using a time-validation test for a sinusoidal input signal. The
error index calculated with (6.6) has the value 0.0108, which means that the model is a
very good approximation of the non-linear system.

Fig. 6.6 a) The model validation: the input signal, identical for both the plant and the model

Fig. 6.6 b) The model validation: the response of the model (solid line) and the process
response (dotted line)

With the notations introduced in Section 6.2, the following values were chosen for the
tuning parameters of the predictive control algorithm: N1 = 1, N2 = 3 and Nu = 2. The
realization of the neural predictors starting from the neural network model is shown in
Fig. 6.7. Next, the cost function J is constructed:

J = ∑_{i=N1}^{N2} [ y(k+i) − r(k+i) ]²    (6.19)

Fig. 6.7 The neural predictors: a cascade of four copies of the neural network model
producing y(k), y(k+1), y(k+2) and y(k+3), with u(k+2) = u(k+1) imposed by the control
horizon

Setting

∂J/∂u = 0,  u = [ u(k), u(k+1) ]^T    (6.20)

the minimization algorithm gives the control input vector u = [u(k) u(k+1)]^T to be applied
to the plant described by (6.18). In order to check how the implementation algorithm works,
several reference signals were used, such as sinusoidal signals and different step signals.
The initial condition of the plant is:

( y(k−1), y(k−2) ) = (0, 0)    (6.21)

and the goal is to control plant (6.18) so as to track the reference input very tightly for
all the reference signals.

Figs. 6.8 and 6.9 present the performance of the non-linear neural predictive controller
for sinusoidal and step references, respectively.

Fig. 6.8 a) The closed-loop system behaviour for a sinusoidal reference: setpoint (dotted
line) and output (solid line)

Fig. 6.8 b) Control signal

Fig. 6.9 a) The closed-loop system behaviour for a staircase step reference: setpoint
(dotted line) and output (solid line)

Fig. 6.9 b) Control signal

The control algorithm can also handle constraints on the magnitude of the control signal.
In this case, the Matlab function used for the cost function minimization is fmincon, which
replaces the Matlab function lsqnonlin used to obtain the previous results. To show how the
neural predictive controller works when constraints are applied to the control signal, the
following example is considered. For the same multi-step reference, Fig. 6.10 presents the
closed-loop system behaviour when no constraint is applied to the control signal, whereas
Fig. 6.11 presents the closed-loop system behaviour when the control signal is subject to
the constraint:

−0.3 ≤ u ≤ 0.9    (6.22)

Fig. 6.10 a) The closed-loop system behaviour for a multi-step reference without
constraints: setpoint (dotted line) and output (solid line)
Fig. 6.10 b) The closed-loop system behaviour for a multi-step reference without
constraints: the control signal

Fig. 6.11 a) The closed-loop system behaviour for a multi-step reference with constraints:
setpoint (dotted line) and output (solid line)

Fig. 6.11 b) The closed-loop system behaviour for a multi-step reference with constraints:
the control signal

The results obtained were better than those reported in the literature for the same
non-linear plant (e.g. Liu et al., 1998). The advantages of the implementation method
presented in Section 6.2 are supported by comparing the evolution of the tracking error for
our approach and for Liu's, as reflected in Fig. 6.12. The comparison starts from sample
100 and from time instant 100, respectively.

Fig. 6.12 a) The tracking error for Liu’s approach

Fig. 6.12 b) The tracking error for the proposed approach

6.4 Neural predictive controller for nonlinear systems based on the EPSAC approach
In this Section, the performance of the nonlinear EPSAC algorithm using a neural network
based process model is analyzed. An illustrative example was presented in (Lazăr et al.,
2001), where the nonlinear system given by (6.18) was considered as the process to be
controlled.

The neural network model of process (6.18) and the neural predictors used by the nonlinear
EPSAC are the ones utilized in Section 6.3. The EPSAC algorithm was implemented in
Matlab following the procedure described in Chapter 4. In Fig. 6.13, the tracking performance
of neural model based nonlinear EPSAC and the control signal are shown for the same
sinusoidal reference trajectory used in Section 6.3. Also, the number of iterations per sample
and the tracking error are given in Fig. 6.14.
Fig. 6.13 Process output (solid line) tracking the reference trajectory (dotted line), and
the control signal
Fig. 6.14 The number of iterations per sample, and the tracking error

For these experiments, the number of iterations per sample performed by the nonlinear
EPSAC starts from 1; when only one iteration is performed, the control algorithm is
equivalent to the linear EPSAC. The precision imposed on the optimal control was the
following:

δu(t/t) = U*(1) ∈ (−0.01, 0.0001)    (6.23)

The EPSAC approach gives very good results with a neural process model. Although a
computational method is used to obtain the control signal, the controller is very fast due
to the small number of iterations needed to calculate the control signal. For other
nonlinear processes (more difficult to control), ways to decrease the number of iterations
were presented in Chapter 2 and Chapter 5.

Conclusions

This chapter summarizes the main investigations presented in this research report and gives
some perspectives for future research.

The central philosophy is to introduce a gentle approach to NPC based on the EPSAC
algorithm. Thus, the theoretical aspects of the EPSAC algorithm for the linear and the
nonlinear cases were first presented. The nonlinear EPSAC, based on the concepts of base
response and optimizing response, proved to be an appropriate solution for the main problems
that come with NPC.

In order to illustrate the performance of the EPSAC algorithm, a highly nonlinear plant,
namely the Buck-Boost DC-DC converter, was chosen as the process to be controlled. Several
linear and nonlinear, continuous and discrete plant models were developed, together with
the main control structures used in practice for DC-DC power converters.

The linear and nonlinear EPSAC algorithms were implemented in Matlab together with the
plant model. One routine was created for calculating the matrix containing the impulse
response coefficients, and another for calculating the prediction error obtained using the
plant model with colored noise added. Both are secondary routines used within the main file
and are the same for the linear and the nonlinear EPSAC. These software tools were used to
compare the start-up behavior and the setpoint tracking of the linear EPSAC versus the
nonlinear EPSAC, underlining the superiority of the nonlinear controller.

The properties of the nonlinear EPSAC algorithm were further studied using a more realistic
plant model implemented in Simulink. To this end, a Simulink version of the controller was
also developed, together with a closed-loop control structure. Several experiments were
presented which illustrated the controller's setpoint tracking performance and its ability
to reject disturbances.

An interesting and useful analysis of the convergence of the nonlinear EPSAC algorithm was
provided, together with a discussion of possible ways to overcome the difficulties of a
practical implementation.

The nonlinear EPSAC proved to give excellent results also when a neural network based
process model is used, opening new perspectives in neural predictive control.

The results obtained were very good, extending the theoretical validity of the non-linear
EPSAC to simulation experiments, which encourages our plan to address the full complexity
of a real-life DSP implementation in future work.

References

Allgöwer, F., T.A. Badgwell, S.J. Qin, J.B. Rawlings and S.J. Wright (1999). Nonlinear
predictive control and moving horizon estimation – An introducing overview. In
Advances in Control: Highlights of ECC’99, (P.M. Frank, Ed. London: Springer), pp.
391-449.

Camacho, E.F. and C. Bordons (1999). Model Predictive Control. Springer-Verlag, London.

Chen, S., S.A. Billings and P.M. Grant (1990). Non-linear system identification using neural
networks. Int. J. Control, Vol. 51, pp. 1191-1214.

Chen, S. and S.A. Billings (1989). Representations of non-linear systems: the NARMAX
model. Int. J. Control, Vol. 49, No. 3, pp. 1013-1032.

Clarke, D.W., C. Mohtadi and P.S. Tuffs (1987). Generalized predictive control: I – The
basic algorithm and II – Extensions and interpretations. Automatica, Vol. 23, pp. 137-160.

Cutler, C.R. and B.L. Ramaker (1980). Dynamic matrix control – A computer control
algorithm. Proc. Joint Automatic Control Conference, San Francisco, USA.

De Keyser, R.M.C. and A.R. van Cauwenberghe (1985). Extended prediction self-adaptive
control. IFAC Symp. on Identification and System Parameter Estimation, York, U.K.,
pp. 1225-1260.

De Keyser, R.M.C., Ph.G.A. Vande Velde and F.A.G. Dumortier (1988). A comparative
study of self-adaptive long-range predictive control methods. Automatica, Vol. 24, No.
2, pp. 149-163.

De Keyser, R.M.C. (1991). Basic Principles of Model Based Predictive Control. ECC 91
European Control Conference, Grenoble, France, pp. 1753-1758.

De Keyser, R.M.C. and O. Păstrăvanu (1993). Neural Network Based Control Algorithms
Implemented in Matlab. Buletinul Institutului Politehnic din Iaşi, Vol. XXXIX (XLIII),
No. 1-4, pp. 25-34.

De Keyser, R.M.C. (1998). A Gentle Introduction to Model Based Predictive Control.
European Union EC-ALFA-PADI 2 Int. Conference on Control Engineering and Signal
Processing, Piura.

Demuth, H. and M. Beale (1998). Neural Network Toolbox User's Guide. The MathWorks, Inc.,
pp. 5-2 – 5-56.

Dumitrache, I., N. Constantin and M. Drăgoicea (1999). Neural Networks – Process
Identification and Control. Matrix ROM, Bucureşti (in Romanian).

Fabri, S. and V. Kadirkamanathan (1996). Dynamic structure neural networks for stable
adaptive control of nonlinear systems. IEEE Transactions on Neural Networks, Vol. 7,
pp. 1151-1167.

Garcia, C.E., D.M. Prett and M. Morari (1989). Model Predictive Control: Theory and
Practice – a Survey. Automatica, Vol. 25, No. 3, pp. 335-348.

Hunt, K.J., D. Sbarbaro, R. Zbikowski and P.J. Gawthrop (1992). Neural networks for
control systems – a survey. Automatica, Vol. 28, pp. 1083-1112.

Kassakian, J.G., M.F. Schlecht and G.C. Verghese (1992). Principles of Power Electronics.
Addison-Wesley Publishing Company, Inc.

Lazăr, C. (1999). Model based Predictive Control. Matrix ROM, Bucureşti (in Romanian).

Lazăr, M., O. Păstrăvanu and R. De Keyser (2001). A Comparison of Neural Model Based
Predictive Controllers. Poster presentation at the ICCoS (Identification and Control of
Complex Systems) meeting, 26 April 2001, Brussels, Belgium.

Leontaritis, I.J. and S.A. Billings (1985). Input-output parametric models for non-linear
systems. Part I – Deterministic non-linear systems; Part II – Stochastic non-linear
systems. Int. J. Control, Vol. 41, No. 2, pp. 303-344.

Liu, G.P., V. Kadirkamanathan and S.A. Billings (1998). Predictive control for non-linear
systems using neural networks. Int. J. Control, Vol. 71, No. 6, pp. 1119-1132.

Narendra, K.S. and K. Parthasarathy (1990). Identification and control of dynamical
systems using neural networks. IEEE Transactions on Neural Networks, Vol. 1, pp. 4-27.

Piché, S., B. Sayyar-Rodsari, D. Johnson and M. Gerules (2000). Nonlinear Model Predictive
Control using Neural Networks. IEEE Control Systems Magazine, Vol. 20, No. 3, pp.
53-62.

Qin, S.J. and T.A. Badgwell (1997). An overview of industrial model predictive control
technology. In Chemical Process Control-AIChE Symposium Series (J. Kantor, C.
Garcia and B. Carnahan, Eds. New York: AIChE), pp. 232-256.

Qin, S.J. and T.A. Badgwell (1998). An overview of nonlinear model predictive control
applications. In Proc. Int. Symposium Nonlinear Model Predictive Control, Ascona,
Switzerland.

Rawlings, J.B., E.S. Meadows and K.R. Muske (1994). Nonlinear model predictive control: A
tutorial and survey. ADCHEM’94 Proceedings, Kyoto, Japan.

Rawlings, J.B. (2000). Tutorial Overview of Model Predictive Control. IEEE Control Systems
Magazine, Vol. 20, No. 3, pp. 38-52.

Richalet, J.A., A. Rault, J.L. Testud and J. Papon (1978). Model predictive heuristic
control: applications to an industrial process. Automatica, Vol. 14, pp. 413-428.

Soeterboek, A.R.M., H.B. Verbruggen, P.P.J. Van den Bosch and M. Butler (1990). On the
unification of predictive control algorithms. Proc. of the 29th IEEE Conference on
Decision and Control.

Sommer, S. (1994). Model-based predictive control methods based on non-linear and bilinear
parametric system descriptions. In Advances in Model-Based Predictive Control (D. Clarke,
Ed.), Oxford University Press, Oxford, pp. 192-204.

Tan, Y. and R. De Keyser (1994). Neural Network Based Adaptive Predictive Control. In
D. Clarke (Ed.): Advances in Model-Based Predictive Control, Oxford University Press,
pp. 358-369.

