
Genetic Controller: A New Approach to Perform State Feedback Control of Generic Non-Linear Systems


César Daltoé Berci, Celso Pascoli Bottura

Abstract— Control theory has benefited from the enormous development of digital computing and computational intelligence to solve relatively complex problems by means of computational methods. In the control theory of nonlinear systems this relationship is even clearer: because of the complexity of these systems, it becomes virtually impossible to treat most of them with traditional methods. Within this context, this paper proposes the design of a controller, based on genetic algorithms, which, through a particular approach to the control problem, applies an evolutionary process to select the correct input to be applied to the system at every sample time. Thus a controller that is quite simple and generally applicable to a wide range of nonlinear dynamic systems is obtained.

I. INTRODUCTION

Control theory started approximately in the year 1868, with the beginning of the use of differential equations in dynamic systems analysis, which provided a significant advance for studying these systems and marks the beginning of the historical period known as primitive control.

Around the first quarter of the twentieth century, Harold Stephen Black proposed the concept of negative feedback closed loop control, introducing a new milestone in the history of control. The historical period that begins with the emergence of Black's concept is known as classic control. In this theory the system is represented in the frequency domain by means of transfer functions. This approach allows the design of controllers based on specifications over desired characteristics of the frequency response, and for many decades it has been used successfully in controller design. Nowadays this theory is still used and has great value in the control area; however, it has some significant limitations, motivating the search for new solutions.

Around the year 1960, a new period called modern control arose, in which many of the limitations of the frequency domain approach were overcome, however, at the cost of greater computational complexity of the required calculations, which now involved vectors and matrices. That period started due, among others, to three works of Rudolf E. Kalman [17], which successfully applied Lyapunov theory to control and contributed, among other things, to the development of state feedback control, optimal filtering, realization and state estimation. At this point dynamic systems are no longer represented in the frequency domain, but in the state space, in the time domain. This representation, although more advantageous in terms of control, requires greater storage and data processing capacities, and was supported by the emergence of digital computing.

Modern control, despite overcoming many of the limitations of classical control, is still restricted to cases in which the dynamic model of the system to be controlled is well defined. This, among other limitations, led to the emergence of the theory known as post-modern control [9], [10], [8], [11], in which the theory of robust control [1], [20], [23], [7], [16] is embedded.

At the end of the last century, control theory received another major contribution, the rediscovery of the LMI (Linear Matrix Inequalities) concept [5], which modified the state of the art of this theory by providing computational solutions for many problems that were previously unsolvable, among other factors, for not presenting an analytic solution.

Alongside the development of control theory runs the history of digital computing, which has experienced great advances in recent decades. In the middle of the last century, the phenomenon of semiconduction, which inaugurated the silicon era and altered the course of computing history, was discovered. Until then, digital computers were large machines whose main elements were vacuum tubes, many orders of magnitude larger and less efficient than transistors, the basic element of modern digital computation. The first digital computers were the exclusivity of a few universities and research centers around the world, which restricted their use to a select and very small number of people. The silicon era, with the advent of the transistor, provided a rapid democratization and advancement of digital computation, making it feasible to implement the most modern control theories; therefore, a large number of people, even in remote regions of the world, now have sufficient computational capacity to implement state space models and computationally expensive control techniques.

Older than the discovery of the transistor is the area known as computational intelligence, which has long been studied and has generated numerous theories and tools of great importance in various fields. With the advancement of digital computing, new theories on computational intelligence were created, while others gained strength.

Computational intelligence became an important numerical tool, used in many areas of human knowledge, among which control is also included. Its use in this area has been explored by several researchers in countless applications, as in the solution of LMIs and algebraic Riccati equations in [21], among many other possible applications, some of which would be unthinkable without the use of computational intelligence. This scenario creates the conditions for the appearance of a new approach to control theory, intelligent control, which essentially deals with the use
of computational intelligence in controller design and in control itself.

Clearly the relationship between the developments of digital computing, computational intelligence and control theory, similarly to what occurs with other areas of human knowledge that interact during their evolution, created a sequence of interconnected events forming a stochastic evolutionary process. In control theory this process culminated in the theories known today, and is often conditioned by expectations generated by the state of the art, which generally tries to combine aspects of old solutions from different areas to form hybrid solutions that exploit the full available capabilities.

During this evolutionary process, some concepts considered outdated, inefficient or even incorrect can be forgotten and disregarded. A superficial analysis of the evolutionary process of control theory, with respect to its emergence in the current state of the art, leads to the question: what would the state of the art in control theory be if some historical event had not occurred when it really did; for example, if humanity had developed the model of neural computation and not the digital one? The answer to that question is certainly unknown, but a reflection about it leads us to believe in the existence of many possibilities still not exploited.

This historical scenario, with the evolutionary process of control theory inserted in it, created a motivation for the application of the relatively new techniques of computational intelligence to the control problem by less conventional means, as is done in [2], [3], where intelligent state observers based on geometric observability conditions [15], [4], [13], [14], [18], [19], [22] are developed, through a new paradigm in the state estimation field, created from a relatively old theory of control but supported by the new computational intelligence capacity.

Bearing in mind the same motivation, this paper proposes an alternative, and considerably simpler, solution to the control problem, seeing it in its most natural and intuitive formulation. In spite of the use of state feedback, which is relatively modern, the proposed controller uses only one simple and direct premise, which here defines the control problem: find a specific point located in the input space that, when applied to the system, leads the output to the desired condition.

The controller proposed here uses the large computational power now available to perform a massive computation that, guided by a heuristic method of computational intelligence, finds this point in the input space, thereby solving the control problem stated here.

Among the various computational tools available and capable of accomplishing the task proposed here, the genetic algorithm was chosen to implement the controller. Among other factors leading to this choice, the genetic algorithm represents a widespread and widely accepted computational tool in the scientific community, and it is essentially a blind search method, which meets the requirements of the proposed problem.

II. GENETIC ALGORITHMS

Genetic algorithms are search and optimization methods based on biological evolutionary processes. These algorithms were developed with basically two targets:

• To find a formal and rigorous representation for the biological evolutionary processes involved in the evolution of human beings and other species, enabling a better understanding of living organisms and their origins.
• To use computationally the existing capacities and mechanisms of natural evolution, developing evolving systems capable of solving complex problems based on the evolution of simple agents.

These algorithms use a population of candidate solutions to the problem, where each solution is represented by an individual in the population. These individuals are subject to interactions with the environment to which they belong. The process of evolution happens in a continuous way, and simultaneously with respect to all individuals of the population, characterizing a parallel search strategy.

As in natural evolutionary processes, genetic algorithms are subject to random variables. The interactions of these variables generate an essentially stochastic process. This characteristic of genetic algorithms contrasts with procedural search and optimization methods, which are for the most part deterministic.

This is not the only difference between the evolutionary paradigm and the more traditional search and optimization methods. Among the most important features of genetic algorithms that contrast with procedural methods, the following can be cited:

• Genetic algorithms work on an encoding of the parameters, as individuals and populations, and not on the parameters themselves. This method aims to develop the population rather than to optimize a cost function, which is achieved through an appropriate choice of codification.
• Evolutionary methods are based on the adaptation of the individuals as a result of their interactions with the environment, and not on derivatives or other kinds of information about the problem. Due to this characteristic genetic algorithms are considered blind search algorithms.
• The selection and development processes are stochastic, so genetic algorithms are not characterized by a deterministic process as most of the procedural methods are.

The stochastic characteristic of genetic algorithms, combined with their search and exploration ability, makes this method an important tool for global optimization, given the greater likelihood of a genetic algorithm finding a global optimum, when compared to more traditional methods based on gradient descent.

In [12], the author introduces the basic functional structure of a genetic algorithm, explaining each of its steps. This algorithm can be described by the following sequence of steps:

• An initial population formed by a random set of individuals is generated, where each individual represents
a possible solution to the problem.
• The evolutionary process is started: every individual of the population is evaluated and receives a grade, called fitness, which reflects the quality of the solution it represents.
• Members of the population are selected based on the grade they received in the previous step. Part of the individuals is maintained for the next generation, while the least adapted individuals are not selected and disappear from the population, in a process that tends to extinguish adverse changes.
• Pairs for reproduction are chosen, also based on their fitness. These individuals will have descendants in the new generation.
• The new population is subjected to random mutations, aiming to introduce variability in the individuals. These changes will be evaluated in the next generation, where features favorable to the solution of the problem will be privileged.
• The evolutionary process continues to create new generations until a given criterion is met and a satisfactory solution is found.

This algorithm, although very simple from the biological viewpoint, represents a powerful, adaptive and robust search engine [6] with great computational power.

III. THE CONTROL PROBLEM

First we will define a particular representation for the control problem.

Definition 1 (The Control Problem): Given a time variant variable y(t), belonging to a space Y ⊆ R^p, it should stay in a particular subspace R belonging to Y, such that (y(t) − y0)^T (y(t) − y0) → 0 for every y0 belonging to R, having, to accomplish this task, only the ability to change the value of a second variable u(t), belonging to U, that is related in some way to y.

This basic concept of control is practically applicable to all known methods of control, which use different approaches but do not change this main objective. The procedure adopted here aims to find directly a solution to the control problem as described in Definition 1, supported by the state representation of the dynamic system that will be controlled.

To accomplish this, consider a non-linear system:

ẋ = f(x, u)
y = h(x)                                                    (1)

where x ∈ R^n is the state of the system, u ∈ R^m is the input of the system, y ∈ R^p is the output, f : R^n → R^n is a vector field where fi, for all i ∈ {1...n}, is a smooth and Lipschitz function, and h : R^n → R^p is a mapping where hi, for every i ∈ {1...p}, is a smooth and Lipschitz function. Here the sets of functions fi and hi give the relationship between u(t), the variable whose value we are able to change, and y(t), the variable that will be controlled.

Now let us consider a subspace R that contains only one time variant point r(t), here called the reference. Then the control problem stated here is reduced to finding the value of u(t) at time t that, applied to system 1, takes the output y to the interior of R, which is accomplished by minimizing the distance between y(t) and R.

Clearly this is a classical optimization problem, and it can be solved by a genetic algorithm or another heuristic optimization tool.

IV. THE GENETIC CONTROLLER

Let us consider the system given by equation 1, which is described by a continuous time model, and a population of individuals defined in a real space belonging to R^m, where m is the number of inputs of the system.

Assuming that the state of the system, x(t), is completely known, then, given an input u at time t, we can easily find the future output of the system at time t + α, using a numerical method like trapezoidal integration to calculate the state x(t + α) and solving the equation y(t + α) = h(x(t + α)). If a discrete time model is chosen to represent the system, the same process can be used, but now α should be an integer and not a real value as in the continuous model.

Given these assumptions, the first step to design the genetic controller is to choose an appropriate fitness function to evaluate the individuals of the population. Here this function is a measure of the distance between the system output y(t) and the subspace R(t) at time t. Thus the fitness function will be time variant, as the output and reference signals are. Equation 2 shows the fitness function proposed here at the k-th iteration step:

fitness_i(k) = exp( −τ (y_i(k) − r(t_k))^T (y_i(k) − r(t_k)) )                    (2)

where y_i is the future output corresponding to an individual i of the population, τ is a parameter used to adjust the sensitivity of the fitness function, in order to control the diversity preservation of the population, and r(t_k) is the reference signal. Here we should note that r(t_k) is not a discretization of the signal r(t), but this signal at the time instant t_k, where t_k is the real time at iteration k.

Now, applying the evolutionary process based on the fitness function 2, the individual of the population that represents the system output closest to the reference signal will be obtained.

Since the individuals of the population are defined in the space R^m, like the system input, and assuming that each individual represents a system input, we can calculate a future output for each individual through the process discussed above. This output is used in equation 2 to calculate the fitness of the individual.

Assuming that the evolutionary process requires a time interval of β seconds for its computational processing, then we will find the input that will be applied to the system at time t + β, that is, the time instant when the solution of the genetic algorithm will be available. This way, it is necessary, before starting the evolutionary process, to calculate the
system state at time t + β, and then to calculate the future output for an input applied to the system at time t + β.

Now a sample period for the controller needs to be chosen, which obviously needs to be greater than β, otherwise part of the evolutionary process may be lost. In this work we choose a sample period α = 2β, which clearly optimizes the control efficiency from the computational viewpoint. This way, when an input is applied to the system at time t + aβ (a an integer), the state x(t + (a + 1)β) will already be known, and it will be necessary to initiate the evolutionary process at time t + aβ. Any real value greater than 2β may be chosen for the sample period, also giving good results.
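To make the scheme above concrete, the following is a minimal sketch of one possible implementation of the genetic controller loop. It assumes full state knowledge, uses the explicit (Heun) variant of trapezoidal integration, a basic real-coded genetic algorithm (roulette selection, blend crossover, Gaussian mutation and elitism) with the fitness of equation 2, and, for brevity, folds the β-ahead state prediction and the α = 2β scheduling into a single prediction horizon. All function names, parameter values and input bounds are illustrative assumptions, not specifications taken from this paper.

```python
# Hedged sketch of the genetic controller loop (illustrative names and settings).
import numpy as np


def heun_step(f, x, u, dt):
    """One explicit trapezoidal (Heun) step of x' = f(x, u), with u held constant."""
    k1 = f(x, u)
    k2 = f(x + dt * k1, u)
    return x + 0.5 * dt * (k1 + k2)


def predict_output(f, h, x, u, horizon, dt):
    """Predicted output 'horizon' seconds ahead when input u is applied and held."""
    for _ in range(max(1, int(round(horizon / dt)))):
        x = heun_step(f, x, u, dt)
    return np.atleast_1d(h(x))


def genetic_input(f, h, x, r, m, horizon, dt, tau=5.0, n_pop=40, n_gen=30,
                  u_min=-10.0, u_max=10.0, sigma=0.5, rng=None):
    """Evolve a population of candidate inputs in R^m and return the fittest one (eq. 2)."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.uniform(u_min, u_max, size=(n_pop, m))

    def fitness(population):
        err = np.array([predict_output(f, h, x, u, horizon, dt) - r for u in population])
        return np.exp(-tau * np.sum(err ** 2, axis=1))           # fitness of equation (2)

    for _ in range(n_gen):
        fit = fitness(pop)
        best = pop[np.argmax(fit)].copy()                        # elitism: keep the best
        p = fit + 1e-12
        p /= p.sum()                                             # roulette-wheel selection
        parents = pop[rng.choice(n_pop, size=(n_pop - 1, 2), p=p)]
        w = rng.uniform(size=(n_pop - 1, 1))
        children = w * parents[:, 0, :] + (1 - w) * parents[:, 1, :]   # blend crossover
        children += rng.normal(0.0, sigma, size=children.shape)        # Gaussian mutation
        pop = np.clip(np.vstack([best[None, :], children]), u_min, u_max)
    return pop[np.argmax(fitness(pop))]


def run_genetic_controller(f, h, x0, reference, m, t_final, alpha, dt=0.01, **ga_kwargs):
    """Closed loop: every 'alpha' seconds the GA selects the input to be held next."""
    x, t, log = np.asarray(x0, dtype=float), 0.0, []
    while t < t_final:
        r = np.atleast_1d(reference(t + alpha))
        u = genetic_input(f, h, x, r, m, alpha, dt, **ga_kwargs)
        for _ in range(int(round(alpha / dt))):                  # hold u for one sample period
            x = heun_step(f, x, u, dt)
            t += dt
        log.append((t, u.copy(), np.atleast_1d(h(x)).copy()))
    return log
```

Note that any other heuristic optimizer could replace genetic_input without changing the loop; this is the sense in which the controller itself has no dynamics of its own.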

V. A SIMPLE EXAMPLE

Let us consider the non-linear dynamic system:

ẋ1 = x1^2 − x2 − u
ẋ2 = x1 x2 − 2 x1^2
y = x1                                                    (3)

which is not completely controllable, as will become clear in the course of this example.

Now let us consider a simulation of 10 seconds with a sample period of 0.1 seconds. The reference chosen is a step with amplitude -1.5. Figure 1 shows the input calculated by the genetic controller for this simulation.

Fig. 1. System input for a reference signal with amplitude -1.5

The obtained system output is shown in Figure 2.

Fig. 2. System output for a reference signal with amplitude -1.5

We see from Figures 1 and 2 that, although the system input u needs a certain accommodation period to stabilize (about 2 seconds), the output quickly reaches the desired level, in approximately two sample periods (as can be seen in Figure 3). This result can still be substantially improved, coming very close to an ideal controller, since it is always possible to increase the efficiency of the genetic algorithm, which only requires more computational capacity.

Fig. 3. Detail of the system output at the beginning of the simulation

The previous result is one of the major features of the proposed controller. Controllers known today can be adjusted to obtain a faster response; however, this would generate an increase in the output overshoot and also some oscillation during the accommodation period. This happens because these controllers in general work by adjusting the poles and zeros of the dynamic system to be controlled, thus they are subject to dynamic properties and also have dynamic characteristics of their own.

Unlike these controllers, the genetic controller proposed here is not a dynamic system, nor was it designed based on the dynamic properties of the system. As the only objective of this controller is to find, by an optimization process, the input that takes the output to the desired condition, the overall result, the difference between the system output and the reference, will be smaller the better the optimization process is. That way, what limits the quality of the controller is only the computational capacity available, which is certainly limited but can be extended indefinitely, to the point of being considered relatively infinite, making the proposed controller as close to the ideal as we want.
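As a concrete illustration, the simulation of this section could be reproduced with the sketch given at the end of Section IV (whose definitions are assumed to be in scope), by wiring in the dynamics of system 3 and the step reference of amplitude -1.5. The initial state, input bounds and GA settings below are illustrative assumptions, not values given in the paper, and the resulting trajectories will depend on them.

```python
# Hedged usage sketch: system (3) with the run_genetic_controller sketch above.
import numpy as np


def f(x, u):
    """State equations of system (3); u is a length-1 array (single input)."""
    x1, x2 = x
    return np.array([x1 ** 2 - x2 - u[0],
                     x1 * x2 - 2.0 * x1 ** 2])


def h(x):
    """Output equation of system (3): y = x1."""
    return np.array([x[0]])


reference = lambda t: np.array([-1.5])        # step reference of amplitude -1.5

log = run_genetic_controller(f, h, x0=[0.0, 0.0], reference=reference, m=1,
                             t_final=10.0, alpha=0.1, dt=0.01,
                             u_min=-20.0, u_max=20.0)
t_end, u_end, y_end = log[-1]
print(t_end, u_end, y_end)                    # final time, applied input and output
```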

Consider now a situation where we cannot find an input u such that the system output follows a particular reference, which occurs when the system is not controllable. For this case, we will try to design the proposed controller to make the system given by equation 3 follow a reference signal given by a unit step. Again a sample period of 0.1 seconds and a simulation time of 4 seconds are considered. Figure 4 shows the system output for this simulation.

Fig. 4. System output for a reference signal with amplitude 1

It is clear in Figure 4 that the system output loses stability after t = 3.7 seconds. Figure 5 shows the system input calculated by the proposed controller for this simulation.

Fig. 5. System input for a reference signal with amplitude 1

Figure 5 shows that the system input needs to increase indefinitely to keep the output stable, and that the proposed controller failed to accomplish this because the required input moved far away from the region covered by the population. If we consider an initial population in a larger region, or start it closer to the last solution found, the controller will maintain the output stable until it loses computational representation or, in a practical case, until it reaches a saturation.

Based on these results, and on the definition of the proposed controller, it is clear that it will be able to control any dynamic system through state feedback, whenever the system is controllable and observable and its dynamics is well known, with no other kind of restriction on the system and no need to apply any kind of linearization or state transformation, making it, in this way, the most generic possible controller.

VI. CONCLUSIONS

The use of massive computation and computational intelligence techniques to obtain solutions to various control problems is an already widespread approach. The proposed genetic controller is inserted in this scenario from a new perspective, creating a very innovative solution to the control problem.

The definition of the control problem considered here refers to the control problem in the most generic form it can take, making no mention of the dynamic characteristics of the system one wants to control. In this way, controllers designed based on this definition will be as generic as possible and can be applied regardless of the complexity of the system.

The numerical results obtained confirm another very important result: the proposed controller has its efficiency tied to the computational capacity used in its implementation, and not to the dynamic characteristics of the system or of the controller itself. That way one can get a controller as close to the ideal as one wants. This result is of great importance for the control area.

Given the simplicity, generality and great efficiency presented by the proposed controller, its great relevance to control theory and practice is clear. It represents a viable alternative especially when it is necessary to control relatively complex systems with great precision.
REFERENCES
[1] Stefano Battilotti and Alberto De Santis. Robust output feed-
back control of nonlinear stochastic systems using neural networks.
IEEE Transactions on Neural Networks, 14:103–116, 2003.
[2] César Daltoé Berci. Observadores Inteligentes de Estado: Propostas.
Master's thesis, LCSI/FEEC/UNICAMP, Campinas, Brazil, 2008.
[3] César Daltoé Berci and Celso Pascoli Bottura. Non model based neural
intelligent observer for nonlinear systems. Submitted Article, 2009.
[4] G. Besançon. Nonlinear Observers and Applications. Springer, 2007.
[5] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix
Inequalities in System and Control Theory. SIAM, Philadelphia, 1994.
[6] André Luiz Brun. Algoritmos Genéticos. Apostila EPAC - Encontro
Paranaense de Computação, Brasil, 2007.
[7] J. J. Cruz. Controle Robusto Multivariável. Editora da Universidade
de São Paulo, São Paulo, Brasil, 1996.
[8] J. Doyle and G. Stein. Multivariable feedback design: Concepts for a
classical/modern synthesis. IEEE Transactions on Automatic Control, 26:4–16, 1981.
[9] J. C. Doyle. Analysis of feedback systems with structured uncertain-
ties. Proc. IEE, Pt. D, 129:242–250, 1982.
[10] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8), 1989.
[11] J. C. Geromel, P. L. D. Peres, and J. Bernussou. On a convex parameter
space method for linear control design of uncertain systems. SIAM J.
Control and Optimization, 29(2):381–402, 1991.
[12] David E. Goldberg. Genetic Algorithms in Search, Optimization &
Machine Learning. Addison-Wesley, 1989.
[13] George W. Haynes. Controllability of Nonlinear Systems With Linearly
Occurring Controls. National Aeronautics and Space Administration
- Washington, D.C., 1969.
[14] J. K. Hedrick and A. Girard. Controllability and observability of
nonlinear systems. Control of Nonlinear Dynamic Systems: Theory
and Applications, 2005.
[15] Ron M. Hirschorn and Andrew D. Lewis. Geometric local con-
trollability: Second-order conditions. Transactions of The American
Mathematical Society, 1997.
[16] Petros A. Ioannou. Robust Adaptive Control. Prentice Hall, 1996.
[17] Rudolf Emil Kalman. A new approach to linear filtering and prediction
problems. Transactions of the ASME, Journal of Basic Engineering,
82(Series D):35–45, 1960.
[18] Jerzy Klamka. Controllability of nonlinear discrete systems. Int. J.
Appl. Math. Comput. Sci, 12:179–180, 2002.
[19] Andrew D. Lewis. Nonlinear control theory. Transactions Of The
American Mathematical Society, 2002.
[20] P. L. D. Peres. Sur la Robustesse des Systèmes Linéaires: Approche par Programmation Linéaire. PhD thesis, LAAS, Toulouse, France, 1989.
[21] Annabell Del Real Tamariz. Modelagem Computacional de Dados
e Controle Inteligente no Espaço de Estado. PhD thesis, LCSI/FEEC/UNICAMP, Campinas, Brazil, 2005.
[22] Sabiha Amin Wadoo. Feedback Control and Nonlinear Controllability
of Nonholonomic Systems. PhD thesis, Faculty of the Virginia
Polytechnic Institute and State University, 2003.
[23] Kemin Zhou, John C. Doyle, and Keith Glover. Robust and optimal
control. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996.
