Vendel-Szilamér Varró
1 INTRODUCTION
1.1 Objectives and specifications
3 LINEAR CONTROL
3.1 Linear model
3.2 State feedback control with pole placement
3.3 State feedback control with Linear Quadratic Regulator
3.4 Observer design
3.5 Simulation results
4 NONLINEAR CONTROL
4.1 Takagi-Sugeno fuzzy systems
4.2 Takagi-Sugeno fuzzy model of the robot manipulator
4.3 Parallel distributed compensation controller design
4.4 Performance guarantees
4.5 Observer design
4.6 Simulation results
Appendices
A Nonlinear model
C Nonlinear control
List of Figures
C.2 Models used in simulation of the TS fuzzy system with observer
List of Tables
2.1 Notations
2.2 Parameter values
Chapter 1
INTRODUCTION
The purpose of this thesis is to present and compare the effects of different control strategies for a generic, two-degree-of-freedom robotic arm. The presented control strategies include two linear ones, i.e. pole placement and LQR, and a nonlinear one, i.e. Takagi-Sugeno fuzzy control. The design of the control strategies presented in this thesis requires a model of the robot manipulator. In order to obtain the linear model of the manipulator, the nonlinear model, which is derived using the Euler-Lagrange approach, is linearized. Observers are designed for both the linear and the nonlinear case. Simulations of the designed controllers are performed on Matlab and Simulink models, and the obtained results are analyzed and compared to each other. A simulator is built using the same parameters as the ones used in the nonlinear model. Finally, simulations are performed in order to compare the results and validate the synthesis.
Robot manipulators, also referred to as robotic arms, perform operations that require, among others, high precision, high speed, continuous work, manipulation of heavy payloads or working in hazardous environments [2].
Robotic arms have a wide applicability, since they resemble the human arm in many functional aspects. They are widely used in industrial manufacturing, performing tasks such as pick and place, assembly operations, drilling, surface finishing, palletizing or welding [3]. The physical assembly is made of rigid links connected by mobile joints. The joints connecting two links may be revolute, prismatic, spherical or cylindrical [2]. The end effector is usually a specialized tool, such as a spray nozzle, a drill or a gripper. Regarding the controller implemented, usually position control is involved [3]. The most commonly used position transducers (sensors) are encoders (rotary, longitudinal), limit switches and inductive sensors. The actuation is usually done by AC or DC motors with a speed reducer and speed control, or in some cases torque control [3].
The main chapters of the thesis follow the same pattern. Each chapter first presents the theoretical notions; based on these, the corresponding model or controller is implemented. Each chapter concludes with the results of the simulations performed on the analytical model. The goal of the simulations is to verify whether the applied theory is correct. The Testing and validation chapter gathers the analytical achievements and simulates them on a model that is a high-fidelity copy of the actual system, implemented in Simscape.
In Chapter 2 we deduce and verify the dynamic model of the manipulator. Chapter 3 deals with the linearization of the model obtained in the previous chapter, as well as with the implementation of the linear controllers and observers for the system. The obtained controllers and observers are verified by performing simulations. In Chapter 4 we implement the nonlinear controller and observer for the robot arm; simulations are performed for verification. Chapter 5 contains the description and implementation of the robot arm in Simscape. The controllers and observers obtained in the former two chapters are verified on this model by performing simulations with the same parameters.
Chapter 2
MODELING OF THE ROBOT MANIPULATOR
In this chapter we obtain the Euler-Lagrange nonlinear dynamic model of the robot
arm. The model is then brought to the generic state-space form by compensating the affine
term. Finally, the behavior of the model is simulated for different values of the joint angle
positions.
of mass (centroids), length of links, gravitational force and moment of inertia. In order to control the position of the robot manipulator, we must know its dynamic properties, based on which we calculate the right amount of torque (force) to be applied. There are two widespread ways of obtaining the analytical description of the system, namely the Newton-Euler approach and the Euler-Lagrange approach [4]. The two approaches lead to equivalent analytical descriptions of the system [5]. The main reason why in this thesis we consider the Lagrangian (energy based) approach over the Newtonian (force/torque balance based) one is its simplicity of formulation. This simplicity derives from the fundamental difference between the two: the Newtonian approach considers all internal and external forces, represented by vectors, acting on the system in a specific coordinate frame, while the Lagrangian approach relies on the least action principle in a generalized coordinate frame, i.e. between any two points the system follows a path that minimizes the action [5]. The Euler-Lagrange model considers the multi-part robot body as a whole, thus constraint forces (internal, conservative, non-dissipative forces) between the parts are eliminated by default [4]. The Euler-Lagrange approach also provides compact, closed-form equations representing an explicit input-output relation [4]. The Newton-Euler approach may also be brought to this form; however, it requires recursive evaluation of separate equations and elimination of internal forces by substitution.
The Lagrangian L(q, q̇) is defined as the difference between the kinetic energy K(q, q̇) and the potential energy U(q) of the system with respect to the generalized coordinates and velocities (q, q̇) [4]:

\[ L(q, \dot{q}) = K(q, \dot{q}) - U(q) \tag{2.1} \]

The Euler-Lagrange equation is then:

\[ \frac{d}{dt}\frac{\partial L(q, \dot{q})}{\partial \dot{q}} - \frac{\partial L(q, \dot{q})}{\partial q} = \tau \tag{2.2} \]
where the joint torques τ are a set of proper inputs separated from constraint forces [6].
In this thesis we will consider the robot manipulator schematically represented in Figure 2.2.
Figure 2.2: Schematic representation of a two degree of freedom robot manipulator
Description Notation
Length of link 1 l1
Length of link 2 l2
Position of center of mass of L1 lc1
Position of center of mass of L2 lc2
Mass of L1 m1
Mass of L2 m2
Moment of inertia of L1 I1
Moment of inertia of L2 I2
Gravitational acceleration g
Angle of L1 q1
Angle of L2 q2
Having two degrees of freedom, we have two generalized coordinates [7], chosen as the joint positions (angles of the links) q1 and q2. Our goal is to obtain the dynamic model of the manipulator. We verify whether the motion is the desired one by measuring the joint positions. We manipulate the links' movement by changing the two input joint torques τ1 and τ2.
The first step of model building is obtaining the forward kinematic model of the ma-
nipulator from Figure 2.2. The velocity of a point is defined as the distance traveled by the point
over a period of time [8]:
\[ v = \frac{dS}{dt} \]

Since the links of the manipulator are rigid bodies rotating around the joints, we are studying a structure with circular motion. Each point of a link has a constant radius r relative to the axis of rotation, thus the distance traveled by a point is given as:

\[ S = rq \]

The distance traveled by a point along a circular path over a period of time is called the tangential velocity, since its direction is tangent to the circular path of the point, and it is defined as [8]:

\[ v = \frac{dS}{dt} = r\frac{dq}{dt} = r\omega, \qquad \omega = \frac{dq}{dt} \]
The angular velocity ω is equal at every point of a link (it does not depend on the radius) and is given by the change of the angle q in time. The tangential velocity of each point, however, also depends on its distance from the axis of rotation. If the axis of rotation of a link is not a fixed one, the velocity of the axis of rotation is also added to the velocity of each point of the link, as in the case of the second link L2. Another important remark is that velocity is a vector quantity (unlike speed, which is a scalar one), having both magnitude and direction. Because of this, it may be decomposed as a vectorial sum of velocities along the axes:

\[ v = v_x + v_y \]
The position (x,y) of the centroids of the two links, as well as their tangential velocities
are obtained as follows:
Position and velocity of the first link's (L1) centroid along the X- and Y-axis:

\[ \sin(q_1) = \frac{-x_1}{l_{c1}}, \qquad \cos(q_1) = \frac{y_1}{l_{c1}} \]
\[ x_1 = -l_{c1}\sin(q_1), \qquad y_1 = l_{c1}\cos(q_1) \]
\[ v_{x1} = \dot{x}_1 = -l_{c1}\cos(q_1)\,\dot{q}_1, \qquad v_{y1} = \dot{y}_1 = -l_{c1}\sin(q_1)\,\dot{q}_1 \]

Position and velocity of the second link's (L2) centroid along the X- and Y-axis, measured from the end of the first link:

\[ \sin(q_1+q_2) = \frac{-x_2}{l_{c2}}, \qquad \cos(q_1+q_2) = \frac{y_2}{l_{c2}} \]
\[ x_2 = -l_{c2}\sin(q_1+q_2), \qquad y_2 = l_{c2}\cos(q_1+q_2) \]
\[ v_{x2} = \dot{x}_2 = -l_{c2}\cos(q_1+q_2)(\dot{q}_1+\dot{q}_2), \qquad v_{y2} = \dot{y}_2 = -l_{c2}\sin(q_1+q_2)(\dot{q}_1+\dot{q}_2) \]

Position and velocity of the second link's centroid along the X- and Y-axis, measured from the base:

\[ x_2 = -l_1\sin(q_1) - l_{c2}\sin(q_1+q_2), \qquad y_2 = l_1\cos(q_1) + l_{c2}\cos(q_1+q_2) \]
\[ v_{x2} = -l_1\cos(q_1)\,\dot{q}_1 - l_{c2}\cos(q_1+q_2)(\dot{q}_1+\dot{q}_2), \qquad v_{y2} = -l_1\sin(q_1)\,\dot{q}_1 - l_{c2}\sin(q_1+q_2)(\dot{q}_1+\dot{q}_2) \]
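As a quick numerical sanity check of these kinematic relations, the analytic velocity of the L2 centroid can be compared against a finite-difference derivative of its position. This is only a sketch: the link lengths below are arbitrary placeholders, not the parameter values of Table 2.2.

```python
import numpy as np

l1, lc1, lc2 = 1.0, 0.5, 0.4  # arbitrary placeholder lengths

def pos2(q1, q2):
    # centroid of link 2, measured from the base
    x = -l1*np.sin(q1) - lc2*np.sin(q1 + q2)
    y =  l1*np.cos(q1) + lc2*np.cos(q1 + q2)
    return np.array([x, y])

def vel2(q1, q2, dq1, dq2):
    # analytic tangential velocity components of the link-2 centroid
    vx = -l1*np.cos(q1)*dq1 - lc2*np.cos(q1 + q2)*(dq1 + dq2)
    vy = -l1*np.sin(q1)*dq1 - lc2*np.sin(q1 + q2)*(dq1 + dq2)
    return np.array([vx, vy])

# finite-difference check along the trajectory q1(t) = t, q2(t) = 2t
h, t = 1e-6, 0.7
num = (pos2(t + h, 2*(t + h)) - pos2(t - h, 2*(t - h))) / (2*h)
ana = vel2(t, 2*t, 1.0, 2.0)  # dq1/dt = 1, dq2/dt = 2
print(np.allclose(num, ana, atol=1e-6))  # True
```

The agreement confirms that the velocity expressions are indeed the time derivatives of the position expressions.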
Starting from the forward kinematic model obtained above, we compute the kinetic
and potential energies needed in (2.1). The total kinetic energy (Ktotal ) of a link is defined as
the sum of the translational (Ktrans ) and rotational (Krot ) kinetic energies:
\[ K_{trans} = \frac{1}{2}mv^2, \qquad K_{rot} = \frac{1}{2}I\omega^2, \qquad K_{total} = K_{trans} + K_{rot} = \frac{1}{2}(mv^2 + I\omega^2) \]

\[ K_{1trans}(q, \dot{q}) = \frac{1}{2}m_1(v_{x1}^2 + v_{y1}^2) = \frac{1}{2}m_1 l_{c1}^2\left(\sin^2(q_1) + \cos^2(q_1)\right)\dot{q}_1^2 = \frac{1}{2}m_1 l_{c1}^2\dot{q}_1^2 \]

\[ K_{1rot}(q, \dot{q}) = \frac{1}{2}I_1\dot{q}_1^2 \]

\[ K_{1total}(q, \dot{q}) = \frac{1}{2}(m_1 l_{c1}^2 + I_1)\dot{q}_1^2 \]
\[ K_{2trans}(q, \dot{q}) = \frac{1}{2}m_2(v_{x2}^2 + v_{y2}^2) \tag{2.3} \]

where

\[ v_{x2}^2 = l_1^2\cos^2(q_1)\dot{q}_1^2 + l_{c2}^2\cos^2(q_1+q_2)(\dot{q}_1+\dot{q}_2)^2 + 2l_1\cos(q_1)\dot{q}_1\, l_{c2}\cos(q_1+q_2)(\dot{q}_1+\dot{q}_2) \tag{2.4} \]

\[ v_{y2}^2 = l_1^2\sin^2(q_1)\dot{q}_1^2 + l_{c2}^2\sin^2(q_1+q_2)(\dot{q}_1+\dot{q}_2)^2 + 2l_1\sin(q_1)\dot{q}_1\, l_{c2}\sin(q_1+q_2)(\dot{q}_1+\dot{q}_2) \tag{2.5} \]
We simplify (2.4) and (2.5) using the identities (2.6) and (2.7):

\[ \cos(q_1+q_2)\cos(q_1) = \frac{1}{2}\left[\cos(2q_1+q_2) + \cos(q_2)\right] \tag{2.6} \]

\[ \sin(q_1+q_2)\sin(q_1) = -\frac{1}{2}\left[\cos(2q_1+q_2) - \cos(q_2)\right] \tag{2.7} \]

\[ K_{2trans}(q, \dot{q}) = \frac{1}{2}m_2\left[l_1^2\dot{q}_1^2 + l_{c2}^2(\dot{q}_1+\dot{q}_2)^2 + 2l_1 l_{c2}\cos(q_2)\dot{q}_1(\dot{q}_1+\dot{q}_2)\right] \]

\[ K_{2rot}(q, \dot{q}) = \frac{1}{2}I_2(\dot{q}_1+\dot{q}_2)^2 \]

\[ K_{2total}(q, \dot{q}) = \frac{1}{2}m_2 l_1^2\dot{q}_1^2 + m_2 l_1 l_{c2}\cos(q_2)\dot{q}_1(\dot{q}_1+\dot{q}_2) + \frac{1}{2}(m_2 l_{c2}^2 + I_2)(\dot{q}_1+\dot{q}_2)^2 \]
The total kinetic energy of the structure is the sum of the kinetic energies of the links and is given as:

\[ K_{total}(q, \dot{q}) = K_{1total}(q, \dot{q}) + K_{2total}(q, \dot{q}) = \frac{1}{2}(m_1 l_{c1}^2 + m_2 l_1^2 + I_1)\dot{q}_1^2 + m_2 l_1 l_{c2}\cos(q_2)\dot{q}_1(\dot{q}_1+\dot{q}_2) + \frac{1}{2}(m_2 l_{c2}^2 + I_2)(\dot{q}_1+\dot{q}_2)^2 \]

The potential energy of a body is given as:

\[ U = mgh \]

where g represents the gravitational acceleration (9.8 m/s²), while h represents the altitude of the body with respect to the base of the structure. The potential energy of the manipulator is given as the sum of the potential energies of the links:

\[ U_1(q) = m_1 g l_{c1}\cos(q_1), \qquad U_2(q) = m_2 g\left[l_1\cos(q_1) + l_{c2}\cos(q_1+q_2)\right] \]
\[ U(q) = U_1(q) + U_2(q) \]
Next we calculate the Lagrangian (2.1) and its derivatives needed in the Euler-Lagrange equation (2.2):

\[ \frac{\partial L(q,\dot{q})}{\partial \dot{q}_1} = (m_1 l_{c1}^2 + m_2 l_1^2 + I_1)\dot{q}_1 + m_2 l_1 l_{c2}\cos(q_2)(2\dot{q}_1+\dot{q}_2) + (m_2 l_{c2}^2 + I_2)(\dot{q}_1+\dot{q}_2) = \]
\[ = (m_1 l_{c1}^2 + m_2 l_1^2 + I_1 + 2m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\dot{q}_1 + (m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\dot{q}_2 \]

\[ \frac{\partial L(q,\dot{q})}{\partial \dot{q}_2} = (m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\dot{q}_1 + (m_2 l_{c2}^2 + I_2)\dot{q}_2 \]

\[ \frac{d}{dt}\frac{\partial L(q,\dot{q})}{\partial \dot{q}_1} = (m_1 l_{c1}^2 + m_2 l_1^2 + I_1 + 2m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\ddot{q}_1 - 2m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1\dot{q}_2 + (m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\ddot{q}_2 - m_2 l_1 l_{c2}\sin(q_2)\dot{q}_2^2 \]

\[ \frac{d}{dt}\frac{\partial L(q,\dot{q})}{\partial \dot{q}_2} = (m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\ddot{q}_1 - m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1\dot{q}_2 + (m_2 l_{c2}^2 + I_2)\ddot{q}_2 \]

\[ \frac{\partial L(q,\dot{q})}{\partial q_1} = (m_1 l_{c1} + m_2 l_1)g\sin(q_1) + m_2 g l_{c2}\sin(q_1+q_2) \]

\[ \frac{\partial L(q,\dot{q})}{\partial q_2} = -m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1(\dot{q}_1+\dot{q}_2) + m_2 g l_{c2}\sin(q_1+q_2) \]
Based on (2.2) the applied torques are:

\[ \tau_1 = \frac{d}{dt}\frac{\partial L(q,\dot{q})}{\partial \dot{q}_1} - \frac{\partial L(q,\dot{q})}{\partial q_1} = (m_1 l_{c1}^2 + m_2 l_1^2 + I_1 + 2m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\ddot{q}_1 - 2m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1\dot{q}_2 + (m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\ddot{q}_2 - m_2 l_1 l_{c2}\sin(q_2)\dot{q}_2^2 - (m_1 l_{c1} + m_2 l_1)g\sin(q_1) - m_2 g l_{c2}\sin(q_1+q_2) \tag{2.8} \]

\[ \tau_2 = \frac{d}{dt}\frac{\partial L(q,\dot{q})}{\partial \dot{q}_2} - \frac{\partial L(q,\dot{q})}{\partial q_2} = (m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2)\ddot{q}_1 + (m_2 l_{c2}^2 + I_2)\ddot{q}_2 + m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1^2 - m_2 g l_{c2}\sin(q_1+q_2) \tag{2.9} \]

Adding viscous joint friction, the equations can be written in matrix form:

\[ \begin{bmatrix}\tau_1\\ \tau_2\end{bmatrix} = \begin{bmatrix}m_{11}(q_2) & m_{12}(q_2)\\ m_{21}(q_2) & m_{22}\end{bmatrix}\begin{bmatrix}\ddot{q}_1\\ \ddot{q}_2\end{bmatrix} + \begin{bmatrix}c_{11}(q_2,\dot{q}_2) & c_{12}(q_2,\dot{q}_2)\\ c_{21}(q_2,\dot{q}_1) & c_{22}\end{bmatrix}\begin{bmatrix}\dot{q}_1\\ \dot{q}_2\end{bmatrix} + \begin{bmatrix}d_1 & 0\\ 0 & d_2\end{bmatrix}\begin{bmatrix}\dot{q}_1\\ \dot{q}_2\end{bmatrix} + \begin{bmatrix}g_1(q_1,q_2)\\ g_2(q_1,q_2)\end{bmatrix} \tag{2.10} \]

or, in compact form:

\[ \tau = M(q)\ddot{q} + C_r(q,\dot{q})\dot{q} + D_c\dot{q} + G(q) \tag{2.11} \]
Note that in (2.11) Dc represents the damping coefficient (friction force) matrix given by the
resistance to movement of the manipulator’s links. Also note that the elements of the matrices
from (2.11) are functions of the joint angle positions and joint angular velocities and are given
as:
\[ m_{11}(q_2) = m_1 l_{c1}^2 + m_2 l_1^2 + I_1 + 2m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2 \]
\[ m_{12}(q_2) = m_{21}(q_2) = m_2 l_1 l_{c2}\cos(q_2) + m_2 l_{c2}^2 + I_2 \]
\[ m_{22} = m_2 l_{c2}^2 + I_2 \]
\[ c_{11}(q_2,\dot{q}_2) = -2m_2 l_1 l_{c2}\sin(q_2)\dot{q}_2 \]
\[ c_{12}(q_2,\dot{q}_2) = -m_2 l_1 l_{c2}\sin(q_2)\dot{q}_2 \]
\[ c_{21}(q_2,\dot{q}_1) = m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1 \]
\[ c_{22} = 0 \]
\[ g_1(q_1,q_2) = -(m_1 l_{c1} + m_2 l_1)g\sin(q_1) - m_2 g l_{c2}\sin(q_1+q_2) \]
\[ g_2(q_1,q_2) = -m_2 g l_{c2}\sin(q_1+q_2) \]
\[ d_1 \in \mathbb{R}, \qquad d_2 \in \mathbb{R} \]
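The matrix form can be checked numerically against the scalar torque expressions (2.8) and (2.9). The sketch below uses arbitrary placeholder parameters (not the Table 2.2 values) and omits friction (Dc = 0):

```python
import numpy as np

# placeholder parameters, not the thesis values from Table 2.2
m1, m2, l1, lc1, lc2, I1, I2, g = 1.0, 0.8, 1.0, 0.5, 0.4, 0.05, 0.03, 9.8

def matrices(q1, q2, dq1, dq2):
    """Inertia M, Coriolis C and gravity G terms of the matrix form (2.10)."""
    s2, c2 = np.sin(q2), np.cos(q2)
    M = np.array([[m1*lc1**2 + m2*l1**2 + I1 + 2*m2*l1*lc2*c2 + m2*lc2**2 + I2,
                   m2*l1*lc2*c2 + m2*lc2**2 + I2],
                  [m2*l1*lc2*c2 + m2*lc2**2 + I2,
                   m2*lc2**2 + I2]])
    C = np.array([[-2*m2*l1*lc2*s2*dq2, -m2*l1*lc2*s2*dq2],
                  [ m2*l1*lc2*s2*dq1,    0.0]])
    G = np.array([-(m1*lc1 + m2*l1)*g*np.sin(q1) - m2*g*lc2*np.sin(q1+q2),
                  -m2*g*lc2*np.sin(q1+q2)])
    return M, C, G

# arbitrary state and acceleration values
q1, q2, dq1, dq2, ddq1, ddq2 = 0.3, -0.7, 0.9, -0.4, 0.2, 1.1
M, C, G = matrices(q1, q2, dq1, dq2)
tau_matrix = M @ np.array([ddq1, ddq2]) + C @ np.array([dq1, dq2]) + G

# scalar torques directly from (2.8) and (2.9)
s2, c2 = np.sin(q2), np.cos(q2)
tau1 = ((m1*lc1**2 + m2*l1**2 + I1 + 2*m2*l1*lc2*c2 + m2*lc2**2 + I2)*ddq1
        - 2*m2*l1*lc2*s2*dq1*dq2
        + (m2*l1*lc2*c2 + m2*lc2**2 + I2)*ddq2
        - m2*l1*lc2*s2*dq2**2
        - (m1*lc1 + m2*l1)*g*np.sin(q1) - m2*g*lc2*np.sin(q1+q2))
tau2 = ((m2*l1*lc2*c2 + m2*lc2**2 + I2)*ddq1 + (m2*lc2**2 + I2)*ddq2
        + m2*l1*lc2*s2*dq1**2 - m2*g*lc2*np.sin(q1+q2))
print(np.allclose(tau_matrix, np.array([tau1, tau2])))  # True
```

The two evaluations coincide for arbitrary states, confirming that the matrix elements above factor the scalar equations correctly.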
The general state-space form is:

\[ \dot{x}(t) = Ax(t) + Bu(t) \tag{2.13} \]
\[ y(t) = Cx(t) + Du(t) \tag{2.14} \]

where (2.13) is the state equation, (2.14) is called the output equation, x^{r×1} represents the state vector, u^{p×1} denotes the input vector, y^{q×1} is the output vector, A^{r×r} is called the state (transition) matrix, B^{r×p} represents the input (control) matrix, C^{q×r} denotes the output matrix, while D^{q×p} is the feedforward (feedthrough) matrix. In the case of linear time-invariant systems the matrices A, B, C and D are constant, i.e. they are not functions of an independent variable (e.g. torque or time). In the case of our system (2.12) there are two inputs (the torques τ1 and τ2) and two outputs (the joint angles of the links, q1 and q2). We choose a state vector x containing the joint angle positions q1 and q2 and the joint angular velocities q̇1 and q̇2. Accordingly, p = 2, q = 2 and r = 4.
\[ x = \begin{bmatrix}x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} = \begin{bmatrix}q_1\\ q_2\\ \dot{q}_1\\ \dot{q}_2\end{bmatrix} \]

The time derivative of the state vector x can be rewritten as:

\[ \dot{x} = \begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\\ \dot{x}_4\end{bmatrix} = \begin{bmatrix}\dot{q}_1\\ \dot{q}_2\\ \ddot{q}_1\\ \ddot{q}_2\end{bmatrix} = \begin{bmatrix}x_3\\ x_4\\ f_1(x,u)\\ f_2(x,u)\end{bmatrix} \]
This yields the state equation and output equation

\[ \dot{x}(t) = A(q,\dot{q})x(t) + B(q)u(t) + A_{ff}(q) \tag{2.15} \]
\[ y(t) = Cx(t) \tag{2.16} \]

with the matrices

\[ A(q,\dot{q}) = \begin{bmatrix}O_2 & I_2\\ O_2 & -M^{-1}(q)\left(C_r(q,\dot{q}) - D_c\right)\end{bmatrix} \tag{2.17} \]

\[ B(q) = \begin{bmatrix}O_2\\ M^{-1}(q)\end{bmatrix} \tag{2.18} \]

\[ C = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{bmatrix} \tag{2.19} \]

\[ D = O_2 \tag{2.20} \]

\[ A_{ff}(q) = -\begin{bmatrix}O_{2\times 1}\\ M^{-1}(q)G(q)\end{bmatrix} \tag{2.21} \]
Note that we are dealing with a nonlinear, time-invariant system (NLTI). This is easily justified
by the fact that the state matrix (2.17) and the input matrix (2.18) involved in the state equation
(2.15) are functions containing trigonometric functions of the state variables (elements of x)
defined in (2.12). On the other hand, time invariance is justified by the fact that these trigono-
metric functions depend on the state variables and not explicitly on time. Remark that the above
representation is slightly different from the general form (2.13) and (2.14) since an additional
(affine) term Aff(q) is added to the state equation. This type of system is called a continuous-time
affine system [10]. The block diagram of the system is presented in Figure 2.3.
Figure 2.3: Block diagram of the system given by (2.15) and (2.16)
We can compensate the effect of the affine term and get a linear representation of the system by feeding its negative, −Aff(q), via the input u(t), so that the term B(q)u(t) + Aff(q) becomes B(q)uctrl(t), where uctrl(t) is the control input (τ in the manipulator's case). We achieve this by introducing a compensator input ucmp(t), such that u(t) = uctrl(t) + ucmp(t). We have:

\[ \begin{bmatrix}O_2\\ M^{-1}(q)\end{bmatrix}u_{cmp} - \begin{bmatrix}O_{2\times 1}\\ M^{-1}(q)G(q)\end{bmatrix} = 0 \]

\[ u_{cmp} = G(q) \]
In what follows we will work with the linear model with compensated affine term. For simplic-
ity we will denote uctrl (t) simply by u(t).
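The effect of the compensator input can be illustrated numerically: with ucmp = G(q) and zero joint velocities, the joint accelerations vanish at any configuration, i.e. gravity is exactly cancelled. A minimal sketch with placeholder parameters (not the Table 2.2 values), using the gravity vector elements given above:

```python
import numpy as np

# placeholder parameters, not the thesis values from Table 2.2
m1, m2, l1, lc1, lc2, I1, I2, g = 1.0, 0.8, 1.0, 0.5, 0.4, 0.05, 0.03, 9.8

def M(q2):
    # inertia matrix of the matrix form (2.10)
    c2 = np.cos(q2)
    return np.array([[m1*lc1**2 + m2*l1**2 + I1 + 2*m2*l1*lc2*c2 + m2*lc2**2 + I2,
                      m2*l1*lc2*c2 + m2*lc2**2 + I2],
                     [m2*l1*lc2*c2 + m2*lc2**2 + I2, m2*lc2**2 + I2]])

def G(q1, q2):
    # gravity vector [g1, g2]
    return np.array([-(m1*lc1 + m2*l1)*g*np.sin(q1) - m2*g*lc2*np.sin(q1+q2),
                     -m2*g*lc2*np.sin(q1+q2)])

q1, q2 = 0.8, -1.2              # arbitrary configuration, with q̇ = 0
u_cmp = G(q1, q2)               # compensator input u_cmp = G(q)
# with q̇ = 0 the acceleration reduces to q̈ = M⁻¹(q)(u − G(q))
ddq = np.linalg.solve(M(q2), u_cmp - G(q1, q2))
print(np.allclose(ddq, 0))      # True: the arm is held still
```

The cancellation holds at every configuration, which is why the compensated system can be treated as linear in the remaining input uctrl(t).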
\[ \dot{x} = \begin{bmatrix}O_2 & I_2\\ O_2 & -M^{-1}(q)\left(C_r(\dot{q}) - D_c\right)\end{bmatrix}x + \begin{bmatrix}O_2\\ M^{-1}(q)\end{bmatrix}u \tag{2.22} \]

\[ y = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{bmatrix}x \tag{2.23} \]
After compensation of the affine term, the system takes the form given by (2.22) and (2.23).
Figure 2.4: Block diagram of the system given by (2.22) and (2.23)
Once we have obtained the white-box model, we run simulations for three sets of initial conditions of q1 and q2, considering zero initial angular velocities, i.e. q̇1 = 0 and q̇2 = 0. The initial conditions were selected in such a way that we acquire enough information about the system's behavior to verify its correctness: minimal (zero), close-to-minimal and maximal offset with respect to the chosen equilibrium point x0 = [0 0 0 0]^T of the robot manipulator (vertical upright position, i.e. zero initial conditions). Another important remark is that the inputs are considered zero, so we get the uncontrolled, natural response of the system. For simulation we used the parameters adopted from [1], which are presented in Table 2.2.
For zero initial conditions, i.e. x0 = [0 0 0 0]^T, the system is at rest (nothing happens), as expected. Also in line with the expectations, for close-to-minimal, i.e. x0 = [0.1 0 0 0]^T, and maximal, i.e. x0 = [0 π/2 0 0]^T, offset initial conditions the system reaches the vertical downward position more slowly and faster, respectively. The results of the simulations are presented in Figure 2.5 and intuitively indicate a valid model of the robot manipulator.
Figure 2.5: Joint trajectories of the uncontrolled system over 8 s: (a) initial condition x0 = [0.1 0 0 0]^T; (b) initial condition x0 = [π/2 π/2 0 0]^T
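The open-loop simulation described above can be sketched with a simple forward-Euler integration of the state equation. This is only a plausibility check, not the thesis' Simulink setup: the parameters and the viscous friction matrix below are placeholders, not the Table 2.2 values.

```python
import numpy as np

# placeholder parameters, not the Table 2.2 values
m1, m2, l1, lc1, lc2, I1, I2, g = 1.0, 0.8, 1.0, 0.5, 0.4, 0.05, 0.03, 9.8
Dc = np.diag([0.1, 0.1])  # assumed viscous friction matrix

def f(x, u):
    """State derivative of the nonlinear model."""
    q1, q2, dq1, dq2 = x
    s2, c2 = np.sin(q2), np.cos(q2)
    M = np.array([[m1*lc1**2 + m2*l1**2 + I1 + 2*m2*l1*lc2*c2 + m2*lc2**2 + I2,
                   m2*l1*lc2*c2 + m2*lc2**2 + I2],
                  [m2*l1*lc2*c2 + m2*lc2**2 + I2, m2*lc2**2 + I2]])
    C = np.array([[-2*m2*l1*lc2*s2*dq2, -m2*l1*lc2*s2*dq2],
                  [ m2*l1*lc2*s2*dq1,    0.0]])
    G = np.array([-(m1*lc1 + m2*l1)*g*np.sin(q1) - m2*g*lc2*np.sin(q1+q2),
                  -m2*g*lc2*np.sin(q1+q2)])
    dq = np.array([dq1, dq2])
    ddq = np.linalg.solve(M, u - C @ dq - Dc @ dq - G)
    return np.concatenate([dq, ddq])

def simulate(x0, T=2.0, dt=1e-3):
    """Uncontrolled response (u = 0) via forward Euler."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * f(x, np.zeros(2))
    return x

print(np.linalg.norm(simulate([0, 0, 0, 0])) < 1e-9)   # True: rest at the equilibrium
print(np.linalg.norm(simulate([0.1, 0, 0, 0])) > 0.1)  # True: the upright pose is unstable
```

The two checks mirror the qualitative observations above: the zero initial condition stays at rest, while a small offset makes the arm fall away from the upright equilibrium.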
Chapter 3
LINEAR CONTROL
In this chapter we present the steps of linearization of the nonlinear model. The notions
of state feedback and linear quadratic control are presented. Controllers are designed for the
obtained linear model using pole placement and LQR methods. An observer is also designed
for the system. Simulation results are analyzed for different parameter values and are compared
to those obtained for the system with observer.
The Taylor series expansion of a function f(x) around the point x0 is:

\[ f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n = \tag{3.1} \]
\[ = f(x_0) + \frac{\partial f(x)}{\partial x}\bigg|_{x_0}(x-x_0) + \frac{1}{2!}\frac{\partial^2 f(x)}{\partial x^2}\bigg|_{x_0}(x-x_0)^2 + \dots \tag{3.2} \]
Neglecting the quadratic and higher-order terms, and accepting the approximation error thus introduced, we obtain the linear approximation of f(x) around x0:

\[ f(x) \approx f(x_0) + \frac{\partial f(x)}{\partial x}\bigg|_{x_0}(x-x_0) \tag{3.3} \]
\[ = f(x_0) + f_x(x_0)\,\Delta x \tag{3.4} \]

where f_x(x0) denotes the partial derivative of f(x) with respect to x evaluated at the point x0 and Δx denotes the small variation of x around x0.
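The quality of the first-order approximation can be illustrated with a small sketch; f = sin is an arbitrary example function here, not part of the thesis model:

```python
import numpy as np

def linearize(f, x0, h=1e-6):
    """First-order Taylor approximation of f around x0,
    with the derivative estimated by central finite differences."""
    fx = (f(x0 + h) - f(x0 - h)) / (2*h)
    return lambda x: f(x0) + fx * (x - x0)

f = np.sin
approx = linearize(f, x0=0.5)

# near x0 the linear model is accurate; far from x0 the neglected
# higher-order terms make the error grow
print(abs(approx(0.51) - f(0.51)) < 1e-4)  # True
print(abs(approx(1.5) - f(1.5)) > 0.1)     # True
```

This is exactly the trade-off exploited below: the linearized manipulator model is only valid in a neighborhood of the chosen equilibrium point.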
In the case of multivariable functions f(x1, ..., xn) we have to consider the variation of the function around each variable x1, ..., xn. Using the same notations as in (3.4), the linear approximation of f(x1, ..., xn) around (x01, ..., x0n) becomes:

\[ f(x_1, \dots, x_n) \approx f(x_{01}, \dots, x_{0n}) + \sum_{i=1}^{n} f_{x_i}(x_{01}, \dots, x_{0n})\,\Delta x_i \]
In order to linearize our model around an equilibrium point, we first compute the equilibrium points by solving (2.12) for q̇ = 0 and q̈ = 0. The equilibrium points are obtained as the solutions of (2.12) for zero inputs (u1 = 0 and u2 = 0):

• vertical upward position of the ends of both links: x = [0 0 0 0]^T

• vertical upward position of the end of L1 and vertical up- or downward position of the end of L2: x = [0 ±π 0 0]^T

• vertical up- or downward position of the end of L1 and vertical upward position of the end of L2: x = [±π 0 0 0]^T

Note that except the last one, all the equilibrium points are unstable. For our implementation we choose the first option (x0 = [0 0 0 0]^T), this one being the most difficult to stabilize. Since our model can be decomposed as a system of two multivariable functions f1(x1, x2, ẋ1, ẋ2, u1, u2) and f2(x1, x2, ẋ1, ẋ2, u1, u2), we use the Jacobian (matrix of partial derivatives) to obtain the first-order Taylor series expansion.
Let v = [x1, x2, ẋ1, ẋ2, u1, u2]^T denote the variables of the system, which can be written as:

\[ f(v) = \begin{bmatrix}f_1(v)\\ f_2(v)\end{bmatrix} = M^{-1}(x_1,x_2)\left[\begin{bmatrix}u_1\\ u_2\end{bmatrix} - C_r(x_1,x_2,\dot{x}_1,\dot{x}_2)\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} - D_c\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} - G(x_1,x_2)\right] \]

\[ \frac{\partial f(v)}{\partial v}\bigg|_{v_0} = \begin{bmatrix}f_{1x_1}(v_0) & f_{1x_2}(v_0) & f_{1\dot{x}_1}(v_0) & f_{1\dot{x}_2}(v_0) & f_{1u_1}(v_0) & f_{1u_2}(v_0)\\ f_{2x_1}(v_0) & f_{2x_2}(v_0) & f_{2\dot{x}_1}(v_0) & f_{2\dot{x}_2}(v_0) & f_{2u_1}(v_0) & f_{2u_2}(v_0)\end{bmatrix} = J_{pd} \]

where f_{ix_j}(v_0) denotes the partial derivative ∂f_i/∂x_j evaluated at v_0.
Let v0 = [0, 0, 0, 0, 0, 0]^T denote the equilibrium point. The linear approximation of f(v) = [f1(v) f2(v)]^T around v0 for small variations of v is given as:

\[ f(v) \approx f(v_0) + J_{pd}\,\Delta v = J_{pd}\,\Delta v \]

since f(v0) = 0 at the equilibrium point.
Now we can write the state matrix A and the input matrix B of the approximated system as:

\[ A = \begin{bmatrix}0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ J_{pd}(1,1) & J_{pd}(1,2) & J_{pd}(1,3) & J_{pd}(1,4)\\ J_{pd}(2,1) & J_{pd}(2,2) & J_{pd}(2,3) & J_{pd}(2,4)\end{bmatrix} = \begin{bmatrix}0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 16.95 & -0.69 & -2.29 & 4.18\\ -12.96 & 19.178 & 4.18 & -56.66\end{bmatrix} \tag{3.5} \]

\[ B = \begin{bmatrix}0 & 0\\ 0 & 0\\ J_{pd}(1,5) & J_{pd}(1,6)\\ J_{pd}(2,5) & J_{pd}(2,6)\end{bmatrix} = \begin{bmatrix}0 & 0\\ 0 & 0\\ 0.46 & -0.84\\ -0.84 & 11.33\end{bmatrix} \tag{3.6} \]
The output matrix C (2.19) and the feedforward matrix D (2.20) remain unchanged, since they
do not depend on the joint angle positions and joint angular velocities.
\[ C = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{bmatrix} \tag{3.7} \]

\[ D = O_2 \tag{3.8} \]
The resulting LTI system is given as:
ẋ(t) = Ax(t) + Bu(t) (3.9)
y(t) = Cx(t) (3.10)
with A, B, C and D given in equations (3.5), (3.6), (3.7) and (3.8), respectively.
State feedback control requires the system to be (completely) state controllable [14], i.e. from any initial state the system can be driven to any reachable final state using the control input in finite time [15]. A linear system is controllable if and only if the controllability matrix P_c^{r×rp} = [B AB A²B ... A^{r−1}B] has rank(P_c) = r [15]. In the case of the system given by (3.9) and (3.10) the controllability condition is satisfied: P_c^{4×8} = [B AB A²B A³B] and rank(P_c) = 4, where

\[ P_c = \begin{bmatrix}0 & 0 & 0.45 & -0.83 & -4.54 & 49.25 & 224.56 & -2832.05\\ 0 & 0 & -0.83 & 11.33 & 49.25 & -645.58 & -2832.05 & 37013.84\\ 0.45 & -0.83 & -4.54 & 49.25 & 224.56 & -2832.05 & -12457.59 & 162409.67\\ -0.83 & 11.33 & 49.25 & -645.58 & -2832.05 & 37013.84 & 162409.67 & -2122100.47\end{bmatrix} \]
The linear system described by equations (3.9) and (3.10) is controllable, so we can proceed to design a linear state feedback controller for it.
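The controllability test can be reproduced in a few lines, using the numeric A and B from (3.5) and (3.6) (a sketch; the thesis itself performs this computation in Matlab):

```python
import numpy as np

# state and input matrices from (3.5) and (3.6)
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [16.95, -0.69, -2.29, 4.18],
              [-12.96, 19.178, 4.18, -56.66]])
B = np.array([[0, 0], [0, 0], [0.46, -0.84], [-0.84, 11.33]])

# controllability matrix Pc = [B  AB  A^2B  A^3B]
Pc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
print(np.linalg.matrix_rank(Pc))  # 4: fully state controllable
```

A rank of 4 = r confirms the controllability claim (the numeric entries of Pc differ slightly from the tabulated ones because the tabulated A and B are already rounded).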
The LQR design minimizes the quadratic performance index:

\[ J = \int_0^{\infty}\left[x^T(t)Qx(t) + u^T(t)Ru(t)\right]dt \tag{3.11} \]

where the weighting functions satisfy Q^{r×r} ≥ 0, Q = Q^T (symmetric positive semidefinite matrix) and R^{p×p} > 0, R = R^T (symmetric positive definite matrix). The weighting functions represent the importance of the minimization of the controlled values (state variables) relative to each other (Q), as well as the importance of the control inputs (R) [17]. We are looking for a state feedback control law that minimizes J. This assumes that the control input is not an external signal to the system, but a feedback of its internal state. As a consequence, the system works in a disturbance rejection (stabilizing) regime and not in a reference tracking one. Thus, we consider the control law:

\[ u(t) = -Kx(t) \tag{3.12} \]

Using (3.12), the closed-loop system described by (3.9) and (3.10) becomes:

\[ \dot{x}(t) = (A - BK)x(t) = A_c x(t) \tag{3.13} \]
\[ y(t) = Cx(t) \tag{3.14} \]
The performance index J can be interpreted as an energy function, defining the total energy of the closed-loop system [17]. Since J is given as an infinite integral (t → ∞) of the state variables, if the value of J is finite, the state variables converge to zero [17]. Thus, minimizing the total energy of the closed-loop system yields stability. Because the state variables and the control input are weighted by Q and R, respectively, and these weights are chosen according to the design needs, there is no unique LQR solution. Larger Q values result in smaller state values x(t) to compensate for Q, thus penalizing the state variables. The same is true for R, which penalizes the control effort u(t) in order to compensate for the large values of R, thus keeping J small. In terms of pole placement, larger Q values shift the real parts of the closed-loop (A_c) poles to the left in the complex s-plane, increasing the robustness of the system; consequently the states decay faster to zero due to the higher control effort. On the other hand, larger R values provide a smaller control effort by shifting the poles towards the imaginary axis, resulting in larger state-variable values and implicitly decreased robustness. An explicit trade-off can be remarked between the two weighting functions.
The weight matrices are usually diagonal, with the maximum acceptance levels q_ii ≤ 1/x̄_i² and r_ii ≤ 1/ū_i² [18], where x̄_i and ū_i are the maximum (in absolute value) imposed values for the i-th state and input variable, respectively. When these default values are too restrictive, the following choice of weight matrices is commonly used: Q = I_r and R = ρI_p, where ρ > 0 [18]. Remark that the weighting functions, taken independently from each other, penalize intrinsic relations between the state variables (Q) or the control efforts (R). By scaling the two functions (via ρ), the importance of state over control (or vice versa) is adjusted.
To find K we have to find P^{n×n}, the only positive definite solution, fulfilling P = P^T, of the following algebraic Riccati equation [17]:

\[ A_c^T P + P A_c + Q + K^T R K = 0 \]
\[ A^T P + P A + Q + K^T R K - K^T B^T P - P B K = 0 \tag{3.15} \]
\[ K = R^{-1} B^T P \tag{3.16} \]
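Equations (3.15) and (3.16) can be solved numerically, for instance with SciPy's continuous-time algebraic Riccati solver. The sketch below uses the A and B from (3.5) and (3.6) with identity weighting functions as placeholders (not the thesis' Q1 or Q2):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [16.95, -0.69, -2.29, 4.18],
              [-12.96, 19.178, 4.18, -56.66]])
B = np.array([[0, 0], [0, 0], [0.46, -0.84], [-0.84, 11.33]])
Q = np.eye(4)  # placeholder weighting functions
R = np.eye(2)

P = solve_continuous_are(A, B, Q, R)  # positive definite solution of (3.15)
K = np.linalg.inv(R) @ B.T @ P        # K = R^{-1} B^T P, as in (3.16)
Ac = A - B @ K                        # closed-loop state matrix (3.13)

print(np.all(np.linalg.eigvals(Ac).real < 0))  # True: the closed loop is stable
```

Since (A, B) is controllable and Q ≥ 0, R > 0, the resulting gain is guaranteed to place all closed-loop poles in the left half-plane, which the eigenvalue check confirms.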
Observer design requires the system to be observable, i.e. the observability matrix O_o^{rq×r} = [C; CA; ...; CA^{r−1}] must have rank(O_o) = r. For the system given by (3.9) and (3.10):

\[ O_o = \begin{bmatrix}C\\ CA\\ CA^2\\ CA^3\end{bmatrix} = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 16.95 & -0.68 & -2.29 & 4.17\\ -12.96 & 19.17 & 4.17 & -56.66\\ -93.02 & 81.69 & 39.66 & -246.98\\ 805.31 & -1089.50 & -259.26 & 3247.12\end{bmatrix} \]
The linear system described by equations (3.9) and (3.10) is therefore also observable. Note that controllability and observability of LTI systems are dual notions [19].
In order to estimate all the state variables x, which are otherwise not directly accessible for measurement, we design and implement an observer. A Luenberger observer is a copy of the system running in parallel with it, driven by the same input and corrected by the measured output:

\[ \dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L\big(y(t) - C\hat{x}(t)\big) \]
\[ \hat{y}(t) = C\hat{x}(t) \]

Similarly to the pole placement method with state feedback, the observer gain L^{r×q} adjusts the location of the poles of the estimation error dynamics, ė(t) = (A − LC)e(t), where e(t) = x(t) − x̂(t). We need to accelerate the estimator's dynamics over that of the estimated system, so that it provides the required state estimates in time for the controller [20]. Thus, L is usually chosen such that |Re{λ_min^{obs}}| > α|Re{λ_max^{sys}}|, where theoretically α = 10, while in practice α = 5 or 6 [20], i.e. the observer dynamics is chosen to be 5-6 times faster. Choosing higher values is not advisable, in order to prevent the estimator from amplifying measurement noise. L is computed such that the estimation error dynamics is asymptotically stable, i.e. x − x̂ → 0 as t → ∞ [20]. This implies global asymptotic stability, thus the initial condition x̂0 of the observer can be chosen arbitrarily.
\[ L = \begin{bmatrix}1000.25 & 0.01 & 253.99 & 5.27\\ 0.01 & 1000.02 & 16.86 & 27.78\end{bmatrix}^T \]
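By the duality noted above, an observer gain can be computed with the same pole placement routine applied to (Aᵀ, Cᵀ). A sketch with placeholder observer poles, not the poles behind the thesis' L:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [16.95, -0.69, -2.29, 4.18],
              [-12.96, 19.178, 4.18, -56.66]])
C = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])

obs_poles = np.array([-100.0, -101.0, -102.0, -103.0])  # placeholder "fast" poles
# duality: placing the poles of A - LC is equivalent to
# placing the poles of A^T - C^T L^T
L = place_poles(A.T, C.T, obs_poles).gain_matrix.T

err_poles = np.linalg.eigvals(A - L @ C)
print(np.allclose(np.sort(err_poles.real), np.sort(obs_poles)))  # True
```

The eigenvalues of the error dynamics A − LC land on the requested locations, so the estimation error decays at the chosen rate.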
As a next step we compute the feedback gains K for both the pole placement and LQR tech-
niques. In Matlab the gain matrix K for a given set A, B, p can be computed using the command
Kpp = place(A, B, p), where p denotes the new places of the poles. We computed two gain
matrices for two sets of poles:
\[ p_1 = -20 \cdot \begin{bmatrix}10 & 10 & 1 & 1\end{bmatrix} \]
\[ K_{pp1} = \begin{bmatrix}670.08 & 48.26 & 133.55 & 10.21\\ 48.26 & 27.31 & 10.21 & 0.6067\end{bmatrix} \]
\[ p_2 = -5 \cdot \begin{bmatrix}10 & 10 & 1 & 1\end{bmatrix} \]
\[ K_{pp2} = \begin{bmatrix}10117.04 & 744.81 & 549.22 & 40.86\\ 744.81 & 409.58 & 40.86 & 17.42\end{bmatrix} \]
Note that in both cases the first two elements of the vectors p1 and p2 are 10 times greater in absolute value than the last two. As a direct consequence, the first two poles are 10 times faster than the last two, thus more robust. This choice of the poles is motivated by the fact that we aim to stabilize primarily the joint angle positions, i.e. x1 and x2, rather than the joint angular velocities, i.e. x3 and x4.
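The same computation can be reproduced outside Matlab, e.g. with scipy.signal.place_poles. This is a sketch: repeated poles are allowed up to rank(B) = 2, which the sets p1 and p2 respect, and the resulting gain need not coincide numerically with Kpp1, since MIMO pole placement is not unique.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [16.95, -0.69, -2.29, 4.18],
              [-12.96, 19.178, 4.18, -56.66]])
B = np.array([[0, 0], [0, 0], [0.46, -0.84], [-0.84, 11.33]])

p1 = -20.0 * np.array([10, 10, 1, 1])  # = [-200, -200, -20, -20]
K = place_poles(A, B, p1).gain_matrix  # analogous to place(A, B, p) in Matlab

cl_poles = np.linalg.eigvals(A - B @ K)
print(np.allclose(np.sort(cl_poles.real), np.sort(p1), rtol=1e-3))  # True
```

The closed-loop eigenvalues match the requested pole set, which is the property the design actually relies on.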
We also compute the gain matrices using LQR technique, which in Matlab can be computed for
a given set A, B, Q, R using the command [Klqr , P, e] = lqr(A, B, Q, R). We computed two
gain matrices for different weighting functions.
In both cases we prioritized the joint angle positions over the joint angular velocities, as in the
previous case, thus the weighting function Q is of the same form in both cases:
\[ Q = \alpha\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & \beta & 0\\ 0 & 0 & 0 & \beta\end{bmatrix} \tag{3.17} \]

where β is several orders of magnitude (5-6) smaller than 1; we chose β = 10⁻⁶. R has been chosen as:

\[ R = CC^T = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \tag{3.18} \]
The importance of minimizing the control effort is the same for both inputs (control signals),
given by R, thus R = I2 for both cases. For the case when we prioritize the states over the
control we choose the weighting function Q1 and obtain gain matrix Klqr1 as follows:
\[ Q_1 = 10^9\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 10^{-6} & 0\\ 0 & 0 & 0 & 10^{-6}\end{bmatrix} \tag{3.19} \]

\[ K_{lqr1} = \begin{bmatrix}31663.09 & 2.56 & 394.95 & 24.35\\ 1.08 & 31624.6 & 24.34 & 77.94\end{bmatrix} \]
For the case when we prioritize the control over the states we choose the weighting function Q2 and obtain the gain matrix Klqr2 as follows:

\[ Q_2 = 10^{-12}\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 10^{-6} & 0\\ 0 & 0 & 0 & 10^{-6}\end{bmatrix} \tag{3.20} \]
All simulations were performed for the initial condition that is the farthest away from the robust equilibrium point ([0 0 0 0]^T), namely x0 = [π/2 π/2 0 0]^T. It is important to mention that, due to the resulting control efforts, the joint angular velocities (ẋ1 and ẋ2) were of different orders of magnitude (0-2). In order to be able to compare the results, we limited the joint angular velocities of the closed-loop system to the range [-5, 5] rad/s. Note that there are no such restrictions in the case of the observer; the initial state of the observer is the equilibrium point x0 = [0 0 0 0]^T. The simulations were performed for all combinations of systems and control strategies, namely the linear and nonlinear systems, with or without observer, for both methods, pole placement and LQR.
The first set of simulations were performed for the linear system:
Figure 3.2: Trajectories of the closed-loop system without observer using LQR
We can see in Figure 3.2 that the decay rate of the system in subfigure (a) is much faster than that of the system in subfigure (b).
Figure 3.3: Trajectories of the closed-loop system without observer using pole placement
We can notice a similar behavior for the system in Figure 3.3; however, the difference is not as remarkable as in the previous case.
(Each subfigure plots the trajectories x1 [rad], x2 [rad], ẋ1 [rad/s] and ẋ2 [rad/s] over 2 s, for the initial condition x0 = [π/2 π/2 0 0]^T.)
Figure 3.4: Trajectories of the closed-loop system with observer using LQR
In the case of the system from Figure 3.4, besides the decay rate difference between the two subfigures, similar to that of the system from Figure 3.2, we can see how the observer slows the dynamics of the system while reducing the estimation error. This behavior is hard to remark in subfigure (b), since its dynamics is much slower than that of the system from subfigure (a), even without an observer.
Figure 3.5: Trajectories of the closed-loop system with observer using pole placement
Figure 3.5 shows how the observer slows the dynamics of both systems, which can be clearly seen by comparing the results with the ones from Figure 3.3.
From the above results we can deduce that the system with direct access to the states performs better than the one with an observer in terms of settling time and decay rate. We can also deduce that in the case of LQR control, when the Klqr1 gain is used, the control effort is high due to the large values in Klqr1; thus the system decays faster (has a smaller settling time) than when the Klqr2 gain is used, both with and without an observer. The system response in the case of pole placement control is similar using either of the two gains Kpp1 and Kpp2 when comparing alike systems (with or without observer). Comparing LQR and pole placement, we can deduce that, unless we prioritize the control effort in the LQR design, both control techniques perform similarly well.
Let us compare the same feedback gains on the nonlinear system:
Figure 3.6: Trajectories of the closed-loop system without observer using LQR
We can see in Figure 3.6 that the decay rate of the system in subfigure (a) is slightly faster than that of the system in subfigure (b).
Figure 3.7: Trajectories of the closed-loop system without observer using pole placement
A similar behavior can be noticed for the system in Figure 3.7; however, the difference is not as pronounced as in the previous case.
Figure 3.8: Trajectories of the closed-loop system with observer using LQR
The system in Figure 3.8 shows a decay rate difference between the two subfigures similar to that in Figure 3.6. The effect introduced by the observer is not noticeable in terms of decay rate.
Figure 3.9: Trajectories of the closed-loop system with observer using pole placement
Similarly to the linear case, Figure 3.9 shows how the observer slows down the dynamics of both systems, which can be clearly seen by comparing the results with those of Figure 3.7.
A behavior similar to the previous case can be observed, due to the limitations imposed on the control input (e.g., by the actual actuator). Both control techniques, however, perform slightly better in terms of decay rate on the linear system than on the nonlinear one, since they were designed for the linear system.
Chapter 4
NONLINEAR CONTROL
In this chapter we present the notions of nonlinear control using the Takagi-Sugeno (TS) fuzzy approach. The fuzzy model of the manipulator is built, as well as a TS fuzzy controller for it with performance guarantees. A fuzzy observer is also computed. Finally, simulations are performed and the results are presented.
Fuzzy modeling of the system gives a solution to the above issues by building submodels valid in subdomains, which are basically partitions of the operating domain. The submodels are simple input-output relations [22]. The main feature of the TS fuzzy model is that it is composed of linear submodels (implications, rules, or local linear input-output relations) of the nonlinear system [23]. By blending these rules with nonlinear weighting functions we obtain the overall fuzzy model of the system [24]. In other words, the dynamics over the state space is split into local regions described by linear models, which are then combined to cover the whole state space using fuzzy interpolation (membership functions) [25].
Since each submodel is linear, linear control techniques can be used. It is even possible to design a local state feedback controller for each model separately and to combine them, obtaining a controller valid over a wider operating range [26]. The validity of a controller obtained this way, however, needs to be verified. The concept of designing a controller for each local model is called parallel distributed compensation (PDC). By using quadratic Lyapunov functions to design the fuzzy controller, we can guarantee the stability of the closed-loop system, as well as impose performance guarantees using linear matrix inequalities (LMIs), which can be solved using convex optimization techniques [27]. Such performance guarantees are the decay rate (related to the response time) and constraints on the control effort and state values.
To construct an exact fuzzy model of a nonlinear system of the form ẋ(t) = f(x(t)), with f(0) = 0, we should find a global sector for which the relation a·x(t) ≤ f(x(t)) ≤ b·x(t) holds [23]. This method is called global sector nonlinearity, illustrated in Figure 4.1. In the general case it can be difficult to find such global sectors for the entire domain of x(t). Since real-life physical systems are bounded, we can take a particular case for which local sector nonlinearity fulfills the design needs. Figure 4.2 shows the exact fuzzy representation of the nonlinear system between the bounds [−d, d], i.e. −d ≤ x(t) ≤ d.
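As a minimal numerical illustration of local sector nonlinearity, the following Python sketch bounds f(x) = sin(x) between two sector lines on [−d, d]; sin(x) is an illustrative nonlinearity chosen here, not one of the manipulator's actual terms:

```python
import numpy as np

# Local sector for f(x) = sin(x) on [-d, d]: a*x <= sin(x) <= b*x, which for
# an odd nonlinearity is equivalent to a*x^2 <= x*sin(x) <= b*x^2 on the sector.
d = np.pi / 2
xs = np.linspace(-d, d, 1001)
a = np.sin(d) / d          # lower sector slope on [-d, d]
b = 1.0                    # upper sector slope (sin(x)/x -> 1 as x -> 0)

assert np.all(a * xs**2 - 1e-9 <= xs * np.sin(xs))
assert np.all(xs * np.sin(xs) <= b * xs**2 + 1e-9)
```

Shrinking d pushes the lower slope a toward b = 1, i.e. a tighter local sector; a global sector over all of R would be much more conservative.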
For the general case of modeling a continuous time nonlinear dynamical system let us consider
the generic subsystem (rule, implication) of the following form [28]:
Rule i:
IF z1(t) is M1k and ... and zm(t) is Mmk THEN

ẋ(t) = Ai x(t) + Bi u(t)    (4.1)
y(t) = Ci x(t) + Di u(t)    (4.2)
The definitions of the notations are the following [29]: n is the number of fuzzy rules; z(t) = {z1(t), ..., zm(t)} represents the known premise variables, which can be nonlinear functions of the state variables obtained from the partitioning of the state-space domain, i.e. z(t) = f(x(t)); they can also be functions of external disturbances or input variables; m denotes the number of premise variables; k = 1, 2 indexes the weighting functions; Mjk represents the k-th fuzzy set (weighting function) for the j-th premise variable; (4.1) denotes the state equation and (4.2) the output equation; x ∈ R^(r×1) represents the state vector; u ∈ R^(p×1) denotes the input vector; y ∈ R^(q×1) is the output vector; Ai ∈ R^(r×r) represents the state (transition) matrix for rule i; Bi ∈ R^(r×p) denotes the input (control) matrix for rule i; while Ci ∈ R^(q×r) is the output matrix for rule i.
Blending of the rules is done as [22]:

ẋ(t) = ( Σ_{i=1}^{n} ωi(z(t)) [Ai x(t) + Bi u(t)] ) / ( Σ_{i=1}^{n} ωi(z(t)) ) = Σ_{i=1}^{n} hi(z(t)) [Ai x(t) + Bi u(t)]    (4.3)

y(t) = ( Σ_{i=1}^{n} ωi(z(t)) [Ci x(t) + Di u(t)] ) / ( Σ_{i=1}^{n} ωi(z(t)) ) = Σ_{i=1}^{n} hi(z(t)) [Ci x(t) + Di u(t)]    (4.4)
where Mjk(zj(t)) is the degree of membership of the premise variable zj(t) in the fuzzy set Mjk. The normalized membership functions hi(z(t)) are implicitly defined by (4.3) and (4.4) as:

hi(z(t)) = ωi(z(t)) / Σ_{i=1}^{n} ωi(z(t)),    hi(z(t)) ≥ 0,    Σ_{i=1}^{n} hi(z(t)) = 1
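The blending in (4.3) with normalized weights can be sketched as follows; the two local models and the raw weights are invented stand-ins for the actual rules and fuzzy-set products:

```python
import numpy as np

# Minimal sketch of rule blending (4.3) with two invented local models.
A1 = np.array([[0., 1.], [-1., -1.]])
A2 = np.array([[0., 1.], [-4., -2.]])
B1 = np.array([[0.], [1.]])
B2 = np.array([[0.], [2.]])

def blended_dynamics(x, u, w):
    """xdot = sum_i h_i (A_i x + B_i u), with h_i = w_i / sum_j w_j."""
    h = np.asarray(w, float) / np.sum(w)           # implicit normalization
    assert np.all(h >= 0) and np.isclose(h.sum(), 1.0)
    return sum(hi * (Ai @ x + Bi @ u)
               for hi, Ai, Bi in zip(h, (A1, A2), (B1, B2)))

x = np.array([0.3, -0.1]); u = np.array([0.5])
# With w = (1, 0) the blend reduces exactly to the first local model.
assert np.allclose(blended_dynamics(x, u, (1.0, 0.0)), A1 @ x + B1 @ u)
```

Between the extremes the blend interpolates smoothly between the local linear dynamics, which is exactly the fuzzy-interpolation idea described above.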
A(q, q̇) =
[ O2   I2 ]
[ O2   −M(q)⁻¹(Cr(q, q̇) − Dc) ]
=
[ a11  a12  a13       a14      ]
[ a21  a22  a23       a24      ]
[ a31  a32  a33(q, q̇)  a34(q, q̇) ]
[ a41  a42  a43(q, q̇)  a44(q, q̇) ]

B(q) =
[ O2      ]
[ M(q)⁻¹ ]
=
[ b11     b12     ]
[ b21     b22     ]
[ b31(q)  b32(q)  ]
[ b41(q)  b42(q)  ]

C =
[ 1 0 0 0 ]
[ 0 1 0 0 ]

D = O2
The steps of fuzzy modeling the system are to define the sectors, the fuzzy (premise) variables, and the fuzzy sets (weighting functions). Using the principle of local sector nonlinearity, we impose bounds on the state variables as x1(t), x2(t) ∈ [−0.5, 0.5] and x3(t), x4(t) ∈ [−3.5, 3.5], respectively. Since Ci = C and Di = D are constant matrices for every i = 1, ..., n, no premise variables are needed to describe them. The same holds for the elements a11, a12, a13, a14, a21, a22, a23, a24, a31, a32, a41, a42, b11, b12, b21 and b22. In order to further reduce the necessary computations, instead of taking all 8 remaining premise variables corresponding to the positions a33(q, q̇), a34(q, q̇), a43(q, q̇), a44(q, q̇), b31(q), b32(q), b41(q) and b42(q), it is enough to take the 5 that determine a minimal set of variables describing them. The expressions of the elements a33(q, q̇), a34(q, q̇), a43(q, q̇), a44(q, q̇) are of the following forms:

It is not difficult to deduce that any factorization using any combination of the arguments of these elements would only increase the number of necessary premise variables.
We take the first 4 premise variables as:
In the case of the elements b31(q), b32(q), b41(q), b42(q), the elements have the following form, allowing us to factorize:
The number of premise variables can be reduced by taking z5(q) = cos(x2(t)) and z6(t) = 1/den, where den = f(cos²(x2)) denotes the common denominator of the elements b31(q), b32(q), b41(q), b42(q). After computing the minimum and maximum values of z6(t), it turns out that its value is several orders of magnitude smaller than those of the others, so we will neglect this term.
The following step is to compute the minimum and maximum values of the remaining premise variables on the chosen ranges of the state variables, i.e. x1(t), x2(t) ∈ [−0.5, 0.5] and x3(t), x4(t) ∈ [−3.5, 3.5], respectively.

Note that each premise variable is a function of the state variables, which are all bounded. Therefore the premise variables are also bounded, reaching their minimum and maximum values on the corresponding intervals, i.e. for x(t) ∈ [xmin, xmax] we have z(t) = f(x(t)) ∈ [zmin, zmax]. Consequently, each premise variable can be expressed as a convex combination using the membership functions Mj1(z(t)) and Mj2(z(t)) [30]. Note that Mj1(z(t)) + Mj2(z(t)) = 1, with Mj1(z(t)), Mj2(z(t)) ∈ [0, 1].
z1(q, q̇) = M11(z1(q, q̇)) z1min + M12(z1(q, q̇)) z1max = [(17.14 − z1(q, q̇)) / 0.48] · 16.66 + [(z1(q, q̇) − 16.66) / 0.48] · 17.14

z2(q, q̇) = M21(z2(q, q̇)) z2min + M22(z2(q, q̇)) z2max = [(1.14 − z2(q, q̇)) / 0.15] · 0.99 + [(z2(q, q̇) − 0.99) / 0.15] · 1.14

z3(q, q̇) = M31(z3(q, q̇)) z3min + M32(z3(q, q̇)) z3max = [(−27.30 − z3(q, q̇)) / 4.15] · (−31.45) + [(z3(q, q̇) − (−31.45)) / 4.15] · (−27.30)

z4(q, q̇) = M41(z4(q, q̇)) z4min + M42(z4(q, q̇)) z4max = [(−14.91 − z4(q, q̇)) / 0.36] · (−15.27) + [(z4(q, q̇) − (−15.27)) / 0.36] · (−14.91)
Our fuzzy model thus has 2^5 = 32 submodels (rules), obtained from all combinations of the 5 pairs of weighting functions:
Rule 1:
IF z1 (q, q̇) is M11 and z2 (q, q̇) is M21 and z3 (q, q̇) is M31 and z4 (q, q̇) is M41 and z5 (q) is M51
THEN
A1 =
[ 0  0  1      0     ]
[ 0  0  0      1     ]
[ 0  0  z1min  z2min ]
[ 0  0  z3min  z4min ]

B1 =
[ 0          0          ]
[ 0          0          ]
[ f1(z5min)  f2(z5min)  ]
[ f2(z5min)  f3(z5min)  ]
The membership function for rule 1 is: h1 (z(t)) = M11 M21 M31 M41 M51
Rule 2:
IF z1 (q, q̇) is M11 and z2 (q, q̇) is M21 and z3 (q, q̇) is M31 and z4 (q, q̇) is M41 and z5 (q) is M52
THEN
A2 =
[ 0  0  1      0     ]
[ 0  0  0      1     ]
[ 0  0  z1min  z2min ]
[ 0  0  z3min  z4min ]

B2 =
[ 0          0          ]
[ 0          0          ]
[ f1(z5max)  f2(z5max)  ]
[ f2(z5max)  f3(z5max)  ]
The membership function for rule 2 is: h2 (z(t)) = M11 M21 M31 M41 M52
...
Rule 32:
IF z1 (q, q̇) is M12 and z2 (q, q̇) is M22 and z3 (q, q̇) is M32 and z4 (q, q̇) is M42 and z5 (q) is M52
THEN
A32 =
[ 0  0  1      0     ]
[ 0  0  0      1     ]
[ 0  0  z1max  z2max ]
[ 0  0  z3max  z4max ]

B32 =
[ 0          0          ]
[ 0          0          ]
[ f1(z5max)  f2(z5max)  ]
[ f2(z5max)  f3(z5max)  ]
The membership function for rule 32 is: h32 (z(t)) = M12 M22 M32 M42 M52
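Enumerating the 32 rules amounts to taking all min/max combinations of the five premise variables; a sketch with illustrative (invented) membership values is:

```python
import numpy as np
from itertools import product

# Each rule picks, for every premise variable, either its "min" fuzzy set
# (index 0) or its "max" fuzzy set (index 1). The membership values below are
# illustrative; in the thesis they come from the functions Mj1, Mj2.
w = np.array([[0.3, 0.7],    # M11(z1), M12(z1)
              [0.6, 0.4],
              [0.2, 0.8],
              [0.5, 0.5],
              [0.9, 0.1]])

rules = list(product((0, 1), repeat=5))
assert len(rules) == 32      # 2^5 combinations

# Rule membership h_i is the product of the selected fuzzy-set values.
h = np.array([np.prod([w[j, kj] for j, kj in enumerate(rule)]) for rule in rules])
assert np.all(h >= 0) and np.isclose(h.sum(), 1.0)   # valid convex weights
```

Because each pair Mj1 + Mj2 sums to 1, the 32 products automatically sum to 1, which is the normalization property required of the hi.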
In order to ensure asymptotic stability, the derivative of the Lyapunov function has to be negative [28]:

V̇(x) = (d/dt)V(x) < 0    (4.16)
except at the equilibrium point x0, where V(x0) = 0. In other words, we must ensure that the energy of the system dissipates over time, considering zero external input.
The basic stability of a system is analyzed using the quadratic Lyapunov function [28]:

V(x) = xᵀ P x

where P = Pᵀ > 0. For asymptotic stability, we should consider the derivative of the Lyapunov function along the trajectories of the entire TS fuzzy system:

V̇(x) = (d/dt)V(x) = ẋᵀ P x + xᵀ P ẋ = Σ_{i=1}^{n} hi(z(t)) xᵀ(Aiᵀ P + P Ai)x    (4.17)
The equilibrium of a TS fuzzy system is globally asymptotically stable if there exists a common symmetric, positive definite matrix P (P = Pᵀ > 0) for all the subsystems such that [28]:

Aiᵀ P + P Ai < 0

for i = 1, ..., n, where n is the number of rules (n = 1 for linear system stability analysis). The common matrix P can be found using convex optimization techniques.
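For a single subsystem (n = 1) the condition can be checked directly by solving a Lyapunov equation, while a common P for n > 1 requires an LMI solver (as done in Appendix C). A minimal sketch with an invented stable A:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve A^T P + P A = -I for P; if A is Hurwitz, the solution P is symmetric
# positive definite and certifies V(x) = x^T P x as a Lyapunov function.
A = np.array([[0., 1.], [-2., -3.]])   # invented stable matrix (eigs -1, -2)
P = solve_continuous_lyapunov(A.T, -np.eye(2))

assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)                 # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)   # V̇ < 0 away from 0
```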
The above stability conditions will also be used to design a fuzzy controller based on state
feedback (2.2) using the idea of parallel distributed compensation. Similarly to the LQR case,
full state feedback requires the system to be (completely) state controllable [14], i.e. from any
initial state the system can be driven to any reachable final state using the control input in finite
time [15]. The PDC approach is based on the same fuzzy sets and premise variables as the fuzzy
model itself. For each fuzzy subsystem (rule) a linear state feedback control law is defined [28]:
IF z1 (q, q̇) is M1k1 and z2 (q, q̇) is M2k2 and z3 (q, q̇) is M3k3 and z4 (q, q̇) is M4k3 and z5 (q) is M5k5
THEN u(t) = −Fi x
where Fi notes the state feedback gain for subsystem i, i = 1, ..., n and [k1 , ..., k2 ] ∈ {1, 2}
n
X
u(t) = − hi (z(t))Fi x(t) (4.19)
i=1
By substituting (4.19) into (4.3) we obtain the equation of the closed-loop system:
ẋ(t) = Σ_{i=1}^{n} Σ_{j=1}^{n} hi(z(t)) hj(z(t)) (Ai − Bi Fj) x(t)    (4.20)
where Gij = Ai − Bi Fj. The closed-loop system (4.20) is asymptotically stable if there exists a common P = Pᵀ > 0 such that:

Giiᵀ P + P Gii < 0,  i = 1, ..., n    (4.22)

((Gij + Gji)/2)ᵀ P + P ((Gij + Gji)/2) ≤ 0    (4.23)

where 1 ≤ j < i ≤ n.
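The double sum in (4.20) can be sketched as follows, with invented 2-state local models, gains and memberships:

```python
import numpy as np

# Under the PDC law, the effective closed-loop matrix is the blend
# sum_i sum_j h_i h_j (A_i - B_i F_j). All values below are illustrative.
A = [np.array([[0., 1.], [-1., -1.]]), np.array([[0., 1.], [-4., -2.]])]
B = [np.array([[0.], [1.]]), np.array([[0.], [2.]])]
F = [np.array([[2., 3.]]), np.array([[1., 2.]])]
h = np.array([0.25, 0.75])   # example normalized memberships, sums to 1

Acl = sum(h[i] * h[j] * (A[i] - B[i] @ F[j])
          for i in range(2) for j in range(2))

# Sanity check: with all weight on rule 1, the blend is exactly A_1 - B_1 F_1.
h1 = np.array([1.0, 0.0])
Acl1 = sum(h1[i] * h1[j] * (A[i] - B[i] @ F[j])
           for i in range(2) for j in range(2))
assert np.allclose(Acl1, A[0] - B[0] @ F[0])
```

The cross terms Ai − Bi Fj (i ≠ j) are why the stability conditions above involve pairs of rules, not just the individual closed-loop matrices.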
An LMI has the general form F(x) = F0 + Σ_{i=1}^{n} xi Fi > 0, where x = [x1, ..., xn]ᵀ is the decision variable and the matrices Fi = Fiᵀ ∈ R^(n×n) are known; F(x) > 0 means that F(x) is positive definite. The inequalities (4.22) and (4.23), however, are not LMIs, since they are neither linear nor jointly convex in Fi and P [23]. To obtain a linear relation we introduce Q = P⁻¹. Note that since P = Pᵀ > 0 it follows that Q = Qᵀ > 0. By pre- and post-multiplying (4.22) and (4.23) with Q we obtain:

Q Giiᵀ + Gii Q < 0,  i = 1, ..., n    (4.24)

Q ((Gij + Gji)/2)ᵀ + ((Gij + Gji)/2) Q ≤ 0,  1 ≤ j < i ≤ n    (4.25)
43
In order to solve the convexity problem we introduce Wi = Fi Q; it follows that the state feedback gains are given as Fi = Wi Q⁻¹ = Wi P. We can then rewrite (4.24) and (4.25) as:

Q Aiᵀ + Ai Q − Wiᵀ Biᵀ − Bi Wi < 0

Q (Ai + Aj)ᵀ + (Ai + Aj) Q − (Bi Wj + Bj Wi)ᵀ − (Bi Wj + Bj Wi) ≤ 0

where 1 ≤ j < i ≤ n.
Constraints can also be set on the input and output, respectively [32]. Since every Lyapunov function is positive and monotonically decreasing along the trajectories (see (4.15) and (4.16)), the relation V(x) ≤ V(x0) holds. We assume that the initial condition x0 is unknown, but norm-bounded as ||x0||² = x0ᵀ x0 ≤ σ². The Lyapunov function of the initial state is given as V(x0) = x0ᵀ P x0 = x0ᵀ Q⁻¹ x0. Imposing

Q = P⁻¹ ≥ σ² I    (4.29)

leads to [30]:

P ≤ (1/σ²) I

V(x0) = x0ᵀ P x0 ≤ σ² · (1/σ²) = 1

V(x) ≤ V(x0) ≤ 1    (4.30)

1 − xᵀ Q⁻¹ x ≥ 0
For the case when the input (ui = −Fi x) is norm-bounded (||ui||² = uiᵀ ui ≤ µ²) we have:

uiᵀ ui = xᵀ Fiᵀ Fi x ≤ µ²

(1/µ²) xᵀ Fiᵀ Fi x ≤ 1

We impose that:

(1/µ²) xᵀ Fiᵀ Fi x ≤ V(x) ≤ 1

(1/µ²) xᵀ Fiᵀ Fi x ≤ xᵀ Q⁻¹ x

By rearranging we obtain:

xᵀ ((1/µ²) Fiᵀ Fi − Q⁻¹) x ≤ 0

(1/µ²) Fiᵀ Fi − Q⁻¹ ≤ 0

Q − (1/µ²) Q Fiᵀ Fi Q ≥ 0
45
By applying the Schur complement we obtain the LMI form:

[ Q     Q Fiᵀ ]   [ Q    Wiᵀ  ]
[ Fi Q  µ² I  ] = [ Wi   µ² I ] > 0    (4.31)
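The equivalence established by the Schur complement can be verified numerically; the sketch below uses arbitrary illustrative values for Q, F and µ, not thesis results:

```python
import numpy as np

# Schur complement check: with mu^2 I > 0, the block matrix
# [[Q, Q F^T], [F Q, mu^2 I]] > 0  iff  Q - (1/mu^2) Q F^T F Q > 0.
Q = np.diag([4., 3., 2., 2.])          # a positive definite Q
F = np.array([[0.1, 0.0, 0.2, 0.0],    # a small feedback gain
              [0.0, 0.1, 0.0, 0.2]])
mu = 1.0

block = np.block([[Q, Q @ F.T], [F @ Q, mu**2 * np.eye(2)]])
schur = Q - (1.0 / mu**2) * (Q @ F.T) @ (F @ Q)

assert np.all(np.linalg.eigvalsh(schur) > 0)   # the "unrolled" condition holds
assert np.all(np.linalg.eigvalsh(block) > 0)   # so does the LMI form
```

The point of the transformation is that the block form is linear in Q and W = FQ, so it can be handed directly to an LMI solver.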
Similarly, when the output is norm-bounded (||y||² ≤ α², with y = Cx) we have:

xᵀ Cᵀ C x ≤ α²

(1/α²) xᵀ Cᵀ C x ≤ 1

We impose that:

(1/α²) xᵀ Cᵀ C x ≤ V(x) ≤ 1

(1/α²) xᵀ Cᵀ C x ≤ xᵀ Q⁻¹ x

By rearranging we obtain:

xᵀ ((1/α²) Cᵀ C − Q⁻¹) x ≤ 0

(1/α²) Cᵀ C − Q⁻¹ ≤ 0

Q − (1/α²) Q Cᵀ C Q ≥ 0
Due to the linear structure of the subsystems, a fuzzy observer can be seen as a set of interpolated Luenberger observers. The error dynamics is similar to the linear case; with the estimation error e(t) = x(t) − x̂(t), it is given as:

ė(t) = (Ai − Ki Ci) e(t)

ŷ(t) = C x̂(t)
The observer, similarly to the pole placement method with state feedback, adjusts the locations of the poles of the error dynamics. There are two approaches to fuzzy observer design, depending on whether or not the premise variables z(t) depend on the state variables x̂(t) estimated by the fuzzy observer. Sharing the same premise variables z(t) and membership functions h(z(t)), the augmented system is described similarly to the actual fuzzy model of the system [33]:
Observer rule i:
IF z1(t) is M1k and ... and zm(t) is Mmk THEN

x̂˙(t) = Ai x̂(t) + Bi u(t) + Ki (y(t) − ŷ(t))
ŷ(t) = Ci x̂(t)

Blending the observer rules gives:

x̂˙(t) = Σ_{i=1}^{n} hi(z(t)) [Ai x̂(t) + Bi u(t) + Ki (y(t) − ŷ(t))]

ŷ(t) = Σ_{i=1}^{n} hi(z(t)) Ci x̂(t)
We aim at computing the gain matrices Ki, which is done similarly to the computation of the feedback gains Fi [33]:

where Gij = Ai − Kj Ci and 1 ≤ j < i ≤ n, s.t. hi(z(t)) · hj(z(t)) > 0. Note that in our case Ci = C.

where 1 ≤ j < i ≤ n, s.t. hi(z(t)) · hj(z(t)) > 0, and the observer gains are given as Ki = Q⁻¹ Ni.
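The observer recursion can be sketched for a single rule (where it reduces to a plain Luenberger observer), with invented matrices and a stabilizing gain K; forward-Euler integration and zero control input are assumed:

```python
import numpy as np

# Single-rule sketch of x̂' = A x̂ + B u + K (y - C x̂), u = 0.
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[5.], [6.]])           # makes A - K C Hurwitz

dt, steps = 1e-3, 5000
x = np.array([1.0, 0.0])             # true state
xh = np.zeros(2)                     # observer estimate (starts wrong)
for _ in range(steps):
    y = C @ x                                      # measured output
    x = x + dt * (A @ x)                           # plant
    xh = xh + dt * (A @ xh + K @ (y - C @ xh))     # observer
# The estimation error e = x - x̂ follows e' = (A - K C) e and decays to zero.
assert np.linalg.norm(x - xh) < 1e-3
```

In the fuzzy case the same update is simply blended over the rules with the weights hi(z(t)), as in the equations above.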
[Figure: Trajectories of the TS fuzzy closed-loop system without observer (a) and with observer (b); initial conditions [0.5, 0.5, 0, 0]T; plotted states: x1, x2 [rad], ẋ1, ẋ2 [rad/s]]
Comparing the two subfigures, we can remark that the settling time of the system without observer is slightly smaller than that of the system with observer.
Chapter 5
TESTING AND VALIDATION
In this chapter we present the Simscape simulation environment, in which the manipulator is recreated. Simulations are performed on the Simscape model using the controllers and observers obtained earlier, and the simulation results are presented.
5.1 Simulator
For validation and testing purposes we implemented the robot manipulator in Simscape within Simulink. Simscape is a multi-domain modeling tool designed for creating and simulating models of physical systems assembled from elementary components. These components are built from building blocks which resemble elementary physical elements. The interconnection between components is done by physical connections, which are directly integrated by the component blocks of the model [34]. A Simscape model is capable of recreating the actual system with high fidelity in several aspects. Simscape Multibody (SimMechanics) is a tool specifically designed for multibody mechanical systems, such as our robot manipulator.
We have built the Simscape model of our manipulator from 3 Solid blocks, i.e. the base, the upper arm and the lower arm. The 3 Solid blocks were connected by 2 Rotary Joint blocks. We used the parameters presented in Table 2.2. Depending on the type of system implemented, the joints were enabled or disabled to provide the joint angle positions and joint angular velocities, for the systems without or with observer, respectively. For visual purposes we imported the SolidWorks model of the KUKA KR 6 R900 SIXX (KR AGILUS) industrial robot manipulator, which can be seen in Figure 5.1. We adapted the model, i.e. our parameters differ from the actual KUKA parameters.
Figure 5.1: (a) Actual robot; (b) SolidWorks model in Simulink Simscape
The simulations were performed with the already computed controllers, with the same limitations and the same initial states for both the closed-loop system and the observer. We have made one modification in the case of the TS fuzzy controller and observer: the only values of the decay rates for which the closed-loop system and the closed-loop system with observer were found stable were αcl = 0.71 and αobs = 0.83, respectively. The trajectories (Simulink Scope plots) of the Simscape model are presented below:
Figure 5.2: Trajectories of the closed-loop system without observer using LQR
We can see in Figure 5.2 that the decay rate of the system in subfigure (a) is much faster than that of the system in subfigure (b).
Figure 5.3: Trajectories of the closed-loop system with observer using LQR (initial conditions [π/2, π/2, 0, 0]T; plotted states: x1, x2 [rad], ẋ1, ẋ2 [rad/s])
For the system in Figure 5.3, besides the decay rate difference between the two subfigures (similar to Figure 5.2), we can see how the observer slows down the dynamics of the system while the estimation error is reduced. This behavior is hard to notice in subfigure (b), since its dynamics are much slower than those of the system in subfigure (a), even without an observer.
Figure 5.4: Trajectories of the closed-loop system without observer using pole placement
We can see in Figure 5.4 that the decay rate of the system in subfigure (a) is much faster than that of the system in subfigure (b).
Figure 5.5: Trajectories of the closed-loop system with observer using pole placement (initial conditions [π/2, π/2, 0, 0]T; plotted states: x1, x2 [rad], ẋ1, ẋ2 [rad/s])
The system in Figure 5.5 shows a decay rate difference between the two subfigures similar to that in Figure 5.2. The effect introduced by the observer is noticeable in both cases.
Figure 5.6: Trajectories of the closed-loop system without and with observer using TS fuzzy control ((a) decay rate: αcl = 0.71; (b) decay rates: αcl = 0.71, αobs = 0.83)
This is the only case in which the effect of the observer, seen in subfigure (b), is beneficial, yielding no vibration and a smaller settling time. The system without observer in subfigure (a), although stable, exhibits vibrations.
We have obtained results similar to those of the analytically derived models, which leads to the conclusion that our models, controllers and observers are valid.
Chapter 6
CONCLUSIONS AND FUTURE WORKS
The purpose of this thesis was to present different control strategies for a 2 degree of
freedom robot manipulator. The Euler-Lagrange modeling approach was used to deduce the dynamic model of a generic robot manipulator. General notions of linear and nonlinear systems were presented. The nonlinear, affine model obtained in the modeling phase was brought to a linear time-varying form by compensation of the affine term, which was then linearized to obtain the linear time-invariant form. General notions of controllability and
observability were presented. Two control strategies, namely pole placement and LQR, were
presented for the linear model. Two sets of controllers were designed in order to compare not
only the two strategies, but also the effects of parameters in each control strategy. An observer
was designed for the linear system. Simulations were performed for the computed controllers on
the linear and nonlinear system, with or without observer. The simulation results were analyzed
and compared. A nonlinear Takagi-Sugeno fuzzy model was built using sector nonlinearity.
A TS fuzzy controller and observer were also computed using predefined performance guar-
antees. Simulations were performed for the computed TS fuzzy controllers on the nonlinear
model. Simulation results were analyzed and compared. A high fidelity simulator was built
using the same parameters as the ones used in the nonlinear model. The formerly computed controllers and observers were installed on the simulator and simulations were performed for the same initial states. The obtained results were analyzed and compared.
Future work includes, among others, the identification of an actual robot: building its model with the presented approaches, designing controllers and, if necessary, observers for it, then comparing the actual results with the results presented in this thesis and drawing conclusions. Another possible development is the extension of the presented controllers and observers to a manipulator having more degrees of freedom. A further development may be extending the scope of the thesis to a non-planar manipulator, i.e. one whose joint motions are not in the same plane. A final step of development may be tailoring controllers and observers for a specific manipulator in a specific domain, such as the military, medical, space or industrial domain.
Appendices
Appendix A
Nonlinear model
Below are briefly presented the Simulink and Simscape models used in the modeling phase, as well as important code snippets:
(a) Structure of the links (b) Model with COM and reference frames
function dx = Manipulator(x, u)
h12 = m2*lc2^2 + m2*l1*lc2*cos(x(2)) + i2;
h21 = m2*lc2^2 + m2*l1*lc2*cos(x(2)) + i2; % h21 = h12 (symmetric inertia matrix)
h22 = m2*lc2^2 + i2;
c11 = -2*m2*l1*lc2*sin(x(2))*x(4);
c12 = -m2*l1*lc2*sin(x(2))*x(4);
c21 = m2*l1*lc2*sin(x(2))*x(3);
c22 = 0;
M1 = inv(M);
Appendix B
Linear model and control
Below are briefly presented the Simulink and Simscape models used in the linear modeling and controller design phase, as well as important code snippets:
Figure B.1: Models used in simulation of the linear system without observer
(a) Simulink model (b) Simscape model
Figure B.2: Models used in simulation of the linear system with observer
Matlab code implementing the linearization of the nonlinear model and the computation of the
pole placement and LQR control is presented below:
IC = [0 0 0 0 0 0 0 0];
h22 = m2*lc2^2 + i2;
M1nl = inv(M);
M1l = M10;
c11 = -2*m2*l1*lc2*sin(x2)*x2d;
c12 = -m2*l1*lc2*sin(x2)*x2d;
c21 = m2*l1*lc2*sin(x2)*x1d;
c22 = 0;
Dnl = 5;
Fo = double(subs(Fnl, v, IC));
Al = [0 0 1 0; 0 0 0 1; Fpd(:,1:4)];
Bl = [0 0; 0 0; Fpd(:,7:8)];
Cl = [1 0 0 0; 0 1 0 0];
Dl = [0 0;0 0];
%%STATE(x1,x2) PRIORITY****************1%%%
R = Cl*Cl’;
alfa = 1e9;
% tau
Q = diag(alfa*[1 1 1e-6 1e-6]);
%%%STATE(x3,x4) PRIORITY%%%
% R = Cl*Cl’;
% alfa = 1e3;
% %tau
% Q = diag(alfa*[1e-1 1e-1 1 1]);
%%CONTROL PRIORITY*****************2%%%
% alfa = 1e-12;%<=13
% R = (Cl*Cl’);
%
% Q = alfa*diag([1 1 1e-6 1e-6]);
%%CONTROL PRIORITY%%%
% alfa = 1e12;%<=13
% R = alfa*(Cl*Cl’);
%
% Q = diag([1 1 1 1]);
%
[K,S,e] = lqr(Al,Bl,Q,R);
R = Cl*Cl’;
alfa = 1e6;
Q = diag(alfa*[1 1 1 1]);
%%%%%%pp1
% K = place(Al,Bl,-5*[10 10 1 1])
%%%%%%pp2
% K = place(Al,Bl,-20*[10 10 1 1])
[Lp,S,e] = lqr(Al’,Cl’,Q’,R’);
Lp = Lp’;
Appendix C
Nonlinear control
Below are briefly presented the Simulink and Simscape models used in the TS fuzzy modeling and controller design phase, as well as important code snippets:
Figure C.1: Models used in simulation of the TS fuzzy system without observer
(a) Simulink model (b) Simscape model
Figure C.2: Models used in simulation of the TS fuzzy system with observer
Matlab code implementing the computation of the feedback gains is presented below:
function k = fcn(x,Md,maxZ,minZ)
f1 = m2*l1*lc2*cos(x(2));
c11 = -2*m2*l1*lc2*sin(x(2))*x(4);
c12 = -m2*l1*lc2*sin(x(2))*x(4);
c21 = m2*l1*lc2*sin(x(2))*x(3);
c22 = 0;
Cnl = [c11 c12; c21 c22];
D = 10;
Dnl = eye(2)*D;
Fanl = -M1nl*(Cnl+Dnl);
% Fbnl = M1nl;
%
% [n,d] = numden(Fbnl);
h = zeros(1,32);
for i = 1:1:length(bin)
h(i) = w1(bin(i,1)+1)*w2(bin(i,2)+1)*w3(bin(i,3)+1)* ...
    w4(bin(i,4)+1)*w5(bin(i,5)+1);
K = K + h(i)*Md(:,:,i);
end;
k = K;
% w1(1)*w2(1)*w3(1)*w4(1)*w5(1)
% h
% pause
%k = ones(2,4);
Matlab code implementing the system of LMIs (for the closed-loop system and the observer),
as well as the way the solver was used is presented below:
close all; clear all; clc;
load('data2.mat');
M = cell(1, length(bin));
for i = 1:length(bin)
M{i} = sdpvar(2,4);
end;
X = sdpvar(4);
G = cell(32,32);
gamma = 1e-25;
for i = 1:length(bin)
for j = 1:i
G{i,j} = X*(A(:,:,i) + A(:,:,j))' - (B(:,:,i)*M{j} + B(:,:,j)*M{i})' + ...
    (A(:,:,i) + A(:,:,j))*X - (B(:,:,i)*M{j} + B(:,:,j)*M{i});
end;
end;
C = [1 0 0 0];
C1 = [0 1 0 0];
C2 = [0 0 1 0];
C3 = [0 0 0 1];
Ct = [C;C1];
Ct2 = [C2;C3];
alf = 0.71;
Co = [];
for i = 1:length(bin)
for j = 1:i
if (j == i)
Co = [Co, G{i,j} <= -4*alf*X - gamma*eye(4)];
else
Co = [Co, G{i,j} <= -4*alf*X];
end;
end;
end;
opt = sdpsettings('solver','lmilab');
sol = solvesdp(Co,[],opt);
P = inv(double(X));
Md = [];
for i = 1:length(bin)
Md(:,:,i) = double(M{i})*P;
end;
close all; clear all; clc;
% load('data2.mat');
load('lmi.mat');
C = [1 0 0 0];
C1 = [0 1 0 0];
C2 = [0 0 1 0];
C3 = [0 0 0 1];
Ct = [C;C1];
Ct2 = [C2;C3];
% r = 4;
% for i = 1:length(bin)
% Qo = obsv(A(:,:,i),Ct);
% if (r ~= rank(Qo))
% i
% end;
% zita = ctrb(A(:,:,i),B(:,:,i));
% if (r ~= rank(zita))
% i
% end;
% end;
% Q = obsv(Al,Cl);
% rank(Q);
L = cell(1, length(bin));
for i = 1:length(bin)
L{i} = sdpvar(4,2);
end;
Q = sdpvar(4);
G = cell(32,32);
gamma = 1e-25;
for i = 1:length(bin)
    for j = 1:i
        G{i,j} = (A(:,:,i) + A(:,:,j))'*Q - (L{j}*Ct + L{i}*Ct)' + ...
            Q*(A(:,:,i) + A(:,:,j)) - (L{j}*Ct + L{i}*Ct);
    end;
end;
alf = 0.83;
Co = [];
for i = 1:length(bin)
    for j = 1:i
        %if ~((h(i) == 0) || (h(j) == 0))
        %Co = [Co, (G{i,j}+G{j,i})'/2*Q + Q*(G{i,j}+G{j,i})/2 <= 2*alf*Q];
        %end
        if (j == i)
            Co = [Co, G{i,j} <= -4*alf*Q - gamma*eye(4)];
        else
            Co = [Co, G{i,j} <= -4*alf*Q];
        end;
    end;
end;
opt = sdpsettings('solver','lmilab');
sol = solvesdp(Co,[],opt);
P = inv(double(Q));
Ld = [];
for i = 1:length(bin)
Ld(:,:,i) = P*double(L{i});
end;
Bibliography
[1] Pablo Sánchez-Sánchez and Marco Antonio Arteaga-Pérez. Simplified Methodology for
Obtaining the Dynamic Model of Robot Manipulators. International Journal of Ad-
vanced Robotic Systems, 2012. Retrieved from: http://cdn.intechopen.com/
pdfs-wm/40783.pdf.
[3] Frank L. Lewis, Darren M. Dawson, and Chaouki T. Abdallah. Robot Manipulator Con-
trol: Theory and Practice, chapter 1 - Commercial Robot Manipulators, pages 1–19. CRC
Press, 2003.
[5] David Tong. Lectures on Classical Dynamics [Lecture notes]. Retrieved from: http:
//www.damtp.cam.ac.uk/user/tong/dynamics/two.pdf.
[7] Hanno Essén. The Theory of Lagrange’s Method [Lecture notes]. Retrieved from: http:
//www.mech.kth.se/˜hanno/LagrangesEqs.pdf.
[9] Derek Rowell. State-Space Representation of LTI Systems. Retrieved from: http://
web.mit.edu/2.14/www/Handouts/StateSpace.pdf, October 2002.
[10] Pieter Collins, Luc Habets, Anton Kuut, Margreet Nool, Mihaly Petreczky, and Jan H.
van Schuppen. ConPAHS - A Software Package for Control of Piecewise-Affine Hy-
brid Systems. In 2006 IEEE Conference on Computer Aided Control System De-
sign, 2006 IEEE International Conference on Control Applications, 2006 IEEE Inter-
national Symposium on Intelligent Control, pages 76 – 81. IEEE, October 2006. Re-
trieved from: http://msdl.cs.mcgill.ca/people/mosterman/campam/
cacsd06/collins.pdf.
[11] Eric Sullivan. Taylor Series Single Variable and Multi-Variable [Lecture notes]. Re-
trieved from: http://www.math.ucdenver.edu/˜esulliva/Calculus3/
Taylor.pdf.
[12] J.E. Ackermann. Control systems, robotics and automation, volume 8, chapter Chapter 13:
Control of Linear Multivariable Systems. EOLSS Publishers Co. Ltd., 2009. Retrieved
from: http://www.eolss.net/sample-chapters/c18/e6-43.pdf.
[14] Radhakant Padhi. Pole Placement Control Design [Lecture notes]. Retrieved from:
http://nptel.ac.in/courses/101108047/module9/Lecture%2021.
pdf.
[15] Gabriel Oliver Codina. Controllability and Observability [Lecture notes]. Retrieved from:
http://dmi.uib.es/˜goliver/SS/Teoria/T3_Controllability.pdf.
[17] F. L. Lewis. Linear Quadratic Regulator (LQR) State Feedback Design [Lecture notes].
Retrieved from: http://www.uta.edu/utari/acs/Lectures/lqr.pdf, Oc-
tober 2008.
[18] Radhakant Padhi. Linear Quadratic Regulator (LQR) Design [Lecture notes]. Retrieved
from: http://nptel.ac.in/courses/101108047/module11/Lecture%
2027.pdf.
[19] M. Sami Fadali. Controllability & Observability [Lecture notes]. Retrieved from: http:
//wolfweb.unr.edu/˜fadali/ee471/ContolObserve.pdf.
[21] Bill Messner and Dawn Tilbury. Control Tutorials for MATLAB and Simulink.
[22] Kamyar Mehran. Takagi-Sugeno Fuzzy Modeling for Process Control [Lecture notes].
Retrieved from: https://www.staff.ncl.ac.uk/damian.giaouris/pdf/
IA%20Automation/TS%20FL%20tutorial.pdf, 2008.
[23] Kazuo Tanaka and Hua O. Wang. Fuzzy Control Systems Design and Analysis: A Linear
Matrix Inequality Approach. John Wiley & Sons, 2004.
[24] Huaping Liu, Fuchun Sun, Zengqi Sun, and Chunwen Li. Partial-State-Feedback Con-
troller Design for Takagi-Sugeno Fuzzy Systems Using Homotopy Method . In Proceed-
ings of the 2004, American Control Conference Boston, Massachusetts, June 30 - July 2,
2004, volume 1, pages 447 – 452. IEEE, July 2004.
[25] Yongru Gu, H. O. Wang, K. Tanaka, and L. G. Bushnell. Fuzzy control of nonlinear time-
delay systems: stability and design issues. In American Control Conference, volume 6,
pages 4771 – 4776. IEEE, June 2001.
[26] F. Khaber, K. Zehar, and A. Hamzaoui. State Feedback Controller Design via Takagi-
Sugeno Fuzzy Model: LMI Approach. International Journal of Mechanical, Aerospace,
Industrial, Mechatronic and Manufacturing Engineering, 2(6), 2008.
[27] P. Korba and P.M. Frank. A model-based fuzzy control: LMI-based approach. Retrieved
from: http://www.rst.e-technik.tu-dortmund.de/cms/Medienpool/
Downloads/Veranstaltungen/GMA-Fachausschuss/Publikationen/
workshop99/Korba__Frank.pdf.
[28] Enric Trillas and Luka Eciolaza. Fuzzy Logic: An Introductory Course for Engineering
Students. Springer, 2015.
[29] Morteza Seidi, Marzieh Hajiaghamemar, and Bruce Segee. Fuzzy Controllers- Recent
Advances in Theory and Applications, chapter 18 - Fuzzy Control Systems: LMI-Based
Design, pages 441–464. InTech, 2012.
[30] Zsófia Lendek. Nonlinear control using Takagi-Sugeno models [Lecture notes]. UT Press,
2013.
[31] Kazuo Tanaka, Takayuki Ikeda, and Hua O. Wang. Fuzzy Regulators and Fuzzy Ob-
servers: Relaxed Stability Conditions and LMI-Based Designs. In IEEE Transactions on
Fuzzy Systems, volume 6, pages 250–265, May 1998.
[32] Miguel Bernal and Petr Hušek. Non–Quadratic Performance Design For Takagi–Sugeno
Fuzzy Systems. International Journal of Applied Mathematics and Computer Science,
15(3):383–391, 2005.