
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 22, NO. 3, MARCH 2011
Brief Papers
Real-Time Recurrent Neural State Estimation
Alma Y. Alanis, Edgar N. Sanchez, Alexander G. Loukianov,
and Marco A. Perez
Abstract—A nonlinear discrete-time neural observer for discrete-time unknown nonlinear systems in the presence of external disturbances and parameter uncertainties is presented. It is based on a discrete-time recurrent high-order neural network trained with an extended Kalman-filter-based algorithm. This brief includes the stability proof based on the Lyapunov approach. The applicability of the proposed scheme is illustrated by real-time implementation for a three-phase induction motor.
Index Terms—Discrete-time nonlinear systems, extended Kalman filtering, neural state estimation, real-time implementation, recurrent neural networks.
I. INTRODUCTION
During the past four decades, state estimation of dynamic
systems has been an active topic of research in different
areas such as automatic control applications, fault detection,
monitoring, and modeling, among others [1]. This is due to the
fact that nonlinear control techniques usually assume complete
accessibility for the system state, which is not always possible
(cost, technological constraints, etc.) [2]. For this reason, non-
linear state estimation is a very important topic for nonlinear
control [3], [4]. State estimation has been studied by many
authors, who have obtained interesting results in different
directions. Most of those results require the use of a special
nonlinear transformation [5] or a linearization technique [6],
[7]. Such approaches can be considered as a relatively simple
method to construct nonlinear observers; however, they do not
consider uncertainties [8]–[10]. In practice, there exist external
disturbances and parameter uncertainties. Observers that have
a good performance even in presence of model and disturbance
uncertainties are called robust, but their design process is
too complex [1], [11]–[13]. All the approaches mentioned above require at least partial prior knowledge of the plant model. Recently, another kind of observer has emerged: neural observers [4], [14]–[17], for unknown plant dynamics.
Manuscript received March 1, 2008; revised April 9, 2010, August 12, 2010,
December 10, 2010, and December 14, 2010; accepted December 15, 2010.
Date of publication January 17, 2011; date of current version March 2, 2011.
This work was supported in part by Consejo Nacional de Ciencia y Tecnología
Mexico, under Project 57801Y and Project 103191Y.
A. Y. Alanis and M. A. Perez are with the Centro Universitario de Ciencias
Exactas e Ingenierías, Universidad de Guadalajara, Jalisco 44430, Mexico
(e-mail: almayalanis@gmail.com; marco.perez@cucei.udg.mx).
E. N. Sanchez and A. G. Loukianov are with the Centro de Investi-
gación y Estudios Avanzados del Instituto Politécnico Nacional, Unidad
Guadalajara, Jalisco 45091, Mexico (e-mail: sanchez@gdl.cinvestav.mx;
louk@gdl.cinvestav.mx).
Color versions of one or more of the figures in this brief are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNN.2010.2103322
It is worth mentioning that state estimation for neural net-
works (NNs) (a special class of complex systems) is also an
interesting topic, which has been studied lately [12], [18].
NNs have grown to be a well-established methodology,
which allows solving very difficult problems in engineering,
as exemplified by their applications to modeling and control
of general nonlinear and complex systems. In particular, the
use of recurrent NNs for modeling and learning has rapidly
increased in recent years ([17], [19], and references therein).
There exist different training algorithms for NNs, which,
however, normally encounter some technical problems such
as local minima, slow learning, and high sensitivity to initial
conditions, among others [20]. As a viable alternative, new
training algorithms, e.g., those based on Kalman filtering, have
been proposed [6], [21]–[24]. Due to the fact that training an NN typically results in a nonlinear problem, an extended Kalman filter (EKF) is required [21], [25].
As is well known [17], recurrent high-order NNs
(RHONNs) offer many advantages for modeling of complex
nonlinear systems. On the other hand, EKF training for NNs
allows the reduction of the epoch size and the number of
required neurons [21]. Considering these two facts, we propose
the use of the EKF training for RHONNs in order to model
complex nonlinear systems.
Parameter estimation [20], [26] and state estimation [27] are related in the sense of how the measurements from sensors can be used to obtain an accurate model of the plant to be controlled [28]. For many control applications, it is advisable to estimate the system state, or at least part of it [28]. Along these lines, in this brief, a RHONN is used to develop an adaptive recurrent neural observer for nonlinear systems whose mathematical model is assumed to be unknown. It is important to note that the proposed scheme does not deal with plant parameter estimation. The RHONN provides a mathematical model for the plant and, at the same time, estimates the plant state from output measurements, thus avoiding plant parameter estima-
tion. The proposed observer constitutes a meaningful result
in order to develop modern control algorithms for unknown
nonlinear systems operating under uncertain conditions and
with nonmeasurable state variables [4]. The learning algorithm
for the RHONN is implemented using an EKF. The respective
stability analysis, based on the Lyapunov approach, is included
for the proposed scheme. The applicability of this scheme is
illustrated by real-time state estimation for an electric three-phase induction motor.
The main contributions of this brief are the following:
1) a neural observer based on a RHONN and trained online
with an EKF-based algorithm; 2) the respective stability analy-
sis for the proposed scheme; and 3) the real-time implementa-
tion of the proposed scheme for a three-phase induction motor,
without the need to determine the induction motor parameters,
in the presence of unknown disturbances such as parametric
and load torque variations. These contributions are meaningful
because they allow the building of an adaptive neural model for
nonlinear systems, which is required to implement nonlinear
controllers.
II. MATHEMATICAL PRELIMINARIES
In this section, important mathematical preliminaries
required in future sections are presented.
A. Stability Definitions
This section closely follows [29]. Throughout this brief, we use k as the sampling step, k ∈ {0} ∪ Z⁺, |•| as the absolute value, and ‖•‖ as the Euclidean norm for vectors and as any adequate norm for matrices. Consider a multiple input–multiple output (MIMO) nonlinear system

x(k+1) = F(x(k), u(k))    (1)
y(k) = h(x(k))    (2)

where x ∈ ℝⁿ, u ∈ ℝᵐ, and F: ℝⁿ × ℝᵐ → ℝⁿ is a nonlinear function.
Definition 1: System (1) is said to be forced, or to have
inputs. In contrast, a system described by an equation without
explicit presence of an input u, that is
x (k +1) = F (x (k))
is said to be unforced. It can be obtained after selecting the
input u as a feedback function of the state
u (k) = ξ (x (k)) . (3)
Such substitution eliminates u and yields an unforced
system [30]
x (k +1) = F (x (k) , ξ (x (k))) . (4)
Definition 2: The solution of (1)–(3) is semiglobally uniformly ultimately bounded (SGUUB) if, for any Ω, a compact subset of ℝⁿ, and all x(k₀) ∈ Ω, there exist an ε > 0 and a number N(ε, x(k₀)) such that ‖x(k)‖ < ε for all k ≥ k₀ + N [29].
In other words, the solution of (1) is said to be SGUUB if, for any a priori given (arbitrarily large) bounded set Ω and any a priori given (arbitrarily small) set Ω₀, which contains (0, 0) as an interior point, there exists a control (3) such that every trajectory of the closed-loop system starting from Ω enters the set Ω₀ = {x(k) | ‖x(k)‖ < ε} in finite time and remains in it thereafter, as displayed in Fig. 1.
Theorem 1: Let V(x(k)) be a Lyapunov function for a discrete-time system (1), which satisfies the following properties:

γ₁(‖x(k)‖) ≤ V(x(k)) ≤ γ₂(‖x(k)‖)
ΔV(x(k)) = V(x(k+1)) − V(x(k)) ≤ −γ₃(‖x(k)‖) + γ₃(ζ)

where ζ is a positive constant, γ₁(•) and γ₂(•) are strictly increasing functions, and γ₃(•) is a continuous nondecreasing function. Thus, if

ΔV(x) < 0 for ‖x(k)‖ > ζ

Fig. 1. SGUUB, schematic representation.

then x(k) is uniformly ultimately bounded, i.e., there is a time instant k_T such that ‖x(k)‖ < ζ, ∀ k ≥ k_T [29].
Definition 3: A subset S ⊂ ℝⁿ is bounded if there exists r > 0 such that ‖x‖ ≤ r for all x ∈ S [30].
III. NEURAL STATE ESTIMATION
In this section, we consider the estimation of the state of a discrete-time nonlinear system, which is assumed to be observable, given by

x(k+1) = F(x(k), u(k)) + d(k)
y(k) = Cx(k)    (5)

where x ∈ ℝⁿ is the state vector of the system, u(k) ∈ ℝᵐ is the input vector, y(k) ∈ ℝᵖ is the output vector, C ∈ ℝᵖˣⁿ is a known output matrix, d(k) ∈ ℝⁿ is a disturbance vector, and F(•) is a smooth vector field with entries Fᵢ(•). Hence, (5) can be rewritten as

x(k) = [x₁(k) … xᵢ(k) … xₙ(k)]ᵀ
d(k) = [d₁(k) … dᵢ(k) … dₙ(k)]ᵀ
xᵢ(k+1) = Fᵢ(x(k), u(k)) + dᵢ(k),  i = 1, …, n
y(k) = Cx(k).    (6)
For the system (6), we propose a recurrent neural Luenberger observer (RHONO) with the following structure:

x̂(k) = [x̂₁(k) … x̂ᵢ(k) … x̂ₙ(k)]ᵀ
x̂ᵢ(k+1) = wᵢᵀ zᵢ(x̂(k), u(k)) + gᵢᵀ e(k)
ŷ(k) = C x̂(k),  i = 1, …, n    (7)

with gᵢ ∈ ℝᵖ and zᵢ(x(k), u(k)) defined as

zᵢ(x(k), u(k)) = [zᵢ₁  zᵢ₂  …  zᵢLᵢ]ᵀ = [ ∏_{j∈I₁} ξᵢⱼ^{dᵢⱼ(1)}   ∏_{j∈I₂} ξᵢⱼ^{dᵢⱼ(2)}   …   ∏_{j∈ILᵢ} ξᵢⱼ^{dᵢⱼ(Lᵢ)} ]ᵀ    (8)
with dᵢⱼ(k) being nonnegative integers and ξᵢ defined as follows:

ξᵢ = [ξᵢ₁ … ξᵢₙ  ξᵢ(n+1) … ξᵢ(n+m)]ᵀ = [S(x₁) … S(xₙ)  u₁ … uₘ]ᵀ.    (9)

In (9), u = [u₁, u₂, …, uₘ]ᵀ is the input vector to the NN, and S(•) is defined by

S(ς) = 1 / (1 + exp(−βς)),  β > 0    (10)
where ς is any real-valued variable. Therefore, the weight estimation error is defined as

w̃ᵢ(k) = wᵢ(k) − wᵢ*    (11)

where wᵢ* is the ideal weight vector and wᵢ its estimate [31].
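As a small illustration of (8)–(10), the activation and a high-order regressor can be sketched in Python. The index sets, exponents, and dimensions below are hypothetical choices, since the RHONN structure is selected experimentally:

```python
import math

def S(s, beta=1.0):
    """Logistic activation (10): S(s) = 1 / (1 + exp(-beta * s)), beta > 0."""
    return 1.0 / (1.0 + math.exp(-beta * s))

def xi_vector(x, u):
    """xi as in (9): sigmoids of the state entries followed by the raw inputs."""
    return [S(v) for v in x] + list(u)

def z_vector(x, u, terms):
    """High-order terms as in (8): each entry of z is a product of xi
    components raised to nonnegative integer exponents.  `terms` is a list
    of {index: exponent} dicts, one dict per entry of z."""
    xi = xi_vector(x, u)
    return [math.prod(xi[j] ** d for j, d in t.items()) for t in terms]

# Hypothetical structure: two states, one input, three high-order terms.
terms = [{0: 1}, {0: 1, 1: 1}, {2: 2}]
z = z_vector([0.5, -1.0], [0.2], terms)
print(len(z))
```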
As discussed in [31], the general discrete-time nonlinear system (5), which is assumed to be observable, can be approximated by the following discrete-time RHONN parallel representation:

x(k+1) = W*ᵀ z(x(k), u(k)) + εz    (12)

or, component-wise for each state,

xᵢ(k+1) = wᵢ*ᵀ zᵢ(x(k), u(k)) + εzᵢ,  i = 1, …, n    (13)

where xᵢ is the i-th plant state and εzᵢ is a bounded approximation error, which can be reduced by increasing the number of adjustable weights [31]. Let us assume that there exists an optimal weight vector wᵢ* ∈ ℝ^{Lᵢ} such that ‖εzᵢ‖ is minimized on a compact set Ωzᵢ ⊂ ℝ^{Lᵢ}; this vector is an artificial quantity required only for analytical purposes [31]. In general, it is assumed that this vector exists and is constant but unknown. A disadvantage of this type of NN is that, to the best of our knowledge, there does not exist a methodology to determine its detailed structure; therefore, it has to be selected
experimentally. Let us define wᵢ(k) as the estimate of wᵢ*; then, the weight estimation error w̃ᵢ(k) and the state observer error x̃ᵢ(k) are defined, respectively, as

w̃ᵢ(k) = wᵢ(k) − wᵢ*    (14)

and

x̃ᵢ(k) = xᵢ(k) − x̂ᵢ(k).    (15)
Since wᵢ* is constant

w̃ᵢ(k+1) − w̃ᵢ(k) = wᵢ(k+1) − wᵢ(k),  ∀ k ∈ {0} ∪ Z⁺.
The weight vectors are updated online with a decoupled EKF, described by

wᵢ(k+1) = wᵢ(k) + ηᵢ Kᵢ(k) e(k)
Kᵢ(k) = Pᵢ(k) Hᵢ(k) Mᵢ(k),  i = 1, …, n
Pᵢ(k+1) = Pᵢ(k) − Kᵢ(k) Hᵢᵀ(k) Pᵢ(k) + Qᵢ(k)    (16)

with

Mᵢ(k) = [Rᵢ(k) + Hᵢᵀ(k) Pᵢ(k) Hᵢ(k)]⁻¹    (17)
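As a minimal numerical sketch (not the authors' implementation), one decoupled EKF step (16) and (17) for a single weight vector can be written with NumPy as follows; the dimensions and numeric values are hypothetical:

```python
import numpy as np

def ekf_weight_update(w, P, H, e, eta, Q, R):
    """One decoupled EKF step (16) and (17) for the i-th weight vector.
    w: (L,) weights; P: (L, L) weight covariance; H: (L, p) derivatives of
    the output error with respect to the weights; e: (p,) output error."""
    M = np.linalg.inv(R + H.T @ P @ H)   # (17)
    K = P @ H @ M                        # Kalman gain
    w_next = w + eta * (K @ e)           # weight update
    P_next = P - K @ H.T @ P + Q         # covariance update
    return w_next, P_next

# Hypothetical sizes: L = 3 weights, p = 1 output component.
L, p = 3, 1
w = np.zeros(L)
P = 1e4 * np.eye(L)   # covariances act as design parameters
Q = 5e2 * np.eye(L)
R = 1e4 * np.eye(p)
H = np.ones((L, p))
e = np.array([0.1])
w, P = ekf_weight_update(w, P, H, e, eta=0.5, Q=Q, R=R)
print(w.shape, P.shape)
```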
Fig. 2. Neural observer scheme.
and the output error

e(k) = y(k) − ŷ(k).    (18)

Then the dynamics of x̃ᵢ(k+1) can be expressed as

x̃ᵢ(k+1) = xᵢ(k+1) − x̂ᵢ(k+1).

Therefore

x̃ᵢ(k+1) = wᵢ*ᵀ zᵢ(x(k), u(k)) + εzᵢ − wᵢᵀ(k) zᵢ(x̂(k), u(k)) − gᵢᵀ e(k).

Adding and subtracting wᵢ*ᵀ zᵢ(x̂(k), u(k)), it can be written as

x̃ᵢ(k+1) = w̃ᵢᵀ(k) zᵢ(x̂(k), u(k)) + ε′zᵢ − gᵢᵀ e(k)    (19)

with

ε′zᵢ = wᵢ*ᵀ z̃ᵢ(x̃(k), u(k)) + εzᵢ
z̃ᵢ(x̃(k), u(k)) = zᵢ(x(k), u(k)) − zᵢ(x̂(k), u(k)).

On the other hand, the dynamics of (14) are

w̃ᵢ(k+1) = w̃ᵢ(k) − ηᵢ Kᵢ(k) e(k).    (20)
Considering (16)–(20), we establish the main result of this
brief as the following theorem.
Theorem 2: For system (6), the RHONO (7), trained with the EKF-based algorithm (16), ensures that the estimation error (15) and the output error (18) are SGUUB; moreover, the RHONO weights remain bounded.
Proof: See Appendix.
IV. INDUCTION MOTOR APPLICATIONS
In this section, we apply the above-developed neural observer to a three-phase induction motor, which is one of the most used actuators in industrial applications due to its reliability, ruggedness, and relatively low cost. Modeling an induction motor is challenging, since its dynamics are described by a multivariable, coupled, and highly nonlinear system [27], [32].
applications is that electrical parameters might not be accu-
rately known, or might significantly vary when the motor is
operating, which has motivated various approaches for their
identification [20], [26], [33], [34]. Among the wide range of
available contributions in this direction, one can find results
for the estimation of a limited number of required electrical
parameters under different assumptions [20], [26], [33], [34].
Traditionally, induction motor control is performed in steady
state for constant speed profiles [33], [34]; however, for
Fig. 3. Schematic representation of the control prototype.
many applications, such as electric vehicles, mass transportation, advanced drilling, and steel milling operations, among others, it is necessary to track time-varying reference signals. Therefore,
recent works on the control of induction motors are focused on field-oriented control [35], exact input–output linearization, adaptive input–output linearization, and direct torque control ([35] and references therein).
depends significantly on accurate knowledge of the system
states. In practice, fluxes are not easily measurable. Therefore,
an observer is needed to estimate them [11]. Besides, most of those works were developed for a continuous-time model of the motor. In [32], a discrete-time model is proposed, as well as an observer; in [1] and [8], continuous-time observers are studied. All these observers assume that the parameters and load torque of the motor model are known. In [1], [11], and [36], continuous-time observers for induction motors are considered; in [8], a discrete-time observer based on linearization is proposed. All the mentioned observers are designed on the basis of the physical model of the motor, which makes the resulting control sensitive to plant parameter variations. In contrast, we consider the state estimation problem assuming
that the plant parameters as well as external disturbances (load
torque) are unknown. Therefore, it is not possible to establish
a comparison between the proposed scheme and the above-
mentioned methodologies.
The experiments are performed using a benchmark, which
includes a PC for supervising, a PWM unit for the power stage,
a dSPACE DS1104 board (dSPACE is a registered trademark
of dSPACE GmbH, Germany) for data acquisition and control
of the system, and a three-phase induction motor as the plant
to be controlled, with the following characteristics: 220-V,
60 Hz, 0.19 kW, 1660 rpm, 1.3 A [37]. Fig. 3 presents a
schematic representation of the benchmark used for experi-
ments. Fig. 4 displays a view of the PC and the DS1104 board
and the PWM driver. The DS1104 board allows downloading
applications directly from Simulink (MATLAB and Simulink
are registered trademarks of MathWorks Inc., USA). The
experiment implemented on this benchmark uses the NN state estimation discussed in Section III; it is performed with a constant load torque applied as an inertial load coupled to the induction motor, as shown in Fig. 5.
(a) (b)
Fig. 4. (a) PC and the DS1104 board and (b) PWM driver.
Encoder Induction Motor Gear box Load
Fig. 5. Induction motor coupled with the load and the encoder.
A. Motor Model
A sixth-order discrete-time induction motor model in the stator fixed reference frame (α, β), under the assumptions of equal mutual inductances and a linear magnetic circuit, is given as [32]

ω(k+1) = ω(k) − (T/J) T_L(k) + (μ/α)(1 − a) M [i_β(k) ψ_α(k) − i_α(k) ψ_β(k)]
ψ_α(k+1) = cos(n_p θ(k+1)) ρ₁(k) − sin(n_p θ(k+1)) ρ₂(k)
ψ_β(k+1) = sin(n_p θ(k+1)) ρ₁(k) + cos(n_p θ(k+1)) ρ₂(k)
i_α(k+1) = ϕ_α(k) + (T/σ) u_α(k) + d₁(k)
i_β(k+1) = ϕ_β(k) + (T/σ) u_β(k) + d₂(k)
θ(k+1) = θ(k) + ω(k) T − (T_L(k)/J) T² + (μ/α)[T − (1 − a)/α] M [i_β(k) ψ_α(k) − i_α(k) ψ_β(k)]    (21)

with

ρ₁(k) = a [cos(φ(k)) ψ_α(k) + sin(φ(k)) ψ_β(k)] + b [cos(φ(k)) i_α(k) + sin(φ(k)) i_β(k)]
ρ₂(k) = a [cos(φ(k)) ψ_α(k) − sin(φ(k)) ψ_β(k)] + b [cos(φ(k)) i_α(k) − sin(φ(k)) i_β(k)]
ϕ_α(k) = i_α(k) + αβT ψ_α(k) + n_p βT ω(k) ψ_α(k) − γT i_α(k)
ϕ_β(k) = i_β(k) + αβT ψ_β(k) + n_p βT ω(k) ψ_β(k) − γT i_β(k)
φ(k) = n_p θ(k)

and b = (1 − a)M, α = R_r/L_r, γ = (M² R_r)/(σ L_r²) + R_s/σ, σ = L_s − M²/L_r, β = M/(σ L_r), a = e^(−αT), μ =
Fig. 6. Real-time rotor speed estimation (plant signal in solid line and neural
signal in dashed line).
Fig. 7. Real-time alpha flux estimation (plant signal in solid line and neural
signal in dashed line).
(M n_p)/(J L_r), where L_s, L_r, and M are the stator, rotor, and mutual inductances, respectively; R_s and R_r are the stator and rotor resistances, respectively; n_p is the number of pole pairs; i_α and i_β represent the currents in the α and β phases, respectively; ψ_α and ψ_β represent the fluxes in the α and β phases, respectively; and θ is the rotor angular displacement.
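For illustration, the model update (21) can be transcribed directly into code. The Python sketch below follows the model as written above; the numeric parameter values are placeholders, not the identified parameters of the experimental motor:

```python
import math

# Placeholder parameter values (illustrative only, not the experimental motor's).
Rs, Rr, Ls, Lr, M_ind = 14.7, 10.9, 0.6, 0.6, 0.58   # ohms and henries
J, n_p, T = 0.01, 2, 0.0005                          # inertia, pole pairs, sample time

alpha = Rr / Lr
sigma = Ls - M_ind**2 / Lr
beta = M_ind / (sigma * Lr)
gamma = M_ind**2 * Rr / (sigma * Lr**2) + Rs / sigma
a = math.exp(-alpha * T)
b = (1.0 - a) * M_ind
mu = M_ind * n_p / (J * Lr)

def motor_step(x, u, TL, d=(0.0, 0.0)):
    """One step of the discrete-time model (21); x = (w, psi_a, psi_b, i_a, i_b, th)."""
    w, psi_a, psi_b, i_a, i_b, th = x
    u_a, u_b = u
    torque = M_ind * (i_b * psi_a - i_a * psi_b)
    w1 = w - (T / J) * TL + (mu / alpha) * (1.0 - a) * torque
    th1 = th + w * T - (TL / J) * T**2 + (mu / alpha) * (T - (1.0 - a) / alpha) * torque
    phi = n_p * th
    rho1 = (a * (math.cos(phi) * psi_a + math.sin(phi) * psi_b)
            + b * (math.cos(phi) * i_a + math.sin(phi) * i_b))
    rho2 = (a * (math.cos(phi) * psi_a - math.sin(phi) * psi_b)
            + b * (math.cos(phi) * i_a - math.sin(phi) * i_b))
    c1, s1 = math.cos(n_p * th1), math.sin(n_p * th1)
    psi_a1 = c1 * rho1 - s1 * rho2
    psi_b1 = s1 * rho1 + c1 * rho2
    i_a1 = (i_a + alpha * beta * T * psi_a + n_p * beta * T * w * psi_a
            - gamma * T * i_a + (T / sigma) * u_a + d[0])
    i_b1 = (i_b + alpha * beta * T * psi_b + n_p * beta * T * w * psi_b
            - gamma * T * i_b + (T / sigma) * u_b + d[1])
    return (w1, psi_a1, psi_b1, i_a1, i_b1, th1)

x = motor_step((0.0,) * 6, (200.0, 200.0), TL=0.0)
print(len(x))
```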
B. Neural Observer Design
To this end, we apply the RHONO (Fig. 2), developed in Section III, to estimate the state of the three-phase induction motor (21):

x̂₁(k+1) = w₁₁(k) S(x̂₁(k)) + w₁₂(k) S(x̂₁(k)) S(x̂₃(k)) x̂₄(k) + w₁₃(k) S(x̂₁(k)) S(x̂₂(k)) x̂₅(k) + g₁ᵀ e(k)
x̂₂(k+1) = w₂₁(k) S(x̂₁(k)) S(x̂₃(k)) + w₂₂(k) x̂₅(k) + g₂ᵀ e(k)
x̂₃(k+1) = w₃₁(k) S(x̂₁(k)) S(x̂₂(k)) + w₃₂(k) x̂₄(k) + g₃ᵀ e(k)
x̂₄(k+1) = w₄₁(k) S(x̂₂(k)) + w₄₂(k) S(x̂₃(k)) + w₄₃(k) S(x̂₄(k)) + w₄₄(k) u_α(k) + g₄ᵀ e(k)
x̂₅(k+1) = w₅₁(k) S(x̂₂(k)) + w₅₂(k) S(x̂₃(k)) + w₅₃(k) S(x̂₅(k)) + w₅₄(k) u_β(k) + g₅ᵀ e(k)    (22)
Fig. 8. Real-time beta flux estimation (plant signal in solid line and neural
signal in dashed line).
Fig. 9. Real-time alpha current estimation (plant signal in solid line and
neural signal in dashed line).
where x̂₁ estimates the angular speed ω; x̂₂ and x̂₃ estimate the fluxes ψ_α and ψ_β, respectively; and, finally, x̂₄ and x̂₅ estimate the currents i_α and i_β, respectively. The inputs u_α and u_β are selected as chirp functions.
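A one-step evaluation of the observer structure (22) can be sketched as follows; the weights and gains are placeholders (in the experiments they are adapted online by the EKF), and the output error is taken as a scalar for simplicity:

```python
import math

def S(s, beta=1.0):
    """Logistic activation (10)."""
    return 1.0 / (1.0 + math.exp(-beta * s))

def rhono_step(xh, u, e, w, g):
    """One step of the observer (22).  xh: five estimated states; u =
    (u_alpha, u_beta); e: output error (scalar here for simplicity);
    w[i][j], g[i]: weights and output-injection gains."""
    x1, x2, x3, x4, x5 = xh
    ua, ub = u
    return (
        w[1][1]*S(x1) + w[1][2]*S(x1)*S(x3)*x4 + w[1][3]*S(x1)*S(x2)*x5 + g[1]*e,
        w[2][1]*S(x1)*S(x3) + w[2][2]*x5 + g[2]*e,
        w[3][1]*S(x1)*S(x2) + w[3][2]*x4 + g[3]*e,
        w[4][1]*S(x2) + w[4][2]*S(x3) + w[4][3]*S(x4) + w[4][4]*ua + g[4]*e,
        w[5][1]*S(x2) + w[5][2]*S(x3) + w[5][3]*S(x5) + w[5][4]*ub + g[5]*e,
    )

# Placeholder weights and gains (in the experiments they adapt online via the EKF).
w = {i: {j: 0.1 for j in range(1, 5)} for i in range(1, 6)}
g = {i: 0.01 for i in range(1, 6)}
xh = rhono_step((0.0,) * 5, (1.0, 1.0), e=0.0, w=w, g=g)
print(len(xh))
```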
It is important to note that for induction motor applications it is unusual to use chirp signals (sine waves whose frequency increases at a linear rate with respect to time) as inputs [25]. In this brief, they are used for state estimation purposes, to excite most of the plant dynamics. For modeling of nonlinear system structures, it is important to represent a wide range of frequencies. Input signals that attempt to meet this demand include pseudorandom binary sequences, random Gaussian noise, and chirp (swept-sinusoid) signals. All of these input signals have advantages, including independent noise estimation and reduction of dataset size, among others; however, chirp signals have generally been found to provide more consistent results and have been used successfully in the past for modeling the dynamics of complex nonlinear systems. For supplementary information, see also [38]–[40].
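A chirp input of the kind described above can be generated as follows; the amplitude, frequency ranges, and sampling time match those reported in Section IV-C, while the 0.3-s duration is an illustrative choice:

```python
import numpy as np

def chirp(t, amp, f0, f1, t1):
    """Linear chirp: instantaneous frequency sweeps from f0 to f1 over [0, t1]."""
    k = (f1 - f0) / t1                   # sweep rate in Hz/s
    return amp * np.sin(2.0 * np.pi * (f0 * t + 0.5 * k * t**2))

T = 0.0005                               # sampling time used in the experiments
t = np.arange(600) * T                   # 0.3 s of samples (illustrative span)
u_alpha = chirp(t, amp=200.0, f0=0.0, f1=150.0, t1=0.3)
u_beta = chirp(t, amp=200.0, f0=0.0, f1=200.0, t1=0.3)
print(u_alpha.shape)
```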
The NN training is performed online, and all of its states are initialized randomly. The associated covariance matrices are initialized as diagonal matrices, and their nonzero elements are heuristically selected as Pᵢ(0) = 10000, Qᵢ(0) = 500, and Rᵢ(0) = 10000 (i = 1, …, 5), respectively. It is important to
Fig. 10. Real-time beta current estimation (plant signal in solid line and
neural signal in dashed line).
TABLE I
STANDARD DEVIATION AND MEAN VALUE FOR THE STATE ESTIMATION ERROR

Variable     Standard deviation   Mean value
ω − x̂₁      0.0995 rad/s         −1.6478 × 10⁻⁴ rad/s
ψ_α − x̂₂    0.0028 Wb            3.0836 × 10⁻⁴ Wb
ψ_β − x̂₃    0.0026 Wb            −3.9090 × 10⁻⁴ Wb
i_α − x̂₄    0.0332 A             −7.0384 × 10⁻⁴ A
i_β − x̂₅    0.0381 A             −2.9987 × 10⁻⁴ A
consider that for the EKF learning algorithm the covariances
are used as design parameters [21], [41]. This NN structure
is determined heuristically in order to minimize the state
estimation error; however, it has a block control form [27]
in order to simplify the synthesis of nonlinear controllers.
C. Real-Time Neural State Estimation
This subsection presents the RHONO previously proposed for the discrete-time induction motor model, as applied in real time to the benchmark described above. During the estimation process, the plant and the NN operate in open loop. Both of them have the same input vector [u_α  u_β]ᵀ, where u_α and u_β are chirp functions with 200-V amplitude and incremental frequencies from 0 to 150 Hz and from 0 to 200 Hz, respectively. The implementation is performed with a sampling time of 0.0005 s. Fig. 6 displays the estimation performance for the rotor speed; Figs. 7 and 8 present the estimation performance for the fluxes in phases α and β, respectively; and Figs. 9 and 10 portray the estimation performance for the currents in phases α and β, respectively.
Finally, Table I presents the standard deviation for the state
estimation errors.
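The statistics in Table I are simply the standard deviation and mean of each logged estimation-error signal. As a sketch, with hypothetical logged arrays:

```python
import numpy as np

def error_stats(plant, neural):
    """Standard deviation and mean of a state estimation error, as in Table I."""
    err = np.asarray(plant) - np.asarray(neural)
    return float(np.std(err)), float(np.mean(err))

# Hypothetical logged signals for one state variable.
rng = np.random.default_rng(0)
plant = np.sin(np.linspace(0.0, 1.0, 1000))
neural = plant + 0.01 * rng.standard_normal(1000)
std, mean = error_stats(plant, neural)
print(std > 0.0)
```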
It is worth mentioning that for real-time state estimation there is no delay; in fact, all required calculations are performed between consecutive samples. Considering that the induction motor is working in open loop, there is no effect of the
RHONN structure on its performance. It is important to remark
that the state estimation proposed in this brief is performed
using a chirp signal in order to excite most of the plant
Fig. 11. Applied load torque.
Fig. 12. Output trajectory tracking error with a discrete-time nonlinear flux observer.
dynamics as is shown in Figs. 6–10. Because of the use of this
signal, neither the speed nor the flux settles to nominal values.
The included real-time results illustrate the effectiveness of
the proposed neural observer as applied to an electric three-
phase squirrel-cage induction motor, without knowledge or
estimation of the parameters, in the presence of parametric
variations caused by winding heating due to motor operation
and load torque variations, as displayed in Fig. 11.
D. Comparison for an Output Trajectory Tracking Application
This brief only deals with state estimation for a three-phase
induction motor; however, in order to establish a comparison
of the proposed neural observer with a typical nonlinear
observer [32], in this subsection we include real-time results
for a control law designed on the basis of the block control
and the sliding mode techniques developed initially with a
typical nonlinear flux observer [25] and then implemented
using the neural observer proposed in this brief [42]. Both
experiments are performed including a time-varying load
torque as presented in Fig. 11, which is applied by means of
a load-torque-controlled dc motor, coupled through a gearbox;
then, the torque in the rotor side of the induction motor is
Fig. 13. Output trajectory tracking error with the proposed neural observer.
TABLE II
COMPARISON FOR THE SPEED OUTPUT TRAJECTORY TRACKING ERROR

Observer      Standard deviation   Mean value
Traditional   1.3019 rad/s         −0.0283 rad/s
Neural        0.2359 rad/s         −0.0054 rad/s
defined as the applied torque amplified by the gearbox gain.
For the state estimation using the proposed neural observer,
the explicit knowledge or the estimation of the load torque is
not required. Fig. 11 is included just for illustration. Fig. 12
displays the speed tracking error for the typical nonlinear
observer [25] and Fig. 13 presents the speed tracking error for
the same control scheme using the proposed neural observer
[42]. Table II includes the standard deviation and the mean
value for speed output trajectory tracking errors. It is easy
to see that the neural observer produces a better performance;
additionally, it does not require the measurement or estimation of the load torque, or knowledge of the plant parameters
required for typical nonlinear observers [32]. It is important
to remark that the oscillatory nature of the tracking error is
due to many causes such as reversing speed, discontinuities
in the load torque (Fig. 11), measurement noises, gearbox
backlash, and nonlinearities due to switches in PWM driver,
among others.
V. CONCLUSION
A RHONN structure was used to design a neural observer, named RHONO, for a class of MIMO discrete-time nonlinear systems; the proposed observer was trained with an EKF-based algorithm, which was implemented online in a parallel configuration. The boundedness of the output,
state, and estimation errors was established on the basis
of the Lyapunov approach. From Table I, it is easy to
see that the EKF-learning-based algorithm provides small
errors between the outputs of the NN model and the plant
signals, even with randomly assigned initial conditions and NN weights, which is in fact an excellent indicator of the
proposed scheme performance. Real-time results show the
effectiveness of the proposed observer, as applied to an electric
three-phase squirrel-cage induction motor in the presence of time-
varying disturbances. This brief only dealt with state estima-
tion for a three-phase induction motor. Control synthesis and
implementation based on the proposed approaches were con-
sidered separately; however, output trajectory tracking results
were included in this brief in order to show the effectiveness
of the proposed observer as compared with other nonlinear
observer.
ACKNOWLEDGMENT
The authors would like to thank the anonymous reviewers
for their useful comments, which helped to improve this brief.
APPENDIX
PROOF OF THEOREM 2
Consider the candidate Lyapunov function

Vᵢ(k) = w̃ᵢᵀ(k) Pᵢ(k) w̃ᵢ(k) + x̃ᵢᵀ(k) Pᵢ(k) x̃ᵢ(k)    (23)

whose first increment is defined as

ΔVᵢ(k) = Vᵢ(k+1) − Vᵢ(k)
  = w̃ᵢᵀ(k+1) Pᵢ(k+1) w̃ᵢ(k+1) + x̃ᵢᵀ(k+1) Pᵢ(k+1) x̃ᵢ(k+1)
  − w̃ᵢᵀ(k) Pᵢ(k) w̃ᵢ(k) − x̃ᵢᵀ(k) Pᵢ(k) x̃ᵢ(k).    (24)
Using (16) and (14) in (24), then

ΔVᵢ(k) = [w̃ᵢ(k) − ηᵢ Kᵢ(k) e(k)]ᵀ Aᵢ(k) [w̃ᵢ(k) − ηᵢ Kᵢ(k) e(k)]
  + [f(k) − gᵢ C x̃(k)]ᵀ Aᵢ(k) [f(k) − gᵢ C x̃(k)]
  − w̃ᵢᵀ(k) Pᵢ(k) w̃ᵢ(k) − x̃ᵢᵀ(k) Pᵢ(k) x̃ᵢ(k)    (25)

with

Aᵢ(k) = Pᵢ(k) − Dᵢ(k) + Qᵢ
Dᵢ(k) = Kᵢ(k) Hᵢᵀ(k) Pᵢ(k)
f(k) = w̃ᵢᵀ(k) zᵢ(x(k), u(k)) + ε′zᵢ.
Hence, (25) can be expressed as

ΔVᵢ(k) ≤ 2 w̃ᵢᵀ(k) Pᵢ(k) w̃ᵢ(k) − 2 w̃ᵢᵀ(k) Bᵢ(k) w̃ᵢ(k)
  + 2 η² x̃ᵀ(k) Cᵀ Kᵢᵀ(k) Aᵢ(k) Kᵢ(k) C x̃(k)
  + 2 fᵀ(k) Aᵢ(k) f(k)
  + 2 x̃ᵀ(k) Cᵀ gᵢᵀ Aᵢ(k) gᵢ C x̃(k)
  − w̃ᵢᵀ(k) Pᵢ(k) w̃ᵢ(k) − x̃ᵢᵀ(k) Pᵢ(k) x̃ᵢ(k).    (26)
Using the inequalities

Xᵀ X + Yᵀ Y ≥ 2 Xᵀ Y
Xᵀ X + Yᵀ Y ≥ −2 Xᵀ Y
−λmin(P) ‖X‖² ≥ −Xᵀ P X ≥ −λmax(P) ‖X‖²    (27)
which are valid ∀ X, Y ∈ ℝⁿ and ∀ P ∈ ℝⁿˣⁿ with P = Pᵀ > 0, (26) can be rewritten as

ΔVᵢ(k) ≤ ‖w̃ᵢ(k)‖² λmax(Pᵢ(k)) − ‖w̃ᵢ(k)‖² λmin(Bᵢ(k))
  + 2 ‖x̃(k)‖² ‖η Kᵢ C‖² λmax(Aᵢ(k))
  + 2 ‖f(k)‖² λmax(Aᵢ(k))
  + 2 ‖x̃(k)‖² ‖gᵢ C‖² λmax(Aᵢ(k))
  − ‖x̃(k)‖² λmin(Pᵢ(k)).
Substituting f(k) = w̃ᵢᵀ(k) zᵢ(x(k), u(k)) + ε′zᵢ, then

ΔVᵢ(k) ≤ ‖w̃ᵢ(k)‖² λmax(Pᵢ(k)) − ‖w̃ᵢ(k)‖² λmin(Bᵢ(k))
  + 2 ‖x̃(k)‖² ‖η Kᵢ C‖² λmax(Aᵢ(k))
  + 4 ‖ε′zᵢ‖² λmax(Aᵢ(k))
  + 4 ‖w̃ᵢ(k)‖² ‖zᵢ(x(k), u(k))‖² λmax(Aᵢ(k))
  + 2 ‖x̃(k)‖² ‖gᵢ C‖² λmax(Aᵢ(k))
  − ‖x̃(k)‖² λmin(Pᵢ(k))
with Bᵢ(k) = Dᵢ(k) − Qᵢ, which yields

ΔVᵢ(k) ≤ −‖x̃(k)‖² Eᵢ(k) − ‖w̃ᵢ(k)‖² Fᵢ(k) + 4 ‖ε′zᵢ‖² λmax(Aᵢ(k))

where

Eᵢ(k) = λmin(Pᵢ(k)) − 2 ‖η Kᵢ C‖² λmax(Aᵢ(k)) − 2 ‖gᵢ C‖² λmax(Aᵢ(k))
Fᵢ(k) = λmin(Bᵢ(k)) − λmax(Pᵢ(k)) − 4 ‖zᵢ(x(k), u(k))‖² λmax(Aᵢ(k)).
As a result, ΔVᵢ(k) < 0 when

‖x̃(k)‖ > √( 4 ‖ε′zᵢ‖² λmax(Aᵢ(k)) / Eᵢ(k) ) ≡ κ₁

or

‖w̃ᵢ(k)‖ > √( 4 ‖ε′zᵢ‖² λmax(Aᵢ(k)) / Fᵢ(k) ) ≡ κ₂.
Therefore, the solution of (19) and (20) is stable; hence, the estimation error and the RHONO weights are SGUUB [14]. Considering (7) and (18), it is easy to see that the output error has an algebraic relation with x̃(k); for that reason, if x̃(k) is bounded, e(k) is bounded too:

e(k) = C x̃(k)
‖e(k)‖ ≤ ‖C‖ ‖x̃(k)‖.
REFERENCES
[1] D. F. Coutinho and L. P. F. A. Pereira, “A robust Luenberger-like
observer for induction machines,” in Proc. 31st Annu. Conf. IEEE Ind.
Electron. Soc., Nov. 2005, pp. 1–5.
[2] G. Besançon, Nonlinear Observers and Applications (Lecture Notes in
Control and Information Sciences), vol. 363. Berlin, Germany: Springer-
Verlag, 2007.
[3] J. A. Farrell and M. M. Polycarpou, Adaptive Approximation Based
Control: Unifying Neural, Fuzzy and Traditional Adaptive Approxima-
tion Approaches. New York: Wiley, 2006.
[4] A. S. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks
for Robust Nonlinear Control. Singapore: World Scientific, 2001.
[5] S. Nicosia and A. Tornambè, “High-gain observers in the state and
parameter estimation of robots having elastic joints,” Syst. Control Lett.,
vol. 13, no. 4, pp. 331–337, Nov. 1989.
[6] R. Grover and P. Y. C. Hwang, Introduction to Random Signals and
Applied Kalman Filtering, 2nd ed. New York: Wiley, 1992.
[7] A. J. Krener and A. Isidori, “Linearization by output injection and
nonlinear observers,” Syst. Control Lett., vol. 3, no. 1, pp. 47–52, Jun.
1983.
[8] J. Li and Y. Zhong, “Comparison of three Kalman filters for speed
estimation of induction machines,” in Proc. 40th IAS Annu. Meeting
Ind. Appl. Conf., vol. 3. Oct. 2005, pp. 1792–1797.
[9] Y. Liu, Z. Wang, and X. Liu, “Design of exponential state estimators
for neural networks with mixed time delays,” Phys. Lett. A, vol. 364,
no. 5, pp. 401–412, May 2007.
[10] Z. Wang, D. W. C. Ho, and X. Liu, “State estimation for delayed neural
networks,” IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan.
2005.
[11] F. Chen and M. W. Dunnigan, “Comparative study of a sliding-mode
observer and Kalman filters for full state estimation in an induction
machine,” IEE Proc. Electric Power Appl., vol. 149, no. 1, pp. 53–64,
Jan. 2002.
[12] H. Huang, G. Feng, and J. Cao, “Robust state estimation for uncertain
neural networks with time-varying delay,” IEEE Trans. Neural Netw.,
vol. 19, no. 8, pp. 1329–1339, Aug. 2008.
[13] B. Walcott and S. Zak, “State observation of nonlinear uncertain
dynamical systems,” IEEE Trans. Autom. Control, vol. 32, no. 2, pp.
166–170, Feb. 1987.
[14] Y. H. Kim and F. L. Lewis, High-Level Feedback Control with Neural
Networks. Singapore: World Scientific, 1998.
[15] A. U. Levin and K. S. Narendra, “Control of nonlinear dynamical
systems using neural networks. II. Observability, identification, and
control,” IEEE Trans. Neural Netw., vol. 7, no. 1, pp. 30–42, Jan. 1996.
[16] R. Marino, “Adaptive observers for single output nonlinear systems,”
IEEE Trans. Autom. Control, vol. 35, no. 9, pp. 1054–1058, Sep. 1990.
[17] E. N. Sanchez and L. J. Ricalde, “Trajectory tracking via adaptive
recurrent control with input saturation,” in Proc. Int. Joint Conf. Neural
Netw., vol. 1. Portland, OR, Jul. 2003, pp. 359–364.
[18] J. Liang, Z. Wang, and X. Liu, “State estimation for coupled uncertain
stochastic networks with missing measurements and time-varying de-
lays: The discrete-time case,” IEEE Trans. Neural Netw., vol. 20, no. 5,
pp. 781–793, May 2009.
[19] A. Y. Alanis, E. N. Sanchez, and A. G. Loukianov, “Discrete-time
adaptive backstepping nonlinear control via high-order neural networks,”
IEEE Trans. Neural Netw., vol. 18, no. 4, pp. 1185–1195, Jul. 2007.
[20] M. Elbuluk, T. Liu, and I. Husain, “Neural network-based model ref-
erence adaptive systems for high performance motor drives and motion
controls,” IEEE Trans. Ind. Appl., vol. 38, no. 3, pp. 879–886, May–Jun.
2002.
[21] S. Haykin, Kalman Filtering and Neural Networks. New York: Wiley,
2001.
[22] S. Singhal and L. Wu, “Training multilayer perceptrons with the ex-
tended Kalman algorithm,” in Advances in Neural Information Process-
ing Systems, vol. 1, D. S. Touretzky, Ed. San Mateo, CA: Morgan
Kaufmann, 1989, pp. 133–140.
[23] S. Singhal and L. Wu, “Training multilayer perceptrons with the ex-
tended Kalman algorithm,” in Advances in Neural Information Process-
ing Systems, vol. 1, D. S. Touretzky, Ed. San Mateo, CA: Morgan
Kaufmann, 1989, pp. 133–140.
[24] R. J. Williams and D. Zipser, “A learning algorithm for continually
running fully recurrent neural networks,” Neural Comput., vol. 1, no. 2,
pp. 270–280, 1989.
[25] A. Y. Alanis, E. N. Sanchez, A. G. Loukianov, and M. A. Perez-Cisneros,
“Real-time discrete neural block control using sliding modes for electric
induction motors,” IEEE Trans. Control Syst. Technol., vol. 18, no. 1,
pp. 11–21, Jan. 2010.
[26] Z. Andonov, “Overview of induction motor parameter identification
techniques,” in Proc. 11th Int. Symp. Power Electron., Novi Sad,
Yugoslavia, Nov. 2001, pp. 241–245.
[27] V. Utkin, J. Guldner, and J. Shi, Sliding Mode Control in Electromechan-
ical Systems. New York: Taylor & Francis, 1999.
[28] F. van der Heijden, R. P. W. Duin, D. de Ridder, and D. M. J. Tax,
Classification, Parameter Estimation and State Estimation: An Engineering
Approach Using MATLAB. New York: Wiley, 2004.
[29] S. S. Ge, J. Zhang, and T. H. Lee, “Adaptive neural network control for
a class of MIMO nonlinear systems with disturbances in discrete-time,”
IEEE Trans. Syst., Man Cybern., Part B: Cybern., vol. 34, no. 4, pp.
1630–1645, Aug. 2004.
[30] H. K. Khalil, Nonlinear Systems, 2nd ed. Upper Saddle River, NJ:
Prentice-Hall, 1996.
[31] G. A. Rovithakis and M. A. Christodoulou, Adaptive Control with
Recurrent High-Order Neural Networks. Berlin, Germany: Springer-
Verlag, 2000.
[32] A. G. Loukianov, J. Rivera, and J. M. Cañedo, “Discrete-time sliding
mode control of an induction motor,” in Proc. IFAC, Barcelona, Spain,
Jul. 2002, pp. 106–111.
[33] J. Jiang and J. Holtz, “High dynamic speed sensorless AC drive with
on-line model parameter tuning for steady-state accuracy,” IEEE Trans.
Ind. Electron., vol. 44, no. 2, pp. 240–246, Apr. 1997.
[34] W. Leonhard, Control of Electrical Drives, 2nd ed. New York: Springer-
Verlag, 2001.
[35] F. Khorrami, P. Krishnamurthy, and H. Melkote, Modeling and Adaptive
Nonlinear Control of Electric Motors. Berlin, Germany: Springer-Verlag,
2003.
[36] H. K. Khalil, E. G. Strangas, and S. Jurkovic, “Speed observer and re-
duced nonlinear model for sensorless control of induction motors,” IEEE
Trans. Control Syst. Technol., vol. 17, no. 2, pp. 327–339, Mar. 2009.
[37] J. Quiñones, “Real time implementation of a three phase induction
motor control,” M.S. dissertation, Cinvestav, Unidad Guadalajara,
Guadalajara, Jalisco, Mexico, 2006.
[38] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper
Saddle River, NJ: Prentice-Hall, 1999.
[39] M. Norgaard, O. Ravn, N. K. Poulsen, and L. K. Hansen, Neural
Networks for Modelling and Control of Dynamic Systems. New York:
Springer-Verlag, 2000.
[40] T. Söderström and P. Stoica, System Identification. Englewood Cliffs,
NJ: Prentice-Hall, 1989.
[41] L. A. Feldkamp, D. V. Prokhorov, and T. M. Feldkamp, “Simple and
conditioned adaptive behavior from Kalman filter trained recurrent
networks,” Neural Netw., vol. 16, nos. 5–6, pp. 683–689, Jun. 2003.
[42] A. Y. Alanis, E. N. Sanchez, and A. G. Loukianov, “Real-time output
tracking for induction motors by recurrent high-order neural network
control,” in Proc. IEEE 17th Mediterranean Conf. Control Autom.,
Thessaloniki, Greece, Jun. 2009, pp. 868–873.
BELM: Bayesian Extreme Learning Machine
Emilio Soria-Olivas, Member, IEEE, Juan Gómez-Sanchis, Member,
IEEE, José D. Martín, Member, IEEE, Joan Vila-Francés, Member,
IEEE, Marcelino Martínez, Member, IEEE, José R. Magdalena,
Member, IEEE, and Antonio J. Serrano, Member, IEEE
Abstract—The theory of extreme learning machine (ELM) has
become very popular in the last few
approach for learning the parameters of the hidden layers of
a multilayer neural network (such as the multilayer perceptron or the
radial basis function neural network). Its main advantage is the
lower computational cost, which is especially relevant when deal-
ing with many patterns defined in a high-dimensional space. This
brief proposes a Bayesian approach to ELM, which presents some
advantages over other approaches: it allows the introduction of a
priori knowledge; it obtains confidence intervals (CIs) without
the need of applying computationally intensive methods, e.g.,
bootstrap; and it presents high generalization capabilities.
Bayesian ELM is benchmarked against classical ELM on several
artificial and real datasets that are widely used for the evaluation
of machine learning algorithms. The results show that the
proposed approach produces competitive accuracy with some
additional advantages, namely, automatic production of CIs, a
reduced probability of model overfitting, and the use of a priori
knowledge.
Index Terms—Bayesian, extreme learning machine, multilayer
perceptron, radial basis function.
Manuscript received June 4, 2010; accepted December 24, 2010. Date of
publication January 20, 2011; date of current version March 2, 2011. This
work was supported in part by the Spanish Ministry for Education and
Science under Grant TIN2007-61006 (Aprendizaje por Refuerzo Aplicado
en Farmacocinetica) and in part by Project CSD2007-00018.
The authors are with the Digital Signal Processing Group, Department of
Electronic Engineering, ETSE, University of Valencia, Burjassot 46100, Spain
(e-mail: emilio.soria@uv.es; juan.gomez-sanchis@uv.es; jose.d.martin@uv.es;
joan.vila@uv.es; marcelino.martinez@uv.es; rafael.magdalena@uv.es;
antonio.j.serrano@uv.es).
Color versions of one or more of the figures in this brief are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNN.2010.2103956
1045-9227/$26.00 © 2011 IEEE
I. INTRODUCTION
Extreme learning machine (ELM) is a recent approach for
neural models with only one hidden layer and whose goal is
mapping an original input space into an output space in which
the tackled problem can be solved. Some neural models of this
kind are the two most widely used ones, namely, the multilayer
perceptron (MLP) and the radial basis function neural network
(RBFNN) [1]. ELM proposes a random initialization of the
parameters of the hidden layer (weights and biases in the
case of MLP [2] and centers and variances in the case of
RBFNN [3]). Afterwards, the weights of the output layer are
computed using a least-squares method based on the Moore–Penrose
generalized inverse [4]. Therefore,
the computational cost is much lower than when using other
classical learning algorithms, such as gradient-descent meth-
ods or global search approaches (genetic algorithms, particle
swarm, etc.). Its application to different fields has become
more and more common lately [5], [6], and some algorithms
have been proposed in order to improve its performance
[7]–[9]. Moreover, since the amount of available data grows
exponentially, ELM represents a suitable approach to obtain
models from huge databases within a reasonable time.
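The two-step procedure just described (random hidden layer, then closed-form output weights) can be sketched in a few lines. Everything below, including data, layer size, and activation, is illustrative rather than taken from this brief:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative regression problem (data and sizes are not from this brief)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# Step 1: random hidden-layer parameters (weights and biases, MLP case)
n_hidden = 50
W = rng.normal(size=(3, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix

# Step 2: output weights by least squares via the Moore-Penrose pseudoinverse
beta = np.linalg.pinv(H) @ y

mse = np.mean((H @ beta - y) ** 2)
print(mse)  # small training error, with no iterative training at all
```

Only the single pseudoinverse solve is needed, which is where the low computational cost of ELM comes from.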
The research on Bayesian methods for neural models has
intensified in recent years. These methods place a
probability distribution over the network parameters and the
modeling errors. Many approaches of this kind have shown
their suitability in different fields [10]. Some of the most
relevant approaches of this kind are the probabilistic version
of self-organizing maps (generative topographic mapping) and
the relevance vector machine, which is the probabilistic
counterpart of the support vector machine [10]. This brief proposes
the Bayesian ELM (BELM). The BELM has the advantages
of both ELM and Bayesian models [10], [11], thus involving
a low computational cost and additionally building the corre-
sponding confidence interval (CI) without the need for using
other methods, such as bootstrap, that are computationally
costly.
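As a sketch of what a Bayesian treatment of the ELM output layer can look like, the snippet below keeps the random hidden layer fixed, places a Gaussian prior on the output weights, and obtains predictive variances, hence CIs, in closed form. The hyperparameters `alpha` and `beta_n` and all data are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1-D regression task with known noise level (not from this brief)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=100)

# ELM part: random hidden layer, fixed after initialization
n_hidden = 30
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# Bayesian part: Gaussian prior on output weights (precision alpha) and
# Gaussian noise (precision beta_n); both are assumed hyperparameters here.
alpha, beta_n = 1e-2, 100.0
S = np.linalg.inv(alpha * np.eye(n_hidden) + beta_n * H.T @ H)  # posterior cov
m = beta_n * S @ H.T @ y                                        # posterior mean

# Predictive mean and variance give a 95% CI with no bootstrap needed
mean = H @ m
var = 1.0 / beta_n + np.einsum('ij,jk,ik->i', H, S, H)
lo, hi = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
print(np.mean((lo <= y) & (y <= hi)))  # empirical coverage of the CIs
```

The posterior over the output weights costs one matrix inversion of size `n_hidden`, so the low-cost character of ELM is preserved.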
The rest of this brief is outlined as follows. Section II
describes ELM and the proposed BELM. The performance
of BELM is evaluated using different standard datasets com-
monly used for machine learning benchmarking in Section III.
This brief ends with the conclusion in Section IV.

1. In contrast. k ∈ 0 ∪ Z+ . that is x (k + 1) = F (x (k)) is said to be unforced. there exists an ε > 0 and a number N (ε. which contains (0.498 IEEE TRANSACTIONS ON NEURAL NETWORKS. (3) Such substitution eliminates u and yields an unforced system [30] x (k + 1) = F (x (k) . . C ∈ p×n is a known output matrix. which is required to implement nonlinear controllers. Definition 1: System (1) is said to be forced. . di (k) . N EURAL S TATE E STIMATION In this section. SGUUB. n ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ (7) z i (x(k). . x i (k) . we use k as the sampling step. u (k) ∈ m is the input vector. . x i (k + 1) = Fi (x (k) . u(k)) + gi e (k) y (k) = C x (k) . (k)) = ⎢ . and γ3 (•) is a continuous nondecreasing function.. Definition 3: A subset S ∈ n is bounded if there exists r > 0 such that x ≤ r for all x ∈ S [30]. These contributions are meaningful because they allow the building of an adaptive neural model for nonlinear systems. which is a compact subset of n and all x (k0 ) ∈ . III.. then x (k) is uniformly ultimately bounded. 22. with gi ∈ p. u(k)) defined as ⎡ ⎤ ⎡ ⎢ z i1 ⎢ ⎥ ⎢ ⎢ ⎢ z i2 ⎥ ⎢ ⎥ ⎢ ⎢ z i (x(k). 0) as an interior point. a system described by an equation without explicit presence of an input u. . we propose a recurrent neural Luenberger observer (RHONO) with the following structure: x (k) = x 1 (k) .e. . we consider to estimate the state of a discrete-time nonlinear system. Hence.. i = 1. u (k)) + di (k) .. 0 II. . Consider a multiple input–multiple output (MIMO) nonlinear system x (k + 1) = F (x (k) . important mathematical preliminaries required in future sections are presented. ⎥ = ⎢ ⎢ . u (k)) + d (k) y (k) = C x (k) (5) where x ∈ n . In other words. M ATHEMATICAL P RELIMINARIES In this section. di j (L i ) j ∈I L i ξi j (8) . 3. For the system (6). ∀ k < k T [29]. . or to have inputs. Theorem 1: Let V (x (k)) be a Lyapunov function for a discrete-time system (1). given by x (k + 1) = F (x (k) .. ξ (x (k))) . 
n (6) Definition 2: The solution of (1)–(3) is semiglobally uniformly ultimately bounded (SGUUB). • as the Euclidian norm for vectors and as any adequate norm for matrices. ⎦ ⎢ ⎢ ⎣ ziL i di j (1) j ∈I1 ξi j di j (2) j ∈I2 ξi j . A. which is assumed to be observable. .. and F (•) is a smooth vector field and Fi (•) its entries. y (k) = C x (k) . i. as is displayed in Fig. in a finite time and remains in it thereafter. d (k) ∈ n is a disturbance vector. (4) where x ∈ n is the state vector of the system. . . Stability Definitions This section close follows [29]. . if for any . . for any a priory given (arbitrarily large) bounded set and any a priory given (arbitrarily small) set 0 . It can be obtained after selecting the input u as a feedback function of the state u (k) = ξ (x (k)) . Thus if V (x) < 0 for x (k) > ζ . MARCH 2011 in the presence of unknown disturbances such as parametric and load torque variations. and F ∈ n × m → n is nonlinear function. ⎥ ⎢ ⎣ . the solution of (1) is said to be SGUUB if. 1. VOL. x n (k) x i (k + 1) = wi z i (x(k). . schematic representation. x i (k) . u ∈ m . x (k0 )) such that x (k) < ε for all k ≥ k0 + N [29]. γ1 (•) and γ2 (•) are strictly increasing functions. |•| as the absolute value and. NO. y (k) ∈ p is the output vector. .. there exists a control (3) such that every trajectory of the closed-loop system starting from enters the set 0 = {x (k) | x (k) < ε}. (5) can be rewritten as x (k) = d (k) = x 1 (k) d1 (k) . which satisfies the following properties: γ1 ( x (k) ) ≤ V (x (k)) ≤ γ2 ( x (k) ) V (x (k + 1)) − V (x (k)) = V (x (k)) ≤ −γ3 ( x (k) ) + γ3 (ζ ) where ζ is a positive constant. u (k)) y (k) = h (x (k)) (1) (2) ε Fig. x n (k) dn (k) i = 1. Through this brief. there is a time instant k T such that x (k) < ζ. .

Then the dynamics of x i (k + 1) can be expressed as x i (k + 1) = x i (k + 1) − x i (k + 1) . trained with the EKF-based algorithm (16). Neural observer scheme. As discussed in [31]. 3. . IV. it can be written as x i (k + 1) = wi (k) z i (x(k). 22. it is assumed that this vector exists and is constant but unknown. ⎥ ⎢ ⎥ . wi (k + 1) − wi (k) = wi (k + 1) − wi (k) (15) (14) z i (x(k). described by wi (k + 1) = wi (k) + ηi K i (k) e (k) K i (k) = Pi (k) Hi (k) Mi (k) . In general. . . then. ruggedness. u(k)) + on the state component-wise for x i (k + 1) = wi∗ z i (x(k). Modeling of an induction motor is challenging. Theorem 2: For system (6). the weight estimation error is defined as wi (k) = wi∗ wi (k) − wi∗ (11) where is the ideal weights vector and wi its estimate [31]. which is one of the most used actuators for industrial applications due to its reliability. moreover. which is an artificial quantity required only for analytical purpose [31]. On the other hand. the RHONO (7). we apply the above-developed neural observer to a three-phase induction motor. . A well-known problem for induction motor applications is that electrical parameters might not be accurately known. ⎥ ⎢ ⎥ . and ξi defined as follows: ⎡ ⎤ ⎡ ⎤ ξi1 S(x 1 ) ⎢ . Among the wide range of available contributions in this direction. . x b (k)u(k). can be approximated by the following discrete-time RHONN parallel representation: x (k + 1) = W ∗ z (x(k). [34]. as wi (k) = wi (k) − wi∗ and x i (k) = x i (k) − x i (k). u(k)). u(k)) − z i (x(k). . NO. . MARCH 2011 499 with d ji (k) being nonnegative integers. A disadvantage of this type of NN is that there does not exist. [32]. induction motor control is performed in steady state for constant speed profiles [33].IEEE TRANSACTIONS ON NEURAL NETWORKS. u(k)) − gi e (k) . (20) Considering (16)–(20). we establish the main result of this brief as the following theorem. [26]. and highly nonlinear system [27]. ⎥ ⎢ ⎥ . 2 . x b (k). 
u(k)) + with zi (12) − gi e (k) (19) i = 1. [33]. ⎣ . however. Adding and subtracting wi∗ z i (x a (k). and the output error β>0 (10) e (k) = y (k) − y (k) . [26]. to the best of our knowledge. respectively. . ensures that the estimation error (15) and the output error (18) are SGUUB. is the input vector to the NN. the weights estimation wi (k) and state observer x i (k) errors are defined. Therefore x i (k + 1) = wi∗ z i (x(k). . Let us define the wi∗ estimate as wi . = [ 1 . the general discrete-time nonlinear system (5). . which has motivated various approaches for their identification [20]. and S(•) is defined by S(ς ) = m] u(k) Unknown x(k) Unknown C Plant Plant EKF EKF w(k) ˆ Neural Neural x (k) Observer Observer C y(k) e(k) + − y (k) ˆ Fig. Traditionally. the dynamics of (14) are wi (k + 1) = wi (k) − ηi K i (k) e (k) . VOL. . z − wi (k) z i (x a (k). n Pi (k + 1) = Pi (k) − K i (k) Hi (k) Pi (k) + Q i (k) with Mi (k) = Ri (k) + Hi (k) Pi (k) Hi (k) −1 (16) (17) . Proof: See Appendix. [33]. coupled. ˆ Since wi∗ is constant ∀k ∈ 0 ∪ Z+ . . and relatively low cost. . since its dynamics is described by multivariable. ⎦ ⎣ ⎦ . Let assume that there exists the optimal weights vector wi∗ ∈ L i such that zi is minimized on a compact set zi ⊂ L i . I NDUCTION M OTOR A PPLICATIONS In this section. n (13) zi = wi∗ z i (x(k). which is assumed to be observable. . u(k)) = z i (x(k). [34]. for The weight vectors are updated online with a decoupled EKF. 2. the RHONO weights remain bounded. Therefore. or might significantly vary when the motor is operating. 1 + exp(−βς ) (18) where ς is any real value variable. ⎢ . i = 1. ξin+m m In (9). u(k)) + zi 1 . u(k)) + zi . one can find results for the estimation of a limited number of required electrical parameters under different assumptions [20]. . therefore it has to be selected experimentally. . 
u(k)) + zi where x i is the i -th plant state and z i is a bounded approximation error which can be reduced by increasing the number of the adjustable weights [31]. a methodology to determine its detailed structure. [34]. ξi = ⎢ (9) ⎥ ⎢ ⎥ 1 ⎢ ξin+1 ⎥ ⎢ ⎥ ⎢ . ⎢ ⎥ ⎢ ⎥ ⎢ ξin ⎥ ⎢ S(x n ) ⎥ ⎢ ⎥=⎢ ⎥.

most of those works were developed for continuous-time model of the motor. Germany) for data acquisition and control of the system. with the following characteristics: 220-V. β = (M/σ L ). and direct torque control ([35] and references therein). MARCH 2011 Load or Mechanic source PC PCIBus Encoder Electrical Machine Sensors (a) Fig. among others. recent works on control of induction motors are focused on field-oriented control [35]. a discrete-time observer based on linearization is proposed. Schematic representation of the control prototype. continuous-time observers are studied. In [1] and [8]. all these observers assume that the parameters and load torque of the motor model are known. USA). 5. Besides. Motor Model A sixth-order discrete-time induction motor model in the stator fixed reference frame (α. exact input–output linealization. Therefore.500 IEEE TRANSACTIONS ON NEURAL NETWORKS. The DS1104 board allows downloading applications directly from Simulink (MATLAB and Simulink are registered trademarks of MathWorks Inc. Induction motor coupled with the load and the encoder. 1. To this end. an observer is needed to estimate them [11]. All the mentioned observers are designed on the basis of the physical model of the motor. it is not possible to establish a comparison between the proposed scheme and the abovementioned methodologies. Fig. the experiment is performed with a constant load torque applied as an inertial load coupled to the induction motor as shown in Fig. Fig. 60 Hz. which includes a PC for supervising. we consider the state estimation problem assuming that the plant parameters as well as external disturbances (load torque) are unknown. it is necessary to track time-varying reference signals. 3 presents a schematic representation of the benchmark used for experiments. 1660 rpm. γ = (M 2 Rr /σ L r ) + 2 /L ). mass transportation. [36] and [1]. Fig. In [32]. 5. 3. under the assumptions of equal mutual inductances and linear magnetic circuit. 
as well as an observer. a discrete-time model is proposed. adaptive input output linearization. 4. 22. which results in a sensitive control with respect to plant parameter variations. β). 0. 4 displays a view of the PC and the DS1104 board and the PWM driver. The experiments are performed using a benchmark. 3. The experiment implemented on this benchmark uses the NN state estimation discussed in Section III. A. and a three-phase induction motor as the plant to be controlled.3 A [37]. advanced drilling. and in [32]. in [8]. In [11]. Therefore. α = Rr /L r . VOL. Encoder Induction Motor Gear box Load many applications such as electric vehicle. Connections Power module V Fig. a PWM unit for the power stage. (b) Signal coupling DS1104 Board Connections (a) PC and the DS1104 board and (b) PWM driver. a dSPACE DS1104 board (dSPACE is a registered trademark of dSPACE GmbH. Robustness of those controllers depends significantly on accurate knowledge of the system states. and steel milling operation. a = e −αT . μ = (Rs /σ ). NO. Therefore. continuous-time observers for induction motor are considered. is given as [32] ω (k + 1) = ω (k) − T μ TL (k) + (1 − α) J α β α × M i (k) ψ (k) − i α (k) ψ β (k) − sin n p θ (k + 1) ρ2 (k) ψ (k + 1) = sin n p θ (k + 1) ρ1 (k) + cos n p θ (k + 1) ρ2 (k) T i α (k + 1) = ϕ α (k) + u α (k) + d1 (k) σ T β β i (k + 1) = ϕ (k) + u β (k) + d2 (k) σ TL (k) 2 μ (1 − a) θ (k + 1) = θ (k) + ω (k) T − T + T− J α α (21) × M i β (k) ψ α (k) − i α (k) ψ β (k) with ρ1 (k) = a cos (φ (k)) ψ α (k) + sin (φ (k)) ψ β (k) + b cos (φ (k)) i α (k) + sin (φ (k)) i β (k) ρ2 (k) = a cos (φ (k)) ψ α (k) − sin (φ (k)) ψ β (k) + b cos (φ (k)) i α (k) − sin (φ (k)) i β (k) ϕ α (k) = i α (k) + αβT ψ α (k) + n p βT ω (k) ψ α (k) − γ T i α (k) ϕ β (k) = i β (k) + αβT ψ β (k) + n p βT ω (k) ψ β (k) − γ T i β (k) φ (k) = n p θ (k) 2 with b = (1 − a) M. In practice. σ = L s − (M r r β ψ α (k + 1) = cos n p θ (k + 1) ρ1 (k) .19 kW.. fluxes are not easily measurable.

B. random Gaussian noise. 6 4 alpha current (A) 2 0 −2 −4 neural signal plant signal −0. and Ri (0) = 10000.1 neural signal estimation error Fig.45 0.3 (Mn p /J L r ) where Ls.2 −0. i α and i β represents the currents in the α and β phases. Rs and Rr are the stator and rotor resistances.25 0.35 0.4 0. n p is the number of pole pairs. . The associated covariances matrices are initialized as diagonals. see also [38]–[40]. to estimate the state of a three-phase induction motor (21) x 1 (k + 1) = w11 (k) S (x 1 (k)) + w12 (k) S (x 1 ) S (x 3 (k)) x 4 (k) + w13 (k) S (x 1 ) S (x 2 (k)) x 5 (k) + g1 e (k) x 2 (k + 1) = w21 (k) S (x 1 (k)) S (x 3 (k)) + w22 (k) x 5 (k) + g2 e (k) x 3 (k + 1) = w31 (k) S (x 1 (k)) S (x 2 (k)) + w32 (k) x 4 (k) + g3 e (k) x 4 (k + 1) = w41 (k) S (x 2 (k)) + w42 (k) S (x 3 (k)) + w43 (k) S (x 4 (k)) + w44 (k) u α (k) + g4 e (k) x 5 (k + 1) = w51 (k) S (x 2 (k)) + w52 (k) S (x 3 (k)) + w53 (k) S (x 5 (k)) + w54 (k) u β (k) + g5 e (k) (22) Fig. and finally x 4 and x 5 estimate the currents i α and i β . respectively.1 0. it is important to represent a wide range of frequencies. respectively. and all of its states are initialized in a random way.15 0. and mutual Inductance. we apply the RHONO (Fig. 2).1 beta flux (wb2) 0. where x 1 estimates the angular speed ω.2 0.05 0. they are used for purposes of state estimation to excite most of the plant dynamics. and the nonzero elements are heuristically selected as Pi (0) = 10000.15 0.05 −0. however. For supplementary information. For modeling of nonlinear system structures. developed in Section III.5 time (s) Fig. It is important to .2 0. . respectively. it is unusual to use chirp signals (sine wave signal whose frequency increases at a linear rate with respect to time) as inputs [25].15 time (s) 0. among others. The NN training is performed online.3 0. respectively.25 0 0. 
All of these input signals have advantages which include independent noise estimation and reduction of datasets.2 neural signal 0. plant signal 0. rotor.15 −0. 8.05 0 −0.15 0 estimation error 0. x 2 and x 3 estimates the fluxes ψ α and ψ β . Real-time alpha current estimation (plant signal in solid line and neural signal in dashed line). Q i (0) = 500. NO.2 0. In this brief. MARCH 2011 501 16 14 12 speed (rad/s) 10 8 6 4 plant signal 2 0 −2 0 0.2 0. 7.1 0.15 alpha flux (wb2) 0. . 22. chirp signals have been generally found to provide more consistent results and have been used successfully in the past for modeling the dynamics of complex nonlinear systems. VOL. L r and M are the stator.05 0.05 0 plant signal Fig. 3.25 0.3 0. .1 0.1 −0. and θ is the rotor angular displacement. Neural Observer Design To this end. Real-time alpha flux estimation (plant signal in solid line and neural signal in dashed line). 6.1 −0.25 0.05 0.IEEE TRANSACTIONS ON NEURAL NETWORKS.3 estimation error neural signal 0.2 0.25 0. (i = 1.15 time (s) 0.1 0. It is important to note that for induction motor applications. respectively. Input signals that attempt to meet this demand include pseudorandom binary sequences. and chirp signals (swept sinusoid).25 0. 9. respectively. respectively.05 0. ψ α and ψ β represent the fluxes in the α and β phases. Real-time rotor speed estimation (plant signal in solid line and neural signal in dashed line).15 time (s) 0. 5). estimation error −6 0 0. The inputs u α and u β are selected as chirp functions. Real-time beta flux estimation (plant signal in solid line and neural signal in dashed line).05 −0.

In fact all calculations required are performed in between consecutive samples. Both of them (plant and NN) have the same input vector [u α u β ] . there is no effect of the RHONN structure on its performance. This NN structure is determined heuristically in order to minimize the state estimation error. as displayed in Fig. Output trajectory tracking error with a nonlinear discrete-time nonlinear flux observer. Applied load torque. 6–10.9090 × 10−4 wb −7. Because of the use of this signal. 7 and 8 present the estimation performance for the fluxes in phase α and β.5 load torque (Nm) 2 0 −2 −4 −6 −8 0 0. with 200 V amplitude and incremental frequencies from 0 to 150 Hz and 0 to 200 Hz. Figs. 9 and 10 portray the estimation performance for currents in phase α and β. 11. Table I presents the standard deviation for the state estimation errors. 10. Both experiments are performed including a time-varying load torque as presented in Fig. the plant and the NN operates in open loop. 22. MARCH 2011 8 6 4 beta current (A) plant signal estimation error 3 neural signal 2.0005 s. 11. TABLE I S TANDARD D EVIATION AND M EAN VALUE FOR THE S TATE E STIMATION E RROR Fig. which is applied by means of a load-torque-controlled dc motor. Comparison for an Output Trajectory Tracking Application This brief only deals with state estimation for a three-phase induction motor.3 2 1. however. Fig.502 IEEE TRANSACTIONS ON NEURAL NETWORKS.9987 × 10−4 A 1 0 −1 −2 ψ β − x3 i α − x4 i β − x5 consider that for the EKF learning algorithm the covariances are used as design parameters [21]. respectively. however. it has a block control form [27] in order to simplify the synthesis of nonlinear controllers. The included real-time results illustrate the effectiveness of the proposed neural observer as applied to an electric threephase squirrel-cage induction motor.0995 rad/s 0. C. 
Real-Time Neural State Estimation This subsection presents the neural network observer RHONO previously proposed for the discrete-time induction motor model as applied in real time to the benchmark described above.0384 × 10−4 A −2. It is worth mentioning that for real-time state estimation there is no delay. coupled trough a gearbox.5 1 0.6478 × 10−4 rad/s 3. neither the speed nor the flux settles to nominal values. 3.5 0 0 5 time (s) 10 15 Fig. NO. VOL.0381 A Mean value −1. 11. It is important to remark that the state estimation proposed in this brief is performed using a chirp signal in order to excite most of the plant −3 0 5000 time (ms) 10000 15000 Fig.0332 A 0.0026 wb 0. 12. without knowledge or estimation of the parameters.1 0. dynamics as is shown in Figs.0028 wb 0.25 0. Finally. then. in the presence of parametric variations caused by winding heating due to motor operation and load torque variations. Figs. in this subsection we include real-time results for a control law designed on the basis of the block control and the sliding mode techniques developed initially with a typical nonlinear flux observer [25] and then implemented using the neural observer proposed in this brief [42]. in order to establish a comparison of the proposed neural observer with a typical nonlinear observer [32]. u α and u β are chirp functions.2 0. respectively. During the estimation process.0836 × 10−4 wb −3. 3 2 tracking error (rad/s) Variable ψ α − x2 ω − x1 Standard deviation 0.15 time (s) 0.05 0. respectively. [41]. The implementation is performed with a sampling time of 0. Considering that the induction motor is working in open loop. the torque in the rotor side of the induction motor is . 6 displays the estimation performance for the speed rotor. Real-time beta current estimation (plant signal in solid line and neural signal in dashed line). D.

For the state estimation using the proposed neural observer, the explicit knowledge or estimation of the load torque is not required; Fig. 11 is included just for illustration. From Table I, it is easy to see that the EKF-based learning algorithm provides small errors between the outputs of the NN model and the plant signals, even with randomly assigned initial conditions and NN weights.

Fig. 12 displays the speed tracking error for the typical nonlinear observer [25], and Fig. 13 presents the speed tracking error for the same control scheme using the proposed neural observer [42]. Table II includes the standard deviation and the mean value of the speed output trajectory tracking error for both schemes. It is easy to see that the neural observer produces a better performance, which is in fact an excellent indicator of the proposed scheme; additionally, it does not require the measurement or estimation of the load torque, nor the knowledge of the plant parameters required by typical nonlinear observers [32]. It is important to remark that the oscillatory nature of the tracking error is due to many causes, such as speed reversals, discontinuities in the load torque (Fig. 11), measurement noise, gearbox backlash, and nonlinearities due to the switches in the PWM driver, among others.

TABLE II
COMPARISON FOR THE SPEED OUTPUT TRAJECTORY TRACKING ERROR

Observer      Standard deviation   Mean value
Traditional   1.2359 rad/s         −0.3019 rad/s
Neural        0.0283 rad/s         −0.0054 rad/s

Fig. 13. Output trajectory tracking error with the proposed neural observer.

V. CONCLUSION

A RHONN structure was used to design a neural observer, named RHONO, for a class of MIMO discrete-time nonlinear systems; the observer was implemented online in a parallel configuration and trained with an EKF-based algorithm. The boundedness of the output, state, and estimation errors was established on the basis of the Lyapunov approach. Output trajectory tracking results were also included in order to show the effectiveness of the proposed observer as compared with another nonlinear observer. Real-time results show the effectiveness of the proposed observer, as applied to an electric three-phase squirrel-cage induction motor in the presence of time-varying disturbances. This brief only dealt with state estimation for the three-phase induction motor; control synthesis and implementation based on the proposed approaches were considered separately.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their useful comments, which helped to improve this brief.

APPENDIX
PROOF OF THEOREM 2

Consider the candidate Lyapunov function

  V_i(k) = \tilde{w}_i^T(k) P_i(k) \tilde{w}_i(k) + \tilde{x}_i^T(k) P_i(k) \tilde{x}_i(k)    (24)

whose first increment is defined as

  \Delta V_i(k) = V_i(k+1) - V_i(k)
                = \tilde{w}_i^T(k+1) P_i(k+1) \tilde{w}_i(k+1) + \tilde{x}_i^T(k+1) P_i(k+1) \tilde{x}_i(k+1)
                  - \tilde{w}_i^T(k) P_i(k) \tilde{w}_i(k) - \tilde{x}_i^T(k) P_i(k) \tilde{x}_i(k).    (25)

Using (16) and (14) in (25), then

  \Delta V_i(k) = [\tilde{w}_i(k) - \eta_i K_i(k) e(k)]^T A_i(k) [\tilde{w}_i(k) - \eta_i K_i(k) e(k)]
                  + [f(k) - g_i C \tilde{x}(k)]^T A_i(k) [f(k) - g_i C \tilde{x}(k)]
                  - \tilde{w}_i^T(k) P_i(k) \tilde{w}_i(k) - \tilde{x}_i^T(k) P_i(k) \tilde{x}_i(k)

with A_i(k) = P_i(k) - D_i(k) + Q_i, D_i(k) = K_i(k) H_i(k) P_i(k), and f(k) = \tilde{w}_i^T(k) z_i(\hat{x}(k), u(k)) + \epsilon'_{z_i}. Hence, \Delta V_i(k) can be bounded as

  \Delta V_i(k) \le 2 \tilde{w}_i^T(k) P_i(k) \tilde{w}_i(k) - 2 \tilde{w}_i^T(k) B_i(k) \tilde{w}_i(k)
                  + 2 \eta_i^2 \tilde{x}^T(k) C^T K_i^T(k) A_i(k) K_i(k) C \tilde{x}(k)
                  + 2 f^T(k) A_i(k) f(k) + 2 \tilde{x}^T(k) C^T g_i^T A_i(k) g_i C \tilde{x}(k)
                  - \tilde{w}_i^T(k) P_i(k) \tilde{w}_i(k) - \tilde{x}_i^T(k) P_i(k) \tilde{x}_i(k)    (26)

with B_i(k) = D_i(k) - Q_i. Using the inequalities

  X^T X + Y^T Y \ge 2 X^T Y
  X^T X + Y^T Y \ge -2 X^T Y
  -\lambda_{min}(P) \|X\|^2 \ge -X^T P X \ge -\lambda_{max}(P) \|X\|^2    (27)

which are valid \forall X, Y \in R^n and \forall P \in R^{n \times n}, P = P^T > 0, (26) can be rewritten as

  \Delta V_i(k) \le \lambda_{max}(P_i(k)) \|\tilde{w}_i(k)\|^2 - 2 \lambda_{min}(B_i(k)) \|\tilde{w}_i(k)\|^2
                  + 2 \|\eta_i K_i(k) C\|^2 \lambda_{max}(A_i(k)) \|\tilde{x}(k)\|^2
                  + 2 \lambda_{max}(A_i(k)) \|f(k)\|^2
                  + 2 \|g_i C\|^2 \lambda_{max}(A_i(k)) \|\tilde{x}(k)\|^2
                  - \lambda_{min}(P_i(k)) \|\tilde{x}(k)\|^2.

Substituting f(k) = \tilde{w}_i^T(k) z_i(\hat{x}(k), u(k)) + \epsilon'_{z_i}, then

  \Delta V_i(k) \le -\|\tilde{x}(k)\|^2 E_i(k) - \|\tilde{w}_i(k)\|^2 F_i(k) + 4 \|\epsilon'_{z_i}\|^2 \lambda_{max}(A_i(k))

with

  E_i(k) = \lambda_{min}(P_i(k)) - 2 \|\eta_i K_i(k) C\|^2 \lambda_{max}(A_i(k)) - 2 \|g_i C\|^2 \lambda_{max}(A_i(k))
  F_i(k) = 2 \lambda_{min}(B_i(k)) - \lambda_{max}(P_i(k)) - 4 \|z_i(\hat{x}(k), u(k))\|^2 \lambda_{max}(A_i(k)).

Hence, \Delta V_i(k) < 0 when

  \|\tilde{x}(k)\| > \sqrt{ 4 \|\epsilon'_{z_i}\|^2 \lambda_{max}(A_i(k)) / E_i(k) } \equiv \kappa_1

or

  \|\tilde{w}_i(k)\| > \sqrt{ 4 \|\epsilon'_{z_i}\|^2 \lambda_{max}(A_i(k)) / F_i(k) } \equiv \kappa_2.

Therefore, the solution of (19) and (20) is stable; hence, the estimation error and the RHONO weights are SGUUB [14]. Considering (7) and (18), it is easy to see that the output error has an algebraic relation with \tilde{x}(k); for that reason, if \tilde{x}(k) is bounded, e(k) is bounded too

  e(k) = C \tilde{x}(k),    \|e(k)\| \le \|C\| \|\tilde{x}(k)\|.
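As a purely numeric illustration of the ultimate bounds at the end of this proof, the following uses invented scalar stand-ins for the spectral quantities (chosen only so that E_i(k) and F_i(k) are positive) and evaluates the radii kappa_1 and kappa_2 outside of which the Lyapunov increment is negative.

```python
import numpy as np

# Assumed scalar stand-ins for the quantities in the proof (not from the paper)
lam_max_A, lam_min_P, lam_max_P = 0.5, 1.0, 1.2
lam_min_B = 1.0
norm_etaKC, norm_gC = 0.3, 0.4
norm_z, norm_eps = 0.2, 0.05

E = lam_min_P - 2 * norm_etaKC**2 * lam_max_A - 2 * norm_gC**2 * lam_max_A
F = 2 * lam_min_B - lam_max_P - 4 * norm_z**2 * lam_max_A

# Ultimate bounds: the Lyapunov increment is negative outside these radii
kappa1 = np.sqrt(4 * norm_eps**2 * lam_max_A / E)
kappa2 = np.sqrt(4 * norm_eps**2 * lam_max_A / F)
```

With these stand-ins, both denominators are positive, so the estimation error and the weight error are ultimately confined to balls of radii kappa_1 and kappa_2, which is exactly the SGUUB property claimed.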

REFERENCES

[1] F. van der Heijden, R. P. W. Duin, D. de Ridder, and D. M. J. Tax, Classification, Parameter Estimation and State Estimation: An Engineering Approach Using MATLAB. New York: Wiley, 2004.
[2] G. Besançon, Ed., Nonlinear Observers and Applications (Lecture Notes in Control and Information Sciences, vol. 363). Berlin, Germany: Springer-Verlag, 2007.
[3] J. A. Farrell and M. M. Polycarpou, Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches. New York: Wiley, 2006.
[4] A. S. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control. Singapore: World Scientific, 2001.
[5] S. Nicosia and A. Tornambè, "High-gain observers in the state and parameter estimation of robots having elastic joints," Syst. Control Lett., vol. 13, no. 4, pp. 331–337, 1989.
[6] R. Marino, "Adaptive observers for single output nonlinear systems," IEEE Trans. Autom. Control, vol. 35, no. 9, pp. 1054–1058, Sep. 1990.
[7] A. J. Krener and A. Isidori, "Linearization by output injection and nonlinear observers," Syst. Control Lett., vol. 3, no. 1, pp. 47–52, 1983.
[8] J. Liang, Z. Wang, and X. Liu, "State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: The discrete-time case," IEEE Trans. Neural Netw., vol. 20, no. 5, pp. 781–793, May 2009.
[9] Y. Liu, Z. Wang, and X. Liu, "Design of exponential state estimators for neural networks with mixed time delays," Phys. Lett. A, vol. 364, pp. 401–412, 2007.
[10] Z. Wang, D. W. C. Ho, and X. Liu, "State estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan. 2005.
[11] F. Chen and M. W. Dunnigan, "Comparative study of a sliding-mode observer and Kalman filters for full state estimation in an induction machine," IEE Proc. Electric Power Appl., vol. 149, no. 1, pp. 53–64, Jan. 2002.
[12] H. Huang, G. Feng, and J. Cao, "Robust state estimation for uncertain neural networks with time-varying delay," IEEE Trans. Neural Netw., vol. 19, no. 8, pp. 1329–1339, Aug. 2008.
[13] B. Walcott and S. Zak, "State observation of nonlinear uncertain dynamical systems," IEEE Trans. Autom. Control, vol. 32, no. 2, pp. 166–170, Feb. 1987.
[14] Y. H. Kim and F. L. Lewis, High-Level Feedback Control with Neural Networks. Singapore: World Scientific, 1998.
[15] A. Y. Alanis, E. N. Sanchez, and A. G. Loukianov, "Discrete-time adaptive backstepping nonlinear control via high-order neural networks," IEEE Trans. Neural Netw., vol. 18, no. 4, pp. 1185–1195, Jul. 2007.
[16] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Comput., vol. 1, pp. 270–280, 1989.
[17] E. N. Sanchez and L. J. Ricalde, "Trajectory tracking via adaptive recurrent control with input saturation," in Proc. Int. Joint Conf. Neural Netw., Portland, OR, Jul. 2003, pp. 359–364.
[18] J. Li and Y. Zhong, "Comparison of three Kalman filters for speed estimation of induction machines," in Proc. 40th IAS Annu. Meeting Ind. Appl. Soc., Oct. 2005, pp. 1792–1797.
[19] A. U. Levin and K. S. Narendra, "Control of nonlinear dynamical systems using neural networks. II. Observability, identification, and control," IEEE Trans. Neural Netw., vol. 7, no. 1, pp. 30–42, Jan. 1996.
[20] M. Elbuluk, L. Tong, and I. Husain, "Neural-network-based model reference adaptive systems for high-performance motor drives and motion controls," IEEE Trans. Ind. Appl., vol. 38, no. 3, pp. 879–886, May–Jun. 2002.
[21] S. Haykin, Ed., Kalman Filtering and Neural Networks. New York: Wiley, 2001.
[22] S. Singhal and L. Wu, "Training multilayer perceptrons with the extended Kalman algorithm," in Advances in Neural Information Processing Systems, D. S. Touretzky, Ed. San Mateo, CA: Morgan Kaufmann, 1989, pp. 133–140.
[23] D. Coutinho and L. Pereira, "A robust Luenberger-like observer for induction machines," in Proc. 31st Annu. Conf. IEEE Ind. Electron. Soc., Nov. 2005.
[24] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, 2nd ed. New York: Wiley, 1992.
[25] A. G. Loukianov, J. Rivera, and J. M. Cañedo, "Discrete-time sliding mode control of an induction motor," in Proc. 15th IFAC World Congr., Barcelona, Spain, 2002, pp. 106–111.
[26] Z. Yan and V. Utkin, "Sliding mode observers for electric machines: An overview," in Proc. 28th Annu. Conf. IEEE Ind. Electron. Soc., Nov. 2002.
[27] V. Utkin, J. Guldner, and J. Shi, Sliding Mode Control in Electromechanical Systems. New York: Taylor & Francis, 1999.
[28] F. L. Lewis, S. Jagannathan, and A. Yesildirek, Neural Network Control of Robot Manipulators and Nonlinear Systems. New York: Taylor & Francis, 1999.

[29] S. S. Ge, J. Zhang, and T. H. Lee, "Adaptive neural network control for a class of MIMO nonlinear systems with disturbances in discrete-time," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 4, pp. 1630–1645, Aug. 2004.
[30] H. K. Khalil, Nonlinear Systems, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 1996.
[31] G. A. Rovithakis and M. A. Christodoulou, Adaptive Control with Recurrent High-Order Neural Networks. New York: Springer-Verlag, 2000.
[32] H. K. Khalil, E. G. Strangas, and S. Jurkovic, "Speed observer and reduced nonlinear model for sensorless control of induction motors," IEEE Trans. Control Syst. Technol., vol. 17, no. 2, pp. 327–339, Mar. 2009.
[33] J. Holtz, "High dynamic speed sensorless AC drive with on-line model parameter tuning for steady-state accuracy," IEEE Trans. Ind. Electron.
[34] W. Leonhard, Control of Electrical Drives, 2nd ed. Berlin, Germany: Springer-Verlag, 1996.
[35] F. Khorrami, P. Krishnamurthy, and H. Melkote, Modeling and Adaptive Nonlinear Control of Electric Motors. Berlin, Germany: Springer-Verlag, 2003.
[36] J. Quiñones, "Real time implementation of a three phase induction motor control," M.S. thesis, Cinvestav, Unidad Guadalajara, Guadalajara, Jalisco, Mexico.
[37] A. Y. Alanis, E. N. Sanchez, and A. G. Loukianov, "Real-time output tracking for induction motors by recurrent high-order neural network control," in Proc. IEEE 17th Mediterranean Conf. Control Autom., Thessaloniki, Greece, Jun. 2009, pp. 868–873.
[38] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 1999.
[39] M. Norgaard, O. Ravn, N. K. Poulsen, and L. K. Hansen, Neural Networks for Modelling and Control of Dynamic Systems. New York: Springer-Verlag, 2000.
[40] T. Söderström and P. Stoica, System Identification. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[41] L. A. Feldkamp, D. V. Prokhorov, and T. M. Feldkamp, "Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks," Neural Netw., vol. 16, nos. 5–6, pp. 683–689, 2003.
[42] A. Y. Alanis, E. N. Sanchez, A. G. Loukianov, and M. A. Perez-Cisneros, "Real-time discrete neural block control using sliding modes for electric induction motors," IEEE Trans. Control Syst. Technol., vol. 18, no. 1, pp. 11–21, Jan. 2010.

BELM: Bayesian Extreme Learning Machine

Emilio Soria-Olivas, Member, IEEE, Juan Gómez-Sanchis, Member, IEEE, José D. Martín, Member, IEEE, Joan Vila-Francés, Member, IEEE, Marcelino Martínez, Member, IEEE, José R. Magdalena, Member, IEEE, and Antonio J. Serrano, Member, IEEE

Abstract—The theory of extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains the confidence intervals (CIs) without the need of applying computationally intensive methods, such as the bootstrap; and it presents high generalization capabilities. The Bayesian ELM is benchmarked against the classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, reduction of the probability of model overfitting, and use of a priori knowledge.

Index Terms—Bayesian, bootstrap, extreme learning machine, multilayer perceptron, radial basis function.

Manuscript received June 4, 2010; accepted December 24, 2010. Date of publication January 20, 2011; date of current version March 2, 2011. This work was supported in part by the Spanish Ministry for Education and Science under Grant TIN2007-61006 (Aprendizaje por Refuerzo Aplicado en Farmacocinetica), and in part by Project CSD2007-00018.
The authors are with the Digital Signal Processing Group, Department of Electronic Engineering, ETSE, University of Valencia, Burjassot 46100, Spain (e-mail: emilio.soria@uv.es; juan.gomez-sanchis@uv.es; jose.d.martin@uv.es; joan.vila@uv.es; marcelino.martinez@uv.es; rafael.magdalena@uv.es; antonio.j.serrano@uv.es).
Color versions of one or more of the figures in this brief are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNN.2010.2103956

I. INTRODUCTION

Extreme learning machine (ELM) is a recent approach for neural models with only one hidden layer, whose goal is mapping an original input space into an output space in which the tackled problem can be solved. Some neural models of this kind are the two most widely used ones, namely, the multilayer perceptron (MLP) and the radial basis function neural network (RBFNN) [1]. ELM proposes a random initialization of the parameters of the hidden layer (weights and biases in the case of the MLP [2], and centers and variances in the case of the RBFNN [3]); afterwards, the weights of the output layer are computed using a least-mean-squares method based on applying the Moore–Penrose generalized inverse [4]. Therefore, the computational cost is much lower than when using other classical learning algorithms, such as gradient-descent methods or global search approaches (genetic algorithms, particle swarm, etc.). Since the amount of available data grows exponentially, ELM represents a suitable approach to obtain models from huge databases within a reasonable time. Its application to different fields has become more and more common lately [5], [6], and some algorithms have been proposed in order to improve its performance [7]–[9].

The research on Bayesian methods for neural models has currently become very intense. These methods introduce a probability distribution on the network parameters and on the committed errors, and many approaches have shown their suitability in different fields [10]. Some of the most relevant approaches of this kind are the probabilistic version of self-organizing maps (the generative topographic mapping) and the relevance vector machine, which is the probabilistic approach for support vector machines [10], [11].

This brief proposes the Bayesian ELM (BELM), which has the advantages of both ELM and Bayesian models [10], thus involving a low computational cost and additionally building the corresponding confidence interval (CI) without the need for using other methods, such as the bootstrap, that are computationally costly.

The rest of this brief is outlined as follows. Section II describes ELM and the proposed BELM. The performance of BELM is evaluated using different standard datasets, commonly used for machine learning benchmarking, in Section III. This brief ends with the conclusion in Section IV.
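The ELM procedure described in this introduction (random hidden layer, output weights by a Moore–Penrose pseudoinverse) and a Bayesian treatment of that output layer can be sketched as follows. The prior and noise precisions alpha and beta are assumed fixed here for illustration; in a full Bayesian treatment they would themselves be estimated, e.g., by evidence maximization.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random hidden layer of an MLP-style ELM."""
    return np.tanh(X @ W + b)

def elm_fit(X, y, n_hidden=20):
    """Classical ELM: random hidden layer, then a least-squares solve
    for the output weights via the Moore-Penrose pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = elm_hidden(X, W, b)
    beta_out = np.linalg.pinv(H) @ y
    return W, b, beta_out

def belm_fit(X, y, n_hidden=20, alpha=1e-2, beta=100.0):
    """Bayesian output layer: Gaussian prior (precision alpha) on the
    output weights and Gaussian noise (precision beta) give a Gaussian
    posterior with mean m and covariance S."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = elm_hidden(X, W, b)
    S = np.linalg.inv(alpha * np.eye(n_hidden) + beta * H.T @ H)
    m = beta * S @ H.T @ y
    return W, b, m, S

def belm_predict(X, W, b, m, S, beta=100.0):
    """Predictive mean and variance; the variance yields a CI directly,
    with no bootstrap resampling."""
    H = elm_hidden(X, W, b)
    mean = H @ m
    var = 1.0 / beta + np.sum(H @ S * H, axis=1)
    return mean, var
```

The pseudoinverse step is what keeps the training cost low, and the posterior covariance S is what replaces the bootstrap when building confidence intervals.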