ISA Transactions: Jinzhu Peng, Rickey Dubay
ISA Transactions
journal homepage: www.elsevier.com/locate/isatrans
given in Eqs. (2) and (5) are modified to include the speed-dependent friction as follows,

Jm dωm/dt = τm − Bm ωm − τs − τf(ωm)  (9)

JL dωL/dt = τs − BL ωL − τd − τf(ωL).  (10)

Fig. 3. The structure of the Wiener-type neural network.
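A minimal numerical sketch of Eqs. (9) and (10), assuming a Coulomb-type friction τf(ω) = τc·sign(ω) and a linear shaft torque τs = Ks·θ (θ being the shaft twist). Only Jm and Bm correspond to values in Table 1; JL, BL, Ks, τc and the applied torque are illustrative assumptions:

```python
import math

def simulate(steps=2000, dt=1e-4,
             Jm=3.18e-6, JL=1e-5, Bm=6.74e-6, BL=1e-5,
             Ks=0.05, tau_c=5e-3):
    """Forward-Euler integration of Eqs. (9)-(10) with a Coulomb-type
    speed-dependent friction tau_f(w) = tau_c * sign(w)."""
    wm = wL = 0.0      # motor and load speeds (rad/s)
    theta = 0.0        # shaft twist (rad), used for the assumed tau_s = Ks*theta
    tau_f = lambda w: tau_c * math.copysign(1.0, w) if w != 0.0 else 0.0
    hist = []
    for _ in range(steps):
        tau_m = 0.05           # constant applied motor torque (N*m), assumed
        tau_s = Ks * theta     # elastic shaft torque, assumed linear in twist
        tau_d = 0.0            # no load disturbance in this sketch
        dwm = (tau_m - Bm * wm - tau_s - tau_f(wm)) / Jm   # Eq. (9)
        dwL = (tau_s - BL * wL - tau_d - tau_f(wL)) / JL   # Eq. (10)
        wm += dt * dwm
        wL += dt * dwL
        theta += dt * (wm - wL)
        hist.append((wm, wL))
    return hist
```

With a constant positive torque both inertias spin up, while the Coulomb terms keep the shaft stationary until the applied torque exceeds them, which is the mechanism behind the dead-zone discussed next.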
Dead-zone is caused primarily by the Coulomb friction force [7]. The Coulomb friction prevents the rotor from turning until the torque τm is large enough to overcome it. The main purpose of this paper is to present an adaptive controller for a DC motor system that achieves good performance in the presence of dead-zone nonlinearities.

3. Wiener-type neural network for identification

Unlike black-box models, Wiener and Hammerstein models, which are called block-oriented nonlinear models, have a clear physical interpretation [28,29]. In this section, a Wiener-type neural network is designed to obtain the Wiener model entirely. The back-propagation (BP) training algorithm for updating the weights is presented in detail.

3.1. Wiener model formulation

3.2. Wiener-type neural network

A block-oriented artificial neural network, termed a Wiener-type neural network, is designed as shown in Fig. 3. It consists of a single linear node with two tapped delay lines; the delay lines form the model of the linear dynamic element, followed by the nonlinear static element. In Fig. 3, x̂(k) is raised to the powers 1, . . . , p and each signal path is multiplied by the corresponding weight ĉ1, . . . , ĉp. The output ŷ(k) can be expressed as

ŷ(k) = f(x̂(k)) = Σ_{l=1}^{p} ĉl x̂^l(k).  (13)

The hidden-layer output x̂(k) of the WNN can be expressed as

x̂(k) = −â1 x̂(k−1) − â2 x̂(k−2) − · · · − â_na x̂(k−na) + b̂1 u(k−1) + · · · + b̂_nb u(k−nb).  (14)
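The two blocks of the WNN, the linear recursion of Eq. (14) followed by the static polynomial of Eq. (13), can be sketched in Python as a single forward pass; the function and argument names are illustrative, not from the paper:

```python
def wnn_forward(a_hat, b_hat, c_hat, u_hist, x_hist):
    """One forward pass of the Wiener-type neural network.
    Linear dynamic block, Eq. (14):
        x(k) = -sum_i a_i * x(k-i) + sum_j b_j * u(k-j)
    Static polynomial block, Eq. (13):
        y(k) = sum_l c_l * x(k)**l
    Conventions: u_hist[j-1] = u(k-j), x_hist[i-1] = x(k-i)."""
    x_k = -sum(a * x for a, x in zip(a_hat, x_hist)) \
          + sum(b * u for b, u in zip(b_hat, u_hist))
    y_k = sum(c * x_k ** l for l, c in enumerate(c_hat, start=1))
    return x_k, y_k
```

For example, with na = 3, nb = 2 and p = 3 (the orders used later in the experiments), `a_hat`, `b_hat` and `c_hat` have 3, 2 and 3 entries respectively.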
According to Eqs. (15), (13) and (14), the partial derivatives of the WNN output ŷ(k) with respect to the weights ĉl and the intermediate variable x̂(k) can be calculated as

∂ŷ(k)/∂ĉl = x̂^l(k),  l = 1, 2, . . . , p  (18)

∂ŷ(k)/∂x̂(k) = Σ_{l=1}^{p} l · ĉl · x̂^{l−1}(k).  (19)

Since each element x̂(k−i) is also a function of âi and b̂j, the partial derivatives of the intermediate variable x̂(k) with respect to the weights âi and b̂j can be calculated as

∂x̂(k)/∂âi = −x̂(k−i) − Σ_{s=1}^{na} âs · ∂x̂(k−s)/∂âi,  i = 1, 2, . . . , na  (20)

∂x̂(k)/∂b̂j = u(k−j) − Σ_{s=1}^{na} âs · ∂x̂(k−s)/∂b̂j,  j = 1, 2, . . . , nb.  (21)

From Eqs. (19)–(21), the following partial derivatives can be calculated as

∂ŷ(k)/∂âi = ∂ŷ(k)/∂x̂(k) · ∂x̂(k)/∂âi = [Σ_{l=1}^{p} l ĉl x̂^{l−1}(k)] · [−x̂(k−i) − Σ_{s=1}^{na} âs · ∂x̂(k−s)/∂âi],  i = 1, 2, . . . , na  (22)

∂ŷ(k)/∂b̂j = ∂ŷ(k)/∂x̂(k) · ∂x̂(k)/∂b̂j = [Σ_{l=1}^{p} l ĉl x̂^{l−1}(k)] · [u(k−j) − Σ_{s=1}^{na} âs · ∂x̂(k−s)/∂b̂j],  j = 1, 2, . . . , nb.  (23)

According to Eq. (17), the update laws of the weights ĉl, âi and b̂j can be calculated as

ĉl(k+1) = ĉl(k) + ηI · ê(k) · ∂ŷ(k)/∂ĉl,  l = 1, 2, . . . , p  (24)

âi(k+1) = âi(k) + ηI · ê(k) · ∂ŷ(k)/∂âi,  i = 1, 2, . . . , na  (25)

b̂j(k+1) = b̂j(k) + ηI · ê(k) · ∂ŷ(k)/∂b̂j,  j = 1, 2, . . . , nb  (26)

where the partial derivatives in Eqs. (24)–(26) are given in Eqs. (18), (22) and (23), respectively. From these evaluations the output ŷ of the WNN is determined, and the weights, which are associated with the Wiener model parameters, can be updated.

As mentioned previously, an ideal DC motor can be regarded as a linear dynamic system, with the dead-zone as a static nonlinearity that describes the insensitivity of the system to small input signals. This indicates that the DC motor system with DZC has the same structure as the proposed WNN. Therefore, the WNN is well suited to identifying the DC motor system with DZC.

From the above, the major contribution of the paper is a novel approach to obtaining the parameters of the traditional Wiener model, which is represented entirely by the WNN. The structure of the proposed WNN differs from existing neural networks in that the parameters of the Wiener model can be obtained directly from the weights of the WNN. Therefore, the training algorithm can be executed every timestep to update the weights and thereby obtain the identified Wiener model.

4. Adaptive PID-type neural network control based on WNN

In this section, as shown in Fig. 4, the WNN is used to model the DC motor system with DZC, and a PID-type neural network (PIDNN) is designed to control it. The control objective is to tune the parameters of the PIDNN controller so that the output y(k) of the nonlinear system tracks the desired trajectory yd(k), and, at the same time, to tune the parameters of the WNN identifier so that its output ŷ(k) models the nonlinear system behavior y(k) in real time. Therefore, the WNN identifier can provide updated model information to the PIDNN controller at every timestep [35].

4.1. PID controller

A typical discrete-time PID controller can be expressed as [8,11],

u(k) = KP(k) · e(k) + KI(k) · Σ e(k) + KD(k) · Δe(k)  (27)

where u(k) is the control effort at time k; KP(k), KI(k) = KP(k)Ts/Ti and KD(k) = KP(k)Td/Ts are the proportional, integral and derivative gains, respectively; Ts represents the sampling time; Ti and Td represent the integral and derivative time constants, respectively; e(k) is the tracking error defined as e(k) = yd(k) − y(k), and Δe(k) = e(k) − e(k−1); yd(k) is the desired plant output and y(k) is the actual plant output.
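The recursive gradients of Eqs. (20)–(21) and the update laws of Eqs. (24)–(26) can be sketched as a single WNN training step. The bookkeeping of past partials (`dx_da[i][s]` holding ∂x̂(k−s)/∂âi, most recent first, and likewise `dx_db`) is an implementation choice, not prescribed by the paper:

```python
def wnn_train_step(a_hat, b_hat, c_hat, x_k, x_hist, u_hist,
                   dx_da, dx_db, e_hat, eta=0.2):
    """One gradient step of Eqs. (24)-(26) for the Wiener-type NN.
    x_hist[i-1] = x(k-i), u_hist[j-1] = u(k-j), e_hat = y(k) - y_hat(k)."""
    na = len(a_hat)
    # Eq. (19): dy/dx, the slope of the static polynomial at x(k)
    dy_dx = sum(l * c * x_k ** (l - 1) for l, c in enumerate(c_hat, start=1))
    # Eq. (20): dx/da_i, recursing over the stored past partials
    new_dx_da = [-x_hist[i] - sum(a_hat[s] * dx_da[i][s] for s in range(na))
                 for i in range(na)]
    # Eq. (21): dx/db_j
    new_dx_db = [u_hist[j] - sum(a_hat[s] * dx_db[j][s] for s in range(na))
                 for j in range(len(b_hat))]
    # Eq. (24) with the gradient of Eq. (18): dy/dc_l = x(k)**l
    c_hat = [c + eta * e_hat * x_k ** l
             for l, c in enumerate(c_hat, start=1)]
    # Eqs. (25)-(26) with the chain-rule gradients of Eqs. (22)-(23)
    a_hat = [a + eta * e_hat * dy_dx * g for a, g in zip(a_hat, new_dx_da)]
    b_hat = [b + eta * e_hat * dy_dx * g for b, g in zip(b_hat, new_dx_db)]
    return a_hat, b_hat, c_hat, new_dx_da, new_dx_db
```

The caller is expected to shift `new_dx_da` and `new_dx_db` into the partials' histories before the next step, mirroring the recursion indices s = 1, . . . , na.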
592 J. Peng, R. Dubay / ISA Transactions 50 (2011) 588–598
The PID controller of Eq. (27) can also be expressed in the following incremental form,

Δu(k) = u(k) − u(k−1) = Σ_{i=1}^{3} Ki(k) · ei(k)  (28)

where Ki(k) for i = 1, 2, 3 are the parameters of the PID controller; e1(k) = e(k) − e(k−1), e2(k) = e(k) and e3(k) = e(k) − 2e(k−1) + e(k−2). In general, if the parameters Ki(k) are chosen to be optimal, satisfactory tracking performance can be obtained. However, it is difficult to select these parameters due to the plant nonlinearity. In this paper, a neural network whose weights correspond to the parameters of the PID controller is used to formulate the controller, so that the parameters Ki(k) can be updated through online training of the neural network.

4.2. PID-type neural network (PIDNN)

As shown in Fig. 4, the PIDNN acts as a controller. The output of the first layer netj and the output of the network Δu(k) are given as

netj = Σ_{i=1}^{3} vij · ei(k)  (29)

u(k) = u(k−1) + Δu(k) = u(k−1) + Σ_{j=1}^{3} Kj(k) · h(netj)  (30)

where vij for i = 1, 2, 3 and j = 1, 2, 3 are the weights between the input layer and the hidden layer, and h(·) is the nonlinear activation function of the hidden layer, which can be selected as

h(netj) = (1 − e^{−netj}) / (1 + e^{−netj}).  (31)

Thus, the derivative of Eq. (31) is

h′(netj) = (1/2)(1 − h²(netj)).  (32)

The training algorithm can now be formulated in order to make the PIDNN adaptive.

4.3. Training algorithm

For the adaptive PIDNN controller, the weights are updated along the negative gradient of the error function

JC(k) = (1/2) e²(k) = (1/2)(yd(k) − y(k))².  (33)

The gradients of JC(k) with respect to the weights Kj(k) and vij can be evaluated as

∂JC(k)/∂Kj(k) = −e(k) · ∂y(k)/∂Kj(k) = −e(k) · ∂y(k)/∂u(k) · h(netj)  (34)

∂JC(k)/∂vij(k) = −e(k) · ∂y(k)/∂vij(k) = −e(k) · ∂y(k)/∂u(k) · Kj(k) h′(netj) ei(k).  (35)

In Eqs. (34) and (35), ∂y(k)/∂u(k) denotes the sensitivity of the plant output with respect to its input. The sensitivity cannot be calculated directly from the output of the nonlinear system since the precise mathematical model is usually unknown in many practical systems. In general, once the WNN has been trained (off-line or online), ŷ(k) ≈ y(k). Therefore, the system sensitivity can be calculated as

yu(k) = ∂y(k)/∂u(k) ≈ ∂ŷ(k)/∂u(k) = ∂ŷ(k)/∂x̂(k) · ∂x̂(k)/∂u(k).  (36)

Using the BP training algorithm, the weight adaptation laws for the output and hidden layers are derived as

Kj(k+1) = Kj(k) − ηC · ∂JC(k)/∂Kj(k),  j = 1, 2, 3  (37)

vij(k+1) = vij(k) − ηC · ∂JC(k)/∂vij(k),  i, j = 1, 2, 3  (38)

where the respective partial derivatives in Eqs. (37) and (38) are given in Eqs. (34) and (35).

The proposed approach can be used to control the DC motor system with dead-zone (as in this work) as well as systems with other nonlinearities such as backlash, hysteresis and saturation. In addition, the approach can also be applied to identify and control other electrical and mechanical systems with invertible, unknown nonlinear functions.

5. Convergence analysis

In the training procedures of the WNN and the PIDNN, the update rules of Eqs. (24)–(26) and Eqs. (37)–(38) require a proper choice of the training rate η. A small η guarantees convergence but slows training; however, if η is too large, the training algorithm becomes unstable. In this section, an approach for selecting η properly is developed.

A discrete-type Lyapunov function can be given as [36,37],

V(k) = (1/2) e²(k).  (39)

Therefore, the change of the Lyapunov function can be obtained as

ΔV(k) = V(k+1) − V(k) = (1/2)[e²(k+1) − e²(k)] = Δe(k)[e(k) + Δe(k)/2].  (40)

The error difference can be represented as

e(k+1) = e(k) + Δe(k) = e(k) + [∂e(k)/∂W]ᵀ · ΔW.  (41)

Eqs. (40) and (41) will be used in the following convergence analyses of the WNN and the PIDNN. This analysis is necessary because convergence may be too slow, or the algorithm unstable, if the learning rate η is not properly chosen.

5.1. Convergence of WNN

From the update rule of Eq. (17), we have,

ΔWI = −ηI · ê(k) · ∂ê(k)/∂WI = ηI · ê(k) · ∂ŷ(k)/∂WI.  (42)

The following Theorem 1 [37] is revisited for selecting ηI properly, since the neural network in this paper is different.

Theorem 1. Let ηI be the training rate for the weights of the WNN and βI,max be defined as βI,max = max_k ‖βI(k)‖, where βI(k) = ∂ŷ(k)/∂WI and ‖·‖ is the usual Euclidean norm. Then convergence is guaranteed if ηI is chosen as

0 < ηI < 2 / β²I,max.  (43)
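One combined control-and-adaptation step of Eqs. (28)–(38) can be sketched as follows. The plant sensitivity `y_u` is assumed to be supplied externally, e.g. by the WNN through Eq. (36); function and argument names are illustrative:

```python
import math

def pidnn_step(K, V, e_hist, u_prev, y_u, e_k, eta_c=0.05):
    """One PIDNN step: control law of Eqs. (28)-(32), then the
    adaptation of Eqs. (34)-(38).
    K = [K1, K2, K3]; V[i][j] = v_ij (3x3 hidden weights);
    e_hist = (e(k), e(k-1), e(k-2)); y_u approximates dy/du, Eq. (36)."""
    e0, e1p, e2p = e_hist
    # Input signals of Eq. (28): difference, proportional, second difference
    ei = [e0 - e1p, e0, e0 - 2.0 * e1p + e2p]
    # Eq. (29): hidden-layer sums; Eq. (31): bipolar sigmoid activation
    net = [sum(V[i][j] * ei[i] for i in range(3)) for j in range(3)]
    h = [(1.0 - math.exp(-n)) / (1.0 + math.exp(-n)) for n in net]
    # Eq. (30): incremental control law
    u_k = u_prev + sum(K[j] * h[j] for j in range(3))
    # Eq. (32): derivative of the activation
    hp = [0.5 * (1.0 - hj * hj) for hj in h]
    # Negative gradients of Eqs. (34)-(35), using the supplied sensitivity y_u
    gK = [e_k * y_u * h[j] for j in range(3)]
    gV = [[e_k * y_u * K[j] * hp[j] * ei[i] for j in range(3)]
          for i in range(3)]
    # Eqs. (37)-(38): gradient-descent updates
    K = [K[j] + eta_c * gK[j] for j in range(3)]
    V = [[V[i][j] + eta_c * gV[i][j] for j in range(3)] for i in range(3)]
    return u_k, K, V
```

Note the gradients are formed before the weights are overwritten, so the vij update of Eq. (38) uses the Kj(k) of the current step, as Eq. (35) requires.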
Proof. From Eqs. (39)–(41), ΔV(k) can be represented as

ΔV(k) = Δê(k)[ê(k) + Δê(k)/2]
      = ηI ê(k) [∂ê(k)/∂WI]ᵀ · [∂ŷ(k)/∂WI] · {ê(k) + (1/2) ηI ê(k) [∂ê(k)/∂WI]ᵀ · [∂ŷ(k)/∂WI]}.  (44)

Since ê(k) = y(k) − ŷ(k), we have ∂ê(k)/∂WI = −∂ŷ(k)/∂WI. Substituting this in the last term of Eq. (44), we have,

ΔV(k) = −ηI ê²(k) ‖∂ŷ(k)/∂WI‖² + (1/2) η²I ê²(k) ‖∂ŷ(k)/∂WI‖⁴.  (45)

Defining βI,max = max_k ‖βI(k)‖ where βI(k) = ∂ŷ(k)/∂WI, we obtain,

Since Eq. (50) is similar to Eq. (45), the only difference is that yu(k) needs to be incorporated for the PIDNN. From Eq. (36), we can obtain the limit on yu(k) as

δmax = ‖yu(k)‖max = ‖∂ŷ(k)/∂x̂(k) · ∂x̂(k)/∂u(k)‖max ≤ ‖∂ŷ(k)/∂x̂(k)‖max · ‖∂x̂(k)/∂u(k)‖max  (51)

where δmax is the limit on the sensitivity function, estimated from Eq. (19) and ∂x̂(k)/∂u(k). As in the proof of Theorem 1, we can conclude that Eq. (48) guarantees the convergence of the training algorithm of the PIDNN, and the optimal training rate is ηC* = 1/(δ²max β²C,max), which is the upper half of the limit in Eq. (48). Note that ∂x̂(k)/∂u(k) can be calculated similarly to Eq. (21).

From the above analyses, the proper learning rate η can be
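The bound of Theorem 1 translates directly into a learning-rate selection rule. A small sketch, with `grad_norms` standing for observed values of ‖∂ŷ(k)/∂WI‖ collected over a training run (the helper name is illustrative):

```python
def safe_learning_rate(grad_norms, margin=0.5):
    """Pick eta from Theorem 1: convergence requires
        0 < eta < 2 / beta_max**2,
    where beta_max is the largest observed gradient norm.
    margin=0.5 returns the midpoint eta* = 1 / beta_max**2
    used in the text as the optimal rate."""
    beta_max = max(grad_norms)
    return margin * 2.0 / (beta_max ** 2)
```

In practice beta_max can only be estimated from data, so a margin below 1 leaves headroom against gradients larger than any seen so far.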
Fig. 5. WNN DC model identification, plant output y (dotted curve) and WNN output ŷ (solid curve).
Table 1
Specifications of the Honeywell 22VM51-020-5 DC motor.

Motor characteristic           Parameter   Value         Unit
Rated voltage (DC)             –           24            V
Rated current (RMS)            –           2.2           A
Rated torque                   –           9.18 × 10⁻²   N·m
Rated speed                    –           2225          RPM
Back EMF constant              Ke          0.0374        V·s/rad
Torque constant                Km          0.0374        N·m/A
Terminal resistance            Ra          3.6           Ω
Rotor inductance               La          6.0 × 10⁻⁴    H
Viscous damping coefficient    Bm          6.74 × 10⁻⁶   N·m·s/rad
Rotor inertia                  Jm          3.18 × 10⁻⁶   kg·m²

The specifications of the DC motor are shown in Table 1. The torque motor was controlled by a PC with a National Instruments 16-bit data acquisition (DAQ) board using a C-based program. The DAQ board receives the transducer output (a DC voltage), which provides the rotational speed of the DC motor system. The motor receives an analog control signal from the DAQ board, generated by the software algorithm [38]. In the experimental setup, the main control algorithm is implemented at a Ts = 10 ms sampling rate.

Test 1 (open loop)—WNN for DC model identification.

To test the identification performance of the WNN for the DC motor with dead-zone, an open-loop test is performed with a sinusoidal input u(k) = 3.0 sin(2πkTs/10) V. In order to test the ability to identify the dead-zone, one cycle of data (1000 input–output pairs) is used to train the WNN. The initial parameters in Eqs. (24)–(26) are set to ŷ(k) = 0, x̂(k) = 0, ∂ŷ(k)/∂âi = 0, ∂ŷ(k)/∂b̂j = 0 and ∂ŷ(k)/∂ĉl = 0 for k ≤ 0. The initial values of the weights âi, b̂j and ĉl are all selected randomly within [−1, 1]. The training rate is selected as η = 0.2. Choosing the objective error function JI < 0.01 and a maximum of 50 iterations, and following the step-by-step procedure from Step 1 to Step 6, the parameters in Eqs. (13) and (14) are selected as na = 3, nb = 2 and p = 3. After 19 iterations of training, the stop condition (JI < 0.01) is met. Fig. 5 shows the identification results for the DC motor system, where the WNN can identify the plant output. Fig. 6 shows a portion of the system response at very low speeds in order to emphasize the DZC. It is clear that the WNN can identify the dead-zone better than the Wiener model using the RLS method.

Test 2 (closed loop)—Controller performance.

The adaptive PIDNN control with WNN identification is used to control the DC motor with dead-zone in practice. The setpoints are generated by a sinusoidal function yd(k) = 200 sin(2πkTs/40) (RPM). Figs. 7 and 8 show the first-loop and sixth-loop control results, respectively. The controller achieves good performance, and the motor dead-zone effect gets smaller over these loops. This is because in the first loop the weights of the PIDNN are set randomly within [−1, 1]. In addition, since the WNN was trained off-line, it does not yet identify the dead-zone well online; during the off-line procedure, only limited datasets were used for training, whereas in online control more data points become available to the WNN identifier at every timestep. After six loops, the weights of the PIDNN reach updated values that provide good control of the plant. The overall result is that the WNN identifies the dead-zone accurately, facilitating continuous improvement of the PIDNN controller. A PID controller tuned using the Ziegler–Nichols method [39] was also used to control the DC motor system with DZC, with results shown in Fig. 9. The PID controller achieves relatively good performance; however, the dead-zone region is not compensated or minimized because the PID controller gains are fixed during the entire control procedure. In general, this would be the case for any fixed-gain control scheme.

From Figs. 7–9, it can be seen that the voltage has oscillations within ±0.2 V, which is not a significant problem in the control. This noise is due to the rotational-speed tachogenerator sensor, which has an inherent 3% ripple. The noise could be reduced by a filter [40]; all results in this paper are unfiltered.
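The open-loop excitation of Test 1 can be reproduced on a stand-in plant for quick experimentation. The dead-zone break-points, slope and the first-order lag below are hypothetical placeholders, since the real plant in the paper is the DC motor itself:

```python
import math

def dead_zone(v, br=0.5, bl=-0.5, m=1.0):
    """Static dead-zone: output is zero inside [bl, br], linear with
    slope m outside (break-points and slope are assumed values)."""
    if v > br:
        return m * (v - br)
    if v < bl:
        return m * (v - bl)
    return 0.0

def open_loop_dataset(n=1000, Ts=0.01):
    """Excitation of Test 1, u(k) = 3.0*sin(2*pi*k*Ts/10), driving a
    dead-zone followed by a hypothetical first-order lag; returns the
    1000 input-output pairs used for off-line WNN training."""
    data, y = [], 0.0
    for k in range(n):
        u = 3.0 * math.sin(2.0 * math.pi * k * Ts / 10.0)
        y = 0.9 * y + 0.1 * dead_zone(u)   # stand-in linear dynamics
        data.append((u, y))
    return data
```

The resulting pairs play the role of the "one cycle dataset (1000 pairs of data)" fed to the WNN training loop.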
Fig. 6. WNN dead-zone identification, plant output y (dotted ‘◦’ curve), Wiener model using RLS (dashed curve) and WNN output ŷ (solid ‘∗’ curve).
Fig. 7. PIDNN control results (first loop), setpoints yd (dotted) and PIDNN y (solid).
Fig. 8. PIDNN control results (sixth loop), setpoints yd (dotted) and PIDNN y (solid).

Test 3 (closed loop)—Comparison to PID control.

Additional testing was performed to demonstrate the performance of the proposed adaptive PIDNN controller. A square-wave setpoint profile corresponding to ±100 RPM is used. The closed-loop responses of the proposed adaptive PIDNN and PID controllers are shown in Fig. 10. The figures illustrate that the DC motor system controlled by the PIDNN controller has better tracking performance than with the PID controller. The corresponding control efforts are illustrated in Fig. 11. To evaluate the tracking performance of the two controllers, the rise time, settling time, overshoot, integral of absolute error (IAE) and integral of squared error (ISE) performance indices are used [41]. From Table 2, the performance indices indicate that both controllers have similar rise times, with significant improvements from the PIDNN controller in settling time and overshoot. The IAE and ISE values are relatively similar for both schemes.

Table 2
Comparison of controller performance.

Controller   Rise time (s)   Settling time (s) (5%)   Overshoot (%)   IAE   ISE

7. Conclusion

In this study, two neural networks are used to develop an identification and control methodology. An adaptive PIDNN control method based on the WNN is derived for a DC motor system with DZC. In the control scheme, the WNN acts as a forward identifier that provides the dynamic behavior of the DC motor system in real time to facilitate adaptive control of the system. As a result, the DZC of the motor system is minimized during closed-loop operation in comparison to a fixed PID scheme. This combination provides an effective solution for enhancing the control performance of traditional PID controllers.
Fig. 10. Setpoint tracking of the PIDNN and PID control systems.
Fig. 11. Control effort of the PIDNN and PID control systems.
Acknowledgments

The authors would like to acknowledge the funding received from the Natural Sciences and Engineering Research Council of Canada to conduct this research investigation.

References

[1] Armstrong-Hélouvry B, Dupont P, Canudas de Wit C. A survey of models, analysis tools and compensation methods for the control of machines with friction. Automatica 1994;30(7):1083–138.
[2] Chen G, Chen Y, Ogmen H. Identifying chaotic systems via a Wiener-type cascade model. IEEE Control Systems Magazine 1997;17:29–36.
[3] Xu M, Chen G, Tian YT. Identifying chaotic systems using Wiener and Hammerstein cascade models. Mathematical and Computer Modelling 2001;33:483–93.
[4] Norquay SJ, Palazoglu A, Romagnoli JA. Model predictive control based on Wiener models. Chemical Engineering Science 1998;53:75–84.
[5] Al-Duwaish H, Karim MN, Chandrasekar V. Use of multilayer feedforward neural networks in identification and control of Wiener model. IEE Proceedings—Control Theory and Applications 1996;143:255–8.
[6] Janczak A. Neural network approach for identification of Hammerstein systems. International Journal of Control 2003;76:1749–66.
[7] Kara T, Eker İ. Nonlinear modeling and identification of a DC motor for bidirectional operation with real time experiments. Energy Conversion and Management 2004;45:1087–106.
[8] Bennett S. Development of the PID controller. IEEE Control Systems Magazine 1993;13:28–38.
[9] Howell MN, Gordon TJ, Best MC. The application of continuous action reinforcement learning automata to adaptive PID tuning. In: IEEE seminar on learning systems for control. 2000. p. 1–4.
[10] Cameron F, Seborg DE. A self-tuning controller with a PID structure. International Journal of Control 1983;38(2):401–17.
[11] Kim JH, Choi KK. Self-tuning discrete PID controller. IEEE Transactions on Industrial Electronics 1987;43:298–300.
[12] Vega P, Prada C, Aleixander V. Self-tuning predictive PID controller. IEE Proceedings—Control Theory and Applications 1991;138(3):303–11.
[13] Martins FG, Coelho MAN. Application of feed-forward artificial neural networks to improve process control of PID-based control algorithms. Computers and Chemical Engineering 2000;24:853–8.
[14] Chen J, Huang TC. Applying neural networks to on-line updated PID controllers for nonlinear process control. Journal of Process Control 2004;14(2):211–30.
[15] Yuan XF, Wang YN. Neural networks based self-learning PID control of electronic throttle. Nonlinear Dynamics 2009;55:385–93.
[16] Shu H, Pi Y. PID neural networks for time-delay systems. Computers and Chemical Engineering 2000;24:859–62.
[17] Cong S, Liang Y. PID-like neural network nonlinear adaptive control for uncertain multivariable motion control systems. IEEE Transactions on Industrial Electronics 2009;56(10):3872–9.
[18] Wang XS, Su CY, Hong H. Robust adaptive control of a class of nonlinear systems with unknown dead-zone. Automatica 2004;40:407–13.
[19] Zhonghua W, Bo Y, Lin C, Shusheng Z. Robust adaptive deadzone compensation of DC servo system. IEE Proceedings—Control Theory and Applications 2006;153(6):709–13.
[20] Zhou J, Wen C, Zhang Y. Adaptive output control of nonlinear systems with uncertain deadzone nonlinearity. IEEE Transactions on Automatic Control 2006;51(3):504–11.
[21] Ibrir S, Xie WF, Su CY. Adaptive tracking of nonlinear systems with non-symmetric deadzone input. Automatica 2007;43:522–30.
[22] Jang JO, Jeon GJ. A parallel neuro-controller for DC motors containing nonlinear friction. Neurocomputing 2000;30:233–48.
[23] Šelmić RR, Lewis FL. Deadzone compensation in motion control systems using neural networks. IEEE Transactions on Automatic Control 2000;45(4):602–13.
[24] Zhang TP, Ge SS. Adaptive neural control of MIMO nonlinear state time-varying delay systems with unknown dead-zones and gain signs. Automatica 2007;43:1021–33.
[25] Oh SY, Park DJ. Design of new adaptive fuzzy logic controller for nonlinear plants with unknown or time-varying dead zones. IEEE Transactions on Fuzzy Systems 1998;6(4):482–91.
[26] Lewis FL, Tim WK, Wang LZ, Li ZX. Deadzone compensation in motion control systems using adaptive fuzzy logic control. IEEE Transactions on Control Systems Technology 1999;7(6):731–42.
[27] Nouri K, Dhaouadi R, Braiek NB. Adaptive control of a nonlinear DC motor drive using recurrent neural networks. Applied Soft Computing 2008;8:371–82.
[28] Janczak A. Identification of nonlinear systems using neural networks and polynomial models: a block-oriented approach. New York: Springer-Verlag; 2004.
[29] Ławryńczuk M. Computationally efficient nonlinear predictive control based on neural Wiener models. Neurocomputing 2010;74:401–17.
[30] Vörös J. Parameter identification of Wiener systems with multisegment piecewise-linear nonlinearities. Systems & Control Letters 2007;56:99–105.
[31] Boutayeb M, Darouach M. Recursive identification method for MISO Wiener–Hammerstein model. IEEE Transactions on Automatic Control 1995;40:287–91.
[32] Hagenblad A, Ljung L, Wills A. Maximum likelihood identification of Wiener models. Automatica 2008;44:2697–705.
[33] Kalafatis AD, Arifin N, Wang L, Cluett WR. A new approach to the identification of pH processes based on the Wiener model. Chemical Engineering Science 1995;50:3693–701.
[34] Wigren T. Recursive prediction error identification using the nonlinear Wiener model. Automatica 1993;29:1011–25.
[35] Peng J, Dubay R. Adaptive control for nonlinear dynamic systems using recurrent neural networks. In: The 20th international conference on flexible automation and intelligent manufacturing. 2010.
[36] Li X, Chen ZQ, Yuan ZZ. Simple recurrent neural network-based adaptive predictive control for nonlinear systems. Asian Journal of Control 2002;4(2):231–9.
[37] Bao Y, Wang H, Zhang J. Adaptive inverse control of variable speed wind turbine. Nonlinear Dynamics 2010;61:819–27.
[38] Abu-Ayyad M, Dubay R, Kember G. SISO extended predictive control: implementation and robust stability analysis. ISA Transactions 2006;45:373–91.
[39] Åström KJ, Hägglund T. Revisiting the Ziegler–Nichols step response method for PID control. Journal of Process Control 2004;14:635–50.
[40] Mutoh A, Nitta S, Konishi K. The attenuation characteristics of noise filter using motor-generator. In: IEEE international symposium on electromagnetic compatibility, vol. 2. 2000. p. 557–62.
[41] Eker İ. Second-order sliding mode control with experimental application. ISA Transactions 2010;49:394–405.