
Composite Learning Control With Application to Inverted Pendulums

Yongping Pan∗, Lin Pan†‡ , and Haoyong Yu∗


∗ School of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
Email: biepany@nus.edu.sg; bieyhy@nus.edu.sg
† Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Luxembourg
Email: lin.pan@uni.lu
‡ School of Electric and Electronic Engineering, Wuhan Polytechnic University, Wuhan 430023, China

Abstract—Composite adaptive control (CAC), which integrates direct and indirect adaptive control techniques, can achieve smaller tracking errors and faster parameter convergence compared with direct and indirect adaptive control techniques. However, the condition of persistent excitation (PE) still has to be satisfied to guarantee parameter convergence in CAC. This paper proposes a novel model reference composite learning control (MRCLC) strategy for a class of affine nonlinear systems with parametric uncertainties to guarantee parameter convergence without the PE condition. In the composite learning, an integral over a moving-time window is utilized to construct a prediction error, a linear filter is applied to avoid the differentiation of plant states, and both the tracking error and the prediction error are applied to update parametric estimates. It is proven that the closed-loop system achieves global exponential-like stability under interval excitation rather than PE of regression functions. The effectiveness of the proposed MRCLC has been verified by its application to an inverted pendulum control problem.

I. INTRODUCTION

Adaptive control is one of the major control techniques for handling uncertainties in nonlinear systems and has continued to attract considerable attention in recent years [1]–[16]. In particular, model reference adaptive control (MRAC) is a popular adaptive control architecture which aims to make an uncertain dynamical system behave like a chosen reference model. The way of parameter estimation in adaptive control gives rise to two different schemes, namely indirect and direct schemes [17]. Composite adaptive control (CAC) is an integrated direct and indirect adaptive control technique which aims to achieve better tracking and parameter estimation through faster and smoother parameter adaptation [18]. In CAC, prediction errors are generated by identification models, and both tracking errors and prediction errors are applied to update parametric estimates. The superiority of CAC in performance improvement has been demonstrated in many control designs, where some of the latest results can be found in [19]–[26]. Nevertheless, as in classical adaptive control, CAC only achieves asymptotic convergence of tracking errors and does not guarantee parameter convergence unless plant states satisfy the condition of persistent excitation (PE) [17]. It is well known that the PE condition is very strict and often infeasible to monitor online in practical control systems [27].

Learning is one of the fundamental features of autonomous intelligent behavior, and it is closely related to parameter convergence in adaptive control [30]. The benefits brought by parameter convergence include accurate closed-loop identification, exponential stability, and robustness against measurement noise [28]. An emerging concurrent learning technique provides a feasible and promising way for achieving parameter convergence without the PE condition in MRAC [27]–[29]. The difference between concurrent learning and composite adaptation lies in how prediction errors are constructed. In concurrent learning, a dynamic data stack constituted by online recorded data is used to construct prediction errors, singular value maximization is applied to maximize the singular value of the data stack, and exponential convergence of both tracking errors and estimation errors is guaranteed if regression functions of plant uncertainties are excited over a time interval such that sufficiently rich data are recorded in the data stack. Yet in this innovative design, the singular value maximization leads to an exhaustive search over all recorded data, and the requirement on the derivatives of all plant states for calculating prediction errors is stringent.

This paper proposes a novel model reference composite learning control (MRCLC) strategy for a class of parametric uncertain affine nonlinear systems. In the composite learning, a modified modelling error that can utilize online recorded data is constructed as the prediction error, a second-order command filter is applied to avoid the differentiation of plant states, and both the tracking error and the prediction error are applied to update parametric estimates. It is proven that the closed-loop system achieves global exponential-like stability under interval excitation (IE) rather than PE of regression functions. Consequently, the limitations of concurrent learning are alleviated by the proposed MRCLC design. The effectiveness and superiority of this approach are verified by its application to an inverted pendulum control problem in comparison with some existing adaptive control approaches.

The notations of this paper are relatively standard, where R, R+, R^n and R^{n×m} denote the spaces of real numbers, positive real numbers, real n-dimensional vectors, and n × m-dimensional matrices, respectively, | · | and ‖ · ‖ denote the absolute value and the Euclidean norm, respectively, L∞ denotes the space of bounded signals, Ωc := {x | ‖x‖ ≤ c} denotes the ball of radius c, min{·} and max{·} are the minimum and maximum functions, respectively, sgn(·) is the sign function, rank(A) is the rank of A, diag(·) is a diagonal matrix, and C^k represents the space of functions whose k-order derivatives all exist and are continuous, where c ∈ R+, x ∈ R^n, A ∈ R^{n×m}, and n, m and k are positive integers.
II. PROBLEM FORMULATION

This section discusses the formulation of learning from the classical MRAC. For clear illustration, consider a class of nth-order affine nonlinear systems as follows [27]:

ẋ = Λx + b(f(x) + u)    (1)

where Λ ∈ R^{n×n}, b := [0, · · · , 0, 1]^T, x(t) := [x1(t), x2(t), · · · , xn(t)]^T ∈ R^n is the vector of plant states, u(t) ∈ R is the control input, and f(x) : R^n → R is the C^1 model uncertainty. A reference model that characterizes the desired response of the system (1) is given by

ẋr = Ar xr + br r    (2)

with br := [0, · · · , 0, br]^T ∈ R^n, in which Ar ∈ R^{n×n} is a strictly Hurwitz matrix, xr(t) := [xr1(t), · · · , xrn(t)]^T ∈ R^n is the vector of reference model states, and r(t) ∈ R is a bounded reference signal. This study is based on the facts that x is measurable, (Λ, b) is controllable, and f(x) is linearly parameterizable such that [27]–[29]

f(x) = W^{*T} Φ(x)    (3)

in which W^* ∈ Ωcw ⊂ R^N is an unknown constant parameter vector, Φ(x) : R^n → R^N is a known regression function vector, and cw ∈ R+ is a known constant. The following definitions are given to facilitate control synthesis.

Definition 1 [27]: A bounded signal Φ(t) ∈ R^n is of IE over [te − τd, te] if there exist constants te, τd, σ ∈ R+ with te > τd such that ∫_{te−τd}^{te} Φ(τ)Φ^T(τ)dτ ≥ σI holds.

Definition 2 [27]: A bounded signal Φ(t) ∈ R^n satisfies the PE condition if there exist constants σ, τd ∈ R+ such that ∫_{t−τd}^{t} Φ(τ)Φ^T(τ)dτ ≥ σI holds, ∀t ≥ 0.

Let xre := [xr^T, r]^T be an augmented reference signal, and Ŵ ∈ R^N be an estimate of W^*. Define a tracking error e(t) := x(t) − xr(t) and an estimation error W̃(t) := W^* − Ŵ(t). Our objective is to design a proper parametric update law of MRAC such that both e and W̃ exponentially converge to 0 under the IE rather than the PE condition.
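As a numerical illustration of Definitions 1 and 2 (not part of the paper's design), the excitation level of a window can be checked by accumulating Φ(τ)Φ^T(τ) over the window and inspecting the minimum eigenvalue of the resulting Gram matrix. The regressor trajectory below is an assumed example and the rectangle-rule integration is an approximation.

```python
import numpy as np

def excitation_level(Phi_samples, dt):
    """Minimum eigenvalue of Theta = integral of Phi(tau) Phi(tau)^T dtau
    over the window, approximated by a rectangle rule on sampled data."""
    Theta = sum(np.outer(phi, phi) for phi in Phi_samples) * dt
    return np.min(np.linalg.eigvalsh(Theta))

# Hypothetical regressor Phi(t) = [sin t, cos t]^T sampled over a 5 s window.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
Phi_samples = np.stack([np.sin(t), np.cos(t)], axis=1)

sigma = excitation_level(Phi_samples, dt)
print(f"excitation level of the window: {sigma:.3f}")  # > 0 means the window is exciting
```

A strictly positive result means the window satisfies the IE condition of Definition 1 with that value of σ; PE additionally requires the same bound to hold for every window as t increases.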
III. COMPOSITE LEARNING CONTROL STRATEGY

A. Review of Previous Results

From [17], the MRAC law can be designed as follows:

u = −ke^T e + kr^T xre − Ŵ^T Φ(x) =: upd + ure + uad    (4)

where upd denotes a proportional-derivative (PD) feedback part, ure denotes a reference signal feedforward part, uad denotes an adaptive part, ke ∈ R^n and kr ∈ R^{n+1} are control gains, and the design of kr satisfies

b kr^T xre = (Ar − Λ)xr + br r.    (5)

Thus, one obtains the tracking error dynamics

ė = Ae + b W̃^T Φ(x)    (6)

where A := Λ − b ke^T is designed to be strictly Hurwitz. Thus, for any matrix Q = Q^T > 0, a unique solution P = P^T > 0 exists for the following Lyapunov equation:

A^T P + P A = −Q.    (7)

Let the adaptive law of Ŵ be as follows:

Ŵ̇ = P(Ŵ, γ e^T P b Φ(x))    (8)

where γ ∈ R+ is a learning rate, and P(Ŵ, •) is a projection operator given by [17]

P(Ŵ, •) = • − Ŵ(Ŵ^T · •)/‖Ŵ‖^2,  if ‖Ŵ‖ ≥ cw and Ŵ^T · • > 0;
P(Ŵ, •) = •,  otherwise.

Choose a Lyapunov function candidate

V(z) = e^T P e/2 + W̃^T W̃/(2γ)    (9)

with z := [e^T, W̃^T]^T ∈ R^{n+N} for the closed-loop dynamics composed of (6) and (8). It follows from the classical MRAC result of [17] that if Ŵ(0) ∈ Ωcw and Φ(x) is of PE, then the closed-loop system (6) with (8) achieves global exponential stability in the sense that both the tracking error e(t) and the estimation error W̃(t) converge to 0.
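To make (4)–(8) concrete, a minimal sketch of the projection-based update is given below. It is an illustration only: the matrices and gains are placeholders (chosen to match the second-order example used later in Section IV), and the Lyapunov equation (7) is solved with SciPy.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves a·X + X·a^T = q

def project(W_hat, v, c_w):
    """Projection operator P(W_hat, v) of (8): remove the outward radial
    component once the estimate reaches the ball of radius c_w."""
    if np.linalg.norm(W_hat) >= c_w and W_hat @ v > 0.0:
        return v - W_hat * (W_hat @ v) / np.linalg.norm(W_hat) ** 2
    return v

# Hypothetical second-order example (n = 2): placeholder matrices and gains.
Lam = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([0.0, 1.0])
k_e = np.array([1.5, 1.3])
A = Lam - np.outer(b, k_e)                            # closed-loop matrix of (6)
P = solve_continuous_lyapunov(A.T, -10.0 * np.eye(2)) # (7): A^T P + P A = -Q, Q = 10 I

def mrac_update(W_hat, e, Phi, gamma=3.5, c_w=5.0):
    """One evaluation of the adaptive law (8): W_hat_dot = P(W_hat, gamma e^T P b Phi)."""
    return project(W_hat, gamma * (e @ P @ b) * Phi, c_w)
```

In a simulation, Ŵ would be integrated forward with this derivative at each step, e.g. by a fixed-step Euler or Runge-Kutta rule.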
To remove the requirement of the PE condition on Φ(x) for parameter convergence in [17], a concurrent learning law of Ŵ is proposed in [27] as follows:

Ŵ̇ = P(Ŵ, γ e^T P b Φ(x) + γ Σ_{j=1}^{p} ǫj Φ(xj))    (10)

where j is a certain epoch, p ≥ N is the number of stored data, and ǫj := W̃j^T Φ(xj) denotes a modelling error which is regarded as the prediction error calculated by

ǫj = b^T(ẋj − Λxj − buj) − Ŵj^T Φ(xj)    (11)

where xj, uj and Ŵj are online recorded data of x, u and Ŵ at the epoch j, respectively. Let Z := [Φ(x1), Φ(x2), · · · , Φ(xp)] ∈ R^{p×N} be a dynamic data stack. It is shown in the concurrent learning MRAC approach of [27] that if Ŵ(0) ∈ Ωcw and rank(Z) = N on t ≥ Te, then the closed-loop system (6) with (10) achieves global exponential-like stability in the sense that both the tracking error e and the estimation error W̃ converge to 0 on t ≥ Te. Yet in the approach of [27], fixed-point smoothing must be applied to estimate ẋj in (11) such that the prediction error ǫj is calculable, and singular value maximization should be applied to exhaustively search the data stack Z to maximize its singular value.
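For comparison with the composite learning design introduced next, the stored-data terms of (10)–(11) can be sketched as follows. The sketch assumes recorded state derivatives ẋj are directly available (in [27] they must be estimated by fixed-point smoothing), and all names are placeholders rather than code from [27].

```python
import numpy as np

def stored_prediction_error(x_dot_j, x_j, u_j, W_hat_j, Lam, b, Phi):
    """Prediction error (11) for one stored data point:
    eps_j = b^T (x_dot_j - Lam x_j - b u_j) - W_hat_j^T Phi(x_j)."""
    return b @ (x_dot_j - Lam @ x_j - b * u_j) - W_hat_j @ Phi(x_j)

def concurrent_learning_term(stack, Lam, b, Phi, gamma):
    """Stored-data sum appearing in (10); `stack` is a list of tuples
    (x_dot_j, x_j, u_j, W_hat_j) recorded online."""
    total = 0.0
    for x_dot_j, x_j, u_j, W_hat_j in stack:
        eps_j = stored_prediction_error(x_dot_j, x_j, u_j, W_hat_j, Lam, b, Phi)
        total = total + gamma * eps_j * Phi(x_j)
    return total
```

The data-stack management of [27], i.e. deciding which points to keep by singular value maximization so that rank(Z) = N, is not shown here.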
B. Composite Learning Control Design

Define a modified modelling error ǫ(t) := Θe W̃(t) as the prediction error, in which

Θe := Θ(t), for t < Te if Θ(t) < σI;  Θe := Θ(Te), for t ≥ Te if Θ(Te) ≥ σI    (12)

with σ := max_{t≥0}{σr(t)} and

Θ(t) := ∫_{t−τd}^{t} Φ(x(τ))Φ^T(x(τ))dτ    (13)

in which τd ∈ R+ is an integral duration, σr(t) ∈ R+ is the minimal singular value of Θ(t), and Te ≥ τd is the time when σr(t) reaches its maximum.¹ Then, a composite learning law of Ŵ is designed as follows:

Ŵ̇ = γ proj(e^T P b Φ(x) + kw ǫ)    (14)

in which kw ∈ R+ is a weight factor.

From ǫ = Θe W̃ and (12), the calculation of ǫ can follow the calculation of ΘW̃ as follows:

Θ(t)W̃(t) = Θ(t)W^* − Θ(t)Ŵ(t)    (15)

where ΘŴ is directly obtainable by (13) and (14), and ΘW^* is not directly obtainable. Multiplying both sides of (6) by Φ(x(t)), integrating the resulting equality over [t − τd, t] and making some transformations, one gets

Θ(t)W^* = ∫_{t−τd}^{t} Φ(x)(ėn + Ŵ^T Φ(x) − b^T Ae)dτ    (16)

in which the time variable τ is omitted in the above integral. Since ėn is unavailable, a second-order linear filter with unit gain is implemented as follows [31]:

ê̇n = ên+1,  ê̇n+1 = −2ζω ên+1 + ω^2(en − ên)    (17)

with ên(0) = en(0) and ên+1(0) = 0, where ω ∈ R+ is the natural frequency, ζ ∈ R+ is the damping factor, and ên and ên+1 are estimates of en and ėn, respectively. The integral in (16) effectively reduces the influence of measurement noise on the calculation of ΘW^*. Thus, ω in (17) can be made sufficiently large so that ên+1 ≈ ėn. To simplify the analysis, assume that ên+1 = ėn as in [31]. The validity of this assumption will also be verified in the subsequent simulations.

¹The presence of Te and σ here is only used in the subsequent performance analysis, and their values can only be obtained during the control process.
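A minimal sketch of how (12)–(16) could be discretized is given below, assuming a fixed sampling step, rectangle-rule integration over the window [t − τd, t], and the filter output ên+1 in place of the unavailable ėn. The excitation threshold passed in plays the role of σ, which (as noted in the footnote) is not known in advance, so a user-chosen level is assumed; the projection in (14) is omitted for brevity, and none of the names come from the paper.

```python
import numpy as np
from collections import deque

class CompositeLearning:
    """Moving-window quantities (13), (15)-(16) and the composite update (14),
    discretized with a rectangle rule at an assumed sampling step dt."""

    def __init__(self, dt, tau_d, gamma, k_w, sigma_threshold):
        self.dt, self.gamma, self.k_w = dt, gamma, k_w
        self.sigma = sigma_threshold                     # user-chosen excitation level
        self.buf = deque(maxlen=int(round(tau_d / dt)))  # samples over [t - tau_d, t]
        self.frozen = None                               # (Theta(Te), Theta(Te) W*) once IE holds

    def record(self, Phi, en_dot_hat, W_hat, e, A, b):
        """Store the integrand samples of (13) and (16); en_dot_hat is the filter
        output of (17), used in place of the unavailable derivative of e_n."""
        self.buf.append((np.outer(Phi, Phi),
                         Phi * (en_dot_hat + W_hat @ Phi - b @ A @ e)))

    def prediction_error(self, W_hat):
        assert self.buf, "record() must be called before prediction_error()"
        Theta = sum(s[0] for s in self.buf) * self.dt          # (13)
        Theta_W_star = sum(s[1] for s in self.buf) * self.dt   # (16)
        if self.frozen is None and \
                np.min(np.linalg.svd(Theta, compute_uv=False)) >= self.sigma:
            self.frozen = (Theta, Theta_W_star)                # (12): freeze data at t = Te
        Th, ThW = self.frozen if self.frozen is not None else (Theta, Theta_W_star)
        return ThW - Th @ W_hat                                # (15): Theta_e W* - Theta_e W_hat

    def update(self, e, P, b, Phi, W_hat):
        """Right-hand side of the composite law (14), without the projection."""
        return self.gamma * ((e @ P @ b) * Phi + self.k_w * self.prediction_error(W_hat))
```

The returned derivative would be integrated alongside the plant, the reference model, and the filter (17).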
The following theorem shows the main result of this study.

Theorem 1: Consider the system (1) driven by the control law (4) with (14), where the control gain kr is designed to satisfy (5), and the control gain ke is selected to make A in (6) strictly Hurwitz. If Ŵ(0) ∈ Ωcw and Θ(Te) ≥ σI are satisfied with cw, σ ∈ R+ and Te ≥ τd, then the closed-loop system composed of (6) and (14) achieves global exponential-like stability in the sense that all closed-loop signals are bounded for all t ≥ 0 and both the tracking error e(t) and the estimation error W̃(t) exponentially converge to 0 on t ≥ Te.

Proof: Firstly, consider the control problem at t ∈ [0, ∞). Choose the Lyapunov function candidate V in (9) for the closed-loop system constituted by (6) and (14). The time derivative of V along (6) is as follows:

V̇ = −e^T Qe/2 + W̃^T(e^T P b Φ(x) − Ŵ̇/γ)    (18)

where (7) is utilized to obtain (18). Applying (14) to (18), noting Ŵ(0) ∈ Ωcw and ǫ = Θe W̃, and using the projection operator results in [17], one obtains

V̇ ≤ −e^T Qe/2 − kw W̃^T Θe W̃, ∀t ≥ 0.    (19)

Noting Q > 0 and Θe ≥ 0, one gets V̇(t) ≤ 0, ∀t ≥ 0, which implies that the closed-loop system is stable in the sense that e, W̃ ∈ L∞. Since V̇(t) ≤ 0 is satisfied ∀x(0) ∈ R^n and V in (9) is radially unbounded (i.e. V(z) → ∞ as ‖z‖ → ∞), the stability is global. Using e, W̃ ∈ L∞, one also obtains x, Ŵ, Φ, ǫ, u ∈ L∞ from their definitions. Thus, all closed-loop signals are bounded for all t ≥ 0.

Secondly, consider the control problem at t ∈ [Te, ∞). Since there exist σ ∈ R+ and Te ≥ τd such that Θ(Te) ≥ σI, i.e. the bounded signal Φ(x(t)) is exciting over t ∈ [Te − τd, Te], it is obtained from (19) that

V̇ ≤ −e^T Qe/2 − kw σ W̃^T W̃, ∀t ≥ Te.    (20)

It follows from (9) and (20) that

V̇(t) ≤ −ks V(t), ∀t ≥ Te    (21)

where ks := min{λmin(Q)/λmax(P), 2γkw σ} ∈ R+, which implies that the closed-loop system has global exponential-like stability in the sense that both e and W̃ exponentially converge to 0 as long as t ≥ Te. □

IV. APPLICATION TO INVERTED PENDULUM

Consider the following inverted pendulum model [27]:

ẋ = [0 1; 0 0] x + [0; 1] (W^{*T} Φ(x) + u)

in which W^* = [1, −1, 0.5]^T and Φ(x) = [sin x1, |x2|x2, e^{x1} x2]^T. The reference model is given as follows:

ẋr = [0 1; −1 −2] xr + [0; 1] r.

For simulations, set x(0) = xr(0) = [1, 1]^T, r(t) = 1 while t ∈ [20, 25), and r(t) = 0 while t ∈ [0, 20) ∪ [25, ∞) [27]. The parameter selection of the proposed control law (4) with (14) is based on that of [27], where the details are given as follows: firstly, solve (5) to obtain kr = [−1, −2, 1]^T; secondly, select ke = [1.5, 1.3]^T so that A is strictly Hurwitz; thirdly, solve (7) with Q = 10I to obtain P; fourthly, set τd = 5 s in (13); fifthly, set γ = 3.5, kw = 6 and cw = 5 in (14); finally, set ω = 100 and ζ = 0.7 in (17).

Simulations are carried out in MATLAB running on Windows 7, where the solver is set to the fixed-step ode5 with a step size of 0.001 s and other settings at their default values. The conventional MRAC in [17] and the model reference CAC (MRCAC) with Q-modification in [32] are selected as baseline controllers, where the reference models and shared parameters of all controllers are set to be the same for fair comparisons. Simulation trajectories of all controllers are depicted in Fig. 1. For the control performance, it is shown that the plant state x follows its desired signal xr closely with a smooth control input u for all controllers, the MRCAC achieves the worst tracking accuracy [see Fig. 1(c)], and the proposed MRCLC achieves the best tracking accuracy [see Fig. 1(e)]. For the learning performance, it is observed that IE occurs with σr rising at t = 5 s in this case, the MRAC does not show any parameter convergence [see Fig. 1(b)], the MRCAC shows better parameter estimation than the MRAC but still does not achieve parameter convergence [see Fig. 1(d)], and the proposed MRCLC achieves fast and accurate parameter estimation even though the IE is short and weak [see Fig. 1(f)].
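As a rough, self-contained cross-check of the setup described above (not the authors' MATLAB/Simulink implementation), the closed loop can be simulated with a fixed-step forward-Euler loop. The composite prediction-error term of (14) is omitted here for brevity (see the sketch in Section III-B), so this reduces to a projection-free MRAC baseline; the initial estimate Ŵ(0) = 0 is an assumption.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Plant, reference model, and regressor of Section IV (W* is unknown to the controller).
Lam = np.array([[0.0, 1.0], [0.0, 0.0]]); b = np.array([0.0, 1.0])
A_r = np.array([[0.0, 1.0], [-1.0, -2.0]]); b_r = np.array([0.0, 1.0])
W_star = np.array([1.0, -1.0, 0.5])
Phi = lambda x: np.array([np.sin(x[0]), abs(x[1]) * x[1], np.exp(x[0]) * x[1]])

# Gains listed above.
k_e = np.array([1.5, 1.3]); k_r = np.array([-1.0, -2.0, 1.0])
A = Lam - np.outer(b, k_e)
P = solve_continuous_lyapunov(A.T, -10.0 * np.eye(2))   # (7) with Q = 10 I
gamma, omega, zeta, dt, T = 3.5, 100.0, 0.7, 1e-3, 40.0

x = np.array([1.0, 1.0]); x_r = x.copy(); W_hat = np.zeros(3)  # W_hat(0) = 0 assumed
e_hat = np.array([x[1] - x_r[1], 0.0])                  # filter states (e_n_hat, e_n+1_hat) of (17)

for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if 20.0 <= t < 25.0 else 0.0
    e = x - x_r
    x_re = np.append(x_r, r)
    u = -k_e @ e + k_r @ x_re - W_hat @ Phi(x)          # control law (4)

    # Tracking-error-driven part of (14); the k_w * eps term and the projection
    # of the full composite design are omitted in this abbreviated sketch.
    W_hat_dot = gamma * (e @ P @ b) * Phi(x)

    e_n = e[1]                                          # nth component of the tracking error
    # Filter (17); e_hat[1] approximates the derivative of e_n and would feed (16).
    e_hat_dot = np.array([e_hat[1],
                          -2 * zeta * omega * e_hat[1] + omega**2 * (e_n - e_hat[0])])

    # Forward-Euler integration at the 0.001 s step used in the paper.
    x = x + dt * (Lam @ x + b * (W_star @ Phi(x) + u))
    x_r = x_r + dt * (A_r @ x_r + b_r * r)
    W_hat = W_hat + dt * W_hat_dot
    e_hat = e_hat + dt * e_hat_dot

print("final tracking error:", x - x_r)
```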
[Fig. 1 appears here: six panels of simulation trajectories plotting state tracking (xr, x), control input u, parameter learning (W*, Ŵ), and the minimal singular value σr against time (s) for each controller.]

Fig. 1. Simulation trajectories by all controllers. (a) Control performance by the MRAC. (b) Learning performance by the MRAC. (c) Control performance by the MRCAC. (d) Learning performance by the MRCAC. (e) Control performance by the MRCLC. (f) Learning performance by the MRCLC.
V. CONCLUSION

In this paper, an MRCLC strategy has been successfully developed for a class of parametric uncertain affine nonlinear systems such that parameter convergence can be guaranteed by the IE rather than the PE condition. The proposed approach has also been applied to an inverted pendulum model, where superior control and learning performances have been demonstrated compared with the conventional MRAC and the MRCAC with Q-modification. Further work would focus on the extension of the composite learning to wider classes of nonlinear systems such as multi-input multi-output affine nonlinear systems [34] and strict-feedback nonlinear systems [35].

ACKNOWLEDGMENT

This work was supported in part by the National Research Foundation of Singapore under Grant NRF2014NRF-POC001-027, in part by the Biomedical Engineering Programme, Agency for Science, Technology and Research (A*STAR), Singapore under Grant 1421480015, and in part by the AFR and FNR programs, Luxembourg.

REFERENCES

[1] K. J. Astrom and B. Wittenmark, Adaptive Control, 2nd ed. Mineola, NY, USA: Dover, 2008.
[2] A. Astolfi, D. Karagiannis, and R. Ortega, Nonlinear and Adaptive Control with Applications. London, UK: Springer, 2008.
[3] B. D. O. Anderson and A. Dehghani, "Challenges of adaptive control: past, permanent and future," Annu. Rev. Control, vol. 32, no. 2, pp. 123-135, Dec. 2008.
[4] M. Stefanovic and M. G. Safonov, Safe Adaptive Control: Data-Driven Stability Analysis and Robust Synthesis. London, UK: Springer, 2011.
[5] K. S. Narendra and Z. Han, "The changing face of adaptive control: The use of multiple models," Annu. Rev. Control, vol. 35, no. 1, pp. 1-12, Apr. 2011.
[6] A. Casavola, J. Hespanha, and P. Ioannou, "Special issue on 'Recent trends on the use of switching and mixing in adaptive control'," Int. J. Adapt. Control Signal Process., vol. 26, no. 8, pp. 690-691, Aug. 2012.
[7] S. G. Khan, G. Herrmann, F. L. Lewis, T. Pipe, and C. Melhuish, "Reinforcement learning and optimal adaptive control: An overview and implementation examples," Annu. Rev. Control, vol. 36, no. 1, pp. 42-59, Apr. 2012.
[8] J. M. Martin-Sanchez, J. M. Lemos, and J. Rodellar, "Survey of industrial optimized adaptive control," Int. J. Adapt. Control Signal Process., vol. 26, no. 10, pp. 881-918, Oct. 2012.
[9] K. B. Pathak and D. M. Adhyaru, "Survey of model reference adaptive control," in Proc. Nirma University International Conference on Engineering, Ahmedabad, India, 2012, pp. 1-6.
[10] E. Lavretsky and K. A. Wise, Robust and Adaptive Control: With Aerospace Applications. London, UK: Springer, 2013.
[11] A. Serrani, "An output regulation perspective on the model reference adaptive control problem," Int. J. Adapt. Control Signal Process., vol. 27, no. 1-2, pp. 22-34, Jan. 2013.
[12] J. Sun, M. Krstic, and N. Bekiaris-Liberis, "Robust adaptive control: Legacies and horizons," Int. J. Adapt. Control Signal Process., vol. 27, no. 1-2, pp. 1-3, Jan. 2013.
[13] I. Barkana, "Simple adaptive control - a stable direct model reference adaptive control methodology - brief survey," Int. J. Adapt. Control Signal Process., vol. 28, no. 7-8, pp. 567-603, Jul. 2014.
[14] L. P. Chan, F. Naghdy, and D. Stirling, "Application of adaptive controllers in teleoperation systems: A survey," IEEE Trans. Hum.-Mach. Syst., vol. 44, no. 3, pp. 337-352, Jun. 2014.
[15] P. Swarnkar, S. K. Jain, and R. K. Nema, "Adaptive control schemes for improving the control system dynamics: A review," IETE Tech. Rev., vol. 31, no. 1, pp. 17-33, Jan.-Feb. 2014.
[16] G. Tao, "Multivariable adaptive control: A survey," Automatica, vol. 50, no. 11, pp. 2737-2764, Nov. 2014.
[17] P. A. Ioannou and J. Sun, Robust Adaptive Control. Englewood Cliffs, NJ, USA: Prentice Hall, 1996.
[18] J.-J. E. Slotine and W. Li, "Composite adaptive control of robot manipulators," Automatica, vol. 25, no. 4, pp. 509-519, Jul. 1989.
[19] D. Naso, F. Cupertino, and B. Turchiano, "Precise position control of tubular linear motors with neural networks and composite learning," Control Eng. Pract., vol. 18, no. 5, pp. 515-522, May 2010.
[20] A. Mohanty and B. Yao, "Integrated direct/indirect adaptive robust control of hydraulic manipulators with valve deadband," IEEE/ASME Trans. Mechatron., vol. 16, no. 4, pp. 707-715, Aug. 2011.
[21] P. M. Patre, S. Bhasin, Z. D. Wilcox, and W. E. Dixon, "Composite adaptation for neural network-based controllers," IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 944-950, Apr. 2010.
[22] C. Hu, B. Yao, and Q. Wang, "Integrated direct/indirect adaptive robust contouring control of a biaxial gantry with accurate parameter estimations," Automatica, vol. 46, no. 4, pp. 701-707, Apr. 2010.
[23] Y. P. Pan, M. J. Er, and T. R. Sun, "Composite adaptive fuzzy control for synchronizing generalized Lorenz systems," Chaos, vol. 22, no. 2, Article ID 023144, Jun. 2012.
[24] Y. P. Pan, Y. Zhou, T. R. Sun, and M. J. Er, "Composite adaptive fuzzy H∞ tracking control of uncertain nonlinear systems," Neurocomputing, vol. 99, pp. 15-24, Jan. 2013.
[25] X. J. Wei, N. Chen, and W. Q. Li, "Composite adaptive disturbance observer-based control for a class of nonlinear systems with multisource disturbance," Int. J. Adapt. Control Signal Process., vol. 27, no. 3, pp. 199-208, Mar. 2013.
[26] Z. T. Dydek, A. M. Annaswamy, J. J. E. Slotine, and E. Lavretsky, "Composite adaptive posicast control for a class of LTI plants with known delay," Automatica, vol. 49, pp. 1914-1924, Jun. 2013.
[27] G. V. Chowdhary and E. N. Johnson, "Concurrent learning for convergence in adaptive control without persistency of excitation," in Proc. IEEE Conf. Decision Control, Atlanta, GA, USA, 2010, pp. 3674-3679.
[28] G. V. Chowdhary, M. Muhlegg, and E. N. Johnson, "Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation," Int. J. Control, vol. 87, no. 8, pp. 1583-1603, May 2014.
[29] G. V. Chowdhary and E. N. Johnson, "Theory and flight-test validation of a concurrent-learning adaptive controller," J. Guid. Control Dyn., vol. 34, no. 2, pp. 592-607, 2011.
[30] P. J. Antsaklis, "Intelligent learning control," IEEE Control Syst. Mag., vol. 15, no. 3, pp. 5-7, Jun. 1995.
[31] J. C. Hu and H. H. Zhang, "Immersion and invariance based command-filtered adaptive backstepping control of VTOL vehicles," Automatica, vol. 49, no. 7, pp. 2160-2167, Jul. 2013.
[32] K. Y. Volyanskyy, W. M. Haddad, and A. J. Calise, "A new neuroadaptive control architecture for nonlinear uncertain dynamical systems: Beyond σ- and e-modifications," IEEE Trans. Neural Netw., vol. 20, no. 11, pp. 1707-1723, Nov. 2009.
[33] S. V. Joshi, A. Sreenatha, and J. Chandrasekhar, "Suppression of wing rock of slender delta wings using a single neuron controller," IEEE Trans. Control Syst. Technol., vol. 6, pp. 671-677, Sep. 1998.
[34] Y. P. Pan and M. J. Er, "Enhanced adaptive fuzzy control with optimal approximation error convergence," IEEE Trans. Fuzzy Syst., vol. 21, no. 6, pp. 1123-1132, Dec. 2013.
[35] Y. P. Pan and H. Y. Yu, "Dynamic surface control via singular perturbation analysis," Automatica, vol. 51, pp. 29-33, Jul. 2015.
