Event-Triggered Nonlinear Iterative Learning Control

Na Lin, Ronghu Chi, Biao Huang, Fellow, IEEE, and Zhongsheng Hou, Fellow, IEEE

Abstract—An event-triggered nonlinear iterative learning control (ET-NILC) method is presented for repetitive nonaffine and nonlinear systems that have 2-D dynamic behavior along both time and iteration directions. Based on the virtual linear data model, the ET-NILC method is proposed by designing an event triggering condition based on a Lyapunov-like stability analysis conducted along the iteration direction. The learning gain function of ET-NILC is nonlinear and updated by designing an iterative learning parameter estimation law to enhance the robustness. From the perspective of the time dynamics, the proposed ET-NILC is a feedforward control, and the event-triggering condition can be verified offline using tracking errors, event triggering errors, and the estimated parameters together. Moreover, the proposed ET-NILC is a data-driven scheme since it merely uses I/O data for the design. The results are also extended to repetitive multiple-input–multiple-output (MIMO) nonaffine nonlinear systems using the property of input-to-state stability as the basic mathematical tool. The convergence of the proposed ET-NILC methods is proved. Several simulations illustrate the effectiveness of the proposed methods.

Index Terms—Data-driven method, event-triggered iterative learning control (ILC), nonlinear nonaffine repetitive systems, virtual linear data model.

Manuscript received April 29, 2019; revised August 27, 2019, December 28, 2019, March 23, 2020, and July 27, 2020; accepted September 22, 2020. This work was supported in part by the National Science Foundation of China under Grant 61374102 and Grant 61873139, in part by the Taishan Scholar Program of Shandong Province of China, and in part by the Key Research and Development Program of Shandong Province under Grant 2018GGX101047. (Corresponding author: Ronghu Chi.)

Na Lin and Ronghu Chi are with the School of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China (e-mail: linnaqingdao@163.com; ronghu_chi@hotmail.com).

Biao Huang is with the Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 2G6, Canada (e-mail: bhuang@ualberta.ca).

Zhongsheng Hou is with the School of Automation, Qingdao University, Qingdao 266071, China (e-mail: zhshhou@bjtu.edu.cn).

Digital Object Identifier 10.1109/TNNLS.2020.3027000

I. INTRODUCTION

NETWORKED control systems (NCSs) have found many successful applications [1]–[3], including intelligent transportation, aerospace engineering, air conditioning systems, and intelligent agriculture. One of the remarkable characteristics of NCSs is that data transmission is carried out through a publicly shared communication network, which raises the question of how to effectively utilize the limited network channel to improve the quality and rate of signal transmission. Event-triggered control (ETC) has been considered as an alternative way to solve this problem. ETC executes the control task after an event occurs, rather than after a certain fixed period of time, where the event is driven by well-designed event triggering conditions [4], [5]. Since the control is executed only when needed, the utilization of resources is greatly reduced, while the control performance is guaranteed [6]–[8].

It is worth pointing out that many systems operate repetitively in engineering practice, such as batch processes in the chemical industry [9] and high-speed trains [10]. Besides time-varying dynamics, a repetitive process also evolves dynamically along the iteration direction, and thus a repetitively operating system presents a type of 2-D dynamic characteristics. For the control of this type of system, iterative learning control (ILC) [11] has been considered most suitable because it improves control performance by learning from the control knowledge obtained in the previous repetitive operations.

Recently, there have been many works [12]–[16] reporting the applications of ILC in NCSs. For example, several stochastic ILC laws [12], [13] have been presented to address the problem of data dropout. Robust ILC methods [14]–[16] have been investigated to address time-varying transmission delays, data packet dropouts, and external disturbances.

Besides the problems of data dropouts, transmission delay, and data noise, the limited network channel is also a critical issue to be solved in networked control systems. Therefore, a new research topic of combining ETC and ILC has come to light with the objective of improving the efficiency of an NCS. However, only a few works have been reported regarding event-triggered ILC for a repetitive networked system. Xiong et al. [17] proposed two event triggering schemes with and without quantization by utilizing ILC laws for linear discrete-time systems to iteratively reduce the number of controller updates in each batch. Zhang and Li [18] developed an event-triggered robust learning control with quantization for linear multiagent systems. Tang and Sheng [19] studied the iterative learning fault-tolerant control problem with an event-triggered transmission strategy for networked batch processes. Apparently, compared with the ETC methods for 1-D time-dynamic systems, the study of ETC for 2-D repetitive systems is still in its infancy and many problems are yet to be explored, including the research for nonlinear systems.
Based on the above analysis, this article proposes an event-triggered nonlinear iterative learning control (ET-NILC) method for single-input–single-output (SISO) nonaffine and nonlinear systems. The 2-D dynamic behavior along both time and iteration directions is considered when designing event triggering conditions for the ET-NILC. A virtual linear data model is used to describe the input and output relationship between consecutive batches by means of the iterative dynamic linearization technique, where the uncertainties of the original nonlinear system are lumped into an unknown parameter. Consequently, an ET-NILC method is proposed by designing an event triggering condition based on the Lyapunov stability analysis in the iteration domain. The control input of ET-NILC is updated only if the event triggering condition is satisfied. The convergence of the proposed ET-NILC is proved mathematically. Furthermore, the results are extended to multiple-input–multiple-output (MIMO) nonaffine and nonlinear plants, where the convergence is proved by using the property of input-to-state stability as the basic mathematical tool. The effectiveness of the presented methods is verified by numerical simulations. The main features of the ET-NILC are listed as follows.

Compared with the traditional ETC in the 1-D time domain [4]–[8], the stability of the ET-NILC is guaranteed along the iteration direction instead of the time direction, although the event triggering condition is still required to be verified pointwise; the input updating is triggered along the iteration direction, which reduces the action time of control devices, the computation burden, and the required network resources; the whole ET-NILC method is a feedforward approach with respect to the time dynamics, and the event-triggering condition can be verified by offline computation using tracking errors, event triggering errors, and estimated parameters.

Compared with the previous event-triggered ILC methods [17]–[19], the proposed ET-NILC is data-based rather than model-based, in which controller implementation, parameter estimation, and validation of the event triggering condition are conducted using only I/O data; the learning gain of the ET-NILC law is iteration-time-varying and can be adjusted based on the real-time I/O data to enhance the robustness to uncertainties.

The rest of this article is arranged as follows. Section II formulates the control problem to be investigated. The controller design is provided in Section III. Section IV is about the analysis of convergence. Section V extends the results to MIMO nonlinear systems. Section VI provides simulation results. Section VII presents conclusions.

II. PROBLEM FORMULATION

A repetitive nonlinear nonaffine discrete-time system is considered:

y_k(t+1) = f(y_k(t), ..., y_k(t−n_y), u_k(t), ..., u_k(t−n_u))   (1)

where y_k(t) ∈ R represents the system output, u_k(t) ∈ R represents the control input, the nonlinear function f(·) is continuously differentiable, t ∈ {0, 1, ..., N} denotes the time instant, N ∈ Z+ is the terminal time, k ∈ Z+ is the iteration index, and n_y and n_u are some positive constants. Furthermore, u_k(t) = 0 and y_k(t) = 0 if t < 0 or k < 0.

Denote y_d(t) as the desired trajectory. The control goal of this article is to design an event-triggered ILC to make the tracking error ỹ_k(t) = y_d(t) − y_k(t) converge to within a satisfied bound, while the required number of control input updates is reduced according to the event triggering condition.

Define the event-triggered error e_{k-1}(t+1) = ỹ_{k_{l-1}}(t+1) − ỹ_{k-1}(t+1), k−1 ∈ [k_{l-1}, k_l). Here {k_l}, l = 0, 1, ..., denotes the event-triggered iteration sequence generated by an event triggering function χ(e_{k-1}(t+1)), which is to be designed in the following analysis. The sequence of event-triggered iterations is determined by

k_l = inf{k | k > k_{l-1}, χ(e_{k-1}(t+1)) ≥ 0}.

At a specified operation time, the control input is updated only at event-triggered iterations; otherwise, the control input remains the same as that in the last iteration.

According to [20], two assumptions are made as follows.

Assumption 1: The partial derivative of f(·) with respect to the control input u_k(t) is continuous.

Assumption 2: For ∀t ∈ [0, N] and k ∈ Z+, system (1) is generalized Lipschitz, i.e.,

|Δy_k(t+1)| ≤ b_s |Δu_k(t)|   (2)

for |Δu_k(t)| ≠ 0, where b_s > 0 is a constant and Δ represents the difference between two consecutive iterations, e.g., Δx_k(t) = x_k(t) − x_{k-1}(t).

Remark 1: Assumption 2 means that the system output does not change infinitely as long as the input changes are bounded. In other words, it is assumed that the system works smoothly and no abrupt change occurs in the working duration. By nature, Assumption 2 is similar to the traditional global Lipschitz continuity condition [21], which is a fundamental premise of most nonlinear control systems. As stated in [22]–[25], Assumption 2 has been extensively used in many practical applications. Furthermore, Assumption 2 only formulates the relationship of the input and output data obtained from different operation batches of the same plant, which is most different from the traditional assumptions imposed on 1-D time-dynamical systems.

A virtual linear data model is shown in Lemma 1.

Lemma 1 [20]: Consider system (1) satisfying Assumptions 1 and 2. If |Δu_k(t)| ≠ 0, there must exist an iteration-time-varying parameter φ_k(t) such that the input–output relationship of nonlinear system (1) is reformulated by the following virtual linear data model:

y_k(t+1) = y_{k-1}(t+1) + φ_k(t)Δu_k(t)   (3)

and sup_k max_{t∈{0,1,...,N}} |φ_k(t)| ≤ b_s with b_s being defined in Assumption 2.

Proof: See the Appendix.
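As a quick numerical illustration of Lemma 1, the scalar model (3) can be checked on I/O data generated by any plant of the form (1). The sketch below (Python; the toy nonlinearity, horizon, and input perturbation are assumptions of this sketch, not taken from the article) recovers φ_k(t) as the ratio of the iteration differences and confirms that it stays bounded.

import numpy as np

# Hypothetical nonaffine scalar plant used only to generate I/O data for illustration.
def plant(y, u):
    return 2.5 * y / (1.0 + y ** 2) + u + 0.1 * np.sin(u)

N = 50
u_prev = np.random.uniform(0.5, 1.5, N)   # inputs of iteration k-1
u_curr = u_prev + 0.1                     # inputs of iteration k, so Δu_k(t) ≠ 0

def run(u_seq):
    y = np.zeros(N + 1)
    for t in range(N):
        y[t + 1] = plant(y[t], u_seq[t])
    return y

y_prev, y_curr = run(u_prev), run(u_curr)

# Scalar case of (3): y_k(t+1) - y_{k-1}(t+1) = phi_k(t) * (u_k(t) - u_{k-1}(t))
phi = (y_curr[1:] - y_prev[1:]) / (u_curr - u_prev)
print("max |phi_k(t)| =", np.abs(phi).max())   # bounded, as Lemma 1 states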
Remark 2: It is seen from the detailed derivations that all possible complex dynamic characteristics of the original nonlinear system are lumped into the parameter φ_k(t), such as nonlinearity, time-varying parameters, or time-varying structures. Therefore, the dynamics of the system is included in the parameter φ_k(t) implicitly.

Remark 3: The linear data model (3) is different from the linear model obtained using traditional linearization methods such as Taylor linearization [26] and feedback linearization [27] because it only shows an I/O relationship between two iterations of the system from a pure data perspective, instead of representing the dynamic characteristics of the system itself. By nature, the linear data model (3) exists in the computer virtually without any mechanistic interpretation, and it only serves for the subsequent controller design and analysis under a data-driven framework, instead of being a mechanistic model or an identified model that has the dynamics of the system itself. The control objective of the controller is the nonlinear system itself, which is independent of model (3).

III. EVENT-TRIGGERED NONLINEAR ILC DESIGN

The following is the design procedure of the event-triggered nonlinear controller.

When k = k_l, define an objective function as follows:

J(u_{k_l}(t)) = |y_d(t+1) − y_{k_l}(t+1)|² + λ_s|u_{k_l}(t) − u_{k_{l-1}}(t)|²   (4)

where λ_s > 0.

Differentiating (4) with respect to u_{k_l}(t), combining it with (3), and setting the result to zero, one can get

u_{k_l}(t) = u_{k_{l-1}}(t) + P_{k_l}(t)ỹ_{k_{l-1}}(t+1)   (5)

where P_{k_l}(t) = ρ_s φ_{k_l}(t)/(λ_s + φ_{k_l}²(t)) and ρ_s is added as a step size factor to make the control algorithm (5) more general.

When k ∈ (k_{l-1}, k_l), the actuator is not triggered, so we have u_k(t) = u_{k_{l-1}}(t).

Therefore, the event-triggered ILC law is proposed as follows:

u_k(t) = u_{k_{l-1}}(t) + P̂_k(t)ỹ_{k_{l-1}}(t+1),   k = k_l
u_k(t) = u_{k_{l-1}}(t),                             k ∈ (k_{l-1}, k_l)   (6)

where P̂_{k_l}(t) = ρ_s φ̂_{k_l}(t)/(λ_s + φ̂_{k_l}²(t)) with φ̂_k(t) the estimation of φ_k(t).

The parameter estimation law of φ̂_k(t) is presented by designing an objective function as follows:

J(φ̂_k(t)) = (y_{k-1}(t+1) − y_{k-2}(t+1) − φ̂_k(t)Δu_{k-1}(t))² + μ_s(φ̂_k(t) − φ̂_{k-1}(t))²   (7)

where μ_s > 0.

Therefore, we can get the iterative learning parameter estimation law by optimizing (7):

φ̂_k(t) = φ̂_{k-1}(t) + η_s Δu_{k-1}(t)(Δy_{k-1}(t+1) − φ̂_{k-1}(t)Δu_{k-1}(t))/(μ_s + Δu_{k-1}²(t))   (8)

where 0 < η_s < 2 is a step size factor, which is added to make the estimation algorithm (8) more general.

According to the definition of the tracking error and (3), we have

ỹ_k(t+1) = y_d(t+1) − y_{k-1}(t+1) − φ_k(t)Δu_k(t) = ỹ_{k-1}(t+1) − φ_k(t)Δu_k(t).   (9)

In the case of the triggered iterations, i.e., k = k_l, considering the event-triggered law (6), (9) becomes

ỹ_k(t+1) = ỹ_{k_{l-1}}(t+1) − φ_k(t)P̂_k(t)ỹ_{k_{l-1}}(t+1) = (1 − φ_k(t)P̂_k(t))ỹ_{k_{l-1}}(t+1).   (10)

According to the definition of the event-triggered error, we have ỹ_{k_{l-1}}(t+1) = ỹ_{k-1}(t+1) + e_{k-1}(t+1), k−1 ∈ [k_{l-1}, k_l). For k = k_l, it is clear that k−1 ∈ [k_{l-1}, k_l). Thus, we can further get from (10) that

ỹ_k(t+1) = (1 − φ_k(t)P̂_k(t))(ỹ_{k-1}(t+1) + e_{k-1}(t+1)).   (11)

Define a Lyapunov function V_k(t+1) = ỹ_k²(t+1). If k ∈ (k_{l-1}, k_l), it is obvious that k−1 ∈ [k_{l-1}, k_l), u_k(t) = u_{k_{l-1}}(t), and u_{k-1}(t) = u_{k_{l-1}}(t). Then, we can get y_k(t+1) = y_{k-1}(t+1) when neither noise nor disturbance is considered. As a result, for k ∈ (k_{l-1}, k_l), we have

ΔV_k(t+1) = V_k(t+1) − V_{k-1}(t+1) = ỹ_k²(t+1) − ỹ_{k-1}²(t+1) = (y_d(t+1) − y_k(t+1))² − (y_d(t+1) − y_{k-1}(t+1))² = 0.   (12)

For the case of k = k_l, according to (11), we have

ΔV_k(t+1) = (1 − φ_k(t)P̂_k(t))²(ỹ_{k-1}(t+1) + e_{k-1}(t+1))² − ỹ_{k-1}²(t+1).   (13)

Using the Cauchy–Schwarz inequality (a + b)² ≤ 2(a² + b²), from (13), we have

ΔV_k(t+1) ≤ 2(1 − φ_k(t)P̂_k(t))²ỹ_{k-1}²(t+1) + 2(1 − φ_k(t)P̂_k(t))²e_{k-1}²(t+1) − ỹ_{k-1}²(t+1).   (14)

Letting ΔV_k(t+1) < 0, it follows that

e_{k-1}²(t+1) < [ỹ_{k-1}²(t+1) − 2(1 − φ_k(t)P̂_k(t))²ỹ_{k-1}²(t+1)] / [2(1 − φ_k(t)P̂_k(t))²].   (15)

Since φ_k(t) in (15) is unknown, the parameter updating law (8) is used to estimate it. Therefore, we can get the following inequality:

e_{k-1}²(t+1) < [ỹ_{k-1}²(t+1) − 2(1 − φ̂_k(t)P̂_k(t))²ỹ_{k-1}²(t+1)] / [2(1 − φ̂_k(t)P̂_k(t))²].   (16)

Combining the definition of the event-triggered iteration sequence, the event triggering condition is that the following inequality holds:

χ(e_{k-1}(t+1)) = e_{k-1}²(t+1) − H_{k-1}(t)/[2(1 − φ̂_k(t)P̂_k(t))²] ≥ 0   (17)

where H_{k-1}(t) = ỹ_{k-1}²(t+1) − 2(1 − φ̂_k(t)P̂_k(t))²ỹ_{k-1}²(t+1).

By nature, combining (14)–(17), if the condition ΔV_k(t+1) < 0 holds, the controller is not triggered; otherwise, the controller is triggered.
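To make the decision rule concrete, the sketch below (Python; function and variable names are choices of this sketch, and the control step size is written rho_s as in the learning gain of (5)) evaluates the triggering function χ of (17) from stored data and, when it fires, applies the estimation law (8) followed by the control update (6).

def p_hat(phi_hat, rho_s, lam_s):
    # Learning gain of (6): P_hat = rho_s * phi_hat / (lam_s + phi_hat^2)
    return rho_s * phi_hat / (lam_s + phi_hat ** 2)

def chi(e_prev, ytilde_prev, phi_hat, rho_s, lam_s):
    # Triggering function of (17); chi >= 0 means the controller must be updated
    factor = (1.0 - phi_hat * p_hat(phi_hat, rho_s, lam_s)) ** 2
    H = ytilde_prev ** 2 - 2.0 * factor * ytilde_prev ** 2
    return e_prev ** 2 - H / (2.0 * factor)

def estimate(phi_hat_prev, du_prev, dy_prev, eta_s, mu_s):
    # Iterative learning parameter estimation law (8)
    return phi_hat_prev + eta_s * du_prev * (dy_prev - phi_hat_prev * du_prev) / (mu_s + du_prev ** 2)

def control(u_last, ytilde_last, phi_hat, rho_s, lam_s, triggered):
    # Event-triggered ILC law (6): update only at triggered iterations k = k_l
    return u_last + p_hat(phi_hat, rho_s, lam_s) * ytilde_last if triggered else u_last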
Furthermore, define a tracking error threshold as sup_k max_{t∈{0,1,...,N}} |ỹ_k(t)| ≤ ϑ, where ϑ > 0 is a constant. If the obtained tracking error is larger than the threshold, the control input is also updated even though the event-triggering condition (17) is not satisfied. As stated in [28], the tracking error threshold is necessary because its absence may result in an accumulation of control updates, and the role of the threshold limit is to ensure that the error remains within a small range. In other words, the actual triggering action of the ILC mechanism depends on both the event triggering condition (17) and the tracking error threshold.

The proposed ET-NILC method is summarized as follows:

k_l = k, l = 0, 1, ...,   if |ỹ_k(t)| > ϑ or χ(e_{k-1}(t+1)) ≥ 0   (18)

φ̂_k(t) = φ̂_{k-1}(t) + η_s Δu_{k-1}(t)(Δy_{k-1}(t+1) − φ̂_{k-1}(t)Δu_{k-1}(t))/(μ_s + Δu_{k-1}²(t))   (19)

φ̂_k(t) = φ̂_0(t),   if |φ̂_k(t)| ≤ ε or sign(φ̂_k(t)) ≠ sign(φ̂_0(t))   (20)

u_k(t) = u_{k_{l-1}}(t) + P̂_k(t)ỹ_{k_{l-1}}(t+1),   k = k_l
u_k(t) = u_{k_{l-1}}(t),                             k ∈ (k_{l-1}, k_l)   (21)

where the reset algorithm (20) is introduced to make the parameter estimation law (19) have a stronger tracking ability for iteration-time-varying parameters and to ensure that φ̂_k(t) is nonzero, which is similar to a persistent excitation condition [29]; ε is a sufficiently small positive constant that is less than the nonzero φ_k(t).

Remark 4: For the selection of the initial parameter φ̂_0(t), several closed-loop experiments on the controlled plant should first be done to obtain the I/O data and determine the sign of φ_k(t). Then, one can select φ̂_0(t) with the same sign as φ_k(t). In the sequel, the resetting algorithm can ensure the same sign of φ̂_k(t) and φ_k(t) for all time instants and iterations.

Remark 5: Note that P̂_k(t) is adjustable through the iteration-time-varying φ̂_k(t), which is estimated merely using the real-time I/O data. Furthermore, the computation burden of P̂_k(t) is light because no complex matrix operation is involved.

Remark 6: Different from [11]–[16], the ILC law proposed in this work is event-triggered such that the action times of control devices and the required transmission data can be greatly reduced. On the other hand, it is seen from (21) that the learning control law is feedforward and the input signal is updated by using the control knowledge from the previous iterations. The controller update is from one point in the previous iteration to the same point in the current iteration, namely along the iteration direction, regardless of the time dynamics of the systems.

Remark 7: There exists a switching mechanism in the control scheme (18)–(21) such that the control law is discontinuous. Note that such a switching mechanism is conducted under the convergence conditions. Therefore, the convergence and stability of the proposed method can still be guaranteed even though chattering may occur. Up to now, it is still an open problem to reduce the chattering in ETC methods.

Remark 8: The threshold-triggering mechanism in (18) is similar to a dead-zone strategy [30], [31]: the controller is triggered only when the tracking error does not enter the given range; otherwise, the controller is on hold without being updated.

IV. CONVERGENCE ANALYSIS

Theorem 1: Consider a repetitive nonlinear nonaffine system (1) that satisfies Assumptions 1 and 2. If the controller parameters are selected such that 0 < η_s < 2, μ_s > 0, and 0 < ρ_s b_s/√λ_s < 4, then applying the proposed ET-NILC method (18)–(21) ensures that the estimated parameter φ̂_k(t) is bounded and that the tracking error ỹ_k(t) converges with increasing iterations.

Proof: First, define the parameter estimation error φ̃_k(t) = φ_k(t) − φ̂_k(t). Subtracting φ_k(t) from both sides of (19) and using (3), one obtains

φ̃_k(t) = (1 − η_s Δu_{k-1}²(t)/(μ_s + Δu_{k-1}²(t)))φ̃_{k-1}(t) + Δφ_k(t).   (22)

Since 0 < η_s < 2 and μ_s > 0, there must exist a positive constant d_s for all k and t such that

0 < |1 − η_s Δu_{k-1}²(t)/(μ_s + Δu_{k-1}²(t))| = d_k(t) ≤ d_s ≤ 1   (23)

where d_s = sup_k max_{t∈{0,1,...,N}} d_k(t).

Then, from (22) and (23), we can get

|φ̃_k(t)| ≤ d_s|φ̃_{k-1}(t)| + 2b_s ≤ ··· ≤ d_s^k|φ̃_0(t)| + 2b_s/(1 − d_s).   (24)

Due to the system uncertainties and disturbances, the event-triggering mechanism works at some points, which means that there are always some points that are triggered, and the number of triggered points is infinite because the number of iterations is infinite. The case of d_s < 1 always exists since Δu_{k-1}(t) ≠ 0 if the controller is triggered. Therefore, d_s < 1 holds infinitely often with increasing iterations. As a result, the boundedness of φ̃_k(t) is derived from (24). Since the parameter φ_k(t) is bounded, it can be easily obtained that φ̂_k(t) is also bounded.

Second, at a triggered iteration, i.e., k = k_l, according to (10), we have

ỹ_{k_l}(t+1) = (1 − φ_{k_l}(t)P̂_{k_l}(t))ỹ_{k_{l-1}}(t+1).   (25)

Taking the absolute value of both sides of (25), it results in

|ỹ_{k_l}(t+1)| ≤ |1 − φ_{k_l}(t)P̂_{k_l}(t)| |ỹ_{k_{l-1}}(t+1)|.   (26)

Since 0 < ρ_s b_s/√λ_s < 4, one can derive according to the reset algorithm (20) that

0 < ρ_s φ_{k_l}(t)φ̂_{k_l}(t)/(λ_s + φ̂_{k_l}²(t)) < ρ_s|φ_{k_l}(t)||φ̂_{k_l}(t)|/(2√λ_s|φ̂_{k_l}(t)|) < ρ_s b_s/(2√λ_s) < 2   (27)

which implies that

|1 − φ_{k_l}(t)P̂_{k_l}(t)| = |1 − ρ_s φ_{k_l}(t)φ̂_{k_l}(t)/(λ_s + φ̂_{k_l}²(t))| ≤ d_1 < 1   (28)

where 0 < d_1 < 1 is a constant.
According to (26) and (28), one gets

|ỹ_{k_l}(t+1)| ≤ d_1|ỹ_{k_{l-1}}(t+1)| ≤ ··· ≤ d_1^l|ỹ_{k_0}(t+1)|   (29)

from which it is clear that lim_{k_l→∞} |ỹ_{k_l}(t+1)| = 0. In other words, the tracking error converges to zero along the triggered iterations.

For the inter-event iterations, i.e., k ∈ (k_{l-1}, k_l), the control input remains the same as that in the previous triggered iteration. Since the tracking error at the triggered iterations has been proved to be convergent, it is evident that the tracking error at the non-triggered iterations remains unchanged. Therefore, in general, the tracking error of the system decreases with the increasing iterations.

V. EXTENSION TO MIMO NONLINEAR NONAFFINE SYSTEMS

A. Problem Formulation

Consider a repetitive MIMO nonlinear nonaffine discrete-time system

y_k(t+1) = f(y_k(t), ..., y_k(t−n_y), u_k(t), ..., u_k(t−n_u))   (30)

where y_k(t) ∈ R^m represents the system output, u_k(t) ∈ R^m represents the control input, and f(·) is nonlinear and continuously differentiable.

Some assumptions are made as follows.

Assumption 1': The partial derivative of f(·) with respect to the control input u_k(t) is continuous.

Assumption 2': For ∀t ∈ [0, N] and k ∈ Z+, if Δu_k(t) ≠ 0, then system (30) is generalized Lipschitz, i.e.,

‖Δy_k(t+1)‖ ≤ b̄‖Δu_k(t)‖   (31)

where b̄ > 0 is a constant.

Lemma 2: For MIMO nonlinear system (30) satisfying Assumptions 1' and 2' with Δu_k(t) ≠ 0, there must exist Φ_k(t) such that

y_k(t+1) = y_{k-1}(t+1) + Φ_k(t)Δu_k(t)   (32)

where Φ_k(t) = (φ_{ji,k}(t)) ∈ R^{m×m}, j = 1, 2, ..., m, i = 1, 2, ..., m, and sup_k max_{t∈{0,1,...,N}} ‖Φ_k(t)‖ ≤ b̄.

Remark 9: The proposed virtual linear data relationship (32) plays an important role when the event triggering condition is satisfied; it is mainly used in the design and analysis of the subsequent controller and parameter estimation algorithm.

B. Controller Design

When k = k_l, define an objective function as follows:

J(u_{k_l}(t)) = ‖y_d(t+1) − y_{k_l}(t+1)‖² + λ_m‖u_{k_l}(t) − u_{k_{l-1}}(t)‖²

where λ_m > 0.

Then, the event-triggered ILC law for MIMO nonlinear systems is proposed as follows:

u_k(t) = u_{k_{l-1}}(t) + P̂_k(t)ỹ_{k_{l-1}}(t+1),   k = k_l
u_k(t) = u_{k_{l-1}}(t),                             k ∈ (k_{l-1}, k_l)   (33)

where P̂_k(t) = ρ_m Φ̂_k^T(t)/(λ_m + ‖Φ̂_k(t)‖²) and ρ_m is an adjustable positive parameter. Φ̂_k(t) is the estimation of Φ_k(t).

Given an objective function

J(Φ̂_k(t)) = ‖y_{k-1}(t+1) − y_{k-2}(t+1) − Φ̂_k(t)Δu_{k-1}(t)‖² + μ_m‖Φ̂_k(t) − Φ̂_{k-1}(t)‖²,

taking its derivative with respect to Φ̂_k(t) and equating it to zero, we can derive the parameter estimation law as

Φ̂_k(t) = Φ̂_{k-1}(t) + η_m(Δy_{k-1}(t+1) − Φ̂_{k-1}(t)Δu_{k-1}(t))Δu_{k-1}^T(t)/(μ_m + ‖Δu_{k-1}(t)‖²)   (34)

where 0 < η_m < 2 and μ_m > 0 are two constants.

In the case of the triggered iterations, i.e., k = k_l, similar to steps (9) and (10), one obtains

ỹ_k(t+1) = (I − Φ_k(t)P̂_k(t))ỹ_{k_{l-1}}(t+1).   (35)

Defining the event-triggered error e_{k-1}(t+1) = ỹ_{k_{l-1}}(t+1) − ỹ_{k-1}(t+1), k−1 ∈ [k_{l-1}, k_l), (35) becomes

ỹ_k(t+1) = (I − Φ_k(t)P̂_k(t))(ỹ_{k-1}(t+1) + e_{k-1}(t+1)).   (36)

Define a Lyapunov function as V_k(t+1) = ‖ỹ_k(t+1)‖². Similarly, for the case of k ∈ (k_{l-1}, k_l), we have

ΔV_k(t+1) = ‖ỹ_k(t+1)‖² − ‖ỹ_{k-1}(t+1)‖² = 0.   (37)

For the case of k = k_l, we have

ΔV_k(t+1) = ‖(I − Φ_{k_l}(t)P̂_k(t))(ỹ_{k-1}(t+1) + e_{k-1}(t+1))‖² − ‖ỹ_{k-1}(t+1)‖²
 ≤ 2‖I − Φ_k(t)P̂_k(t)‖²‖ỹ_{k-1}(t+1)‖² + 2‖I − Φ_k(t)P̂_k(t)‖²‖e_{k-1}(t+1)‖² − ‖ỹ_{k-1}(t+1)‖²
 = (2‖I − Φ_k(t)P̂_k(t)‖² − 1)‖ỹ_{k-1}(t+1)‖² + 2‖I − Φ_k(t)P̂_k(t)‖²‖e_{k-1}(t+1)‖².   (38)

Letting ΔV_k(t+1) < 0, it obtains

‖e_{k-1}(t+1)‖² < (1 − 2‖I − Φ_k(t)P̂_k(t)‖²)‖ỹ_{k-1}(t+1)‖² / (2‖I − Φ_k(t)P̂_k(t)‖²).   (39)

Then, the event-triggering condition for MIMO systems can be written as

χ(e_{k-1}(t+1)) = ‖e_{k-1}(t+1)‖² − G_{k-1}(t)/(2‖I − Φ̂_k(t)P̂_k(t)‖²) ≥ 0   (40)

where G_{k-1}(t) = (1 − 2‖I − Φ̂_k(t)P̂_k(t)‖²)‖ỹ_{k-1}(t+1)‖².

Similarly, the tracking error threshold is still considered in this section, that is, sup_k max_{t∈{0,1,...,N}} |ỹ_{i,k}(t)| ≤ ϑ, i = 1, 2, ..., m.
In summary, the proposed ET-NILC method for MIMO systems is given as follows:

k_l = k, l = 0, 1, ...,   if |ỹ_{i,k}(t)| > ϑ or χ(e_{k-1}(t+1)) ≥ 0   (41)

Φ̂_k(t) = Φ̂_{k-1}(t) + η_m(Δy_{k-1}(t+1) − Φ̂_{k-1}(t)Δu_{k-1}(t))Δu_{k-1}^T(t)/(μ_m + ‖Δu_{k-1}(t)‖²)   (42)

φ̂_{ji,k}(t) = φ̂_{ji,0}(t),   if sign(φ̂_{ji,k}(t)) ≠ sign(φ̂_{ji,0}(t)), j, i = 1, 2, ..., m   (43)

u_k(t) = u_{k_{l-1}}(t) + P̂_k(t)ỹ_{k_{l-1}}(t+1),   k = k_l
u_k(t) = u_{k_{l-1}}(t),                             k ∈ (k_{l-1}, k_l)   (44)

where (43) is a reset algorithm added to make the estimation algorithm (42) more flexible.

C. Convergence Analysis

Two lemmas are introduced in the convergence analysis.

Lemma 3 [32]: Given the system

θ_{k+1}(t) = Ψ_k(t)θ_k(t) + σ_k(t)   (45)

where θ_k(t) ∈ R^p is the state, σ_k(t) ∈ R^p is the external input, and Ψ_k(t) ∈ R^{p×p} is a bounded mapping matrix, i.e., sup_{t∈{0,1,...,N}, k∈Z+} ‖Ψ_k(t)‖ ≤ β with β > 0. If for any given integer t ∈ {0, 1, ..., N} there exists some iteration sequence {ξ_q(t): q ∈ Z+}, where ξ_0(t) = 0 and 0 < ξ_{q+1}(t) − ξ_q(t) ≤ δ(t) for some finite positive integer 1 ≤ δ(t) < ∞, such that

∏_{j=ξ_q(t)}^{ξ_{q+1}(t)−1} ‖Ψ_j(t)‖ ≤ χ̄ < 1, for all q ∈ Z+   (46)

then for any initial state θ_{k_0}(t) (k_0 ∈ Z+) and any bounded input σ_k(t), the solution to (45) exists for all k ≥ k_0 and satisfies

‖θ_k(t)‖ ≤ χ̄^{q̄_k(t)−q_{k_0}(t)} β^{2(δ(t)−1)} ‖θ_{k_0}(t)‖ + [ χ̄^{q̄_k(t)−q_{k_0}(t)} β^{δ(t)−1} (1 − β^{δ(t)−1})/(1 − β) + ((1 − χ̄^{q̄_k(t)−q_{k_0}(t)})/(1 − χ̄))( β^{δ(t)} (1 − β^{δ(t)−1})/(1 − β) + β^{δ(t)−1} ) + (1 − β^{δ(t)−1})/(1 − β) ] max_{k_0≤i≤k−1} ‖σ_i(t)‖   (47)

where, for every k ∈ Z+, q̄_k(t) ∈ Z+ and q_k(t) ∈ Z+ denote the greatest and least integers such that ξ_{q̄_k(t)}(t) ≤ k < ξ_{q̄_k(t)+1}(t) and ξ_{q_k(t)−1}(t) < k ≤ ξ_{q_k(t)}(t) hold, respectively.

Lemma 4 [32]: Given the system (45), let σ_k(t) be bounded such that sup_{k∈Z+} ‖σ_k(t)‖ ≤ β_σ(t) for some bounded β_σ(t) ≥ 0. If condition (46) stated in Lemma 3 holds, then the following results hold for all t ∈ {0, 1, ..., N}.
1) θ_k(t) is bounded such that sup_{k∈Z+} ‖θ_k(t)‖ ≤ β_θ(t) and lim sup_{k→∞} ‖θ_k(t)‖ ≤ β_{θ,sup}(t) for some bounded β_θ(t) ≥ 0 and β_{θ,sup}(t) ≥ 0, where β_{θ,sup}(t) < β_θ(t).
2) lim_{k→∞} θ_k(t) = 0 if, additionally, lim_{k→∞} σ_k(t) = 0 holds.

The following is the convergence theorem of the ET-NILC for MIMO systems.

Theorem 2: Consider system (30) under Assumptions 1' and 2'. The presented ET-NILC approach (41)–(44) guarantees that the parameter estimate Φ̂_k(t) is bounded and that the tracking error ỹ_k(t) is convergent with the increase of the iteration if, for some iteration sequence {ξ_q(t): q ∈ Z+} and some finite positive integer 1 ≤ δ(t) < ∞ satisfying ξ_0(t) = 0 and 0 < ξ_{q+1}(t) − ξ_q(t) ≤ δ(t), one can select the controller parameters properly such that

∏_{k_l=ξ_q(t)}^{ξ_{q+1}(t)−1} ‖Q_{k_l}(t)‖ ≤ γ < 1, for ∀q ∈ Z+   (48)

where Q_{k_l}(t) = I − Φ_{k_l}(t)P̂_{k_l}(t).

Proof:
1) Boundedness of the Parameter Estimate Φ̂_k(t): Define the parameter estimation error Φ̃_k(t) = Φ_k(t) − Φ̂_k(t). Following steps similar to (22), we can get

‖Φ̃_k(t)‖ ≤ ‖Φ̃_{k-1}(t)(I − η_m Δu_{k-1}(t)Δu_{k-1}^T(t)/(μ_m + ‖Δu_{k-1}(t)‖²))‖ + 2b̄.   (49)

Squaring the first term on the right-hand side of (49), one obtains

‖Φ̃_{k-1}(t)(I − η_m Δu_{k-1}(t)Δu_{k-1}^T(t)/(μ_m + ‖Δu_{k-1}(t)‖²))‖² = ‖Φ̃_{k-1}(t)‖² + (−2 + η_m‖Δu_{k-1}(t)‖²/(μ_m + ‖Δu_{k-1}(t)‖²)) × η_m‖Φ̃_{k-1}(t)Δu_{k-1}(t)‖²/(μ_m + ‖Δu_{k-1}(t)‖²).   (50)

Since 0 < η_m < 2 and μ_m > 0, then

−2 + η_m‖Δu_{k-1}(t)‖²/(μ_m + ‖Δu_{k-1}(t)‖²) < 0.   (51)

From (50) and (51), it obtains

‖Φ̃_{k-1}(t)(I − η_m Δu_{k-1}(t)Δu_{k-1}^T(t)/(μ_m + ‖Δu_{k-1}(t)‖²))‖² < ‖Φ̃_{k-1}(t)‖².   (52)

Therefore, there must exist a constant 0 < d_m < 1 such that

‖Φ̃_k(t)‖ ≤ d_m‖Φ̃_{k-1}(t)‖ + 2b̄ ≤ ··· ≤ d_m^k‖Φ̃_0(t)‖ + 2b̄/(1 − d_m).   (53)

From (53), the boundedness of Φ̂_k(t) can be directly deduced.

2) Convergence of the Tracking Error ‖ỹ_k(t+1)‖: From part 1), we know that Φ̂_k(t) is bounded. Combining Lemma 2, the boundedness of ‖I − Φ_{k_l}(t)P̂_{k_l}(t)‖ is easy to obtain.

Let Q_{k_l}(t) = I − Φ_{k_l}(t)P̂_{k_l}(t). For k = k_l, one can rewrite (35) as

ỹ_{k_l}(t+1) = Q_{k_l}(t)ỹ_{k_{l-1}}(t+1).   (54)

By comparing (54) and (45), we can find that θ_{k+1}(t) and Ψ_k(t) in (45) correspond to ỹ_{k_l}(t+1) and Q_{k_l}(t) in (54), respectively, and σ_k(t) in (45) corresponds to the zero term in (54). According to Lemma 3 and 2) of Lemma 4, it is easy to get lim_{k_l→∞} ‖ỹ_{k_l}(t+1)‖ = 0 if (48) is satisfied, i.e., the tracking error at the triggered iterations converges to zero with the increase of iterations under condition (48).

For the inter-event interval, i.e., k ∈ (k_{l-1}, k_l), the tracking error is maintained because the control input remains the same as the input at the latest triggered iteration. Therefore, the tracking error ỹ_k(t+1) converges gradually with the increase of iterations.
VI. SIMULATION STUDY

Example 1: Consider a two-input two-output nonlinear discrete-time system

y_{1,k}(t+1) = 2.5y_{1,k}(t)/(1 + y_{1,k}²(t) + y_{2,k}²(t)) + 1.2u_{1,k}(t) + 0.5u_{2,k}(t)
y_{2,k}(t+1) = 5y_{2,k}(t)/(1 + y_{1,k}²(t) + y_{2,k}²(t)) + 0.5u_{1,k}(t) + u_{2,k}(t).

The desired tracking trajectory is

y_{d,1}(t) = cos(t/20) + sin(t/50)
y_{d,2}(t) = sin(t/50) + 5cos(t/20),   t ∈ {0, 1, ..., 500}.

In this simulation, the tracking error threshold is selected as ϑ = 0.1. The parameters are selected as Φ̂_0(t) = [2 1; 1 2], η_m = 1.8, μ_m = 2, ρ_m = 1.6, and λ_m = 0.1. Applying the proposed ET-NILC method (41)–(44), the simulation results are shown in Figs. 1–5. Figs. 1 and 2 show the output tracking performance. Fig. 3 shows the total number of triggered instants in every iteration. Figs. 4 and 5 show the convergence of the tracking error, where the y-axis is the average tracking error defined as e^y_{i,k} = (1/N)Σ_{j=1}^N |ỹ_{i,k}(j)|, i = 1, 2.

Fig. 1. Output tracking performance of y_{1,k}(t) in Example 1.
Fig. 2. Output tracking performance of y_{2,k}(t) in Example 1.
Fig. 3. Total number of event-triggered moments per iteration in Example 1.
Fig. 4. Absolute mean tracking error e^y_{1,k}.

It is seen that, as the iteration number increases, the total number of triggered instants in an iteration decreases, which verifies that the proposed ET-NILC method (41)–(44) can decrease the number of control activations while ensuring a good tracking performance.

By nature, the proposed ET-NILC includes both an event-triggering condition derived according to the Lyapunov stability and a threshold condition. The former is used to reduce the controller triggering times and ensure system stability. The latter is used to reduce the controller triggering times while making the tracking error converge to within a small bound.

In order to clearly show the roles of the two conditions, applying the event-triggering condition (40) solely under the same simulation background, the results are shown in Figs. 4–7. The red dashed–dotted lines in Figs. 4 and 5 show the convergence of the tracking error. Fig. 6 shows the total number of triggered time instants in every iteration. Fig. 7 shows the event-triggering cases along the iteration direction at the time points 49, 93, 352, and 484 by using condition (40) only, where the vertical axis represents whether the controller is triggered or not: "1" means that the controller is triggered and "0" indicates that no update occurs for the controller.
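For readers who wish to reproduce Example 1, the plant and the desired trajectories can be coded directly. The sketch below (Python with NumPy) sets up only the simulation scaffolding; the controller itself would be the ET-NILC of (41)–(44), and the zero initial input is an assumption of this sketch.

import numpy as np

def plant(y, u):
    # Two-input two-output nonlinear plant of Example 1
    den = 1.0 + y[0] ** 2 + y[1] ** 2
    return np.array([2.5 * y[0] / den + 1.2 * u[0] + 0.5 * u[1],
                     5.0 * y[1] / den + 0.5 * u[0] + u[1]])

T = 500
t = np.arange(T + 1)
yd = np.vstack([np.cos(t / 20) + np.sin(t / 50),       # desired trajectory y_d1
                np.sin(t / 50) + 5 * np.cos(t / 20)])  # desired trajectory y_d2

def run_batch(u_seq):
    # Simulate one iteration (batch); u_seq has shape (2, T)
    y = np.zeros((2, T + 1))
    for j in range(T):
        y[:, j + 1] = plant(y[:, j], u_seq[:, j])
    return y

y0 = run_batch(np.zeros((2, T)))
print(np.abs(yd - y0).mean(axis=1))   # absolute mean tracking error of the untuned first batch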
Fig. 5. Absolute mean tracking error e^y_{2,k}.
Fig. 6. Total number of event-triggered moments per iteration with the event-triggering condition (40) only.
Fig. 7. Event-triggering cases at the time points 49, 93, 352, and 484 by using condition (40) only.
Fig. 8. Total number of event-triggered moments per iteration with the threshold-triggering condition only (ϑ = 0.1).
Fig. 9. Total number of event-triggered moments per iteration with the threshold-triggering condition only (ϑ = 0).

Comparing Figs. 3 and 6, one can see that the total triggered times of the ET-NILC with the event-triggering condition (40) only are less than those with both conditions together, while the convergence performance of the ET-NILC with the event-triggering condition (40) only becomes worse than that with both conditions, as shown in Figs. 4 and 5.

Furthermore, one sees from Fig. 6 that a flat trend occurs after several iterations, but this does not imply that the event-triggering condition is invalid. One can see from Fig. 7 that, for a certain fixed time, the triggering condition (40) always works.

Applying the threshold condition solely under the same simulation background, the results are shown in Figs. 4, 5, and 8. Similarly, one can see that the total triggered times of the ET-NILC with the threshold condition only are less than those with both conditions together. However, the convergence of the ET-NILC with the threshold condition only becomes worse than that with both conditions. Furthermore, if we set the threshold value to zero, the ET-NILC becomes an all-time-iteration-triggered one, and a good control performance can be achieved, but the cost is that no controller actions are reduced, as shown in Figs. 4, 5, and 9 under the same simulation background.

In summary, the proposed ET-NILC with both conditions can not only reduce the controller triggering times but also ensure that the tracking error converges to within an arbitrarily predesignated small bound.
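The two quantities compared throughout Figs. 3–9 are easy to log during a run; the helper below (Python; names are illustrative only) accumulates the per-iteration number of triggered instants and the absolute mean tracking error e^y_{i,k} plotted in Figs. 4 and 5.

import numpy as np

def log_iteration(trigger_flags, ytilde):
    # trigger_flags: boolean array over the time instants of one iteration
    # ytilde: tracking-error array of shape (m, N) for the same iteration
    total_triggers = int(np.sum(trigger_flags))       # one point of Figs. 3, 6, 8, or 9
    mean_abs_error = np.abs(ytilde).mean(axis=1)      # one point of Figs. 4 and 5 per output
    return total_triggers, mean_abs_error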
Fig. 10. Heat exchanger [33].

Example 2: A steam-water heat exchanger [33], [34], whose experimental equipment is shown in Fig. 10, is considered:

x_k(t) = 1.5u_k(t) − 1.5u_k²(t) + 0.5u_k³(t)
y_k(t+1) = 0.6y_k(t) − 0.1y_k(t−1) + 1.2x_k(t) − 0.1x_k(t−1) + w_k(t)

where u_k(t) represents the process water flow rate, y_k(t) represents the process water exit temperature, and w_k(t) = 0.01·randn/k is a disturbance varying with time and iteration.

The control objective of this example is to track the following desired output:

y_d(t) = 0.5,   t ∈ [0, 50)
y_d(t) = 1,     t ∈ [50, 100)
y_d(t) = 2,     t ∈ [100, 150)
y_d(t) = 1.5,   t ∈ [150, 200].

In this simulation, the tracking error threshold is selected as ϑ = 0.001. The parameters are selected as φ̂_0(t) = 1.5, η_s = 0.01, μ_s = 1, ρ_s = 0.6, and λ_s = 0.6. Applying the proposed ET-NILC method (18)–(21), the simulation results are shown in Figs. 11 and 12.

Fig. 11. Output tracking performance in Example 2.
Fig. 12. Total number of event-triggered moments per iteration in Example 2.

It is seen that the tracking performance under disturbance can also be guaranteed although the control is not triggered at all times, and the total number of triggers per iteration has a decreasing trend with the increase of iterations.
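The Hammerstein heat-exchanger model of Example 2 is likewise easy to reproduce for testing; a minimal simulation sketch is given below (Python with NumPy), where the controller itself would be the SISO ET-NILC of (18)–(21) and the zero initial input is an assumption of this sketch.

import numpy as np

def run_batch(u, k, rng):
    # One batch of the Example 2 heat-exchanger model; u has length 200, k >= 1 is the iteration index
    T = len(u)
    x = np.zeros(T)
    y = np.zeros(T + 1)
    for t in range(T):
        x[t] = 1.5 * u[t] - 1.5 * u[t] ** 2 + 0.5 * u[t] ** 3
        w = 0.01 * rng.standard_normal() / k            # disturbance decaying with the iteration index
        # negative indices at t = 0 hit untouched zeros, matching x(t) = y(t) = 0 for t < 0
        y[t + 1] = 0.6 * y[t] - 0.1 * y[t - 1] + 1.2 * x[t] - 0.1 * x[t - 1] + w
    return y

t = np.arange(201)
yd = np.select([t < 50, t < 100, t < 150], [0.5, 1.0, 2.0], default=1.5)   # piecewise desired output

rng = np.random.default_rng(0)
y1 = run_batch(np.zeros(200), k=1, rng=rng)
print(np.abs(yd - y1).mean())   # error of the untuned first batch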
As a summary, the simulation results illustrate that the presented ET-NILC method can reduce the control activations while still ensuring the tracking performance, so as to achieve the purpose of reducing the control cost.

VII. CONCLUSION

An event-triggered nonlinear ILC method is presented for SISO nonaffine and nonlinear systems via a virtual linear data model. The 2-D dynamic behavior along both time and iteration directions is considered when designing the event triggering conditions. The ET-NILC is updated iteratively when the triggering condition is satisfied; otherwise, the control input remains the same as that in the previous batch, where the learning gain of the control law can be calculated using I/O data in real time to enhance the robustness. The proposed ET-NILC is a data-driven method using the I/O data only instead of relying on any explicit model information. The results are also extended to MIMO systems, where the convergence analysis is provided by introducing the property of input-to-state stability for discrete systems. The effectiveness of the presented methods is verified by two simulation examples.

APPENDIX

From system (1), we have

Δy_k(t+1) = f(y_k(t), ..., y_k(t−n_y), u_k(t), ..., u_k(t−n_u)) − f(y_k(t), ..., y_k(t−n_y), u_{k-1}(t), u_k(t−1), ..., u_k(t−n_u)) + f(y_k(t), ..., y_k(t−n_y), u_{k-1}(t), u_k(t−1), ..., u_k(t−n_u)) − f(y_{k-1}(t), ..., y_{k-1}(t−n_y), u_{k-1}(t), ..., u_{k-1}(t−n_u)).   (A1)

Let

ξ_k(t) = f(y_k(t), ..., y_k(t−n_y), u_{k-1}(t), u_k(t−1), ..., u_k(t−n_u)) − f(y_{k-1}(t), ..., y_{k-1}(t−n_y), u_{k-1}(t), ..., u_{k-1}(t−n_u)).

Using Assumption 1 and the Cauchy differential mean value theorem, from (A1), we have

Δy_k(t+1) = (∂f*/∂u_k(t))Δu_k(t) + ξ_k(t)   (A2)

where ∂f*/∂u_k(t) is the proper partial derivative value of f(···) with respect to u_k(t).

Consider the following equation with a variable ζ_k(t):

ξ_k(t) = ζ_k(t)Δu_k(t).   (A3)

Since Δu_k(t) ≠ 0, there exists at least one solution ζ*_k(t) to (A3).

Let φ_k(t) = ζ*_k(t) + ∂f*/∂u_k(t). Then (A2) can be rewritten as (3).

REFERENCES

[1] A. Ferrara, S. Sacone, and S. Siri, "Design of networked freeway traffic controllers based on event-triggered control concepts," Int. J. Robust Nonlinear Control, vol. 26, no. 6, pp. 1162–1183, 2016.
[2] W. Wang, C. Huang, J. D. Cao, and F. E. Alsaadi, "Event-triggered control for sampled-data cluster formation of multi-agent systems," Neurocomputing, vol. 267, pp. 25–35, Dec. 2017.
[3] N. K. Dhar, N. K. Verma, and L. Behera, "Adaptive critic-based event-triggered control for HVAC system," IEEE Trans. Ind. Informat., vol. 14, no. 1, pp. 178–188, Jan. 2018.
[4] C. Peng and F. Q. Li, "A survey on recent advances in event-triggered communication and control," Inf. Sci., vols. 457–458, pp. 113–125, Aug. 2018.
[5] L. Ding, Q. L. Han, X. Ge, and X. M. Zhang, "An overview of recent advances in event-triggered consensus of multi-agent systems," IEEE Trans. Cybern., vol. 48, no. 4, pp. 1110–1123, Nov. 2018.
[6] K. Masako, "Event-triggered control with self-triggered sampling for discrete-time uncertain systems," IEEE Trans. Autom. Control, vol. 64, no. 3, pp. 1273–1279, Mar. 2019, doi: 10.1109/TAC.2018.2845693.
[7] D. Liu and G. H. Yang, "Neural network-based event-triggered MFAC for nonlinear discrete-time processes," Neurocomputing, vol. 272, pp. 356–364, Jan. 2018.
[8] N. Lin, R. Chi, and B. Huang, "Event-triggered model-free adaptive control," IEEE Trans. Syst., Man, Cybern. Syst., early access, Jul. 3, 2019, doi: 10.1109/TSMC.2019.2924356.
[9] R. H. Chi, X. H. Liu, R. K. Zhang, Z. S. Hou, and B. Huang, "Constrained data-driven optimal iterative learning control," J. Process Control, vol. 55, pp. 10–29, Jul. 2017.
[10] Q. Yu, Z. S. Hou, and J.-X. Xu, "D-type ILC based dynamic modeling and norm optimal ILC for high-speed trains," IEEE Trans. Control Syst. Technol., vol. 26, no. 2, pp. 652–663, Mar. 2018.
[11] S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning," J. Robotic Syst., vol. 1, no. 2, pp. 123–140, 1984.
[12] D. Shen, C. Zhang, and Y. Xu, "Two updating schemes of iterative learning control for networked control systems with random data dropouts," Inf. Sci., vol. 381, pp. 352–370, Mar. 2017.
[13] X. Bu, F. Yu, Z. Hou, and F. Wang, "Iterative learning control for a class of nonlinear systems with random packet losses," Nonlinear Anal. Real World Appl., vol. 14, no. 1, pp. 567–580, Feb. 2013.
[14] Y.-J. Pan, H. J. Marquez, T. W. Chen, and L. Sheng, "Effects of network communications on a class of learning controlled non-linear systems," Int. J. Syst. Sci., vol. 40, no. 7, pp. 757–767, 2009.
[15] B. Xuhui, H. Zhanwei, H. Zhongsheng, and Y. Junqi, "Robust iterative learning control design for linear systems with time-varying delays and packet dropouts," Adv. Difference Equ., vol. 2017, no. 1, pp. 1–17, Dec. 2017.
[16] D. Meng and K. L. Moore, "Robust cooperative learning control for directed networks with nonlinear dynamics," Automatica, vol. 75, pp. 172–181, Jan. 2017.
[17] W. Xiong, X. Yu, R. Patel, and W. Yu, "Iterative learning control for discrete-time systems with event-triggered transmission strategy and quantization," Automatica, vol. 72, pp. 84–91, Oct. 2016.
[18] T. Zhang and J. M. Li, "Event-triggered iterative learning control for multi-agent systems with quantization," Asian J. Control, vol. 20, no. 3, pp. 1088–1101, 2018, doi: 10.1002/asjc.1450.
[19] J. Tang and L. Sheng, "Iterative learning fault-tolerant control for networked batch processes with event-triggered transmission strategy and data dropouts," Syst. Sci. Control Eng., vol. 6, no. 3, pp. 44–53, 2018.
[20] R. H. Chi and Z. S. Hou, "Dual-stage optimal iterative learning control for nonlinear non-affine discrete-time systems," Acta Autom. Sinica, vol. 33, no. 10, pp. 1061–1065, 2007.
[21] L. Wang, M. V. Basin, H. Li, and R. Lu, "Observer-based composite adaptive fuzzy control for nonstrict-feedback systems with actuator failures," IEEE Trans. Fuzzy Syst., vol. 26, no. 4, pp. 2336–2347, Aug. 2018.
[22] R. Cao, Z. Hou, Y. Zhao, and B. Zhang, "Model free adaptive iterative learning control for tool feed system in noncircular turning," IEEE Access, vol. 7, pp. 113712–113725, 2019.
[23] X. Bu, Z. Hou, and R. Chi, "Model free adaptive iterative learning control for farm vehicle path tracking," in Proc. 3rd IFAC Int. Conf. Intell. Control Automat. Sci., Chengdu, China, Sep. 2013, pp. 153–158.
[24] Y. Ren, Z. Hou, I. I. Sirmatel, and N. Geroliminis, "Data driven model free adaptive iterative learning perimeter control for large-scale urban road networks," Transp. Res. C, Emerg. Technol., vol. 115, pp. 1–16, Jun. 2020.
[25] X. Bu, L. Cui, Z. Hou, and W. Qian, "Formation control for a class of nonlinear multiagent systems using model-free adaptive iterative learning," Int. J. Robust Nonlinear Control, vol. 28, no. 4, pp. 1402–1412, Mar. 2018.
[26] L. Xu, "A proportional differential control method for a time-delay system using the Taylor expansion approximation," Appl. Math. Comput., vol. 236, pp. 391–399, Jun. 2014.
[27] H. Deng, H.-X. Li, and Y.-H. Wu, "Feedback-linearization-based neural adaptive control for unknown nonaffine nonlinear discrete-time systems," IEEE Trans. Neural Netw., vol. 19, no. 9, pp. 1615–1625, Sep. 2008.
[28] P. Tallapragada and N. Chopra, "On event triggered tracking for nonlinear systems," IEEE Trans. Autom. Control, vol. 58, no. 9, pp. 2343–2348, Sep. 2013.
[29] Z. S. Hou and S. S. Xiong, "On model-free adaptive control and its stability analysis," IEEE Trans. Autom. Control, vol. 64, no. 11, pp. 4555–4569, Nov. 2019, doi: 10.1109/TAC.2019.2894586.
[30] S. Ibrir and C.-Y. Su, "Simultaneous state and dead-zone parameter estimation for a class of bounded-state nonlinear systems," IEEE Trans. Control Syst. Technol., vol. 19, no. 4, pp. 911–919, Jul. 2011.
[31] G. Lai, G. Tao, Y. Zhang, and Z. Liu, "Adaptive control of noncanonical neural-network nonlinear systems with unknown input dead-zone characteristics," IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 9, pp. 3346–3360, Sep. 2020, doi: 10.1109/TNNLS.2019.2943637.
[32] D. Meng and K. L. Moore, "Robust iterative learning control for nonrepetitive uncertain systems," IEEE Trans. Autom. Control, vol. 62, no. 2, pp. 907–913, Feb. 2017.
[33] E. Eskinat, S. H. Johnson, and W. L. Luyben, "Use of Hammerstein models in identification of nonlinear systems," AIChE J., vol. 37, no. 2, pp. 255–268, Feb. 1991.
[34] T. Yamamoto, K. Takao, and T. Yamada, "Design of a data-driven PID controller," IEEE Trans. Control Syst. Technol., vol. 17, no. 1, pp. 29–39, Sep. 2009.

Na Lin received the M.Sc. degree in automatic control from the Qingdao University of Science and Technology, Qingdao, China, in 2017, where she is currently pursuing the Ph.D. degree with the School of Automation and Electronic Engineering, Institute of Artificial Intelligence and Control. Her research interests include data-driven control and iterative learning control.

Ronghu Chi received the Ph.D. degree from Beijing Jiaotong University, Beijing, China, in 2007. He was a Visiting Scholar with Nanyang Technological University, Singapore, from 2011 to 2012, and a Visiting Professor with the University of Alberta, Edmonton, AB, Canada, from 2014 to 2015. In 2007, he joined the Qingdao University of Science and Technology, Qingdao, China, where he is currently a Full Professor with the School of Automation and Electronic Engineering. He has published over 100 papers in important international journals and conference proceedings. His current research interests include iterative learning control, data-driven control, intelligent transportation systems, and so on. Dr. Chi was awarded the Taishan Scholarship in 2016. He has served in various positions in international conferences and was an invited Guest Editor for the International Journal of Automation and Computing. He has also served as a Council Member of the Shandong Institute of Automation and a Committee Member of the Data-driven Control, Learning and Optimization Professional Committee, and so on.
Biao Huang (Fellow, IEEE) received the B.Sc. and M.Sc. degrees in automatic control from the Beijing University of Aeronautics and Astronautics, Beijing, China, in 1983 and 1986, respectively, and the Ph.D. degree in process control from the University of Alberta, Edmonton, AB, Canada, in 1997. In 1997, he joined the Department of Chemical and Materials Engineering, University of Alberta, as an Assistant Professor, where he is currently a Professor. He is the Industrial Research Chair in Control of Oil Sands Processes with the Natural Sciences and Engineering Research Council of Canada and the Industry Chair of Process Control with Alberta Innovates. He has applied his expertise extensively in industrial practice. His current research interests include process control, data analytics, system identification, control performance assessment, Bayesian methods, and state estimation. Dr. Huang is a Fellow of the Canadian Academy of Engineering and the Chemical Institute of Canada. He was a recipient of a number of awards, including the Alexander von Humboldt Research Fellowship from Germany, the Best Paper Award from the Journal of Process Control (IFAC), the APEGA Summit Award in Research Excellence, and the Bantrel Award in Design and Industrial Practice.

Zhongsheng Hou (Fellow, IEEE) received the bachelor's and master's degrees in applied mathematics from the Jilin University of Technology, Changchun, China, in 1983 and 1988, respectively, and the Ph.D. degree in automatic control from Northeastern University, Shenyang, China, in 1994. He was a Post-Doctoral Fellow with the Harbin Institute of Technology, Harbin, China, from 1995 to 1997, and a Visiting Scholar with Yale University, New Haven, CT, USA, from 2002 to 2003. In 1997, he joined Beijing Jiaotong University, Beijing, China, where he was a Distinguished Professor and the Founding Director of the Advanced Control Systems Laboratory and the Head of the Department of Automatic Control until 2018. He is currently a Chair Professor with Qingdao University, Qingdao, China. His research interests include data-driven control, model-free adaptive control, learning control, and intelligent transportation systems. Dr. Hou is a Fellow of the CAA and an IFAC Technical Committee Member on both Adaptive and Learning Systems and Transportation Systems. He is also the Founding Director of the Technical Committee on Data Driven Control, Learning and Optimization (DDCLO), Chinese Association of Automation.
Event-Triggered ILC for Optimal Consensus at Specified Data Points of Heterogeneous Networked Agents With Switching Topologies

Na Lin, Ronghu Chi, and Biao Huang, Fellow, IEEE

Abstract—In this article, the optimal consensus problem at specified data points is considered for heterogeneous networked agents with iteration-switching topologies. A point-to-point linear data model (PTP-LDM) is proposed for heterogeneous agents to establish an iterative input–output relationship of the agents at the specified data points between two consecutive iterations. The proposed PTP-LDM is only used to facilitate the subsequent controller design and analysis. In the sequel, an iterative identification algorithm is presented to estimate the unknown parameters in the PTP-LDM. Next, an event-triggered point-to-point iterative learning control (ET-PTPILC) is proposed to achieve an optimal consensus of heterogeneous networked agents with switching topology. A Lyapunov function is designed to attain the event-triggering condition where only the control information at the specified data points is available. The controller is updated in a batchwise manner only when the event-triggering condition is satisfied, thus saving significant communication resources and reducing the number of actuator updates. The convergence is proved mathematically. In addition, the results are also extended from linear discrete-time systems to nonlinear nonaffine discrete-time systems. The validity of the presented ET-PTPILC method is demonstrated through simulation studies.

Index Terms—Event-triggered point-to-point iterative learning control (ET-PTPILC), iteration-switching communication topology, multiagent systems (MASs), optimal consensus, specified data points.

Manuscript received July 11, 2020; revised November 3, 2020 and January 18, 2021; accepted January 21, 2021. This work was supported in part by the National Science Foundation of China under Grant 61873139; in part by the Taishan Scholar Program of Shandong Province of China; in part by the Natural Science Foundation of Shandong Province of China under Grant ZR2019MF036; and in part by the Natural Science and Engineering Research Council of Canada. This article was recommended by Associate Editor X.-M. Zhang. (Corresponding author: Ronghu Chi.)

Na Lin and Ronghu Chi are with the School of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China (e-mail: linnaqingdao@163.com; ronghu_chi@hotmail.com).

Biao Huang is with the Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 2G6, Canada (e-mail: bhuang@ualberta.ca).

Digital Object Identifier 10.1109/TCYB.2021.3054421

I. INTRODUCTION

POINT-TO-POINT control tasks are common in practical applications to track specified data-target points, instead of tracking all the points along a reference trajectory. For example, a precise stopping index is mainly required at specific train stations for a train stopping control system [1], which is a typical point-to-point (PTP) tracking process. Other examples include satellite antenna pointing [2], robotic pick-and-place tasks [3], etc. In addition, final product quality control [4] only needs to achieve the desired target at the terminal point, and thus it can be considered as a special case of multipoint tracking tasks.

To apply the feedback-based control methods [5]–[7] to PTP tracking tasks, one needs to design a reference trajectory passing through all the specified points, and then to design a control to make the system output track all the points along this trajectory. The disadvantage is that unnecessary tracking requirements are enforced at data points that are not the specified ones. Another deficiency is that the calculation burden becomes larger as a result.

Iterative learning control (ILC) [8] is a feedforward control strategy that updates the controller by using the error information from the previous operation process. In other words, the update of ILC is conducted iteratively from point to point. Therefore, it is possible to extend the ILC methods from trajectory tracking tasks to multipoint tracking tasks such that the controller is updated only using data at the specified points. Recently, many point-to-point ILC (PTPILC) methods [9]–[12] have been developed for multipoint tracking tasks. The energy optimal time allocation is explored in [9] for PTPILC with specified output tracking. Xu et al. [10] and Shen et al. [11] developed PTPILC methods for linear stochastic systems. Chi et al. [12] proposed a data-driven PTPILC using additional control inputs. On the other hand, terminal ILC (TILC) has some limitations as it only tracks the terminal point. Lin et al. [13] proposed an adaptive TILC approach based on the concept of high-order internal models. In [14], a TILC is designed for nonlinear systems based on neural networks. Because only the specified data points are used, the PTPILC method can reduce the number of controller updates, reduce the computational burden, decrease the data communications, and save the control efforts.
For a distributed parametric MAS, a learning consensus scheme is presented in [21]. In addition, several works have introduced TILC methods into the terminal consensus problems of MASs. For example, an estimation-based TILC is designed in [22] for the consensus problem of nonlinear homogeneous MASs. Meng and Jia [23], [24] developed a TILC method for the consensus problem of heterogeneous agents. Bu et al. [25] presented a data-driven TILC for the finite-time consensus problem of linear homogeneous MASs. However, no work has been reported on the optimal consensus problem at multiple specified data points for MASs.

On the other hand, most of the existing learning control consensus protocols [18]–[25] are time driven. In many applications, however, the agents are limited by their communication ability and the allowable number of controller updates, especially when the agents themselves or their internal devices are powered by batteries, or when communication bandwidth and channels are limited [26], which means that the control resources are limited. It is therefore imperative to design consensus protocols with lower energy consumption, less communication transmission, and fewer control triggering instructions.

From this point of view, event-triggered control (ETC) [27]–[33] is a good alternative to fixed-sampling-time control because the controller is updated only when an event is triggered according to a designed event-triggering condition; otherwise, the controller holds with no action. However, in contrast to these 1-D ETC methods [27]–[33], which are designed and conducted along the time direction, few ETC methods have been developed for 2-D repetitive systems [34]–[37], let alone an event-triggered iterative learning consensus approach for MASs. The reason is that it is difficult to derive a proper event-triggering condition due to the coupled 2-D dynamics in both the time and iteration domains, as well as the mutual dynamic influences among the agents. In [36] and [37], two ILC methods with an event-triggered strategy are proposed for MASs. However, to the best of our knowledge, no result has been reported on the development of an event-triggered PTPILC method for the consensus of networked agents.

Inspired by the above analysis, an event-triggered PTPILC (ET-PTPILC) method is presented in this work. It achieves the optimal consensus at the specified points of heterogeneous networked agents with iteration-switching communication topologies. First, a PTP linear data model (PTP-LDM) is proposed for the heterogeneous agents to establish an iterative input–output relationship of the agents at the specified data points between two consecutive iterations. Then, an iterative identification algorithm is proposed for the estimation of the parameter in the PTP-LDM, which is the basis of the subsequent computation of the event-triggering condition. Third, an ET-PTPILC protocol is proposed for the consensus at the specified points, using a Lyapunov function to attain the event-triggering condition. The convergence analysis is provided through rigorous mathematical derivations. In addition, the results are further extended to solving the optimal consensus problems of nonlinear networked agents at the specified data points. Theoretical results are then tested through simulations. The main features of our work are as follows.

1) Compared with traditional PTPILC methods [9]–[12], an event-triggering mechanism of PTPILC is presented for the first time to solve the optimal consensus of the MASs at the specified data points, thus reducing the number of actuator actions and the use of control resources, as well as the communication and computational resources.

2) Both the heterogeneous dynamics of MASs and the iteration-switching topologies are considered to make the proposed ET-PTPILC method more suitable for practical applications.

3) The proposed PTP-LDM is a data model for the purpose of facilitating the controller design and analysis. As a result, the proposed ET-PTPILC is a data-driven method with no requirement of mechanistic information.

The remainder of this article is organized as follows. Section II shows the problem formulation and presents a PTP-LDM through identification. An ET-PTPILC is proposed in Section III with convergence analysis. In Section IV, the results are extended to nonlinear networked agents. Simulations are provided in Section V. Section VI concludes this article.

Preliminaries: Consider an undirected graph G_k with N nodes, where k ∈ {0, 1, . . .} denotes an iteration number, which indicates that the communication topology is iteration switching. The adjacency matrix is Ā_k = (a_{ji,k})_{N×N} with a_{ji,k} = 1 if agent i is connected with agent j, and a_{ji,k} = 0 otherwise. The Laplacian matrix is L_k = (l̄_{ji,k})_{N×N}, where l̄_{ji,k} = −a_{ji,k} for j ≠ i and l̄_{ji,k} = Σ_{i=1}^{N} a_{ji,k} for j = i. In addition, D_k = diag(d_{1,k}, . . . , d_{N,k}) is the leader's adjacency matrix, where d_{i,k} > 0 if agent i is connected to the leader, and d_{i,k} = 0 otherwise. Suppose that the topology graphs are connected and at least one agent can receive information from the leader.

II. PROBLEM FORMULATION AND PTP-LDM IDENTIFICATION

A. Problem Formulation

Consider the heterogeneous linear time-varying (LTV) discrete-time networked agents

x_{k,i}(t + 1) = A_i(t)x_{k,i}(t) + B_i(t)u_{k,i}(t)
y_{k,i}(t) = C_i(t)x_{k,i}(t)    (1)

where i = 1, 2, . . . , N denotes the agent index; the subscript k ∈ {0, 1, . . .} denotes the iteration number; t ∈ {0, 1, . . . , T} is the sampling time with T denoting the terminal time; x_{k,i}(t), y_{k,i}(t), and u_{k,i}(t) are the system state, output, and control input, respectively; and A_i(t), B_i(t), and C_i(t) are unknown but bounded matrices with appropriate dimensions that depend on the corresponding agent, which implies that the dynamics of the agents are heterogeneous.

Our objective is to design an ILC protocol triggered by events such that the output of each follower at the specified time points {t_1, t_2, . . . , t_M}, where t_M ≤ T, can track the leader's outputs y_d(t_1), y_d(t_2), . . . , y_d(t_M) as the iteration number increases, that is, lim_{k→∞}(y_{k,i}(t_m) − y_d(t_m)) = 0, where y_d(t_m) is the leader's output at the t_m-th time instant.
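To make the graph quantities in the preliminaries concrete, the following minimal NumPy sketch builds the adjacency matrix Ā_k, the Laplacian L_k, the leader adjacency matrix D_k, and the combination Λ_k = L_k + D_k used later in the analysis, for a hypothetical four-follower topology; the edge set and leader link are illustrative assumptions and not the topologies of the simulation examples.

```python
import numpy as np

# Hypothetical undirected topology at iteration k: 4 followers with edges (1,2), (2,3), (3,4).
N = 4
edges = [(0, 1), (1, 2), (2, 3)]            # 0-based follower indices (assumed example)
A_bar = np.zeros((N, N))                    # adjacency matrix \bar{A}_k
for i, j in edges:
    A_bar[i, j] = A_bar[j, i] = 1.0         # undirected graph, hence symmetric

L = np.diag(A_bar.sum(axis=1)) - A_bar      # Laplacian L_k = diag(row sums) - \bar{A}_k
D = np.diag([1.0, 0.0, 0.0, 0.0])           # leader adjacency D_k: only follower 1 hears the leader

Lambda_k = L + D                            # Lambda_k = L_k + D_k, used in the consensus analysis
print(np.linalg.eigvalsh(Lambda_k))         # all eigenvalues positive for this connected example
```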
Two assumptions are made on system (1).

Assumption 1: System (1) is completely controllable.

Assumption 2: The initial state of agent i, x_{k,i}(0), is identical at each iteration k, that is, x_{k+1,i}(0) = x_{k,i}(0).

The PTP-LDM of system (1) can be built through the following lemma.

Lemma 1: For the LTV discrete-time MAS (1) satisfying Assumptions 1 and 2, using the state transformation technique and the iterative difference operation, a PTP-LDM of (1) can be written as

Y_{k+1,i} = Y_{k,i} + Φ_i ΔU_{k+1,i}    (2)

where Y_{k,i} = [y_{k,i}(t_1), y_{k,i}(t_2), . . . , y_{k,i}(t_M)]^T ∈ R^{M×1}, U_{k,i} = [u_{k,i}(0), u_{k,i}(1), . . . , u_{k,i}(t_M − 1)]^T ∈ R^{t_M×1}, Δ is the iteration difference operator, that is, ΔU_{k+1,i} = U_{k+1,i} − U_{k,i}, and Φ_i is the M × t_M matrix shown below, whose jth row has its first t_j elements nonzero and the remaining (t_M − t_j) elements equal to 0, with θ_{i,t_m}(p) = C_i(t_m) Π_{τ=p+1}^{t_m−1} A_i(τ) B_i(p), p = 0, 1, . . . , t_m − 1, m = 1, 2, . . . , M:

Φ_i = [ θ_{i,t_1}(0)  θ_{i,t_1}(1)  · · ·  θ_{i,t_1}(t_1 − 1)   0     · · ·  0                    · · ·  · · ·  0
        θ_{i,t_2}(0)  θ_{i,t_2}(1)  · · ·  θ_{i,t_2}(t_1 − 1)  · · ·  · · ·  θ_{i,t_2}(t_2 − 1)    0     · · ·  0
        ⋮             ⋮                    ⋮                                 ⋮                                  ⋮
        θ_{i,t_M}(0)  θ_{i,t_M}(1)  · · ·  θ_{i,t_M}(t_1 − 1)  · · ·  · · ·  θ_{i,t_M}(t_2 − 1)   · · ·  · · ·  θ_{i,t_M}(t_M − 1) ]_{M×t_M}

Proof: Using the state transformation technique, we can derive from (1) that

x_{k,i}(t + 1) = Π_{τ=0}^{t} A_i(τ) x_{k,i}(0) + Σ_{j=0}^{t} Π_{τ=j+1}^{t} A_i(τ) B_i(j) u_{k,i}(j)    (3)

where Π_{τ=t+1}^{t} A_i(τ) = I.

Furthermore, using the iterative difference operation and combining Assumption 2, (1), and (3), it follows that

y_{k+1,i}(t_m) = y_{k,i}(t_m) + C_i(t_m) Σ_{j=0}^{t_m−1} Π_{τ=j+1}^{t_m−1} A_i(τ) B_i(j) Δu_{k+1,i}(j).    (4)

Then, the PTP-LDM (2) is obtained.

B. Parameter Identification of the PTP-LDM

Since A_i(t), B_i(t), and C_i(t) in system (1) are unknown, even though the linear model structure is known, an identification method should be given to estimate the parameter Φ_i in the proposed PTP-LDM (2).

According to (2), we have

Δy_{k+1,i}(t_m) = θ_i(t_m − 1) Δu_{k+1,i}(t_m − 1)    (5)

where θ_i(t_m − 1) = [θ_{i,t_m}(0), θ_{i,t_m}(1), . . . , θ_{i,t_m}(t_m − 1)] and Δu_{k+1,i}(t_m − 1) = [Δu_{k+1,i}(0), Δu_{k+1,i}(1), . . . , Δu_{k+1,i}(t_m − 1)]^T.

To estimate the unknown parameter θ_i(t_m − 1), design the objective function

J(θ̂_{k+1,i}(t_m − 1)) = |Δy_{k,i}(t_m) − θ̂_{k+1,i}(t_m − 1) Δu_{k,i}(t_m − 1)|^2 + μ ‖θ̂_{k+1,i}(t_m − 1) − θ̂_{k,i}(t_m − 1)‖^2    (6)

where μ > 0 is a weighting factor and θ̂_{k,i}(t_m − 1) is the estimate of θ_i(t_m − 1).

Differentiating (6) with respect to θ̂_{k+1,i}(t_m − 1) and setting the result to 0, one can derive the parameter identification algorithm

θ̂_{k+1,i}(t_m − 1) = θ̂_{k,i}(t_m − 1) + η (Δy_{k,i}(t_m) − θ̂_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1)) Δu_{k,i}^T(t_m − 1) / (μ + ‖Δu_{k,i}(t_m − 1)‖^2)    (7)

where 0 < η ≤ 2.

The convergence of the proposed parameter identification algorithm (7) for the PTP-LDM (2) is shown in the following theorem.

Theorem 1: Consider the proposed PTP-LDM (2). If 0 < η ≤ 2 and μ > 0, then applying the parameter identification algorithm (7) guarantees that the parameter estimation error θ̃_{k,i}(t_m − 1) = θ_i(t_m − 1) − θ̂_{k,i}(t_m − 1) converges to zero as the number of iterations increases.

Proof: Subtracting θ_i(t_m − 1) from both sides of (7) and combining (5), one has

θ̃_{k+1,i}(t_m − 1) = θ̃_{k,i}(t_m − 1) − η θ̃_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1) Δu_{k,i}^T(t_m − 1) / (μ + ‖Δu_{k,i}(t_m − 1)‖^2) = θ̃_{k,i}(t_m − 1) γ_{k,i}(t_m − 1)    (8)

where γ_{k,i}(t_m − 1) = I − η Δu_{k,i}(t_m − 1) Δu_{k,i}^T(t_m − 1) / (μ + ‖Δu_{k,i}(t_m − 1)‖^2).

Consider the fact that

‖θ̃_{k,i}(t_m − 1) γ_{k,i}(t_m − 1)‖^2 = ‖θ̃_{k,i}(t_m − 1)‖^2 + η ‖θ̃_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1)‖^2 β_{k,i}(t_m − 1) / (μ + ‖Δu_{k,i}(t_m − 1)‖^2)    (9)

where β_{k,i}(t_m − 1) = −2 + η ‖Δu_{k,i}(t_m − 1)‖^2 / (μ + ‖Δu_{k,i}(t_m − 1)‖^2).

Since 0 < η ≤ 2 and μ > 0, one can derive β_{k,i}(t_m − 1) < 0. So, according to (9), it can be obtained that

‖θ̃_{k,i}(t_m − 1) γ_{k,i}(t_m − 1)‖^2 < ‖θ̃_{k,i}(t_m − 1)‖^2.    (10)

According to (8) and (10), there exists a constant 0 < d < 1 such that

‖θ̃_{k+1,i}(t_m − 1)‖ ≤ d ‖θ̃_{k,i}(t_m − 1)‖ ≤ · · · ≤ d^{k+1} ‖θ̃_{0,i}(t_m − 1)‖    (11)

where θ̃_{0,i}(t_m − 1) is the initial value of θ̃_{k,i}(t_m − 1).

Since A_i(·), B_i(·), and C_i(·) are bounded, θ_i(t_m − 1) is bounded. Note that θ̃_{0,i}(t_m − 1) is bounded because θ_i(t_m − 1) and θ̂_{0,i}(t_m − 1) are bounded. Therefore, from (11), the zero-convergence of θ̃_{k,i}(t_m − 1) is guaranteed along the iteration direction. Furthermore, the boundedness of θ̂_{k,i}(t_m − 1) follows since θ_i(t_m − 1) is bounded.
Theorem 1 guarantees that the identified parameters approach the optimal ones provided that the amount of offline data is sufficient. Accordingly, the subsequent discussions are conducted on the basis of the well-identified PTP-LDM

Y_{k+1,i} = Y_{k,i} + Φ*_i ΔU_{k+1,i}    (12)

where Φ*_i denotes the converged optimal estimate of Φ_i.

III. EVENT-TRIGGERED POINT-TO-POINT ITERATIVE LEARNING CONTROL

A. Event-Triggered PTPILC Design

Define ỹ_{k,i}(t_m) = y_{k,i}(t_m) − y_d(t_m) as the output tracking error at time t_m. We use {k_l}, l = 0, 1, . . . , to represent the sequence of the event-triggered iterations.

The event-triggered PTPILC protocol for the heterogeneous LTV discrete-time MAS (1) is designed as

U_{k+1,i} = U_{k,i} − K_i ζ_{k,k_l,i}    (13)

where

ζ_{k,k_l,i} = [ζ_{k,k_l,i}(t_1), ζ_{k,k_l,i}(t_2), . . . , ζ_{k,k_l,i}(t_M)]^T
ζ_{k,k_l,i}(t_m) = Σ_{j∈ϑ_i} a_{ij,k}(y_{k_l,i}(t_m) − y_{k_l,j}(t_m)) + d_{i,k}(y_{k_l,i}(t_m) − y_d(t_m))

ϑ_i is the set of neighbors of agent i, k ∈ [k_l, k_{l+1}), K_i = K_i 1 is a learning gain matrix where K_i is a scalar constant, 1 = [1_{t_1} 1_{t_2} · · · 1_{t_M}] ∈ R^{t_M×M}, and 1_{t_m}, m = 1, 2, . . . , M, is a t_M-dimensional column vector whose t_m-th element is 1 and whose other elements are 0.

Subtracting Y_d from both sides of (12), where Y_d = [y_d(t_1), y_d(t_2), . . . , y_d(t_M)]^T, we can derive

Ỹ_{k+1,i} = Ỹ_{k,i} + Φ*_i ΔU_{k+1,i}    (14)

where Ỹ_{k,i} = [ỹ_{k,i}(t_1), . . . , ỹ_{k,i}(t_M)]^T. Furthermore, denote

Ỹ̄_k = [Ỹ_{k,1}^T, Ỹ_{k,2}^T, . . . , Ỹ_{k,N}^T]^T ∈ R^{MN×1}
Φ̄ = diag(Φ*_1, Φ*_2, . . . , Φ*_N) ∈ R^{MN×Nt_M}
Ū_k = [U_{k,1}^T, U_{k,2}^T, . . . , U_{k,N}^T]^T ∈ R^{Nt_M×1}.

Combining (14), we can obtain

Ỹ̄_{k+1} = Ỹ̄_k + Φ̄ ΔŪ_{k+1}.    (15)

Denote ζ̄_{k,k_l} = [ζ_{k,k_l,1}^T, ζ_{k,k_l,2}^T, . . . , ζ_{k,k_l,N}^T]^T ∈ R^{MN×1}. According to the designed control protocol (13), we have

ΔŪ_{k+1} = −K̄ ζ̄_{k,k_l}    (16)

where K̄ = diag(K_1, K_2, . . . , K_N) ∈ R^{Nt_M×MN}.

According to the definition of the tracking error, we have

ζ_{k,k_l,i}(t_m) = Σ_{j∈ϑ_i} a_{ij,k}(ỹ_{k_l,i}(t_m) − ỹ_{k_l,j}(t_m)) + d_{i,k} ỹ_{k_l,i}(t_m).    (17)

From (17), the following equation holds:

ζ̄_{k,k_l} = (Λ_k ⊗ I_{M×M}) Ỹ̄_{k_l}    (18)

where Λ_k = L_k + D_k.

From (15), (16), and (18), we derive

Ỹ̄_{k+1} = Ỹ̄_k − Γ_k Ỹ̄_{k_l}    (19)

where Γ_k = Φ̄ K̄ (Λ_k ⊗ I_{M×M}).

Denote e_{k,i}(t_m) = ỹ_{k_l,i}(t_m) − ỹ_{k,i}(t_m), k ∈ [k_l, k_{l+1}), m ∈ {1, 2, . . . , M}, as the event-triggered error, which is the difference between the output tracking error of the latest event-triggered iteration and the current output tracking error. Then, one has

ỹ_{k_l,i}(t_m) = e_{k,i}(t_m) + ỹ_{k,i}(t_m).    (20)

Define

e_{k,i} = [e_{k,i}(t_1), e_{k,i}(t_2), . . . , e_{k,i}(t_M)]^T ∈ R^{M×1}
ē_k = [e_{k,1}^T, e_{k,2}^T, . . . , e_{k,N}^T]^T ∈ R^{MN×1}.

From (20), one obtains

Ỹ̄_{k_l} = ē_k + Ỹ̄_k.    (21)

Inserting (21) into (19) results in

Ỹ̄_{k+1} = Ỹ̄_k − Γ_k(ē_k + Ỹ̄_k) = (I − Γ_k)Ỹ̄_k − Γ_k ē_k.    (22)

Define the Lyapunov function V_{k+1} = Ỹ̄_{k+1}^T Ỹ̄_{k+1}. Then, the difference form of the Lyapunov function is

ΔV_{k+1} = Ỹ̄_{k+1}^T Ỹ̄_{k+1} − Ỹ̄_k^T Ỹ̄_k
= ((I − Γ_k)Ỹ̄_k − Γ_k ē_k)^T ((I − Γ_k)Ỹ̄_k − Γ_k ē_k) − Ỹ̄_k^T Ỹ̄_k
= Ỹ̄_k^T ((I − Γ_k)^T(I − Γ_k) − I) Ỹ̄_k − 2 Ỹ̄_k^T (I − Γ_k)^T Γ_k ē_k + ē_k^T Γ_k^T Γ_k ē_k.    (23)
Let Q_k = I − (I − Γ_k)^T(I − Γ_k). Since the topology graph considered is undirected, the matrix Λ_k is guaranteed to be symmetric positive definite. So Q_k is a symmetric positive-definite matrix as long as ‖I − Γ_k‖ ≤ ρ < 1, where ρ = max_k{‖I − Γ_k‖}.

Using the norm bound property in (23), we have

ΔV_{k+1} ≤ −λ_min(Q_k) ‖Ỹ̄_k‖^2 + 2 ‖I − Γ_k‖ ‖Γ_k‖ ‖Ỹ̄_k‖ ‖ē_k‖ + ‖Γ_k‖^2 ‖ē_k‖^2    (24)

where λ_min(Q_k) is the minimum eigenvalue of the matrix Q_k.

Using Young's inequality, that is, 2ab ≤ (1/c)a^2 + cb^2, c > 0, for the second term in (24) yields

ΔV_{k+1} ≤ −(λ_min(Q_k)/2) ‖Ỹ̄_k‖^2 + (2/λ_min(Q_k)) ‖I − Γ_k‖^2 ‖Γ_k‖^2 ‖ē_k‖^2 + ‖Γ_k‖^2 ‖ē_k‖^2.    (25)

Letting ΔV_{k+1} ≤ 0, it follows that

‖ē_k‖^2 ≤ α_k ‖Ỹ̄_k‖^2    (26)

where α_k = λ_min^2(Q_k) / (4‖I − Γ_k‖^2 ‖Γ_k‖^2 + 2λ_min(Q_k)‖Γ_k‖^2).

According to (26), the triggering condition to be designed only needs to ensure that the trigger does not start when the following inequality is satisfied:

‖ē_k‖^2 ≤ σ α_k ‖Ỹ̄_k‖^2    (27)

where 0 < σ < 1 is a constant.

According to (17) and (18), we have

ζ̄_k = (Λ_k ⊗ I_{M×M}) Ỹ̄_k    (28)

where ζ̄_k = [ζ_{k,1}^T, ζ_{k,2}^T, . . . , ζ_{k,N}^T]^T ∈ R^{MN×1}, ζ_{k,i} = [ζ_{k,i}(t_1), ζ_{k,i}(t_2), . . . , ζ_{k,i}(t_M)]^T, and ζ_{k,i}(t_m) = Σ_{j∈ϑ_i} a_{ij,k}(y_{k,i}(t_m) − y_{k,j}(t_m)) + d_{i,k}(y_{k,i}(t_m) − y_d(t_m)).

Furthermore, we can obtain

Ỹ̄_k = G_k ζ̄_k    (29)

where G_k = (Λ_k ⊗ I_{M×M})^{−1}.

Since (27) cannot be used directly as an event-triggering condition, according to (27) and (29), the event-triggering condition can be designed as

‖e_{k,i}‖^2 > σ α_k ‖G_k‖^2 ‖ζ_{k,i}‖^2    (30)

which means that an event is triggered as long as inequality (30) is satisfied.

In summary, the presented ET-PTPILC approach for heterogeneous LTV discrete-time MASs with iteration-switching topology is given as follows:

θ̂_{k+1,i}(t_m − 1) = θ̂_{k,i}(t_m − 1) + η (Δy_{k,i}(t_m) − θ̂_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1)) Δu_{k,i}^T(t_m − 1) / (μ + ‖Δu_{k,i}(t_m − 1)‖^2)    (31)

U_{k+1,i} = U_{k,i} − K_i ζ_{k,k_l,i}    (32)

k + 1 = k_{l+1}, l = 0, 1, . . . , if ‖e_{k,i}‖^2 > σ α_k ‖G_k‖^2 ‖ζ_{k,i}‖^2    (33)

where k ∈ [k_l, k_{l+1}).

Remark 1: Compared with traditional PTPILC methods [9]–[12], the proposed ET-PTPILC (31)–(33) not only achieves the consensus of the heterogeneous MAS with switching topology but also has an event-triggering mechanism to update the controller only when an event occurs, thus reducing the number of actuator actions and the use of control resources.

Remark 2: It is difficult to extend the proposed method to MASs with directed topology because the adjacency matrix of a directed topological graph cannot be guaranteed to be symmetric positive definite, which is a fundamental condition to attain the event-triggering condition.

B. Convergence Analysis

The convergence of the proposed ET-PTPILC (31)–(33) is shown in the following theorem.

Theorem 2: Consider the LTV discrete-time MAS (1) with an iteration-switching topology under Assumptions 1 and 2. If the following condition is satisfied:

ω_{1k} < ‖I − Γ_k‖ < min(ω_{2k}, ρ)    (34)

where ω_{1k} = 1/2 − √(4 − 8√(2σ) λ_min(Q_k))/4 and ω_{2k} = 1/2 + √(4 − 8√(2σ) λ_min(Q_k))/4, then the proposed ET-PTPILC (31)–(33) guarantees that the output tracking errors ỹ_{k,i}(t_m) at the specified time points t_1, t_2, . . . , t_M converge to zero as the number of iterations increases.

Proof: Taking the norm on both sides of (22) yields

‖Ỹ̄_{k+1}‖ ≤ ‖I − Γ_k‖ ‖Ỹ̄_k‖ + ‖Γ_k‖ ‖ē_k‖.    (35)

When k = k_l, ‖ē_k‖ = 0. Then, (35) becomes

‖Ỹ̄_{k+1}‖ ≤ ‖I − Γ_k‖ ‖Ỹ̄_k‖.    (36)

Due to max_k{‖I − Γ_k‖} = ρ, from (36) one obtains

‖Ỹ̄_{k+1}‖ ≤ ρ ‖Ỹ̄_k‖ ≤ · · · ≤ ρ^{k+1} ‖Ỹ̄_0‖.    (37)

Since the initial tracking error Ỹ_{0,i} of every agent is bounded, Ỹ̄_0 is also bounded. Thus, according to (37) and ‖I − Γ_k‖ ≤ ρ < 1, it is derived that lim_{k→∞} ‖Ỹ̄_{k+1}‖ = 0. Then, the convergence of the tracking error is obtained, that is, lim_{k→∞} |ỹ_{k,i}(t_m)| = 0.

When k ∈ (k_l, k_{l+1}), that is, when the event-triggering condition is not satisfied, (27) holds. Thus, we have from (27) and (35) that

‖Ỹ̄_{k+1}‖ ≤ ‖I − Γ_k‖ ‖Ỹ̄_k‖ + √(σ α_k) ‖Γ_k‖ ‖Ỹ̄_k‖
= (‖I − Γ_k‖ + √σ λ_min(Q_k) / √(4‖I − Γ_k‖^2 + 2λ_min(Q_k))) ‖Ỹ̄_k‖.    (38)

Due to the Cauchy–Schwarz inequality, that is, (a + b)^2 ≤ 2(a^2 + b^2), we have

√(4‖I − Γ_k‖^2 + 2λ_min(Q_k)) ≥ √2 ‖I − Γ_k‖ + √(λ_min(Q_k)).    (39)

Then, from (38) and (39), we obtain

‖Ỹ̄_{k+1}‖ ≤ δ_k ‖Ỹ̄_k‖    (40)

where δ_k = ‖I − Γ_k‖ + √σ λ_min(Q_k) / (√2 ‖I − Γ_k‖ + √(λ_min(Q_k))).

Due to λ_min(Q_k) > 0, we can obtain

δ_k = ‖I − Γ_k‖ + √σ / (√2 λ_min^{−1}(Q_k)‖I − Γ_k‖ + λ_min^{−1/2}(Q_k))
≤ ‖I − Γ_k‖ + √σ / (√2 λ_min^{−1}(Q_k)‖I − Γ_k‖)
= (√2 λ_min^{−1}(Q_k)‖I − Γ_k‖^2 + √σ) / (√2 λ_min^{−1}(Q_k)‖I − Γ_k‖).    (41)

By selecting appropriate K_i such that ω_{1k} < ‖I − Γ_k‖ < min(ω_{2k}, ρ), where ω_{1k} = 1/2 − √(4 − 8√(2σ) λ_min(Q_k))/4 and ω_{2k} = 1/2 + √(4 − 8√(2σ) λ_min(Q_k))/4 are the two real roots of the equation

√2 λ_min^{−1}(Q_k)‖I − Γ_k‖^2 + √σ − √2 λ_min^{−1}(Q_k)‖I − Γ_k‖ = 0    (42)

the following inequality can be guaranteed:

δ_k ≤ (√2 λ_min^{−1}(Q_k)‖I − Γ_k‖^2 + √σ) / (√2 λ_min^{−1}(Q_k)‖I − Γ_k‖) < χ < 1    (43)

where 0 < χ < 1 is a constant.

From (40) and (43), we can derive

‖Ỹ̄_{k+1}‖ ≤ χ ‖Ỹ̄_k‖ ≤ · · · ≤ χ^{k+1} ‖Ỹ̄_0‖.    (44)

Due to the boundedness of the initial tracking error Ỹ_{0,i}, inequality (44) implies that lim_{k→∞} ‖Ỹ̄_k‖ = 0. Thus, lim_{k→∞} |ỹ_{k,i}(t_m)| = 0 is derived directly.

As a result of the two cases discussed above, the convergence of the tracking error ỹ_{k,i}(t_m) is proved.
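The following sketch illustrates one literal reading of the summarized protocol: the triggering test (33) is evaluated from the current event-triggered error and distributed error, and the learning update (32) reuses the distributed error held from the latest triggered iteration. The function names and the way the held error is refreshed in a surrounding simulation loop are assumptions for illustration, not part of the paper.

```python
import numpy as np

def trigger_fired(e_ki, zeta_ki, alpha_k, G_k, sigma=0.1):
    """Event-triggering test (33) for agent i at iteration k."""
    lhs = np.dot(e_ki, e_ki)                                           # ||e_{k,i}||^2
    rhs = sigma * alpha_k * np.linalg.norm(G_k, 2)**2 * np.dot(zeta_ki, zeta_ki)
    return lhs > rhs                                                   # True => k+1 becomes the next trigger iteration

def control_update(U_i, K_i, zeta_held_i):
    """Learning update (32) using the distributed error held from the latest trigger iteration k_l."""
    return U_i - K_i @ zeta_held_i                                     # U_{k+1,i} = U_{k,i} - K_i zeta_{k,k_l,i}
```

In a full simulation loop, the held error zeta_held_i would be refreshed from the outputs measured at the trigger iteration, while the parameter estimate is advanced by (31) exactly as in the earlier identification sketch.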
IV. GENERALIZATION TO NONLINEAR NETWORKED AGENTS

A. Problem Formulation

Consider a heterogeneous nonlinear nonaffine discrete-time MAS

y_{k,i}(t + 1) = f_i(y_{k,i}(t), . . . , y_{k,i}(t − n_y), u_{k,i}(t), . . . , u_{k,i}(t − n_u))    (45)

where n_y and n_u are the system orders and f_i(·) denotes the heterogeneous nonlinear dynamics of each agent i.

Two assumptions are made on the nonlinear MAS (45).

Assumption 3: The partial derivatives ∂f_i/∂u_{k,i} exist and are nonzero, continuous, and bounded for all iterations, time instants, and agents.

Assumption 4: The initial value of agent i is identical at each iteration k, that is, y_{k+1,i}(0) = y_{k,i}(0) = c_i, where c_i is a constant.

Then, the PTP-LDM of the nonlinear MAS (45) can be derived as follows.

Lemma 2: Under Assumptions 3 and 4, the PTP-LDM of the nonlinear discrete-time MAS (45) at the specified data points can be derived as

Y_{k+1,i} = Y_{k,i} + Ξ_{k+1,i} ΔU_{k+1,i}    (46)

where Ξ_{k,i} is a pseudo-Jacobian matrix, shown below, whose representation is the same as that of the matrix Φ_i, that is, for row j, the first t_j elements are nonzero and the remaining (t_M − t_j) elements are 0:

Ξ_{k,i} = [ φ_{k,i,t_1}(0)  φ_{k,i,t_1}(1)  · · ·  φ_{k,i,t_1}(t_1 − 1)   0     · · ·  0                      · · ·  · · ·  0
            φ_{k,i,t_2}(0)  φ_{k,i,t_2}(1)  · · ·  φ_{k,i,t_2}(t_1 − 1)  · · ·  · · ·  φ_{k,i,t_2}(t_2 − 1)    0     · · ·  0
            ⋮               ⋮                      ⋮                                   ⋮                                    ⋮
            φ_{k,i,t_M}(0)  φ_{k,i,t_M}(1)  · · ·  φ_{k,i,t_M}(t_1 − 1)  · · ·  · · ·  φ_{k,i,t_M}(t_2 − 1)   · · ·  · · ·  φ_{k,i,t_M}(t_M − 1) ]_{M×t_M}

Proof: According to (45), for t = 0,

y_{k,i}(1) = f_i(y_{k,i}(0), u_{k,i}(0)) = g_{i,0}(y_{k,i}(0), u_{k,i}(0))    (47)

where g_{i,0}(·) is a proper nonlinear function.

For t = 1,

y_{k,i}(2) = f_i(y_{k,i}(1), y_{k,i}(0), u_{k,i}(1), u_{k,i}(0)) = f_i(g_{i,0}(y_{k,i}(0), u_{k,i}(0)), y_{k,i}(0), u_{k,i}(1), u_{k,i}(0)) = g_{i,1}(y_{k,i}(0), u_{k,i}(1), u_{k,i}(0))    (48)

where g_{i,1}(·) is a proper nonlinear function.

Continuing this procedure for t,

y_{k,i}(t + 1) = f_i(y_{k,i}(t), . . . , y_{k,i}(0), u_{k,i}(t), . . . , u_{k,i}(0)) = f_i(g_{i,t−1}(y_{k,i}(0), u_{k,i}(t − 1), . . . , u_{k,i}(0)), . . . , y_{k,i}(0), u_{k,i}(t), . . . , u_{k,i}(0)) = g_{i,t}(y_{k,i}(0), u_{k,i}(t), . . . , u_{k,i}(0))    (49)

where g_{i,t}(·) is a proper nonlinear function.

Denote u_{k,i}(t_m − 1) = [u_{k,i}(0), u_{k,i}(1), . . . , u_{k,i}(t_m − 1)]^T. Therefore, the outputs at the specified data points can be reformulated as

y_{k,i}(t_m) = g_{i,t_m−1}(y_{k,i}(0), u_{k,i}(t_m − 1))    (50)

where g_{i,t_m−1}(·) is a proper nonlinear function.

By nature, g_{i,t_m−1}(·) is a compound function of f_i(·), which means that the nonlinear function g_{i,t_m−1}(·) also satisfies Assumption 3. Therefore, in view of Assumption 3 and (50), we can derive

y_{k+1,i}(t_m) − y_{k,i}(t_m) = ∂g_{i,t_m−1}(·)/∂y_{k+1,i}(0) Δy_{k+1,i}(0) + ∂g_{i,t_m−1}(·)/∂u_{k+1,i}(t_m − 1) Δu_{k+1,i}(t_m − 1).    (51)

Furthermore, according to Assumption 4 and (51), the following equation can be obtained:

y_{k+1,i}(t_m) − y_{k,i}(t_m) = φ_{k+1,i}(t_m − 1) Δu_{k+1,i}(t_m − 1)    (52)

where φ_{k+1,i}(t_m − 1) = ∂g_{i,t_m−1}(·)/∂u_{k,i}(t_m − 1) = [φ_{k+1,i,t_m}(0), φ_{k+1,i,t_m}(1), . . . , φ_{k+1,i,t_m}(t_m − 1)], m = 1, 2, . . . , M.

Then, the PTP-LDM (46) can be obtained according to (52).
B. Parameter Estimation

Following steps similar to (6) and (7), the parameter estimation algorithm for the PTP-LDM (46) is obtained as follows:

φ̂_{k+1,i}(t_m − 1) = φ̂_{k,i}(t_m − 1) + η_f (Δy_{k,i}(t_m) − φ̂_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1)) Δu_{k,i}^T(t_m − 1) / (μ_f + ‖Δu_{k,i}(t_m − 1)‖^2)    (53)

where 0 < η_f ≤ 2 and μ_f > 0.

The convergence of the proposed parameter identification algorithm (53) is shown as follows.

Theorem 3: For the proposed PTP-LDM (46) under Assumptions 3 and 4, if 0 < η_f ≤ 2 and μ_f > 0, then applying the parameter estimation algorithm (53) guarantees that the estimation error φ̃_{k,i}(t_m − 1) = φ_{k,i}(t_m − 1) − φ̂_{k,i}(t_m − 1) converges as the number of iterations increases.

Proof: Subtracting φ_{k+1,i}(t_m − 1) from both sides of (53) and combining (52), one has

φ̃_{k+1,i}(t_m − 1) = φ̃_{k,i}(t_m − 1) − η_f φ̃_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1) Δu_{k,i}^T(t_m − 1) / (μ_f + ‖Δu_{k,i}(t_m − 1)‖^2) + φ_{k+1,i}(t_m − 1) − φ_{k,i}(t_m − 1).    (54)

According to [38], the parameter φ_{k+1,i}(t_m − 1) is not sensitive to iteration-varying factors, which implies φ_{k+1,i}(t_m − 1) = φ_{k,i}(t_m − 1). Therefore, (54) becomes

φ̃_{k+1,i}(t_m − 1) = φ̃_{k,i}(t_m − 1) ν_{k,i}(t_m − 1)    (55)

where ν_{k,i}(t_m − 1) = I − η_f Δu_{k,i}(t_m − 1) Δu_{k,i}^T(t_m − 1) / (μ_f + ‖Δu_{k,i}(t_m − 1)‖^2).

Following steps similar to (9)–(11), one obtains

‖φ̃_{k+1,i}(t_m − 1)‖ ≤ d_f ‖φ̃_{k,i}(t_m − 1)‖ ≤ · · · ≤ d_f^{k+1} ‖φ̃_{0,i}(t_m − 1)‖    (56)

where 0 < d_f < 1 and φ̃_{0,i}(t_m − 1) is the initial value of φ̃_{k,i}(t_m − 1).

Then, from (56), the convergence of the parameter estimation error is achieved. Therefore, without loss of generality, we use φ*_i and Ξ*_i to denote the optimal estimates of φ_{k,i} and Ξ_{k,i} in the subsequent analysis.

C. ET-PTPILC Design and Convergence Analysis

Similarly, the proposed ET-PTPILC approach for heterogeneous nonlinear nonaffine discrete-time MASs with an iteration-switching topology is constructed as follows:

φ̂_{k+1,i}(t_m − 1) = φ̂_{k,i}(t_m − 1) + η_f (Δy_{k,i}(t_m) − φ̂_{k,i}(t_m − 1) Δu_{k,i}(t_m − 1)) Δu_{k,i}^T(t_m − 1) / (μ_f + ‖Δu_{k,i}(t_m − 1)‖^2)    (57)

U_{k+1,i} = U_{k,i} − K_i ζ_{k,k_l,i}    (58)

k + 1 = k_{l+1}, l = 0, 1, . . . , if ‖e_{k,i}‖^2 > σ_f ᾱ_k ‖G_k‖^2 ‖ζ_{k,i}‖^2    (59)

where 0 < σ_f < 1, ᾱ_k = λ_min^2(Q̄_k) / (4‖I − Γ̄_k‖^2 ‖Γ̄_k‖^2 + 2λ_min(Q̄_k)‖Γ̄_k‖^2), Q̄_k = I − (I − Γ̄_k)^T(I − Γ̄_k), Γ̄_k = Φ̄* K̄ (Λ_k ⊗ I_{M×M}), Φ̄* = diag(Ξ*_1, Ξ*_2, . . . , Ξ*_N) ∈ R^{MN×Nt_M}, and k ∈ [k_l, k_{l+1}).

Theorem 4: Consider the nonlinear MAS (45) with an iteration-switching topology under Assumptions 3 and 4. If the following condition is satisfied:

ω̄_{1k} < ‖I − Γ̄_k‖ < min(ω̄_{2k}, ρ̄)    (60)

where ω̄_{1k} = 1/2 − √(4 − 8√(2σ_f) λ_min(Q̄_k))/4, ω̄_{2k} = 1/2 + √(4 − 8√(2σ_f) λ_min(Q̄_k))/4, and ‖I − Γ̄_k‖ ≤ ρ̄ < 1, then the proposed ET-PTPILC (57)–(59) guarantees that the output tracking errors at the specified data points of each agent converge to zero as the number of iterations increases.

Proof: The proof is similar to that of Theorem 2 and is omitted here.

V. SIMULATION

Example 1: The heterogeneous LTV discrete-time networked agents (1) are considered with

A_i(t) = [ 0.2 exp(−t/100)  −0.6  0
           0                 a_i   b_i
           0                 0     0.7 ]
B_i(t) = [0  0.3 sin t  c_i]^T
C_i(t) = [0  d_i  1 + 0.1 cos t],   i ∈ {1, 2, 3, 4}

where a_1 = 0.4, b_1 = 0.4t, c_1 = 0.4, d_1 = 0.2, a_2 = 0.1, b_2 = cos(t), c_2 = 0.7, d_2 = 0.5, a_3 = 0.5, b_3 = sin(t), c_3 = 0.7, d_3 = 0.7, a_4 = 0.8, b_4 = sin(2t), c_4 = 1, d_4 = 0.5, and t ∈ {0, 1, . . . , 50}.

In this simulation, the iteration-switching topology is shown in Fig. 1. The followers are denoted as 1–4, and the leader is denoted as 0. The topologies switch according to a stochastic iteration-varying sequence ψ_k taking the values 1, 2, and 3: when ψ_k = 1, G_k = G1; when ψ_k = 2, G_k = G2; and when ψ_k = 3, G_k = G3.
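For reproducing Example 1, the sketch below builds the time-varying matrices of agent 1 and rolls the state equation (1) forward for one iteration under a given input sequence; the random input is a placeholder, and the remaining agents follow by substituting their a_i, b_i, c_i, d_i.

```python
import numpy as np

T = 50
a1, c1, d1 = 0.4, 0.4, 0.2            # agent-1 constants; b1 = 0.4 t is time varying

def A1(t): return np.array([[0.2*np.exp(-t/100), -0.6, 0.0],
                            [0.0, a1, 0.4*t],
                            [0.0, 0.0, 0.7]])
def B1(t): return np.array([0.0, 0.3*np.sin(t), c1])
def C1(t): return np.array([0.0, d1, 1 + 0.1*np.cos(t)])

def run_iteration(u):
    """Simulate one iteration of agent 1 under the input sequence u (length T)."""
    x = np.zeros(3)                   # identical initial state at every iteration (Assumption 2)
    y = np.zeros(T + 1)
    for t in range(T):
        y[t] = C1(t) @ x
        x = A1(t) @ x + B1(t) * u[t]  # x_{k,1}(t+1) = A_1(t) x_{k,1}(t) + B_1(t) u_{k,1}(t)
    y[T] = C1(T) @ x
    return y

y = run_iteration(np.random.default_rng(1).normal(size=T))
print(y[[6, 10, 15, 32, 48]])         # outputs at the specified time points of Example 1
```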
Fig. 1. Iteration-switching communication topology in Example 1.
Fig. 2. Relative parameter estimation error in Example 1.
Fig. 3. Random switching sequence of the topology in Example 1.
Fig. 4. Values of ω_{1k}, ω_{2k}, ρ, and ‖I − Γ_k‖ in Example 1.

In this simulation, the leader's trajectory is y_d(t) = 0.5 sin(tπ/100), t ∈ {0, 1, . . . , 50}. Each follower needs to track the specified points y_d(6), y_d(10), y_d(15), y_d(32), and y_d(48) of the leader's output trajectory at time instants 6, 10, 15, 32, and 48, respectively.

First, the parameter identification algorithm (31) is used to identify the unknown parameter θ_i(t_m − 1). Select 1000 data points, where the input signal is generated by a random sequence with mean 0 and variance 1. Select the parameters η = 0.4 and μ = 0.1, and the initial values θ̂_{0,i}(t_m − 1) = 0.01E_{t_m}, where E_{t_m} represents a t_m-dimensional all-one column vector. The relative parameter estimation error is defined as ε_{k,i} = ‖Φ̂_{k,i} − Φ°_i‖ / ‖Φ°_i‖, where

Φ°_i = [ θ_i(5)   0_{42}
         θ_i(9)   0_{38}
         θ_i(14)  0_{33}
         θ_i(31)  0_{16}
         θ_i(47)  0_0 ]
and
Φ̂_{k,i} = [ θ̂_{k,i}(5)   0_{42}
             θ̂_{k,i}(9)   0_{38}
             θ̂_{k,i}(14)  0_{33}
             θ̂_{k,i}(31)  0_{16}
             θ̂_{k,i}(47)  0_0 ]

where 0_j, j = 42, 38, 33, 16, 0, represents a j-dimensional zero row vector. The relative parameter estimation error of each agent is shown in Fig. 2. From Fig. 2, we can observe that, through the training of 1000 data points, the identified parameters θ̂_{1000,i}(t_m − 1) approach the true θ_i(t_m − 1).

In the following, the identified parameters of the 1000th iteration are used in the calculation of the event-triggering conditions. According to the identification results, set θ_i(t_m − 1) = θ̂_{1000,i}(t_m − 1), i = 1, 2, 3, 4. The random switching sequence of the topology is shown in Fig. 3. As can be seen from Fig. 3, the switching sequence changes constantly among 1, 2, and 3. The initial conditions are x_{k,i}(0) = 0 and u_{0,i}(t) = 0, i = 1, 2, 3, 4. The controller parameters are selected as K_1 = 0.03, K_2 = 0.008, K_3 = 0.02, K_4 = 0.02, and σ = 0.1.

By calculation, ρ = 0.9995, and the values of ω_{1k}, ω_{2k}, and ‖I − Γ_k‖ are shown in Fig. 4, which illustrates that the convergence condition ω_{1k} < ‖I − Γ_k‖ < min(ω_{2k}, ρ) in Theorem 2 is satisfied.

Applying the proposed ET-PTPILC method (31)–(33), the simulation results are shown in Figs. 5 and 6. The output of each agent is shown in Fig. 5. It can be seen from Fig. 5 that the output of each agent can track the leader's output at the specified time instants 6, 10, 15, 32, and 48, respectively. Fig. 6 shows the event-triggered iterations of each agent. It is obvious that the number of triggered iterations is reduced.
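The relative estimation error ε_{k,i} plotted in Fig. 2 is simple to evaluate once the stacked matrices Φ°_i and Φ̂_{k,i} are formed; the sketch below assumes the Frobenius norm and fills the row vectors with random stand-ins purely to show the bookkeeping.

```python
import numpy as np

def stack_rows(rows, width):
    """Pad each row vector theta(t_m - 1) with zeros up to t_M = width and stack them (Phi structure)."""
    return np.vstack([np.pad(r, (0, width - r.size)) for r in rows])

t_points = [6, 10, 15, 32, 48]                               # specified time instants of Example 1
rng = np.random.default_rng(0)
theta_true = [rng.normal(size=tm) for tm in t_points]        # stand-ins for theta_i(t_m - 1)
theta_hat = [th + 0.01 * rng.normal(size=th.size) for th in theta_true]

Phi_true = stack_rows(theta_true, max(t_points))
Phi_hat = stack_rows(theta_hat, max(t_points))
eps = np.linalg.norm(Phi_hat - Phi_true) / np.linalg.norm(Phi_true)   # epsilon_{k,i}
print(eps)
```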
Fig. 5. Output of each agent in Example 1.
Fig. 6. Event-triggered iterations of agents 1–4 in Example 1.

TABLE I. TOTAL NUMBER OF TRIGGERED ITERATIONS PER AGENT IN EXAMPLE 1

Furthermore, Table I shows the total number of triggered iterations for each agent. One can see from Table I that the number of triggered iterations is greatly reduced from the total of 600.

Example 2: Consider a nonlinear MAS with four follower agents. The heterogeneous dynamics of each agent are

Agent 1: y_1(t + 1) = y_1(t)u_1(t)/(1 + y_1^2(t)) + u_1(t)
Agent 2: y_2(t + 1) = y_2(t)u_2(t)/(1 + y_2^4(t)) + 2u_2(t)
Agent 3: y_3(t + 1) = y_3(t)u_3(t)/(1 + y_3^2(t)) + 0.9u_3(t)
Agent 4: y_4(t + 1) = y_4(t)u_4(t)/(1 + y_4^4(t)) + 0.8u_4(t).

Fig. 7. Iteration-switching communication topology in Example 2.
Fig. 8. Output estimation error of all agents in Example 2.

The topology varies randomly along the iteration direction with four states, as shown in Fig. 7. A stochastic iteration-varying sequence ς_k is assumed to take the values 1, 2, 3, and 4 randomly. When ς_k = 1, G_k = G1; when ς_k = 2, G_k = G2; when ς_k = 3, G_k = G3; and when ς_k = 4, G_k = G4.

In the simulation, the control goal is to make each follower track the specified points y_d(20), y_d(30), y_d(50), and y_d(100) of the leader's output trajectory at time instants 20, 30, 50, and 100, where y_d(t) = 0.5 sin(tπ/60), t ∈ {0, 1, . . . , 100}.

First, the unknown parameter φ_{k,i}(t_m − 1) is identified by selecting a signal with 1000 data points, whose mean value is 0 and standard deviation is 1, as the input sequence. In the identification process, the parameters are set as η_f = 0.1, μ_f = 0.01, and φ̂_{0,i}(t_m − 1) = 0.01E_{t_m}. Define an output estimation error ε^o_{k,i} = Σ_j |ŷ_{k,i}(j) − y_{k,i}(j)|^2 / 4, j = 20, 30, 50, 100, where ŷ_{k,i}(j) = ŷ_{k−1,i}(j) + φ̂_{k,i}(j − 1)Δu_{k,i}(j − 1) is the estimate of the output. The identification results are shown in Fig. 8, which illustrates that the output estimation error of each agent converges with the increase of iterations.

On this basis, set φ_{k,i}(t_m − 1) = φ̂_{1000,i}(t_m − 1), t_m = 20, 30, 50, 100, m = 1, 2, 3, 4, and i = 1, 2, 3, 4. The initial values are y_{k,i}(0) = 0 and u_{0,i}(t) = 0, i = 1, 2, 3, 4. The controller parameters are selected as K_1 = 0.03, K_2 = 0.01, K_3 = 0.1, K_4 = 0.04, and σ_f = 0.1.
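A minimal sketch of the Example 2 follower dynamics is given next; it rolls each nonaffine map forward for one iteration under an input sequence, which is all that is needed to generate the I/O data consumed by the estimation algorithm (53). The random input is a placeholder.

```python
import numpy as np

# Example 2 followers: y_i(t+1) = y_i(t)u_i(t)/(1 + y_i(t)**p_i) + g_i*u_i(t)
AGENTS = {1: (2, 1.0), 2: (4, 2.0), 3: (2, 0.9), 4: (4, 0.8)}   # i -> (power p_i, input gain g_i)

def run_iteration(i, u, y0=0.0):
    """Simulate follower i of Example 2 for one iteration under the input sequence u."""
    p, g = AGENTS[i]
    y = np.zeros(len(u) + 1)
    y[0] = y0                                        # identical initial output each iteration (Assumption 4)
    for t in range(len(u)):
        y[t + 1] = y[t] * u[t] / (1 + y[t] ** p) + g * u[t]
    return y

u = np.random.default_rng(2).normal(size=100)        # T = 100 as in Example 2
y = run_iteration(1, u)
print(y[[20, 30, 50, 100]])                          # outputs at the specified data points
```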
Fig. 9. Random switching sequence of the topology in Example 2.
Fig. 10. Output of each agent in Example 2.

TABLE II. TOTAL NUMBER OF TRIGGERED ITERATIONS PER AGENT IN EXAMPLE 2

In this simulation example, the random switching sequence of the topology is shown in Fig. 9. We can see that the switching sequence varies randomly among the four numbers 1, 2, 3, and 4. By calculation, ρ̄ = 0.9998, and the convergence condition ω̄_{1k} < ‖I − Γ̄_k‖ < min(ω̄_{2k}, ρ̄) in (60) is satisfied.

Applying the proposed ET-PTPILC protocol (57)–(59), the optimal consensus with specified data points for the heterogeneous agents under iteration-switching topologies is shown in Fig. 10, and the number of triggered iterations of every agent is shown in Table II. One sees that a perfect output consensus at the specified data points can be achieved even though the number of triggered iterations has been significantly reduced from the total of 600 iterations.

VI. CONCLUSION

In this work, a design framework for ET-PTPILC is proposed for the optimal consensus with specified data points of heterogeneous MASs under iteration-switching communication topologies while saving control resources. A PTP-LDM is proposed for the purpose of facilitating the controller design and analysis. The proposed ET-PTPILC method is data driven and can be applied to LTV and general nonlinear MASs. Convergence analysis and numerical simulations demonstrate the validity of the proposed ET-PTPILC.

REFERENCES

[1] Z. S. Hou, Y. Wang, C. Yin, and T. Tang, "Terminal iterative learning control based station stop control of a train," Int. J. Control, vol. 84, no. 7, pp. 1263–1274, Jul. 2011.
[2] H. S. Ahn, O. Jung, S. Choi, J.-H. Son, D. Chung, and G. Kim, "An optimal satellite antenna profile using reinforcement learning," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 41, no. 3, pp. 393–406, May 2011.
[3] Y. Chen, B. Chu, and C. T. Freeman, "Point-to-point iterative learning control with optimal tracking time allocation," IEEE Trans. Control Syst. Technol., vol. 26, no. 5, pp. 1685–1698, Sep. 2018.
[4] N. Lin, H. Liang, Y. K. Lv, and R. H. Chi, "A forgetting-factor based data-driven optimal terminal iterative learning control with applications to product concentration control of ethanol fermentation processes," Trans. Inst. Meas. Control, vol. 41, no. 14, pp. 3936–3942, Oct. 2019.
[5] J. V. De Wijdeven and O. Bosgra, "Residual vibration suppression using Hankel iterative learning control," Int. J. Robust Nonlinear Control, vol. 18, no. 10, pp. 1034–1051, Jul. 2008.
[6] H. Ding and J. Wu, "Point-to-point control for a high-acceleration positioning table via cascaded learning schemes," IEEE Trans. Ind. Electron., vol. 54, no. 5, pp. 2735–2744, Oct. 2007.
[7] C. T. Freeman, Z. Cai, E. Rogers, and P. L. Lewin, "Iterative learning control for multiple point-to-point tracking application," IEEE Trans. Control Syst. Technol., vol. 19, no. 3, pp. 590–600, May 2011.
[8] S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning," J. Rob. Syst., vol. 1, no. 2, pp. 123–140, Jun. 1984.
[9] X. Zhao and Y. Wang, "Energy-optimal time allocation in point-to-point ILC with specified output tracking," IEEE Access, vol. 7, pp. 122595–122604, 2019.
[10] Y. Xu, D. Shen, and X.-D. Zhang, "Stochastic point-to-point iterative learning control based on stochastic approximation," Asian J. Control, vol. 19, no. 5, pp. 1748–1755, Sep. 2017.
[11] D. Shen, J. Han, and Y. Wang, "Stochastic point-to-point iterative learning tracking without prior information on system matrices," IEEE Trans. Autom. Sci. Eng., vol. 14, no. 1, pp. 376–382, Jan. 2017.
[12] R. Chi, Z. Hou, S. Jin, and B. Huang, "An improved data-driven point-to-point ILC using additional on-line control inputs with experimental verification," IEEE Trans. Syst., Man, Cybern., Syst., vol. 49, no. 4, pp. 687–696, Apr. 2019.
[13] N. Lin, R. Chi, B. Huang, C.-J. Chien, and Y. Feng, "An E-HOIM based data-driven adaptive TILC of nonlinear discrete-time systems for non-repetitive terminal point tracking," Asian J. Control, vol. 20, no. 3, pp. 1135–1144, May 2018.
[14] J. Han, D. Shen, and C.-J. Chien, "Terminal iterative learning control for discrete-time nonlinear systems based on neural networks," J. Franklin Inst., vol. 355, no. 8, pp. 3641–3658, May 2018.
[15] W. Ren, R. W. Beard, and E. M. Atkins, "Information consensus in multivehicle cooperative control," IEEE Control Syst. Mag., vol. 27, no. 2, pp. 71–82, Apr. 2007.
[16] X. Dong, Y. Hua, Y. Zhou, Z. Ren, and Y. Zhong, "Theory and experiment on formation-containment control of multiple multirotor unmanned aerial vehicle systems," IEEE Trans. Autom. Sci. Eng., vol. 16, no. 1, pp. 229–240, Jan. 2019.
[17] Y. Tang, X. Xing, H. R. Karimi, L. Kocarev, and J. Kurths, "Tracking control of networked multi-agent systems under new characterizations of impulses and its applications in robotic systems," IEEE Trans. Ind. Electron., vol. 63, no. 2, pp. 1299–1307, Feb. 2016.
[18] P. Gu and S. Tian, "Consensus tracking control via iterative learning for singular multi-agent systems," IET Control Theory Appl., vol. 13, no. 11, pp. 1603–1611, Jul. 2019.
[19] Y. Hui, R. Chi, B. Huang, and Z. Hou, "3-D learning-enhanced adaptive ILC for iteration-varying formation tasks," IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 1, pp. 89–99, Jan. 2020.
[20] R. Chi, Y. Hui, B. Huang, and Z. Hou, "Adjacent-agent dynamic linearization-based iterative learning formation control," IEEE Trans. Cybern., vol. 50, no. 10, pp. 4358–4369, Oct. 2020.
[21] Y.-H. Lan, B. Wu, Y.-X. Shi, and Y.-P. Luo, "Iterative learning based consensus control for distributed parameter multi-agent systems with time-delay," Neurocomputing, vol. 357, pp. 77–85, Sep. 2019.
[22] Y. Lv, R. Chi, and Y. Feng, "Adaptive estimation-based TILC for the finite time consensus control of non-linear discrete-time MASs under directed graph," IET Control Theory Appl., vol. 12, no. 18, pp. 2516–2525, Dec. 2018.
[23] D. Meng and Y. Jia, "Iterative learning approaches to design finite-time consensus protocols for multi-agent systems," Syst. Control Lett., vol. 61, no. 1, pp. 187–194, Jan. 2012.
[24] D. Meng and Y. Jia, "Finite-time consensus for multi-agent systems via terminal feedback iterative learning," IET Control Theory Appl., vol. 5, no. 18, pp. 2098–2110, Dec. 2011.
[25] X. Bu, P. Zhu, Z. Hou, and J. Liang, "Finite-time consensus for linear multi-agent systems using data-driven terminal ILC," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 67, no. 10, pp. 2029–2033, Oct. 2020.
[26] L. Ding, Q.-L. Han, X. Ge, and X.-M. Zhang, "An overview of recent advances in event-triggered consensus of multiagent systems," IEEE Trans. Cybern., vol. 48, no. 4, pp. 1110–1123, Apr. 2018.
[27] C. Du, X. Liu, W. Ren, P. Lu, and H. Liu, "Finite-time consensus for linear multiagent systems via event-triggered strategy without continuous communication," IEEE Trans. Control Netw. Syst., vol. 7, no. 1, pp. 19–29, Mar. 2020.
[28] G. Zhao, C. Hua, and X. Guan, "A hybrid event-triggered approach to consensus of multiagent systems with disturbances," IEEE Trans. Control Netw. Syst., vol. 7, no. 3, pp. 1259–1271, Sep. 2020.
[29] B. L. Zhang, Q.-L. Han, and X.-M. Zhang, "Event-triggered H-infinity reliable control for offshore structures in network environments," J. Sound Vib., vol. 368, pp. 1–21, Apr. 2016.
[30] Y. Tang, D. Zhang, P. Shi, W. Zhang, and F. Qian, "Event-based formation control for nonlinear multiagent systems under DoS attacks," IEEE Trans. Autom. Control, vol. 66, no. 1, pp. 452–459, Jan. 2021.
[31] Y. Zhang, J. Sun, H. Liang, and H. Li, "Event-triggered adaptive tracking control for multiagent systems with unknown disturbances," IEEE Trans. Cybern., vol. 50, no. 3, pp. 890–901, Mar. 2020.
[32] X. Li, Z. Sun, Y. Tang, and H. Karimi, "Adaptive event-triggered consensus of multi-agent systems on directed graphs," IEEE Trans. Autom. Control, early access, Jun. 9, 2020, doi: 10.1109/TAC.2020.3000819.
[33] X.-M. Zhang and Q.-L. Han, "A decentralized event-triggered dissipative control scheme for systems with multiple sensors to sample the system outputs," IEEE Trans. Cybern., vol. 46, no. 12, pp. 2745–2757, Dec. 2016.
[34] J. Tang and L. Sheng, "Iterative learning fault-tolerant control for networked batch processes with event-triggered transmission strategy and data dropouts," Syst. Sci. Control Eng., vol. 6, no. 3, pp. 44–53, Oct. 2018.
[35] S. Wang, X. Bu, and J. Liang, "Event-triggered robust guaranteed cost control for two-dimensional nonlinear discrete-time systems," IEEE J. Syst. Eng. Electron., vol. 30, no. 6, pp. 1243–1251, Dec. 2019.
[36] W. Xiong, X. Yu, R. Patel, and W. Yu, "Iterative learning control for discrete-time systems with event-triggered transmission strategy and quantization," Automatica, vol. 72, pp. 84–91, Oct. 2016.
[37] T. Zhang and J. Li, "Event-triggered iterative learning control for multi-agent systems with quantization," Asian J. Control, vol. 20, no. 3, pp. 1088–1101, May 2018.
[38] N. Lin, R. Chi, and B. Huang, "Linear time-varying data model-based iterative learning recursive least squares identifications for repetitive systems," IEEE Access, vol. 7, pp. 133304–133313, 2019.

Na Lin received the M.Sc. degree in automatic control from the Qingdao University of Science and Technology, Qingdao, China, in 2017, where she is currently pursuing the Ph.D. degree with the Institute of Artificial Intelligence and Control, School of Automation and Electronic Engineering. Her research interests include data-driven control and iterative learning control.

Ronghu Chi received the Ph.D. degree from Beijing Jiaotong University, Beijing, China, in 2007. He was a Visiting Scholar with Nanyang Technological University, Singapore, from 2011 to 2012, and a Visiting Professor with the University of Alberta, Edmonton, AB, Canada, from 2014 to 2015. In 2007, he joined the Qingdao University of Science and Technology, Qingdao, China, where he is currently a Full Professor with the School of Automation and Electronic Engineering. He has published over 100 papers in important international journals and conference proceedings. His current research interests include iterative learning control, data-driven control, and intelligent transportation systems. Dr. Chi was awarded the "Taishan Scholarship" in 2016. He has served in various positions in international conferences and was an Invited Guest Editor of the International Journal of Automation and Computing. He has also served as a Council Member of the Shandong Institute of Automation and a committee member of the Data-Driven Control, Learning and Optimization Professional Committee.

Biao Huang (Fellow, IEEE) received the B.Sc. and M.Sc. degrees in automatic control from the Beijing University of Aeronautics and Astronautics, Beijing, China, in 1983 and 1986, respectively, and the Ph.D. degree in process control from the University of Alberta, Edmonton, AB, Canada, in 1997. In 1997, he joined the University of Alberta as an Assistant Professor with the Department of Chemical and Materials Engineering, where he is currently a Professor. He is the Industrial Research Chair in Control of Oil Sands Processes with the Natural Sciences and Engineering Research Council of Canada, and the Industry Chair of Process Control with Alberta Innovates. He has applied his expertise extensively in industrial practice. His current research interests include process control, data analytics, system identification, control performance assessment, Bayesian methods, and state estimation. Prof. Huang is a recipient of a number of awards, including the Alexander von Humboldt Research Fellowship from Germany, the Best Paper Award from the IFAC Journal of Process Control, the APEGA Summit Award in Research Excellence, and the Bantrel Award in Design and Industrial Practice. He is a Fellow of the Canadian Academy of Engineering and the Chemical Institute of Canada.
Event-Triggered Iterative Learning Containment Control of Model-Free Multiagent Systems

Changchun Hua, Yunfei Qiu, and Xinping Guan

Abstract—A new event-triggered iterative learning control method is proposed for handling the distributed containment control problem of model-free multiagent systems under a fixed directed graph. The designed controller merely uses the input and output signals; model information of the controlled system is not required. First, the unknown dynamics are transformed into a linearization model on the basis of the pseudo partial derivative. Second, a novel distributed containment controller is proposed for each follower by use of an iterative learning algorithm. Moreover, a new trigger mechanism is designed to save system energy, such that the number of updates of the proposed controller can be greatly reduced. Mathematical deduction shows that the controller can render the outputs of the followers convergent to a convex hull formed by the outputs of the leaders. Finally, simulation examples are given to verify the significance of the proposed method.

Index Terms—Containment control, event-triggered control, iterative learning control (ILC), model-free system, multiagent systems (MASs).

Manuscript received November 29, 2019; accepted March 8, 2020. This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1308300, and in part by the National Natural Science Foundation of China under Grant 61751309, Grant 61933009, Grant 618255304, and Grant 61673335. This article was recommended by Associate Editor J. Sarangapani. (Corresponding author: Changchun Hua.) Changchun Hua and Yunfei Qiu are with the Institute of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China (e-mail: cch@ysu.edu.cn; yunfeiiqiu@163.com). Xinping Guan is with the School of Electronics, Information and Electric Engineering, Shanghai Jiaotong University, Shanghai 200240, China (e-mail: xpguan@sjtu.edu.cn). Digital Object Identifier 10.1109/TSMC.2020.2981404

I. INTRODUCTION

THE DISTRIBUTED cooperative control of multiagent systems (MASs) has shown great potential in the physical, biological, and sociological fields in recent years [1]–[5]. Generally, according to the number of leaders, there are three types of MAS distributed cooperative control problems: 1) the leaderless consensus problem [6]–[9]; 2) the leader-following problem [10]–[12]; and 3) the containment control problem [13], [14]. The containment control problem, as a basic problem in MASs, has been applied to autonomous underwater vehicles [15], multiple autonomous vehicles [16], and so on. The purpose of MAS containment control is to drive every state or output of the followers into the convex hull spanned by those of the leaders, and many researchers have studied this aspect. In [17], the distributed containment control problem of autonomous vehicles was solved merely using position information, in which the model was based on double-integrator dynamics. In [18], a PI^n-type approach was proposed to overcome the containment problem of MASs in the discrete-time domain and the continuous-time domain. In [19], a novel observer-type method was presented for MASs with higher-order linear dynamics, utilizing only the output measurement information of relevant neighbor agents. Note that the above papers were all model based. However, in many real situations, it is hard to obtain an accurate mathematical model due to the complexity of systems.

In addition, iterative learning control (ILC), as a useful method, can give the controlled system better performance by learning from previous operations [20]–[23]. In [20] and [21], distributed D-type ILC schemes were introduced for solving the formation control issue in the discrete-time and continuous-time domains separately, and their MASs were all with switching topologies and unknown nonlinear dynamics. Meng et al. [22] investigated switching topologies, disturbances, and initial state shift problems of MASs with an ILC method. Jin [23] addressed high-order nonlinear MASs with actuator faults and uncertain control input gain functions under a directed graph topology in the ILC frame. By modifying the error, the output of the system can be made as close as possible to the ideal value. However, this method was rarely used in the containment control problem.

Moreover, in practical systems, because information is exchanged through networks among all agents, limited channel width increases the transmission difficulty of MAS coordination control. Hence, in MAS cooperative controller design, network transfer resource utilization should be paid great attention. In this way, the event-triggered control method has been introduced to deal with the aforementioned issue [24]–[29]. Event-triggered control means that the controller changes the input information only when an appointed situation occurs. In [24], a distributed event-triggered method was presented for handling the issue of second-order MAS containment control, only utilizing sampled position information. Miao et al. [25] studied the problem of MAS containment control under event-triggered conditions with constant time delays, and expanded the conclusions to the situation of multiple time delays. Zhang et al. [26] investigated the event-triggered containment control of MASs, where communications exist among the leaders. However, an event-triggered scheme was seldom considered before for shortening the number of iterations to be updated in an ILC algorithm [30].

Inspired by the above-mentioned arguments, the model-free MAS containment control problem is discussed in this article. To improve the utilization rate of resources and enlarge the applications of containment control in MASs, we propose an event-triggered ILC (ETILC) method for solving the nonlinear discrete-time MAS containment control problem.
The main work and contributions of this article are described as follows.

1) An event-triggered containment control method is designed for model-free MASs for the first time. First, the difference between the current value and the last input value is calculated, and then it is decided whether or not to change the input value. The ETILC method can effectively reduce the update rate of the MAS containment control protocol and save channel width resources.

2) The containment control problem is settled merely utilizing the I/O data of each agent and its neighbors under a fixed directed graph. This ETILC approach is a data-based approach, without any mathematical model information of the agents.

3) The proposed ETILC uses an iterative learning method, while no conditions are imposed on the initial values. With the increase of the time instant and iteration step, the effectiveness of the controller improves.

The remainder of this article is organized as follows. We briefly introduce the nonlinear discrete-time MAS model and formulate the problem in Section II. In Section III, both the proposed ETILC method and the effectiveness analysis are shown. In Section IV, simulation examples are given to prove the effectiveness of this new method. Finally, Section V presents the conclusion.

Notations: We consider a class of MASs with m leaders and n followers, which can be depicted by a weighted directed graph. G ≜ (V, E, A) denotes the weighted directed graph, where V ≜ {p_1, p_2, . . . , p_{n+m}} denotes the collection of vertices, E ⊆ V × V denotes the collection of edges, and A ≜ [a_{jl}] ∈ R^{(n+m)×(n+m)} denotes the adjacency matrix. If agent j can receive information from agent l, then a_{jl} > 0, that is, (p_j, p_l) ∈ E; otherwise a_{jl} = 0. Moreover, if (p_j, p_l) ∈ E, that is, a_{jl} > 0, then we define the vertex p_l as a neighbor of p_j. We assume that for any j = 1, 2, . . . , n + m, a_{jj} = 0, that is, there is no self-loop. Define the collection of all neighbors of node j as N_j ≜ {p_l ∈ V : (p_j, p_l) ∈ E}. The Laplacian matrix L̄ = [l_{jl}] is defined as L̄ ≜ D − A, where D ≜ diag{d_1, . . . , d_{n+m}} ∈ R^{(n+m)×(n+m)} with d_j ≜ Σ_{l=1}^{n+m} a_{jl}. L̄ can be partitioned as

L̄ = [ L̄_1      L̄_2
      0_{m×n}  0_{m×m} ]

where L̄_1 ∈ R^{n×n} and L̄_2 ∈ R^{n×m}. Because there is no self-loop, l_{jj} = d_j, where l_{jj} is the jth diagonal element of L̄ and equals the in-degree of follower j. We define o_l ≜ Σ_{j=1}^{n} a_{jl} as the out-degree of follower l. The convex hull is Co(X) ≜ {Σ_{i=1}^{n} θ_i x_i | x_i ∈ X, θ_i ≥ 0, Σ_{i=1}^{n} θ_i = 1}, X = [x_1, x_2, . . . , x_n]^T.

The following definitions are used in the later sections:

A ≜ [ Ā_1      Ā_2
      0_{m×n}  0_{m×m} ],   Ā ≜ [Ā_1  Ā_2] ∈ R^{n×(n+m)}
D_1 ≜ diag{d_1, . . . , d_n} ∈ R^{n×n}.

II. PROBLEM FORMULATION

Consider a class of MASs with n followers and m leaders, where the dynamics of the jth (1 ≤ j ≤ n) follower are written as

y_j(t + 1, k) = f_j(y_j(t, k), . . . , y_j(t − n_y, k), u_j(t, k), . . . , u_j(t − n_u, k))    (1)

where t ∈ {0, 1, . . . , T} is the time interval and k denotes the iteration number; u_j(t, k) = [u_{j1}(t, k), . . . , u_{jp}(t, k)]^T ∈ R^p and y_j(t, k) = [y_{j1}(t, k), . . . , y_{jp}(t, k)]^T ∈ R^p are the input and output signals at time t and iteration k, respectively; n_y and n_u are two unknown positive integers; and f_j(·) = [f_{j1}(·), . . . , f_{jp}(·)]^T ∈ R^p, where for any q = 1, 2, . . . , p, f_{jq}(·) is an unknown function. The outputs of the m leaders, y_{n+s} ∈ R^p (s = 1, 2, . . . , m), are bounded.

Assumption 1: G is fixed. For each leader, there exists at least one directed route to the followers. For each follower, the in-degree is larger than the out-degree.

Assumption 2: The partial derivative of f_{jq}(·) with respect to u_{jq}(t, k) is continuous.

Assumption 3: The nonlinear function f_{jq}(·) satisfies the generalized Lipschitz condition along the iteration axis, that is, for any Δu_{jq}(t, k) ≠ 0, |Δy_{jq}(t + 1, k)| ≤ b|Δu_{jq}(t, k)|, where Δy_{jq}(t + 1, k) ≜ y_{jq}(t + 1, k) − y_{jq}(t + 1, k − 1), Δu_{jq}(t, k) ≜ u_{jq}(t, k) − u_{jq}(t, k − 1), and b is a positive constant.

Lemma 1 [31]: Each entry of −L̄_1^{−1}L̄_2 is nonnegative, and each row sum of −L̄_1^{−1}L̄_2 equals one.

Lemma 2 [32]: For the nonlinear system (1), if Assumptions 2 and 3 hold, there exists φ_{jq}(t, k), called the pseudo partial derivative (PPD), such that if Δu_{jq}(t, k) ≠ 0, the system (1) can be written as

Δy_{jq}(t + 1, k) = φ_{jq}(t, k)Δu_{jq}(t, k)    (2)

and it satisfies |φ_{jq}(t, k)| ≤ b; then system (1) becomes

y_{jq}(t + 1, k) = y_{jq}(t + 1, k − 1) + φ_{jq}(t, k)Δu_{jq}(t, k).    (3)

Assumption 4: The signs of all PPDs φ_{jq}(t, k) remain unchanged, and there exists a sufficiently small positive parameter ε such that φ_{jq}(t, k) > ε > 0 or φ_{jq}(t, k) < −ε < 0. For simplicity, we consider φ_{jq}(t, k) > ε > 0.

Remark 1: Assumption 1 is a common assumption for the communication network about the leaders, which can be found in [18] and [19]. Assumption 1 is also a necessary assumption about the followers, and it is used in the following parts. Assumption 2 is a common condition for model-free control methods [32], [33]. Assumption 3 implies that the rate of variation of the system output corresponding to the rate of variation of the system input is bounded; it is a commonsensible condition in many practical motion control systems and can be found in [34] and [35].

Remark 2: Lemma 2 requires |Δu_{jq}(t, k)| ≠ 0. The PPD φ_j(t, k) in the CFDL (compact-form dynamic linearization) model is unknown, and we estimate it in the following. Moreover, the CFDL method does not require any prior information. Apparently, the PPD is bounded at any sampling time and iteration number.
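Lemma 1 can be checked numerically for any admissible graph. The sketch below partitions the Laplacian of a hypothetical graph with n = 3 followers and m = 2 leaders and verifies that −L̄_1^{-1}L̄_2 is entrywise nonnegative with unit row sums; the particular edges are illustrative assumptions only.

```python
import numpy as np

n, m = 3, 2                                   # followers p1..p3, leaders p4..p5 (hypothetical)
A = np.zeros((n + m, n + m))                  # adjacency: A[j, l] > 0 if agent j receives from agent l
A[0, 3] = A[1, 0] = A[1, 4] = A[2, 1] = 1.0   # assumed edges: each leader feeds the follower subgraph

D = np.diag(A.sum(axis=1))                    # in-degree matrix
L_bar = D - A                                 # Laplacian \bar{L} = D - A
L1, L2 = L_bar[:n, :n], L_bar[:n, n:]         # partition into follower block and follower-leader block

W = -np.linalg.inv(L1) @ L2                   # -\bar{L}_1^{-1}\bar{L}_2 from Lemma 1
print(np.all(W >= -1e-12))                    # entrywise nonnegative
print(W.sum(axis=1))                          # each row sum equals one
```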
Remark 3: Assumption 4 is necessary for solving the control problem, that is, the measured output is nondecreasing when the control input increases correspondingly. Assumption 4 is a common assumption, which can be found in [34] and [35].

The objective of this article is to design proper inputs to drive all the outputs of the followers into the convex hull spanned by the outputs of the leaders, that is, inf_{h(t)∈Y(t)} ‖y_i − h(t)‖ ≤ ϵ for i = 1, 2, . . . , n, where Y(t) ≜ Co{y_{n+1}, y_{n+2}, . . . , y_{n+m}} and ϵ is an arbitrarily small positive constant.

III. MAIN RESULTS

Let ξ_{jq}(t, k) denote the containment error, which shows the relationship between the agents. In this way, we have

ξ_{jq}(t, k) ≜ Σ_{l∈N_j} a_{jl}(y_{lq}(t, k) − y_{jq}(t, k)).    (4)

Let ȳ_{1q}(t, k) ≜ [y_{1q}(t, k), . . . , y_{nq}(t, k)]^T denote the n followers' output vector, ȳ_{2q}(t) ≜ [y_{(n+1)q}(t), . . . , y_{(n+m)q}(t)]^T the m leaders' output vector, ȳ_q(t, k) ≜ [y_{1q}(t, k), . . . , y_{(n+m)q}(t)]^T the output vector of all agents, and ξ_q(t, k) ≜ [ξ_{1q}(t, k), . . . , ξ_{nq}(t, k)]^T ∈ R^n. Rewriting (4), ξ_q(t, k) can be expressed as

ξ_q(t, k) = Ā ȳ_q(t, k) − D_1 ȳ_{1q}(t, k) = −L̄_1 ȳ_{1q}(t, k) − L̄_2 ȳ_{2q}(t).    (5)

Further, define ŷ_{2q}(t) ≜ −L̄_1^{−1} L̄_2 ȳ_{2q}(t); then we have

ξ_q(t, k) = −L̄_1 ȳ_{1q}(t, k) + L̄_1 ŷ_{2q}(t) = −L̄_1(ȳ_{1q}(t, k) − ŷ_{2q}(t)).    (6)

From Lemma 1, ŷ_{2q}(t) is in the convex hull formed by ȳ_{2q}(t). From (6), we notice that the error ȳ_{1q}(t, k) − ŷ_{2q}(t) is bounded if ξ_q(t, k) is bounded. Then, designing a proper controller to ensure the boundedness of ξ_q(t, k) is required.

We consider a fixed-threshold event-triggered strategy for nonlinear uncertain systems in this article. The triggering event is presented below:

u_{jq}(t, k) = ω_{jq}(t, k_χ),   ∀k ∈ [k_χ, k_{χ+1})    (7)
k_{χ+1} ≜ inf{k ∈ Z^+ | |v_{jq}(t, k)| ≥ m}    (8)

where m > 0 denotes the error bound, v_{jq}(t, k) ≜ u_{jq}(t, k) − ω_{jq}(t, k) is the measurement error, and ω_{jq}(t, k) is the calculated ...

The following control algorithm is shown:

φ̂_{jq}(t, k) = φ̂_{jq}(t, k − 1) + η Δω_{jq}(t, k − 1) (Δy_{jq}(t + 1, k − 1) − φ̂_{jq}(t, k − 1)Δω_{jq}(t, k − 1)) / (μ + |Δω_{jq}(t, k − 1)|^2)    (10)

φ̂_{jq}(t, k) = φ̂_{jq}(t, 1), if |φ̂_{jq}(t, k)| ≤ ε or sign(φ̂_{jq}(t, k)) ≠ sign(φ̂_{jq}(t, 1))    (11)

ω_{jq}(t, k) = ω_{jq}(t, k − 1) + ρ φ̂_{jq}(t, k) ξ_{jq}(t + 1, k − 1) / (λ + |φ̂_{jq}(t, k)|^2)    (12)

where η ∈ (0, 1) and ρ ∈ (0, 1) are step sizes that make the algorithm more general, μ > 0 and λ > 0 are weighting factors, φ_{jq}(t, 1) is the initial value of φ_{jq}(t, k), and φ̂_{jq}(t, k) is the estimate of φ_{jq}(t, k).

Remark 4: We choose the estimation criterion function J(φ_{jq}(t, k)) = |Δy_{jq}(t + 1, k − 1) − φ_{jq}(t, k)Δω_{jq}(t, k − 1)|^2 + μ|φ_{jq}(t, k) − φ̂_{jq}(t, k − 1)|^2. Based on the optimization condition (1/2)(∂J(φ_{jq}(t, k))/∂φ_{jq}(t, k)) = 0, we can get the parameter estimation law (10). Similar to (10), (12) uses the control input criterion function J(ω_{jq}(t, k)) = |ξ_{jq}(t + 1, k − 1) − φ_{jq}(t, k)(ω_{jq}(t, k) − ω_{jq}(t, k − 1))|^2 + λ|ω_{jq}(t, k) − ω_{jq}(t, k − 1)|^2, where λ is an important factor and a proper λ can ensure the stability of the system. In the reset algorithm (11), ε is a sufficiently small positive constant.

In what follows, we give the theorem that states the conditions and ability of the proposed algorithm.

Theorem: If the system (1) satisfies Assumptions 1–4 with the trigger mechanism (8) and the controller (7), the containment error ξ_{jq}(t, k) converges to an arbitrarily small constant under the conditions λ > b^2/4 and

ρ < 1 / max_{j=1,...,n} d_j

as k → ∞ for all followers.

Proof: We first demonstrate the boundedness of the parameter estimate φ̂_{jq}(t, k).

When |φ̂_{jq}(t, k)| ≤ ε or sign(φ̂_{jq}(t, k)) ≠ sign(φ̂_{jq}(t, 1)), the reset algorithm (11) makes φ̂_{jq}(t, k) = φ̂_{jq}(t, 1).

Otherwise, subtracting φ_{jq}(t, k) from both sides of (10) leads to

φ̂_{jq}(t, k) − φ_{jq}(t, k) = φ̂_{jq}(t, k − 1) − φ_{jq}(t, k)
value, kχ (χ ∈ Z + ) is a positive update parameter. When (8) ηωjq (t, k − 1)
+ 2
is triggered, the iteration number will be marked as kχ +1 , then μ + ωjq (t, k − 1)
the controller algorithm will be updated by new input ujq (t, k).
During the interval [kχ , kχ +1 ), the control signal applied in the × yjq (t + 1, k − 1)

system keeps as a constant ωjq (t, kχ ).
− φ̂jq (t, k − 1)ωjq (t, k − 1) .
From (8), we can get a continuous varying parameter
δjq (t, k), satisfing δjq (t, kχ ) = 0, δjq (t, kχ +1 ) = ±1 and (13)
|δjq (t, k)| ≤ 1 ∀k ∈ [kχ , kχ +1 ). In this way, we have that
Make φ̃jq (t, k)  φ̂jq (t, k) − φjq (t, k) denote the parameter
ujq (t, k) = ωjq (t, k) + δjq (t, k)m. (9) estimation error. By substituting (2) and (9) into (13) becomes
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

4 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS

φ̃jq (t, k) = φ̃jq (t, k − 1) − φjq (t, k) − φjq (t, k − 1) multiply both sides of (18) by D1 , then minus both by
ηωjq (t, k − 1) Āȳq (t + 1, k) and Āȳq (t + 1, k − 1), we can get
+ 2
yjq (t + 1, k − 1)
μ + ωjq (t, k − 1) Āȳq (t + 1, k) + Āȳq (t + 1, k − 1) − D1 ȳ1q (t + 1, k)

− φ̂jq (t, k − 1)ωjq (t, k − 1) = Āȳq (t + 1, k) + Āȳq (t + 1, k − 1) − D1 ȳ1q (t + 1, k − 1)
= φ̃jq (t, k − 1) − φjq (t, k) − φjq (t, k − 1) − D1 q (t, k)uq (t, k). (19)
ηωjq (t, k − 1) Then combine (5) and (19), one has
+ 2
μ + ωjq (t, k − 1)
ξ q (t + 1, k) + Āȳq (t + 1, k − 1)
× φjq (t, k − 1) × ωjq (t, k − 1) + δjq (t, k − 1)m
 = ξ q (t + 1, k − 1) + Āȳq (t + 1, k) − D1 q (t, k)uq (t, k).
− φ̂jq (t, k − 1)ωjq (t, k − 1)
 
Move Āȳq (t + 1, k − 1) of the formula above to the right
2
η ωjq (t, k − 1) of the equal sign, we can have
= 1− 2
φ̃jq (t, k − 1)
μ + ωjq (t, k − 1) ξ q (t + 1, k) = ξ q (t + 1, k − 1)
− φjq (t, k) − φjq (t, k − 1) + Ā ȳq (t + 1, k) − ȳq (t + 1, k − 1)
ηωjq (t, k − 1)φjq (t, k − 1)
+ δjq (t, k − 1)m. (14) − D1 q (t, k)uq (t, k)
2
μ + ωjq (t, k − 1) = ξ q (t + 1, k − 1)
Based on Lemma 2, we can get |φjq (t, k)−φjq (t, k−1)| ≤ 2b. + Ā1 ȳ1q (t + 1, k) − ȳ1q (t + 1, k − 1)
Taking absolute value on both sides of (12) yields − D1 q (t, k)uq (t, k)
η ωjq (t, k − 1)
2 = ξ q (t + 1, k − 1) + Ā1 q (t, k)uq (t, k)
φ̃jq (t, k) ≤ 1 − 2
φ̃jq (t, k − 1) − D1 q (t, k)uq (t, k)
μ + ωjq (t, k − 1)
ηδjq (t, k − 1)mb = ξ q (t + 1, k − 1) − L̄1 q (t, k)uq (t, k)
+ 2b + √ . (15) = ξ q (t + 1, k − 1) − L̄1 q (t, k)
2 μ
× ωq (t, k) + δ q (t, k)m
Because δjq (t, k), m are all bounded, 0 < η < 1, μ > 0, we
can get 0 < [(η|ωjq (t, k − 1)|2 )/(μ + |ωjq (t, k − 1)|2 )] < = ξ q (t + 1, k − 1) − L̄1 q (t, k)ωq (t, k)
η < 1. Let |1−[(η|ωjq (t, k−1)|2 )/(μ+|ωjq (t, k−1)|2 )]|  − L̄1 q (t, k)δ q (t, k)m

p1 , |[(ηδjq (t, k − 1)mb)/(2 μ)]| + 2b  p2 , where p1 , p2 are = ξ q (t + 1, k − 1) − ρ L̄1 q (t, i)Hq (t, k)
bounded positive constants. Then leading (13) into × ξ q (t + 1, k − 1) − L̄1 q (t, k)δ q (t, k)m
φ̃jq (t, k) ≤ p1 φ̃jq (t, k − 1) + p2 = ξ q (t + 1, k − 1) − L̄1 q (t, k)δ q (t, k)m
p2 − ρ L̄1 q (t, k)ξ q (t + 1, k − 1) (20)
≤ · · · ≤ pk−1 φ̃jq (t, 1) + . (16)
1 1 − p1 where
⎛ ⎞
We can get φ̂jq (t, k) is bounded, since φjq (t, k) is bounded.
Then we prove the boundness of ξ q (t, k). ⎜ φ1q (t, k)φ̂1q (t, k) φnq (t, k)φ̂nq (t, k) ⎟
q (t, k)  diag⎝ 2
,..., 2 ⎠
Define the collection of all the kth iteration agents as λ + φ̂1q (t, k) λ + φ̂nq (t, k)
the column stack vectors uq (t, k)  [u1q (t, k), . . . , unq (t, k)]T
and ωq (t, k)  [ω1q (t, k), . . . , ωnq (t, k)]T . We can get the = diag ζ1q , . . . , ζnq
 T
following equation from (12): δ q (t, k)  δ1q (t, k), . . . , δnq (t, k)
ωq (t, k) = ωq (t, k − 1) + ρHq (t, k)ξ q (t + 1, k − 1) (17) δ q (t, k)  δ q (t, k) − δ q (t, k − 1).
Take matrix 1-norm on both sides of (20), then we can get
where
⎛ ⎞ the following inequality shown as:
     
φ̂1q (t, k) φ̂nq (t, k) ξ q (t + 1, k) ≤ I − ρ L̄1 q (t, k) ξ q (t + 1, k − 1)
⎜ ⎟  1 
Hq (t, k)  diag⎝ ,..., 2⎠
. 1 1
2 + L̄1 q (t, k)δ q (t, k) m. (21)
λ + φ̂1q (t, k) λ + φ̂nq (t, k) 1

Since L̄1 q (t, k)δ q (t, k)1 m is obviously bounded,


We also can get the following equation similar as (17) that: there must exist a positive constant q1 that makes
L̄1 q (t, k)δ q (t, k)1 m ≤ q1 . We can choose proper param-
ȳ1q (t + 1, k) = ȳ1q (t + 1, k − 1) + q (t, k)uq (t, k) (18)
eters as follows:
where q (t, k)  diag(φ1q (t, k), . . . , φnq (t, k)) and φjq (t, k)φ̂jq (t, k) φjq (t, k)φ̂jq (t, k) b
uq (t, k)  uq (t, k) − uq (t, k − 1). Then we use arbi- 0< ≤ √ ≤ √ < 1 (22)
λ + φ̂jq (t, k)
2
2 λ φ̂jq (t, k) 2 λ
trary the qth order follower agent to analyze the stability,
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

HUA et al.: EVENT-TRIGGERED ITERATIVE LEARNING CONTAINMENT CONTROL OF MODEL-FREE MASs 5

Fig. 1. Communication graph of Example 1.

1
ρ< . (23)
maxj=1,...,n dj Fig. 2. Trajectories of three followers at 300th iteration.
n
Base on Assumption 1, we can get 0 < dj − i=1 (lij −ljj ) <
dj . If the parameters satisfies (22) and (23), there must exist a
positive constant q2 that makes I − L̄1 q (t, k)1 < q2 < 1.
Therefore, we can get
   
ξ q (t + 1, k) < q2 ξ q (t + 1, k − 1) + q1
1 1 
  q1 1 − qk−1
2
< · · · < qk−1 ξ q (t + 1, 1) + .
2 1 1 − q2
(24)
From the inequality (24), we can see that ξ q (t + 1, k)1 is
bounded. If there is without event-triggered section, then we
can obtain limk→∞ ξ q (t+1, k)1 = 0 during the time interval. Fig. 3. Responses of the contain error ξ11 , ξ21 , and ξ31 .
Hence, the convergence of ξjq (t, k) is proved.
Remark 5: Inequality (24) becomes ξ q (t + 1, k)1 <
[q1 /(1 − q2 )], because qk−1 2 approaches to 0 when k → ∞. y52 (t) = K6 sin(t/16)
The constant term [q1 /(1 − q2 )] relates to q1 and q2 , and Agent 6: y61 (t) = K5 sin(t/16)
L̄1 q (t, k)δ q (t, k)1 m ≤ q1 , I − ρ L̄1 q (t, k))1 < q2 < 1.
y62 (t) = K4 sin(t/16)
In this way, we can make containment error be arbitrarily small
by choosing proper triggering error bound m. where K1 = 1, K2 = 3, K3 = 2, K4 = 10, K5 = 3, and K6 = 5.
From Fig. 1, we can set a14 = 1, a15 = 1, a21 = 1, a26 = 1,
IV. S IMULATION E XAMPLE a32 = 1, a35 = 1, D = diag{2, 2, 2, 0, 0, 0}. The reciprocal
A. Example 1 of largest diagonal element of D is 1/2. Therefore, we can
choose ρ = 0.4 to satisfy our condition. The time interval is
We consider the MASs with three followers and three lead- t ∈ {1, 2, . . . , 25}, the iteration interval is k ∈ {1, 2, . . . , 500}.
ers, communication graph is structured in Fig. 1, in which Other initial condition values are chosen as λ = 0.5, η = 2,
agents 1, 2, and 3 are followers and agents 4, 5, and 6 are μ = 0.5, m = 10−2 , ε = 10−5 . The initial value y11 (1, k) = 4,
leaders. The models of six agents are shown as follows: y12 (1, k) = −5, y21 (1, k) = 9, y22 (1, k) = 9, y31 (1, k) = −5,
y11 (t, k) + u11 (t, k) and y32 (1, k) = 4, all initial input values are chosen as 0.
Agent 1: y11 (t + 1, k) = + K1 u11 (t, k) In Fig. 2, we can see the trajectories of three followers at
1 + y11 (t, k)2
y12 (t, k) + u12 (t, k) 300th iteration in the 2-D space. Except for initial outputs,
y12 (t + 1, k) = + K2 u12 (t, k). the rest followers’ outputs are into the convex hull formed by
1 + y12 (t, k)2
leaders’ outputs eventually. Fig. 3 displays the containment
y21 (t, k) + u21 (t, k)
Agent 2: y21 (t + 1, k) = + K3 u21 (t, k) error ξ11 , ξ21 , and ξ31 at time step 5, which implies that the
1 + y21 (t, k)2 containment errors converge under a bound along the iteration
y22 (t, k) + u22 (t, k) axis. Fig. 4 shows the three followers’ triggering events in the
y22 (t + 1, k) = + K3 u22 (t, k).
1 + y22 (t, k)2 first order, which implies the event-triggered strategy achieves
y31 (t, k) + u31 (t, k) less triggering number with the increasing of iteration step.
Agent 3: y31 (t + 1, k) = + K2 u31 (t, k) Fig. 5 shows the three followers’ control inputs in the first
1 + y31 (t, k)2
y32 (t, k) + u32 (t, k) order at time step 5, which implies that the control inputs
y32 (t + 1, k) = + K1 u32 (t, k). become smoother with the increasing of iteration step.
1 + y32 (t, k)2
Table I shows the number of triggering events at time step 5
Agent 4: y41 (t) = K4 sin(t/16)
along iteration axis. In example, controller would change 300
y42 (t) = K5 sin(t/16). times to change the size of input signal for keeping the
Agent 5: y51 (t) = K6 sin(t/16) good performance, if there is without event-triggered scheme.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

6 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS

Fig. 6. Communication graph of Example 2.

Fig. 4. Triggering event in the first order.

Fig. 7. Trajectories of agents at 1000th iteration.

The plant parameters are same with those in [35], and the
models of two leaders are given as follows:

Agent 3: y3 (t) = 5 + 5 sin(π t/10)


Fig. 5. Control inputs of followers.
Agent 4: y4 (t) = 30 + 5 sin(π t/10).
TABLE I
N UMBER O F T RIGGERING E VENTS Discretize (26) by Euler formula and take sample time as
ts = 0.01, we have get step T = 1000. From Fig. 6, we can
set a13 = 1, a14 = 1, a21 = 1, a23 = 1, D = diag{2, 2, 0, 0}.
We choose ρ = 0.3 to satisfy our condition (23). Other initial
condition values are chosen as λ = 0.5, η = 0.5, μ = 0.5,
m = 10−1 , ε = 10−5 . All initial output values are chosen as 0
This shows that the proposed event-triggered strategy can and all initial input values are chosen as 100. In Fig. 7, we can
effectively lower communication pressure. see the trajectories of two leaders and two followers at 1000th
iteration in the 1-D space. Except for the initial outputs, the
rest followers outputs are into the convex hull eventually. This
B. Example 2 example shows that the proposed containment control is useful
We apply this algorithm for permanent magnet dc linear for four-motor system.
motors with two leaders and two followers, communica-
tion graph is strictured in Fig. 6. Consider each motor
V. C ONCLUSION
with [35], [36]
 A new ETILC strategy is presented for handling the contain-
ẋ(t) = v(t) ment problem of model-free nonlinear discrete-time MASs. To
u(t)−ffriction (t)−fripple (t) (25)
v̇(t) = m design this distributed controller-based the unknown model,
the dynamics of followers are converted to dynamic lineariza-
where ffriction (t) is the friction force (N), fripple (t) is the rip-
tion formation. Event-triggered strategy is used for reducing
ple force (N). ffriction (t) and fripple (t) have been modeled as
the updating number of the proposed controller. Merely uti-
follows:
 lizing the limited information of the systems, this scheme can
δ
ffriction (t) = fc + (fs − fc )e−(ẋ/ẋδ ) + fv ẋ sgnẋ drive all the outputs of followers into the convex hull spanned
by leaders. The effectiveness of this algorithm is verified by
fripple (t) = b1 sin(ω0 x(t)) mathematical computation and simulation. In future, model-
Denote x1 (t) = x(t), x2 (t) = v(t). Equation (25) can be free MASs with dead-zone input issue might be considered.
converted to the following form:
⎧ R EFERENCES
⎨ ẋ1 (t) = x2 (t)
f (t)+f (t) [1] F. Ren, M. Zhang, D. Soetanto, and X. Su, “Conceptual design of a
ẋ (t) = − friction m ripple + m1 u(t) (26)
⎩ 2 multi-agent system for interconnected power systems restoration,” IEEE
y(t) = x2 (t) Trans. Power Syst., vol. 27, no. 2, pp. 732–740, May 2012.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

HUA et al.: EVENT-TRIGGERED ITERATIVE LEARNING CONTAINMENT CONTROL OF MODEL-FREE MASs 7

[2] Y. Tang, X. Xing, H. R. Karimi, L. Kocarev, and J. Kurths, “Tracking [24] H. Xia, W. Zheng, and J. Shao, “Event-triggered containment control
control of networked multi-agent systems under new characterizations for second-order multi-agent systems with sampled position data,” ISA
of impulses and its applications in robotic systems,” IEEE Trans. Ind. Trans., vol. 73, pp. 91–99, Feb. 2018.
Electron., vol. 63, no. 2, pp. 1299–1307, Feb. 2016. [25] G. Miao, J. Cao, A. Alsaedi, and F. E. Alsaadi, “Event-triggered con-
[3] Z. Qu, J. Wang, and R. A. Hull, “Cooperative control of dynamical tainment control for multi-agent systems with constant time delays,” J.
systems with application to autonomous vehicles,” IEEE Trans. Autom. Franklin Inst., vol. 354, no. 15, pp. 6956–6977, 2017.
Control, vol. 53, no. 4, pp. 894–911, May 2008. [26] W. Zhang, Y. Tang, Y. Liu, and J. Kurths, “Event-triggering containment
[4] H. Li, F. Karray, O. Basir, and I. Song, “A framework for coordinated control for a class of multi-agent networks with fixed and switching
control of multiagent systems and its applications,” IEEE Trans. Syst., topologies,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 64, no. 3,
Man, Cybern. A, Syst. Humans, vol. 38, no. 3, pp. 534–548, May 2008. pp. 619–629, Mar. 2017.
[5] D. Oviedo, M. Romero-Ternero, M. Hernández, F. Sivianes, A. Carrasco, [27] A. Ghafoor, S. N. Balakrishnan, S. Jagannathan, and T. Yucelen, “Event
and J. Escudero, “Multiple intelligences in a multiagent system applied triggered neuro-adaptive controller (ETNAC) design for uncertain linear
to telecontrol,” Expert Syst. Appl., vol. 41, no. 15, pp. 6688–6700, 2014. systems,” in Proc. IEEE Conf. Decis. Control (CDC), Miami Beach, FL,
[6] H. Du, G. Wen, G. Chen, J. Cao, and F. E. Alsaadi, “A distributed USA, Dec. 2018, pp. 2217–2222.
finite-time consensus algorithm for higher-order leaderless and leader- [28] A. Ghafoor, J. Yao, S. N. Balakrishnan, J. Sarangapani, and T. Yucelen,
following multiagent systems,” IEEE Trans. Syst., Man, Cybern., Syst., “Event triggered neuroadaptive controller (ETNAC) design for uncertain
vol. 47, no. 7, pp. 1625–1634, Jul. 2017. affine nonlinear systems,” in Proc. Dyn. Syst. Control Conf., Atlanta,
[7] J. Qin, H. Gao, and W. Zheng, “Second-order consensus for multi- GA, USA, 2018, p. 10.
agent systems with switching topology and communication delay,” Syst. [29] A. Ghafoor, P. Galchenko, S. N. Balakrishnan, H. Pernicka, and
Control Lett., vol. 60, no. 6, pp. 390–397, 2011. T. Yucelen, “ETNAC design enabling formation flight at liberation
[8] B. Fan, Q. Yang, S. Jagannathan, and Y. Sun, “Output-constrained con- points,” in Proc. Amer. Control Conf. (ACC), Philadelphia, PA, USA,
trol of nonaffine multiagent systems with partially unknown control Jul. 2019, pp. 3689–3694.
directions,” IEEE Trans. Autom. Control, vol. 64, no. 9, pp. 3936–3942, [30] W. Xiong, X. Yu, R. Patel, and W. Yu, “Iterative learning control for
Sep. 2019. discrete-time systems with event-triggered transmission strategy and
[9] W. Meng, Q. Yang, J. Sarangapani, and Y. Sun, “Distributed control of quantization,” Automatica, vol. 72, pp. 84–91, Oct. 2016.
nonlinear multiagent systems with asymptotic consensus,” IEEE Trans. [31] Z. Meng, W. Ren, and Z. You, “Distributed finite-time attitude contain-
Syst., Man, Cybern., Syst., vol. 47, no. 5, pp. 749–757, May 2017. ment control for multiple rigid bodies,” Automatica, vol. 46, no. 12,
[10] G. Wen, C. L. P. Chen, Y. Liu, and Z. Liu, “Neural-network-based pp. 2092–2099, 2010.
adaptive leader-following consensus control for second-order non- [32] S. Jin, A. Hou, R. Chi, and X. Liu, “Data-driven model-free adaptive
linear multi-agent systems,” IET Control Theory Appl., vol. 9, no. 13, iterative learning control for a class of discrete-time nonlinear systems,”
pp. 1927–1934, Aug. 2015. Control Theory Appl., vol. 29, no. 8, pp. 1001–1009, 2012.
[11] W. Meng, Q. Yang, S. Jagannathan, and Y. Sun, “Distributed con- [33] Z. Hou and S. Jin, “A novel data-driven control approach for a class
trol of high-order nonlinear input constrained multiagent systems using of discrete-time nonlinear systems,” IEEE Trans. Control Syst. Technol.,
a backstepping-free method,” IEEE Trans. Cybern., vol. 49, no. 11, vol. 19, no. 6, pp. 1549–1558, Nov. 2011.
pp. 3923–3933, Nov. 2019. [34] X. Bu, Z. Hou, Q. Yu, and Y. Yang, “Quantized data driven iterative
[12] W. Zou, Z. Xiang, and C. K. Ahn, “Mean square leader-following con- learning control for a class of nonlinear systems with sensor saturation,”
sensus of second-order nonlinear multiagent systems with noises and IEEE Trans. Syst., Man, Cybern., Syst., early access, Sep. 24, 2018,
unmodeled dynamics,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 49, doi: 10.1109/TSMC.2018.2866909.
no. 12, pp. 2478–2486, Dec. 2019. [35] X. Bu, Z. Hou, and H. Zhang, “Data-driven multiagent systems con-
[13] L. Cheng, Y. Wang, W. Ren, Z. Hou, and M. Tan, “Containment sensus tracking using model free adaptive control,” IEEE Trans. Neural
control of multiagent systems with dynamic leaders based on a PI n - Netw. Learn. Syst., vol. 29, no. 5, pp. 1514–1524, May 2018.
type approach,” IEEE Trans. Cybern., vol. 46, no. 12, pp. 3004–3017, [36] B. Armstrong-Hélouvry, P. Dupont, and C. C. D. Wit, “A survey of
Dec. 2016. models, analysis tools and compensation methods for the control of
[14] X. Wang, S. Li, and P. Shi, “Distributed finite-time containment control machines with friction,” Automatica, vol. 30, no. 7, pp. 1083–1138,
for double-integrator multiagent systems,” IEEE Trans. Cybern., vol. 44, 1994.
no. 9, pp. 1518–1528, Sep. 2014.
[15] S. P. Hou and C. C. Cheah, “Can a simple control scheme work for a
formation control of multiple autonomous underwater vehicles?” IEEE
Trans. Control Syst. Technol., vol. 19, no. 5, pp. 1090–1101, Sep. 2011.
[16] Y. Cao, D. Stuart, W. Ren, and Z. Meng, “Distributed containment
control for multiple autonomous vehicles with double-integrator dynam-
ics: Algorithms and experiments,” IEEE Trans. Control Syst. Technol.,
vol. 19, no. 4, pp. 929–938, Jul. 2011.
[17] J. Li, W. Ren, and S. Xu, “Distributed containment control with
multiple dynamic leaders for double-integrator dynamics using only
position measurements,” IEEE Trans. Autom. Control, vol. 57, no. 6,
pp. 1553–1559, Jun. 2012.
[18] L. Cheng, Y. Wang, W. Ren, Z. Hou, and M. Tan, “Containment
control of multiagent systems with dynamic leaders based on a PI n - Changchun Hua received the Ph.D. degree in
type approach,” IEEE Trans. Cybern., vol. 46, no. 12, pp. 3004–3017, electrical engineering from Yanshan University,
Dec. 2016. Qinhuangdao, China, in 2005.
[19] G. Wen, Y. Zhao, Z. Duan, W. Yu, and G. Chen, “Containment of higher- He was a Research Fellow with the National
order multi-leader multi-agent systems: A dynamic output approach,” University of Singapore, Singapore, from 2006 to
IEEE Trans. Autom. Control, vol. 61, no. 4, pp. 1135–1140, Apr. 2016. 2007. From 2007 to 2009, he worked with Carleton
[20] Y. Liu and Y. Jia, “An iterative learning approach to formation control University, Ottawa, ON, Canada, funded by Province
of multi-agent systems,” Syst. Control Lett., vol. 61, no. 1, pp. 148–154, of Ontario Ministry of Research and Innovation
2012. Program. From 2009 to 2010, he worked with the
[21] Y. Liu and Y. Jia, “Formation control of discrete-time multi-agent University of Duisburg-Essen, Duisburg, Germany,
systems by iterative learning approach,” Int. J. Control Autom. Syst., funded by Alexander von Humboldt Foundation. He
vol. 10, no. 5, pp. 913–919, 2012. is currently a Full Professor with Yanshan University. He has authored
[22] D. Meng, Y. Jia, and J. Du, “Robust consensus tracking control for or coauthored more than 120 papers in mathematical, technical journals,
multiagent systems with initial state shifts, disturbances, and switch- and conferences. He has been involved in more than 15 projects sup-
ing topologies,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 4, ported by the National Natural Science Foundation of China, the National
pp. 809–824, Apr. 2015. Education Committee Foundation of China, and other important foundations.
[23] X. Jin, “Adaptive iterative learning control for high-order nonlinear He is Cheung Kong Scholars Programme Special appointment professor. His
multi-agent systems consensus tracking,” Syst. Control Lett., vol. 89, research interests are in nonlinear control systems, multiagent systems, control
pp. 16–23, Mar. 2016. systems design over network, teleoperation systems, and intelligent control.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

8 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS

Yunfei Qiu received the B.S. degree in building Xinping Guan received the B.S. degree in math-
electricity an intelligence from Shenyang Jianzhu ematics from Harbin Normal University, Harbin,
University, Shenyang, China, in 2013. She is cur- China, in 1986, and the M.S. degree in applied
rently pursuing the Ph.D. degree with Yanshan mathematics and the Ph.D. degree in electrical engi-
University, Qinhuangdao, China. neering from the Harbin Institute of Technology,
Her current research interests include model free Harbin, in 1991 and 1999, respectively.
adaptive control, iterative learning control, and delay He is with the Department of Automation,
systems. Shanghai Jiao Tong University, Shanghai, China. He
has (co)authored more than 200 papers in math-
ematical, technical journals, and conferences. As
(a)an (co)-investigator, he has finished more than
20 projects supported by the National Natural Science Foundation of China,
the National Education Committee Foundation of China, and other important
foundations. He is Cheung Kong Scholars Programme Special appoint-
ment professor. His current research interests include networked control
systems, robust control, and intelligent control for complex systems and their
applications.
Dr. Guan is serving as a Reviewer of Mathematic Review of America, a
member of the Council of Chinese Artificial Intelligence Committee, and the
Vice-Chairman of Automation Society of Hebei, China.
International Journal of Control, Automation and Systems 19(3) (2021) 1426-1442 ISSN:1598-6446 eISSN:2005-4092
http://dx.doi.org/10.1007/s12555-019-0882-y http://www.springer.com/12555

Event-triggered Iterative Learning Control for Perfect Consensus Track-


ing of Non-identical Fractional Order Multi-agent Systems
Liming Wang and Guoshan Zhang* 

Abstract: This paper is devoted to the perfect tracking problem of output consensus for a class of non-identical
fractional order multi-agent systems (NIFOMASs), in which different agents have different and unknown fractional
orders and dynamic functions. For the NIFOMASs including one leader agent and multiple follower agents, by de-
signing the event-triggered mechanism along an iteration axis and introducing it into the iterative learning controller,
an event-triggered iterative learning consensus protocol is proposed to reduce the number of controller update and
to save the communication resource. By analyzing the convergence of learning process, the sufficient conditions are
derived to guarantee that the output consensus tracking can be perfectly achieved over the finite time interval as the
iteration step goes to infinity. Finally, three numerical examples are presented to demonstrate the effectiveness and
wide application scope of the proposed control strategy.

Keywords: Event-triggered control, fractional-order multi-agent systems, iterative learning control, non-identical
fractional orders, output consensus tracking.

1. INTRODUCTION All of the mentioned literatures are concerned with


the time-triggered control strategy. Since the control in-
put is updated continuously and each agent communi-
With the research of the integer-order multi-agent sys- cates with its neighbors at each instant, the time-triggered
tems (IOMASs) [1–3] and the development of the frac- control strategy will consume large quantity of energy
tional calculus [4], the fractional-order multi-agent sys- and lead to the communication resource overload. On the
tems (FOMASs) have been paid more and more atten- other hand, in the practical multi-agent systems, agents
tion. It has been found that the dynamics of multi-agent are often equipped with microprocessors, and the infor-
systems working in fluids or porous media should be de- mation is transmitted via the communication networks
scribed by the fractional-order calculus [5]. The integer- among agents. Due to the limited power supplies of the
order systems can be considered as the special cases of equipped microprocessors and the bandwidth constraints
fractional-order systems, though, due to the essential dif- of the communication networks, it is significative to de-
ference between them, most results of IOMASs can’t be sign the energy-saving control strategy with the lower
simply extend to the FOMASs [6]. Thus, it is meaningful communication load for the FOMASs. In order to reduce
to investigate the FOMASs. A critical issue coming from the number of controller update and save the communica-
the control of FOMASs is the consensus control problem, tion resource, Yu et al. [16] proposed the sampled-data
the goal of which is to design the distributed control pro- protocol for the consensus of FOMASs. This sampled-
tocol such that the states or outputs of all agents reach data protocol requires that each agent updates its con-
an agreement on the shared information [7]. The consen- troller by using data at periodical sampling instants, which
sus problem of FOMASs was first considered in [8]. Af- still leads to some unnecessary energy consumption and
terwards, various kinds of models of FOMASs, such as information transmission. The event-trigger control strat-
heterogeneities [9], uncertainties [10, 11], and time delays egy can overcome the drawbacks in the time-triggered or
[12] were proposed. Meanwhile, the consensus of FO- sampled-data control strategy by updating control inputs
MAS was studied by using different methods including only at the triggering instants determined by the triggering
feedback control [13], adaptive control [14] and sliding- conditions [17]. For example, Shi et al. [18] and Ye and Su
mode control [15].

Manuscript received October 18, 2019; revised March 29, 2020 and June 6, 2020; accepted June 21, 2020. Recommended by Associate
Editor Dan Zhang under the direction of Editor Hamid Reza Karimi. This work was supported by National Natural Science Foundation of
China under Grant No.61473202 and Natural Science Foundation of Hebei Province of China under Grant No. F2019408063.

Liming Wang and Guoshan Zhang are with the School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, P. R.
China (e-mails: wlm_shooker@163.com, zhanggs@tju.edu.cn). Liming Wang is also with the Faculty of physics and electronic information,
Langfang Normal University, Langfang 065000, P. R. China.
* Corresponding author.

c ICROS, KIEE and Springer 2021


Event-triggered Iterative Learning Control for Perfect Consensus Tracking of Non-identical Fractional Order ... 1427

[19] proposed the event-triggered control protocols with ference ones, the results in [31, 32] can’t be straightway
and without input delay for the consensus tracking of lin- applied to the perfect consensus of FOMASs.
ear FOMASs with and without input delay, respectively. Note that all of the mentioned literatures have dis-
Wang and Yang [20] designed the event-triggered control cussed the consensus problem of the FOMASs consist-
protocol with an exponentially decreasing threshold con- ing of agents with the identical fractional order. However,
dition for the consensus tracking of nonlinear FOMASs. these identical fractional order multi-agent systems can be
In [21], the event-triggered adaptive control protocol was considered as the special cases of the non-identical frac-
proposed to reduce the communication load and to achieve tional order multi-agent systems (NIFOMASs), in which
the consensus tracking of FOMASs. In order to avoid the different agents have different factional orders. So far, the
continuous measurements of states in [18–21], Chen et consensus control problem has been investigated for the
al. [22] proposed the sampled-data event-triggered pro- FOMASs with two kinds of fractional orders [33, 34], yet
tocol by combining the event-triggered strategy with the for the more general models such as NIFOMASs, there
sampled-data strategy. has been no report about consensus. The difference of
The consensus in the above literatures is achieved only fractional orders among agents can enhance the hetero-
when time goes to infinity, while some practical coordina- geneities of FOMASs and can make the design of consen-
tion control tasks, for example, the trajectory keeping of sus protocol and the stability analysis of closed-loop error
satellite [23], require that consensus is perfectly achieved system more difficult. Thus, the consensus control of NI-
from beginning to end during the overall process of ex- FOMASs is a challenging problem.
ecuting the given tasks. Thus, it is necessary to investi- In this paper, we propose the event-triggered ILC pro-
gate the perfect control problem of FOMASs. The itera- tocol for the perfect consensus tracking of NIFOMASs.
tive learning control (ILC) strategy can achieve the perfect Firstly, the consensus tracking problem of NIFOMASs is
consensus by creating a two-dimensional process with a formulated. Secondly, the event-triggered mechanism is
time axis and an iteration axis [24]. Here the perfect con- designed along an iteration axis, and the event-triggered
sensus refers to the phenomena that the states or outputs ILC protocol is proposed, and the consensus convergence
of all agents reach an agreement over a finite time inter- conditions are obtained by analyzing the convergence of
val as the iteration step goes to infinity. The ILC strat- learning process. Finally, three numerical examples are
egy has been applied to achieve the perfect consensus of presented to verify the theoretical results. The main con-
IOMASs and some instructive results have been obtained tributions of this paper can be summarized as follows:
[25–27], while for the FOMASs, just a few literatures have 1) This paper for the first time investigates the perfect
been published based on the ILC strategy [28–30]. For ex- consensus tracking problem for a class of more general
ample, Lv et al. [28] proposed the fractional-order ILC leader-following NIFOMASs via the event-triggered ILC
protocol for the perfect consensus of FOMASs with com- strategy. In this paper, different agents have the differ-
munication time delay. Luo et al. [29] designed the ILC ent and unknown fractional orders and dynamic functions.
protocols with the forgetting factors and the initial states The existing models of integer-order or fractional-order
learning laws to achieve the perfect consensus of linear leader-following multi-agent systems can be considered as
and nonlinear FOMASs, respectively. For the FOMASs the special cases of the proposed models in this paper.
subject to the interval uncertainties and parameter cou- 2) Different from [18–22], in the proposed event-
pling, Wang and Zhang [30] proposed the observer-type triggered ILC protocol, the control inputs depend on time
fractional-order ILC protocol to achieve the perfect track- and iteration steps, and the triggering events occur only
ing of consensus and the monotonic convergence of input along an iteration axis. Compared with [25–30], the de-
errors. In [25–30], the control inputs were always updated signed event-triggered mechanism can avoid the controller
at each iteration step and the information was transmit- update at each iteration step and do not require the neigh-
ted at all time. The unnecessary update and the redundant boring agents to communicate during some iteration steps,
communication will lead to the waste of energy. It is a bet- thus can reduce the number of controller update and save
ter energy-saving method to introduce the event-triggered the communication resource.
strategy into ILC consensus protocol. To the best of au- 3) In the proposed protocol, the controller update and
thors’ knowledge, until now, the consensus problem of the communication occur only at the triggering iteration
FOMASs has not been investigated by using the event- steps determined by the event-triggered condition. Un-
triggered ILC strategy. There have been some results der the proposed protocol, the output consensus of NIFO-
about the event-triggered ILC protocols for the perfect MASs can be perfectly achieved over the finite time inter-
consensus of multi-agent systems depicted by the discrete- val as the iteration step goes to infinity, and Zeno behavior
time equations [31] or by the continuous-time integer- can be naturally avoided.
order differential equations [32]. However, due to the dif- The rest of this paper is organized as follows: At first,
ference between the fractional-order differential systems in Section 2, notations, and some existing results about
and the integer-order differential ones/discrete-time dif- the fractional order calculus are introduced. The output
1428 Liming Wang and Guoshan Zhang

consensus tracking problem of NIFOMASs is formulated. Definition 3: Let α ∈ [0, 1]. The α order derivative for
Subsequently, the consensus criteria are derived and the the function f (t) is defined as
main theoretical results are given in Section 3. Moreover,
three numerical examples are presented in Section 4. Fi- Dtα f (t)
t0

nally, the conclusions are drawn from the present studies 
 f (t), α = 0,
in the last section.
 Z t
1 −α d f (τ)


= Γ(1 − α) t0 (t − τ) dτ, α ∈ (0, 1),


2. BACKGROUND AND PRELIMINARIES



 d f (t)

 , α = 1.
In this section, some basic notations, useful lemmas and dt
the algebraic graph theory as well as the model of NIFO- Definition 3 enables ones to easily classify the models
MASs are presented. of leader and followers according to the values of α.
Based on Definition 3, the following lemma can be de-
2.1. Notations rived.
Let R, Rm×n and Z+ denote the set of real numbers, the Lemma 1: If the function f (t, x(t)) is continuous, then
set of m × n real matrices and the set of positive integers, the initial value problem of
respectively. The superscript T denotes the transposition ( α
Dt x(t) = f (t, x(t)), 0 < α 6 1,
of vector or matrix, and Im denotes the m × m identity ma- t 0

trix. For a given vector x = [x1 , x2 , · · · , xn ]T , kxk p denotes x(t0 ) = xt0 ,


where 1 ≤ p ≤ ∞. In particular, kxk1 =
its l p vector norm, √
n is equivalent to the following nonlinear Volterra integral
∑k=1 |xk |, kx2 k2 = xT x, and kxk∞ = maxk=1,2,··· ,n |xk |. k·k equation
refers to the l∞ norm for vectors and refers to the l∞ vec-
1
Z t
tor norm induced matrix norm for matrices. k · k is sub- x(t) = xt0 +
α−1
(t − s) f (s, x(s))ds.
multiplcative, i.e., kCDk ≤ kCkkDk, where C ∈ Rm×n and Γ(α) t0
D ∈ Rn×q . For any vector function h(t) : [0, T ] → Rn , its Lemma 1 is the direct extension of Lemma 1 of [37]
λ -norm is defined as kh(t)kλ = supt∈[0,T ] {e−λt kh(t)k}, over the interval α ∈ (0, 1], thus its proof is omitted.
λ > 0. The notation ⊗ denotes the Kronecker product of Note: In the following sections, 0 Dtα f (t) is denoted as
two matrices. diag(c1 , c2 , · · · , cn ) denotes the diagonal ma- α
Dt f (t) for simplicity.
trix with diagonal elements ci (i = 1, 2, · · · , n). Matrices, if
not explicitly stated, are assumed to have appropriate di- 2.3. Model description and problem formulation
mensions. The description about graph theory can refer to
In this subsection, we consider the multi-agent systems
[35].
that consist of a fractional-order leader agent and multi-
ple fractional-order follower agents working in a repeat-
2.2. Some definitions and lemmas
able control environment and having different fractional
The following definitions and lemmas will be used for orders from each other. Meanwhile, the states of all agents
the convergence analysis in the main results. are not measurable, and the only available information is
Definition 1 [36]: The definition of fractional integral their output signals. The leader agent is labeled as 0 and
is described by its equation is described by
1
Z t (
−α
t0 Dt f (t) = (t − τ)α−1 f (τ)dτ, Dtα0 x0 (t) = f0 (t, x0 (t)),
Γ(α) t0 (1)
y0 (t) = C0 (t)x0 (t),
0 < α < 1, t > t0 ,
where t ∈ [0, T ], and the fractional order α0 ∈ [0, 1]. x0 (t) ∈
where Γ(α) is the well-known Gamma function.
Rn0 , y0 (t) ∈ Rm and f0 (·) ∈ Rn0 are respectively the state,
Definition 2 [36]: The Caputo derivative is defined as
output, nonlinear dynamical function of the leader agent,
C
Dtα f (t) = t0 Dtα−m Dtm f (t), α ∈ (m − 1, m), C0 (t) ∈ Rm×n0 , m, n0 ∈ Z+ .
t0
Based on Definition 3, the model of leader agent can be
where Dtm (·) is the classical m-order derivative, m ∈ Z+ . classified into three kind of cases according to the values
This paper will use the Caputo fractional operator in of α0 : (a1) a virtual leader agent when α0 = 1; (a2) an
Definitions 1 and 2 to model the dynamics of multi-agent integer-order leader agent when α0 = 1; (a3) a fractional-
systems. order leader agent when α0 ∈ (0, 1). Since the leader agent
By combining the Caputo fractional-order derivative only provides a target trajectory for the follower agents
(i.e., α ∈ (0, 1)) of the function f (·) with the function f (·) and the values of α0 don’t affect the consensus conver-
(i.e., α = 0) and its first-order derivative (α = 1), we can gence conditions, thus three kind of cases will not be dis-
give the following definition. cussed separately in the following sections.
Event-triggered Iterative Learning Control for Perfect Consensus Tracking of Non-identical Fractional Order ... 1429

The follower agents are labeled by j, j ∈ {1, 2, ..., N}, Define the new variables as follows:
and the interaction topology among all the follower agents
Ξ = diag(Dtα1 (·), Dtα2 (·), · · · , DtαN (·)),

is described by the iteration invariant directed graph G = 

B(t) = diag(B1 (t), B2 (t), · · · , BN (t)),

(V , E, A), where V = {1, 2, ..., N}, E and A are the ver- 
(3)
tex sets, the edge sets and the weighted adjacency matrix C(t) = diag(C1 (t),C2 (t), · · · ,CN (t)),

of G, respectively. Thus, the interaction topology among


D(t) = diag(D1 (t), D2 (t), · · · , DN (t)).
all agents can be described by the graph Ḡ = {V̄ , Ē, Ā},
where V̄ = {0, 1, 2, ..., N}, Ē and Ā are the vertex sets, the Then, the compact form of (2) is
edge sets and the weighted adjacency matrix of Ḡ, respec-
(
tively. Ξxk (t) = f (t, xk (t))+B(t)uk (t),
The equation of follower agent j is described by (4)
yk (t) = C(t)xk (t) + D(t)uk (t),
( α
Dt j xk, j (t) = f j (t, xk, j (t)) + B j (t)uk, j (t), where xk (t) = [xk,1 T T
(t), xk,2 T
(t), · · · , xk,N (t)]T , uk (t) =
(2)
yk, j (t) = C j (t)xk, j (t) + D j (t)uk, j (t), [uk,1 (t), uk,2 (t), · · · , uk,N (t)] , f (t, xk (t)) = [ f1T (t, xk,1 (t)),
T T T T

f2T (t, xk,2 (t)), · · · , fNT (t, xk,N (t))]T .


where t ∈ [0, T ], the fractional order α j ∈ (0, 1], and the The output tracking error of follower agent j at the kth
iteration step k ∈ {0, 1, 2, ...}. xk, j (t) ∈ Rn j , yk, j (t) ∈ Rm iteration is defined as
and uk, j (t) ∈ R p j are the states, outputs and control inputs,
respectively. C j (t) ∈ Rm×n j , B j (t) ∈ Rn j ×p j , D j (t) ∈ Rm×p j , ek, j (t) = y0 (t) − yk, j (t). (5)
n j , p j ∈ Z+ , j = 1, 2, ..., N. Moreover, C j (t), B j (t) and
α j are unknown in advance, and the unknown nonlinear The control objective in this paper is to design an ap-
function f j (t, xk, j (t)) is continuously differentiable respect propriate ILC protocol to guarantee that the outputs of all
to xk, j (t) and satisfies Assumption 1. follower agents can converge to the output of the leader
Assumption 1 [24, 30, 38]: The unknown nonlinear agent over a finite time interval with arbitrary high preci-
function f j (t, xk, j (t)) satisfies sion, that is,

k f j (t, xk+1, j (t)) − f j (t, xk, j (t))k lim kek, j (t)k = 0, j = 1, 2, · · · , N, (6)
k→∞
≤ L j kxk+1, j (t) − xk, j (t)k,
for all t ∈ [0, T ]. Meanwhile, if (6) holds, then the perfect
where the constant L j > 0, j = 1, 2, ..., N. consensus tracking of NIFOMASs (1) and (2) is achieved.
Remark 1: Assumption 1 is the Lipschitz condition of To simplify the analysis, the following assumptions are
vector function f j (t, xk, j (t)) and has been widely consid- needed.
ered in the literatures of the iterative learning control algo- Assumption 2 [24]: The initial output of each follower
rithms [24, 30, 38]. In this paper, Assumption 1 is needed agent is reset to be the same as that of the (virtual) leader
to establish the convergence of iterative learning process. agent after each iteration step.
Based on Definition 3, the models of follower agents Remark 3: Assumption 2 requires that the initial out-
can be classified into three kinds of cases according to the put of each follower agent is the same as that of leader
values of α j : (b1) the integer-order follower agents when agent. This requirement is rational in many applications.
α j = 1, j = 1, 2, ..., N; (b2) the identical fractional order For example, in the multi-agent systems including posi-
follower agents when α1 = · · · = αN ∈ (0, 1); (b3) the non- tion and velocity states, the output may be the velocity
identical fractional order follower agents when the values information. In this case, it is rational to assume that all
of α j are different for different follower agents. Since the agents have the zero initial velocities (i.e., zero initial out-
models in Cases (b1) and (b2) can be considered as the puts). On the other hand, for the cases that Assumption 2
special cases of the models in Case (b3), this paper mainly isn’t satisfied, for example, the fixed or varying initial out-
focuses on the models of follower agents in Case (b3). put errors, the iterative learning controller with the initial
Thus, the FOMASs in this paper are actually the NIFO- error learning law can be applied [39].
MASs containing one (virtual) leader agent and N non- Assumption 3 [18]: The communication graph Ḡ con-
identical fractional-order follower agents. tains a spanning tree with the (virtual) leader agent being
Remark 2: Most of existing literatures of FOMASs the root.
usually studied the models consisting of a group of iden-
tical fraction-order agents [8–16, 18–22, 28–30], and [33, 3. MAIN RESULTS
34] proposed the models of FOMASs consisting of agents
with two kinds of fractional orders. In contrast, in this pa- In this section, we will investigate the perfect tracking
per, different agents have different fractional orders. problem of consensus for NIFOMASs (1) and (2).
1430 Liming Wang and Guoshan Zhang

3.1. Design of event-triggered ILC protocol when their changes during some successive iteration steps
In order to reduce the number of controller update and are very small, the P-type ILC controller is designed as
save the communication resource, an iterative learning follows:
control law is constructed based on the event-triggered If k ∈ [kljj , kljj +1 ), l j ∈ Z+ , the controller is designed as
strategy.
Let {kljj }, l j ∈ Z+ denote the sequence of triggering iter- uk, j (t) = uk j , j (t).
lj
(12a)
ation steps for the follower agent j, where the triggering it-
eration step kljj represents the iteration number of follower At the triggering iteration step kljj +1 , the controller is
agent j at the l j th output broadcasting. The triggering iter- updated according to
ation step kljj is defined as
uk j (t) = uk j , j (t) + Γ j (t)ξk, j (t), (12b)
l j +1 , j lj
k0j = 0,
kljj +1 = inf{k > kljj : gk, j (t) ≥ 0}, l j ∈ Z+ , where u0, j (t) = 0, t ∈ [0, T ], ξk, j (t) has been defined in
(8), and the learning gain matrix Γ j ∈ R p j ×m > 0 will be
j = {1, 2, · · · , N}, (7) designed later.
If the interval [kljj , kljj +1 ) is too large, the consensus
where the triggering function gk, j (t) will be designed later.
tracking is difficult to achieve. On the contrary, the small
Define the cooperative error as
interval of [kljj , kljj +1 ) will lead to the increase of controller
ξk, j (t) = ∑ a ji (ykli ,i (t) − yk j , j (t)) update number and the waste of communication resource.
i∈N j
i lj Therefore, one of the key problems for the design of con-
troller (12) is to determine the appropriate triggering itera-
+ d j (y0 (t) − yk j , j (t)), k ∈ [kljj , kljj +1 ), (8)
lj tion steps kljj and kljj +1 . To this end, the triggering function
of follower agent j is designed as
where a ji is the (i, j)th entry in the adjacency matrix A,
and N j is the neighborhood set of follower agent . More gk, j (t) = kH j ⊗ Im k kδk, j (t)k − µ j k(H j ⊗ Im )ek (t)k ,
specifically, if the follower agent j can directly receive the j = 1, 2, · · · , N, (13)
information from the follower agent i, then a ji = 1, and
the follower agent i is called as a neighbor of follower where H j denotes the jth row of H, and the event-triggered
agent j. Otherwise, a ji = 0 and the follower agent i is not coefficient µ j > 0 will be designed later.
the neighbor of follower agent j. Moreover, d j = 1 if the The block diagram of iterative learning controller (12)
follower agent j can directly receive the information of the with the triggering conditions (7) and (13) is given in
leader agent, and d j = 0 otherwise. Fig. 1, where agents i, i ∈ N j denote the neighbors of
The output measurement error of follower agent j at the follower agent j. First, based on the output of follower
kth iteration is defined as agent j at the triggering iteration step kljj , the outputs from
δk, j (t) = yk j , j (t) − yk, j (t), k ∈ [kljj , kljj +1 ), l j ∈ Z+ . its neighbors i, i ∈ N j at the triggering iteration step klii
lj and the output y0 (t) of leader agent, the follower agent j
(9) calculates the cooperative error ξk, j (t) and the triggering
function gk, j (t) by (8) and (13), respectively. It should be
Define the variables as follows:
pointed out that the communication channel between the
T T T
(t), · · · , ξk,N (t)]T , leader agent and the follower agent j exists only when the

ξ (t) = [ξk,1 (t), ξk,2
 k

follower agent j can receive directly the information of
ek (t) = [eTk,1 (t), eTk,2 (t), · · · , eTk,N (t)]T , (10)

T T T
(t), · · · , δk,N (t)]T .

δk (t) = [δk,1 (t), δk,2

By (5), (9) and (10), (8) is rewritten as

ξk (t) = (H ⊗ Im )(ek (t) − δk (t)),


k ∈ [kljj , kljj +1 ), l j ∈ Z+ , (11)

where H = H 0 + D0 , H 0 is the Laplacian matrix of graph


G, and D0 = diag(d1 , d2 , ..., dN ).
Assume that each follower agent updates its control in-
puts only at its triggering iteration steps determined by its
own triggering condition, and broadcasts its outputs. Con-
sidering the fact that the control inputs needn’t be updated Fig. 1. Block diagram of event-triggered ILC strategy.
Event-triggered Iterative Learning Control for Perfect Consensus Tracking of Non-identical Fractional Order ... 1431

the leader agent. Then, according to the triggering condi- will not exhibit Zeno behavior under the ILC protocol (12)
tions (7) and (13), the follower agent j determines whether with the triggering conditions (7) and (13).
the events occur. If gk, j (t) ≥ 0, then the triggering events The detailed proof of Theorem 1 has been given in Ap-
occur, and the follower agent j updates its control inputs pendix A.
according to (12b) and broadcasts its current outputs. If Remark 5: Wang and Zhang [30] proposed the PDα -
gk, j (t) < 0, then no triggering event occurs, and accord- type ILC protocol for the perfect consensus tracking of
ing to (12a), the control inputs of follower agent j keep the FOMASs with the identical fractional order. Xiong et
unchanged. al. [31] achieved the perfect consensus tracking of dis-
To facilitate the subsequent theoretical analysis, we de- crete time multi-agent systems by the D-type ILC con-
note all triggering iteration steps kljj j = 1, 2, ..., N as a troller with the event-triggered strategy and quantization.
sequence {kl }, l ∈ Z+ , and assume that k01 = · · · = k0N = In this paper, different agents have different fractional or-
k0 = 0. Then, based on (12a) and (12b), we have: ders. Thus, the proposed model is different from those in
For k ∈ [kl , kl+1 ), the controller is [30,31]. Meanwhile, due to the difference of fractional or-
ders among agents, the PDα -type ILC controller in [30]
uk (t) = ukl (t). (14a) and the D-type ILC controller in [31] are no longer suit-
able for the proposed model. This paper proposes the
At the triggering iteration step kl+1 , the controller is event-triggered P-type ILC protocol for the perfect con-
sensus tracking of NIFOMASs. The proposed protocol
ukl+1 (t) = ukl (t) + Γ(t)(H ⊗ Im )(ek (t) − δk (t)), (14b) avoids the requirements of [30] for controller update at
each iteration step and for the communication at all time,
where Γ(t) = diag(Γ1 (t), Γ2 (t), · · · , ΓN (t)). ek (t) and δk (t) thus can reduce the number of controller update and save
have been defined in (5), (9) and (10). the communication resource.
Remark 4: Among the existing literatures about event- Since the matrix Γ(t) contains ∑Nj=1 p j × m components
triggered control of FOMASs [18–22], the event-triggered needing to be designed, it is inconvenient to use the con-
controllers were only related to time, and the triggering dition (15). However, the case can be improved if D j (t)
events occured along time axis. [32] proposed the event- satisfies Assumption 4.
triggered ILC protocol for the consensus of IOMASs, in Assumption 4 [40]: Assume that D j (t) is full row rank
which the control inputs depended on both time and itera- for all t ∈ [0, T ].
tion steps, and the triggering events occured along a time If Assumption 4 holds, then the gain matrix Γ j (t) can
axis. In contrast, this paper proposes the event-triggered be designed as
ILC protocol with the occurance of the triggering events
along an iteration axis for the perfect consensus tracking Γ j (t) = γ(D j (t))T (D j (t)(D j (t))T )−1 , (16)
of NIFOMASs. The proposed protocol gets rid of the re-
quirement of existing event-triggered ILC strategy [32] where the learning gain coefficient γ is a positive scalar
that the control inputs update at each iteration step, thus constant to be designed later.
can remarkably reduce the number of controller update. According to Schur triangularization theorem [24],
there always exists an appropriate unitary matrix U such
3.2. Convergence analysis of consensus that the matrix ∆ = U ∗ HU is an upper triangular matrix
with the diagonal entries being the eigenvalues of H, that
For the NIFOMASs under the controller (12) with the
is
triggering conditions (7) and (13), we obtain the following
theorem.
 
λ1 θ1,2 θ1,3 · · · θ1,N
Theorem 1: Let Assumptions 1-3 hold for the leader-  0 λ2 θ2,3 · · · θ2,N 
 
following NIFOMASs (1) and (2). If the learning gain ma- ∆ =  0 0 λ3 · · · θ3,N  ,
 
(17)
trix Γ(t) and the event-triggered coefficient µ j satisfy  .. .. .. . . .. 
 . . . . . 
kINm − D(t)Γ(t)(H ⊗ Im )k 0 0 0 0 λN
+ kD(t)Γ(t)(H ⊗ Im )k µmax ≤ ρ0 < 1, (15) where λ j ( j = 1, 2, ..., N) are the eigenvalues of H and θi, j
is the (i, j)th entry of ∆.
where µmax = max j=1,2,...,N {µ j } and ρ0 ∈ (0, 1), max(·) Let a constant matrix Q = diag(β , β 2 , · · · , β N ), β 6= 0.
denotes the maximal value function, then, by applying the Define a new norm k(·)ks as
ILC protocol (12) with the triggering conditions (7) and
(13) into (2), the output consensus between (1) and (2) k(·)ks = [(QU ∗ ) ⊗ Im ](·)[(UQ−1 ) ⊗ Im ] , (18)
can be achieved for any t ∈ [0, T ] as the iteration step k
goes to infinity, that is, limk→∞ kek (t)k = 0, ∀t ∈ [0, T ]. where k · k denotes the l∞ norm for vectors and the l∞ vec-
In addition, the leader-following NIFOMASs (1) and (2) tor norm induced matrix norm for matrices.
1432 Liming Wang and Guoshan Zhang

Based on (16), (18) and Theorem 1, we can obtain the following theorem.

Theorem 2: Let Assumptions 1-4 hold for the leader-following NIFOMASs (1) and (2) and let the gain matrix Γ_j(t) be designed as (16). If the learning gain coefficient γ and the event-triggered coefficients µ_j satisfy

max_{j=1,2,...,N} {|1 − γλ_j| + |γλ_j| µ_max} ≤ ρ_1 < 1,   (19)

where µ_max = max_{j=1,2,...,N}{µ_j}, ρ_1 ∈ (0, 1), and λ_j (j = 1, 2, ..., N) are the eigenvalues of H, then, by applying the ILC protocol (12) with the triggering conditions (7) and (13) to (2), the output consensus between (1) and (2) is achieved for any t ∈ [0, T] as the iteration step k goes to infinity, that is, lim_{k→∞} ‖e_k(t)‖ = 0, ∀t ∈ [0, T]. In addition, the leader-following NIFOMASs (1) and (2) will not exhibit Zeno behavior under the ILC protocol (12) with the triggering conditions (7) and (13).

The proof of Theorem 2 is given in Appendix B.

In Theorem 1, the event-triggered coefficients µ_j, j = 1, 2, ..., N and the learning gain matrix Γ(t), which contains ∑_{j=1}^{N} p_j × m components, need to be designed, whereas in Theorem 2, in addition to the event-triggered coefficients µ_j, j = 1, 2, ..., N, only a positive scalar constant γ needs to be designed. Therefore, Theorem 2 can be applied more conveniently.

Remark 6: To ensure that (19) holds, µ_j, j = 1, 2, ..., N and γ can be designed in the following three steps. First, by solving the inequality max_{j=1,2,...,N}{|1 − γλ_j|} < 1 with γ > 0, a feasible solution γ̄ of γ is obtained. Second, by solving the inequality max_{j=1,2,...,N}{|1 − γ̄λ_j| + |γ̄λ_j|µ_max} < 1 with µ_max ≥ 0, a feasible solution µ̄_max of µ_max is obtained. Third, the event-triggered coefficients are set as µ_j ≤ µ̄_max, j = 1, 2, ..., N. Determining the optimal values of γ and µ_j by minimizing an appropriate performance index in γ and µ_j would be a better method, but designing the optimal event-triggered ILC protocol for the perfect consensus of NIFOMASs is still a challenging problem.

By summarizing Theorem 2, Algorithm 1 is presented to describe the procedure of perfect consensus tracking via the proposed event-triggered ILC protocol.

Algorithm 1 shows that the consensus tracking of (1) and (2) can be achieved with only one-directional communication between agents at the triggering iteration steps. As shown in Algorithm 1, the triggering events occur along the iteration axis, and there may exist some special iteration steps in which some agents do not communicate with any agent. In particular, if agent j and all of its neighbors i (i ∈ N_j) do not satisfy their own triggering conditions during several consecutive iteration steps, then agent j neither sends its outputs to agents r (j ∈ N_r) nor receives the outputs of its neighbors i (i ∈ N_j). Thus, the proposed event-triggered ILC strategy can reduce the communication load.

Algorithm 1: Procedure of the event-triggered ILC algorithm.
Initialization:
• Set the simulation time to the terminal time and the number of follower agents to N;
• Give the values of the parameters in (1) and (2), the initial outputs of the agents, and the communication topology so that Assumptions 1-4 are satisfied;
• Let the event-triggered coefficients µ_j and the learning gain coefficient γ satisfy (19);
• Initial control inputs u_{0,j}(0) = 0, j = 1, 2, ..., N;
• Initial triggering iteration steps k_0^j = 0, j = 1, 2, ..., N;
• Initial parameters l_j = 0, j = 1, 2, ..., N;
• Initial iteration index k = 0;
1: Repeat
2: for j = 1, 2, ..., N do
     Receive y_{k_{l_i}^i, i}(t) from neighbors i (i ∈ N_j) and y_0(t) from the leader (if d_j = 1);
     Calculate ξ_{k,j}(t) and g_{k,j}(t) according to (8) and (13), respectively;
3:   if g_{k,j}(t) ≥ 0 then
        Update u_{k,j}(t) according to (12.b);
        Send y_{k,j}(t) to agents r (j ∈ N_r);
        Set k_{l_j}^j = k;
        Set l_j = l_j + 1;
4:   else
        Keep u_{k,j}(t) unchanged according to (12.a);
5:   end if
6: end for
   Reset the initial outputs of all followers;
   Set k = k + 1;
7: Until the perfect consensus tracking is achieved.

If µ_j = 0, j = 1, 2, ..., N, then µ_max = 0 and the event-triggered ILC protocol (12) reduces to the traditional ILC protocol

u_{k+1,j}(t) = u_{k,j}(t) + Γ_j(t) ξ_{k,j}(t).   (20)

In this case, we can obtain Corollary 1 from Theorem 2.

Corollary 1: Let Assumptions 1-4 hold for the leader-following NIFOMASs (1) and (2) and let the gain matrix Γ_j(t) be designed as (16). If the learning gain coefficient γ satisfies

max_{j=1,2,...,N} {|1 − γλ_j|} ≤ ρ_2 < 1,   (21)

where ρ_2 ∈ (0, 1), then, by applying the ILC protocol (20) to (2), the output consensus between (1) and (2) is achieved for any t ∈ [0, T] as the iteration step k goes to infinity, that is, lim_{k→∞} ‖e_k(t)‖ = 0, ∀t ∈ [0, T].
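To make the three-step tuning of Remark 6 concrete, the sketch below searches a grid of γ values and returns the largest admissible µ_max for condition (19); the eigenvalues of H used in the example are illustrative placeholders, not values from the paper.

```python
import numpy as np

def design_gamma_mu(lambdas, rho1=0.95, gamma_grid=None):
    """Three-step design of Remark 6 for condition (19).

    Step 1: keep only gamma with max_j |1 - gamma*lambda_j| < 1 (this is (21)).
    Step 2: for such gamma, the largest mu_max obeying
            max_j (|1 - gamma*lambda_j| + |gamma*lambda_j| * mu_max) <= rho1
            is min_j (rho1 - |1 - gamma*lambda_j|) / |gamma*lambda_j|.
    Step 3: any mu_j <= mu_max is then admissible.
    """
    lambdas = np.asarray(lambdas, dtype=float)
    if gamma_grid is None:
        gamma_grid = np.linspace(1e-3, 2.0 / lambdas.max(), 2000)
    best = None
    for g in gamma_grid:
        a = np.abs(1.0 - g * lambdas)
        b = np.abs(g * lambdas)
        if a.max() >= 1.0:
            continue
        mu_max = np.min((rho1 - a) / b)
        if mu_max >= 0 and (best is None or mu_max > best[1]):
            best = (g, mu_max)
    return best  # (gamma, mu_max), or None if (19) cannot be met on the grid

# Hypothetical eigenvalues of H; they are positive for a connected graph
# with at least one follower pinned to the leader.
print(design_gamma_mu([0.8, 1.5, 2.4]))
```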
 " #
 α −0.2 sin(x k,21
(t)) + 0.5x k,2 2


Dt 2 xk,2 (t) =



 −0.4x k,2 2
(t)
 " #
0.2 0 (23b)

 + uk,2 (t),


 0 0.5

 h i h i
Fig. 2. Communication topology of Example 1.

y (t) =
k,2 1 1 xk,2 (t) + 0.4 0.4 uk,2 (t),
 α3

Dt xk,3 (t)

 " #
where ρ2 ∈ (0, 1), then, by applying the ILC protocol (20) 0.6 sin(x (t)) + 0.6 cos(x (t)) − 0.5x

 k,3 1 k,3 1 k,3 2
=


into (2), the output consensus between (1) and (2) can be 

 −0.4xk,32 (t)
achieved for any t ∈ [0, T ] as the iteration step k goes to " #
infinity, that is, limk→∞ kek (t)k = 0, ∀t ∈ [0, T ].  0.2 0
 + uk,3 (t),


0 0.5





 h i h i

y (t) =
4. SIMULATION RESULTS k,3 1 1 xk,3 (t) + 0.4 0.4 uk,3 (t),
(23c)
In this section, to demonstrate the effectiveness of The-
where k ∈ Z+ and (α1 , α2 , α3 ) = (1.0, 0.9, 0.8).
orem 2, we present three numerical examples and com-
The initial states of agents are set as xk,11 (0) =
pare the proposed control strategy to the ILC methods in
xk,21 (0) = xk,31 (0) = 1, xk,12 (0) = xk,22 (0) = xk,32 (0) = −1,
the existing literatures.
and the zero initial control inputs are adopted and the ter-
minal time T = 3. According to the method in Remark 6,
4.1. Example 1 the learning gain coefficient is designed as γ = 0.1 and
µmax is set as µmax = 0.15. It is easy to verify the con-
Consider the FOMASs consisting of one leader agent vergence condition (19) is satisfied. In the simulation, in
and three follower agents with different fractional orders order to satisfy µ j ≤ µmax , j = 1, 2, 3, the event-triggered
over the directed communication graph in Fig. 2, where coefficients of follower agents are selected as µ1 = 0.05,
the dash arrows denote the communication between the µ2 = 0.1 and µ3 = 0.15.
virtual leader and the followers, while the communication Fig. 3(a) shows the infinity norms of output tracking
between different followers is labeled by the solid arrows. errors versus the iteration numbers, in which the output
It is easy to know that the communication graph in Fig. 2 tracking errors are gradually eliminated with the increase
satisfies Assumption 3. of iteration number. Fig. 3(b) and Fig. 3(c) present the trig-
The dynamical equation of the virtual leader agent is gering iteration steps and the control inputs versus the iter-
described by ation numbers, respectively. It can be seen that each agent
updates its control inputs only when its own triggering
 " # " # condition is violated. The relations between the triggering
 α x0 1
(t) cos(4πt) intervals and the triggering iteration steps are presented
D 0 = ,


 t x0 (t)

−0.4 sin(5πt) − 1 in Fig. 4, where the average triggering intervals of three

2
" # (22) follower agents are 1.56, 2.13 and 1.39, respectively. As
 h i x (t)

y0 (t) = 1 1 0 1 shown in Fig. 4, there exists a minimum triggering interval
,


x02 (t) equal to 1. Therefore, Zeno behavior can’t occur. Further-

more, there exist some special triggering iteration steps
where α0 = 0. (for example, the 90th-93th iteration steps), in which three
follower agents neither send nor receive any information.
The dynamical equations of the follower agents are de-
The results show that the proposed event-triggered ILC
scribed by
strategy can effectively reduce the number of controller
 " # update and the communication load.
 α 0.2 cos(xk,11 (t)) − 0.1xk,12 By applying the P-type ILC strategy of [25] into the
Dt xk,1 (t) =
 1
multi-agent systems (22) and (23), the simulation results


 −0.4xk,12 (t)
are obtained, as shown in Fig. 5. For comparison, the


 " #
0.2 0 (23a) learning gain matrix in Fig. 5 is selected as the same as
+ uk,1 (t),
that in Fig. 3. The convergence behaviors of output track-



 0 0.5
ing errors in Fig. 5 are similar to those in Fig. 3, but the



 h i h i
y (t) =
k,1 1 1 xk,1 (t) + 0.4 0.4 uk,1 (t), controllers in Fig. 5 update at each iteration step. Table 1
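For readers who wish to reproduce Example 1 numerically, the follower dynamics (23a)-(23c) can be integrated with a Grünwald-Letnikov discretization of the fractional derivative. The sketch below is a minimal illustration with a hypothetical input signal; the step size and the simplified handling of nonzero initial conditions are assumptions made for brevity, and this is not the authors' simulation code.

```python
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), computed recursively."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def simulate_fractional(f, alpha, x0, u, h):
    """Explicit GL scheme for D^alpha x = f(t, x, u):
    x_n = h^alpha * f(t_{n-1}, x_{n-1}, u_{n-1}) - sum_{j=1..n} w_j x_{n-j}.
    For alpha = 1 this reduces to the forward Euler method."""
    steps = len(u)
    w = gl_weights(alpha, steps)
    x = np.zeros((steps + 1, len(x0)))
    x[0] = x0
    for n in range(1, steps + 1):
        history = sum(w[j] * x[n - j] for j in range(1, n + 1))
        x[n] = h ** alpha * np.asarray(f((n - 1) * h, x[n - 1], u[n - 1])) - history
    return x

# Follower 1 of Example 1, eq. (23a), driven by a placeholder constant input.
def f1(t, x, u):
    drift = np.array([0.2 * np.cos(x[0]) - 0.1 * x[1], -0.4 * x[1]])
    B = np.array([[0.2, 0.0], [0.0, 0.5]])
    return drift + B @ u

h, T_end = 0.01, 3.0
u_seq = [np.array([0.1, 0.1])] * int(T_end / h)   # placeholder input, not the ILC input
x_traj = simulate_fractional(f1, alpha=1.0, x0=np.array([1.0, -1.0]), u=u_seq, h=h)
```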
Table 1. Comparison of control effect between the control strategy of [25] and the controller (12) of this paper.

Comparison term                                               | Agent 1 | Agent 2 | Agent 3
Iteration numbers                                             | 100     | 100     | 100
Controller updates (this paper)                               | 64      | 47      | 72
Controller updates (control strategy of [25])                 | 100     | 100     | 100
Iterations with neither sending nor receiving (this paper)    | 24      | 25      | 20
Iterations with neither sending nor receiving ([25])          | 0       | 0       | 0

Table 1 presents the detailed comparisons between Fig. 3 and Fig. 5. The results further demonstrate the superiority of the proposed protocol (12) in reducing the number of controller updates and in saving communication resources.

Fig. 3. γ = 0.1, µ_1 = 0.05, µ_2 = 0.1 and µ_3 = 0.15, simulation results under controller (12) with (7) and (13): (a) Maximal output tracking errors; (b) Events; (c) Control inputs.

Fig. 4. Relations between triggering intervals and triggering iteration steps in Example 1: (a) Follower 1; (b) Follower 2; (c) Follower 3.

4.2. Example 2

In order to demonstrate the effectiveness of the proposed control strategy in practical FOMASs, we consider multiple supercapacitor-based battery systems working in a repeatable control environment, in which the model of battery j is described by the electrical circuit shown in Fig. 6, including a bulk capacitance C, a surface capacitance Ĉ_j, an internal resistance R and a polarization resistance R̂. V_{k,j}(t) (V̂_{k,j}(t)) denotes the voltage across the bulk capacitor (the surface capacitor), and i_{k,j}(t) (U_{k,j}(t)) denotes the current (voltage) observed at the terminal of battery j. The dynamic behavior of the model shown in Fig. 6 can be described by [41]

D_t^{α_j} V_{k,j}(t) = (1/C) i_{k,j}(t),
D_t^{α_j} V̂_{k,j}(t) = −(1/(R̂ Ĉ_j)) V̂_{k,j}(t) + (1/Ĉ_j) i_{k,j}(t),
U_{k,j}(t) = V_{k,j}(t) + V̂_{k,j}(t) + R i_{k,j}(t).   (24)
Let x_{k,j}(t) = [V_{k,j}(t), V̂_{k,j}(t)]^T, u_{k,j}(t) = i_{k,j}(t) and y_{k,j}(t) = U_{k,j}(t). Then, (24) can be rewritten as

D_t^{α_j} x_{k,j}(t) = A_j x_{k,j}(t) + B_j u_{k,j}(t),
y_{k,j}(t) = C x_{k,j}(t) + D u_{k,j}(t),   (25)

where

A_j = [0 0; 0 −1/(R̂ Ĉ_j)],  B_j = [C^{−1}; Ĉ_j^{−1}],  C = [1 1],  D = R.   (26)

Table 2. Parameters of the multi-battery systems.

Parameter | α_1  | α_2  | α_3  | Ĉ_1 | Ĉ_2 | Ĉ_3 | C     | R     | R̂
Value     | 0.88 | 0.95 | 0.92 | 25  | 30  | 32  | 50000 | 0.015 | 0.02
Unit      | -    | -    | -    | F   | F   | F   | F     | Ohm   | Ohm

The desired reference trajectory (i.e., the output of the virtual leader agent) is

y_d(t) = t + 1.5 cos(t), t ∈ [0, 5].   (27)

Among different batteries, the terminal voltage information is transmitted via the directed network shown in Fig. 7, in which the dashed (solid) arrows denote the communication between the virtual leader and the followers (between different followers).

Fig. 5. Simulation results under the control strategy of [25]: (a) Maximal output tracking errors; (b) Iteration steps of controller update; (c) Control inputs.

Fig. 6. Electrical circuit model of a supercapacitor-based battery [41].

Fig. 7. Communication topology of Example 2.

In the simulation, the parameters of the batteries are given in Table 2. The initial voltages are set as V_{0,j}(0) = 1 V and V̂_{0,j}(0) = 0.5 V, j = 1, 2, 3, the initial currents are i_{0,j}(0) = 0 A, and the terminal time is T = 6. Similar to Example 1, according to the method in Remark 6, the learning gain coefficient and the event-triggered coefficients are designed as γ = 0.15, µ_1 = 0.13, µ_2 = 0.25 and µ_3 = 0.2. It is easy to verify that the convergence condition (19) is satisfied.

Fig. 8(a), (b) and (c) present the changes of the infinity norms of the output tracking errors, the triggering iteration steps and the control inputs with increasing iteration number, respectively. Fig. 9 shows the relations between the triggering intervals and the triggering iteration steps. Moreover, we also compare the control effect of the control strategy of [25] with that of the controller (12) of this paper. The results are similar to those in Table 1 and are thus omitted. All of the results show that, by using the proposed event-triggered ILC protocol, the leader-following multiple supercapacitor-based battery systems (25) and (27) achieve perfect consensus tracking, while the number of controller updates and the communication load are remarkably reduced.
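Using the parameters in Table 2, the matrices in (25) and (26) can be assembled directly; the snippet below is only a bookkeeping aid for Example 2 and uses the values listed above.

```python
import numpy as np

C_bulk, R, R_hat = 50000.0, 0.015, 0.02
C_surface = {1: 25.0, 2: 30.0, 3: 32.0}   # hat{C}_j from Table 2
alpha = {1: 0.88, 2: 0.95, 3: 0.92}

def battery_matrices(j: int):
    """State-space matrices of battery j in (25)-(26): x = [V, V_hat]^T, u = i, y = U."""
    Cj = C_surface[j]
    A = np.array([[0.0, 0.0],
                  [0.0, -1.0 / (R_hat * Cj)]])
    B = np.array([[1.0 / C_bulk],
                  [1.0 / Cj]])
    C = np.array([[1.0, 1.0]])
    D = np.array([[R]])
    return A, B, C, D

A1, B1, C1, D1 = battery_matrices(1)
```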
4.3. Example 3

In order to further demonstrate the applicability of the proposed control strategy to IOMASs, we consider the IOMASs consisting of one integer-order leader agent and four integer-order follower agents over the directed communication graph in Fig. 10. The dynamical equation of the leader agent is described by

ẋ_0(t) = [0.1 sin(x_{0_1}(t)) + 0.2 x_{0_2}(t); −0.2 cos(x_{0_2}(t))],
y_0(t) = [1 1] [x_{0_1}(t); x_{0_2}(t)].   (28)

The dynamical equations of the follower agents are described by

ẋ_{k,1}(t) = [0.5 cos(x_{k,1_1}(t)) + 0.2 x_{k,1_2}(t); −0.1 sin(x_{k,1_2}(t))] + [1 1.5; 1 −1] u_{k,1}(t),
y_{k,1}(t) = [1 1] x_{k,1}(t) + [0.4 0.4] u_{k,1}(t),   (29a)

ẋ_{k,2}(t) = [−0.1 x_{k,2_1}(t) + 0.2 x_{k,2_2}(t); sin(t) x_{k,2_1}(t)] + [1 2; 3 4] u_{k,2}(t),
y_{k,2}(t) = [2 2] x_{k,2}(t) + [0.4 0.4] u_{k,2}(t),   (29b)

Fig. 8. γ = 0.15, µ_1 = 0.13, µ_2 = 0.25 and µ_3 = 0.2, simulation results under controller (12) with (7) and (13): (a) Maximal output tracking errors; (b) Events; (c) Control inputs.

Fig. 9. Relations between triggering intervals and triggering iteration steps in Example 2: (a) Follower 1; (b) Follower 2; (c) Follower 3.

Fig. 10. Communication topology of Example 3.
ẋ_{k,3}(t) = [0.2 sin(x_{k,3_1}(t)); −0.2 sin(x_{k,3_2}(t)); −0.4 cos(x_{k,3_3}(t))] + [1 2; 3 4; 0.5 0.6] u_{k,3}(t),
y_{k,3}(t) = [1 1 0.5] x_{k,3}(t) + [0.4 0.4 0.4] u_{k,3}(t),   (29c)

ẋ_{k,4}(t) = [−0.2 x_{k,4_1}(t) − 0.1 x_{k,4_2}(t); sin(x_{k,4_2}(t)); −0.3 x_{k,4_3}(t)] + [1 −1; 0.5 1; 2 1] u_{k,4}(t),
y_{k,4}(t) = [0.5 0.5 0.3] x_{k,4}(t) + [0.4 0.4 0.4] u_{k,4}(t),   (29d)

where k ∈ Z+.

The initial states of the agents are set as x_{0_1}(0) = x_{k,1_1}(0) = x_{k,2_1}(0) = x_{k,3_1}(0) = x_{k,4_1}(0) = 1, x_{0_2}(0) = x_{k,1_2}(0) = x_{k,2_2}(0) = x_{k,3_2}(0) = x_{k,4_2}(0) = −1, and x_{k,3_3}(0) = x_{k,4_3}(0) = 0; their initial inputs are set to 0 and the terminal time is T = 5. By applying the method in Remark 6, the learning gain coefficient and the event-triggered coefficients are designed as γ = 0.14, µ_1 = 0.02, µ_2 = 0.08, µ_3 = 0.1 and µ_4 = 0.12.

Simulation results for the integer-order leader agent (28) and the four integer-order follower agents (29a)-(29d) are shown in Fig. 11. The results show that the proposed control strategy can achieve the perfect consensus tracking of IOMASs and can reduce the number of controller updates and the communication load.

Fig. 11. γ = 0.14, µ_1 = 0.02, µ_2 = 0.08, µ_3 = 0.1 and µ_4 = 0.12, simulation results under controller (12) with (7) and (13): (a) Maximal output tracking errors; (b) Events; (c) Control inputs.
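The per-iteration logic of Algorithm 1 can also be summarized in compact Python. Since the exact expressions of the triggering function g_{k,j}(t) in (13) and of the distributed error ξ_{k,j}(t) in (8) are not reproduced in this excerpt, the sketch below assumes a relative-threshold trigger of the form ‖ξ at the last triggering iteration − ξ now‖ > µ_j ‖ξ now‖, which is consistent with the bound (A.8); it is an illustration under that assumption, not the authors' exact protocol.

```python
import numpy as np

def run_iteration(agents, mu, Gamma, xi_fun, k, last_trigger, u):
    """One iteration k of an event-triggered ILC pass over all agents.

    mu[j]        : event-triggered coefficient of agent j
    Gamma[j]     : learning gain of agent j, as in (16)
    xi_fun(j, k) : distributed consensus-error profile of agent j (assumed available)
    last_trigger : per-agent copy of xi stored at the last triggering iteration
    u[j]         : current input profile of agent j
    """
    for j in agents:
        xi_now = xi_fun(j, k)
        delta = last_trigger[j] - xi_now                      # event-triggering error (assumed form)
        if np.linalg.norm(delta) > mu[j] * np.linalg.norm(xi_now):
            u[j] = u[j] + Gamma[j] @ xi_now                   # update branch, cf. (12.b)/(20)
            last_trigger[j] = xi_now                          # agent j broadcasts and resets its event
        # else: u[j] is held, cf. (12.a)
    return u, last_trigger
```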
5. CONCLUSIONS

In this paper, the output consensus tracking problem of non-identical fractional order multi-agent systems (NIFOMASs) has been discussed. An event-triggered condition has been designed to ensure the occurrence of events along the iteration axis, and an event-triggered ILC protocol without Zeno behavior has been proposed. By analyzing the convergence of the learning process, sufficient conditions have been presented to guarantee the perfect consensus tracking of NIFOMASs. Simulation results have shown the effectiveness of the proposed method and its superiority over the existing results in reducing the number of controller updates and in saving communication resources. In future work, the robust consensus issues of fractional-order multi-agent systems with iteration-varying trial lengths can be further studied.

APPENDIX A: PROOF OF THEOREM 1

The proof of Theorem 1 consists of two parts. Part I proves the convergence of the consensus tracking errors with the increase of iteration steps, and Part II shows that Zeno behavior is naturally excluded under the proposed control strategy.

Part I: First, we consider the case that there exists k ∈ Z+ such that k, k + 1 ∈ [k_l, k_{l+1}), l ∈ Z+.

By (1), (4) and (14a), we have

e_{k+1}(t) − e_k(t) = −C(t) ∆x_k(t),   (A.1)

where the state difference ∆x_k(t) = x_{k+1}(t) − x_k(t). Based on Lemma 1 and Assumption 2, the state difference ∆x_k(t) can be calculated by integrating the systems
(4) along the time axis:

∆x_k(t) = [ (1/Γ(α_1)) ∫_0^t (t−s)^{α_1−1} ( f_1(s, x_{k+1,1}(s)) − f_1(s, x_{k,1}(s)) ) ds ;
            (1/Γ(α_2)) ∫_0^t (t−s)^{α_2−1} ( f_2(s, x_{k+1,2}(s)) − f_2(s, x_{k,2}(s)) ) ds ;
            ⋮ ;
            (1/Γ(α_N)) ∫_0^t (t−s)^{α_N−1} ( f_N(s, x_{k+1,N}(s)) − f_N(s, x_{k,N}(s)) ) ds ].   (A.2)

Taking norms on both sides of (A.2), and noticing Assumption 1 and ∫_0^t (t−s)^{α−1} e^{λs} ds < (e^{λt}/λ^α) Γ(α), we have

‖∆x_k(t)‖ ≤ max_{j=1,2,...,N} { (1/Γ(α_j)) ∫_0^t (t−s)^{α_j−1} ‖ f_j(s, x_{k+1,j}(s)) − f_j(s, x_{k,j}(s)) ‖ ds }
          ≤ max_{j=1,2,...,N} { (L_j/Γ(α_j)) ∫_0^t (t−s)^{α_j−1} e^{λs} ds } ‖∆x_k(t)‖_λ
          ≤ (L_max e^{λt} / λ^{α_min}) ‖∆x_k(t)‖_λ,   (A.3)

where α_min = min_{j=1,2,...,N}{α_1, α_2, ..., α_N}, min(·) denotes the minimal value function, and L_max = max_{j=1,2,...,N}{L_1, L_2, ..., L_N}.

Multiplying both sides of (A.3) by e^{−λt}, and applying the definition of the λ-norm in Subsection 2.1, we have

‖∆x_k(t)‖_λ ≤ L_max ‖∆x_k(t)‖_λ / λ^{α_min},   (A.4)

which indicates that when λ is sufficiently large, ‖∆x_k(t)‖_λ = 0 holds. Moreover, it is easy to obtain from (A.1) that

| ‖e_{k+1}(t)‖_λ − ‖e_k(t)‖_λ | ≤ ‖C(t)‖ ‖∆x_k(t)‖_λ.

Thus, for a sufficiently large λ, we can obtain ‖e_{k+1}(t)‖_λ = ‖e_k(t)‖_λ, which indicates

‖e_k(t)‖_λ = ‖e_{k_l}(t)‖_λ, ∀t ∈ [0, T], ∀k ∈ [k_l, k_{l+1}).   (A.5)

Next, we analyze the convergence of ‖e_k(t)‖_λ at the triggering iteration sequence {k_l}, l ∈ Z+.

By (1), (4) and (14b), we have

e_{k_{l+1}}(t) = [I_{Nm} − D(t)Γ(t)(H ⊗ I_m)] e_{k_l}(t) + D(t)Γ(t)(H ⊗ I_m) δ_{k_l}(t) − C(t) ∆x_{k_l}(t).   (A.6)

Similar to (A.3), ‖∆x_{k_l}(t)‖ can be represented and bounded as

‖∆x_{k_l}(t)‖ ≤ ‖[ (1/Γ(α_j)) ∫_0^t (t−s)^{α_j−1} ( f_j(s, x_{k_{l+1},j}(s)) − f_j(s, x_{k_l,j}(s)) ) ds ]_{j=1}^{N}‖
            + ‖[ (1/Γ(α_j)) ∫_0^t (t−s)^{α_j−1} [B(s)Γ(s)(H ⊗ I_m)]_{(∑_{i=1}^{j−1} n_i +1):∑_{i=1}^{j} n_i} ( e_k(s) − δ_k(s) ) ds ]_{j=1}^{N}‖
            ≤ max_{j=1,2,...,N} { (L_j/Γ(α_j)) ∫_0^t (t−s)^{α_j−1} ‖∆x_{k_l}(s)‖ ds }
            + max_{j=1,2,...,N} { (η_j/Γ(α_j)) ∫_0^t (t−s)^{α_j−1} ( ‖e_k(s)‖ + ‖δ_k(s)‖ ) ds },   (A.7)
where η_j = ‖[B(t)Γ(t)(H ⊗ I_m)]_{(∑_{i=1}^{j−1} n_i +1):∑_{i=1}^{j} n_i}‖ and [B(t)Γ(t)(H ⊗ I_m)]_{(∑_{i=1}^{j−1} n_i +1):∑_{i=1}^{j} n_i} (j = 1, 2, ..., N) denotes the matrix consisting of the components from the (∑_{i=1}^{j−1} n_i + 1)th row to the (∑_{i=1}^{j} n_i)th row of the matrix [B(t)Γ(t)(H ⊗ I_m)].

By the triggering conditions (7) and (13), we have

‖δ_k(t)‖ ≤ µ_max ‖e_k(t)‖,   (A.8)

where k ∈ [k_l, k_{l+1}), l ∈ Z+, and µ_max = max_{j=1,2,...,N}{µ_j}. Substituting (A.8) into (A.7), we have

‖∆x_{k_l}(t)‖ ≤ (L_max e^{λt} / λ^{α_min}) ‖∆x_{k_l}(t)‖_λ + (η_max (1 + µ_max) e^{λt} / λ^{α_min}) ‖e_k(t)‖_λ,   (A.9)

where η_max = max_{j=1,2,...,N}{η_j}.

Multiplying both sides of (A.9) by e^{−λt}, and applying the definition of the λ-norm in Subsection 2.1, we have

‖∆x_{k_l}(t)‖_λ ≤ (L_max / λ^{α_min}) ‖∆x_{k_l}(t)‖_λ + (η_max (1 + µ_max) / λ^{α_min}) ‖e_k(t)‖_λ.   (A.10)

Rearranging ‖∆x_{k_l}(t)‖_λ and ‖e_{k_l}(t)‖_λ in (A.10), we have

‖∆x_{k_l}(t)‖_λ ≤ (η_max (1 + µ_max) / (λ^{α_min} − L_max)) ‖e_k(t)‖_λ.   (A.11)

Taking the λ-norm on both sides of (A.6) and noticing (A.8) and (A.11), we have

‖e_{k_{l+1}}(t)‖_λ ≤ ( ‖I_{Nm} − D(t)Γ(t)(H ⊗ I_m)‖ + ‖D(t)Γ(t)(H ⊗ I_m)‖ µ_max ) ‖e_{k_l}(t)‖_λ + (η_max (1 + µ_max) ‖C(t)‖ / (λ^{α_min} − L_max)) ‖e_k(t)‖_λ.   (A.12)

Note that (A.5) holds for all k ∈ [k_l, k_{l+1}) as λ is sufficiently large. Thus, by selecting a sufficiently large λ in (A.12), and substituting (A.5) into (A.12), we have

‖e_{k_{l+1}}(t)‖_λ ≤ { ‖I_{Nm} − D(t)Γ(t)(H ⊗ I_m)‖ + ‖D(t)Γ(t)(H ⊗ I_m)‖ µ_max + η_max (1 + µ_max) ‖C(t)‖ / (λ^{α_min} − L_max) } ‖e_{k_l}(t)‖_λ.   (A.13)

When λ is sufficiently large, the condition (15) can guarantee

‖I_{Nm} − D(t)Γ(t)(H ⊗ I_m)‖ + ‖D(t)Γ(t)(H ⊗ I_m)‖ µ_max + η_max (1 + µ_max) ‖C(t)‖ / (λ^{α_min} − L_max) < 1.   (A.14)

Note that k_l → +∞ as l → +∞. By (A.13) and (A.14), we have

lim_{l→+∞} ‖e_{k_l}(t)‖_λ = 0.   (A.15)

Noticing (A.5) and using (A.15), we have

lim_{l→+∞} ‖e_k(t)‖_λ = 0, ∀k ∈ [k_l, k_{l+1}).   (A.16)

Since {k_l} ⊂ {k}, k must approach infinity as k_l → +∞. Thus, k → +∞ must hold as l goes to infinity. Then, by combining (A.15) with (A.16), we have

lim_{k→+∞} ‖e_k(t)‖_λ = 0.   (A.17)

From the definition of the λ-norm, it is easy to see that

‖e_k(t)‖_λ ≤ sup_{t∈[0,T]} {‖e_k(t)‖} ≤ e^{λT} ‖e_k(t)‖_λ,   (A.18)

where t ∈ [0, T], λ is a sufficiently large positive scalar constant, and sup_{t∈[0,T]}{·} denotes the supremum of {·} over the interval t ∈ [0, T].

Then, by (A.17) and (A.18), we have

lim_{k→+∞} ‖e_k(t)‖ = 0.   (A.19)

Equation (A.19) indicates that, under the event-triggered ILC protocol (12) with the triggering conditions (7) and (13), the leader-following FOMASs (1) and (2) achieve the perfect output consensus tracking for any t ∈ [0, T] as the iteration step k goes to infinity.

Part II: Next, we explain that Zeno behavior can be naturally excluded under the proposed consensus protocol (12) with the triggering conditions (7) and (13).

The event-triggered ILC protocol (12) relates to both time and iteration steps, yet the triggering events occur along the iteration axis. Since the triggering iteration set {k_l} is a subset of the iteration set {k} = {0, 1, 2, ...}, the minimal interval between triggering iteration steps is the length of one iteration step, that is, inf_{l∈Z+}{k_{l+1} − k_l} ≥ 1. This indicates that Zeno behavior is naturally excluded under the proposed event-triggered ILC strategy.

This ends the proof of Theorem 1.

APPENDIX B: PROOF OF THEOREM 2

The proof of Theorem 2 consists of the convergence analysis of the consensus tracking errors and the explanation for the exclusion of Zeno behavior. The explanation for the exclusion of Zeno behavior is the same as Part II in Appendix A. Next, we prove the convergence of the consensus tracking errors with the increase of iteration steps.
By using (18), the (18)-based λ-norm, and a proof similar to that of Theorem 1, we know that the proof of Theorem 2 is accomplished if

‖I_{Nm} − D(t)Γ(t)(H ⊗ I_m)‖_s + µ_max ‖D(t)Γ(t)(H ⊗ I_m)‖_s < 1   (B.1)

holds. Thus, we need to prove that if (19) is satisfied, then (B.1) holds.

From (16), we have

D(t)Γ(t)(H ⊗ I_m) = γ(H ⊗ I_m).   (B.2)

By (B.2), (B.1) is equivalent to

‖I_{Nm} − γ(H ⊗ I_m)‖_s + µ_max γ ‖H ⊗ I_m‖_s < 1.   (B.3)

By (17) and (18), we have

‖I_{Nm} − γ(H ⊗ I_m)‖_s = ‖[(QU*) ⊗ I_m][I_{Nm} − γ(H ⊗ I_m)][(UQ^{−1}) ⊗ I_m]‖
                        = ‖I_{Nm} − γ(QU*HUQ^{−1}) ⊗ I_m‖
                        = ‖I_N − γQ(U*HU)Q^{−1}‖
                        = ‖I_N − γQ∆Q^{−1}‖
                        = ‖M_1 + γM_2(β)‖,   (B.4)

where M_1 = diag(1 − γλ_1, 1 − γλ_2, ..., 1 − γλ_N), and

M_2(β) = [0 −β^{−1}θ_{1,2} −β^{−2}θ_{1,3} ··· −β^{−N+1}θ_{1,N}; 0 0 −β^{−1}θ_{2,3} ··· −β^{−N+2}θ_{2,N}; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ··· 0].   (B.5)

By using the definition of the l_∞ vector norm induced matrix norm, we have

‖M_1‖ = max_{j=1,2,...,N} |1 − γλ_j|,   (B.6)

and

‖M_2(β)‖ = max_{‖z‖=1} ‖M_2(β)z‖.   (B.7)

Substituting (B.6) and (B.7) into (B.4), we have

‖I_{Nm} − γ(H ⊗ I_m)‖_s ≤ max_{j=1,2,...,N} |1 − γλ_j| + γ max_{‖z‖=1} ‖M_2(β)z‖.   (B.8)

Similarly, we have

‖(H ⊗ I_m)‖_s ≤ max_{j=1,2,...,N} |λ_j| + max_{‖z‖=1} ‖M_2(β)z‖.   (B.9)

By using (B.8) and (B.9), we have

‖I_{Nm} − γ(H ⊗ I_m)‖_s + µ_max γ ‖H ⊗ I_m‖_s ≤ max_{j=1,2,...,N} {|1 − γλ_j| + µ_max |γλ_j|} + (1 + µ_max) γ max_{‖z‖=1} ‖M_2(β)z‖.   (B.10)

Define the last term in (B.10) as a function of β, i.e., φ(β) = (1 + µ_max) γ max_{‖z‖=1} ‖M_2(β)z‖. Note that β^{−1} can be made arbitrarily small by choosing a sufficiently large β, and it is obtained from (B.5) that lim_{β→+∞} M_2(β) = 0. Thus, lim_{β→+∞} φ(β) = 0. Since φ(β) is a continuous function of β, for any ε = (1 − ρ_1)/2, ρ_1 ∈ (0, 1), there always exists a constant β̄ such that φ(β) < ε for all β > β̄. Thus, if (19) holds, by choosing β > β̄ in (B.10), we have

‖I_{Nm} − γ(H ⊗ I_m)‖_s + µ_max γ ‖(H ⊗ I_m)‖_s ≤ max_{j=1,2,...,N} {|1 − γλ_j| + µ_max |γλ_j|} + ε < 1.   (B.11)

This ends the proof of Theorem 2.

REFERENCES

[1] K. K. Oh, M. C. Park, and H. S. Ahn, "A survey of multi-agent formation control," Automatica, vol. 53, pp. 424-440, March 2015.
[2] J. Qin, Q. Ma, Y. Shi, and L. Wang, "Recent advances in consensus of multi-agent systems: A brief survey," IEEE Transactions on Industrial Electronics, vol. 64, no. 6, pp. 4972-4983, June 2017.
[3] X. Ge, Q. L. Han, D. Ding, X. M. Zhang, and B. Ning, "A survey on recent advances in distributed sampled-data cooperative control of multi-agent systems," Neurocomputing, vol. 275, pp. 1684-1701, Jan. 2018.
[4] H. Sun, Y. Zhang, D. Baleanu, W. Chen, and Y. Chen, "A new collection of real world applications of fractional calculus in science and engineering," Communications in Nonlinear Science and Numerical Simulation, vol. 64, pp. 213-231, Nov. 2018.
[5] Y. Cao, Y. Li, W. Ren, and Y. Chen, "Distributed coordination of networked fractional-order systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 40, no. 2, pp. 362-370, April 2010.
[6] M. A. Pakzad, S. Pakzad, and M. A. Nekoui, "Stability analysis of time-delayed linear fractional-order systems," International Journal of Control, Automation and Systems, vol. 11, no. 3, pp. 519-525, June 2013.
[7] J. Bai, G. Wen, Y. Song, A. Rahmani, and Y. Yu, "Distributed formation control of fractional-order multi-agent systems with relative damping and communication delay," International Journal of Control, Automation and Systems, vol. 15, no. 1, pp. 85-94, Feb. 2017.
[8] Y. Cao and W. Ren, "Distributed formation control for fractional-order systems: Dynamic interaction and absolute/relative damping," Systems & Control Letters, vol. 59, pp. 233-240, March-April 2010.
[9] P. Gong and W. Lan, "Distributed robust containment control for heterogeneous multi-agent systems with unknown fractional-order dynamics," Proc. of Chinese Automation Congress (CAC), Jinan, China, pp. 1320-1325, Oct. 2017.
[10] L. Wang and G. Zhang, "Robust output consensus for a class of fractional-order interval multi-agent systems," Asian Journal of Control, vol. 22, no. 4, pp. 1679-1691, 2019.
[11] P. Gong and W. Lan, "Adaptive robust tracking control for uncertain nonlinear fractional-order multi-agent systems with directed topologies," Automatica, vol. 92, pp. 92-99, June 2018.
[12] F. Chu, H. Yang, and F. Liu, "Consensus of fractional-order multi-agent systems with heterogenous communication delays," Computer Simulation, vol. 31, no. 4, pp. 389-393, April 2014.
[13] X. Yin and S. Hu, "Consensus of fractional-order uncertain multi-agent systems based on output feedback," Asian Journal of Control, vol. 15, no. 5, pp. 1538-1542, Sep. 2013.
[14] Z. Yu, H. Jiang, C. Hu, and J. Yu, "Leader-following consensus of fractional-order multi-agent systems via adaptive pinning control," International Journal of Control, vol. 88, no. 9, pp. 1746-1756, Sep. 2015.
[15] J. Bai, G. Wen, A. Rahmani, and Y. Yu, "Distributed consensus tracking for the fractional-order multi-agent systems based on the sliding mode control method," Neurocomputing, vol. 235, pp. 210-216, April 2017.
[16] Z. Yu, H. Jiang, C. Hu, and J. Yu, "Necessary and sufficient conditions for consensus of fractional-order multiagent systems via sampled-data control," IEEE Transactions on Cybernetics, vol. 47, no. 8, pp. 1892-1901, Aug. 2017.
[17] Y. W. Wang, Y. Lei, T. Bian, and Z. H. Guan, "Distributed control of nonlinear multi-agent systems with unknown and nonidentical control directions via event-triggered communication," IEEE Transactions on Cybernetics, vol. 50, no. 5, pp. 1820-1832, May 2020.
[18] M. Shi, Y. Yu, and X. Teng, "Leader-following consensus of general fractional-order linear multi-agent systems via event-triggered control," The Journal of Engineering, vol. 2018, no. 4, pp. 199-202, April 2018.
[19] Y. Ye and H. Su, "Leader-following consensus of general linear fractional-order multiagent systems with input delay via event-triggered control," International Journal of Robust and Nonlinear Control, vol. 28, no. 18, pp. 5717-5729, Dec. 2018.
[20] F. Wang and Y. Yang, "Leader-following consensus of nonlinear fractional-order multi-agent systems via event-triggered control," International Journal of Systems Science, vol. 48, no. 3, pp. 571-577, June 2017.
[21] T. Hu, Z. He, X. Zhang, and S. Zhong, "Leader-following consensus of fractional-order multi-agent systems based on event-triggered control," Nonlinear Dynamics, vol. 99, no. 3, pp. 2219-2232, Feb. 2020.
[22] Y. Chen, G. Wen, Z. Peng, and A. Rahmani, "Consensus of fractional-order multiagent system via sampled-data event-triggered control," Journal of the Franklin Institute, vol. 356, no. 17, pp. 10241-10259, Nov. 2019.
[23] B. Wu, D. Wang, and E. K. Poh, "High precision satellite attitude tracking control via iterative learning control," Journal of Guidance, Control, and Dynamics, vol. 38, no. 3, pp. 528-533, March 2015.
[24] S. Yang, J. X. Xu, D. Huang, and Y. Tan, "Optimal iterative learning control design for multi-agent systems consensus tracking," Systems & Control Letters, vol. 69, pp. 80-89, July 2014.
[25] X. Dai, C. Wang, S. Tian, and Q. Huang, "Consensus control via iterative learning for distributed parameter models multi-agent systems with time-delay," Journal of the Franklin Institute, vol. 356, no. 10, pp. 5240-5259, July 2019.
[26] X. Deng, X. Sun, and R. Liu, "Quantized consensus control for second-order nonlinear multi-agent systems with sliding mode iterative learning approach," International Journal of Aeronautical and Space Sciences, vol. 19, no. 2, pp. 518-533, June 2018.
[27] D. Meng, Y. Jia, J. Du, and J. Zhang, "On iterative learning algorithms for the formation control of nonlinear multi-agent systems," Automatica, vol. 50, no. 1, pp. 291-295, Jan. 2014.
[28] S. Lv, M. Pan, X. Li, W. Cai, T. Lan, and B. Li, "Consensus control of fractional-order multi-agent systems with time delays via fractional-order iterative learning control," IEEE Access, vol. 7, pp. 159731-159742, Nov. 2019.
[29] D. Luo, J. Wang, D. Shen, and M. Fečkan, "Iterative learning control for fractional-order multi-agent systems," Journal of the Franklin Institute, vol. 356, no. 12, pp. 6328-6351, Aug. 2019.
[30] L. M. Wang and G. S. Zhang, "Performance index based observer-type iterative learning control for consensus tracking of uncertain nonlinear fractional-order multiagent systems," Complexity, vol. 2019, pp. 1-17, Nov. 2019.
[31] W. Xiong, X. Yu, R. Patel, and W. Yu, "Iterative learning control for discrete-time systems with event-triggered transmission strategy and quantization," Automatica, vol. 72, pp. 84-91, Oct. 2016.
[32] T. Zhang and J. Li, "Event-triggered iterative learning control for multi-agent systems with quantization," Asian Journal of Control, vol. 20, no. 3, pp. 1088-1101, May 2018.
[33] X. Yin, D. Yue, and S. Hu, "Consensus of fractional-order heterogeneous multi-agent systems," IET Control Theory & Applications, vol. 7, no. 2, pp. 314-322, Jan. 2013.
[34] H. Y. Yang, Y. Yang, F. Han, M. Zhao, and L. Guo, "Containment control of heterogeneous fractional-order multi-agent systems," Journal of the Franklin Institute, vol. 356, no. 2, pp. 752-765, Jan. 2019.
[35] J. Huang, L. Chen, X. Xie, M. Wang, and B. Xu, "Distributed event-triggered consensus control for heterogeneous multi-agent systems under fixed and switching topologies," International Journal of Control, Automation and Systems, vol. 17, no. 8, pp. 1945-1956, Aug. 2019.
[36] A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204, Elsevier, Amsterdam, Holland, 2007.
[37] K. Diethelm and N. J. Ford, "Analysis of fractional differential equations," Journal of Mathematical Analysis and Applications, vol. 265, no. 2, pp. 229-248, June 2002.
[38] J. X. Xu, "A survey on iterative learning control for nonlinear systems," International Journal of Control, vol. 84, no. 7, pp. 1275-1294, June 2011.
[39] J. Li and J. Li, "Iterative learning control approach for a kind of heterogeneous multi-agent systems with distributed initial state learning," Applied Mathematics and Computation, vol. 265, pp. 1044-1057, Aug. 2015.
[40] W. Cao, J. Qiao, and M. Sun, "Learning gain self-regulation iterative learning control for suppressing singular system measurement noise," IEEE Access, vol. 7, pp. 66197-66205, June 2019.
[41] W. Mitkowski and P. Skruch, "Fractional-order models of the supercapacitors in the form of RC ladder networks," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 61, no. 3, pp. 581-587, Sep. 2013.

Liming Wang received his M.S. degree in theoretical physics from Hebei University, Baoding, China, in 2003. He is an Associate Professor of the Faculty of Physics and Electronic Information, Langfang Normal University, Langfang, China. He is currently pursuing his Ph.D. degree in control science and engineering at Tianjin University, Tianjin, China. His research interests include synchronization, fractional-order multi-agent systems, and iterative learning control.

Guoshan Zhang received his B.S. degree in mathematics from Northeast Normal University, China, in 1983, an M.S. degree in applied mathematics, and a Ph.D. degree in industrial automation from Northeastern University, China, in 1989 and 1996, respectively. He is now a professor in the School of Electrical and Information Engineering, Tianjin University, China. His research interests include nonlinear system control and intelligent control.

Journal of the Franklin Institute 358 (2021) 3803–3821


www.elsevier.com/locate/jfranklin

Event-triggered learning consensus of networked


heterogeneous nonlinear agents with switching
topologies
Na Lin a, Ronghu Chi a,∗, Biao Huang b
a School of Automation & Electronic Engineering, Qingdao University of Science & Technology, Qingdao 266061,
PR China
b Department of Chemical and Materials Engineering, University of Alberta, Edmonton, Alberta T6G 2G6, Canada

Received 21 July 2020; received in revised form 15 December 2020; accepted 21 February 2021
Available online 1 March 2021

Abstract
In this work, a lifted event-triggered iterative learning control (lifted ETILC) is proposed aiming
for addressing all the key issues of heterogeneous dynamics, switching topologies, limited resources,
and model-dependence in the consensus of nonlinear multi-agent systems (MASs). First, we establish
a linear data model for describing the I/O relationships of the heterogeneous nonlinear agents as a
linear parametric form to make the non-affine structural MAS affine with respect to the control input.
Both the heterogeneous dynamics and uncertainties of the agents are included in the parameters of the
linear data model, which are then estimated through an iterative projection algorithm. On this basis, a
lifted event-triggered learning consensus is proposed with an event-triggering condition derived through
a Lyapunov function. In this work, no threshold condition but the event-triggering condition is used
which plays a key role in guaranteeing both the stability and the iterative convergence of the proposed
lifted ETILC. The proposed method can reduce the number of control actions significantly in batches
while guaranteeing the iterative convergence of tracking error. Both rigorous analysis and simulations
are provided and confirm the validity of the lifted ETILC.
© 2021 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

∗ Corresponding author.
E-mail address: ronghu_chi@hotmail.com (R. Chi).

https://doi.org/10.1016/j.jfranklin.2021.02.025
0016-0032/© 2021 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Learning is a natural intelligence of human beings, and can help to realize high-performance
response for transient behaviors. For example, in a marching band [1] consisting of a group of
performers who play musical instruments and keep several formations to move forward while
playing the instruments, each performer requires cooperative learning from repeated trainings
to achieve the goal. Other examples include the military parade formation, synchronized
swimming and so on. In these examples, all the individuals achieve the consensus by their
active learning through repetitive trainings. However, when the individuals of the marching
band become the physical agents of a multi-agent system (MAS), an additional determinate
mechanism is needed to make them learn cooperatively for the consensus target.
By nature, iterative learning control (ILC) [2] is most suitable as a determinate learning
mechanism: It requires the systems to repeat over finite durations, similar to the training
of human being, and can learn experience from the iterations to realize a perfect tracking
performance. Nowadays, ILC has achieved a wealth of theoretical results [3,4] and a wide
range of successful applications [5,6]. Furthermore, as an intelligent learning mechanism, ILC
has also been introduced into MASs to improve the consensus performance, such as mobile
robots formation [7], coordinated train trajectory tracking [8], and so on. Ref. [9] investigates
the consensus tracking of singular linear MASs using ILC. Ref. [10] proposes an ILC approach
for formation consensus. Ref. [11] concerns a learning consensus scheme for a distributed
parametric MAS. A quantized ILC for continuous-time MASs with random packet losses is
proposed in [12].
It is noted that the dynamics of practical agents is commonly heterogeneous owing to
the system uncertainties and exogenous disturbances. Therefore, some ILC methods [13],
[14] have also been proposed for heterogeneous MASs. On the other hand, topology is most
important for the communication among agents in a MAS. It is more often to use a switch-
ing/variable communication topology aiming for improving the adaptability of the MASs to
complex situations. So, several works have also devoted to exploring the learning consen-
sus problems of MASs with switching topologies [15–17]. However, to the best of authors’
knowledge, few works [18,19] have been reported that consider both the heterogeneous char-
acteristics and the switching topologies of the MASs together even though they are often
coexistent in practical applications.
Data transmission is a fundamental feature of distributed control of MASs and is highly
subject to channel bandwidth, transmission rate, network capacities, etc. Information con-
gestion often happens owing to the limited resources. As a result, it is of a great interest
to reduce the utilization of resources while ensuring the control performance, for which the
event-triggered control (ETC) [20–26] is an effective method. By virtue of the basic idea of
ETC, one can design a consensus protocol to realize the control targets with limited data
communication and fewer control actions.
However, in contrast to the numerous distinct ETC methods [20–26] for the feedback-
based one-time-dimensional control systems, only sporadic event-triggered methods have been
reported for the ILC systems [27,28] and learning consensus of MASs [29,30], where no
consideration is taken for the heterogeneous dynamic characteristics and the switching com-
munication topologies. In addition, the existing event-triggered ILC [29,30] is only designed
for linear MASs. To the best of authors’ knowledge, the event-triggered ILC for nonlinear
MASs has not been reported.


Motivated by the above observations, this work proposes an event-triggered ILC method
for learning consensus of a nonlinear heterogeneous MAS with switching communication
topologies. Due to the non-affine structure of the considered MAS, a linear data model is
established to reformulate the I/O relationships of the heterogeneous nonlinear agents into
a linear parametric form that is affine with respect to the control input. Then, an iterative
projection algorithm is proposed for the estimation of the unknown parameters in the linear
data model to adapt to the system uncertainties. Based on the identified linear data model,
a lifted event-triggered ILC (lifted ETILC) is designed for the learning consensus where
the event-triggering condition is derived in view of Lyapunov stability theory. The iterative
convergence of the proposed lifted ETILC is not only proved by rigorous analysis but also
confirmed via extensive simulations. The main contributions of this work are listed as follows.
(i) This work is the first to address all the key issues of heterogeneous dynamics, switching topologies, limited resources, and model-dependence in a MAS together, therefore enhancing the applicability of the proposed lifted ETILC to real-time applications.
(ii) The linear data model established in this work has no physical interpretation but exists virtually in the computer for the sole purpose of algorithm design and analysis. Thus, it can be regarded as a universal tool for addressing nonlinear systems while bypassing mechanistic modeling steps.
(iii) All the unknown heterogeneous dynamics and uncertainties of the agents are involved
in the parameters of linear data model whose establishment and identification are purely I/O
data driven without using any explicit model information of the original agents.
(iv) Under the switching topologies, i.e., all the adjacent coefficients are iteration-varying,
the proposed lifted ETILC can still make the consensus error convergent iteratively. Further,
the number of controller actions is reduced greatly in batches, that is, the controller does not
act at all time instants of this iteration as long as the event-triggering condition holds.
(v) The event-triggering condition can ensure not only the stability but also the iterative
convergence of the proposed lifted ETILC.
The arrangement of this work is as below. Section 2 is the graph theory and problem
formulation. The linear data model and parameter identification are introduced in Section 3.
Section 4 shows the design of event-triggered ILC. The convergence analysis of the proposed
lifted ETILC is given in Section 5. Section 6 shows simulation results. Section 7 concludes
this paper.

2. Graph theory and problem formulation

2.1. Graph theory

Consider an iteration-varying undirected graph G_k with N nodes, where k is the iteration number. The adjacency matrix is Ā_k = [a_{ji,k}]_{N×N}, where a_{ji,k} = 1 if agent i is connected with agent j, and a_{ji,k} = 0 otherwise. The Laplacian matrix is L_k = [l̄_{ji,k}]_{N×N}, where l̄_{ji,k} = −a_{ji,k} for j ≠ i, and l̄_{ji,k} = Σ_{i=1}^{N} a_{ji,k} if j = i. In addition, D_k = diag(d_{1,k}, ..., d_{N,k}) is the leader's adjacency matrix, where d_{i,k} > 0 if agent i is connected to the leader, and d_{i,k} = 0 otherwise. Suppose that the topology graphs are connected and at least one agent can receive the information of the leader.


Remark 1. Note that the problem of switching topologies is considered in the following
analysis. It is characterized by the iteration-varying/nonrepetitive adjacent coefficients in the
graph presentation.

2.2. Problem formulation

Consider a heterogeneous nonlinear nonaffine MAS,

y_{k,i}(t + 1) = f_i( y_{k,i}(t), ..., y_{k,i}(t − n_y), u_{k,i}(t), ..., u_{k,i}(t − n_u) )   (1)

where i = 1, 2, ..., N denotes the agent index; the subscript k ∈ {0, 1, ...} denotes the iteration number; t ∈ {0, 1, ..., T} is the sampling time with T being an integer; y_{k,i}(t) and u_{k,i}(t) are the system output and control input; f_i(·) is a continuously differentiable nonlinear function used to denote the nonlinear heterogeneous dynamics of agent i; n_y and n_u represent the system orders, which are two positive constants.
By nature, the nonlinear nonaffine discrete-time system (1) is a nonlinear auto-regressive
moving average with exogenous input (NARMAX) model, which is widely used to formulate
many practical processes, such as temperature control systems [31], multi-layer composites
systems [32] and ankle dynamics [33].
Two assumptions are made in this paper.
Assumption 1. ∂ fi (·)/∂ uk,i (t ) exists and is nonzero, continuous, and bounded for all itera-
tions, time, and agents.
Assumption 2. The initial value of agent i is identical for each iteration k, i.e., yk+1,i (0) =
yk,i (0) = ci , where ci is a constant.
Define y˜k,i (t + 1 ) = yk,i (t + 1 ) − yd (t + 1 ) as the output tracking error, where yd (t + 1)
is the leader’s output. The control objective is to develop an event-triggered ILC approach
for heterogeneous nonlinear nonaffine MASs with iteratively switching topology such that the output of each follower gradually tracks the leader's trajectory as the iteration number increases, i.e., lim_{k→∞} |ỹ_{k,i}(t + 1)| = 0, while reducing the number of control updates.

3. Linear data model and parameter identification

3.1. Linear data model

Similar to Ref. [34], one can develop a linear data model of the nonlinear MAS (1) as follows,

y_{k+1,i}(t + 1) = y_{k,i}(t + 1) + ϕ_{k+1,i}(t) Δu_{k+1,i}(t)   (2)

where Δ represents an iterative difference operator, i.e., Δu_{k+1,i}(j) = u_{k+1,i}(j) − u_{k,i}(j), Δu_{k,i}(t) = [Δu_{k,i}(0), ..., Δu_{k,i}(t)]^T ∈ R^{(t+1)×1}, and ϕ_{k+1,i}(t) = [φ_{k+1,i}(0), ..., φ_{k+1,i}(t)] ∈ R^{1×(t+1)}, φ_{k+1,i}(j) = ∂g_{t,i}(·)/∂u_{k+1,i}(j), j = 0, 1, ..., t, where g_{t,i}(·) is a compound function of f_i(·). As a result, the nonlinear function g_{t,i}(·) also satisfies Assumption 1. Therefore, ϕ_{k+1,i}(t) is bounded.

See Appendix A for the detailed derivation of (2).


Remark 2. From the derivations given in Appendix A, one can see that the only difference
from [34] is that the linear data model (2) is constructed for each of the agents where i
denotes the i-th agent. Furthermore, the linear data model is established without depending
on any mechanistic model, and thus it is virtual with no physical interpretations. The sole role
of the linear data model is to serve the controller design and analysis for nonlinear nonaffine
systems.

Remark 3. It is observed that the nonlinear heterogeneous dynamics and uncertainties of the
agent are both converted as the unknown parametric uncertainty in the linear data model. The
parameter ϕk+1,i (t ) can be regarded as slowly iteration-varying to a certain extent [35]. So,
we assume ϕk+1,i (t ) is iteration-independent in the following analysis.

3.2. Parameter identification

Since ϕ_{k+1,i}(t) in the linear data model is unknown, we select an iterative projection algorithm [34] to identify it,

ϕ̂_{k+1,i}(t) = ϕ̂_{k,i}(t) + η ( Δy_{k,i}(t + 1) − ϕ̂_{k,i}(t) Δu_{k,i}(t) ) Δu_{k,i}^T(t) / ( μ + ‖Δu_{k,i}(t)‖^2 )   (3)

where ϕ̂_{k+1,i}(t) represents the estimate of ϕ_{k+1,i}(t), and 0 < η < 2 and μ > 0 are two constants.

Remark 4. The parameter estimation algorithm (3) is similar to the algorithm in Ref. [34].
However, in this work, the parameter estimation algorithm (3) is mainly used for off-line
identification of unknown parameter ϕk+1,i (t ).
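A minimal NumPy sketch of the projection update (3) is given below. The data are placeholders, and the Δ-differences are formed explicitly under the assumption, as in (2), that the regressor is the stacked iteration-wise input difference over 0..t.

```python
import numpy as np

def projection_update(phi_hat, dy_next, du_vec, eta=0.1, mu=0.1):
    """One application of (3): phi_hat and du_vec have length t+1.

    dy_next : Delta y_{k,i}(t+1), the iteration-wise output difference
    du_vec  : Delta u_{k,i}(0..t), the stacked iteration-wise input differences
    """
    pred_error = dy_next - phi_hat @ du_vec
    return phi_hat + eta * pred_error * du_vec / (mu + du_vec @ du_vec)

# Toy usage with a hypothetical "true" parameter vector.
rng = np.random.default_rng(0)
phi_true = np.array([0.2, 0.0, 1.1])
phi_hat = np.full(3, 0.01)
for _ in range(800):
    du = rng.standard_normal(3)
    dy = phi_true @ du
    phi_hat = projection_update(phi_hat, dy, du)
print(phi_hat)   # estimation error shrinks as the iterations proceed
```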

Theorem 1. For the heterogeneous nonlinear MAS (1) satisfying Assumptions 1 and 2, the
identification algorithm (3) can guarantee that the estimation error ϕ˜ k+1,i (t ), defined as
ϕ˜ k+1,i (t ) = ϕˆ k+1,i (t ) − ϕk+1,i (t ), converges with the increasing number of iterations.

Proof. See Appendix B. 

The subsequent discussions are based on the well identified linear data model.
Theorem 1 can guarantee the identified parameters approach the optimal ones precisely pro-
vided that the offline data is sufficient. Therefore, without loss of generality, we use the
ultimately identified linear data model in the following analysis, shown as below,

y_{k+1,i}(t + 1) = y_{k,i}(t + 1) + ϕ*_i(t) Δu_{k+1,i}(t)   (4)

where ϕ*_i(t) = [φ*_i(0), ..., φ*_i(t)] ∈ R^{1×(t+1)} denotes the ultimately identified value of ϕ_i(t).

4. Lifted event-triggered ILC

Denote Y_{k,i} = [y_{k,i}(1), ..., y_{k,i}(T)]^T
and

Φ_i = [ φ*_i(0)   0        ···  0;
        φ*_i(0)   φ*_i(1)  ···  0;
        ⋮         ⋮         ⋱   ⋮;
        φ*_i(0)   φ*_i(1)  ···  φ*_i(T − 1) ].

Then from (4), we can get a lifted linear data model,

Y_{k+1,i} = Y_{k,i} + Φ_i Δu_{k+1,i}(T − 1)   (5)

In the subsequent analysis, k_m^i, m = 0, 1, ..., is used to represent the event-triggered iteration sequence of agent i. The event-triggered iterative learning control protocol for heterogeneous nonlinear nonaffine MASs is designed as

Δu_{k+1,i}(T − 1) = −K_i ζ_{k,k_m^i}   (6)

where k ∈ [k_m^i, k_{m+1}^i), K_i = K_i I_{T×T} is a learning gain matrix, K_i is a scalar and I_{T×T} represents a T-dimensional identity matrix; ζ_{k,k_m^i} = [ζ_{k,k_m^i}(1), ζ_{k,k_m^i}(2), ..., ζ_{k,k_m^i}(T)]^T, ζ_{k,k_m^i}(τ) = Σ_{j∈N_i} a_{ji,k} ( y_{k_m^i,i}(τ) − y_{k_m^i,j}(τ) ) + d_{i,k} ( y_{k_m^i,i}(τ) − y_d(τ) ), τ = 1, 2, ..., T; N_i is the set of neighbors of agent i.
Subtracting Y_d from both sides of Eq. (5), where Y_d is defined as Y_d = [y_d(1), ..., y_d(T)]^T, we can derive

Ỹ_{k+1,i} = Ỹ_{k,i} + Φ_i Δu_{k+1,i}(T − 1)   (7)

where Ỹ_{k,i} = [ỹ_{k,i}(1), ..., ỹ_{k,i}(T)]^T. Further, denote

Ȳ̃_k = [Ỹ_{k,1}^T, Ỹ_{k,2}^T, ..., Ỹ_{k,N}^T]^T ∈ R^{TN×1},
Φ̄ = diag(Φ_1, ..., Φ_N) ∈ R^{TN×TN},
Ū_k = [Δu_{k,1}^T(T − 1), ..., Δu_{k,N}^T(T − 1)]^T ∈ R^{TN×1}.

Combining Eq. (7), we can get

Ȳ̃_{k+1} = Ȳ̃_k + Φ̄ Ū_{k+1}   (8)

Denote ζ̄_{k,k_m} = [ζ_{k,k_m^1}^T, ζ_{k,k_m^2}^T, ..., ζ_{k,k_m^N}^T]^T ∈ R^{TN×1}. According to the designed control protocol (6), we have

Ū_{k+1} = −K̄ ζ̄_{k,k_m}   (9)

where K̄ = diag(K_1, K_2, ..., K_N) ∈ R^{TN×TN}.

According to the definition of the tracking error, we can get

ζ_{k,k_m^i}(t + 1) = Σ_{j∈N_i} a_{ji,k} ( ỹ_{k_m^i,i}(t + 1) − ỹ_{k_m^i,j}(t + 1) ) + d_{i,k} ỹ_{k_m^i,i}(t + 1)   (10)

From Eq. (10), the following equation holds

ζ̄_{k,k_m} = (Λ_k ⊗ I_{T×T}) Ȳ̃_{k_m}   (11)
where Λ_k = L_k + D_k, and Ȳ̃_{k_m} = [Ỹ_{k_m,1}^T, Ỹ_{k_m,2}^T, ..., Ỹ_{k_m,N}^T]^T ∈ R^{TN×1}, Ỹ_{k_m,i} = [ỹ_{k_m^i,i}(1), ỹ_{k_m^i,i}(2), ..., ỹ_{k_m^i,i}(T)]^T ∈ R^{T×1}.

In terms of Eqs. (8), (9) and (11), we derive

Ȳ̃_{k+1} = Ȳ̃_k − M_k Ȳ̃_{k_m}   (12)

where M_k = Φ̄ K̄ (Λ_k ⊗ I_{T×T}).

Define the event-triggered error e_{k,i}(t + 1) = ỹ_{k_m^i,i}(t + 1) − ỹ_{k,i}(t + 1), k ∈ [k_m^i, k_{m+1}^i), which refers to the difference between the output tracking error of the latest event-triggered iteration and the current output tracking error of agent i. Then, one can get the following formulation

ỹ_{k_m^i,i}(t + 1) = e_{k,i}(t + 1) + ỹ_{k,i}(t + 1)   (13)

Denote

e_{k,i} = [e_{k,i}(1), ..., e_{k,i}(T)]^T ∈ R^{T×1},
ē_k = [e_{k,1}^T, e_{k,2}^T, ..., e_{k,N}^T]^T ∈ R^{TN×1}.

Then, from Eq. (13), one obtains

Ȳ̃_{k_m} = ē_k + Ȳ̃_k   (14)

Inserting Eq. (14) into Eq. (12), one can derive

Ȳ̃_{k+1} = Ȳ̃_k − M_k (ē_k + Ȳ̃_k) = (I − M_k) Ȳ̃_k − M_k ē_k   (15)

Define a Lyapunov function V_{k+1} = Ȳ̃_{k+1}^T Ȳ̃_{k+1}. Then the difference form of the Lyapunov function is

ΔV_{k+1} = Ȳ̃_{k+1}^T Ȳ̃_{k+1} − Ȳ̃_k^T Ȳ̃_k
         = ( (I − M_k) Ȳ̃_k − M_k ē_k )^T ( (I − M_k) Ȳ̃_k − M_k ē_k ) − Ȳ̃_k^T Ȳ̃_k
         = Ȳ̃_k^T [ (I − M_k)^T (I − M_k) − I ] Ȳ̃_k − 2 Ȳ̃_k^T (I − M_k)^T M_k ē_k + ē_k^T M_k^T M_k ē_k   (16)

Let Q_k = I − (I − M_k)^T (I − M_k). Since the topology graph considered is undirected, the matrix Λ_k is guaranteed to be symmetric positive definite. So Q_k is a symmetric positive definite matrix as long as ‖I − M_k‖ ≤ ρ < 1, where ρ = max_k {‖I − M_k‖}.

Using the norm bound property in Eq. (16), we have

ΔV_{k+1} ≤ −λ_min(Q_k) ‖Ȳ̃_k‖^2 + 2 ‖I − M_k‖ ‖M_k‖ ‖Ȳ̃_k‖ ‖ē_k‖ + ‖M_k‖^2 ‖ē_k‖^2   (17)


where λ_min(Q_k) is the minimum eigenvalue of the matrix Q_k. Further, using Young's inequality, i.e., 2ab ≤ (1/c)a^2 + cb^2, c > 0, for the second term in Eq. (17) yields

ΔV_{k+1} ≤ −(λ_min(Q_k)/2) ‖Ȳ̃_k‖^2 + (2/λ_min(Q_k)) ‖I − M_k‖^2 ‖M_k‖^2 ‖ē_k‖^2 + ‖M_k‖^2 ‖ē_k‖^2   (18)

Letting ΔV_{k+1} ≤ 0, it follows that

‖ē_k‖^2 ≤ α_k ‖Ȳ̃_k‖^2   (19)

where α_k = λ_min^2(Q_k) / ( 4 ‖I − M_k‖^2 ‖M_k‖^2 + 2 λ_min(Q_k) ‖M_k‖^2 ).

Eq. (19) can be written as

Σ_{i=1}^{N} ‖e_{k,i}‖^2 ≤ α_k Σ_{i=1}^{N} ‖Ỹ_{k,i}‖^2   (20)

Therefore, the event-triggering condition can be designed as

‖e_{k,i}‖^2 ≤ σ α_k ‖Ỹ_{k,i}‖^2   (21)

where 0 < σ < 1 is a constant, which means that the event is updated as long as the inequality (21) is violated.

In summary, the presented lifted ETILC approach for heterogeneous nonlinear nonaffine MASs with iteration-varying topology is listed as follows,

ϕ̂_{k+1,i}(t) = ϕ̂_{k,i}(t) + η ( Δy_{k,i}(t + 1) − ϕ̂_{k,i}(t) Δu_{k,i}(t) ) Δu_{k,i}^T(t) / ( μ + ‖Δu_{k,i}(t)‖^2 )   (22)

k + 1 = k_{m+1}^i, m = 0, 1, ..., if ‖e_{k,i}‖^2 > σ α_k ‖Ỹ_{k,i}‖^2   (23)

Δu_{k+1,i}(T − 1) = −K_i ζ_{k,k_m^i}   (24)

where k ∈ [k_m^i, k_{m+1}^i).
Remark 5. Compared with Refs. [29] and [30], the proposed lifted ETILC is designed mainly
for heterogeneous nonlinear MASs with switching topology instead of linear MASs with fixed
topology. Compared with Ref. [34], the main work of this paper is to develop an event-
triggered ILC method for nonlinear MASs, so as to ensure the control performance, reduce
the number of control input updates, and finally achieve the purpose of reducing the use of
control resources.
Remark 6. If the event-triggering condition (21) is violated at one iteration or batch, the
control inputs of this iteration are updated over all the time instants of the batch {0, 1, · · · , T }.
That is, the event is triggered along the iteration axis in a batchwise sense, which is different
from the existing works [20–26].
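A compact sketch of the batchwise trigger-and-update logic (23)-(24) is given below. The gains, the scalar α_k, and the helper that returns ζ_{k,k_m^i} are assumed placeholders rather than values or routines taken from the paper.

```python
import numpy as np

def lifted_etilc_step(Y_err, Y_err_trig, du, alpha_k, sigma, K, zeta_fun):
    """One iteration of the lifted ETILC (23)-(24) for N agents.

    Y_err[i]      : current lifted tracking error of agent i (T-vector)
    Y_err_trig[i] : tracking error stored at agent i's last triggering iteration
    du[i]         : lifted input increment of agent i for the next iteration
    alpha_k       : the scalar alpha_k of (19), computed from M_k
    zeta_fun(i)   : zeta_{k, k_m^i} built from the last triggered neighbor data (assumed)
    """
    for i in range(len(Y_err)):
        e_i = Y_err_trig[i] - Y_err[i]                                # event-triggered error e_{k,i}
        if e_i @ e_i > sigma * alpha_k * (Y_err[i] @ Y_err[i]):       # condition (23) violated
            Y_err_trig[i] = Y_err[i].copy()                           # iteration k+1 becomes k_{m+1}^i
            du[i] = -K[i] * zeta_fun(i)                               # update law (24), with K_i = K_i I
        # otherwise zeta is unchanged since the last trigger, so the stored increment is reused
    return du, Y_err_trig
```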


Remark 7. According to Eq. (15), it is found that the smaller the norm of I − M_k is, the faster the convergence rate of the tracking error becomes. Since M_k = Φ̄ K̄ (Λ_k ⊗ I_{T×T}) ∈ R^{TN×TN}, where Φ̄ ∈ R^{TN×TN} is a lower triangular matrix, K̄ ∈ R^{TN×TN} is a diagonal matrix, and (Λ_k ⊗ I_{T×T}) ∈ R^{TN×TN} is a symmetric matrix, one can conclude that K_i only affects the size of the diagonal elements of the matrix M_k. Therefore, we shall first select a smaller K_i to ensure the stability of the system, and then increase the control gain K_i gradually to improve the control performance.
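The quantities in Remark 7 can be checked numerically. The sketch below builds M_k = Φ̄ K̄ (Λ_k ⊗ I_T) from placeholder matrices and evaluates ‖I − M_k‖, using the spectral norm as one concrete choice of matrix norm (the paper does not fix the norm in this excerpt).

```python
import numpy as np

def build_Mk(phi_blocks, K_gains, Lambda_k):
    """M_k = diag(Phi_1..Phi_N) @ diag(K_1 I .. K_N I) @ (Lambda_k kron I_T)."""
    T = phi_blocks[0].shape[0]
    N = len(phi_blocks)
    Phi_bar = np.zeros((N * T, N * T))
    for i, Phi in enumerate(phi_blocks):
        Phi_bar[i*T:(i+1)*T, i*T:(i+1)*T] = Phi
    K_bar = np.kron(np.diag(K_gains), np.eye(T))
    return Phi_bar @ K_bar @ np.kron(Lambda_k, np.eye(T))

# Placeholder data: N = 2 agents, T = 3, lower-triangular Phi_i, Lambda_k = L_k + D_k.
T = 3
phi_blocks = [np.tril(np.full((T, T), 0.8)), np.tril(np.full((T, T), 1.2))]
Lambda_k = np.array([[2.0, -1.0], [-1.0, 1.0]])
M_k = build_Mk(phi_blocks, [0.02, 0.04], Lambda_k)
print(np.linalg.norm(np.eye(2 * T) - M_k, 2))   # should stay below 1 (cf. Theorem 2)
```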

5. Convergence analysis

The convergence theorem of the proposed lifted ETILC (22)-(24) for heterogeneous nonlinear nonaffine MASs is given below.

Theorem 2. For the nonlinear MAS (1) with an iteration-varying topology under Assumptions 1 and 2, the output tracking error of each agent converges to zero with increasing number of iterations by applying the presented lifted ETILC (22)-(24) if the following condition is satisfied,

ω_{1k} < ‖I − M_k‖ < min(ω_{2k}, ρ).

Proof. Taking norms on both sides of Eq. (15), it is derived that

‖Ȳ̃_{k+1}‖ ≤ ‖I − M_k‖ ‖Ȳ̃_k‖ + ‖M_k‖ ‖ē_k‖   (25)

According to Eqs. (19)-(21), we can obtain

‖Ȳ̃_{k+1}‖ ≤ ‖I − M_k‖ ‖Ȳ̃_k‖ + √(σ α_k) ‖M_k‖ ‖Ȳ̃_k‖
          = ( ‖I − M_k‖ + √σ λ_min(Q_k) / √( 4‖I − M_k‖^2 + 2λ_min(Q_k) ) ) ‖Ȳ̃_k‖   (26)

Owing to the C-S inequality, i.e., (a + b)^2 ≤ 2(a^2 + b^2), we have

√( 4‖I − M_k‖^2 + 2λ_min(Q_k) ) ≥ √2 ‖I − M_k‖ + √(λ_min(Q_k))   (27)

Then, from Eq. (26) and Eq. (27), it follows that

‖Ȳ̃_{k+1}‖ ≤ β_k ‖Ȳ̃_k‖   (28)

where β_k = ‖I − M_k‖ + √σ λ_min(Q_k) / ( √2 ‖I − M_k‖ + √(λ_min(Q_k)) ).

Owing to λ_min(Q_k) > 0, we can get

β_k = ‖I − M_k‖ + √σ / ( √2 λ_min^{−1}(Q_k) ‖I − M_k‖ + λ_min^{−1/2}(Q_k) )
    ≤ ‖I − M_k‖ + √σ / ( √2 λ_min^{−1}(Q_k) ‖I − M_k‖ )
    = ( √2 λ_min^{−1}(Q_k) ‖I − M_k‖^2 + √σ ) / ( √2 λ_min^{−1}(Q_k) ‖I − M_k‖ )   (29)

By√selecting appropriate Ki and σ√such that ω1k < I − Mk  < min (ω2k , 1 ), where ω1k =
√ √
4−8 2σ λmin (Qk ) 4−8 2σ λmin (Qk )
1
2
− 4
and ω2k = 21 + 4
are two real roots of the following equa-
tion
√ −1 √
2λmin (Qk )I − Mk 2 + σ

− 2λ−1 min (Qk )I − Mk  = 0 (30)
then the following inequality can be guaranteed,
√ −1 √
2λmin (Qk )I − Mk 2 + σ
βk ≤ √ −1 <γ <1 (31)
2λmin (Qk )I − Mk 
where $0<\gamma<1$ is a constant. From Eqs. (28) and (31), we can derive
$$\big\|\bar{\tilde{Y}}_{k+1}\big\|\le\gamma\,\big\|\bar{\tilde{Y}}_{k}\big\|\le\cdots\le\gamma^{k+1}\big\|\bar{\tilde{Y}}_{0}\big\| \tag{32}$$
Considering the boundedness of the initial tracking error $\tilde{Y}_{0,i}$ of each agent, inequality (32) means that $\lim_{k\to\infty}\big\|\bar{\tilde{Y}}_{k}\big\|=0$. Thus, $\lim_{k\to\infty}\tilde{y}_{k,i}(t+1)=0$ is derived directly.
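For illustration, the interval $(\omega_{1k},\omega_{2k})$ defined by Eq. (30) can be evaluated numerically. The sketch below uses the simulation value $\sigma=0.02$ together with an assumed $\lambda_{\min}(Q_k)=0.5$, which is chosen only for illustration and is not a value reported in the paper.

```python
import numpy as np

def convergence_interval(sigma, lam_min):
    """Roots omega_1k, omega_2k of Eq. (30); Theorem 2 requires
    omega_1k < ||I - M_k|| < min(omega_2k, rho)."""
    disc = 4.0 - 8.0 * np.sqrt(2.0) * sigma * np.sqrt(lam_min)
    if disc < 0:
        raise ValueError("sigma too large: Eq. (30) has no real roots")
    return 0.5 - np.sqrt(disc) / 4.0, 0.5 + np.sqrt(disc) / 4.0

w1, w2 = convergence_interval(0.02, 0.5)   # assumed lambda_min for illustration
print(round(w1, 4), round(w2, 4))          # e.g. 0.0101 and 0.9899
```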

6. Simulation

Consider a heterogeneous nonlinear MAS with four followers which is governed by
$$\text{Agent 1}:\ y_{1}(t+1)=\frac{y_{1}(t)u_{1}(t)}{1+y_{1}^{2}(t)}+u_{1}(t)$$
$$\text{Agent 2}:\ y_{2}(t+1)=\frac{y_{2}(t)u_{2}(t)}{1+y_{2}^{4}(t)}+0.5\,u_{2}(t)$$
$$\text{Agent 3}:\ y_{3}(t+1)=\frac{y_{3}(t)u_{3}(t)}{1+y_{3}^{2}(t)}+2\,u_{3}(t)$$
$$\text{Agent 4}:\ y_{4}(t+1)=\frac{y_{4}(t)u_{4}(t)}{1+y_{4}^{5}(t)}+0.8\,u_{4}(t)$$
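For reference, these follower dynamics can be transcribed directly as below; they are used only to generate I/O data, since the controller itself never uses the model.

```python
def agent_step(i, y, u):
    """One-step dynamics of follower agent i (i = 1,...,4) in the simulation."""
    if i == 1:
        return y * u / (1.0 + y ** 2) + u
    if i == 2:
        return y * u / (1.0 + y ** 4) + 0.5 * u
    if i == 3:
        return y * u / (1.0 + y ** 2) + 2.0 * u
    return y * u / (1.0 + y ** 5) + 0.8 * u
```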
The communication topology varies randomly along the iteration direction among four states, as shown in Fig. 1, where 1, 2, 3, and 4 represent the follower agents and 0 represents the leader. In this simulation, a parameter $\varsigma_k$ is selected to represent the switching of topologies and takes the values 1, 2, 3, and 4 randomly. When $\varsigma_k$ equals 1, 2, 3, or 4, the topology corresponds to $G_1$, $G_2$, $G_3$, or $G_4$ in Fig. 1, respectively.
$\varphi_{k,i}(t)$ in the linear data model (2) is identified by using an input sequence of 800 data points with zero mean and unit standard deviation. To quantify the identification performance, define the output estimation error $\varepsilon_k=\frac{1}{T}\sum_{j=1}^{T}\big|\hat{y}_{k,i}(j)-y_{k,i}(j)\big|$, where $\hat{y}_{k,i}(t+1)=\hat{y}_{k-1,i}(t+1)+\hat{\varphi}_{k,i}(t)\,\Delta u_{k,i}(t)$ is the estimate of the output. The parameters are selected as $\eta=0.1$ and $\mu=0.1$, and the initial values are $\hat{\varphi}_{0,i}(j)=0.01$, $j=0,1,\cdots,t$, $i=1,2,3,4$. Applying the iterative projection algorithm (22), the output estimation error $\varepsilon_k$ is shown in Fig. 2.
We can observe that the output estimation error gradually decreases as the number of iterations increases, which indicates that good parameter identification can be achieved provided that sufficient offline data are available. We choose the parameter estimate at the 800th


Fig. 1. The iteration-varying communication topologies.

Fig. 2. The output estimate error of all agents (output estimation error vs. iteration number).

iteration as the optimal one for the subsequent calculation of triggering conditions, and the
identified parameters ϕˆ 800,i (t ) are shown in Fig. 3.
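A minimal sketch of this offline identification step is given below. It assumes the iteration-difference form of the linear data model and stores the recorded I/O data of the offline iterations as NumPy arrays; the array layout and function name are illustrative, not from the paper.

```python
import numpy as np

def identify_phi(y, u, eta=0.1, mu=0.1, phi0=0.01):
    """Iterative projection estimation of phi_{k,i}(t) from offline I/O data.
    y has shape (K, T+1), u has shape (K, T) over K recorded iterations."""
    K, T = u.shape
    phi = np.full(T, phi0)
    err = np.zeros(K)
    for k in range(1, K):
        dy = y[k, 1:] - y[k - 1, 1:]            # iteration-wise output increment
        du = u[k, :] - u[k - 1, :]              # iteration-wise input increment
        phi = phi + eta * (dy - phi * du) * du / (mu + du ** 2)
        y_hat = y[k - 1, 1:] + phi * du         # one-step output estimate
        err[k] = np.mean(np.abs(y_hat - y[k, 1:]))   # output estimation error
    return phi, err
```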
In this simulation, the leader's trajectory is $y_d(t+1)=0.5\sin(2\pi t/50)$. Select $\varphi_i(t)=\hat{\varphi}_{800,i}(t)$, $i=1,2,3,4$. The initial values are $y_{k,i}(0)=0$ and $u_{0,i}(t)=0$, $i=1,2,3,4$. The learning gains are set as $K_1=0.02$, $K_2=0.04$, $K_3=0.01$, $K_4=0.03$, and $\sigma=0.02$. By calculation, $\rho=0.9994$, and the values of $\omega_{1k}$, $\omega_{2k}$, and $\|I-M_k\|$ are shown in Fig. 4, which illustrates that the convergence condition $\omega_{1k}<\|I-M_k\|<\min(\omega_{2k},\rho)$ in Theorem 2 holds. Applying the proposed lifted ETILC, the simulation results are shown in Figs. 5–10.
Fig. 5 shows the tracking error of each agent. Fig. 6 shows the output consensus of the agents at the 600th iteration. It is seen that the proposed lifted ETILC achieves good tracking performance and that the tracking errors converge gradually as the number of iterations increases.
Figs. 7–10 show the event-triggered iterations of every agent. For clarity, the total number of triggered iterations per agent is listed in Table 1, where $R_{ai}=\dfrac{\text{triggered iterations}}{\text{total iterations}}$.


Fig. 3. The estimation ϕˆ k,i (t ) at the 800th iteration.

Fig. 4. The values of $\omega_{1k}$, $\omega_{2k}$, $\|I-M_k\|$, and $\rho$.


Fig. 5. Consensus errors of the agents (panels: agents 1–4; curves: lifted ILC vs. lifted ETILC; axes: error vs. iteration).

Fig. 6. The outputs of the followers and the leader at the 600th iteration (output vs. time).

Table 1
The total number of triggered iterations per agent.

Agent number    1          2          3          4
R_ai            177/600    241/600    213/600    240/600


Fig. 7. The event-triggered iterations of agent 1 (inter-event interval vs. event-triggered iterations).

Fig. 8. The event-triggered iterations of agent 2 (inter-event interval vs. event-triggered iterations).

We can observe that the learning consensus of the heterogeneous nonlinear MAS can be achieved even though the iterations that require controller actions are significantly reduced from the original total of 600 iterations. Note that if the controller is not triggered at an iteration, the input update is not required at any time instant of that iteration. Hence, the actual number of learning-controller actions is decreased significantly while a satisfactory tracking performance is still guaranteed (see Figs. 5 and 6).


Fig. 9. The event-triggered iterations of agent 3 (inter-event interval vs. event-triggered iterations).

Fig. 10. The event-triggered iterations of agent 4 (inter-event interval vs. event-triggered iterations).

In order to illustrate the advantages of the proposed lifted ETILC, it is compared with the time-triggered lifted ILC, which is shown as follows:
$$\hat{\varphi}_{k+1,i}(t)=\hat{\varphi}_{k,i}(t)+\frac{\eta\big(\Delta y_{k,i}(t+1)-\hat{\varphi}_{k,i}(t)\,\Delta u_{k,i}(t)\big)\,\Delta u_{k,i}^{T}(t)}{\mu+\|\Delta u_{k,i}(t)\|^{2}} \tag{33}$$
$$u_{k+1,i}(T-1)=-K_{i}\,\zeta_{k,i} \tag{34}$$


where $\zeta_{k,i}=[\zeta_{k,i}(1),\zeta_{k,i}(2),\cdots,\zeta_{k,i}(T)]^{T}$, $\zeta_{k,i}(\tau)=\sum_{j\in N_i}a_{ji,k}\big(y_{k,i}(\tau)-y_{k,j}(\tau)\big)+d_{i,k}\big(y_{k,i}(\tau)-y_{d}(\tau)\big)$, $\tau=1,2,\ldots,T$.
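For completeness, this distributed consensus error can be computed from the iteration-varying adjacency weights and pinning gains as sketched below; the array layout is an assumption made only for the example.

```python
import numpy as np

def consensus_error(Y, yd, A, d, i):
    """Consensus error zeta_{k,i} of agent i at one iteration.
    Y: (N, T) follower outputs, yd: (T,) leader trajectory,
    A: (N, N) adjacency weights a_{ji,k}, d: (N,) pinning gains d_{i,k}."""
    N, T = Y.shape
    zeta = np.zeros(T)
    for j in range(N):                          # neighbour terms
        zeta += A[j, i] * (Y[i, :] - Y[j, :])
    zeta += d[i] * (Y[i, :] - yd)               # leader (pinning) term
    return zeta
```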
The simulation results are shown as the green dotted lines in Fig. 5. One can see from Fig. 5 that the consensus tracking error of the proposed lifted ETILC method is slightly worse than that of the time-triggered lifted ILC method, but the number of triggered iterations is greatly reduced. Within an acceptable tracking accuracy, the number of control updates is significantly reduced, which can be clearly seen from Table 1.
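The triggering-rate statistic $R_{ai}$ reported in Table 1 is simply the fraction of iterations at which agent $i$ actually updated its inputs; an illustrative one-line helper is:

```python
def triggering_rate(triggered_flags):
    """R_ai: fraction of iterations at which the event fired for agent i."""
    return sum(triggered_flags) / len(triggered_flags)

# e.g., 177 triggered iterations out of 600 gives R_a1 = 0.295 for agent 1
```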

7. Conclusions

In this work, a lifted ETILC method is proposed for learning consensus of a nonlinear
heterogeneous MAS with switching communication topologies. A linear data model with an
iterative identification method is presented to address the nonlinear and nonaffine structures of
the MASs. The event-triggered mechanism in the proposed lifted ETILC solves the problem
of limited resources in MASs by reducing the data transmission and controller updates on
the premise of guaranteeing the system stability and convergence. Considering the switch-
ing topologies, mathematic analysis is provided to show the convergence of the proposed
lifted ETILC theoretically. Further, both design and analysis of the lifted ETILC are model-
independent and simulation results illustrate the effectiveness of the lifted ETILC.
Theoretical extension of the event-triggered ILC to a MAS with a directed topology is still challenging because the adjacency matrix of a directed topological graph cannot be guaranteed to be symmetric and positive definite, which consequently prevents the event-triggering condition from being derived. This issue may be addressed by introducing other related mathematical tools in future work.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by National Science Foundation of China (61873139), Taishan
Scholar program of Shandong Province of China, the Natural Science Foundation of Shandong
Province of China (ZR2019MF036), and Natural Science and Engineering Research Council
of Canada.

Appendix A. Derivation Process of Linear Data Model

According to Ref. [34], system (1) can be transformed as follows:
$$y_{k,i}(t+1)=g_{t,i}\big(y_{k,i}(0),\,\boldsymbol{u}_{k,i}^{T}(t)\big) \tag{A.1}$$
where $g_{t,i}$ is a compound function of $f_i(\cdot)$, which means that the nonlinear function $g_{t,i}(\cdot)$ satisfies Assumption 1. In view of Assumption 1 and Eq. (A.1), we can derive
$$y_{k+1,i}(t+1)-y_{k,i}(t+1)=\frac{\partial g_{t,i}}{\partial y_{k+1,i}(0)}\,\Delta y_{k+1,i}(0)+\frac{\partial g_{t,i}}{\partial \boldsymbol{u}_{k+1,i}^{T}(t)}\,\Delta\boldsymbol{u}_{k+1,i}(t) \tag{A.2}$$


Further, according to Assumption 2 and (A.2), one can get the linear data model
$$y_{k+1,i}(t+1)=y_{k,i}(t+1)+\varphi_{k+1,i}(t)\,\Delta u_{k+1,i}(t) \tag{A.3}$$
where $\varphi_{k+1,i}(t)=\dfrac{\partial g_{t,i}}{\partial \boldsymbol{u}_{k+1,i}^{T}(t)}$.
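As a rough numerical illustration (not the identification scheme used in the paper): under the simplifying assumption that only the input at time $t$ differs between two consecutive iterations, $\varphi_{k+1,i}(t)$ in (A.3) reduces to an iteration-wise difference quotient.

```python
def phi_from_data(y_prev, y_curr, u_prev, u_curr, t, eps=1e-12):
    """Difference-quotient illustration of phi_{k+1,i}(t) in (A.3), valid only
    when the two recorded iterations differ solely in the input at time t."""
    dy = y_curr[t + 1] - y_prev[t + 1]   # iteration-wise output increment
    du = u_curr[t] - u_prev[t]           # iteration-wise input increment
    return dy / du if abs(du) > eps else 0.0
```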

Appendix B. Proof of Theorem 1

Subtracting $\varphi_{k+1,i}(t)$ from both sides of (3) and combining (2), one has
$$\tilde{\varphi}_{k+1,i}(t)=\tilde{\varphi}_{k,i}(t)-\frac{\eta\,\tilde{\varphi}_{k,i}(t)\,\Delta u_{k,i}(t)\,\Delta u_{k,i}^{T}(t)}{\mu+\|\Delta u_{k,i}(t)\|^{2}}-\Delta\varphi_{k+1,i}(t) \tag{B.1}$$
According to Remark 3, one can rewrite (B.1) as
$$\tilde{\varphi}_{k+1,i}(t)=\tilde{\varphi}_{k,i}(t)\left(I-\frac{\eta\,\Delta u_{k,i}(t)\,\Delta u_{k,i}^{T}(t)}{\mu+\|\Delta u_{k,i}(t)\|^{2}}\right) \tag{B.2}$$

Consider the following fact:
$$\left\|\tilde{\varphi}_{k,i}(t)\left(I-\frac{\eta\,\Delta u_{k,i}(t)\,\Delta u_{k,i}^{T}(t)}{\mu+\|\Delta u_{k,i}(t)\|^{2}}\right)\right\|^{2}
=\|\tilde{\varphi}_{k,i}(t)\|^{2}+\left(-2+\frac{\eta\|\Delta u_{k,i}(t)\|^{2}}{\mu+\|\Delta u_{k,i}(t)\|^{2}}\right)\times\frac{\eta\,\|\tilde{\varphi}_{k,i}(t)\,\Delta u_{k,i}(t)\|^{2}}{\mu+\|\Delta u_{k,i}(t)\|^{2}} \tag{B.3}$$
Since $0<\eta<2$ and $\mu>0$, we have $-2+\dfrac{\eta\|\Delta u_{k,i}(t)\|^{2}}{\mu+\|\Delta u_{k,i}(t)\|^{2}}<0$. So, we can derive from (B.3) that
$$\left\|\tilde{\varphi}_{k,i}(t)\left(I-\frac{\eta\,\Delta u_{k,i}(t)\,\Delta u_{k,i}^{T}(t)}{\mu+\|\Delta u_{k,i}(t)\|^{2}}\right)\right\|^{2}<\|\tilde{\varphi}_{k,i}(t)\|^{2} \tag{B.4}$$
According to (B.2) and (B.4), there exists a constant $0<\kappa_i<1$ such that
$$\|\tilde{\varphi}_{k+1,i}(t)\|\le\kappa_i\,\|\tilde{\varphi}_{k,i}(t)\|\le\cdots\le\kappa_i^{\,k+1}\,\|\tilde{\varphi}_{0,i}(t)\| \tag{B.5}$$

Since $\hat{\varphi}_{0,i}(t)$ is given bounded and $\varphi_{k,i}(t)$ is bounded, $\tilde{\varphi}_{0,i}(t)$ is bounded too. Thus, from (B.5), we can conclude that the parameter estimation error $\tilde{\varphi}_{k+1,i}(t)$ converges as the number of iterations increases.
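The contraction property established in (B.3)–(B.4) is easy to check numerically. The following sketch, with dimensions and random data chosen arbitrarily, verifies that the weighted estimation error never grows for $0<\eta<2$:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, mu = 0.1, 0.1                         # 0 < eta < 2 and mu > 0, as required
for _ in range(5):
    du = rng.normal(size=(3, 1))           # an arbitrary input increment
    phi_err = rng.normal(size=(1, 3))      # an arbitrary estimation error (row vector)
    A = np.eye(3) - eta * (du @ du.T) / (mu + np.linalg.norm(du) ** 2)
    # ||phi_err @ A|| should never exceed ||phi_err||, cf. (B.3)-(B.4)
    print(np.linalg.norm(phi_err @ A) <= np.linalg.norm(phi_err) + 1e-12)
```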

References

[1] D. Meng, K.L. Moore, Learning to cooperate: networks of formation agents with switching topologies, Auto-
matica 64 (2016) 278–293, doi:10.1016/j.automatica.2015.11.013.
[2] S. Arimoto, S. Kawamura, F. Miyazaki, Bettering operation of robots by learning, J. Robot. Syst. 1 (2) (1984)
123–140, doi:10.1002/rob.4620010203.
[3] M. Sun, T. Wu, L. Chen, G. Zhang, Neural AILC for error tracking against arbitrary initial shifts, IEEE Trans.
Neural Netw. Learn. Syst. 29 (7) (2018) 2705–2716, doi:10.1109/TNNLS.2017.2698507.
[4] X. Li, D. Shen, J.-X. Xu, Adaptive iterative learning control for MIMO nonlinear systems performing iteration-
varying tasks, J. Frankl. Inst. 356 (16) (2019) 9206–9231, doi:10.1016/j.jfranklin.2019.08.012.
[5] S. Mandra, K. Galkowski, E. Rogers, A. Rauh, H. Aschemann, Performance-enhanced robust iterative learning
control with experimental application to PMSM position tracking, IEEE Trans. Control Syst. Technol. 27 (4)
(2019) 1813–1819, doi:10.1109/TCST.2018.2816906.


[6] D. Huang, K. Wang, Y. Wang, H. Sun, X. Liang, T. Meng, Precise control for the size of droplet in t-
junction microfluidic based on iterative learning method, J. Frankl. Inst. 357 (9) (2020) 5302–5316, doi:10.
1016/j.jfranklin.2020.02.046.
[7] S. Sun, T. Endo, F. Matsuno, Iterative learning control based robust distributed algorithm for non-holonomic
mobile robots formation, IEEE Access 6 (2018) 61904–61917, doi:10.1109/ACCESS.2018.2876545.
[8] D. Huang, Y. Chen, D. Meng, P. Sun, Adaptive iterative learning control for high-speed train: a multi-agent
approach, IEEE Trans. Syst. Man Cybern. Syst. (2019) 1–11, doi:10.1109/TSMC.2019.2931289.
[9] P. Gu, S. Tian, Consensus tracking control via iterative learning for singular multi-agent systems, IET Control
Theory Appl. 13 (11) (2019) 1603–1611, doi:10.1049/iet-cta.2018.5901.
[10] J. Zhang, Y. Fang, C. Li, W. Zhu, Formation tracking via iterative learning control for multiagent systems with
diverse communication time-delays, Math. Probl. Eng. 2019 (2) (2019) 1–12, doi:10.1155/ 2019/ 8164297.
[11] Y.-H. Lan, B. Wu, Y.-X. Shi, Y.-P. Luo, Iterative learning based consensus control for distributed parameter
multi-agent systems with time-delay, Neurocomputing 357 (2019) 77–85, doi:10.1016/j.neucom.2019.04.064.
[12] T. Zhang, J. Li, Iterative learning control for multi-agent systems with finite-leveled sigma-delta quantization
and random packet losses, IEEE Trans. Circuits Syst. I Regul. Pap. 64 (8) (2017) 2171–2181, doi:10.1109/
TCSI.2017.2690689.
[13] X. Jin, Nonrepetitive leader-follower formation tracking for multiagent systems with LOS range and angle
constraints using iterative learning control, IEEE Trans. Cybern. 49 (5) (2019) 1748–1758, doi:10.1109/TCYB.
2018.2817610.
[14] S. Dong, X. Jian-Xin, Distributed learning consensus for heterogenous high-order nonlinear multi-agent systems
with output constraints, Automatica 97 (2018) 64–72, doi:10.1016/j.automatica.2018.07.030.
[15] L. Yang, Y. Jia, An iterative learning approach to formation control of multi-agent systems, Syst. Control Lett.
61 (1) (2012) 148–154, doi:10.1016/j.sysconle.2011.10.011.
[16] D. Meng, Y. Jia, J. Du, Robust consensus tracking control for multiagent systems with initial state shifts,
disturbances, and switching topologies, IEEE Trans. Neural Netw. Learn. Syst. 26 (4) (2015) 809–824, doi:10.
1109/TNNLS.2014.2327214.
[17] D. Meng, Y. Jia, J. Du, Consensus seeking via iterative learning for multi-agent systems with switching
topologies and communication time-delays, Int. J. Robust Nonlinear Control 26 (12) (2016) 3772–3790,
doi:10.1002/rnc.3534.
[18] D. Meng, Y. Jia, J. Du, J. Zhang, On iterative learning algorithms for the formation control of nonlinear
multi-agent systems, Automatica 50 (1) (2014) 291–295, doi:10.1016/j.automatica.2013.11.009.
[19] J. Li, J. Li, Iterative learning control approach for a kind of heterogeneous multi-agent systems with distributed
initial state learning, Appl. Math. Comput. 265 (2015) 1044–1057, doi:10.1016/j.amc.2015.06.035.
[20] C. Du, X. Liu, W. Ren, P. Lu, H. Liu, Finite-time consensus for linear multiagent systems via event-triggered
strategy without continuous communication, IEEE Trans. Control Netw. Syst. 7 (1) (2020) 19–29, doi:10.1109/
TCNS.2019.2914409.
[21] N. Lin, R. Chi, B. Huang, Event-triggered model-free adaptive control, IEEE Trans. Syst. Man Cybern. Syst.
(2019) 1–12, doi:10.1109/TSMC.2019.2924356.
[22] Y. Zhang, J. Sun, H. Liang, H. Li, Event-triggered adaptive tracking control for multiagent systems with unknown
disturbances, IEEE Trans. Cybern. 50 (3) (2020) 890–901, doi:10.1109/TCYB.2018.2869084.
[23] G. Zhao, C. Hua, X. Guan, A hybrid event-triggered approach to consensus of multi-agent systems with dis-
turbances, IEEE Trans. Control Netw. Syst. (2020), doi:10.1109/TCNS.2020.2972585.
[24] W. Xu, G. Hu, D.W.C. Ho, Z. Feng, Distributed secure cooperative control under denial-of-service attacks from
multiple adversaries, IEEE Trans. Cybern. 50 (8) (2020) 3458–3467, doi:10.1109/TCYB.2019.2896160.
[25] W. Xu, D.W.C. Ho, J. Zhong, B. Chen, Event/self-triggered control for leader-following consensus over
unreliable network with dos attacks, IEEE Trans. Neural Netw. Learn. Syst. 30 (10) (2019) 3137–3149,
doi:10.1109/TNNLS.2018.2890119.
[26] Y. Lei, Y.-W. Wang, W. Yang, Z.-W. Liu, Distributed control of heterogeneous multi-agent systems with unknown
control directions via event self-triggered communication, J. Frankl. Inst. (2020), doi:10.1016/j.jfranklin.2020.
08.043.
[27] J. Tang, L. Sheng, Iterative learning fault-tolerant control for networked batch processes with event-triggered
transmission strategy and data dropouts, Syst. Sci. Control Eng. Open Access J. 6 (3) (2018) 44–53, doi:10.
1080/21642583.2018.1532354.
[28] J. Chen, C. Hua, X. Guan, Fast data-driven iterative event-triggered control for nonlinear networked discrete
systems with data dropouts and sensor saturation, J. Frankl. Inst. 357 (13) (2020) 8364–8382, doi:10.1016/j.
jfranklin.2020.03.020.


[29] W. Xiong, X. Yu, R. Patel, W. Yu, Iterative learning control for discrete-time systems with event-triggered
transmission strategy and quantization, Automatica 72 (2016) 84–91, doi:10.1016/j.automatica.2016.05.031.
[30] T. Zhang, J. Li, Event-triggered iterative learning control for multi-agent systems with quantization, Asian J.
Control 20 (3) (2018) 1088–1101, doi:10.1002/asjc.1450.
[31] D. Liu, G. Yang, Event-based model-free adaptive control for discrete-time non-linear processes, IET Control
Theory Appl. 11 (15) (2017) 2531–2538, doi:10.1049/iet-cta.2016.1672.
[32] Z. Wei, L.H. Yam, L. Cheng, Narmax model representation and its application to damage detection for multi-
layer composites, Compos. Struct. 68 (1) (2005) 109–117, doi:10.1016/j.compstruct.2004.03.005.
[33] S.L. Kukreja, H.L. Galiana, R.E. Kearney, Narmax representation and identification of ankle dynamics, IEEE
Trans. Biomed. Eng. 50 (1) (2003) 70–81, doi:10.1109/TBME.2002.803507.
[34] R. Chi, Z. Hou, B. Huang, S. Jin, A unified data-driven design framework of optimality-based generalized
iterative learning control, Comput. Chem. Eng. 77 (2015) 10–23, doi:10.1016/j.compchemeng.2015.03.003.
[35] N. Lin, R. Chi, B. Huang, Linear time-varying data model-based iterative learning recursive least squares iden-
tifications for repetitive systems, IEEE Access 7 (2019) 133304–133313, doi:10.1109/ACCESS.2019.2941226.


IEEE TRANSACTIONS ON CYBERNETICS 1

Event-Triggered Model-Free Adaptive Iterative Learning Control for a Class of Nonlinear Systems Over Fading Channels
Xuhui Bu, Member, IEEE, Wei Yu, Qiongxia Yu, Zhongsheng Hou, Fellow, IEEE, and Junqi Yang

Abstract—This article investigates the problem of event- than that of the previous iteration. After decades of develop-
triggered model-free adaptive iterative learning control ment, ILC has obtained a wealth of theoretical research, and
(MFAILC) for a class of nonlinear systems over fading channels. has been widely used in the practical systems [2]–[8].
The fading phenomenon existing in output channels is modeled
as an independent Gaussian distribution with mathematical Recently, the research on adaptive ILC (AILC) and data-
expectation and variance. An event-triggered condition along driven ILC has attracted a lot of attention, for the reason that
both iteration domain and time domain is constructed in order most of the actual systems contain nonlinearities and uncer-
to save the communication resources in the iteration. The tainties, and may encounter the situation that the system model
considered nonlinear system is converted into an equivalent cannot be established. For instance, in [9], an AILC algo-
linearization model and then the event-triggered MFAILC
independent of the system model is constructed with the faded rithm is proposed for a class of discrete-time systems with
outputs. Rigorous analysis and convergence proof are developed parametric uncertainties, where the learning gain can be iter-
to verify the ultimately boundedness of the tracking error by atively tuned by means of recursive least squares schemes.
using the Lyapunov function. Finally, the effectiveness of the In [10], without the prior knowledge of the gain sign, a
presented algorithm is demonstrated with a numerical example novel AILC method is proposed for discrete-time systems to
and a velocity tracking control example of wheeled mobile
robots (WMRs). deal with the unknown control direction problem. In [11], a
robust ILC algorithm is developed for discrete-time nonlinear
Index Terms—Event-triggered mechanism, fading chan- systems with time-iteration-varying parameters. In [12] and
nels, iterative learning control (ILC), model free adaptive
control (MFAC). [13], the study of AILC extends to continuous-time systems
and multiagent systems, respectively. The advantage of AILC
is to design the control algorithm by estimating unknown
parameters. In addition, a series of data-driven ILC methods is
proposed to solve the challenge of unknown models. In [14],
I. I NTRODUCTION a data-driven optimal terminal ILC method for nonlinear
TERATIVE learning control (ILC) is proposed by
I Arimoto [1], which is an effective control method for
systems that perform repetitive control tasks in a finite interval.
systems with completely unknown model dynamics is stud-
ied. An enhanced data-driven optimal terminal ILC algorithm
is presented for nonlinear and nonaffine systems in [15], where
This method combines the output tracking error and control the ILC updates are constructed with a nonlinear learning gain.
input signals of the previous iteration to construct the control In [16], by using the dynamic linearization method, a novel
signal of current iteration, so as to obtain better control effect model-free adaptive ILC (MFAILC) method is developed with-
out relying on the system model information. The proposed
Manuscript received April 12, 2020; revised August 24, 2020 and November design in [16] is extended to nonlinear multiagent systems
9, 2020; accepted February 8, 2021. This work was supported in part by in [17]. Note that results in [9]–[11] and [14]–[17] assume
the National Natural Science Foundation of China under Grant U1804147,
Grant 61573129, Grant 61833001, and Grant 62003133; in part by the that the data transmission is complete.
Innovative Scientists and Technicians Team of Henan Polytechnic University In practical systems, due to the wide application of network
under Grant T2019-2; in part by the Innovative Scientists and Technicians technology, data dropouts, quantization, channels’ fading, and
Team of Henan Provincial High Education under Grant 20IRTSTHN019; in
part by the Natural Science Foundation of Henan Province of China under other problems caused by network insertion are inevitable in
Grant 202300410177; and in part by the Fundamental Research Funds for the process of data transmission [18]–[22]. So far, there has
the Universities of Henan Province under Grant NSFRF180335. This arti- also been considerable research of ILC on these network-
cle was recommended by Associate Editor Y. Shi. (Corresponding authors:
Qiongxia Yu; Xuhui Bu.) induced problems. For example, by applying the saturated and
Xuhui Bu, Wei Yu, Qiongxia Yu, and Junqi Yang are with the School quantized outputs, the data-driven ILC problem for a class
of Electrical Engineering and Automation, Henan Polytechnic University, of nonaffine nonlinear systems is studied in [23]. The ILC
Jiaozuo 454003, China (e-mail: buxuhui@gmail.com; yuwei5150@163.com;
qiongxiayu@hotmail.com). problem with stochastic signals is investigated in [24], which
Zhongsheng Hou is with the School of Automation, Qingdao University, includes stochastic data dropouts, systems noises, measure-
Qingdao 266071, China (e-mail: zhshhou@bjtu.edu.cn). ment noises, etc. The stochastic signals are described by a
Color versions of one or more figures in this article are available at
https://doi.org/10.1109/TCYB.2021.3058997. stochastic variable. In [25], a general framework of conver-
Digital Object Identifier 10.1109/TCYB.2021.3058997 gence analysis for all three kinds of data dropout model for the


problem of ILC under stochastic data dropout phenomenon is flexibility. Through rigorous theoretical analysis and simu-
proposed. Considering the channels’ fading phenomenon exist- lation experiments, it is verified that the proposed novel
ing in both input sides and output sides, a novel ILC algorithm MFAILC scheme can not only ensure system convergence but
for repetitive systems is proposed in [26] to guarantee system also save the communication bandwidth. The main contribu-
performance. However, the results of [24]–[26] only consider tions of this article are summarized as follows.
the linear systems and need to know the model information 1) This article studies the problem of MFAILC design
of the system in advance. This requirement is rather difficult for a class of unknown nonlinear systems under fading
to meet in many cases. channels. Due to the stochastic fading phenomenon, the
On the other hand, the ILC algorithms mentioned available information is not complete in the controller
in [2]–[16] and [23]–[26] perform a fixed update based on design. The relationship between the actual outputs of
time in each iteration step. However, in engineering fields, the system and the output signal received by the con-
frequent signal transmission in the wireless channel is likely troller is derived. Compared with the existing ILC results
to waste the valuable bandwidth, thus reducing the quality of for nonlinear systems, this algorithm uses faded signals
system communication. Furthermore, it is not always neces- to construct the controller and performs convergence
sary to update the ILC algorithm at a fixed time when the analysis in the sense of mathematical expectation.
signal variation of the controller is relatively small in some 2) This article proposes an event-triggered MFAILC algo-
continuous iterative steps. In recent years, event-triggered con- rithm and does not require any prior model information.
trol is considered to be a reliable method developed to save Different from most previous studies, the event-triggered
the transmission resources required for system performance, strategy in this article is operated along both iteration
and such event-triggered communication strategy has achieved domain and time domain, which combines the learn-
a wealth of theoretical results and full verification of prac- ing characteristics of ILC. Compared with the existing
tice. In [27], the event-triggered real-time scheduling for the results, it can not only ensure the system performance
problem of stabilizing control tasks to relax some periodic exe- but also save the valuable resources.
cution requirements is studied. In [28], a decentralized event- 3) Most of the existing data-driven ILC methods estab-
triggered implementation of centralized nonlinear controllers lish the recursive equation of the tracking error in the
is proposed over sensor/actuator networks. In [29], a new iteration domain, and then analyze the stability using
internal dynamic variable for a class of event-triggered con- the technique tool of contraction mapping. In this arti-
trol systems is produced. The effectiveness, design approaches, cle, after establishing an event-triggered mechanism, the
categories, and future challenges of the event-based control iteration domain-based convergence analysis is intro-
and filtering of networked systems are investigated in [30]. duced by the method of the Lyapunov function. It is
An overview of recent advances in the event-based strategy of worth noting that in the field of NCSs, there has been no
multiagent systems was provided in [31], including a basic previous research that combines fading phenomenon and
framework, existing methodologies, and in-depth analysis. event-triggered strategy under the same ILC framework
In [32] and [33], the event-triggered mechanism is introduced to consider.
into ILC algorithms, but it aimed at the linear plant whose The remainder of this article is organized as follows.
model is known. To the best of our knowledge, the majority of Section II reviews the MFAILC algorithm, models the fading
the literature on the event-triggered control scheme, including phenomenon, and proposes an event-triggered mechanism with
[27]–[33], all need the prior knowledge of the system to design adjustable parameters. The main results of this article are
and analyze the controller. In unreliable wireless networks shown in Section III. Two examples verified the reliability
where stochastic fading phenomenon occurs, how to design the of the proposed algorithms in Section IV and the conclusions
event-triggered ILC mechanism of nonlinear systems indepen- are concluded in Section V.
dent of the system model has not been fully explored. Hence,
there are two main difficulties to address.
1) Considering the repetitive characteristic of the system, II. P ROBLEM F ORMULATION
how to design the event-triggered mechanisms along A. System Descriptions
both the time domain and iteration domain?
Consider the following discrete-time repetitive SISO non-
2) For a class of nonlinear systems, how to expand the
linear system:
convergence analysis independent of the system model
when there is stochastic fading phenomenon in net-   
worked control systems (NCSs)? y(k + 1, i) = f y(k, i), . . . , y k − ny , i

Motivated by these observations, this article studies the u(k, i), . . . , u(k − nu , i) (1)
event-triggered MFAILC problem for a class of nonlinear
systems with fading measurements. Here, the stochastic faded where y(k, i), u(k, i) represent the system output and con-
outputs are described by the Rice fading model as an inde- trol input at the instant k of the ith iteration, respectively.
pendent Gaussian distribution. The event-triggered mechanism i ∈ {0, 1, 2, . . .}, k ∈ {0, 1, . . . , T}, T denotes a finite posi-
is used to execute the control task after the actuator meets tive integer. f (·) represents an unknown nonlinear function. ny
the predesigned triggering conditions, in which the trigger- and nu are two unknown positive integers, representing the
ing conditions have adjustable parameters to ensure sufficient unknown order of the system.

Before giving the equivalent linearization model of non-


linear systems, we first give the following two reasonable
assumptions.
Assumption 1: The partial derivatives of the function f (·)
with respect to the control inputs u(k, i) exist and are contin-
uous.
Assumption 2: The generalized Lipschitz condition is satisfied for system (1) for all $i \in \{0, 1, 2, \ldots\}$ and $k \in \{0, 1, \ldots, T\}$ whenever $|\Delta u(k, i)| \neq 0$, which means that, for all $k$ and $i$,
$$|\Delta y(k+1, i)| \le l\,|\Delta u(k, i)|$$
where $\Delta y(k+1, i)=y(k+1, i)-y(k+1, i-1)$, $\Delta u(k, i)=u(k, i)-u(k, i-1)$, and $l$ denotes a positive constant.
Remark 1: In the controller design process, Assumption 1
is a typical constraint condition for a class of nonlinear
systems. Assumption 2 is an effective control method to limit
the upper bound of the output signal’s change rate within a cer-
tain range. That is to say, if the input of the system changes in
the bounded range, the change of the output energy should be Fig. 1. Event-triggered MFAILC systems via fading channels.
produced in the bounded range in the system. This is true for
a large class of practical systems. Therefore, the above two z(k + 1, i) are transmitted to the controller, and the control
assumptions are reasonable from the point of view of engi- input u(k, i) is calculated and sent to ZOH. ZOH saves u(k, i)
neering applications and energy. In addition, we only need to and sends it to the controlled system. After the above steps
know the existence of l, not its exact numerical value. have run in the whole time domain, the next iteration begins
Based on these two assumptions, the following Theorem 1 running.
can be obtained.
Theorem 1: For the considered system (1), if both B. Fading Phenomenon Modeling
Assumptions 1 and 2 are satisfied with |u(k, i)| = 0, then
Due to the transmission limitations of wireless networks,
there exists a parameter φ(k, i) called pseudopartial derivative
such as physical environments and equipment, the output
(PPD) such that
information from the system may suffer from stochastic chan-
y(k + 1, i) = φ(k, i)u(k, i) ∀k ∈ {0, 1, . . . , T} (2) nels fading. According to the commonly used Rice fading
model, the measured output z(k, i) can be described by the
with |φ(k, i)| ≤ τ . τ denotes a positive constant.
actual output y(k, i) as follows:
Proof: The linearization model is mainly obtained by
using the Cauchy mean-value theorem, and the detailed proof z(k, i) = θ (k, i)y(k, i) (3)
process can be referred to [34].
Here, system (1) is converted into an equivalent linearization where θ (k, i) denotes the channels coefficient, which satisfies
data-relationship model in Theorem 1. All possible complex- the independent Gaussian distribution. θ (k, i) takes a value
ities, such as nonlinearities, are compressed into the slow in the interval [0, 1] and its mathematical expectation is θ̄.
time-varying scalar parameter PPD. Therefore, considering the phenomenon of channels fading
The system configuration in this article is shown in Fig. 1, and the given fading model (3), the received output y(k, i)
where z(k+1, i) is the faded output and z(k+1, i) is the incre- in the traditional MFAILC scheme is replaced by faded out-
ment of z(k +1, i). Due to the phenomenon of channels fading put z(k, i). Thus, the controlled system is converted into a
in unreliable networks, the event generator can only receive stochastic system.
z(k + 1, i) instead of the actual output y(k + 1, i). In Fig. 1, Remark 3: Most of the existing research on fading phe-
the event generator decides whether to update the controller at nomenon is carried out under the model-based control
each sampling instant by judging whether the triggering con- approach, so it is a necessary assumption that the mathematical
dition is met. If the condition is not met, the control signal of expectation is known. However, the event-triggered MFAILC
the previous iteration will be set as the current control input proposed in this article is a data-driven control scheme, which
by using the zero-order holder (ZOH). If the condition is met, means that the design and analysis process of the controller
then the “event” is triggered and the control input is updated. is independent of the system model information. Hence, it is
Remark 2: In this article, MFAILC and ZOH are both only necessary to know that θ̄ belongs to θ̄ ∈ [0, 1] and does
assumed to be event triggered, where ZOH is used to hold the
control input signal, that is, when the “event” is not triggered, Remark 4: Due to the complexity and unpredictability of
u(k, i) = u(k, i − 1). At the instant the event is triggered, the the environment in the monitoring area, the measured signals
steps are executed as: the memorizer is used to hold output from the controlled system may suffer from stochastic fading
data; thus, it can construct the increment z(k + 1, i) between or intermittent data dropouts. The fading model (3) consid-
two adjacent iterations. Next, the faded outputs z(k + 1, i) and ers the stochastic fading of the measured signal and has been

the value of the previous iteration of the same instant. In


i = it+1 th iteration, the condition is satisfied and the con-
troller is updated. Therefore, according to (4), before the next
triggering instant, the system satisfies
 
 (k, i) ≤ 2 1 + γ 2 (Q(k − 1, i))2 yd 2 (k)
Fig. 2. Update process and triggering intervals during iterations.  
− κ 1 − 2(1 − Q(k − 1, i))2 ζ 2 (k, i).
well adopted by many researches [26], [35]–[38]. As shown Because of the existing triggering conditions, it is basi-
in (3), the data transmitted over the fading channels are multi- cally impossible to transmit control signals to the system at
plied by a fading coefficient. Different values of the coefficient each sampling instant, so it is necessary to settle ZOH. In the
can represent different network-induced obstacles. If at some absence of the latest controller data transmitted to the channel,
instants in the iteration, θ (k, i) is equal to zero, it means the ZOH will transmit the corresponding signal in the triggering
completely fading phenomenon, in other words, the data are interval to maintain the normal operation of the system, that
lost in transmission. Thus, data dropout is a special case when is, u(k, i) = u(k, i − 1). The triggering rule (4) is repeat-
the fading coefficient is only taken to 0 or 1, which reflects edly judged by the event generator to determine the triggering
the universality and practicality of the model (3). instants of the next iterative transmission. By repeating the
above process in each iteration, the event-triggered control
C. Event-Triggered Mechanism Design strategy based on nonfixed time can be implemented.
In the event-triggered mechanism, the event generator sends Remark 5: The instants when the “event” is maintained are
its latest sampling data to the controller only when the trig- exactly the instants when the control signal change is small
gering condition is satisfied, that is, the system information is enough between two iterations. Therefore, the control signals
transmitted only when the tracking error triggers a predesigned discarded in the working process are those “unnecessary” data,
threshold. The triggering conditions are established as follows: which counteracts the influence of sampling error on stability
  to some extent. On the other hand, considering the learning
 (k, i) > 2 1 + γ 2 Q2 (k − 1, i)y2d (k)
  ability of the MFAILC algorithm to obtain information from
−κ 1 − 2(1 − Q(k − 1, i))2 ζ 2 (k, i) (4) the previous repeated operations, the system can constantly
correct the update of control signals to offset the impact of
where  (k, i) = z(k, it ) − z(k, i) denotes the sampling error. sampling error. Thus, the bad influence of sampling error can
(k, it ), t = 1, 2, . . . denotes the event-triggered iteration be effectively counteracted.
sequence at time k. t denotes the triggering instants in ith Remark 6: The cooperation of event generator and ZOH
iteration. γ , κ ∈ (0, 1) denotes the triggering parame- reduces the transmission energy consumption, but increases
2 2
ters. Q(k − 1, i) = ([ρ|φ̂(k − 1, i)| ]/[λ + |φ̂(k − 1, i)| ]). the computing cost in the network [39]. However, the energy
ζ (k, i) = yd (k) − z(k, i) represents the achievable tracking required to transmit a packet in the same environment is
errors. If condition (4) is satisfied, then the PPD’s estimation much higher than the energy required to compute a packet.
value φ̂(k, i) and control input u(k, i) are updated as well. It Therefore, it is reasonable and acceptable to design an event-
is judged at each instant that the controller works whether triggered mechanism to improve the rational use of the
the value of the sampling error exceeds a preset threshold. network resources.
Once the triggering condition is satisfied, the sampling error
brought by the event-triggered mechanism will return to zero; D. MFAILC Algorithm Design
thus, the impact of the sampling error on the system stability Considering the fading model (3) and event-triggered mech-
can be offset. Furthermore, the condition is model independent anism (4), this article proposed the following MFAILC algo-
with two tunable parameters κ and γ , which distinguishes it rithm:
from most of the existing literature. In the design of the event- ⎧

⎪ φ̂(k, it−1 ) + ηu(k,i−1) 2
triggered mechanism, it is a common requirement to balance ⎪
⎨ μ+|u(k,i−1)|
the computation amount of transmission and the stability of φ̂(k, i) = z(k + 1, i − 1)
⎪ × , i = it
the system. The added triggering parameters can be properly ⎪
⎪ − φ̂(k, i − 1)u(k, i − 1)

tuned to achieve this balance. φ̂(k, it−1 ), i ∈ (it−1 , it )
To better understand the execution of the event-triggered (5)
mechanism on the iteration domain, Fig. 2 illustrates the con-
φ̂(k, i) = φ̂(k, 1), if φ̂(k, i) ≤ ε or |u(k, i − 1)|
troller update process and triggering intervals during iterations.    
I1 and I2 are the triggering intervals and represent the number ≤ ε or sign φ̂(k, i) = sign φ̂(k, 1) (6)
of iterations between two adjacent triggering instants. In this ⎧
⎪ ρ φ̂(k,i)
article, it is assumed that in the first iteration, the controller ⎪ u(k, it−1 ) +
⎨ 2
is updated for the first time at each sampling instant, which λ+ φ̂(k,i)
u(k, i) =
means that i1 = 1 for any instant k. To be more specific, ⎪
⎪ × (yd (k + 1) − z(k + 1, i − 1)), i = it

as shown in Fig. 2, in i ∈ [it , it+1 )th iteration, the trigger- u(k, it−1 ), i ∈ (it−1 , it )
ing condition is not satisfied and the controller remains at (7)

where φ̂(k, i) is the estimation value of PPD. yd (k + 1) is the Assumption 3: For any i ∈ {0, 1, 2, · · · }, k ∈ {0, 1, . . . , T},
desired tracking trajectory. 0 < η, ρ ≤ 1 are the step size. the condition φ(k, i) > ε̄ > 0 or φ(k, i) < −ε̄ < 0 is satis-
λ, μ > 0 are both constants, representing the weighting fac- fied, where ε̄ is a positive constant, which is arbitrarily small.
tors. Equation (6) is the reset algorithm of φ̂(k, i). φ̂(k, 1) is Without losing the generality, the case of φ(k, i) > ε̄ > 0 is
the initial value of φ̂(k, i). ε denotes a small positive constant. considered in this article.
The control input and the estimation value of the PPD esti- Remark 9: The rationality of this assumption lies in the fact
mation value are updated at the triggering instants, otherwise, that this condition is a linear-like feature of the system (1),
they will remain at the value of the previous iteration. which is used to limit the change of control direction. It is
Remark 7: Note that the parameter estimation algorithm (5) similar to the assumption in the model-based control method
and control algorithm (7) are obtained by minimizing two that the direction of control is known or at least its sign is
performance indexes. Meanwhile, the PPD reset algorithm (6) unchanged. The practical meaning of this assumption is that
is designed to ensure the tracking ability of (6). The detailed the corresponding output of the system should not decrease
derivation can be referred to [14], [15], [21], and [22]. as the control input increases. It is not a severe assumption
Remark 8: The controller cannot receive the complete as many practical industrial systems, such as the temperature
actual output signal during the iteration process due to fading control system and the pressure control system, satisfy such a
phenomenon. Therefore, the system output y(k, i) cannot be property.
directly used in controller design and analysis. Furthermore, We now proceed to present Theorem 2 of this article.
the traditional method for convergence analysis of MFAILC Theorem 2: For the considered system (1) with fading phe-
is to establish the tracking error recursion equation in the nomenon (3) and the event-triggered MFAILC scheme (4)–(7)
iteration domain, and then carry out the analysis using the with Assumptions 1–3 and Lemma 1, if we choose μ, ρ >
compression mapping principle. The introduction of stochas- 0, 0 < η < 1, then the tracking error e(k, i) approaches zero
tic fading coefficient makes the controlled system a stochastic as the iteration process i approaches infinity.
system and makes the traditional stability analysis method Proof: First, we prove the boundedness of φ̂(k, i). If any
unfeasible. This brings new challenges to controller design one or more of the reset condition in reset algorithm (6) is
and convergence analysis. satisfied, the boundedness of φ̂(k, i) can be guaranteed. For
the other cases, we define the estimation error of φ(k, i) as
III. M AIN R ESULTS φ̃(k, i) = φ̂(k, i) − φ(k, i).
In this section, the main results are introduced to both Below, we decide the analysis process into two parts: 1) the
reduce the update amounts of the PPD’s estimation and control triggering instants and 2) other instants. First, consider the
input while ensuring the tracking performance of the systems triggering instants, (k, i) = (k, it ). According to the control
in the presence of channels fading under the event-triggered update rules, we can easily obtain
MFAILC schemes.
φ̂(k, i − 1) = φ̂(k, it−1 ), u(k, i − 1) = u(k, it−1 ). (8)
Before giving the main results of this article, we first give
the necessary lemmas and assumptions that need to be applied. The updating algorithm of φ̂(k, i) is reformulated as
Lemma 1: According to the fading model (3), the relation-
ship between the increment of the received output z(k, i) ηu(k, i − 1)
φ̂(k, i) = φ̂(k, i − 1) +
and the increment of the system output y(k, i) in the sense μ + |u(k, i − 1)|2
 
of mathematical expectation can be concluded as
× z(k + 1, i − 1) − φ̂(k, i − 1)u(k, i − 1)
E{z(k, i)} = θ̄ E{y(k, i)} (9)
where E{·} denotes mathematical expectation. θ̄ denotes the Subtract φ(k, i) at each side of (9) and we can obtain
expectation of θ (k, i), which is unknown but belongs to the
ηu(k, i − 1)
interval [0, 1]. φ̃(k, i) = φ̃(k, i − 1) +
Proof: From the definition of channels fading (3) and the μ + |u(k, i − 1)|2
 
independence of the coefficients θ (k, i), we can obtain × z(k + 1, i − 1) − φ̂(k, i − 1)u(k, i − 1)
E{z(k, i)} = θ̄ E{y(k, i)}. + φ(k, i − 1) − φ(k, i). (10)

Furthermore, the mathematical expectation of z(k) Combining with Lemma 1 and considering (10) under the
yields that sense of mathematical expectation, one has
 
E{z(k, i)} = E{z(k, i) − z(k, i − 1)} E φ̃(k, i) = E φ̃(k, i − 1)
= E{θ (k, i)y(k, i) − θ (k, i − 1)y(k, i − 1)} + E{φ(k, i − 1) − φ(k, i)}
⎧ ⎫
= θ̄ E{y(k, i)}. ⎪ ηu(k,i−1)
⎨ μ+|u(k,i−1)|2 ⎪

+E θ̄ y(k + 1, i − 1) .
In addition, according to the knowledge of relevant mathe- ⎪
⎩× ⎪

matics, we can easily obtain that θ̄ ∈ [0, 1]. This is the end −φ̂(k, i − 1)u(k, i − 1)
of the proof. (11)

Substituting (2) into (11), we have Next, we continue to prove the convergence of the tracking
  error.
E φ̃(k, i) = E φ̃(k, i − 1) First, e(k, i) = yd (k)−y(k, i) is defined as the tracking error
+ E{φ(k, i − 1) − φ(k, i)} of (1). At the triggering instants, such as i = it , according to
⎧ ⎫ the linearization data model (2) and the control input update
⎨ P(k,i − 1) ⎬
+E θ̄ − 1 φ(k, i − 1) algorithm (7), e(k, i) can be reformulated as
⎩× ⎭ ⎛ ⎞
−φ̃(k, i − 1)
 ⎜ ρφ(k − 1, i)φ̂(k − 1, i) ⎟
= (1 − P(k, i − 1))E φ̃(k, i − 1) e(k, i) = ⎝1 − θ (k, i − 1) 2 ⎠e(k, i − 1)
  λ + φ̂(k − 1, i)
+ θ̄ − 1 P(k, i − 1)E{φ(k, i − 1)}
+ E{φ(k, i − 1) − φ(k, i)} (12) ρφ(k − 1, i)φ̂(k − 1, i)
− 2
(1 − θ (k, i − 1))yd (k).
where P(k, i−1) = ([η|u (k, i−1)|2 ]/[μ+|u(k, i − 1)|2 ]). λ + φ̂(k − 1, i)
Now, we discuss the boundedness of P(k, i − 1). Choosing (15)
suitable η and μ, such as 0 < η ≤ 1, 0 < μ, it is then easily
We have proved that φ̃(k, i) is bounded. Since φ(k, i) is a
observed that
slow time-varying parameter, it is reasonable to use a large
η|u(k, i − 1)|2 < |u(k, i − 1)|2 < μ + |u(k, i − 1)|2 . amount of I/O data to estimate φ(k, i) and replace it with
φ̂(k, i) in the parameter estimation stage. Correspondingly,
Furthermore, considering that the value of |u(k, i)| is (15) becomes
bounded, there exists positive constants d1 and q1 satisfy
e(k, i) = (1 − θ (k, i − 1)Q(k − 1, i))e(k, i − 1)
0 < q1 ≤ P(k, i − 1) < 1 − Q(k − 1, i)(1 − θ (k, i − 1))yd (k) (16)
0 ≤ |1 − P(k, i − 1)| ≤ d1 < 1. 2 2
where Q(k − 1, i) = ([ρ|φ̂(k − 1, i)| ]/[λ + |φ̂(k − 1, i)| ]).
In addition, considering the relationship between φ̂(k, i − 1) Consider the independence between θ (k, i) and y(k, i), and
and the upper bound of |φ(k, i)| ≤ σ , one has take the mathematical expectation of both sides
    
φ̂(k, it−1 ) − φ(k, it−1 ) E{e(k, i)} = 1 − θ̄ Q(k − 1, i) E{e(k, i − 1)}
E φ̃(k, i − 1) = E  
+φ(k, it−1 ) − φ(k, i − 1) − 1 − θ̄ Q(k − 1, i)E{yd (k)}. (17)

≤ E φ̃(k, it−1 ) + E{φ(k, it−1 )} The Lyapunov function is chosen as follows:

+ E{φ(k, i − 1)} V(k, i) = E e2 (k, i) .

≤ E φ̃(k, it−1 ) + 2σ. (13)
The difference of V(k, i) in the direction of iteration
We have discussed the range of θ̄ , which belongs to [0, 1]. becomes
 
Combining with (13) and taking the absolute value in both V(k, i) = E e2 (k, i) − E e2 (k, i − 1) . (18)
sides of (12)
  Substituting (16) and (17) into (18) and applying inequality
E φ̃(k, i) ≤ |1 − P(k, i − 1)|E φ̃(k, i − 1) ab ≤ (1/2)a2 + (1/2)b2 , where a and b are two real numbers,
V(k, i) yields
+ θ̄ − 1 |P(k, i − 1)|E{|φ(k, i − 1)|}  
+ E{|φ(k, i − 1)|} + E{|φ(k, i)|} V(k, i) = E e2 (k, i) − E e2 (k, i − 1)
  
≤ d1 E φ̃(k, it−1 ) + 5σ (1 − θ (k, i − 1)Q(k − 1, i))e(k, i − 1)
2
 =E
−Q(k − 1, i)(1 − θ (k, i − 1))yd (k)
≤ d1 2 E φ̃(k, it−2 ) + 5d1 σ + 5σ 
≤ ··· − E e2 (k, i − 1)
     2  
5σ 1 − d1 t−1 ≤ − 1 − 2 1 − θ̄ Q(k − 1, i) E e2 (k, i − 1)
≤ d1 t−1
E φ̃(k, i1 ) + . (14) 
1 − d1   2
+ 2 1 − θ̄ Q(k − 1, i) E yd 2 (k) . (19)
Since d1 < 1, E{|φ̃(k, i)|} is bounded for all i ∈ {0, 1, 2, . . .}
at each triggering instant according to (14). Furthermore, since Due to the fading phenomenon, the actual output cannot be
E{|φ(k, i)|} is known to be bounded, we can obtain that directly obtained. Since ζ (k, i) = yd (k) − z(k, i), one has
  2 
E{|φ̂(k, i)|} is bounded as well from the relationship between −1
E e (k, i − 1) = E yd (k) − θ (k, i − 1)z(k, i − 1)
2
φ(k, i) and φ̂(k, i).
In the other case, during the triggering intervals, that is,   −1 

i ∈ (it−1 , it ), the estimation value of PPD remains unchanged = E yd (k) + θ + θ̄
2 2
z (k, i − 1)
2

from (6). Clearly, E{|φ̂(k, i)|} is bounded. Thus, we can obtain 


the boundedness of E{|φ̂(k, i)|}. − E 2θ̄ −1 yd (k)z(k, i − 1) .

The relationship between e(k, i) and ζ (k, i) under the sense to the phenomenon of stochastic fading, the nonlinear system
of mathematical expectation is derived as becomes a stochastic one. Hence, the recursive relationship
 of PPD and tracking error along the time axis is established
E e2 (k, i − 1) in the sense of mathematical expectation. Moreover, to obtain
  −1 
∗ the convergence of the tracking error, the Lyapunov function
= E ζ (k, i − 1) + 1 + θ + θ̄
2 2
z (k, i − 1)
2
is also constructed in the sense of mathematical expectation,
   which is more challenging than the existing analytical methods
+ E 2 1 − θ̄ −1 yd (k)z(k, i − 1) of deterministic system.
  −1 Remark 11: According to the triggering conditions, when
≤ E ζ 2 (k, i − 1) + 1 + θ ∗ + θ̄ 2 z2 (k, i − 1) the system error is relatively small, the use of the event-
  triggered mechanism can avoid “unnecessary” frequent data
+ 2 θ̄ −1 − 1 |yd (k)z(k, i − 1)|. transmission. Obviously, the triggering sequence obtained in
If we choose appropriate parameters ρ and λ, the range this article is only a subsequence of the sampling time.
2
of 0 < 1 − 2(1 − θ̄ Q(k − 1, i)) < 1 can be guaranteed. Compared with the fixed time-triggered mechanism, the com-
Furthermore, at the triggering instants i = it , according to the munication frequency is reduced.
triggering conditions (4),  (k, it ) = 0. One has
  IV. S IMULATION R ESULTS
2 1 + γ 2 (Q(k − 1, i))2 yd 2 (k) In this section, both the two given examples have verified
 
the effectiveness of (5)–(7), including a numerical example
< κ 1 − 2(1 − Q(k − 1, i))2 ζ 2 (k, i).
and a velocity tracking control example of wheeled mobile
Since θ̄ ∈ [0, 1], 2(1 + γ 2 )Q2 (k − 1, i)yd 2 (k) > robots (WMRs).
2 Example 1 (SISO Nonlinear System): In this example, we
2((1 − θ̄ )Q(k − 1, i)) yd 2 (k) is easily derived. Furthermore,
one has simulate the following nonlinear second-order system:
 −1 y(k)
1 + θ ∗ + θ̄ 2 z2 (k, i − 1) y(k + 1) = + u2 (k), 0 < k ≤ 600. (21)
1 + y2 (k)
 
+ 2 θ̄ −1 − 1 |yd (k)z(k, i − 1)| > 0 It should be noted that the design process of controller does
not rely on the model information of (21). The purpose of
and giving the dynamic model of the system here is only to pro-
 2 vide the I/O information needed for the controller design. The
(1 − Q(k − 1, i))2 > 1 − θ̄ Q(k − 1, i) .
design of the controller does not use any model information of
From Lemma 1, we can derive the following relationship: the system, including system order, nonlinear properties, and
  so on. The following desired trajectory is considered:
E e2 (k, i − 1) − κE e2 (k, i − 1)
 yd (k) = 0.7 sin(kπ/30) + 0.3 cos(kπ/10), 0 < k ≤ 600.
= (1 − κ)E e2 (k, i − 1) ≥ 0. (20)
The initial condition y(1, i) is set to zero for any iteration i
Furthermore, combining with (4), (19), and (20), one has and the control signal is set as u(k, 1) = 0, k ∈ {0, 1, . . . , T}.
The other controller parameters in (5)–(7) are selected as
V(k, i) ≤ 0.
T = 600, ρ = 0.5, λ = 0.7, μ = η = 1, and ε = 10−5 .
The results have shown that the tracking errors of the system Considering the fading phenomenon, we set this phenomenon
can converge to zero at all triggering instants. For the other as an independent Gaussian distribution with expectation θ̄ =
cases, that is, it−1 < i < it , the control input keeps the same as 0.98 and variance θ ∗ = 0.01 in simulations. For the purpose
the previous iteration, so it is reasonable to deduce the results of reducing the system’s limited bandwidth usage, an event
that the tracking error is still convergent during the triggering generator is constructed. Rather than periodically transmit-
intervals. ting the data of each sampling, this event-triggered mechanism
In conclusion, the tracking errors e(k, i) of the considered determines the control update based on whether the triggering
system can converge to zero under the presented event- threshold is reached or not. The parameters in triggering con-
triggered MFAILC control algorithms when there is channel ditions are selected as κ = 0.6 and γ = 0.5. The results of
fading in the process of signal transmission. Therefore, the the response of the system outputs and the maximum tracking
algorithm proposed in this article can save the limited band- error in the iterative process are plotted in Figs. 3 and 4.
width resources on the premise of ensuring the desired stability Fig. 3 shows the tracking effect of the system outputs during
of the system. It can be observed that we did not mention iterations. It can be seen that at the end of the 5th iteration, the
the requirement of the consistency of the initial conditions system output is still quite different from the desired trajec-
of iteration, which is different from most existing research tory. By the end of 15th iteration, the tracking was better,
on ILC. This is another property of the proposed MFAILC but still less than ideal. After 60 iterations of the system,
algorithm. The proof ends here. the output trajectory has achieved complete tracking of the
Remark 10: The convergence of MFAILC is dependent on desired trajectory over the whole time domain, that is, within
the input and output (I/O) data information of the system. Due k = 1 : 600, the tracking error is zero for each instant, which

Fig. 3. Output trajectories at the 5th, 15th, and 60th iterations.


Fig. 5. Comparison of tracking errors under different triggering parameters.

Fig. 4. Maximum tracking errors along iteration.


Fig. 6. Event-triggered instants and intervals at k = 10.

In Fig. 4, we aim to verify the effect of the event-triggered mechanism and the fading phenomenon on the convergence of the algorithm as a comparison. There are three maximum-error curves in Fig. 4 with κ = 0.6 and γ = 0.5: one curve represents the error evolution under the time-triggered mechanism when there is no fading phenomenon, and the other two curves describe the error evolution under the time-triggered and event-triggered mechanisms when there is a fading phenomenon. The performance of the fixed time-triggered mechanism is slightly better than that of the event-triggered mechanism. Fig. 4 shows that although the maximum errors of the system can converge to near zero, channel fading has a significant impact on the final tracking errors. The feature of an event-triggered mechanism is that it can reduce the bandwidth consumption of the system and improve the utilization rate on the premise of ensuring stable performance of the system. By choosing appropriate parameters, a balance can be achieved between the update rate and the stability of the system.

On the other hand, we explore the effect of the event-triggered mechanism on system stability and bandwidth utilization efficiency. In order to eliminate the adverse influence of channel fading on system performance, in the following simulations we choose θ̄ = 1 and θ* = 0, that is, the signal is completely received during transmissions. First, the influence on stability is explored. In Fig. 5, we selected four different sets of event-triggered parameters and the results show the maximum tracking errors of the system. It can be seen that although the parameters are selected differently, the four tracking-error curves over the 60 iterations are basically consistent with the error curves under the time-triggered mechanism. The results in Fig. 5 imply that the tracking performance of the system does not deteriorate under different triggering parameters. In addition, because the signal does not fade during transmission, the final tracking error of the system is smaller.

Then, we verify the effect of this mechanism on bandwidth utilization, and the results are shown in Figs. 6 and 7. Figs. 6 and 7 plot the event-triggered instants and intervals for k = 10 and k = 200 with κ = 0.8 and γ = 0.5 during the iteration process, respectively. The horizontal axis denotes the triggering numbers in the iteration. The length of the sticks represents the triggering interval, which refers to the number of iterations between two successive triggering instants at the same time instant, following the same definition as I1 and I2 in Fig. 2.
Fig. 7. Event-triggered instants and intervals at k = 200.
Fig. 8. Comparison of outputs under MFAILC and ILC.

TABLE I. Comparison of Transmission Quantity and Transmission Rate.

Fig. 9. Triggering times along the iteration domain.


According to simple statistics from Figs. 6 and 7, over the 60 iterations, when k = 10 the event was triggered 39 times, a reduction of 35%; at k = 200 it was triggered 24 times, a 48% reduction. The results in Figs. 6 and 7 imply that the triggering intervals at both instants are significantly enlarged in the middle and late stages of the iteration process. The reason is that, as the iterative process continues, the maximum tracking error over the whole time domain gradually stabilizes near zero, which greatly increases the difficulty of reaching the threshold. It can be concluded that the event-triggered strategy can significantly decrease the number of data transmissions, increase the transmission interval, and ensure an efficient use of communication resources compared with the fixed time-triggered mechanism.

Furthermore, we explore the impact of different triggering parameters on the number of data transmissions. The comparison of transmission times and transmission ratios throughout the process is shown in Table I. By calculating the data transfer rate, we find that the data transfer rate over a total of 36 000 sampling times can be adjusted over a wide range by selecting different triggering parameters. This result shows that our proposed event-triggered mechanism utilizes communication resources more efficiently than traditional time-triggered methods by providing larger transmission intervals.

In addition, to illustrate the advantages of the proposed event-triggered MFAILC compared with existing ILC studies, we compared it with P-type ILC, and the results are shown in Figs. 8 and 9. Fig. 8 shows the system outputs after the 40th iteration. It can be seen that after 40 iterations, both algorithms can realize complete tracking of the desired trajectory. The tracking performance of the two algorithms is comparable. Fig. 9 plots the comparison of the triggering times under the two different control algorithms. The time-triggered ILC scheme triggered 36 000 times in total, while the event-triggered MFAILC scheme triggered 14 598 times in total, only 40.55% of the former. Combining the two figures, the event-triggered MFAILC algorithm proposed in this article can not only achieve effective trajectory tracking but also significantly reduce the transmission rate in the system, effectively saving bandwidth resources.

To further explore the advantages of MFAILC, we compare it with model-free adaptive control (MFAC). The results in Fig. 10 show that, compared with MFAC, the system achieved complete tracking over the entire time domain by the 40th iteration under the MFAILC algorithm, which indicates that MFAILC has better control ability.
Fig. 10. Comparison of outputs under MFAILC and MFAC.


Fig. 12. Velocity tracking in 15th iteration.

Fig. 13. Maximum tracking errors along iterations.

Fig. 11. Structure of WMRs.


Example 2 (Velocity Tracking Control of WMRs): We simulate the velocity tracking control of WMRs in this example. The structure of the WMRs is shown in Fig. 11. This type of WMR is a typical nonholonomic, underactuated system and its dynamic model is as follows:

V̇(k, i) = (1/(m I r)) [I, I; −mR, mR] τ(k, i)   (22)

where V = [v w]^T denotes the input vector of the WMRs' kinematical model, v and w are the linear and angular velocities, respectively, m, I, 2r, and 2R are the mass, the moment of inertia, the driving wheels' diameter, and the length of the driving axle, respectively, and τ(k, i) is the input torque vector. The detailed derivation of the above model (22) can be found in [40].

In this simulation example, the parameters of the WMRs are m = 15 kg, I = 10 kg·m², R = 0.15 m, and r = 0.1 m. The desired velocity trajectories are selected as

Vd(k) = [vd(k), wd(k)]^T = [1.2 + 0.4 sin(k − π/2), 0.3 sin(0.4k)]^T

and V(0, i) = Vd(0) = 0. The controller parameters are selected as T = 600, ρv = ρw = 1, λv = λw = 26, μv = μw = ηv = ηw = 1, εv = εw = 10⁻⁶, κv = κw = 0.75, and γv = γw = 0.024. The fading phenomenon is selected as θ̄ = 0.98 and θ* = 0.01. The simulation results are shown in Figs. 12–14.

Fig. 12 plots the velocity tracking in the 50th iteration, Fig. 13 shows the maximum velocity tracking errors along the iteration axis, and Fig. 14 plots the triggering instants and intervals. Figs. 12–14 show that as the iteration continues, the velocity of the WMRs almost coincides with the desired velocity. The fixed time-based update is well replaced by the novel event-based update. The above simulation results have verified that the presented MFAILC scheme can effectively deal with the fading phenomenon. Furthermore, the flexible event-triggered control strategy can effectively save valuable networked resources without deteriorating the system tracking performance compared with the existing fixed-time mechanism.
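For reference, a small Python sketch of the WMR dynamic model (22) with the parameter values listed above; the Euler step length dt and the test torque are illustrative assumptions.

```python
# Sketch of model (22): V_dot = (1/(m*I*r)) * [[I, I], [-m*R, m*R]] * tau,
# with m = 15 kg, I = 10 kg·m^2, R = 0.15 m (half axle length), r = 0.1 m (wheel radius).
import numpy as np

m, I_z, R, r = 15.0, 10.0, 0.15, 0.10
G = (1.0 / (m * I_z * r)) * np.array([[I_z, I_z],
                                      [-m * R, m * R]])

def wmr_step(V, tau, dt=0.01):
    """One Euler step of V = [v, w]^T driven by the torque vector tau."""
    return V + dt * (G @ tau)

def V_d(k):
    """Desired velocities from this example."""
    return np.array([1.2 + 0.4 * np.sin(k - np.pi / 2), 0.3 * np.sin(0.4 * k)])

V = wmr_step(np.zeros(2), tau=np.array([0.5, 0.4]))
print(V, V_d(0))
```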
Fig. 14. Event-triggered instants and intervals at k = 60.
V. CONCLUSION

A novel event-triggered MFAILC for wireless NCSs with fading measurements has been presented in this article, in which the controller design process only applies the I/O information. The event-triggered condition is based on both the time domain and the iteration domain, which reduces the amount of data transmission in the control process. Moreover, the fading phenomenon is described by Rice fading models and the adverse effects of this phenomenon on the system have been rigorously analyzed. By employing the Lyapunov function, the convergence of the considered systems has been guaranteed. Finally, the results of a numerical example and a WMRs simulation have demonstrated that the presented MFAILC algorithm has the ability to save the consumption of system resources without compromising the stability of the system.

REFERENCES

[1] S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning," J. Robot. Syst., vol. 1, no. 2, pp. 123–140, Jun. 1984.
[2] D. A. Bristow, M. Tharayil, and A. G. Alleyne, "A survey of iterative learning control: A learning-based method for high-performance tracking control," IEEE Control Syst. Mag., vol. 26, no. 3, pp. 96–114, Jun. 2006.
[3] H.-S. Ahn, Y. Chen, and K. L. Moore, "Iterative learning control: Brief survey and categorization," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 37, no. 6, pp. 1099–1121, Nov. 2007.
[4] J.-X. Xu, "A survey on iterative learning control for nonlinear systems," Int. J. Control, vol. 84, no. 7, pp. 1275–1294, Jun. 2011.
[5] Y. Wang, F. Gao, and F. J. Doyle III, "Survey on iterative learning control, repetitive control, and run-to-run control," J. Process Control, vol. 19, no. 10, pp. 1589–1600, Dec. 2009.
[6] D. Meng and K. L. Moore, "Robust iterative learning control for nonrepetitive uncertain systems," IEEE Trans. Autom. Control, vol. 62, no. 2, pp. 907–913, Feb. 2017.
[7] D. Huang, J.-X. Xu, V. Venkataramanan, and T. C. T. Huynh, "High-performance tracking of piezoelectric positioning stage using current-cycle iterative learning control with gain scheduling," IEEE Trans. Ind. Electron., vol. 61, no. 2, pp. 1085–1098, Feb. 2014.
[8] D. Meng and K. L. Moore, "Convergence of iterative learning control for SISO nonrepetitive systems subject to iteration-dependent uncertainties," Automatica, vol. 79, pp. 167–177, May 2017.
[9] R. Chi, Z. Hou, and J. Xu, "Adaptive ILC for a class of discrete-time systems with iteration-varying trajectory and random initial condition," Automatica, vol. 44, no. 8, pp. 2207–2213, Aug. 2008.
[10] W. Yan and M. Sun, "Adaptive iterative learning control of discrete-time varying systems with unknown control direction," Int. J. Adapt. Control, vol. 27, no. 4, pp. 340–348, Apr. 2013.
[11] M. Yu and C. Li, "Robust adaptive iterative learning control for discrete-time nonlinear systems with time-iteration-varying parameters," IEEE Trans. Syst., Man, Cybern., Syst., vol. 47, no. 7, pp. 1737–1745, Jul. 2017.
[12] H. Ji, Z. Hou, and R. Zhang, "Adaptive iterative learning control for high-speed trains with unknown speed delays and input saturations," IEEE Trans. Autom. Sci. Eng., vol. 13, no. 1, pp. 260–273, Jan. 2016.
[13] D. Shen and J.-X. Xu, "Distributed learning consensus for heterogeneous high-order nonlinear multi-agent systems with output constraints," Automatica, vol. 97, pp. 64–72, Nov. 2018.
[14] R. Chi, D. Wang, Z. Hou, and S. Jin, "Data-driven optimal terminal iterative learning control," J. Process Control, vol. 22, no. 10, pp. 2026–2037, Dec. 2012.
[15] R. Chi, Z. Hou, S. Jin, D. Wang, and C.-J. Chien, "Enhanced data-driven optimal terminal ILC using current iteration control knowledge," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 11, pp. 2939–2948, Nov. 2015.
[16] R.-H. Chi and Z.-S. Hou, "Dual-stage optimal iterative learning control for nonlinear non-affine discrete-time systems," Acta Autom. Sinica, vol. 33, no. 10, pp. 1061–1065, Oct. 2007.
[17] X. Bu, Q. Yu, Z. Hou, and W. Qian, "Model free adaptive iterative learning consensus tracking control for a class of nonlinear multiagent systems," IEEE Trans. Syst., Man, Cybern., Syst., vol. 49, no. 4, pp. 677–686, Apr. 2019.
[18] J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, "A survey of recent results in networked control systems," Proc. IEEE, vol. 95, no. 1, pp. 138–162, Jan. 2007.
[19] X.-M. Zhang, Q.-L. Han, and X. Yu, "Survey on recent advances in networked control systems," IEEE Trans. Ind. Informat., vol. 12, no. 5, pp. 1740–1752, Oct. 2016.
[20] G. C. Walsh, H. Ye, and L. G. Bushnell, "Stability analysis of networked control systems," IEEE Trans. Control Syst. Technol., vol. 10, no. 3, pp. 438–446, May 2002.
[21] W. Zhang, M. S. Branicky, and S. M. Phillips, "Stability of networked control systems," IEEE Control Syst. Mag., vol. 21, no. 1, pp. 84–99, Feb. 2001.
[22] R. A. Gupta and M.-Y. Chow, "Networked control system: Overview and research trends," IEEE Trans. Ind. Electron., vol. 57, no. 7, pp. 2527–2535, Jul. 2010.
[23] X. Bu, Z. Hou, Q. Yu, and Y. Yang, "Quantized data driven iterative learning control for a class of nonlinear systems with sensor saturation," IEEE Trans. Syst., Man, Cybern., Syst., vol. 50, no. 12, pp. 5119–5129, Dec. 2020, doi: 10.1109/TSMC.2018.2866909.
[24] D. Shen and Y. Wang, "Survey on stochastic iterative learning control," J. Process Control, vol. 24, no. 12, pp. 64–77, Dec. 2014.
[25] D. Shen and J.-X. Xu, "A framework of iterative learning control under random data dropouts: Mean square and almost sure convergence," J. Adapt. Control Signal Process., vol. 31, no. 12, pp. 1825–1852, Aug. 2017.
[26] D. Shen and G. Qu, "Performance enhancement of learning tracking systems over fading channels with multiplicative and additive randomness," IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 4, pp. 1196–1210, Apr. 2020.
[27] P. Tabuada, "Event-triggered real-time scheduling of stabilizing control tasks," IEEE Trans. Autom. Control, vol. 52, no. 9, pp. 1680–1685, Sep. 2007.
[28] M. Mazo and P. Tabuada, "Decentralized event-triggered control over wireless sensor/actuator networks," IEEE Trans. Autom. Control, vol. 56, no. 10, pp. 2456–2461, Oct. 2011.
[29] A. Girard, "Dynamic triggering mechanisms for event-triggered control," IEEE Trans. Autom. Control, vol. 60, no. 7, pp. 1992–1997, Jul. 2015.
[30] L. Zou, Z.-D. Wang, and D.-H. Zhou, "Event-based control and filtering of networked systems: A survey," Int. J. Autom. Comput., vol. 14, no. 3, pp. 239–253, May 2017.
[31] L. Ding, Q.-L. Han, X. Ge, and X.-M. Zhang, "An overview of recent advances in event-triggered consensus of multiagent systems," IEEE Trans. Cybern., vol. 48, no. 4, pp. 1110–1123, Apr. 2018.
[32] W. Xiong, X. Yu, R. Patel, and W. Yu, "Iterative learning control for discrete-time systems with event-triggered transmission strategy and quantization," Automatica, vol. 72, pp. 84–91, Oct. 2016.
[33] T. Zhang and J. Li, "Event-triggered iterative learning control for multiagent systems with quantization," Asian J. Control, vol. 20, no. 3, pp. 1088–1101, May 2018.
[34] S. Jin, Z. Hou, and R. Chi, "Data-driven model-free adaptive iterative learning control for a class of discrete-time nonlinear systems," Control Theory Appl., vol. 29, no. 8, pp. 1001–1009, Aug. 2012.
[35] G. G. Qu and D. Shen, "Stochastic iterative learning control with faded signals," IEEE/CAA J. Autom. Sinica, vol. 6, no. 5, pp. 1196–1208, Sep. 2019.
[36] N. Elia, "Remote stabilization over fading channels," Syst. Control Lett., vol. 54, no. 3, pp. 237–249, Mar. 2005.
[37] Q. Liu, Z. Wang, X. He, and D. H. Zhou, "Event-based distributed filtering with stochastic measurement fading," IEEE Trans. Ind. Informat., vol. 11, no. 6, pp. 1643–1652, Dec. 2015.
[38] D. Zhao, Z. Wang, D. Ding, and G. Wei, "H∞ PID control with fading measurements: The output-feedback case," IEEE Trans. Syst., Man, Cybern., Syst., vol. 50, no. 6, pp. 2170–2180, Jun. 2020.
[39] H. Yan, H. Zhang, F. Yang, X. Zhan, and C. Peng, "Event-triggered asynchronous guaranteed cost control for Markov jump discrete-time neural networks with distributed delay and channel fading," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 8, pp. 3588–3598, Aug. 2018.
[40] X. Lu and J. Fei, "Velocity tracking control of wheeled mobile robots by iterative learning control," Int. J. Adv. Robot. Syst., vol. 13, no. 3, pp. 1–10, May 2016.

Xuhui Bu (Member, IEEE) received the B.S. and M.S. degrees in automation control from Henan Polytechnic University, Jiaozuo, China, in 2004 and 2007, respectively, and the Ph.D. degree in control theory and applications from Beijing Jiaotong University, Beijing, China, in 2011. He is currently a Full Professor with Henan Polytechnic University. He has authored over 60 peer-reviewed journal articles and more than 20 articles in prestigious conference proceedings. His research interests include data-driven control, iterative learning control, traffic control, and networked system control.

Wei Yu received the B.S. degree in rail transportation signal and control from Henan Polytechnic University, Jiaozuo, China, in 2018, where he is currently pursuing the M.S. degree in automation with the School of Electric Engineering and Automation. His research interests include data-driven control and networked control systems.

Qiongxia Yu received the B.S. and M.S. degrees from Henan Polytechnic University, Jiaozuo, China, in 2009 and 2012, respectively, and the Ph.D. degree from Beijing Jiaotong University, Beijing, China, in 2017. She is currently a Lecturer with Henan Polytechnic University. Her research interests include learning control, data-driven control, and their applications in intelligent transportation systems.

Zhongsheng Hou (Fellow, IEEE) received the B.S. and M.S. degrees in applied mathematics from the Jilin University of Technology, Jilin, China, in 1983 and 1988, respectively, and the Ph.D. degree in control theory and applications from Northeastern University, Shenyang, China, in 1994. From 1997 to 2018, he was with Beijing Jiaotong University, Beijing, China, where he was a Distinguished Professor and the Head of the Department of Automatic Control. He is currently the Chair Professor with the School of Automation, Qingdao University, Qingdao, China. He has published over 200 journal articles and has authored the monograph Model-Free Adaptive Control: Theory and Applications (CRC, 2013). His research interests are data-driven control, model-free adaptive control, learning control, and intelligent transportation systems. Dr. Hou is the Founding Director of the Technical Committee on Data Driven Control, Learning and Optimization, Chinese Association of Automation (CAA). He is an International Federation of Automatic Control Technical Committee Member of Adaptive and Learning Systems and Transportation Systems. He is a fellow of CAA.

Junqi Yang received the Ph.D. degree in control theory and control engineering from Tongji University, Shanghai, China, in 2013. He is currently an Associate Professor with Henan Polytechnic University, Jiaozuo, China. His research interests include logical dynamic systems, switched systems, observer design, fault detection, and tolerant control. Dr. Yang was a recipient of the IJCAS Best Paper Award 2018.
Journal of the Franklin Institute 357 (2020) 12364–12379



Event-triggered state tracking for two-dimensional neural networks with impulsive learning control schemes
Zijian Luo a, Wenjun Xiong a,c,∗, Jinde Cao b,d, Chi Huang a
a School of Economic Information Engineering, Southwestern University of Finance and Economics, Chengdu
611130, PR China
b School of Mathematics, Southeast University, Nanjing 210096, PR China
c School of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, PR China
d Yonsei Frontier Lab, Yonsei University, Seoul, Korea

Received 29 August 2019; received in revised form 20 July 2020; accepted 10 September 2020
Available online 20 September 2020

Abstract
In this paper, different types of learning control schemes are proposed to study the tracking of two-dimensional discrete neural networks. The learning control schemes combine the advantages of impulsive control and iterative learning control strategies, because the impulsive control technique can improve the tracking performance rapidly. Further, an event-triggered mechanism is used to determine the impulse times, and an equivalent system is proposed by constructing a trigger function, which is used to overcome the difficulties in the theoretical analysis. Learning control schemes are then designed in line with the equivalent system, and some sufficient conditions are proposed to guarantee the convergence of the tracking error. The main results show that the tracking performance can be improved effectively by our control schemes and that they are more effective than traditional learning control approaches. Finally, the effectiveness is illustrated by numerical simulations.
© 2020 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

https://doi.org/10.1016/j.jfranklin.2020.09.020

1. Introduction

Recently, there has been growing interest in neural networks. A neural network is a kind of complex network formed by the interconnection of a large number of simple processing units. It reflects some basic characteristics of the human brain and has become a research hot spot. It is widely applied in many fields such as pattern recognition [1], cooperative learning control [2], and prediction of stock returns [3]. Up to now, numerous significant results have been presented on synchronization [4–7], stability analysis [8,9], visual tracking [10,11], etc. It should be mentioned that trajectory tracking is an important issue in networks, with applications such as tracking control [12] and multiple mobile robots [13]. The mentioned theoretical results are obtained by using the Lyapunov method. However, none of these studies are always effective when the systems require high precision.
Iterative learning control (ILC) is an effective method to achieve tracking with high precision [14,15]. This technology treats a system with repetitive motion as a two-dimensional system [16]: one dimension is time and the other is the iteration step. The core idea of ILC is to regulate the control input repeatedly according to the learning errors; full tracking can then be achieved along the iteration direction. This control strategy is suitable for nonlinear systems and is widely used in robotics [17], process control [18], systems with varying initial states [19], multi-agent systems [20–22], and networked control systems [23–25]. Since neural networks are special nonlinear systems, it is appropriate to study the tracking problem using ILC technology. Nevertheless, it should be pointed out that some ILC schemes cannot achieve the tracking quickly; they only regulate the system behavior step by step. Hence, it is of interest to improve the tracking efficiency by combining ILC with other control approaches.
As far as we know, impulsive control is an effective control method with lower cost and faster response than continuous control strategies. Owing to these advantages, it is widely used in real applications such as ecosystems management [26], regulation of savings rates [27], and neural networks [28–30]. In addition, instantaneous and non-instantaneous impulsive systems are proposed to track discontinuous trajectories in [31–33], and some general ILC strategies are designed to solve the tracking problems. However, the mentioned studies focus on fixed-time impulsive systems, which means the impulsive effect cannot be regulated at each iteration. In order to overcome the effect of uncertain impulse times, learning impulsive control schemes should be proposed to make the tracking error converge flexibly and rapidly. This encourages us to explore the tracking problem by designing appropriate ILC schemes with impulsive control strategies.
In general, the condition for triggering impulses should be designed in advance, and a key issue is how to determine the impulse times. According to the existing literature, the trigger mechanism can solve the mentioned issue in line with trigger conditions [34,35]. Event-triggered technology has attracted considerable attention, and it has been applied to the tracking problem [36,37], cyber-physical systems [38], and event-based impulsive control studies [39–41]. However, the tracking problem has not been fully studied using the learning impulsive control method. The reason is that the event-triggered mechanism leads to uncertainty of the impulsive moments at each iteration, and this uncertainty makes the theoretical analysis of a two-dimensional system difficult. As a result, it encourages us to find different analytical approaches to solve this problem. Thus, an analytical framework is established. The event-triggered condition is adopted to determine the impulse times at each iteration, and the trigger function is designed to indicate whether the impulse is triggered or not. Then an equivalent system is proposed to simplify the theoretical analysis.

This work was jointly supported by the National Natural Science Foundation of China (61873344), the Fundamental Research Funds for the Central Universities under Grant No. 16XRC032808, the Fundamental Research Funds for the Central Universities under Grant No. JBK190502, the Collaborative Innovation Center for the Innovation and Regulation of Internet-based-Finance, and the Financial Intelligence and Financial Engineering Key Laboratory of Sichuan Province.
∗ Corresponding author at: School of Economic Information Engineering, Southwestern University of Finance and Economics, Chengdu 611130, PR China.
E-mail addresses: zjluomath@gmail.com (Z. Luo), xwenjun2@gmail.com (W. Xiong), jdcao@seu.edu.cn (J. Cao), huangchima@gmail.com (C. Huang).
From the above discussions, the tracking problem of discrete neural networks is studied by proposing appropriate learning control strategies. The objective is to overcome the effect of uncertain impulse times and to make the tracking error converge rapidly. The results also show that the designed control schemes are more effective than traditional ILC approaches. The contributions of this paper are summarized as follows: (1) The event-triggered threshold is used to express the trigger condition, and the trigger function is then designed to support the theoretical analysis. (2) An equivalent system is proposed by designing the trigger function, and an analytical framework is established in line with the equivalent system, which makes it convenient to analyze the tracking error and obtain the main results. (3) According to the equivalent system, the learning impulsive control strategies are designed easily. The tracking performance is improved because of the learning impulse strategies, and numerical simulations show that the designed control schemes make the error converge fast.
The rest of the paper is organized as follows: Problem formulation is given in Section 2.
The main results are presented in Section 3. Numerical examples are given to verify the
theoretical results in Section 4. Section 5 presents the conclusion of this study.
Notations: I denotes the unit matrix with appropriate dimension and 0 denotes the zero vector with appropriate dimension. S = {0, 1, · · · , T} and J = S \ T denote time intervals, where T denotes the fixed terminal time. ⊤ denotes transposition. Further, ‖z(t)‖ = (∑_{i=1}^{n} |z_i(t)|²)^{1/2} denotes the 2-norm and ‖z‖_C = max_{t∈S} ‖z(t)‖. The symbol sign(x) denotes the symbolic function, i.e., sign(x) = 1 if x > 0, sign(x) = 0 if x = 0, and sign(x) = −1 if x < 0.

2. Problem formulation

In this paper, we consider the two-dimensional discrete neural networks. Each neuron
has to deal with two independent dynamic processes. One is system dynamics about time t,
the other is system dynamics about the iteration step r. Therefore, the neural networks are
described as

zr+1 (t + 1) = Azr+1 (t ) + f (zr+1 (t )) + Bur+1 (t ), t ∈ J,

where r + 1 denotes the (r + 1)th iteration, z_{r+1}(t) = [z_{r+1,1}(t), . . . , z_{r+1,n}(t)]^⊤ ∈ R^n denotes the (r + 1)th tracking state of each neuron, where z_{r+1,i}(t) denotes the state of neuron i, i ∈ {1, 2, . . . , n}, and u_{r+1}(t) = [u_{r+1,1}(t), . . . , u_{r+1,n}(t)]^⊤ ∈ R^n denotes the (r + 1)th control strategy. A, B ∈ R^{n×n} denote matrices with appropriate dimensions, and B is assumed to be invertible. f : R^n → R^n denotes the neuron activation function, and f(z_{r+1}(t)) = [f(z_{r+1,1}(t)), . . . , f(z_{r+1,n}(t))]^⊤ ∈ R^n.
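The two-dimensional structure above can be made concrete with a short sketch: the iteration index r selects an input profile, and each iteration replays the same time recursion. The matrices, activation, and input used below are placeholders, not the example studied later in the paper.

```python
# Sketch of the two-dimensional dynamics: each iteration r runs the same
# time recursion z(t+1) = A z(t) + f(z(t)) + B u(t) over t = 0..T-1.
import numpy as np

def run_iteration(A, B, f, u, z0):
    """Simulate one iteration over the time axis; u has shape (T, n)."""
    T = u.shape[0]
    z = np.zeros((T + 1, len(z0)))
    z[0] = z0
    for t in range(T):
        z[t + 1] = A @ z[t] + f(z[t]) + B @ u[t]
    return z

n, T = 3, 30
A = 0.1 * np.eye(n)                    # placeholder system matrix
B = np.eye(n)                          # placeholder input matrix
f = lambda z: 0.3 * np.tanh(z)         # a Lipschitz activation, as in Assumption 2
z = run_iteration(A, B, f, u=np.zeros((T, n)), z0=np.zeros(n))
print(z.shape)                         # (T+1, n): the state trajectory of one iteration
```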


The learning control and the impulsive control schemes are designed to improve the tracking performance, and the neural networks are described as

z_{r+1}(t + 1) = A z_{r+1}(t) + f(z_{r+1}(t)) + B u_{r+1}(t), t ∈ J,
z⁺_{r+1}(t + 1) = z_{r+1}(t + 1) + I_{r+1}(t + 1), t = t^k_{r+1} ∈ V_{r+1} ⊆ J,   (1)

where u_{r+1}(t) and I_{r+1}(t) denote the control input and the impulsive input, respectively, which will be designed later, and V_{r+1} denotes the time set of the event-triggered condition. Further, the tracking error can be described as

e_{r+1}(t + 1) = z_{r+1}(t + 1) − z_d(t + 1), t ∈ J,
e⁺_{r+1}(t + 1) = z⁺_{r+1}(t + 1) − z_d(t + 1), t = t^k_{r+1} ∈ V_{r+1},   (2)

where z_d(t) = [z_{d,1}(t), . . . , z_{d,n}(t)]^⊤ denotes the desired state of each neuron. The objective is to design appropriate control strategies such that the tracking can be achieved as closely as possible.
The impulsive control is designed to improve the tracking performance through the learning behavior, and it is appropriate to design an event-triggered condition related to the tracking errors. Then V_{r+1} is defined as V_{r+1} = {t^k_{r+1} | ‖e_{r+1}(t^k_{r+1})‖ > ε}, where ε ≥ 0 denotes the event-triggered threshold. It is noteworthy that there exist some limitations in some event-triggered control approaches, such as the waste of limited network resources [36]. Also, it is difficult to analyze the convergence of the tracking error at each iteration. The reason is that the impulses depend on the event-triggered mechanism, which means that the impulse times are decided by the event-triggered threshold ε and the iteration step r, and the variable impulse times make the analysis difficult. Therefore, it is necessary to transform (1) into an equivalent system.

For the convenience of analysis, the event-triggered impulsive condition can be transformed into an impulsive trigger function defined as α_{r+1}(t) = sign(max{‖e_{r+1}(t)‖ − ε, 0}), i.e., α_{r+1}(t) ∈ {0, 1}. Then ẑ_{r+1}(t) is used to describe the states of the neurons. ẑ_{r+1}(t) covers two cases: the states without impulses and the states with impulses. It means the states of the neurons can be further expressed as follows:

z_{r+1}(t + 1) = A ẑ_{r+1}(t) + f(ẑ_{r+1}(t)) + B u_{r+1}(t),
ẑ_{r+1}(t + 1) = z_{r+1}(t + 1) + α_{r+1}(t + 1) I_{r+1}(t + 1).
And the equivalent system can be derived as
zˆr+1 (t + 1) = Azˆr+1 (t ) + f (zˆr+1 (t )) + Bur+1 (t ) + αr+1 (t + 1)Ir+1 (t + 1), t ∈ J. (3)
Similarly, eˆr+1 (t ) is used to describe the tracking errors of the neurons, and it is not difficult
to get eˆr+1 (t + 1) = er+1 (t + 1) + αr+1 (t + 1)Ir+1 (t + 1) from Eq. (3).
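In code, the trigger function and the equivalent-state update of Eq. (3) amount to the following sketch; the error vector and threshold values are illustrative.

```python
# Sketch of the impulse trigger alpha_{r+1}(t) = sign(max{||e|| - eps, 0}) and
# the equivalent-state update z_hat(t+1) = z(t+1) + alpha * I(t+1).
import numpy as np

def alpha(e, eps):
    """1 if the tracking-error norm exceeds the threshold, else 0."""
    return float(np.sign(max(np.linalg.norm(e) - eps, 0.0)))

def hat_state(z_next, impulse, a):
    """Equivalent state of Eq. (3): the impulse acts only when triggered."""
    return z_next + a * impulse

e = np.array([0.4, -0.2])
print(alpha(e, eps=0.0), alpha(e, eps=1.0))   # 1.0 and 0.0
```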
Lemma 1. Systems (1) and (3) are equivalent with the learning control and the impulsive control schemes, and their states converge to the same desired state.
Proof. In line with Eq. (2), ê_{r+1}(t + 1) = e_{r+1}(t + 1) + α_{r+1}(t + 1) I_{r+1}(t + 1). If e_{r+1}(t + 1) → 0, it means z_{r+1}(t) → z_d(t) and α_{r+1}(t + 1) = 0; then ê_{r+1}(t + 1) → 0. If e⁺_{r+1}(t + 1) → 0, it shows z⁺_{r+1}(t) → z_d(t), and one can know that ê_{r+1}(t + 1) → 0. Conversely, if ê_{r+1}(t) → 0, it means ẑ_{r+1}(t) → z_d(t). Further, one can know e_{r+1}(t) → 0 or e⁺_{r+1}(t) → 0, which shows z_{r+1}(t) → z_d(t) or z⁺_{r+1}(t) → z_d(t). □
The following assumptions are needed throughout this study.


Assumption 1. The initial state of the neural network is without drift, i.e., ∀r, z_r(0) = z_d(0), and there are no impulses at the initial time of each iteration.
Assumption 2. The nonlinear activation function f : R^n → R^n satisfies the Lipschitz condition, i.e., ‖f(x_1) − f(x_2)‖ ≤ L_f ‖x_1 − x_2‖, where L_f is a constant.

3. Main results

In this section, some types of control strategies are designed to deal with the tracking issues.
Firstly, these types of control schemes are designed according to the equivalent system. Then,
main results are presented to show the effectiveness of the designed control approaches.
Part A. Impulsive learning control without control input
The system input u_{r+1}(t) is assumed to be a zero vector in system (1) or (3). This means that the neural network is only affected by the event-triggered impulses. In addition, the event-triggered threshold is chosen as ε = 0 to achieve the tracking. The impulsive learning law is designed as

I_1(t + 1) ∈ R^n is an arbitrary initial impulsive control,
I_{r+1}(t + 1) = α_r(t + 1) I_r(t + 1) + K_0 ê_r(t + 1) − A[ê_{r+1}(t) − ê_r(t)], r ≥ 1,   (4)

where K_0 ∈ R^{n×n} is a learning gain matrix.
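A sketch of one application of the impulsive learning law (4) follows; K_0 is the learning gain and A is the system matrix, and the numerical values are placeholders chosen only to exercise the update.

```python
# Sketch of law (4): the impulse for iteration r+1 is built from the previous
# iteration's impulse (if it was triggered) and the stored tracking errors.
import numpy as np

def impulse_update(I_prev, a_prev, K0, A, e_prev_t1, e_curr_t, e_prev_t):
    """I_{r+1}(t+1) = a_r(t+1) I_r(t+1) + K0 e^_r(t+1) - A [e^_{r+1}(t) - e^_r(t)]."""
    return a_prev * I_prev + K0 @ e_prev_t1 - A @ (e_curr_t - e_prev_t)

n = 2
K0 = -0.45 * np.eye(n)          # learning gain (placeholder value)
A = 0.1 * np.eye(n)             # system matrix (placeholder value)
I_next = impulse_update(I_prev=np.zeros(n), a_prev=1.0, K0=K0, A=A,
                        e_prev_t1=np.array([0.3, -0.1]),
                        e_curr_t=np.array([0.2, 0.0]),
                        e_prev_t=np.array([0.25, -0.05]))
print(I_next)
```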
Theorem 1. Suppose that Assumptions 1 and 2 are satisfied. The control strategy (4) can make the tracking states converge to the desired state if the following inequalities are satisfied:

‖I + K_0‖ < 1,  L_f ≤ 1/2,  ‖I + K_0‖ + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_0‖, 1} < 1.   (5)
Proof. Linking with Eq. (3),

ê_{r+1}(t + 1) = ẑ_{r+1}(t + 1) − z_d(t + 1)
             = ẑ_{r+1}(t + 1) − ẑ_r(t + 1) + ẑ_r(t + 1) − z_d(t + 1)
             = Δẑ_r(t + 1) + ê_r(t + 1),   (6)

where Δẑ_r(t + 1) := ẑ_{r+1}(t + 1) − ẑ_r(t + 1). Further,

Δẑ_r(t + 1) = ẑ_{r+1}(t + 1) − ẑ_r(t + 1)
            = A ẑ_{r+1}(t) + f(ẑ_{r+1}(t)) + α_{r+1}(t + 1) I_{r+1}(t + 1) − A ẑ_r(t) − f(ẑ_r(t)) − α_r(t + 1) I_r(t + 1)
            = A[ê_{r+1}(t) − ê_r(t)] + f(ẑ_{r+1}(t)) − f(ẑ_r(t)) + α_{r+1}(t + 1) I_{r+1}(t + 1) − α_r(t + 1) I_r(t + 1).

Combining with Eq. (4), the above equality becomes

Δẑ_r(t + 1) = A[ê_{r+1}(t) − ê_r(t)](1 − α_{r+1}(t + 1)) + α_r(t + 1) I_r(t + 1)(α_{r+1}(t + 1) − 1)
            + α_{r+1}(t + 1) K_0 ê_r(t + 1) + f(ẑ_{r+1}(t)) − f(ẑ_r(t)).   (7)

In line with (7), (6) becomes

ê_{r+1}(t + 1) = [I + α_{r+1}(t + 1) K_0] ê_r(t + 1) + A[ê_{r+1}(t) − ê_r(t)](1 − α_{r+1}(t + 1))
             + α_r(t + 1) I_r(t + 1)(α_{r+1}(t + 1) − 1) + f(ẑ_{r+1}(t)) − f(ẑ_r(t)).   (8)


When α_{r+1}(t + 1) = 0, i.e., ‖e_{r+1}(t + 1)‖ = 0, then ẑ_{r+1}(t + 1) = z_{r+1}(t + 1) and ê_{r+1}(t + 1) = e_{r+1}(t + 1) = 0, which means that tracking is achieved. When α_{r+1}(t + 1) = 1, from Eq. (8), ê_{r+1}(t + 1) = [I + K_0] ê_r(t + 1) + f(ẑ_{r+1}(t)) − f(ẑ_r(t)). Then

‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + L_f ‖Δẑ_r(t)‖.   (9)

Similar to Eq. (7), when α_{r+1}(t) = 0, we have e_{r+1}(t) = 0 and ẑ_{r+1}(t) = z_{r+1}(t) = z_d(t); then Δẑ_r(t) = −ê_r(t) and ‖Δẑ_r(t)‖ = ‖ê_r(t)‖, so Eq. (9) becomes ‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + L_f ‖ê_r(t)‖. When α_{r+1}(t) = 1, Δẑ_r(t) = K_0 ê_r(t) + f(ẑ_{r+1}(t − 1)) − f(ẑ_r(t − 1)) and ‖Δẑ_r(t)‖ ≤ ‖K_0‖ ‖ê_r(t)‖ + L_f ‖Δẑ_r(t − 1)‖, so Eq. (9) becomes ‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + L_f² ‖Δẑ_r(t − 1)‖ + L_f ‖K_0‖ ‖ê_r(t)‖. Based on the above analysis, (9) is equivalent to

‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + α_{r+1}(t) L_f² ‖Δẑ_r(t − 1)‖ + L_f ‖ê_r(t)‖ [α_{r+1}(t) ‖K_0‖ + (1 − α_{r+1}(t))].   (10)

Let M₂(t) = α_{r+1}(t) ‖K_0‖ + (1 − α_{r+1}(t)). From Eq. (10),

‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + α_{r+1}(t) L_f² ‖Δẑ_r(t − 1)‖ + L_f M₂(t) ‖ê_r(t)‖
 ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + α_{r+1}(t) α_{r+1}(t − 1) L_f³ ‖Δẑ_r(t − 2)‖ + L_f² α_{r+1}(t) M₂(t − 1) ‖ê_r(t − 1)‖ + L_f M₂(t) ‖ê_r(t)‖
 ≤ · · ·
 ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + ∏_{j=0}^{t−1} α_{r+1}(t − j) · L_f^{t+1} ‖Δẑ_r(0)‖ + ∑_{j=0}^{t} L_f^{j+1} ∏_{l=0}^{j} α_{r+1}(t + 1 − l) · M₂(t − j) ‖ê_r(t − j)‖.   (11)

Note that M₂(t − j) ≤ max{‖K_0‖, 1}, ‖Δẑ_r(0)‖ = 0 and α_{r+1}(t − j) ∈ {0, 1}; Eq. (11) becomes

‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_0‖ ‖ê_r(t + 1)‖ + max{‖K_0‖, 1} ∑_{j=0}^{t} L_f^{j+1} ‖ê_r(t − j)‖.

Further, one obtains

‖ê_{r+1}‖_C ≤ ‖I + K_0‖ ‖ê_r‖_C + max{‖K_0‖, 1} ∑_{j=0}^{t} L_f^{j+1} ‖ê_r‖_C
          ≤ [‖I + K_0‖ + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_0‖, 1}] ‖ê_r‖_C.

From condition (5), ‖ê_{r+1}‖_C < ‖ê_r‖_C. This implies that lim_{r→∞} ‖ê_{r+1}‖_C = 0 and lim_{r→∞} ẑ_{r+1}(t + 1) = z_d(t + 1). According to Lemma 1, the result is obtained immediately. □
Remark 1. Compared with [39–41], the event-triggered impulsive system is transformed into a non-impulsive system by using the trigger function; this kind of transformation is more convenient for the convergence analysis. In addition, in Theorem 1, the learning impulsive


control is applied to achieve the tracking without any control input. This shows the effectiveness of the designed control strategies. Further, a control input will be considered to reduce the conservatism in the following part.
Part B. Impulsive learning control with control input
The control input u_{r+1}(t) is applied to system (3). This means that the neural network is affected by both the event-triggered impulses and the control input, and the two control strategies force the system state to converge to the desired state. Similarly, the event-triggered threshold is chosen as ε = 0 at first. The learning controller is designed as follows:

u_1(t) ∈ R^n is an arbitrary initial input,
u_{r+1}(t) = u_r(t) − B⁻¹ A[ê_{r+1}(t) − ê_r(t)] + B⁻¹ K_1 ê_r(t + 1), r ≥ 1,   (12)

where K_1 ∈ R^{n×n} is a learning gain matrix and I_r(t + 1) is the impulsive control at the rth run. Then the impulsive controller (4) is redesigned as follows:

I_1(t + 1) ∈ R^n is an arbitrary initial impulsive control,
I_{r+1}(t + 1) = α_r(t + 1) I_r(t + 1) + K_0 ê_r(t + 1), r ≥ 1.   (13)
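The pair (12)–(13) can be sketched as two single-step updates; B is assumed invertible as stated in the problem formulation, and the numerical data below are placeholders.

```python
# Sketch of the learning controller (12) and the redesigned impulsive controller (13).
import numpy as np

def control_12(u_prev, B_inv, A, K1, e_curr_t, e_prev_t, e_prev_t1):
    """u_{r+1}(t) = u_r(t) - B^{-1} A [e^_{r+1}(t) - e^_r(t)] + B^{-1} K1 e^_r(t+1)."""
    return u_prev - B_inv @ A @ (e_curr_t - e_prev_t) + B_inv @ K1 @ e_prev_t1

def impulse_13(I_prev, a_prev, K0, e_prev_t1):
    """I_{r+1}(t+1) = a_r(t+1) I_r(t+1) + K0 e^_r(t+1)."""
    return a_prev * I_prev + K0 @ e_prev_t1

n = 2
A, B = 0.2 * np.eye(n), np.eye(n)                       # placeholder matrices
K0, K1 = -0.45 * np.eye(n), -0.35 * np.eye(n)           # placeholder gains
u_next = control_12(np.zeros(n), np.linalg.inv(B), A, K1,
                    np.array([0.1, 0.2]), np.array([0.15, 0.1]), np.array([0.05, -0.02]))
I_next = impulse_13(np.zeros(n), 1.0, K0, np.array([0.05, -0.02]))
print(u_next, I_next)
```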
Theorem 2. Suppose that Assumptions 1 and 2 are satisfied. The control strategies (12) and (13) can make the tracking states converge to the desired state if the following inequalities are satisfied:

‖I + K_0 + K_1‖ < 1,  L_f ≤ 1/2,  ‖I + K_0 + K_1‖ + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_0 + K_1‖, 1} < 1.   (14)
Proof. Similar to the proof of Theorem 1,

ê_{r+1}(t + 1) = Δẑ_r(t + 1) + ê_r(t + 1),

where Δẑ_r(t + 1) := ẑ_{r+1}(t + 1) − ẑ_r(t + 1). According to Eqs. (12) and (13),

Δẑ_r(t + 1) = A[ê_{r+1}(t) − ê_r(t)] + f(ẑ_{r+1}(t)) − f(ẑ_r(t)) + B[u_{r+1}(t) − u_r(t)] + α_{r+1}(t + 1) I_{r+1}(t + 1) − α_r(t + 1) I_r(t + 1)
            = f(ẑ_{r+1}(t)) − f(ẑ_r(t)) + K_1 ê_r(t + 1) + α_{r+1}(t + 1) K_0 ê_r(t + 1) + [α_{r+1}(t + 1) − 1] α_r(t + 1) I_r(t + 1).

Hence,

ê_{r+1}(t + 1) = [I + K_1 + α_{r+1}(t + 1) K_0] ê_r(t + 1) + f(ẑ_{r+1}(t)) − f(ẑ_r(t)) + [α_{r+1}(t + 1) − 1] α_r(t + 1) I_r(t + 1).   (15)

When α_{r+1}(t + 1) = 0, i.e., e_{r+1}(t + 1) = 0, then ẑ_{r+1}(t + 1) = z_{r+1}(t + 1) and ê_{r+1}(t + 1) = e_{r+1}(t + 1) = 0, which means that the tracking is achieved. Thus, it is only necessary to consider α_{r+1}(t + 1) = 1. In this situation, Eq. (15) equals ê_{r+1}(t + 1) = [I + K_1 + K_0] ê_r(t + 1) + f(ẑ_{r+1}(t)) − f(ẑ_r(t)). Then,

‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_1 + K_0‖ ‖ê_r(t + 1)‖ + L_f ‖Δẑ_r(t)‖.   (16)

Similar to the analysis of Theorem 1, when α_{r+1}(t) = 0, Δẑ_r(t) = −ê_r(t) and ‖Δẑ_r(t)‖ = ‖ê_r(t)‖; when α_{r+1}(t) = 1, ‖Δẑ_r(t)‖ ≤ ‖K_1 + K_0‖ ‖ê_r(t)‖ + L_f ‖Δẑ_r(t − 1)‖. Based on the above analysis, Eq. (16) leads to

‖ê_{r+1}(t + 1)‖ ≤ ‖I + K_1 + K_0‖ ‖ê_r(t + 1)‖ + α_{r+1}(t) L_f² ‖Δẑ_r(t − 1)‖ + L_f ‖ê_r(t)‖ [α_{r+1}(t) ‖K_0 + K_1‖ + (1 − α_{r+1}(t))].

The rest of the proof is similar to that of Theorem 1; then

‖ê_{r+1}‖_C ≤ [‖I + K_0 + K_1‖ + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_0 + K_1‖, 1}] ‖ê_r‖_C.

According to condition (14), the result is obtained immediately. □
Remark 2. Compared with [23,42], the error convergence is improved because the learning impulsive control is applied to change the tracking state instantaneously. It means the learning impulsive control strategies can improve the tracking performance effectively. Compared with [31], the impulse time is variable for the designed ILC schemes, which can make the tracking error converge rapidly. It shows that our control strategies are valid.

Remark 3. Compared with Theorem 1 in this study, if ‖I + K_0 + K_1‖ < ‖I + K_0‖ and ‖K_0 + K_1‖ < ‖K_0‖, control schemes (12) and (13) improve the error convergence. In addition, the admissible range of the learning gain matrix K_0 is wider because of the action of the system control input.

If the matrix B is not invertible, or the matrix A is unknown but bounded, a similar result can be obtained by redesigning the learning control scheme (12) as

u_1(t) ∈ R^n is an arbitrary initial input,
u_{r+1}(t) = u_r(t) + K_1 ê_r(t + 1), r ≥ 1.   (17)

Corollary 1. Suppose that Assumptions 1 and 2 are satisfied. The control strategies (17) and (13) can make the tracking states converge to the desired state if the following inequalities are satisfied:

‖I + K_0 + BK_1‖ < 1,  AL_f := A_b + L_f ≤ 1/2,  where A_b is the least upper bound of ‖A‖,
‖I + K_0 + BK_1‖ + (AL_f(1 − AL_f^{T+1})/(1 − AL_f)) max{‖K_0 + K_1‖, 1} < 1.

Proof. The proof is similar to that of Theorem 2 and is omitted here. □


Furthermore, it is pointed out that the event-triggered condition is a little harsh. Thus, the
trigger threshold can be chosen as ε > 0 to reduce the conservatism of the trigger condition.
And an analogous result is given in the following corollary.
Corollary 2. Suppose that Assumptions 1 and 2 are satisfied. The control strategies (12) and (13) can make the tracking state converge to the desired state if the following inequalities are satisfied:

‖I + K_0 + K_1‖ < 1,  ‖K_0 + K_1‖ < 1,  L_f ≤ 1/3,
‖I + K_0 + K_1‖ + 2 · (L_f(1 − L_f^{T+1})/(1 − L_f)) < 1.

Proof. The proof is similar to that of Theorem 2, and it is easy to get inequality (16). Next, it is necessary to analyze ‖Δẑ_r(t)‖. When α_{r+1}(t) = 0, we have ẑ_{r+1}(t) = z_{r+1}(t); then Δẑ_r(t) = ê_{r+1}(t) − ê_r(t) and ‖Δẑ_r(t)‖ ≤ ‖ê_{r+1}(t)‖ + ‖ê_r(t)‖. When α_{r+1}(t) = 1, ‖Δẑ_r(t)‖ ≤ ‖K_0 + K_1‖ ‖ê_r(t)‖ + L_f ‖Δẑ_r(t − 1)‖. According to the above analysis,

‖ê_{r+1}(t + 1)‖ ≤ M¹ ‖ê_r(t + 1)‖ + α_{r+1}(t) L_f² ‖Δẑ_r(t − 1)‖ + L_f M²_{r+1}(t) ‖ê_r(t)‖ + L_f M³_{r+1}(t) ‖ê_{r+1}(t)‖,

where M¹ = ‖I + K_1 + K_0‖, M²_{r+1}(t) = α_{r+1}(t) ‖K_0 + K_1‖ + (1 − α_{r+1}(t)) and M³_{r+1}(t) = 1 − α_{r+1}(t). By induction, note that M²_{r+1}(t) ≤ max{1, ‖K_0 + K_1‖} = 1 and M³_{r+1}(t) ≤ 1,

‖ê_{r+1}(t + 1)‖ ≤ M¹ ‖ê_r(t + 1)‖ + ∏_{j=0}^{t−1} α_{r+1}(t − j) · L_f^{t+1} ‖Δẑ_r(0)‖
 + ∑_{j=0}^{t} L_f^{j+1} ∏_{l=0}^{j} α_{r+1}(t + 1 − l) · ‖ê_r(t − j)‖
 + ∑_{j=0}^{t} L_f^{j+1} ∏_{l=0}^{j} α_{r+1}(t + 1 − l) · ‖ê_{r+1}(t)‖.

According to Assumption 1, it is easy to know ‖Δẑ_r(0)‖ = 0. Noting that α_{r+1}(t + 1 − l) ≤ 1, the above inequality becomes

‖ê_{r+1}(t + 1)‖ ≤ M¹ ‖ê_r(t + 1)‖ + ∑_{j=0}^{t} L_f^{j+1} ‖ê_r(t − j)‖ + ∑_{j=0}^{t} L_f^{j+1} ‖ê_{r+1}(t)‖.

By calculation, one can get

‖ê_{r+1}‖_C ≤ [M¹ + L_f(1 − L_f^{T+1})/(1 − L_f)] ‖ê_r‖_C + (L_f(1 − L_f^{T+1})/(1 − L_f)) ‖ê_{r+1}‖_C.   (18)

Since L_f ≤ 1/3, L̂_f := L_f(1 − L_f^{T+1})/(1 − L_f) < 1/2. Then, from Eq. (18), ‖ê_{r+1}‖_C ≤ ((M¹ + L̂_f)/(1 − L̂_f)) ‖ê_r‖_C. According to the condition of the corollary, it is easy to know (M¹ + L̂_f)/(1 − L̂_f) < 1. Therefore, the result can be obtained immediately. □
From Corollary 2, the sufficient condition is still somewhat conservative. The learning controller (12) is therefore changed to reduce the conservativeness of the convergence condition:

u_1(t) ∈ R^n is an arbitrary initial input,
u_{r+1}(t) = u_r(t) − B⁻¹ A[ê_{r+1}(t) − ê_r(t)] + B⁻¹ [K_1 ê_r(t + 1) + α_r(t + 1) I_r(t + 1)], r ≥ 1,   (19)

where I_r(t + 1) is the impulsive control strategy at the rth iteration. And the impulsive controller (13) is changed to

I_{r+1}(t + 1) = K_0 ê_r(t + 1), r ≥ 1.   (20)
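For comparison with (12)–(13), here is a short sketch of the simplified pair (19)–(20): the impulse no longer accumulates the previous impulse, while the learning controller feeds α_r(t+1)I_r(t+1) through B⁻¹. The matrices and error vectors below are placeholders.

```python
# Sketch of the pair (19)-(20).
import numpy as np

K0, K1 = -0.45 * np.eye(2), -0.35 * np.eye(2)   # placeholder gains

def impulse_20(e_prev_t1):
    """Eq. (20): I_{r+1}(t+1) = K0 e^_r(t+1)."""
    return K0 @ e_prev_t1

def control_19(u_prev, B_inv, A, e_curr_t, e_prev_t, e_prev_t1, a_prev, I_prev_t1):
    """Eq. (19): u_{r+1}(t) = u_r(t) - B^{-1}A[e^_{r+1}(t) - e^_r(t)]
    + B^{-1}[K1 e^_r(t+1) + a_r(t+1) I_r(t+1)]."""
    return (u_prev - B_inv @ A @ (e_curr_t - e_prev_t)
            + B_inv @ (K1 @ e_prev_t1 + a_prev * I_prev_t1))

A, B = 0.2 * np.eye(2), np.eye(2)               # placeholder system matrices
e1, e0, e0n = np.array([0.1, 0.0]), np.array([0.2, -0.1]), np.array([0.15, 0.05])
print(control_19(np.zeros(2), np.linalg.inv(B), A, e1, e0, e0n, 1.0, impulse_20(e0n)))
```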

Theorem 3. Suppose that Assumptions 1 and 2 are satisfied. The control strategies (19) and (20) can make the tracking states converge to the desired state if the following inequalities are satisfied:

max{‖I + K_1‖, ‖I + K_1 + K_0‖} < 1,  L_f ≤ 1/2,
max{‖I + K_1‖, ‖I + K_1 + K_0‖} + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_1‖, ‖K_1 + K_0‖} < 1.   (21)

Proof. Similar as the proof of Theorem 1,

eˆr+1 (t + 1) = ˆzr (t + 1) + eˆr (t + 1),

12372
Z. Luo, W. Xiong, J. Cao et al. Journal of the Franklin Institute 357 (2020) 12364–12379

where ˆzr (t + 1) := zˆr+1 (t + 1) − zˆr (t + 1). In line with Eq. (19),


ˆzr (t + 1) = zˆr+1 (t + 1) − zˆr (t + 1)
= K1 eˆr (t + 1) + αr+1 (t + 1)K0 eˆr (t + 1) + f (zˆr+1 (t )) − f (zˆr (t )). (22)
From (22),
eˆr+1 (t + 1) = [I + K1 + αr+1 (t + 1)K0 ]eˆr (t + 1) + f (zˆr+1 (t )) − f (zˆr (t )).
Further,
eˆr+1 (t + 1) ≤ I + K1 + αr+1 (t + 1)K0 eˆr (t + 1) +  f (zˆr+1 (t )) − f (zˆr (t ))
≤ I + K1 + αr+1 (t + 1)K0 eˆr (t + 1) + L f ˆzr (t ). (23)
According to Eq. (22), as for ˆzr (t ),
ˆzr (t ) ≤ K1 + αr+1 (t )K0 eˆr (t ) + L f ˆzr (t − 1)
≤ K1 + αr+1 (t )K0 eˆr (t ) + L f K1 + αr+1 (t − 1)K0 eˆr (t −1)+L 2f ˆzr (t −2)
≤ ······
t
≤ L fj K1 + αr+1 (t − j)K0 eˆr (t − j) + Ltf ˆzr (0). (24)
j=0

Note that αr+1 (t − j) ∈ {0, 1}, from Eqs. (23), (24) and Assumption 1,
eˆr+1 (t + 1) ≤ max{I + K1 , I + K1 + K0 }eˆr (t + 1)
t
+ max{K1 , K1 + K0 } L fj+1 eˆr (t − j).
j=0

Therefore,
 
L f (1 − L Tf +1 )
eˆr+1 C ≤ max{I + K1 , I + K1 + K0 } + max{K1 , K1 + K0 } eˆr C .
1 − Lf
In line with Eq. (21), it is not hard to get lim eˆr C = 0, which means that system (3) can
r→∞
converge to the desired state. From Lemma 1, the result is achieved immediately. 
Remark 4. Compared with [31,32], the impulse times are variable and flexible because of the action of the event-triggered impulses, and the variable-time impulses can improve the tracking performance effectively. Moreover, the equivalent system is derived so that the tracking problem can be studied easily through the trigger function.

Remark 5. Compared with [20], the trigger mechanism in this study depends on the tracking error at each time t of each iteration, and the tracking error converges flexibly. Meanwhile, the event-triggered mechanism acts on the impulsive controllers, which can improve the tracking performance rapidly.

Remark 6. Compared with [38], the controller design in this study is not based on a special estimator, and our purpose is different from that in [38]. Moreover, our trigger mechanism acts on the impulsive controllers, which can make the tracking error converge rapidly, and the learning impulse strategies regulate the system state flexibly.


4. Illustrative example

In this section, numerical examples are given to demonstrate the validity of the theoretical
results.

Example 1. Consider the general neural networks (1) in this paper, where S = {0, 1, 2, 3, . . . , 30}, z_k(t) = [z_{k,1}(t), z_{k,2}(t), z_{k,3}(t), z_{k,4}(t), z_{k,5}(t)]^⊤ ∈ R⁵,

A =
[ 0.3  0    0    0    0.1 ]
[ 0.1  0.2  0    0    0   ]
[ 0.4  0    0.4  0    0.2 ]
[ 0    0.3  0.1  0.5  0   ]
[ 0    0    0    0.2  0.6 ],

B =
[ 0.5  0    0    0    0   ]
[ 0    0.2  0    0    0   ]
[ 0.1  0    0.3  0    0   ]
[ 0    0    0    0.2  0   ]
[ 0    0    0    0.4  0.2 ],

and f(z_r(t)) = [f(z_{r,1}(t)), f(z_{r,2}(t)), f(z_{r,3}(t)), f(z_{r,4}(t)), f(z_{r,5}(t))]^⊤.

Let f(z_{r,i}(t)) = 0.3 tanh(z_{r,i}(t)) and z_{r,i}(0) = z_{d,i}(0) = 0. One can use the learning control schemes (4), (12) with (13), (19) with (20), and the ILC scheme (3) in [42]. Let u_1(t) = 0, I_1(t) = 0, and the learning gain matrices K_0 = −0.45 × I and K_1 = −0.35 × I. In addition, the event-triggered threshold is set as ε = 1 for Theorem 3. It is not difficult to find that L_f = 0.3 < 1/2. By calculation, one can know that ‖I + K_0‖ + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_0‖, 1} = 0.9786 < 1, ‖I + K_0 + K_1‖ + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_0 + K_1‖, 1} = 0.6286 < 1, and max{‖I + K_1‖, ‖I + K_1 + K_0‖} + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_1‖, ‖K_1 + K_0‖} = 0.9929 < 1. This shows that the conditions are satisfied for Theorems 1–3.
1−L f
max{K1 , K1 + K0 } = 0.9929 < 1. It shows that the conditions are satisfied for
Theorems 1–3.
In addition, the desired state zd (t ) = [zd,1 (t ), zd,2 (t ), zd,3 (t ), zd,4 (t ), zd,5 (t )] , where
zd,1 (t ) = − cos(0.1πt ) + 1 + sin (0.1πt ), zd,2 (t ) = −0.01t (t − 20), zd,3 (t ) = sin (0.1πt ),
zd,4 (t ) = sinh(0.06t ), and zd,5 (t ) = − cos(0.1πt ) + 1.
And the tracking trajectories are shown as follows.
Fig. 1(a)–(e) show the tracking trajectories under the different ILC schemes at the 2nd iteration. From Fig. 1(f), one can see that the tracking errors decrease as the iteration step r increases, and the error convergence rate of the designed control schemes is faster than that of the ILC scheme in [42]. This illustrates the validity of Theorems 1–3 and shows that our control schemes are effective.

From the above description and figures, the tracking performance of Theorems 1–3 is better than that of the ILC scheme in [42]. The reason is that both the learning control and the impulsive control schemes are applied to improve the tracking performance, and the impulsive control is based on the learning behavior, which makes the tracking error converge flexibly. Theorem 3 is applicable because its sufficient condition is not conservative under the same event-triggered threshold.

Furthermore, two other examples are given to show the advantages of the designed control schemes.


Fig. 1. The trajectories of the tracking errors with different ILC schemes.

Example 2. Firstly, one can consider the following linear discrete-time system, which is considered in [23]:

z_k(t + 1) = A z_k(t) + B u_k(t),
y_k(t) = C z_k(t),

where A = [0.3 0; 0.1 0.2], B = [0.1 0; 0 0.2], C = [1 0; 0 1], z_k(t) = [z_{k,1}(t), z_{k,2}(t)]^⊤ ∈ R², t ∈ {0, 1, . . . , 30}, and z_{r,i}(0) = z_{d,i}(0) = 0. The desired states are as follows:

z_{d,1}(t) = −0.1t(t − 20),
z_{d,2}(t) = sin(0.1πt) for t ∈ {0, 1, . . . , 9}, 0 for t ∈ {10, 11, . . . , 19}, and 1 − 0.08t for t ∈ {20, 21, . . . , 30}.

Then the ILC schemes (19) with (20) and (2) in [23] are used to achieve the tracking. Let u_1(t) = 0, I_1(t) = 0, and the learning gain matrices K_0 = −0.45 × I and K_1 = −0.35 × I. In addition, the event-triggered threshold is still ε = 1 for Theorem 3. It is not difficult to find that L_f = 0 < 1/2. By calculation, one can know that max{‖I + K_1‖, ‖I + K_1 + K_0‖} = 0.65 < 1. This shows that the conditions are satisfied for Theorem 3. The simulation diagrams are shown as follows:
Fig. 2(a) and (b) show the tracking trajectories under the different ILC schemes at the 2nd iteration, and Fig. 2(c) and (d) show the tracking errors under the different ILC schemes at each iteration. By comparison, one can see that the tracking errors decrease as the iteration step r increases, and the error convergence rate under Theorem 3 is fast, which illustrates the validity of Theorem 3. From the above figures, it is easy to know that our control schemes are effective.
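A compact sketch of the whole loop for this example with the schemes (19)–(20) is given below, using the reference trajectories as reconstructed above. The trigger is evaluated on the pre-impulse error, iteration 1 uses u_1 = 0 and I_1 = 0, and the number of iterations is an arbitrary choice; the sketch is meant only to illustrate how the time loop and the iteration loop fit together, not to reproduce the paper's figures.

```python
# Sketch of Example 2: linear plant, L_f = 0, eps = 1, schemes (19)-(20).
import numpy as np

A = np.array([[0.3, 0.0], [0.1, 0.2]])
B = np.array([[0.1, 0.0], [0.0, 0.2]])
B_inv = np.linalg.inv(B)
K0, K1 = -0.45 * np.eye(2), -0.35 * np.eye(2)
T, eps = 30, 1.0

t_ax = np.arange(T + 1)
zd = np.zeros((T + 1, 2))
zd[:, 0] = -0.1 * t_ax * (t_ax - 20)
zd[:10, 1] = np.sin(0.1 * np.pi * t_ax[:10])
zd[20:, 1] = 1 - 0.08 * t_ax[20:]

def run_iteration(u_prev, e_prev, I_prev, a_prev):
    """One time-axis pass of iteration r+1, built from iteration r's data."""
    z_hat = np.zeros((T + 1, 2))          # z_hat(0) = zd(0) = 0 (Assumption 1)
    e_hat = np.zeros((T + 1, 2))
    u = np.zeros((T, 2))
    I_new = np.zeros((T + 1, 2))
    a_new = np.zeros(T + 1)
    for t in range(T):
        # learning controller (19)
        u[t] = (u_prev[t] - B_inv @ A @ (e_hat[t] - e_prev[t])
                + B_inv @ (K1 @ e_prev[t + 1] + a_prev[t + 1] * I_prev[t + 1]))
        # impulsive controller (20)
        I_new[t + 1] = K0 @ e_prev[t + 1]
        z_next = A @ z_hat[t] + B @ u[t]  # f = 0 for this linear plant
        a = 1.0 if np.linalg.norm(z_next - zd[t + 1]) > eps else 0.0
        z_hat[t + 1] = z_next + a * I_new[t + 1]          # equivalent system (3)
        e_hat[t + 1] = z_hat[t + 1] - zd[t + 1]
        a_new[t + 1] = a
    return u, e_hat, I_new, a_new

# iteration 1: u_1 = 0 and I_1 = 0, so the state stays at zero and e_1 = -zd
u, e = np.zeros((T, 2)), np.zeros((T + 1, 2)) - zd
I_seq, a = np.zeros((T + 1, 2)), np.zeros(T + 1)

max_err = []
for r in range(40):
    u, e, I_seq, a = run_iteration(u, e, I_seq, a)
    max_err.append(np.abs(e).max())
print(max_err[0], max_err[-1])   # the maximum tracking error shrinks along the iteration axis
```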
Fig. 2. The trajectories of the tracking errors with different ILC schemes.

Example 3. One can consider the following nonlinear system, which is similar to that in [31]:

z_k(t + 1) = 0.05 sin(z_k(t)) + 0.07 u_k(t),
y_k(t) = z_k(t),

where z_k(t) ∈ R¹, f(z_k(t)) = 0.05 sin(z_k(t)), and B = 0.07. Set t ∈ {0, 1, . . . , 30} and z_r(0) = z_d(0) = 0. The desired state is as follows:

z_d(t) = sin(0.1t) for t ∈ {0, 1, . . . , 15}, and z_d(t) = 0.01t^{1.5} + 2 for t ∈ {16, . . . , 30}.

According to [31], the system is transformed into an impulsive nonlinear system, and the fixed-time impulse is z_k(16) = 3. Then the P-type ILC scheme (14) in [31] is applied to track the discontinuous desired trajectory. In this study, the learning control schemes (19) with (20) are applied. Let u_1(t) = 0, I_1(t) = 0, and the learning gains K_0 = −0.45 and K_1 = −0.35.


Fig. 3. The trajectories of the tracking errors with different ILC schemes.

In addition, the event-triggered threshold is still ε = 1 for Theorem 3. It is not difficult to find that L_f = 0.05 < 1/2. By calculation, one can know that max{‖I + K_1‖, ‖I + K_1 + K_0‖} + (L_f(1 − L_f^{T+1})/(1 − L_f)) max{‖K_1‖, ‖K_1 + K_0‖} = 0.6921 < 1. Then the conditions are satisfied for Theorem 3. The simulation diagrams are shown as follows:
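The same loop used for Example 2 specializes easily to this scalar example; since the linear part A is zero here, the −B⁻¹A[·] term of (19) vanishes. Again the iteration count is an illustrative choice, not the paper's.

```python
# Sketch of Example 3: scalar plant z(t+1) = 0.05 sin(z(t)) + 0.07 u(t),
# tracked with (19)-(20); K0 = -0.45, K1 = -0.35, eps = 1.
import numpy as np

T, eps = 30, 1.0
K0, K1, b = -0.45, -0.35, 0.07
f = lambda z: 0.05 * np.sin(z)
t_ax = np.arange(T + 1)
zd = np.where(t_ax <= 15, np.sin(0.1 * t_ax), 0.01 * t_ax ** 1.5 + 2.0)

def run_iteration(u_prev, e_prev, I_prev, a_prev):
    """One pass of iteration r+1 (A = 0, so the -B^{-1}A[...] term vanishes)."""
    z_hat, e_hat = np.zeros(T + 1), np.zeros(T + 1)
    u, I_new, a_new = np.zeros(T), np.zeros(T + 1), np.zeros(T + 1)
    for t in range(T):
        u[t] = u_prev[t] + (K1 * e_prev[t + 1] + a_prev[t + 1] * I_prev[t + 1]) / b  # Eq. (19)
        I_new[t + 1] = K0 * e_prev[t + 1]                                            # Eq. (20)
        z_next = f(z_hat[t]) + b * u[t]
        a = 1.0 if abs(z_next - zd[t + 1]) > eps else 0.0
        z_hat[t + 1] = z_next + a * I_new[t + 1]
        e_hat[t + 1] = z_hat[t + 1] - zd[t + 1]
        a_new[t + 1] = a
    return u, e_hat, I_new, a_new

u, e, I_seq, a = np.zeros(T), -zd.copy(), np.zeros(T + 1), np.zeros(T + 1)  # iteration 1
for r in range(40):
    u, e, I_seq, a = run_iteration(u, e, I_seq, a)
print(np.abs(e).max())   # small after enough iterations, despite the jump in zd at t = 16
```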
Fig. 3(a) and (b) show the tracking trajectories with different control schemes at the 2nd
and 5th iteration. Fig. 3(c) and (d) show the tracking errors with different ILC schemes at
each iteration. Compared with [31], one can see that the tracking errors decrease fast when
the iteration step r increases. This illustrates the validity of Theorem 3. From the above
figures, it is easy to know that our control schemes are also effective for the nonlinear
systems.
It is worth mentioning that impulsive differential systems were designed to track desired discontinuous output trajectories in [31], and P-type ILC schemes were applied to achieve the tracking target. In this study, the impulsive learning control schemes are designed to track different desired state trajectories. These approaches avoid designing fixed-time impulsive systems according to the discontinuous desired trajectories, and they can make the tracking errors converge rapidly. From Examples 2 and 3, one knows that our control schemes are effective for both continuous and discontinuous desired state trajectories. This shows that our control schemes are valid and appropriate.


5. Conclusion

In this paper, the tracking problem of two-dimensional discrete networks has been studied by designing impulsive learning control strategies. The event-triggered mechanism has been employed to determine the impulse times flexibly. The control schemes consist of learning control and impulsive control strategies, which improves the tracking performance effectively and rapidly. Sufficient conditions have then been presented to guarantee the convergence of the tracking errors. Compared with traditional learning control approaches, the theoretical results have shown that our learning control schemes are valid for different desired state trajectories.

Declaration of Competing Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and that there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.

Automatica 72 (2016) 84–91


Brief paper

Iterative learning control for discrete-time systems with event-triggered transmission strategy and quantization

Wenjun Xiong a,b, Xinghuo Yu b, Ragini Patel b, Wenwu Yu c

a School of Economic Information and Engineering, Southwestern University of Finance and Economics, Chengdu, China
b School of Electrical and Computer Engineering, RMIT University, Melbourne VIC 3001, Australia
c Department of Mathematics, Southeast University, Nanjing, China

Article history: Received 9 November 2015; received in revised form 25 April 2016; accepted 24 May 2016; available online 19 July 2016.

Abstract: This paper investigates the iterative learning problem for discrete-time systems with an event-triggered scheme and quantization. The event-triggered scheme is firstly considered in the iterative learning controllers to reduce the number of iteration steps to be updated. Here, the event-triggered scheme is designed depending on time t and iterative learning step k. Quantization is then introduced in the event-triggered controllers, and some relaxed conditions are presented to guarantee the tracking problem by using some interval matrix properties. Finally, simulation results are given to illustrate the usefulness of the developed criteria.

Keywords: Event-triggered control scheme; Iterative learning control; Quantization; Interval matrix; Tracking problem

1. Introduction

Iterative learning control (ILC) is an effective technique that aims to improve the current performance of uncertain systems over a fixed time interval by learning from previous executions (trials, iterations, passes). The focus of ILC is to improve the performance of systems that execute a repeated operation. In the past decades, ILC has been successfully applied to industrial robots (Arimoto, Kawamura, & Miyazaki, 1984), chemical reactors (Mezghani et al., 2002), systems with input saturation (Tan, Xu, Norrlöf, & Freeman, 2011; Xiong, Ho, & Yu, 2015; Xu, Tan, & Lee, 2004; Zhang, Chi, & Ji, 2015a), heat equations (Huang, Xu, Li, Xu, & Yu, 2013), sampled-data systems (Abidi & Xu, 2011), and multi-node systems (Li & Li, 2014; Meng, Jia, & Du, 2015a,b; Meng, Jia, Du, & Yu, 2013; Meng, Jia, Du, & Zhang, 2014; Meng & Moore, 2016). For example, Li and Li (2014) showed that, for the consensus problem, all the followers can track the leader uniformly on the finite interval [1, T], and that they keep the desired distance from the leader on [1, T] so as to achieve velocity consensus uniformly for the formation problem. In Meng et al. (2014), Meng et al. dealt with the formation control problems for multi-node systems with nonlinear dynamics and switching network topologies. It was shown in Meng et al. (2015a) that the uncertainties of multi-node systems are dynamically changing not only along the time axis but also along the iteration axis.

In the above literature, the given ILC algorithms are always updated in each iteration step (see Eq. (14) in Tan et al., 2011 and Eq. (4) in Meng et al., 2015a). However, it is costly and unnecessary to update the ILC algorithm in each iteration step when the iterative controller changes little over some successive iteration steps. To reduce the number of iteration steps to be updated, an event-triggered control scheme is introduced in this paper. In event-triggered control, the measurement error plays a key role in the event design: when the measurement error reaches a prescribed threshold, an event is triggered and the controller is updated. In recent years, as a digital control scheme that can reduce the communication load, event-triggered control has been receiving increasing attention in wireless sensor/actuator systems (Mazo & Tabuada, 2011), networked control systems (Yue, Tian, & Han, 2013), fuzzy systems (Peng, Han, & Yue, 2013), sampled-data control systems (Peng & Han, 2013; Zou, Wang, Gao, & Liu, 2015), and multi-node systems (Fan, Feng, Wang, & Song, 2013; Hu, Liu, & Feng, 2015; Zhang, Hao, Zhang, & Wang, 2014). Different from the existing literature, the event-triggered scheme will be applied to the ILC algorithm in this paper, and the event-triggered iterative learning controller will be designed to be related to both time t and the iterative learning step k.
In addition, in some applications, such as sensor systems and industrial control systems, the aim is to control multiple dynamical systems by using multiple sensors to exchange information over a communication network. Because of the limitation of storage and communication bandwidth among nodes, the original precise information needs to be quantized. Hence, it is necessary to analyze the quantizers and understand how much effect the quantization has on dynamic systems. In fact, the problem of quantized control for dynamic systems has been studied in the literature (Delchamps, 1990; Fu & Xie, 2005; Liu, Guan, Li, Zhang, & Xiao, 2012; Wang, Shen, Shu, & Wei, 2012; Zhang, Zhang, Hao, & Wang, 2015b). Unfortunately, to the best of our knowledge, the quantized ILC problem for dynamic systems has not been fully investigated despite its potential in practical applications. Furthermore, the event-driven mechanism has not been used in ILC to reduce the requirements on storage and communication bandwidth. The aim of this paper is to address these problems.

Hence, our objectives in this study are twofold: (1) consider the event-triggered scheme in the iterative learning controllers to reduce the number of iteration steps to be updated, and discuss the tracking problem of discrete-time systems with the event-triggered scheme on a finite interval; (2) consider quantization in the event-triggered controllers for discrete-time systems, and present some relaxed conditions to solve the tracking problem of the discussed systems by using some interval matrix properties.

The remainder of this paper is organized as follows. The problem formulation is presented in Section 2. In Section 3, the event-triggered scheme is considered in the iterative learning controllers; moreover, quantization is applied in the event-triggered controllers. In Section 4, simulations are carried out to illustrate the effectiveness of the main results. Finally, conclusions are drawn in Section 5.

Notation: Throughout this study, the superscript T represents the transpose. For all x = (x1, x2, ..., xn)^T ∈ R^n, define ||x|| = (Σ_{i=1}^{n} xi^2)^{1/2}. For a matrix A, ||A|| denotes the spectral norm defined by ||A|| = (λ_M(A^T A))^{1/2}, and ρ(A) is the spectral radius with ρ(A) = max_i |λ_i(A)|, where λ_i(A) denotes the ith eigenvalue of matrix A.

2. Preliminaries

Consider an iterative learning system consisting of N nodes (N is a positive integer). Each node has to deal with two independent dynamic processes: the first process describes the system dynamics with respect to time t; the second process describes the dynamics of node i with respect to the iterative learning step k. Hence, the system dynamics are described by

x_i(t+1, k) = c_i x_i(t, k) + b_i u_i(t, k),   (1)

where x_i(t, k) is the state of node i, i = 1, 2, ..., N, and x(t, k) = (x_1(t, k), x_2(t, k), ..., x_N(t, k))^T ∈ R^N; t ∈ {0, 1, ..., T} (T > 0 is a positive integer) and k ∈ Z+ (Z+ is the set of nonnegative integers); C = diag(c_1, c_2, ..., c_N) ∈ R^{N×N} and B = diag(b_1, b_2, ..., b_N) ∈ R^{N×N} are constant matrices; u_i(t, k) is the iterative learning controller of node i and u(t, k) = (u_1(t, k), u_2(t, k), ..., u_N(t, k))^T ∈ R^N. Every node i (i ∈ {1, 2, ..., N}) in system (1) is said to achieve tracking of a desired reference trajectory if

lim_{k→+∞} x_i(t, k) = x*(t),  t ∈ {0, 1, ..., T},   (2)

where x*(t) ∈ R is the desired reference trajectory. Let e_i(t, k) = x_i(t, k) − x*(t) be the tracking error of node i, and e(t, k) = (e_1(t, k), e_2(t, k), ..., e_N(t, k))^T. Note that the tracking objective (2) holds if and only if lim_{k→+∞} e(t, k) = 0 for all t ∈ {0, 1, ..., T}.

[Fig. 1. The broadcasting iteration sequence {k_l^i} of node i.]

In the existing literature (see Meng et al., 2015a, 2014), the ILC u(t, k) is always updated in each iteration k. However, it is not necessary to update the ILCs if the changes of the controllers over some successive iteration steps are small. Hence, ILCs with an event-triggered strategy will be considered in this paper. We define the state measurement error of node i by δ_i(x_i(t, k)) = x̃_i(t, k) − x_i(t, k), ∀ i ∈ {1, 2, ..., N}, t ∈ {0, 1, ..., T}, and ∀ k ∈ Z+. Here, x̃_i(t, k) denotes the latest sampled state of node i, which will be given later. Note that the transmitted information needs to be coded or quantized due to the limitation of storage and communication bandwidth among nodes. Hence, based on the measurement error of node i, the event-triggered strategies without and with quantization are given as follows:

|δ_i(x_i(t, k))| = γ_i · | Σ_{j=1}^{N} l_ij x̃_j(t, k) |,   (3)

|δ_i(x_i(t, k))| = γ_i · | Σ_{j=1}^{N} l_ij q(x̃_j(t, k)) |,   (4)

where γ_i > 0, the matrix L = (l_ij)_{N×N} ∈ R^{N×N}, and ||L|| ≠ 0. Here q(·): R → Λ_ϖ is a logarithmic quantizer; for a given accuracy parameter ϖ ∈ (0, 1), the logarithmic set of quantization levels is defined as

Λ_ϖ = {±ω^(i) : ω^(i) = ϖ^i ω^(0), i = ±1, ±2, ...} ∪ {±ω^(0)} ∪ {0},  ω^(0) > 0.   (5)

The associated quantizer q(·) is defined as follows:

q(x) = ω^(i),  if ω^(i)/(1+σ) < x ≤ ω^(i)/(1−σ);
     = 0,      if x = 0;                                  (6)
     = −q(−x), if x < 0,

where σ = (1 − ϖ)/(1 + ϖ) is named the sector bound in Fu and Xie (2005).

If event-triggering condition (3) (or (4)) is satisfied, node i will broadcast its state and update its control protocol. The information broadcasting iteration sequence of node i is {k_l^i} (i ∈ {1, 2, ..., N}, l ∈ Z+) (see Fig. 1), and x̃_i(t, k) is defined as x̃_i(t, k) = x_i(t, k_l^i), k ∈ [k_l^i, k_{l+1}^i), l ∈ Z+. From the definition of the logarithmic quantizer, the quantization error satisfies the following condition:

q(x̃_i(t, k)) = x̃_i(t, k) + Λ̃(t, k) x̃_i(t, k),   (7)

where Λ̃(t, k) is a scalar satisfying Λ̃(t, k) ∈ [−σ, σ], ∀ t ∈ {0, 1, ..., T}, i ∈ {1, 2, ..., N} and ∀ k ∈ Z+.

According to the event-triggering conditions (3) and (4), we shall consider the ILCs without and with quantization, which can be expressed as

ũ_i(t, k_{l+1}^i) = ũ_i(t, k_l^i) + Γ_1^i Σ_{j=1}^{N} l_ij x̃_j(t+1, k),   (8)

ũ_i(t, k_{l+1}^i) = ũ_i(t, k_l^i) + Γ_2^i Σ_{j=1}^{N} l_ij q(x̃_j(t+1, k)),   (9)

and u_i(t, k) = ũ_i(t, k_l^i), k ∈ [k_l^i, k_{l+1}^i), l ∈ Z+, where Γ_1 = diag(Γ_1^1, Γ_1^2, ..., Γ_1^N) and Γ_2 = diag(Γ_2^1, Γ_2^2, ..., Γ_2^N) are learning gain matrices, which will be designed later.
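Since the quantized scheme hinges on the logarithmic quantizer, a minimal Python sketch of q(·) in (5)–(6) is given below. The function name, the default accuracy parameter and the base level ω^(0) are illustrative assumptions of this sketch, not values fixed by the paper; the final loop simply checks the sector-bound property used in (7).

```python
import math

def log_quantizer(x, varpi=0.5, omega0=1.0):
    """Logarithmic quantizer q(.) in the spirit of (5)-(6).

    varpi  -- accuracy parameter in (0, 1)
    omega0 -- base quantization level omega^(0) > 0 (illustrative choice)

    Returns the level omega^(i) = varpi**i * omega0 whose sector
    (omega^(i)/(1+sigma), omega^(i)/(1-sigma)] contains x, so that
    q(x) = (1 + Lambda) * x with |Lambda| <= sigma = (1-varpi)/(1+varpi).
    """
    if x == 0.0:
        return 0.0
    if x < 0.0:
        return -log_quantizer(-x, varpi, omega0)
    # the sector of level i has lower endpoint varpi**i * omega0 * (1+varpi)/2,
    # so i is recovered from the base-varpi logarithm of that endpoint
    t = math.log(2.0 * x / (omega0 * (1.0 + varpi))) / math.log(varpi)
    i = math.floor(t) + 1
    return varpi ** i * omega0


# quick check of the sector-bound property (7): |q(x) - x| <= sigma * |x|
sigma = (1.0 - 0.5) / (1.0 + 0.5)
for x in [-2.3, -0.01, 0.0, 0.4, 0.7, 1.0, 5.6]:
    q = log_quantizer(x)
    assert abs(q - x) <= sigma * abs(x) + 1e-12
```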
Remark 1. In (8) and (9), the key is how to choose the information broadcasting iteration sequence {k_l^i} (i ∈ {1, 2, ..., N}, l ∈ Z+). It is not economical if the interval [k_l^i, k_{l+1}^i) is small. On the contrary, the tracking objective for system (1) is difficult to achieve if the interval [k_l^i, k_{l+1}^i) is large. Hence, an economical alternative for node i is to use the neighbors' constant states sampled at the iterative point k_l^i until a pre-defined event is triggered at the iterative point k_{l+1}^i. Then, the neighbors' information is updated by the states at k_{l+1}^i until the next event is triggered, and so on.

Remark 2. In Eq. (14) of Tan et al. (2011), Eq. (4) of Meng et al. (2014) and Eq. (4) of Meng et al. (2015a), the ILC algorithms are designed to be updated in each iteration k. However, it is costly and unnecessary to update the ILC algorithm in each iteration k when the iterative controller changes little over some successive iteration steps. Hence, the advantages of this paper are twofold: (1) our ILC algorithms (8) and (9) are only updated when the event-triggering conditions (3) and (4) are satisfied; moreover, different from the traditional event-triggered scheme, the event-triggered controllers are designed to be related to both time t and the iterative learning step k; (2) quantization is investigated in the event-triggered controllers due to its potential in practical applications such as sensor systems and industrial control systems.

3. Main results

Firstly, some notations are introduced in this part. For any diagonal matrix P = diag(p_1, p_2, ..., p_N), let Φ(P) = {G = diag(g_1, g_2, ..., g_N) : g_i = p_i or g_i = 0, for ∀ i ∈ {1, 2, ..., N}}. Clearly, Φ(P) is a finite set. The boundary conditions of system (1) are given as x_i(0, k) = x*(0), x_i(t, 0) = x_{i0}(t), and ||x_{i0}(t)|| ≤ ε_i, ∀ t ∈ {0, 1, ..., T}, i = 1, 2, ..., N, k ∈ Z+, where ε_i is a positive constant. Moreover, when t = 0, one has x_{i0}(0) = x*(0), i = 1, 2, ..., N. In this paper, for the sake of simplicity, the desired reference trajectory is assumed to be x*(t) = 0, ∀ t ∈ {0, 1, ..., T}.

In this section, our purpose is to achieve the tracking of system (1) with event-triggered strategy (3) (or (4)) by designing the appropriate iterative controller (8) (or (9)).

In system (1), each node i broadcasts its state and updates its control protocol over its own iteration channel (see Fig. 2). In Fig. 2, all asynchronous broadcasting iterations are denoted as a sequence {k_l}, l ∈ Z+, with k_0^1 = k_0^2 = ··· = k_0^N = k_0 = 0. Here, it may happen that k_q^i = k_s^j for some i, j ∈ {1, 2, ..., N}, i ≠ j, q, s ∈ Z+. In this situation, we still regard k_q^i and k_s^j as two different terms k_l and k_{l+1} for some l ∈ Z+.

[Fig. 2. Multi-channel asynchronous broadcasting iterations.]

Based on Fig. 2, (8) and (9) can be written as

ũ(t, k_{l+1}) = ũ(t, k_l) + D_l Γ_1 L x̃(t+1, k),   (10)

ũ(t, k_{l+1}) = ũ(t, k_l) + D_l Γ_2 L q(x̃(t+1, k)),   (11)

where ũ(t, k_l) = (ũ_1(t, k_l), ũ_2(t, k_l), ..., ũ_N(t, k_l))^T, q(x̃(t, k)) = (q(x̃_1(t, k)), q(x̃_2(t, k)), ..., q(x̃_N(t, k)))^T, l ∈ Z+. Moreover,

u(t, k) = ũ(t, k_l),  k ∈ [k_l, k_{l+1}).   (12)

About the matrices D_l (l ∈ Z+), one has the following property.

Property 1. Matrix D_l = diag(d_l^1, d_l^2, ..., d_l^N) ∈ Φ(I_N) (I_N is the identity matrix), ∀ i ∈ {1, 2, ..., N}, d_l^i = 1 or d_l^i = 0. Matrix D_l has exactly one nonzero diagonal element (equal to 1), and the other elements are all zero. That is, ∃ i ∈ {1, 2, ..., N} such that d_l^i = 1 and d_l^j = 0, ∀ j ∈ {1, 2, ..., N}, j ≠ i, l ∈ Z+.

Proof. For ∀ l ∈ Z+, from Fig. 2, there exists only one i ∈ {1, 2, ..., N} and one s ∈ Z+ satisfying k_l = k_s^i. As a result, one has d_l^i = 1 and d_l^j = 0, ∀ j ∈ {1, 2, ..., N}, j ≠ i, l ∈ Z+. Hence, matrix D_l has one nonzero diagonal element equal to 1. The proof is completed.
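The following Python sketch illustrates how the event-triggered learning loop built from (3), (8), (10) and (12) can be simulated along the iteration axis. The matrices, gains, the initial control profile and the way the trigger is scanned over the horizon are assumptions of this schematic (chosen in the style of Example 1 of Section 4); it is not claimed to reproduce the paper's figures, and the print statement only reports the final tracking error and the number of update events.

```python
import numpy as np

# Illustrative three-node data (assumed for this sketch).
N, T = 3, 10
C = np.diag([1.0, 1.0, 2.0])
B = np.diag([1.0, -1.0, 1.0])
L = np.array([[0.8, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.8, 1.0, 0.5]])
gamma = 0.25 * np.ones(N)            # gamma_i in trigger (3)
Gamma1 = np.diag([-0.5, 0.5, -1.0])  # learning gains of (8)
x_star = np.zeros(T + 1)             # desired trajectory x*(t) = 0

def run_iteration(u):
    """Trajectory of system (1) over t = 0..T with the held inputs u(t)."""
    x = np.zeros((T + 1, N))
    x[0] = x_star[0]                 # boundary condition x_i(0, k) = x*(0)
    for t in range(T):
        x[t + 1] = C @ x[t] + B @ u[t]
    return x

def ilc_update(u, x_tilde, i):
    """Update law (8) for node i, using the latest broadcast states x~_j."""
    u = u.copy()
    u[:T, i] += Gamma1[i, i] * (x_tilde[1:] @ L[i])   # sum_j l_ij x~_j(t+1, k)
    return u

# iteration 0: the state profile is the given initial data x_i(t, 0) = sin(t)
x = np.tile(np.sin(np.arange(T + 1))[:, None], (1, N))
x_tilde = x.copy()                   # every node broadcasts at k_0 = 0
u_tilde = np.zeros((T + 1, N))
events = 0
for i in range(N):
    u_tilde = ilc_update(u_tilde, x_tilde, i)
    events += 1

for k in range(1, 300):
    x = run_iteration(u_tilde)       # controls are held between events, cf. (12)
    # schematic trigger check: node i broadcasts once its measurement error
    # |x~_i - x_i| exceeds gamma_i |sum_j l_ij x~_j| at some time instant
    err = np.abs(x_tilde - x)
    thr = gamma * np.abs(x_tilde @ L.T)
    for i in np.where((err > thr).any(axis=0))[0]:
        x_tilde[:, i] = x[:, i]      # node i broadcasts its current state
        u_tilde = ilc_update(u_tilde, x_tilde, i)
        events += 1

print("update events:", events, " max |e(t,k)| at last iteration:",
      np.abs(x - x_star[:, None]).max())
```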
3.1. The tracking problem of system (1) with event-triggered strategy (3) and iterative controller (8)

Define δ(x) = (δ_1(x_1), δ_2(x_2), ..., δ_N(x_N))^T. One can further obtain from (3)

δ(x(t, k)) = γ̃ · L x̃(t, k) = γ̃ · L (e(t, k) + δ(x(t, k))),   (13)

where x̃(t, k) = (x̃_1(t, k), x̃_2(t, k), ..., x̃_N(t, k))^T ∈ R^N and γ̃ = diag(γ̃_1, γ̃_2, ..., γ̃_N) with γ̃_i = γ_i (or −γ_i), i = 1, 2, ..., N. If one chooses ||γ̃|| < 1/||L||, one has from (13)

δ(x(t, k)) = (I_N − γ̃ L)^{-1} · γ̃ · L e(t, k).   (14)

Theorem 1. Consider system (1) with iterative controller (8) triggered by the event condition (3). On the finite interval t ∈ [0, T], the tracking error e(t, k) satisfies

lim_{k→∞} e(t, k) = 0,  ∀ t ∈ {0, 1, ..., T},   (15)

if one chooses ||γ̃|| < 1/||L|| and the learning gain Γ_1 such that

ρ(Ψ̄_{l_s}) < 1,   (16)

where Ψ̄_{l_s} = Ψ_{l_{s+1}−1} Ψ_{l_{s+1}−2} ··· Ψ_{l_s}, l_s ∈ Z+, ∀ s ∈ Z+, l_0 = 0, and Ψ_{l_s} = I_N + B D_{l_s} Γ_1 L (I_N − γ̃ L)^{-1}.

Proof. We shall discuss two cases in the proof.

Case 1. In this case, we consider that there exists k ∈ Z+ such that k, k+1 ∈ [k_l, k_{l+1}), l ∈ Z+. For ∀ t ∈ {0, 1, ..., T}, one has from (1), (12) and the boundary conditions

e(t+1, k+1) − e(t+1, k) = x(t+1, k+1) − x(t+1, k)
 = C (x(t, k+1) − x(t, k)) + B (u(t, k+1) − u(t, k))
 = C^{t+1} (x(0, k+1) − x(0, k)) = 0,   (17)

which means that e(t, k+1) = e(t, k). That is,

e(t, k) = e(t, k_l),  ∀ k ∈ [k_l, k_{l+1}),   (18)

where ∀ t ∈ {0, 1, ..., T}, l ∈ Z+.

Case 2. In this case, we discuss how e(t, k) changes along the iteration sequence {k_l}, l ∈ Z+. For ∀ t ∈ {0, 1, ..., T}, one has from (1), (10), (12) and (18)

e(t+1, k_{l+1}) − e(t+1, k_l) = x(t+1, k_{l+1}) − x(t+1, k_l)
 = C (x(t, k_{l+1}) − x(t, k_l)) + B (ũ(t, k_{l+1}) − ũ(t, k_l))
 = C (x(t, k_{l+1}) − x(t, k_l)) + B D_l Γ_1 L x̃(t+1, k)
 = C θ(t, k_l) + B D_l Γ_1 L (δ(t+1, k_l) + e(t+1, k_l)),   (19)
where ∀ k ∈ [k_l, k_{l+1}), θ(t, k_l) = x(t, k_{l+1}) − x(t, k_l). One has from (1), (8) and (19)

θ(t, k_l) = C θ(t−1, k_l) + B D_l Γ_1 L x̃(t, k_l)
          = C θ(t−1, k_l) + B D_l Γ_1 L (δ(t, k_l) + e(t, k_l)).   (20)

According to (20) and the boundary conditions, one can obtain

θ(t, k_l) = Σ_{ι=0}^{t} C^ι B D_l Γ_1 L (δ(t−ι, k_l) + e(t−ι, k_l)),   (21)

where ∀ t ∈ {0, 1, ..., T}, l ∈ Z+. It follows from (19) and (21) that

e(t+1, k_{l+1}) = (I_N + B D_l Γ_1 L) e(t+1, k_l) + B D_l Γ_1 L δ(t+1, k_l)
               + Σ_{ι=0}^{t} C^{ι+1} B D_l Γ_1 L (δ(t−ι, k_l) + e(t−ι, k_l)).   (22)

Combining (14) and (22), one has

e(t+1, k_{l+1}) = Ψ_l e(t+1, k_l) + Σ_{ι=0}^{t} Θ_l^ι e(t−ι, k_l),   (23)

where Ψ_l = I_N + B D_l Γ_1 L (I_N − γ̃ L)^{-1} and Θ_l^ι = C^{ι+1} B D_l Γ_1 L (I_N − γ̃ L)^{-1}, ∀ ι ∈ {0, 1, ..., t}. Let U(k) = (e^T(0, k), e^T(1, k), ..., e^T(T, k))^T; then

U(k_{l+1}) = Ω_l U(k_l),  l ∈ Z+,   (24)

where Ω_l is the block lower-triangular matrix

Ω_l = [ Ψ_l      0         0         ···  0        0        0
        Θ_l^1    Ψ_l       0         ···  0        0        0
        Θ_l^2    Θ_l^1     Ψ_l       ···  0        0        0
        ···      ···       ···       ···  ···      ···      ···
        Θ_l^T    Θ_l^{T−1} Θ_l^{T−2} ···  Θ_l^2    Θ_l^1    Ψ_l ].

In (24), it is difficult to obtain ρ(Ω_l) < 1 directly, since the matrix D_l has only one nonzero diagonal element. Hence, we obtain from (24)

U(k_{l_{s+1}}) = Ω̄_{l_s} U(k_{l_s}),  s ∈ Z+,   (25)

where Ω̄_{l_s} = Ω_{l_{s+1}−1} Ω_{l_{s+1}−2} ··· Ω_{l_s}, l_s ∈ Z+, ∀ s ∈ Z+, and l_0 = 0. For ∀ k ∈ [k_{l_s}, k_{l_{s+1}}), s ∈ Z+, one gets from (18) and (24)

U(k) = U(k_{l_s}),                       ∀ k ∈ [k_{l_s}, k_{l_s+1});
     = Ω_{l_s} U(k_{l_s}),               ∀ k ∈ [k_{l_s+1}, k_{l_s+2});
     = Ω_{l_s+1} Ω_{l_s} U(k_{l_s}),     ∀ k ∈ [k_{l_s+2}, k_{l_s+3});   (26)
     ...
     = Ω̄_{l_s} U(k_{l_s}),               ∀ k ∈ [k_{l_{s+1}−1}, k_{l_{s+1}}).

Based on (25), one has U(k_{l_{s+1}}) ≤ Ω̄_{l_s} Ω̄_{l_{s−1}} ··· Ω̄_{l_0} U(0). Noting that Ω̄_{l_s} is lower-triangular and U(0) is bounded, one has U(k_{l_s}) → 0 as s → +∞ (i.e., k → +∞) if ρ(Ω̄_{l_s}) < 1, ∀ s ∈ Z+. Moreover, ρ(Ω̄_{l_s}) < 1 if condition (16) is satisfied. Hence, according to the results in Cases 1 and 2, one has e(t, k) → 0, ∀ t ∈ {0, 1, ..., T}, as k → +∞ if condition (16) is satisfied.

Note that the iteration index k ∈ Z+ (see Fig. 1), which means k_{l+1} − k_l ≥ 1, l ∈ Z+. As a result, Zeno behavior will not happen under the designed event condition (3). The proof is completed.

Remark 3. Note that D_l has only one nonzero diagonal element by Property 1. If one chooses D_{l_s} + D_{l_s+1} + D_{l_s+2} + ··· + D_{l_{s+1}−1} ≥ I_N, i.e., l_{s+1} − l_s ≥ N, ∀ s ∈ Z+, it is easy to achieve condition (16) by designing an appropriate learning gain matrix Γ_1.

3.2. The tracking problem of system (1) with event-triggered strategy (4) and iterative controller (9)

In this subsection, we assume that the matrix L is lower-triangular. Define Λ(t, k) = Λ̃(t, k) I_N; one obtains from (7)

q(x̃(t, k)) = (I_N + Λ(t, k)) x̃(t, k).   (27)

From (7), one has from (4) and (13)

δ(x(t, k)) = γ̃ · L q(x̃(t, k)) = γ̃ · L (I_N + Λ(t, k)) (e(t, k) + δ(t, k)).   (28)

Similar to (14), if one chooses ||γ̃|| < 1/((1 + σ) · ||L||), one has from (7) and (28)

δ(x(t, k)) = (I_N − γ̃ L (I_N + Λ(t, k)))^{-1} · γ̃ · L (I_N + Λ(t, k)) e(t, k),   (29)

where ∀ t ∈ {0, 1, ..., T}, ∀ k ∈ Z+. In this part, we shall discuss the iterative learning problem of system (1) based on the following definitions and lemma.

Definition 1. A scalar l is called an interval parameter if it lies between two boundaries according to l ∈ [l_min, l_max], where l_min is the minimum value of l and l_max is the maximum value of l.

Definition 2. An interval matrix S^I is defined as a matrix which is a member of the interval plant S_I given by

S_I = {S^I : S^I = (s^I_ij),  s^I_ij ∈ [s_ij, s̄_ij],  i, j = 1, 2, ..., m},

where s̄_ij is the maximum extreme value and s_ij is the minimum extreme value of the ith-row, jth-column element of the uncertain plant.

Definition 3. The upper bound matrix S̄ is the matrix whose elements are s̄_ij. The lower bound matrix S is the matrix whose elements are s_ij. The vertex matrices S_v are defined by

S_v = {S^v : S^v = (s^v_ij),  s^v_ij ∈ {s_ij, s̄_ij},  i, j = 1, 2, ..., m}.

Lemma 1 (Seok Han & Lee, 1994; Shih, Lur, & Pang, 1998). For a given interval matrix S^I, the spectral radius of S^I is bounded by the maximum value of the spectral radii of the vertex matrices S^v.

From the above lemma, one has the following property.

Property 2. Let the matrix L be designed to be lower-triangular. Then the spectral radius of the matrix Φ̃(t, k_l) is bounded by the maximum value of the spectral radii of the matrices Φ̃_l^1 and Φ̃_l^2, where Φ̃(t, k_l) = I_N + B D_l Γ_2 L (I_N + Λ(t, k_l)) (I_N − γ̃ L (I_N + Λ(t, k_l)))^{-1}, Φ̃_l^1 = I_N + (1 − σ) B D_l Γ_2 L (I_N − (1 − σ) γ̃ L)^{-1}, and Φ̃_l^2 = I_N + (1 + σ) B D_l Γ_2 L (I_N − (1 + σ) γ̃ L)^{-1}.

Proof. The matrix Φ̃(t, k_l) is lower-triangular since L is lower-triangular and B, D_l, Γ_2 are all diagonal. As a result, ρ(Φ̃(t, k_l)) is decided by the diagonal elements ξ_i (i = 1, 2, ..., N) with ξ_i = 1 + b_i d_l^i Γ_2^i l_ii (1 + Λ̃(t, k_l)) (1 − γ̃_i l_ii (1 + Λ̃(t, k_l)))^{-1}. With some calculation, one has ξ_i = 1 + b_i d_l^i Γ_2^i γ̃_i^{-1} (1/(1 − γ̃_i l_ii (1 + Λ̃(t, k_l))) − 1). Note that Λ̃(t, k_l) ∈ [−σ, σ]; from Lemma 1, the spectral radius of Φ̃(t, k_l) is bounded by the maximum value of the spectral radii of Φ̃_l^1 and Φ̃_l^2. The proof is completed.
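Property 2 is easy to check numerically: for a lower-triangular L and diagonal B, D_l, Γ_2, the spectral radius of Φ̃(t, k_l) never exceeds the larger of the spectral radii of the two vertex matrices obtained at Λ̃ = −σ and Λ̃ = +σ. The NumPy sketch below samples Λ̃ over [−σ, σ] and verifies the bound. All matrix values are illustrative assumptions (D_l is even taken as the identity here so that every diagonal entry varies); they are not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative data: L lower-triangular; B, D_l, Gamma2, gamma~ diagonal.
N = 3
I = np.eye(N)
L = np.array([[0.8, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.8, 1.0, 0.5]])
B = np.diag([1.0, -1.0, 1.0])
D_l = np.eye(N)                      # taken as I_N here purely for illustration
Gamma2 = np.diag([-1.0, 0.7, -0.8])
gamma = 0.25 * np.eye(N)             # diag(gamma~_1, ..., gamma~_N)
sigma = 0.5

def rho(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def Phi(lam):
    """Phi~(t, k_l) for a given quantization scalar Lambda~(t, k_l) = lam."""
    S = (1.0 + lam) * I              # I_N + Lambda(t, k_l)
    return I + B @ D_l @ Gamma2 @ L @ S @ np.linalg.inv(I - gamma @ L @ S)

# vertex matrices of Property 2 (Lambda~ at its extreme values -sigma and +sigma)
Phi1 = I + (1 - sigma) * B @ D_l @ Gamma2 @ L @ np.linalg.inv(I - (1 - sigma) * gamma @ L)
Phi2 = I + (1 + sigma) * B @ D_l @ Gamma2 @ L @ np.linalg.inv(I + (-(1 + sigma)) * gamma @ L)
bound = max(rho(Phi1), rho(Phi2))

# Lemma 1: for every Lambda~ in [-sigma, sigma], rho(Phi~) stays below the vertex bound
for lam in rng.uniform(-sigma, sigma, size=1000):
    assert rho(Phi(lam)) <= bound + 1e-9

print("vertex bound on the spectral radius:", bound)
```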
Theorem 2. Consider system (1) with iterative controller (9) triggered by the event condition (4). On the finite interval t ∈ [0, T], the tracking error e(t, k) satisfies

lim_{k→∞} e(t, k) = 0,  ∀ t ∈ {0, 1, ..., T},   (30)

if one chooses ||γ̃|| < 1/((1 + σ) · ||L||) and the learning gain Γ_2 such that

max{ρ(Φ̄_{l_s}^1), ρ(Φ̄_{l_s}^2)} < 1,   (31)

where Φ̄_{l_s}^i = Φ̃_{l_{s+1}−1}^i Φ̃_{l_{s+1}−2}^i ··· Φ̃_{l_s}^i, i = 1, 2, l_s ∈ Z+, ∀ s ∈ Z+, l_0 = 0, Φ̃_{l_s}^1 = I_N + (1 − σ) B D_{l_s} Γ_2 L (I_N − (1 − σ) γ̃ L)^{-1}, and Φ̃_{l_s}^2 = I_N + (1 + σ) B D_{l_s} Γ_2 L (I_N − (1 + σ) γ̃ L)^{-1}.

Proof. We also discuss two cases in the proof.

Case 1. In this case, we consider that there exists k ∈ Z+ such that k, k+1 ∈ [k_l, k_{l+1}), l ∈ Z+. Similar to Theorem 1, one has

e(t, k) = e(t, k_l),  ∀ k ∈ [k_l, k_{l+1}),   (32)

where ∀ t ∈ {0, 1, ..., T}, l ∈ Z+.

Case 2. In this case, we discuss how e(t, k) changes along the iteration sequence {k_l}, l ∈ Z+. Similar to (19), one has from (1), (11), (12) and (27)

e(t+1, k_{l+1}) − e(t+1, k_l) = C θ(t, k_l) + B D_l Γ_2 L (I_N + Λ(t, k_l)) (δ(t+1, k_l) + e(t+1, k_l)),   (33)

where θ(t, k_l) = x(t, k_{l+1}) − x(t, k_l). Similar to (21), one can obtain

θ(t, k_l) = Σ_{ι=0}^{t} C^ι B D_l Γ_2 L (I_N + Λ(t−ι, k_l)) (δ(t−ι, k_l) + e(t−ι, k_l)),   (34)

where ∀ t ∈ {0, 1, ..., T}, k ∈ Z+. It follows from (33) and (34) that

e(t+1, k_{l+1}) = (I_N + B D_l Γ_2 L (I_N + Λ(t+1, k_l))) e(t+1, k_l)
               + B D_l Γ_2 L (I_N + Λ(t+1, k_l)) δ(t+1, k_l)
               + Σ_{ι=0}^{t} C^{ι+1} B D_l Γ_2 L (I_N + Λ(t−ι, k_l)) (δ(t−ι, k_l) + e(t−ι, k_l)).   (35)

Combining (29) and (35), one has e(t+1, k_{l+1}) = Φ̃(t+1, k_l) e(t+1, k_l) + Σ_{ι=0}^{t} Θ̃(t−ι, k_l) e(t−ι, k_l), where Φ̃(t+1, k_l) = I_N + B D_l Γ_2 L (I_N + Λ(t+1, k_l)) (I_N − γ̃ L (I_N + Λ(t+1, k_l)))^{-1} and Θ̃(t−ι, k_l) = C^{ι+1} B D_l Γ_2 L (I_N + Λ(t−ι, k_l)) (I_N − γ̃ L (I_N + Λ(t−ι, k_l)))^{-1}. Let U(k) = (e^T(0, k), e^T(1, k), ..., e^T(T, k))^T; then

U(k_{l+1}) = Ω̃_l U(k_l),  l ∈ Z+,   (36)

where Ω̃_l is the block lower-triangular matrix

Ω̃_l = [ Φ̃(0, k_l)   0           0           ···  0             0
         Θ̃(0, k_l)   Φ̃(1, k_l)  0           ···  0             0
         Θ̃(0, k_l)   Θ̃(1, k_l)  Φ̃(2, k_l)  ···  0             0
         ···          ···         ···         ···  ···           ···
         Θ̃(0, k_l)   Θ̃(1, k_l)  Θ̃(2, k_l)  ···  Θ̃(T−1, k_l)  Φ̃(T, k_l) ].

In (36), it is also difficult to obtain ρ(Ω̃_l) < 1 directly, since D_l has only one nonzero diagonal element. Hence, we obtain from (36)

U(k_{l_{s+1}}) = Ω̃_{l_{s+1}−1} Ω̃_{l_{s+1}−2} ··· Ω̃_{l_s} U(k_{l_s}),  s ∈ Z+,   (37)

where l_s ∈ Z+, ∀ s ∈ Z+, and l_0 = 0. For ∀ k ∈ [k_{l_s}, k_{l_{s+1}}), s ∈ Z+, one gets from (32) and (36)

U(k) = U(k_{l_s}),                               ∀ k ∈ [k_{l_s}, k_{l_s+1});
     = Ω̃_{l_s} U(k_{l_s}),                       ∀ k ∈ [k_{l_s+1}, k_{l_s+2});
     = Ω̃_{l_s+1} Ω̃_{l_s} U(k_{l_s}),             ∀ k ∈ [k_{l_s+2}, k_{l_s+3});   (38)
     ...
     = Ω̃_{l_{s+1}−1} ··· Ω̃_{l_s} U(k_{l_s}),     ∀ k ∈ [k_{l_{s+1}−1}, k_{l_{s+1}}).

Based on (37), U(k_{l_{s+1}}) is bounded by the product of these transition matrices acting on U(0). Noting that each product is lower-triangular and U(0) is bounded, one has U(k_{l_s}) → 0 as s → +∞ (i.e., k → +∞) if the spectral radius of each product in (37) is less than 1, ∀ s ∈ Z+. According to (36) and (37), this holds if ρ(Φ̃_{l_{s+1}−1}(t, k_l) Φ̃_{l_{s+1}−2}(t, k_l) ··· Φ̃_{l_s}(t, k_l)) < 1, ∀ t ∈ {0, 1, ..., T}, ∀ s ∈ Z+. Based on Property 2, this product condition holds if condition (31) is satisfied. Hence, according to (32) and (38), one has e(t, k) → 0, ∀ t ∈ {0, 1, ..., T}, as k → +∞ if condition (31) is satisfied.

Note that the iteration index k ∈ Z+ (see Fig. 1), which means k_{l+1} − k_l ≥ 1, l ∈ Z+. As a result, Zeno behavior will not happen under the designed event condition (4). The proof is completed.

Remark 4. In general, the system state converges to a set rather than to a trajectory when quantization is considered in the controller (see Zhang et al., 2015b). However, by choosing ||γ̃|| < 1/((1 + σ) · ||L||) and using the interval matrix properties, the system state (see (30)) can converge to the desired trajectory when the quantized controller (9) is considered in this paper.

Remark 5. In Zhang et al. (2015b), a quantized controller (their Eq. (36)) is also presented via event-triggered control. Compared with Theorem 4.2 of Zhang et al. (2015b), the advantage of Theorem 2 in this paper is that one can obtain lim_{k→∞} e(t, k) = 0, ∀ t ∈ {0, 1, ..., T}, by designing the quantized controller to be related to both time t and iterative learning step k, whereas in Theorem 4.2 of Zhang et al. (2015b) all nodes only converge to a set.

Remark 6. Similar to Section 3.2, interval ILCs have also been applied in Meng et al. (2015b). Compared with Meng et al. (2015b), the advantages of our results in Section 3.2 are: (1) our ILC algorithm (9) is only updated when the event-triggering condition (4) is satisfied, whereas the ILC algorithms (5) of Meng et al. (2015b) are designed to be updated in each iteration k; clearly, it is costly and unnecessary to update the ILC algorithm in each iteration k when the iterative controller changes little over some successive iteration steps; (2) in Theorem 4.1 of Meng et al. (2015b), the graph is required to have a spanning tree, a requirement which is removed in Theorem 2 of this paper.

Remark 7. In Meng et al. (2015a, 2014) and Meng and Moore (2016), the network topologies switch along the iteration axis, which is a good direction for our future work. However, the iterative rules, such as Eq. (4) in Meng and Moore (2016), are always updated at each iteration step when the nodes are neighbors. Moreover, the obtained conditions are related to the time step t (see Condition (12) and C_J in Meng et al. (2015a) and Meng and Moore (2016), respectively). Hence, one of our future objectives is to consider switching topologies in our models and to relax these limitations.

Remark 8. Throughout this paper, the desired reference trajectory is assumed to be x*(t) = 0, ∀ t ∈ {0, 1, ..., T}. Actually, this assumption can be removed if one defines δ_i(x_i(t, k)) = ẽ_i(t, k) − e_i(t, k) and replaces x̃_j(t, k) by ẽ_j(t, k) in (3), (4), (8), and (9), where ẽ_i(t, k) = x̃_i(t, k) − x*(t), ∀ i ∈ {1, 2, ..., N}, t ∈ {0, 1, ..., T}, ∀ k ∈ Z+. Then the same results as in Theorems 1 and 2 can be obtained.
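The lifted supervector argument of (23)–(26) and (36)–(38) can also be made concrete. The sketch below assembles the block lower-triangular matrix Ω_l from Ψ_l and the blocks Θ_l^j, forms the product over one broadcasting sweep D_1 → D_2 → D_3 (cf. Remark 3), and confirms numerically that its spectral radius coincides with that of the product of the diagonal blocks, so that condition (16) indeed governs the contraction of U(k). The numerical data are assumptions in the spirit of Example 1, not a reproduction of it.

```python
import numpy as np

# Assumed illustrative data (in the spirit of Example 1).
N, T = 3, 10
I = np.eye(N)
C = np.diag([1.0, 1.0, 2.0])
B = np.diag([1.0, -1.0, 1.0])
L = np.array([[0.8, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.8, 1.0, 0.5]])
gamma = 0.25 * np.eye(N)
Gamma1 = np.diag([-0.5, 0.5, -1.0])
D = [np.diag(r) for r in np.eye(N)]          # D_1, D_2, D_3 of Property 1

M = L @ np.linalg.inv(I - gamma @ L)         # L (I - gamma~ L)^{-1}

def Omega(Dl):
    """Lifted iteration-transition matrix of (24): block lower-triangular,
    with Psi_l on the diagonal and C^j B D_l Gamma1 L(I - gamma~ L)^{-1}
    at block offset j below it."""
    Psi = I + B @ Dl @ Gamma1 @ M
    Om = np.zeros(((T + 1) * N, (T + 1) * N))
    for r in range(T + 1):
        Om[r*N:(r+1)*N, r*N:(r+1)*N] = Psi
        for c in range(r):
            Om[r*N:(r+1)*N, c*N:(c+1)*N] = (
                np.linalg.matrix_power(C, r - c) @ B @ Dl @ Gamma1 @ M)
    return Om

def rho(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

# one broadcasting sweep D_1 -> D_2 -> D_3, as suggested in Remark 3
Omega_bar = Omega(D[2]) @ Omega(D[1]) @ Omega(D[0])
Psi_bar = ((I + B @ D[2] @ Gamma1 @ M) @
           (I + B @ D[1] @ Gamma1 @ M) @
           (I + B @ D[0] @ Gamma1 @ M))

print("rho(Psi_bar)   =", rho(Psi_bar))      # condition (16)
print("rho(Omega_bar) =", rho(Omega_bar))    # same value: block lower-triangular
assert rho(Omega_bar) < 1.0                  # the lifted error U(k) contracts per sweep
```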
[Fig. 3. The trajectories of state x(t, k) in system (1) with the iterative controller (8) and event-triggered strategy (3), at iterations k = 1, 10, 30, and 100.]

4. An illustrative example

In this section, a numerical example is presented to demonstrate the usefulness of the results in Theorems 1 and 2.

Example 1. Consider system (1) with three nodes. Let T = 10, x(t, k) = (x_1(t, k), x_2(t, k), x_3(t, k))^T ∈ R^3, σ = 0.5,

L = [ 0.8 0 0 ; 1 1 0 ; 0.8 1 0.5 ],  C = [ 1 0 0 ; 0 1 0 ; 0 0 2 ],  B = [ 1 0 0 ; 0 −1 0 ; 0 0 1 ],

and D_{l_s} ∈ {D_1, D_2, D_3} with D_1 = diag(1, 0, 0), D_2 = diag(0, 1, 0), D_3 = diag(0, 0, 1), ∀ s ∈ Z+. Define the desired reference trajectory x*(t) = 0 and x_i(t, 0) = x_{i0}(t) = sin(t), ∀ t ∈ {0, 1, ..., T}, i = 1, 2, ..., N. According to ||L|| = 2.0127, we define γ̃ = diag(1/4, 1/4, 1/4) and Γ_1 = diag(−0.5, 0.5, −1). As a result, one has ρ(Ψ_1 Ψ_2 Ψ_3) = 0.6667 < 1 with Ψ_i = I_N + B D_i Γ_1 L (I_N − γ̃ L)^{-1}, i = 1, 2, 3. As mentioned in Remark 3, condition (16) in Theorem 1 can be satisfied if we choose D_{l_s} + D_{l_s+1} + D_{l_s+2} + ··· + D_{l_{s+1}−1} ≥ I_N, ∀ s ∈ Z+. Hence, under the iterative controller (8) with event-triggered strategy (3), the tracking error satisfies e(t, k) → 0 as k → +∞. Fig. 3 shows the state trajectories of the three nodes in system (1) with the iterative controller (8) and event-triggered strategy (3) at different iterations k. From Fig. 3, it is clear that the state of each node asymptotically achieves the desired reference on the finite interval [0, T] as the iteration index k increases.

In addition, for Theorem 2 we define Γ_2 = diag(−1, 0.7, −0.8); one has ρ(Φ̃_1^1 Φ̃_2^1 Φ̃_3^1) = 0.8118 < 1 and ρ(Φ̃_1^2 Φ̃_2^2 Φ̃_3^2) = 0.7143 < 1 with Φ̃_i^1 = I_N + (1 − σ) B D_i Γ_2 L (I_N + (1 − σ)(I_N − (1 − σ) γ̃ L)^{-1} · γ̃ · L) and Φ̃_i^2 = I_N + (1 + σ) B D_i Γ_2 L (I_N + (1 + σ)(I_N − (1 + σ) γ̃ L)^{-1} · γ̃ · L), i = 1, 2, 3. Also, as mentioned in Remark 3, condition (31) in Theorem 2 can be satisfied if we choose D_{l_s} + D_{l_s+1} + D_{l_s+2} + ··· + D_{l_{s+1}−1} ≥ I_N, ∀ s ∈ Z+. Hence, under the iterative controller (9) with event-triggered strategy (4), the tracking error satisfies e(t, k) → 0 as k → +∞. Fig. 4 shows the state trajectories of the three nodes in system (1) with the iterative controller (9) and event-triggered strategy (4) at different iterations k. From Fig. 4, it is clear that the state of each node asymptotically achieves the desired reference on the finite interval [0, T] as the iteration index k increases.

[Fig. 4. The trajectories of state x(t, k) in system (1) with the iterative controller (9) and event-triggered strategy (4), at iterations k = 1, 10, 30, and 100.]

Remark 9. In conditions (16) and (31), the learning gains can be chosen flexibly as long as the two conditions are satisfied. For example, in Example 1, for each D_i (i = 1, 2, 3), the learning gains Γ_1 and Γ_2 are chosen as diagonal matrices such that Ψ_i^{ii} < 1, Φ̃_{1i}^{ii} < 1 and Φ̃_{2i}^{ii} < 1, where Ψ_i^{ii}, Φ̃_{1i}^{ii} and Φ̃_{2i}^{ii} are the ith diagonal elements of the matrices Ψ_i, Φ̃_i^1 and Φ̃_i^2, respectively.
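The two conditions of Example 1 can be re-checked with a few lines of NumPy. The printed spectral radii need not coincide exactly with the values 0.6667, 0.8118 and 0.7143 quoted above, because the extracted formulas for Ψ_i and Φ̃_i admit slightly different readings; what matters, and what this sketch verifies, is that the relevant products stay strictly below one.

```python
import numpy as np

# Data as stated in Example 1.
N = 3
I = np.eye(N)
sigma = 0.5
L = np.array([[0.8, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.8, 1.0, 0.5]])
B = np.diag([1.0, -1.0, 1.0])
gamma = np.diag([0.25, 0.25, 0.25])
Gamma1 = np.diag([-0.5, 0.5, -1.0])
Gamma2 = np.diag([-1.0, 0.7, -0.8])
D = [np.diag(r) for r in np.eye(N)]      # D_1, D_2, D_3

def rho(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

# condition (16): Psi_i = I + B D_i Gamma1 L (I - gamma~ L)^{-1}
Psi = [I + B @ Di @ Gamma1 @ L @ np.linalg.inv(I - gamma @ L) for Di in D]
print("rho(Psi1 Psi2 Psi3) =", rho(Psi[0] @ Psi[1] @ Psi[2]))   # expected < 1

# condition (31): both vertex products must have spectral radius below 1
def Phi(Di, s):
    # Theorem-2 form of the vertex matrices, with scaling s = 1 -/+ sigma
    return I + s * B @ Di @ Gamma2 @ L @ np.linalg.inv(I - s * gamma @ L)

for s in (1.0 - sigma, 1.0 + sigma):
    P = Phi(D[0], s) @ Phi(D[1], s) @ Phi(D[2], s)
    print("scaling", s, ": rho =", rho(P))                       # expected < 1
```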
5. Conclusion

The iterative learning problem for discrete-time systems with an event-triggered scheme and quantization has been discussed in this paper. The event-triggered scheme has been considered in the iterative learning controllers to reduce the number of iteration steps to be updated. Quantization has been introduced in the event-triggered controller, and some relaxed conditions have been presented to guarantee the tracking objective by using some interval matrix properties. Finally, numerical examples with simulations have been provided to illustrate the usefulness of the obtained criteria.

Acknowledgments

This work was jointly supported by the National Natural Science Foundation of China under Grant Nos. 61322302 and 61203146, the China Postdoctoral Fund under Grant No. 2013M541589, the Jiangsu Postdoctoral Fund under Grant No. 1301025B, the Scientific Research Starting Project of SWPU under Grant No. 2014QHZ037, the Youth Research and Innovation Team of SWPU under Grant No. 2013XJZT004, and the Australian Research Council's Discovery Grant Scheme under Nos. 130104765 and 140100544.

References

Abidi, K., & Xu, J. (2011). Iterative learning control for sampled-data systems: From theory to practice. IEEE Transactions on Industrial Electronics, 58(7), 3002–3015.
Arimoto, S., Kawamura, S., & Miyazaki, F. (1984). Bettering operation of robots by learning. Journal of Robotic Systems, 1(2), 123–140.
Delchamps, D. (1990). Stabilizing a linear system with quantized state feedback. IEEE Transactions on Automatic Control, 35(8), 916–924.
Fan, Y., Feng, G., Wang, Y., & Song, C. (2013). Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica, 49(2), 671–675.
Fu, M., & Xie, L. (2005). The sector bound approach to quantized feedback control. IEEE Transactions on Automatic Control, 50(11), 1698–1711.
Hu, W., Liu, L., & Feng, G. (2015). Consensus of linear multi-agent systems by distributed event-triggered strategy. IEEE Transactions on Cybernetics, http://dx.doi.org/10.1109/TCYB.2015.2398892.
Huang, D., Xu, J., Li, X., Xu, C., & Yu, M. (2013). D-type anticipatory iterative learning control for a class of inhomogeneous heat equations. Automatica, 49(8), 2397–2408.
Li, J., & Li, J. (2014). Adaptive iterative learning control for coordination of second-order multi-agent systems. International Journal of Robust and Nonlinear Control, 24(18), 3282–3299.
Liu, Z., Guan, Z., Li, T., Zhang, X., & Xiao, J. (2012). Quantized consensus of multi-agent systems via broadcast gossip algorithms. Asian Journal of Control, 14(6), 1634–1642.
Mazo, M., & Tabuada, P. (2011). Decentralized event-triggered control over wireless sensor/actuator networks. IEEE Transactions on Automatic Control, 56(10), 2456–2461.
Meng, D., Jia, Y., & Du, J. (2015a). Robust consensus tracking control for multiagent systems with initial state shifts, disturbances, and switching topologies. IEEE Transactions on Neural Networks and Learning Systems, 26(4), 809–824.
Meng, D., Jia, Y., & Du, J. (2015b). Robust iterative learning protocols for finite-time consensus of multi-agent systems with interval uncertain topologies. International Journal of Systems Science, 46(5), 857–871.
Meng, D., Jia, Y., Du, J., & Yu, F. (2013). Tracking algorithms for multiagent systems. IEEE Transactions on Neural Networks and Learning Systems, 24(10), 1660–1676.
Meng, D., Jia, Y., Du, J., & Zhang, J. (2014). On iterative learning algorithms for the formation control of nonlinear multi-agent systems. Automatica, 50(1), 291–295.
Meng, D., & Moore, K. (2016). Learning to cooperate: Networks of formation agents with switching topologies. Automatica, 64, 278–293.
Mezghani, M., Roux, G., Cabassud, M., Lann, M., Dahhou, B., & Casamatta, G. (2002). Application of iterative learning control to an exothermic semibatch chemical reactor. IEEE Transactions on Control Systems Technology, 10(6), 822–834.
Peng, C., & Han, Q. (2013). A novel event-triggered transmission scheme and control co-design for sampled-data control systems. IEEE Transactions on Automatic Control, 58(10), 2620–2626.
Peng, C., Han, Q., & Yue, D. (2013). To transmit or not to transmit: A discrete event-triggered communication scheme for networked Takagi–Sugeno fuzzy systems. IEEE Transactions on Fuzzy Systems, 21(1), 164–170.
Seok Han, H., & Lee, J. (1994). Necessary and sufficient conditions for stability of time-varying discrete interval matrices. International Journal of Control, 59(4), 1021–1029.
Shih, M., Lur, Y., & Pang, C. (1998). An inequality for the spectral radius of an interval matrix. Linear Algebra and its Applications, 274(1), 27–36.
Tan, Y., Xu, J., Norrlöf, M., & Freeman, C. (2011). On reference governor in iterative learning control for dynamic systems with input saturation. Automatica, 47(11), 2412–2419.
Wang, Z., Shen, B., Shu, H., & Wei, G. (2012). Quantized control for nonlinear stochastic time-delay systems with missing measurements. IEEE Transactions on Automatic Control, 57(6), 1431–1444.
Xiong, W., Ho, D., & Yu, X. (2015). Saturated finite interval iterative learning for tracking of dynamic systems with HNN-structural output. IEEE Transactions on Neural Networks and Learning Systems, http://dx.doi.org/10.1109/TNNLS.2015.2448716.
Xu, J., Tan, Y., & Lee, T. (2004). Iterative learning control design based on composite energy function with input saturation. Automatica, 40(8), 1371–1377.
Yue, D., Tian, E., & Han, Q. (2013). A delay system method for designing event-triggered controllers of networked control systems. IEEE Transactions on Automatic Control, 58(2), 475–481.
Zhang, Z., Hao, F., Zhang, L., & Wang, L. (2014). Consensus of linear multi-agent systems via event-triggered control. International Journal of Control, 87(6), 1243–1251.
Zhang, R., Hou, Z., Chi, R., & Ji, H. (2015a). Adaptive iterative learning control for nonlinearly parameterised systems with unknown time-varying delays and input saturations. International Journal of Control, 88(6), 1133–1141.
Zhang, Z., Zhang, L., Hao, F., & Wang, L. (2015b). Distributed event-triggered consensus for multi-agent systems with quantisation. International Journal of Control, 88(6), 1112–1122.
Zou, L., Wang, Z., Gao, H., & Liu, X. (2015). Event-triggered state estimation for complex networks with mixed time delays via sampled data information: The continuous-time case. IEEE Transactions on Cybernetics, http://dx.doi.org/10.1109/TCYB.2014.2386781.

Wenjun Xiong was born in Hubei Province, China. She received the M.Sc. degree in applied mathematics from the Department of Mathematics, Southeast University, Nanjing, China, in 2005, and the Ph.D. degree from the Department of Mathematics, City University of Hong Kong, Hong Kong, China, in 2010. Currently, she is an Associate Professor in the School of Economic Information and Engineering, Southwestern University of Finance and Economics, Chengdu, China. Her research interests include multi-agent systems, complex networks, nonlinear dynamics and control, neural networks, and cooperative control of autonomous systems.

Xinghuo Yu received the B.Eng. and M.Eng. degrees from the University of Science and Technology of China, Hefei, China, in 1982 and 1984, and the Ph.D. degree from Southeast University, Nanjing, China, in 1988. He is currently the Associate Deputy Vice-Chancellor (Research Capability) and Distinguished Professor of RMIT University (Royal Melbourne Institute of Technology), Melbourne, Australia. His research interests include control systems, complex and intelligent systems, and smart grids. He is President-Elect (2016–2017) of the IEEE Industrial Electronics Society. He has received a number of awards and honors for his contributions, including the 2013 Dr.-Ing. Eugene Mittelmann Achievement Award of the IEEE Industrial Electronics Society and the 2012 IEEE Industrial Electronics Magazine Best Paper Award. He holds Fellowships of IEEE and IET, and of four other professional societies.

Ragini Patel received the Bachelor's degree in electrical engineering from Rewa Engineering College, Rewa, M.P., India, in 1999, and the M.Tech. degree from the Indian Institute of Technology Bombay, India, in 2002. She is currently pursuing the Ph.D. in electrical engineering at RMIT University, Melbourne, Australia. Her research interests include control theory, model predictive control and its applications, and operations optimization under dynamic environments such as renewable energy integration in smart grids.

Wenwu Yu received the B.Sc. degree in information and computing science and the M.Sc. degree in applied mathematics from the Department of Mathematics, Southeast University, Nanjing, China, in 2004 and 2007, respectively, and the Ph.D. degree from the Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China, in 2010. Currently, he is the Founding Director of the Laboratory of Cooperative Control of Complex Systems, an Associate Director of the Research Center for Complex Systems and Network Sciences, an Associate Dean of the Department of Mathematics, and a Full Professor with the Distinguished Young Honor at Southeast University, China. Dr. Yu has held several visiting positions in Australia, China, Germany, Italy, the Netherlands, and the USA. His research interests include multi-agent systems, complex networks and systems, nonlinear dynamics and control, smart grids, neural networks, game theory, robotics, and optimization. He was listed among the Thomson Reuters Highly Cited Researchers in Engineering in 2014 and 2015. Moreover, he was awarded a National Natural Science Fund for Excellent Young Scholars in 2013, the Six Talent Peaks of Jiangsu Province of China in 2014, and the National Ten Thousand Talent Program for Young Top-notch Talents in 2015.
Journal of the Franklin Institute 357 (2020) 8364–8382



Fast data-driven iterative event-triggered control for nonlinear networked discrete systems with data dropouts and sensor saturation
Jiannan Chen a, Changchun Hua a,∗, Xinping Guan b
a Institute of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
b School of Electronics, Information and Electric Engineering, Shanghai Jiaotong University, Dongchuan Road 800,
Shanghai 200240, China
∗ Corresponding author. E-mail address: cch@ysu.edu.cn (C. Hua).
Received 26 August 2019; received in revised form 11 March 2020; accepted 13 March 2020
Available online 3 July 2020

Abstract
The tracking control problem is investigated for a class of nonlinear networked discrete systems
with random data-dropout and sensor saturation, and a novel data-driven iterative learning event-triggered
control scheme is proposed. First, a new model is established to describe random data-dropout processes.
Then, the fixed threshold iterative learning control scheme based on randomly received saturated output
data is designed to track the desired trajectory and reduce the update number of iterations. Further,
in order to obtain a faster convergence or learning speed, a novel method based on a varying parameter along the iteration axis is given. In the end, the resulting closed-loop system is proved to be stable, and the relationship between the upper bound on the number of consecutive data dropouts and system stability is revealed. Comprehensive simulations are provided to verify the theoretical results.
© 2020 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Iterative learning control (ILC) is an effective control framework which can improve the current system performance for a repeated system by using past experience, and its basic idea is to construct the current control signal by employing the previous control signal
and some compensation signals related with error variables [1,2]. Due to the effectiveness
and simplicity, ILC has been widely implemented in industrial manufacturing, for example,
industrial manipulators, chemical batch reactors, electric motors, etc [3–6].
Model-free adaptive control (MFAC), or data-driven control, is characterized by designing controllers with only the input/output data of the investigated system, and it has become an interesting research field in the past decade [7–17]. In [7–9], the data-driven control problem
is studied for uncertain discrete SISO system with data-dropout, and two different methods, the
method without compensation and the method with compensation, are proposed. Further, the
MIMO nonlinear systems are taken into consideration in [10,11], and both tracking controllers
are designed based on proposed compact form dynamic linearization model. The two-player
zero-sum game problem is investigated in [12], and a new model-free globalized dual heuristic
dynamic programming method based on neural networks is proposed. [13] considers the fault
detection and fault-tolerant control issues, and a fault-tolerant control scheme is designed
where the neural network approximator is utilized to learn the unknown fault dynamics and
an estimator is developed to detect the fault. In [14], a new MFAC design method is created
for a class of nonlinear systems based on dual RBFNNs, and the main feature of such
method is that the controller structure is determined merely by the I/O data of the plant,
rather than the identified model or prior knowledge. The application of MFAC in interlinked
AC/DC microgrids is studied in [15], and the proposed scheme has the ability to regulate and
restore the DC terminal voltage and AC frequency. For a repeatable control task on a finite
time interval, some literatures of MFAC have been implemented under the ILC framework.
[16] proposes a distributed model free adaptive iterative learning control scheme for a class
of unknown nonlinear multiagent systems to perform consensus tracking mission under both
fixed and switched topologies. The input and output constraints problems are considered in
[17], and a constrained controller is designed for the SISO nonlinear system under the optimal
control theory framework. In physical engineering systems, unlimited signals are unlikely to
be provided or measured by sensors. The saturation nonlinearity is an inevitable problem
due to hardware constraints. For example, in the automotive industry, the measured lateral
acceleration of a car can be used to estimate its sideslip angle [18], and the response of the
lateral acceleration to changes in the sideslip angle is approximately linear for small sideslip
angles, but a saturation occurs for large sideslip angles. In order to reduce effects of sensor
saturation on system performances, many representative works have been carried out. In [19],
the linear SISO system in the presence of output measurement saturation is investigated, and
a globally asymptotically stability condition is given. Then, the result of [19] is extended
to MIMO system with saturated output, and authors in [20] reveal that MIMO system can
be globally asymptotically stabilized by output feedback. In [21], the filtering problem is
considered for linear time-varying system with sensor saturation and measurement noises,
and a sufficient condition for the existence of set-membership filter is given. In view of the
fact that many practical systems are impossible to establish system models, the saturation
constraint problem for model free system is studied in [22].
With the development of network technology, more and more control systems are im-
plemented in the networked mode to enhance the flexibility. Therefore, a problem arising
naturally is the data-dropout phenomenon. In [23], the state feedback H∞ controller with a
fixed gain is designed for liner system based on the assumption that the consecutive packet
dropout number is bounded. In [24], a stochastic variable satisfying Bernoulli distributed white
sequence is given to characterize the missing data scenario, and with which an active resilient

control strategy is proposed for singular networked control systems with external disturbances
and missing data scenario. The Bernoulli random binary distribution is employed to model
the data-dropout problem in [25] and [26], and both stabilities of closed-loop systems are
proved under stochastic theory framework. In [27], the networked system with random packet
dropouts is transformed into a Markov system, and a state feedback controller is designed
based on the transformed stochastic system. In [9] and [28], data-dropout compensation strategies are designed for networked systems with random packet dropouts, where the missing data is estimated by the latest successfully transmitted data.
Event-triggered control is an effective control framework in the control community, for it has the ability to reduce the required computation cost while still maintaining control performance.
In the event-triggered control, an event is required to be defined based on the measurement er-
ror of certain signal of interest. When the measurement error reaches the prescribed threshold,
an event is triggered and the controller part or other part is updated. Currently, the event-
triggered control has been receiving increasing attention in continuous system [29], discrete
system [30], fuzzy system [31], stochastic system [32], singular system [33] and multiagent
system [34]. Note that results of event-triggered control for repetitive systems are still very
limited. The iterative learning problem is considered in [35] for discrete-time multiagent sys-
tems, and a control protocol based on event-triggered transmission strategy is designed where
the event is defined based on the measured state signals.
In this paper, we consider iterative learning problem for nonlinear networked discrete
systems with random data-dropout and sensor saturation, and a fast iterative learning event-
triggered control scheme is proposed. The major contributions are displayed as follows.

• A novel event-triggered iterative learning update strategy is designed based on the measured
differences between adjacent control signals u(k, i) and u(k, i − 1) to reduce the number
of iteration steps to be updated.
• The data-dropout problem is considered for the networked system, the relationship between
the upper bound of the consecutive packet dropout number and the stability of closed-loop
system is revealed.
• The convergence or learning speed of iterative learning control is improved based on a
novel varying parameter technique.
• Only the saturated output information is employed in the designed control algorithm, and
boundaries of sensors are asymmetrical and time-varying.

The rest of this paper is organized as follows. In Section 2, the system dynamics and
preliminaries are provided. The main results and the corresponding stability analyses are
given in Section 3. Some numerical examples are presented in Section 4, and the conclusions
are given in Section 5. Necessary appendixes are given in the last section.
Notations. In this paper, notations k and i are used to represent time instant and iteration,
respectively. λ1 (i), λ2 (i), β 1 , β 2 are positive parameters in the controllers.

2. System descriptions and preliminaries

Consider a class of networked nonlinear discrete systems [9,11,36],

y(k + 1, i) = f (y(k, i), . . . , y(k − n̄y , i), u(k, i), . . . , u(k − n̄u , i)) (1)

where y(k, i) and u(k, i) are system output and control input at time instant k and iteration
i. The parameters k ∈ {1, . . . , K}, i, n̄y and n̄u represent the sampling time variable, the iteration variable,
the order of the system output and the order of the system input, respectively.
Assumption 1. The partial derivative of f(∗ ) with respect to u(k, i) is continuous.
Assumption 2. The function f(∗) satisfies the generalized Lipschitz condition with respect to
the iteration variable i. Therefore, for any Δu(k, i) ≠ 0, we have |Δy(k + 1, i)| ≤ b|Δu(k, i)|,
where Δy(k + 1, i) = y(k + 1, i) − y(k + 1, i − 1), Δu(k, i) = u(k, i) − u(k, i − 1), and b is
a positive constant.
Assumption 3. The initial measured output y(0, i) is identical for all iterations, i.e.,
y(0, i) = y(0, 1) = C for all i, where C is a constant.
For nonlinear system (1), if both Assumptions 1 and 2 hold, then there must exist a parameter
φ(k, i), called the pseudo partial derivative (PPD), such that, when |Δu(k, i)| ≠ 0 for all
k and i, system (1) can be rewritten as

$$
y(k + 1, i) = y(k + 1, i - 1) + \phi(k, i)\bigl(u(k, i) - u(k, i - 1)\bigr) \tag{2}
$$

where |φ(k, i)| ≤ b, and dynamics (2) is called the compact form dynamic linearization (CFDL)
model. In actual applications, the output measurement sensor is subject to saturation: the
actual output of the system is not available if its value exceeds the effective range. Hence,
only the measured output can be utilized in the design of the control input. In this paper,
the sensor saturation is modeled as

$$
z(k, i) = \mathrm{sat}(y(k, i)) =
\begin{cases}
y_{up}(k), & y(k, i) > y_{up}(k) \\
y(k, i), & -y_{low}(k) \le y(k, i) \le y_{up}(k) \\
-y_{low}(k), & y(k, i) < -y_{low}(k)
\end{cases} \tag{3}
$$
where z(k, i) is the measured output, and yup(k) and ylow(k) are two different positive time-varying
variables. From this model, it can be seen that if the actual output exceeds the measurable
range, the measured output no longer equals the actual value. This saturation nonlinearity
widely exists in practical measuring equipment and devices.
Assumption 4. The sign of the PPD φ(k, i) remains unchanged, and only the case φ(k, i) > ϱ > 0 is
discussed without loss of generality, where ϱ is a small positive constant.
Remark 1. Assumption 4 indicates that the system output does not decrease as the control
input increases, which can be regarded as a type of linear-like characteristic. Such an assumption
implies that the sign of the control direction is known, or at least does not change [7,10].
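As a quick numerical illustration (not part of the original analysis), the sketch below forms the iteration-wise increments of a simple example plant and computes the resulting PPD values of Eq. (2); the plant f, all parameter values, and the input perturbation are illustrative assumptions, not the benchmark used later in Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative plant of the form (1); an assumed example only.
def f(y, u):
    return y / (1.0 + y ** 2) + 0.5 * u

def run_iteration(u_seq, y0=0.0):
    """Simulate one iteration (one pass over k) for a given input sequence."""
    y = np.zeros(len(u_seq) + 1)
    y[0] = y0
    for k, u in enumerate(u_seq):
        y[k + 1] = f(y[k], u)
    return y

K = 50
u_prev = rng.uniform(-1.0, 1.0, K)             # u(k, i-1)
u_curr = u_prev + rng.uniform(0.05, 0.10, K)   # u(k, i), increments kept away from zero

y_prev = run_iteration(u_prev)                 # y(k, i-1), identical initial condition (Assumption 3)
y_curr = run_iteration(u_curr)                 # y(k, i)

du = u_curr - u_prev                           # Delta u(k, i)
dy = y_curr[1:] - y_prev[1:]                   # Delta y(k+1, i)
phi = dy / du                                  # PPD phi(k, i) realised by Eq. (2)
print("range of phi(k, i):", phi.min(), phi.max())
```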
As shown in Fig. 1, the random data-dropout process is modeled as a switch that opens
and closes in a random manner. When the switch is on (α = 1), no data packet is lost. When
the switch is off (α = 0), the output is held at the previous value and the present data packet is
lost. In this paper, the output y(k, i) of the system passes through a network that suffers from random
data dropouts and sensor saturation. Thus, the obtained information z̃(k, i) equals the measured output
z(k, i) with probability ᾱ; when no new data arrive, previously received data are employed,
with probability 1 − ᾱ. In this paper, the random data dropout is modeled as
z̃(k, i) = αz(k, i) + (1 − α)z(k, i − m_i), where m_i is an unknown but bounded positive
integer. It is worth mentioning that ᾱ is the expectation of α.
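The following minimal sketch (an illustration, not code from the paper) puts the saturation model (3) and the dropout model z̃(k, i) = αz(k, i) + (1 − α)z(k, i − m_i) together; the arrival probability alpha_bar, the dropout depth bound m_max, and the fallback used when no earlier packet exists are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sat(y, y_up, y_low):
    """Asymmetric, possibly time-varying sensor saturation, as in Eq. (3)."""
    return np.clip(y, -y_low, y_up)

def measure(y_true, k, i, z_history, y_up, y_low, alpha_bar=0.9, m_max=3):
    """Return z_tilde(k, i): the saturated output after the lossy network.

    z_history stores earlier measured (saturated) outputs indexed by (k, iteration);
    m_i is an unknown but bounded positive integer, drawn at random here.
    """
    z = sat(y_true, y_up(k), y_low(k))
    z_history[(k, i)] = z
    alpha = rng.random() < alpha_bar          # Bernoulli arrival indicator
    if alpha:
        return z                              # packet received: z_tilde = z(k, i)
    m_i = int(rng.integers(1, m_max + 1))     # bounded dropout depth
    return z_history.get((k, i - m_i), z)     # otherwise hold z(k, i - m_i)

# Example call with Group-A-style bounds (illustrative numbers)
history = {}
z_t = measure(1.2, k=10, i=5, z_history=history,
              y_up=lambda k: 0.55 + 0.3 * np.exp(-0.01 * k),
              y_low=lambda k: 0.65 + 0.3 * np.exp(-0.01 * k))
print(z_t)
```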

Fig. 1. The overall framework of the proposed control scheme.

3. Controller design and convergence analysis

In this section, the iterative learning event-triggered control scheme is proposed for the
investigated system. The overall control framework is shown in Fig. 1. The controller is
designed as

$$
\begin{cases}
\hat\phi(k, i) = \hat\phi(k, i-1) + \dfrac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}\bigl(\Delta\tilde z(k+1, i-1) - \alpha\hat\phi(k, i-1)\Delta u(k, i-1)\bigr) \\[2ex]
w(k, i) = w(k, i-1) + \dfrac{\beta_2 \hat\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}\bigl(y_d(k+1) - \tilde z(k+1, i-1)\bigr)
\end{cases} \tag{4}
$$

where the parameters λ1(i) and λ2(i) are designed as (ρo − ρ∞)e^{−κi} + ρ∞, in which ρo, κ and
ρ∞ are positive constants to be determined, β1, β2 ∈ (0, 1) are step factors, and φ̂(k, i)
is the estimate of the unknown φ(k, i). Here Δu(k, i − 1) = u(k, i − 1) − u(k, i − 2),
Δz̃(k + 1, i − 1) = z̃(k + 1, i − 1) − z̃(k + 1, i − 2),
Δỹ(k + 1, i − 1) = ỹ(k + 1, i − 1) − ỹ(k + 1, i − 2), and ỹ(k + 1, i − 1) = αy(k + 1, i − 1) +
(1 − α)y(k + 1, i − 1 − m_{i−1}). In order to ensure that the estimate φ̂(k, i) remains positive all the
time, the reset algorithm is given as

$$
\hat\phi(k, i) = \hat\phi(k, 1), \quad \text{if } |\hat\phi(k, i)| \le \varrho \ \text{or}\ |\Delta u(k, i)| \le \varrho \ \text{or}\ \operatorname{sign}(\hat\phi(k, i)) \ne \operatorname{sign}(\hat\phi(k, 1)) \tag{5}
$$

where φ̂(k, 1) is usually set to be positive. In order to reduce unnecessary updates, the event-triggered
update strategy is designed as

$$
\begin{cases}
u(k, i) = w(k, I_l), & i \in [I_l, I_{l+1}) \\
I_{l+1} = \inf\{\, i \in \mathbb{Z}^{+} \mid |E(k, i)| \ge T_m,\ i > I_l \,\}
\end{cases} \tag{6}
$$

where E(k, i) = w(k, i) − u(k, i), and T_m is a positive constant parameter to be designed. To
facilitate the convergence analysis, the relationship between u(k, i) and w(k, i) is given as

$$
u(k, i) = w(k, i) + \lambda_3(k, i) T_m \tag{7}
$$
where λ3 (k, i) ∈ [−1, 1] is an unknown coefficient [37]. Define ε(k, i) = yd (k) − z(k, i) and
e(k, i) = yd (k) − y(k, i). It is easy to obtain
$$
\varepsilon(k, i) = \gamma(k, i)\, e(k, i) \tag{8}
$$

where

$$
\gamma(k, i) =
\begin{cases}
\dfrac{y_d(k) - y_{up}(k)}{e(k, i)}, & e(k, i) < y_d(k) - y_{up}(k) \\[1.5ex]
1, & y_d(k) - y_{up}(k) \le e(k, i) \le y_d(k) + y_{low}(k) \\[1.5ex]
\dfrac{y_d(k) + y_{low}(k)}{e(k, i)}, & e(k, i) > y_d(k) + y_{low}(k).
\end{cases} \tag{9}
$$

It is easy to check that γ(k, i) ∈ (0, 1]. Denote Δz(k, i) = g(k, i)Δy(k, i), where g(k, i) is
given as

$$
g(k, i) =
\begin{cases}
0, & y(k) > y_{up}(k),\ y(k-1) > y_{up}(k-1) \\
0, & y(k) < -y_{low}(k),\ y(k-1) < -y_{low}(k-1) \\
1, & -y_{low}(k) < y(k) < y_{up}(k),\ -y_{low}(k-1) < y(k-1) < y_{up}(k-1) \\[1ex]
\dfrac{y(k) - y_{up}(k)}{y(k) - y(k-1)}, & -y_{low}(k) < y(k) < y_{up}(k),\ y(k-1) > y_{up}(k-1) \\[1.5ex]
\dfrac{y(k) + y_{low}(k)}{y(k) - y(k-1)}, & -y_{low}(k) < y(k) < y_{up}(k),\ y(k-1) < -y_{low}(k-1) \\[1.5ex]
\dfrac{y_{up}(k) - y(k-1)}{y(k) - y(k-1)}, & y(k) > y_{up}(k),\ -y_{low}(k-1) < y(k-1) < y_{up}(k-1) \\[1.5ex]
\dfrac{-y_{low}(k) - y(k-1)}{y(k) - y(k-1)}, & y(k) < -y_{low}(k),\ -y_{low}(k-1) < y(k-1) < y_{up}(k-1) \\[1.5ex]
\dfrac{y_{up}(k) + y_{low}(k)}{y(k) - y(k-1)}, & y(k) > y_{up}(k),\ y(k-1) < -y_{low}(k) \\[1.5ex]
\dfrac{-y_{low}(k) - y_{up}(k)}{y(k) - y(k-1)}, & y(k) < -y_{low}(k),\ y(k-1) > y_{up}(k)
\end{cases} \tag{10}
$$

where, for clarity in Eq. (10), y(k) represents y(k, i), and it is evident that 0 ≤ g(k, i) ≤ 1.
Similarly, one can deduce that Δz̃(k, i) = g(k, i)Δỹ(k, i).
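The attenuation factor γ(k, i) of Eq. (9) (and, analogously, its increment-wise counterpart g(k, i) of Eq. (10)) can be computed directly from the tracking error and the saturation bounds. A minimal sketch of Eq. (9) follows, assuming e(k, i) ≠ 0 in the saturated branches; it is an illustration, not code from the paper.

```python
def gamma(e, yd_k, y_up, y_low):
    """Attenuation factor of Eq. (9): eps(k, i) = gamma * e(k, i).

    e           : true tracking error e(k, i) = yd(k) - y(k, i)
                  (assumed nonzero in the saturated branches)
    yd_k        : desired output yd(k)
    y_up, y_low : saturation bounds yup(k), ylow(k)
    """
    if e < yd_k - y_up:          # actual output above the upper bound
        return (yd_k - y_up) / e
    if e > yd_k + y_low:         # actual output below the lower bound
        return (yd_k + y_low) / e
    return 1.0                   # output inside the measurable range
```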

Remark 2. In this paper, a novel model is presented in Eq. (3) to describe the sensor saturation,
where yup(k) and ylow(k) are the upper and lower bounds of the output saturation,
respectively. Compared with [36], the investigated sensor saturation is more general;
specifically, the bounds of the saturation are asymmetric and time-varying.

Remark 3. Compared with [7–9], the parameters λ1(i) and λ2(i) of Eq. (4) are designed
as (ρo − ρ∞)e^{−κi} + ρ∞ ∈ [ρ∞, ρo]. It is known that 0.9 × 0.8 × 0.7 is smaller than
0.9 × 0.9 × 0.9; that is to say, |e(k, i)| ≤ 0.9 × · · · × 0.1|e(k, 1)| converges much faster than
|e(k, i)| ≤ 0.9 × · · · × 0.9|e(k, 1)|. With the designed varying parameters, the convergence
speed can be improved; detailed theoretical analyses are given in Remark 4.
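To make the update law concrete, here is a brief sketch of the varying-parameter schedule of Remark 3 and of one iteration of the estimator/controller update of Eq. (4) with the reset rule (5). The gain values, the small threshold eps, and the use of the expectation ᾱ in place of the random α are illustrative choices for this sketch, not prescriptions from the paper.

```python
import numpy as np

def lam(i, rho0=2.0, rho_inf=1.0, kappa=0.02):
    """Iteration-varying parameter (rho0 - rho_inf)*exp(-kappa*i) + rho_inf (Remark 3)."""
    return (rho0 - rho_inf) * np.exp(-kappa * i) + rho_inf

def update_estimate(phi_hat_prev, phi_hat_init, du_prev, dz_tilde_prev, i,
                    beta1=0.3, alpha_bar=0.9, eps=1e-5):
    """PPD estimate update, first line of Eq. (4), followed by the reset rule (5).

    du_prev       : u(k, i-1) - u(k, i-2)
    dz_tilde_prev : z_tilde(k+1, i-1) - z_tilde(k+1, i-2)
    """
    phi_hat = phi_hat_prev + beta1 * du_prev / (lam(i) + du_prev ** 2) * (
        dz_tilde_prev - alpha_bar * phi_hat_prev * du_prev)
    if (abs(phi_hat) <= eps or abs(du_prev) <= eps
            or np.sign(phi_hat) != np.sign(phi_hat_init)):
        phi_hat = phi_hat_init                      # reset, Eq. (5)
    return phi_hat

def update_w(w_prev, phi_hat, yd_next, z_tilde_prev, i, beta2=0.3):
    """Learning-control signal update, second line of Eq. (4)."""
    return w_prev + beta2 * phi_hat / (lam(i) + phi_hat ** 2) * (yd_next - z_tilde_prev)
```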

Theorem 1. Consider the investigated system (2) with sensor saturation (3). If controller (4),
reset algorithm (5) and event-triggered update strategy (6) are implemented, then for any
desired trajectory yd(k), it is ensured that (1) the error subsystem e(k, i) is uniformly ultimately
bounded as the iteration goes to infinity; (2) the convergence speed is improved by the designed
parameters λ1(i) and λ2(i); (3) the number of control-signal updates is reduced with the proposed
event-triggered scheme.

Proof. The proof of Theorem 1 is divided into two parts, Part A and Part B. The result
obtained in Part A is the basis of Part B.

Part A. Proof of the boundedness of the estimated parameter.

(i): If the reset condition in (5) holds, i.e., |φ̂(k, i)| ≤ ϱ or |Δu(k, i)| ≤ ϱ or
sign(φ̂(k, i)) ≠ sign(φ̂(k, 1)), then φ̂(k, i) = φ̂(k, 1), and hence |φ̂(k, i)| is bounded.

(ii): For |φ̂(k, i)| > ϱ, let φ̃(k, i) = φ̂(k, i) − φ(k, i). Subtracting φ(k, i) from both sides
of Eq. (4), one can get

$$
\begin{aligned}
\tilde\phi(k, i)
={}& \tilde\phi(k, i-1) + \phi(k, i-1) - \phi(k, i) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}\bigl(\Delta\tilde z(k+1, i-1) - \alpha\hat\phi(k, i-1)\Delta u(k, i-1)\bigr) \\
={}& \tilde\phi(k, i-1) + \phi(k, i-1) - \phi(k, i) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}\bigl(\Delta\tilde y(k+1, i-1) - \alpha\hat\phi(k, i-1)\Delta u(k, i-1)\bigr) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}\bigl(g(k+1, i-1) - 1\bigr)\Delta\tilde y(k+1, i-1) \\
={}& \tilde\phi(k, i-1) + \phi(k, i-1) - \phi(k, i)
 - \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}\,\alpha\tilde\phi(k, i-1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)\alpha\phi(k+1, i-1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(1-\alpha)\Delta y(k+1, i-1-m_{i-1}) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(1-\alpha)\Delta y(k+1, i-1-m_{i-1}-1)
 + \cdots \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(1-\alpha)\Delta y(k+1, i-2-m_{i-2}+1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)(1-\alpha)\Delta y(k+1, i-1-m_{i-1})
 + \cdots \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)(1-\alpha)\Delta y(k+1, i-2-m_{i-2}+1)
\end{aligned} \tag{11}
$$

where g stands for g(∗, ∗) in Eq. (11) for clarity. In the above analysis, the following relation
is used:
$$
\begin{aligned}
\Delta\tilde y(k+1, i-1) &= \tilde y(k+1, i-1) - \tilde y(k+1, i-2) \\
&= \alpha\bigl(y(k+1, i-1) - y(k+1, i-2)\bigr) \\
&\quad + (1-\alpha)\bigl(y(k+1, i-1-m_{i-1}) - y(k+1, i-2-m_{i-2})\bigr) \\
&= \alpha\bigl(y(k+1, i-1) - y(k+1, i-2)\bigr) \\
&\quad + (1-\alpha)\bigl(y(k+1, i-1-m_{i-1}) - y(k+1, i-1-m_{i-1}-1) \\
&\qquad + y(k+1, i-1-m_{i-1}-1) - \cdots + y(k+1, i-2-m_{i-2}+1) \\
&\qquad - y(k+1, i-2-m_{i-2})\bigr)
\end{aligned} \tag{12}
$$

For the clarity of the proof, an important transformation is presented first. In the later
analysis, Δu(k, i − 1 − m_{i−1}), . . . , Δu(k, i − 2 − m_{i−2} + 1) are replaced by
W(k, i − 1 − m_{i−1})Δu(k, i − 1) + W̄(k, i − 1 − m_{i−1}), . . . ,
W(k, i − 2 − m_{i−2} + 1)Δu(k, i − 1) + W̄(k, i − 2 − m_{i−2} + 1), where
|W(k, i − 1 − m_{i−1})|, . . . , |W(k, i − 2 − m_{i−2} + 1)| and
|W̄(k, i − 1 − m_{i−1})|, . . . , |W̄(k, i − 2 − m_{i−2} + 1)| are bounded variables. The detailed
derivation of this transformation is given in Appendix A. Substituting the above transformation
into Eq. (11) yields

$$
\begin{aligned}
\tilde\phi(k, i)
={}& \tilde\phi(k, i-1) + \phi(k, i-1) - \phi(k, i)
 - \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}\,\alpha\tilde\phi(k, i-1)
 + \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)\alpha\phi(k+1, i-1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(1-\alpha)
 \Bigl[\phi(k, i-1-m_{i-1})\bigl(W(k, i-1-m_{i-1})\Delta u(k, i-1) + \bar W(k, i-1-m_{i-1})\bigr) + \cdots \\
&\qquad + \phi(k, i-2-m_{i-2}+1)\bigl(W(k, i-2-m_{i-2}+1)\Delta u(k, i-1) + \bar W(k, i-2-m_{i-2}+1)\bigr)\Bigr] \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)(1-\alpha)
 \Bigl[\phi(k, i-1-m_{i-1})\bigl(W(k, i-1-m_{i-1})\Delta u(k, i-1) + \bar W(k, i-1-m_{i-1})\bigr) + \cdots \\
&\qquad + \phi(k, i-2-m_{i-2}+1)\bigl(W(k, i-2-m_{i-2}+1)\Delta u(k, i-1) + \bar W(k, i-2-m_{i-2}+1)\bigr)\Bigr]
\end{aligned} \tag{13}
$$

For clarity of the later analysis, two scaling inequalities are used:

$$
0 < \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2} < \beta_1 < 1, \qquad
0 < \left|\frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}\right| < \frac{\beta_1}{2\sqrt{\lambda_1(i)}} \tag{14}
$$

Substituting these two scaling inequalities into Eq. (13) yields

$$
\begin{aligned}
\tilde\phi(k, i)
={}& \Bigl(1 - \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}\,\alpha\Bigr)\tilde\phi(k, i-1)
 + \phi(k, i-1) - \phi(k, i)
 + \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)\alpha\phi(k+1, i-1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(1-\alpha)
 \Bigl[\phi(k, i-1-m_{i-1})W(k, i-1-m_{i-1}) + \cdots + \phi(k, i-2-m_{i-2}+1)W(k, i-2-m_{i-2}+1)\Bigr]\Delta u(k, i-1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(1-\alpha)
 \Bigl[\phi(k, i-1-m_{i-1})\bar W(k, i-1-m_{i-1}) + \cdots + \phi(k, i-2-m_{i-2}+1)\bar W(k, i-2-m_{i-2}+1)\Bigr] \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)(1-\alpha)
 \Bigl[\phi(k, i-1-m_{i-1})W(k, i-1-m_{i-1}) + \cdots + \phi(k, i-2-m_{i-2}+1)W(k, i-2-m_{i-2}+1)\Bigr]\Delta u(k, i-1) \\
&+ \frac{\beta_1 \Delta u(k, i-1)}{\lambda_1(i) + \Delta u(k, i-1)^2}(g-1)(1-\alpha)
 \Bigl[\phi(k, i-1-m_{i-1})\bar W(k, i-1-m_{i-1}) + \cdots + \phi(k, i-2-m_{i-2}+1)\bar W(k, i-2-m_{i-2}+1)\Bigr] \\
={}& \Bigl(1 - \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}\,\alpha\Bigr)\tilde\phi(k, i-1) + D(k, i)
\end{aligned} \tag{15}
$$

where D(k, i) collects all terms on the right-hand side of Eq. (15) except
(1 − β₁Δu(k, i−1)²α/(λ₁(i) + Δu(k, i−1)²))φ̃(k, i−1). According to the boundedness of φ(∗) and the
above scaling inequalities, it is evident that D(k, i) is bounded, and the bound is denoted
as D_M = max E{|D(k, i)|}. It is worth mentioning that D_M is related to the data-drop rate:
the lower the data-drop rate is, the smaller D_M is. Taking expectation on both sides of Eq. (15),

one obtains

$$
\begin{aligned}
E\{|\tilde\phi(k, i)|\}
&\le \Bigl|1 - \frac{\beta_1 \Delta u(k, i-1)^2}{\lambda_1(i) + \Delta u(k, i-1)^2}\,\bar\alpha\Bigr| E\{|\tilde\phi(k, i-1)|\} + D_M \\
&= d_1 E\{|\tilde\phi(k, i-1)|\} + D_M \\
&\le d_1^{\,i-1} E\{|\tilde\phi(k, 1)|\} + \frac{D_M}{1 - d_1}
\end{aligned} \tag{16}
$$

where d₁ = max{|1 − β₁Δu(k, i−1)²ᾱ/(λ₁(i) + Δu(k, i−1)²)|}. By choosing the parameters λ₁(i) and β₁ properly,
one has d₁ ∈ (0, 1), which implies that E{|φ̃(k, i)|} is bounded. Further, it is concluded that E{|φ̂(k, i)|}
is bounded based on the boundedness of φ(k, i). It is worth pointing out that the convergence
bound is related to the data-drop rate. When the data-drop rate 1 − ᾱ is low, D_M ≈ 3b.
However, when the data-drop rate is high, to ensure that the convergence bound remains within a small
range, a smaller d₁ needs to be selected. This ends the proof of Part A.
Part B. Proof for the convergence of tracking error. Substituting control schemes (4) and
(6) into system (2) yields
$$
\begin{aligned}
e(k+1, i)
&= y_d(k+1) - y(k+1, i) \\
&= e(k+1, i-1) - \phi(k, i)\bigl(w(k, i) - w(k, i-1)\bigr) - \phi(k, i)\bigl(\lambda_3(k, i) - \lambda_3(k, i-1)\bigr)T_m \\
&= e(k+1, i-1) - \phi(k, i)\frac{\beta_2\hat\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}\bigl(y_d(k+1) - \tilde z(k+1, i-1)\bigr) - \phi(k, i)\Delta\lambda_3(k, i)T_m \\
&= e(k+1, i-1) - \frac{\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}
 \bigl(y_d(k+1) - \alpha z(k+1, i-1) - (1-\alpha)z(k+1, i-1-m_{i-1})\bigr)
 - \phi(k, i)\Delta\lambda_3(k, i)T_m \\
&= e(k+1, i-1) - \frac{\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}\,\alpha\varepsilon(k+1, i-1)
 - \frac{\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}(1-\alpha)\varepsilon(k+1, i-1-m_{i-1})
 - \phi(k, i)\Delta\lambda_3(k, i)T_m \\
&= \Bigl(1 - \frac{\alpha\gamma\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}\Bigr)e(k+1, i-1)
 - \phi(k, i)\Delta\lambda_3(k, i)T_m
 - \frac{(1-\alpha)\gamma\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}\,e(k+1, i-1-m_{i-1})
\end{aligned} \tag{17}
$$

where Δλ₃(k, i) = λ₃(k, i) − λ₃(k, i − 1), and γ represents γ(∗, ∗) for clarity in Eqs. (17) and
(18). Taking expectation on both sides of Eq. (17), one obtains

$$
\begin{aligned}
E\{|e(k+1, i)|\}
\le{}& \Bigl|1 - \frac{\bar\alpha\gamma\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}\Bigr| E\{|e(k+1, i-1)|\}
 + \bigl|{-\phi(k, i)\Delta\lambda_3(k, i)T_m}\bigr| \\
&+ \Bigl|{-\frac{(1-\bar\alpha)\gamma\beta_2\hat\phi(k, i)\phi(k, i)}{\lambda_2(i) + \hat\phi(k, i)^2}}\Bigr| E\{|e(k+1, i-1-m_{i-1})|\}
\end{aligned} \tag{18}
$$

Denote A = |1 − ᾱγβ₂φ̂(k, i)φ(k, i)/(λ₂(i) + φ̂(k, i)²)| and B = |−(1 − ᾱ)γβ₂φ̂(k, i)φ(k, i)/(λ₂(i) + φ̂(k, i)²)|.
Let d₂ = max{A, B}, and choose proper parameters λ₁(i), λ₂(i), β₁, β₂ to ensure 2d₂^ᾱ ∈ (0, 1). Then, Eq. (18) can
be rewritten as

$$
\begin{aligned}
E\{|e(k+1, i)|\}
&\le d_2 E\{|e(k+1, i-1)|\} + d_2 E\{|e(k+1, i-1-m_{i-1})|\} + 2bT_m \\
&\le 2^{i-1} d_2^{\bar\alpha(i-1)} E\{|e(k+1, 1)|\} + \frac{2bT_m}{1 - 2d_2^{\bar\alpha}} \\
&= \bigl(2d_2^{\bar\alpha}\bigr)^{i-1} E\{|e(k+1, 1)|\} + \frac{2bT_m}{1 - 2d_2^{\bar\alpha}}
\end{aligned} \tag{19}
$$

Since 0 < 2d₂^ᾱ < 1, it is evident that the error system E{|e(k+1, i)|} is uniformly ultimately
bounded as i goes to infinity. The convergence bound is determined by the terms T_m and d₂, which
can be restrained to a reasonable range by adjusting the control parameters properly.
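As a quick numerical check of the kind of ultimate bound stated in (19), one can iterate the worst-case error recursion from its first line and compare the result with its fixed point. All constants below are illustrative, and the exponent ᾱ is omitted for simplicity.

```python
# Worst-case error recursion behind Eq. (19); all constants are illustrative.
d2, b, Tm, m, n_iter = 0.4, 1.0, 0.05, 3, 60

E = [1.0] * (m + 1)                       # initial error magnitudes E{|e(k+1, 1)|}, ...
for i in range(m + 1, n_iter):
    E.append(d2 * E[i - 1] + d2 * E[i - 1 - m] + 2 * b * Tm)

print("error level after %d iterations: %.4f" % (n_iter, E[-1]))
print("fixed point 2*b*Tm / (1 - 2*d2) = %.4f" % (2 * b * Tm / (1 - 2 * d2)))
```

The recursion settles into a T_m-sized neighborhood, in line with the claim that the convergence bound is governed by T_m and d₂.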
Remark 4. When there is no data dropout in the communication, or the data-arrival probability ᾱ
approaches 1, Eq. (18) reduces to E{|e(k+1, i)|} ≤
|1 − γβ₂φ̂(k, i)φ(k, i)/(λ₂(i) + φ̂(k, i)²)| E{|e(k+1, i−1)|} + |φ(k, i)[λ₃(k, i)T_m − λ₃(k, i−1)T_m]|.
Then, from the monotone decreasing property of λ₂(i) and Remark 3, one can conclude that the proposed
technique has the ability to improve the learning or convergence speed. However, when there is a
serious data-dropout phenomenon, such ability may be reduced.
Remark 5. In event-triggered control, an event is defined based on the measurement error of a
certain signal of interest. When the measurement error reaches a prescribed threshold, an event is
triggered and the controller part is updated. Therefore, the update rate/number of a certain control
or communication protocol can be reduced; that is to say, computation and communication resources can
be saved to some extent if a proper threshold is selected. Unlike most of the available literature,
we propose a novel event-triggered control strategy under the iterative learning framework. From the
event-triggered update mechanism (6), it is easy to find that u(k, i) stays the same along the iteration
axis if the difference signal |E(k, i)| is within the pre-designed threshold T_m. The control signals at
iterations i ∈ [I_l + 1, I_{l+1}) are the same as u(k, I_l), and the computation efforts for u(k, i),
i ∈ [I_l + 1, I_{l+1}), are saved.
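A minimal sketch of the update mechanism (6)–(7) described in Remark 5 follows (illustrative code, not from the paper): at every iteration the triggering error E(k, i) is compared with the fixed threshold T_m, and the stored input is only overwritten when an event fires.

```python
def event_triggered_input(w_current, u_held, Tm):
    """Event-triggered update strategy of Eq. (6) for a fixed time instant k.

    w_current : latest learning-control signal w(k, i)
    u_held    : input currently being applied, u(k, i-1) = w(k, I_l)
    Tm        : fixed triggering threshold
    Returns the input to apply at iteration i and whether an event fired.
    """
    E = w_current - u_held                  # triggering error E(k, i)
    if abs(E) >= Tm:                        # event: update the control signal
        return w_current, True
    return u_held, False                    # hold: skip this iteration's update
```

When no event fires, the iteration index is simply appended to the current holding interval [I_l, I_{l+1}), so the update effort for that iteration is skipped.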

4. Simulations

To show the effectiveness and advantages of the designed control strategy, two groups of
numerical simulations are conducted. Consider the following nonlinear discrete system:

$$
y(k+1) = \frac{y(k)}{1 + y(k)^2} + u(k)^3 \tag{20}
$$
In Group A, the desired trajectory is given as

$$
y_d(k) =
\begin{cases}
0.5, & 1 \le k \le 200 \\
-0.6, & 200 < k \le 400 \\
0.5, & 400 < k \le 600
\end{cases} \tag{21}
$$
The sensor saturation in Group A is modeled as

$$
\begin{cases}
y_{up}(k+1) = 0.55 + 0.3e^{-0.01(k+1)} \\
y_{low}(k+1) = 0.65 + 0.3e^{-0.01(k+1)}
\end{cases} \tag{22}
$$

Fig. 2. Tracking trajectories at 25th iteration with 90% data-dropouts in Group A.

Fig. 3. Tracking trajectories at 100th iteration with 90% data-dropouts in Group A.

In Group B, the desired trajectory is given as

$$
y_d(k) = 3\sin\Bigl(\frac{4\pi k}{100}\Bigr) + 1 \tag{23}
$$
The sensor saturation in Group B is modeled as

$$
\begin{cases}
y_{up}(k+1) = 0.55 \\
y_{low}(k+1) = 0.65
\end{cases} \tag{24}
$$
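For reference, the simulation ingredients of Eqs. (20)–(24) can be written down directly. The sketch below only sets up the plant, the two reference trajectories, and the saturation bounds (the 600-step time horizon follows the segments in (21)); it does not reproduce the full closed loop.

```python
import numpy as np

k = np.arange(1, 601)

# Plant of Eq. (20)
def plant(y, u):
    return y / (1.0 + y ** 2) + u ** 3

# Group A: piecewise-constant reference (21) and decaying saturation bounds (22)
yd_A = np.where(k <= 200, 0.5, np.where(k <= 400, -0.6, 0.5))
yup_A = 0.55 + 0.3 * np.exp(-0.01 * (k + 1))
ylow_A = 0.65 + 0.3 * np.exp(-0.01 * (k + 1))

# Group B: sinusoidal reference (23) and constant saturation bounds (24)
yd_B = 3.0 * np.sin(4.0 * np.pi * k / 100.0) + 1.0
yup_B = np.full(k.shape, 0.55)
ylow_B = np.full(k.shape, 0.65)
```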

Fig. 4. Max tracking error trajectory with 90% random data-dropouts in Group A.

Fig. 5. Control inputs at k = 600 with 90% random data-dropouts in Group A.

In Group A, the controller parameters are selected as β₁ = 0.3, β₂ = 0.3, λ₁(i) = λ₂(i) =
e^{−0.02(i−1)} + 1 and ᾱ = 0.9. Simulation results are shown in Figs. 2–6. In Figs. 2 and 3,
both the system outputs y(k, 25), y(k, 100) and the measured outputs z(k, 25), z(k, 100) are presented.
From these two plots, it is observed that although the system output is saturated, the designed
control scheme can still ensure that the system output tracks the desired trajectory. The maximum

Fig. 6. Update and hold history of iteration control signals in Group A.

Fig. 7. Tracking trajectories at 25th iteration with 80% random data-dropouts in Group B.

tracking error curve over 100 iterations is given in Fig. 4, and it can be seen that the convergence
of the learning algorithm is guaranteed. Plots of the control input are given in Figs. 5 and
6, and it is obvious that the proposed event-triggered control scheme reduces the number of
control-signal updates along the iteration axis.
In Group B, the controller parameters are designed as β₁ = 0.5, β₂ = 0.5, λ₁(i) = λ₂(i) =
0.5e^{−0.02(i−1)} + 1 and ᾱ = 0.8. Simulation results are presented in Figs. 7–11. In Figs. 7 and

Fig. 8. Tracking trajectories at 200th iteration with 80% random data-dropouts in Group B.

Fig. 9. Max tracking error trajectories with 80% random data-dropouts in Group B.

8, both the system outputs and the measured outputs are shown, and it is observed that the system
output tracks the sinusoidal desired trajectory successfully. Two curves, the maximum tracking error
curve with constant parameters λ (λ₁ and λ₂) and with the varying parameters λ(i) (λ₁(i) and λ₂(i)), are
given in Fig. 9, and it can be seen that the learning algorithm with the novel varying-parameter
technique has a faster learning or convergence speed. Plots of the control input at the 200th

Fig. 10. Control inputs at k = 600 with 80% random data-dropouts in Group B.

Fig. 11. Update and hold history of iteration control signals in Group B.

iteration are given in Figs. 10 and 11, and it is obvious that the proposed event-triggered
control scheme reduces the number of control-signal updates along the iteration axis.

5. Conclusions

The iterative learning control scheme is proposed for nonlinear networked discrete systems
with event-triggered update strategy and data dropouts. First, to reduce the update number of

iterations, the fixed threshold event-triggered iterative learning control input is designed where
triggered events are defined based on the difference of adjacent control signals directly. Then,
the random data-dropout problem is considered, and the relationship between the upper bound
of the consecutive data-dropout number and system stability is presented. Further, considering
the requirement for a high convergence or learning speed, a novel technique based on a
varying parameter along the iteration axis is developed. In the end, comprehensive
simulations are conducted, and it is shown that the proposed control scheme can guarantee a
fast learning speed, a reduction of the number of control-signal updates, and robustness
to data dropouts. In future work, we will investigate iterative learning event-triggered
control for multi-agent systems.

Declaration of Competing Interest

The authors declared no potential conflicts of interest with respect to the research, author-
ship, and/or publication of this article.

Acknowledgement

This work was partially supported by National Key R&D Program of China
(2018YFB1308302), National Natural Science Foundation of China (618255304, 61751309,
61673335).

Appendix A

The system dynamics are given as

$$
y(k+1, i) = y(k+1, i-1) + \phi(k, i)\Delta u(k, i) \tag{A.1}
$$

For the clarity of analysis, the controller is modified as

$$
u(k, i) = \frac{\beta_2\hat\phi(k, i)}{\lambda_2 + \hat\phi(k, i)^2}\bigl(y_d(k+1) - y(k+1, i-1)\bigr) \tag{A.2}
$$

It is worth pointing out that the modified controller has no effect on the relationship to be established.
In order to fully demonstrate the process of the analysis, the relationship between u(k, 3) and u(k, 5)
is given as a motivating example. Given u(k, 3), we have

$$
u(k, 3) = \frac{\beta_2\hat\phi(k, 3)}{\lambda_2 + \hat\phi(k, 3)^2}\bigl(y_d(k+1) - y(k+1, 2)\bigr) \tag{A.3}
$$

Therefore, y(k + 1, 2) can be written as

$$
y(k+1, 2) = y_d(k+1) - \frac{\lambda_2 + \hat\phi(k, 3)^2}{\beta_2\hat\phi(k, 3)}\,u(k, 3) \tag{A.4}
$$

First, find the relationship between u(k, 4) and u(k, 3). From (A.1),

$$
y(k+1, 3) = y(k+1, 2) + \phi(k, 3)\Delta u(k, 3) \tag{A.5}
$$

According to the controller (A.2), we obtain

$$
\begin{aligned}
u(k, 4) &= \frac{\beta_2\hat\phi(k, 4)}{\lambda_2 + \hat\phi(k, 4)^2}\bigl(y_d(k+1) - y(k+1, 3)\bigr) \\
&= \frac{\beta_2\hat\phi(k, 4)}{\lambda_2 + \hat\phi(k, 4)^2}\bigl(y_d(k+1) - y(k+1, 2) - \phi(k, 3)\Delta u(k, 3)\bigr) \\
&= W(k, 4)u(k, 3) + \bar W(k, 4)
\end{aligned}
$$

where |W(k, 4)| and |W̄(k, 4)| are bounded variables. Second, find the relationship between
u(k, 5) and u(k, 3). From (A.1),

$$
y(k+1, 4) = y(k+1, 3) + \phi(k, 4)\Delta u(k, 4) \tag{A.6}
$$

According to the controller for u(k, 5), we obtain

$$
\begin{aligned}
u(k, 5) &= \frac{\beta_2\hat\phi(k, 5)}{\lambda_2 + \hat\phi(k, 5)^2}\bigl(y_d(k+1) - y(k+1, 4)\bigr) \\
&= \frac{\beta_2\hat\phi(k, 5)}{\lambda_2 + \hat\phi(k, 5)^2}\bigl(y_d(k+1) - y(k+1, 3) - \phi(k, 4)\Delta u(k, 4)\bigr) \\
&= W(k, 5)u(k, 3) + \bar W(k, 5)
\end{aligned}
$$

For any given bounded positive integer F, it is not hard to find that

$$
u(k, 3+F) = W(k, 3+F)u(k, 3) + \bar W(k, 3+F) \tag{A.7}
$$

where |W(k, 3 + F)| and |W̄(k, 3 + F)| are bounded variables.

References

[1] S. Arimoto, S. Kawamura, F. Miyazaki, Bettering operation of robots by learning, J. Robot. Syst. 1 (2) (1984)
123–140.
[2] H.S. Ahn, Y. Chen, K.L. Moore, Iterative learning control: Brief survey and categorization, IEEE Trans. Syst.
Man Cybern. Part C 37 (6) (2007) 1099–1121.
[3] H.S. Ahn, K.L. Moore, Y. Chen, Trajectory-keeping in satellite formation flying via robust periodic learning
control, Int. J. Robust Nonlinear Control 20 (14) (2010) 1655–1666.
[4] K.S. Lee, J.H. Lee, Iterative learning control-based batch process control technique for integrated control of end
product properties and transient profiles of process variables, J. Process Control 13 (7) (2003) 607–621.
[5] A. Tayebi, Adaptive iterative learning control for robot manipulators, Automatica 40 (7) (2004) 1195–1203.
[6] A.P. Schoellig, F.L. Mueller, R. D’Andrea, Optimization-based iterative learning for precise quadrocopter tra-
jectory tracking, Autonom. Robots 33 (1–2) (2012) 103–127.
[7] Z. Hou, X. Bu, Model free adaptive control with data dropouts, Expert Syst. Appl. 38 (8) (2011) 10709–10717.
[8] Z.H. Pang, G.P. Liu, D. Zhou, D. Sun, Data-driven control with input design-based data dropout compensation
for networked nonlinear systems, IEEE Trans. Control Syst. Technol. 25 (2) (2017) 628–636.
[9] X. Bu, Z. Hou, H. Zhang, Data-driven multiagent systems consensus tracking using model free adaptive control,
IEEE Trans. Neural Netw. Learn. Syst. 29 (5) (2018) 1514–1524.
[10] D. Xu, B. Jiang, P. Shi, A novel model-free adaptive control design for multivariable industrial processes, IEEE
Trans. Ind. Electron. 61 (11) (2014) 6391–6398.
[11] Z. Hou, S. Jin, Data-driven model-free adaptive control for a class of mimo nonlinear discrete-time systems,
IEEE Trans. Neural Netw. 22 (12) (2011) 2173–2188.
[12] X. Zhong, H. He, D. Wang, Z. Ni, Model-free adaptive control for unknown nonlinear zero-sum differential
game, IEEE Trans. Cybern. 48 (5) (2017) 1633–1646.
[13] Z. Wang, L. Liu, H. Zhang, Neural network-based model-free adaptive fault-tolerant control for discrete-time
nonlinear systems with sensor fault, IEEE Trans. Syst. Man Cybern. Syst. 47 (8) (2017) 2351–2362.
[14] Y. Zhu, Z. Hou, F. Qian, W. Du, Dual RBFNNs-based model-free adaptive control with aspen HYSYS simu-
lation, IEEE Trans. Neural Netw. Learn. Syst. 28 (3) (2017) 759–765.
[15] H. Zhang, J. Zhou, Q. Sun, J.M. Guerrero, D. Ma, Data-driven control for interlinked AC/DC microgrids via
model-free adaptive control and dual-droop control, IEEE Trans. Smart Grid 8 (2) (2017) 557–571.

[16] X. Bu, Q. Yu, Z. Hou, W. Qian, Model free adaptive iterative learning consensus tracking control for a class
of nonlinear multiagent systems, IEEE Trans. Syst. Man Cybern. Syst. 49 (4) (2017) 677–686.
[17] R. Chi, X. Liu, R. Zhang, Z. Hou, B. Huang, Constrained data-driven optimal iterative learning control, J.
Process Control 55 (2017) 10–29.
[18] H.F. Grip, L. Imsland, T.A. Johansen, J.C. Kalkkuhl, A. Suissa, Vehicle sideslip estimation design, implemen-
tation, and experimental validation, IEEE Control Syst. Mag. 29 (5) (2009) 36–52.
[19] G. Kreisselmeier, Stabilization of linear systems in the presence of output measurement saturation, Syst. Control
Lett. 29 (1) (1996) 27–30.
[20] H. Grip, A. Saberi, X. Wang, Stabilization of multiple-input multiple-output linear systems with saturated
outputs, IEEE Trans. Autom. Control 55 (9) (2010) 2160–2164.
[21] F. Yang, Y. Li, Set-membership filtering for systems with sensor saturation, Automatica 45 (8) (2009) 1896–1902.
[22] X. Bu, Q. Wang, Z. Hou, W. Qian, Data driven control for a class of nonlinear systems with output saturation,
ISA Trans. 81 (2018) 1–7.
[23] H. Gao, T. Chen, Network-based H∞ output tracking control, IEEE Trans. Autom. Control 53 (3) (2008)
655–667.
[24] R. Sakthivel, S. Santra, B. Kaviarasan, Resilient sampled-data control design for singular networked systems
with random missing data, J. Frankl. Inst. 355 (3) (2018) 1040–1072.
[25] Z. Wang, F. Yang, D.W. Ho, X. Liu, Robust H∞ control for networked systems with random packet losses,
IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 37 (4) (2007) 916–924.
[26] N. Elia, J.N. Eisenbeis, Limitations of linear control over packet drop networks, IEEE Trans. Autom. Control
56 (4) (2011) 826–841.
[27] D. Wang, J. Wang, W. Wang, H∞ controller design of networked control systems with markov packet dropouts,
IEEE Trans. Syst. Man Cybern. Syst. 43 (3) (2013) 689–697.
[28] Q. Ling, M.D. Lemmon, Power spectral analysis of networked control systems with data dropouts, IEEE Trans.
Autom. Control 49 (6) (2004) 955–959.
[29] Y. Ye, H. Su, Y. Sun, Event-triggered consensus tracking for fractional-order multi-agent systems with general
linear models, Neurocomputing 315 (13) (2018) 292–298.
[30] M. Kishida, Event-triggered control with self-triggered sampling for discrete-time uncertain systems, IEEE Trans.
Automat. Control (2018), doi:10.1109/TAC.2018.2845693.
[31] J. Liu, Q. Liu, J. Cao, Y. Zhang, Adaptive event-triggered h∞ filtering for t-s fuzzy system with time delay,
Neurocomputing 189 (12) (2016) 86–94.
[32] Y. Wang, W.X. Zheng, H. Zhang, Dynamic event-based control of nonlinear stochastic systems, IEEE Trans.
Autom. Control 62 (12) (2017) 6544–6551.
[33] R. Sakthivel, S. Santra, B. Kaviarasan, K. Venkatanareshbabu, Dissipative analysis for network-based singu-
lar systems with non-fragile controller and event-triggered sampling scheme, J. Frankl. Inst. 354 (12) (2017)
4739–4761.
[34] Z. Wu, Y. Xu, R. Lu, Y. Wu, T. Huang, Event-triggered control for consensus of multiagent systems with
fixed/switching topologies, IEEE Trans. Syst. Man Cybern. Syst. 48 (10) (2018) 1736–1746.
[35] W. Xiong, X. Yu, R. Patel, W. Yu, Iterative learning control for discrete-time systems with event-triggered
transmission strategy and quantization, Automatica 72 (2016) 84–91.
[36] X. Bu, Z. Hou, Q. Yu, Y. Yang, Quantized data driven iterative learning control for a class of nonlinear systems
with sensor saturation, IEEE Trans. Syst. Man Cybern. Syst. (2018), doi:10.1109/TSMC.2018.2866909.
[37] L. Xing, C. Wen, Z. Liu, H. Su, J. Cai, Event-triggered adaptive control for a class of uncertain nonlinear
systems, IEEE Trans. Autom. Control 62 (2017) 2071–2076.
