International Journal of Control

Sparse feedback stabilisation of linear delay systems by proximal gradient method
Anping Tang, Guang-Da Hu & Yuhao Cong

To cite this article: Anping Tang, Guang-Da Hu & Yuhao Cong (2023) Sparse feedback
stabilisation of linear delay systems by proximal gradient method, International Journal of
Control, 96:10, 2576-2586, DOI: 10.1080/00207179.2022.2106892

To link to this article: https://doi.org/10.1080/00207179.2022.2106892

Published online: 08 Aug 2022.


Sparse feedback stabilisation of linear delay systems by proximal gradient method


Anping Tang^a, Guang-Da Hu^a and Yuhao Cong^b

^a Department of Mathematics, Shanghai University, Shanghai, People's Republic of China; ^b Shanghai Customs College, Shanghai, People's Republic of China

ABSTRACT
In this paper, the sparse feedback stabilisation in which the gain matrix has as many zero components as possible in linear delay systems is investigated. The necessary and sufficient condition for the asymptotic stability of the delay systems under sparse state feedback is given. By means of a special matrix norm and the transition matrix (fundamental matrix) of delay systems, the sparse gain matrix design problem is transformed into an optimisation problem. We further derive the proximal mapping of the special matrix norm, and then, based on the gradient descent of the smooth part of the objective function, the proximal gradient method is introduced to develop an algorithm for solving the non-smooth optimisation problem. Numerical examples are given to illustrate the effectiveness of the proposed method.

ARTICLE HISTORY
Received 31 August 2021; Accepted 22 July 2022

KEYWORDS
Linear delay systems; sparse feedback; stabilisation; proximal gradient method; state transition matrix

1. Introduction

We are concerned with linear systems with multiple delays described by

ẋ(t) = A_0 x(t) + ∑_{j=1}^{m} A_j x(t − τ_j) + Bu(t),   (1)

where A_0, A_j ∈ R^{d×d} and B ∈ R^{d×p} are constant matrices, x(t) ∈ R^d is the state vector, u(t) ∈ R^p is the control vector, τ_j > 0 are scalars for j = 1, 2, . . . , m, and τ_m > τ_{m−1} > · · · > τ_1. Delay differential dynamic systems have many applications in power systems, engineering structures, rocket motion, economics and many other areas of science and technology, the number of which is steadily expanding; see, e.g., Bereketoglu and Huseynov (2010), Deng et al. (2019) and Liu et al. (2018). In particular, Kyrychko and Hogan (2010) review applications of delay systems to different fields of engineering science.

The stability of systems (1) has been investigated in Kim (2015). Based on a bounded region, a necessary and sufficient condition for asymptotic stability of linear delay systems is derived by the argument principle (Hu, 2020).

The stabilisation problem for linear delay systems is a challenging research topic which has received extensive attention in Uchida et al. (1988) and Chen et al. (2021, 2022). By an augmented Lyapunov–Krasovskii functional and linear matrix inequality (LMI)-based conditions, a family of local state feedback schemes are designed to guarantee that the closed-loop subsystem enjoys delay-dependent asymptotic stability (Fridman, 2014; Mahmoud, 2010), but the disadvantage is that the results can be conservative. Recently, a necessary and sufficient condition for feedback stabilisation of linear delay control systems based on the state transition matrix of the closed-loop system was proposed in Hu and Hu (2020).

The idea of sparsity is to minimise the number of nonzero components of a vector from the outset, and it has been widely applied in many fields, such as regression shrinkage and selection via the lasso based on the l_1 norm of vectors (Tibshirani, 1996). In Donoho (2006), sparse solutions of large underdetermined systems of linear equations are found by convex optimisation. A class of relaxed row-l_0 quasi-norms to solve linear inverse problems with multiple measurement vectors is proposed in Cotter et al. (2005). Tropp (2006) presented theoretical and numerical results for a greedy pursuit algorithm based on the l_{1,∞} norm. In recent years, the group lasso has been proposed and applied to logistic regression (Meier et al., 2008; Simon & Tibshirani, 2012). In the control community, sparse feedback control for linear systems based on LMIs is proposed in Polyak et al. (2014). The ADMM method is used to design sparse optimal feedback gains with an l_1 penalty in linear systems (Lin et al., 2012; Shimizu, 2017). In Polyak and Tremba (2020), a method for finding sparse solutions of boundary-value nonlinear dynamic problems is proposed. A recent paper (Yagoubi & Chaibi, 2020) also exploits a non-smooth Newton approach for the design of state feedback stabilisers of linear systems under structure constraints.

In this paper, the sparse state feedback stabilisation of linear delay systems is investigated. We focus on the number of nonzero column (row) components of the gain matrix rather than of a vector, which is called column (row) sparse control. The design of sparse control for delay differential systems has two motivations. On the one hand, reducing the number of states used in the controller is synonymous with reducing the number of sensors or measurement devices. On the other hand, a more suitable feedback controller structure can be found when there are many delay terms or states. One approach is to pre-select a controller with a sparse structure, but this increases the complexity of the algorithm. To overcome this problem, we

CONTACT Guang-Da Hu ghu@hit.edu.cn Department of Mathematics, Shanghai University, Shanghai 200444, People’s Republic of China

© 2022 Informa UK Limited, trading as Taylor & Francis Group



propose a new strategy that designs the sparse feedback gain by optimisation methods, avoiding structural combinations.

Based on the introduction of a special matrix norm to ensure the sparsity of the gain matrix, combined with the state transition matrix (fundamental matrix) of delay systems, an optimisation problem is proposed to design the sparse feedback gain matrix that stabilises the system. The gradient formula of the objective function and the proximal mapping of the special matrix norm are derived. We develop a proximal gradient algorithm that consists of gradient descent and proximal mapping steps to solve the non-smooth optimisation problem. To the authors' knowledge, no results have been reported on the sparse feedback stabilisation of linear delay systems in the literature.

The main contributions of this work are summarised as follows:

(1) We formulate the sparse feedback stabilisation controller design of linear delay systems as an optimisation problem.
(2) An efficient proximal gradient algorithm is developed to compute the sparse gain matrix of the stabilising controller.

This paper is organised as follows. In Section 2, the sparse feedback of linear delay systems and the special matrix norms are provided. In Section 3, we propose an optimisation problem to design the sparse feedback gain matrix of the systems. In Section 4, numerical examples are given to illustrate the effectiveness of the proposed method. In Section 5, some conclusions are summarised.

Throughout the paper, ‖A‖_F denotes the Frobenius norm of a matrix A, ‖A‖_F = (∑_{i,j} a_{i,j}²)^{1/2}, induced by the Frobenius inner product ⟨A, B⟩ = Tr(AᵀB). The l_p norm of an n-dimensional vector x is defined by ‖x‖_p = (∑_{i=1}^{n} |x_i|^p)^{1/p}. The matrices in R^{m×n} with all elements equal to 0 or all equal to 1 are denoted by 0_{m×n} and 1_{m×n}, respectively.

2. Sparse feedback control

In this section, we introduce sparse feedback control of linear delay systems and define special matrix norms for row or column sparsity of the gain matrix.

2.1 Sparse gain matrix of closed-loop system

Let the state feedback controller structure of system (1) be

u(t) = K_0 x(t) + ∑_{j=1}^{m} K_j x(t − τ_j),   (2)

where K_0, K_j ∈ R^{p×d}, j = 1, 2, . . . , m. From (1) and (2), we have the closed-loop system

ẋ(t) = (A_0 + BK_0) x(t) + ∑_{j=1}^{m} (A_j + BK_j) x(t − τ_j).   (3)

The parameters of the feedback gain matrix K can be set as

K = [K_0, K_1, . . . , K_m] = [k_{1,1} k_{1,2} · · · k_{1,d(m+1)}; k_{2,1} k_{2,2} · · · k_{2,d(m+1)}; . . . ; k_{p,1} k_{p,2} · · · k_{p,d(m+1)}].   (4)

The gain matrix K becomes high-dimensional as the number of inputs, the number of states and the number of delay terms increase. To avoid this problem, our aim is to seek a row or column sparse feedback gain matrix such that the closed-loop system (3) is asymptotically stable.

Define

X(t) = [x(t)ᵀ, x(t − τ_1)ᵀ, . . . , x(t − τ_m)ᵀ]ᵀ.   (5)

Designing the state feedback u(t) = K X(t) from an incomplete state vector is equivalent to designing a column sparse stabilising controller, i.e. a gain matrix K having zero columns. For example, if columns 3–6 of the gain matrix are 0, the state feedback controller requires only non-delayed states. Reducing the number of controls is equivalent to finding a row sparse stabilising controller, i.e. a gain matrix K having zero rows.

2.2 Special matrix norms

In order to sparsify the rows and columns of the gain matrix, two special norms are introduced below. We first review well-known norms from the literature.

Definition 2.1: Let X be an m × n matrix, X = [q_1 q_2 · · · q_n], where q_i ∈ R^m for i ∈ {1, 2, . . . , n} is the ith column of X, and X = [p_1ᵀ p_2ᵀ · · · p_mᵀ]ᵀ, where p_j ∈ R^n for j ∈ {1, 2, . . . , m} is the jth row of X. Define

‖X‖_{c1} = ∑_{i=1}^{n} ‖q_i‖_∞,   (6)

‖X‖_{r1} = ∑_{j=1}^{m} ‖p_j‖_∞.   (7)

These matrix norms are referred to as the l_{1,∞} norm, which is widely used to obtain row sparse or column sparse solutions
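As a concrete illustration (our own sketch, not from the paper; the function names `norm_c1` and `norm_r1` are hypothetical), the norms (6)–(7) can be evaluated with NumPy:

```python
import numpy as np

def norm_c1(X):
    # ||X||_c1, eq. (6): sum of the l_inf norms of the columns of X
    return np.abs(X).max(axis=0).sum()

def norm_r1(X):
    # ||X||_r1, eq. (7): sum of the l_inf norms of the rows of X
    return np.abs(X).max(axis=1).sum()
```

For a square X the two norms generally differ, since the maxima are taken over columns in one case and over rows in the other.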

of matrices (Quattoni et al., 2009). According to the following lemma, minimising the ‖X‖_{r1} or ‖X‖_{c1} matrix norm may lead to a reduction of the nonzero rows or columns of the matrix, that is, to an increase in the sparsity of the matrix.

Lemma 2.2 (Polyak et al., 2014): If the problem

min ‖X‖_{r1} subject to AX = B,   (8)

where A ∈ R^{s×m}, s < m, B ∈ R^{s×n} and X ∈ R^{m×n}, is feasible, then there exists a solution with no more than s · n nonzero rows.

The problem (8) can be transformed into a linear programming problem (Chen et al., 2001). The constraint set of optimal solutions is a polyhedron, which is represented as the convex hull of its vertices. A sparse solution can only be obtained as a combination of these vertices, and the limited number of vertices leads to a reduction in flexibility within each group (each row or column), that is, a reduction of smoothness.

We define the following norms to increase the smoothness of each row and column of the matrix.

Definition 2.3: Let X be an m × n matrix. Define the weighted norms

‖X‖_{col−1} = ∑_{i=1}^{n} β_i ‖q_i‖_2,   (9)

‖X‖_{row−1} = ∑_{j=1}^{m} α_j ‖p_j‖_2,   (10)

where β_i and α_j are positive weights. Larger weights correspond to more expensive components of the column or row vectors.

In passing from the ‖X‖_{c1} = ∑_{i=1}^{n} ‖q_i‖_∞ norm to the ‖X‖_{col−1} = ∑_{i=1}^{n} β_i ‖q_i‖_2 norm, the l_∞ norm of each column is replaced by the l_2 norm, which increases the smoothness of the column norm; moreover, the proximal mapping of the l_2 norm is easier to obtain than that of the l_∞ norm. The sum of the column norms in ‖X‖_{col−1} is an l_1 norm, the same as in the ‖X‖_{c1} norm of Lemma 2.2. This outer l_1 penalty in ‖X‖_{col−1} guarantees the sparsity of the matrix when the norm is minimised in the optimisation problem. Besides that, the matrix norms of Definitions 2.1 and 2.3 are convex.

3. Proximal gradient method for sparse feedback stabilisation

3.1 Sparse feedback stabilisation problem

We review the concept of the state transition matrix (fundamental matrix) of linear delay systems (see, for example, Hale & Verduyn Lunel, 1993; Kolmanovskii & Myshkis, 1999). The state transition matrix of the closed-loop system (3) is denoted by F[K, t] ∈ R^{d×d}, which is the solution of the matrix delay differential equation

Ḟ[K, t] = (A_0 + BK_0) F[K, t] + ∑_{j=1}^{m} (A_j + BK_j) F[K, t − τ_j] for t > 0,   (11)

under the condition

F[K, t] = 0 for t < 0 and F[K, 0] = I.   (12)

Define the right-hand side of the differential equation (11) as

f(K, F̄[K, t]) = (A_0 + BK_0) F[K, t] + ∑_{j=1}^{m} (A_j + BK_j) F[K, t − τ_j],   (13)

where F̄[K, t] = [F[K, t]ᵀ, F[K, t − τ_1]ᵀ, . . . , F[K, t − τ_m]ᵀ]ᵀ collects the current and delayed values of F. Then Equation (11) can be rewritten as

Ḟ[K, t] = f(K, F̄[K, t]) for t > 0.   (14)

A stability criterion via the state transition matrix is as follows.

Lemma 3.1 (Hu & Hu, 2020): The closed-loop system (3) is asymptotically stable if and only if there exist feedback gain matrices K_0 and K_j for j = 1, . . . , m, i.e. K, such that

lim_{T→∞} ∫_0^T ‖F[K, σ]‖_F² dσ + λ‖K‖_F²   (15)

exists and is finite, where λ is a positive coefficient and F[K, σ] is the state transition matrix of the closed-loop system (11).

The stability is still guaranteed when the Frobenius norm is replaced with other matrix norms, including the ‖·‖_{col−1} norm and the ‖·‖_{row−1} norm. On this basis, the stability condition for the sparse feedback gain matrix of linear delay systems is given.

Theorem 3.2: The closed-loop system (3) is asymptotically stable if and only if there exist feedback gain matrices K_0 and K_j for j = 1, . . . , m, i.e. K, such that

lim_{T→∞} ∫_0^T ‖F[K, σ]‖_F² dσ + λ‖K‖_{col−1}   (16)

exists and is finite, where λ is a positive coefficient and F[K, σ] is the state transition matrix of the closed-loop system (11).

Proof: The proof can be obtained immediately from Lemma 3.1. □
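To see how F[K, t] can be computed in practice, the following forward-Euler sketch (our own code; all function and argument names are illustrative, and Euler is only one possible solver) integrates (11) under the condition (12):

```python
import numpy as np

def transition_matrix(A0, A_delays, taus, B, K_blocks, T, h):
    # Forward-Euler approximation of the state transition matrix F[K, t]
    # of the closed-loop system: F[K, t] = 0 for t < 0, F[K, 0] = I.
    # K_blocks = [K0, K1, ..., Km] are the feedback gain blocks.
    d = A0.shape[0]
    n = int(round(T / h))
    Acl0 = A0 + B @ K_blocks[0]                        # A_0 + B K_0
    Acl = [Aj + B @ Kj for Aj, Kj in zip(A_delays, K_blocks[1:])]
    lags = [int(round(tau / h)) for tau in taus]
    F = [np.eye(d)]                                    # F[K, 0] = I
    for k in range(n):
        rhs = Acl0 @ F[k]
        for Aj, lag in zip(Acl, lags):
            if k - lag >= 0:                           # zero prehistory for t < 0
                rhs = rhs + Aj @ F[k - lag]
        F.append(F[k] + h * rhs)
    return F                                           # F[k] approximates F[K, k*h]
```

A quick sanity check: with all delayed coefficient matrices zero, F reduces to the matrix exponential of A_0 + BK_0.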

Theorem 3.2 still holds when the ‖K‖_{col−1} norm in (16) is replaced by the ‖K‖_{row−1} norm. According to the above argument and Theorem 3.2, we formulate an optimisation problem for column sparse feedback stabilisation.

We define the performance index

J(K) = H(K) + G(K),   (17)

where

H(K) = ∫_0^T ‖F[K, σ]‖_F² dσ,   (18)

G(K) = λ‖K‖_{col−1}, λ > 0.   (19)

This performance index consists of two parts. In order to satisfy the performance requirements of the closed-loop system, the smooth part H(K), corresponding to the state energy, should become small over some finite time T (Hu & Hu, 2020). On the other hand, G(K) is a convex non-smooth matrix function, which mainly ensures the sparsity of the feedback controller and limits the energy of the control inputs.

We seek a sparse K such that

min_K J(K) s.t. Equations (11) and (12).   (20)

Problem (20) is an optimisation problem with an equality constraint. For a given matrix K, the solution of the system (11) is unique and denoted by F[K, t]. Therefore, the equality constraint in Problem (20) can be taken implicitly into the objective function J(K) in the solution process, and the transformed objective function is denoted by J(K, F[K, t]). The sparse feedback stabilisation problem is reformulated as a non-smooth unconstrained optimisation problem:

min_K J(K, F[K, t]).   (21)

Remark 3.1: The time T in H(K) is selected according to the practical problem and generally does not need to be too large. The regularisation coefficient λ in G(K) balances the state energy, the sparsity of the gain matrix, and the input energy. A larger λ produces a sparser K in general; a smaller λ reduces the state energy but yields a less sparse gain matrix.

3.2 Proximal gradient method for solving the problem

The proximal gradient method is an appealing approach for solving these types of non-smooth optimisation problems because of its fast theoretical convergence rate and strong practical performance (Schmidt et al., 2011). The method is the same as the iterative shrinkage-thresholding algorithm in the literature (Beck, 2017).

Recall that Problem (21) is a non-smooth optimisation problem composed of the smooth non-convex function H(K) and the non-smooth convex matrix norm function G(K). The proximal gradient method for solving the sparse feedback stabilisation problem consists of a gradient descent step on the smooth part H(K) followed by the proximal mapping of the non-smooth part G(K). We require the gradients of H(K) and of the state transition matrix. The gradient of the state with respect to the matrix K is given in the theorem stated below.

In order to solve the optimisation problem conveniently, K can be expressed as follows:

K = ∑_{j=1}^{(m+1)d} ∑_{i=1}^{p} k_{i,j} e_{i,j},   (22)

where e_{i,j} represents the p × d(m+1) matrix in which the jth element in the ith row is 1 and all other elements are 0.

Theorem 3.3: For each i = 1, . . . , p; j = 1, . . . , (m + 1)d,

∂F[K, t]/∂k_{i,j} = Φ_{i,j}(K, t).   (23)

Here, Φ_{i,j}(K, t) is the solution of the following auxiliary dynamic system

Φ̇_{i,j}(t) = (A_0 + BK_0) Φ_{i,j}(t) + ∑_{l=1}^{m} (A_l + BK_l) Φ_{i,j}(t − τ_l) + B e_{i,j} F̄[K, t],   (24)

with F̄[K, t] = [F[K, t]ᵀ, F[K, t − τ_1]ᵀ, . . . , F[K, t − τ_m]ᵀ]ᵀ the stacked current and delayed state transition matrix, and

Φ_{i,j}(K, t) = 0 for t ≤ 0.

Proof: The proof of the theorem is given in Appendix A. □

According to Theorem 3.3, the state is differentiable with respect to the variable k_{i,j}, and the partial derivatives of the state with respect to k_{i,j} satisfy the auxiliary dynamic system (24). Furthermore, the gradient formula of the smooth part H(K) is derived from the result of Theorem 3.3.

Theorem 3.4: The gradient of H(K) for each i = 1, . . . , p, j = 1, . . . , (m + 1)d with respect to k_{i,j} is given by

∂H(K)/∂k_{i,j} = ∫_0^T 2 Tr(F[K, t]ᵀ Φ_{i,j}(t)) dt,   (25)

where Φ_{i,j}(t) is the solution of the delay system (24).

Proof: The proof follows from applying the chain rule to (18) and Theorem 3.3. □

Equations (24) and (25) give the gradient of the smooth part H(K) of the objective function in Problem (21). The proximal mapping of the matrix norm function (19) is calculated by the following lemmas.

Definition 3.5: Given a function f : R^m → (−∞, ∞], the proximal mapping of f(x) is the operator given by

prox_f(x) = arg min_{u ∈ R^m} { f(u) + ½‖u − x‖_2² }.
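The scalar case (d = p = m = 1) of Theorems 3.3 and 3.4 can be checked numerically. The sketch below is our own code: the constants `a0, a1, b, tau` and the function name are illustrative, F and the auxiliary variables Φ for the two gain entries are propagated by the Euler method, and H(K) in (18) is approximated by a Riemann sum. Because the Euler recursion is differentiated exactly, the returned gradients agree with finite-difference quotients of H to high accuracy.

```python
import numpy as np

a0, a1, b, tau, T, h = -0.3, -0.5, 1.0, 1.0, 5.0, 1e-3

def H_and_grad(k0, k1):
    # Scalar sketch of (18), (24), (25): returns H(K) and dH/dk0, dH/dk1.
    n, lag = int(round(T / h)), int(round(tau / h))
    c0, c1 = a0 + b * k0, a1 + b * k1          # closed-loop coefficients
    F = np.zeros(n + 1); F[0] = 1.0            # F[K, 0] = 1, zero prehistory
    P0 = np.zeros(n + 1)                       # Phi for the entry k0
    P1 = np.zeros(n + 1)                       # Phi for the entry k1
    for k in range(n):
        Fd = F[k - lag] if k >= lag else 0.0
        P0d = P0[k - lag] if k >= lag else 0.0
        P1d = P1[k - lag] if k >= lag else 0.0
        F[k + 1] = F[k] + h * (c0 * F[k] + c1 * Fd)
        P0[k + 1] = P0[k] + h * (c0 * P0[k] + c1 * P0d + b * F[k])   # eq. (24)
        P1[k + 1] = P1[k] + h * (c0 * P1[k] + c1 * P1d + b * Fd)
    Hval = h * np.sum(F[:-1] ** 2)             # Riemann sum for (18)
    g0 = 2 * h * np.sum(F[:-1] * P0[:-1])      # eq. (25)
    g1 = 2 * h * np.sum(F[:-1] * P1[:-1])
    return Hval, g0, g1
```

This kind of gradient check is a standard way to validate a sensitivity computation before embedding it in an optimisation loop.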

Lemma 3.6 (Beck, 2017): Let h : R^m → R be given by h(q) = g(‖q‖_2), where g : R → (−∞, ∞] is a proper closed and convex function satisfying dom(g) ⊆ [0, ∞). Then

prox_h(q) = { prox_g(‖q‖_2) q/‖q‖_2, q ≠ 0; {u ∈ R^m : ‖u‖_2 = prox_g(0)}, q = 0, }   (26)

where prox_h and prox_g are the proximal mappings of h and g, respectively.

Lemma 3.7: Let λ > 0 and g : R → [0, ∞],

g(t) = { λt, t ≥ 0; ∞, t < 0. }

Then the proximal mapping of g is

prox_g(t) = [t − λ]_+,

where [a]_+ is the larger value between a and 0, defined by [a]_+ = max{0, a}.

Proof: By Definition 3.5,

prox_g(t) = arg min_u { λu + ½(u − t)², u ≥ 0; ∞, u < 0. }

Note that (λu + (1/2)(u − t)²)′ = λ + u − t = 0 gives u = t − λ. If t − λ ≥ 0, then prox_g(t) = t − λ. Otherwise, if t − λ < 0, prox_g(t) = 0. □

Theorem 3.8: Let G : R^{m×n} → R be given by G(X) = λ‖X‖_{col−1}. For any X ∈ R^{m×n},

[prox_G(X)]_i = { [‖q_i‖_2 − λβ_i]_+ q_i/‖q_i‖_2, q_i ≠ 0; 0, q_i = 0, }   (27)

where [prox_G(X)]_i represents the ith column of the matrix prox_G(X).

Proof: The proximal mapping of G(X) = λ‖X‖_{col−1} is considered below, which is equivalent to solving the optimisation problem

min_Z L(Z) = λ‖Z‖_{col−1} + ½‖Z − X‖_F².

This problem can be solved column by column:

L(Z) = ∑_{i=1}^{n} ( λβ_i ‖z_i‖_2 + ½‖z_i − q_i‖_2² ),   (28)

where z_i is the ith column of Z. Using Lemma 3.6 and Lemma 3.7, we get

z_i = { [‖q_i‖_2 − λβ_i]_+ q_i/‖q_i‖_2, q_i ≠ 0; 0, q_i = 0. }   (29)

□

Algorithm 1, which checks the stability of the closed-loop system (3), is given in Hu (2020) and is restated in Appendix 2. Based on the gradient of the smooth part H(K) of the objective function and the proximal mapping of the special norm G(K), the proximal gradient algorithm is developed to obtain the sparse feedback gain matrix of delay systems. The algorithm mainly consists of two steps: (1) gradient descent on H(K) by solving Equations (24) and (25); (2) the proximal mapping of the non-smooth matrix norm function by (27).

Algorithm 2 Proximal gradient method for sparse feedback stabilisation
1: Given a starting point K^0, convergence tolerance ε > 0, a time constant T, regularisation coefficient λ > 0 and j = 0.
2: repeat
3: (1) Gradient descent on H(K^j): compute ∇H(K^j) by solving Equations (24) and (25), then set
      Y^j = K^j − t_j ∇H(K^j).
   (2) Proximal mapping of G(Y^j):
      K^{j+1} = prox_{t_j G}(Y^j),
   and proceed to the next iterate K^{j+1}.
4: until the stopping criterion ‖K^{j+1} − K^j‖_F² < ε is satisfied.
5: For K^j, check whether the closed-loop system (3) is asymptotically stable using Algorithm 1. If the system is asymptotically stable, the algorithm stops; otherwise, reinitialise K^0 or increase T and restart the algorithm.

In the algorithm, the step size t_j in step 3 is determined by a backtracking procedure satisfying the inequality

H(K^{j+1}) ≤ H(K^j) + Tr(∇H(K^j)ᵀ (K^{j+1} − K^j)) + (1/(2t_j)) ‖K^{j+1} − K^j‖_F².   (30)

The procedure requires a parameter η < 1; whenever the inequality (30) is not satisfied, we set t_j := η t_j.

Remark 3.2: The special norm of the smooth function H(K) and the non-smooth part G(K) of the objective function reflect the stability of the system and the sparsity of the gain matrix, respectively.

Remark 3.3: The computational cost of Algorithm 2 lies mainly in solving the matrix differential equations (11) and (24). Since a delay differential equation has no analytical solution in general, it needs to be solved by numerical methods, such as the Euler method or Runge–Kutta methods.

Remark 3.4: The sparsity of K can improve computational efficiency. For instance, if K is a column sparse matrix, the corresponding columns of the matrix BK in (24) are also equal to 0. In addition, the zero columns of K reduce the computational complexity of the proximal mapping: if q_i = 0, then prox(q_i) = 0.
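The proximal step (27) of Algorithm 2 is a column-wise soft-thresholding. A minimal sketch (our own code and names; `t` is the step size t_j, so the operator applied is prox of t·λ·‖·‖_{col−1}):

```python
import numpy as np

def prox_col1(X, lam, beta, t):
    # Column-wise soft-thresholding, eq. (27): the i-th column is shrunk
    # toward zero and set exactly to zero when its l2 norm is below t*lam*beta_i.
    Z = np.zeros_like(X)
    for i in range(X.shape[1]):
        nrm = np.linalg.norm(X[:, i])
        thr = t * lam * beta[i]
        if nrm > thr:
            Z[:, i] = (1.0 - thr / nrm) * X[:, i]
    return Z
```

Columns whose norm falls below the threshold are zeroed exactly, which is what produces the zero columns of the sparse gain matrix.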

4. Numerical examples

In this section, three control examples are given to demonstrate the effectiveness of the algorithm.

Example 4.1: A linearised version of the feed system and combustion chamber equations is described in Chapter 1.1.3 of Kim (2015) and is given by

ẋ(t) = A_0 x(t) + A_1 x(t − τ_1) + Bu(t),   (31)

where τ_1 = 1,

A_0 = [0.2 0 0 0; 0 0 0 −1; −1 0 −1 1; 0 1 −1 0],
A_1 = [−0.8 0 1 0; 0 0 0 0; 0 0 0 0; 0 0 0 0],
B = [0, 1, 0, 0]ᵀ.

By Algorithm 1, the uncontrolled system has two unstable characteristic roots and is unstable. Let the controller structure be

u(t) = K_0 x(t) + K_1 x(t − τ_1).

We obtain the column sparse gain matrix by Algorithm 2.

(1) Let the initial value of K be K^0 = 0_{1×8}, T = 10, and the weight coefficient of each column in K be 1.

For λ = 2,

K* = [K_0*, K_1*] = [0.932 −1.157 0 −2.181 0 0 0 0].

Algorithm 1 verifies that this column sparse gain matrix makes the closed-loop system asymptotically stable, and its solution tends to 0, as shown in Figure 1.

For λ = 5,

K* = [K_0*, K_1*] = [0 −0.671 0 −0.771 0 0 0 0].

The closed-loop system is also asymptotically stable, see Figure 2.

(2) Let the initial value of K be K^0 = 1_{1×8} and T = 10, with the weight coefficient of each column equal to 1.

For λ = 2,

K* = [K_0*, K_1*] = [0.739 −1.065 0 −1.852 0 0 0 0].

For λ = 5,

K* = [K_0*, K_1*] = [0 −0.671 0 −0.771 0 0 0 0].

Algorithm 1 verifies the stability of the closed-loop system. The coefficient λ balances the state energy and the sparsity of the gain matrix. In general, as λ increases, the sparsity of the feedback controller in this example also improves.

Example 4.2: Consider the system

ẋ(t) = A_0 x(t) + A_1 x(t − τ_1) + A_2 x(t − τ_2) + Bu(t),   (32)

where

A_0 = [0.1 −0.5 −0.3 0.3; 0 0 0.2 −0.2; 0 0 −0.1 0; 0 0.4 0.3 −0.4],
A_1 = [−0.2 0.3 0.2 −0.5; 0 −0.2 −0.2 0.2; −0.1 0.1 0.1 0.1; 0.1 −0.6 −0.6 0],

Figure 1. The solution of the closed-loop system in Example 4.1.
Figure 2. The solution of the closed-loop system in Example 4.1.
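Example 4.1 can be reproduced numerically. The following hedged sketch (our own code; the constant initial history x(t) = [1, 1, 1, 1]ᵀ for t ∈ [−τ_1, 0] and the Euler step size are our choices, not the paper's) simulates the closed-loop system (31) with the reported λ = 2 gain, for which K_1* = 0:

```python
import numpy as np

A0 = np.array([[0.2, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, -1.0],
               [-1.0, 0.0, -1.0, 1.0],
               [0.0, 1.0, -1.0, 0.0]])
A1 = np.array([[-0.8, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [0.0]])
K0 = np.array([[0.932, -1.157, 0.0, -2.181]])   # reported K0* for lambda = 2
tau, h, T = 1.0, 0.005, 30.0
n, lag = int(round(T / h)), int(round(tau / h))
Acl = A0 + B @ K0                               # closed-loop non-delayed part
X = np.ones((n + lag + 1, 4))                   # rows 0..lag hold the history on [-tau, 0]
for k in range(lag, n + lag):
    X[k + 1] = X[k] + h * (Acl @ X[k] + A1 @ X[k - lag])
```

According to the paper (Figure 1), the solution tends to 0; here we only check that the trajectory remains bounded rather than asserting a particular decay rate.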

A_2 = [0.1 −0.1 0 0.2; 0 −0.2 0.1 0; 0 0.1 0 0.3; 0 −0.3 0.1 0],
B = [0.2 0; 0 0.1; −0.3 0; 0.1 0],
τ_1 = 2, τ_2 = 3.

By Algorithm 1, the uncontrolled system has three unstable characteristic roots and is unstable. The controller structure is

u(t) = K_0 x(t) + K_1 x(t − τ_1) + K_2 x(t − τ_2).

Let the initial value of K be K^0 = 1_{2×12}, T = 10 and λ = 1. The column sparse state feedback is obtained by Algorithm 2. In G(K), the weight coefficient of the ith column of K_j is denoted by β_i^{K_j}.

(1) The weight coefficients β_i^{K_j} = 1 for i = 1, 2, 3, 4, j = 0, 1, 2:

K* = [−1.337 0.844 0.855 0 0 0 0 0 0 0 0 0; 0.380 −0.833 −0.074 0 0 0 0 0 0 0 0 0].

(2) The weight coefficients for K_0 are β_i^{K_0} = 10⁴ for i = 1, 2, 3, 4, and the rest are 1:

K* = [0 0 0 0 −0.765 0.957 0.961 0.649 0 0 0 0.075; 0 0 0 0 0.189 −0.355 −0.184 −0.204 0 0 0 −0.018].

(3) The weight coefficients for K_0 and K_2 are β_i^{K_0} = β_i^{K_2} = 10⁴ for i = 1, 2, 3, 4, and the rest are 1:

K* = [0 0 0 0 −0.738 1.018 0.898 0.715 0 0 0 0; 0 0 0 0 0.186 −0.323 −0.155 −0.231 0 0 0 0].

It is verified by Algorithm 1 that the closed-loop systems under sparse gain feedback control in the above cases are asymptotically stable. In case (1), only the non-delayed states are used. In cases (2) and (3), we can see that increasing the weights on the non-delayed states helps to zero out the corresponding columns of the gain matrix. In applications of linear delay systems, due to communication delays, only the delayed states may be available to stabilise the system.

Example 4.3: Consider the system

ẋ(t) = A_0 x(t) + A_1 x(t − τ_1) + A_2 x(t − τ_2) + Bu(t),   (33)

where

A_0 = [0.9 0.5 0 −0.1 0; 0 0 −0.6 0 0.5; −0.1 0 0 0.8 0; −0.3 0 0 −0.5 0.5; −0.8 0 0.6 1 0],
A_1 = [1.1 −0.5 0.8 −0.1 0.3; 1.1 1 0.1 0.7 −0.7; −0.9 0 0 0 −0.4; 1.1 −0.7 −0.2 0 0.3; 0 0 −0.2 0 0],
A_2 = [−0.6 1.2 0.2 1 −0.4; 0 −0.8 0 1 −0.3; 1.1 0.4 −0.4 −0.9 −0.4; 0 0 0.1 −0.6 −0.6; −0.1 −0.2 0 −0.1 0],
B = [1 0 0; 0 1 0; 0 0 1; 0 0 0; 0 0 0],
τ_1 = 1, τ_2 = 2.

By Algorithm 1, the uncontrolled system has five unstable characteristic roots and is unstable. The controller structure is

u(t) = K_0 x(t) + K_1 x(t − τ_1) + K_2 x(t − τ_2).

Let the initial value of K be K^0 = 1_{3×15}, T = 10 and λ = 3. We obtain the column sparse gain matrix by Algorithm 2.

(1) The weight coefficients β_i^{K_j} = 1 for i = 1, 2, 3, 4, 5, j = 0, 1, 2:

K_0* = [−3.016 1.582 0.317 0 2.120; −1.128 −4.527 0.733 0 2.271; −0.711 −0.514 −3.211 0 −1.150],
K_1* = 0_{3×5}, K_2* = 0_{3×5}.

(2) The weight coefficients for columns 3–5 of K_j are β_3^{K_j} = β_4^{K_j} = β_5^{K_j} = 10⁴ for j = 0, 1, 2, and the rest are 1:

K_0* = [−1.271 6.682 0 0 0; −1.437 −6.430 0 0 0; −1.413 5.132 0 0 0],
K_1* = [−3.716 −0.036 0 0 0; 0.444 −0.019 0 0 0; 1.846 0.032 0 0 0],
K_2* = [1.324 −1.756 0 0 0; 0.473 −1.124 0 0 0; −0.474 1.159 0 0 0].

Algorithm 1 verifies that the sparse gain matrices obtained under the different weight coefficients of (1) and (2) stabilise the original system. In particular, in case (2), only the states x_1, x_2 and x_3 are required to stabilise the system.
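A small helper (our own, not the paper's) makes the structural reading of such sparse gains explicit: it lists which state components (possibly delayed) a gain actually uses, i.e. which columns of some block K_j are nonzero:

```python
import numpy as np

def needed_states(K_blocks, tol=1e-12):
    # A state x_i (current or delayed) is needed iff some block K_j
    # has a nonzero i-th column; returns the sorted 1-based indices.
    used = set()
    for Kj in K_blocks:
        for i in range(Kj.shape[1]):
            if np.abs(Kj[:, i]).max() > tol:
                used.add(i + 1)
    return sorted(used)
```

For instance, applied to the λ = 5 gain of Example 4.1 (K_0* = [0 −0.671 0 −0.771], K_1* = 0), only x_2 and x_4 are measured.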

Remark 4.1: These examples show that sparse feedback can determine the state feedback structure of a delay system without specifying the sparse controller structure in advance.

Remark 4.2: A local optimal solution may be obtained for different initial values of the non-convex problem (21), but the final gain matrix is always sparse, with little structural difference.

5. Conclusion

Under a special matrix norm, the sparse feedback problem of linear delay systems is transformed into an optimisation problem. The unconstrained problem with a non-smooth part is solved numerically by the proximal gradient method, and the row or column sparse feedback controller of the system can thus be obtained. In future work, improving the numerical efficiency and applying the method to higher-dimensional or more varied systems is a direction.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work is supported by the National Natural Science Foundation of China (11871330 and 11971303) and the Natural Science Foundation of Shanghai (21ZR1426400).

ORCID

Anping Tang http://orcid.org/0000-0002-7935-368X

References

Beck, A. (2017). First-order methods in optimization. Society for Industrial and Applied Mathematics.
Bereketoglu, H., & Huseynov, A. (2010). Convergence of solutions of nonhomogeneous linear difference systems with delays. Acta Applicandae Mathematicae, 110(1), 259–269. https://doi.org/10.1007/s10440-008-9404-2
Chen, P., Liu, S., Zhang, D., & Yu, L. (2021). Adaptive event-triggered decentralized dynamic output feedback control for load frequency regulation of power systems with communication delays. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 1–13. https://doi.org/10.1109/TSMC.2021.3129783
Chen, P., Zhang, D., Yu, L., & Yan, H. (2022). Dynamic event-triggered output feedback control for load frequency control in power systems with multiple cyber attacks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 1–13. https://doi.org/10.1109/TSMC.2022.3143903
Chen, S. S., Donoho, D. L., & Saunders, M. A. (2001). Atomic decomposition by basis pursuit. SIAM Review, 43(1), 129–159. https://doi.org/10.1137/S003614450037906X
Cotter, S., Rao, B., Engan, K., & Kreutz-Delgado, K. (2005). Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 53(7), 2477–2488. https://doi.org/10.1109/TSP.2005.849172
Deng, Y., Léchappé, V., Moulay, E., & Plestan, F. (2019). State feedback control and delay estimation for LTI systems with unknown input delay. International Journal of Control, 94(9), 2369–2378. https://doi.org/10.1080/00207179.2019.1707288
Donoho, D. L. (2006). For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 59(6), 797–829. https://doi.org/10.1002/(ISSN)1097-0312
Fridman, E. (2014). Introduction to time-delay systems: Analysis and control. Springer.
Hale, J. K., & Verduyn Lunel, S. M. (1993). Introduction to functional differential equations. Springer-Verlag.
Hu, G. D. (2020). Stability criteria of high-order delay differential systems. International Journal of Control, 93(9), 2095–2103. https://doi.org/10.1080/00207179.2018.1541365
Hu, G. D., & Hu, R. H. (2020). Numerical optimization for feedback stabilization of linear systems with distributed delays. Journal of Computational and Applied Mathematics, 371, Article 112706. https://doi.org/10.1016/j.cam.2019.112706
Kim, A. V. (2015). Systems with delays: Analysis, control, and computations. Wiley-Scrivener.
Kolmanovskii, V., & Myshkis, A. (1999). Introduction to the theory and applications of functional differential equations. Springer Netherlands.
Kyrychko, Y., & Hogan, S. (2010). On the use of delay equations in engineering applications. Journal of Vibration and Control, 16(7–8), 943–960. https://doi.org/10.1177/1077546309341100
Lin, F., Fardad, M., & Jovanovic, M. R. (2012). Sparse feedback synthesis via the alternating direction method of multipliers. In 2012 American Control Conference (ACC) (pp. 4765–4770). IEEE.
Liu, D., Han, R., & Xu, G. (2018). Controller design for distributed parameter systems with time delays in the boundary feedbacks via the backstepping method. International Journal of Control, 93(5), 1220–1230. https://doi.org/10.1080/00207179.2018.1500717
Mahmoud, M. S. (2010). Improved stability and stabilization approach to linear interconnected time-delay systems. Optimal Control Applications and Methods, 31(2), 81–92. https://doi.org/10.1002/oca.884
Meier, L., Van De Geer, S., & Bühlmann, P. (2008). The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1), 53–71. https://doi.org/10.1111/j.1467-9868.2007.00627.x
Polyak, B., Khlebnikov, M., & Shcherbakov, P. (2014). Sparse feedback in linear control systems. Automation and Remote Control, 75(12), 2099–2111. https://doi.org/10.1134/S0005117914120029
Polyak, B., & Tremba, A. (2020). Sparse solutions of optimal control via Newton method for under-determined systems. Journal of Global Optimization, 76(3), 613–623. https://doi.org/10.1007/s10898-019-00784-z
Quattoni, A., Carreras, X., Collins, M., & Darrell, T. (2009). An efficient projection for l1,∞ regularization. In Proceedings of the 26th Annual International Conference on Machine Learning (pp. 1–8). ACM Press.
Schmidt, M., Roux, N. L., & Bach, F. (2011). Convergence rates of inexact proximal-gradient methods for convex optimization. Advances in Neural Information Processing Systems, 24, 1458–1466.
Shimizu, K. (2017). Optimization of parameter matrix: Optimal output feedback control and optimal PID control. In 2017 IEEE Conference on Control Technology and Applications (pp. 1734–1739). IEEE.
Simon, N., & Tibshirani, R. (2012). Standardization and the group lasso penalty. Statistica Sinica, 22(3), 983–1001. https://doi.org/10.5705/ss.2011.075
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267–288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x
Tropp, J. A. (2006). Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Processing, 86(3), 589–602. https://doi.org/10.1016/j.sigpro.2005.05.031
Uchida, K., Shimemura, E., Kubo, T., & Abe, N. (1988). The linear-quadratic optimal control approach to feedback control design for systems with delay. Automatica, 24(6), 773–780. https://doi.org/10.1016/0005-1098(88)90053-2
Yagoubi, M., & Chaibi, R. (2020). A nonsmooth Newton method for the design of state feedback stabilizers under structure constraints. In 2020 59th IEEE Conference on Decision and Control (pp. 5992–5997). IEEE.

Appendix 1. Proof of Theorem 3.3

In order to prove the theorem, we need the following lemma.

Lemma A.1 (Gronwall's lemma): If v(t) and ξ(t) are nonnegative continuous functions on [t_0, ∞) satisfying

v(t) ≤ c + ∫_{t_0}^{t} ξ(τ) v(τ) dτ,
then for any $t \in [t_0, \infty)$ the following inequality holds:
\[
v(t) \le c \exp\left( \int_{t_0}^{t} \xi(s)\, ds \right).
\]
This result remains true if $c = 0$.
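The lemma can be sanity-checked numerically. The sketch below is not from the paper; the constants $c$ and the function $\xi(t)$ are arbitrary illustrative choices. It integrates the equality case $v'(t) = \xi(t)v(t)$, $v(t_0) = c$ by forward Euler and confirms that $v$ never exceeds the exponential bound.

```python
import math

def gronwall_check(c=2.0, t0=0.0, T=5.0, n=100000,
                   xi=lambda t: 0.5 + 0.3 * math.sin(t)):
    """Euler-integrate the equality case v' = xi(t) * v, v(t0) = c, and
    check v(t) <= c * exp(integral of xi) at every step (Gronwall bound)."""
    h = (T - t0) / n
    v = c                      # v(t0) = c
    integral_xi = 0.0          # running left Riemann sum of xi
    ok = True
    for k in range(n):
        t = t0 + k * h
        v += h * xi(t) * v     # one Euler step of v' = xi * v
        integral_xi += h * xi(t)
        bound = c * math.exp(integral_xi)
        ok = ok and v <= bound + 1e-9
    return ok, v, bound

ok, v_T, bound_T = gronwall_check()
print(ok)  # True: v stays below the Gronwall bound
```

The Euler iterate satisfies $v_{k+1} = v_k(1 + h\,\xi_k) \le v_k e^{h\,\xi_k}$, so the discrete trajectory respects the bound by the elementary inequality $1 + x \le e^x$.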


Proof: Let $e_{i,j}$ be the $p \times (m+1)d$ matrix whose $(i,j)$-entry is 1 and whose remaining entries are all 0. To prove the theorem, we need to show that
\[
\frac{\partial F[K,t]}{\partial k_{i,j}} = \lim_{\xi \to 0} \frac{F[K^{\xi},t] - F[K,t]}{\xi}, \tag{A1}
\]
where $K^{\xi} = K + \xi e_{i,j}$. We will prove this equation in the following steps.

Step 1: Preliminaries.

For each real number $\xi \in \mathbb{R}$, let $F$ denote $F[K,t]$ and $F_{\mathrm{delay}}$ denote $F_{\mathrm{delay}}[K,t]$. Define $f^{\xi}(t)$ as follows:
\[
f^{\xi}(t) := f\big(K^{\xi}, \mathcal{F}[K^{\xi},t]\big) = f\big(K^{\xi}, F[K^{\xi},t], F_{\mathrm{delay}}[K^{\xi},t]\big),
\]
where
\[
\mathcal{F}[K^{\xi},t] = \begin{bmatrix} F[K^{\xi},t] \\ F_{\mathrm{delay}}[K^{\xi},t] \end{bmatrix},
\qquad
F_{\mathrm{delay}}[K^{\xi},t] = \begin{bmatrix} F[K^{\xi},t-\tau_1] \\ F[K^{\xi},t-\tau_2] \\ \vdots \\ F[K^{\xi},t-\tau_m] \end{bmatrix}.
\]
Then, it follows from (14) that, for each $\xi \in \mathbb{R}$,
\[
F[K^{\xi},t] = F[K^{\xi},0] + \int_0^t f^{\xi}(\sigma)\, d\sigma, \quad t \in [0,T].
\]
And $\varphi^{\xi}(t)$ is defined as follows:
\[
\varphi^{\xi}(t) := F[K^{\xi},t] - F[K,t] = \int_0^t \big(f^{\xi}(\sigma) - f^{0}(\sigma)\big)\, d\sigma. \tag{A2}
\]
Define
\[
f^{\xi}_{\eta} := f\big(K + \eta\xi e_{i,j},\ F[K,t] + \eta\varphi^{\xi}(t),\ F_{\mathrm{delay}}[K,t] + \eta\varphi^{\xi}_{\mathrm{delay}}(t)\big),
\]
where
\[
\varphi^{\xi}_{\mathrm{delay}}(t) = \big[\varphi^{\xi}(t-\tau_1)^{\top}, \varphi^{\xi}(t-\tau_2)^{\top}, \ldots, \varphi^{\xi}(t-\tau_m)^{\top}\big]^{\top}.
\]
Therefore, we can get, for $t \in [0,T]$,
\[
f^{\xi}(t) - f^{0}(t) = \int_0^1 \left( \frac{\partial f^{\xi}_{\eta}}{\partial k_{i,j}}\, \xi + \frac{\partial f^{\xi}_{\eta}}{\partial F}\, \varphi^{\xi}(t) + \frac{\partial f^{\xi}_{\eta}}{\partial F_{\mathrm{delay}}}\, \varphi^{\xi}_{\mathrm{delay}}(t) \right) d\eta,
\]
where $\partial f^{\xi}_{\eta}/\partial k_{i,j}$, $\partial f^{\xi}_{\eta}/\partial F$ and $\partial f^{\xi}_{\eta}/\partial F_{\mathrm{delay}}$ denote the partial derivatives of $f$ evaluated at the arguments of $f^{\xi}_{\eta}$.

Now, it follows that the sets $\{F[K^{\xi},t] : \xi \in [-a,a]\}$ and $\{F_{\mathrm{delay}}[K^{\xi},t] : \xi \in [-a,a]\}$ are equibounded on $[0,T]$, where $a > 0$ is a fixed small real number. Hence, there exists a real number $C_1 > 0$ such that, for each $\xi \in [-a,a]$, it holds
\[
F[K^{\xi},t] \in B_{d\times d}(C_1), \qquad F_{\mathrm{delay}}[K^{\xi},t] \in B_{(md)\times d}(C_1), \qquad t \in [0,T],
\]
where $B_{d\times d}(C_1)$ denotes the closed ball in $\mathbb{R}^{d\times d}$ of radius $C_1$ centred at the origin, and $B_{(md)\times d}(C_1)$ is similarly defined. Since $B_{d\times d}(C_1)$ is convex, for each $\xi \in [-a,a]$ we obtain
\[
F[K,t] + \eta\varphi^{\xi}(t) \in B_{d\times d}(C_1) \quad \text{for } t \in [0,T],\ \eta \in [0,1].
\]
Moreover, it is obvious that, for each $\xi \in [-a,a]$,
\[
K + \eta\xi e_{i,j} \in B_{p\times(m+1)d}(C_2) \quad \text{for } \eta \in [0,1],
\]
where $C_2 := \|K\| + 1$. Since $f$ is continuously differentiable with respect to each of its arguments, the partial derivatives $\partial f^{\xi}_{\eta}/\partial k_{i,j}$, $\partial f^{\xi}_{\eta}/\partial F$ and $\partial f^{\xi}_{\eta}/\partial F_{\mathrm{delay}}$ are continuous. Hence, it follows from the compactness of $[0,T]$, $B_{d\times d}(C_1)$ and $B_{p\times(m+1)d}(C_2)$ that there exists a real number $C_3 > 0$ such that, for each $\xi \in [-a,a]$,
\[
\left\| \frac{\partial f^{\xi}_{\eta}}{\partial k_{i,j}} \right\| \le C_3, \quad
\left\| \frac{\partial f^{\xi}_{\eta}}{\partial F} \right\| \le C_3, \quad
\left\| \frac{\partial f^{\xi}_{\eta}}{\partial F_{\mathrm{delay}}} \right\| \le C_3, \quad t \in [0,T],\ \eta \in [0,1],
\]
where $\|\cdot\|$ denotes the Frobenius norm of the corresponding dimension.

Step 2: The function $\varphi^{\xi}(t)$ is of order $\xi$.

Let $\xi \in [-a,a]$ be arbitrary. Taking the norm of both sides of (A2) and applying the definition of $C_3$ gives (the delayed terms vanish for $\sigma < \tau_j$)
\[
\begin{aligned}
\big\|\varphi^{\xi}(t)\big\|
&= \left\| \int_0^t \int_0^1 \left( \frac{\partial f^{\xi}_{\eta}}{\partial k_{i,j}}\, \xi + \frac{\partial f^{\xi}_{\eta}}{\partial F}\, \varphi^{\xi}(\sigma) + \frac{\partial f^{\xi}_{\eta}}{\partial F_{\mathrm{delay}}}\, \varphi^{\xi}_{\mathrm{delay}}(\sigma) \right) d\eta\, d\sigma \right\| \\
&\le C_3 T |\xi| + \int_0^t C_3 \big\|\varphi^{\xi}(\sigma)\big\|\, d\sigma
   + \sum_{j=1}^{m} \left\| \int_{\tau_j}^{t} \int_0^1 \frac{\partial f^{\xi}_{\eta}}{\partial F[K,t-\tau_j]}\, \varphi^{\xi}(\sigma-\tau_j)\, d\eta\, d\sigma \right\| \\
&\le C_3 T |\xi| + \int_0^t C_3 \big\|\varphi^{\xi}(\sigma)\big\|\, d\sigma
   + \sum_{j=1}^{m} \int_{\tau_j}^{t} C_3 \big\|\varphi^{\xi}(\sigma-\tau_j)\big\|\, d\sigma \\
&\le C_3 T |\xi| + \int_0^t C_3 \big\|\varphi^{\xi}(\sigma)\big\|\, d\sigma
   + m \int_0^t C_3 \big\|\varphi^{\xi}(\sigma)\big\|\, d\sigma \\
&\le C_3 T |\xi| + (1+m) \int_0^t C_3 \big\|\varphi^{\xi}(\sigma)\big\|\, d\sigma.
\end{aligned}
\]
By applying Gronwall's Lemma, we have
\[
\big\|\varphi^{\xi}(t)\big\| \le C_3 T \exp\big((1+m) C_3 T\big)\, |\xi|, \quad t \in [0,T]. \tag{A3}
\]
Since $\xi \in [-a,a]$ is arbitrary, the function $\varphi^{\xi}(t)$ is of order $\xi$.

Step 3: Definition and limiting behaviour of $\rho(\xi)$.

For each $\xi \in \mathbb{R}$, define the corresponding functions
\[
\lambda^{1,\xi}(t) := \int_0^1 \left( \frac{\partial f\big(K+\eta\xi e_{i,j},\ F[K,t]+\eta\varphi^{\xi}(t),\ F_{\mathrm{delay}}[K,t]+\eta\varphi^{\xi}_{\mathrm{delay}}(t)\big)}{\partial k_{i,j}} - \frac{\partial f\big(K, F[K,t], F_{\mathrm{delay}}[K,t]\big)}{\partial k_{i,j}} \right) \xi\, d\eta,
\]
\[
\lambda^{2,\xi}(t) := \int_0^1 \left( \frac{\partial f\big(K+\eta\xi e_{i,j},\ F[K,t]+\eta\varphi^{\xi}(t),\ F_{\mathrm{delay}}[K,t]+\eta\varphi^{\xi}_{\mathrm{delay}}(t)\big)}{\partial F} - \frac{\partial f\big(K, F[K,t], F_{\mathrm{delay}}[K,t]\big)}{\partial F} \right) \varphi^{\xi}(t)\, d\eta,
\]
\[
\lambda^{3,\xi}(t) := \int_0^1 \left( \frac{\partial f\big(K+\eta\xi e_{i,j},\ F[K,t]+\eta\varphi^{\xi}(t),\ F_{\mathrm{delay}}[K,t]+\eta\varphi^{\xi}_{\mathrm{delay}}(t)\big)}{\partial F_{\mathrm{delay}}} - \frac{\partial f\big(K, F[K,t], F_{\mathrm{delay}}[K,t]\big)}{\partial F_{\mathrm{delay}}} \right) \varphi^{\xi}_{\mathrm{delay}}(t)\, d\eta.
\]
Furthermore, define another function $\rho : [-a,0) \cup (0,a] \to \mathbb{R}$ as follows:
\[
\rho(\xi) = |\xi|^{-1} \int_0^T \big\| \lambda^{1,\xi}(\sigma) + \lambda^{2,\xi}(\sigma) + \lambda^{3,\xi}(\sigma) \big\|\, d\sigma.
\]
Since the function $\varphi^{\xi}(t)$ is of order $\xi$, it follows that
\[
F[K,t] + \eta\varphi^{\xi}(t) \to F[K,t], \qquad
F_{\mathrm{delay}}[K,t] + \eta\varphi^{\xi}_{\mathrm{delay}}(t) \to F_{\mathrm{delay}}[K,t] \quad \text{as } \xi \to 0. \tag{A4}
\]
Meanwhile, it is obvious that
\[
K + \eta\xi e_{i,j} \to K \quad \text{as } \xi \to 0 \tag{A5}
\]
uniformly with respect to $t \in [0,T]$ and $\eta \in [0,1]$. Since the convergence in (A4) takes place inside the balls $B_{d\times d}(C_1)$ and $B_{(md)\times d}(C_1)$, the convergence in (A5) takes place inside the ball $B_{p\times(m+1)d}(C_2)$, and $\partial f^{\xi}_{\eta}/\partial k_{i,j}$, $\partial f^{\xi}_{\eta}/\partial F$ and $\partial f^{\xi}_{\eta}/\partial F_{\mathrm{delay}}$ are uniformly continuous on the compact set $[0,T] \times B_{d\times d}(C_1) \times B_{(md)\times d}(C_1) \times B_{p\times(m+1)d}(C_2)$, we have
\[
\frac{\partial f^{\xi}_{\eta}}{\partial k_{i,j}} \to \frac{\partial f}{\partial k_{i,j}}, \qquad
\frac{\partial f^{\xi}_{\eta}}{\partial F} \to \frac{\partial f}{\partial F}, \qquad
\frac{\partial f^{\xi}_{\eta}}{\partial F_{\mathrm{delay}}} \to \frac{\partial f}{\partial F_{\mathrm{delay}}} \quad \text{as } \xi \to 0,
\]
uniformly with respect to $t \in [0,T]$ and $\eta \in [0,1]$. These results, together with inequality (A3), imply that $|\xi|^{-1}\lambda^{1,\xi} \to 0$, $|\xi|^{-1}\lambda^{2,\xi} \to 0$ and $|\xi|^{-1}\lambda^{3,\xi} \to 0$ uniformly on $[0,T]$ as $\xi \to 0$. Consequently,
\[
\lim_{\xi \to 0} \rho(\xi) = 0. \tag{A6}
\]

Step 4: Comparing $\xi^{-1}\varphi^{\xi}(t)$ with $\Lambda_{i,j}(t)$, the solution of the auxiliary system.

We now use the results proved in the previous steps to establish (A1). First, let $\xi \in [-a,0) \cup (0,a]$ be arbitrary but fixed. Next, it follows from (A2) that
\[
\varphi^{\xi}(t) = \int_0^t \big[\lambda^{1,\xi}(\sigma) + \lambda^{2,\xi}(\sigma) + \lambda^{3,\xi}(\sigma)\big]\, d\sigma
+ \int_0^t \frac{\partial f}{\partial k_{i,j}}\, \xi\, d\sigma
+ \int_0^t \frac{\partial f}{\partial F}\, \varphi^{\xi}(\sigma)\, d\sigma
+ \int_0^t \frac{\partial f}{\partial F_{\mathrm{delay}}}\, \varphi^{\xi}_{\mathrm{delay}}(\sigma)\, d\sigma. \tag{A7}
\]
Furthermore, integrating the auxiliary system gives
\[
\Lambda_{i,j}(t) = \int_0^t \frac{\partial f}{\partial k_{i,j}}\, d\sigma
+ \int_0^t \frac{\partial f}{\partial F}\, \Lambda_{i,j}(\sigma)\, d\sigma
+ \sum_{j=1}^{m} \int_0^t \frac{\partial f}{\partial F[K,t-\tau_j]}\, \Lambda_{i,j}(\sigma-\tau_j)\, d\sigma. \tag{A8}
\]
Multiplying (A7) by $\xi^{-1}$ and subtracting (A8) yields
\[
\begin{aligned}
\xi^{-1}\varphi^{\xi}(t) - \Lambda_{i,j}(t)
&= \xi^{-1} \int_0^t \big[\lambda^{1,\xi}(\sigma) + \lambda^{2,\xi}(\sigma) + \lambda^{3,\xi}(\sigma)\big]\, d\sigma \\
&\quad + \int_0^t \frac{\partial f}{\partial F}\, \big(\xi^{-1}\varphi^{\xi}(\sigma) - \Lambda_{i,j}(\sigma)\big)\, d\sigma \\
&\quad + \sum_{j=1}^{m} \int_0^t \frac{\partial f}{\partial F[K,t-\tau_j]}\, \big(\xi^{-1}\varphi^{\xi}(\sigma-\tau_j) - \Lambda_{i,j}(\sigma-\tau_j)\big)\, d\sigma.
\end{aligned}
\]
Therefore,
\[
\begin{aligned}
\big\| \xi^{-1}\varphi^{\xi}(t) - \Lambda_{i,j}(t) \big\|
&\le \rho(\xi) + \int_0^t C_3 \big\| \xi^{-1}\varphi^{\xi}(\sigma) - \Lambda_{i,j}(\sigma) \big\|\, d\sigma \\
&\quad + \sum_{j=1}^{m} \int_0^t C_3 \big\| \xi^{-1}\varphi^{\xi}(\sigma-\tau_j) - \Lambda_{i,j}(\sigma-\tau_j) \big\|\, d\sigma \\
&\le \rho(\xi) + (m+1)\, C_3 \int_0^t \big\| \xi^{-1}\varphi^{\xi}(\sigma) - \Lambda_{i,j}(\sigma) \big\|\, d\sigma.
\end{aligned}
\]
By Gronwall's Lemma, we can get
\[
\big\| \xi^{-1}\varphi^{\xi}(t) - \Lambda_{i,j}(t) \big\| \le \rho(\xi) \exp\big((m+1) C_3 T\big), \quad t \in [0,T]. \tag{A9}
\]
Recalling that $\xi \in [-a,0) \cup (0,a]$ is arbitrary, we can take the limit as $\xi \to 0$ in (A9) and then apply (A6) to establish the following equation:
\[
\lim_{\xi \to 0} \xi^{-1}\varphi^{\xi}(t) = \Lambda_{i,j}(t), \quad t \in [0,T].
\]
This proves (A1), because $\varphi^{\xi}(t) = F[K^{\xi},t] - F[K,t]$ for each $t \in [0,T]$. From (A8),
\[
\dot{\Lambda}_{i,j}(t) = (A_0 + BK_0)\,\Lambda_{i,j}(t) + \sum_{j=1}^{m} \big(A_j + BK_j\big)\,\Lambda_{i,j}(t-\tau_j) + B e_{i,j}\, \mathcal{F}[K,t]. \tag{A10}
\]
□
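To illustrate (A10) concretely: the derivative of the fundamental solution with respect to a gain entry solves a linear delay system driven by $B e_{i,j}\,\mathcal{F}[K,t]$. The following sketch is not from the paper; it uses a scalar toy instance ($d = p = 1$, $m = 1$) with invented coefficients $a_0$, $a_1$, $b$, $\tau$. It integrates $F' = (a_0 + b k_0)F(t) + (a_1 + b k_1)F(t-\tau)$ and its sensitivity equation by forward Euler, then checks the sensitivity against a finite-difference quotient in $k_0$.

```python
def simulate(k0, k1, a0=-1.0, a1=0.5, b=1.0, tau=1.0, T=5.0, n_per_tau=1000):
    """Forward-Euler integration of the scalar delay system
        F'(t)   = (a0 + b*k0) F(t)   + (a1 + b*k1) F(t - tau),
    with fundamental-solution data F(0) = 1, F(t) = 0 for t < 0, together
    with the sensitivity Lam = dF/dk0 from the (A10)-type auxiliary system
        Lam'(t) = (a0 + b*k0) Lam(t) + (a1 + b*k1) Lam(t - tau) + b*F(t),
    with Lam(t) = 0 for t <= 0."""
    h = tau / n_per_tau            # step chosen so the delay is a whole number of steps
    steps = int(round(T / h))
    F = [0.0] * (steps + 1)
    Lam = [0.0] * (steps + 1)
    F[0] = 1.0
    d = n_per_tau                  # delay measured in grid steps
    for n in range(steps):
        Fd = F[n - d] if n - d >= 0 else 0.0    # F(t - tau), zero history
        Ld = Lam[n - d] if n - d >= 0 else 0.0  # Lam(t - tau), zero history
        F[n + 1] = F[n] + h * ((a0 + b * k0) * F[n] + (a1 + b * k1) * Fd)
        Lam[n + 1] = Lam[n] + h * ((a0 + b * k0) * Lam[n]
                                   + (a1 + b * k1) * Ld + b * F[n])
    return F, Lam

k0, k1 = -0.3, 0.2
F, Lam = simulate(k0, k1)
# Finite-difference check of dF/dk0 at t = T:
eps = 1e-6
Fp, _ = simulate(k0 + eps, k1)
fd = (Fp[-1] - F[-1]) / eps
print(Lam[-1], fd)  # the two values should agree closely
```

Because the Euler recursion for `Lam` is exactly the derivative of the Euler recursion for `F` with respect to `k0`, the two quantities agree up to the finite-difference error, mirroring the limit established in Step 4.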
Appendix 2. Algorithm 1 (Hu, 2020)

Algorithm 1: An algorithm to check the stability of a linear delay system

1: Calculate the upper bound r on the moduli of the unstable roots of system (3). This yields the closed semicircular curve l as the boundary of D, which consists of the segment {s = it : −r ≤ t ≤ r} and the half-circle {s : |s| = r, −π/2 ≤ arg s ≤ π/2}.
2: Take a sufficiently large integer n, discretise l as uniformly as possible in the clockwise direction, and record these nodes as {s_j}_{j=1}^{n}.
3: For each s_j (j = 1, 2, …, n), calculate the value P(s_j) and check whether P(s_j) = 0 by testing whether its magnitude satisfies |P(s_j)| ≤ δ1 for the preassigned tolerance δ1. If this holds, i.e. P(s_j) = 0, then system (3) is not asymptotically stable; stop the algorithm. Otherwise, continue to the next step.
4: Compute the change of argument Δ_l arg P(s) along the ordered nodes {s_j}_{j=1}^{n}, deciding Δ_l arg P(s) = 0 by checking |Δ_l arg P(s)| ≤ δ2 for the preassigned tolerance δ2. If Δ_l arg P(s) = 0, the system is asymptotically stable; otherwise it is not stable.

Here, P(s) = det[sI − (A0 + BK0) − Σ_{j=1}^{m} (Aj + BKj) exp(−τ_j s)] is the characteristic function of system (3).
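A minimal numerical sketch of Algorithm 1 for the scalar case d = 1 follows; the coefficients are invented for illustration, and the δ1/δ2 thresholds are replaced by simple magnitude checks. In this case P(s) = s − c0 − Σ_j c_j exp(−τ_j s), where c0 plays the role of a0 + b k0 and c_j of a_j + b k_j. The contour l is discretised clockwise, P is evaluated at the nodes, and the accumulated change of argument decides stability via the argument principle (clockwise traversal gives −2π per unstable root).

```python
import cmath
import math

def delta_arg_P(c0, delay_terms, r=10.0, n=20000):
    """Change of arg P(s) along the clockwise boundary l of the half-disc
    {Re s >= 0, |s| <= r}, for the scalar characteristic function
    P(s) = s - c0 - sum_j c_j exp(-tau_j * s).
    Returns None if a (numerical) root is found on l (Step 3)."""
    def P(s):
        return s - c0 - sum(c * cmath.exp(-tau * s) for tau, c in delay_terms)

    # Clockwise traversal: up the imaginary axis from -ir to ir,
    # then along the arc from ir through r back to -ir.
    nodes = [1j * (-r + 2 * r * k / n) for k in range(n + 1)]
    nodes += [r * cmath.exp(1j * (math.pi / 2 - math.pi * k / n))
              for k in range(1, n + 1)]

    total = 0.0
    for a, b in zip(nodes, nodes[1:]):
        Pb = P(b)
        if abs(Pb) < 1e-12:        # root on the contour: not asymptotically stable
            return None
        total += cmath.phase(Pb / P(a))   # incremental argument change
    return total

# Stable toy system x'(t) = -x(t) + 0.3 x(t-1): the change of argument is ~0.
# Unstable toy system x'(t) = 0.5 x(t): one root at s = 0.5, change ~ -2*pi.
print(delta_arg_P(-1.0, [(1.0, 0.3)]))
print(delta_arg_P(0.5, []))
```

Summing the principal phases of consecutive ratios P(s_{j+1})/P(s_j) recovers the continuous change of argument exactly, provided the discretisation is fine enough that no single step changes the argument by more than π.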
