To cite this article: Anping Tang, Guang-Da Hu & Yuhao Cong (2023) Sparse feedback stabilisation of linear delay systems by proximal gradient method, International Journal of Control, 96:10, 2576-2586, DOI: 10.1080/00207179.2022.2106892

CONTACT Guang-Da Hu ghu@hit.edu.cn Department of Mathematics, Shanghai University, Shanghai 200444, People's Republic of China
We propose a new strategy that designs a sparse feedback gain by optimisation methods, avoiding combinatorial searches over controller structures. Based on the introduction of a special matrix norm to ensure the sparsity of the gain matrix, combined with the state transition matrix (fundamental matrix) of delay systems, an optimisation problem is proposed to design the sparse feedback gain matrix that stabilises the system. The gradient formula of the objective function and the proximal mapping of the special matrix norm are derived. We develop a proximal gradient algorithm that consists of gradient descent and proximal mapping steps to solve the non-smooth optimisation problem. To the authors' knowledge, no results have been reported on the sparse feedback stabilisation of linear delay systems in the literature.

The main contributions of this work are summarised as follows:

(1) We formulate the sparse feedback stabilisation controller design of linear delay systems as an optimisation problem.
(2) An efficient proximal gradient algorithm is developed to compute the sparse gain matrix of the stabilising controller.

This paper is organised as follows. In Section 2, the sparse feedback of linear delay systems and the special matrix norms are provided. In Section 3, we propose an optimisation problem to design the sparse feedback gain matrix of the systems. In Section 4, numerical examples are given to illustrate the effectiveness of the proposed method. In Section 5, some conclusions are summarised.

Throughout the paper, we denote by $\|A\|_F = \sqrt{\sum_{i,j} a_{i,j}^2}$ the Frobenius norm of a matrix $A$, induced by the Frobenius inner product $\langle A, B \rangle = \mathrm{Tr}(A^\top B)$. The $p$-norm of an $n$-dimensional vector $x$ is defined by $\|x\|_p = (\sum_{i=1}^{n} |x_i|^p)^{1/p}$. The matrices in $\mathbb{R}^{m \times n}$ with all elements equal to 0 or equal to 1 are denoted by $0_{m \times n}$ and $1_{m \times n}$, respectively.

The parameters of the feedback gain matrix $K$ can be set as
$$K = [K_0, K_1, \ldots, K_m] = \begin{bmatrix} k_{1,1} & k_{1,2} & \cdots & k_{1,d(m+1)} \\ k_{2,1} & k_{2,2} & \cdots & k_{2,d(m+1)} \\ \vdots & \vdots & \ddots & \vdots \\ k_{p,1} & k_{p,2} & \cdots & k_{p,d(m+1)} \end{bmatrix}. \quad (4)$$

The gain matrix $K$ becomes high-dimensional as the number of inputs, the number of states and the number of delay terms increase. To avoid this problem, our aim is to seek a row or column sparse feedback gain matrix such that the closed-loop system (3) is asymptotically stable.

Define
$$X(t) = \begin{bmatrix} x(t) \\ x(t - \tau_1) \\ \vdots \\ x(t - \tau_m) \end{bmatrix}. \quad (5)$$

Designing the state feedback $u(t) = K X(t)$ from an incomplete state vector is equivalent to designing a column sparse stabilising controller, i.e. a gain matrix $K$ having zero columns. For example, if columns 3-6 of the gain matrix are 0, the state feedback controller requires only the non-delayed states. Similarly, reducing the number of controls is equivalent to finding a row sparse stabilising controller, i.e. a gain matrix $K$ having zero rows.
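As a concrete illustration of the block structure in (4) and the stacked state in (5), the sketch below assembles a gain $K = [K_0, K_1, K_2]$ whose delay blocks are zero and applies $u(t) = K X(t)$. All dimensions and numerical values are invented for illustration; they are not taken from the paper's examples.

```python
import numpy as np

d, p, m = 2, 1, 2          # states, inputs, number of delays (hypothetical sizes)

# Block gain K = [K0, K1, ..., Km], shape p x d(m+1), as in Eq. (4).
K0 = np.array([[1.0, -0.5]])
K1 = np.zeros((p, d))      # zero block: controller ignores x(t - tau_1)
K2 = np.zeros((p, d))      # zero block: controller ignores x(t - tau_2)
K = np.hstack([K0, K1, K2])

# Stacked state X(t) = [x(t); x(t - tau_1); x(t - tau_2)], as in Eq. (5).
x_now = np.array([0.3, -0.2])
x_tau1 = np.array([0.1, 0.4])
x_tau2 = np.array([-0.6, 0.2])
X = np.concatenate([x_now, x_tau1, x_tau2])

u = K @ X                  # feedback law u(t) = K X(t)
# Because K1 and K2 are zero columns of K, u depends only on x(t):
assert np.allclose(u, K0 @ x_now)
print(u)
```

Zeroing the columns of $K$ that multiply the delayed sub-vectors leaves a controller that needs only the current state, which is exactly the column sparse structure sought above.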
of the matrix (Quattoni et al., 2009). According to the following lemma, minimising the $\|X\|_{r1}$ or $\|X\|_{c1}$ matrix norm may lead to a reduction in the number of nonzero rows or columns of the matrix, that is, an increase in the sparsity of the matrix.

Lemma 2.2 (Polyak et al., 2014): If the problem
$$\min \|X\|_{r1} \quad \text{subject to } AX = B, \quad (8)$$
where $A \in \mathbb{R}^{s \times m}$, $s < m$, $B \in \mathbb{R}^{s \times n}$, and $X \in \mathbb{R}^{m \times n}$, is feasible, then there exists a solution with no more than $s \cdot n$ nonzero rows.

The problem (8) can be transformed into a linear programming problem (Chen et al., 2001). The constraint set of the optimal solution is a polyhedron, which is represented as the convex hull of its vertices. A sparse solution can only be obtained as a combination of these vertices, and the limited number of vertices leads to a reduction in flexibility within each group (each row or column), that is, a reduction of smoothness. We therefore define the following norm to increase the smoothness of each row and column of the matrix.

Definition 2.3: Let $X$ be an $m \times n$ matrix. Define the weighted norms
$$\|X\|_{col-1} = \sum_{i=1}^{n} \beta_i \|q_i\|_2, \quad (9)$$
$$\|X\|_{row-1} = \sum_{j=1}^{m} \alpha_j \|p_j\|_2, \quad (10)$$
where $\beta_i$ and $\alpha_j$ are positive weights. Larger weights correspond to more expensive components of the column or row vectors.

Moving from the $\|X\|_{c1} = \sum_{i=1}^{n} \|q_i\|_\infty$ norm to the $\|X\|_{col-1} = \sum_{i=1}^{n} \beta_i \|q_i\|_2$ norm, the $l_\infty$ norm of each column is replaced by the $l_2$ norm, which increases the smoothness of the column norm; moreover, the proximal mapping of the $l_2$ norm is easier to obtain than that of the $l_\infty$ norm. The outer sum over column norms in $\|X\|_{col-1}$ acts as an $l_1$ penalty, the same as in the $\|X\|_{c1}$ norm of Lemma 2.2, and this $l_1$ penalty guarantees the sparsity of the matrix when the norm is minimised in the optimisation problem. Besides that, the matrix norms of Definitions 2.1 and 2.3 are convex.

3. Proximal gradient method for sparse feedback stabilisation

3.1 Sparse feedback stabilisation problem

We review the concept of the state transition matrix (fundamental matrix) of linear delay systems (see, for example, Hale & Verduyn Lunel, 1993; Kolmanovskii & Myshkis, 1999). The state transition matrix of the closed-loop system (3) is denoted by $F[K, t] \in \mathbb{R}^{d \times d}$, which is the solution of the matrix delay differential equation
$$\dot{F}[K, t] = (A_0 + B K_0) F[K, t] + \sum_{j=1}^{m} (A_j + B K_j) F[K, t - \tau_j] \quad \text{for } t > 0, \quad (11)$$
under the condition
$$F[K, t] = 0 \ \text{for } t < 0 \quad \text{and} \quad F[K, 0] = I. \quad (12)$$

Define the right-hand side of the differential equation (11) as
$$f(K, \mathbb{F}[K, t]) = (A_0 + B K_0) F[K, t] + \sum_{j=1}^{m} (A_j + B K_j) F[K, t - \tau_j], \quad (13)$$
where
$$\mathbb{F}[K, t] = \begin{bmatrix} F[K, t] \\ F_{delay}[K, t] \end{bmatrix}, \qquad F_{delay}[K, t] = \begin{bmatrix} F[K, t - \tau_1] \\ F[K, t - \tau_2] \\ \vdots \\ F[K, t - \tau_m] \end{bmatrix}.$$

Then Equation (11) can be rewritten as
$$\dot{F}[K, t] = f(K, \mathbb{F}[K, t]) \quad \text{for } t > 0. \quad (14)$$

A stability criterion via the state transition matrix is as follows.

Lemma 3.1 (Hu & Hu, 2020): The closed-loop system (3) is asymptotically stable if and only if there exist feedback gain matrices $K_0$ and $K_j$ for $j = 1, \ldots, m$, i.e. $K$, such that
$$\lim_{T \to \infty} \int_0^T \|F[K, \sigma]\|_F^2 \, d\sigma + \lambda \|K\|_F^2 \quad (15)$$
exists and is finite, where $\lambda$ is a positive coefficient and $F[K, \sigma]$ is the state transition matrix of the closed-loop system (11).

Stability is still guaranteed when the Frobenius norm is replaced with other matrix norms, including the $\|\cdot\|_{col-1}$ and $\|\cdot\|_{row-1}$ norms. On this basis, a stability condition for the sparse feedback gain matrix of linear delay systems is given.

Theorem 3.2: The closed-loop system (3) is asymptotically stable if and only if there exist feedback gain matrices $K_0$ and $K_j$ for $j = 1, \ldots, m$, i.e. $K$, such that
$$\lim_{T \to \infty} \int_0^T \|F[K, \sigma]\|_F^2 \, d\sigma + \lambda \|K\|_{col-1} \quad (16)$$
exists and is finite, where $\lambda$ is a positive coefficient and $F[K, \sigma]$ is the state transition matrix of the closed-loop system (11).

Proof: The proof follows immediately from Lemma 3.1.
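The finite-horizon part of the objective in (15)-(16) can be approximated by integrating the matrix delay differential equation (11)-(12) with the explicit Euler method, one of the schemes the paper mentions later in Remark 3.3. The sketch below is a hedged illustration under stated assumptions: the step size, horizon, and the stable scalar test system are invented for demonstration, not taken from the paper.

```python
import numpy as np

def transition_matrix_cost(A_list, B, K_blocks, taus, T=10.0, h=1e-3, lam=1.0):
    """Euler integration of F' = (A0 + B K0) F + sum_j (Aj + B Kj) F(t - tau_j),
    with F = 0 for t < 0 and F(0) = I, returning the finite-horizon cost
    int_0^T ||F||_F^2 dt + lam * ||K||_F^2 as in Eq. (15)."""
    d = A_list[0].shape[0]
    n_steps = int(round(T / h))
    delays = [int(round(tau / h)) for tau in taus]
    closed = [A + B @ Kb for A, Kb in zip(A_list, K_blocks)]  # Aj + B Kj
    F_hist = np.zeros((n_steps + 1, d, d))
    F_hist[0] = np.eye(d)
    cost = 0.0
    for n in range(n_steps):
        dF = closed[0] @ F_hist[n]
        for j, nd in enumerate(delays, start=1):
            if n - nd >= 0:                # F[K, t] = 0 for t < 0, Eq. (12)
                dF = dF + closed[j] @ F_hist[n - nd]
        F_hist[n + 1] = F_hist[n] + h * dF
        cost += h * np.linalg.norm(F_hist[n]) ** 2   # Frobenius norm
    K = np.hstack(K_blocks)
    return cost + lam * np.linalg.norm(K) ** 2

# Hypothetical scalar example: x' = -x(t) + 0.3 x(t - 1), no control (B = 0).
A0, A1 = np.array([[-1.0]]), np.array([[0.3]])
B = np.zeros((1, 1))
K0, K1 = np.zeros((1, 1)), np.zeros((1, 1))
J = transition_matrix_cost([A0, A1], B, [K0, K1], taus=[1.0], lam=0.0)
print(J)   # finite for this stable system
```

A finite value of the integral over a growing horizon is exactly what Lemma 3.1 requires; for an unstable closed loop the same computation would blow up as $T$ grows.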
Theorem 3.2 still holds when the $\|K\|_{col-1}$ norm in (16) is replaced by the $\|K\|_{row-1}$ norm. According to the above argument and Theorem 3.2, we formulate an optimisation problem for column sparse feedback stabilisation. In order to solve the optimisation problem conveniently, we note that the performance index can be expressed as follows:

Remark 3.1: The terminal time $T$ in $H(K)$ is selected according to the practical problem and generally does not need to be too large. The regularisation coefficient $\lambda$ in $G(K)$ balances the state energy, the sparsity of the gain matrix, and the input energy. A larger $\lambda$ produces a sparser $K$ in general; a smaller $\lambda$ reduces the energy of the state but yields a less sparse gain matrix.

3.2 Proximal gradient method for solving the problem

The proximal gradient method is an appealing approach for solving these types of non-smooth optimisation problems because of its fast theoretical convergence rate and strong practical performance (Schmidt et al., 2011). The method is the same as the iterative shrinkage-thresholding algorithm in the literature (Beck, 2017).

Recall that Problem (21) is a non-smooth optimisation problem composed of a smooth non-convex function $H(K)$ and a non-smooth convex matrix norm function $G(K)$. The proximal gradient method for solving the sparse feedback stabilisation problem consists of a gradient descent step on the smooth part $H(K)$ followed by the proximal mapping of the non-smooth part $G(K)$. We require the gradients of $H(K)$ and of the state transition matrix. The gradient of the state with respect to the matrix $K$ is given in the theorem stated below.

Theorem 3.4: The gradient of $H(K)$ with respect to $k_{i,j}$, for each $i = 1, \ldots, p$ and $j = 1, \ldots, (m+1)d$, is given by
$$\frac{\partial H(K)}{\partial k_{i,j}} = \int_0^T 2 \, \mathrm{Tr}\big(F[K, t]^\top \Phi_{i,j}(t)\big) \, dt, \quad (25)$$
where $\Phi_{i,j}(t)$ is the solution of the delay system (24).

Proof: The proof follows from applying the chain rule to (18) and Theorem 3.3.

Equations (24) and (25) give the gradient of the smooth part $H(K)$ of the objective function in Problem (21). The proximal mapping of the matrix norm function (19) is calculated by the following lemmas.

Definition 3.5: Given a function $f : \mathbb{R}^m \to (-\infty, \infty]$, the proximal mapping of $f(x)$ is the operator given by
$$\mathrm{prox}_f(x) = \arg\min_{u \in \mathbb{R}^m} \left\{ f(u) + \frac{1}{2}\|u - x\|_2^2 \right\}.$$
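Definition 3.5 can be checked numerically for a simple choice of $f$. For the scaled Euclidean norm $f(u) = \lambda \|u\|_2$, the minimiser has a known closed form (block soft-thresholding), and the sketch below verifies it against the first-order optimality condition $\lambda u / \|u\|_2 + u - x = 0$ for $u \ne 0$. The numerical values are illustrative assumptions.

```python
import numpy as np

def prox_l2(x, lam):
    """Proximal mapping of f(u) = lam * ||u||_2 (block soft-thresholding)."""
    nrm = np.linalg.norm(x)
    if nrm <= lam:
        return np.zeros_like(x)
    return (1.0 - lam / nrm) * x

x = np.array([3.0, -4.0])          # ||x||_2 = 5
lam = 1.0
u = prox_l2(x, lam)

# Optimality condition of the minimisation in Definition 3.5 at u != 0:
#   lam * u / ||u||_2 + u - x = 0
residual = lam * u / np.linalg.norm(u) + u - x
assert np.allclose(residual, 0.0)
print(u)   # x shrunk toward the origin by lam along its own direction
```

The same shrinkage, applied column by column with weights $\beta_i$, is what the proximal mapping of $\|\cdot\|_{col-1}$ turns out to be in the lemmas that follow.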
Lemma 3.6 (Beck, 2017): Let $h : \mathbb{R}^m \to \mathbb{R}$ be given by $h(q) = g(\|q\|_2)$, where $g : \mathbb{R} \to (-\infty, \infty]$ is a proper closed and convex function satisfying $\mathrm{dom}(g) \subseteq [0, \infty)$. Then
$$\mathrm{prox}_h(q) = \begin{cases} \mathrm{prox}_g(\|q\|_2) \dfrac{q}{\|q\|_2}, & q \ne 0, \\ \{u \in \mathbb{R}^m : \|u\|_2 = \mathrm{prox}_g(0)\}, & q = 0, \end{cases} \quad (26)$$
where $\mathrm{prox}_h$ and $\mathrm{prox}_g$ are the proximal mappings of $h$ and $g$, respectively.

Lemma 3.7: Let $\lambda > 0$ and $g : \mathbb{R} \to [0, \infty]$,
$$g(t) = \begin{cases} \lambda t, & t \ge 0, \\ \infty, & t < 0. \end{cases}$$
Then the proximal mapping of $g$ is
$$\mathrm{prox}_g(t) = [t - \lambda]_+,$$
where $[a]_+$ is the larger value between $a$ and 0, defined by $[a]_+ = \max\{0, a\}$.

Theorem 3.8: Let $G : \mathbb{R}^{m \times n} \to \mathbb{R}$ be given by $G(X) = \lambda \|X\|_{col-1}$. For any $X \in \mathbb{R}^{m \times n}$,
$$[\mathrm{prox}_G(X)]_i = \begin{cases} [\|q_i\|_2 - \lambda \beta_i]_+ \dfrac{q_i}{\|q_i\|_2}, & q_i \ne 0, \\ 0, & q_i = 0, \end{cases} \quad (27)$$
where $[\mathrm{prox}_G(X)]_i$ represents the $i$th column of the matrix $\mathrm{prox}_G(X)$.

Proof: The proximal mapping of $G(X) = \lambda \|X\|_{col-1}$ is considered below, which is equivalent to solving the optimisation problem
$$\min_Z L(Z) = \lambda \|Z\|_{col-1} + \frac{1}{2}\|Z - X\|_F^2.$$
This problem can be solved separately column by column:
$$L(Z) = \sum_{i=1}^{n} \left( \lambda \beta_i \|z_i\|_2 + \frac{1}{2}\|z_i - q_i\|_2^2 \right), \quad (28)$$
where $z_i$ is the $i$th column of $Z$. Using Lemma 3.6 and Lemma 3.7, we get
$$z_i = \begin{cases} [\|q_i\|_2 - \lambda \beta_i]_+ \dfrac{q_i}{\|q_i\|_2}, & q_i \ne 0, \\ 0, & q_i = 0. \end{cases} \quad (29)$$

Algorithm 1, given in Hu (2020), checks the stability of the closed-loop system (3) and is reproduced in Appendix 2. Based on the gradient of the smooth part $H(K)$ of the objective function and the proximal mapping of the special norm $G(K)$, the algorithm of the proximal gradient method is developed to obtain the sparse feedback gain matrix of delay systems. The algorithm mainly consists of two steps: (1) gradient descent on $H(K)$ by solving Equations (24) and (25); (2) proximal mapping of the non-smooth matrix norm function by (27).

Algorithm 2 Proximal gradient method for sparse feedback stabilisation
1: Given a starting point $K^0$, a convergence tolerance $\varepsilon > 0$, a time constant $T$, a regularisation coefficient $\lambda > 0$ and $j = 0$.
2: repeat
3: (1) Gradient descent of $H(K^j)$: compute $\nabla H(K^j)$ by solving Equations (24) and (25), then
$$Y^j = K^j - t_j \nabla H(K^j).$$
(2) Proximal mapping of $G(Y^j)$.

In the algorithm, the step size $t_j$ in step 3 is determined by a backtracking procedure satisfying the inequality
$$H(K^{j+1}) \le H(K^j) + \mathrm{Tr}\big(\nabla H(K^j)^\top (K^{j+1} - K^j)\big) + \frac{1}{2 t_j}\|K^{j+1} - K^j\|_F^2. \quad (30)$$
The procedure requires a parameter $\eta < 1$; when the inequality (30) is not satisfied, we set $t_j := \eta t_j$.

Remark 3.2: The smooth part $H(K)$ and the non-smooth part $G(K)$ of the objective function reflect the stability of the system and the sparsity of the gain matrix, respectively.

Remark 3.3: The computational cost of Algorithm 2 is mainly in solving the matrix differential equations (11) and (24). Since the delay differential equation has no analytical solution, it needs to be solved by numerical methods, such as the Euler method and Runge-Kutta methods.

Remark 3.4: The sparsity of $K$ can improve computational efficiency. For instance, if $K$ is a column sparse matrix, the corresponding columns of the matrix $BK$ in (24) are also equal to 0. In addition, the zero columns of $K$ reduce the computational complexity of the proximal mapping: if $q_i = 0$, then $\mathrm{prox}(q_i) = 0$.
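Putting Theorem 3.8 and Algorithm 2 together, the iteration can be sketched as below. Here `H` and `grad_H` are abstract callables standing in for the delay-system objective and its gradient from Equations (24)-(25), which are not reproduced here; the quadratic toy `H` at the bottom is an assumption used only to exercise the loop, and `eta` is the backtracking factor from inequality (30).

```python
import numpy as np

def prox_col_norm(X, lam, beta):
    """Columnwise prox of G(X) = lam * sum_i beta_i ||q_i||_2, Eq. (27)."""
    Z = np.zeros_like(X)
    for i in range(X.shape[1]):
        q = X[:, i]
        nrm = np.linalg.norm(q)
        if nrm > lam * beta[i]:
            Z[:, i] = (nrm - lam * beta[i]) / nrm * q
    return Z

def proximal_gradient(H, grad_H, K0, lam, beta, t0=1.0, eta=0.5,
                      tol=1e-6, max_iter=500):
    """Proximal gradient iteration in the spirit of Algorithm 2."""
    K = K0.copy()
    for _ in range(max_iter):
        g = grad_H(K)
        t = t0
        while True:                       # backtracking on inequality (30)
            K_new = prox_col_norm(K - t * g, t * lam, beta)
            D = K_new - K
            if H(K_new) <= H(K) + np.sum(g * D) + np.sum(D * D) / (2 * t):
                break
            t *= eta
        if np.linalg.norm(K_new - K) <= tol:
            return K_new
        K = K_new
    return K

# Toy smooth part (a stand-in for the delay-system objective):
# H(K) = 0.5 * ||K - M||_F^2, whose proximal-gradient fixed point is the
# columnwise shrinkage of M.
M = np.array([[3.0, 0.1], [4.0, -0.1]])
H = lambda K: 0.5 * np.linalg.norm(K - M) ** 2
grad_H = lambda K: K - M
beta = np.ones(2)
K_star = proximal_gradient(H, grad_H, np.zeros((2, 2)), lam=1.0, beta=beta)
print(K_star)   # the small-norm second column of M is shrunk to zero
```

The zeroed column in the result is the mechanism by which the method discards entire (delayed) state blocks, as in Remark 3.4.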
Figure 1. The solution of the closed-loop system in Example 4.1.
Figure 2. The solution of the closed-loop system in Example 4.1.
$$A_2 = \begin{bmatrix} 0.1 & -0.1 & 0 & 0.2 \\ 0 & -0.2 & 0.1 & 0 \\ 0 & 0.1 & 0 & 0.3 \\ 0 & -0.3 & 0.1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.1 \\ -0.3 & 0 \\ 0.1 & 0 \end{bmatrix}, \quad \tau_1 = 2, \ \tau_2 = 3.$$

By Algorithm 1, the uncontrolled system has three unstable characteristic roots and is unstable. The controller structure is
$$u(t) = K_0 x(t) + K_1 x(t - \tau_1) + K_2 x(t - \tau_2).$$
Let the initial value of $K$ be $K^0 = 1_{2 \times 12}$, $T = 10$ and $\lambda = 1$. The column sparse state feedback is obtained by Algorithm 2. In $G(K)$, the weight coefficient of the $i$th column of $K_j$ is denoted as $\beta_i^{K_j}$.

(1) The weight coefficients $\beta_i^{K_j} = 1$ for $i = 1, 2, 3, 4$, $j = 0, 1, 2$:
$$K^* = \begin{bmatrix} -1.337 & 0.844 & 0.855 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.380 & -0.833 & -0.074 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

(2) The weight coefficients for $K_0$ are $\beta_i^{K_0} = 10^4$ for $i = 1, 2, 3, 4$, and the rest are 1:
$$K^* = \begin{bmatrix} 0 & 0 & 0 & 0 & -0.765 & 0.957 & 0.961 & 0.649 & 0 & 0 & 0 & 0.075 \\ 0 & 0 & 0 & 0 & 0.189 & -0.355 & -0.184 & -0.204 & 0 & 0 & 0 & -0.018 \end{bmatrix}.$$

(3) The weight coefficients for $K_0$ and $K_2$ are $\beta_i^{K_0} = \beta_i^{K_2} = 10^4$ for $i = 1, 2, 3, 4$, and the rest are 1:
$$K^* = \begin{bmatrix} 0 & 0 & 0 & 0 & -0.738 & 1.018 & 0.898 & 0.715 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.186 & -0.323 & -0.155 & -0.231 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

It is verified by Algorithm 1 that the closed-loop systems under sparse gain feedback control in the above cases are asymptotically stable. In this example, only the non-delayed states are used in (1). In (2) and (3), we see that increasing the weights on the columns of the non-delayed states helps to make the corresponding columns of the gain matrix sparse. In applications of linear delay systems, we may be able to use only the delayed states to stabilise the system, due to communication delay.

Example 4.3: Consider the system
$$\dot{x}(t) = A_0 x(t) + A_1 x(t - \tau_1) + A_2 x(t - \tau_2) + B u(t), \quad (33)$$
where
$$A_0 = \begin{bmatrix} 0.9 & 0.5 & 0 & -0.1 & 0 \\ 0 & 0 & -0.6 & 0 & 0.5 \\ -0.1 & 0 & 0 & 0.8 & 0 \\ -0.3 & 0 & 0 & -0.5 & 0.5 \\ -0.8 & 0 & 0.6 & 1 & 0 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 1.1 & -0.5 & 0.8 & -0.1 & 0.3 \\ 1.1 & 1 & 0.1 & 0.7 & -0.7 \\ -0.9 & 0 & 0 & 0 & -0.4 \\ 1.1 & -0.7 & -0.2 & 0 & 0.3 \\ 0 & 0 & -0.2 & 0 & 0 \end{bmatrix},$$
$$A_2 = \begin{bmatrix} -0.6 & 1.2 & 0.2 & 1 & -0.4 \\ 0 & -0.8 & 0 & 1 & -0.3 \\ 1.1 & 0.4 & -0.4 & -0.9 & -0.4 \\ 0 & 0 & 0.1 & -0.6 & -0.6 \\ -0.1 & -0.2 & 0 & -0.1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad \tau_1 = 1, \ \tau_2 = 2.$$

By Algorithm 1, the uncontrolled system has five unstable characteristic roots and is unstable. The controller structure is
$$u(t) = K_0 x(t) + K_1 x(t - \tau_1) + K_2 x(t - \tau_2).$$
Let the initial value of $K$ be $K^0 = 1_{3 \times 15}$, $T = 10$ and $\lambda = 3$. We obtain the column sparse gain matrix by Algorithm 2.

(1) The weight coefficients $\beta_i^{K_j} = 1$ for $i = 1, \ldots, 5$, $j = 0, 1, 2$:
$$K_0^* = \begin{bmatrix} -3.016 & 1.582 & 0.317 & 0 & 2.120 \\ -1.128 & -4.527 & 0.733 & 0 & 2.271 \\ -0.711 & -0.514 & -3.211 & 0 & -1.150 \end{bmatrix}, \quad K_1^* = 0_{3 \times 5}, \quad K_2^* = 0_{3 \times 5}.$$

(2) The weight coefficients for columns 3-5 of $K_j$ are $\beta_3^{K_j} = \beta_4^{K_j} = \beta_5^{K_j} = 10^4$ for $j = 0, 1, 2$, and the rest are 1:
$$K_0^* = \begin{bmatrix} -1.271 & 6.682 & 0 & 0 & 0 \\ -1.437 & -6.430 & 0 & 0 & 0 \\ -1.413 & 5.132 & 0 & 0 & 0 \end{bmatrix}, \quad K_1^* = \begin{bmatrix} -3.716 & -0.036 & 0 & 0 & 0 \\ 0.444 & -0.019 & 0 & 0 & 0 \\ 1.846 & 0.032 & 0 & 0 & 0 \end{bmatrix}, \quad K_2^* = \begin{bmatrix} 1.324 & -1.756 & 0 & 0 & 0 \\ 0.473 & -1.124 & 0 & 0 & 0 \\ -0.474 & 1.159 & 0 & 0 & 0 \end{bmatrix}.$$

Algorithm 1 verifies that the sparse gain matrices obtained under the different weight coefficients in (1) and (2) stabilise the original system. In particular, in (2), only the states $x_1$ and $x_2$ are required to stabilise the system, without the other states.
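The stability checks above rely on Algorithm 1 (Hu, 2020), which is given in Appendix 2 and not reproduced in this excerpt. A rough empirical substitute is to simulate the closed-loop trajectories by the Euler method and observe whether they decay; the two-state system and gains below are hypothetical, not the paper's examples.

```python
import numpy as np

def simulate_closed_loop(A_list, B, K_blocks, taus, x0, T=30.0, h=1e-3):
    """Euler simulation of x'(t) = sum_j (A_j + B K_j) x(t - tau_j),
    with tau_0 = 0 and the constant history x(t) = x0 for t <= 0."""
    d = len(x0)
    n_steps = int(round(T / h))
    lags = [0] + [int(round(tau / h)) for tau in taus]
    closed = [A + B @ Kb for A, Kb in zip(A_list, K_blocks)]
    X = np.tile(np.asarray(x0, float), (n_steps + 1, 1))
    for n in range(n_steps):
        dx = np.zeros(d)
        for M, nd in zip(closed, lags):
            past = X[n - nd] if n - nd >= 0 else np.asarray(x0, float)
            dx += M @ past
        X[n + 1] = X[n] + h * dx
    return X

# Hypothetical 2-state system with one delay, stabilised by a gain that
# uses only the non-delayed state (K1 = 0, i.e. a column sparse K).
A0 = np.array([[0.5, 1.0], [0.0, 0.3]])
A1 = np.array([[0.1, 0.0], [0.0, 0.1]])
B = np.eye(2)
K0 = np.array([[-2.0, -1.0], [0.0, -1.8]])
K1 = np.zeros((2, 2))
X = simulate_closed_loop([A0, A1], B, [K0, K1], taus=[1.0], x0=[1.0, -1.0])
print(np.linalg.norm(X[-1]))   # a small terminal norm indicates decay
```

Unlike Algorithm 1, a decaying simulation from one initial history is only evidence of stability, not a proof; it is useful as a quick sanity check on a computed sparse gain.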
Remark 4.1: These examples show that sparse feedback can determine the state feedback structure of a delay system without specifying the sparse controller structure in advance.

Remark 4.2: A local optimal solution may be obtained for different initial values of the non-convex problem (21), but the final gain matrix is always sparse, with little structural difference.

5. Conclusion

Under a special matrix norm, the sparse feedback problem of linear delay systems is transformed into an optimisation problem. The unconstrained problem with a non-smooth part is numerically solved using the proximal gradient method, and the row or column sparse feedback controller of the system can be obtained. In future work, a direction is to improve the numerical efficiency and to apply the method to higher dimensional or more varied systems.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work is supported by the National Natural Science Foundation of China (11871330 and 11971303) and the Natural Science Foundation of Shanghai (21ZR1426400).

ORCID

Anping Tang http://orcid.org/0000-0002-7935-368X

References

Beck, A. (2017). First-order methods in optimization. Society for Industrial and Applied Mathematics.
Bereketoglu, H., & Huseynov, A. (2010). Convergence of solutions of nonhomogeneous linear difference systems with delays. Acta Applicandae Mathematicae, 110(1), 259-269. https://doi.org/10.1007/s10440-008-9404-2
Chen, P., Liu, S., Zhang, D., & Yu, L. (2021). Adaptive event-triggered decentralized dynamic output feedback control for load frequency regulation of power systems with communication delays. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 1-13. https://doi.org/10.1109/TSMC.2021.3129783
Chen, P., Zhang, D., Yu, L., & Yan, H. (2022). Dynamic event-triggered output feedback control for load frequency control in power systems with multiple cyber attacks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 1-13. https://doi.org/10.1109/TSMC.2022.3143903
Chen, S. S., Donoho, D. L., & Saunders, M. A. (2001). Atomic decomposition by basis pursuit. SIAM Review, 43(1), 129-159. https://doi.org/10.1137/S003614450037906X
Cotter, S., Rao, B., Engan, K., & Kreutz-Delgado, K. (2005). Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 53(7), 2477-2488. https://doi.org/10.1109/TSP.2005.849172
Deng, Y., Léchappé, V., Moulay, E., & Plestan, F. (2019). State feedback control and delay estimation for LTI system with unknown input-delay. International Journal of Control, 94(9), 2369-2378. https://doi.org/10.1080/00207179.2019.1707288
Donoho, D. L. (2006). For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 59(6), 797-829. https://doi.org/10.1002/(ISSN)1097-0312
Fridman, E. (2014). Introduction to time-delay systems: Analysis and control. Springer.
Hale, J. K., & Verduyn Lunel, S. M. (1993). Introduction to functional differential equations. Springer-Verlag.
Hu, G. D. (2020). Stability criteria of high-order delay differential systems. International Journal of Control, 93(9), 2095-2103. https://doi.org/10.1080/00207179.2018.1541365
Hu, G. D., & Hu, R. H. (2020). Numerical optimization for feedback stabilization of linear systems with distributed delays. Journal of Computational and Applied Mathematics, 371, Article 112706. https://doi.org/10.1016/j.cam.2019.112706
Kim, A. V. (2015). Systems with delays: Analysis, control, and computations. Wiley-Scrivener.
Kolmanovskii, V., & Myshkis, A. (1999). Introduction to the theory and applications of functional differential equations. Springer Netherlands.
Kyrychko, Y., & Hogan, S. (2010). On the use of delay equations in engineering applications. Journal of Vibration and Control, 16(7-8), 943-960. https://doi.org/10.1177/1077546309341100
Lin, F., Fardad, M., & Jovanovic, M. R. (2012). Sparse feedback synthesis via the alternating direction method of multipliers. In 2012 American control conference (ACC) (pp. 4765-4770). IEEE.
Liu, D., Han, R., & Xu, G. (2018). Controller design for distributed parameter systems with time delays in the boundary feedbacks via the backstepping method. International Journal of Control, 93(5), 1220-1230. https://doi.org/10.1080/00207179.2018.1500717
Mahmoud, M. S. (2010). Improved stability and stabilization approach to linear interconnected time-delay systems. Optimal Control Applications and Methods, 31(2), 81-92. https://doi.org/10.1002/oca.884
Meier, L., Van De Geer, S., & Bühlmann, P. (2008). The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1), 53-71. https://doi.org/10.1111/j.1467-9868.2007.00627.x
Polyak, B., Khlebnikov, M., & Shcherbakov, P. (2014). Sparse feedback in linear control systems. Automation and Remote Control, 75(12), 2099-2111. https://doi.org/10.1134/S0005117914120029
Polyak, B., & Tremba, A. (2020). Sparse solutions of optimal control via Newton method for under-determined systems. Journal of Global Optimization, 76(3), 613-623. https://doi.org/10.1007/s10898-019-00784-z
Quattoni, A., Carreras, X., Collins, M., & Darrell, T. (2009). An efficient projection for l1,∞ regularization. In Proceedings of the 26th annual international conference on machine learning (pp. 1-8). ACM Press.
Schmidt, M., Roux, N. L., & Bach, F. (2011). Convergence rates of inexact proximal-gradient methods for convex optimization. Advances in Neural Information Processing Systems, 24, 1458-1466.
Shimizu, K. (2017). Optimization of parameter matrix: Optimal output feedback control and optimal PID control. In 2017 IEEE conference on control technology and applications (pp. 1734-1739). IEEE.
Simon, N., & Tibshirani, R. (2012). Standardization and the group lasso penalty. Statistica Sinica, 22(3), 983-1001. https://doi.org/10.5705/ss.2011.075
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267-288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x
Tropp, J. A. (2006). Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Processing, 86(3), 589-602. https://doi.org/10.1016/j.sigpro.2005.05.031
Uchida, K., Shimemura, E., Kubo, T., & Abe, N. (1988). The linear-quadratic optimal control approach to feedback control design for systems with delay. Automatica, 24(6), 773-780. https://doi.org/10.1016/0005-1098(88)90053-2
Yagoubi, M., & Chaibi, R. (2020). A nonsmooth Newton method for the design of state feedback stabilizers under structure constraints. In 2020 59th IEEE conference on decision and control (pp. 5992-5997). IEEE.

Appendix 1. Proof of Theorem 3.3

In order to prove the theorem, we need the following lemma.

Lemma A.1 (Gronwall's lemma): If $v(t)$ and $\xi(t)$ are nonnegative continuous functions in $[t_0, \infty)$ verifying
$$v(t) \le c + \int_{t_0}^{t} \xi(\tau) v(\tau) \, d\tau,$$
then for any $t \in [t_0, \infty)$, the following inequality holds:
$$v(t) \le c \exp\left(\int_{t_0}^{t} \xi(s) \, ds\right).$$

$$\lambda^{2,\xi}(t) := \int_0^1 \left[ \frac{\partial f\big(K + \eta \xi e_{i,j},\ F[K, t] + \eta \varphi^\xi(t),\ F_{delay}[K, t] + \eta \varphi^\xi_{delay}(t)\big)}{\partial F} - \frac{\partial f\big(K, F[K, t], F_{delay}[K, t]\big)}{\partial F} \right] \varphi^\xi(t) \, d\eta,$$

$$\lambda^{3,\xi}(t) := \int_0^1 \left[ \frac{\partial f\big(K + \eta \xi e_{i,j},\ F[K, t] + \eta \varphi^\xi(t),\ F_{delay}[K, t] + \eta \varphi^\xi_{delay}(t)\big)}{\partial F_{delay}} - \frac{\partial f\big(K, F[K, t], F_{delay}[K, t]\big)}{\partial F_{delay}} \right] \varphi^\xi_{delay}(t) \, d\eta.$$

Furthermore, define another function $\rho : [-a, 0) \cup (0, a] \to \mathbb{R}$ as follows:
$$\rho(\xi) = |\xi|^{-1} \int_0^T \big[ \lambda^{1,\xi}(\sigma) + \lambda^{2,\xi}(\sigma) + \lambda^{3,\xi}(\sigma) \big] \, d\sigma.$$

For each $\xi \in [-a, a]$, we obtain
$$F[K, t] + \eta \varphi^\xi(t) \in B_{d \times d}(C_1) \quad \text{for } t \in [0, T],\ \eta \in [0, 1].$$

It follows from (A2) that
$$\varphi^\xi(t) = \int_0^t \big[ \lambda^{1,\xi}(\sigma) + \lambda^{2,\xi}(\sigma) + \lambda^{3,\xi}(\sigma) \big] \, d\sigma + \int_0^t \frac{\partial f}{\partial k_{i,j}} \xi \, d\sigma + \int_0^t \frac{\partial f}{\partial F} \varphi^\xi(\sigma) \, d\sigma + \int_0^t \frac{\partial f}{\partial F_{delay}} \varphi^\xi_{delay}(\sigma) \, d\sigma. \quad (A7)$$

Furthermore, integrating the auxiliary system gives
$$\Phi_{i,j}(t) = \int_0^t \frac{\partial f}{\partial k_{i,j}} \, d\sigma + \int_0^t \frac{\partial f}{\partial F} \Phi_{i,j}(\sigma) \, d\sigma + \sum_{j=1}^{m} \int_0^t \frac{\partial f}{\partial F[K, t - \tau_j]} \Phi_{i,j}(\sigma - \tau_j) \, d\sigma. \quad (A8)$$

Subtracting (A8) from $\xi^{-1}$ times (A7) yields
$$\xi^{-1} \varphi^\xi(t) - \Phi_{i,j}(t) = \xi^{-1} \int_0^t \big[ \lambda^{1,\xi}(\sigma) + \lambda^{2,\xi}(\sigma) + \lambda^{3,\xi}(\sigma) \big] \, d\sigma + \int_0^t \frac{\partial f}{\partial F} \big( \xi^{-1} \varphi^\xi(\sigma) - \Phi_{i,j}(\sigma) \big) \, d\sigma + \sum_{j=1}^{m} \int_0^t \frac{\partial f}{\partial F[K, t - \tau_j]} \big( \xi^{-1} \varphi^\xi(\sigma - \tau_j) - \Phi_{i,j}(\sigma - \tau_j) \big) \, d\sigma.$$

Here, $P(s) = \det\big[ sI - (A_0 + B K_0) - \sum_{j=1}^{m} (A_j + B K_j) \exp(-\tau_j s) \big]$ is the characteristic equation of system (3).