
Received: 3 November 2021 Revised: 18 April 2022 Accepted: 1 July 2022

DOI: 10.1002/oca.2929

RESEARCH ARTICLE

Sparse feedback stabilization in high-order linear systems

Anping Tang1, Guang-Da Hu1, Yuhao Cong2

1 Department of Mathematics, Shanghai University, Shanghai, China
2 Shanghai Customs College, Shanghai, China

Correspondence
Guang-Da Hu, Department of Mathematics, Shanghai University, Shanghai, China. Email: ghu@hit.edu.cn

Funding information
National Natural Science Foundation of China, Grant/Award Numbers: 11871330, 11971303; Natural Science Foundation of Shanghai, Grant/Award Number: 21ZR1426400

Abstract
We consider the column sparsity of the feedback stabilization gain matrix in high-order linear systems. By means of a special matrix norm and the state transition matrix quadratic cost function (SQF) of the systems, the sparse feedback stabilization controller design problem is formulated as a regularized SQF optimization problem. We further derive the proximal mapping of the special matrix norm, and then, based on the gradient descent of the SQF part of the objective function, the proximal gradient method is introduced to develop an algorithm for solving the non-smooth optimization problem. Numerical examples are given to illustrate the effectiveness of the proposed method.

KEYWORDS
high-order linear systems, proximal gradient method, sparse feedback, stabilization, state transition matrix

1 INTRODUCTION

We are concerned with high-order dynamical linear systems

$$A_m x^{(m)}(t) + A_{m-1} x^{(m-1)}(t) + \cdots + A_1 \dot{x}(t) + A_0 x(t) = Bu(t), \tag{1}$$

where $A_i \in \mathbb{R}^{d \times d}$, $i = 0, 1, \ldots, m$, and $B \in \mathbb{R}^{d \times p}$ are constant matrices, and $x(t) \in \mathbb{R}^d$ and $u(t) \in \mathbb{R}^p$ are the state vector and control vector, respectively. Throughout this article, we assume that the matrix $A_m$ is nonsingular. High-order linear systems appear in many fields; for example, see References 1 and 2.
The feedback design for linear systems has been well studied in both the mathematical and control literature since the seminal work of Kalman in Reference 3. Linear matrix inequalities (LMIs) and the Lyapunov function method are applied to the search for the feedback stabilization gain in Reference 4. In References 5 and 6, the optimal feedback gain is found by solving the algebraic matrix Riccati equation. Moreover, the problem of robust pole assignment in high-order descriptor linear systems is considered in Reference 7. The partial eigenvalue assignment problem (PEAP) of high-order linear systems by state feedback is solved based on minimum norm.8 Gradient-based optimization methods are considered for feedback controller design.9,10 By means of the argument principle, stability criteria are presented that are necessary and sufficient conditions for the stability of high-order systems.11
The idea of sparsity is to minimize the number of nonzero components of a vector, and it has been widely applied in many fields, such as regression shrinkage and selection via the lasso based on the $l_1$ norm of vectors.12,13 A sparse signal recovery method by reweighted $l_1$ minimization is proposed in Reference 14. Tropp15 presents theoretical and numerical results for a greedy pursuit algorithm based on the $l_{1,\infty}$ norm. In recent years, the group lasso has been proposed and applied to logistic regression. In the control community, sparse feedback control design for linear systems based on LMIs is studied in References 16-18. An algorithm combining $H_\infty$ performance and the $l_0$ norm to design output feedback is proposed in Reference 19. In Reference 20, a method is presented for the design of sparse optimal feedback gains with an $l_1$ penalty by the ADMM method. In References 21 and 22, the design of structured and sparse optimal feedback gains by semi-definite programming (SDP) is proposed. In order to reduce the communication and computational burden, sparse feedback control has also been applied in networked control systems.23-27
In this article, the sparse feedback stabilization of high-order systems is investigated. We focus on the number of nonzero columns of the feedback gain matrix rather than the nonzero entries of a vector, which we call column sparse control. The motivation for designing sparse control of high-order linear systems is to reduce the number of states used by the controller, which is synonymous with reducing the number of sensors or measuring devices. One approach is to pre-select the controller with a priori specified structural constraints, but this increases the complexity of the algorithm. In order to overcome this problem, we propose a new strategy that designs the sparse feedback gain via a regularized state transition matrix quadratic cost function (SQF), thereby avoiding combinatorial structure selection.
Based on a special matrix norm that promotes sparsity of the feedback gain matrix, combined with the SQF of high-order linear systems, a regularized SQF problem is proposed to determine a sparse feedback gain matrix that stabilizes the system. The value of the SQF part of the objective function and its gradient are obtained by solving two Lyapunov equations, respectively, and the proximal mapping of the weighted matrix norm is derived, inspired by the vector proximal mapping. We develop a proximal gradient algorithm, consisting of gradient descent and proximal mapping steps, to solve this non-smooth optimization problem.
The following are the primary contributions of our work:
1. We formulate the column sparse feedback stabilization controller design for high-order linear systems as a
regularized SQF problem.
2. An efficient proximal gradient algorithm is developed to solve the regularized SQF problem.
This article is organized as follows. We introduce the column sparse feedback and review the stability criteria of
high-order systems in Section 2. In Section 3, we use a special matrix norm to formulate regularized SQF problem and
develop an algorithm based on proximal gradient method. Several examples are given to verify the effectiveness of the
algorithm in Section 4. In Section 5, some conclusions are given.
Throughout this article, the following notations are adopted. For a symmetric matrix $Q \in \mathbb{R}^{n \times n}$, we write $Q \succ 0$ if it is positive definite. The $n \times n$ identity matrix and zero matrix are denoted by $I_n$ and $0_n$. $\|A\|_F$ stands for the Frobenius norm of a matrix $A$. The $p$-norm of an $n$-dimensional vector $x$ is defined by $\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$. $\operatorname{Re} z$ stands for the real part of a complex number $z$.

2 PRELIMINARIES

In this section, we introduce the column sparse feedback stabilization of the system, and review stability criteria of
high-order linear systems.

2.1 Sparse feedback of high-order linear systems

Denote

$$Y(t) = \left[\, x(t)^\top \ \ \dot{x}(t)^\top \ \cdots \ (x^{(m-1)}(t))^\top \,\right]^\top.$$

The matrices $\tilde{A}_i$ and $\tilde{B}$ are defined by

$$\tilde{A}_i = -A_m^{-1} A_i \quad \text{for } i = 0, 1, \ldots, m-1, \qquad \tilde{B} = A_m^{-1} B.$$

The high-order linear system (1) can be converted into the following first-order system

$$\dot{Y}(t) = \bar{A}\, Y(t) + B u(t), \tag{2}$$

where

$$\bar{A} = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I \\ \tilde{A}_0 & \tilde{A}_1 & \tilde{A}_2 & \cdots & \tilde{A}_{m-1} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \tilde{B} \end{bmatrix}.$$

FIGURE 1 Example of column sparse feedback
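As an illustration of this conversion, the following is a minimal numerical sketch (assuming `numpy`; the helper name `companion_form` is ours, not part of the original formulation):

```python
import numpy as np

def companion_form(A_list, B):
    """Build A_bar and B of system (2) from A_0, ..., A_m and B of system (1).

    A_list = [A0, A1, ..., Am], each d x d with Am nonsingular; B is d x p.
    """
    *A_low, Am = A_list
    m, d = len(A_low), Am.shape[0]
    Am_inv = np.linalg.inv(Am)
    A_bar = np.zeros((m * d, m * d))
    A_bar[:-d, d:] = np.eye((m - 1) * d)       # identity blocks on the super-diagonal
    for i, Ai in enumerate(A_low):             # last block row: tilde A_i = -Am^{-1} A_i
        A_bar[-d:, i * d:(i + 1) * d] = -Am_inv @ Ai
    B_bar = np.zeros((m * d, B.shape[1]))
    B_bar[-d:, :] = Am_inv @ B                 # last block: tilde B = Am^{-1} B
    return A_bar, B_bar
```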

Let the state feedback controller of system (1) have the structure

$$u(t) = K_0 x(t) + K_1 \dot{x}(t) + \cdots + K_{m-1} x^{(m-1)}(t), \tag{3}$$

where $K_j \in \mathbb{R}^{p \times d}$, $j = 0, 1, \ldots, m-1$. Then the closed-loop system is given by

$$A_m x^{(m)}(t) + (A_{m-1} - BK_{m-1})\, x^{(m-1)}(t) + \cdots + (A_0 - BK_0)\, x(t) = 0. \tag{4}$$

The feedback gain matrix $K$ can be partitioned as

$$K = [K_0, K_1, \ldots, K_{m-1}].$$

From (2) and $u(t) = KY(t)$, the first-order closed-loop system is

$$\dot{Y}(t) = (\bar{A} + BK)\, Y(t). \tag{5}$$

The dimension of the first-order linear system (5) is $N = md$. As the number of states and the order of the system rise, the gain matrix $K$ becomes high-dimensional. To avoid this problem, we aim to seek a column sparse feedback gain matrix $K$ such that the closed-loop system (4) is asymptotically stable.
Let $\mathcal{S}$ denote the set of stabilizing feedback gains,

$$\mathcal{S} = \left\{ K : \operatorname{Re} \lambda_K(\bar{A} + BK) < 0 \right\},$$

where $\lambda_K$ are the eigenvalues of $\bar{A} + BK$. Indeed, the sparse stabilizing controller satisfies $K_{\mathrm{sp}} \in \mathcal{S}$.
Designing the state feedback $u(t) = KY(t)$ from an incomplete state vector is equivalent to designing a column sparse stabilizing controller, that is, a gain matrix $K$ having zero columns. For example, if columns 3 to 6 of the gain matrix $K$ are zero, the state feedback control needs only the derivative-free states $x_1(t)$ and $x_2(t)$, which also provides a way to build the controller structure (Figure 1).

2.2 Stability criteria of high-order linear systems

The characteristic equation of system (4) is

$$P(s) = \det\left[ A_m s^m + (A_{m-1} - BK_{m-1})\, s^{m-1} + \cdots + (A_0 - BK_0) \right] = 0. \tag{6}$$



Algorithm 1. An algorithm to check the stability of a high-order system

1: Calculate the upper bound $r$ of the unstable roots of the system (4). This gives the closed semicircular curve $l$ as the boundary of $D$, which consists of the segment $\{s = it : -r \le t \le r\}$ and the half-circle $\{s : |s| = r, \ -\pi/2 \le \arg s \le \pi/2\}$.
2: Take a sufficiently large integer $n$ to discretize $l$ as uniformly as possible in the clockwise direction, and record these nodes as $\{s_j\}_{j=1}^n$.
3: For each $s_j$ ($j = 1, 2, \ldots, n$), calculate the value $P(s_j)$ and check whether $P(s_j) = 0$ by testing whether its magnitude satisfies $|P(s_j)| \le \delta_1$ for a preassigned tolerance $\delta_1$. If this holds, that is, $P(s_j) = 0$, then the system (4) is not asymptotically stable and the algorithm stops. Otherwise, continue to the next step.
4: Compute $\Delta_l \arg P(s)$ along the ordered nodes $\{s_j\}_{j=1}^n$ and check whether $|\Delta_l \arg P(s)| \le \delta_2$ for a preassigned tolerance $\delta_2$. If $\Delta_l \arg P(s) = 0$, the system is asymptotically stable; otherwise it is not stable.

Lemma 1 (11). Every unstable characteristic root $s$ of system (4) satisfies

$$|s| \le r = \max\{1, q\}, \tag{7}$$

where $q$ is defined by

$$q = \sum_{j=0}^{m-1} \left\| A_m^{-1} (A_j - BK_j) \right\|_2. \tag{8}$$

The domain $D$ is defined by $D = \{s : \operatorname{Re} s \ge 0, \ |s| \le r\}$, and the boundary of $D$ is denoted by $l$. Based on (6), Algorithm 1 is given in Reference 11 to check the stability of the closed-loop system (4) or to calculate the number of unstable characteristic roots.
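As a rough illustration of Algorithm 1, the following sketch counts the winding of $P(s)$ around the boundary $l$ numerically. Assumptions are ours: `P` is a callable evaluating (6), the discretization density and tolerances are placeholders, and we traverse $l$ in the opposite (counterclockwise) direction, which only flips the sign of the winding and does not affect the stability test:

```python
import numpy as np

def is_stable(P, r, n=20000, delta1=1e-8, delta2=1e-2):
    """Sketch of Algorithm 1: argument-principle test on the half-disk D.

    P: callable s -> complex value of the characteristic polynomial (6);
    r: radius bound from Lemma 1, (7). Returns True if D contains no root.
    """
    # Boundary l of D: imaginary-axis segment from ir to -ir, then the
    # right half-circle from -ir through r back to ir.
    seg = 1j * np.linspace(r, -r, n)
    arc = r * np.exp(1j * np.linspace(-np.pi / 2, np.pi / 2, n))
    vals = np.array([P(s) for s in np.concatenate([seg, arc])])
    if np.min(np.abs(vals)) <= delta1:          # step 3: a root (nearly) on l
        return False
    # Step 4: total change of arg P(s) along l; it equals 2*pi times the
    # number of zeros enclosed, so "close to zero" means asymptotic stability.
    winding = np.sum(np.angle(vals[1:] / vals[:-1]))
    return abs(winding) <= delta2
```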

3 SPARSE FEEDBACK STABILIZATION VIA PROXIMAL GRADIENT METHOD

In this section, we introduce a matrix norm that promotes sparsity of the feedback gain matrix. On this basis, we propose a regularized SQF problem to design a sparse feedback controller for high-order linear systems, and the proximal gradient method is used to develop an efficient algorithm for solving the problem.

3.1 Regularized SQF problem

In order to formulate the problem in this section, we first define the following special matrix norm.

Definition 1. Let $X = [q_1\ q_2\ \cdots\ q_n]$ be an $m \times n$ matrix, where $q_i \in \mathbb{R}^m$ is the $i$th column of $X$. Then we define the weighted matrix norm

$$\|X\|_{\mathrm{col-1}} = \sum_{i=1}^{n} \beta_i \|q_i\|_2, \tag{9}$$

where $\beta_i$ is a positive weight for $i = 1, 2, \ldots, n$. Larger weights correspond to more expensive components of the column vector.
Now we review the concept of the state transition matrix of linear systems. The state transition matrix of the closed-loop system (5) is denoted by $\Phi(K, t) \in \mathbb{R}^{N \times N}$, which is the solution of the matrix differential equation

$$\dot{\Phi}(K, t) = (\bar{A} + BK)\, \Phi(K, t) \quad \text{for } t > 0, \tag{10}$$

under the condition $\Phi(K, 0) = I$.



Lemma 2 (28). The closed-loop system (5) is asymptotically stable if and only if there exists a feedback gain matrix $K$ such that

$$\lim_{T \to \infty} \int_0^T \|\Phi(K, t)\|_F^2 \, dt$$

exists and is finite.

From Lemma 2 and the weighted matrix norm in Definition 1, we introduce the performance indices

$$H(K) = \int_0^\infty \|\Phi(K, t)\|_F^2 \, dt, \tag{11}$$

$$G(K) = \lambda \|K\|_{\mathrm{col-1}}, \quad \lambda > 0. \tag{12}$$

We refer to $H(K)$ as the SQF. Taking $G(K)$ as the regularization term or penalty function of the SQF, the objective function is defined by $J(K) = H(K) + G(K)$. Subsequently, we formally propose the regularized SQF problem for designing the sparse feedback stabilization as follows:

$$\begin{aligned}
\min_K \quad & J(K) = H(K) + G(K) \\
\text{subject to:} \quad & \dot{\Phi}(K, t) = (\bar{A} + BK)\,\Phi(K, t), \\
& \Phi(K, 0) = I, \\
& K \in \mathcal{S}.
\end{aligned} \tag{13}$$

The objective function of the optimization problem consists of two parts. $H(K)$ is the infinite-horizon SQF, which is smooth but non-convex on the set of stabilizing gains $\mathcal{S}$; by Lemma 2, finiteness of $H(K)$ ensures the stability of the system. $G(K)$ is a convex non-smooth matrix function on the set $\mathcal{S}$. Compared with the standard linear quadratic regulator (LQR), the regularized SQF problem replaces the input energy cost in LQR with $G(K)$, which mainly ensures the sparsity of the feedback controller and limits the magnitude of the gain matrix entries. A larger parameter $\lambda$ of $G(K)$ generally produces a sparser $K$.
Remark 1. When $B$ is an invertible matrix, a method for designing first-order linear system sparse feedback based on LMIs is given in Reference 17. For the non-invertible case of $B$ in high-order linear systems, the regularized SQF problem is proposed in the present paper.

Remark 2. Compared with the norm $\|X\|_{c1} = \sum_{j=1}^{n} \max_{1 \le i \le m} |x_{i,j}|$, which has been widely used for sparse solutions of matrix equations,29 the proposed matrix norm $\|X\|_{\mathrm{col-1}}$ groups each column by the vector 2-norm instead of the vector $\infty$-norm, which increases smoothness between the column vectors of the matrix norm.

3.2 Gradient of H(K)

We reduce the evaluation of $H(K)$ to solving a Lyapunov equation by the following lemma.

Lemma 3 (30). Given $W \succ 0$ and a Hurwitz matrix $A$, along the solution of the LTI system

$$\dot{z}(t) = A z(t), \quad z(0) = z_0,$$

it holds that

$$\int_0^\infty z(t)^\top W z(t)\, dt = \operatorname{Tr}(Q \Sigma_0),$$

where $\Sigma_0 = z_0 z_0^\top$ and $Q$ is the solution of the Lyapunov matrix equation

$$A^\top Q + Q A = -W.$$

First, $H(K)$ can be written as

$$H(K) = \int_0^\infty \|\Phi(K, t)\|_F^2 \, dt = \sum_{i=1}^{N} \int_0^\infty \Phi_i(K, t)^\top \Phi_i(K, t)\, dt,$$

where $\Phi_i(K, t)$ is the $i$th column of $\Phi(K, t)$. Applying Lemma 3 to system (5), when $\bar{A} + BK$ is a Hurwitz matrix,

$$\int_0^\infty \Phi_i(K, t)^\top \Phi_i(K, t)\, dt = \operatorname{Tr}(P \Sigma_i),$$

where $\Sigma_i$ denotes the matrix whose $(i, i)$ entry is 1 and whose remaining entries are all 0, and $P$ is the solution of the Lyapunov equation

$$(\bar{A} + BK)^\top P + P(\bar{A} + BK) = -I. \tag{14}$$

The performance index can be rewritten as

$$H(K) = \sum_{i=1}^{N} \operatorname{Tr}(P \Sigma_i) = \operatorname{Tr}(P), \tag{15}$$

where $P$ is the solution of the Lyapunov equation (14). For a given parameter matrix $K \in \mathcal{S}$, the solution $P \succ 0$ of the matrix equation (14) exists by the Lyapunov theorem; we denote it by $P(K)$. The problem can be further transformed as

$$H(K) = \begin{cases} \operatorname{Tr}(P(K)), & K \in \mathcal{S}, \\ \infty, & \text{otherwise}. \end{cases} \tag{16}$$

Furthermore, from the standard results in Reference 9, we know that for all $K \in \mathcal{S}$ the gradient of $H(K)$ is given by

$$\nabla H(K) = 2 B^\top P L, \tag{17}$$

where $L$ is the solution of the Lyapunov equation

$$(\bar{A} + BK)\, L + L\, (\bar{A} + BK)^\top = -I. \tag{18}$$

As a consequence, if $K \in \mathcal{S}$, then $H(K)$ and $\nabla H(K)$ can be obtained by solving the Lyapunov equations (14) and (18), respectively. Computationally, the complexity of solving these two Lyapunov equations is $O(N^3)$.
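For illustration, a minimal sketch of evaluating $H(K)$ and $\nabla H(K)$ through (14)-(18), assuming `scipy` is available; the helper name `sqf_and_gradient` and the eigenvalue stability check are our own choices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves A X + X A^T = Q

def sqf_and_gradient(A_bar, B, K):
    """Return H(K) = Tr(P) via (15) and grad H(K) = 2 B^T P L via (17)."""
    Acl = A_bar + B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf, None                            # K not stabilizing: H(K) is infinite by (16)
    N = Acl.shape[0]
    P = solve_continuous_lyapunov(Acl.T, -np.eye(N))   # (14): Acl^T P + P Acl = -I
    L = solve_continuous_lyapunov(Acl, -np.eye(N))     # (18): Acl L + L Acl^T = -I
    return np.trace(P), 2 * B.T @ P @ L
```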

3.3 Proximal gradient method for solving the problem

Proximal gradient methods are an appealing approach for solving these types of non-smooth optimization problems because of their fast theoretical convergence rates and strong practical performance.31,32 The method is the same as the iterative shrinkage-thresholding algorithm (ISTA) in the literature.33
Recall that Problem (13) is an optimal parameter selection problem. The proximal gradient method for solving the regularized SQF problem consists of a gradient descent step on the smooth part $H(K)$ followed by the proximal mapping of the non-smooth part $G(K)$. Equations (17) and (18) give the gradient of the smooth part of the objective function. The proximal mapping of the matrix norm function (12) is calculated by the following lemma.

Lemma 4 (34). Let $h : \mathbb{R}^m \to \mathbb{R}$ be given by $h(q) = g(\|q\|_2)$, where $g$ is a proper closed and convex function satisfying $\operatorname{dom}(g) \subseteq [0, \infty)$. Then

$$\operatorname{prox}_h(q) = \begin{cases} \operatorname{prox}_g(\|q\|_2)\, \dfrac{q}{\|q\|_2}, & q \ne 0, \\[4pt] \{\, u \in \mathbb{R}^m : \|u\|_2 = \operatorname{prox}_g(0) \,\}, & q = 0, \end{cases} \tag{19}$$

where $\operatorname{prox}_h$ and $\operatorname{prox}_g$ are the proximal mappings of $h$ and $g$, respectively.


Theorem 1. Let $G : \mathbb{R}^{m \times n} \to \mathbb{R}$ be given by $G(X) = \lambda \|X\|_{\mathrm{col-1}}$. For any $X \in \mathbb{R}^{m \times n}$,

$$[\operatorname{prox}_G(X)]_i = \begin{cases} \left[\|q_i\|_2 - \lambda \beta_i\right]_+ \dfrac{q_i}{\|q_i\|_2}, & q_i \ne 0, \\[4pt] 0, & q_i = 0, \end{cases} \tag{20}$$

where $[\operatorname{prox}_G(X)]_i$ denotes the $i$th column of the matrix $\operatorname{prox}_G(X)$ and $[a]_+ = \max\{0, a\}$.

Proof. The proximal mapping of $G(X) = \lambda \|X\|_{\mathrm{col-1}}$ is considered below, which is equivalent to solving the optimization problem

$$\min_Z \ L(Z) = \frac{1}{2} \|Z - X\|_F^2 + \lambda \|Z\|_{\mathrm{col-1}}.$$

This problem separates over the columns, since

$$L(Z) = \sum_{i=1}^{n} \left( \frac{1}{2} \|z_i - q_i\|_2^2 + \lambda \beta_i \|z_i\|_2 \right),$$

where $z_i$ is the $i$th column of $Z$. Using Lemma 4 and $\operatorname{prox}_g(t) = [t - \lambda \beta_i]_+$ for $g(t) = \lambda \beta_i t$ with $t \ge 0$, we get

$$z_i = \begin{cases} \left[\|q_i\|_2 - \lambda \beta_i\right]_+ \dfrac{q_i}{\|q_i\|_2}, & q_i \ne 0, \\[4pt] 0, & q_i = 0. \end{cases} \tag{21}$$
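In code, (20) is a column-wise soft thresholding; a minimal sketch, assuming `numpy` (the helper name `prox_col_norm` is ours):

```python
import numpy as np

def prox_col_norm(X, lam, beta):
    """Proximal mapping (20) of G(X) = lam * ||X||_{col-1}, column by column."""
    Z = np.zeros_like(X)
    for i in range(X.shape[1]):
        norm_i = np.linalg.norm(X[:, i])
        shrink = max(norm_i - lam * beta[i], 0.0)   # [ ||q_i||_2 - lam*beta_i ]_+
        if norm_i > 0.0:
            Z[:, i] = (shrink / norm_i) * X[:, i]   # sub-threshold columns become zero
    return Z
```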


Based on the objective function gradient (17) and the proximal mapping of the special matrix norm in Theorem 1, a proximal gradient algorithm is developed to obtain the sparse feedback gain matrix of the high-order system (1). The algorithm mainly consists of two steps: (1) gradient descent on the SQF part by solving the two Lyapunov equations (14) and (18); (2) the proximal mapping of the non-smooth matrix norm function by Theorem 1.

Initial conditions: The initialization of $K$ is obtained using the standard Lyapunov approach,35 which stabilizes the system. The closed-loop system (5) is stable if and only if there exists a positive definite matrix $P_0 \succ 0$ satisfying

$$(\bar{A} + BK)^\top P_0 + P_0 (\bar{A} + BK) \prec -2\sigma P_0,$$

where $\sigma > 0$ is the decay rate. Multiplying this inequality on the left and right by $Q_0 = P_0^{-1}$, we obtain

$$\bar{A} Q_0 + Q_0 \bar{A}^\top + B K Q_0 + Q_0 K^\top B^\top \prec -2\sigma Q_0,$$

and, introducing the new variable $F_0 = K Q_0$, we arrive at the linear matrix inequality

$$\bar{A} Q_0 + Q_0 \bar{A}^\top + B F_0 + F_0^\top B^\top \prec -2\sigma Q_0. \tag{22}$$

Algorithm 2. Proximal gradient method for the sparse feedback problem

1: Given positive constants $\lambda, \sigma_0 > 0$ and convergence tolerance $\varepsilon > 0$.
2: Initialize $K^0 \in \mathcal{S}$ by (22) and (23), where $\sigma = \sigma_0$.
3: repeat
4: (1) Gradient descent of $H(K^j)$: compute $(P^j, L^j)$ by solving the Lyapunov equations (14) and (18); then

$$\nabla H(K^j) = 2 B^\top P^j L^j, \qquad Z^j = K^j - t_j \nabla H(K^j).$$

(2) Proximal mapping: the next iterate is

$$K^{j+1} = \operatorname{prox}_{t_j G}(Z^j).$$

5: until the stopping criterion $\|K^{j+1} - K^j\|_F^2 < \varepsilon$ is satisfied.
6: For $K^j$, check whether the closed-loop system (5) is asymptotically stable using Algorithm 1. If the system is asymptotically stable, the algorithm stops; otherwise, reinitialize $K^0$ by changing the parameter values and restart the algorithm.

Hence, the initial stabilizing controller is given by

$$K_0 = F_0 Q_0^{-1}, \tag{23}$$

where $Q_0$ and $F_0$ satisfy relation (22).

Line search: In Algorithm 2, the step size $t_j$ in step 4 is determined by a backtracking procedure satisfying the inequality

$$H(K_{j+1}) \le H(K_j) + \operatorname{Tr}\!\left( \nabla H(K_j)^\top (K_{j+1} - K_j) \right) + \frac{1}{2 t_j} \|K_{j+1} - K_j\|_F^2. \tag{24}$$

The procedure requires a parameter $\eta < 1$ to ensure that the step size is reduced: when the inequality (24) is not satisfied, we set $t_j := \eta t_j$.
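The initialization (22)-(23) can be sketched with an off-the-shelf SDP modeler; a minimal version assuming `cvxpy` (the helper name, the strictness margin `eps`, and the pure-feasibility objective are our choices):

```python
import cvxpy as cp
import numpy as np

def initial_gain(A_bar, B, sigma=0.01, eps=1e-6):
    """Find (Q0, F0) satisfying LMI (22) and return K0 = F0 Q0^{-1} from (23)."""
    N, p = A_bar.shape[0], B.shape[1]
    Q0 = cp.Variable((N, N), symmetric=True)
    F0 = cp.Variable((p, N))
    lmi = A_bar @ Q0 + Q0 @ A_bar.T + B @ F0 + F0.T @ B.T + 2 * sigma * Q0
    constraints = [Q0 >> eps * np.eye(N),              # Q0 positive definite
                   (lmi + lmi.T) / 2 << -eps * np.eye(N)]
    cp.Problem(cp.Minimize(0), constraints).solve()    # pure feasibility problem
    return F0.value @ np.linalg.inv(Q0.value)
```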
Remark 3. We need an initial $K_0 \in \mathcal{S}$. Besides the LMI method used above, solving the algebraic Riccati equation numerically6 is another way to initialize.
Remark 4. For the selection of the step size $t_j$, there are two other strategies besides backtracking: one is the non-monotonically decreasing Barzilai-Borwein step,36 and the other is a fixed step $t_j$; however, a bad choice of the fixed step can make the algorithm diverge due to its sensitivity.
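Putting the pieces together, a minimal sketch of the Algorithm 2 iteration with the backtracking rule (24), reusing the hypothetical helpers `sqf_and_gradient` and `prox_col_norm` sketched above (the parameter defaults are illustrative):

```python
import numpy as np

def sparse_feedback(A_bar, B, K0, lam, beta, t0=1.0, eta=0.5, eps=1e-6, max_iter=500):
    """Proximal gradient loop of Algorithm 2 from a stabilizing initial gain K0."""
    K = K0
    for _ in range(max_iter):
        H, grad = sqf_and_gradient(A_bar, B, K)
        t = t0
        while True:                                # backtracking on (24)
            K_next = prox_col_norm(K - t * grad, t * lam, beta)
            H_next, _ = sqf_and_gradient(A_bar, B, K_next)
            model = H + np.sum(grad * (K_next - K)) \
                      + np.linalg.norm(K_next - K, 'fro') ** 2 / (2 * t)
            if H_next <= model:                    # also fails while K_next is unstable
                break
            t *= eta                               # t_j := eta * t_j
        if np.linalg.norm(K_next - K, 'fro') ** 2 < eps:
            return K_next                          # stopping criterion of step 5
        K = K_next
    return K
```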

4 NUMERICAL EXAMPLES

In this section, numerical examples are given to demonstrate the effectiveness of the algorithm.
Example 1. Consider a second-order linear system in the form of (1) with the matrices given as follows:

$$A_0 = \begin{bmatrix} 0 & 0.8 & 1.8 & 1.2 & -7.9 & 0 \\ 1.2 & -1.9 & 0 & 1.3 & -4.5 & -3.5 \\ 3.1 & 0 & 3.5 & -2.7 & 4 & 0 \\ 3.5 & 0 & -3 & -1.5 & 4.5 & 2 \\ -2.2 & -1 & 0 & -4.7 & 1.4 & -4 \\ 0 & -9.1 & -3 & -2.5 & 3.9 & -1.8 \end{bmatrix},$$

$$A_1 = \begin{bmatrix} 0.4 & 0 & 0.3 & 2 & -0.9 & -3.4 \\ -2.4 & -1.4 & 0.3 & 0 & 0 & 6.8 \\ -4.1 & -1.5 & 1.3 & 1.1 & -3.6 & -0.8 \\ 0.9 & -3.8 & -7.8 & -0.2 & -2.4 & -7.9 \\ 0.1 & -0.3 & 0 & -0.8 & 2.8 & 0 \\ -0.5 & 2.3 & -0.2 & 0.9 & -2.1 & 0 \end{bmatrix},$$

$$A_2 = I_6, \qquad B = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

By Algorithm 1, the uncontrolled system has seven unstable characteristic roots. The controller structure is

$$u(t) = K_0 x(t) + K_1 \dot{x}(t).$$

Let $\beta_i = 1$, $\lambda = 10^2$, $\sigma_0 = 0.01$; we obtain the sparse gain matrix by Algorithm 2.


We get the column sparse controller

$$K^* = [K_0^*, K_1^*] = \left[\begin{array}{cccccc|cccccc}
2.980 & 0 & 0 & -0.562 & 0 & 0 & 0 & -1.952 & -1.196 & -1.370 & 0 & 0 \\
4.577 & 0 & 0 & -0.677 & 0 & 0 & 0 & -1.028 & 0.008 & -2.000 & 0 & 0 \\
-3.551 & 0 & 0 & -5.422 & 0 & 0 & 0 & -0.361 & -1.234 & -1.500 & 0 & 0 \\
-1.191 & 0 & 0 & 2.258 & 0 & 0 & 0 & 8.173 & 1.540 & -0.840 & 0 & 0
\end{array}\right].$$

Algorithm 1 is used to verify the stability of the closed-loop system. As a result, only five of the twelve states are used for state feedback control.
Example 2. In this example, we consider an n-degrees-of-freedom damped mass-spring system illustrated in Reference 37. The vibration of the system satisfies the second-order system

$$A_2 \ddot{x}(t) + A_1 \dot{x}(t) + A_0 x(t) = Bu(t), \tag{25}$$

where $B = I_n$, the mass matrix $A_2 = \operatorname{diag}(m_1, m_2, \ldots, m_n)$ is diagonal, and the damping matrix $A_1$ and the stiffness matrix $A_0$ are defined by

$$A_1 = P \operatorname{diag}(d_1, d_2, \ldots, d_{n-1}, 0)\, P^\top + \operatorname{diag}(\tau_1, \tau_2, \ldots, \tau_n),$$

$$A_0 = P \operatorname{diag}(k_1, k_2, \ldots, k_{n-1}, 0)\, P^\top + \operatorname{diag}(\kappa_1, \kappa_2, \ldots, \kappa_n),$$

with $P = (\delta_{ij} - \delta_{i,j+1})$, where $\delta_{ij}$ is the Kronecker delta.


The controller structure is

$$u(t) = K_0 x(t) + K_1 \dot{x}(t).$$

We take $n = 20$ and choose $m_i = 1$, $\kappa_i = k_i = 2$, $\tau_i = d_i = 1$, except for the first damping coefficient $\tau_1 = -1$. The matrices $A_0$ and $A_1$ are tridiagonal. By Algorithm 1, the uncontrolled system has two unstable characteristic roots.
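A minimal sketch of assembling these matrices, assuming `numpy` (the helper name `mass_spring_matrices` is ours):

```python
import numpy as np

def mass_spring_matrices(n=20):
    """A2, A1, A0 of (25) with m_i = 1, k_i = kappa_i = 2, d_i = tau_i = 1
    except tau_1 = -1, as in this example."""
    P = np.eye(n) - np.eye(n, k=-1)        # P = (delta_ij - delta_{i,j+1})
    d = np.ones(n);  d[-1] = 0.0           # diag(d_1, ..., d_{n-1}, 0)
    k = 2.0 * np.ones(n);  k[-1] = 0.0     # diag(k_1, ..., k_{n-1}, 0)
    tau = np.ones(n);  tau[0] = -1.0       # the negative damper destabilizes
    kappa = 2.0 * np.ones(n)
    A1 = P @ np.diag(d) @ P.T + np.diag(tau)
    A0 = P @ np.diag(k) @ P.T + np.diag(kappa)
    return np.eye(n), A1, A0               # A2 = I_n since m_i = 1
```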

FIGURE 2 Sparsity patterns of $K^* \in \mathbb{R}^{20 \times 40}$ for the system (25) with weights (A) $\beta_i = 1$, (B) $\beta_i = i$, and (C) $\beta_i = i^2$, respectively.

Let $\lambda = 1$, $\sigma_0 = 0.01$; we obtain the sparse gain matrix by Algorithm 2 with different gain matrix weights $\beta_i$. In Figure 2, we show the sparsity pattern (locations of nonzero elements) of the gain matrix with weights $\beta_i = 1$, $\beta_i = i$, and $\beta_i = i^2$.
From this example, we can see that increasing the weights on the high-order states helps zero out the columns of the gain matrix corresponding to those states. In applications of high-order linear systems, we expect to use the low-order states to stabilize the system.
Example 3. Consider a third-order linear system in the form of (1) with the matrices given as follows:

$$A_0 = \begin{bmatrix} -0.8 & 0.3 & -0.9 \\ -0.4 & 0.4 & 0 \\ 0 & -0.1 & -0.6 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 1.1 & 0.4 & -1.6 \\ 1.7 & 1.8 & 0 \\ 1 & -1.2 & 0 \end{bmatrix}, \quad A_2 = \begin{bmatrix} -0.5 & 0 & 1.1 \\ 0 & 1.9 & 1.1 \\ 0.1 & -0.8 & 1.7 \end{bmatrix},$$

$$A_3 = I_3, \qquad B = I_3.$$

By Algorithm 1, the uncontrolled system has two unstable characteristic roots. The controller structure is

$$u(t) = K_0 x(t) + K_1 \dot{x}(t) + K_2 \ddot{x}(t).$$

Let $\beta_i = i$, $\sigma_0 = 0.01$; we obtain the sparse gain matrix by Algorithm 2.

(1) For $\lambda = 10^2$,

$$K^* = [K_0^*, K_1^*, K_2^*] = \left[\begin{array}{ccc|ccc|ccc}
-0.717 & -0.011 & -0.272 & -1.583 & 0 & 0 & -0.175 & 0 & 0 \\
-0.413 & 0.008 & -0.036 & 0.201 & 0 & 0 & 0.082 & 0 & 0 \\
-0.095 & 0.020 & -0.455 & 0.637 & 0 & 0 & 0.033 & 0 & 0
\end{array}\right];$$

(2) For $\lambda = 10^3$,

$$K^* = [K_0^*, K_1^*, K_2^*] = \left[\begin{array}{ccc|ccc|ccc}
-0.417 & -0.028 & 0.006 & -1.319 & 0 & 0 & 0 & 0 & 0 \\
-0.353 & 0.011 & -0.104 & 0.133 & 0 & 0 & 0 & 0 & 0 \\
0.064 & 0.049 & -0.169 & 0.668 & 0 & 0 & 0 & 0 & 0
\end{array}\right];$$

(3) For $\lambda = 3 \times 10^3$,

$$K^* = [K_0^*, K_1^*, K_2^*] = \left[\begin{array}{ccc|ccc|ccc}
-0.362 & 0 & 0.026 & -1.083 & 0 & 0 & 0 & 0 & 0 \\
-0.619 & 0 & -0.106 & 0.223 & 0 & 0 & 0 & 0 & 0 \\
0.250 & 0 & -0.160 & 0.608 & 0 & 0 & 0 & 0 & 0
\end{array}\right].$$

Algorithm 1 verifies the stability of the closed-loop system. The parameter $\lambda$ balances the SQF against the sparsity of the gain matrix: in general, as $\lambda$ increases, the feedback controller in this example becomes sparser.
Remark 5. If the sparse controller is expected to use fewer high-order states to stabilize the system, the corresponding matrix norm weight coefficients can be chosen appropriately larger.
Remark 6. In practically relevant systems, a column sparse gain matrix corresponds to a reduction in the number of measuring devices, such as speed sensors in automotive systems and voltage sensors in power systems. Especially in high-order linear systems, the proposed approach provides a way to stabilize the system while avoiding high-order states that are difficult or even impossible to measure. In Example 2, the system can be stabilized by using only the position states of the first two springs in the large mass-spring system.

5 CONCLUSION

We propose a regularized SQF problem to design a sparse feedback controller for stabilizing high-order linear systems. Inspired by the SQF, a special matrix norm is used to transform the sparse feedback problem into a non-smooth optimization problem. The proximal gradient algorithm used to solve the problem involves two main ingredients. First, for the smooth part of the objective function, the objective function value and gradient are obtained by solving two Lyapunov equations. Second, the proximal mapping of the special matrix norm is derived. In future work, we will continue to improve the computational efficiency of the algorithm in order to apply the method to more complex problems.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (Grant Nos. 11871330 and 11971303) and the Natural Science Foundation of Shanghai (Grant No. 21ZR1426400).

DATA AVAILABILITY STATEMENT


Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

ORCID
Anping Tang https://orcid.org/0000-0002-7935-368X

REFERENCES
1. Duan GR, Wang GS. Eigenstructure assignment in a class of second-order descriptor linear systems: a complete parametric approach. Int
J Autom Comput. 2005;2(1):1-5.
2. Duan GR. Parametric approaches for eigenstructure assignment in high-order linear systems. Int J Control Autom Syst. 2005;3(3):
419-429.
3. Kalman R. Contribution to the theory of optimal control. Bol Soc Mat Mex. 1960;5(2):102-119.
4. Veselý V. Static output feedback controller design. Kybernetika. 2001;37(2):205-221.
5. Iwasaki T, Skelton RE, Geromel JC. Linear quadratic suboptimal control with static output feedback. Syst Control Lett. 1994;23(6):
421-430.
6. Benner P, Li JR, Penzl T. Numerical solution of large-scale Lyapunov equations, Riccati equations, and linear-quadratic optimal control
problems. Numer Linear Algebra Appl. 2008;15(9):755-777.
7. Yu HH, Duan GR. Robust pole assignment in high-order descriptor linear systems via proportional plus derivative state feedback. IET
Control Theory Appl. 2008;2(4):277-287.
8. Mao X, Dai H. Minimum norm partial eigenvalue assignment of high order linear system with no spill-over. Linear Algebra Appl.
2013;438(5):2136-2154.
9. Rautert T, Sachs EW. Computational design of optimal output feedback controllers. SIAM J Optim. 1997;7(3):837-852.
10. Shimizu K. Optimization of parameter matrix: optimal output feedback control and optimal PID control. Proceedings of the 2017 IEEE
Conference on Control Technology and Applications (CCTA); 2017: 1734–1739; Mauna Lani Resort, HI.
11. Hu GD, Hu X. Stability criteria of matrix polynomials. Int J Control. 2019;92(12):2973-2978.
12. Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc B. 1996;58(1):267-288.
13. Cotter S, Rao B, Engan K, Kreutz-Delgado K. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans
Signal Process. 2005;53(7):2477-2488.
14. Candes EJ, Wakin MB, Boyd SP. Enhancing sparsity by reweighted l1 minimization. J Fourier Anal Appl. 2008;14(5-6):877-905.
15. Tropp JA. Algorithms for simultaneous sparse approximation. Part II: convex relaxation. Signal Process. 2006;86(3):589-602.
16. Malioutov D, Cetin M, Willsky A. Source localization by enforcing sparsity through a Laplacian prior: an SVD-based approach. Proceedings
of the IEEE Workshop on Statistical Signal Processing; 2003: 573–576; St. Louis, MO.
17. Polyak BT, Khlebnikov MV, Shcherbakov PS. Sparse feedback in linear control systems. Autom Remote Control. 2014;75(12):
2099-2111.
18. Arastoo R, GhaedSharaf Y, Kothare MV, Motee N. Optimal state feedback controllers with strict row sparsity constraints. Proceedings of
the 2016 American Control Conference (ACC); 2016: 1948–1953; Boston, MA.
19. Schuler S, Li P, Lam J, Allgöwer F. Design of structured dynamic output-feedback controllers for interconnected systems. Int J Control.
2011;84(12):2081-2091.
20. Lin F, Fardad M, Jovanovic MR. Design of optimal sparse feedback gains via the alternating direction method of multipliers. IEEE Trans
Autom Control. 2013;58(9):2426-2431.
21. Fardad M, Jovanovic MR. On the design of optimal structured and sparse feedback gains via sequential convex programming. Proceedings
of the 2014 American Control Conference; 2014: 2426–2431; Portland, OR.
22. Polyak B, Tremba A. Sparse solutions of optimal control via Newton method for under-determined systems. J Glob Optim.
2020;76(3):613-623.
23. Pakazad SK, Ohlsson H, Ljung L. Sparse control using sum-of-norms regularized model predictive control. Proceedings of the 52nd IEEE
Conference on Decision and Control; 2013: 5758–5763; Firenze.
24. Nagahara M, Quevedo DE, Ostergaard J. Sparse packetized predictive control for networked control over erasure channels. IEEE Trans
Autom Control. 2014;59(7):1899-1905.
25. Nagahara M, Østergaard J, Quevedo DE. Discrete-time hands-off control by sparse optimization. EURASIP J Adv Signal Process.
2016;2016(1):76.
26. Tzoumas V, Rahimian MA, Pappas GJ, Jadbabaie A. Minimal actuator placement with bounds on control effort. IEEE Trans Control Netw
Syst. 2016;3(1):67-78.
27. Ikeda T, Nagahara M. Resource-aware time-optimal control with multiple sparsity measures. Automatica. 2021;135:109957.
28. Hu GD, Hu RH. Numerical optimization for feedback stabilization of linear systems with distributed delays. J Comput Appl Math.
2020;371:1-9.
29. Quattoni A, Carreras X, Collins M, Darrell T. An efficient projection for l1,∞ regularization. Proceedings of the 26th International
Conference on Machine Learning; 2009:1–8; ACM Press; Montreal, QC.
30. Bellman R. Notes on matrix theory—X a problem in control. Q Appl Math. 1957;14(4):417-419.
31. Schmidt M, Roux NL, Bach F. Convergence rates of inexact proximal-gradient methods for convex optimization. Adv Neural Inf Process
Syst. 2011;24:1458-1466.
32. Parikh N, Boyd S. Proximal algorithms. Found Trends Optim. 2014;1(3):123-231.

33. Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J Imaging Sci. 2009;2(1):
183-202.
34. Beck A. First-Order Methods in Optimization. Society for Industrial and Applied Mathematics; 2017.
35. Boyd S, ed. Linear Matrix Inequalities in System and Control Theory. Vol 15. Society for Industrial and Applied Mathematics; 1994.
36. Barzilai J, Borwein JM. Two-point step size gradient methods. IMA J Numer Anal. 1988;8(1):141-148.
37. Tisseur F, Meerbergen K. The quadratic eigenvalue problem. SIAM Rev. 2001;43(2):235-286.

How to cite this article: Tang A, Hu G-D, Cong Y. Sparse feedback stabilization in high-order linear systems.
Optim Control Appl Meth. 2023;44(1):53-65. doi: 10.1002/oca.2929
