
Journal of Computational and Applied Mathematics 381 (2021) 113031


An inverse eigenvalue problem for Hamiltonian matrices



Yongxin Yuan∗, Jinghua Chen
School of Mathematics and Statistics, Hubei Normal University, Huangshi, 435002, PR China

Article history: Received 18 April 2019; Received in revised form 6 November 2019

MSC: 65F18; 15A24

Keywords: Inverse eigenvalue problem; Hamiltonian matrix; Generalized singular value decomposition; Optimal approximation

Abstract. Let $J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \in \mathbb{R}^{2n\times 2n}$. A matrix $H \in \mathbb{R}^{2n\times 2n}$ is called Hamiltonian if $(HJ)^\top = HJ$. In this paper, the inverse eigenvalue problem for Hamiltonian matrices is considered. The solvability condition for the inverse problem is derived and the representation of the general solution is presented by the generalized singular value decomposition of a matrix pair. Furthermore, the associated optimal approximation problem for this inverse eigenvalue problem is discussed and the expression of the solution for the optimal approximation problem is presented.

© 2020 Elsevier B.V. All rights reserved.

1. Introduction

Throughout this paper, we denote the set of all $m$-by-$n$ complex matrices by $\mathbb{C}^{m\times n}$, the set of all $m$-by-$n$ real matrices by $\mathbb{R}^{m\times n}$, the set of all orthogonal matrices in $\mathbb{R}^{n\times n}$ by $\mathcal{OR}^{n\times n}$, and the identity matrix of order $n$ by $I_n$.

Definition 1. Let $J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \in \mathbb{R}^{2n\times 2n}$. A matrix $H \in \mathbb{R}^{2n\times 2n}$ is called Hamiltonian if $(HJ)^\top = HJ$. The set of all $2n \times 2n$ Hamiltonian matrices is denoted by $\mathcal{H}^{2n\times 2n}$.
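As a quick numerical illustration of this definition (a sketch with randomly generated data, not taken from the paper): since $J^\top J = I_{2n}$, every matrix of the form $H = WJ^\top$ with $W$ symmetric satisfies $HJ = W$ and is therefore Hamiltonian.

```python
import numpy as np

# Sketch: build a Hamiltonian matrix from a random symmetric W via H = W J^T,
# then check the defining property (HJ)^T = HJ.  All data is illustrative.
n = 4
rng = np.random.default_rng(0)

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

W = rng.standard_normal((2 * n, 2 * n))
W = (W + W.T) / 2            # make W symmetric
H = W @ J.T                  # then H J = W (J^T J) = W, which is symmetric

assert np.allclose((H @ J).T, H @ J)   # H is Hamiltonian
```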

The eigenvalue problem for Hamiltonian matrices arises in a number of important applications, and many algorithms
for computing their eigenvalues, invariant subspaces and the Hamiltonian Schur form can be found in [1–3] and
the references therein. The inverse eigenvalue problem (IEP) concerns the reconstruction of a structured matrix from
prescribed spectral data [4–10]. Inverse eigenvalue problems are widely used in many research fields, such as structural
dynamics [11–14], parameter identification [15,16] and pole assignment [17,18] or eigenstructure assignment [19,20].
Extensive bibliographies of inverse eigenvalue problems for matrices can be found in the book by Chu and Golub [21].
The main purpose of this paper is to investigate the solvability condition and the representation of the general solution
for the inverse eigenvalue problem of Hamiltonian matrices by using generalized singular value decomposition (GSVD).
More specifically, the following problems are considered.
Problem IEP. Given a full column rank matrix $X = [x_1, x_2, \ldots, x_p] \in \mathbb{C}^{2n\times p}$ and a diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_p) \in \mathbb{C}^{p\times p}$, where the eigenpairs $\{(x_i, \lambda_i)\}_{i=1}^{p}$ are closed under complex conjugation, find a Hamiltonian matrix $H \in \mathbb{R}^{2n\times 2n}$ such that

$$HX = X\Lambda. \tag{1}$$

∗ Corresponding author.
E-mail address: yuanyx_703@163.com (Y. Yuan).

https://doi.org/10.1016/j.cam.2020.113031
0377-0427/© 2020 Elsevier B.V. All rights reserved.

Problem OAP. Given a matrix $\tilde H \in \mathbb{R}^{2n\times 2n}$, find a solution $H$ to Problem IEP such that $\|H - \tilde H\|$ is minimized, where $\|\cdot\|$ denotes the Frobenius norm.
By using the generalized singular value decomposition of a matrix pair, we provide a necessary and sufficient condition
for Problem IEP to have a solution H ∈ H2n×2n and construct the solution set SE explicitly when it is nonempty. We prove
that there exists a unique solution to Problem OAP if the set SE is nonempty and present an explicit formula for the unique
solution. Two numerical examples are given to illustrate the effectiveness of our results.

2. Solving problem IEP

In order to solve Problems IEP and OAP, we need the following lemmas.
Lemma 1 ([2]). Let $H \in \mathcal{H}^{2n\times 2n}$. Then $H$ is of the form $H = \begin{bmatrix} E & F \\ G & -E^\top \end{bmatrix}$, where $E, F, G \in \mathbb{R}^{n\times n}$ with $F = F^\top$, $G = G^\top$.
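Lemma 1 is easy to verify numerically. The following sketch (random, illustrative data) assembles $H$ from a block $E$ and symmetric blocks $F$, $G$ and confirms the Hamiltonian property $(HJ)^\top = HJ$.

```python
import numpy as np

# Sketch of Lemma 1: a block matrix [[E, F], [G, -E^T]] with F, G symmetric
# is Hamiltonian.  Here H J = [[-F, E], [E^T, G]], which is symmetric.
n = 3
rng = np.random.default_rng(1)

E = rng.standard_normal((n, n))
F = rng.standard_normal((n, n)); F = (F + F.T) / 2   # symmetric
G = rng.standard_normal((n, n)); G = (G + G.T) / 2   # symmetric

H = np.block([[E, F], [G, -E.T]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

HJ = H @ J
assert np.allclose(HJ, HJ.T)
```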

Lemma 2. Let $S_1 = \mathrm{diag}(\alpha_1, \alpha_2, \ldots, \alpha_s) \in \mathbb{R}^{s\times s}$, $S_2 = \mathrm{diag}(\beta_1, \beta_2, \ldots, \beta_s) \in \mathbb{R}^{s\times s}$ with $\alpha_i^2 + \beta_i^2 = 1$, $i = 1, \ldots, s$, and $A, B, C, D \in \mathbb{R}^{g\times s}$. Then

$$\Phi(F_{12}) = \|F_{12}S_2S_1^{-1} - A\|^2 + \|F_{12}S_2S_1^{-1} - B\|^2 + \|F_{12} - C\|^2 + \|F_{12} - D\|^2 = \min$$

if and only if

$$F_{12} = \frac{1}{2}\left((A+B)S_1S_2 + CS_1^2 + DS_1^2\right). \tag{2}$$

Proof. Let $A = [a_{ij}]$, $B = [b_{ij}]$, $C = [c_{ij}]$, $D = [d_{ij}] \in \mathbb{R}^{g\times s}$, and $F_{12} = [f_{ij}] \in \mathbb{R}^{g\times s}$. Then

$$\Phi(F_{12}) = \sum_{i=1}^{g}\sum_{j=1}^{s}\left(\Big(f_{ij}\frac{\beta_j}{\alpha_j} - a_{ij}\Big)^2 + \Big(f_{ij}\frac{\beta_j}{\alpha_j} - b_{ij}\Big)^2 + (f_{ij} - c_{ij})^2 + (f_{ij} - d_{ij})^2\right).$$

Now we minimize the quantities

$$\phi_{ij} = \Big(f_{ij}\frac{\beta_j}{\alpha_j} - a_{ij}\Big)^2 + \Big(f_{ij}\frac{\beta_j}{\alpha_j} - b_{ij}\Big)^2 + (f_{ij} - c_{ij})^2 + (f_{ij} - d_{ij})^2, \quad 1 \le i \le g;\ 1 \le j \le s.$$

It is easy to obtain the minimizers

$$f_{ij} = \frac{a_{ij}\alpha_j\beta_j + b_{ij}\alpha_j\beta_j + c_{ij}\alpha_j^2 + d_{ij}\alpha_j^2}{2\alpha_j^2 + 2\beta_j^2} = \frac{(a_{ij}+b_{ij})\alpha_j\beta_j + c_{ij}\alpha_j^2 + d_{ij}\alpha_j^2}{2}, \quad 1 \le i \le g;\ 1 \le j \le s. \tag{3}$$

By rewriting (3) in matrix form, we immediately obtain (2). □
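Since $\Phi$ is a strictly convex quadratic in $F_{12}$, the closed form (2) can be sanity-checked numerically: no nearby matrix should attain a smaller value. A sketch with random data (sizes and seed are arbitrary):

```python
import numpy as np

# Check of Lemma 2 on random data: the closed-form minimizer
# F12 = ((A+B) S1 S2 + C S1^2 + D S1^2) / 2 beats random perturbations.
g, s = 3, 4
rng = np.random.default_rng(2)

theta = rng.uniform(0.1, 1.4, size=s)
S1, S2 = np.diag(np.cos(theta)), np.diag(np.sin(theta))  # alpha_i^2 + beta_i^2 = 1
A, B, C, D = (rng.standard_normal((g, s)) for _ in range(4))

def phi(F):
    W = F @ S2 @ np.linalg.inv(S1)
    return (np.linalg.norm(W - A)**2 + np.linalg.norm(W - B)**2
            + np.linalg.norm(F - C)**2 + np.linalg.norm(F - D)**2)

F_opt = ((A + B) @ S1 @ S2 + C @ S1**2 + D @ S1**2) / 2
for _ in range(100):
    F_pert = F_opt + 1e-3 * rng.standard_normal((g, s))
    assert phi(F_opt) <= phi(F_pert)
```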

Lemma 3. Assume that $S_1 = \mathrm{diag}(\alpha_1, \alpha_2, \ldots, \alpha_s) \in \mathbb{R}^{s\times s}$, $S_2 = \mathrm{diag}(\beta_1, \beta_2, \ldots, \beta_s) \in \mathbb{R}^{s\times s}$ with $\alpha_i^2 + \beta_i^2 = 1$, $i = 1, \ldots, s$, and $A_1, B_1, C_1, D_1 \in \mathbb{R}^{s\times s}$. Then

$$\Psi(F_{22}) = \|F_{22}S_2S_1^{-1} - A_1\|^2 + \|F_{22}S_2S_1^{-1} - B_1\|^2 + \|F_{22} - C_1\|^2 + \|S_1^{-1}S_2F_{22}S_2S_1^{-1} - D_1\|^2 = \min, \quad \text{s.t. } F_{22} = F_{22}^\top \tag{4}$$

if and only if

$$F_{22} = \frac{1}{2}\left(S_1^2(A_1+B_1)S_2S_1 + S_1S_2(A_1^\top+B_1^\top)S_1^2 + S_1^2(C_1+C_1^\top)S_1^2 + S_1S_2(D_1+D_1^\top)S_2S_1\right). \tag{5}$$

Proof. Let $A_1 = [a_{ij}]$, $B_1 = [b_{ij}]$, $C_1 = [c_{ij}]$, $D_1 = [d_{ij}] \in \mathbb{R}^{s\times s}$, and $F_{22} = [f_{ij}] \in \mathbb{R}^{s\times s}$. From (4) we have

$$\Psi(F_{22}) = \sum_{i=1}^{s}\sum_{j=1}^{s}\left(\Big(f_{ij}\frac{\beta_j}{\alpha_j} - a_{ij}\Big)^2 + \Big(f_{ij}\frac{\beta_j}{\alpha_j} - b_{ij}\Big)^2 + (f_{ij} - c_{ij})^2 + \Big(f_{ij}\frac{\beta_i\beta_j}{\alpha_i\alpha_j} - d_{ij}\Big)^2\right).$$

Since $f_{ij} = f_{ji}$, $\Psi(F_{22})$ is a differentiable function of the $\frac{1}{2}s(s+1)$ variables $f_{ij}$ ($i = 1, \ldots, s$; $j = i, \ldots, s$). It is easy to verify that $\Psi(F_{22})$ attains its smallest value at

$$f_{ij} = \frac{\alpha_i^2(a_{ij}+b_{ij})\alpha_j\beta_j + \alpha_i^2(c_{ij}+c_{ji})\alpha_j^2 + \alpha_i\beta_i(d_{ij}+d_{ji})\alpha_j\beta_j + \alpha_i\beta_i(a_{ji}+b_{ji})\alpha_j^2}{2}. \tag{6}$$

By rewriting (6) in matrix form, we easily obtain (5). □
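The constrained minimizer (5) can likewise be sanity-checked numerically against symmetric perturbations; a sketch with random data (the implementation of (5) below follows the reconstruction given above):

```python
import numpy as np

# Check of Lemma 3 on random data: the symmetric closed-form minimizer (5)
# beats nearby symmetric perturbations, since Psi is strictly convex.
s = 4
rng = np.random.default_rng(3)

theta = rng.uniform(0.1, 1.4, size=s)
S1, S2 = np.diag(np.cos(theta)), np.diag(np.sin(theta))
A1, B1, C1, D1 = (rng.standard_normal((s, s)) for _ in range(4))
S1inv = np.linalg.inv(S1)

def psi(F):
    W = F @ S2 @ S1inv
    return (np.linalg.norm(W - A1)**2 + np.linalg.norm(W - B1)**2
            + np.linalg.norm(F - C1)**2
            + np.linalg.norm(S1inv @ S2 @ F @ S2 @ S1inv - D1)**2)

F_opt = (S1**2 @ (A1 + B1) @ S2 @ S1 + S1 @ S2 @ (A1 + B1).T @ S1**2
         + S1**2 @ (C1 + C1.T) @ S1**2 + S1 @ S2 @ (D1 + D1.T) @ S2 @ S1) / 2
assert np.allclose(F_opt, F_opt.T)       # the minimizer is symmetric

for _ in range(100):
    P = 1e-3 * rng.standard_normal((s, s))
    F_pert = F_opt + (P + P.T) / 2       # stay inside the symmetric set
    assert psi(F_opt) <= psi(F_pert)
```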

Suppose $H \in \mathcal{H}^{2n\times 2n}$. By Lemma 1, $H$ is of the form

$$H = \begin{bmatrix} E & F \\ G & -E^\top \end{bmatrix}, \tag{7}$$

where $E, F, G \in \mathbb{R}^{n\times n}$ are to be determined. Suppose that $\lambda_1, \lambda_2, \ldots, \lambda_{2l}$ are the complex eigenvalues (appearing in conjugate pairs, $\lambda_{2i} = \bar\lambda_{2i-1}$) and $\lambda_{2l+1}, \ldots, \lambda_p$ are real. Let $\alpha_i = \mathrm{Re}(\lambda_i)$ (the real part of the complex number $\lambda_i$), $\beta_i = \mathrm{Im}(\lambda_i)$ (the imaginary part of $\lambda_i$), $y_i = \mathrm{Re}(x_i)$, $z_i = \mathrm{Im}(x_i)$ for $i = 1, 3, \ldots, 2l-1$, and

$$\tilde\Lambda = \mathrm{diag}\left(\begin{bmatrix} \alpha_1 & \beta_1 \\ -\beta_1 & \alpha_1 \end{bmatrix}, \ldots, \begin{bmatrix} \alpha_{2l-1} & \beta_{2l-1} \\ -\beta_{2l-1} & \alpha_{2l-1} \end{bmatrix}, \lambda_{2l+1}, \ldots, \lambda_p\right) \in \mathbb{R}^{p\times p}, \tag{8}$$

$$\tilde X = [y_1, z_1, \ldots, y_{2l-1}, z_{2l-1}, x_{2l+1}, \ldots, x_p] \in \mathbb{R}^{2n\times p}. \tag{9}$$

Then, Eq. (1) is equivalent to

$$H\tilde X = \tilde X\tilde\Lambda. \tag{10}$$
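The realification underlying (8)–(10) can be illustrated for a single conjugate eigenpair: if $Hx = \lambda x$ with $\lambda = \alpha + i\beta$ and $x = y + iz$, then $H[y\ z] = [y\ z]\begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}$. A sketch with a random real matrix (illustrative only):

```python
import numpy as np

# If H x = lambda x with lambda = alpha + i*beta and x = y + i*z, then
# H y = alpha*y - beta*z and H z = beta*y + alpha*z, which is exactly the
# real 2x2 rotation-like block used in (8).
rng = np.random.default_rng(4)
H = rng.standard_normal((6, 6))          # any real matrix serves here

lam, X = np.linalg.eig(H)
k = np.argmax(np.abs(lam.imag))          # pick the most complex eigenvalue
alpha, beta = lam[k].real, lam[k].imag
y, z = X[:, k].real, X[:, k].imag

Xt = np.column_stack([y, z])
Lt = np.array([[alpha, beta], [-beta, alpha]])
assert np.allclose(H @ Xt, Xt @ Lt)
```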

Let the partition of the matrix $\tilde X$ be

$$\tilde X = \begin{bmatrix} \tilde X_1 \\ \tilde X_2 \end{bmatrix}, \quad \tilde X_1, \tilde X_2 \in \mathbb{R}^{n\times p}. \tag{11}$$

Using (7) and (11), Eq. (10) can be equivalently written as

$$E\tilde X_1 + F\tilde X_2 = \tilde X_1\tilde\Lambda, \tag{12}$$

$$G\tilde X_1 - E^\top\tilde X_2 = \tilde X_2\tilde\Lambda. \tag{13}$$

Since $\mathrm{rank}(\tilde X) = \mathrm{rank}(X) = p$, the GSVD (see Refs. [22] and [23]) of the matrix pair $(\tilde X_1^\top, \tilde X_2^\top)$ is of the form

$$\tilde X_1^\top = M\Sigma_1 U^\top, \quad \tilde X_2^\top = M\Sigma_2 V^\top, \tag{14}$$

where $M \in \mathbb{R}^{p\times p}$ is a nonsingular matrix,

$$\Sigma_1 = \begin{bmatrix} I & 0 & 0 \\ 0 & S_1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{matrix} r-s \\ s \\ p-r \end{matrix}, \qquad \Sigma_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & S_2 & 0 \\ 0 & 0 & I \end{bmatrix} \begin{matrix} r-s \\ s \\ p-r \end{matrix},$$

with column partitions $(r-s,\ s,\ n-r)$ for $\Sigma_1$ and $(g,\ s,\ p-r)$ for $\Sigma_2$,

$$U = [U_1\ U_2\ U_3] \in \mathcal{OR}^{n\times n}, \quad V = [V_1\ V_2\ V_3] \in \mathcal{OR}^{n\times n},$$

where the column partitions of $U$ and $V$ are compatible with those of $\Sigma_1$ and $\Sigma_2$,

$$g = n + r - p - s, \quad r = \mathrm{rank}(\tilde X_1), \quad s = \mathrm{rank}(\tilde X_1) + \mathrm{rank}(\tilde X_2) - p,$$

and

$$S_1 = \mathrm{diag}(\alpha_1, \ldots, \alpha_s), \quad S_2 = \mathrm{diag}(\beta_1, \ldots, \beta_s)$$

with

$$1 > \alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_s > 0, \quad 0 < \beta_1 \le \beta_2 \le \cdots \le \beta_s < 1, \quad \alpha_i^2 + \beta_i^2 = 1\ (i = 1, \ldots, s).$$
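The block layout of $\Sigma_1$ and $\Sigma_2$ implies $\Sigma_1\Sigma_1^\top + \Sigma_2\Sigma_2^\top = I_p$. A quick numerical check with arbitrary (illustrative) sizes satisfying $g = n + r - p - s$:

```python
import numpy as np

# Construct Sigma1 (p x n) and Sigma2 (p x n) with the block pattern of (14)
# and verify Sigma1 Sigma1^T + Sigma2 Sigma2^T = I_p.  Sizes are arbitrary.
p, n, r, s = 5, 8, 4, 2
g = n + r - p - s

theta = np.linspace(0.3, 1.2, s)
S1, S2 = np.diag(np.cos(theta)), np.diag(np.sin(theta))

Sigma1 = np.zeros((p, n))
Sigma1[:r - s, :r - s] = np.eye(r - s)       # identity block, columns 0..r-s
Sigma1[r - s:r, r - s:r] = S1                # S1 block, columns r-s..r

Sigma2 = np.zeros((p, n))
Sigma2[r - s:r, g:g + s] = S2                # S2 block, columns g..g+s
Sigma2[r:, g + s:] = np.eye(p - r)           # identity block, last p-r columns

assert np.allclose(Sigma1 @ Sigma1.T + Sigma2 @ Sigma2.T, np.eye(p))
```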

Let

$$V^\top\tilde X_1\tilde\Lambda(M^{-1})^\top = \begin{bmatrix} J_{11} & J_{12} & J_{13} \\ J_{21} & J_{22} & J_{23} \\ J_{31} & J_{32} & J_{33} \end{bmatrix} \begin{matrix} g \\ s \\ p-r \end{matrix}, \tag{15}$$

$$U^\top\tilde X_2\tilde\Lambda(M^{-1})^\top = \begin{bmatrix} L_{11} & L_{12} & L_{13} \\ L_{21} & L_{22} & L_{23} \\ L_{31} & L_{32} & L_{33} \end{bmatrix} \begin{matrix} r-s \\ s \\ n-r \end{matrix}, \tag{16}$$

where the column partitions of both matrices are $(r-s,\ s,\ p-r)$.
Now, we can establish the solvability of Problem IEP as follows.

Theorem 1. Suppose that $X = [x_1, x_2, \ldots, x_p] \in \mathbb{C}^{2n\times p}$ and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_p) \in \mathbb{C}^{p\times p}$, where $\mathrm{rank}(X) = p$ and the eigenpairs $\{(x_i, \lambda_i)\}_{i=1}^{p}$ are closed under complex conjugation. Separate $\Lambda$ and $X$ into real and imaginary parts, yielding $\tilde\Lambda$ and $\tilde X$ as in (8) and (9). Let the partition of the matrix $\tilde X$ be given by (11), the GSVD of the matrix pair $(\tilde X_1^\top, \tilde X_2^\top)$ be given by (14), and the partitions of $V^\top\tilde X_1\tilde\Lambda(M^{-1})^\top$ and $U^\top\tilde X_2\tilde\Lambda(M^{-1})^\top$ be given by (15) and (16). Then Problem IEP is solvable if and only if

$$J_{31} = -L_{13}^\top, \quad J_{33} = J_{33}^\top, \quad L_{11} = L_{11}^\top, \quad L_{21}^\top S_1 - J_{21}^\top S_2 = L_{12}, \quad -L_{23}^\top S_1 + J_{23}^\top S_2 = J_{32}, \quad S_2J_{22} + L_{22}^\top S_1 = J_{22}^\top S_2 + S_1L_{22}. \tag{17}$$

In this case, the solution set $\mathcal{S}_E$ can be expressed as

$$\mathcal{S}_E = \left\{ H \in \mathbb{R}^{2n\times 2n} \,\middle|\, H = \begin{bmatrix} E & F \\ G & -E^\top \end{bmatrix} \right\}, \tag{18}$$

where

$$E = V\begin{bmatrix} J_{11} & (J_{12} - F_{12}S_2)S_1^{-1} & E_{13} \\ J_{21} & (J_{22} - F_{22}S_2)S_1^{-1} & S_2^{-1}(S_1G_{23} - L_{32}^\top) \\ J_{31} & -L_{23}^\top & -L_{33}^\top \end{bmatrix}U^\top, \tag{19}$$

$$F = V\begin{bmatrix} F_{11} & F_{12} & J_{13} \\ F_{12}^\top & F_{22} & J_{23} \\ J_{13}^\top & J_{23}^\top & J_{33} \end{bmatrix}V^\top, \tag{20}$$

$$G = U\begin{bmatrix} L_{11} & L_{21}^\top & L_{31}^\top \\ L_{21} & S_1^{-1}(S_2J_{22} + L_{22}^\top S_1 - S_2F_{22}S_2)S_1^{-1} & G_{23} \\ L_{31} & G_{23}^\top & G_{33} \end{bmatrix}U^\top, \tag{21}$$

and $E_{13}$, $F_{12}$, $G_{23}$ are arbitrary matrices, and $F_{11}$, $F_{22}$, $G_{33}$ are arbitrary symmetric matrices.

Proof. Using the GSVD of the matrix pair $(\tilde X_1^\top, \tilde X_2^\top)$ given by (14), we see that Eqs. (12) and (13) are equivalent to

$$V^\top EU\Sigma_1^\top + V^\top FV\Sigma_2^\top = V^\top\tilde X_1\tilde\Lambda(M^{-1})^\top, \tag{22}$$

$$U^\top GU\Sigma_1^\top - U^\top E^\top V\Sigma_2^\top = U^\top\tilde X_2\tilde\Lambda(M^{-1})^\top. \tag{23}$$

If we partition $V^\top EU$, $U^\top GU$ and $V^\top FV$ as

$$V^\top EU = \begin{bmatrix} E_{11} & E_{12} & E_{13} \\ E_{21} & E_{22} & E_{23} \\ E_{31} & E_{32} & E_{33} \end{bmatrix} \begin{matrix} g \\ s \\ p-r \end{matrix}, \tag{24}$$

with column partition $(r-s,\ s,\ n-r)$, and

$$U^\top GU = \begin{bmatrix} G_{11} & G_{12} & G_{13} \\ G_{12}^\top & G_{22} & G_{23} \\ G_{13}^\top & G_{23}^\top & G_{33} \end{bmatrix} \begin{matrix} r-s \\ s \\ n-r \end{matrix}, \quad V^\top FV = \begin{bmatrix} F_{11} & F_{12} & F_{13} \\ F_{12}^\top & F_{22} & F_{23} \\ F_{13}^\top & F_{23}^\top & F_{33} \end{bmatrix} \begin{matrix} g \\ s \\ p-r \end{matrix}, \tag{25}$$

then it follows that

$$E_{11} = J_{11}, \quad -E_{33}^\top = L_{33}, \quad F_{13} = J_{13}, \quad G_{13}^\top = L_{31}, \tag{26}$$
$$E_{31} = J_{31}, \quad -E_{31}^\top = L_{13}, \tag{27}$$
$$F_{33} = J_{33}, \quad G_{11} = L_{11}, \tag{28}$$
$$E_{21} = J_{21}, \quad G_{12}^\top = L_{21}, \quad G_{12}S_1 - E_{21}^\top S_2 = L_{12}, \tag{29}$$
$$F_{23} = J_{23}, \quad -E_{32}^\top = L_{23}, \quad E_{32}S_1 + F_{23}^\top S_2 = J_{32}, \tag{30}$$
$$E_{12}S_1 + F_{12}S_2 = J_{12}, \quad G_{23}^\top S_1 - E_{23}^\top S_2 = L_{32}, \tag{31}$$
$$E_{22}S_1 + F_{22}S_2 = J_{22}, \quad G_{22}S_1 - E_{22}^\top S_2 = L_{22}. \tag{32}$$

From the relations (26)–(32), we readily obtain the solvability condition (17) and the expressions (19), (20) and (21). □

3. Solving problem OAP

In this section, we solve Problem OAP over $\mathcal{S}_E$ when $\mathcal{S}_E$ is nonempty. It is easy to verify that $\mathcal{S}_E$ is a closed convex set when the condition (17) is satisfied. Therefore there exists a unique solution to Problem OAP (see Ref. [24]). Our task now is to find the unique solution $\hat H$ of Problem OAP in $\mathcal{S}_E$. For the given matrix $\tilde H \in \mathbb{R}^{2n\times 2n}$ and any matrix $H \in \mathcal{S}_E$ in (18), we have

$$\|H - \tilde H\|^2 = \|E - \tilde H_{11}\|^2 + \|E + \tilde H_{22}^\top\|^2 + \|F - \tilde H_{12}\|^2 + \|G - \tilde H_{21}\|^2, \tag{33}$$

where

$$\tilde H = \begin{bmatrix} \tilde H_{11} & \tilde H_{12} \\ \tilde H_{21} & \tilde H_{22} \end{bmatrix}, \quad \tilde H_{ij} \in \mathbb{R}^{n\times n},\ i, j = 1, 2,$$

and $E$, $F$ and $G$ are given by (19)–(21). Upon substitution, it holds that
⎡ ⎤ 2
 J11 (J12 − F12 S2 )S −1 E13 
 2  1 
−1 −1 ⊤ ⊤
H − H̃  = ⎣J21 (J22 − F22 S2 )S1 S2 (S1 G23 − L32 )⎦ − V H̃11 U 
  ⎢ ⎥ 
 
 J31 − L⊤ 23 − L⊤
33

⎡ ⎤ 2
 J11 (J12 − F12 S2 )S −1 E13 
 1 
⊤ ⊤
+ ⎣J21 (J22 − F22 S2 )S1−1 S2−1 (S1 G23 − L⊤ )⎦ + V H̃22 U 
⎢ ⎥ 
 32 
 J31 − L⊤ 23 − L⊤
33

⎡ ⎤ 2
 F11 F12 J13 
 
+ ⎣F12 F22 J23 ⎦ − V ⊤ H̃12 V 
⎢ ⊤ ⎥ 
 ⊤ ⊤

 J13 J23 J33 
⎡ ⎤ 2
 L11
 L⊤21 L⊤
31


G23 ⎦ − U H̃21 U  .
−1 ⊤ −1 ⊤
+ ⎣L21 S1 (S2 J22 + L22 S1 − S2 F22 S2 )S1
⎢ ⎥ 
 
 L31 G⊤23 G33 
 
Therefore, H − H̃  = min if and only if
 

 2  2
E13 − V1⊤ H̃11 U3  + E13 + V1⊤ H̃22

U3  = min; (34)
   

 2  2
F12 S2 S1−1 − (J12 S1−1 − V1⊤ H̃11 U2 ) + F12 S2 S1−1 − (J12 S1−1 + V1⊤ H̃22

U 2 )
   
 2  2 (35)
+ F12 − V1⊤ H̃12 V2  + F12 − V1⊤ H̃12 ⊤
V2  = min;
   
 2  2
 −1
S2 S1 G23 − (S2−1 L⊤ ⊤  −1 −1 ⊤ ⊤ ⊤
32 + V2 H̃11 U3 ) + S2 S1 G23 − (S2 L32 − V2 H̃22 U3 )
 
 2  2 (36)
+ G23 − U2⊤ H̃21 U3  + G23 − U2⊤ H̃21 ⊤
U3  = min;
   
 2
F11 − V1⊤ H̃12 V1  = min, s. t. F11 = F11 ⊤
; (37)
 

 2
G33 − U3⊤ H̃21 U3  = min, s. t. G33 = G⊤
33 ; (38)
 

 2
F22 S2 S1−1 − (J22 S1−1 − V2⊤ H̃11 U2 )
 
 2  2
+ F22 S2 S1−1 − (J22 S1−1 + V2⊤ H̃22 ⊤
U2 ) + F22 − V2⊤ H̃12 V2  (39)
   
 2
22 − U2 H̃21 U2 ) = min, s. t. F22 = F22 .
+ S1−1 S2 F22 S2 S1−1 − (S1−1 S2 J22 S1−1 + S1−1 L⊤
 ⊤  ⊤

Solving the minimization problems (34)–(36) by using Lemma 2, we obtain

$$E_{13} = \frac{1}{2}\left(V_1^\top\tilde H_{11}U_3 - V_1^\top\tilde H_{22}^\top U_3\right);$$

$$F_{12} = \frac{1}{2}\left(\left(2J_{12}S_1^{-1} - V_1^\top\tilde H_{11}U_2 + V_1^\top\tilde H_{22}^\top U_2\right)S_1S_2 + V_1^\top\tilde H_{12}V_2S_1^2 + V_1^\top\tilde H_{12}^\top V_2S_1^2\right); \tag{40}$$

$$G_{23} = \frac{1}{2}\left(S_1S_2\left(2S_2^{-1}L_{32}^\top + V_2^\top\tilde H_{11}U_3 - V_2^\top\tilde H_{22}^\top U_3\right) + S_2^2U_2^\top\tilde H_{21}U_3 + S_2^2U_2^\top\tilde H_{21}^\top U_3\right). \tag{41}$$

Solving the minimization problems (37)–(39) by using Lemma 3, we obtain

$$F_{11} = \frac{1}{2}\left(V_1^\top\tilde H_{12}V_1 + V_1^\top\tilde H_{12}^\top V_1\right);$$

$$G_{33} = \frac{1}{2}\left(U_3^\top\tilde H_{21}U_3 + U_3^\top\tilde H_{21}^\top U_3\right);$$

$$\begin{aligned}
F_{22} = \frac{1}{2}\Big(& S_1^2\left(2J_{22}S_1^{-1} - V_2^\top\tilde H_{11}U_2 + V_2^\top\tilde H_{22}^\top U_2\right)S_2S_1 \\
&+ S_1S_2\left(2S_1^{-1}J_{22}^\top - U_2^\top\tilde H_{11}^\top V_2 + U_2^\top\tilde H_{22}V_2\right)S_1^2 + S_1^2\left(V_2^\top\tilde H_{12}V_2 + V_2^\top\tilde H_{12}^\top V_2\right)S_1^2 \\
&+ S_1S_2\left(S_1^{-1}S_2J_{22}S_1^{-1} + S_1^{-1}L_{22}^\top - U_2^\top\tilde H_{21}U_2 + S_1^{-1}J_{22}^\top S_2S_1^{-1} + L_{22}S_1^{-1} - U_2^\top\tilde H_{21}^\top U_2\right)S_2S_1\Big). \tag{42}
\end{aligned}$$
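The reductions above rest on the orthogonal invariance of the Frobenius norm, $\|A\| = \|V^\top A U\|$ for orthogonal $U$, $V$, which is what allows $\|H - \tilde H\|^2$ to be split block by block. A quick numerical reminder with random data:

```python
import numpy as np

# Frobenius norm is invariant under left/right multiplication by orthogonal
# matrices: ||A|| = ||V^T A U||.  Random illustrative data.
rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))

Qu, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal U
Qv, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal V

assert np.isclose(np.linalg.norm(A), np.linalg.norm(Qv.T @ A @ Qu))
```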

Theorem 2. Given $\tilde H \in \mathbb{R}^{2n\times 2n}$, assume that the condition (17) holds. Then the matrix optimal approximation Problem OAP has a unique solution $\hat H \in \mathcal{S}_E$, which can be expressed as

$$\hat H = \begin{bmatrix} \hat E & \hat F \\ \hat G & -\hat E^\top \end{bmatrix}, \tag{43}$$

where

$$\hat E = V\begin{bmatrix} J_{11} & (J_{12} - F_{12}S_2)S_1^{-1} & \frac{1}{2}(V_1^\top\tilde H_{11}U_3 - V_1^\top\tilde H_{22}^\top U_3) \\ J_{21} & (J_{22} - F_{22}S_2)S_1^{-1} & S_2^{-1}(S_1G_{23} - L_{32}^\top) \\ J_{31} & -L_{23}^\top & -L_{33}^\top \end{bmatrix}U^\top, \tag{44}$$

$$\hat F = V\begin{bmatrix} \frac{1}{2}(V_1^\top\tilde H_{12}V_1 + V_1^\top\tilde H_{12}^\top V_1) & F_{12} & J_{13} \\ F_{12}^\top & F_{22} & J_{23} \\ J_{13}^\top & J_{23}^\top & J_{33} \end{bmatrix}V^\top, \tag{45}$$

$$\hat G = U\begin{bmatrix} L_{11} & L_{21}^\top & L_{31}^\top \\ L_{21} & S_1^{-1}(S_2J_{22} + L_{22}^\top S_1 - S_2F_{22}S_2)S_1^{-1} & G_{23} \\ L_{31} & G_{23}^\top & \frac{1}{2}(U_3^\top\tilde H_{21}U_3 + U_3^\top\tilde H_{21}^\top U_3) \end{bmatrix}U^\top, \tag{46}$$

and $F_{12}$, $G_{23}$ and $F_{22}$ are given by (40), (41) and (42), respectively.

4. Numerical algorithm and example

The above discussion leads us to formulate the following algorithm for solving Problem IEP and Problem OAP.

Algorithm 1.

(1) Input $X$, $\Lambda$, $\tilde H$.
(2) Form the matrices $\tilde\Lambda$, $\tilde X$ by (8) and (9), respectively.
(3) Partition the matrix $\tilde X$ as $\tilde X = \begin{bmatrix} \tilde X_1 \\ \tilde X_2 \end{bmatrix}$, $\tilde X_1, \tilde X_2 \in \mathbb{R}^{n\times p}$.
(4) Compute the GSVD of the matrix pair $(\tilde X_1^\top, \tilde X_2^\top)$ following (14).
(5) Compute $J_{ij}$ and $L_{ij}$, $i, j = 1, 2, 3$, according to (15) and (16), respectively.
(6) If the condition (17) holds, then continue; otherwise, go to (1).
(7) Partition the matrix $\tilde H$ as $\tilde H = \begin{bmatrix} \tilde H_{11} & \tilde H_{12} \\ \tilde H_{21} & \tilde H_{22} \end{bmatrix}$, $\tilde H_{ij} \in \mathbb{R}^{n\times n}$, $i, j = 1, 2$.
(8) Compute $F_{12}$, $G_{23}$ and $F_{22}$ by (40), (41) and (42), respectively.
(9) Compute $\hat E$, $\hat F$, $\hat G$ by (44), (45) and (46), respectively.
(10) Compute the unique solution $\hat H$ of Problem OAP by (43).
Example 1. We consider an 18-by-18 Hamiltonian matrix $H = \begin{bmatrix} E & F \\ G & -E^\top \end{bmatrix}$ arising from position and velocity control for a string of high-speed vehicles (see Refs. [25,26]), where

$$F = \mathrm{diag}(1, 0, 1, 0, 1, 0, 1, 0, 1), \quad G = \mathrm{diag}(0, 10, 0, 10, 0, 10, 0, 10, 0),$$

and

$$E = \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1
\end{bmatrix}.$$
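For reference, this model matrix can be assembled and its structural properties checked numerically (an illustrative sketch; the eigenvalue pairing $\lambda \leftrightarrow -\lambda$ is a standard property of real Hamiltonian matrices):

```python
import numpy as np

# Assemble the 18-by-18 model matrix of Example 1 from its blocks, check the
# Hamiltonian structure, and check that the spectrum is symmetric under
# negation, as it must be for a real Hamiltonian matrix.
F = np.diag([1., 0, 1, 0, 1, 0, 1, 0, 1])
G = np.diag([0., 10, 0, 10, 0, 10, 0, 10, 0])

E = np.zeros((9, 9))
for i in (0, 2, 4, 6, 8):        # odd rows (1-based): -1 on the diagonal
    E[i, i] = -1.0
for i in (1, 3, 5, 7):           # even rows (1-based): neighbour couplings
    E[i, i - 1] = 1.0
    E[i, i + 1] = -1.0

H = np.block([[E, F], [G, -E.T]])
J = np.block([[np.zeros((9, 9)), np.eye(9)],
              [-np.eye(9), np.zeros((9, 9))]])

assert np.allclose((H @ J).T, H @ J)              # Hamiltonian structure

lam = np.linalg.eigvals(H)
# every eigenvalue has a partner close to its negative
assert max(min(abs(l + m) for m in lam) for l in lam) < 1e-6
```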

We choose the first four eigenpairs of this model as the experimental data, that is,

$$\Lambda = \mathrm{diag}(-1.8049 + 1.6606i,\ -1.8049 - 1.6606i,\ -1.6758 + 1.5193i,\ -1.6758 - 1.5193i),$$

X =
   0.0042 − 0.0503i    0.0042 + 0.0503i   −0.0096 + 0.0979i   −0.0096 − 0.0979i
  −0.0548 + 0.0504i   −0.0548 − 0.0504i    0.0844 − 0.0765i    0.0844 + 0.0765i
  −0.0110 + 0.1316i   −0.0110 − 0.1316i    0.0156 − 0.1584i    0.0156 + 0.1584i
   0.0886 − 0.0815i    0.0886 + 0.0815i   −0.0521 + 0.0473i   −0.0521 − 0.0473i
   0.0136 − 0.1627i    0.0136 + 0.1627i   −0.0000 + 0.0000i   −0.0000 − 0.0000i
  −0.0886 + 0.0815i   −0.0886 − 0.0815i   −0.0521 + 0.0473i   −0.0521 − 0.0473i
  −0.0110 + 0.1316i   −0.0110 − 0.1316i   −0.0156 + 0.1584i   −0.0156 − 0.1584i
   0.0548 − 0.0504i    0.0548 + 0.0504i    0.0844 − 0.0765i    0.0844 + 0.0765i
   0.0042 − 0.0503i    0.0042 + 0.0503i    0.0096 − 0.0979i    0.0096 + 0.0979i
   0.0801 + 0.0474i    0.0801 − 0.0474i   −0.1423 − 0.0808i   −0.1423 + 0.0808i
   0.3035 + 0.0000i    0.3035 − 0.0000i   −0.5034 + 0.0000i   −0.5034 − 0.0000i
  −0.2097 − 0.1242i   −0.2097 + 0.1242i    0.2302 + 0.1307i    0.2302 − 0.1307i
  −0.4910 + 0.0000i   −0.4910 − 0.0000i    0.3111 + 0.0000i    0.3111 − 0.0000i
   0.2592 + 0.1535i    0.2592 − 0.1535i   −0.0000 − 0.0000i   −0.0000 + 0.0000i
   0.4910              0.4910              0.3111 − 0.0000i    0.3111 + 0.0000i
  −0.2097 − 0.1242i   −0.2097 + 0.1242i   −0.2302 − 0.1307i   −0.2302 + 0.1307i
  −0.3035 − 0.0000i   −0.3035 + 0.0000i   −0.5034             −0.5034
   0.0801 + 0.0474i    0.0801 − 0.0474i    0.1423 + 0.0808i    0.1423 − 0.0808i

Let $\tilde H = H$. By using Algorithm 1, the optimal approximation solution $\hat H$ can be computed, and the residual $\|\hat HX - X\Lambda\|$ and the difference $\|\hat H - H\|$ are

$$\|\hat HX - X\Lambda\| = 1.6574 \times 10^{-14}, \quad \|\hat H - H\| = 5.1573 \times 10^{-14},$$

which is consistent with our intuition: the optimal approximation solution $\hat H$ for Problem OAP should be very close to $H$.

Example 2. Assume that $n = 6$ and $p = 4$. Given

X =
  −0.33353    0.14747    0.56846                0.56846
  −0.35404    0.17226   −0.32725 + 0.031812i   −0.32725 − 0.031812i
  −0.28718    0.12007   −0.19957 + 0.017751i   −0.19957 − 0.017751i
  −0.41633    0.19232   −0.053976 − 0.021302i  −0.053976 + 0.021302i
  −0.48728    0.21889    0.046655 + 0.021066i   0.046655 − 0.021066i
  −0.34067    0.10162   −0.0053329 − 0.073078i −0.0053329 + 0.073078i
  −0.066663  −0.41768   −0.12043 + 0.16675i    −0.12043 − 0.16675i
  −0.2049    −0.40335    0.18478 + 0.051701i    0.18478 − 0.051701i
  −0.1177    −0.45743    0.40585 + 0.078452i    0.40585 − 0.078452i
  −0.13584   −0.30615   −0.1479 + 0.19619i     −0.1479 − 0.19619i
  −0.21575   −0.24821   −0.033927 − 0.26461i   −0.033927 + 0.26461i
  −0.1698    −0.37043   −0.33617 − 0.15071i    −0.33617 + 0.15071i

≜ [x₁, x₂, x₃, x₄],

$$\Lambda = \mathrm{diag}(42.795,\ -42.795,\ 2.801 + 1.6368i,\ 2.801 - 1.6368i) \triangleq \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3, \lambda_4),$$



Table 1
Residuals of the eigenpairs (λi , xi ).
(λi , xi ) (λ1 , x1 ) (λ2 , x2 ) (λ3 , x3 ) (λ4 , x4 )
∥Ĥxi − λi xi ∥ 1.4200 × 10−12 3.6768 × 10−13 4.8311 × 10−13 4.8311 × 10−13

and

H̃ =
   3.8005   1.8259   3.6873  −1.6411   0.5556   0.0611   4.2311   3.4064   1.5231   0.7544   2.4828   1.7099
   0.9246   0.074    2.9528   3.5746   0.8111   2.9871   2.6258   1.8974   0.9483   3.4895   4.4988   1.4486
   2.4274   3.2856   0.7051   0.2316   0.7949  −1.7804   1.0132  −4.159    0.9672   1.8919   4.1081   1.706
   1.9439  −1.7788   1.6228   1.4115   2.4152   3.7273  −3.3607   2.5141  −3.4111   4.3001  −3.2246   2.6704
   3.5652   2.4617   3.7419  −3.2527   1.0888   1.864    4.1906   3.5474   1.5138   4.2683   4.0899   3.6356
   3.0484   3.1677   3.6676   0.0394   0.7953  −1.6746   0.0982   2.1445   2.7084   2.9678  −3.3011   1.5465
   2.5155   2.0837   0.5189   0.4096   0.8532   1.5465   3.1789  −2.7666   2.4922   5.9405   1.9202  −2.6395
   1.7042   1.8639   2.9392   0.0353   1.4077   1.0019   3.8432   3.407    1.83     4.7332  −5.7606   5.6003
  −1.1112   2.3845   0.8143   2.6817  −0.1943   1.2987   1.2544   4.7653  −5.2462   2.632    4.3598   4.1
   2.1082  −2.8705   0.757   −0.5974   2.965    0.6778   2.2789   0.3551   0.0901  −2.9899   2.4717   1.2754
  −1.6397   1.5678   2.6272   0.8962   1.7484  −1.7394  −4.7     −3.6172   4.6077   1.2838   4.4674   5.0354
   1.3346   2.6404  −2.2119   1.9843   1.2705   2.2811   4.0851   0.3016  −5.8251  −3.861    1.6077  −3.7727

Running Algorithm 1, we verify that the condition (17) holds. Using MATLAB 6.5, we find

Ĥ =
   6.4686   6.0778   6.9095   0.85487  5.4698   2.0476   7.4368   6.2161   4.8161   2.7598   5.9875   1.5215
   6.478    3.5235   4.1491   5.0941   2.9713   4.5598   6.2161   4.6323   1.1195   8.1132   9.5019   4.6074
   5.8209   5.6991   7.1821   3.5754  −1.5448   5.6153   4.8161   1.1195   4.8752   4.3278   6.3761   3.1316
   4.8274   3.6226   4.4104   5.5522   4.8899   8.4863   2.7598   8.1132   4.3278   9.0552   6.7671   6.046
   7.4156   8.9545   8.7368   5.5493   3.9011   4.7179   5.9875   9.5019   6.3761   6.7671   9.5372   4.7617
   7.4963   4.2043   8.8346   4.6243   0.89663  4.6536   1.5215   4.6074   3.1316   6.046    4.7617   4.9633
   3.235    6.5903   4.8287   2.7916   2.9754   4.151   −6.4686  −6.478   −5.8209  −4.8274  −7.4156  −7.4963
   6.5903   6.4156   6.2162   4.4865   6.1802   7.5455  −6.0778  −3.5235  −5.6991  −3.6226  −8.9545  −4.2043
   4.8287   6.2162   3.5071   6.0483   5.803    2.9735  −6.9095  −4.1491  −7.1821  −4.4104  −8.7368  −8.8346
   2.7916   4.4865   6.0483   2.941    5.694    5.3311  −0.85487 −5.0941  −3.5754  −5.5522  −5.5493  −4.6243
   2.9754   6.1802   5.803    5.694    5.7272   4.9156  −5.4698  −2.9713   1.5448  −4.8899  −3.9011  −0.89663
   4.151    7.5455   2.9735   5.3311   4.9156   7.1472  −2.0476  −4.5598  −5.6153  −8.4863  −4.7179  −4.6536

Furthermore, we obtain the numerical results reported in Table 1, which imply that $\{(x_i, \lambda_i)\}_{i=1}^{4}$ are eigenpairs of the matrix $\hat H$.

5. Concluding remarks

In this paper, we have established a solvability condition (see Eq. (17)) for Problem IEP, together with the form of its general solution, by the GSVD of the matrix pair $(\tilde X_1^\top, \tilde X_2^\top)$. Furthermore, in the case when Problem IEP is solvable, we have shown that Problem OAP has a unique solution and have provided a formula for the minimizer $\hat H$ (see Eq. (43)).

References

[1] V. Mehrmann, The autonomous linear quadratic control problem, theory and numerical solution, in: Lecture Notes in Control and Information
Sciences, vol. 163, Springer, Heidelberg, 1991.
[2] P. Benner, D. Kressner, V. Mehrmann, Skew-Hamiltonian and Hamiltonian eigenvalue problems: theory, algorithms and applications, in:
Proceedings of the Conference on Applied Mathematics and Scientific Computing, Springer, Dordrecht, 2005, pp. 3–39.
[3] D. Chu, X. Liu, V. Mehrmann, A numerical method for computing the Hamiltonian Schur form, Numer. Math. 105 (2007) 375–412.
[4] Z. Bai, The solvability conditions for the inverse eigenvalue problem of Hermitian and generalized skew-Hamiltonian matrices and its
approximation, Inverse Problems 19 (2003) 1185–1194.
[5] Z. Zhang, X. Hu, L. Zhang, The solvability conditions for the inverse eigenvalue problem of Hermitian-generalized Hamiltonian matrices, Inverse
Problems 18 (2002) 1369–1376.
[6] Y. Yuan, H. Dai, On a class of inverse quadratic eigenvalue problem, J. Comput. Appl. Math. 235 (2011) 2662–2669.
[7] Y.-C. Kuo, W.-W. Lin, S.-F. Xu, Solutions of the partially described inverse quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl. 29 (2006)
33–53.
[8] J. Qian, R.C.E. Tan, On some inverse eigenvalue problems for Hermitian and generalized Hamiltonian/skew-Hamiltonian matrices, J. Comput.
Appl. Math. 250 (2013) 28–38.
[9] S. Gigola, L. Lebtahi, N. Thome, Inverse eigenvalue problem for normal J-Hamiltonian matrices, Appl. Math. Lett. 48 (2015) 36–40.
[10] S. Gigola, L. Lebtahi, N. Thome, The inverse eigenvalue problem for a Hermitian reflexive matrix and the optimization problem, J. Comput.
Appl. Math. 291 (2016) 449–457.
[11] P. Lancaster, U. Prells, Inverse problems for damped vibrating systems, J. Sound Vib. 283 (2005) 891–914.

[12] J.E. Mottershead, Y.M. Ram, Inverse eigenvalue problems in vibration absorption: Passive modification and active control, Mech. Syst. Signal
Process. 20 (2006) 5–44.
[13] Y. Yuan, A symmetric inverse eigenvalue problem in structural dynamic model updating, Appl. Math. Comput. 213 (2009) 516–521.
[14] Y. Yuan, H. Dai, An inverse problem for undamped gyroscopic systems, J. Comput. Appl. Math. 236 (2012) 2574–2581.
[15] F.E. Udwadia, Structural identification and damage detection from noisy modal data, J. Aerosp. Eng. 18 (2005) 179–187.
[16] Bo Dong, M.M. Lin, M.T. Chu, Parameter reconstruction of vibration systems from partial eigeninformation, J. Sound Vib. 327 (2009) 391–401.
[17] K.V. Singh, H. Ouyang, Pole assignment using state feedback with time delay in friction-induced vibration problems, Acta Mech. 224 (2013)
645–656.
[18] T.H.S. Abdelaziz, Robust pole assignment using velocity-acceleration feedback for second-order dynamical systems with singular mass matrix,
ISA Trans. 57 (2015) 71–84.
[19] B.N. Datta, S. Elhay, Y.M. Ram, D.R. Sarkissian, Partial eigenstructure assignment for the quadratic pencil, J. Sound Vib. 230 (2000) 101–110.
[20] J. Zhang, J. Ye, H. Ouyang, Static output feedback for partial eigenstructure assignment of undamped vibration systems, Mech. Syst. Signal
Process. 68–69 (2016) 555–561.
[21] M.T. Chu, G.H. Golub, Inverse Eigenvalue Problems: Theory, Algorithms, and Applications, Oxford University Press, Oxford, 2005.
[22] C.C. Paige, M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal. 18 (1981) 398–405.
[23] G.H. Golub, C.F. Van Loan, Matrix Computations, fourth ed., The Johns Hopkins University Press, Baltimore, 2013.
[24] E.W. Cheney, Introduction to Approximation Theory, AMS Chelsea Publishing, Providence, 1982.
[25] M. Athans, W.S. Levine, A. Levis, A system for the optimal and suboptimal position and velocity control for a string of high-speed vehicles, in:
Proc. 5th Int. Analogue Computation Meetings, Lausanne, Switzerland, September, 1967.
[26] A.J. Laub, A Schur method for solving algebraic Riccati equations, IEEE Trans. Automat. Control AC-24 (1979) 913–921.
