Article history: Received 18 April 2019; Received in revised form 6 November 2019

MSC: 65F18; 15A24

Keywords: Inverse eigenvalue problem; Hamiltonian matrix; Generalized singular value decomposition; Optimal approximation

Abstract. Let $J=\begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix}\in\mathbb{R}^{2n\times 2n}$. A matrix $H\in\mathbb{R}^{2n\times 2n}$ is called Hamiltonian if $(HJ)^{\top}=HJ$. In this paper, the inverse eigenvalue problem for Hamiltonian matrices is considered. The solvability condition for the inverse problem is derived, and a representation of the general solution is given in terms of the generalized singular value decomposition of a matrix pair. Furthermore, the associated optimal approximation problem for this inverse eigenvalue problem is discussed and an expression for its solution is presented.
1. Introduction
Throughout this paper, we denote the set of all m-by-n complex matrices by Cm×n , the set of all m-by-n real matrices
by Rm×n , the set of all orthogonal matrices in Rn×n by ORn×n and the identity matrix of order n by In .
Definition 1. Let $J=\begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix}\in\mathbb{R}^{2n\times 2n}$. A matrix $H\in\mathbb{R}^{2n\times 2n}$ is called Hamiltonian if $(HJ)^{\top}=HJ$. The set of all $2n\times 2n$ Hamiltonian matrices is denoted by H2n×2n.
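As a quick numerical illustration of Definition 1 (ours, not part of the original paper), the following Python/NumPy sketch builds J and tests the defining property (HJ)⊤ = HJ; the helper names and tolerance are our own choices.

```python
import numpy as np

def make_J(n):
    """Return the 2n x 2n matrix J = [[0, I_n], [-I_n, 0]]."""
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

def is_hamiltonian(H, tol=1e-12):
    """H is Hamiltonian iff H @ J is symmetric, i.e. (HJ)^T = HJ."""
    n = H.shape[0] // 2
    HJ = H @ make_J(n)
    return np.allclose(HJ, HJ.T, atol=tol)

# A generic random matrix is (almost surely) not Hamiltonian:
print(is_hamiltonian(np.random.default_rng(0).standard_normal((6, 6))))  # False
```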
The eigenvalue problem for Hamiltonian matrices arises in a number of important applications, and many algorithms
for computing their eigenvalues, invariant subspaces and the Hamiltonian Schur form can be found in [1–3] and
the references therein. The inverse eigenvalue problem (IEP) concerns the reconstruction of a structured matrix from
prescribed spectral data [4–10]. Inverse eigenvalue problems are widely used in many research fields, such as structural
dynamics [11–14], parameter identification [15,16] and pole assignment [17,18] or eigenstructure assignment [19,20].
Extensive bibliographies of inverse eigenvalue problems for matrices can be found in the book by Chu and Golub [21].
The main purpose of this paper is to investigate the solvability condition and the representation of the general solution
for the inverse eigenvalue problem of Hamiltonian matrices by using generalized singular value decomposition (GSVD).
More specifically, the following problems are considered.
Problem IEP. Given a full column rank matrix X = [x1, x2, . . . , xp] ∈ C2n×p and a diagonal matrix Λ = diag(λ1, λ2, . . . , λp) ∈ Cp×p, where the eigenpairs $\{(x_i,\lambda_i)\}_{i=1}^{p}$ are closed under complex conjugation, find a Hamiltonian matrix H ∈ H2n×2n such that

$$HX=X\Lambda.\qquad(1)$$
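The requirement that the prescribed eigenpairs be closed under complex conjugation reflects the fact that the sought matrix H is real. The short Python/NumPy sketch below (ours, purely illustrative; the example matrix is arbitrary) makes this concrete for a real matrix with a conjugate eigenvalue pair and also checks Eq. (1).

```python
import numpy as np

# A real matrix with the complex-conjugate eigenvalue pair (+i, -i):
H = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lam, X = np.linalg.eig(H)

# If H x = lam x with H real, then also H conj(x) = conj(lam) conj(x),
# so prescribed data (X, Lambda) must be closed under complex conjugation
# for a *real* solution H of Eq. (1) to exist.
x, l = X[:, 0], lam[0]
print(np.linalg.norm(H @ x - l * x))                          # ~ 0
print(np.linalg.norm(H @ x.conj() - np.conj(l) * x.conj()))   # ~ 0

# Eq. (1) for the full set of eigenpairs:
print(np.linalg.norm(H @ X - X @ np.diag(lam)))               # ~ 0
```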
Problem OAP. Given a matrix H̃ ∈ R2n×2n , find a solution H to Problem IEP such that ∥H − H̃ ∥ is minimized.
By using the generalized singular value decomposition of a matrix pair, we provide a necessary and sufficient condition
for Problem IEP to have a solution H ∈ H2n×2n and construct the solution set SE explicitly when it is nonempty. We prove
that there exists a unique solution to Problem OAP if the set SE is nonempty and present an explicit formula for the unique
solution. Two numerical examples are given to illustrate our results.
In order to solve Problems IEP and OAP, we need the following lemmas.
Lemma 1 ([2]). Let H ∈ H2n×2n. Then H is of the form

$$H=\begin{bmatrix}E & F\\ G & -E^{\top}\end{bmatrix},$$

where E, F, G ∈ Rn×n with F = F⊤, G = G⊤.
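As a small check of Lemma 1 (our illustration; the random data is arbitrary), the sketch below assembles H from blocks E, F, G with F and G symmetric and confirms that (HJ)⊤ = HJ, as required by Definition 1.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
E = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); F = S + S.T           # F = F^T
S = rng.standard_normal((n, n)); G = S + S.T           # G = G^T
H = np.block([[E, F], [G, -E.T]])                      # block form of Lemma 1

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])
HJ = H @ J
print(np.allclose(HJ, HJ.T))                           # True: H is Hamiltonian
```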
Lemma 2. Let S1 = diag(α1, α2, . . . , αs) ∈ Rs×s, S2 = diag(β1, β2, . . . , βs) ∈ Rs×s with $\alpha_i^2+\beta_i^2=1$, i = 1, . . . , s, and A, B, C, D ∈ Rg×s. Then

$$\Phi(F_{12})=\left\|F_{12}S_2S_1^{-1}-A\right\|^2+\left\|F_{12}S_2S_1^{-1}-B\right\|^2+\left\|F_{12}-C\right\|^2+\left\|F_{12}-D\right\|^2=\min$$

if and only if

$$F_{12}=\frac{1}{2}\left((A+B)S_1S_2+CS_1^2+DS_1^2\right).\qquad(2)$$
Proof. Let A = [aij], B = [bij], C = [cij], D = [dij] ∈ Rg×s, and F12 = [fij] ∈ Rg×s. Then

$$\Phi(F_{12})=\sum_{i=1}^{g}\sum_{j=1}^{s}\left(\Bigl(f_{ij}\frac{\beta_j}{\alpha_j}-a_{ij}\Bigr)^2+\Bigl(f_{ij}\frac{\beta_j}{\alpha_j}-b_{ij}\Bigr)^2+\bigl(f_{ij}-c_{ij}\bigr)^2+\bigl(f_{ij}-d_{ij}\bigr)^2\right).$$

Setting $\partial\Phi/\partial f_{ij}=0$ and using $\alpha_j^2+\beta_j^2=1$ gives $f_{ij}=\tfrac{1}{2}\bigl(\alpha_j\beta_j(a_{ij}+b_{ij})+\alpha_j^2(c_{ij}+d_{ij})\bigr)$, which is (2) in matrix form. □
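The closed-form minimizer (2) can be cross-checked against a general-purpose numerical optimizer. The following Python/SciPy sketch (ours, purely illustrative) does this for random data satisfying αi² + βi² = 1.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
g, s = 3, 4
alpha = rng.uniform(0.2, 0.9, s)
beta = np.sqrt(1.0 - alpha**2)                    # alpha_i^2 + beta_i^2 = 1
S1, S2 = np.diag(alpha), np.diag(beta)
A, B, C, D = (rng.standard_normal((g, s)) for _ in range(4))

def phi(f):
    """Objective of Lemma 2 as a function of the flattened unknown F12."""
    F = f.reshape(g, s)
    W = F @ S2 @ np.linalg.inv(S1)
    return (np.linalg.norm(W - A, 'fro')**2 + np.linalg.norm(W - B, 'fro')**2
            + np.linalg.norm(F - C, 'fro')**2 + np.linalg.norm(F - D, 'fro')**2)

F_closed = 0.5 * ((A + B) @ S1 @ S2 + (C + D) @ S1 @ S1)        # Eq. (2)
res = minimize(phi, np.zeros(g * s), method="BFGS", options={"gtol": 1e-8})
print(np.allclose(res.x.reshape(g, s), F_closed, atol=1e-6))    # expected: True
```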
Lemma 3. Assume that S1 = diag(α1, α2, . . . , αs) ∈ Rs×s, S2 = diag(β1, β2, . . . , βs) ∈ Rs×s with $\alpha_i^2+\beta_i^2=1$, i = 1, . . . , s, and A1, B1, C1, D1 ∈ Rs×s. Then

$$\Psi(F_{22})=\left\|F_{22}S_2S_1^{-1}-A_1\right\|^2+\left\|F_{22}S_2S_1^{-1}-B_1\right\|^2+\left\|F_{22}-C_1\right\|^2+\left\|S_1^{-1}S_2F_{22}S_2S_1^{-1}-D_1\right\|^2=\min,\quad\text{s.t. }F_{22}=F_{22}^{\top}\qquad(4)$$

if and only if

$$F_{22}=\frac{1}{2}\left(S_1^2(A_1+B_1)S_2S_1+S_1S_2(A_1^{\top}+B_1^{\top})S_1^2+S_1^2(C_1+C_1^{\top})S_1^2+S_1S_2(D_1+D_1^{\top})S_2S_1\right).\qquad(5)$$
Proof. Let A1 = [aij], B1 = [bij], C1 = [cij], D1 = [dij] ∈ Rs×s, and F22 = [fij] ∈ Rs×s. From (4) we have

$$\Psi(F_{22})=\sum_{i=1}^{s}\sum_{j=1}^{s}\left(\Bigl(f_{ij}\frac{\beta_j}{\alpha_j}-a_{ij}\Bigr)^2+\Bigl(f_{ij}\frac{\beta_j}{\alpha_j}-b_{ij}\Bigr)^2+\bigl(f_{ij}-c_{ij}\bigr)^2+\Bigl(f_{ij}\frac{\beta_i\beta_j}{\alpha_i\alpha_j}-d_{ij}\Bigr)^2\right).$$

Since $f_{ij}=f_{ji}$, $\Psi(F_{22})$ is a differentiable function of the $\tfrac{1}{2}s(s+1)$ variables $f_{ij}$ ($i=1,\ldots,s$; $j=i,\ldots,s$). It is easy to verify that $\Psi(F_{22})$ attains its minimum at

$$f_{ij}=\frac{\alpha_i^2(a_{ij}+b_{ij})\alpha_j\beta_j+\alpha_i^2(c_{ij}+c_{ji})\alpha_j^2+\alpha_i\beta_i(d_{ij}+d_{ji})\alpha_j\beta_j+\alpha_i\beta_i(a_{ji}+b_{ji})\alpha_j^2}{2}.\qquad(6)$$

Rewriting (6) in matrix form, we easily obtain (5). □
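The step from the entry-wise formula (6) to the matrix formula (5) can be verified numerically; the Python/NumPy sketch below (ours, for illustration only) compares the two expressions on random data and also confirms that the resulting F22 is symmetric.

```python
import numpy as np

rng = np.random.default_rng(4)
s = 5
alpha = rng.uniform(0.2, 0.9, s)
beta = np.sqrt(1.0 - alpha**2)                    # alpha_i^2 + beta_i^2 = 1
S1, S2 = np.diag(alpha), np.diag(beta)
A1, B1, C1, D1 = (rng.standard_normal((s, s)) for _ in range(4))

# Matrix form, Eq. (5)
F_mat = 0.5 * (S1 @ S1 @ (A1 + B1) @ S2 @ S1
               + S1 @ S2 @ (A1.T + B1.T) @ S1 @ S1
               + S1 @ S1 @ (C1 + C1.T) @ S1 @ S1
               + S1 @ S2 @ (D1 + D1.T) @ S2 @ S1)

# Entry-wise form, Eq. (6)
F_elem = np.empty((s, s))
for i in range(s):
    for j in range(s):
        F_elem[i, j] = 0.5 * (alpha[i]**2 * (A1[i, j] + B1[i, j]) * alpha[j] * beta[j]
                              + alpha[i]**2 * (C1[i, j] + C1[j, i]) * alpha[j]**2
                              + alpha[i] * beta[i] * (D1[i, j] + D1[j, i]) * alpha[j] * beta[j]
                              + alpha[i] * beta[i] * (A1[j, i] + B1[j, i]) * alpha[j]**2)

print(np.allclose(F_mat, F_elem), np.allclose(F_mat, F_mat.T))   # expected: True True
```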
$$H\tilde X=\tilde X\tilde\Lambda.\qquad(10)$$

Since rank(X̃) = rank(X) = p, the GSVD (see Refs. [22] and [23]) of the matrix pair (X̃1⊤, X̃2⊤) is of the form (14), in which

U = [U1, U2, U3] ∈ ORn×n,  V = [V1, V2, V3] ∈ ORn×n,

the partitions of the matrices U, V are compatible with those of Σ1 and Σ2, and

S1 = diag(α1, . . . , αs),  S2 = diag(β1, . . . , βs)

with $\alpha_i^2+\beta_i^2=1$, i = 1, . . . , s. Let

$$V^{\top}\tilde X_1\tilde\Lambda(M^{-1})^{\top}=\begin{bmatrix}J_{11}&J_{12}&J_{13}\\J_{21}&J_{22}&J_{23}\\J_{31}&J_{32}&J_{33}\end{bmatrix}\qquad(15)$$

(row blocks of sizes g, s, p − r; column blocks of sizes r − s, s, p − r), and

$$U^{\top}\tilde X_2\tilde\Lambda(M^{-1})^{\top}=\begin{bmatrix}L_{11}&L_{12}&L_{13}\\L_{21}&L_{22}&L_{23}\\L_{31}&L_{32}&L_{33}\end{bmatrix}\qquad(16)$$

(row blocks of sizes r − s, s, n − r; column blocks of sizes r − s, s, p − r).
Now, we can establish the solvability of Problem IEP as follows.
Theorem 1. Suppose that X = [x1, x2, . . . , xp] ∈ C2n×p and Λ = diag(λ1, λ2, . . . , λp) ∈ Cp×p, where rank(X) = p and the eigenpairs $\{(x_i,\lambda_i)\}_{i=1}^{p}$ are closed under complex conjugation. Separate Λ and X into real and imaginary parts, resulting in Λ̃ and X̃ as in (8) and (9). Let the partition of the matrix X̃ be given by (11), and the GSVD of the matrix
pair (X̃1⊤ , X̃2⊤ ) be given by (14). The partitions of the matrices V ⊤ X̃1 Λ̃(M −1 )⊤ and U ⊤ X̃2 Λ̃(M −1 )⊤ are given by (15) and (16).
Then Problem IEP is solvable if and only if
$$G=U\begin{bmatrix}L_{11}&L_{21}^{\top}&L_{31}^{\top}\\ L_{21}&S_1^{-1}\bigl(S_2J_{22}+L_{22}^{\top}S_1-S_2F_{22}S_2\bigr)S_1^{-1}&G_{23}\\ L_{31}&G_{23}^{\top}&G_{33}\end{bmatrix}U^{\top},\qquad(21)$$
and E13 , F12 , G23 are arbitrary matrices, and F11 , F22 , G33 are arbitrary symmetric matrices.
Proof. Using the GSVD of the matrix pair (X̃1⊤ , X̃2⊤ ) given by (14), we see that Eqs. (12) and (13) are equivalent to
If we partition V⊤EU, U⊤GU and V⊤FV as

From the relations (26)–(32), we readily obtain the solvability condition (17) and the expressions (19), (20) and (21). □
In this section, we solve Problem OAP over SE when SE is nonempty. It is easy to verify that SE is a closed convex
set when the condition (17) is satisfied. Therefore there exists a unique solution to Problem OAP (see Ref. [24]). Now,
we determine the unique solution Ĥ of Problem OAP in SE. For the given matrix H̃ ∈ R2n×2n and any matrix
H ∈ SE in (18), we have
$$\|H-\tilde H\|^2=\|E-\tilde H_{11}\|^2+\|E^{\top}+\tilde H_{22}\|^2+\|F-\tilde H_{12}\|^2+\|G-\tilde H_{21}\|^2,\qquad(33)$$

where

$$\tilde H=\begin{bmatrix}\tilde H_{11}&\tilde H_{12}\\ \tilde H_{21}&\tilde H_{22}\end{bmatrix},\qquad \tilde H_{ij}\in\mathbb{R}^{n\times n},\ i,j=1,2,$$
and E , F and G are given by (19)–(21). Upon substitution, it holds that
$$\begin{aligned}
\|H-\tilde H\|^2={}&\left\|\begin{bmatrix}J_{11}&(J_{12}-F_{12}S_2)S_1^{-1}&E_{13}\\ J_{21}&(J_{22}-F_{22}S_2)S_1^{-1}&S_2^{-1}(S_1G_{23}-L_{32}^{\top})\\ J_{31}&-L_{23}^{\top}&-L_{33}^{\top}\end{bmatrix}-V^{\top}\tilde H_{11}U\right\|^2\\
&+\left\|\begin{bmatrix}J_{11}&(J_{12}-F_{12}S_2)S_1^{-1}&E_{13}\\ J_{21}&(J_{22}-F_{22}S_2)S_1^{-1}&S_2^{-1}(S_1G_{23}-L_{32}^{\top})\\ J_{31}&-L_{23}^{\top}&-L_{33}^{\top}\end{bmatrix}+V^{\top}\tilde H_{22}^{\top}U\right\|^2\\
&+\left\|\begin{bmatrix}F_{11}&F_{12}&J_{13}\\ F_{12}^{\top}&F_{22}&J_{23}\\ J_{13}^{\top}&J_{23}^{\top}&J_{33}\end{bmatrix}-V^{\top}\tilde H_{12}V\right\|^2\\
&+\left\|\begin{bmatrix}L_{11}&L_{21}^{\top}&L_{31}^{\top}\\ L_{21}&S_1^{-1}(S_2J_{22}+L_{22}^{\top}S_1-S_2F_{22}S_2)S_1^{-1}&G_{23}\\ L_{31}&G_{23}^{\top}&G_{33}\end{bmatrix}-U^{\top}\tilde H_{21}U\right\|^2.
\end{aligned}$$
Therefore, $\|H-\tilde H\|=\min$ if and only if

$$\|E_{13}-V_1^{\top}\tilde H_{11}U_3\|^2+\|E_{13}+V_1^{\top}\tilde H_{22}^{\top}U_3\|^2=\min;\qquad(34)$$

$$\|F_{12}S_2S_1^{-1}-(J_{12}S_1^{-1}-V_1^{\top}\tilde H_{11}U_2)\|^2+\|F_{12}S_2S_1^{-1}-(J_{12}S_1^{-1}+V_1^{\top}\tilde H_{22}^{\top}U_2)\|^2+\|F_{12}-V_1^{\top}\tilde H_{12}V_2\|^2+\|F_{12}-V_1^{\top}\tilde H_{12}^{\top}V_2\|^2=\min;\qquad(35)$$

$$\|S_2^{-1}S_1G_{23}-(S_2^{-1}L_{32}^{\top}+V_2^{\top}\tilde H_{11}U_3)\|^2+\|S_2^{-1}S_1G_{23}-(S_2^{-1}L_{32}^{\top}-V_2^{\top}\tilde H_{22}^{\top}U_3)\|^2+\|G_{23}-U_2^{\top}\tilde H_{21}U_3\|^2+\|G_{23}-U_2^{\top}\tilde H_{21}^{\top}U_3\|^2=\min;\qquad(36)$$

$$\|F_{11}-V_1^{\top}\tilde H_{12}V_1\|^2=\min,\quad\text{s.t. }F_{11}=F_{11}^{\top};\qquad(37)$$

$$\|G_{33}-U_3^{\top}\tilde H_{21}U_3\|^2=\min,\quad\text{s.t. }G_{33}=G_{33}^{\top};\qquad(38)$$

$$\|F_{22}S_2S_1^{-1}-(J_{22}S_1^{-1}-V_2^{\top}\tilde H_{11}U_2)\|^2+\|F_{22}S_2S_1^{-1}-(J_{22}S_1^{-1}+V_2^{\top}\tilde H_{22}^{\top}U_2)\|^2+\|F_{22}-V_2^{\top}\tilde H_{12}V_2\|^2+\|S_1^{-1}S_2F_{22}S_2S_1^{-1}-(S_1^{-1}S_2J_{22}S_1^{-1}+S_1^{-1}L_{22}^{\top}-U_2^{\top}\tilde H_{21}U_2)\|^2=\min,\quad\text{s.t. }F_{22}=F_{22}^{\top}.\qquad(39)$$
Theorem 2. Let H̃ ∈ R2n×2n be given and assume that condition (17) holds. Then the matrix optimal approximation Problem OAP has a unique solution Ĥ ∈ SE, and Ĥ can be expressed as

$$\hat H=\begin{bmatrix}\hat E&\hat F\\ \hat G&-\hat E^{\top}\end{bmatrix},\qquad(43)$$
where

$$\hat E=V\begin{bmatrix}J_{11}&(J_{12}-F_{12}S_2)S_1^{-1}&\tfrac{1}{2}\bigl(V_1^{\top}\tilde H_{11}U_3-V_1^{\top}\tilde H_{22}^{\top}U_3\bigr)\\ J_{21}&(J_{22}-F_{22}S_2)S_1^{-1}&S_2^{-1}(S_1G_{23}-L_{32}^{\top})\\ J_{31}&-L_{23}^{\top}&-L_{33}^{\top}\end{bmatrix}U^{\top},\qquad(44)$$

$$\hat F=V\begin{bmatrix}\tfrac{1}{2}\bigl(V_1^{\top}\tilde H_{12}V_1+V_1^{\top}\tilde H_{12}^{\top}V_1\bigr)&F_{12}&J_{13}\\ F_{12}^{\top}&F_{22}&J_{23}\\ J_{13}^{\top}&J_{23}^{\top}&J_{33}\end{bmatrix}V^{\top},\qquad(45)$$
and

$$\hat G=U\begin{bmatrix}L_{11}&L_{21}^{\top}&L_{31}^{\top}\\ L_{21}&S_1^{-1}\bigl(S_2J_{22}+L_{22}^{\top}S_1-S_2F_{22}S_2\bigr)S_1^{-1}&G_{23}\\ L_{31}&G_{23}^{\top}&G_{33}\end{bmatrix}U^{\top},$$

in which F12, F22, G23 and G33 denote the minimizers of (35), (39), (36) and (38), respectively.
The above discussion leads to the following algorithm for solving Problems IEP and OAP.
Algorithm 1.
and

E =
[ −1   0   0   0   0   0   0   0   0
   1   0  −1   0   0   0   0   0   0
   0   0  −1   0   0   0   0   0   0
   0   0   1   0  −1   0   0   0   0
   0   0   0   0  −1   0   0   0   0
   0   0   0   0   1   0  −1   0   0
   0   0   0   0   0   0  −1   0   0
   0   0   0   0   0   0   1   0  −1
   0   0   0   0   0   0   0   0  −1 ].
We choose the first four eigenpairs of this model as the experimental data, that is,
Let H̃ = H. Using Algorithm 1, the optimal approximation solution Ĥ can be computed, and the residual ∥ĤX − XΛ∥ as well as the difference ∥Ĥ − H∥ are given by
which is consistent with our intuition: the optimal approximation solution Ĥ for Problem OAP should be very close to H.
Table 1
Residuals of the eigenpairs (λi, xi).

(λi, xi)          (λ1, x1)           (λ2, x2)           (λ3, x3)           (λ4, x4)
∥Ĥxi − λixi∥      1.4200 × 10^{-12}  3.6768 × 10^{-13}  4.8311 × 10^{-13}  4.8311 × 10^{-13}
and

H̃ =
[  3.8005   1.8259   3.6873  −1.6411   0.5556   0.0611   4.2311   3.4064   1.5231   0.7544   2.4828   1.7099
   0.9246   0.074    2.9528   3.5746   0.8111   2.9871   2.6258   1.8974   0.9483   3.4895   4.4988   1.4486
   2.4274   3.2856   0.7051   0.2316   0.7949  −1.7804   1.0132  −4.159    0.9672   1.8919   4.1081   1.706
   1.9439  −1.7788   1.6228   1.4115   2.4152   3.7273  −3.3607   2.5141  −3.4111   4.3001  −3.2246   2.6704
   3.5652   2.4617   3.7419  −3.2527   1.0888   1.864    4.1906   3.5474   1.5138   4.2683   4.0899   3.6356
   3.0484   3.1677   3.6676   0.0394   0.7953  −1.6746   0.0982   2.1445   2.7084   2.9678  −3.3011   1.5465
   2.5155   2.0837   0.5189   0.4096   0.8532   1.5465   3.1789  −2.7666   2.4922   5.9405   1.9202  −2.6395
   1.7042   1.8639   2.9392   0.0353   1.4077   1.0019   3.8432   3.407    1.83     4.7332  −5.7606   5.6003
  −1.1112   2.3845   0.8143   2.6817  −0.1943   1.2987   1.2544   4.7653  −5.2462   2.632    4.3598   4.1
   2.1082  −2.8705   0.757   −0.5974   2.965    0.6778   2.2789   0.3551   0.0901  −2.9899   2.4717   1.2754
  −1.6397   1.5678   2.6272   0.8962   1.7484  −1.7394  −4.7     −3.6172   4.6077   1.2838   4.4674   5.0354
   1.3346   2.6404  −2.2119   1.9843   1.2705   2.2811   4.0851   0.3016  −5.8251  −3.861    1.6077  −3.7727 ].
According to Algorithm 1, we verify that condition (17) holds. Using MATLAB 6.5, we find
Ĥ =
[  6.4686   6.0778   6.9095   0.85487  5.4698   2.0476   7.4368   6.2161   4.8161   2.7598   5.9875   1.5215
   6.478    3.5235   4.1491   5.0941   2.9713   4.5598   6.2161   4.6323   1.1195   8.1132   9.5019   4.6074
   5.8209   5.6991   7.1821   3.5754  −1.5448   5.6153   4.8161   1.1195   4.8752   4.3278   6.3761   3.1316
   4.8274   3.6226   4.4104   5.5522   4.8899   8.4863   2.7598   8.1132   4.3278   9.0552   6.7671   6.046
   7.4156   8.9545   8.7368   5.5493   3.9011   4.7179   5.9875   9.5019   6.3761   6.7671   9.5372   4.7617
   7.4963   4.2043   8.8346   4.6243   0.89663  4.6536   1.5215   4.6074   3.1316   6.046    4.7617   4.9633
   3.235    6.5903   4.8287   2.7916   2.9754   4.151   −6.4686  −6.478   −5.8209  −4.8274  −7.4156  −7.4963
   6.5903   6.4156   6.2162   4.4865   6.1802   7.5455  −6.0778  −3.5235  −5.6991  −3.6226  −8.9545  −4.2043
   4.8287   6.2162   3.5071   6.0483   5.803    2.9735  −6.9095  −4.1491  −7.1821  −4.4104  −8.7368  −8.8346
   2.7916   4.4865   6.0483   2.941    5.694    5.3311  −0.85487 −5.0941  −3.5754  −5.5522  −5.5493  −4.6243
   2.9754   6.1802   5.803    5.694    5.7272   4.9156  −5.4698  −2.9713   1.5448  −4.8899  −3.9011  −0.89663
   4.151    7.5455   2.9735   5.3311   4.9156   7.1472  −2.0476  −4.5598  −5.6153  −8.4863  −4.7179  −4.6536 ].
Furthermore, we obtain the numerical results reported in Table 1, which show that $\{(x_i,\lambda_i)\}_{i=1}^{4}$ are eigenpairs of the matrix Ĥ.
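The residuals reported in Table 1 are straightforward to reproduce once Ĥ and the prescribed data are available. A Python/NumPy sketch of the check (ours; `H_hat`, `X`, `lam` are placeholders for quantities computed beforehand, e.g. by Algorithm 1):

```python
import numpy as np

def eigenpair_residuals(H_hat, X, lam):
    """Return ||H_hat x_i - lambda_i x_i|| for each prescribed eigenpair."""
    return [np.linalg.norm(H_hat @ X[:, i] - lam[i] * X[:, i])
            for i in range(X.shape[1])]

# Example usage (H_hat, X, lam computed beforehand):
# print(eigenpair_residuals(H_hat, X, lam))   # values of order 1e-12, cf. Table 1
```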
5. Concluding remarks
In this paper, we have established a solvability condition (see Eq. (17)) for Problem IEP and derived the form of its general solution via the GSVD of the matrix pair (X̃1⊤, X̃2⊤). Furthermore, in the case when Problem IEP is solvable, we have shown
that Problem OAP has a unique solution and have provided a formula for the minimizer Ĥ (see Eq. (43)).
References
[1] V. Mehrmann, The autonomous linear quadratic control problem, theory and numerical solution, in: Lecture Notes in Control and Information
Sciences, vol. 163, Springer, Heidelberg, 1991.
[2] P. Benner, D. Kressner, V. Mehrmann, Skew-Hamiltonian and Hamiltonian eigenvalue problems: theory, algorithms and applications, in:
Proceedings of the Conference on Applied Mathematics and Scientific Computing, Springer, Dordrecht, 2005, pp. 3–39.
[3] D. Chu, X. Liu, V. Mehrmann, A numerical method for computing the Hamiltonian Schur form, Numer. Math. 105 (2007) 375–412.
[4] Z. Bai, The solvability conditions for the inverse eigenvalue problem of Hermitian and generalized skew-Hamiltonian matrices and its
approximation, Inverse Problems 19 (2003) 1185–1194.
[5] Z. Zhang, X. Hu, L. Zhang, The solvability conditions for the inverse eigenvalue problem of Hermitian-generalized Hamiltonian matrices, Inverse
Problems 18 (2002) 1369–1376.
[6] Y. Yuan, H. Dai, On a class of inverse quadratic eigenvalue problem, J. Comput. Appl. Math. 235 (2011) 2662–2669.
[7] Y.-C. Kuo, W.-W. Lin, S.-F. Xu, Solutions of the partially described inverse quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl. 29 (2006)
33–53.
[8] J. Qian, R.C.E. Tan, On some inverse eigenvalue problems for Hermitian and generalized Hamiltonian/skew-Hamiltonian matrices, J. Comput.
Appl. Math. 250 (2013) 28–38.
[9] S. Gigola, L. Lebtahi, N. Thome, Inverse eigenvalue problem for normal J-Hamiltonian matrices, Appl. Math. Lett. 48 (2015) 36–40.
[10] S. Gigola, L. Lebtahi, N. Thome, The inverse eigenvalue problem for a Hermitian reflexive matrix and the optimization problem, J. Comput.
Appl. Math. 291 (2016) 449–457.
[11] P. Lancaster, U. Prells, Inverse problems for damped vibrating systems, J. Sound Vib. 283 (2005) 891–914.
[12] J.E. Mottershead, Y.M. Ram, Inverse eigenvalue problems in vibration absorption: Passive modification and active control, Mech. Syst. Signal
Process. 20 (2006) 5–44.
[13] Y. Yuan, A symmetric inverse eigenvalue problem in structural dynamic model updating, Appl. Math. Comput. 213 (2009) 516–521.
[14] Y. Yuan, H. Dai, An inverse problem for undamped gyroscopic systems, J. Comput. Appl. Math. 236 (2012) 2574–2581.
[15] F.E. Udwadia, Structural identification and damage detection from noisy modal data, J. Aerosp. Eng. 18 (2005) 179–187.
[16] B. Dong, M.M. Lin, M.T. Chu, Parameter reconstruction of vibration systems from partial eigeninformation, J. Sound Vib. 327 (2009) 391–401.
[17] K.V. Singh, H. Ouyang, Pole assignment using state feedback with time delay in friction-induced vibration problems, Acta Mech. 224 (2013)
645–656.
[18] T.H.S. Abdelaziz, Robust pole assignment using velocity-acceleration feedback for second-order dynamical systems with singular mass matrix,
ISA Trans. 57 (2015) 71–84.
[19] B.N. Datta, S. Elhay, Y.M. Ram, D.R. Sarkissian, Partial eigenstructure assignment for the quadratic pencil, J. Sound Vib. 230 (2000) 101–110.
[20] J. Zhang, J. Ye, H. Ouyang, Static output feedback for partial eigenstructure assignment of undamped vibration systems, Mech. Syst. Signal
Process. 68–69 (2016) 555–561.
[21] M.T. Chu, G.H. Golub, Inverse Eigenvalue Problems: Theory, Algorithms, and Applications, Oxford University Press, Oxford, 2005.
[22] C.C. Paige, M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal. 18 (1981) 398–405.
[23] G.H. Golub, C.F. Van Loan, Matrix Computations, fourth ed., The Johns Hopkins University Press, Baltimore, 2013.
[24] E.W. Cheney, Introduction to Approximation Theory, AMS Chelsea Publishing, Providence, 1982.
[25] M. Athans, W.S. Levine, A. Levis, A system for the optimal and suboptimal position and velocity control for a string of high-speed vehicles, in:
Proc. 5th Int. Analogue Computation Meetings, Lausanne, Switzerland, September, 1967.
[26] A.J. Laub, A Schur method for solving algebraic Riccati equations, IEEE Trans. Automat. Control AC-24 (1979) 913–921.