
Applied Mathematics and Computation 170 (2005) 711–723

www.elsevier.com/locate/amc

An iterative method for the least squares symmetric solution of the linear matrix equation AXB = C ✩

Zhen-yun Peng

College of Mathematics and Computation Science, Hunan University of Science and Technology, Xiangtan 411201, PR China

Abstract

In this paper an iterative method is presented to solve the minimum Frobenius norm residual problem $\min \|AXB - C\|$ with unknown symmetric matrix $X$. By this iterative method, for any initial symmetric matrix $X_0$, a solution $X^*$ can be obtained within finitely many iteration steps in the absence of roundoff errors, and the solution $X^*$ with least norm can be obtained by choosing a special kind of initial symmetric matrix. In addition, the unique optimal approximation solution $\hat{X}$ to a given matrix $\overline{X}$ in the Frobenius norm can be obtained by first finding the least norm solution $\widetilde{X}^*$ of the new minimum residual problem $\min \|A\widetilde{X}B - \widetilde{C}\|$ with unknown symmetric matrix $\widetilde{X}$, where $\widetilde{C} = C - A\frac{\overline{X} + \overline{X}^T}{2}B$. Numerical examples show that the iterative method is quite efficient.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Iterative method; Minimum residual problem; Matrix nearness problem; Least-norm solution

✩ This work was supported by the China Postdoctoral Science Foundation (Grant No. 2004035645).
E-mail address: yunzhenp@163.com

0096-3003/$ - see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2004.12.032

1. Introduction

Denote by $R^{m\times n}$ and $SR^{n\times n}$ the set of $m\times n$ real matrices and the set of $n\times n$ real symmetric matrices, respectively, and by the superscripts $T$ and $+$ the transpose and the Moore–Penrose generalized inverse of a matrix, respectively. In the space $R^{m\times n}$ we define the inner product $\langle A, B\rangle = \mathrm{trace}(B^T A)$ for all $A, B \in R^{m\times n}$; the norm of a matrix $A$ generated by this inner product is then the Frobenius norm, denoted by $\|A\|$.
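As a small illustration (a sketch of our own, not part of the paper's development), this inner product and the induced Frobenius norm can be checked in MATLAB, the environment used for the experiments in Section 3:

```matlab
% Frobenius inner product <A,B> = trace(B^T A) and its induced norm.
A = rand(3, 2); B = rand(3, 2);     % arbitrary test matrices
ip  = trace(B'*A);                  % <A,B> as defined in the text
ip2 = sum(sum(A.*B));               % the same value, computed elementwise
nrm = sqrt(trace(A'*A));            % induced norm; equals norm(A, 'fro')
```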
We consider the solution of the minimum residual problem
$$\min_{X \in SR^{n\times n}} \|AXB - C\| \qquad (1.1)$$
with $A \in R^{m\times n}$, $B \in R^{n\times p}$ and $C \in R^{m\times p}$. We also consider the solution of the matrix nearness problem
$$\min_{X \in S_X} \|X - \overline{X}\|, \qquad (1.2)$$
where $\overline{X} \in R^{n\times n}$ is a given matrix and $S_X$ is the solution set of the minimum residual problem (1.1).
The well-known linear matrix equation $AXB = C$ with a symmetry condition on the solution was studied by Dai [7] and Chu [8]. The approach taken in both papers is to use the generalized singular value decomposition (GSVD) of matrices. Necessary and sufficient conditions for the existence of solutions of the matrix equation, and expressions for the solutions, were established. Peng et al. [9] presented an iterative method for finding the symmetric solution of the matrix equation $AXB = C$. They proved that the iteration can be terminated within finitely many steps for any initial matrix, and that the solution with least Frobenius norm can be obtained by choosing a special kind of initial iteration matrix. Because the matrices $A$, $B$ and $C$ occurring in practice are usually obtained from experiments, they rarely satisfy the solvability conditions of the above matrix equation. It is therefore necessary to consider the minimum Frobenius norm residual problem (1.1).
The matrix nearness problem (1.2) occurs frequently in experimental design; see for instance [5]. Here the matrix $\overline{X}$ may be obtained from experiments, but it may satisfy neither the symmetry requirement nor the minimum residual requirement. The nearness matrix $\hat{X}$ is the matrix that satisfies the symmetry and minimum residual restrictions and is closest to the given matrix $\overline{X}$ in the Frobenius norm (the spectral norm or other norms could also be used). For more on the matrix nearness problem, we refer the reader to [1–4,9,10].
In this paper, an iterative method is presented to solve the minimum Frobenius norm residual problem (1.1). By this iterative method, for any initial symmetric matrix $X_0$, a solution $X^*$ can be obtained within finitely many iteration steps in the absence of roundoff errors, and the solution $X^*$ with least norm can be obtained by choosing a special kind of initial symmetric matrix. In addition, using our iterative method, the unique solution $\hat{X}$ of the matrix nearness problem (1.2) can be obtained by first finding the least norm solution $\widetilde{X}^*$ of the new minimum residual problem $\min \|A\widetilde{X}B - \widetilde{C}\|$ with unknown symmetric matrix $\widetilde{X}$, where $\widetilde{C} = C - A\frac{\overline{X} + \overline{X}^T}{2}B$. Numerical examples show that the iterative method is quite efficient.

2. Iterative methods for solving (1.1) and (1.2)

In this section, we first introduce an iterative method to obtain the solution of the minimum residual problem (1.1). We then show that, for any initial symmetric matrix $X_0$, the matrix sequence $\{X_k\}$ generated by the iterative method converges to a solution within at most $n^2$ iteration steps in the absence of roundoff errors. We also show that if the initial symmetric matrix is chosen as $X_0 = A^TAHBB^T + BB^THA^TA$, where $H$ is an arbitrary symmetric matrix, then the solution $X^*$ obtained by the iterative method is the least Frobenius norm solution. Finally, we consider the iterative method for solving the matrix nearness problem (1.2).

To introduce the iterative method for solving the minimum residual problem (1.1), we will require the following lemma. Its proof is similar to the proofs of Lemmas 1 and 2 in [11] and of the analogous results in [6], and is omitted.

Lemma 2.1. The minimum residual problem (1.1) is equivalent to the linear matrix equation
$$A^TAXBB^T + BB^TXA^TA = A^TCB^T + BC^TA, \qquad (2.1)$$
and it is always consistent.

We present the iterative method for solving the minimum residual problem (1.1), or equivalently, for solving the linear matrix equation (2.1), as follows:

Algorithm 2.1

1. Input matrices $A \in R^{m\times n}$, $B \in R^{n\times p}$, $C \in R^{m\times p}$ and $X_0 \in SR^{n\times n}$;
2. Calculate
$$R_0 = A^TCB^T + BC^TA - A^TAX_0BB^T - BB^TX_0A^TA,$$
$$P_0 = A^TAR_0BB^T + BB^TR_0A^TA,$$
$$k := 0;$$
3. If $R_k = 0$, then stop; else, $k := k + 1$;
4. Calculate
$$X_k = X_{k-1} + \frac{\|R_{k-1}\|^2}{\|P_{k-1}\|^2}P_{k-1},$$
$$R_k = A^TCB^T + BC^TA - A^TAX_kBB^T - BB^TX_kA^TA = R_{k-1} - \frac{\|R_{k-1}\|^2}{\|P_{k-1}\|^2}\left(A^TAP_{k-1}BB^T + BB^TP_{k-1}A^TA\right),$$
$$P_k = A^TAR_kBB^T + BB^TR_kA^TA + \frac{\|R_k\|^2}{\|R_{k-1}\|^2}P_{k-1};$$
5. Go to step 3.
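The following MATLAB function is a minimal sketch of Algorithm 2.1. The function name alg21, the tolerance tol and the iteration cap maxit are our own choices: the paper stops exactly when $R_k = 0$, which in floating point arithmetic we replace by the small-tolerance test used in Section 3.

```matlab
function X = alg21(A, B, C, X0, tol, maxit)
% Sketch of Algorithm 2.1 for min ||AXB - C|| over symmetric X.
% M(Y) is the linear operator on the left-hand side of Eq. (2.1).
M = @(Y) A'*A*Y*(B*B') + (B*B')*Y*(A'*A);
X = X0;
R = A'*C*B' + B*C'*A - M(X);               % R_0
P = M(R);                                   % P_0
for k = 1:maxit
    if norm(R, 'fro') < tol, break; end     % "R_k = 0" up to roundoff
    alpha = norm(R, 'fro')^2 / norm(P, 'fro')^2;
    X = X + alpha*P;                        % X_k
    Rnew = R - alpha*M(P);                  % R_k, updated recursively
    P = M(Rnew) + (norm(Rnew, 'fro')^2 / norm(R, 'fro')^2) * P;  % P_k
    R = Rnew;
end
```

Because of roundoff, slightly more than $n^2$ steps may be needed in practice (Example 3.1 below takes 58 steps with $n = 7$), so maxit should be chosen with some slack, e.g. X = alg21(A, B, C, zeros(7), 1e-10, 100) for the data of Example 3.1.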
Algorithm 2.1 has the following basic properties.

Lemma 2.2. Assume that $X^*$ is a solution of the minimum residual problem (1.1). Then, for any initial symmetric matrix $X_0$, the sequences $\{X_i\}$, $\{R_i\}$ and $\{P_i\}$ generated by Algorithm 2.1 satisfy
$$\langle P_i, X^* - X_i\rangle = \|R_i\|^2, \qquad i = 0, 1, 2, \ldots$$

Proof. We prove the conclusion by induction. When $i = 0$, we have
$$\begin{aligned}
\langle P_0, X^* - X_0\rangle &= \langle A^TAR_0BB^T + BB^TR_0A^TA,\ X^* - X_0\rangle\\
&= \langle A^TAR_0BB^T,\ X^* - X_0\rangle + \langle BB^TR_0A^TA,\ X^* - X_0\rangle\\
&= \langle R_0,\ A^TA(X^* - X_0)BB^T\rangle + \langle R_0,\ BB^T(X^* - X_0)A^TA\rangle\\
&= \langle R_0,\ A^TA(X^* - X_0)BB^T + BB^T(X^* - X_0)A^TA\rangle = \|R_0\|^2,
\end{aligned}$$
where the last equality holds because $X^*$ satisfies (2.1), so that $A^TA(X^* - X_i)BB^T + BB^T(X^* - X_i)A^TA = R_i$ for every $i$. When $i = 1$, we have
$$\begin{aligned}
\langle P_1, X^* - X_1\rangle &= \left\langle A^TAR_1BB^T + BB^TR_1A^TA + \frac{\|R_1\|^2}{\|R_0\|^2}P_0,\ X^* - X_1\right\rangle\\
&= \langle A^TAR_1BB^T + BB^TR_1A^TA,\ X^* - X_1\rangle + \frac{\|R_1\|^2}{\|R_0\|^2}\langle P_0,\ X^* - X_1\rangle\\
&= \|R_1\|^2 + \frac{\|R_1\|^2}{\|R_0\|^2}\left\langle P_0,\ X^* - X_0 - \frac{\|R_0\|^2}{\|P_0\|^2}P_0\right\rangle\\
&= \|R_1\|^2 + \frac{\|R_1\|^2}{\|R_0\|^2}\langle P_0,\ X^* - X_0\rangle - \frac{\|R_1\|^2}{\|P_0\|^2}\langle P_0, P_0\rangle = \|R_1\|^2.
\end{aligned}$$
Assume that the conclusion holds for $i = s$ ($s > 0$), that is, $\langle P_s, X^* - X_s\rangle = \|R_s\|^2$. Then
$$\begin{aligned}
\langle P_{s+1}, X^* - X_{s+1}\rangle &= \left\langle A^TAR_{s+1}BB^T + BB^TR_{s+1}A^TA + \frac{\|R_{s+1}\|^2}{\|R_s\|^2}P_s,\ X^* - X_{s+1}\right\rangle\\
&= \langle A^TAR_{s+1}BB^T + BB^TR_{s+1}A^TA,\ X^* - X_{s+1}\rangle + \frac{\|R_{s+1}\|^2}{\|R_s\|^2}\langle P_s,\ X^* - X_{s+1}\rangle\\
&= \|R_{s+1}\|^2 + \frac{\|R_{s+1}\|^2}{\|R_s\|^2}\left\langle P_s,\ X^* - X_s - \frac{\|R_s\|^2}{\|P_s\|^2}P_s\right\rangle\\
&= \|R_{s+1}\|^2 + \frac{\|R_{s+1}\|^2}{\|R_s\|^2}\langle P_s,\ X^* - X_s\rangle - \frac{\|R_{s+1}\|^2}{\|P_s\|^2}\langle P_s, P_s\rangle = \|R_{s+1}\|^2.
\end{aligned}$$
By the principle of induction, the conclusion $\langle P_i, X^* - X_i\rangle = \|R_i\|^2$ holds for all $i = 0, 1, 2, \ldots$ $\square$

Remark 2.1. Lemma 2.2 implies that if $R_i \neq 0$, then $P_i \neq 0$ ($i = 0, 1, 2, \ldots$). Consequently, as long as $R_i \neq 0$, Algorithm 2.1 does not terminate and the next iterate is well defined.

Lemma 2.3. For the sequences $\{R_i\}$ and $\{P_i\}$ generated by Algorithm 2.1, if there exists a positive integer $k$ such that $R_i \neq 0$ for all $i = 0, 1, 2, \ldots, k$, then
$$\langle R_i, R_j\rangle = 0, \quad \langle P_i, P_j\rangle = 0, \qquad i, j = 0, 1, 2, \ldots, k,\ i \neq j.$$

Proof. Since $\langle A, B\rangle = \langle B, A\rangle$ holds for all matrices $A$ and $B$ in $R^{m\times n}$, we only need to prove the conclusion for all $0 \le i < j \le k$. We proceed by induction, in two steps.

Step 1. We show that $\langle R_i, R_{i+1}\rangle = 0$ and $\langle P_i, P_{i+1}\rangle = 0$ for all $i = 0, 1, 2, \ldots, k$. To prove this, we again use induction.
When $i = 0$, we have
$$\begin{aligned}
\langle R_0, R_1\rangle &= \left\langle R_0,\ R_0 - \frac{\|R_0\|^2}{\|P_0\|^2}\left(A^TAP_0BB^T + BB^TP_0A^TA\right)\right\rangle\\
&= \|R_0\|^2 - \frac{\|R_0\|^2}{\|P_0\|^2}\langle R_0,\ A^TAP_0BB^T + BB^TP_0A^TA\rangle\\
&= \|R_0\|^2 - \frac{\|R_0\|^2}{\|P_0\|^2}\langle A^TAR_0BB^T + BB^TR_0A^TA,\ P_0\rangle = 0
\end{aligned}$$
and
$$\begin{aligned}
\langle P_0, P_1\rangle &= \left\langle P_0,\ A^TAR_1BB^T + BB^TR_1A^TA + \frac{\|R_1\|^2}{\|R_0\|^2}P_0\right\rangle\\
&= \frac{\|R_1\|^2}{\|R_0\|^2}\langle P_0, P_0\rangle + \langle P_0,\ A^TAR_1BB^T + BB^TR_1A^TA\rangle\\
&= \frac{\|R_1\|^2\,\|P_0\|^2}{\|R_0\|^2} + \langle A^TAP_0BB^T + BB^TP_0A^TA,\ R_1\rangle\\
&= \frac{\|R_1\|^2\,\|P_0\|^2}{\|R_0\|^2} + \frac{\|P_0\|^2}{\|R_0\|^2}\langle R_0 - R_1,\ R_1\rangle = 0.
\end{aligned}$$
Assume that the conclusion holds for all $i \le s$ ($0 < s < k$). Then
$$\begin{aligned}
\langle R_s, R_{s+1}\rangle &= \left\langle R_s,\ R_s - \frac{\|R_s\|^2}{\|P_s\|^2}\left(A^TAP_sBB^T + BB^TP_sA^TA\right)\right\rangle\\
&= \|R_s\|^2 - \frac{\|R_s\|^2}{\|P_s\|^2}\langle R_s,\ A^TAP_sBB^T + BB^TP_sA^TA\rangle\\
&= \|R_s\|^2 - \frac{\|R_s\|^2}{\|P_s\|^2}\langle A^TAR_sBB^T + BB^TR_sA^TA,\ P_s\rangle\\
&= \|R_s\|^2 - \frac{\|R_s\|^2}{\|P_s\|^2}\left\langle P_s - \frac{\|R_s\|^2}{\|R_{s-1}\|^2}P_{s-1},\ P_s\right\rangle = 0
\end{aligned}$$
and
$$\begin{aligned}
\langle P_s, P_{s+1}\rangle &= \left\langle P_s,\ A^TAR_{s+1}BB^T + BB^TR_{s+1}A^TA + \frac{\|R_{s+1}\|^2}{\|R_s\|^2}P_s\right\rangle\\
&= \frac{\|R_{s+1}\|^2}{\|R_s\|^2}\langle P_s, P_s\rangle + \langle P_s,\ A^TAR_{s+1}BB^T + BB^TR_{s+1}A^TA\rangle\\
&= \frac{\|R_{s+1}\|^2\,\|P_s\|^2}{\|R_s\|^2} + \langle A^TAP_sBB^T + BB^TP_sA^TA,\ R_{s+1}\rangle\\
&= \frac{\|R_{s+1}\|^2\,\|P_s\|^2}{\|R_s\|^2} + \frac{\|P_s\|^2}{\|R_s\|^2}\langle R_s - R_{s+1},\ R_{s+1}\rangle = 0.
\end{aligned}$$
By the principle of induction, $\langle R_i, R_{i+1}\rangle = 0$ and $\langle P_i, P_{i+1}\rangle = 0$ hold for all $i = 0, 1, 2, \ldots, k$.

Step 2. Assume that $\langle R_i, R_{i+l}\rangle = 0$ and $\langle P_i, P_{i+l}\rangle = 0$ for all $0 \le i \le k$ and $1 \le l < k$; we show that $\langle R_i, R_{i+l+1}\rangle = 0$ and $\langle P_i, P_{i+l+1}\rangle = 0$. Indeed,
$$\begin{aligned}
\langle R_i, R_{i+l+1}\rangle &= \left\langle R_i,\ R_{i+l} - \frac{\|R_{i+l}\|^2}{\|P_{i+l}\|^2}\left(A^TAP_{i+l}BB^T + BB^TP_{i+l}A^TA\right)\right\rangle\\
&= -\frac{\|R_{i+l}\|^2}{\|P_{i+l}\|^2}\langle R_i,\ A^TAP_{i+l}BB^T + BB^TP_{i+l}A^TA\rangle\\
&= -\frac{\|R_{i+l}\|^2}{\|P_{i+l}\|^2}\langle A^TAR_iBB^T + BB^TR_iA^TA,\ P_{i+l}\rangle\\
&= -\frac{\|R_{i+l}\|^2}{\|P_{i+l}\|^2}\left\langle P_i - \frac{\|R_i\|^2}{\|R_{i-1}\|^2}P_{i-1},\ P_{i+l}\right\rangle = 0,
\end{aligned}$$
and
$$\begin{aligned}
\langle P_i, P_{i+l+1}\rangle &= \left\langle P_i,\ A^TAR_{i+l+1}BB^T + BB^TR_{i+l+1}A^TA + \frac{\|R_{i+l+1}\|^2}{\|R_{i+l}\|^2}P_{i+l}\right\rangle\\
&= \langle P_i,\ A^TAR_{i+l+1}BB^T + BB^TR_{i+l+1}A^TA\rangle\\
&= \langle A^TAP_iBB^T + BB^TP_iA^TA,\ R_{i+l+1}\rangle\\
&= \frac{\|P_i\|^2}{\|R_i\|^2}\langle R_i - R_{i+1},\ R_{i+l+1}\rangle = 0.
\end{aligned}$$
From Steps 1 and 2, we conclude by the principle of induction that $\langle R_i, R_j\rangle = 0$ and $\langle P_i, P_j\rangle = 0$ hold for all $i, j = 0, 1, 2, \ldots, k$, $i \neq j$. $\square$

Remark 2.2. Lemma 2.3 implies that, for any initial symmetric matrix $X_0$, a solution of the minimum residual problem (1.1) can be obtained within at most $n^2$ iteration steps. Indeed, since the residuals $R_0, R_1, R_2, \ldots$ are mutually orthogonal in the finite-dimensional matrix space $R^{n\times n}$, there certainly exists a positive integer $k \le n^2$ such that $R_k = 0$.
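Remark 2.2 is easy to probe numerically. The following sketch (our own construction, on arbitrary random test data) stores $\mathrm{vec}(R_k)$ at every step of the iteration from Algorithm 2.1 and inspects the Gram matrix of the residuals; its off-diagonal entries should be negligible compared with the diagonal.

```matlab
% Check Lemma 2.3 / Remark 2.2: the residuals R_0, R_1, ... are mutually
% orthogonal, so at most n^2 of them can be nonzero.
m = 3; n = 4; p = 2;
A = rand(m, n); B = rand(n, p); C = rand(m, p);
M = @(Y) A'*A*Y*(B*B') + (B*B')*Y*(A'*A);
X = zeros(n); R = A'*C*B' + B*C'*A - M(X); P = M(R);
V = [];
for k = 1:n^2
    if norm(R, 'fro') < 1e-10, break; end
    V = [V, R(:)];                          % record vec(R_k)
    alpha = norm(R, 'fro')^2 / norm(P, 'fro')^2;
    X = X + alpha*P;
    Rnew = R - alpha*M(P);
    P = M(Rnew) + (norm(Rnew, 'fro')^2 / norm(R, 'fro')^2) * P;
    R = Rnew;
end
G = V'*V;                                   % Gram matrix of the residuals
max(max(abs(G - diag(diag(G)))))            % off-diagonal part: ~0
```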

The following lemma from [9] is used directly in stating our main results.

Lemma 2.4. Suppose that the consistent system of linear equations $Ax = b$ has a solution $x^* \in R(A^T)$. Then $x^*$ is the unique least Frobenius norm solution of the system of linear equations.

For a matrix $A \in R^{m\times n}$, denote by $\mathrm{vec}(A)$ the $mn$-vector containing all the entries of $A$:
$$\mathrm{vec}(A) = \begin{pmatrix} A(:,1)\\ A(:,2)\\ \vdots\\ A(:,n) \end{pmatrix} \in R^{mn},$$
where $A(:,i)$ denotes the $i$th column of $A$ (i.e., Matlab style). For a vector $x \in R^{mn}$, denote by $\widetilde{\mathrm{vec}}_{m,n}(x)$ the $m\times n$ matrix containing all the entries of $x$:
$$\widetilde{\mathrm{vec}}_{m,n}(x) = \big(x(1{:}m)\ \ x(m{+}1{:}2m)\ \ \cdots\ \ x((n{-}1)m{+}1{:}mn)\big) \in R^{m\times n},$$
where $x(i{:}j)$ denotes the vector containing the elements $i$ to $j$ of $x$. Denote by $A\otimes B$ the Kronecker product of the matrices $A$ and $B$.
Then the linear matrix equation (2.1) is equivalent to the linear system
$$\left(BB^T \otimes A^TA + A^TA \otimes BB^T\right)\mathrm{vec}(X) = \mathrm{vec}\left(A^TCB^T + BC^TA\right).$$
Noting that
$$\mathrm{vec}\left(A^TAHBB^T + BB^THA^TA\right) = \left(BB^T \otimes A^TA + A^TA \otimes BB^T\right)^T\mathrm{vec}(H) \in R\left(\left(BB^T \otimes A^TA + A^TA \otimes BB^T\right)^T\right),$$
we know that if we take the initial matrix $X_0 = A^TAHBB^T + BB^THA^TA$, where $H$ is an arbitrary symmetric matrix, then all $X_k$ generated by Algorithm 2.1 are symmetric matrices and satisfy
$$\mathrm{vec}(X_k) \in R\left(\left(BB^T \otimes A^TA + A^TA \otimes BB^T\right)^T\right).$$
Hence, we have from Lemma 2.4 that if $X^*$ generated by Algorithm 2.1 is a solution of the minimum residual problem (1.1), then it is the least Frobenius norm solution. In this case, $X^*$ can be expressed as
$$X^* = \widetilde{\mathrm{vec}}_{n,n}\left(\left(BB^T \otimes A^TA + A^TA \otimes BB^T\right)^+\mathrm{vec}\left(A^TCB^T + BC^TA\right)\right). \qquad (2.2)$$
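For small problems, expression (2.2) can be evaluated directly; a MATLAB sketch is given below, in which reshape plays the role of $\mathrm{vec}$ and $\widetilde{\mathrm{vec}}_{n,n}$, and where we assume $n$ is small enough for the dense $n^2\times n^2$ pseudoinverse to be affordable.

```matlab
% Direct evaluation of the least Frobenius norm solution (2.2).
% Recall kron(P, Q)*X(:) = vec(Q*X*P'), which yields the Kronecker
% form of Eq. (2.1) since B*B' and A'*A are symmetric.
K    = kron(B*B', A'*A) + kron(A'*A, B*B');
rhs  = A'*C*B' + B*C'*A;
x    = pinv(K) * rhs(:);          % Moore-Penrose (least norm) solution
Xmin = reshape(x, n, n);          % back from vec to an n-by-n matrix
```

Algorithm 2.1 computes the same matrix iteratively without ever forming the $n^2\times n^2$ matrix $K$, which is what makes it attractive for larger $n$.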

The above conclusions on the solution of the minimum residual problem (1.1) can be stated as the following theorem; its proof is omitted.

Theorem 2.1. The minimum residual problem (1.1) is always consistent, and for any initial symmetric matrix $X_0$, the sequence $\{X_k\}$ generated by Algorithm 2.1 converges to a solution of (1.1) within at most $n^2$ iteration steps. Furthermore, if we choose the initial matrix $X_0 = A^TAHBB^T + BB^THA^TA$ ($H$ an arbitrary symmetric matrix), or in particular $X_0 = 0 \in R^{n\times n}$, then the solution $X^*$ obtained by Algorithm 2.1 is the least Frobenius norm solution of the minimum residual problem (1.1). In this case, $X^*$ can be expressed as (2.2).
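The least-norm claim of Theorem 2.1 can be probed numerically with the alg21 sketch given after Algorithm 2.1: any admissible starting matrix of the stated form should lead to the same least Frobenius norm solution. Here $H$ is arbitrary random symmetric test data of our own choosing, and the generous iteration cap allows for roundoff.

```matlab
% Two admissible initial matrices should give the same least-norm solution.
H  = rand(n); H = H + H';                  % arbitrary symmetric H
X0 = A'*A*H*(B*B') + (B*B')*H*(A'*A);
X1 = alg21(A, B, C, X0,       1e-10, 10*n^2);
X2 = alg21(A, B, C, zeros(n), 1e-10, 10*n^2);
norm(X1 - X2, 'fro')                       % should be ~0
```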

For the matrix nearness problem (1.2) there certainly exists a unique solution, since the solution set of the minimum residual problem (1.1) is a nonempty closed convex set. Noting that, for an arbitrary matrix $\overline{X} \in R^{n\times n}$,
$$\min_{X \in SR^{n\times n}} \|X - \overline{X}\|^2 = \min_{X \in SR^{n\times n}} \left\|X - \frac{\overline{X} + \overline{X}^T}{2}\right\|^2 + \left\|\frac{\overline{X} - \overline{X}^T}{2}\right\|^2,$$
and that the linear matrix equation (2.1) is equivalent to the linear matrix equation
$$A^TA\left(X - \frac{\overline{X} + \overline{X}^T}{2}\right)BB^T + BB^T\left(X - \frac{\overline{X} + \overline{X}^T}{2}\right)A^TA = A^T\left(C - A\,\frac{\overline{X} + \overline{X}^T}{2}\,B\right)B^T + B\left(C - A\,\frac{\overline{X} + \overline{X}^T}{2}\,B\right)^TA,$$
we let $\widetilde{X} = X - \frac{\overline{X} + \overline{X}^T}{2}$ and $\widetilde{C} = C - A\,\frac{\overline{X} + \overline{X}^T}{2}\,B$. Finding the unique solution of the matrix nearness problem (1.2) is then equivalent to first finding the least Frobenius norm solution of the matrix equation
$$A^TA\widetilde{X}BB^T + BB^T\widetilde{X}A^TA = A^T\widetilde{C}B^T + B\widetilde{C}^TA. \qquad (2.3)$$
Using Algorithm 2.1 with the initial matrix $\widetilde{X}_0 = A^TAHBB^T + BB^THA^TA$, where $H$ is an arbitrary symmetric matrix, or in particular $\widetilde{X}_0 = 0 \in R^{n\times n}$, we can obtain the unique least Frobenius norm solution $\widetilde{X}^*$ of the linear matrix equation (2.3). Once $\widetilde{X}^*$ is obtained, the unique solution $\hat{X}$ of the matrix nearness problem (1.2) follows; it can be expressed as
$$\hat{X} = \widetilde{X}^* + \frac{\overline{X} + \overline{X}^T}{2}.$$
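In code, this two-step procedure for the nearness problem (1.2) is short. The sketch below reuses the alg21 function given after Algorithm 2.1, with Xbar denoting the given matrix $\overline{X}$; both names are our own.

```matlab
% Solve the matrix nearness problem (1.2) via Eq. (2.3).
S      = (Xbar + Xbar')/2;          % symmetric part of the given matrix
Ctilde = C - A*S*B;                 % right-hand side matrix of (2.3)
Xtilde = alg21(A, B, Ctilde, zeros(n), 1e-10, 10*n^2);  % least-norm solution
Xhat   = Xtilde + S;                % unique solution of (1.2)
```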

3. Examples for the iterative methods

In this section we give some numerical examples to illustrate our results. All tests were performed in MATLAB 6.1, and the initial iteration matrices are chosen as zero matrices of suitable size. Because of the influence of roundoff errors, we regard a matrix $A$ as the zero matrix if $\|A\| < 10^{-10}$. Of course, this tolerance may be chosen smaller or larger according to one's needs or the size of the problem.

Example 3.1. Consider the matrices $A$, $B$ and $C$ given by
$$A = \begin{pmatrix}
4 & 3 & 1 & 3 & 1 & 3 & 2\\
3 & 2 & 3 & 4 & 3 & 2 & 1\\
4 & 3 & 1 & 3 & 1 & 3 & 2\\
3 & 1 & 3 & 1 & 3 & 2 & 1\\
4 & 3 & 1 & 3 & 1 & 3 & 2\\
3 & 1 & 3 & 1 & 3 & 2 & 1
\end{pmatrix}, \qquad
B = \begin{pmatrix}
3 & 4 & 3 & 3 & 4 & 4\\
5 & 3 & 5 & 5 & 3 & 3\\
6 & 2 & 6 & 6 & 2 & 2\\
8 & 4 & 8 & 8 & 4 & 4\\
4 & 5 & 4 & 3 & 2 & 7\\
3 & 2 & 3 & 3 & 2 & 2\\
1 & 2 & 1 & 1 & 2 & 2
\end{pmatrix},$$
$$C = \begin{pmatrix}
43 & 54 & 73 & 54 & 51 & 54\\
31 & 37 & 61 & 37 & 53 & 37\\
43 & 54 & 73 & 54 & 51 & 54\\
31 & 37 & 61 & 37 & 53 & 37\\
47 & 54 & 73 & 54 & 21 & 54\\
31 & 27 & 61 & 27 & 53 & 27
\end{pmatrix}.$$
Using Algorithm 2.1 and iterating 58 steps, we obtain the unique least Frobenius norm solution of the minimum residual problem (1.1):
$$X_{58} = \begin{pmatrix}
1.0650 & 0.2510 & 0.9062 & 0.6469 & 0.6130 & 1.8154 & 0.5729\\
0.2510 & 0.6516 & 0.0189 & 0.4239 & 1.8937 & 0.8660 & 1.3207\\
0.9062 & 0.0189 & 1.9641 & 0.3755 & 2.2609 & 0.4210 & 2.2353\\
0.6469 & 0.4239 & 0.3755 & 0.3307 & 0.2146 & 0.4136 & 1.0401\\
0.6130 & 1.8937 & 2.2609 & 0.2146 & 2.6651 & 4.3017 & 2.3216\\
1.8154 & 0.8660 & 0.4210 & 0.4136 & 4.3017 & 1.0648 & 2.4271\\
0.5729 & 1.3207 & 2.2353 & 1.0401 & 2.3216 & 2.4271 & 0.4410
\end{pmatrix},$$
with
$$\|R_{58}\| = \left\|A^TCB^T + BC^TA - A^TAX_{58}BB^T - BB^TX_{58}A^TA\right\| = 6.2572\times 10^{-11},$$
and the minimum residual
$$\min_X \|AXB - C\| = \|AX_{58}B - C\| = 179.0445.$$

We let
$$\overline{X} = \begin{pmatrix}
1 & 2 & 3 & 2 & 1 & 1 & 3\\
2 & 1 & 3 & 3 & 2 & 3 & 4\\
3 & 3 & 3 & 3 & 2 & 1 & 1\\
2 & 3 & 3 & 2 & 2 & 2 & 4\\
3 & 2 & 2 & 2 & 1 & 3 & 3\\
4 & 3 & 1 & 1 & 2 & 1 & 1\\
1 & 2 & 1 & 3 & 4 & 1 & 1
\end{pmatrix};$$

then, using Algorithm 2.1 and iterating 47 steps, we obtain the unique least Frobenius norm solution of the new linear matrix equation (2.3):
$$\widetilde{X}_{47} = \begin{pmatrix}
2.7699 & 0.1419 & 0.5455 & 0.8924 & 2.5920 & 1.6477 & 0.3693\\
0.1419 & 0.3278 & 1.1092 & 1.0844 & 1.1173 & 0.5698 & 2.0561\\
0.5455 & 1.1092 & 2.4188 & 1.6438 & 1.9861 & 0.7672 & 2.2490\\
0.8924 & 1.0844 & 1.6438 & 5.8543 & 2.6224 & 1.1759 & 0.1833\\
2.5920 & 1.1173 & 1.9861 & 2.6224 & 1.3618 & 4.7044 & 2.7716\\
1.6477 & 0.5698 & 0.7472 & 1.1759 & 4.7044 & 1.0556 & 2.7992\\
0.3693 & 2.0561 & 2.2490 & 0.1833 & 2.7716 & 2.7992 & 0.9692
\end{pmatrix},$$
with
$$\|R_{47}\| = \left\|A^T\widetilde{C}B^T + B\widetilde{C}^TA - A^TA\widetilde{X}_{47}BB^T - BB^T\widetilde{X}_{47}A^TA\right\| = 8.0309\times 10^{-10}.$$

Hence, the solution of the matrix nearness problem (1.2) is
$$\hat{X}_{47} = \widetilde{X}_{47} + \frac{\overline{X} + \overline{X}^T}{2} = \begin{pmatrix}
1.7699 & 1.8581 & 3.5455 & 2.8924 & 0.5920 & 0.8523 & 2.3693\\
1.8581 & 0.6722 & 1.8908 & 1.9156 & 3.1173 & 0.5698 & 1.0561\\
3.5455 & 1.8908 & 0.5812 & 1.3562 & 3.9861 & 1.7472 & 2.2490\\
2.8924 & 1.9156 & 1.3562 & 3.8543 & 0.6224 & 0.3241 & 3.6833\\
0.5920 & 3.1173 & 3.9861 & 0.6224 & 2.3618 & 2.2044 & 3.2716\\
0.8523 & 0.5698 & 1.7472 & 0.3241 & 2.2044 & 0.0556 & 2.7992\\
2.3693 & 1.0561 & 2.2490 & 3.6833 & 3.2716 & 2.7992 & 0.0308
\end{pmatrix}.$$
In this case, the minimum is
$$\min_{X \in S_X} \|X - \overline{X}\| = \|\hat{X}_{47} - \overline{X}\| = 25.4084.$$

Example 3.2. Let $A$ be the $100\times 100$ Hilbert matrix, with entries $A_{ij} = \frac{1}{i+j-1}$, and let $B$ be the $100\times 100$ matrix with all entries equal to 1. Let $C = A\overline{X}B$, where $\overline{X}$ is the $100\times 100$ Hilbert matrix with entries $\overline{X}_{ij} = \frac{1}{i+j-1}$. Then the minimum
$$\min_X \|AXB - C\| = 0,$$
since $\|A\overline{X}B - C\| = 0$. Using Algorithm 2.1 and iterating 1000 steps, we obtain the convergence curve for the Frobenius norm of the residual shown in Fig. 1.
Fig. 1. Convergence curve for the Frobenius norm of the residual ($\log_{10}\|AXB - C\|$ versus iteration number, falling from about $0$ to below $-10$ over 1000 iterations).
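For reference, the setup of Example 3.2 can be reproduced with a few lines of MATLAB using the alg21 sketch given after Algorithm 2.1; the tolerance and iteration count below match the description in the text.

```matlab
% Setup of Example 3.2: Hilbert matrix A, all-ones B, consistent C.
n = 100;
A = hilb(n);                       % Hilbert matrix, A(i,j) = 1/(i+j-1)
B = ones(n);                       % all entries equal to 1
C = A*hilb(n)*B;                   % C = A*Xbar*B, so min ||AXB - C|| = 0
X = alg21(A, B, C, zeros(n), 1e-10, 1000);
norm(A*X*B - C, 'fro')             % decays as shown in Fig. 1
```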

4. Conclusions

In this paper we first introduced an iterative method, Algorithm 2.1, for solving the minimum residual problem (1.1) with unknown symmetric matrix $X$. We then showed that, for any initial symmetric matrix $X_0$, the matrix sequence $\{X_k\}$ generated by Algorithm 2.1 converges to a solution within at most $n^2$ iteration steps in the absence of roundoff errors. We also showed that if the initial matrix is chosen as $X_0 = A^TAHBB^T + BB^THA^TA$, where $H$ is an arbitrary symmetric matrix, then the solution $X^*$ obtained by the iterative method is the least Frobenius norm solution. Finally, we considered using Algorithm 2.1 to solve the matrix nearness problem (1.2).

The two examples given here, together with many other examples we have tested in MATLAB, confirm the theoretical results of this paper. Of course, for problems with large and non-sparse matrices $A$, $B$ and $C$, Algorithm 2.1 may fail to terminate in finitely many steps because of roundoff errors. This is an important problem which we should study in the future.

References

[1] N.J. Higham, Computing a nearest symmetric positive semidefinite matrix, Linear Algebra Appl. 103 (1988) 103–118.
[2] K.T. Joseph, Inverse eigenvalue problem in structural design, AIAA J. 30 (1992) 2890–2896.
[3] Z. Jiang, Q. Lu, Optimal application of a matrix under spectral restriction, Math. Numer. Sinica 1 (1988) 47–52.
[4] Z.-Y. Peng, X.-Y. Hu, L. Zhang, The inverse problem of bisymmetric matrices, Numer. Linear Algebra Appl. 11 (2004) 59–73.
[5] T. Meng, Experimental design and decision support, in: Leondes (Ed.), Expert Systems: The Technology of Knowledge Management and Decision Making for the 21st Century, Vol. 1, Academic Press, 2001.
[6] P.E. Gill, W. Murray, M.H. Wright, Numerical Linear Algebra and Optimization, Addison-Wesley, Redwood City, CA, 1991.
[7] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131 (1990) 1–7.
[8] K.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl. 119 (1989) 35–50.
[9] Y.-X. Peng, X.-Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput. 160 (3) (2005) 763–777.
[10] M. Baruch, Optimization procedure to correct stiffness and flexibility matrices using vibration tests, AIAA J. 16 (1978) 1208–1210.
[11] Y.X. Yuan, Least squares solutions of matrix equation AXB = E, CXD = F, J. East China Shipbuilding Inst. (Natural Science Edition) 18 (3) (2004) 29–31.
