Article info

MSC: 15A09, 65F05

Keywords: Partitioned matrix; Gauss–Jordan elimination; Weighted Moore–Penrose inverse; Computational complexity

Abstract

In this paper, two new algorithms for computing the weighted Moore–Penrose inverse A†_{M,N} of a general matrix A for weights M and N, based on elementary row and column operations on two appropriate block partitioned matrices, are introduced and investigated. The computational complexity of the two introduced algorithms is analyzed in detail. By comparing computational complexities, the two algorithms proposed in this paper are always faster than those in Sheng and Chen (2013) and Ji (2014), respectively. In the end, an example is presented to demonstrate the two new algorithms.

© 2017 Elsevier Inc. All rights reserved.
1. Introduction
Throughout the paper we shall use the standard notation of [1–3]. The symbol C_r^{m×n} denotes the set of all m × n complex matrices with rank r, and C^n stands for the n-dimensional complex space. I_n represents the identity matrix of order n. For A ∈ C^{m×n}, the symbols R(A), N(A), ||A||_F, A*, A^{-1} and r(A) denote its range, null space, Frobenius norm, conjugate transpose, regular inverse and rank, respectively. R(A)^⊥ and N(A)^⊥ are the orthogonal complements of R(A) and N(A), respectively.
For any A ∈ C^{m×n}, we recall that the weighted Moore–Penrose inverse of A, denoted by A†_{M,N}, is the unique solution X ∈ C^{n×m} of the following four matrix equations:

AXA = A, (1)
XAX = X, (2)
(MAX)* = MAX, (3M)
(NXA)* = NXA, (4N)

where M and N are Hermitian positive definite matrices of orders m and n, respectively. If M = I_m and N = I_n, then the weighted Moore–Penrose inverse A†_{M,N} reduces to the Moore–Penrose (abbreviated M–P) inverse A†. The matrix A# = N^{-1}A*M is called the weighted conjugate transpose of A; it is easy to check that R(A#) = R(N^{-1}A*M) = N^{-1}R(A*) and N(A#) = N(N^{-1}A*M) = M^{-1}N(A*).
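The four defining equations can be checked numerically through the classical square-root reduction A†_{M,N} = N^{-1/2}(M^{1/2}AN^{-1/2})†M^{1/2} — a standard identity, not one of the algorithms proposed in this paper. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def sqrtm_spd(S):
    """Principal square root of a Hermitian positive definite matrix."""
    w, Q = np.linalg.eigh(S)
    return (Q * np.sqrt(w)) @ Q.conj().T

def wmp_inverse(A, M, N):
    """Weighted M-P inverse via A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2}."""
    Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)
    Nh_inv = np.linalg.inv(Nh)
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

# Check equations (1), (2), (3M), (4N) on a random rank-deficient example.
rng = np.random.default_rng(0)
m, n, r = 5, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
U = rng.standard_normal((m, m)); M = U.T @ U + np.eye(m)
V = rng.standard_normal((n, n)); N = V.T @ V + np.eye(n)
X = wmp_inverse(A, M, N)
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose(M @ A @ X, (M @ A @ X).T)
assert np.allclose(N @ X @ A, (N @ X @ A).T)
```

This reduction is useful as an independent reference when testing the Gauss–Jordan based algorithms below.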
This project was supported by NSF China (no. 11471122), Anhui Provincial Natural Science Foundation (no. 1508085MA12) and the Key Projects of Excellent Talent Fund in Anhui Provincial University support program (no. gxyqZD2016188).
E-mail address: xingpingsheng@163.com
https://doi.org/10.1016/j.amc.2017.11.041
0096-3003/© 2017 Elsevier Inc. All rights reserved.
X. Sheng / Applied Mathematics and Computation 323 (2018) 64–74 65
Table 1. Error and execution time results for computing A†_{M,N} with η = 10^{-2}.
Table 2. Error and execution time results for computing A†_{M,N} with η = 10^4.
2. Preliminaries
Lemma 2.2 [3]. Let A ∈ C_r^{m×n}, let M and N be Hermitian positive definite matrices of orders m and n, respectively, and let the columns of U ∈ C^{m×(m−r)} and V* ∈ C^{n×(n−r)} form bases for N(A*) and N(A), respectively. Then

D = \begin{pmatrix} A & M^{-1}U \\ VN & 0 \end{pmatrix}   (2.1)

is nonsingular and

D^{-1} = \begin{pmatrix} A†_{M,N} & V*(VNV*)^{-1} \\ (U*M^{-1}U)^{-1}U* & 0 \end{pmatrix}.   (2.2)
Lemma 2.3 [12]. Let A ∈ C^{m×n} be of rank r, let T be a subspace of C^n of dimension s ≤ r, and let S be a subspace of C^m of dimension m − s such that AT ⊕ S = C^m. In addition, suppose that G ∈ C^{n×m} satisfies R(G) = T and N(G) = S. Let G have an arbitrary full-rank decomposition, say G = PQ. If A has a {2}-inverse A^{(2)}_{T,S}, then:

(1) QAP is an invertible complex matrix;
(2) A^{(2)}_{T,S} = P(QAP)^{-1}Q;
(3) ν = det(QAP) = \sum_{J∈J(AG)} det (AG)_{JJ} = \sum_{J∈J(GA)} det (GA)_{JJ}.
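Lemma 2.3 can be exercised with the simplest choice G = A*, so that T = R(A*), S = N(A*) and A^{(2)}_{T,S} is the ordinary M–P inverse. The sketch below (illustrative, not from the paper) builds a full-rank decomposition G = PQ from a truncated SVD rather than from elimination:

```python
import numpy as np

def rank_factorization(G, tol=1e-10):
    """Full-rank decomposition G = P Q via a truncated SVD
    (one of many possible choices of P and Q)."""
    U, s, Vh = np.linalg.svd(G)
    r = int(np.sum(s > tol * s[0]))
    P = U[:, :r] * s[:r]          # full column rank
    Q = Vh[:r, :]                 # full row rank
    return P, Q

rng = np.random.default_rng(1)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# G = A* gives T = R(A*), S = N(A*), hence A^{(2)}_{T,S} = A+.
P, Q = rank_factorization(A.conj().T)
X = P @ np.linalg.inv(Q @ A @ P) @ Q     # Lemma 2.3 (2)
assert np.allclose(X, np.linalg.pinv(A))
```

Choosing G = A# = N^{-1}A*M instead yields the weighted inverse A†_{M,N}, which is exactly the route taken by Theorem 3.2 below.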
In [14], Ji developed a new representation for the weighted M–P inverse A†_{M,N} that is free of computing N^{-1}. Ji also proposed the following algorithm for calculating the representation.
Lemma 2.4. The total number of multiplications and divisions required for Algorithm 2.1 to compute the weighted M–P inverse A†_{M,N} of a matrix A ∈ C_r^{m×n} with positive definite matrices M ∈ C^{m×m} and N ∈ C^{n×n} is

T_{J1}(m, n, r) = m^2 n + 3mnr + (n − 2r)nr + \frac{1}{2} n^2(n − 1).   (2.3)
Proof. The complexity of computing the matrix A*M is m^2 n. For the matrix A*M with r(A*M) = r(A*) = r, r row pivoting steps are needed in Step 2 to reach the reduced row echelon matrix \begin{pmatrix} B & J_1 \\ 0 & J_2 \end{pmatrix}. The first row pivoting step involves m + n non-zero columns in (A*M  N); thus it needs m + n − 1 divisions and (m + n − 1)(n − 1) multiplications, a total of (m + n − 1)n multiplications and divisions. On the second row pivoting step, there are m + n − 1 non-zero columns to deal with; this pivoting step requires (m + n − 2)n operations. Following the same idea, the ith (1 ≤ i ≤ r) pivoting step requires (m + n − i)n operations. The r pivoting steps of Step 2 therefore require (m + n − \frac{r+1}{2})nr operations altogether. A further (m − r)nr operations are required to form \begin{pmatrix} BA & B \\ J_2 & 0 \end{pmatrix} in Step 3, due to the structure of B.

In Step 4, we first choose the n − r non-zero elements of the matrix J_2 as pivoting elements. Similar to Step 2, it takes \frac{n+r-1}{2} n(n − r) multiplications and divisions to reduce the J_2 block of \begin{pmatrix} BA & B \\ J_2 & 0 \end{pmatrix} to the form (I_{n−r}  0) and to zero out the corresponding columns of BA. Observe that r columns of B are those of the identity matrix of order r. Thus, the first of the remaining r pivoting steps involves m + 1 non-zero columns; it requires m divisions and m(n − 1) multiplications, a total of mn operations. The next r − 1 pivoting steps also deal with m + 1 non-zero columns each, since a non-zero column is brought in while another non-zero column is zeroed out (except for one entry) in the previous pivoting step. Thus, these steps need mn operations each as well, so the last r pivoting steps require mnr operations. Therefore, summing up, the computational complexity of Algorithm 2.1 for computing A†_{M,N} is

T_{J1}(m, n, r) = m^2 n + 3mnr + (n − 2r)nr + \frac{1}{2} n^2(n − 1).   (2.4)
In [21], Ji proposed an alternative method of elementary operations to compute A^{(2)}_{T,S}: first perform elementary row and column operations on the partitioned matrix \begin{pmatrix} G & I_n \\ I_m & 0 \end{pmatrix} to obtain \begin{pmatrix} I_s & 0 & P_1 \\ 0 & 0 & P_2 \\ Q_1 & Q_2 & 0 \end{pmatrix}. Then perform elementary row operations on the matrix (D  I_{m+n−s}) until (I_{m+n−s}  D^{-1}) is reached, and return the submatrix of D^{-1} consisting of the first n rows and the first m columns, i.e., A^{(2)}_{T,S}, where D = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}.
Ji's results are summarized in the following algorithm (Algorithm 2.2). Moreover, Ji [21] also analyzed its computational complexity: Lemma 2.5 [21] gives the total number of multiplications and divisions required for Algorithm 2.2 to compute A^{(2)}_{T,S}.
The final steps of Algorithm 2.3 (from [17]) transform the bordered matrix into

D = \begin{pmatrix} I_s & 0 & G_1 \\ 0 & 0 & 0 \\ G_2 & 0 & 0 \end{pmatrix}.

(3) Make the block matrices D(1, 2) and D(2, 1) zero matrices by applying elementary row and column transformations, respectively, through the matrix I_s, which yields

M_3 = \begin{pmatrix} I_s & 0 \\ 0 & 0 \\ 0 & -G_2G_1 \end{pmatrix}.

Then A^{(2)}_{T,S} = G_2G_1.
In [17], the authors also established the computational complexity of Algorithm 2.3, which is restated as the following lemma.
Lemma 2.6 [17]. The total number of multiplications and divisions required for Algorithm 2.3 to compute A^{(2)}_{T,S} is

T_{SC}(m, n, s) = mn^2 + m^2 n + 4mns − \frac{3s + 1}{2} ns.   (2.6)
3. Main results

In this section, we shall establish two explicit expressions for A†_{M,N}. Based on these expressions, we can give two different Gauss–Jordan elimination methods for computing A†_{M,N}.
Theorem 3.1. Let A ∈ C_r^{m×n}, and let M and N be Hermitian positive definite matrices of orders m and n, respectively. Then there exist two nonsingular elementary matrices P and Q of orders n and m, respectively, such that

P(A*  N) = \begin{pmatrix} B & P_1 \\ 0 & P_2 \end{pmatrix},   (3.1)

and

\begin{pmatrix} B \\ 0 \\ M^{-1} \end{pmatrix} Q = \begin{pmatrix} I_r & 0 \\ 0 & 0 \\ Q_1 & Q_2 \end{pmatrix},   (3.2)

where B ∈ C^{r×m} and r columns of B are those of I_r; further, the matrix

\begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}   (3.3)

is nonsingular. Moreover,

\begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} A†_{M,N} & N^{-1}P_2^*(P_2 N^{-1} P_2^*)^{-1} \\ (Q_2^* M Q_2)^{-1} Q_2^* M & 0 \end{pmatrix}.   (3.4)
Proof. From r(A*) = r(A) = r, we know that there exists a nonsingular elementary matrix P such that P(A*  N) = \begin{pmatrix} B & P_1 \\ 0 & P_2 \end{pmatrix}. Similarly, there exists another nonsingular elementary matrix Q such that \begin{pmatrix} B \\ 0 \\ M^{-1} \end{pmatrix} Q = \begin{pmatrix} I_r & 0 \\ 0 & 0 \\ Q_1 & Q_2 \end{pmatrix}, due to the structure of B.

It is seen from (3.1) that

PA* = \begin{pmatrix} B \\ 0 \end{pmatrix},   (3.5)

PN = \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}.   (3.6)

It follows from (3.5) and (3.6) that P_2 N^{-1} A* = 0. This means that R(N^{-1}A*) ⊂ N(P_2). The nonsingularity of P and N implies that r(N^{-1}A*) = r(A*) = r and r(P_2) = n − r. Since dim(N(P_2)) = n − (n − r) = r and dim(R(N^{-1}A*)) = r(N^{-1}A*) = r, we have

R(N^{-1}A*) = N(P_2).   (3.7)

This means

R(N^{-1}A*)^⊥ = N(P_2)^⊥.

In other words, R(P_2^*) = N(AN^{-1}) = N N(A). Following the same line, we can prove that R(Q_2) = N(A*M) = M^{-1}N(A*).

From Lemma 2.2, we know that the matrix \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix} is nonsingular, and the result in (3.4) follows from (2.2) immediately.
In summary of the above theorem, we have the following algorithm (Algorithm 3.1) for computing A†_{M,N}:

(2) Perform elementary row operations on the first n rows of the bordered matrix B_1 = \begin{pmatrix} A^* & N \\ M^{-1} & 0 \end{pmatrix} to obtain B_2 = \begin{pmatrix} B & P_1 \\ 0 & P_2 \\ M^{-1} & 0 \end{pmatrix}, where B ∈ C^{r×m} and r columns of B are those of I_r;

(3) Perform elementary column operations on the first m columns of the bordered matrix B_2 to obtain B_3 = \begin{pmatrix} I_r & 0 & P_1 \\ 0 & 0 & P_2 \\ Q_1 & Q_2 & 0 \end{pmatrix};

(4) Form the partitioned matrix B_4 = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix};

(5) Perform elementary row operations on the matrix (B_4  I) until (I  B_4^{-1}) is reached, and return the submatrix of B_4^{-1} consisting of the first n rows and the first m columns, i.e., A†_{M,N}.
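A numpy sketch of the endpoint of Algorithm 3.1, with SVD-based null-space bases standing in for the elementary operations of steps (2)–(3); the function name is illustrative, not from the paper:

```python
import numpy as np

def wmp_via_bordered_inverse(A, M, N, tol=1e-10):
    """Build B4 = [[A, Q2], [P2, 0]] and read A+_{M,N} off the
    leading n x m block of B4^{-1}, as in Theorem 3.1 / (3.4).
    Null-space bases come from an SVD instead of Gauss-Jordan steps."""
    m, n = A.shape
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))
    U0 = U[:, r:]                  # columns form a basis of N(A*)
    V0 = Vh[r:, :].conj().T        # columns form a basis of N(A)
    Q2 = np.linalg.solve(M, U0)    # R(Q2) = M^{-1} N(A*)
    P2 = V0.conj().T @ N           # R(P2*) = N N(A)
    B4 = np.block([[A, Q2], [P2, np.zeros((n - r, m - r))]])
    return np.linalg.inv(B4)[:n, :m]

# Quick check against the four defining equations.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))
U = rng.standard_normal((5, 5)); M = U.T @ U + np.eye(5)
V = rng.standard_normal((4, 4)); N = V.T @ V + np.eye(4)
X = wmp_via_bordered_inverse(A, M, N)
assert np.allclose(A @ X @ A, A) and np.allclose(X @ A @ X, X)
```

The full inversion of B_4 is used here for brevity; the algorithm itself only needs the first n rows and m columns of B_4^{-1}.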
In the next theorem, we will use the Gauss–Jordan elimination process to get a matrix G = CB with R(G) = R(A#) and N(G) = N(A#), which is also a full-rank decomposition of G. Then a novel expression A†_{M,N} = C(BAC)^{-1}B is established.
Theorem 3.2. Let A ∈ C_r^{m×n}, and let M and N be Hermitian positive definite matrices of orders m and n, respectively. There exist an elementary row operation matrix E = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix}, with E_1 ∈ C^{r×n} and E_2 ∈ C^{(n−r)×n}, and an elementary column operation matrix F = (F_1  F_2), with F_1 ∈ C^{m×r} and F_2 ∈ C^{m×(m−r)}, such that

EA*M = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix} A*M = \begin{pmatrix} B \\ 0 \end{pmatrix}   (3.8)

and

N^{-1}A*F = N^{-1}A*(F_1  F_2) = (C  0),   (3.9)

where \begin{pmatrix} B \\ 0 \end{pmatrix} and (C  0) are the reduced row and column echelon forms of A*M and N^{-1}A*, respectively, with N(B) = N(A#) and R(C) = R(A#). Further, the matrix BAC is invertible and

A†_{M,N} = C(BAC)^{-1}B.   (3.10)
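Theorem 3.2 can be sketched with an explicit floating-point Gauss–Jordan routine; the helper names below are illustrative, and partial pivoting replaces the paper's exact eliminations:

```python
import numpy as np

def rref(T, tol=1e-8):
    """Reduced row echelon form by Gauss-Jordan elimination with
    partial pivoting; returns the echelon matrix and its rank."""
    T = T.astype(float).copy()
    rows, cols = T.shape
    piv = 0
    for j in range(cols):
        if piv == rows:
            break
        k = piv + np.argmax(np.abs(T[piv:, j]))
        if abs(T[k, j]) <= tol:
            continue
        T[[piv, k]] = T[[k, piv]]
        T[piv] /= T[piv, j]                      # normalize pivot row
        mask = np.arange(rows) != piv
        T[mask] -= np.outer(T[mask, j], T[piv])  # clear column j elsewhere
        piv += 1
    return T, piv

def wmp_via_echelon(A, M, N):
    """B = nonzero rows of rref(A*M); C = nonzero columns of the column
    echelon form of N^{-1}A*; then A+_{M,N} = C (B A C)^{-1} B."""
    R1, r = rref(A.conj().T @ M)
    B = R1[:r, :]                                        # N(B) = N(A#)
    R2, _ = rref(np.linalg.solve(N, A.conj().T).T)
    C = R2[:r, :].T                                      # R(C) = R(A#)
    return C @ np.linalg.inv(B @ A @ C) @ B

# Check the four defining equations on a random rank-deficient example.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))
U = rng.standard_normal((5, 5)); M = U.T @ U + np.eye(5)
V = rng.standard_normal((4, 4)); N = V.T @ V + np.eye(4)
X = wmp_via_echelon(A, M, N)
assert np.allclose(A @ X @ A, A) and np.allclose(X @ A @ X, X)
```

Any B with N(B) = N(A#) and any C with R(C) = R(A#) of full rank works in (3.10); the echelon forms are simply the ones produced by Gauss–Jordan elimination.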
4. Computational complexities
In this section, the computational complexities of Algorithms 3.1 and 3.2 will be analyzed. We only count the multipli-
cations and divisions.
Theorem 4.1. The total number of multiplications and divisions required for Algorithm 3.1 to compute A†_{M,N} of a matrix A ∈ C_r^{m×n} is denoted T_{S1}(m, n, r).

For any matrix A ∈ C_r^{m×n}, the computational complexity of Algorithm 3.1 is compared with that of Algorithm 2.1 as follows:

T_{J1}(m, n, r) − T_{S1}(m, n, r) = m^2(n − m) + (m − r)(n − m)r + \left\{ \frac{n + 2r − 1}{2} n(n − r) + mnr − (m + n − r)\left[m^2 + (m + n)(n − r)\right] \right\}.   (4.2)
Remark 1. Algorithm 3.1 does not need to interchange blocks of certain matrices during the computation, unlike Algorithm 2.1 in [14]. The leading term of the computational complexity of both algorithms is about \frac{7}{2} n^3 multiplications and divisions when they are applied to the case r ≈ n = m for A†_{M,N}.
If we use Algorithm 2.2 to compute A†_{M,N}, the matrix G = N^{-1}A*M first needs to be calculated; this step requires n^3 + mn^2 + m^2 n operations. Therefore, according to Lemma 2.5, we obtain the computational complexity of Algorithm 2.2 for computing A†_{M,N}.
Theorem 4.2. The total number of multiplications and divisions required for Algorithm 3.2 to compute A†_{M,N} of a matrix A ∈ C_r^{m×n} is T_{S2}(m, n, r), given below.

Proof. In Step 1, forming A*M and N^{-1}A* requires n^3 + mn^2 + m^2 n operations. Step 2 needs r pivoting steps, requiring (m − \frac{1+r}{2})nr multiplications and divisions, through an analysis similar to Theorem 4.1. In Step 3, it also needs r pivoting steps to get the matrix C_3, because r(N^{-1}A*) = r; then (n − \frac{1+r}{2})rm multiplications and divisions are required.

Next, (m − r)nr + (n − r)r^2 multiplications are needed to form C_4 = \begin{pmatrix} BAC & B \\ C & 0 \end{pmatrix}, which follows from the fact that B and C are row-echelon and column-echelon reduced matrices, respectively.

In Step 5, the first pivoting step on (BAC  B) involves m + 1 nonzero columns; it requires m divisions and m(r − 1) multiplications, a total of mr operations. The second pivoting step also deals with m + 1 nonzero columns and likewise requires mr multiplications and divisions. Continuing in this way, the rth pivoting step still handles m + 1 nonzero columns and requires mr operations. Adding up, it takes mr^2 operations to compute (BAC)^{-1}B.

Then we resume elementary row and column operations on the matrix C_5 to transform it into C_6. The complexity of this process is mnr multiplications, which is the count to compute C(BAC)^{-1}B.

Hence, the total complexity of Algorithm 3.2 is

T_{S2}(m, n, r) = m^2 n + n^3 + n^2 m + \left(m − \frac{1+r}{2}\right)nr + \left(n − \frac{1+r}{2}\right)mr + (m − r)nr + (n − r)r^2 + mr^2 + mnr
               = n^3 + mn^2 + m^2 n + 4mnr + \frac{1}{2}(m − n)r^2 − r^3 − \frac{1}{2}(m + n)r.
If we use Algorithm 2.3 to compute A†_{M,N}, the matrix G = N^{-1}A*M first needs to be calculated; this step requires n^3 + mn^2 + m^2 n operations. Therefore, according to Lemma 2.6, we have the computational complexity of Algorithm 2.3 for computing A†_{M,N} as

N_{SC}(m, n, r) = n^3 + mn^2 + m^2 n + T_{SC}(m, n, r) = n^3 + 2mn^2 + 2m^2 n + 4mnr − \frac{3r + 1}{2} nr.   (4.6)
Subtracting the total complexity of Algorithm 3.2 from that of Algorithm 2.3 for any matrix A ∈ C_r^{m×n}, we have

N_{SC}(m, n, r) − T_{S2}(m, n, r) = m\left(n^2 − \frac{1}{2}r^2\right) + n(m^2 − r^2) + r^3 + \frac{1}{2}mr ≥ 0.   (4.7)

Remark 3. Inequality (4.7) implies that Algorithm 3.2 proposed in this paper is also always faster than Algorithm 2.3 for any matrix A ∈ C_r^{m×n}.
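The two counting functions and inequality (4.7) can be checked in exact arithmetic over a small grid; a sketch using Python fractions (function names are illustrative):

```python
from fractions import Fraction

def T_S2(m, n, r):
    """Operation count of Algorithm 3.2 (Theorem 4.2)."""
    F = Fraction
    return (n**3 + m*n**2 + m**2*n + 4*m*n*r
            + F(1, 2)*(m - n)*r**2 - r**3 - F(1, 2)*(m + n)*r)

def N_SC(m, n, r):
    """Operation count of Algorithm 2.3 applied to A+_{M,N}, Eq. (4.6)."""
    F = Fraction
    return n**3 + 2*m*n**2 + 2*m**2*n + 4*m*n*r - F(3*r + 1, 2)*n*r

# The difference matches the closed form (4.7) and is nonnegative
# whenever r <= min(m, n).
for m in range(1, 8):
    for n in range(1, 8):
        for r in range(1, min(m, n) + 1):
            diff = N_SC(m, n, r) - T_S2(m, n, r)
            closed = (m*(n**2 - Fraction(1, 2)*r**2)
                      + n*(m**2 - r**2) + r**3 + Fraction(1, 2)*m*r)
            assert diff == closed and diff >= 0
```

Every term of the closed form is nonnegative for r ≤ min(m, n), which is the content of Remark 3.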
5. Numerical examples
In this section, we shall use some numerical examples to demonstrate our results. First, a hand computation of A†_{M,N} is carried out for a low-order matrix. Second, a matrix A of size m × n with the random weights M = U*U + I_m and N = V*V + ηI_n is tested by these methods, where U = randn(m), V = randn(n) and η is a parameter. All the computations were performed on an Intel Pentium(R) Dual-Core CPU T4300 XP system using MATLAB 7.0. The accuracy of the computed results is measured by the four quantities defined below:

r_1 = ||A A†_{M,N} A − A||_F,
r_2 = ||A†_{M,N} A A†_{M,N} − A†_{M,N}||_F,
r_3 = ||M A A†_{M,N} − (M A A†_{M,N})*||_F,
r_4 = ||N A†_{M,N} A − (N A†_{M,N} A)*||_F.
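The four quantities translate directly into code. A sketch (using the fact, not stated in the paper but easy to check, that A†_{M,N} = A^{-1} for any invertible A, which gives an exact test case):

```python
import numpy as np

def residuals(A, M, N, X):
    """The four accuracy measures r1-r4 of Section 5 for a candidate X."""
    fro = np.linalg.norm
    r1 = fro(A @ X @ A - A)
    r2 = fro(X @ A @ X - X)
    r3 = fro(M @ A @ X - (M @ A @ X).conj().T)
    r4 = fro(N @ X @ A - (N @ X @ A).conj().T)
    return r1, r2, r3, r4

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # comfortably invertible
U = rng.standard_normal((4, 4)); M = U.T @ U + np.eye(4)
V = rng.standard_normal((4, 4)); N = V.T @ V + np.eye(4)
X = np.linalg.inv(A)   # for invertible A, A+_{M,N} = A^{-1}
assert max(residuals(A, M, N, X)) < 1e-8
```

In the experiments of Example 2 these residuals are evaluated for the output of each of the four algorithms.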
Example 1. Use Algorithms 3.1 and 3.2 to compute the weighted M–P inverse A†_{M,N} of the matrix A for the weights M and N in [14], where

A = \begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & -2 & 0 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & -1 & 0 \end{pmatrix},  M = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},  and  N = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix}.
Solution. First, we use Algorithm 3.1 to compute the weighted M–P inverse A†_{M,N}. Executing elementary row operations on the first four rows of the first partitioned matrix B_1 = \begin{pmatrix} A^* & N \\ M^{-1} & 0 \end{pmatrix}, we have

B_1 = \begin{pmatrix}
1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 \\
0 & -2 & 0 & 0 & 1 & 2 & 0 & 0 \\
3 & 0 & 2 & -1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 \\
3/2 & 0 & -1/2 & 0 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{pmatrix} \to B_2 = \begin{pmatrix}
1 & 0 & 0 & -1 & -2 & -2 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 \\
0 & 0 & 1 & 1 & 3 & 3 & -1 & 0 \\
0 & 0 & 0 & 0 & 1 & 2 & 0 & 4 \\
3/2 & 0 & -1/2 & 0 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{pmatrix}.

According to Algorithm 3.1, we obtain P_2 = (1  2  0  4) and Q_2 = (2, 0, -1, 1)^T, where the matrices P_2 and Q_2 are of full rank and satisfy R(Q_2) = M^{-1}N(A*) and R(P_2^*) = N N(A), respectively.
Next, we construct the second block matrix B_3 = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}. From Lemma 2.1, we know that the matrix B_3 is nonsingular and A†_{M,N} can be read off from B_3^{-1}. Then we perform elementary row operations to transform (B_3  I) into (I  B_3^{-1}):

(B_3 \; I) \to (I \; B_3^{-1}) = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & -1/4 & 0 & 5/4 & 7/4 & 0 \\
0 & 1 & 0 & 0 & 0 & 1/40 & -2/5 & -1/8 & -7/40 & 1/10 \\
0 & 0 & 1 & 0 & 0 & 1/4 & 0 & -1/4 & -3/4 & 0 \\
0 & 0 & 0 & 1 & 0 & 1/20 & 1/5 & -1/4 & -7/20 & 1/5 \\
0 & 0 & 0 & 0 & 1 & 1/4 & 0 & -1/4 & 1/4 & 0
\end{pmatrix}.

This yields

A†_{M,N} = \begin{pmatrix}
-1/4 & 0 & 5/4 & 7/4 \\
1/40 & -2/5 & -1/8 & -7/40 \\
1/4 & 0 & -1/4 & -3/4 \\
1/20 & 1/5 & -1/4 & -7/20
\end{pmatrix}.
Second, we will use Algorithm 3.2 to compute A†_{M,N}.
By applying elementary row operations to the first four rows of the third partitioned matrix C_1 = \begin{pmatrix} 0 & A^*M \\ N^{-1}A^* & 0 \end{pmatrix}, we get

\begin{pmatrix}
0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & -4 & 0 & 0 \\
0 & 0 & 0 & 0 & 5 & 0 & 9 & -1 \\
0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\
2 & 2 & 2 & 0 & 0 & 0 & 0 & 0 \\
-1 & -2 & -1 & 0 & 0 & 0 & 0 & 0 \\
3 & 0 & 2 & -1 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix} \to \begin{pmatrix}
0 & 0 & 0 & 0 & 1 & 0 & 0 & -2 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 2 & 2 & 0 & 0 & 0 & 0 & 0 \\
-1 & -2 & -1 & 0 & 0 & 0 & 0 & 0 \\
3 & 0 & 2 & -1 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
Denoting E = \begin{pmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix} and F = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & 0 \end{pmatrix}, we can easily check that E and F are of full rank and satisfy N(E) = N(A*M) = M^{-1}N(A*) and R(F) = R(N^{-1}A*) = N^{-1}R(A*).
By computing, we have

EAF = \begin{pmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & -2 & 0 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & -1 & 0 \end{pmatrix}
\begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & 0 \end{pmatrix}
= \begin{pmatrix} 4 & 0 & 5 \\ -1 & -5 & 0 \\ 4 & 0 & 1 \end{pmatrix}.
According to Algorithm 3.2, we execute elementary row operations on the first three rows of the fourth partitioned matrix C_2 = \begin{pmatrix} EAF & E \\ F & 0 \end{pmatrix}:

C_2 = \begin{pmatrix}
4 & 0 & 5 & 1 & 0 & 0 & -2 \\
-1 & -5 & 0 & 0 & 1 & 0 & 0 \\
4 & 0 & 1 & 0 & 0 & 1 & 1 \\
4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-1 & -1 & 0 & 0 & 0 & 0 & 0
\end{pmatrix} \to C_3 = \begin{pmatrix}
1 & 0 & 0 & -1/16 & 0 & 5/16 & 7/16 \\
0 & 1 & 0 & 1/80 & -1/5 & -1/16 & -7/80 \\
0 & 0 & 1 & 1/4 & 0 & -1/4 & -3/4 \\
4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-1 & -1 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
One then resumes elementary row and column operations on C_3, which results in

C_3 \to C_4 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/4 & 0 & -5/4 & -7/4 \\
0 & 0 & 0 & -1/40 & 2/5 & 1/8 & 7/40 \\
0 & 0 & 0 & -1/4 & 0 & 1/4 & 3/4 \\
0 & 0 & 0 & -1/20 & -1/5 & 1/4 & 7/20
\end{pmatrix}.
This leads to A†_{M,N} = F(EAF)^{-1}E, the negative of the trailing 4 × 4 block of C_4:

A†_{M,N} = \begin{pmatrix}
-1/4 & 0 & 5/4 & 7/4 \\
1/40 & -2/5 & -1/8 & -7/40 \\
1/4 & 0 & -1/4 & -3/4 \\
1/20 & 1/5 & -1/4 & -7/20
\end{pmatrix}.
Example 2. A matrix A = rand(100, 50) × rand(50, 100) of size 100 × 100, generated from the function rand in the Matrix Computation Toolbox [23], with random weights M = U*U + I_100 and N = V*V + ηI_100 for η = 10^{-2} and 10^4, is tested by Algorithms 2.2, 2.3, 3.1 and 3.2. The errors and execution times are shown in Tables 1 and 2.

From these two tables, we find that Algorithms 3.1 and 3.2 are superior to Algorithms 2.2 and 2.3, respectively, in both accuracy and computation time.
6. Conclusion
In this paper, two novel explicit expressions for A†_{M,N} are derived, and two Gauss–Jordan-like elimination procedures for computing A†_{M,N} are proposed. The computational complexity of the two introduced algorithms is analyzed in detail. The two algorithms proposed in this paper are always faster than those in [17] and [14], respectively, by comparison of their computational complexities. A further advantage of the two proposed methods is that they are free of computing A# = N^{-1}A*M, so the condition number of the algorithms in this paper does not increase compared with those in [17] and [14].
Acknowledgment
The author would like to thank the two anonymous referees for their valuable comments and suggestions that improved
the presentation of the paper.
References
[1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer-Verlag, New York, 2003.
[2] S.L. Campbell, C.D. Meyer, Generalized Inverses of Linear Transformations, Dover Publications, New York, 1979.
[3] G.R. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, Science Press, Beijing, China/New York, 2004.
[4] J. Miao, Representations for the weighted Moore–Penrose inverse of a partitioned matrix, J. Comput. Math. 7 (1989) 321–323.
[5] W. Sun, Y. Wei, Inverse order rule for weighted generalized inverse, SIAM J. Matrix Anal. Appl. 19 (1998) 772–775.
[6] S. Wang, B. Zheng, Z. Xiong, Z. Li, The condition numbers for weighted Moore–Penrose inverse and weighted linear least squares problem, Appl. Math.
Comput. 215 (2009) 197–205.
[7] W. Wang, L. Lin, Derivative estimation based on difference sequence via locally weighted least squares regression, J. Mach. Learn. Res. 16 (2015)
2617–2641.
[8] Y. Wei, D. Wang, Condition numbers and perturbation of the weighted Moore–Penrose inverse and weighted linear least squares problem, Appl. Math.
Comput. 145 (2003) 45–58.
[9] Y. Wei, H. Wu, Expression for the perturbation of the weighted Moore–Penrose inverse, Comput. Math. Appl. 39 (2000) 13–18.
[10] Z. Xu, J. Sun, C. Gu, Perturbation for a pair of oblique projectors AA†MN and BB†MN , Appl. Math. Comput. 203 (2008) 432–446.
[11] Z. Xu, C. Gu, B. Feng, Weighted acute perturbation for two matrices, Arab. J. Sci. Eng. 35 (1) (2010) 129–143.
[12] X. Sheng, G. Chen, Full-rank representation of generalized inverse A^{(2)}_{T,S} and its application, Comput. Math. Appl. 54 (2007) 1422–1430.
[13] X. Sheng, G. Chen, Y. Gong, The representation and computation of generalized inverse A^{(2)}_{T,S}, J. Comput. Appl. Math. 213 (2008) 248–257.
[14] J. Ji, Two inverse-of-N-free methods for A†M,N , Appl. Math. Comput. 232 (2014) 39–48.
[15] K.M. Anstreicher, U.G. Rothblum, Using Gauss–Jordan elimination to compute the index, generalized nullspaces and Drazin inverse, Linear Algebra Appl. 85 (1987) 221–239.
[16] X. Sheng, G. Chen, A note of computation for M–P inverse A† , Int. J. Comput. Math. 87 (2010) 2235–2241.
[17] X. Sheng, G. Chen, Innovation based on Gaussian elimination to compute generalized inverse A^{(2)}_{T,S}, Comput. Math. Appl. 65 (2013) 1823–1829.
[18] X. Sheng, Execute elementary row and column operations on the partitioned matrix to compute M–P inverse A†, Abstr. Appl. Anal. 2014 (2014) 6 pp. Article ID 596049.
[19] J. Ji, Gauss–Jordan elimination methods for the Moore–Penrose inverse of a matrix, Linear Algebra Appl. 437 (2012) 1835–1844.
[20] J. Ji, X. Chen, A new method for computing Moore–Penrose inverse through Gauss–Jordan elimination, Appl. Math. Comput. 245 (2014) 271–278.
[21] J. Ji, Computing the outer and group inverses through elementary row operations, Comput. Math. Appl. 68 (6) (2014) 655–663.
[22] P.S. Stanimirovic, M.D. Petkovic, Gauss–Jordan elimination method for computing outer inverses, Appl. Math. Comput. 219 (2013) 4667–4679.
[23] N.J. Higham, Matrix Market, National Institute of Standards and Technology, Gaithersburg, MD.