
Applied Mathematics and Computation 323 (2018) 64–74


Computation of weighted Moore–Penrose inverse through Gauss–Jordan elimination on bordered matrices

Xingping Sheng
School of Mathematics and Statistics, Fuyang Normal College, Anhui 236037, PR China

ARTICLE INFO

MSC: 15A09, 65F05

Keywords: Partitioned matrix, Gauss–Jordan elimination, Weighted Moore–Penrose inverse, Computational complexity

ABSTRACT

In this paper, two new algorithms for computing the weighted Moore–Penrose inverse $A^\dagger_{M,N}$ of a general matrix $A$ for weights $M$ and $N$, which are based on elementary row and column operations on two appropriate block partitioned matrices, are introduced and investigated. The computational complexity of the two introduced algorithms is analyzed in detail. By comparing their computational complexities, the two algorithms proposed in this paper are always faster than those in Sheng and Chen (2013) and Ji (2014), respectively. In the end, an example is presented to demonstrate the two new algorithms.

© 2017 Elsevier Inc. All rights reserved.

1. Introduction

Throughout the paper we shall use the standard notations of [1–3]. The symbol $C_r^{m\times n}$ denotes the set of all $m \times n$ complex matrices with rank $r$, and $C^n$ stands for the $n$-dimensional complex space. $I_n$ represents the identity matrix of order $n$. For $A \in C^{m\times n}$, the symbols $R(A)$, $N(A)$, $\|A\|_F$, $A^*$, $A^{-1}$ and $r(A)$ denote its range, null space, Frobenius norm, conjugate transpose, regular inverse and rank, respectively. $R(A)^\perp$ and $N(A)^\perp$ are the orthogonal complements of $R(A)$ and $N(A)$, respectively.

For any $A \in C^{m\times n}$, we recall that the weighted Moore–Penrose inverse of $A$, denoted by $A^\dagger_{M,N}$, is the unique solution $X \in C^{n\times m}$ of the following four matrix equations:

$AXA = A$,   (1)
$XAX = X$,   (2)
$(MAX)^* = MAX$,   (3M)
$(NXA)^* = NXA$,   (4N)

where $M$ and $N$ are Hermitian positive definite matrices of orders $m$ and $n$, respectively. If $M = I_m$ and $N = I_n$, then the weighted Moore–Penrose inverse $A^\dagger_{M,N}$ reduces to the Moore–Penrose (abbreviated M–P) inverse $A^\dagger$. The matrix $A^\# = N^{-1}A^*M$ is called the weighted conjugate transpose matrix of $A$; it is easy to check that $R(A^\#) = R(N^{-1}A^*M) = N^{-1}R(A^*)$ and $N(A^\#) = N(N^{-1}A^*M) = M^{-1}N(A^*)$.
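A quick way to sanity-check these definitions numerically is the classical reduction to the unweighted M–P inverse, $A^\dagger_{M,N} = N^{-1/2}(M^{1/2}AN^{-1/2})^\dagger M^{1/2}$ (see, e.g., [1,3]). The following NumPy sketch — our own illustration, independent of the algorithms studied below — implements this formula and verifies Eqs. (1), (2), (3M) and (4N) on a random rank-deficient matrix:

```python
import numpy as np

def spd_sqrt(M):
    # square root of a Hermitian positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.conj().T

def wmp_reference(A, M, N):
    # classical reduction: A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2}
    Mh = spd_sqrt(M)
    Nh_inv = np.linalg.inv(spd_sqrt(N))
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank-2, 5 x 4
U, V = rng.standard_normal((5, 5)), rng.standard_normal((4, 4))
M, N = U @ U.T + np.eye(5), V @ V.T + np.eye(4)                 # SPD weights
X = wmp_reference(A, M, N)
assert np.allclose(A @ X @ A, A) and np.allclose(X @ A @ X, X)  # (1), (2)
assert np.allclose(M @ A @ X, (M @ A @ X).T)                    # (3M)
assert np.allclose(N @ X @ A, (N @ X @ A).T)                    # (4N)
```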

This project was supported by NSF China (no. 11471122), Anhui Provincial Natural Science Foundation (no. 1508085MA12) and Key Projects of Excellent Talent Fund in Anhui Provincial University Support Program (no. gxyqZD2016188).
E-mail address: xingpingsheng@163.com

https://doi.org/10.1016/j.amc.2017.11.041
0096-3003/© 2017 Elsevier Inc. All rights reserved.


Let $A \in C^{m\times n}$ be of rank $r$, let $T$ be a subspace of $C^n$ of dimension $s \le r$, and let $S$ be a subspace of $C^m$ of dimension $m - s$ such that $AT \oplus S = C^m$. Then there exists a unique matrix $X$ such that $XAX = X$ with $R(X) = T$ and $N(X) = S$. This $X$ is called the outer inverse or {2} inverse of $A$ with prescribed range $T$ and null space $S$, and is denoted by $A^{(2)}_{T,S}$.

It is well known that $A^\dagger_{M,N}$ is a special {2} inverse $X$ of $A$ with $R(X) = R(A^\#) = N^{-1}R(A^*)$ and $N(X) = N(A^\#) = M^{-1}N(A^*)$; this means that $A^\dagger_{M,N} = A^{(2)}_{R(A^\#),N(A^\#)} = A^{(2)}_{N^{-1}R(A^*),M^{-1}N(A^*)}$.
The weighted M–P inverse arises in matrix computation, image reconstruction, large-scale systems and statistics. Over the last fifty years, many specialists and scholars have investigated the weighted M–P inverse $A^\dagger_{M,N}$; its representation and perturbation theories were introduced in [4–14].
One handy method of computing the inverse of a nonsingular matrix $A$ is the Gauss–Jordan elimination procedure, which executes elementary row operations on the pair $(A \;\; I)$ to transform it into $(I \;\; A^{-1})$. Moreover, Gauss–Jordan elimination can be used to determine whether or not a matrix is nonsingular. However, one cannot directly use this method to compute the weighted M–P inverse $A^\dagger_{M,N}$ of a singular square matrix $A$.
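For reference, this classical procedure on the pair $(A \;\; I)$ can be sketched in a few lines of Python; the helper name and the partial pivoting are our own additions:

```python
import numpy as np

def gauss_jordan_inverse(A):
    # row-reduce the pair (A | I) to (I | A^{-1}); raises if A is singular
    n = A.shape[0]
    W = np.hstack([A.astype(float), np.eye(n)])
    for j in range(n):
        p = j + int(np.argmax(np.abs(W[j:, j])))   # partial pivoting
        if np.isclose(W[p, j], 0.0):
            raise ValueError("matrix is singular")
        W[[j, p]] = W[[p, j]]                      # swap pivot row into place
        W[j] /= W[j, j]                            # scale pivot row, pivot -> 1
        for i in range(n):
            if i != j:
                W[i] -= W[i, j] * W[j]             # eliminate column j elsewhere
    return W[:, n:]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2))
```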
In 1987, Anstreicher and Rothblum [15] used Gauss–Jordan elimination to compute the index, generalized null spaces, and Drazin inverse. Recently, the authors of [13,16–18] used two different Gauss–Jordan elimination methods to compute $A^\dagger$ and $A^{(2)}_{T,S}$, respectively. More recently, these algorithms were further improved by Ji [14,19–21] and by Stanimirovic and Petkovic [22].
In [13,16], the author, Chen and Gong proposed an algorithm for computing the outer inverse $A^{(2)}_{T,S}$ and the M–P inverse $A^\dagger$ that starts from elementary row operations on the pair $(GA \;\; I)$. Then, Ji [19] and Stanimirovic and Petkovic [22] proposed alternative explicit expressions for $A^\dagger$ and $A^{(2)}_{T,S}$, respectively. These methods begin with elementary row operations on the pair $(G \;\; I)$ and do not need to compute $A^*A$ or $GA$. Following the line of [19], Ji [14] developed an algorithm for $A^\dagger_{M,N}$ free of computing $N^{-1}$. More recently, the author and Chen [17] started with elementary row and column operations on the partitioned matrix
$\begin{pmatrix} GAG & G \\ G & 0 \end{pmatrix}$
for computing $A^{(2)}_{T,S}$; then in [18] the author improved the algorithm of [17] to compute the M–P inverse $A^\dagger$. In [20,21] Ji proposed a new method for computing the outer inverse $A^{(2)}_{T,S}$ and the M–P inverse $A^\dagger$ by applying elementary row operations on blocked matrices.

As $A^\dagger_{M,N}$ is a special {2} inverse, the algorithms of [13,17,21,22] can be used to compute $A^\dagger_{M,N}$ with $G = A^\#$. If we use these methods to compute $A^\dagger_{M,N}$, this not only increases the computational cost by requiring $A^\# = N^{-1}A^*M$, but also worsens the condition number. The goal of this paper is to develop algorithms for $A^\dagger_{M,N}$ free of computing $A^\# = N^{-1}A^*M$.
In this paper, inspired by the ideas of [14,17,21], we propose two alternative methods of elementary row and column operations for the weighted M–P inverse $A^\dagger_{M,N}$, applying row and column operations on the matrices
$\begin{pmatrix} A^* & N \\ M^{-1} & 0 \end{pmatrix}$ and $\begin{pmatrix} 0 & A^*M \\ N^{-1}A^* & 0 \end{pmatrix}$,
respectively. Our approach is like the ones in [17,21] in that it works on a bordered matrix from which $A^\dagger_{M,N}$ is easily read off from the computed result, but the complexities of our two approaches are both lower than those in [17,21].
The paper is organized as follows. The ideas for computing $A^{(2)}_{T,S}$ in [17,21] are recalled in the next section. In Section 3, we derive two novel explicit expressions for $A^\dagger_{M,N}$ and propose two Gauss–Jordan-like elimination procedures for $A^\dagger_{M,N}$ based on these formulas. In Section 4, their computational complexities are studied. In Section 5, an illustrative example is presented to explain the corresponding improvements of the algorithms.

2. Preliminaries

The following lemmas are needed in what follows:



Lemma 2.1 [1,2]. Let $A \in C^{m\times n}$. Then for the Moore–Penrose inverse $A^\dagger$ and the weighted Moore–Penrose inverse $A^\dagger_{M,N}$, one has

(a) $A^\dagger = A^{(2)}_{R(A^*),N(A^*)}$;
(b) $A^\dagger_{M,N} = A^{(2)}_{R(A^\#),N(A^\#)} = A^{(2)}_{N^{-1}R(A^*),M^{-1}N(A^*)}$.

Lemma 2.2 [3]. Let $A \in C_r^{m\times n}$, let $M$ and $N$ be Hermitian positive definite matrices of order $m$ and $n$ respectively, and let the columns of $U \in C_{m-r}^{m\times(m-r)}$ and $V^* \in C_{n-r}^{n\times(n-r)}$ form bases for $N(A^*)$ and $N(A)$, respectively. Then

$D = \begin{pmatrix} A & M^{-1}U \\ VN & 0 \end{pmatrix}$   (2.1)

is nonsingular and

$D^{-1} = \begin{pmatrix} A^\dagger_{M,N} & V^*(VNV^*)^{-1} \\ (U^*M^{-1}U)^{-1}U^*M^{-1} & 0 \end{pmatrix}$.   (2.2)
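A direct numerical check of Lemma 2.2 is straightforward: build $D$ from null-space bases (obtained here by SVD) and read $A^\dagger_{M,N}$ off the top-left block of $D^{-1}$. The sketch below, with names of our own choosing, verifies the defining equations on a random instance:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r
Um, Vn = rng.standard_normal((m, m)), rng.standard_normal((n, n))
M, N = Um @ Um.T + np.eye(m), Vn @ Vn.T + np.eye(n)              # SPD weights

U = null_space(A.conj().T)          # columns form a basis of N(A*)
V = null_space(A).conj().T          # columns of V* form a basis of N(A)
D = np.block([[A, np.linalg.inv(M) @ U],
              [V @ N, np.zeros((n - r, m - r))]])                # Eq. (2.1)
X = np.linalg.inv(D)[:n, :m]        # candidate A+_{M,N} by Eq. (2.2)

assert np.allclose(A @ X @ A, A) and np.allclose(X @ A @ X, X)
assert np.allclose(M @ A @ X, (M @ A @ X).T)                     # (3M)
assert np.allclose(N @ X @ A, (N @ X @ A).T)                     # (4N)
```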

Lemma 2.3 [12]. Let $A \in C^{m\times n}$ be of rank $r$, let $T$ be a subspace of $C^n$ of dimension $s \le r$, and let $S$ be a subspace of $C^m$ of dimension $m - s$ such that $AT \oplus S = C^m$. In addition, suppose that $G \in C^{n\times m}$ satisfies $R(G) = T$ and $N(G) = S$. Let $G$ have an arbitrary full-rank decomposition, that is, $G = PQ$. If $A$ has a {2} inverse $A^{(2)}_{T,S}$, then:

(1) $QAP$ is an invertible complex matrix;
(2) $A^{(2)}_{T,S} = P(QAP)^{-1}Q$;
(3) $\nu = \det(QAP) = \sum_{J\in\mathcal{J}(AG)} (AG)^J_J = \sum_{J\in\mathcal{J}(GA)} (GA)^J_J$.


In [14], Ji developed a new representation for the weighted M–P inverse $A^\dagger_{M,N}$ free of computing $N^{-1}$, and proposed the following algorithm for calculating this representation.

Algorithm 2.1 [14]. Weighted M–P inverse-Ji

(1) Input: $A \in C_r^{m\times n}$ and positive definite matrices $M \in C^{m\times m}$ and $N \in C^{n\times n}$;
(2) Compute $A^*M$ and execute elementary row operations on the matrix $(A^*M \;\; N)$ until the matrix formed by its first $m$ columns is in reduced row echelon form, reaching $\begin{pmatrix} B & J_1 \\ 0 & J_2 \end{pmatrix}$, where $B \in C^{r\times m}$ and $r$ columns of $B$ are those of $I_r$;
(3) Compute $BA$ and form $\begin{pmatrix} BA & B \\ J_2 & 0 \end{pmatrix}$;
(4) Perform elementary row operations on the matrix $\begin{pmatrix} BA & B \\ J_2 & 0 \end{pmatrix}$ until $(I_n \;\; X)$ is reached, and return $X$, i.e., $A^\dagger_{M,N}$.
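A compact transcription of Algorithm 2.1 in exact SymPy arithmetic reads as follows. This is a sketch: SymPy's rref reduces every column, not only the first $m$, which is harmless because a row-equivalent $J_2$ block leads to the same final reduced form.

```python
from sympy import Matrix

def wmp_ji(A, M, N):
    # sketch of Algorithm 2.1 (Ji [14]); exact arithmetic, names follow the paper
    m, n = A.shape
    r = (A.H * M).rank()
    W = Matrix.hstack(A.H * M, N).rref()[0]        # row-reduce (A*M  N)
    B, J2 = W[:r, :m], W[r:, m:]                   # split (B J1; 0 J2)
    T = Matrix.vstack(Matrix.hstack(B * A, B),
                      Matrix.hstack(J2, Matrix.zeros(n - r, m)))
    return T.rref()[0][:, n:]                      # (I_n  X) -> X = A+_{M,N}
```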
Nevertheless, Ji [14] did not analyze the computational complexity of Algorithm 2.1. We consider it now.

Lemma 2.4. The total number of multiplications and divisions required for Algorithm 2.1 to compute the weighted M–P inverse $A^\dagger_{M,N}$ of a matrix $A \in C_r^{m\times n}$ with positive definite matrices $M \in C^{m\times m}$ and $N \in C^{n\times n}$ is

$T_{J1}(m, n, r) = m^2n + 3mnr + (n - 2r)nr + \frac{1}{2}n^2(n - 1)$.   (2.3)
Proof. The complexity of computing the matrix $A^*M$ is $m^2n$. For the matrix $A^*M$ with $r(A^*M) = r(A^*) = r$, $r$ row pivoting steps are needed in Step 2 to reach the reduced row echelon matrix $\begin{pmatrix} B & J_1 \\ 0 & J_2 \end{pmatrix}$. The first row pivoting step involves $m+n$ non-zero columns in $(A^*M \;\; N)$; thus it needs $m+n-1$ divisions and $(m+n-1)(n-1)$ multiplications, a total of $(m+n-1)n$ multiplications and divisions. On the second row pivoting step, there are $m+n-1$ non-zero columns to deal with, and this pivoting step requires $(m+n-2)n$ operations. Following the same idea, the $i$th ($1 \le i \le r$) pivoting step requires $(m+n-i)n$ operations, so the $r$ pivoting steps of Step 2 altogether require $(m+n-\frac{r+1}{2})nr$ operations. Due to the structure of $B$, $(m-r)nr$ operations are required to form $\begin{pmatrix} BA & B \\ J_2 & 0 \end{pmatrix}$ in Step 3.

In Step 4, we first choose the $n-r$ non-zero elements of the matrix $J_2$ as pivoting elements. Similar to Step 2, it takes $\frac{n+r-1}{2}n(n-r)$ multiplications and divisions to eliminate, in $\begin{pmatrix} BA & B \\ J_2 & 0 \end{pmatrix}$, the columns met by these pivots. Observe that $r$ columns of $B$ are those of the identity matrix of order $r$; thus the first of the remaining $r$ pivoting steps involves $m+1$ non-zero columns, and requires $m$ divisions and $m(n-1)$ multiplications, a total of $mn$ operations. The next $r-1$ pivoting steps also need to deal with $m+1$ non-zero columns, since a non-zero column is brought in while another non-zero column is zeroed out, except for one entry, in the previous pivoting step. Thus these steps need $mn$ operations each as well, and the last $r$ pivoting steps require $mnr$ operations. Summing up, we obtain the computational complexity of Algorithm 2.1 for computing $A^\dagger_{M,N}$:

$T_{J1}(m, n, r) = m^2n + 3mnr + (n - 2r)nr + \frac{1}{2}n^2(n - 1)$.   (2.4)  □
In [21], Ji proposed an alternative method of elementary operations to compute $A^{(2)}_{T,S}$: first perform elementary row and column operations on the partitioned matrix
$\begin{pmatrix} G & I_n \\ I_m & 0 \end{pmatrix}$ into $\begin{pmatrix} I_s & 0 & P_1 \\ 0 & 0 & P_2 \\ Q_1 & Q_2 & 0 \end{pmatrix}$.
Then perform elementary row operations on the matrix $(D \;\; I_{m+n-s})$ until $(I_{m+n-s} \;\; D^{-1})$ is reached, and return the submatrix of $D^{-1}$ consisting of the first $n$ rows and the first $m$ columns, i.e., $A^{(2)}_{T,S}$, where $D = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}$.

Ji's results are summarized in the following algorithm:

Algorithm 2.2 [21]. Outer inverse-Ji

(1) Input: $A \in C_r^{m\times n}$ and a matrix $G \in C_s^{n\times m}$ with $s \le r$;
(2) Perform elementary row and column operations on the bordered matrix $\begin{pmatrix} G & I_n \\ I_m & 0 \end{pmatrix}$ into $\begin{pmatrix} I_s & 0 & P_1 \\ 0 & 0 & P_2 \\ Q_1 & Q_2 & 0 \end{pmatrix}$;
(3) Construct the nonsingular matrix $D = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}$;
(4) Perform elementary row operations on the matrix $(D \;\; I_{m+n-s})$ until $(I_{m+n-s} \;\; D^{-1})$ is reached, and return the submatrix of $D^{-1}$ consisting of the first $n$ rows and the first $m$ columns, i.e., $A^{(2)}_{T,S}$.

Moreover, Ji [21] also analyzed the computational complexity of Algorithm 2.2, as follows.

Lemma 2.5 [21]. The total number of multiplications and divisions required for Algorithm 2.2 to compute $A^{(2)}_{T,S}$ is

$T_{J2}(m, n, s) = smn + (m + n - s)^3$.   (2.5)

In [17], the author executed elementary row and column operations on the partitioned matrix $\begin{pmatrix} GAG & G \\ G & 0 \end{pmatrix}$ into $\begin{pmatrix} I_s & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -A^{(2)}_{T,S} \end{pmatrix}$ to compute the generalized inverse $A^{(2)}_{T,S}$ of a given complex matrix $A$, where $G$ is a matrix such that $R(G) = T$ and $N(G) = S$. Here I restate the algorithm of [17] as follows:

Algorithm 2.3 [17]. Outer inverse-SC

(1) Input: matrix $A \in C_r^{m\times n}$ and $G \in C_s^{n\times m}$ with $s \le r$, and calculate $GAG$.
(2) Execute elementary row operations on the first $n$ rows and elementary column operations on the first $m$ columns of the partitioned matrix
$\begin{pmatrix} GAG & G \\ G & 0 \end{pmatrix}$
to transform it into
$D = \begin{pmatrix} I_s & 0 & G_1 \\ 0 & 0 & 0 \\ G_2 & 0 & 0 \end{pmatrix}$.
(3) Make the blocks $G_1$ and $G_2$ of $D$ zero matrices by applying elementary row and column transformations, respectively, through the matrix $I_s$, which yields
$M_3 = \begin{pmatrix} I_s & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -G_2G_1 \end{pmatrix}$.
Then $A^{(2)}_{T,S} = G_2G_1$.

In [17], the authors also established the computational complexity of Algorithm 2.3, which is restated as the following lemma.

Lemma 2.6 [17]. The total number of multiplications and divisions required for Algorithm 2.3 to compute $A^{(2)}_{T,S}$ is

$T_{SC}(m, n, s) = mn^2 + m^2n + 4mns - \frac{3s + 1}{2}ns$.   (2.6)

3. Main results


In this section, we shall establish two explicit expressions for $A^\dagger_{M,N}$. Based on these expressions, we give two different Gauss–Jordan elimination methods for computing $A^\dagger_{M,N}$.

Theorem 3.1. Let $A \in C_r^{m\times n}$, and let $M$ and $N$ be Hermitian positive definite matrices of order $m$ and $n$, respectively. Then there exist two nonsingular elementary matrices $P$ and $Q$ of order $n$ and $m$, respectively, such that

$P(A^* \;\; N) = \begin{pmatrix} B & P_1 \\ 0 & P_2 \end{pmatrix}$,   (3.1)

and

$\begin{pmatrix} B \\ 0 \\ M^{-1} \end{pmatrix} Q = \begin{pmatrix} I_r & 0 \\ 0 & 0 \\ Q_1 & Q_2 \end{pmatrix}$,   (3.2)

where the matrix $B \in C^{r\times m}$ and $r$ columns of $B$ are those of $I_r$. Further,

(1) $R(P_2^*) = NN(A)$ and $R(Q_2) = M^{-1}N(A^*)$;
(2) the matrix

$\begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}$   (3.3)

is nonsingular.

Moreover,

$\begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} A^\dagger_{M,N} & N^{-1}P_2^*(P_2N^{-1}P_2^*)^{-1} \\ (Q_2^*MQ_2)^{-1}Q_2^*M & 0 \end{pmatrix}$.   (3.4)
 
Proof. From $r(A^*) = r(A) = r$, we know that there exists a nonsingular elementary matrix $P$ such that $P(A^* \;\; N) = \begin{pmatrix} B & P_1 \\ 0 & P_2 \end{pmatrix}$. Similarly, there exists another nonsingular elementary matrix $Q$ such that $\begin{pmatrix} B \\ 0 \\ M^{-1} \end{pmatrix} Q = \begin{pmatrix} I_r & 0 \\ 0 & 0 \\ Q_1 & Q_2 \end{pmatrix}$, due to the structure of $B$.

It is seen from (3.1) that

$PA^* = \begin{pmatrix} B \\ 0 \end{pmatrix}, \qquad PN = \begin{pmatrix} P_1 \\ P_2 \end{pmatrix}$.   (3.5)

The above two equalities imply that

$r(B) = r(A^*) = r \qquad \text{and} \qquad P = \begin{pmatrix} P_1N^{-1} \\ P_2N^{-1} \end{pmatrix}$.   (3.6)

It is seen from (3.5) and (3.6) that $P_2N^{-1}A^* = 0$. This means that $R(N^{-1}A^*) \subset N(P_2)$. The nonsingularity of $P$ and $N$ implies that $r(N^{-1}A^*) = r(A^*)$ and $r(P_2) = n - r$. According to the fact that $\dim(N(P_2)) = n - (n - r) = r$ and $\dim(R(N^{-1}A^*)) = r(N^{-1}A^*) = r$, we have

$R(N^{-1}A^*) = N(P_2)$.   (3.7)

This means that

$R(N^{-1}A^*)^\perp = N(P_2)^\perp$.

In other words, $R(P_2^*) = N(AN^{-1}) = NN(A)$. Following the same line, we can prove that $R(Q_2) = N(A^*M) = M^{-1}N(A^*)$.

From Lemma 2.2, we know that the matrix $\begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}$ is nonsingular, and the result in (3.4) follows from (2.2) immediately.  □

Summarizing the above theorem, we have the following algorithm for computing $A^\dagger_{M,N}$.

Algorithm 3.1. Weighted M–P inverse S-Algorithm 1:

(1) Input: $A \in C_r^{m\times n}$ and positive definite matrices $M \in C^{m\times m}$ and $N \in C^{n\times n}$;
(2) Perform elementary row operations on the first $n$ rows of the bordered matrix $B_1 = \begin{pmatrix} A^* & N \\ M^{-1} & 0 \end{pmatrix}$ into $B_2 = \begin{pmatrix} \tilde{A} & P_1 \\ 0 & P_2 \\ M^{-1} & 0 \end{pmatrix}$, where $\tilde{A} \in C^{r\times m}$ and $r$ columns of $\tilde{A}$ are those of $I_r$;
(3) Perform elementary column operations on the first $m$ columns of the bordered matrix $B_2$ into $B_3 = \begin{pmatrix} I_r & 0 & P_1 \\ 0 & 0 & P_2 \\ Q_1 & Q_2 & 0 \end{pmatrix}$;
(4) Form the partitioned matrix $B_4 = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}$;
(5) Perform elementary row operations on the matrix $(B_4 \;\; I_{m+n-r})$ until $(I_{m+n-r} \;\; B_4^{-1})$ is reached, and return the submatrix of $B_4^{-1}$ consisting of the first $n$ rows and the first $m$ columns, i.e., $A^\dagger_{M,N}$.
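The structure of Algorithm 3.1 can be illustrated with the following NumPy sketch. It is not the operation-count-optimal elimination analyzed in Section 4: instead of pivoting, $P_2$ and $Q_2$ are taken from SVD null-space bases, which is sufficient because by Theorem 3.1 only $R(P_2^*) = NN(A)$ and $R(Q_2) = M^{-1}N(A^*)$ matter. Applied to the matrices of Example 1 below, it reproduces, up to rounding, the $A^\dagger_{M,N}$ computed there.

```python
import numpy as np
from scipy.linalg import null_space

def wmp_bordered(A, M, N):
    # sketch of Theorem 3.1 / Algorithm 3.1 with P2, Q2 from null-space bases
    m, n = A.shape
    P2 = (N @ null_space(A)).conj().T               # (n-r) x n, R(P2*) = N.N(A)
    Q2 = np.linalg.inv(M) @ null_space(A.conj().T)  # m x (m-r), R(Q2) = M^{-1}.N(A*)
    B4 = np.block([[A, Q2],
                   [P2, np.zeros((P2.shape[0], Q2.shape[1]))]])
    return np.linalg.inv(B4)[:n, :m]                # top-left block of (3.4)
```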
In the next theorem, we use the Gauss–Jordan elimination process to get a matrix $G = CB$ with $R(G) = R(A^\#)$ and $N(G) = N(A^\#)$, where $G = CB$ is also a full-rank decomposition of $G$. Then a novel expression $A^\dagger_{M,N} = C(BAC)^{-1}B$ is established.

Theorem 3.2. Let $A \in C_r^{m\times n}$, and let $M$ and $N$ be Hermitian positive definite matrices of order $m$ and $n$, respectively. Then there exist an elementary row operation matrix $E = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix}$ of order $n$, where $E_1$ has $r$ rows and $E_2$ has $n-r$ rows, and an elementary column operation matrix $F = (F_1 \;\; F_2)$ of order $m$, where $F_1$ has $r$ columns and $F_2$ has $m-r$ columns, such that

$EA^*M = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix} A^*M = \begin{pmatrix} B \\ 0 \end{pmatrix}$   (3.8)

and

$N^{-1}A^*F = N^{-1}A^*(F_1 \;\; F_2) = (C \;\; 0)$,   (3.9)

where $\begin{pmatrix} B \\ 0 \end{pmatrix}$ and $(C \;\; 0)$ are the reduced row and column echelon forms of $A^*M$ and $N^{-1}A^*$, with $N(B) = N(A^\#)$ and $R(C) = R(A^\#)$, respectively. Further, the matrix $BAC$ is invertible and

$A^\dagger_{M,N} = C(BAC)^{-1}B$.   (3.10)

Proof. Since $r(A) = r(N^{-1}A^*M) = r(N^{-1}A^*) = r(A^*M) = r$, there exist two elementary row and column operation matrices $E = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix}$ and $F = (F_1 \;\; F_2)$ such that (3.8) and (3.9) are satisfied.

By comparing both sides of (3.8) and (3.9), we get $B = E_1A^*M$ and $C = N^{-1}A^*F_1$. Thus we have

$N(A^*M) = M^{-1}N(A^*) \subset N(B)$ and $R(C) \subset R(N^{-1}A^*) = N^{-1}R(A^*)$.   (3.11)

Notice that

$\dim[N(A^*M)] = m - r = \dim[N(B)]$   (3.12)

and

$\dim[R(N^{-1}A^*)] = r = \dim[R(C)]$.   (3.13)

This implies that

$N(B) = M^{-1}N(A^*) = N(A^\#)$ and $R(C) = N^{-1}R(A^*) = R(A^\#)$.   (3.14)

Denote $G = CB$; this is a full-rank decomposition of $G$ with $R(G) = R(C) = R(A^\#)$ and $N(G) = N(B) = N(A^\#)$. Following Lemmas 2.1 and 2.3, we obtain that $BAC$ is nonsingular and

$A^\dagger_{M,N} = C(BAC)^{-1}B$.  □


According to the representation introduced in Theorem 3.2, we have another algorithm for computing the weighted M–P inverse $A^\dagger_{M,N}$.

Algorithm 3.2. Weighted M–P inverse S-Algorithm 2:

(1) Input: $A \in C_r^{m\times n}$ and positive definite matrices $M \in C^{m\times m}$ and $N \in C^{n\times n}$;
(2) Perform elementary row operations on the first $n$ rows of the bordered matrix $C_1 = \begin{pmatrix} 0 & A^*M \\ N^{-1}A^* & 0 \end{pmatrix}$ into $C_2 = \begin{pmatrix} 0 & B \\ 0 & 0 \\ N^{-1}A^* & 0 \end{pmatrix}$, where $B \in C^{r\times m}$ and $r$ columns of $B$ are those of $I_r$;
(3) Perform elementary column operations on the first $m$ columns of the bordered matrix $C_2$ into $C_3 = \begin{pmatrix} 0 & 0 & B \\ 0 & 0 & 0 \\ C & 0 & 0 \end{pmatrix}$, where $C \in C^{n\times r}$ and $r$ rows of $C$ are those of $I_r$;
(4) Compute $D = BAC$ and form the partitioned matrix $C_4 = \begin{pmatrix} BAC & B \\ C & 0 \end{pmatrix}$, and execute elementary row operations on its first $r$ rows into $C_5 = \begin{pmatrix} I_r & (BAC)^{-1}B \\ C & 0 \end{pmatrix}$;
(5) Make the block matrices $C_5(1,2)$ and $C_5(2,1)$ zero matrices by applying elementary row and column transformations, respectively, through the matrix $I_r$, which yields
$C_6 = \begin{pmatrix} I_r & 0 \\ 0 & -C(BAC)^{-1}B \end{pmatrix}$.
Then $A^\dagger_{M,N} = C(BAC)^{-1}B$.
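In exact arithmetic, Algorithm 3.2 condenses to a few SymPy lines. This is a sketch of the representation (3.10), with the echelon forms obtained from SymPy's rref rather than from hand-rolled pivoting:

```python
from sympy import Matrix

def wmp_full_rank(A, M, N):
    # sketch of Theorem 3.2 / Algorithm 3.2:
    #   B = nonzero rows of the reduced row echelon form of A*M,
    #   C = nonzero columns of the reduced column echelon form of N^{-1}A*,
    #   A+_{M,N} = C (B A C)^{-1} B by (3.10)
    r = A.rank()
    B = (A.H * M).rref()[0][:r, :]               # r x m, N(B) = M^{-1}N(A*)
    C = ((N.inv() * A.H).T.rref()[0][:r, :]).T   # n x r, R(C) = N^{-1}R(A*)
    return C * (B * A * C).inv() * B
```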

4. Computational complexities

In this section, the computational complexities of Algorithms 3.1 and 3.2 will be analyzed. We only count the multiplications and divisions.

Theorem 4.1. The total number of multiplications and divisions required for Algorithm 3.1 to compute $A^\dagger_{M,N}$ of a matrix $A \in C_r^{m\times n}$ with positive definite matrices $M \in C^{m\times m}$ and $N \in C^{n\times n}$ is

$T_{S1}(m, n, r) = m^3 + \left(m + n - \frac{1+r}{2}\right)nr + (m - r)mr + (m + n - r)[m^2 + (m + n)(n - r)]$.   (4.1)
Proof. It needs $m^3$ multiplications and divisions to compute $M^{-1}$; then $r$ row pivoting steps are needed to transform the partitioned matrix $B_1$ into $B_2$, following from $r(A^*) = r(A) = r$. The first row pivoting step involves $m+n$ non-zero columns in $B_1 = \begin{pmatrix} A^* & N \\ M^{-1} & 0 \end{pmatrix}$; thus it needs $m+n-1$ divisions and $(m+n-1)(n-1)$ multiplications, a total of $(m+n-1)n$ multiplications and divisions. On the second row pivoting step, there is one column fewer in the first part of the bordered matrix, so $m+n-1$ non-zero columns need to be dealt with; this pivoting step requires $(m+n-2)n$ operations. Following the same idea, the $i$th ($1 \le i \le r$) pivoting step requires $(m+n-i)n$ operations. So it requires $(m+n-1)n + (m+n-2)n + \cdots + (m+n-r)n = (m+n-\frac{1+r}{2})nr$ multiplications and divisions to reach $B_2$.

For simplicity, we assume that $\tilde{A} = (I_r \;\; \ast)$; then each of the first $r$ columns of the matrix $B_2$ meets one row of $I_r$ and the $m$ non-zero rows of $M^{-1}$. So each column pivoting step requires $(m-r)m$ multiplications, and $(m-r)mr$ multiplications are required to obtain $B_3$.

In Step 5 of Algorithm 3.1, we first choose the $n-r$ non-zero elements in the matrix $P_2$ as pivoting elements. Similar to the last part of the proof of Lemma 2.4, it takes $(m+n-r)n(n-r)$ multiplications and divisions to reduce these $n-r$ pivot columns of $(B_4 \;\; I_{m+n-r})$; after this step an identity matrix of order $m$ still occupies the last $m$ rows of the right part. Thus, the first of the remaining $m$ pivoting steps involves $m+n-r+1$ non-zero columns; this pivoting step requires $m+n-r$ divisions and $(m+n-r)(m+n-r-1)$ multiplications, a total of $(m+n-r)^2$ operations. The next $m-1$ pivoting steps also need to deal with $m+n-r+1$ non-zero columns, since a non-zero column is brought in while another non-zero column is zeroed out, except for one entry, in the previous pivoting step. Thus these steps need $(m+n-r)^2$ operations as well, so the last $m$ pivoting steps require $(m+n-r)^2m$ operations.

Therefore, it requires

$T_{S1}(m, n, r) = m^3 + \left(m + n - \frac{1+r}{2}\right)nr + (m - r)mr + (m + n - r)[m^2 + (m + n)(n - r)]$

operations altogether for Algorithm 3.1 to compute the weighted M–P inverse $A^\dagger_{M,N}$.  □

For any matrix $A \in C_r^{m\times n}$, the computational complexity of Algorithm 3.1 compares with that of Algorithm 2.1 as follows:

$T_{J1}(m,n,r) - T_{S1}(m,n,r) = m^2(n-m) + (m-r)(n-m)r + \left\{\frac{n+r-1}{2}n(n-r) + mnr - (m+n-r)[m^2 + (m+n)(n-r)]\right\}$.   (4.2)


Remark 1. Algorithm 3.1 does not need to swap blocks of a certain matrix during the computation, unlike Algorithm 2.1 in [14]. The leading computational complexity of both algorithms is about $\frac{7}{2}n^3$ multiplications and divisions when they are applied to the case $r \approx n = m$ for $A^\dagger_{M,N}$.

If we use Algorithm 2.2 to compute $A^\dagger_{M,N}$, the matrix $G = N^{-1}A^*M$ first needs to be calculated; this step requires $n^3 + mn^2 + m^2n$ operations. Therefore, according to Lemma 2.5, the computational complexity of Algorithm 2.2 for computing $A^\dagger_{M,N}$ is

$N_{J2}(m, n, r) = n^3 + mn^2 + m^2n + T_{J2}(m, n, r) = n^3 + mn^2 + m^2n + mnr + (m + n - r)^3$.   (4.3)


For any matrix $A \in C_r^{m\times n}$ with $m \le n$, Algorithm 2.2 computes $A^\dagger_{M,N}$ with a computational complexity of $N_{J2}(m, n, r)$ operations, while Algorithm 3.1 finds $A^\dagger_{M,N}$ with a computational complexity of $T_{S1}(m, n, r)$. Due to the fact that $0 \le r \le \min\{m, n\}$, we always have

$N_{J2}(m,n,r) - T_{S1}(m,n,r) = (n^3 - m^3) + \left[mn - \left(n - \frac{r+1}{2}\right)r\right]n + [mn - (m-r)r]m + (m+n-r)(m-r)(n-r) \ge 0$.   (4.4)
Remark 2. Inequality (4.4) means that Algorithm 3.1 proposed in this paper is always faster than Algorithm 2.2 of Ji [21] for any matrix $A \in C^{m\times n}$ with $m \le n$.

Theorem 4.2. The total number of multiplications and divisions required for Algorithm 3.2 to compute $A^\dagger_{M,N}$ of a matrix $A \in C_r^{m\times n}$ with positive definite matrices $M \in C^{m\times m}$ and $N \in C^{n\times n}$ is

$T_{S2}(m, n, r) = n^3 + mn^2 + m^2n + 4mnr + \frac{1}{2}(m - n)r^2 - r^3 - \frac{1}{2}(m + n)r$.   (4.5)
Proof. The complexities of computing the matrices $A^*M$ and $N^{-1}A^*$ are $m^2n$ and $n^3 + mn^2$, respectively. In Step 2, $r$ pivoting steps are needed to reach the matrix $C_2$, following from $r(A^*M) = r$; this step requires $(m - \frac{1+r}{2})nr$ multiplications and divisions, by an analysis similar to Theorem 4.1. In Step 3, it also needs $r$ pivoting steps to get the matrix $C_3$, because $r(N^{-1}A^*) = r$; then $(n - \frac{1+r}{2})mr$ multiplications and divisions are required.

$(m-r)nr + (n-r)r^2$ multiplications are needed to form $C_4 = \begin{pmatrix} BAC & B \\ C & 0 \end{pmatrix}$, which follows from the fact that $B$ and $C$ are row-echelon and column-echelon reduced matrices, respectively.

In Step 5, the first pivoting step on $(BAC \;\; B)$ involves $m+1$ non-zero columns, and it requires $m$ divisions and $m(r-1)$ multiplications, a total of $mr$ operations. The second pivoting step also needs to deal with $m+1$ non-zero columns; it also requires $mr$ divisions and multiplications. Continuing this way, the $r$th pivoting step still handles $m+1$ non-zero columns and requires $mr$ divisions and multiplications. Adding up, it takes $mr^2$ operations to compute $(BAC)^{-1}B$.

Then we resume elementary row and column operations on the matrix $C_5$ to transform it into $C_6$. The complexity of this process is $mnr$ multiplications, which is the count of computing $C(BAC)^{-1}B$.

Hence, the total complexity of Algorithm 3.2 is

$T_{S2}(m, n, r) = m^2n + n^3 + n^2m + \left(m - \frac{1+r}{2}\right)nr + \left(n - \frac{1+r}{2}\right)mr + (m-r)nr + (n-r)r^2 + mr^2 + mnr$
$= n^3 + mn^2 + m^2n + 4mnr + \frac{1}{2}(m-n)r^2 - r^3 - \frac{1}{2}(m+n)r$.  □

If we use Algorithm 2.3 to compute $A^\dagger_{M,N}$, the matrix $G = N^{-1}A^*M$ first needs to be calculated; this step requires $n^3 + mn^2 + m^2n$ operations. Therefore, according to Lemma 2.6, the computational complexity of Algorithm 2.3 for computing $A^\dagger_{M,N}$ is

$N_{SC}(m, n, r) = n^3 + mn^2 + m^2n + T_{SC}(m, n, r) = n^3 + 2mn^2 + 2m^2n + 4mnr - \frac{3r + 1}{2}nr$.   (4.6)
Subtracting the total complexity of Algorithm 3.2 from that of Algorithm 2.3 for any matrix $A \in C_r^{m\times n}$, we have

$N_{SC}(m,n,r) - T_{S2}(m,n,r) = m\left(n^2 - \frac{1}{2}r^2\right) + n(m^2 - r^2) + r^3 + \frac{1}{2}mr \ge 0$.   (4.7)

Remark 3. Inequality (4.7) implies that Algorithm 3.2 proposed in this paper is also always faster than Algorithm 2.3 for any matrix $A \in C_r^{m\times n}$.
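For concreteness, the five operation counts compared in this section can be evaluated with a short Python script (a convenience illustration; the function names are ours):

```python
# Operation counts of (2.3), (4.1), (4.3), (4.5) and (4.6), in exact integers.
def T_J1(m, n, r):   # Algorithm 2.1, Eq. (2.3)
    return m**2*n + 3*m*n*r + (n - 2*r)*n*r + n*(n*(n - 1)//2)

def T_S1(m, n, r):   # Algorithm 3.1, Eq. (4.1)
    return (m**3 + (m + n)*n*r - n*(r*(r + 1)//2) + (m - r)*m*r
            + (m + n - r)*(m**2 + (m + n)*(n - r)))

def N_J2(m, n, r):   # G = N^{-1}A*M followed by Algorithm 2.2, Eq. (4.3)
    return n**3 + m*n**2 + m**2*n + m*n*r + (m + n - r)**3

def T_S2(m, n, r):   # Algorithm 3.2, Eq. (4.5)
    return (n**3 + m*n**2 + m**2*n + 4*m*n*r - r**3
            + ((m - n)*r**2 - (m + n)*r)//2)

def N_SC(m, n, r):   # G = N^{-1}A*M followed by Algorithm 2.3, Eq. (4.6)
    return n**3 + 2*m*n**2 + 2*m**2*n + 4*m*n*r - n*(r*(3*r + 1)//2)

m = n = 100; r = 50  # the sizes used in Example 2 below
for name, f in [("T_J1", T_J1), ("T_S1", T_S1), ("N_J2", N_J2),
                ("T_S2", T_S2), ("N_SC", N_SC)]:
    print(name, f(m, n, r))
```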

5. Numerical examples

In this section, we shall use some numerical examples to demonstrate our results. First, a handy method is used to compute $A^\dagger_{M,N}$ of a low-order matrix. Second, a matrix $A$ of size $m \times n$ with the random weights $M = UU^* + I_m$ and $N = VV^* + \eta I_n$ is tested by these methods, where $U = \mathrm{randn}(m)$, $V = \mathrm{randn}(n)$ and $\eta$ is a parameter. All the computations were performed on an Intel Pentium(R) Dual-Core CPU T4300 XP system using MATLAB 7.0. The accuracy of the computed results is measured by the four quantities defined below:

$r_1 = \|AA^\dagger_{M,N}A - A\|_F$,
$r_2 = \|A^\dagger_{M,N}AA^\dagger_{M,N} - A^\dagger_{M,N}\|_F$,
$r_3 = \|MAA^\dagger_{M,N} - (MAA^\dagger_{M,N})^*\|_F$,
$r_4 = \|NA^\dagger_{M,N}A - (NA^\dagger_{M,N}A)^*\|_F$.
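In code, the four measures amount to the following small NumPy helper (names are ours):

```python
import numpy as np

def residuals(A, X, M, N):
    # the four accuracy measures r1..r4 for a computed approximation X of A+_{M,N}
    fro = lambda Y: np.linalg.norm(Y, 'fro')
    H = lambda Y: Y.conj().T
    return (fro(A @ X @ A - A),
            fro(X @ A @ X - X),
            fro(M @ A @ X - H(M @ A @ X)),
            fro(N @ X @ A - H(N @ X @ A)))
```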

Example 1. Use Algorithms 3.1 and 3.2 to compute the weighted M–P inverse $A^\dagger_{M,N}$ of the matrix $A$ for the weights $M$ and $N$ in [14], where

$A = \begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & -2 & 0 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & -1 & 0 \end{pmatrix}, \quad M = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \text{and} \quad N = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix}$.


Solution. First, we use Algorithm 3.1 to compute the weighted M–P inverse $A^\dagger_{M,N}$.

Executing elementary row operations on the first four rows of the first partitioned matrix $B_1 = \begin{pmatrix} A^* & N \\ M^{-1} & 0 \end{pmatrix}$, we have

$B_1 = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & -2 & 0 & 0 & 1 & 2 & 0 & 0 \\ 3 & 0 & 2 & -1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 \\ \frac{3}{2} & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix} \rightarrow B_2 = \begin{pmatrix} 1 & 0 & 0 & -1 & -2 & -2 & 1 & 0 \\ 0 & 1 & 0 & 0 & -\frac{1}{2} & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 3 & 3 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 2 & 0 & 4 \\ \frac{3}{2} & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$.

According to Steps 2 and 3 of Algorithm 3.1, we obtain the matrices $P_2 = (1 \;\; 2 \;\; 0 \;\; 4)$ and $Q_2 = \begin{pmatrix} 2 \\ 0 \\ -1 \\ 1 \end{pmatrix}$, where $P_2$ and $Q_2$ are of full rank and satisfy $R(Q_2) = M^{-1}N(A^*)$ and $R(P_2^*) = NN(A)$, respectively.

Next, we construct the second block matrix $B_4 = \begin{pmatrix} A & Q_2 \\ P_2 & 0 \end{pmatrix}$.

From Lemma 2.2, we know that the matrix $B_4$ is nonsingular, and $A^\dagger_{M,N}$ can be read off from $B_4^{-1}$. We then perform elementary row operations to transform $(B_4 \;\; I)$ into $(I \;\; B_4^{-1})$:

$(B_4 \;\; I) \rightarrow (I \;\; B_4^{-1}) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & -\frac{1}{4} & 0 & \frac{5}{4} & \frac{7}{4} & 0 \\ 0 & 1 & 0 & 0 & 0 & \frac{1}{40} & -\frac{2}{5} & -\frac{1}{8} & -\frac{7}{40} & \frac{1}{10} \\ 0 & 0 & 1 & 0 & 0 & \frac{1}{4} & 0 & -\frac{1}{4} & -\frac{3}{4} & 0 \\ 0 & 0 & 0 & 1 & 0 & \frac{1}{20} & \frac{1}{5} & -\frac{1}{4} & -\frac{7}{20} & \frac{1}{5} \\ 0 & 0 & 0 & 0 & 1 & \frac{1}{4} & 0 & -\frac{1}{4} & \frac{1}{4} & 0 \end{pmatrix}$.

This yields

$A^\dagger_{M,N} = \begin{pmatrix} -\frac{1}{4} & 0 & \frac{5}{4} & \frac{7}{4} \\ \frac{1}{40} & -\frac{2}{5} & -\frac{1}{8} & -\frac{7}{40} \\ \frac{1}{4} & 0 & -\frac{1}{4} & -\frac{3}{4} \\ \frac{1}{20} & \frac{1}{5} & -\frac{1}{4} & -\frac{7}{20} \end{pmatrix}$.

Second, we use Algorithm 3.2 to compute $A^\dagger_{M,N}$.
Applying elementary row operations to the first four rows of the third partitioned matrix $C_1 = \begin{pmatrix} 0 & A^*M \\ N^{-1}A^* & 0 \end{pmatrix}$, we get

$C_1 = \begin{pmatrix} 0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 0 & 9 & -1 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\ 2 & 2 & 2 & 0 & 0 & 0 & 0 & 0 \\ -1 & -2 & -1 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 2 & -1 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & -2 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 2 & 2 & 0 & 0 & 0 & 0 & 0 \\ -1 & -2 & -1 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 2 & -1 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$.

Denoting $E = \begin{pmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}$ and $F = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & 0 \end{pmatrix}$, we can easily check that $E$ and $F$ are of full rank and satisfy $N(E) = N(A^*M) = M^{-1}N(A^*)$ and $R(F) = R(N^{-1}A^*) = N^{-1}R(A^*)$. Here $E$ and $F$ play the roles of $B$ and $C$ in Algorithm 3.2; $F$ is a non-normalized column echelon form, which is harmless since $C(BAC)^{-1}B$ is invariant under column scaling of $C$.

By computing, we have

$EAF = \begin{pmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & -2 & 0 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & -1 & 0 \end{pmatrix}\begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 4 & 0 & 5 \\ -1 & -5 & 0 \\ 4 & 0 & 1 \end{pmatrix}$.

According to Algorithm 3.2, we execute elementary row operations on the first three rows of the fourth partitioned matrix $C_4 = \begin{pmatrix} EAF & E \\ F & 0 \end{pmatrix}$:

$C_4 = \begin{pmatrix} 4 & 0 & 5 & 1 & 0 & 0 & -2 \\ -1 & -5 & 0 & 0 & 1 & 0 & 0 \\ 4 & 0 & 1 & 0 & 0 & 1 & 1 \\ 4 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & -1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \rightarrow C_5 = \begin{pmatrix} 1 & 0 & 0 & -\frac{1}{16} & 0 & \frac{5}{16} & \frac{7}{16} \\ 0 & 1 & 0 & \frac{1}{80} & -\frac{1}{5} & -\frac{1}{16} & -\frac{7}{80} \\ 0 & 0 & 1 & \frac{1}{4} & 0 & -\frac{1}{4} & -\frac{3}{4} \\ 4 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & -1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$.

We then resume elementary row and column operations on $C_5$, which results in

$C_5 \rightarrow C_6 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{4} & 0 & -\frac{5}{4} & -\frac{7}{4} \\ 0 & 0 & 0 & -\frac{1}{40} & \frac{2}{5} & \frac{1}{8} & \frac{7}{40} \\ 0 & 0 & 0 & -\frac{1}{4} & 0 & \frac{1}{4} & \frac{3}{4} \\ 0 & 0 & 0 & -\frac{1}{20} & -\frac{1}{5} & \frac{1}{4} & \frac{7}{20} \end{pmatrix}$.

This leads to

$A^\dagger_{M,N} = F(EAF)^{-1}E = \begin{pmatrix} -\frac{1}{4} & 0 & \frac{5}{4} & \frac{7}{4} \\ \frac{1}{40} & -\frac{2}{5} & -\frac{1}{8} & -\frac{7}{40} \\ \frac{1}{4} & 0 & -\frac{1}{4} & -\frac{3}{4} \\ \frac{1}{20} & \frac{1}{5} & -\frac{1}{4} & -\frac{7}{20} \end{pmatrix}$.
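Both computations can be replayed in exact arithmetic. The SymPy snippet below, a companion to the sketch of Algorithm 3.2 in Section 3, recomputes $A^\dagger_{M,N}$ through the representation (3.10) and confirms the matrix obtained above:

```python
from sympy import Matrix, Rational

A = Matrix([[1, 0, 3, 0], [0, -2, 0, 1], [1, 0, 2, 0], [0, 0, -1, 0]])
M = Matrix([[1, 0, 1, 0], [0, 2, 0, 0], [1, 0, 3, 0], [0, 0, 0, 1]])
N = Matrix([[1, 1, 0, 0], [1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 2]])

r = A.rank()
B = (A.T * M).rref()[0][:r, :]               # A is real, so A* = A^T
C = ((N.inv() * A.T).T.rref()[0][:r, :]).T   # reduced column echelon form
X = C * (B * A * C).inv() * B                # formula (3.10)

expected = Matrix([
    [Rational(-1, 4), 0, Rational(5, 4), Rational(7, 4)],
    [Rational(1, 40), Rational(-2, 5), Rational(-1, 8), Rational(-7, 40)],
    [Rational(1, 4), 0, Rational(-1, 4), Rational(-3, 4)],
    [Rational(1, 20), Rational(1, 5), Rational(-1, 4), Rational(-7, 20)]])
assert X == expected
```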

Example 2. The matrix $A = \mathrm{rand}(100, 50) \times \mathrm{rand}(50, 100)$ of size $100 \times 100$, generated from the function matrix in the Matrix Computation Toolbox [23], with the random weights $M = UU^* + I_{100}$ and $N = VV^* + \eta I_{100}$ for $\eta = 10^{-2}$ and $\eta = 10^4$, is tested by Algorithms 2.2, 2.3, 3.1 and 3.2. The errors and execution times are shown in Tables 1 and 2 below.
Table 1. Error and execution time results for computing $A^\dagger_{M,N}$ with $\eta = 10^{-2}$.

Method          Time (s)   r1          r2          r3          r4
Algorithm 2.2   0.668300   1.3129e-11  3.4777e-12  7.1661e-07  4.6343e-07
Algorithm 3.1   0.609692   1.4660e-11  3.4204e-12  5.1026e-07  3.6697e-11
Algorithm 2.3   0.578097   2.4588e-10  1.1145e-09  1.1145e-08  8.7446e-07
Algorithm 3.2   0.401415   1.0820e-10  1.5932e-12  6.0671e-09  5.8628e-08

Table 2. Error and execution time results for computing $A^\dagger_{M,N}$ with $\eta = 10^4$.

Method          Time (s)   r1          r2          r3          r4
Algorithm 2.2   0.755466   1.1092e-11  3.0973e-14  1.4398e-08  2.5990e-06
Algorithm 3.1   0.558564   6.8732e-12  3.0930e-14  1.1076e-08  4.3657e-09
Algorithm 2.3   0.750131   1.7855e-08  3.1442e-09  1.8343e-07  2.9696e-06
Algorithm 3.2   0.444912   3.1664e-09  1.4137e-11  1.0243e-07  7.4231e-07
From Tables 1 and 2, we find that Algorithms 3.1 and 3.2 are superior to Algorithms 2.2 and 2.3, respectively, in both the accuracy of the errors and the computation time.

6. Conclusion


In this paper, two novel explicit expressions for $A^\dagger_{M,N}$ are derived, and two Gauss–Jordan-like elimination procedures for computing $A^\dagger_{M,N}$ are proposed. The computational complexity of the two introduced algorithms is analyzed in detail. By comparing their computational complexities, the two algorithms proposed in this paper are always faster than those in [17] and [14], respectively. The advantage of the two proposed methods is that they are free of computing $A^\# = N^{-1}A^*M$, so the condition number of the algorithms in this paper does not increase compared with those in [17] and [14].

Acknowledgment

The author would like to thank the two anonymous referees for their valuable comments and suggestions that improved
the presentation of the paper.

References

[1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer-Verlag, New York, 2003.
[2] S.L. Campbell, C.D. Meyer, Generalized Inverses of Linear Transformations, Dover Publications, New York, 1979.
[3] G.R. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, Science Press, Beijing, China/New York, 2004.
[4] J. Miao, Representations for the weighted Moore–Penrose inverse of a partitioned matrix, J. Comput. Math. 7 (1989) 321–323.
[5] W. Sun, Y. Wei, Inverse order rule for weighted generalized inverse, SIAM J. Matrix Anal. Appl. 19 (1998) 772–775.
[6] S. Wang, B. Zheng, Z. Xiong, Z. Li, The condition numbers for weighted Moore–Penrose inverse and weighted linear least squares problem, Appl. Math. Comput. 215 (2009) 197–205.
[7] W. Wang, L. Lin, Derivative estimation based on difference sequence via locally weighted least squares regression, J. Mach. Learn. Res. 16 (2015) 2617–2641.
[8] Y. Wei, D. Wang, Condition numbers and perturbation of the weighted Moore–Penrose inverse and weighted linear least squares problem, Appl. Math. Comput. 145 (2003) 45–58.
[9] Y. Wei, H. Wu, Expression for the perturbation of the weighted Moore–Penrose inverse, Comput. Math. Appl. 39 (2000) 13–18.
[10] Z. Xu, J. Sun, C. Gu, Perturbation for a pair of oblique projectors $AA^\dagger_{MN}$ and $BB^\dagger_{MN}$, Appl. Math. Comput. 203 (2008) 432–446.
[11] Z. Xu, C. Gu, B. Feng, Weighted acute perturbation for two matrices, Arab. J. Sci. Eng. 35 (1) (2010) 129–143.
[12] X. Sheng, G. Chen, Full-rank representation of generalized inverse $A^{(2)}_{T,S}$ and its application, Comput. Math. Appl. 54 (2007) 1422–1430.
[13] X. Sheng, G. Chen, Y. Gong, The representation and computation of generalized inverse $A^{(2)}_{T,S}$, J. Comput. Appl. Math. 213 (2008) 248–257.
[14] J. Ji, Two inverse-of-N-free methods for $A^\dagger_{M,N}$, Appl. Math. Comput. 232 (2014) 39–48.
[15] K.M. Anstreicher, U.G. Rothblum, Using Gauss–Jordan elimination to compute the index, generalized null spaces and Drazin inverse, Linear Algebra Appl. 85 (1987) 221–239.
[16] X. Sheng, G. Chen, A note of computation for M–P inverse $A^\dagger$, Int. J. Comput. Math. 87 (2010) 2235–2241.
[17] X. Sheng, G. Chen, Innovation based on Gaussian elimination to compute generalized inverse $A^{(2)}_{T,S}$, Comput. Math. Appl. 65 (2013) 1823–1829.
[18] X. Sheng, Execute elementary row and column operations on the partitioned matrix to compute M–P inverse $A^\dagger$, Abstr. Appl. Anal. 2014 (2014), Article ID 596049, 6 pp.
[19] J. Ji, Gauss–Jordan elimination methods for the Moore–Penrose inverse of a matrix, Linear Algebra Appl. 437 (2012) 1835–1844.
[20] J. Ji, X. Chen, A new method for computing Moore–Penrose inverse through Gauss–Jordan elimination, Appl. Math. Comput. 245 (2014) 271–278.
[21] J. Ji, Computing the outer and group inverses through elementary row operations, Comput. Math. Appl. 68 (6) (2014) 655–663.
[22] P.S. Stanimirovic, M.D. Petkovic, Gauss–Jordan elimination method for computing outer inverses, Appl. Math. Comput. 219 (2013) 4667–4679.
[23] N.J. Higham, Matrix Market, National Institute of Standards and Technology, Gaithersburg, MD.
