# TWO CHARACTERIZATIONS OF INVERSE-POSITIVE MATRICES: THE HAWKINS-SIMON CONDITION AND THE LE CHATELIER-BRAUN PRINCIPLE

TAKAO FUJIMOTO

Dedicated to the late Professors David Hawkins and Hukukane Nikaido
Abstract. It is shown that the Hawkins-Simon condition is satisfied by any real square matrix which is inverse-positive after a suitable permutation of columns or rows. One more characterization of inverse-positive matrices is given concerning the Le Chatelier-Braun principle.

Key words. Hawkins-Simon condition, Inverse-positivity, Le Chatelier-Braun principle.

AMS subject classifications. 15A15, 15A48.
1. Introduction. In economics as well as in other sciences, the inverse-positivity of real square matrices has been an important topic. The Hawkins-Simon condition [7], so called in economics, requires that every principal minor be positive, and they showed the condition to be necessary and sufficient for a Z-matrix (a matrix with nonpositive off-diagonal elements) to be inverse-positive. One decade earlier, this was used by Ostrowski [10] to define an M-matrix (an inverse-positive Z-matrix), and shown to be equivalent to several other conditions. (See Berman and Plemmons [1, Ch.6] for many equivalent conditions.) Georgescu-Roegen [6] argued that for a Z-matrix it is sufficient to have only the leading principal minors positive, which was also proved in Fiedler and Ptak [3]. Nikaido's two books, [8] and [9], contain a proof based on mathematical induction. Dasgupta [2] gave another proof using an economic interpretation of indirect input.

In this note, the condition requiring that all the leading principal minors be positive is called the weak Hawkins-Simon condition (WHS for short). We prove that the WHS condition is necessary for a real square matrix to be inverse-positive after a suitable permutation of columns (or rows). The proof is easy and simple by use of the elimination method. One more characterization of inverse-positive matrices is offered, which is related to the Le Chatelier-Braun principle in thermodynamics: each element of the inverse of the leading $(n-1) \times (n-1)$ submatrix is less than or equal to the corresponding element of the inverse of the original inverse-positive matrix.

Section 2 explains our notation, section 3 presents our theorems and their proofs, and section 4 gives some numerical examples and remarks.
Received by the editors on ... Accepted for publication on .... Handling Editor: ...

Department of Economics, University of Kagawa, Takamatsu, Kagawa 760-8523, Japan

T. Fujimoto and R. Ranade

2. Notation. The symbol $R^n$ means the real Euclidean space of dimension $n$ ($n \ge 2$), and $R^n_+$ the non-negative orthant of $R^n$. A given real $n \times n$ matrix $A$ maps $R^n$ into itself. Let $a_{ij}$ represent the $(i,j)$ entry of $A$, while $x \in R^n$ stands for a column vector, and $x_i$ for the $i$-th element of $x$. The symbol $(A)_{*,j}$ means the $j$-th column of $A$, and $(A)_{i,*}$ means the $i$-th row. We also use the symbol $x_{(i)}$, which represents the column $(n-1)$-vector formed by deleting the $i$-th element from $x$. Similarly, the symbol $A_{(i,j)}$ means the $(n-1) \times (n-1)$ matrix obtained by deleting the $i$-th row and the $j$-th column from $A$. Likewise, $A_{(,j)}$ denotes the $n \times (n-1)$ matrix obtained by deleting the $j$-th column from $A$. Let $(A)_{i,*(n)}$ be the row $(n-1)$-vector formed by deleting the $n$-th element from $(A)_{i,*}$, and $(A)_{*(n),j}$ be the column $(n-1)$-vector formed by deleting the $n$-th element from $(A)_{*,j}$. The symbol $e_i \in R^n_+$ means a column vector whose $i$-th element is unity with all the remaining entries being zero. The inequality signs for vector comparison are as follows:
$$x \ge y \text{ iff } x - y \in R^n_+;$$
$$x > y \text{ iff } x - y \in R^n_+ - \{0\};$$
$$x \gg y \text{ iff } x - y \in \mathrm{int}(R^n_+),$$
where $\mathrm{int}(R^n_+)$ means the interior of $R^n_+$. These inequality signs are used also for matrices in a similar meaning.
3. Propositions. Let us first note that when a real square matrix $A$ is inverse-positive, i.e., $A^{-1} > 0$, this is equivalent to the following property:

Property 1. For any $b \in \mathrm{int}(R^n_+)$, the equation $Ax = b$ has a solution $x \in \mathrm{int}(R^n_+)$.

Now we can prove the following theorem.

Theorem 3.1. Let $A$ be inverse-positive. Then the WHS condition is satisfied when a suitable permutation of columns (or rows) is made.

Proof. Because of Property 1 above, there should be at least one positive entry in the first row of $A$. So, such a column and the first column can be exchanged: we assume the two columns have been so permuted, thus
$$
a_{11} > 0. \tag{3.1}
$$
Next, we divide the first equation of the system $Ax = b$ by $a_{11}$ and subtract this equation side by side from the $i$-th ($i \ge 2$) equation after multiplying this by $a_{i1}$, to obtain
$$
\begin{bmatrix}
1 & a_{12}/a_{11} & \cdots & a_{1n}/a_{11} \\
0 & a_{22} - a_{12}a_{21}/a_{11} & \cdots & a_{2n} - a_{1n}a_{21}/a_{11} \\
\vdots & \vdots & \ddots & \vdots \\
0 & a_{n2} - a_{12}a_{n1}/a_{11} & \cdots & a_{nn} - a_{1n}a_{n1}/a_{11}
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b_2 - b_1 a_{21}/a_{11} \\ \vdots \\ b_n - b_1 a_{n1}/a_{11} \end{bmatrix}.
$$
In short, the $(i,j)$-entry of the modified matrix is, for $i, j \ge 2$,
$$
\begin{vmatrix} a_{11} & a_{1j} \\ a_{i1} & a_{ij} \end{vmatrix} \Big/ \, a_{11}.
$$
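This identity for one elimination step is easy to check mechanically. The following sketch (my own, not part of the paper) uses exact rational arithmetic and an arbitrary sample matrix with $a_{11} \neq 0$:

```python
from fractions import Fraction as F

# an arbitrary 3x3 sample matrix (only a_11 != 0 is needed)
A = [[F(2), F(-1), F(3)],
     [F(4), F(5), F(-2)],
     [F(-1), F(7), F(6)]]
n = 3

B = [row[:] for row in A]
B[0] = [v / A[0][0] for v in B[0]]            # divide the first equation by a_11
for i in range(1, n):                         # subtract a_i1 times the first row
    B[i] = [B[i][j] - A[i][0] * B[0][j] for j in range(n)]

# each new entry equals the 2x2 minor |a11 a1j; ai1 aij| divided by a_11
for i in range(1, n):
    for j in range(1, n):
        minor = A[0][0] * A[i][j] - A[0][j] * A[i][0]
        assert B[i][j] == minor / A[0][0]
```
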
Again, because of Property 1, the second row has at least one positive entry, assuming $b_2$ was chosen large enough to make $b_2 - b_1 a_{21}/a_{11} > 0$. We suppose the column containing this element and the second column have been permuted, making
$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \Big/ \, a_{11} > 0. \tag{3.2}
$$
The same method of elimination is then applied to the remaining $(n-1)$ equations to obtain the following:
$$
\begin{bmatrix}
1 & a_{12}/a_{11} & \alpha_{13} & \cdots & \alpha_{1n} \\
0 & 1 & \alpha_{23} & \cdots & \alpha_{2n} \\
0 & 0 & \alpha_{33} & \cdots & \alpha_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \cdots & \alpha_{nn}
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ \beta_2 \\ \beta_3 \\ \vdots \\ \beta_n \end{bmatrix},
$$
where
$$
\alpha_{33} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \Bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix},
\quad \text{and} \quad
\beta_3 = \begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix} \Bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix},
$$
and for $i, j > 3$
$$
\alpha_{ij} = \begin{vmatrix} a_{11} & a_{12} & a_{1j} \\ a_{21} & a_{22} & a_{2j} \\ a_{i1} & a_{i2} & a_{ij} \end{vmatrix} \Bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix},
\quad \text{and} \quad
\beta_i = \begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{i1} & a_{i2} & b_i \end{vmatrix} \Bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}.
$$
This is because
$$
\alpha_{33} = \left( \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \Big/ \, a_{11} \right) \Bigg/ \left( \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \Big/ \, a_{11} \right),
$$
and the other elements can be derived in a similar way. When the $b_i$ ($i \ge 3$) are large enough, the $\beta_i$ ($i \ge 3$) are all positive because
$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} > 0.
$$
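The displayed ratio formulas are an instance of Sylvester's determinant identity, and they too can be checked mechanically. This sketch (mine, with an arbitrary sample matrix whose leading pivots are nonzero) runs two elimination steps and compares each remaining entry with the corresponding $3 \times 3$ minor divided by the leading $2 \times 2$ minor:

```python
from fractions import Fraction as F

A = [[F(3), F(1), F(-2), F(4)],
     [F(1), F(5), F(2), F(-1)],
     [F(-2), F(1), F(4), F(3)],
     [F(2), F(-3), F(1), F(6)]]
n = 4

B = [row[:] for row in A]
for s in range(2):                            # two steps of Gaussian elimination
    B[s] = [v / B[s][s] for v in B[s]]
    for i in range(s + 1, n):
        piv = B[i][s]
        B[i] = [B[i][j] - piv * B[s][j] for j in range(n)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    # Laplace expansion along the first row
    return (M[0][0] * det2([r[1:] for r in M[1:]])
            - M[0][1] * det2([[r[0], r[2]] for r in M[1:]])
            + M[0][2] * det2([r[:2] for r in M[1:]]))

lead2 = det2(A)
for i in range(2, n):
    for j in range(2, n):
        minor3 = det3([[A[r][c] for c in (0, 1, j)] for r in (0, 1, i)])
        assert B[i][j] == minor3 / lead2      # alpha_ij = |3x3 minor| / |2x2 minor|
```
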
Then, thanks to Property 1, there exists at least one $\alpha_{3j}$ ($j \ge 3$) which is positive, and we assume the column having this entry and the third column have been exchanged. Hence
$$
\alpha_{33} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \Bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} > 0. \tag{3.3}
$$
We can proceed in a similar way, eliminating $x_4, \ldots, x_{n-1}$, and finally we have
$$
\frac{|A|}{\left| A_{(n,n)} \right|} \, x_n = \frac{\left| A_{(,n)} \;\, b \right|}{\left| A_{(n,n)} \right|}. \tag{3.4}
$$
From (3.1), (3.2), (3.3), and the subsequent equations, we know that all the leading principal minors up to $A_{(n,n)}$ are positive, and thus when $b_n$ is large enough, the RHS of (3.4) becomes positive. Then by Property 1, it follows that
$$
|A| > 0.
$$
Therefore, our theorem is proved for a permutation of columns. For rows, we can transpose the given matrix, and the same proof applies.
Corollary 3.2. When $A$ is a Z-matrix, the WHS is necessary and sufficient for $A$ to be inverse-positive.

Proof. Since the necessity is proved in Theorem 3.1, we show the sufficiency. Let us consider the elimination method used in the proof of Theorem 3.1. We assume that $b \gg 0$. When $A$ is a Z-matrix, it is easy to notice that, as elimination proceeds, a positive entry always appears at the upper-left corner while the other coefficients in the top equation remain non-positive, and the RHS of each equation remains positive. So, finally we reach an equation in the single variable $x_n$ with the coefficients on both sides positive. Thus, $x_n > 0$. Now moving backward, we find $x \gg 0$. Since $b$ is arbitrary, this shows the WHS is sufficient.
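Corollary 3.2 is easy to illustrate numerically. The matrix below is a standard tridiagonal Z-matrix chosen by me for this sketch, not an example from the paper; its leading principal minors are $2, 3, 4$, so the WHS holds, and its inverse is indeed entrywise positive:

```python
import numpy as np

# a classic tridiagonal Z-matrix (nonpositive off-diagonal entries)
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# WHS: every leading principal minor is positive (they are 2, 3, 4)
assert all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, 4))

# ... and indeed the inverse is nonnegative (here entrywise positive)
invA = np.linalg.inv(A)
assert (invA > 0).all()
```
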
Next, we present a theorem which is related to the Le Chatelier-Braun principle. (See Fujimoto [4].)

Theorem 3.3. Suppose that $A$ is inverse-positive, and the WHS is satisfied. Then each element of the inverse of $A_{(n,n)}$ is less than or equal to the corresponding element of the inverse of $A$.

Proof. The first column of the inverse of $A$ can be obtained as a solution vector $x \in R^n_+$ to the system of equations $Ax = e_1$, while the first column of the inverse of $A_{(n,n)}$ is a solution vector $y \in R^{n-1}_+$ to the system $A_{(n,n)} y = e_{1(n)}$. Adding these two systems with some manipulations, we get the following system:
$$
A \cdot \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_{n-1} + y_{n-1} \\ x_n \end{bmatrix} = d \equiv \begin{bmatrix} 2 \\ 0 \\ \vdots \\ 0 \\ (A)_{n,*(n)} \cdot y \end{bmatrix}. \tag{3.5}
$$
By Cramer's rule, it follows that
$$
x_n = \frac{\left| A_{(,n)} \;\, d \right|}{|A|} = 2x_n + \frac{\left| A_{(n,n)} \right|}{|A|} \cdot (A)_{n,*(n)} \cdot y.
$$
Thus, if $x_n = (A^{-1})_{n1} > 0$, then $(A)_{n,*(n)} \cdot y < 0$, and if $x_n = 0$, then $(A)_{n,*(n)} \cdot y = 0$, because $\left| A_{(n,n)} \right| / |A| > 0$ thanks to the WHS condition.

For the $i$-th ($i < n$) equation of (3.5), Cramer's rule gives us
$$
x_i + y_i = 2x_i + \frac{\left| A_{(n,i)} \right|}{|A|} \cdot (A)_{n,*(n)} \cdot y.
$$
From this, we have
$$
y_i = x_i + (A^{-1})_{in} \cdot (A)_{n,*(n)} \cdot y.
$$
Therefore we can assert
$$
\begin{cases}
y_i < x_i & \text{when } (A^{-1})_{n1} > 0 \text{ and } (A^{-1})_{in} > 0, \\
y_i = x_i & \text{when } (A^{-1})_{n1} = 0 \text{ or } (A^{-1})_{in} = 0.
\end{cases}
$$
For other columns, we can proceed in a similar way by use of $e_i$.
Corollary 3.4. Suppose that the inverse of $A$ has its last column and its bottom row non-negative, and $(A^{-1})_{nn} > 0$. Then each element of the inverse of $A_{(n,n)}$ is less than or equal to the corresponding element of the inverse of $A$.

Proof. Exactly the same proof as that of Theorem 3.3 can be applied because
$$
\left| A_{(n,n)} \right| / |A| = (A^{-1})_{nn}.
$$
4. Numerical Examples and Remarks. The first example is a simple M-matrix, given as
$$
A \equiv \begin{bmatrix} -2 & 1 \\ 7 & -3 \end{bmatrix}, \quad \text{and} \quad A^{-1} = \begin{bmatrix} 3 & 1 \\ 7 & 2 \end{bmatrix}.
$$
By exchanging the two columns, we have
$$
\begin{bmatrix} 1 & -2 \\ -3 & 7 \end{bmatrix}, \quad \text{and its inverse is} \quad \begin{bmatrix} 7 & 2 \\ 3 & 1 \end{bmatrix}.
$$
This satisfies the normal Hawkins-Simon condition. The inverse of the leading $1 \times 1$ submatrix $(1)$ is $(1)$, and its entry $1$ is smaller than the corresponding entry $7$ of the full inverse, thus verifying Theorem 3.3.
The next one is not an M-matrix:
$$
A \equiv \begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1 \end{bmatrix}, \quad \text{and} \quad
A^{-1} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & 0 & \frac{1}{2} \end{bmatrix}.
$$
The inverse of $A_{(3,3)}$ is calculated as
$$
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}.
$$
The elements $(A^{-1})_{11}$, $(A^{-1})_{12}$, and $(A^{-1})_{22}$ are all equal to $(A_{(3,3)}^{-1})_{11}$, $(A_{(3,3)}^{-1})_{12}$, and $(A_{(3,3)}^{-1})_{22}$ because $(A^{-1})_{32} = 0$ and $(A^{-1})_{13} = 0$. The entry $(A_{(3,3)}^{-1})_{21}$ is, however, $-\frac{1}{2}$ and is smaller than the corresponding entry $(A^{-1})_{21} = 0$, again confirming Theorem 3.3.
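This example can be verified mechanically; the following sketch (mine) checks both inverses and the entrywise comparison of Theorem 3.3 in floating point:

```python
import numpy as np

A = np.array([[ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0],
              [-1.0,  1.0,  1.0]])

invA = np.linalg.inv(A)
assert np.allclose(invA, [[0.5, 0.5, 0.0],
                          [0.0, 0.5, 0.5],
                          [0.5, 0.0, 0.5]])

inv_sub = np.linalg.inv(A[:2, :2])         # inverse of A_(3,3)
assert np.allclose(inv_sub, [[0.5, 0.5], [-0.5, 0.5]])

# Theorem 3.3: equality where (A^{-1})_{32} = 0 or (A^{-1})_{13} = 0,
# and a strict inequality at the (2,1) position
assert (inv_sub <= invA[:2, :2] + 1e-12).all()
```
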
The third example is
$$
A \equiv \begin{bmatrix}
\frac{31}{6} & -\frac{17}{6} & \frac{4}{3} & -\frac{25}{6} \\
-\frac{26}{3} & \frac{16}{3} & -\frac{4}{3} & \frac{17}{3} \\
\frac{17}{3} & -\frac{13}{3} & \frac{1}{3} & -\frac{8}{3} \\
-\frac{3}{2} & \frac{3}{2} & 0 & \frac{1}{2}
\end{bmatrix}, \quad
A^{-1} = \begin{bmatrix}
3 & 5 & 8 & 11 \\
1 & 2 & 4 & 7 \\
10 & 13 & 15 & 16 \\
6 & 9 & 12 & 14
\end{bmatrix}, \quad \text{and} \quad |A| = \frac{1}{6}.
$$
For this matrix,
$$
\left| A_{(4,4)} \right| = \begin{vmatrix}
\frac{31}{6} & -\frac{17}{6} & \frac{4}{3} \\
-\frac{26}{3} & \frac{16}{3} & -\frac{4}{3} \\
\frac{17}{3} & -\frac{13}{3} & \frac{1}{3}
\end{vmatrix} = \frac{7}{3} > 0, \quad \text{and} \quad
\left| (A_{(4,4)})_{(3,3)} \right| = \begin{vmatrix}
\frac{31}{6} & -\frac{17}{6} \\
-\frac{26}{3} & \frac{16}{3}
\end{vmatrix} = 3 > 0.
$$
These verify Theorem 3.1. (Note that
$$
\left| (A_{(4,4)})_{(1,1)} \right| = \begin{vmatrix}
\frac{16}{3} & -\frac{4}{3} \\
-\frac{13}{3} & \frac{1}{3}
\end{vmatrix} = -4 < 0,
$$
so not every principal minor is positive.) On the other hand,
$$
\left( A_{(4,4)} \right)^{-1} = \begin{bmatrix}
\frac{31}{6} & -\frac{17}{6} & \frac{4}{3} \\
-\frac{26}{3} & \frac{16}{3} & -\frac{4}{3} \\
\frac{17}{3} & -\frac{13}{3} & \frac{1}{3}
\end{bmatrix}^{-1} = \begin{bmatrix}
-\frac{12}{7} & -\frac{29}{14} & -\frac{10}{7} \\
-2 & -\frac{5}{2} & -2 \\
\frac{22}{7} & \frac{19}{7} & \frac{9}{7}
\end{bmatrix},
$$
which confirms Theorem 3.3.
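Because the entries are fractions, this example is best verified in exact arithmetic. The sketch below (mine) confirms that the stated inverse is exact and that $|A| = 1/6$ and $|A_{(4,4)}| = 7/3$:

```python
from fractions import Fraction as F

A = [[F(31, 6), F(-17, 6), F(4, 3),  F(-25, 6)],
     [F(-26, 3), F(16, 3), F(-4, 3), F(17, 3)],
     [F(17, 3), F(-13, 3), F(1, 3),  F(-8, 3)],
     [F(-3, 2), F(3, 2),   F(0),     F(1, 2)]]
invA = [[3, 5, 8, 11], [1, 2, 4, 7], [10, 13, 15, 16], [6, 9, 12, 14]]

# A times its claimed inverse should give the identity, exactly
for i in range(4):
    for j in range(4):
        s = sum(A[i][k] * invA[k][j] for k in range(4))
        assert s == (1 if i == j else 0)

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

assert det(A) == F(1, 6)                              # |A| = 1/6
assert det([row[:3] for row in A[:3]]) == F(7, 3)     # |A_(4,4)| = 7/3
```
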
The final example is to verify Corollary 3.4:
$$
A \equiv \begin{bmatrix}
-\frac{17}{24} & \frac{2}{3} & -\frac{5}{24} \\
\frac{1}{6} & -\frac{1}{3} & \frac{1}{6} \\
\frac{23}{24} & -\frac{2}{3} & \frac{11}{24}
\end{bmatrix}, \quad \text{and} \quad
A^{-1} = \begin{bmatrix}
-1 & -4 & 1 \\
2 & -3 & 2 \\
5 & 4 & 3
\end{bmatrix}.
$$
Since
$$
\begin{bmatrix}
-\frac{17}{24} & \frac{2}{3} \\
\frac{1}{6} & -\frac{1}{3}
\end{bmatrix}^{-1} = \begin{bmatrix}
-\frac{8}{3} & -\frac{16}{3} \\
-\frac{4}{3} & -\frac{17}{3}
\end{bmatrix},
$$
this conforms to Corollary 3.4.
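This last example can likewise be checked mechanically, including the hypotheses of Corollary 3.4 (nonnegative last column and bottom row of the inverse, positive $(n,n)$ entry); the sketch below (mine) recovers $A$ from its stated inverse:

```python
import numpy as np

invA = np.array([[-1.0, -4.0, 1.0],
                 [ 2.0, -3.0, 2.0],
                 [ 5.0,  4.0, 3.0]])
A = np.linalg.inv(invA)                    # recover A from its stated inverse

# hypotheses of Corollary 3.4: last column and bottom row of inv(A)
# are non-negative, and the (n, n) entry is positive
assert (invA[:, -1] >= 0).all() and (invA[-1, :] >= 0).all()
assert invA[-1, -1] > 0

inv_sub = np.linalg.inv(A[:2, :2])         # inverse of A_(3,3)
assert np.allclose(inv_sub, [[-8/3, -16/3], [-4/3, -17/3]])
assert (inv_sub <= invA[:2, :2] + 1e-9).all()   # conclusion of Corollary 3.4
```
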
Remark 4.1. The Le Chatelier-Braun principle in thermodynamics states that when an equilibrium in a closed system is perturbed, directly or indirectly, the equilibrium shifts in the direction which can attenuate the perturbation. As is explained in Fujimoto [4], the system of equations $Ax = b$ can be solved as an optimization problem when $A$ is an M-matrix. Thus, a solution $x$ to the system can be viewed as a sort of equilibrium. A similar argument can be made when $A$ is inverse-positive. In our case, a perturbation is a new constraint that the $n$-th variable $x_n$ should be kept constant even after the vector $b$ shifts, destroying the $n$-th equation. The changes in the other variables may become smaller when the increase of those variables requires $x_n$ to be greater. This is obvious in the case of an M-matrix. What we have shown is that it also holds for an inverse-positive matrix, or even for a matrix with a positively bordered inverse, as can be seen from Corollary 3.4.

Remark 4.2. Much more can be said about sensitivity analysis for the case of M-matrices. We can also deal with the effects of changes in the elements of $A$ on the solution vector $x$. See Fujimoto, Herrero, and Villar [5].
REFERENCES

[1] Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979.
[2] Dipankar Dasgupta. Using the correct economic interpretation to prove the Hawkins-Simon-Nikaido theorem: one more note. Journal of Macroeconomics, 14:755-761, 1992.
[3] Miroslav Fiedler and Vlastimil Ptak. On matrices with non-positive off-diagonal elements and positive principal minors. Czechoslovak Mathematical Journal, 12:382-400, 1962.
[4] Takao Fujimoto. Global strong Le Chatelier-Samuelson principle. Econometrica, 48:1667-1674, 1980.
[5] Takao Fujimoto, Carmen Herrero, and Antonio Villar. A sensitivity analysis for linear systems involving M-matrices and its application to the Leontief model. Linear Algebra and Its Applications, 64:85-91, 1985.
[6] Nicholas Georgescu-Roegen. Some properties of a generalized Leontief model. In Tjalling Koopmans (ed.), Activity Analysis of Production and Allocation, John Wiley & Sons, New York, 165-173, 1951.
[7] David Hawkins and Herbert A. Simon. Note: some conditions of macroeconomic stability. Econometrica, 17:245-248, 1949.
[8] Hukukane Nikaido. Convex Structures and Economic Theory. Academic Press, New York, 1968.
[9] Hukukane Nikaido. Introduction to Sets and Mappings in Modern Economics. Academic Press, New York, 1970.
[10] Alexander Ostrowski. Über die Determinanten mit überwiegender Hauptdiagonale. Commentarii Mathematici Helvetici, 10:69-96, 1937.
