www.elsevier.com/locate/laa
Abstract

For unsolvable systems of linear equations of the form A ⊗ x = b over the max–min (fuzzy) algebra ℛ = (⟨0, 1⟩, ⊕ = max, ⊗ = min), we propose an efficient method for finding a Chebychev-best approximation of the matrix A ∈ ℛ(m, n) in the set S(b) = {D ∈ ℛ(m, n); D ⊗ x = b is solvable}. © 2000 Published by Elsevier Science Inc. All rights reserved.
AMS classification: 15A06; 90C48; 65K05; 04A72
1. Introduction
It happens quite often when modelling real situations by systems of linear equations that the resulting system turns out to be unsolvable. One possibility would be to omit some equations, but the problem of determining the minimum number of equations to omit is NP-complete, see [6] for systems over a field and [2] for systems over extremal algebras.

K. Cechlárová / Linear Algebra and its Applications 310 (2000) 123–128

For fuzzy relational equations (max–min linear systems), Pedrycz [5] proposed to modify the right-hand side slightly to obtain a solvable system.
Based on Pedrycz' idea, Cuninghame-Green and Cechlárová [4] devised a polynomial procedure for finding the Chebychev-best approximation for the right-hand side of a system, using the theory of residuation. Let us mention here that the same problem over the field of reals leads to a linear program, and over the max-group algebra a straightforward solution has been given by Cuninghame-Green in [3].
This paper deals with the situation when the right-hand side of an unsolvable
max–min fuzzy system remains constant and we are allowed to modify the matrix
to get a solvable system. The objective is again the Chebychev-minimality of this
modification.
A max–min or fuzzy algebra ℛ is the interval ⟨0, 1⟩ of reals with two operations: ⊕ = max and ⊗ = min. The set of all m × n matrices over ℛ will be denoted by ℛ(m, n) and the set of (column) m-vectors by ℛ^m. For brevity, we shall denote by M and N the index sets {1, 2, . . . , m} and {1, 2, . . . , n}, respectively. Matrices are represented by italic capitals, vectors by boldface letters and their entries by the usual roman ones.
A system of linear equations of the form
A ⊗ x = b   (1)
uses the operations ⊕ and ⊗ instead of addition and multiplication, formally in the same way as over a field. For each system (1) the so-called principal solution x*(A, b) is defined by

x*(A, b) = A^T ⊗′ b,   (2)
where in the ⊗′ matrix multiplication the operations between scalars are ⊕′ = min and ⊗′ is defined by

a ⊗′ b = b if a > b,   (3)
         1 if a ≤ b.
The inequality

A ⊗ x*(A, b) ≤ b   (4)

always holds, but system (1) is solvable if and only if the principal solution (2) is its solution, see e.g. [4].
For a combinatorial derivation of necessary and sufficient conditions for the solvability of (1) see [1].
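For concreteness, the principal solution (2)–(3) and the solvability criterion built on (4) can be sketched in a few lines. This is our illustrative Python, assuming numpy is available; the function names are ours, not the paper's.

```python
import numpy as np

def circ(a, b):
    """Scalar operation (3): a *' b = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def principal_solution(A, b):
    """x*(A, b) = A^T *' b as in (2), with min between scalars."""
    m, n = A.shape
    return np.array([min(circ(A[i, j], b[i]) for i in range(m))
                     for j in range(n)])

def maxmin_prod(A, x):
    """The product A (x) x over the max-min algebra."""
    m, n = A.shape
    return np.array([max(min(A[i, j], x[j]) for j in range(n))
                     for i in range(m)])

def is_solvable(A, b):
    """By the cited result, (1) is solvable iff x*(A, b) solves it."""
    return bool(np.allclose(maxmin_prod(A, principal_solution(A, b)), b))
```

For example, with A = [[0.5, 0.3], [0.2, 0.7]] the right-hand side b = (0.4, 0.6) gives a solvable system (x*(A, b) = (0.4, 0.6) solves it), while b = (0.9, 0.6) does not, since no entry of the first row reaches 0.9; in both cases A ⊗ x*(A, b) ≤ b, in accordance with (4).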
Since we shall change the matrix and keep the right-hand side of the system constant, let us denote

S(b) = {D ∈ ℛ(m, n); the system D ⊗ x = b is solvable}.
Proof. Let us recall that x_j*(A, b) = min_i {a_ij ⊗′ b_i} and similarly x_j*(D, b) = min_i {d_ij ⊗′ b_i}, which can be rewritten as

x_j*(A, b) = 1 if a_ij ≤ b_i for all i,   (6)
             min{b_i; a_ij > b_i} otherwise,

and

x_j*(D, b) = 1 if d_ij ≤ b_i for all i,   (7)
             min{b_i; d_ij > b_i} otherwise.

As A ⊢ D ⊣ b, we have {i; d_ij > b_i} ⊆ {i; a_ij > b_i} for each j ∈ N, so the desired inequality follows.
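The agreement of the case form (6)–(7) with definition (2) is easy to check numerically. The following sketch (our Python, assuming numpy; names are ours) computes x*(A, b) both ways.

```python
import numpy as np

def circ(a, b):
    """Scalar operation (3): returns 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def xstar_def(A, b):
    """x*(A, b) = A^T *' b, directly from definition (2)."""
    return np.array([min(circ(A[i, j], b[i]) for i in range(A.shape[0]))
                     for j in range(A.shape[1])])

def xstar_cases(A, b):
    """The same vector via the case form (6): 1 if the j-th column is
    entrywise <= b, otherwise min{b_i ; a_ij > b_i}."""
    out = []
    for j in range(A.shape[1]):
        over = [b[i] for i in range(A.shape[0]) if A[i, j] > b[i]]
        out.append(min(over) if over else 1.0)
    return np.array(out)
```

Both functions return the same vector, since circ(a_ij, b_i) contributes b_i exactly when a_ij > b_i and 1 otherwise.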
Lemma 3. Let A ∈ ℛ(m, n), b ∈ ℛ^m be given. If there exists D ∈ S(b) such that ρ(A, D) = δ, then there also exists C ∈ S(b) such that A ⊢ C ⊣ b and ρ(A, C) ≤ δ.
Proof. The matrix C is constructed entrywise; for each i ∈ M, j ∈ N we distinguish three cases:
1. If a_ij ∈ ⟨b_i, d_ij⟩ or a_ij ∈ ⟨d_ij, b_i⟩, we set c_ij = max{b_i, a_ij − (d_ij − a_ij)} = max{b_i, 2a_ij − d_ij} or c_ij = min{b_i, a_ij + (a_ij − d_ij)} = min{b_i, 2a_ij − d_ij}, respectively.
2. If d_ij ∈ ⟨a_ij, b_i⟩ or d_ij ∈ ⟨b_i, a_ij⟩, we leave the corresponding entry of C unchanged, i.e. c_ij = d_ij.
3. If b_i ∈ ⟨a_ij, d_ij⟩ or b_i ∈ ⟨d_ij, a_ij⟩, we set c_ij = b_i.
It is obvious that ρ(A, C) ≤ δ and A ⊢ C ⊣ b. Moreover, D ⊢ C ⊣ b, thus Lemma 2 implies C ∈ S(b).
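The three-case construction can be written out entrywise as follows. This is our reading of the proof in Python (assuming numpy; names are ours), and a quick way to verify that C stays within distance ρ(A, D) of A and lies between A and b as well as between D and b.

```python
import numpy as np

def between(x, u, v):
    """True iff x lies in the closed interval with endpoints u and v."""
    return min(u, v) <= x <= max(u, v)

def pull_back(A, D, b):
    """Entrywise construction of C from the proof of Lemma 3
    (our reading): cases 1-3 applied independently to each entry."""
    C = np.empty_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            a, d, bi = A[i, j], D[i, j], b[i]
            if between(d, a, bi):      # case 2: d between a and b -> keep d
                C[i, j] = d
            elif between(bi, a, d):    # case 3: b between a and d -> take b
                C[i, j] = bi
            else:                      # case 1: a between b and d
                # reflect d across a, then clip at b
                C[i, j] = max(bi, 2*a - d) if bi <= a else min(bi, 2*a - d)
    return C
```

In each case |c_ij − a_ij| ≤ |d_ij − a_ij| and c_ij lies both in ⟨a_ij, b_i⟩ and in ⟨d_ij, b_i⟩, which is exactly what the lemma needs.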
3. Algorithm
The main idea of the algorithm is to let the matrix successively 'approach' the right-hand side of the system. This is formalized in the following definition.
Algorithm MATRIX
begin
  k := 0; Δ_0 := 0; A(Δ_0) := A;
  compute x*(A, b);
  if A ⊗ x*(A, b) ≠ b then
    repeat
      Δ_{k+1} := Δ_k + min_{i,j} {|A(Δ_k)_ij − b_i|; A(Δ_k)_ij ≠ b_i};
      k := k + 1;
      A(Δ_k) := (A; Δ_k → b);
    until A(Δ_k) ⊗ x*(A(Δ_k), b) = b;
  output A(Δ_k), Δ_k
end
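A runnable sketch of algorithm MATRIX follows (our Python, assuming numpy; names are ours). The helper `shifted` encodes our reading of the notation (A; Δ → b): every entry of A is moved Δ toward the corresponding b_i, stopping once it reaches it. The explicit counter k is dropped, since only the current Δ and A(Δ) matter for the output.

```python
import numpy as np

def circ(a, b):
    """Scalar operation (3)."""
    return 1.0 if a <= b else b

def principal_solution(A, b):
    """x*(A, b) = A^T *' b as in (2)."""
    m, n = A.shape
    return np.array([min(circ(A[i, j], b[i]) for i in range(m))
                     for j in range(n)])

def maxmin_prod(A, x):
    """A (x) x over the max-min algebra."""
    m, n = A.shape
    return np.array([max(min(A[i, j], x[j]) for j in range(n))
                     for i in range(m)])

def solvable(A, b):
    """System (1) is solvable iff the principal solution solves it."""
    return bool(np.allclose(maxmin_prod(A, principal_solution(A, b)), b))

def shifted(A, b, delta):
    """(A; delta -> b): each entry moved delta toward its right-hand
    side, stopping once it reaches it (our reading of the notation)."""
    S = A.copy()
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            gap = b[i] - A[i, j]
            S[i, j] = b[i] if abs(gap) <= delta else A[i, j] + delta * np.sign(gap)
    return S

def algorithm_matrix(A, b):
    """Algorithm MATRIX: returns (A(Delta_k), Delta_k)."""
    delta, Ad = 0.0, A.copy()
    while not solvable(Ad, b):
        # next target error: smallest remaining entry-to-b distance
        delta += min(abs(Ad[i, j] - b[i])
                     for i in range(A.shape[0]) for j in range(A.shape[1])
                     if not np.isclose(Ad[i, j], b[i]))
        Ad = shifted(A, b, delta)
    return Ad, delta
```

On A = [[0.5, 0.3], [0.2, 0.7]] and b = (0.9, 0.6) the target errors are Δ_1 = 0.1 and Δ_2 = 0.4, and the algorithm returns A′ = [[0.9, 0.7], [0.6, 0.6]] with Δ = ρ(A, S(b)) = 0.4; this is clearly optimal, since the first row must contain an entry reaching b_1 = 0.9 and its closest entry is 0.5.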
Theorem 1. For a given matrix A ∈ ℛ(m, n) and b ∈ ℛ^m, algorithm MATRIX correctly computes in polynomial time the Chebychev distance Δ = ρ(A, S(b)) and a matrix A′ ∈ ℛ(m, n) such that ρ(A, A′) = Δ.
Proof. Let us denote by A′ the matrix A(Δ_k) produced by algorithm MATRIX. The choice of the 'target errors' Δ_ℓ, ℓ = 1, 2, . . . , k is such that no possible solution is jumped over: since for all values δ ∈ ⟨Δ_ℓ, Δ_{ℓ+1}) the mutual positions of the entries of A(δ) and b do not change, we have x*(A(δ), b) = x*(A(Δ_ℓ), b) and the solvability of the system A(δ) ⊗ x = b is the same as that of the system A(Δ_ℓ) ⊗ x = b. The remark after Definition 3 ensures that the algorithm eventually stops. Moreover, in each repeat–until cycle at least one additional matrix entry attains the value of its corresponding right-hand side and then stays at this value. Since the number of entries of A is m × n, the maximum number of repetitions of the repeat–until cycle is m × n. As the computation of the principal solution and the verification of the solvability of the system are polynomial, algorithm MATRIX is polynomial too.
To prove the correctness of the algorithm, suppose that there exists a matrix D ∈ S(b) such that ρ(A, D) = δ < ρ(A, A′) = Δ_k. In the light of Lemma 3, we can also suppose that A ⊢ D ⊣ b. Since δ < Δ_k, there must exist ℓ < k such that δ ∈ ⟨Δ_ℓ, Δ_{ℓ+1}), and A ⊢ D ⊣ b together with ρ(A, D) = δ means that D ⊢ A(δ) ⊣ b. Since D ∈ S(b), Lemma 2 implies that A(δ) ∈ S(b) and, as stated above, A(Δ_ℓ) must belong to S(b) too. But that means that algorithm MATRIX should have stopped for some ℓ < k, which is a contradiction.
Acknowledgement
The author would like to thank the anonymous referee for pointing out a missing
case in the proof of Lemma 3.
References
[1] K. Cechlárová, Unique solvability of max–min fuzzy equations and strong regularity of matrices over
fuzzy algebra, Fuzzy Sets Syst. 75 (1995) 165–177.
[2] K. Cechlárová, P. Diko, Resolving infeasibility in extremal algebras, Linear Algebra Appl. 290 (1999)
267–273.
[3] R.A. Cuninghame-Green, Minimax Algebra, Lecture Notes in Econom. Math. Syst., vol. 166, Springer, Berlin, 1979.
[4] R.A. Cuninghame-Green, K. Cechlárová, Residuation in fuzzy algebra and some applications, Fuzzy
Sets Syst. 71 (1995) 227–239.
[5] W. Pedrycz, Inverse problem in fuzzy relational equations, Fuzzy Sets Syst. 49 (1992) 339–355.
[6] J.K. Sankaran, A note on resolving infeasibility in linear programs by constraint relaxation, Oper.
Res. Lett. 13 (1993) 19–20.