Research Article
The Solution of Absolute Value Equations Using Two New
Generalized Gauss-Seidel Iteration Methods
Rashid Ali
School of Mathematics and Statistics, HNP-LAMA, Central South University, Changsha, 410083 Hunan, China
Copyright © 2022 Rashid Ali. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In this paper, we provide two new generalized Gauss-Seidel (NGGS) iteration methods for solving the absolute value equation Ax − |x| = b, where A ∈ R^{n×n} and b ∈ R^n are given and x ∈ R^n is the unknown solution vector. Also, convergence results are established under mild assumptions. Finally, numerical results demonstrate the credibility of our approaches.
Table 1: Numerical results for Example 4 with Ψ = 0.3 and λ = 0.95.

(i) We extend the GGS technique [23] to the general case. To achieve this goal, we introduce two additional parameters (λ and Ω) that can accelerate the convergence procedure.

(ii) A variety of novel conditions are used to investigate the convergence properties of the newly developed methods.

The remainder of this paper is organized in the following manner. In Section 2, we present some notations and a lemma that will be used throughout the remainder of this study. In Section 3, we propose the NGGS procedures and discuss their convergence. We demonstrate the efficiency of these algorithms in Section 4 by providing numerical examples. In the last section, we make concluding remarks.

2. Preliminaries

Here we briefly review the notations and concepts used in this article.

Let A = (a_ij) ∈ R^{n×n}. We denote the infinity norm and the absolute value by ∥A∥∞ and |A| = (|a_ij|), respectively. The matrix A ∈ R^{n×n} is called a Z-matrix if a_ij ≤ 0 for i ≠ j, and an M-matrix if it is a nonsingular Z-matrix with A^{−1} ≥ 0.

Lemma 1 [36]. Suppose x, z ∈ R^n; then ∥x − z∥∞ ≥ ∥|x| − |z|∥∞.

3. NGGS Iteration Methods

Here, we examine the suggested methods (NGGS method I and NGGS method II) for solving AVE (1).

3.1. NGGS Method I for AVE. Recall that the AVE (1) has the following form:

Ax − |x| = b.

Multiplying by λ and rearranging, we get

(Ω + D_A − λL_A)x^{i+1} − λ|x^{i+1}| = [(1 − λ)(Ω + D_A) + λ(Ω + U_A)]x^i + λb, (6)

where i = 0, 1, 2, ⋯ and 0 < λ ≤ 1 (see Appendix). Note that if λ = 1 and Ω = 0, then Eq. (6) reduces to the GGS method [23].

The next step in the analysis is to verify the convergence of NGGS method I by means of the following theorem.

Theorem 2. Suppose that AVE (1) is solvable, that the diagonal entries of A are greater than 1, and that the matrix D_A − L_A − I is strictly row-wise diagonally dominant. If

∥(Ω + D_A − λL_A)^{−1}[(1 − λ)(Ω + D_A) + λ(Ω + U_A)]∥∞ < 1 − λ∥(Ω + D_A − λL_A)^{−1}∥∞, (7)

then the sequence {x^i} of the NGGS method I converges to the unique solution x⋆ of AVE (1).

Proof. We first prove that ∥(Ω + D_A − λL_A)^{−1}∥∞ < 1. Clearly, if L_A = 0, then ∥(Ω + D_A − λL_A)^{−1}∥∞ = ∥(Ω + D_A)^{−1}∥∞ < 1. If we assume that L_A ≠ 0, then since D_A − L_A − I is strictly row-wise diagonally dominant, we get

0 ≤ |λL_A| t < (Ω + D_A − I)t.

Multiplying both sides by (Ω + D_A)^{−1}, we get

(Ω + D_A)^{−1}|λL_A| t < (Ω + D_A)^{−1}((Ω + D_A) − I)t,
|λ(Ω + D_A)^{−1}L_A| t < (I − (Ω + D_A)^{−1})t,
|λ(Ω + D_A)^{−1}L_A| t < t − (Ω + D_A)^{−1}t,
(Ω + D_A)^{−1}t < t − |λ(Ω + D_A)^{−1}L_A| t,
(Ω + D_A)^{−1}t < (I − |Q|)t, (8)

where Q = λ(Ω + D_A)^{−1}L_A and t = (1, 1, ⋯, 1)^T. Also, we have
Computational and Mathematical Methods 3
0 ≤ |(I − Q)^{−1}| = |I + Q + Q² + Q³ + ⋯ + Q^{n−1}| ≤ I + |Q| + |Q|² + |Q|³ + ⋯ + |Q|^{n−1} = (I − |Q|)^{−1}. (9)

Thus, from (8) and (9), we get

|(Ω + D_A − λL_A)^{−1}| t = |(I − Q)^{−1}(Ω + D_A)^{−1}| t ≤ |(I − Q)^{−1}| |(Ω + D_A)^{−1}| t < (I − |Q|)^{−1}(I − |Q|)t = t.

So, we obtain

∥(Ω + D_A − λL_A)^{−1}∥∞ < 1.

Uniqueness: Let x⋆ and z⋆ be two different solutions of the AVE (1). Using (5), we get

x⋆ = λ(Ω + D_A − λL_A)^{−1}|x⋆| + (Ω + D_A − λL_A)^{−1}[((1 − λ)(Ω + D_A) + λ(Ω + U_A))x⋆ + λb], (10)

z⋆ = λ(Ω + D_A − λL_A)^{−1}|z⋆| + (Ω + D_A − λL_A)^{−1}[((1 − λ)(Ω + D_A) + λ(Ω + U_A))z⋆ + λb]. (11)

From (10) and (11), we get

x⋆ − z⋆ = λ(Ω + D_A − λL_A)^{−1}(|x⋆| − |z⋆|) + (Ω + D_A − λL_A)^{−1}((1 − λ)(Ω + D_A) + λ(Ω + U_A))(x⋆ − z⋆).

Based on Lemma 1 and Eq. (7), the above equation can be expressed as follows:

∥x⋆ − z⋆∥∞ ≤ λ∥(Ω + D_A − λL_A)^{−1}∥∞ ∥|x⋆| − |z⋆|∥∞ + ∥(Ω + D_A − λL_A)^{−1}((1 − λ)(Ω + D_A) + λ(Ω + U_A))∥∞ ∥x⋆ − z⋆∥∞
< λ∥(Ω + D_A − λL_A)^{−1}∥∞ ∥x⋆ − z⋆∥∞ + (1 − λ∥(Ω + D_A − λL_A)^{−1}∥∞)∥x⋆ − z⋆∥∞.

Subtracting λ∥(Ω + D_A − λL_A)^{−1}∥∞ ∥x⋆ − z⋆∥∞ from both sides gives

(1 − λ∥(Ω + D_A − λL_A)^{−1}∥∞)∥x⋆ − z⋆∥∞ < (1 − λ∥(Ω + D_A − λL_A)^{−1}∥∞)∥x⋆ − z⋆∥∞,

and hence ∥x⋆ − z⋆∥∞ < ∥x⋆ − z⋆∥∞, which is a contradiction. Thus, x⋆ = z⋆.

Convergence: Let x⋆ be the unique solution to AVE (1). Consequently, from (6), written as

x^{i+1} = λ(Ω + D_A − λL_A)^{−1}|x^{i+1}| + (Ω + D_A − λL_A)^{−1}[((1 − λ)(Ω + D_A) + λ(Ω + U_A))x^i + λb],

and (10), we deduce

x^{i+1} − x⋆ = λ(Ω + D_A − λL_A)^{−1}(|x^{i+1}| − |x⋆|) + (Ω + D_A − λL_A)^{−1}((1 − λ)(Ω + D_A) + λ(Ω + U_A))(x^i − x⋆).

By taking the infinity norm and using Lemma 1, we have

∥x^{i+1} − x⋆∥∞ − λ∥(Ω + D_A − λL_A)^{−1}∥∞ ∥x^{i+1} − x⋆∥∞ ≤ ∥(Ω + D_A − λL_A)^{−1}((1 − λ)(Ω + D_A) + λ(Ω + U_A))∥∞ ∥x^i − x⋆∥∞,

and since λ∥(Ω + D_A − λL_A)^{−1}∥∞ < 1, it follows that

∥x^{i+1} − x⋆∥∞ ≤ [∥(Ω + D_A − λL_A)^{−1}((1 − λ)(Ω + D_A) + λ(Ω + U_A))∥∞ / (1 − λ∥(Ω + D_A − λL_A)^{−1}∥∞)] ∥x^i − x⋆∥∞.

According to the inequality above, the presented approach converges to the solution when condition (7) is met.

3.2. NGGS Method II for AVE. In this section, we describe the NGGS method II. Based on (3) and (4), we can express the suggested method for solving AVE (1) as follows (see Appendix):

(Ω + D_A − λL_A)x^{i+1} − λ|x^{i+1}| = [(1 − λ)(Ω + D_A) + λ(Ω + U_A)]x^{i+1} + λb, i = 0, 1, 2, ⋯.

In the following, we examine the convergence results for NGGS method II.

Theorem 3. Suppose that AVE (1) is solvable, that the diagonal entries of A are greater than 1, and that D_A − L_A − I is row-wise diagonally dominant; then the sequence of the NGGS method II converges to the unique solution x⋆ of AVE (1).

Proof. The uniqueness can be inferred directly from Theorem 2. For convergence, consider

x^{i+1} − x⋆ = λ(Ω + D_A − λL_A)^{−1}|x^{i+1}| + (Ω + D_A − λL_A)^{−1}[((1 − λ)(Ω + D_A) + λ(Ω + U_A))x^{i+1} + λb] − (λ(Ω + D_A − λL_A)^{−1}|x⋆| + (Ω + D_A − λL_A)^{−1}[((1 − λ)(Ω + D_A) + λ(Ω + U_A))x⋆ + λb]),

so that

(Ω + D_A − λL_A)(x^{i+1} − x⋆) = λ(|x^{i+1}| − |x⋆|) + ((1 − λ)(Ω + D_A) + λ(Ω + U_A))(x^{i+1} − x⋆),
λ(D_A − L_A − U_A)x^{i+1} − λ|x^{i+1}| = λ(D_A − L_A − U_A)x⋆ − λ|x⋆|,
(D_A − L_A − U_A)x^{i+1} − |x^{i+1}| = (D_A − L_A − U_A)x⋆ − |x⋆|. (12)

From (4) and (12), we have

Ax^{i+1} − |x^{i+1}| = Ax⋆ − |x⋆|,
Ax^{i+1} − |x^{i+1}| = b.

Therefore, x^{i+1} solves the system of AVE (1).

4. Numerical Tests

Here, four examples are provided to illustrate the performance of the novel approaches from three different perspectives:

(i) The number of iterations (denoted by "Itr")

(ii) The computational time in seconds (denoted by "Time")

(iii) The residual error (denoted by "RSV")

Here, "RSV" is defined by

RSV ≔ ∥Ax^i − |x^i| − b∥₂ / ∥b∥₂,

and the iterations stop when RSV ≤ 10^{−6}. All numerical tests were conducted on a personal computer with a 1.80 GHz CPU (Intel(R) Core(TM) i5-3337U) and 4 GB of memory using MATLAB (2016a). In addition, the zero vector is the initial vector for Example 4.

Example 4. Let

A = tridiag(−1, 4, −1) ∈ R^{n×n}.

Calculate b = Ax⋆ − |x⋆| ∈ R^n with x⋆ = (x₁, x₂, x₃, ⋯, x_n)^T ∈ R^n such that x_i = (−1)^i. Here, the proposed methods are compared to two existing methods: the SOR-like optimal parameters technique shown in [16] (denoted by SLM, using ω = 0.825) and the shift splitting iteration approach described in [22] (denoted by SM). The results are provided in Table 1.

Table 1 presents the solution x⋆ for various values of n. The result of this comparison shows that our proposed techniques are more efficient than the SLM and SM approaches in terms of "Itr" and "Time."

Example 5. Consider A = M + 4I ∈ R^{n×n} and b = Ax⋆ − |x⋆| ∈ R^n with M = tridiag(−I, H_n, −I) ∈ R^{n×n} and x⋆ = (−1, 1, −1, 1, ⋯, −1, 1)^T ∈ R^n, where H_n = tridiag(−1, 4, −1) ∈ R^{v×v}, I ∈ R^{v×v} is the identity matrix, and n = v². For Examples 5 and 6, we use the same stopping criterion and initial guess as in [18]. The recommended methods are compared with the AOR approach [14] and the mixed-type splitting (MTS) iterative technique [18]. The outcomes are summarized in Table 2.

In Table 2, we present the numerical outcomes of the AOR method, the MTS method, NGGS method I, and NGGS method II, respectively. Our results indicate that the proposed methods are more effective than both the AOR and MTS approaches.

The proposed procedures are more efficient in iteration steps and computing time than the existing methods.

Appendix

Here, we describe the implementation of the novel methods. From Ax − |x| = b, we have

x = A^{−1}(|x| + b). (A.1)

Thus, we can approximate x^{i+1} as follows:

x^{i+1} ≈ A^{−1}(|x^i| + b). (A.2)

This approach is known as the Picard approach [9]. Our next discussion concerns the algorithm for NGGS method I, which is as follows:

(1) Select the parameters Ψ and λ, an initial guess x⁰ ∈ R^n, and put i = 0

(2) Compute

y^i = x^{i+1} ≈ A^{−1}(|x^i| + b). (A.3)
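Since the update (6) is implicit in |x^{i+1}|, a practical realization must resolve the absolute-value term somehow. The sketch below (Python/NumPy rather than the paper's MATLAB) lags that term by one iterate, replacing |x^{i+1}| by |x^i|, which turns (6) into an explicit update with the same fixed point; it also assumes Ω = ΨI, since the text selects the scalar Ψ but the exact form of Ω is not restated in this excerpt. Applied to the data of Example 4 (A = tridiag(−1, 4, −1), Ψ = 0.3, λ = 0.95, zero initial vector), it uses the stopping test RSV ≤ 10⁻⁶:

```python
import numpy as np

def nggs_method_1(A, b, lam=0.95, psi=0.3, tol=1e-6, max_iter=1000):
    """Explicit variant of NGGS method I: |x^{i+1}| in Eq. (6) is lagged to |x^i|."""
    n = A.shape[0]
    D = np.diag(np.diag(A))      # D_A: diagonal part of A
    L = -np.tril(A, -1)          # L_A: strictly lower part, so that A = D_A - L_A - U_A
    U = -np.triu(A, 1)           # U_A: strictly upper part
    Omega = psi * np.eye(n)      # assumption: Omega = psi * I
    M = Omega + D - lam * L      # matrix applied to x^{i+1} in Eq. (6)
    N = (1 - lam) * (Omega + D) + lam * (Omega + U)
    x = np.zeros(n)              # zero initial vector, as in Example 4
    for itr in range(1, max_iter + 1):
        # Lagged form of Eq. (6): M x^{i+1} = lam*|x^i| + N x^i + lam*b
        x = np.linalg.solve(M, lam * np.abs(x) + N @ x + lam * b)
        rsv = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
        if rsv <= tol:
            break
    return x, itr, rsv

# Example 4 data: A = tridiag(-1, 4, -1), exact solution entries x_i = (-1)^i.
n = 100
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_star = np.array([(-1.0) ** i for i in range(1, n + 1)])
b = A @ x_star - np.abs(x_star)

x, itr, rsv = nggs_method_1(A, b)
```

The fixed point of this explicit update still solves the AVE, since M − N = λ(D_A − L_A − U_A) = λA; whether lagging the absolute value preserves the iteration counts reported in Table 1 is not addressed here.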
References

[1] Journal of Computational and Applied Mathematics, vol. 327, pp. 196–207, 2018.

[2] R. Ali, A. Ali, and S. Iqbal, "Iterative methods for solving absolute value equations," The Journal of Mathematics and Computer Science, vol. 26, no. 4, pp. 322–329, 2021.

[3] Z. Asgari, F. Toutounian, E. Babolian, and E. Tohidi, "An extended block Golub-Kahan algorithm for large algebraic and differential matrix Riccati equations," Computers & Mathematics with Applications, vol. 79, no. 8, pp. 2447–2457, 2020.

[4] Z. Asgari, F. Toutounian, E. Babolian, and E. Tohidi, "LSMR iterative method for solving one- and two-dimensional linear Fredholm integral equations," Computational and Applied Mathematics, vol. 38, no. 3, pp. 1–16, 2019.

[5] Z. Asgari, F. Toutounian, and E. Babolian, "Augmented subspaces in the LSQR Krylov method," Iranian Journal of Science and Technology, Transaction A: Science, vol. 44, pp. 1661–1665, 2020.

[6] F. K. Haghani, "On generalized Traub's method for absolute value equations," Journal of Optimization Theory and Applications, vol. 166, pp. 619–625, 2015.

[7] O. L. Mangasarian and R. R. Meyer, "Absolute value equations," Linear Algebra and Its Applications, vol. 419, no. 2-3, pp. 359–367, 2006.

[8] J. Rohn, "A theorem of the alternatives for the equation Ax + B|x| = b," Linear and Multilinear Algebra, vol. 52, no. 6, pp. 421–426, 2004.

[9] J. Rohn, V. Hooshyarbakhsh, and R. Farhadsefat, "An iterative method for solving absolute value equations and sufficient conditions for unique solvability," Optimization Letters, vol. 8, pp. 35–44, 2014.

[10] L. Rabiei, Y. Ordokhani, and E. Babolian, "Fractional-order Legendre functions and their application to solve fractional optimal control of systems described by integro-differential equations," Acta Applicandae Mathematicae, vol. 158, no. 1, pp. 87–106, 2018.

[11] B. Saheya, C.-H. Yu, and J.-S. Chen, "Numerical comparisons based on four smoothing functions for absolute value equation," Journal of Applied Mathematics and Computing, vol. 56, pp. 131–149, 2018.

[12] F. Toutounian and E. Tohidi, "A new Bernoulli matrix method for solving second order linear partial differential equations with the convergence analysis," Applied Mathematics and Computation, vol. 223, pp. 298–310, 2013.

[13] F. Toutounian, E. Tohidi, and A. Kilicman, "Fourier operational matrices of differentiation and transmission: introduction and applications," Abstract and Applied Analysis, vol. 2013, article 198926, 2013.

[14] C. X. Li, "A preconditioned AOR iterative method for the absolute value equations," International Journal of Computational Methods, vol. 14, no. 2, article 1750016, 2017.

[15] Y. F. Ke and C. F. Ma, "SOR-like iteration method for solving absolute value equations," Applied Mathematics and Computation, vol. 311, pp. 195–202, 2017.

[16] C. Chen, D. Yu, and D. Han, "Optimal parameter for the SOR-like iteration method for solving the system of absolute value equations," Journal of Applied Analysis and Computation, vol. 17, pp. 1–15, 2020.

[17] S.-L. Hu and Z.-H. Huang, "A note on absolute value equations," Optimization Letters, vol. 4, pp. 417–424, 2010.

[18] A. J. Fakharzadeh and N. N. Shams, "An efficient algorithm for solving absolute value equations," Journal of Mathematical Extension, vol. 15, pp. 1–23, 2021.

[19] M. Zhang, Z.-H. Huang, and Y.-F. Li, "The sparsest solution to the system of absolute value equations," Journal of the Operations Research Society of China, vol. 3, no. 1, pp. 31–51, 2015.

[20] L. Caccetta, B. Qu, and G. L. Zhou, "A globally and quadratically convergent method for absolute value equations," Computational Optimization and Applications, vol. 48, no. 1, pp. 45–58, 2011.

[21] X.-M. Gu, T.-Z. Huang, H.-B. Li, S.-F. Wang, and L. Li, "Two-CSCS based iteration methods for solving absolute value equations," Journal of Applied Mathematics and Computing, vol. 7, no. 4, pp. 1336–1356, 2017.

[22] S. L. Wu and C. X. Li, "A special shift splitting iteration method for absolute value equation," AIMS Mathematics, vol. 5, no. 5, pp. 5171–5183, 2020.

[23] V. Edalatpour, D. Hezari, and D. Khojasteh Salkuyeh, "A generalization of the Gauss-Seidel iteration method for solving absolute value equations," Applied Mathematics and Computation, vol. 293, pp. 156–167, 2017.

[24] M. Dehghan and A. Shirilord, "Matrix multisplitting Picard-iterative method for solving generalized absolute value matrix equation," Applied Numerical Mathematics, vol. 158, pp. 425–438, 2020.

[25] X. Dong, X.-H. Shao, and H.-L. Shen, "A new SOR-like method for solving absolute value equations," Applied Numerical Mathematics, vol. 156, pp. 410–421, 2020.

[26] F. Hashemi and S. Ketabchi, "Numerical comparisons of smoothing functions for optimal correction of an infeasible system of absolute value equations," Numerical Algebra, Control & Optimization, vol. 10, no. 1, pp. 13–21, 2020.

[27] S. Ketabchi and H. Moosaei, "Minimum norm solution to the absolute value equation in the convex case," Journal of Optimization Theory and Applications, vol. 154, pp. 1080–1087, 2012.

[28] Y. F. Ke, "The new iteration algorithm for absolute value equation," Applied Mathematics Letters, vol. 99, article 105990, 2020.

[29] F. Mezzadri, "On the solution of general absolute value equations," Applied Mathematics Letters, vol. 107, article 106462, 2020.

[30] H. Moosaei, S. Ketabchi, and H. Jafari, "Minimum norm solution of the absolute value equations via simulated annealing algorithm," Afrika Matematika, vol. 26, no. 7-8, pp. 1221–1228, 2015.

[31] O. L. Mangasarian, "Absolute value equation solution via concave minimization," Optimization Letters, vol. 1, pp. 3–5, 2006.

[32] O. L. Mangasarian, "Linear complementarity as absolute value equation solution," Optimization Letters, vol. 8, pp. 1529–1534, 2014.

[33] X.-H. Miao, J.-T. Yang, B. Saheya, and J.-S. Chen, "A smoothing Newton method for absolute value equation associated with second-order cone," Applied Numerical Mathematics, vol. 120, pp. 82–96, 2017.

[34] C. T. Nguyen, B. Saheya, Y.-L. Chang, and J.-S. Chen, "Unified smoothing functions for absolute value equation associated with second-order cone," Applied Numerical Mathematics, vol. 135, pp. 206–227, 2019.