
Numerical Methods

Systems of Non-Linear Equations

Universidad Panamericana Numerical Methods Dr. José Herrera Santos 1


Linearity and Non-Linearity – Single function

A mathematical function f(x) takes values from a set called the "domain" and maps them to another set called the "codomain". In engineering, this is known as a system, where x is the input and y is the output.

Linear example: f(x) = 4x, so f(2) = 8 and f(3) = 12.
Non-linear example: f(x) = 4x², so f(2) = 16 and f(3) = 36.

1) Scaling property: f(a·x) = a·f(x)
   For 4x:  f(5·2) = 40 and 5·f(2) = 40  →  the property holds.
   For 4x²: f(5·2) = 400 but 5·f(2) = 80  →  the property fails.

2) Superposition property: f(x₁ + x₂) = f(x₁) + f(x₂)
   For 4x:  f(2 + 3) = 20 and f(2) + f(3) = 20  →  the property holds.
   For 4x²: f(2 + 3) = 100 but f(2) + f(3) = 52  →  the property fails.

A function is linear only if it satisfies both properties.
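The two checks above can be reproduced numerically; this is a small sketch (the function names are mine, not from the slides):

```python
# Checking the scaling and superposition properties for
# f(x) = 4x (linear) and g(x) = 4x^2 (non-linear).
def f(x):
    return 4 * x       # linear

def g(x):
    return 4 * x ** 2  # non-linear

# Scaling: f(a*x) == a*f(x) holds only for the linear function
print(f(5 * 2), 5 * f(2))     # 40 40
print(g(5 * 2), 5 * g(2))     # 400 80

# Superposition: f(x1 + x2) == f(x1) + f(x2) holds only for the linear function
print(f(2 + 3), f(2) + f(3))  # 20 20
print(g(2 + 3), g(2) + g(3))  # 100 52
```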



Linear and Non-Linear Equation
3x + 8y + 4z − 10 = 0

Every term in the unknowns (3x, 8y, 4z) is linear, therefore the whole equation f(x, y, z) = 0 is linear.

4x² + 2x/(yz) + sin(y) + 3 = 0

Some terms in the unknowns (4x², 2x/(yz), sin(y)) are non-linear, therefore the whole equation is non-linear. It is not necessary that all the terms be non-linear; a single non-linear term is enough to make the equation non-linear.
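The same superposition test extends to functions of several variables; this sketch checks the two left-hand sides above (the test points are arbitrary choices of mine, with yz ≠ 0):

```python
# Superposition test for the two left-hand sides above (constants dropped):
# F1 is the linear form 3x + 8y + 4z, F2 is the non-linear combination.
import math

def F1(x, y, z):
    return 3*x + 8*y + 4*z

def F2(x, y, z):
    return 4*x**2 + 2*x/(y*z) + math.sin(y)

a = (1.0, 2.0, 3.0)
b = (0.5, -1.0, 2.0)
s = tuple(p + q for p, q in zip(a, b))  # component-wise sum of the inputs

print(F1(*s), F1(*a) + F1(*b))  # equal: the linear form superposes
print(F2(*s), F2(*a) + F2(*b))  # different: the non-linear one does not
```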



Single Variable Differentiation

Definition:

f′(x₀) = lim_{Δx→0} [f(x₀ + Δx) − f(x₀)] / Δx

f′(x₀) = lim_{Δx→0} Δy/Δx = dy/dx

f′(x₀) ≈ Δy/Δx    as long as Δx is small

The derivative of the function can be used to estimate the change in the dependent variable y when the independent variable x changes:

Δy ≈ f′(x₀) · Δx

As Δx becomes smaller, the approximation gets better, i.e., the tangent-line estimate f(x₀) + f′(x₀)·Δx gets closer to the true value f(x₀ + Δx).



Single variable differentiation example
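The worked example from this slide is not preserved in the text; as a sketch, take f(x) = 4x² from the earlier slide with x₀ = 2 and compare the true change Δy with the first-order estimate f′(x₀)·Δx for shrinking Δx:

```python
# Comparing the true change in y with the derivative-based estimate
# Δy ≈ f'(x0)·Δx for f(x) = 4x^2 at x0 = 2.
def f(x):
    return 4 * x ** 2

def fprime(x):
    return 8 * x   # analytic derivative of 4x^2

x0 = 2.0
for dx in (1.0, 0.1, 0.01):
    true_dy = f(x0 + dx) - f(x0)
    est_dy = fprime(x0) * dx
    print(dx, true_dy, est_dy)
# The gap true_dy - est_dy equals 4*dx^2, so it shrinks quadratically as Δx → 0.
```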



Extending the concept to multiple variables

z = f(x, y)    (function of two variables)

The change in z when x changes and y is kept constant is

∂f(x₀, y₀)/∂x = lim_{Δx→0} [f(x₀ + Δx, y₀) − f(x₀, y₀)] / Δx = lim_{Δx→0} Δz_x/Δx

∂f(x₀, y₀)/∂x ≈ Δz_x/Δx    →    Δz_x ≈ [∂f(x₀, y₀)/∂x] · Δx

The change in z when y changes and x is kept constant is

∂f(x₀, y₀)/∂y = lim_{Δy→0} [f(x₀, y₀ + Δy) − f(x₀, y₀)] / Δy = lim_{Δy→0} Δz_y/Δy

∂f(x₀, y₀)/∂y ≈ Δz_y/Δy    →    Δz_y ≈ [∂f(x₀, y₀)/∂y] · Δy
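These difference quotients can be evaluated directly; this sketch uses f(x, y) = 4 − x² − y² from a later slide at an arbitrary point (1, 1):

```python
# Approximating the partial derivatives with forward-difference quotients,
# holding the other variable constant. Exact values are -2*x0 and -2*y0.
def f(x, y):
    return 4 - x**2 - y**2

x0, y0, h = 1.0, 1.0, 1e-6
dfdx = (f(x0 + h, y0) - f(x0, y0)) / h   # ≈ ∂f/∂x = -2
dfdy = (f(x0, y0 + h) - f(x0, y0)) / h   # ≈ ∂f/∂y = -2
print(dfdx, dfdy)
```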



Extending the concept to multiple variables
z can change in more than one direction, therefore the total change is the sum of the changes due to each variable:

Δz ≈ Δz_x + Δz_y

Δz ≈ [∂f(x₀, y₀)/∂x] · Δx + [∂f(x₀, y₀)/∂y] · Δy

As both increments approach zero, we obtain the total derivative of a multi-variable function:

dz = [∂f(x₀, y₀)/∂x] dx + [∂f(x₀, y₀)/∂y] dy
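The sum of the two partial contributions can be checked numerically; a sketch with f(x, y) = 4 − x² − y² at an arbitrary point and small steps:

```python
# Checking Δz ≈ Δz_x + Δz_y = (∂f/∂x)Δx + (∂f/∂y)Δy for small steps.
def f(x, y):
    return 4 - x**2 - y**2

x0, y0 = 1.0, 0.5
dx, dy = 1e-3, 2e-3
true_dz = f(x0 + dx, y0 + dy) - f(x0, y0)
est_dz = (-2*x0) * dx + (-2*y0) * dy   # analytic partials of f
print(true_dz, est_dz)                 # agree to about dx^2 + dy^2
```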



Multiple variable differentiation example
z = 4 − x² − y²    →    f(x, y) = 4 − x² − y²

[Figure: the surface z = f(x, y), showing the slopes ∂z/∂x ≈ Δz/Δx and ∂z/∂y ≈ Δz/Δy at the point (x₀, y₀, z₀).]

dz = (∂f/∂x) dx + (∂f/∂y) dy

dz = [(∂f/∂x) x̂ + (∂f/∂y) ŷ] ∙ [dx x̂ + dy ŷ]

dz = ∇f ∙ dr

[Figure: the gradient ∇f at (x₀, y₀) in the xy plane, with components ∂f/∂x and ∂f/∂y.]

The change dz occurs in any direction dr, but the gradient specifies the direction of maximum change. In other words, it says where "to walk" in the xy plane.
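The "direction of maximum change" claim can be tested by stepping a fixed distance in many directions and seeing which one changes f the most; a sketch for f(x, y) = 4 − x² − y² at an arbitrary point:

```python
# For f(x, y) = 4 - x^2 - y^2 the gradient is ∇f = (-2x, -2y). Stepping a
# fixed distance r in 360 directions, the largest change in f occurs in
# (very nearly) the gradient direction.
import math

def f(x, y):
    return 4 - x**2 - y**2

x0, y0, r = 1.0, 0.5, 1e-3
gx, gy = -2*x0, -2*y0                      # gradient at (x0, y0)
best = max(
    (f(x0 + r*math.cos(t), y0 + r*math.sin(t)) - f(x0, y0), t)
    for t in [k * 2*math.pi / 360 for k in range(360)]
)
grad_dir = math.atan2(gy, gx)              # direction of maximum change
print(best[1], grad_dir % (2*math.pi))     # nearly equal angles
```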



The Jacobian
u = f₁(x, y, z)    du = (∂f₁/∂x) dx + (∂f₁/∂y) dy + (∂f₁/∂z) dz
v = f₂(x, y, z)    dv = (∂f₂/∂x) dx + (∂f₂/∂y) dy + (∂f₂/∂z) dz
w = f₃(x, y, z)    dw = (∂f₃/∂x) dx + (∂f₃/∂y) dy + (∂f₃/∂z) dz

These are the total derivatives of the multivariable functions. If this system is represented in matrix form, we get:

[du]   [∂f₁/∂x  ∂f₁/∂y  ∂f₁/∂z] [dx]           [Δu]   [∂f₁/∂x  ∂f₁/∂y  ∂f₁/∂z] [Δx]
[dv] = [∂f₂/∂x  ∂f₂/∂y  ∂f₂/∂z] [dy]    and    [Δv] ≈ [∂f₂/∂x  ∂f₂/∂y  ∂f₂/∂z] [Δy]
[dw]   [∂f₃/∂x  ∂f₃/∂y  ∂f₃/∂z] [dz]           [Δw]   [∂f₃/∂x  ∂f₃/∂y  ∂f₃/∂z] [Δz]

The Jacobian J(x, y, z) is the matrix of partial derivatives. It can be used to estimate the changes of the dependent variables, i.e., the functions, when the independent variables change.
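The estimate ΔF ≈ J·ΔX can be demonstrated with NumPy; the map below is an arbitrary example of mine, not from the slides:

```python
# Using the Jacobian of an arbitrary map (u, v, w) = F(x, y, z) to estimate
# the change in the outputs for a small change in the inputs.
import numpy as np

def F(p):
    x, y, z = p
    return np.array([x**2 + y, x*z, y + z**2])

def J(p):
    x, y, z = p
    return np.array([[2*x, 1.0, 0.0],    # row of partials of u
                     [z,   0.0, x  ],    # row of partials of v
                     [0.0, 1.0, 2*z]])   # row of partials of w

p0 = np.array([1.0, 2.0, 3.0])
dp = np.array([1e-3, -2e-3, 1e-3])
true_dF = F(p0 + dp) - F(p0)
est_dF = J(p0) @ dp                 # ΔF ≈ J(X) ΔX
print(true_dF, est_dF)
```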



The Newton-Raphson method
Single Variable

f′(x_k) ≈ Δy/Δx = (y_{k+1} − y_k)/(x_{k+1} − x_k)

Setting y_{k+1} = 0, because it is the solution sought in each iteration:

x_{k+1} ≈ x_k + (y_{k+1} − y_k)/f′(x_k)    →    x_{k+1} ≈ x_k − y_k/f′(x_k)

x_{k+1} = x_k − f(x_k)/f′(x_k)

Multiple Variable

The same idea applies with the Jacobian in place of f′:

[u_{k+1} − u_k]     [∂f₁/∂x  ∂f₁/∂y  ∂f₁/∂z] [x_{k+1} − x_k]
[v_{k+1} − v_k]  ≈  [∂f₂/∂x  ∂f₂/∂y  ∂f₂/∂z] [y_{k+1} − y_k]
[w_{k+1} − w_k]     [∂f₃/∂x  ∂f₃/∂y  ∂f₃/∂z] [z_{k+1} − z_k]

Solving for the next point:

[x_{k+1}]   [x_k]        [u_{k+1} − u_k]
[y_{k+1}] ≈ [y_k] + J⁻¹  [v_{k+1} − v_k]
[z_{k+1}]   [z_k]        [w_{k+1} − w_k]

Setting u_{k+1} = v_{k+1} = w_{k+1} = 0, because that is the solution sought in each iteration:

[x_{k+1}]   [x_k]        [u_k]
[y_{k+1}] ≈ [y_k] − J⁻¹  [v_k]
[z_{k+1}]   [z_k]        [w_k]

X_{k+1} = X_k − J(X_k)⁻¹ F(X_k)
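A minimal sketch of the multivariable update X_{k+1} = X_k − J(X_k)⁻¹ F(X_k); in practice the linear system J·s = F is solved instead of inverting J. The 2×2 demo system is an arbitrary choice of mine:

```python
# Generic multivariable Newton-Raphson iteration, stopping when the step
# norm falls below the tolerance.
import numpy as np

def newton_raphson(F, J, X0, tol=1e-9, max_iter=50):
    X = np.asarray(X0, dtype=float)
    for k in range(max_iter):
        step = np.linalg.solve(J(X), F(X))  # equivalent to J(X)^-1 F(X)
        X = X - step
        if np.linalg.norm(step) < tol:
            return X, k + 1
    return X, max_iter

# Usage with an arbitrary system: x^2 + y^2 = 4, x*y = 1
F = lambda p: np.array([p[0]**2 + p[1]**2 - 4, p[0]*p[1] - 1])
J = lambda p: np.array([[2*p[0], 2*p[1]], [p[1], p[0]]])
root, its = newton_raphson(F, J, [2.0, 0.0])
print(root, its)
```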



Example 1

f₁(x, y, z) = 3x² + 2y − 4
f₂(x, y, z) = 2x + 2y − 3          X₀ = [−1; 3; 1]
f₃(x, y, z) = 2x − 4 cos(z)

J(x, y, z) = [6x  2  0;  2  2  0;  2  0  4 sin(z)]        F(x, y, z) = [f₁; f₂; f₃]

X_{k+1} = X_k − J(X_k)⁻¹ F(X_k)

Iteration 1 (k = 0)

J(X₀) = [−6  2  0;  2  2  0;  2  0  3.365884]

J(X₀)⁻¹ = [−0.125  0.125  0;  0.125  0.375  0;  0.074275  −0.074275  0.297099]

F(X₀) = [5; 1; −4.161209]

X₁ = X₀ − J(X₀)⁻¹ F(X₀) = [−1; 3; 1] − J(X₀)⁻¹ [5; 1; −4.161209]    →    X₁ = [−0.5; 2; 1.939191]

We check how good the approximation is:

X₁ − X₀ = [−0.5; 2; 1.939191] − [−1; 3; 1] = [0.5; −1; 0.939191]

norm(X₁ − X₀) = √[(x₁−x₀)² + (y₁−y₀)² + (z₁−z₀)²] = 1.460165
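Iteration 1 can be reproduced with NumPy (the slide values are rounded to six decimals):

```python
# Reproducing iteration 1 of Example 1: X1 = X0 - J(X0)^-1 F(X0).
import math
import numpy as np

def F(p):
    x, y, z = p
    return np.array([3*x**2 + 2*y - 4,
                     2*x + 2*y - 3,
                     2*x - 4*math.cos(z)])

def J(p):
    x, y, z = p
    return np.array([[6*x, 2.0, 0.0],
                     [2.0, 2.0, 0.0],
                     [2.0, 0.0, 4*math.sin(z)]])

X0 = np.array([-1.0, 3.0, 1.0])
X1 = X0 - np.linalg.inv(J(X0)) @ F(X0)
print(X1)                                  # ≈ [-0.5, 2.0, 1.939191]
print(np.linalg.norm(X1 - X0))             # ≈ 1.460165
```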



Example 1 – cont.
Iteration 2 (k = 1)

J(X₁) = [−3  2  0;  2  2  0;  2  0  3.731627]

J(X₁)⁻¹ = [−0.2  0.2  0;  0.2  0.3  0;  0.107192  −0.107192  0.26798]

F(X₁) = [0.75; 0; 0.440473]

X₂ = X₁ − J(X₁)⁻¹ F(X₁) = [−0.5; 2; 1.939191] − J(X₁)⁻¹ [0.75; 0; 0.440473]    →    X₂ = [−0.35; 1.85; 1.740759]

We check how good the approximation is:

X₂ − X₁ = [−0.35; 1.85; 1.740759] − [−0.5; 2; 1.939191] = [0.15; −0.15; −0.198432]

norm(X₂ − X₁) = √[(x₂−x₁)² + (y₂−y₁)² + (z₂−z₁)²] = 0.290474



Example 1 – cont.
Iteration 3 (k = 2)

J(X₂) = [−2.1  2  0;  2  2  0;  2  0  3.942364]

J(X₂)⁻¹ = [−0.243902  0.243902  0;  0.243902  0.256098  0;  0.123734  −0.123734  0.253655]

F(X₂) = [0.0675; 0; −0.023418]

X₃ = X₂ − J(X₂)⁻¹ F(X₂) = [−0.35; 1.85; 1.740759] − J(X₂)⁻¹ [0.0675; 0; −0.023418]    →    X₃ = [−0.333537; 1.833537; 1.738347]

We check how good the approximation is:

X₃ − X₂ = [−0.333537; 1.833537; 1.738347] − [−0.35; 1.85; 1.740759] = [0.016463; −0.016463; −0.002412]

norm(X₃ − X₂) = √[(x₃−x₂)² + (y₃−y₂)² + (z₃−z₂)²] = 0.023407



Example 1 – cont.
Iteration 4 (k = 3)

J(X₃) = [−2.001222  2  0;  2  2  0;  2  0  3.943985]

J(X₃)⁻¹ = [−0.249924  0.249924  0;  0.249924  0.250076  0;  0.126737  −0.126737  0.253551]

F(X₃) = [0.000815; 0; −2.700707 × 10⁻⁶]

X₄ = X₃ − J(X₃)⁻¹ F(X₃) = [−0.333537; 1.833537; 1.738347] − J(X₃)⁻¹ [0.000815; 0; −2.700707 × 10⁻⁶]    →    X₄ = [−0.333333; 1.833333; 1.738244]

We check how good the approximation is:

X₄ − X₃ = [−0.333333; 1.833333; 1.738244] − [−0.333537; 1.833537; 1.738347] = [0.000204; −0.000204; −0.000103]

norm(X₄ − X₃) = √[(x₄−x₃)² + (y₄−y₃)² + (z₄−z₃)²] = 0.000306



Example 1 – cont.

The solution with a tolerance of 1 × 10−9 is found after 6 iterations
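The full run can be sketched in a few lines; the exact root is x = −1/3, y = 11/6, z = arccos(−1/6), which follows from f₂ (y = 3/2 − x), f₁ (3x² − 2x − 1 = 0) and f₃ (cos z = x/2):

```python
# Iterating Example 1 until the step norm drops below 1e-9.
import math
import numpy as np

def F(p):
    x, y, z = p
    return np.array([3*x**2 + 2*y - 4, 2*x + 2*y - 3, 2*x - 4*math.cos(z)])

def J(p):
    x, y, z = p
    return np.array([[6*x, 2, 0], [2, 2, 0], [2, 0, 4*math.sin(z)]])

X = np.array([-1.0, 3.0, 1.0])
its = 0
while True:
    step = np.linalg.solve(J(X), F(X))
    X = X - step
    its += 1
    if np.linalg.norm(step) < 1e-9:
        break
print(X, its)   # converges to about [-1/3, 11/6, arccos(-1/6)] in 6 iterations
```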



Example 2
f₁(x, y, z) = 4x − y + z − 7
f₂(x, y, z) = 4x − 8y + z + 21          X₀ = [0; 0; 0]
f₃(x, y, z) = −2x + y + 5z − 15

J(x, y, z) = [4  −1  1;  4  −8  1;  −2  1  5]        F(x, y, z) = [f₁; f₂; f₃]

X_{k+1} = X_k − J(X_k)⁻¹ F(X_k)

Iteration 1 (k = 0)

J(X₀) = [4  −1  1;  4  −8  1;  −2  1  5]

J(X₀)⁻¹ = [0.266234  −0.038961  −0.045455;  0.142857  −0.142857  0;  0.077922  −0.012987  0.181818]

F(X₀) = [−7; 21; −15]

X₁ = X₀ − J(X₀)⁻¹ F(X₀) = [0; 0; 0] − J(X₀)⁻¹ [−7; 21; −15]    →    X₁ = [2; 4; 3]

We check how good the approximation is:

X₁ − X₀ = [2; 4; 3] − [0; 0; 0] = [2; 4; 3]

norm(X₁ − X₀) = √[(x₁−x₀)² + (y₁−y₀)² + (z₁−z₀)²] = 5.385165



Example 2 – cont.
Notice that:

F(X₁) = [f₁(2, 4, 3); f₂(2, 4, 3); f₃(2, 4, 3)] = [0; 0; 0]

The solution is found with only 1 iteration!

Starting with the same vector and with a tolerance of 1 × 10⁻⁹, the Jacobi method requires 16 iterations and the Gauss–Seidel method requires 10.

It seems that the Newton–Raphson method is far faster. Why not use it for linear systems as well? Any drawback?
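The one-iteration behaviour can be confirmed with a short sketch of Example 2 in matrix form, F(X) = AX − b:

```python
# Newton-Raphson on the linear system of Example 2 lands on the exact
# solution in a single iteration, since J(X) = A everywhere.
import numpy as np

A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
b = np.array([7.0, -21.0, 15.0])

F = lambda X: A @ X - b               # residual F(X) = AX - b
X0 = np.zeros(3)
X1 = X0 - np.linalg.solve(A, F(X0))   # one Newton step with J = A
print(X1, F(X1))                      # solution [2, 4, 3] with zero residual
```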



Newton-Raphson for linear systems
AX = b:

[a₁₁  a₁₂  …  a₁ₘ] [x₁]   [b₁]
[a₂₁  a₂₂  …  a₂ₘ] [x₂]   [b₂]
[ ⋮    ⋮       ⋮ ] [⋮ ] = [⋮ ]
[aₘ₁  aₘ₂  …  aₘₘ] [xₘ]   [bₘ]

Defining the residual functions:

[f₁(x₁, x₂, …, xₘ)]   [a₁₁  a₁₂  …  a₁ₘ] [x₁]   [b₁]
[f₂(x₁, x₂, …, xₘ)]   [a₂₁  a₂₂  …  a₂ₘ] [x₂]   [b₂]
[        ⋮        ] = [ ⋮    ⋮       ⋮ ] [⋮ ] − [⋮ ]    →    F(X) = AX − b
[fₘ(x₁, x₂, …, xₘ)]   [aₘ₁  aₘ₂  …  aₘₘ] [xₘ]   [bₘ]

J(X) = A    →    J(X)⁻¹ = A⁻¹

X_{k+1} = X_k − J(X_k)⁻¹ F(X_k) = X_k − A⁻¹(AX_k − b) = X_k − A⁻¹AX_k + A⁻¹b

X_{k+1} = A⁻¹b

If the Newton–Raphson method is used to solve linear systems, the result is always found in the first iteration. However, there is no computational gain when using this algorithm for linear equations, because the inversion of the coefficient matrix is still required.
