f(x) = 4x:  f(2) = 8,  f(3) = 12
1) f(5 · 2) = 40 and 5 · f(2) = 40
2) f(2 + 3) = 20 and f(2) + f(3) = 20
→ f(x) = 4x satisfies the superposition property.

f(x) = 4x²:  f(2) = 16,  f(3) = 36
1) f(5 · 2) = 400 but 5 · f(2) = 80
2) f(2 + 3) = 100 but f(2) + f(3) = 52
→ f(x) = 4x² does not satisfy the superposition property.
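The two checks above can be reproduced numerically; this is a sketch, and the function names `f_lin` and `f_quad` are illustrative:

```python
# Checking the superposition property numerically for f(x) = 4x
# versus f(x) = 4x**2. Function names are illustrative.

def f_lin(x):
    return 4 * x

def f_quad(x):
    return 4 * x**2

# Homogeneity: f(a*x) should equal a*f(x) for a linear function.
print(f_lin(5 * 2), 5 * f_lin(2))      # 40 40   -> holds
print(f_quad(5 * 2), 5 * f_quad(2))    # 400 80  -> fails

# Additivity: f(x1 + x2) should equal f(x1) + f(x2).
print(f_lin(2 + 3), f_lin(2) + f_lin(3))     # 20 20   -> holds
print(f_quad(2 + 3), f_quad(2) + f_quad(3))  # 100 52  -> fails
```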
f(x, y, z)

f′(x₀) ≈ Δy / Δx,  as long as Δx is small

The derivative of a function can be used to estimate the change in the dependent variable y when the independent variable x changes:

Δy ≈ f′(x₀) Δx
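A minimal sketch of this estimate, assuming the illustrative choice f(x) = x² near x₀ = 3 (neither is from the slides):

```python
# Estimating the change Δy ≈ f'(x0) Δx for f(x) = x**2 near x0 = 3
# (the function and the point are illustrative assumptions).

def f(x):
    return x**2

def fprime(x):
    return 2 * x   # derivative of x**2

x0, dx = 3.0, 0.01
dy_est = fprime(x0) * dx        # linear estimate: 0.06
dy_exact = f(x0 + dx) - f(x0)   # true change:     0.0601
print(dy_est, dy_exact)
```

The estimate and the true change agree to within Δx², which is why the approximation only holds for small Δx.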
Δz_x ≈ [∂f(x₀, y₀)/∂x] Δx

Δz_y ≈ [∂f(x₀, y₀)/∂y] Δy

Δz ≈ Δz_x + Δz_y

Δz ≈ [∂f(x₀, y₀)/∂x] Δx + [∂f(x₀, y₀)/∂y] Δy
As both changes become infinitesimally small, we obtain the total differential of a multivariable function
dz = [∂f(x₀, y₀)/∂x] dx + [∂f(x₀, y₀)/∂y] dy
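A short sketch of the two-variable approximation, assuming the illustrative function f(x, y) = x²y at (x₀, y₀) = (1, 2):

```python
# Total-differential approximation for z = f(x, y);
# f(x, y) = x**2 * y and the point (1, 2) are illustrative choices.

def f(x, y):
    return x**2 * y

def fx(x, y):   # partial derivative with respect to x
    return 2 * x * y

def fy(x, y):   # partial derivative with respect to y
    return x**2

x0, y0 = 1.0, 2.0
dx, dy = 0.01, -0.02

dz_approx = fx(x0, y0) * dx + fy(x0, y0) * dy   # 0.02
dz_exact = f(x0 + dx, y0 + dy) - f(x0, y0)      # ~0.019798
print(dz_approx, dz_exact)
```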
[Figure: surface z = f(x, y) near the point (x₀, y₀, z₀), illustrating the slope ∂z/∂x and the increments Δy and Δz]
dz = (∂f/∂x) dx + (∂f/∂y) dy

Written as a dot product of two vectors:

dz = (∂f/∂x x̂ + ∂f/∂y ŷ) ∙ (dx x̂ + dy ŷ)

[Figure: the gradient ∇f at the point (x₀, y₀), with components ∂f/∂x and ∂f/∂y]
dz = ∇f ∙ dr

The change dz occurs along any direction dr, but the gradient specifies the direction of maximum change. In other words, it tells us where "to walk" in the xy-plane.
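This can be checked with a small sketch; the function f(x, y) = x² + 3y² and the point (1, 1) are illustrative assumptions:

```python
import math

# The gradient of f(x, y) = x**2 + 3*y**2 (an illustrative function)
# gives the direction of maximum increase, and dz = grad(f) . dr
# approximates the change of f along a small step dr.

def grad_f(x, y):
    return (2 * x, 6 * y)

gx, gy = grad_f(1.0, 1.0)           # gradient at (1, 1): (2, 6)
norm = math.hypot(gx, gy)
direction = (gx / norm, gy / norm)  # unit vector of steepest ascent
print(direction)

dx, dy = 0.001, 0.002
dz = gx * dx + gy * dy              # dz = grad(f) . dr = 0.014
print(dz)                           # close to f(1.001, 1.002) - f(1, 1)
```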
f₁(x, y, z) = 3x² + 2y − 4
f₂(x, y, z) = 2x + 2y − 3
f₃(x, y, z) = 2x − 4 cos(z)

F(x, y, z) = (f₁, f₂, f₃)ᵀ

J(x, y, z) =
⎡ 6x  2  0        ⎤
⎢ 2   2  0        ⎥
⎣ 2   0  4 sin(z) ⎦

X₀ = (−1, 3, 1)ᵀ

X_{k+1} = X_k − J(X_k)⁻¹ F(X_k)
Iteration 1 (k = 0)

J(X₀) =
⎡ −6  2  0        ⎤
⎢  2  2  0        ⎥
⎣  2  0  3.365884 ⎦

J(X₀)⁻¹ =
⎡ −0.125     0.125     0        ⎤
⎢  0.125     0.375     0        ⎥
⎣  0.074275 −0.074275  0.297099 ⎦

F(X₀) = (5, 1, −4.161209)ᵀ

X₁ = X₀ − J(X₀)⁻¹ F(X₀)

X₂ = X₁ − J(X₁)⁻¹ F(X₁),  norm(X₂ − X₁) = 0.290474

X₃ = X₂ − J(X₂)⁻¹ F(X₂),  norm(X₃ − X₂) = 0.023407

F(X₃) = (0.000815, 0, −2.700707 × 10⁻⁶)ᵀ

X₄ = X₃ − J(X₃)⁻¹ F(X₃),  norm(X₄ − X₃) = 0.000306
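The whole iteration can be reproduced with a short script. This sketch solves J ΔX = F at each step instead of forming the inverse explicitly; the printed step norms can be compared with the values above:

```python
import numpy as np

# Newton-Raphson for the system from the slides:
#   f1 = 3x^2 + 2y - 4,  f2 = 2x + 2y - 3,  f3 = 2x - 4 cos(z)
# starting from X0 = (-1, 3, 1)^T.

def F(X):
    x, y, z = X
    return np.array([3 * x**2 + 2 * y - 4,
                     2 * x + 2 * y - 3,
                     2 * x - 4 * np.cos(z)])

def J(X):
    x, y, z = X
    return np.array([[6 * x, 2.0, 0.0],
                     [2.0,   2.0, 0.0],
                     [2.0,   0.0, 4 * np.sin(z)]])

X = np.array([-1.0, 3.0, 1.0])
for k in range(8):
    step = np.linalg.solve(J(X), F(X))  # step = J(Xk)^-1 F(Xk)
    X = X - step
    print(k, np.linalg.norm(step))      # norm(X_{k+1} - X_k)

print(X)  # converged solution
```

Solving the linear system with `np.linalg.solve` is numerically preferable to computing J⁻¹, but it is algebraically equivalent to the update formula on the slide.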
Iteration 1 (k = 0), linear system:

J(X₀) =
⎡  4  −1  1 ⎤
⎢  4  −8  1 ⎥
⎣ −2   1  5 ⎦

J(X₀)⁻¹ =
⎡ 0.266234  −0.038961  −0.045455 ⎤
⎢ 0.142857  −0.142857   0        ⎥
⎣ 0.077922  −0.012987   0.181818 ⎦

F(X₀) = (−7, 21, −15)ᵀ

X₁ = X₀ − J(X₀)⁻¹ F(X₀)
Starting from the same vector and with a tolerance of 1 × 10⁻⁹, the Jacobi method requires 16 iterations and the Gauss-Seidel method requires 10.
It seems that the Newton-Raphson method is much faster. Why not use it for linear systems as well? Any drawback?
J(X) = A  →  J(X)⁻¹ = A⁻¹

X_{k+1} = X_k − J(X_k)⁻¹ F(X_k) = X_k − A⁻¹(A X_k − b) = X_k − A⁻¹A X_k + A⁻¹b

X_{k+1} = A⁻¹b
If the Newton-Raphson method is used to solve a linear system, the solution is always found in the first iteration. However, there is no computational gain in using this algorithm for linear equations, because the inversion of the coefficient matrix is not avoided.
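A quick sketch of the one-step convergence. Note two assumptions here: X₀ = 0 and b = (7, −21, 15)ᵀ, which are consistent with F(X₀) = A X₀ − b = (−7, 21, −15)ᵀ above but are not stated on the slide:

```python
import numpy as np

# Newton-Raphson applied to the linear system A X = b.
# A is the Jacobian from the slide; X0 = 0 and b = (7, -21, 15) are
# assumptions consistent with F(X0) = A X0 - b = (-7, 21, -15).

A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
b = np.array([7.0, -21.0, 15.0])

X = np.zeros(3)                         # X0
X = X - np.linalg.solve(A, A @ X - b)   # one Newton step: X1 = A^-1 b
print(X, np.linalg.norm(A @ X - b))     # exact solution, zero residual
```

A single step already solves A X = b, but it costs as much as solving the system directly, which is exactly the drawback discussed above.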