Final Review
Fall 2021
Instructor: Shaofan Li
Agenda
Post-Midterm Topics
• Root Finding
• Linear Systems
• Regression
• Interpolation
• Gradient Descent
• Numerical Differentiation
• Numerical Integration
• Ordinary Differential Equations
Root Finding
Bisection Method
Numerical Root Finding:
• Converges slowly!
– Error is halved at each step (linear convergence)
• Doesn't work if the function 'touches' the x-axis without crossing it (no sign change)
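The bisection steps above can be sketched as follows (a minimal Python illustration rather than the course's MATLAB; the function name `bisect` is my own):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: requires a sign change, f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)          # midpoint of current bracket
        fm = f(m)
        if fm == 0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:            # root lies in [a, m]
            b, fb = m, fm
        else:                      # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# Find sqrt(2) as the root of x^2 - 2 on [0, 2]
root = bisect(lambda x: x**2 - 2.0, 0.0, 2.0)
```

Note how the bracket width, and hence the error bound, is halved every iteration.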
Newton-Raphson Method
xₙ₊₁ = xₙ − f(xₙ) / f′(xₙ)
Algorithm:
Limitations:
• Need to start with a good initial guess
• If f′(x) happens to be close to zero, the next step can land far from the root
• Can converge to a root we're not interested in
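The Newton-Raphson update can be sketched as follows (Python rather than the course's MATLAB; `newton` and the guard threshold are my own choices):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if abs(dfx) < 1e-14:       # f'(x) near zero: the step would blow up
            raise RuntimeError("derivative too close to zero")
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:        # converged when the update is tiny
            return x
    return x

# Find sqrt(2) from the initial guess x0 = 1
root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

A good initial guess matters: starting near a point where f′ ≈ 0 triggers the guard above instead of silently diverging.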
Example
Linear Systems
Linear Systems: Solutions
A linear system A(m×n) x(n×1) = y(m×1) can behave in three different ways:
1) Unique Solution: rank([A y]) = rank(A) = n
2) No Solution (Over-constrained): rank([A y]) > rank(A)
3) Infinitely Many Solutions (Underconstrained): rank([A y]) = rank(A) < n
Linear Systems: Solutions
1) Unique Solution: x = A\y
3) No Solution (Over-constrained): least-squares solution x = (AᵀA)⁻¹ Aᵀ y
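Both cases can be sketched in Python with NumPy (the course uses MATLAB's `A\y`; `np.linalg.solve` and `np.linalg.lstsq` play the same roles here, and the matrices are illustrative):

```python
import numpy as np

# Square, full-rank system -> unique solution (MATLAB: x = A\y)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([3.0, 5.0])
x_unique = np.linalg.solve(A, y)

# Over-constrained system (m > n): least-squares solution
# equivalent to x = (A^T A)^-1 A^T y
A2 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
y2 = np.array([1.0, 2.0, 3.1])
x_ls, *_ = np.linalg.lstsq(A2, y2, rcond=None)
```

`lstsq` solves the normal equations in a numerically stable way rather than forming (AᵀA)⁻¹ explicitly.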
Example
Regression
Least Squares Regression: Simple Linear Regression
Optimization problem: find the parameters (β) that minimize the total squared error (TSE).

Least Squares Regression: Linear Basis Functions
ŷ(x) = β₁ + β₂ x
In general, ŷ(x) = Σⱼ βⱼ fⱼ(x), where the fⱼ(x) are basis functions and the βⱼ are model parameters,
and β = (AᵀA)⁻¹ Aᵀ y, where (AᵀA)⁻¹ Aᵀ is called the pseudoinverse of A
Least Squares Regression
Aᵢⱼ = fⱼ(xᵢ)

Simple linear fit: ŷ(x) = β₁ + β₂ x
A = [1 x₁; 1 x₂; 1 x₃; … ; 1 xₙ]

Polynomial fit: ŷ(x) = β₁ + β₂ x + β₃ x² + …
A = [1 x₁ x₁² …; 1 x₂ x₂² …; … ; 1 xₙ xₙ² …]
In MATLAB:
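The MATLAB snippet from the slide did not survive extraction; a hedged Python equivalent of the same design-matrix fit (data and variable names are illustrative):

```python
import numpy as np

# Noise-free data from y = 1 + 2x, so the fit should recover beta = [1, 2]
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x

# Design matrix A_ij = f_j(x_i) with basis {1, x}
A = np.column_stack([np.ones_like(x), x])

# beta = pinv(A) @ y : the pseudoinverse solves the normal equations
beta = np.linalg.pinv(A) @ y
```

For a polynomial fit, simply append columns x², x³, … to the design matrix.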
Example
Interpolation
Interpolation
If we substitute Eq. (1) into Eq. (2), we get a system of linear equations in the coefficients.
Polynomial interpolation: the polynomial passes exactly through all n data points (degree n − 1)
Polynomial regression: the polynomial has fewer parameters than data points and minimizes the squared error instead
Lagrangian.m
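Lagrangian.m is referenced but not included; a minimal Python sketch of Lagrange interpolation (the function name `lagrange_eval` is my own):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Basis polynomial L_i(x): 1 at xs[i], 0 at every other node
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * L
    return total

# Interpolate y = x^2 through 3 points; a degree-2 polynomial reproduces it exactly
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
val = lagrange_eval(xs, ys, 1.5)
```

With n nodes this builds the unique degree n − 1 polynomial through all the points, as described above.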
Example
Gradient Descent
Gradient Descent
Task: We want to optimize (find the minimum of) the function f(x).
Start from an initial guess and move in the opposite direction of the gradient: xₙ₊₁ = xₙ − α ∇f(xₙ)
Convergence: stop when the gradient (or the step size) falls below a tolerance.
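The update rule can be sketched as follows (Python rather than MATLAB; the learning rate α = 0.1 and stopping rule are illustrative choices):

```python
def gradient_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10000):
    """x_{n+1} = x_n - alpha * grad(x_n); stop when the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = alpha * grad(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); minimum at x = 3
xmin = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Too large an α can overshoot and diverge; too small an α converges very slowly.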
Numerical Differentiation
(Finite Difference Methods)
Finite Difference Approximations
Forward Difference:
f′(xᵢ) ≈ (f(xᵢ₊₁) − f(xᵢ)) / Δx,   Error ~ O(Δx)
Finite Difference Approximations
Backward Difference:
f′(xᵢ) ≈ (f(xᵢ) − f(xᵢ₋₁)) / Δx,   Error ~ O(Δx)
Finite Difference Approximations
Central Difference:
f′(xᵢ) ≈ (f(xᵢ₊₁) − f(xᵢ₋₁)) / (2Δx),   Error ~ O(Δx²)
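The three stencils can be compared numerically (a Python sketch; the test function sin(x) at x = 1 is my own choice):

```python
import math

def forward_diff(f, x, dx):
    return (f(x + dx) - f(x)) / dx             # O(dx)

def backward_diff(f, x, dx):
    return (f(x) - f(x - dx)) / dx             # O(dx)

def central_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)  # O(dx^2)

# d/dx sin(x) at x = 1 is cos(1); central difference is markedly more accurate
exact = math.cos(1.0)
err_fwd = abs(forward_diff(math.sin, 1.0, 1e-3) - exact)
err_cen = abs(central_diff(math.sin, 1.0, 1e-3) - exact)
```

Halving Δx halves the forward/backward error but quarters the central-difference error, matching the stated orders.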
Example
Numerical Integration
Riemann Sum
Approximate the area under a curve by summing a series of rectangles
∫ₐᵇ f(x) dx ≈ h Σᵢ f(yᵢ)
where N = Number of Intervals, h = (b − a)/N is the Step Size, and yᵢ = (xᵢ + xᵢ₊₁)/2
Trapezoidal Rule
∫ₐᵇ f(x) dx ≈ h [ f(x₀)/2 + f(x₁) + ⋯ + f(x_{N−1}) + f(x_N)/2 ]
where N = Number of Intervals, h = (b − a)/N is the Step Size, and xᵢ₊₁ = xᵢ + h
Simpson's Rule
Approximate the area by fitting quadratic polynomials through 3 consecutive data points (x₀, x₁, x₂, spaced h apart):
∫ f(x) dx over [x₀, x₂] ≈ (h/3) [ f(x₀) + 4 f(x₁) + f(x₂) ]
Monte Carlo Integration
Use random sampling to approximate an integral:
∫ₐᵇ f(x) dx ≈ ((b − a)/N) Σᵢ₌₁ᴺ f(xᵢ*)
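A minimal Monte Carlo sketch in Python (the seeded generator and sample count are illustrative):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] from n uniform random samples."""
    rng = random.Random(seed)                 # seeded for reproducibility
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1] is 1/3; the error shrinks like 1/sqrt(N)
est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

The slow 1/√N convergence is the price paid for working in any number of dimensions.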
Example: Right Riemann vs. Trapezoidal
Simpson’s Rule
To approximate the integral over (a, b), divide the interval into segments of equal width.
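The composite trapezoidal and Simpson rules above can be sketched together (Python rather than MATLAB; the test integrand x³ is my own choice):

```python
def trapezoid(f, a, b, N):
    """Composite trapezoidal rule on N equal segments."""
    h = (b - a) / N
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, N))
    return h * s

def simpson(f, a, b, N):
    """Composite Simpson's rule; N must be even."""
    if N % 2:
        raise ValueError("N must be even")
    h = (b - a) / N
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, N, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, N, 2))  # interior even nodes
    return h * s / 3

# Integral of x^3 over [0, 1] is 0.25; Simpson's rule is exact for cubics
t = trapezoid(lambda x: x**3, 0.0, 1.0, 100)
s = simpson(lambda x: x**3, 0.0, 1.0, 100)
```

Trapezoid error scales as O(h²), Simpson as O(h⁴), so Simpson dominates once the integrand is smooth.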
Numerical Methods for
Ordinary Differential Equations
Euler's Method (Explicit)
dy/dt = f(t, y),   t ∈ [t₀, tₙ],   y(t₀) = y₀
yᵢ₊₁ = yᵢ + h f(tᵢ, yᵢ)
Runge-Kutta Method (RK4)
Get a higher-accuracy method by evaluating the derivative several times at different locations:
yᵢ₊₁ = yᵢ + (h/6)(k₁ + 2k₂ + 2k₃ + k₄)
where the "slopes" are
k₁ = f(tᵢ, yᵢ)
k₂ = f(tᵢ + h/2, yᵢ + (h/2) k₁)
k₃ = f(tᵢ + h/2, yᵢ + (h/2) k₂)
k₄ = f(tᵢ + h, yᵢ + h k₃)
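Explicit Euler and RK4 can be sketched side by side (Python rather than MATLAB; the test problem dy/dt = y is my own choice):

```python
import math

def euler(f, t0, y0, h, n):
    """Explicit Euler: y_{i+1} = y_i + h f(t_i, y_i)."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, t0, y0, h, n):
    """Classical 4th-order Runge-Kutta."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h,     y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# dy/dt = y, y(0) = 1  ->  exact solution y(1) = e
y_euler = euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)
y_rk4 = rk4(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

With the same step size, RK4 costs four derivative evaluations per step but is dramatically more accurate than Euler's single evaluation.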
Other Methods
Second-Order Runge-Kutta Method (RK2):
k₁ = f(tᵢ, yᵢ)
k₂ = f(tᵢ + h/2, yᵢ + (h/2) k₁)
yᵢ₊₁ = yᵢ + h k₂

Central-difference (leapfrog) method:
f(tᵢ, yᵢ) ≈ (yᵢ₊₁ − yᵢ₋₁) / (2h)   ⟹   yᵢ₊₁ = yᵢ₋₁ + 2h f(tᵢ, yᵢ)
Order of Accuracy
Accuracy: how fast a scheme approaches the exact solution as the step size decreases.
Explicit Euler is first order, O(h); RK2 is second order, O(h²); RK4 is fourth order, O(h⁴).
Example
(II) Runge-Kutta Method
Second-Order Runge-Kutta (RK2) Method: Midpoint Rule
We use the midpoint rule to integrate the equation dy/dt = f(t, y).
Example
Meshgrid.m
These are the topics covered before the midterm, which will be tested again on the final exam.
Good luck on your finals!