JUNE, 2019
ARBA MINCH, ETHIOPIA
APPROVAL SHEET
This is to certify that the project entitled "LEAST SQUARE APPROXIMATION", submitted to the Department of Mathematics, Arba Minch University, is a record of an original project carried out by BEZAWIT TSIGAYE RNS/249/09 and has been submitted for the B.Sc. degree.
The assistance and help received during the course of this investigation have been duly acknowledged. Therefore, I recommend that it be accepted as fulfilling the project requirements.
Prepared by:
___________________ ____________________ ________________
Name of student signature Date
Approved by:
___________________ ____________________ ________________
Name of Advisor Signature Date
ACKNOWLEDGMENT
Next, I would like to express sincere appreciation to my project advisor Zeleke Markos (M.Sc.), for sharing his substantial experience so that this project could be done in the expected way, and for his wonderful personality throughout, giving his time without any reservation.
I would also like to thank the Arba Minch University Mathematics Department staff members who shared their knowledge and cooperated with me in carrying out this project paper.
Table of Contents
ACKNOWLEDGMENT
CHAPTER-ONE
1. Introduction
1.1 Background of the study
1.2 Objectives of the study
1.2.1 General objective
1.2.2 Specific objectives
Chapter 2
2 Least square approximation
2.1 Discrete least square approximation
2.2 Linear least square approximation
2.3 Non-linear least square method
2.3.1 Polynomial least square method
2.3.2 Exponential least square method
2.3.3 Power function
2.3.3.1 Saturation function
2.3.4 Curve fitting
2.4 Continuous least square method
2.5 Weighted least square approximation
2.5.1 Linear weighted least square approximation
2.5.2 Weighted least square for continuous functions
Chapter 3
Conclusion
References
CHAPTER-ONE
1. Introduction
Least squares approximation is an approach for fitting a mathematical or statistical model to data in cases where the idealized value provided by the model differs from the observed value at each data point.
It is also the standard approach to approximately solving an overdetermined system of linear equations, where the best approximation is defined as the one that minimizes the discrepancy between the observed and the corresponding modeled values.
The form of least squares you are most likely to see requires that

E = \sum_{i=0}^{m} [ y_i - (a_0 + a_1 x_i + \dots + a_n x_i^n) ]^2

is minimized.
One criterion would be to minimize the sum of the individual errors. This criterion, however, does not offer a good measure of how well the line fits the data: it allows positive and negative individual errors, even very large ones, to cancel out and yield a zero sum. Another strategy is to minimize the sum of the absolute values of the individual errors:
E = \sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} | y_i - (a_0 + a_1 x_i) |
As a result, the individual errors can no longer cancel out and the total error is always positive. This criterion, however, is not able to uniquely determine the coefficients that describe the best line fit, because for a given set of data several lines can have the same total error.
The best strategy is to minimize the sum of the squares of the individual errors:
E = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} [ y_i - (a_0 + a_1 x_i) ]^2
When f(x) is linear, the least squares problem is the problem of finding constants a_0 and a_1 such that the function p_1(x) = a_0 + a_1 x best fits the data.
The error E(a_0, a_1) we need to minimize is

E(a_0, a_1) = \sum_{i=0}^{n} [ (a_0 + a_1 x_i) - f_i ]^2

To minimize this function of a_0 and a_1, we must set its partial derivatives to zero:
\frac{\partial E}{\partial a_0} = 0 \quad \text{and} \quad \frac{\partial E}{\partial a_1} = 0

\frac{\partial E}{\partial a_0} = 2 \sum_{i=0}^{n} [ (a_0 + a_1 x_i) - f_i ] = 0

\frac{\partial E}{\partial a_1} = 2 \sum_{i=0}^{n} x_i [ (a_0 + a_1 x_i) - f_i ] = 0
Both of these partial derivatives must equal zero, which yields the system of linear equations (the normal equations):
a_0 (n+1) + a_1 \sum_{i=0}^{n} x_i = \sum_{i=0}^{n} f_i

a_0 \sum_{i=0}^{n} x_i + a_1 \sum_{i=0}^{n} x_i^2 = \sum_{i=0}^{n} x_i f_i

or, in matrix form,

\begin{bmatrix} n+1 & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum f_i \\ \sum x_i f_i \end{bmatrix}
Solving this system gives

a_0 = \frac{ \left( \sum x_i^2 \right) \left( \sum y_i \right) - \left( \sum x_i \right) \left( \sum x_i y_i \right) }{ n \sum x_i^2 - \left( \sum x_i \right)^2 }

a_1 = \frac{ n \sum x_i y_i - \left( \sum x_i \right) \left( \sum y_i \right) }{ n \sum x_i^2 - \left( \sum x_i \right)^2 }

where n denotes the number of data points and the sums run over all of them.
EXAMPLE 1
Using the method of least squares, find the linear function that best fits the following data:

x: 1, 1.5, 2, 2.5, 3, 3.5, 4
y: 25, 31, 27, 28, 36, 35, 32
The solution is (with n = 7 data points):

\sum y_i = 25 + 31 + 27 + 28 + 36 + 35 + 32 = 214
\sum x_i = 1 + 1.5 + 2 + 2.5 + 3 + 3.5 + 4 = 17.5
\sum x_i^2 = 1 + 2.25 + 4 + 6.25 + 9 + 12.25 + 16 = 50.75
\sum x_i y_i = (1)(25) + (1.5)(31) + (2)(27) + (2.5)(28) + (3)(36) + (3.5)(35) + (4)(32) = 554
Now:

a_0 = \frac{ \left( \sum x_i^2 \right) \left( \sum y_i \right) - \left( \sum x_i \right) \left( \sum x_i y_i \right) }{ n \sum x_i^2 - \left( \sum x_i \right)^2 } = \frac{ 50.75 \cdot 214 - 17.5 \cdot 554 }{ 7 \cdot 50.75 - (17.5)^2 } = \frac{1165.5}{49} = 23.7857

a_1 = \frac{ n \sum x_i y_i - \left( \sum x_i \right) \left( \sum y_i \right) }{ n \sum x_i^2 - \left( \sum x_i \right)^2 } = \frac{ 7 \cdot 554 - 17.5 \cdot 214 }{ 7 \cdot 50.75 - (17.5)^2 } = \frac{133}{49} = 2.7143

Hence the best-fit line is y = 23.7857 + 2.7143x.
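As a check, the closed-form solution above takes only a few lines of code. This is an illustrative sketch (the function name `linear_fit` is not from the text):

```python
# Least-squares line via the closed-form normal-equation solution
# derived above. Illustrative sketch; `linear_fit` is a made-up name.

def linear_fit(xs, ys):
    """Return (a0, a1) minimizing the sum of (y_i - (a0 + a1*x_i))^2."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx * sx                      # common denominator
    a0 = (sxx * sy - sx * sxy) / d
    a1 = (n * sxy - sx * sy) / d
    return a0, a1

xs = [1, 1.5, 2, 2.5, 3, 3.5, 4]
ys = [25, 31, 27, 28, 36, 35, 32]
a0, a1 = linear_fit(xs, ys)                    # a0 ≈ 23.7857, a1 ≈ 2.7143
```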
Similarly, for the quadratic polynomial p_2(x) = a_0 + a_1 x + a_2 x^2, we set the partial derivatives to zero; for example,

\frac{\partial}{\partial a_2} E(a_0, a_1, a_2) = 2 \sum_{i=0}^{n} x_i^2 [ (a_0 + a_1 x_i + a_2 x_i^2) - f_i ] = 0

The normal equations are

a_0 (n+1) + a_1 \sum x_i + a_2 \sum x_i^2 = \sum f_i
a_0 \sum x_i + a_1 \sum x_i^2 + a_2 \sum x_i^3 = \sum x_i f_i
a_0 \sum x_i^2 + a_1 \sum x_i^3 + a_2 \sum x_i^4 = \sum x_i^2 f_i

or, in matrix form,

\begin{bmatrix} n+1 & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum f_i \\ \sum x_i f_i \\ \sum x_i^2 f_i \end{bmatrix}
EXAMPLE 2
Find the quadratic polynomial that best fits the following table of values:

x: 0, 0.5, 1, 1.5, 2, 2.5

Substituting the sums \sum x_i = 7.5, \sum x_i^2 = 13.75, \sum x_i^3 = 28.125, \sum x_i^4 = 61.1875 (with n = 6 data points) together with the right-hand-side sums into the normal equations gives

\begin{bmatrix} 6 & 7.5 & 13.75 \\ 7.5 & 13.75 & 28.125 \\ 13.75 & 28.125 & 61.1875 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 1.42 \\ 2.285 \\ 4.3375 \end{bmatrix}
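A 3x3 normal system like this can be solved numerically. The following is a small pure-Python sketch (the helper `solve` is illustrative, not from the text; in practice a library routine such as `numpy.linalg.solve` would be used):

```python
# Solving the 3x3 normal system from Example 2 by Gaussian elimination
# with partial pivoting. Illustrative sketch; `solve` is a made-up name.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):                    # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

A = [[6, 7.5, 13.75], [7.5, 13.75, 28.125], [13.75, 28.125, 61.1875]]
b = [1.42, 2.285, 4.3375]
a0, a1, a2 = solve(A, b)
```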
Consider fitting an exponential function

y = a e^{bx}

for some constants a and b. The difficulty with applying the least squares procedure directly in a situation of this type comes from attempting to minimize

E = \sum_{i=0}^{m} ( y_i - a e^{b x_i} )^2
The normal equations associated with this procedure are obtained from

0 = \frac{\partial E}{\partial a} = 2 \sum_{i=0}^{m} ( y_i - a e^{b x_i} )( -e^{b x_i} )

and

0 = \frac{\partial E}{\partial b} = 2 \sum_{i=0}^{m} ( y_i - a e^{b x_i} )( -a x_i e^{b x_i} )

These are nonlinear in a and b. Instead, one linearizes by taking logarithms, z = \ln y = \ln a + bx, and fits the straight line z = c_0 + c_1 x to the points (x_i, \ln y_i).
EXAMPLE 3
Fit y = a e^{bx} to the following data:

i: 1, 2, 3, 4, 5
x_i: 2.0774, 2.3049, 3.0125, 4.7092, 5.5016
y_i: 1.4506, 2.8462, 2.1536, 4.7438, 7.7260
Solution
Compute z_i = \ln y_i, x_i^2, and x_i z_i for each point. The normal equations for the line z = c_0 + c_1 x then read

\begin{bmatrix} 5 & 17.6056 \\ 17.6056 & 71.1475 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} = \begin{bmatrix} 5.7867 \\ 24.0751 \end{bmatrix}
Solving the normal equations gives

c_0 = -0.2653, \quad c_1 = 0.4040

and we obtain b = c_1 = 0.4040 and a = e^{c_0} = e^{-0.2653} = 0.7670.
The exponential function that best fits this data in the least squares sense is

y = 0.7670 e^{0.4040 x}
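The log-linearization used above is easy to code. A sketch, assuming the data of this example (the function name `exp_fit` is made up):

```python
import math

# Exponential fit y = a*e^(b*x) via log-linearization: fit the line
# z = ln y = c0 + c1*x, then take a = e^c0 and b = c1.

def exp_fit(xs, ys):
    n = len(xs)
    zs = [math.log(y) for y in ys]
    sx, sz = sum(xs), sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    d = n * sxx - sx * sx
    c0 = (sxx * sz - sx * sxz) / d
    c1 = (n * sxz - sx * sz) / d
    return math.exp(c0), c1                    # (a, b)

xs = [2.0774, 2.3049, 3.0125, 4.7092, 5.5016]
ys = [1.4506, 2.8462, 2.1536, 4.7438, 7.7260]
a, b = exp_fit(xs, ys)                         # a ≈ 0.767, b ≈ 0.404
```

Note that this minimizes the squared error in ln y rather than in y, which is why the procedure stays linear.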
Power function: y = a x^b (a and b constants). Taking logarithms,

\ln y = b \ln x + \ln a

so the plot of \ln y versus \ln x is a straight line with slope b and intercept \ln a.
Minimizing directly,

E = \sum_{i=0}^{m} ( y_i - a x_i^b )^2

leads to the conditions

0 = \frac{\partial E}{\partial a} = 2 \sum_{i=0}^{m} ( y_i - a x_i^b )( -x_i^b )

and

0 = \frac{\partial E}{\partial b} = 2 \sum_{i=0}^{m} ( y_i - a x_i^b )( -a x_i^b \ln x_i )

which are again nonlinear, so in practice the logarithmic linearization above is used.
Similarly, for y = a b^x, taking logarithms gives

\ln y = x \ln b + \ln a

so the plot of \ln y versus x is a straight line with slope \ln b and intercept \ln a. In each case the model reduces to the linear form

Y = A + bX
Saturation function. Inverting the equation gives

\frac{1}{y} = b \left( \frac{1}{x} \right) + a

so the plot of 1/y versus 1/x is a straight line with slope b and intercept a.
Curve fitting. Suppose we fit y = \frac{a}{x} + b \sqrt{x}. The residuals are

e_i = y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right)

and we have

S = \sum_{i=0}^{n} e_i^2 = \sum_{i=0}^{n} \left[ y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right) \right]^2

Therefore

\frac{\partial S}{\partial a} = 2 \sum_{i=0}^{n} \left[ y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right) \right] \left( -\frac{1}{x_i} \right) = 0

and

\frac{\partial S}{\partial b} = 2 \sum_{i=0}^{n} \left[ y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right) \right] \left( -\sqrt{x_i} \right) = 0

The second condition gives the normal equation

\sum_{i=0}^{n} y_i \sqrt{x_i} = a \sum_{i=0}^{n} \frac{1}{\sqrt{x_i}} + b \sum_{i=0}^{n} x_i

and the first gives

\sum_{i=0}^{n} \frac{y_i}{x_i} = a \sum_{i=0}^{n} \frac{1}{x_i^2} + b \sum_{i=0}^{n} \frac{1}{\sqrt{x_i}}
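These two normal equations form a 2x2 linear system. A sketch, checked on synthetic data generated from known a and b (the data and the function name are invented for illustration):

```python
import math

# Fitting y = a/x + b*sqrt(x) from its two normal equations, solved
# by Cramer's rule. Checked on exact synthetic data with a=2, b=0.5.

def fit_curve(xs, ys):
    s11 = sum(1 / (x * x) for x in xs)                  # sum 1/x^2
    s12 = sum(1 / math.sqrt(x) for x in xs)             # sum 1/sqrt(x)
    s22 = sum(xs)                                       # sum x
    r1 = sum(y / x for x, y in zip(xs, ys))             # sum y/x
    r2 = sum(y * math.sqrt(x) for x, y in zip(xs, ys))  # sum y*sqrt(x)
    d = s11 * s22 - s12 * s12
    a = (r1 * s22 - s12 * r2) / d
    b = (s11 * r2 - s12 * r1) / d
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 / x + 0.5 * math.sqrt(x) for x in xs]  # exact data: a=2, b=0.5
a, b = fit_curve(xs, ys)
```

Because the synthetic data lies exactly on the model, the least squares solution recovers a = 2 and b = 0.5.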
Continuous least square method. Here we seek the polynomial p_n(x) = a_0 + a_1 x + \dots + a_n x^n such that

E = \int_a^b [ f(x) - (a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n) ]^2 \, dx

is minimized. The polynomial p_n(x) is called the least squares polynomial. For minimization, we must have

\frac{\partial E}{\partial a_i} = 0, \quad i = 0, 1, 2, \dots, n

As before, these conditions give rise to a system of (n+1) normal equations in the (n+1) unknowns a_0, a_1, a_2, \dots, a_n, since

\frac{\partial E}{\partial a_i} = -2 \int_a^b x^i [ f(x) - (a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n) ] \, dx
Thus, we have

\frac{\partial E}{\partial a_0} = 0 \Rightarrow a_0 \int_a^b 1 \, dx + a_1 \int_a^b x \, dx + a_2 \int_a^b x^2 \, dx + \dots + a_n \int_a^b x^n \, dx = \int_a^b f(x) \, dx

and similarly, for i = 0, 1, 2, \dots, n,

\frac{\partial E}{\partial a_i} = 0 \Rightarrow a_0 \int_a^b x^i \, dx + a_1 \int_a^b x^{i+1} \, dx + a_2 \int_a^b x^{i+2} \, dx + \dots + a_n \int_a^b x^{i+n} \, dx = \int_a^b x^i f(x) \, dx

For example,

i = 1: \; a_0 \int_a^b x \, dx + a_1 \int_a^b x^2 \, dx + \dots + a_n \int_a^b x^{n+1} \, dx = \int_a^b x f(x) \, dx
\vdots
i = n: \; a_0 \int_a^b x^n \, dx + a_1 \int_a^b x^{n+1} \, dx + a_2 \int_a^b x^{n+2} \, dx + \dots + a_n \int_a^b x^{2n} \, dx = \int_a^b x^n f(x) \, dx
Denoting

s_i = \int_a^b x^i \, dx, \quad i = 0, 1, 2, \dots, 2n
b_i = \int_a^b x^i f(x) \, dx, \quad i = 0, 1, 2, \dots, n

the normal equations take the compact form \sum_{j=0}^{n} s_{i+j} \, a_j = b_i.
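The moments s_k have the closed form (b^{k+1} - a^{k+1})/(k+1). A short sketch in exact arithmetic (the function name `moment` is illustrative); note that on [0, 1] the matrix [s_{i+j}] is the famously ill-conditioned Hilbert matrix:

```python
from fractions import Fraction

# Moments s_k = integral of x^k over [a, b], computed exactly.
# On [0, 1] the normal-equation matrix [s_{i+j}] is the Hilbert matrix.

def moment(a, b, k):
    """Exact value of the integral of x^k over [a, b]."""
    return (Fraction(b) ** (k + 1) - Fraction(a) ** (k + 1)) / (k + 1)

n = 2  # quadratic fit
H = [[moment(0, 1, i + j) for j in range(n + 1)] for i in range(n + 1)]
# H == [[1, 1/2, 1/3], [1/2, 1/3, 1/4], [1/3, 1/4, 1/5]]
```

The rapid growth of the condition number of this matrix is one reason orthogonal polynomials are often preferred for continuous least squares in practice.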
EXAMPLE 4
Find the linear and quadratic least squares approximations to f(x) = e^x on [-1, 1].
Solution
Linear approximation: n = 1, p_1(x) = a_0 + a_1 x.
s_0 = \int_{-1}^{1} 1 \, dx = [x]_{-1}^{1} = 2

s_1 = \int_{-1}^{1} x \, dx = \left[ \frac{x^2}{2} \right]_{-1}^{1} = \frac{1}{2} - \frac{1}{2} = 0

s_2 = \int_{-1}^{1} x^2 \, dx = \left[ \frac{x^3}{3} \right]_{-1}^{1} = \frac{1}{3} - \left( -\frac{1}{3} \right) = \frac{2}{3}

b_0 = \int_{-1}^{1} e^x \, dx = [e^x]_{-1}^{1} = e - \frac{1}{e} = 2.3504

b_1 = \int_{-1}^{1} x e^x \, dx = [(x-1)e^x]_{-1}^{1} = \frac{2}{e} = 0.7358
This gives the matrix S and vector b:

S = \begin{bmatrix} 2 & 0 \\ 0 & \frac{2}{3} \end{bmatrix}, \quad b = \begin{bmatrix} 2.3504 \\ 0.7358 \end{bmatrix}

Solving the normal system

\begin{bmatrix} 2 & 0 \\ 0 & \frac{2}{3} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 2.3504 \\ 0.7358 \end{bmatrix}

gives a_0 = 1.1752 and a_1 = 1.1037.
The linear least squares polynomial is p_1(x) = 1.1752 + 1.1037x.
Relative error at x = 0.5:

\frac{ | e^{0.5} - p_1(0.5) | }{ | e^{0.5} | } = \frac{ | 1.6487 - 1.7270 | }{ 1.6487 } = 0.0475
Quadratic fitting: n = 2, p_2(x) = a_0 + a_1 x + a_2 x^2. In addition to s_0 = 2, s_1 = 0, and s_2 = 2/3 computed above, we need

s_3 = \int_{-1}^{1} x^3 \, dx = \left[ \frac{x^4}{4} \right]_{-1}^{1} = \frac{1}{4} - \frac{1}{4} = 0

s_4 = \int_{-1}^{1} x^4 \, dx = \left[ \frac{x^5}{5} \right]_{-1}^{1} = \frac{1}{5} - \left( -\frac{1}{5} \right) = \frac{2}{5}

and, in addition to b_0 = 2.3504 and b_1 = 0.7358,

b_2 = \int_{-1}^{1} x^2 e^x \, dx = [ (x^2 - 2x + 2) e^x ]_{-1}^{1} = e - \frac{5}{e} = 0.8789
This gives the matrix S and vector b:

S = \begin{bmatrix} 2 & 0 & \frac{2}{3} \\ 0 & \frac{2}{3} & 0 \\ \frac{2}{3} & 0 & \frac{2}{5} \end{bmatrix}, \quad b = \begin{bmatrix} 2.3504 \\ 0.7358 \\ 0.8789 \end{bmatrix}

Solving the normal system

\begin{bmatrix} 2 & 0 & \frac{2}{3} \\ 0 & \frac{2}{3} & 0 \\ \frac{2}{3} & 0 & \frac{2}{5} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 2.3504 \\ 0.7358 \\ 0.8789 \end{bmatrix}

gives a_0 = 0.9963, a_1 = 1.1037, a_2 = 0.5368, so p_2(x) = 0.9963 + 1.1037x + 0.5368x^2.
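These coefficients can be verified by substituting them back into the normal system S a = b, using the exact values b_0 = e - 1/e, b_1 = 2/e, b_2 = e - 5/e. A small verification sketch:

```python
import math

# Back-substitution check of the quadratic least-squares coefficients
# for e^x on [-1, 1]: they should approximately satisfy S a = b.

e = math.e
S = [[2.0, 0.0, 2.0 / 3.0],
     [0.0, 2.0 / 3.0, 0.0],
     [2.0 / 3.0, 0.0, 2.0 / 5.0]]
b = [e - 1.0 / e, 2.0 / e, e - 5.0 / e]   # exact b_0, b_1, b_2
a = [0.9963, 1.1037, 0.5368]              # coefficients found above

residuals = [sum(S[i][j] * a[j] for j in range(3)) - b[i] for i in range(3)]
# each residual is tiny; it is nonzero only because the quoted
# coefficients are rounded to four decimal places
```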
For example, for a quadratic least squares fit to f(x) = \sin \pi x on [0, 1], two of the normal equations are

a_0 \int_0^1 x \, dx + a_1 \int_0^1 x^2 \, dx + a_2 \int_0^1 x^3 \, dx = \int_0^1 x \sin \pi x \, dx

a_0 \int_0^1 x^2 \, dx + a_1 \int_0^1 x^3 \, dx + a_2 \int_0^1 x^4 \, dx = \int_0^1 x^2 \sin \pi x \, dx
Weighted least squares. For a straight-line fit with weights w_i, minimizing \sum w_i [ y_i - (a_0 + a_1 x_i) ]^2 leads to a pair of simultaneous linear equations in the unknowns a_0 and a_1. They are called the normal equations and can be written as

\left( \sum_{i=0}^{n} w_i \right) a_0 + \left( \sum_{i=0}^{n} w_i x_i \right) a_1 = \sum_{i=0}^{n} w_i y_i

\left( \sum_{i=0}^{n} w_i x_i \right) a_0 + \left( \sum_{i=0}^{n} w_i x_i^2 \right) a_1 = \sum_{i=0}^{n} w_i x_i y_i

Solving these normal equations for a_0 and a_1 and substituting the values into the equation of the line gives the required least squares approximation.
Example 6
Fit a straight line to the following data by the method of least squares, giving each x the weight w shown:

x: 0, 1, 2, 3, 4
w: 1, 1, 1, 5, 5

Solution
Let the straight line be y = a_0 + a_1 x, where a_0 and a_1 are constants to be determined. The data and the summations needed to compute the best linear fit are tabulated as follows:

i | x_i | y_i | w_i | w_i x_i | w_i x_i^2 | w_i x_i y_i | w_i y_i
0 | 0 | 1 | 1 | 0 | 0 | 0 | 1
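The weighted normal equations above translate directly into code. Since the example's remaining y-values are not reproduced here, the sketch below uses hypothetical y-values (y = 2 + 3x, invented purely for illustration) together with the example's x-values and weights:

```python
# Weighted linear least squares from the weighted normal equations.
# The y-values below are hypothetical; x and w follow Example 6.

def weighted_linear_fit(xs, ys, ws):
    sw = sum(ws)
    swx = sum(w * x for w, x in zip(ws, xs))
    swxx = sum(w * x * x for w, x in zip(ws, xs))
    swy = sum(w * y for w, y in zip(ws, ys))
    swxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    d = sw * swxx - swx * swx
    a0 = (swxx * swy - swx * swxy) / d
    a1 = (sw * swxy - swx * swy) / d
    return a0, a1

xs = [0, 1, 2, 3, 4]
ws = [1, 1, 1, 5, 5]                 # weights from Example 6
ys = [2 + 3 * x for x in xs]         # hypothetical, exactly linear data
a0, a1 = weighted_linear_fit(xs, ys, ws)
```

Because the hypothetical data is exactly linear, the weighted fit recovers a_0 = 2 and a_1 = 3 regardless of the weights.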
Weighted least squares for a continuous function. Let

P_n(x) = a_0 + a_1 x + \dots + a_{n-1} x^{n-1} + a_n x^n = \sum_{i=0}^{n} a_i x^i

and minimize

S = \int_a^b W(x) \left[ f(x) - \sum_{i=0}^{n} a_i x^i \right]^2 dx

Setting the partial derivatives to zero gives

\frac{\partial S}{\partial a_j} = -2 \int_a^b W(x) \left[ f(x) - \sum_{i=0}^{n} a_i x^i \right] x^j \, dx = 0, \quad j = 0, 1, \dots, n
Chapter 3
Conclusion
This project focused on least squares approximation methods with practical examples. Because interpolation becomes impractical when the number of data points is large, the least squares method is an appropriate approach to fitting the data; the linear least squares method was derived and illustrated with a worked example.
Similarly, the non-linear least squares approach confirms the same framework for the quadratic least squares fit, as well as for the exponential, power, and saturation functions, each with its derivation and a worked example.
Finally, the project treated the continuous least squares method and the weighted least squares method, in which the error is minimized by setting the partial derivatives to zero to obtain the best-fitting approximation.
References
[1] Levenberg, K., 1944. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2), pp. 164-168.
[2] Kirschvink, J.L., 1980. The least-squares line and plane and the analysis of palaeomagnetic data. Geophysical Journal International, 62(3), pp. 699-718.
[3] Burden, R.L. and Faires, J.D., 2017. Numerical Analysis, 10th Edition.
[4] Hildebrand, F.B. Introduction to Numerical Analysis, pp. 314-318.