
ARBA MINCH UNIVERSITY

COLLEGE OF NATURAL SCIENCE


DEPARTMENT OF MATHEMATICS

LEAST SQUARE APPROXIMATION

PREPARED BY: BEZAWIT TSIGAYE ID NO RNS/249/09

ADVISOR: ZELEKE MARKOS (M.Sc.)

PROJECT PAPER SUBMITTED TO ARBA MINCH UNIVERSITY,

COLLEGE OF NATURAL SCIENCES, DEPARTMENT OF MATHEMATICS

JUNE, 2019
ARBA MINCH, ETHIOPIA
APPROVAL SHEET
This is to certify that the project entitled “LEAST SQUARE APPROXIMATION”, submitted to the Department of Mathematics, Arba Minch University, is a record of an original project carried out by BEZAWIT TSIGAYE (RNS/249/09) and is submitted for the B.Sc. degree.
The assistance and the help received during the course of this investigation have been duly acknowledged. Therefore, I recommend that it be accepted as fulfilling the project requirements.
Prepared by:
___________________ ____________________ ________________
Name of Student Signature Date

Approved by:
___________________ ____________________ ________________
Name of Advisor Signature Date

__________________ ____________________ ________________


Name of Examiner 1 Signature Date

__________________ ____________________ ________________


Name of Examiner 2 Signature Date

____________________ ____________________ ________________


Department Head Signature Date
ACKNOWLEDGMENT
Above all, I would like to thank the almighty God, who has been with me in all my activities from the beginning to the end.

Next, I would like to express my sincere appreciation to my project advisor, Zeleke Markos (M.Sc.), for sharing his substantial experience so that this project could be done in the expected way, and for giving his time generously and without any reservation.
I would also like to thank the staff members of the Arba Minch University Department of Mathematics, who shared their knowledge and cooperated with me in carrying out this project paper.
Table of Contents
ACKNOWLEDGMENT
CHAPTER-ONE
1. Introduction
1.1 Background of study
1.2 Objectives of the study
1.2.1 General objective
1.2.2 Specific objectives
Chapter 2
2 Least square approximation
2.1 Discrete least-square approximation
2.2 Linear least square approximation
2.3 Non-linear least square method
2.3.1 Polynomial least square method
2.3.2 Exponential least square method
2.3.3 Power function
2.3.3.1 Saturation function
2.3.4 Curve fitting
2.4 Continuous Least Square Method
2.5 Weighted Least square approximation
2.5.1 Linear Weighted Least square approximation
2.5.2 Weighted Least square for continuous function
Chapter 3
Conclusion
References
CHAPTER-ONE
1. Introduction

1.1 Background of study


The purpose of numerical analysis is to provide convenient methods for obtaining useful solutions to mathematical problems and for extracting useful information from available solutions that are not expressed in tractable forms. Numerical methods are procedures that allow for the efficient solution of a mathematically formulated problem, in a finite number of steps, to within a chosen precision. Although scientific calculators can handle simple problems, computers are needed in most cases. Numerical methods usually consist of a set of rules for performing predetermined mathematical (algebraic and logical) operations leading to an approximate solution of a specific problem. Such a set of guidelines is known as an algorithm.
A major advantage of numerical techniques is that a numerical answer can be obtained even when a problem has no analytical solution. However, a result from numerical analysis is, in general, an approximation, which can be made as accurate as desired. The reliability of a numerical result depends on an error estimate or bound; the analysis of errors and of their sources is therefore an essential part of any numerical solution.
An approximation method is a method used to numerically evaluate a function, and an approximation is anything that is similar, but not exactly equal, to something else. To obtain a best fit, approximation techniques such as interpolation, splines, and least squares are used; among these, least squares gives the best fit.
The process of finding the equation of the curve of best fit, which may be most suitable for predicting unknown values, is known as curve fitting. Curve fitting therefore means expressing the relationship between two variables by an algebraic equation. The following methods are available for fitting a curve:
I. Graphic method
II. Method of group averages
III. Method of moments
IV. Principle of least squares.
Of the above four methods, we will only discuss and study here the principle of least squares.
The least-squares method is usually credited to Carl Friedrich Gauss (1795), but it was first published by Adrien-Marie Legendre (1805). The idea of least-squares analysis was also independently expressed by the American Robert Adrain in 1808. In the next two centuries, workers in the theory of errors and in statistics found many different ways of applying least squares.
The method of least squares is probably the most systematic procedure for fitting a unique curve through given data points, and it is a standard approach in regression analysis for approximating the solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the residuals made in the results of every single equation. The most important application is in data fitting: the best fit in the least-squares sense minimizes the sum of squared residuals (a residual being the difference between an observed value and the fitted value provided by the model).
Applications of the least square method
 It is easy to find the best fitting regression line.
 It helps to identify extreme values.
 We are sure to find only one best fitting line.
 Using a scatter plot showing the relationship between variables x and y, we want to find the line that minimizes the vertical distance between itself and the observed points.
 We can find the vertical distance from each point to the line.
 To minimize the total distance from the points to the line, we minimize the sum of the lengths of all these vertical segments.

1.2 Objectives of the study


1.2.1 General objective
The main objective of this project work is to present least square approximation methods with practical examples.
1.2.2 Specific objectives
In this project work we will:

 Develop least square formulas

 Compute a linear least square fit with an example

 Compute a non-linear least square fit with an example

 Write a conclusion from the results.


Chapter 2
2 Least square approximation

Least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed in terms of the unknown parameters of the model.
It is the problem of approximately solving an overdetermined system of equations, where the best approximation is defined as the one that minimizes the sum of the squared differences between the data values and the corresponding modeled values.
The forms of least squares most likely to be encountered are the following.

2.1 Discrete least-square approximation


Discrete Least-Squares Approximation Problem
Given a set of m discrete data points (x_i, y_i), i = 1, 2, ..., m, find the algebraic polynomial

p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n   (n < m)

such that the error E(a_0, a_1, a_2, ..., a_n) in the least-squares sense is minimized; that is,

E(a_0, a_1, a_2, ..., a_n) = \sum_{i=1}^{m} \left[ y_i - (a_0 + a_1 x_i + \cdots + a_n x_i^n) \right]^2

is minimum.

Here E(a_0, a_1, a_2, ..., a_n) is a function of the n + 1 variables a_0, a_1, a_2, ..., a_n.


Different strategies can be considered for determining the best linear fit of a set of n data points (x_1, y_1), ..., (x_n, y_n). One strategy is to minimize the sum of all the individual errors,

E = \sum_{i=1}^{n} e_i = \sum_{i=1}^{n} \left[ y_i - (a_1 x_i + a_0) \right]

This criterion, however, does not offer a good measure of how well the line fits the data: it allows positive and negative individual errors, even very large ones, to cancel out and yield a zero sum. Another strategy is to minimize the sum of the absolute values of the individual errors,

E = \sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} \left| y_i - (a_1 x_i + a_0) \right|

As a result, the individual errors can no longer cancel out and the total error is always positive. This criterion, however, is not able to uniquely determine the coefficients that describe the best line fit, because for a given set of data several lines can have the same total error.

The best strategy is to minimize the sum of the squares of the individual errors,

E = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left[ y_i - (a_1 x_i + a_0) \right]^2

2.2 Linear least square approximation


The least squares approach to this problem involves determining the best approximating line when the error involved is the sum of the squares of the differences between the y-values on the approximating line and the given y-values. Hence, constants a_0 and a_1 must be found that minimize the least squares error.

When f(x) is linear, the least squares problem is the problem of finding constants a_0 and a_1 such that the function P_1(x) = a_0 + a_1 x best fits the data (x_i, f_i), i = 0, 1, ..., n.
The error E(a_0, a_1) we need to minimize is

E(a_0, a_1) = \sum_{i=0}^{n} \left[ (a_0 + a_1 x_i) - f_i \right]^2

In order to minimize this function of a_0 and a_1, we must set its partial derivatives to zero,

\frac{\partial E}{\partial a_0} = 0 and \frac{\partial E}{\partial a_1} = 0

which gives

\frac{\partial E}{\partial a_0} = 2 \sum_{i=0}^{n} \left[ (a_0 + a_1 x_i) - f_i \right] = 0

\frac{\partial E}{\partial a_1} = 2 \sum_{i=0}^{n} x_i \left[ (a_0 + a_1 x_i) - f_i \right] = 0

Since both of these partial derivatives must equal zero, we obtain the system of linear equations (the normal equations)

a_0 (n+1) + a_1 \sum_{i=0}^{n} x_i = \sum_{i=0}^{n} f_i

a_0 \sum_{i=0}^{n} x_i + a_1 \sum_{i=0}^{n} x_i^2 = \sum_{i=0}^{n} x_i f_i

Since everything except a_0 and a_1 is known, this is a 2-by-2 system of equations:

\begin{bmatrix} n+1 & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum f_i \\ \sum x_i f_i \end{bmatrix}

The solution to this system of equations is

a_0 = \frac{\left( \sum x_i^2 \right) \left( \sum y_i \right) - \left( \sum x_i \right) \left( \sum x_i y_i \right)}{(n+1) \sum x_i^2 - \left( \sum x_i \right)^2}

a_1 = \frac{(n+1) \sum x_i y_i - \left( \sum x_i \right) \left( \sum y_i \right)}{(n+1) \sum x_i^2 - \left( \sum x_i \right)^2}

where the data values are written y_i = f_i and all sums run over the n + 1 data points.

EXAMPLE 1
Using the method of least squares, find the linear function that best fits the following data.

x    1     1.5    2     2.5    3     3.5    4
y    25    31     27    28     36    35     32

The solution is (here there are n + 1 = 7 data points):

\sum x_i = 1 + 1.5 + 2 + 2.5 + 3 + 3.5 + 4 = 17.5

\sum y_i = 25 + 31 + 27 + 28 + 36 + 35 + 32 = 214

\sum x_i^2 = 1^2 + 1.5^2 + 2^2 + 2.5^2 + 3^2 + 3.5^2 + 4^2 = 50.75

\sum x_i y_i = (1)(25) + (1.5)(31) + (2)(27) + (2.5)(28) + (3)(36) + (3.5)(35) + (4)(32) = 554

Now

a_0 = \frac{(50.75)(214) - (17.5)(554)}{7(50.75) - (17.5)^2} = \frac{1165.5}{49} = 23.7857

a_1 = \frac{7(554) - (17.5)(214)}{7(50.75) - (17.5)^2} = \frac{133}{49} = 2.7143

Therefore, the least-squares line is

y = 23.7857 + 2.7143 x
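
These hand computations can be checked with a short script. The following is a minimal sketch in Python with NumPy (not part of the original derivation), which forms the sums of the normal equations and evaluates the closed-form solution for Example 1:

import numpy as np

# Data from Example 1
x = np.array([1, 1.5, 2, 2.5, 3, 3.5, 4])
y = np.array([25, 31, 27, 28, 36, 35, 32])

m = len(x)                                 # number of data points (n + 1 = 7)
Sx, Sy = x.sum(), y.sum()                  # 17.5 and 214
Sxx, Sxy = (x * x).sum(), (x * y).sum()    # 50.75 and 554

d = m * Sxx - Sx ** 2                      # common denominator, 49
a0 = (Sxx * Sy - Sx * Sxy) / d             # intercept, about 23.7857
a1 = (m * Sxy - Sx * Sy) / d               # slope, about 2.7143
print(a0, a1)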

2.3 Non-linear least square method


2.3.1 Polynomial least square method
Non-linear least squares is a form of least squares analysis used to fit a set of m observations with a model that is non-linear in n unknown parameters (m > n). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences.
For the quadratic polynomial p_2(x) = a_0 + a_1 x + a_2 x^2, the error is given by

E(a_0, a_1, a_2) = \sum_{i=0}^{n} \left[ (a_0 + a_1 x_i + a_2 x_i^2) - f_i \right]^2

At the minimum (best model) we must have

\frac{\partial E}{\partial a_0} = 2 \sum_{i=0}^{n} \left[ (a_0 + a_1 x_i + a_2 x_i^2) - f_i \right] = 0

\frac{\partial E}{\partial a_1} = 2 \sum_{i=0}^{n} x_i \left[ (a_0 + a_1 x_i + a_2 x_i^2) - f_i \right] = 0

\frac{\partial E}{\partial a_2} = 2 \sum_{i=0}^{n} x_i^2 \left[ (a_0 + a_1 x_i + a_2 x_i^2) - f_i \right] = 0

Thus, for the quadratic polynomial p_2(x) = a_0 + a_1 x + a_2 x^2, the normal equations are

a_0 (n+1) + a_1 \sum x_i + a_2 \sum x_i^2 = \sum f_i

a_0 \sum x_i + a_1 \sum x_i^2 + a_2 \sum x_i^3 = \sum x_i f_i

a_0 \sum x_i^2 + a_1 \sum x_i^3 + a_2 \sum x_i^4 = \sum x_i^2 f_i

The normal equations as a matrix equation:

\begin{bmatrix} n+1 & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum f_i \\ \sum x_i f_i \\ \sum x_i^2 f_i \end{bmatrix}

EXAMPLE 2
Find the quadratic polynomial that best fits the following table of values.

x_i    0    0.5     1      1.5     2      2.5
f_i    0    0.20    0.27   0.30    0.32   0.33

The solution is (with m = 6 data points):

i      x_i    x_i^2    x_i^3     x_i^4      f_i     x_i f_i    x_i^2 f_i
1      0      0        0         0          0       0          0
2      0.5    0.25     0.125     0.0625     0.20    0.10       0.05
3      1      1        1         1          0.27    0.27       0.27
4      1.5    2.25     3.375     5.0625     0.30    0.45       0.675
5      2      4        8         16         0.32    0.64       1.28
6      2.5    6.25     15.625    39.0625    0.33    0.825      2.0625
Sum    7.5    13.75    28.125    61.1875    1.42    2.285      4.3375

Substituting these sums into the normal equations (with m = 6 in place of n + 1, since the six points are numbered 1 to 6) gives

\begin{bmatrix} 6 & 7.5 & 13.75 \\ 7.5 & 13.75 & 28.125 \\ 13.75 & 28.125 & 61.1875 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 1.42 \\ 2.285 \\ 4.3375 \end{bmatrix}

The solution of this linear system is

a_0 = 0.0225, a_1 = 0.3219, a_2 = -0.0821

so the least squares quadratic is

y = 0.0225 + 0.3219 x - 0.0821 x^2
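
As a check on the arithmetic, a minimal Python/NumPy sketch (not part of the original project) that assembles and solves the same 3-by-3 normal system:

import numpy as np

# Data from Example 2
x = np.array([0, 0.5, 1, 1.5, 2, 2.5])
f = np.array([0, 0.20, 0.27, 0.30, 0.32, 0.33])

# Coefficient matrix and right-hand side of the normal equations
A = np.array([[len(x),       x.sum(),      (x**2).sum()],
              [x.sum(),      (x**2).sum(), (x**3).sum()],
              [(x**2).sum(), (x**3).sum(), (x**4).sum()]])
b = np.array([f.sum(), (x * f).sum(), (x**2 * f).sum()])

a0, a1, a2 = np.linalg.solve(A, b)
print(a0, a1, a2)   # about 0.0225, 0.3219, -0.0821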

2.3.2 Exponential least square method


The exponential function has the form

y = a e^{bx}

for some constants a and b. The difficulty with applying the least squares procedure in a situation of this type comes from attempting to minimize

E = \sum_{i=0}^{m} \left( y_i - a e^{b x_i} \right)^2

The normal equations associated with this procedure are obtained from

0 = \frac{\partial E}{\partial a} = 2 \sum_{i=0}^{m} \left( y_i - a e^{b x_i} \right) \left( -e^{b x_i} \right)

and

0 = \frac{\partial E}{\partial b} = 2 \sum_{i=0}^{m} \left( y_i - a e^{b x_i} \right) \left( -a x_i e^{b x_i} \right)

No exact solution to this system in a and b can generally be found.

The method that is commonly used when the data are suspected to be exponentially related is to consider the logarithm of the approximating equation:

\ln y = \ln a + b x

We now set Y = \ln y, B = \ln a and A = b, which gives the linear relation

Y = A x + B

Developing and solving the normal equations for this line in the least-squares sense gives A and B, and therefore

b = A and a = e^{B}
EXAMPLE 3
Find the exponential function y = a e^{bx} that best approximates the data shown in the table.

i    x_i       y_i
1    2.0774    1.4506
2    2.3049    2.8462
3    3.0125    2.1536
4    4.7092    4.7438
5    5.5016    7.7260

Solution

i      x_i        y_i        z_i = ln y_i    x_i^2      x_i z_i
1      2.0774     1.4506     0.3722          4.3156     0.7732
2      2.3049     2.8462     1.0460          5.3126     2.4109
3      3.0125     2.1536     0.7671          9.0752     2.3110
4      4.7092     4.7438     1.5568          22.1766    7.3315
5      5.5016     7.7260     2.0446          30.2676    11.2485
Sum    17.6056    18.9205    5.7867          71.1475    24.0751

Fitting the straight line z = c_0 + c_1 x to the points (x_i, z_i) gives the normal system S c = r, where

S = \begin{bmatrix} 5 & 17.6056 \\ 17.6056 & 71.1475 \end{bmatrix}, r = \begin{bmatrix} 5.7867 \\ 24.0751 \end{bmatrix}

Solving the normal equations gives

c_0 = -0.2653, c_1 = 0.4040

and we obtain b = c_1 = 0.4040 and a = e^{c_0} = e^{-0.2653} = 0.7670.
The exponential function that best fits this data in the least squares sense is

y = 0.7670 e^{0.4040 x}
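
A minimal Python/NumPy sketch of the same log-linearization; np.polyfit performs the straight-line least squares fit to (x_i, ln y_i):

import numpy as np

# Data from Example 3
x = np.array([2.0774, 2.3049, 3.0125, 4.7092, 5.5016])
y = np.array([1.4506, 2.8462, 2.1536, 4.7438, 7.7260])

z = np.log(y)                  # z_i = ln y_i
c1, c0 = np.polyfit(x, z, 1)   # slope c1, intercept c0
a, b = np.exp(c0), c1
print(a, b)                    # about 0.7670 and 0.4040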

2.3.3 Power function

Another example of a nonlinear function is the power function

y = a x^b   (a, b constants)

Minimizing

E = \sum_{i=0}^{m} \left( y_i - a x_i^b \right)^2

directly, via the conditions

0 = \frac{\partial E}{\partial a} and 0 = \frac{\partial E}{\partial b},

again leads to a nonlinear system with no general exact solution. Linearization is therefore achieved by taking the logarithm:

\ln y = b \ln x + \ln a

so that the plot of \ln y versus \ln x is a straight line with slope b and intercept \ln a.

Now consider Y = \ln y, X = \ln x and A = \ln a, which gives the linear relation

Y = A + b X

which can be fitted by linear least squares; then a = e^{A}.
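
The same linearization is easy to carry out in code. A minimal Python/NumPy sketch, using made-up illustrative data assumed to follow the power model (the values are hypothetical, not from the project):

import numpy as np

# Hypothetical data assumed to follow y = a * x**b
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 5.9, 10.5, 16.3, 22.8])

# Fit a straight line to (ln x, ln y): slope = b, intercept = ln a
b, lna = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(lna)
print(a, b)   # recovered parameters of the power model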

2.3.3.1 Saturation function

The saturation function has the form

y = \frac{x}{a x + b}   (a, b constants)

Inverting the equation gives

\frac{1}{y} = b \left( \frac{1}{x} \right) + a

so that the plot of 1/y versus 1/x is a straight line with slope b and intercept a.

2.3.4 Curve fitting

Curve fitting means finding an appropriate mathematical model that expresses the relationship between a dependent variable y and a single independent variable x, and estimating the values of its parameters using nonlinear regression. As an example, consider the model

y = \frac{a}{x} + b \sqrt{x}

The error for the i-th point (x_i, y_i) is

e_i = y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right)

and we have

S = \sum_{i=0}^{n} e_i^2 = \sum_{i=0}^{n} \left[ y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right) \right]^2

By the principle of least squares, the value of S must be minimum; therefore

\frac{\partial S}{\partial a} = 2 \sum_{i=0}^{n} \left[ y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right) \right] \left( -\frac{1}{x_i} \right) = 0

and

\frac{\partial S}{\partial b} = 2 \sum_{i=0}^{n} \left[ y_i - \left( \frac{a}{x_i} + b \sqrt{x_i} \right) \right] \left( -\sqrt{x_i} \right) = 0

Then the normal equations are

\sum_{i=0}^{n} \frac{y_i}{x_i} = a \sum_{i=0}^{n} \frac{1}{x_i^2} + b \sum_{i=0}^{n} \frac{1}{\sqrt{x_i}}

\sum_{i=0}^{n} y_i \sqrt{x_i} = a \sum_{i=0}^{n} \frac{1}{\sqrt{x_i}} + b \sum_{i=0}^{n} x_i

2.4 Continuous Least Square Method

Given a function f(x) continuous on [a, b], find a polynomial p_n(x) of degree at most n,

p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n,

such that the integral of the square of the error is minimized; that is,

E(a_0, a_1, a_2, ..., a_n) = \int_a^b \left[ f(x) - p_n(x) \right]^2 dx

is minimized.
The polynomial p_n(x) is called the least squares polynomial. For minimization, we must have

\frac{\partial E}{\partial a_i} = 0, i = 0, 1, 2, ..., n

As before, these conditions give rise to a system of n + 1 normal equations in the n + 1 unknowns a_0, a_1, a_2, ..., a_n.

Since E = \int_a^b \left[ f(x) - (a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n) \right]^2 dx,

differentiating E with respect to each a_i results in

\frac{\partial E}{\partial a_0} = -2 \int_a^b \left[ f(x) - (a_0 + a_1 x + \cdots + a_n x^n) \right] dx

\frac{\partial E}{\partial a_1} = -2 \int_a^b x \left[ f(x) - (a_0 + a_1 x + \cdots + a_n x^n) \right] dx

⋮

\frac{\partial E}{\partial a_n} = -2 \int_a^b x^n \left[ f(x) - (a_0 + a_1 x + \cdots + a_n x^n) \right] dx

Thus, setting \partial E / \partial a_i = 0 gives

a_0 \int_a^b x^i dx + a_1 \int_a^b x^{i+1} dx + a_2 \int_a^b x^{i+2} dx + \cdots + a_n \int_a^b x^{i+n} dx = \int_a^b x^i f(x) dx, i = 0, 1, 2, ..., n

So the n + 1 normal equations in this case are:

i = 0: a_0 \int_a^b 1 dx + a_1 \int_a^b x dx + a_2 \int_a^b x^2 dx + \cdots + a_n \int_a^b x^n dx = \int_a^b f(x) dx

i = 1: a_0 \int_a^b x dx + a_1 \int_a^b x^2 dx + a_2 \int_a^b x^3 dx + \cdots + a_n \int_a^b x^{n+1} dx = \int_a^b x f(x) dx

⋮

i = n: a_0 \int_a^b x^n dx + a_1 \int_a^b x^{n+1} dx + a_2 \int_a^b x^{n+2} dx + \cdots + a_n \int_a^b x^{2n} dx = \int_a^b x^n f(x) dx

Denoting

s_k = \int_a^b x^k dx, k = 0, 1, 2, ..., 2n

b_i = \int_a^b x^i f(x) dx, i = 0, 1, 2, ..., n

we have the system of normal equations

S a = b

where

S = \begin{bmatrix} s_0 & s_1 & \cdots & s_n \\ s_1 & s_2 & \cdots & s_{n+1} \\ \vdots & & \ddots & \vdots \\ s_n & s_{n+1} & \cdots & s_{2n} \end{bmatrix}, a = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}, b = \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_n \end{bmatrix}

EXAMPLE 4
Find the linear and quadratic least squares approximations to f(x) = e^x on [-1, 1].
Solution
Linear approximation: n = 1, p_1(x) = a_0 + a_1 x.

s_0 = \int_{-1}^{1} 1 \, dx = [x]_{-1}^{1} = 2

s_1 = \int_{-1}^{1} x \, dx = \left[ \frac{x^2}{2} \right]_{-1}^{1} = \frac{1}{2} - \frac{1}{2} = 0

s_2 = \int_{-1}^{1} x^2 \, dx = \left[ \frac{x^3}{3} \right]_{-1}^{1} = \frac{1}{3} - \left( -\frac{1}{3} \right) = \frac{2}{3}

b_0 = \int_{-1}^{1} e^x \, dx = [e^x]_{-1}^{1} = e - \frac{1}{e} = 2.3504

b_1 = \int_{-1}^{1} x e^x \, dx = \left[ (x-1) e^x \right]_{-1}^{1} = \frac{2}{e} = 0.7358

From the matrix S and vector b, the normal system is

\begin{bmatrix} 2 & 0 \\ 0 & 2/3 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 2.3504 \\ 0.7358 \end{bmatrix}

This gives

a_0 = 1.1752, a_1 = 1.1037

so the linear least squares polynomial is p_1(x) = 1.1752 + 1.1037 x.
Relative error at x = 0.5:

\frac{|e^{0.5} - p_1(0.5)|}{|e^{0.5}|} = \frac{|1.6487 - 1.7270|}{1.6487} = 0.0475

Quadratic fitting: n = 2, p_2(x) = a_0 + a_1 x + a_2 x^2. In addition to s_0, s_1, s_2, b_0 and b_1 computed above, we need

s_3 = \int_{-1}^{1} x^3 \, dx = \left[ \frac{x^4}{4} \right]_{-1}^{1} = \frac{1}{4} - \frac{1}{4} = 0

s_4 = \int_{-1}^{1} x^4 \, dx = \left[ \frac{x^5}{5} \right]_{-1}^{1} = \frac{1}{5} - \left( -\frac{1}{5} \right) = \frac{2}{5}

b_2 = \int_{-1}^{1} x^2 e^x \, dx = e - \frac{5}{e} = 0.8789

From the matrix S and vector b, the normal system is

\begin{bmatrix} 2 & 0 & 2/3 \\ 0 & 2/3 & 0 \\ 2/3 & 0 & 2/5 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 2.3504 \\ 0.7358 \\ 0.8789 \end{bmatrix}

This gives

a_0 = 0.9963, a_1 = 1.1037, a_2 = 0.5368

so the quadratic least squares polynomial is p_2(x) = 0.9963 + 1.1037 x + 0.5368 x^2.

Relative error at x = 0.5:

\frac{|e^{0.5} - p_2(0.5)|}{|e^{0.5}|} = \frac{|1.6487 - 1.6824|}{1.6487} = 0.0204
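
The moments s_k and b_i can also be computed numerically. A minimal Python sketch, assuming SciPy is available for the integrals, that reproduces the quadratic coefficients of this example:

import numpy as np
from scipy.integrate import quad

n, lo, hi = 2, -1.0, 1.0   # degree and interval for f(x) = exp(x)

# S[i][j] = integral of x**(i+j); rhs[i] = integral of x**i * exp(x)
S = np.array([[quad(lambda x, p=i + j: x**p, lo, hi)[0]
               for j in range(n + 1)] for i in range(n + 1)])
rhs = np.array([quad(lambda x, p=i: x**p * np.exp(x), lo, hi)[0]
                for i in range(n + 1)])

print(np.linalg.solve(S, rhs))   # about [0.9963, 1.1037, 0.5368]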
Example 5
Construct the continuous least squares polynomial approximation of degree 2 for f(x) = \sin \pi x on [0, 1].
Solution
The normal equations for P_2(x) = a_2 x^2 + a_1 x + a_0 are

a_0 \int_0^1 1 \, dx + a_1 \int_0^1 x \, dx + a_2 \int_0^1 x^2 \, dx = \int_0^1 \sin \pi x \, dx

a_0 \int_0^1 x \, dx + a_1 \int_0^1 x^2 \, dx + a_2 \int_0^1 x^3 \, dx = \int_0^1 x \sin \pi x \, dx

a_0 \int_0^1 x^2 \, dx + a_1 \int_0^1 x^3 \, dx + a_2 \int_0^1 x^4 \, dx = \int_0^1 x^2 \sin \pi x \, dx

Performing the integration gives

a_0 + \frac{1}{2} a_1 + \frac{1}{3} a_2 = \frac{2}{\pi}

\frac{1}{2} a_0 + \frac{1}{3} a_1 + \frac{1}{4} a_2 = \frac{1}{\pi}

\frac{1}{3} a_0 + \frac{1}{4} a_1 + \frac{1}{5} a_2 = \frac{\pi^2 - 4}{\pi^3}

These three equations in three unknowns can be solved to obtain

a_0 = \frac{12 \pi^2 - 120}{\pi^3} \approx -0.050465

and a_1 = -a_2 = \frac{720 - 60 \pi^2}{\pi^3} \approx 4.12251

Consequently, the least squares polynomial approximation of degree 2 for f(x) = \sin \pi x on [0, 1] is

P_2(x) = -4.12251 x^2 + 4.12251 x - 0.050465

2.5 Weighted Least square approximation

So far we have minimized the simple sum of squares of the errors. A more general approach is to minimize a weighted sum of squares of the errors taken over all data points. If this sum is denoted by S, we have

S = \sum_{i=0}^{n} w_i \left[ y_i - f(x_i) \right]^2

where the w_i > 0, i = 0, 1, 2, ..., n, are the weights.

2.5.1 Linear Weighted Least square approximation

Let

y = a_0 + a_1 x

be a straight line to be fitted to the given data points (x_0, y_0), (x_1, y_1), (x_2, y_2), ..., (x_n, y_n). Then

S(a_0, a_1) = \sum_{i=0}^{n} w_i \left[ y_i - (a_0 + a_1 x_i) \right]^2

For S to be minimum, we must have

\frac{\partial S}{\partial a_0} = 0 and \frac{\partial S}{\partial a_1} = 0

Taking derivatives, we obtain

\frac{\partial S}{\partial a_0} = \sum_{i=0}^{n} -2 w_i \left( y_i - a_0 - a_1 x_i \right) = 0

\frac{\partial S}{\partial a_1} = \sum_{i=0}^{n} -2 w_i \left( y_i - a_0 - a_1 x_i \right) x_i = 0

This is a pair of simultaneous linear equations in the unknowns a_0 and a_1. They are called the normal equations and can be written as

\left( \sum_{i=0}^{n} w_i \right) a_0 + \left( \sum_{i=0}^{n} w_i x_i \right) a_1 = \sum_{i=0}^{n} w_i y_i

\left( \sum_{i=0}^{n} w_i x_i \right) a_0 + \left( \sum_{i=0}^{n} w_i x_i^2 \right) a_1 = \sum_{i=0}^{n} w_i x_i y_i

Solving these normal equations for a_0 and a_1 and substituting into y = a_0 + a_1 x, we get the required least squares approximation.
Example 6
Fit a straight line to the following data by the method of weighted least squares, using the given weights w.

x    0    1      2      3      4
y    1    1.8    3.3    4.5    6.3
w    1    1      1      5      5

Solution
Let the straight line be y = a_0 + a_1 x, where a_0 and a_1 are constants to be determined.
The data and the summations needed to compute the best linear fit are:

i      x_i    y_i     w_i    w_i x_i    w_i x_i^2    w_i x_i y_i    w_i y_i
0      0      1       1      0          0            0              1
1      1      1.8     1      1          1            1.8            1.8
2      2      3.3     1      2          4            6.6            3.3
3      3      4.5     5      15         45           67.5           22.5
4      4      6.3     5      20         80           126            31.5
Sum    10     16.9    13     38         130          201.9          60.1

Now, substituting all the values into the normal equations gives

13 a_0 + 38 a_1 = 60.1

38 a_0 + 130 a_1 = 201.9

Solving simultaneously, a_0 = 0.5724 and a_1 = 1.3858. Therefore, the equation of the best-fit line for the given data is

y = 0.5724 + 1.3858 x
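
A minimal Python/NumPy sketch that forms and solves the same weighted normal equations for Example 6:

import numpy as np

# Data and weights from Example 6
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([1, 1.8, 3.3, 4.5, 6.3])
w = np.array([1, 1, 1, 5, 5], dtype=float)

# Weighted normal equations for y = a0 + a1*x
A = np.array([[w.sum(),       (w * x).sum()],
              [(w * x).sum(), (w * x**2).sum()]])
r = np.array([(w * y).sum(), (w * x * y).sum()])

a0, a1 = np.linalg.solve(A, r)
print(a0, a1)   # about 0.5724 and 1.3858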

2.5.2 Weighted Least square for continuous function

An integrable function W(x) is called a weight function on the interval I if W(x) ≥ 0 for all x in I, but W(x) is not identically zero on any subinterval of I. The purpose of a weight function is to assign varying degrees of importance to the approximation on certain portions of the interval.
For functions which are continuous on [a, b] and given explicitly, we have

S = \int_a^b W(x) \left[ f(x) - P_n(x) \right]^2 dx

where P_n(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1} + a_n x^n = \sum_{i=0}^{n} a_i x^i. Hence

S = \int_a^b W(x) \left[ f(x) - \sum_{i=0}^{n} a_i x^i \right]^2 dx = minimum

The necessary condition for a minimum is that

\frac{\partial S}{\partial a_0} = \frac{\partial S}{\partial a_1} = \cdots = \frac{\partial S}{\partial a_n} = 0

This gives a system of n + 1 equations in the n + 1 unknowns a_0, a_1, ..., a_n; these equations are called the normal equations. The normal equations become

\frac{\partial S}{\partial a_j} = -2 \int_a^b W(x) \left[ f(x) - \sum_{i=0}^{n} a_i x^i \right] x^j dx = 0, j = 0, 1, ..., n
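
As an illustration of these weighted normal equations, a minimal Python sketch, assuming SciPy is available and using an assumed weight W(x) = 1 + x with f(x) = e^x and a linear fit on [0, 1] (this choice of W and f is hypothetical, for demonstration only):

import numpy as np
from scipy.integrate import quad

W = lambda x: 1.0 + x      # assumed weight function
n, lo, hi = 1, 0.0, 1.0    # linear fit on [0, 1]

# Weighted moment matrix and right-hand side of the normal equations
S = np.array([[quad(lambda x, p=i + j: W(x) * x**p, lo, hi)[0]
               for j in range(n + 1)] for i in range(n + 1)])
rhs = np.array([quad(lambda x, p=i: W(x) * x**p * np.exp(x), lo, hi)[0]
                for i in range(n + 1)])

print(np.linalg.solve(S, rhs))   # coefficients a0, a1 of the weighted fit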
Chapter 3
Conclusion
This project work focused on least square approximation methods with practical examples. Because of the drawbacks of interpolation when the number of data points is large, the least squares method is an appropriate approach for fitting the data; the linear least squares method was derived and illustrated with a practical example.
Similarly, the non-linear least squares approach was shown to follow the same pattern for the quadratic least squares fit, as well as for the exponential, power, and saturation functions, with their derivations and worked examples.
Finally, the project treated the continuous least squares method and the weighted least squares method, in which the error of the best fitting approximation is minimized by setting its partial derivatives to zero.
References
[1] Levenberg, K., 1944. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2), pp. 164-168.
[2] Kirschvink, J.L., 1980. The least-squares line and plane and the analysis of palaeomagnetic data. Geophysical Journal International, 62(3), pp. 699-718.
[3] Burden, R.L. and Faires, J.D. Numerical Analysis, 10th Edition. Cengage Learning.
[4] Hildebrand, F.B. Introduction to Numerical Analysis. pp. 314-318.
