
APPROXIMATION OF FUNCTIONS

1. FOURIER SERIES

29

Theoretical Discussion:
In mathematics, a Fourier series decomposes periodic functions or periodic signals into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or complex exponentials). The study of Fourier series is a branch of Fourier analysis. The Fourier series is named in honor of Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d'Alembert, and Daniel Bernoulli. Fourier introduced the series for the purpose of solving the heat equation in a metal plate, publishing his initial results in his 1807 Mémoire sur la propagation de la chaleur dans les corps solides (Treatise on the propagation of heat in solid bodies), and publishing his Théorie analytique de la chaleur in 1822. Early ideas of decomposing a periodic function into the sum of simple oscillating functions date back to the 3rd century BC, when ancient astronomers proposed an empirical model of planetary motions based on deferents and epicycles.

The heat equation is a partial differential equation. Prior to Fourier's work, no solution to the heat equation was known in the general case, although particular solutions were known if the heat source behaved in a simple way, in particular, if the heat source was a sine or cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier's idea was to model a complicated heat source as a superposition (or linear combination) of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series.

Sample Problems:
1. A simple Fourier series


[Figure: plot of a periodic identity function, a sawtooth wave]

[Figure: animated plot of the first five successive partial Fourier sums]

We now use the formula above to give a Fourier series expansion of a very simple function. Consider a sawtooth wave

$$f(x) = x \quad \text{for } -\pi < x < \pi, \qquad f(x + 2\pi) = f(x) \quad \text{for } -\infty < x < \infty.$$

In this case, the Fourier coefficients are given by

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x \cos(nx)\,dx = 0, \quad n \ge 0,$$

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x \sin(nx)\,dx = \frac{2(-1)^{n+1}}{n}, \quad n \ge 1.$$

It can be proven that the Fourier series converges to f(x) at every point x where f is differentiable, and therefore:

$$f(x) = 2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\,\sin(nx) \qquad \text{(Eq. 1)}$$


When x = π, the Fourier series converges to 0, which is the half-sum of the left and right limits of f at x = π. This is a particular instance of the Dirichlet theorem for Fourier series.
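To make the convergence claim concrete, here is a minimal numerical sketch (assuming the sawtooth f(x) = x on (−π, π) defined above) that evaluates partial sums of Eq. 1 and measures the error at points away from the jump:

```python
# Sketch: partial sums of the sawtooth Fourier series
# f(x) ~ 2 * sum_{n>=1} (-1)^(n+1) * sin(n x) / n  on (-pi, pi).
import numpy as np

def sawtooth_partial_sum(x, n_terms):
    """Partial Fourier sum of the sawtooth wave f(x) = x on (-pi, pi)."""
    n = np.arange(1, n_terms + 1)
    return 2.0 * np.sum((-1.0) ** (n + 1) * np.sin(np.outer(x, n)) / n, axis=1)

x = np.linspace(-np.pi + 0.1, np.pi - 0.1, 5)   # stay away from the jump
for N in (5, 50, 500):
    err = np.max(np.abs(sawtooth_partial_sum(x, N) - x))
    print(f"N = {N:4d}  max error on sample points = {err:.4f}")
```

The error shrinks as N grows, though slowly near the discontinuity at ±π, where the Gibbs phenomenon appears.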

2. Fourier's motivation

[Figure: heat distribution in a metal plate, using Fourier's method]

One notices that the Fourier series expansion of our function in Example 1 looks much less simple than the formula f(x) = x, and so it is not immediately apparent why one would need this Fourier series. While there are many applications, we cite Fourier's motivation of solving the heat equation. For example, consider a metal plate in the shape of a square whose side measures $\pi$ meters, with coordinates $(x, y) \in [0, \pi] \times [0, \pi]$. If there is no heat source within the plate, and if three of the four sides are held at 0 degrees Celsius, while the fourth side, given by $y = \pi$, is maintained at the temperature gradient $T(x, \pi) = x$ degrees Celsius for $x$ in $(0, \pi)$, then one can show that the stationary heat distribution (or the heat distribution after a long period of time has elapsed) is given by

$$T(x, y) = 2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\,\sin(nx)\,\frac{\sinh(ny)}{\sinh(n\pi)}.$$

Here, sinh is the hyperbolic sine function. This solution of the heat equation is obtained by multiplying each term of Eq. 1 by $\sinh(ny)/\sinh(n\pi)$. While our example function f(x) seems to have a needlessly complicated Fourier series, the heat distribution T(x, y) is nontrivial. The function T cannot be written as a closed-form expression. This method of solving the heat problem was made possible by Fourier's work.
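The series for T(x, y) can be evaluated numerically by truncation; a small sketch, assuming the expression above, with the ratio sinh(ny)/sinh(nπ) rewritten in exponential form so that large n does not overflow:

```python
# Sketch: truncated series for the stationary heat distribution
# T(x, y) = 2 * sum_{n>=1} (-1)^(n+1)/n * sin(n x) * sinh(n y)/sinh(n pi).
import numpy as np

def T(x, y, n_terms=50):
    total = 0.0
    for n in range(1, n_terms + 1):
        # sinh(n*y)/sinh(n*pi) = exp(n*(y-pi)) * (1-exp(-2ny)) / (1-exp(-2n*pi)),
        # a form that avoids overflowing sinh for large n.
        ratio = (np.exp(n * (y - np.pi)) * (1 - np.exp(-2 * n * y))
                 / (1 - np.exp(-2 * n * np.pi)))
        total += 2.0 * (-1.0) ** (n + 1) / n * np.sin(n * x) * ratio
    return total

# On the heated edge y = pi the series should reproduce T(x, pi) = x
# (convergence is slow there, as for the sawtooth itself):
print(T(1.0, np.pi, 200))   # approximately 1.0
```

In the interior (y < π) the exponential factor makes the series converge very quickly, which is why the truncated sum is practical.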

Applications:
The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, thin-walled shell theory, etc. Many other Fourier-related transforms have since been defined, extending the initial idea to other applications. This general area of inquiry is now sometimes called harmonic analysis. A Fourier series, however, can be used only for periodic functions, or for functions on a bounded (compact) interval. In engineering applications, the Fourier series is generally presumed to converge everywhere except at discontinuities, since the functions encountered in engineering are better behaved than the ones that mathematicians can provide as counter-examples to this presumption. In particular, the Fourier series converges absolutely and uniformly to f(x) whenever the derivative of f(x) (which may not exist everywhere) is square integrable.

2. CHEBYSHEV APPROXIMATION


Theoretical Discussion:
In mathematics the Chebyshev polynomials, named after Pafnuty Chebyshev, are a sequence of orthogonal polynomials which are related to de Moivre's formula and which can be defined recursively. One usually distinguishes between Chebyshev polynomials of the first kind, which are denoted Tn, and Chebyshev polynomials of the second kind, which are denoted Un. The letter T is used because of the alternative transliterations of the name Chebyshev as Tchebycheff (French) or Tschebyschow (German). The Chebyshev polynomials Tn or Un are polynomials of degree n, and the sequence of Chebyshev polynomials of either kind composes a polynomial sequence. Chebyshev polynomials are important in approximation theory because the roots of the Chebyshev polynomials of the first kind, which are also called Chebyshev nodes, are used as nodes in polynomial interpolation. The resulting interpolation polynomial minimizes the problem of Runge's phenomenon and provides an approximation that is close to the polynomial of best approximation to a continuous function under the maximum norm. This approximation leads directly to the method of Clenshaw–Curtis quadrature. In the study of differential equations they arise as the solution to the Chebyshev differential equations

$$(1 - x^2)\,y'' - x\,y' + n^2\,y = 0$$

and

$$(1 - x^2)\,y'' - 3x\,y' + n(n + 2)\,y = 0$$

for the polynomials of the first and second kind, respectively. These equations are special cases of the Sturm–Liouville differential equation.
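The recursive definition mentioned above is T0(x) = 1, T1(x) = x, and T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x) (the U_n satisfy the same recurrence with U1(x) = 2x). A minimal sketch evaluating T_n by this recurrence and checking it against the identity T_n(cos θ) = cos(nθ):

```python
# Sketch: Chebyshev polynomials of the first kind via the recurrence
# T0 = 1, T1 = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x).
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) by the three-term recurrence."""
    if n == 0:
        return 1.0
    t_prev, t_curr = 1.0, x
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

# On [-1, 1], T_n(cos(theta)) = cos(n*theta); a quick check:
theta = 0.7
print(chebyshev_T(5, math.cos(theta)), math.cos(5 * theta))  # should agree
```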

Sample Problems:
The following numerical examples demonstrate the solution procedure. The data used in this example are from a stratified random sample survey conducted in Varanasi district of Uttar Pradesh (U.P.), India, to study the distribution of manurial resources among different crops and cultural practices. Relevant data with respect to the two characteristics, area under rice and total cultivated area, are given in Table 1. The total number of villages in the district was 4190. In order to demonstrate the procedure, the following are also assumed. The per-unit travel costs ci (i = 1, …, 4) of measurement in the various strata are independently normally distributed with the following means and variances: E(c1) = 3, E(c2) = 4, E(c3) = 5, E(c4) = 7 and V(c1) = 0.6, V(c2) = 0.5, V(c3) = 0.7, V(c4) = 0.8. Let us assign the weights to the variances of the two characters in proportion to the inverse of the sums
$$\sum_{i=1}^{4} S_{i1} \qquad \text{and} \qquad \sum_{i=1}^{4} S_{i2},$$


which turn out to be a1 = 0.75 and a2 = 0.25.

The total amount available for the survey is assumed to be 600 units, including an expected overhead cost t0 = 100 units.


The values of the sample sizes, rounded to the nearest integer, are n1 = 46, n2 = 8, n3 = 9 and n4 = 26, with a total of 89. Corresponding to this allocation, the values of the variances for the two characters are obtained as V1 = 159.05 and V2 = 478.32.

The integer solution is obtained as n1 = 44, n2 = 9, n3 = 10 and n4 = 29, with a total of 92. The values of the individual variances corresponding to this allocation are obtained as V1 = 144.36 and V2 = 476.90.

Applications:
For a given frequency band, the frequency points corresponding to the Chebyshev nodes are found by transformation of coordinates, and the surface electric currents at these points are computed by the method of moments. The surface current is represented by a polynomial function via the Chebyshev approximation, and the electric current distribution can be obtained at any frequency point within the given frequency band.
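A sketch of this procedure in Python, using numpy's Chebyshev utilities. The function response() below is a hypothetical stand-in for the expensive method-of-moments current computation, and the band limits are made up for illustration:

```python
# Sketch: sample an expensive function at Chebyshev nodes, fit a
# Chebyshev series, then evaluate cheaply anywhere in the band.
import numpy as np
from numpy.polynomial import chebyshev as C

f_lo, f_hi = 1.0e9, 2.0e9             # hypothetical frequency band (Hz)

def response(f):                       # placeholder for the MoM solver
    return np.sin(f / 2.0e8) / (1.0 + (f / 1.0e9) ** 2)

n = 16
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # roots of T_n
freqs = 0.5 * (f_hi - f_lo) * nodes + 0.5 * (f_hi + f_lo)        # map to band
coeffs = C.chebfit(nodes, response(freqs), deg=n - 1)

f_test = 1.37e9
t = (2 * f_test - f_lo - f_hi) / (f_hi - f_lo)     # map back to [-1, 1]
print(C.chebval(t, coeffs), response(f_test))      # should agree closely
```

Because the degree n−1 fit through n Chebyshev nodes is an interpolant, the expensive solver is called only n times, after which the current can be evaluated at any frequency in the band from the polynomial.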

3. APPROXIMATION WITH LEGENDRE POLYNOMIALS


Theoretical Discussion:
With interpolation we were given a formula or data about a function f(x), and we made a model p(x) that passed through a given set of data points. We now consider approximation, in which we are still trying to build a model, but specify some condition of ``closeness'' that the model must satisfy. It is still likely that p(x) will be equal to the function f(x) at some points, but we will not know in advance which points. As with interpolation, we build this model out of a set of basis functions. The model is then a recipe: a set of coefficients that specify how much of each basis function to use when building the model. In this lab we will consider four different selections of basis functions in the space L2[-1,1]. The first is the usual monomials 1, x, x², and so on; in this case, the coefficients are exactly the coefficients Matlab uses to specify a polynomial. The second is the set of Legendre polynomials, which will yield the same approximations but will turn out to have better numerical behavior. The third selection is the trigonometric functions, and the final selection is a set of piecewise constant functions. Once we have our basis set, we will consider how we can determine the approximating function p(x) as the ``best possible'' approximation for the given basis functions, and we will look at the behavior of the approximation error. Since we are working in L2[-1,1], we will use the L2[-1,1] norm to measure error. This kind of approximation requires the evaluation of integrals. In most applications this evaluation is done using numerical integration, but we have not yet discussed numerical integration in general. On the other hand, you have seen the trapezoid rule for numerical integration in lecture, and we will be using a specialized application of this method.

It turns out that approximation by monomials results in a matrix similar to the Hilbert matrix, whose inversion can be quite inaccurate, even for small sizes. This inaccuracy translates into poor approximations. Use of orthogonal polynomials such as the Legendre polynomials results in a diagonal matrix that can be inverted almost without error, but the right side is inaccurate because of roundoff errors; the approximation is improved but still not of high quality. Fourier approximation substantially reduces the roundoff errors, but is slow to compute and evaluate and is still subject to error for higher terms. Approximation by piecewise constants is not subject to error until ridiculously large numbers of pieces are employed.
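The conditioning contrast described above is easy to observe numerically. The following sketch compares the Gram matrices of the two bases on [-1, 1]; for monomials the entries are Hilbert-like, while for Legendre polynomials the matrix is diagonal:

```python
# Sketch: Gram matrices of monomials vs. Legendre polynomials on [-1, 1].
# <x^i, x^j> = 2/(i+j+1) if i+j is even, else 0 (Hilbert-like behavior).
import numpy as np

n = 10
G_mono = np.array([[2.0 / (i + j + 1) if (i + j) % 2 == 0 else 0.0
                    for j in range(n)] for i in range(n)])
# Legendre polynomials are orthogonal: <P_i, P_j> = 2/(2i+1) * delta_ij.
G_leg = np.diag([2.0 / (2 * i + 1) for i in range(n)])

print(np.linalg.cond(G_mono))  # very large: inversion loses many digits
print(np.linalg.cond(G_leg))   # 19.0: just the ratio of diagonal entries
```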

Sample Problems:
1. We will use Legendre polynomials to approximate f(x) = cos x on [−π/2, π/2] by a quadratic polynomial. First, we note that the first three Legendre polynomials, which are the ones of degree 0, 1 and 2, are

$$P_0(x) = 1, \qquad P_1(x) = x, \qquad P_2(x) = \tfrac{1}{2}(3x^2 - 1).$$

However, it is not practical to use these polynomials directly to approximate f(x), because they are orthogonal with respect to the inner product defined on the interval [−1, 1], and we wish to approximate f(x) on [−π/2, π/2]. To obtain orthogonal polynomials on [−π/2, π/2], we replace x by 2t/π, where t belongs to [−π/2, π/2], in the Legendre polynomials, which yields

$$\tilde{P}_0(t) = 1, \qquad \tilde{P}_1(t) = \frac{2t}{\pi}, \qquad \tilde{P}_2(t) = \frac{6t^2}{\pi^2} - \frac{1}{2}.$$

Then, we can express our quadratic approximation f2(t) of f(t) by the linear combination

$$f_2(t) = \sum_{j=0}^{2} c_j\,\tilde{P}_j(t), \qquad c_j = \frac{\langle f, \tilde{P}_j \rangle}{\langle \tilde{P}_j, \tilde{P}_j \rangle} = \frac{\int_{-\pi/2}^{\pi/2} f(t)\,\tilde{P}_j(t)\,dt}{\int_{-\pi/2}^{\pi/2} \tilde{P}_j(t)^2\,dt}.$$
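A numerical sketch of this construction, computing the coefficients c_j by quadrature (scipy's quad supplies the integrals):

```python
# Sketch: quadratic least-squares approximation of cos(x) on
# [-pi/2, pi/2] using the rescaled Legendre polynomials above.
import numpy as np
from scipy.integrate import quad

a = np.pi / 2
P = [lambda t: 1.0 + 0.0 * t,                 # P~_0
     lambda t: 2.0 * t / np.pi,               # P~_1
     lambda t: 6.0 * t**2 / np.pi**2 - 0.5]   # P~_2

coeffs = []
for p in P:
    num, _ = quad(lambda t: np.cos(t) * p(t), -a, a)
    den, _ = quad(lambda t: p(t) * p(t), -a, a)
    coeffs.append(num / den)

f2 = lambda t: sum(c * p(t) for c, p in zip(coeffs, P))
t = 0.3
print(f2(t), np.cos(t))   # close agreement across the interval
```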


2. We will develop them using the Gram–Schmidt process. First we need a vector space with an appropriate dot product. Consider the space of continuous functions on the interval [−1, 1]. The dot product of f and g is the integral of f·g over the interval. (If you like complex numbers, use f times the conjugate of g; we'll stick with the reals for now.) Let the powers of x act as a basis for the polynomial functions, then renormalize. The first independent function is the constant function 1; there's nothing to change here. The next independent function is x. The dot product of 1 and x is 0; that is, the integral of 1·x from −1 to 1 is 0. The functions are already orthogonal; there is nothing to do. The next function is x², which is already orthogonal to x. Subtract (1·x²/1·1) times 1, giving x² − 1/3. The next function is x³, which is already orthogonal to the even-degree Legendre polynomials. Subtract (x·x³/x·x) times x to get x³ − (3/5)x. The next function, derived from x⁴, becomes x⁴ − (6/7)x² + 3/35. Continue this process, writing as many Legendre polynomials as you need.

Given a continuous function f on a closed interval, scale the domain so that the interval becomes [−1, 1]. Take the dot product of f with each of the first n+1 Legendre polynomials. Sometimes this integral can be computed directly; sometimes it is approximated by a computer. The result is the coefficient for that particular Legendre polynomial. Add these polynomials together, so weighted, to obtain a good approximation of f.

If g is the approximating polynomial of degree n, as computed above, let h = f − g, the error term. Let p be one of the first n+1 Legendre polynomials, and consider h·p. Remember that g is a linear combination of Legendre polynomials, all orthogonal; all but one of them drop out, hence g·p is merely p·p times its coefficient. And that coefficient was computed from f·p, hence h·p = 0. The error term is orthogonal to the first n+1 Legendre polynomials, hence it is orthogonal to all polynomials of degree n or less. Suppose we didn't really pick the best polynomial of degree n, and let g + e be an improvement; in other words, the integral of (h − e)² is smaller than the integral of h². Remember that the integral of a function squared is just that function dotted with itself. Expand (h − e)·(h − e), remembering that h is orthogonal to every polynomial of degree n or less, hence h is orthogonal to e. The result is h·h + e·e, which is larger than h·h. Therefore g is the best nth degree polynomial approximation to f.
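The Gram–Schmidt construction in this problem can be reproduced symbolically; a minimal sketch using sympy for the exact integrals on [−1, 1]:

```python
# Sketch: build monic Legendre polynomials by Gram-Schmidt,
# using sympy for exact integration over [-1, 1].
import sympy as sp

x = sp.symbols('x')

def dot(f, g):
    # Inner product: integral of f*g over [-1, 1].
    return sp.integrate(f * g, (x, -1, 1))

def legendre_monic(n):
    """Return the first n+1 monic Legendre polynomials."""
    basis = []
    for k in range(n + 1):
        p = x**k
        for q in basis:
            p -= dot(q, x**k) / dot(q, q) * q   # subtract projections
        basis.append(sp.expand(p))
    return basis

print(legendre_monic(4))
# [1, x, x**2 - 1/3, x**3 - 3*x/5, x**4 - 6*x**2/7 + 3/35]
```

The output reproduces exactly the polynomials derived by hand above.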

Applications:
An application of Legendre polynomials is the elimination of baseline variation in biological FT-IR spectra. A method for eliminating baseline variation has been proposed for use on biological FT-IR spectra, motivated by the fact that baselines are often assumed to be low-degree polynomials. Spectra are expanded in terms of a set of orthonormal polynomials derived from the Legendre polynomials, and the leading terms of the expansion, which contain most of the baseline variation, are removed. An application of this method to protein spectra has been presented.
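A sketch of that baseline-removal idea, with a synthetic spectrum standing in for real FT-IR data; the peak shape, drift polynomial, and degree cutoff below are all made up for illustration, and numpy's legfit (a discrete least-squares fit) stands in for the orthonormal expansion described:

```python
# Sketch: remove a slowly-varying baseline by subtracting the
# leading Legendre components of the signal.
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1, 1, 500)                    # rescaled wavenumber axis
signal = np.exp(-((x - 0.2) / 0.05) ** 2)      # synthetic narrow "peak"
baseline = 0.5 + 0.8 * x + 0.3 * x**2          # synthetic low-degree drift
spectrum = signal + baseline

deg = 3                                        # remove terms up to degree 3
coeffs = L.legfit(x, spectrum, deg)
corrected = spectrum - L.legval(x, coeffs)

# Residual baseline is small compared with the peak height of 1,
# since a narrow peak has little low-degree content.
print(np.abs(corrected - signal).max())
```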

4. LEAST SQUARES APPROXIMATION OF FUNCTIONS


Theoretical Discussion:
The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. Least squares problems fall into two categories, linear (or ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. A closed-form solution (or closed-form expression) is any formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, so the core calculation is similar in both cases. The least-squares method was first described by Carl Friedrich Gauss around 1794. Least squares corresponds to the maximum-likelihood criterion if the experimental errors have a normal distribution, and can also be derived as a method-of-moments estimator.

Sample Problems:
1. By using the least squares approximation, fit a straight line to the data given below:

x:   1    2    3    4    5    6
y:   120  90   60   70   35   11

Solution: In order to fit a straight line to the set of data above, we assume an equation of the form y = a0 + a1x. The graph of the set of data is given in Figure 1.


By inspection, a straight line may be fitted to this set of data as the line of best fit, since most of the points will lie on the fitted line or close to it. (Some may want to fit a curve to this instead, but the accuracy of the fitted curve is a matter for consideration.) Now, from the straight line equation above we have to determine two unknowns, a0 and a1; the normal equations necessary to determine these unknowns can be obtained as

$$\sum y_i = n\,a_0 + a_1 \sum x_i,$$

$$\sum x_i y_i = a_0 \sum x_i + a_1 \sum x_i^2.$$

Hence we shall need to construct columns for the values of xy and x² in addition to the x and y values already given. The table below shows the necessary columns:



Table 1

x:    1    2    3    4    5    6     Σx  = 21
y:    120  90   60   70   35   11    Σy  = 386
x²:   1    4    9    16   25   36    Σx² = 91
xy:   120  180  180  280  175  66    Σxy = 1001

Substituting these sums into the normal equations gives

386 = 6a0 + 21a1
1001 = 21a0 + 91a1

Solving these two equations, we obtain a0 = 134.33 and a1 = −20. Therefore the straight line fitted to the given data is y = 134.33 − 20x. The graph of the approximating function is shown below.
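The arithmetic above can be verified with a few lines of Python, either by solving the normal equations directly or with numpy's built-in least-squares polynomial fit:

```python
# Sketch: verify the normal-equation solution for the line fit.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([120, 90, 60, 70, 35, 11], dtype=float)

# Solve the normal equations directly...
A = np.array([[len(x), x.sum()], [x.sum(), (x * x).sum()]])
b = np.array([y.sum(), (x * y).sum()])
a0, a1 = np.linalg.solve(A, b)
print(a0, a1)                 # 134.33..., -20.0

# ...or equivalently with the built-in least-squares fit:
print(np.polyfit(x, y, 1))    # [-20.0, 134.33...], highest degree first
```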

2. Let f(x) = e^x and let p(x) = α0 + α1x, with α0, α1 unknown. Approximate f(x) over [−1, 1]. Choose α0, α1 to minimize

$$g(\alpha_0, \alpha_1) = \int_{-1}^{1} \left[ e^x - \alpha_0 - \alpha_1 x \right]^2 dx.$$

Integrating,

$$g(\alpha_0, \alpha_1) = c_1 + c_2\alpha_0 + c_3\alpha_1 + c_4\alpha_0^2 + c_5\alpha_0\alpha_1 + c_6\alpha_1^2$$

with constants {c1, . . . , c6}, e.g.

$$c_1 = \int_{-1}^{1} e^{2x}\,dx = \frac{e^2 - e^{-2}}{2}.$$

g is a quadratic polynomial in the two variables α0, α1. To find its minimum, solve the system

$$\frac{\partial g}{\partial \alpha_0} = 0, \qquad \frac{\partial g}{\partial \alpha_1} = 0;$$

differentiate, obtaining

$$\int_{-1}^{1} 2\left[ e^x - \alpha_0 - \alpha_1 x \right](-1)\,dx = 0, \qquad \int_{-1}^{1} 2\left[ e^x - \alpha_0 - \alpha_1 x \right](-x)\,dx = 0.$$

This simplifies to

$$2\alpha_0 = \int_{-1}^{1} e^x\,dx, \qquad \frac{2}{3}\alpha_1 = \int_{-1}^{1} x\,e^x\,dx,$$

so that α0 = (e − e⁻¹)/2 ≈ 1.1752 and α1 = 3e⁻¹ ≈ 1.1036.
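As a check on the reconstruction above, the following sketch evaluates g numerically at the claimed minimizer and at nearby points:

```python
# Sketch: check the minimizing coefficients for approximating e^x
# by a0 + a1*x on [-1, 1] in the least-squares sense.
import numpy as np
from scipy.integrate import quad

a0 = np.sinh(1.0)        # (e - 1/e)/2 ~ 1.1752
a1 = 3.0 / np.e          # ~ 1.1036

def g(c0, c1):
    val, _ = quad(lambda x: (np.exp(x) - c0 - c1 * x) ** 2, -1.0, 1.0)
    return val

print(g(a0, a1))                            # ~ 0.0527, the minimum
print(g(a0 + 0.01, a1), g(a0, a1 + 0.01))   # both slightly larger
```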

Applications:
The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. When the problem has substantial uncertainties in the independent variable (the 'x' variable), then simple regression and least squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares.

