
Numeric Solution to Blasius Equation

The boundary layer equations for the incompressible viscous flow over a flat plate with zero pressure gradient are:

∂u/∂x + ∂v/∂y = 0

u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y²

If we assume similarity between boundary layers and introduce the stream function, these two partial differential equations (PDEs) reduce to a single 3rd-order ordinary differential equation (ODE) as shown below.

2f''' + f f'' = 0

Blasius Equation

This equation can be solved for the function f(η), which is nothing more than a modified stream function whose first derivative is the ratio of local horizontal velocity to freestream velocity: u/V = f'(η). Further, the new independent variable η in this equation is a scaled distance from the wall given by:

η = y √(V/(νx))

Since this is a 3rd-order ODE, we need 3 boundary conditions to solve it. The boundary conditions for this case are that both components of the velocity are zero at the wall due to no slip, and that the horizontal velocity approaches the constant freestream velocity at some distance away from the plate.

f(η = 0) = 0

f'(η = 0) = 0

f'(η → ∞) = 1

Unfortunately, since Blasius's equation is non-linear (it includes the product of the dependent function with one of its derivatives), there is no known analytic solution. Instead, it is common to solve this type of problem numerically through a marching method: i.e., start at an initial point, say η = 0, with initial conditions and march in space to a desired value of η. Since we only have 2 initial conditions and one condition at the other boundary, η → ∞, we will have to supply an arbitrary 3rd initial condition on f'', say f''(0) = 0.2. If, after marching the solution to a suitably large value of η, the other boundary condition f'(η → ∞) = 1 is not satisfied, we correct our initial guess for f''(0) and keep iterating until the far boundary condition is satisfied. We begin our numerical solution by realizing that in addition to Blasius's equation, we also have 3 auxiliary equations, namely:

f' = df/dη

f'' = df'/dη

f''' = df''/dη

Thus, if we know the values of f(η) and its derivatives at an initial point, say i, and want to march our solution to the next point, say i+1, then our 4 equations (3 auxiliary and Blasius) can be written as:

f(i+1) = f(i) + Δη f'(i)
f'(i+1) = f'(i) + Δη f''(i)
f''(i+1) = f''(i) + Δη f'''(i)
f'''(i+1) = -f(i+1) f''(i+1) / 2

While this approach would work, this numerical approximation is known to be 1st order: i.e., the error between the exact solution and the numeric solution varies with the 1st power of the marching step size, Δη. Thus, if you halve the step size, you halve the numeric error. If you decrease the step size far enough, you can decrease the error as much as desired, but each halving doubles the amount of calculation necessary to reach the same final value of η. Normally, we try to use numeric methods which are at least 2nd order, where errors decrease with the square of the step size; this allows a decrease in error by a factor of 4 while only doubling the amount of work. Higher-order methods, like the well-known Runge-Kutta method, are more accurate still but are also harder to program. The reason the above method is 1st order is that we are using the initial derivatives of f(η) to determine how much the values change over the step interval. However, the derivatives themselves change over that interval. An improvement would be to use the average of the initial and final derivatives over the step. But of course, we don't know the final values; those are what we are trying to find in the first place! To resolve this dilemma, a predictor-corrector numerical approach is utilized. We use the 1st-order method above to predict preliminary values of the function and its derivatives at the end of the interval. Then we correct this estimate by using a 2nd-order method, using the average of the initial and preliminary final derivatives to get a better estimate of the final values. The equations for this method are:
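As an aside, the 1st-order march above can be sketched in Python (purely illustrative, alongside the Excel and Matlab examples later in these notes; the wall guess f''(0) = 0.3318 and the step size are assumed values for the sketch, not part of the method):

```python
# 1st-order (Euler) march of the Blasius system -- illustrative sketch.
# The wall value fpp0 = 0.3318 is an assumed shooting guess.

def euler_march(etamax=10.0, deta=0.1, fpp0=0.3318):
    f, fp, fpp = 0.0, 0.0, fpp0        # boundary/initial conditions at eta = 0
    fppp = -f * fpp / 2                # Blasius equation: f''' = -f f'' / 2
    history = [(f, fp, fpp)]
    for _ in range(int(round(etamax / deta))):
        # advance each quantity using its derivative at the start of the step
        f, fp, fpp = f + deta * fp, fp + deta * fpp, fpp + deta * fppp
        fppp = -f * fpp / 2            # re-evaluate f''' at the new point
        history.append((f, fp, fpp))
    return history

hist = euler_march()
# after one step: f'(0.1) = 0 + 0.1 * 0.3318 = 0.03318
```

With this guess for f''(0), the marched f' levels off near 1 at large η, which is exactly the behavior the shooting iteration exploits.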
Predictor (starred quantities are the preliminary values):

f*(i+1) = f(i) + Δη f'(i)
f'*(i+1) = f'(i) + Δη f''(i)
f''*(i+1) = f''(i) + Δη f'''(i)
f'''*(i+1) = -f*(i+1) f''*(i+1) / 2

Corrector:

f(i+1) = f(i) + Δη (f'(i) + f'*(i+1)) / 2
f'(i+1) = f'(i) + Δη (f''(i) + f''*(i+1)) / 2
f''(i+1) = f''(i) + Δη (f'''(i) + f'''*(i+1)) / 2
f'''(i+1) = -f(i+1) f''(i+1) / 2

Note that the corrector step could be repeated multiple times to get a true 2nd-order solution: i.e., repeat the corrector until the newly calculated derivatives agree with the previous values. However, most numerical approaches of this type settle for near-2nd-order accuracy by doing only one corrector step.
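The predictor-corrector march can be sketched as follows (an illustrative Python version, not the document's own Excel/Matlab implementations; the wall guess f''(0) = 0.3318 is again assumed):

```python
# Predictor-corrector (Heun-type) march of the Blasius system.
# Illustrative sketch; fpp0 is an assumed shooting guess.

def pc_march(etamax=10.0, deta=0.1, fpp0=0.3318):
    f, fp, fpp = 0.0, 0.0, fpp0
    fppp = -f * fpp / 2
    out = [(f, fp, fpp)]
    for _ in range(int(round(etamax / deta))):
        # predictor: 1st-order step to preliminary (starred) values
        fs   = f   + deta * fp
        fps  = fp  + deta * fpp
        fpps = fpp + deta * fppp
        fppps = -fs * fpps / 2
        # corrector: average the initial and predicted derivatives
        f   = f   + deta * (fp   + fps)   / 2
        fp  = fp  + deta * (fpp  + fpps)  / 2
        fpp = fpp + deta * (fppp + fppps) / 2
        fppp = -f * fpp / 2
        out.append((f, fp, fpp))
    return out

sol = pc_march()
# first step: f(0.1) = 0.1*(0 + 0.03318)/2 = 0.001659, f'(0.1) = 0.03318
```

Note how the corrector lines implement the "average of initial and preliminary final derivatives" idea directly.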

Programming Examples
Microsoft Excel: To build a spreadsheet to solve the Blasius ODE using the predictor-corrector scheme, I used one set of columns to contain my initial conditions and corrector results and another set to contain my predictor results. In my spreadsheet, I use the current values in columns B–E to predict final values, which are calculated in columns G–J. Then the corrector step uses the current values and the predictor values to get new values at the next marching step. The spreadsheet with numbers is shown below.
(step size Δη = 0.1 stored in cell D1; convergence measure |1 − f'(max)| = 0.00000 in cell H1)

η      Initial/Corrector f   f'        f''       Predictor f'
0.0    0.00000               0.00000   0.33180   0.03318
0.1    0.00166               0.03318   0.33180   0.06636
0.2    0.00664               0.06636   0.33174   0.09953
0.3    0.01493               0.09953   0.33158   0.13268

The same spreadsheet with the equations shown is given below:

Cell D1 holds the step size (= 0.1) and cell H1 holds the error measure: =ABS(1-C100).

Predictor columns (G–J; row 5 formulas shown):

  f:     =B5+$D$1*C5
  f':    =C5+$D$1*D5
  f'':   =D5+$D$1*E5
  f''':  =-G5*I5/2

Initial/Corrector columns (A–E; row 5 holds the initial values η = 0.00, f = 0.00000, f' = 0.00000; row 6 formulas shown):

  η:     =A5+$D$1
  f:     =B5+$D$1*(C5+H5)/2
  f':    =C5+$D$1*(D5+I5)/2

with the f'' and f''' columns following the same corrector pattern using the predictor columns.

The value stored in cell H1 is the absolute value of 1 − f' in the last row of my solution. I use this along with the built-in Excel iterative solver to drive my initial guess of f''(0) to a value such that |1 − f'| is a minimum and hopefully zero.

Matlab: In Matlab, you have the option of either programming the entire predictor-corrector method yourself or using one of the built-in solver functions.
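The same drive-the-guess idea works with any root-finding scheme, not just the Excel solver. As an illustrative sketch in Python, a simple bisection on f''(0) (the bracket [0.1, 1.0] and the tolerance are assumed values, not taken from these notes):

```python
# Shooting by bisection: adjust the wall guess f''(0) until the far-field
# condition f'(etamax) = 1 is satisfied. Bracket and tolerance are assumed.

def march_fp_end(fpp0, etamax=10.0, deta=0.05):
    # predictor-corrector march; returns f'(etamax)
    f, fp, fpp = 0.0, 0.0, fpp0
    fppp = -f * fpp / 2
    for _ in range(int(round(etamax / deta))):
        fs, fps, fpps = f + deta*fp, fp + deta*fpp, fpp + deta*fppp
        fppps = -fs * fpps / 2
        f   = f   + deta * (fp   + fps)   / 2
        fp  = fp  + deta * (fpp  + fpps)  / 2
        fpp = fpp + deta * (fppp + fppps) / 2
        fppp = -f * fpp / 2
    return fp

def shoot(lo=0.1, hi=1.0, tol=1e-6):
    # f'(etamax) grows monotonically with the wall guess, so bisection works
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if march_fp_end(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(shoot())  # converges near 0.332, the classical Blasius wall value
```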
function [eta,f,fp,fpp,fppp] = Blasius(etamax,steps,fppwall)
% calculates solution to Blasius equation
% inputs:
%   etamax  = maximum eta for calculation
%   steps   = number of steps between 0 and etamax
%   fppwall = initial (wall) value of 2nd derivative of Blasius function
% outputs:
%   eta - the similarity coordinate normal to the wall
%   f, fp, fpp, fppp - the Blasius function and its first 3 derivatives

deta = etamax/(steps-1);
eta = zeros(steps,1);
f = zeros(steps,1);
fp = zeros(steps,1);
fpp = zeros(steps,1);
fppp = zeros(steps,1);

% initial guess for fpp
fpp(1) = fppwall;

for i=1:steps-1
    eta(i+1) = eta(i) + deta;
    % predictor
    fbar = f(i) + deta * fp(i);
    fpbar = fp(i) + deta * fpp(i);
    fppbar = fpp(i) + deta * fppp(i);
    fpppbar = -fbar*fppbar/2;
    % corrector
    f(i+1) = f(i) + deta * (fp(i) + fpbar)/2;
    fp(i+1) = fp(i) + deta * (fpp(i) + fppbar)/2;
    fpp(i+1) = fpp(i) + deta * (fppp(i) + fpppbar)/2;
    fppp(i+1) = -f(i+1)*fpp(i+1)/2;
end

The built-in ODE solver functions in Matlab are very useful but may be confusing to the novice user. The procedure is to re-write our function and equations in terms of a set of dependent functions rather than as one function and its set of derivatives. So, rather than having f, f', f'', and f''' we have f1, f2, and f3 related by:

f1' = f2
f2' = f3
f3' = -f1 f3 / 2

Thus, the Matlab function would look like:
function df = BlasiusFunc(eta,f)
% f() is the array of f and its derivatives: f = f(1), f' = f(2), f'' = f(3)
df = zeros(3,1);   % df() is the array of the derivatives of f()
df(1) = f(2);
df(2) = f(3);
df(3) = -f(1)*f(3)/2;

The ODE solver recommended by Matlab for most problems is named ode45. To run this solver, first create a vector containing the initial values for f1, f2, and f3, then call the solver and plot the resulting velocity profile:
>> f0 = [0,0,0.3318];
>> [eta,f] = ode45(@BlasiusFunc,[0,10],f0);
>> plot(eta,f(:,2))
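The same first-order-system form can be handed to any standard integrator outside Matlab. As an illustrative analogue in Python, here is a fixed-step classical Runge-Kutta (RK4) integration of the f1, f2, f3 system (ode45 itself is an adaptive Runge-Kutta method, so this sketch is not equivalent in detail):

```python
# Fixed-step RK4 integration of the Blasius first-order system
# f1' = f2, f2' = f3, f3' = -f1*f3/2 -- an illustrative stand-in for
# Matlab's adaptive ode45. The wall guess 0.3318 is assumed.

def blasius_rhs(y):
    f1, f2, f3 = y
    return (f2, f3, -f1 * f3 / 2)

def rk4(y0, etamax=10.0, h=0.1):
    y = list(y0)
    for _ in range(int(round(etamax / h))):
        k1 = blasius_rhs(y)
        k2 = blasius_rhs([y[j] + h/2 * k1[j] for j in range(3)])
        k3 = blasius_rhs([y[j] + h/2 * k2[j] for j in range(3)])
        k4 = blasius_rhs([y[j] + h * k3[j] for j in range(3)])
        y = [y[j] + h/6 * (k1[j] + 2*k2[j] + 2*k3[j] + k4[j]) for j in range(3)]
    return y

y_end = rk4([0.0, 0.0, 0.3318])
# y_end[1] is f'(10), which lands very close to 1 for this wall guess
```

The 4th-order accuracy of RK4 means the far-field value of f' here is limited by the quality of the wall guess, not by the step size.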