
LARGE EDDY SIMULATION OF COMPRESSIBLE FLOWS USING UNSTRUCTURED GRIDS

Doyle Knighty , Gang Zhouz, Nora Okong'ox and Vijay Shuklaz Department of Mechanical and Aerospace Engineering Rutgers University - The State University of New Jersey PO Box 909 Piscataway NJ 08855-0909
Abstract

An algorithm for compressible Large Eddy Simulation (LES) using unstructured tetrahedral grids is presented. The subgrid scale stresses are represented by two approaches: the Monotone Integrated Large Eddy Simulation (MILES) technique of Boris, Oran and Grinstein, whereby the energy transfer from the resolved scales to the subgrid scales is modeled using the inherent dissipation of the numerical algorithm, and a hybrid technique combining MILES with a Smagorinsky eddy viscosity model for the subgrid scale stresses. The algorithm is efficiently parallelized using MPI. Results are presented for the simulation of the decay of incompressible isotropic turbulence. Good agreement is obtained with the experimental data of Comte-Bellot and Corrsin for the decay of turbulence energy, and reasonable agreement with the temporal evolution of the energy spectrum.
1 Introduction
Current Reynolds-averaged Navier-Stokes (RANS) simulation methodology for high speed turbulent flows lacks important capabilities in several areas, as documented in two recent extensive reviews [1, 2]. Specifically, RANS methods have not demonstrated the capability for accurate prediction of mean surface heat transfer in shock wave-turbulent boundary layer interactions, nor do RANS methods provide predictions of rms surface heat transfer and pressure in these flows. Large Eddy Simulation (LES) is an alternative to RANS which may be capable of improved prediction of the quantities described above. LES has been demonstrated to be both a useful research tool for understanding the physics of turbulence and an accurate predictive method for flows of engineering interest. There are two different approaches to modeling the subgrid stresses and heat flux. In the first approach, a specific model of the subgrid-scale stresses (and heat flux) is proposed (see, for example, Galperin and Orszag [3], Mason [4], Lesieur and Metais [5] and Moin [6]). In the second approach, the inherent dissipation of the numerical method is relied upon to provide the correct energy transfer between the resolved and subgrid scales. The Monotone Integrated Large Eddy Simulation (MILES) method of Boris, Oran and Grinstein [7, 8, 9] is an example of this approach. Although most research has focused on incompressible turbulent flows, there has recently emerged a growing interest in applications of LES to compressible turbulent flows. Examples include Yoshizawa [10], Speziale et al [11], Moin et al [12], Erlebacher et al [13], Zang et al [14], El-Hady et al [15], Jansen [16], Spyropoulos and Blaisdell [17], and Haworth and Jansen [18]. Nearly all compressible LES has employed spectral methods or structured grids, with the exception of Jansen [16] and Haworth and Jansen [18].
† Professor, Associate Fellow AIAA.
‡ Research Associate, Member AIAA.
§ Graduate Research Assistant, Member AIAA.

The complexity of the geometries of high speed vehicles is also a challenge for LES. Unstructured grids have two advantages over the more commonly used structured grids. First, algorithms have been developed to facilitate automatic generation of unstructured grids for complex geometries (see, for example, the discussion in Barth [19, 20]). These grid generation methods can be substantially more efficient (in terms of user time) than some of the multi-block structured grid generation methods used. Second, local mesh refinement, either adaptive or fixed, can be performed much more readily for unstructured grids. This paper describes the initial results of a research program to develop a Large Eddy Simulation algorithm for compressible turbulent flows using unstructured grids. Two methods for simulation of the subgrid scale stresses are examined. The first method is the Monotone Integrated Large Eddy Simulation (MILES) technique. The second method is a hybrid technique combining MILES with a Smagorinsky eddy viscosity model for the subgrid scale stresses. Simulation of the decay of isotropic incompressible turbulence shows good agreement with the experimental data of Comte-Bellot and Corrsin for the decay of turbulence energy, and reasonable agreement with the temporal evolution of the energy spectrum. A parallel version of the algorithm has been developed and achieves excellent performance.

2 Governing Equations
For an arbitrary function F(x_i, t), the filtered variable \bar{F}(x_i, t) is defined as

\bar{F}(x_i, t) = \int_D G(x_i - \xi_i; \Delta) \, F(\xi_i, t) \, d\xi_i \qquad (1)

where G is the filter function, and \Delta is a measure of the filter width, related to the computational mesh size [21]. For compressible flows, it is expedient to define the Favre-filtered variable \tilde{F}(x_i, t) as

\tilde{F}(x_i, t) = \frac{\overline{\rho F}}{\bar{\rho}} \qquad (2)

The filtered compressible Navier-Stokes equations are

\frac{\partial \bar{\rho}}{\partial t} + \frac{\partial}{\partial x_i} \bar{\rho} \tilde{u}_i = 0 \qquad (3)

\frac{\partial \bar{\rho} \tilde{u}_i}{\partial t} + \frac{\partial}{\partial x_j} \bar{\rho} \tilde{u}_i \tilde{u}_j = -\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial T_{ij}}{\partial x_j} \qquad (4)

\frac{\partial \tilde{e}}{\partial t} + \frac{\partial}{\partial x_j} (\tilde{e} + \bar{p}) \tilde{u}_j = \frac{\partial H_j}{\partial x_j} \qquad (5)

\bar{p} = \bar{\rho} R \tilde{T} \qquad (6)

where x_i represents the Cartesian coordinates (i = 1, 2, 3), \bar{\rho} is the mean density, \tilde{u}_i are the Cartesian components of the filtered velocity, \bar{p} is the mean pressure, T_{ij} is the total stress, H_j is the energy flux due to heat transfer and work done by the total stress, and \tilde{e} = c_v \bar{\rho} \tilde{T} + \frac{1}{2} \bar{\rho} \tilde{u}_i \tilde{u}_i + k is the filtered total energy per unit volume, where k = \frac{1}{2} \bar{\rho} \left( \widetilde{u_i u_i} - \tilde{u}_i \tilde{u}_i \right) is the subgrid scale turbulence kinetic energy per unit volume. The total stress is

T_{ij} = \sigma_{ij} + \tau_{ij} \qquad (7)

where \tau_{ij} is the subgrid scale stress tensor

\tau_{ij} = -\bar{\rho} \left( \widetilde{u_i u_j} - \tilde{u}_i \tilde{u}_j \right) \qquad (8)

and hence \tau_{ii} = -2k. The molecular viscous stress tensor \sigma_{ij} is approximated [12] by

\sigma_{ij} = \mu(\tilde{T}) \left( -\frac{2}{3} \frac{\partial \tilde{u}_k}{\partial x_k} \delta_{ij} + \frac{\partial \tilde{u}_i}{\partial x_j} + \frac{\partial \tilde{u}_j}{\partial x_i} \right) \qquad (9)

where \mu(\tilde{T}) is the molecular viscosity based on the Favre-filtered static temperature \tilde{T}. The total heat transfer is

\mathcal{Q}_j = Q_j + q_j \qquad (10)

where Q_j is the subgrid scale heat flux

Q_j = -c_p \bar{\rho} \left( \widetilde{u_j T} - \tilde{u}_j \tilde{T} \right) \qquad (11)

and q_j is the molecular heat flux

q_j = \kappa(\tilde{T}) \frac{\partial \tilde{T}}{\partial x_j} \qquad (12)

with \kappa(\tilde{T}) the molecular thermal conductivity. The energy flux [22] H_j is

H_j = \mathcal{Q}_j + T_{ij} \tilde{u}_i \qquad (13)
The closure of the system of equations (3) to (6) requires a model for the subgrid scale stress \tau_{ij} and heat flux Q_j, and the specification of appropriate initial and boundary conditions for the flow variables. There are two opposite views regarding the subgrid scale model. In the first view, the physical model (e.g., Smagorinsky eddy viscosity) for the subgrid scale stress \tau_{ij} is held wholly responsible for the entire energy transfer from resolved to subgrid (unresolved) scales. This requires a high order accurate numerical algorithm which minimizes numerical dissipation. In the second view (e.g., MILES), the numerical algorithm is held wholly responsible for the entire energy transfer between resolved and subgrid scales, and no explicit subgrid scale model is employed (i.e., \tau_{ij} = 0 and Q_j = 0). There is, of course, an intermediate view which allows for both a subgrid model and numerical dissipation (a hybrid SGS model), with the subgrid scale model coefficients dynamically adjusted to provide the additional dissipation (above and beyond the dissipation provided by the numerical algorithm) needed to achieve the proper energy transfer between the resolved and unresolved scales^1. Our focus is the hybrid SGS approach. For unstructured grid LES using a finite volume methodology^2, the spatial accuracy is effectively restricted to second or third order for reasons of computational efficiency. Thus, numerical dissipation is inevitable, and may in some circumstances be comparable to the dissipation afforded by the subgrid scale model. Our challenge is to determine the best numerical algorithm, i.e., the algorithm which maintains the greatest fidelity of the large scale eddy motion and its energy transfer to the subgrid scales, and to complement the numerical dissipation with an appropriate subgrid scale model to achieve the correct overall dynamics, with the goal of accurate simulation of compressible turbulent flows. We therefore investigate two different models.
First, we evaluate the MILES approach wherein no explicit subgrid scale model is used (\tau_{ij} = 0 and Q_j = 0). This approach, described by Boris et al [7] and Oran and Boris [8], relies entirely on the inherent dissipation of the numerical algorithm to account for the energy transfer from the resolved to the subgrid scales. The specific details of the numerical algorithm, in particular the function reconstruction method, significantly affect the turbulent energy transfer. We investigate two different reconstruction methods in this paper to determine their effect. Second, we evaluate the hybrid approach wherein a specific subgrid scale model is employed, together with the inherent dissipation of the numerical method. In our approach, we select the compressible extension of the Smagorinsky subgrid-scale stress model [13, 23] as the first application due to its simplicity and generally satisfactory performance in previous simulations of compressible turbulence^3. The model is

\tau_{ij} = 2 C_R \bar{\rho} \Delta^2 \sqrt{\tilde{S}_{kl} \tilde{S}_{kl}} \left( \tilde{S}_{ij} - \frac{1}{3} \tilde{S}_{kk} \delta_{ij} \right) - \frac{2}{3} k \delta_{ij} \qquad (14)

Yoshizawa's [10] model is used for k. The rate-of-strain tensor is defined as

\tilde{S}_{ij} = \frac{1}{2} \left( \frac{\partial \tilde{u}_i}{\partial x_j} + \frac{\partial \tilde{u}_j}{\partial x_i} \right) \qquad (15)

Two different models for the filter width \Delta are examined: \Delta = (\bar{V}_n)^{1/3} and \Delta = \left( \frac{3}{4\pi} V_n \right)^{1/3}, where \bar{V}_n is the average volume of all cells which share the node n, and V_n is the sum of the volumes of all cells which share the node n. For compressible isotropic turbulent flow, Erlebacher et al [13] found that C_R = 0.012 gives a high correlation between the exact and modeled stresses using various measures of comparison. The eddy viscosity model for the subgrid-scale heat flux Q_j is

Q_j = c_p \bar{\rho} \frac{C_R \Delta^2}{Pr_t} \sqrt{\tilde{S}_{kl} \tilde{S}_{kl}} \frac{\partial \tilde{T}}{\partial x_j} \qquad (16)

where the turbulent Prandtl number Pr_t is chosen in the range 0.3 to 0.5 [13]. In our model, we use Pr_t = 0.4.

^1 I am indebted to Prof. Ugo Piomelli for this point.
^2 Our focus on finite volume methods is driven by the need to simulate flows with strong shock waves. Such flows require the use of a Riemann solver to minimize numerical oscillations.
^3 The results presented herein employed a fixed Smagorinsky constant. Future work will incorporate a dynamic subgrid scale model.
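The Smagorinsky stress of Eqs. (14)-(15) can be evaluated pointwise from a velocity gradient tensor. The sketch below is illustrative only (the function name and inputs are assumptions, not the authors' code); it uses the paper's C_R = 0.012 and, for simplicity, sets the Yoshizawa k to a supplied value:

```python
import numpy as np

# Hedged sketch of the compressible Smagorinsky SGS stress, Eq. (14):
#   tau_ij = 2 C_R rho Delta^2 |S| (S_ij - S_kk delta_ij / 3) - (2/3) k delta_ij
# with |S| = sqrt(S_kl S_kl) and S_ij from Eq. (15).

C_R = 0.012  # value of Erlebacher et al. used in the paper

def smagorinsky_stress(grad_u, rho, delta, k_sgs):
    """SGS stress tensor from the 3x3 velocity gradient d(u_i)/d(x_j)."""
    S = 0.5 * (grad_u + grad_u.T)              # rate-of-strain, Eq. (15)
    S_mag = np.sqrt(np.sum(S * S))             # sqrt(S_kl S_kl)
    dev = S - (np.trace(S) / 3.0) * np.eye(3)  # deviatoric part
    return (2.0 * C_R * rho * delta**2 * S_mag * dev
            - (2.0 / 3.0) * k_sgs * np.eye(3))

# Simple shear du/dy = 1 as a check case:
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
tau = smagorinsky_stress(grad_u, rho=1.0, delta=0.1, k_sgs=0.0)
```

With k = 0 the trace of the modeled stress vanishes, consistent with \tau_{ii} = -2k, and the off-diagonal component opposes the imposed shear as an eddy viscosity should.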

3 Numerical Method
Two different second-order accurate algorithms are employed for determining the inviscid fluxes. The first is Godunov's second method [24], which is based on the exact solution of the Riemann problem at each face. The implementation is described in Knight et al [22]. The second is Roe's method [25], whose implementation for 2-D unstructured grids is described in Knight [26]. The extension to 3-D unstructured grids is straightforward. Two different methods of functional reconstruction of the flow variables to the cell faces are employed. The first reconstruction method is the second-order least-squares method of Ollivier-Gooch [27], wherein a Taylor series expansion is developed within each cell. The coefficients are determined by minimizing the error associated with the approximation to the cell-averaged values obtained from the expansion applied to adjacent cells. Details are presented in [27, 28]. The second reconstruction method is due to Frink [29], wherein the nodal values of the flow variables are computed using a second order interpolation, and the face values are then obtained from the nodes. The fluxes due to the subgrid scale stresses and heat flux are computed by a second-order accurate algorithm using an average of the values at the cell nodes, which are obtained from a discrete version of Gauss' Theorem [29] using the volume defined by the collection of tetrahedra which share the node. The temporal integration is performed using a second-order or fourth-order accurate Runge-Kutta method [30].
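The least-squares reconstruction idea can be sketched in a few lines: fit a linear expansion about a cell so that it best reproduces the values at neighbouring cells. The sketch below (names are hypothetical; it fits point values rather than cell averages, a simplification of the Ollivier-Gooch method) solves the resulting least-squares problem for the gradient:

```python
import numpy as np

# Illustrative least-squares gradient reconstruction: choose the gradient g
# minimizing || dX @ g - dF ||_2 over the neighbour stencil, where dX holds
# displacement vectors to the neighbours and dF the value differences.

def ls_gradient(x0, f0, neighbors, f_neighbors):
    """Least-squares gradient of f at point x0 from neighbouring values."""
    dX = np.asarray(neighbors, dtype=float) - np.asarray(x0, dtype=float)
    dF = np.asarray(f_neighbors, dtype=float) - f0
    g, *_ = np.linalg.lstsq(dX, dF, rcond=None)
    return g

# For an exactly linear field f = 2x + 3y - z the reconstruction is exact:
pts = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [-1, 0, 1]])
f = 2 * pts[:, 0] + 3 * pts[:, 1] - pts[:, 2]
grad = ls_gradient([0.0, 0.0, 0.0], 0.0, pts, f)
```

Second-order accuracy follows because any linear variation of the flow variables is reproduced exactly by such a fit.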

4 Details of Computations
A series of simulations of the decay of incompressible isotropic turbulence were performed for the experimental conditions of Comte-Bellot and Corrsin (CBC) [31]. The decaying turbulence was simulated by considering the fluid to be inside a cube with periodic boundary conditions on all surfaces. The experimental data include turbulence energy and spectra at three locations downstream of the turbulence generating grid. Using the mean flow speed of the experiment, these data can be transformed [31] to three different dimensionless times U_0 t_CBC / M = 42, 98, and 171. The data at the initial time are used to initialize the simulation in the conventional manner [13, 32]. The simulations are compared to the experimental data at the two later times. The dimensionless time t in the simulation is related to U_0 t_CBC / M by a factor of 1.88 due to the choice of non-dimensionalization. The computational domain is comprised of 163,840 tetrahedra which constitute the control volumes for solution of the governing equations. The tetrahedra are determined as follows. The cube is divided into hexahedra using a mesh of 33 x 33 x 33 uniformly spaced nodes. Each hexahedron, formed by eight nodes, is further subdivided into five tetrahedra. The tetrahedra therefore constitute a regular unstructured grid. In order to ascertain the sensitivity of the simulation to the geometric regularity of the unstructured grid, a simulation was also performed for which the interior nodes were randomly perturbed, thereby creating a random unstructured grid. Further details are presented in [22].
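The cell count above can be verified by simple arithmetic. The snippet below is a sketch, not the authors' code; the direction of the 1.88 time-scale conversion (multiply vs divide) depends on the non-dimensionalization and is shown here as a multiplication purely for illustration:

```python
# A 33x33x33 node mesh defines 32^3 hexahedra; each hexahedron is split
# into 5 tetrahedra, giving the 163,840 control volumes quoted in the text.
n_nodes_per_edge = 33
n_hex = (n_nodes_per_edge - 1) ** 3
n_tet = 5 * n_hex

# Hypothetical helper relating the simulation time t to the experimental
# U0*t_CBC/M via the stated factor of 1.88 (orientation assumed):
def t_sim(t_cbc_nondim, factor=1.88):
    return factor * t_cbc_nondim
```
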

5 Results
Nine different simulations (Table 1) were conducted to evaluate the SGS models, the effects of the reconstruction method, the type of Riemann solver, the effect of regular vs random grids, and the convergence parameter used in Godunov's method.

Table 1: Summary of Cases

Case | SGS Model                             | Reconstruction | Grid | N_k | Riemann Solver | Toler
-----|---------------------------------------|----------------|------|-----|----------------|-------
1    | MILES                                 | LS 2nd         | Reg  | 26  | Godunov        | 10^-10
2    | MILES                                 | F 2nd          | Reg  | 26  | Godunov        | 10^-10
3    | MILES                                 | F 2nd          | Reg  | 26  | Roe            | -
4    | MILES                                 | First          | Reg  | 26  | Godunov        | 10^-10
5    | MILES                                 | F 2nd          | Ran  | 26  | Godunov        | 10^-10
6    | MILES                                 | F 2nd          | Reg  | 26  | Godunov        | 10^-12
7    | MILES                                 | F 2nd          | Reg  | 52  | Godunov        | 10^-10
8    | C_R = 0.012, Delta = (V-bar_n)^{1/3}  | F 2nd          | Reg  | 26  | Godunov        | 10^-10
9    | C_R = 0.012, Delta = (3V_n/4pi)^{1/3} | F 2nd          | Reg  | 26  | Godunov        | 10^-10

LEGEND
MILES — Monotone Integrated Large Eddy Simulation
LS 2nd — second-order least-squares reconstruction method
F 2nd — second-order Frink's reconstruction method
First — first-order reconstruction method
Reg — regular grid of tetrahedra
Ran — grid obtained by randomly perturbing the nodes of the regular grid
N_k — number of Fourier intervals used for the initial energy spectrum
Toler — tolerance employed in the iterative solution for p

In Fig. 1, the decay of the resolved turbulence kinetic energy is compared with the filtered experimental data (details of the non-dimensionalization are presented in [22]). The MILES method using either the least-squares reconstruction method (Case 1) or the reconstruction method of Frink (Case 2) accurately predicts the decay of isotropic turbulence. This implies that the inherent numerical dissipation in the numerical algorithm (due to the finite order accuracy of the reconstruction and flux quadrature) provides a reasonable model of turbulent energy dissipation. The least-squares method has lower inherent numerical dissipation than Frink's method, and thus Case 1 shows a somewhat lower decay rate than Case 2. An SGS model may be needed to compensate for the reduced inherent numerical dissipation in order to achieve the best agreement with experiment. The computations employing the hybrid models (MILES plus Smagorinsky model), shown in Cases 8 and 9, exhibit only a small difference compared to the MILES simulations (Cases 1 and 2), since the dissipation of turbulence energy is almost entirely a consequence of the numerical algorithm (MILES). The computed decay of resolved turbulence energy is insensitive to the selection of Godunov's or Roe's method for the inviscid flux (Cases 2 and 3), the choice of a random vs a regular internal grid of nodes (Cases 2 and 5), and the choice of convergence parameter in Godunov's method (Cases 2 and 6). These results emphasize the robustness of the LES algorithm. The computed decay shows only a slight difference between N_k = 26 (Case 2) and N_k = 52 (Case 7). The computed decay using the first order reconstruction (Case 4) is in poor agreement with experiment, as anticipated. In Figs. 2 and 3, the filtered energy spectra are shown at U_0 t_CBC / M = 98 and 171, respectively, for Cases 1 and 2. The least-squares reconstruction (Case 1) is more accurate than Frink's method (Case 2). Since the CPU time for the two methods is essentially identical, the least-squares reconstruction is preferable. A "pile-up" of energy at lower wave numbers is noted. This was also observed by Haworth and Jansen [18], who employed a similar grid resolution.
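The energy spectra compared above are conventionally obtained by binning Fourier modes into spherical shells in wavenumber space, with N_k shells (the "Intervals of Fourier Modes" column of Table 1). The sketch below is not from the paper; it bins the spectrum of a single periodic scalar field for illustration:

```python
import numpy as np

# Illustrative shell-averaged energy spectrum E(k) of a periodic 3-D field:
# take the FFT, compute the modal energy, and sum modes into integer-radius
# shells in wavenumber space.

def energy_spectrum(u, n_shells):
    """Shell-binned kinetic energy spectrum of a periodic scalar field u."""
    n = u.shape[0]
    uh = np.fft.fftn(u) / u.size          # normalized Fourier coefficients
    e = 0.5 * np.abs(uh) ** 2             # modal energy density
    k = np.fft.fftfreq(n) * n             # integer wavenumbers -n/2..n/2-1
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.clip(np.rint(kmag).astype(int), 0, n_shells - 1)
    E = np.zeros(n_shells)
    np.add.at(E, shells.ravel(), e.ravel())   # accumulate energy per shell
    return E

rng = np.random.default_rng(0)
u = rng.standard_normal((16, 16, 16))
E = energy_spectrum(u, 8)
```

Because every mode lands in exactly one shell, the binned spectrum conserves the total resolved energy (a discrete Parseval identity), which makes a convenient correctness check.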

Figure 1: Decay of filtered turbulence kinetic energy (filtered CBC data compared with Cases 1-9)
Figure 2: Turbulence spectrum at U_0 t_CBC / M = 98 (CBC data, Frink second-order, and least-squares reconstruction)
Figure 3: Turbulence spectrum at U_0 t_CBC / M = 171 (CBC data, Frink second-order, and least-squares reconstruction)

6 Parallel Code
The computational domain is decomposed into n_p subdomains. The subdomain boundaries coincide with cell faces, and thus each cell belongs to a unique subdomain. The domain decomposition is one-dimensional for simplicity, i.e., each processor communicates with two other processors ('left' and 'right'). An example is shown in Fig. 4.

Figure 4: Example of domain decomposition

The parallel code is written using MPI. To simplify the LES code, all grid connectivity information is established in the grid generation code. Each processor executes the same flow solver code, apart from the associated sending or receiving of data. Two different types of data transfer between subdomains have been investigated. In the first method (M1), two layers^4 of cells are exchanged between adjacent subdomains. This minimizes the number of communication calls between the two processors (only one pair of send/receive calls is needed), but requires a (redundant) computation of the node values in the overlapping region subsequent to the data transfer. In the second method (M2), one layer of cells and two layers of nodes are exchanged. This eliminates the redundant computation of the node values in the overlap region, but requires two pairs of communication calls. In either method, the data structure is sent/received using MPI_BYTE. Numerical experiments show that the communication time for the second method is 25% less than that of the first method due to the smaller amount of data transferred.
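The one-dimensional decomposition can be sketched without MPI: the periodic boundary conditions make the ranks a ring, and each rank receives a halo layer from its left and right neighbours. The toy below emulates the send/receive pattern with list indexing (the function names and the one-cell halo are illustrative assumptions, not the authors' code):

```python
# Hedged sketch of the 1-D domain decomposition described above: n_p
# subdomains on a periodic ring, each exchanging a boundary cell layer
# with its 'left' and 'right' neighbours. Real MPI send/receive pairs
# are emulated by direct indexing.

def neighbors(rank, n_procs):
    """Left and right neighbour ranks on a periodic 1-D decomposition."""
    return (rank - 1) % n_procs, (rank + 1) % n_procs

def exchange_halos(subdomains):
    """Gather, for each rank, the adjacent boundary layers of its neighbours
    (one cell layer per side, in the spirit of method M1)."""
    n_procs = len(subdomains)
    halos = []
    for rank in range(n_procs):
        left, right = neighbors(rank, n_procs)
        # receive the rightmost layer of the left neighbour and the
        # leftmost layer of the right neighbour
        halos.append((subdomains[left][-1], subdomains[right][0]))
    return halos

subs = [[10, 11], [20, 21], [30, 31]]   # three ranks, two cells each
halos = exchange_halos(subs)
```

In a real MPI code each tuple would arrive via a send/receive pair; the trade-off between M1 and M2 is exactly the balance between the number of such pairs and the volume of redundant data carried.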
Figure 5: Decay of filtered turbulence kinetic energy using parallel code (single processor, MPI, and filtered CBC data)
^4 For subdomain n, the first cell layer in subdomain n+1 consists of all cells which share one or more nodes on the interface between the subdomains. The definition is extended recursively to define the second layer.


Figure 6: Turbulence spectrum at U_0 t_CBC / M = 171 using parallel code (single processor, MPI, and CBC data)

The results using the MPI code are identical to those of the single processor code (Section 5), as expected. Fig. 5 shows the comparison of simulations of the decay of isotropic turbulence kinetic energy, and Fig. 6 shows the spectra at U_0 t_CBC / M = 171. The computations used Frink's method for reconstruction (Case 2 of Table 1). The parallel code achieves a parallel efficiency of 98.7% on a four-processor SGI R10000 for both methods M1 and M2.

7 Concluding Remarks
An unstructured grid LES algorithm for compressible turbulent flow has been developed. A series of simulations were performed for the experiment of Comte-Bellot and Corrsin on the decay of incompressible isotropic turbulence. The simulations examined the effect of different subgrid-scale models (MILES and hybrid MILES/Smagorinsky), reconstruction methods (least-squares and Frink's), type of Riemann solver (Godunov and Roe), nature of the grid (regular and random), and convergence tolerance in Godunov's method. The MILES model accurately predicts the decay of turbulence kinetic energy using second-order accurate function reconstruction. The results are insensitive to the type of Riemann solver, convergence parameter in Godunov's method, and nature of the grid. The least-squares method more accurately predicted the evolution of the turbulence spectrum. The hybrid (MILES plus Smagorinsky) model showed a small improvement in the prediction of the rate of decay of turbulence kinetic energy. A parallel version of the LES code was developed using MPI. The results for the decay of isotropic turbulence are identical to those of the single processor version of the code, as expected. The parallel efficiency of the code is 98.7% on a four-processor SGI R10000.

Acknowledgments
This research is sponsored by the Air Force Office of Scientific Research under AFOSR Grant F49620-96-1-0389, monitored by Dr. Len Sakell.


References
[1] D. Knight and G. Degrez, "Shock Wave Boundary Layer Interactions in High Mach Number Flows - A Critical Survey of Current CFD Prediction Capabilities," in AGARD AR-319, Volume 2 (to appear), 1997.
[2] D. Knight, "Numerical Simulation of Compressible Turbulent Flows Using the Reynolds-Averaged Navier-Stokes Equations," in AGARD FDP/VKI Special Course on Turbulence in Compressible Flows, AGARD Report R-819, 1997.
[3] B. Galperin and S. Orszag, eds., Large Eddy Simulation of Complex Engineering and Geophysical Flows. Cambridge University Press, 1993.
[4] P. Mason, "Large Eddy Simulation: A Critical Review of the Technique," Quarterly Journal of the Royal Meteorological Society, vol. 120, pp. 1-26, 1994.
[5] M. Lesieur and O. Metais, "New Trends in Large-Eddy Simulations of Turbulence," in Annual Review of Fluid Mechanics, vol. 28, pp. 45-82, Annual Reviews, Inc., 1996.
[6] P. Moin, "Progress in Large Eddy Simulation of Turbulent Flows." AIAA Paper 97-0749, January 1997.
[7] J. Boris, F. Grinstein, E. Oran, and R. Kolbe, "New Insights into Large Eddy Simulation," Fluid Dynamics Research, vol. 10, pp. 199-228, 1992.
[8] E. Oran and J. Boris, "Computing Turbulent Shear Flows - A Convenient Conspiracy," Computers in Physics, vol. 7, pp. 523-533, September/October 1993.
[9] F. Grinstein, "Dynamics of Coherent Structures and Transition to Turbulence in Free Square Jets." AIAA Paper 96-0781, 1996.
[10] A. Yoshizawa, "Statistical Theory for Compressible Turbulent Shear Flows, with the Application to Subgrid Modeling," Physics of Fluids, vol. 29, pp. 2152-2164, 1986.
[11] C. G. Speziale, G. Erlebacher, A. Zang, and M. Y. Hussaini, "The Subgrid Modeling of Compressible Turbulence," Physics of Fluids, vol. 31 (4), pp. 940-943, 1988.
[12] P. Moin, K. Squires, W. Cabot, and S. Lee, "A Dynamic Subgrid-scale Model for Compressible Turbulence and Scalar Transport," Physics of Fluids A, vol. 11, pp. 2746-2757, November 1991.
[13] G. Erlebacher, M. Hussaini, C. Speziale, and T. Zang, "Toward the Large-eddy Simulation of Compressible Turbulent Flows," Journal of Fluid Mechanics, vol. 238, pp. 155-185, 1992.
[14] T. A. Zang, R. B. Dahlburg, and P. Dahlburg, "Direct and Large Eddy Simulations of Three-Dimensional Compressible Navier-Stokes Turbulence," Physics of Fluids A, vol. 4 (1), pp. 127-140, 1992.
[15] N. El-Hady, T. Zang, and U. Piomelli, "Applications of the Dynamic Subgrid-Scale Model to Axisymmetric Transitional Boundary Layer at High Speed," Physics of Fluids, vol. 6, pp. 1299-1309, 1994.
[16] K. Jansen, "Unstructured Grid Large Eddy Simulation of Flow Over an Airfoil," tech. rep., Center for Turbulence Research, 1994.
[17] E. Spyropoulos and G. Blaisdell, "Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence," AIAA Journal, vol. 34, pp. 990-998, May 1996.
[18] D. Haworth and K. Jansen, "Large Eddy Simulation on Unstructured Deforming Meshes: Towards Reciprocating IC Engines." Center for Turbulence Research, 1996.
[19] T. Barth, "On Unstructured Grids and Solvers," in Computational Fluid Dynamics, Lecture Notes 1990-03, von Karman Institute for Fluid Dynamics, March 5-9, 1990.
[20] T. Barth, "Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations," in AGARD Special Course on Unstructured Grid Methods for Advection Dominated Flows, Advisory Group for Aerospace Research and Development, AGARD Report 787, May 1992.
[21] P. Moin and J. Jimenez, "Large Eddy Simulation of Complex Turbulent Flows." AIAA Paper 93-3099, July 1993.
[22] D. Knight, G. Zhou, N. Okong'o, and V. Shukla, "Compressible Large Eddy Simulation Using Unstructured Grids." Submitted to the AIAA 36th Aerospace Sciences Meeting, 1998.
[23] A. Ansari and W. Z. Strang, "Large-Eddy Simulation of Turbulent Mixing Layers." AIAA Paper 96-0684, 1996.
[24] S. K. Godunov, "Numerical Simulation of Multidimensional Problems in Gasdynamics." Nauka, Moscow, 1976.
[25] P. Roe, "Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes," Journal of Computational Physics, vol. 43, pp. 357-372, 1981.
[26] D. Knight, "A Fully Implicit Navier-Stokes Algorithm Using an Unstructured Grid and Flux Difference Splitting," Applied Numerical Mathematics, vol. 16, pp. 101-128, 1994.
[27] C. F. Ollivier-Gooch, "High Order ENO Schemes for Unstructured Meshes Based on Least Squares Reconstruction." AIAA Paper 97-0540, 1997.
[28] N. Okong'o and D. Knight, "Accurate Unsteady Simulation using Unstructured Grids." To be submitted to the AIAA 36th Aerospace Sciences Meeting, 1998.
[29] N. T. Frink, "Recent Progress Toward a Three Dimensional Unstructured Navier-Stokes Flow Solver." AIAA Paper 94-0061, 1994.
[30] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing. New York: Cambridge University Press, 2nd ed., 1992.
[31] G. Comte-Bellot and S. Corrsin, "Simple Eulerian Time Correlation of Full- and Narrow-Band Velocity Signals in Grid-Generated, Isotropic Turbulence," Journal of Fluid Mechanics, vol. 48, pp. 273-337, 1971.
[32] E. Spyropoulos and G. Blaisdell, "Evaluation of the Dynamic Subgrid-Scale Model for Large Eddy Simulations of Compressible Turbulent Flows." AIAA Paper 95-0355, 1995.