
Perspective

https://doi.org/10.1038/s43588-022-00264-7

Enhancing computational fluid dynamics with machine learning

Ricardo Vinuesa1,2 ✉ and Steven L. Brunton3

Machine learning is rapidly becoming a core technology for scientific computing, with numerous opportunities to advance the
field of computational fluid dynamics. Here we highlight some of the areas of highest potential impact, including to accelerate
direct numerical simulations, to improve turbulence closure modeling and to develop enhanced reduced-order models. We also
discuss emerging areas of machine learning that are promising for computational fluid dynamics, as well as some potential
limitations that should be taken into account.

The field of numerical simulation of fluid flows is generally known as computational fluid dynamics (CFD). Fluid mechanics is an area of great importance, both from a scientific perspective and for a range of industrial-engineering applications. Fluid flows are governed by the Navier–Stokes equations, which are partial differential equations (PDEs) modeling the conservation of mass and momentum in a Newtonian fluid. These PDEs are nonlinear due to the convective acceleration (which is related to the change of velocity with position), and they commonly exhibit time-dependent chaotic behavior, known as turbulence. Solving the Navier–Stokes equations for turbulent flows requires numerical methods that may be computationally expensive, or even intractable at high Reynolds numbers (Re), due to the wide range of scales in space and time necessary to resolve these flows. There are various approaches to solve these equations numerically, which can be discretized using methods of different orders, for instance finite-difference1, finite-volume2, finite-element3, spectral methods4 and so on. Furthermore, turbulence can be simulated with different levels of fidelity and computational cost.

At the same time, we are experiencing a revolution in the field of machine learning (ML), which is enabling advances across a wide range of scientific and engineering areas5–9. Machine learning is a subfield of the broader area of artificial intelligence, which is focused on the development of algorithms with the capability of learning from data without explicit mathematical models10. Many of the most exciting advances in ML have leveraged deep learning, based on neural networks (NNs) with multiple hidden layers between the input and the output. One key aspect contributing to the remarkable success of deep learning is the ability to learn in a hierarchical manner: initial layers learn simple relationships in the data, then deeper layers combine this information to learn more abstract relationships. Many physical problems exhibit this hierarchical behavior and can therefore be effectively modeled using deep learning, and ML more generally.

In this Perspective we focus on the potential of ML to improve CFD, including possibilities to increase the speed of high-fidelity simulations, develop turbulence models with different levels of fidelity, and produce reduced-order models beyond what can be achieved with classical approaches. Several authors have surveyed the potential of ML to improve fluid mechanics11,12, including topics beyond the scope of CFD, such as experimental techniques, control applications and related fields. Others have reviewed more specific aspects of ML for CFD, such as turbulence closure13,14 and heat-transfer aspects of CFD for aerodynamic optimization15. Our discussion will address the middle ground of ML for CFD more broadly, with a schematic representation of topics covered in Fig. 1. Approaches to improve CFD with ML are aligned with the larger efforts to incorporate ML into scientific computing, for example via physics-informed neural networks (PINNs)16,17 or to accelerate computational chemistry8,18.

Accelerating direct numerical simulations
Direct numerical simulation (DNS) is a high-fidelity approach in which the governing Navier–Stokes equations are discretized and integrated in time with enough degrees of freedom to resolve all flow structures. Turbulent flows exhibit a pronounced multi-scale character, with vortical structures across a range of sizes and energetic content19. This complexity requires fine meshes and accurate computational methods to avoid distorting the underlying physics with numerical artifacts. With a properly designed DNS, it is possible to obtain a representation of the flow field with the highest level of detail of the CFD methods. However, the fine computational meshes required to resolve the smallest scales lead to exceedingly high computational costs, which increase with the Reynolds number20.

A number of ML approaches have been developed recently to improve the efficiency of DNS. We first discuss several studies aimed at improving discretization schemes. Bar-Sinai et al.21 proposed a technique based on deep learning to estimate spatial derivatives in low-resolution grids, outperforming standard finite-difference methods. A similar approach was developed by Stevens and Colonius22 to improve the results of fifth-order finite-difference schemes in the context of shock-capturing simulations. Jeon and Kim23 proposed to use a deep neural network to simulate the well-known finite-volume discretization scheme2 employed in fluid simulations. They tested their method with reactive flows, obtaining very good agreement with reference high-resolution data, but at one-tenth the computational cost. However, they also documented errors with respect to the reference solution, which increased with time. Another deep-learning approach, based on a fully convolutional/long-short-term-memory (LSTM) network, was proposed by Stevens and Colonius24 to improve the accuracy of finite-difference/finite-volume methods. Second, we consider the strategy of developing a correction between

1FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden. 2Swedish e-Science Research Centre (SeRC), Stockholm, Sweden. 3Department of Mechanical Engineering, University of Washington, Seattle, WA, USA. ✉e-mail: rvinuesa@mech.kth.se

358 Nature Computational Science | VOL 2 | June 2022 | 358–366 | www.nature.com/natcomputsci


fine- and coarse-resolution simulations. This was developed by Kochkov et al.25 for the two-dimensional (2D) Kolmogorov flow26, which maintains fluctuations via a forcing term. They leveraged deep learning to develop the correction, obtaining excellent agreement with reference simulations in meshes from eight to ten times coarser in each dimension, as shown in Fig. 2. In particular, this figure shows that, for long simulations, the standard coarse-resolution case does not exhibit certain important vortical motions; these are, however, properly captured by the low-resolution case with the ML model. These results promise to substantially reduce the computational cost of relevant fluid simulations, for example, for weather27, climate28, engineering29 and astrophysics30. Finally, other strategies to improve the performance of PDE solvers in coarser meshes have been developed by Li and others31–33.

Fig. 1 | Summary of some of the most relevant areas where ML can enhance CFD. a, Neural network illustrating the field of ML. b, In direct numerical simulations, computational grids fine enough to properly resolve the details of the flow structures (such as the one shown in blue) are needed. The gray band denotes the wall. c, Left: illustration of a large-eddy simulation (LES) and a schematic of the turbulence spectrum, where the larger scales (to the left of the green vertical line, that is, the cutoff wavenumber) are numerically simulated, and the smaller ones (to the right of the cutoff) are represented by a model. Because these smaller scales do not need to be resolved, the computational cost of the simulation is reduced. Right: illustration of Reynolds-averaged Navier–Stokes (RANS) simulations, where ũi is the instantaneous velocity component i, Ui the mean and ui the fluctuating component. In RANS models, the fluctuations are usually expressed as a function of the eddy viscosity νT. d, General schematic of a dimensionality-reduction method, where the circles denote flow quantities. The input (on the left in blue) is the high-resolution flow field, the output (on the right in red) is the reconstruction of the input field, and the yellow box represents the low-dimensional system.

It is also possible to accelerate CFD by solving the Poisson equation with deep learning. This has been done by several research groups in various areas, such as simulations of electric fields34 and particles35. The Poisson equation is frequently used in operator-splitting methods to discretize the Navier–Stokes equations36, where first the velocity field is advected, and the resulting field u* does not satisfy the continuity equation (that is, for incompressible flows, u* does not satisfy the divergence-free condition). The second step involves a correction of u* to ensure that the resulting velocity field is divergence-free, leading to the following Poisson equation:

(Δt/ρ) ∇²p = −∇ · u*,   (1)

where Δt is the simulation time step, ρ is the fluid density and p is the pressure. Solving this equation is typically the most computationally expensive step of the numerical solver. Therefore, devising alternate strategies to solve it more efficiently is an area of great promise. ML can be used for this task, because this technology can exploit data from previous examples to find mappings between the divergence of the uncorrected velocity and the resulting pressure field. For instance, Ajuria et al.37 proposed using a convolutional neural network (CNN) to solve equation (1) in incompressible simulations, and tested it in a plume configuration. Their results indicate that it is possible to outperform the traditional Jacobi solver with good accuracy at low Richardson numbers Ri (the Richardson number measures the ratio between the buoyancy and the shear in the flow). However, the accuracy degrades at higher Ri, motivating the authors to combine the CNN with the Jacobi solver (which is used when the CNN diverges). CNNs were also used to solve the Poisson problem by decomposing the original problem into a homogeneous Poisson problem plus four inhomogeneous Laplace subproblems38. This decomposition resulted in errors below 10%, which motivates using this approach as a first guess in iterative algorithms, potentially reducing computational cost. Other data-driven methods39 have been proposed to accelerate the pressure correction in multi-grid solvers. These approaches may also be used to accelerate simulations of lower fidelity that rely on turbulence models.

Numerical simulations can also be accelerated by decreasing the size of the computational domain needed to retain the physical properties of the system. Two ways in which the domain can be reduced are to replace a section upstream of the domain of interest by an inflow condition, and to replace part of the far-field region by a suitable boundary condition. In doing so, these parts of the domain do not need to be simulated, thus producing computational savings, and ML can help to develop the inflow and far-field conditions, as discussed next. Fukami et al.40 developed a time-dependent inflow generator for wall-bounded turbulence simulations using a convolutional autoencoder with a multilayer perceptron (MLP). They tested their method in a turbulent channel flow at Reτ = 180, which is the friction Reynolds number based on channel half-height and friction velocity, and they maintained turbulence for an interval long enough to obtain converged turbulence statistics. This is a promising research direction because current inflow-generation methods show limitations in terms of the generality of the inflow conditions, for instance at various flow geometries and Reynolds numbers. A second approach to reduce the computational domain in external flows is to devise a strategy to set the right pressure-gradient distribution without having to simulate the far field. This was addressed by Morita et al.41 through Bayesian optimization based on Gaussian-process regression, achieving accurate results when imposing concrete pressure-gradient conditions on a turbulent boundary layer.

Improving turbulence models
DNS is impractical for many real-world applications because of the computational cost associated with resolving all scales for flows with high Reynolds numbers, together with difficulties arising from complex geometries. Industrial CFD typically relies on either Reynolds-averaged Navier–Stokes (RANS) models, where no turbulent scales are simulated, or coarsely resolved large-eddy simulations (LES), where only the largest turbulent scales are resolved and smaller ones are modeled. Here the term model refers to an a priori assumption regarding the physics of a certain range of turbulent scales. In the following we discuss ML applications to RANS and LES modeling.

RANS modeling. Numerical simulations based on RANS models rely on the so-called RANS equations, which are obtained after decomposing the instantaneous flow quantities into a mean and a





Fig. 2 | An example of ML-accelerated direct numerical simulation. a, Results from the work by Kochkov et al.25, where the instantaneous vorticity field
is shown for simulations with high/low resolution, as well as for a case with low resolution supplemented with ML. Four different time steps are shown,
and some key vortical structures are highlighted with yellow squares. Adapted from ref. 25, with permission from the United States National Academy of
Sciences. b, Sketch showing that ML accelerates the simulation through a correction between the coarse and fine resolutions. In this example one can
reduce the resolution by downsampling the flow data on a coarser mesh, and then use ML to recover the details present in the finer simulation through
super-resolution. The correction between coarse and fine resolutions, which is based on ML, enables the super-resolution task here.

fluctuating component, and averaging the Navier–Stokes equations in time. Using index notation, the instantaneous ith velocity component ũi can be decomposed into its mean (Ui) and fluctuating (ui) components as follows: ũi = Ui + ui. Although the RANS equations govern the mean flow, the velocity fluctuations are also present in the form of the Reynolds stresses uiuj (where the overbar denotes averaging in time), which are correlations between the ith and jth velocity components. Because it is convenient to derive equations containing only mean quantities, uiuj needs to be expressed as a function of the mean flow, and this is called the closure problem. The first approach to do this was proposed by Boussinesq42, who related the Reynolds stresses to the mean flow via the so-called eddy viscosity, νT. Although this approach has led to some success, there are still a number of open challenges for RANS modeling in complex flows43, where this approach is too simple. In particular, conventional RANS models exhibit notable errors when dealing with complex pressure-gradient distributions, complicated geometries involving curvature, separated flows, flows with a substantial degree of anisotropy and 3D effects, among others. As argued by Kutz44, ML can produce more sophisticated models for the Reynolds stresses by using adequate data, in particular if the examples used for training represent a sufficiently rich set of flow configurations.

A wide range of ML methods have recently been used to improve RANS turbulence modeling13, focusing on the challenge of improving the accuracy of the Reynolds stresses for general conditions. For example, Ling et al.45 proposed a novel architecture, including a multiplicative layer with an invariant tensor basis, used to embed Galilean invariance in the predicted anisotropy tensor. Incorporating this invariance improves the performance of the network, which outperforms traditional RANS models based on linear42 and nonlinear46 eddy-viscosity models. They tested their models for turbulent duct flow and the flow over a wavy wall, which are challenging to predict with RANS models47,48 because of the presence of secondary flows49. Other ML-based approaches50,51 rely on physics-informed random forests to improve RANS models, with applications to cases with secondary flows and separation. On the other hand, Jiang et al.52 recently developed an interpretable framework for RANS modeling based on a physics-informed residual network (PiResNet). Their approach relies on two modules to infer the structural and parametric representations of turbulence physics, and includes non-unique mappings, a realizability limiter and noise-insensitivity constraints. Interpretable models are essential for engineering and physics, particularly in the context of turbulence modeling. The interpretability of the framework by Jiang et al.52 relies on its constrained model form53, although this is not generally possible to attain. Recent studies54 have discussed various approaches to include interpretability in the development of deep-learning models, and one promising approach is the one proposed by Cranmer and others55. This technique has potential in terms of embedding physical laws and improving our understanding of such phenomena. Other interpretable RANS models were proposed by Weatheritt and Sandberg56, using gene-expression programming (GEP), which is a branch of evolutionary computing57. GEP iteratively improves a population of candidate solutions by survival of the fittest, with the advantage of producing closed-form models. The Reynolds-stress anisotropy tensor was also modeled by Weatheritt and Sandberg58, who performed tests in RANS simulations of turbulent ducts. Models based on sparse identification of nonlinear dynamics (SINDy)59 have also been used for RANS closure models60–62.

Furthermore, the literature also reflects the importance of imposing physical constraints in the models and incorporating uncertainty quantification (UQ)63–65 alongside ML-based models. It is also important to note that, when using DNS quantities to replace terms in the RANS closure, the predictions may be unsatisfactory66. This inadequacy is due to the strict assumptions associated with the RANS model, as well as the potential ill-conditioning of the RANS equations67. It is thus essential to take advantage of novel data-driven methods, while also ensuring that uncertainties are identified and quantified. Another interesting review, by Ahmed et al.14, discussed both classical and emerging data-driven closure approaches, also connecting with reduced-order models (ROMs).

Obiols-Sales et al.68 developed a method to accelerate the convergence of RANS simulations based on the very popular Spalart–Allmaras turbulence model69 using the CFD code OpenFOAM70. In essence, they combined iterations from the CFD solver and evaluations of a CNN model, obtaining convergence from 1.9 to 7.4 times faster than that of the CFD solver, in both laminar and turbulent flows. Multiphase flows, which consist of flows with two or more thermodynamic phases, are also industrially relevant.
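The Reynolds decomposition and the stresses that RANS closures aim to model can be made concrete numerically. The following sketch (our own illustration with synthetic data, not a real flow; all variable names are hypothetical) splits an "instantaneous" velocity signal into mean and fluctuating parts, ũi = Ui + ui, and forms the Reynolds-stress tensor by time averaging:

```python
import numpy as np

# Synthetic "instantaneous" velocity signal at one spatial point:
# an imposed mean plus zero-mean fluctuations (illustrative data only).
rng = np.random.default_rng(0)
n_steps = 100_000
U_mean_true = np.array([1.0, 0.2, 0.0])            # imposed mean velocity
fluct = rng.normal(0.0, 0.1, (n_steps, 3))
u_inst = U_mean_true + fluct                        # ũ_i = U_i + u_i

# Reynolds decomposition: the time average gives U_i, the remainder is u_i.
U = u_inst.mean(axis=0)
u_prime = u_inst - U

# Reynolds-stress tensor (time average of the product u_i u_j).
reynolds_stress = (u_prime[:, :, None] * u_prime[:, None, :]).mean(axis=0)
print(U)                # close to [1.0, 0.2, 0.0]
print(reynolds_stress)  # close to 0.01 * identity for these fluctuations
```

In a RANS solver this tensor is unknown and must be expressed in terms of the mean flow, which is exactly the closure problem discussed above.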




Fig. 3 | An example of LES modeling where the dissipation coefficient in the Smagorinsky model is calculated by means of ML. a, Schematic of a numerical simulation fine enough to resolve the blue flow structures, but too coarse to resolve the green structure. This small-scale structure (green) would have to be modeled by means of an SGS model, such as the well-known SGS model by Smagorinsky74. This model relies on an empirical parameter, the dissipation coefficient Cs, which can be determined more accurately by ML. b, An example of an ML-based approach to determine the value of Cs by RL83. The RL agents are located at the red blocks, and they are used to compute Cs for each grid point with coordinates xi at time t. This calculation is carried out in terms of a policy, which depends on the state of the agent. This state is based on local variables (invariants of the gradient and Hessian of the velocity) and also global ones (energy spectrum, viscous dissipation rate and total dissipation rate). The agents receive a reward, which also depends on local and global quantities, based on the accuracy of the simulation. Adapted from ref. 83, Springer Nature Ltd.
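To make the role of Cs concrete, here is a minimal sketch (our own illustration, not the implementation of ref. 83) of the classical static Smagorinsky eddy viscosity, νt = (Cs Δ)² |S̄| with |S̄| = √(2 S̄ij S̄ij), where Δ is the grid size; the RL approach replaces the fixed empirical Cs below with a per-grid-point value produced by the learned policy:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * delta)**2 * |S|, where
    S is the (filtered) strain-rate tensor, |S| = sqrt(2 S_ij S_ij),
    and grad_u holds the velocity gradients with shape (..., 3, 3).
    C_s = 0.17 is a commonly quoted value of the empirical constant."""
    s = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))   # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', s, s))
    return (c_s * delta) ** 2 * s_mag

# Pure shear du/dy = 1 gives |S| = 1, so nu_t = (C_s * delta)**2.
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 1.0
print(smagorinsky_nu_t(grad_u, delta=0.1))  # (0.17 * 0.1)**2 = 2.89e-4
```

The fixed constant is what makes the static model inaccurate in some flows, which motivates learning Cs(xi, t) from the local and global state, as in panel b.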

Gibou et al.71 proposed different directions in which ML and deep learning can improve the CFD of multiphase flows, in particular when it comes to enhancing the simulation speed. Ma et al.72 used deep learning to predict the closure terms (that is, gas flux and streaming stresses) in their two-fluid bubble flow, whereas Mi et al.73 analyzed gas–liquid flows and employed NNs to identify the different flow regimes.

LES modeling. LES-based numerical simulations rely on low-pass filtering the Navier–Stokes equations (Fig. 1c), such that the largest scales are simulated and the smallest ones (below a certain cutoff) are modeled by means of a so-called subgrid-scale (SGS) model. Note that the smallest scales are the most demanding from a computational point of view (both in terms of computer time and memory usage), because they require fine meshes to be properly resolved. The first proponent of this type of approach was Smagorinsky74, who developed an SGS model based on an eddy viscosity, which was computed in terms of the mean flow and the grid size. His model assumed that production equals dissipation for the small scales. Although LES can lead to substantial computational savings with respect to DNS while exhibiting good accuracy, there are still challenges associated with its usage for general purposes43. For example, current SGS models exhibit limited accuracy in their predictions of turbulent flows at high Reynolds numbers, in complex geometries and in cases with shocks and chemical reactions.

ML has also been used to develop SGS models in the context of LES of turbulent flows in basically two ways: supplementing the unresolved energy in the coarse mesh using supervised learning, and developing agent-based approaches to stabilize the coarse simulation. When it comes to the first approach, in the following we list several studies that rely on high-fidelity data to train both deep-learning and GEP-based models. First, Beck et al.75 used an artificial NN based on local convolutional filters to predict the mapping between the flow in a coarse simulation and the closure terms, using a filtered DNS of decaying homogeneous isotropic turbulence. Lapeyre et al.76 employed a similar approach, with a CNN architecture inspired by a U-net model, to predict the subgrid-scale wrinkling of the flame surface in premixed turbulent combustion. They obtained better results than classical algebraic models. Maulik et al.77 employed an MLP to predict the SGS model in an LES, using high-fidelity numerical data to train the model. They evaluated the performance of their method on Kraichnan turbulence78, which is a classical 2D decaying-turbulence test case. Several other studies have also used NNs in a supervised manner for SGS modeling79–81. Furthermore, GEP has been used for SGS modeling82 in an LES of a Taylor–Green vortex, where it outperformed standard LES models. Regarding the second approach to LES modeling, Novati et al.83 employed multi-agent reinforcement learning (RL) to estimate the unresolved subgrid-scale physics. This unsupervised method exhibits favorable generalization properties across grid sizes and flow conditions, and the results are presented for isotropic turbulence. As shown in the schematic representation of Fig. 3, the state of the agents at a particular instant is given in terms of both local and global variables. This state is then used to calculate the so-called dissipation coefficient at each grid point.

In certain applications, for example those involving atmospheric boundary layers, the Reynolds number is several orders of magnitude larger than those of most studies based on turbulence models or wind-tunnel experiments84. The mean flow in the inertial sublayer has been widely studied in the atmospheric-boundary-layer community, and it is known that in neutral conditions it can be described by a logarithmic law85. The logarithmic description of the inertial sublayer led to the use of wall models, which replace the region very close to the wall with a model defining a surface shear stress matching the logarithmic behavior. This is the cornerstone of most atmospheric models, which avoid resolving the computationally expensive scales close to the wall. One example is the work by Giometto et al.86, who studied a real urban geometry, adopting the LES model by Bou-Zeid et al.87 and the Moeng model88 for the wall boundary condition. It is possible to use data-driven approaches to develop mappings between the information in the outer region (which is resolved) and a suitable off-wall boundary condition (so the near-wall region does not need to be resolved). For example, it is possible to exploit properties of the logarithmic layer and rescale the flow in the outer region to set the off-wall boundary condition in turbulent channels89,90. This may also be accomplished via transfer functions in spectral space91, CNNs92 or modeling the temporal dynamics of the near-wall region via deep NNs93. Other promising approaches based on deep learning are the one by Moriya et al.94, based on defining a virtual velocity, and the RL technique by Bae and Koumoutsakos95. Defining off-wall boundary conditions with ML is a challenging yet promising area of research.
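The equilibrium wall models discussed above essentially invert the logarithmic law for the friction velocity, given a velocity sample at an off-wall matching point, and from it set the surface shear stress. A minimal sketch of that inversion (our own illustration using the standard log-law constants κ ≈ 0.41 and B ≈ 5.2; the function name is hypothetical):

```python
import numpy as np

def friction_velocity(u_sample, y, nu, kappa=0.41, B=5.2, n_iter=50):
    """Invert the logarithmic law u = u_tau * (ln(y * u_tau / nu) / kappa + B)
    for the friction velocity u_tau by fixed-point iteration, given the
    velocity u_sample at wall distance y, as an equilibrium wall model does."""
    u_tau = 0.05 * u_sample                     # rough initial guess
    for _ in range(n_iter):
        u_tau = u_sample / (np.log(y * u_tau / nu) / kappa + B)
    return u_tau

# Consistency check: generate a sample from the log law with u_tau = 0.05,
# then recover the friction velocity from that sample.
u_tau_true, y, nu = 0.05, 0.1, 1e-5
u = u_tau_true * (np.log(y * u_tau_true / nu) / 0.41 + 5.2)
print(friction_velocity(u, y, nu))  # close to 0.05
```

The wall shear stress then follows as τw = ρ uτ²; the ML approaches cited above aim to replace this equilibrium assumption with richer data-driven mappings from the outer flow.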





Fig. 4 | Schematic of NN autoencoders for dimensionality reduction and model identification. Reduced-order models are typically designed to balance efficiency and accuracy. ML solutions further improve the efficiency by reducing the effective dimension of the model and increasing the accuracy through better modeling of how these few variables co-evolve in time. In this figure, the input is a high-resolution flow field evolving in time (t), and the output is a reconstruction of that field from the latent space. a, Classic POD/PCA may be viewed as a shallow autoencoder with a single encoder layer UT and a single decoder layer V, together with linear activation units. For the flow-past-a-cylinder example, as shown, the dynamics evolve in a 3D coordinate system. b, A deep, multi-level autoencoder with a multilayer encoder φ and decoder ψ, as well as nonlinear activation functions, provides enhanced nonlinear coordinates on a manifold. The cylinder flow now evolves on a 2D submanifold. c, The classic Galerkin projection model, obtained by projecting the governing Navier–Stokes equations onto an orthogonal basis. d, The Galerkin projection model in c can be replaced by more generic ML regressions, such as LSTM networks, reservoir networks or sparse nonlinear models, to represent the nonlinear dynamical system ż = f(z).
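The linear encoder/decoder pair in panel a can be computed in closed form: the SVD of a snapshot matrix yields the POD modes, and projecting onto the leading modes is exactly the shallow linear autoencoder. A minimal sketch with synthetic low-rank data (our own illustration, not flow data; variable names are hypothetical):

```python
import numpy as np

# POD of a snapshot matrix via the SVD. Columns of X are "snapshots";
# the leading left singular vectors play the role of the linear
# encoder (Ur.T) and decoder (Ur) of a shallow autoencoder.
rng = np.random.default_rng(1)
n_points, n_snapshots, rank = 200, 50, 3
modes_true = np.linalg.qr(rng.normal(size=(n_points, rank)))[0]
amplitudes = rng.normal(size=(rank, n_snapshots))
X = modes_true @ amplitudes                 # rank-3 data by construction

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :rank]                            # retained POD modes
z = Ur.T @ X                                # encode: latent coordinates
X_hat = Ur @ z                              # decode: reconstruction

print(np.allclose(X, X_hat))                # True: 3 modes capture rank-3 data
```

For real turbulence data the spectrum of singular values decays slowly, and this is precisely where the nonlinear autoencoders of panel b can compress further than any linear subspace of the same dimension.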

Developing reduced-order models
ML is also being used to develop ROMs in fluid dynamics. ROMs rely on the fact that even complex flows often exhibit a few dominant coherent structures96–98 that may provide coarse, but valuable, information about the flow. Thus, ROMs describe the evolution of these coherent structures, providing a lower-dimensional, lower-fidelity characterization of the fluid. In this way, ROMs provide a fast surrogate model for the more expensive CFD techniques described above, enabling optimization and control tasks that rely on many model iterations or fast model predictions. The cost of this efficiency is a loss of generality: ROMs are tailored to a specific flow configuration, providing massive acceleration but a limited range of applicability.

Developing a reduced-order model involves (1) finding a set of reduced coordinates, typically describing the amplitudes of important flow structures, and (2) identifying a differential-equation model (that is, a dynamical system) for how these amplitudes evolve in time. Both of these stages have seen incredible recent advances with ML. One common ROM technique involves learning a low-dimensional coordinate system with the proper orthogonal decomposition (POD)96,99 and then obtaining a dynamical system for the flow system restricted to this subspace by Galerkin projection of the Navier–Stokes equations onto these modes. Although the POD step is data-driven, working equally well for experiments and simulations, Galerkin projection requires a working numerical implementation of the governing equations; moreover, it is often intrusive, involving custom modifications to the numerical solver. The related dynamic-mode decomposition (DMD)100 is a purely data-driven procedure that identifies a low-dimensional subspace and a linear model for how the flow evolves in this subspace. Here we will review a number of recent developments to extend these approaches with ML.

The first broad opportunity to incorporate ML into ROMs is in learning an improved coordinate system in which to represent the reduced dynamics. POD96,99 provides an orthogonal set of modes that may be thought of as a data-driven generalization of Fourier modes, which are tailored to a specific problem. POD is closely related to principal-component analysis (PCA) and the singular-value decomposition (SVD)5, which are two core dimensionality-reduction techniques used in data-driven modeling. These approaches provide linear subspaces to approximate data, even though it is known that many systems evolve on a nonlinear manifold. Deep learning provides a powerful approach to generalize the POD/PCA/SVD dimensionality reduction from learning a linear subspace to learning coordinates on a curved manifold. Specifically, these coordinates may be learned using a NN autoencoder, which has an input and output the size of the high-dimensional fluid state and a constriction, or bottleneck, in the middle that reduces to a low-dimensional latent variable. The map from the high-dimensional state x to the latent state z is called the encoder, and the map back from the latent state to an estimate of the high-dimensional state x̂ is the decoder. The autoencoder loss function is ∥x̂ − x∥₂². When the encoder and decoder consist of a single layer and all nodes have identity activation functions, then the optimal solution to this network will be closely related to POD101. However, this shallow linear autoencoder may be generalized to a deep nonlinear autoencoder with multiple encoding and decoding layers and nonlinear activation functions for the nodes. In this way, a deep autoencoder learns nonlinear manifold coordinates that may considerably improve the compression in the latent space, with increasing applications in fluid mechanics102,103. This concept is illustrated in Fig. 4 for the simple flow past a cylinder, where it is known that the energetic coherent structures evolve on a parabolic submanifold in the POD subspace104. Lee and Carlberg105 recently showed that deep convolutional autoencoders may be used to greatly improve the performance of classical ROM techniques based on linear subspaces106,107. More recently, Cenedese et al.108 proposed a promising data-driven method to construct ROMs on spectral submanifolds.

Once an appropriate coordinate system is established, there are many ML approaches to model the dynamics in these coordinates. Several NNs are capable of learning nonlinear dynamics109, including the LSTM network110–112 and echo-state networks113, which are a form of reservoir computing. Beyond NNs, there are alternate regression techniques to learn effective dynamical-systems models. Cluster reduced-order modeling114 is a simple and powerful unsupervised-learning approach that decomposes a time series into a few representative clusters, and then models the transition-state probability between clusters. The operator-inference approach is closely related to Galerkin projection, where a neighboring operator is learned from data115–117. The SINDy59 procedure learns a minimalistic model by

362 Nature Computational Science | VOL 2 | June 2022 | 358–366 | www.nature.com/natcomputsci


NaTure CoMpuTaTional Science Perspective
fitting the observed dynamics to the fewest terms in a library of can-
didate functions that might describe the dynamics, resulting in mod- Table 1 | Summary of applications of ML to enhance CFD
els that are interpretable and balance accuracy and efficiency. SINDy
Accelerate DNS Improve LES/ Develop ROMs Future
has been used to model a range of fluids118–124, including laminar and
RANS developments
turbulent wake flows, convective flows and shear flows.
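As an illustration of the SINDy procedure described above, the following minimal sketch applies sequentially thresholded least squares to a hypothetical two-state toy system; the data, candidate library and threshold value are illustrative assumptions, not taken from the references:

```python
# Minimal SINDy-style sketch: recover dx/dt = -2x, dy/dt = x*y - y from data
# using sequentially thresholded least squares on a library of candidate terms.
import numpy as np

rng = np.random.default_rng(0)
# Sampled states and their exact derivatives for the (hypothetical) toy system
X = rng.uniform(-1, 1, size=(500, 2))
dX = np.column_stack([-2.0 * X[:, 0], X[:, 0] * X[:, 1] - X[:, 1]])

# Library of candidate functions: [1, x, y, x^2, x*y, y^2]
def library(X):
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

Theta = library(X)

# Regress, zero out small coefficients, and refit on the surviving terms
Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
for _ in range(10):
    small = np.abs(Xi) < 0.1
    Xi[small] = 0.0
    for k in range(dX.shape[1]):
        big = ~small[:, k]
        if big.any():
            Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]

print(np.round(Xi, 3))  # column 0 keeps only -2*x; column 1 keeps x*y - y
```

The sparse coefficient matrix Xi is the interpretable model: each nonzero entry names one active term in the governing equations.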
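The cluster reduced-order modeling idea mentioned above, decomposing a time series into a few clusters and modeling the transition probabilities between them, can be sketched as follows; the limit-cycle signal is a hypothetical stand-in for flow snapshots, and the centroids are fixed by hand rather than obtained with k-means:

```python
# Sketch of a cluster-based ROM: assign each snapshot to a cluster and
# estimate the transition-probability matrix between clusters.
import numpy as np

t = np.linspace(0, 8 * np.pi, 800)
a = np.column_stack([np.cos(t), np.sin(t)])   # trajectory on a limit cycle

# Four fixed "centroids" at the quadrant centers (a real implementation
# would obtain these with k-means)
centroids = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]) / np.sqrt(2)
labels = np.argmin(((a[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

# Transition matrix P[i, j] = probability that cluster j follows cluster i
K = len(centroids)
P = np.zeros((K, K))
for i, j in zip(labels[:-1], labels[1:]):
    P[i, j] += 1
P /= P.sum(axis=1, keepdims=True)
print(np.round(P, 2))
```

For this cyclic toy trajectory, each row of P concentrates on staying in the current cluster or moving to the next one around the cycle, which is the kind of coarse-grained dynamics the cluster-based ROM exploits.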
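A minimal sketch of the DMD procedure discussed above, applied to a synthetic two-mode signal with known temporal behavior (the toy data construction is an illustrative assumption):

```python
# Sketch of exact DMD: fit a linear map between time-shifted snapshot
# matrices and extract its eigenvalues.
import numpy as np

dt = 0.1
t = np.arange(0, 10, dt)
x_grid = np.linspace(-np.pi, np.pi, 50)
# Two spatial patterns with known continuous-time eigenvalues 0.3i and -0.1+1.0i
f1 = np.outer(np.cos(x_grid), np.exp(0.3j * t))
f2 = np.outer(np.sin(2 * x_grid), np.exp((-0.1 + 1.0j) * t))
X = f1 + f2

X1, X2 = X[:, :-1], X[:, 1:]              # snapshots and their successors
U, S, Vt = np.linalg.svd(X1, full_matrices=False)
r = 2                                     # retained rank
U, S, Vt = U[:, :r], S[:r], Vt[:r, :]
Atilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / S)
eigvals, W = np.linalg.eig(Atilde)
omega = np.log(eigvals) / dt              # continuous-time eigenvalues
print(np.round(omega, 3))                 # recovers 0.3i and -0.1+1.0i
```

The eigenvalues of the low-rank operator give the growth rates and frequencies of the linear model for the flow in the identified subspace.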
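The connection between POD, the SVD and a shallow linear autoencoder discussed above can be sketched on synthetic snapshot data; the rank-3 snapshot construction is a hypothetical stand-in for an actual flow field:

```python
# POD via the SVD, read as a shallow linear autoencoder:
# encode z = Ur^T (x - mean), decode x_hat = mean + Ur z.
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 64, 200, 3                 # state dimension, snapshots, modes kept
modes_true = rng.standard_normal((n, 3))
amps = rng.standard_normal((3, m))
X = modes_true @ amps                # columns are snapshots x(t_k)

Xmean = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - Xmean, full_matrices=False)
Ur = U[:, :r]                        # POD modes = leading left singular vectors

def encode(x):                       # linear "encoder"
    return Ur.T @ (x - Xmean)

def decode(z):                       # linear "decoder"
    return Xmean + Ur @ z

x = X[:, [0]]
x_hat = decode(encode(x))
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(err)  # essentially zero: three modes capture the rank-3 data exactly
```

The reconstruction error printed here is exactly the ∥x̂ − x∥₂² autoencoder loss (up to the square and normalization); replacing the two linear maps with deep nonlinear networks yields the nonlinear manifold coordinates described in the text.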
The modeling approaches above may be combined with a deep autoencoder to uncover a low-dimensional latent space, as in the SINDy autoencoder125. DMD has also been extended to nonlinear coordinate embeddings through the use of deep-autoencoder networks126–130. In all of these architectures, there is a tremendous opportunity to embed partial knowledge of the physics, such as conservation laws118, symmetries120 and invariances131–133. It may also be possible to directly impose stability in the learning pipeline118,134,135.

The ultimate goal for ML ROMs is to develop models that have improved accuracy and efficiency; better generalizability to new initial and boundary conditions, flow configurations and varying parameters; and improved model interpretability, ideally with less intrusive methods and less data. Enforcing partially known physics, such as symmetries and other invariances, along with sparsity, is expected to be critical in these efforts. It is also important to continue integrating these efforts with the downstream applications of control and optimization. Finally, many applications of fluid dynamics involve safety-critical systems, and therefore certifiable models are essential.

Table 1 | Summary of applications of ML to enhance CFD

Accelerate DNS | Improve LES/RANS | Develop ROMs | Future developments
Deep learning for correction between coarse and fine simulations | Deep learning/RL for more general subgrid-scale models | POD for improved linear subspaces to represent dynamics | Deep learning for prediction and control of turbulent flows
Deep learning for accelerating the solution of the Poisson equation | Deep learning/RL for more robust wall models | Autoencoders to learn manifolds to represent dynamics | Better interpretability of deep learning for CFD models
Domain reduction: deep learning for inflow generation; Gaussian processes for pressure-gradient distributions | Deep learning/symbolic regression for more accurate and general RANS models | Deep learning for representing temporal system dynamics | More efficient operation of high-performance-computing facilities facilitated by ML
Deep learning for improved accuracy of finite-difference/finite-volume schemes | Deep learning/symbolic regression for interpretable RANS models | Sparse identification of nonlinear dynamics for learning efficient and interpretable ROMs | ML-enabled more advanced computational architectures

Shown are examples of approaches to accelerate DNSs, improvement of turbulence models for LES and RANS simulations, development of ROMs and possible future developments.

Emerging possibilities and outlook
In this Perspective we have provided our view on the potential of ML to advance the capabilities of CFD, focusing on three main areas: accelerating simulations, enhancing turbulence models and improving reduced-order models. In Table 1 we highlight some examples of applications within each of the three areas. There are also several emerging areas of ML that are promising for CFD, which we discuss in this section. One area is non-intrusive sensing—that is, the possibility of performing flow predictions based on, for example, information at the wall. This task, which has important implications for closed-loop flow control136, has been carried out via CNNs in turbulent channels137. In connection with the work by Guastoni et al.137, there are a number of studies documenting the possibility of performing super-resolution predictions (for example, when limited flow information is available) in wall-bounded turbulence using CNNs, autoencoders and generative-adversarial networks138–141. Another promising direction is the imposition of constraints based on physical invariances and symmetries on the ML model, which has been used for SGS modeling131, ROMs118 and for geophysical flows133.

PINNs constitute another family of methods that are becoming widely adopted for scientific computing more broadly. This technique, introduced by Raissi et al.16, uses deep learning to solve PDEs, exploiting the concept of automatic differentiation used in the back-propagation algorithm to calculate partial derivatives and form the equations, which are enforced through a loss function. Although the method is different from the ML approaches discussed above, in certain cases PINNs provide a promising alternative to traditional numerical methods for solving PDEs. This framework also shows promise for biomedical simulations, in particular after the recent work by Raissi et al.142, in which the concentration field of a tracer is used as an input to accurately predict the instantaneous velocity fields by minimizing the residual of the Navier–Stokes equations. The growing usage of PINNs highlights the relevance of exploiting the knowledge we have about the governing equations when conducting flow simulations. In fact, PINNs are being used for turbulence modeling143 (solving the RANS equations without the need of a model for the Reynolds stresses), for developing ROMs144, for dealing with noisy data145 and for accelerating traditional solvers, for example, by means of solving the Poisson equation more efficiently146.

There are also grand challenges in CFD that necessitate new methods in ML. One motivating challenge in CFD is to perform accurate coarse-resolution simulations of unforced 3D wall-bounded turbulent flows. The production of turbulent kinetic energy (TKE) in these flows takes place close to the wall147 (although at very high Reynolds numbers, outer-layer production also becomes relevant), and therefore using coarse meshes may substantially distort TKE production. In these flows of high technological importance, the TKE production sustains the turbulent fluctuations, so a correction between coarse and fine resolutions may not be sufficient to obtain accurate results. Another challenge is the need for robust and interpretable models, a goal that is not easily attainable with deep-learning methods. There are, however, promising directions54,55 to achieve interpretable deep-learning models, with important applications to CFD. A clear challenge of CFD is the large energy consumption associated with large-scale simulations. In this sense, ML can be an enabler for more efficient operation of high-performance-computing (HPC) facilities7, or even for future computational architectures, including quantum computers148, as shown in Table 1.

It is worth noting that, although ML has very high potential for CFD, there are also a number of caveats that may limit its applicability to certain areas of CFD. First, ML methods, such as deep learning, are often expensive to train and require large amounts of data. It is therefore important to identify areas where ML outperforms classical methods, which have been established for decades, and may be more accurate and efficient in certain cases. For example, it is possible to develop interpretable ROMs with traditional methods such as POD and DMD, and although deep learning can provide some advantages103, the simpler ML methods may be more efficient and straightforward. Finally, it is important to assess the information about the training data available to the user: certain flow properties (for example, incompressibility, periodicity and so on) should be embedded in the ML model to increase training efficiency and prediction accuracy. There is also a question of how the training data are generated, and whether the associated cost is taken into account when benchmarking. In this context, transfer learning is a promising area137.

Despite the caveats above, we believe that the trend of advancing CFD with ML will continue in the future. This progress will continue to be driven by an increasing availability of high-quality data, high-performance computing and a better understanding of, and facility with, these emerging techniques. Improved adoption of reproducible research standards149,150 is also a necessary step. Given the critical importance of data when developing ML models, we advocate that the community continues to establish proper benchmark systems and best practices for open-source data and software so as to harness the full potential of ML to improve CFD.

Received: 28 January 2022; Accepted: 17 May 2022; Published online: 27 June 2022

References
1. Godunov, S. & Bohachevsky, I. Finite difference method for numerical computation of discontinuous solutions of the equations of fluid dynamics. Mat. Sb. 47, 271–306 (1959).
2. Eymard, R., Gallouët, T. & Herbin, R. Finite volume methods. Handb. Numer. Anal. 7, 713–1018 (2000).
3. Zienkiewicz, O. C., Taylor, R. L., Nithiarasu, P. & Zhu, J. Z. The Finite Element Method, 3 (Elsevier, 1977).
4. Canuto, C., Hussaini, M. Y., Quarteroni, A. & Zang, T. A. Spectral Methods in Fluid Dynamics (Springer Science & Business Media, 2012).
5. Brunton, S. L. & Kutz, J. N. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems and Control (Cambridge Univ. Press, 2019).
6. Recht, B. A tour of reinforcement learning: the view from continuous control. Annu. Rev. Control Robot. Auton. Syst. 2, 253–279 (2019).
7. Vinuesa, R. et al. The role of artificial intelligence in achieving the sustainable development goals. Nat. Commun. 11, 233 (2020).
8. Noé, F., Tkatchenko, A., Müller, K.-R. & Clementi, C. Machine learning for molecular simulation. Annu. Rev. Phys. Chem. 71, 361–390 (2020).
9. Niederer, S. A., Sacks, M. S., Girolami, M. & Willcox, K. Scaling digital twins from the artisanal to the industrial. Nat. Comput. Sci. 1, 313–320 (2021).
10. Samuel, A. L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 3, 210–229 (1959).
11. Brenner, M., Eldredge, J. & Freund, J. Perspective on machine learning for advancing fluid mechanics. Phys. Rev. Fluids 4, 100501 (2019).
12. Brunton, S. L., Noack, B. R. & Koumoutsakos, P. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, 477–508 (2020).
13. Duraisamy, K., Iaccarino, G. & Xiao, H. Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 51, 357–377 (2019).
14. Ahmed, S. E. et al. On closures for reduced order models—a spectrum of first-principle to machine-learned avenues. Phys. Fluids 33, 091301 (2021).
15. Wang, B. & Wang, J. Application of artificial intelligence in computational fluid dynamics. Ind. Eng. Chem. Res. 60, 2772–2790 (2021).
16. Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
17. Karniadakis, G. E. et al. Physics-informed machine learning. Nat. Rev. Phys. 3, 422–440 (2021).
18. Noé, F., Olsson, S., Köhler, J. & Wu, H. Boltzmann generators: sampling equilibrium states of many-body systems with deep learning. Science 365, eaaw1147 (2019).
19. Vinuesa, R., Hosseini, S. M., Hanifi, A., Henningson, D. S. & Schlatter, P. Pressure-gradient turbulent boundary layers developing around a wing section. Flow Turbul. Combust. 99, 613–641 (2017).
20. Choi, H. & Moin, P. Grid-point requirements for large eddy simulation: Chapman's estimates revisited. Phys. Fluids 24, 011702 (2012).
21. Bar-Sinai, Y., Hoyer, S., Hickey, J. & Brenner, M. P. Learning data-driven discretizations for partial differential equations. Proc. Natl Acad. Sci. USA 116, 15344–15349 (2019).
22. Stevens, B. & Colonius, T. Enhancement of shock-capturing methods via machine learning. Theor. Comput. Fluid Dyn. 34, 483–496 (2020).
23. Jeon, J., Lee, J. & Kim, S. J. Finite volume method network for the acceleration of unsteady computational fluid dynamics: non-reacting and reacting flows. Int. J. Energy Res. https://doi.org/10.1002/er.7879 (2022).
24. Stevens, B. & Colonius, T. FiniteNet: a fully convolutional LSTM network architecture for time-dependent partial differential equations. Preprint at https://arxiv.org/abs/2002.03014 (2020).
25. Kochkov, D. et al. Machine learning-accelerated computational fluid dynamics. Proc. Natl Acad. Sci. USA 118, e2101784118 (2021).
26. Chandler, G. J. & Kerswell, R. R. Invariant recurrent solutions embedded in a turbulent two-dimensional Kolmogorov flow. J. Fluid Mech. 722, 554–595 (2013).
27. Bauer, P., Thorpe, A. & Brunet, G. The quiet revolution of numerical weather prediction. Nature 525, 47–55 (2015).
28. Schenk, F. et al. Warm summers during the Younger Dryas cold reversal. Nat. Commun. 9, 1634 (2018).
29. Vinuesa, R. et al. Turbulent boundary layers around wing sections up to Rec = 1,000,000. Int. J. Heat Fluid Flow 72, 86–99 (2018).
30. Aloy Torás, C., Mimica, P. & Martinez Sober, M. in Artificial Intelligence Research and Development: Current Challenges, New Trends and Applications (eds Falomir, Z. et al.) 59–63 (IOS Press, 2018).
31. Li, Z. et al. Fourier neural operator for parametric partial differential equations. Preprint at https://arxiv.org/abs/2010.08895 (2020).
32. Li, Z. et al. Multipole graph neural operator for parametric partial differential equations. In Proc. 34th Int. Conf. on Neural Information Processing Systems 6755–6766 (NIPS, 2020).
33. Li, Z. et al. Neural operator: graph kernel network for partial differential equations. Preprint at https://arxiv.org/abs/2003.03485 (2020).
34. Shan, T. et al. Study on a Poisson's equation solver based on deep learning technique. In Proc. 2017 IEEE Electrical Design of Advanced Packaging and Systems Symposium (EDAPS) 1–3 (IEEE, 2017).
35. Zhang, Z. et al. Solving Poisson's equation using deep learning in particle simulation of PN junction. In Proc. 2019 Joint International Symposium on Electromagnetic Compatibility, Sapporo and Asia-Pacific International Symposium on Electromagnetic Compatibility (EMC Sapporo/APEMC) 305–308 (IEEE, 2019).
36. Bridson, R. Fluid Simulation (A. K. Peters, 2008).
37. Ajuria, E. et al. Towards a hybrid computational strategy based on deep learning for incompressible flows. In Proc. AIAA AVIATION 2020 Forum 1–17 (AIAA, 2020).
38. Özbay, A. et al. Poisson CNN: convolutional neural networks for the solution of the Poisson equation on a Cartesian mesh. Data Centric Eng. 2, E6 (2021).
39. Weymouth, G. D. Data-driven multi-grid solver for accelerated pressure projection. Preprint at https://arxiv.org/abs/2110.11029 (2021).
40. Fukami, K., Nabae, Y., Kawai, K. & Fukagata, K. Synthetic turbulent inflow generator using machine learning. Phys. Rev. Fluids 4, 064603 (2019).
41. Morita, Y. et al. Applying Bayesian optimization with Gaussian-process regression to computational fluid dynamics problems. J. Comput. Phys. 449, 110788 (2022).
42. Boussinesq, J. V. Théorie Analytique de la Chaleur: Mise en Harmonie avec la Thermodynamique et avec la Théorie Mécanique de la Lumière T. 2, Refroidissement et Échauffement par Rayonnement Conductibilité des Tiges, Lames et Masses Cristallines Courants de Convection Théorie Mécanique de la Lumière (Gauthier-Villars, 1923).
43. Slotnick, J. et al. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences. Technical Report NASA/CR-2014-218178 (NASA, 2014).
44. Kutz, J. N. Deep learning in fluid dynamics. J. Fluid Mech. 814, 1–4 (2017).
45. Ling, J., Kurzawski, A. & Templeton, J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, 155–166 (2016).
46. Craft, T. J., Launder, B. E. & Suga, K. Development and application of a cubic eddy-viscosity model of turbulence. Int. J. Heat Fluid Flow 17, 108–115 (1996).
47. Marin, O., Vinuesa, R., Obabko, A. V. & Schlatter, P. Characterization of the secondary flow in hexagonal ducts. Phys. Fluids 28, 125101 (2016).
48. Spalart, P. R. Strategies for turbulence modelling and simulations. Int. J. Heat Fluid Flow 21, 252–263 (2000).
49. Vidal, A., Nagib, H. M., Schlatter, P. & Vinuesa, R. Secondary flow in spanwise-periodic in-phase sinusoidal channels. J. Fluid Mech. 851, 288–316 (2018).
50. Wang, J. X., Wu, J. L. & Xiao, H. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data. Phys. Rev. Fluids 2, 034603 (2017).
51. Wu, J.-L., Xiao, H. & Paterson, E. Physics-informed machine learning approach for augmenting turbulence models: a comprehensive framework. Phys. Rev. Fluids 3, 074602 (2018).
52. Jiang, C. et al. An interpretable framework of data-driven turbulence modeling using deep neural networks. Phys. Fluids 33, 055133 (2021).
53. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206–215 (2019).
54. Vinuesa, R. & Sirmacek, B. Interpretable deep-learning models to help achieve the sustainable development goals. Nat. Mach. Intell. 3, 926 (2021).
55. Cranmer, M. et al. Discovering symbolic models from deep learning with inductive biases. In Proc. 34th Int. Conf. on Neural Information Processing Systems 17429–17442 (NIPS, 2020).
56. Weatheritt, J. & Sandberg, R. D. A novel evolutionary algorithm applied to algebraic modifications of the RANS stress-strain relationship. J. Comput. Phys. 325, 22–37 (2016).
57. Koza, J. R. Genetic Programming: On the Programming of Computers by Means of Natural Selection (MIT Press, 1992).
58. Weatheritt, J. & Sandberg, R. D. The development of algebraic stress models using a novel evolutionary algorithm. Int. J. Heat Fluid Flow 68, 298–318 (2017).
59. Brunton, S. L., Proctor, J. L. & Kutz, J. N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl Acad. Sci. USA 113, 3932–3937 (2016).
60. Beetham, S. & Capecelatro, J. Formulating turbulence closures using sparse regression with embedded form invariance. Phys. Rev. Fluids 5, 084611 (2020).
61. Schmelzer, M., Dwight, R. P. & Cinnella, P. Discovery of algebraic Reynolds-stress models using sparse symbolic regression. Flow Turbul. Combust. 104, 579–603 (2020).
62. Beetham, S., Fox, R. O. & Capecelatro, J. Sparse identification of multiphase turbulence closures for coupled fluid-particle flows. J. Fluid Mech. 914, A11 (2021).
63. Rezaeiravesh, S., Vinuesa, R. & Schlatter, P. On numerical uncertainties in scale-resolving simulations of canonical wall turbulence. Comput. Fluids 227, 105024 (2021).
64. Emory, M., Larsson, J. & Iaccarino, G. Modeling of structural uncertainties in Reynolds-averaged Navier-Stokes closures. Phys. Fluids 25, 110822 (2013).
65. Mishra, A. A. & Iaccarino, G. Uncertainty estimation for Reynolds-averaged Navier-Stokes predictions of high-speed aircraft nozzle jets. AIAA J. 55, 3999–4004 (2017).
66. Poroseva, S., Colmenares, F. J. D. & Murman, S. On the accuracy of RANS simulations with DNS data. Phys. Fluids 28, 115102 (2016).
67. Wu, J., Xiao, H., Sun, R. & Wang, Q. Reynolds-averaged Navier-Stokes equations with explicit data-driven Reynolds stress closure can be ill-conditioned. J. Fluid Mech. 869, 553–586 (2019).
68. Obiols-Sales, O., Vishnu, A., Malaya, N. & Chandramowlishwaran, A. CFDNet: a deep learning-based accelerator for fluid simulations. In Proc. 34th ACM Int. Conf. on Supercomputing 1–12 (ACM, 2020).
69. Spalart, P. & Allmaras, S. A one-equation turbulence model for aerodynamic flows. In 30th Aerospace Sciences Meeting and Exhibit, AIAA Paper 1992-0439 (AIAA, 1992).
70. Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. A tensorial approach to computational continuum mechanics using object-oriented techniques. Comput. Phys. 12, 620–631 (1998).
71. Gibou, F., Hyde, D. & Fedkiw, R. Sharp interface approaches and deep learning techniques for multiphase flows. J. Comput. Phys. 380, 442–463 (2019).
72. Ma, M., Lu, J. & Tryggvason, G. Using statistical learning to close two-fluid multiphase flow equations for a simple bubbly system. Phys. Fluids 27, 092101 (2015).
73. Mi, Y., Ishii, M. & Tsoukalas, L. H. Flow regime identification methodology with neural networks and two-phase flow models. Nucl. Eng. Des. 204, 87–100 (2001).
74. Smagorinsky, J. General circulation experiments with the primitive equations: I. The basic experiment. Mon. Weather Rev. 91, 99–164 (1963).
75. Beck, A. D., Flad, D. G. & Munz, C.-D. Deep neural networks for data-driven LES closure models. J. Comput. Phys. 398, 108910 (2019).
76. Lapeyre, C. J., Misdariis, A., Cazard, N., Veynante, D. & Poinsot, T. Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates. Combust. Flame 203, 255–264 (2019).
77. Maulik, R., San, O., Rasheed, A. & Vedula, P. Subgrid modelling for two-dimensional turbulence using neural networks. J. Fluid Mech. 858, 122–144 (2019).
78. Kraichnan, R. H. Inertial ranges in two-dimensional turbulence. Phys. Fluids 10, 1417–1423 (1967).
79. Vollant, A., Balarac, G. & Corre, C. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures. J. Turbul. 18, 854–878 (2017).
80. Gamahara, M. & Hattori, Y. Searching for turbulence models by artificial neural network. Phys. Rev. Fluids 2, 054604 (2017).
81. Maulik, R. & San, O. A neural network approach for the blind deconvolution of turbulent flows. J. Fluid Mech. 831, 151–181 (2017).
82. Reissmann, M., Hasslberger, J., Sandberg, R. D. & Klein, M. Application of gene expression programming to a-posteriori LES modeling of a Taylor Green vortex. J. Comput. Phys. 424, 109859 (2021).
83. Novati, G., de Laroussilhe, H. L. & Koumoutsakos, P. Automating turbulence modelling by multi-agent reinforcement learning. Nat. Mach. Intell. 3, 87–96 (2021).
84. Hutchins, N., Chauhan, K., Marusic, I., Monty, J. & Klewicki, J. Towards reconciling the large-scale structure of turbulent boundary layers in the atmosphere and laboratory. Bound. Layer Meteorol. 145, 273–306 (2012).
85. Britter, R. E. & Hanna, S. R. Flow and dispersion in urban areas. Annu. Rev. Fluid Mech. 35, 469–496 (2003).
86. Giometto, M. G. et al. Spatial characteristics of roughness sublayer mean flow and turbulence over a realistic urban surface. Bound. Layer Meteorol. 160, 425–452 (2016).
87. Bou-Zeid, E., Meneveau, C. & Parlange, M. A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows. Phys. Fluids 17, 025105 (2005).
88. Moeng, C. A large-eddy-simulation model for the study of planetary boundary-layer turbulence. J. Atmos. Sci. 13, 2052–2062 (1984).
89. Mizuno, Y. & Jiménez, J. Wall turbulence without walls. J. Fluid Mech. 723, 429–455 (2013).
90. Encinar, M. P., García-Mayoral, R. & Jiménez, J. Scaling of velocity fluctuations in off-wall boundary conditions for turbulent flows. J. Phys. Conf. Ser. 506, 012002 (2014).
91. Sasaki, K., Vinuesa, R., Cavalieri, A. V. G., Schlatter, P. & Henningson, D. S. Transfer functions for flow predictions in wall-bounded turbulence. J. Fluid Mech. 864, 708–745 (2019).
92. Arivazhagan, G. B. et al. Predicting the near-wall region of turbulence through convolutional neural networks. Preprint at https://arxiv.org/abs/2107.07340 (2021).
93. Milano, M. & Koumoutsakos, P. Neural network modeling for near wall turbulent flow. J. Comput. Phys. 182, 1–26 (2002).
94. Moriya, N. et al. Inserting machine-learned virtual wall velocity for large-eddy simulation of turbulent channel flows. Preprint at https://arxiv.org/abs/2106.09271 (2021).
95. Bae, H. J. & Koumoutsakos, P. Scientific multi-agent reinforcement learning for wall-models of turbulent flows. Nat. Commun. 13, 1443 (2022).
96. Taira, K. et al. Modal analysis of fluid flows: an overview. AIAA J. 55, 4013–4041 (2017).
97. Rowley, C. W. & Dawson, S. T. Model reduction for flow analysis and control. Annu. Rev. Fluid Mech. 49, 387–417 (2017).
98. Taira, K. et al. Modal analysis of fluid flows: applications and outlook. AIAA J. 58, 998–1022 (2020).
99. Lumley, J. L. in Atmospheric Turbulence and Wave Propagation (eds Yaglom, A. M. & Tatarski, V. I.) 166–178 (1967).
100. Schmid, P. J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5–28 (2010).
101. Baldi, P. & Hornik, K. Neural networks and principal component analysis: learning from examples without local minima. Neural Netw. 2, 53–58 (1989).
102. Murata, T., Fukami, K. & Fukagata, K. Nonlinear mode decomposition with convolutional neural networks for fluid dynamics. J. Fluid Mech. 882, A13 (2020).
103. Eivazi, H., Le Clainche, S., Hoyas, S. & Vinuesa, R. Towards extraction of orthogonal and parsimonious non-linear modes from turbulent flows. Expert Syst. Appl. 202, 117038 (2022).
104. Noack, B. R., Afanasiev, K., Morzynski, M., Tadmor, G. & Thiele, F. A hierarchy of low-dimensional models for the transient and post-transient cylinder wake. J. Fluid Mech. 497, 335–363 (2003).
105. Lee, K. & Carlberg, K. T. Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. J. Comput. Phys. 404, 108973 (2020).
106. Benner, P., Gugercin, S. & Willcox, K. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57, 483–531 (2015).
107. Carlberg, K., Barone, M. & Antil, H. Galerkin v. least-squares Petrov-Galerkin projection in nonlinear model reduction. J. Comput. Phys. 330, 693–734 (2017).
108. Cenedese, M., Axås, J., Bäuerlein, B., Avila, K. & Haller, G. Data-driven modeling and prediction of nonlinearizable dynamics via spectral submanifolds. Nat. Commun. 13, 872 (2022).
109. Lopez-Martin, M., Le Clainche, S. & Carro, B. Model-free short-term fluid dynamics estimator with a deep 3D-convolutional neural network. Expert Syst. Appl. 177, 114924 (2021).
110. Vlachas, P. R., Byeon, W., Wan, Z. Y., Sapsis, T. P. & Koumoutsakos, P. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. Proc. R. Soc. A 474, 20170844 (2018).
111. Srinivasan, P. A., Guastoni, L., Azizpour, H., Schlatter, P. & Vinuesa, R. Predictions of turbulent shear flows using deep neural networks. Phys. Rev. Fluids 4, 054603 (2019).
112. Abadía-Heredia, R. et al. A predictive hybrid reduced order model based on proper orthogonal decomposition combined with deep learning architectures. Expert Syst. Appl. 187, 115910 (2022).
113. Pathak, J., Hunt, B., Girvan, M., Lu, Z. & Ott, E. Model-free prediction of large spatiotemporally chaotic systems from data: a reservoir computing approach. Phys. Rev. Lett. 120, 024102 (2018).
114. Kaiser, E. et al. Cluster-based reduced-order modelling of a mixing layer. J. Fluid Mech. 754, 365–414 (2014).
115. Peherstorfer, B. & Willcox, K. Data-driven operator inference for nonintrusive projection-based model reduction. Comput. Meth. Appl. Mech. Eng. 306, 196–215 (2016).
116. Benner, P., Goyal, P., Kramer, B., Peherstorfer, B. & Willcox, K. Operator inference for non-intrusive model reduction of systems with non-polynomial nonlinear terms. Comput. Meth. Appl. Mech. Eng. 372, 113433 (2020).
117. Qian, E., Kramer, B., Peherstorfer, B. & Willcox, K. Lift & Learn: physics-informed machine learning for large-scale nonlinear dynamical systems. Physica D 406, 132401 (2020).
118. Loiseau, J.-C. & Brunton, S. L. Constrained sparse Galerkin regression. J. Fluid Mech. 838, 42–67 (2018).
119. Loiseau, J.-C. Data-driven modeling of the chaotic thermal convection in an annular thermosyphon. Theor. Comput. Fluid Dyn. 34, 339–365 (2020).
120. Guan, Y., Brunton, S. L. & Novosselov, I. Sparse nonlinear models of chaotic electroconvection. R. Soc. Open Sci. 8, 202367 (2021).
121. Deng, N., Noack, B. R., Morzynski, M. & Pastur, L. R. Low-order model for successive bifurcations of the fluidic pinball. J. Fluid Mech. 884, A37 (2020).
122. Deng, N., Noack, B. R., Morzynski, M. & Pastur, L. R. Galerkin force model for transient and post-transient dynamics of the fluidic pinball. J. Fluid Mech. 918, A4 (2021).
123. Callaham, J. L., Rigas, G., Loiseau, J.-C. & Brunton, S. L. An empirical mean-field model of symmetry-breaking in a turbulent wake. Sci. Adv. 8, eabm4786 (2022).
124. Callaham, J. L., Brunton, S. L. & Loiseau, J.-C. On the role of nonlinear correlations in reduced-order modelling. J. Fluid Mech. 938, A1 (2022).
125. Champion, K., Lusch, B., Kutz, J. N. & Brunton, S. L. Data-driven discovery of coordinates and governing equations. Proc. Natl Acad. Sci. USA 116, 22445–22451 (2019).
126. Yeung, E., Kundu, S. & Hodas, N. Learning deep neural network representations for Koopman operators of nonlinear dynamical systems. Preprint at https://arxiv.org/abs/1708.06850 (2017).
127. Takeishi, N., Kawahara, Y. & Yairi, T. Learning Koopman invariant subspaces for dynamic mode decomposition. In Advances in Neural Information Processing Systems 1130–1140 (ACM, 2017).
128. Lusch, B., Kutz, J. N. & Brunton, S. L. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 9, 4950 (2018).
129. Mardt, A., Pasquali, L., Wu, H. & Noé, F. VAMPnets: deep learning of
136. Vinuesa, R., Lehmkuhl, O., Lozano-Durán, A. & Rabault, J. Flow control in wings and discovery of novel approaches via deep reinforcement learning. Fluids 865, 281–302 (2019).
137. Guastoni, L. et al. Convolutional-network models to predict wall-bounded turbulence from wall quantities. J. Fluid Mech. 928, A27 (2021).
138. Kim, H., Kim, J., Won, S. & Lee, C. Unsupervised deep learning for super-resolution reconstruction of turbulence. J. Fluid Mech. 910, A29 (2021).
139. Fukami, K., Fukagata, K. & Taira, K. Super-resolution reconstruction of turbulent flows with machine learning. J. Fluid Mech. 870, 106–120 (2019).
140. Güemes, A. et al. From coarse wall measurements to turbulent velocity fields through deep learning. Phys. Fluids 33, 075121 (2021).
141. Fukami, K., Nakamura, T. & Fukagata, K. Convolutional neural network based hierarchical autoencoder for nonlinear mode decomposition of fluid field data. Phys. Fluids 32, 095110 (2020).
142. Raissi, M., Yazdani, A. & Karniadakis, G. E. Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations. Science 367, 1026–1030 (2020).
143. Eivazi, H., Tahani, M., Schlatter, P. & Vinuesa, R. Physics-informed neural networks for solving Reynolds-averaged Navier-Stokes equations. Preprint at https://arxiv.org/abs/2107.10711 (2021).
144. Kim, Y., Choi, Y., Widemann, D. & Zohdi, T. A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. Comput. Phys. 451, 110841 (2021).
145. Eivazi, H. & Vinuesa, R. Physics-informed deep-learning applications to experimental fluid mechanics. Preprint at https://arxiv.org/abs/2203.15402 (2022).
146. Markidis, S. The old and the new: can physics-informed deep-learning replace traditional linear solvers? Front. Big Data https://doi.org/10.3389/fdata.2021.669097 (2021).
147. Kim, J., Moin, P. & Moser, R. Turbulence statistics in fully developed channel flow at low Reynolds number. J. Fluid Mech. 177, 133–166 (1987).
148. Fukagata, K. Towards quantum computing of turbulence. Nat. Comput. Sci. 2, 68–69 (2022).
149. Barba, L. A. The hard road to reproducibility. Science 354, 142–142 (2016).
150. Mesnard, O. & Barba, L. A. Reproducible and replicable computational fluid dynamics: it's harder than you think. Comput. Sci. Eng. 19, 44–55 (2017).

Acknowledgements
R.V. acknowledges financial support from the Swedish Research Council (VR) and from ERC grant no. '2021-CoG-101043998, DEEPCONTROL'. S.L.B. acknowledges funding support from the Army Research Office (ARO W911NF-19-1-0045; programme manager M. Munson).
molecular kinetics. Nat. Commun. 9, 5 (2018).
130. Otto, S. E. & Rowley, C. W. Linearly-recurrent autoencoder networks for Author contributions
learning dynamics. SIAM J. Appl. Dyn. Syst. 18, 558–593 (2019). Both authors contributed equally to the ideation of the study and the writing process.
131. Wang, R., Walters, R. & Yu, R. Incorporating symmetry into deep dynamics
models for improved generalization. Preprint at https://arxiv.org/ Competing interests
abs/2002.03061 (2020). The authors declare no competing interests.
132. Wang, R., Kashinath, K., Mustafa, M., Albert, A. & Yu, R. Towards
physics-informed deep learning for turbulent flow prediction. In Proc. 26th
ACM SIGKDD International Conference on Knowledge Discovery & Data Additional information
Mining 1457–1466 (ACM, 2020). Correspondence should be addressed to Ricardo Vinuesa.
133. Frezat, H., Balarac, G., Le Sommer, J., Fablet, R. & Lguensat, R. Physical Peer review information Nature Computational Science thanks Michael Brenner and
invariance in neural networks for subgrid-scale scalar flux modeling. Phys. the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Rev. Fluids 6, 024607 (2021). Principal Handling Editor: Jie Pan, in collaboration with the Nature Computational
134. Erichson, N. B., Muehlebach, M. & Mahoney, M. W. Physics-informed Science team.
autoencoders for Lyapunov-stable fluid flow prediction. Preprint at https://
Reprints and permissions information is available at www.nature.com/reprints.
arxiv.org/abs/1905.10866 (2019).
135. Kaptanoglu, A. A., Callaham, J. L., Hansen, C. J., Aravkin, A. & Brunton, S. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in
L. Promoting global stability in data-driven models of quadratic nonlinear published maps and institutional affiliations.
dynamics. Phys. Rev. Fluids 6, 094401 (2021). © Springer Nature America, Inc. 2022

366 Nature Computational Science | VOL 2 | June 2022 | 358–366 | www.nature.com/natcomputsci
