Introduction
ADVANCED DEEP LEARNING FOR PHYSICS, COURSE OVERVIEW
General Motivation
Computational Methods in the Age of Deep Learning
• Traditional simulation methods have shown tremendous success
- Magnetohydrodynamics
- Turbo machinery and combustion
- Aerodynamic shape optimisation
- Blood flow in a human heart
- …
General Motivation
Computational Methods in the Age of Deep Learning
• Traditional simulation methods have shown tremendous success
• “Deep Learning” comes along.
- Can find cats & dogs in images
- AlphaGo
- AlphaFold 1 & 2
- ChatGPT
• Open question: how to join forces of both worlds
Connections to Neuroscience and Psychology
Thinking, Fast and Slow (D. Kahneman)
• System 1 - fast, intuition
• System 2 - analytic thinking
• Goal:
- Simulations provide “system 2” functionality
- Combine with improved intuition via “system 1” from NNs
[Figure: repeating pipeline alternating “System 1 / NN component” and “System 2 / simulation” blocks]
Machine Learning in Science
Scientific Discoveries
• Traditional: experiment & theory. Contemporary: computation.
• Machine learning & data-driven approaches:
- Understanding nature: Kepler’s orbits (1609)
- Discovering patterns from observations (Darcy’s law, 1856)
- Principal component analysis (e.g. Pearson, 1901)
• Deep Learning as a fundamental step forward
Versus Classical Numerical Methods
Advancements in scientific disciplines
• Case 1) Brilliant insight / flash of light / theory
- Einstein: theory of general relativity
- Many derivative works follow…
• Case 2) Long, arduous path, tiny steps, many unsolved questions
- Navier-Stokes equations: existence & smoothness (a Millennium Prize problem) still unproven
- Hard work, chipping away to uncover the nature of things…
Versus Classical Numerical Methods
• Traditional numerical methods had a difficult start (cf. Case 2)
- Initially frowned upon
- “Unreliable”, “inexact”, etc. …
- Now we know: extremely useful tools
• Deep Learning also clearly of “the second kind”
• No reason not to look into it
Rough Categorization - Types of Integration
Physical Deep Learning
[Figure: physical system with forward simulations and inverse problems; NN integration via supervised approaches, loss terms, or hybrid methods]
Category 1/3: Supervised / No Integration
E.g.: https://github.com/thunil/Deep-Flow-Prediction
• Supervised loss; the differential equation (PDE / ODE) is only used to generate data
• Thuerey et al.: learning RANS flows
Category 2/3: Loss Terms / “Unsupervised”
E.g.: https://github.com/google/FluidNet
• “Unsupervised” training
• Tompson et al.: learning Poisson solvers
• In a nutshell: minimize ∇ ⋅ f(x)
• Raissi et al.: “physics-informed” networks, PINNs
• Physics objective: conservation of mass via a divergence-free velocity field
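A minimal sketch of such a loss term (PyTorch assumed; the grid layout and finite-difference stencil are illustrative choices, not the original FluidNet code):

```python
import torch

def divergence_loss(v):
    """Mean squared divergence of a velocity field as a physics loss.
    v: (batch, 2, H, W) tensor holding (vx, vy) on a unit-spaced grid.
    Central differences on the interior; no ground-truth labels needed."""
    dvx_dx = (v[:, 0, 1:-1, 2:] - v[:, 0, 1:-1, :-2]) / 2.0
    dvy_dy = (v[:, 1, 2:, 1:-1] - v[:, 1, :-2, 1:-1]) / 2.0
    return ((dvx_dx + dvy_dy) ** 2).mean()

# usage: loss = divergence_loss(net(x))  -- computed purely from the prediction
```

No reference velocities appear in the loss, which is why this style of training is labeled “unsupervised”.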
Category 3/3: Interleaved / Tight Integration
E.g.: https://github.com/tum-pbs/Solver-in-the-Loop
• Train neural network to work alongside numerical solver (details will follow)
• Um et al.: “solver in the loop” approach
• Unsuper-/supervised distinction not too meaningful
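As a rough sketch of the interleaving (PyTorch-style pseudocode; `solver_step`, `net`, and the additive correction are illustrative assumptions, not the exact Solver-in-the-Loop setup):

```python
import torch

def interleaved_loss(net, solver_step, u0, u_ref):
    """Run a differentiable solver inside the training loop and let the
    network correct each step; compare against a reference trajectory.
    u_ref: sequence of reference states with u_ref[0] == u0."""
    u, loss = u0, 0.0
    for t in range(len(u_ref) - 1):
        u = solver_step(u)   # numerical update, kept in the autograd graph
        u = u + net(u)       # learned correction on top of the solver
        loss = loss + torch.mean((u - u_ref[t + 1]) ** 2)
    return loss / (len(u_ref) - 1)
```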
Hybrid Solver Example
Example in Motion: Unsteady Wake Flow in 3D, Re=546.9
[Video: side-by-side comparison of the source solver, solver + NN, and the reference]
Organization
Course Overview
People https://ge.in.tum.de/about/
• Main Lecturers
- Nils Thuerey
- Mario Lino
• Exercise Supervision
- Qiang Liu
Content
1. Introduction & Supervised Learning
2. Physical Losses & Differentiable Solvers
3. Constructing Hybrid Solvers
4. Graph Networks
5. Time-Series Predictions
6. Gradients and Adjoint Methods
7. Diffusion Models
8. Reinforcement Learning
9. Conclusions
Content
What this course is _not_
• No introduction to deep learning
- Expected to know: GD, backprop, etc. …
- Self-check: difference between autoencoder, U-Net, and ResNet?
• No introduction to numerical simulation
- Expected to know: finite differences & co., basic iterative solvers
• [But: both topics not super complicated, can potentially be learned on the side]
• Focus lies on methods at the intersection of DL & simulations
Course Overview
Practical Aspects - Lectures
• 2h per week lecture + 2h exercise
• Lecture: Tuesdays at 16:15
• [No meeting during 8:00 Friday time slot]
Course Overview
Practical Aspects - Exercises
• Exercises: 10 weekly assignments
• Posted each Tuesday, hand-in 1 week later
• Estimated weekly effort: 2h
• In-depth focus on working with “differentiable solvers”
• Individual work required; BBB sessions for support
• Increasing number of “points” per exercise (1 to 5)
• Grade bonus: 0.6 for 20 points, 0.3 for 10 points.
Course Overview
Practical Aspects - Exercises
• Implementation based on PhiFlow framework
• Feel free to start right away: available at https://github.com/tum-pbs/PhiFlow
• Get started via demos https://github.com/tum-pbs/PhiFlow/tree/develop/demos
• Exercise-related discussions & questions: Thursdays 15:00 & 16:00
- Main place for exercise-related topics!
- Held online, via BBB (see Moodle)
Course Overview
Practical Aspects - Exam
• Written exam, 90 min (oral if there are fewer participants)
• Exam Dates (tentative)
- Aug. 8, 11:30
- Oct. 10, 11:30
• Covers exercise topics, and parts of the lecture (details will follow)
Script
Primarily: https://www.physicsbaseddeeplearning.org/
• Plus slides on Moodle
• No separate script…
Teaser Example from PBDL
https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/intro-teaser.ipynb
• DL extremely powerful…
• … but sometimes surprisingly wrong
Supervised Learning
ADVANCED DEEP LEARNING FOR PHYSICS
Model Equations
Model Equations
Basic PDEs
• Diffusion
• Burgers
• Navier-Stokes
Model Equations
Basic PDEs
• Diffusion
• Burgers
• Navier-Stokes
∂u/∂t − α ∇²u = 0
Diffusion constant α
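For intuition, a minimal explicit update for the 1D case (NumPy; the FTCS scheme and periodic boundaries are illustrative choices):

```python
import numpy as np

def diffusion_step(u, alpha, dt, dx):
    """One explicit (FTCS) step of du/dt = alpha * d2u/dx2 in 1D.
    Periodic boundaries; stable only for alpha * dt / dx**2 <= 0.5."""
    laplacian = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * alpha * laplacian
```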
Model Equations
Basic PDEs
• Diffusion
• Burgers (in 2D)
• Navier-Stokes
∂u_x/∂t + u · ∇u_x = ν ∇·∇u_x
∂u_y/∂t + u · ∇u_y = ν ∇·∇u_y
Kinematic Viscosity ν
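Correspondingly, a 1D sketch with the extra non-linear advection term u ∂u/∂x (NumPy; central differences without upwinding, so purely illustrative):

```python
import numpy as np

def burgers_step(u, nu, dt, dx):
    """One explicit step of 1D Burgers: du/dt + u du/dx = nu d2u/dx2.
    Periodic boundaries, for illustration only."""
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-u * dudx + nu * d2udx2)
```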
Model Equations
Basic PDEs
• Diffusion
• Burgers
• Navier-Stokes (2D)
∂u_x/∂t + u · ∇u_x = −(1/ρ) ∇p + ν ∇·∇u_x + g_x
∂u_y/∂t + u · ∇u_y = −(1/ρ) ∇p + ν ∇·∇u_y + g_y
s.t. ∇ · u = 0
Model Equations
Basic PDEs
• Diffusion
• Burgers
• Navier-Stokes (2D)
Distinguish forward and inverse problems:
Forward: given initial & boundary conditions, solve from time t0 to the end time
Inverse: given data/observations, solve for a state (e.g., u(t0)) or a parameter (e.g., viscosity ν)
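A toy sketch of the inverse direction, using 1D diffusion as a stand-in: differentiate through the solver and recover the diffusion constant from an observed end state (PyTorch; all names and numbers are illustrative):

```python
import torch

def solve_forward(u0, alpha, dt, dx, n_steps):
    """Differentiable explicit diffusion solver (toy forward problem)."""
    u = u0
    for _ in range(n_steps):
        lap = (torch.roll(u, -1) - 2 * u + torch.roll(u, 1)) / dx**2
        u = u + dt * alpha * lap
    return u

u0 = torch.sin(torch.linspace(0.0, 6.2832, 64))
u_obs = solve_forward(u0, torch.tensor(0.05), 0.001, 0.1, 100)  # synthetic "observation"

alpha = torch.tensor(0.01, requires_grad=True)  # initial guess for the parameter
opt = torch.optim.Adam([alpha], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((solve_forward(u0, alpha, 0.001, 0.1, 100) - u_obs) ** 2)
    loss.backward()  # gradient flows through all solver steps
    opt.step()
```

The same pattern applies to recovering ν or an initial state u(t0) once the solver is differentiable.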
Supervised Learning - The Basics
Notation
Deep Learning Basics
• Approximate unknown function f*(x) = y*
• Star superscript * denotes ground truth (often intractable)
• Find approximation f(x) over a training data set with (xᵢ, y*ᵢ) pairs
• Minimizing error e(x, y)
• In the simplest case an L2 loss: arg minθ |f(x; θ) − y*|₂²
• Solve the non-linear minimization problem with a gradient-based optimizer (Adam)
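Putting the pieces together, a minimal version of this setup (PyTorch; the network size and the sin stand-in for y* are arbitrary placeholders):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.rand(256, 1) * 6.2832  # training inputs x_i
y_star = torch.sin(x)            # stand-in ground truth y*_i

for epoch in range(1000):
    opt.zero_grad()
    loss = torch.mean((net(x) - y_star) ** 2)  # L2 loss |f(x; θ) − y*|²
    loss.backward()
    opt.step()
```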
Types of Machine Learning
Traditional Viewpoints
• Traditional ML distinction: classification vs. regression
- In the following: regression, f(x) = y, with x, y continuous functions
• Later on, physics regression 𝒫(f(x)) = y: a physical model combined with a regression problem; typically involves highly non-linear functions that cause uneven scaling
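A tiny sketch of this composed form (PyTorch; the operator P and all data are illustrative stand-ins, not a specific physics model):

```python
import torch

# Physics-regression form 𝒫(f(x)) = y: the network output passes through a
# differentiable physics operator P before the loss is computed.
net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))

def P(u):
    # stand-in differentiable physics operator (here: one smoothing step)
    return u + 0.1 * (torch.roll(u, -1, dims=-1) - 2 * u + torch.roll(u, 1, dims=-1))

x, y = torch.randn(16, 8), torch.randn(16, 8)  # placeholder data
loss = torch.mean((P(net(x)) - y) ** 2)        # gradients flow through P into net
```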
Re-cap Supervised Training
• Definition: supervised training := purely data-driven, pre-computed x, y, with a simple loss (e.g., L2)
• Fully data-driven
- Physical model not taken into account
- Sub-optimal accuracy and generalization
• Exactly as before: arg minθ |f(x; θ) − y*|₂²
[Figure: xᵢ → f(xᵢ; θ) → yᵢ with supervised loss e(yᵢ, y*ᵢ) and gradients ∂f/∂θ, ∂e/∂yᵢ]
• Beautiful from an ML perspective: no “inductive biases” needed
• Horrible from a computational perspective: no existing knowledge used
Supervised Training
https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/supervised-airfoils.ipynb
Training Surrogate Models
Supervised Training for Time Integration
• Precompute time series data: given states over time [u⁰, u¹, …, uᴺ]
• Consider batches representing a single time step forward: x := uᵗ, y* := uᵗ⁺¹
• Then, just like before: arg minθ |f(x; θ) − y*|₂²
• Given u⁰, approximate any state uⁱ by i recurrent / autoregressive evaluations of f( ), as sketched below
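A sketch of the autoregressive evaluation (Python; `f` stands for the trained one-step model):

```python
def rollout(f, u0, n):
    """Approximate u^n from u^0 by applying the learned step n times:
    u^{i+1} ≈ f(u^i). Per-step errors accumulate over the rollout."""
    states = [u0]
    for _ in range(n):
        states.append(f(states[-1]))
    return states  # approximations of [u^0, u^1, ..., u^n]
```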
Training Surrogate Models
Recurrent Evaluation
• Per-step approximation errors will grow
• Classic “data shift” problem from ML
• Most likely: time evolution will be unstable, solutions will explode …
Training Surrogate Models
Growing Errors - Example
• Simple Navier-Stokes “wake flow”
• Here: all models are quite good
• “Drift” from G.T. is very slow
• (Not shown: eventual complete blow up)
Training Surrogate Models
Outlook
• Obvious fix: include time evolution in training, ideally include the solver
• Train with “unrolling”, more details later on…
Supervised Training
Best Practices
• Always start here
• Always start with overfitting a single data point (see the sketch below)
• Always check the number of NN parameters
• Always adjust hyperparameters at this stage
• … then slowly introduce more data and beautiful physics models
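A sketch of the single-sample check (PyTorch; architecture and numbers are placeholders): if the loss does not go to ~0 here, fix the setup before adding more data.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x1, y1 = torch.tensor([[0.5]]), torch.tensor([[1.0]])  # a single data point

for step in range(500):
    opt.zero_grad()
    loss = torch.mean((net(x1) - y1) ** 2)
    loss.backward()
    opt.step()
print(f"loss after overfitting one sample: {loss.item():.2e}")  # expect ~0
```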
Supervised Training
Best Practices
✅ Fast, reliable (builds on established DL methods)
✅ Great starting point
❌ Sub-optimal performance, accuracy and generalization.
❌ Fundamental problems in multi-modal settings
❌ Requires precomputed data (e.g., no solver interactions)