
Learning to Simulate Complex Physics with Graph Networks

Alvaro Sanchez-Gonzalez*¹, Jonathan Godwin*¹, Tobias Pfaff*¹, Rex Ying*¹ ², Jure Leskovec², Peter W. Battaglia¹
Table of contents
01 Introduction

02 Related Work

03 GNS Model Framework

04 Experimental Methods

05 Results

06 Conclusion
01
Introduction

Idea

Traditional simulators
● High-quality simulators require substantial computational resources.
● Even the best are often inaccurate due to insufficient knowledge of, or difficulty in approximating, the underlying physics and parameters.

Machine learning simulators
● Can train simulators directly from observed data.
● However, the large state spaces and complex dynamics have been difficult for standard end-to-end learning approaches to overcome.

Graph Network-based Simulators (GNS)
The framework imposes strong inductive biases, where
rich physical states are represented by graphs of
interacting particles, and complex dynamics are
approximated by learned message-passing among nodes.

02
Related Work

Traditional Methods

Smoothed particle hydrodynamics (SPH): evaluates pressure and viscosity forces around each particle, and updates particles' velocities and positions accordingly.

Position-based dynamics (PBD): resolves incompressibility and collision dynamics via pairwise distance constraints between particles, directly predicting their position changes.

Material point method (MPM): discretizes a block of material into a large number of particles, then computes spatial derivatives and solves momentum equations.
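As a rough illustration of the SPH idea, here is a minimal density-estimation sketch under our own assumptions (not code from any of these papers; the poly6 kernel and array layout are standard SPH conventions):

```python
import numpy as np
from scipy.spatial import cKDTree

def poly6_kernel(r, h):
    """Standard poly6 smoothing kernel used in SPH density estimation."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[mask]**2) ** 3
    return w

def sph_density(positions, masses, h):
    """Estimate per-particle density by summing kernel-weighted neighbor masses."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=h, output_type="ndarray")  # neighbor pairs within h
    rho = masses * poly6_kernel(np.zeros(len(positions)), h)  # self-contribution
    d = np.linalg.norm(positions[pairs[:, 0]] - positions[pairs[:, 1]], axis=1)
    w = poly6_kernel(d, h)
    np.add.at(rho, pairs[:, 0], masses[pairs[:, 1]] * w)
    np.add.at(rho, pairs[:, 1], masses[pairs[:, 0]] * w)
    return rho
```

Pressure and viscosity forces would then be computed from these densities, which is the per-particle evaluation the slide refers to.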

Differentiable Particle-based Simulators


DiffTaichi, PhiFlow, Jax-MD:
these frameworks backpropagate gradients through the simulator architecture.

Graph Networks
● A type of graph neural network.
● A GN maps an input graph to an output graph with the same structure but potentially different node, edge, and graph-level attributes, and can be trained to learn a form of message-passing, where latent information is propagated between nodes via the edges.
● GNs and their variants, e.g., "interaction networks", can learn to simulate rigid body, mass-spring, n-body, and robotic control systems, as well as non-physical systems.

The GNS framework is a general approach to learning simulation, is simpler to implement, and is more accurate across fluid, rigid, and deformable material systems.
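A minimal sketch of one such message-passing step, in the interaction-network style (the MLP sizes and sum aggregation are our assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class InteractionStep(nn.Module):
    """One GN message-passing step: update edge latents from their endpoint
    nodes, then update node latents from the aggregated incoming messages."""
    def __init__(self, latent: int = 128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * latent, latent), nn.ReLU(),
                                      nn.Linear(latent, latent))
        self.node_mlp = nn.Sequential(nn.Linear(2 * latent, latent), nn.ReLU(),
                                      nn.Linear(latent, latent))

    def forward(self, v, e, senders, receivers):
        # v: [num_nodes, latent], e: [num_edges, latent]
        # senders/receivers: [num_edges] node indices for each edge
        e_new = self.edge_mlp(torch.cat([e, v[senders], v[receivers]], dim=-1))
        agg = torch.zeros_like(v).index_add_(0, receivers, e_new)  # sum per node
        v_new = self.node_mlp(torch.cat([v, agg], dim=-1))
        return v_new, e_new  # same graph structure, updated attributes
```

Note how the output graph keeps the input's structure; only the node and edge attributes change, which is exactly the mapping described above.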

03
GNS Model
Framework

Model Structure

Graph Networks

04
Experimental
Methods

GNS Implementation Details
We implemented the components of the GNS framework using standard deep learning building blocks, and used standard nearest-neighbor algorithms to construct the graph.

Input / ENCODER
The ENCODER constructs the graph structure $G^0$. Each particle's input state vector is

$x_i^{t_k} = [p_i^{t_k}, \dot p_i^{t_{k-C+1}}, \ldots, \dot p_i^{t_k}, f_i]$, with $C = 5$,

where $p_i$ is the particle's position, $\dot p_i$ its previous velocities, and $f_i$ features that capture static material properties. MLPs encode the node and edge features into latent vectors, $v_i$ and $e_{i,j}$, of size 128.
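A minimal sketch of how these inputs could be assembled (the helper name, array shapes, and use of a k-d tree are our assumptions; the slide only specifies the feature layout and nearest-neighbor connectivity):

```python
import numpy as np
from scipy.spatial import cKDTree

def build_encoder_inputs(positions, velocity_history, material_feats, radius):
    """positions: [N, D]; velocity_history: [N, C, D] (C = 5 previous
    velocities); material_feats: [N, F] static material properties."""
    n = positions.shape[0]
    # Node features x_i = [p_i, previous velocities, f_i], as on the slide.
    x = np.concatenate([positions,
                        velocity_history.reshape(n, -1),
                        material_feats], axis=-1)
    # Edges: all particle pairs within the connectivity radius (both directions).
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=radius, output_type="ndarray")
    senders = np.concatenate([pairs[:, 0], pairs[:, 1]])
    receivers = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return x, senders, receivers
```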

GNS Implementation Details

PROCESSOR
Uses a stack of M GNs with identical structure, with MLPs as the internal edge and node update functions.

DECODER
The DECODER's learned function, $\delta_v$, is an MLP. After the DECODER, the future position and velocity are updated using an Euler integrator, so the output $y_i$ corresponds to accelerations, $\ddot p_i$.
Neural network parameters

MLP
All MLPs have:
● two hidden layers
● ReLU activations
● layer size 128
● a LayerNorm layer after the output

Loss function
We randomly sample particle state pairs $(x_i^{t_k}, x_i^{t_{k+1}})$ from the training trajectories, calculate the target accelerations $\ddot p_i^{t_k}$, and minimize

$L(x_i^{t_k}, x_i^{t_{k+1}}; \theta) = \lVert d_\theta(x_i^{t_k}) - \ddot p_i^{t_k} \rVert^2$
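A minimal sketch of this MLP and loss in PyTorch (our choice of framework, not the authors' stack):

```python
import torch
import torch.nn as nn

def make_mlp(in_size, hidden=128, out_size=128, with_layer_norm=True):
    """Two hidden layers, ReLU activations, size-128 layers, LayerNorm at the end."""
    layers = [nn.Linear(in_size, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, out_size)]
    if with_layer_norm:
        layers.append(nn.LayerNorm(out_size))
    return nn.Sequential(*layers)

def one_step_loss(model, x_tk, target_acceleration):
    """L = ||d_theta(x) - target acceleration||^2, averaged over particles."""
    predicted = model(x_tk)
    return ((predicted - target_acceleration) ** 2).sum(dim=-1).mean()
```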

05
Results

Simulating Complex Materials
● GNS can operate at resolutions substantially higher than demonstrated by previous methods: high enough for practical prediction tasks and high-quality 3D renderings.
● Although our models were trained to make one-step predictions, the long-term trajectories remain plausible even over thousands of rollout timesteps.
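A minimal sketch of such an autoregressive rollout, reusing the hypothetical build_encoder_inputs helper from the ENCODER sketch above (the model call signature is an assumption):

```python
import numpy as np

def rollout(model, position, velocity_history, material_feats,
            radius, dt, num_steps):
    """Autoregressive rollout: predict acceleration, integrate,
    shift the velocity-history window, repeat."""
    trajectory = [position]
    for _ in range(num_steps):
        x, senders, receivers = build_encoder_inputs(
            position, velocity_history, material_feats, radius)
        acceleration = model(x, senders, receivers)  # GNS one-step prediction
        velocity = velocity_history[:, -1] + dt * acceleration
        position = position + dt * velocity
        # Drop the oldest velocity, append the newest (window of C velocities).
        velocity_history = np.concatenate(
            [velocity_history[:, 1:], velocity[:, None]], axis=1)
        trajectory.append(position)
    return trajectory
```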

(Figure: rollout MSE comparison, SPH vs. GNS.)
Multiple Interacting Materials
We train a single architecture, with a single set of parameters, to simulate all of our different materials interacting with each other in a single system.

Generalization
The GNS generalizes well even beyond its training distributions, which suggests it learns a more
general-purpose understanding of the materials and physical processes experienced during training.

Trained on: 1×1 domain, 2.5k particles, 600 steps

Inference: 8×4 domain, 85k particles, 5000 steps

Key Architectural Choices
While our GNS model was generally robust to architectural and hyperparameter settings, we also identified several factors with a more substantial impact (collected in the config sketch after this list):
1. the number of message-passing steps,
2. shared vs. unshared PROCESSOR GN parameters,
3. the connectivity radius,
4. the scale of noise added to the inputs during training,
5. relative vs. absolute ENCODER.
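A small illustrative config collecting these factors (the default values below are assumptions for illustration, not the paper's tuned settings):

```python
from dataclasses import dataclass

@dataclass
class GNSConfig:
    num_message_passing_steps: int = 10   # factor 1: depth of the PROCESSOR
    share_processor_params: bool = False  # factor 2: shared vs. unshared GN weights
    connectivity_radius: float = 0.015    # factor 3: neighbor radius for edges
    input_noise_std: float = 3e-4         # factor 4: training-time input noise
    relative_encoder: bool = True         # factor 5: relative vs. absolute positions
```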

Comparisons to Previous Models

● CConv: performs well for domains like water, which it was built for, but struggles with some of our more complex materials. In a CConv rollout of the BOXBATH domain, the rigid box loses its shape.

● DPI: while DPI uses hard-coded constraints to keep the box shape consistent, our model achieves this without any special treatment of the solid particles.

06
Conclusion

Conclusion
Simpler

More accurate

Better generalization

● The GNS approach may also be applicable to data represented using meshes.


● There are also natural ways to incorporate stronger, generic physical knowledge into the
framework.

