
Physics Informed Machine Learning (PIML)
Physics Informed Deep Learning (PIDL)
Physics Informed Neural Network (PINN)

by ARNAB HALDER
Contents
• What is Machine Learning?
• Approaches for Machine Learning
• Several types of Machine Learning techniques
• Concepts of Neural Network
• Concepts of Neuron
• Universal Approximation Theorem
• Solving ODEs/PDEs using Neural Network
• Example: Viscous Burgers’ Equation
• Why do we use Neural Networks?
• References
What is Machine Learning?

Machine Learning (ML) is a subset of Artificial Intelligence (AI) that focuses on the development of algorithms and statistical models, enabling computers to learn from and make predictions or decisions based on data, without being explicitly programmed to perform specific tasks. The primary goal of machine learning is to allow computers to improve their performance on a task through experience, just like humans do.

In traditional programming, a programmer writes specific instructions for a computer to follow to achieve a particular outcome. In machine learning, however, the approach is different. Instead of writing explicit rules, the system learns from data patterns and examples to make predictions or decisions.
Approaches for Machine Learning
• Data Collection
• Data Preprocessing
• Model Training
• Model Evaluation
• Model Deployment
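A minimal sketch of these five steps, assuming scikit-learn is available; the Iris dataset, model choice, split ratio, and persistence step are illustrative assumptions, not part of the original slides.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# Data Collection (here: a bundled toy dataset stands in for collected data)
X, y = load_iris(return_X_y=True)

# Data Preprocessing: hold out a test set; feature scaling happens in the pipeline
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Model Training
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Model Evaluation
print("test accuracy:", model.score(X_test, y_test))

# Model Deployment (sketch): persist the fitted model for later serving
# import joblib; joblib.dump(model, "model.joblib")
```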
Several types of Machine Learning techniques
• Supervised Learning
• Unsupervised Learning
• Semi-supervised Learning
• Reinforcement Learning
Concepts of Neural Networks

[Figure: fully connected network with inputs x₁ … xₙ (Input Layer), hidden layers of m neurons each, and outputs ŷ₁ … ŷₖ (Output Layer)]

• If the number of hidden layers > 1, the network is called a Deep Network, and hence the name Deep Learning.
• The number of inputs n and the number of outputs k are not necessarily equal.
• Total number of weights required, assuming every neuron in one layer is connected to every neuron in the next layer, = (n × m + bias units) = (n × m + m); a quick check appears below.
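A quick check of the (n × m + m) count for a single fully connected layer from n inputs to m hidden neurons; PyTorch and the layer sizes below are assumptions made for illustration.

```python
# Count parameters of one dense layer: n*m weights plus m bias units.
import torch.nn as nn

n, m = 4, 8                        # hypothetical layer sizes
layer = nn.Linear(n, m)            # fully connected layer with bias
num_params = sum(p.numel() for p in layer.parameters())
assert num_params == n * m + m     # matches (n × m + m)
print(num_params)                  # 40
```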


Concepts of Neuron

[Figure: a single neuron — the inputs x₁, x₂, x₃ (the vector x⃗) are combined linearly into z, which passes through a nonlinear activation g, giving the output ŷ]

• Linear combination: z = Σᵢ ωᵢxᵢ
• Nonlinear activation: g(z) ≡ a = ŷ, for example the sigmoid
  σ(z) = 1 / (1 + e^(−z)) = 1 / (1 + e^(−Σᵢ ωᵢxᵢ))
• Note that the input to the linear-combination stage, x⃗, is a vector, but its output z is a scalar.
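A minimal sketch of this single neuron in NumPy; the input values and weights below are assumed purely for illustration.

```python
import numpy as np

def sigmoid(z):
    """Nonlinear activation g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # input vector x⃗ (assumed values)
w = np.array([0.1, 0.4, -0.3])   # weights ω_i (assumed values)

z = w @ x                        # linear combination: a scalar
y_hat = sigmoid(z)               # activation output a = ŷ
print(z, y_hat)
```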
Universal Approximation Theorem

• The universal approximation theorem states that any continuous function f : [0, 1]ⁿ → [0, 1] can be approximated arbitrarily well by a neural network with at least one hidden layer and a finite number of weights.
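A small illustration of the theorem in PyTorch: a single-hidden-layer network fitted to one continuous function on [0, 1]. The target sin(πx), hidden width, and training settings are assumptions made for this example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(256, 1)                      # samples in [0, 1]
y = torch.sin(torch.pi * x)                 # a continuous target with values in [0, 1]

# One hidden layer with a finite number of weights
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):                       # plain gradient-descent fit
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

print(float(loss))                          # small error => good approximation
```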
Solving ODEs/PDEs using Neural Network
• Convert the problem into an optimization problem, following the ideas of Lagaris et al.:
• Let  d²u/dx² + a·du/dx = b,  with boundary conditions u(0) = u₀ and u(1) = u₁.
• From the UAT we can say u ≈ NN(x).
• So the network maps x → NN → û, with hidden activation a₁:
• a₁ = σ(ω₁x)  and  û = ω₂σ(ω₁x)
• dû/dx = ω₂σ′(ω₁x)ω₁, and in this way all derivatives of û can be calculated by backpropagation, even with multiple layers of hidden neurons.
• Now, if û = NN(x), we can pose the problem as minimizing the loss (cost) function

  L = (d²û/dx² + a·dû/dx − b)² + (û(0) − u₀)² + (û(1) − u₁)²  →  0
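A hedged sketch of minimizing this loss with PyTorch automatic differentiation; the coefficients a, b, the boundary values u₀, u₁, the network size, and the collocation points are all assumed for illustration.

```python
import torch
import torch.nn as nn

a, b, u0, u1 = 1.0, 1.0, 0.0, 1.0              # hypothetical problem data
net = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0, 1, 50).reshape(-1, 1).requires_grad_(True)

for _ in range(5000):
    opt.zero_grad()
    u = net(x)                                  # û = NN(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + a * du - b                 # ODE residual term
    bc = (net(torch.zeros(1, 1)) - u0) ** 2 + (net(torch.ones(1, 1)) - u1) ** 2
    loss = (residual ** 2).mean() + bc.sum()    # residual + boundary-condition terms
    loss.backward()
    opt.step()
```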
Example: Viscous Burgers' Equation
• As an example, let us consider the Burgers' equation. This equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow.
• Burgers' equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension, the Burgers' equation along with Dirichlet boundary conditions reads as:
• u_t + u·u_x − (0.01/π)·u_xx = 0,   x ∈ [−1, 1],  t ∈ [0, 1]
• u(0, x) = −sin(πx)
• u(t, −1) = u(t, 1) = 0
• Let us define f(t, x) to be given by f := u_t + u·u_x − (0.01/π)·u_xx, and proceed by approximating u(t, x) by a deep neural network:
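A hedged sketch of the residual f(t, x) for this Burgers' problem, assuming a small fully connected network u(t, x) in PyTorch; the architecture and the collocation sampling below are illustrative choices, not the original presentation's setup.

```python
import torch
import torch.nn as nn

nu = 0.01 / torch.pi                           # viscosity 0.01/π
u_net = nn.Sequential(nn.Linear(2, 20), nn.Tanh(),
                      nn.Linear(20, 20), nn.Tanh(),
                      nn.Linear(20, 1))

def burgers_residual(t, x):
    """f = u_t + u*u_x - (0.01/π)*u_xx, computed by automatic differentiation."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

# Usage: sample collocation points in t ∈ [0, 1], x ∈ [−1, 1] and drive f → 0,
# together with the initial- and boundary-condition terms of the loss.
t_c = torch.rand(100, 1)
x_c = 2 * torch.rand(100, 1) - 1
pde_loss = (burgers_residual(t_c, x_c) ** 2).mean()
```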


Why do we use Neural Networks?

• Traditional methods are less resource-hungry but take more time, so their time complexity grows quickly, particularly for NP problems.
• Neural networks, by contrast, backed by the Universal Approximation Theorem, are time-efficient provided the user is not constrained by the number of hidden layers and can introduce multi-threaded processing.
References

• Associated research papers: https://drive.google.com/drive/folders/1-NnpdZnraesUMfb6Zm0kvw6JtlEJwzaF?usp=sharing
• Project & paper on Artificial Intelligence & Cyber Security, University of California, Berkeley, by ARNAB HALDER.
• External materials: https://github.com/ARNABINDIAEDU/AI.git
Thank You
