
Using Machine Learning to solve Ordinary Differential Equations

Eeshan Mishra
2020A2PS2435H
Introduction
● Since it is not always possible to find an analytic solution of a differential
equation directly, numerous methods for solving differential equations have
been devised. Many of them are numerical approaches, which have the limitation
that they only supply answers at grid points, obtained via a set of linear equations.
● In practice, we need the solution not only at a few locations, but over the
entire domain.
● Lagaris et al. were the first to propose using a neural network to
solve an ODE. The idea is to train a neural network to satisfy the conditions
specified by a differential equation. In other words, we must find a
function whose derivative satisfies the ODE.
Mathematical Foundations
Let’s take an ODE, given by:

u'(t) = f(t, u(t)), with initial condition u(0) = u0.
Neural networks, as we all know, are universal approximators. We will exploit this characteristic
of neural networks to approximate the solution of the given ODE:

NN(t) ≈ u(t)
On differentiating both sides, we get

NN'(t) ≈ u'(t) = f(t, u(t))
So, if NN(t) is extremely close to the true solution, then its derivative is also close to the true
solution's derivative, i.e.

NN'(t) ≈ f(t, NN(t))
Thus, we can turn this condition into our loss function.


Thus, the loss function (L) is as follows:

L = Σ_i (NN'(t_i) − f(t_i, NN(t_i)))²,

where the t_i are collocation points sampled from the domain.
By taking the initial condition into consideration as an extra penalty term, we get:

L = Σ_i (NN'(t_i) − f(t_i, NN(t_i)))² + (NN(0) − u0)²
However, this is not the best approach, as adding too many terms to the loss leads to unstable
training. To avoid that, we can encode the initial condition into the loss function more efficiently.
Instead of using the neural network's output directly, let's construct a new trial function and use it:

g(t) = u0 + t · NN(t)
It’s easy to see that g(t) will always satisfy the initial condition, since at t = 0 the term
t · NN(t) vanishes, leaving just u0 in the expression. Now, the loss function becomes

L = Σ_i (g'(t_i) − f(t_i, g(t_i)))²
Python Implementation
Example 1:
Let’s choose a simple ODE: u' = 2x with u(0) = 1.
We will implement the described method in Python using the TensorFlow library.
This problem is readily solved by integrating both sides, resulting in u = x² + C, and after
fitting C to obey the initial condition, we get u = x² + 1. Nonetheless, rather than solving it analytically,
let us try to solve it using neural nets.

In this example, we will build an MLP Neural Net with two hidden layers, sigmoid activation
functions, and a gradient descent optimizer.
a) Defining Variables
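A minimal sketch of the variable definitions (the names x_train, u0, and f are illustrative; the 10 points in the [-1, 1] range follow the description below):

```python
import numpy as np
import tensorflow as tf

# 10 collocation points in the [-1, 1] range, as described below.
x_train = np.linspace(-1.0, 1.0, 10, dtype=np.float32).reshape(-1, 1)

u0 = 1.0  # initial condition u(0) = 1

def f(x):
    # Right-hand side of the ODE u' = 2x.
    return 2.0 * x
```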
b) Defining the Model and Loss Function
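A sketch of the model and loss. The hidden-layer width of 32 is an assumption (the original does not state it); the trial solution g(x) = u0 + x · NN(x) follows the previous section:

```python
# MLP with two hidden layers and sigmoid activations, as described above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="sigmoid", input_shape=(1,)),
    tf.keras.layers.Dense(32, activation="sigmoid"),
    tf.keras.layers.Dense(1),
])

def g(x):
    # Trial solution g(x) = u0 + x * NN(x); satisfies g(0) = u0 by construction.
    return u0 + x * model(x)

EPS = 1e-3  # finite-difference step (illustrative value)

def loss_fn(x):
    # dNN: derivative of the trial solution via the definition of the
    # derivative, (g(x + eps) - g(x)) / eps.
    dNN = (g(x + EPS) - g(x)) / EPS
    return tf.reduce_mean(tf.square(dNN - f(x)))
```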
c) Train Function
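A sketch of the training loop using plain gradient descent; the learning rate and iteration count are illustrative:

```python
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
x_tf = tf.constant(x_train)

@tf.function
def train_step():
    with tf.GradientTape() as tape:
        loss = loss_fn(x_tf)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(2000):
    loss = train_step()
    if step % 500 == 0:
        print(f"step {step}: loss = {loss.numpy():.6f}")
```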

We can compute the predicted value at each point x since we know the ODE function that governs
this model. You may also observe that we always compute the loss using 10 points in the
[-1, 1] range. That may not be sufficient for all possible functions, but given the simplicity of our
example, it will work.
The variable dNN in the code represents the derivative of the network's trial solution. We just use
the definition of the derivative:

dNN(x) ≈ (g(x + ε) − g(x)) / ε, for a small step ε.
d) Plotting the results
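A sketch of the comparison plot against the analytic solution u = x² + 1:

```python
import matplotlib.pyplot as plt

x_plot = np.linspace(-1.0, 1.0, 200, dtype=np.float32).reshape(-1, 1)
u_nn = g(tf.constant(x_plot)).numpy()

plt.plot(x_plot, u_nn, label="NN solution")
plt.plot(x_plot, x_plot**2 + 1.0, "--", label="analytic: u = x^2 + 1")
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.show()
```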

This gives us the following curve:

[Figure: NN solution plotted against the analytic solution u = x² + 1]
Example 2:

We will use PyTorch, NumPy, and Matplotlib for this example.

Following the architecture suggested in I. E. Lagaris’ paper, the neural network has one hidden
layer, here with 50 neurons. The boundary conditions and the trial solution Psi_t have also been
defined, with the right-hand function set to the constant −1.
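A minimal sketch of this setup. The original does not spell out the full problem, so assume for illustration the boundary-value problem Psi''(x) = −1 on [0, 1] with Psi(0) = Psi(1) = 0; the trial solution Psi_t below enforces these boundary conditions by construction:

```python
import torch
import torch.nn as nn

# One hidden layer with 50 neurons, as described above.
model = nn.Sequential(
    nn.Linear(1, 50),
    nn.Sigmoid(),
    nn.Linear(50, 1),
)

def f(x):
    # Right-hand function of the ODE: the constant -1.
    return -torch.ones_like(x)

def Psi_t(x):
    # Trial solution; the x * (1 - x) factor forces Psi_t(0) = Psi_t(1) = 0
    # (an assumed choice of boundary conditions, for illustration).
    return x * (1.0 - x) * model(x)
```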
The loss function for this equation is

L = Σ_i (Psi_t''(x_i) − f(x_i))², with f(x) = −1,

summed over collocation points x_i.
Now, during implementation, we must compute the second-order derivative. This is done with
PyTorch’s autograd.
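A sketch of the loss computation, differentiating the trial solution twice with torch.autograd.grad (consistent with the assumptions above):

```python
def loss_fn(x):
    x = x.clone().requires_grad_(True)  # autograd needs x to track gradients
    psi = Psi_t(x)
    # First derivative dPsi/dx.
    dpsi = torch.autograd.grad(psi, x, grad_outputs=torch.ones_like(psi),
                               create_graph=True)[0]
    # Second derivative d2Psi/dx2.
    d2psi = torch.autograd.grad(dpsi, x, grad_outputs=torch.ones_like(dpsi),
                                create_graph=True)[0]
    return torch.mean((d2psi - f(x)) ** 2)
```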

We then use BFGS as the optimizer (PyTorch provides the limited-memory variant, torch.optim.LBFGS) and run the optimization.
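A sketch of the optimization loop; the number of collocation points and LBFGS steps are illustrative:

```python
x_train = torch.linspace(0.0, 1.0, 100).reshape(-1, 1)
optimizer = torch.optim.LBFGS(model.parameters())

def closure():
    # LBFGS may re-evaluate the loss several times per step, hence the closure.
    optimizer.zero_grad()
    loss = loss_fn(x_train)
    loss.backward()
    return loss

for _ in range(20):
    optimizer.step(closure)
```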


Finally, we have to compare the analytic solution with the NN approximation.
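A sketch of the comparison; under the boundary-value problem assumed above, the analytic solution is Psi(x) = x(1 − x)/2:

```python
import matplotlib.pyplot as plt

xs = torch.linspace(0.0, 1.0, 200).reshape(-1, 1)
with torch.no_grad():
    psi_nn = Psi_t(xs).numpy()

x_np = xs.numpy()
psi_true = x_np * (1.0 - x_np) / 2.0  # analytic solution of the assumed BVP

plt.plot(x_np, psi_nn, label="NN approximation")
plt.plot(x_np, psi_true, "--", label="analytic solution")
plt.xlabel("x")
plt.legend()
plt.show()
```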

We get the following curve:

[Figure: NN approximation plotted against the analytic solution]
Conclusion
The results are close approximations, especially considering that no dataset was
used for training. They could have been even better if we had used more
collocation points to compute the loss function, or kept the training going for
a bit longer. Our network was able to learn a function that matches the
collocation constraints and can be used as a proxy for the analytic solution
with minimal error.

What if we wanted to apply this concept to (nonlinear) PDEs? George Em
Karniadakis and co-authors have written an outstanding paper on exactly this:
"Physics Informed Deep Learning: Data-driven Solutions of Nonlinear Partial
Differential Equations".
References
● I. E. Lagaris, A. Likas and D. I. Fotiadis, "Artificial neural networks for solving
ordinary and partial differential equations," in IEEE Transactions on Neural
Networks, vol. 9, no. 5, pp. 987-1000, Sept. 1998, doi: 10.1109/72.712178.
● https://www.analyticsvidhya.com/blog/2021/09/ordinary-differential-equations-made-easy-with-deep-learning/
● https://towardsdatascience.com/using-neural-networks-to-solve-ordinary-differential-equations-a7806de99cdd
