O3 formation: O2 + sunlight → O + O
NO2 + sunlight → NO + O
O2 + O → O3
NO + O3 → NO2 + O2
Objective
To build deep learning models to predict: mixing ratios for the trace species O3, CO2, H2O, [O], and [H]; volume emission rates for NO, OH, and O2; cooling & heating rates for many CO2, O3, and O2 bands; and chemical heating rates for important reactions.
Data Source
1. SABER website.
FEATURES: Ap index, Kp index, solar F10.7 index, solar zenith angle, time, tp_latitude, tp_longitude, tp_altitude, temperature, ozone mixing ratios, neutral density & NO volume emission rate.
2. WDC-Kyoto website.
FEATURES: Dst index, AE index, SYM-H component.
Data
Correlation Matrix
Heat Map of Correlation Matrix
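The correlation matrix behind the heat map can be computed directly from the feature table. The sketch below uses a synthetic array as a stand-in for the SABER features (the real shape and column order are not given in the deck):

```python
import numpy as np

# Hypothetical stand-in for the feature table: rows are observations,
# columns are features (e.g. F10.7, solar zenith angle, temperature, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))

# Pearson correlation matrix between features
# (rowvar=False: columns are the variables).
corr = np.corrcoef(X, rowvar=False)

print(corr.shape)  # square, symmetric, ones on the diagonal
```

The heat map is then just a color-coded rendering of `corr` (e.g. with `matplotlib.pyplot.imshow`).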
MACHINE LEARNING
● Any kind of observation always has an underlying probability distribution.
● If the distribution for the system is known, then we can sample values for a
given condition.
● Generally, figuring out this distribution is a non-trivial task.
● One of the ways to approximate it is machine learning.
● Machine learning is divided into 3 major categories:
○ Supervised Machine Learning
○ Unsupervised Machine Learning
○ Reinforcement Learning
● In this project we make use of supervised learning techniques.
Supervised Machine Learning
● Supervised machine learning involves inferring a mapping between input
features and the output based on the set of examples provided.
● General roadmap for supervised learning:
○ We start with an initial representation for the data, i.e. we start with some functional form.
○ The parameters of the functional form are then adjusted using the available training data.
○ We feed in the data, get some result, compare the result with the given value, and use the
difference between the values to calculate the amount by which the parameters must be
adjusted.
○ We use a loss function which, in some sense, gives us the “distance” of the computed
values from the observed values.
○ The main task in supervised learning is to minimize this loss function over the whole
dataset.
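The roadmap above can be sketched end to end for the simplest functional form, a linear model with a mean-squared-error loss. The data here is synthetic and the learning rate and iteration count are illustrative choices, not values from the project:

```python
import numpy as np

# Synthetic "observations": y = 2x + 0.5 plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(scale=0.05, size=200)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b              # feed the data through the model
    err = y_hat - y                # difference from the observed values
    grad_w = 2 * np.mean(err * x)  # gradient of the MSE loss w.r.t. w
    grad_b = 2 * np.mean(err)      # ... and w.r.t. b
    w -= lr * grad_w               # adjust the parameters
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # close to the true (2.0, 0.5)
```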
Learning
● Loss functions:
○ These functions tell us how far off the predicted values are from the
observed values for the present set of parameters.
○ Our aim is to minimize this loss.
○ One of the most popular ways to do this is gradient descent.
○ In this method we move across the loss surface in the direction opposite to the gradient,
i.e. the direction of steepest descent.
○ If the surface is locally convex, this method will lead us to a minimum.
○ Sometimes the loss surface is too complex and we can only reach a local minimum
rather than the global one.
Gradient Descent
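On a convex one-dimensional loss the update rule is easy to see in full. This toy example (the loss L(w) = (w − 3)² and the step size are illustrative, not from the deck) steps against the gradient until it reaches the minimum:

```python
# Convex toy loss L(w) = (w - 3)^2; its gradient is dL/dw = 2(w - 3).
def grad(w):
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)  # move opposite the gradient (steepest descent)

print(w)  # converges to the minimum at w = 3
```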
NEURAL NETWORKS
Typical Neural Network Architecture
(Diagram: inputs x1, x2, x3 feed hidden units z1, z2, z3, which feed outputs y1, y2, combined into a final output g.)
The forward pass
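Following the architecture sketched above (3 inputs → 3 hidden units → 2 outputs → scalar g), a forward pass is a chain of matrix products and nonlinearities. The weights and the sigmoid activation here are illustrative choices, not the project's actual configuration:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Shapes follow the diagram: x (3,) -> z (3,) -> y (2,) -> scalar g.
rng = np.random.default_rng(2)
W1 = rng.normal(size=(3, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=(2, 3)); b2 = np.zeros(2)
w3 = rng.normal(size=2);      b3 = 0.0

x = np.array([0.5, -1.0, 2.0])   # input features
z = sigmoid(W1 @ x + b1)         # hidden layer activations
y = sigmoid(W2 @ z + b2)         # output layer activations
g = w3 @ y + b3                  # final scalar prediction
```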
Back Propagation
(Diagram: back propagation — the loss L(g, g′) computed from the network output g is propagated back through y and z toward the inputs x.)
Updating the weights
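For a single linear layer the weight update can be written out by hand: the chain rule gives the gradient of the loss with respect to the weights, and each weight moves one step against it. The numbers below are illustrative:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])   # layer input
w = np.array([0.1, 0.2, -0.3])   # current weights
target = 1.0                     # observed value g'
lr = 0.1

pred = w @ x                      # forward pass: g = w . x
loss = (pred - target) ** 2       # squared-error loss L(g, g')
grad_w = 2 * (pred - target) * x  # chain rule: dL/dw = dL/dg * dg/dw
w_new = w - lr * grad_w           # gradient step: the weight update

print(loss, (w_new @ x - target) ** 2)  # loss drops after the update
```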
Model Performance
After Regularization
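One common way to regularize is to add an L2 (weight-decay) penalty to the loss; this is an assumption for illustration, since the deck does not state which regularizer was used, and the strength `lam` is a hypothetical value:

```python
import numpy as np

# MSE loss plus an L2 penalty on the weights: large weights are discouraged,
# which typically reduces overfitting on the training set.
def loss_with_l2(w, X, y, lam=0.01):
    err = X @ w - y
    return np.mean(err ** 2) + lam * np.sum(w ** 2)

w = np.array([1.0, -2.0])
X = np.eye(2)
y = np.zeros(2)
print(loss_with_l2(w, X, y))  # penalized loss exceeds the plain MSE
```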