Supervised predictive modeling of Space Weather Events
Using Remote Sensing Observations
PROJECT REPORT
Submitted in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology
In
Engineering Physics
By
Aman Kumar
Enrollment Number: 17122005
&
Aniket Sujay
Enrollment Number: 17122006
DEPARTMENT OF PHYSICS
INDIAN INSTITUTE OF TECHNOLOGY ROORKEE
ROORKEE- 247667 (INDIA)
18th June 2021
CANDIDATE’S DECLARATION
We hereby declare that the work which is being presented in this B.Tech Project
report entitled “Supervised predictive modeling of Space Weather Events Using
Remote Sensing Observations” in partial fulfillment of the requirements for the
award of the Degree of Bachelor of Technology in Engineering Physics and
submitted in the Department of Physics of the Indian Institute of Technology
Roorkee, is an authentic record of our own work carried out during the period of
August 2020 to June 2021 under the supervision of Prof. M.V. Sunil Krishna,
Department of Physics, Indian Institute of Technology, Roorkee.
Aman Kumar
B.Tech Engineering Physics
Enrollment No: 17122005
Aniket Sujay
B.Tech Engineering Physics
Enrollment No: 17122006
CERTIFICATE
Prof. M.V. Sunil Krishna
Associate Professor
Department of Physics
Indian Institute of Technology, Roorkee
ACKNOWLEDGEMENTS
We would also like to thank Mr. Alok Kumar Ranjan, Research Scholar at the Department of Physics, IIT Roorkee, who helped us throughout the course of this project.
Introduction

○ O (80-100 km)
● Cooling Rates:
  ○ CO2 (15 µm)
  ○ NO (5.3 µm)
  ○ O3 (9.6 µm)
  ○ H2O (6.7 µm & far IR)
● Chemical Heating Rates (Odd-Oxygen and Odd-Hydrogen families)
● Solar Heating Rates of CO2, O3, and O2

The NO emission rate is severely influenced by storm conditions. It was observed that the radiative emission rate during a storm period in the lower thermosphere region is larger by a few orders of magnitude than during the quiet period.
$EPE(f) = (Y - X\beta)^{T} (Y - X\beta)$

Minimizing with respect to $\beta$,

$\frac{\partial EPE}{\partial \beta} = 0,$

which gives the least-squares solution $\hat{\beta} = (X^{T} X)^{-1} X^{T} Y$.

Fig 3: Artificial Neural Network architecture

For the first layer

$z_i = \sigma\left( \sum_{j=1}^{4} w_{ij} x_j + w_{i0} \right)$

For the second layer

$t_i = \sigma\left( \sum_{j=1}^{3} \beta_{ij} z_j + \beta_{i0} \right)$

For the output layer

$y = \sum_{j=1}^{3} \eta_j t_j$

Step 2: Calculation of the cost function

For our regression task, we will use the squared-error loss function:

$R = \sum_{i=1}^{N} \left( y_i - f(x_i) \right)^2$

The adjustments to the parameters follow from the chain rule:

$\frac{\partial R}{\partial t_i} = \frac{\partial R}{\partial \eta_i} \cdot \frac{\partial \eta_i}{\partial t_i}$

$\frac{\partial R}{\partial w_{ij}} = \frac{\partial R}{\partial z_{ik}} \cdot \frac{\partial z_{ik}}{\partial w_{ij}}$

$\frac{\partial R}{\partial z_{ik}} = \sum_{l=1}^{3} \frac{\partial R}{\partial t_l} \cdot \frac{\partial t_l}{\partial z_{ik}}$

Using these equations we can calculate the adjustments in the parameters.
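To make these steps concrete, here is a minimal NumPy sketch of the forward pass and chain-rule gradient updates, assuming the 4-3-3-1 layout of Fig 3, sigmoid activations in both hidden layers, randomly generated toy data, and an illustrative learning rate; none of these choices are taken from the report itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy data: N samples of a 4-dimensional input and a scalar target.
N = 256
X = rng.normal(size=(N, 4))
Y = rng.normal(size=(N, 1))

# Parameters of the 4-3-3-1 network of Fig 3.
w, w0 = rng.normal(scale=0.5, size=(3, 4)), np.zeros((1, 3))        # first layer
beta, beta0 = rng.normal(scale=0.5, size=(3, 3)), np.zeros((1, 3))  # second layer
eta = rng.normal(scale=0.5, size=(3, 1))                            # output weights

lr = 1e-3
for epoch in range(200):
    # Forward pass: z_i = sigma(sum_j w_ij x_j + w_i0), and so on.
    z = sigmoid(X @ w.T + w0)        # (N, 3)
    t = sigmoid(z @ beta.T + beta0)  # (N, 3)
    y = t @ eta                      # (N, 1)

    # Cost: R = sum_i (y_i - f(x_i))^2
    err = y - Y
    R = np.sum(err ** 2)

    # Backward pass: chain rule, layer by layer.
    dR_dy = 2.0 * err                   # dR / dy
    dR_deta = t.T @ dR_dy               # dR / d(eta)
    dt = (dR_dy @ eta.T) * t * (1 - t)  # dR / d(second-layer pre-activation)
    dR_dbeta = dt.T @ z
    dz = (dt @ beta) * z * (1 - z)      # dR / d(first-layer pre-activation)
    dR_dw = dz.T @ X

    # Gradient-descent adjustments of the parameters.
    eta -= lr * dR_deta
    beta -= lr * dR_dbeta
    beta0 -= lr * dt.sum(axis=0, keepdims=True)
    w -= lr * dR_dw
    w0 -= lr * dz.sum(axis=0, keepdims=True)

print("final cost R:", R)
```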
Performance Tuning

The performance of an ANN depends on quite a few parameters. Some of the most common are:

1. Size of the dataset: In the field of neural networks, more data is always better. More data points will allow a better fit.
2. The dimensionality of the input variable: High-dimensional data is difficult to train on, as it requires more data points to properly fit the model.
3. Learning Rate: This determines the amount by which we adjust our parameter values during backpropagation.
4. Optimizer functions: There are many optimizer functions apart from vanilla gradient descent, e.g. the Adam optimizer and RMSprop, which can prevent the model from getting stuck in a sub-optimal position on the error surface.
5. Batch Normalization: Neural networks are notoriously difficult to train. The training gets affected by the randomness present in the data or the randomness of the initialization of the parameters. This is called internal covariate shift. To reduce this we apply batch normalization, which fixes the means and variances of each layer's inputs (see the sketch after this list).
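As a pointer to where these knobs live in code, below is a minimal sketch using the Keras API; the layer sizes, input dimension, and learning rate are illustrative assumptions, not the configuration used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A small fully connected regressor; BatchNormalization after each hidden
# layer counters internal covariate shift during training.
model = tf.keras.Sequential([
    layers.Input(shape=(10,)),           # assumed 10 input features
    layers.Dense(64, activation="relu"),
    layers.BatchNormalization(),
    layers.Dense(32, activation="relu"),
    layers.BatchNormalization(),
    layers.Dense(1),                     # scalar regression output
])

# The optimizer and its learning rate are chosen at compile time; Adam and
# RMSprop are common alternatives to plain gradient descent.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse")
```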
Autoencoders

Recently people have been highly successful in applying encoding networks to traditional machine learning problems.
As the name suggests, this type of neural network can train on the dataset and learn how to compress the data into a lower-dimensional form. As the lower-dimensional form still preserves the information content of the original dataset, we can train another model with the compressed data as the input. It has been shown that this method sometimes increases the performance over the traditional approach.

Architecture

Fig 4: Autoencoder architecture

An autoencoder network has 3 parts:
1. Encoder network
2. Bottleneck
3. Decoder network

The encoder network's job is to compress/encode the data into a lower-dimensional form. The bottleneck layer represents the compressed data. And the decoder network's job is to decompress/decode the compressed data.

Training

To train an autoencoder we set the output variable to be the same as the input variable. This will force the network to mimic the input data. The encoder network tries to find a suitable compression and the decoder network tries to convert the compressed form back to the original. The loss function is the difference between the input and the output from the decoder.
After training, we can isolate the encoder
network and the bottleneck layer. This neural
network will act as a compression model. Now if
we feed new data it will give a pretty reasonable
encoding for it.
We now hook up the bottleneck layer as input to another neural network (or some other statistical regression model) and train this new combination using the original output.
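This whole pipeline can be sketched in a few lines of Keras; the layer widths, the 3-dimensional bottleneck, and the input dimensionality below are illustrative assumptions rather than the exact configuration of this report.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

n_features = 12      # assumed input dimensionality
bottleneck_dim = 3   # assumed size of the compressed representation

# Encoder: compress/encode the input into the bottleneck.
inputs = layers.Input(shape=(n_features,))
h = layers.Dense(8, activation="relu")(inputs)
bottleneck = layers.Dense(bottleneck_dim, activation="relu")(h)

# Decoder: decompress/decode back to the original dimensionality.
h = layers.Dense(8, activation="relu")(bottleneck)
outputs = layers.Dense(n_features)(h)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Training: the target is the input itself, so the network is forced to
# reproduce the data through the bottleneck.
# autoencoder.fit(X_train, X_train, epochs=50, batch_size=64)

# After training, isolate the encoder up to the bottleneck; it now acts
# as a standalone compression model for new data.
encoder = Model(inputs, bottleneck)
# X_compressed = encoder.predict(X_new)
```

The compressed output of `encoder` can then be fed to a second regression model trained against the original target variable, as described above.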
Dataset Description

● From SABER:
  ○ Event, Solar AP, Solar KP, Solar F10.7 Index, Solar Zenith Angle, Time, Latitude, Longitude, Altitude, Kinetic Temperature
  ○ NO Volume Emission Rate, Atmospheric Density, Ozone Mixing Ratio at 9.6 & 1.27 µm
● From WDC Kyoto:
  ○ DST Index, AE Index & Symmetric H Component
The DST Index has a 1-hour resolution, while the AE Index and the Symmetric H component have a 1-minute resolution. These three are combined with the SABER data with the help of their time values.
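One way to perform this time-based combination is with pandas, e.g. merge_asof, which attaches to each SABER sample the most recent index value within an appropriate tolerance; the file and column names here are hypothetical.

```python
import pandas as pd

# Hypothetical inputs: SABER samples plus the WDC Kyoto indices, each with
# a timestamp column named "time".
saber = pd.read_csv("saber.csv", parse_dates=["time"]).sort_values("time")
dst = pd.read_csv("dst.csv", parse_dates=["time"]).sort_values("time")          # 1-hour resolution
ae_symh = pd.read_csv("ae_symh.csv", parse_dates=["time"]).sort_values("time")  # 1-minute resolution

# Attach to each SABER sample the most recent DST value (within one hour)
# and the most recent AE / SYM-H values (within one minute).
merged = pd.merge_asof(saber, dst, on="time", tolerance=pd.Timedelta("1h"))
merged = pd.merge_asof(merged, ae_symh, on="time", tolerance=pd.Timedelta("1min"))
```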
Data Preprocessing:

Fig 6: Heat Map

A correlation heat map of the features was plotted (Fig 6) and features were dropped based on this. The intensity of colors shows the magnitude of the correlation coefficient.

Dropped features are:
● event
● solKp (Solar Kp)
● sym_h (Symmetric H Component)
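A sketch of this step, assuming the merged DataFrame from the previous sketch, could look like:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# `merged` is the combined dataset from the previous sketch.
corr = merged.corr()
sns.heatmap(corr, cmap="coolwarm", center=0.0)  # color intensity ~ |correlation|
plt.title("Feature correlation heat map")
plt.show()

# Drop the features identified for removal.
merged = merged.drop(columns=["event", "solKp", "sym_h"])
```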
Results:
The test and training set error goes down exponentially with each epoch through the dataset. Here the Adam and RMSprop optimizers were used to optimize the error function.
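In Keras terms, the optimization described here corresponds to something like the sketch below; build_model is a hypothetical helper returning a fresh copy of the regression network, and X_train, y_train, X_test, y_test are the prepared splits.

```python
# Train the same network once with each of the two optimizers used here.
for opt in ["adam", "rmsprop"]:
    model = build_model()  # hypothetical helper: fresh copy of the network
    model.compile(optimizer=opt, loss="mse")
    history = model.fit(X_train, y_train,
                        validation_data=(X_test, y_test),
                        epochs=50, verbose=0)
    # history.history["loss"] and ["val_loss"] hold the per-epoch training
    # and test errors whose decay is described above.
    print(opt, history.history["val_loss"][-1])
```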