Building Neural Networks from Scratch in Python
With Code Examples
Introduction to Neural Networks
Neural networks are computational models inspired
by the human brain. They consist of interconnected
nodes (neurons) that process and transmit
information. Neural networks can learn from data to
perform tasks like classification and regression.
Activation Functions
Activation functions introduce non-linearity into
neural networks, allowing them to learn complex
patterns. Common activation functions include
ReLU, sigmoid, and tanh.
import numpy as np

def relu(x):
    # Rectified Linear Unit: zero for negative inputs, identity otherwise
    return np.maximum(0, x)

def sigmoid(x):
    # Squashes inputs into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Squashes inputs into the range (-1, 1)
    return np.tanh(x)
Forward Propagation
Forward propagation is the process of passing input
data through the network to generate predictions. It
involves matrix multiplication and applying activation
functions.
import numpy as np

def forward_propagation(X, W1, b1, W2, b2):
    # Hidden layer: linear transform plus tanh activation
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    # Output layer: linear transform plus sigmoid activation
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)
    return Z1, A1, Z2, A2
Loss Functions
Loss functions measure the difference between
predicted and actual values. They guide the network
in adjusting its parameters to improve performance.
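As a minimal sketch, here is binary cross-entropy (the loss the training code later in this post uses), with mean squared error alongside as a common choice for regression:

import numpy as np

def binary_cross_entropy(A2, Y):
    # Mean negative log-likelihood for binary labels
    m = Y.shape[1]
    return -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m

def mean_squared_error(predictions, targets):
    # Average squared difference, common for regression
    return np.mean((predictions - targets) ** 2)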
Backpropagation
Backpropagation is the algorithm used to calculate
gradients of the loss function with respect to the
network's parameters. It's crucial for updating
weights and biases during training.
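Here's a minimal backpropagation sketch for the two-layer network above, assuming a sigmoid output trained with binary cross-entropy (so the output-layer gradient simplifies to A2 - Y):

import numpy as np

def backward_propagation(X, Y, A1, A2, W2):
    m = X.shape[1]
    # Output layer: sigmoid + cross-entropy gives dZ2 = A2 - Y
    dZ2 = A2 - Y
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    # Hidden layer: the derivative of tanh is 1 - A1**2
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return dW1, db1, dW2, db2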
Gradient Descent
Gradient descent is an optimization algorithm used
to minimize the loss function by iteratively adjusting
the network's parameters in the direction of steepest
descent.
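As a minimal one-parameter illustration (the quadratic and starting point are arbitrary choices):

# Gradient descent on f(w) = (w - 3)^2, which has its minimum at w = 3
w = 0.0
learning_rate = 0.1
for step in range(100):
    grad = 2 * (w - 3)         # df/dw
    w -= learning_rate * grad  # step in the direction of steepest descent
print(w)  # approaches 3.0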
Building a Simple Neural Network Class
Let's create a basic neural network class that
encapsulates the concepts we've covered so far.
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        # Small random weights break symmetry; biases start at zero
        self.W1 = np.random.randn(hidden_size, input_size) * 0.01
        self.b1 = np.zeros((hidden_size, 1))
        self.W2 = np.random.randn(output_size, hidden_size) * 0.01
        self.b2 = np.zeros((output_size, 1))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def forward(self, X):
        # Same computation as forward_propagation above
        Z1 = np.dot(self.W1, X) + self.b1
        A1 = np.tanh(Z1)
        Z2 = np.dot(self.W2, A1) + self.b2
        A2 = self.sigmoid(Z2)
        return Z1, A1, Z2, A2
Training the Neural Network
Training involves iteratively performing forward
propagation, calculating loss, backpropagation, and
updating parameters.
# Training method for the NeuralNetwork class above
def train(self, X, Y, iterations, learning_rate):
    m = X.shape[1]
    for i in range(iterations):
        # Forward pass
        Z1, A1, Z2, A2 = self.forward(X)
        # Backward pass (sigmoid output with cross-entropy loss)
        dZ2 = A2 - Y
        dW2 = np.dot(dZ2, A1.T) / m
        db2 = np.sum(dZ2, axis=1, keepdims=True) / m
        dZ1 = np.dot(self.W2.T, dZ2) * (1 - np.power(A1, 2))
        dW1 = np.dot(dZ1, X.T) / m
        db1 = np.sum(dZ1, axis=1, keepdims=True) / m
        # Gradient descent update
        self.W1 -= learning_rate * dW1
        self.b1 -= learning_rate * db1
        self.W2 -= learning_rate * dW2
        self.b2 -= learning_rate * db2
        cost = -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m
        if i % 100 == 0:
            print(f"Iteration {i}, cost: {cost:.4f}")
Data Preprocessing
Proper data preprocessing is crucial for effective
training. This includes normalization, handling
missing values, and encoding categorical variables.
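As a minimal normalization sketch on placeholder data, standardizing each feature to zero mean and unit variance:

import numpy as np

# Placeholder data: 10 samples, 5 features
X = np.random.randn(10, 5)

# Standardize each feature (column) to zero mean and unit variance
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)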
Implementing Mini-batch Gradient Descent
Mini-batch gradient descent is a variation of gradient descent that processes small batches of data at a time, offering a balance between computational efficiency and convergence speed.
import numpy as np

def create_mini_batches(X, Y, batch_size):
    # Shuffle examples and labels together so pairs stay aligned
    data = np.hstack((X, Y))
    np.random.shuffle(data)
    mini_batches = []
    n_minibatches = data.shape[0] // batch_size
    for i in range(n_minibatches):
        mini_batch = data[i * batch_size:(i + 1) * batch_size, :]
        X_mini = mini_batch[:, :-1]
        Y_mini = mini_batch[:, -1].reshape((-1, 1)).T
        mini_batches.append((X_mini, Y_mini))
    return mini_batches
Regularization Techniques
Regularization helps prevent overfitting by adding a
penalty term to the loss function. L2 regularization
(weight decay) is a common technique.
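As a minimal sketch for the two-layer network above, where lambd (the regularization strength) is an assumed hyperparameter:

import numpy as np

def compute_cost_with_l2(A2, Y, W1, W2, lambd):
    # Cross-entropy cost plus an L2 penalty on the weights
    m = Y.shape[1]
    cross_entropy = -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m
    l2_penalty = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
    return cross_entropy + l2_penalty

# The weight gradients pick up a matching term during backpropagation, e.g.:
# dW2 = np.dot(dZ2, A1.T) / m + (lambd / m) * W2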
Dropout Regularization
Dropout is another regularization technique that
randomly "drops out" a proportion of neurons during
training, which helps prevent overfitting.
def forward_with_dropout(X, W1, b1, W2, b2, keep_prob):
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    # Keep each hidden neuron with probability keep_prob
    D1 = np.random.rand(A1.shape[0], A1.shape[1]) < keep_prob
    A1 = A1 * D1
    A1 = A1 / keep_prob  # inverted dropout: keeps expected activations unchanged
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)
    return A2
Hyperparameter Tuning
Hyperparameters are configuration settings for the
neural network that are not learned during training.
Proper tuning can significantly impact performance.
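As a minimal grid-search sketch, where the value grids and the evaluate helper (returning validation accuracy) are assumptions for illustration:

# Try every combination of two hyperparameters, keep the best
best_score, best_params = 0.0, None
for learning_rate in [0.001, 0.01, 0.1]:
    for hidden_size in [4, 8, 16]:
        nn = NeuralNetwork(input_size=2, hidden_size=hidden_size, output_size=1)
        nn.train(X_train, Y_train, iterations=1000, learning_rate=learning_rate)
        score = evaluate(nn, X_val, Y_val)  # hypothetical validation helper
        if score > best_score:
            best_score, best_params = score, (learning_rate, hidden_size)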
Saving and Loading Models
Saving trained models allows you to use them later
without retraining. Here's a simple way to save and
load neural network parameters using Python's
pickle module.
import pickle

def save_model(model, filename):
    # Serialize the trained model object to disk
    with open(filename, 'wb') as f:
        pickle.dump(model, f)

def load_model(filename):
    # Restore a previously saved model
    with open(filename, 'rb') as f:
        return pickle.load(f)

# Example usage (illustrative sizes and hyperparameters)
nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)
nn.train(X, Y, iterations=1000, learning_rate=0.1)
save_model(nn, 'model.pkl')
loaded_model = load_model('model.pkl')
predictions = loaded_model.forward(X_test)
Additional Resources
For further learning on neural networks and deep learning, consider exploring these influential papers:
1."Deep Learning" by Yann LeCun, Yoshua
Bengio, and Geoffrey Hinton arXiv:1521.00561
2."Understanding the difficulty of training deep
feedforward neural networks" by Xavier Glorot
and Yoshua Bengio arXiv:1003.0485
3."Dropout: A Simple Way to Prevent Neural
Networks from Overfitting" by Nitish Srivastava et
al. arXiv:1207.0580Follow For More Data’.
Science Content sith