DeepLearning.AI makes these slides available for educational purposes. You may not use or distribute
these slides for commercial purposes. You may make copies of these slides and use or distribute them for
educational purposes as long as you cite DeepLearning.AI as the source of the slides.
Neural Networks for sentiment analysis
[Diagram: word embeddings fed through a neural network whose output layer predicts Positive (1) or Negative (0)]
● Initial representation
Trax: Neural Networks
Outline
● Define a basic neural network using Trax
● Benefits of Trax
Neural Networks in Trax
[Diagram: each layer computes a dot product followed by a non-linear function]
Neural Networks in Trax
from trax import layers as tl

model = tl.Serial(
    tl.Dense(4),
    tl.Sigmoid(),
    tl.Dense(4),
    tl.Sigmoid(),
    tl.Dense(3),
    tl.Softmax(),
)
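A quick usage sketch (the input shape, and the trax.shapes.signature call from Trax 1.x, are assumptions for illustration):

import numpy as np
import trax

x = np.zeros((1, 4), dtype=np.float32)   # hypothetical batch of one 4-feature input
model.init(trax.shapes.signature(x))     # initialize the trainable weights for this shape
probs = model(x)                         # forward pass; shape (1, 3), each row sums to 1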
Advantages of using frameworks
● Parallel computing
Classes in Python
[Diagram: a Color class with methods and three instances, with attributes (r, g, b) = (230, 145, 56), (61, 133, 198), and (166, 77, 121)]
class MyClass:
    def __init__(self, y):
        self.y = y

    def my_method(self, x):
        return x + self.y

    def __call__(self, x):
        return self.my_method(x)

f = MyClass(7)
print(f(3))   # prints 10
Subclasses and Inheritance
[Diagram: an Animal class with attributes Name, Height, Weight and its methods; Domestic Animal is a subclass that inherits them]
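A minimal sketch of the idea in the diagram, with hypothetical class names:

class Animal:
    def __init__(self, name, height, weight):
        self.name = name
        self.height = height
        self.weight = weight

    def describe(self):
        return f"{self.name}: {self.height} cm, {self.weight} kg"

class DomesticAnimal(Animal):            # subclass: inherits __init__ and describe
    def greet_owner(self):
        return f"{self.name} greets its owner"

A DomesticAnimal gets the attributes and all Animal methods for free, and can add its own.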
● ReLU function
Neural networks in Trax
[Diagram: dot product followed by a non-linear function]
Dense Layer
[Diagram: a dense layer computes the dot product of the input with a matrix of trainable parameters]
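A small sketch of a dense layer in Trax (the input values and the init call are illustrative assumptions):

import numpy as np
import trax
from trax import layers as tl

dense = tl.Dense(3)                          # 3 output units; W and b are trainable
x = np.array([1.0, 2.0], dtype=np.float32)
dense.init(trax.shapes.signature(x))         # create the trainable parameters
z = dense(x)                                 # dot product plus bias; shape (3,)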
ReLU Layer
[Diagram: a ReLU layer applies relu(x) = max(0, x) to the output of a dense layer]
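A sketch of the ReLU layer, which has no trainable weights (the input values are illustrative):

import numpy as np
from trax import layers as tl

relu = tl.Relu()                             # elementwise max(0, x)
print(relu(np.array([-2.0, 0.0, 3.0])))      # [0. 0. 3.]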
Serial Layer
Composition of layers in serial arrangement (a sketch follows this list):
● Dense layers
● Activation layers
● Mean layer
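A sketch of such a serial composition, here a small sentiment classifier (vocab_size, d_feature, and the class count are hypothetical):

from trax import layers as tl

classifier = tl.Serial(
    tl.Embedding(vocab_size=9, d_feature=2),  # trainable word vectors
    tl.Mean(axis=1),                          # average embeddings over each tweet
    tl.Dense(2),                              # two sentiment classes
    tl.LogSoftmax(),                          # log-probabilities
)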
Embedding Layer

Vocabulary   Index   Embedding (trainable weights)
I            1        0.020    0.006
am           2       -0.003    0.010
happy        3        0.009    0.010
because      4       -0.011   -0.018
learning     5       -0.040   -0.047
NLP          6       -0.009    0.050
sad          7       -0.044    0.001
not          8        0.011   -0.022

The weight matrix has shape vocabulary size × embedding dimension.
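A sketch of the embedding layer above (vocab_size=9 leaves room for the table's 1-based indices plus a padding index 0; this is an assumption):

import numpy as np
import trax
from trax import layers as tl

embed = tl.Embedding(vocab_size=9, d_feature=2)
tokens = np.array([[1, 2, 3]])               # indices for "I am happy"
embed.init(trax.shapes.signature(tokens))    # creates the trainable weight matrix
vectors = embed(tokens)                      # shape (1, 3, 2): one vector per token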
Mean Layer

Tweet: "I am happy"

Vocabulary   Index   Embedding
I            1        0.020    0.006
am           2       -0.003    0.010
happy        3        0.009    0.010

The mean layer averages the embeddings of the words in the tweet into a single vector:
((0.020 - 0.003 + 0.009)/3, (0.006 + 0.010 + 0.010)/3) ≈ (0.0087, 0.0087)
It has no trainable parameters.
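The same computation as a sketch, using the numbers from the table:

import numpy as np
from trax import layers as tl

emb = np.array([[[ 0.020, 0.006],
                 [-0.003, 0.010],
                 [ 0.009, 0.010]]])          # shape (batch=1, tokens=3, d_feature=2)
print(tl.Mean(axis=1)(emb))                  # ≈ [[0.0087 0.0087]]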
● Training
Computing gradients in Trax
import trax

def f(x):
    return 3 * x**2 + x

grad_f = trax.math.grad(f)   # grad() returns a function that computes the gradient of f
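Since f'(x) = 6x + 1, the returned function can be spot-checked at a point:

print(grad_f(2.0))   # 13.0, because f'(2) = 6*2 + 1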
Training with grad()
y = model(x)
grads = grad(y.forward)(y.weights, x)   # forward and back-propagation in one call

In a loop, apply gradient descent:

weights -= alpha * grads
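A minimal sketch of that loop, following the slide's pseudocode (alpha and n_steps are hypothetical hyperparameters):

alpha, n_steps = 0.01, 100
weights = y.weights
for _ in range(n_steps):
    grads = grad(y.forward)(weights, x)      # forward and back-propagation
    weights -= alpha * grads                 # gradient-descent step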
Summary
● grad() makes training much easier