Ungraded Lab: Multiple LSTMs
In this lab, you will look at how to build a model with multiple LSTM layers. Since you already know the preceding steps (e.g. downloading datasets, preparing the data, etc.), we won't expound on them anymore so you can just focus on the model-building code.
Download and Prepare the Dataset
```python
import tensorflow_datasets as tfds

# Download the subword-encoded, pretokenized dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)

# Get the tokenizer
tokenizer = info.features['text'].encoder
```
Like the previous lab, we increased the `BATCH_SIZE` here to make the training faster. If you are running this on a machine with limited memory, feel free to reduce it.

```python
BUFFER_SIZE = 10000
BATCH_SIZE = 256

# Get the train and test splits
train_data, test_data = dataset['train'], dataset['test']

# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)

# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
```
Build and Compile the Model

You can build multiple-layer LSTM models by simply appending another `LSTM` layer to your `Sequential` model and setting the `return_sequences` flag to `True`. This is because an LSTM layer expects a sequence input, so if the previous layer is also an LSTM, then it should output a sequence as well. See the code cell below that demonstrates this flag in action. You'll notice that the output has 3 dimensions (`batch_size`, `timesteps`, `features`) when `return_sequences` is `True`.
```python
import tensorflow as tf
import numpy as np

# Hyperparameters
batch_size = 1
timesteps = 20
features = 16
lstm_dim = 8

print(f'batch_size: {batch_size}')
print(f'timesteps (sequence length): {timesteps}')
print(f'features (embedding size): {features}')
print(f'lstm output units: {lstm_dim}')

# Define array input with random values
random_input = np.random.rand(batch_size, timesteps, features)
print(f'shape of input array: {random_input.shape}')

# Define LSTM that returns a single output
lstm = tf.keras.layers.LSTM(lstm_dim)
result = lstm(random_input)
print(f'shape of lstm output(return_sequences=False): {result.shape}')

# Define LSTM that returns a sequence
lstm_rs = tf.keras.layers.LSTM(lstm_dim, return_sequences=True)
result = lstm_rs(random_input)
print(f'shape of lstm output(return_sequences=True): {result.shape}')
```
The next cell implements the stacked LSTM architecture.

```python
import tensorflow as tf

# Hyperparameters
embedding_dim = 64
lstm1_dim = 64
lstm2_dim = 32
dense_dim = 64

# Build the model
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
    tf.keras.layers.Dense(dense_dim, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Print the model summary
model.summary()
```
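As a sanity check on the summary, an LSTM layer's parameter count can be derived by hand: each of the 4 gates has a kernel of shape `(input_dim, units)`, a recurrent kernel of shape `(units, units)`, and a bias of shape `(units,)`, and a `Bidirectional` wrapper runs two independent LSTMs, doubling the total. A small sketch (the `lstm_params` helper is ours for illustration, not a Keras API):

```python
# Parameter count of one LSTM layer: 4 gates, each with an input kernel,
# a recurrent kernel, and a bias.
def lstm_params(input_dim, units):
    return 4 * (input_dim * units + units * units + units)

embedding_dim = 64
lstm1_dim = 64

# Bidirectional runs a forward and a backward LSTM, so double the count.
per_direction = lstm_params(embedding_dim, lstm1_dim)   # 33024
bidirectional_total = 2 * per_direction                 # 66048
```

The 66,048 figure should match the first `Bidirectional` row of `model.summary()` for the hyperparameters above.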
```python
# Set the training parameters
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

Train the Model
The cell below will start the training. Because of the additional LSTM layer, it will take longer than in the previous labs; with the default parameters we set, it will take around 2 minutes per epoch with the Colab GPU enabled.

```python
NUM_EPOCHS = 10

history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
```
```python
import matplotlib.pyplot as plt

# Plot utility
def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_' + string])
    plt.xlabel('Epochs')
    plt.ylabel(string)
    plt.legend([string, 'val_' + string])
    plt.show()

# Plot the accuracy and loss
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
```
Wrap Up

In this lab, you saw how to build a model with multiple LSTM layers. In the next lessons, you will continue exploring other architectures you can use to implement your sentiment classification model.