Analysis Report
Result
Gated Recurrent Unit (GRU) Method
Preprocessing: normalization with a MinMax scaler
Dataset look-back = 22
Training data : test data ratio = 70:30
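The preprocessing steps above (MinMax normalization, look-back windows of 22, and a 70:30 train/test split) can be sketched as follows. This is a minimal illustration, not the report's actual code: the input series is placeholder data, and the helper names are assumptions.

```python
import numpy as np

def minmax_scale(series):
    # MinMax normalization: rescale values into [0, 1].
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo)

def make_windows(series, look_back=22):
    # Each sample is look_back consecutive values; the target is the next value.
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.array(X), np.array(y)

series = minmax_scale(np.arange(100, dtype=float))  # placeholder data
X, y = make_windows(series, look_back=22)

# 70:30 split of the windowed samples into training and test data.
split = int(0.7 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

The split is applied after windowing, so each test sample's 22-step history never leaks into training targets.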
Methodology: GRU with 2 layers
Hidden Dimension = 22
Output units = 1
Epochs = 75
Batch layout = batch first
Optimizer = Adam
Learning Rate = 0.01
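The configuration listed above (2 GRU layers, hidden dimension 22, 1 output unit, Adam with learning rate 0.01, 75 epochs, batch-first layout) can be sketched in PyTorch; the "batch first" entry suggests PyTorch's `batch_first=True` flag, but that framing, the MSE loss choice, and the placeholder data are assumptions rather than details taken from the report.

```python
import torch
import torch.nn as nn

class GRUModel(nn.Module):
    def __init__(self, input_size=1, hidden_size=22, num_layers=2):
        super().__init__()
        # 2 GRU layers, hidden dimension 22, batch-first tensor layout.
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)  # 1 output unit

    def forward(self, x):
        out, _ = self.gru(x)           # x: (batch, look_back, features)
        return self.fc(out[:, -1, :])  # predict from the last time step

model = GRUModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.MSELoss()  # assumed loss; the report's loss values are MSE-scale

# Illustrative training loop: 75 epochs over placeholder data.
X = torch.rand(54, 22, 1)  # (samples, look_back=22, features=1)
y = torch.rand(54, 1)
for epoch in range(75):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
```

Taking only the last time step's hidden state feeds a single prediction per window, matching the one-step-ahead setup implied by the look-back windows.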
Train Loss = 7.643641583854333e-05
Test Loss = 0.00015840362175367773