FX_Trading_candle_stick.ipynb - Colaboratory
Author
Name: Farshid Hossain
GitHub: https://github.com/farshid101
LinkedIn: https://www.linkedin.com/in/farshid-hossain-b67890218/
Main Objective
Candlestick Classification
The model should detect whether a candlestick image shows a downside or an upside move
Identify increases and decreases by looking at the image
Predict the image class by building a model
Target
Build an effective model that predicts the right class for each image
Increase the accuracy and decrease the loss
Steps
Download the data from GitHub
Load the images
Build a baseline model
Use transfer learning
Goal
Learn new things
Be patient and be consistent
Learn by experimenting: experiment, experiment, experiment
Face mistakes and errors, and solve them without destroying the laptop or keyboard
Write code and learn code
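The first step, downloading the data, can be scripted so the notebook is reproducible end to end. A minimal sketch using only the standard library; the commented URL is a placeholder, not the archive's real location:

```python
import urllib.request
import pathlib

def download(url, dest):
    """Download `url` to `dest` and return the path (hypothetical helper)."""
    urllib.request.urlretrieve(url, dest)
    return pathlib.Path(dest)

# Placeholder URL -- point this at the actual candle_stick.zip in the repo.
# download("https://github.com/farshid101/<repo>/raw/main/candle_stick.zip",
#          "/content/candle_stick.zip")
```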
https://colab.research.google.com/drive/1g9VV3JUXcBDEiqZBq9fS4E8lm_B8bRjt#scrollTo=bE-V8gL0-9k7&printMode=true 1/30
7/18/23, 2:38 AM FX_Trading_candle_stick.ipynb - Colaboratory
!unzip /content/candle_stick.zip
Archive: /content/candle_stick.zip
replace img_candel_stick/Test/DOWN/Screenshot 2023-07-13 142105.png? [y]es, [n]o, [A]ll, [N]one, [r]ename:
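The [y]es/[n]o prompt appears because the archive was already extracted once. Python's `zipfile` module extracts non-interactively (silently overwriting existing files), which avoids the prompt; a small sketch assuming the same archive path:

```python
import zipfile

def extract(archive, dest="."):
    # Extract everything, overwriting files from a previous run -- no prompt.
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)

# extract("/content/candle_stick.zip", "/content")
```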
Exploring the directory
!ls img_candel_stick/
Test Train
#Train folder
!ls img_candel_stick/Train/
DOWN UP
#Test folder
!ls img_candel_stick/Test/
DOWN UP
#finding images
!ls img_candel_stick/Train/DOWN/
Screenshot 2023-07-14 020457.png Screenshot 2023-07-14 022635.png
'Screenshot 2023-07-14 020518.png' 'Screenshot 2023-07-14 022645.png'
'Screenshot 2023-07-14 020532.png' 'Screenshot 2023-07-14 022656.png'
'Screenshot 2023-07-14 020547.png' 'Screenshot 2023-07-14 022729.png'
'Screenshot 2023-07-14 020555.png' 'Screenshot 2023-07-14 022743.png'
'Screenshot 2023-07-14 020619.png' 'Screenshot 2023-07-14 022803.png'
'Screenshot 2023-07-14 020648.png' 'Screenshot 2023-07-14 022817.png'
'Screenshot 2023-07-14 020715.png' 'Screenshot 2023-07-14 022837.png'
'Screenshot 2023-07-14 020732.png' 'Screenshot 2023-07-14 022851.png'
'Screenshot 2023-07-14 020804.png' 'Screenshot 2023-07-14 022903.png'
'Screenshot 2023-07-14 020835.png' 'Screenshot 2023-07-14 022912.png'
'Screenshot 2023-07-14 020857.png' 'Screenshot 2023-07-14 022928.png'
'Screenshot 2023-07-14 020909.png' 'Screenshot 2023-07-14 022938.png'
'Screenshot 2023-07-14 020946.png' 'Screenshot 2023-07-14 023006.png'
'Screenshot 2023-07-14 021006.png' 'Screenshot 2023-07-14 023043.png'
'Screenshot 2023-07-14 021022.png' 'Screenshot 2023-07-14 023105.png'
'Screenshot 2023-07-14 021053.png' 'Screenshot 2023-07-14 023127.png'
'Screenshot 2023-07-14 021105.png' 'Screenshot 2023-07-14 023137.png'
'Screenshot 2023-07-14 021128.png' 'Screenshot 2023-07-14 023202.png'
'Screenshot 2023-07-14 021156.png' 'Screenshot 2023-07-14 023211.png'
'Screenshot 2023-07-14 021220.png' 'Screenshot 2023-07-14 023309.png'
'Screenshot 2023-07-14 021246.png' 'Screenshot 2023-07-14 023327.png'
import os
for dirpath, dirnames, filenames in os.walk("img_candel_stick"):
  print(f"There are {len(dirnames)} directories and {len(filenames)} images in '{dirpath}'")
# Another way to find out how many images are in a directory
num_down_images_train = len(os.listdir("img_candel_stick/Train/DOWN/"))
num_down_images_train
120
# Get the class names (programmatically, this is much more helpful with a longer list of classes)
import pathlib
import numpy as np
data_dir = pathlib.Path("img_candel_stick/Train/") # turn our training path into a Python path
class_names = np.array(sorted([item.name for item in data_dir.glob('*')])) # created a list of class_names from the subdirectories
print(class_names)
['DOWN' 'UP']
Image Load
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import random
import os
def view_random_image(target_dir, target_class):
# Setup target directory (we'll view images from here)
target_folder = target_dir+target_class
# Get a random image path
random_image = random.sample(os.listdir(target_folder), 1)
# Read in the image and plot it using matplotlib
img = mpimg.imread(target_folder + "/" + random_image[0])
plt.imshow(img)
plt.title(target_class)
plt.axis("off");
print(f"Image shape: {img.shape}") # show the shape of the image
return img
img = view_random_image(target_dir="img_candel_stick/Train/",
target_class="DOWN")
img = view_random_image(target_dir="img_candel_stick/Train/",
target_class="UP")
Making a function to view random images from a directory and a target class
def random_view(T_target_dir, T_target_class, Te_target_dir, Te_target_class):
  plt.figure(figsize=(16, 10))
  for i in range(12):
    plt.subplot(6, 2, i + 1)
    if i % 2 == 1:
      view_random_image(T_target_dir, T_target_class)
    else:
      view_random_image(Te_target_dir, Te_target_class)
random_view("img_candel_stick/Train/","UP","img_candel_stick/Train/", "DOWN" )
img = view_random_image(target_dir="img_candel_stick/Train/",
target_class="DOWN")
img.shape
(312, 345, 4)
img
...,
train_dir = "img_candel_stick/Train/"
test_dir = "img_candel_stick/Test/"
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Set the seed
tf.random.set_seed(42)
# Preprocess data (scale all pixel values to be between 0 and 1, also called scaling/normalization)
train_datagen = ImageDataGenerator(rescale=1./255)
valid_datagen = ImageDataGenerator(rescale=1./255)
# Import data from directories and turn it into batches
train_data = train_datagen.flow_from_directory(train_dir,
batch_size=32, # number of images to process at a time
target_size=(224, 224), # convert all images to be 224 x 224
class_mode="binary", # type of problem we're working on
seed=42)
valid_data = valid_datagen.flow_from_directory(test_dir,
batch_size=32,
target_size=(224, 224),
class_mode="binary",
seed=42)
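`rescale=1./255` simply maps each pixel from the usual 0–255 range into 0–1, which keeps the network's inputs small and consistent. The same operation in plain NumPy:

```python
import numpy as np

pixels = np.array([0, 127, 255], dtype=np.float32)
scaled = pixels / 255.0  # what rescale=1./255 applies to every image
print(scaled.min(), scaled.max())  # 0.0 1.0
```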
Model_1
model_1 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=10,
kernel_size=3, # can also be (3, 3)
activation="relu",
input_shape=(224, 224, 3)), # first layer specifies input shape (height, width, colour channels)
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.MaxPool2D(pool_size=2, # pool_size can also be (2, 2)
padding="valid"), # padding can also be 'same'
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation="sigmoid") # binary activation output
])
# Compile the model
model_1.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit the model
history_1 = model_1.fit(train_data,
epochs=5,
steps_per_epoch=len(train_data),
validation_data=valid_data,
validation_steps=len(valid_data))
Epoch 1/5
8/8 [==============================] - 15s 437ms/step - loss: 0.7090 - accuracy: 0.5417 - val_loss: 0.6733 - val_accuracy: 0.5000
Epoch 2/5
8/8 [==============================] - 2s 223ms/step - loss: 0.5921 - accuracy: 0.7625 - val_loss: 0.4609 - val_accuracy: 0.8472
Epoch 3/5
8/8 [==============================] - 2s 225ms/step - loss: 0.3659 - accuracy: 0.8958 - val_loss: 0.3483 - val_accuracy: 0.8472
Epoch 4/5
8/8 [==============================] - 2s 218ms/step - loss: 0.2837 - accuracy: 0.8875 - val_loss: 0.3237 - val_accuracy: 0.8611
Epoch 5/5
8/8 [==============================] - 2s 230ms/step - loss: 0.2244 - accuracy: 0.9250 - val_loss: 0.3105 - val_accuracy: 0.8750
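The layer output shapes that `model_1.summary()` reports follow from simple arithmetic: each 3×3 "valid" convolution shrinks height and width by 2, and each 2×2 max pool halves them (rounding down). A quick check:

```python
def conv_out(size, kernel=3, stride=1):
    # 'valid' convolution output size: (size - kernel) // stride + 1
    return (size - kernel) // stride + 1

def pool_out(size, pool=2):
    # max pooling with pool_size == stride: size // pool
    return size // pool

s = conv_out(conv_out(224))  # two 3x3 convs: 224 -> 222 -> 220
s = pool_out(s)              # 220 -> 110
s = conv_out(conv_out(s))    # 110 -> 108 -> 106
s = pool_out(s)              # 106 -> 53
print(s * s * 10)            # 28090 values feed the final Dense layer
```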
# Check out the layers in our model
model_1.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 222, 222, 10)      280
 conv2d_1 (Conv2D)               (None, 220, 220, 10)      910
 max_pooling2d (MaxPooling2D)    (None, 110, 110, 10)      0
 conv2d_2 (Conv2D)               (None, 108, 108, 10)      910
 conv2d_3 (Conv2D)               (None, 106, 106, 10)      910
 max_pooling2d_1 (MaxPooling2D)  (None, 53, 53, 10)        0
 flatten (Flatten)               (None, 28090)             0
 dense (Dense)                   (None, 1)                 28091
=================================================================
Total params: 31,101
Trainable params: 31,101
Non-trainable params: 0
_________________________________________________________________
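The parameter counts can be verified by hand: a Conv2D layer has (kernel_h × kernel_w × input_channels + 1) × filters parameters (the +1 is each filter's bias), and a Dense layer has inputs × units + units. For this model:

```python
def conv_params(kernel, in_channels, filters):
    # each filter: kernel*kernel*in_channels weights + 1 bias
    return (kernel * kernel * in_channels + 1) * filters

first = conv_params(3, 3, 10)    # first Conv2D on RGB input: 280
later = conv_params(3, 10, 10)   # each later Conv2D: 910
dense = 53 * 53 * 10 + 1         # 28090 flattened features -> 1 sigmoid unit
print(first + 3 * later + dense) # 31101, matching "Total params: 31,101"
```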
History of Model 1
# Plot the training curves
import pandas as pd
pd.DataFrame(history_1.history).plot(figsize=(10, 7));
# Plot the validation and training data separately
def plot_loss_curves(history):
"""
Returns separate loss curves for training and validation metrics.
"""
loss = history.history['loss']
val_loss = history.history['val_loss']
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
epochs = range(len(history.history['loss']))
# Plot loss
plt.plot(epochs, loss, label='training_loss')
plt.plot(epochs, val_loss, label='val_loss')
plt.title('Loss')
plt.xlabel('Epochs')
plt.legend()
# Plot accuracy
plt.figure()
plt.plot(epochs, accuracy, label='training_accuracy')
plt.plot(epochs, val_accuracy, label='val_accuracy')
plt.title('Accuracy')
plt.xlabel('Epochs')
plt.legend();
plot_loss_curves(history_1)
# Create ImageDataGenerator training instance with data augmentation
train_datagen_augmented = ImageDataGenerator(rescale=1/255.,
rotation_range=20, # rotate the image slightly between 0 and 20 degrees (note: this is an int, not a float)
shear_range=0.2, # shear the image
zoom_range=0.2, # zoom into the image
width_shift_range=0.2, # shift the image width ways
height_shift_range=0.2, # shift the image height ways
horizontal_flip=True) # flip the image on the horizontal axis
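Each of these augmentations is a simple array transform applied on the fly as batches are generated. `horizontal_flip`, for instance, just reverses the width axis; in NumPy terms:

```python
import numpy as np

img = np.arange(6).reshape(2, 3, 1)    # tiny 2x3 "image" with 1 channel
flipped = img[:, ::-1, :]              # what horizontal_flip does
print(img[0, :, 0], flipped[0, :, 0])  # [0 1 2] [2 1 0]
```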
# Create ImageDataGenerator training instance without data augmentation
train_datagen = ImageDataGenerator(rescale=1/255.)
# Create ImageDataGenerator test instance without data augmentation
test_datagen = ImageDataGenerator(rescale=1/255.)
# Import data and augment it from training directory
print("Augmented training images:")
train_data_augmented = train_datagen_augmented.flow_from_directory(train_dir,
target_size=(224, 224),
batch_size=32,
class_mode='binary',
shuffle=False) # Don't shuffle for demonstration purposes; usually a good idea to shuffle
# Create non-augmented data batches
print("Non-augmented training images:")
train_data = train_datagen.flow_from_directory(train_dir,
target_size=(224, 224),
batch_size=32,
class_mode='binary',
shuffle=False) # Don't shuffle for demonstration purposes
print("Unchanged test images:")
test_data = test_datagen.flow_from_directory(test_dir,
target_size=(224, 224),
batch_size=32,
class_mode='binary')
# Pull one batch from each generator with Python's built-in next()
images, labels = next(train_data)
augmented_images, augmented_labels = next(train_data_augmented)
# Show original image and augmented image
random_number = random.randint(0, len(images) - 1) # pick a valid index (the last batch may hold fewer than 32 images)
plt.imshow(images[random_number])
plt.title("Original image")
plt.axis(False)
plt.figure()
plt.imshow(augmented_images[random_number])
plt.title("Augmented image")
plt.axis(False);
Model_2
# Creating model
model_2= tf.keras.models.Sequential([
tf.keras.layers.Conv2D(10 ,3, activation="relu" , input_shape=(224, 224, 3)) ,
tf.keras.layers.MaxPool2D(pool_size=2) ,# reduce number of features by half
tf.keras.layers.Conv2D(10 ,3, activation="relu" ) ,
tf.keras.layers.MaxPool2D() ,
tf.keras.layers.Conv2D(10 ,3, activation="relu" ) ,
tf.keras.layers.MaxPool2D() ,
tf.keras.layers.Conv2D(10 ,3, activation="relu" ) ,
tf.keras.layers.MaxPool2D() ,
tf.keras.layers.Flatten() ,
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Compile the model
model_2.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
# Fit the model
history_2 = model_2.fit(train_data_augmented, # changed to augmented training data
epochs=5,
steps_per_epoch=len(train_data_augmented),
validation_data=test_data,
validation_steps=len(test_data))
Epoch 1/5
8/8 [==============================] - 7s 545ms/step - loss: 0.6978 - accuracy: 0.3417 - val_loss: 0.6932 - val_accuracy: 0.4861
Epoch 2/5
8/8 [==============================] - 4s 482ms/step - loss: 0.6931 - accuracy: 0.4958 - val_loss: 0.6886 - val_accuracy: 0.7083
Epoch 3/5
8/8 [==============================] - 4s 510ms/step - loss: 0.6887 - accuracy: 0.6750 - val_loss: 0.6821 - val_accuracy: 0.7361
Epoch 4/5
8/8 [==============================] - 4s 470ms/step - loss: 0.6843 - accuracy: 0.6875 - val_loss: 0.6723 - val_accuracy: 0.7500
Epoch 5/5
8/8 [==============================] - 5s 614ms/step - loss: 0.6760 - accuracy: 0.6833 - val_loss: 0.6476 - val_accuracy: 0.7917
plot_loss_curves(history_2)
# Import data and augment it from directories
train_data_augmented_shuffled = train_datagen_augmented.flow_from_directory(train_dir,
target_size=(224, 224),
batch_size=32,
class_mode='binary',
shuffle=True) # Shuffle data (default)
# Creating model
model_3= tf.keras.models.Sequential([
tf.keras.layers.Conv2D(10 ,3, activation="relu" , input_shape=(224, 224, 3)) ,
tf.keras.layers.MaxPool2D(pool_size=2) ,# reduce number of features by half
tf.keras.layers.Conv2D(10 ,3, activation="relu" ) ,
tf.keras.layers.MaxPool2D() ,
tf.keras.layers.Conv2D(10 ,3, activation="relu" ) ,
tf.keras.layers.MaxPool2D() ,
tf.keras.layers.Conv2D(10 ,3, activation="relu" ) ,
tf.keras.layers.MaxPool2D() ,
tf.keras.layers.Flatten() ,
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Compile the model
model_3.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
# Fit the model
history_3 = model_3.fit(train_data_augmented_shuffled, # changed to augmented training data
epochs=5,
steps_per_epoch=len(train_data_augmented),
validation_data=test_data,
validation_steps=len(test_data))
Epoch 1/5
8/8 [==============================] - 6s 460ms/step - loss: 0.6998 - accuracy: 0.5292 - val_loss: 0.6981 - val_accuracy: 0.5000
Epoch 2/5
8/8 [==============================] - 6s 779ms/step - loss: 0.6935 - accuracy: 0.5000 - val_loss: 0.6897 - val_accuracy: 0.5000
Epoch 3/5
8/8 [==============================] - 4s 506ms/step - loss: 0.6884 - accuracy: 0.5042 - val_loss: 0.6818 - val_accuracy: 0.5694
Epoch 4/5
8/8 [==============================] - 5s 558ms/step - loss: 0.6792 - accuracy: 0.6667 - val_loss: 0.6717 - val_accuracy: 0.6667
Epoch 5/5
8/8 [==============================] - 4s 472ms/step - loss: 0.6706 - accuracy: 0.6875 - val_loss: 0.6560 - val_accuracy: 0.6667
History of Model 3
plot_loss_curves(history_3)
import matplotlib.pyplot as plt
def plot_compare_history(histories):
"""
Plots separate loss curves for training and validation metrics for multiple histories.
"""
colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k'] # Define colors for each history
# Create subplots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
for i, history in enumerate(histories):
loss = history.history['loss']
val_loss = history.history['val_loss']
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
epochs = range(len(history.history['loss']))
# Plot loss
ax1.plot(epochs, loss, label=f'Training Loss {i+1}', color=colors[i])
ax1.plot(epochs, val_loss, label=f'Validation Loss {i+1}', linestyle='--', color=colors[i])
ax1.set_title('Loss')
ax1.set_xlabel('Epochs')
ax1.legend()
# Plot accuracy
ax2.plot(epochs, accuracy, label=f'Training Accuracy {i+1}', color=colors[i])
ax2.plot(epochs, val_accuracy, label=f'Validation Accuracy {i+1}', linestyle='--', color=colors[i])
ax2.set_title('Accuracy')
ax2.set_xlabel('Epochs')
ax2.legend()
# Display the plot
plt.tight_layout()
plt.show()
histories = [ history_2, history_3]
plot_compare_history(histories)
histories = [ history_1, history_3]
plot_compare_history(histories)
Report 1
Without data augmentation, the model performs well.
With augmentation, the model performs poorly whether shuffle is true or false (model_2, model_3).
Next, improve model_1 by building a new version and increasing the number of epochs, the number of layers, and the learning rate.
model_4 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=10,
kernel_size=3, # can also be (3, 3)
activation="relu",
input_shape=(224, 224, 3)), # first layer specifies input shape (height, width, colour channels)
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.MaxPool2D(pool_size=2, # pool_size can also be (2, 2)
padding="valid"), # padding can also be 'same'
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation="sigmoid") # binary activation output
])
# Compile the model
model_4.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit the model
history_4_e = model_4.fit(train_data,
epochs=10,
steps_per_epoch=len(train_data),
validation_data=valid_data,
validation_steps=len(valid_data))
Epoch 1/10
8/8 [==============================] - 5s 264ms/step - loss: 2.1174 - accuracy: 0.2333 - val_loss: 0.8126 - val_accuracy: 0.5000
Epoch 2/10
8/8 [==============================] - 2s 232ms/step - loss: 0.7519 - accuracy: 0.5000 - val_loss: 0.6892 - val_accuracy: 0.5000
Epoch 3/10
8/8 [==============================] - 2s 227ms/step - loss: 0.6888 - accuracy: 0.5833 - val_loss: 0.6650 - val_accuracy: 0.7361
Epoch 4/10
8/8 [==============================] - 2s 230ms/step - loss: 0.6520 - accuracy: 0.8083 - val_loss: 0.6370 - val_accuracy: 0.6944
Epoch 5/10
8/8 [==============================] - 2s 281ms/step - loss: 0.6600 - accuracy: 0.5458 - val_loss: 0.6082 - val_accuracy: 0.7917
Epoch 6/10
8/8 [==============================] - 2s 292ms/step - loss: 0.6580 - accuracy: 0.5833 - val_loss: 0.5759 - val_accuracy: 0.8056
Epoch 7/10
8/8 [==============================] - 2s 230ms/step - loss: 0.6352 - accuracy: 0.5875 - val_loss: 0.5345 - val_accuracy: 0.8194
Epoch 8/10
8/8 [==============================] - 2s 223ms/step - loss: 0.5464 - accuracy: 0.8750 - val_loss: 0.5350 - val_accuracy: 0.8333
Epoch 9/10
8/8 [==============================] - 2s 223ms/step - loss: 0.5331 - accuracy: 0.8875 - val_loss: 0.4524 - val_accuracy: 0.8472
Epoch 10/10
8/8 [==============================] - 2s 262ms/step - loss: 0.4627 - accuracy: 0.8333 - val_loss: 0.3473 - val_accuracy: 0.8472
model_4 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=10,
kernel_size=3, # can also be (3, 3)
activation="relu",
input_shape=(224, 224, 3)), # first layer specifies input shape (height, width, colour channels)
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.MaxPool2D(pool_size=2, # pool_size can also be (2, 2)
padding="valid"), # padding can also be 'same'
tf.keras.layers.Conv2D(100, 3, activation="relu"),
tf.keras.layers.Conv2D(100, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation="sigmoid") # binary activation output
])
# Compile the model
model_4.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit the model
history_4_l = model_4.fit(train_data,
epochs=5,
steps_per_epoch=len(train_data),
validation_data=valid_data,
validation_steps=len(valid_data))
Epoch 1/5
8/8 [==============================] - 11s 513ms/step - loss: 0.6943 - accuracy: 0.4625 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 2/5
8/8 [==============================] - 2s 271ms/step - loss: 0.6939 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 3/5
8/8 [==============================] - 2s 226ms/step - loss: 0.6935 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 4/5
8/8 [==============================] - 2s 237ms/step - loss: 0.6933 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 5/5
8/8 [==============================] - 2s 243ms/step - loss: 0.6935 - accuracy: 0.2375 - val_loss: 0.6931 - val_accuracy: 0.5417
model_4 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=10,
kernel_size=3, # can also be (3, 3)
activation="relu",
input_shape=(224, 224, 3)), # first layer specifies input shape (height, width, colour channels)
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.MaxPool2D(pool_size=2, # pool_size can also be (2, 2)
padding="valid"), # padding can also be 'same'
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation="sigmoid") # binary activation output
])
# Compile the model
model_4.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), # default learning_rate is 0.001
metrics=["accuracy"])
# Fit the model
history_4_lr = model_4.fit(train_data,
epochs=5,
steps_per_epoch=len(train_data),
validation_data=valid_data,
validation_steps=len(valid_data))
Epoch 1/5
8/8 [==============================] - 4s 287ms/step - loss: 1.1203 - accuracy: 0.4875 - val_loss: 0.7221 - val_accuracy: 0.5000
Epoch 2/5
8/8 [==============================] - 2s 313ms/step - loss: 0.6759 - accuracy: 0.5083 - val_loss: 0.6614 - val_accuracy: 0.8889
Epoch 3/5
8/8 [==============================] - 2s 225ms/step - loss: 0.6533 - accuracy: 0.8125 - val_loss: 0.7054 - val_accuracy: 0.5000
Epoch 4/5
8/8 [==============================] - 2s 235ms/step - loss: 0.7214 - accuracy: 0.5875 - val_loss: 0.6015 - val_accuracy: 0.8611
Epoch 5/5
8/8 [==============================] - 2s 224ms/step - loss: 0.5536 - accuracy: 0.8708 - val_loss: 0.4541 - val_accuracy: 0.9306
Comparing the Model 4 variants
(Model_1 was used as the base model for these experiments)
histories = [ history_4_e, history_4_l ,history_4_lr]
plot_compare_history(histories)
Report 2
Model 1's epochs, layers, and learning rate were each varied in turn.
Increasing the epochs or tuning the learning rate improved performance; adding many more layers did not (training stalled near 50% accuracy).
Model_5
Increasing the number of layers and epochs, and lowering the learning rate to 0.0003
model_5 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=10,
kernel_size=3, # can also be (3, 3)
activation="relu",
input_shape=(224, 224, 3)), # first layer specifies input shape (height, width, colour channels)
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.MaxPool2D(pool_size=2, # pool_size can also be (2, 2)
padding="valid"), # padding can also be 'same'
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation="sigmoid") # binary activation output
])
# Compile the model
model_5.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0003),
metrics=["accuracy"])
# Fit the model
history_5 = model_5.fit(train_data,
epochs=15,
steps_per_epoch=len(train_data),
validation_data=valid_data,
validation_steps=len(valid_data))
Epoch 1/15
8/8 [==============================] - 6s 297ms/step - loss: 0.7992 - accuracy: 0.4167 - val_loss: 0.6887 - val_accuracy: 0.5000
Epoch 2/15
8/8 [==============================] - 2s 219ms/step - loss: 0.6857 - accuracy: 0.6625 - val_loss: 0.6645 - val_accuracy: 0.7361
Epoch 3/15
8/8 [==============================] - 2s 225ms/step - loss: 0.6638 - accuracy: 0.6833 - val_loss: 0.6454 - val_accuracy: 0.5000
Epoch 4/15
8/8 [==============================] - 2s 228ms/step - loss: 0.6054 - accuracy: 0.7167 - val_loss: 0.5502 - val_accuracy: 0.8472
Epoch 5/15
8/8 [==============================] - 2s 259ms/step - loss: 0.5064 - accuracy: 0.8292 - val_loss: 0.3799 - val_accuracy: 0.8611
Epoch 6/15
8/8 [==============================] - 2s 272ms/step - loss: 0.3288 - accuracy: 0.8750 - val_loss: 0.3774 - val_accuracy: 0.7917
Epoch 7/15
8/8 [==============================] - 2s 218ms/step - loss: 0.5515 - accuracy: 0.8042 - val_loss: 0.3763 - val_accuracy: 0.8611
Epoch 8/15
8/8 [==============================] - 2s 234ms/step - loss: 0.4448 - accuracy: 0.7958 - val_loss: 0.4564 - val_accuracy: 0.8056
Epoch 9/15
8/8 [==============================] - 2s 225ms/step - loss: 0.4484 - accuracy: 0.8167 - val_loss: 0.4764 - val_accuracy: 0.7639
Epoch 10/15
8/8 [==============================] - 2s 233ms/step - loss: 0.5537 - accuracy: 0.7792 - val_loss: 0.4170 - val_accuracy: 0.8611
Epoch 11/15
8/8 [==============================] - 2s 266ms/step - loss: 0.3557 - accuracy: 0.8875 - val_loss: 0.3269 - val_accuracy: 0.8750
Epoch 12/15
8/8 [==============================] - 2s 226ms/step - loss: 0.3356 - accuracy: 0.8750 - val_loss: 0.2654 - val_accuracy: 0.9028
Epoch 13/15
8/8 [==============================] - 2s 222ms/step - loss: 0.2932 - accuracy: 0.9042 - val_loss: 0.2742 - val_accuracy: 0.9028
Epoch 14/15
8/8 [==============================] - 2s 224ms/step - loss: 0.2882 - accuracy: 0.8958 - val_loss: 0.2811 - val_accuracy: 0.9028
Epoch 15/15
8/8 [==============================] - 2s 219ms/step - loss: 0.2545 - accuracy: 0.9083 - val_loss: 0.2571 - val_accuracy: 0.9306
History of Model 5
plot_loss_curves(history_5)
Report 3
Model 5 performed great,
but the validation loss is abnormal:
training loss and validation loss show a big difference across epochs.
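The train/validation gap noted above can be tracked numerically from the history object instead of read off the plot. A small helper, assuming the standard Keras history dict keys:

```python
def generalization_gap(history_dict):
    # Final validation loss minus final training loss; a large positive
    # value suggests overfitting.
    return history_dict["val_loss"][-1] - history_dict["loss"][-1]

# e.g. generalization_gap(history_5.history)
```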
Model_6
model_6 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=10,
kernel_size=3, # can also be (3, 3)
activation="relu",
input_shape=(224, 224, 3)), # first layer specifies input shape (height, width, colour channels)
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.MaxPool2D(pool_size=2, # pool_size can also be (2, 2)
padding="valid"), # padding can also be 'same'
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(10, 3, activation="relu"), # activation='relu' == tf.keras.layers.Activations(tf.nn.relu)
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation="sigmoid") # binary activation output
])
# Compile the model
model_6.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
metrics=["accuracy"])
# Fit the model
history_6 = model_6.fit(train_data,
epochs=25,
steps_per_epoch=len(train_data),
validation_data=valid_data,
validation_steps=len(valid_data))
Epoch 1/25
8/8 [==============================] - 4s 252ms/step - loss: 0.7549 - accuracy: 0.5542 - val_loss: 0.6882 - val_accuracy: 0.6944
Epoch 2/25
8/8 [==============================] - 2s 232ms/step - loss: 0.6866 - accuracy: 0.7375 - val_loss: 0.6728 - val_accuracy: 0.8194
Epoch 3/25
8/8 [==============================] - 2s 316ms/step - loss: 0.6676 - accuracy: 0.8125 - val_loss: 0.6325 - val_accuracy: 0.8611
Epoch 4/25
8/8 [==============================] - 2s 221ms/step - loss: 0.6697 - accuracy: 0.5792 - val_loss: 0.5494 - val_accuracy: 0.8056
Epoch 5/25
8/8 [==============================] - 2s 217ms/step - loss: 0.5584 - accuracy: 0.8292 - val_loss: 0.4122 - val_accuracy: 0.8333
Epoch 6/25
8/8 [==============================] - 2s 238ms/step - loss: 0.4329 - accuracy: 0.8292 - val_loss: 0.3633 - val_accuracy: 0.8333
Epoch 7/25
8/8 [==============================] - 2s 210ms/step - loss: 0.3605 - accuracy: 0.8625 - val_loss: 0.3367 - val_accuracy: 0.8611
Epoch 8/25
8/8 [==============================] - 2s 311ms/step - loss: 0.3292 - accuracy: 0.8917 - val_loss: 0.3168 - val_accuracy: 0.8611
Epoch 9/25
8/8 [==============================] - 2s 225ms/step - loss: 0.3051 - accuracy: 0.8875 - val_loss: 0.3377 - val_accuracy: 0.8750
Epoch 10/25
8/8 [==============================] - 2s 217ms/step - loss: 0.2953 - accuracy: 0.8958 - val_loss: 0.2966 - val_accuracy: 0.9167
Epoch 11/25
8/8 [==============================] - 2s 221ms/step - loss: 0.2614 - accuracy: 0.9083 - val_loss: 0.2994 - val_accuracy: 0.9167
Epoch 12/25
8/8 [==============================] - 2s 217ms/step - loss: 0.2333 - accuracy: 0.9125 - val_loss: 0.2863 - val_accuracy: 0.9167
Epoch 13/25
8/8 [==============================] - 2s 221ms/step - loss: 0.2039 - accuracy: 0.9250 - val_loss: 0.2822 - val_accuracy: 0.9306
Epoch 14/25
8/8 [==============================] - 2s 268ms/step - loss: 0.1584 - accuracy: 0.9542 - val_loss: 0.2953 - val_accuracy: 0.9167
Epoch 15/25
8/8 [==============================] - 2s 221ms/step - loss: 0.1230 - accuracy: 0.9500 - val_loss: 0.3012 - val_accuracy: 0.9167
Epoch 16/25
8/8 [==============================] - 2s 246ms/step - loss: 0.0775 - accuracy: 0.9667 - val_loss: 0.3314 - val_accuracy: 0.9167
Epoch 17/25
8/8 [==============================] - 2s 226ms/step - loss: 0.0420 - accuracy: 0.9917 - val_loss: 0.4865 - val_accuracy: 0.9306
Epoch 18/25
8/8 [==============================] - 2s 235ms/step - loss: 0.0212 - accuracy: 1.0000 - val_loss: 0.5664 - val_accuracy: 0.9444
Epoch 19/25
8/8 [==============================] - 2s 273ms/step - loss: 0.0051 - accuracy: 1.0000 - val_loss: 0.8663 - val_accuracy: 0.9306
Epoch 20/25
8/8 [==============================] - 2s 220ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 1.0267 - val_accuracy: 0.9444
Epoch 21/25
8/8 [==============================] - 2s 214ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 1.1256 - val_accuracy: 0.9306
Epoch 22/25
8/8 [==============================] - 2s 228ms/step - loss: 6.0399e-04 - accuracy: 1.0000 - val_loss: 1.0580 - val_accuracy: 0.930
Epoch 23/25
8/8 [==============================] - 2s 228ms/step - loss: 7.5434e-04 - accuracy: 1.0000 - val_loss: 1.0860 - val_accuracy: 0.930
Epoch 24/25
8/8 [==============================] - 2s 237ms/step - loss: 2.5484e-04 - accuracy: 1.0000 - val_loss: 1.1418 - val_accuracy: 0.930
Epoch 25/25
8/8 [==============================] - 2s 225ms/step - loss: 1.3518e-04 - accuracy: 1.0000 - val_loss: 1.1881 - val_accuracy: 0.930
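The log shows classic overfitting from around epoch 13 onward: training loss keeps falling toward zero while validation loss climbs from roughly 0.28 to 1.19. One common remedy (not used in the original run) is an `EarlyStopping` callback; the sketch below uses an illustrative patience of 5, which is an assumption rather than a tuned value.

```python
import tensorflow as tf

# Stop training once val_loss hasn't improved for `patience` epochs and
# roll back to the best weights seen so far. patience=5 is an illustrative
# choice, not tuned for this dataset.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              patience=5,
                                              restore_best_weights=True)

# Passed to fit() via: model_6.fit(..., callbacks=[early_stop])
```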
History of Model 6
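The `plot_loss_curves` helper is defined earlier in the notebook. For readers of this excerpt, a minimal version consistent with how it is called here might look like this (the exact plotting details are an assumption):

```python
import matplotlib.pyplot as plt

def plot_loss_curves(history):
    """Plots training vs. validation loss and accuracy from a Keras History object."""
    loss = history.history["loss"]
    val_loss = history.history["val_loss"]
    acc = history.history["accuracy"]
    val_acc = history.history["val_accuracy"]
    epochs = range(len(loss))

    # Loss curves
    plt.figure()
    plt.plot(epochs, loss, label="training_loss")
    plt.plot(epochs, val_loss, label="val_loss")
    plt.title("Loss")
    plt.xlabel("Epochs")
    plt.legend()

    # Accuracy curves
    plt.figure()
    plt.plot(epochs, acc, label="training_accuracy")
    plt.plot(epochs, val_acc, label="val_accuracy")
    plt.title("Accuracy")
    plt.xlabel("Epochs")
    plt.legend()
```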
plot_loss_curves(history_6)
Prediction
def load_and_prep_image(filename, img_shape=224):
"""
Reads an image from filename, turns it into a tensor
and reshapes it to (img_shape, img_shape, colour_channel).
"""
# Read in target file (an image)
img = tf.io.read_file(filename)
# Decode the read file into a tensor & ensure 3 colour channels
# (our model is trained on images with 3 colour channels and sometimes images have 4 colour channels)
img = tf.image.decode_image(img, channels=3)
# Resize the image (to the same size our model was trained on)
img = tf.image.resize(img, size = [img_shape, img_shape])
# Rescale the image (get all values between 0 and 1)
img = img/255.
return img
def pred_and_plot(model, filename, class_names):
"""
Imports an image located at filename, makes a prediction on it with
a trained model and plots the image with the predicted class as the title.
"""
# Import the target image and preprocess it
img = load_and_prep_image(filename)
# Make a prediction
pred = model.predict(tf.expand_dims(img, axis=0))
# Get the predicted class
pred_class = class_names[int(tf.round(pred)[0][0])]
# Plot the image and predicted class
plt.imshow(img)
plt.title(f"Prediction: {pred_class}")
plt.axis(False);
This image was collected randomly from Tradingview.com. Image: Screenshot 2023-07-17 153241.png
URL of image: https://github.com/farshid101/Real-life-Exclusive-Deep-learning-
project/blob/main/FX%20Trading/test_iamge/Screenshot%202023-07-17%20153241.png
pred_and_plot(model_6, "Screenshot 2023-07-17 153241.png", class_names)
# Use the raw URL so wget downloads the image itself rather than the GitHub HTML page
!wget https://raw.githubusercontent.com/farshid101/Real-life-Exclusive-Deep-learning-project/main/FX%20Trading/test_iamge/test%20image%20.png
pred_and_plot(model_6, "Screenshot 2023-07-17 155647.png", class_names)
pred_and_plot(model_6, "Screenshot 2023-07-17 155859.png", class_names)
Saving Model
model_6.save('Raw_model.h5')
Loading model
# Load the saved model
from tensorflow.keras.models import load_model
model = load_model('Raw_model.h5')
# Use the loaded model for predictions
pred_and_plot(model, "Screenshot 2023-07-17 182350.png", class_names)
Checking GPU
# Are we using a GPU?
!nvidia-smi
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
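`nvidia-smi` reports driver-level state; an alternative check from inside TensorFlow itself (useful when the shell command is unavailable) could look like this:

```python
import tensorflow as tf

# List any GPUs TensorFlow can see; an empty list means training runs on CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus)
```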
import datetime
def create_tensorboard_callback(dir_name, experiment_name):
log_dir = dir_name + "/" + experiment_name + "/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir
)
print(f"Saving TensorBoard log files to: {log_dir}")
return tensorboard_callback
Now we're going to do a similar process, except the majority of our model's layers are going to come from TensorFlow Hub.
1. ResNetV2 - a state of the art computer vision model architecture from 2016.
2. EfficientNet - a state of the art computer vision architecture from 2019.
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
Model URLs
# Resnet 50 V2 feature vector
resnet_url = "https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4"
# Original: EfficientNetB0 feature vector (version 1)
efficientnet_url = "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1"
# Setup data inputs
from tensorflow.keras.preprocessing.image import ImageDataGenerator
IMAGE_SHAPE = (224, 224)
BATCH_SIZE = 32
train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)
print("Training images:")
train_data = train_datagen.flow_from_directory(train_dir,
target_size=IMAGE_SHAPE,
batch_size=BATCH_SIZE,
class_mode="binary")
print("Testing images:")
test_data = test_datagen.flow_from_directory(test_dir,
target_size=IMAGE_SHAPE,
batch_size=BATCH_SIZE,
class_mode="binary")
Training images:
Found 240 images belonging to 2 classes.
Testing images:
Found 72 images belonging to 2 classes.
def create_model(model_url, num_classes=10):
"""Takes a TensorFlow Hub URL and creates a Keras Sequential model with it.
Args:
model_url (str): A TensorFlow Hub feature extraction URL.
num_classes (int): Number of output neurons in output layer,
should be equal to number of target classes, default 10.
Returns:
An uncompiled Keras Sequential model with model_url as feature
extractor layer and Dense output layer with num_classes outputs.
"""
# Download the pretrained model and save it as a Keras layer
feature_extractor_layer = hub.KerasLayer(model_url,
trainable=False, # freeze the underlying patterns
name='feature_extraction_layer',
input_shape=IMAGE_SHAPE+(3,)) # define the input image shape
# Create our own model
model = tf.keras.Sequential([
feature_extractor_layer, # use the feature extraction layer as the base
layers.Dense(num_classes, activation='sigmoid', name='output_layer') # create our own output layer
])
return model
Resnet_model
# Create model
resnet_model = create_model(resnet_url, num_classes=1)
# Compile
resnet_model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
# Fit the model
resnet_history = resnet_model.fit(train_data,
epochs=10,
steps_per_epoch=len(train_data),
validation_data=test_data,
validation_steps=len(test_data),
# Add TensorBoard callback to model (callbacks parameter takes a list)
callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub", # save experiment logs here
experiment_name="resnet50V2")]) # name of log
Prediction of Resnet_model
pred_and_plot( resnet_model, "Screenshot 2023-07-17 153241.png", class_names)
Resnet_history
histories = [ resnet_history]
plot_compare_history(histories)
# Fit the model
resnet_history_epochs = resnet_model.fit(train_data,
epochs=25,
steps_per_epoch=len(train_data),
validation_data=test_data,
validation_steps=len(test_data),
# Add TensorBoard callback to model (callbacks parameter takes a list)
callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub", # save experiment logs here
experiment_name="resnet50V2")]) # name of log
Download
resnet_model.save('resnet_model_25_epochs.h5')
# Note: because this model contains a TensorFlow Hub layer, reloading the .h5 file
# later requires load_model(..., custom_objects={'KerasLayer': hub.KerasLayer})
histories = [ resnet_history_epochs ]
plot_compare_history(histories)
Prediction of model
pred_and_plot( resnet_model, "Screenshot 2023-07-18 021441.png", class_names)
pred_and_plot( resnet_model, "test.png", class_names)
pred_and_plot( resnet_model, "1111.png", class_names)
pred_and_plot( resnet_model, "2222.png", class_names)
Author
Name : Farshid Hossain
Github : https://github.com/farshid101
Linkedin : https://www.linkedin.com/in/farshid-hossain-b67890218/