
Dataset Drive Link:

https://drive.google.com/drive/folders/1YrF9bKyERtN_iNnkPryOtWrwhFyey9mW

Decision Tree Classifier

Code Explanation:
1. Import Libraries:
o pandas as pd: Imports the pandas library for data manipulation (creating DataFrames).
o numpy as np: Imports the NumPy library for numerical operations.
o from sklearn.datasets import load_iris: Imports the load_iris function from scikit-learn to load the Iris dataset.
o from sklearn.model_selection import train_test_split: Imports the train_test_split function from scikit-learn to split data into training and testing sets.
o from sklearn.tree import DecisionTreeClassifier: Imports the DecisionTreeClassifier class from scikit-learn for building decision tree models.
o from sklearn import tree: Imports the tree module from scikit-learn for visualizing decision trees.
o import matplotlib.pyplot as plt: Imports the matplotlib.pyplot library for creating plots.
o import seaborn as sns: Imports the seaborn library for creating heatmap visualizations (confusion matrix).
2. Load Iris Dataset:
o data = load_iris(): Loads the Iris dataset from scikit-learn. It provides features (sepal length, sepal width, petal length, petal width) and target labels (species of Iris flower).
3. Create DataFrame:
o df = pd.DataFrame(data.data, columns=data.feature_names): Converts the data features into a pandas DataFrame.
- data.data: Contains the numerical feature values.
- data.feature_names: Contains the names of the features (e.g., 'sepal length (cm)', 'sepal width (cm)', etc.).
o df['Species'] = data.target: Adds a new column named "Species" to the DataFrame containing the target labels.

4. Encode Species Names:
o target = np.unique(data.target): Gets the unique target values (numerical species labels).
o target_names = np.unique(data.target_names): Gets the unique species names as strings ('setosa', 'versicolor', 'virginica').
o targets = dict(zip(target, target_names)): Creates a dictionary that maps numerical labels to species names.
o df['Species'] = df['Species'].replace(targets): Replaces the numerical labels in the "Species" column with the corresponding species names using the dictionary.
5. Extract Features and Target:
o x = df.drop(columns="Species"): Creates a new DataFrame x containing all features except "Species".
o y = df["Species"]: Selects the "Species" column as the target variable y.
o feature_names = x.columns: Stores the feature names (a pandas Index).
o labels = y.unique(): Gets the unique species names as an array.
6. Split Data into Training and Testing Sets:
o X_train, test_x, y_train, test_lab = train_test_split(x, y, test_size=0.4, random_state=42): Splits the data into training and testing sets using train_test_split.
- test_size=0.4: Specifies that 40% of the data will be used for testing.
- random_state=42: Sets a random seed for reproducibility (ensuring the same split every time you run the code).
7. Train Decision Tree Classifier:
o clf = DecisionTreeClassifier(max_depth=3, random_state=42): Creates a DecisionTreeClassifier object with a maximum depth of 3 (meaning the tree will not grow beyond 3 levels) and the same random seed for reproducibility.
o clf.fit(X_train, y_train): Trains the decision tree classifier on the training data (X_train and y_train).
8. Visualize Decision Tree:
o plt.figure(figsize=(30, 10), facecolor='k'): Creates a large figure for the decision tree plot with a black background.
o a = tree.plot_tree(clf, feature_names=feature_names, class_names=labels, rounded=True, filled=True, fontsize=14): Generates the decision tree plot using tree.plot_tree, specifying features, class labels, and styling options (rounded nodes, filled boxes, larger font size), and stores the returned plot objects in a.
9. Evaluate with a Confusion Matrix:
o test_pred_decision_tree = clf.predict(test_x): Predicts species for the test set.
o metrics.confusion_matrix(test_lab, test_pred_decision_tree): Compares true and predicted labels; the result is plotted as a seaborn heatmap in the code below.
Code:
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
data = load_iris()
#convert to a dataframe
df = pd.DataFrame(data.data, columns = data.feature_names)
#create the species column
df['Species'] = data.target
#replace this with the actual names
target = np.unique(data.target)
target_names = np.unique(data.target_names)
targets = dict(zip(target, target_names))
df['Species'] = df['Species'].replace(targets)
#extract datasets
x = df.drop(columns="Species")

y = df["Species"]
feature_names = x.columns
labels = y.unique()
#split the dataset
from sklearn.model_selection import train_test_split
X_train, test_x, y_train, test_lab = train_test_split(x,y,test_size = 0.4,random_state = 42)
# Importing Decision Tree Classifier
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth =3, random_state = 42)
clf.fit(X_train, y_train)
# Tree Diagram
from sklearn import tree
import matplotlib.pyplot as plt
plt.figure(figsize=(30,10), facecolor ='k')
a = tree.plot_tree(clf, feature_names=feature_names, class_names=labels, rounded=True, filled=True, fontsize=14)
plt.show()
# Predict Class From Test Values
test_pred_decision_tree = clf.predict(test_x)
from sklearn import metrics
import seaborn as sns
import matplotlib.pyplot as plt
confusion_matrix = metrics.confusion_matrix(test_lab,test_pred_decision_tree)
matrix_df = pd.DataFrame(confusion_matrix)
sns.set(font_scale=1.3)
plt.figure(figsize=(10,7))
# draw the heatmap on the current figure (creating the axes first and then a new
# figure would leave the heatmap on the wrong figure)
ax = sns.heatmap(matrix_df, annot=True, fmt="g", cmap="magma")
ax.set_title('Confusion Matrix - Decision Tree')
ax.set_xlabel("Predicted label", fontsize=15)
ax.set_ylabel("True label", fontsize=15)
ax.set_xticklabels(labels)
ax.set_yticklabels(labels, rotation=0)
plt.show()
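
For a single-number check to accompany the confusion matrix, a minimal sketch using scikit-learn's accuracy_score (reusing the variables above):

from sklearn import metrics
accuracy = metrics.accuracy_score(test_lab, test_pred_decision_tree)
print("Test accuracy:", accuracy)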
1. TensorFlow and Keras

Explanation of the Improved Code:


1. Import Essential Libraries:
Python
import tensorflow as tf
from tensorflow.keras import Sequential  # Sequential model for building layered architectures
from tensorflow.keras.layers import Dense, Input, Flatten  # Layers for building the network
import numpy as np  # NumPy for potential data manipulation (optional)
- tensorflow and tensorflow.keras are essential for building and training neural networks.
- Sequential is the common way to create layered neural networks in Keras.
- Dense represents fully connected layers, the workhorses of neural networks.
- Input defines the input layer, specifying the input shape for the model.
- Flatten transforms 2D image data into a 1D vector suitable for dense layers.
- numpy can be useful for preprocessing or analyzing data (optional in this case).
2. Load the MNIST Dataset (using Keras' built-in function):
Python
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
- This line directly loads the MNIST dataset using TensorFlow Keras' load_data function.
- The data is split into training and testing sets, providing images (train_images, test_images) and corresponding labels (train_labels, test_labels).

3. Preprocess the Data (Optional, but Recommended for MNIST):
Python
# Normalize pixel values to the range [0, 1]
train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0

# Reshape images to add a channel dimension (MNIST images are grayscale)
train_images = train_images.reshape((train_images.shape[0], 28, 28, 1))
test_images = test_images.reshape((test_images.shape[0], 28, 28, 1))
- Normalization: It's common practice to normalize image pixel values to the range [0, 1] for better training performance.
- Reshaping: MNIST images are grayscale (single channel). Adding a channel dimension (1) matches the input shape the model below expects, and the shape convolutional layers require if you plan to use them.
4. Build the Neural Network Model:
Python
model = Sequential([
    Input(shape=(28, 28, 1)),  # Input layer for 28x28 grayscale images (channel dimension 1)
    Flatten(),  # Flatten 2D images into 1D vectors for dense layers
    Dense(units=84, activation="relu"),  # First dense layer with 84 neurons and ReLU activation
    Dense(units=10, activation="softmax")  # Output layer with 10 neurons and softmax activation for classification
])
- Input Layer: Defines the input shape of the model as 28x28 pixels with one channel (grayscale).
- Flatten Layer: Transforms the 2D image data (28x28) into a 1D vector suitable for dense layers.
- Dense Layers:
o The first hidden layer has 84 neurons and uses the ReLU (Rectified Linear Unit) activation function for non-linearity.
o The output layer has 10 neurons, one for each digit class (0-9), and uses the softmax activation function to produce probabilities for each class.
5. Model Summary:
Python
model.summary()
- Prints a summary of the model's architecture, including the number of layers, parameters, and input/output shapes.
o This helps you understand the model's complexity and identify potential bottlenecks.
6. Compile the Model (Specify Optimizer, Loss Function, and Metrics):
Python
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # expects probabilities, not logits
              metrics=["accuracy"])
- Optimizer: adam is a widely used optimizer that efficiently updates model weights during training.
- Loss Function: SparseCategoricalCrossentropy is suitable for multi-class classification tasks like MNIST with integer labels. from_logits=False tells the loss function to expect class probabilities (not logits).
- Metrics: accuracy reports the fraction of correctly classified samples during training and evaluation.
Code:
#download the data from the Keras datasets module
import tensorflow as tf
import tensorflow.keras as keras
(trainX, trainY), (testX, testY) = keras.datasets.mnist.load_data()
#build your model
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input, Flatten
model = Sequential([
    Input(shape=(28, 28, 1)),
    Flatten(),
    Dense(units=84, activation="relu"),
    Dense(units=10, activation="softmax"),
])
model.summary()
#compile your model
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=["accuracy"])

2. Keras

Code Breakdown:
Python
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Print the shapes of training data
print(x_train.shape, y_train.shape)

# Reshape the training and testing data for CNN compatibility
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)

# Define the input shape for the CNN model
input_shape = (28, 28, 1)

# Set the batch size for training the model (adjustable)
batch_size = 128
Explanation:
1. Import Libraries:
o keras: The core deep learning library used for building and training the model.
o from keras.datasets import mnist: Imports the MNIST dataset specifically.
o from keras.models import Sequential: Used to create a sequential neural network model.
o from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D: Imports various neural network layer types (dense, dropout, flatten, convolutional, and max pooling).
o from keras import backend as K: Provides access to backend-specific functions (less commonly used in modern Keras).
2. Load MNIST Dataset:
o (x_train, y_train), (x_test, y_test) = mnist.load_data(): Loads the MNIST dataset into training (x_train, y_train) and testing (x_test, y_test) sets.
- x_train: Training images (28x28 pixel grayscale handwritten digits).
- y_train: Training labels (0-9 representing the digit in each image).
- x_test: Testing images (same format as training).
- y_test: Testing labels (same format as training).
3. Print Data Shapes:
o print(x_train.shape, y_train.shape): Prints the dimensions of the training data.
- The typical output is (60000, 28, 28) for images and (60000,) for labels (60,000 samples).
4. Reshape Data for CNN:
o x_train = x_train.reshape(x_train.shape[0], 28, 28, 1): Reshapes the training data to fit the expected input format of a convolutional neural network (CNN).
- CNNs typically require 4 dimensions: (samples, height, width, channels).
- This step adds an extra dimension of 1 to represent the grayscale channel (since MNIST images are grayscale).
o x_test = x_test.reshape(x_test.shape[0], 28, 28, 1): Performs the same reshaping for the testing data.
5. Define Input Shape:
o input_shape = (28, 28, 1): Explicitly defines the input shape for the CNN model. This helps make the code more readable and maintainable.
6. Set Batch Size (Optional):
o batch_size = 128: Sets the batch size for training the model.
- A batch is a subset of the training data presented to the model during each training iteration.
- This value can be adjusted based on hardware limitations and desired training speed. Larger batch sizes tend to train faster but require more memory.
Next Steps:
- Build the CNN Model: This code provides the foundation for building a convolutional neural network to classify handwritten digits in the MNIST dataset. You'll typically define the CNN architecture using Sequential and add layers like Conv2D, MaxPooling2D, Flatten, and Dense to create a multi-layered network.
- Compile the Model: After defining the model architecture, you'll compile it by specifying the optimizer (e.g., adam), loss function (e.g., sparse_categorical_crossentropy for integer labels), and metrics (e.g., accuracy). A sketch of both steps follows the code below.
Code:
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
input_shape = (28, 28, 1)
batch_size = 128
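
A minimal sketch of the next steps described above (the layer sizes, dropout rate, and optimizer are illustrative choices, not a prescribed architecture):

model = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=input_shape),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),  # regularization to reduce overfitting
    Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels, no one-hot needed
              metrics=["accuracy"])
model.summary()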

3. SciPy

1. Loading and Saving Images:
Python
from scipy import misc
import imageio
import matplotlib.pyplot as plt

# Read the sample raccoon image
face = misc.face()

# Save the image as 'raccoon.png'
imageio.imsave('raccoon.png', face)

# Display the image using Matplotlib
plt.imshow(face)
plt.show()

# Load the saved image
img = imageio.imread('raccoon.png')

# Print image shape (height, width, channels)
print(img.shape)

# Print image data type (e.g., uint8 for unsigned 8-bit integers)
print(img.dtype)

# Display the loaded image
plt.imshow(img)
plt.show()
Explanation:
- SciPy's misc.face() function loads a sample image (a raccoon face). Note that recent SciPy releases have removed misc.face() in favor of scipy.datasets.face().
- imageio.imsave() saves the image as a PNG file.
- Matplotlib's plt.imshow() displays the image, and plt.show() makes it visible.
- imageio.imread() loads the saved image.
- Printing img.shape and img.dtype provides information about the image's dimensions and data type.
2. Reading Raw Data and Extracting Image Information:
Python
# Save the image to a raw file (without headers)
face.tofile("raccoon.raw")

import numpy as np

# Load the raw data as a flat NumPy array with type uint8
img = np.fromfile('raccoon.raw', dtype=np.uint8)

# Print the shape of the loaded array (one-dimensional)
print(img.shape)

# Get the maximum, minimum, and mean pixel intensity values
img = misc.face()
print(img.max())
print(img.min())
print(img.mean())
Explanation:
- face.tofile() saves the raccoon image data in raw format (without headers).
- NumPy's np.fromfile() loads the raw data into a NumPy array, specifying the data type as uint8 for unsigned 8-bit integers (common for image data).
- Because a raw file stores no shape metadata, img.shape here is one-dimensional (the flattened height x width x channels).
- img.max(), img.min(), and img.mean() provide insights into the intensity range of the image's pixels.
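
Since the raw file carries no shape information, the flat array can be reshaped back using the known dimensions; a small sketch (the sample face is 768x1024 with 3 channels):

img = np.fromfile('raccoon.raw', dtype=np.uint8)
img = img.reshape(768, 1024, 3)  # height, width, channels of misc.face()
plt.imshow(img)
plt.show()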
3. Grayscale Conversion, Cropping, and Flipping:
Python
# Load the image in grayscale
img = misc.face(gray=True)

# Get image dimensions (height and width)
x, y = img.shape

# Crop a rectangular region from the center
crop = img[x // 3: -x // 8, y // 3: -y // 8]

# Display the cropped image
plt.imshow(crop)
plt.show()

# Load the image
img = misc.face()

# Flip the image vertically
flip = np.flipud(img)

# Display the flipped image
plt.imshow(flip)
plt.show()
Explanation:
- misc.face(gray=True) loads the image in grayscale mode.
- img.shape retrieves the image's height (x) and width (y).
- Slicing is used to crop a rectangular area from the center of the image: [start_row:end_row, start_column:end_column].
o x // 3 and y // 3 calculate one-third of the image height and width, respectively.
o Negative indexing (-x // 8) is used to exclude a portion from the end (right and bottom).
- np.flipud(img) flips the image vertically (upside down).
4. Image Rotation and Blurring:
Python
from scipy import misc, ndimage

# Load the image
img = misc.face()

# Rotate the image by 30 degrees
rotate = ndimage.rotate(img, 30)

# Display the rotated image
plt.imshow(rotate)
plt.show()

# Load the image (grayscale, as float for filtering)
img = misc.face(gray=True).astype(float)

# Apply Gaussian blur with a sigma of 5
blur = ndimage.gaussian_filter(img, 5)

# Display the blurred image
plt.imshow(blur)
plt.show()

Code:
from scipy import misc
import imageio
import matplotlib.pyplot as plt
# read the sample raccoon face
face = misc.face()
# save the image
imageio.imsave('raccoon.png', face)
plt.imshow(face)
plt.show()
img = imageio.imread('raccoon.png')
print(img.shape)
print(img.dtype)
plt.imshow(img)
plt.show()
# read the raccoon face and dump it to a headerless raw file
face = misc.face()
face.tofile("raccoon.raw")
import numpy as np
img = np.fromfile('raccoon.raw', dtype=np.uint8)
print(img.shape)
img = misc.face()
print(img.max())
print(img.min())
print(img.mean())
# grayscaling the image
img = misc.face(gray=True)
x, y = img.shape
# cropping the image
crop = img[x//3: -x//8, y//3: -y//8]
plt.imshow(crop)
plt.show()
img = misc.face()
flip = np.flipud(img)
plt.imshow(flip)
plt.show()
from scipy import misc, ndimage
import matplotlib.pyplot as plt
img = misc.face()
rotate = ndimage.rotate(img, 30)
plt.imshow(rotate)
plt.show()
img = misc.face()
blur_G = ndimage.gaussian_filter(img, sigma=7)
plt.imshow(blur_G)
plt.show()
img = misc.face(gray=True).astype(float)
blur = ndimage.gaussian_filter(img, 5)
# showing the blurred image
plt.imshow(blur)
plt.show()
# unsharp masking: sharpen by adding back the detail lost to smoothing
blur_G = ndimage.gaussian_filter(blur, 1)
alpha = 30
sharp = blur + alpha * (blur - blur_G)
# showing the sharpened image
plt.imshow(sharp)
plt.show()
# add random noise to a cropped grayscale patch
img = misc.face(gray=True).astype(float)
img = img[40:100, 30:100]
noise_img = img + 0.9 * img.std() * np.random.random(img.shape)
plt.imshow(noise_img)
plt.show()
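
The sharpening step near the end of this block is unsharp masking: blur_G is a lightly smoothed copy of blur, so (blur - blur_G) isolates fine detail, and sharp = blur + alpha * (blur - blur_G) adds that detail back scaled by alpha; larger alpha values exaggerate edges more strongly.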
4. Keras

1. Library Imports (Both Parts):
- TensorFlow and NumPy (tf, np): The foundation for building and manipulating machine learning models. NumPy provides numerical computations, while TensorFlow handles model creation, training, and evaluation.
2. Data Preparation (Part 1):
- Generate Random Dataset (Colors):
o np.random.seed(42): Sets a random seed for reproducibility (same data generation on every run).
o num_samples = 1000: Defines the number of data points (color samples).
o colors = np.random.rand(num_samples, 3): Creates a random NumPy array with dimensions (1000, 3) containing RGB color values between 0 and 1 (representing color intensity).
- Assign Labels (Red or White):
o labels = (colors[:, 0] > 0.5).astype(int): Generates labels based on the red channel. If the red value is greater than 0.5 (the threshold), the sample is labeled 1 (red); otherwise it is labeled 0 (white).
3. Data Preparation (Part 2):
- Image Loading and Preprocessing:
o Reads image files from a directory structure containing subdirectories for each plant disease class.
o Converts loaded images into NumPy arrays.
o Resizes images to a specific size (256x256 pixels in this case).
- Label Assignment: Assigns labels (0, 1, 2) based on the subdirectory the image belongs to (e.g., 0 for the Corn disease class).
- Data Splitting: Splits the data into training, testing, and validation sets (common for model training and evaluation).
- Normalization: Pixel values are normalized by dividing each value by 255 (since they typically range from 0 to 255).
- Reshaping: The data is reshaped to a format suitable for CNNs (4D tensors with dimensions for number of samples, image height, width, and channels (RGB)).
4. Model Building (Both Parts):
- Sequential Model:
o model = Sequential(): Creates a sequential model, where layers are added one after another. This is a common structure for many neural networks.
5. Model Layers (Part 1):
- Dense Layers (Fully Connected):
o These layers are the core building blocks of artificial neural networks. They perform linear transformations on the input data and introduce non-linearity with activation functions.
o model.add(Dense(64, activation='relu', input_shape=(3,))):
- Adds the first Dense layer with 64 neurons and the ReLU (Rectified Linear Unit) activation function. ReLU helps introduce non-linearity, allowing the model to learn more complex patterns.
- input_shape=(3,) specifies the input shape as a 3-element vector representing the RGB color values.
o model.add(Dense(32, activation='relu')): Adds another Dense layer with 32 neurons and ReLU activation.
6. Model Layers (Part 2):
- Convolutional Layers:
o These layers are specifically designed for image recognition tasks. They can automatically learn spatial features from the input images.
o model.add(Conv2D(32, (3,3), padding="same", input_shape=(256, 256, 3), activation="relu")):
- Adds the first convolutional layer with 32 filters of size 3x3. Filters are like kernels that slide across the image, extracting features.
- padding="same" ensures the output remains the same size as the input, preserving spatial information.
- input_shape=(256, 256, 3) specifies the per-image input shape (256x256 pixels with 3 color channels).
- activation="relu" applies the ReLU activation function for non-linearity.
o Additional convolutional layers with different filter sizes and pooling layers (MaxPooling2D) are used for further feature extraction and reducing dimensionality.
- Flatten Layer:
o model.add(Flatten()): Flattens the multi-dimensional output of the convolutional layers into a single dimension suitable for feeding into dense layers. Dense layers typically require a 1D input vector.
Code:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Generate an imaginary dataset with random RGB values
np.random.seed(42)  # For reproducibility
num_samples = 1000
colors = np.random.rand(num_samples, 3)  # RGB values between 0 and 1
# Labels: 1 for red (R > 0.5), 0 for white
labels = (colors[:, 0] > 0.5).astype(int)
# Split the dataset into training and testing sets
split_ratio = 0.8
split_index = int(num_samples * split_ratio)
train_colors, test_colors = colors[:split_index], colors[split_index:]
train_labels, test_labels = labels[:split_index], labels[split_index:]
# Create the Keras model
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(3,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # Output layer with 1 unit for binary classification
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(train_colors, train_labels, epochs=100, batch_size=32, validation_split=0.2)
# Evaluate the model on the test set
loss, accuracy = model.evaluate(test_colors, test_labels)
print("Test accuracy:", accuracy)
# Make predictions on an example input color
input_color = np.array([[0.1, 0.1, 0.1]])  # Example input color with R < 0.5
predicted_probability = model.predict(input_color)[0][0]
predicted_label = 'Red' if predicted_probability >= 0.5 else 'White'  # label 1 = red
print("Predicted color:", predicted_label)

5. TensorFlow
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.image import imread
import cv2
import random
import os
from os import listdir
from PIL import Image
from sklearn.preprocessing import label_binarize, LabelBinarizer
from keras.preprocessing import image
from tensorflow.keras.utils import img_to_array, array_to_img
from tensorflow.keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Flatten, Dropout, Dense
from sklearn.model_selection import train_test_split
from keras.models import model_from_json
from tensorflow.keras.utils import to_categorical
!ls "/content/drive/MyDrive/Plant_images"
plt.figure(figsize=(12,12))
path = "/content/drive/MyDrive/Plant_images/Potato___Early_blight"

# show 16 random sample images from the class folder
for i in range(1, 17):
    plt.subplot(4, 4, i)
    plt.tight_layout()
    rand_img = imread(path + '/' + random.choice(sorted(os.listdir(path))))
    plt.imshow(rand_img)
    plt.xlabel(rand_img.shape[1], fontsize=10)  # width of image
    plt.ylabel(rand_img.shape[0], fontsize=10)  # height of image
# Converting images to arrays
def convert_image_to_array(image_dir):
    try:
        image = cv2.imread(image_dir)
        if image is not None:
            image = cv2.resize(image, (256, 256))
            #image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            return img_to_array(image)
        else:
            return np.array([])
    except Exception as e:
        print(f"Error : {e}")
        return None
dir = "/content/drive/MyDrive/Plant_images"
root_dir = listdir(dir)
image_list, label_list = [], []
all_labels = ['Corn-Common_rust', 'Potato-Early_blight', 'Tomato-Bacterial_spot']
binary_labels = [0,1,2]
temp = -1
# Reading and converting each image to a numpy array
for directory in root_dir:
    plant_image_list = listdir(f"{dir}/{directory}")
    temp += 1
    for files in plant_image_list:
        image_path = f"{dir}/{directory}/{files}"
        image_list.append(convert_image_to_array(image_path))
        label_list.append(binary_labels[temp])
# Visualize the number of classes count
label_counts = pd.DataFrame(label_list).value_counts()
label_counts.head()
#Next we will observe the shape of the image.
image_list[0].shape

label_list = np.array(label_list)
label_list.shape
x_train, x_test, y_train, y_test = train_test_split(image_list, label_list, test_size=0.2, random_state=10)
# Normalize the images: pixel values range from 0 to 255, so divide each pixel by 255
x_train = np.array(x_train, dtype=np.float16) / 255.0
x_test = np.array(x_test, dtype=np.float16) / 255.0
x_train = x_train.reshape(-1, 256, 256, 3)
x_test = x_test.reshape(-1, 256, 256, 3)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
model = Sequential()
model.add(Conv2D(32, (3,3), padding="same", input_shape=(256,256,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Conv2D(16, (3,3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(8, activation="relu"))
model.add(Dense(3, activation="softmax"))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer=Adam(0.0001), metrics=['accuracy'])
# Splitting the training set into training and validation sets
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.2)
# Training the model
epochs = 50
batch_size = 128
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_val, y_val))
#Plot the training history
plt.figure(figsize=(12,5))
plt.plot(history.history['accuracy'], color='r')
plt.plot(history.history['val_accuracy'], color='b')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['train','val'])
plt.show()
y_pred = model.predict(x_test)
print("[INFO] Calculating model accuracy")
scores = model.evaluate(x_test, y_test)
print(f"Test Accuracy:{scores[1]*100}")
img = array_to_img(x_test[10])
img
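
To relate a single softmax prediction back to a class name, a small sketch (index 10 is an arbitrary test sample; all_labels is the list defined earlier):

predicted_class = np.argmax(y_pred[10])
print("Predicted label:", all_labels[predicted_class])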
