
1. Batch size: In machine learning, batch size is the number of training
examples processed in one forward/backward pass of the neural network,
i.e. in each iteration of the training process. A larger batch size can
speed up training, but it requires more memory and often generalizes less
well than training with smaller batches.
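For example, the number of iterations in one epoch follows directly from the batch size. A minimal sketch in plain Python (the helper name is illustrative):

```python
import math

def iterations_per_epoch(num_samples, batch_size):
    # One iteration processes one batch; the final batch may be partial,
    # so round up.
    return math.ceil(num_samples / batch_size)
```

With 1,000 training examples and a batch size of 32, each epoch performs 32 gradient updates (31 full batches plus one partial batch).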
2. Recall: In binary classification, recall is the fraction of true positive predictions
out of all actual positive instances. It is the ability of a model to correctly
identify all relevant instances. A high recall means that the model is identifying
most of the positive instances, while a low recall means that the model is
missing many positive instances.
3. Precision: In binary classification, precision is the fraction of true positive
predictions out of all positive predictions made by the model. It is the ability
of a model to only identify relevant instances. A high precision means that the
model is only identifying relevant instances, while a low precision means that
the model is identifying many irrelevant instances.
4. Accuracy: In binary classification, accuracy is the fraction of correct predictions
made by the model out of all predictions. It is the overall ability of the model
to correctly classify instances. A high accuracy means that the model is
correctly classifying most instances, while a low accuracy means that the
model is making many incorrect predictions.
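Recall, precision, and accuracy can all be computed from the four confusion-matrix counts (true positives, false negatives, false positives, true negatives). A minimal sketch in plain Python (function names are illustrative):

```python
def recall(tp, fn):
    # Fraction of actual positives that the model found.
    return tp / (tp + fn)

def precision(tp, fp):
    # Fraction of positive predictions that were correct.
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that were correct.
    return (tp + tn) / (tp + tn + fp + fn)
```

Note that on imbalanced data a model can score high accuracy while having poor recall, which is why these metrics are usually reported together.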
5. Loss: In machine learning, loss is a measure of how well a model is able to
predict the correct output. It is a mathematical function that calculates the
difference between the predicted output and the actual output. The goal of
the training process is to minimize the loss.
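As an illustration, mean squared error is one common loss function for regression; a minimal plain-Python version, not tied to any particular framework:

```python
def mse(y_true, y_pred):
    # Average squared difference between targets and predictions;
    # larger errors are penalized quadratically.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

Classification tasks typically use a crossentropy loss instead, as described below.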
6. Validation accuracy: In machine learning, validation accuracy is the accuracy of
the model on a validation set. During the training process, a portion of the
data is set aside for validation purposes, and the model's performance on this
validation set is monitored to ensure that it is not overfitting to the training
data.
7. Validation loss: In machine learning, validation loss is the loss of the model on
a validation set. During the training process, a portion of the data is set aside
for validation purposes, and the model's loss on this validation set is
monitored to ensure that it is not overfitting to the training data.
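A common way to obtain such a validation set is to hold out a fixed fraction of the data before training. A minimal sketch (the helper name and the 20% default are illustrative):

```python
def train_val_split(data, val_fraction=0.2):
    # Hold out the last `val_fraction` of the samples for validation;
    # the model is trained only on the first portion.
    n_train = len(data) - int(len(data) * val_fraction)
    return data[:n_train], data[n_train:]
```

In practice the data is usually shuffled before splitting so that the validation set is representative of the whole dataset.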
8. Callbacks in TensorFlow: Callbacks are functions that can be passed to
TensorFlow during training to perform certain actions at specific points in the
training process. For example, a callback can be used to save the model
weights after each epoch, or to stop training early if the validation loss stops
improving.
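The early-stopping behaviour described above can be sketched in plain Python. This mirrors the idea behind `tf.keras.callbacks.EarlyStopping` but is a simplified illustration, not the TensorFlow implementation:

```python
class EarlyStopping:
    # Stops training when the monitored validation loss fails to
    # improve for `patience` consecutive epochs.
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def on_epoch_end(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience
```

In TensorFlow itself, such callbacks are passed via the `callbacks` argument of `model.fit(...)`.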
9. Binary crossentropy and categorical crossentropy difference: Binary
crossentropy is a loss function used for binary classification tasks, where
the model predicts between two classes. Categorical crossentropy is a loss
function used for multi-class classification tasks, where the model predicts
between more than two classes.
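Both losses can be written out for a single example. A minimal sketch using natural logarithms (function names are illustrative):

```python
import math

def binary_crossentropy(y_true, p):
    # y_true is 0 or 1; p is the predicted probability of class 1.
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

def categorical_crossentropy(y_true, probs):
    # y_true is a one-hot list; probs is a predicted distribution
    # over the classes. Only the true class contributes to the loss.
    return -sum(t * math.log(p) for t, p in zip(y_true, probs) if t > 0)
```

In both cases the loss is zero only when the model assigns probability 1 to the correct class, and it grows without bound as that probability approaches 0.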
10. Sigmoid: Sigmoid is a mathematical function that is commonly used in
machine learning as an activation function for neural networks. It maps any
input value to a value between 0 and 1, which can be interpreted as a
probability. It is often used in binary classification tasks to convert the output
of a neural network to a probability.
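The sigmoid function itself is a one-liner:

```python
import math

def sigmoid(x):
    # Maps any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))
```

An input of 0 maps to exactly 0.5, large positive inputs approach 1, and large negative inputs approach 0, which is what makes the output usable as a class probability.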
11. Overfitting: Overfitting occurs when a machine learning model fits the
training data too closely and fails to generalize to new data. This
can occur when the model is too complex or when there is not enough
training data. Signs of overfitting include a high training accuracy but a low
validation accuracy, or a low training loss but a high validation loss.
Techniques to prevent overfitting include using regularization, dropout, or
early stopping.
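Of the techniques mentioned, dropout is easy to sketch. This illustrates the common "inverted dropout" variant as a simplified plain-Python function, not a framework API:

```python
import random

def dropout(values, rate, training=True):
    # Inverted dropout: during training, randomly zero a fraction
    # `rate` of the activations and scale the survivors by 1/keep
    # so the expected activation is unchanged. At inference time
    # the inputs pass through untouched.
    if not training or rate == 0.0:
        return list(values)
    keep = 1.0 - rate
    return [v / keep if random.random() < keep else 0.0 for v in values]
```

Randomly dropping units prevents the network from relying too heavily on any single activation, which reduces overfitting.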
