Hyperparameter Tuning: Grid vs Random Search

Grid and Random Search
Model Tuning
Hyperparameter Optimization
• All machine learning models have a set of hyperparameters, or arguments, that must be specified by the practitioner.
• Hyperparameters are parameters that cannot be learned from data and must be set before training begins.
• For example: in a random forest, hyperparameters include the number of decision trees in the forest; for a neural network, they include the learning rate and the number of hidden layers.
• The performance of a machine learning model relies heavily on finding the optimal set of hyperparameters.
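To make the idea concrete, here is a minimal sketch in scikit-learn: the random forest hyperparameters named above are passed to the constructor before training, while the model's internal parameters (the trees themselves) are learned only when `fit` is called.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Hyperparameters are chosen by the practitioner, up front:
model = RandomForestClassifier(
    n_estimators=100,  # number of decision trees in the forest
    max_depth=5,       # maximum depth of each tree
    random_state=0,
)

# Parameters (the fitted trees) are learned from data here:
X, y = load_iris(return_X_y=True)
model.fit(X, y)
```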
Cont’d
• The only way to find the best hyperparameters for your dataset is trial and error, which is the main idea behind hyperparameter optimization (tuning).
• Hyperparameter optimization searches through a range of candidate values to find the combination that achieves the best performance on a given dataset.
• The goal is to achieve high precision and accuracy.
• Two popular techniques for hyperparameter optimization are grid search and random search.
Grid Search
• When performing hyperparameter optimization, we first define a parameter space, or parameter grid, containing the candidate hyperparameter values that can be used to build the model.
• The grid consists of the selected hyperparameter names and their candidate values.
• Grid search arranges these hyperparameters in a matrix-like structure and trains the model on every combination of hyperparameter values.
• The combination with the best performance is then selected.
• Grid search becomes expensive as hyperparameters are added, because the number of combinations grows exponentially.
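A minimal grid search sketch using scikit-learn's `GridSearchCV` (the grid values below are illustrative, not prescribed): with 3 candidate values for `n_estimators` and 2 for `max_depth`, the model is cross-validated on all 3 × 2 = 6 combinations.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Parameter grid: hyperparameter names mapped to candidate values.
param_grid = {
    "n_estimators": [10, 50, 100],
    "max_depth": [3, None],
}

# Every combination in the grid is trained and scored via cross-validation.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,
)
search.fit(X, y)
print(search.best_params_)  # best-performing combination
```

Adding one more hyperparameter with, say, 5 candidate values would multiply the number of fits by 5, which is why the cost grows exponentially.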
Random Search
• Random search is a technique in which random combinations of the hyperparameters are used to find the best configuration for the model.
• To optimize with random search, the model is evaluated at a fixed number of randomly sampled configurations in the parameter space.
• The key difference from grid search: not all combinations are tested, and the ones that are tested are selected at random.
• Random search works best when only a few hyperparameters matter, since a good configuration can often be found in far fewer iterations than an exhaustive grid.
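A matching sketch with scikit-learn's `RandomizedSearchCV` (distributions below are illustrative assumptions): instead of enumerating all combinations, `n_iter` configurations are sampled at random from the parameter space, so the budget is fixed regardless of how large the space is.

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Parameter space: values can be lists or distributions to sample from.
param_distributions = {
    "n_estimators": randint(10, 200),  # sampled uniformly from [10, 200)
    "max_depth": [3, 5, None],
}

# Only n_iter=8 random configurations are trained, not the full space.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=8,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Note the trade-off: random search may miss the exact optimum that an exhaustive grid would find, but it covers a wider range of values per hyperparameter for the same number of fits.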
Find applications, advantages and
drawbacks of both algorithms
