
What is the function that maps the input variable to the output variable called? - Hypothesis


What is the process of dividing each feature by its range called? - Feature Scaling
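Feature scaling by range can be sketched in a few lines (a minimal illustration; the variable names are my own):

```python
# Feature scaling: divide each feature value by the feature's range
# (max - min) so all features end up on a comparable scale.
xs = [10.0, 20.0, 40.0, 50.0]
rng = max(xs) - min(xs)          # range = 40.0
scaled = [x / rng for x in xs]   # scaled values now span a width of 1
```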

How are the parameters updated during the Gradient Descent process? - Simultaneously
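The simultaneous update can be shown in a small sketch (the gradient values here are hypothetical placeholders, not computed from data):

```python
# Simultaneous update: compute all gradients from the *old* parameter
# values first, then assign both at once, so neither update sees a
# half-updated parameter.
theta0, theta1 = 1.0, 2.0
alpha = 0.1                       # learning rate
grad0, grad1 = 0.5, -0.5          # hypothetical gradient values
theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
```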

The cost function in linear regression is also called the squared error function. - TRUE
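The squared error cost for a single-feature hypothesis h(x) = theta0 + theta1 * x can be sketched as follows (a minimal illustration, not tied to any particular dataset):

```python
# Squared error cost for linear regression:
#   J(theta) = (1 / (2m)) * sum((h(x_i) - y_i)^2)
def cost(xs, ys, theta0, theta1):
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2
               for x, y in zip(xs, ys)) / (2 * m)
```

A hypothesis that fits the data exactly gives zero cost; any residual error makes the cost strictly positive.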


What is the process of subtracting the mean of each variable from its values called? - Mean Normalization
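Mean normalization is a one-liner in practice (a minimal sketch with made-up values):

```python
# Mean normalization: subtract the mean of the variable so the
# feature is centred around zero.
xs = [2.0, 4.0, 6.0]
mean = sum(xs) / len(xs)             # 4.0
normalized = [x - mean for x in xs]  # centred values sum to zero
```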
____________ controls the magnitude of a step taken during Gradient Descent. - Learning Rate
The objective function for linear regression is also known as the Cost Function. - TRUE

The result of scaling is a variable in the range [1, 10]. - FALSE


Output variables are known as feature variables. - FALSE
Input variables are also known as feature variables. - TRUE

For different parameters of the hypothesis function, we get the same hypothesis function. - FALSE
What is the learning technique in which the right answer is given for each example in the data called? - Supervised

Problems that predict real-valued outputs are called? - Regression

For ______________ the error is determined by the proportion of values misclassified by the model. - Classification
____________ is the line that separates y = 0 and y = 1 in a logistic function. - Decision Boundary
Where does the sigmoid function asymptote? - 0 and 1
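The asymptotes are easy to see numerically (a minimal sketch of the standard sigmoid):

```python
import math

# The sigmoid g(z) = 1 / (1 + e^{-z}) asymptotes at 0 as z -> -inf
# and at 1 as z -> +inf, crossing 0.5 exactly at z = 0.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))
```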
Problems where discrete-valued outputs are predicted are called? - Classification
Underfit data has a high variance. - FALSE

Overfitting and underfitting are applicable only to linear regression problems. - FALSE

Linear Regression is an optimal function that can be used for classification problems. - FALSE
High threshold values are good for the classification problem. - FALSE
Overfit data has a high bias. - FALSE

A lower decision boundary leads to false positives during classification. - TRUE
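Reading "lower decision boundary" as a lower classification threshold, the effect can be shown with a toy example (the scores and labels here are made up for illustration):

```python
# Lowering the classification threshold turns more borderline scores
# into predicted positives, which inflates false positives.
scores = [0.2, 0.4, 0.6, 0.9]   # hypothetical model outputs
labels = [0, 0, 1, 1]           # true classes

def false_positives(threshold):
    # Count examples predicted positive whose true class is 0.
    return sum(1 for s, y in zip(scores, labels)
               if s >= threshold and y == 0)
```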


Classification problems with just two classes are called binary classification problems. - TRUE
A suggested approach for evaluating the hypothesis is to split the data into training and test sets. - TRUE
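A minimal train/test split sketch (the 70/30 ratio is a common choice, not mandated by the source):

```python
# Hold out part of the data so the hypothesis is evaluated on
# examples it never saw during fitting.
data = list(range(10))           # toy dataset
split = int(0.7 * len(data))     # 70/30 split (a common convention)
train, test = data[:split], data[split:]
```

In practice the data should be shuffled before splitting so the test set is representative.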
Reducing the number of features can reduce overfitting - TRUE

What measures how far the predictions are from the actual value? - Bias
What measures the extent to which the predictions change between various realizations of the model? - Variance
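The two definitions can be illustrated numerically (the predictions below are hypothetical outputs from different realizations of a model):

```python
# Bias: how far the average prediction is from the true value.
# Variance: how much the predictions spread around their own mean.
truth = 10.0
predictions = [8.0, 9.0, 10.0, 9.0]   # hypothetical realizations

mean_pred = sum(predictions) / len(predictions)   # 9.0
bias = mean_pred - truth                          # average error
variance = sum((p - mean_pred) ** 2
               for p in predictions) / len(predictions)
```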
When an ML model has high bias, getting more training data will help improve the model. - FALSE
I have a scenario where my hypothesis fits my training set well but fails to generalize to the test set. What is this scenario called? - Overfitting
For an underfit data set, both the training error and the cross-validation error will be high. - TRUE
If you have a basket of different fruit varieties with prior information on the size, color, and shape of each fruit, which learning methodology is best applicable? - Supervised
Are the heuristics for rule learning and the heuristics for decision trees the same? - FALSE
What is the benefit of Naïve Bayes? - Requires less training data
What is the advantage of using an iterative algorithm like gradient descent? (select the best) - It works for nonlinear problems with no closed-form solution
