
Weekly Quiz 2 (ML)


Type : Graded Quiz

Attempts : 1/1

Questions : 8

Time : 20m

Due Date : Jul 05, 11:59 PM

Your Score : 15.00/15


Attempt History

Date Attempt Marks

Jun 28, 4:39 PM 1 15

Question No: 1 Correct Answer

Which of the following is/are true about weak learners used in an ensemble model?

1. They have low variance and they don't usually over-fit
2. They have high bias, so they cannot solve hard learning problems
3. They have high variance and they don't usually over-fit


2 and 3

1 and 2 You Selected

1 and 3

None of these

Weak learners are only confident about a particular part of the problem, so they usually don't overfit; in other words, weak learners have low variance and high bias.
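
For illustration, a minimal sketch of the classic weak learner, a depth-1 decision stump, assuming the rpart package is installed:

# A decision stump: one split only. It is too simple to chase noise in the
# training data (low variance) but also too simple to solve hard problems
# (high bias).
library(rpart)

data(iris)
stump <- rpart(Species ~ ., data = iris,
               control = rpart.control(maxdepth = 1))
print(stump)  # a single split on one variable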

Question No: 2 Correct Answer

In an election, N candidates are competing against each other and people are voting for
one of the candidates. Voters don't communicate with each other while casting their votes.

Which of the following ensemble methods works similarly to the above-discussed election
procedure?

Hint: the voters are like the base models of an ensemble method

A Or B

Boosting

None of the mentioned

Bagging You Selected

In the bagging process, you take multiple random samples from the population, build a
CART model on each random sample, and then take the count of the responses (a majority
vote) to arrive at the predictive score.
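
As a hedged sketch of that process in base R, assuming rpart as the CART base model (any classifier would do):

# Bagging by hand: each tree trains on its own bootstrap sample and then
# votes independently, like the voters in the election analogy.
library(rpart)

set.seed(42)
data(iris)
train <- iris[sample(nrow(iris), 100), ]

models <- lapply(1:25, function(i) {
  boot <- train[sample(nrow(train), replace = TRUE), ]  # random sample
  rpart(Species ~ ., data = boot)                       # CART on the sample
})

# Count the responses: each model predicts, and the majority vote wins.
votes <- sapply(models, function(m)
  as.character(predict(m, iris[1, ], type = "class")))
names(which.max(table(votes)))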


Question No: 3 Correct Answer

Models with High Bias and Low Variance are inconsistent but accurate on average

FALSE You Selected

TRUE

High-bias, low-variance algorithms train models that are consistent but inaccurate on
average. High-variance, low-bias algorithms train models that are accurate on average but
inconsistent.
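
A short base-R simulation of the first statement: a straight line (high bias, low variance) fit to noisy sine data barely moves across resamples, yet sits far from the truth:

set.seed(1)
x <- seq(0, 2 * pi, length.out = 50)

# Refit a linear model on 200 fresh noisy samples; predict at x = pi/2.
preds <- replicate(200, {
  y <- sin(x) + rnorm(50, sd = 0.3)
  predict(lm(y ~ x), newdata = data.frame(x = pi / 2))
})

sd(preds)                  # small: predictions are consistent (low variance)
mean(preds) - sin(pi / 2)  # far from 0: inaccurate on average (high bias)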

Question No: 4 Correct Answer

If the model has a very high bias, the model is more likely to

Overfit

Underfit You Selected

In machine learning terminology, underfitting means that a model is too general, leading to
high bias.
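
An extreme example in base R: an intercept-only model ignores the predictor entirely, so it is too general to capture even a simple linear trend:

set.seed(2)
x <- runif(100)
y <- 3 * x + rnorm(100, sd = 0.1)

underfit <- lm(y ~ 1)  # predicts mean(y) everywhere: underfitting
goodfit  <- lm(y ~ x)

# The underfit model's training error is far larger; that gap is its bias.
c(mean(resid(underfit)^2), mean(resid(goodfit)^2))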

Question No: 5 Correct Answer

What type of boosting involves the following three elements?

1. A loss function to be optimized.
2. A weak learner to make predictions.
3. An additive model to add weak learners to minimize the loss function.


Adaptive Boosting

Extreme Gradient Boosting

Gradient Boosting You Selected

Gradient Boosting for regression builds an additive model in a forward stage-wise fashion;
it allows for the optimization of arbitrary differentiable loss functions. The objective is to
minimize the loss of the model by adding weak learners using a gradient-descent-like
procedure.
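
A minimal sketch with the gbm package, assuming it is installed, showing how the three elements map onto the call: distribution chooses the loss, interaction.depth = 1 makes each weak learner a stump, and n.trees additive stages minimize the loss:

library(gbm)

set.seed(7)
n <- 200
df <- data.frame(x1 = runif(n), x2 = runif(n))
df$y <- sin(4 * df$x1) + df$x2 + rnorm(n, sd = 0.2)

fit <- gbm(y ~ x1 + x2, data = df,
           distribution = "gaussian",  # squared-error loss to be optimized
           n.trees = 500,              # number of weak learners added
           interaction.depth = 1,      # depth-1 trees as weak learners
           shrinkage = 0.05)           # step size of each additive stage

head(predict(fit, df, n.trees = 500))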

Question No: 6 Correct Answer

In terms of the bias-variance trade-off, which of the following is/are substantially more
harmful to the test error than the training error?

Risk

Loss

Bias

Variance You Selected

Variance is error that comes from sensitivity to small fluctuations in the training set. High
variance can cause an algorithm to model the random noise in the training data rather than
the intended outputs; this is known as overfitting. Once you run the model on test data to
check its performance, you will see a drop in performance because of that overfitting.
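
A small demo of that drop, assuming the rpart package: an unpruned tree memorizes training noise, so its test accuracy falls well below its training accuracy:

library(rpart)

set.seed(3)
n <- 300
df <- data.frame(x = runif(n))
df$y <- factor(ifelse(df$x + rnorm(n, sd = 0.3) > 0.5, "a", "b"))
train <- df[1:200, ]
test  <- df[201:300, ]

# cp = 0 and minsplit = 2 let the tree grow until it fits the noise.
deep <- rpart(y ~ x, data = train,
              control = rpart.control(minsplit = 2, cp = 0))

mean(predict(deep, train, type = "class") == train$y)  # near-perfect
mean(predict(deep, test,  type = "class") == test$y)   # clearly worse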

Question No: 7 Correct Answer

What is the explanation of the following code snippet?

balanced.gd <- SMOTE(Class ~ ., smote.train, perc.over = 4800, k = 5, perc.under = 1000)


We are subtracting 48 for every 100 of the minority class sample

We are adding 10 for every 100 of the minority class sample.

We are subtracting 10 for every 100 of the minority class sample.

We are adding 48 for every 100 of the minority class sample You Selected

SMOTE is an R algorithm for unbalanced classification problems. Alternatively, it can also run
a classification algorithm on the newly balanced data set and return the resulting model.
perc.over: a number that drives how many extra cases of the minority class are generated
(over-sampling). perc.under: a number that drives how many extra cases of the majority
class are selected for each case generated from the minority class (under-sampling).
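
A runnable version of the quoted call, assuming the DMwR package (which provides this SMOTE() signature) is installed; smote.train below is a toy stand-in with a rare minority class:

library(DMwR)

# Toy imbalanced data: 200 "No" cases vs. 20 "Yes" cases.
set.seed(5)
smote.train <- data.frame(x1 = rnorm(220), x2 = rnorm(220),
                          Class = factor(c(rep("No", 200), rep("Yes", 20))))

# perc.over = 4800: 4800/100 = 48 synthetic minority cases per original
# minority case, built from its k = 5 nearest neighbours.
# perc.under = 1000: 1000/100 = 10 majority cases kept per synthetic case.
balanced.gd <- SMOTE(Class ~ ., smote.train,
                     perc.over = 4800, k = 5, perc.under = 1000)

table(balanced.gd$Class)  # inspect the new class balance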

Question No: 8 Correct Answer

What kind of Boosting is the following?

[Diagram omitted from this extract: a base model is trained, the sample distribution is reweighted, and the next weak learner is fit on the reweighted distribution.]

Gradient Boosting

Adaptive Boosting You Selected

Extreme Gradient Boosting

In the diagram, a base model is created, a weak classifier is obtained from it, and the
population distribution is updated for the next step. The new population distribution is then
used to fit the next learner, and so on. This reweighting process is the adaptive boosting
(AdaBoost) methodology.
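
A minimal AdaBoost sketch with the adabag package, assuming it is installed: each round reweights the training cases so the next weak learner concentrates on the examples misclassified so far:

library(adabag)

set.seed(9)
data(iris)
idx <- sample(nrow(iris), 100)

# mfinal = 25 boosting rounds; case weights are updated after each round.
fit <- boosting(Species ~ ., data = iris[idx, ], mfinal = 25)

pred <- predict(fit, newdata = iris[-idx, ])
pred$confusion  # confusion matrix on the held-out rows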
