Attempts : 1/1
Questions : 8
Time : 20m
Which of the following is/are true about weak learners used in an ensemble model?
This study source was downloaded by 100000759713971 from CourseHero.com on 03-06-2022 02:59:22 GMT -06:00
https://olympus.greatlearning.in/courses/9218/quizzes/39839?module_item_id=600589 1/6
https://www.coursehero.com/file/65650199/Weekly-Quiz-2-ML-Machine-Learning-Great-Learning-ankitpdf/
6/28/2020 Weekly Quiz 2 (ML): Machine Learning - Great Learning
2 and 3
1 and 3
None of these
Weak learners are confident about only a particular part of a problem, so they usually don't overfit, which means that weak learners have low variance and high bias.
In an election, N candidates are competing against each other and people are voting for
either of the candidates. Voters don’t communicate with each other while casting their votes.
A Or B
Boosting
In the bagging process, you take multiple random samples from the population, build a CART on each random sample, and then take the count of the responses to come up with the predictive score.
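As a rough illustration of this process, here is a minimal pure-Python sketch (not the course's code): it draws bootstrap samples, fits a deliberately trivial weak learner to each in place of a full CART, and aggregates the responses by majority vote.

```python
import random
from collections import Counter

def fit_weak_learner(sample):
    # Deliberately trivial weak learner standing in for a CART:
    # predict the majority label of the bootstrap sample.
    labels = [y for _, y in sample]
    return Counter(labels).most_common(1)[0][0]

def bagging_predict(data, n_models=25, seed=0):
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        # Bootstrap: sample with replacement, same size as the original data.
        sample = [rng.choice(data) for _ in data]
        votes.append(fit_weak_learner(sample))
    # Aggregate by counting the responses (majority vote).
    return Counter(votes).most_common(1)[0][0]

data = [(x, 1) for x in range(7)] + [(x, 0) for x in range(3)]
print(bagging_predict(data))  # the majority class wins the vote
```

In a real application each weak learner would be a full CART trained on its bootstrap sample; only the resampling-and-voting skeleton is shown here.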
Models with High Bias and Low Variance are inconsistent but accurate on average
TRUE
High bias, low variance algorithms train models that are consistent, but inaccurate on average. High variance, low bias algorithms train models that are accurate on average, but inconsistent.
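A small simulation can make this concrete (a hypothetical setup, not from the quiz): a model that ignores the data entirely is perfectly consistent but inaccurate on average, while a model that copies a single noisy observation is accurate on average but inconsistent.

```python
import random
import statistics

def simulate(n_trials=2000, seed=0):
    rng = random.Random(seed)
    true_value = 5.0
    biased_preds, flexible_preds = [], []
    for _ in range(n_trials):
        # Each "training set" is one noisy observation of the true value.
        y = true_value + rng.gauss(0, 2)
        biased_preds.append(0.0)   # high bias, zero variance: ignores the data
        flexible_preds.append(y)   # zero bias, high variance: copies the data
    return biased_preds, flexible_preds

biased, flexible = simulate()
# Consistent but inaccurate: zero spread, yet far from the true value 5.
print(statistics.mean(biased), statistics.pstdev(biased))
# Accurate on average but inconsistent: mean near 5, large spread.
print(round(statistics.mean(flexible), 1), round(statistics.pstdev(flexible), 1))
```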
If the model has a very high bias the model is more likely to
Overfit
In machine learning terminology, underfitting means that a model is too general, leading to high bias.
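For instance, a sketch of this in a few lines (an illustrative example, not from the quiz): fitting a single constant, a very high-bias model, to clearly non-constant data leaves a large error even on the training set.

```python
# High-bias model: a single constant fitted to clearly non-constant data.
xs = list(range(10))
ys = [x * x for x in xs]  # quadratic pattern

constant = sum(ys) / len(ys)  # the best a constant model can do
train_mse = sum((y - constant) ** 2 for y in ys) / len(ys)
# The model is too general to capture the pattern, so even the
# training error stays large: the signature of underfitting.
print(constant, train_mse)
```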
Adaptive Boosting
Gradient Boosting for regression builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. The objective is to minimize the loss of the model by adding weak learners using a gradient-descent-like procedure.
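The stage-wise procedure described above can be sketched as follows (a simplified illustration, not a library implementation): with squared loss, the negative gradient is just the current residual, so each round fits a hypothetical one-split regression stump to the residuals and adds it to the model with a shrinkage factor.

```python
def fit_stump(xs, residuals):
    # Weak learner: a one-split regression stump that minimises
    # squared error on the residuals.
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x, t=t, l=lmean, r=rmean: l if x <= t else r

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    # Forward stage-wise additive model: each stump fits the negative
    # gradient of the squared loss, i.e. the current residuals.
    f0 = sum(ys) / len(ys)
    pred = [f0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: f0 + lr * sum(s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 5.1, 5.0, 4.8]  # a step pattern the ensemble can recover
model = gradient_boost(xs, ys)
print(round(model(2), 2), round(model(5), 2))  # predictions land near the two plateaus
```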
In terms of the bias-variance trade-off, which of the following is/are substantially more
harmful to the test error than the training error?
Risk
Loss
Bias
The variance is an error from sensitivity to small fluctuations in the training set. High variance can cause an algorithm to model the random noise in the training data rather than the intended outputs; this is known as overfitting. Once you run the model on test data to check its performance, you will see a drop in performance because of overfitting.
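A quick way to see this drop (an illustrative setup, not from the quiz): a 1-nearest-neighbour model memorizes the training data perfectly, including its label noise, so its training error is zero while its test error is not.

```python
import random

def one_nn_predict(train, x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def error_rate(train, data):
    return sum(one_nn_predict(train, x) != y for x, y in data) / len(data)

rng = random.Random(0)

def make_data(n):
    # True rule: label = 1 iff x > 0, but 20% of labels are random noise.
    out = []
    for _ in range(n):
        x = rng.uniform(-1, 1)
        y = int(x > 0) if rng.random() > 0.2 else rng.randint(0, 1)
        out.append((x, y))
    return out

train, test = make_data(100), make_data(500)
print(error_rate(train, train))  # 0.0: the model memorizes every training label
print(error_rate(train, test))   # higher: performance drops on unseen data
```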
We are adding 48 for every 100 of the minority class samples (You Selected)
SMOTE is an R algorithm for unbalanced classification problems. Alternatively, it can also run a classification algorithm on this new data set and return the resulting model. perc.over: a number that drives how many extra cases from the minority class are generated, known as oversampling. perc.under: a number that drives how many extra cases from the majority class are selected for each case generated from the minority class, known as undersampling.
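The oversampling side of this idea can be sketched in a few lines (a simplified Python analogue of the R function, with a hypothetical `smote_like` helper whose `perc_over` argument loosely mirrors `perc.over`): each synthetic case is interpolated between a minority sample and one of its nearest minority neighbours.

```python
import random

def smote_like(minority, perc_over=200, k=3, seed=0):
    # Sketch of SMOTE-style oversampling on 1-D data: each synthetic case
    # is a random interpolation between a minority sample and one of its
    # k nearest minority neighbours. perc_over=200 means 2 new cases are
    # generated per original minority case.
    rng = random.Random(seed)
    synthetic = []
    per_case = perc_over // 100
    for x in minority:
        # Nearest minority neighbours, excluding the point itself.
        neighbours = sorted(minority, key=lambda m: abs(m - x))[1:k + 1]
        for _ in range(per_case):
            nb = rng.choice(neighbours)
            gap = rng.random()
            synthetic.append(x + gap * (nb - x))
    return synthetic

minority = [1.0, 1.2, 1.5, 2.0]
new = smote_like(minority)
print(len(new))  # perc_over=200 -> 2 synthetic cases per original, 8 in total
print(all(min(minority) <= s <= max(minority) for s in new))  # interpolation stays inside
```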
Gradient Boosting
In the above diagram, a base model is created first; then a weak classifier is obtained and the population distribution is updated for the next step. The new population distribution is used to find the next learner, and this process is repeated. This is the gradient boosting methodology.
Proprietary content. ©Great Learning. All Rights Reserved. Unauthorized use or distribution prohibited.