
Linear Regression

Questions
(1) What is Linear Regression?
Linear regression is a supervised machine learning
algorithm that models the linear relationship between a
dependent variable and one or more independent features.
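A minimal sketch of this in Python (assuming NumPy and scikit-learn are available; the feature matrix X and target y below are toy values invented purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Two independent features per example, one dependent variable (toy values
# generated by y = 1*x1 + 2*x2, so the fit should recover those weights).
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # the learned linear relationship
print(model.predict([[5.0, 5.0]]))    # prediction for a new example
```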

(2) What is Simple Linear Regression?


Simple linear regression (one-variable or univariate
regression) is a statistical method that allows us to
summarize and study the relationship between two
continuous (quantitative) variables.
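For the one-variable case, a line can be fit with NumPy alone; the sketch below uses np.polyfit on illustrative x/y values:

```python
import numpy as np

# Toy (x, y) pairs that roughly follow y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])

w, b = np.polyfit(x, y, deg=1)  # slope and intercept of the best-fit line
print(f"y ~ {w:.2f} * x + {b:.2f}")
```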

(3) What are the key benefits of Linear Regression?


- Easy implementation
- Easy interpretability
- Scalability to large datasets
- Robustness to noise
- Wide range of applications

(4) List Applications of Linear Regression


a. Finance: predicting stock prices
b. Economics: predicting GDP growth based on factors
such as inflation
c. Healthcare: predicting patient outcomes using medical
variables
d. Marketing: estimating sales based on advertising
spending

(5) What is the General Formula for the Linear Regression
Equation and Cost Function?

Function: f(x) = w*x + b

Parameters: w -> weight (slope), b -> y-intercept

Cost Function: J(w, b) = (1 / (2*m)) * Σ (f(x(i)) - y(i))²,
summed over the m training examples i = 1, …, m

Goal: find the w and b that minimize J(w, b)
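The cost function can be written out directly in code; the sketch below assumes the one-variable form f(x) = w*x + b above and uses made-up data:

```python
import numpy as np

def cost(w, b, x, y):
    """Squared-error cost J(w, b) with the 1/(2*m) convention."""
    m = len(x)
    predictions = w * x + b              # f(x(i)) = w * x(i) + b
    return np.sum((predictions - y) ** 2) / (2 * m)

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])            # generated by y = 2x
print(cost(2.0, 0.0, x, y))              # 0.0 – a perfect fit
print(cost(1.0, 0.0, x, y))              # larger cost for a worse fit
```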

MCQ
1. What is the purpose of feature scaling in linear regression?
a) To improve interpretability of coefficients
b) To reduce the impact of outliers
c) To increase the R-squared value
d) To make the model more robust to noise
Answer: a) To improve interpretability of coefficients
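One common way to scale features is scikit-learn's StandardScaler; a small sketch with an invented feature matrix (scaled features put the coefficients on comparable scales, and scaling also helps gradient descent converge):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales (e.g. number of rooms vs. square feet).
X = np.array([[1.0, 2000.0], [2.0, 3000.0], [3.0, 4000.0]])
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled)  # each column now has mean 0 and unit variance
```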

2. What is gradient descent used for in linear regression?


a) To select the best features
b) To find the optimal model parameters
c) To validate the model performance
d) To calculate the p-values

Answer: b) To find the optimal model parameters
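A sketch of how gradient descent updates the parameters w and b for the one-variable case (the learning rate, iteration count, and toy data are arbitrary illustrative choices):

```python
import numpy as np

# Toy data generated by y = 2x + 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

w, b, lr, m = 0.0, 0.0, 0.01, len(x)
for _ in range(10_000):
    error = (w * x + b) - y           # prediction error for each example
    w -= lr * np.dot(error, x) / m    # gradient of the cost with respect to w
    b -= lr * np.sum(error) / m       # gradient of the cost with respect to b

print(round(w, 3), round(b, 3))       # converges towards w = 2, b = 1
```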

3. What is the purpose of cross-validation in linear
regression?
a) To improve model fit on training data
b) To assess model performance on unseen data
c) To select the best features
d) To calculate the p-values
Answer: b) To assess model performance on unseen data
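A sketch of k-fold cross-validation using scikit-learn's cross_val_score (the choice of 5 folds and the synthetic data are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 100 examples, 3 features, a known linear relationship plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(scores.mean())  # average R^2 measured on held-out folds, not training data
```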

4. What is the primary objective of linear regression?


a) Minimize the sum of absolute errors
b) Minimize the sum of squared errors
c) Maximize the correlation coefficient
d) Maximize the R-squared value

Answer: b) Minimize the sum of squared errors

5. What is the difference between linear regression and
logistic regression?
a) Linear regression predicts continuous values, logistic
regression predicts binary outcomes
b) Linear regression predicts binary outcomes, logistic
regression predicts continuous values
c) Both predict continuous values
d) Both predict binary outcomes

Answer: a) Linear regression predicts continuous values,
logistic regression predicts binary outcomes
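A sketch contrasting the two model types on invented data: LinearRegression returns continuous predictions, while LogisticRegression returns class labels:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y_continuous = np.array([1.1, 2.0, 3.2, 3.9, 5.1, 6.0])  # e.g. a price
y_binary = np.array([0, 0, 0, 1, 1, 1])                  # e.g. fail/pass

print(LinearRegression().fit(X, y_continuous).predict([[3.5]]))  # a continuous value
print(LogisticRegression().fit(X, y_binary).predict([[3.5]]))    # a class label (0 or 1)
```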

6. What is the main limitation of linear regression?


a) It can only be used with continuous data.
b) It assumes a linear relationship between variables.
c) It is computationally expensive.
d) It cannot handle missing data.
Answer: b) It assumes a linear relationship between variables.
7. What are some alternatives to linear regression when the
relationship between variables is not linear?
a) Logistic regression
b) Decision trees
c) Support vector machines
d) All of the above

Answer: d) All of the above
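As a sketch of one such alternative, a decision tree can follow a nonlinear pattern that a straight line cannot (the sine-shaped data is invented for illustration, and both scores are measured on the training data itself):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# A sine-shaped target: clearly not a linear function of x.
X = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
y = np.sin(X).ravel()

print(LinearRegression().fit(X, y).score(X, y))       # R^2 near 0: a line misses the curve
print(DecisionTreeRegressor().fit(X, y).score(X, y))  # near 1.0 on this training data
```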

8. The learner is trying to predict the cost of a papaya based
on its size. The variable “cost” is
a) independent variable
b) target variable
c) ranked variable
d) categorical variable
Answer: b) target variable
9. The learner is trying to predict housing prices based on the
size of each house. The variable “size” is
a) dependent variable
b) label set variable
c) independent variable
d) target variable
Answer: c) independent variable

10. What does (x(5), y(5)) represent or imply?


a) There are 5 training examples
b) The values of x and y are 5
c) The fourth training example
d) The fifth training example

Answer: d) The fifth training example
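A small sketch of the indexing convention: the maths counts examples from 1, while Python lists are 0-based, so (x(5), y(5)) corresponds to index 4 (the values below are invented):

```python
# The fifth training example (x(5), y(5)) sits at Python index 4.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [3.0, 5.0, 7.0, 9.0, 11.0, 13.0]
print(x[4], y[4])  # -> 5.0 11.0
```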


11. The hypothesis is given by h(x) = θ0 + θ1x. What is the
goal of θ0 and θ1?
a) Give negative h(x)
b) Give h(x) as close to 0 as possible, without themselves being
0
c) Give h(x) as close to y, in training data, as possible
d) Give h(x) closer to x than y

Answer: c) Give h(x) as close to y, in training data, as possible


12. In a linear regression problem, h(x) is the predicted
value of the target variable, y is the actual value of the
target variable, m is the number of training examples. What
do we try to minimize?
a) (h(x) – y) / m
b) (h(x) – y)² / (2*m)
c) (h(x) – y) / (2*m)
d) (y – h(x))
Answer: b) (h(x) – y)² / (2*m)
