
Simple Linear Regression With scikit-learn

There are five basic steps when you’re implementing linear regression:

1. Import the packages and classes you need.
2. Provide data to work with and eventually do appropriate transformations.
3. Create a regression model and fit it with existing data.
4. Check the results of model fitting to know whether the model is satisfactory.
5. Apply the model for predictions.

These steps are more or less general for most of the regression approaches and implementations.

Problem Statement:

The Head of HR of a certain organization wants to automate their salary hike estimation. The organization consulted an analytics service provider and asked them to build a basic prediction model, providing a sample dataset that contains historic data of the years of experience and the salary hike given accordingly over the past years.

Approach: A simple linear regression model needs to be built with the target variable ‘Salary’ to predict the salary hike. Apply the necessary transformations and record the RMSE and correlation coefficient values for the different transformation models.

Step 1: Import packages and classes

The first step is to import the package numpy and the class LinearRegression from sklearn.linear_model:

import numpy as np
from sklearn.linear_model import LinearRegression
Now, you have all the functionalities you need to implement linear regression.

The fundamental data type of NumPy is the array type called numpy.ndarray. The rest of this article
uses the term array to refer to instances of the type numpy.ndarray.
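As a quick illustration (the numbers below are made-up toy values, not the actual contents of Salary_data.csv), note that scikit-learn expects the input to be a two-dimensional array, which is why .reshape() is used:

# Made-up years of experience, reshaped into a column vector (2-D array)
x = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
# Made-up corresponding salary hikes (1-D array)
y = np.array([40, 45, 52, 60, 67])

print(type(x), x.shape)  # <class 'numpy.ndarray'> (5, 1)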

The class sklearn.linear_model.LinearRegression will be used to perform linear and polynomial regression and make predictions accordingly.

Step 2: Provide data


The second step is defining the data to work with: the inputs (regressors, 𝑥) and the output (response, 𝑦).

Salary_data.csv is imported.

Exploratory data analysis is performed on the data.
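A minimal sketch of this step, assuming Salary_data.csv is in the working directory; the DataFrame is named cal_data only to stay consistent with the model-building snippet shown in Step 3:

import pandas as pd

# Load the sample data provided by the organization
cal_data = pd.read_csv('Salary_data.csv')

# Basic exploratory data analysis
print(cal_data.head())        # first few rows
print(cal_data.describe())    # summary statistics
print(cal_data.corr())        # correlation between the columns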

Step 3: Create a model and fit it

The next step is to create a linear regression model and fit it using the existing data.

Let’s create an instance of the class LinearRegression, which will represent the regression model:

Simple linear regression


model = LinearRegression()
This statement creates the variable model as an instance of LinearRegression. You can provide several optional parameters to LinearRegression.
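As a brief sketch of fitting this scikit-learn model (it reuses the illustrative x and y arrays from Step 1 rather than the real data):

# Fit the model on the 2-D input x and the 1-D output y
model.fit(x, y)

# Intercept (b0), slope (b1), and predictions on the training inputs
print(model.intercept_, model.coef_)
print(model.predict(x))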

statsmodels.formula.api is imported to build an ordinary least squares (OLS) model on the data:

import statsmodels.formula.api as smf

model1 = smf.ols('calories ~ weight', data=cal_data).fit()

The regression line is plotted after obtaining the predicted values.

After plotting the scatter plot, the root mean squared error (RMSE) is calculated.
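A minimal sketch of these two steps, following the variable names from the snippet above (model1, cal_data, and the calories ~ weight columns), with matplotlib assumed for plotting:

import matplotlib.pyplot as plt

# Predicted values from the fitted OLS model
pred1 = model1.predict(cal_data)

# Scatter plot of the data with the fitted regression line
plt.scatter(cal_data['weight'], cal_data['calories'])
plt.plot(cal_data['weight'], pred1, color='red')
plt.xlabel('weight')
plt.ylabel('calories')
plt.show()

# Root mean squared error of the base model
rmse1 = np.sqrt(np.mean((cal_data['calories'] - pred1) ** 2))
print(rmse1)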

In order to reduce the error and obtain a better-fitting line, transformations are performed on the data.

Log transformation

In log transformation, the transformation is applied to the x data:

# x = log(weight), y = calories

A scatter plot is plotted, and then the correlation coefficient is obtained between the transformed input and the output.

model2 is built on the transformed data, the new regression line is plotted, and the new RMSE is calculated.
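A sketch of the log-transformed model, again assuming the calories and weight column names from the earlier snippet (np.log inside the formula is handled by the formula interface):

# Correlation between the log-transformed input and the output
print(np.corrcoef(np.log(cal_data['weight']), cal_data['calories'])[0, 1])

# model2: regression of calories on log(weight)
model2 = smf.ols('calories ~ np.log(weight)', data=cal_data).fit()
pred2 = model2.predict(cal_data)

# New RMSE
rmse2 = np.sqrt(np.mean((cal_data['calories'] - pred2) ** 2))
print(rmse2)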

Exponential transformation

In exponential transformation, the transformation is applied to the y data:

# x = weight, y = log(calories)

A scatter plot is plotted, and then the correlation coefficient is obtained between the input and the transformed output.

model3 is built on the transformed data, the new regression line is plotted, and the new RMSE is calculated.
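A sketch of the exponential model; because the predictions come back on the log scale, they are exponentiated before the RMSE is computed against the original y (column names are again assumed from the earlier snippets):

# Correlation between the input and the log-transformed output
print(np.corrcoef(cal_data['weight'], np.log(cal_data['calories']))[0, 1])

# model3: regression of log(calories) on weight
model3 = smf.ols('np.log(calories) ~ weight', data=cal_data).fit()
pred3_log = model3.predict(cal_data)

# Back-transform the predictions and compute the RMSE on the original scale
pred3 = np.exp(pred3_log)
rmse3 = np.sqrt(np.mean((cal_data['calories'] - pred3) ** 2))
print(rmse3)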

Polynomial transformation

# x = exp, x^2 = exp*exp, y = log(churn)

PolynomialFeatures is imported from sklearn.preprocessing to build the polynomial regression.

The new regression line is plotted, and the RMSE is obtained from the above regression model.
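A sketch of the polynomial model, using PolynomialFeatures together with LinearRegression on the log-transformed target (the column names are again assumed from the earlier snippets):

from sklearn.preprocessing import PolynomialFeatures

# Build x and x^2 columns from the single input column
X = cal_data[['weight']].values
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

# Fit a linear model on the polynomial features with log-transformed y
poly_model = LinearRegression()
poly_model.fit(X_poly, np.log(cal_data['calories']))

# Back-transform the predictions and compute the RMSE on the original scale
pred4 = np.exp(poly_model.predict(X_poly))
rmse4 = np.sqrt(np.mean((cal_data['calories'] - pred4) ** 2))
print(rmse4)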

The best model is chosen by comparing the RMSE values of all of the above transformations.

The models and their respective RMSE values are tabulated.
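A minimal sketch of how the RMSE values from the four models might be tabulated for comparison (the variable names follow the sketches above):

results = pd.DataFrame({'model': ['linear', 'log', 'exponential', 'polynomial'],
                        'rmse': [rmse1, rmse2, rmse3, rmse4]})
print(results.sort_values('rmse'))  # the model with the lowest RMSE is preferred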

From the above observations, the exponential model is taken as the best.

Step 4: Get results


Once you have your model fitted, you can get the results to check whether the model works
satisfactorily and interpret it.

The summary of the final model is examined.

The final model is fitted on a train/test split of the data, and predictions are observed on the test set.

The final RMSE value is then recorded.
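A sketch of this final step, assuming the exponential model was selected and keeping the column names from the earlier snippets:

from sklearn.model_selection import train_test_split

# Split the data into training and test sets
train, test = train_test_split(cal_data, test_size=0.2, random_state=0)

# Refit the chosen (exponential) model on the training data only
final_model = smf.ols('np.log(calories) ~ weight', data=train).fit()
print(final_model.summary())

# Predict on the test set, back-transform, and compute the final RMSE
test_pred = np.exp(final_model.predict(test))
final_rmse = np.sqrt(np.mean((test['calories'] - test_pred) ** 2))
print(final_rmse)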
