Importing Libraries
In [1]:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
ad_data=pd.read_csv('airfoil_self_noise.csv')
In [3]:
ad_data.head()
Out[3]:
In [4]:
ad_data.info()
<class 'pandas.core.frame.DataFrame'>
In [5]:
ad_data.describe()
Out[5]:
In [6]:
sns.set_palette("GnBu_d")
sns.set_style("whitegrid")
sns.displot(ad_data['Scaled sound pressure level'])
Out[6]:
<seaborn.axisgrid.FacetGrid at 0x2425c926b80>
In [7]:
sns.jointplot(data=ad_data)
Out[7]:
<seaborn.axisgrid.JointGrid at 0x2425ea24a30>
In [8]:
Out[8]:
<seaborn.axisgrid.JointGrid at 0x242599a2280>
Create a jointplot showing the KDE distribution of "Daily Time spent on site" vs. "Age".
In [9]:
sns.pairplot(data=ad_data)
Out[9]:
<seaborn.axisgrid.PairGrid at 0x2425ee1a280>
**Create a jointplot of "Daily Time Spent on Site" vs. "Daily Internet Usage"**
In [10]:
Out[10]:
<seaborn.axisgrid.FacetGrid at 0x24261214eb0>
In [12]:
In [13]:
Out[13]:
In [14]:
Out[14]:
0 126.201
1 125.201
2 125.951
3 127.591
4 127.461
...
1498 110.264
1499 109.254
1500 106.604
1501 106.224
1502 104.204
Linear Regression
6/7/22, 22:18 Pregunta1 - Pablo David - Jupyter Notebook
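The train/test split cell (In [16]) is blank in this export. A minimal sketch of the usual approach, assuming a `train_test_split` with a 70/30 ratio (the actual ratio and random seed are not shown in the notebook) and using a small synthetic stand-in for `ad_data`, since the real notebook reads `airfoil_self_noise.csv`:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ad_data; the real notebook loads 'airfoil_self_noise.csv'.
# Column names follow the ones used elsewhere in the notebook.
ad_data = pd.DataFrame({
    'Frequency': [800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300],
    'Angle of attack': [0.0] * 10,
    'Scaled sound pressure level': [126.201, 125.201, 125.951, 127.591, 127.461,
                                    125.571, 125.201, 123.061, 121.301, 119.541],
})

# Predictors are every column except the target.
X = ad_data.drop('Scaled sound pressure level', axis=1)
y = ad_data['Scaled sound pressure level']

# 70/30 split; test_size and random_state are assumptions, not from the export.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=101)
print(len(X_train), len(X_test))
```

With this split, `X_train`/`y_train` feed `lm.fit` below and `X_test`/`y_test` are held out for evaluation.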
In [16]:
In [17]:
from sklearn.linear_model import LinearRegression  # import needed before instantiating the model
lm = LinearRegression()
In [18]:
lm.fit(X_train,y_train)
Out[18]:
LinearRegression()
In [19]:
X_train.head()
Out[19]:
In [20]:
print(lm.intercept_)
132.76846988060998
In [21]:
print(lm.coef_)
-1.51813495e+02]
Predictions and Evaluations
**Now predict the values for the test data.**
In [22]:
predictions = lm.predict(X_test)
In [23]:
In [24]:
plt.scatter(y_test, predictions)
Out[24]:
<matplotlib.collections.PathCollection at 0x2426244f580>
In [25]:
sns.displot((y_test-predictions),bins=30)
Out[25]:
<seaborn.axisgrid.FacetGrid at 0x24261705940>
In [27]:
MAE: 3.6747310654156187
MSE: 22.395946643814117
RMSE: 4.732435593202946
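The cell that produced the MAE/MSE/RMSE figures above is not included in the export. A typical sketch using `sklearn.metrics`, shown here with toy arrays standing in for `y_test` and `predictions`:

```python
import numpy as np
from sklearn import metrics

# Toy values; in the notebook these would be y_test and predictions.
y_true = np.array([126.2, 125.2, 125.95, 127.59])
y_pred = np.array([125.0, 126.0, 125.0, 128.0])

mae = metrics.mean_absolute_error(y_true, y_pred)   # mean of |error|
mse = metrics.mean_squared_error(y_true, y_pred)    # mean of error^2
rmse = np.sqrt(mse)                                 # RMSE = sqrt(MSE)

print('MAE:', mae)
print('MSE:', mse)
print('RMSE:', rmse)
```

RMSE is reported alongside MSE because it is in the same units as the target (decibels here), which makes it easier to interpret.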
Out[28]:
LinearRegression()
In [29]:
print("Intercept: ", modeloLineal2.intercept_)
Intercept: 132.9105746266391
In [30]:
print("Coefficients:", modeloLineal2.coef_)
-1.47089444e+02]
In [31]:
predicciones2 = modeloLineal2.predict(X_test2)
In [32]:
plt.scatter(y_test2,predicciones2)
Out[32]:
<matplotlib.collections.PathCollection at 0x2426255aaf0>
In [33]:
sns.displot((y_test2-predicciones2),bins=30)
Out[33]:
<seaborn.axisgrid.FacetGrid at 0x242624cb640>
MAE: 3.510346177232778
MSE: 21.213057157693157
RMSE: 4.605763471748539
Comparing the error metrics of the two models, Model 2 yields lower MAE, MSE, and RMSE than Model 1. Since lower values on all three metrics indicate smaller prediction errors, we can conclude that Model 2 is the better model.
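The comparison can be made concrete by computing the relative improvement of Model 2 over Model 1 on each metric, using the values reported in the outputs above:

```python
# Metrics copied from the notebook outputs for each model.
model1 = {'MAE': 3.6747310654156187, 'MSE': 22.395946643814117, 'RMSE': 4.732435593202946}
model2 = {'MAE': 3.510346177232778, 'MSE': 21.213057157693157, 'RMSE': 4.605763471748539}

# Percentage reduction of each error metric in Model 2 relative to Model 1.
for metric in model1:
    improvement = 100 * (model1[metric] - model2[metric]) / model1[metric]
    print(f'{metric}: Model 2 is {improvement:.1f}% lower')
```

Model 2 reduces every metric by a few percent, which supports the conclusion above.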
Good job!