
Republic of the Philippines

UNIVERSITY OF NORTHERN PHILIPPINES


Tamag, Vigan City
2700 Ilocos Sur

College of Communication and Information Technology


Website: www.unp.ph Mail: ccit@unp.edu.ph
Tel #: (077)632-0602

Name: Sherwin G. Renon          Subject: HCI

ID No.: 100505050013            Date Submitted:

Activity No. 2
Linear Regression Model (Univariate)

import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# load the Boston housing dataset
df = pd.read_csv('boston_housing.csv')
# display the first 5 rows of the dataset
print(df.head())
# extract column 5 (RM, average number of rooms) as the feature
x_data = df.iloc[:, 5]
num_rows = len(x_data)
x_data = np.array(x_data)
x_data = np.reshape(x_data, (num_rows, 1))
# extract column 13 (MEDV, median home value) as the target
y_data = df.iloc[:, 13]
y_data = np.array(y_data)
y_data = np.reshape(y_data, (num_rows, 1))
# placeholders for the feature and the target (TensorFlow 1.x API)
X = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='X')
Y = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='Y')
# model parameters: intercept b0 and slope b1, initialized to zero
b0 = tf.Variable(tf.zeros([1]), name='b0')
b1 = tf.Variable(tf.zeros([1, 1]), name='b1')
# the linear model: y_pred = X*b1 + b0
y_pred = tf.matmul(X, b1) + b0
# cost and mse are the same quantity: the mean squared error
cost = tf.reduce_mean(tf.square(y_pred - Y))
mse = tf.reduce_mean(tf.square(y_pred - Y))
# R^2 = 1 - unexplained error / total error
y_mean = tf.reduce_mean(Y)
total_error = tf.reduce_sum(tf.square(Y - y_mean))
unexplained_error = tf.reduce_sum(tf.square(Y - y_pred))
rs = 1 - tf.div(unexplained_error, total_error)
learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
epochs = 50000
b0_hat = 0
b1_hat = 0
cost_epochs = np.empty(shape=[epochs],dtype=float)
mse_epochs = np.empty(shape=[epochs],dtype=float)
rs_epochs = np.empty(shape=[epochs],dtype=float)
mse_score = 0
rs_score = 0
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epochs):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        cost_curr = sess.run(cost, feed_dict={X: x_data, Y: y_data})
        b0_curr, b1_curr = sess.run([b0, b1])
        b0_curr = np.reshape(b0_curr, (1))
        b1_curr = np.reshape(b1_curr, (1))
        # report progress every 1000 epochs
        if epoch % 1000 == 0:
            print('cost_curr: {0} b0: {1} b1: {2}'.format(cost_curr, b0_curr, b1_curr))

print('\nThe final model:\n')
print('y = {1}*X + {0}'.format(b0_curr[0], b1_curr[0]))
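
Note that the script defines mse and rs (and initializes mse_score and rs_score) but never evaluates them, and matplotlib is imported but never used. Below is a minimal sketch of how both could be used; the placement is an assumption, and the indented lines belong inside the with tf.Session() block right after the training loop:

    # evaluate the error metrics on the training data (still inside the session)
    mse_score = sess.run(mse, feed_dict={X: x_data, Y: y_data})
    rs_score = sess.run(rs, feed_dict={X: x_data, Y: y_data})
    print('MSE: {0}  R^2: {1}'.format(mse_score, rs_score))

# plot the data points and the fitted line, using the b0_curr and
# b1_curr values captured during training
plt.scatter(x_data, y_data, s=8, label='data')
plt.plot(x_data, b1_curr[0] * x_data + b0_curr[0], 'r', label='fitted line')
plt.xlabel('RM (average number of rooms)')
plt.ylabel('MEDV (median home value)')
plt.legend()
plt.show()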
Experiment 1:
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
epochs = 10000
Result:
cost_curr: 76.25054931640625 b0: [0.45065615] b1: [2.9219003]
cost_curr: 53.04937744140625 b0: [-6.9776087] b1: [4.748627]
cost_curr: 49.44146728515625 b0: [-12.897448] b1: [5.679256]
cost_curr: 47.21120071411133 b0: [-17.551807] b1: [6.4109445]
cost_curr: 45.832523345947266 b0: [-21.211222] b1: [6.986223]
cost_curr: 44.98027038574219 b0: [-24.088406] b1: [7.4385314]
cost_curr: 44.45344543457031 b0: [-26.350521] b1: [7.794147]
cost_curr: 44.12778091430664 b0: [-28.129065] b1: [8.073743]
cost_curr: 43.92646789550781 b0: [-29.527412] b1: [8.2935705]
cost_curr: 43.80202102661133 b0: [-30.62684] b1: [8.466406]

Conclusion:
In Experiment 1 we set the learning rate to 0.01, ten times the rate used in
Experiment 2, so every gradient step is ten times larger: at epoch 0 the slope
already reaches b1 = 2.92 versus 0.29 in Experiment 2. After the same 10,000
epochs the cost is lower (43.80 against 53.52) and b0 and b1 have moved much
farther from their starting value of zero.
Experiment 2:
learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
epochs = 10000
Result:
cost_curr: 508.32257080078125 b0: [0.04506562] b1: [0.29219005]
cost_curr: 58.174434661865234 b0: [-0.27770874] b1: [3.6953685]
cost_curr: 57.4901008605957 b0: [-1.0949024] b1: [3.8238356]
cost_curr: 56.837894439697266 b0: [-1.8926773] b1: [3.9492497]
cost_curr: 56.21632385253906 b0: [-2.6714935] b1: [4.071685]
cost_curr: 55.623931884765625 b0: [-3.431803] b1: [4.1912093]
cost_curr: 55.059364318847656 b0: [-4.1740475] b1: [4.307893]
cost_curr: 54.521297454833984 b0: [-4.898658] b1: [4.421803]
cost_curr: 54.00850296020508 b0: [-5.606048] b1: [4.5330105]
cost_curr: 53.51979064941406 b0: [-6.296629] b1: [4.6415715]

Conclusion:
In Experiment 2, b0 and b1 end at smaller magnitudes because we set the
learning rate to 0.001: the smaller steps mean that after 10,000 epochs the
model has not yet converged, and the cost is still at 53.52 compared with
43.80 in Experiment 1.
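
The step-size effect follows directly from the gradient-descent update rule. Below is a minimal plain-NumPy sketch of one update for this model (the function name and variables are illustrative, not part of the activity's code):

import numpy as np

def gradient_step(b0, b1, x, y, lr):
    # prediction error for the model y = b1*x + b0
    err = (b1 * x + b0) - y
    # gradients of the mean-squared-error cost
    grad_b0 = 2.0 * np.mean(err)
    grad_b1 = 2.0 * np.mean(err * x)
    # the step is proportional to lr, so lr = 0.01 moves the
    # parameters ten times farther than lr = 0.001
    return b0 - lr * grad_b0, b1 - lr * grad_b1

With b0 = b1 = 0, the first step for b1 works out to lr * 2 * mean(x*y), which is why the epoch-0 values of b1 in Experiments 1 and 2 (2.92 and 0.29) differ by exactly a factor of ten.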
Experiment 3:
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
epochs = 50000
Result:
cost_curr: 76.25054931640625 b0: [0.45065615] b1: [2.9219003]
cost_curr: 53.04937744140625 b0: [-6.9776087] b1: [4.748627]
cost_curr: 49.44146728515625 b0: [-12.897448] b1: [5.679256]
cost_curr: 47.21120071411133 b0: [-17.551807] b1: [6.4109445]
cost_curr: 45.832523345947266 b0: [-21.211222] b1: [6.986223]
cost_curr: 44.98027038574219 b0: [-24.088406] b1: [7.4385314]
cost_curr: 44.45344543457031 b0: [-26.350521] b1: [7.794147]
cost_curr: 44.12778091430664 b0: [-28.129065] b1: [8.073743]
cost_curr: 43.92646789550781 b0: [-29.527412] b1: [8.2935705]
cost_curr: 43.80202102661133 b0: [-30.62684] b1: [8.466406]
cost_curr: 43.725101470947266 b0: [-31.491257] b1: [8.602297]
cost_curr: 43.67753982543945 b0: [-32.170876] b1: [8.709136]
cost_curr: 43.64814376831055 b0: [-32.705193] b1: [8.793134]
cost_curr: 43.62997055053711 b0: [-33.12531] b1: [8.859179]
cost_curr: 43.618743896484375 b0: [-33.45561] b1: [8.911102]
cost_curr: 43.61178970336914 b0: [-33.715286] b1: [8.951926]
cost_curr: 43.60750198364258 b0: [-33.91949] b1: [8.984028]
cost_curr: 43.604854583740234 b0: [-34.080025] b1: [9.009264]
cost_curr: 43.603214263916016 b0: [-34.206253] b1: [9.029108]
cost_curr: 43.60219192504883 b0: [-34.30548] b1: [9.044707]
cost_curr: 43.60157012939453 b0: [-34.38351] b1: [9.056973]
cost_curr: 43.60118103027344 b0: [-34.444866] b1: [9.06662]
cost_curr: 43.60093688964844 b0: [-34.493084] b1: [9.0742]
cost_curr: 43.600791931152344 b0: [-34.53101] b1: [9.080161]
cost_curr: 43.600704193115234 b0: [-34.56077] b1: [9.08484]
cost_curr: 43.600643157958984 b0: [-34.584423] b1: [9.088558]
cost_curr: 43.60060501098633 b0: [-34.60265] b1: [9.091424]
cost_curr: 43.6005859375 b0: [-34.61721] b1: [9.093713]
cost_curr: 43.6005744934082 b0: [-34.628654] b1: [9.095511]
cost_curr: 43.60056686401367 b0: [-34.63707] b1: [9.096834]
cost_curr: 43.600563049316406 b0: [-34.6447] b1: [9.098034]
cost_curr: 43.600555419921875 b0: [-34.649635] b1: [9.09881]
cost_curr: 43.60055160522461 b0: [-34.65345] b1: [9.09941]
cost_curr: 43.60055923461914 b0: [-34.657265] b1: [9.100009]
cost_curr: 43.60055160522461 b0: [-34.66108] b1: [9.100609]
cost_curr: 43.60055160522461 b0: [-34.66259] b1: [9.100846]
(the same values repeat for every remaining report up to epoch 49,000; the
cost and coefficients no longer change, so the model has converged)

Conclusion: Experiment 3 runs the same 50,000 epochs as Experiment 4 but with
the higher learning rate of 0.01, so its coefficients grow faster and its cost
drops further. From about epoch 35,000 the printed values stop changing, which
shows the model has fully converged at a cost of 43.60.
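
Because the output stops changing long before epoch 50,000, the loop could stop early once the cost plateaus. Below is a minimal sketch of such a check, replacing the loop inside the with tf.Session() block (the tolerance 1e-9 is an arbitrary choice, not part of the activity):

    prev_cost = float('inf')
    for epoch in range(epochs):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        cost_curr = sess.run(cost, feed_dict={X: x_data, Y: y_data})
        # stop as soon as the cost no longer improves
        if abs(prev_cost - cost_curr) < 1e-9:
            print('converged at epoch {0}'.format(epoch))
            break
        prev_cost = cost_curr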
Experiment 4:
learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
epochs = 50000
Result:
cost_curr: 508.32257080078125 b0: [0.04506562] b1: [0.29219005]
cost_curr: 58.174434661865234 b0: [-0.27770874] b1: [3.6953685]
cost_curr: 57.4901008605957 b0: [-1.0949024] b1: [3.8238356]
cost_curr: 56.837894439697266 b0: [-1.8926773] b1: [3.9492497]
cost_curr: 56.21632385253906 b0: [-2.6714935] b1: [4.071685]
cost_curr: 55.623931884765625 b0: [-3.431803] b1: [4.1912093]
cost_curr: 55.059364318847656 b0: [-4.1740475] b1: [4.307893]
cost_curr: 54.521297454833984 b0: [-4.898658] b1: [4.421803]
cost_curr: 54.00850296020508 b0: [-5.606048] b1: [4.5330105]
cost_curr: 53.51979064941406 b0: [-6.296629] b1: [4.6415715]
cost_curr: 53.05402374267578 b0: [-6.9708047] b1: [4.747558]
cost_curr: 52.61012268066406 b0: [-7.6289616] b1: [4.8510222]
cost_curr: 52.187076568603516 b0: [-8.271473] b1: [4.952028]
cost_curr: 51.78388595581055 b0: [-8.898722] b1: [5.050635]
cost_curr: 51.399627685546875 b0: [-9.51107] b1: [5.1469]
cost_curr: 51.03342056274414 b0: [-10.108865] b1: [5.2408776]
cost_curr: 50.68439865112305 b0: [-10.692456] b1: [5.3326178]
cost_curr: 50.35177230834961 b0: [-11.262179] b1: [5.422184]
cost_curr: 50.034767150878906 b0: [-11.818367] b1: [5.5096173]
cost_curr: 49.73263931274414 b0: [-12.361348] b1: [5.594976]
cost_curr: 49.444705963134766 b0: [-12.891412] b1: [5.678308]
cost_curr: 49.17029571533203 b0: [-13.408886] b1: [5.759657]
cost_curr: 48.908756256103516 b0: [-13.914073] b1: [5.839075]
cost_curr: 48.65951156616211 b0: [-14.407246] b1: [5.9166045]
cost_curr: 48.42196273803711 b0: [-14.888704] b1: [5.9922924]
cost_curr: 48.19557189941406 b0: [-15.358723] b1: [6.066179]
cost_curr: 47.97981262207031 b0: [-15.817583] b1: [6.1383157]
cost_curr: 47.774192810058594 b0: [-16.265493] b1: [6.208732]
cost_curr: 47.57819747924805 b0: [-16.702833] b1: [6.277481]
cost_curr: 47.391456604003906 b0: [-17.129702] b1: [6.3445873]
cost_curr: 47.21345138549805 b0: [-17.546463] b1: [6.410106]
cost_curr: 47.04380798339844 b0: [-17.953335] b1: [6.474067]
cost_curr: 46.88213348388672 b0: [-18.35052] b1: [6.5365057]
cost_curr: 46.728057861328125 b0: [-18.738266] b1: [6.597461]
cost_curr: 46.581207275390625 b0: [-19.116806] b1: [6.6569695]
cost_curr: 46.44123840332031 b0: [-19.486393] b1: [6.7150707]
cost_curr: 46.307865142822266 b0: [-19.847118] b1: [6.7717795]
cost_curr: 46.18073272705078 b0: [-20.199352] b1: [6.8271527]
cost_curr: 46.059600830078125 b0: [-20.543137] b1: [6.881199]
cost_curr: 45.94413757324219 b0: [-20.878792] b1: [6.933963]
cost_curr: 45.83409881591797 b0: [-21.206478] b1: [6.985479]
cost_curr: 45.72922134399414 b0: [-21.52638] b1: [7.035766]
cost_curr: 45.62928009033203 b0: [-21.838663] b1: [7.0848613]
cost_curr: 45.53402328491211 b0: [-22.143536] b1: [7.1327868]
cost_curr: 45.443241119384766 b0: [-22.441162] b1: [7.179579]
cost_curr: 45.35670852661133 b0: [-22.731773] b1: [7.225262]
cost_curr: 45.27426528930664 b0: [-23.015366] b1: [7.269843]
cost_curr: 45.1956672668457 b0: [-23.29233] b1: [7.313383]
cost_curr: 45.120784759521484 b0: [-23.562616] b1: [7.355872]
cost_curr: 45.049400329589844 b0: [-23.826542] b1: [7.3973618]

Conclusion: Comparing Experiments 3 and 4, Experiment 3 ends with the larger
coefficients (b1 = 9.10 versus 7.40) and the lower cost (43.60 versus 45.05)
because its learning rate is 0.01, not 0.001. With the smaller rate of 0.001,
even 50,000 epochs are not enough for the model to converge.
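
The whole comparison could also be run in one script by rebuilding the optimizer for each learning rate and recording the final cost. Below is a minimal sketch reusing the graph and data from the code above (the epoch count of 10,000 is just an example):

for lr in (0.01, 0.001):
    optimizer = tf.train.GradientDescentOptimizer(lr).minimize(cost)
    with tf.Session() as sess:
        # re-initialize b0 and b1 so each run starts from zero
        sess.run(tf.global_variables_initializer())
        for epoch in range(10000):
            sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        final_cost = sess.run(cost, feed_dict={X: x_data, Y: y_data})
        print('lr = {0}: final cost = {1}'.format(lr, final_cost))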
