NNFL (BITS F312) Assignment 3
2016B5A30572H
K Pranath Reddy

Question 1

The dataset in 'data_for_cnn.mat' consists of 1000 ECG signals, and each row corresponds to one ECG signal. The class label for each ECG signal is given in the 'class_label.mat' file. Implement the 1D convolutional neural network with BPCNN as the learning algorithm for the evaluation of optimal weight matrices in FC layers and optimal kernels or filters in the convolution layer. The network consists of one convolutional layer, one pooling layer and two fully connected (FC) layers. The network flow is given by

Input - Convolution Layer - Pooling Layer - Convolution Layer - Pooling Layer - FC1 - FC2 - FC3 - Output

Consider the square loss function as the cost function in the output layer. You can select the number of hidden neurons by your own choice. In the pooling layer, you can use average pooling with a down-sampling factor of 2. (For implementation of the BPCNN algorithm, please refer to the class notes or slides.) (For MATLAB, you can use the inbuilt functions from the deep learning toolbox.) (You can use Python with Keras, and TensorFlow at the backend.)

Solution :

Code:

### 1D CNN - Binary Classification ###
# Author: Pranath Reddy
# 2016B5A30572H

from mat4py import loadmat
import numpy as np
import pandas as pd
from keras import Sequential
from keras import optimizers
from keras.layers import Dense, Conv1D, AveragePooling1D, Flatten
from sklearn.model_selection import train_test_split
from sklearn import preprocessing

# Load the ECG signals and bring them into a 2D array
x = loadmat('data_for_cnn.mat')
x = pd.DataFrame(x)
x = np.asarray(x)
x_temp = []
for i in range(len(x)):
    x_temp.append(x[i][0])
x_temp = np.asarray(x_temp)
x = x_temp
x = preprocessing.normalize(x)
x = x.reshape(x.shape[0], x.shape[1], 1)

# Load the class labels
y = loadmat('class_label.mat')
y = pd.DataFrame(y)
y = np.asarray(y)
y_temp = []
for i in range(len(y)):
    y_temp.append(y[i][0][0])
y_temp = np.asarray(y_temp)
y = y_temp

# Hold-out split: 700 training and 300 test signals
x_tr, x_ts, y_tr, y_ts = train_test_split(x, y, test_size=0.3)
x_tr = x_tr.reshape(700, 1000, 1)
x_ts = x_ts.reshape(300, 1000, 1)

# Hyperparameters
learning_rate = 0.02
epochs = 1000

# Input-Convolution Layer-Pooling Layer-Convolution Layer-Pooling Layer-FC1-FC2-FC3-Output
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=5, strides=2, input_shape=(1000, 1), padding='valid', activation='relu'))
model.add(AveragePooling1D(pool_size=2, strides=2, padding='same'))
model.add(Conv1D(filters=32, kernel_size=5, strides=2, padding='valid', activation='relu'))
model.add(AveragePooling1D(pool_size=2, strides=2, padding='same'))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(80, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.summary()
model.fit(x_tr, y_tr, epochs=epochs)

# Testing the model
yp = model.predict_classes(x_ts)
y_temp2 = []
for i in range(len(yp)):
    y_temp2.append(yp[i][0])
y_temp2 = np.asarray(y_temp2)
yp = y_temp2

# Confusion matrix and performance metrics
y_actual = pd.Series(y_ts, name='Actual')
y_pred = pd.Series(yp, name='Predicted')
confmat = pd.crosstab(y_actual, y_pred)
print(confmat)
confmat = np.asarray(confmat)
tp = confmat[1][1]
tn = confmat[0][0]
fp = confmat[0][1]
fn = confmat[1][0]
Acc = float(tp + tn) / float(tp + tn + fp + fn)
SE = float(tp) / float(tp + fn)
SP = float(tn) / float(tn + fp)
print('Accuracy : ' + str(Acc))
print('Sensitivity : ' + str(SE))
print('Specificity : ' + str(SP))
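As a cross-check on the hand-derived metrics reported below, the same quantities can be read off sklearn's metric utilities. This is a minimal sketch, not part of the submitted solution; it assumes the y_ts and yp arrays produced by the script above.

# Sanity-check sketch: recompute accuracy, sensitivity, and specificity
# with sklearn. Assumes y_ts (true labels) and yp (predictions) from above.
from sklearn.metrics import confusion_matrix, accuracy_score

# For binary labels {0, 1}, ravel() unpacks the 2x2 matrix as tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_ts, yp).ravel()
print('Accuracy    :', accuracy_score(y_ts, yp))
print('Sensitivity :', tp / (tp + fn))  # true-positive rate
print('Specificity :', tn / (tn + fp))  # true-negative rate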
‘sensitivity : "+ str(se)) tepecificity : ' + etx(SP)) Q1 Results : Predicted EXtascan 0 cc Ley il 12 129 Accuracy 0.9133333333333333 sensitivity : 0.9148936170212766 specificity : 0.9119496855345912 Question 2 Implement the 1D convolutional autoencoder for the dataset given in data_for_cnn.mat file. The architecture is given as (input-convolution layer-pooling layer-FC-upsampling layer-ranspose convolution layer) (For MATLAB, you can use the inbuilt functions from deep learning toolbox) (You can use Python with Keras, and tensorflow at the backend.) Solution : Code: +44 1p Convolutional Autoencoder **# Author + Pranath Redaiy 201685A30572H from mat4py import leadnat import numpy 28 np import pandas as pd fzom from from from keras keras x keras Sequential import Model Amport Input, Dense, ConviD,MaxPoolingLD, UpSampLingLD, Reshape, Flatten import matpletlib.pyplot as plt from sklearn import preprocessing x = loadmat ("data for_enn.mat') x ~ pd.DataFrame (x) x = np-asarray (x) xtemp - for i in range(1en(x)) x_temp-append (x 141101} x_temp ~ np.asarray(x_temp) x = x temp x = proprocessing-normalize(x) x = x. reshape (x. shape[0],x-shape [1], 1) ‘Hinput-convolution layer-pooling layer-FC-upsampling layer-transpose convolution layer input = encoder encoder encoder encoded decoder decoder decoded Input (shape=(1000,1)) = ConviD(32, 5, activation= trelut , padding= 'same') (input) = MaxPooling1D(4, padding- ‘same') (encoder) = Flatten() (encoder) = Dense (500, activation='softnax') (encoder) = vpsamplingtD (2) (encoded) Reshape (2000, 1)) (decoder) ConviD(1, 5, activation='sigmoid’, padding='same') (decoder! autoencoder = Model (input, decoded) autoencoder. summary () opt = optinizers.Adam(1r=0.01) avtoencoder.compile (optimizer opt, loss+"mse") fautoencoder compile (optimizers opt, loss="binary_crossentropy") history = autoencader.fit(x, x, epocha=2000, bateh_size=512, shufflestrue) # Plot training lose plt.plot (history.history(*loss']) plt.title('Model loss") plt.ylabel (*20ss") plt.xlabel ("Epoch") plt.show() plt.savefig("1oss.png") 2 Results : With Binary cross entropy, without normalizing data Model loss tas 8 20 so 70 1000 1250 1500 1750 2000 Epoch With Binary cross entropy and normalizing data Model loss 04 Loss, 02 00 20 so 70 1000 1250 1500 1750 2000 Epoch With Mean Squared Error, without normalizing data Model loss 2700 2690 22680 4 22670 26650 oo 500 750 1000 1250 1500 1750 2000 Epoch With Mean Squared Error and normalizing data Model loss 025 020 5 § 10 005, 000 20 so 70 1000 1250 1500 1750 2000 Epoch Question 3 Implement the neuro-fuzzy inference system (NFIS) classifier for the classification task. You can use datad xlsx file. The training and test instances can be selected using hold out cross- validation. Solution : Code wee aners + Molticlass Classification author Pranath Reddy 2016850305721 import keras import tensorflow as tf import numpy 28. 
Question 3

Implement the neuro-fuzzy inference system (NFIS) classifier for the classification task. You can use the data4.xlsx file. The training and test instances can be selected using hold-out cross-validation.

Solution :

Code:

### ANFIS - Multiclass Classification ###
# Author: Pranath Reddy
# 2016B5A30572H

import keras
import tensorflow as tf
import numpy as np
import pandas as pd
import time
from sklearn.model_selection import train_test_split

# Function to set the class labels to predictions
def sety(y):
    for i in range(len(y)):
        if 0.0 < y[i] <= 1.5:
            y[i] = 1.0
        if 1.5 < y[i] <= 2.5:
            y[i] = 2.0
        if y[i] > 2.5:
            y[i] = 3.0
    return y

# Function to normalize the data
def norm(x):
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Adaptive neuro-fuzzy inference system implementation
class ANFIS:

    def __init__(self, n_inputs, n_rules, learning_rate=1e-2):
        self.n = n_inputs
        self.m = n_rules
        self.inputs = tf.placeholder(tf.float32, shape=(None, n_inputs))  # Input
        self.targets = tf.placeholder(tf.float32, shape=None)  # Desired output
        mu = tf.get_variable("mu", [n_rules * n_inputs],
                             initializer=tf.random_normal_initializer(0, 1))  # Means of Gaussian MFs
        sigma = tf.get_variable("sigma", [n_rules * n_inputs],
                                initializer=tf.random_normal_initializer(0, 1))  # Standard deviations of Gaussian MFs
        y = tf.get_variable("y", [1, n_rules],
                            initializer=tf.random_normal_initializer(0, 1))  # Consequent centers
        self.params = tf.trainable_variables()
        self.rul = tf.reduce_prod(
            tf.reshape(tf.exp(-0.5 * tf.square(tf.subtract(tf.tile(self.inputs, (1, n_rules)), mu)) / tf.square(sigma)),
                       (-1, n_rules, n_inputs)), axis=2)  # Rule activations
        # Fuzzy base expansion function
        num = tf.reduce_sum(tf.multiply(self.rul, y), axis=1)
        den = tf.clip_by_value(tf.reduce_sum(self.rul, axis=1), 1e-12, 1e12)
        self.out = tf.divide(num, den)
        self.loss = tf.losses.huber_loss(self.targets, self.out)  # Loss function computation
        self.optimize = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(self.loss)  # Optimization step
        self.init_variables = tf.global_variables_initializer()  # Variable initializer

    # Function to get predictions from test samples
    def infer(self, sess, x, targets=None):
        if targets is None:
            return sess.run(self.out, feed_dict={self.inputs: x})
        else:
            return sess.run([self.out, self.loss], feed_dict={self.inputs: x, self.targets: targets})

    # Function to initiate and train the graph
    def train(self, sess, x, targets):
        yp, l, _ = sess.run([self.out, self.loss, self.optimize],
                            feed_dict={self.inputs: x, self.targets: targets})
        return l, yp

# Importing the data
data = pd.read_excel('data4.xlsx')
data = pd.DataFrame(data)
data = np.asarray(data)
y = data[:, -1]
x = data[:, :-1]
x = norm(x)

# Split train and test set (hold-out cross-validation)
x_tr, x_ts, y_tr, y_ts = train_test_split(x, y, test_size=0.3)
m = x_tr.shape[0]
n = x_tr.shape[1]

# Hyperparameters
m = 16  # number of rules
alpha = 0.01  # learning rate
epochs = 2000

fis = ANFIS(n_inputs=7, n_rules=m, learning_rate=alpha)

# Initialize session to make computations on the TensorFlow graph
with tf.Session() as sess:
    # Initialize model parameters
    sess.run(fis.init_variables)
    trn_costs = []
    val_costs = []
    time_start = time.time()
    for epoch in range(epochs):
        # Train the model
        trn_loss, train_pred = fis.train(sess, x_tr, y_tr)
        # Evaluate on test set
        test_pred, val_loss = fis.infer(sess, x_ts, y_ts)
        # Print the training cost
        if epoch % 10 == 0:
            print("Train cost after epoch %i: %f" % (epoch, trn_loss))
        if epoch == epochs - 1:
            time_end = time.time()
            yp = test_pred  # Get the predictions
            yp = sety(yp)
            # Confusion matrix and accuracy
            y_actual = pd.Series(y_ts, name='Actual')
            y_pred = pd.Series(yp, name='Predicted')
            confmat = pd.crosstab(y_actual, y_pred)
            print(confmat)
            confmat = np.asarray(confmat)
            Accuracy = float(confmat[0][0] + confmat[1][1] + confmat[2][2]) / float(yp.shape[0])
            print('Accuracy : ' + str(Accuracy))

Q3 Results :

[Confusion matrix over classes 1.0, 2.0, and 3.0 (Actual vs. Predicted); the visible off-diagonal entries are 0, and the remaining counts are not legible in the source.]

Accuracy : 0.9555555555555556
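For reference, the forward pass that the ANFIS class builds as a TensorFlow graph can be written in a few lines of NumPy. This sketch mirrors the Gaussian membership functions and the fuzzy base expansion (firing-strength-weighted average of consequent centers) above; mu, sigma, and y stand for trained parameter values, which should be retrievable via sess.run(fis.params).

# Sketch of the ANFIS forward pass in NumPy, mirroring the TF graph above.
# x: (batch, n_inputs); mu, sigma: (n_rules * n_inputs,); y: (1, n_rules).
import numpy as np

def anfis_forward(x, mu, sigma, y, n_rules):
    batch, n_inputs = x.shape
    # Gaussian membership of every input feature under every rule's MF
    memb = np.exp(-0.5 * np.square(np.tile(x, (1, n_rules)) - mu) / np.square(sigma))
    # Rule firing strength = product of memberships across the input features
    rul = memb.reshape(batch, n_rules, n_inputs).prod(axis=2)
    # Fuzzy base expansion: weighted average of consequent centers
    num = (rul * y).sum(axis=1)
    den = np.clip(rul.sum(axis=1), 1e-12, 1e12)
    return num / den

Fed the trained parameters, this should reproduce the output of fis.infer on the same inputs, which makes it a convenient way to sanity-check the graph or to deploy the trained model without a TensorFlow session.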
