
CEN 556: Intelligent systems – Homework #1

Part (1)
Exercise (1)
1. A. The neuron is linear, so the activation is the identity: $\varphi(v) = v$, hence $y = v$.

$$net = x_1 w_1 + x_2 w_2 + x_3 w_3 + x_4 w_4$$

$$net = (10)(0.8) + (-20)(0.2) + (4)(-1) + (-2)(-0.9) = 8 - 4 - 4 + 1.8 = 1.8$$

$$y = 1.8$$

B. The neuron is represented by the McCulloch-Pitts model, so the activation is a hard limiter:

$$\varphi(v) = \begin{cases} 1, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0 \end{cases}$$

$$net = x_1 w_1 + x_2 w_2 + x_3 w_3 + x_4 w_4 = (10)(0.8) + (-20)(0.2) + (4)(-1) + (-2)(-0.9) = 1.8$$

$$net = 1.8 \ge 0 \;\Rightarrow\; y = 1$$
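
As a quick numerical check, a minimal MATLAB sketch (the inputs and weights are those given in the exercise):

% Exercise (1): numerical check
x = [10; -20; 4; -2];       % inputs from the exercise
w = [0.8; 0.2; -1; -0.9];   % weights from the exercise
net = w'*x;                 % net = 1.8
y_linear = net              % A. linear neuron: y = v = 1.8
y_mp = double(net >= 0)     % B. McCulloch-Pitts neuron: y = 1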

2. 10-4-3-1 neuron network: 10 source (input) nodes, a first hidden layer of 4 neurons, a second hidden layer of 3 neurons, and a single output neuron.

[Figure: layered diagram of the 10-4-3-1 feedforward network.]
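
A minimal sketch of this architecture in MATLAB (assuming the same Neural Network Toolbox used elsewhere in this homework; feedforwardnet takes only the hidden-layer sizes, and the 10 inputs / 1 output are fixed by configure):

% 10-4-3-1 architecture: two hidden layers of 4 and 3 neurons, 1 output
net = feedforwardnet([4 3]);
net = configure(net, zeros(10,1), 0);   % 10-element input, scalar output
view(net)                               % displays the 10-4-3-1 structure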
Exercise (2)
1. A. AND
x2  x1  d
0   0   0
0   1   0
1   0   0
1   1   1

$$v = x_1 w_1 + x_2 w_2 + b < 0 \quad \text{for } y = 0$$
$$v = x_1 w_1 + x_2 w_2 + b \ge 0 \quad \text{for } y = 1$$

x2  x1  x1w1 + x2w2   y   condition
0   0   0             0   b < 0
0   1   w1            0   w1 + b < 0
1   0   w2            0   w2 + b < 0
1   1   w1 + w2       1   w1 + w2 + b ≥ 0

From the table, b must be negative, w1 + b and w2 + b must also be negative, and w1 + w2 + b must be non-negative. One choice that satisfies all four constraints:

$$w_1 = w_2 = 1, \quad b = -1.5$$
$$v = x_1 w_1 + x_2 w_2 + b$$

$$\varphi(v) = \begin{cases} 1, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0 \end{cases}$$

x2  x1  d    v     y
0   0   0   -1.5   0
0   1   0   -0.5   0
1   0   0   -0.5   0
1   1   1    0.5   1
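
A minimal sketch to verify these hand-picked weights (assuming the hard-limit activation φ above):

% Verify the hand-derived AND perceptron
X = [0 0 1 1; 0 1 0 1];     % columns are the input patterns [x1; x2]
w = [1; 1]; b = -1.5;
v = w'*X + b;               % v = [-1.5 -0.5 -0.5 0.5]
y = double(v >= 0)          % y = [0 0 0 1], the AND truth table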
Exercise (2)
1. B. OR
x2  x1  d
0   0   0
0   1   1
1   0   1
1   1   1

$$v = x_1 w_1 + x_2 w_2 + b < 0 \quad \text{for } y = 0$$
$$v = x_1 w_1 + x_2 w_2 + b \ge 0 \quad \text{for } y = 1$$

x2  x1  x1w1 + x2w2   y   condition
0   0   0             0   b < 0
0   1   w1            1   w1 + b ≥ 0
1   0   w2            1   w2 + b ≥ 0
1   1   w1 + w2       1   w1 + w2 + b ≥ 0

From the table, b must be negative while w1 + b, w2 + b, and w1 + w2 + b must be non-negative. One choice:

$$w_1 = w_2 = 1, \quad b = -0.5$$

$$v = x_1 w_1 + x_2 w_2 + b$$

$$\varphi(v) = \begin{cases} 1, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0 \end{cases}$$

x2  x1  d    v     y
0   0   0   -0.5   0
0   1   1    0.5   1
1   0   1    0.5   1
1   1   1    1.5   1
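
The same kind of check as for AND, a minimal sketch with the OR weights:

% Verify the hand-derived OR perceptron
X = [0 0 1 1; 0 1 0 1];
w = [1; 1]; b = -0.5;
y = double(w'*X + b >= 0)   % y = [0 1 1 1], the OR truth table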
Exercise (2)
2. A. AND
% Exercise (2): 2. AND

P = [0 0 1 1; 0 1 0 1];      % input patterns (each column is one pattern)
T = [0 0 0 1];               % AND targets
net = newp([0 1; 0 1],1);    % perceptron: two inputs in [0,1], one neuron
Y = net(P)                   % output before training
net.trainParam.epochs = 20;
net = train(net,P,T);
pause
Y = sim(net,P)               % the output after training
W = net.IW{1,1}              % the weights
B = net.b{1,1}               % the bias
pause
figure
plotpv(P,T)                  % plot the input/target pairs
plotpc(W,B)                  % plot the decision boundary
Exercise (2)
2. B. OR
% Exercise (2): 2. OR

P = [0 0 1 1; 0 1 0 1];      % input patterns (each column is one pattern)
T = [0 1 1 1];               % OR targets
net = newp([0 1; 0 1],1);    % perceptron: two inputs in [0,1], one neuron
Y = net(P)                   % output before training
net.trainParam.epochs = 20;
net = train(net,P,T);
pause
Y = sim(net,P)               % the output after training
W = net.IW{1,1}              % the weights
B = net.b{1,1}               % the bias
pause
figure
plotpv(P,T)                  % plot the input/target pairs
plotpc(W,B)                  % plot the decision boundary
Exercise (3)
1. A.
$$E(w) = \frac{1}{2}\sigma^2 - r_{xd}^{T} w + \frac{1}{2} w^{T} R_x w$$

$$\frac{dE(w)}{dw} = -r_{xd} + \frac{1}{2}\left(R_x + R_x^{T}\right) w$$

With $r_{xd} = \begin{bmatrix} 0.8182 \\ 0.354 \end{bmatrix}$ and $R_x = \begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix}$:

$$\frac{dE(w)}{dw} = -\begin{bmatrix} 0.8182 \\ 0.354 \end{bmatrix} + \frac{1}{2}\left(\begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix}\right) w = -\begin{bmatrix} 0.8182 \\ 0.354 \end{bmatrix} + \begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix} w$$

To find the minimum, set $\frac{dE(w)}{dw} = 0$:

$$\begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix} w = \begin{bmatrix} 0.8182 \\ 0.354 \end{bmatrix} \;\Rightarrow\; w = \begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 0.8182 \\ 0.354 \end{bmatrix} = \begin{bmatrix} 1.5990 \\ -0.9543 \end{bmatrix}$$

The second derivative (Hessian) confirms this is a minimum:

$$\frac{d^2 E(w)}{dw^2} = \frac{1}{2}\left(R_x + R_x^{T}\right) = \begin{bmatrix} 1 & 0.8182 \\ 0.8182 & 1 \end{bmatrix}$$

which is positive definite (its eigenvalues $1 \pm 0.8182$ are both positive), so

$$E(w) \text{ is minimum when } w = \begin{bmatrix} 1.5990 \\ -0.9543 \end{bmatrix}$$
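
A quick numerical check of this stationary point (a minimal sketch using the same $R_x$ and $r_{xd}$):

% Verify the analytic minimum of E(w)
Rx  = [1 0.8182; 0.8182 1];
rxd = [0.8182; 0.354];
w_opt = Rx \ rxd             % w_opt = [1.5990; -0.9543]
grad = Rx*w_opt - rxd        % gradient is (numerically) zero at w_opt
eig(Rx)                      % both eigenvalues positive -> a minimum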


Exercise (3)
1. B. Steepest descent
close all
clear all
clc
rxd = [0.8182; 0.354];                 % cross-correlation vector
Rx = [1, 0.8182; 0.8182, 1];           % input correlation matrix
w_opt = inv(Rx)*rxd;                   % analytic optimum from part A
w = zeros(2,100);
for n = [0.3, 1.0]                     % learning rates (eta)
    cost = zeros(1,100);
    for nn = 1:100                     % number of iterations
        if nn == 1
            wn = [0; 0];               % initial weight vector w(0)
        else
            wn1 = w(:,nn-1);           % get w(nn-1)
            gn = Rx*wn1 - rxd;         % gradient of E at w(nn-1)
            wn = wn1 - n*gn;           % steepest-descent update
            w(:,nn) = wn;              % save the weights for plotting
        end
        cost(nn) = wn'*Rx*wn/2 - rxd'*wn;  % cost E(w) up to the constant sigma^2/2
    end
    figure
    plot(w(1,:), w(2,:), '-', w_opt(1), w_opt(2), 'x');
    grid on;
    title(sprintf('Steepest descent with eta = %g', n));
    legend('iterate values', 'optimal value');
    xlabel('w_1(n)'); ylabel('w_2(n)');
end
fprintf('w_opt = %12.4e %12.4e\n', w_opt(1), w_opt(2));

- η = 0.3: [Figure: steepest-descent weight trajectory for η = 0.3]
- η = 1: [Figure: steepest-descent weight trajectory for η = 1]
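
Both step sizes satisfy the steepest-descent stability condition for this quadratic cost, $0 < \eta < 2/\lambda_{max}(R_x)$. A one-line check (an illustrative sketch, not part of the script above):

% Stability bound for steepest descent on the quadratic cost
Rx = [1 0.8182; 0.8182 1];
eta_max = 2/max(eig(Rx))   % eta_max = 1.1, so both eta = 0.3 and eta = 1 converge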
Exercise (4)
1. XOR Problem

x2  x1  d
0   0   0
0   1   1
1   0   1
1   1   0

$$\varphi(v) = \begin{cases} 1, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0 \end{cases}$$

The network uses one hidden neuron that computes AND of the two inputs (weights 1 and 1, bias $-1.5$) and an output neuron fed by $x_1$, $x_2$ (weights 1 and 1) and the hidden output $y_1$ (weight $-2$), with bias $-0.5$.

For input (0,0):

hidden: $net_1 = (0)(1) + (0)(1) - 1.5 = -1.5 \Rightarrow y_1 = 0$

output: $net = (0)(1) + (0)(-2) + (0)(1) - 0.5 = -0.5 \Rightarrow y = 0$

For inputs (0,1) and (1,0) (identical by symmetry):

hidden: $net_1 = (0)(1) + (1)(1) - 1.5 = -0.5 \Rightarrow y_1 = 0$

output: $net = (0)(1) + (0)(-2) + (1)(1) - 0.5 = 0.5 \Rightarrow y = 1$

For input (1,1):

hidden: $net_1 = (1)(1) + (1)(1) - 1.5 = 0.5 \Rightarrow y_1 = 1$

output: $net = (1)(1) + (1)(-2) + (1)(1) - 0.5 = -0.5 \Rightarrow y = 0$

The outputs are 0, 1, 1, 0, so the network solves the XOR problem.
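
A minimal sketch that evaluates this two-layer network on all four inputs (weights and biases taken from the computation above):

% Hand-crafted XOR network: one hidden AND neuron plus an output neuron
X = [0 0 1 1; 0 1 0 1];                      % columns: [x1; x2]
y1 = double([1 1]*X - 1.5 >= 0);             % hidden neuron: AND(x1,x2)
y  = double([1 1]*X + (-2)*y1 - 0.5 >= 0)    % output: y = [0 1 1 0] = XOR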


Exercise (4)
2. XOR Problem
% number of samples of each class
K = 50;
% Input
q = .6; % offset of classes
A = [rand(1,K)-q; rand(1,K)+q];
B = [rand(1,K)+q; rand(1,K)+q];
C = [rand(1,K)+q; rand(1,K)-q];
D = [rand(1,K)-q; rand(1,K)-q];
P = [A B C D];
% plot Inputs
figure(1)
plot(A(1,:),A(2,:),'k+')
hold on
grid on
plot(B(1,:),B(2,:),'bd')
plot(C(1,:),C(2,:),'k+')
plot(D(1,:),D(2,:),'bd')
% encode clusters A & C as one class, B & D as another class
a = -1; % A | B
c = -1; % -------
b = 1;  % D | C
d = 1;
% Target
T = [repmat(a,1,length(A)) repmat(b,1,length(B)) ...
     repmat(c,1,length(C)) repmat(d,1,length(D))];
% create a neural network
net = feedforwardnet([5 3]);
% train net
net.divideParam.trainRatio = 1; % training set [%]
net.divideParam.valRatio = 0; % validation set [%]
net.divideParam.testRatio = 0; % test set [%]
[net,tr,Y,E] = train(net,P,T);
span = -1:.005:2;
[P1,P2] = meshgrid(span,span);
pp = [P1(:) P2(:)]';
aa = net(pp);
figure(1)
mesh(P1,P2,reshape(aa,length(span),length(span))-5);
colormap cool
Part (2)
Question (1): Data Classification
a) Check if these 4 classes are linearly separable

The 4 classes are not linearly separable: the points of Group 3 lie between Groups 1, 2, and 4, so no single straight line can isolate each class from the rest.
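
One way to inspect this is to plot the ten points with their two-bit class codes; a minimal sketch using plotpv from the same toolbox (P and T copied from the script below):

% Plot the four groups to inspect separability
P = [0.1 0.7 0.8 0.8 1 0.3 0 -0.3 -0.5 -1.5;
     1.2 1.8 1.6 0.6 0.8 0.5 0.2 0.8 -1.5 -1.3];
T = [0 0 0 0 0 1 1 1 1 1;
     0 0 0 1 1 0 0 0 1 1];
plotpv(P,T)   % four marker types, one per class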


b) Perceptron rule to train the network (hardlimit activation function).
Plot points and decision boundaries.
- Rosenblatt’s perceptron

clc;
clear all;
close all;
% Input and output
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]';P(:,3)=[0.8 1.6]'; % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1 0.8]'; % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0 0.2]';P(:,8)=[-0.3 0.8]'; % Group 3
P(:,9)=[-.5 -1.5]'; P(:,10)=[-1.5 -1.3]'; % Group 4
T(:,1)=[0 0]'; T(:,2)=[0 0]';T(:,3)=[0 0]';
T(:,4)=[0 1]'; T(:,5)=[0 1]';
T(:,6)=[1 0]'; T(:,7)=[1 0]';T(:,8)=[1 0]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';
% Creation of the net
net=perceptron;
% Training
net.trainParam.epochs = 20;
net = train(net,P,T);
% Check the output
Y=sim(net,P)
% To see the weights
W=net.IW{1,1}
% The bias
B=net.b{1}
% to plot data and boundaries on the same plot
figure;
plotpv(P,T)
plotpc(W,B)
- MLP with one hidden layer
clc;
clear all;
close all;
% Input and output
P(:,1)=[0.1 1.2]'; P(:,2)=[0.7 1.8]';P(:,3)=[0.8 1.6]'; % Group 1
P(:,4)=[0.8 0.6]'; P(:,5)=[1.0 0.8]'; % Group 2
P(:,6)=[0.3 0.5]'; P(:,7)=[0.0 0.2]';P(:,8)=[-0.3 0.8]'; % Group 3
P(:,9)=[-0.5 -1.5]'; P(:,10)=[-1.5 -1.3]'; % Group 4

T(:,1)=[-1 -1]'; T(:,2)=[-1 -1]';T(:,3)=[-1 -1]';


T(:,4)=[-1 1]'; T(:,5)=[-1 1]';
T(:,6)=[1 -1]'; T(:,7)=[1 -1]';T(:,8)=[1 -1]';
T(:,9)=[1 1]'; T(:,10)=[1 1]';
% Creation of the net
net=newff(minmax(P),[10 2],{'radbas' 'tansig'});
net.inputWeights{1}.learnFcn % display the input-weight learning function
% Training
net.trainParam.goal = 0;
net.trainParam.epochs = 400;
net = train(net,P,T);
% Check the output
Y=sim(net,P);
% To see the weights
W=net.IW{1,1};
% The bias
B=net.b{1};
Part (2)
Question (2): Pattern Recognition
a) Design a perceptron network that will distinguish input pattern.
clc;
clear all;
close all;

% Patterns
P(:,1)= [1 -1 1 1]';
P(:,2)= [1 1 -1 1]';
P(:,3)= [-1 -1 -1 1]';
% Targets
T(:,1)=[-1 -1]';
T(:,2)=[1 -1]';
T(:,3)=[1 -1]';
% Build the neural net (a feedforward network is used here)
net = feedforwardnet(10);
net = train(net,P,T);
y = net(P);
% Testing Pattern
pt=[1 -1 1 -1]';
Y=sim(net,pt)

b) Test the design by observing how the test pattern pt = [1 -1 1 -1]' is recognized: the call Y = sim(net,pt) at the end of the script above gives the network's response.
