
Kartik Sharma - 2303760 - MSC(CS)

EXP-5: Alpha Least Mean Squared Learning

This experiment explores the behavior of the alpha-LMS algorithm, an adaptive algorithm used to update the weights of a linear neuron. The update equation for alpha-LMS is:
W(k+1) = W(k) + eta * e(k) * X(k) / ||X(k)||^2
where:
• W(k) is the weight vector at time step k
• eta is the learning rate
• e(k) = d(k) - s(k) is the error (desired minus computed output) at time step k
• X(k) is the input vector at time step k
Alpha-LMS is a stochastic algorithm: because the training patterns are presented in random order, it does not follow the same weight trajectory on every run. For a suitably small learning rate, however, it converges to a solution that minimizes the mean squared error.
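As a minimal sketch of a single update step (written here in Python with NumPy purely for illustration, not MATLAB), the input can be augmented to X(k) = [1, x(k)], so that ||X(k)||^2 = 1 + x(k)^2 exactly as in the MATLAB listings below:

```python
import numpy as np

def alpha_lms_step(w, x_k, d_k, eta=0.01):
    """One alpha-LMS update for a linear neuron with a bias weight.

    w   : weight vector [w0, w1] (intercept, slope)
    x_k : scalar input at step k
    d_k : desired output at step k
    eta : learning rate
    """
    X = np.array([1.0, x_k])          # augmented input [1, x(k)]
    e = d_k - w @ X                   # error e(k) = d(k) - s(k)
    return w + eta * e * X / (X @ X)  # normalized (alpha-LMS) update

# One step from zero weights toward a point on y = 0.5x + 0.333
w = alpha_lms_step(np.zeros(2), x_k=1.0, d_k=0.833)
```

In the experiment's loops this step is applied to every data point, in a freshly randomized order, within each epoch.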

CODE
max_points = 200; % Assume 200 data points
x = linspace(0,2.5,max_points);% Generate the x linspace
y = .5*x + 0.333; % Define a straight line
scatter = rand(1,max_points); % Generate scatter vector
ep = .1; % Compress scatter to 0.1
d = ((2*scatter-1)*ep) + y; % Set up desired values
eta = .01; % Set learning rate
w = 3*(2*rand(1,2) - 1); % Randomize weights
for loop = 1:50 % Train for 50 epochs
randindex = randperm(max_points); % Randomize order
for j = 1: max_points % For each data point
i = randindex(j); % Get the index
s(i) = w(1) + w(2)*x(i); % Compute signal value
err(i) = d(i) - s(i); % Compute pattern error
w(1) = w(1) + eta*err(i)/(1+x(i)^2);% Change the weights
w(2) = w(2) + eta*err(i)*x(i)/(1+x(i)^2);
end
end
s = w(1) + w(2)*x; % Finally compute the function learnt
plot(x,s,'r'); % Plot the result
■ Modify the data scatter and the number of points, and observe the final solution each time.
For 400 points and data scatter 0.1:
max_points = 400; % Assume 400 data points
x = linspace(0,2.5,max_points);% Generate the x linspace
y = .5*x + 0.333; % Define a straight line
scatter = rand(1,max_points); % Generate scatter vector
ep = .1; % Compress scatter to 0.1
d = ((2*scatter-1)*ep) + y; % Set up desired values
plot(x, d, 'b.'); % Plot the data scatter
hold on; % Keep the scatter for overlaying the fit
eta = .01; % Set learning rate
w = 3*(2*rand(1,2) - 1); % Randomize weights
for loop = 1:50 % Train for 50 epochs
randindex = randperm(max_points); % Randomize order
for j = 1: max_points % For each data point
i = randindex(j); % Get the index
s(i) = w(1) + w(2)*x(i); % Compute signal value
err(i) = d(i) - s(i); % Compute pattern error
w(1) = w(1) + eta*err(i)/(1+x(i)^2);% Change the weights
w(2) = w(2) + eta*err(i)*x(i)/(1+x(i)^2);
end
end
s = w(1) + w(2)*x; % Finally compute the function learnt
plot(x,s,'r'); % Plot the result

For 100 points, with data scatter values of 1 and 0.01 (the listing below shows ep = 0.1; change ep accordingly):


max_points = 100; % Assume 100 data points
x = linspace(0,2.5,max_points);% Generate the x linspace
y = .5*x + 0.333; % Define a straight line
scatter = rand(1,max_points); % Generate scatter vector
ep = .1; % Compress scatter to 0.1
d = ((2*scatter-1)*ep) + y; % Set up desired values
plot(x, d, 'b.'); % Plot the data scatter
hold on; % Keep the scatter for overlaying the fit
eta = .01; % Set learning rate
w = 3*(2*rand(1,2) - 1); % Randomize weights
for loop = 1:50 % Train for 50 epochs
randindex = randperm(max_points); % Randomize order
for j = 1: max_points % For each data point
i = randindex(j); % Get the index
s(i) = w(1) + w(2)*x(i); % Compute signal value
err(i) = d(i) - s(i); % Compute pattern error
w(1) = w(1) + eta*err(i)/(1+x(i)^2);% Change the weights
w(2) = w(2) + eta*err(i)*x(i)/(1+x(i)^2);
end
end
s = w(1) + w(2)*x; % Finally compute the function learnt
plot(x,s,'r'); % Plot the result

▪ Reduce the number of epochs and check what happens. Try 5, 8, 10

For 5
max_points = 200; % Assume 200 data points
x = linspace(0,2.5,max_points);% Generate the x linspace
y = .5*x + 0.333; % Define a straight line
scatter = rand(1,max_points); % Generate scatter vector
ep = .1; % Compress scatter to 0.1
d = ((2*scatter-1)*ep) + y; % Set up desired values
plot(x, d, 'b.'); % Plot the data scatter
hold on; % Keep the scatter for overlaying the fit
eta = .01; % Set learning rate
w = 3*(2*rand(1,2) - 1); % Randomize weights
for loop = 1:5 % Train for 5 epochs
randindex = randperm(max_points); % Randomize order
for j = 1: max_points % For each data point
i = randindex(j); % Get the index
s(i) = w(1) + w(2)*x(i); % Compute signal value
err(i) = d(i) - s(i); % Compute pattern error
w(1) = w(1) + eta*err(i)/(1+x(i)^2);% Change the weights
w(2) = w(2) + eta*err(i)*x(i)/(1+x(i)^2);
end
end
s = w(1) + w(2)*x; % Finally compute the function learnt
plot(x,s,'r'); % Plot the result

For 8 epochs, change the loop bound to for loop = 1:8; for 10 epochs, to for loop = 1:10. The rest of the code is unchanged.

Now modify the code to take snapshots of the hyperplane in 2-D as learning proceeds:

max_points = 200; % Assume 200 data points


x = linspace(0,2.5,max_points);% Generate the x linspace
y = .5*x + 0.333; % Define a straight line
scatter = rand(1,max_points); % Generate scatter vector
ep = .1; % Compress scatter to 0.1
d = ((2*scatter-1)*ep) + y; % Set up desired values
plot(x, d, 'b.'); % Plot the data scatter
hold on; % Keep the scatter for overlaying the fit
eta = .01; % Set learning rate
w = 3*(2*rand(1,2) - 1); % Randomize weights
epoch = 50;
for loop = 1:epoch % Train for 50 epochs
randindex = randperm(max_points); % Randomize order
for j = 1: max_points % For each data point
i = randindex(j); % Get the index
s(i) = w(1) + w(2)*x(i); % Compute signal value
err(i) = d(i) - s(i); % Compute pattern error
w(1) = w(1) + eta*err(i)/(1+x(i)^2);% Change the weights
w(2) = w(2) + eta*err(i)*x(i)/(1+x(i)^2);
end
s = w(1) + w(2)*x; % Compute the fit after this epoch
plot(x,s,'g'); % Plot this epoch's snapshot
drawnow
end
legend('Data Scatter', 'Hyperplane during learning process');
s = w(1) + w(2)*x; % Finally compute the function learnt
plot(x,s,'r','LineWidth',1,'DisplayName','Final Hyperplane'); % Plot the final result

Reduce the number of points to 25, and set ep = 0.001. Note the number of
epochs required to achieve a suitably accurate solution. The accuracy can be
checked by running a MATLAB polyfit command for 1-d fitting, and comparing the
error on the 25 points achieved via the MATLAB fit and the LMS fit that you get.
In this run, the LMS fit reached a suitably accurate solution at around 100 epochs.
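The suggested comparison can be sketched as follows (again in Python/NumPy purely for illustration; the seed and the generous 500-epoch budget are assumptions, and np.polyfit plays the role of MATLAB's polyfit):

```python
import numpy as np

rng = np.random.default_rng(0)                 # fixed seed (an assumption)
max_points, ep, eta = 25, 0.001, 0.01

x = np.linspace(0, 2.5, max_points)
y = 0.5 * x + 0.333                            # underlying straight line
d = (2 * rng.random(max_points) - 1) * ep + y  # desired values

# alpha-LMS training (same update as the MATLAB listings)
w = 3 * (2 * rng.random(2) - 1)                # random initial weights
for epoch in range(500):                       # generous epoch budget
    for i in rng.permutation(max_points):
        e = d[i] - (w[0] + w[1] * x[i])
        w += eta * e * np.array([1.0, x[i]]) / (1 + x[i] ** 2)

# Reference least-squares fit, as polyfit would give in MATLAB
slope, intercept = np.polyfit(x, d, 1)

mse_lms = np.mean((d - (w[0] + w[1] * x)) ** 2)
mse_fit = np.mean((d - (intercept + slope * x)) ** 2)
```

The least-squares line minimizes the mean squared error on these 25 points by construction, so mse_fit is a lower bound; a well-converged LMS fit should come very close to it.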
Observations:
● Changing the data scatter and the number of points alters the final solution,
demonstrating the algorithm's adaptability to different datasets.
● Reducing the number of epochs affects the convergence speed and final
accuracy. Fewer epochs may lead to underfitting.
● With fewer points and a smaller perturbation (ep), more epochs may be needed for accurate convergence. Comparing the LMS fit with MATLAB's polyfit allows the accuracy to be validated.

The alpha-LMS algorithm exhibits dynamic behavior in adapting to different datasets.


The choice of parameters such as the learning rate (eta), the number of epochs, and
data characteristics influences the algorithm's performance. The trade-off between
speed and accuracy becomes apparent when adjusting the number of points and
perturbation. Overall, the experiment provides valuable insights into the behavior and
sensitivity of the LMS algorithm in a one-dimensional linear neuron scenario.
