
Noise is unwanted sound considered unpleasant, loud or disruptive to hearing.

From a physics standpoint, noise is indistinguishable from sound, as both are vibrations through a medium, such as air or water. The difference arises when the brain receives and perceives a sound.

The color of noise refers to the power spectrum of a noise signal (a signal
produced by a stochastic process). Different colors of noise have significantly
different properties: for example, as audio signals they will sound different to
human ears, and as images they will have a visibly different texture. Therefore,
each application typically requires noise of a specific color.

Impulsive noise consists of relatively short-duration “on/off” pulses, caused by switching noise or adverse channel environments in a communication system, dropouts or surface degradation of audio recordings, clicks from computer keyboards, etc.

In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.

In discrete time, white noise is a discrete signal whose samples are regarded as a
sequence of serially uncorrelated random variables with zero mean and finite
variance; a single realization of white noise is a random shock.
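
As a small illustration (a sketch assuming Python with NumPy; the sample count and seed are arbitrary choices, not from the text above), one realization of discrete-time white noise can be drawn and checked for zero mean, finite variance and serially uncorrelated samples:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
w = rng.normal(loc=0.0, scale=1.0, size=n)    # one realization: a sequence of random shocks

print(np.mean(w))                             # close to 0 (zero mean)
print(np.var(w))                              # close to 1 (finite variance)
print(np.mean(w[:-1] * w[1:]) / np.var(w))    # lag-1 autocorrelation close to 0 (uncorrelated)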

Active noise control (ANC), also known as noise cancellation or active noise reduction (ANR), is a method for reducing unwanted sound by the addition of a second sound specifically designed to cancel the first. Sound is a pressure wave, which consists of alternating periods of compression and rarefaction. A noise-cancellation speaker emits a sound wave with the same amplitude but with inverted phase (also known as antiphase) to the original sound. The waves combine to form a new wave, in a process called interference, and effectively cancel each other out, an effect called destructive interference.
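
A minimal numerical sketch of this cancellation (assuming Python with NumPy; the 440 Hz tone and sample rate are illustrative, not part of the text above): adding a wave to its phase-inverted copy leaves essentially nothing.

import numpy as np

fs = 48_000                                   # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)                # 10 ms of time
wave = 0.5 * np.sin(2 * np.pi * 440 * t)      # original pressure wave
anti = -wave                                  # same amplitude, inverted phase (antiphase)

residual = wave + anti                        # the waves combine (interference)
print(np.max(np.abs(residual)))               # 0.0: destructive interference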

PAGE 2

An adaptive filter is a system with a linear filter that has a transfer function
controlled by variable parameters and a means to adjust those parameters according
to an optimization algorithm. Because of the complexity of the optimization
algorithms, almost all adaptive filters are digital filters. Adaptive filters are
required for some applications because some parameters of the desired processing
operation (for instance, the locations of reflective surfaces in a reverberant
space) are not known in advance or are changing. The closed loop adaptive filter
uses feedback in the form of an error signal to refine its transfer function.

Generally speaking, the closed loop adaptive process involves the use of a cost
function, which is a criterion for optimum performance of the filter, to feed an
algorithm, which determines how to modify the filter transfer function to minimize the
cost on the next iteration. The most common cost function is the mean square of the
error signal.
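
As a hedged sketch of this closed loop in Python (NumPy assumed; the two-tap filter, signals and step size below are illustrative placeholders, not the text's own example), the error signal feeds the mean-square-error cost, whose gradient tells the algorithm how to modify the filter on the next iteration:

import numpy as np

def mse(error):
    # the most common cost function: the mean square of the error signal
    return np.mean(error ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                 # input to the adaptive filter
x1 = np.concatenate(([0.0], x[:-1]))      # the input delayed by one sample
desired = 0.6 * x - 0.3 * x1              # output of the unknown system to be mimicked

w = np.zeros(2)                           # adjustable filter coefficients
mu = 0.1                                  # step size of the optimization algorithm
for _ in range(50):                       # closed loop: error feedback refines the filter
    y = w[0] * x + w[1] * x1              # current filter output
    e = desired - y                       # error signal
    grad = np.array([-2 * np.mean(e * x), -2 * np.mean(e * x1)])  # gradient of the MSE cost
    w = w - mu * grad                     # modify the coefficients to reduce the cost next time

print(w, mse(desired - (w[0] * x + w[1] * x1)))   # w approaches [0.6, -0.3], cost near 0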

SLIDE 6

An adaptive algorithm is an algorithm that changes its behavior at the time it is run,[1] based on the information available and on an a priori defined reward mechanism (or criterion). Such information could be the history of recently received data, information on the available computational resources, or other run-time acquired (or a priori known) information related to the environment in which it operates.

Among the most widely used adaptive algorithms is Widrow and Hoff's least mean squares
(LMS), which represents a class of stochastic gradient-descent algorithms used in
adaptive filtering and machine learning. In adaptive filtering the LMS is used to
mimic a desired filter by finding the filter coefficients that relate to producing
the least mean square of the error signal (difference between the desired and the
actual signal).

Here are some qualities that make for a “good” algorithm:

1 Efficient (with respect to a given problem).
2 Space-efficient (does not use more memory than necessary). Much of the time the battle is to find an efficient algorithm that is also space-efficient. Remember that you can reuse memory, but time is not reusable.
3 Stable: does the algorithm produce solutions in a consistent manner? This may mean that the calculations performed are numerically stable, or perhaps that the algorithm follows a similar execution path for all instances, so that the time taken for a given input (of a given size) does not differ greatly.
4 Effective: is the algorithm correct, or is it more of a heuristic that is allowed not to yield optimal/correct answers sometimes?
5 Simple and/or elegant: this makes for an easier-to-understand algorithm and tends to be easier to implement and use. Furthermore, if it is “simple” and uses well-known techniques, it can cut down the time needed to check/utilize/implement a given algorithm.

LMS ALGORITHM

Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a
desired filter by finding the filter coefficients that relate to producing the
least mean square of the error signal (difference between the desired and the
actual signal). It is a stochastic gradient descent method in that the filter is
only adapted based on the error at the current time. The basic idea behind the LMS filter is to approach the optimum filter weights by updating the current weights in a way that converges to that optimum. This is based on the gradient descent algorithm. The algorithm starts by assuming small weights (zero in most cases) and, at each step, updates the weights using the gradient of the mean square error. That is, if the MSE gradient is positive, the error would keep increasing if the same weights were used for further iterations, which means we need to reduce the weights. In the same way, if the gradient is negative, we need to increase the weights.

The weight update equation is

    w[n+1] = w[n] + mu * e[n] * x[n]

where e[n]x[n] is the instantaneous estimate of the (negative) gradient of the mean-square error and mu is a convergence coefficient (the step size).

The positive sign in the update indicates that we move down the slope of the error surface to find the filter weights that minimize the error.

The mean-square error as a function of the filter weights is a quadratic function, which means it has only one extremum; this extremum minimizes the mean-square error and corresponds to the optimal weights. The LMS thus approaches these optimal weights by descending along the mean-square-error versus filter-weights curve.

As the LMS algorithm does not use the exact values of the expectations, the weights never reach the optimal weights in the absolute sense, but convergence in the mean is possible. That is, even though the weights change by small amounts, they fluctuate about the optimal weights. However, if the variance with which the weights change is large, convergence in the mean would be misleading. This problem may occur if the value of the step size is not chosen properly.

If mu is chosen to be large, the amount by which the weights change depends heavily on the gradient estimate, so the weights may change by a large value: a gradient that was negative at one instant may become positive at the next, and at that next instant the weights change in the opposite direction by a large amount. The weights would thus keep oscillating with a large variance about the optimal weights. On the other hand, if mu is chosen to be too small, the time to converge to the optimal weights will be too large.
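
A per-sample LMS sketch in Python (NumPy assumed; the four-tap unknown filter, signal length and mu below are illustrative assumptions, not the text's own example), applying the weight update equation above at every time step:

import numpy as np

rng = np.random.default_rng(2)
N, L, mu = 5000, 4, 0.05                      # samples, filter taps, step size (assumed values)
h_true = np.array([0.8, -0.4, 0.2, 0.1])      # hypothetical filter to be mimicked

x = rng.normal(size=N)                        # input signal
d = np.convolve(x, h_true)[:N]                # desired signal

w = np.zeros(L)                               # start with small (zero) weights
for n in range(L, N):
    x_n = x[n - L + 1:n + 1][::-1]            # most recent L input samples, newest first
    e = d[n] - np.dot(w, x_n)                 # error at the current time only
    w = w + mu * e * x_n                      # w[n+1] = w[n] + mu * e[n] * x[n]

print(w)                                      # hovers about h_true rather than reaching it exactly

Choosing mu here is the trade-off described above: with a large mu (say 0.5) the weights change by large amounts and oscillate about, or even diverge from, the optimum, while with a very small mu (say 0.001) convergence is much slower.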

NLMS ALGORITHM

As we have seen from the LMS algorithm, if mu is chosen to be too small, the time to converge to the optimal weights becomes too large. The NLMS algorithm is structurally the same as the LMS, but the weight change at each step is kept to a minimum by normalizing the update. (I will explain the rest of this from the slides.)
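
As a hedged sketch of the standard NLMS update (Python with NumPy assumed; mu and the regularization term eps are illustrative parameters), the LMS step is divided by the instantaneous input power so the size of the weight change stays controlled regardless of the input level:

import numpy as np

def nlms_update(w, x_n, d_n, mu=0.5, eps=1e-8):
    # structurally the same as LMS, but normalized by the input power x_n . x_n
    e = d_n - np.dot(w, x_n)                           # error at the current time
    w = w + (mu / (np.dot(x_n, x_n) + eps)) * e * x_n  # normalized weight update
    return w, e

w = np.zeros(4)
w, e = nlms_update(w, x_n=np.array([1.0, 0.5, -0.2, 0.3]), d_n=0.7)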
