
Adaptive Noise Cancellation using LabVIEW

T.J.Moir

Institute of Information and Mathematical Sciences, Massey University, Albany Campus, Auckland, New Zealand. t.j.moir@massey.ac.nz

Summary

It is shown how the least-mean-squares (LMS) algorithm can be implemented using the graphical language LabVIEW. The LMS algorithm underpins the vast majority of current adaptive signal processing algorithms, including noise cancellation, beamforming, deconvolution, equalization and so on. It will be shown that LabVIEW is a convenient and powerful method of implementing such algorithms.

1.Introduction

LabVIEW (trademark National Instruments) has been in existence since the mid 1980s [1] and is normally associated with virtual instrumentation. Based on the graphical language 'g', it is a block-diagram approach to programming. LabVIEW is a full programming language which is compiled when the 'run' button is pressed. It is perhaps a little harder to learn than approaches such as MATLAB, but as will be shown the effort is more than worth the end results. There is still a wealth of information available on how to use LabVIEW to control a whole range of instrumentation, but far less on the basics of signal processing. Perhaps the main exception is the excellent LabVIEW Signal Processing reference [2]. The basics of sampling, signal generation, filters, matrix manipulation and FFTs are covered therein with numerous activities and real-world applications.

The author was unable to source any information on using LabVIEW for adaptive signal processing using the least-mean-squares (LMS) algorithm. As the LMS algorithm is the most fundamental of all algorithms used in this area, it was decided to write a 'g' program which would perform this function. The applications which use LMS include, to name a few: system identification, adaptive beamforming (for radar applications), adaptive filtering for time-delay estimation, active noise cancellation, hearing aids or speech signals generally, adaptive equalization for communication channels and some areas of adaptive control.

This article is not meant as an in-depth study of aspects of adaptive signal processing but rather to illustrate how LabVIEW can be used as a scientific tool when implementing such algorithms.

2.The LMS algorithm for system identification.

Consider the block diagram shown in Figure 1 below. Although the LMS algorithm can be applied to all of the applications mentioned in the previous section, perhaps the easiest application in which to understand and test any algorithm is system identification. The basic objective is to find the transfer function of an unknown system. Usually the system is driven by a white-noise source. The output of the unknown system is labeled the primary input to the algorithm whilst the white-noise source itself is labeled the reference. The coefficients of another filter within the LMS algorithm are then adjusted according to an error (shown on the diagram). When this error (in fact its average squared value) is at a minimum value, it is considered that the LMS algorithm has converged and the coefficients of the filter within the LMS algorithm then match the unknown system. Probably for the simple example shown it is best thought of as 'path balancing', ie if the unknown system was placed instead in the reference path then the LMS algorithm would have to converge to the inverse of the unknown system.


Figure 1. LMS algorithm used for system identification.

The LMS inputs are described mathematically at time sample $k$ as $s_k$ (primary input) and $u_k$ (reference input). The unknown system is labeled $H_1(z^{-1})$, which for convenience is a finite impulse response (FIR) filter of order $n$ with $u_k$ as its input and $s_k$ as its output. The unknown system need not be FIR but it is easier to check the results if it is. Suppose $H_1(z^{-1})$ has a z-transfer function

$$H_1(z^{-1}) = b_0 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_n z^{-n}$$

where the coefficients $b_i$ of the filter are unknown and are to be estimated.

The LMS algorithm is given by [3].

Update the error:

$$e_k = s_k - X_k^T W_{k-1} \qquad (1)$$

Update the weight vector estimate $W_k$:

$$W_k = W_{k-1} + 2\mu X_k e_k \qquad (2)$$

The weight vector is a column vector

$$W_k = \begin{bmatrix} w_0 & w_1 & \cdots & w_n \end{bmatrix}^T$$

and $X_k$ is the column vector of regressors (or past values of the input) given by

$$X_k = \begin{bmatrix} u_k & u_{k-1} & \cdots & u_{k-n} \end{bmatrix}^T$$

Here $n$ is the system order and there are $n+1$ weights. The transpose notation (T) in equation (1) therefore makes $X_k^T$ a row vector. For the purposes of programming both W and X are arrays of length $n+1$.
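As a point of reference for what the graphical implementation described later computes, a minimal sequential sketch of equations (1) and (2) in Python/NumPy might look like the following. The function and variable names are the writer's own; this is an illustrative analogue, not a transcription of the LabVIEW code.

import numpy as np

def lms_update(w, x, s_k, mu):
    # One iteration of the LMS recursion.
    # w   : previous weight vector W_{k-1} (length n+1)
    # x   : regressor vector X_k = [u_k, u_{k-1}, ..., u_{k-n}]
    # s_k : primary input sample
    # mu  : step size
    e_k = s_k - np.dot(x, w)         # equation (1): error update
    w_new = w + 2.0 * mu * x * e_k   # equation (2): weight update
    return w_new, e_k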

This simple recursion lends itself well to real-time applications and has relatively good tracking ability. The system input $u_k$ (reference signal) for this application is assumed to have no dc and to have a variance (average power) $\sigma_u^2$. If a white-noise signal is not available then a signal rich in harmonics can be used (for example a square wave). The step-size $\mu$ must be chosen carefully. Too large and the algorithm will go unstable, too low and convergence will be slow. It is well known that the condition for convergence [3] is

$$\mu < \frac{1}{(n+1)\,\sigma_u^2}$$

ie one over the number of weights times the reference noise power. In practice often a tenth of the theoretical maximum is used, or sometimes one third. For speech signals the variance will vary with time and clearly under such conditions $\mu$ must vary too or be fixed at the worst (smallest) possible case. If the LMS algorithm converges then each weight will in this case correspond to a coefficient of the unknown system. For example $w_0 = b_0$, $w_1 = b_1$ and so on.
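As an illustration of this rule of thumb (a sketch only; the reference signal and the numbers used here are the writer's own assumptions, not values from the article):

import numpy as np

u = np.random.default_rng(0).standard_normal(10000)   # example reference signal
n_weights = 4                                         # n + 1 weights
mu_max = 1.0 / (n_weights * np.var(u))                # theoretical maximum step size
mu = 0.1 * mu_max                                     # a tenth of the maximum, as suggested above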

3.The LabVIEW LMS virtual instrument

This section will cover the graphical code for implementing the basic LMS algorithm. It is best to start with a simple example, and for this an FIR filter of third order (ie four weights) was used, driven by white Gaussian noise. Figure 2 illustrates how this is achieved in the main programming diagram. The LMS algorithm is implemented for convenience as a sub .vi (virtual instrument file). It is contained within a While loop which runs endlessly. The FIR system parameters were chosen arbitrarily as there are no stability issues. The simplest way to simulate a third order FIR system is to use a formula node. The registers on the main While loop are required to store past values of input (ie regressors). The input and output to the LMS algorithm are both scalar quantities. As the library function which generates Gaussian white noise is fundamentally an array, it is necessary to make this of length one point and use an index array block after this pointing to the zeroth element, ie the first point. Otherwise the noise generator output will stay as type array and the connection to the LMS sub will not be possible. In Figure 2, sk is the system output and uk the system input, with uk1, uk2 and uk3 all past samples of uk.
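The same experiment can be expressed as a short sequential sketch. This is illustrative only: the FIR coefficients below are arbitrary placeholders rather than the values used in the article, and the for loop stands in for the endless While loop of Figure 2.

import numpy as np

rng = np.random.default_rng(0)
b = np.array([0.5, -0.3, 0.2, 0.1])      # "unknown" third-order FIR system (placeholder values)
w = np.zeros(len(b))                     # LMS weight vector W
x = np.zeros(len(b))                     # regressor vector X
mu = 0.01                                # step size control

for k in range(10000):                   # stands in for the endless While loop
    u_k = rng.standard_normal()          # white Gaussian noise sample (the reference uk)
    x = np.concatenate(([u_k], x[:-1]))  # store past values of the input (the regressors)
    s_k = np.dot(b, x)                   # unknown system output (the primary input sk)
    e_k = s_k - np.dot(x, w)             # equation (1)
    w = w + 2.0 * mu * x * e_k           # equation (2)

# After convergence the elements of w should match b.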


Figure 2. The basic LMS virtual instrument.

Looking inside the LMS sub itself reveals the diagram shown in Figure 3 below.


Figure 3. The LMS algorithm sub .vi

All inputs are shown to the left and outputs to the right. One of the tasks performed is to shuffle the past samples of the input held in the vector X. For example, if we had four weights it would be necessary to perform at each iteration of the main While loop

X[3] = X[2]
X[2] = X[1]
X[1] = X[0]

and finally X[0] = new sample of the reference input ($u_k$).

This is relatively easy to perform in sequential code but in graphical code this is a

little more complicated and one solution is shown in Figure 4 below.


Figure 4. Shuffles regressor vector X

The diagram in Figure 4 makes use of the library functions replace array subset and index array. The function index array selects the desired element of the X array and replace array subset puts the shuffled value into the array. External to the For loop is another replace array subset which places the new reference input point into the array at index value zero (X[0] = $u_k$).
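For comparison, in a sequential language the shuffle of Figure 4 reduces to a couple of lines. The sketch below uses placeholder values and is the writer's own illustration, not a transcription of the diagram.

import numpy as np

x = np.zeros(4)        # regressor vector X for four weights
u_new = 1.0            # placeholder for the latest reference sample u_k

x = np.roll(x, 1)      # X[3]=X[2], X[2]=X[1], X[1]=X[0]
x[0] = u_new           # X[0] = new reference input sample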

Equation (1) involves the term $X_k^T W_{k-1}$, where X and W are both arrays of equal dimension. This is just the sum of the products of corresponding X and W array elements. In sequential pseudo code this would traditionally be implemented using something like

sum = 0
for i = 0 to n
    sum = sum + X[i]*W[i]
end for loop

In graphical code this is implemented using the diagram shown in Figure 5.


Figure 5. Shows how to compute $X_k^T W_{k-1}$

The arrays W and X are both shown, whilst the broken line at the far right is the product $X_k^T W_{k-1}$. All that is required here is use of the library function index array to select the individual points in the array. They are then multiplied and the running total stored in the register. Note that the register is initialized to zero each time the loop is executed.
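In a textual language the whole of Figure 5 collapses to a single dot product; purely as a point of reference, with made-up example values:

import numpy as np

x = np.array([1.0, 0.5, -0.2, 0.3])   # example regressor vector X
w = np.array([0.4, 0.1, 0.0, -0.3])   # example weight vector W
y = float(np.dot(x, w))               # X^T W: sum of the element-wise products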

The rest of the LMS algorithm is pretty straightforward as there are only two equations in total. For instance, equation (1) only requires the primary input minus the previously calculated $X_k^T W_{k-1}$ to give the scalar error $e_k$.

Equation (2) involves straight multiplication of scalars and one vector to give $2\mu X_k e_k$. The step length $\mu$ is made an external input to the LMS sub .vi. In the main panel it is set as a control input. Of course more sophisticated automatic calculation of the step length can be performed, but this can easily be added later.

The final stage is to implement the vector recursion

$$W_k = W_{k-1} + 2\mu X_k e_k$$

where $2\mu X_k e_k$ has already been calculated. This is done by using a register to store the current values in the vector $W_k$ and recovering them at the next iteration of the main While loop, whereby they become $W_{k-1}$. Equation (2) then only needs a summation to complete the whole recursion. The register is external to the LMS sub, ie in the main While loop as shown in Figure 2. The regressor vector X must also be stored in a register so as not to lose the values therein.

The front panel of the LMS virtual instrument is quite basic and is shown in Figure 6.


Figure 6. Front panel of LMS virtual instrument

It can be seen that the step size $\mu$ is selected by a control which is set to around 0.01.

The four weights are shown for the third order FIR system and as expected give

precise values for such a simple example. The error is also shown and the number of

weights is also a control variable.

4.Testing the LMS virtual instrument on real data.

The generic LMS algorithm has two inputs, a primary and a reference. To get real data into the algorithm, the easiest way is to make use of the sound card. For higher bandwidth problems a special data acquisition card can be used. Whilst it would be possible to read the two signals 'live' from sound sources, it was thought better initially to read pre-recorded signals from a stereo .wav file. The stereo .wav file had one channel as signal plus noise and the other channel as 'noise alone'. Of course with the real recordings there is no such thing as 'noise alone'. The diagram in Figure 7 shows a better model of what is happening.


Figure 7. Model of adaptive filter for recording of real signals.

It can be seen that although it is assumed that the signal is a perfect measurement, the noise is in fact measured via two separate acoustic transfer functions $H_1(z^{-1})$ and $H_2(z^{-1})$ which alter the magnitude and phase of the recordings at each microphone. Even this is a little simplistic as it does not allow for room reverberations and the unfortunate possibility that the signal (speech) may be strong enough to reach the reference microphone. Note that for this more general case we are not trying to model H1 or H2, nor their inverses. In fact the LMS algorithm minimizes the mean-squared error and hence the result is an adaptive Wiener filter. For more details see reference [3].
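Before moving to recorded data, the configuration of Figure 7 is easy to try in simulation. The sketch below is the writer's own illustration: the two acoustic paths and all coefficient values are arbitrary assumptions, not measurements from the article. The error output of the LMS recursion is the noise-cancelled signal.

import numpy as np

rng = np.random.default_rng(1)
N = 20000
signal = np.sin(2 * np.pi * 0.01 * np.arange(N))       # stand-in for the wanted signal
noise = rng.standard_normal(N)                         # noise source

h1 = np.array([0.6, 0.3, 0.1])                         # acoustic path to the primary mic (assumed)
h2 = np.array([1.0, -0.4, 0.2])                        # acoustic path to the reference mic (assumed)
primary = signal + np.convolve(noise, h1, mode="full")[:N]
reference = np.convolve(noise, h2, mode="full")[:N]

n_weights = 32
w = np.zeros(n_weights)
x = np.zeros(n_weights)
mu = 0.1 / (n_weights * np.var(reference))             # a tenth of the theoretical maximum
out = np.zeros(N)

for k in range(N):
    x = np.concatenate(([reference[k]], x[:-1]))       # shuffle the regressors
    e = primary[k] - np.dot(x, w)                      # error = noise-cancelled output
    w = w + 2.0 * mu * x * e
    out[k] = e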


Figure 8. Layout of room

The actual room used was approximately 3.5 meters square and the noise source was approximately 2.5 meters from the signal source. The noise source was an ordinary radio and the speech source was a person talking a sentence in English. The resulting .wav file was saved direct to disk using an audio package. LabVIEW could have been used for this purpose too. A diagram is shown in Figure 8. There were other furnishings in the room not shown in the diagram, for example a book case, two desks and a filing cabinet. The microphones required a pre-amplifier to boost the signal strengths to acceptable levels. A number of recordings were made for later analyses.

A typical signal plus noise and the corresponding LabVIEW LMS estimate for recorded data are shown in Figure 9.


Figure 9. Signal plus noise (top) and LabVIEW LMS estimate (bottom)

The number of weights used was 512 for this example and the sampling frequency

was 22050 Hz. An eight bit word length was used although it would have been just as

straightforward to use 16 bits. As the data (speech) is a signal whose average power

changes with time, it is not too easy to measure signal to noise ratio (SNR) as it too

will vary with time. SNR is defined as

$$SNR = 10\log_{10}\left[\frac{P_s}{P_n}\right]$$

where $P_s$ and $P_n$ are respectively the power in the signal and the power in the noise.

In general, power P (or variance when there is no dc) is easily calculated for m samples according to

$$P = \frac{1}{m}\sum_{i=1}^{m} x_i^2$$

where $x_i$ is the i-th sample of the signal. The problem with our real-world example is that the true signal is unknown (un-measurable) and hence its power cannot be calculated on its own.
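As a small worked example, these two definitions translate directly into code (illustrative helper functions only, not part of the article's LabVIEW implementation):

import numpy as np

def power(x):
    # Average power (variance when there is no dc) of the samples in x.
    x = np.asarray(x, dtype=float)
    return float(np.mean(x ** 2))

def snr_db(signal, noise):
    # SNR in dB, given separately measurable signal and noise sample arrays.
    return 10.0 * np.log10(power(signal) / power(noise))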

The usual way to calculate SNR for such problems is to measure the power of the noise when the speech signal is absent and to then measure the power of the signal + noise (assuming the noise has not changed its characteristics). By subtracting the two measures (in dB), the quantity $10\log_{10}[1 + SNR]$ is found, ie

$$10\log_{10}\left[\frac{P_s + P_n}{P_n}\right] = 10\log_{10}[1 + SNR]$$

from which SNR can then be calculated.

Three areas of 'noise alone' in the waveform were used and hence three measurements of SNR were obtained and averaged. This method is sometimes known as segmented SNR measurement. This assumes of course that the noise power does not change throughout the whole time of measurement, which is in reality a false assumption with non-stationary data. However, it is a good approximation and the only method available other than simulation results. The power of the noise alone and the power in the signal plus noise were measured manually using commonly available audio editing software (Cool Edit). It would be possible to do this automatically in LabVIEW, but a rather sophisticated speech activity detector would then have to be developed, which is a current research topic in itself.
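A sketch of this segmented measurement is given below. The helper and its segment lists are the writer's own; in the article the regions were picked by hand in an audio editor.

import numpy as np

def segmented_snr_db(x, noise_segments, mix_segments):
    # x              : recorded samples (float array)
    # noise_segments : (start, stop) index pairs containing noise alone
    # mix_segments   : (start, stop) index pairs containing signal plus noise
    snrs = []
    for (na, nb), (ma, mb) in zip(noise_segments, mix_segments):
        p_n = np.mean(x[na:nb] ** 2)       # noise-alone power
        p_sn = np.mean(x[ma:mb] ** 2)      # signal-plus-noise power
        # 10log10[(Ps+Pn)/Pn] = 10log10[1+SNR]; subtract the 1 to recover SNR itself.
        snrs.append(10.0 * np.log10(max(p_sn / p_n - 1.0, 1e-12)))
    return float(np.mean(snrs))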

For simulated data, that is by using transfer functions to model the acoustic paths $H_1(z^{-1})$ and $H_2(z^{-1})$, it is not uncommon to see large improvements in SNR, anything from 6 dB upwards. Similarly, in anechoic chamber tests the results are quite impressive. However, in real-world applications such as this there are many factors affecting performance, the most notable being reverberations in the room. This means that instead of there being just one noise source there are in fact many, depending on how reverberant the room is. Bearing this in mind, it was found that on average the improvement in SNR given by the LMS algorithm in this environment was a little over 1.5 dB. There are many approaches in the literature on how to improve on this, some of which involve using multiple sensor arrays. Of course this is also possible with LabVIEW but is beyond the scope of this article.

5.Conclusions

The LMS algorithm has been implemented using the graphical programming language 'g' (LabVIEW). It has been shown how aspects of adaptive signal processing can be quite elegantly implemented using this method, and that it is a powerful alternative to other programming methods. Adaptive array signal processing could be investigated using a similar approach, which would give improved results. Furthermore, this article has explored only a few applications of the LMS algorithm from a vast literature of applications.

6.References

[1] Johnson, G.W., LabVIEW Graphical Programming, McGraw-Hill, NY, 1994.

[2] Chugani, M.L., Samant, A.R. and Cerna, M., LabVIEW Signal Processing, National Instruments Series, Prentice Hall, NJ, 1998.

[3] Haykin, S., Adaptive Filter Theory, Prentice Hall, Englewood Cliffs, NJ, 1986.

Appendix. Reading Stereo .wav files

Reading mono .wav files is covered extensively in the LabVIEW books and help facilities. Perhaps much less coverage is given to stereo .wav files and how to read them. The diagram below shows how this can be achieved.


Figure A1. Reading a .wav file using LabVIEW

The reading is done within a sequence structure (the 'film strip') as the file has to be read before it is processed. (The rest of the LMS processing must also be performed within a sequence structure.) The read wav file library sub gives out a 2D integer array which must be split into two separate channels. This is done by using the index array library call as shown, using 0 and 1 to point to each channel (column of the 2D array). For convenience the two separate 8 bit integer arrays are then put through a case structure which will swap them over at the push of a button on the front panel. This is just in case the two channels are saved the wrong way around in the file. Not shown is the next step, where 128 must be subtracted from each input and the result scaled down to a suitable level for processing. The 128 comes about since for 8 bits we have 256 levels and dc sits midway at 128. A scaling factor of 0.01 then gives a dynamic range of +/- 1.28 for each channel. This scaling is not critical as long as later, when writing the processed speech to a .wav file, the inverse is used to get back to integer values and a suitable dynamic range.
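For readers working outside LabVIEW, the same preparation can be sketched with the standard Python wave module (an illustrative analogue of Figure A1, not a description of the LabVIEW library calls; the file name is a placeholder and the channel ordering may need swapping, as noted above):

import wave
import numpy as np

with wave.open("recording.wav", "rb") as f:                   # placeholder file name
    assert f.getnchannels() == 2 and f.getsampwidth() == 1    # stereo, 8-bit samples
    raw = np.frombuffer(f.readframes(f.getnframes()), dtype=np.uint8)

frames = raw.reshape(-1, 2)                                   # interleaved samples -> two columns
primary = (frames[:, 0].astype(float) - 128.0) * 0.01         # remove the 8-bit dc offset of 128, then scale
reference = (frames[:, 1].astype(float) - 128.0) * 0.01
# Swap primary and reference here if the channels were saved the other way around.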