
TURBULENCE SIGNAL DETECTION USING SPIRAL MICROPHONE ARRAY

YiNeng Wang

National Taiwan University of Science and Technology

Department of Electronic Engineering, No. 43, Sec. 4, Keelung Rd., Da'an Dist., Taipei City 106, Taiwan

ABSTRACT

The purpose of this draft is to verify that the turbulence signal can be efficiently measured using the Fourier transform of the sampled sound pressure based on the spiral microphone array. Because of the characteristics of the microphones, the array must be carefully configured and the source frequencies are restricted. Since the number of microphones is limited, the resolution of the resulting Fourier spectrum will be poor without expanding the number of sampling points. In this study, we use the Whittaker-Shannon sampling theorem [1] to predict the transformation result. According to this theorem, the spectrum of the sampled sound pressure can be obtained by erecting the spectrum of the original sound pressure about each point of the reciprocal sampling lattice. We have simulated different transmitted waves with constant or randomly distributed intensities over the wavefront, and their spectra all resemble turbulence.

Index Terms: Microphone array, Whittaker-Shannon sampling theorem, spiral configuration

(This draft was supported by the Interactive Multimedia Laboratory, NTUST.)

1. INTRODUCTION

Because a pressure microphone exhibits some directionality along its main axis at short wavelengths, caused principally by diffraction effects [2], we carefully arrange each microphone so that all of them are oriented in the same direction, as illustrated in Fig. 1. Our microphones are fairly small, so they have minimal effect on the sound field they are sampling. For microphones operating at higher frequencies, there are bound to be certain aberrations in the directional response as the dimensions of the microphone case become a significant fraction of the sound wavelength. Moreover, such high frequencies are inaudible to the average person [3]. For audio-recording applications, we therefore employ signals with relatively low frequencies (acoustic sound) as our sound sources.

2. WHITTAKER-SHANNON SAMPLING THEOREM

To arrive at the simplest way of predicting what will result from applying the Fourier transform to the sampled signals, we use rectangular lattices to approximate the samples of the function g(x, y); the sampled function is defined as:

g_s(x, y) = comb(x/X) comb(y/Y) g(x, y)    (1)
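As a minimal numerical sketch of Eq. (1), assuming a toy pressure field on an arbitrary square grid rather than our actual measurement data, the sampled function g_s can be formed by zeroing every grid point that is off the rectangular lattice:

# Sketch of Eq. (1): sampling a 2-D field g(x, y) on a rectangular lattice.
# The field, grid size, and lattice spacing below are illustrative assumptions.
import numpy as np

N = 256                                   # grid points per axis (assumed)
x = np.linspace(-1.0, 1.0, N, endpoint=False)
Xg, Yg = np.meshgrid(x, x, indexing="ij")

# A smooth test field standing in for the sound pressure g(x, y).
g = np.cos(2 * np.pi * 3 * Xg) * np.cos(2 * np.pi * 2 * Yg)

# Discrete stand-in for comb(x/X) comb(y/Y): ones on the lattice, zeros elsewhere.
step = 8                                  # lattice spacing X = Y = step grid cells (assumed)
comb = np.zeros((N, N))
comb[::step, ::step] = 1.0

g_s = comb * g                            # Eq. (1): the sampled function g_s(x, y)
print("retained samples:", np.count_nonzero(comb))   # (N/step)**2 = 1024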

The sampled function g_s(x, y) thus consists of an array of δ functions, spaced at intervals of width X in the x direction and width Y in the y direction, as illustrated in Fig. 2. The area under each δ function is proportional to the value of the function g(x, y) at that particular point of the rectangular sampling lattice. As implied by the convolution theorem, the spectrum G_s(f_x, f_y) of g_s(x, y) can be found by convolving the transform of comb(x/X) comb(y/Y) with the transform of g(x, y), or:

G_s(f_x, f_y) = F{comb(x/X) comb(y/Y)} ∗ G(f_x, f_y)    (2)

where ∗ indicates that a two-dimensional convolution is to be performed. By using Table 1, we have:


F{comb(x/X) comb(y/Y)} = XY comb(X f_x) comb(Y f_y)    (3)

Table 1. Transform pairs for some functions separable in rectangular coordinates.


Function                              Transform
exp[-π(a²x² + b²y²)]                  (1/|ab|) exp[-π(f_x²/a² + f_y²/b²)]
rect(ax) rect(by)                     (1/|ab|) sinc(f_x/a) sinc(f_y/b)
Λ(ax) Λ(by)                           (1/|ab|) sinc²(f_x/a) sinc²(f_y/b)
δ(ax, by)                             1/|ab|
exp[jπ(ax + by)]                      δ(f_x - a/2, f_y - b/2)
sgn(ax) sgn(by)                       (1/|ab|) [1/(jπ f_x/a)] [1/(jπ f_y/b)]
comb(ax) comb(by)                     (1/|ab|) comb(f_x/a) comb(f_y/b)
exp[jπ(a²x² + b²y²)]                  (j/|ab|) exp[-jπ(f_x²/a² + f_y²/b²)]
exp[-(a|x| + b|y|)]                   4 / (|ab| [1 + (2π f_x/a)²] [1 + (2π f_y/b)²])

whereas, from the properties of the Dirac delta function:


XY comb(X f_x) comb(Y f_y) = Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} δ(f_x - n/X, f_y - m/Y)    (4)
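A quick numerical sanity check of the comb transform pair in Eqs. (3) and (4), again on an assumed square grid that stands in for the rectangular lattice of this section: the 2-D DFT of an impulse train with spacing X is itself an impulse train with reciprocal spacing.

# Sketch: the 2-D DFT of a rectangular impulse train is again an impulse train
# on the reciprocal lattice (Eqs. (3)-(4)). Grid parameters are assumptions.
import numpy as np

N, step = 256, 8                          # step plays the role of X (= Y) in grid units
comb = np.zeros((N, N))
comb[::step, ::step] = 1.0

spectrum = np.abs(np.fft.fft2(comb))
peaks = np.argwhere(spectrum > 0.5 * spectrum.max())

# Nonzero bins sit only at multiples of N/step, i.e. on the reciprocal lattice.
assert np.all(peaks % (N // step) == 0)
print("spectral peaks:", len(peaks), "peak height:", spectrum.max())   # 64 peaks of height 1024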

Substituting Eqs. (3) and (4) into Eq. (2) and carrying out the convolution, it follows that:

G_s(f_x, f_y) = Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} G(f_x - n/X, f_y - m/Y)    (5)
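The replication predicted by Eq. (5), and the overlap discussed next, can be visualized with a short sketch. The Gaussian test field and grid parameters are assumptions standing in for a bandlimited g(x, y); they are not our measured data.

# Sketch of Eq. (5): sampling replicates the spectrum about every point (n/X, m/Y);
# a coarse lattice makes neighbouring spectral islands overlap. Values are assumed.
import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N, endpoint=False)
Xg, Yg = np.meshgrid(x, x, indexing="ij")

# Roughly bandlimited test field: its spectrum occupies a small region R near the origin.
g = np.exp(-((Xg / 0.15) ** 2 + (Yg / 0.15) ** 2))

for step in (4, 32):                      # fine versus coarse sampling lattice
    comb = np.zeros((N, N))
    comb[::step, ::step] = 1.0
    Gs = np.abs(np.fft.fftshift(np.fft.fft2(comb * g)))
    spacing = N // step                   # separation of the spectral islands, in bins
    midway = Gs[N // 2, N // 2 + spacing // 2] / Gs.max()
    print(f"step={step:2d}: island spacing {spacing} bins, level midway between islands {midway:.3f}")

With the fine lattice the level midway between islands is essentially zero, whereas with the coarse lattice the neighbouring islands bleed into one another, which is exactly the overlap described below.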

Evidently the spectrum of g_s(x, y) can be found simply by erecting the spectrum of g(x, y) about each point (n/X, m/Y) in the (f_x, f_y) plane, as shown in Fig. 3. Since the function g(x, y) is assumed to be bandlimited, its spectrum G(f_x, f_y) is nonzero over only a finite region R of the frequency space. As implied by Eq. (5), the region over which the spectrum of the sampled function is nonzero can be found by constructing the region R about each point (n/X, m/Y) in the frequency plane. Notice that if X and Y are sufficiently small, i.e., the samples are sufficiently close together, then the separations 1/X and 1/Y of the various spectral islands will be great enough to ensure that adjacent regions do not overlap. In our case, however, as shown in Fig. 4, some overlap remains, because only a handful of microphones are available relative to the resolution we need.

3. FURTHER DISCUSSIONS

The recovery of the original spectrum G(f_x, f_y) from G_s(f_x, f_y) can be accomplished exactly by passing the sampled function g_s(x, y) through a linear, invariant filter that transmits the (n = 0, m = 0) term of Eq. (5) without distortion while perfectly excluding all other terms.
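A minimal sketch of this recovery, assuming an idealized rectangular pass-band on the same kind of toy grid as above (an illustration of the principle, not of the actual array processing): keep only the central spectral island of G_s and inverse-transform.

# Sketch of the recovery: transmit only the (n = 0, m = 0) island of Eq. (5)
# with an ideal low-pass filter, then inverse-transform. Assumed toy data.
import numpy as np

N, step = 256, 4                          # fine lattice, so the spectral islands do not overlap
x = np.linspace(-1.0, 1.0, N, endpoint=False)
Xg, Yg = np.meshgrid(x, x, indexing="ij")
g = np.exp(-((Xg / 0.15) ** 2 + (Yg / 0.15) ** 2))

comb = np.zeros((N, N))
comb[::step, ::step] = 1.0
g_s = comb * g                            # the sampled function, Eq. (1)

Gs = np.fft.fftshift(np.fft.fft2(g_s))
half = N // (2 * step)                    # pass-band reaches halfway to the neighbouring islands
lowpass = np.zeros((N, N))
lowpass[N // 2 - half:N // 2 + half, N // 2 - half:N // 2 + half] = 1.0

# On this grid the central island equals G/step**2, hence the rescaling below.
g_rec = np.real(np.fft.ifft2(np.fft.ifftshift(Gs * lowpass))) * step ** 2
print("max reconstruction error:", np.max(np.abs(g_rec - g)))

On this toy example the error is at the level of numerical noise; approximating such a filter for the spiral configuration with only a handful of microphones, for instance with the FIR design mentioned in Section 4, remains the open problem.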

4. CONCLUSION AND FUTURE RESEARCH

In spite of the poor resolution caused by the limited number of microphones, we use signal-processing techniques to make the spectrum look more like a spiral: for example, expanding the set of sampled points by zero padding and then performing the Fourier transform, with the sampled acoustic pressure viewed as the intensity of an image. The reason the two spectra show different colors is that the spacing of the Dirac comb functions is not uniform for the spiral microphone configuration [4]; however, the layout of the spectrum of the sampled signal is always spiral, regardless of whether the sound source is a plane wave, a spherical wave, or even randomly distributed [5]. As discussed in Section 3, the original spectrum G(f_x, f_y) can be recovered exactly from G_s(f_x, f_y) by passing the sampled function g_s(x, y) through a linear, invariant filter that transmits the (n = 0, m = 0) term of Eq. (5) without distortion while perfectly excluding all other terms. Our next objective is therefore to find an exact replica of the original data g(x, y) using an FIR filter.

5. REFERENCES

[1] J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, 2005.
[2] M. W. Hoffman, "Microphone array calibration for robust adaptive processing," in Proc. IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, pp. 11-14, Oct. 1995.
[3] J. Eargle, The Microphone Book: From Mono to Stereo to Surround - A Guide to Microphone Design and Application, Focal Press, Oxford, 2004.
[4] Y. Tamai, S. Kagami, Y. Amemiya, Y. Sasaki, H. Mizoguchi, and T. Takano, "Circular microphone array for robot's audition," in Proc. IEEE Sensors, vol. 2, pp. 565-570, Oct. 2004.
[5] G. Del Galdo, O. Thiergart, T. Weller, and E. A. P. Habets, "Generating virtual microphone signals using geometrical information gathered by distributed arrays," in Proc. Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA), pp. 185-190, May-June 2011.
