
Sine wave

From Wikipedia, the free encyclopedia

"Sinusoid" redirects here. For the blood vessel, see Sinusoid (blood vessel).

The graphs of the sine and cosine functions are sinusoids of different phases.

The sine wave or sinusoid is a mathematical function that describes a smooth repetitive oscillation. It occurs often in pure mathematics, as well as physics, signal processing, electrical engineering and many other fields. Its most basic form as a function of time (t) is:

$y(t) = A \sin(\omega t + \varphi)$
where:

A, the amplitude, is the peak deviation of the function from its center position;
ω, the angular frequency, specifies how many oscillations occur in a unit time interval, in radians per second;
φ, the phase, specifies where in its cycle the oscillation begins at t = 0.

When the phase φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds. A negative value represents a delay, and a positive value represents an advance.

[Audio sample: 5 seconds of a 220 Hz sine wave]

The oscillation of an undamped spring-mass system around the equilibrium is a sine wave.

The sine wave is important in physics because it retains its waveshape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique.


General form


In general, the function may also have:

a spatial dimension, x (aka position), with frequency k (also called wavenumber)

a non-zero center amplitude, D

which looks like this:

$y(x, t) = A \sin(kx - \omega t + \varphi) + D$

The wavenumber is related to the angular frequency by:

$k = \frac{\omega}{c} = \frac{2\pi f}{c} = \frac{2\pi}{\lambda}$

where λ is the wavelength, f is the frequency, and c is the speed of propagation. This equation gives a sine wave for a single dimension; thus the generalized equation given above gives the amplitude of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves, such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed.
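As a concrete illustration of the general form, here is a minimal Python sketch (the function name, parameter defaults, and the 343 m/s propagation speed are my assumptions, not part of the article):

```python
import numpy as np

def sine_wave(t, x=0.0, A=1.0, f=220.0, phase=0.0, D=0.0, c=343.0):
    """Evaluate y(x, t) = A*sin(k*x - w*t + phase) + D for a travelling
    sine wave; c is the propagation speed (343 m/s, roughly sound in air)."""
    w = 2 * np.pi * f   # angular frequency, rad/s
    k = w / c           # wavenumber, rad/m
    return A * np.sin(k * x - w * t + phase) + D

# 5 ms of a 220 Hz tone observed at position x = 0
t = np.linspace(0.0, 0.005, 1000)
y = sine_wave(t)
```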

Occurrences

Illustrating the cosine wave's fundamental relationship to the circle.

This wave pattern occurs often in nature, including ocean waves, sound waves, and light waves. A cosine wave is said to be "sinusoidal", because cos(x) = sin(x + π/2), which is also a sine wave with a phase-shift of π/2. Because of this "head start", it is often said that the cosine function leads the sine function, or the sine lags the cosine. The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics; some sounds that approximate a pure sine wave are whistling, a crystal glass set to vibrate by running a wet finger around its rim, and the sound made by a tuning fork. To the human ear, a sound made up of more than one sine wave will either sound "noisy" or will have detectable harmonics; this may be described as a different timbre.

Fourier series

Sine, square, triangle, and sawtooth waveforms

Main article: Fourier analysis

In 1822, Joseph Fourier, a French mathematician, discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform, including square waves. Fourier used it as an analytical tool in the study of waves and heat flow. It is frequently used in signal processing and the statistical analysis of time series.

See also


Wave (physics)
Crest (physics)
Fourier transform
Harmonic series (mathematics)
Harmonic series (music)
Helmholtz equation
Instantaneous phase
Pure tone
Sawtooth wave
Sinusoidal model
Simple harmonic motion
Square wave
Triangle wave
Wave equation




Square wave
From Wikipedia, the free encyclopedia

Sine, square, triangle, and sawtooth waveforms

A square wave is a kind of non-sinusoidal waveform, most typically encountered in electronics and signal processing. An ideal square wave alternates regularly and instantaneously between two levels. Its stochastic counterpart is a two-state trajectory.


Origins and uses


Square waves are universally encountered in digital switching circuits and are naturally generated by binary (two-level) logic devices. They are used as timing references or "clock signals", because their fast transitions are suitable for triggering synchronous logic circuits at precisely determined intervals. However, as the frequency-domain graph shows, square waves contain a wide range of harmonics; these can generate electromagnetic radiation or pulses of current that interfere with other nearby circuits, causing noise or errors. To avoid this problem in very sensitive circuits such as precision analog-to-digital converters, sine waves are used instead of square waves as timing references. In musical terms, they are often described as sounding hollow, and are therefore used as the basis for wind instrument sounds created using subtractive synthesis. Additionally, the distortion effect used on electric guitars clips the outermost regions of the waveform, causing it to increasingly resemble a square wave as more distortion is applied. Simple two-level Rademacher functions are square waves.

Examining the square wave


In contrast to the sawtooth wave, which contains all integer harmonics, the square wave contains only odd integer harmonics. Using Fourier series, we can write an ideal square wave as an infinite series of the form

$x(t) = \frac{4}{\pi} \sum_{k=1,3,5,\ldots} \frac{1}{k} \sin(2\pi k f t)$

A curiosity of the convergence of the Fourier series representation of the square wave is the Gibbs phenomenon. Ringing artifacts in non-ideal square waves can be shown to be related to this phenomenon. The Gibbs phenomenon can be prevented by the use of σ-approximation, which uses the Lanczos sigma factors to help the sequence converge more smoothly. An ideal square wave requires that the signal changes from the high to the low state cleanly and instantaneously. This is impossible to achieve in real-world systems, as it would require infinite bandwidth. It would also require particles to travel faster than the speed of light, as the slope of an ideal square wave at these points is undefined (or infinite).[citation needed] Even in the best approximation of a square wave, however, this slope is finite.
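To make the partial sums and the Gibbs overshoot concrete, here is a small Python sketch (my own illustration, not from the article; the Lanczos σ factor is applied in one common form, sinc(k/K) with K the first omitted harmonic):

```python
import numpy as np

def square_partial_sum(t, f, n_harmonics, lanczos_sigma=False):
    """Partial Fourier sum of an ideal square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*f*t) / k.
    With lanczos_sigma=True each term is damped by sinc(k/K),
    which suppresses the Gibbs ringing (sigma-approximation)."""
    ks = np.arange(1, 2 * n_harmonics, 2)   # odd harmonics 1, 3, 5, ...
    K = ks[-1] + 2                          # first omitted harmonic
    y = np.zeros_like(t)
    for k in ks:
        sigma = np.sinc(k / K) if lanczos_sigma else 1.0
        y += sigma * np.sin(2 * np.pi * k * f * t) / k
    return 4.0 / np.pi * y

t = np.linspace(0.0, 2e-3, 2000)                 # two periods at 1 kHz
ringing = square_partial_sum(t, 1000.0, 10)      # shows ~9% Gibbs overshoot
smooth = square_partial_sum(t, 1000.0, 10, lanczos_sigma=True)
```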

Animation of the additive synthesis of a square wave with an increasing number of harmonics

Real-world square waves have only finite bandwidth, and often exhibit ringing effects similar to those of the Gibbs phenomenon, or ripple effects similar to those of the σ-approximation. For a reasonable approximation to the square-wave shape, at least the fundamental and third harmonic need to be present, with the fifth harmonic being desirable. These bandwidth requirements are important in digital electronics, where finite-bandwidth analog approximations to square-wave-like waveforms are used. (The ringing transients are an important electronic consideration here, as they may exceed the electrical rating limits of a circuit or cause a badly positioned threshold to be crossed multiple times.)

The ratio of the high period to the total period of a square wave is called the duty cycle. A true square wave has a 50% duty cycle: equal high and low periods. The average level of a square wave is also given by the duty cycle, so by varying the on and off periods and then averaging, it is possible to represent any value between the two limiting levels. This is the basis of pulse-width modulation.
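The duty-cycle averaging that underlies pulse-width modulation is easy to verify numerically; a minimal sketch (function and parameter names are mine):

```python
import numpy as np

def rectangular_wave(t, freq, duty=0.5, high=1.0, low=0.0):
    """Ideal rectangular wave: 'high' for the first duty*T of each
    period of length T = 1/freq, and 'low' for the remainder."""
    phase = (t * freq) % 1.0              # position within the period, 0..1
    return np.where(phase < duty, high, low)

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
for duty in (0.25, 0.5, 0.75):
    avg = rectangular_wave(t, freq=100.0, duty=duty).mean()
    print(f"duty={duty:.2f}  average level={avg:.3f}")  # average equals duty
```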
[Audio sample: 5 seconds of square wave at 1 kHz]

Characteristics of imperfect square waves


As already mentioned, an ideal square wave has instantaneous transitions between the high and low levels. In practice, this is never achieved because of physical limitations of the system that generates the waveform. The times taken for the signal to rise from the low level to the high level and back again are called the rise time and the fall time respectively. If the system is overdamped, then the waveform may never actually reach the theoretical high and low levels, and if the system is underdamped, it will oscillate about the high and low levels before settling down. In these cases, the rise and fall times are measured between specified intermediate levels, such as 5% and 95%, or 10% and 90%. Formulas exist that can determine the approximate bandwidth of a system given the rise and fall times of the waveform.
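One widely quoted rule of thumb connecting these quantities, for a single-pole (RC-like) system, is BW ≈ 0.35 / t_r, where t_r is the 10%–90% rise time; this specific formula is my addition here, not a claim of the article:

```python
def bandwidth_from_rise_time(t_rise_10_90: float) -> float:
    """Approximate -3 dB bandwidth (Hz) of a single-pole system from its
    10%-90% rise time (seconds), using BW ~= 0.35 / t_r. The factor is
    ln(9)/(2*pi) ~= 0.3497 for a first-order response; for other system
    orders and damping levels this is only a rough estimate."""
    return 0.35 / t_rise_10_90

# Edges that rise in 1 ns imply roughly 350 MHz of bandwidth.
print(f"{bandwidth_from_rise_time(1e-9) / 1e6:.0f} MHz")
```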

Other definitions


The square wave has many definitions, which are equivalent except at the discontinuities.

It can be defined as simply the sign function of a sinusoid:

$x(t) = \operatorname{sgn}(\sin(2\pi f t))$

which will be 1 when the sinusoid is positive, −1 when the sinusoid is negative, and 0 at the discontinuities.

It can also be defined with respect to the Heaviside step function u(t) or the rectangular function Π(t); for period T and a 50% duty cycle, one such form is:

$x(t) = 2 \sum_{n=-\infty}^{\infty} \left[ u(t - nT) - u(t - nT - T/2) \right] - 1$

It can also be defined in a piecewise way over one period:

$x(t) = \begin{cases} 1, & 0 \le t < T/2 \\ -1, & T/2 \le t < T \end{cases}$

together with the periodicity condition x(t + T) = x(t).
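The sign-of-sinusoid definition translates directly into code; a minimal sketch (sample rate and frequency chosen arbitrarily):

```python
import numpy as np

fs = 48_000                               # sample rate, Hz
f = 1_000                                 # square wave frequency, Hz
t = np.arange(fs) / fs                    # one second of time points
x = np.sign(np.sin(2 * np.pi * f * t))    # sgn of a sinusoid
# As in the definition above, x is +1, -1, and exactly 0 at the
# sample points that land on the discontinuities.
```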

Nyquist–Shannon sampling theorem


From Wikipedia, the free encyclopedia

Fig.1: Hypothetical spectrum of a bandlimited signal as a function of frequency

The Nyquist–Shannon sampling theorem, after Harry Nyquist and Claude Shannon, is a fundamental result in the field of information theory, in particular telecommunications and signal processing. Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space). Shannon's version of the theorem states:[1]

If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

The theorem is commonly called the Nyquist sampling theorem; since it was also discovered independently by E. T. Whittaker, by Vladimir Kotelnikov, and by others, it is also known as the Nyquist–Shannon–Kotelnikov, Whittaker–Shannon–Kotelnikov, Whittaker–Nyquist–Kotelnikov–Shannon, WKS, etc., sampling theorem, as well as the Cardinal Theorem of Interpolation Theory. It is often referred to simply as the sampling theorem.

In essence, the theorem shows that a bandlimited analog signal that has been sampled can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency of the original signal. If a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely determine the signal, Shannon's statement notwithstanding. This sufficient condition can be weakened, as discussed at Sampling of non-baseband signals below. More recent statements of the theorem are sometimes careful to exclude the equality condition; that is, the condition is if x(t) contains no frequencies higher than or equal to B; this condition is equivalent to Shannon's except when the function includes a steady sinusoidal component at exactly frequency B.

The theorem assumes an idealization of any real-world situation, as it only applies to signals that are sampled for infinite time; any time-limited x(t) cannot be perfectly bandlimited. Perfect reconstruction is mathematically possible for the idealized model but only an approximation for real-world signals and sampling techniques, albeit in practice often a very good one.

The theorem also leads to a formula for reconstruction of the original signal. The constructive proof of the theorem leads to an understanding of the aliasing that can occur when a sampling system does not satisfy the conditions of the theorem.

The sampling theorem provides a sufficient condition, but not a necessary one, for perfect reconstruction. The field of compressed sensing provides a stricter sampling condition when the underlying signal is known to be sparse. Compressed sensing specifically yields a sub-Nyquist sampling criterion.


Introduction

A signal or function is bandlimited if it contains no energy at frequencies higher than some bandlimit or bandwidth B. The sampling theorem asserts that, given such a bandlimited signal, the uniformly spaced discrete samples are a complete representation of the signal as long as the sampling rate is larger than twice the bandwidth B. To formalize these concepts, let x(t) represent a continuous-time signal and X(f) be the continuous Fourier transform of that signal:

$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt$

The signal x(t) is said to be bandlimited to a one-sided baseband bandwidth, B, if

$X(f) = 0$ for all $|f| > B$,

or, equivalently, supp(X) ⊆ [−B, B].[2] Then the sufficient condition for exact reconstructability from samples at a uniform sampling rate fs (in samples per unit time) is:

$f_s > 2B$

The quantity 2B is called the Nyquist rate and is a property of the bandlimited signal, while fs/2 is called the Nyquist frequency and is a property of this sampling system.

The time interval between successive samples is referred to as the sampling interval:

$T = \frac{1}{f_s}$

and the samples of x(t) are denoted by:

$x[n] = x(nT)$

where n is an integer. The sampling theorem leads to a procedure for reconstructing the original x(t) from the samples and states sufficient conditions for such a reconstruction to be exact.
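As a small numeric companion to these definitions (my illustration; CD-audio numbers used as a familiar example):

```python
def nyquist_rate(bandwidth_hz: float) -> float:
    """Nyquist rate 2B of a signal bandlimited to bandwidth_hz; the
    sampling rate must strictly exceed this for exact reconstruction."""
    return 2.0 * bandwidth_hz

def nyquist_frequency(fs_hz: float) -> float:
    """Nyquist frequency fs/2 of a sampler running at fs_hz samples/s."""
    return fs_hz / 2.0

print(nyquist_rate(20_000.0))        # audio bandlimited to 20 kHz -> 40000.0
print(nyquist_frequency(44_100.0))   # CD sampling rate -> 22050.0
```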

The sampling process


The theorem describes two processes in signal processing: a sampling process, in which a continuous time signal is converted to a discrete time signal, and a reconstruction process, in which the original continuous signal is recovered from the discrete time signal. The continuous signal varies over time (or space in a digitized image, or another independent variable in some other application) and the sampling process is performed by measuring the continuous signal's value every T units of time (or space), which is called the sampling interval. Sampling results in a sequence of numbers, called samples, to represent the original signal. Each sample value is associated with the instant in time when it was measured. The reciprocal of the sampling interval (1/T) is the sampling frequency denoted fs, which is measured in samples per unit of time. If T is expressed in seconds, then fs is expressed in hertz.

Reconstruction
Reconstruction of the original signal is an interpolation process that mathematically defines a continuous-time signal x(t) from the discrete samples x[n], at times in between the sample instants nT.

Fig.2: The normalized sinc function, sin(πx)/(πx), showing the central peak at x = 0 and zero-crossings at the other integer values of x.

The procedure: Each sample value is multiplied by the sinc function, scaled so that the zero-crossings of the sinc function occur at the sampling instants and that the sinc function's central point is shifted to the time of that sample, nT. All of these shifted and scaled functions are then added together to recover the original signal. The scaled and time-shifted sinc functions are continuous, making the sum of these also continuous, so the result of this operation is a continuous signal. This procedure is represented by the Whittaker–Shannon interpolation formula.

The condition: The signal obtained from this reconstruction process can have no frequencies higher than one-half the sampling frequency. According to the theorem, the reconstructed signal will match the original signal provided that the original signal contains no frequencies at or above this limit. This condition is called the Nyquist criterion, or sometimes the Raabe condition.

If the original signal contains a frequency component equal to one-half the sampling rate, the condition is not satisfied. The resulting reconstructed signal may have a component at that frequency, but the amplitude and phase of that component generally will not match the original component. This reconstruction or interpolation using sinc functions is not the only interpolation scheme. Indeed, it is impossible in practice because it requires summing an infinite number of terms. However, it is the interpolation method that in theory exactly reconstructs any given bandlimited x(t) with any bandlimit B < 1/(2T); any other method that does so is formally equivalent to it.
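A minimal numerical sketch of this sinc interpolation, truncated to finitely many samples and therefore only approximate near the ends (the names and test signal are mine):

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation:
    x(t) = sum_n x[n] * sinc((t - n*T) / T), here truncated to the
    available samples; np.sinc is the normalized sinc sin(pi u)/(pi u)."""
    n = np.arange(len(samples))
    return np.array([np.dot(samples, np.sinc((ti - n * T) / T)) for ti in t])

fs = 8.0                                   # sampling rate, Hz (> 2B)
T = 1.0 / fs
n = np.arange(64)
x_n = np.cos(2 * np.pi * 1.0 * n * T)      # 1 Hz cosine, well below fs/2
t = np.linspace(1.0, 7.0, 200)             # stay away from the truncated edges
x_hat = sinc_reconstruct(x_n, T, t)
print(np.max(np.abs(x_hat - np.cos(2 * np.pi * 1.0 * t))))  # small error
```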

Practical considerations


A few consequences can be drawn from the theorem:

If the highest frequency B in the original signal is known, the theorem gives the lower bound on the sampling frequency for which perfect reconstruction can be assured. This lower bound to the sampling frequency, 2B, is called the Nyquist rate.

If instead the sampling frequency is known, the theorem gives us an upper bound for frequency components, B < fs/2, of the signal to allow for perfect reconstruction. This upper bound is the Nyquist frequency, denoted fN.

Both of these cases imply that the signal to be sampled must be bandlimited; that is, any component of this signal which has a frequency above a certain bound should be zero, or at least sufficiently close to zero to allow us to neglect its influence on the resulting reconstruction. In the first case, the condition of bandlimitation of the sampled signal can be accomplished by assuming a model of the signal which can be analysed in terms of the frequency components it contains; for example, sounds that are made by a speaking human normally contain very small frequency components at or above 10 kHz, and it is then sufficient to sample such an audio signal with a sampling frequency of at least 20 kHz. For the second case, we have to assure that the sampled signal is bandlimited such that frequency components at or above half of the sampling frequency can be neglected. This is usually accomplished by means of a suitable low-pass filter; for example, if it is desired to sample speech waveforms at 8 kHz, the signals should first be lowpass filtered to below 4 kHz.

In practice, neither of the two statements of the sampling theorem described above can be completely satisfied, and neither can the reconstruction formula be precisely implemented. The reconstruction process that involves scaled and delayed sinc functions can be described as ideal. It cannot be realized in practice, since it implies that each sample contributes to the reconstructed signal at almost all time points, requiring summing an infinite number of terms. Instead, some type of approximation of the sinc functions, finite in length, has to be used. The error that corresponds to the sinc-function approximation is referred to as interpolation error.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions nor ideal impulses (that if ideally low-pass filtered would yield the original signal), but a sequence of scaled and delayed rectangular pulses. This practical piecewise-constant output can be modeled as a zero-order hold filter driven by the sequence of scaled and delayed Dirac impulses referred to in the mathematical basis section below. A shaping filter is sometimes used after the DAC with zero-order hold to make a better overall approximation.

Furthermore, in practice, a signal can never be perfectly bandlimited, since ideal "brick-wall" filters cannot be realized. All practical filters can only attenuate frequencies outside a certain range, not remove them entirely. In addition to this, a "time-limited" signal can never be bandlimited. This means that even if an ideal reconstruction could be made, the reconstructed signal would not be exactly the original signal. The error that corresponds to the failure of bandlimitation is referred to as aliasing. The sampling theorem does not say what happens when the conditions and procedures are not exactly met, but its proof suggests an analytical framework in which the non-ideality can be studied.

A designer of a system that deals with sampling and reconstruction processes needs a thorough understanding of the signal to be sampled, in particular its frequency content, the sampling frequency, how the signal is reconstructed in terms of interpolation, and the requirement for the total reconstruction error, including aliasing, sampling, interpolation and other errors. These properties and parameters may need to be carefully tuned in order to obtain a useful system.
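As an illustrative sketch of the low-pass-then-sample recipe in the speech example above (the filter order, cutoff, and rates are my choices, using SciPy's Butterworth design, not a prescription from the article):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs_in = 48_000        # rate of the incoming, densely sampled signal (Hz)
fs_out = 8_000        # target speech sampling rate (Hz)
cutoff = 3_400        # keep telephone-band speech, below fs_out/2 = 4 kHz

# 8th-order Butterworth low-pass, cutoff normalized to the input Nyquist.
b, a = butter(8, cutoff / (fs_in / 2))

t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 6_000 * t)
x_filtered = filtfilt(b, a, x)          # the 6 kHz component is removed
x_8k = x_filtered[:: fs_in // fs_out]   # now keeping every 6th sample is safe
```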

Aliasing

Main article: Aliasing

The Poisson summation formula shows that the samples, x[n] = x(nT), of function x(t) are sufficient to create a periodic summation of function X(f). The result is:

$X_s(f) \;\stackrel{\mathrm{def}}{=}\; \sum_{k=-\infty}^{\infty} X(f - k f_s) = \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f} \qquad \text{(Eq. 1)}$

As depicted in Figures 3, 4, and 8, copies of X(f) are shifted by multiples of fs and combined by addition.

Fig.3: Hypothetical spectrum of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A "brick-wall" low-pass filter can remove the images and leave the original spectrum, thus recovering the original signal from the samples.

If the sampling condition is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above fs/2 is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. The reconstruction technique described below produces the alias, rather than the original component, in such cases.

Fig.4 Top: Hypothetical spectrum of an insufficiently sampled bandlimited signal (blue), X(f), where the images (green) overlap. These overlapping edges or "tails" of the images add, creating a spectrum unlike the original. Bottom: Hypothetical spectrum of a marginally sufficiently sampled bandlimited signal (blue), XA(f), where the images (green) narrowly do not overlap. But the overall sampled spectrum of XA(f) is identical to the overall inadequately sampled spectrum of X(f) (top) because the sum of baseband and images are the same in both cases. The discrete sampled signals xA[n] and x[n] are also identical. It is not possible, just from examining the spectra (or the sampled signals), to tell the two situations apart. If this were an audio signal, xA[n] and x[n] would sound the same and the presumed "properly" sampled xA[n] would be the alias of x[n] since the spectrum XA(f) masquerades as the spectrum X(f).

For a sinusoidal component of exactly half the sampling frequency, the component will in general alias to another sinusoid of the same frequency, but with a different phase and amplitude. To prevent or reduce aliasing, two things can be done:
1. Increase the sampling rate, to above twice some or all of the frequencies that are aliasing.
2. Introduce an anti-aliasing filter or make the anti-aliasing filter more stringent.

The anti-aliasing filter restricts the bandwidth of the signal to satisfy the condition for proper sampling. Such a restriction works in theory, but is not precisely satisfiable in reality, because realizable filters will always allow some leakage of high frequencies. However, the leakage energy can be made small enough so that the aliasing effects are negligible.
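Aliasing itself is easy to demonstrate numerically; in this sketch (frequencies chosen by me) a 3 Hz sinusoid sampled at 4 Hz produces exactly the samples of a 1 Hz sinusoid:

```python
import numpy as np

fs = 4.0                          # sampling rate, Hz; Nyquist frequency 2 Hz
n = np.arange(16)
t = n / fs

f_high = 3.0                      # above fs/2, so it must alias
f_alias = abs(f_high - fs)        # folds down to |3 - 4| = 1 Hz

x_high = np.cos(2 * np.pi * f_high * t)
x_alias = np.cos(2 * np.pi * f_alias * t)
assert np.allclose(x_high, x_alias)   # identical samples: indistinguishable
```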

Application to multivariable signals and images

Fig.5: Subsampled image showing a moiré pattern


The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely: one for the row, and one for the column. Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, LAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.

Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high frequencies (in other words, the distance between the stripes is small) can cause aliasing of the shirt when it is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The "solution" to higher sampling in the spatial domain for this case would be to move closer to the shirt, use a higher resolution sensor, or to optically blur the image before acquiring it with the sensor.

Another example is shown to the left in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a low-pass filter first and then downsamples the image to result in a smaller image that does not exhibit the moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results.

The application of the sampling theorem to images should be made with care. For example, the sampling process in any standard image sensor (CCD or CMOS camera) is relatively far from the ideal sampling which would measure the image intensity at a single point. Instead these devices have a relatively large sensor area at each sample point in order to obtain sufficient amount of light. In other words, any detector has a finite-width point spread function. The analog optical image intensity function which is sampled by the sensor device is not in general bandlimited, and the non-ideal sampling is itself a useful type of low-pass filter, though not always sufficient to remove enough high frequencies to sufficiently reduce aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) is typically included in a camera system to further blur the optical image. Despite images having these problems in relation to the sampling theorem, the theorem can be used to describe the basics of down and up sampling of images.

Downsampling
When a signal is downsampled, the sampling theorem can be invoked via the artifice of resampling a hypothetical continuous-time reconstruction. The Nyquist criterion must still be satisfied with respect to the new lower sampling frequency in order to avoid aliasing. To meet the requirements of the theorem, the signal must usually pass through a low-pass filter of appropriate cutoff frequency as part of the downsampling operation. This low-pass filter, which prevents aliasing, is called an anti-aliasing filter.
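A sketch of this filter-then-decimate step using SciPy (the rates and signal are my choices; scipy.signal.decimate applies its own anti-aliasing filter internally):

```python
import numpy as np
from scipy.signal import decimate

fs = 44_100                        # original sampling rate, Hz
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 15_000 * t)

# Downsample by 4: new fs = 11025 Hz, new Nyquist = 5512.5 Hz.
# decimate() low-pass filters before keeping every 4th sample, so the
# 15 kHz component is removed rather than aliased into the result.
y = decimate(x, 4)

# Naive downsampling for contrast: no filter, so 15 kHz aliases.
y_aliased = x[::4]
```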

Critical frequency

Fig.7: A family of sinusoids at the critical frequency, all having the same sample sequences of alternating +1 and −1. That is, they all are aliases of each other, even though their frequency is not above half the sample rate.

To illustrate the necessity of fs > 2B, consider the sinusoid:

$x(t) = \cos(2\pi B t + \theta) = \cos(2\pi B t)\cos(\theta) - \sin(2\pi B t)\sin(\theta)$

With fs = 2B or equivalently T = 1/(2B), the samples are given by:

$x(nT) = \cos(\pi n + \theta) = \cos(\pi n)\cos(\theta) = (-1)^n \cos(\theta)$

since sin(πn) = 0. Those samples cannot be distinguished from the samples of:

$x_A(t) = \cos(2\pi B t)\cos(\theta)$

But for any θ such that sin(θ) ≠ 0, x(t) and xA(t) have different amplitudes and different phase. This and other ambiguities are the reason for the strict inequality of the sampling theorem's condition.
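The ambiguity at the critical frequency can be checked directly; a minimal sketch of the two signals above producing identical samples:

```python
import numpy as np

B = 2.0                        # sinusoid frequency, Hz
fs = 2 * B                     # critical sampling rate fs = 2B
T = 1.0 / fs
n = np.arange(10)
theta = 0.7

x = np.cos(2 * np.pi * B * n * T + theta)            # samples (-1)^n cos(theta)
x_A = np.cos(2 * np.pi * B * n * T) * np.cos(theta)  # same samples
assert np.allclose(x, x_A)     # two different signals, one sample sequence
```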

Mathematical reasoning for the theorem

Fig.8: Spectrum, Xs(f), of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A brick-wall low-pass filter, H(f), removes the images, leaves the original spectrum, X(f), and recovers the original signal from the samples.

From Figures 3 and 8, it is apparent that when there is no overlap of the copies (aka "images") of X(f), the k = 0 term of Xs(f) can be recovered by the product:

$X(f) = H(f) \cdot X_s(f)$

where:

$H(f) = \begin{cases} 1, & |f| < B \\ 0, & |f| > f_s - B \end{cases}$

H(f) need not be precisely defined in the region [B, fs − B] because Xs(f) is zero in that region. However, the worst case is when B = fs/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

$H(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) = \begin{cases} 1, & |f| < \frac{f_s}{2} \\ 0, & |f| > \frac{f_s}{2} \end{cases}$

where rect(u) is the rectangular function. Therefore:

$X(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) \cdot X_s(f) = \mathrm{rect}(Tf) \cdot \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f}$ (from Eq. 1, above).

The original function that was sampled can be recovered by an inverse Fourier transform:

$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right)$ [3]

which is the Whittaker–Shannon interpolation formula. It shows explicitly how the samples, x(nT), can be combined to reconstruct x(t).
From Figure 8, it is clear that larger-than-necessary values of fs (smaller values of T), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which H(f) is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.

Theoretically, the interpolation formula can be implemented as a low-pass filter, whose impulse response is sinc(t/T) and whose input is $\sum_{n} x(nT)\,\delta(t - nT)$, which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DAC) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error.

Shannon's original proof

The original proof presented by Shannon is elegant and quite brief, but it offers less intuitive insight into the subtleties of aliasing, both unintentional and intentional. Quoting Shannon's original paper, which uses f for the function, F for the spectrum, and W for the bandwidth limit:

Let F(ω) be the spectrum of f(t). Then

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega = \frac{1}{2\pi} \int_{-2\pi W}^{2\pi W} F(\omega)\, e^{i\omega t}\, d\omega$

since F(ω) is assumed to be zero outside the band W. If we let

$t = \frac{n}{2W}$

where n is any positive or negative integer, we obtain

$f\!\left(\frac{n}{2W}\right) = \frac{1}{2\pi} \int_{-2\pi W}^{2\pi W} F(\omega)\, e^{i\omega \frac{n}{2W}}\, d\omega$

On the left are values of f(t) at the sampling points. The integral on the right will be recognized as essentially the nth coefficient in a Fourier-series expansion of the function F(ω), taking the interval −W to W as a fundamental period. This means that the values of the samples f(n/2W) determine the Fourier coefficients in the series expansion of F(ω). Thus they determine F(ω), since F(ω) is zero for frequencies greater than W, and for lower frequencies F(ω) is determined if its Fourier coefficients are determined. But F(ω) determines the original function f(t) completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function f(t) completely.

Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect (the rectangular function) and sinc was well known. Quoting Shannon:

Let xn be the nth sample. Then the function f(t) is represented by:

$f(t) = \sum_{n=-\infty}^{\infty} x_n \,\frac{\sin\!\big(\pi(2Wt - n)\big)}{\pi(2Wt - n)}$

As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.

Sampling of non-baseband signals

As discussed by Shannon:[1]

A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(x)/x by single-side-band modulation.

That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval as opposed to its highest frequency component. See Sampling (signal processing) for more details and examples.

A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies:

$\left( \frac{N}{2} f_s,\; \frac{N+1}{2} f_s \right)$

for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0. The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:

$(N+1)\, f_s\, \mathrm{sinc}\big((N+1) f_s t\big) - N f_s\, \mathrm{sinc}\big(N f_s t\big)$

Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.

Nonuniform sampling

The sampling theory of Shannon can be generalized for the case of nonuniform samples, that is, samples not taken equally spaced in time. The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.[4] Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, it is not a necessary condition for perfect reconstruction.

The general theory for non-baseband and nonuniform samples was developed in 1967 by Landau.[5] He proved that, to paraphrase roughly, the average sampling rate (uniform or otherwise) must be twice the occupied bandwidth of the signal, assuming it is a priori known what portion of the spectrum was occupied. In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth was known, but the actual occupied portion of the spectrum was unknown.[6] In the 2000s, a complete theory was developed (see the section Beyond Nyquist below) using compressed sensing. In particular, the theory, using signal processing language, is described in this 2009 paper.[7] They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the Nyquist criteria; in other words, you must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability.
Beyond Nyquist

The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition.

A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the effective bandwidth EB), but whose frequency locations are unknown, rather than all together in a single band, so that the passband technique doesn't apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2B. Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly greater than 2EB. The downside of this approach is that reconstruction is no longer given by a formula, but instead by the solution to a convex optimization program which requires well-studied but nonlinear methods.
Historical background

The sampling theorem was implied by the work of Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result,[8] and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step response Integralsinus; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English).

The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon in 1949 ("Communication in the presence of noise"). V. A. Kotelnikov published similar results in 1933 ("On the transmission capacity of the 'ether' and of cables in electrical communications", translation from the Russian), as did the mathematician E. T. Whittaker in 1915 ("Expansions of the Interpolation Theory", "Theorie der Kardinalfunktionen"), J. M. Whittaker in 1935 ("Interpolatory function theory"), and Gabor in 1946 ("Theory of communication").

Other discoverers

Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example by Jerri[9] and by Lüke.[10] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering[11] mentions several other discoverers and names in a paragraph and pair of footnotes:

As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].27 As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.28

27 Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].

28 As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as "the Whittaker–Kotel'nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].

Why Nyquist?

Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs,[12] and appeared again in 1963,[13] and not capitalized in 1965.[14] It had been called the Shannon Sampling Theorem as early as 1954,[15] but also just the sampling theorem by several other books in the early 1950s.

In 1958, Blackman and Tukey[16] cited Nyquist's 1928 paper as a reference for the sampling theorem of information theory, even though that paper does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:

Sampling theorem (of information theory)
Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See Cardinal theorem.)

Cardinal theorem (of interpolation theory)
A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated to yield a continuous band-limited function with the aid of the function

$\frac{\sin\big(\pi(x - x_i)\big)}{\pi(x - x_i)}$

Exactly what "Nyquist's result" they are referring to remains mysterious.

When Shannon stated and proved the sampling theorem in his 1949 paper, according to Meijering, "he referred to the critical sampling interval T = 1/(2W) as the Nyquist interval corresponding to the band W, in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy." This explains Nyquist's name on the critical interval, but not on the theorem.

Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:[17]

"If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less [than] half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed a Nyquist interval." (bold added for emphasis; italics as in the original)

According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.

NyquistShannon sampling theorem


From Wikipedia, the free encyclopedia (Redirected from Nyquist theorem) Jump to: navigation, search

Fig.1: Hypothetical spectrum of a bandlimited signal as a function of frequency

The NyquistShannon sampling theorem, after Harry Nyquist and Claude Shannon, is a fundamental result in the field of information theory, in particular telecommunications and signal processing. Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space). Shannon's version of the theorem states:[1] If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart. The theorem is commonly called the Nyquist sampling theorem; since it was also discovered independently by E. T. Whittaker, by Vladimir Kotelnikov, and by others, it is also known as NyquistShannonKotelnikov, WhittakerShannonKotelnikov, WhittakerNyquist KotelnikovShannon, WKS, etc., sampling theorem, as well as the Cardinal Theorem of Interpolation Theory. It is often referred to simply as the sampling theorem. In essence, the theorem shows that a bandlimited analog signal that has been sampled can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency of the original signal. If a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely determine the signal, Shannon's statement notwithstanding. This sufficient condition can be weakened, as discussed at Sampling of non-baseband signals below. More recent statements of the theorem are sometimes careful to exclude the equality condition; that is, the condition is if x(t) contains no frequencies higher than or equal to B; this condition is equivalent to Shannon's except when the function includes a steady sinusoidal component at exactly frequency B. The theorem assumes an idealization of any real-world situation, as it only applies to signals that are sampled for infinite time; any time-limited x(t) cannot be perfectly bandlimited. Perfect reconstruction is mathematically possible for the idealized model but only an approximation for real-world signals and sampling techniques, albeit in practice often a very good one. The theorem also leads to a formula for reconstruction of the original signal. The constructive proof of the theorem leads to an understanding of the aliasing that can occur when a sampling system does not satisfy the conditions of the theorem. The sampling theorem provides a sufficient condition, but not a necessary one, for perfect reconstruction. The field of compressed sensing provides a stricter sampling condition when the underlying signal is known to be sparse. Compressed sensing specifically yields a sub-Nyquist sampling criterion.

Contents

1 Introduction
2 The sampling process
3 Reconstruction
4 Practical considerations
5 Aliasing
6 Application to multivariable signals and images
7 Downsampling
8 Critical frequency
9 Mathematical reasoning for the theorem
10 Shannon's original proof
11 Sampling of non-baseband signals
12 Nonuniform sampling
13 Beyond Nyquist
14 Historical background
14.1 Other discoverers
14.2 Why Nyquist?
15 See also
16 Notes
17 References
18 External links

[edit] Introduction
A signal or function is bandlimited if it contains no energy at frequencies higher than some bandlimit or bandwidth B. The sampling theorem asserts that, given such a bandlimited signal, the uniformly spaced discrete samples are a complete representation of the signal as long as the sampling rate is larger than twice the bandwidth B. To formalize these concepts, let x(t) represent a continuous-time signal and X(f) be the continuous Fourier transform of that signal:
X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt
The signal x(t) is said to be bandlimited to a one-sided baseband bandwidth, B, if

X(f) = 0 \quad \text{for all} \quad |f| > B,

or, equivalently, supp(X) \subseteq [-B, B].[2] Then the sufficient condition for exact reconstructability from samples at a uniform sampling rate fs (in samples per unit time) is:
f_s > 2B.
The quantity 2B is called the Nyquist rate and is a property of the bandlimited signal, while fs/2 is called the Nyquist frequency and is a property of this sampling system. The time interval between successive samples is referred to as the sampling interval:
T = \frac{1}{f_s}
and the samples of x(t) are denoted by:
x[n] = x(nT),
where n is an integer. The sampling theorem leads to a procedure for reconstructing the original x(t) from the samples and states sufficient conditions for such a reconstruction to be exact.

[edit] The sampling process


The theorem describes two processes in signal processing: a sampling process, in which a continuous time signal is converted to a discrete time signal, and a reconstruction process, in which the original continuous signal is recovered from the discrete time signal. The continuous signal varies over time (or space in a digitized image, or another independent variable in some other application) and the sampling process is performed by measuring the continuous signal's value every T units of time (or space), which is called the sampling interval. Sampling results in a sequence of numbers, called samples, to represent the original signal. Each sample value is associated with the instant in time when it was measured. The reciprocal of the sampling interval (1/T) is the sampling frequency denoted fs, which is measured in samples per unit of time. If T is expressed in seconds, then fs is expressed in hertz.
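A minimal Python sketch of this sampling step; the 50 Hz test tone and 1 kHz sampling rate are arbitrary illustrative choices, not values from the article:

import numpy as np

fs = 1000.0          # sampling frequency fs, in samples per second (Hz)
T = 1.0 / fs         # sampling interval, in seconds

def x(t):
    """A continuous-time signal: a 50 Hz tone, well below fs/2 = 500 Hz."""
    return np.sin(2 * np.pi * 50.0 * t)

n = np.arange(100)       # sample indices
samples = x(n * T)       # x[n] = x(nT): the discrete-time representation
print(samples[:5])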

[edit] Reconstruction
Reconstruction of the original signal is an interpolation process that mathematically defines a continuous-time signal x(t) from the discrete samples x[n], including at times in between the sample instants nT.

Fig.2: The normalized sinc function, sin(πx) / (πx), showing the central peak at x = 0 and zero-crossings at the other integer values of x.

The procedure: Each sample value is multiplied by the sinc function, scaled so that the zero-crossings of the sinc function occur at the sampling instants and so that the sinc function's central point is shifted to the time of that sample, nT. All of these shifted and scaled functions are then added together to recover the original signal. The scaled and time-shifted sinc functions are continuous, making the sum of these also continuous, so the result of this operation is a continuous signal. This procedure is represented by the Whittaker–Shannon interpolation formula.

The condition: The signal obtained from this reconstruction process can have no frequencies higher than one-half the sampling frequency. According to the theorem, the reconstructed signal will match the original signal provided that the original signal contains no frequencies at or above this limit. This condition is called the Nyquist criterion, or sometimes the Raabe condition.

If the original signal contains a frequency component equal to one-half the sampling rate, the condition is not satisfied. The resulting reconstructed signal may have a component at that frequency, but the amplitude and phase of that component generally will not match the original component. This reconstruction or interpolation using sinc functions is not the only interpolation scheme. Indeed, it is impossible in practice because it requires summing an infinite number of terms. However, it is the interpolation method that in theory exactly reconstructs any given bandlimited x(t) with any bandlimit B < 1/(2T); any other method that does so is formally equivalent to it.
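The following Python sketch implements a truncated version of this sinc interpolation; the test tone, rate, and window are arbitrary illustrative choices, and the truncation itself introduces the interpolation error discussed below:

import numpy as np

def sinc_reconstruct(samples, T, t):
    """Finite approximation of the Whittaker-Shannon formula:
    x(t) = sum_n x[n] * sinc((t - nT)/T). Exact reconstruction would need
    infinitely many samples; truncating the sum leaves a small error."""
    n = np.arange(len(samples))
    # np.sinc(u) computes sin(pi*u)/(pi*u), the normalized sinc used here.
    return np.array([np.sum(samples * np.sinc((tk - n * T) / T)) for tk in t])

# Demonstration: a 3 Hz tone sampled at 32 Hz, far above its 6 Hz Nyquist rate.
fs, T = 32.0, 1.0 / 32.0
n = np.arange(256)
samples = np.sin(2 * np.pi * 3.0 * n * T)
t = np.linspace(2.0, 6.0, 50)              # interior points, away from edge effects
exact = np.sin(2 * np.pi * 3.0 * t)
print(np.max(np.abs(sinc_reconstruct(samples, T, t) - exact)))  # small, from truncation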

[edit] Practical considerations


A few consequences can be drawn from the theorem:

If the highest frequency B in the original signal is known, the theorem gives the lower bound on the sampling frequency for which perfect reconstruction can be assured. This lower bound to the sampling frequency, 2B, is called the Nyquist rate.

If instead the sampling frequency is known, the theorem gives an upper bound for frequency components, B < fs/2, of the signal to allow for perfect reconstruction. This upper bound is the Nyquist frequency, denoted fN.

Both of these cases imply that the signal to be sampled must be bandlimited; that is, any component of this signal which has a frequency above a certain bound should be zero, or at least sufficiently close to zero to allow us to neglect its influence on the resulting reconstruction. In the first case, the condition of bandlimitation of the sampled signal can be accomplished by assuming a model of the signal which can be analysed in terms of the frequency components it contains; for example, sounds that are made by a speaking human normally contain very small frequency components at or above 10 kHz, and it is then sufficient to sample such an audio signal with a sampling frequency of at least 20 kHz. For the second case, we have to assure that the sampled signal is bandlimited such that frequency components at or above half of the sampling frequency can be neglected. This is usually accomplished by means of a suitable low-pass filter; for example, if it is desired to sample speech waveforms at 8 kHz, the signals should first be lowpass filtered to below 4 kHz.

In practice, neither of the two statements of the sampling theorem described above can be completely satisfied, and neither can the reconstruction formula be precisely implemented. The reconstruction process that involves scaled and delayed sinc functions can be described as ideal. It cannot be realized in practice, since it implies that each sample contributes to the reconstructed signal at almost all time points, requiring summing an infinite number of terms. Instead, some type of approximation of the sinc functions, finite in length, has to be used. The error that corresponds to the sinc-function approximation is referred to as interpolation error.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions nor ideal impulses (that if ideally low-pass filtered would yield the original signal), but a sequence of scaled and delayed rectangular pulses. This practical piecewise-constant output can be modeled as a zero-order hold filter driven by the sequence of scaled and delayed Dirac impulses referred to in the mathematical basis section below. A shaping filter is sometimes used after the DAC with zero-order hold to make a better overall approximation.

Furthermore, in practice, a signal can never be perfectly bandlimited, since ideal "brick-wall" filters cannot be realized. All practical filters can only attenuate frequencies outside a certain range, not remove them entirely. In addition to this, a "time-limited" signal can never be bandlimited. This means that even if an ideal reconstruction could be made, the reconstructed signal would not be exactly the original signal. The error that corresponds to the failure of bandlimitation is referred to as aliasing. The sampling theorem does not say what happens when the conditions and procedures are not exactly met, but its proof suggests an analytical framework in which the non-ideality can be studied. A designer of a system that deals with sampling and reconstruction processes needs a thorough understanding of the signal to be sampled, in particular its frequency content, the sampling frequency, how the signal is reconstructed in terms of interpolation, and the requirement for the total reconstruction error, including aliasing, sampling, interpolation and other errors. These properties and parameters may need to be carefully tuned in order to obtain a useful system.
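As a sketch of such a processing chain, the following Python fragment low-pass filters a signal to below 4 kHz before reducing its rate to 8 kHz, mirroring the speech example above; the 48 kHz input rate, the Butterworth design, and its order are illustrative assumptions rather than prescriptions:

import numpy as np
from scipy import signal

fs_in = 48000          # incoming rate, Hz (assumed for the example)
fs_out = 8000          # desired rate, Hz (speech example from the text)
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 7000 * t)

# Anti-aliasing low-pass: keep content below fs_out/2 = 4 kHz. An 8th-order
# Butterworth is one common, realizable (non-brick-wall) choice.
sos = signal.butter(8, 0.9 * (fs_out / 2), btype='low', fs=fs_in, output='sos')
x_filtered = signal.sosfiltfilt(sos, x)     # zero-phase filtering

x_down = x_filtered[:: fs_in // fs_out]     # then keep every 6th sample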

[edit] Aliasing
Main article: Aliasing

The Poisson summation formula shows that the samples, x[n] = x(nT), of function x(t) are sufficient to create a periodic summation of function X(f). The result is:

X_s(f) \ \stackrel{\mathrm{def}}{=}\ \sum_{k=-\infty}^{\infty} X(f - k f_s) = \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f} \qquad \text{(Eq.1)}

As depicted in Figures 3, 4, and 8, copies of X(f) are shifted by multiples of fs and combined by addition.
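A quick numerical check of one consequence of Eq.1, namely that Xs(f) is periodic with period fs, since every term e^{-i 2 pi n T f} is unchanged when f shifts by fs; all values below are arbitrary:

import numpy as np

fs, N = 100.0, 64
T = 1.0 / fs
x = np.random.default_rng(1).standard_normal(N)   # any sample sequence

def Xs(f):
    n = np.arange(N)
    return np.sum(T * x * np.exp(-2j * np.pi * n * T * f))

print(np.allclose(Xs(13.7), Xs(13.7 + fs)))   # True: the copies repeat every fs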

Fig.3: Hypothetical spectrum of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A "brick-wall" low-pass filter can remove the images and leave the original spectrum, thus recovering the original signal from the samples.

If the sampling condition is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above fs/2 is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. The reconstruction technique described below produces the alias, rather than the original component, in such cases.

Fig.4 Top: Hypothetical spectrum of an insufficiently sampled bandlimited signal (blue), X(f), where the images (green) overlap. These overlapping edges or "tails" of the images add, creating a spectrum unlike the original. Bottom: Hypothetical spectrum of a marginally sufficiently sampled bandlimited signal (blue), XA(f), where the images (green) narrowly do not overlap. But the overall sampled spectrum of XA(f) is identical to the overall inadequately sampled spectrum of X(f) (top) because the sums of baseband and images are the same in both cases. The discrete sampled signals xA[n] and x[n] are also identical. It is not possible, just from examining the spectra (or the sampled signals), to tell the two situations apart. If this were an audio signal, xA[n] and x[n] would sound the same, and the presumed "properly" sampled xA[n] would be the alias of x[n], since the spectrum XA(f) masquerades as the spectrum X(f).

For a sinusoidal component of exactly half the sampling frequency, the component will in general alias to another sinusoid of the same frequency, but with a different phase and amplitude. To prevent or reduce aliasing, two things can be done:
1. Increase the sampling rate, to above twice some or all of the frequencies that are aliasing.
2. Introduce an anti-aliasing filter or make the anti-aliasing filter more stringent.

The purpose of the anti-aliasing filter is to restrict the bandwidth of the signal to satisfy the condition for proper sampling. Such a restriction works in theory, but it is not precisely satisfiable in reality, because realizable filters will always allow some leakage of high frequencies. However, the leakage energy can be made small enough that the aliasing effects are negligible.
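A small numerical illustration of such an alias, using arbitrarily chosen frequencies:

import numpy as np

# With fs = 1000 Hz, an 800 Hz tone is indistinguishable, sample for sample,
# from its 200 Hz alias (fs - 800 Hz).
fs = 1000.0
n = np.arange(32)
t = n / fs
high = np.cos(2 * np.pi * 800.0 * t)     # above fs/2 = 500 Hz
alias = np.cos(2 * np.pi * 200.0 * t)    # its low-frequency alias
print(np.allclose(high, alias))           # True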

[edit] Application to multivariable signals and images

Fig.5: Subsampled image showing a moiré pattern

Fig.6: Brick patterns: the top image, downsampled without low-pass filtering, shows aliasing; the lower thumbnail was low-pass filtered before downsampling.

The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely: one for the row, and one for the column. Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors: red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, LAB, and XYZ. Some colorspaces, such as cyan, magenta, yellow, and black (CMYK), may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.

Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high spatial frequencies (in other words, where the distance between the stripes is small) can exhibit aliasing when it is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The "solution" to higher sampling in the spatial domain for this case would be to move closer to the shirt, to use a higher resolution sensor, or to optically blur the image before acquiring it with the sensor.

Another example is shown to the left in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image), it, in effect, runs the image through a low-pass filter first and then downsamples the image, producing a smaller image that does not exhibit the moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results.

The application of the sampling theorem to images should be made with care. For example, the sampling process in any standard image sensor (CCD or CMOS camera) is relatively far from the ideal sampling which would measure the image intensity at a single point. Instead these devices have a relatively large sensor area at each sample point in order to obtain a sufficient amount of light. In other words, any detector has a finite-width point spread function. The analog optical image intensity function which is sampled by the sensor device is not in general bandlimited, and the non-ideal sampling is itself a useful type of low-pass filter, though not always sufficient to remove enough high frequencies to sufficiently reduce aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) is typically included in a camera system to further blur the optical image. Despite images having these problems in relation to the sampling theorem, the theorem can be used to describe the basics of down and up sampling of images.
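A toy Python sketch of the rescaling point above: a fine checkerboard subsampled directly aliases to a flat image, while a crude box blur (standing in here for a proper low-pass filter) averages it toward gray first:

import numpy as np

def box_blur(img, k):
    """Average over k x k neighborhoods via a separable running mean."""
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, blurred)

img = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)  # fine checkerboard: worst case
naive = img[::4, ::4]                 # subsampling alone: the pattern aliases to a flat 0
proper = box_blur(img, 4)[::4, ::4]   # pre-filtered: the stripes average to mid-gray
print(naive[:2, :2], proper[:2, :2], sep='\n')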

[edit] Downsampling
When a signal is downsampled, the sampling theorem can be invoked via the artifice of resampling a hypothetical continuous-time reconstruction. The Nyquist criterion must still be satisfied with respect to the new lower sampling frequency in order to avoid aliasing. To meet the requirements of the theorem, the signal must usually pass through a low-pass filter of appropriate cutoff frequency as part of the downsampling operation. This low-pass filter, which prevents aliasing, is called an anti-aliasing filter.
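In Python, scipy.signal.decimate bundles exactly these two steps, applying an anti-aliasing low-pass filter and then keeping every q-th sample; the tone and factor below are arbitrary:

import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)        # arbitrary 440 Hz test tone

q = 4                                  # downsampling factor
y = signal.decimate(x, q)              # filtered, then resampled to fs/q = 11025 Hz
print(len(x), len(y))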

[edit] Critical frequency

Fig.7: A family of sinusoids at the critical frequency, all having the same sample sequences of alternating +1 and −1. That is, they are all aliases of each other, even though their frequency is not above half the sample rate.

To illustrate the necessity of fs > 2B, consider the sinusoid:
x(t) = \cos(2\pi B t + \theta)
With fs = 2B or equivalently T = 1/(2B), the samples are given by:
x(nT) = \cos(\pi n + \theta) = \cos(\pi n)\cos(\theta) - \sin(\pi n)\sin(\theta) = (-1)^n \cos(\theta)
Those samples cannot be distinguished from the samples of:
x_A(t) = \cos(2\pi B t)\cos(\theta)
But for any θ such that sin(θ) ≠ 0, x(t) and xA(t) have different amplitudes and a different phase. This and other ambiguities are the reason for the strict inequality of the sampling theorem's condition.
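This ambiguity is easy to verify numerically; the band edge B and phase θ below are arbitrary:

import numpy as np

B = 100.0                       # arbitrary band edge, Hz
T = 1.0 / (2.0 * B)             # critical sampling interval, fs = 2B
theta = 0.9                     # any phase with sin(theta) != 0
n = np.arange(16)

x_samples = np.cos(2 * np.pi * B * n * T + theta)
xA_samples = np.cos(2 * np.pi * B * n * T) * np.cos(theta)
print(np.allclose(x_samples, xA_samples))                  # True: samples coincide
print(np.allclose(x_samples, (-1.0)**n * np.cos(theta)))   # True: both are (-1)^n cos(theta)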

[edit] Mathematical reasoning for the theorem

Fig.8: Spectrum, Xs(f), of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A brick-wall low-pass filter, H(f), removes the images, leaves the original spectrum, X(f), and recovers the original signal from the samples.

From Figures 3 and 8, it is apparent that when there is no overlap of the copies (aka "images") of X(f), the k = 0 term of Xs(f) can be recovered by the product:

X(f) = H(f) \cdot X_s(f),
where:
H(f) = \begin{cases} 1, & |f| \le B \\ 0, & |f| \ge f_s - B \end{cases}
H(f) need not be precisely defined in the region [B, fs − B] because Xs(f) is zero in that region. However, the worst case is when B = fs/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is:
H(f) = \mathrm{rect}\left(\frac{f}{f_s}\right) = \begin{cases} 1, & |f| < \frac{f_s}{2} \\ 0, & |f| > \frac{f_s}{2} \end{cases}
where rect(u) is the rectangular function. Therefore:
X(f) = \mathrm{rect}\left(\frac{f}{f_s}\right) \cdot X_s(f) = \mathrm{rect}(Tf) \cdot \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f}
(from Eq.1, above).

The original function that was sampled can be recovered by an inverse Fourier transform:
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \mathrm{sinc}\left(\frac{t - nT}{T}\right) \qquad [3]

which is the Whittaker–Shannon interpolation formula. It shows explicitly how the samples, x(nT), can be combined to reconstruct x(t).
From Figure 8, it is clear that larger-than-necessary values of fs (smaller values of T), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which H(f) is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.

Theoretically, the interpolation formula can be implemented as a low-pass filter whose impulse response is sinc(t/T) and whose input is

\sum_{n=-\infty}^{\infty} x(nT)\, \delta(t - nT),

which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DACs) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error.
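A minimal sketch of the zero-order hold just mentioned, holding each sample for one interval to form the staircase output of a basic DAC; the rates are arbitrary:

import numpy as np

def zero_order_hold(samples, upsample):
    """Repeat each sample 'upsample' times: a piecewise-constant waveform."""
    return np.repeat(samples, upsample)

fs = 40.0                                   # arbitrary sample rate, Hz
n = np.arange(40)
samples = np.sin(2 * np.pi * 2.0 * n / fs)  # one second of a 2 Hz tone
staircase = zero_order_hold(samples, 16)    # dense grid for comparison
dense_t = np.arange(len(staircase)) / (16 * fs)
error = np.max(np.abs(staircase - np.sin(2 * np.pi * 2.0 * dense_t)))
print(error)   # shrinks as fs grows, illustrating why oversampling helps a ZOH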

[edit] Shannon's original proof


The original proof presented by Shannon is elegant and quite brief, but it offers less intuitive insight into the subtleties of aliasing, both unintentional and intentional. Quoting Shannon's original paper, which uses f for the function, F for the spectrum, and W for the bandwidth limit:
Let F(ω) be the spectrum of f(t). Then

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega = \frac{1}{2\pi} \int_{-2\pi W}^{2\pi W} F(\omega)\, e^{i\omega t}\, d\omega,

since F(ω) is assumed to be zero outside the band W. If we let

t = \frac{n}{2W},

where n is any positive or negative integer, we obtain

f\left(\frac{n}{2W}\right) = \frac{1}{2\pi} \int_{-2\pi W}^{2\pi W} F(\omega)\, e^{i\omega \frac{n}{2W}}\, d\omega.

On the left are values of f(t) at the sampling points. The integral on the right will be recognized as essentially the nth coefficient in a Fourier-series expansion of the function F(ω), taking the interval −W to W as a fundamental period. This means that the values of the samples f(n/2W) determine the Fourier coefficients in the series expansion of F(ω). Thus they determine F(ω), since F(ω) is zero for frequencies greater than W, and for lower frequencies F(ω) is determined if its Fourier coefficients are determined. But F(ω) determines the original function f(t) completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function f(t) completely.
Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect (the rectangular function) and sinc was well known. Quoting Shannon:

Let x_n be the nth sample. Then the function f(t) is represented by:

f(t) = \sum_{n=-\infty}^{\infty} x_n\, \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)}.

As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.

[edit] Sampling of non-baseband signals

As discussed by Shannon:[1]

A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(x)/x by single-side-band modulation.

That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval as opposed to its highest frequency component. See Sampling (signal processing) for more details and examples. A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies:

\left( \frac{N}{2} f_s,\ \frac{N+1}{2} f_s \right)

for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0. The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall low-pass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of low-pass impulse responses:

(N+1)\, \operatorname{sinc}\left(\frac{(N+1)t}{T}\right) - N\, \operatorname{sinc}\left(\frac{Nt}{T}\right)

Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.

[edit] Nonuniform sampling


The sampling theory of Shannon can be generalized for the case of nonuniform samples, that is, samples not taken equally spaced in time. The Shannon sampling theory for nonuniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.[4] Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, uniform spacing is not a necessary condition for perfect reconstruction.

The general theory for non-baseband and nonuniform samples was developed in 1967 by Landau.[5] He proved that, to paraphrase roughly, the average sampling rate (uniform or otherwise) must be twice the occupied bandwidth of the signal, assuming it is a priori known what portion of the spectrum was occupied. In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth was known, but the actual occupied portion of the spectrum was unknown.[6] In the 2000s, a complete theory was developed (see the section Beyond Nyquist below) using compressed sensing. In particular, the theory, using signal processing language, is described in a 2009 paper by Mishali and Eldar.[7] They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the minimal rate given by Landau's criterion; in other words, one must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability.

[edit] Beyond Nyquist


The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition.

A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the effective bandwidth EB) but whose frequency locations are unknown, rather than all together in a single band, so that the passband technique does not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2B. Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly greater than 2EB. The downside of this approach is that reconstruction is no longer given by a formula, but instead by the solution to a convex optimization program which requires well-studied but nonlinear methods.
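As a toy illustration only, the sketch below recovers a spectrally sparse signal from random sub-Nyquist time samples using orthogonal matching pursuit, a greedy stand-in for the convex optimization described above (chosen here to keep the example dependency-free); the sizes, seed, and dictionary are all invented for the demonstration, and this is not the method of the cited paper:

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4                      # length, measurements, sparsity (all arbitrary)

# A signal that is sparse in frequency: k complex tones at unknown bins.
F = np.fft.ifft(np.eye(n), axis=0)        # dictionary: column j is the j-th complex tone
support = rng.choice(n, size=k, replace=False)
x = F[:, support] @ (rng.standard_normal(k) + 1j * rng.standard_normal(k))

# Sub-Nyquist measurement: keep only m random time samples out of n.
rows = rng.choice(n, size=m, replace=False)
A, y = F[rows, :], x[rows]

# Greedy recovery by orthogonal matching pursuit.
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(A.conj().T @ residual))))  # best-matching tone
    coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)       # re-fit chosen tones
    residual = y - A[:, chosen] @ coef

x_hat = F[:, chosen] @ coef
print(sorted(chosen) == sorted(support))   # typically True for this setup
print(np.max(np.abs(x - x_hat)))           # near zero when the support is found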

[edit] Historical background


The sampling theorem was implied by the work of Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result,[8] and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step response Integralsinus; this band-limiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English).

The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon in 1949 ("Communication in the presence of noise"). V. A. Kotelnikov published similar results in 1933 ("On the transmission capacity of the 'ether' and of cables in electrical communications", translation from the Russian), as did the mathematician E. T. Whittaker in 1915 ("Expansions of the Interpolation Theory", "Theorie der Kardinalfunktionen"), J. M. Whittaker in 1935 ("Interpolatory function theory"), and Gabor in 1946 ("Theory of communication").

[edit] Other discoverers


Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example by Jerri[9] and by Lüke.[10] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering[11] mentions several other discoverers and names in a paragraph and pair of footnotes:

As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].27 As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.28

27 Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].

28 As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as "the Whittaker–Kotel'nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].

[edit] Why Nyquist?


Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs,[12] and appeared again in 1963,[13] and not capitalized in 1965.[14] It had been called the Shannon Sampling Theorem as early as 1954,[15] but also just the sampling theorem by several other books in the early 1950s.

In 1958, Blackman and Tukey[16] cited Nyquist's 1928 paper as a reference for the sampling theorem of information theory, even though that paper does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:
Sampling theorem (of information theory): Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See Cardinal theorem.)

Cardinal theorem (of interpolation theory): A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated to yield a continuous band-limited function with the aid of the function

Exactly what "Nyquist's result" they are referring to remains mysterious.

When Shannon stated and proved the sampling theorem in his 1949 paper, according to Meijering, "he referred to the critical sampling interval T = 1/(2W) as the Nyquist interval corresponding to the band W, in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy." This explains Nyquist's name on the critical interval, but not on the theorem.

Similarly, Nyquist's name was attached to the Nyquist rate in 1953 by Harold S. Black:[17]


"If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed a Nyquist interval." (bold added for emphasis; italics as in the original)

According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.

[edit] See also

Hartley's law
Reconstruction from zero crossings
Zero-order hold
The Cheung–Marks theorem specifies conditions where restoration of a signal by the sampling theorem can become ill-posed.
Balian–Low theorem, a similar theoretical lower bound on sampling rates, which applies to time–frequency transforms.

[edit] Notes

1. ^ a b C. E. Shannon, "Communication in the presence of noise", Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, Jan. 1949. Reprint as classic paper in: Proc. IEEE, vol. 86, no. 2, Feb. 1998.
2. ^ supp(X) denotes the support of X.
3. ^ The time-domain formula follows from row 102 of the transform table.
4. ^ Nonuniform Sampling: Theory and Practice (ed. F. Marvasti), Kluwer Academic/Plenum Publishers, New York, 2001.
5. ^ H. J. Landau, "Necessary density conditions for sampling and interpolation of certain entire functions", Acta Math., vol. 117, pp. 37–52, Feb. 1967.
6. ^ See, e.g., P. Feng, Universal minimum-rate sampling and spectrum-blind reconstruction for multiband signals, Ph.D. dissertation, University of Illinois at Urbana-Champaign, 1997.
7. ^ Moshe Mishali and Yonina C. Eldar, "Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals", IEEE Trans. Signal Processing, vol. 57, no. 3, March 2009.
8. ^ K. Küpfmüller, "Über die Dynamik der selbsttätigen Verstärkungsregler", Elektrische Nachrichtentechnik, vol. 5, no. 11, pp. 459–467, 1928 (German). English translation: K. Küpfmüller, "On the dynamics of automatic gain controllers", Elektrische Nachrichtentechnik, vol. 5, no. 11, pp. 459–467.
9. ^ Abdul Jerri, "The Shannon Sampling Theorem: Its Various Extensions and Applications: A Tutorial Review", Proceedings of the IEEE, vol. 65, pp. 1565–1596, Nov. 1977. See also the erratum: Proceedings of the IEEE, vol. 67, p. 695, April 1979.
10. ^ Hans Dieter Lüke, "The Origins of the Sampling Theorem", IEEE Communications Magazine, pp. 106–108, April 1999.
11. ^ a b Erik Meijering, "A Chronology of Interpolation: From Ancient Astronomy to Modern Signal and Image Processing", Proceedings of the IEEE, vol. 90, no. 3, pp. 319–342, March 2002.
12. ^ Members of the Technical Staff of Bell Telephone Laboratories, Transmission Systems for Communications, p. 26-4 (Vol. 2), 1959.
13. ^ Ernst Adolph Guillemin, Theory of Linear Physical Systems, 1963.
14. ^ Richard A. Roberts and Ben F. Barton, Theory of Signal Detectability: Composite Deferred Decision Theory, 1965.
15. ^ Truman S. Gray, Applied Electronics: A First Course in Electronics, Electron Tubes, and Associated Circuits, 1954.
16. ^ R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra from the Point of View of Communications Engineering, New York: Dover, 1959.
17. ^ Harold S. Black, Modulation Theory, 1953.

[edit] References

J. R. Higgins, "Five short stories about the cardinal series", Bulletin of the AMS, vol. 12, no. 1, pp. 45–89, 1985.
V. A. Kotelnikov, "On the carrying capacity of the ether and wire in telecommunications", Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA, Moscow, 1933 (Russian; English translation available).
Karl Küpfmüller, "Utjämningsförlopp inom Telegraf- och Telefontekniken" ("Transients in telegraph and telephone engineering"), Teknisk Tidskrift, no. 9, pp. 153–160, and no. 10, pp. 178–182, 1931.
R. J. Marks II, Introduction to Shannon Sampling and Interpolation Theory, Springer-Verlag, 1991.
R. J. Marks II (ed.), Advanced Topics in Shannon Sampling and Interpolation Theory, Springer-Verlag, 1993.
R. J. Marks II, Handbook of Fourier Analysis and Its Applications, Oxford University Press, 2009; available on Google books.
H. Nyquist, "Certain topics in telegraph transmission theory", Trans. AIEE, vol. 47, pp. 617–644, April 1928. Reprint as classic paper in: Proc. IEEE, vol. 90, no. 2, Feb. 2002.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP, "Section 13.11. Numerical Use of the Sampling Theorem", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, 2007.
C. E. Shannon, "Communication in the presence of noise", Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, Jan. 1949. Reprint as classic paper in: Proc. IEEE, vol. 86, no. 2, Feb. 1998.
Michael Unser, "Sampling: 50 Years After Shannon", Proc. IEEE, vol. 88, no. 4, pp. 569–587, April 2000.
E. T. Whittaker, "On the Functions Which are Represented by the Expansions of the Interpolation Theory", Proc. Royal Soc. Edinburgh, vol. 35, pp. 181–194, 1915.
J. M. Whittaker, Interpolatory Function Theory, Cambridge Univ. Press, Cambridge, England, 1935.

[edit] External links

Learning by Simulations: interactive simulation of the effects of inadequate sampling
Undersampling and an application of it
Sampling Theory for Digital Audio
Journal devoted to Sampling Theory
"The Origins of the Sampling Theorem" by Hans Dieter Lüke, published in IEEE Communications Magazine
