Pulse-code modulation (PCM) is a digital representation of an analog signal in which the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a numeric (usually binary) code. PCM has been used in digital telephone systems and 1980s-era electronic musical keyboards. It is also the standard form for digital audio in computers and the compact disc "Red Book" format, and is standard in digital video, for example in ITU-R BT.601. Uncompressed PCM is not typically used for video in standard-definition consumer applications such as DVD or DVR because the required bit rate is far too high.

Modulation
Sampling and quantization of a signal (red) for 4-bit PCM.

In the diagram, a sine wave (red curve) is sampled and quantized for PCM. The sine wave is sampled at regular intervals, shown as ticks on the x-axis. For each sample, one of the available values (ticks on the y-axis) is chosen by some algorithm (in this case, the floor function is used). This produces a fully discrete representation of the input signal (shaded area) that can be easily encoded as digital data for storage or manipulation. For the sine wave example at right, the quantized values at the sampling moments are 7, 9, 11, 12, 13, 14, 14, 15, 15, 15, 14, etc. Encoding these values as binary numbers results in the following set of nibbles: 0111, 1001, 1011, 1100, 1101, 1110, 1110, 1111, 1111, 1111, 1110, etc. These digital values can then be further processed or analyzed by a purpose-specific digital signal processor or a general-purpose CPU.

Several PCM streams can also be multiplexed into a larger aggregate data stream, generally for transmission of multiple streams over a single physical link. This technique is called time-division multiplexing (TDM) and is widely used, notably in the modern public telephone system.

There are many ways to implement a real device that performs this task. In real systems, such a device is commonly implemented on a single integrated circuit that lacks only the clock necessary for sampling, and is generally referred to as an analog-to-digital converter (ADC). These devices produce on their output a binary representation of the input whenever they are triggered by a clock signal, which is then read by a processor of some sort.
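The sample-and-quantize step described above can be sketched in a few lines of Python. This is an illustrative sketch, not the figure's exact setup: the input range, the 16-level mapping, and the clamping at full scale are assumptions, so the printed codes will not reproduce the figure's values.

```python
import math

def quantize_4bit(x):
    """Map a sample from [-1.0, 1.0] onto one of 16 levels (0..15) using floor."""
    level = math.floor((x + 1.0) / 2.0 * 16)   # scale to [0, 16), then floor
    return min(level, 15)                       # clamp the x = 1.0 edge case

# Sample one period of a sine wave at 16 evenly spaced instants.
samples = [math.sin(2 * math.pi * n / 16) for n in range(16)]
codes = [quantize_4bit(s) for s in samples]
nibbles = [format(c, "04b") for c in codes]    # 4-bit binary strings

print(codes)
print(nibbles)
```

Each code is a nibble (4 bits), so a byte of storage holds two samples at this resolution.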

Demodulation
To produce output from the sampled data, the procedure of modulation is applied in reverse. After each sampling period has passed, the next value is read and the output signal is shifted to the new value. As a result of these transitions, the signal has a significant amount of high-frequency energy. To smooth out the signal and remove these undesirable frequencies, it is passed through analog filters that suppress energy outside the expected frequency range (that is, greater than the Nyquist frequency fs/2). Some systems use digital filtering to remove some of the aliasing, converting the signal from digital to analog at a higher sample rate so that the analog anti-aliasing filter can be much simpler. In some systems, no explicit filtering is done at all; since it is impossible for any system to reproduce a signal with infinite bandwidth, inherent losses in the system compensate for the artifacts, or the system simply does not require much precision.

The sampling theorem suggests that practical PCM devices, provided a sampling frequency that is sufficiently greater than that of the input signal, can operate without introducing significant distortions within their designed frequency bands. The electronics involved in producing an accurate analog signal from the discrete data are similar to those used for generating the digital signal. These devices are digital-to-analog converters (DACs), and operate similarly to ADCs. They produce on their output a voltage or current (depending on type) that represents the value presented on their inputs. This output is then generally filtered and amplified for use.
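The "hold the value, then smooth it" behaviour described above can be sketched as follows. The moving-average filter here merely stands in for the analog low-pass filter; it is an illustrative assumption, not what real reconstruction filters look like.

```python
def zero_order_hold(samples, upsample):
    """Hold each sample value for `upsample` output points (staircase output)."""
    out = []
    for s in samples:
        out.extend([s] * upsample)
    return out

def smooth(signal, width):
    """Crude moving-average low-pass filter to soften the staircase edges."""
    half = width // 2
    result = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        result.append(sum(window) / len(window))
    return result

stair = zero_order_hold([0.0, 1.0, 0.5], 4)
print(stair)            # each value held for 4 output points
print(smooth(stair, 3))  # transitions are now gradual
```

The staircase transitions are what put high-frequency energy into the raw DAC output; averaging over neighbours is the simplest possible way to see that energy being suppressed.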

Limitations
There are two sources of impairment implicit in any PCM system:

Choosing a discrete value near the analog signal for each sample (quantization error)

The quantization error swings between −Q/2 and +Q/2, where Q is the quantization step size. In the ideal case (with a fully linear ADC) it is uniformly distributed over this interval, so its mean equals zero while its root-mean-square value equals Q/√12 (equivalently, the error variance equals Q²/12).
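These statistics can be checked numerically. The sketch below assumes a round-to-nearest (mid-tread) quantizer and a uniformly distributed input, which together give the uniform error distribution stated above.

```python
import math
import random

random.seed(1)
Q = 1.0 / 16          # quantization step for a 4-bit quantizer spanning [0, 1)

def quantize(x, q=Q):
    """Round x to the nearest multiple of q (a fully linear quantizer)."""
    return round(x / q) * q

# Draw many random samples and collect the quantization errors.
errors = [quantize(x) - x for x in (random.random() for _ in range(200_000))]

mean = sum(errors) / len(errors)
rms = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"mean ~ {mean:.6f}")   # close to 0
print(f"rms  ~ {rms:.6f}")    # close to Q / sqrt(12)
```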

Between samples no measurement of the signal is made; by the sampling theorem, this results in any frequency at or above fs/2 (fs being the sampling frequency) being distorted or lost completely (aliasing error). fs/2 is also called the Nyquist frequency.

As samples are dependent on time, an accurate clock is required for accurate reproduction. If either the encoding or decoding clock is not stable, its frequency drift will directly affect the output quality of the device. A slight difference between the encoding and decoding clock frequencies is not generally a major concern; a small constant error is not noticeable. Clock error does become a major issue if the clock is not stable, however. A drifting clock, even with a relatively small error, will cause very obvious distortions in audio and video signals, for example.

Digitization as part of the PCM process
In conventional PCM, the analog signal may be processed (e.g. by amplitude compression) before being digitized. Once the signal is digitized, the PCM signal is usually subjected to further processing (e.g. digital data compression).

Some forms of PCM combine signal processing with coding. Older versions of these systems applied the processing in the analog domain as part of the A/D process; newer implementations do so in the digital domain. These simple techniques have been largely rendered obsolete by modern transform-based audio compression techniques.

DPCM encodes the PCM values as differences between the current and the predicted value. An algorithm predicts the next sample based on the previous samples, and the encoder stores only the difference between this prediction and the actual value. If the prediction is reasonable, fewer bits can be used to represent the same information. For audio, this type of encoding reduces the number of bits required per sample by about 25% compared to PCM. Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of the quantization step, to allow further reduction of the required bandwidth for a given signal-to-noise ratio. Delta modulation, another variant, uses one bit per sample.
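A minimal sketch of DPCM with a previous-sample predictor, the simplest possible prediction; real codecs use more elaborate predictors and quantize the differences rather than storing them exactly.

```python
def dpcm_encode(samples):
    """Encode samples as differences from the previous sample (the prediction)."""
    diffs, prev = [], 0
    for s in samples:
        diffs.append(s - prev)   # store only prediction error
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Rebuild the original samples by accumulating the differences."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

pcm = [7, 9, 11, 12, 13, 14, 14, 15]   # the quantized sine values from earlier
diffs = dpcm_encode(pcm)
print(diffs)   # [7, 2, 2, 1, 1, 1, 0, 1]
```

Note how the differences are small; small numbers need fewer bits than the raw sample values, which is the entire point of the scheme.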

In telephony, a standard audio signal for a single phone call is encoded as 8000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital signal known as DS0. The default signal compression encoding on a DS0 is either μ-law (mu-law) PCM (North America and Japan) or A-law PCM (Europe and most of the rest of the world). These are logarithmic compression systems where a 12 or 13-bit linear PCM sample number is mapped into an 8-bit value. This system is described by international standard G.711. An alternative proposal for a floating point representation, with 5-bit mantissa and 3-bit radix, was abandoned. Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8-bit µ-law or A-law PCM samples into a series of 4-bit ADPCM samples. In this way, the capacity of the line is doubled. The technique is detailed in the G.726 standard. Later it was found that even further compression was possible and additional standards were published. Some of these international standards describe systems and ideas which are covered by privately owned patents and thus use of these standards requires payments to the patent holders.
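The logarithmic mapping can be sketched with the continuous μ-law companding formula. This is the analytic curve, not the segmented 8-bit encoding tables actually specified in G.711, so treat it as an illustration of the principle only.

```python
import math

MU = 255  # the mu-law parameter used in North America and Japan

def mu_compress(x, mu=MU):
    """Continuous mu-law compression of x in [-1, 1]."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_expand(y, mu=MU):
    """Inverse of mu_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

# Quiet signals get proportionally more of the output range than loud ones.
print(mu_compress(0.01))   # far larger than 0.01
print(mu_compress(0.5))    # much closer to its input
```

The curve spends most of its output range on small amplitudes, which is why an 8-bit companded sample can match the perceived quality of a 12- or 13-bit linear one for speech.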

Encoding for transmission
Pulse-code modulation can be either return-to-zero (RZ) or non-return-to-zero (NRZ). For an NRZ system to be synchronized using in-band information, there must not be long sequences of identical symbols, such as ones or zeroes. For binary PCM systems, the density of 1-symbols is called ones-density. Ones-density is often controlled using precoding techniques such as run-length limited encoding, where the PCM code is expanded into a slightly longer code with a guaranteed bound on ones-density before modulation into the channel. In other cases, extra framing bits are added into the stream which guarantee at least occasional symbol transitions.

Another technique used to control ones-density is the use of a scrambler polynomial on the raw data, which tends to turn the raw data stream into a stream that looks pseudorandom but can be recovered exactly by reversing the effect of the polynomial. In this case, long runs of zeroes or ones are still possible on the output, but are considered unlikely enough to be within normal engineering tolerance.

In other cases, the long-term DC value of the modulated signal is important, as building up a DC offset will tend to bias detector circuits out of their operating range. In this case special measures are taken to keep a count of the cumulative DC offset, and to modify the codes if necessary to make the DC offset always tend back to zero. Many of these codes are bipolar codes, where the pulses can be positive, negative, or absent. In the typical alternate mark inversion (AMI) code, non-zero pulses alternate between being positive and negative. These rules may be violated to generate special symbols used for framing or other special purposes.
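A minimal sketch of alternate mark inversion: each 1 becomes a pulse of alternating polarity, each 0 becomes no pulse, so the running DC offset stays near zero by construction.

```python
def ami_encode(bits):
    """Alternate mark inversion: 0 -> no pulse, 1 -> pulses of alternating polarity."""
    out, polarity = [], 1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(polarity)
            polarity = -polarity   # the next mark uses the opposite polarity
    return out

line = ami_encode([1, 0, 1, 1, 0, 1])
print(line)   # [1, 0, -1, 1, 0, -1]
```

Because successive marks cancel, the cumulative sum of the line signal never strays further than one pulse from zero.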

History
In the history of electrical communications, the earliest reason for sampling a signal was to interlace samples from different telegraphy sources and convey them over a single telegraph cable. Telegraph time-division multiplexing (TDM) was applied as early as 1853 by the American inventor M. B. Farmer. The electrical engineer W. M. Miner, in 1903, used an electro-mechanical commutator for time-division multiplexing of multiple telegraph signals, and also applied this technology to telephony. He obtained intelligible speech from channels sampled at a rate above 3500–4300 Hz; below this rate the results were unsatisfactory. This was TDM, but pulse-amplitude modulation (PAM) rather than PCM.

Paul M. Rainey of Western Electric in 1926 patented a facsimile machine using an optical-mechanical analog-to-digital converter. The machine did not go into production. British engineer Alec Reeves, unaware of previous work, conceived the use of PCM for voice communication in 1937 while working for International Telephone and Telegraph in France. He described the theory and advantages, but no practical use resulted. Reeves filed for a French patent in 1938, and his U.S. patent was granted in 1943.

The first transmission of speech by digital techniques was the SIGSALY vocoder encryption equipment used for high-level Allied communications during World War II, from 1943. That year, the Bell Labs researchers who designed the SIGSALY system became aware of the use of PCM binary coding as already proposed by Alec Reeves. In 1949, for the Canadian Navy's DATAR system, Ferranti Canada built a working PCM radio system that was able to transmit digitized radar data over long distances.

PCM in the 1950s used a cathode-ray coding tube with a grid having encoding perforations. As in an oscilloscope, the beam was swept horizontally at the sample rate while the vertical deflection was controlled by the input analog signal, causing the beam to pass through higher or lower portions of the perforated grid. The grid interrupted the beam, producing current variations in binary code. Rather than natural binary, the grid was perforated to produce Gray code, lest a sweep along a transition zone produce glitches.

Nomenclature
The word pulse in the term pulse-code modulation refers to the "pulses" found in the transmission line. This is perhaps a natural consequence of this technique having evolved alongside two analog methods, pulse-width modulation and pulse-position modulation, in which the information to be encoded is represented by discrete signal pulses of varying width or position, respectively. In this respect, PCM bears little resemblance to these other forms of signal encoding, except that all can be used in time-division multiplexing, and the binary numbers of the PCM codes are represented as electrical pulses. The device that performs the coding and decoding function in a telephone circuit is called a codec.


Pulse-position modulation
Pulse-position modulation (PPM) is a form of signal modulation in which M message bits are encoded by transmitting a single pulse in one of 2^M possible time-shifts. This is repeated every T seconds, such that the transmitted bit rate is M/T bits per second. It is primarily useful for optical communications systems, where there tends to be little or no multipath interference.

Synchronization
One of the key difficulties of implementing this technique is that the receiver must be properly synchronized to align the local clock with the beginning of each symbol. Therefore, it is often implemented differentially as differential pulse-position modulation, whereby each pulse position is encoded relative to the previous one, such that the receiver must only measure the difference in the arrival time of successive pulses. It is possible to limit the propagation of errors to adjacent symbols, so that an error in measuring the differential delay of one pulse will affect only two symbols, instead of affecting all successive measurements.

Sensitivity to multipath interference
Aside from the issues regarding receiver synchronization, the key disadvantage of PPM is that it is inherently sensitive to the multipath interference that arises in channels with frequency-selective fading, whereby the receiver's signal contains one or more echoes of each transmitted pulse. Since the information is encoded in the time of arrival (either differentially or relative to a common clock), the presence of one or more echoes can make it extremely difficult, if not impossible, to accurately determine the correct pulse position corresponding to the transmitted pulse.

Non-coherent detection
One of the principal advantages of PPM is that it is an M-ary modulation technique that can be implemented non-coherently, such that the receiver does not need to use a phase-locked loop (PLL) to track the phase of the carrier. This makes it a suitable candidate for optical communications systems, where coherent phase modulation and detection are difficult and extremely expensive. The only other common M-ary non-coherent modulation technique is M-ary frequency-shift keying (M-FSK), which is the frequency-domain dual of PPM.

PPM vs. M-FSK
PPM and M-FSK systems with the same bandwidth, average power, and transmission rate of M/T bits per second have identical performance in an AWGN (additive white Gaussian noise) channel. However, their performance differs greatly when comparing frequency-selective and frequency-flat fading channels. Whereas frequency-selective fading produces echoes that are highly disruptive for any of the M time-shifts used to encode PPM data, it selectively disrupts only some of the M possible frequency-shifts used to encode data for M-FSK. Conversely, frequency-flat fading is more disruptive for M-FSK than for PPM, as all M of the possible frequency-shifts are impaired by fading, while the short duration of the PPM pulse means that only a few of the M time-shifts are heavily impaired. Optical communications systems (even wireless ones) tend to have weak multipath distortions, and PPM is a viable modulation scheme in many such applications. Narrowband RF (radio frequency) channels with low power and long wavelengths (i.e., low frequency) are affected primarily by flat fading, and PPM is better suited than M-FSK to these scenarios.

Applications for RF communications
One common application with these channel characteristics, first used in the early 1960s, is the radio control of model aircraft, boats, and cars. PPM is employed in these systems, with the position of each pulse representing the angular position of an analogue control on the transmitter, or possible states of a binary switch. The number of pulses per frame gives the number of controllable channels available. The advantage of using PPM for this type of application is that the electronics required to decode the signal are extremely simple, which leads to small, light-weight receiver/decoder units. (Model aircraft require parts that are as lightweight as possible.)

PPM implementation
This figure illustrates how PPM is used to control servos in RC applications. (Unfortunately, the image shows a PWM signal rather than PPM.)
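A toy sketch of the basic (non-differential) PPM mapping: the M message bits select one of 2^M slots, and the frame carries a single pulse in that slot. Pulse shapes, timing, and channel effects are ignored.

```python
def ppm_encode(bits):
    """Encode M message bits as a single pulse in one of 2**M time slots."""
    m = len(bits)
    slot = int("".join(str(b) for b in bits), 2)   # the bits choose the slot index
    frame = [0] * (2 ** m)
    frame[slot] = 1
    return frame

def ppm_decode(frame):
    """Recover the message bits from the position of the single pulse."""
    m = (len(frame) - 1).bit_length()
    slot = frame.index(1)
    return [int(c) for c in format(slot, f"0{m}b")]

frame = ppm_encode([1, 0, 1])   # 3 bits -> 8 slots, pulse in slot 5
print(frame)                    # [0, 0, 0, 0, 0, 1, 0, 0]
```

Note the trade-off visible even in the toy: each extra bit per symbol doubles the number of slots (and hence the required timing resolution) within the same frame.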

Servos made for model radio control include some of the electronics required to convert the pulse to the motor position; the receiver is merely required to demultiplex the separate channels and feed the pulses to each servo. More sophisticated R/C systems are now often based on pulse-code modulation, which is more complex but offers greater flexibility and reliability.

Multipath
Overview
In wireless telecommunications, multipath is the propagation phenomenon that results in radio signals reaching the receiving antenna by two or more paths. Causes of multipath include atmospheric ducting, ionospheric reflection and refraction, and reflection from water bodies and terrestrial objects such as mountains and buildings.

The effects of multipath include constructive and destructive interference, and phase shifting of the signal. This causes Rayleigh fading, named after Lord Rayleigh. The standard statistical model of this gives a distribution known as the Rayleigh distribution. Rayleigh fading with a strong line-of-sight component is said to have a Rician distribution, or to be Rician fading.

In facsimile and television transmission, multipath causes jitter and ghosting, seen as a faded duplicate image to the right of the main image. Ghosts occur when transmissions bounce off a mountain or other large object while also arriving at the antenna by a shorter, direct route, with the receiver picking up two signals separated by a delay.

In radar processing, multipath causes ghost targets to appear, deceiving the radar receiver. These ghosts are particularly bothersome since they move and behave like the normal targets (which they echo), so the receiver has difficulty isolating the correct target echo. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height.

In a Global Positioning System receiver, multipath signals can cause a stationary receiver's output to indicate as if it were randomly jumping about or creeping. When the unit is moving, the jumping or creeping is hidden, but it still degrades the displayed accuracy.

In digital radio communications (such as GSM), multipath can cause errors and affect the quality of communications. The errors are due to intersymbol interference (ISI). Equalisers are often used to correct the ISI. Alternatively, techniques such as orthogonal frequency-division multiplexing and Rake receivers may be used.

Multipath propagation in wired media
Multipath propagation may also happen in wired media, especially in cases where impedance mismatches cause signal reflections. A well-known example is power-line communication. High-speed power-line communication systems usually employ multi-carrier modulations (such as OFDM or wavelet OFDM) to avoid the intersymbol interference that multipath propagation would cause. The ITU-T G.hn standard provides a way to create a high-speed (up to 1 Gigabit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). G.hn uses OFDM with a cyclic prefix to avoid ISI. Because multipath propagation behaves differently in each kind of wire, G.hn uses different OFDM parameters (OFDM symbol duration, guard interval duration) for each medium.

Mathematical modeling
The mathematical model of multipath can be presented using the method of the impulse response used for studying linear systems. Suppose we transmit a single, ideal Dirac pulse of electromagnetic power at time 0:

x(t) = δ(t)

At the receiver, due to the presence of the multiple electromagnetic paths, more than one pulse will be received (we suppose here that the channel has infinite bandwidth, thus the pulse shape is not modified at all), and each one of them will arrive at a different time. In fact, since the electromagnetic signals travel at the speed of light, and since every path has a geometrical length possibly different from that of the others, there are different air travel times (consider that, in free space, light takes 3 μs to cross a 1 km span). Thus, the received signal will be expressed by

y(t) = Σn ρn·e^{jφn}·δ(t − τn),  n = 0, …, N − 1

where N is the number of received impulses (equivalent to the number of electromagnetic paths, and possibly very large), τn is the time delay of the generic nth impulse, and ρn·e^{jφn} represents the complex amplitude (i.e., magnitude and phase) of the generic received pulse.

As a consequence, y(t) also represents the impulse response function h(t) of the equivalent multipath model. More generally, in the presence of time variation of the geometrical reflection conditions, this impulse response is time varying, and as such we have

τn = τn(t), ρn = ρn(t), φn = φn(t)

Very often, just one parameter is used to denote the severity of multipath conditions: it is called the multipath time, TM, and it is defined as the time delay existing between the first and the last received impulses:

TM = τN−1 − τ0

In practical conditions and measurement, the multipath time is computed by considering as the last impulse the first one which allows the receiver to collect a determined amount (e.g., 99%) of the total transmitted power (scaled by the atmospheric and propagation losses).

Keeping our aim at linear, time-invariant systems, we can also characterize the multipath phenomenon by the channel transfer function H(f), which is defined as the continuous-time Fourier transform of the impulse response h(t):

H(f) = Σn ρn·e^{jφn}·e^{−j2πfτn}

where the last right-hand term is easily obtained by remembering that the Fourier transform of a Dirac pulse is a complex exponential function, an eigenfunction of every linear system.

The obtained channel transfer characteristic has the typical appearance of a sequence of peaks and valleys (also called notches). It can be shown that, on average, the distance (in Hz) between two consecutive valleys (or two consecutive peaks) is roughly inversely proportional to the multipath time. The so-called coherence bandwidth is thus defined as

Bc ≈ 1/TM

For example, with a multipath time of 3 μs (corresponding to 1 km of added on-air travel for the last received impulse), there is a coherence bandwidth of about 330 kHz.
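The two rules of thumb above (delay per extra kilometre of path, and coherence bandwidth as the reciprocal of the multipath time) can be checked numerically. These are the approximations from the text, not exact definitions.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def extra_delay(extra_path_m):
    """Extra arrival delay (s) for an echo travelling extra_path_m further."""
    return extra_path_m / C

def coherence_bandwidth(multipath_time_s):
    """Approximate coherence bandwidth (Hz) as the reciprocal of the multipath time."""
    return 1.0 / multipath_time_s

# 1 km of extra on-air travel gives roughly 3.3 microseconds of delay spread...
tm = extra_delay(1000.0)
print(f"multipath time  ~ {tm * 1e6:.2f} us")

# ...which corresponds to a coherence bandwidth of roughly 300 kHz.
print(f"coherence bw    ~ {coherence_bandwidth(3e-6) / 1e3:.0f} kHz")
```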

Nyquist–Shannon sampling theorem
The Nyquist–Shannon sampling theorem is a fundamental result in the field of information theory, in particular telecommunications and signal processing. Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space). The theorem states:[1]

If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

In essence, the theorem shows that an analog signal that has been sampled can be perfectly reconstructed from the samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency in the original signal. The theorem also leads to a formula for reconstruction of the original signal. The constructive proof of the theorem leads to an understanding of the aliasing that can occur when a sampling system does not satisfy the conditions of the theorem.

The assumptions necessary to prove the theorem form a mathematical model that is only an idealization of any real-world situation. The conclusion that perfect reconstruction is possible is mathematically correct for the model, but only an approximation for actual signals and actual sampling techniques.

The Nyquist–Shannon sampling theorem gives a sufficient condition, not a necessary one. The field of compressed sensing imposes a stricter condition on the signal itself (sparsity) and in return yields a sub-Nyquist sampling criterion when the underlying signal is known to be sparse.

Introduction
A signal or function is bandlimited if it contains no energy at frequencies higher than some bandlimit or bandwidth B. A signal that is bandlimited is constrained in how rapidly it changes in time, and therefore how much detail it can convey in an interval of time. The sampling theorem asserts that the uniformly spaced discrete samples are a complete representation of the signal if this bandwidth is less than half the sampling rate.

Shannon's statement notwithstanding, if a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely determine the signal. More recent statements of the theorem are therefore sometimes careful to state the condition as: x(t) contains no frequencies higher than or equal to B. This condition is equivalent to Shannon's except when the function includes a steady sinusoidal component at exactly frequency B.

To formalize these concepts, let x(t) represent a continuous-time signal and X(f) be the Fourier transform of that signal. The signal x(t) is bandlimited to a one-sided baseband bandwidth B if:

X(f) = 0 for all |f| > B

or, equivalently, supp(X) ⊆ [−B, B].[2]

The sampling process
The theorem describes two processes in signal processing: a sampling process, in which a continuous-time signal is converted to a discrete-time signal, and a reconstruction process, in which the original continuous signal is recovered from the discrete-time signal.

The continuous signal varies over time (or space in a digitized image, or another independent variable in some other application), and the sampling process is performed by measuring the continuous signal's value every T units of time (or space). T is called the sampling interval. In practice, for signals that are a function of time, the sampling interval is typically quite small, on the order of milliseconds, microseconds, or less. This results in a sequence of numbers, called samples, that represent the original signal. Each sample value is associated with the instant in time when it was measured; the samples of x(t) are denoted by x[n] = x(nT), for integer n.

The reciprocal of the sampling interval, 1/T, is the sampling frequency, denoted fs, which is measured in samples per unit of time. If T is expressed in seconds, then fs is expressed in Hz.

Reconstruction of the original signal is an interpolation process that mathematically defines a continuous-time signal x(t) from the discrete samples x[n], at times in between the sample instants nT.

Then the sufficient condition for exact reconstructability from samples at a uniform sampling rate fs (in samples per unit time) is:

fs > 2B

or, equivalently, B < fs/2. The quantity 2B is called the Nyquist rate and is a property of the bandlimited signal, while fs/2 is called the Nyquist frequency and is a property of this sampling system.

Fig. 2: The normalized sinc function, sin(πx)/(πx), showing the central peak at x = 0 and zero-crossings at the other integer values of x.
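The sufficient condition is a one-line check; the sketch below also shows why strict inequality matters (sampling at exactly twice the bandwidth does not qualify).

```python
def nyquist_ok(bandwidth_hz, sample_rate_hz):
    """True when the sampling rate strictly exceeds twice the signal bandwidth."""
    return sample_rate_hz > 2 * bandwidth_hz

# CD audio: 44.1 kHz sampling comfortably covers a 20 kHz audio band.
print(nyquist_ok(20_000, 44_100))   # True
print(nyquist_ok(25_000, 44_100))   # False
print(nyquist_ok(4_000, 8_000))     # False: equality is not sufficient
```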

According to the theorem, the reconstructed signal will match the original signal provided that the original signal contains no frequencies at or above this limit. This condition is called the Nyquist criterion, or sometimes the Raabe condition.

• The procedure: Each sample value is multiplied by the sinc function, scaled so that the zero-crossings of the sinc function occur at the sampling instants and so that the sinc function's central point is shifted to the time of that sample, nT. All of these shifted and scaled functions are then added together to recover the original signal. The scaled and time-shifted sinc functions are continuous, making the sum of these also continuous, so the result of this operation is a continuous signal. This procedure is represented by the Whittaker–Shannon interpolation formula.
• The condition: The signal obtained from this reconstruction process can have no frequencies higher than one-half the sampling frequency. If the original signal contains a frequency component equal to one-half the sampling rate, the condition is not satisfied; the resulting reconstructed signal may have a component at that frequency, but the amplitude and phase of that component generally will not match the original component.

Practical considerations
A few consequences can be drawn from the theorem:
• If the highest frequency B in the original signal is known, the theorem gives the lower bound on the sampling frequency for which perfect reconstruction can be assured. This lower bound to the sampling frequency, 2B, is called the Nyquist rate.
• If instead the sampling frequency is known, the theorem gives an upper bound, B < fs/2, on the frequency components of the signal to allow for perfect reconstruction. This upper bound is the Nyquist frequency, denoted fN.
• Both of these cases imply that the signal to be sampled must be bandlimited; that is, any component of the signal which has a frequency above a certain bound should be zero, or at least sufficiently close to zero to allow us to neglect its influence on the resulting reconstruction. In the first case, the condition of bandlimitation of the sampled signal can be accomplished by assuming a model of the signal which can be analysed in terms of the frequency components it contains; for example, sounds made by a speaking human normally contain very small frequency components at or above 10 kHz, and it is then sufficient to sample such an audio signal with a sampling frequency of at least 20 kHz.

This reconstruction or interpolation using sinc functions is not the only interpolation scheme. Indeed, it is impossible in practice because it requires summing an infinite number of terms. However, it is the interpolation method that in theory exactly reconstructs any given bandlimited x(t) with any bandlimit B < 1/(2T); any other method that does so is formally equivalent to it.
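A direct, truncated implementation of the Whittaker–Shannon interpolation formula follows. Truncating the infinite sum to the finitely many available samples is exactly the interpolation error discussed in the text, so the reconstruction is exact at the sample instants and only approximate in between.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation from samples x[n] = x(nT),
    truncated to the finite set of samples available."""
    return sum(x_n * sinc((t - n * T) / T) for n, x_n in enumerate(samples))

# A 50 Hz sine sampled at 1 kHz (well above the 100 Hz Nyquist rate).
fs, f = 1000.0, 50.0
T = 1.0 / fs
samples = [math.sin(2 * math.pi * f * n * T) for n in range(400)]

# Evaluate midway between two samples, away from the edges, where
# truncation error is smallest.
t = 200.5 * T
print(abs(reconstruct(samples, T, t) - math.sin(2 * math.pi * f * t)))
```

At a sample instant t = nT every other sinc term vanishes, so the formula returns the stored sample exactly; between samples, the finitely many neighbours give a close but not perfect value.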

In practice, neither of the two statements of the sampling theorem described above can be completely satisfied, and neither can the reconstruction formula be precisely implemented:

• A signal can never be perfectly bandlimited, since ideal "brick-wall" filters cannot be realized; all practical filters can only attenuate frequencies outside a certain range, not remove them entirely. Indeed, a "time-limited" signal can never be bandlimited. The error that corresponds to this failure of bandlimitation is referred to as aliasing.
• The reconstruction process that involves scaled and delayed sinc functions can be described as ideal, but it cannot be realized in practice: it requires summing an infinite number of terms, and it implies that each sample contributes to the reconstructed signal at almost all time points. Instead, some type of approximation of the sinc functions, finite in length, has to be used. The error that corresponds to this sinc-function approximation is referred to as interpolation error.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions nor ideal impulses (that, if ideally low-pass filtered, would yield the original signal), but a sequence of scaled and delayed rectangular pulses. This practical piecewise-constant output can be modeled as a zero-order hold filter driven by the sequence of scaled and delayed Dirac impulses referred to in the mathematical basis section below. A shaping filter is sometimes used after the DAC with zero-order hold to make a better overall approximation.

The sampling theorem does not say what happens when its conditions and procedures are not exactly met, but its proof suggests an analytical framework in which the non-ideality can be studied, including aliasing and interpolation error.

In addition, in practice we have to assure that the signal is bandlimited before sampling, such that frequency components at or above half of the sampling frequency can be neglected. This is usually accomplished by means of a suitable low-pass filter; for example, if it is desired to sample speech waveforms at 8 kHz, the signals should first be lowpass filtered to below 4 kHz. Likewise, since the human ear can hear frequencies up to about 20 kHz, it is sufficient to sample such an audio signal with a sampling frequency of at least 40 kHz.

A designer of a system that deals with sampling and reconstruction processes needs a thorough understanding of the signal to be sampled, in particular its frequency content, the sampling frequency, how the signal is reconstructed in terms of interpolation, and the requirement for the total reconstruction error. These properties and parameters may need to be carefully tuned in order to obtain a useful system.
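The ideal sinc reconstruction described above can be sketched numerically. This is an illustrative sketch, not a production resampler: the infinite sum is necessarily truncated to the available samples (which is exactly the interpolation error discussed above), and the tone frequency, sample rate, and evaluation window are arbitrary choices.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation: a sum of scaled, delayed sinc
    functions, truncated to the finite set of available samples."""
    n = np.arange(len(samples))
    # np.sinc(u) = sin(pi*u)/(pi*u), so each term is samples[k] * sinc((t - kT)/T)
    return np.sum(samples[None, :] * np.sinc((t[:, None] - n * T) / T), axis=1)

fs = 100.0            # sampling rate, well above the Nyquist rate for a 5 Hz tone
T = 1.0 / fs
n = np.arange(200)    # 2 seconds of samples
x_samples = np.sin(2 * np.pi * 5.0 * n * T)

# Evaluate between the sample instants, away from the truncated edges
t_fine = np.linspace(0.9, 1.1, 201)
x_rec = sinc_reconstruct(x_samples, T, t_fine)
x_true = np.sin(2 * np.pi * 5.0 * t_fine)
max_err = float(np.max(np.abs(x_rec - x_true)))  # small but nonzero: truncation
```

Far from the edges of the sample record the reconstruction matches the original tone closely; the residual error is the truncation (interpolation) error the text describes.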

Aliasing
Main article: Aliasing

The Poisson summation formula indicates that the samples of the function x(t) are sufficient to create a periodic extension of its spectrum X(f). The result, as depicted in Figures 3 and 4, is that copies of X(f) are shifted by multiples of the sampling frequency and combined by addition.

Fig. 3: Hypothetical spectrum of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A "brick-wall" low-pass filter can remove the images and leave the original spectrum, thus recovering the original signal from the samples.

If the sampling condition is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above half the sampling frequency is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies; the reconstruction technique described above then produces the alias rather than the original component. For a sinusoidal component of exactly half the sampling frequency, the component will in general alias to another sinusoid of the same frequency, but with a different amplitude and phase.

Fig. 4 Top: Hypothetical spectrum of an insufficiently sampled bandlimited signal X(f) (blue), where the images (green) overlap. These overlapping edges or "tails" of the images add, creating a spectrum unlike the original. Bottom: Hypothetical spectrum XA(f) of a marginally sufficiently sampled bandlimited signal (blue), where the images (green) narrowly do not overlap. But the overall sampled spectrum of XA(f) is identical to the overall inadequately sampled spectrum of X(f) (top), because the sum of baseband and images are the same in both cases. The discrete sampled signals xA[n] and x[n] are also identical. It is not possible, just from examining the spectra (or the sampled signals), to tell the two situations apart. If these were audio signals, xA[n] and x[n] would sound the same, and the presumed "properly" sampled xA[n] would be the alias of x[n], since the spectrum XA(f) masquerades as the spectrum X(f).

To prevent or reduce aliasing, two things can be done:
1. Increase the sampling rate, to above twice some or all of the frequencies that are aliasing.
2. Introduce an anti-aliasing filter, or make the anti-aliasing filter more stringent. The anti-aliasing filter restricts the bandwidth of the signal to satisfy the condition for proper sampling. Such a restriction works in theory, but is not precisely satisfiable in reality, because realizable filters will always allow some leakage of high frequencies.
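The indistinguishability described above is easy to demonstrate numerically. In this sketch (the frequencies are arbitrary illustrative choices), a 900 Hz sinusoid sampled at 1 kHz produces exactly the same sample sequence as a 100 Hz sinusoid of opposite phase:

```python
import numpy as np

fs = 1000.0               # sampling rate; the Nyquist frequency is 500 Hz
n = np.arange(64)
t = n / fs

x_high = np.sin(2 * np.pi * 900.0 * t)     # 900 Hz: violates the sampling condition
x_alias = -np.sin(2 * np.pi * 100.0 * t)   # its 100 Hz alias, with flipped phase

# The discrete sequences are identical: sin(2*pi*900*n/1000) = -sin(2*pi*100*n/1000)
identical = bool(np.allclose(x_high, x_alias, atol=1e-9))
```

Just as the text says, nothing in the sampled data distinguishes the two situations; only prior bandlimiting with an anti-aliasing filter rules out the 900 Hz interpretation.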

However, the leakage energy can be made small enough so that the aliasing effects are negligible.

Phase modulation

Phase modulation (PM) is a form of modulation that represents information as variations in the instantaneous phase of a carrier wave. Unlike its more popular counterpart, frequency modulation (FM), PM is not very widely used. This is because it tends to require more complex receiving hardware, and there can be ambiguity problems in determining whether, for example, the signal has changed phase by +180° or -180°.

Theory

An example of phase modulation: the top diagram shows the modulating signal superimposed on the carrier wave; the bottom diagram shows the resulting phase-modulated signal.

Suppose that the signal to be sent (called the modulating or message signal) is m(t), and the carrier onto which the signal is to be modulated is

    c(t) = Ac · sin(ωc·t + φc).

Then the modulated signal is

    y(t) = Ac · sin(ωc·t + m(t) + φc).

This shows how m(t) modulates the phase. Clearly, it can also be viewed as a change of the frequency of the carrier signal, and PM can thus be considered a special case of FM in which the carrier frequency modulation is given by the time derivative of the phase modulation.
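The modulation equation above can be sketched directly. The carrier amplitude, frequencies, and message below are arbitrary illustrative choices; the sanity check uses the fact that a constant message is just a fixed phase offset of the carrier.

```python
import numpy as np

A_c = 1.0
f_c = 50.0                              # carrier frequency (Hz), illustrative
t = np.linspace(0.0, 1.0, 10000)

def pm_modulate(m, t):
    """y(t) = A_c * sin(2*pi*f_c*t + m(t)): phase modulation by m(t)."""
    return A_c * np.sin(2 * np.pi * f_c * t + m)

m = 0.5 * np.sin(2 * np.pi * 3.0 * t)   # message signal m(t)
y = pm_modulate(m, t)

# A constant message of pi/2 turns the sine carrier into a cosine:
y_const = pm_modulate(np.full_like(t, np.pi / 2), t)
carrier_shifted = A_c * np.cos(2 * np.pi * f_c * t)
```

Note that, unlike amplitude modulation, the envelope of y is constant: all the information rides in the phase.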

which the carrier frequency modulation is given by the time derivative of the phase modulation.

The spectral behaviour of phase modulation is difficult to derive, but the mathematics reveals two regions of particular interest:

• For small amplitude signals, PM is similar to amplitude modulation (AM) and exhibits its unfortunate doubling of baseband bandwidth and poor efficiency.
• For a single large sinusoidal signal, PM is similar to FM, and its bandwidth is approximately 2(h + 1)fM, where fM = ωm / 2π and h is the modulation index defined below. This is also known as Carson's rule for PM.

Pulse-density modulation

Pulse-density modulation, or PDM, is a form of modulation used to represent an analog signal in the digital domain. In a PDM signal, specific amplitude values are not encoded into pulses as they would be in PCM. Instead, it is the relative density of the pulses that corresponds to the analog signal's amplitude.

Basics

In a pulse-density modulation bitstream, a 1 corresponds to a pulse of positive polarity (+A) and a 0 corresponds to a pulse of negative polarity (-A). Mathematically, this can be represented as

    x[n] = A·(2·a[n] - 1),

where x[n] is the bipolar bitstream (either -A or +A) and a[n] is the corresponding binary bitstream (either 0 or 1).

A run consisting of all 1s would correspond to the maximum (positive) amplitude value, all 0s would correspond to the minimum (negative) amplitude value, and alternating 1s and 0s would correspond to a zero amplitude value. The average amplitude of pulses is measured by the density of those pulses over time, and the continuous amplitude waveform is recovered by low-pass filtering the bipolar PDM bitstream. Pulse-width modulation (PWM) is the special case of PDM where all the pulses corresponding to one sample are contiguous in the digital signal.

Analog-to-digital conversion

A PDM bitstream is encoded from an analog signal through the process of delta-sigma modulation. This process uses a one-bit quantizer that produces either a 1 or a 0 depending on the amplitude of the analog signal. A 1 or 0 corresponds to a signal that is all the way up or all the way down; in the real world, analog signals are rarely at either extreme.

Because the analog amplitude rarely lies all the way in one direction, there is a quantization error: the difference between the 1 or 0 and the actual amplitude it represents. This error is fed back negatively in the ΔΣ process loop, so that every error successively influences every other quantization measurement and its error. This has the effect of averaging out the quantization error.

Examples

A single period of the trigonometric sine function, sampled 100 times and represented as a PDM bitstream, is:

01010110111101111111111111111111110111111011011010101001001000000100000000000000000000001000010010101

Two periods of a higher-frequency sine wave would appear as:

0101101111111111111101101010010000000000000100010011011101111111111111011010100100000000000000100101

In pulse-density modulation, a high density of 1s occurs at the peaks of the sine wave, while a low density of 1s occurs at the troughs of the sine wave. (In the accompanying figure, 100 such samples of one period of a sine wave are shown with 1s in blue and 0s in white, overlaid with the sine wave.)

Digital-to-analog conversion

The process of decoding a PDM signal into an analog one is simple: one only has to pass the PDM signal through an analog low-pass filter. This works because the function of a low-pass filter is essentially to average the signal, and the average amplitude of pulses is measured by the density of those pulses over time. A low-pass filter is thus the only step required in the decoding process.

Algorithm

A digital model of pulse-density modulation can be obtained from a digital model of the delta-sigma modulator. (The accompanying figure shows pulse-density modulation of a sine wave using this algorithm.) Consider a signal x[n] in the discrete time domain as the input to a first-order delta-sigma modulator, with y[n] the output. In the discrete frequency domain, the delta-sigma modulator's operation is represented by

    Y(z) = X(z) + E(z)·(1 - z⁻¹).

Here, E(z) is the frequency-domain quantization error of the delta-sigma modulator. The factor 1 - z⁻¹ represents a high-pass filter, so it is clear that E(z) contributes less to the output Y(z) at low frequencies, and more at high frequencies. This demonstrates the noise-shaping effect of the delta-sigma modulator: the quantization noise is "pushed" out of the low frequencies up into the high-frequency range.

Using the inverse Z-transform, we may convert this into a difference equation relating the input of the delta-sigma modulator to its output in the discrete time domain:

    y[n] = x[n] + e[n] - e[n - 1]

There are two additional constraints to consider. First, y[n] is represented as a single bit, meaning it can take on only two values; we choose y[n] = ±1 for convenience. Second, at each step the output sample y[n] is chosen so as to minimize the "running" quantization error e[n], and the quantization error of each sample is fed back into the input for the following sample:

    e[n] = y[n] - x[n] + e[n - 1]

Rearranging terms, this finally gives a formula for the output sample y[n] in terms of the input sample x[n]: y[n] is chosen to be +1 when x[n] - e[n - 1] ≥ 0, and -1 otherwise.
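The difference equations above translate directly into code. The following sketch (the signal and the filter length are arbitrary illustrative choices) encodes a sine wave as a ±1 PDM stream and recovers it with a moving-average low-pass filter, as described in the digital-to-analog conversion section:

```python
import numpy as np

def pdm_encode(x):
    """First-order delta-sigma modulation of x (values in [-1, 1]) into a
    bipolar PDM stream y (values in {-1, +1}).  At each step the output is
    chosen to minimise the running quantization error
        e[n] = y[n] - x[n] + e[n-1]."""
    y = np.empty_like(x)
    e = 0.0                                  # running quantization error
    for n, xn in enumerate(x):
        y[n] = 1.0 if xn - e >= 0.0 else -1.0
        e = y[n] - xn + e                    # feed the error back
    return y

def pdm_decode(y, win=32):
    """Recover the waveform by low-pass filtering (here a moving average)."""
    kernel = np.ones(win) / win
    return np.convolve(y, kernel, mode="same")

t = np.arange(4096)
x = 0.7 * np.sin(2 * np.pi * t / 1024.0)     # slow sine, well below Nyquist
y = pdm_encode(x)
x_hat = pdm_decode(y)

# Ignore filter edge effects when measuring the reconstruction error
err = float(np.max(np.abs(x_hat[64:-64] - x[64:-64])))
```

Because the loop pushes the quantization noise to high frequencies, even this crude boxcar filter recovers the sine to within a few percent.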

This algorithm can be applied directly to convert a pulse-code modulation signal into a PDM signal.

Quantization

In digital signal processing, quantization is the process of approximating ("mapping") a continuous range of values (or a very large set of possible discrete values) by a relatively small, finite set of discrete symbols or integer values. In other words, quantization can be described as a mapping that represents a finite continuous interval I = [a, b] of the range of a continuous-valued signal with a single number c, which is also on that interval. For example, rounding a real number in the interval [0, 100] to an integer is a quantization: rounding to the nearest integer (rounding ½ up) replaces each interval [c - .5, c + .5) with the number c, for integer c.

It is useful to distinguish three kinds of signals:
• Sampled signal (discrete signal): discrete time, continuous values.
• Quantized signal: continuous time, discrete values.
• Digital signal (sampled and quantized): discrete time, discrete values.

In signal processing, quantization refers to approximating the output by one of a discrete and finite set of values; replacing the input by a discrete set is called discretization, and is done by sampling. The resulting sampled signal is called a discrete signal (discrete time), and need not be quantized (it can have continuous values).

Applications

A common use of quantization is in the conversion of a discrete signal (a sampled continuous signal) into a digital signal: one both samples (discrete time) and quantizes the resulting sample values (discrete values). Both of these steps are performed in analog-to-digital converters, with the quantization level specified in bits. A specific example is compact disc (CD) audio, which is sampled at 44,100 Hz and quantized with 16 bits (2 bytes), so that each sample takes one of 65,536 (i.e. 2^16) possible values.

In electronics, adaptive quantization is a quantization process that varies the step size based on the changes of the input signal, as a means of efficient compression. Two approaches commonly used are forward adaptive quantization and backward adaptive quantization.

Definition

In signal processing, the quantization process is the necessary and natural follower of the sampling operation. It is necessary because in practice a digital computer with a general-purpose CPU is used to implement DSP algorithms, and computers can only process finite word length (finite resolution/precision) quantities. Any infinite-precision, continuous-valued signal must therefore be quantized to fit a finite resolution, so that it can be represented (stored) in CPU registers and memory. After quantization we produce a finite set of values which can be encoded by, say, binary techniques.

We should be aware that it is not the continuous values of the analog function as such that inhibit its binary encoding; rather, it is the existence of infinitely many such values, due to the definition of continuity, which would require infinitely many bits to represent. For example, we can design a quantizer with a single bit (just two levels) such that one level is "pi = 3.14..." (say, encoded with a 1) and the other level is "e = 2.7183..." (say, encoded with a 0).

As we can see, the quantized values of the signal in that example take on infinite precision (they are irrational numbers), but there are only two levels, and we can represent the output of the quantizer with a binary symbol. Concluding from this, it is not the discreteness of the quantized values that enables them to be encoded, but their finiteness, which enables encoding with a finite number of bits.

In theory there is no relation between the quantization values and the binary code words used to encode them (other than a table that shows the corresponding mapping, just as exemplified above). However, for practical reasons we may tend to use code words whose binary mathematical values are related to the quantization levels they encode. And if we wish to process the output of a quantizer within a DSP/CPU system (which is almost always the case), we cannot allow the representation levels of the quantizer to take on arbitrary values, but only a restricted range such that they can fit in computer registers.

A quantizer is identified with its number of levels M, its decision boundaries {di}, and the corresponding representation values {ri}. The output of a quantizer has two important properties: 1) a distortion resulting from the approximation, and 2) a bit rate resulting from the binary encoding of its levels. The quantizer design problem is therefore a rate-distortion optimization: the design of a quantizer usually means finding the sets {di} and {ri} such that a measure of optimality is satisfied, such as MMSEQ (minimum mean squared quantization error). If we are only allowed to use a fixed-length code for the output level encoding (the practical case), the problem reduces to a distortion minimization. Given the number of levels M, the optimal quantizer which minimizes the MSQE with respect to the given signal statistics is called the Max-Lloyd quantizer, which is a non-uniform type in general.

The most common quantizer type is the uniform one. It is simple to design and implement, and for most cases it suffices to get satisfactory results. Indeed, by the very inherent nature of the design process, a given quantizer will only produce optimal results for the assumed signal statistics; since it is very difficult to correctly predict those in advance, any static design will never produce actual optimal performance whenever the input statistics deviate from those of the design assumption. The only solution is to use an adaptive quantizer.

Mathematical description

Quantization is referred to as scalar quantization, since it operates on scalar (as opposed to multi-dimensional vector) input data. In general, a scalar quantization operator can be represented as

    Q(x) = g(⌊f(x)⌋),

where f(x) and g(i) are arbitrary real-valued functions, and ⌊·⌋ is the floor function, producing an integer result i = ⌊f(x)⌋ that is sometimes referred to as the quantization index. The integer-valued quantization index i is the representation that is typically stored or transmitted, and the final interpretation is constructed using g(i) when the data is later interpreted.

In computer audio and most other applications, a method known as uniform quantization is the most common. There are two common variations of uniform quantization, called mid-rise and mid-tread uniform quantizers. If x is a real-valued number between -1 and 1, a mid-rise uniform quantization operator that uses M bits of precision to represent each quantization index can be expressed as

    Q(x) = Δ · (⌊x/Δ⌋ + 0.5), with Δ = 2^-(M-1).

In this case the f(x) and g(i) operators are just multiplying scale factors (one multiplier being the inverse of the other), along with an offset in the g(i) function that places the representation value in the middle of the input region for each quantization index. The value 2^-(M-1) is often referred to as the quantization step size. Using this quantization law, assuming that the quantization noise is approximately uniformly distributed over the quantization step size (an assumption typically accurate for rapidly varying x or high M), and further assuming that the input signal x to be quantized is approximately uniformly distributed over the entire interval from -1 to 1, the signal-to-noise ratio (SNR) of the quantization can be computed via the 20 log rule as

    SNR ≈ 20·log10(2^M) = 6.02·M dB.

From this equation, it is often said that the SNR is approximately 6 dB per bit. For mid-tread uniform quantization, the offset of 0.5 would be added within the floor function instead of outside of it. Sometimes, mid-rise quantization is used without adding the offset of 0.5; this reduces the signal-to-noise ratio by approximately 6.02 dB, but may be acceptable for the sake of simplicity when the step size is small.
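The mid-rise quantizer and the 6 dB-per-bit rule can be checked numerically. The bit depth and sample count below are arbitrary choices for the sketch:

```python
import numpy as np

def midrise_quantize(x, M):
    """Mid-rise uniform quantizer for x in [-1, 1) using M bits.
    Returns the integer index i and the reconstruction value g(i)."""
    step = 2.0 ** (1 - M)                       # quantization step size 2^-(M-1)
    i = np.floor(x / step).astype(int)
    i = np.clip(i, -2 ** (M - 1), 2 ** (M - 1) - 1)
    return i, step * (i + 0.5)                  # representation value mid-interval

rng = np.random.default_rng(0)
M = 8
x = rng.uniform(-1.0, 1.0, 200000)              # input uniform over [-1, 1)
_, xq = midrise_quantize(x, M)

noise = xq - x
snr_db = float(10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2)))
# snr_db comes out close to 6.02 * M, i.e. about 48.2 dB for M = 8
```

The measured SNR lands within a small fraction of a decibel of the 6.02·M prediction, confirming the uniform-noise assumption for this input.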

In order to determine how many bits are necessary to effect a given precision, note that the number of values that can be expressed by N bits is equal to two to the Nth power. Suppose, for example, that it is necessary to record six significant decimal digits, that is to say, millionths. Since the common (base ten) logarithm of 2 is approximately 0.30102, the required number of bits is determined by rounding (6 / 0.30102), about 19.93, up to the nearest integer, viz. 20 bits. This type of quantization, where a set of binary digits (e.g. an arithmetic register in a CPU) is used to represent a quantity, is called Vernier quantization. It is also possible, although rather less efficient, to rely upon equally spaced quantization levels. This is only practical when a small range of values is expected to be captured: for example, a set of eight possible values requires eight equally spaced quantization levels, which is not unreasonable, although obviously less efficient than a mere trio of binary digits (bits),

In digital telephony, two popular quantization schemes are the 'A-law' (dominant in Europe) and 'μ-law' (dominant in North America and Japan). These schemes map discrete analog values to an 8-bit scale that is nearly linear for small values and then increases logarithmically as amplitude grows. Because the human ear's perception of loudness is roughly logarithmic, this provides a higher signal-to-noise ratio over the range of audible sound intensities for a given number of bits.

Quantization and data compression

Quantization plays a major part in lossy data compression. In many cases, quantization can be viewed as the fundamental element that distinguishes lossy data compression from lossless data compression, and the use of quantization is nearly always motivated by the need to reduce the amount of data needed to represent a signal. In modern compression technology, the entropy of the output of a quantizer matters more than the number of possible values of its output (the number of values being 2^M in the above example).

One example of a lossy compression scheme that uses quantization is JPEG image compression. During JPEG encoding, the data representing an image (typically 8 bits for each of three color components per pixel) is processed using a discrete cosine transform and is then quantized and entropy coded. By reducing the precision of the transformed values using quantization, the number of bits needed to represent the image can be reduced substantially: images can often be represented with acceptable quality using JPEG at less than 3 bits per pixel, as opposed to the typical 24 bits per pixel needed prior to JPEG compression. Even the original representation using 24 bits per pixel requires quantization for its PCM sampling structure. In some compression schemes, like MP3 or Vorbis, compression is also achieved by selectively discarding some data, an action that can be analyzed as a quantization process (e.g. a vector quantization process) or can be considered a different kind of lossy process.
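The bit-count arithmetic above generalizes to any number of decimal digits; a small helper (the function name is hypothetical) makes it explicit:

```python
import math

def bits_for_decimal_digits(d):
    """Bits needed so that 2**bits >= 10**d, i.e. enough binary range to
    cover d significant decimal digits."""
    return math.ceil(d / math.log10(2))

bits = bits_for_decimal_digits(6)   # 6 / 0.30102... ~= 19.93, rounded up to 20
```

As in the text, six decimal digits (millionths) require 20 bits, since 2^20 = 1,048,576 comfortably exceeds 10^6.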

but a set of, say, sixty-four possible values, requiring sixty-four equally spaced quantization levels, can be expressed using only six bits, which is obviously far more efficient.

Relation to quantization in nature

At the most fundamental level, some physical quantities are quantized. This is a result of quantum mechanics (see Quantization (physics)). Signals may be treated as continuous for mathematical simplicity by considering the small quantizations as negligible. In any practical application, this inherent quantization is irrelevant for two reasons. First, it is overshadowed by signal noise, the intrusion of extraneous phenomena present in the system upon the signal of interest. The second, which appears only in measurement applications, is the inaccuracy of instruments. Thus, although all physical signals are intrinsically quantized, the error introduced by modeling them as continuous is vanishingly small.

Signal-to-noise ratio

In engineering, signal-to-noise ratio (SNR) is a term for the power ratio between a signal (meaningful information) and the background noise:

    SNR = P_signal / P_noise = (A_signal / A_noise)²,

where P is average power and A is root mean square (RMS) amplitude (for example, RMS voltage). Both signal and noise power (or amplitude) must be measured at the same or equivalent points in a system, and within the same system bandwidth. In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as music) to the level of background noise; the higher the ratio, the less obtrusive the background noise is. The term is also used in other fields (such as scientific measurements and biological cell signaling).

Because many signals have a very wide dynamic range, SNRs are usually expressed in terms of the logarithmic decibel scale. In decibels, the SNR is, by definition, 10 times the logarithm of the power ratio. If the signal and the noise are measured across the same

impedance, then the SNR can be obtained by calculating 20 times the base-10 logarithm of the amplitude ratio:

    SNR(dB) = 10·log10(P_signal / P_noise) = 20·log10(A_signal / A_noise).

Electrical SNR and acoustics

Often the signals being compared are electromagnetic in nature. Due to the definition of the decibel, the SNR gives the same result independent of the type of signal which is evaluated (such as power, current, or voltage). In general, a higher signal-to-noise ratio is better: the signal is 'cleaner'. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'.

Signal-to-noise ratio is closely related to the concept of dynamic range. Where dynamic range measures the ratio between noise and the greatest un-distorted signal on a channel, SNR measures the ratio between noise and an arbitrary signal on the channel, not necessarily the most powerful signal possible. Because of this, measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, this reference signal is usually a sine wave, sounding a tone, at a recognized and standardized nominal level or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS), though it is also possible to apply the term to sound stimuli. SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different.

Image processing and interferometry
Main article: Signal to noise ratio (image processing)

In image processing, the SNR of an image is usually defined as the ratio of the mean pixel value to the standard deviation of the pixel values, μ / σ (the inverse of the coefficient of variation).[1] The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features at 100% certainty; an SNR less than 5 means less than 100% certainty in identifying image details. Related measures are the "contrast ratio" and the "contrast-to-noise ratio".[2]

With an interferometric system, where interest lies in the signal from one arm only, the field of the electromagnetic wave is proportional to the voltage (assuming that the intensity in the second, reference arm is constant). The connection between optical power and voltage in an imaging system is linear, which usually means that the SNR of the electrical signal is calculated by the 10 log rule. Since the optical power of the measurement arm is directly proportional to the electrical voltage, electrical signals from optical interferometry follow the 20 log rule.
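The 10 log and 20 log rules quoted above give the same figure when signal and noise are measured across the same impedance, as a quick check shows (the example voltages are arbitrary):

```python
import math

def snr_db_from_power(p_signal, p_noise):
    """SNR in dB from a power ratio: 10 * log10(P_s / P_n)."""
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal, a_noise):
    """SNR in dB from an amplitude (RMS) ratio, valid when both are
    measured across the same impedance: 20 * log10(A_s / A_n)."""
    return 20 * math.log10(a_signal / a_noise)

# Example: signal RMS 1.0 V, noise RMS 0.01 V across the same impedance
a_s, a_n = 1.0, 0.01
db_amp = snr_db_from_amplitude(a_s, a_n)         # 40 dB
db_pow = snr_db_from_power(a_s ** 2, a_n ** 2)   # the same 40 dB
```

The agreement holds because power is proportional to amplitude squared, and squaring inside a logarithm doubles the multiplier.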

For measurement devices in general

(Figure: recording of the noise of a thermogravimetric analysis device that is poorly isolated from a mechanical point of view; the middle of the curve shows a lower noise level, due to lesser surrounding human activity at night.)

Any measurement device is disturbed by parasitic phenomena. This includes the electronic noise described above, but also any external event that affects the measured phenomenon: wind, vibrations, the gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and are different from the signal's, it is possible to filter it or to process the signal. When the noise is a random perturbation and the signal is a constant value, it is possible to enhance the SNR by increasing the measurement time.

Digital signals

When using digital storage, the number of bits of each value determines the maximum signal-to-noise ratio. In this case the noise is the error signal caused by the quantization of the signal, taking place in the analog-to-digital conversion. The noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). The noise level is nonlinear and signal-dependent; different calculations exist for different signal models.

The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal. Like SNR, MER can be expressed in dB.

Fixed point
See also: Fixed point arithmetic

For n-bit integers with equal distance between quantization levels (uniform quantization), the dynamic range (DR) is also determined.

Each extra quantization bit increases the dynamic range by roughly 6 dB, since each bit doubles the amplitude ratio 2^n/1:

    DR = 20·log10(2^n) ≈ 6.02·n dB.

Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level. Assuming instead a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level[3] and uniform distribution, and the SNR is approximately

    SNR = 6.02·n + 1.76 dB.

This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". The maximum possible full-scale signal can be defined as peak-to-peak or as RMS: audio uses RMS, while video uses peak-to-peak, which gave +9 dB more SNR for video. Often special filters are used to weight the noise: DIN-A, DIN-B, DIN-C, DIN-D, CCIR-601; for video, special filters such as comb filters may be used. It is more common to express SNR in digital systems using Eb/No, the energy per bit per noise power spectral density. Note also that analog-to-digital converters have other sources of noise that decrease the SNR compared to the theoretical maximum from the idealized quantization noise.[4]

Further information: Quantization noise, Bit resolution

Floating point

Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n-m bits in the mantissa and m bits in the exponent, the dynamic range is much larger than for fixed-point, but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. The very large dynamic range of floating-point can also be a disadvantage, since it requires more forethought in designing algorithms. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m.
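The 6.02 dB-per-bit relationship quoted above follows directly from 20·log10(2^n):

```python
import math

def dynamic_range_db(n_bits):
    """Dynamic range of an n-bit uniform quantizer:
    20 * log10(2**n) ~= 6.02 * n dB."""
    return 20 * math.log10(2 ** n_bits)

dr16 = dynamic_range_db(16)   # ~96.3 dB: "16-bit audio has a dynamic range of 96 dB"
```

Each additional bit doubles the amplitude ratio and so adds 20·log10(2) ≈ 6.02 dB.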

Informal use

Informally, "signal-to-noise ratio" refers to the ratio of useful information to false or irrelevant data.

Eb/N0

Eb/N0 (the energy per bit to noise power spectral density ratio) is an important parameter in digital communication or data transmission. It is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.

Eb/N0 is commonly used with modulation and coding designed for noise-limited rather than interference-limited communication, and for power-limited rather than bandwidth-limited communications.[clarification needed] Examples of power-limited communications include deep-space and spread spectrum, where Eb/N0 is optimized by using large bandwidths relative to the bit rate. It should be noted that when forward error correction is being discussed, Eb/N0 is routinely used to refer to the energy per information bit (i.e. the energy per bit net of FEC overhead bits)[citation needed], while Es/N0 is generally used to relate actual transmitted power to noise[clarification needed].

The noise spectral density N0, usually expressed in units of watts per hertz, can also be seen as having dimensions of energy, or units of joules, or joules per cycle. Eb/N0 is therefore a non-dimensional ratio.

Relation to carrier-to-noise ratio

Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection. Eb/N0 is equal to the SNR divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context are transmitted data bits, inclusive of error correction information and other protocol overhead:

    C/N = (Eb/N0) · (fb / B),

where fb is the channel data rate (net bitrate) and B is the channel bandwidth in hertz. The equivalent expression in logarithmic form (dB) is:

    C/N (dB) = Eb/N0 (dB) + 10·log10(fb / B).

Caution: sometimes the noise power is denoted by N0/2 when negative frequencies and complex-valued equivalent baseband signals are considered, and in that case there will be a 3 dB difference.

Relation to Es/N0

Eb/N0 can be seen as a normalized measure of the energy per symbol per noise power spectral density (Es/N0):

    Es/N0 = (Eb/N0) · log2(M),

where Es is the energy per symbol in joules and M is the number of alternative modulation symbols. This measure is also commonly used in the analysis of digital modulation schemes. The two quotients are related to each other according to the following:

    Es/N0 = (C/N) · (B / fs),

where C/N is the carrier-to-noise ratio or signal-to-noise ratio, B is the channel bandwidth, and fs is the symbol rate in baud or symbols/second. For a PSK, ASK or QAM modulation with pulse shaping, such as raised cosine shaping, the B/fs ratio is usually slightly larger than 1, depending on the pulse shaping filter.

Shannon limit

The Shannon–Hartley theorem says that the limit of reliable data rate of a channel depends on bandwidth and signal-to-noise ratio according to:

    R < B · log2(1 + S/N),
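The conversions between C/N, Eb/N0, and Es/N0 given above can be expressed as small helpers (the function names and the example figures are illustrative):

```python
import math

def ebn0_db_from_cnr(cnr_db, bandwidth_hz, bitrate_bps):
    """Eb/N0 = (C/N) * (B / fb); in dB, add 10*log10(B / fb)."""
    return cnr_db + 10 * math.log10(bandwidth_hz / bitrate_bps)

def esn0_db_from_ebn0(ebn0_db, m_symbols):
    """Es = Eb * log2(M) for M-ary modulation; in dB, add 10*log10(log2 M)."""
    return ebn0_db + 10 * math.log10(math.log2(m_symbols))

# Example: 10 dB C/N in a 1 MHz bandwidth carrying 2 Mbit/s
ebn0 = ebn0_db_from_cnr(10.0, 1e6, 2e6)   # ~7 dB
# QPSK (M = 4) carries 2 bits per symbol, so Es/N0 sits ~3 dB above Eb/N0
esn0 = esn0_db_from_ebn0(ebn0, 4)
```

With 2 bits per hertz of bandwidth, Eb/N0 is 3 dB below C/N, and the QPSK symbol energy puts Es/N0 back at the C/N figure, as expected for B ≈ fs.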

where R is an information rate in bits per second, B is the bandwidth of the channel in hertz, S is the total signal power (equivalent to the carrier power C), and N is the total noise power in the bandwidth. This equation can be used to establish a bound on Eb/N0 for any system that achieves reliable communication, by considering a bit rate equal to R and therefore an average energy per bit of Eb = S/R, with noise spectral density N0 = N/B. Making the appropriate substitutions, the Shannon limit is:

    R/B < log2(1 + (Eb/N0) x (R/B))

which can be solved to get the Shannon-limit bound on Eb/N0:

    Eb/N0 > (2^(R/B) - 1) / (R/B)

When the data rate is small compared to the bandwidth, so that R/B is near zero, the bound, sometimes called the ultimate Shannon limit,[1] is:

    Eb/N0 > ln 2

which corresponds to -1.59 dB.

For this calculation, it is conventional to define a normalized rate Rl = R/(2B), a bandwidth-utilization parameter of bits per second per half hertz, or bits per dimension (a signal of bandwidth B can be encoded with 2B dimensions, according to the Nyquist-Shannon sampling theorem).

Cutoff rate
For any given system of coding and decoding, there exists what is known as a cutoff rate R0, typically corresponding to an Eb/N0 about 2 dB above the Shannon capacity limit.[citation needed] The cutoff rate used to be thought of as the limit on practical error-correction codes without an unbounded increase in processing complexity, but has been rendered largely obsolete by the more recent discovery of turbo codes.

References
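The bound just derived, Eb/N0 > (2^(R/B) - 1)/(R/B), is easy to evaluate directly; a short sketch (the helper name is ours):

```python
import math

def shannon_ebn0_min_db(eta):
    """Smallest Eb/N0 (in dB) permitting reliable communication at a
    spectral efficiency eta = R/B, from Eb/N0 >= (2**eta - 1)/eta."""
    return 10 * math.log10((2 ** eta - 1) / eta)

print(round(shannon_ebn0_min_db(2.0), 2))   # 1.76 dB at 2 (bit/s)/Hz
# As eta -> 0 the bound approaches ln 2, the ultimate -1.59 dB limit:
print(round(shannon_ebn0_min_db(1e-9), 2))  # -1.59
```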

1. ^ Nevio Benvenuto and Giovanni Cherubini (2002). Algorithms for Communications Systems and Their Applications. John Wiley & Sons. p. 508. ISBN 0470843896.

External links
• Eb/N0 Explained. An introductory article on Eb/N0.

Symbol rate

In digital communications, symbol rate, also known as baud or modulation rate, is the number of symbol changes (signalling events) made to the transmission medium per second using a digitally modulated signal or a line code. The symbol rate is measured in baud (Bd) or symbols per second. In the case of a line code, the symbol rate is the pulse rate in pulses per second. The symbol rate is related to, but should not be confused with, the gross bit rate expressed in bit/s.

Symbols
A symbol is a state or significant condition of the communication channel that persists for a fixed period of time. A sending device places symbols on the channel at a fixed and known symbol rate, and the receiving device has the job of detecting the sequence of symbols in order to reconstruct the transmitted data. Each symbol can represent or convey one or several bits of data. There may be a direct correspondence between a symbol and a small unit of data (for example, each symbol may encode one or several binary bits), or the data may be represented by the transitions between symbols, or even by a sequence of many symbols.

Relationship to gross bit rate
The term baud rate has sometimes incorrectly been used to mean bit rate, since these rates are the same in old modems as well as in the simplest digital communication links using only one bit per symbol: binary "0" is represented by one symbol and binary "1" by another. In more advanced modems and data transmission techniques, a symbol may have more than two states, so it may represent more than one binary bit (a binary bit always represents one of exactly two states). For this reason, the baud rate value will often be lower than the gross bit rate.

Example of use and misuse of "baud rate": it is correct to write "the baud rate of my COM port is 9,600" if we mean that the bit rate is 9,600 bit/s, since there is one bit per symbol in this case. It is not correct to write "the baud rate of Ethernet is 100 Mbaud" or "the baud rate of my modem is 56,000" if we mean bit rate.

A simple example: a baud rate of 1 kBd = 1,000 Bd is synonymous with a symbol rate of 1,000 symbols per second. In the case of a modem this corresponds to 1,000 tones per second, and in the case of a line code this corresponds to 1,000 pulses per second. The symbol duration time is 1/1,000 second = 1 millisecond.

If N bits are conveyed per symbol, and the gross bit rate is R, inclusive of channel coding overhead, the symbol rate can be calculated as:

    fs = R / N

In that case M = 2^N different symbols are used. In a modem, these may be sine-wave tones with unique combinations of amplitude, phase and/or frequency; for example, in a 64QAM modem, M = 64. In a line code, these may be M different voltage levels.

The symbol duration time Ts, also known as the unit interval, can be calculated as:

    Ts = 1 / fs

where fs is the symbol rate. The symbol duration time can be directly measured as the time between transitions by looking at an eye diagram on an oscilloscope.

The difference between baud (or signalling rate) and the data rate (or bit rate) is like a man using a single semaphore flag who can move his arm to a new position once each second, so his signalling rate (baud) is one symbol per second. The flag can be held in one of eight distinct positions: straight up; 45° left; 90° left; 135° left; straight down (which is the rest state, where he is sending no signal); 135° right; 90° right; and 45° right. Each signal carries three bits of information, since it takes three binary digits to encode eight states, so the data rate is three bits per second. In the Navy, more than one flag pattern and arm can be used at once, so the combinations of these produce many symbols, each conveying several bits: a higher data rate.

By taking the information per pulse N in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley[1] constructed a measure of the gross bit rate R as:

    R = fs x log2(M)

where fs is the baud rate in symbols/second or pulses/second. (See Hartley's law.)
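Hartley's measure can be sketched in a couple of lines (the function name is illustrative):

```python
import math

def hartley_gross_bit_rate(fs_baud, m):
    """Hartley's measure: R = fs * log2(M) bit/s for M distinct symbols."""
    return fs_baud * math.log2(m)

print(hartley_gross_bit_rate(1, 8))      # 3.0: the eight-position semaphore flag
print(hartley_gross_bit_rate(2400, 64))  # 14400.0: 64 symbols at 2400 Bd
```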

Line codes for baseband transmission
In the case of a baseband channel such as a telegraph line, a serial cable or a Local Area Network twisted-pair cable, data is transferred using line codes, i.e. pulses rather than sine-wave tones. In this case the baud rate is synonymous with the pulse rate in pulses/second. The maximum baud rate or pulse rate for a baseband channel, called the Nyquist rate, is twice the channel bandwidth (twice the upper cut-off frequency).

Modems for passband transmission
Modulation is used in passband filtered channels such as telephone lines, radio channels and other frequency-division multiplex (FDM) channels. In a digital modulation method provided by a modem, each symbol is typically a sine-wave tone with a certain frequency, amplitude and phase. The baud rate is the number of transmitted tones per second. One symbol can carry one or several bits of information; in voiceband modems for the telephone network, it is common for one symbol to carry up to 10 bits. Conveying more than one bit per symbol or bit per pulse has advantages: it reduces the time required to send a given quantity of data over a limited bandwidth, so a high spectral efficiency in (bit/s)/Hz can be achieved, i.e. a high bit rate in bit/s even though the bandwidth in hertz may be low. The maximum baud rate for a passband channel using common modulation methods such as QAM, PSK and OFDM is approximately equal to the passband bandwidth.

Voiceband modem examples:
• A V.22bis modem transmits 2400 bit/s using 1200 Bd (1200 symbol/s), where each quadrature amplitude modulation symbol carries two bits of information. The modem can generate M = 2^2 = 4 different symbols. It requires a bandwidth of 2400 Hz; the carrier frequency (the central frequency of the generated spectrum) is 1800 Hz, meaning that the lower cut-off frequency is 1800 - 2400/2 = 600 Hz and the upper cut-off frequency is 1800 + 2400/2 = 3000 Hz.
• A V.34 modem may transmit symbols at a baud rate of 3,420 Bd, and each symbol can carry up to ten bits, resulting in a gross bit rate of 3420 x 10 = 34,200 bit/s. However, the modem is said to operate at a net bit rate of 33,800 bit/s, excluding physical-layer overhead.
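The voiceband figures above reduce to simple arithmetic; a quick check using the numbers given in the text (helper names are ours):

```python
import math

def qam_bit_rate(baud, m):
    """Gross bit rate of an M-symbol modem: baud * log2(M)."""
    return baud * math.log2(m)

def passband_edges(carrier_hz, bandwidth_hz):
    """Lower and upper cut-off of a passband centred on the carrier."""
    return carrier_hz - bandwidth_hz / 2, carrier_hz + bandwidth_hz / 2

print(qam_bit_rate(1200, 4))       # 2400.0 bit/s, the V.22bis figure
print(passband_edges(1800, 2400))  # (600.0, 3000.0) Hz, as in the text
```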

Emile Baudot (1845-1903) worked out a five-level code (five bits per character) for telegraphs, which was standardized internationally and is commonly called Baudot code.

The simplest digital communication links (such as individual wires on a motherboard or the RS-232 serial port/COM port) typically have a symbol rate equal to the gross bit rate. Common communication links such as 10 Mbit/s Ethernet (10Base-T), USB, and FireWire typically have a symbol rate slightly higher than the data bit rate, due to the overhead of extra non-data symbols used for self-synchronizing codes and error detection. More than two voltage levels are used in advanced techniques such as FDDI and 100/1000 Mbit/s Ethernet LANs, among others, to achieve high data rates. 1000 Mbit/s Ethernet LAN cables use many wire pairs and many bits per symbol to encode their data payloads; 1000BASE-T uses four wire pairs and two data bits per symbol to get a symbol rate of 125 MBd (i.e. 1.25 x 10^8 Bd).

Digital television and OFDM example
In digital television transmission the symbol rate calculation is:

    symbol rate in symbols per second = (data rate in bits per second x 204) / (188 x bits per symbol)

The 204 is the number of bytes in a packet, including the 16 trailing Reed-Solomon error checking and correction bytes. The 188 is the number of data bytes (187 bytes) plus the leading packet sync byte (0x47). The bits per symbol is the base-2 logarithm of the modulation order multiplied by the forward error correction rate; for example, in 64-QAM modulation 64 = 2^6, so there are 6 bits per symbol. The Forward Error Correction (FEC) rate is usually expressed as a fraction, i.e. 1/2, 3/4, etc. In the case of 3/4 FEC, for every 3 bits of data you are sending out 4 bits, one of which is for error correction. Example: given bit rate = 18,096,263 bit/s, modulation type = 64-QAM and FEC = 3/4, the symbol rate is 18,096,263 x 204 / (188 x 6 x 3/4), or about 4,363,638 symbols per second. In digital terrestrial television (DVB-T, DVB-H and similar techniques) OFDM modulation is used, i.e. multi-carrier modulation. The above symbol rate should then be divided by the number of OFDM sub-carriers to obtain the OFDM symbol rate. See the OFDM system comparison table for further numerical details.
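The digital-television calculation above can be sketched directly from the stated formula; the function is illustrative, not from any DVB library:

```python
import math
from fractions import Fraction

def dvb_symbol_rate(data_rate_bps, m, fec):
    """symbol rate = (data rate * 204) / (188 * useful bits per symbol),
    where useful bits per symbol = log2(M) * FEC rate."""
    bits_per_symbol = math.log2(m) * float(fec)
    return data_rate_bps * 204 / (188 * bits_per_symbol)

# The document's example: 18,096,263 bit/s, 64-QAM, FEC 3/4.
print(round(dvb_symbol_rate(18_096_263, 64, Fraction(3, 4))))  # 4363638
```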

Relationship to chip rate
Some communication links (such as GPS transmissions, CDMA cell phones, and other spread spectrum links) have a symbol rate much higher than the data rate: they transmit many symbols, called chips, per data bit. Representing one bit by a chip sequence of many symbols overcomes co-channel interference from other transmitters sharing the same frequency channel, including radio jamming, and is common in military radio and cell phones. Even though using more bandwidth to carry the same bit rate gives a low channel spectral efficiency in (bit/s)/Hz, it allows many simultaneous users, which results in a high system spectral efficiency in (bit/s)/Hz per unit of area. In these systems, the symbol rate of the physically transmitted high-frequency signal is called the chip rate, which also is the pulse rate of the equivalent baseband signal. However, in spread spectrum systems the term symbol may also be used at a higher layer and refer to one information bit, or a block of information bits that are modulated using, for example, conventional QAM modulation, before the CDMA spreading code is applied. Using the latter definition, the symbol rate is equal to or lower than the bit rate.

Relationship to bit error rate
The disadvantage of conveying many bits per symbol is that the receiver has to distinguish many signal levels or symbols from each other, which may be difficult and cause bit errors in the case of a poor phone line that suffers from a low signal-to-noise ratio. In that case, a modem or network adapter may automatically choose a slower and more robust modulation scheme or line code, using fewer bits per symbol, in order to reduce the bit error rate. An optimal symbol-set design takes into account channel bandwidth, desired information rate, noise characteristics of the channel and the receiver, and receiver and decoder complexity.

Modulation
Many data transmission systems operate by the modulation of a carrier signal. For example, in frequency-shift keying (FSK), the frequency of a tone is varied among a small, fixed set of possible values. In a synchronous data transmission system, the tone can only be changed from one frequency to another at regular and well-defined intervals. The presence of one particular frequency during one of these intervals constitutes a symbol. (The concept of symbols does not apply to asynchronous data transmission systems.) In a modulated system, the term modulation rate may be used synonymously with symbol rate.
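The chips-per-bit idea can be illustrated with a toy direct-sequence spreader; the 7-chip code below is invented for illustration and is not a real GPS or CDMA code:

```python
def spread(bits, chip_code):
    """Replace each data bit by a whole chip sequence: the code itself
    for a 1, its complement for a 0 (toy direct-sequence spreading)."""
    out = []
    for b in bits:
        out.extend(chip_code if b else [1 - c for c in chip_code])
    return out

code = [1, 0, 1, 1, 0, 0, 1]   # illustrative 7-chip code
tx = spread([1, 0], code)
print(len(tx))                 # 14: the chip rate is 7x the bit rate
```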

Binary modulation
If the carrier signal has only two states, then only one bit of data (i.e. a 0 or 1) can be transmitted in each symbol, and the bit rate is equal to the symbol rate. For example, a binary FSK system would allow the carrier to have one of two frequencies, one representing a 0 and the other a 1. A more practical scheme is differential binary phase-shift keying, in which the carrier remains at the same frequency but can be in one of two phases. During each symbol, the phase either remains the same, encoding a 0, or jumps by 180°, encoding a 1. This is an example of data being encoded in the transitions between symbols (the change in phase), rather than in the symbols themselves (the actual phase). (The reason for this in phase-shift keying is that it is impractical to know the reference phase of the transmitter.)

N-ary modulation, N greater than 2
By increasing the number of states that the carrier signal can take, the number of bits encoded in each symbol can be greater than one, and the bit rate can then be greater than the symbol rate. For example, a differential phase-shift keying system might allow four possible jumps in phase between symbols. Then two bits could be encoded at each symbol interval, achieving a data rate of double the symbol rate. In a more complex scheme such as 16-QAM, four bits of data are transmitted in each symbol, resulting in a bit rate of four times the symbol rate.

Data rate versus error rate
Modulating a carrier increases the frequency range, or bandwidth, it occupies. Transmission channels are generally limited in the bandwidth they can carry, and the bandwidth depends on the symbol (modulation) rate, not directly on the bit rate. As the bit rate is the product of the symbol rate and the number of bits encoded in each symbol, it is clearly advantageous to increase the latter if the former is fixed. However, for each additional bit encoded in a symbol, the constellation of symbols (the number of states of the carrier) doubles in size. This makes the states less distinct from one another, which in turn makes it more difficult for the receiver to detect each symbol correctly in the presence of disturbances on the channel.

The history of modems is the attempt at increasing the bit rate over a fixed bandwidth (and therefore a fixed maximum symbol rate), leading to increasing bits per symbol. For example, V.29 specifies 4 bits per symbol at a symbol rate of 2,400 baud, giving an effective bit rate of 9,600 bits per second. The history of spread spectrum goes in the opposite direction, leading to fewer and fewer data bits per symbol in order to spread the bandwidth. In the case of GPS, we have a data rate of 50 bit/s and a symbol rate of 1.023 Mchips/s. If each chip is considered a symbol, each symbol contains far less than one bit (50 bit/s / 1,023 ksymbols/s is roughly 0.00005 bits per symbol).
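The trade-off above is plain arithmetic; using the figures from the text:

```python
# Bit rate = symbol rate * bits per symbol, for the schemes discussed above.
schemes = {"binary PSK": 1, "4-state DPSK": 2, "16-QAM": 4}
baud = 2400
for name, bits_per_symbol in schemes.items():
    print(name, baud * bits_per_symbol, "bit/s")   # 2400, 4800, 9600

# Spread spectrum goes the other way: GPS sends 50 bit/s at 1.023 Mchip/s.
print(50 / 1_023_000)    # ~4.9e-05 bits per chip
print(1_023_000 // 50)   # 20460 chips per data bit
```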

The complete collection of M possible symbols over a particular channel is called an M-ary modulation scheme. Most modulation schemes transmit some integer number of bits per symbol b, requiring the complete collection to contain M = 2^b different symbols. Most popular modulation schemes can be described by showing each point on a constellation diagram, although a few modulation schemes (such as MFSK, DTMF, pulse-position modulation, and spread spectrum modulation) require a different description.

Significant condition
In telecommunication, a significant condition is one of the values of the signal parameter chosen to represent information.[2] A significant condition could be an electrical current (voltage, or power level), an optical power level, a phase value, or a particular frequency or wavelength. The duration of a significant condition is the time interval between successive significant instants.[2] A change from one significant condition to another is called a signal transition.[2] Information can be transmitted either during the given time interval, or encoded as the presence or absence of a change in the received signal.[3] Significant conditions are recognized by an appropriate device called a receiver, demodulator, or decoder. The decoder translates the actual signal received into its intended logical value, such as a binary digit (0 or 1), an alphabetic character, a mark, or a space. Each significant instant is determined when the appropriate device assumes a condition or state usable for performing a specific function, such as recording, processing, or gating.[2]

References
1. ^ a b c Federal Standard 1037C, 1996-07-07. http://www.its.bldrdoc.gov/fs-1037/fs1037c.htm.
2. ^ "System Design and Engineering Standard for Tactical Communications". Mil-Std-188-200 (United States Department of Defense), 1983-05-28. http://assist.daps.dla.mil/quicksearch/basic_profile.cfm?ident_number=35582.
3. ^ D. A. Bell (1962). Information Theory, and its Engineering Applications (3rd ed.). New York: Pitman.

See also
• Chip rate
• Gross bit rate, also known as data signaling rate or line rate
• Bandwidth
• Bitrate
• Constellation diagram, which shows (on a graph or 2D oscilloscope image) how a given signal state (a symbol) can represent three or four bits at once
• List of device bandwidths

• PCM

External links
• What is the Symbol rate?
• "On the origins of serial communications and data encoding". http://www.compkarori.com/dbase/bu07sh.htm. Retrieved on January 4, 2007.

Pulse modulation

A set of techniques whereby a sequence of information-carrying quantities occurring at discrete instants of time is encoded into a corresponding regular sequence of electromagnetic carrier pulses. Varying the amplitude, polarity, presence or absence, duration, or occurrence in time of the pulses gives rise to the four basic forms of pulse modulation: pulse-amplitude modulation (PAM), pulse-width modulation (PWM, also known as pulse-duration modulation, PDM), pulse-position modulation (PPM), and pulse-code modulation (PCM).

Analog-to-digital conversion
An important concept in pulse modulation is analog-to-digital (A/D) conversion, in which an original analog (time- and amplitude-continuous) information signal s(t) is changed at the transmitter into a series of regularly occurring discrete pulses whose amplitudes are restricted to a fixed and finite number of values. An inverse digital-to-analog (D/A) process is used at the receiver to reconstruct an approximation of the original form of s(t). Conceptually, analog-to-digital conversion involves two steps. First, the range of

amplitudes of s(t) is divided or quantized into a finite number of predetermined levels. Second, the amplitude of s(t) is periodically measured or sampled and replaced by the pulse representing the level that corresponds to the measurement. Quantization, however, introduces an irreversible error, the so-called quantization error, since the pulse representing a sample measurement determines only the quantization level in which the measurement falls and not its exact value. Consequently, the process of reconstructing s(t) from the sequence of pulses yields only an approximate version of s(t). According to the Nyquist sampling theorem, if sampling occurs at a rate at least twice that of the bandwidth of s(t), the latter can be unambiguously reconstructed from its amplitude values at the sampling instants by applying them to an ideal low-pass filter whose bandwidth matches that of s(t). See also Analog-to-digital converter; Digital-to-analog converter.

Pulse-amplitude modulation
In PAM the successive sample values of the analog signal s(t) are used to effect the amplitudes of a corresponding sequence of pulses of constant duration occurring at the sampling rate. No quantization of the samples normally occurs (Fig. 1a, b). In principle the pulses may occupy the entire time between samples, but in most practical systems the pulse duration, known as the duty cycle, is limited to a fraction of the sampling interval. Such a restriction creates the possibility of interleaving during one sample interval one or more pulses derived from other PAM systems in a process known as time-division multiplexing (TDM). See also Multiplexing and multiple access.
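The two-step A/D process just described can be sketched with a floor quantizer, as in the 4-bit PCM diagram mentioned at the start of this document (function and parameter names are ours):

```python
import math

def sample_and_quantize(signal, fs_hz, n_samples, levels, vmin, vmax):
    """Sample signal(t) at fs_hz and map each sample to one of `levels`
    uniform quantization levels over [vmin, vmax] (floor quantizer)."""
    step = (vmax - vmin) / levels
    codes = []
    for k in range(n_samples):
        s = signal(k / fs_hz)
        codes.append(min(int((s - vmin) / step), levels - 1))
    return codes

# A 1 Hz sine sampled at 16 Hz (well above the 2 Hz Nyquist rate),
# quantized to 16 levels (4 bits):
codes = sample_and_quantize(lambda t: math.sin(2 * math.pi * t), 16, 16, 16, -1.0, 1.0)
print(codes[:4])  # [8, 11, 13, 15]
```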

Pulse-width modulation
In PWM the pulses representing successive sample values of s(t) have constant amplitudes but vary in time duration in direct proportion to the sample value. The pulse duration can be changed relative to fixed leading or trailing time edges or a fixed pulse center. To allow for time-division multiplexing, the maximum pulse duration may be limited to a fraction of the time between samples (Fig. 1c).

Pulse-position modulation
PPM encodes the sample values of s(t) by varying the position of a pulse of constant duration relative to its nominal time of occurrence. As in PAM and PWM, the duration of the pulses is typically a fraction of the sampling interval. In addition, the maximum time excursion of the pulses may be limited (Fig. 1d).

Fig. 1. Forms of pulse modulation for the case where the analog signal s(t) is a sine wave: (a) analog signal; (b) pulse-amplitude modulation; (c) pulse-width modulation; (d) pulse-position modulation.

Pulse-code modulation
Many modern communication systems are designed to transmit and receive only pulses of two distinct amplitudes. In these so-called binary digital systems, the analog-to-digital conversion process is extended by the additional step of coding, in which the amplitude of each pulse representing a quantized sample of s(t) is converted into a unique sequence

of one or more pulses with just two possible amplitudes. The complete conversion process is known as pulse-code modulation. Figure 2a shows the example of three successive quantized samples of an analog signal s(t), in which sampling occurs every T seconds and the pulse representing each sample is limited to T/2 seconds. Assuming that the number of quantization levels is limited to 8, each level can be represented by a unique sequence of three two-valued pulses. In Fig. 2b these pulses are of amplitude V or 0, whereas in Fig. 2c the amplitudes are V and -V.

Fig. 2. Pulse-code modulation: (a) three successive quantized samples of an analog signal; (b) with pulses of amplitude V or 0; (c) with pulses of amplitude V or -V.

PCM enjoys many important advantages over other forms of pulse modulation due to the fact that information is represented by a two-state variable. First, the design parameters of a PCM transmission system depend critically on the bandwidth of the original signal s(t) and the degree of fidelity required at the point of reconstruction, but are otherwise largely independent of the information content of s(t). This fact creates the possibility of deploying generic transmission systems suitable for many types of information. Second, the detection of the state of a two-state variable in a noisy environment is inherently simpler than the precise measurement of the amplitude, duration, or position of a pulse in which these quantities are not constrained. Third, the binary pulses propagating along a medium can be intercepted and decoded at a point where the accumulated distortion and attenuation are sufficiently low to assure high detection accuracy. New pulses can then be generated and transmitted to the next such decoding point. This so-called process of repeatering significantly reduces the propagation of distortion and leads to a quality of transmission that is largely independent of distance.
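The 8-level, three-pulse coding described for Fig. 2 can be sketched as follows (illustrative helper):

```python
def pcm_code_words(levels, bits_per_sample):
    """Map each quantized level to a fixed-length binary word, as in the
    8-level, 3-pulse example of Fig. 2."""
    assert all(0 <= v < 2 ** bits_per_sample for v in levels)
    return [format(v, "0{}b".format(bits_per_sample)) for v in levels]

# Three successive quantized samples, 8 levels -> three two-valued pulses each:
print(pcm_code_words([5, 2, 7], 3))  # ['101', '010', '111']
```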

Time-division multiplexing
An advantage inherent in all pulse modulation systems is their ability to transmit signals from multiple sources over a common transmission system through the process of time-division multiplexing. By restricting the time duration of a pulse representing a sample value from a particular analog signal to a fraction of the time between successive samples, pulses derived from other sampled analog signals can be accommodated on the transmission system.

One important application of this principle occurs in the transmission of PCM telephone voice signals over a digital transmission system known as a T1 carrier. In standard T1 coding, an original analog voice signal is band-limited to 4000 hertz by passing it through a low-pass filter and is then sampled at the Nyquist rate of 8000 samples per second, so that the time between successive samples is 125 microseconds. The samples are quantized to 256 levels, with each level represented by a sequence of 8 binary pulses. By limiting the duration of a single pulse to 0.65 microsecond, a total of 193 pulses can be accommodated in the time span of 125 microseconds between samples. One of these serves as a synchronization marker that indicates the beginning of such a sequence of 193 pulses, while the other 192 pulses are the composite of 8 pulses from each of 24 voice signals, with each 8-pulse sequence occupying a specified position.

Bandwidth requirements
Pulse modulation systems may incur a significant bandwidth penalty compared to the transmission of a signal in its analog form. An example is the standard PCM transmission of an analog voice signal band-limited to 4000 hertz over a T1 carrier. Since the sampling, quantizing, and coding process produces 8 binary pulses 8000 times per second, for a total of 64,000 binary pulses per second, the pulses occur every 15.625 microseconds. Depending on the shape of the pulses and the amount of intersymbol interference, the required transmission bandwidth will fall in the range of 32,000 to 64,000 hertz. This compares to a bandwidth of only 4000 hertz for the transmission of the signal in analog mode. See also Bandwidth requirements (communications).

Applications
PAM, PWM, and PPM found significant application early in the development of digital communications, largely in the domain of radio telemetry for remote monitoring and sensing. They have since fallen into disuse in favor of PCM.

Since the early 1960s, many of the world's telephone network providers have gradually, and by now almost completely, converted their transmission facilities to PCM technology. T1 carriers and similar types of digital carrier systems are in widespread use in the world's telephone networks. The bulk of these transmission systems use some form of time-division multiplexing, as exemplified by the 24-voice-channel T1 carrier structure. These carrier systems are implemented over many types of transmission media, including twisted pairs of telephone wiring, coaxial cables, fiber-optic cables, and microwave. See also Coaxial
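The T1 frame arithmetic above can be checked in a few lines:

```python
# T1 frame arithmetic from the text above.
channels = 24          # voice signals per frame
bits_per_sample = 8    # binary pulses per quantized sample
framing_bits = 1       # synchronization marker
frame_rate = 8000      # frames (samples) per second

bits_per_frame = channels * bits_per_sample + framing_bits
line_rate_bps = bits_per_frame * frame_rate
per_channel_bps = bits_per_sample * frame_rate

print(bits_per_frame)    # 193 pulses in each 125-microsecond frame
print(per_channel_bps)   # 64000 bit/s per voice channel
print(line_rate_bps)     # 1544000 bit/s on the line
```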

cable; Optical fibers; Switching systems (communications); Microwave; Optical communications; Communications cable.

The deployment of high-speed networks such as the Integrated Services Digital Network (ISDN) in many parts of the world has also relied heavily on PCM technology. PCM and various modified forms, such as delta modulation (DM) and adaptive differential pulse-code modulation (ADPCM), have also found significant application in satellite transmission systems.