# Lecture 2: Analog-to-Digital Conversion

Analog-to-digital conversion is divided into two parts.

I- Quantification
II- Coding

Quantification: it consists in mapping the samples of a signal to a finite, discrete set of amplitudes: each amplitude interval at the input of the quantizer is mapped to a single value, the middle of the interval. See the figure below.

[Figure: input signal amplitude (between two dashed lines) and the corresponding quantized value.]

[Figure: example of a three-level quantizer; output values -q, 0, q over the first, second, and third input intervals.]

Example of a quantizer with 3 levels (-q, 0, q). Another representation of the quantization is given in the following figure.

Uniform Quantification

[Figure: staircase characteristic of a uniform quantizer; input value on the horizontal axis, output levels -3q, -2q, -q, 0, q, 2q, 3q.]
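The interval-to-midpoint mapping described above can be sketched in a few lines of code. This is a minimal illustration, not part of the lecture: the function name, the mid-tread characteristic (a level at 0), and the clipping of out-of-range inputs to the outermost levels are assumptions of this sketch.

```python
import numpy as np

def uniform_quantize(x, q, n_levels):
    """Mid-tread uniform quantizer: each input interval of width q is
    mapped to its midpoint k*q, with k clipped to the available levels."""
    k = np.round(np.asarray(x, dtype=float) / q)   # nearest level index
    k_max = (n_levels - 1) // 2                    # e.g. 3 levels -> k in {-1, 0, 1}
    return q * np.clip(k, -k_max, k_max)

# The 3-level quantizer (-q, 0, q) from the figure above, with q = 1
print(uniform_quantize([-1.4, -0.3, 0.2, 0.8, 2.0], 1.0, 3))
```

Inputs near zero map to the level 0 and inputs beyond the last interval saturate at ±q, matching the staircase figure.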

The quantization process introduces a noise, which is the difference between the real amplitude value and the quantized value. The quantization noise power is q²/12. The quantization error has the following distribution.
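The q²/12 result can be checked numerically. This sketch is not from the lecture: the uniformly distributed test signal and the step size q = 0.25 are assumptions chosen so that every quantization interval is exercised equally.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.25

# Random amplitudes, quantized to the midpoints of intervals of width q
x = rng.uniform(-10, 10, 1_000_000)
xq = q * np.round(x / q)

error = x - xq
measured = np.mean(error**2)   # empirical quantization noise power
theory = q**2 / 12             # predicted noise power

print(measured, theory)        # the two values agree closely
```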

[Figure: probability density p(x) of the quantization error; constant value 1/q on the interval [-q/2, q/2].]

The errors are equally distributed between -q/2 and +q/2, with e(x) = x and p(x) = 1/q on that interval. The quantization noise power is the variance of the quantization error:

Noise power = E(x²) = ∫ e²(x)·p(x) dx = (1/q)·[x³/3] from -q/2 to q/2 = (1/q)·(q³/24 + q³/24) = q²/12

The input signal full-scale voltage (peak to peak) is L·q, where L is the number of levels, so Vmax = L·q/2. For a full-scale sinusoid, the input signal power is:

Ps = Vmax²/2 = (L·q)²/8

So the signal-to-noise ratio is given by:

SNR = 10·log(Ps/Pnoise) = 10·log( ((L·q)²/8) / (q²/12) ) = 10·log( (3/2)·L² ) = 1.76 + 20·log(L) dB

For the previous example, L = 3. So it is clear that by increasing the number of quantification levels we reduce the noise power and increase the signal-to-noise ratio.

II- Coding: the coding operation consists in attributing a digital code to each of the quantized levels. For representing L levels we need n = log2(L) bits, or L = 2ⁿ. For example, we need 3 bits to generate 8 different codes for 8 different quantization levels: 3 = log2(8), or 2³ = 8.

The signal-to-noise ratio can be expressed in terms of the number of bits of the analog-to-digital converter by simply replacing L by 2ⁿ in the previous equation:

SNR = 1.76 + 20·log(2ⁿ) = 1.76 + 6.02·n dB

Remarks:
- Each additional bit means that the number of quantification levels is multiplied by a factor of 2.
- Each additional bit used for coding brings a 6 dB increase in the SNR.

Example: human voice is situated in the frequency band 80 Hz - 3400 Hz. The sampling frequency is 8000 samples/s. The voice is digitized using an n-bit ADC. We require an SNR of 40 dB for good reproduction of the voice. Calculate n and the bit rate of the resulting digital signal.

Solution: SNR ≥ 40 dB requires n = 7 bits, because 6 bits would give SNR = 1.76 + 6.02·6 ≈ 37.9 dB < 40 dB. Bit rate = 8000 samples/s × 7 bits/sample = 56 kbps.
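The worked voice example follows directly from SNR = 6.02·n + 1.76. A small sketch (the loop and variable names are illustrative, not from the lecture):

```python
def snr_full_scale_db(n_bits):
    """SNR of an n-bit uniform quantizer for a full-scale sinusoid."""
    return 6.02 * n_bits + 1.76

required_snr = 40.0   # dB, from the voice example
fs = 8000             # samples/s, sampling frequency from the example

# Smallest number of bits meeting the SNR requirement
n = 1
while snr_full_scale_db(n) < required_snr:
    n += 1

print(n)        # 7 bits (6 bits gives only ~37.9 dB)
print(n * fs)   # bit rate = 56000 bits/s = 56 kbps
```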

Remark: the SNR calculated above corresponds to a full-scale input signal. What will the SNR be for a signal with a smaller amplitude a? Here amax = L·q/2 = 2ⁿ·q/2 is the maximum amplitude of the input signal.

SNR = 10·log(Ps/Pnoise) = 10·log( (a²/2) / (q²/12) ) = 10·log( 12·a² / (2·q²) )

Writing this in terms of a/amax:

SNR = 10·log( 12·amax² / (2·q²) ) + 10·log( a² / amax² ) = 6.02·n + 1.76 + 20·log(a/amax) dB

So the SNR decreases if a < amax. For example, for a = amax/2 (i.e. Pmax - 6 dB), SNR = SNR(full scale) - 6 dB. For a signal with max amplitude a = 4q and n = 8 bits:

a/amax = 4q / (2⁸·q/2) = 2⁻⁵

so the SNR decreases by 20·log(2⁻⁵) = -100·log(2) ≈ -30 dB.

The problem is that the human ear is sensitive to small signals, and small signals have a higher probability of occurrence, so they should be quantized with a low quantization error. But whatever the amplitude of the signal, the error lies in the interval [-q/2, q/2], so the SNR decreases dramatically for small signals, which is not good: for small input values the error can even become larger than the signal itself.

Solution: in order to improve the SNR for small signals we should make the error small when the signal is small. This can be achieved by modifying the signal (compressing it) with a function that compresses the signal when its amplitude is high and stretches it when its amplitude is small. The best-known compression law is called the µ-law. It has the following expression:

y = ln(1 + µ·|x|) / ln(1 + µ)

where x is the normalized input value; the maximum of y is 1.

[Figure: µ-law compression curve, y versus x/xmax, rising steeply near zero and saturating toward 1.]

The compressed signal is then quantized with a uniform quantizer, as shown in the following figure.
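The µ-law expression above is easy to evaluate. A sketch using µ = 255 (the value the lecture attributes to the American and Japanese standards); the sample input values are illustrative:

```python
import math

def mu_law_compress(x, mu=255.0):
    """mu-law compression y = ln(1 + mu*|x|) / ln(1 + mu) for
    normalized x in [-1, 1]; the sign of x is preserved."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

# Small inputs are stretched: a signal at 1% of full scale is mapped
# to roughly 23% of full scale, so it spans many more quantization
# intervals than it would without compression.
for x in (0.01, 0.1, 0.5, 1.0):
    print(f"x = {x:4.2f} -> y = {mu_law_compress(x):.3f}")
```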

[Figure: µ-law compression followed by uniform quantization on the y-axis (levels q, 2q, 3q); the resulting input intervals ΔX on the x-axis give a non-uniform quantification, compared with uniform quantification without compression.]

As we can see, the quantization intervals on the x-axis are not all the same. Smaller amplitudes are quantized with smaller intervals, and therefore with smaller errors: the maximum error amplitude for each interval is half the interval value, ΔX/2 (before compression it was q/2 for every interval, since every interval had width q).

Remark: the compressed signal is quantized with the same quantizer (uniform on the y-axis). The American and Japanese standards use this compression law with µ = 255 (high compression).

For decoding we need to apply the inverse of the function used for compression; otherwise we would obtain a different signal. This operation is called expanding. The whole operation of compression, quantization, and expansion is called companding.
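The full companding chain (compress, uniform quantize, expand) can be sketched end to end. This is an illustration, not the standard's codec: the expanding function is the analytic inverse of the µ-law, and the 256-level uniform quantizer on the y-axis is an assumed choice.

```python
import math

MU = 255.0

def compress(x):
    """mu-law compression of normalized x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse of compress(), applied at the decoder."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(y, n_levels=256):
    """Uniform quantizer on the y-axis; full scale of y is [-1, 1]."""
    q = 2.0 / n_levels
    return q * round(y / q)

# Small inputs come back with a very small error, because the
# compressor stretched them across many quantization intervals.
for x in (0.005, 0.05, 0.9):
    x_hat = expand(quantize(compress(x)))
    print(f"{x:6.3f} -> {x_hat:8.5f}  (error {x - x_hat:+.5f})")
```

Note that without quantization, expand(compress(x)) returns x exactly, which is why the decoder must use the exact inverse function.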

The SNR of such a quantizer is given by the following expression:

SNRdB = 10·log(C·2²ⁿ)   where   C = 3 / [ln(1 + µ)]²

SNRdB = 10·log(C) + 6·n = α + 6·n

The following figure shows a comparison of the uniform and non-uniform quantization for different input amplitudes. The SNR of the non-uniform quantizer is less than that of the uniform quantizer at full scale, but its advantage is that the SNR does not decrease for small amplitudes as it does in the case of the uniform quantizer.

[Figure: SNR versus input amplitude for uniform and non-uniform (µ-law) quantization.]
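The constant α = 10·log(C) can be evaluated numerically; a sketch reproducing the α + 6n form above (the function name is illustrative):

```python
import math

mu = 255.0
C = 3.0 / (math.log(1.0 + mu) ** 2)
alpha = 10.0 * math.log10(C)

def snr_mu_law_db(n_bits):
    """SNR of the mu-law quantizer: 10*log10(C * 2^(2n)) = alpha + 6.02*n."""
    return 10.0 * math.log10(C * 2.0 ** (2 * n_bits))

print(alpha)             # about -10.1 dB for mu = 255
print(snr_mu_law_db(8))  # about 38.1 dB for an 8-bit companded quantizer
```

For comparison, an 8-bit uniform quantizer gives about 49.9 dB at full scale, consistent with the remark that the non-uniform quantizer trades peak SNR for a flat SNR across small amplitudes.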