
**Syed Ali Khayam**

Department of Electrical & Computer Engineering, Michigan State University
March 10th, 2003

This document is intended to be tutorial in nature. No prior knowledge of image processing concepts is assumed. Interested readers should follow the references for advanced material on DCT.

ECE 802 – 602: Information Theory and Coding Seminar 1 – The Discrete Cosine Transform: Theory and Application

1. Introduction

Transform coding constitutes an integral component of contemporary image/video processing applications. Transform coding relies on the premise that pixels in an image exhibit a certain level of correlation with their neighboring pixels. Similarly, in a video transmission system, adjacent pixels in consecutive frames show very high correlation. Consequently, these correlations can be exploited to predict the value of a pixel from its respective neighbors. A transformation is, therefore, defined to map this spatial (correlated) data into transformed (uncorrelated) coefficients. Clearly, the transformation should utilize the fact that the information content of an individual pixel is relatively small, i.e., to a large extent the visual contribution of a pixel can be predicted from its neighbors.

A typical image/video transmission system is outlined in Figure 1. The objective of the source encoder is to exploit the redundancies in image data to provide compression. In other words, the source encoder reduces the entropy, which in our case means a decrease in the average number of bits required to represent the image. On the contrary, the channel encoder adds redundancy to the output of the source encoder in order to enhance the reliability of the transmission. Clearly, these two high-level blocks have contradictory objectives, and their interplay is an active research area ([1], [2], [3], [4], [5], [6], [7], [8]). However, a discussion of joint source-channel coding is beyond the scope of this document, which focuses mainly on the transformation block in the source encoder. Nevertheless, pertinent details about the other blocks will be provided as required.

(Frames usually consist of a representation of the original data to be transmitted, together with other bits which may be used for error detection and control [9]. In simplistic terms, frames can be referred to as consecutive images in a video transmission.)


Figure 1. Components of a typical image/video transmission system [10]: Original Image → Source Encoder (Transformation → Quantizer → Entropy Encoder) → Channel Encoder → Transmission Channel → Channel Decoder → Source Decoder (Entropy Decoder → Inverse Quantizer → Inverse Transformation) → Reconstructed Image.

As mentioned previously, each sub-block in the source encoder exploits some redundancy in the image data in order to achieve better compression. The transformation sub-block decorrelates the image data, thereby reducing (and in some cases eliminating) interpixel redundancy [11]. The two images shown in Figure 2(a) and (b) have similar histograms (see Figure 2(c) and (d)). Figure 2(e) and (f) show the normalized autocorrelation among pixels in one line of the respective images. Figure 2(f) shows that the neighboring pixels of Figure 2(b) periodically exhibit very high autocorrelation; this is easily explained by the periodic repetition of the vertical white bars in Figure 2(b). This example will be employed in the following sections to illustrate the decorrelation properties of transform coding. Here, it is noteworthy that transformation is a lossless operation; therefore, the inverse transformation renders a perfect reconstruction of the original image.

(The term interpixel redundancy encompasses a broad class of redundancies, namely spatial redundancy, geometric redundancy and interframe redundancy [10]. However, throughout this document — with the exception of Section 3.2 — interpixel redundancy and spatial redundancy are used synonymously.)

Figure 2. (a) First image; (b) second image; (c) histogram of first image; (d) histogram of second image; (e) normalized autocorrelation of one line of first image; (f) normalized autocorrelation of one line of second image.

The quantizer sub-block utilizes the fact that the human eye is unable to perceive some visual information in an image. Such information is deemed redundant and can be discarded without introducing noticeable visual artifacts. Such redundancy is referred to as psychovisual redundancy [10]. This idea can be extended to low bitrate receivers which, due to their stringent bandwidth requirements, might sacrifice visual quality in order to achieve bandwidth efficiency; that is, receivers might tolerate some visual distortion in exchange for bandwidth conservation. This concept is the basis of rate distortion theory.

Lastly, the entropy encoder employs its knowledge of the transformation and quantization processes to reduce the number of bits required to represent each symbol at the quantizer output. Further discussion on the quantizer and entropy encoding sub-blocks is beyond the scope of this document.

In the last decade, the Discrete Cosine Transform (DCT) has emerged as the de facto image transformation in most visual systems. The DCT has been widely deployed by modern video coding standards, for example, MPEG, JVT, etc. This document introduces the DCT, elaborates its important attributes, and analyzes its performance using information-theoretic measures.

2. The Discrete Cosine Transform

Like other transforms, the DCT attempts to decorrelate the image data. After decorrelation, each transform coefficient can be encoded independently without losing compression efficiency. This section describes the DCT and some of its important properties.

2.1. The One-Dimensional DCT

The most common DCT definition of a 1-D sequence of length N is

    C(u) = α(u) Σ_{x=0}^{N−1} f(x) cos[π(2x + 1)u / 2N]    (1)

for u = 0, 1, 2, …, N − 1. Similarly, the inverse transformation is defined as

    f(x) = Σ_{u=0}^{N−1} α(u) C(u) cos[π(2x + 1)u / 2N]    (2)

for x = 0, 1, 2, …, N − 1. In both equations (1) and (2), α(u) is defined as

    α(u) = √(1/N) for u = 0,  and  α(u) = √(2/N) for u ≠ 0.    (3)

It is clear from (1) that for u = 0,

    C(0) = √(1/N) Σ_{x=0}^{N−1} f(x).

Thus, the first transform coefficient is the average value of the sample sequence. In the literature, this value is referred to as the DC coefficient; all other transform coefficients are called the AC coefficients. (These names come from the historical use of the DCT for analyzing electric circuits with direct and alternating currents.)

To fix ideas, ignore the f(x) and α(u) components in (1). The plot of Σ_{x=0}^{N−1} cos[π(2x + 1)u / 2N] for N = 8 and varying values of u is shown in Figure 3. In accordance with our previous observation, the first (top-left) waveform (u = 0) renders a constant (DC) value, whereas all other waveforms (u = 1, 2, …, 7) give waveforms at progressively increasing frequencies [13]. These waveforms are called the cosine basis functions. Note that these basis functions are orthogonal; that is, multiplication of any waveform in Figure 3 with another waveform, followed by a summation over all sample points, yields a zero (scalar) value, whereas multiplication of any waveform in Figure 3 with itself, followed by a summation, yields a constant (scalar) value. Orthogonal waveforms are independent; that is, none of the basis functions can be represented as a combination of the other basis functions [14].
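To make equations (1)–(3) concrete, the following is a minimal Python sketch (the original text contains no code; the function names are ours) of the direct O(N²) 1-D DCT and its inverse, together with a check of the DC-coefficient observation above:

```python
import math

def alpha(u, N):
    # Normalization factor from equation (3).
    return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)

def dct_1d(f):
    # Forward 1-D DCT, equation (1): O(N^2), mirroring the definition directly.
    N = len(f)
    return [alpha(u, N) * sum(f[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                              for x in range(N))
            for u in range(N)]

def idct_1d(C):
    # Inverse 1-D DCT, equation (2).
    N = len(C)
    return [sum(alpha(u, N) * C[u] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                for u in range(N))
            for x in range(N)]

f = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]
C = dct_1d(f)

# C(0) = sqrt(1/N) * sum(f): the DC coefficient is the scaled average of the samples.
assert abs(C[0] - math.sqrt(1.0 / 8) * sum(f)) < 1e-9

# The transform itself is lossless: the inverse reconstructs f exactly.
assert all(abs(a - b) < 1e-9 for a, b in zip(f, idct_1d(C)))
```

Note that for a constant sequence only the DC coefficient is nonzero, which is the 1-D analogue of the energy compaction discussed later.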

Figure 3. One-dimensional cosine basis functions (N = 8).

If the input sequence has more than N sample points, it can be divided into sub-sequences of length N, and the DCT can be applied to these chunks independently. Here, a very important point to note is that in each such computation the values of the basis function points do not change; only the values of f(x) change in each sub-sequence. This is a very important property, since it shows that the basis functions can be pre-computed offline and then multiplied with the sub-sequences. This reduces the number of mathematical operations (i.e., multiplications and additions), thereby rendering computational efficiency.

2.2. The Two-Dimensional DCT

The objective of this document is to study the efficacy of the DCT on images. This necessitates the extension of the ideas presented in the last section to a two-dimensional space. The 2-D DCT is a direct extension of the 1-D case and is given by

    C(u, v) = α(u) α(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) cos[π(2x + 1)u / 2N] cos[π(2y + 1)v / 2N]    (4)

for u, v = 0, 1, 2, …, N − 1, where α(u) and α(v) are defined in (3). The inverse transform is defined as

    f(x, y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} α(u) α(v) C(u, v) cos[π(2x + 1)u / 2N] cos[π(2y + 1)v / 2N]    (5)

for x, y = 0, 1, 2, …, N − 1. The 2-D basis functions can be generated by multiplying the horizontally oriented 1-D basis functions (shown in Figure 3) with a vertically oriented set of the same functions [13]. The basis functions for N = 8 are shown in Figure 4. Again, it can be noted that the basis functions exhibit a progressive increase in frequency both in the vertical and the horizontal direction. The top-left basis function of Figure 4 results from multiplication of the DC component in Figure 3 with its transpose; hence, this function assumes a constant value and is referred to as the DC coefficient.

Figure 4. Two-dimensional DCT basis functions (N = 8). Neutral gray represents zero, white represents positive amplitudes, and black represents negative amplitudes [13].

2.3. Properties of the DCT

Discussions in the preceding sections have developed a mathematical foundation for the DCT. However, the intuitive insight into its image processing applications has not yet been presented. This section outlines (with examples) some properties of the DCT which are of particular value to image processing applications.

2.3.1. Decorrelation

As discussed previously, the principal advantage of image transformation is the removal of redundancy between neighboring pixels. This leads to uncorrelated transform coefficients which can be encoded independently. Let us consider our example from Figure 2 to outline the decorrelation characteristics of the 2-D DCT. The normalized autocorrelation of the images before and after the DCT is shown in Figure 5. Clearly, the amplitude of the autocorrelation after the DCT operation is very small at all lags. Hence, it can be inferred that the DCT exhibits excellent decorrelation properties.

Figure 5. (a) Normalized autocorrelation of uncorrelated image before and after DCT; (b) normalized autocorrelation of correlated image before and after DCT.
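The decorrelation effect can be sketched numerically on a 1-D "scan line": neighboring samples of a slowly varying line are highly correlated, while neighboring DCT coefficients of the same line are far less so. The example below is ours (a synthetic ramp-plus-noise line, a lag-1 autocorrelation shorthand), not data from the original figures:

```python
import math
import random

def dct(f):
    # Direct 1-D DCT, as in equation (1).
    N = len(f)
    a = lambda u: math.sqrt((1 if u == 0 else 2) / N)
    return [a(u) * sum(f[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                       for x in range(N)) for u in range(N)]

def lag1_autocorr(s):
    # Normalized autocorrelation at lag 1.
    n = len(s)
    m = sum(s) / n
    var = sum((v - m) ** 2 for v in s)
    return sum((s[i] - m) * (s[i + 1] - m) for i in range(n - 1)) / var

random.seed(0)
# A correlated "scan line": a slow ramp plus small noise.
line = [x + random.uniform(-0.5, 0.5) for x in range(64)]

before = lag1_autocorr(line)        # pixel domain: neighbors very alike
after = lag1_autocorr(dct(line))    # transform domain: much weaker coupling

assert before > 0.9
assert abs(after) < abs(before)
```

This mirrors, in miniature, what Figure 5 shows for whole image lines.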

2.3.2. Energy Compaction

The efficacy of a transformation scheme can be directly gauged by its ability to pack input data into as few coefficients as possible. This allows the quantizer to discard coefficients with relatively small amplitudes without introducing visual distortion in the reconstructed image. The DCT exhibits excellent energy compaction for highly correlated images.

Let us again consider the two example images of Figure 2(a) and (b). In addition to their respective correlation properties discussed in the preceding sections, the uncorrelated image has more sharp intensity variations than the correlated image; therefore, the former has more high-frequency content than the latter. Figure 6 shows the DCT of both images. Clearly, the uncorrelated image has its energy spread out, whereas the energy of the correlated image is packed into the low-frequency region (i.e., the top-left region).

Figure 6. (a) Uncorrelated image and its DCT; (b) correlated image and its DCT.
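Energy compaction can also be measured directly. The sketch below (ours; a smooth synthetic 8×8 ramp stands in for a correlated image) computes the 2-D DCT per equation (4) and checks what fraction of the total coefficient energy falls in the low-frequency (top-left) quadrant:

```python
import math

N = 8
a = lambda u: math.sqrt((1 if u == 0 else 2) / N)

def dct2(img):
    # Direct 2-D DCT of an N x N block, per equation (4).
    return [[a(u) * a(v) * sum(img[x][y]
              * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
              * math.cos(math.pi * (2 * y + 1) * v / (2 * N))
              for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

# A smooth ramp: neighboring pixels are highly correlated.
smooth = [[float(x + y) for y in range(N)] for x in range(N)]
C = dct2(smooth)

total = sum(c * c for row in C for c in row)
low = sum(C[u][v] ** 2 for u in range(N // 2) for v in range(N // 2))

# Nearly all of the energy lands in the low-frequency (top-left) quadrant.
assert low / total > 0.95
```

An uncorrelated (noise-like) block fed through the same `dct2` would spread its energy across all frequencies, as in Figure 6(a).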

Other examples of the energy compaction property of the DCT with respect to some standard images are provided in Figure 7.

Figure 7. (a) Saturn and its DCT; (b) Child and its DCT; (c) Circuit and its DCT; (d) Trees and its DCT; (e) Baboon and its DCT; (f) a sine wave and its DCT.

A closer look at Figure 7 reveals that it comprises four broad image classes. Figures 7(a) and (b) contain large areas of slowly varying intensities; these images can be classified as low-frequency images with low spatial details, and a DCT operation on them provides very good energy compaction in the low-frequency region of the transformed image. Figure 7(c) contains a number of edges (i.e., sharp intensity variations) and can therefore be classified as a high-frequency image with low spatial content. However, the image data exhibits high correlation, which is exploited by the DCT algorithm to provide good energy compaction. Figures 7(d) and (e) are images with progressively higher frequency and spatial content; consequently, the transform coefficients are spread over both low and high frequencies. Studies have shown that the energy compaction performance of the DCT approaches optimality as image correlation approaches one; i.e., the DCT provides (almost) optimal decorrelation for such images [15]. Lastly, Figure 7(f) shows periodicity; therefore its DCT contains impulses with amplitudes proportional to the weight of each frequency present in the original waveform, and the other (relatively insignificant) harmonics of the sine wave can also be observed by closer examination of its DCT image.

Hence, from the preceding discussion it can be inferred that the DCT renders excellent energy compaction for correlated images.

2.3.3. Separability

The DCT transform equation (4) can be expressed as

    C(u, v) = α(u) α(v) Σ_{x=0}^{N−1} cos[π(2x + 1)u / 2N] Σ_{y=0}^{N−1} f(x, y) cos[π(2y + 1)v / 2N]    (6)

for u, v = 0, 1, 2, …, N − 1. This property, known as separability, has the principal advantage that C(u, v) can be computed in two steps by successive 1-D operations on the rows and columns of an image; the arguments presented can be identically applied to the inverse DCT computation (5). This idea is graphically illustrated in Figure 8.

Figure 8. Computation of the 2-D DCT using the separability property: f(x, y) → row transform → C(x, v) → column transform → C(u, v).
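Separability is easy to verify numerically. The following sketch (ours) computes the 2-D DCT both directly via equation (4) and as successive row/column 1-D transforms via equation (6), and confirms that the two agree:

```python
import math

N = 8
a = lambda u: math.sqrt((1 if u == 0 else 2) / N)
# Pre-computed cosine table: cosv[u][x] = cos[pi (2x+1) u / 2N].
cosv = [[math.cos(math.pi * (2 * x + 1) * u / (2 * N)) for x in range(N)]
        for u in range(N)]

def dct_1d(f):
    return [a(u) * sum(f[x] * cosv[u][x] for x in range(N)) for u in range(N)]

def dct2_direct(img):
    # Equation (4): double sum over x and y for every (u, v).
    return [[a(u) * a(v) * sum(img[x][y] * cosv[u][x] * cosv[v][y]
                               for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def dct2_separable(img):
    # Equation (6): 1-D DCT along each row, then along each column.
    rows = [dct_1d(row) for row in img]                        # C(x, v)
    cols = [dct_1d([rows[x][v] for x in range(N)]) for v in range(N)]
    return [[cols[v][u] for v in range(N)] for u in range(N)]  # C(u, v)

img = [[float((3 * x + 5 * y) % 17) for y in range(N)] for x in range(N)]
A = dct2_direct(img)
B = dct2_separable(img)
assert all(abs(A[u][v] - B[u][v]) < 1e-9 for u in range(N) for v in range(N))
```

The separable route costs O(N³) per block instead of O(N⁴), which is why practical implementations always use it.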

2.3.4. Symmetry

Another look at the row and column operations in equation (6) reveals that these operations are functionally identical. Such a transformation is called a symmetric transformation. A separable and symmetric transform can be expressed in the form [10]

    T = A f A,    (7)

where A is the N × N symmetric transformation matrix with entries a(i, j) given by

    a(i, j) = α(j) cos[π(2i + 1)j / 2N],

and f is the N × N image matrix. (In image processing jargon this matrix is referred to as the transformation kernel; in our scenario it comprises the basis functions of Figure 4.) This is an extremely useful property, since it implies that the transformation matrix can be precomputed offline and then applied to the image, thereby providing orders-of-magnitude improvement in computational efficiency.

2.3.5. Orthogonality

In order to extend the ideas presented in the preceding section, let us denote the inverse transformation of (7) as

    f = A⁻¹ T A⁻¹.

As discussed previously, the DCT basis functions are orthogonal (see Section 2.1). Thus, the inverse transformation matrix of A is equal to its transpose, i.e., A⁻¹ = Aᵀ. In addition to its decorrelation characteristics, this property renders some reduction in the pre-computation complexity.
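The orthogonality claim A⁻¹ = Aᵀ can be checked directly by building the transformation kernel and multiplying it by its transpose. A small sketch (ours; here the kernel is written with the frequency index on the rows, one common convention):

```python
import math

N = 8
alpha = lambda u: math.sqrt((1 if u == 0 else 2) / N)
# Transformation kernel: A[i][j] = alpha(i) * cos[pi (2j+1) i / 2N].
A = [[alpha(i) * math.cos(math.pi * (2 * j + 1) * i / (2 * N)) for j in range(N)]
     for i in range(N)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

At = [[A[j][i] for j in range(N)] for i in range(N)]  # transpose

# Orthogonality: A * A^T = I, i.e. the inverse of A is simply its transpose.
I = matmul(A, At)
assert all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(N) for j in range(N))
```

Because inversion reduces to transposition, the decoder never needs to invert a matrix numerically.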

2.4. A Faster DCT

The properties discussed in the last three sub-sections have laid the foundation for a faster DCT computation algorithm. Generalized and architecture-specific fast DCT algorithms have been studied extensively ([17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31]). Here, in accordance with the readers' engineering background, we only discuss one algorithm that utilizes the Fast Fourier Transform (FFT) to compute the DCT and its inverse [16]. This algorithm is presented for the 1-D DCT; nevertheless, it can be extended to the 2-D case. However, it is important to note that the DCT is not simply the real part of the Discrete Fourier Transform (DFT). This can be easily verified by inspecting the DCT and DFT transformation matrices.

The 1-D sequence f(x) in (1) can be rearranged into a sequence f̃(x) built from its even and its odd samples: the first half of f̃ holds the even-indexed samples in order, and the second half holds the odd-indexed samples in reverse order,

    f̃(x) = f(2x)  and  f̃(N − x − 1) = f(2x + 1),  for 0 ≤ x ≤ (N/2) − 1.

The summation term in (1) can then be split to obtain

    C(u) = α(u) { Σ_{x=0}^{N/2−1} f(2x) cos[π(4x + 1)u / 2N] + Σ_{x=0}^{N/2−1} f(2x + 1) cos[π(4x + 3)u / 2N] }
         = α(u) Σ_{x=0}^{N−1} f̃(x) cos[π(4x + 1)u / 2N]
         = Re{ α(u) e^{−jπu/2N} Σ_{x=0}^{N−1} f̃(x) e^{−j2πux/N} }
         = Re{ α(u) W_{2N}^{u/2} DFT[f̃(x)] },

where W_{2N} = e^{−j2π/2N}. Hence, the N-point DCT can be computed from the real part of a phase-shifted N-point DFT (FFT) of the reordered sequence f̃(x). For the inverse transformation, f̃(x) is recovered from C(u) by the corresponding inverse FFT; the even points are then given by f(2x) = f̃(x) and the odd points by f(2x + 1) = f̃(N − 1 − x), for 0 ≤ x ≤ (N/2) − 1.

2.5. DCT versus DFT and KLT

At this point it is important to mention the superiority of the DCT over other image transforms. Here, we compare the DCT with two linear transforms: (1) the Karhunen-Loève Transform (KLT), and (2) the Discrete Fourier Transform (DFT).

The KLT is a linear transform whose basis functions are taken from the statistical properties of the image data, and it can thus be adaptive. It is optimal in the sense of energy compaction, i.e., it places as much energy as possible in as few coefficients as possible. However, the KLT is data dependent and, without a fast (FFT-like) precomputation transform, derivation of the respective basis for each image sub-block requires unreasonable computational resources. Although some fast KLT algorithms have been suggested ([32], [33]), the KLT transformation kernel is generally not separable, and thus the full matrix multiplication must be performed; the overall complexity of the KLT is therefore significantly higher than that of the respective DCT and DFT algorithms.
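Returning to the fast algorithm of Section 2.4, both the derived identity C(u) = Re{α(u) W_{2N}^{u/2} DFT[f̃(x)]} and the remark that the DCT is not simply the real part of the DFT can be checked numerically. This is an illustrative sketch (ours): for clarity it evaluates the DFT directly, where a real implementation would substitute an FFT.

```python
import cmath
import math

def dct_direct(f):
    # Reference: equation (1), evaluated directly.
    N = len(f)
    a = lambda u: math.sqrt((1 if u == 0 else 2) / N)
    return [a(u) * sum(f[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                       for x in range(N)) for u in range(N)]

def dct_via_dft(f):
    # Reordering of Section 2.4: even samples first, odd samples reversed.
    N = len(f)
    ft = f[0::2] + f[1::2][::-1]                       # f~(x)
    a = lambda u: math.sqrt((1 if u == 0 else 2) / N)
    dft = [sum(ft[x] * cmath.exp(-2j * math.pi * u * x / N) for x in range(N))
           for u in range(N)]                          # in practice: an FFT
    return [(a(u) * cmath.exp(-1j * math.pi * u / (2 * N)) * dft[u]).real
            for u in range(N)]

f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
A = dct_direct(f)
B = dct_via_dft(f)
assert all(abs(x - y) < 1e-9 for x, y in zip(A, B))

# The DCT is NOT just the real part of the DFT of f itself:
naive = [sum(f[x] * cmath.exp(-2j * math.pi * u * x / 8) for x in range(8)).real
         for u in range(8)]
assert any(abs(x - y) > 1e-6 for x, y in zip(A, naive))
```

With a radix-2 FFT in place of the direct DFT, this computes an N-point DCT in O(N log N) operations.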

The DFT transformation kernel is linear, separable and symmetric; like the DCT, it has fixed basis images and fast implementations are possible, and it also exhibits good decorrelation and energy compaction characteristics. However, the DFT is a complex transform and therefore stipulates that both the magnitude and the phase information of an image be encoded. In addition, studies have shown that the DCT provides better energy compaction than the DFT for most natural images. Furthermore, the implicit periodicity of the DFT gives rise to boundary discontinuities that result in significant high-frequency content; after quantization, the Gibbs phenomenon causes the boundary points to take on erroneous values [11]. (In accordance with the readers' background, familiarity with the DFT has been assumed throughout this document.)

3. DCT Performance Evaluation: An Information Theoretic Approach

In this section we investigate the performance of the DCT using information theory measures.

3.1. Entropy Reduction

The decorrelation characteristics of the DCT should render a decrease in the entropy (or self-information) of an image. This will, in turn, decrease the number of bits required to represent the image. Entropy is defined as

    H(X) = − Σ_{i=1}^{N} p(x_i) log p(x_i),

where p(x_i) is the value of the probability mass function (pmf) at X = x_i. Throughout the following discussion, the pmf refers to the normalized histogram of an image. The first column of Table 1 gives the entropy of the images listed in Figure 7; all following columns tabulate the entropies of the respective DCT images with a certain number of coefficients retained.
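The entropy of an image, in this normalized-histogram sense, is straightforward to compute. A small Python sketch (ours; the tiny synthetic "patches" below are stand-ins for real images):

```python
import math

def entropy(pixels, levels=256):
    # Normalized histogram (pmf) of the image, then H(X) = -sum p * log2(p).
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = float(len(pixels))
    return -sum((h / n) * math.log2(h / n) for h in hist if h > 0)

# A patch using 8 gray levels uniformly has maximal entropy: log2(8) = 3 bits.
flat = [v for v in range(8) for _ in range(32)]
assert abs(entropy(flat) - 3.0) < 1e-9

# A constant patch carries no information: entropy 0.
assert entropy([128] * 256) == 0.0
```

Using base-2 logarithms makes H(X) read directly as the average number of bits per pixel for an ideal entropy coder.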

DCT (M%) means that after the DCT operation only the first M percent of the total coefficients are retained. For instance, for a 352×288 image, DCT (25%) retains only 25% (352 × 288 × 0.25 = 25344) of the total (101376) coefficients. It is noteworthy that the inherently lossless nature of the transform encoder dictates that this coefficient-discarding operation be implemented in the quantizer.

Table 1. Entropy of images and their DCT: for each of the six images of Figure 7 (Saturn, Child, Circuit, Trees, Baboon, Sine), the entropy of the original image and of its DCT with 75%, 50% and 25% of the coefficients retained.

The second column of Table 1 clearly shows that the DCT operation reduces the entropy of all images. It should be emphasized that this decrease in entropy represents a reduction in the average number of bits required for a particular image. The decrease is easily explained by observing the histograms of the original and the DCT-encoded images in Figure 9: as a consequence of the decorrelation property, the original image data is transformed in a way that stretches the histogram while making the amplitude of most transformed outcomes very small. The reduction in entropy becomes more profound as we decrease the number of retained coefficients (see Table 1, columns 3, 4 and 5).

A closer look at Table 1 reveals that the entropy reduction for the Baboon image is particularly drastic. The entropy of the original image ascertains that it has a lot of high-frequency components and spatial detail; coding it in the spatial domain is very inefficient, since the gray levels are somewhat uniformly distributed across the image. The DCT, however, decorrelates the image data, thereby stretching the histogram. This very interesting result implies that discarding some high-frequency details can yield a significant dividend in reducing the overall entropy. This discussion also applies to the other high-frequency images, namely Circuit and Trees.
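The DCT (M%) retention scheme can be sketched in one dimension (our illustration; the "signal" below is a hypothetical low-frequency example deliberately built so that all of its energy falls inside the retained band):

```python
import math

N = 16
a = lambda u: math.sqrt((1 if u == 0 else 2) / N)

def dct(f):
    return [a(u) * sum(f[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                       for x in range(N)) for u in range(N)]

def idct(C):
    return [sum(a(u) * C[u] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                for u in range(N)) for x in range(N)]

def retain(C, percent):
    # Keep only the first `percent` of the coefficients, zeroing the rest,
    # mimicking the DCT(M%) scheme described in the text.
    keep = int(len(C) * percent / 100)
    return C[:keep] + [0.0] * (len(C) - keep)

# A signal with energy only at u = 0, 1, 3 (all inside the first 25% for N=16).
smooth = [3.0 + 2.0 * math.cos(math.pi * (2 * x + 1) * 1 / (2 * N))
              + 1.0 * math.cos(math.pi * (2 * x + 1) * 3 / (2 * N))
          for x in range(N)]
C = dct(smooth)
rec25 = idct(retain(C, 25))

# Since this signal has no energy beyond u = 3, DCT(25%) reconstructs it exactly.
assert max(abs(p - q) for p, q in zip(smooth, rec25)) < 1e-9
```

For real images, the discarded high-frequency coefficients are small but nonzero, which is exactly why DCT(25%) blurs rather than destroys the pictures in Figures 10–15.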

8 0.2 0.2 0.4 0.9 1 (b) 7000 1000 6000 800 5000 600 4000 3000 400 2000 200 1000 0 0 0 50 100 150 200 250 0 0.1 0.8 0.6 0.8 0.1 0.7 0.6 0.3 0.4 0.5 0.4 0.3 0.ECE 802 – 602: Information Theory and Coding Seminar 1 – The Discrete Cosine Transform: Theory and Application 12000 12000 10000 10000 8000 8000 6000 6000 4000 4000 2000 2000 0 0 0 50 100 150 200 250 0 0.1 0.2 0.3 0.9 1 (a) 1600 6000 1400 5000 1200 1000 4000 800 3000 600 2000 400 1000 200 0 0 0 50 100 150 200 250 0 0.5 0.9 1 (c) .5 0.6 0.7 0.7 0.

ECE 802 – 602: Information Theory and Coding Seminar 1 – The Discrete Cosine Transform: Theory and Application 2500 2000 2000 1500 1500 1000 1000 500 500 0 0 50 100 150 200 250 0 0 0. (a) Histogram of Saturn and its DCT.1 0.6 0. 7 0.7 0. 8 0. 3 0. . 4 0.4 0. 9 1 (e) 800 1800 700 1600 600 1400 500 1200 400 1000 800 300 600 200 400 100 200 0 0 0 50 100 150 200 250 0 0. 7 0.2 0. (b) Histogram of Child and its DCT. 5 0. 4 0. 3 0.9 1 (f) Figure 9.8 0. (d) Histogram of Trees and its DCT. 6 0. 9 1 (d) 3000 3000 2500 2500 2000 2000 1500 1500 1000 1000 500 500 0 0 50 100 150 200 250 0 0 0. 1 0. 1 0. 6 0. 2 0. 8 0.3 0. (f) Histogram of a sine wave and its DCT. (c) Histogram of Circuit and its DCT. 2 0. (e) Histogram of Baboon and its DCT.5 0. 5 0.

The preceding discussion ignores one fundamental question: how much visual distortion is introduced in the image by the (somewhat crude) quantization procedure described above? Figures 10 to 15 show the images reconstructed by performing the inverse DCT operation on the quantized coefficients. DCT(75%) provides excellent reconstruction for all images except the sine wave. DCT(50%) provides almost identical reconstruction for all images except Figure 13 (Trees) and Figure 15 (Sine). The results for Figure 13 (Trees) are explained by the fact that the image has a lot of uncorrelated high-frequency details; discarding high-frequency DCT coefficients therefore results in quality degradation. Figure 15 (Sine) is easily explained by examination of its DCT, given in Figure 7(f): removal of high-frequency coefficients removes certain frequencies that were originally present in the sine wave, and after losing those frequencies it is not possible to achieve perfect reconstruction. DCT(25%) introduces a blurring effect in all images, since only one-fourth of the total number of coefficients is utilized for reconstruction. Nevertheless, this is a very interesting result, since it suggests that, based on the (heterogeneous) bandwidth requirements of receivers, DCT coefficients can be discarded by the quantizer while still rendering acceptable quality.

Figure 10. Inverse DCT of Saturn: (a) DCT(100%); (b) DCT(75%); (c) DCT(50%); (d) DCT(25%).

Figure 11. Inverse DCT of Child: (a) DCT(100%); (b) DCT(75%); (c) DCT(50%); (d) DCT(25%).

Figure 12. Inverse DCT of Circuit: (a) DCT(100%); (b) DCT(75%); (c) DCT(50%); (d) DCT(25%).

Figure 13. Inverse DCT of Trees: (a) DCT(100%); (b) DCT(75%); (c) DCT(50%); (d) DCT(25%).

Figure 14. Inverse DCT of Baboon: (a) DCT(100%); (b) DCT(75%); (c) DCT(50%); (d) DCT(25%).

Figure 15. Inverse DCT of sine wave: (a) DCT(100%); (b) DCT(75%); (c) DCT(50%); (d) DCT(25%).

3.2. Mutual Information

A video transmission comprises a sequence of images (referred to as frames) that are transmitted in a specified order. Generally, the change of information between consecutive video frames is quite low; hence, pixel values in one frame can be used to predict the adjacent pixels of the next frame. This phenomenon is referred to as temporal correlation, and such temporal redundancy can be removed to provide better compression. However, it should be obvious that if two respective frames represent a scene change, then the mutual information is reduced. This is clearly shown in Figure 16: the first two frames exhibit high temporal correlation, whereas frame-2 and frame-3 represent a change in scene; therefore, the information overlap between these frames is relatively smaller.

Figure 16. Three consecutive frames of a video sequence.

We utilize the mutual information measure to quantify the temporal correlation between frames of a video sequence. Mutual information is given by

    I(X; Y) = Σ_x Σ_y p(x, y) log[ p(x, y) / (p(x) p(y)) ],

where p(x, y) is the joint probability mass function (pmf) of the random variables X and Y, and p(x) and p(y) represent the marginal pmfs of X and Y, respectively. In our case, let p(x) and p(y) be the normalized histograms of frame-X and frame-Y, and let p(x, y) be defined by coupling the values of adjacent pixels in any two consecutive frames. The allowed range of values for (x, y) is (0, 0), (0, 1), (0, 2), …, (0, 255), …, (255, 255); hence, the total number of outcomes is 256². Mutual information of two video frames can, therefore, be characterized as the amount of redundancy (correlation) between the two frames.

Let us consider the mutual information of the video frames shown in Figure 16:

Mutual information between frame-1 and frame-2: 1.9336
Mutual information between frame-2 and frame-3: 0.2785
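The mutual information measure above can be computed directly from joint and marginal histograms. A small sketch (ours; the 4-level "frames" below are toy stand-ins for real 256-level video frames):

```python
import math

def mutual_information(frame_x, frame_y):
    # I(X;Y) = sum p(x,y) * log2[ p(x,y) / (p(x) p(y)) ], with pmfs taken
    # from the (joint) histogram of co-located pixel values.
    n = float(len(frame_x))
    joint, px, py = {}, {}, {}
    for a, b in zip(frame_x, frame_y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

frame1 = [0, 0, 1, 1, 2, 2, 3, 3] * 8
frame2 = list(frame1)  # an identical "next frame": maximal temporal overlap

# Identical frames share all their information: I(X;Y) = H(X) = 2 bits here.
assert abs(mutual_information(frame1, frame2) - 2.0) < 1e-9

# A constant frame shares nothing with frame1: I(X;Y) = 0.
assert mutual_information(frame1, [0] * len(frame1)) == 0.0
```

A scene change behaves like the second case: the joint histogram factorizes, and the mutual information collapses toward zero, as the frame-2/frame-3 value above illustrates.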

In other words. [40]). adjacent pixels from past and future video frames can be used to predict the pixel values in a particular frame. for example. Lastly. frames in a video sequence exhibit high temporal correlation. Therefore. MPEG-4 FGS. all (uncorrelated) transform coefficients can be encoded independently without compromising coding efficiency. Such a (course) quantization scheme causes further reduction in the entropy (or average number of bits per pixel). The aforementioned attributes of the DCT have led to its widespread deployment in virtually every image/video processing standard of the last decade. [37]. MPEG-2. the DCT still offers new research directions that are being explored in the current and upcoming image/video coding standards ([35]. MPEG1. H.26L). H. These results outline a very interesting phenomenon. This temporal correlation is employed by most contemporary video processing systems. However. Thus. 4. [38]. JPEG (classical). [39]. Conclusions and Future Directions The results presented in this document show that the DCT exploits interpixel redundancies to render excellent decorrelation for most natural images. In addition.263 and JVT (H. Nevertheless. and that is. This correlation can be employed to improve coding efficiency. therefore. hence. [36]. MPEG-4. . frame-2 and frame-3 represent a change in video scene. their mutual information is quite high.Since. the mutual information of the two images is relatively smaller. Such temporal redundancies can be exploited by the source encoder to improve coding efficiency. it is concluded that successive frames in a video transmission exhibit high temporal correlation (mutual information). the DCT packs energy in the low frequency regions. Further discussion on this topic is out of the scope of this document. there exists high temporal correlation between the first two frames. some of the high frequency content can be discarded without significant quality degradation.261.

References

[1] K. Ramchandran, A. Ortega, K. Metin Uz, and M. Vetterli, "Multiresolution broadcast for digital HDTV using joint source/channel coding," IEEE Journal on Selected Areas in Communications, 11(1):6-23, January 1993.
[2] T. Cover, "Broadcast Channels," IEEE Transactions on Information Theory, vol. 18, January 1972.
[3] M. Garrett and M. Vetterli, "Joint Source/Channel Coding of Statistically Multiplexed Real Time Services on Packet Networks," IEEE/ACM Transactions on Networking, 1993.
[4] K. Sayood and J. C. Borkenhagen, "Use of residual redundancy in the design of joint source/channel coders," IEEE Transactions on Communications, 39(6):838-846, June 1991.
[5] J. W. Modestino, D. G. Daut, and A. L. Vickers, "Combined source channel coding of images using the block cosine transform," IEEE Transactions on Communications, vol. 29, pp. 1261-1274, September 1981.
[6] S. McCanne and M. Vetterli, "Joint source/channel coding for multicast packet video," International Conference on Image Processing (Volume 1), October 1995.
[7] Q. Zhang, W. Zhu, Zu Ji, and Y.-Q. Zhang, "A Power-Optimized Joint Source Channel Coding for Scalable Video Streaming over Wireless Channel," IEEE ISCAS '01, Sydney, Australia, May 2001.
[8] Q. Zhang, W. Zhu, and Y.-Q. Zhang, "Joint power control and source-channel coding for video communication over wireless," IEEE VTC '01, October 2001.
[9] T1.523-2001, American National Standard: Telecom Glossary 2000, Washington, D.C., 2001.
[10] Hayder Radha, "Lecture Notes: ECE 802 - Information Theory and Coding," January 2003.
[11] R. C. Gonzalez and P. Wintz, "Digital Image Processing," Reading, MA: Addison-Wesley, 1977.
[12] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Transactions on Computers, pp. 90-93, January 1974.

[13] W. B. Pennebaker and J. L. Mitchell, "JPEG – Still Image Data Compression Standard," New York: International Thomson Publishing, 1993.
[14] G. Strang, "The Discrete Cosine Transform," SIAM Review, Volume 41, Number 1, pp. 135-147, 1999.
[15] R. J. Clark, "Transform Coding of Images," New York: Academic Press, 1985.
[16] A. K. Jain, "Fundamentals of Digital Image Processing," New Jersey: Prentice Hall Inc., 1989.
[17] A. Aggarwal and D. Gajski, "Exploring DCT Implementations," UC Irvine, Technical Report ICS-TR-98-10, March 1998.
[18] A. C. Hung and T. H.-Y. Meng, "A Comparison of fast DCT algorithms," Multimedia Systems, vol. 2, December 1994.
[19] J. Blinn, "What's the Deal with the DCT," IEEE Computer Graphics and Applications, pp. 78-83, July 1993.
[20] C. Loeffler, A. Ligtenberg, and G. Moschytz, "Practical Fast 1-D DCT Algorithms with 11 Multiplications," ICASSP '89, p. 988, 1989.
[21] M. A. Haque, "A Two-Dimensional Fast Cosine Transform," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-33, pp. 1532-1539, December 1985.
[22] M. Vetterli, "Fast 2-D Discrete Cosine Transform," ICASSP '85, p. 1538, 1985.
[23] F. A. Kamangar and K. R. Rao, "Fast Algorithms for the 2-D Discrete Cosine Transform," IEEE Transactions on Computers, vol. C-31, p. 899, 1982.
[24] E. Linzer and E. Feig, "New Scaled DCT Algorithms for Fused Multiply/Add Architectures," ICASSP '91, p. 2201, 1991.
[25] C. T. Chiu and K. J. R. Liu, "Real-Time Parallel and Fully Pipelined 2-D DCT Lattice Structures with Application to HDTV Systems," IEEE Transactions on Circuits and Systems for Video Technology, vol. 2, pp. 25-37, March 1992.
[26] M. Vetterli, P. Duhamel, and C. Guillemot, "Trade-offs in the Computation of Mono- and Multi-dimensional DCTs," ICASSP '89, p. 999, 1989.
[27] P. Duhamel, C. Guillemot, and J. C. Carlach, "A DCT Chip based on a new Structured and Computationally Efficient DCT Algorithm," ICCAS '90, p. 77, 1990.
[28] N. I. Cho and S. U. Lee, "Fast Algorithm and Implementation of 2-D DCT," IEEE Transactions on Circuits and Systems, vol. 38, p. 297, March 1991.

[29] N. I. Cho, I. D. Yun, and S. U. Lee, "A Fast Algorithm for 2-D DCT," ICASSP '91, p. 2197, 1991.
[30] L. McMillan and L. Westover, "A Forward-Mapping Realization of the Inverse DCT," DCC '92, p. 219, 1992.
[31] P. Duhamel and C. Guillemot, "Polynomial Transform Computation of the 2-D DCT," ICASSP '90, p. 1515, 1990.
[32] J. Castrillon-Candas and K. Amaratunga, "Fast Estimation of Karhunen-Loeve Eigenfunctions using Wavelets," submitted to IEEE Transactions on Signal Processing.
[33] A. K. Jain, "A Fast Karhunen-Loeve Transform for Digital Restoration of Images Degraded by White and Colored Noise," IEEE Transactions on Computers, June 1977.
[34] R. W. Yeung, "A First Course in Information Theory," New York: Kluwer Academic/Plenum Publishers, 2002.
[35] D. Marpe, G. Blättermann, and T. Wiegand, "An Overview of the Draft H.26L Video Compression Standard," April 2001.
[36] T. Stockhammer, T. Wiegand, and S. Wenger, "Optimized Transmission of H.26L/JVT Coded Video Over Packet-Lossy Networks," ICIP 2002.
[37] Special Tutorial issue on the MPEG-4 standard, Signal Processing: Image Communication, 15(4-5), January 2000 (Elsevier).
[38] S. Liu and A. Bovik, "Efficient DCT-domain Blind Measurement and Reduction of Blocking Artifacts," IEEE Transactions on Circuits and Systems for Video Technology, 2002.
[39] H. Radha, M. van der Schaar, and Y. Chen, "The MPEG-4 Fine-Grained Scalable Video Coding Method for Multimedia Streaming over IP," IEEE Transactions on Multimedia, March 2001.
[40] M. van der Schaar and H. Radha, "Embedded DCT and Wavelet Methods for Scalable Video: Analysis and Comparison," Visual Communications and Image Processing, January 2000.
