H.264/AVC intra coding and JPEG 2000 comparison
Giacomo Camperi - giacomo.camperi@gmail.com
Master of Science in Computer and Communication Networks

Vittorio Picco - vittorio.picco@gmail.com
Master of Science in Communication Engineering

Politecnico di Torino, Italy
April 11, 2008

Contents
1 Introduction
2 H.264 and JPEG 2000 overview
  2.1 H.264
  2.2 JPEG 2000
  2.3 Coding software
      2.3.1 JM
      2.3.2 Kakadu
3 Metrics for codec comparison
  3.1 Objective tests
      3.1.1 PSNR
      3.1.2 Blurring
      3.1.3 Blocking
  3.2 Subjective tests
      3.2.1 DSCQS - Double Stimulus Continuous Quality Scale
      3.2.2 DSIS - Double Stimulus Impairment Scale
  3.3 Methodology
4 Analysis and results
  4.1 Objective comparisons
      4.1.1 High bitrates
      4.1.2 Medium bitrates
      4.1.3 Low bitrates
  4.2 Subjective comparisons
      4.2.1 Double Stimulus Continuous Quality Scale
      4.2.2 Double Stimulus Impairment Scale
5 Conclusions
A Test images
B Detailed tests


1 Introduction

H.264 and JPEG 2000 are two modern standards for image and video compression. H.264 focuses on the coding of video sequences, but thanks to its new intra-coding techniques it can compress still images as well, by working on single-frame videos. JPEG 2000, like its precursor JPEG, is aimed at the compression of still images, but can also work on video sequences, generating what is called Motion JPEG 2000. Both standards are relatively new and were designed to overcome some of the limitations of the codecs they are intended to replace.

Some important features added to these standards are aimed not only at raw compression efficiency, but also at giving the user superior control and flexibility over the image processing chain. For example, JPEG 2000 allows an image to be transmitted in layers, so that the final user can choose which layers to download depending on her bandwidth: the image can be scaled in resolution and/or quality. Similarly, H.264 allows scalable video coding, splitting a video sequence into multiple streams.

JPEG 2000 was developed with two main goals: achieving good performance at very low bit rates, and adding to the former JPEG codec the scalability features we just described. These goals have been met, but through rather cumbersome technologies that make JPEG 2000 a complex codec. The high performance of H.264 in the coding of video sequences made us ask whether this standard could also be used for still image compression. H.264 comes as an outsider in the race among still image codecs. The possibility of using H.264 in a purely intra mode gives us the opportunity to compare this standard, typically used for video, on still images as well, and the results are quite surprising.

We will start with a brief introduction of the two standards, to understand their technical differences; then we will describe the two software packages (JM for H.264 and Kakadu for JPEG 2000) that implement them, and the methods used to compare them; finally we will compare their performance. For the comparison we took a not-so-classical approach. We first compare some non-standard images, to give the reader a general view of the behaviour of the two standards; then we use standard test images to highlight the differences between the codecs on specific details: we will see how H.264 and JPEG 2000 handle flat, homogeneous regions as opposed to many small details, or how they behave in highly coloured zones. We chose this approach to avoid a tedious list of PSNR values and plots for a long series of test images. The results are shown and discussed in the Analysis and results chapter and then summarized in the Conclusions chapter.

2 H.264 and JPEG 2000 overview

2.1 H.264

H.264/AVC is the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goals of the H.264/AVC standardization effort are enhanced compression performance and the provision of a "network-friendly" video representation addressing "conversational" (video telephony) and "non-conversational" (storage, broadcast, or streaming) applications. H.264/AVC has achieved a significant improvement in rate-distortion efficiency relative to other existing standards. For the purposes of this paper, the relevant point is that H.264/AVC also provides a new way of coding still images. The improvement in coding performance comes mainly from the prediction part: unlike in previous standards, prediction must always be performed before texture coding, for both inter and intra macroblocks. Intra prediction significantly improves the coding performance of the H.264/AVC intra-frame coder. For more details on AVC please refer to [2].

2.2 JPEG 2000

JPEG 2000 is a wavelet-based image compression standard. It was created by the Joint Photographic Experts Group committee in the year 2000 with the intention of superseding their original discrete cosine transform-based JPEG standard (created around 1991). It supports some important features such as improved compression efficiency, lossless and lossy compression, multi-resolution representation, Region Of Interest (ROI) coding, error resilience and a flexible file format. The aim of JPEG 2000 is not only improved compression performance over JPEG, but also the addition (or improvement) of features such as scalability and editability. In fact, JPEG 2000's improvement in compression performance relative to the original JPEG standard is actually rather modest and should not ordinarily be the primary consideration for evaluating the design. Very low and very high compression rates are supported: the graceful ability of the design to handle a very large range of effective bit rates is one of the strengths of JPEG 2000. Motion JPEG 2000 (often referred to as MJ2 or MJP2) is the leading digital film standard currently supported by Digital Cinema Initiatives (a consortium of most major studios and vendors) for the storage, distribution and exhibition of motion pictures. Unlike common video codecs, such as MPEG-4, WMV, and DivX, MJ2 does not employ temporal or inter-frame compression: each frame is an independent entity encoded by either a lossy or lossless variant of JPEG 2000. For more details on JPEG 2000 please refer to [1].

2.3 Coding software

2.3.1 JM

JM 13.2 is the reference software used for H.264 compression. It is written in C by the Fraunhofer Institute and its source code is freely available. It is the reference encoder, meaning it is not optimized in any way for speed, but the implementation is safe and compliant with the standard; all parameters and compression techniques are correctly taken into account. For our purposes the encoder was configured to use CABAC, Level IDC 5.1, the High 4:4:4 profile, intra-only coding enabled, no loop filter and no rate control. Also, to ensure that the image was treated as a single slice, we forced a fixed number of macroblocks per slice, with no more than 9216 macroblocks allowed. This limited our image size to about a 1536x1536 equivalent area. Quality scaling was achieved by modifying the QP parameter, computing the corresponding bitrate and setting the JPEG 2000 encoder accordingly.

2.3.2 Kakadu

Kakadu is an implementation of Part 1 of the JPEG 2000 standard. It should be fully compliant with Profile-1, Class-2, as defined in Part 4 of the standard, which describes compliance. Kakadu is platform independent and places a special focus on computational efficiency, because of the complexity of the JPEG 2000 algorithm. In our tests we used the Windows version, which comes as a set of command-line programs. The usage of Kakadu is straightforward and, in principle, does not require any knowledge of the standard it implements. The two basic executables that we used are kdu_compress, to create compressed files, and kdu_expand, to decode the coded files so that the resulting image quality can be measured. The input file may be in one of several formats; BMP is accepted, and it is what we used. The output file is either JP2 or JPX. The desired coding rate is simply expressed in bpp and passed as a parameter to kdu_compress; the no_info option is used to prevent the program from writing extra information in the output file header, so as to obtain the minimum possible file size. For the details of the Kakadu software implementation, please refer to [3].
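As an illustration only, the round trip described above can be scripted as in the sketch below; it assumes the kdu_compress and kdu_expand executables are on the PATH and that the option names (-rate, -no_info) match the Kakadu release we used.

```python
import subprocess

def jpeg2000_roundtrip(src_bmp, rate_bpp, out_jp2="coded.jp2", dec_bmp="decoded.bmp"):
    """Compress src_bmp at rate_bpp bits per pixel with Kakadu, then decode the
    result so it can be compared against the original image.
    Option names are assumed to match the Kakadu build used in our tests."""
    subprocess.run(["kdu_compress", "-i", src_bmp, "-o", out_jp2,
                    "-rate", str(rate_bpp), "-no_info"], check=True)
    subprocess.run(["kdu_expand", "-i", out_jp2, "-o", dec_bmp], check=True)
    return dec_bmp

# Example: code the Bike image at 0.75 bpp (the bitrate matched to QP=30 in Table 2).
# decoded = jpeg2000_roundtrip("bike.bmp", 0.75)
```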

3 Metrics for codec comparison

We call metrics all the tests, both objective and subjective, that are carried out in order to measure the performance of an algorithm.

3.1 Objective tests

The objective metrics are meant to give us a rigorous and scientific measure of the quality of the coded images. For this analysis we used the free MSU Video Quality Measurement Tool, which provides a large number of indicators. The tool is designed to measure video sequences, but we chose it because of its ease of use and the large number of available tests; we simply converted our images to raw YUV video sequences consisting of a single frame. We used three different metrics, each one dealing with a different aspect of image quality.

3.1.1 PSNR

The PSNR (Peak Signal-to-Noise Ratio) is the main parameter used worldwide to evaluate a codec's performance. It is defined as:

$$\mathrm{PSNR} = 10 \cdot \log_{10} \frac{255^2 \cdot N}{\sum_{i=0}^{N-1} (X_i - Y_i)^2} \quad [\mathrm{dB}]$$

where X_i is the value of the i-th pixel in the original image, Y_i is the corresponding value in the coded one, and each image consists of N pixels. The PSNR gives an idea of "how different" the coded image is from the original one.
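For reference, a minimal PSNR computation over 8-bit images can be written as follows (a small sketch with NumPy, not the MSU implementation):

```python
import numpy as np

def psnr(original: np.ndarray, coded: np.ndarray) -> float:
    """PSNR in dB between two 8-bit images of identical shape."""
    diff = original.astype(np.float64) - coded.astype(np.float64)
    mse = np.mean(diff ** 2)          # (1/N) * sum of squared pixel differences
    if mse == 0:
        return float("inf")           # identical images: lossless coding
    return 10.0 * np.log10(255.0 ** 2 / mse)
```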

3.1.2 Blurring

There is no standard definition of how to compute the blurring of an image, and MSU does not state which one it uses. In any case, the program works by measuring the blurring of the original image, in a way similar in spirit to the computation of the PSNR. The more blurred the image, the smoother the differences between adjacent pixels, and this is what the tool looks for; the result is a single number for that image. After computing the value for the original picture, the same operation is performed on the coded one, leading to another value. If the value for the coded image is smaller than the one for the original, the coded image is more blurred, and vice versa. In all the tests we computed the difference between the values for the original and the coded image, and this is the value that will be reported.
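Since the MSU algorithm is undocumented, the following is only a plausible stand-in, based on the average magnitude of adjacent-pixel differences (a sharper image gives a larger per-image value, matching the behaviour described above):

```python
import numpy as np

def sharpness_index(img: np.ndarray) -> float:
    """Mean absolute difference between horizontally and vertically adjacent
    pixels; a blurred image yields a smaller value. This is only an assumed
    stand-in for the undocumented MSU blurring measure."""
    img = img.astype(np.float64)
    dx = np.abs(np.diff(img, axis=1)).mean()
    dy = np.abs(np.diff(img, axis=0)).mean()
    return (dx + dy) / 2.0

def blurring_score(original: np.ndarray, coded: np.ndarray) -> float:
    """Difference reported in the tables: original minus coded sharpness."""
    return sharpness_index(original) - sharpness_index(coded)
```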

3.1.3 Blocking

As for the blurring test, MSU does not disclose the algorithm it uses to compute this indicator. It works by using a heuristic method for detecting object edges. The tool is more precise on video sequences, since it uses previous frames to achieve better accuracy. It is nevertheless a good test for measuring the "blockiness" of an image, as we will see shortly. The software works in the same way as the blurring one, giving one value for the original image and another for the coded one; we computed the absolute value of the difference between the two.
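Again, the exact MSU method is not documented; the sketch below shows one common heuristic of this kind, comparing luminance jumps across an assumed 8x8 block grid with jumps inside the blocks. It is purely illustrative, not the MSU implementation:

```python
import numpy as np

def blockiness(img: np.ndarray, block: int = 8) -> float:
    """Blockiness indicator: average luminance jump across assumed 8x8 block
    boundaries minus the average jump elsewhere.
    Values near zero mean no visible block structure."""
    img = img.astype(np.float64)
    col_diff = np.abs(np.diff(img, axis=1))          # horizontal neighbour differences
    boundary = col_diff[:, block - 1::block].mean()  # jumps across vertical block edges
    interior = np.delete(col_diff, np.s_[block - 1::block], axis=1).mean()
    return boundary - interior
```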

3.2 Subjective tests

In order to specify, evaluate and compare video communication systems, it is necessary to determine the quality of the video image presented to the viewer. Measuring visual quality is a difficult and often imprecise task, because many factors can affect the result. Visual quality is inherently subjective, which makes it difficult to obtain a completely accurate measure. Measuring visual quality using objective criteria gives accurate results, but as yet there is no objective measurement system that completely reproduces the subjective experience of a human observer watching a video display. Our perception of a visual scene is formed by a complex interaction between the components of the Human Visual System (HVS) in the eye and the brain. The perception of visual quality is influenced by spatial fidelity and temporal fidelity. However, a viewer's opinion of quality is also affected by other factors such as the viewing environment, the observer's state of mind and the extent to which the observer interacts with the visual scene; another important influence on perceived quality is visual attention. All of these factors make it very difficult to measure visual quality accurately and quantitatively. More information can be found in [4]. There are two broad families of metrics:

• those that test the overall image quality;

• those that test the impairment factors, such as the artifacts generated during compression.

In general, a test should not last more than 30 minutes; it should include some grey-level images to reset the viewer's opinion, plus a training sequence to warm up (train) the user's judgment, and a sample of at least 15 people should be used. During our tests we found that we had to train the non-engineering assessors using the right words: we had to describe clearly, and in a simple way, what the user should judge, look at and be interested in within the image, otherwise their opinions yield random, useless information.

3.2.1 DSCQS - Double Stimulus Continuous Quality Scale

This test is meant to measure the overall image quality perceived by the users. To do this, the user is shown a pair of pictures: one is the original, uncompressed one, and the other is the coded one; the user (whom we will call the assessor) knows neither which one is the original picture, nor whether the coded one has been processed by H.264 or by JPEG 2000. The assessor must simply express a mark for each picture, representing the quality of what he sees, from 1 (Bad) to 5 (Excellent).

3.2.2 DSIS - Double Stimulus Impairment Scale

DSIS is used to measure the robustness of a system (i.e. its failure characteristics), in our case the compression artifacts and how much they impair the picture. A typical assessment might call for an evaluation of either a new system or the effect of a transmission path impairment. The assessor is first presented with an unimpaired reference, then with the same picture impaired. Following this, he is asked to vote on the second picture, keeping the first in mind. In sessions lasting up to half an hour, the assessor is presented with a series of pictures or sequences in random order and with random impairments covering all required combinations. The assessor is required to express a mark evaluating the level of disturbance caused by the impairments, from 1 (Very annoying) to 5 (Imperceptible).
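In both tests the raw 1-5 marks are later reduced to the per-codec averages plotted in Section 4.2; a minimal aggregation sketch (the tuple layout is ours, not part of any standard tooling) could look like this:

```python
from collections import defaultdict

def mean_opinion_scores(marks):
    """marks: iterable of (codec, qp, score) tuples with scores in 1..5.
    Returns {(codec, qp): average score}, i.e. the values plotted in the
    DSCQS/DSIS figures."""
    totals, counts = defaultdict(float), defaultdict(int)
    for codec, qp, score in marks:
        totals[(codec, qp)] += score
        counts[(codec, qp)] += 1
    return {key: totals[key] / counts[key] for key in totals}

# Example with a few hypothetical assessor marks:
# mean_opinion_scores([("H.264", 30, 4), ("H.264", 30, 5), ("JPEG 2000", 30, 3)])
```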

3.3 Methodology

The tests have been carried out following this sequence of operations (a sketch of the resulting pipeline is given after the list):

• set a QP level in the JM encoder;

• read and report the resulting bitrate;

• set the same bitrate in the Kakadu encoder;

• use the MSU tool on the decoded images produced by both JM and Kakadu to obtain the objective metrics;

• show the original and the two coded images (in random order) to the users, for the subjective metrics.
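For orientation only, the loop below sketches how such a campaign can be driven; encode_h264 and encode_jpeg2000 are hypothetical wrappers around the actual JM and Kakadu invocations, and psnr is the function shown in Section 3.1.1:

```python
def run_campaign(image, encode_h264, encode_jpeg2000, qp_levels=(10, 20, 30, 40, 50)):
    """Hypothetical driver for the methodology above.
    encode_h264(image, qp) must return (decoded_image, bitrate_bpp) from JM;
    encode_jpeg2000(image, bpp) must return the image decoded from Kakadu
    after coding at the matched bitrate."""
    results = []
    for qp in qp_levels:
        h264_dec, bpp = encode_h264(image, qp)   # JM fixes the operating point via QP
        jp2_dec = encode_jpeg2000(image, bpp)    # Kakadu is matched to JM's bitrate
        results.append({"qp": qp, "bpp": bpp,
                        "psnr_h264": psnr(image, h264_dec),
                        "psnr_jpeg2000": psnr(image, jp2_dec)})
    return results
```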

4 Analysis and results

In the following pages we are going to analyze one image (the famous "Bike" test image) in many of its parts. This image is a perfect test picture since it contains all the elements we are interested in studying: uniform zones, high-frequency parts, black-and-white as well as coloured spots. A complete description of the tests made, with images and graphs, is available in Appendix B. Figure 1 shows a thumbnail of the test picture, along with the resulting rate-distortion curve. This image summarizes the results of all our tests very well. H.264 turns out to be better than JPEG 2000, especially for bitrates between 0.5 and 4 bpp. JPEG 2000, though, behaves well at very low bitrates, sometimes beating H.264 in terms of PSNR. At very high bitrates, too, JPEG 2000 recovers some of the distance it usually concedes to H.264. We now look in detail at how the codecs performed.


Figure 1: “Bike” test image

Figure 2: Particular of “Bike” QP=20: original (center), JPEG 2000 (left), H.264 (right)

4.1 Objective comparisons

4.1.1 High bitrates

By high bitrates we mean those obtained by setting QP up to 20 in the H.264 encoder; the resulting bitrate has then been used to code the image with Kakadu. In figure 2 we can see the visual effect of the encoding. For both H.264 and JPEG 2000 the differences are barely visible; from a visual point of view this coding could be called quasi-lossless. The resulting numerical values are reported in Table 1. The PSNR values are really high, even though the compression ratios are already substantial, especially at QP=20. In this case the measures of blockiness and blurring are not relevant, since both indicators are very small and no artifacts can be seen in the compressed images.

QP   Bitrate [bpp]   PSNR H.264 [dB]   PSNR JPEG 2000 [dB]   Compression ratio
10   5.84            52.11             50.85                 4:1
20   2.14            43.40             40.14                 11:1

Table 1: High bitrate tests for the image Bike
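The compression ratios follow directly from the coded rate, since the uncompressed sources are 24 bpp RGB images (see the file sizes in Appendix A); for instance, at QP=20:

$$\frac{24\ \text{bpp}}{2.14\ \text{bpp}} \approx 11:1$$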


4.1.2 Medium bitrates

For QP values higher than 20, some artifacts start to appear. We focus on QP 30 and 40: at these levels of compression, artifacts become visible but the overall image quality is still good. At least for the Bike image, these are the threshold values at which a user can state that one codec is better than the other. This is the core of our work.

QP   Bitrate [bpp]   PSNR H.264 [dB]   PSNR JPEG 2000 [dB]   Compression ratio
30   0.75            36.08             32.91                 32:1
40   0.24            29.47             27.14                 100:1

Table 2: Medium bitrate tests for the image Bike

Figure 3: Particular of "Bike" QP=30: original (center), JPEG 2000 (left), H.264 (right)

Figure 3 shows a detail of the Bike picture, coded with both JPEG 2000 and H.264: the little dots on the apple surface completely disappear. In addition, if we look at the background, we can see how H.264 flattens out the small variations of the cloth. Let us look at another example, which clarifies the inner nature of the two standards. In table 2 we can see that at QP=40 H.264 is better in terms of PSNR by about 2 dB. In figure 4 we can see how the two standards deal with small details. JPEG 2000 completely disrupts the written word, while H.264 preserves it almost perfectly. JPEG 2000 is less aware of small details because it applies its transform to the whole image; H.264 performs much better because the word is contained in one or a few blocks and is transformed with better accuracy. The price to pay is, again, precision in the coding of uniform zones.

Regarding the blocking and blurring measures, our results for medium bitrates are summarized in table 3, and figure 5 shows the blocking and blurring results at various bitrates. In the blocking plot we observe that the two standards are almost equal for bitrates greater than 0.75 bpp; this is because the measuring software can hardly find blocking artifacts there, so the measure becomes less relevant. For lower bitrates things become clearer, with H.264 showing a much larger blocking index than JPEG 2000. The blur values are almost the same for the two standards: this is a situation we found throughout our tests. H.264 creates block artifacts that are themselves blurred; JPEG 2000 does not create evident blocks, but the global impression is of an image that is not well focused. In the end the two effects, different in nature, lead to similar results.

We can now take a look at the coded colours. As usual, we will use another example from a characteristic spot of the image "Bike" (figure 6). H.264 is able to better separate and represent the colour, while JPEG 2000 adds some unpleasant ringing artifacts, and the colour tonality is not exactly the same as in the original. This test was performed at QP=40, which leads to a quite degraded version of the compressed image, but it is the only way to make the differences visible.

Figure 4: Particular of “Bike” QP=40: original (center), JPEG 2000 (left), H.264 (right)


QP   Blocking H.264   Blocking JPEG 2000   Blurring H.264   Blurring JPEG 2000
30   6.09             5.68                 3.04             2.36
40   9.95             0.44                 6.02             5.23

Table 3: Blocking and blurring for image Bike

Figure 5: Blocking and blurring vs. bitrate diagrams for image Bike

4.1.3 Low bitrates

For QP higher than 40, the quality of the coded image is poor. At these low bitrates it is possible to make more detailed considerations about the artifacts. The most immediate observation is that JPEG 2000 is better than H.264, especially on some images, representing the whole image in a way that the human eye can better appreciate. To explain this concept we are going to use a different, non-standard image, called "Space". In Appendix A there is a thumbnail of this picture along with its characteristics. Figure 7 shows the results of the compression with QP 50, and table 4 the numerical results. JPEG 2000's PSNR gain over H.264 at QP 40 is about 0.2 dB, but when QP is raised to 50, JPEG 2000 outperforms H.264 by 4.6 dB. This is due to the fact that H.264 uses a block-based transform, while JPEG 2000 applies its transform to the whole image. As a result, H.264 creates the ugly blocks visible in figure 7, while JPEG 2000 smooths everything. In the end the image coded with JPEG 2000 is more pleasant to look at, but loses all the details. H.264 preserves the details, but at the cost of completely disrupting all the flat zones. This is clearer in the following example, taken from another image, called DC (a thumbnail and related information about this image are available in Appendix A). Figure 8 is an aerial picture of a parking lot: while H.264 shows the cars with good approximation, JPEG 2000 makes them hardly recognizable. Everything else is completely lost by H.264 in a uniform grey block, but is somehow better coded by JPEG 2000. This concludes the objective tests performed on the images that can be shown here. In Appendix B it is possible to find the complete set of images, tables and diagrams used to draw the conclusions.

4.2 Subjective comparisons

We are going to show the results for the two images Bike and DC 1 (thumbnails in Appendix A).

Figure 6: Particular of “Bike” QP=40: original (center), JPEG 2000 (left), H.264 (right)


Figure 7: Particular of "Space" QP=50: original (center), JPEG 2000 (left), H.264 (right)

QP   Bitrate [bpp]   PSNR H.264 [dB]   PSNR JPEG 2000 [dB]   Compression ratio
40   0.04            38.39             38.62                 578:1
50   0.02            30.46             35.08                 1120:1

Table 4: Low bitrate tests for the image Space

4.2.1 Double Stimulus Continuous Quality Scale

In figure 9 we can see the results obtained. The plot represents the averaged values of the marks given by the users. We can see that H.264 is always better than JPEG 2000, except for the Bike test image at the lowest quality (QP=50). This is probably due to the different kinds of artifacts created: at medium bitrates the users seemed to prefer the presence of small blocking artifacts over the generalized quality reduction caused by JPEG 2000. When the bitrate becomes very low, the blocks become very annoying, because they make the image look unnatural; in that case JPEG 2000 is preferred, because it does not create an artificial-looking impairment: everything looks as if seen through a dirty glass, i.e. more natural. DC 1, on the other hand, is an image with a lot of small details, and the users seemed to always prefer H.264. This happens because H.264 is able to preserve the small details, even though it does so at the cost of disrupting flat, uniform zones.

4.2.2 Double Stimulus Impairment Scale

In this test, summarized in figure 10, all the considerations above become evident. DSIS measures how much the compression errors disturb the users. The test reveals that H.264 artifacts are less annoying at medium bitrates, while at very low bitrates the JPEG 2000 ones are better tolerated. Again, this confirms that when the blocking artifacts are few they are not very disturbing, but when the bitrate is very low, ringing and blurring impairments are preferred by the users.

Figure 8: Particular of “DC” QP=50: original (center), JPEG 2000 (left), H.264 (right)


Figure 9: DSCQS results for test images Bike and DC 1

Figure 10: DSIS results for test images Bike and DC 1


5 Conclusions

In this paper we compared two modern standards for still image compression: H.264 and JPEG 2000. The first, although originally designed for the compression of video sequences, has proved to behave better than JPEG 2000 in almost every test we made.

H.264 comes out as the winner of this comparison: it outperforms JPEG 2000 in almost all tests, with PSNR values that are higher than JPEG 2000's in every practical case. H.264 makes use of a block-based transform, and this is evident when artifacts appear. The blocking metric clearly reveals this, showing values that are always higher than the JPEG 2000 ones. The block-based transform also allows the codec to perform well on small details, even at very low bitrates. On the other hand, this causes a bad coding of flat, uniform zones. Overall, the perceived image quality is always better than JPEG 2000's, with the only exception of some images coded at very low bitrates.

JPEG 2000 outperforms H.264 only at very low bitrates, and not for all images. On the pros side we find the artifact issue: JPEG 2000 does not rely on a block-based transform, therefore we do not find any block artifact. The overall look-and-feel of the encoded image is a generalized lack of details, instead of unnatural blocks.

These considerations are supported by the collected data, in terms of PSNR, blocking and blurring. The PSNR measurements reveal an overall better performance of H.264, especially at medium and high bitrates, as we already said. The blocking indicator turns out to be a good metric for the blockiness of an image. Unfortunately we do not know the exact algorithm that produces the reported values, for lack of documentation of the tool. In any case it is clear that H.264 always obtains higher values in this metric, for all the tested bitrates. As far as JPEG 2000 is concerned, the indicator does not provide useful values, since it is hard for the algorithm to find block edges in the images. The fundamental result is that the metric always shows higher values for H.264.

The blurring indicator turned out to be less straightforward to read. Initially we expected this indicator to behave like the blocking one, with a predominance of JPEG 2000 over H.264. This did not happen. The reason is that H.264 creates blocking artifacts, and, moreover, each block is blurred. H.264 is able to maintain details better than JPEG 2000, but at low bitrates it completely disrupts uniform and flat zones. This makes the blurring indicator grow, until it eventually exceeds the JPEG 2000 one. Another remark is that while the blocking indicator for JPEG 2000 behaved almost randomly, the blurring one acquires a precise trend, a sign that the metric works well.

As far as the subjective metrics are concerned, the final result is that H.264 outperforms JPEG 2000 at all but the lowest bitrates. Normal rates allow H.264 to find the best compromise between compression and artifacts. The users seemed to notice more image degradation in the JPEG 2000 coded images, probably because the global image quality decreased altogether. H.264 preserves more details, and at medium-high bitrates its artifacts are less annoying. On the other hand, when the encoded image quality was very bad, the H.264 block artifacts were much more annoying than the JPEG 2000 ones, because they looked more unnatural.

In conclusion, H.264 is inherently better than JPEG 2000 considering both the objective and the subjective measurements made, even though JPEG 2000 presents artifacts that are less disturbing.


A Test images

Image   Width   Height   File size
Bike    1344    1680     6,773,760 bytes
DC 1    1536    1536     7,077,888 bytes
DC 2    1920    1088     6,266,880 bytes
Space   1024    512      1,572,864 bytes
Palm    1488    1120     4,999,680 bytes


B Detailed tests

All PSNR values are in dB.

Bike
Bpp    PSNR H.264   Blurring H.264   Blocking H.264   PSNR JPEG 2000   Blurring JPEG 2000   Blocking JPEG 2000
0.07   23.65        10.83            8.54             23.50            9.31                 0.85
0.24   29.47        6.02             9.95             27.14            5.23                 0.44
0.75   36.08        3.04             6.09             32.91            2.36                 5.68
2.14   43.40        0.92             1.57             40.14            0.91                 2.67
5.84   52.11        0.04             0.43             50.85            0.06                 0.03

DC 1
Bpp    PSNR H.264   Blurring H.264   Blocking H.264   PSNR JPEG 2000   Blurring JPEG 2000   Blocking JPEG 2000
0.14   18.25        33.58            21.05            17.79            32.50                5.67
0.74   24.70        10.14            6.75             22.23            12.96                4.71
2.07   33.15        1.71             3.06             29.71            2.62                 2.88
3.84   43.07        0.05             3.74             38.18            0.54                 3.47
5.83   52.70        -0.03            2.25             51.23            0.00                 1.97

DC 2
Bpp    PSNR H.264   Blurring H.264   Blocking H.264   PSNR JPEG 2000   Blurring JPEG 2000   Blocking JPEG 2000
0.10   19.89        28.86            7.51             19.51            27.39                8.55
0.51   25.69        12.99            2.81             23.65            12.61                7.58
1.61   33.15        2.97             6.29             29.74            4.06                 5.68
3.38   42.87        0.14             7.44             37.72            0.72                 7.19
6.36   52.84        0.00             4.69             56.43            -0.01                0.93

Space
Bpp    PSNR H.264   Blurring H.264   Blocking H.264   PSNR JPEG 2000   Blurring JPEG 2000   Blocking JPEG 2000
0.02   30.46        1.67             3.76             35.08            1.49                 3.29
0.04   38.39        1.33             4.01             38.62            1.26                 2.52
0.11   44.20        1.11             3.80             43.07            1.06                 2.43
0.31   47.70        0.91             3.63             47.08            0.79                 3.22
2.06   52.28        0.14             2.35             51.85            0.14                 0.00

Palm
Bpp    PSNR H.264   Blurring H.264   Blocking H.264   PSNR JPEG 2000   Blurring JPEG 2000   Blocking JPEG 2000
0.02   29.99        5.82             11.81            32.96            5.44                 11.71
0.05   34.50        4.88             12.85            35.01            4.39                 11.12
0.23   39.26        3.17             10.80            38.96            2.82                 9.55
1.13   43.89        1.56             6.63             43.39            1.17                 6.67
4.24   51.68        0.11             0.10             50.52            0.10                 0.00


References
[1] A. Skodras, C. Christopoulos, T. Ebrahimi, "The JPEG 2000 still image compression standard".

[2] T. Wiegand, G. J. Sullivan, G. Bjøntegaard, A. Luthra, "Overview of the H.264/AVC video coding standard".

[3] D. Taubman, "Kakadu Survey Documentation".

[4] Recommendation ITU-R BT.500-11, "Methodology for the subjective assessment of the quality of television pictures".

[5] www.wikipedia.org

[6] www.jpeg.org/jpeg2000/

[7] http://compression.ru/video/quality_measure/video_measurement_tool_en.html
