
H.264/AVC intra coding and JPEG 2000 comparison

Giacomo Camperi - giacomo.camperi@gmail.com


Master of Science in Computer and Communication Networks

Vittorio Picco - vittorio.picco@gmail.com


Master of Science in Communication Engineering

Politecnico di Torino, Italy

April 11, 2008

Contents
1 Introduction 2

2 H.264 and JPEG 2000 overview 2


2.1 H.264 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.2 JPEG 2000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.3 Coding software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3.1 JM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3.2 Kakadu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

3 Metrics for codec comparison 3


3.1 Objective tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3.1.1 PSNR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.1.2 Blurring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.1.3 Blocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.2 Subjective tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.2.1 DSCQS - Double Stimulus Continuous Quality Scale . . . . . . . . . . . . . . . . . 5
3.2.2 DSIS - Double Stimulus Impairment Scale . . . . . . . . . . . . . . . . . . . . . . . 5
3.3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

4 Analysis and results 5


4.1 Objective comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4.1.1 High bitrates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4.1.2 Medium bitrates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4.1.3 Low bitrates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.2 Subjective comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.2.1 Double Stimulus Continuous Quality Scale . . . . . . . . . . . . . . . . . . . . . . 9
4.2.2 Double Stimulus Impairment Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

5 Conclusions 11

A Test images 12

B Detailed tests 13

1 Introduction
H.264 and JPEG 2000 are two modern standards for image and video compression. H.264 focuses on the
coding of video sequences, but thanks to its new intra-coding techniques it can compress still images as
well, by operating on single-frame videos. JPEG 2000, like its precursor JPEG, is aimed at the compression
of still images, but can also work on video sequences, producing what is called Motion JPEG 2000.
Both standards are relatively new and were designed to overcome some of the limitations of the
preceding codecs, which they are intended to replace. Some of the features added by these standards go
beyond bare compression efficiency and give the user greater control and flexibility over the image
processing chain. For example, JPEG 2000 allows an image to be transmitted in layers, so that the final
user can choose which layers to download depending on her bandwidth: the image can be scaled in
resolution and/or quality. Similarly, H.264 allows scalable video coding, splitting a video sequence into
multiple streams.
JPEG 2000 was developed with two main goals: achieving good performance at very low bit rates,
and adding the scalability features just described to the former JPEG codec. These goals have been met,
but by means of rather cumbersome technologies that make JPEG 2000 a quite complex codec.
The high performance of H.264 in the coding of video sequences raises the question of whether this
standard could also be used for still image compression. H.264 enters the still image compression field
as an outsider. The possibility of running H.264 in a purely intra mode gives us the opportunity to test
this standard, typically used for video, on images as well, and the results are quite surprising.
We will start with a brief introduction to the two standards, to understand their technical differences;
we will then describe the two software packages (JM for H.264 and Kakadu for JPEG 2000) that implement
them, along with the methods used to compare them; finally, we will compare their performance. For
the comparison we chose a not-so-classical approach. We first compare some non-standard images, to
give the reader a general view of the two standards' behaviour; we then use standard test images to
highlight the differences between the codecs on specific details: we will see how H.264 and JPEG 2000
handle flat, homogeneous regions as opposed to many small details, or how they behave in highly
coloured zones. We chose this approach to avoid a tedious list of PSNR values and plots for a large set
of test images.
The results are shown and discussed in the Analysis and results chapter and then summarized in the
Conclusions chapter.

2 H.264 and JPEG 2000 overview


2.1 H.264
H.264/AVC is the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC
Moving Picture Experts Group. The main goals of the H.264/AVC standardization effort are enhanced
compression performance and the provision of a “network-friendly” video representation addressing “con-
versational” (video telephony) and “non-conversational” (storage, broadcast, or streaming) applications.
H.264/AVC achieves a significant improvement in rate-distortion efficiency relative to other existing
standards.
For the purposes of this paper, what matters is that H.264/AVC provides a new way of coding still
images. The improvement in coding performance comes mainly from the prediction part: unlike in
previous standards, prediction must always be performed before texture coding, for both inter and intra
macroblocks. Intra prediction significantly improves the coding performance of the H.264/AVC intra-frame
coder.
For more details on AVC, please refer to [2].

2.2 JPEG 2000


JPEG 2000 is a wavelet-based image compression standard. It was created by the Joint Photographic
Experts Group committee in the year 2000 with the intention of superseding their original discrete cosine
transform-based JPEG standard (created around 1991). It supports several important features such as
improved compression efficiency, lossless and lossy compression, multi-resolution representation, Region
Of Interest (ROI) coding, error resilience and a flexible file format.
The aim of JPEG 2000 is not only to improve compression performance over JPEG but also to add (or
improve) features such as scalability and editability. In fact, JPEG 2000's improvement in compression
performance relative to the original JPEG standard is rather modest and should not ordinarily be the
primary consideration when evaluating the design. Very low and very high compression rates are
supported: the graceful ability of the design to handle a very large range of effective bit rates is one of
the strengths of JPEG 2000.
Motion JPEG 2000 (often referred to as MJ2 or MJP2) is the leading digital film standard currently
supported by Digital Cinema Initiatives (a consortium of most major studios and vendors) for the storage,
distribution and exhibition of motion pictures. Unlike common video codecs, such as MPEG-4, WMV, and
DivX, MJ2 does not employ temporal or inter-frame compression. Instead, each frame is an independent
entity encoded by either a lossy or lossless variant of JPEG 2000.
For more details on JPEG 2000, please refer to [1].

2.3 Coding software


2.3.1 JM
JM 13.2 is the reference software used for H.264 compression; it is written in the C language by the
Fraunhofer Institute and its source code is freely available. Being the reference encoder, it is not
optimized for speed in any way, but the implementation is safe and compliant with the standard: all
parameters and compression techniques are correctly taken into account.
For our purposes the encoder was configured to use CABAC, Level IDC 5.1, the High 4:4:4 profile,
intra-only coding, no loop filter and no rate control. Also, to ensure that each image was coded as a
single slice, we fixed the number of macroblocks per image, with a maximum of 9216 allowed. This
limited our image size to an equivalent area of about 1536x1536 pixels. Quality scaling was achieved by
varying the QP parameter, measuring the corresponding bitrate and setting the JPEG 2000 encoder
accordingly.

2.3.2 Kakadu
Kakadu is an implementation of Part 1 of the JPEG 2000 standard. It should be fully compliant with
Profile-1, Class-2, as defined in Part 4 of the standard, which describes compliance. Kakadu is platform
independent and places a special focus on computational efficiency, given the complexity of the JPEG
2000 algorithm.
In our tests we used the Windows version, which comes as a command line program. Using Kakadu
is straightforward and, in principle, does not require any knowledge of the standard it implements. The
two basic executables that we used are kdu_compress, to create compressed files, and kdu_expand, to
extract the coded files so as to measure the resulting image quality.
The input file may be in one of many formats; bmp is accepted, and we used it. The output file is
either jp2 or jpx. The desired coding rate is simply expressed in bpp and passed as a parameter to the
kdu_compress executable; the no_info option is used to prevent the program from writing unnecessary
information in the output file header, so as to obtain the minimum possible file size.
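A typical invocation looks like the following sketch (the file names are our own, and option spelling should be checked against the Kakadu documentation for the version in use):

```shell
# compress a bmp image to JPEG 2000 at 0.75 bpp, with a minimal header
kdu_compress -i bike.bmp -o bike.jp2 -rate 0.75 -no_info

# decode it back to bmp for the quality measurements
kdu_expand -i bike.jp2 -o bike_decoded.bmp
```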
To know the details of the Kakadu software implementation, please refer to [3].

3 Metrics for codec comparison


We call metrics all the tests, both objective and subjective, that are carried out in order to measure the
performance of an algorithm.

3.1 Objective tests


The objective metrics aim to give a rigorous, quantitative measure of the quality of the coded images.
For this analysis we used the free MSU Video Quality Measurement Tool, which provides a large number
of indicators. This tool is designed to measure video sequences, but we chose it because of its ease of
use and the large number of available tests; we converted our images to raw yuv video sequences
composed of a single frame. We used three different metrics, each one dealing with a different aspect of
image quality.

3.1.1 PSNR
The PSNR (Peak Signal-to-Noise Ratio) is the main parameter used worldwide to evaluate codec
performance. It is defined as:

    PSNR = 10 · log10( 255^2 / MSE ) [dB],   where MSE = (1/N) · sum_{i=1..N} (X_i − Y_i)^2

where X_i is the value of the i-th pixel in the original image, Y_i the corresponding value in the coded
one, and N the number of pixels in each image. The PSNR gives an idea of “how different” the coded
image is from the original one.
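The definition above translates directly into code; a minimal sketch operating on flat sequences of 8-bit pixel values (the function name is ours):

```python
import math

def psnr(original, coded, peak=255.0):
    """PSNR in dB between two images given as flat, equal-length
    sequences of pixel values (e.g. 8-bit luma samples)."""
    if len(original) != len(coded):
        raise ValueError("images must have the same number of pixels")
    # mean squared error over the N pixels
    mse = sum((x - y) ** 2 for x, y in zip(original, coded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

For instance, two 8-bit images differing by exactly 1 at every pixel give a PSNR of about 48.13 dB.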

3.1.2 Blurring
There is no uniform definition of how to compute the blurring of an image, and MSU does not indicate
which one it uses. In any case, the program works by measuring the blurring of the original image, in a
way conceptually similar to the computation of the PSNR. The more blurred the image, the smoother
the differences between adjacent pixels, and this is what the tool looks for; the result is a single number
for that image. After computing the value for the original picture, the same operation is performed on
the coded one, leading to another value. If the value for the coded image is smaller than that of the
original, the coded image is more blurred, and vice versa.
In all the tests we computed the difference between the values for the original and the coded image,
and this is the value that will be reported.
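Since the tool's exact formula is undocumented, here is only a minimal sketch of one plausible adjacent-pixel-difference measure of the kind described above (the function names and details are ours, not MSU's):

```python
def sharpness(pixels, width):
    """Mean absolute difference between horizontally adjacent pixels of a
    flat, row-major image; a blurred image scores lower than a sharp one."""
    total = count = 0
    for row in range(len(pixels) // width):
        for col in range(width - 1):
            i = row * width + col
            total += abs(pixels[i] - pixels[i + 1])
            count += 1
    return total / count

def blur_difference(original, coded, width):
    """Positive when the coded image is smoother (more blurred) than the
    original -- the kind of per-image difference reported in our tables."""
    return sharpness(original, width) - sharpness(coded, width)
```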

3.1.3 Blocking
As for the blurring test, MSU does not disclose the algorithm used to compute this indicator. It works
by using a heuristic method for detecting object edges. The tool is more precise on video sequences,
since it uses previous frames to achieve better accuracy. It is nevertheless a good test for measuring the
“blockiness” of an image, as we will see. The software works in the same way as for blurring, giving one
value for the original image and another for the coded one; we computed the absolute value of the
difference between the two.
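Again, MSU's exact algorithm is undocumented; one common family of blockiness heuristics compares pixel differences across the coding block grid with those inside the blocks. A minimal sketch under that assumption (names and details are ours):

```python
def boundary_vs_interior(pixels, width, block=8):
    """Mean absolute horizontal pixel difference at block-grid boundaries
    versus inside blocks, for a flat row-major image; a boundary mean well
    above the interior mean suggests visible blocking artifacts."""
    boundary, interior = [], []
    for row in range(len(pixels) // width):
        for col in range(width - 1):
            d = abs(pixels[row * width + col] - pixels[row * width + col + 1])
            # the step from col to col+1 crosses the grid when col+1
            # is a multiple of the block size
            (boundary if (col + 1) % block == 0 else interior).append(d)
    mean = lambda values: sum(values) / len(values) if values else 0.0
    return mean(boundary), mean(interior)
```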

3.2 Subjective tests


In order to specify, evaluate and compare video communication systems, it is necessary to determine the
quality of the video image as seen by the viewer. Measuring visual quality is a difficult and often imprecise
task, because many factors can affect the result. Visual quality is inherently subjective, which makes it
difficult to obtain a completely accurate measure. Measuring visual quality using objective criteria gives
consistent, repeatable results, but as yet there is no objective measurement system that completely
reproduces the subjective experience of a human observer watching a video display. Our perception of a
visual scene is formed by a complex interaction between the components of the Human Visual System
(HVS) in the eye and the brain. The perception of visual quality is influenced by spatial and temporal
fidelity. However, a viewer's opinion of quality is also affected by other factors such as the viewing
environment, the observer's state of mind and the extent to which the observer interacts with the visual
scene; visual attention is another important influence on perceived quality. All of these factors make it
very difficult to measure visual quality accurately and quantitatively. More information can be found in [4].
There are two broad families of metrics:

• those that test the overall image quality;

• those that test impairment factors, such as the artifacts generated during compression.

In general, a test should not last more than 30 minutes; it should include some grey-level images to
reset the viewer's opinion, plus a training sequence to warm up (train) the user's judgment, and a
sample of at least 15 people should be used. During our tests we found that we had to train
non-engineering people using the right words: we had to describe clearly and simply what the user
should judge, look at and take an interest in within the image, otherwise their opinions generate
random, useless information.

3.2.1 DSCQS - Double Stimulus Continuous Quality Scale


This test aims to measure the overall image quality perceived by the users. The user is shown a pair of
pictures: one is the original, uncompressed image and the other is the coded one; the user (whom we
will call the assessor) knows neither which one is the original picture nor whether the coded one has
been processed by H.264 or by JPEG 2000. The assessor must simply give a mark to every picture,
representing the quality of what he sees, from 1 (Bad) to 5 (Excellent).
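The collected marks are later averaged per image into a mean opinion score, which is what the plots in the results section report; a trivial sketch (the function name is ours):

```python
def mean_opinion_score(marks):
    """Average of assessor marks on the 1 (Bad) .. 5 (Excellent) scale."""
    if not marks:
        raise ValueError("no marks collected")
    if any(not 1 <= m <= 5 for m in marks):
        raise ValueError("marks must lie between 1 and 5")
    return sum(marks) / len(marks)
```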

3.2.2 DSIS - Double Stimulus Impairment Scale


DSIS is used to measure the robustness of systems (i.e. failure characteristics), in our case how
compression artifacts impair the picture. A typical assessment might call for the evaluation of either a
new system or the effect of a transmission path impairment. The assessor is first presented with an
unimpaired reference, then with the same picture impaired. He is then asked to vote on the second,
keeping the first in mind. In sessions lasting up to half an hour, the assessor is presented with a series of
pictures or sequences in random order and with random impairments covering all required
combinations.
The assessor is required to give a mark evaluating the level of disturbance caused by the
impairments, from 1 (Very annoying) to 5 (Imperceptible).

3.3 Methodology
The tests have been carried out following this sequence of operations:

• Set a QP level in the JM encoder;

• Read and report the resulting bitrate;

• Set the same bitrate in the Kakadu encoder;

• Use MSU tools on the decoded images produced by both JM and Kakadu to obtain objective metrics;

• Show the original and the two coded images (in random order) to the users, for subjective metrics.

4 Analysis and results


In these pages we are going to analyze one image (the famous “Bike” test picture) in many of its parts.
It is a perfect test picture, since it contains all the elements we are interested in studying: uniform
zones, high-frequency parts, black-and-white as well as coloured spots. A complete description of the
tests, with images and graphs, is available in Appendix B.
Figure 1 shows a thumbnail of the test picture, along with the resulting rate-distortion curve. This
image neatly summarizes the results of all our tests. H.264 turns out to be better than JPEG 2000,
especially for bitrates between 0.5 and 4 bpp. JPEG 2000, though, behaves well at very low bitrates,
sometimes beating H.264 in terms of PSNR. At very high bitrates as well, JPEG 2000 recovers some of
the gap it usually concedes to H.264. We now look in detail at how the codecs performed.

Figure 1: “Bike” test image

Figure 2: Detail of “Bike”, QP=20: original (center), JPEG 2000 (left), H.264 (right)

4.1 Objective comparisons


4.1.1 High bitrates
By high bitrates we mean those obtained by setting QP up to 20 in the H.264 encoder. The resulting
bitrate was then used to code the image with Kakadu. In figure 2 we can see the visual effect of the
encoding.
For both H.264 and JPEG 2000 the differences are barely visible; from a visual point of view this
coding could be called quasi-lossless. The resulting numerical values are reported in Table 1. The PSNR
values are very high, while the compression ratios are already substantial, especially at QP=20. In this
case the blockiness and blurring measures are not relevant, since both indicators are very small and no
artifacts can be seen in the compressed image.

4.1.2 Medium bitrates


For QP values higher than 20, some artifacts start to appear. We focus on QP 30 and 40: at these levels
of compression, artifacts become visible but the overall image quality is still good. At least as far as the
Bike image is concerned, these are the threshold values at which a user can state that one codec is
better than the other. This is the core of our work.
Figure 3 shows a detail of the Bike picture, coded with both JPEG 2000 and H.264. We can see how

QP Bitrate [bpp] PSNR H.264 [dB] PSNR JPEG [dB] Compression ratio
10 5.84 52.11 50.85 4:1
20 2.14 43.40 40.14 11:1
Table 1: High bitrate tests for the image Bike
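The compression ratios reported here and in the following tables follow directly from the bitrate, assuming an uncompressed 24 bpp RGB source; small deviations come from rounding of the reported bitrates:

```python
def compression_ratio(bpp, source_bpp=24):
    """Compression ratio vs. an uncompressed source, e.g. 24 bpp RGB."""
    return source_bpp / bpp

# Table 1 bitrates: 5.84 bpp and 2.14 bpp
ratios = [round(compression_ratio(b)) for b in (5.84, 2.14)]
```

which yields roughly 4:1 and 11:1, matching the table.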

QP Bitrate [bpp] PSNR H.264 [dB] PSNR JPEG [dB] Compression ratio
30 0.75 36.08 32.91 32:1
40 0.24 29.47 27.14 100:1
Table 2: Medium bitrate tests for the image Bike

Figure 3: Detail of “Bike”, QP=30: original (center), JPEG 2000 (left), H.264 (right)

the little dots on the apple surface completely disappear. In addition, if we look at the background, we
can see how H.264 flattens out the small variations of the cloth.
Let us look at another example, which clarifies the inner nature of the two standards. In table 2 we
can see that at QP=40 H.264 is better in terms of PSNR by about 2 dB. In figure 4 we can see how the
two standards deal with small details.
JPEG 2000 completely disrupts the written word, while H.264 preserves it almost perfectly. JPEG
2000 is less aware of small details, because it applies its transform to the whole image. H.264 performs
much better because the word is contained in one or more blocks and is transformed with better
accuracy. The price to pay is, again, in terms of precision in the coding of uniform zones.
Regarding the blocking and blurring measures, our results for medium bitrates are summarized in table 3.
In figure 5 we can see the results of the blocking and blurring tests at various bitrates. In the blocking
plot the two standards are almost equal for bitrates greater than 0.75 bpp; this is because the measuring
software can hardly find blocking artifacts there, so the measure becomes less relevant. For lower
bitrates things get clearer, with H.264 showing a much higher blocking index than JPEG 2000.
The blur values are almost the same for the two standards: this is a situation we found repeatedly in
our tests. H.264 creates block artifacts which are themselves blurred; JPEG 2000 does not create evident
blocks, but the global feeling of the image is of something not well focused. In the end the two effects
(different in nature) lead to similar results.
We can now take a look at the coded colours. As usual, we will use another example from a
characteristic spot of the “Bike” image (figure 6).
H.264 is able to better separate and represent the colours, while JPEG 2000 adds some unpleasant
ringing artifacts, and the colour tonality is not exactly the same as in the original. This test, performed
at QP=40, leads to a quite degraded version of the compressed image, but it is the only way to make
the differences visible.

Figure 4: Detail of “Bike”, QP=40: original (center), JPEG 2000 (left), H.264 (right)

QP Blocking H.264 Blocking JPEG 2000 Blurring H.264 Blurring JPEG 2000
30 6.09 5.68 3.04 2.36
40 9.95 0.44 6.02 5.23
Table 3: Blocking and blurring for image Bike

Figure 5: Blocking and blurring vs. Bitrate diagrams for image Bike

4.1.3 Low bitrates


For QP higher than 40, the quality of the coded image becomes poor. At these low bitrates it is possible
to make more detailed observations about the artifacts. The most immediate comment is that JPEG
2000 is better than H.264, especially on some images, representing the whole picture in a way that the
human eye can better appreciate. To explain this concept we use a different, non-standard image called
“Space”. In Appendix A we can see a thumbnail of this picture along with its characteristics. Figure 7
shows the results of the compression with QP 50, and table 4 the numeric results.
The PSNR gain of JPEG 2000 at QP=40 is about 0.2 dB, but when QP is raised to 50, JPEG 2000
outperforms H.264 by 4.6 dB. This is because H.264 uses a block-based transform, while JPEG 2000
applies its transform to the whole image. As a result, H.264 creates the ugly blocks visible in figure 7,
while JPEG 2000 smooths everything. In the end the image coded with JPEG 2000 is more pleasant to
look at, but loses all the details. H.264 saves the details, but at the cost of completely disrupting all the
flat zones. This is clearer in the next example, taken from another image, called DC (a thumbnail and
related information about this image are available in Appendix A).
Figure 8 is an aerial picture of a parking lot: while H.264 shows the cars with good approximation,
JPEG 2000 makes them hardly recognizable. Everything else is completely lost by H.264 in a uniform
grey block, but somehow better coded by JPEG 2000.
This concludes the objective tests performed on the images that can be shown here. In Appendix B
it is possible to find the complete set of images, tables and diagrams used to draw the conclusions.

4.2 Subjective comparisons


We are going to show the results for the two images Bike and DC 1 (thumbnails in Appendix A).

Figure 6: Detail of “Bike”, QP=40: original (center), JPEG 2000 (left), H.264 (right)

Figure 7: Detail of “Space”, QP=50: original (center), JPEG 2000 (left), H.264 (right)

QP Bitrate [bpp] PSNR H.264 [dB] PSNR JPEG [dB] Compression ratio
40 0.04 38.39 38.62 578:1
50 0.02 30.46 35.08 1120:1
Table 4: Low bitrate tests for the image Space

4.2.1 Double Stimulus Continuous Quality Scale


In figure 9 we can see the results obtained. The plot represents the average of the marks given by the
users. We can see that H.264 is always better than JPEG 2000, except for the Bike test image at the
lowest quality (QP=50). This is probably due to the different kinds of artifacts created: at medium
bitrates the users seemed to prefer the presence of small blocking artifacts over the generalized quality
reduction caused by JPEG 2000. When the bitrate becomes very low, the blocks become very annoying
because they make the image look unnatural. JPEG 2000 is then preferred because it does not create an
artificial image impairment: everything looks as if seen through a dirty glass, i.e. more natural.
DC 1, on the other hand, is an image with many small details, and the users seemed to always prefer
H.264. This happens because H.264 is able to preserve the small details, even though it does so by
disrupting flat, uniform zones.

4.2.2 Double Stimulus Impairment Scale


In this test, summarized in figure 10, all the considerations above become evident. DSIS measures how
much the compression errors disturb the users. The test reveals that H.264 artifacts are less annoying at
medium bitrates, while JPEG 2000 artifacts are better tolerated at low ones. Again, this confirms that
when blocking artifacts are few they are not very disturbing, but when the bitrate is very low, ringing
and blurring impairments are preferred by the users.

Figure 8: Detail of “DC”, QP=50: original (center), JPEG 2000 (left), H.264 (right)

Figure 9: DSCQS results for test images Bike and DC 1

Figure 10: DSIS results for test images Bike and DC 1

5 Conclusions
In this paper we compared two modern standards for still image compression: H.264 and JPEG 2000.
Although the first was originally designed for the compression of video sequences, it proved to behave
better than JPEG 2000 in almost every test we made.
H.264 comes out as the winner of this comparison: it outperforms JPEG 2000 in almost all tests,
with PSNR values higher than JPEG 2000's in every practical case. H.264 makes use of a block-based
transform, and this is evident when artifacts appear: the blocking metric clearly reveals it, showing
values that are always higher than JPEG 2000's. The block-based transform also allows the codec to
perform well on small details, even at very low bitrates; on the other hand, it causes poor coding of flat,
uniform zones. Overall, the perceived image quality is always better than JPEG 2000's, with the only
exception of some images coded at very low bitrates.
JPEG 2000 outperformed H.264 only at very low bitrates, and not on all images. On the pros side we
find the artifacts issue: JPEG 2000 does not rely on a block-based transform, so we do not find any
block artifacts. The overall look-and-feel of the encoded image is a generalized lack of detail, instead of
unnatural blocks.
These considerations are supported by the collected data, in terms of PSNR, blocking and blurring.
The PSNR measurements reveal an overall better performance of H.264, especially at medium and
high bitrates, as already noted.
The blocking indicator turns out to be a good metric for the blockiness of an image. Unfortunately
we do not know the exact algorithm that leads to the reported values, for lack of documentation of the
tool. In any case it is clear that H.264 always obtains higher values in this metric, for all the tested
bitrates. As far as JPEG 2000 is concerned, the indicator does not provide useful values, since it is hard
for the algorithm to find block edges in the images. The fundamental result is that the metric always
shows higher values for H.264.
The blurring indicator turned out to be less straightforward to read. Initially we expected this
indicator to work like the blocking one, with a predominance of JPEG 2000 over H.264. This did not
happen. The reason is that H.264 creates blocking artifacts, but each block is also blurred. H.264
maintains details better than JPEG 2000, but at low bitrates it completely disrupts uniform, flat zones.
This makes the blurring indicator grow and eventually exceed the JPEG 2000 one. Another remark is
that while the blocking indicator for JPEG 2000 behaved almost randomly, here it acquires a precise
trend, a sign that the metric can work well.
As far as subjective metrics are concerned, the final result is that H.264 outperforms JPEG 2000 at
all but the lowest bitrates. Normal rates allow H.264 to find the best compromise between compression
and artifacts. The users seemed to notice more degradation in JPEG 2000-coded images, probably
because the global image quality decreased altogether. H.264 preserves more details, and at
medium-high bitrates its artifacts are less annoying. On the other hand, when the encoded image
quality was very bad, the H.264 block artifacts were much more annoying than the JPEG 2000 ones,
because they looked more unnatural.
In conclusion, H.264 proves inherently better than JPEG 2000 considering both the objective and
subjective measurements made, even though at the lowest bitrates JPEG 2000 presents artifacts that
are less disturbing.

A Test images
Bike

Width 1344
Height 1680
File size 6,773,760 bytes
DC 1

Width 1536
Height 1536
File size 7,077,888 bytes
DC 2

Width 1920
Height 1088
File size 6,266,880 bytes
Space

Width 1024
Height 512
File size 1,572,864 bytes
Palm

Width 1488
Height 1120
File size 4,999,680 bytes

B Detailed tests
Bike
H.264 JPEG 2000
Bpp PSNR Blurring Blocking PSNR Blurring Blocking
0.07 23.65 10.83 8.54 23.50 9.31 0.85
0.24 29.47 6.02 9.95 27.14 5.23 0.44
0.75 36.08 3.04 6.09 32.91 2.36 5.68
2.14 43.40 0.92 1.57 40.14 0.91 2.67
5.84 52.11 0.04 0.43 50.85 0.06 0.03

DC 1
H.264 JPEG 2000
Bpp PSNR Blurring Blocking PSNR Blurring Blocking
0.14 18.25 33.58 21.05 17.79 32.50 5.67
0.74 24.70 10.14 6.75 22.23 12.96 4.71
2.07 33.15 1.71 3.06 29.71 2.62 2.88
3.84 43.07 0.05 3.74 38.18 0.54 3.47
5.83 52.70 -0.03 2.25 51.23 0.00 1.97

DC 2
H.264 JPEG 2000
Bpp PSNR Blurring Blocking PSNR Blurring Blocking
0.10 19.89 28.86 7.51 19.51 27.39 8.55
0.51 25.69 12.99 2.81 23.65 12.61 7.58
1.61 33.15 2.97 6.29 29.74 4.06 5.68
3.38 42.87 0.14 7.44 37.72 0.72 7.19
6.36 52.84 0.00 4.69 56.43 -0.01 0.93

Space

H.264 JPEG 2000


Bpp PSNR Blurring Blocking PSNR Blurring Blocking
0.02 30.46 1.67 3.76 35.08 1.49 3.29
0.04 38.39 1.33 4.01 38.62 1.26 2.52
0.11 44.20 1.11 3.80 43.07 1.06 2.43
0.31 47.70 0.91 3.63 47.08 0.79 3.22
2.06 52.28 0.14 2.35 51.85 0.14 0.00

Palm
H.264 JPEG 2000
Bpp PSNR Blurring Blocking PSNR Blurring Blocking
0.02 29.99 5.82 11.81 32.96 5.44 11.71
0.05 34.50 4.88 12.85 35.01 4.39 11.12
0.23 39.26 3.17 10.80 38.96 2.82 9.55
1.13 43.89 1.56 6.63 43.39 1.17 6.67
4.24 51.68 0.11 0.10 50.52 0.10 0.00

References
[1] Athanassios Skodras, Charilaos Christopoulos, Touradj Ebrahimi — The JPEG 2000 still image com-
pression standard.

[2] T. Wiegand, G. J. Sullivan, G. Bjøntegaard, A. Luthra — Overview of the H.264/AVC video coding
standard.

[3] David Taubman — Kakadu Survey Documentation

[4] Recommendation ITU-R BT.500-11 — Methodology for the subjective assessment of the quality
of television pictures.

[5] www.wikipedia.org

[6] www.jpeg.org/jpeg2000/

[7] http://compression.ru/video/quality_measure/video_measurement_tool_en.html
