Hybrid JPEG Compression Using Histogram Based Segmentation

M. Mohamed Sathik, K. Senthamarai Kannan and Y. Jacob Vetha Raj

Department of Computer Science, Sadakathullah Appa College, Tirunelveli, India
Department of Statistics, Manonmanium Sundaranar University, Tirunelveli, India

Abstract-- Image compression is indispensable for image transmission, since channel bandwidth is limited and the demand is for faster transmission. Storage limitations also force image compression, as color and spatial resolutions increase with quality requirements. JPEG is a widely used compression technique; it uses linear quantization and threshold values to maintain a uniform quality over the entire image. The proposed method estimates the vitality of each block of the image and adapts the quantization and threshold values accordingly. This ensures that the vital area of the image is better preserved than the other areas. This hybrid approach increases the compression ratio and produces an output image of the desired high quality.

Key words-- Image Compression, Edge Detection, Segmentation, Image Transformation, JPEG

I. INTRODUCTION

Every day, an enormous amount of
information is stored, processed, and transmitted digitally.
Companies provide business associates, investors, and
potential customers with financial data, annual reports,
inventory, and product information over the Internet. And
much of the online information is graphical or pictorial in
nature; the storage and communications requirements are
immense. Methods of compressing the data prior to storage
and/or transmission are of significant practical and
commercial interest.
Compression techniques fall into two broad categories: lossless [1] and lossy [2][3]. The former is particularly useful in image archiving and allows the image to be compressed and decompressed without losing any information. The latter provides higher levels of data reduction but results in a less than perfect reproduction of the original image. Lossy compression is useful in applications such as broadcast television, videoconferencing, and facsimile transmission, in which a certain amount of error is an acceptable trade-off for
increased compression performance. The foremost aim of
image compression is to reduce the number of bits needed
to represent an image. In lossless image compression
algorithms, the reconstructed image is identical to the
original image. Lossless algorithms, however, are limited
by the low compression ratios they can achieve. Lossy
compression algorithms, on the other hand, are capable of
achieving high compression ratios. Though the
reconstructed image is not identical to the original image,
lossy compression algorithms obtain high compression
ratios by exploiting human visual properties.
Vector quantization [4],[5] and wavelet transformation [1],[5]-[10] techniques are widely used in image compression, in addition to various other methods [11]-[17]. The problem with lossless compression is that the compression ratio is very low, whereas in lossy compression the compression ratio is very high but vital information of the image may be lost. Some of the work carried out in hybrid image compression [18]-[19] incorporates different compression schemes, such as PVQ and DCTVQ, within a single image. The proposed method instead uses lossy compression with different quality levels, selected by context, to compress a single image, avoiding the difficulty of using side information for image decompression as in [20].
The proposed method performs a hybrid compression that balances compression ratio and image quality by compressing the vital parts of the image with high quality. In this approach the main subject of the image is more important than the background. Considering the importance of the image components, and the effect of smoothness on image compression, the method segments the image into main subject and background; the background is then subjected to low-quality lossy compression while the main subject is compressed with high-quality lossy compression. An enormous amount of work on image compression has been carried out, both lossless [1][14][17] and lossy [4][15], but very few works address hybrid image compression [18]-[20].
In the proposed work, edge detection, segmentation, smoothing and dilation techniques are used for image compression. A great deal of work has been carried out on edge detection, segmentation [21],[22], smoothing and dilation [2],[3]. A novel and time-efficient method to detect edges and to segment the image, as used in the proposed work, is described in section II; section III gives a detailed description of the proposed method; the results and discussion are given in section IV; and concluding remarks are given in section V.

A. JPEG Compression
An image compression system (JPEG) consists of three closely connected components, namely:
- Source encoder (DCT based)
- Quantizer
- Entropy encoder

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 8, November 2010. ISSN 1947-5500.

Figure 2 shows the architecture of the JPEG encoder.
Principles behind JPEG compression. A common characteristic of most images is that neighboring pixels are correlated and therefore carry redundant information. The foremost task, then, is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source. Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). The JPEG compression standard (DCT based) applies the discrete cosine transform to each 8 x 8 block of the partitioned image; compression is then achieved by quantizing each of those 8 x 8 coefficient blocks.
Image Transform Coding For JPEG Compression
In the image compression algorithm, the input image is
divided into 8-by-8 or 16-by-16 non-overlapping blocks,
and the two-dimensional DCT is computed for each block.
The DCT coefficients are then quantized, coded, and
transmitted. The JPEG receiver (or JPEG file reader)
decodes the quantized DCT coefficients, computes the
inverse two-dimensional DCT of each block, and then puts
the blocks back together into a single image. For typical
images, many of the DCT coefficients have values close to
zero; these coefficients can be discarded without seriously
affecting the quality of the reconstructed image. The two-dimensional DCT of an M × N matrix A is defined as

$$B_{pq} = \alpha_p \alpha_q \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} A_{mn} \cos\frac{\pi(2m+1)p}{2M} \cos\frac{\pi(2n+1)q}{2N}, \qquad 0 \le p \le M-1,\; 0 \le q \le N-1$$

where

$$\alpha_p = \begin{cases} 1/\sqrt{M}, & p = 0 \\ \sqrt{2/M}, & 1 \le p \le M-1 \end{cases} \qquad \alpha_q = \begin{cases} 1/\sqrt{N}, & q = 0 \\ \sqrt{2/N}, & 1 \le q \le N-1 \end{cases}$$
The DCT is an invertible transformation and its inverse is given by

$$A_{mn} = \sum_{p=0}^{M-1} \sum_{q=0}^{N-1} \alpha_p \alpha_q B_{pq} \cos\frac{\pi(2m+1)p}{2M} \cos\frac{\pi(2n+1)q}{2N}, \qquad 0 \le m \le M-1,\; 0 \le n \le N-1$$

with $\alpha_p$ and $\alpha_q$ as defined above.
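As a sketch of these definitions, the transform pair can be written directly in Python with NumPy. This is a direct O(M²N²) evaluation for illustration only; real codecs use fast transforms.

```python
import numpy as np

# Direct evaluation of the 2-D DCT pair defined above (illustrative only;
# production codecs use fast O(N log N) transforms).
def alphas(K):
    # alpha_0 = 1/sqrt(K); alpha_k = sqrt(2/K) for 1 <= k <= K-1
    a = np.full(K, np.sqrt(2.0 / K))
    a[0] = np.sqrt(1.0 / K)
    return a

def dct2(A):
    M, N = A.shape
    ap, aq = alphas(M), alphas(N)
    m, n = np.arange(M), np.arange(N)
    B = np.empty((M, N))
    for p in range(M):
        for q in range(N):
            # Separable cosine basis for frequency pair (p, q)
            basis = np.outer(np.cos(np.pi * (2 * m + 1) * p / (2 * M)),
                             np.cos(np.pi * (2 * n + 1) * q / (2 * N)))
            B[p, q] = ap[p] * aq[q] * np.sum(A * basis)
    return B

def idct2(B):
    M, N = B.shape
    ap, aq = alphas(M), alphas(N)
    p, q = np.arange(M), np.arange(N)
    A = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            # Weighted cosine basis evaluated at pixel (m, n)
            basis = np.outer(ap * np.cos(np.pi * (2 * m + 1) * p / (2 * M)),
                             aq * np.cos(np.pi * (2 * n + 1) * q / (2 * N)))
            A[m, n] = np.sum(B * basis)
    return A
```

Because the transform is orthonormal, `idct2(dct2(A))` reproduces `A` up to floating-point error, which matches the statement that the DCT itself introduces no loss.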

The DCT based encoder can be thought of as essentially
compression of a stream of 8 X 8 blocks of image samples.
Each 8 X 8 block makes its way through each processing
step, and yields output in compressed form into the data
stream. Because adjacent image pixels are highly correlated,
the ‘forward’ DCT (FDCT) processing step lays the
foundation for achieving data compression by concentrating
most of the signal in the lower spatial frequencies. For a
typical 8 X 8 sample block from a typical source image,
most of the spatial frequencies have zero or near-zero
amplitude and need not be encoded. In principle, the DCT
introduces no loss to the source image samples; it merely
transforms them to a domain in which they can be more
efficiently encoded.
After output from the FDCT, each of the 64 DCT coefficients is uniformly quantized in conjunction with a carefully designed 64-element quantization table. At the decoder, the quantized values are multiplied by the corresponding table elements to recover approximations of the original unquantized values. After quantization, all of the quantized
coefficients are ordered into the “zig-zag” sequence as
shown in figure 1. This ordering helps to facilitate entropy
encoding by placing low-frequency non-zero coefficients
before high-frequency coefficients. The DC coefficient,
which contains a significant fraction of the total image
energy, is differentially encoded.
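The uniform quantization and zig-zag scan described above can be sketched as follows. The quantization table shown is the widely published JPEG luminance example table; the exact table used in this work is not specified, so it stands in as an assumption.

```python
import numpy as np

# Well-known JPEG example luminance quantization table (an assumption here;
# the paper only says a "carefully designed 64-element" table is used).
Q_LUM = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def zigzag_indices(n=8):
    # Walk anti-diagonals of an n x n block, alternating direction,
    # as in the zig-zag sequence of figure 1.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1],
                                  -ij[1] if (ij[0] + ij[1]) % 2 else ij[1]))

def quantize_and_scan(block):
    # Uniform quantization by the table, then zig-zag reordering so that
    # low-frequency coefficients come first.
    q = np.round(block / Q_LUM).astype(int)
    return [q[i, j] for i, j in zigzag_indices()]
```

The first entry of the scan is the DC coefficient; entropy coding then benefits from the long runs of zeros that collect at the high-frequency end of the list.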

Figure 1. Zig-Zag Sequence

The JPEG decoder architecture, shown in figure 3, reverses the procedure described for compression.

B. Segmentation
Let D be a matrix of order m × n representing an image of width m and height n. The domain of D_{i,j} is [0..255], for any i = 1..m and any j = 1..n.
The architecture of segmentation using the histogram is shown in figure 4. To make the process faster, the high-resolution input image is down-sampled two times; each down-sampling halves both dimensions, so the final down-sampled image (D) has dimension (m/4) × (n/4). The down-sampled image is smoothed to obtain the smoothed gray-scale image S using the 3 × 3 mean of equation (1):

$$S_{i,j} = \frac{1}{9} \sum_{k=-1}^{1} \sum_{l=-1}^{1} D_{i+k,\,j+l} \qquad \text{...(1)}$$
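A minimal sketch of this 3 × 3 mean smoothing follows; handling the borders by edge replication is an assumption, since the paper does not say how boundary pixels are treated.

```python
import numpy as np

# 3x3 neighborhood mean of equation (1). Border pixels are handled by
# replicating the edge rows/columns (an assumption, not the paper's choice).
def smooth3x3(D):
    Dp = np.pad(D.astype(float), 1, mode='edge')
    S = np.zeros(D.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            # Accumulate each of the nine shifted copies of the image.
            S += Dp[1 + di:1 + di + D.shape[0], 1 + dj:1 + dj + D.shape[1]]
    return S / 9.0
```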






Figure 2. JPEG Encoder Block Diagram


Figure 3. JPEG Decoder Block Diagram

Figure 4. Classifier
The histogram (H) is computed for the gray-scale image (S). The most frequently occurring gray value (Mh) is determined from the histogram by equation (2), and is shown as the marked line in figure 5.

$$M_h = \arg\max_x H(x) \qquad \text{...(2)}$$

In the case of a homogeneous background, the background gray value of the image has the highest frequency. To accommodate background texture, a range of gray-level values is considered for segmentation. The range is computed using equations (3) and (4):

$$L = \max(M_h - 30,\, 0) \qquad \text{...(3)}$$

$$U = \min(M_h + 30,\, 255) \qquad \text{...(4)}$$

The gray-scale image S is segmented to detect the background area of the image using the function given in equation (5):

$$B_{i,j} = (S_{i,j} > L) \;\text{and}\; (S_{i,j} < U) \qquad \text{...(5)}$$
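Equations (2)-(5) amount to a few lines of NumPy; this sketch assumes an 8-bit gray-scale input.

```python
import numpy as np

# Histogram-based background detection of equations (2)-(5).
# S is assumed to be an 8-bit gray-scale image.
def segment_background(S):
    H = np.bincount(S.ravel().astype(np.uint8), minlength=256)
    Mh = int(np.argmax(H))      # equation (2): modal gray value
    L = max(Mh - 30, 0)         # equation (3)
    U = min(Mh + 30, 255)       # equation (4)
    B = (S > L) & (S < U)       # equation (5): True (1) = background pixel
    return B, (L, U)
```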

Figure 5.

After this step, pixels in the background area have value 1 in the binary image B. To avoid over-segmentation, the binary image is subjected to a sequence of morphological operations. First, it is eroded with a small circular structuring element (SE) to remove small segments, as given in equation (6):

$$B = B \ominus SE \qquad \text{...(6)}$$
Then the resulting image is subjected to a morphological closing operation with a larger circular structuring element, as given in equation (7):

$$B = B \bullet SE \qquad \text{...(7)}$$
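The two operations of equations (6) and (7) can be sketched with scipy.ndimage. The disc radii (1 for erosion, 3 for closing) are illustrative assumptions; the paper only says the erosion element is smaller and the closing element larger.

```python
import numpy as np
from scipy import ndimage

# Circular structuring element of radius r.
def disc(r):
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

# Equation (6): erosion with a small disc removes tiny segments.
# Equation (7): closing with a larger disc fills gaps in the mask.
# Radii 1 and 3 are illustrative, not the paper's values.
def clean_mask(B):
    B = ndimage.binary_erosion(B, structure=disc(1))
    B = ndimage.binary_closing(B, structure=disc(3))
    return B
```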
III. THE PROPOSED METHOD

The input image is initially segmented into background and foreground parts as described in section II.B. The image is then divided into 8 × 8 blocks and the DCT coefficients are computed for each block. Quantization is performed according to the predefined quantization table. The quantized values are then reordered by the zig-zag ordering described in section II.A. The lower-valued AC coefficients are discarded from the zig-zag ordered list by comparison against the threshold value chosen by the selector, according to the block's location as identified by the classifier: if the block lies in the foreground area the selector sets a higher threshold value, otherwise a lower threshold value. After discarding insignificant coefficients, the remaining data are compressed by the standard entropy encoder based on the code table. The main steps are as follows:
1. Input High Resolution Color image.
2. Down sample the input image 2 times.
3. Convert the down sampled image to gray scale
image (G).
4. Find histogram (H) of the gray scale image.
5. Find the lower (L) and upper (U) gray scale value
of background area.
6. Find Binary segmented image (B) from the gray
scale image (G)
7. Up sample Binary image (B) two times.
8. Divide the input image into 8x8 blocks
9. Find DCT coefficients for each block.
10. Quantize the DCT coefficients
11. Discard lower quantized values based on the
threshold value selected by the selector.
12. Compress the remaining DCT coefficients by entropy encoding.
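The selector and the coefficient-discarding step above can be sketched as follows. The numeric thresholds and the foreground test are illustrative assumptions (the paper gives no values); the convention here is that a smaller discard threshold preserves more AC coefficients, i.e. higher quality for foreground blocks.

```python
import numpy as np

# Illustrative selector sketch: T_FG < T_BG so foreground blocks keep
# more AC coefficients. The actual values are assumptions, not the paper's.
T_FG, T_BG = 1, 8

def select_threshold(bg_mask_block):
    # bg_mask_block: 8x8 boolean block of B (True = background pixel).
    # A block is treated as foreground if any pixel is non-background.
    return T_FG if (~bg_mask_block).any() else T_BG

def discard_small(zz, thr):
    # Zero out AC coefficients whose magnitude is below the threshold;
    # the DC coefficient (index 0) is always kept.
    return [zz[0]] + [c if abs(c) >= thr else 0 for c in zz[1:]]
```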
The architecture of the proposed method is shown in figure 6. The quantization table is a fixed classical table derived from empirical results. The quantizer quantizes the DCT coefficients computed by the FDCT. The classifier identifies the class of each pixel by segmenting the given input image. The selector and limiter work together to determine the discard threshold limit. The entropy encoder creates the compressed code using the code table. The compressed image may be stored, or transmitted faster than with the existing method.
IV. RESULTS AND DISCUSSION

The Hybrid JPEG compression method was implemented according to the description in section III and tested with the set of test images shown in figure 8. The results obtained from the implementation of the proposed algorithm are shown in figures 7, 9 and 10 and table I. Figure 7.a shows the original input image. In figure 7.b the segmented object and background areas are discriminated as black and white. The compressed bit rates of the twelve test images were computed and tabulated in table I. Low-quality (LQ) and high-quality (HQ) JPEG compression was performed and the corresponding compression ratios (CR) and PSNR values are tabulated; the PSNR is higher for HQ and the CR is higher for LQ. Hybrid JPEG performs HQ compression on the main subject area and LQ compression on the background, so the PSNR value in the main subject area is the same for Hybrid JPEG and HQ JPEG. Figure 9 compares the normalized CRs of Hybrid JPEG and HQ JPEG; almost all of the images are compressed better than with classical JPEG compression. Figure 10 shows how much the compression ratio increases over the classical JPEG compression method.
Figure 6. Hybrid JPEG compression method

a) Input Image

b) Segmented Main Subject Area
Figure 7. Input/Output
Table I. Compression Ratio and PSNR values obtained by Hybrid JPEG and JPEG

Image LQ Hybrid Hybrid/HQ HQ
3 26.00 21.52 7.46 27.82 27.84 7.46 27.82 0.0000
5 25.29 22.09 6.80 28.87 28.93 6.80 28.87 0.0004
10 23.83 20.78 5.41 28.09 28.13 5.41 28.09 0.0031
4 24.33 22.08 6.61 31.13 31.20 6.60 31.13 0.0068
11 27.75 25.33 8.62 35.97 36.25 8.61 36.01 0.0198
9 24.92 21.62 6.58 30.45 30.56 6.56 30.51 0.0204
1 27.54 23.03 8.40 29.37 29.40 8.37 29.37 0.0276
7 24.85 17.71 4.01 23.38 23.43 3.92 23.38 0.0911
8 24.63 21.97 6.73 31.05 31.13 6.64 31.10 0.0921
12 29.18 22.73 5.92 28.70 29.07 5.61 28.93 0.3111
6 26.47 20.12 5.54 25.31 25.33 5.19 25.30 0.3449
2 22.84 20.40 7.91 27.93 28.21 7.25 28.06 0.6541

Figure 8. Test Images (1-12, from left to right and top to bottom)

Figure 9. Normalized Compression Ratio Obtained for Test Images
Figure 10. Increased Compression Ratios by Hybrid Compression

V. CONCLUSION

The compression ratio of the Hybrid JPEG method is higher than that of the JPEG method in more than 90% of the test cases; in the worst case both methods give the same compression ratio. The PSNR value in the main subject area is the same for both methods. The PSNR value in the background area is lower for Hybrid JPEG, which is acceptable since the background area is not vital. The Hybrid JPEG method is suitable for imagery with a large, trivial background where a certain level of loss is permissible.

ACKNOWLEDGMENT

The authors express their gratitude to the University Grants Commission and Manonmanium Sundaranar University for financial assistance under the Faculty Development Program.

REFERENCES

[1] Xiwen OwenZhao, Zhihai HenryHe, "Lossless Image Compression Using Super-Spatial Structure Prediction", IEEE Signal Processing Letters, vol. 17, no. 4, p. 383, April 2010.
[2] Aaron T. Deever and Sheila S. Hemami, "Lossless Image Compression With Projection-Based and Adaptive Reversible Integer Wavelet Transforms", IEEE Transactions on Image Processing, vol. 12, no. 5, May 2003.
[3] Nikolaos V. Boulgouris, Dimitrios Tzovaras, and Michael Gerassimos Strintzis, "Lossless Image Compression Based on Optimal Prediction, Adaptive Lifting, and Conditional Arithmetic Coding", IEEE Transactions on Image Processing, vol. 10, no. 1, Jan 2001.
[4] Xin Li and Michael T. Orchard, "Edge-Directed Prediction for Lossless Compression of Natural Images", IEEE Transactions on Image Processing, vol. 10, no. 6, Jun 2001.
[5] Jaemoon Kim, Jungsoo Kim and Chong-Min Kyung, "A Lossless Embedded Compression Algorithm for High Definition Video Coding", IEEE ICME 2009.
[6] Rene J. van der Vleuten, Richard P. Kleihorst, Christian Hentschel, "Low-Complexity Scalable DCT Image Compression", IEEE, 2000.
[7] K. Somasundaram and S. Domnic, "Modified Vector Quantization Method for Image Compression", Transactions on Engineering, Computing and Technology, vol. 13, May 2006.
[8] Mohamed A. El-Sharkawy, Christian A. White and Harry, "Subband Image Compression Using Wavelet Transform and Vector Quantization", IEEE, 1997.
[9] Amir Averbuch, Danny Lazar, and Moshe Israeli, "Image Compression Using Wavelet Transform and Multiresolution Decomposition", IEEE Transactions on Image Processing, vol. 5, no. 1, Jan 1996.
[10] Jianyu Lin, Mark J. T. Smith, "New Perspectives and Improvements on the Symmetric Extension Filter Bank for Subband/Wavelet Image Compression", IEEE Transactions on Image Processing, vol. 17, no. 2, Feb 2008.
[11] Yu Liu and King Ngi Ngan, "Weighted Adaptive Lifting-Based Wavelet Transform for Image Coding", IEEE Transactions on Image Processing, vol. 17, no. 4, Apr 2008.
[12] Michael B. Martin and Amy E. Bell, "New Image Compression Techniques Using Multiwavelets and Multiwavelet Packets", IEEE Transactions on Image Processing, vol. 10, no. 4, Apr 2001.
[13] Roger L. Claypoole, Jr., Geoffrey M. Davis, Wim Sweldens, "Nonlinear Wavelet Transforms for Image Coding via Lifting", IEEE Transactions on Image Processing, vol. 12, no. 12, Dec 2003.
[14] David Salomon, "Data Compression: The Complete Reference", Springer-Verlag New York, Inc., ISBN 0-387-40697-2.
[15] Eddie Batista de Lima Filho, Eduardo A. B. da Silva, Murilo Bresciani de Carvalho, and Frederico Silva Pinagé, "Universal Image Compression Using Multiscale Recurrent Patterns With Adaptive Probability Model", IEEE Transactions on Image Processing, vol. 17, no. 4, Apr 2008.
[16] Ingo Bauermann and Eckehard Steinbach, "RDTC Optimized Compression of Image-Based Scene Representations (Part I): Modeling and Theoretical Analysis", IEEE Transactions on Image Processing, vol. 17, no. 5, May 2008.
[17] Roman Kazinnik, Shai Dekel, and Nira Dyn, "Low Bit-Rate Image Coding Using Adaptive Geometric Piecewise Polynomial Approximation", IEEE Transactions on Image Processing, vol. 16, no. 9, Sep 2007.
[18] Marta Mrak, Sonja Grgic, and Mislav Grgic, "Picture Quality Measures in Image Compression Systems", EUROCON 2003, Ljubljana, Slovenia, IEEE, 2003.
[19] Alan C. Brooks, Xiaonan Zhao, Thrasyvoulos N. Pappas, "Structural Similarity Quality Metrics in a Coding Context: Exploring the Space of Realistic Distortions", IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1261-1273, Aug 2008.
[20] S. W. Hong, P. Bao, "Hybrid image compression model based on subband coding and edge-preserving regularization", Vision, Image and Signal Processing, IEE Proceedings, vol. 147, issue 1, pp. 16-22, Feb 2000.
[21] Zhe-Ming Lu, Hui Pei, "Hybrid Image Compression Scheme Based on PVQ and DCTVQ", IEICE Transactions on Information and Systems, vol. E88-D, issue 10, October 2005.
[22] Y. Jacob Vetha Raj, M. Mohamed Sathik and K. Senthamarai Kannan, "Hybrid Image Compression by Blurring Background and Non-Edges", The International Journal on Multimedia and its Applications, vol. 2, no. 1, pp. 32-41, February 2010.
[23] William K. Pratt, "Digital Image Processing", John Wiley & Sons, Inc., ISBN 9-814-12620-9.
[24] Jundi Ding, Runing Ma, and Songcan Chen, "A Scale-Based Connected Coherence Tree Algorithm for Image Segmentation", IEEE Transactions on Image Processing, vol. 17, no. 2, Feb 2008.
[25] Kyungsuk (Peter) Pyun, Johan Lim, Chee Sun Won, and Robert M. Gray, "Image Segmentation Using Hidden Markov Gauss Mixture Models", IEEE Transactions on Image Processing, vol. 16, no. 7, July 2007.
