Hybrid JPEG Compression Using Histogram Based Segmentation
M. Mohamed Sathik¹, K. Senthamarai Kannan² and Y. Jacob Vetha Raj³

¹ Department of Computer Science, Sadakathullah Appa College, Tirunelveli, India
mmdsadiq@gmail.com
²,³ Department of Statistics, Manonmanium Sundaranar University, Tirunelveli, India
senkannan2002@gmail.com, jacobvetharaj@gmail.com

Abstract-- Image compression is an inevitable solution for image transmission, since channel bandwidth is limited and the demand is for faster transmission. Storage limitations also force image compression, as color and spatial resolutions increase according to quality requirements. JPEG compression is a widely used compression technique; the JPEG method uses linear quantization and threshold values to maintain a certain quality over the entire image. The proposed method estimates the vitality of each block of the image and adapts variable quantization and threshold values. This ensures that the vital area of the image is preserved more faithfully than the other areas of the image. This hybrid approach increases the compression ratio and produces a desired high-quality output image.
Key words-- Image Compression, Edge Detection, Segmentation, Image Transformation, JPEG, Quantization.

I. INTRODUCTION
Every day, an enormous amount of information is stored, processed, and transmitted digitally. Companies provide business associates, investors, and potential customers with financial data, annual reports, inventory, and product information over the Internet. Because much of this online information is graphical or pictorial in nature, the storage and communication requirements are immense. Methods of compressing the data prior to storage and/or transmission are therefore of significant practical and commercial interest.

Compression techniques fall under two broad categories: lossless [1] and lossy [2][3]. The former is particularly useful in image archiving and allows the image to be compressed and decompressed without losing any information. The latter provides higher levels of data reduction but results in a less than perfect reproduction of the original image. Lossy compression is useful in applications such as broadcast television, videoconferencing, and facsimile transmission, in which a certain amount of error is an acceptable trade-off for increased compression performance. The foremost aim of image compression is to reduce the number of bits needed to represent an image. In lossless image compression algorithms, the reconstructed image is identical to the original; such algorithms, however, are limited by the low compression ratios they can achieve. Lossy compression algorithms, on the other hand, achieve high compression ratios by exploiting properties of the human visual system, though the reconstructed image is not identical to the original.

Vector quantization [4],[5] and wavelet transformation [1],[5]-[10] techniques are widely used, in addition to various other methods [11]-[17], in image compression. The problem with lossless compression is that the compression ratio is very low, whereas lossy compression achieves a very high compression ratio but may lose vital information of the image.

Some of the work carried out in hybrid image compression [18]-[19] incorporated different compression schemes, such as PVQ and DCTVQ, within a single image compression. The proposed method instead uses a lossy compression method with different quality levels, chosen by context, to compress a single image, avoiding the difficulties of using side information for image decompression found in [20].

The proposed method performs a hybrid compression, which balances compression ratio and image quality by compressing the vital parts of the image at high quality. In this approach the main subject of the image is more important than the background. Considering the importance of image components, and the effect of smoothness on image compression, this method segments the image into main subject and background; the background is then subjected to low-quality lossy compression while the main subject is compressed with high-quality lossy compression. An enormous amount of work on image compression has been carried out in both lossless [1][14][17] and lossy [4][15] compression, but very few works address hybrid image compression [18]-[20].

In the proposed work, edge detection, segmentation, smoothing, and dilation techniques are used for image compression. A great deal of work has been carried out on edge detection, segmentation [21],[22], smoothing, and dilation [2],[3]. A novel and time-efficient method for edge detection and segmentation used in the proposed work is described in section II; section III gives a detailed description of the proposed method; the results and discussion are given in section IV; and the concluding remarks are given in section V.
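The block-level quality switching described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `segment_fn` and `encode_block_fn` are hypothetical placeholders for the segmentation stage and a JPEG-style block coder, and the two quality values are arbitrary.

```python
import numpy as np

def hybrid_compress(image, segment_fn, encode_block_fn,
                    hq_quality=90, lq_quality=30, block=8):
    """Encode main-subject blocks at high quality and background
    blocks at low quality, per a binary mask (1 = background)."""
    mask = segment_fn(image)
    h, w = image.shape[:2]
    encoded = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            tile = image[i:i + block, j:j + block]
            # Treat a block as background if most of its mask pixels are 1.
            is_bg = mask[i:i + block, j:j + block].mean() > 0.5
            encoded.append(encode_block_fn(tile,
                                           lq_quality if is_bg else hq_quality))
    return encoded
```

Only the per-block quality decision changes relative to plain JPEG, which is what lets the decoder remain standard (no side information per block is illustrated here).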
II. BACKGROUND

A. JPEG Compression

Components of an Image Compression System (JPEG). An image compression system consists of three closely connected components, namely:

- Source encoder (DCT based)
- Quantizer
- Entropy encoder

Figure 2 shows the architecture of the JPEG encoder.

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 8, November 2010, http://sites.google.com/site/ijcsis/, ISSN 1947-5500
Principles behind JPEG Compression. A common characteristic of most images is that neighboring pixels are correlated and therefore contain redundant information. The foremost task, then, is to find a less correlated representation of the image. Two fundamental components of compression are redundancy and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source. Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). The JPEG compression standard (DCT based) employs the discrete cosine transform, which is applied to each 8 x 8 block of the partitioned image. Compression is then achieved by quantizing each of those 8 x 8 coefficient blocks.
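The DCT-plus-quantization core just described can be sketched in a few lines of NumPy. This is a hedged illustration, not the reference codec; the quantization table is the standard JPEG luminance table from Annex K of the specification.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: entry (p, m) is
    alpha_p * cos(pi * (2m + 1) * p / (2n))."""
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] *= np.sqrt(1.0 / n)
    c[1:, :] *= np.sqrt(2.0 / n)
    return c

# Standard JPEG luminance quantization table (Annex K of the JPEG spec).
Q_LUM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def jpeg_block_encode(block, qtable=Q_LUM):
    """Level-shift an 8x8 pixel block, apply the 2-D DCT,
    and uniformly quantize each coefficient by its table entry."""
    C = dct_matrix(8)
    coeffs = C @ (block - 128.0) @ C.T   # 2-D DCT via B = C A C^T
    return np.rint(coeffs / qtable).astype(int)
```

For a flat block most coefficients quantize to zero, which is exactly the redundancy reduction the text describes.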
Image Transform Coding for the JPEG Compression Algorithm. In the image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 non-overlapping blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded without seriously affecting the quality of the reconstructed image. The two-dimensional DCT of an M by N matrix A is defined as follows:

\[
B_{pq} = \alpha_p \alpha_q \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} A_{mn}
\cos\frac{\pi(2m+1)p}{2M}\cos\frac{\pi(2n+1)q}{2N},
\quad 0 \le p \le M-1,\; 0 \le q \le N-1
\]

where

\[
\alpha_p = \begin{cases}\sqrt{1/M}, & p = 0\\ \sqrt{2/M}, & 1 \le p \le M-1\end{cases}
\qquad
\alpha_q = \begin{cases}\sqrt{1/N}, & q = 0\\ \sqrt{2/N}, & 1 \le q \le N-1\end{cases}
\]

The DCT is an invertible transformation, and its inverse is given by

\[
A_{mn} = \sum_{p=0}^{M-1}\sum_{q=0}^{N-1} \alpha_p \alpha_q B_{pq}
\cos\frac{\pi(2m+1)p}{2M}\cos\frac{\pi(2n+1)q}{2N},
\quad 0 \le m \le M-1,\; 0 \le n \le N-1
\]

with the same \(\alpha_p\) and \(\alpha_q\) as above.

The DCT-based encoder can be thought of as essentially compressing a stream of 8 x 8 blocks of image samples. Each 8 x 8 block makes its way through each processing step and yields its output in compressed form into the data stream. Because adjacent image pixels are highly correlated, the 'forward' DCT (FDCT) processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8 x 8 sample block from a typical source image, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded. In principle, the DCT introduces no loss to the source image samples; it merely transforms them to a domain in which they can be more efficiently encoded.

After output from the FDCT, each of the 64 DCT coefficients is uniformly quantized in conjunction with a carefully designed 64-element Quantization Table. At the decoder, the quantized values are multiplied by the corresponding QT elements to recover the original unquantized values. After quantization, all of the quantized coefficients are ordered into the "zig-zag" sequence as shown in figure 1. This ordering helps to facilitate entropy encoding by placing low-frequency non-zero coefficients before high-frequency coefficients. The DC coefficient, which contains a significant fraction of the total image energy, is differentially encoded.

Figure 1. Zig-Zag Sequence

The JPEG decoder architecture, shown in figure 3, is the reverse of the procedure described for compression.
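The zig-zag ordering of figure 1 can be generated programmatically. The sketch below (an illustration, not part of the paper) groups block positions by anti-diagonal and alternates the walking direction between odd and even diagonals:

```python
def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in JPEG zig-zag order.
    Positions are grouped by anti-diagonal i + j; odd diagonals are
    walked top-right to bottom-left, even diagonals the reverse."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1],
                                  ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

def zigzag_scan(block):
    """Flatten a quantized block into the zig-zag sequence, so the DC
    and low-frequency coefficients come before high-frequency ones."""
    return [block[i][j] for (i, j) in zigzag_indices(len(block))]
```

Runs of trailing zeros produced by this ordering are what the entropy coder then compresses efficiently.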
B. Segmentation

Let D be a matrix of order m x n representing an image of width m and height n. The domain of D_{i,j} is [0..255] for any i = 1..m and j = 1..n.

The architecture of segmentation using the histogram is shown in figure 4. To make the process faster, the high-resolution input image is down-sampled twice. Each time the image is down-sampled, its dimension is reduced by half, so the final down-sampled image (D) has dimension m/4 x n/4. The down-sampled image is smoothed to obtain a smoothed gray-scale image using equation (1):

\[
S_{i,j} = \tfrac{1}{9}\big( D_{i-1,j-1} + D_{i-1,j} + D_{i-1,j+1} + D_{i,j-1} + D_{i,j} + D_{i,j+1} + D_{i+1,j-1} + D_{i+1,j} + D_{i+1,j+1} \big) \quad (1)
\]
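Equation (1) is a 3 x 3 box average; a minimal NumPy sketch follows (border pixels are simply left unchanged here, a boundary choice the paper does not specify):

```python
import numpy as np

def smooth(d):
    """3x3 neighbourhood mean of equation (1), applied to interior pixels."""
    d = np.asarray(d, dtype=float)
    s = d.copy()
    h, w = d.shape
    acc = np.zeros((h - 2, w - 2))
    # Accumulate the nine shifted copies of the interior window.
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += d[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
    s[1:-1, 1:-1] = acc / 9.0
    return s
```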
Figure 2. JPEG Encoder Block Diagram (source image → 8 x 8 blocks → FDCT → Quantizer → Entropy Encoder → compressed image data, using the Quantizer Table and Huffman Table)

Figure 3. JPEG Decoder Block Diagram (compressed image data → Entropy Decoder → Dequantizer → IDCT → reconstructed image, using the Quantizer Table and Huffman Table)

Figure 4. Classifier (down-sample 2 times (D) → smooth the image (S) → histogram computation (H) → range calculation (L & U) → binary segmentation (B) → up-sample 2 times)

The histogram (H) is computed for the gray-scale image (S). The most frequently occurring gray-scale value (Mh) is determined from the histogram by equation (2), and is indicated by a line in figure 5:

\[ M_h = \arg\max_x H(x) \quad (2) \]

In the case of a homogeneous background, the background value of the image has the highest frequency. In order to accommodate background textures, a range of gray-level values is considered for segmentation. The range is computed using equations (3) and (4):

\[ L = \max(M_h - 30,\, 0) \quad (3) \]
\[ U = \min(M_h + 30,\, 255) \quad (4) \]

The gray-scale image S is segmented to detect the background area of the image using the function given in equation (5):

\[ B_{i,j} = (S_{i,j} > L) \text{ and } (S_{i,j} < U) \quad (5) \]

Figure 5

After this processing, pixel values in the background area are 1 in the binary image B. To avoid the problem of over-segmentation, the binary image is subjected to a sequence of morphological operations. The binary image is eroded with a small circular structural element (SE) to remove smaller segments, as given in equation (6):

\[ B = B \ominus SE \quad (6) \]
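Equations (2)-(5) amount to a few lines of NumPy. The sketch below is an illustration under the assumption of 8-bit gray levels; the erosion of equation (6) is omitted, since the paper does not fully specify the circular structuring element.

```python
import numpy as np

def segment_background(s, delta=30):
    """Histogram-based background mask, equations (2)-(5): find the
    modal gray level Mh, clamp the range [L, U] to [0, 255], and mark
    pixels strictly inside the range as background (True)."""
    hist = np.bincount(s.ravel().astype(int), minlength=256)  # histogram H
    mh = int(np.argmax(hist))     # equation (2): modal gray level
    lo = max(mh - delta, 0)       # equation (3)
    hi = min(mh + delta, 255)     # equation (4)
    return (s > lo) & (s < hi)    # equation (5)
```

The delta of 30 gray levels matches the constants in equations (3) and (4); widening it would absorb more background texture into the mask.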