Image Compression

7/14/2012

Reference
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing.


Objective
• Reduce the number of bytes required to represent a digital image
  – Reduce redundant data
  – Remove patterns
  – Uncorrelated data confirms that the redundant data has been eliminated
• Autocorrelation?

Enabling Technology
• Compression is used in
  – FAX
  – RPV
  – Teleconference
  – Remote demo
  – etc.

Review
• What data redundancy is and how to exploit it
• Model-based approach to compression
• Information theory principles
• Types of compression
  – Lossless, lossy

Information Recovery
Data → Processing → Information
• We want to recover the information with reduced data volumes.
• Reduce data redundancy.
• How to measure the data redundancy?

Relative Data Redundancy
• Assume that we have two data sets D1 and D2.
  – Both, on processing, yield the same information.
  – Let n1 and n2 be the info-carrying units of the respective data sets.
  – Relative data redundancy is defined by comparing the relative data set sizes:
    RD = 1 – 1/CR
    where CR is the compression ratio, CR = n1 / n2

Examples
RD = 1 – 1/CR,  CR = n1 / n2
• D1 is the original and D2 is the compressed data set.
• When CR = 1, i.e. n1 = n2, then RD = 0: no data redundancy relative to D1.
• When CR = 10, i.e. n1 = 10 n2, then RD = 0.9, which implies that 90% of the data in D1 is redundant.
• What does it mean if n1 << n2?
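The arithmetic above can be sketched directly; the data-set sizes below are illustrative byte counts, not values taken from [1]:

```python
def compression_ratio(n1, n2):
    """CR = n1 / n2, where n1 and n2 are the sizes (in info-carrying
    units, e.g. bytes) of the original and compressed data sets."""
    return n1 / n2

def relative_redundancy(cr):
    """RD = 1 - 1/CR."""
    return 1.0 - 1.0 / cr

cr = compression_ratio(10, 1)    # n1 = 10 * n2
rd = relative_redundancy(cr)
print(cr, rd)                    # 10.0 0.9 -> 90% of D1 is redundant
```

Note that CR = 1 gives RD = 0 (no redundancy), and a CR below 1 (n1 << n2) makes RD negative: the "compressed" set is larger than the original.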

Types of Data Redundancy
• Coding
• Interpixel
• Psychovisual

Coding Redundancy
• How to assign codes to an alphabet
• In digital image processing
  – Code = gray-level value or color value
  – "Alphabet" is used conceptually
• General approach
  – Find the more frequently used alphabet symbols
  – Use fewer bits to represent the more frequently used symbols, and use more bits for the less frequently used symbols

Coding Redundancy 2
• Focus on gray-value images
• Histogram shows the frequency of occurrence of a particular gray level
• Normalize the histogram and convert to a pdf representation
  – Let rk be the random variable, k = 0, 1, 2, …, L-1, where L is the number of gray-level values
    pr(rk) = nk / n
    l(rk) = number of bits to represent rk
    Lavg = Σk=0 to L-1 l(rk) pr(rk) = average number of bits to encode one pixel
• For an M x N image, the number of bits required is M N Lavg
• For fixed-length 8-bit codes, l(rk) = 8 and Lavg = 8.
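The Lavg computation above can be sketched as follows; the 8-level histogram counts and code lengths here are made up for illustration, not the table from [1]:

```python
# L_avg = sum_k l(r_k) * p_r(r_k): average bits per pixel under a
# variable-length code that gives frequent gray levels shorter codes.
counts = [40, 30, 10, 8, 6, 3, 2, 1]   # hypothetical n_k for gray levels r_0..r_7
lengths = [1, 2, 3, 4, 5, 6, 7, 7]     # hypothetical l(r_k); shorter for frequent levels

n = sum(counts)                        # total number of pixels
probs = [nk / n for nk in counts]      # p_r(r_k) = n_k / n
l_avg = sum(l * p for l, p in zip(lengths, probs))
print(l_avg)                           # 2.31 bits/pixel vs 3 bits for a fixed-length code
```

With L = 8 gray levels, a fixed-length code needs 3 bits per pixel, so this hypothetical variable-length code already removes some coding redundancy.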

Fixed vs Variable Length Codes
From [1]
Lavg = 2.7
CR = 3/2.7 = 1.11
RD = 1 – 1/1.11 = 0.099

Code Assignment View
From [1]

Interpixel Redundancy
From [1]

Run Length Coding
From [1]
CR = (1024 × 343) / (12166 × 11) = 2.63
RD = 1 – 1/2.63 = 0.62
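The idea behind the numbers above is that each scan line collapses to a list of (value, run) pairs. A minimal sketch of run-length coding on one binary scan line (the line contents are illustrative, not data from [1]):

```python
def rle_encode(pixels):
    """Run-length encode a 1-D pixel sequence as (value, run-length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, r) for v, r in runs]

def rle_decode(runs):
    """Expand (value, run-length) pairs back into the pixel sequence."""
    return [v for v, r in runs for _ in range(r)]

row = [0] * 6 + [255] * 3 + [0] * 7   # one hypothetical binary scan line
enc = rle_encode(row)
print(enc)                            # [(0, 6), (255, 3), (0, 7)]
assert rle_decode(enc) == row         # lossless round trip
```

The coding is lossless; it pays off exactly when runs are long, which is why it suits binary documents such as the FAX example above.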

Psychovisual Redundancy
• Some visual characteristics are less important than others.
• In general, observers seek out certain characteristics – edges, textures, etc. – and then mentally combine them to recognize the scene.

From [1]

From [1]

Fidelity Criteria
• Subjective
• Objective
  – Sum of the absolute error
  – RMS value of the error
  – Signal-to-noise ratio
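The objective criteria above can be sketched for 1-D signals; the SNR form used here (decompressed signal power over error power, in dB) is one common convention, and the sample values are illustrative:

```python
import math

def rms_error(f, g):
    """Root-mean-square error between original f and reconstruction g."""
    n = len(f)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)) / n)

def snr_db(f, g):
    """Mean-square signal-to-noise ratio in dB:
    10 log10( sum g^2 / sum (g - f)^2 )."""
    signal = sum(b * b for b in g)
    noise = sum((b - a) ** 2 for a, b in zip(f, g))
    return 10.0 * math.log10(signal / noise)

f = [100, 102, 98, 101]   # hypothetical original pixel values
g = [99, 103, 97, 100]    # hypothetical decompressed values
print(rms_error(f, g))    # 1.0
```

For images, the same sums run over all M × N pixels; a higher SNR (or lower RMS error) means the lossy reconstruction is objectively closer to the original.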

Subjective Scale
From [1]

Image Compression Model
From [1]
Examples: run length, JPEG, Huffman
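Huffman coding is the classic realization of the variable-length coding idea from the earlier slides: build a code table from symbol frequencies so that frequent symbols get shorter codewords. A minimal sketch (not the tables real codecs like baseline JPEG ship with):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol sequence."""
    freq = Counter(symbols)
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {s: "0" for s in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# frequent 'a' gets a shorter codeword than rare 'c'
assert len(codes["a"]) < len(codes["c"])
```

Because every merge pairs the two least-frequent subtrees, the resulting prefix-free code minimizes Lavg for the given symbol probabilities.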
