
IMAGE COMPRESSION BY RETAINING IMAGE QUALITY

Ajayjith N. S. 18BCE2332 ajayjithn.sreejith2018@vitstudent.ac.in, Ch Sriadithya 19BCE0711 sri.adithyasivasa2019@vitstudent.ac.in, K Monish 19BCE0441 mohnishk.2019@vitstudent.ac.in

1. ABSTRACT

Data compression is also called source coding. It is the process of encoding information using fewer bits than an uncoded representation by making use of specific encoding schemes. Compression is a technology for reducing the quantity of data used to represent any content without excessively reducing the quality of that data; it reduces the number of bits required to store and/or transmit digital media and makes storing large amounts of data easier.

Images are very important documents nowadays; in some applications they need to be compressed, more or less depending on the purpose of the application. The need for an efficient image-compression technique is ever increasing, because raw images need large amounts of disk space, which is a big disadvantage during transmission and storage. The techniques we use in this paper are Huffman encoding and run-length encoding. We have analysed the Huffman algorithm and the run-length algorithm for image compression, together with a few other compression methods for comparison purposes.

2. KEYWORDS:

Image compression, Huffman algorithm, run-length algorithm, entropy encoding, frequency, prefix-free code

3. INTRODUCTION

Compression refers to reducing the quantity of data used to represent a file, image or video without excessively reducing the quality of the original data. Image compression is the application of data compression to digital images; the objective is to reduce redundancy in the image data so that the data can be stored or transmitted in an efficient form. Data compression has become a requirement for most applications in areas such as computer science, information technology, communications and medicine. In computer science, data compression is defined as the science, or the art, of representing information in a compact form. To compress something means to take a piece of data and decrease its size; there are different techniques for doing this, each with its own advantages and disadvantages.

Huffman coding is a lossless data compression technique. It is based on the frequency of occurrence of a data item, i.e. a pixel in images: the more frequently a value occurs, the fewer bits are used in its binary code. Compression occurs when a single code is output instead of a string of characters. Comparing various compression algorithms helps us choose a suitable technique for a given application.

LITERATURE SURVEY

Survey paper: Distributed Vector Processing of a New Local MultiScale Fourier Transform for Medical Imaging Applications
Problem statement & solution: The ST has been shown to be useful in a variety of medical image processing tasks. Unfortunately, research is difficult and the clinical relevance of techniques is limited because of the long processing times. The authors demonstrate a method for vector computation of the ST, resulting in a speed improvement of 6.8 times for a 256×256 pixel image, and a method for distributed computation of the ST that scales acceptably with increasing cluster size.

Survey paper: A Distributed Image Processing Support System Application to Medical Imaging
Problem statement & solution: Currently, they supervise MatLab programs which are slow and take considerable CPU time to execute: approximately seven minutes for small images (256×256) and forty-five minutes for larger ones (512×512).

Survey paper: A Multi-Layered Development Framework for Medical Imaging Applications
Problem statement & solution: Choosing open solutions in the development allowed the whole set of libraries, applications and prototypes to be easily adapted to run on different operating systems and processor architectures.

Survey paper: Texture-preserved low-dose CT reconstruction using region recognizable patch-priors from previous normal-dose CT images
Problem statement & solution: The authors propose to learn the priors from the clustered patches of the previous normal-dose CT image; these priors depict the edge or structure information adaptively.

Survey paper: Medical image processing using brain emulation
Problem statement & solution: The implementation of the Del Viva filter on brain MRI demonstrated great potential to enhance salient features related to the shape of the grey matter. A software implementation has been developed and used for explorative purposes.

4. HUFFMAN ALGORITHM

Huffman coding is an entropy encoding algorithm used for lossless data compression in computer science and information theory. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the table has been derived in a particular way based on the estimated probability of occurrence of each possible value of the source symbol. Huffman coding uses a specific method for choosing the representation of each symbol, resulting in a prefix-free code (that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common symbols using shorter strings of bits than are used for less common symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code. A method was later found to do this in linear time if the input probabilities (also known as weights) are sorted.

For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple fixed-length binary block encoding, e.g. ASCII coding.

A. Basic Technique

- Scan the text for symbols (e.g. 1-byte characters) and calculate their frequency of occurrence. Each symbol value with its count of occurrences is a single leaf.
- Start a loop:
- Find the two smallest-probability nodes and combine them into a single node.
- Remove those two nodes from the list and insert the combined one.
- Repeat the loop until the list has only a single node.
- This last single node represents the Huffman tree.

5. RUN-LENGTH ALGORITHM

Run-length encoding (RLE) is a form of lossless data compression in which runs of data (sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most efficient on data that contains many such runs, for example simple graphic images such as icons, line drawings, Conway's Game of Life, and animations. For files that do not have many runs, RLE could actually increase the file size.

RLE may also refer to an early graphics file format supported by CompuServe for compressing black-and-white images, which was widely supplanted by their later Graphics Interchange Format (GIF). RLE also refers to a little-used image format in Windows 3.x, with the extension .rle, which is a run-length encoded bitmap used to compress the Windows 3.x startup screen.
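As a rough illustration of the basic technique described in Section 4 (a sketch, not the implementation used in this project), the tree construction can be written in Python with a min-heap holding the node list:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a prefix-free code table: count symbol frequencies, then
    repeatedly merge the two lowest-frequency nodes until one tree remains."""
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two smallest-probability nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick, (left, right)))
        tick += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record its bit string
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes(b"aaaabbc")
# 'a' is the most frequent symbol, so it receives the shortest code,
# and no code is a prefix of another (the prefix-free property).
```

Note the tie-breaker counter in each heap entry: it keeps the tuple comparison away from the tree objects, which are not comparable.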
Fig 1: Huffman algorithm

Fig 2: Run-Length Algorithm (RLA) or Run-Length Encoding (RLE)
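As an illustration of the idea (again a sketch in Python, not the MATLAB code used in this project), run-length encoding of a row of pixel intensities might look like:

```python
def rle_encode(data):
    """Store each run (a maximal sequence of equal values) as a
    (value, count) pair instead of repeating the value."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([value, 1])     # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

pixels = [255] * 6 + [0] * 3 + [255]
encoded = rle_encode(pixels)            # [(255, 6), (0, 3), (255, 1)]
assert rle_decode(encoded) == pixels    # lossless round trip
```

On this row the ten input values shrink to three (value, count) pairs; on data with few runs the pairs would outnumber the inputs, which is why RLE can increase file size.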
The run-length method is applied to an image as follows:

Step 1: Input the coloured source image file.

Step 2: Find the size of the source image with the statement
[row,col,dim]=size(I);

Step 3: Read pixel values starting from the first pixel of the source image with the statement
X=impixel(I,i,j);
where i = row, j = column, and I = image.

Step 4: Read the next pixel value. If the current pixel is the end of the image, exit the loop; otherwise:
(i) If the next pixel value is the same as the previous one, count = count + 1;
(ii) Else, if the next pixel value differs from the previous one, save the new pixel value in the array.

Step 5: Read and count all the pixel values.

Step 6: Go to Step 4 until all pixels are read.

Step 7: Display the result array with intensity values.

6. EXISTING SYSTEM:

Compression refers to reducing the quantity of data used to represent a file, image or video without excessively reducing the quality of the original data. The existing system uses the discrete cosine transform and normal extraction to reduce the number of bits required to store and/or transmit digital media. To compress something means to take a piece of data and decrease its size; there are different techniques for doing this, each with its own advantages and disadvantages.

7. Block Diagram

Fig 3: Block Diagram

8. Evaluation - Quality

9. Evaluation - File size

10. Output

11. Conclusion

In this project we used the run-length and Huffman techniques to compress images without significant loss of data. We report the MSE and PSNR values, which reflect quality, and the compression ratio (CR), which reflects file size. This approach is mostly used for medical image compression, where the loss of information must be minimal. The proposed compression method is lossless: both the Huffman and RLE methods are lossless, which is also useful in text compression, since losing a single character can in the worst case make the text dangerously misleading. This means the compression ratio increases without losing information. When the compressed file contains long sequences of frequently repeated symbols, such runs have a strong effect on the compression ratio and the proposed method gives better results. When the RLE method is applied together with the Huffman algorithm, if it does not decrease the file size it will not increase it either, which indicates that the RLE method has a strong positive effect on the Huffman method when the two are applied together.
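For reference, the MSE, PSNR and CR figures mentioned above can be computed as follows. This is an illustrative sketch over flat lists of pixel values, not the evaluation code used in the project:

```python
import math

def mse(original, reconstructed):
    """Mean squared error between two equal-length pixel sequences."""
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB; infinite when the images match,
    which is what a lossless codec such as Huffman or RLE should give."""
    e = mse(original, reconstructed)
    return math.inf if e == 0 else 10 * math.log10(peak ** 2 / e)

def compression_ratio(original_bits, compressed_bits):
    """CR = uncompressed size / compressed size (higher is better)."""
    return original_bits / compressed_bits

img = [255, 255, 255, 0, 0, 0, 0, 0]
assert mse(img, img) == 0
assert psnr(img, img) == math.inf       # lossless: identical reconstruction
# e.g. 8 pixels at 8 bits = 64 bits raw; if a codec emitted 16 bits:
print(compression_ratio(64, 16))        # 4.0
```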
12. References

[1] Darrel Hankersson, Greg A. Harris, and Peter D. Johnson Jr., Introduction to Information Theory and Data Compression. CRC Press, 1997.

[2] Gilbert Held and Thomas R. Marshall, Data and Image Compression: Tools and Techniques.

[3] Terry Welch, "A Technique for High-Performance Data Compression", Computer, June 1984.

[4] Dzung Tien Hoang and Jeffrey Scott Vitter, Fast and Efficient Algorithms for Video Compression and Rate Control, June 20, 1998.

[5] HowStuffWorks, "How File Compression Works".

[6] David Salomon, "Huffman Coding".

[7] A. W. Berger et al., "A Hybrid Coding Strategy for Optimized Test Data Compression", University of Innsbruck, Austria, Proceedings of the IEEE International Test Conference, Charlotte, NC, USA, September 30 – October 2, 2003.
