
By Prof (Dr.) P P Ghadekar


OUTLINE
On completion of this topic, students will be able to understand:
• Basic concepts of image compression
• Run-length coding
• Entropy coding
• Huffman coding
• Arithmetic coding
• Predictive coding (delta modulation and DPCM)



WHAT IS IMAGE PROCESSING?



DEFINITION OF IMAGE AND VIDEO COMPRESSION
Image and video data compression refers to a process in which the amount of data used to represent an image or video is reduced to meet a bit-rate requirement (below, or at most equal to, the maximum available bit rate), such that:
• the quality of the reconstructed image or video satisfies a requirement for a certain application, and
• the complexity of the computation involved is affordable for the application.
The basis of the compression process is the removal of redundant data.


WHAT IS THE NEED FOR COMPRESSION?
• In terms of storage, the capacity of a storage device can be effectively increased.
• In terms of communications, the bandwidth of a digital communication link can be effectively increased.
• At any given time, the ability of the internet to transfer data is fixed. Thus, if data can be compressed wherever possible, a significant improvement in data throughput can be achieved.
• Many files can be combined into one compressed document, making sending easier.



NEED FOR COMPRESSION
• To minimize the storage space.
• To enable a higher rate of data transfer.
• There is still a need to develop newer and faster algorithms adapted to image data.



NEED FOR IMAGE COMPRESSION



FUNCTIONALITY IN VISUAL TRANSMISSION AND STORAGE

Figure: image and video compression for visual transmission and storage.



TYPES OF COMPRESSION
• There are two types of compression:
• Lossless compression recovers the exact original data after decompression. It is used mainly for compressing database records, spreadsheet or word-processing files, and medical imaging, where exact replication of the original is essential.
• Lossy compression accepts a certain loss of accuracy in exchange for a substantial increase in compression. It is more effective when used to compress graphic images and digitized voice, where losses outside visual or aural perception can be tolerated.


HOW TO ACHIEVE COMPRESSION
• Compression is achieved by minimizing the redundancy in the image.
• Types of redundancy (mainly two):
• Statistical redundancy: interpixel redundancy and coding redundancy.
• Interpixel redundancy: spatial redundancy and temporal redundancy.
• Psychovisual redundancy.



IMAGE COMPRESSION TECHNIQUES



COMMUNICATION/STORAGE WITH LIMITED BANDWIDTH





TYPES OF COMPRESSION: LOSSLESS
• Perfectly reproduces the image at the decoder (MSE = 0, PSNR = ∞).
• The reduction in bit rate is usually limited.
• Cannot guarantee a fixed rate.
• Applications: fax machines, medical images.



LOSSLESS COMPRESSION METHODS
• Run-length encoding (RLE)
• Arithmetic coding
• Huffman coding
• LZ and LZW coding
• Predictive coding
• Variable-length coding, etc.



TYPES OF COMPRESSION: LOSSY
• Idea: some deviation of the decompressed image from the original is often acceptable.
• The human visual system might not perceive the loss, or may tolerate it. Thus, we can omit irrelevant details that humans cannot perceive.
• There is no need to represent more than the visible resolution in:
• space
• time
• brightness
• color



EXPLOITING LIMITS OF COLOR VISION



COMPRESSION: LOSSLESS & LOSSY
• Lossy compression:
• Does not perfectly reproduce the image at the decoder (MSE > 0).
• Gives much higher compression than lossless compression.
• Can guarantee a fixed rate.
• It is used widely for natural images (e.g. JPEG) and videos (e.g. MPEG).



EXPLOITING REDUNDANCY



RUN LENGTH CODING
• Run-length coding is a very widely used and simple compression technique which does not assume a memoryless source.
• We replace runs of symbols (possibly of length one) with (run-length, symbol) pairs.
• For images, the maximum run length is the size of a row.
• Run-length coding (RLC) is effective when long sequences of the same symbol occur.
• Run-length coding exploits spatial redundancy by coding the number of symbols in a run.



RUN LENGTH CODING
• The term run indicates the repetition of a symbol, while the term run length represents the number of repeated symbols.
• Run-length coding maps a sequence of numbers into a sequence of (run, value) pairs.
• Images with large areas of constant shade are good candidates for this kind of compression.
• It is used in the Windows bitmap file format.
• Run-length coding can be classified into:
I) 1-D run-length coding
II) 2-D run-length coding


RUN LENGTH CODING
I) 1-D run-length coding:
In 1-D run-length coding, each scan line is encoded independently. 1-D RLC utilizes only the horizontal correlation between pixels on the same scan line, whereas 2-D RLC utilizes both horizontal and vertical correlation between pixels.



RUN LENGTH CODING
• 2-D run-length coding:
• 1-D run-length coding utilizes the correlation between pixels within a scan line.
• In order to also utilize the correlation between pixels in neighbouring scan lines and achieve higher coding efficiency, 2-D run-length coding was developed.
• In RLC, two values are transmitted: the first value indicates the number of times a particular symbol occurs, and the second value indicates the actual symbol. See the sketch below.

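A minimal 1-D run-length codec sketch in Python (the function names are illustrative, not from any library); it encodes one scan line into (run-length, symbol) pairs and reconstructs it exactly:

def rle_encode(row):
    """Map a scan line to (run_length, symbol) pairs."""
    pairs = []
    run, prev = 1, row[0]
    for sym in row[1:]:
        if sym == prev:
            run += 1                      # extend the current run
        else:
            pairs.append((run, prev))     # close the run, start a new one
            run, prev = 1, sym
    pairs.append((run, prev))
    return pairs

def rle_decode(pairs):
    """Expand (run_length, symbol) pairs back to the original scan line."""
    out = []
    for run, sym in pairs:
        out.extend([sym] * run)
    return out

row = [255, 255, 255, 0, 0, 255]          # a scan line with constant-shade runs
pairs = rle_encode(row)                   # [(3, 255), (2, 0), (1, 255)]
assert rle_decode(pairs) == row           # lossless: exact reconstruction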




RUN LENGTH LIMITATIONS
• If the data contains few long runs (e.g. noisy images), the (run, value) pairs can take more space than the original data, i.e. run-length coding can expand rather than compress.





ENTROPY
• In information theory, entropy is a measure of the uncertainty associated with a random variable.
• The term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message.
• Entropy is typically measured in bits.
• Entropy, in an information sense, is a measure of unpredictability.
• A well-compressed message is highly unpredictable, because there is no redundancy; each bit of the message communicates a unique bit of information.
• Entropy is a measure of how much information the message contains.

ENTROPY

E = entropy(I)

returns E, a scalar value representing the entropy of the grayscale image I.

Entropy is a statistical measure of randomness that can be used to characterize the texture of the input image. It is defined as

E = -\sum_k p_k \log_b(p_k)

where p contains the normalized histogram counts returned from imhist, and b is the base of the logarithm. Common values of b are 2 and 10.

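For reference, a hedged Python equivalent of the computation above (histogram → probabilities → Shannon entropy); the function name image_entropy is illustrative:

import numpy as np

def image_entropy(img, bins=256, base=2):
    """Shannon entropy of a grayscale image, computed from its histogram."""
    counts, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = counts / counts.sum()              # histogram counts -> probabilities
    p = p[p > 0]                           # convention: 0 * log(0) = 0
    return -np.sum(p * np.log(p) / np.log(base))

img = np.random.randint(0, 256, size=(64, 64))   # synthetic grayscale image
print(image_entropy(img))                        # near 8 bits for uniform noise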


A SIMPLE EXAMPLE
• Suppose we have a message of 10 symbols drawn from a 5-symbol alphabet, e.g. [►♣♣♠☻►♣☼►☻]
• How can we code this message using 0/1 so that the coded message has minimum length (for transmission or storage)?
• 5 distinct symbols → at least 3 bits per symbol for a fixed-length code.
• With this simple encoding, the length of the coded message is 10 × 3 = 30 bits.



A SIMPLE EXAMPLE – CONT.
• Intuition: symbols that occur more frequently should have shorter codes; since the code lengths then differ, there must be a way of telling where each code word ends (see the prefix property below).
• With a Huffman code, the length of the encoded message ►♣♣♠☻►♣☼►☻ is 3×2 + 3×2 + 2×2 + 3 + 3 = 22 bits.





HUFFMAN CODING
• Huffman codes are optimal codes that map each symbol to one code word.
• In Huffman coding, it is assumed that each pixel intensity has a certain probability of occurrence associated with it, and that this probability is spatially invariant.
• Huffman coding assigns a binary code to each intensity value, with shorter codes going to intensities with higher probability.
• If the probabilities can be estimated, then the table of Huffman codes can be fixed in both the encoder and the decoder.



HUFFMAN CODING
• The parameters involved in Huffman coding are: entropy, average length, efficiency, and variance.
• Prefix code: a code is a prefix code if no code word is the prefix of another code word. The main advantage of a prefix code is that it is uniquely decodable. Example: the Huffman code.



HUFFMAN CODING ALGORITHM
• Initialization: put all symbols on a list sorted according to their frequency counts.
• From the list, pick the two symbols with the lowest frequency counts. Form a Huffman subtree that has these two symbols as child nodes and create a parent node.
• Assign the sum of the children's frequency counts to the parent and insert it into the list such that the order is maintained.
• Delete the children from the list.
• Repeat until the list has only one symbol left.
• Assign a code word to each leaf based on the path from the root. (A sketch of this procedure follows below.)

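A compact Python sketch of the algorithm above, using a min-heap as the sorted list; instead of building an explicit tree, each heap entry carries the code table for its subtree (the tie counter and this table-carrying trick are implementation details, not part of the classic description):

import heapq
from collections import Counter

def huffman_codes(message):
    """Build a Huffman code table {symbol: bit string} from frequency counts."""
    freq = Counter(message)
    # Heap entry: (count, tiebreaker, partial code table for that subtree).
    heap = [(c, i, {s: ""}) for i, (s, c) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, t1 = heapq.heappop(heap)      # two lowest-frequency subtrees
        c2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in t1.items()}        # left child
        merged.update({s: "1" + code for s, code in t2.items()})  # right child
        heapq.heappush(heap, (c1 + c2, tie, merged))  # parent holds summed count
        tie += 1
    return heap[0][2]

msg = "AAABBC"
codes = huffman_codes(msg)                 # 'A' (most frequent) gets the shortest code
encoded = "".join(codes[s] for s in msg)   # 3*1 + 2*2 + 1*2 = 9 bits total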


PROPERTIES OF HUFFMAN CODES
• No Huffman code word is the prefix of any other Huffman code word, so decoding is unambiguous.
• The Huffman coding technique is optimal (but we must know the probability of each symbol for this to be true).
• Symbols that occur more frequently have shorter Huffman codes.



WHAT IS A TREE?

A tree consists of:
• a set of nodes
• a set of edges, each of which connects a pair of nodes.
The node at the “top” of the tree is called the root of the tree.



RELATIONSHIPS BETWEEN NODES

• If a node N is connected to other nodes that are directly below it in the tree, N is referred to as their parent and they are referred to as its children.
• Example: node 5 is the parent of nodes 10, 11, and 12.
• Each node is the child of at most one parent.



TYPES OF NODES

• A leaf node is a node without children.
• An interior node is a node with one or more children.



HUFFMAN CODING EXAMPLE


HUFFMAN CODING
• Variants:
• In extended Huffman coding we group the source symbols into blocks of k symbols, giving an extended alphabet of n^k symbols. This leads to somewhat better compression.
• In adaptive Huffman coding we don’t assume that we know the exact probabilities: start with an estimate and update the tree as we encode/decode.
• Arithmetic coding is a newer (and more complicated) alternative which usually performs better.


ARITHMETIC CODING
• Unlike the variable-length codes described previously, arithmetic coding generates non-block codes.
• In arithmetic coding, a one-to-one correspondence between source symbols and code words does not exist.
• Instead, an entire sequence of source symbols (a message) is assigned a single arithmetic code word.
• The code word itself defines an interval of real numbers between 0 and 1. As the number of symbols in the message increases, the interval used to represent it becomes smaller, and the number of information units (say, bits) required to represent the interval becomes larger.
• Each symbol of the message reduces the size of the interval in accordance with its probability of occurrence. The code length is supposed to approach the limit set by the entropy.


ARITHMETIC CODING
• In arithmetic coding, the interval from zero to one is divided according to the probabilities of occurrence of the intensities.
• Arithmetic coding does not generate individual codes for each character but performs an arithmetic operation on a block of data, based on the probabilities of the next character.
• Using arithmetic coding it is possible to encode characters with a fractional number of bits, thus approaching the theoretical optimum.
• Arithmetic coding performs very well for sequences with low entropy, where Huffman codes lose their efficiency.



ARITHMETIC CODING
• The formulas used to update the low and high ends of the current interval for each symbol are:

low_new  = low + range × low_range(symbol)
high_new = low + range × high_range(symbol)

where range = high − low, and low_range(symbol) and high_range(symbol) are the lower and upper bounds of the symbol's sub-interval of [0, 1).
• The tag (the number actually transmitted) is taken as the midpoint of the final interval.



ARITHMETIC CODING - EXAMPLE

• Let the message to be encoded be a1a2a3a3a4, with symbol probabilities p(a1) = p(a2) = p(a4) = 0.2 and p(a3) = 0.4, i.e. sub-intervals a1: [0, 0.2), a2: [0.2, 0.4), a3: [0.4, 0.8), a4: [0.8, 1.0).





ARITHMETIC CODING
• So any number in the interval [0.06752, 0.0688), for example 0.068, can be used to represent the message.
• Here three decimal digits are used to represent the five-symbol source message. This translates into 3/5 = 0.6 decimal digits per source symbol, which compares favorably with the entropy of
-(3 \times 0.2\log_{10}0.2 + 0.4\log_{10}0.4) = 0.5786 digits per symbol.



ARITHMETIC CODING
• As the length of the sequence increases, the resulting arithmetic code approaches the bound set by the entropy.
• In practice, the code length fails to reach this lower bound because of:
• the addition of the end-of-message indicator that is needed to separate one message from another, and
• the use of finite-precision arithmetic.



ARITHMETIC CODING
Decoding: decode the code word 0.572.
Since 0.4 < 0.572 < 0.8, the first symbol must be a3. Rescaling to the chosen sub-interval at each step and locating 0.572 again gives the successive interval boundaries:

Boundary     Step 1    Step 2    Step 3    Step 4    Step 5
(a4 upper)   1.0       0.8       0.72      0.592     0.5728
a3 / a4      0.8       0.72      0.688     0.5856    0.57152
a2 / a3      0.4       0.56      0.624     0.5728    0.56896
a1 / a2      0.2       0.48      0.592     0.5664    0.56768
(a1 lower)   0.0       0.4       0.56      0.56      0.5664

Therefore, the decoded message is a3a3a1a2a4.

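The whole example can be checked with a short Python sketch of floating-point arithmetic coding (a simplification that ignores the finite-precision issues noted above; function names are illustrative):

def arith_encode(message, intervals):
    """Shrink [low, high) by each symbol's sub-interval; return a tag inside it."""
    low, high = 0.0, 1.0
    for sym in message:
        rng = high - low
        lo_r, hi_r = intervals[sym]
        low, high = low + rng * lo_r, low + rng * hi_r
    return (low + high) / 2                # any number in [low, high) will do

def arith_decode(tag, intervals, n):
    """Recover n symbols by locating the tag in successive sub-intervals."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        rng = high - low
        for sym, (lo_r, hi_r) in intervals.items():
            if low + rng * lo_r <= tag < low + rng * hi_r:
                out.append(sym)
                low, high = low + rng * lo_r, low + rng * hi_r
                break
    return out

# Sub-intervals from the example: p(a3) = 0.4, the others 0.2 each.
intervals = {"a1": (0.0, 0.2), "a2": (0.2, 0.4), "a3": (0.4, 0.8), "a4": (0.8, 1.0)}
tag = arith_encode(["a1", "a2", "a3", "a3", "a4"], intervals)  # in [0.06752, 0.0688)
print(arith_decode(tag, intervals, 5))     # ['a1', 'a2', 'a3', 'a3', 'a4']
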
FEASIBILITY OF IMAGE AND VIDEO COMPRESSION

• Interpixel redundancy / spatial redundancy: statistical correlation among pixels within an image frame (intraframe redundancy).

Figure: a picture, “Boy and Girl”.



INTERPIXEL REDUNDANCY / SPATIAL REDUNDANCY

• It is not necessary to represent each pixel in an image frame independently. Instead, one can predict a pixel from its neighbors.
• By removing a large amount of the redundancy within an image frame, we can save a lot of data in representing the frame, thus achieving data compression.



TEMPORAL REDUNDANCY
• Statistical correlation between pixels from successive frames: interframe redundancy.

Figure: (a) 21st and (b) 22nd frames of the Miss America sequence.



TEMPORAL REDUNDANCY
• For a videophone-like signal with moderate motion in the scene, on average fewer than 10% of the pixels change their gray values between two consecutive frames by more than 1% of the peak signal. A plain frame difference is therefore mostly zeros (see the sketch below).
• Removing a large amount of temporal redundancy yields a great deal of data compression: motion-compensated (MC) predictive coding.

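A tiny numpy illustration of interframe redundancy (the frames are synthetic; motion-compensated prediction itself is beyond this sketch):

import numpy as np

prev = np.random.randint(0, 255, size=(8, 8))   # frame k, synthetic content
curr = prev.copy()
curr[2, 3] += 1                                  # only one pixel changes in frame k+1
residual = curr - prev                           # mostly zeros: cheap to entropy-code
print(np.count_nonzero(residual), "of", residual.size, "pixels changed")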


PSYCHOVISUAL REDUNDANCY
• Originates from the characteristics of the human visual system (HVS).
• Visual information is not perceived equally by the HVS.
• If less data is used to represent the less important visual information, perception will not be affected.
• In this sense, some visual information is psychovisually redundant.
• Eliminating psychovisual redundancy leads to data compression.



PREDICTIVE CODING
• Predictive coding involves predicting the current pixel value based on the values of the previously processed pixels.
• Usually the difference between the predicted value and the actual value is transmitted.
• The receiver makes the same prediction as the transmitter and then adds the difference signal in order to reconstruct the original value.



DELTA ENCODING
• The term ‘delta encoding’ refers to techniques that store data as the difference between successive samples rather than storing the samples themselves.
• Delta encoding is effective if there is only a small change between adjacent sample values.
• Delta encoding techniques can be broadly classified into:
I) Delta modulation (DM)
II) Adaptive delta modulation (ADM)
III) Differential pulse-code modulation (DPCM)
IV) Adaptive differential pulse-code modulation (ADPCM)



PULSE CODE MODULATION
• Pulse-code modulation (PCM) is a digital representation of an analog signal in which the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a numeric (usually binary) code.

Figure: 4-bit pulse-code modulation showing quantization and sampling of a signal (red).

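A hedged sketch of the uniform quantizer in the figure above (it assumes samples normalized to [-1, 1]; the function name pcm_quantize is illustrative):

import numpy as np

def pcm_quantize(signal, bits=4):
    """Uniformly quantize samples in [-1, 1] to 2**bits levels."""
    levels = 2 ** bits
    codes = np.clip(np.round((signal + 1) / 2 * (levels - 1)), 0, levels - 1)
    recon = codes / (levels - 1) * 2 - 1     # map integer codes back to amplitudes
    return codes.astype(int), recon

t = np.linspace(0, 1, 50)                    # regular sampling instants
x = np.sin(2 * np.pi * t)                    # the analog-like input signal
codes, xq = pcm_quantize(x, bits=4)          # 4 bits per sample, 16 levels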


PULSE CODE MODULATION
• PCM can use 4, 8 or 16 bits per sample.
• The main problem with PCM is that it needs a large bandwidth.
• To reduce the bandwidth, the number of bits used to represent each sample is normally reduced.
• But if the number of bits per sample is reduced, the quantization noise increases.
• Techniques such as delta modulation and adaptive delta modulation keep the noise to a minimum while reducing the number of bits per sample.
• The number of bits and the transmission bandwidth are related.
• The system is complex.



DELTA MODULATION
• PCM transmits all the bits which are used to code a sample; hence the signalling rate and transmission channel bandwidth are large in PCM.
• Delta modulation is used to overcome this problem.
• In a delta modulation system, the difference between two consecutive pixel intensities is coded by a one-bit quantizer.
• Delta modulation is the simplest form of DPCM.

Block diagram (from the slide): encoder — the error e(n) = f(n) − f^(n−1) is passed through a one-bit quantizer and sent over the channel, with a delay element feeding the reconstruction back to the predictor; decoder — the received bits are accumulated (∑ with delay) and low-pass filtered to give f^(n).

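A minimal DM codec sketch consistent with that diagram (fixed step size; function names are illustrative):

import numpy as np

def dm_encode(x, step):
    """Delta modulation: one bit per sample from the sign of the prediction error."""
    xhat, bits = 0.0, []
    for sample in x:
        bit = 1 if sample >= xhat else 0      # one-bit quantizer on e(n)
        bits.append(bit)
        xhat += step if bit else -step        # encoder tracks the reconstruction
    return bits

def dm_decode(bits, step):
    """Accumulate +/- step; a low-pass filter would normally smooth the staircase."""
    xhat, out = 0.0, []
    for bit in bits:
        xhat += step if bit else -step
        out.append(xhat)
    return out

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)                 # slowly varying test signal
bits = dm_encode(x, step=0.1)                 # exactly one bit per sample
y = dm_decode(bits, step=0.1)                 # staircase approximation of x
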
DELTA MODULATION
• Delta modulation has the following advantages over PCM:
• Delta modulation transmits only one bit per sample. Thus the signalling rate and transmission bandwidth are quite small for delta modulation.
• The transmitter and receiver implementations are very simple for delta modulation; there is no analog-to-digital converter involved.
• Disadvantages of delta modulation: it has two drawbacks,
• slope overload distortion, and
• granular noise.



ADAPTIVE DELTA MODULATION
• To overcome the quantization errors due to slope overload and granular noise, the step size is made adaptive to variations in the input signal.
• In particular, in steep segments of the signal the step size is increased.
• When the input is varying slowly, the step size is reduced.
• The step size is increased or decreased according to a certain rule, using additional circuit logic for step-size control. One such rule is sketched below.

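One common family of rules grows the step while the output bit repeats (a steep segment) and shrinks it when the bit alternates (a granular region). The factor 1.5 and the limits below are illustrative assumptions, not a specific standard:

def adm_encode(x, step=0.05, step_max=1.0, step_min=0.01):
    """Adaptive delta modulation sketch with a simple step-size rule."""
    xhat, bits, prev = 0.0, [], None
    for sample in x:
        bit = 1 if sample >= xhat else 0
        if prev is not None:
            if bit == prev:
                step = min(step * 1.5, step_max)   # repeated bit: steep segment
            else:
                step = max(step / 1.5, step_min)   # alternating bit: granular region
        xhat += step if bit else -step
        bits.append(bit)
        prev = bit
    return bits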


DPCM
• DPCM stands for differential pulse-code modulation.
• DPCM is a predictive compression technique in which a prediction of the pixel being encoded is subtracted from the current pixel to create a difference value, which is then quantized for subsequent transmission.
• DPCM comes in two flavours:
• DPCM without a quantizer: lossless DPCM.
• DPCM with a quantizer: lossy DPCM.



LOSSLESS DPCM
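The worked examples for this section were presented as figures; a minimal lossless DPCM sketch with a previous-pixel predictor (an assumption — practical predictors may combine several neighbors) shows the idea:

import numpy as np

def dpcm_encode_lossless(row):
    """Send the first pixel, then the exact differences from the previous pixel."""
    row = np.asarray(row, dtype=np.int32)
    return int(row[0]), np.diff(row)          # e(n) = f(n) - f(n-1), no quantizer

def dpcm_decode_lossless(first, diffs):
    """Cumulatively add the differences back onto the first pixel."""
    return np.concatenate(([first], first + np.cumsum(diffs)))

row = [100, 102, 103, 103, 101]
first, diffs = dpcm_encode_lossless(row)                    # diffs: [2, 1, 0, -2]
assert dpcm_decode_lossless(first, diffs).tolist() == row   # exact: MSE = 0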


LOSSY DPCM
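And a lossy counterpart, adding a uniform quantizer of step q (the step size and the previous-pixel predictor are illustrative assumptions). Note that the encoder predicts from its own reconstruction, so quantization error does not accumulate:

def dpcm_encode_lossy(row, q=4):
    """Quantize each prediction error with a uniform step q."""
    recon, codes = 0, []
    for pixel in row:
        err = pixel - recon                 # prediction error e(n)
        code = int(round(err / q))          # quantizer output, a small integer
        codes.append(code)
        recon += code * q                   # same value the decoder will hold
    return codes

def dpcm_decode_lossy(codes, q=4):
    recon, out = 0, []
    for code in codes:
        recon += code * q                   # add the dequantized difference
        out.append(recon)
    return out

row = [100, 102, 103, 103, 101]
codes = dpcm_encode_lossy(row)              # small integers, cheap to entropy-code
print(dpcm_decode_lossy(codes))             # close to row: within q/2 per sample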


Thanks

