
Video Compression Techniques

Fundamentals of Video Compression

• Introduction to Digital Video
• Basic Compression Techniques
• Still Image Compression Techniques - JPEG
• Video Compression

Factors Associated with Compression
The goal of video compression is to drastically reduce the amount of data required to store or transmit a digital video file, while retaining the quality of the original video.

# Real-Time versus Non-Real-Time
# Symmetrical versus Asymmetrical
# Compression Ratios
# Lossless versus Lossy
# Interframe versus Intraframe
# Bit Rate Control
Lossless vs. Lossy Compression
• In lossless compression, data is not altered or lost in the process of compression or decompression
• Some examples of lossless standards are:
— Run-Length Encoding (a minimal sketch follows this list)
— Dynamic Pattern Substitution - Lempel-Ziv Encoding
— Huffman Encoding
• Lossy compression is used for compressing audio, pictures and video
• Some examples are:
— JPEG
— MPEG
— H.261 (Px64) Video Coding Algorithm
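As a concrete illustration of the run-length encoding mentioned above, here is a minimal sketch in Python (the function names are hypothetical); it operates on a plain byte sequence rather than real video data, and the round trip is exactly lossless.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Run-length encode a byte sequence as (value, count) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((b, 1))               # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Reverse the encoding exactly: no information is lost."""
    return b"".join(bytes([value]) * count for value, count in runs)

raw = bytes([0, 0, 0, 0, 255, 255, 7])
assert rle_decode(rle_encode(raw)) == raw     # lossless round trip
```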
Real-Time vs. Non-Real-Time

Some compression systems capture, compress to disk, decompress and play back video (30 frames per second) all in real time; there are no delays.

Other systems are only capable of capturing some of the 30 frames per second and/or are only capable of playing back some of the frames.

Insufficient frame rate is one of the most noticeable video deficiencies. Without a minimum of 24 frames per second, the video will be noticeably jerky. In addition, the missing frames will contain extremely important lip-synchronisation data.

If the movement of a person's lips is missing due to dropped frames during capture or playback, it is impossible to match the audio correctly with the video.
Symmetrical vs. Asymmetrical

This refers to how video images are compressed and decompressed. Symmetrical compression means that if you can play back a sequence of 640 by 480 video at 30 frames per second, then you can also capture, compress and store it at that rate.

Asymmetrical compression means just the opposite. The degree of asymmetry is usually expressed as a ratio. A ratio of 150:1 means it takes approximately 150 minutes to compress one minute of video.

Asymmetrical compression can sometimes be more elaborate and more efficient for quality and speed at playback, because it uses so much more time to compress the video.

The two big drawbacks to asymmetrical compression are that it takes a lot longer, and often you must send the source material out to a dedicated compression company for encoding.
Compression Ratio

The compression ratio compares the amount of data in the original video with the amount of data in the compressed video. For example, a 200:1 compression ratio means the original video requires 200 units of data for every 1 unit of data in the compressed version.

With MPEG, compression ratios of 100:1 are common with good image quality. Motion JPEG provides ratios ranging from 15:1 to 80:1, although 20:1 is about the maximum for maintaining a good-quality image.
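As a quick worked example (the figures are illustrative, not from the slides): one minute of uncompressed 640 by 480, 24-bit, 30 fps video occupies about 640 × 480 × 3 bytes × 30 fps × 60 s ≈ 1.66 GB, so at a 100:1 compression ratio the same minute would take roughly 16.6 MB.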
Interframe vs. Intraframe

One of the most powerful techniques for compressing video is interframe compression. Interframe compression uses one or more earlier or later frames in a sequence to compress the current frame, while intraframe compression uses only the current frame, which is effectively image compression.

Since interframe compression copies data from one frame to another, if the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly.

Making 'cuts' in intraframe-compressed video is almost as easy as editing uncompressed video: one finds the beginning and ending of each frame, copies bit-for-bit each frame that one wants to keep, and discards the frames one doesn't want.

Another difference between intraframe and interframe compression is that with intraframe systems, each frame uses a similar amount of data.
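To make the interframe idea concrete, here is a minimal sketch in Python/NumPy (function names are hypothetical) that stores only the difference between each frame and its predecessor; real codecs such as MPEG use motion-compensated prediction rather than a raw pixel difference, but the dependency on the previous frame is the same.

```python
import numpy as np

def encode_interframe(frames):
    """Yield the first frame intact, then only frame-to-frame differences."""
    previous = None
    for frame in frames:
        if previous is None:
            yield ("intra", frame.copy())       # self-contained reference frame
        else:
            yield ("inter", frame - previous)   # store only what changed
        previous = frame

def decode_interframe(stream):
    previous = None
    for kind, payload in stream:
        previous = payload if kind == "intra" else previous + payload
        yield previous

# Two tiny 4x4 "frames" that differ in a single pixel.
f0 = np.zeros((4, 4), dtype=np.int16)
f1 = f0.copy(); f1[2, 3] = 100
decoded = list(decode_interframe(encode_interframe([f0, f1])))
assert np.array_equal(decoded[1], f1)
```

If the "intra" frame is lost, none of the following "inter" frames can be reconstructed, which is exactly the editing and transmission drawback described above.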
Bit Rate Control

A good compression system should allow the user to instruct the compression hardware and software which parameters are most important.

In some applications, frame rate may be of paramount importance, while frame size is not. In other applications, you may not care if the frame rate drops below 15 frames per second, but the quality of those frames must be very good.
Introduction to Digital Video

• Video is a stream of data composed of discrete frames, containing both audio and pictures
• Continuous motion is perceived at a frame rate of about 15 fps or higher
• Traditional movies run at 24 fps
• The TV standard in the USA (NTSC) uses ≈ 30 fps

With digital video, four factors have to be kept in mind:
# Frame Rate
# Colour Resolution
# Spatial Resolution
# Image Quality
Frame Rate

The standard for displaying any type of non-film video is 30 frames per second (film is 24 frames per second). Additionally, these frames are split in half (odd lines and even lines) to form what are called fields.

When a television set displays its analogue video signal, it displays the odd lines (the odd field) first. Then it displays the even lines (the even field). Each pair forms a frame, and there are 60 of these fields displayed every second (or 30 frames per second). This is referred to as interlaced video.
[Figure: a two-frame fragment of the "Matrix" sequence, and the same fragment after processing by an FRC (frame rate conversion) filter, which increases the frame rate 4 times.]
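To make the field structure concrete, here is a minimal sketch in Python/NumPy (hypothetical names) that splits a frame into its odd and even fields.

```python
import numpy as np

def split_fields(frame):
    """Split an interlaced frame into its odd field (picture lines 1, 3, 5, ...) and even field."""
    odd_field = frame[0::2]     # array rows 0, 2, 4, ... hold picture lines 1, 3, 5, ...
    even_field = frame[1::2]
    return odd_field, even_field

frame = np.arange(480 * 640).reshape(480, 640)
odd, even = split_fields(frame)
print(odd.shape, even.shape)    # each field holds half the lines: (240, 640) (240, 640)
```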
Colour Resolution

This second factor is a bit more complex. Colour resolution refers to the number of colours displayed on the screen at one time. Computers deal with colour in an RGB (red-green-blue) format, while video uses a variety of formats. One of the most common video formats is called YUV.
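As a rough illustration of the RGB-to-YUV conversion, here is a sketch in Python/NumPy using the commonly quoted ITU-R BT.601 weights; the function is a simplified illustration, not a particular codec's implementation.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) RGB image to Y, U, V planes (ITU-R BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: what the eye resolves best
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return y, u, v

# The chrominance planes can then be subsampled (e.g. 4:2:0) with little visible
# loss, because human vision is far less sensitive to colour detail than to brightness.
```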

[Figure: a test table used to estimate colour resolution. The border at which one of the colours on the resolution chart disappears is determined first, and the corresponding colour sharpness is read off the scale on the right.]
Spatial Resolution

The third factor is spatial resolution, or in other words, "How big is the picture?". PC and Macintosh computers generally have display resolutions of 640 by 480 or higher.

The National Television System Committee (NTSC) standard used in North America and in Japanese television uses a 768 by 484 display. The Phase Alternating Line (PAL) standard for European television is slightly larger, at 768 by 576.

Spatial resolution is a parameter that shows how many pixels are used to represent a real object in digital form. [Figure: the same colour image at two different spatial resolutions; the flower on the left has a much better resolution than the one on the right.]
Image Quality

The final objective is video that looks acceptable for your application. For some, this may be 1/4 screen, 15 frames per second (fps), at 8 bits per pixel. Others require full-screen (768 by 484), full-frame-rate video at 24 bits per pixel (16.7 million colours).
MPEG Compression

• Compression is achieved through:
  – Spatial redundancy reduction
  – Temporal redundancy reduction
Spatial Redundancy
• Take advantage of the similarity among most neighboring pixels
Spatial Redundancy Reduction

• RGB to YUV
  – Less information is required for YUV (humans are less sensitive to chrominance)
• Macro blocks
  – Take groups of pixels (16x16)
• Discrete Cosine Transform (DCT)
  – Based on Fourier analysis, representing the signal as a sum of sines and cosines
  – Concentrates the signal energy in the lower-frequency coefficients, so higher-frequency values can be coarsely quantized or discarded
  – Represents the pixels in a block with fewer significant numbers
• Quantization
  – Reduces the data required for the coefficients (a minimal DCT/quantization sketch follows this list)
• Entropy coding
  – Compresses the result
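The DCT and quantization steps above can be sketched as follows; this is a minimal illustration assuming an 8x8 luminance block and a uniform quantization step (real encoders use perceptually tuned quantization matrices), using SciPy's DCT-II and hypothetical helper names.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT-II of an 8x8 block (orthonormal)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# A smooth 8x8 block of luminance samples, shifted to be zero-centred.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = (128 + 10 * np.sin(x / 3) + 5 * y).astype(float) - 128

step = 16                                    # uniform quantization step (illustrative)
quantized = np.round(dct2(block) / step)     # most high-frequency coefficients become 0
reconstructed = idct2(quantized * step) + 128

print(int(np.count_nonzero(quantized)), "of 64 coefficients are non-zero")
```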
Spatial Redundancy Reduction

[Figure: the intra-frame encoding pipeline: quantization gives the major data reduction and controls the 'quality'; the quantized coefficients are then zig-zag scanned and run-length coded, producing an "intra-frame encoded" block.]
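Continuing the sketch, the zig-zag scan reorders the quantized coefficients from low to high frequency so that the many trailing zeros group together and run-length code cheaply; the helper below is a simplified illustration, not a particular codec's implementation.

```python
import numpy as np

def zigzag(coeffs):
    """Read an 8x8 coefficient block in zig-zag order (low to high frequency)."""
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return [float(coeffs[i, j]) for i, j in order]

# A typical quantized block: a few significant low-frequency values, the rest zero.
q = np.zeros((8, 8)); q[0, 0], q[0, 1], q[1, 0] = 50, -3, 2
scan = zigzag(q)
print(scan[:3], "followed by", scan[3:].count(0.0), "zeros: cheap to run-length code")
```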
Loss of Resolution
[Figure: the same image at three quality levels: Original (63 kB), Low (7 kB), Very Low (4 kB).]


Temporal Redundancy
• Take advantage of the similarity between successive frames

[Figure: frames 950, 951 and 952 of a "talking head" sequence, illustrating low temporal activity between successive frames.]

Temporal Redundancy Reduction

• I frames are independently encoded
• P frames are predicted from previous I or P frames
  – Can send a motion vector plus the changes (a block-matching sketch follows this list)
• B frames are predicted from both previous and following I and P frames
  – Useful in case something is uncovered
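To illustrate how the motion vector for a P-frame block might be found, here is a minimal exhaustive block-matching sketch in Python/NumPy (hypothetical names, a tiny search range, and a plain sum-of-absolute-differences cost; real encoders use much faster search strategies and also encode the prediction residual).

```python
import numpy as np

def find_motion_vector(ref, cur, top, left, block=16, search=4):
    """Motion vector (dy, dx): where in `ref` the block of `cur` at (top, left) matches best (SAD cost)."""
    target = cur[top:top + block, left:left + block].astype(int)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                      # candidate block falls outside the frame
            candidate = ref[y:y + block, x:x + block].astype(int)
            cost = np.abs(target - candidate).sum()
            if best_cost is None or cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best                               # the encoder sends this vector plus the residual

# Tiny demo: a bright square in the reference moves 2 pixels to the right.
ref = np.zeros((32, 32), dtype=np.uint8); ref[8:16, 8:16] = 200
cur = np.zeros((32, 32), dtype=np.uint8); cur[8:16, 10:18] = 200
print(find_motion_vector(ref, cur, top=8, left=10, block=8))   # expect (0, -2)
```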
Group of Pictures (GOP)

• Starts with an I-frame
• Ends with the frame right before the next I-frame
• An "open" GOP ends in a B-frame, a "closed" GOP in a P-frame
  – (What is the difference?)
• The GOP structure is an MPEG encoding parameter, but 'typical' patterns are:
  – I B B P B B P B B I
  – I B B P B B P B B P B B I
• Why not have all P and B frames after the initial I?
Typical MPEG Parameters

Typical Compression Performance

Type   Size     Compression
----   ------   -----------
I      18 KB    7:1
P      6 KB     20:1
B      2.5 KB   50:1
Avg    4.8 KB   27:1

Note: the result is a variable bit rate, even if the frame rate is constant.
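As a rough consistency check (assuming a 12-frame GOP such as I B B P B B P B B P B B, i.e. 1 I, 3 P and 8 B frames per GOP), the average frame size works out to (1 × 18 + 3 × 6 + 8 × 2.5) / 12 ≈ 4.7 KB, in line with the ~4.8 KB average quoted above.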
MPEG (Moving Picture Experts Group)

MPEG sets standards for audio and video compression and transmission.

MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) without excessive quality loss, making Video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) possible.

MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. The best-known part of the MPEG-1 standard is the MP3 audio format.

The standard consists of the following five parts:

1. Systems (storage and synchronization of video, audio, and other data together)
2. Video (compressed video content)
3. Audio (compressed audio content)
4. Conformance testing
5. Reference software

MPEG-2

• Designed for coding interlaced images at transmission rates above 4 million bits per second.
• Can be used on HD DVD and Blu-ray discs.
• Handles 5 audio channels.
• Covers a wider range of frame sizes (including HDTV).
• Provides resolutions of 720x480 and 1280x720 at 60 fps with full CD-quality audio; used by DVD-ROM.
• Can compress 2 hours of video into a few gigabytes.
• Used for digital TV broadcast and DVD.
• Designed to offer higher quality than MPEG-1, at a higher bandwidth (between 4 and 10 Mbit/s).
• The scheme is very similar to MPEG-1, and is scalable.
MPEG-3

Designed to handle HDTV signals in the range of 20 to 40 Mbit/s. HDTV resolution is 1920x1080 at 30 Hz. However, MPEG-2 proved fully capable of handling HDTV, so MPEG-3 is no longer mentioned.
MPEG-4

MPEG-4 is a collection of methods defining the compression of audio and visual (AV) digital data. It absorbs many of the features of MPEG-1, MPEG-2 and other related standards. MPEG-4 files are smaller than JPEG files, so they can transmit video and images over narrower bandwidth and can mix video with text, graphics, and 2D and 3D animation layers.

MPEG-4 provides a series of technologies for developers, for various service providers and for end users:
• Service providers can use it for data transparency.
• It gives end users a wide range of interaction with animated objects.
• It multiplexes and synchronizes data.
• It supports interaction with the audio-visual scene.
MPEG-7

MPEG-7 is a content representation standard for information search. It is also titled the Multimedia Content Description Interface. It defines the manner in which audiovisual materials can be coded and classified so that the materials can be easily located using search engines, just as search engines are used to locate text-based information. Music, art, line drawings, photos, and videos are examples of the kinds of materials that become searchable based on the descriptive language defined by MPEG-7.

* Provides a fast and efficient searching, filtering and content identification method.
* Describes the main aspects of the content (low-level characteristics, structure, models, collections, etc.).
* Indexes a wide range of applications.
* The audiovisual information MPEG-7 deals with includes audio, voice, video, images, graphs and 3D models.
* Informs about how objects are combined in a scene.
* Keeps the description independent of the information itself.
MPEG-7 applications

* Digital libraries: image/video catalogues, musical dictionaries.
* Multimedia directory services: e.g. yellow pages.
* Broadcast media selection: radio channels, TV channels.
* Multimedia editing: personalized electronic news services, media authoring.
* Security services: traffic control, production chains, etc.
* E-business: product search.
* Cultural services: art galleries, museums, etc.
* Educational applications.
* Biomedical applications.
Still Image Compression - JPEG

• Defined by the Joint Photographic Experts Group
• Released as an ISO standard for still color and gray-scale images
• Provides four modes of operation:
— Sequential (each pixel is traversed only once)
— Progressive (the image gets progressively sharper)
— Hierarchical (the image is compressed to multiple resolutions)
— Lossless (full detail at the selected resolution)
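As a practical illustration of the sequential and progressive modes (using the Pillow library; the file names are hypothetical), a baseline and a progressive JPEG can be written like this:

```python
from PIL import Image

img = Image.open("photo.png").convert("RGB")   # hypothetical input file

# Baseline (sequential) JPEG: the image is stored top-to-bottom in a single pass.
img.save("baseline.jpg", format="JPEG", quality=85)

# Progressive JPEG: stored in multiple scans, so a partial download already
# shows a coarse, progressively sharpening version of the whole image.
img.save("progressive.jpg", format="JPEG", quality=85, progressive=True)
```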

Definitions in the JPEG Standard

Three levels of definition:
• Baseline system (every codec must implement it)
• Extended system (methods to extend the baseline system)
• Special lossless function (ensures lossless compression/decompression)
H.261 (Px64)

• H.261 was designed for data rates which are multiples of 64 kbit/s, and is sometimes called p x 64 kbit/s (p is in the range 1-30).
• These data rates suit the ISDN lines for which this video codec was designed.
• Intended for videophone and video conferencing systems.
H.263 Standard

• The development of modems allowing transmission in the range of 28-33 kbps paved the way for the development of an improved version of H.261.
• It was designed for low-bitrate communication; however, this limitation has now been removed.
• It is expected that H.263 will replace H.261.
Prepared by:
Saurabh Verma
B.Tech Vth Sem., CSE
