
MULTIMEDIA TECHNOLOGY

VIDEO

Dr. Zeeshan Bhatti


BSIT-III
Chapter 5

VIDEO
Video is somewhat like a series of still images

Video uses Red-Green-Blue color space


Pixel resolution (width x height), number of bits per pixel, and frame
rate are factors in quality

But there's much more to it


VIDEO STRUCTURING
A video can be decomposed into a well-defined structure consisting of
five levels
1. Video shot is an unbroken sequence of frames recorded from a single
camera. It is the building block of a video.

2. Key frame is the frame that best represents the salient content of a shot.
3. Video scene is a collection of semantically related and temporally adjacent shots; it depicts and conveys the concept or story of a video.
4. Video group is an intermediate entity between the physical shots and the
video scenes. The shots in a video group are visually similar and temporally
close to each other.

5. Video is at the root level and it contains all the components defined
above.

TYPES OF VIDEO SIGNALS


1. Component video
2. Composite Video
3. S-video


COMPONENT VIDEO | 3 SIGNALS


Component video: Higher-end video systems make use of three separate
video signals for the red, green, and blue image planes. Each color channel is
sent as a separate video signal.
Most computer systems use Component Video, with separate signals for R, G,
and B signals.

For any color separation scheme, Component Video gives the best color
reproduction since there is no crosstalk between the three channels.
This is not the case for S-Video or Composite Video, discussed next.
Component video, however, requires more bandwidth and good
synchronization of the three components.

COMPOSITE VIDEO | 1 SIGNAL


Composite video: color ("chrominance") and intensity ("luminance") signals
are mixed into a single carrier wave.
Chrominance is a composition of two color components (I and Q, or U and V).
Used by broadcast TV. In NTSC TV, e.g., I and Q are combined into a chroma
signal, and a color subcarrier is then employed to put the chroma signal at the
high-frequency end of the channel shared with the luminance signal.
The chrominance and luminance components can be separated at the receiver
end and then the two color components can be further recovered.
When connecting to TVs or VCRs, Composite Video uses only one wire and
video color signals are mixed, not sent separately. The audio and sync signals
are additions to this one signal.
Since color and intensity are wrapped into the same signal, some interference
between the luminance and chrominance signals is inevitable.

S-VIDEO | 2 SIGNALS
S-Video: as a compromise, (Separated video, or Super-video, e.g., in
S-VHS) uses two wires, one for luminance and another for a composite
chrominance signal.
As a result, there is less crosstalk between the color information and
the crucial gray-scale information.

The reason for placing luminance into its own part of the signal is that
black-and-white information is most crucial for visual perception.
In fact, humans are able to differentiate spatial resolution in grayscale
images with a much higher acuity than for the color part of color images.
As a result, we can send less accurate color information than must be sent
for intensity information: we can only see fairly large blobs of color, so it
makes sense to send less color detail.


ANALOG VIDEO


ANALOG VIDEO
An analog signal f(t) samples a time-varying image. So-called
progressive scanning traces through a complete picture (a frame) row-wise for each time interval.
In TV, and in some monitors and multimedia standards as well, another
system, called "interlaced" scanning, is used:

a) The odd-numbered lines are traced first, and then the even-numbered
lines are traced. This results in odd and even fields; two fields make
up one frame.
b) In fact, the odd lines (starting from 1) end up at the middle of a line at
the end of the odd field, and the even scan starts at a half-way point.


INTERLACING
The image is separated into two series of lines (odd and even) called fields.
Each display pass shows one field; two fields make up one frame.
NTSC = about 60 fields / sec
PAL & SECAM = 50 fields / sec

Interlacing was invented because it was difficult to transmit the amount of
information in a full frame quickly enough to avoid flicker.
Doubling the number of fields presented to the eye reduces perceived flicker
(see the sketch below).


Figure: Interlaced raster scan

Figure 5.1 shows the scheme used. First the solid (odd) lines are traced, P to Q,
then R to S, etc., ending at T; then the even field starts at U and ends at V.
The jump from Q to R, etc. in Figure 5.1 is called the horizontal retrace, during
which the electronic beam in the CRT is blanked. The jump from T to U or V to P
is called the vertical retrace.

Figure: Field 1 and Field 2, and the resulting interlaced image on the TV/computer screen.

NTSC
NTSC (National Television System Committee) TV standard is mostly used
in North America and Japan.

It uses the familiar 4:3 aspect ratio (i.e., the ratio of picture width to its
height)
Uses 525 scan lines per frame at 30 frames per second (fps).
a) NTSC follows the interlaced scanning system, and each frame is
divided into two fields, with 262.5 lines/field.
b) Each line takes 63.6 microseconds to scan. Horizontal retrace takes 10.9
microseconds (with a 5-microsecond horizontal sync pulse embedded), so the
active line time, during which image data is displayed, is 52.7 microseconds
(see Fig. 5.3).

Vertical retrace takes place during 20 lines reserved for control information
at the beginning of each field. Hence, the number of active video lines per
frame is only 485.
Similarly, almost 1/6 of the raster at the left side is blanked for horizontal
retrace and sync. The non-blanking pixels are called active pixels.

NTSC video is an analog signal with no fixed horizontal resolution.
Therefore one must decide how many times to sample the signal for display:
each sample corresponds to one pixel output.
A "pixel clock" is used to divide each horizontal line of video into samples.
The higher the frequency of the pixel clock, the more samples per line there
are.
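As a rough illustration of this, the sketch below multiplies the 52.7-microsecond active-line time by a few example pixel-clock frequencies; the clock values are assumptions for illustration, not NTSC requirements:

ACTIVE_LINE_US = 52.7                 # active line time from the text, in microseconds

def samples_per_line(pixel_clock_mhz):
    """Samples (pixels) per active line: MHz x microseconds = samples."""
    return pixel_clock_mhz * ACTIVE_LINE_US

print(samples_per_line(12.27))        # ~647 samples per line
print(samples_per_line(13.5))         # ~711; CCIR-601 digital video uses a 13.5 MHz clock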

PAL
PAL (Phase Alternating Line) is a TV standard widely used in Western
Europe, China, India, and many other parts of the world; it was developed
in Germany.
PAL uses 625 scan lines per frame, at 25 frames/second, with a 4:3 aspect
ratio and interlaced fields.
(a) PAL uses the YUV color model. It uses an 8 MHz channel and allocates a
bandwidth of 5.5 MHz to Y, and 1.8 MHz each to U and V. The color
subcarrier frequency is fsc ≈ 4.43 MHz.

(b) In order to improve picture quality, chroma signals have alternate signs
(e.g., +U and -U) in successive scan lines, hence the name "Phase Alternating
Line".
(c) Interlaced, each frame is divided into 2 fields, 312.5 lines/field
Like NTSC, its broadcast TV signals are carried as composite video.

SECAM
SECAM stands for Système Électronique Couleur Avec Mémoire; it is the
third major broadcast TV standard.
SECAM also uses 625 scan lines per frame, at 25 frames per second,
with a 4:3 aspect ratio and interlaced fields.
SECAM and PAL are very similar. They differ slightly in their color
coding scheme:
(a) In SECAM, U and V signals are modulated using separate color
subcarriers at 4.25 MHz and 4.41 MHz respectively.

(b) They are sent in alternate lines, i.e., only one of the U or V signals
will be sent on each scan line.


Comparison of Analog Broadcast TV Systems


DIGITAL VIDEO


DIGITAL VIDEO
The camera digitizes its output into a sequence of single frames.
The video and audio data are compressed before being written to tape or
stored digitally.


DIGITAL VIDEO BASICS


A video signal consists of luminance and chrominance information.

Luminance: brightness, varying from white to black (abbreviated as Y).
Chrominance: color (hue & saturation), conveyed as a pair of color-difference signals:
R-Y (hue & saturation for red, without luminance)
B-Y (hue & saturation for blue, without luminance)
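As a small worked example, the sketch below forms Y and the two colour-difference signals from RGB values normalised to [0, 1], using the standard Rec. 601 luma weights (the function name is illustrative):

def rgb_to_ydiff(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (Y)
    return y, r - y, b - y                  # Y, R-Y, B-Y

y, ry, by = rgb_to_ydiff(1.0, 0.0, 0.0)     # pure red
print(y, ry, by)                            # 0.299, 0.701, -0.299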


DIGITAL VIDEO
The advantages of digital representation for video are many.

For example:
(a) Video can be stored on digital devices or in memory, ready to be
processed (noise removal, cut and paste, etc.), and integrated into
various multimedia applications;
(b) Direct access is possible, which makes nonlinear video editing
achievable as a simple, rather than a complex, task;
(c) Repeated recording does not degrade image quality;
(d) Ease of encryption and better tolerance to channel noise.


CHROMA SUBSAMPLING
Since humans see color with much less spatial resolution than they see
black and white, it makes sense to "decimate" the chrominance signal.
Interesting (but not necessarily informative!) names have arisen to
label the different schemes used.
To begin with, numbers are given stating how many pixel values, per
four original pixels, are actually sent:
(a) The chroma subsampling scheme "4:4:4" indicates that no chroma
subsampling is used: each pixel's Y (luminance), Cb (blue difference),
and Cr (red difference) values are transmitted, 4 for each of Y, Cb, Cr.


(b) The scheme "4:2:2" indicates horizontal subsampling of the Cb, Cr
signals by a factor of 2. That is, of four pixels horizontally labelled as
0 to 3, all four Ys are sent, and every two Cb's and two Cr's are sent,
as (Cb0, Y0)(Cr0, Y1)(Cb2, Y2)(Cr2, Y3)(Cb4, Y4), and so on (or
averaging is used).

(c) The scheme "4:1:1" subsamples horizontally by a factor of 4.


(d) The scheme "4:2:0" subsamples in both the horizontal and vertical
dimensions by a factor of 2. Theoretically, an average chroma pixel is
positioned between the rows and columns, as shown in the figure.
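Below is a minimal sketch of 4:2:0 subsampling using simple 2x2 averaging of the chroma planes; real systems may site and filter the chroma samples differently, so treat this as one plausible choice rather than the exact positioning any standard mandates:

import numpy as np

def subsample_420(cb, cr):
    """Average each 2x2 block of chroma down to one sample (4:2:0)."""
    def down2(c):
        return (c[0::2, 0::2] + c[0::2, 1::2] +
                c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
    return down2(cb), down2(cr)

cb = np.random.rand(8, 8)
cr = np.random.rand(8, 8)
cb420, cr420 = subsample_420(cb, cr)
print(cb420.shape)   # (4, 4): half resolution in both dimensions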


CHROMA SUBSAMPLING (FIGURE)

Figure: Chroma subsampling schemes. The mapping examples given are only
theoretical and for illustration.

CCIR STANDARDS FOR DIGITAL VIDEO


CCIR is the Consultative Committee for International Radio, and one of
the most important standards it has produced is CCIR-601, for
component digital video.
This standard has since become standard ITU-R-601, an international
standard for professional video applications
adopted by certain digital video formats including the popular DV video.

Table below shows some of the digital video specifications, all with an
aspect ratio of 4:3. The CCIR 601 standard uses an interlaced scan,
so each field has only half as much vertical resolution (e.g., 240 lines in
NTSC).


CIF stands for Common Intermediate Format, specified by the CCITT.

(a) The idea of CIF is to specify a format for lower bitrate.
(b) CIF is about the same as VHS quality. It uses a progressive
(non-interlaced) scan.
(c) QCIF stands for "Quarter-CIF". All the CIF/QCIF resolutions are
evenly divisible by 8, and all except 88 are divisible by 16; this
provides convenience for block-based video coding in H.261 and
H.263.


Table: Digital video specifications


HDTV (HIGH DEFINITION TV)


The main thrust of HDTV (High Definition TV) is not to increase the "definition" in
each unit area, but rather to increase the visual field, especially in its width.
(a) The first generation of HDTV was based on an analog technology developed
by Sony and NHK in Japan in the late 1970s.
(b) MUSE (MUltiple sub-Nyquist Sampling Encoding) was an improved NHK
HDTV with hybrid analog/digital technologies that was put in use in the 1990s.
It has 1,125 scan lines, interlaced (60 fields per second), and a 16:9 aspect ratio.

(c) Since uncompressed HDTV will easily demand more than 20 MHz
bandwidth, which will not fit in the current 6 MHz or 8 MHz channels, various
compression techniques are being investigated.
(d) It is also anticipated that high quality HDTV signals will be transmitted using
more than one channel even after compression.
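A rough, illustrative calculation shows why compression is unavoidable; the 1920 x 1080 raster, 30 fps, and 16 bits per pixel (8-bit luma plus 4:2:2 chroma) are assumptions for illustration, not MUSE's actual parameters:

# Illustrative uncompressed HDTV bitrate (assumed raster, not MUSE's).
width, height, fps = 1920, 1080, 30
bits_per_pixel = 16                     # 8-bit Y plus 4:2:2 chroma
bitrate = width * height * fps * bits_per_pixel
print(bitrate / 1e6, "Mbit/s")          # ~995 Mbit/s, far beyond a 6 MHz channel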

For video, MPEG-2 is chosen as the compression standard.

For audio, AC-3 is the standard. It supports the so-called 5.1 channel
Dolby surround sound, i.e., five surround channels plus a subwoofer
channel.
The salient differences between conventional TV and HDTV:
(a) HDTV has a much wider aspect ratio of 16:9 instead of 4:3.
(b) HDTV moves toward progressive (non-interlaced) scan. The
rationale is that interlacing introduces serrated edges to moving objects
and flickers along horizontal edges.


The FCC (Federal Communications Commission) has planned to
replace all analog broadcast services with digital TV broadcasting by
the year 2006. The services provided will include:
o SDTV (Standard Definition TV): the current NTSC TV or higher.
o EDTV (Enhanced Definition TV): 480 active lines or higher, i.e., the
third and fourth rows in Table 5.4.
o HDTV (High Definition TV): 720 active lines or higher.


USES OF DIGITAL VIDEO


The uses of digital video are widespread. People who are teaching,
training, and selling can all use digital video to improve their jobs.
For teaching, digital video is particularly helpful in teaching
multiculturalism. From the classroom, the class is able to visit distant places,
and the students are able to see the local sights and hear the local sounds.
This aspect of digital video is also particularly helpful in the Real Estate
field. Customers are able to visit properties without having to actually be
there.
Digital video is also a useful tool for training. Because digital video is
capable of constructing images through the use of a three-dimensional
model, producing simulated training situations with digital video can be
an effective tool. For example, training an airline pilot with a flight
simulator allows the operator to experience what flying a jet is really
like.

USES OF DIGITAL VIDEO


Interior designers and landscape architects can also use this tool to
simulate what a final project will look like when completed.
Selling products can also be enhanced through digital video. Video of
products allows the customer to get a realistic glimpse of a product
and its capabilities. Digital video provides product demonstrations
and can often answer customers' questions.
Video conferencing allows salespeople to pitch their products to
people around the world without the travel expenses.
The medical field has many different uses of interactive digital video.
Digital video provides doctors with current information that is
continuously changing. It also allows doctors to learn from
colleagues who are far away and from various sources of
information.

PROCESS OF CONVERTING AN ANALOG VIDEO SIGNAL TO A DIGITAL VIDEO SIGNAL
Video data normally occurs as continuous, analog signals. In order for
a computer to process this video data, we must convert the analog
signals to a non-continuous, digital format. In a digital format, the
video data can be stored as a series of bits on a hard disk or in
computer memory.

The process of converting a video signal to a digital bitstream is
called analog-to-digital conversion (A/D conversion), or digitizing.
A/D conversion occurs in two steps:
1. Sampling captures data from the video stream.
2. Quantizing converts each captured sample into a digital format.


SAMPLING
Each sample captured from the video stream is typically stored as a 16-bit
integer.
The rate at which samples are collected is called the sampling rate.
The sampling rate is measured in the number of samples captured per
second (samples/second).
For digital video, it is necessary to capture millions of samples per second.
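As a quick check of the "millions of samples per second" claim, the sketch below assumes a 720 x 486 active raster at 30 fps (close to CCIR-601 NTSC-rate video); the numbers are illustrative:

samples_per_line = 720     # assumed active samples per line
lines_per_frame = 486      # assumed active lines per frame
fps = 30
print(samples_per_line * lines_per_frame * fps)   # ~10.5 million luma samples/sec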


QUANTIZING
Quantizing converts the level of a video signal sample into a discrete,
binary value.
This value approximates the level of the original video signal sample.
The value is selected by comparing the video sample to a series of
predefined threshold values.
The value of the threshold closest to the amplitude of the sampled
signal is used as the digital value.
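A minimal sketch of this nearest-threshold rule, assuming an arbitrary 8-level (3-bit) set of levels for illustration:

def quantize(sample, levels):
    """Return the predefined level closest to the sample's amplitude."""
    return min(levels, key=lambda lv: abs(lv - sample))

levels = [i / 7 for i in range(8)]     # 8 evenly spaced levels -> 3 bits/sample
print(quantize(0.40, levels))          # 0.428..., the nearest threshold level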


A video signal contains several different components which are mixed
together in the same signal. This type of signal is called a composite
video signal and is not really useful in high-quality computer video.
Therefore, a standard composite video signal is usually separated into
its basic components before it is digitized.
The composite video signal format defined by the NTSC (National
Television Standards Committee) color television system is used in the
United States. The PAL (Phase Alternating Line) and SECAM (Séquentiel
Couleur Avec Mémoire) color television systems are used in Europe and
are not compatible with NTSC.
Most computer video equipment supports one or more of these system
standards.


VIDEO COMPRESSION
A single frame of video data can be quite large in size.
A video frame with a resolution of 512 x 482 will contain 246,784
pixels.
If each pixel contains 24 bits of color information, the frame will
require 740,352 bytes of memory or disk space to store.
Assuming there are 30 frames per second for real-time video, a
10-second video sequence would be more than 222 megabytes in size!
It is clear there can be no computer video without at least one efficient
method of video data compression.
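The arithmetic can be reproduced directly:

# The storage figures above, recomputed as a quick check.
width, height = 512, 482
bytes_per_pixel = 3                      # 24 bits of colour
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)                       # 740,352 bytes per frame
print(frame_bytes * 30 * 10)             # 222,105,600 bytes for 10 s at 30 fps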


DIGITAL VIDEO
Example:
NTSC: 640 x 480 px, 24-bit (millions of colours, 3 bytes per pixel) pictures
≈ 900 KBytes / frame
x 30 frames / sec ≈ 26 MBytes / sec
x 60 seconds ≈ 1.6 GBytes / min (without sound!)

Even worse with PAL / SECAM


Only studios (film or TV) can deal with such sizes and data rates.


VIDEO COMPRESSION (CONTINUED)


There are many encoding methods available that will compress video
data.
The majority of these methods involve the use of a transform coding
scheme, usually employing a Fourier or Discrete Cosine Transform
(DCT).

These transforms physically reduce the size of the video data by
selectively throwing away unneeded parts of the digitized information.
Transform compression schemes usually discard 10 percent to 25
percent or more of the original video data, depending largely on the
content of the video data and upon what image quality is considered
acceptable.
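Below is a minimal sketch of transform coding on a single 8 x 8 block using SciPy's DCT; the uniform quantisation step is an assumption for illustration, whereas real codecs use standardised quantisation matrices:

import numpy as np
from scipy.fft import dctn, idctn

block = np.random.randint(0, 256, (8, 8)).astype(float)

coeffs = dctn(block, norm='ortho')      # energy concentrates in few coefficients
q = 32.0                                # illustrative uniform step size
quantised = np.round(coeffs / q)        # most high-frequency terms become 0
restored = idctn(quantised * q, norm='ortho')

print(np.count_nonzero(quantised), "of 64 coefficients kept")
print(np.abs(block - restored).max(), "max pixel error")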

VIDEO COMPRESSION (CONTINUED)


Usually a transform is performed on an individual video frame.

The transform itself does not produce compressed data. It discards
only data not used by the human eye.
The transformed data, called coefficients, must have compression
applied to reduce the size of the data even further.
Each frame of data may be compressed using a Huffman or
arithmetic encoding algorithm, or even a more complex compression
scheme such as JPEG.

This type of intraframe encoding usually results in compression ratios
between 20:1 and 40:1, depending on the data in the frame. However,
even higher compression ratios may result if, rather than looking at
single frames as if they were still images, we look at multiple frames
as temporal images.

MOTION COMPENSATION TECHNIQUE


In a typical video sequence, very little data changes from frame to
frame.
If we encode only the pixels that change between frames, the amount
of data required to store a single video frame drops significantly.
This type of compression is known as interframe delta compression, or
in the case of video, motion compensation.
Typical motion compensation schemes that encode only frame deltas
(data that has changed between frames) can, depending on the data,
achieve compression ratios upwards of 200:1.
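A minimal sketch of the frame-delta idea follows; real motion compensation also searches for displaced blocks, whereas this toy version only compares pixels in place:

import numpy as np

def encode_delta(prev, curr):
    """Record positions and values of pixels that changed between frames."""
    changed = np.nonzero(curr != prev)
    return changed, curr[changed]

def apply_delta(prev, delta):
    changed, values = delta
    out = prev.copy()
    out[changed] = values
    return out

prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:110, 200:220] = 255                  # a small object appears
delta = encode_delta(prev, curr)
assert np.array_equal(apply_delta(prev, delta), curr)
print(len(delta[1]), "of", curr.size, "pixels stored")   # 200 of 307200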
This is only one possible type of video compression method. There are
many other types of video compression schemes, some of which are
similar and some of which are different.

CAPTURE AND COMPRESSION

Uncompressed video is practical only on high-end equipment; in all other
cases, capture must be followed by compression.

CAPTURE AND COMPRESSION

Analogue signal (TV, VCR, analogue camera) -> capture card -> computer,
with noise losses along the analogue path.
Generational loss and noise mean bad quality and compression problems.

Usually compression is performed in the capture card.
Mostly hardware, but software compression is possible.
The capture card must be compatible with the signal type (NTSC, PAL,
VHS type, S-VHS, Hi8).

CAPTURE AND COMPRESSION

Digital signal (digital TV, digital camera) -> computer, commonly over a
FireWire connection (IEEE 1394).
No generational loss or noise: good quality and better compression.

Usually compression is performed in the camera.
There is no control over such compression (typically DV for us).

CODECS
A codec is the algorithm used to compress (code) a video for delivery,
and to decode the compressed video in real time for fast playback.

Streaming audio and video starts playback as soon as enough data
has transferred to the user's computer to sustain this playback.

MPEG is a real-time video compression algorithm.
MPEG-4 includes numerous multimedia capabilities and is a preferred
standard.

Browser support varies.


VIDEO FORMAT CONVERTERS


Produce more than one version of your video to ensure that the video will
play on all the devices and in all the browsers necessary for your
project's distribution.


VIDEO: SOURCE FORMATS


Analog
Tape: VHS, Betamax, 8mm, Hi8, U-matic, Betacam SP
Disc: Laserdisc, SelectaVision

Digital
Tape: MiniDV, Digital8, DVCAM, Digital Betacam

Disc: Video CD, DVD, Blu-ray


CHROMA KEYS
Blue or Green screen or chroma key editing is used to superimpose
subjects over different backgrounds.
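A minimal sketch of a green-screen key with NumPy follows; the "green dominates by a margin" test is a deliberately crude assumption, since production keyers work in a colour-difference space and soften the matte edges:

import numpy as np

def chroma_key(fg, bg, margin=40):
    """Composite fg over bg wherever fg is 'green enough'."""
    r = fg[..., 0].astype(int)
    g = fg[..., 1].astype(int)
    b = fg[..., 2].astype(int)
    mask = (g > r + margin) & (g > b + margin)   # True where the screen shows
    out = fg.copy()
    out[mask] = bg[mask]
    return out

fg = np.zeros((4, 4, 3), dtype=np.uint8)
fg[..., 1] = 255                                 # an all-green toy foreground
bg = np.full((4, 4, 3), 128, dtype=np.uint8)
print(chroma_key(fg, bg)[0, 0])                  # [128 128 128]: background shows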


CREATING AND SHOOTING VIDEO


Shooting platform
A steady shooting platform should always be used.
Use an external microphone.
Know the features of your camera and software.
Decide on the aspect ratio up front.


CREATING AND SHOOTING VIDEO STORYBOARDING


Successful video production requires planning.

Storyboards are very important, as they form the basis of the work
that is carried out on the movie, describing most of the major features
as well as the plot and its development.


CREATING AND SHOOTING VIDEO COMPOSITION


Consider the delivery medium when composing shots.
Use close-up and medium shots when possible.
Move the subject, not the lens.
Beware of backlighting.
Adjust the white balance.


CREATING AND SHOOTING VIDEO TITLES AND TEXT (CONTINUED)


Use plain, sans serif fonts that are easy to read.
Choose colors wisely.
Provide ample space.
Leave titles on screen long enough so that they can be read.
Keep it simple.



NONLINEAR EDITING
High-end software has a steep learning curve.

Adobe's Premiere, Apple's Final Cut, Avid's Media Composer


Simple editing software is free with the operating system.
Microsoft's Windows Live Movie Maker, Apple's iMovie.
Remember video codecs are lossy; avoid re-editing.



THANK YOU

Q&A
For My Slides and Handouts

http://zeeshanacademy.blogspot.com/
https://www.facebook.com/drzeeshanacademy

Website:
https://sites.google.com/site/drzeeshanacademy/
