
Unit-1

Color Models in Image and Video
4.1 Color Models in Images

Fig. 4.15: RGB and CMY color cubes

Additive and Subtractive Color

 Additive color: when two light beams impinge on a target, their colors add; when two phosphors on a CRT screen are turned on, their colors add.

 Subtractive color: for ink deposited on paper, the opposite situation holds: yellow ink subtracts blue from white illumination but reflects red and green, so it appears yellow.

Subtractive Color: CMY Color Model

 Instead of red, green, and blue primaries, we need primaries that amount to −red, −green, and −blue; i.e., we need to subtract R, or G, or B.

 These subtractive color primaries are Cyan (C), Magenta (M), and Yellow (Y) inks.

Transformation from RGB to CMY

 The simplest model we can invent to specify what ink density to lay down on paper, to make a certain desired RGB color, is:

C = 1 − R
M = 1 − G
Y = 1 − B

 Then the inverse transform is:

R = 1 − C
G = 1 − M
B = 1 − Y
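A minimal sketch of these two transforms in Python (the function names are illustrative; values are assumed normalized to [0, 1]):

def rgb_to_cmy(r, g, b):
    # Subtractive primaries are the complements of the additive ones.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    # Inverse transform: recover RGB from ink densities.
    return 1.0 - c, 1.0 - m, 1.0 - y

# Yellow ink reflects red and green but absorbs blue:
print(rgb_to_cmy(1.0, 1.0, 0.0))   # (0.0, 0.0, 1.0), i.e., Y = 1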

Undercolor Removal: CMYK System

 Undercolor removal gives sharper and cheaper printer colors: calculate the part of the CMY mix that would be black, remove it from the color proportions, and add it back as real black (K).

 The new specification of inks is thus:

K = min(C, M, Y)
C' = C − K
M' = M − K
Y' = Y − K
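Continuing the Python sketch above, undercolor removal might look like this (again assuming values normalized to [0, 1]):

def cmy_to_cmyk(c, m, y):
    # Pull the common gray component out of CMY and print it as real black.
    k = min(c, m, y)
    return c - k, m - k, y - k, k

print(cmy_to_cmyk(0.3, 0.5, 0.9))   # (0.0, 0.2, 0.6, 0.3)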

Fig. 4.16: Additive and subtractive color. (a) RGB is used to specify additive color. (b) CMY is used to specify subtractive color.

4.2 Color Models in Video

 Color models in video largely derive from older analog methods of coding color for TV: luminance is separated from the color information.
 YIQ is used to transmit TV signals in North America and Japan (NTSC).
 In Europe, video tape uses the PAL or SECAM codings, which are based on TV that uses a matrix transform called YUV.
 Digital video mostly uses a matrix transform called YCbCr that is closely related to YUV.

YCbCr Color Model

 The Rec. 601 standard for digital video uses another color space, YCbCr, closely related to the YUV transform.
 YUV is changed by scaling such that Cb is U, but with a coefficient of 0.5 multiplying B'.
 In some software systems, Cb and Cr are also shifted such that values are between 0 and 1.

YCbCr Color Model

 In practice, 8-bit coding uses a maximum Y' value of only 219 and a minimum of +16; Cb and Cr have a range of ±112 and an offset of +128.
 If R', G', B' are floats in [0, 1], then we obtain Y', Cb, Cr in [0, 255] via the transform:

Y' =  16 +  65.481·R' + 128.553·G' +  24.966·B'
Cb = 128 −  37.797·R' −  74.203·G' + 112.000·B'
Cr = 128 + 112.000·R' −  93.786·G' −  18.214·B'

 The YCbCr transform is used in JPEG image compression and MPEG video compression.
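A small sketch of this 8-bit transform in Python (assuming gamma-corrected R', G', B' floats in [0, 1]):

def rgb_to_ycbcr_601(r, g, b):
    # Rec. 601 8-bit YCbCr from R'G'B' in [0, 1].
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr_601(1.0, 1.0, 1.0))   # (235, 128, 128): white maps to Y' = 235
print(rgb_to_ycbcr_601(0.0, 0.0, 0.0))   # (16, 128, 128): black maps to Y' = 16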

Analog Video Signal Types
Analog Video

 An analog signal f(t) samples a time-varying image. So-called progressive scanning traces through a complete picture (a frame) row-wise for each time interval.
 In TV, and in some monitors and multimedia standards as well, another system, called interlaced scanning, is used: the odd-numbered lines are traced first, and then the even-numbered lines are traced. This results in odd and even fields; two fields make up one frame.


Analog Video

Interlaced raster scan


Analog Video

 The odd lines (starting from 1) end up at the middle of a line at the end of the odd field, and the even scan starts at a half-way point.
 First the solid (odd) lines are traced, P to Q, then R to S, etc., ending at T; then the even field starts at U and ends at V.
 The jump from Q to R, etc. in Figure 5.1 is called the horizontal retrace, during which the electron beam in the CRT is blanked.
 The jump from T to U or V to P is called the vertical retrace.

Electronic signal for one NTSC scan line.
NTSC Video

 The NTSC (National Television System Committee) TV standard is mostly used in North America and Japan. It uses the familiar 4:3 aspect ratio (i.e., the ratio of picture width to height) and 525 scan lines per frame at 30 frames per second (fps).
 NTSC follows the interlaced scanning system, and each frame is divided into two fields, with 262.5 lines/field. A pixel clock divides each horizontal line of video into samples.
 NTSC uses the YIQ color model, and uses quadrature modulation to combine the I and Q signals into a single chroma signal:

C = I·cos(Fsc·t) + Q·sin(Fsc·t),  where Fsc = 3.58 MHz

Video raster, including retrace and sync data.
NTSC Video

 This modulated chroma signal is known as the color subcarrier, with magnitude √(I² + Q²) and phase tan⁻¹(Q/I).
 The NTSC composite signal is the sum of the luminance signal (Y) and the chroma signal (C): composite = Y + C.
 Decoding the composite signal at the receiver means separating Y and C:
• A low-pass filter can be used to extract the Y signal.
• To extract the I signal, multiply C by 2·cos(Fsc·t):

C·2·cos(Fsc·t) = I + I·cos(2·Fsc·t) + Q·sin(2·Fsc·t)

• Then apply a low-pass filter to obtain I, discarding the two higher-frequency 2·Fsc terms.
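A toy numerical sketch of this quadrature encode/decode idea in Python (not broadcast-accurate; the sampling rate and the averaging "filter" are illustrative choices):

import numpy as np

fsc = 3.58e6                          # NTSC color subcarrier (Hz)
fs = 100e6                            # simulation sampling rate (Hz)
t = np.arange(0, 50 / fsc, 1 / fs)    # about 50 subcarrier cycles
I, Q = 0.6, -0.3                      # constant chroma components for the demo

# Quadrature-modulated chroma: C = I cos(wt) + Q sin(wt)
chroma = I * np.cos(2 * np.pi * fsc * t) + Q * np.sin(2 * np.pi * fsc * t)

# Demodulate I: multiply by 2 cos(wt); averaging acts as a crude
# low-pass filter that removes the 2*fsc terms.
recovered_I = np.mean(chroma * 2 * np.cos(2 * np.pi * fsc * t))
print(round(recovered_I, 2))          # ~0.6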
NTSC Video

[Figure: NTSC channel spectrum allocation. Within the 6.0 MHz channel, the picture carrier sits at 1.25 MHz with the Y signal occupying the band above it; the I & Q signals ride on the color subcarrier at 4.83 MHz; the audio subcarrier is at 5.75 MHz.]
PAL Video

 The PAL (Phase Alternating Line) TV standard was originally invented by German scientists. It uses 625 scan lines per frame at 25 fps with a 4:3 aspect ratio.
 PAL uses the YUV color model with an 8 MHz channel, allocating a bandwidth of 5.5 MHz to Y and 1.8 MHz to each of U and V. The color subcarrier frequency is Fsc = 4.43 MHz.
 The chroma signals have alternating signs (+U and −U) in successive scan lines, hence the name Phase Alternating Line.
 The signals in consecutive lines are averaged so as to cancel the chroma signal when separating Y and C.
 PAL uses a comb filter at the receiver.
SECAM Video

 SECAM (Système Électronique Couleur Avec Mémoire) was invented in France for TV broadcast. It uses 625 scan lines per frame at 25 fps with a 4:3 aspect ratio and interlaced fields.
 SECAM and PAL are similar, differing slightly in their color coding scheme: in SECAM, the U and V signals are modulated using separate color subcarriers at 4.25 MHz and 4.41 MHz respectively. They are sent on alternate lines; that is, only one of the U or V signals is sent on each scan line.
Unit-2

Types of Video Signals

 Component video
 Composite video
 S-Video
Component Video

 Higher-end video systems make use of three separate video signals for the red, green, and blue image planes. Each color channel is sent as a separate video signal.
 Most computer systems use component video, with separate signals for the R, G, and B components.
 For any color separation scheme, component video gives the best color reproduction, since there is no crosstalk between the three channels.
 Component video requires more bandwidth and good synchronization of the three components.
Composite Video – 1 Signal

 Color (chrominance) and intensity (luminance) signals are mixed into a single carrier wave.
 Chrominance is a composition of two color components (I and Q, or U and V).
 In NTSC TV, for example, I and Q are combined into a chroma signal, and a color subcarrier is then employed to put the chroma signal at the high-frequency end of the channel shared with the luminance signal.
 The chrominance and luminance components can be separated at the receiver end, and then the two color components can be further recovered.
Composite Video

 When connecting to TVs or VCRs, composite video uses only one wire; the video color signals are mixed, not sent separately. The audio and sync signals are additions to this one signal.
 Since color and intensity are wrapped into the same signal, some interference between the luminance and chrominance signals is inevitable.
S-Video Signal

 As a compromise, S-Video (separated video, or super-video) uses two wires: one for luminance and another for a composite chrominance signal.
 As a result, there is less crosstalk between the color information and the crucial gray-scale information.
 The reason for placing luminance into its own part of the signal is that black-and-white information is most crucial for visual perception.
Structure of MIDI Messages
 MIDI messages are mainly classified into two types:
1) Channel messages: voice and mode messages
2) System messages: common, real-time, and exclusive messages

Channel Messages:
A channel message contains up to 3 bytes; the first byte is the status byte. There are two kinds: voice messages and mode messages.
1) Voice messages: used to control a voice; they send note on/off and key pressure, and specify effects such as sustain, vibrato, tremolo, and the pitch wheel. In the table below, n is the channel number (the low nibble of the status byte).
Voice message            Status byte    Data byte 1                   Data byte 2
Note on/off              &H8n / &H9n    Key number                    Note on/off velocity
Key/Channel pressure     &HAn / &HDn    Key number / Pressure         Amount / None
Control/Program change   &HBn / &HCn    Controller / Program number   Controller value / None
Pitch bend               &HEn           LSB                           MSB
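For concreteness, a sketch of how one of these three-byte voice messages packs into raw bytes (Python; the helper name is illustrative):

def note_on(channel, key, velocity):
    # Build a 3-byte MIDI Note On message: status &H9n, key number, velocity.
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    status = 0x90 | channel           # high nibble = opcode 9, low nibble = channel n
    return bytes([status, key, velocity])

print(note_on(0, 60, 100).hex())      # '903c64': middle C on channel 1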
Structure of MIDI Messages
2) Mode messages: they form a special case of the control change message, so all mode messages have opcode B (status byte &HBn). Mode messages determine how an instrument processes MIDI voice messages.
• Poly means a device will play back several notes at once if requested to do so. The default mode is POLY ON.
• Omni means that a device responds to messages from all channels. The default is OMNI OFF: pay attention to your own messages only.
1st data byte   Description             Meaning of 2nd data byte
&H79            Reset all controllers   None; set to 0
&H7A            Local control           0 = off, 127 = on
&H7B            All notes off           None; set to 0
&H7C            Omni mode off           None; set to 0
&H7D            Omni mode on            None; set to 0
&H7E            Mono on (Poly off)      Controller number
&H7F            Mono off (Poly on)      None; set to 0


Structure of MIDI Messages
System Messages: they have no channel number and are meant for commands that are not channel-specific, such as timing signals for synchronization and positioning information in a prerecorded MIDI sequence.
1) Common messages: relate to timing or positioning; position is measured in beats.
2) Real-time messages: related to synchronization.
3) Exclusive messages: included so that manufacturers can extend the MIDI standard; after an initial code, they can insert a stream of any specific messages that apply to their own products.
Common Messages

Status byte   System common message   Number of data bytes
&HF1          MIDI timing code        1
&HF2          Song position pointer   2
&HF3          Song select             1
&HF6          Tune request            None
&HF7          EOX (terminator)        None

Real-time Messages


Status byte   System real-time message
&HF8          Timing clock
&HFA          Start sequence
&HFB          Continue sequence
&HFC          Stop sequence
&HFE          Active sensing
&HFF          System reset


Methods of PCM
Pulse Code Modulation
 The basic techniques for creating digital signals from
analog signals are sampling and quantization.
 Quantization consists of selecting breakpoints
(boundary levels) in magnitude, and then re-mapping
any value within an interval to one of the
representative output levels.
 The set of interval boundaries are called decision
boundaries, and the representative values are called
reconstruction levels.
 The boundaries for quantizer input intervals that will
all be mapped into the same output level form a coder
mapping.
Sampling and Quantization.
Pulse Code Modulation
 The representative values that are the output values from a quantizer are called the decoder mapping.
 Finally, we may wish to compress the data by assigning a bit stream that uses fewer bits for the most prevalent signal values.
 Every compression scheme has three stages:
1. Transformation. The input data is transformed to a new representation that is easier or more efficient to compress.
2. Loss. We may introduce loss of information. Quantization is the main lossy step: we use a limited number of reconstruction levels, fewer than in the original signal.
3. Coding. Assign a codeword (thus forming a binary bit stream) to each output level or symbol. This could be a fixed-length code or a variable-length code such as Huffman coding.
 For audio signals, we first consider PCM for
digitization.
 This leads to Lossless Predictive Coding as well as the
DPCM scheme; both methods use differential coding.
 As well, we look at the adaptive version, ADPCM,
which can provide better compression.
PCM in Speech Compression
 Assuming a bandwidth for speech from about 50 Hz to about 10 kHz, the Nyquist rate would dictate a sampling rate of 20 kHz.
(a) Using uniform quantization, the minimum sample size we could get away with would likely be about 12 bits. Hence for mono speech transmission the bit-rate would be 240 kbps.
PCM in Speech Compression

(b) With companding, we can reduce the sample size down to about 8 bits with the same perceived level of quality, and thus reduce the bit-rate to 160 kbps.
(c) However, the standard approach to telephony in fact assumes that the highest-frequency audio signal we want to reproduce is only about 4 kHz. Therefore the sampling rate is only 8 kHz, and the companded bit-rate thus reduces to 64 kbps.
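A quick sanity check of these bit-rates, together with the µ-law compressor curve used in North American companded telephony (a sketch; µ = 255 is the standard value):

import math

def mu_law_compress(x, mu=255.0):
    # Map a sample in [-1, 1] through the mu-law curve before quantizing.
    return math.copysign(math.log(1 + mu * abs(x)) / math.log(1 + mu), x)

print(20_000 * 12)   # 240000 bps: 20 kHz sampling, 12-bit uniform samples
print(20_000 * 8)    # 160000 bps: 8-bit companded samples
print(8_000 * 8)     #  64000 bps: telephony, 8 kHz sampling, 8-bit companded

print(round(mu_law_compress(0.1), 3))   # ~0.591: small amplitudes are boosted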
Pulse Code Modulation (PCM). (a) Original analog signal
and its corresponding PCM signals. (b) Decoded staircase signal.
(c) Reconstructed signal after low-pass filtering.
PCM signal encoding & decoding
Encoder: band-limiting filter → µ-law/A-law compressor → linear PCM.
Decoder: D/A converter → µ-law/A-law expander → low-pass filter.

 Filters are used to remove noise or unwanted frequencies in the signal.
 The compressor and expander are used to change the amplitudes of signals.
 PCM is used for analog-to-digital conversion; it can be reversed by a digital-to-analog converter.
Differential coding of audio

 Audio is often stored not in simple PCM but in a form that exploits differences. In general, differences are small numbers that require fewer bits to store.
 An advantage of forming differences is that the histogram of the difference signal is usually peaked, with its maximum around zero.
1) Lossless Predictive Coding:
We simply transmit differences: we predict the next sample as being equal to the current sample, and send not the sample itself but the error involved in making this assumption. The error is just the difference between the previous and the next value.
Differential coding of audio

 The differences can be as large as −255 to 255. The prediction and error are calculated as:

f̂_n = f_{n−1},   e_n = f_n − f̂_n
Differential coding of audio

 We want our prediction f̂_n to be as close as possible to the actual signal f_n. Some function of a few of the previous values f_{n−1}, f_{n−2}, f_{n−3}, etc. can produce a better prediction. A linear predictor of order 2 to 4 is:

f̂_n = Σ (k = 1 to 2..4)  a_{n−k} · f_{n−k}

Such a predictor can include a truncating or rounding operation so that the result is an integer value.
 Values with exceptionally large differences can be handled by introducing Shift Up (SU) and Shift Down (SD) codes.
 A simple predictor is:  f̂_n = ⌊(f_{n−1} + f_{n−2}) / 2⌋
 Suppose samples are in the range 0 to 255, so differences lie in the range −255 to 255.
Differential coding of audio

 Example: consider five signal values f1 = 21, f2 = 22, f3 = 27, f4 = 25, f5 = 22, and assume f̂_1 = f1 = 21, so e1 = 0. Then:

f̂_2 = 21,                  e2 = 22 − 21 = 1
f̂_3 = ⌊(22 + 21)/2⌋ = 21,  e3 = 27 − 21 = 6
f̂_4 = ⌊(27 + 22)/2⌋ = 24,  e4 = 25 − 24 = 1
f̂_5 = ⌊(25 + 27)/2⌋ = 26,  e5 = 22 − 26 = −4

When we decode these error values we get back exactly the same input: the scheme is lossless.
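A compact Python sketch of this lossless scheme (the function names are illustrative; encoder and decoder share the same predictor, so decoding is exact):

def predict(prev, prev2):
    # Simple predictor: truncated mean of the two previous samples.
    return (prev + prev2) // 2

def encode(samples):
    errors, recon = [0], [samples[0]]          # e1 = 0; f1 is sent as-is
    for n in range(1, len(samples)):
        fhat = recon[0] if n == 1 else predict(recon[n-1], recon[n-2])
        errors.append(samples[n] - fhat)
        recon.append(fhat + errors[-1])
    return samples[0], errors

def decode(first, errors):
    recon = [first]
    for n in range(1, len(errors)):
        fhat = recon[0] if n == 1 else predict(recon[n-1], recon[n-2])
        recon.append(fhat + errors[n])
    return recon

first, errs = encode([21, 22, 27, 25, 22])
print(errs)                   # [0, 1, 6, 1, -4], matching the worked example
print(decode(first, errs))    # [21, 22, 27, 25, 22]: lossless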
2) Differential PCM (DPCM):
DPCM is exactly the same as lossless predictive coding, except that it incorporates a quantizer step: after the error e_n is calculated from the f_n values, a quantization step produces the quantized error ẽ_n.
Differential coding of audio

 The DPCM steps are:

f̂_n = function_of(f̃_{n−1}, f̃_{n−2}, f̃_{n−3}, …)
e_n = f_n − f̂_n
ẽ_n = Q[e_n]
Transmit the codeword for ẽ_n; reconstruct with f̃_n = f̂_n + ẽ_n

 For every block of signal starting at time i, we could take a block of N values f_n and try to choose the quantizer so as to minimize the quantization error:

min Σ (n = i to i+N−1)  (f_n − f̃_n)²

 Signal differences approximately follow a Laplacian probability distribution function.
 The box labeled "symbol coder" in the block diagram simply means a Huffman coder.
 Example: f1 = 130, f2 = 150, f3 = 140, f4 = 200, f5 = 230; assume f̂_1 = f1 = 130, so e1 = 0. The predictions come out as:

f̂ = 130, 130, 142, 144, 167
Differential coding of audio

 Using a uniform quantizer with steps of width 16:

e_n in range      Quantized value ẽ_n
−255 to −240      −248
−239 to −224      −232
…                 …
−15 to 0          −8
1 to 16           8
17 to 32          24
…                 …
241 to 255        248

 For the example this gives ẽ = 0, 24, −8, 56, 56 and f̃ = 130, 154, 134, 200, 223.
 The input (130, 150, 140, 200, 230) and output (130, 154, 134, 200, 223) values are not the same: the scheme is lossy.
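The same numbers fall out of a short DPCM sketch in Python (the quantizer formula below reproduces the table above; the first sample is sent as-is):

def quantize(e):
    # Uniform 16-level quantizer over [-255, 255] with midpoint reconstruction.
    return 16 * ((255 + e) // 16) - 256 + 8

def dpcm(samples):
    recon = [samples[0]]                      # f~_1 = f_1, sent directly
    for n in range(1, len(samples)):
        prev2 = recon[n-2] if n >= 2 else recon[0]
        fhat = (recon[n-1] + prev2) // 2      # predictor on reconstructed values
        recon.append(fhat + quantize(samples[n] - fhat))
    return recon

print(dpcm([130, 150, 140, 200, 230]))
# [130, 154, 134, 200, 223]: lossy, since the output differs from the input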


Differential coding of audio

3) Delta Modulation (DM):
DM is a much-simplified version of DPCM, often used as a quick analog-to-digital converter. In uniform-delta DM, the idea is to use only a single quantized error value, either positive or negative: a 1-bit coder,

ẽ_n = +k if e_n > 0 (where k is a constant); ẽ_n = −k otherwise

 Example (with k = 4): f1 = 10, f2 = 11, f3 = 13, f4 = 15. Then:

f̂_n = 10, 14, 10
e_n  = 1, −1, 5
ẽ_n  = 4, −4, 4
f̃_n  = 14, 10, 14
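A one-bit DM coder in the same style (a sketch with k = 4 and predictor f̂_n = f̃_{n−1}, as in the example):

def delta_modulate(samples, k=4):
    recon = [samples[0]]                  # start from the first sample
    for f in samples[1:]:
        e = f - recon[-1]                 # prediction is the previous reconstruction
        recon.append(recon[-1] + (k if e > 0 else -k))
    return recon[1:]

print(delta_modulate([10, 11, 13, 15]))   # [14, 10, 14], as in the example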
Differential coding of audio

4) Adaptive DPCM (ADPCM):
ADPCM takes the idea of adapting the coder to suit the input much further. It uses an adaptively modified quantizer, changing the step size as well as the decision boundaries in a non-uniform quantizer.
 Making the predictor coefficients adaptive as well is called Adaptive Predictive Coding (APC). There are two types of adaptive predictors: forward and backward. The number of previous values used is called the order M of the predictor:

f̂_n = Σ (i = 1 to M)  a_i · f_{n−i}
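To illustrate only the adaptive-quantizer idea (a toy sketch, not any particular ADPCM standard), here is a delta coder whose step k widens on runs of same-sign errors and narrows otherwise:

def adaptive_dm(samples, k=4):
    recon, last_sign = [samples[0]], 0
    for f in samples[1:]:
        sign = 1 if f > recon[-1] else -1
        # Adapt the step size to the recent behavior of the error signal.
        k = min(64, k * 2) if sign == last_sign else max(1, k // 2)
        recon.append(recon[-1] + sign * k)
        last_sign = sign
    return recon[1:]

print(adaptive_dm([10, 11, 13, 40, 80, 85]))   # tracks the ramp faster than a fixed k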
Unit-3
OOP Key Points
2. Object-Oriented ActionScript
2.2.1 Class Syntax

2.2.2 Object Creation


Objects are created (instantiated) with the new operator, as in:
new ClassName( )
where ClassName is the name of the class from which the object will be created. For example:

new SpaceShip( )
var ship:SpaceShip = new SpaceShip( );

2.2.3 Object Usage


To invoke a method, we use the dot operator (i.e., a period) and the
function call operator (i.e., parentheses). For example:
ship.fireMissile( );
To set a property, we use the dot operator and an equals sign. For
example:
ship.speed = 120;
To retrieve a property's value, we use the dot operator on its own. For
example: trace(ship.speed);

2.2.4 Encapsulation
Objects are said to encapsulate their property values and method source
code from the rest of the program.
Encapsulation is an important aspect of object-oriented design because it
allows different programmers to work on different classes.
2. Object-Oriented ActionScript
2.2.5 Datatypes
– Each class in an object-oriented program can be thought of as defining a unique
kind of data, which is formally represented as a datatype in the program.
– A class effectively defines a custom datatype.
public var speed:Number;
2.2.6 Inheritance
When developing an object-oriented application, we can use inheritance
to allow one class to adopt the method and property definitions of
another.
Using inheritance, we can structure an application hierarchically so that
many classes can reuse the features of a single class.
2.2.7 Packages
In a large application, we can create packages to contain groups of
classes. A package lets us organize classes into logical groups and
prevents naming conflicts between classes.
This is particularly useful when components and third-party class libraries
are involved.
2.2.8 Compilation
When an OOP application is exported as a Flash movie (i.e., a .swf file),
each class is compiled; that is, the compiler attempts to convert each
class from source code to bytecode—instructions that the Flash Player
can understand and execute.
If a class contains errors, compilation fails and the Flash compiler
displays the errors in the Output panel in the Flash authoring tool.
2. Object-Oriented ActionScript
2.2.9 Starting an Object-Oriented Application
An object-oriented application is made up of classes and objects.
Every Flash application, no matter how many classes or
external assets it contains, starts life as a single .swf
file loaded into the Flash Player.
When the Flash Player loads a new .swf file, it executes the
actions on frame 1 and then displays the contents of
frame 1.
– Create one or more classes in .as files.
– Create a .fla file.
– On frame 1 of the .fla file, add code that
creates an object of a class.
– Optionally invoke a method on the object to
start the application.
– Export a .swf file from the .fla file.
– Load the .swf file into the Flash Player.
Different classes
TimeTracer Class

class TimeTracer
{
  private var timerInterval:Number;

  // Starts a once-per-second display of the current time.
  public function startTimeDisplay ( ):Void
  {
    stopTimeDisplay( );   // clear any previous timer first
    var begunAt:String = new Date( ).toString( );
    timerInterval = setInterval(displayTime, 1000);

    function displayTime ( ):Void
    {
      trace("Time now: " + new Date( ).toString( ) + ". "
            + "Timer started at: " + begunAt);
    }
  }

  // Stops the periodic display.
  public function stopTimeDisplay ( ):Void
  {
    clearInterval(timerInterval);
  }
}
A method that accepts an unknown number of arguments

public function sendMessage ( ):Void
{
  var message:String = "<MESSAGE>";
  // ActionScript exposes all passed parameters via the built-in arguments array.
  for (var i:Number = 0; i < arguments.length; i++)
  {
    message += "<ARG>" + arguments[i] + "</ARG>";
  }
  message += "</MESSAGE>";
  trace("message sent: \n" + message);
  socket.send(message);
}
Complete Box Class
class Box
{
  private var width:Number;
  private var height:Number;
  private var container_mc:MovieClip;

  public function Box (w:Number, h:Number, x:Number, y:Number,
                       target:MovieClip, depth:Number)
  {
    container_mc = target.createEmptyMovieClip("boxcontainer" + depth, depth);
    setWidth(w); setHeight(h); setX(x); setY(y);
  }

  public function getWidth ( ):Number { return width; }
  public function setWidth (w:Number):Void { width = w; draw( ); }
  public function getHeight ( ):Number { return height; }
  public function setHeight (h:Number):Void { height = h; draw( ); }
  public function getX ( ):Number { return container_mc._x; }
  public function setX (x:Number):Void { container_mc._x = x; }
  public function getY ( ):Number { return container_mc._y; }
  public function setY (y:Number):Void { container_mc._y = y; }

  // Redraws the box outline and fill at the current size and position.
  public function draw ( ):Void
  {
    container_mc.clear( );
    container_mc.lineStyle(1, 0x000000);
    container_mc.moveTo(0, 0);
    container_mc.beginFill(0xFFFFFF, 100);
    container_mc.lineTo(width, 0);
    container_mc.lineTo(width, height);
    container_mc.lineTo(0, height);
    container_mc.lineTo(0, 0);
    container_mc.endFill( );
  }
}
Simulation of multiple constructors
class Box
{ public var width:Number; public var height:Number;
public function Box (a1:Object, a2:Object)
{ if (arguments.length == 0) { boxNoArgs( ); }
else if (typeof a1 == "string") { boxString(a1); }
else if (typeof a1 == "number" && typeof a2 == "number")
{ boxNumberNumber(a1, a2); }
else { trace("Unexpected number of arguments passed to Box constructor."); }
}
private function boxNoArgs ( ):Void
{ if (arguments.caller != Box) { return; }
width = 1; height = 1;
}
private function boxString (size):Void
{ if (arguments.caller != Box) { return; }
if (size == "large") { width = 100; height = 100; }
else if (size == "small") { width = 10; height = 10; }
else { trace("Invalid box size specified"); }
}
private function boxNumberNumber (w, h):Void
{ if (arguments.caller != Box) { return; }
width = w; height = h;
}
}
The ImageViewer and SpaceShip classes are further examples; they are covered in the notes.
