
Lecture 19

Color Image Processing

Preview
Why use color in image processing?
Color is a powerful descriptor
Object identification and extraction
e.g., Face detection using skin colors
Humans can discern thousands of color shades and intensities
cf. humans can discern only about two dozen shades of gray

Preview
Two categories of color image processing
Full-color processing
Images are acquired with a full-color sensor or equipment

Pseudo-color processing
Used in the past, when color sensors and processing hardware were not available
Colors are assigned to ranges of monochrome intensities

Color fundamentals
Physical phenomenon
The physical nature of color has been known since 1666 (Isaac Newton)
Chromatic light spans the electromagnetic (EM) spectrum from about 400 to 700 nm

Physiopsychological phenomenon
How does the human brain perceive and interpret color?

Color fundamentals
The color that humans perceive in an object is determined by the light reflected from the object

[Figure: light from an illumination source reflects off the scene and reaches the eye]

Physical quantities that describe a chromatic light source
Radiance: total amount of energy that flows from the light source, measured in watts (W)
Luminance: amount of energy an observer perceives from a light source, measured in lumens (lm)
Far infrared light: high radiance, but 0 luminance
Brightness: a subjective descriptor that is hard to measure; similar to the achromatic notion of intensity

How do human eyes sense light?

The 6-7 million cones in the eye are the color sensors

3 principal sensing categories in the eye:
red light ~65%, green light ~33%, blue light ~2%

Primary and secondary colors


In 1931, the CIE (International Commission on Illumination) defined specific wavelength values for the primary colors:
B = 435.8 nm, G = 546.1 nm, R = 700 nm
However, no single wavelength alone may be called red, green, or blue

Secondary colors: G + B = Cyan, R + G = Yellow, R + B = Magenta

Application of additive nature of light colors


Color TV


CIE XYZ model


RGB -> CIE XYZ model:

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix} 0.431 & 0.342 & 0.178 \\ 0.222 & 0.707 & 0.071 \\ 0.020 & 0.130 & 0.939 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}

Normalized tristimulus values:

x = \frac{X}{X + Y + Z}, \quad y = \frac{Y}{X + Y + Z}, \quad z = \frac{Z}{X + Y + Z}

=> x + y + z = 1. Thus x and y (the chromaticity coordinates) are enough to describe all colors
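
As a rough illustration, a minimal NumPy sketch of the RGB -> XYZ conversion and the chromaticity coordinates above, assuming linear RGB values in [0, 1]:

    import numpy as np

    # RGB -> CIE XYZ matrix taken from the slide (assumes linear RGB in [0, 1])
    RGB2XYZ = np.array([[0.431, 0.342, 0.178],
                        [0.222, 0.707, 0.071],
                        [0.020, 0.130, 0.939]])

    def rgb_to_chromaticity(rgb):
        """Return the XYZ tristimulus values and chromaticity coordinates (x, y)."""
        xyz = RGB2XYZ @ np.asarray(rgb, dtype=float)
        total = xyz.sum()
        x, y = xyz[0] / total, xyz[1] / total   # z = 1 - x - y
        return xyz, (x, y)

    print(rgb_to_chromaticity([1.0, 0.0, 0.0]))   # pure red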

By additivity of colors:
Any color inside the
triangle can be produced
by combinations of the
three initial colors

[Figure: CIE chromaticity diagram showing the RGB gamut of monitors and the color gamut of printers]

Color models
Color model, color space, color system
Specify colors in a standard way
A coordinate system in which each color is represented by a single point

RGB model
CMY model
CMYK model
HSI model

RGB, CMY, and CMYK models are suitable for hardware or applications; the HSI model matches the way humans describe color

RGB color model

Pixel depth
Pixel depth: the number of bits used to
represent each pixel in RGB space
Full-color image: 24-bit RGB color image
(R, G, B) = (8 bits, 8 bits, 8 bits)


Safe RGB colors


A subset of colors is enough for some applications
Safe RGB colors (safe Web colors, safe browser colors)
6 values per RGB channel: 6^3 = 216 colors
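
A small Python sketch that enumerates this palette, assuming the standard six web-safe channel values 0, 51, 102, 153, 204, 255:

    import itertools

    # The 216 safe RGB (web-safe) colors: each channel restricted to six values
    LEVELS = [0, 51, 102, 153, 204, 255]

    safe_colors = list(itertools.product(LEVELS, repeat=3))
    print(len(safe_colors))     # 216
    print(safe_colors[:3])      # [(0, 0, 0), (0, 0, 51), (0, 0, 102)]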

[Figures: the 216 safe RGB colors; the full RGB color cube vs. the safe-color cube]

RGB color model

CMY model (+Black = CMYK)


CMY: secondary colors of light, or primary
colors of pigments
Used to generate hardcopy output

\begin{bmatrix} C \\ M \\ Y \end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} -
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
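
A minimal sketch of this conversion, plus one common convention for extracting the black (K) component (the CMYK rule below is an assumption, not taken from the slide):

    import numpy as np

    def rgb_to_cmy(rgb):
        """CMY from RGB, with all channels normalized to [0, 1]."""
        return 1.0 - np.asarray(rgb, dtype=float)

    def cmy_to_cmyk(cmy):
        """One common CMYK convention: pull out K = min(C, M, Y)."""
        c, m, y = cmy
        k = min(c, m, y)
        if k == 1.0:                       # pure black
            return 0.0, 0.0, 0.0, 1.0
        return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

    print(rgb_to_cmy([1.0, 0.0, 0.0]))                 # red -> [0, 1, 1]
    print(cmy_to_cmyk(rgb_to_cmy([0.5, 0.5, 0.5])))    # gray -> (0, 0, 0, 0.5)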

HSI color model


Would you describe a color by its R, G, B components?
Humans describe a color by its hue, saturation, and brightness
Hue: color attribute
Saturation: purity of color (white->0, primary
color->1)
Brightness: achromatic notion of intensity

HSI color model


These spaces use a cylindrical (3D-polar) coordinate system to
encode the following three psycho-visual coordinates:
Hue (dominant colour seen)
Wavelength of the pure colour observed in the signal.
Distinguishes red, yellow, green, etc.
More than 400 hues can be seen by the human eye.

Saturation (degree of dilution)


Inverse of the quantity of white present in the signal. A pure colour has 100% saturation; white and grey have 0% saturation.
Distinguishes red from pink, marine blue from royal blue, etc.
About 20 saturation levels are visible per hue.

Brightness
Amount of light emitted.
Distinguishes the grey levels.
The human eye perceives about 100 levels.

HSI color model


RGB -> HSI model

[Figure: the HSI model derived from the RGB cube. Colors on a triangle perpendicular to the intensity line have the same hue; saturation grows with distance from the intensity line]

HSI model: hue and saturation

RGB to HSI:

H = \begin{cases} \theta, & B \le G \\ 360^{\circ} - \theta, & B > G \end{cases}
\qquad
\theta = \cos^{-1}\left\{ \frac{\frac{1}{2}\left[(R - G) + (R - B)\right]}{\left[(R - G)^2 + (R - B)(G - B)\right]^{1/2}} \right\}

S = 1 - \frac{3}{R + G + B}\,\min(R, G, B)

I = \frac{R + G + B}{3}
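
A minimal NumPy sketch of these formulas for a single pixel, assuming R, G, B are normalized to [0, 1] and H is returned in degrees:

    import numpy as np

    def rgb_to_hsi(r, g, b, eps=1e-10):
        """RGB (each in [0, 1]) to HSI, following the formulas above."""
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        h = theta if b <= g else 360.0 - theta
        s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)
        i = (r + g + b) / 3.0
        return h, s, i

    print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> H = 0, S = 1, I = 1/3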

HSI to RGB

RG sector (0° ≤ H < 120°):
B = I(1 - S)
R = I\left[1 + \frac{S\cos H}{\cos(60^{\circ} - H)}\right]
G = 3I - (R + B)

GB sector (120° ≤ H < 240°): H = H - 120°
R = I(1 - S)
G = I\left[1 + \frac{S\cos H}{\cos(60^{\circ} - H)}\right]
B = 3I - (R + G)

BR sector (240° ≤ H < 360°): H = H - 240°
G = I(1 - S)
B = I\left[1 + \frac{S\cos H}{\cos(60^{\circ} - H)}\right]
R = 3I - (G + B)
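
A sketch of the inverse conversion under the same assumptions (H in degrees, S and I in [0, 1]):

    import numpy as np

    def hsi_to_rgb(h, s, i):
        """HSI to RGB, following the sector formulas above."""
        h = h % 360.0
        if h < 120.0:                      # RG sector
            b = i * (1 - s)
            r = i * (1 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))
            g = 3 * i - (r + b)
        elif h < 240.0:                    # GB sector
            h -= 120.0
            r = i * (1 - s)
            g = i * (1 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))
            b = 3 * i - (r + g)
        else:                              # BR sector
            h -= 240.0
            g = i * (1 - s)
            b = i * (1 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))
            r = 3 * i - (g + b)
        return r, g, b

    print(hsi_to_rgb(0.0, 1.0, 1.0 / 3.0))   # recovers pure red: (1, 0, 0)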

HSI component images

[Figure: an RGB image together with its hue, saturation, and intensity component images]

Example 1

[Figure: a color image and its hue, saturation, and luminance components]

Example 2

[Figure: a second color image and its hue, saturation, and luminance components]

Color spaces

RGB (CIE), RnGnBn (TV - National Television System Committee, NTSC)


XYZ (CIE)
UVW (CIE UCS), U*V*W* (UCS modified by the CIE)
YUV, YIQ, YCbCr
YDbDr
DSH, HSV, HLS, IHS
Munsell colour space (cylindrical representation)
CIELuv
CIELab
SMPTE-C RGB
YES (Xerox)
Kodak Photo CD, YCC, YPbPr, ...

Yet there are many such spaces (HSV, IHS, triangle, HSI, HLS) described in books. How does one choose which one to use?

Lecture 20
Color Image Processing

Pseudo-color image processing

Assign colors to gray values based on a specified criterion
For human visualization and interpretation of gray-scale events
Intensity slicing
Gray level to color transformations

Intensity slicing
3-D view of an intensity image
[Figure: a slicing plane parallel to the image plane cuts the intensity surface; pixels on one side of the plane are assigned Color 1, pixels on the other side Color 2]

Intensity slicing
Alternative representation of intensity slicing

Application 1

X-ray image of a weld

Intensity slicing
More slicing planes, more colors

Application 2

Radiation test pattern

8 color regions

* See the gradual gray-level changes
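
As a rough illustration of intensity slicing, a minimal NumPy sketch for an 8-bit grayscale image (the levels and palette below are illustrative, not the ones used on the slides):

    import numpy as np

    def intensity_slice(gray, levels, palette):
        """Pseudo-color a grayscale image: pixels falling between successive
        slicing levels are assigned the corresponding RGB color."""
        out = np.zeros(gray.shape + (3,), dtype=np.uint8)
        idx = np.digitize(gray, levels)        # which slice each pixel falls into
        for k, color in enumerate(palette):
            out[idx == k] = color
        return out

    levels  = [64, 128, 192]                   # 3 slicing planes -> 4 regions
    palette = [(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]
    ramp = np.tile(np.arange(256, dtype=np.uint8), (32, 1))   # test gradient
    colored = intensity_slice(ramp, levels, palette)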

Gray level to color transformation

Application 1

Combine several monochrome images


Example: multi-spectral images

[Figures: rainfall statistics displayed in pseudo-color; Washington D.C. multispectral bands combined as R+G+B and as near-infrared+G+B, where the near-infrared band is sensitive to biomass]
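
A small sketch of such a composite, assuming co-registered monochrome bands of the same size (the band arrays here are synthetic placeholders):

    import numpy as np

    def compose_rgb(band_r, band_g, band_b):
        """Stack three co-registered monochrome bands into one RGB color image."""
        return np.dstack([band_r, band_g, band_b]).astype(np.uint8)

    # Synthetic 8-bit bands standing in for real multispectral data
    red, green, blue, nir = (np.random.randint(0, 256, (64, 64)) for _ in range(4))

    natural     = compose_rgb(red, green, blue)   # natural-color composite
    false_color = compose_rgb(nir, green, blue)   # near-infrared shown as red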

Color pixel
A pixel at (x,y) is a vector in the color space
RGB color space

c(x, y) = \begin{bmatrix} R(x, y) \\ G(x, y) \\ B(x, y) \end{bmatrix}

cf. a gray-scale image: f(x, y) = I(x, y)

Example: spatial mask

How do we deal with color vectors?

Per-color-component processing
Process each color component separately
Vector-based processing
Process the color vector of each pixel
When are the two methods equivalent?
The process must be applicable to both scalars and vectors
The operation on each component of a vector must be independent of the other components

Two spatial processing categories


Similar to the gray-scale processing studied before, we have two major categories
Pixel-wise processing
Neighborhood processing

Color transformation
Similar to gray scale transformation
g(x,y)=T[f(x,y)]

Color transformation
s_i = T_i(r_1, r_2, \ldots, r_n), \quad i = 1, 2, \ldots, n

where r_1, \ldots, r_n are the color components of f(x, y) and s_1, \ldots, s_n are the color components of g(x, y)

[Figure: block diagram in which the components f_1, \ldots, f_n of f(x, y) are mapped by the transformations T_1, \ldots, T_n to the components s_1, \ldots, s_n of g(x, y)]

Which color model should be used for a color transformation?

RGB, CMY(K), HSI
Theoretically, any transformation can be performed in any color model
Practically, some operations are better suited to a specific color model

Example: modify the intensity of a color image

Example: g(x, y) = k f(x, y), 0 < k < 1
HSI color space
Intensity only: s_3 = k r_3 (H and S are unchanged)
Note: transforming to HSI requires complex operations
RGB color space
For each R, G, B component: s_i = k r_i
CMY color space
For each C, M, Y component: s_i = k r_i + (1 - k)
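
A short sketch verifying that the RGB and CMY versions of this intensity change agree, assuming channels normalized to [0, 1]:

    import numpy as np

    def darken_rgb(img_rgb, k=0.7):
        """Scale intensity in RGB space: every component is multiplied by k."""
        return k * img_rgb

    def darken_cmy(img_cmy, k=0.7):
        """The equivalent operation in CMY space: s = k*r + (1 - k) per component."""
        return k * img_cmy + (1.0 - k)

    rgb = np.random.rand(4, 4, 3)            # toy image, channels in [0, 1]
    cmy = 1.0 - rgb                          # CMY from RGB
    assert np.allclose(1.0 - darken_cmy(cmy), darken_rgb(rgb))   # same result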

Implementation of color slicing


Recall the pseudo-color intensity slicing

[Figure: intensity slicing along a 1-D intensity axis]

Implementation of color slicing


How do we select a region of colors of interest?
Define a sphere region or a cube region centered at a prototype color

Application
[Figure: color-slicing results using the cube region and the sphere region]
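
A minimal sketch of color slicing with a sphere region, assuming RGB in [0, 1] and mapping out-of-region pixels to a neutral gray (the 0.5 value is an assumption, not from the slides):

    import numpy as np

    def color_slice_sphere(img_rgb, prototype, radius, neutral=0.5):
        """Keep pixels whose color lies inside a sphere of the given radius
        around the prototype color; map all other pixels to neutral gray."""
        dist = np.linalg.norm(img_rgb - np.asarray(prototype), axis=-1)
        out = img_rgb.copy()
        out[dist > radius] = neutral
        return out

    img = np.random.rand(8, 8, 3)
    sliced = color_slice_sphere(img, prototype=(0.8, 0.2, 0.2), radius=0.3)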

Color image smoothing


Neighborhood processing

Color image smoothing: averaging mask

\bar{c}(x, y) = \frac{1}{K} \sum_{(x, y) \in S_{xy}} c(x, y)
= \begin{bmatrix}
\frac{1}{K} \sum_{(x, y) \in S_{xy}} R(x, y) \\
\frac{1}{K} \sum_{(x, y) \in S_{xy}} G(x, y) \\
\frac{1}{K} \sum_{(x, y) \in S_{xy}} B(x, y)
\end{bmatrix}

where S_{xy} is a neighborhood of K pixels centered at (x, y); vector processing and per-component processing give the same result here

Example: 5x5 smoothing mask
[Figures: the original image; smoothing the R, G, B components in the RGB model; smoothing only the I component in the HSI model; and the difference between the two results]
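
A per-component smoothing sketch; the box filter from scipy.ndimage is one convenient choice here (an assumption, any averaging mask works):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def smooth_rgb(img_rgb, size=5):
        """Apply the same size x size averaging mask to each of the R, G, B planes
        (equivalent here to averaging the color vectors)."""
        return uniform_filter(img_rgb, size=(size, size, 1))

    img = np.random.rand(64, 64, 3)
    smoothed = smooth_rgb(img, size=5)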

Example: Image Sharpening

Lighting conditions
The lighting conditions of the scene have a large
effect on the colours recorded.

[Figures: an image of the scene lit by a flash and the same scene lit by a tungsten lamp]

Lighting conditions

[Figure: four images of the same scene acquired under different lighting conditions]

Dealing with Lighting Changes


Knowing just the RGB values is not enough to
know everything about the image.
The R, G and B primaries used by different devices are usually
different.

For scientific work, the camera and lighting should be calibrated.
For multimedia applications, this is more difficult to organise:
Algorithms exist for estimating the illumination colour.
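
One simple illustration of such an estimate is the grey-world assumption (my example; the slides do not name a specific algorithm):

    import numpy as np

    def gray_world_balance(img_rgb):
        """Grey-world white balance: scale each channel so all three channel
        means become equal, cancelling a global colour cast."""
        means = img_rgb.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / means
        return np.clip(img_rgb * gains, 0.0, 1.0)

    img = np.random.rand(32, 32, 3) * np.array([1.0, 0.8, 0.6])   # simulated warm cast
    balanced = gray_world_balance(img)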
