
Digital Image Processing

Topic: Image Enhancement


Speaker: Mrs. S. S. Palsule
Date: 18-01-2012
Venue: CEPT New Bldg.
Course: M. Sc. in Geomatics
Faculty of Geomatics & Space Applications
IMAGE ENHANCEMENT
• Objective: The main objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application.
• These enhancement processes improve the visual interpretability of an image and increase the distinction of features in the scene.
• Image enhancement techniques fall into two broad categories:
• - Spatial domain - Frequency domain
• The spatial domain refers to the image plane itself; the pixels of the image are manipulated directly.
• Frequency-domain techniques are based on the frequency content of the image, which is computed using the Fourier transform of the image.
• There is no general theory of image enhancement.
• The processed image is interpreted visually, and the viewer is the ultimate judge for a specific application.
Techniques for Basic Gray level Transformations

• The study of image enhancement techniques is based on gray-level transformation functions.
• The spatial domain refers to the aggregate of pixels composing an image. Assume:
• f(x, y) is the input image, g(x, y) is the processed output image
• T is an operator on f, defined over some neighborhood of (x, y).
• The simplest forms of T are:
• (a) a neighborhood of size 1 x 1 (a single pixel), which gives a point operation;
• (b) a rectangular/square neighborhood of size n x n or n x m, or a circle of radius r centered at (x, y), which gives a local operation.
• g(x, y) = T[f(x, y)]
• s = T(r)
• where r and s are variables denoting the gray levels of f(x, y) and g(x, y) at any point (x, y), respectively.
Mapping Functions for Enhancement

• This T is the gray-level/intensity mapping function.
• This mapping function (linear or non-linear) could be, for example:
• - a threshold function
• - logarithmic
• - power law
• - histogram manipulation.
• The enhancement produced by a point operation depends only on the gray level at that point.
• The larger-neighborhood principle of local operations is based on the use of masks, which may equivalently be called filters, kernels, templates, etc.
Mathematical Representation of Mapping Function

• The following three basic functions are used for image enhancement:
• 1. Linear functions: This technique is based on the image-negative approach, where intensity levels are reversed.
• s = L - 1 - r, where L is the number of intensity levels (so L - 1 is the maximum intensity).
• 2. Log transformation: This transformation maps a narrow range of low gray levels in the input image into a wider range of output levels.
• s = C log(1 + r), where C is a constant.
• 3. Power-law transformation: This follows the basic form
• s = C r^D
• where C is a constant and the exponent D is the gamma-correction factor.
• D can be varied from D < 1 to D > 1, which accommodates lower- and higher-intensity manipulation respectively.
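These three mapping functions can be sketched in a few lines of NumPy (a minimal illustration assuming an 8-bit image; the function names are ours, not part of the lecture material):

```python
import numpy as np

L = 256  # number of gray levels for an 8-bit image

def negative(r):
    """Image negative: s = L - 1 - r (intensity levels reversed)."""
    return (L - 1) - r

def log_transform(r, c=1.0):
    """Log transform: s = c * log(1 + r); expands the dark range."""
    return c * np.log1p(r)

def power_law(r, c=1.0, gamma=0.5):
    """Power-law (gamma) transform: s = c * r**gamma, with r
    normalized to [0, 1] and rescaled back to [0, L - 1]."""
    return c * (r / (L - 1)) ** gamma * (L - 1)
```

With gamma < 1 the power law brightens dark regions; with gamma > 1 it darkens bright ones, matching the D < 1 and D > 1 cases above.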
Digital enhancement technique
for remote sensing applications
• Contrast manipulation: This technique uses piecewise linear transformation functions, where the end user selects the input for a specific application. The popular techniques are contrast stretching, gray-level thresholding, and gray-level/bit-plane slicing.
• Spatial feature manipulation: This technique uses the Fourier transformation as a powerful tool. A relationship can be established between the frequency components of the Fourier transform and the spatial characteristics of an image. The popular techniques are spatial filtering of low, medium and high frequencies, and edge enhancement.
• Multi-image manipulation: Remotely sensed data are available in various forms: multi-spectral, multi-resolution and temporal. The popular techniques are multi-spectral band ratioing, differencing, principal components, canonical components, vegetation components, intensity-hue-saturation (IHS) color-space transformation, and de-correlation stretching.
Reference file for further discussion
• Slides From 4 to 39
• File name: EDUSAT ENHANCEMENT.pdf
Sample Applications of Contrast Manipulation

• Gray-level thresholding: This technique is used to segment an image into two classes: pixels are set to a fixed low or high value depending on which side of the applied threshold they fall.
• Level slicing: This technique uses an analyst-specified interval of gray values (a slice) of the input image. All gray values within a given interval of the input scene are shown as one gray value in the output image. This results in a contour-map appearance, or a single class with a specified color.
• Contrast stretching: This technique uses a user-defined piecewise linear function to increase the dynamic range of gray levels and obtain better contrast of scene features. The simplest technique is the linear stretch, computed over the desired area using the following equation:
• DN' = (DN - MIN) x 255 / (MAX - MIN)
• where MIN and MAX are the required threshold values. For computational efficiency, the input/output gray levels can be stored in look-up tables (LUTs).
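The linear stretch with a look-up table can be sketched as follows (a minimal sketch assuming an 8-bit single-band image; the function name is ours):

```python
import numpy as np

def linear_stretch(img, low=None, high=None):
    """Linear contrast stretch: DN' = (DN - MIN) * 255 / (MAX - MIN).

    A 256-entry look-up table maps every possible input level once,
    so each pixel is then enhanced by a single table lookup.
    """
    img = np.asarray(img, dtype=np.uint8)
    lo = int(img.min()) if low is None else low
    hi = int(img.max()) if high is None else high
    levels = np.arange(256, dtype=np.float64)
    lut = np.clip((levels - lo) * 255.0 / max(hi - lo, 1), 0, 255)
    return lut.astype(np.uint8)[img]

# Example: a low-contrast image occupying only DN 60..120
img = np.array([[60, 80], [100, 120]], dtype=np.uint8)
stretched = linear_stretch(img)  # now spans the full 0..255 range
```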
Contd.
• Other, non-linear types of contrast enhancement are also used.
• These can be achieved by modifying the histogram of DN values. The commonly used technique is histogram equalization, which means the original histogram is redistributed to produce a uniform population density of pixels (over a set of classes).
• This redistribution reduces the contrast in the higher and lower DN values, while the maximum contrast enhancement occurs in the most populated range of DN values of the image.
• What we have discussed so far applies the same technique throughout the image.
• However, when the scene has significant variation, a more optimal enhancement can be achieved using an adaptive algorithm whose parameters change from pixel to pixel according to the local image contrast.
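A histogram-equalization sketch based on the cumulative distribution of DN values (assuming an 8-bit image; this is the standard formulation, not code from the lecture):

```python
import numpy as np

def equalize(img):
    """Histogram equalization via the cumulative distribution function.

    The CDF redistributes gray levels so that heavily populated DN
    ranges receive the largest share of the output dynamic range.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    denom = max(int(cdf[-1] - cdf_min), 1)
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / denom), 0, 255)
    return lut.astype(np.uint8)[img]
```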
Detectability vs. Information Content

• The enhancement technique used depends on the scene and the image analyst's interest in a specific feature.
• Highlighting one feature may come at the cost of another.
• An enhancement technique only increases the detectability of a feature for visual observation; it does not increase the information content.
Spatial Feature Manipulation
• Spatial Filtering:
• In point-operation enhancement techniques, we have been manipulating the radiance value of each pixel without considering the values of its neighbors.
• It is important to analyze the radiance variation across the image; that is, spatial variation is an important parameter for correct image interpretation.
• Therefore, by manipulating the spatial distribution of radiance values, it should be possible to emphasize or de-emphasize certain features.
• This process is called spatial filtering.
• It can be performed directly on the image data in the spatial domain, or in the frequency domain using the Fourier transformation.
Contd.
• In both cases the frequency content of the image is altered, and the choice depends on ease of implementation and usage.
• The frequency content of the image is analyzed by computing the rate at which radiance values change within the scene.
• If radiance changes abruptly within a relatively small number of pixels, it is high-frequency content, e.g. roads, field boundaries, coastlines.
• Slowly varying radiance values represent low-frequency components, e.g. a single-crop agricultural field or large water bodies.
• In general, the following filtering techniques are applied:
• - Low-pass filter (which suppresses the high-frequency content)
• - High-pass filter (which suppresses the low-frequency content)
• - Band-pass filter (which passes selected frequencies)
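A direct spatial-domain sketch of these filters, using a 3x3 mean kernel as the low-pass filter and its complement as a high-pass kernel (these kernel values are common textbook choices, not taken from the slides):

```python
import numpy as np

def convolve2d(img, kernel):
    """Direct 2-D filtering with edge padding (a sketch, not optimized).

    Note: the kernel is applied without flipping (correlation); for the
    symmetric kernels below this equals true convolution.
    """
    img = np.asarray(img, dtype=np.float64)
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

low_pass = np.full((3, 3), 1.0 / 9.0)   # mean filter: suppresses high frequencies
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float64) / 9.0  # suppresses low frequencies
```

On a uniform region the low-pass filter returns the region unchanged, while the high-pass filter returns zero, as expected of the two frequency responses.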
Contd.
• A useful analogy is to compare the Fourier transform to a glass prism.
• The prism is a physical device that separates light into various color components, each depending on its wavelength (or frequency) content.
• The Fourier transform may be viewed as a "mathematical prism" that separates a function into various components based on frequency content.
• When we consider light, we talk about its spectral or frequency content. Similarly, the Fourier transform characterizes a function by its frequency content.
Reference file for further discussion
• Slides From 40 to 52
• File name: EDUSAT ENHANCEMENT.pdf
Discussion on Edge Enhancement

• A high-pass filter sharpens edges, but it also enhances noise and produces a 'rough' appearance.
• Edge-enhanced images attempt to preserve both local contrast and low-frequency brightness information.
• The kernel size used to produce such an image is chosen based on the roughness of the image.
• All or a fraction of the gray level in each pixel of the original scene is added back to the high-frequency component image.
• The composite image is then contrast stretched. The resulting image contains local contrast enhancement of high-frequency features while preserving the low-frequency brightness information in the scene.
• Directional first differencing is another enhancement technique, in which horizontal, vertical or diagonal pixel differences are computed.
• The differences can be either positive or negative, so a constant mid-scene value is added.
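The add-back scheme and directional first differencing described above might be sketched as follows (the 3x3 local mean used as the low-frequency estimate and the mid-scene constant 127 are our assumptions, not values from the slides):

```python
import numpy as np

def edge_enhance(img, fraction=1.0):
    """Add all or a fraction of the original image back to its
    high-frequency component, then contrast-stretch the composite."""
    img = np.asarray(img, dtype=np.float64)
    # High-frequency component: original minus a 3x3 local mean.
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    composite = (img - local_mean) + fraction * img
    lo, hi = composite.min(), composite.max()
    return (composite - lo) * 255.0 / max(hi - lo, 1e-9)

def first_difference(img, axis=1, offset=127):
    """Directional first differencing; a constant mid-scene value is
    added because the differences can be negative."""
    img = np.asarray(img, dtype=np.float64)
    return np.diff(img, axis=axis) + offset
```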
Image Transformation
Multi Image Manipulation
• The following techniques are used for multi image manipulation:
• - Spectral Ratio of images
• - Principal component Analysis
• - Vegetation components
• - Intensity-Hue-Saturation color Space Transformation

• Reference file for further discussion


• Slides From 53 to 78
• File name: EDUSAT ENHANCEMENT.pdf
Discussion on Results of Ratio images

• Ratio images can be used to generate false-color composites by combining three monochromatic ratio data sets.
• Such composites have the advantage of combining spectral information and presenting the data in color.
• The Optimum Index Factor (OIF) is one criterion to assist in selecting which ratio combination to include in a color composite, such that it either conveys the overall information of the scene or is the best combination for conveying the specific information desired by the image analyst.
• The 'intensity blindness' problem (dissimilar materials with different radiances but similar spectral slopes appear identical) is mitigated by using a hybrid color-ratio composite.
• Ratio images cancel multiplicative effects, not additive effects.
Contd.

• Computed ratios of raw DNs and of radiance values will differ, because the detector response curves affect the information content of the ratio images.
• Mathematical ratios can blow up due to near-zero denominators, or be compressed by division by large numbers, which means it is important to scale the results of the ratio computation.
• A simple algorithm is:
• DN' = R arctan(DN_x / DN_y)
• where DN' is the scaled DN value, R is a scale factor, and the arctan term is an angle in radians; the blow-up and compression factors are thereby normalized.
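A sketch of this scaling (using arctan2, which also handles a zero denominator; the default scale factor R, which maps the angle range [0, pi/2] onto [0, 255], is our choice):

```python
import numpy as np

def scaled_ratio(dn_x, dn_y, r_scale=255.0 / (np.pi / 2)):
    """Scaled band ratio: DN' = R * arctan(DN_x / DN_y).

    arctan maps any non-negative ratio into [0, pi/2], so blow-ups from
    near-zero denominators and compression from large denominators are
    both normalized; R then stretches the angle onto the output range.
    """
    dn_x = np.asarray(dn_x, dtype=np.float64)
    dn_y = np.asarray(dn_y, dtype=np.float64)
    angle = np.arctan2(dn_x, dn_y)  # safe even when dn_y == 0
    return r_scale * angle
```

An equal ratio (DN_x = DN_y) maps to mid-scale, while a zero denominator maps to the top of the output range instead of blowing up.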
Discussion on Principal components Analysis

• The relationship used to transform a data value from the original band A-band B coordinate system into its value in the new axis I-axis II system is:
• DN_I = a11 DN_A + a12 DN_B
• DN_II = a21 DN_A + a22 DN_B
• where DN_I and DN_II are the digital numbers in the new PC coordinate system, DN_A and DN_B are the digital numbers in the original coordinate system, and a11, a12, a21, a22 are the transformation coefficients.
• The PC data values are simply linear combinations of the original values.
• The data along the first PC have a greater variance, or dynamic range, than the original data. The data along the second PC have far less variance.
• Successive components are orthogonal to all previous ones, so the data they contain are uncorrelated.
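The two-band transformation can be sketched with an eigen-decomposition of the band covariance matrix (a minimal illustration; the sample band values are made up):

```python
import numpy as np

def principal_components(band_a, band_b):
    """Rotate two-band data onto its principal axes.

    The rows of `coeffs` are the eigenvectors of the band covariance
    matrix, i.e. the a11..a22 transformation coefficients; PC I carries
    the greatest variance, and PC II is orthogonal to it.
    """
    data = np.stack([np.ravel(band_a), np.ravel(band_b)]).astype(np.float64)
    mean = data.mean(axis=1, keepdims=True)
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    coeffs = eigvecs[:, ::-1].T             # rows: PC I, then PC II
    pcs = coeffs @ (data - mean)
    return pcs, coeffs

# Example with two correlated bands
band_a = np.array([1.0, 2.0, 3.0, 4.0])
band_b = np.array([2.1, 3.9, 6.2, 7.8])
pcs, coeffs = principal_components(band_a, band_b)
```

PC I absorbs most of the variance, and the covariance between the two components is essentially zero, matching the orthogonality property stated above.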
Contd.

• PC images can be analyzed as separate black-and-white images, or any three component images can be combined to form a color composite.
• PC data are often treated in place of the original data in the classification process. The number of components used is normally reduced, which makes the classification process more efficient by reducing processing time.
• Principal component enhancement techniques are appropriate where little prior information about a scene is available.
• Canonical component analysis is appropriate when information about particular features of interest is known. CC analysis is carried out by constructing axes I and II so as to maximize the separability of feature classes while minimizing the variance within each class. CC analysis improves classification efficiency and the accuracy of identified features.
Vegetation Components
• Definition
• The reduction of multi-spectral scanner
measurements to a single value for predicting and
assessing vegetative characteristics.
– Examples of such characteristics include plant leaf area and total biomass.

• Components
– Brightness
• Weighted sum of all bands, in the direction of the principal
variation in soil reflectance.
– Greenness
• Contrast between the NIR and visible bands.
VI Transformations
• Basic Vegetation Index
– VI = NIR / R
• The simplest type of vegetation index, VI, is
obtained by dividing the reflectance in the near-
infrared band, NIR, by the reflectance in the red
visible band, R. The value of the ratio increases
with increasing amounts of healthy green
vegetation, because vegetation reflects much more
strongly in the NIR than in the red visible band.
VI Transformations
• Transformed Vegetation Index
– TVI = [(NIR - R)/(NIR + R) + 0.5]^(1/2) x 100
• Green Normalized Difference Vegetation Index
– GNDVI = [(NIR - G)/(NIR + G) + 0.5]^(1/2) x 100
– For high- to mid-level leaf-area indexes
• Enhanced Vegetation Indexes
– To minimize soil-background influences
• Soil-Adjusted Vegetation Index
– SAVI = [(NIR - R)/(NIR + R + L)] x (1 + L)
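The indices above translate directly into code (a minimal sketch; the reflectance inputs are assumed to be in compatible units, and the function names are ours):

```python
import numpy as np

def vi(nir, red):
    """Basic vegetation index: VI = NIR / R."""
    return nir / red

def tvi(nir, red):
    """Transformed Vegetation Index:
    TVI = sqrt((NIR - R)/(NIR + R) + 0.5) * 100."""
    return np.sqrt((nir - red) / (nir + red) + 0.5) * 100.0

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index:
    SAVI = [(NIR - R)/(NIR + R + L)] * (1 + L),
    where L is the soil-adjustment factor (0.5 is a common default)."""
    return (nir - red) / (nir + red + L) * (1.0 + L)
```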
Applications
• Calculating vegetation cover type
– Calibrating TVI values with original data
• Helping crop / precision-farming
management systems
– Calculating irrigation water, fertilizer, etc. requirements
Color Fundamentals
• Color image processing is divided into two major areas:
• - Full color, where images are acquired with a full-color system
• - Pseudo-color, where a color is assigned to a particular monochrome intensity or range of intensities.
• The colors that humans and animals perceive in an object are determined by the nature of the light reflected from the object.
• The characterization of light is central to the science of color.
• Achromatic light (void of color) is black and white (gray levels).
• Chromatic light spans the E.M. spectrum from about 400 to 700 nm.
• Cones in the eye are the sensors responsible for color vision.
• The 6 to 7 million cones in the human eye are divided into three principal sensing categories: roughly 65% of cones are sensitive to red, 33% to green, and 2% to blue.
Contd.

• Owing to these absorption characteristics of the human eye, colors are seen as variable combinations of the primary colors red (R), green (G) and blue (B).
• The primary colors can be added to produce the secondary colors of light: magenta (red plus blue), cyan (green plus blue) and yellow (red plus green).
• Mixing the three primaries, or a secondary with its opposite primary color, in the right intensities produces white light.
• It is important to differentiate between the primary colors of light and the primary colors of pigments.
• The primary colors of light are R, G, B. A primary color of pigments is defined as one that subtracts or absorbs one primary color of light and reflects or transmits the other two. Therefore the primary colors of pigments are magenta, cyan and yellow, and the secondary colors are R, G, B.
• A proper combination of the three pigment primaries produces black.
Brightness (Intensity), Hue, Saturation

• The characteristics generally used to distinguish one color from another are brightness, hue and saturation.
• Brightness is a subjective descriptor that is practically impossible to measure. It embodies the achromatic notion of intensity and is one key factor in describing color sensation.
• Hue is an attribute associated with the dominant wavelength in a mixture of light waves; it represents the dominant color as perceived by an observer.
• Saturation refers to purity, i.e. the amount of white light mixed with a hue. The pure spectrum colors are fully saturated. Colors such as pink (a combination of white and red) and lavender (a combination of white and violet) are less saturated.
• The degree of saturation is inversely proportional to the amount of white light added.
HIS Color Model

• A color model is a specification of a coordinate system and a subspace within that system in which each color is represented by a single point.
• When humans view a color object, they describe it by its hue, saturation and brightness. Intensity (gray level) is the most useful descriptor of a monochromatic image, and it is measurable, whereas brightness is not.
• The HIS color model decouples the intensity component from the color-carrying information (hue, saturation) in a color image.
• The HIS model is an ideal tool for developing image processing algorithms based on color descriptions that are natural and intuitive to humans.
• RGB is ideal for image color generation (color cameras, image display systems), but its use for color description is much more limited. The HIS model therefore extracts intensity from the RGB model and is a practical tool for describing colors for human interpretation.
HIS Color Space Transformation
– A color model is an abstract mathematical model describing the way colors can be represented as tuples of
numbers, typically as three or four values or color components. When this model is associated with a precise
description of how the components are to be interpreted (the viewing conditions), the resulting set of colors is
called a color space. This section describes ways in which human color vision can be modeled.

– RGB Model
HIS Color Space Transformation
• Mathematical model for transformation
HIS Color Space Transformation
– IHS Model
Hexcone Model for Transforming RGB Components to the HIS Model

• The figure on the previous slide represents the hexcone model.
• It involves projecting the RGB color cube onto a plane that is perpendicular to the gray line and tangent to the cube at the corner farthest from the origin.
• The resulting projection is a hexagon. If the plane of projection is moved from white to black along the gray line, successively smaller color sub-cubes are projected and a series of hexagons of decreasing size results. The hexagon at white is the largest, shrinking to a point at black. The series of hexagons so developed defines a solid called the hexcone.
• Within the cone, any arbitrary point at a given intensity can be measured: the hue of the point is its angle from the red axis, and the length of its vector is the saturation. The HIS components are manipulated to enhance features and then transformed back to RGB for display.
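The standard arccos formulation of the RGB-to-HSI conversion can be sketched for a single normalized pixel (scalar inputs in [0, 1]; vectorizing over a whole image is straightforward):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (each in [0, 1]) to HSI.

    Intensity is the mean of R, G, B; saturation measures distance
    from the gray line; hue is the angle (in degrees) around that
    line, measured from the red axis.
    """
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic: hue is undefined, set to 0 by convention
    else:
        h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        if b > g:
            h = 360.0 - h
    return h, s, i
```

Pure red maps to hue 0 with full saturation, and pure blue maps to hue 240, consistent with measuring the hue angle from the red axis.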
Discussion on HIS Color Space Transformation

• IHS provides independent control over the
individual components.
• RGB is very efficient for representation and
processing.
• We can convert from one model to another as
required (processing vs. representation):
– RGB, HIS, CMYK (ideal for printing)
