
DIGITAL IMAGE PROCESSING

IMAGE ENHANCEMENT

Dr V Shreedhara
10th March, 2021
Image Enhancement

• Modification of an image to alter its impact on the viewer. Enhancements
  are used to make imagery easier to interpret and understand visually.

• The process of making an image more interpretable for a particular
  application.

• Useful since many satellite images give inadequate information for
  image interpretation.

• In raw imagery, the useful data often populate only a small portion
  of the available range of digital values.

• Attempted after the image is corrected for distortions.

• May be performed temporarily or permanently.

What Is Image Enhancement?

Image enhancement is the process of making images more useful.

The reasons for doing this include:

– Highlighting interesting detail in images
– Removing noise from images
– Making images more visually appealing

Enhancement Types
• Point Operations
  Modification of the brightness values of each pixel in an image data
  set independently (radiometric enhancement).
  Brings out contrast in the image.

• Local Operations
  Modification of pixel values based on the values of surrounding
  pixels (spatial enhancement).

• Image Transformations
  Enhancing images by transforming the values of each pixel on
  a multiband basis (spectral enhancement).
A Note About Grey Levels
So far, when speaking about image grey level values, we have said
they are in the range [0, 255]
– Where 0 is black and 255 is white
There is no reason why we have to use this range
– The range [0, 255] stems from display technologies
For many of the image processing operations that follow, these
grey levels are assumed.
CONTRAST
• The amount of difference between the average grey level of an
  object and that of its surroundings

• The difference in illumination or grey level values in an image;
  intuitively, how vivid or washed-out an image appears

• The ratio of maximum intensity to minimum intensity:

  C = BVmax / BVmin

  The larger the ratio, the easier it is to interpret the image.

• Reasons for Low Contrast
  • The scene itself has a low contrast ratio
  • Sensitivity of detectors
  • Atmospheric scattering
Why is contrast stretching needed?

[Figure: Band 1, Band 2 and Band 4 images and an FCC (4,2,1) composite]
Contrast Enhancement

• Expands the original input values to make use of the total range
  of sensitivity of the display device.

• The density values in a scene are literally pulled farther apart,
  that is, expanded over a greater range.

• The effect is to increase the visual contrast between two areas of
  different uniform densities.

• This enables the analyst to discriminate easily between areas
  initially having a small difference in density.

• Types
  • Linear - input and output data values follow a linear relationship
  • Non-Linear
Linear Contrast Stretch
• A DN in the low range of the original histogram is
assigned to extreme black, and a value at the high end is
assigned to extreme white.
• The remaining pixel values are distributed linearly
between these two extremes
Linear stretch
Linear Contrast Stretch

[Figure: transfer function Y = aX + b mapping input DN (0-255) on the
x-axis to output grey shade (0-255) on the y-axis; input values between
the scene minimum and maximum DN are stretched to the full output range]
Linear stretch
Computations
Y = aX + b                                    ... (Eq. 1)

• The values of 'a' and 'b' are computed from the equations,
  where X = input pixel value and Y = output pixel value.

• To stretch the data between 0 and 255, Eq. 1 takes the form:

  Y = ((X - Xmin) / (Xmax - Xmin)) * 255
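The min-max stretch above can be sketched in Python with NumPy; the function name and the uint8 output type are illustrative choices, not from the slides:

```python
import numpy as np

def linear_stretch(band):
    """Min-max linear stretch: Y = ((X - Xmin) / (Xmax - Xmin)) * 255."""
    x = band.astype(np.float64)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:                      # flat image: nothing to stretch
        return np.zeros_like(band, dtype=np.uint8)
    y = (x - x_min) / (x_max - x_min) * 255
    return y.astype(np.uint8)

dn = np.array([[30, 60], [90, 120]])
print(linear_stretch(dn))                   # lowest DN -> 0, highest DN -> 255
```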
Linear stretch
Standard Deviation Linear Stretch

• Similar to the minimum-maximum linear contrast stretch, except that
  this method uses specified minimum and maximum values that lie a
  certain number of standard deviations from the mean of the histogram.

• A standard deviation from the mean is often used to push the tails
  of the histogram beyond the original minimum and maximum values.

• In a normal distribution, about 68% of the values are within one
  standard deviation of the mean, and about 95% of the values are
  within two standard deviations of the mean.
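A minimal sketch of the standard deviation stretch; clipping the out-of-range DNs to the new extremes and the `n_std` parameter are assumptions not spelled out in the slides:

```python
import numpy as np

def stddev_stretch(band, n_std=2.0):
    """Linear stretch between mean - n_std*sigma and mean + n_std*sigma."""
    x = band.astype(np.float64)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:                          # flat image: nothing to stretch
        return np.zeros_like(band, dtype=np.uint8)
    lo, hi = mu - n_std * sigma, mu + n_std * sigma
    # DNs beyond the chosen deviation are clipped to the new extremes
    y = (np.clip(x, lo, hi) - lo) / (hi - lo) * 255
    return y.astype(np.uint8)

dn = np.array([10.0, 50.0, 90.0, 130.0, 170.0])
print(stddev_stretch(dn, n_std=1.0))
```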
Non-Linear Contrast Enhancement
• Input and output data values do not follow a linear relationship
• Output is related to input through a transfer function:
  Y = f(X)
• Increases or decreases contrast in different regions of the histogram
Transfer Function Types

Mathematical
• Logarithmic
• Inverse Log
• Exponential
• Square
• Square Root
• Cube
• Cube Root

Statistical
• Gaussian Stretch
• Histogram Equalization

Trigonometrical
• Arc Tangent (tan-1)
Logarithmic Contrast Stretch

• In this process the logarithmic values of the input data are
  linearly stretched to get the desired output values.

• It is a two-step process. In the first step we find the log values
  of the input DN values.

• In the second step the log values are linearly stretched to fill
  the complete range of DNs (0-255).

• The logarithmic stretch has its greatest impact on the brightness
  values found in the darker part of the histogram, i.e. on the low
  DN values.
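The two steps can be sketched as follows; using log(1 + DN) to avoid log(0) is an implementation choice, not from the slides:

```python
import numpy as np

def log_stretch(band):
    """Step 1: take log values; Step 2: linearly stretch them to 0-255."""
    logs = np.log1p(band.astype(np.float64))   # log(1 + DN) avoids log(0)
    lo, hi = logs.min(), logs.max()
    if hi == lo:
        return np.zeros_like(band, dtype=np.uint8)
    return ((logs - lo) / (hi - lo) * 255).astype(np.uint8)

dn = np.array([0, 10, 100, 255])
print(log_stretch(dn))   # dark DNs pulled far apart, bright ones compressed
```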
Transfer Function – Non Linear Stretch
Histogram Equalisation

 In this technique, the histogram of the original image is
  redistributed to produce a uniform population density.

 This is obtained by grouping certain adjacent grey values.

 Thus the number of grey levels in the enhanced image is less
  than the number of grey levels in the original image.

 Contrast is increased at the most populated range of brightness
  values of the histogram (the "peaks").

 It automatically reduces the contrast in very light or dark parts
  of the image, associated with the tails of a normally distributed
  histogram.
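A minimal sketch of histogram equalization via the cumulative distribution function; the CDF-based mapping is the standard formulation, while details such as the `levels` parameter are assumptions:

```python
import numpy as np

def hist_equalize(band, levels=256):
    """Map each grey level through the normalised cumulative histogram."""
    hist = np.bincount(band.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF at the first occupied grey level
    n = band.size
    lut = np.round((cdf - cdf_min) / (n - cdf_min) * (levels - 1))
    return np.clip(lut, 0, levels - 1).astype(np.uint8)[band]

rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(32, 32))    # low-contrast image
eq = hist_equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```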
Histogram Equalizations
Gaussian Stretch
• Stretching based on the histogram of pixel values
• Involves fitting the observed histogram to a normal or Gaussian
  histogram
• Highlights the tail parts of the histogram
SPATIAL ENHANCEMENT
Local Operations

• A pixel value is modified based on the values surrounding it.

• Spatial Filtering is the process of dividing the image into its
  constituent spatial frequencies, and selectively altering certain
  spatial frequencies to emphasize some image features.

• This technique increases the analyst's ability to discriminate
  detail, and is used for:
  • Enhancing certain features
  • Removal of noise
  • Smoothing of the image
Where are the low and high frequencies?
Filters
• Filtering algorithms are composed of windows (kernels / convolution
  masks) and constants (the weights given to the mask).

• Mask sizes: 3x3, 5x5, 7x7, 9x9, ...

  e.g. a square 3x3 mask:

  1 1 1
  1 1 1
  1 1 1
Filter Types

Low Pass Filters
• Block high-frequency details
• Have a smoothing effect on images
• Used for removal of noise
• Remove "salt & pepper" noise
• Blur the image, especially at edges

High Pass Filters
• Preserve high frequencies and remove slowly varying components
• Emphasize fine details
• Used for edge detection and enhancement
• Edges - locations where a transition from one category to
  another occurs
Convolution ( Filtering Technique)

• The process of evaluating the weighted neighbouring pixel values
  located in a particular spatial pattern around the (i, j) location
  in the input image.

• Technique:
  - The mask window is placed over part of the image
  - The convolution formula is applied over that part of the image:
    the sum of the weighted products (mask coefficient x raw DN value)
    is obtained and divided by the sum of the coefficients
  - The central value is replaced by the output value
  - The window is shifted by one pixel and the procedure is repeated
    for the entire image
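The steps above can be sketched as a direct (unoptimised) convolution; the edge duplication and the normalisation by the coefficient sum follow the technique described, while the function and variable names are illustrative:

```python
import numpy as np

def convolve(image, mask):
    """Slide the mask over the image; replace each central pixel with the
    sum of (coefficient x DN) divided by the sum of the coefficients."""
    m = mask.shape[0]                         # assume a square, odd-sized mask
    pad = m // 2
    norm = mask.sum() if mask.sum() != 0 else 1
    padded = np.pad(image.astype(np.float64), pad, mode='edge')  # duplicate sides
    out = np.empty(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + m, j:j + m]
            out[i, j] = (window * mask).sum() / norm
    return out

mean_mask = np.ones((3, 3))                   # low-pass (smoothing) mask
img = np.array([[10., 10., 10.],
                [10., 100., 10.],
                [10., 10., 10.]])
print(convolve(img, mean_mask)[1, 1])         # salt noise 100 smoothed to 20.0
```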
Convolution Process

• Please note that:

• Using a 3x3 kernel results in an output image 2 lines and
  2 columns smaller.

• Boundary pixels are either copied from the original image, or
  duplicated from the sides and then computed.
Central Pixel

Window mask:
0 1 0
1 1 1          BV5,out = 932 / 5 ≈ 186
0 1 0
High Pass Filtering
• Types
  – Linear
    • The output brightness value is a function of a linear
      combination of BVs located in a particular spatial pattern
      around the (i, j) location in the input image
  – Non-Linear
    • Use non-linear combinations of pixels

• Edge Detection - the background is lost
• Edge Enhancement
  • Delineates edges and makes shapes and details more prominent
  • The background is not lost
Edge Detection

• Zero-Sum Kernels
• Zero-sum kernels are kernels in which the sum of all coefficients
  in the kernel equals zero.
• This generally causes the output values to be:
  • zero in areas where all input values are equal (no edges)
  • low in areas of low spatial frequency
  • extreme in areas of high spatial frequency (high values become
    much higher, low values become much lower)

• Therefore, a zero-sum kernel is an edge detector: it smoothes out
  or zeros out areas of low spatial frequency and creates a sharp
  contrast where spatial frequency is high, i.e. at the edges between
  homogeneous groups of pixels (homogeneity is low spatial frequency).
  The resulting image often consists of only edges and zeros.
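A common zero-sum kernel is the Laplacian; the sketch below (the kernel choice and edge padding are illustrative) shows that flat areas map to zero while a grey-level step produces non-zero responses:

```python
import numpy as np

laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]])          # coefficients sum to zero

def apply_kernel(image, kernel):
    """Apply a 3x3 zero-sum kernel with edge-duplicated borders."""
    padded = np.pad(image.astype(np.float64), 1, mode='edge')
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return out

flat = np.full((4, 4), 50.0)                  # homogeneous area: no edges
step = np.zeros((4, 4)); step[:, 2:] = 100.0  # vertical edge
print(apply_kernel(flat, laplacian))          # all zeros
```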
IMAGE TRANSFORMATION
Image Transformation

• Image transformations typically involve the manipulation of multiple
  bands of data, whether from a single multispectral image or from two
  or more images of the same area acquired at different times
  (i.e. multitemporal image data).

• Either way, image transformations generate "new" images from two or
  more sources which highlight particular features or properties of
  interest better than the original input images.
Image Division

• One of the most common transforms applied to image data.

• On a pixel-by-pixel basis, carry out the following operation:

  Band 1 / Band 2 = New band

• The resultant data are then rescaled to fill the range of the
  display device.

• A very popular technique, commonly called "band ratioing".
Mathematically

BVi,j,r = BVi,j,k / BVi,j,l

Where
  BVi,j,k = brightness value at line i, pixel j, in band k of the imagery
  BVi,j,l = brightness value at the same location in band l
  BVi,j,r = ratio value at the same location

(Note: if the denominator is 0 (zero), the denominator BV is made 1.)
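The ratio with the zero-denominator rule can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def band_ratio(band_k, band_l):
    """BV(i,j,r) = BV(i,j,k) / BV(i,j,l); a zero denominator is made 1."""
    denom = band_l.astype(np.float64)         # copy, so band_l is untouched
    denom[denom == 0] = 1
    return band_k.astype(np.float64) / denom

a = np.array([[60, 20], [0, 9]])
b = np.array([[30, 10], [5, 0]])
print(band_ratio(a, b))                       # -> [[2. 2.] [0. 9.]]
```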
Reasons / Application of Ratios

• Variations in topography cause undesirable effects (e.g. variable
  illumination) on the recorded radiances.

  – Sometimes differences in BVs from identical surface material are
    caused by topographic slope and aspect, shadows or seasonal changes.

  – These conditions hamper the ability of an interpreter to correctly
    identify surface material or land use in a remotely sensed image.

• Ratio transformations can be used to reduce the effects of such
  environmental conditions.
Ratios for Elimination of Topographic Effect

  DN value     Band A    Band B    Ratio
• Sunlit          60        30       2
• Shadow          20        10       2

• Identical ratios for the same cover type.

• The radiance in shadow is only 50% of the radiance in sunlight,
  yet the ratios are nearly identical.
Use/ Application of ratios
• Certain aspects of the shape of the spectral reflectance curves of
  different Earth surface cover types can be brought out by ratioing.

• In addition, ratios may provide unique information not available in
  any single band.

• Ratios discriminate subtle spectral variations.

• Ratios are independent of the absolute pixel values.

• Ratios can be used to generate false colour composites by combining
  three monochromatic ratio data sets.
Which Bands to Ratio?

[Figure: infrared band imagery and the ratio IR/R]
Commonly used Vegetation Indices
• Vegetation Index or Ratio Vegetation Index
  RVI = IR / R

• Normalized Difference Vegetation Index
  NDVI = (IR - R) / (IR + R)

• Transformed Vegetation Index
  TVI = {(IR - R)/(IR + R) + 0.5}^(1/2) x 100
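NDVI from the formula above can be sketched as follows; mapping a zero IR + R denominator to 0 is an assumption the slides do not specify:

```python
import numpy as np

def ndvi(ir, red):
    """NDVI = (IR - R) / (IR + R); result lies in [-1, 1]."""
    ir = ir.astype(np.float64)
    red = red.astype(np.float64)
    denom = ir + red
    out = np.zeros_like(denom)
    valid = denom != 0                  # avoid division by zero
    out[valid] = (ir[valid] - red[valid]) / denom[valid]
    return out

ir = np.array([80.0, 50.0, 10.0])      # vegetation reflects strongly in IR
r  = np.array([20.0, 50.0, 40.0])
print(ndvi(ir, r))                     # -> [ 0.6  0.  -0.6]
```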
Vegetation Index

[Figure: ratio IR/R (RVI) and NDVI {(IR - R)/(IR + R)} images]
Principal Component Analysis (PCA)
• Different bands of multispectral data are often highly correlated
  and thus contain similar information.

• We need to transform the original satellite bands into new "bands"
  that express the greatest amount of variance (information) from the
  feature space of the original bands.
Graphical Conceptualization

• PCA is accomplished by a linear transformation of variables that
  corresponds to a rotation and translation of the original
  coordinate system.
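The rotation-and-translation view of PCA can be sketched with an eigendecomposition of the band covariance matrix; the synthetic two-band data and the function name are illustrative:

```python
import numpy as np

def principal_components(bands):
    """Rotate (pixels x bands) data into uncorrelated principal components."""
    centred = bands - bands.mean(axis=0)        # translation to the band means
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # PC1 carries the most variance
    return centred @ eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(1)
b1 = rng.normal(100, 20, 500)
b2 = 0.9 * b1 + rng.normal(0, 5, 500)           # two highly correlated bands
pcs, variances = principal_components(np.column_stack([b1, b2]))
print(variances)                                # variance falls from PC1 to PC2
```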
Principal Component 1

The first principal component broadly simulates standard black and
white photography, and it contains most of the pertinent information
inherent in a scene.
Principal Component 2

• Thus, as is the convention, the second PC has a smaller variance
  than the first PC.
Principal Component 4

Very little information content.
Composite PC Image

Forest appears green, the river bed in blue, water in red-orange,
vegetation in varying shades of green, and fallow agricultural fields
as pink to magenta.
IMAGE FUSION
• Most sensors operate in two modes: a multispectral mode and a
  panchromatic mode.
• The panchromatic mode corresponds to observation over a broad
  spectral band (similar to a typical black-and-white photograph).
• The multispectral (colour) mode corresponds to observation in a
  number of relatively narrower bands.
• The multispectral mode therefore has a better spectral resolution
  than the panchromatic mode.
• In most satellite sensors, the panchromatic mode has a better
  spatial resolution than the multispectral mode.
• The better the spatial resolution, the more detailed the land-use
  information present in the imagery.

• To combine the advantages of the spatial and spectral resolutions
  of two different sensors, image fusion techniques are applied.
Techniques of pixel based fusion

• Band substitution
• Statistical (PCA)
• Color space transformations (RGB, IHS)
Aim and use of Image Fusion

• Sharpening of images

• Enhance features not visible in either of the single data sets alone

• Detect changes using multitemporal data

• Substitute missing information in one image with signals from
  another sensor image (e.g. clouds in NIR and shadows in SAR)

• Replace defective data
Multi Sensor Image Data Fusion

Merged PAN + MSI


THANK YOU
