
REMOTE SENSING IMAGES:
PREPROCESSING AND PROCESSING TECHNIQUES, ENHANCEMENT TECHNIQUES, FILTERING
OUTLINE OF THE PRESENTATION:
1. INTRODUCTION
2. DEFINITION
3. PRE-PROCESSING
a) Geometric Registration
b) Radiometric and Atmospheric Correction
c) Numerous Forms of Image Enhancement
4. DIGITAL IMAGE PROCESSING
a) Image Restoration
b) Image Enhancement
c) Image Classification
d) Image Transformation
5. ACCURACY ASSESSMENT
6. USES OF REMOTELY SENSED IMAGES
7. CONCLUSION
REMOTE SENSING & IMAGE PROCESSING
Of all the various data sources used in GIS, one of the most
important is undoubtedly that provided by remote sensing. Using
satellites, we now have a continuing program of data acquisition for
the entire world with time frames ranging from a couple of weeks to
a matter of hours. Very importantly, we also now have access to
remotely sensed images in digital form, allowing rapid integration of
the results of remote sensing analysis into a GIS.

The development of digital techniques for the restoration, enhancement and computer-assisted interpretation of remotely sensed images initially proceeded independently and somewhat ahead of GIS.
DEFINITION OF RS:
Remote sensing can be defined as any process whereby information is gathered about an object, area or phenomenon without being in contact with it. Put simply, remote sensing is learning something about an object without touching it. As human beings, we remotely sense objects with several of our senses: our eyes, our noses, and our ears.
We can gather information about our surroundings by gauging the amount and
nature of the reflectance of visible light energy from some external source
(such as the sun or a light bulb) as it reflects off objects in our field of view.
Remote sensing imagery can be either photographic or digital.
Photographic, or analog, remote sensing uses film to record
reflected electromagnetic (EM) radiation to produce an image of the
scene. Images are reproduced as photographic prints or
transparencies for interpretation, and subsequent processing options
such as brightness and contrast corrections are somewhat limited.
Digital remote sensing instruments detect the reflected or emitted
EM radiation that impacts the sensor and record the data as
numerical values that can then be displayed on a computer monitor
as an image.
PRE-PROCESSING OF IMAGE:

Pre-processing comprises any technique performed on the image prior to classification. There are many possible pre-processing techniques. The three most important techniques include:

 Geometric Registration
 Radiometric and Atmospheric Correction, and
 Numerous Forms of Image Enhancement.
 Geometric registration is typically performed to adjust the imagery so that it coincides spatially with another image, a spatial data layer, or the ground surface. The significant advances in GIS have made geometric registration an integral part of digital remote sensing.
 Radiometric and atmospheric correction is performed to remove the effects of spectral variations caused by atmospheric conditions. For example, Earth’s atmosphere reflects some EM radiation back toward a sensor before that radiation interacts with the ground surface. This effect varies seasonally and with changing weather conditions.
 Many types of digital image processing
enhancements can be applied to remotely sensed data.
Enhancements to digital imagery are mathematical procedures
that are applied to the digital numbers (DNs) in the image on a
pixel-by-pixel basis. Some of these enhancements, such as
Principal Components Analysis (PCA), can be applied to
individual bands of imagery or to all pixels in all bands of the
imagery.

An example is the Normalized Difference Vegetation Index, or NDVI.
DIGITAL IMAGE PROCESSING:

Digital Image Processing is largely concerned with four basic operations:

 Image restoration
 Image enhancement
 Image classification
 Image transformation.
 IMAGE RESTORATION
Image restoration is concerned with the correction and calibration of images in order to achieve as faithful a representation of the Earth's surface as possible, a fundamental consideration for all applications.
Remotely sensed images of the environment are typically taken at a great distance from
the earth's surface. There is a substantial atmospheric path that electromagnetic energy
must pass through before it reaches the sensor. Depending upon the wavelengths
involved and atmospheric conditions, the incoming energy may be substantially
modified. The sensor itself may then modify the character of that data, since it combines a variety of mechanical, optical and electrical components operating during the time the image is being scanned; the geometry of the image is thus in constant flux. Finally, the
signal needs to be telemetered back to earth, and subsequently received and processed
to yield the final data we receive.
Broadly, image restoration can be broken down into the two sub-areas of radiometric
restoration and geometric restoration.
Radiometric Restoration
Radiometric restoration refers to the removal of distortions in the degree of electromagnetic energy
registered by each detector. A variety of agents can cause distortion in the values recorded for image
cells. Some of the most common distortions for which correction procedures exist include:
 Uniformly elevated values, due to atmospheric haze, which scatters short-wavelength bands;
 Striping, due to detectors going out of calibration;
 Random noise, due to unpredictable and unsystematic performance of the sensor; and
 Scan line drop-out, due to signal loss from specific detectors.
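
As a concrete illustration of one common radiometric correction, the following is a minimal sketch of dark-object subtraction for removing a uniform haze offset. It assumes Python with NumPy; the band array and percentile choice are illustrative only, not a prescribed workflow.

import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    """Remove a uniform haze offset by subtracting the darkest observed value.

    Assumes the scene contains at least a few pixels (deep water, shadow)
    whose true reflectance is near zero, so any signal they record is haze.
    """
    dark_value = np.percentile(band, percentile)   # near-minimum DN in the band
    corrected = band.astype(np.float32) - dark_value
    return np.clip(corrected, 0, None)             # no negative values after correction

# Example: a small synthetic band with a uniform haze offset of about 20 DN
band = np.array([[20, 45, 60],
                 [22, 80, 95],
                 [21, 130, 200]], dtype=np.float32)
print(dark_object_subtraction(band))
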

Geometric Restoration
With satellite imagery, the very high altitude of the sensing platform results in minimal image
displacements due to relief. As a result, registration can usually be achieved through the use of a
systematic rubber sheet transformation process that gently warps an image based on the known
positions of a set of widely dispersed control points.
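
The sketch below shows the simplest form of such a transformation: a first-order (affine) fit estimated by least squares from ground control points. It assumes Python with NumPy, and the control-point coordinates are hypothetical values chosen only to illustrate the procedure.

import numpy as np

def fit_affine(image_xy, ground_xy):
    """Least-squares fit of a first-order (affine) transform mapping image
    (column, row) coordinates of control points to map coordinates."""
    image_xy = np.asarray(image_xy, dtype=float)
    ground_xy = np.asarray(ground_xy, dtype=float)
    # Design matrix [x, y, 1] for x' = a*x + b*y + c and y' = d*x + e*y + f
    A = np.column_stack([image_xy, np.ones(len(image_xy))])
    coeffs, *_ = np.linalg.lstsq(A, ground_xy, rcond=None)
    return coeffs                                  # 3 x 2 array of coefficients

def to_map(col_row, coeffs):
    """Apply the fitted transform to a single pixel coordinate."""
    return np.append(np.asarray(col_row, dtype=float), 1.0) @ coeffs

# Hypothetical control points: pixel locations and their known map coordinates
pixels = [(10, 10), (500, 12), (495, 480), (15, 470)]
map_xy = [(3050.0, 9120.0), (3540.0, 9118.0), (3536.0, 8650.0), (3055.0, 8660.0)]
coeffs = fit_affine(pixels, map_xy)
print(to_map((250, 250), coeffs))                  # map position of pixel (250, 250)
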
 IMAGE ENHANCEMENT
Image enhancement is concerned with the modification of images to make them more suited to the capabilities of human vision, optimizing their appearance for visual interpretation. Image enhancement includes the following areas:
Contrast Stretch
Digital sensors have a wide range of output values to accommodate the strongly varying
reflectance values that can be found in different environments. Contrast manipulation
procedures are thus essential to most visual analyses. This is normally used for visual
analysis only.
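
A minimal sketch of a linear contrast stretch follows, assuming Python with NumPy; the 2nd and 98th percentile cut-offs are a common but arbitrary choice, not a fixed rule.

import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch: map the low-high percentile DN range to 0-255.

    Clipping a small percentage at each tail stops a few extreme pixels
    from compressing the rest of the histogram into a narrow display range.
    """
    lo, hi = np.percentile(band, (low_pct, high_pct))
    scaled = (band.astype(np.float32) - lo) / (hi - lo)
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)

As noted above, the stretched values are intended for display and visual analysis only; the original DNs should be kept for any quantitative work.
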
Composite Generation
For visual analysis, color composites make fullest use of the capabilities of the human eye.
Depending upon the graphics system in use, composite generation ranges from simply
selecting the bands to use, to more involved procedures of band combination and
associated contrast stretch.
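
The following sketch builds a false-colour composite by stacking three stretched bands into an RGB array. It assumes Python with NumPy; the band names and the TM 4-3-2 combination are given only as a familiar example.

import numpy as np

def stretch(band, low=2, high=98):
    """Percentile stretch of a single band to the 0-1 display range."""
    lo, hi = np.percentile(band, (low, high))
    return np.clip((band.astype(np.float32) - lo) / (hi - lo), 0, 1)

def false_colour_composite(nir, red, green):
    """Stack three stretched bands into an RGB array for display.

    Assigning the near-infrared band to the red display channel (e.g. TM
    bands 4-3-2) makes healthy vegetation appear bright red in the composite.
    """
    return np.dstack([stretch(nir), stretch(red), stretch(green)])

# Usage with band arrays read from a scene, e.g. with matplotlib:
# plt.imshow(false_colour_composite(band4, band3, band2))
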

The figure shows several composites made with different band combinations from the same set of TM images.
DIGITAL FILTERING
One of the most intriguing capabilities of digital analysis is the ability to apply
digital filters. Filters can be used to provide edge enhancement (sometimes called
crispening), to remove image blur, and to isolate lineaments and directional
trends, to mention just a few. The IDRISI module FILTER is used to apply
standard filters and to construct and apply user-defined filters.
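
The sketch below is a generic illustration of digital filtering by convolution, not the IDRISI FILTER module itself. It assumes Python with NumPy and SciPy; the kernels shown are standard examples of an edge-enhancement (crispening) filter and a smoothing (mean) filter.

import numpy as np
from scipy import ndimage

# 3x3 edge-enhancement ("crispening") kernel: the centre pixel is weighted
# heavily and its neighbours subtracted, which sharpens abrupt DN changes.
EDGE_ENHANCE = np.array([[ 0, -1,  0],
                         [-1,  5, -1],
                         [ 0, -1,  0]], dtype=np.float32)

# 3x3 mean (low-pass) kernel used to reduce random noise and blur detail.
MEAN = np.full((3, 3), 1 / 9, dtype=np.float32)

def apply_filter(band, kernel):
    """Convolve a single band with a filter kernel, pixel by pixel."""
    return ndimage.convolve(band.astype(np.float32), kernel, mode="nearest")

# Usage: sharpened = apply_filter(band, EDGE_ENHANCE)
#        smoothed  = apply_filter(band, MEAN)
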
 IMAGE CLASSIFICATION:
Image classification refers to the computer-assisted
interpretation of remotely sensed images.
Although some procedures can incorporate information about such image characteristics as texture and context, most image classification is based solely on the detection of the spectral signatures of land cover classes. The success with which this can be done will depend on two things:

1) the presence of distinctive signatures for the land cover classes of interest in the
band set being used and
2) the ability to reliably distinguish these signatures from other spectral response
patterns that may be present.
There are two general approaches to image classification
depending on how the classification is performed:

 SUPERVISED: In the case of supervised classification, the software system delineates specific land cover types based on statistical characterization data drawn from known examples in the image (known as training sites).

 UNSUPERVISED: With unsupervised classification, however, clustering software is used to uncover the commonly occurring land cover types, with the analyst providing interpretations of those cover types at a later stage.
Supervised Classification vs. Unsupervised Classification

Supervised: In a supervised classification, the analyst must identify areas of the image with a known surface cover type and create a training area (a grouping of pixels) from which the computer generates a statistics file.
Unsupervised: In an unsupervised classification, the analyst begins by selecting the number of unique, separate classes that will be derived from the spectral response data contained in the image.

Supervised: The software system is then used to develop a statistical characterization of the reflectance for each information class.
Unsupervised: The computer applies a statistical clustering algorithm to group the pixels in the image into spectral classes.

Supervised: Informational types that are homogeneous and distinct (e.g., water or sand) require only a few training areas.
Unsupervised: These classes are spectrally unique, but a single cluster may be a combination of a number of land cover or land use types.

Supervised: The analyst selects training areas for use in the classification.
Unsupervised: The classification highlights spectrally separate classes within the image coverage area.

Supervised: Supervised classification is also known as detailed analysis.
Unsupervised: Unsupervised classification is also known as cluster analysis.

Supervised: Supervised classification produces information classes.
Unsupervised: Unsupervised classification produces spectral classes rather than information classes.
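
To make the two approaches concrete, the sketch below pairs a very small supervised classifier with a very small unsupervised one, assuming Python with NumPy. The supervised example uses a simple minimum-distance-to-means rule rather than the fuller statistical characterization described above, and the unsupervised example is a plain k-means clustering; the band stack, class counts and training means are all illustrative.

import numpy as np

def supervised_classify(bands, training_means):
    """Minimum-distance-to-means supervised classification.

    `bands` is a (rows, cols, n_bands) array; `training_means` is a
    (n_classes, n_bands) array of mean spectral signatures derived from
    analyst-digitised training sites.
    """
    rows, cols, nb = bands.shape
    pixels = bands.reshape(-1, nb).astype(np.float32)
    means = np.asarray(training_means, dtype=np.float32)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(rows, cols)

def unsupervised_classify(bands, n_classes=5, n_iter=20, seed=0):
    """K-means clustering of a multi-band image into spectral classes.

    The analyst must still interpret and label each spectral class afterwards.
    """
    rows, cols, nb = bands.shape
    pixels = bands.reshape(-1, nb).astype(np.float32)
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), n_classes, replace=False)]
    for _ in range(n_iter):
        # Assign every pixel to its nearest cluster centre in spectral space
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of the pixels assigned to it
        for k in range(n_classes):
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(rows, cols)
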
 IMAGE TRANSFORMATION:
Digital Image Processing offers a limitless range of possible transformations on remotely sensed data. Some are mentioned here specifically because of their special significance in environmental monitoring applications:

 NDVI (Normalized Difference Vegetation Index)
 NDMI (Normalized Difference Moisture Index)
 NDBI (Normalized Difference Built-up Index)
VEGETATION INDICES:
A variety of indices have been developed to help in the monitoring of vegetation, water, built-up areas, moisture, etc. Most are based on the
very different interactions between vegetation and electromagnetic energy in the red
and near-infrared wavelengths. Although several variants of this basic logic have been
developed, the one which has received the most attention is the normalized difference
vegetation index (NDVI). It is calculated in the following manner:
NDVI = (NIR - R) / (NIR + R)
Where, NIR = Near Infrared and R = Red
Although NDVI needs specific calibration to be used as an actual measure of biomass, many agencies have found the index to be useful as a relative measure for monitoring purposes.
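
A minimal NDVI computation, assuming Python with NumPy, follows directly from the formula above; the guard against division by zero handles pixels where both bands are zero.

import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R), computed per pixel.

    Values range from -1 to +1: dense, healthy vegetation gives high positive
    values, while water and bare surfaces give low or negative values.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    total = nir + red
    safe_total = np.where(total == 0, 1, total)     # avoid division by zero
    return np.where(total == 0, 0, (nir - red) / safe_total)
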
The figure shows NDVI maps of Sajek Valley (2015) and Kaptai Lake (2015), calculated from the red and near-infrared Landsat bands.


Principal Components Analysis:
Principal Components Analysis (PCA) is a linear transformation technique related to
Factor Analysis.
PCA is a sequential mathematical procedure that identifies specific sets of bands in
multi-band imagery that show the greatest spectral differences among surface features;
there are multiple PCA images possible for a single image. For example, one PCA of
an image over an actively burning fire might emphasize the extent of the smoke from
the fire.
PCA has traditionally been used in remote sensing as a means of data compaction. For
a typical multispectral image band set, it is common to find that the first two or three
components are able to explain virtually all of the original variability in reflectance
values.
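
The following sketch computes principal components of a band stack directly from the band covariance matrix, assuming Python with NumPy; the (rows, cols, n_bands) array shape is an assumption about how the imagery is held in memory.

import numpy as np

def principal_components(bands):
    """Principal Components Analysis of a (rows, cols, n_bands) image stack.

    Bands are treated as variables and pixels as observations; eigenvectors
    of the band covariance matrix define new, uncorrelated component images
    ordered by the proportion of total variance they explain.
    """
    rows, cols, nb = bands.shape
    pixels = bands.reshape(-1, nb).astype(np.float64)
    pixels -= pixels.mean(axis=0)                    # centre each band
    cov = np.cov(pixels, rowvar=False)               # n_bands x n_bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # symmetric matrix, so eigh
    order = eigvals.argsort()[::-1]                  # largest variance first
    components = pixels @ eigvecs[:, order]          # project pixels onto components
    explained = eigvals[order] / eigvals.sum()
    return components.reshape(rows, cols, nb), explained

# The first two or three entries of `explained` typically account for most of
# the variance, which is why PCA is useful for data compaction.
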
Other Transformations:
These include color space transformations (COLSPACE), texture calculations
(TEXTURE), blackbody thermal transformations (THERMAL), and a wide variety of
ad hoc transformations (such as image ratioing) that can be most effectively
accomplished with the image calculator utility.
ACCURACY ASSESSMENT:
Once a thematic map has been created from the remotely sensed imagery, it is
important to know how accurate the map is. There are two components to map
accuracy: positional accuracy and thematic accuracy.

 Positional accuracy is a measure of the distances between points on a map compared with the distances between the same points on the ground.
 Thematic accuracy is a measure of whether the classes shown on the thematic
map produced from the imagery are accurately identified.
Positional accuracy is measured in distances while thematic accuracy uses a table
called an error matrix to show errors in labeling. Usually, field observations are
used for assessing the accuracy of a thematic map.
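
A small worked example of an error matrix and overall accuracy, assuming Python with NumPy; the reference and classified labels below are hypothetical check points invented for illustration.

import numpy as np

def error_matrix(reference, classified, n_classes):
    """Build an error (confusion) matrix from reference and classified labels.

    Rows are the classes on the thematic map, columns the classes observed
    in the field; the diagonal holds correctly labelled samples.
    """
    matrix = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        matrix[cls, ref] += 1
    overall_accuracy = np.trace(matrix) / matrix.sum()
    return matrix, overall_accuracy

# Hypothetical check points: 0 = water, 1 = forest, 2 = urban
reference  = [0, 0, 1, 1, 1, 2, 2, 2, 2, 1]
classified = [0, 0, 1, 1, 2, 2, 2, 1, 2, 1]
matrix, acc = error_matrix(reference, classified, 3)
print(matrix)
print(f"Overall accuracy: {acc:.0%}")               # 8 of 10 correct -> 80%
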
The figure shows the effect of an accuracy assessment performed on a supervised classification of the 1999 Carroll County Landsat image. The original thematic map of land cover, on the left, shows specific inaccuracies. Following field inspection, additional training areas were submitted to the supervised classification algorithm and the classification was run again, with significantly higher accuracy achieved in the thematic map shown on the right. Accuracy levels for roads, development, and residential land cover types improved to over 90%.
USES OF REMOTELY SENSED IMAGES :
 Satellite imagery can detect and map the extent of inundation and consequent
damage to ecosystems and human infrastructure following hurricanes.
 Emergency managers rely on maps generated from remotely sensed imagery for near-real-time response to emergency events.
 Resource managers use satellite imagery to detect and map damage and to plan
remediation efforts.
 Both farmers and crop insurance investigators rely on satellite imagery to map the
extent and impact of hailstorms on crops.
 Volcanic activity can be monitored by satellite for ash clouds that might impact
aircraft or people, as well as lava flows that might impact human infrastructure.
 Comparative analyses can then be done between images captured on different dates.
 Changes in land use and land cover can be monitored that affect soil and water
resources and even local temperature patterns.
 Remotely sensed imagery is also applied to mapping agricultural land cover
impacted by plant and animal pests.
 For boating and fishing, lake quality can be monitored accurately using remotely sensed images.
 The effects of climate variability on water supplies are being monitored in reservoirs
that provide flood control, irrigation water resources, and habitat for many forms of
wildlife.
 Surface mining that changes land cover and land use can be monitored to assess
impacts on ecosystems and water resources.
CONCLUSIONS:
Remotely sensed data is important to a broad range of disciplines. This will continue to
be the case and will likely grow with the greater availability of data promised by an
increasing number of operational systems. The availability of this data, coupled with
the computer software necessary to analyze it, provides opportunities for
environmental scholars and planners, particularly in the areas of land use mapping and
change detection.
THANK YOU FOR LISTENING
ANY QUERIES?
