
Basics of Remote Sensing and Image Processing

Mehul R. Pandya
Scientist
AED, BPSG, EPSA
Space Applications Centre, Ahmedabad
Remote Sensing

• Remote sensing is the science of deriving inferences about objects from measurements made at a distance, without coming into physical contact with the objects under study.
• It is the process of studying the interaction of EM radiation with different objects (land, water, atmosphere, etc.) without physical contact, using an instrument on a distant platform.
Components of Remote Sensing and Applications

The end-to-end chain runs from sensor development to societal benefits:
1. Development of sensor
2. Acquisition of data from space
3. Data reception
4. RS observations
5. Data product generation
6. Image processing applications
7. Societal benefits
Various sensor types used in RS Applications

Sensors fall into two families, each operating in the optical or microwave region:

Active sensors:
• LIDAR (optical)
• Synthetic Aperture RADAR (SAR) (microwave)
• Scatterometer (microwave)
• Altimeter (microwave)

Passive sensors:
• Radiometers in the visible, near and thermal infrared (optical)
• Imaging spectrometer (optical: UV, visible, NIR, SWIR, thermal IR)
• Multi-frequency microwave radiometer (microwave bands: Ka, K, Ku, X, C, S, L, P)
What is an image?
• Data organized in a grid of columns (x-axis) and rows (y-axis)
• Usually represents a geographical area
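In code, such a grid is simply a 2-D array of digital numbers (DNs), with one grid per spectral band. A minimal sketch using NumPy (the array values are illustrative):

```python
import numpy as np

# A single-band image: rows x columns grid of digital numbers (DNs)
image = np.array([[12, 40, 41],
                  [13, 42, 44],
                  [11, 39, 43]], dtype=np.uint8)

rows, cols = image.shape          # the grid of rows and columns
dn = image[1, 2]                  # DN at row 1, column 2 -> 44

# A multispectral image stacks one such grid per spectral band
multiband = np.stack([image, image // 2], axis=-1)  # shape (3, 3, 2)
```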
Spatial resolution
• Spatial resolution depends on the FOV, altitude and viewing angle of a sensor
• Examples of different spatial resolutions (IRS sensors):
  – AWiFS: 56 m
  – LISS-III: 24 m
  – LISS-IV: 5.8 m
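At nadir, the ground footprint of one pixel is approximately the sensor altitude times the instantaneous field of view (small-angle approximation). A quick sketch; the altitude and IFOV numbers below are illustrative, not actual sensor specifications:

```python
def ground_sample_distance(altitude_m, ifov_rad):
    """Approximate nadir pixel footprint: GSD ~ altitude x IFOV (small-angle)."""
    return altitude_m * ifov_rad

# Illustrative values only, chosen to land near a LISS-IV-like 5.8 m pixel:
gsd = ground_sample_distance(817_000, 7.1e-6)   # ~5.8 m
```

Off-nadir viewing and a wider FOV stretch this footprint, which is why wide-swath sensors like AWiFS trade spatial resolution for coverage.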
Swath
• Sensors collect 2D images of the surface in a swath below the sensor
• Examples: IRS AWiFS has a 740 km swath; Landsat has a 185 km swath
Spectral resolution: Measuring light in bands
• Human eyes only 'measure' visible light
• Sensors can measure other portions of the EM spectrum, divided into discrete wavelength intervals called bands
Spectral signatures: Basis for discriminating various Earth surface features

[Figure: spectral reflectance (%) versus wavelength (0.4–2.4 µm) for silty clay soil, muck soil, vegetation and water (shallow/deep). Each surface type traces a distinct curve, its spectral signature; water reflectance falls to near zero beyond the visible, while vegetation rises sharply in the near infrared.]

Band definitions used in the example composites:
• Blue band: 0.4–0.5 µm
• Green band: 0.5–0.6 µm
• Red band: 0.6–0.7 µm
• Near IR band: 0.7–0.9 µm

[Figure: true-color and false-color composites of a scene containing (1) sand, (2) vegetation, (3) water.]


Multi Temporal Observation

[Figure: image sequences illustrating multi-temporal coverage — acquisitions at different times of day (5:30, 8:30, 11:30, 16:00) and on different dates through a year (25 Jun, 29 Sep, 09 Oct, 13 Oct, 14 Nov, 04 Dec, 17 Jan, 13 Feb, 17 Mar, 02 Apr, 05 May, 25 May).]
Radiometric resolution
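Radiometric resolution is the number of brightness levels a sensor can distinguish, set by the bit depth of its quantizer: an n-bit sensor records 2^n gray levels. A quick sketch:

```python
def gray_levels(bits):
    """Number of distinguishable brightness levels for an n-bit quantizer."""
    return 2 ** bits

# 8-bit data give 256 gray levels; 10-bit data give 1024
levels_8 = gray_levels(8)    # 256
levels_10 = gray_levels(10)  # 1024
```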
The typical image-analysis workflow:
1. IMAGE ACQUISITION
2. IMAGE PRE-PROCESSING
3. IMAGE PROCESSING (image enhancement, feature extraction)
4. IMAGE CLASSIFICATION
5. ACCURACY ASSESSMENT
What is pre-processing?
• Every "raw" remotely sensed image contains a number of artifacts and errors
• Correcting such errors and artifacts before further use is termed pre-processing
• The term reflects the fact that PRE-processing is required before correct PROCESSING can take place
• The boundary between pre-processing and processing is often fuzzy
Image Pre-Processing
• Create a more faithful representation through:
  – Geometric correction
  – Radiometric correction
  – Atmospheric correction
• Can also make the image easier to interpret using "image enhancement"
• Rectification: removes distortions introduced by the platform, sensor, Earth, atmosphere, ….
Factors affecting RS images
• Which factors influence RS image acquisition?
  – Sensor characteristics
  – Earth/satellite geometry
  – Acquisition method: satellite or airborne
  – Atmosphere (scattering, absorption, …)
  – Others: …

• After correction for these factors, remote sensing images become comparable:
  – In time (e.g., for monitoring)
  – Between sensors (e.g., MODIS and IRS)
  – Between different acquisitions by the same sensor
Radiometric correction
• Radiometric correction, or radiometric calibration, is a procedure meant to correctly estimate the target reflectance from the measured incoming radiation

• Radiometric calibration includes the following steps:
  – Sensor normalization
    • Correcting the data for sensor irregularities (sensor noise)
    • Converting the data so they accurately represent the radiation reflected or emitted toward the sensor
  – DN to at-sensor radiance conversion
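The DN-to-radiance step is a linear calibration, commonly followed by conversion to top-of-atmosphere reflectance via the standard formula ρ = πLd²/(ESUN·cosθs). A sketch with hypothetical coefficients (the gain, offset and ESUN values below are illustrative, not those of any real sensor):

```python
import math

def dn_to_radiance(dn, gain, offset):
    """Linear sensor calibration: at-sensor spectral radiance from a digital number."""
    return gain * dn + offset

def toa_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    """Top-of-atmosphere reflectance from at-sensor radiance (standard formula)."""
    theta_s = math.radians(90.0 - sun_elev_deg)   # solar zenith angle
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))

# Hypothetical calibration coefficients, for illustration only:
L = dn_to_radiance(120, gain=0.75, offset=1.5)           # W m-2 sr-1 um-1
rho = toa_reflectance(L, esun=1550.0, sun_elev_deg=55.0)  # ~0.23 (unitless)
```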
Geometric correction
• Transforming an RS image to make it compatible with a given type of Earth surface representation is termed GEOMETRIC CORRECTION
• Creating an equation relating each pair of pixel coordinates in the image to a geographic coordinate pair is called GEOREFERENCING
• Geometric correction often implies COREGISTRATION of an image to another (reference) image or map
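Georeferencing as described above can be sketched as fitting a mapping from pixel (column, row) coordinates to geographic coordinates using ground control points; an affine model is the simplest common choice. The GCPs below are hypothetical (10 m pixels, arbitrary UTM-like origin):

```python
import numpy as np

def fit_affine(pixel_xy, geo_xy):
    """Least-squares affine georeferencing: (col, row) -> (easting, northing).

    pixel_xy, geo_xy: arrays of shape (n, 2) with n >= 3 ground control points.
    Returns a (3, 2) coefficient matrix T so that [col, row, 1] @ T = [E, N].
    """
    n = len(pixel_xy)
    A = np.hstack([pixel_xy, np.ones((n, 1))])       # rows of [col, row, 1]
    coeffs, *_ = np.linalg.lstsq(A, geo_xy, rcond=None)
    return coeffs

# Hypothetical GCPs: 10 m pixels, upper-left corner at (500000, 4200000)
px = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
geo = np.array([[500000, 4200000], [501000, 4200000],
                [500000, 4199000], [501000, 4199000]], dtype=float)
T = fit_affine(px, geo)
mapped = np.array([[50.0, 50.0, 1.0]]) @ T            # pixel (50, 50) -> geo
```

With more GCPs than unknowns, the least-squares fit also gives residuals that indicate how well the model explains the distortions.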
What is image processing?
• Enhancing an image, or extracting information or features from an image
• Computerized routines for information extraction (e.g., pattern recognition, classification) from remotely sensed images, yielding categories of information about specific features
• ….
Image Enhancement
• Image enhancement improves the interpretability of the image by increasing the apparent contrast among various features:
  – Contrast manipulation: gray-level thresholding, level slicing, and contrast stretching
  – Spatial feature manipulation: spatial filtering, edge enhancement, and Fourier analysis
  – Multi-image manipulation: band ratioing, principal components, vegetation components, canonical components, …

• Common operations: image reduction, image magnification, transect extraction, contrast adjustments (linear and non-linear), band ratioing, spatial filtering, Fourier transformations, principal components analysis, texture transformations, and image sharpening
Image Enhancement: Contrast stretching
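A percentile-based linear stretch is one common way to do this: the low and high percentiles of the image histogram are mapped to 0 and 255, spreading a narrow DN range across the full display range. A minimal sketch:

```python
import numpy as np

def linear_stretch(img, lo_pct=2, hi_pct=98):
    """Percentile-based linear contrast stretch to the full 0-255 display range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(float) - lo) / (hi - lo)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# A low-contrast image occupying only DNs 100-120 spreads across 0-255
dull = np.random.default_rng(0).integers(100, 121, size=(64, 64))
bright = linear_stretch(dull)
```

Clipping the extreme percentiles prevents a few outlier pixels from compressing the stretch for the rest of the scene.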
Spatial Feature Enhancement (local operation)

• Spatial filtering / convolution:
  – Low-pass filter: emphasizes regional spatial trends, de-emphasizes local variability
  – High-pass filter: emphasizes local spatial variability
• Edge enhancement: combines both filters to sharpen edges in the image
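The filters above can be sketched as 3×3 convolution kernels: a mean kernel for the low-pass, and a Laplacian-like kernel for the high-pass. A naive NumPy implementation (the kernel weights are one common choice, not the only one):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 2-D convolution (valid mode, no padding) to illustrate spatial filtering."""
    kr, kc = kernel.shape
    out = np.zeros((img.shape[0] - kr + 1, img.shape[1] - kc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kr, j:j + kc] * kernel)
    return out

low_pass = np.full((3, 3), 1 / 9.0)          # mean filter: smooths local variability
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]]) / 9.0    # emphasizes local detail and edges

img = np.zeros((5, 5)); img[:, 2:] = 90.0    # a vertical edge
smooth = convolve2d(img, low_pass)           # edge blurred across columns
edges = convolve2d(img, high_pass)           # strong response only at the edge
```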
Image classification

• The technique of turning RS data into meaningful categories representing surface conditions or classes (feature extraction)
• Spectral pattern recognition classifies a pixel based on its pattern of radiance measurements in each band: the most common approach, and easy to use
• Spatial pattern recognition classifies a pixel based on its relationship to surrounding pixels: more complex and difficult to implement
• Temporal pattern recognition looks at changes in pixels over time to assist in feature recognition
Spectral Classification
Two types of classification:
• Supervised:
  – A priori knowledge of classes
  – Tell the computer what to look for
• Unsupervised:
  – Ex post approach
  – Let the computer look for natural clusters
  – Then classify those clusters based on posterior interpretation
Supervised Classification
• Better for cases where the validity of the classification depends on the a priori knowledge of the technician; you already know what "types" you plan to classify

• Conventional cover classes are recognized in the scene from prior knowledge or other GIS/imagery layers

• Training sites are chosen for each of those classes

• Each training-site "class" results in a cloud of points in n-dimensional "measurement space," representing the variability of different pixels' spectral signatures in that class
Supervised Classification
[Figure: a set of pre-chosen training sites of known cover type delineated on an image.]

Source: F.F. Sabins, Jr., 1987, Remote Sensing: Principles and Interpretation.
Source: http://mercator.upc.es/nicktutorial/Sect1/nicktutor_1-15.html
Supervised Classification
• The next step is for the computer to assign each pixel to the spectral class it appears to belong to, based on the DNs of its constituent bands
• Clustering algorithms compare the "clouds" of pixels in spectral "measurement space" from training areas to determine which "cloud" a given non-training pixel falls in

Supervised Classification
• Algorithms include:
  – Minimum distance to means classification (chain method)
  – Gaussian maximum likelihood classification
  – Parallelepiped classification

• Each will give a slightly different result

• The simplest method is "minimum distance": a theoretical centre point of each class's point cloud is computed from its mean values, and an unknown pixel is assigned to the nearest of these centres, taking on that cover class
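The minimum-distance rule described above can be sketched in a few lines: take the class means from the training sites, then assign every pixel to the nearest mean in measurement space. The class means and band values below are hypothetical:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose spectral mean is nearest (Euclidean).

    pixels: (n, bands) array; class_means: (k, bands) array of training-site means.
    """
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Hypothetical training-site means in two bands (red, NIR):
means = np.array([[60.0, 20.0],    # class 0: water (low NIR)
                  [40.0, 120.0],   # class 1: vegetation (high NIR)
                  [90.0, 80.0]])   # class 2: bare soil
pixels = np.array([[55.0, 25.0], [45.0, 110.0], [95.0, 75.0]])
labels = minimum_distance_classify(pixels, means)   # -> [0, 1, 2]
```

Note that this rule ignores the shape and spread of each cloud; maximum likelihood accounts for per-class variance at greater computational cost.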
Supervised Classification

Examples of two classifiers

Source: http://mercator.upc.es/nicktutorial/Sect1/nicktutor_1-16.html
Unsupervised Classification

• Assumes no prior knowledge
• The computer groups all pixels according to their spectral relationships and looks for natural clusterings
• Assumes that data in different cover classes will not belong to the same grouping
• Once the clusters are created, the analyst assesses their utility and can adjust the clustering parameters

[Figure: two natural clusters ("spectral class 1" and "spectral class 2") in two-band measurement space.]

Source: F.F. Sabins, Jr., 1987, Remote Sensing: Principles and Interpretation.
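A common clustering algorithm for this is k-means, which alternates between assigning pixels to the nearest cluster centre and recomputing the centres (ISODATA, often used in RS packages, is a refinement of the same idea). A tiny sketch on synthetic two-band data, with a simple deterministic initialization:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Tiny k-means: groups pixels into k spectral clusters without training data."""
    # Simple deterministic init: k pixels spread evenly through the array
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float).copy()
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then move centres to cluster means
        d = np.linalg.norm(pixels[:, None] - centers[None, :], axis=2)
        labels = np.argmin(d, axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated spectral groups; k-means recovers them unsupervised
rng = np.random.default_rng(1)
a = rng.normal([30, 20], 3, size=(50, 2))    # e.g., water-like pixels
b = rng.normal([50, 130], 3, size=(50, 2))   # e.g., vegetation-like pixels
labels, centers = kmeans(np.vstack([a, b]), k=2)
```

The recovered clusters carry no labels of their own; as the slide notes, the analyst interprets them afterwards and tunes k or other parameters as needed.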
Unsupervised Classification
• Example: change detection across stages of development
Thank you
