
COURSE

STAE 2352 REMOTE SENSING AND ADVANCED GIS

ASSIGNMENT 1: INTRODUCTION TO REMOTE SENSING APPLICATIONS

LECTURER
PROF. MADYA DR. TUKIMAT BIN LIHAN

PREPARED BY
KAM CAI SEE, A175735
Discuss the steps that a researcher can take to improve the quality of satellite image
interpretation when studying the environment.

Remote sensing technology has become increasingly widely used in natural resource mapping
and as a source of input data for modeling environmental processes in recent years. Because
remotely sensed data are available from many sensors on many platforms, with a wide variety
of spatial and temporal resolutions, remote sensing has become the most important source of
data for large-scale environmental monitoring.
Remote sensing technology can detect objects on the Earth's surface using a variety of
sensors. The resulting satellite imagery can be put to good use in applications such as
land-cover and floodplain mapping, fractional vegetation cover and impervious surface area
mapping, surface energy flux estimation, and micro-topography correlation. This technology is
needed to accurately map and monitor environmental changes.
Image interpretation refers to the extraction of quantitative and qualitative information
from satellite images using field knowledge, direct recognition, and inference. An
interpretation approach is applied to a satellite image to generate details and determine the
significance of objects.
Image interpretation can be separated into four steps:
● Detection
● Identification
● Measurement
● Problem solving (create information from classified image)
However, the steps need not all be completed, nor in a fixed order, in every case. Satellite
images are the primary input for this technique; they are collected and used as the primary
source of information in analysing the data.
Image interpretation in remote sensing is the examination of images acquired by sensors
operating in different portions of the electromagnetic spectrum, for the purpose of
identifying objects and judging their significance.
Researchers first detect objects in the satellite image, such as buildings, roads, and
vegetation. Next, they identify the objects by indicating what each object or phenomenon is;
identification can sometimes be done simultaneously with the detection phase. After this
phase, measurements such as height, area, and length can be obtained from the satellite
imagery. Finally, the information is extracted and analysts can solve their designated
problems.
There are two approaches adopted for the interpretation of satellite images:
1.0 Visual Interpretation
2.0 Digital Interpretation

1.0 Visual Interpretation


This approach interprets satellite images using visual techniques. Objects are visually
detected on a satellite image and their significance is determined using interpretation
keys/elements. This technique was very popular when most satellite images were printed on
paper and no digital image processing tools were available. Visual interpretation becomes
even more important when high-resolution satellite images, such as 60 cm or 30 cm imagery,
are available. However, in current image processing practice this approach is used in
conjunction with digital techniques, rather than as a stand-alone method of analysis.
Interpreters study remote sensing images systematically, attempting to classify, quantify,
and determine the meaning of natural and cultural features. Image analysis is a
labor-intensive process that involves rigorous preparation. Information is derived from
images using the characteristics of image features, such as scale, shape, tone, texture,
shadow, pattern, and association. While this approach is straightforward, it has the
following flaws:
● In contrast to what can be captured in digital form, the range of gray values
produced on a film or print is limited
● Because the human eye can only distinguish a finite range of color tones,
radiometric precision cannot be fully utilized;
● Visual perception is a significant constraint when combining data from different
sources.
Digital interpretation is therefore one approach to improving the quality of the satellite
image, so that analysis work in environmental monitoring can benefit.
2.0 Digital Interpretation
Digital interpretation uses computers for the quantitative processing of digital data to
collect information about the Earth's surface. It is commonly referred to as 'image
processing'. Before analysis, the raw data are normally processed to correct for distortions
caused by the imaging system's properties and the imaging conditions, in order to improve
the quality of the satellite image. Before the data are transmitted to the end-user, ground
station operators can perform certain standard correction procedures, depending on the
analyst's requirements.
To assist the visualization of remote sensing images and collect as much information as
possible, a variety of image processing and analysis techniques have been developed. The exact
methods or algorithms to use are determined by the project's objectives.
Image processing is a method for performing operations on images in order to enhance the
satellite image or to extract useful information from it, such as a thematic map. It is a
form of signal processing in which the input is an image and the output is either an image
or the image's characteristics/features.
In general, image processing to enhance a satellite image entails the following steps:
● importing the satellite image with an image acquisition tool;
● analysing and manipulating (correcting/enhancing) the satellite image;
● and producing output, either an altered image or a report based on the image
interpretation.
Methods for manipulating digital images using computers are known as digital image processing
methods. When using digital image processing, there are three general steps that all types of data
must go through:
2.1 Preprocessing
2.2 Image augmentation (image enhancement)
2.3 Information retrieval and image classification

2.1 Preprocessing
The first step in image processing to enhance the quality of the satellite image is
preprocessing, which is critical because image acquisition provides the input data for the
whole process from which spatial information is collected. Preprocessing improves image data
by suppressing unwanted distortions and enhancing image features that are important for
subsequent processing and interpretation. Distortions are inevitably introduced during image
acquisition, so corrections must be implemented. Corrections can be divided into two
categories:
● Radiometric correction (the energy observed differs from the energy released or reflected
by the source, e.g. due to atmospheric effects such as fog, haze, and aerosols)
● Geometric correction (satellite images are geometrically distorted due to the satellite's
orbital orientation)
To improve the accuracy of the images, a variety of processing algorithms are used in the
satellite image map development process. For example, various methods for relative
radiometric sensor calibration are applied during preprocessing, and other techniques are
needed to maintain image quality during geometric resampling.
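As an illustration of a simple radiometric correction, the sketch below applies dark-object subtraction, a common haze correction that assumes the darkest pixel in a band (for example deep clear water) should have near-zero radiance, so any offset above zero approximates the atmospheric haze contribution. This is a minimal Python/NumPy sketch under those assumptions, not a method named in the text; the function name and sample values are hypothetical.

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the darkest observed value from an 8-bit band,
    assuming a true dark object (e.g. deep clear water) should
    have near-zero radiance; the offset approximates haze."""
    corrected = band.astype(np.int32) - int(band.min())
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical 8-bit band with a uniform haze offset of 20
band = np.array([[20, 60], [120, 200]], dtype=np.uint8)
corrected = dark_object_subtraction(band)  # minimum shifted to 0
```

In practice the dark value would be estimated per band from a histogram rather than taken as the raw minimum, but the principle is the same.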

2.2 Image Enhancement


Image enhancement follows preprocessing, and its aim is to improve the effectiveness of an
image for interpretation, for example by producing a more subjectively pleasing image for
human viewing. Little or no attempt is made in image enhancement to determine the precise
cause of image degradation. There are several methods we can apply to achieve higher-quality
satellite imagery for better interpretation tasks, including:
2.2.1 Contrast Enhancement
2.2.2 Fusion method
2.2.3 False Colour Composite (FCC)

2.2.1 Contrast Enhancement


Contrast stretching changes the distribution and range of the digital numbers assigned to
each pixel in the image. This is done mainly to draw attention to aspects of the image that
are difficult for a person to see. It is accomplished by a linear transformation that
stretches the original grey-level range.
Linear contrast stretching maps the image's original digital values, as derived from remote
sensing, onto a new distribution without changing the underlying data. The aim is to alter
the relative lightness and darkness of objects in the scene in order to improve their
visibility. By mapping the grey levels in the image to new values, the image's contrast and
tone can be modified. Under linear transformation, there are:
● Minimum-maximum linear stretches
- The minimum-maximum linear stretch maps the data's minimum and maximum values onto a new
range. For example, consider an image with a minimum brightness of 84 and a maximum
brightness of 153. The display values 0 to 84 and 153 to 255 are unused when the image is
shown without enhancement. By stretching the minimum value, 84, to 0 and the maximum value,
153, to 255, researchers can improve the satellite image quality.
● Piecewise linear stretches
- When a researcher needs to improve a specific/selected region of the histogram, piecewise
linear stretches can be used after the distortion corrections. Researchers stretch only
those histogram values in order to improve the satellite image.
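The minimum-maximum stretch, using the 84-to-153 brightness example above, can be sketched in a few lines of Python/NumPy. The function name and sample array are illustrative, not from the original text.

```python
import numpy as np

def minmax_stretch(band, new_min=0, new_max=255):
    """Linearly map the band's observed [min, max] range onto
    [new_min, new_max] so the full display range is used."""
    old_min, old_max = float(band.min()), float(band.max())
    scaled = (band.astype(float) - old_min) / (old_max - old_min)
    return (scaled * (new_max - new_min) + new_min).astype(np.uint8)

# The example from the text: brightness values span 84..153
band = np.array([[84, 100], [130, 153]], dtype=np.uint8)
stretched = minmax_stretch(band)  # 84 -> 0, 153 -> 255
```

A piecewise stretch works the same way, but applies a different linear mapping to each chosen histogram segment.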
For nonlinear transformation, there are:
● Histogram equalisation
- Histogram equalisation is a digital image processing technique that improves image
contrast. It does this by spreading out the most frequent intensity values, i.e. by
stretching the image's intensity range. This method typically improves overall image
contrast when the usable data are concentrated in a narrow band of similar values, and it
yields higher contrast in areas of lower local contrast.

● Standard-deviation stretch
- First, the mean and standard deviation of the local window around each pixel are
conveniently calculated using an integral image, and then an enhancement model lifts the
contrast of the local regions to varying degrees. In other words, the standard-deviation
stretch expands the range of values lying within a certain number of standard deviations of
the mean.
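Histogram equalisation, described above, can be sketched by mapping each grey level through the normalized cumulative histogram (CDF). This is a minimal Python/NumPy sketch for a single 8-bit band; the function name is hypothetical.

```python
import numpy as np

def histogram_equalize(band, levels=256):
    """Equalize an 8-bit band: map each grey level through the
    normalized cumulative histogram so the most frequent
    intensities are spread over the full range."""
    hist = np.bincount(band.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # first non-empty grey level
    scale = (levels - 1) / (cdf[-1] - cdf_min) if cdf[-1] > cdf_min else 1.0
    lut = np.clip((cdf - cdf_min) * scale, 0, levels - 1).astype(np.uint8)
    return lut[band]  # apply the lookup table per pixel

# A low-contrast band: values crowded into 50..52
band = np.array([[50, 50, 51], [51, 52, 52]], dtype=np.uint8)
equalized = histogram_equalize(band)  # now spans 0..255
```

The lookup-table approach is what makes this efficient: the mapping is computed once from the histogram, then applied to every pixel in a single indexing operation.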

2.2.2 Fusion method


Satellite images come in two types of records: a high spatial resolution panchromatic image
and a low spatial resolution multispectral image. High spatial resolution panchromatic
imagery is good for detecting spatial information, while low spatial resolution
multispectral imagery is good for detecting features based on their spectral
characteristics. Due to technological limitations, many sensors are unable to capture images
that are high in both spatial and spectral resolution. Therefore, the fusion method is
applied to enhance the quality of satellite image interpretation.
The fusion approach addresses this problem by merging data from multiple sensors and
optimizing the combination of spatial detail and spectral feature information. Fusion can
combine resolutions, improve object detection, improve image enhancement and analysis
methodology, yield richer content than any single input, and increase the information
available about objects.
Data fusion is a useful technique for exploiting images from various satellite sensors of
varying spectral and spatial resolution. The fusion technique uses the high spatial
resolution data to resample the low spatial resolution data, resulting in the same pixel
size. The high spatial resolution panchromatic image and the low spatial resolution
multispectral image are then combined to produce a high-resolution multispectral image.
Fused images can be used to estimate environmental components that need to be monitored from
different perspectives, such as evapotranspiration prediction, 3D ground-based
characterisation, smart city systems, design and modeling of maritime surveillance
scenarios, and urban and aquatic ecosystems and vegetation.
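One concrete pan-sharpening scheme of the kind described above is the Brovey transform, which injects the panchromatic band's spatial detail by scaling each multispectral band by the ratio of the pan band to the multispectral intensity. The text does not name a specific fusion algorithm, so this Python/NumPy sketch of the Brovey transform is offered only as an illustrative example; it assumes the multispectral bands have already been resampled to the panchromatic pixel grid.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey transform pan-sharpening.
    ms:  (bands, H, W) multispectral data, already resampled to
         the panchromatic pixel grid
    pan: (H, W) high spatial resolution panchromatic band
    Each band is scaled by pan / mean(ms), injecting the pan
    band's spatial detail while keeping band ratios."""
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))  # eps avoids divide-by-zero

# Hypothetical flat scene: 3 bands at 100, pan brightness 150
ms = np.ones((3, 2, 2)) * 100.0
pan = np.full((2, 2), 150.0)
fused = brovey_fusion(ms, pan)  # every band scaled toward 150
```

Because each band is multiplied by the same ratio, the relative spectral signature of each pixel is preserved while its brightness follows the high-resolution pan band.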

2.2.3 False Colour Composite (FCC)


The FCC is uniform and provides all users with equal knowledge of Earth properties. False
colour images are representations of multispectral images produced by using bands other than
visible red, green, and blue as the display's red, green, and blue components. FCC allows us
to see wavelengths that are invisible to the naked eye. False colour composites come in a
variety of band combinations and can be used to highlight various features. Many people
choose true colour composites because the colours look natural to us, but small variations
in detail may then be difficult to detect. Owing to the scattering of blue light by the
atmosphere, natural colour pictures can be low in contrast and hazy. Hence, using bands such
as the near infrared emphasizes spectral variations and improves the interpretability of the
satellite image data.
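Building an FCC is simply a matter of assigning bands to display channels. The sketch below shows the classic near-infrared composite (NIR shown as red, red as green, green as blue), under which healthy vegetation appears bright red; the function name and sample arrays are illustrative.

```python
import numpy as np

def false_colour_composite(nir, red, green):
    """Classic false colour composite: display the near-infrared
    band as red, the red band as green, and the green band as
    blue, so vegetation (high NIR reflectance) appears red."""
    return np.dstack([nir, red, green])  # (H, W, 3) display image

# Hypothetical vegetated pixel: strong NIR, weak visible bands
nir = np.full((2, 2), 200, dtype=np.uint8)
red = np.full((2, 2), 50, dtype=np.uint8)
green = np.full((2, 2), 30, dtype=np.uint8)
fcc = false_colour_composite(nir, red, green)
```

Other band-to-channel assignments highlight other features, e.g. shortwave infrared combinations for burn scars or soil moisture.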

2.3 Information retrieval and image classification


Image preprocessing and enhancement are important for improving the quality of the image and
correcting radiometric or geometric distortions. After correcting distortions and enhancing
the satellite image, researchers can move on to information extraction and classification of
the satellite images. Some image classification algorithms use spectral characteristics,
i.e. the brightness and "colour" information found in each pixel, to distinguish different
land cover forms in an image. There are two methods to classify the image:
2.3.1 Supervised Classification
2.3.2 Unsupervised Classification
In supervised classification, the spectral features of certain areas of known land cover
types are derived from the image. These areas are called "training areas". Each pixel in the
image is then assigned to one of the classes based on how similar its spectral features are
to those of the training areas.
In unsupervised classification, the computer software automatically divides the pixels in
the image into distinct clusters based on their spectral characteristics. The analyst then
assigns a land cover type to each cluster. There are various types of clustering algorithms,
including simple clustering, K-means, and Iterative Self-Organizing Data Analysis (ISODATA).
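The unsupervised K-means algorithm mentioned above can be sketched in a few lines of Python/NumPy: pixels are alternately assigned to their nearest cluster centroid and the centroids are recomputed. This is a minimal sketch for illustration (function name, iteration count, and sample spectra are hypothetical), not a production classifier.

```python
import numpy as np

def kmeans_classify(pixels, k=2, n_iter=20, seed=0):
    """Minimal K-means clustering of pixel spectra.
    pixels: (N, bands) array of spectral values;
    returns one integer cluster label per pixel."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize centroids from k randomly chosen pixels
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest centroid
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean(axis=0)
    return labels

# Two well-separated spectral groups in a 2-band feature space
pixels = np.array([[0, 0], [1, 1], [10, 10], [11, 11]])
labels = kmeans_classify(pixels, k=2)
```

The analyst would then inspect each cluster and assign it a land cover type, which is exactly the labelling step described in the text.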
Chart 1: Image Interpretation Process

Example

Figure 1: SPOT multispectral image of the same region as in the previous segment, but taken
at a later time. Corrections for radiometric and geometric errors have been made. The
picture has also been adjusted to suit a specific map projection (UTM projection). This
image is shown without any additional enhancements.

Figure 2: The same multispectral SPOT picture after enhancement with a basic linear
grey-level stretch.
By following these steps, researchers will be able to improve the satellite image and
perform better analysis in environmental monitoring. A bluish tint can be seen all over
Figure 1, the unenhanced image, giving it a hazy appearance. The hazy look is caused by the
scattering of sunlight by the atmosphere into the sensor's field of view. This effect also
lowers the contrast between different land covers.
Figure 2 depicts the result of applying the linear stretch. Except for a few spots near the
top of the picture, the hazy appearance has been almost eliminated, and the distinction
between different features has been strengthened.
