
1.

An increasingly complex world and ever-evolving technology have altered the way life on Earth operates. Every day, people's expertise competes with customer expectations, and technology-based organizations increasingly employ machine learning algorithms to streamline their operations. Two such approaches are supervised and unsupervised classification. The primary distinction is that one employs labelled data to aid in prediction while the other does not; beyond this, there are notable differences between the two techniques, as well as critical areas where one performs better than the other.

According to Geospatial Technology websites (2019), the idea behind supervised classification is that a user picks sample pixels in an image that represent certain classes and then instructs the image processing software to use these training sites as references for the categorization of all other pixels in the picture. Training locations are chosen based on the user's expertise. The user also specifies how similar other pixels must be in order to be grouped together; these boundaries are frequently defined by the training area's spectral properties, plus or minus a particular increment. Dr. Muhammad Zulkarnain, A. R. (2015) also showed that unsupervised classification results from software analysis of an image without the user providing example classes. The computer uses algorithms to detect which pixels are related and classifies them. The user may define the algorithm the program will use and the required number of output classes, but otherwise does not guide the categorization. Because the computer-generated groupings of pixels with similar characteristics must then be matched to actual features on the ground, the user must have knowledge of the area being classified.
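The supervised workflow described above can be sketched in a few lines of Python. The sketch below is illustrative only: the band values, the class names, and the choice of a minimum-distance (nearest-centroid) classifier are assumptions added here, not details from the cited sources; in practice the training pixels would be digitised from known sites in the image.

import numpy as np
from sklearn.neighbors import NearestCentroid

# Hypothetical training pixels: one row per pixel, one column per spectral band
# (e.g. blue, green, red, near-infrared reflectance).
train_spectra = np.array([
    [0.05, 0.08, 0.06, 0.45],   # vegetation-like spectra
    [0.04, 0.07, 0.05, 0.50],
    [0.20, 0.22, 0.25, 0.30],   # bare-soil-like spectra
    [0.22, 0.24, 0.27, 0.32],
    [0.03, 0.04, 0.03, 0.02],   # water-like spectra
    [0.02, 0.03, 0.02, 0.01],
])
train_labels = np.array(["vegetation", "vegetation", "soil", "soil", "water", "water"])

# Fit the classifier to the user-selected training sites, then label every other
# pixel by its distance to the class mean spectra.
clf = NearestCentroid().fit(train_spectra, train_labels)
other_pixels = np.array([[0.05, 0.07, 0.05, 0.48],
                         [0.21, 0.23, 0.26, 0.31]])
print(clf.predict(other_pixels))   # ['vegetation' 'soil']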


According to the GISGeography website (2022), there are many differences between supervised and unsupervised classification. Unsupervised classification has two common steps: cluster generation and class assignment. You create the clusters using remote sensing software; ISODATA and K-means are two of the most widely used image clustering algorithms. Before running a clustering technique, you decide how many clusters to generate, for example 8, 20, or 42. Fewer clusters produce groups of closely resembling pixels, while more clusters capture more of the variability in the image. The next step is to manually assign a class to each cluster; for example, you must select the clusters that best represent non-vegetation and vegetation and categorise them accordingly. Supervised classification, in contrast, is a classification method that allows a user to select pixel samples from an image that represent certain classes; the image processing program then uses these training sites as references to group all the pixels in the picture. The training sites, also known as input classes, are chosen based on the user's knowledge of the area. You may also define boundaries that determine how similar other pixels must be in order to be grouped with a class. These boundaries are usually chosen from the spectral properties of the training sets, including the maximum and minimum values in specific spectral bands.
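A minimal Python sketch of the two unsupervised steps just described, cluster generation followed by manual class assignment, is given below. The pixel values, the number of clusters, and the cluster-to-class mapping are hypothetical placeholders; in practice the mapping would come from comparing each cluster against reference imagery or field knowledge.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical multispectral image flattened to (n_pixels, n_bands).
rng = np.random.default_rng(0)
pixels = rng.random((10_000, 4))

# Step 1: cluster generation. The analyst only chooses the number of clusters.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(pixels)

# Step 2: class assignment. After inspecting each cluster, the analyst maps
# cluster ids to land-cover classes by hand.
cluster_to_class = {0: "vegetation", 1: "vegetation", 2: "water",
                    3: "non-vegetation", 4: "non-vegetation",
                    5: "vegetation", 6: "non-vegetation", 7: "water"}
land_cover = np.array([cluster_to_class[c] for c in cluster_ids])
print(np.unique(land_cover, return_counts=True))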

2.

Image classification is used to automatically classify pixels into land cover classes, often known as themes. The spectral pattern of each pixel, drawn from the multispectral stack, is used to characterize each image class. Land cover refers to the physical and biological materials found on the land's surface, such as vegetation or developed environments. A national park, for example, may be used for recreation yet be covered by dipterocarp forest.

The classification technique assumes that different feature types on the earth's surface have different spectral reflectance and emittance properties. Image classification, in its broadest definition, is described as the act of categorizing all pixels in an image, or in raw remotely sensed satellite data, to produce a particular set of labels or land cover themes (Lillesand, Kiefer 1994). To create land cover maps, many classification systems have been developed and widely used (Aplin, Atkinson 2004). They differ in logic: supervised versus unsupervised; parametric, nonparametric, and nonmetric; hard versus soft classification; and per-pixel, sub-pixel, and per-field approaches. However, there are two basic categories of classification procedure used in the processing of remote sensing images: supervised classification and unsupervised classification. These can be used as alternatives, but they are frequently integrated into hybrid procedures that employ more than one method (Richards, Jia 2006).

It is frequently difficult to determine the optimal classifier for a given study owing to the lack of selection criteria among the many viable classification algorithms. With multiple classification methods available, the preferred strategy is to perform a comparative analysis to determine which method is best for a given dataset. Furthermore, combining multiple classification algorithms has been shown to help improve classification accuracy (Guo, J., Zhang, J., Zhang, Y. & Cao, Y. 2008). Image classification has made significant progress in four areas over the past decades: producing regional and global land cover maps; developing and employing advanced classification algorithms such as subpixel, per-field, and knowledge-based algorithms; employing multiple remote-sensing features such as spectral, spatial, multitemporal, and multisensor information; and incorporating ancillary data into classification procedures. An image classification procedure also includes an accuracy assessment. The availability of high-quality remotely sensed images and supplementary data, the design of a suitable classification technique, and the analyst's skills and experience all contribute to the effectiveness of image classification in remote sensing.
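The comparative analysis and accuracy assessment mentioned above can be sketched as follows. The reference data here are random placeholders standing in for labelled reference pixels, and the two classifiers are arbitrary choices used only to illustrate how candidates would be compared on the same test set.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical reference data: pixel spectra with known land-cover labels.
rng = np.random.default_rng(1)
spectra = rng.random((500, 4))
labels = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.3, random_state=0)

# Train each candidate classifier on the same data and compare accuracies.
for clf in (KNeighborsClassifier(n_neighbors=5),
            DecisionTreeClassifier(random_state=0)):
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(type(clf).__name__, "overall accuracy:",
          round(accuracy_score(y_test, y_pred), 3))
    print(confusion_matrix(y_test, y_pred))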

3.

All the Sun's energy that reaches the Earth arrives as solar radiation, which is part of a large family of energy known as the electromagnetic spectrum. Visible light, ultraviolet light, infrared light, radio waves, X-rays, and gamma rays are all types of electromagnetic radiation. Radiation is one method of transferring heat: to "radiate" is to send out or spread from a central point (Cheolmin Kim, 2016), so anything that radiates, whether light, sound, or heat, expands outward from its starting place. After the Sun produces electromagnetic radiation, the portion of it that makes its way through space to the top of the Earth's atmosphere must pass through the atmosphere, be reflected by the Earth's surface, pass through the atmosphere again on its way back to space, and then arrive at the sensor to be recorded. While nothing happens to the radiation field as it travels through empty space, numerous things happen when it interacts with the Earth's atmosphere and surface. Because the measured radiation carries information about the Earth's environment as a result of these interactions, it is critical to investigate what happens in these interactions and how they affect the radiation field (Jwan Al-doski, 2013).

The interaction between electromagnetic radiation and the Earth's atmosphere can be divided into three processes: refraction, scattering, and absorption. Refraction is the change in the propagation direction of electromagnetic radiation when it passes between two media of different density, and it occurs when radiation from outer space (density of about 0) enters the atmosphere (density greater than 0). The refractive indices of the two media define the angle at which the propagation direction changes. The refractive index of a medium (n) is calculated by dividing the speed of electromagnetic radiation in a vacuum (c) by the speed of electromagnetic radiation in the medium (cn): n = c / cn. A typical atmosphere has a refractive index of 1.0003, whereas water has a refractive index of 1.33 (Sadoun, Balqies & Al Rawashdeh, Samih, 2009). Snell's law can be used to calculate the amount of refraction from the refractive indices of the two media: n1 * sin(θ1) = n2 * sin(θ2).
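A short worked example of these two relations (n = c / cn and Snell's law) is given below. The 30° incidence angle is an arbitrary illustrative value; the refractive indices are the ones quoted in the paragraph above.

import math

n_space = 1.0          # refractive index of empty space
n_atmosphere = 1.0003  # typical refractive index of the atmosphere
theta1 = math.radians(30.0)  # hypothetical incidence angle

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2), solved for theta2.
theta2 = math.asin(n_space * math.sin(theta1) / n_atmosphere)
print(round(math.degrees(theta2), 2))  # ~29.99 degrees: the ray bends slightly toward the normal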

Scattering is one of the two remaining mechanisms that alter electromagnetic radiation as it travels through the atmosphere. Scattering occurs when a photon interacts with something in the atmosphere and changes direction. There are two main forms of scattering, distinguished by the size of the object with which the photon interacts. When the object is substantially smaller than the wavelength of the radiation, Rayleigh scattering occurs; in the case of sunlight and the Earth's atmosphere, this means that atmospheric gases such as N2, O2, and CO2 produce Rayleigh scattering (Reis S, 2008). Mie scattering occurs when the object is about the same size as the wavelength of the radiation, meaning that it is produced by aerosols such as smoke and dust particles. If radiation interacts with particles larger than its wavelength, such as water droplets or sand particles, a further, non-selective form of scattering can occur. The amount of Rayleigh scattering is inversely proportional to the fourth power of the wavelength of the radiation, which is very important for Earth remote sensing (Jwan Al-doski, 2013). In other words, Rayleigh scattering scatters far more radiation at shorter wavelengths than at longer wavelengths: in the visible range, blue light is scattered more than green light, which is scattered more than red light. This is the process that makes the sky appear blue and gives the Earth its blue tint when seen from space.
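The inverse fourth-power dependence can be made concrete with a quick calculation. The 450 nm and 650 nm figures below are nominal wavelengths for blue and red light, chosen here only for illustration.

# Rayleigh scattering intensity is proportional to 1 / wavelength**4.
blue_nm = 450.0   # nominal blue wavelength
red_nm = 650.0    # nominal red wavelength

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red light")
# roughly 4.4x, consistent with the blue appearance of scattered skylight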

4.

Some forms of electromagnetic radiation move through the atmosphere with ease, whereas others do not. The capacity of the atmosphere to allow radiation to travel through it is known as transmissivity, and it varies depending on the wavelength or kind of radiation. The gases that make up our atmosphere absorb radiation at some wavelengths while allowing radiation at other wavelengths to pass through (Kern, Stefan & Ozsoy, Burcu, 2018). In contrast to these absorption bands, there are parts of the electromagnetic spectrum where the atmosphere is transparent to certain wavelengths. These wavelength ranges are referred to as atmospheric "windows" because they allow radiation to travel readily through the atmosphere to the Earth's surface. Most remote sensing instruments on aircraft or space-based platforms measure in one or more of these windows, using detectors tuned to the specific wavelengths that pass through the atmosphere (Kumar R Rao, 2020). When a remote sensing instrument views an object that is reflecting sunlight or emitting heat, it gathers and records that radiant energy.
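A small sketch of the idea of measuring inside a window is given below. The window limits are approximate, commonly quoted ranges in micrometres and are assumptions added here for illustration; they do not come from the cited sources.

# Approximate atmospheric windows (micrometres); illustrative values only.
ATMOSPHERIC_WINDOWS_UM = [
    (0.3, 1.3),    # visible and near-infrared
    (1.5, 1.8),    # shortwave infrared
    (2.0, 2.4),    # shortwave infrared
    (3.5, 4.2),    # midwave infrared
    (8.0, 14.0),   # thermal infrared
]

def in_atmospheric_window(wavelength_um: float) -> bool:
    """Return True if the wavelength falls inside one of the listed windows."""
    return any(lo <= wavelength_um <= hi for lo, hi in ATMOSPHERIC_WINDOWS_UM)

print(in_atmospheric_window(0.55))   # True: green visible light reaches the surface
print(in_atmospheric_window(6.5))    # False: strongly absorbed by water vapour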


Even though water vapor has a bigger overall influence on warming, CO2 is the most important factor in controlling atmospheric temperature. This is because water vapor is self-regulating: if more water vapor is added to the atmosphere without an accompanying change in temperature, the water vapor condenses back to liquid (Mohammad Rezaei, Mahdi Khazaei, 2022). Greenhouse gases differ from other emissions because they persist in the atmosphere as a gas long enough to raise the temperature; only then can the lower atmosphere hold additional water vapor, amplifying the original CO2-driven heating until no more water vapor can be accommodated. However, not all wavelengths of solar electromagnetic radiation reach the planet, and not all wavelengths emitted by the Earth reach space. Some of this energy is absorbed by the atmosphere, while other wavelengths pass through. The regions where energy passes through are referred to as atmospheric "windows." In remote sensing, we use these windows to look into the atmosphere, from which we can gather a wealth of meteorological information (Emberton, S., Chittka, L., Cavallaro, A., & Wang, M. 2015). The visible and near-infrared portions of the electromagnetic spectrum account for most of the Sun's energy, while the Earth emits essentially all of its radiated energy in the infrared.
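The last sentence can be checked with Wien's displacement law, lambda_max = b / T, which gives the wavelength of peak emission of a black body. Wien's law is not mentioned in the cited sources, and the temperatures below are standard approximate values for the Sun and the Earth, used here purely as an illustration.

WIEN_CONSTANT_UM_K = 2898.0   # Wien's displacement constant, micrometre-kelvin

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak emission wavelength (micrometres) of a black body at the given temperature."""
    return WIEN_CONSTANT_UM_K / temperature_k

print(round(peak_wavelength_um(5778.0), 2))  # ~0.50 um: the Sun peaks in the visible
print(round(peak_wavelength_um(288.0), 1))   # ~10.1 um: the Earth peaks in the thermal infrared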
