Earth Science and Remote Sensing Applications

Series of Remote Sensing/Photogrammetry

Vol. 43, pp.1-31, 2018, Springer

Copyright © Springer, 2018

Chapter 1

Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle
Swarm Optimization of Convolutional Neural Network for Effective
Comparison of Bucolic and Farming Region

P.S.Jagadeesh Kumar, Tracy Lin Huan, Xianpei Li, Yanmin Yuan

Abstract. With advances in remote sensing and Earth observation
engineering, imagery of high multispectral and spatial resolution from
sensors such as Landsat Thematic Mapper, SPOT, IKONOS, WorldView, SeaStar
and GeoEye has become available and is widely used in topographic
monitoring, planning, mining and information extraction. To improve the
quality of fused products, researchers have proposed image fusion schemes
that combine panchromatic and multispectral images. This chapter focuses on
the optimized fusion of high-resolution panchromatic and low-resolution
multispectral images using particle swarm optimization of a convolutional
neural network for classifying bucolic and farming regions. Qualitative and
quantitative evaluation approaches, both with and without a reference
image, were used to measure the quality of the fused images. The
experimental results demonstrate that the proposed method enhances the
quality of the fused images and yields an effective comparison of bucolic
and farming regions.
Keywords: Image fusion, particle swarm optimization, bucolic and farming
region classification, convolutional neural network, multispectral imaging,
panchromatic imaging.
Cite this chapter as: P.S.Jagadeesh Kumar, Tracy Lin Huan, Xianpei Li,
Yanmin Yuan. (2018) ‘Panchromatic and Multispectral Remote Sensing
Image Fusion using Particle Swarm Optimization of Convolutional Neural
Network for Effective Comparison of Bucolic and Farming Region’, Earth
Science and Remote Sensing Applications, Series of Remote Sensing
/Photogrammetry, Vol. 43, pp.1-31, Springer.

This work was funded and carried out at Dartmouth College, Hanover, New
Hampshire, United States under the project titled “Bucolic and Farming
Region Taxonomy Using Neural Networks for Remote Sensing Images”.

Series of Remote Sensing/Photogrammetry, Springer, 2018 (Print ISSN: 2198-0721)



1 Introduction

Most Earth observation satellites, for example QuickBird, SPOT, IKONOS,
FORMOSAT or OrbView, and a number of advanced airborne sensors record image
data in one of two modes: a low-resolution multispectral (MS) mode or a
high-resolution panchromatic (PAN) mode. A common characteristic of these
sensors is that the best spatial resolution is recorded in the panchromatic
mode, while the multispectral mode produces images of reduced spatial
resolution. The difference in spatial resolution between the panchromatic
and the multispectral mode is usually bounded by the ratio of the
corresponding ground sample distances (GSD) and varies between 1:3 and 1:6.
This ratio may become worse if data from different satellites are used; for
example, a resolution ratio of 1:11 is reported between the IKONOS PAN mode
and the IKONOS MS mode. The purpose of image fusion is to merge the
panchromatic and the multispectral data to form a fused multispectral image
that retains the spatial detail of the high-resolution panchromatic image
and the spectral characteristics of the low-resolution multispectral image.
Applications for such fused image datasets include urban mapping, change
detection, and rural and farming (bucolic and agricultural) region
classification. Image fusion methods have commonly been developed for
single-sensor, single-date fusion; for instance, IKONOS panchromatic images
are fused with the corresponding IKONOS multispectral images. Multisensor
or multitemporal fusion is employed less often, for example with Landsat
multispectral and SPOT panchromatic data. Consequently, most fusion
procedures show limitations when different sensors from distinct platforms
are combined. With the continuous development of computer science and
technology, research in the field of image processing has gradually
expanded, and a new trend is the use of nature-inspired computing; one of
the emerging techniques is image segmentation based on swarm intelligence.
Swarm intelligence (SI) is a newly developing area in various fields,
including optimization. One very popular SI technique is particle swarm
optimization (PSO), a method for finding optimized solutions. PSO is a
stochastic search technique based on the sociological behaviour of bird
flocking: it initializes a population of particles that simulates a flock
of birds. The computation in PSO is simple and converges quickly, so it can
be applied to a wide range of optimization problems in many fields,
including image processing tasks such as image segmentation. The objective
of this chapter is to provide an effective comparison of bucolic and
farming regions based on PSO combined with convolutional neural networks.


2 Remote Sensing Image Fusion

Remote sensing image fusion denotes the procedure of merging two or more
satellite sensor images into one composite image that assimilates the
information contained within the discrete input images. The resultant image
has a richer information content than any of the input sensor images. The
aim of a remote sensing image fusion procedure is to appraise the data at
each pixel position in the input images and to retain the information that
best signifies the true scene, or to improve the usefulness of the fused
image for bucolic and farming region classification. Owing to system
trade-offs related to data volume and signal-to-noise-ratio restrictions,
remote sensing images tend to have either low spectral resolution and high
spatial resolution or vice versa. The selection of input data for the
fusion procedure is strongly dependent on the purpose of the fusion: data
that are suitable in one instance might be impractical in another. The
choice hinges on the characteristics of the scene, the sensor features,
data availability, and the availability of suitable algorithms for
information extraction.

2.1 Panchromatic Images and Multispectral Images

In remote sensing, different types of sensors are available, both
spaceborne and airborne. The images captured by these sensors can be
arranged into two kinds: panchromatic and multispectral images. A
panchromatic image is acquired in a single broad band that spans most of
the visible (and often near-infrared) spectrum, so the reflected energy is
recorded in one channel. A multispectral image is acquired in several
narrower spectral bands; the bands can be combined, with individual bands
displayed as red, green and blue, to form a composite colour image, and
combining different band images in different ways highlights different
features of the Earth's surface. On a high-spatial-resolution PAN image,
general geometric features can easily be recognized, while the MS image
carries richer spectral information. The utility of the images can be
enhanced if the advantages of both high spatial and high spectral
resolution are combined into one single image. The detailed structures of
such a composite image can then be readily perceived, which benefits many
applications. By means of suitable algorithms it is possible to blend the
MS and PAN bands and produce a synthetic image with the best properties of
both. This methodology is known as multisensor merging, combining, or
fusion. Its goal is to absorb the spatial detail of the high-resolution PAN
image and the colour information of the low-resolution MS image to obtain a
high-resolution MS image. Fig.1, Fig.2 and Fig.3 show the GeoEye-1 0.5m
high-resolution panchromatic image, the GeoEye-1 low-resolution
multispectral image, and the fused panchromatic and multispectral image,
respectively.


Fig. 1. GeoEye-1, 0.5m High Resolution Panchromatic Image

Fig. 2. GeoEye-1, 0.5m Low resolution Multispectral Image

Fig. 3. Fused Image of GeoEye-1, 0.5m


2.2 Need For Fusing Remote Sensing Images

Remote sensing delivers multimodal and multitemporal data from the Earth's
surface. In order to cope with these multidimensional data sources and to
make the most of them, image fusion is a valuable tool. Over recent decades
it has developed into a usable combination technique for extracting
information of higher quality and reliability. As more sensors and advanced
image fusion strategies have become available, scientists have conducted a
large number of successful studies using image fusion. Remote sensing image
fusion has become an established, state-of-the-art processing approach for
extracting the optimal information from multisensor data, and has
demonstrated its usefulness and significance in numerous applications over
the past 20 years. Early studies were devoted to understanding the
complementarity of optical (visible and infrared) and microwave (radar)
remote sensing, and to increasing spatial resolution while maintaining the
spectral integrity of optical sensor images by means of pansharpening. The
first successful applications were mapping, GIS, farming,
stereo-photogrammetry, geology and flood monitoring. Pansharpening forms a
sub-group within remote sensing image fusion. It emerged with the
availability of single-platform multisensor images, notably the
multispectral and panchromatic channels of the first SPOT satellite.
Together with the recognition of the value of combined complementary
images, fusion for pansharpening is one reason why image fusion has gained
popularity and visibility, apart from the fact that more research has gone
into this scientific field. The increase in available sensors, spatial
resolution and computing power has been a contributing factor to the
popularity of remote sensing image fusion. This development is accompanied
by the difficulty, for inexperienced users, of identifying suitable
processing methods for multisensor datasets. There are decisions to be made
with respect to the right images, pre-processing procedures, image fusion
approaches and the final interpretation methods for the fused data.

3 Remote Sensing Images and Fusion Algorithms

Remote sensing procedures have proved to be effective tools for observing
the Earth's surface and environment on a global, regional and even local
scale, by providing essential coverage, mapping and classification of land
cover features such as vegetation, soil, water and forests. The volume of
remote sensing imagery continues to grow at an enormous rate owing to
advances in sensor technology for both high spatial and high temporal
resolution systems. An increasing quantity of image data from airborne and
satellite sensors has become available, including multi-resolution,
multi-temporal, multi-frequency/spectral-band and multi-polarization
images. Remote sensing data are useful and easy to obtain over a vast area
at little cost, but because of the effects of cloud, aerosols, solar
elevation angle and bi-directional reflection, the surface energy
parameters retrieved from remote sensing data are frequently missing; at
the same time, the natural variation of surface-parameter time-series plots
is also affected. To diminish such effects, a time-composite technique is
generally adopted. The objective of multi-sensor image fusion is to
integrate complementary and redundant information to provide a composite
image which enables a better understanding of the entire scene.

3.1 Image Fusion Methods

In this sub-section, various image fusion approaches are described in
brief. Image fusion methods can be divided into three types: pixel-level,
feature-level, and decision-level image fusion.
3.1.1. Pixel Level Fusion Method. Fusion is performed on a pixel-by-pixel
basis, as represented in Fig.4. It creates a fused image in which the
information associated with each pixel is determined from a set of pixels
in the source images, in order to improve the performance of subsequent
image processing tasks. Pixel-level fusion is the lowest level of image
fusion: a new image is formed whose pixel values are obtained by combining
the pixel values of the different images through some computation, under
strict registration conditions. The new image retains more of the raw data
and provides rich and accurate image information, which can then be used
for further analysis and processing such as feature extraction and
classification. Pixel-level fusion may be single-sensor, multi-sensor or
temporal image fusion. The advantage of pixel-level fusion is the least
loss of information; however, it has the greatest amount of information to
be processed, hence the slowest processing speed and higher demands on
equipment and registration accuracy.
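As an illustrative sketch (not the chapter's own implementation),
pixel-level fusion in its simplest form can be written as a weighted
average of co-registered pixel values; the function name and the default
weight are assumptions made for the example:

```python
import numpy as np

def pixel_fuse(img_a, img_b, w=0.5):
    """Pixel-level fusion of two co-registered images of equal size:
    each output pixel is a weighted average of the corresponding input
    pixels (w weights img_a, 1 - w weights img_b)."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("pixel-level fusion requires registered images of equal size")
    return w * a + (1.0 - w) * b
```

Strict registration, as noted above, is what makes the per-pixel
combination meaningful: the two arrays must describe the same ground
location at every index.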
3.1.2. Feature Level Fusion Method. This requires the extraction of objects
recognized in the various data sources, as shown in Fig.5. It relies on the
extraction of salient features, such as pixel intensities, edges or
textures, depending on their context. These similar features from the input
images are then combined. Feature-level fusion is the middle level of image
fusion, where features such as edges, texture, shape, size, angle, speed
and similar descriptors of the region of interest are extracted from
multiple images of the same geographic region by independent preprocessing.
The extracted features are joined to form a combined feature set, which is
further classified using statistical or other types of classifiers.
Features from the different source images, preprocessed using different
schemes, are merged to form a decision.


Fig. 4. Schematic of Pixel Level Fusion

3.1.3. Decision-Level Fusion Method. This involves merging information at a
higher level of abstraction, combining the results from multiple algorithms
to yield a final fused decision, as portrayed in Fig.6. The input images
are processed individually for information extraction, and the obtained
information is then combined by applying decision rules to reinforce a
common interpretation. Decision-level fusion is a high-level fusion whose
results provide the basis for classification and control decision-making.
In decision-level fusion the images are classified independently; the
processed information is then refined by combining the information obtained
from the different sources, and differences are resolved according to
certain decision rules. In the literature, two kinds of decision-level
fusion are considered. Outputs from different types of classifiers for the
same image may be combined to improve classification accuracies, or two
complementary sources, such as optical imagery and radar data, can be
classified independently and combined to produce a refined classification
map. A variety of logical reasoning methods, statistical approaches and
information-theoretic procedures can be used for decision-level fusion, for
instance Bayesian inference, Dempster-Shafer evidence theory, voting
schemes, cluster analysis, fuzzy set theory, neural networks and the
entropy method. Decision-level fusion has good real-time performance and
fault tolerance, but its pretreatment cost is higher. The data volume of
decision-level fusion is the smallest and its immunity to interference is
the highest. The reliability and credibility of the fused results are high,
and the performance of the multisensor system is improved.


Fig. 5. Schematic of Feature Level Fusion

Fig. 6. Schematic of Decision Level Fusion


3.2 Choosing an Image Fusion Algorithm for Remote Sensing Application

In this section, the selection of an image fusion algorithm for various
remote sensing applications is discussed in brief.
3.2.1. In Intensity-Hue-Saturation (IHS) based image fusion, three bands of
a multispectral image are transformed from the RGB domain into the IHS
colour space. The panchromatic component is histogram-matched to the
intensity of the IHS image and replaces the intensity component. A modified
IHS fusion, developed for a better fit of the fused multispectral bands to
the original data, can also be used. After matching, the panchromatic image
replaces the intensity in the original IHS image and the fused image is
transformed back into the RGB colour space. This technique works well with
data from one sensor, but for multitemporal or multisensor fusion the
results are generally not satisfactory.
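A minimal sketch of the substitution step, assuming the common
simplification that the intensity I is the mean of the three MS bands and
omitting the histogram-matching stage described above:

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Simplified IHS pansharpening: take the intensity I as the mean of
    the three MS bands, then add the difference (pan - I) to every band,
    which is equivalent to substituting pan for I in the IHS transform.
    ms: (H, W, 3) MS image resampled to pan resolution; pan: (H, W)."""
    ms = np.asarray(ms, dtype=np.float64)
    pan = np.asarray(pan, dtype=np.float64)
    intensity = ms.mean(axis=2)                # I component
    return ms + (pan - intensity)[..., None]   # replace I with pan
```

By construction, the band mean of the fused result equals the pan band,
which is exactly the intensity-substitution property of IHS fusion.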
3.2.2. Principal Component Analysis (PCA) is a statistical technique that
transforms a multivariate dataset of correlated variables into a dataset of
uncorrelated linear combinations of the original variables. For images it
creates an uncorrelated feature space that can be used for further analysis
instead of the original multispectral feature space. Generally, the PCA
transform is applied to the multispectral bands and the panchromatic image
is histogram-matched to the first principal component. It then replaces the
selected component, and an inverse PCA transform takes the fused dataset
back into the original multispectral feature space. The advantage of the
PCA-based fusion method is that the number of bands is not restricted. It
is, however, a statistical method, which means that it is sensitive to the
area to be sharpened; the fusion results may vary depending on the selected
image subsets.
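The component-substitution loop can be sketched as follows; matching the
pan band to the first component's mean and standard deviation stands in for
the full histogram matching, and the function name is an assumption for the
example:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA fusion sketch: replace the first principal component of the MS
    bands with the mean/std-matched pan band, then invert the transform.
    ms: (H, W, B); pan: (H, W), co-registered at pan resolution."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(np.float64)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]     # components by variance
    pcs = Xc @ vecs                            # forward PCA
    p = pan.reshape(-1).astype(np.float64)
    # match pan statistics to the first component before substitution
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mu                  # inverse PCA
    return fused.reshape(h, w, b)
```

As the paragraph notes, the covariance (and hence the result) depends on
the chosen image subset, which is the statistical sensitivity of PCA
fusion.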
3.2.3. The Ehlers fusion is based on an IHS transform coupled with
filtering in the Fourier domain. The method is extended to include more
than three bands by using multiple IHS transforms until the number of bands
is exhausted. A subsequent Fourier transform of the intensity component and
the panchromatic image allows adaptive filter design in the frequency
domain. Using fast Fourier transform (FFT) techniques, the spatial
components to be enhanced or suppressed can be directly accessed. The
intensity spectrum is filtered with a low-pass filter, while the
panchromatic spectrum is filtered with a complementary high-pass filter.
After filtering, the images are converted back into the spatial domain with
an inverse FFT and added to form a fused intensity component with the
low-frequency information from the low-resolution multispectral image and
the high-frequency information from the high-resolution image. This new
intensity component and the original hue and saturation components of the
multispectral image form a new IHS image. These steps can be repeated with
successive three-band selections until all bands are fused with the
panchromatic image. The Ehlers fusion shows the best spectral preservation
but also the highest computation time.


3.2.4. Wavelet Transform fusion is implemented in the ERDAS Imagine
software package. For image fusion, a wavelet transform is applied to the
panchromatic image, producing a four-component decomposition: a
low-resolution approximation component (LL) and three images of horizontal
(HL), vertical (LH), and diagonal (HH) wavelet coefficients which contain
information on local spatial detail. The low-resolution component is then
replaced by a selected band of the multispectral image. This procedure is
repeated for each band until all bands are transformed. An inverse wavelet
transform is applied to the fused components to produce the fused
multispectral image. In general, wavelet-fused images show good spectral
preservation but poor spatial enhancement. The AWL method is one of the
recent multiresolution wavelet-based image fusion techniques. It was
originally designed for a three-band red-green-blue (RGB) multispectral
image. In this method the spectral signature is preserved because the
high-resolution panchromatic structure is integrated into the luminance
band of the original low-resolution multispectral image; consequently, the
method is defined for three bands. A generalized version maintains the
spectral signature of an n-band image just as AWL does with RGB images;
this generalized method is called proportional AWL (AWLP). It produces
better results than standard wavelet algorithms, but the spatial
enhancement is generally still not acceptable.
3.2.5. The Multiplicative method is derived from the four-component
technique. Among the four possible arithmetic operations, only
multiplication is unlikely to distort the colours when combining an
intensity image with a panchromatic image. The algorithm is thus a simple
multiplication of each multispectral band with the panchromatic image. Its
advantage is that it is straightforward and simple. By multiplying the same
information into all bands, however, it creates spectral bands of higher
correlation, which means that it alters the spectral characteristics of the
original image data.
3.2.6. The Brovey Transformation was created to avoid the disadvantages of
the multiplicative method. It is a combination of arithmetic operations
that normalizes the spectral bands before they are multiplied with the pan
image. The spectral properties, however, are usually not well preserved.
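The normalization step can be sketched as follows, assuming an MS cube of
shape (H, W, B) and a co-registered pan band; the small epsilon guarding
division by zero is an implementation choice, not part of the method:

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform: normalize each MS band by the sum of all bands,
    then multiply by the pan image. ms: (H, W, B); pan: (H, W)."""
    ms = np.asarray(ms, dtype=np.float64)
    total = ms.sum(axis=2) + eps               # normalization term
    ratio = np.asarray(pan, dtype=np.float64) / total
    return ms * ratio[..., None]
```

If the pan band happens to equal the band sum, the transform leaves the MS
image unchanged, which shows that only the ratio of pan to total brightness
is injected into the bands.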
3.2.7. Color Normalization spectral sharpening is an extension of the Brovey
algorithm and groups the input image bands into spectral segments defined by the
spectral range of the panchromatic image. The corresponding band segments are
processed together. Each input band is multiplied by the sharpening band and then
normalized by dividing it by the sum of the input bands in the segment. This
method works well for data from one sensor, but if the spectral range of
the panchromatic image does not match the spectral range of the
multispectral images, no spatial improvement is visible.


3.2.8. The Gram-Schmidt fusion simulates a panchromatic band from the
lower-resolution spectral bands; typically this is accomplished by
averaging the multispectral bands. Subsequently, a Gram-Schmidt transform
is performed on the simulated panchromatic band and the multispectral
bands, with the simulated panchromatic band used as the first band. The
high-spatial-resolution panchromatic band then replaces the first
Gram-Schmidt band. Finally, an inverse Gram-Schmidt transform is applied to
produce the pansharpened multispectral bands. This method usually produces
good results for fusing images from one sensor, but like PCA it is a
statistical approach, so the fusion results may vary depending on the
selected datasets.
3.2.9. In High-Pass Filtering (HPF) fusion, first the ratio between the
spatial resolutions of the panchromatic and the multispectral image is
calculated. A high-pass convolution filter kernel is created and used to
filter the high-resolution input data, with the size of the kernel based on
the ratio. The HPF image is added to each multispectral band. Prior to the
summation, the HPF image is weighted relative to the global standard
deviation of the multispectral bands, with the weight factors again
computed from the ratio. As a last step, a linear stretch is applied to the
new multispectral image to match the mean and standard deviation values of
the original input multispectral image. HPF fusion gives acceptable results
for both multisensor and multitemporal data, although the edges are
sometimes emphasized excessively.
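A simplified HPF sketch is given below, using a box blur to obtain the
high-pass component and a fixed 0.5 scaling of the per-band
standard-deviation weight; in the method described above, the kernel size
and weight factors are instead derived from the resolution ratio, so both
constants here are assumptions:

```python
import numpy as np

def box_blur(x, k):
    """Mean filter with a k-by-k kernel and edge padding."""
    pad = k // 2
    xp = np.pad(np.asarray(x, dtype=np.float64), pad, mode="edge")
    out = np.zeros(x.shape, dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (k * k)

def hpf_fuse(ms, pan, k=5):
    """HPF fusion sketch: add the weighted high-pass of pan to each MS
    band, weighting by the band's standard deviation."""
    pan = np.asarray(pan, dtype=np.float64)
    hp = pan - box_blur(pan, k)                # high-pass component
    out = np.asarray(ms, dtype=np.float64).copy()
    for b in range(out.shape[2]):
        w = 0.5 * out[..., b].std() / (hp.std() + 1e-12)
        out[..., b] += w * hp
    return out
```

A pan band with no spatial detail contributes nothing, so a constant pan
image leaves the MS bands unchanged.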
3.2.10. In the University of New Brunswick (UNB) fusion algorithm, a
histogram standardization is computed for the multispectral and
panchromatic bands of the input images. The multispectral bands within the
spectral range of the panchromatic image are selected and a regression
analysis is computed using a least-squares algorithm. The results are used
as weights for the multispectral bands: multiplication with the
corresponding bands followed by addition produces a new synthesized image.
To create the fused image, each standardized multispectral band is
multiplied with the standardized panchromatic image and divided by the
synthesized image. This technique was designed for single-sensor,
single-date images and does not deliver adequate results for multisensor
and multitemporal fusion. It is used as the standard method for QuickBird
pansharpening.
3.2.11. Neural Networks are systems that attempt to imitate the processing
found in biological nervous systems. A neural network consists of layers of
processing elements, or nodes, which may be interconnected in a variety of
ways. A neural network can be trained using a sample or training dataset,
either supervised or unsupervised depending on the training mode, to
perform correct classifications by systematically adjusting the weights in
the activation function; this activation function defines the processing in
a single node. The ultimate objective of neural network training is to
minimize the cost or error function over all possible cases through the
input-output relationship. Neural networks can be used to transform
multisensor data into a joint declaration of identity for a feature. Fig.7
shows a four-layer network with each layer having several processing
elements.

Fig. 7. Four-layered Neural Network
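The layered computation of Fig.7 can be sketched as a chain of weighted
sums followed by activations; the sigmoid choice and the layer sizes used
in the usage example are illustrative assumptions, and training (adjusting
W and b to minimize the error function) is omitted:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Forward pass through a fully connected network.
    layers: list of (W, b) pairs; each node computes sigmoid(x @ W + b)."""
    for W, b in layers:
        x = sigmoid(x @ W + b)
    return x
```

For a four-layer network such as the one in Fig.7, `layers` would hold
three (W, b) pairs connecting the input layer to the output layer through
two hidden layers.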

3.3 Nature-Inspired Optimization Algorithms

Nature has inspired numerous scientists in many ways and is accordingly a
rich wellspring of ideas. Nowadays, most new algorithms are nature-inspired,
because they have been developed by drawing inspiration from nature. Even
with the emphasis on the source of inspiration, different levels of
classification are possible depending on further details such as the type
of sub-source used. For simplicity, the highest-level sources, for example
biology, physics or chemistry, can be considered. In the most generic
terms, the principal source of inspiration is nature; consequently, all
such new algorithms can be referred to as nature-inspired. By far the
majority of nature-inspired algorithms are based on some successful
characteristics of biological systems, so the largest fraction of
nature-inspired algorithms is biology-inspired, or bio-inspired for short.
Among bio-inspired algorithms, a special class has been developed by
drawing inspiration from swarm intelligence; this portion of the
bio-inspired algorithms can be called swarm-intelligence based. Algorithms
based on swarm intelligence are among the most popular; good examples are
ant colony optimization, particle swarm optimization, cuckoo search, the
bat algorithm, and the firefly algorithm. Clearly, not all algorithms are
based on biological systems; many have been developed by drawing on
physical and chemical systems, and some are even based on music.


3.4 Swarm Intelligence

Swarm intelligence (SI) concerns the emergent, collective behaviour of
multiple interacting agents that follow some simple rules. While each agent
may be considered unintelligent, the aggregate system of many agents may
show self-organizing behaviour and can thus behave like a kind of
collective intelligence. Many algorithms have been developed by drawing
inspiration from swarm-intelligence systems in nature. All SI-based
algorithms use multiple agents, inspired by the collective behaviour of
social insects such as ants, termites, bees and wasps, as well as by other
animal societies such as flocks of birds or schools of fish. The classical
particle swarm optimization (PSO) uses the swarming behaviour of fish and
birds, while the firefly algorithm (FA) uses the flashing behaviour of
swarming fireflies. Cuckoo search (CS) is based on the brood parasitism of
some cuckoo species, while the bat algorithm uses the echolocation of
foraging bats. Ant colony optimization uses the interaction of social
insects (e.g., ants), while the class of bee algorithms is based on the
foraging behaviour of honey bees. SI-based algorithms are among the most
prevalent and widely used. There are many reasons for such popularity; one
is that SI-based algorithms typically share information among multiple
agents, so that self-organization, co-evolution and learning during
iterations may provide the high efficiency of most SI-based algorithms.
Another reason is that multiple agents can be parallelized easily, so that
large-scale optimization becomes more practical from the implementation
point of view.

3.5 Benefits of Particle Swarm Optimization (PSO)

The advantages of PSO over other nature-inspired algorithms are:

1. PSO is based on swarm intelligence and can be applied to both scientific
research and engineering problems.
2. PSO has no crossover and mutation operations. The search is carried out
through the velocity of the particles. Over several generations, only the most
optimistic particle transmits information to the other particles, so the
search is fast.
3. The computation in PSO is very simple. Compared with other algorithms, it
offers greater optimization capability and is easy to implement.
4. PSO adopts real-number coding, determined directly by the solution; the
number of dimensions equals the dimensionality of the solution.
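As an illustration of points 1-4, the velocity-and-position update at the heart of PSO can be sketched in a few lines of Python. This is a minimal generic sketch, not the variant used in the proposed system; the inertia weight `w` and acceleration coefficients `c1`, `c2` are common textbook defaults, and the search space is assumed to be the unit hypercube.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise `fitness` over [0, 1]^dim with a basic particle swarm."""
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity pulls toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:                    # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                   # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(v * v for v in x), dim=3)` drives the swarm toward the origin corner of the unit cube; note that no crossover or mutation operator appears anywhere in the loop.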

Series of Remote Sensing/Photogrammetry, Springer, 2018 (Print ISSN: 2198-0721)


P.S.Jagadeesh Kumar et al. 15

4 Machine Intelligence and Image Fusion

Machine Learning (ML) and Artificial Intelligence (AI) have advanced rapidly
in recent years. Techniques from both ML and AI play a critical role in image
understanding, image fusion, image registration, and image segmentation. Image
retrieval and analysis methods in ML extract information from images and
represent it effectively and efficiently. These methods comprise conventional
algorithms without feature learning, such as the Support Vector Machine (SVM)
and classical Neural Networks (NN), and deep learning algorithms such as the
Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long
Short-Term Memory (LSTM), Extreme Learning Machine (ELM), Generative
Adversarial Networks (GANs), and so on. The former algorithms are limited in
handling natural images in their raw form, are time-consuming, depend on
expert knowledge, and require considerable time for feature tuning. The latter
algorithms are fed with raw data, learn features automatically, and are fast.
They attempt to learn multiple levels of abstraction and representation
automatically from large sets of images that exhibit the desired behavior of
the data. Automated classification of rural and farming regions in remote
sensing images using traditional methods has been demonstrated with
significant accuracy for a considerable time, but recent advances in machine
learning have ignited an explosion in deep learning. CNN-based algorithms have
shown promising performance and speed in domains such as speech recognition,
text recognition, lip reading, computer-aided diagnosis, face recognition,
drug discovery, and remote sensing applications.

4.1 Why Convolutional Neural Networks?

Recent advances in machine learning have achieved promising results on many
challenging tasks. The state of the art in object detection is represented by
convolutional neural networks (CNNs), such as the Fast R-CNN algorithm. These
CNN-based techniques improve detection performance substantially on several
public generic object detection datasets. CNNs have had a critical impact on
remote sensing applications, achieving promising results in many difficult
object detection challenges. Compared with high-level image fusion, the
proposed method can achieve higher accuracy and computational efficiency. A
CNN consists of one or more convolutional layers, often with a subsampling
layer, followed by one or more fully connected layers as in a standard neural
network. The design of a CNN is inspired by the discovery of the visual cortex
in the brain. The visual cortex contains a large number of cells responsible
for detecting light in small, overlapping sub-regions of the visual field,
called receptive fields. These receptive fields act as local filters over

Earth Science and Remote Sensing Applications, Vol. 43, pp.1-31, 2018, Springer
Panchromatic and Multispectral Remote Sensing Image Fusion using Particle Swarm
Optimization of CNN for Effective Comparison of Bucolic and Farming Region

the input space, and the more complex cells have larger receptive fields. The
convolution layer in a CNN performs the function carried out by the cells of
the visual cortex. A typical CNN is shown in Fig. 8. Each unit of a layer
receives inputs from a set of units located in a small neighborhood of the
previous layer, called a local receptive field. With local receptive fields,
units can extract elementary visual features such as oriented edges,
end-points, and corners, which are then combined by the higher layers. In the
traditional model of pattern/image recognition, a hand-designed feature
extractor gathers relevant information from the input and eliminates
irrelevant variability. The extractor is followed by a trainable classifier, a
standard neural network that classifies feature vectors into classes. In a
CNN, the convolution layers play the role of the feature extractor, but they
are not hand-designed: the convolution filter kernel weights are learned as
part of the training process. Convolutional layers can extract local features
because they restrict the receptive fields of the hidden layers to be local.

Fig. 8. A typical 2 Stage Convolutional Neural Network

CNNs are used in a variety of areas, including image and pattern recognition,
speech recognition, natural language processing, and video analysis. There are
several reasons why convolutional neural networks are becoming important. In
traditional models for pattern recognition, feature extractors are
hand-designed. In CNNs, the weights of the convolutional layers used for
feature extraction and of the fully connected layers used for classification
are determined during the training process. The improved network structures of
CNNs lead to savings in memory and computational complexity requirements and,
at the same time, give better performance for applications where the input has
local correlation, e.g., image and speech. The large computational
requirements for training and evaluating CNNs are sometimes met by graphics
processing units, DSPs, or other silicon architectures optimized for high
throughput and low energy when executing the specific computation patterns of
CNNs. In fact, advanced processors such as the Tensilica Vision P5 DSP for
Imaging and Computer Vision from Cadence have a nearly ideal set of
computation and memory resources for running CNNs at high efficiency.


4.2 Advantages of Convolutional Neural Networks

Convolutional neural networks are biologically inspired variants of the
multilayer perceptron (MLP), designed to emulate the behavior of a visual
cortex. These models mitigate the challenges posed by the MLP architecture by
exploiting the strong spatially local correlation present in natural images.
CNNs have the following distinguishing features:
1. Detection using CNNs is robust to distortions, such as changes in shape due
to the camera lens, different lighting conditions, different poses, the
presence of partial occlusions, and horizontal and vertical shifts. CNNs are
shift invariant because the same weight configuration is used across space. In
principle, shift invariance can also be achieved with fully connected layers,
but the result of training in that case is multiple units with identical
weight patterns at different locations of the input; learning those weight
patterns would require a large number of training instances to cover the space
of possible variations.
2. In the same hypothetical setting where a fully connected layer is used to
extract the features, an input image of size 32x32 and a hidden layer with
1000 units would require on the order of 10^6 coefficients, an enormous memory
requirement. In a convolutional layer, the same coefficients are used across
different locations in space, so the memory requirement is drastically
reduced.
3. Training a standard neural network equivalent to a CNN would take
proportionately longer, because the number of parameters would be considerably
higher. In a CNN, since the number of parameters is reduced, training time is
proportionately reduced. Moreover, in practical training, a standard neural
network equivalent to a CNN would have more parameters, which would introduce
more noise during the training process.
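The memory argument in point 2 can be checked with simple arithmetic; the choice of 32 feature maps in the convolutional case is an illustrative assumption, not a figure from the proposed network.

```python
# Fully connected: every pixel of a 32x32 input feeds each of 1000 hidden units.
fc_params = 32 * 32 * 1000   # weights for the fully connected layer
# Convolutional: one shared 5x5 kernel per feature map; assume 32 feature maps.
conv_params = 5 * 5 * 32     # shared weights for the convolutional layers
print(fc_params)             # 1024000, on the order of 10^6
print(conv_params)           # 800
```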

5 Implementation

As illustrated in Fig. 9, the whole fusion system is composed of four modules:

Module 1. An image fusion module, which fuses the panchromatic image and the
multispectral images into an optimized fused image using particle swarm
optimization of a convolutional neural network.
Module 2. An ROI classification and fine regression module, which is performed
to obtain the corresponding bucolic and farming regions.


Module 3. A qualitative and quantitative evaluation module, which evaluates
the quality of the optimized fused image with and without a reference image.
Module 4. A comparative evaluation module, which compares the classification
of bucolic and farming regions of the optimized fused image with and without a
reference image.
Initially, the panchromatic image and the multispectral image are fed as input
to the convolutional neural network in order to perform the fusion. Particle
swarm optimization is then performed for every 5x5 convolution, and the
corresponding 2x2 subsampling is obtained as illustrated in Fig. 10. At the
outset, the Particle Swarm Optimization (PSO) parameters are defined and the
fusion of the PAN and MS images is achieved through 5x5 optimized
convolutions. Then 2x2 subsampling optimization is applied to every 5x5
optimized convolution. The fitness function is evaluated against the required
conditions for a maximum number of iterations. If the number of iterations
reaches the maximum or the required condition is satisfied, the gbest is
saved; otherwise the PSO is updated and the fusion is performed again. Once
the gbest is saved, it is fed to the full-connection stage, i.e. the CNN
classifier, to obtain the optimized fused image of the PAN and MS images.
Further details of fusing panchromatic and multispectral images are given in
Section 5.1. In the second module, ROI classification and regression are
applied to classify the bucolic and farming regions, as described in Section
5.2. The third module evaluates the quality of the optimized fused images with
and without a reference image, as described in Section 5.3. The fourth module
is the comparative evaluation of the bucolic and farming regions, dealt with
in Section 5.4.
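The control flow of this fusion loop can be outlined in code. This is a structural sketch only: `conv_step`, `pool_step`, `classify`, and `fitness` are caller-supplied placeholders standing in for the 5x5 optimized convolution, the 2x2 subsampling, the full-connection (CNN classifier) stage, and the fitness evaluation, and the swarm update shown is a trivial random perturbation rather than the full PSO of Fig. 10.

```python
import random

def pso_cnn_fusion(pan, ms, conv_step, pool_step, classify, fitness,
                   max_iters=50, threshold=1e-3):
    """Outline of the fusion loop: fuse, evaluate fitness, keep gbest,
    update the swarm, and finally classify the best fused image."""
    gbest_score, gbest_fused = float("inf"), None
    params = [random.random() for _ in range(4)]       # PSO-tuned parameters
    for _ in range(max_iters):
        fused = pool_step(conv_step(pan, ms, params))  # fuse PAN + MS
        score = fitness(fused)
        if score < gbest_score:                        # save gbest
            gbest_score, gbest_fused = score, fused
        if score < threshold:                          # required condition met
            break
        # stand-in for the PSO velocity/position update
        params = [p + random.uniform(-0.1, 0.1) for p in params]
    return classify(gbest_fused)                       # full-connection stage
```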


Fig. 9. Block Diagram of Proposed Fusion System


Fig. 10. Particle Swarm Optimization (PSO) Framework

5.1 Performing Image Fusion of PAN and MS Images

Panchromatic and multispectral images from various sensors have been tested in
the proposed fusion system. A dataset containing 50 panchromatic images and 50
multispectral images from sensors such as WorldView, GeoEye, SPOT, QuickBird,
Landsat, and SeaStar, covering different scenes, was used. Fusion was
performed for the same scene captured at different times by different sensors.
The challenge is to classify the bucolic and farming regions of the fused
image. In general, the fused image obtained from fusing day-vision PAN and MS
images gives combined and detailed information, but the same does not hold for
night vision. This problem is overcome by the proposed fusion system, since
the fusion is based on an optimization method. The proposed system uses
particle swarm optimization because of its inherently stable structure and low
computational complexity. The panchromatic and multispectral images are fed as
input to the convolutional neural network in order to perform the fusion.
Particle swarm optimization is then performed for each 5x5 convolution, and
its corresponding 2x2 subsampling is obtained. At the start,

the Particle Swarm Optimization parameters are defined, and the fusion of the
PAN and MS images is accomplished through 5x5 optimized convolutions. Then 2x2
subsampling optimization is applied to each 5x5 optimized convolution. The
fitness is evaluated against the required conditions for a maximum number of
iterations. If the number of iterations reaches the maximum or the required
condition is satisfied, the gbest is saved; otherwise the PSO is updated and
the fusion is performed again. Once the gbest is saved, it is fed to the
full-connection stage, i.e. the CNN classifier, to obtain the optimized fused
image of the PAN and MS images.

5.2 ROI Classification and Regression

An image decomposition strategy is considered whereby the Regions of Interest
(ROIs) are represented using a quad-tree, or more specifically the Minimum
Bounding Rectangles (MBRs) surrounding the ROIs. The advantage is that a
quad-tree representation preserves the structural information of the ROI
contained in each MBR. By applying a weighted frequent subgraph mining
algorithm, gSpan-ATW, to this representation, frequent subgraphs that occur
across the tree-structured set of MBRs can be identified. The identified
frequent subgraphs, each describing some part of an MBR in terms of size,
contour, color, intensity, and edges, can then be used to form the principal
components of a feature space. This feature space is used to describe a set of
feature vectors for standard classification of bucolic and farming regions, as
shown in Fig. 11.

Fig. 11. ROI Based Classification and Regression Using Quadtree Approach
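A minimal version of the quad-tree decomposition over a binary ROI mask might look as follows. It is a simplified sketch: the node attributes and the homogeneity rule are illustrative assumptions, and the MBR extraction and gSpan-ATW mining stages are not shown.

```python
def quadtree(mask, x, y, size, min_size=2):
    """Recursively split a square region of a binary ROI mask into a
    quad-tree; homogeneous (or minimum-size) blocks become leaves."""
    block = [row[x:x + size] for row in mask[y:y + size]]
    flat = [v for row in block for v in row]
    if size <= min_size or len(set(flat)) == 1:        # homogeneous leaf
        return {"x": x, "y": y, "size": size,
                "value": max(set(flat), key=flat.count)}  # majority label
    h = size // 2                                       # split into quadrants
    return {"x": x, "y": y, "size": size, "children": [
        quadtree(mask, x,     y,     h, min_size),      # NW
        quadtree(mask, x + h, y,     h, min_size),      # NE
        quadtree(mask, x,     y + h, h, min_size),      # SW
        quadtree(mask, x + h, y + h, h, min_size),      # SE
    ]}
```

For a 4x4 mask whose top-left quadrant is ROI (value 1), the root splits into four leaves, of which only the NW leaf carries value 1; the leaf rectangles play the role of the structural nodes on which subgraph mining would operate.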


5.3 Evaluation of Optimized Fused Image

The assessment techniques are based on verifying the preservation of spectral
characteristics and the improvement of spatial resolution. First, the fused
images are compared visually. The visual appearance may be subjective and
depends on the human interpreter, but the power of visual inspection as a
final check cannot be underestimated. Second, several statistical evaluation
methods are used to measure color preservation. These methods must be
objective, reproducible, and quantitative in nature. The following
quantitative evaluation strategies were used:
5.3.1 Correlation Coefficient (CC) between the original multispectral bands
and the fused bands. This value ranges from -1 to 1. The best correspondence
between fused and original image data is indicated by the highest correlation
values.
5.3.2 Per-pixel Deviation (PD): it is necessary to degrade the fused image to
the spatial resolution of the original image. This image is then subtracted
from the original image on a per-pixel basis. As a final step, the average
deviation per pixel is computed as a digital number, based on an 8-bit or
16-bit range. Here, zero is the best value.
5.3.3 Root Mean Square Error (RMSE) is computed from the differences between
the fused and the original image, taking their standard deviations and means
into account. The ideal value is again zero.
5.3.4 Structure Similarity Index (SSIM) is a technique that combines a
comparison of luminance, contrast, and structure and is applied locally in an
8x8 square window. This window is moved pixel by pixel over the entire image.
At each step, the local statistics and the SSIM index are calculated within
the window. The values vary between 0 and 1; values close to 1 indicate the
highest correspondence with the original images. The goal is to find the fused
image with the best combination of spectral-characteristic preservation and
spatial improvement.
5.3.5 High Pass Correlation (HCC) is the correlation between the original
panchromatic band and the fused bands after high-pass filtering. The high-pass
filter is applied to the panchromatic image and to each band of the fused
image; then the correlation coefficients between the high-pass-filtered bands
and the high-pass-filtered panchromatic image are calculated.
5.3.6 Edge Detection (ED) in the panchromatic image and the fused
multispectral bands: for this, a Sobel filter is chosen and a visual
inspection of the edges detected in the panchromatic and the fused
multispectral images is performed, independently for each band. The value is
given in percent and varies between 0 and 100; 100% means that all the edges
in the panchromatic image were detected in the fused image.
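Under the definitions above, CC, PD, and RMSE can be computed as sketched below in NumPy. The RMSE shown is the standard per-pixel form, and `fused_degraded` is assumed to be the fused image already degraded to the original resolution as required by 5.3.2.

```python
import numpy as np

def correlation_coefficient(fused, reference):
    """5.3.1: CC between a fused band and the reference band (-1 to 1)."""
    a = fused.ravel() - fused.mean()
    b = reference.ravel() - reference.mean()
    return float((a @ b) / (np.sqrt((a @ a) * (b @ b)) + 1e-12))

def per_pixel_deviation(fused_degraded, original):
    """5.3.2: mean per-pixel deviation of the degraded fused image from
    the original; zero is the best value."""
    return float(np.mean(np.abs(fused_degraded.astype(float) - original)))

def rmse(fused, original):
    """5.3.3: root mean square error; the ideal value is zero."""
    diff = fused.astype(float) - original.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

Identical bands give CC = 1, PD = 0, and RMSE = 0, the best values quoted in the text.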


TABLE I. EVALUATION OF FUSED IMAGE QUALITY

Type                                  CC      PD      RMSE    SSIM    HCC     ED       Qualitative Assessment
With Reference Image
(WorldView-1, 0.5m Night Vision)      0.9261  0.0141  0.0011  0.8025  0.8453  97.12%   Best
With Reference Image
(WorldView-2, 0.5m Daylight Vision)   0.9142  0.0125  0.0015  0.9001  0.9678  98.34%   Best
Without Reference Image
(GeoEye-2, 0.5m Daylight Vision)      0.9522  0.0016  0.0014  0.9911  0.9541  99.12%   Best
Without Reference Image
(QuickBird-1, 0.7m Daylight Vision)   0.9691  0.0019  0.0009  0.9898  0.9346  98.56%   Best

CC – Correlation coefficient; PD – Per-pixel deviation; RMSE – Root mean square error;
SSIM – Structure similarity index; HCC – High-pass correlation; ED – Edge detection

TABLE II. COMPARISON OF BUCOLIC AND FARMING REGION CLASSIFICATION

Type                                  Overall Accuracy   Kappa Index*   Qualitative Assessment
With Reference Image
(WorldView-1, 0.5m Night Vision)      89.25%             0.89           Best
With Reference Image
(WorldView-2, 0.5m Daylight Vision)   89.92%             0.92           Best
Without Reference Image
(GeoEye-2, 0.5m Daylight Vision)      93.54%             0.97           Best
Without Reference Image
(QuickBird-1, 0.7m Daylight Vision)   94.56%             0.96           Best

*Poor classification = less than 0.20
*Fair classification = 0.20 to 0.40
*Moderate classification = 0.40 to 0.60
*Good classification = 0.60 to 0.80
*Very good classification = 0.80 to 1.00


5.4 Effective Comparison of Bucolic and Farming Region

Farming is the foundation of the national economy and an essential sector for
safeguarding food security. Timely availability of information on farming is
significant for taking informed decisions on the nourishment of human
well-being. Many countries in the world use space technology and ground-based
observations to generate periodic advisories on production information and
take measures to achieve productive farming. Satellite-based optical and radar
imagery are widely used in agricultural research; radar imagery is
particularly employed during the rainy season. The joint use of geospatial
tools with crop models and observation networks enables reasonable crop yield
forecasts, drought assessment, and monitoring for appropriate agricultural
needs. The assessment techniques for comparative evaluation of bucolic and
farming regions rely on the overall accuracy and the kappa index. The overall
accuracy measures the efficacy of region classification: the higher the
accuracy, the better the classification. Kappa is a measure of agreement
between two raters and is at most equal to 1. A value of 1 indicates perfect
classification, and values below 1 indicate less-than-perfect classification.
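The two measures can be computed from a confusion matrix as below; the 2x2 matrix used in the test is invented for illustration and is not one of the reported results.

```python
def overall_accuracy(confusion):
    """Fraction of correctly classified samples (diagonal over total)."""
    n = sum(sum(row) for row in confusion)
    return sum(confusion[i][i] for i in range(len(confusion))) / n

def kappa_index(confusion):
    """Cohen's kappa from a square confusion matrix (list of lists):
    agreement corrected for the agreement expected by chance."""
    n = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(len(confusion))) / n  # observed
    pe = sum(sum(confusion[i]) * sum(r[i] for r in confusion)
             for i in range(len(confusion))) / (n * n)            # by chance
    return (po - pe) / (1 - pe)
```

For example, a balanced two-class matrix [[45, 5], [5, 45]] gives an overall accuracy of 0.90 and a kappa of 0.80, which the legend of Table II would rate as a very good classification.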

6 Results and Analysis

Table I and Table II illustrate samples of the evaluation results for
optimized fused image quality and for the comparison of bucolic and farming
region classification, respectively, tested on the proposed fusion system.
Basically, the quality of the optimized fused image is evaluated both with and
without a reference image. WorldView-1, 0.5m and WorldView-2, 0.5m represent
the same scene captured in night vision and daylight vision, respectively;
here, the quality evaluation is computed with respect to a reference image.
The results show that the evaluation metrics CC (correlation coefficient), PD
(per-pixel deviation), RMSE (root mean square error), SSIM (structure
similarity index), HCC (high-pass correlation), and ED (edge detection) are
relatively convincing, and the quantitative assessment is at its best, as
shown in Fig. 12-14 and Fig. 15-17. GeoEye-2, 0.5m and QuickBird-1, 0.7m
represent different scenes captured in daylight vision; the fused-quality
evaluation here is ascertained without a reference image. The test results
show that the evaluation metrics are again strong and the quantitative
evaluation is at its best, as shown in Fig. 18-20 and Fig. 21-23. Similarly,
the entire dataset was simulated and tested for the quality of the optimized
fused image. The comparison of bucolic and farming regions is evaluated with
respect to the overall accuracy and the kappa index. The classification of
bucolic and farming regions performed on the ROI-based quadtree shows high
efficiency with respect to both overall accuracy and kappa index, as observed
in Fig. 24-27, which clearly and precisely indicate the farming region, road
map, bucolic region, and land cover, respectively.


Fig. 12. WorldView-1, 0.5m Night Vision Panchromatic Image

Fig. 13. WorldView-1, 0.5m Night Vision Multispectral Image

Fig. 14. Optimized Fused WorldView-1, 0.5m Night Vision Image


Fig. 15. WorldView-2, 0.5m Daylight Vision Panchromatic Image

Fig. 16. WorldView-2, 0.5m Daylight Vision Multispectral Image

Fig. 17. Optimized Fused WorldView-2, 0.5m Daylight Vision Image


Fig. 18. GeoEye-2, 0.5m Daylight Vision Panchromatic Image

Fig. 19. GeoEye-2, 0.5m Daylight Vision Multispectral Image

Fig. 20. Optimized Fused GeoEye-2, 0.5m Daylight Vision Image


Fig. 21. QuickBird-1, 0.7m Daylight Vision Panchromatic Image

Fig. 22. QuickBird-1, 0.7m Daylight Vision Multispectral Image

Fig. 23. Optimized Fused QuickBird-1, 0.7m Daylight Vision Image


Fig. 24. Bucolic and Farming Region Classification of Optimized Fused


WorldView-1, 0.5m Night Vision

Fig. 25. Bucolic and Farming Region Classification of Optimized Fused


WorldView-2, 0.5m Daylight Vision


Fig. 26. Bucolic and Farming Region Classification of Optimized Fused


GeoEye-2, 0.5m Daylight Vision

Fig. 27. Bucolic and Farming Region Classification of Optimized Fused


QuickBird-1, 0.7m Daylight Vision


7 Conclusion

Machine learning is one plausible strategy for handling the high-dimensional
nature of satellite sensor data. It can be clearly and consistently observed
that machine-learning-based convolutional neural networks deliver the best
image fusion quality with respect to spatial quality, spectral quality, and
global quality, both with and without a reference image. Accordingly, only
with a well-fused image can the classification of bucolic and farming regions
in remote sensing imagery be practically effective. In general, the fused
image obtained by fusing day-vision PAN and MS images gives combined and
detailed information, but the same does not hold for night vision. This
problem is overcome by the proposed fusion framework, since the fusion relies
on an optimization technique using particle swarm optimization. Although the
proposed fusion framework provides higher precision and efficiency, it suffers
from partial optimism and cannot resolve the problems of scattering.
