INTERNATIONAL JOURNAL OF REMOTE SENSING, 2018
VOL. 39, NO. 3, 822–841
https://doi.org/10.1080/01431161.2017.1392640

The effect of fusing Sentinel-2 bands on land-cover classification


Mateo Gašparović^a and Tomislav Jogun^b

^a Faculty of Geodesy, University of Zagreb, Zagreb, Croatia; ^b Promet i prostor d.o.o., Zagreb, Croatia

ABSTRACT
The Sentinel-2 satellite currently provides freely available multispectral bands at relatively high spatial resolution but does not acquire the panchromatic band. To improve the resolution of the 20 m bands to 10 m, existing pansharpening methods (Brovey transform [BT], intensity–hue–saturation [IHS], principal component analysis [PCA], the variational method [P + XS], and the wavelet method) required adjustment, which was achieved by using higher resolution multispectral bands in the role of a panchromatic band to fuse bands at a lower spatial resolution. After preprocessing, the six bands at lower resolution were divided into two groups because some image fusion methods (e.g. BT, IHS) are limited to a maximum of three input bands of a lower resolution at a time. With respect to the spectral range, the higher resolution band for the first group was synthesized from bands 4 and 8, and band 8 was selected for the second group. Given that land-cover classification is one of the main remote sensing applications, the classification accuracy achieved with each fusion method was assessed, in addition to the comparison with reference bands and pixels. The supervised classification methods were the Maximum Likelihood Classifier, artificial neural networks, and object-based image analysis. The classification scheme contained five classes: water, built-up, bare soil, low vegetation, and forest. The results showed that most of the fusion methods, particularly P + XS and PCA, improved the overall classification accuracy, especially for the classes of forest, low vegetation, and bare soil and in the detection of coastlines. The least satisfying results were obtained from the wavelet method.

ARTICLE HISTORY
Received 3 January 2017
Accepted 4 October 2017

1. Introduction
Most of the operating Earth observation satellites, such as Landsat, SPOT, IKONOS,
QuickBird, Formosat, GeoEye, and OrbView, acquire panchromatic bands at a higher
spatial resolution than in their multispectral mode. Many fusion, or pansharpening,
methods have been developed to combine panchromatic and multispectral bands to
produce enhanced multispectral bands of higher spatial resolution. However, some
sensors acquire information about the Earth only in either panchromatic (EROS-A,
EROS-B, WorldView-1) or multispectral (RapidEye, Sentinel-2) mode (Ehlers et al. 2010).

CONTACT Mateo Gašparović mgasparovic@geof.hr Faculty of Geodesy, University of Zagreb, Kačićeva 26,
Zagreb 10000, Croatia
© 2017 Informa UK Limited, trading as Taylor & Francis Group

Traditional image fusion methods are based on component substitution (CS) or multiresolution analysis (MRA) methods (Palsson, Sveinsson, and Ulfarsson 2014). The
CS methods involve transformations, such as principal component analysis (PCA)
(Chavez, Sides, and Anderson 1991), or a spectral transformation, such as intensity–
hue–saturation (IHS) transformation (Carper, Lillesand, and Kiefer 1990). A component
derived from the lower resolution multispectral bands is substituted by a component
derived from the higher resolution band, and the fused bands are then obtained by the
inverse transformation. The MRA approach is usually based on methods such as the
high-pass filtering method (Chavez, Sides, and Anderson 1991), the wavelet transform
method (Li, Manjunath, and Mitra 1995), or other sorts of pyramidal representations.
Typically, some sort of injection model is used to replace or enhance the details of the
multispectral band with details from the panchromatic band. More recent are hybrid
fusion methods based on both MRA and CS (Palsson, Sveinsson, and Ulfarsson 2014),
geostatistical approaches, such as Kriging with an external drift (Sales, Souza, and
Kyriakidis 2013), and area-to-point regression Kriging (ATPRK) (Wang et al. 2016). There
is also a branch of so-called variational approaches, such as P + XS (Ballester et al. 2006),
variational wavelet pansharpening (VWP) (Moeller, Wittman, and Bertozzi 2009), and
algorithms based on total variation (Palsson, Sveinsson, and Ulfarsson 2014).
Sentinel is a new mission developed by the European Space Agency specifically for
the operational needs of the Copernicus programme. Each Sentinel mission is based
on a constellation of twin satellites to fulfil revisit and coverage requirements and
provide robust datasets for Copernicus services. Sentinel-2 is a polar-orbiting, multi-
spectral high-resolution imaging mission for land monitoring (land-cover and land-use
mapping or change detection) that can also deliver information for emergency services.
Sentinel-2A was launched on 23 June 2015, and Sentinel-2B was launched on 07 March
2017 (Arianespace 2017; Copernicus 2015). Sentinel-2 carries an optical multispectral
instrument payload that samples 13 spectral bands: four bands at 10 m spatial resolu-
tion, six bands at 20 m spatial resolution, and three bands at 60 m spatial resolution
(Figure 1, Richter et al. 2011).
Figure 1. Sentinel-2 bands according to Richter et al. (2011).

Sentinel-2 currently provides freely available images at relatively high spatial and
spectral resolution but does not acquire a classic panchromatic band. Instead, it has four
multispectral bands at 10 m spatial resolution that can be used to improve the spatial
resolution of other 20 or 60 m bands. Selva et al. (2015) focused on the fusion of
hyperspectral satellite images in general and recommended two schemes for determin-
ing a higher resolution band when a panchromatic band is missing – selection or
synthesis. Wang et al. (2016) researched the fusion of Sentinel-2 lower spatial resolution
bands based on using multispectral bands at a higher spatial resolution in the role of the
panchromatic band. They used bands 2, 3, 4, and 8 at 10 m resolution, as well as their
linear combinations weighted according to the multiple regression model to upgrade
the resolution of 20 m bands to 10 m. In this way, they produced a 10-band set at 10 m
resolution, which was a significant achievement. There has also been some research on
the fusion of Sentinel-2 satellite images with images from other sensors (Pereira, Melfi,
and Montes 2017; Surek et al. 2016; Wang et al. 2017). The fusion of Sentinel-2 bands has
immense potential to improve spatial resolution in comparison to the original dataset,
although this is still in the development stage. The first objective of this research was to
contribute to that development by determining the optimal input band or combination
of bands at higher and lower spatial resolutions. Five standard pansharpening methods
(Brovey, IHS, PCA, P + XS, and Wavelet) were adapted to upgrade the resolution of 20 m
bands to 10 m using selected or synthesized bands instead of a panchromatic band. The
accuracy of fusion methods based on different possible higher resolution bands would
be quantitatively tested (by correlation coefficient) and visually compared.
Originally, remote sensing image fusion was mainly used to enhance visual interpreta-
tion, e.g. at the pixel level, and was not usually used in research to improve image
classification (Wenbo, Jing, and Tingjun 2008). However, image fusion methods have
been increasingly applied in various remote sensing applications during the last decade,
including land-cover classification, change detection, object identification, image segmen-
tation, and hazard monitoring (Jiang, Zhuang, and Huang 2013; Yuhendra et al. 2012).
With the development of image fusion applications, the early quality measures of image
fusion were based on the similarity of pixel values between images at different spatial and
spectral resolutions (Chavez, Sides, and Anderson 1991; Terreattaz 1998; Wald, Ranchin,
and Mangolini 1997; Wang et al. 2005). The similarity of pixel values between an original
band and the fused band is not always a guarantee of the quality of a land-cover
classification. Recent studies have recognized the importance of including derived infor-
mation from fused images in analyses (Aguilar, Saldaña, and Aguilar 2013; Colditz et al.
2006; Sarup and Singhai 2011; Witharana, Civco, and Meyer 2014; Zoleikani, Valadan Zoej,
and Mokhtarzadeh 2017); however, not many articles have explicitly dealt with the effects
of fusion on land-cover classification accuracy (Lin et al. 2015). Considering the previous
statements, the main objective of this research was to progress beyond pixel-based
comparisons of fusion results and to examine the accuracy of land-cover classification
based on the original and fused Sentinel-2 multispectral bands. The accuracy of land-cover
classifications was tested using WorldView-2 images via the OpenLayers service, which
would serve as independent reference data of a higher quality. This research aimed to give
objective information on how original and fused Sentinel-2 bands affect the accuracy of
land-cover classification. If more accurate results can be obtained by land-cover classifica-
tion based on fused bands than from the classification based on the original bands, the
main objective of this article would be attained. Another objective of this research was to
give recommendations for combinations of fusion methods and classifications based on
the achieved land-cover classification accuracy.

2. Study area and data


The main part of the study area was in West Croatia near the coastal city of Rijeka (Figure 2).
The study area was 60 km × 60 km and encompassed highly diverse landscapes from
densely populated urban settlements, such as Rijeka, Opatija, and Crikvenica, to coastal zones,
karst plains, vast forests, and rural areas. Land-cover classification and quantitative accuracy
assessment (correlation coefficients between bands and classification accuracy measured by
commission, omission, figure of merit, and agreement) were calculated for the entire study
area. Because of the large size of the study area, visual comparisons of the effects of fusion
methods on spectral distortions in fused bands and on land-cover classification were per-
formed on an example subset (1.5 km × 1.5 km) that contained all classes (Figure 2).
The bands used in this research were downloaded from the USGS Earth Explorer
(Earth Explorer 2017) web portal and were acquired by the Sentinel-2A satellite on 11
July 2015 (tile number: T33TVL), when the study area was cloud free. The Level-1C
processed data were georeferenced in the WGS 84 UTM 33N coordinate system and are
in compressed JPEG2000 format. The original granules/tiles had an extent of
110 km × 110 km and were thus clipped to the study area (60 km × 60 km) and
converted to the GeoTIFF format using the GRASS GIS (version 7.0.5).
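As a scripted alternative to the GRASS GIS steps above, the clipping and format conversion could look like the following sketch with rasterio; the file names and the UTM corner coordinates of the 60 km × 60 km window are hypothetical placeholders, not values from the study.

```python
import rasterio
from rasterio.windows import from_bounds

# Hypothetical 60 km x 60 km window in WGS 84 / UTM zone 33N (EPSG:32633);
# the study's actual corner coordinates are not given in the text.
BOUNDS = (430_000, 4_980_000, 490_000, 5_040_000)  # left, bottom, right, top

with rasterio.open("T33TVL_B04.jp2") as src:        # Level-1C JPEG2000 granule
    window = from_bounds(*BOUNDS, transform=src.transform)
    data = src.read(1, window=window)
    profile = src.profile
    profile.update(driver="GTiff",                  # convert to GeoTIFF
                   height=data.shape[0], width=data.shape[1],
                   transform=src.window_transform(window))

with rasterio.open("B04_clip.tif", "w", **profile) as dst:
    dst.write(data, 1)
```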

Figure 2. (a) Geographic location of the study area, (b) the study area with the location of the
example subset, and (c) enlarged example subset.

3. Methods
This section explains known and new methods of processing the Sentinel-2 satellite
images. Our research can be divided into four phases (Figure 3), where the first three
phases correspond to the following subsections.

3.1. Preprocessing of satellite images


Preprocessing of the Sentinel-2 satellite images was performed according to the Level-2A algorithm in the Sen2Cor toolbox (version 2.2.1) within the Sentinel Application Platform (SNAP, version 5.0.0). Level-2A processing performs two tasks: scene classification (SC), which provides a pixel classification map (Main-Knorn et al. 2015; Pflug et al. 2016), and atmospheric correction (AC), which transforms top-of-atmosphere reflectance into bottom-of-atmosphere reflectance. Sentinel-2 Atmospheric Correction (S2AC) is based on an algorithm
proposed in atmospheric/topographic correction for satellite imagery (Richter and
Schlaepfer 2011). The method performs AC based on the libRadtran radiative transfer
model (Mayer and Kylling 2005). S2AC also employs Lambert’s reflectance law.
Topographic effects are corrected during the surface retrieval process using an accurate
digital elevation model (in our case, the 1 arcsec ASTER global digital elevation model was used).

3.2. Image fusion


For this research, we selected five independent image fusion methods that have been
documented and examined thoroughly in previous articles. The first selected fusion
method is based on relative spectral contribution (Brovey transform [BT]), another two
are known as CS methods (IHS, PCA), one represents MRA (wavelet), and the last is based
on a variational approach (P + XS).
Figure 3. The research workflow.

The BT is a numerical fusion method developed to visually increase the contrast of an
image histogram. In the BT procedure, each of three lower resolution bands is first
multiplied by the higher resolution band, and then each product is divided by the sum of
the lower resolution bands. The method is simple and well known but is limited to three
input lower resolution bands and sometimes distorts the radiometric characteristics of the
bands (Gillespie, Kahle, and Walker 1987; Nikolakopoulos 2008; Pandit and Bhiwani 2015).
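A minimal numpy sketch of this procedure (not the FuseBox code the authors ran) might look as follows; the inputs are assumed to be three 20 m bands already resampled onto the 10 m grid plus the higher resolution band, and the `eps` guard is our addition:

```python
import numpy as np

def brovey_fusion(ms_bands, hr, eps=1e-10):
    # Each low-resolution band is multiplied by the high-resolution band
    # and divided by the sum of the three low-resolution bands.
    total = np.sum(ms_bands, axis=0) + eps        # eps avoids division by zero
    return [band * hr / total for band in ms_bands]
```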
In IHS fusion, a red–green–blue (RGB) colour composite of bands is transformed into an
IHS colour space. The intensity component is replaced by the higher resolution image, the
hue and saturation bands are resampled to the higher resolution pixel size using an
interpolation technique, and the scene is reverse transformed to an RGB composite and its
components (Carper, Lillesand, and Kiefer 1990; Nikolakopoulos 2008). Like the BT, it is one
of the most often used and computationally efficient methods but is limited to three input
multispectral bands and sometimes distorts spectral values (Pandit and Bhiwani 2015).
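In the widely used fast (linear) variant of IHS fusion, the substitution and inverse transform collapse into a single additive step; the sketch below assumes this variant, which is equivalent to substituting the linear intensity I = (B1 + B2 + B3)/3, and may differ in detail from the implementation used in the study:

```python
import numpy as np

def ihs_fusion(ms_bands, hr):
    # Fast IHS: replacing I = mean(bands) with the higher resolution band and
    # inverting the linear transform amounts to adding (hr - I) to each band.
    intensity = np.mean(ms_bands, axis=0)
    return [band + (hr - intensity) for band in ms_bands]
```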
The PCA transform method converts intercorrelated lower resolution multispectral
bands into a new set of uncorrelated components. The first component resembles a
higher resolution image and is replaced by a higher resolution image for the fusion. The
higher resolution image is fused into the lower resolution multispectral bands by
performing a reverse PCA transform (Chavez, Sides, and Anderson 1991;
Nikolakopoulos 2008). The method can be used for more than three input lower
resolution bands and usually introduces less spectral distortion than IHS (Pandit and
Bhiwani 2015). However, it assumes that the first principal component is very similar to
the higher resolution band, which is the case only when the lower resolution bands
spectrally overlap the higher resolution band (Chavez, Sides, and Anderson 1991).
Unsatisfying results are obtained when an image contains areas with diverse spectral
characteristics (i.e. many land-cover classes) because these differences will be contained
in the first principal component (Amolins, Zhang, and Dare 2007).
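A sketch of the PCA substitution under common assumptions follows; matching the higher resolution band to the mean and standard deviation of the first component before substitution is our assumption, and the implementation used in the study may differ in detail:

```python
import numpy as np

def pca_fusion(ms_bands, hr):
    stack = np.stack([b.ravel() for b in ms_bands], axis=1)   # pixels x bands
    mean = stack.mean(axis=0)
    centered = stack - mean
    vals, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]    # components by descending variance
    pcs = centered @ vecs                     # forward PCA transform
    # Match the higher resolution band to PC1 statistics, then substitute it
    hr_flat, pc1 = hr.ravel(), pcs[:, 0]
    pcs[:, 0] = (hr_flat - hr_flat.mean()) / hr_flat.std() * pc1.std() + pc1.mean()
    fused = pcs @ vecs.T + mean               # inverse PCA transform
    shape = ms_bands[0].shape
    return [fused[:, i].reshape(shape) for i in range(fused.shape[1])]
```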
The wavelet transform method is a MRA method that is performed using a single
prototype function called a wavelet, which can be considered a bandpass filter (Vetterli
and Herley 1992). Generally, the wavelet transform method consists of image decom-
position and reconstruction at different resolution levels. For example, a high-resolution
panchromatic band is first decomposed into a set of low-resolution panchromatic bands
with corresponding wavelet coefficients (spatial details) for each level (Jiang, Zhuang,
and Huang 2013; Zhang 2002). Four sub-images are generated at each lower resolution
level; one sub-image is low-pass filtered and the other three are high-pass filtered in the
horizontal, vertical, and diagonal directions of the image matrix (Amolins, Zhang, and
Dare 2007). In the second step, individual bands of the multispectral image replace the
low-resolution panchromatic band at the resolution level of the original multispectral
image. Finally, the high-resolution spatial detail is injected into each multispectral band
by performing a reverse wavelet transform on each multispectral band together with the
corresponding wavelet coefficients in horizontal, vertical, and diagonal directions (Jiang,
Zhuang, and Huang 2013; Zhang 2002). The advantages of this method are that it
minimizes spectral distortions (Jiang, Zhuang, and Huang 2013) and captures good
resolution in both time and frequency domains (Pandit and Bhiwani 2015). On the
other hand, it is more computationally intensive and requires more parameters than
the standard methods. It has been reported that colours are usually not smoothly
integrated into the spatial features and the spectral content of small objects is often
lost (Jiang, Zhuang, and Huang 2013; Zhang 2002).
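A one-level substitutive sketch with PyWavelets illustrates the idea (a single decomposition level suffices here because 20 m is exactly twice the 10 m pixel size); the mean/standard deviation matching of the approximation coefficients is our assumption rather than a documented detail of the study:

```python
import numpy as np
import pywt

def wavelet_fusion(ms_band_20m, hr_10m, wavelet="haar"):
    # Decompose the higher resolution band: approximation + detail coefficients
    cA, details = pywt.dwt2(hr_10m, wavelet)
    # Replace the approximation with the 20 m band (matched in mean/std, since
    # orthonormal wavelets rescale values) and invert the transform, injecting
    # the 10 m horizontal/vertical/diagonal details into the band.
    ms = ms_band_20m[: cA.shape[0], : cA.shape[1]].astype(float)
    cA_new = (ms - ms.mean()) / ms.std() * cA.std() + cA.mean()
    return pywt.idwt2((cA_new, details), wavelet)
```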

The variational method, according to the McGraw-Hill Dictionary of Scientific & Technical Terms (2017), is a method used in quantum mechanics for calculating the
energy level of a quantum-mechanical system and an approximation of the correspond-
ing wave function. During the last two decades, mathematicians and computer scientists
have introduced variational methods into image processing. The variational approach
perceives an image as a function whose sampling corresponds to the discrete matrix
form of a given image (Chen 2013). The grey-level value of a point in an image matrix
represents the photonic flux over a band of wavelengths, and the first step in the
realization of a variational method is thus to design the so-called energy function
(Ballester et al. 2006). Mathematical tools needed for further image processing are
energy optimization, regularization, partial differential equations, level set functions,
and numerical algorithms (Chen 2013). The P + XS method introduces the geometry
information of the higher resolution image by aligning all edges of the higher resolution
image with each lower resolution multispectral band. To obtain the spectral information
for the fused image, the method assumes that images taken in different spectral bands
share common geometric information and that the higher resolution (panchromatic)
image can be approximated as a linear combination of the high-resolution multispectral
bands (Ballester et al. 2006; He et al. 2012). More detailed description and mathematical
expressions can be found in Ballester et al. (2006). Although the variational P + XS
method is computationally intensive and requires setting more parameters than the
standard fusion methods, it has a better ability to preserve sharp discontinuities such as
edges and object contours (He et al. 2012).
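In condensed form, the functional minimized by P + XS can be sketched as below; this is our paraphrase, and the exact terms, weights, and discretization are given in Ballester et al. (2006). Here $u_i$ are the fused bands, $\tilde{u}_i$ the observed lower resolution bands, $P$ the higher resolution band with level-line directions $\theta$, $k$ a blur kernel, $\downarrow$ subsampling, and $\lambda$, $\mu$ weights:

```latex
% geometry alignment + spectral linear-combination constraint + low-resolution fidelity
E(u_1,\dots,u_N) =
  \sum_{i=1}^{N} \int_\Omega \bigl|\nabla u_i - (\nabla u_i \cdot \theta)\,\theta\bigr|\,d\mathbf{x}
  + \lambda \int_\Omega \Bigl(P - \sum_{i=1}^{N} \alpha_i u_i\Bigr)^{2} d\mathbf{x}
  + \mu \sum_{i=1}^{N} \int_\Omega \bigl((k * u_i)\downarrow - \tilde{u}_i\bigr)^{2} d\mathbf{x},
\qquad \theta = \frac{\nabla P}{\lvert \nabla P \rvert}
```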
The five described fusion algorithms were implemented in Matlab R2013a
(Mathworks 2013) using the FuseBox library (Github 2014). All fusion methods assume
correct geometrical registration between higher- and lower resolution bands, which was
visually checked using the GRASS GIS (version 7.0.5).
Considering the specific properties of the Sentinel-2 dataset described in the intro-
duction, a very important step in fusion was the selection of input lower- and higher
resolution bands. First, bands 1, 9, and 10 at 60 m resolution were excluded from the
fusion because they were not intended for land-cover classification (Wang et al. 2016).
The selection process was constrained by the fact that traditional image fusion methods
(e.g. BT, IHS) are limited to a maximum of three input bands of a lower resolution at a
time, whereas Sentinel-2 provides six bands at 20 m resolution for fusion, which required
them to be divided into two groups (Figure 1).
One of the key rules of fusion is that the higher spatial-resolution band should cover
the spectral range of the lower resolution bands that are sharpened. Such examples can
be seen in articles dealing with the fusion of IKONOS and Quickbird satellite images
(Aiazzi et al. 2003; El-Mezouar et al. 2011). Considering the spectral range of lower
resolution bands, bands 5, 6, and 7 were in the first group and bands 8a, 11, and 12
were in the second group. For the second group, the only possible higher resolution
band at 10 m spatial resolution was band 8 because it was spectrally the closest to them.
There were more possibilities for the higher resolution band of the first group because
they were spectrally between bands at 10 m spatial resolution. Central wavelengths of
the first group bands according to Richter et al. (2011) are 0.705 μm (band 5), 0.740 μm
(band 6), and 0.783 μm (band 7), while the central wavelengths of Sentinel-2 10 m
spatial resolution bands are 0.490 μm (band 2), 0.560 μm (band 3), 0.665 μm (band 4),

and 0.842 μm (band 8). Analogous to the decision for the second group, the spectrally
closest higher resolution bands for the first group were bands 4 and 8. However, neither
band 4 nor band 8 completely overlapped the spectral range of the first group of bands (bands 5–7).
Research practice and a suggestion in Selva et al. (2015) prompted us to extend the
spectral range of the higher resolution band for the first group by making the synthesized
band (S) the combination of bands 4 (B4) and 8 (B8) according to the equation,

$$S = \frac{B_4 + B_8}{2} \qquad (1)$$
To determine the best higher resolution band for the first group, three sets of fusions
were made (based on the selected higher resolution bands 4, 8, and the synthesized
band). Degradation of spatial resolution does not significantly impact the spectral
properties of a band, whereas upgrading resolution by fusion introduces spectral
distortions (Wald, Ranchin, and Mangolini 1997), which meant that the original
Sentinel-2 bands at 20 m resolution contained spectral values similar to true values
that would have been acquired at 10 m resolution. Therefore, the original bands
resampled to 10 m by the nearest neighbour method were used as a reference for
comparison to the bands at 10 m obtained by the fusion methods. A quantitative
measure of similarity between the reference and fused bands was Pearson correlation
coefficient, which was calculated in the GRASS GIS (version 7.0.5).
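Concretely, the synthesis, the nearest neighbour reference, and the similarity measure amount to a few lines; this sketch mirrors the steps described above but is not the GRASS GIS procedure itself:

```python
import numpy as np

def synthesize_hr(b4, b8):
    # Equation (1): the synthesized higher resolution band for bands 5-7
    return (b4.astype(float) + b8.astype(float)) / 2.0

def nearest_neighbour_upsample(band_20m, factor=2):
    # Reference band: original 20 m values repeated onto the 10 m grid
    return np.repeat(np.repeat(band_20m, factor, axis=0), factor, axis=1)

def pearson(reference, fused):
    # Quantitative similarity between reference and fused bands (Table 2)
    return np.corrcoef(reference.ravel(), fused.ravel())[0, 1]
```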

3.3. Land-cover classification


The original bands, resampled to 10 m by the nearest neighbour method, and the bands
fused to 10 m by the five fusion methods were classified using three independent pixel-
based and object-based classification algorithms implemented in the SAGA
GIS (version 2.3.1).
The algorithms for the pixel-based supervised classification were the Maximum
Likelihood Classifier (MLC) and artificial neural networks (ANNs). The MLC algorithm is one
of the most commonly applied methods within the parametric classification approaches
(Colditz et al. 2006). In the parametric approach, data are assumed to be distributed
according to a predefined probability model. The parameters for this distribution are
derived from the input training samples (Colditz et al. 2006). We set the probability thresh-
old parameter of the algorithm to 0.1% in the SAGA GIS. In contrast to the MLC algorithm,
ANNs can be used as nonparametric or discriminative binary classifiers. No assumption of
data distribution is needed in nonparametric ANNs. In our study, parameters were deter-
mined by visual analyses of results after several trials and with respect to the recommenda-
tions of the software developers (Neural Networks 2016). The final classifications in the
SAGA GIS were executed with 10 neurones in one layer, with a maximum of 500 iterations
for learning. The error change was set to 0.001, and the activation function was sigmoid,
with alpha set to 0.6 and beta set to 1. The training method was the back-propagation
algorithm with weight gradient and moment terms set to 0.1.
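For concreteness, the Gaussian discriminant behind the MLC can be sketched as follows; this is a generic formulation, not SAGA's implementation, and it omits the 0.1% probability threshold mentioned above (pixels below the threshold would remain unclassified):

```python
import numpy as np

def mlc_train(samples):
    # samples: dict of class id -> (n_pixels x n_bands) training array;
    # the model is a per-class mean vector and covariance matrix
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def mlc_classify(pixels, model):
    # Assign each pixel (row of an n x n_bands array) to the class with the
    # highest Gaussian log-likelihood
    classes, scores = list(model), []
    for c in classes:
        mean, cov = model[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mean
        mahal = np.einsum("ij,jk,ik->i", d, inv, d)   # squared Mahalanobis distance
        scores.append(-0.5 * (logdet + mahal))        # log-likelihood up to a constant
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]
```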
Whereas pixel-based analysis has long been the main approach for classifying remo-
tely sensed images, object-based image analysis (OBIA) has become increasingly com-
mon over the last decade (Duro, Franklin, and Dubé 2011). Instead of using single pixels
as image primitives, the object-based classification approach predicates segmentation of

the image data. The main advantages of image objects (segments) are the more natural
representation of real-world objects, an increasingly uncorrelated feature space by using
shape parameters and neighbourhood relations, and texture calculation (Colditz et al.
2006). The OBIA module in the SAGA GIS (version 2.3.1) uses only the spectral features of
input images for their segmentation. The bandwidth for seed point generation was set
to 2, and a generalization of segments with a 3 × 3 pixel size majority filter was applied.
After segmentation, the image objects were classified by the MLC for shapes.
The classification scheme contained five classes: water, built-up, bare soil, low vege-
tation, and forest (Table 1).
The same training polygons were used for the pixel-based classification of all bands
(original and fused). In the entire study area, 94 polygons with almost 323,000 pixels were
collected. For object-based classification, the training polygons were as similar as
possible to the polygons for pixel-based classifications; their shape and area depended
on the shapes of segments, which were slightly different for each set of input bands.
Training polygons were collected by visual inspection of RGB composite images from
Sentinel-2 at 10 m spatial resolution (true colour bands – 4–3–2; false colour bands –
8–4–3) and ancillary images acquired by the WorldView-2 satellite via the OpenLayers
service at approximately 2 m spatial resolution (OpenLayers 2017). The results of the
classifications were not post-processed (e.g. filtered).
Reference polygons for land-cover classification accuracy assessment were deter-
mined independently from the classification results and without overlapping with train-
ing polygons. The external data source of a higher quality was ancillary images acquired
by the WorldView-2 satellite at approximately 2 m spatial resolution and dated approxi-
mately the same as the Sentinel-2 images. In this procedure, almost 178,000 pixels in
1801 reference polygons were collected over the entire study area (60 km × 60 km). To
remove a bias of the shares of reference samples between classes after sampling, a
confusion matrix was converted to represent the shares of classes in the entire study
area by applying Equation (1) given in Pontius and Millones (2011, 4412). Instead of
using traditional methods for quantitative accuracy assessment, e.g. kappa coefficients,
which have certain limitations presented in Pontius and Millones (2011), the figure of
merit (F) was obtained by the following equation,

$$F = \frac{a}{o + a + c} \times 100\% \qquad (2)$$

where a is the number of agreements, o is the number of omissions, and c is the number of commissions. Additionally, the overall agreement (A) was expressed as the sum of the agreements for class i, a_i (i = 1, ..., n), where n represents the number of classes,

$$A = \sum_{i=1}^{n} a_i \qquad (3)$$

The agreement is the number of correctly classified pixels for each class, an omission is the number of pixels of a class misclassified as other classes, and a commission is the number of pixels of other classes misclassified as a particular class.

Table 1. Classification scheme in the study.

Class            Description
Water            Water bodies, such as sea, rivers, lakes, reservoirs
Built-up         Human-made constructions, such as buildings, roads
Bare soil        Surfaces without vegetation, such as soil, rocks, karst plains
Low vegetation   Annual plants, such as crops, pastures, natural grass
Forest           Woody vegetation, such as coniferous/non-coniferous forests, shrubs
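A small numpy sketch of this bookkeeping, under the assumption that rows of the raw confusion matrix are mapped classes and columns are reference classes, with `map_shares` taken from the class shares in the classified map:

```python
import numpy as np

def area_weighted_matrix(counts, map_shares):
    # Pontius and Millones (2011), Equation (1): row-normalize the sample
    # counts and weight each row by that class's share of the classified map,
    # yielding estimated population proportions.
    counts = counts.astype(float)
    return counts / counts.sum(axis=1, keepdims=True) * map_shares[:, None]

def accuracy_metrics(p):
    # From the proportion matrix p: per-class figure of merit (Equation (2))
    # and overall agreement (Equation (3)), both in percent as in Table 4.
    a = np.diag(p)                 # agreements
    o = p.sum(axis=0) - a          # omissions: reference column minus agreement
    c = p.sum(axis=1) - a          # commissions: map row minus agreement
    return a / (o + a + c) * 100.0, a.sum() * 100.0
```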

4. Results
According to the research workflow, the first step in the fusion of Sentinel-2 images
was selection/synthesis of a higher resolution band for the lower resolution bands
5, 6, and 7 (Table 2).
For most fusion methods, quantitative agreement between the spectral values of
original and fused bands expressed by correlation coefficients was the highest when the
higher resolution band was the synthesized band. Specifically, the average correlation
between fused and original images was the highest for the synthesized band (0.902),
followed by band 8 (0.882), and the lowest was for band 4 (0.591). Average correlations
per fusion method (summed for all higher resolution bands) ranged from the highest to
the lowest as follows: wavelet (0.951), IHS (0.808), BT (0.791), PCA (0.787), and P + XS
(0.622). However, average correlations per fusion method analysed separately for differ-
ent higher resolution bands had a different order. When the higher resolution band was
4, the wavelet method had a significantly higher correlation with the original bands than
the other methods. Conversely, when the higher resolution bands were synthesized or 8,
wavelet had slightly lower correlation than the BT, PCA, and IHS methods. Because the
differences in correlation were the largest when the higher resolution band was 4, the
wavelet method had the highest overall average correlation.
Visual assessment of agreement between the original and fused ‘false colour’ com-
posites (7–6–5) depending on different possible higher resolution bands confirmed the
quantitative indices (Figure 4).
It was obvious that the least distorted spectral values were in the composites in
which the higher resolution band was synthesized, and the largest distortions occurred

Table 2. Correlation coefficients between the original bands and fused bands (5, 6, 7) depending on
different possible higher resolution bands (4, synthesized, 8).
Fusion methods
Higher resolution bands Fused bands BT IHS PCA P + XS Wavelet Overall
4 5 0.746 0.490 0.807 0.493 0.974 0.702
6 0.291 0.446 0.255 0.629 0.979 0.520
7 0.238 0.635 0.205 0.712 0.966 0.551
Overall 4 0.425 0.524 0.422 0.611 0.973 0.591
Synthesized 5 0.983 0.921 0.958 0.425 0.922 0.842
6 0.978 0.984 0.967 0.709 0.979 0.923
7 0.979 0.990 0.974 0.777 0.990 0.942
Overall synthesized 0.980 0.965 0.966 0.637 0.964 0.902
8 5 0.964 0.845 0.969 0.352 0.781 0.782
6 0.969 0.976 0.979 0.718 0.988 0.926
7 0.972 0.985 0.973 0.782 0.979 0.938
Overall 8 0.968 0.935 0.974 0.617 0.916 0.882
Overall 0.791 0.808 0.787 0.622 0.951

Figure 4. Comparative visual analysis of ‘false colour’ composites (7–6–5) using different possible
higher resolution bands (4, synthesized, 8) and fusion methods on the example subset
(1.5 km × 1.5 km).

when the higher resolution band 4 was selected. Moreover, the BT, IHS, and PCA
methods produced similar and visually pleasing results, whereas the P + XS method
introduced the largest amount of noise. There was a visible impact of pixel values from
bands at a lower resolution level in the bands fused by the wavelet method.
Based on the previous quantitative and visual assessments, we chose the synthesized
band obtained by Equation (1) as the higher resolution band for further analysis.
The effects of the fusion techniques on land-cover classification were analysed by
shares of classes that cover the scene (Table 3).
The most obvious trends in shares of classes when compared with the original images
were as follows: first, all fusion methods reduced the areas classified as built-up, except
for PCA classified by ANN and wavelet classified by OBIA. Second, all fusion methods
increased the areas classified as bare soil, except for wavelet classified by MLC and
P + XS classified by OBIA. Third, all fusion methods in pixel-based classifications reduced
the areas classified as forest, except for wavelet classified by MLC, whereas in OBIA the
effects were diverse. Finally, the effects of all fusion methods were the most diverse for
the low vegetation class and were the smallest for the water class.
Further effects of the fusion techniques were measured by the land-cover classifica-
tion accuracy (Table 4).
Most of the fusion methods increased the overall agreement of the land-cover
classifications. Exceptions were the wavelet method in MLC and OBIA classifications
and P + XS in OBIA classification. The largest improvements were observed in the
fusion method P + XS, which was ranked first in MLC and ANN classifications;
however, P + XS had the lowest accuracy in OBIA classification. PCA fusion was
ranked second in MLC and OBIA and third in ANN classification. Generally, relative
improvements in overall agreement were larger in the pixel-based than in the object-
based classifications.
Analysis of specific classes revealed that fusion produced the most significant
improvement of classification accuracy in the bare soil class, especially in the pixel-
based classifications (the average increase of the figure of merit was 12.5 percentage points). In the bare soil

Table 3. Shares of classes for the fusion and classification method in percentages.
Classification method Fusion method Water Built-up Bare soil Low vegetation Forest
MLC Original 12.79 5.24 3.05 7.66 71.26
BT 12.80 4.46 3.66 8.64 70.43
IHS 12.78 3.35 3.74 9.41 70.72
PCA 12.79 3.86 4.59 8.57 70.18
P + XS 12.84 1.77 4.31 15.50 65.58
Wavelet 12.78 4.95 2.92 7.44 71.91
ANN Original 12.95 4.24 2.09 6.15 74.57
BT 12.99 4.16 3.95 6.95 71.94
IHS 12.94 2.82 3.21 6.63 74.40
PCA 13.00 4.67 3.86 5.21 73.26
P + XS 13.00 1.96 2.99 10.46 71.59
Wavelet 12.93 2.99 5.87 5.14 73.06
OBIA Original 12.79 2.94 3.79 9.66 70.83
BT 12.86 2.49 4.26 9.16 71.24
IHS 12.78 2.31 3.97 11.49 69.45
PCA 12.81 2.58 4.25 9.45 70.91
P + XS 12.92 1.99 3.60 13.97 67.52
Wavelet 12.75 3.32 3.95 8.82 71.16

Table 4. Classification accuracies for the fusion and classification methods.


Classification Fusion Water Built-up Bare soil Low vegetation Forest
method method F o c F o c F o c F o c F o c A
MLC Original 98.1 1.9 0.0 47.8 2.3 51.7 47.7 48.9 12.4 41.8 57.5 4.2 86.5 0.2 13.3 87.1
BT 98.2 1.8 0.0 47.9 4.3 51.0 54.4 41.2 12.1 49.2 49.6 4.6 88.6 0.3 11.1 89.1
IHS 98.2 1.8 0.0 50.7 9.4 46.5 58.6 35.4 13.7 50.1 48.4 5.4 88.2 0.3 11.6 89.2
PCA 98.5 1.5 0.0 50.1 6.1 48.2 60.6 32.6 14.3 49.9 49.0 4.3 89.2 0.3 10.6 89.7
P + XS 99.5 0.5 0.0 47.4 29.2 41.1 66.7 25.6 13.4 64.8 26.5 15.4 90.3 3.0 7.1 91.6
Wavelet 98.0 2.0 0.0 43.4 2.4 56.1 45.4 51.1 13.5 39.0 60.4 4.2 85.1 0.3 14.7 86.0
ANN Original 98.4 1.0 0.6 47.2 4.7 51.6 27.3 70.8 19.2 31.0 68.1 8.4 82.2 0.1 17.7 83.6
BT 99.6 0.0 0.3 48.2 5.0 50.6 45.7 46.5 24.1 38.3 61.1 3.6 86.0 0.2 13.9 86.6
IHS 99.8 0.1 0.2 52.7 10.1 44.0 38.2 56.0 25.8 30.3 67.8 16.4 84.1 0.1 15.8 85.1
PCA 99.5 0.0 0.5 47.3 4.1 51.7 37.5 54.6 31.8 29.9 69.6 5.1 85.4 0.1 14.5 85.4
P + XS 99.6 0.0 0.4 49.6 22.7 42.0 46.4 50.5 11.9 49.6 47.8 9.0 85.5 0.7 14.0 87.8
Wavelet 99.2 0.5 0.3 47.4 13.7 48.7 47.1 37.0 34.9 26.8 72.5 8.2 84.7 0.1 15.2 84.9
OBIA Original 97.2 2.7 0.1 55.1 9.6 41.5 55.4 39.1 14.0 45.2 52.1 11.0 86.1 0.6 13.5 87.6
BT 98.6 1.3 0.1 54.3 14.9 40.0 56.8 36.7 15.4 44.3 53.3 10.5 86.0 0.6 13.5 87.8
IHS 97.5 2.4 0.0 54.1 15.8 39.8 55.9 38.3 14.5 52.2 43.4 13.0 87.5 1.2 11.6 88.9
PCA 98.1 1.9 0.1 53.3 12.5 42.4 57.6 35.7 15.2 47.3 50.3 9.4 87.2 0.5 12.4 88.6
P + XS 98.9 0.9 0.2 45.1 29.3 44.6 43.7 52.6 15.4 43.6 47.1 28.8 82.3 4.2 14.6 84.7
Wavelet 96.8 3.1 0.0 49.9 8.3 47.8 52.9 41.3 15.7 41.8 56.2 10.1 84.7 0.8 14.7 86.4
F: figure of merit (%); o: omission (%); c: commission (%); A: overall agreement (%).

class, fusion reduced errors of omission in all classifications except for the wavelet
method classified by MLC and OBIA and P + XS classified by OBIA. The classification
accuracy of low vegetation was significantly increased in MLC classification, except
for wavelet fusion. The classification accuracy of the forest was moderately improved
in the pixel-based classifications because of reduced errors of commission, except for
wavelet fusion. The classification accuracy of water was slightly increased in all
classifications, except for wavelet fusion followed by MLC and OBIA classifications.
The built-up class had larger errors of commission than omission in all classifications.
The fusion methods slightly reduced the errors of commission but increased the
errors of omission. This discrepancy was the largest in the built-up class in OBIA
classification; therefore, it can be concluded that all fusion methods had the least
favourable effects in that case.
Classification results of original and fused images were visually assessed on the
example subset (Figure 5).
All fusion methods produced classification maps with smaller patches of land-cover
classes, i.e. with a higher level of detail. On the other hand, they produced increased
amounts of noise. Most of the fusion methods, especially P + XS in the pixel-based
classifications, reduced commissions of built-up area that were intensive in the classifi-
cations of the original images. The bare soil class in the original pixel-based classifica-
tions had large errors of omission that were reduced by the fusion methods. Coastlines
were most precisely detected in images fused by the P + XS and PCA methods. Problems
with the classification of coastlines in images fused by the wavelet method were the
most convincing visual evidence of its lowest quality compared with the other fusion
methods. The influence of pixel values from input bands at lower resolution in wavelet
fusion, which was detected in the visual analysis of the 7–6–5 composite, produced
square and rough edges between classified patches.

Figure 5. Comparative visual analysis of fusion and classification methods on the example subset
(1.5 km × 1.5 km).

5. Discussion
Fusing the six Sentinel-2 20 m bands with four potential 10 m bands is different and
more complex than the conventional fusion task, where a single panchromatic band is
available as the reference for several lower resolution bands. Therefore, existing pan-
sharpening methods need to be adjusted to Sentinel-2 images via two schemes, the
synthesized band and selected band schemes (Wang et al. 2016).
Determining the optimal input images for fusion in this study not only produced
some answers but also posed some further questions. Regarding their original spectral
characteristics, the best higher resolution band for fusion of bands 5, 6, and 7 appeared
to be the synthesized band obtained by the linear combination of bands 4 and 8. For
fusion of the second group (bands 8a, 11, and 12), band 8 was selected as the higher
resolution band. In the second case, there was a larger spectral distance between the
higher resolution band and the lower resolution bands than in the first case. For future
research, it would be important to investigate and synthesize even better higher
resolution bands for the fusion of lower resolution band groups or single bands.
Instead of a simple linear combination of bands 4 and 8, an advanced weighted linear
combination of bands 4 and 8, or of all four bands at 10 m, could be used, as was seen in
Wang et al. (2016). Wang et al. used a multiple linear regression model for weighting (sketched after this paragraph), but
we suggest additional approaches based on the spectral distance between mean
wavelengths for each higher resolution band or the area under the curve that covers
the wavelength range for each higher resolution band. The improvement of the higher
resolution band for the second group is more complex because the spectral range of
lower resolution bands 11 and 12 exceeds the range of the higher resolution band 8.
Our idea is to generate a synthesized higher resolution band from bands 8, 11, and 12 in
an iterative fusion process using a method that can operate with single input images
[e.g. the local mean and variance matching algorithm (Cornet et al. 2001)]. A higher
resolution band from these input bands can also be synthesized by PCA, where the first
component would be the desired product. As proposed in Moeller, Wittman, and
Bertozzi (2009), VWP is a very promising method for hypersharpening because it can
process images with an arbitrary number of spectral bands (i.e. it does not require
grouping), it explicitly enforces spectral cohesion, and it does not make any assumption
on the spectral response of the higher resolution image.
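As a sketch of the multiple-regression weighting mentioned above (our reading of the approach in Wang et al. (2016), with hypothetical variable names): degrade the four 10 m bands to 20 m, solve a least squares problem for the weights that best reproduce the 20 m band to be sharpened, and apply those weights to the original 10 m bands.

```python
import numpy as np

def regression_weights(target_20m, degraded_10m_bands):
    # target_20m:         1-D array of the 20 m band to be sharpened
    # degraded_10m_bands: (n_pixels x 4) array of the 10 m bands degraded to 20 m
    X = np.column_stack([degraded_10m_bands,
                         np.ones(len(target_20m))])   # add an intercept term
    w, *_ = np.linalg.lstsq(X, target_20m, rcond=None)
    return w                                          # four weights + intercept
```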
We discovered that the synthesized scheme for higher resolution bands is better
than the selection of an existing higher resolution band, which corresponds to
previous research (Wang et al. 2016). Wang et al. also concluded that the
ATPRK approach gave results more similar to the reference images than the conventional
CS or MRA fusion methods. That claim should be examined for land-cover classifica-
tion in future research.
Land-cover classification is one of the main remote sensing applications, and it is thus
necessary to measure the effects of fusion on its accuracy. In our study, image fusion
improved the overall classification accuracy in most cases, especially in pixel-based
classifications. The best fusion methods appeared to be P + XS and PCA, and the highest
overall classification accuracy was achieved by P + XS fusion in combination with MLC.
However, images fused by P + XS and PCA had the lowest correlation with the original
(reference) images during the determination of the higher resolution band for lower

resolution bands 5, 6, and 7. Those methods most distorted the spectral values of input
lower resolution bands; however, they emphasized differences in the spectral signatures
of classes, which resulted in better classification. On the other hand, the wavelet method
most degraded the accuracy of all classifications, even though it had the highest
correlation with the reference bands. This showed that the quality assessment of image
fusion should not be based exclusively on the comparison of pixel values between input
bands; the assessment should also include derived fusion products (e.g. classifications)
tested against external data sources of higher quality (e.g. imagery at higher resolution).
A deeper analysis of the effects of fusion on land-cover classification by class
revealed that the largest relative improvements were in the classes of soil, low
vegetation, and forest, especially in pixel-based classifications after P + XS, BT, and
PCA fusion. The improvements came in two complementary pairs: reduced errors of
omission in the soil and low vegetation classes and reduced errors of commission in the
built-up and forest classes. The enhanced spatial resolution of bands that had a high
spectral resolution in red and near-infrared parts of the spectrum improved the
separation between soil and vegetation classes. In P + XS and PCA fusion, the
coastline, as a boundary between sea and land, was the most successfully detected.
All these findings correspond to the fact that the Sentinel-2 mission was mainly
intended for the monitoring of vegetation, soil and water cover, inland waterways,
and coastal areas (Copernicus 2015; Wang et al. 2016). The water class is usually
classified with the highest accuracy, so there is little room for improvement. In
contrast, built-up areas are usually the most complex and the most problematic class
because they incorporate a mosaic of buildings, vegetation, and soil. Therefore, for such
areas it is more suitable to use images of very high spatial resolution than of high spectral resolution.
Very high spatial resolution images (0.25–5.0 m) classified by the object-based
approach are the promising solution to improve the classification of urban areas
(Myint et al. 2011). Post-classification techniques (shadow-removal, knowledge-based
rules, and region-based filtering) have also been proposed (Van De Voorde, De Genst,
and Canters 2007). In our study, the fusion methods did not uniformly improve the
classification accuracy of the built-up class; however, the P + XS method visually
revealed the finest details in these areas. Considering these observations, we con-
cluded that the P + XS and PCA image fusion methods have the greatest potential for
the detection of subtle variations in vegetation, soils, and coastlines. However, fused
images do not have sufficient spatial resolution for detailed and highly accurate
classification of urban areas. Of all tested fusion methods, the wavelet method
produced the least satisfying overall results because it produced bands that were
the most similar to the original bands at a lower resolution and because it produced
square and rough edges between classified patches.
Since this research dealt with the effects of fusion on land-cover classification, some
other common image preprocessing and classification post-processing methods were not
used. Future research should explore the influence of radiometric enhancements, such
as histogram equalization and the high dynamic range method, which are preprocessing
methods applicable to the Sentinel-2 satellite images. Additionally, classification post-
processing methods, such as filtering and the combination of the classification results
from different classifiers (e.g. production rules, a sum rule, stacked regression methods,

majority voting), should be tested because they have been shown to enhance classification
accuracy (Lu and Weng 2007). The 10-band 10 m time-series products will offer excellent
opportunities for blending with the Sentinel-1 (Surek et al. 2016), Landsat 8 (Dimov, Kuhn,
and Conrad 2016), and Sentinel-3 data (Wang et al. 2016). All mentioned pre- and post-
processing procedures should result in an optimized workflow for the maximal exploitation
of freely available remote sensing datasets.

6. Conclusions
The fusion of Sentinel-2 satellite images and tests of the effects of fusion on land-cover
classification offered interesting new insights. According to the accuracy assessment
based on the comparison of different bands and pixels, the fusion methods P + XS and
PCA had the lowest correlation coefficients compared with the reference (original)
bands. However, the classification results based on these fusion methods had the high-
est accuracy. The opposite situation occurred with bands fused by the wavelet method.
The classification of remote sensing images is widely used to formulate and answer
environmental, ecological, and land-use management questions. Therefore, it is very
important to improve classification quality, and a possible solution is image fusion. In
that context, the traditional evaluation of image fusion methods based on a comparison
of fused and reference images should be supplemented by classification accuracy
assessment using an external data source of higher accuracy (e.g. field work and images
at higher spatial resolution).
Because the fusion methods P + XS and PCA produced the most improved
classifications of the forest, low vegetation, soil, and coastal zone classes, those
methods are proposed for the observation of natural land areas covered by vegeta-
tion or bare soil and the detection of coastlines. For improving the classification
accuracy of built-up areas, we recommend the use of very high-resolution images
and post-classification methods. On the other hand, the effects of the wavelet
method on the classification accuracy of Sentinel-2 images were the least satisfying
and should be examined in future research.
Sentinel-2 bands are of very good quality for applications in forest, agricultural, and
coastal zones, and band fusion followed by classification in these areas can lead to even
more satisfying results. If other preprocessing and post-processing methods are applied
to Sentinel-2 images, along with different spatial data sources, freely available remote
sensing applications will advance the detection of objects that are below the resolution
of the original datasets.

Disclosure statement
No potential conflict of interest was reported by the authors.

ORCID
Mateo Gašparović http://orcid.org/0000-0003-2345-7882

References
Aguilar, M. A., M. M. Saldaña, and F. J. Aguilar. 2013. “GeoEye-1 and WorldView-2 Pan-Sharpened
Imagery for Object-Based Classification in Urban Environments.” International Journal of Remote
Sensing 34 (7): 2583–2606. doi:10.1080/01431161.2012.747018.
Aiazzi, B., L. Alparone, S. Baronti, A. Garzelli, and M. Selva 2003. “An MTF-based Spectral Distortion
Minimizing Model for Pan-Sharpening of Very High Resolution Multispectral Images of Urban
Areas.” Remote Sensing and Data Fusion over Urban Areas, 2003. 2nd GRSS/ISPRS Joint Workshop:
90–94. Berlin, May 22–23. doi:10.1109/DFUA.2003.1219964.
Amolins, K., Y. Zhang, and P. Dare. 2007. “Wavelet Based Image Fusion techniques—An
Introduction, Review and Comparison.” ISPRS Journal of Photogrammetry and Remote Sensing
62 (4): 249–263. doi:10.1016/j.isprsjprs.2007.05.009.
Arianespace. 2017. "VV09 Sentinel-2B." Arianespace. Accessed April 27 2017. http://www.arianespace.com/wp-content/uploads/2017/02/VV09-launchkit-EN.pdf
Ballester, C., V. Caselles, L. Igual, J. Verdera, and B. Rougé. 2006. "A Variational Model for P + XS Image
Fusion.” International Journal of Computer Vision 69 (1): 43–58. doi:10.1007/s11263-006-6852-x.
Carper, W. J., T. M. Lillesand, and R. W. Kiefer. 1990. “The Use of Intensity-Hue-Saturation Transform
for Merging SPOT Panchromatic and Multi-Spectral Image Data.” Photogrammetric Engineering &
Remote Sensing 56 (4): 459–467. https://www.asprs.org/wp-content/uploads/pers/1990journal/
apr/1990_apr_459-467.pdf.
Chavez, P. S. Jr., S. C. Sides, and J. A. Anderson. 1991. “Comparison of Three Different Methods to
Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic.”
Photogrammetric Engineering and Remote Sensing 57 (3): 295–303. https://www.asprs.org/wp-
content/uploads/pers/1991journal/mar/1991_mar_295-303.pdf.
Chen, K. 2013. “Introduction to Variational Image-Processing Models and Applications.”
International Journal of Computer Mathematics 90 (1): 1–8. doi:10.1080/00207160.2012.757073.
Colditz, R. R., T. Wehrmann, M. Bachmann, K. Steinnocher, M. Schmidt, G. Strunz, and S. Dech. 2006.
“Influence of Image Fusion Approaches on Classification Accuracy: A Case Study.” International
Journal of Remote Sensing 27 (15): 3311–3335. doi:10.1080/01431160600649254.
Copernicus. 2015. "Sentinel-2A Launch." Copernicus. Accessed April 26 2017. http://www.copernicus.eu/sites/default/files/documents/Sentinel_2A.pdf
Cornet, Y., S. De Bethune, M. Binard, F. Muller, G. Legros, and I. Nadasdi 2001. “RS Data Fusion by
Local Mean and Variance Matching Algorithms: Their Respective Efficiency in a Complex Urban
Context.” IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas: 105–
111. Rome. doi: 10.1109/DFUA.2001.985736
Dimov, D., J. Kuhn, and C. Conrad. 2016. "Assessment of Cropping System Diversity in the Fergana Valley through Image Fusion of Landsat 8 and Sentinel-1." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences III-7. XXIII ISPRS Congress, Prague, July 12–19. doi:10.5194/isprs-annals-III-7-173-2016.
Duro, D. C., S. E. Franklin, and M. G. Dubé. 2011. “A Comparison of Pixel-Based and Object-Based
Image Analysis with Selected Machine Learning Algorithms for the Classification of Agricultural
Landscapes Using SPOT-5 HRG Imagery.” Remote Sensing of Environment 118: 259–272.
doi:10.1016/j.rse.2011.11.020.
Ehlers, M., S. Klonus, P. J. Åstrand, and P. Rosso. 2010. “Multi-Sensor Image Fusion for
Pansharpening in Remote Sensing.” International Journal of Image and Data Fusion 1 (1): 25–
45. doi:10.1080/19479830903561985.
El-Mezouar, M. C., N. Taleb, K. Kpalma, and J. Ronsin. 2011. “An IHS-Based Fusion for Color
Distortion Reduction and Vegetation Enhancement in IKONOS Imagery.” IEEE Transactions on
Geoscience and Remote Sensing 49 (5): 1590–1602. doi:10.1109/TGRS.2010.2087029.
Earth Explorer. 2017. "Earth Explorer." Accessed April 15 2017. http://earthexplorer.usgs.gov/
Gillespie, A. R., A. B. Kahle, and R. E. Walker. 1987. “Color Enhancement of Highly Correlated
images-II: Channel Ratio and “Chromaticity” Transformation Techniques.” Remote Sensing of
Environment 22: 343–365. doi:10.1016/0034-4257(87)90088-5.
Github. 2014. “FuseBox.” Accessed April 18 2017. https://github.com/sjtrny/FuseBox

He, X., L. Condat, J. Chanussot, and J. Xia 2012. “Pansharpening Using Total Variation
Regularization.” Geoscience and Remote Sensing Symposium (IGARSS), 2012. IEEE International.
Munich, July 22-27. doi:10.1109/IGARSS.2012.6351611.
Jiang, D., D. Zhuang, and Y. Huang. 2013. “Investigation of Image Fusion for Remote Sensing
Application.” In New Advances in Image Fusion, edited by Q. Miao, 1–18. Rijeka: InTech.
doi:10.5772/3382.
Li, H., B. S. Manjunath, and S. K. Mitra. 1995. “Multisensor Image Fusion Using the Wavelet
Transform.” Graphical Models and Image Processing 57: 234–245. doi:10.1006/gmip.1995.1022.
Lin, C., C. C. Wu, K. Tsogt, Y. C. Ouyang, and C. I. Chang. 2015. “Effects of Atmospheric Correction
and Pansharpening on LULC Classification Accuracy Using WorldView-2 Imagery.” Information
Processing in Agriculture 2 (1): 25–36. doi:10.1016/j.inpa.2015.01.003.
Lu, D., and Q. Weng. 2007. “A Survey of Image Classification Methods and Techniques for
Improving Classification Performance.” International Journal of Remote Sensing 28 (5): 823–
870. doi:10.1080/01431160600746456.
Main-Knorn, M., B. Pflug, V. Debaecker, and J. Louis. 2015. "Calibration and Validation Plan for the L2A Processor and Products of the Sentinel-2 Mission." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3: 1249–1255. 36th International Symposium on Remote Sensing of Environment, Berlin, May 11–15. doi:10.5194/isprsarchives-XL-7-W3-1249-2015.
Mathworks. 2013. MATLAB R2013a, the Language of Technical Computing. Natick: MathWorks.
Mayer, B., and A. Kylling. 2005. “Technical Note: The libRadtran Software Package for Radiative
Transfer Calculations-Description and Examples of Use.” Atmospheric Chemistry and Physics 5 (7):
1855–1877. doi:10.5194/acp-5-1855-2005.
McGraw-Hill Dictionary of Scientific & Technical Terms. 2017. "Variational Method." The McGraw-Hill Companies, Inc. Accessed April 28 2017. http://encyclopedia2.thefreedictionary.com/variational+method
Moeller, M., T. Wittman, and A. L. Bertozzi 2009. “A Variational Approach to Hyperspectral Image
Fusion.” Proceedings of SPIE 7334. Algorithms and Technologies for Multispectral, Hyperspectral,
and Ultraspectral Imagery. Orlando, April 27. doi:10.1117/12.818243.
Myint, S. W., P. Gober, A. Brazel, S. Grossman-Clarke, and Q. Weng. 2011. “Per-Pixel Vs. Object-
Based Classification of Urban Land Cover Extraction Using High Spatial Resolution Imagery.”
Remote Sensing of Environment 115 (5): 1145–1161. doi:10.1016/j.rse.2010.12.017.
Neural Networks. 2016. “Saga GIS – OpenCV Neural Networks Module.” Accessed April 17 2017.
https://sourceforge.net/p/saga-gis/wiki/opencv_3/
Nikolakopoulos, K. G. 2008. “Comparison of Nine Fusion Techniques for Very High Resolution
Data.” Photogrammetric Engineering & Remote Sensing 74 (5): 647–659. doi:10.14358/
PERS.74.5.647.
OpenLayers. 2017. “OpenLayers”. Accessed April 20 2017. https://openlayers.org/
Palsson, F., J. R. Sveinsson, and M. O. Ulfarsson. 2014. “A New Pansharpening Algorithm Based on
Total Variation.” IEEE Geoscience and Remote Sensing Letters 11 (1): 318–322. doi:10.1109/
LGRS.2013.2257669.
Pandit, V. R., and R. J. Bhiwani. 2015. “Image Fusion in Remote Sensing Applications: A Review.”
International Journal of Computer Applications 120 (10): 22–32. doi:10.5120/21263-3846.
Pereira, O. J. R., A. J. Melfi, and C. R. Montes. 2017. “Image Fusion of Sentinel-2 and CBERS-4
Satellites for Mapping Soil Cover in the Wetlands of Pantanal.” International Journal of Image
and Data Fusion 8 (2): 148–172. doi:10.1080/19479832.2016.1261946.
Pflug, B., M. Main-Knorn, J. Bieniarz, V. Debaecker, and J. Louis. 2016. “Early Validation of Sentinel-2
L2A Processor and Products.” Proceedings of Living Planet Symposium 2016. Prague, May 9–13.
Pontius, R. G. Jr, and M. Millones. 2011. “Death to Kappa: Birth of Quantity Disagreement and
Allocation Disagreement for Accuracy Assessment.” International Journal of Remote Sensing 32
(15): 4407–4429. doi:10.1080/01431161.2011.552923.
Richter, R., and D. Schlaepfer. 2011. Atmospheric/Topographic Correction for Satellite Imagery:
ATCOR-2/3 User Guide Vers. 8.0.2. Oberpfaffenhofen: DLR–German Aerospace Center, Remote
Sensing Data Center.

Richter, R., X. Wang, M. Bachmann, and D. Schläpfer. 2011. "Correction of Cirrus Effects in Sentinel-2 Type of Imagery." International Journal of Remote Sensing 32 (10): 2931–2941. doi:10.1080/01431161.2010.520346.
Sales, M. H. R., C. M. Souza Jr., and P. C. Kyriakidis. 2013. “Fusion of MODIS Images Using Kriging
with External Drift.” IEEE Transactions on Geoscience Remote Sensing 51 (4): 2250–2259.
doi:10.1109/TGRS.2012.2208467.
Sarup, J., and A. Singhai. 2011. “Image Fusion Techniques for Accurate Classification of Remote
Sensing Data.” International Journal of Geomatics and Geosciences 2 (2): 602–612. http://www.
ipublishing.co.in/jggsvol1no12010/voltwo/EIJGGS3052.pdf.
Selva, M., B. Aiazzi, F. Butera, L. Chiarantini, and S. Baronti. 2015. “Hyper-Sharpening: A First
Approach on SIM-GA Data.” IEEE Journal of Selected Topics in Applied Earth Observation and
Remote Sensing 8: 3008–3024. doi:10.1109/JSTARS.2015.2440092.
Surek, G., G. Nádor, Z. Friedl, A. Rotterné Kulcsár, D. Á. Gera, and M. Rada 2016. “Fusion of the
Sentinel-1 and Sentinel-2 Data for Mapping High Resolution Land Cover Layers.” 36th EARSeL
Symposium, Bonn, June 23.
Terreattaz, P. 1998. “Comparison of Different Methods to Merge SPOT P and XS Data: Evaluation in an
Urban Area.” 17th EARSeL Symposium, Future trends in remote sensing: 435–445. Lyngby, June 17–20.
Van De Voorde, T., W. De Genst, and F. Canters. 2007. “Improving Pixel-Based VHR Land-Cover
Classifications of Urban Areas with Post-Classification Techniques.” Photogrammetric Engineering
and Remote Sensing 73 (9): 1017–1027. https://www.asprs.org/wp-content/uploads/pers/
2007journal/september/2007_sep_1017-1027.pdf.
Vetterli, M., and C. Herley. 1992. “Wavelets and Filter Banks: Theory and Design.” IEEE Transactions
on Signal Processing 40 (9): 2207–2232. doi:10.1109/78.157221.
Wald, L., T. Ranchin, and M. Mangolini. 1997. “Fusion of Satellite Images of Different Spatial
Resolutions: Assessing the Quality of Resulting Images.” Photogrammetric Engineering &
Remote Sensing 63 (6): 691–699. https://www.asprs.org/wp-content/uploads/pers/1997journal/
jun/1997_jun_691-699.pdf.
Wang, Q., G. A. Blackburn, A. O. Onojeghuo, J. Dash, L. Zhou, Y. Zhang, and P. M. Atkinson. 2017.
“Fusion of Landsat 8 OLI and Sentinel-2 MSI Data.” IEEE Transactions on Geoscience and Remote
Sensing 55 (7): 3885–3899. doi:10.1109/TGRS.2017.2683444.
Wang, Q., W. Shi, Z. Li, and P. M. Atkinson. 2016. “Fusion of Sentinel-2 Images.” Remote Sensing of
Environment 187: 241–252. doi:10.1016/j.rse.2016.10.030.
Wang, Z., D. Ziou, C. Armenakis, D. Li, and Q. Li. 2005. “A Comparative Analysis of Image Fusion
Methods.” IEEE Transactions on Geoscience and Remote Sensing 43: 1391–1402. doi:10.1109/
TGRS.2005.846874.
Wenbo, W., Y. Jing, and K. Tingjun 2008. Study of Remote Sensing Image Fusion and Its
Application in Image Classification. The International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences. 37(B7):1141–1146. XXI ISPRS Congress, Beijing July 3–
11. http://www.isprs.org/proceedings/XXXVII/congress/7_pdf/6_WG-VII-6/15.pdf
Witharana, C., D. L. Civco, and T. H. Meyer. 2014. “Evaluation of Data Fusion and Image
Segmentation in Earth Observation Based Rapid Mapping Workflows.” ISPRS Journal of
Photogrammetry and Remote Sensing 87: 1–18. doi:10.1016/j.isprsjprs.2013.10.005.
Yuhendra, I. Alimuddin, J. T. S. Sumantyo, and H. Kuze. 2012. "Assessment of Pan-Sharpening
Methods Applied to Image Fusion of Remotely Sensed Multi-Band Data.” International Journal of
Applied Earth Observation and Geoinformation 18: 165–175. doi:10.1016/j.jag.2012.01.013.
Zhang, Y. 2002. "Problems in the Fusion of Commercial High-Resolution Satellite Images as Well as
Landsat 7 Images and Initial Solutions.” International Archives of Photogrammetry Remote
Sensing and Spatial Information Sciences 34 (4): 587–592. http://www.isprs.org/proceedings/
XXXIV/part4/pdfpapers/220.pdf.
Zoleikani, R., M. J. Valadan Zoej, and M. Mokhtarzadeh. 2017. “Comparison of Pixel and Object
Oriented Based Classification of Hyperspectral Pansharpened Images.” Journal of the Indian
Society of Remote Sensing 45 (1): 25–33. doi:10.1007/s12524-016-0573-6.
