GEOCARTO INTERNATIONAL
2021, VOL. 36, NO. 13, 1429–1442
https://doi.org/10.1080/10106049.2019.1665714

Comparison of classification methods for urban green space extraction using very high resolution Worldview-3 imagery

S. Vigneshwaran and S. Vasantha Kumar
School of Civil Engineering, Vellore Institute of Technology (VIT), Vellore, India

ABSTRACT
Urban green space (UGS) plays a vital role in maintaining the ecological balance of a city and in ensuring healthy living of the city inhabitants. It is generally suggested that one-third of the city should be covered by green and to ensure this, the city administrators must have an accurate map of the existing UGS. Such a map would be useful to visualize the distribution of the existing green cover and also to find out the areas that can possibly be converted to UGS. Reported studies on UGS mapping have mostly used medium and high resolution images such as Landsat-TM, ETM+, Sentinel-2A, IKONOS, etc. However, studies on the use of very high resolution images for UGS extraction are very limited. The present study is a first attempt in utilizing the very high resolution Worldview-3 image for UGS extraction. Performance of different classification methods such as unsupervised, supervised, object based and normalized difference vegetation index (NDVI) were compared using the pan sharpened Worldview-3 image covering part of New Delhi in India. It was found that the unsupervised classification followed by manual recoding method showed superior performance with overall accuracy (OA) of 99% and κ coefficient of 0.98. Also, the OA achieved in the present study is the highest when compared to other reported studies on UGS extraction. The map of UGS revealed that almost 40% of the study area is covered by green which is more than the recommended value of 33% (one-third). In order to check the universality of the unsupervised classification approach in extracting UGS, Worldview-3 image covering Rio in Brazil was tested. It was found that an OA of 98% and κ coefficient of 0.95 were obtained which clearly indicate that the proposed approach would work very well in extracting UGS from any Worldview-3 imagery.

ARTICLE HISTORY
Received 12 April 2019
Accepted 25 August 2019

KEYWORDS
Urban green space mapping; unsupervised classification; supervised classification; NDVI; object-based classification

1. Introduction
Cities are expanding at an unprecedented pace in the last few decades as people are
migrating from village to city for improved job opportunities and living conditions. The
percentage of the world's population that lives in cities has already crossed 50% and it is expected to increase further to 66% by 2050 (Spence et al. 2013). Though the
process of urbanization has many deteriorating effects like crowded habitats, polluted air,
water and noise, road traffic congestion, etc., one of the major effects that has profound
social and ecological impact is the loss of green cover in a city. Urban green space (UGS)
has been defined as the area covered by vegetation which may include trees, shrubs, grass-
land and they are considered as the last remnants of nature in urban areas (Yusof 2012;
Chen et al. 2018). Lo and Jim (2012) define UGS as the ‘open spaces situated within city
limits with a vegetation cover planted deliberately or inherited from pre-urbanization
vegetation left by design or by default’. The UGS is very important for a city from an eco-
logical point of view as it helps to conserve biodiversity, alleviate urban heat island effects,
enhance air quality, reduce noise pollution and prevent soil erosion (Pullen et al. 2011;
Ibrahim et al. 2014; Daniels et al. 2018; Yang et al. 2018). On the other hand, UGS has
many social benefits too such as providing recreational opportunities, space for social
meetings, reducing mental stress and improving the well-being of residents. Considering
the importance of UGS, the World Health Organization (WHO) suggested that there should
be a minimum of 9 m² of UGS per person available in cities (WHO 2010). Hence the assess-
ment of existing UGS and identification of new areas for UGS to meet the WHO standard
become utmost essential in order to achieve a sustainable city development.
Mapping of existing UGS using conventional field surveys is a laborious, time consuming
and expensive task. The alternative way is to use the remote sensing technology as it pro-
vides the latest clear picture of the complex urban landscape in a relatively shorter amount of time. With high temporal resolution, it is now possible to have satellite images taken at regular intervals, which makes monitoring of UGS easier. Many studies were
reported on UGS extraction using moderate and high resolution satellite images such as
Landsat-TM, ETM+, OLI (Nero 2017; Texier et al. 2018; Van et al. 2018), Sentinel-2A
(Vatseva et al. 2016; Kopecka et al. 2017), SPOT5 (Yang et al. 2018), AVNIR-2 (Van et al.
2018), Rapid Eye (Nero 2017; Di et al. 2019), LISS-IV of IRS Resourcesat-2 (Sathyakumar
et al. 2018), GaoFen-2 (Chen et al. 2018) and IKONOS (Yusof 2012). These studies have
used images of spatial resolution ranging from 4 m to 30 m. Studies on using very high
resolution images of spatial resolution less than 1 m for UGS extraction are very limited
(Zylshal et al. 2016). One of the main motivations for the present study was to explore
how very high resolution imagery of less than 1 m spatial resolution could potentially be
used for UGS extraction. Zylshal et al. (2016) have used pan sharpened imagery of reso-
lution 0.5 m from Pleiades-1A satellite. However, there are satellites with sensors that are
capable of producing images with spatial resolution of less than 0.5 m. One such satellite
is Worldview-3 which can give pan sharpened images of 0.31 m resolution. The present
study aims to utilize the Worldview-3 imagery for UGS extraction. It is important to note
here that the Worldview-3 is a commercially available satellite of the highest spatial reso-
lution in the world at present. To the best of the authors’ knowledge, there does not exist
any previous published work on utilizing Worldview-3 for UGS extraction and the pre-
sent study is a first attempt in this regard.
Most of the reported studies have used image processing techniques like supervised
classification, object based classification and Normalized difference vegetation index
(NDVI) for UGS extraction. The studies mentioned above have reported overall classifica-
tion accuracy in the range of 86 to 94%. It is important to mention here that none of the
reported studies have made a comparison of the classification methods nor have reported
the best method which could yield results of higher accuracy. Hence, the present study
aims to compare supervised, unsupervised, object based and NDVI approaches in extracting UGS and to find the best method which could provide a classification accuracy higher than what has been reported in earlier studies. This would

Figure 1. Map showing the location of study area in Delhi, India.

help the researchers to opt for the best performing classification method when they use
very high resolution imagery like Worldview-3 for UGS extraction rather than using the
existing available methods. The following section gives the details of the test site and sat-
ellite data used while Section 3 explains the various methods used in the present study for
UGS extraction. The results are discussed in Section 4, followed by concluding remarks in
Section 5.

2. Materials and methods


2.1. Study area
The study area selected for the present work is located in New Delhi, the capital of India
as shown in Figure 1(a). Delhi also known as the National Capital Territory of Delhi
(NCT) is a union territory of India, consisting of 11 districts namely, North, North West,
North East, West, South West, South, South East, East, Central, New Delhi and Shahdara
(Figure 1(b)). The area considered for UGS extraction is basically a rectangular section of
size 5.654 km × 4.625 km with an area of 26.15 km² (Figure 1(c)). The study area falls in
New Delhi and South East districts as seen in Figure 1(b). The reason for selecting this
particular area is that it has a mix of different landuse/land covers like residential (Lodhi
colony, Jangpura), commercial (Khan market), industrial (BHEL industry sector, Pragati
thermal power plant), institutional (Central and state government offices, Foreign
Embassies, Delhi public school, Dyal Singh College, etc.), recreational (Lodhi garden,
National zoological park, Delhi Golf club, Jawaharlal Nehru stadium, etc.) and water
bodies (Yamuna river, Old fort lake) with vast amount of green space widely distributed
across the study area. Therefore, with this study area having a mix of different landuse/
land covers, it would be easier to check the performance of various classification techni-
ques in extracting the UGS alone when compared to other land use/land covers.

2.2. Data used


The WorldView-3 satellite data acquired on September 25, 2014 was used in the present
study. DigitalGlobe, one of the world’s leading providers of high-resolution satellite
imagery, launched the WorldView-3 satellite on August 13, 2014. In order to enable
the research community to explore the uses of WorldView-3 in various application areas,
they have provided some free sample images in their website and the present study used

one such sample image in order to explore its use in UGS extraction (DigitalGlobe Inc. 2019). A unique feature of Worldview-3 is that it captures the earth's surface with a revisit time of less than one day in 29 spectral bands, which is far more than any other sensor currently available. The bands are: one panchromatic band of 0.31 m spatial reso-
lution, eight multispectral bands of 1.24 m resolution, eight shortwave infrared (SWIR)
bands of 3.7 m resolution and 12 CAVIS (Clouds, Aerosols, Vapours, Ice, and Snow)
bands of 30 m resolution. The present study used 4 PAN sharpened images of 0.31 m spa-
tial resolution. Pan sharpening is a process in which high spatial resolution panchromatic
image is merged with the lower resolution multispectral imagery in order to produce a
single high spatial resolution color image. At the time of downloading, the images were
available as PAN sharpened images in Blue, Green, Red and Near Infrared-2 (NIR-2)
bands. The downloaded images were processed at Level-1C which means the radiometric
correction, geometric correction, and ortho rectification were done and images were spa-
tially registered using WGS84 datum and projected using Universal Transverse Mercator
(UTM) Projection.
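The pan sharpening idea described above can be illustrated with a simple Brovey-like ratio transform. The sketch below is purely conceptual, since the downloaded product was already pan sharpened and the provider's actual fusion algorithm is not documented here; it also assumes the multispectral bands have been resampled to the panchromatic grid beforehand.

```python
# Conceptual sketch of pan sharpening with a Brovey-like ratio transform.
# Assumption: ms_bands has shape (n_bands, rows, cols) and is already
# resampled to the panchromatic grid; pan has shape (rows, cols).
import numpy as np

def brovey_like_pansharpen(ms_bands, pan):
    ms = ms_bands.astype(float)
    # Ratio of the high resolution panchromatic intensity to the total
    # multispectral intensity injects the spatial detail into every band
    ratio = pan.astype(float) / (ms.sum(axis=0) + 1e-9)
    return ms * ratio  # sharpened bands, same shape as ms_bands
```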

2.3. Methods
Generally, the methods for extraction of UGS from a satellite image can be grouped into
two types: (1) methods based on multispectral image classification like unsupervised,
supervised and object based classification, and (2) methods based on normalized differ-
ence indices like normalized difference vegetation index (NDVI). The present study
employed all the four methods namely, unsupervised, supervised, object based and NDVI
approach for extracting the UGS using Worldview-3 satellite image. Finally their perform-
ances were compared through accuracy assessment and the best method was identified.
For unsupervised, supervised and NDVI approach, ArcGIS 10.6 was used and for object
based classification, the Imagine Objective tool of ERDAS Imagine 14 was utilized. The
methods which were used for UGS extraction are explained in the following section.

2.3.1. Unsupervised classification


The Iterative Self-Organizing Data Analysis Techniques Algorithm (ISODATA), one of
the widely used unsupervised classification techniques, was used to extract the UGS. The
ISODATA is basically a clustering technique which uses a number of iterations to group
pixels of similar spectral values. It starts with an initial clustering of the pixels and calcu-
lates cluster means. At each iteration, the technique assigns pixels to a particular cluster
based on the spectral distance of the pixel to the cluster mean. The entire process of cal-
culation of new cluster means and assigning of pixels to a particular cluster based on
spectral distance is repeated until the maximum number of iterations is reached. In the
present study, ISODATA classification was applied with 50 clusters or classes and 10 iter-
ations. The number of clusters and iterations, i.e., 50 and 10, respectively were the default
values shown by the software and hence, the same has been used. Even though the final
output after recoding will have only two classes, namely, ‘UGS’ and ‘Others’, it will be
good to run the ISODATA algorithm with 50 classes as the satellite image used is of very
high resolution and hence could spectrally discriminate well the various earth features
present in it. It took almost 5 hours to complete the 10 iterations. The computer with the
following specifications was used to run the unsupervised classification: Windows 10
Single Language 64 bit, Intel® Core™ i5-3337U processor at 1.80 GHz with Turbo Boost up to 2.70 GHz, 8 GB DDR3L SDRAM and 750 GB Hard Disk Drive.
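The clustering loop described above can be sketched in a few lines. The sketch below is a simplified illustration: it omits ISODATA's cluster splitting and merging rules, works on an array already loaded into memory, and is intended for small test arrays rather than a full Worldview-3 scene.

```python
# Simplified sketch of the iterative clustering idea behind ISODATA:
# assign every pixel to the nearest cluster mean by spectral distance,
# recompute the means, and repeat for a fixed number of iterations.
import numpy as np
from scipy.spatial.distance import cdist

def isodata_like_clustering(image, n_clusters=50, n_iter=10, seed=0):
    """image: array of shape (rows, cols, n_bands); returns a label raster."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    # Initial cluster means: a random sample of the image pixels
    means = pixels[rng.choice(len(pixels), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        # Spectral (Euclidean) distance of every pixel to every cluster mean
        labels = cdist(pixels, means).argmin(axis=1)
        # Recompute each cluster mean from its newly assigned pixels
        for k in range(n_clusters):
            members = pixels[labels == k]
            if len(members) > 0:
                means[k] = members.mean(axis=0)
    return labels.reshape(image.shape[:2])
```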



Figure 2. True color and false color composites of Worldview-3 data.

After running the ISODATA algorithm, the classified image was obtained with 50 classes.
In order to assign each of these 50 classes to either of the two classes, namely, ‘UGS’ or
‘Others’, the satellite image was used on the background and manual recoding was carried out.
During manual recoding, true color composite (TCC) was used for the class ‘Others’ and false
color composite (FCC) was used for ‘UGS’. The reason behind this was that in TCC it is diffi-
cult to differentiate between UGS and water bodies as both of them appeared green in color as
shown in Figure 2. The presence of algae and polluting elements like metals and chemicals
cause the lake water to appear greenish in color in a true color composite. Thus it becomes dif-
ficult to differentiate UGS from water bodies as both exhibit similar tone. In order to over-
come this issue, FCC was used while recoding UGS, as vegetation reflects more infrared
energy and thus appears dark red in FCC as shown in Figure 2, while the water bodies appear
in dark cyan color. After manual recoding, finally a map with only two classes was obtained,
i.e., UGS and Others. UGS includes all natural and man-made vegetated areas. These include
the trees, lawns in parks, gardens, apartments, institutions, etc. and green space in sports facili-
ties like golf course, football and hockey grounds which are predominantly covered with grass
or turf. Figure 3 shows the three predominant UGS types as found in the study area. The class
‘Others’ include all land covers other than UGS such as built-up area, water bodies and barren
land. The map so prepared was checked for its accuracy using 100 random points generated
using ‘Create Accuracy Assessment Points’ tool in ArcGIS software. The overall accuracy and
κ statistics were then calculated and used to compare its performance with other methods as
explained below.
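The overall accuracy and κ statistic follow directly from the confusion matrix of the check points. A minimal sketch of this calculation is given below; it assumes the reference and classified labels at the random points are available as simple arrays of class names.

```python
# Sketch of the accuracy assessment step: overall accuracy and Cohen's kappa
# computed from the reference and classified labels at the check points.
import numpy as np

def overall_accuracy_and_kappa(reference, classified):
    reference = np.asarray(reference)
    classified = np.asarray(classified)
    classes = np.unique(np.concatenate([reference, classified]))
    # Confusion matrix: rows = classified class, columns = reference class
    cm = np.array([[np.sum((classified == c) & (reference == r)) for r in classes]
                   for c in classes], dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)                         # OA, kappa
```

Applied to the 100 unsupervised-classification check points summarized later in Table 1, this calculation yields an overall accuracy of 0.99 and κ of about 0.98.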

2.3.2. Supervised classification


In Supervised classification, the user needs to select some samples or representative pixels
for each land cover class in a satellite image which are often known as ‘training sites or

Figure 3. Predominant UGS types found in the study area.

pixels’. Normally it is recommended that 15 to 20 polygons (training samples) need to be


collected for each class (ESRI 2019a, 2019b). In the present study, 50 training polygons
were collected for each class, namely, UGS and Others. While selecting the training poly-
gons for UGS, it was ensured that all the three predominant UGS types available in the
study area (Figure 3) were included. Also, it was suggested that at least 10n to 100n pixels
need to be sampled for each class, where ‘n’ is the number of bands in the image
(Lillesand et al. 2008; Jensen 2015). As the number of bands used is 4, the required num-
ber of pixels is 40 to 400 only. But in the present study, around 2 million pixels were
taken for each class as the image was of very high spatial resolution with pixel size of
0.31 m. Thus the training pixels taken were adequate for performing the supervised
classification.
Once the training samples were given, the supervised classification algorithm used
the spectral signatures created from these training samples to classify the whole image.
The supervised classification algorithms which are available in most of the commercial
and open source image processing software packages are maximum likelihood, minimum dis-
tance, Mahalanobis Distance and spectral angle mapper. Among them, the maximum
likelihood technique is the most popular and widely used algorithm for supervised
classification and the same has been used in the present study also. As maximum like-
lihood is the most commonly used technique, ArcGIS 10.6 also used the same for
supervised classification. The maximum likelihood technique assumed Gaussian or nor-
mal distribution for the training dataset and calculated the probability function for
each class. Then using the probability function, it assigned each pixel to a class that
had the highest probability or maximum likelihood. Finally, accuracy assessment was
carried out using 100 random points to find the overall accuracy and κ statistic.
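A minimal sketch of the two-class maximum likelihood rule described above is given below. It is not the ArcGIS implementation itself; it simply fits a multivariate Gaussian to the training pixels of each class and assigns every pixel to the class with the higher likelihood. Training arrays are assumed to have shape (samples, bands).

```python
# Sketch of maximum likelihood classification for the two classes
# 'UGS' (label 0) and 'Others' (label 1).
import numpy as np
from scipy.stats import multivariate_normal

def maximum_likelihood_classify(image, train_ugs, train_others):
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    scores = []
    for train in (train_ugs, train_others):
        mean = train.mean(axis=0)
        cov = np.cov(train, rowvar=False)
        # Log-likelihood of every pixel under this class's Gaussian model
        scores.append(multivariate_normal(mean, cov, allow_singular=True).logpdf(pixels))
    labels = np.argmax(np.stack(scores, axis=1), axis=1)  # highest likelihood wins
    return labels.reshape(image.shape[:2])
```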

2.3.3. Object based classification


The pixel based classification methods such as supervised and unsupervised are solely
based on the information in each pixel whereas, in object based classification, the infor-
mation from a group of similar pixels called objects is used for classification. The objects
are groups of pixels that are similar in terms of shape, size and spectral characteristics
and they are also sometimes called segments or patches. The Imagine Objective tool, a
specialized tool in ERDAS Imagine 14 software for performing object based classification,
has been used in the present study. It mainly consisted of four steps: computation of
probability matrix, image segmentation, vector conversion and processing, and vector
cleanup. For computation of probability matrix, training pixels of ‘UGS’ were required.
The training pixels used in supervised classification method were also used here. Once the
training sites were given, the probability of being a UGS or not (0 to 1) was calculated
based on the spectral similarity of the given pixel to the spectral values of the training
pixels of UGS identified during the training stage. The second step of image segmentation
utilized both the satellite image and the probability layer (resulting from the previous
step) to partition the image into segments based on spatial and spectral characteristics.
The segments thus created would have the zonal mean value of the pixel probabilities
as attributes.
The segmented raster image was converted to a vector file, which was then used for
further processing and cleanup process. During vector data processing, the geometry of
the training polygons was used to refine the classification process and finally the area of
each polygon in the vector file was computed. In the last step of vector cleanup process, a
probability filter was applied to the vector file and smoothing of polygon objects was car-
ried out. The probability filter removed objects that fell below a threshold value. The
default threshold was 0.9 and hence, the same had been used. As done earlier for super-
vised and unsupervised methods, accuracy assessment was carried out.
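Since the study used the proprietary Imagine Objective workflow, the sketch below is only an open-source analogue of the same idea: segment the image into objects, attach the zonal mean of the per-pixel UGS probability to each object, and keep the objects whose mean probability exceeds the 0.9 threshold. The SLIC segmentation routine from scikit-image and the precomputed probability layer are assumptions made purely for illustration.

```python
# Open-source analogue of the object based workflow: segmentation followed
# by a zonal mean probability filter. Assumes 'image' is a float array of
# shape (rows, cols, bands) scaled to [0, 1] and 'ugs_probability' is a
# per-pixel probability layer of shape (rows, cols).
import numpy as np
from skimage.segmentation import slic

def object_based_ugs(image, ugs_probability, threshold=0.9, n_segments=5000):
    # Partition the image into spectrally homogeneous objects (segments)
    segments = slic(image, n_segments=n_segments, compactness=10,
                    start_label=0, channel_axis=-1)
    ugs_mask = np.zeros(segments.shape, dtype=bool)
    for seg_id in np.unique(segments):
        member = segments == seg_id
        # Zonal mean of the pixel probabilities is the object attribute
        if ugs_probability[member].mean() >= threshold:
            ugs_mask[member] = True
    return ugs_mask
```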

2.3.4. Classification based on NDVI


One of the popular indices used by many researchers for assessing the vegetation status is
normalized difference vegetation index (NDVI) (Rouse et al. 1973). The idea behind
using NDVI is that the green vegetation reflects more near-infrared (NIR) energy while it
absorbs a larger amount of red in visible portions of the electromagnetic spectrum. The
formula to compute NDVI is given below (Rouse et al. 1973).
NDVI = (bNIR − bRED) / (bNIR + bRED)    (1)
where bNIR and bRED are brightness values from NIR and Red band, respectively. The
NDVI calculation is relatively simple and straight forward for extracting the area covered
by vegetation. The minimum and maximum values of NDVI are −1 and +1, respectively.
The values below 0.1 represent barren rocks, water, clouds, sand and snow whereas, values
above 0.6 represent temperate and tropical rainforests (ESRI 2019b). In the present study,
NIR-2 and red bands were used to calculate the NDVI using Equation (1). A careful
investigation of the NDVI output with satellite image on the background revealed the fact
that all green covered areas in the study area had NDVI values above 0.45. Thus
keeping 0.45 as the threshold, the UGS was extracted, i.e., all pixels having NDVI of 0.45
and above were classified as ‘UGS’ while the pixels having NDVI of less than 0.45 were
classified as ‘Others’. In the same way, the accuracy assessment was carried out using ran-
dom points.
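The NDVI based extraction reduces to a band calculation and a threshold test. A minimal sketch using the NIR-2 and red bands is given below, with the 0.45 threshold adopted in the present study; as noted later, this threshold is scene dependent and may need tuning for other study areas.

```python
# Sketch of the NDVI based extraction: Equation (1) followed by thresholding.
import numpy as np

def extract_ugs_ndvi(nir, red, threshold=0.45):
    nir = nir.astype(float)
    red = red.astype(float)
    # Equation (1); a small epsilon avoids division by zero on dark pixels
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi >= threshold  # boolean mask: True = 'UGS', False = 'Others'
```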

Figure 4. Map showing ‘UGS’ and ‘Others’ prepared using (a) unsupervised classification, (b) supervised classification,
(c) object based classification, (d) NDVI based classification.

3. Results and discussion


The map showing ‘UGS’ and ‘Others’ prepared using four classification methods is shown
in Figure 4. The results of accuracy assessment are presented in Table 1. It can be seen
from Table 1 that the unsupervised classification yields accurate results with an overall
accuracy of 99% and κ coefficient of 0.98. It can also be seen from Table 1 that the

Table 1. Results of accuracy assessment.


(Rows: classified data; columns: reference data.)

Class                UGS     Others   Total   User accuracy

Unsupervised classification
UGS                  39      0        39      1.000
Others               1       60       61      0.983
Total                40      60       100
Producer accuracy    0.975   1.000
Overall accuracy = 0.99; κ = 0.979

Supervised classification
UGS                  26      3        29      0.896
Others               11      60       71      0.845
Total                37      63       100
Producer accuracy    0.702   0.952
Overall accuracy = 0.86; κ = 0.685

Object based image classification
UGS                  32      0        32      1.000
Others               8       60       68      0.882
Total                40      60       100
Producer accuracy    0.800   1.000
Overall accuracy = 0.92; κ = 0.827

NDVI based classification
UGS                  41      1        42      0.976
Others               5       53       58      0.913
Total                46      54       100
Producer accuracy    0.891   0.981
Overall accuracy = 0.94; κ = 0.878

unsupervised classification performed well when compared to other methods as the over-
all accuracy and κ coefficient of other methods were less compared to that of unsuper-
vised classification. The reason for its best performance is that in other classification
methods like supervised and object based, only some representative training pixels were
given first for the two classes (UGS and Others), and then the classification was applied.
Whereas, in case of unsupervised classification method, the classification (ISODATA) was
applied first and then, the resulting output was verified manually using the original satel-
lite image on the background. During this verification process, each polygon on the classi-
fied output was assigned either of the two classes manually by checking the background
satellite image. This process is called ‘recoding’ and with high resolution Worldview-3
imagery, it is possible to do this recoding very accurately; and this is the reason why the
unsupervised classification yields accurate results when compared to other methods. The
segregation of the image into various polygons based on spectral homogeneity was done
by the software and after that the assigning of polygons to correct classes was done manu-
ally. Whereas, in case of supervised and object based methods, the software itself does the
assigning part based on training samples given and directly gives the output with two
classes. Sometimes if the spectral values of the two classes fall in the same range, for
example the water and the green space (Figure 2), the supervised and object based meth-
ods may give erroneous results. It was interesting to note that the NDVI method stands
next to the unsupervised classification with an overall accuracy of 94% and κ coefficient
of 0.87. The use of NDVI for extracting vegetated areas is a well-established procedure
and used by many researchers worldwide in the past decades. Hence for extracting green
covered areas also, the method performs well as the green covered areas reflect more NIR
and absorb red energy. After comparing the NDVI output with Worldview-3 imagery, it
was found that all green covered areas in the study area had NDVI values above
0.45. Thus, by keeping 0.45 as the threshold, the UGS was extracted. According to
Congalton and Green (2009), an overall accuracy of 85% is the cut-off between acceptable
and unacceptable results. The overall accuracy obtained in the present study for the four

classification methods were 99%, 94%, 92% and 86% for unsupervised, NDVI, object
based and supervised classification methods respectively. Based on Congalton and Green
(2009), we can say that all the methods yielded acceptable results and the use of
Worldview-3 was one of the main reasons for getting such acceptable results as it was of
very high resolution in nature. Though all the methods produced acceptable results, yet
the unsupervised classification showed superior performance compared to all the
other methods.
It is important to mention here that the present study achieved the highest overall
accuracy when compared to other reported studies on UGS extraction. Review of studies
on UGS extraction revealed that the maximum accuracy reported so far is only 94% by
Di et al. (2019) when they applied NDVI followed by object based classification on
RapidEye imagery of 5 m resolution. Even the study by Zylshal et al. (2016) using very
high resolution (0.5 m) pan sharpened imagery from Pleiades-1A satellite reported a max-
imum accuracy of 86% only through object based classification. However, in the present
study, an overall accuracy of 99% was obtained using Worldview-3 (0.3 m) pan sharpened
imagery when unsupervised classification followed by recoding was applied for UGS
extraction. This clearly shows that the classification by ISODATA unsupervised technique
followed by recoding is highly suitable for UGS extraction using Worldview-3 data.
Though unsupervised classification yields accurate results, yet it requires more time for
running the algorithm (approximately 5 hr for 10 iterations) and recoding (approximately
2 hr). The 7 hr time needed for the unsupervised classification corresponds to the computer specification mentioned before. However, if a computer with a high end processor like Intel® Core™ i7 or i9 is used, it would take less time for running the algorithm. But
such high end processors may cost more. The supervised and object based classification
requires around 2 hr time to collect the training sample and run the algorithm. Out of
2 hr, 30–45 minutes is sufficient for training data collection as one can clearly see UGS
and other land covers in Worldview-3. Running the algorithm itself may take time, especially for the object based method, as it involves many steps. Though supervised and object based classification take less processing time compared to the unsupervised, both methods are incapable of producing accurate results and hence are not suggested for UGS
extraction. If someone wants to perform UGS extraction with limited computing resour-
ces and less computation time, then NDVI method can be preferred as it takes around
15–30 minutes only to get the final UGS map. The present study used a threshold NDVI
value of 0.45 to differentiate between UGS and others. This particular threshold value was
obtained after careful investigation of the NDVI output with original image on the back-
ground. This value is not a generic one and may slightly vary depending on the study
area chosen and the nature of UGS that exists there. Hence, one must be careful in choos-
ing the correct threshold value in order to get accurate results in NDVI method.
The results of accuracy assessment (Table 1) revealed the fact that UGS was sometimes
misclassified as ‘Others’, whereas the reverse was very minimal. For example, in unsuper-
vised classification, only 1 out of 40 points was omitted from the correct UGS category
and committed to the incorrect ‘Others’ category; whereas, all 60 points were correctly
classified as ‘Others’. Similarly in NDVI method also, 5 out of 46 points were omitted
from the correct UGS category and committed to the incorrect ‘Others’ category; whereas,
only 1 out of 54 points was wrongly committed to UGS instead of ‘Others’. A careful
examination of the points where UGS was misclassified as ‘Others’ with the Worldview-3
image on the background showed that the trees which were present there, exhibited a
faint red color and/or presence of shadow in FCC compared to the surrounding trees and
this might be a possible reason for misclassification. Even if it was misclassified by the

Figure 5. Area covered by ‘UGS’ and ‘Others’ by different classification methods.

unsupervised algorithm, it could have been corrected during the manual recoding step.
However, as the image was of very high resolution with millions of pixels, it became tedi-
ous to perform manual recoding and hence, could have led to error if not done carefully.
In such case, one might prefer using NDVI method which performs reasonably well com-
pared to unsupervised classification. The analysis of errors of omission and commission
of supervised and object based methods showed that they were not suitable for UGS
extraction as the errors were huge. For example, in supervised classification, 11 out of 37
points were classified as ‘Others’ while they were actually UGS (Table 1, supervised classi-
fication). Similarly in object based method also, 8 out of 40 points were misclassified as
‘Others’, though they belonged to UGS (Table 1, object based image classification). Thus,
the comparison of accuracy assessment results of all the four methods finally suggested
that, only unsupervised and NDVI methods were found to be suitable for UGS extraction.
If computing resources and time are constraints, then one can prefer using NDVI
method, else unsupervised classification can be used for UGS extraction.
The area covered by ‘UGS’ and ‘Others’ calculated using four classification outputs is
shown in Figure 5. It can be seen that the supervised and object based methods underesti-
mate the green cover as the area covered by UGS is only 7.68 km² and 8.43 km², respectively, whereas the area covered by green cover is 10.20 km² and 10.96 km² by unsupervised and NDVI methods, respectively. As the unsupervised classification produced the highest accuracy of 99% when compared to other methods, the area given by it can be considered to be accurate. The benefit of calculating the areal extent is that we can crosscheck whether the green cover available is in sufficient quantity or not. For example,
global standards recommend 33% green cover for urban areas, i.e., 1/3rd of the total
urban area should be covered by green (MOEF 2014; Govindarajulu 2014). From Figure
5, it can be seen that the proportion of UGS by unsupervised classification is 38.98%.
Thus we can say that almost 40% of the study area is covered by green which is more
than the recommended value of 33%. If the present study is extended to the entire Delhi cor-
poration, then it would be possible to accurately calculate the percentage of green cover
and check whether the UGS is available as per the standards or not.
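The areal extent plotted in Figure 5 follows from a simple pixel count. A minimal sketch of this calculation for a classified UGS mask is given below, assuming the 0.31 m pixel size of the pan sharpened product.

```python
# Sketch of the area calculation from a boolean classified UGS mask.
import numpy as np

def ugs_area_km2(ugs_mask, pixel_size_m=0.31):
    n_ugs = int(np.count_nonzero(ugs_mask))
    area_m2 = n_ugs * pixel_size_m ** 2
    return area_m2 / 1e6  # convert m^2 to km^2

# For the unsupervised output, about 10.20 km2 out of the 26.15 km2 study
# area corresponds to roughly 39% green cover, as reported above.
```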
In order to check the universality of the best performing unsupervised classification
approach in extracting UGS, Worldview-3 image covering Rio in Brazil was utilized. The
PAN sharpened image of 0.31 m spatial resolution acquired on May 02, 2016 was

Figure 6. (a) Worldview-3 image of Rio, Brazil (True color composite), (b) false color composite, (c) map showing
‘UGS’ and ‘Others’ prepared using unsupervised classification.

Table 2. Results of accuracy assessment for Rio, Brazil.


Class                UGS     Others   Total   User accuracy
UGS                  16      1        17      0.941
Others               0       33       33      1.000
Total                16      34       50
Producer accuracy    1.000   0.970
Overall accuracy = 0.98; κ = 0.954

downloaded from DigitalGlobe (DigitalGlobe Inc. 2019). The image size was 2.396 km × 1.844 km with an area of 4.419 km². The unsupervised classification using ISODATA tech-
nique was carried out with 50 classes and 10 iterations. The same parameters used to pro-
cess the Delhi image were applied here also, while running the ISODATA algorithm. The
manual recoding was then applied using the Worldview-3 image on the background and
finally a map with two classes, namely, ‘UGS’ and ‘Others’ was prepared. The results are
shown in Figure 6 with accuracy assessment in Table 2. It can be seen from Figure 6 that
the proposed approach of unsupervised classification followed by manual recoding worked
very well as it had perfectly extracted the UGS from the satellite image. The overall accur-
acy and κ were found to be 98% and 0.95, respectively and this clearly indicated that the
proposed approach would work very well in extracting UGS from any Worldview-
3 imagery.

4. Concluding remarks
The ever expanding cities, especially in developing countries like India, have resulted in the con-
version of productive agricultural and forest lands to impervious concrete surfaces in the
recent decades. The non-availability of sufficient green cover in cities leads not only to ecological imbalance but also to other problems such as increased mental stress and deterioration of physical health. The first and foremost step the civic authorities should take to ensure the availability of sufficient green spaces in any city is to prepare a
map showing the exact location and extent of existing green spaces, using which one can
identify whether the existing UGS is sufficient or not. If it is not adequate to meet the
growing city population, then steps can be taken to increase the UGS by planting more
trees, construction of parks, etc. The very high resolution satellite images like Worldview-
3 can be best utilized to prepare such UGS maps in a fast and efficient way and the present
study attempted this by comparing all the popular image classification methods. The
unsupervised classification followed by manual recoding showed superior performance
when compared to other methods even though it is slightly time consuming. The overall

accuracy achieved by unsupervised classification in the present study is the highest when
compared to any other reported studies on UGS extraction. If computing resources and
time are constraints, then one can prefer using NDVI method, else unsupervised classifi-
cation can be used for UGS extraction.

Disclosure statement
No potential conflict of interest was reported by the authors.

Acknowledgement
The authors thank VIT for providing ‘VIT SEED GRANT’ for carrying out this research work.

References
Chen W, Huang H, Dong J, Zhang Y, Tian Y, Yang Z. 2018. Social functional mapping of urban green
space using remote sensing and social sensing data. ISPRS J Photogramm Remote Sens. 146:436–452.
Congalton RG, Green K. 2009. Assessing the accuracy of remotely sensed data. 2nd ed. New York (NY):
CRC Press.
Daniels B, Zaunbrecher B, Paas B, Ottermanns R, Ziefle M, Nickoll M. 2018. Assessment of urban green
space structures and their quality from a multidimensional perspective. Sci Tot Environ. 615:
1364–1378. doi:10.1016/j.scitotenv.2017.09.167
Di S, Li Z, Tang R, Pan X, Liu H, Niu Y. 2019. Urban green space classification and water consumption
analysis with remote-sensing technology: a case study in Beijing, China. Int J Remote Sens. 40(5–6):
1909–1929.
DigitalGlobe Inc. 2019. Product samples: Worldview-3 Satellite ortho ready standard; [accessed 2019
April 1]. http://www.digitalglobe.com/samples
Environmental Systems Research Institute (ESRI). 2019a. Creating training samples. ArcGIS desktop help
10.6 spatial analyst extension. http://desktop.arcgis.com/en/arcmap/latest/extensions/spatial-analyst/
image-classification/creating-training-samples.htm
Environmental Systems Research Institute (ESRI). 2019b. NDVI function. ArcGIS desktop help 10.6 raster
functions. http://desktop.arcgis.com/en/arcmap/latest/manage-data/raster-and-images/ndvi-function.htm
Govindarajulu D. 2014. Urban green space planning for climate adaptation in Indian cities. Urban
Climate 10(1):35–41.
Ibrahim I, Samah AA, Fauzi R. 2014. Biophysical factors of remote sensing approach in urban green ana-
lysis. Geocarto Int. 29(7):807–818.
Jensen JR. 2015. Introductory digital image processing: a remote sensing perspective. 4th ed. USA:
Pearson.
Kopecka M, Szatmari D, Rosina K. 2017. Analysis of urban green spaces based on Sentinel-2A: Case stud-
ies from Slovakia. Land 6(25):1–17.
Lillesand T, Kiefer RW, Chipman J. 2008. Remote sensing and image interpretation. 6th ed. Hoboken
(NJ): Wiley.
Lo AYH, Jim CY. 2012. Citizen attitude and expectation towards greenspace provision in compact urban
milieu. Land Use Policy 29(3):577–586.
Ministry of Environment and Forests (MOEF). 2014. Guidelines for conservation, development and man-
agement of urban greens. http://envfor.nic.in/content/comments-invited-draft-guidelines-conservation-
development-and-management-urban-greens
Nero BF. 2017. Urban green space dynamics and socio-environmental inequity: multi-resolution and spa-
tiotemporal data analysis of Kumasi, Ghana. Int J Remote Sens. 38(23):6993–7020.
Pullen NH, Patterson MW, Gatrell JD. 2011. Empty spaces: neighbourhood change and the greening of
Detroit 1975–2005. Geocarto Int. 26(6):417–434.
Rouse JW, Haas RH, Schell JA, Deering DW. 1973. Monitoring vegetation systems in the Great Plains
with ERTS. In: 3rd ERTS Symposium, NASA SP-351 I. p. 309–317; NASA, Washington, D.C.,
United States.

Sathyakumar V, Ramsankaran R, Bardhan R. 2018. Linking remotely sensed Urban Green Space (UGS)
distribution patterns and Socio-Economic Status (SES) – a multi-scale probabilistic analysis based in
Mumbai, India. GISci Remote Sens. 56(5):645–669.
Spence M, Annez PC, Buckley RM. 2013. Urbanization and growth. The International Bank for
Reconstruction and Development. Washington (DC): The World Bank on behalf of the Commission
on Growth and Development.
Texier M, Schiel K, Caruso G. 2018. The provision of urban green space and its accessibility: Spatial data
effects in Brussels. PLoS One 13(10):1–17.
Van TT, Tran NDH, Bao HDX, Phuong DTT, Hoa PK, Han T. 2018. Optical remote sensing method for
detecting urban green space as indicator serving city sustainable development. Proceedings 2:1–6.
Vatseva R, Kopecka M, Otahel J, Rosina K, Kitev A, Genchev S. 2016. Mapping urban green spaces based
on remote sensing data: case studies in Bulgaria and Slovakia. In: Proceedings of the 6th International
Conference on Cartography and GIS; Albena, Bulgaria: Bulgarian Cartographic Association, Bulgaria;
p. 569–578.
World Health Organization (WHO) 2010. Urban planning, environment and health: from evidence to
policy action 2010. Denmark: WHO.
Yang J, Guan Y, Xia J(C), Jin C, Li X. 2018. Spatiotemporal variation characteristics of green space eco-
system service value at urban fringes: a case study on Ganjingzi District in Dalian, China. Sci Tot
Environ. 639:1453–1461.
Yusof MJ. 2012. Identifying green spaces in Kuala Lumpur using higher resolution satellite imagery. Int J
Sustainable Trop Des Res Pract. 5(2):93–106.
Zylshal SS, Yulianto F, Nugroho JT, Sofan P. 2016. A support vector machine object based image analysis
approach on urban green space extraction using Pleiades-1A imagery. Model Earth Syst Environ. 2(54):
1–12.
