
Chapter 2

Review of Related Literature and Studies

This chapter presents ideas, finished theses,
generalizations and conclusions, methodologies, and
other related material. The works included in this
chapter help familiarize the reader with information
that is relevant and similar to the present study.

Related Literature

According to Parr (2015), the arrival of the World
Wide Web, smartphones, tablets, and GPS units has
increased the use, availability, and amount of digital
geospatial information present on the Internet. Users can
view maps, follow routes, find addresses, or share their
locations in applications including Google Maps,
Facebook, Foursquare, Waze and Twitter. These
applications use digital geospatial information and rely
on data sources of street networks and address listings.
Previously, these data sources were mostly governmental
or corporate and much of the data was proprietary.
Frustrated by the lack of free digital
geospatial data, Steve Coast created the OpenStreetMap
project in 2004 to collect a free, open, and global
digital geospatial dataset. Now with over one million
contributors from around the world, and a growing user
base, the OpenStreetMap project has grown into a viable
alternative source for digital geospatial information.
The growth of the dataset relies on the contributions of
volunteers who have been labeled ‘neogeographers’ because
of their perceived lack-of-training in geography and
cartography (Goodchild 2009b; Warf and Sui 2010; Connors,
Lei, and Kelly 2012). This has raised many questions into
the nature, quality, and use of OpenStreetMap data and
contributors (Neis and Zielstra 2014; Neis and Zipf 2012;
Estima and Painho 2013; Fan et al. 2014; Haklay and Weber
2008; Corcoran and Mooney 2013; Helbich et al. 2010;
Mooney and Corcoran 2012b; Haklay 2010b; Budhathoki and
Haythornthwaite 2013; Mooney, Corcoran, and Winstanley
2010; Mooney and Corcoran 2011; Haklay et al. 2010;
Mooney, Corcoran, and Ciepluch 2013; Stephens 2013). This
study aims to complement and contribute to this body of
research on Volunteered Geographic Information in general
and OpenStreetMap in particular by analyzing three
aspects of OpenStreetMap geographic data. The first
aspect considers the contributors to OSM by building a
typology of contributors and analyzing the contribution
quality through the lens of this typology. This part of
the study develops the Activity-Context-Geography model
of VGI contribution which uses three aspect dimensions of
VGI contributions: the Activity (the amount and frequency
of content creation, modification and deletion); Context
(the technological and social circumstances that support
a contribution); and Geography (the spatial dimensions of
a contributor’s pattern). Using the complete
OpenStreetMap dataset from 2005 to 2013 for the forty-
eight contiguous United States and the District of
Columbia, the study creates twenty clusters of
contributors and examines the differences in positional
accuracy of the contributors against two datasets of
public school locations in Texas and California. The
second part of the study considers the questions of where
mapping occurs by evaluating the spatial variability of
OSM contributions and comparing mapping activity against
population and socioeconomic variables in the US. The
third part of the study considers the choices that OSM
contributors make through the types of features that are
most commonly mapped in different locations.
Understanding the types of contributors, their
differences in quality, the spatial variability in
mapping activity, and their choices in types of features
to provide data will provide insight into the credibility
of users, the trustworthiness of their contribution, and
where there are gaps in mapping activity and feature
representation.
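The Activity-Context-Geography typology Parr describes groups contributors by clustering them along the three dimensions. A minimal sketch of that kind of clustering is shown below; the feature values and the plain k-means routine are illustrative assumptions, not Parr's actual twenty-cluster procedure or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical contributor features standing in for the three ACG
# dimensions. Columns: edits made (Activity), distinct editing days
# (Context proxy), mean spread of a contributor's edits in km
# (Geography). Values are invented for illustration.
features = rng.random((200, 3)) * np.array([5000, 365, 50])

# Standardize so no dimension dominates the distance metric.
z = (features - features.mean(axis=0)) / features.std(axis=0)

def kmeans(x, k, iters=100, seed=0):
    """Minimal k-means: returns a cluster label for each row of x."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each contributor to its nearest cluster center.
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([x[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

labels = kmeans(z, k=4)
print(np.bincount(labels, minlength=4))  # contributors per cluster
```

Once a typology like this exists, each cluster's contributions can be compared separately against reference data (as Parr does with school locations) to see whether contributor type predicts positional accuracy.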

According to Norwood (2012), rural communities with
bountiful natural amenities are attracting unprecedented
in-migration. When unmanaged, the ensuing development
threatens the ecological and cultural assets that are
driving growth and valued by many residents. Despite the
availability of geospatial analysis and visualization
tools that seem well-suited to aiding community
deliberations about land use planning and common pool
resources, these tools have rarely been shown to
effectively help communities understand and address
threats to their landscape. Through a multi-year, mixed-
method participatory research process with community
partners in Macon County, North Carolina, I have studied
the potential of geospatial information to enjoy
increased local relevance, become more accessible to
local discussions, and better engage local stakeholders.
I co-developed an iterative research process that draws
on critical GIS and participatory research traditions,
using ethnographic interviews to guide geospatial
analysis and mapping. I produced maps and landscape
visualizations that successfully contributed to efforts
to engage local residents in discussions about their
changing community. I also studied how maps contribute to
local planning efforts and their effect on attitudes
towards planning. I found that maps designed to be
relevant to local planning discussions can support more
deliberative discussion and successful public engagement,
aid in the recognition and articulation of shared
community goals that challenge dominant pro-growth
narratives, and enhance local capacity for planning and
resource management. Further, the maps produced in
community-driven processes both reflect and shape the
shifting discursive strategies through which land use
planning or conservation advocates navigate amenity
migration landscapes. However, simply supplying visual
information about growth and development trends in an
experimental mail survey did not affect attitudes towards
planning measures. This research addresses critical but
often unasked questions about the relationship between
research and on-the-ground outcomes. It should be of
interest to landscape change researchers who want their
findings to inform land use decision making, critical GIS
scholars who are interested in applications,
participatory researchers interested in GIS and iterative
research designs, and local leaders who want to better
engage residents in thinking about changing landscapes
and growth management.

According to Imaoka (2016), Mapping Risk situates
geospatial technologies in a long line of media composing
an evolving (nuclear) imagination of disaster while
unpacking the assumptions of a field of knowledge that
position them as risk managing technologies par
excellence. It considers how spatial technology acts in
the “risk society” as a visual cultural media practice.
That is, as a technology of risk production and risk
management within the context of ordinary everydayness
and catastrophic extraordinariness. Threads of inquiry
include unpacking the social and economic evaluation and
subsequent branding practices of geospatial technology by
practitioners and commercial suppliers to show how these
tools are positioned as systematic means to see and
manage the world. It then follows these assumptions into
spheres of media practice. First, it examines the
applications of geospatial technologies by news media
outlets and social media networks during past large-scale
disasters and ongoing cases of risk. The local and
globally produced and circulated geodata, visuals, and
social and mass media narratives of radiation risk
emanating from the 2011 Fukushima Nuclear Power Station
disaster provide the main case study. Lastly, it
considers the viability of the geographic web for
disaster capitalistic ventures by examining spatial media
in the sphere of digital humanitarian and for-profit
practice. It looks at Japan’s tourism industry’s
collaborations with Google Corporation following the
triple disaster, reading the location-based “content”
deployed with the purpose to return international
travelers to the nation and instill national pride in
Japanese citizens dealing with the disaster's effects.

According to Carr (2012), geospatial tools and
technologies have become core competencies for natural
resource professionals due to the monitoring, modeling,
and mapping capabilities they provide. To prepare
students with needed background, geospatial instructional
activities were integrated across Forest Management;
Natural Resources; Fisheries, Wildlife, & Conservation
Biology; and Environmental Technology & Management
curricula in the Department of Forestry and Environmental
Resources at North Carolina State University. As
additions were made to the curricula, the effectiveness of
the integration and how well students were meeting
geospatial outcomes were unknown. The purpose of this
study was to evaluate student attainment of geospatial
outcomes. The study was conducted in three phases to
address three study objectives. The first objective was
to develop an outcomes-based framework to assess student
learning. An assessment framework is a conceptual
approach for identifying foundational elements
underpinning assessment activities such as identifying
the type of assessment, identifying stakeholders,
articulating student learning outcomes, and identifying
criteria for success. The second objective was to develop
assessment methods, identifying where and how often
evidence of learning would be collected and analyzed. The
third objective was to report results of the assessments,
commenting on the current state of student learning and
suggesting possible avenues for improving student
learning, geospatial integration, and the assessment
process. To develop our framework, we reviewed assessment
literature and consulted assessment experts on campus.
That guidance, in combination with our assessment goals,
led us to choose a formative and utilization-focused
assessment approach focused on intended uses by key
stakeholders. Our stakeholders included facilitators
responsible for developing and integrating geospatial
activities in courses, faculty with geospatial
integration in their courses, and program directors of
curricula with integration in courses. We worked with NC
State University Planning and Analysis and developed
structured interviews. Content analysis of interview data
identified stakeholders' geospatial objectives, where
they would look for evidence of learning, and their
criteria for success. This information helped guide the
development and implementation of assessment methods.
Faculty and administrators indicated that they believed
evidence of student learning was demonstrated through
students' deliverables or could be tested directly. In
response, we collected students' maps, lab reports, term
projects, and capstone course management plans and
evaluated them with rubrics. Other assessment tools
included tracking questions embedded on tests and
quizzes, pre-post tests before and after series of
instructional laboratories, and longitudinal surveys
designed to solicit students' awareness of and confidence
in their ability to use geospatial tools. Students'
deliverables produced mixed results, but students in
programs with integration incorporated spatial analysis
within their assignments successfully. Pre-post tests
showed that students' knowledge increased after course-
embedded activities, and surveys indicated students'
awareness and confidence were significantly increased at
the completion of their programs. Rubrics used to assess
students' term projects and capstone management plans
revealed that forestry seniors met skills-based,
information literacy, and conceptual knowledge outcomes.
Natural resources seniors independently chose to use
appropriate spatial analysis in their term projects and
management plans, demonstrating adoption and
internalization of spatial problem solving techniques.
Curricula and courses we have worked with the longest
have more instructional opportunities and the most
seamless integration into ongoing coursework. The
assessments showed that students in these programs
performed better than students in programs with fewer
learning opportunities. As a result, we are working with
faculty in all curricula to design and facilitate
activities that effectively complement students'
classroom activities and that are more closely aligned
with course content and performance expectations. This
approach helps students utilize the knowledge and tools
in authentic situations. The assessments helped us
identify instructional missteps and unforeseen assessment
issues that help us modify our teaching and assessment
methods. The assessments are also producing baseline
student learning information we can use to objectively
evaluate both student performance and our performance as
educators. We believe this study will be useful to
institutions with similar goals and needs, and that the
assessment methods can be adapted to fields of
instruction other than forestry, natural resources, and
spatial information systems.

According to Frey and Meier (2010), synthetic aperture
radar (SAR) systems are used to obtain geospatial
information for a broad range of applications, such as
measuring geo- and biophysical parameters, topographic
mapping, monitoring of land subsidence, landslides, and
crustal deformation, as well as disaster mapping. In
recent years, advanced SAR acquisition modes of growing
complexity have been proposed in order to gain more
flexibility in terms of usable sensor constellations and
acquisition scenarios, as well as in an attempt to
increase the number of observables to allow for a more
reliable image and parameter inversion. These new imaging
modes require more flexible SAR image reconstruction
algorithms. Within the scope of this dissertation, a
novel time-domain back-projection (TDBP) based SAR image
processing software was developed and investigated in
terms of two nonstandard data acquisition scenarios: 1)
SAR imaging along highly nonlinear sensor trajectories,
and 2) high-resolution tomographic imaging of a forest at
L-band and P-band. To this end, two airborne SAR
experiments were designed, which were flown by the German
Aerospace Center's E-SAR system in September 2006. By
means of the experimental data involving highly nonlinear
sensor trajectories it was shown that the TDBP focusing
algorithm yields a superior image quality as compared to
a combined chirp scaling and mosaicking approach. The
results of the study indicate that, in general, the TDBP
algorithm imposes virtually no restrictions on the shape
of the sensor trajectory. It is therefore an attractive
method for efficient mapping along curvilinear objects of
interest, such as traffic routes, rivers, or pipelines. A
second emphasis of this dissertation is on SAR tomography
of forest environments. In order to explore in detail the
back-scattering behavior of radar signals within a forest,
a non-model-based TDBP tomographic imaging approach was
pursued. In particular, three different direction-of-
arrival estimation techniques, multilook beamforming,
robust Capon beamforming, and MUSIC beamforming, were
implemented in order to focus the two multibaseline
airborne SAR data sets at L-band and P-band. In terms of
focusing quality, an unprecedented level of detail was
obtained using the proposed TDBP-based tomographic
imaging approach. Gaps in the canopy due to features like
small forest roads are well visible in the tomographic
image, for instance. Thus, the three-dimensional
tomographic SAR imagery provides a good basis to
investigate the back-scattering properties of the
forested area at L-band and P-band. With three
prospective spaceborne SAR remote sensing missions,
BIOMASS at P-band, Tandem-L, and DESDynI, both at L-band,
which are all aimed at global mapping and monitoring of
carbon stock by assessing the above ground biomass of
forests, establishing a good understanding of the
interaction of microwaves at L-band and P-band with
forests is critical in order to develop reliable biomass
products. By means of a detailed analysis of the high-
quality three-dimensional SAR data products obtained by
tomographic processing, including a cross-validation with
airborne laser scanning data, a substantial contribution
towards an improved understanding of the interaction of
microwaves at L-band and P-band with forest environments
was achieved within this work.
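The core of time-domain back-projection is conceptually simple: for every image pixel, look up each pulse's range-compressed echo at that pixel's slant range to the sensor, remove the two-way propagation phase, and sum coherently. The toy sketch below uses an invented curvilinear trajectory, wavelength, and point target (not the E-SAR geometry) purely to illustrate why the method tolerates nonlinear flight paths:

```python
import numpy as np

wavelength = 0.23                     # assumed L-band-like wavelength (m)
k4pi = 4 * np.pi / wavelength         # two-way phase constant

# Invented curvilinear sensor trajectory, one position per pulse.
n_pulses = 128
t = np.linspace(0, 1, n_pulses)
sensor = np.stack([200 * t, 30 * np.sin(3 * t), 500 + 20 * t], axis=1)

target = np.array([100.0, 5.0, 0.0])  # hypothetical point scatterer

# Simulate ideal range-compressed echoes: a sinc response centered at
# the true slant range, carrying the two-way propagation phase.
dr = 0.5                                        # range-bin spacing (m)
r_bins = np.arange(480.0, 560.0, dr)
R_true = np.linalg.norm(sensor - target, axis=1)
echoes = (np.sinc((r_bins[None, :] - R_true[:, None]) / dr)
          * np.exp(-1j * k4pi * R_true)[:, None])

# Back-project onto a ground grid: no assumption about the trajectory
# shape enters anywhere; only the exact pixel-to-sensor ranges are used.
xs = np.arange(90.0, 110.0, dr)
ys = np.arange(-5.0, 15.0, dr)
image = np.zeros((len(ys), len(xs)), dtype=complex)
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        R = np.linalg.norm(sensor - np.array([x, y, 0.0]), axis=1)
        idx = np.clip(np.round((R - r_bins[0]) / dr).astype(int),
                      0, len(r_bins) - 1)
        image[iy, ix] = np.sum(echoes[np.arange(n_pulses), idx]
                               * np.exp(1j * k4pi * R))

peak = np.unravel_index(np.abs(image).argmax(), image.shape)
print(xs[peak[1]], ys[peak[0]])  # peak should land near (100, 5)
```

Because the phase correction uses the exact pixel-to-sensor range for every pulse, the same loop focuses data from an arbitrarily curved trajectory, which is the property the dissertation exploits.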

According to Qin (2017), map-based crowdsourcing is one
of the most significant contemporary trends in the
geospatial sciences and has completely changed many data
collection workflows, and added new sources of data. An
important aspect of this emerging trend is the manner in
which data quality is assessed, and how well these
quality assessment processes match processes used in
traditional map-based and geographic information systems-
based quality assessment procedures. This dissertation
studies the evolution of geographic data collection, and
the methods of quality assessment, and builds a
comprehensive quality assessment workflow for
geocrowdsourced data. This workflow is based on many
traditional formulations of quality, such as positional
accuracy, temporal consistency, categorical accuracy,
fitness-for-use, and lineage. These quality assessment
workflows are studied through the George Mason University
Geocrowdsourcing Testbed (GMU-GcT), which was designed to
study dynamic aspects of map-based crowdsourcing. The
GMU-GcT tests the implementation of techniques from the
US National Map Accuracy Standard (NMAS) and the
National Standard for Spatial Data Accuracy (NSSDA), as
well as several new techniques, modified over time, that
are shown to have value within the specific context of
geocrowdsourcing conducted with the GMU-GcT. This
research extends the quality assessment work with
modeling of a pedestrian network and the accessibility
characteristics associated with navigation obstacles,
many of which have been crowdsourced with the GMU-GcT,
and tests the feasibility of infrastructure maintenance
using geocrowdsourced data and associated quality
assessment parameters. The quality assessment techniques
from traditional mapping domains are shown to have value
in the domain of geocrowdsourcing, and the ability to
model pedestrian network accessibility and maintenance
optimization is demonstrated through this work.
Extensions of this research into geosocial media is
explored with mixed results, and future work in
simplified, image-based geocrowdsourcing is explored to
determine what quality assessment metrics can be derived
from greatly simplified geocrowdsourcing methods.
Additional modeling enhancements, based on alternative
optimization strategies and weighting factors, are
discussed as a future area of work. A summary of end-user
and subject matter expert feedback is discussed in the
context of future modifications to the GMU-GcT.
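The NSSDA positional-accuracy statistic mentioned above is straightforward to compute: the horizontal RMSE over a set of checkpoints, scaled by 1.7308 to report accuracy at the 95% confidence level. The standard recommends at least 20 well-distributed checkpoints; the four coordinate pairs below are invented for illustration only:

```python
import math

# Hypothetical checkpoint pairs: (tested_x, tested_y, ref_x, ref_y)
# in metres, comparing crowdsourced positions to a reference survey.
checkpoints = [
    (100.0, 200.0, 101.2, 199.1),
    (350.5, 120.0, 349.8, 121.0),
    (500.0, 480.0, 501.5, 478.9),
    (220.0, 640.0, 219.2, 641.3),
]

# Horizontal RMSE over the checkpoints.
sq = [(tx - rx) ** 2 + (ty - ry) ** 2 for tx, ty, rx, ry in checkpoints]
rmse_r = math.sqrt(sum(sq) / len(sq))

# NSSDA reports accuracy at 95% confidence using the 1.7308 factor
# (valid when the x and y error components are similar in magnitude).
accuracy_95 = 1.7308 * rmse_r
print(f"RMSE_r = {rmse_r:.3f} m, NSSDA accuracy = {accuracy_95:.3f} m")
```

The statement "tested N m horizontal accuracy at 95% confidence" that NSSDA prescribes comes directly from `accuracy_95`, which is the kind of figure a geocrowdsourcing testbed can report alongside each dataset.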

According to Thomas (2017), floods are one of the
most devastating disasters known to man, caused by both
natural and anthropogenic factors. The trend of flood
events is continuously rising, increasing the exposure of
the vulnerable populace in both developed and especially
developing regions. Floods occur unexpectedly in some
circumstances with little or no warning, and in other
cases, aggravate rapidly, thereby leaving little time to
plan, respond and recover. As such, hydrological data is
needed before, during and after the flooding to ensure
effective and integrated flood management. Though
hydrological data collection in developed countries has
been somewhat well established over long periods, the
situation is different in the developing world.
Developing regions are plagued with challenges that
include inadequate ground monitoring networks attributed
to deteriorating infrastructure, organizational
deficiencies, lack of technical capacity, location
inaccessibility and the huge financial implication of
data collection at local and transboundary scales. These
limitations, therefore, result in flawed flood management
decisions and aggravate exposure of the most vulnerable
people. Nigeria, the case study for this thesis,
experienced unprecedented flooding in 2012 that led to
the displacement of 3,871,530 persons, destruction of
infrastructure, disruption of socio-economic activities
valued at 16.9 billion US Dollars (1.4% GDP) and sadly
the loss of 363 lives. This flood event revealed the
weakness in the nation’s flood management system, which
has been linked to poor data availability. This flood
event motivated this study, which aims to assess these
data gaps and explore alternative data sources and
approaches, with the hope of improving flood management
and decision making upon recurrence. This study adopts an
integrated approach that applies open-access geospatial
technology to curb data and financial limitations that
hinder effective flood management in developing regions,
to enhance disaster preparedness, response and recovery
where resources are limited. To estimate flood magnitudes
and return periods needed for planning purposes, the gaps
in hydrological data that contribute to poor estimates
and consequently ineffective flood management decisions
for the Niger-South River Basin of Nigeria were filled
using Radar Altimetry (RA) and Multiple Imputation (MI)
approaches. This reduced uncertainty associated with
missing data, especially at locations where virtual
altimetry stations exist. This study revealed that the
size and consistency of the gap within hydrological time
series significantly influences the imputation approach
to be adopted. Flood estimates derived from data filled
using both RA and MI approaches were similar for
consecutive gaps (1-3 years) in the time series, while
wide (inconsecutive) gaps (> 3 years) caused by gauging
station discontinuity and damage benefited the most from
the RA infilling approach. The 2012 flood event was also
quantified as a 1-in-100-year flood, suggesting that if
flood management measures had been implemented based on
this information, the impact of that event would have
been considerably mitigated. Other than gaps within
hydrological time series, in other cases hydrological
data could be totally unavailable or limited in duration
to enable satisfactory estimation of flood magnitudes and
return periods, due to finance and logistical limitations
in several developing and remote regions. In such cases,
Regional Flood Frequency Analysis (RFFA) is recommended,
to collate and leverage data from gauging stations in
proximity to the area of interest. In this study, RFFA
was implemented using the open-access International
Centre for Integrated Water Resources Management–Regional
Analysis of Frequency Tool (ICI-RAFT), which enables the
inclusion of climate variability effect into flood
frequency estimation at locations where the assumption of
hydrological stationarity is not viable. The Madden-
Julian Oscillation was identified as the dominant flood
influencing climate mechanism, with its effect increasing
with return period. Similar to other studies, climate
variability inclusive regional flood estimates were less
than those derived from direct techniques at various
locations, and higher in others. Also, the maximum
historical flood experienced in the region was less than
the 1-in-100-year flood event recommended for flood
management. The 2012 flood in the Niger-South river basin
of Nigeria was recreated in the CAESAR-LISFLOOD
hydrodynamic model, combining open-access and third-party
Digital Elevation Model (DEM), altimetry, bathymetry,
aerial photo and hydrological data. The model was
calibrated/validated in three sub-domains against in situ
water level, overflight photos, Synthetic Aperture Radar
(SAR) (TerraSAR-X, Radarsat2, CosmoSkyMed) and optical
(MODIS) satellite images where available, to assess model
performance for a range of geomorphological and data
variability. Improved data availability within
constricted river channel areas resulted in better
inundation extent and water level reconstruction, with
the F-statistic reducing from 0.808 to 0.187 downstream
into the vegetation dominating delta where data
unavailability is pronounced. Overflight photos helped
improve the model to reality capture ratio in the
vegetation dominated delta and highlighted the
deficiencies in SAR data for delineating flooding in the
delta. Furthermore, the 2012 flood was within the confine
of a 1-in-100-year flood for the sub-domain with maximum
data availability, suggesting that in retrospect the 2012
flood event could have been managed effectively if flood
management plans were implemented based on a 1-in-100-
year flood. During flooding, fast-paced response is
required. However, logistical challenges can hinder
access to remote areas to collect the necessary data
needed to inform real-time decisions. Thus, this study
an integrated approach that combines crowd-sourcing and
MODIS flood maps for near-real-time monitoring during the
peak flood season of 2015. The results highlighted the
merits and demerits of both approaches, and demonstrate
the need for an integrated approach that leverages the
strength of both methods to enhance flood capture at
macro and micro scales. Crowd-sourcing also provided an
option for demographic and risk perception data
collection, which was evaluated against a government risk
perception map and revealed the weaknesses in the
government flood models caused by sparse/coarse data
application and model uncertainty. The C4.5 decision tree
algorithm was applied to integrate multiple open-access
geospatial data to improve SAR image flood detection
efficiency and the outputs were further applied in flood
model validation. This approach resulted in F-Statistic
improvement from 0.187 to 0.365 and reduced the CAESAR-
LISFLOOD model overall bias from 3.432 to 0.699. Coarse
data resolution, vegetation density, obsolete/non-
existent river bathymetry, wetlands, ponds, uncontrolled
dredging and illegal sand mining, were identified as the
factors that contribute to flood model and map
uncertainties in the delta region, hence the low accuracy
depicted, despite the improvements that were achieved.
Managing floods requires the coordination of efforts
before, during and after flooding to ensure optimal
mitigation in the event of an occurrence. In this study,
an integrated flood modelling and mapping approach is
undertaken, combining multiple open-access data using
freely available tools to curb the effects of data and
resources deficiency on hydrological, hydrodynamic and
inundation mapping processes and outcomes in developing
countries. This approach, if adopted and implemented on a
large scale, would improve flood preparedness, response
and recovery in data sparse regions and ensure floods are
managed sustainably with limited resources.
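The F-statistic values quoted above (0.808, 0.187, 0.365) refer to the standard fit measure for comparing modelled and observed inundation extents, F = A / (A + B + C), where A is the area wet in both maps, B the area the model overpredicts, and C the area it misses. A sketch with toy binary grids (the grids are invented; real inputs would be a SAR-derived flood map and a CAESAR-LISFLOOD output rasterized to the same cells):

```python
import numpy as np

# Toy binary flood grids (True = wet): a hypothetical observed extent
# and a hypothetical modelled extent on the same raster.
observed = np.array([[0, 1, 1, 0],
                     [1, 1, 1, 0],
                     [0, 1, 0, 0]], dtype=bool)
modelled = np.array([[0, 1, 1, 1],
                     [0, 1, 1, 0],
                     [0, 1, 0, 0]], dtype=bool)

a = np.sum(observed & modelled)    # wet in both (agreement)
b = np.sum(~observed & modelled)   # modelled only (overprediction)
c = np.sum(observed & ~modelled)   # observed only (underprediction)

# F = 1 is a perfect match; both over- and underprediction lower it.
f_stat = a / (a + b + c)
print(f_stat)  # 5 / 7 ≈ 0.714 for these toy grids
```

Because both error types sit in the denominator, F penalizes a model that floods everything just as much as one that floods nothing, which is why it is preferred over simple percent overlap for inundation validation.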

According to Warren (2010), geospatial tools and
information play an important role in urban planning and
policymaking, and maps have diverse uses in legal,
environmental, political, land rights, and social arenas.
Widespread participation in mapmaking and access to its
benefits is limited by obscure and expensive tools and
techniques. This has resulted in poor or nonexistent maps
for much of the world's population, especially in areas
of urban poverty. In particular, public access to recent
and high-resolution satellite imagery is largely
controlled by government and large industry. This thesis
proposes balloon and kite aerial photography as a low-
cost and easy to learn means to collect aerial imagery
for mapping, and introduces a novel open-source online
tool for orthorectifying and compositing images into
maps. A series of case studies where such tools and
techniques were used by communities and activists in
Lima, Peru and during the 2010 BP oil spill highlight the
empowering role broader participation in cartography can
play in advocacy, and the potential for increased
cartographic literacy to level the playing field in
territorial self-determination for small communities.
Compared to other efforts to democratize mapmaking, which
focus primarily on the presentation and interpretation of
existing map data, this project emphasizes participation
in the creation of new data at its source - direct
imaging of the earth's surface. Accompanying educational
materials and workshops with adults and youth, as well as
an active online community of participants, have ensured
wide adoption of Grassroots Mapping practices.

According to Hodza (2007), this dissertation examines
the coupling of GIS and immersive visualization (IV).
This study was premised on the hypothesis that linking IV
and GIS could potentially enhance a user's visual-
cognitive capacity to perceive, analyze, and understand
complex geospatial data. A loosely-coupled GIS-IV system
was developed and subsequently tested by professional
soil scientists in soil boundary mapping and soil map
update case studies. Soil boundary mapping is essentially
a visualization enterprise involving the creation of
cognitive models representing the relationship between
soil and observable environmental features. A GIS-IV
conceptual model was formulated, and an operational
system was designed and implemented. The GIS-IV model
seeks to bring the cognitive and logical semantic worlds
closer together, and move GIS toward experiential
immersion within the geospatial and mapped data. The
model places the GIS-IV system within a framework that
emphasizes significant user-computer interactivity,
multidimensional representation, and experiential
knowledge creation and understanding. The GIS-IV
framework supports geocomputational analysis, and
facilitates 'visual and spatial thinking'. In addition,
the GIS-IV model bridges the historical divide between
GIS and advanced geovisualization methods, and extends
the visualization capability of GIS well beyond
traditional cartographic 2D mapping. The GIS-IV system
was implemented using commercial-off-the-shelf (COTS)
software, a stereoscopically-enabled multi-user immersive
Cave Automatic Virtual Environment (CAVE), a pen-based
Tablet PC, and an enterprise geodatabase server. The GIS-
IV system was supported by a robust geospatial data
management system, geospatial analysis, and geovisual
analytical capabilities. In addition, the system is
scalable, extensible, and flexible, and facilitates and
encourages geocollaboration between researchers. A user-
based and task-based use and usability testing of the
GIS-IV system in two soil mapping applications involved
several collaborating soil scientists, and revealed very
positive reactions, considerable commonality in
viewpoint, and occasional varying user perceptions of the
system and experience of performing collaborative
'virtual' soil mapping. Overall, the participants'
questionnaire responses reflected positively on the use
of the GIS-IV system for virtual soil boundary mapping
and soil map revision. In particular, the ability of the
system to support 'same place-same time' geocollaborative
interpretative image analysis, a 'go anywhere'
capability, and an immersive and experiential
interpretation and mapping environment that provided
access to physically inaccessible or trespass prohibited
areas as well as multiple aerial imagery and geospatial
datasets were identified as the main strengths of the
system. The participants found the GIS-IV system to be
more intuitive than traditional soil mapping practice,
and capable of improving the speed and quality of soil
mapping. The quality and accuracy of the virtual soil map
products were examined using comparative visual analysis,
a confusion matrix, a fuzzy agreement matrix, and through
geospatial 'error' modeling methods. The fuzzy-based
approach is sensitive to the imprecision in human
reasoning used in soil mapping and is recommended as the
best approach for assessing the quality of the soil map
products.
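The confusion matrix used to examine the virtual soil maps reduces to counting reference-versus-mapped class pairs at sample sites; overall accuracy is the diagonal fraction, and Cohen's kappa (a common companion statistic, not necessarily the one Hodza used) discounts the agreement expected by chance. A sketch with invented class labels:

```python
import numpy as np

# Hypothetical per-site soil classes, coded 0..2: the reference map
# versus the classes drawn in the virtual (GIS-IV) product.
reference = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])
mapped    = np.array([0, 1, 1, 1, 2, 2, 2, 0, 1, 2])

k = 3
confusion = np.zeros((k, k), dtype=int)
for r, m in zip(reference, mapped):
    confusion[r, m] += 1          # rows: reference, cols: mapped

# Overall accuracy: fraction of sites on the diagonal.
overall = np.trace(confusion) / confusion.sum()

# Cohen's kappa: agreement beyond what the class proportions alone
# would produce by chance.
row = confusion.sum(axis=1)
col = confusion.sum(axis=0)
expected = (row * col).sum() / confusion.sum() ** 2
kappa = (overall - expected) / (1 - expected)
print(overall, round(kappa, 3))   # prints 0.8 0.697
```

A fuzzy agreement matrix generalizes this by giving partial credit when the mapped class is a near-neighbor of the reference class, which is why the dissertation recommends the fuzzy approach for soil maps, where class boundaries are inherently imprecise.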

According to Robinson (2010), Public Participation
Geographic Information Systems (PPGIS) study the
applications of geospatial technologies, including
Geographic Information Systems (GIS), by members of the
public. PPGIS emerged in the 1990s in response to
epistemological criticisms that the social, political and
philosophical implications of GIS had been largely
ignored by GIS practitioners. PPGIS strives to address
criticisms by making GIS more widely available to
grassroots groups and individuals. In the United States,
access to GIS among community-based organizations (CBOs)
remains limited because of the cost and complexity of
geospatial technologies and the inaccessibility of
appropriate spatial data. GIS mapping and spatial
analyses, however, have proved to be valuable to CBOs in
visualizing community dynamics. PPGIS examines access
barriers and utilizes participatory approaches to build
GIS capacity at the grassroots. Syracuse Community
Geography (SCG) was developed in 2005 with the goal of
improving access to GIS among community-based
organizations in Syracuse, New York. SCG is a university-
community partnership that responds to requests for GIS
assistance from CBOs seeking to use GIS to support a wide
variety of community initiatives. The objective of the
current research is to examine how the Syracuse Community
Geography facilitator-based model of PPGIS responds to
GIS and Society criticisms and PPGIS practical
implementation challenges. I investigate key process and
outcome measures discussed in the literature using three
case studies and questionnaires. Case studies explore how
SCG
facilitates GIS access among community-based
organizations seeking to use GIS to analyze issues of
food insecurity, neighborhood walkability and adolescent
health. Questionnaires distributed to an additional 28
SCG community project representatives test participants'
perceptions of SCG's efficacy. Analyses of case study and
questionnaire evidence reveal that the Syracuse Community
Geography model is largely successful in addressing
challenges ascribed to PPGIS. It is a viable model of
PPGIS that uses a facilitator-based approach and a
participatory process that could be replicated in other
settings. The process and outcome evaluation metric used
to evaluate the efficacy of SCG could also be adapted by
other PPGIS practitioners. Implications for future
research are discussed.

According to Graziosi G.H. 2012. This dissertation
examines the dynamics of Urban Geospatial Digital
Neighborhood Areas (Urban GeoDNA) and their impacts on
local information discovery. It analyzes the demand and
supply sides of information from a community perspective
to understand how variations in local boundary
definitions condition the quantity and quality of
informational resources users can discover through
digital libraries to plan urban neighborhood
environments. Primary data obtained through interviews
with bottom-up participants from local Community Based
Organizations (CBOs) and libraries are combined with
secondary data gathered from a variety of top-down
sources including federal, state and city agencies. These
datasets are analyzed using a series of Geographic
Information System (GIS) processes and results are loaded
into a final GeoDNA database developed according to
current Geospatial Information and Mapping Policies
(GIPMs). Using a selected set of seven neighborhoods in
Bronx County, NY, the study integrates top-down and
bottom-up boundary definitions to test the role urban
GeoDNA plays for discovering local information by online
users to conduct community development and environmental
planning activities. Specifically, the research compares
three different neighborhood boundary versions to assess
their effects on the quality and quantity of local
information users can discover through digital libraries
geospatially. In addition, a group of socio-demographic
variables at the census tract level is examined to
determine if such boundary variations are related not
only to information discoverability but also to the
characteristics found within different types of
neighborhoods. Finally, the study evaluates the
combination of top-down and bottom-up geospatial
information
by appending the different neighborhood boundary files
gathered for the research and testing their aggregate
usability to discover relevant resources with which to
conduct planning activities at the local level. Results
from the study suggest that, by combining geospatial
definitions from top-down and bottom-up sources, new and
extended neighborhood boundaries can be created and used
to georeference local resources without altering the
ranking of materials found through geospatial searches.
Therefore, an aggregate boundary approach can be used to
enrich the fundamental essence of urban GeoDNA materials
to allow users to discover information that carries both
geographical and ontological knowledge about local
neighborhoods simultaneously. The study also provides
insights for community users to become more proactively
involved in the dissemination of local knowledge because,
by publishing metadata about their studies, reports and
other resources with aggregate geospatial definitions,
the chances for their discovery are increased. Moreover,
the study contributes to the growing body of literature
on Public Participatory GIS (PPGIS) by expanding the
opportunities community participants have to contribute
local information from the ground up and make it
discoverable in a geolibrary environment.
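
One way to picture how an extended aggregate boundary lets
a geospatial search discover resources that a narrower
top-down boundary misses is a point-in-polygon test. The
boundaries and resource location below are hypothetical
toy coordinates; this is a minimal sketch, not the study's
GIS workflow or GeoDNA database queries.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge cross the horizontal ray extending right from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical neighborhood boundaries: a top-down (agency-defined)
# polygon and an aggregate polygon extended with bottom-up definitions.
top_down = [(0, 0), (4, 0), (4, 4), (0, 4)]
aggregate = [(0, 0), (6, 0), (6, 4), (0, 4)]

resource = (5, 2)  # georeferenced location of a local report
print(point_in_polygon(*resource, top_down))   # False: missed by the top-down boundary
print(point_in_polygon(*resource, aggregate))  # True: discovered with the aggregate boundary
```

In practice a GIS performs this containment test with
indexed spatial joins, but the principle is the same: a
resource is only discoverable within boundaries that
actually enclose it.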

According to Yu C. 2005. Modern geography focuses on
studying processes. In addition to observed phenomena,
the study of geographic processes must (and does) place
emphasis on understanding how components interact within
geographic systems. As a fundamental tool for geographic
representation and spatial analysis, current GISystems
(geographic information systems) are nevertheless still
data centered. While they are good at representing "what"
and "where" information, they have limited capabilities
in representing higher-level knowledge. This is because
current GISystems lack the means to capture and represent
human understanding of geographic processes, and so
cannot address "how" and "why" questions. In addition,
non-observational factors such as
laws, policies, regulations, plans, and cultural elements
(e.g. religions, customs) cannot be easily represented.
Instead of the traditional data-centered approach, this
dissertation presents a knowledge-oriented strategy for
the representation of geographic processes. To reach that
end, two major steps are adopted: (1) introducing the
concept of GeoAgents as the spatiotemporally distributed
knowledge-representation components, and (2) presenting
an integrated approach to incorporate multiple knowledge-
representation techniques with geospatial databases.
GeoAgents are defined in this dissertation as spatial,
dynamic, and scale-dependent agents within an explicitly
geographic context. By incorporating GeoAgents with
graph-based concept maps, rule-based expert systems,
quantitative models, and geospatial databases, this
research develops a Java-based prototype, the
GeoAgent-based Knowledge System (GeoAgentKS), that allows
the
representations of diverse kinds of geographic knowledge
and spatial data to be integrated in a single cohesive
software system. To examine the knowledge-oriented
strategy of geographic representation in real-world
problems, GeoAgentKS is employed in a case study to
represent the complex geographic processes relevant to
community water systems (CWSs) in Central Pennsylvania.
In this case study, geographic knowledge is captured via
interpretation of the pre-existing documents and
computer-based concept-mapping interviews with domain
experts. To evaluate the usability of GeoAgentKS,
evaluation interviews with different experts and novices
were conducted to assess the adequacy of the knowledge
representation and the effectiveness in conveying
knowledge. The experts in the evaluation interviews
believed that it was possible to use the GeoAgentKS to
represent the complex, dynamic, and scale-dependent
human-environment interactions, and that the knowledge
stored in the GeoAgentKS could be quickly learned by
novices.
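
The idea of a spatial, scale-dependent agent that carries
rule-based knowledge can be sketched as follows. Every
class name, attribute, and rule here is a hypothetical
illustration of the general technique, not the GeoAgentKS
API; the nitrate threshold is an assumed example rule.

```python
# Minimal sketch of a rule-based "GeoAgent": an agent bound to a
# location and scale that fires expert-system rules against
# geospatial attributes, mixing observed data with policy knowledge.

class GeoAgent:
    def __init__(self, location, scale, attributes):
        self.location = location      # e.g. (lat, lon) of a community water system
        self.scale = scale            # e.g. "municipal", "watershed"
        self.attributes = attributes  # observed geospatial data
        self.rules = []               # (condition, conclusion) pairs

    def add_rule(self, condition, conclusion):
        """condition: attributes -> bool; conclusion: label added when it holds."""
        self.rules.append((condition, conclusion))

    def infer(self):
        """Run every rule and return the conclusions that apply to this agent."""
        return [c for cond, c in self.rules if cond(self.attributes)]

# A hypothetical community-water-system agent with one observational
# rule and one policy rule (non-observational knowledge).
cws = GeoAgent((40.8, -77.9), "municipal",
               {"nitrate_mg_l": 12.0, "wellhead_protected": False})
cws.add_rule(lambda a: a["nitrate_mg_l"] > 10.0, "exceeds nitrate limit")
cws.add_rule(lambda a: not a["wellhead_protected"],
             "wellhead protection plan required")
print(cws.infer())  # both rules fire for this agent
```

The point of the design is that laws, plans, and customs,
which have no direct observational representation in a
data-centered GIS, become explicit rules attached to
spatially situated agents.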
