
Increasing the accuracy of

location-based augmented
reality applications using
object recognition
David Chalk | MSc GIS | September 13, 2017
Table of Contents
List of Figures ...................................................................................................................... 3
List of Tables ....................................................................................................................... 4
Abstract ............................................................................................................................... 5
1. Introduction ................................................................................................................. 5
1.1. Context ................................................................................................................. 5
1.2. Research Question, aims and objectives............................................................. 6
1.3. Report structure ................................................................................................... 7
2. Literature Review ......................................................................................................... 8
2.1. Introduction ......................................................................................................... 8
2.2. Augmented reality ............................................................................................... 8
2.2.1. What is augmented reality?.......................................................................... 8
2.2.2. Applications of Augmented Reality ............................................................... 9
2.2.3. Marker-based AR ......................................................................................... 10
2.2.4. Location-based AR ........................................................................................11
2.2.5. Image / Natural Feature Tracking for AR ................................................... 12
2.2.6. Outdoor SLAM (Simultaneous Localization And Mapping)....................... 13
2.2.7. Building Information Modelling (BIM) AR.................................................. 14
2.2.8. Notable AR application and research .......................................................... 14
2.3. 3D GIS.................................................................................................................. 15
2.3.1. What is 3D GIS? ........................................................................................... 15
2.3.2. 3D Geographic Data Visualisation .............................................................. 17
2.3.3. 3D GIS data in AR ........................................................................................ 17
2.4. AR Accuracy ........................................................................................................ 18
3. Methodology ...............................................................................................................19
3.1. Defining the problem ......................................................................................... 19
3.2. SQL Spatial Database & Android Studio developed system ............................ 20
3.3. Unity developed system ..................................................................................... 21
3.4. Comparison, system selection and justification ............................................... 24
3.5. Marker-based AR System .................................................................................. 26
3.5.1. Defining the system aim ............................................................................. 26
3.5.2. Data ............................................................................................................ 27
3.5.3. System Process ........................................................................................... 27
3.5.4. System Code ................................................................................................ 28

3.5.5. System summary ........................................................................................ 28
4. Developing a visually assisted location-based AR application................................. 29
4.1. Location-based AR System ................................................................................ 29
4.1.1. Defining the Problem .................................................................................. 29
4.1.2. Data ............................................................................................................ 29
4.1.3. System process ............................................................................................ 29
4.1.4. System code ................................................................................................ 30
4.1.5. System Summary ......................................................................................... 31
4.2. Image Recognition Algorithm ............................................................................ 31
5. Testing the system ..................................................................................................... 37
5.1. Data accuracy Testing: Methodology ............................................................... 37
5.2. Data Accuracy Testing: Results ......................................................................... 39
6. Discussion .................................................................................................................. 42
6.1. Study summary .................................................................................................. 42
6.2. Review of the application developed ................................................................ 42
6.3. Results of the study in context .......................................................................... 43
6.4. Assumptions and limitations ............................................................................ 44
6.5. Implications of the research .............................................................................. 46
7. Conclusion.................................................................................................................. 46
7.1. Conclusions ........................................................................................................ 46
7.2. Future work ........................................................................................................ 47
8. References .................................................................................................................. 48
9. Appendices ................................................................................................................. 53
9.1. GPS C# script ..................................................................................................... 53
9.2. Gyrocontrol C# script ........................................................................................ 53
9.3. MapRenderScript C# script ............................................................................... 54
9.4. MercatorProjection C# script ............................................................................ 55
9.5. PanelScript C# Script ......................................................................................... 56
9.6. UpdateGPSText C# script .................................................................................. 56
9.7. WebcamEdgeDetection C# script ..................................................................... 56

List of Figures
Figure 1 - A system overview for 4D BIM augmented reality application. Taken from
Hakkarainen et al. 2009. ..................................................................................................... 9
Figure 2 - Diagram visualising the process of a Marker-based AR system. Taken from
(Ćuković, et al., 2015). .........................................................................................................11
Figure 3 - Fiducial marker proposed by (Edwards, et al., 2016). ...................................... 13
Figure 4 - A conceptual framework for GIScience. Taken from (Goodchild, 2010). ....... 16
Figure 5 - Screenshot of code used to generate and populate the SQL spatial database.
........................................................................................................................................... 20
Figure 6 - Screenshot of the data held on the spatial database overlaid on an
OpenStreetMap web mapping service in QGIS desktop application version 2.18.1. ....... 21
Figure 7 - Diagram to show flow from data detection to final product. ......................... 22
Figure 8 - A screenshot of the Unity project designed to test whether the software is
capable of an AR application. ........................................................................................... 23
Figure 9 - Four screenshots displaying the application requesting permissions,
displaying GPS information and changing virtual camera orientation based on the
device gyroscope. .............................................................................................................. 23
Figure 10 - Diagram visualising the system review and selection process ...................... 25
Figure 11 - Polyhedral surface in the form of a hammerhead shark, taken from (Kettner,
1999). ................................................................................................................................. 25
Figure 12 - Screenshot displaying the unity project for the marker-based AR
application. ........................................................................................................................ 27
Figure 13 - Screenshot of the AR application in action, a printed QR code is placed on
the floor and the geometry of the box and pipes are calibrated with respect to it. ....... 28
Figure 14 - Screenshot displaying a top-down view of how the location of the virtual
cylinders are positioned in the virtual Unity world in relation to the camera. .............. 30
Figure 15 - Screenshot of the image recognition code calculating the edges of the
objects in the device camera view. ................................................................................... 34
Figure 16 - Screenshot visualising how the edge detection algorithm is used to identify
the edges of vertical objects and highlight the entire column green. ............................ 35
Figure 17 - Diagram indicating the scenario in which the tests were carried out with
testing locations. ............................................................................................................... 39
Figure 18 - Four images detailing the differences in 'pixelDetection' and
'detectionSensitivity' values. A) Low 'pixelDetection' b) high 'pixelDetection' c) low
'detectionSensitivity' d) high 'detectionSensitivity'. ........................................................ 40
Figure 19 - Two screenshots of the application in action. A) displays the AR system
calibrated using only the device sensors, b) displays the positional accuracy of the
cylinder increased using the object recognition algorithm. ............................................ 41

List of Tables
Table 1 - Summary of cost/benefit analysis of the two AR application development
systems. ............................................................................................................................. 24
Table 2 - Table displaying the results of the data accuracy test. Distances that the
virtual cylinders moved were recorded. Successful movements were recorded in black
and unsuccessful movements, red. .................................................................................. 39

Abstract
A location-based augmented reality (AR) application is presented, enhancing the
positional accuracy of the visualised data using a supplementary object recognition
algorithm tailored to identifying vertical utility infrastructure, such as piping and
cabling. A literature review identified existing methods of increasing the accuracy of
location-based AR systems, and a visual object recognition method was selected as the
most relevant for this study. Two potential application development methods were
assessed, and the Unity 3D video-gaming software package was concluded to be the most
applicable for this study. Marker-based, location-based and enhanced location-based
AR systems were developed and tested with respect to the positional accuracy of utility
data, and the object recognition enhancement was concluded to be a viable option for
improving AR positional accuracy. However, further study was identified as necessary to
make the application robust and reliable enough for implementation in a construction
environment.

1. Introduction

1.1. CONTEXT

In the construction and associated industries, ineffective communication of information
and data between parties leads to delays in project completion, delays to nearby transport
infrastructure, increased project cost and reduced employee safety. The UK government
estimates that roughly 2 million road maintenance projects take place on the roads in
England every year, and there are two main types of road works:

 works for utility companies maintaining infrastructure networks such as water,
electricity, gas and telecommunications; and
 local highway authorities maintaining the roads (Department for Transport,
2016).

It has been estimated that local authorities in the UK are able to save up to 17% of their
expenditure on road maintenance while continuing to maintain the current quality of
infrastructure (Wheat, 2017). However, savings must be made in an effective manner,
because poorly chosen cost-saving measures in infrastructure improvement can reduce
benefits to the UK economy by up to £2.90 for every £1 saved (Thiessen, et al., 2016).
The number of accidents that occur in construction
environments was found to be directly correlated to site risk (Forteza, et al., 2017). In
the most extreme cases, employees could be seriously injured or killed as the result of
poor communication in the construction process. Of the 137 people who were killed in
their workplace in the statistics year 2016/17, 30 were in the construction sector, the
highest of any sector in the UK. Examples of such occurrences include construction
engineers drilling into live electric cables, which accounted for 8 of the 30
construction-sector fatalities (Health and Safety Executive, 2017).

Traditionally, construction companies and contractors use paper mapping of
underground utility resources and assets to communicate information between the
surveyors, architects, engineers, project managers and manual labourers. As technology
improved over time, 3D computer models were used to transfer information between
the surveyors, architects, engineers and project managers during the design process.
However, there is still no single system which is accurate and powerful enough to give
reliable, live information, in 3D, to all parties throughout the construction process to
ensure the maximum safety and efficiency of the project workers. This is due to the
limited accuracy of the mobile device sensors (GPS, accelerometer and magnetometer),
which are only sensitive and accurate enough to be effective for navigation, retail-based
or gaming augmented reality (AR) applications, and not accurate enough for use by a
construction engineer.

This study will produce a mobile, 3D, augmented reality Android application that can be
used to quantify to what extent the positional accuracy of data visualised in an
augmented reality application can be improved using object recognition. Then, the
application will be tested in an environment designed to replicate an area with utility
asset infrastructure access in order to evaluate the feasibility of such an application being
used in a real-life construction scenario.

1.2. RESEARCH QUESTION, AIMS AND OBJECTIVES

The research question which will be reviewed in this study is:

In order to effectively answer the proposed research question, the study has been split
into two research aims; each with a set of objectives to be evaluated. Each aim and
objective have been assigned a number and a letter respectively to facilitate their
reference in the main text.

1) Develop an augmented reality (AR) application on the Android operating system
which can appropriately display utility data, superimposed over the device camera
image
a) Compare and review available mobile application development software options
b) Develop a mobile application that can display utility data based on geographic
location
2) Test the positional accuracy of the mobile application and quantify to what extent
the accuracy can be enhanced using an object recognition algorithm
a) Write an object recognition algorithm based on the images that are recorded by
the device camera
b) Test, to what extent, the positional accuracy of the AR system has been improved
by using an object recognition algorithm.

1.3. REPORT STRUCTURE

The following report continues with the second chapter, which critically reviews and
compares current, relevant literature in this particular field of research. Categorised
into smaller subsections, the chapter will examine current mobile AR applications;
research into 3D GIS platforms; and research into AR application accuracy quantification
and improvement. In chapter 3, the study methodology is detailed, describing how the
study will compare and contrast application development possibilities, develop the
application and test for functionality. Chapter 4 will guide the reader through the system
development process, reviewing the challenges that the design process faced and how
they were overcome. In chapter 5, the results of the application testing will be displayed,
before chapter 6 discusses the results of the study, states and explains their limitations
and reviews how they fit into current academic research. Finally, the conclusions of the
study will be summarised in chapter 7 with the required and potential future work that
could build on this research.

2. Literature Review

2.1. INTRODUCTION

In the following section, the literature regarding current research in the relevant fields
will be reviewed and critiqued. Categorised into three sections, research into mobile AR is
reviewed in section 2.2; 3D GIS in section 2.3; and research into the accuracy of AR
applications in section 2.4, to identify the method of AR positional accuracy
enhancement that will be developed for this study.

2.2. AUGMENTED REALITY

2.2.1. What is augmented reality?

Augmented reality (AR) has been defined as the ‘real time addition of virtual information
that is coherently positioned with respect to a real environment’ (Lima, et al., 2017).
Research into AR was first recorded upon the development of the HMD (Head Mounted
Display) that was invented in 1965 by computer scientist Ivan Sutherland at the
University of Utah for the purposes of an immersive video game (Sutherland, 1965).
Three decades later two researchers from the Boeing Company (Tom Caudell and David
Mizell) built on this work in an attempt to employ AR technology to assist in the
complex manufacturing processes of commercial airliners (Caudell & Mizell, 1992). As
technology continued to become smaller and more powerful, researchers developed the
first mobile AR system (MARS), which was designed to display live navigational
information in 3D to the user (Feiner, et al., 1997). By the turn of the millennium, AR
was growing rapidly and had transformed from an application of computer science into an
increasingly important field of research in its own right (Zhang, et al., 2016).

Augmented reality has grown and developed as a field of research and the applications
and potential applications of AR have grown at a similar rate. Pantano et al. (2017)
studied to what extent AR in retail contributes to the decision-making process of
consumers and concluded that AR technology is a “powerful tool to be adopted for
supporting the decision making process”. A number of studies also conclude that AR is
an effective and helpful tool in retail and that consumer responses to the technology are
very positive (Rese et al. 2016, Dacko 2016).

2.2.2. Applications of Augmented Reality

Moreover, applications of AR also include, but are not limited to, imaginative
distribution of information at cultural heritage sites (tom Dieck & Jung, 2017);
visualisation of 3D GIS data to help maintain riverbanks (Pierdicca, et al., 2016); journey
navigation (Meenakshi, et al., 2015); dynamic advertising in Malaysia (Wafa & Hashim,
2016); and immersive video gaming (Georgiou & Kyza, 2017).

Chen et al. (2016) review an AR platform written in the open-source software suite
AndAR for the mobile Android operating system. The paper merges an image processing
algorithm for initial image processing with the Lucas-Kanade (LK) optical flow algorithm,
which tracks the object's movement and reduces the computational load on the device
processor. The optical flow method functions when the following three criteria are met:

1. Brightness constancy
2. Temporal persistence or small movements
3. Spatial coherence.
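Taken together, these criteria can be summarised (in general notation, not necessarily the exact formulation used by Chen et al.) by the optical flow constraint that LK solves: brightness constancy assumes a point keeps its intensity as it moves between frames, and linearising this for small movements gives

$$I(x+u,\, y+v,\, t+1) \approx I(x,y,t) \;\;\Rightarrow\;\; I_x u + I_y v + I_t = 0,$$

where $I_x$, $I_y$ and $I_t$ are the image intensity derivatives and $(u, v)$ is the unknown pixel displacement; spatial coherence allows a single $(u, v)$ to be estimated over a small window of neighbouring pixels.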

This method of object tracking has been the subject of research before, with Reitmayr
and Drummond (2006, 2007) developing one of the first 'robust' image-based outdoor
tracking algorithms for use with AR.

Applications of AR that are directly related to this study build on the work of Gerhard
Schall et al. (2010), who developed the project Vidente to visualise underground utility
data in 3D. The AR application visualises 3D utility data in an outdoor environment and
allows the user to interact with discrete objects within the environment, moving them
around and modifying the additional information associated with them.

Navigational applications of AR have been reviewed in the literature. Recognition of
paper maps and superimposing the user’s location with supplementary navigational
information has been successfully developed and compared with regard to mobile
operating system (Morrison, et al., 2011). Pedestrian based AR systems have been
developed that are able to offer the user greater contextual information, enhanced
awareness of their surroundings, and were found to cause fewer missed turns in
comparison to paper maps (Chung, et al., 2016). AR has been reviewed as an option for
displaying shipping data as live information to sailors. The concept was found to be
promising, with stable operation and positive feedback from testers, but required further
development to stop data overlaying nearby information and preventing effective
interpretation (Jaeyong, et al., 2016).

Further research conducted by Hakkarainen et al. (2009) explored the possibility of
incorporating a time-based element into the data visualisation. Future applications of such
research could allow the visualisation of live data feeds to allow engineers and utility
company employees to see maintenance reports and problems in real time. However, in
this project, the user is able to view 4D building information modelling (BIM) with
changes over time. Figure 1 is a diagram to summarise the system overview described by
Hakkarainen et al. (2009).

Figure 1 - A system overview for a 4D BIM augmented reality application. Taken from Hakkarainen et al. (2009).

2.2.3. Marker-based AR

Marker-based AR uses an image recognition algorithm to “find optical square markers
and estimate their relative pose to the camera” (Ćuković, et al., 2015). Therefore, once the
geometry of a marker relative to the camera is calculated, additional information, data
or 3D shapes can be superimposed and calibrated with respect to the marker.

Once the marker has been identified, a tracking algorithm is employed to ensure that
the superimposed additional information moves, rotates and scales in its geometry with
respect to the marker, such as the example displayed in Figure 2. The main advantage of
marker-based AR is the accuracy of the information overlaid on markers in the real
world; however, it has been concluded that even with fiducial markers, AR technologies
require further development in order to comply with 'industrial requirements of
robustness and reliability' (Palmarini, et al., 2018). For visualising 3D models or other
micro-scale AR applications, such as the example in Figure 2, marker-based calibration is
an excellent solution. However, the requirement to produce a fiducial marker to
calibrate the camera geometry causes scalability and mass implementation issues; for
these, a location-based AR system (reviewed in section 2.2.4) is more effective, as its
functionality is not confined to a controlled environment.

Figure 2 - Diagram visualising the process of a Marker-based AR system. Taken from (Ćuković, et al., 2015).

2.2.4. Location-based AR

In contrast to the marker-based AR applications (section 2.2.3), which use a fiducial
marker such as a QR code, barcode or pre-defined symbol to calibrate the 3D world with
the device; location-based AR applications use the device sensors to calibrate the
superimposed information. Such sensors include the GPS receiver to calculate the device
location on the earth’s surface; magnetometer to calculate device orientation from true
North; gyroscope to calculate the device local rotation; and accelerometer to detect
device tilt.
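As an illustration of how these sensor readings are exposed to an application, the following Unity C# sketch reads each of the four sensors named above; the class name is illustrative, no filtering or calibration is applied, and the relevant permissions are assumed to have already been granted.

```csharp
using UnityEngine;

// Illustrative sketch of reading the four device sensors described above in Unity C#.
public class DeviceSensors : MonoBehaviour
{
    void Start()
    {
        Input.location.Start();        // GPS receiver (assumes location permission is granted)
        Input.compass.enabled = true;  // magnetometer (heading from true North)
        Input.gyro.enabled = true;     // gyroscope (device local rotation)
    }

    void Update()
    {
        if (Input.location.status != LocationServiceStatus.Running) return;

        LocationInfo fix = Input.location.lastData;   // latitude / longitude / altitude
        float heading    = Input.compass.trueHeading; // degrees clockwise from true North
        Quaternion spin  = Input.gyro.attitude;       // device orientation from the gyroscope
        Vector3 tilt     = Input.acceleration;        // accelerometer reading (device tilt)

        Debug.Log($"Lat {fix.latitude:F6} Lon {fix.longitude:F6} " +
                  $"Heading {heading:F1} Attitude {spin.eulerAngles} Tilt {tilt}");
    }
}
```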

Both marker-based and location-based AR applications allow the user to interact with
the world around them in a more complex and enlightened manner. However, location-
based AR has the advantage of scalability as a marker-based application requires the
identification of markers to calibrate the user to the world, a location-based AR system
does not, and a system that functions in a test location can be scaled up in size with
ease. A method of scaling such an application is reviewed by Meenakshi et al. (2015)
using cloud computing to allow the continual updating of data that is visualised for the
user. The authors conclude that merging cloud computing and location-based AR will
enhance user experience in the fields of tourism and navigation.

The main advantage that marker-based AR boasts over location-based AR is the accuracy
of the data visualised, as location-based systems are limited by the precision of the mobile
device sensors. One method to improve this was developed by Rudiger Pryss and
colleagues, who implemented a tracking algorithm in their AREA (Augmented Reality
Engine Application) project to enhance the accuracy of sensor-based calibration (Pryss,
et al., 2017). The results demonstrated that the tracking algorithm was more effective
than competing location-based mobile AR applications and can be implemented on a
variety of mobile operating systems.

2.2.5. Image / Natural Feature Tracking for AR

Image or natural feature tracking is a field that is of crucial importance for marker-based
AR applications as it, under most circumstances, allows the accuracy benefits of a
marker-based AR system to be used without the requirement of a fiducial marker itself.
The system works by running an image recognition algorithm on a camera image and
identifying what can be referred to as ‘keypoints’ in the image (where there are drastic
variances in neighbouring pixel colour values) and tracking these keypoints in a similar
fashion to a fiducial marker in a marker-based AR system (Weng, et al., 2013).
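To make the idea of a 'keypoint' concrete, the following C# sketch (illustrative names and threshold, not the method of any particular tracking library) flags pixels whose colour differs sharply from their immediate neighbours in a camera frame.

```csharp
using UnityEngine;

// Illustrative sketch: mark pixels with drastic colour variance relative to
// their right and upper neighbours as candidate 'keypoints'.
public static class KeypointSketch
{
    public static bool[,] FindKeypoints(Color32[] pixels, int width, int height, int threshold = 60)
    {
        var keypoints = new bool[width, height];
        for (int y = 0; y < height - 1; y++)
        {
            for (int x = 0; x < width - 1; x++)
            {
                Color32 c  = pixels[y * width + x];
                Color32 cx = pixels[y * width + x + 1];     // right neighbour
                Color32 cy = pixels[(y + 1) * width + x];   // upper neighbour

                // Sum of absolute colour differences in the two directions.
                int dx = Mathf.Abs(c.r - cx.r) + Mathf.Abs(c.g - cx.g) + Mathf.Abs(c.b - cx.b);
                int dy = Mathf.Abs(c.r - cy.r) + Mathf.Abs(c.g - cy.g) + Mathf.Abs(c.b - cy.b);

                keypoints[x, y] = dx + dy > threshold;
            }
        }
        return keypoints;
    }
}
```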

Feature tracking was evaluated as a plugin for the Unity video-games development
software and found to be reasonably successful for environments in which marker-based
tracking or SLAM can be used; however, the accuracy of the system was found not to be
suitable for large-scale environments or commercial implementation (Forsman, et al.,
2016).

2.2.6. Outdoor SLAM (Simultaneous Localization And Mapping)

As an extension of image / natural feature tracking for AR, reviewed in section 2.2.5,
simultaneous localization and mapping (SLAM) is a field of research that allows computer
algorithms to analyse camera imagery to build a 3D spatial model of the device
surroundings. Although SLAM systems are currently effective in a controlled
environment, they do not yet have the required precision and still require an
“appropriately large tracking volume for outdoor use” (Edwards, et al., 2016).

Figure 3 - Fiducial marker proposed by (Edwards, et al., 2016).

However, in order to address this gap, the same
conference proceedings proposed a system which used circular fiducial markers
(visualised in Figure 3) and was found to increase pose estimation accuracy by 200% in
comparison to traditional fiducial markers. However, the research also accepts that the
proposed system does not allow markers to be identified individually and it is suggested
to use the proposed fiducial marker in conjunction with traditional fiducial markers
within the same system for best results (Edwards, et al., 2016).

Moreover, it was concluded that effective outdoor SLAM is feasible once a “concrete
business case” is developed and remains a “matter of time”. Despite the cloud and device
boasting adequate computational power to design and maintain such a system, there are
various other issues that a large-scale outdoor SLAM system will need to overcome.
These include scalability, surface morphology, illumination and meteorological
conditions (Fleck, et al., 2016).

Scalability causes problems as systems currently require a controlled environment,
although partitioning the visible world into predefined blocks can aid this process.
Changes in the Earth's surface morphology can also cause problems for the image
recognition algorithms, as the correct area might not be correctly identified due to
post-survey changes in the surface. Variances in the illumination of the environment can
cause the image to be distorted by changes in the camera aperture to compensate for
particularly bright light sources in the image such as a reflection of the sun, for instance.
One method to minimise these effects is to incorporate LiDAR (Light Detection And
Ranging) facilities into the system, which produces results that are 'less warped' than
standard vision-based SLAM results (Gee, et al., 2016). Meteorological conditions must
also be taken into account as snow cover or precipitation can distort the images
captured, or cause unwanted reflections on the surfaces being measured.

2.2.7. Building Information Modelling (BIM) AR

AR can be used in conjunction with building information modelling (BIM) in order to
enhance the visualisation of structural information. However, at the current stage of
development, the number of surfaces that must be recognised and analysed in order for
BIM information to be accurately and effectively displayed in real time to a user must be
reduced. Additionally, object recognition within camera imagery has not yet reached the
level of maturity required for mass consumption. However, it has been
argued that AR for BIM could “revolutionize” construction and even remove the
requirement for tape measures and paper plans (Ren & Ruan, 2016).

Equally, integrating BIM-based AR into education has been found to aid the learning of
students, particularly in the Civil Engineering discipline. AR-SKOPE is
a multi-platform mobile application which is designed and coded in the Unity3D gaming
development software. The application allows students to visit specific buildings and
view supplementary building information to aid in their understanding of complex
building design and variable construction using a smartphone or tablet. The study found
that the students were able to walk around within the building and interact with the
BIM information, giving the students “x-ray vision” into the inner workings of the
structures. However, the study concedes that the application is an early prototype and
further research is required to evaluate whether student experience or learning
strategies are improved using the system (Vassigh, et al., 2016).

2.2.8. Notable AR application and research

Notable applications of AR research have already been discussed in this chapter such as
the application of AR for education purposes to augment BIM information over the
respective building for engineering students in section 2.2.7; the benefits of cloud
computing for AR applications in tourism and navigation in section 2.2.4; and 3D model
generation for medical diagnosis in section 2.2.3. However, it has been found that AR
could also be of value for cultural heritage sites, if the organisation controls adequate
resources including “people, technology, and finances as part of a business value
generation process” (tom Dieck & Jung, 2017).

In Malaysia, AR is used for dynamic advertisements superimposed over printed adverts.
However, the technology was found to be disproportionately represented by certain
industries over others, rarely used, and adopted by only very few brands (Wafa &
Hashim, 2016).

AR was, however, found to provide an excellent platform to aid consumers in retail, with
a very high level of satisfaction (an average rating of 4.02 / 5.00 on Android for MAR
shopping apps of all kinds) and 65% of users believing that such apps will be mainstream
within 5 years (Dacko, 2016; Pantano, et al., 2017; Rese, et al., 2016).

In a different form of consumerism, an AR system has been developed that allows skiers
to share information collected along their ski routes, held in a database, with future
skiers that choose the same path. The system was proposed but not effectively tested
and, although it could potentially prove hugely beneficial to its users in producing a
safer environment for skiers, it was concluded as not yet ready for implementation
(Fedosov, et al., 2016).

The impacts of AR application usage with regard to battery and CPU usage have been
reviewed using a variety of operating systems, and the results indicated that resource
usage of the mobile device can be decreased by up to 35% through use of efficient
algorithms, minimising use of web services and graphics rendering.

2.3. 3D GIS

2.3.1. What is 3D GIS?

Geographic Information Science is a phrase first proposed by Michael Goodchild in 1990
to describe the field of research that involves the study of spatial and geographical data.
Since then, the field has expanded vastly due to the rapid improvements in computer
hardware, computer software, more precise and effective methods of data capture and,
in turn, potential applications of the science. Figure 4 is a diagram that was produced by
Michael Goodchild in 2010 to visualise the ‘conceptual framework’ of GIS with reference
to the three key components of its research: the human, the computer and society in
general (Goodchild, 2010).

Figure 4 - A conceptual framework for GIScience. Taken from (Goodchild, 2010).

Applications of GIS in current literature are vast, with uses of the science being
employed to simulate floodplain river systems (Monteiro, et al., 2017); analyse and
minimise the spread of contagious diseases (Dogru, et al., 2017); give the public the
power to contribute to conservation decision making in citizen science projects
(Newman, et al., 2017) and many more.

This research will focus on GIS data and analysis with respect to three dimensions and
its applicability to AR data visualisation. Studies have investigated to what extent 3D
data can be generated and effectively visualised to provide additional information about
an outdoor location, such as navigation-based AR for mountainous environments (Pryss,
et al., 2017); contextually defined AR to improve the efficiency of industrial maintenance
(Erkoyuncu, et al., 2017); and a regionally guided AR system to give tourists further
information at the Byblos Roman Theatre in Lebanon (Younes, et al., 2017), reporting
decreasing levels of success.

Storage of 3D datasets will form a major component of this research. Zhang et al. (2016)
use a PostGIS spatial database in order to hold both point and line geometry data on a
remote server. The GeoJSON notation is utilised for the spatial data transfer, noting that
the notation is a 'lightweight data format, faced with the http agreement is adopted in the
data communication agreement’.

2.3.2. 3D Geographic Data Visualisation

Visualisation of 3D data is a complex conundrum that, if performed poorly, can distort
how the data is viewed and even change the perceived results of certain scenarios. The
benefits of effective 3D data visualisation are well documented: well-presented 3D GIS
data can give the viewer the ‘sense of mass and space more closely attuned with human
perception and this allows us to more intuitively interact with our data’ (Richards-
Rissetto, 2017). 3D geographic data can be used to allow researchers to explore
archaeological sites in greater detail (Galeazzi, et al., 2016); offer citizens a greater
understanding of coastal hazards in flood prone tourist towns (Yang, 2016); offer
researchers an ability to study how urban design qualities affect pedestrian counts (Yin,
2017); and many more.

However, the negative impacts of ineffective data visualisation are documented also;
users were found to respond poorly to, among other things, widget overloading,
necessary functionality and inconsistent data highlighting when viewing data on mobile
tablets (Games & Joshi, 2015). Algorithms have been written to create smooth zooming
experiences for the user while maintaining accurate data visualisation and minimising
distortion and occlusion of the data (van Oosterom, 2014). When 3D data is displayed
on a 2D surface such as a screen, the data is distorted which reduces ‘visual
dimensionality’ and reduces the user’s ability to ‘perceive and interpret 3D spaces’. These
impacts can be minimised, however, by offering the user an effective method of
interacting with, scaling and rotating the data (Reda, et al., 2013).

2.3.3. 3D GIS data in AR

ARGIS has been suggested as the result of a mixture between AR and 3D GIS. Huang et
al. (2016) suggest a method of accurately overlaying 3D spatial data over the
corresponding real-life object by referencing a building wire-frame against the processed
image from the device camera. The paper concludes that users were able to “finish the
precise registration within 2 s by using RTKGPS [real time kinematic global positioning
system] and 30 s by using lowly accurate GPS receivers.”

Zhang et al. (2016) review whether ARGIS calibrated via image processing is more or less
accurate than using sensor calibration. The results indicate that the AR calibrated using
the image processing (computer vision) has a ‘better performance in virtual-real fusion
effect and interactive experience’ than the sensor calibrated equivalent.

However, as data becomes more accurate, more accessible and more abundant with
improvements in computer hardware and software, the need to effectively visualise large
quantities of data becomes increasingly relevant. Augmented reality (AR), virtual reality
(VR), and hybrid reality (HR) / mixed reality are described as having ‘great potential in
helping us tackle some of the complex visualisation challenges involving large amounts of
heterogeneous information’ (Reda, et al., 2013). Thus, continued research into AR data
visualisation is potentially crucial to effective data interpretation in the future.

2.4. AR ACCURACY

Regarding this literature review, the topic of improving the accuracy of current AR
systems is the most relevant, as it is within this area that this study resides.
As discussed in section 2.2.4, a location-based AR system has the benefit of scalability
and functionality in a non-controlled environment. The two main competing systems
are: a marker-based (discussed in section 2.2.3); and a natural-feature tracking AR
application (reviewed in section 2.2.5). Benefits of these systems include greater data
visualisation accuracy, as the system is able to identify a marker to calibrate the data
against, and identify ‘keypoints’ in the camera image against which to calibrate the data,
respectively. Further systems have been researched such as indoor position calculation
based on proximity to Bluetooth emitters (Oosterlinck, et al., 2017); and Wi-Fi routers
(Hayashi, et al., 2016); please see section 6.3 for further details. This section summarises
current literature which studies the possibility of filling this gap by increasing the
accuracy of a location-based AR system while maintaining its scalability and
functionality in a non-controlled environment.

The Google Maps Street View infrastructure has been reviewed for its ability to
enhance the positional accuracy of an AR system based on image recognition. The
results showed that the user could be located purely using image recognition
algorithms, and greater accuracy was recorded when more groups of images were
analysed. Further research, however, is required to make such a system a viable option
for enhancing GPS locations, as the results were less accurate than current mobile device
GPS error (Zamir & Shah, 2010). In contrast, as discussed in section 3.1, reliable mobile
GPS coordinates can be distorted by the user's environment, such as underground or in
densely urban environments (Adjrad, et al., 2015). Therefore, using building recognition
to enhance the locational accuracy of GPS data in densely urban environments is a viable
line of research.

Another area of research in this particular field is building recognition using single
images. A computer algorithm that identifies the 3D shape of buildings in the camera
image and matches the user’s location using such data has been found to be very
successful, with more than 690 of 780 images located to within 18 m of their correct
location (Sattler, et al., 2011).

This work was built upon by a study which claims to be able to achieve user positional
accuracy of less than one metre, based on vision interpretation of a mobile device. The
computer algorithms are able to calculate the approximate 3D geometry of buildings in
the camera view and calculate user position by compressing the models for recognition
using both image and video (Chen, et al., 2015). Thus, the researcher has selected image
object recognition as the method of enhancing the GPS accuracy of the mobile AR
application for utility data.

3. Methodology

3.1. DEFINING THE PROBLEM

In section 2.2.4, it was found that the main benefit of location-based over marker-based
AR applications is the scalability of the system. However, this is at the cost of the
accuracy of the system, due to the limited reliability of the mobile device sensors. As the
device sensors improve, the systems will become more accurate, but there will always
be a need for further accuracy enhancement. This is because certain areas do not offer
mobile devices the capability to receive accurate location and orientation information
such as in underground areas or in dense urban environments, respectively (Adjrad, et
al., 2015). Therefore, the accuracy, and thus the reliability, of a location-based AR
system must be enhanced using an additional method. This study
aims to assess to what extent this can be achieved and how (see section 1.2).

3.2. SQL SPATIAL DATABASE & ANDROID STUDIO DEVELOPED SYSTEM

The first system designed for this study was an SQL spatial database held on a private
server, which would be accessed by an Android application developed in Android Studio
and superimposed over a camera image to appear in a position in the real world, based
on its virtual location, as an AR application. Once the AR system had been developed,
the object recognition algorithm could be written. A benefit of the system is that the
information is held separately to the application which means that it can be updated
remotely and accessed by multiple devices simultaneously.

The opening task was to create a spatial database on the University College London
(UCL) private server. For this, SQL script was executed in the pgAdmin III software,
version 1.16.1, to create a database table with two columns. The first was a primary key
column called 'id', which would auto-increment integer values using the 'serial' column
type. The second column was named 'geom' and held the spatial information for each
entry to the database in the 3D WGS84 coordinate reference system (EPSG:4327). A single
data entry was inserted to represent a 3D shape of a hexagonal prism located on the
Earth's surface, using latitude and longitude values recorded using the Google Maps API
(displayed in Figure 6). The code that was used to create the spatial database, generate
the columns, specify the primary key, set the coordinate reference system and insert the
data can be viewed in Figure 5.
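The script captured in Figure 5 is not reproduced in the text; the sketch below shows the general form of such a PostGIS script. The 'id' and 'geom' columns follow the description above, while the table name, the inserted geometry and the SRID (4326 is used here in place of the EPSG:4327 reference quoted above) are illustrative assumptions rather than the values used in the study.

```sql
-- Illustrative sketch only: table name, SRID and geometry are assumptions.
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE utility_assets (
    id   serial PRIMARY KEY,   -- auto-incrementing integer identifier
    geom geometry              -- 3D WGS84 geometry (longitude, latitude, height)
);

-- Insert a single simple 3D shape as an example entry.
INSERT INTO utility_assets (geom)
VALUES (
    ST_GeomFromText(
        'POLYGON Z ((-0.1340 51.5246 0, -0.1338 51.5246 0,
                     -0.1338 51.5248 10, -0.1340 51.5248 10,
                     -0.1340 51.5246 0))',
        4326
    )
);
```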

Once the database was created, the data was viewed in QGIS desktop application version
2.18.1 in order to validate the data. The results can be viewed in Figure 6.

Figure 5 - Screenshot of code used to generate and populate the SQL spatial database.

Figure 6 - Screenshot of the data held on the spatial database overlaid on an OpenStreetMap
web mapping service in QGIS desktop application version 2.18.1.
However, as discussed in section 3.1, part of the motivation for researching AR accuracy
is that in certain environments, such as underground, data is difficult for the device to
receive. Therefore, accessing information held on a remote spatial database will also be
difficult, or not possible, in construction locations. This is a factor which was considered during
the system comparison found in section 3.4.

3.3. UNITY DEVELOPED SYSTEM

The second system which was developed in conjunction with the spatial database system
detailed in section 3.2 used the Unity games development software, version 5.6.1f1
(Unity Technologies, 2017). The software was considered due to its excellent access to
underlying application code for the developer; ability to export in Apple iOS, Google
Android and WebGL compatible formats; and extremely powerful graphics rendering
and interaction qualities. Moreover, it was the system used to design applications in a
number of studies reviewed in section 2 (Ćuković, et al., 2015), (Forsman, et al., 2016),
(Vassigh, et al., 2016).

The first task was to assess to what extent the software could handle the individual tasks
which would, together, make up the augmented reality application. In essence, it was to
confirm that all of the data that is measured or captured in the 'Data input' section of
Figure 7 could be obtained within the Unity software package.

Figure 7 - Diagram to show flow from data detection to final product: data inputs (gyroscope, GPS, magnetometer, raw utility data and the device camera) feed raw data interpretation (camera geometry calculation, utility data interpretation and the camera image), which is merged into a 3D virtual world superimposed over the camera image to form the final application.


For this, a Unity project was created which would test to what extent these data inputs
can be measured. Thus, the project uses the default ‘skybox’ background which has a
horizon, brown colour below and blue colour above to represent a simple ‘ground’ and
‘sky’ environment. Superimposed over the top of the skybox background is a large ‘raw
image’ which takes the form of a large white rectangle in the camera view and a text
object within.

The ‘Main Camera’ contains an additional four C# scripts: Gyro Control, Phone Camera,
GPS and GPS update text. ‘Gyro Control’ is a script which, when executed, measures the
data collected from the mobile device gyroscope in all three measured axes (x, y and z)
and performs the same rotations to the camera orientation in the Unity 3D game
environment. Thus, when the user tilts the mobile device in a downwards motion in real
life to look at the floor, the same rotation will take place in the 3D game environment
by the virtual camera towards the virtual floor.
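A minimal sketch of a gyroscope-driven camera of this kind is shown below; the class name and the exact axis correction are assumptions, and the study's own Gyro Control script (Appendix 9.2) may differ.

```csharp
using UnityEngine;

// Minimal sketch of a gyroscope-driven camera controller (illustrative only).
public class GyroCameraSketch : MonoBehaviour
{
    void Start()
    {
        if (SystemInfo.supportsGyroscope)
            Input.gyro.enabled = true;   // start sampling the device gyroscope
    }

    void Update()
    {
        if (!Input.gyro.enabled) return;

        // The gyroscope reports a right-handed attitude; convert it into
        // Unity's left-handed camera frame so that tilting the phone down
        // tilts the virtual camera towards the virtual floor.
        Quaternion a = Input.gyro.attitude;
        transform.localRotation =
            Quaternion.Euler(90f, 0f, 0f) *
            new Quaternion(a.x, a.y, -a.z, -a.w);
    }
}
```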

Following the ‘Gyro Control’ script, is the ‘Phone Camera’ script which causes the mobile
device to request access to the device camera and display the camera image on the ‘raw
image’ in the device view. Then, two scripts named ‘GPS’ and ‘GPS update text’ are run
which request access to the device GPS location and update the text shown in the middle
of the device screen with the GPS values, respectively (Figure 8).
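A combined sketch of the behaviour of the 'Phone Camera', 'GPS' and 'GPS update text' scripts is given below; the field names are assumptions, the camera and location permissions are assumed to have been granted, and the study's own scripts may differ.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: show the device camera feed on a RawImage and write the
// current GPS fix into a Text object.
public class CameraAndGpsSketch : MonoBehaviour
{
    public RawImage background;   // the full-screen 'raw image' in the canvas
    public Text gpsText;          // the text object updated with the GPS values

    private WebCamTexture camTexture;

    IEnumerator Start()
    {
        // Display the device camera image on the raw image.
        camTexture = new WebCamTexture();
        background.texture = camTexture;
        camTexture.Play();

        // Start the location service and wait for it to initialise.
        if (!Input.location.isEnabledByUser) yield break;
        Input.location.Start();
        int wait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && wait-- > 0)
            yield return new WaitForSeconds(1f);
    }

    void Update()
    {
        if (Input.location.status != LocationServiceStatus.Running) return;
        LocationInfo fix = Input.location.lastData;
        gpsText.text = $"Lat {fix.latitude:F6}  Lon {fix.longitude:F6}  Alt {fix.altitude:F1} m";
    }
}
```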

Figure 8 - A screenshot of the Unity project designed to test whether the software is capable
of an AR application.

Figure 9 - Four screenshots displaying the application requesting permissions, displaying GPS information and changing virtual camera orientation based on the device gyroscope.
Displayed in Figure 9 are four screenshots from the application developed in the Unity
game development software package to test its suitability for AR application
development. The first two screenshots demonstrate the application's ability to request
the permissions that are required of the user to access the device sensor readings. The
third and fourth screenshots were captured to visualise the effects of the gyroscope C#
script. The camera image in the centre of the screen displays live images being recorded
by the device camera and therefore confirms that when the orientation of the device is
tilted towards the floor, the virtual camera is also tilted towards the floor, due to the
appearance of the ground and sky behind the camera image. Conversely, in the final
image, the device is tilted upwards, shown by the sky in the camera image and the sky in
the virtual image correspondingly.

Thus, based on the ability of Unity to produce an application that can successfully
request the relevant permissions of the user, collect and interact with the device GPS
information and manipulate the user camera based on the device gyroscope, the
software package can be considered for use in developing an AR application.

3.4. COMPARISON, SYSTEM SELECTION AND JUSTIFICATION

AR app development system: SQL spatial database and Android Studio

Benefits:
 Versatility of data storage
 Cheaper software licensing costs

Costs:
 Less powerful graphics rendering
 Development only in the Android mobile operating system (OS)

AR app development system: Unity software package

Benefits:
 Intuitive software development interface
 Complex user coding customisation
 Device platform versatility
 Powerful graphics rendering

Costs:
 Potentially large software licensing costs
 Data must be compatible with Unity

Table 1 - Summary of cost/benefit analysis of the two AR application development systems.
Two options for an AR application were reviewed: an SQL spatial database that allows
spatial information to be downloaded and visualised in an Android application
developed in Android Studio; or an AR application coded using the games development
software package Unity. The SQL spatial database has a number of advantages over the
Unity app, including cost and data versatility. The relative costs and benefits of holding
the data used by the app on a private server and downloading to the app over the
internet will not be included in the system selection. This is because both systems could
benefit from its use, and for neither system is it a requirement, as both AR applications
are capable of holding the spatial information in the application itself.

Figure 10 - Diagram visualising the system review and selection process: a test database was created for the SQL Server / Android Studio option, which was reviewed and discontinued, while a test application was created in Unity 3D, which was reviewed and selected for the AR application development.

The benefits of developing the system using an SQL spatial database and Android Studio
include the versatility of data which can be visualised, due to the variety of shapes which
can be created using a 'polyhedral surface' such as the example visualised in Figure 11. Thus,
the complexity and intricacy of the spatial data is limited only by the quality of the data
generation and the size of the data that the user wishes to download to the mobile
device.

Moreover, the software licensing that is required to insert data into a spatial database
and produce the augmented reality app is much cheaper than the Unity equivalent. The
pgAdmin III software used to insert the spatial data is free to implement under the
PostgreSQL license, and the Android Studio terms and conditions grant the developer a
'worldwide, royalty-free [license] … to develop applications for compatible
implementations of Android' (Google Inc., 2016).

Figure 11 - Polyhedral surface in the form of a hammerhead shark, taken from (Kettner, 1999).

Conversely, the visualisation of the data in Android Studio would be more difficult: all of
the algorithms regarding device orientation, and data orientation and size based on
distance from the user, must be written by the developer in order to visualise the data
accurately in an AR application developed in Android Studio. In the Unity gaming
development software package, by contrast, the location of the data relative to the user
camera must still be calculated, but the virtual world generation and all associated
geometry computations are handled automatically, as 3D video gaming requires this
capability.
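As an indication of the calculation that does remain for the developer in Unity, the sketch below converts a small latitude/longitude offset between the user and a utility feature into a local east/north offset in metres. It uses an equirectangular approximation with assumed names; the study's MercatorProjection and MapRenderScript code in the appendices may take a different approach.

```csharp
using UnityEngine;

// Illustrative sketch: convert a latitude/longitude offset into a local
// east/north offset in metres, suitable for positioning an object relative
// to the virtual camera (Unity +X treated as east, +Z as north).
public static class GeoToUnitySketch
{
    const float EarthRadius = 6378137f;   // WGS84 semi-major axis, metres

    public static Vector3 LocalOffset(float userLat, float userLon,
                                      float targetLat, float targetLon,
                                      float depthMetres)
    {
        float north = (targetLat - userLat) * Mathf.Deg2Rad * EarthRadius;
        float east  = (targetLon - userLon) * Mathf.Deg2Rad * EarthRadius *
                      Mathf.Cos(userLat * Mathf.Deg2Rad);
        return new Vector3(east, -depthMetres, north);   // buried utility sits below ground level
    }
}
```

A cylinder could then be placed at, for example, cameraPosition + GeoToUnitySketch.LocalOffset(userLat, userLon, pipeLat, pipeLon, 1.5f).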

Moreover, the Unity video games development software has other benefits, such as the
ability to export the same application for the Google Android, Apple iOS and WebGL
platforms; an intuitive user interface to visually aid the design process;
comprehensive coding customisation options; and powerful 3D graphics rendering. The
drawbacks of developing an AR application in Unity include the potentially expensive
licensing costs of Unity, should the system be successful in a business environment and
the requirement for the data to be interpreted by Unity for rendering.

The system which was selected was the Unity video game development system, as it
amalgamated powerful 3D graphics rendering possibilities with platform versatility and
an intuitive, customisable user interface (process of development visualised in Figure 10).
The potentially large licensing costs are not directly relevant in this particular study, as
the study is for academic purposes, which do not incur a licensing fee, and data variety is
not crucial to the aims of this particular study, detailed in section 1.2.

3.5. MARKER-BASED AR SYSTEM

3.5.1. Defining the system aim

The aim of the marker-based AR application was to review the first attempt at producing
an AR application using the Unity video game creation software package. By producing
this system, the researcher was assessing the ability of Unity to display utility data in
3D; the customisation options of the 3D development user interface; and gaining useful
insight into image tracking processes which could be used in later AR systems. Success
of the marker-based system would be measured by the system’s ability to accurately
detect pre-defined markers in the device camera image; calculate the marker orientation
and therefore the camera pose; accurately superimpose 3D utility data into the scene
and track the marker to maintain accurate utility data visualisation for the user.

Based on the literature, the system is expected to provide more accurate data visualisation
in a controlled environment, but to be more difficult to scale up to larger or non-
controlled environments (further details in section 2.2.3). To what extent this is true,
and to what extent this can be improved will also be reviewed during the development
of the marker-based AR system.

3.5.2. Data

For this system, the data which was used was fictional utility data generated in Unity.
The data was developed to simulate genuine utility asset data and is composed of
pipelines which run below the surface of the marker used for detection. In order to aid
visualisation of the data, a virtual box was generated around the pipelines to give the
impression to the user that the pipelines were being viewed through a hole dug in the
ground. Without the virtual box, the data appears to be floating on top of the ground
rather than representing pipes held under the surface; an example of this can be
viewed in Figure 12.

Figure 12 - Screenshot displaying the unity project for the marker-based AR application.

3.5.3. System Process

The process was designed to produce a working AR application calibrated using a
fiducial marker in the device camera view. Firstly, the application required an algorithm
to scan the camera image and detect a pre-defined marker. Vuforia is a company that
has developed an augmented reality extension to Unity that includes complex marker
recognition and tracking. These two algorithms were implemented and used to calibrate
3D fictional utility data in the device camera image. Once the device camera image has
been analysed for the marker, the marker has been detected and the fictional utility data
has been superimposed over the camera image, the marker is then tracked in the camera
view to maintain the 3D utility data in its virtual location in the real world (Figure 13).

Figure 13 - Screenshot of the AR application in action; a printed QR code is placed on the
floor and the geometry of the box and pipes are calibrated with respect to it.

3.5.4. System Code

The code required for this project was minimal, as the majority of the code executed
was written by either Unity or Vuforia. Therefore, for this application, no additional
code was required from the researcher.

3.5.5. System summary

In summary, the marker-based AR system served the purpose of giving the researcher
experience of producing an AR application and offered valuable insight into data
visualisation techniques. For instance, it was noted that a background colour is
beneficial for utility data that is under a surface such as a floor, ceiling or wall; without
it, the visualisation looks distorted. Moreover, the system aided understanding of the 3D
shape creation and positioning that would be crucial in the location-based AR
application developed in section 4.

4. Developing a visually assisted location-based AR application
4.1. LOCATION-BASED AR SYSTEM

4.1.1. Defining the Problem

Further to the marker-based AR system whose development was described in
section 3.5, the next stage was to develop a location-based AR system, to eliminate the
drawbacks of marker-based AR systems reviewed in section 2.2.3, such as limited
scalability and versatility, while maintaining the accuracy benefit. Therefore, a
location-based AR system was designed which allows the researcher to test to what
extent the accuracy of an AR system can be improved in any environment. Such an
application requires the user to be able to visualise utility data superimposed over the
device camera image based on the device location and orientation, while offering the
user the versatility to modify the application for testing.

4.1.2. Data

The data visualised in this particular app were two fictional cylinders representing two
pipes in a construction environment. The rationale for this selection was that piping
and cabling constitute the majority of utility assets, and therefore a cylinder was the
most relevant shape to visualise in such a circumstance. Two cylinders were generated
to allow the depth of each cylinder relative to the user to be represented, and to give
the user a greater understanding of their orientation as well as their location.

4.1.3. System process

The process of development required a number of stages to be completed. The first was
to ensure that all the required sensory information could be obtained and that its
values could be interacted with. For this, the code written for the application developed
to test Unity's suitability (Gyroscope, GPS, phone camera and GPS update text) was
reused to obtain the relevant user permissions and collect the sensory data.
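
As a minimal sketch of this first stage (a condensed restatement of the GPS.cs script in
appendix 9.1, not additional functionality), obtaining permission-dependent location data
in Unity looks roughly as follows:

    using System.Collections;
    using UnityEngine;

    // Condensed sketch of the sensor-access step; the full version is GPS.cs (appendix 9.1).
    public class LocationSketch : MonoBehaviour
    {
        public float UserLatitude;
        public float UserLongitude;

        private IEnumerator Start()
        {
            // The service can only start if the user has granted location permission.
            if (!Input.location.isEnabledByUser)
                yield break;

            Input.location.Start();

            // Wait up to ~20 seconds for the service to finish initialising.
            int maxWait = 20;
            while (Input.location.status == LocationServiceStatus.Initializing && maxWait-- > 0)
                yield return new WaitForSeconds(1);
        }

        private void Update()
        {
            // Read the most recent fix each frame.
            UserLatitude = Input.location.lastData.latitude;
            UserLongitude = Input.location.lastData.longitude;
        }
    }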

Figure 14 - Screenshot displaying a top-down view of how the location of the virtual
cylinders are positioned in the virtual Unity world in relation to the camera.
Following this, the application had to generate a virtual 3D world which corresponded
directly to the real world surrounding the user and populate it with the utility data based
on their locations. For this, a C# script was executed to transform the latitude and
longitude values of the user and the utility data into x and y values using the Web
Mercator projection (visualised in Figure 14).
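
For reference, the forward projection implemented in MercatorProjection.cs (appendix 9.4)
corresponds to the ellipsoidal Mercator equations, with $\lambda$ and $\varphi$ the longitude
and latitude in radians, $R = 6\,378\,137$ m the WGS84 semi-major axis and $e$ the first
eccentricity of the ellipsoid:

$$x = R\,\lambda, \qquad y = R \ln\left[\tan\left(\frac{\pi}{4} + \frac{\varphi}{2}\right)\left(\frac{1 - e\sin\varphi}{1 + e\sin\varphi}\right)^{e/2}\right]$$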

However, this produced an error: the values were so large that the mobile device was
unable to render the virtual world (in essence, the virtual world had to be rendered
from coordinates (0, 0, 0) all the way to the projected coordinates of the user, a virtual
3D world stretching from the Atlantic Ocean, off the coast of Western Africa, to the
UK). Therefore, the code was modified to place the user at the coordinates (0, 0, 0) in
the virtual world and to calculate the location of the utility data relative to the user by
subtracting the user coordinates from the utility data coordinates. This determines the
location of the cylinders in relation to the user and moves the virtual world around the
user, rather than the user within the world, to give the impression that the user is
moving within the virtual world.
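
A minimal sketch of this user-relative placement (field names are illustrative; the full
implementation is MapRenderScript.cs in appendix 9.3) is shown below:

    using UnityEngine;

    // Sketch of user-relative placement, assuming the projection helper in appendix 9.4.
    public class RelativePlacementSketch : MonoBehaviour
    {
        public GameObject pipe;          // virtual utility object
        public float userLat, userLon;   // latest device GPS fix
        public float pipeLat, pipeLon;   // known asset coordinates

        void Update()
        {
            // Project both positions into Mercator metres.
            Vector3 userXY = new Vector3((float)MercatorProjection.lonToX(userLon), 0f,
                                         (float)MercatorProjection.latToY(userLat));
            Vector3 pipeXY = new Vector3((float)MercatorProjection.lonToX(pipeLon), 1.5f,
                                         (float)MercatorProjection.latToY(pipeLat));

            // The user (camera) stays at the origin; the asset is placed at its offset from
            // the user, so the virtual world appears to move around the user.
            pipe.transform.position = pipeXY - userXY;
        }
    }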

4.1.4. System code

The system code required a total of seven C# scripts, which can be viewed in the
appendices: GPS.cs, GyroControl.cs, MapRenderScript.cs, MercatorProjection.cs,
PanelScript.cs, UpdateGPSText.cs and WebcamEdgeDetection.cs.
WebcamEdgeDetection.cs uses sections of code adapted from a script found at
https://gist.github.com/anonymous/05f21191b4c7eea68d52 and MercatorProjection.cs
uses sections of code adapted from the script published at
http://wiki.openstreetmap.org/wiki/Mercator#C_implementation.

4.1.5. System Summary

The application has so far been successful in creating an AR environment that uses the
mobile device sensors to calibrate the information that is augmented over the device
camera image. The GyroControl C# script is able to rotate the virtual camera based on
readings from either the internal magnetometer or the gyroscope, depending on user
selection. The MapRenderScript C# script is able to render a 3D environment of virtual
objects based on the latitude and longitude values of the user and the virtual objects,
using the MercatorProjection script. The GPS, UpdateGPSText and PanelScript scripts
collect GPS data, display that data to the user and display an intuitive UI, respectively.

4.2. IMAGE RECOGNITION ALGORITHM

Once the virtual world had been generated and superimposed over the camera image, a
settings menu was created giving the user a variety of options for manipulating the
application. The first option offered to the user in the settings menu is the ability to
initiate the image recognition algorithm. The reasons for making this optional are based
on the literature: research suggests that efficient AR platforms can save up to 35% of
their mobile device resource usage (Karaman, et al., 2016); more information can be
found in section 2.2.8. Thus, the user is given the option to initiate or terminate the
image recognition algorithm to save battery or device processing if desired.
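
A minimal sketch of how such a toggle can be wired to a UI button is shown below; it
follows the counter convention used in WebcamEdgeDetection.cs (appendix 9.7), where an
odd counter value means the algorithm is enabled, and the class and field names here are
illustrative:

    using UnityEngine;
    using UnityEngine.UI;

    // Sketch of the optional image-recognition toggle.
    public class RecognitionToggleSketch : MonoBehaviour
    {
        public Button toggleButton;
        private int counter;

        void Start()
        {
            // Each press flips the algorithm on or off.
            toggleButton.onClick.AddListener(Toggle);
        }

        public void Toggle()
        {
            counter++;
        }

        // An odd counter value means the user has switched the algorithm on.
        public bool RecognitionEnabled()
        {
            return counter % 2 == 1;
        }
    }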

The algorithm which analyses the image captured by the camera was too demanding of
the processor of the mobile device used for testing to be executed at every frame.
Therefore, a 'coroutine' was used so that the processor could complete sections of the
task during each frame and the results would be displayed upon completion. This
minimises the likelihood of the application freezing and maximises the chances of the
application running smoothly for the user.

1. IEnumerator DetectEdges() {
2.     while (true) {
3.         if (counter % 2 == 1) {
4.             imageLog.text = "";
5.             Debug.Log("Detect Edges has started");
6.             yield return StartCoroutine(CalculateEdges());
7.             img.SetPixels(camTex.GetPixels());
8.             Debug.Log("Detect Edges has finished");
9.             if (isReady == true) {
10.                 RICamera.texture = AVGimg;
11.                 yield return new WaitForSeconds(1f);
12.             }
13.             isReady = false;
14.         } else {
15.             yield return null;
16.         }
17.     }
18. }

Code example 1 - Starting the image recognition coroutine and accessing camera image.

Code example 1 displays the code which was written to start the coroutine and access the
mobile device camera. Line 1 initiates the coroutine, and line 7 takes the pixel values
from the camera image and assigns them to an image variable that has been pre-defined
with the dimensions of the device camera. The 'if' statement in which the code resides
confirms that the user has selected to initiate the algorithm, and the 'while' loop ensures
that the function is called repeatedly while the application is open.

The code which analyses each pixel with regard to its neighbouring pixels was adapted
from a section of C# code uploaded to GitHub.com (full address:
https://gist.github.com/anonymous/05f21191b4c7eea68d52) and can be viewed in Code
example 2. Another coroutine is initiated to allow the execution to take place over a
number of frames of the application runtime. Lines 2 to 16 split the RGB values of each
pixel into three separate variables and apply them to pre-defined image variables. Then,
from lines 17 to 34, each pixel is assigned a value based on how different it is from its
neighbouring pixels; if the value is greater than the pre-set threshold value 'th', a new
texture is created with the pixel coloured black, while pixels whose assigned value is
below the threshold are coloured white. The final output, therefore, is a white image in
which the edges of shapes in the camera view are outlined in black, as shown in Figure 15.

1. IEnumerator CalculateEdges() {
2.     for (int x = 0; x < img.width; x++) {
3.         for (int y = 0; y < img.height; y++) {
4.             temp = img.GetPixel(x, y);
5.             rImg.SetPixel(x, y, new Color(temp.r, 0, 0));
6.             gImg.SetPixel(x, y, new Color(0, temp.g, 0));
7.             bImg.SetPixel(x, y, new Color(0, 0, temp.b));
8.             t = temp.r + temp.g + temp.b;
9.             t /= 3f;
10.            kImg.SetPixel(x, y, new Color(t, t, t));
11.        }
12.    }
13.    rImg.Apply();
14.    gImg.Apply();
15.    bImg.Apply();
16.    kImg.Apply();
17.    for (int x = 0; x < img.width; x++) {
18.        for (int y = 0; y < img.height; y++) {
19.            rL[x, y] = gradientValue(x, y, 0, rImg);
20.            gL[x, y] = gradientValue(x, y, 1, gImg);
21.            bL[x, y] = gradientValue(x, y, 2, bImg);
22.            kL[x, y] = gradientValue(x, y, 2, kImg);
23.            ORL[x, y] = (rL[x, y] >= th || gL[x, y] >= th || bL[x, y] >= th) ? th : 0f;
24.            ANDL[x, y] = (rL[x, y] >= th && gL[x, y] >= th && bL[x, y] >= th) ? th : 0f;
25.            SUML[x, y] = rL[x, y] + gL[x, y] + bL[x, y];
26.            AVGL[x, y] = SUML[x, y] / 3f;
27.        }
28.        if (x % 10 == 0) {
29.            yield return null;
30.        }
31.    }
32.    TextureFromGradientRef(AVGL, th, ref AVGimg);
33.    isReady = true;
34. }

Code example 2 - Snippet of code used to compare each pixel of the camera image to its
neighbouring pixels to detect the edges of objects. Code modified from code uploaded to
https://gist.github.com/anonymous/05f21191b4c7eea68d52 (accessed August 2017).

Figure 15 - Screenshot of the image recognition code calculating the edges of the objects
in the device camera view.
Following the algorithm that detects the edges of objects in the view, the application
should determine whether or not the edges are the result of potential utility asset
infrastructure. If so, the information can be used to snap the virtual utility data over its
corresponding object in the real world, further to the research aim discussed in section
1.2. In this instance, the utility data being overlaid takes the form of a vertically
oriented cylinder, which is used to simulate a common form of utility infrastructure: a
pipe or cable running in a vertical direction. Therefore, the application must identify
vertical features among the calculated shape edges output by the code displayed in
Code example 1 and Code example 2.

In order to do this, an integer variable named 'pixelCounter' was declared and inserted
into the section of code which allocates either a black or a white colour to each pixel
(visualised in Code example 3). For each vertical column of pixels that makes up the
image, the counter increases every time a pixel is set to black (line 9). Therefore, when
the algorithm reaches the bottom of the image, the counter holds the number of pixels
in the column which are deemed to be an edge of a shape in the image. An 'if' statement
then tests whether the column contains more than a threshold number of edge pixels;
if so, the column is turned green and its x coordinate is added to a list called 'PipeList'
(lines 14 – 19). An example of the output of the code is displayed in Figure 16, whereby
the edges of vertical objects in the camera image cause the entire column to be
highlighted in green.

1. void TextureFromGradientRef(float[,] g, float thres, ref Texture2D output) {
2.     PipeList.Clear();
3.     PipeListFinal.Clear();
4.     for (int x = 0; x < output.width; x++) {
5.         pixelCounter = 0;
6.         for (int y = 0; y < output.height; y++) {
7.             if (g[x, y] >= thres) {
8.                 output.SetPixel(x, y, Color.black);
9.                 pixelCounter++;
10.            } else {
11.                output.SetPixel(x, y, Color.white);
12.            }
13.        }
14.        if (pixelCounter > output.height * DetectionSensitivity) {
15.            for (int y = 0; y < output.height; y++) {
16.                output.SetPixel(x, y, Color.green);
17.            }
18.            PipeList.Add(x);
19.        }
20.    }
21.    output.Apply();
22.    pipelineRecognition();
23. }

Code example 3 - Code used to identify edges to vertical objects in the device camera
view.
Once the x screen coordinates of the potential edges of utility infrastructure are known
and held in the pre-defined list named 'PipeList', the 'pipelineRecognition()' function is
called in line 22 of Code example 3.

Figure 16 - Screenshot visualising how the edge detection algorithm is used to identify the
edges of vertical objects and highlight the entire column green.

The function 'pipelineRecognition()' serves two purposes and can be viewed in Code
example 4. Firstly, the algorithm performs a cluster analysis of the x coordinates
identified by the code presented in Code example 3 to eliminate single or anomalous
values. Then, the superimposed cylinder is moved to the location of its nearest green
column. Thus, the cylinder, which is initially located at a rough position based on the
device GPS position, is refined by identifying the object that it represents in the real
world and snapping onto its location in the device camera image. In theory, the result is
an AR application which superimposes virtual objects roughly over their corresponding
objects in the real world and then uses the camera to increase the accuracy of the
augmented information, thereby meeting the aims of the research stipulated in section
1.2 and filling the gap in the current literature between visually calibrated and
location-calibrated AR applications discussed in section 2.4.

1. void pipelineRecognition() {
2.     var a = 1;
3.     var b = 1;
4.     var c = 1;
5.     var d = 1;
6.     foreach(int value in PipeList) {
7.         pipelineCounter++;
8.         double AverageA = (double)((a + b + c + d) / 4) / value;
9.         double AverageB = (double)((a + b + c + d) / 4) / value;
10.        double AverageC = (double)((a + b + c + d) / 4) / value;
11.        double AverageD = (double)((a + b + c + d) / 4) / value;
12.        if (pipelineCounter % 4 == 0) {
13.            a = value;
14.            if (AverageA > PipeListSensitivityLower && AverageA < PipeListSensitivityHigher) {
15.                PipeListFinal.Add(value);
16.            }
17.        }
18.        if (pipelineCounter % 4 == 1) {
19.            b = value;
20.            if (AverageB > PipeListSensitivityLower && AverageB < PipeListSensitivityHigher) {
21.                PipeListFinal.Add(value);
22.            }
23.        }
24.        if (pipelineCounter % 4 == 2) {
25.            c = value;
26.            if (AverageC > PipeListSensitivityLower && AverageC < PipeListSensitivityHigher) {
27.                PipeListFinal.Add(value);
28.            }
29.        }
30.        if (pipelineCounter % 4 == 3) {
31.            d = value;
32.            if (AverageD > PipeListSensitivityLower && AverageD < PipeListSensitivityHigher) {
33.                PipeListFinal.Add(value);
34.            }
35.        }
36.    }
37.    if (PipeListFinal.Count > 3) {
38.        PipelistDistanceList1.Clear();
39.        PipelistDistanceList2.Clear();
40.        Vector3 closestC1 = cylinder1.transform.position;
41.        Vector3 closestC2 = cylinder2.transform.position;
42.        foreach(int value in PipeListFinal) {
43.            Vector3 targetX1 = Camera.main.ViewportToWorldPoint(new Vector3((float)value / Screen.width, 0.5f, cylinder1.transform.position.z));
44.            Vector3 targetX2 = Camera.main.ViewportToWorldPoint(new Vector3((float)value / Screen.width, 0.5f, cylinder2.transform.position.z));
45.            float distanceColumn1 = Vector3.Distance(targetX1, cylinder1.transform.position);
46.            float distanceColumn2 = Vector3.Distance(targetX2, cylinder2.transform.position);
47.            PipelistDistanceList1.Add(distanceColumn1);
48.            PipelistDistanceList2.Add(distanceColumn2);
49.            if (distanceColumn1 <= PipelistDistanceList1.Min()) {
50.                closestC1 = targetX1;
51.                imageLog.text = PipelistDistanceList1.Min().ToString();
52.            }
53.            if (distanceColumn2 <= PipelistDistanceList2.Min()) {
54.                closestC2 = targetX2;
55.                imageLog.text = imageLog.text + " : " + PipelistDistanceList2.Min().ToString();
56.            }
57.        }
58.        cylinder1.transform.position = closestC1;
59.        cylinder2.transform.position = closestC2;
60.    }
61. }

Code example 4 - Code used to perform a cluster algorithm eliminating anomalous x
values from the list of detected utility infrastructure edges; and algorithm used to snap
the virtual cylinders onto the estimated utility infrastructure in the camera view.

5. Testing the system

5.1. DATA ACCURACY TESTING: METHODOLOGY

In order to test to what extent the AR application developed and discussed in section 4
meets the research aim described in section 1.2 and fills the current gap in academic
knowledge discussed in section 2.4, a testing methodology was designed. Firstly, the
cylinders in the application were allocated the latitude and longitude of two vertical
bamboo tubes resting against a white brick wall.

Then, the application was run a few metres away from, and pointed towards, the bamboo
tubes. The GPS location was used to place the virtual cylinders over the camera image as
a rough estimate of the location of the bamboo tubes. Once the rough location had been
identified, the image recognition algorithm was initiated and the researcher modified
the detection sensitivity and pixel sensitivity values of the application accordingly,
such that the edges of the bamboo tubes were detected correctly. The environment in
which the testing would take place was considered using information gathered in the
literature. Three factors were identified as affecting optical object recognition and
tracking (Chen, et al., 2016):

1. Brightness constancy
2. Temporal persistence or small movements
3. Spatial coherence.

Thus, in order to minimise the impact of brightness constancy, a day with overcast,
cloudy weather was chosen for testing, as the light would be dispersed more evenly,
there would be less shadowing of the bamboo tubes to interfere with the results, and the
brightness of the scene would remain more constant. The temporal persistence and
spatial coherence would be dictated by the steadiness of the user's hand. However, no
compensation was deemed necessary for these factors, as the application performs the
image recognition over a number of seconds, which eliminates the effect of small
movements in the device pose.

The study acknowledges that controlling for these factors during testing reduces how
representative the results are of the application's robustness in a real-life setting.
However, the purpose of the study is to evaluate to what extent the accuracy of an AR
application can be improved. Thus, the application was tested as a proof of concept, and
improvements to the system for real-life implementation are not included in the scope
of this study.

Following the object edge detection, the algorithm moves the cylinder to the location
that the application has identified as the bamboo tube and displays on screen the
distance that the virtual cylinder was moved. If the cylinder had moved in the correct
direction and was therefore closer to the bamboo tube, a tick was recorded and the
distance travelled noted down. If the cylinder appeared not to move at all, or moved in
the incorrect direction, a cross was recorded. This task was completed at 1, 2 and 3 metre
distances from the bamboo tubes and at -30°, 0° and +30° from perpendicular to the
background wall, visualised in Figure 17.

Figure 17 - Diagram indicating the scenario in which the tests were carried out with testing
locations.

5.2. DATA ACCURACY TESTING: RESULTS

Distance (m)    Cylinder 1 (Unity game units)    Cylinder 2 (Unity game units)
                -30°     0°      +30°            -30°     0°      +30°
1               2.81     3.01    3.21            3.82     2.89    2.92
2               5.11     6.08    5.85            8.89     6.09    4.18
3               9.46     6.18    8.42            7.64     9.14    8.62

Table 2 - Table displaying the results of the data accuracy test. The distances that the
virtual cylinders moved (in Unity game units) were recorded; successful movements were
recorded in black and unsuccessful movements in red.
The results of the data accuracy test are summarised in Table 2, and an example of how
the application functions is visualised in Figure 19. As discussed in more detail in section
5.1, the results were collected by recording the distance that the virtual cylinder had to
travel (in Unity units). A major factor in the results collection was the requirement for
effective settings manipulation. The user was required to spend a few minutes adjusting
the 'pixelDetection' and 'detectionSensitivity' values to ensure that the application
correctly identified the edges of the bamboo tubing. In this particular test, a low
'pixelDetection' value was required, as the background was generally a similar colour;
thus, only pronounced changes in colour were detected. Therefore, a low
'detectionSensitivity' value was also required, to ensure that the bamboo edges were
detected at all, given the low 'pixelDetection' value. For examples of how the two values
change the interpretation of an image, see Figure 18.

Figure 18 - Four images detailing the differences in 'pixelDetection' and
'detectionSensitivity' values. a) Low 'pixelDetection', b) high 'pixelDetection', c) low
'detectionSensitivity', d) high 'detectionSensitivity'.
There were, however, some negative results. Two of the six results recorded at two
metres and three of the six results recorded at three metres from the bamboo tubes did
not place the virtual cylinder correctly over its real-life counterpart. One of the main
factors for this was that the final x coordinate of the image is always recorded as an
edge. Therefore, if the virtual cylinder was closer to the right-hand side of the screen
than to the bamboo tube, its position would be changed to the edge of the screen rather
than the edge of the bamboo tube. The impacts of this are discussed further in section 6.

Figure 19 - Two screenshots of the application in action. A) displays the AR system
calibrated using only the device sensors, b) displays the positional accuracy of the
cylinder increased using the object recognition algorithm.
A positive impact of recording the results in Unity game units is that, with a single line of
code, the distance that the cylinder travelled can be displayed as a string in the centre
of the screen, making it easy to record at the point it is calculated. However, a negative
impact is that it is difficult to quantify the results in real-life terms. Unity units are
based on real-life metres, but their reliability in this context is subject to the dimensions
of the device camera image and the camera's angle of view. Given these facts, the Unity
game unit results can only offer a percentage increase in visual accuracy (discussed
further in section 6).

6. Discussion

6.1. STUDY SUMMARY

To summarise the research aim stated in section 1.2, the purpose of this study was to
identify to what extent a location-based AR application can be enhanced with regard to
data visualisation accuracy. The gap in academic knowledge is discussed in detail in
section 2.4, as current AR applications require a controlled environment for accurate
visualisation, and the motivations for the research are detailed in section 1.1 as a
requirement to improve construction efficiency and safety standards through effective
data transfer within a project.

The literature identified that a marker-based AR application has the advantage of being
highly accurate for data positioning, but only works in a controlled environment,
whereas a location-based AR application works in a global environment, but is only as
accurate as the sensors of the mobile device. Therefore, this study reviewed a suitable
software package for AR application creation and developed a location-based AR
application whose accuracy is enhanced using visual object recognition.

6.2. REVIEW OF THE APPLICATION DEVELOPED


Success of the application would be determined by examining to what extent the
application was able to increase the positional accuracy of the virtual objects by
recognising the corresponding objects in the device camera.

Many assumptions and simplifications were made due to the time constraints of the
study and the researcher's lack of experience in C# scripting, Unity application
development and mobile app design. For example, the utility infrastructure that the
application was designed to work with was simplified to vertical pipes and cables to aid
the object recognition. Although this allowed more effective shape detection within
the study application, it does not represent the full range of utility assets, and an
assumption has been made that effective recognition of vertical piping and cabling can
be modified for recognition of other types of utility data. Large quantities of utility
infrastructure are horizontal piping and cabling, such as those under the road network
or within a single-storey building.

However, the purpose of the study was to reveal to what extent the accuracy of an AR
application can be enhanced. Therefore, as the application has been shown to perform
this function, the code and the theory can be implemented in future studies (discussed
further in section 7.2) and eventually in the field itself. For instance, the same counting
of edge pixels could be performed in a horizontal direction across the screen so that
horizontal edges are identified, as sketched below.
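
A minimal sketch of that extension is given below; it mirrors the column counter in Code
example 3 but counts along rows, and assumes the same using directives as the existing
scripts (the method and parameter names are hypothetical):

    // Hypothetical row-wise counterpart to Code example 3: rows containing many
    // edge pixels are flagged as candidate horizontal assets.
    void FlagHorizontalEdges(float[,] g, float thres, float detectionSensitivity,
                             Texture2D output, List<int> rowList)
    {
        rowList.Clear();
        for (int y = 0; y < output.height; y++)
        {
            int rowCounter = 0;
            for (int x = 0; x < output.width; x++)
            {
                if (g[x, y] >= thres) rowCounter++;   // count edge pixels along this row
            }
            if (rowCounter > output.width * detectionSensitivity)
            {
                rowList.Add(y);                       // y coordinate of a candidate horizontal edge
            }
        }
    }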

In addition, the application could only identify two vertical objects whose latitude and
longitude coordinates were pre-loaded into the application through the Unity
development user interface. Although this was acceptable for this study, as it was the
application's visualisation accuracy that was being investigated, future iterations of the
application could explore the benefits of downloading live data from a server over the
internet (benefits and drawbacks identified in the literature are reviewed in section
2.3.3). The possibility of bi-directional spatial data download and re-upload is discussed
in section 7.2.

6.3. RESULTS OF THE STUDY IN CONTEXT


The results of the application testing (detailed in section 5) revealed that a location-
based AR application can improve the accuracy of the data that it augments using
camera image interpretation. However, these results must be compared to other methods
of positional calculation to quantify their impact.

For instance, it has been found that a user's location can be calculated using Bluetooth
tracking. The researchers noted that implementation required large amounts of time
and effort to tune and position the scanning devices, but the results were able to reveal
user paths when Bluetooth communication was made (Oosterlinck, et al., 2017). That
research was focused on anonymous user detection in a shopping centre for footfall
statistics, but the infrastructure could be reversed to allow an individual user, rather
than the system, to track their indoor location. However, the quoted detection rate was
only 9.8%, which would not allow effective and accurate indoor positioning in its
current form.

Additionally, a Wi-Fi based indoor positioning infrastructure was tested and found to
increase positional accuracy by 16% in comparison to a 'state-of-the-art' method
(Hayashi, et al., 2016), and a crowd-sourced indoor navigation system using radio maps
has been implemented, reviewed and found to be a success for both indoor and outdoor
mapping (Jung, et al., 2016).

It is probable that each location and land-use type will be suited to a specific method of
positional calculation for the user. For example:

 galleries and museums are likely to be appropriate for Bluetooth-based AR
application infrastructure, given the user's proximity to artefacts;
 an airport may prefer a Wi-Fi aided AR navigational system for augmenting gate
departure information, as individual gates are a number of metres apart; and
 a construction site in a dense urban environment might prefer a visually
enhanced location-based AR system for accuracy in an uncontrolled
environment.

However, based on the evidence of this research, a location-based AR application can be
produced for displaying accurate utility data in an uncontrolled environment. This study
uses object recognition in the device camera image in order to perform this task. Other
methods are more suited to a controlled environment, such as fiducial markers,
Bluetooth emitters or a Wi-Fi network infrastructure.

6.4. ASSUMPTIONS AND LIMITATIONS


Some of the assumptions that were made to ensure that the study met the research
aim have been discussed in section 6.2, such as:

 the assumption that a system that can correctly identify vertical utility
infrastructure can also correctly identify other utility infrastructures; and
 the assumption that if the system can correctly identify two vertical pipes or
cables, the scale could be increased for larger quantities of infrastructure.

However, further assumptions were also made and will be discussed in conjunction with
their associated limitations in this section.

One of the assumptions made while developing the application reviewed in this study
was that the mobile device on which it runs is able to receive GPS information. Further
to the information discussed in section 2.4, part of the motivation for the research is
that, despite likely improvements to the reliability and precision of the sensors on
mobile devices, some locations will still yield unreliable results. Examples of such
locations include underground locations and dense urban environments (Adjrad, et al.,
2015). Therefore, despite improvements in device sensor hardware, there will continue
to be a need to enhance the locational accuracy of location-based AR applications.
However, in underground locations, it is likely that the mobile device will be unable to
receive any GPS location information. In order to overcome this, it is suggested that
alternative methods of increasing the positional reliability of AR systems be researched
(examples reviewed in section 6.3), such as Bluetooth emitters, fiducial markers or
Wi-Fi network location calculation. Thus, a limitation of the method researched in this
study is the requirement for the device either to be in a position where GPS information
can be received, or to be in a controlled environment.

Another assumption that was made concerned the reliability of the script used to convert
latitude and longitude values into Unity 3D world units. The script was based on the
haversine formula and the spherical law of cosines, and used as a guide a script written
by Florian Müller and updated by David Schmitt. The haversine formula is currently the
most accurate method of calculating distance on the surface of a sphere or spheroid and
is regularly used to calculate world distances in the literature (Pai, et al., 2017; Drolet &
Martel, 2016; Sangat, et al., 2016). Despite Unity 3D world units being based on real-life
metres (as discussed in section 5.2), the accuracy of visualisation in AR is subject to the
viewing angle of the device camera and the pixel dimensions of the camera image and
device screen. Thus, a limitation of the research is that the data visualisation becomes
less accurate the greater the distance between the user and the infrastructure itself; a
limitation that is supported by the results of this study (section 5.2).
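
For reference, the haversine formula referred to gives the great-circle distance $d$
between two points with latitudes $\varphi_1, \varphi_2$ and longitudes
$\lambda_1, \lambda_2$ on a sphere of radius $R$:

$$d = 2R \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right)}\right)$$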

Furthermore, due to the algorithm executed in this particular application and study,
each individual section of utility infrastructure is analysed and re-positioned
independently. Thus, there is a possibility that, with multiple utility assets within a
single camera image, the image could appear distorted to the user. A benefit of the
system is that, if the angle of perspective is incorrect for the user, treating each asset
individually would rectify the issue. However, if the user's viewpoint is correct and the
application incorrectly re-positions a utility asset to the edge of the screen, the AR
visualisation would be distorted for the user.

6.5. IMPLICATIONS OF THE RESEARCH
The implications of the research have the potential to be profound. Further to the gap
in academic knowledge outlined in section 2.4, this study has successfully met the
research aims and objectives detailed in section 1.2: the application developed for this
study was able to enhance the accuracy of utility infrastructure data visualised in AR
using object recognition.

The assumptions that it was necessary to make for the purposes of this research, and
the consequential limitations, are described in section 6.4; the future work that would
be necessary to turn a tested concept into a robust, working application is described in
section 7.2. However, the ability of a single application to detect its approximate
location on the earth's surface, use object recognition to calculate its exact position and
orientation, and overlay useful construction data would improve the efficiency of
construction projects, increase levels of employee safety, decrease construction costs
and decrease the associated delays for affected transport networks (further discussed in
section 1.1).

7. Conclusion

7.1. CONCLUSIONS

This research has described the successful development of an AR application that uses
GPS information and the magnetometer to estimate the location and orientation of the
mobile device, and an object recognition algorithm to calculate the exact location and
orientation of the device with regard to utility infrastructure. However, the study aimed
to develop and test a proof of concept, and therefore further research is still required to
improve the application developed into a robust and fully functional application for use
on a construction site. The application was successfully able to identify the location of
the user and two virtual cylinders in relation to each other based on their GPS
coordinates.

Testing was employed to quantify to what extent the application was able to correctly
re-position the virtual cylinders over the corresponding objects in real life. The results
indicated the success of the system as a concept, but further development is still
required; most notably, to address the limitation that the results are more reliable the
closer the user is to the utility infrastructure.

7.2. FUTURE WORK

The possibilities for future work are extensive. Firstly, work is required to turn the
proof-of-concept application into a robust and reliable application for real-world
implementation; continued work could then build on the advantages that an effective
application would produce.

In order to make the application suitable for use in the field, an algorithm that is able to
interpret spatial data formats must be written. An XML parser or an algorithm able to
read AutoCAD .dwg file formats would be most useful for a construction company.

At a more fundamental level, improvements must be made to extend the object
recognition algorithm written for this study from purely vertical asset recognition to
include horizontal asset recognition. This would improve both vertical and horizontal
data visualisation accuracy.

Regarding post-implementation improvements, the most influential development
would be the visualisation of live data. For instance, if pressure or electrical current
sensors were installed, the measured information could be overlaid over the respective
assets. Thus, if there were a leak in a pipe or a power surge, the engineers on the ground
could all see exactly what and where the source of the problem was, in real time.

The final suggestion for future study and work made in this study is the potential for
live data upload, as well as data download, from a spatial database. For instance, if
object recognition algorithms are able to detect the location of various assets within a
construction environment, then when an additional asset is erected, its locational
information could be automatically uploaded back to the server, thereby keeping a
perpetually updated building model.

8. References
Adjrad, M., Groves, P. D. & Ellul, C., 2015. Enhancing GNSS Positioning with 3D
Mapping. London, UK, University College London.

Caudell, T. & Mizell, D., 1992. Augmented reality: an application of heads-up display
technology to manual manufacturing process. System Sciences, Volume 2, pp. 659-669.

Chen, K.-W. et al., 2015. To know where we are: vision-based positioning in outdoor
environments. [Online]
Available at: https://arxiv.org/abs/1506.05870
[Accessed 3 September 2017].

Chen, P., Peng, Z., Li, D. & Yang, L., 2016. An improved augmented reality system
based on AndAR. Journal of Visual Communication Image Representation, Volume 37,
pp. 63-69.

Chung, J., Pagnini, F. & Langer, E., 2016. Mindful navigation for pedestrians: Improving
engagement with augmented reality. Technology in Society, Volume 45, pp. 29-33.

Ćuković, S. et al., 2015. Marker Based vs. Natural Feature Tracking Augmented Reality
Visualization of the 3D Foot Phantom. Dubai, UAE, Proceedings of the International
Conference on Electrical and Bio-medical Engineering, Clean Energy and Green
Computing.

Dacko, S., 2016. Enabling smart retail settings via mobile augmented reality shopping
apps. Technological Forecasting & Social Change.

Department for Transport, 2016. Road Works: Reducing disruption on local 'A' roads,
London, UK: Department for Transport.

Dogru, A. O. et al., 2017. GIS based spatial pattern analysis: Children with Hepatitis A
in Turkey. Environmental Research, Volume 156, pp. 349-357.

Drolet, J.-P. & Martel, R., 2016. Distance to faults as a proxy for radon gas
concentration in dwellings. Journal of Environmental Radioactivity, Volume 152, pp. 8-
15.

Edwards, M. J., Hayes, M. P. & Green, R. D., 2016. High-Accuracy Fiducial Markers for
Ground Truth. Palmerston North, New Zealand, Image and Vision Computing New
Zealand (IVCNZ).

Erkoyuncu, J. A. et al., 2017. Improving efficiency of industrial maintenance with
context aware adaptive authoring in augmented reality. Manufacturing Technology,
66(1), pp. 465-468.

Fedosov, A. et al., 2016. SkiAR: Wearable Augmented Reality System for Sharing
Personalized Content on Ski Resort Maps. Geneva, Switzerland, Augmented Human
International Conference 2016.

Feiner, S., MacIntyre, B., Hollerer, T. & Webster, A., 1997. A touring machine:
Prototyping 3D mobile augmented reality systems for exploring the urban environment.
Cambridge, MA, USA, IEEE.

Fleck, P., Schmalstieg, D. & Arth, C., 2016. Visionary Collaborative Outdoor
Reconstruction using SLAM and SfM. Greenville, SC, USA, Software Engineering and
Architectures for Realtime Interactive Systems (SEARIS).

Forsman, M., Arvo, J. & Lehtonen, T., 2016. Extended panorama tracking algorithm for
augmenting virtual 3D objects in outdoor environments. Kuala Lumpur, Virtual System
& Multimedia.

Forteza, F., Carretero-Gómez, J. & Sesé, A., 2017. Occupational risk, accidents on sites
and economic performance of construction firms. Safety Science, Volume 94, pp. 61-76.

Galeazzi, F. et al., 2016. Web-based visualization for 3D data in archaeology: The ADS
3D viewer. Journal of Archaeological Science: Reports, Volume 9, pp. 1-11.

Games, P. & Joshi, A., 2015. An evaluation-guided approach for effective data
visualization on tablets. Visualization and Data Analysis.

Gee, T. et al., 2016. Lidar guided stereo simultaneous localization and mapping (SLAM)
for UAV outdoor 3-D scene reconstruction. Palmerston North, New Zealand, Image and
Vision Computing New Zealand (IVCNZ).

Georgiou, Y. & Kyza, E., 2017. The development and validation of the ARI
questionnaire: an instrument for measuring immersion in location-based augmented
reality. International Journal of Human-Computer Studies, Volume 98, pp. 24-37.

Goodchild, M. F., 2010. Twenty years of progress: GIScience in 2010. Journal of Spatial
Information Science, Volume 1, pp. 3-20.

Google Inc., 2016. Android Studio Terms and Conditions. [Online]
Available at: https://developer.android.com/studio/terms.html
[Accessed 5 September 2017].

Hayashi, T., Taniuchi, D., Korpela, J. & Maekawa, T., 2016. Spatio-temporal adaptive
indoor positioning using an ensemble approach. Pervasive and Mobile Computing.

Health and Safety Executive, 2017. Fatal injuries arising from accidents at work in Great
Britain 2017, London, UK: National Statistics.

HM Treasury, 2017. Spring Budget 2017, London, UK: HM Treasury.

Huang, W., Sun, M. & Li, S., 2016. A 3D GIS-based interactive registration mechanism
for outdoor augmented reality system. Expert Systems With Applications, Volume 55,
pp. 48-58.

Jaeyong, O., Park, S. & Kwon, O.-S., 2016. Advanced navigation aids system based on
augmented reality. International Journal of e-Navigation and Maritime Economy,
Volume 5, pp. 21-31.

Jung, S.-h., Lee, S. & Han, D., 2016. A crowdsourcing-based global indoor positioning
and navigation system. Pervasive and Mobile Computing, Volume 31, pp. 94-106.

Karaman, A., Erisik, D., Incel, O. & Alptekin, G., 2016. Resource usage analysis of a
sensor-based mobile augmented reality application. Procedia Computer Science,
Volume 83, pp. 300-304.

Kettner, L., 1999. Using generic programming for designing a data structure for
polyhedral surfaces. Computational Geometry, Volume 13, pp. 65-90.

Lima, J. P. et al., 2017. Markerless tracking system for augmented reality in the
automotive industry. Expert Systems with Applications, Volume 82, pp. 100-114.

Meenakshi, V., Shriram, V., Ritesh, A. & Santhosh, C., 2015. An innovative app for
location finding with augmented reality using CLOUD. Procedia Computer Science,
Volume 50, pp. 585-589.

Monteiro, P. R. et al., 2017. MGB-IPH model for hydrological and hydraulic simulation
of large floodplain river systems coupled with open-source GIS. Environmental
Modelling & Software, Volume 94, pp. 1-20.

Morrison, A. et al., 2011. Collaborative use of mobile augmented reality with paper
maps. Computers & Graphics, Volume 35, pp. 789-799.

Newman, G. et al., 2017. Leveraging the power of place in citizen science for effective
conservation decision making. Biological Conservation, Volume 208, pp. 55-64.

Oosterlinck, D., Bennoit, D., Baecke, P. & Van de Weighe, N., 2017. Bluetooth tracking
of humans in an indoor environment: An application to shopping malls. Applied
Geography, Volume 78, pp. 55-65.

Pai, M., Mudenagudi, U. & Sharma, G., 2017. Intersection movement assistance for
vehicles. International Journal of Advance Research, Ideas and Innovations in
Technology, 3(4), pp. 180-186.

Palmarini, R., Erkoyuncu, J. A., Roy, R. & Torabmostaedi, H., 2018. A systematic review
of augmented reality applications in maintenance. Robotics and Computer-Integrated
Manufacturing, Volume 49, pp. 215-228.

Pantano, E., Rese, A. & Baier, D., 2017. Enhancing the online decision-making process
by using augmented reality: A two country comparison of youth markets. Journal of
Retailing and Consumer Services, Volume 38, pp. 81-95.

Pierdicca, R. et al., 2016. Smart maintenance of riverbanks using a standard data layer
and Augmented Reality. Computers & Geosciences, Volume 95, pp. 67-74.

Pryss, R. et al., 2017. Enabling tracks in location-based smart mobile augmented reality
applications. Procedia Computer Science, Volume 110, pp. 207-214.

Reda, K. et al., 2013. Visualizing large, heterogeneous data in hybrid reality
environments. Big Data Visualization, 33(4), pp. 38-48.

Reitmayr, G. & Drummond, T., 2006. Going out: robust model-based tracking for
outdoor augmented reality. Proceedings of the 5th IEEE and ACM International
Symposium on Mixed and Augmented Reality, pp. 109-118.

Reitmayr, G. & Drummond, T., 2007. Initialisation for visual tracking in urban
environments. Washington, DC, USA, IEEE Computer Society.

Ren, J. & Ruan, Z., 2016. Architecture in an age of augmented reality: applications and
practices for mobile intelligence BIM-based AR in the entire lifecycle. Beijing, China,
Electronic Information Technology and Intellectualization.

Rese, A., Baier, D., Geyer-Schulz, A. & Schreiber, S., 2016. How augmented reality apps
are accepted by consumers: a comparative analysis using scales and opinions.
Technological Forecasting & Social Change.

Richards-Rissetto, H., 2017. What can GIS + 3D mean for landscape archaeology.
Journal of Archaeological Science, Volume 84, pp. 10-21.

Sangat, P. et al., 2016. Processing high-volume geospatial data: a case of monitoring
heavy haul railway operations. Procedia Computer Science, Volume 80, pp. 2221-2225.

Sattler, T., Leibe, B. & Kobbelt, L., 2011. Fast image-based localization using direct 2D-
to-3D matching. Barcelona, Spain, Computer Vision, IEEE International Conference.

Schall, G., Schmalsteig, D. & Junghanns, S., 2010. Vidente-3D visualization of
underground infrastructure using handheld augmented reality. GeoHydroinformatics:
Integrating GIS and Water Engineering.

Serino, M., Cordrey, K., McLaughlin, L. & Milanaik, R., 2016. Pokémon Go and
augmented virtual reality games: a cautionary commentary for parents and
pediatricians. Current Opinion in Pediatrics, 28(5), pp. 673-677.

Sutherland, I., 1965. The ultimate display. Computer Aided Architectural Design, 65(2),
pp. 506-508.

Thiessen, P., Collins, J., Buckland, T. & Abbell, R., 2016. Valuing the wider benefits of
road maintenance funding. Barcelona, Spain, 44th European Transport Conference
2016.

tom Dieck, C. & Jung, T. H., 2017. Value of augmented reality at cultural heritage sites:
a stakeholder approach. Journal of Destination Marketing & Management.

Unity Technologies, 2017. Unity 3D. s.l.:s.n.

van Oosterom, P., 2014. Vario-scale data structure supporting smooth zoom and
progressive transfer of 2D and 3D data. International Journal of Geographical
Information Science, 28(3), pp. 455-478.

Vassigh, S. et al., 2016. Integrating Building Information Modeling with Augmented
Reality for interdisciplinary learning. Merida, Mexico, Mixed and Augmented Reality
(ISMAR-Adjunct).

Wafa, S. N. & Hashim, E., 2016. Adoption of mobile augmented reality advertisements
by brands in Malaysia. Procedia - Social and Behavioral Sciences, Volume 219, pp. 762-
768.

Weng, E., Khan, R., Adruce, S. & Bee, O., 2013. Object tracking from natural features in
mobile augmented reality. Procedia - Social and behavioural sciences, Volume 97, pp.
753-760.

Wheat, P., 2017. Scale, quality and efficiency in road maintenance: evidence for English
local authorities. Transport Policy, Volume 59, pp. 46-53.

Yang, B., 2016. GIS based 3-D landscape visualization for promoting citizen's awareness
of coastal hazard scenarios in flood prone tourism towns. Applied Geography, Volume
76, pp. 85-97.

Yin, L., 2017. Street level urban design qualities for walkability: Combining 2D and 3D
GIS measures. Computers, Environment and Urban Systems, Volume 64, pp. 288-296.

Younes, G. et al., 2017. Virtual and augmented reality for rich interaction with cultural
heritage sites: A case study from the roman theatre in Byblos. Digital Applications in
Archaeology and Cultural Heritage, Volume 5, pp. 1-9.

Zamir, A. R. & Shah, M., 2010. Accurate image localization based on google maps street
view. Berlin, Heidelberg, European Conference on Computer Vision.

Zhang, X., Han, Y., Hao, D. & Lv, Z., 2016. ARGIS-based outdoor underground pipeline
information system. Journal of Visual Communication & Image Representation, Volume
40, pp. 779-790.

9. Appendices
9.1. GPS C# SCRIPT
1. using System;
2. using System.Collections;
3. using System.Collections.Generic;
4. using UnityEngine;
5. using UnityEngine.UI;
6. public class GPS: MonoBehaviour {
7. public static GPS Instance {
8. set;
9. get;
10. }
11. public float UserLatitude;
12. public float UserLongitude;
13. public float GPSAccuracy;
14. public Text UserCoordinates;
15. private void Start() {
16. Instance = this;
17. StartCoroutine(StartLocationService());
18. }
19. private IEnumerator StartLocationService() {
20. if (!Input.location.isEnabledByUser) {
21. Debug.Log("User has not enabled GPS");
22. yield
23. break;
24. }
25. Input.location.Start();
26. int maxWait = 20;
27. while (Input.location.status == LocationServiceStatus.Initializing && maxWait > 0) {
28. yield
29. return new WaitForSeconds(1);
30. maxWait--;
31. }
32. if (maxWait <= 0) {
33. Debug.Log("Timed out");
34. yield
35. break;
36. }
37. if (Input.location.status == LocationServiceStatus.Failed) {
38. Debug.Log("Unable to determine device location");
39. yield
40. break;
41. }
42. yield
43. break;
44. }
45. void Update() {
46. UserLatitude = Input.location.lastData.latitude;
47. UserLongitude = Input.location.lastData.longitude;
48. GPSAccuracy = Input.location.lastData.horizontalAccuracy;
49. }
50. }

9.2. GYROCONTROL C# SCRIPT


1. using System.Collections;
2. using System.Collections.Generic;
3. using UnityEngine;
4. using UnityEngine.UI;
5. public class GyroControl: MonoBehaviour {
6. private bool gyroEnabled;

7. private Gyroscope gyro;
8. public GameObject cameraContainer;
9. private Quaternion rot;
10. public Text GyroText;
11. public Text Orientation;
12. public float speed = 2.5f;
13. public int counter;
14. public Text options;
15. private void Start() {
16. cameraContainer.transform.position = transform.position;
17. gyroEnabled = EnableGyro();
18. Input.compass.enabled = true;
19. Input.location.Start();
20. }
21. private bool EnableGyro() {
22. if (SystemInfo.supportsGyroscope) {
23. gyro = Input.gyro;
24. gyro.enabled = true;
25. cameraContainer.transform.rotation = Quaternion.Euler(90f, 90f, 0f);
26. rot = new Quaternion(0, 0, 1, 0);
27. return true;
28. }
29. return false;
30. }
31. public void ToggleOrientationCalculation() {
32. counter++;
33. }
34. private void Update() {
35. if (gyroEnabled) {
36. if (counter % 2 == 0) {
37. transform.localRotation = gyro.attitude * rot;
38. options.text = "Gyro Calibrated";
39. } else {
40. transform.rotation = Quaternion.Slerp(transform.rotation, Quaternion.Euler(0, Input.compass.trueHeading, 0), speed * Time.deltaTime);
41. options.text = "Magnetometer Calibrated";
42. }
43. GyroText.text = gyro.attitude[1].ToString();
44. Orientation.text = Input.compass.trueHeading.ToString();
45. }
46. }
47. }

9.3. MAPRENDERSCRIPT C# SCRIPT


1. using System;
2. using System.Collections;
3. using System.Collections.Generic;
4. using UnityEngine;
5. using UnityEngine.UI;
6. public class MapRenderScript: MonoBehaviour {
7. public bool FakeCoords = false;
8. public float UserLat = 51.580229f;
9. public float UserLon = -0.431425f;
10. public float Pillar1Lat = 51.580277f;
11. public float Pillar1Lon = -0.431660f;
12. public float Pillar2Lat = 51.580279f;
13. public float Pillar2Lon = -0.431526f;
14. public Vector3 CentreMapLatLon;
15. public Vector3 Pillar1LatLon;
16. public Vector3 Pillar2LatLon;
17. public Text Pillar1Text;
18. public Text Pillar2Text;

19. public Text CentremapText;
20. public GameObject Pillar1;
21. public GameObject Pillar2;
22. public GameObject GroundPlane;
23. public float WorldRenderScale = 50f;
24. int counter;
25. void Start() {
26. Input.location.Start();
27. CalculatePlane();
28. }
29. void Update() {
30. if (FakeCoords == false) {
31. UserLat = GPS.Instance.UserLatitude;
32. UserLon = GPS.Instance.UserLongitude;
33. }
34. CentreMapLatLon = new Vector3((float) MercatorProjection.lonToX(UserLon), 0, (float) MercatorProjection.latToY(UserLat));
35. Pillar1LatLon = new Vector3((float) MercatorProjection.lonToX(Pillar1Lon), 1.5f, (float) MercatorProjection.latToY(Pillar1Lat));
36. Pillar2LatLon = new Vector3((float) MercatorProjection.lonToX(Pillar2Lon), 1.5f, (float) MercatorProjection.latToY(Pillar2Lat));
37. counter++;
38. if (counter % 200 == 0) {
39. Pillar1.transform.position = Pillar1LatLon - CentreMapLatLon;
40. Pillar2.transform.position = Pillar2LatLon - CentreMapLatLon;
41. }
42. Pillar1Text.text = Pillar1LatLon.ToString() + " " + (Pillar1LatLon - CentreMapLatLon).ToString();
43. Pillar2Text.text = Pillar2LatLon.ToString() + " " + (Pillar2LatLon - CentreMapLatLon).ToString();
44. CentremapText.text = CentreMapLatLon.ToString();
45. }
46. private void CalculatePlane() {
47. GroundPlane.transform.position = new Vector3(0, 0, 0);
48. GroundPlane.transform.localScale = new Vector3(WorldRenderScale, 0, WorldRenderScale);
49. }
50. }

9.4. MERCATORPROJECTION C# SCRIPT


1. using System;
2. /// C# Implementation by Florian Müller, based on the C code published at
3. /// http://wiki.openstreetmap.org/wiki/Mercator#C_implementation 14:50, 20.6.2008;
4. /// updated to static functions by David Schmitt, 23.4.2010 at the same URL
5. /// Modified for use in this study.
6. public static class MercatorProjection {
7. private static readonly double R_MAJOR = 6378137.0;
8. private static readonly double R_MINOR = 6356752.3142;
9. private static readonly double RATIO = R_MINOR / R_MAJOR;
10. private static readonly double ECCENT = Math.Sqrt(1.0 - (RATIO * RATIO));
11. private static readonly double COM = 0.5 * ECCENT;
12. public static double lonToX(double lon) {
13. return R_MAJOR * DegToRad(lon);
14. }
15. public static double latToY(double lat) {
16. lat = Math.Min(89.5, Math.Max(lat, -89.5));
17. double phi = DegToRad(lat);
18. double sinphi = Math.Sin(phi);
19. double con = ECCENT * sinphi;
20. con = Math.Pow(((1.0 - con) / (1.0 + con)), COM);
21. double ts = Math.Tan(0.5 * ((Math.PI * 0.5) - phi)) / con;
22. return 0 - R_MAJOR * Math.Log(ts);
23. }
24. }

9.5. PANELSCRIPT C# SCRIPT
1. using System.Collections;
2. using System.Collections.Generic;
3. using UnityEngine;
4. using UnityEngine.UI;
5. public class PanelScript: MonoBehaviour {
6. public GameObject DataPanel;
7. int datacounter;
8. public GameObject SettingsPanel;
9. int settingscounter; // Update is called once per frame
10. public void showhideDataPanel() {
11. datacounter++;
12. if (datacounter % 2 == 0) {
13. DataPanel.gameObject.SetActive(false);
14. } else {
15. DataPanel.gameObject.SetActive(true);
16. }
17. }
18. public void showhideSettingsPanel() {
19. settingscounter++;
20. if (settingscounter % 2 == 0) {
21. SettingsPanel.gameObject.SetActive(false);
22. } else {
23. SettingsPanel.gameObject.SetActive(true);
24. }
25. }
26. }

9.6. UPDATEGPSTEXT C# SCRIPT


1. using System.Collections;
2. using System.Collections.Generic;
3. using UnityEngine;
4. using UnityEngine.UI;
5. public class UpdateGPSText: MonoBehaviour {
6. public Text UserCoordinates;
7. private void Update() {
8. UserCoordinates.text = "Lat: " + GPS.Instance.UserLatitude.ToString() + " Lon: " + GPS.Instance.UserLongitude.ToString() + " Acc: " + GPS.Instance.GPSAccuracy;
9. }
10. }

9.7. WEBCAMEDGEDETECTION C# SCRIPT


1. /// Script modified from a C# script found at this URL:
2. /// https://gist.github.com/anonymous/05f21191b4c7eea68d52
3. using UnityEngine;
4. using UnityEngine.UI;
5. using System.Collections;
6. using System.Collections.Generic;
7. using System.Threading;
8. using System;
9. using System.Text;
10. using System.Linq;
11. public class WebcamEdgeDetection: MonoBehaviour {
12. public static WebcamEdgeDetection Instance {
13. set;
14. get;
15. }
16. public static WebCamTexture camTex; //public float th = 0.09f;
17. static Texture2D img, rImg, gImg, bImg, kImg, AVGimg;
18. static float[, ] rL, gL, bL, kL, ORL, ANDL, SUML, AVGL;
19. static Color temp;
20. static float t;

21. public RawImage RICamera;
22. bool isReady = false;
23. int counter;
24. int pixelCounter;
25. public float DetectionSensitivity;
26. public float th;
27. public Text imageLog;
28. public Slider DetectionSensitivitySlider;
29. public Slider PixelSensitivitySlider;
30. public List < int > PipeList = new List < int > ();
31. public float PipeListSensitivityLower = 0.4 f;
32. public float PipeListSensitivityHigher = 1.6 f;
33. public List < int > PipeListFinal = new List < int > ();
34. public List < float > PipelistDistanceList1 = new List < float > ();
35. public List < float > PipelistDistanceList2 = new List < float > ();
36. public GameObject cylinder1;
37. public GameObject cylinder2;
    void Start() {
        // Start the device camera feed and size all working textures and arrays to match it.
        WebCamDevice[] devices = WebCamTexture.devices;
        WebCamDevice cam = devices[0];
        camTex = new WebCamTexture(cam.name);
        camTex.Play();
        img = new Texture2D(camTex.width, camTex.height);

        rImg = new Texture2D(img.width, img.height);
        bImg = new Texture2D(img.width, img.height);
        gImg = new Texture2D(img.width, img.height);
        kImg = new Texture2D(img.width, img.height);

        rL = new float[img.width, img.height];
        gL = new float[img.width, img.height];
        bL = new float[img.width, img.height];
        kL = new float[img.width, img.height];
        ORL = new float[img.width, img.height];
        ANDL = new float[img.width, img.height];
        SUML = new float[img.width, img.height];
        AVGL = new float[img.width, img.height];
        AVGimg = new Texture2D(img.width, img.height);
        StartCoroutine(DetectEdges());
    }
    private void Update() {
        // Show the raw camera feed until an edge image is ready, and read the
        // current slider values each frame.
        if (isReady == false) {
            RICamera.texture = camTex;
        }
        DetectionSensitivity = DetectionSensitivitySlider.value;
        th = PixelSensitivitySlider.value;
    }
    public void Toggle() {
        counter++;
    }
    IEnumerator DetectEdges() {
        while (true) {
            if (counter % 2 == 1) {
                imageLog.text = "";
                Debug.Log("Detect Edges has started");
                yield return StartCoroutine(CalculateEdges());
                img.SetPixels(camTex.GetPixels());
                Debug.Log("Detect Edges has finished");
                if (isReady == true) {
                    RICamera.texture = AVGimg;
                    yield return new WaitForSeconds(1f);
                }
                isReady = false;
            } else {
                yield return null;
            }
        }
    }
    IEnumerator CalculateEdges() {
        for (int x = 0; x < img.width; x++) {
            for (int y = 0; y < img.height; y++) {
                temp = img.GetPixel(x, y);
                rImg.SetPixel(x, y, new Color(temp.r, 0, 0));
                gImg.SetPixel(x, y, new Color(0, temp.g, 0));
                bImg.SetPixel(x, y, new Color(0, 0, temp.b));
                t = temp.r + temp.g + temp.b;
                t /= 3f;
                kImg.SetPixel(x, y, new Color(t, t, t));
            }
        }
        rImg.Apply();
        gImg.Apply();
        bImg.Apply();
        kImg.Apply();
        for (int x = 0; x < img.width; x++) {
            for (int y = 0; y < img.height; y++) {
                rL[x, y] = gradientValue(x, y, 0, rImg);
                gL[x, y] = gradientValue(x, y, 1, gImg);
                bL[x, y] = gradientValue(x, y, 2, bImg);
                kL[x, y] = gradientValue(x, y, 2, kImg);
                ORL[x, y] = (rL[x, y] >= th || gL[x, y] >= th || bL[x, y] >= th) ? th : 0f;
                ANDL[x, y] = (rL[x, y] >= th && gL[x, y] >= th && bL[x, y] >= th) ? th : 0f;
                SUML[x, y] = rL[x, y] + gL[x, y] + bL[x, y];
                AVGL[x, y] = SUML[x, y] / 3f;
            }
            if (x % 10 == 0) {
                yield return null;
            }
        }
        TextureFromGradientRef(AVGL, th, ref AVGimg);
        isReady = true;
    }
    // Central-difference gradient magnitude for one colour channel at (ex, why).
    float gradientValue(int ex, int why, int colorVal, Texture2D image) {
        float lx = 0f;
        float ly = 0f;
        if (ex > 0 && ex < image.width) lx = 0.5f * (image.GetPixel(ex + 1, why)[colorVal] - image.GetPixel(ex - 1, why)[colorVal]);
        if (why > 0 && why < image.height) ly = 0.5f * (image.GetPixel(ex, why + 1)[colorVal] - image.GetPixel(ex, why - 1)[colorVal]);
        return Mathf.Sqrt(lx * lx + ly * ly);
    }
    Texture2D TextureFromGradient(float[,] g, float thres) {
        Texture2D output = new Texture2D(g.GetLength(0), g.GetLength(1));
        for (int x = 0; x < output.width; x++) {
            for (int y = 0; y < output.height; y++) {
                if (g[x, y] >= thres) {
                    output.SetPixel(x, y, Color.black);
                } else {
                    output.SetPixel(x, y, Color.white);
                }
            }
        }
        output.Apply();
        return output;
    }
    void TextureFromGradientRef(float[,] g, float thres, ref Texture2D output) {
        PipeList.Clear();
        PipeListFinal.Clear();
        for (int x = 0; x < output.width; x++) {
            pixelCounter = 0;
            for (int y = 0; y < output.height; y++) {
                if (g[x, y] >= thres) {
                    output.SetPixel(x, y, Color.black);
                    pixelCounter++;
                } else {
                    output.SetPixel(x, y, Color.white);
                }
            }
            // A column becomes a candidate pipe edge when its share of edge pixels
            // exceeds the DetectionSensitivity threshold; it is highlighted in green.
            if (pixelCounter > output.height * DetectionSensitivity) {
                for (int y = 0; y < output.height; y++) {
                    output.SetPixel(x, y, Color.green);
                }
                PipeList.Add(x);
            }
        }
        output.Apply();
        pipelineRecognition();
    }
    void pipelineRecognition() {
        /*
        var a = 1; var b = 1; var c = 1; var d = 1;
        foreach (int value in PipeList) {
            pipelineCounter++;
            double AverageA = (double)((a + b + c + d) / 4) / value;
            double AverageB = (double)((a + b + c + d) / 4) / value;
            double AverageC = (double)((a + b + c + d) / 4) / value;
            double AverageD = (double)((a + b + c + d) / 4) / value;
            if (pipelineCounter % 4 == 0) {
                a = value;
                if (AverageA > PipeListSensitivityLower && AverageA < PipeListSensitivityHigher) { PipeListFinal.Add(value); }
            }
            if (pipelineCounter % 4 == 1) {
                b = value;
                if (AverageB > PipeListSensitivityLower && AverageB < PipeListSensitivityHigher) { PipeListFinal.Add(value); }
            }
            if (pipelineCounter % 4 == 2) {
                c = value;
                if (AverageC > PipeListSensitivityLower && AverageC < PipeListSensitivityHigher) { PipeListFinal.Add(value); }
            }
            if (pipelineCounter % 4 == 3) {
                d = value;
                if (AverageD > PipeListSensitivityLower && AverageD < PipeListSensitivityHigher) { PipeListFinal.Add(value); }
            }
        }
        */
        if (PipeList.Count > 2) {
            PipelistDistanceList1.Clear();
            PipelistDistanceList2.Clear();
            Vector3 closestC1 = cylinder1.transform.position;
            Vector3 closestC2 = cylinder2.transform.position;
            foreach (int value in PipeList) {
                // For each detected column, cast a ray through that screen position and keep
                // the ray point (at the cylinder's depth) closest to the cylinder's current position.
                if (Camera.main.WorldToViewportPoint(cylinder1.transform.position).x < 1) {
                    Ray ray1 = Camera.main.ScreenPointToRay(new Vector3(((float)value / camTex.width) * Screen.width, Screen.height / 2, 0));
                    Debug.DrawRay(ray1.origin, ray1.direction * (cylinder1.transform.position.z), Color.red, 1f);
                    float distanceColumn1 = Vector3.Distance(ray1.GetPoint(cylinder1.transform.position.z), cylinder1.transform.position);
                    PipelistDistanceList1.Add(distanceColumn1);
                    if (distanceColumn1 <= PipelistDistanceList1.Min()) {
                        closestC1 = ray1.GetPoint(cylinder1.transform.position.z);
                        imageLog.text = PipelistDistanceList1.Min().ToString();
                    }
                }
                if (Camera.main.WorldToViewportPoint(cylinder2.transform.position).x < 1) {
                    Ray ray2 = Camera.main.ScreenPointToRay(new Vector3(((float)value / camTex.width) * Screen.width, Screen.height / 2, 0));
                    Debug.DrawRay(ray2.origin, ray2.direction * (cylinder2.transform.position.z), Color.red, 1f);
                    float distanceColumn2 = Vector3.Distance(ray2.GetPoint(cylinder2.transform.position.z), cylinder2.transform.position);
                    PipelistDistanceList2.Add(distanceColumn2);
                    if (distanceColumn2 <= PipelistDistanceList2.Min()) {
                        closestC2 = ray2.GetPoint(cylinder2.transform.position.z);
                        imageLog.text = PipelistDistanceList1.Min().ToString() + " : " + PipelistDistanceList2.Min().ToString();
                    }
                }
            }
            // Snap both cylinders to the closest detected columns for this frame.
            cylinder1.transform.position = closestC1;
            cylinder2.transform.position = closestC2;
            /*
            foreach (int value in PipeList) {
                Vector3 targetX1 = Camera.main.ScreenToWorldPoint(new Vector3(value, 0, 0));
                Vector3 targetX2 = Camera.main.ViewportToWorldPoint(new Vector3(value, 0, 0));
                Vector3 targetX1a = new Vector3(targetX1.x, cylinder1.transform.position.y, cylinder1.transform.position.z);
                Vector3 targetX2a = new Vector3(targetX2.x, cylinder2.transform.position.y, cylinder2.transform.position.z);
                float distanceColumn1 = Vector3.Distance(targetX1a, cylinder1.transform.position);
                float distanceColumn2 = Vector3.Distance(targetX2a, cylinder2.transform.position);
                PipelistDistanceList1.Add(distanceColumn1);
                PipelistDistanceList2.Add(distanceColumn2);
                if (distanceColumn1 <= PipelistDistanceList1.Min()) {
                    closestC1 = targetX1;
                    imageLog.text = PipelistDistanceList1.Min().ToString();
                }
                if (distanceColumn2 <= PipelistDistanceList2.Min()) {
                    closestC2 = targetX2;
                    imageLog.text = PipelistDistanceList1.Min().ToString() + " : " + PipelistDistanceList2.Min().ToString();
                }
            }
            cylinder1.transform.position = closestC1;
            cylinder2.transform.position = closestC2;
            */
        }
    }
}
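For reference, the gradientValue function above computes a central-difference gradient magnitude for a single colour channel, and TextureFromGradientRef then flags an image column x as a candidate pipe when its count of edge pixels exceeds a proportion of the image height. Written out (I is the sampled channel value, g the averaged gradient image, th the pixel-sensitivity threshold and H the image height):

|\nabla I(x, y)| = \sqrt{\left(\tfrac{1}{2}\bigl[I(x+1, y) - I(x-1, y)\bigr]\right)^2 + \left(\tfrac{1}{2}\bigl[I(x, y+1) - I(x, y-1)\bigr]\right)^2}

\sum_{y=0}^{H-1} \mathbf{1}\bigl[g(x, y) \ge th\bigr] > H \times \mathrm{DetectionSensitivity}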
