Trail Visualization: A Novel System for Monitoring Hiking Trails

University of Michigan School of Information, SI 649: Information Visualization. Zhenan Hong, Sangmi Park, Jessamyn Smallenburg, Gary Suen

Abstract

We describe an information visualization system for tracking and monitoring individuals walking on hiking trails. We establish the system's identity through comparison with related work, including research into user requirements specific to hiking. We emphasize safety as it pertains to hiking on trails classified as arduous, and develop a use-case scenario built around a particularly strenuous hiking trail in Yosemite National Park. Our target audience for the scope of this project consists of individuals we refer to as "savers": park officials skilled both in the use of intuitive, usable visualization interfaces and in wilderness rescue tactics. We trace the development of our thinking, illustrate the stages of interface design using screenshots, describe how the system was evaluated, and discuss the system in the context of different visualization concepts. We address alternatives that we considered, and conclude with a section on possible future developments.

Keywords: Information Visualization, Contextual Awareness System, Mobile Computing, Interactive Visualization, Hiker, Environmental Monitoring System, Rescue and Safety Tools

We built a system to help monitor users' activities outdoors, particularly in mountainous areas. The system displays information including trail records, traveling speed, ambient light intensity, current weather conditions, and the weather forecast (see prototype figure below). The intended target audience consists of people whom we are calling "savers." These individuals monitor the hiking routes of people out trekking in the field, while collecting data about both the person and the contextual surroundings from a smartphone. The savers can communicate with hikers about safety and precautionary information, such as flood warnings and rock slides, and can help guide hikers back to the appropriate trail should they wander off into the wilderness. They can also contact other park officials monitoring specific trails if the individuals with the phones choose not to answer, are incapable of answering, or are immersed in a crowded environment and unaware that someone is trying to reach them by telephone. With the information the system provides, savers gain an understanding of the past and current positions of the hikers, along with contextual information.

Literature Review

The concept behind our system – that the target users are trail monitors – is novel, to the best of our knowledge. We conducted literature searches on the topics of hiking and information visualization, hiking trail and safety visualizations, travel safety and information visualization, and the different uses of maps in information visualization. We did not identify any other system designed to collect the same combination of data for a visualization intended to be

viewed by people – savers – who may be monitoring the trails for safety and protection purposes. Our system is intended to represent map, terrain, and satellite ground data; current and forecasted weather data; sunrise/sunset times; ambient light intensity; and hikers' contextual data, including current and past trail location, speed, acceleration, direction, and orientation. Unlike other visualizations associated with mountaineering, our Trail Visualization system collects all of the above-mentioned data and conveys it to a trained set of savers, individuals who take on the responsibility of watching out for the safety of others. The intended audience, which in our scenario consists of park rangers, park rescue personnel, and possibly concerned family members, is quite distinct from the audience most often addressed by other visualization projects. That audience typically consists of the hikers themselves, who access the visualizations locally on mobile phones, frequently for the purpose of gathering tourist information about area highlights and attractions. Other outdoor purposes include general map handling, basic navigation, and communication. For example, recent research into the visualization of geo-information has identified tourism as the driving motivation behind the development of geographic information visualization. Because tourism information is mostly geo-information, tourism companies are exploring visualization strategies that maximize the promotional appeal of their regions and assets, particularly unique scenery and the natural environment.1 Other researchers have addressed the use of geographical visualizations for planning hikes.2 Specifically, Bleisch et al. tested the usefulness of a web-based 3-D visualization for planning hikes in

the foothills of the Swiss Alps. Nivala et al. conducted research into user requirements for location-based services to support hiking activities.3 The hypothesis was that hikers would benefit from location-based information supporting communication and social-behavior needs. The goal of that study was to define user requirements regarding geospatial and other location-based information, focusing on the needs of hikers. Of the nine user requirements identified, one was 'Emergency Situations,' and a second was 'Saving Experiences.' Slocum et al. researched cognitive and usability issues in geovisualization.4 These authors argue that both cognitive and usability issues should be considered in the context of six primary themes: geospatial virtual environments; dynamic representations (including animated and interactive maps); metaphors and schemata in user interface design; key individual and group differences; collaborative geovisualization; and the evaluation of the effectiveness of geovisualization methods. The authors acknowledge that applying usability engineering to geovisualization can be problematic because of the difficulty of defining the nature of users and their tasks. In this work, we have taken extra care to clearly and precisely identify our target user base: the savers.

System Architecture

The system consists of three parts, as illustrated in Figure 1, below. The parts are a mobile client for capturing sensor data, a middleware system to transfer sensor data, and a console application to monitor hikers' activities and their contextual surroundings. As


mobile technology becomes increasingly pervasive, computing is embedded into our everyday lives. Without the need for special tracking devices, we can collect mobile users' data simply by installing applications on their mobile phones. In our project, we developed a probe application on the Android platform using Java, and installed it on a Nexus One smartphone. When a hiker turns on the application, it runs in the background, automatically capturing live sensor data from the accelerometer and orientation sensors built into the phone. Collected data is stored in XML format and is transmitted continuously from the phone to a remote server. The server is a black-box system that retrieves sensor data from the Nexus phone and then sends it to our Trail Visualization console application. The server also maintains users' identification information, along with additional applications capable of querying data from specific users. This design makes the system scalable, with the capability to support multiple mobile phone users in the future.

Figure 1: System Architecture

The visualization console was built in dashboard style. The low- and high-fidelity prototype visualization interfaces are shown below, as is an image of the visualization in its final format. For the implementation, we used JSP and Java. Java was used for the back end of the system, to retrieve sensor data remotely from mobile phones; JSP was used for the front end, the system's visualization. Here we also used JavaScript libraries to help with the visualization, for example Protovis and the Google Maps API.

Iterative Design Process

Figure 2: Google Earth trail map of Half Dome Hike. Larger screen is shown at end of paper.

Figure 2 shows our high-fidelity prototype, which consists of a satellite image from Google Earth depicting the Glacier Point hike to the top of Half Dome in

Yosemite National Park, California, U.S.A. This destination was selected both because we as a group are familiar with the layout of the park, and because this particular trail is known to be long, arduous, and challenging. Dangers abound on long, strenuous hikes, and this is exactly the type of challenging situation for which we are designing the system. The starting point of this trail is in the lower left, displayed

as a blue bubble with a star in the middle. Similarly, the end point, on top of Half Dome, is also marked with a blue bubble containing a star. The hiker, shown on the map in blue, is working his way down the trail in this view. The data on the right are, from top to bottom: speed and acceleration, ambient light, a timeline slider to retrieve recent data, and a weather display showing both the conditions throughout the day and the six-day weather forecast.

Scoping and Implementation

The scope of our project consists of the system's working functionality through the implementation of its architectural features: the collection of real-time dynamic data via the phone's built-in sensors, and its display through the visualization interface we have developed. The entire architecture of our system has been successfully deployed. The mobile and desktop clients work together with the server to pass data back and forth. We are currently able to collect live sensor data from our device. The motion, direction, and orientation information associated with the individual holding the phone has been successfully visualized through our interface. We also collect terrain and route data from Google Maps. The sensors collect data on acceleration, orientation, light intensity, and GPS readings; weather and forecast data are updated on a continuous basis as well. We built the ambient light console interface using Protovis. To catch the saver's attention, we used multiple retinal elements to encode the variables representing light intensity. First, the size of the bubbles indicates the degree of intensity. Second, the positioning of the bubbles spatially references the light intensity: bubbles in a higher position represent higher light intensity. To make our system more accessible

and to facilitate pre-attentive processing, we used color encoding as a metaphor for the temperature of the light. A higher degree of light intensity is encoded with a warmer color (i.e., red); lower intensity is shown in a colder color (i.e., blue). For the motion monitoring feature, we used the accelerometer in the Nexus One smartphone to collect contextual data from the hiker. The data are then transformed into the visualization and displayed on the accelerometer panel on the right side of the application, just below the light intensity graph. We used three different colors to encode the hiker's movement – specifically the three basic RGB colors, red, green, and blue – because their hue values can be easily distinguished. We assigned blue to the hiker's left and right movements, green to forward and backward motions, and red to up and down movements. In the motion, or movement, panel, we embedded a 2D coordinate plane that displays motion data occurring in a 3D field. Values are plotted across the first and fourth coordinate quadrants, where the x-axis represents time and the y-axis represents the sequences of data associated with the directional movements of the hiker. Left/right movement is coded in blue, with positive values indicating the hiker's movements to the left and negative values indicating movement to the right. Directional motion data are displayed as continuous curves, which helps savers monitor hikers' current and past movements. The visualization also clearly shows when the hiker is moving backward or falling down, as the values for these two variables become negative; the change is easily observed when the curve dips below the x-axis of the coordinate plane. By combining continuous curves, colors, and divisions in the motion-monitoring visualization, we enhance pre-attentive processing and minimize the required degree of focus.
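The axis-to-color mapping and sign conventions just described can be sketched as follows. This is our own illustration of the encoding, not the actual probe or console code; the `Sample` class and method names are assumptions.

```java
import java.awt.Color;

public class MotionEncoding {
    // One accelerometer reading: signed values per movement axis,
    // following the sign conventions described in the text.
    public static final class Sample {
        public final double leftRight;   // positive = left, negative = right
        public final double forwardBack; // positive = forward, negative = backward
        public final double upDown;      // positive = up, negative = down

        public Sample(double lr, double fb, double ud) {
            leftRight = lr;
            forwardBack = fb;
            upDown = ud;
        }
    }

    // Hue assignment from the paper: blue for left/right, green for
    // forward/backward, red for up/down.
    public static Color colorFor(String axis) {
        switch (axis) {
            case "leftRight":   return Color.BLUE;
            case "forwardBack": return Color.GREEN;
            case "upDown":      return Color.RED;
            default: throw new IllegalArgumentException("unknown axis: " + axis);
        }
    }

    // A curve dipping below the x-axis on the up/down channel is the
    // signal savers watch for: it can indicate a fall.
    public static boolean possibleFall(Sample s) {
        return s.upDown < 0;
    }
}
```

Because each axis keeps a fixed hue, a saver can read a spike's direction from its color alone, without consulting a legend for every glance.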
For the location tracing feature, we used the GPS system in the Nexus One smartphone. With this, we can get the hiker's current coordinates (i.e., latitude and longitude). The system can also display a hiker's position and trail on the map, so that savers can easily tell where the hiker went and how far the hiker has already walked. To create the weather component, we used Java to parse XML files from the Yahoo Weather RSS and Wunderground data feeds, and created the weather table from the data in HTML format. We chose these specific feeds because both are prominently used on sites such as Google, the Weather Channel, and Yahoo. To get the data from both APIs, we queried the

Figure 3: Final Visualization Interface. Larger screen shown at end.

data using the zip code associated with the phone's current location. From the Yahoo Weather RSS feed, we could get visibility, wind direction, humidity, sunset and sunrise times, the temperature in degrees Fahrenheit, and an image of the current weather condition. For certain pieces of important information, such as visibility and sunset time, we decided to change the text color from white to red to draw the user's attention quickly. For example, if the current time is close to the sunset time, the color of the sunset data changes to red. In addition, we obtained a six-day weather forecast from the Wunderground data feed. To emphasize the forecast for the near future, we differentiate the days using different transparency levels. Also, the images associated with different weather conditions change from a day image to a night image, depending on the current time.
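The Java-side XML parsing can be sketched with the standard DOM API. The element and attribute names below are illustrative only; the real Yahoo and Wunderground schemas differ in detail, and this is not the project's actual parsing code.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class WeatherParser {
    // Extracts the current temperature and sunset time from a small
    // weather XML document, returning them as {temp, sunset}.
    public static String[] parseCondition(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            Element condition = (Element) doc.getElementsByTagName("condition").item(0);
            Element astronomy = (Element) doc.getElementsByTagName("astronomy").item(0);
            return new String[] { condition.getAttribute("temp"), astronomy.getAttribute("sunset") };
        } catch (Exception e) {
            // Wrap checked parser exceptions so callers see one failure type.
            throw new RuntimeException("could not parse weather feed", e);
        }
    }
}
```

Parsing into plain strings like this keeps the hand-off to the HTML weather table straightforward.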

Use Case Scenario

When the hiker sets out on the trail to Half Dome, she carries her mobile phone with the built-in sensors set to 'on.' The saver collects the sensor data from the hiker's phone and uses the data visualization to help him understand the hiker's contextual information. This contextual data provides the clues necessary for the saver to locate and assist the hiker. For example, the saver may find, based on readings from the accelerometer, that the hiker is engaging in dramatic or risky movements, or that she is approaching a section of particularly dangerous terrain. Using this contextual data, he may report to the hiker herself and suggest where stable and steady terrain is located, based on information collected with the GPS sensor. The GPS data will be shown using a histogram or an accumulation of lines. The saver may be monitoring several hikers at the same time; accordingly, we try to make the visualization data accessible pre-attentively. The Half Dome Cables, considered the most infamous part of the hike, allow hikers to climb the last 400 feet to the summit without rock climbing equipment. In our scenario, the hiker is approaching the cables that ascend to the summit, and because it is a summer weekend, the path and surrounding areas are fairly crowded. Because she has registered her Android phone with the local park rangers, she knows her progress and status are being monitored by trained rescuers. After hiking for 10 hours (the hike to Half Dome takes 10 to 12 hours), she is fatigued, and as she waits restlessly for the line to progress, she becomes disoriented from exhaustion and dehydration. Because she cannot think clearly about the precariousness of her surroundings, she begins to wander away from the congested waiting line, straying into areas that are off-limits due to dangerous cliffs and risky rock slopes.
The GPS sensor conveys to the rescuers' station that the park guest has detoured out of the main area, and the Hiker's Movement data suggests that her movements are becoming increasingly irregular and erratic. As soon as they observe this data in the visualization, the savers take action to rescue the park guest. Her data indicate that she is out of sorts, and because she is within range of the Half Dome cables, she is at heightened risk. She is disoriented, her judgment may falter, and should she carelessly decide to ascend the rock face using the cables, her risk of injury or death is extremely high. The rescuers first contact the hiker, but because she does not answer, the park rangers who monitor this part of the park are called. As soon as a nearby saver is alerted,

Figure 4: Weather component showing differences in transparency, night and day images, and high and low temperatures.

We envision the potential application of our system in settings such as Yosemite, where trails can be long, arduous, and risky. In particular, we chose Yosemite's trail up through the mountains to Half Dome's summit because of its physically and emotionally challenging nature. Our research on Yosemite in general, and Half Dome in particular, indicates that rescuers assist hundreds of people on the Half Dome trail every summer; it is even necessary to obtain a permit before attempting the hike. The National Park Service (NPS) warns that visitors on the trail will experience extended exposure to potentially uncomfortable conditions. More alarmingly, the NPS warns potential hikers about an increased likelihood of irresponsible behavior due to frustration with conditions. Hikers are advised to start off around sunrise, checking both sunrise and sunset times so they can commit to a time by which they will turn around, eliminating the chance of ending up on a difficult and dangerous path after dark. Accordingly, we designed our system to display sunrise and sunset times as valuable information for savers to take into account. According to the NPS, hikers regularly struggle along the trail after dark because they forget flashlights.
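The sunset-proximity highlighting described earlier (the sunset field turning red as sunset approaches) can be sketched as a small time comparison. The one-hour warning window in the test is our assumption; the paper only says the color changes when the current time is "close to" sunset.

```java
import java.time.Duration;
import java.time.LocalTime;

public class SunsetAlert {
    // Chooses the display color for the sunset field: red once the
    // current time is past sunset, or within the warning window before
    // it; white otherwise.
    public static String sunsetColor(LocalTime now, LocalTime sunset, Duration warningWindow) {
        if (now.isAfter(sunset)) {
            return "red";
        }
        return Duration.between(now, sunset).compareTo(warningWindow) <= 0 ? "red" : "white";
    }
}
```

Keeping the rule in one place means the same logic could later drive other sunset-dependent cues, such as switching the weather icons from day to night images.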

our hiker is approached and calmly coaxed to move to a safe, protected, and guarded area. Because the savers had motion, location, and directional information on a specific individual, they were able to protect her by alerting the closest park officials. Without the system, given the dense crowds and highly uncomfortable conditions, her location and movement patterns – which indicated potential lapses in judgment or awareness – would have gone unnoticed by rangers in the adjacent area, and her level of danger would likely have been dramatically higher.

Evaluation

To test the system, we collected data in close proximity to one team member's apartment in Ann Arbor. We engaged in different types of motion to provide data for the sensors to collect, including running, jumping, walking, and changing the exposure of the phone to sunlight. We also tested the ambient light sensor indoors and outdoors; in the depiction of the system interface above, the ambient light data box displays the sequence of light intensity data represented by red and blue spots. The time sequence moves from left to right: first the phone was taken outside, depicted by the red spots; next, the phone was taken indoors, represented by the blue dots. The red and blue color variations indicate the intensity of light outdoors versus indoors, respectively. The weather indicator was tested over a sequence of days, and its accuracy was evaluated by comparing the weather status displayed with the Weather Channel report.

Design Principles

The design principle of pre-attentive processing is employed throughout our visualization. The elements of the visualization display their associated data in such a way that it is made available to the perceptual system with speed and ease. Specifically, the ambient light intensity element intuitively depicts warm temperatures as warm colors, such as red and orange.
Conversely, it depicts colder temperatures using cooler colors, specifically shades of blue. These colors are widely associated with their respective meanings: most viewers link blue with cold and red with warmth. Thus, the meaning of the colored data in the visualization is understood quickly enough to be classified as facilitating pre-attentive processing. In a system such as ours, as well as in related geovisualization systems, users benefit from the ability to accomplish certain standard tasks while navigating and browsing geospatial representations. One should be able to make changes

of scale, control map projection, modify level of generalization and field of view, and pan, move, and browse across map contents.5 Additionally, enabling processes such as drill-down methods, a popular mechanism in information visualization, makes details available to the savers instead of presenting all information in one view. The depiction of movement in our system most closely approximates the notion of filtering data sources to drill down to specifics: check boxes selectively control which motion components – the data lines representing movements in the X-Y-Z coordinate plane – are visible in the graph.

Ideas Explored With Low-Fidelity Prototype

The alternative ideas that we explored at first, and that we visualized through our low-fidelity prototype, involved collecting contextually relevant sensor data from individuals with Android cell phones walking through the local environment. Whereas our system ultimately acquired an overarching purpose, our original ideas focused entirely on choosing what types of data to collect and how to represent each type. We originally conceived of the light intensity being displayed as a yellow sun, the direction or orientation being depicted with the associated letter (i.e., N, E, S, W), and the person's movement being displayed with a walking figure. We also considered depicting the movement of our target walker using a video recording of a real person, or potentially an animated figure. Our low-fidelity prototype interface is displayed below:

Figure 5: Low-Fidelity prototype

Ultimately, we developed more of a mission for our visualization: to address the concerns associated with hiking on dangerous terrain. Additionally, the domain of collecting sensor data in urban and suburban areas has been researched at length, whereas location data collection in non-urban, strenuous, and challenging environments has not been

as widely covered.6 We also found little research into the use of information visualization systems for enhancing safety and preventing injury, particularly in high-traffic, mountainous places like national parks. We identified research on systems applied to improving biking, flight-simulation and aviation safety, and safety-related in-car information systems. We located informational research pertaining to hiking, and a few sources that incorporated safety information for hiking. To the best of our knowledge, our system is the only visualization designed to enhance safety for hikers by providing sensor data on contextual surroundings to individuals qualified as savers, who then use the data in the visualization to communicate with other savers, essentially networking to protect hikers at risk.

Alternatives Implemented and Tested

Originally, we implemented our motion monitor console in Processing, a fast prototyping tool that enabled us to quickly explore design decisions. However, we found it hard to integrate with Google Maps, so we switched to the JSP framework, which takes advantage of existing JavaScript libraries and is compatible with Google Maps. Figure 6 is our testing unit for the motion monitor console.
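The check-box filtering used in the motion console can be modeled as a simple visibility set. The class below is our own sketch of that interaction model, not the project's actual JSP code.

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

public class MotionFilter {
    public enum Axis { LEFT_RIGHT, FORWARD_BACK, UP_DOWN }

    // Models the check boxes below the motion graph: only the axis lines
    // the saver leaves checked remain visible, in a fixed drawing order.
    public static List<Axis> visibleLines(EnumSet<Axis> checked) {
        List<Axis> visible = new ArrayList<>();
        for (Axis axis : Axis.values()) {
            if (checked.contains(axis)) {
                visible.add(axis);
            }
        }
        return visible;
    }
}
```

Treating the filter as a set keeps the drill-down interaction trivial to reason about: toggling a check box only adds or removes one element.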

By glancing at the motion console (see Figure 3), the saver can obtain an overall understanding of the hiker's movement. If he wants to drill down to analyze the hiker's activity, he can use the check boxes below the data lines to filter out unwanted data, which enables him to focus on specific data. Because of the Perl language's powerful extensibility for retrieving data, we initially parsed the Yahoo XML data using Perl. However, due to the complexity of transforming data between Perl and JavaScript, we rewrote the Perl parsing code in Java. We also originally tried using only the Yahoo RSS feed, but because that API lacked forecast information, we used the Wunderground API to query six-day weather forecasts. In addition, since the time value of the Yahoo API did not return the current time at the location, we had to obtain the current time in the HTML using JavaScript. We also tried showing weather data by the hour, but could not find a way to get the weather condition hourly; thus, we changed our visualization to show the current weather condition along with the six-day forecast. Finally, because the first API we tried did not provide all of the information we needed – including visibility, humidity, and sunset/sunrise data – we were only able to show the image associated with each weather condition and the daily low and high Fahrenheit temperatures. For all of these reasons, we switched from the Yahoo to the Wunderground API.

Directions for Future Development

In the future, we envision testing our system with users to better ascertain the intuitiveness and ease of use of the interface. As noted in research by MacEachren et al., current geospatial information technologies are typically hard to use and ill-suited to group work.
The barriers to use cited in this research include overly complex interfaces and limited support for analytical reasoning, decision-making, and coordinated teamwork.7 We have deliberately developed an interface that is clear and not exceedingly complex. Setting our system in front of potential users, or even non-members of the target audience, and getting their feedback on its complexity, readability, clarity, and simplicity, along with their general impressions, would give us input to guide future decisions. We believe that our interface is intuitive and easy to understand, because the elements that make up the visualization are widely familiar. The weather data as displayed is consistent with the appearance of weather data commonly used to represent satellite weather information. Likewise, the map itself, which takes up the largest amount of screen real estate, looks similar to

Figure 6: 3D plane with x-y-z fields on separate lines.

The x-, y-, and z-dimensional data were originally drawn using three separate data lines showing the different motions in 3-dimensional space: left-right, up-down, and back-and-forth, respectively. During testing, we found that drawing the three data lines separately did not make much sense. First, it takes up too much space on the screen. Second, it increases the mental effort required of the saver when working to understand the graph. What the saver wants to know is the hiker's overall movement – for instance, whether the hiker is making drastic or dramatic movements – not movement in a specific direction. So we combined the three data lines and used transparent colors to avoid any one line blocking another.
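The transparency trick for the combined plot can be sketched with `java.awt.Color`'s alpha channel. The 50% opacity value (alpha 128) is our assumption; the paper does not state the exact transparency used.

```java
import java.awt.Color;

public class OverlayColors {
    // Keeps each motion line's hue from the paper's encoding, but adds an
    // alpha channel so overlapping curves remain visible through each other.
    public static Color translucent(Color base, int alpha) {
        return new Color(base.getRed(), base.getGreen(), base.getBlue(), alpha);
    }

    public static Color leftRightLine()   { return translucent(Color.BLUE, 128); }
    public static Color forwardBackLine() { return translucent(Color.GREEN, 128); }
    public static Color upDownLine()      { return translucent(Color.RED, 128); }
}
```

With translucent strokes, the drawing order of the three curves no longer determines which one survives an overlap.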

other web-based maps that the public and private sectors are accustomed to viewing. In fact, it looks just like maps from sites such as Google Maps, and is built using the Google Maps API. Similarly, the segment of the interface displaying the 'Hiker's Motion' information is easy to read. The X-Y-Z coordinates are clearly displayed and color-coded, so that spikes in the graph are clearly identified as X, Y, or Z spatial coordinate information. The light intensity data likewise uses color coding generally associated with warm and cool temperatures: warm colors, such as red and orange, indicate relatively warmer temperatures, while cool colors, such as blue and aqua, represent relatively cooler temperatures. As user-experience research has repeatedly shown, it is wise to present an interface to impartial and neutral eyes – to others who have not been involved in designing it, and to those who represent members of the target audience. This process would follow any additional enhancements we make to the program before testing it with potential users. In addition to user testing, another possible area for future development is the construction of a user interface for the mobile client. In the School of Information visualization class, we developed a visualization interface for the target audience, whom we identified as the savers: individuals who monitor trails and coordinate to protect hikers. Both the low-fidelity and high-fidelity prototype interfaces presented in this work display the interface created for the savers; future work would be to create a visualization interface for the hikers themselves. Our weather visualization component could also be enhanced. Wind direction is currently given in numeric format: degrees measured clockwise from true north, around the compass to 360 degrees, where 360 (or 0) represents true north.
This format might make it hard for the target user to ascertain the current direction. Thus, for future work, we would like to add the compass shown in Figure 7, which is already built in Java using Processing. However, we have not yet determined how to integrate the compass, written in Java, with our application, which is in HTML format. Additionally, we first intended to show the weather condition by the hour, so that the target user could foresee conditions for the hiker more accurately using narrower intervals (by hour rather than by day). However, because the API lacked support for this feature, we could not query the weather condition by hour, only by zip code. Lastly, we currently use zip codes

to get current weather conditions. However, what we receive from the smartphone's GPS feature are coordinates giving the user's current location. Therefore, we need to transform the coordinates from the smartphone into a zip code, so that the weather conditions can update according to the user's current location.
Figure 7: Conceptual Compass
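The degrees-to-compass-point conversion that such a compass would present can be sketched as follows. This is our own illustration of the mapping, not the Processing compass code itself.

```java
public class WindCompass {
    private static final String[] POINTS =
        { "N", "NE", "E", "SE", "S", "SW", "W", "NW" };

    // Converts a wind bearing in degrees clockwise from true north into
    // one of eight compass points, which is easier for a saver to read at
    // a glance than the raw number.
    public static String toCardinal(double degrees) {
        // Normalize into [0, 360), then snap to the nearest 45-degree point.
        double normalized = ((degrees % 360) + 360) % 360;
        int idx = (int) Math.round(normalized / 45.0) % 8;
        return POINTS[idx];
    }
}
```

Eight points keep the label coarse enough to read pre-attentively; a sixteen-point table would follow the same pattern with a 22.5-degree step.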

After further development of the features discussed in previous sections, and following interface and feature reviews conducted with potential users, we could continue development of our visualization by using it on location in a place with real hiking dangers. Testing the system with users while they hike an arduous trail is exactly the setting the system was designed for.
References

1. Almer, A., Schnabel, T., Schardt, M., Stelzl, H. (2004). Real-Time Visualization of Geo-Information Focusing on Tourism Applications. Joanneum Research, Institute of Digital Image Processing, Graz, Austria.
2. Bleisch, S., Dykes, J. (2009). Using Web-Based 3-D Visualization for Planning Hikes Virtually – An Evaluation. In Representing, Modeling and Visualizing the Natural Environment.
3. Nivala, A., Sarjakoski, T., Laakso, K., Itaranta, J., Kettunen, P. (2009). User Requirements for Location-Based Services to Support Hiking Activities. In Location Based Services and TeleCartography II: From Sensor Fusion to Context Models, Springer-Verlag, pp. 167-184.
4. Slocum, T., Blok, C., Jiang, B., Koussoulakou, A., Montello, D., Fuhrmann, S., Hedley, N. (2001). Cognitive and Usability Issues in Geovisualization. Cartography and Geographic Information Science, v. 28, pp. 61-75.
5. Cartwright, W., Crampton, J., Gartner, G., Miller, S., Mitchell, K., Siekierska, E., Wood, J. (2001). Geospatial Information Visualization User Interface Issues. Cartography and Geographic Information Science.
6. Vaittinen, T., Laakso, K., Itaranta, J. (2008). Kuukkeli: Design and Evaluation of Location-Based Service with Touch UI for Hikers. Proceedings of NordiCHI 2008.
7. MacEachren, A., Cai, G., McNeese, M., Sharma, R., Fuhrmann, S. (2006). GeoCollaborative Crisis Management: Designing Technologies to Meet Real-World Needs. Proceedings of the 2006 International Conference on Digital Government Research, San Diego, CA.







Final Design of Trail Visualization Project: Light intensity in the upper right, movement/motion data at lower right, weather data in the upper left, and map and actual trail shown in the lower left.

Trail Visualization High-Fidelity Prototype: For this interface we used a configured map from Google Earth. The map displays the trail in red. The perspective of this image is from above, and the large rock in the center is Half Dome. On the right are speed, ambient light, and weather data.
