
Exploring the role of artificial intelligence and inclusive technologies during navigation-based tasks for individuals who are blind or who have low vision: Future directions and priorities
Natalina Martiniello
School of Optometry, University of Montreal, Montreal, QC, CA
Maxime Bleau
School of Optometry, University of Montreal, Montreal, QC, CA
Nathalie Gingras-Royer
School of Optometry, University of Montreal, Montreal, QC, CA
Catherine Tardif-Bernier
School of Optometry, University of Montreal, Montreal, QC, CA
Joseph Paul Nemargut (joe.nemargut@umontreal.ca)
School of Optometry, University of Montreal, Montreal, QC, CA

Research Article

Keywords: Visual impairment, blind, low vision, orientation and mobility, accessibility, inclusive
technologies, artificial intelligence, quality of life, universal design

Posted Date: December 7th, 2023

DOI: https://doi.org/10.21203/rs.3.rs-3715501/v1

License: This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract
Background
Mainstream smartphone applications are increasingly replacing the use of traditional visual aids (such
as hand-held telescopes) to facilitate independent travel for individuals who are blind or who have low
vision.

Objective
The goal of this study was to explore the navigation-based apps used by individuals who are blind or have low vision, the factors influencing these decisions, and perceptions about gaps to address future needs.

Methods
An international online survey was conducted with 139 participants who self-identified as blind or as having low vision (78 women, 52 men) between the ages of 18 and 76.

Results
Findings indicate that the decision to use an app based on artificial intelligence versus live video
assistance is related to whether the task is dynamic or static in nature. Younger participants and those
who are congenitally blind are significantly more likely to employ apps during independent travel.
Although a majority of participants rely on apps only during unfamiliar routes (60.91%), apps are shown
to supplement rather than replace traditional tools such as the white cane and dog guide. Participants
underscore the need for future apps to better assist with indoor navigation and to provide more precise
information about points of interest.

Conclusions
These results provide vital insights for rehabilitation professionals who support the growing population
of clients with acquired and age-related vision loss, by clarifying the factors to consider when selecting
apps for navigation-based needs. As additional technology-based solutions are developed, it is essential that blind and low vision individuals, as well as rehabilitation professionals, are meaningfully included in the design process.

Introduction

According to the World Health Organization, at least 2.2 billion people have a visual impairment
(blindness or low vision) worldwide [1]. This number is projected to at least double by 2050 across Global
North countries, as individuals continue to age [2]. This poses an important challenge for the future, as
visual impairment may significantly impact quality of life, even when compared to some other chronic
conditions [3–7]. Indeed, individuals with visual impairments, regardless of their age group, may be more
likely to experience anxiety [8] and/or depressive symptoms and may experience lower health related
quality of life due to limited physical activity and sedentary behaviors [9–11]. Furthermore, visual
impairment may affect safe and independent travel and is associated with an increased risk of falls.
Consequently, expanding access to rehabilitation services and assistive technology is vital for enhancing
overall quality of life. Forward-facing solutions must also take into account broader aspects of universal
accessibility and inclusion to encourage and facilitate independent travel. This is especially important
since the ability to safely perform daily activities and travel independently is correlated with higher levels of education, employment, and social inclusion [12] and can increase the likelihood of aging in place at home.

Two interrelated navigation-based competencies that individuals who are blind or have low vision employ
are orientation (the ability to determine one’s position in space) and mobility (the ability to safely navigate
within one’s environment). Mobility tools that facilitate independent travel include optical aids (i.e.,
magnifier, telescope), a long white cane for assistance with obstacle detection and the use of a dog guide
trained for obstacle avoidance while following route directions provided by the handler [13]. Blind and low
vision travelers also draw on a scope of compensatory strategies including the use of auditory and tactile
cues and echolocation within indoor and outdoor environments [14]. While these existing tools and
techniques significantly facilitate independent travel, a number of external barriers persist. Travelers with disabilities must negotiate increasingly complex and dynamic spaces, including congested pedestrian traffic, complex multi-lane intersections, silent electric vehicles that cannot be discerned by sound, and challenging environments such as unpredictable construction zones and climate conditions.
Members of the blind and low vision community underscore the need to consider innovative approaches
to address these persisting barriers, including environmental accessibility and the use of inclusive
technologies [15].

Mainstream smartphones and tablets (such as Apple iPhone and Google Android products) incorporate
built-in accessibility features (including screen magnification, electronic braille connectivity and text-to-
speech software) that can be activated by all users [16]. Based on the principles of universal design,
these accessibility features enable users to access applications that support daily living activities,
including those to read email, manage calendars and provide GPS support during travel [17]. Few prior
studies have explored the extent to which smartphone applications are addressing different facets of
navigation-based needs within the blind and low vision population. However, a recent study found that
mainstream devices are increasingly replacing the use of traditional visual aids (such as specialized
stand-alone GPS devices for blind users) for navigation-based tasks [17]. Previous investigations have
centered heavily on the design and evaluation of sight substitution prototypes to address perceived
challenges, including assisting with localization, multiple object tracking and obstacle avoidance, and the
ability to follow a moving target, such as a line in a coffee shop [18–20]. Other investigations have
explored which navigation-based applications are generally used by individuals with visual impairments,
but have not examined the extent to which current solutions address specific navigation-based needs [21,
22]. This is particularly relevant given that barriers that impede orientation and mobility may arise at any
point on the travel continuum (see Table 1), yet knowledge on the role of inclusive technologies at each of
these points remains limited.

Current navigation-based applications can be classified into three broad categories, based on the
audience that is targeted (mainstream apps designed for the general public or specialized options for
blind and low vision users), the task that is addressed, and the underlying mechanism used to drive the
application. Applications tend to focus on specific facets of independent travel. For instance, there are applications that assist with route planning, provide information about street names and other points of interest, vocalize stop announcements on public transportation, or enable users to geolocate services (such as restaurants) within a given geographic range [13]. Several of these applications have
been designed to meet unique travel needs for blind and low vision users. For example, blind and low
vision smartphone users can take a picture of signage or other environmental text, and use artificial
intelligence (AI) to perform optical character recognition (whereby the text is interpreted and read aloud)
[23]. These applications also differ based on the ways in which environmental information is
gathered and transmitted. While some applications draw on AI to identify objects, locate points of interest
or perform optical character recognition, others enable users to call upon remote video assistance,
provided by sighted volunteers, family and friends or trained visual interpreters [15, 24].

Objectives

To date, research on factors influencing the decision to use specific navigation-based apps remains scant, including whether AI or live remote sighted assistance is preferable depending on the nature of the task. Therefore, we conducted an international survey to collect information from the blind
and low vision community about their travel habits and smartphone application usage in navigation-
based tasks. The threefold objectives of this study were to explore the applications used for different
navigation-based tasks among blind and low vision users, identify the factors that correlate with
application usage, and explore the extent to which current solutions are meeting navigation-based needs.
These findings will better inform technology-based recommendations made by rehabilitation practitioners
who support clients with vision loss and will direct developers engaged in inclusive technology design.

Methods
Data were collected through an accessible anonymous online survey hosted on the Université de Montréal's LimeSurvey platform between June 2022 and February 2023. Ethics approval consistent with the Declaration of Helsinki [25] was obtained through the Université de Montréal (CERC 2022-1465). Prior to commencing the survey, participants read the Information and Consent form on the initial page of the platform. Informed consent was indicated by the decision to proceed to the online survey.

Eligibility and recruitment


Participation was open internationally to individuals aged 18 years or older, who had been using a
smartphone for at least three months, who could communicate in English or French and who self-
identified as blind or as having low vision. To provide additional context about level of vision, data about
reading and writing methods used (e.g. large print, text-to-speech software and braille) and visual
diagnosis were also collected. The invitation to participate was posted to over 150 social media groups
geared towards members of the blind and low vision community. Snowball sampling (whereby
participants were invited to share the invitation with others) provided additional reach beyond these
means [26].

Sample size

Given the exploratory nature of this study, no a priori sample size was computed. A sample size of 100+ participants was expected, given previous success with similar studies aimed at blind and low vision technology users [17].

Materials and procedure

The survey included between 45 and 50 questions (depending on participant responses). Data were
collected through primarily closed-ended (multiple choice) questions, with the option of providing
additional open-ended comments (see supplementary file 1 for the full instrument). The survey comprised three distinct sections. Section one (between 22 and 27 questions) gathered data about demographic characteristics. Section two included 14 questions about navigation-based tasks and the type of smartphone applications employed to perform these tasks, if any. Table 1 includes the categories of navigation-based tasks explored in this section and provides a brief definition for each.
The different navigation-based tasks investigated reflect the typical trajectory for completing a route
independently, from route planning to reaching a point of interest [13]. Participants were asked to indicate
whether they use an application for each task, and to identify the application either by selecting from the
provided list or by supplying an additional response of their own. If a participant did not use any
applications for a given task, they were asked to specify the reason either by selecting among the
provided choices or by inserting their own open-ended response. The final section, section three, of the
survey (consisting of 9 questions) asked participants to comment further on the factors which influence
their decision to use a smartphone application for different navigation-based tasks, and the advantages
and disadvantages of currently available solutions.

Data analysis
Results from the survey were exported into Excel, where incomplete submissions were removed, resulting in a total of 139 respondents, of whom 125 completed section two and 110 completed section three. Following data cleaning, all applications used by participants were classified into four distinct categories (Human assistance, Computer vision/AI, Camera magnification/Image rendering, and GPS), such that participants could be classified as either using or not using apps within each category for each navigation-based task (see Tables 1 and 2).
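To make this coding step concrete, the category assignment can be represented as a simple lookup. In the sketch below, the four categories come from the study, but the specific app-to-category pairs are illustrative assumptions rather than the authors' actual coding list:

```python
# Sketch of the app-categorization step. The four categories are those
# used in the study; the app names below are hypothetical examples of
# how individual responses might be coded, not the authors' full list.
APP_CATEGORY = {
    "Aira": "Human assistance",
    "Be My Eyes": "Human assistance",
    "Seeing AI": "Computer vision/AI",
    "Magnifying Glass": "Camera magnification/Image rendering",
    "Google Maps": "GPS",
    "BlindSquare": "GPS",
}

def uses_category(apps_reported, category):
    """True if any app a participant reported for a task falls in `category`."""
    return any(APP_CATEGORY.get(app) == category for app in apps_reported)

# A participant reporting these two apps counts as a GPS-app user
# (and a computer vision/AI-app user) for the given navigation task:
uses_category(["Seeing AI", "Google Maps"], "GPS")  # -> True
```

This yields, per participant and per navigation-based task, the binary use/non-use variables analysed below.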
Table 1
Categorization of data and descriptions

Navigation-based tasks across the independent travel trajectory

- Establishing a route: Determining route instructions prior to leaving; preparing a travel plan/itinerary
- Obstacle detection: The ability to detect and avoid obstacles in one's line of travel, such as other pedestrians, poles, and signposts
- Visual identification: The ability to identify aspects of the visual environment, such as an environmental object (e.g., product at a store) or text (e.g., public signage and street names)
- Street crossings: The ability to determine when it is time to safely cross the street, read traffic signals, and cross while maintaining alignment
- Using public transport: Determining schedules (e.g., bus times), locating bus stops, and tracking bus stop announcements (e.g., to determine one's location while on route)
- Identifying points of interest in close proximity: The ability to confirm a nearby point of interest/destination (e.g., restaurant, pharmacy)
- Geolocation: Identifying one's position while on route (e.g., "Where am I?"), such as a specific street, intersection, or address (GPS)

Categories of apps

- Human assistance: Interacting with a sighted person virtually through video and/or audio (e.g., AIRA)
- Computer vision/AI: Automated visual interpretation, using artificial intelligence (AI), of an image or text from the smartphone camera (e.g., Seeing AI)
- Camera magnification/image rendering: Altering of an image to increase visibility/readability for the user (e.g., Magnifying Glass)
- GPS: Use of the smartphone's geolocation and possible integration of other geolocation applications (e.g., Google Maps)

App audience

- Specialized apps: Apps purposefully designed to be used by individuals with visual impairments (e.g., BeMyEyes)
- Mainstream apps: Apps meant to be used by the general public, with or without considerations of disability (e.g., Google Maps)

Categories of unmet needs

- Indoor navigation: Difficulty navigating large indoor spaces (e.g., finding a store in a mall)
- Accessibility: Some or all of the functions of the app/website were inaccessible
- POI/geolocation: Difficulty finding specific locations when nearby (e.g., entrance to a store, gate at an airport)
- Street crossing: Information about street crossings unavailable (e.g., crossing time, crossing signal)
- Route planning: Lack of information to accurately plan a route (e.g., map of bus terminal not available; information not available while traveling in a car)
- App precision: Errors or unreliability of information (e.g., GPS in mountainous areas; bus stops not properly labeled/updated)
- Versatility: Preference for multiple functions in one app
- Object ID: Identifying objects and traffic sounds
- Obstacle detection: Finding obstacles that the white cane does not

The data were then analysed using three main statistical tests. First, Cochran's Q tests were performed to assess whether there were differences in matched sets of proportions (binary variables: yes or no). When a Cochran's Q test revealed that the proportions of yes and no responses varied within a set, pairwise post-hoc Dunn's multiple comparison tests (with Bonferroni corrections) were performed to locate which proportions significantly differed. Second, chi-square tests of independence were performed to examine the relation between different pairs of nominal or binary variables. Third, Spearman rho tests were used to investigate the relation between variables when at least one of them was continuous or ordinal.
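For readers unfamiliar with the first of these tests, Cochran's Q compares matched sets of binary responses (here, whether each participant uses an app for each task). The following is a minimal pure-Python sketch of the statistic, for illustration only; the study's analysis was presumably run in a standard statistics package:

```python
def cochrans_q(rows):
    """Cochran's Q statistic for matched binary responses.

    rows: one 0/1 list per participant; columns are the matched
    conditions (e.g., the navigation-based tasks). Under H0, Q follows
    a chi-square distribution with k - 1 degrees of freedom.
    """
    k = len(rows[0])
    # All-yes and all-no participants carry no information about
    # differences between conditions, so they are dropped.
    rows = [r for r in rows if 0 < sum(r) < k]
    col_totals = [sum(r[j] for r in rows) for j in range(k)]
    row_totals = [sum(r) for r in rows]
    grand = sum(row_totals)
    num = (k - 1) * (k * sum(c * c for c in col_totals) - grand * grand)
    den = k * grand - sum(t * t for t in row_totals)
    return num / den

# Toy example: 4 participants x 3 tasks (1 = uses an app for that task).
data = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
]
cochrans_q(data)  # -> 6.0, compared against chi-square with k - 1 = 2 df
```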

Table 2
Dependent and independent variables

Independent variables

- Age (continuous): ≥ 18
- Age group* (binary): < 50 or > 50
- Gender (nominal): M, F, non-binary, or other
- Visual impairment (nominal): Blind, Low vision, or other
- Age at onset of visual impairment (continuous): ≥ 0
- Age at onset group* (ordinal): 1 = During early childhood: [0,3[; 2 = During childhood: [3,12[; 3 = During adolescence: [12,20[; 4 = During adulthood: [20,50[; 5 = During late adulthood: 50+
- Visual impairment duration* (continuous): = Age − Age at onset of visual impairment
- Type of mobility aid (nominal): Long cane, dog guide, or no aids/non-traditional aids
- Type of phone (nominal): iPhone or Android
- Time using phone (ordinal): 1 = > 3 h/day; 2 = 1–3 h/day; 3 = < 1 h/day
- Confidence with technology (ordinal): 1 = Beginner; 2 = Intermediary; 3 = Advanced
- Frequency of travel (ordinal): 1 = Daily; 2 = Several times a week; 3 = Once a week; 4 = A few times a month
- Confidence during independent travel (ordinal): 1 = Beginner; 2 = Intermediary; 3 = Advanced

Dependent variables

- Use of app (binary, set): Yes or No (for each navigation task)
- Use of app* (binary, set): Yes or No (for each app category, one set per navigation task)
- Reported reasons for not using apps for a navigation task (binary, set): Yes or No (for each reason, one set per navigation task)
- Are travels possible without the use of apps (binary): Yes or No
- Reason for choosing an app (binary, set): Yes or No (for each reason)
- How apps are learned (binary, set): Yes or No (for each way of learning)
- Conditions in which apps do not work properly (binary, set): Yes or No (for each condition)
- Are there needs that are not met by apps (binary): Yes or No
- Unmet needs* (binary, set): Yes or No (for each category of need)

*Variable derived from the data. O&M, orientation and mobility.

Results
Demographics

A total of 139 participants completed the survey (78 women, 52 men, mean age = 48.9 ± 15.5 years) from a variety of countries, mainly Canada (n = 85), the USA (n = 18), and countries within Europe (n = 27). Table 3 contains additional demographic information.

Table 3
Participant demographics (response: n respondents)

Gender
- Female: 78
- Male: 54
- Did not answer: 6

Country
- Canada: 85
- USA: 18
- Europe: 27
- Others: 5
- Did not answer: 3

Age
- 18–19: 3
- 20–29: 13
- 30–39: 29
- 40–49: 19
- 50–59: 26
- 60–69: 34
- 70–76: 10

VI category
- Blind: 86
- Low vision: 43
- Other: 6
- Did not answer: 3

Age at VI onset
- 0–2: 60
- 3–11: 17
- 12–20: 12
- 21–50: 23
- 50–68: 8

Auditory impairment
- Yes: 24
- No: 114

Type of phone
- iPhone: 120
- Android: 23

Time of phone use
- > 3 h/day: 78
- 1–3 h/day: 44
- < 1 h/day: 12
- Did not answer: 3

Competence with technology
- Beginner: 10
- Intermediary: 47
- Advanced: 77
- Did not answer: 4

Aids used
- None or non-traditional aids: 20
- White cane: 92
- Guide dog: 26

Area of travel
- Urban: 105
- Suburban: 66
- Rural: 23

Frequency of travel
- Every day: 58
- Several times a week: 54
- Once a week: 12
- A few times a month: 6
- Other: 4
- Did not answer: 3

Confidence in independent navigation
- Beginner: 20
- Intermediary: 51
- Advanced: 63
- Did not answer: 4

Q1: What types of apps are used for navigation-based tasks?

In total, 125 participants completed the section about navigation-based tasks. Findings indicate that
most participants (n = 120, 96%) use applications during independent travel, with only 4% (n = 5) of
participants indicating that they do not use apps for navigation-based tasks at all. However, the
proportions of participants using apps significantly differed between the navigation-based tasks (Q(6, n =
125) = 253.003, p < .001). More precisely, participants are less likely to use apps for street crossings and
obstacle detection compared to other navigation-based tasks (p < .001). More detailed information can be
found in Fig. 2 and supplementary file 2.

Apps to access visual information

Overall, the navigation-based task for which apps are most commonly adopted (76% of respondents)
relates to visual interpretation (e.g., locating/identifying objects or reading environmental text). For such
visual interpretation, respondents significantly preferred using specialized apps (n = 87, 91.6%) compared
to mainstream apps (n = 50, 52.63%), and predominantly used computer vision/AI and human assistance
apps for this purpose. For those who indicated not using apps for visual interpretation, the most common
reason reported was the lack of awareness of apps to assist with this task (n = 13, 48.15%).

Apps for planning and following routes

For navigation-based tasks related to general orientation and routes (e.g., planning routes, finding POIs,
using public transport, and geolocation), GPS apps were predominantly used. Mainstream apps were
significantly preferred to specialized apps for planning routes (92.5% vs 43.62%) and taking public
transportation (77.03% vs 52.70%), but there was no apparent preference between mainstream and
specialized apps for geolocation (74.07% vs 72.84%) and finding POIs (68.60% vs 82.56%). For
participants who indicated not using apps for these tasks, the main reasons identified were that
respondents did not feel the need to use apps for this purpose or did not know apps that could help them.

Apps for dynamic tasks related to perceived risk

Participants were significantly less likely to use apps for dynamic navigation-based tasks (such as street crossings and obstacle detection while in movement) because most respondents: 1) already use other aids for these tasks (street crossings, n = 49; obstacle detection, n = 68); 2) do not know of apps that could help them (street crossings, n = 49; obstacle detection, n = 47); and 3) do not feel the need to use apps for this purpose (street crossings, n = 35; obstacle detection, n = 41). Other reported reasons included a lack of trust in applications or technology in general (e.g., incomplete maps, battery life, processing delays, or smartphone processing capacities; n = 9). However, among those who do use apps for dynamic tasks (street crossings, n = 23; obstacle detection, n = 15), human assistance apps are more commonly used than those based on AI, with a preference for specialized apps over mainstream apps (street crossings, 86.95% vs 47.82%; obstacle detection, 72.22% vs 53.33%), a pattern that was significant only for street crossings.

Q2: What factors are correlated with app usage?

Among the 110 responses, 36 participants (32.72%) indicated that they do not rely on apps for travel, 67 (60.91%) rely on apps only in unfamiliar areas, and 7 (6.36%) rely on apps for all travel. This held true for both blind and low vision participants, with a significant correlation with the frequency at which participants travel: participants who travel less often are more likely to rely on apps during travel (rho = 0.228, p = .021). Furthermore, we investigated the correlation between several factors and app usage for the different navigation-based tasks, with the following significant correlations observed:

Age. Younger respondents were more likely to use apps for obstacle detection (rho = -0.179, p = .046) and visual identification (rho = -0.202, p = .025).

Frequency of travel. The more respondents travel, the more likely they are to use apps to prepare routes
(rho = -0.263, p = .004).

Level of vision. Blind respondents were more likely to use apps for visual interpretation (X2(1, n = 116) = 12.14, p < .001), finding points of interest (X2(1, n = 116) = 5.02, p = .025), and geolocation (X2(1, n = 116) = 6.001, p = .014). In general, blind and low vision respondents used apps of the different categories equally, except for visual interpretation, where low-vision respondents were more likely to use image rendering apps (13/23 vs 22/68, stats) and blind respondents to use computer vision/AI apps (66/68 vs 15/23, stats).

VI onset. Participants with earlier VI onset are more likely to use apps for street crossings (rho = -0.193, p = .044), finding POIs (rho = -0.349, p < .001), using public transport (rho = -0.396, p < .001), and geolocation (rho = -0.282, p = .003). Similarly, participants with a longer VI duration (found to be linked with earlier VI onset; rho = -0.605, p < .001) are also more likely to use apps for street crossings (rho = 0.323, p < .001), finding POIs (rho = 0.204, p = .037), and public transport (rho = 0.254, p = .009).
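The rho values reported here are Spearman rank correlations, i.e., a Pearson correlation computed on tie-averaged ranks. The following is an illustrative pure-Python sketch of that computation, not the analysis code used in the study:

```python
def _average_ranks(xs):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # close to 0.8
```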

Type of navigational aid. As the type of navigational aid used was closely related to level of vision (X2(2, n = 129) = 23.842, p < .001), the analysis revealed the same pattern of results. Participants using non-traditional aids or no aids (30% of low-vision participants, 3% of blind participants) were less likely to use apps for visual interpretation (X2(2, n = 119) = 12.14, p = .002), finding POIs (X2(2, n = 119) = 11.19, p = .004), and geolocation (X2(2, n = 119) = 6.02, p = .049). It was additionally found that individuals using non-traditional aids or no aids were less likely to use apps to take public transport (X2(2, n = 119) = 8.31, p = .016).

Overall, the most commonly reported condition preventing the use of an app was the presence of loud ambient sounds (n = 60, 54.55%), and the least reported was high luminosity (n = 9, 8.18%). Low-vision participants were more likely to be bothered by high luminosity (16.67% vs 1.45%, X2(1, n = 105) = 8.804, p = .003). As for the factors participants take into account when selecting apps, the results show that the importance of the different factors varied significantly (Q(3, n = 110) = 121, p < .001). The most important factors were 1) the ease of use/accessibility of the app (n = 90, 81.82%) and 2) that the app responds to a certain need (n = 79, 71.82%). Moreover, price (reported by 42 respondents, 38.18%) was perceived to be more important than the amount of data used by the app (see Fig. 3). Blind respondents were more likely to choose apps based on whether the app responds to a specific need (54/69 vs 21/36, X2(1, n = 105) = 4.603, p = .032) and also tended to choose their apps according to their ease of use/accessibility (60/69 vs 26/36, X2(1, n = 105) = 4.5, p = .063).

Q3: Are current apps addressing navigation-based needs?

In total, 55 respondents (50%) reported having navigation-based needs that are currently not met by applications available to them. Among these, blind respondents were more likely to have unmet needs than those with low vision (38/69 vs 12/36, X2(1, n = 105) = 4.482, p = .034). Respondents were then invited to elaborate on which needs were currently not met. The 49 responses received were classified into nine categories (see Tables 1 and 2). Some categories were reported more frequently than others (Q(8, n = 49) = 41.096, p < .001). The three most reported categories of unmet needs were "POI/geolocation" (n = 17, 34.70%), "route planning" (n = 15, 30.61%), and "indoor navigation" (n = 14, 28.57%), while the least reported were "obstacle detection" (n = 1, 2.41%), "object identification" (n = 2, 4.82%), and "versatility" (n = 1, 2.41%). More detailed information can be found in Fig. 3 and supplementary file 2. Additionally, respondents traveling in semi-urban/suburban areas were more likely to have unmet needs for POI/geolocation than those not traveling in such areas (11/19 vs 6/30, X2(1, n = 49) = 7.373, p = .007).

Discussion
The goals of this study were to explore 1) the navigation-based apps used by blind and low vision individuals, 2) the factors that correlate with the decision to use an application, and 3) perceptions of the extent to which applications are meeting current navigation-based needs. Our findings offer important
insights to strengthen the potential for future inclusive technologies to better support the health, safety,
and independent travel needs of blind and low vision adults.

Navigation-based apps used and influential factors

While prior studies have found that mainstream solutions such as smartphones are increasingly
replacing the use of traditional aids (such as hand-held telescopes) during independent travel [17, 27], the
findings of the current study underscore an additional distinction between mainstream and specialized
solutions. Participants are adopting mainstream applications (i.e., Google Maps) available to the general
public for planning and following routes, while selecting more specialized solutions (i.e., AIRA,
BlindSquare) when additional visual information about a route is required. In fact, a majority of
navigation-based tasks performed by blind and low vision users mirror those of the general population.
Like sighted individuals, persons with visual impairments may draw on technologies to plan a route to an unfamiliar destination or to obtain new route instructions. However, they may also draw on additional visual interpretation support (such as information about specific landmarks, intersections, and points of interest) that moves beyond the level of detail provided by mainstream applications.

This study also suggests that AI is more frequently employed when the user remains stationary (i.e.,
reading signage, confirming a point of interest). Conversely, participants are more likely to draw on live
sighted assistance during dynamic situations, such as locating a front door while walking or crossing a
street. These findings point to the inherent limitations of AI, which is not yet able to provide the same timely and precise information afforded by a live visual interpreter. Time-sensitive and precise information about a physical environment is especially vital in dynamic situations, where users must negotiate constantly moving targets, such as people and traffic, highlighting the benefit of being able to ask precise and timely questions.

Separate from technology competencies, users must also have opportunities to learn about the benefits
and limitations of these solutions and to assess the best option, based on their safety and needs [28].
Such insights underscore not only the value of interdisciplinary collaboration between assistive technology specialists and orientation and mobility specialists, but also the potential benefit of multifunctional devices that incorporate both AI and human-based assistance for greater flexibility.

Findings also accentuate the importance of proactively incorporating accessibility within mainstream
solutions [15]. While accessibility standards for websites and applications exist, these are not
consistently applied across platforms, leading to persisting access barriers for users with diverse needs,
including older adults and second language speakers [29]. The accessibility of a mainstream app may
also change when software updates are implemented, if designers do not consider these factors prior to
launch [30]. Additionally, a recent study has shown that O&M specialists tend to adopt independent travel
technologies slower than the clients they serve [31], highlighting the need for professionals to remain
informed about application developments and changes.
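To make the notion of proactively applied standards concrete, the sketch below illustrates (in TypeScript, using a hypothetical simplified element model, not any real framework API) one check in the spirit of WCAG 2.1 Success Criterion 4.1.2: every interactive control should expose an accessible name that a screen reader can announce. A real audit would follow the full accessible-name computation defined by the standard; this is only an assumption-laden miniature.

```typescript
// Minimal, hypothetical model of an interactive UI control.
interface Control {
  role: string;            // e.g. "button", "link"
  ariaLabel?: string;      // explicit label exposed to screen readers
  visibleText?: string;    // on-screen text content
}

// Simplified accessible-name computation: prefer an explicit
// aria-label, fall back to visible text. (WCAG/ARIA define a
// fuller precedence order; this is a deliberate simplification.)
function accessibleName(control: Control): string | null {
  return control.ariaLabel ?? control.visibleText ?? null;
}

// Audit: return the roles of controls that a screen reader would
// announce without any name, i.e. likely failures of this criterion.
function unlabeledControls(controls: Control[]): string[] {
  return controls
    .filter((c) => accessibleName(c) === null)
    .map((c) => c.role);
}

const screen: Control[] = [
  { role: "button", visibleText: "Start navigation" },
  { role: "button", ariaLabel: "Repeat last instruction" },
  { role: "button" }, // icon-only control with no label
];

console.log(unlabeledControls(screen)); // [ 'button' ]
```

Checks of this kind can run before launch and after each software update, addressing the regression problem noted above, in which an app that was accessible at release becomes inaccessible after an update.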

In line with previous studies [32], participants with congenital or early-onset blindness or low vision are
more likely to draw on applications during independent travel. Likewise, participants who are blind are
more likely to use applications for a variety of navigation-based tasks, including object identification and
the reading of text during independent travel. These findings underscore the importance of introducing
these tools early for users who have progressive vision loss. Ample research indicates that early adoption
of such tools facilitates cortical plasticity and eases the transition to non-visual methods once additional
vision loss occurs [33].

Unmet needs and future directions

Participants highlighted that although technologies are beneficial in a number of contexts, these tools
supplement rather than replace traditional tools and techniques, such as the dog guide or white cane.
This is evidenced by the fact that participants may forgo navigation-based apps in familiar locations.
However, respondents expressed a need for further support during indoor navigation and for soliciting
more precise information about points of interest, such as the exact location of an entry door.
Importantly, previous research has centered on developing solutions for other aspects of independent
travel that were not identified as significant barriers here, such as avoiding obstacles or maintaining
one's position in line [34, 35]. While helpful for some end-users, not all applications necessarily reflect
the needs and priorities of the majority of blind and low vision users. Experts with disabilities have
increasingly emphasized the need to meaningfully include end-users with disabilities as core members of
the team from inception, to ensure that technology designs are based upon priorities from within the
community [36]. Beyond meaningful partnerships with blind and low vision users, assistive technology
specialists and orientation and mobility professionals (who may themselves be members of the blind and
low vision community) can provide professional insights to further clarify issues pertaining to
environmental accessibility and safety, informing the development of new tools.

Limitations and Future Research

Participants were primarily recruited through English-speaking social media platforms. It is likely that the
sample does not represent the views of those with lower technology competencies and those who are
less connected to online services, including those located in developing countries. To supplement these
data, members of this research team are pursuing several related investigations, including studies
focused on the navigation-based needs of those newer to blindness or low vision and those with lower
self-identified technology and independent travel competencies.
Conclusion
As technology developers work towards new and innovative solutions, it is vital that these tools respond
to the increasingly diverse needs of individuals. For those who are blind or have low vision, navigation-
based applications provide essential insights about the visual environment that supplement the
knowledge gained through traditional methods. For these tools to be most effective, blind and low vision
users, alongside assistive technology, vision rehabilitation, and orientation and mobility specialists,
should be meaningfully included in the consultation and design process from inception [37]. Through
meaningful inclusion, developers can gain vital insights about the decisions underpinning the use of
specific navigation-based apps, alongside future development priorities that respond to unmet needs.

Declarations
Conflicts of interest
The authors declare no conflict of interest.

References
1. World Health Organization. World report on vision. 2019; Available from:
https://www.who.int/publications/i/item/9789241516570.
2. Varma, R., T. Vajaranant, and B. Burkemper, Visual impairment and blindness in adults in the United
States: Demographic and geographic variations from 2015 to 2050. Journal of the American Medical
Association: Ophthalmology, 2016. 134(7): p. 802-809.
3. Wiener, W.R., R.L. Welsh, and B.B. Blasch, Foundations of orientation and mobility. Vol. 1. 2010:
American Foundation for the Blind.
4. Marron, J.A. and I.L. Bailey, Visual factors and orientation-mobility performance. Optometry and
Vision Science, 1982. 59(5): p. 413-426.
5. Popescu, M.L., et al., Age-related eye disease and mobility limitations in older adults. Investigative
Ophthalmology & Visual Science, 2011. 52(10): p. 7168-7174.
6. Turano, K.A., et al., Association of visual field loss and mobility performance in older adults:
Salisbury Eye Evaluation Study. Optometry and Vision Science, 2004. 81(5): p. 298-307.
7. Loprinzi, P.D., B.K. Swenor, and P.Y. Ramulu, Age-related macular degeneration is associated with less
physical activity among US adults: cross-sectional study. PLoS One, 2015. 10(5): p. e0125394.
8. Langelaan, M., et al., Impact of visual impairment on quality of life: a comparison with quality of life
in the general population and with other chronic conditions. Ophthalmic Epidemiology, 2007. 14(3):
p. 119-126.
9. Ahmad Bahathig, A., et al., Relationship between physical activity, sedentary behavior, and
anthropometric measurements among Saudi female adolescents: a cross-sectional study.
International Journal of Environmental Research and Public Health, 2021. 18(16): p. 8461.
10. Haegele, J.A., R. Famelia, and J. Lee, Health-related quality of life, physical activity, and sedentary
behavior of adults with visual impairments. Disability and Rehabilitation, 2017. 39(22): p. 2269-2276.
11. Smith, L., et al., Visual impairment and objectively measured physical activity and sedentary
behaviour in US adolescents and adults: a cross-sectional study. BMJ Open, 2019. 9(4): p. e027267.
12. Martiniello, N. and W. Wittich, Employment and visual impairment: Issues in adulthood, in Routledge
Handbook of Visual Impairment, J. Ravenscraft, Editor. 2019, Routledge. p. 415-437.
13. Wiener, W.R., R.L. Welsh, and B.B. Blasch, Foundations of Orientation and Mobility, 3rd Edition:
Volume 2, Instructional Strategies and Practical Applications. 2010: AFB Press.
14. Kreidy, C., et al., How Face Masks Affect the Use of Echolocation by Individuals With Visual
Impairments During COVID-19: International Cross-sectional Online Survey. Interactive Journal of
Medical Research, 2022. 11(2): p. e39366.
15. Siu, Y.-T. and I. Presley, Access technology for blind and low vision accessibility. 2020: APH Press,
American Printing House for the Blind.
16. Khan, A. and S. Khusro, An insight into smartphone-based assistive solutions for visually impaired
and blind people: issues, challenges and opportunities. Universal Access in the Information Society,
2021. 20: p. 265-298.
17. Martiniello, N., et al., Exploring the use of smartphones and tablets among people with visual
impairments: Are mainstream devices replacing the use of traditional visual aids? Assistive
Technology, 2022. 34(1): p. 34-45.
18. Kuribayashi, M., et al. Linechaser: A smartphone-based navigation system for blind people to stand
in lines. in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
19. Murata, M., et al., Smartphone-based localization for blind navigation in building-scale indoor
environments. Pervasive and Mobile Computing, 2019. 57: p. 14-32.
20. Faubert, J. and L. Sidebottom, Perceptual-cognitive training of athletes. Journal of Clinical Sports
Psychology, 2012. 6: p. 85-102.
21. Kuriakose, B., R. Shrestha, and F.E. Sandnes. Smartphone navigation support for blind and visually
impaired people-a comprehensive analysis of potentials and opportunities. in Universal Access in
Human-Computer Interaction. Applications and Practice: 14th International Conference, UAHCI 2020,
Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24,
2020, Proceedings, Part II. 2020. Springer.
22. Kuriakose, B., R. Shrestha, and F.E. Sandnes, Tools and technologies for blind and visually impaired
navigation support: a review. IETE Technical Review, 2022. 39(1): p. 3-18.
23. Kelley, S., Access to information: Electronic listening, recording, and reading devices, in Foundations
of Vision Rehabilitation Therapy, H. Lee and J. Ottowitz, Editors. 2020, APH Press.
24. Martiniello, N., et al., Artificial intelligence for students in postsecondary education: a world of
opportunity. AI Matters, 2021. 6(3): p. 17-29.

25. World Medical Association, Declaration of Helsinki: ethical principles for medical research involving
human subjects. Journal of the American Medical Association, 2013. 310(20): p. 2191-4.
26. Goodman, L.A., Snowball Sampling. Ann. Math. Statist., 1961. 32(1): p. 148-170.
27. Wittich, W., et al., Device abandonment in deafblindness: a scoping review of the intersection of
functionality and usability through the International Classification of Functioning, Disability and
Health lens. BMJ Open, 2021. 11(1): p. e044873.
28. Mino, N.M., Problem solving in structured discovery cane travel. Journal of Blindness Innovation and
Research, 2011. 1(3).
29. W3C. Web Content Accessibility Guidelines (WCAG) 2.1. 2018 5 June 2018 [cited 2021 June 9];
Available from: https://www.w3.org/TR/WCAG21/.
30. Lee, H., Keyboarding and Access Technology, in Foundations of Vision Rehabilitation Therapy, H. Lee
and J. Ottowitz, Editors. 2020, APH Press: Louisville, KY. p. 313-355.
31. Deverell, L., et al., Use of technology by orientation and mobility professionals in Australia and
Malaysia before COVID-19. Disability and Rehabilitation: Assistive Technology, 2022. 17(3): p. 260-
267.
32. Martiniello, N., et al., Exploring the use of smartphones and tablets among people with visual
impairments: Are mainstream devices replacing the use of traditional visual aids? Assistive
Technology, 2019: p. 1-12.
33. Sadato, N., et al., Tactile discrimination activates the visual cortex of the recently blind naive to
Braille: a functional magnetic resonance imaging study in humans. Neuroscience Letters, 2004. 359:
p. 49-52.
34. Ptito, M., et al., Brain-machine interfaces to assist the blind. Frontiers in Human Neuroscience, 2021.
15: p. 638887.
35. Xu, P., et al., Wearable obstacle avoidance electronic travel aids for blind and visually impaired
individuals: A systematic review. IEEE Access, 2023.
36. Mankoff, J., G.R. Hayes, and D. Kasnitz, Disability studies as a source of critical inquiry for the field
of assistive technology, in Proceedings of the 12th international ACM SIGACCESS conference on
Computers and accessibility. 2010, Association for Computing Machinery: Orlando, Florida, USA. p.
3–10.
37. Nemargut, J.P., et al. Next Gen Health Powered by The Open Grid And Edge AI [Moderated Panel
Discussion]. in IEEE Future Networks World Forum. 2022. Montreal, QC, Canada.

Supplementary Files
Supplementary Files are not available with this version

Figures

Figure 1

App usage across all navigational tasks. Rows represent navigational tasks, sorted according to the
percentage of respondents using applications to perform them. The first column, "Task", contains pie
charts showing the proportions of respondents using apps ("Yes") or not using apps ("No") for the
specified tasks. The second column, "App categories", represents data from respondents using apps; it
contains the number of respondents who reported using apps of each category (Human assistance vs.
AI vs. GPS vs. Image rendering & specialized apps vs. mainstream apps). The third column represents
data from respondents who do not use apps; it lists the reasons why they do not use apps, sorted
according to the number of respondents who reported them. The figure also reports the significant
differences found in the statistical tests performed (see supplementary material for a detailed summary
of every statistical test). Alternative text for this figure can be found in supplementary file 3A. *, p<.05;
**, p<.01; ***, p<.001.

Figure 2

Needs unmet by current apps. According to 49 participants, their needs in POI/geolocation, route
planning, and indoor navigation are the least likely to be met by their current use or knowledge of
navigation apps. These needs were reported significantly more often than needs in obstacle detection or
in the use of multiple apps. Alternative text for this figure can be found in supplementary file 3B. *,
p<.05; **, p<.01.
