
Toward Achieving Universal Usability for Older Adults Through Multimodal Feedback

V. Kathlene Emery1, Paula J. Edwards1, Julie A. Jacko1, Kevin P. Moloney1, Leon Barnard1, Thitima Kongnakorn1, François Sainfort1, Ingrid U. Scott2

School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA, +1 404 385 2545

Bascom Palmer Eye Institute, University of Miami School of Medicine, Miami, FL, USA, +1 305 326 6447

{vkemery, pedwards, jacko, kmoloney, lbarnard, kongnako, sainfort}

ABSTRACT

This experiment examines the effect of combinations of feedback (auditory, haptic, and/or visual) on the performance of older adults completing a drag-and-drop computer task. Participants completed a series of drag-and-drop tasks under each of seven feedback conditions (3 unimodal, 3 bimodal, 1 trimodal). Performance was assessed using measures of efficiency and accuracy. For analysis, participants were grouped by level of computer experience. All users performed well under auditory-haptic bimodal feedback, and experienced users responded well to all multimodal feedback. Based on the performance benefits for older adults seen in this experiment, future research should investigate how to effectively integrate multimodal feedback into GUIs in order to improve usability for this growing and diverse user group.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Auditory Feedback, Evaluation methodology, GUI, Haptic I/O, Interaction styles, Theory and methods, User-centered design

General Terms
Experimentation, Human Factors

Keywords
Older adults, computer use, experience, multimodal, haptic, auditory, visual

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CUU'03, November 10-11, 2003, Vancouver, British Columbia, Canada. Copyright 2003 ACM 1-58113-701-X/03/0011...$5.00.

1. INTRODUCTION
During the next 15 years, over 82 million people who constitute the baby boom population (born 1946-1964) will join the older adult population [18]. While progress has been made in addressing the special needs of older adults, aged 65 and older, related to computer use, most of this research has focused on understanding the needs of the current older adult population. However, this population is changing, and computer experience is one trait that clearly sets the future older adult population apart from the current older adult population. This study aims to demonstrate that different configurations of feedback integrated into a conventional GUI task, namely the drag-and-drop, may improve performance for older users with different levels of computer experience. The experimental tasks in the present study exposed participants (aged 61-91) to various unimodal and multimodal feedback conditions that provided auditory, haptic, and/or visual information to the user during task performance. Based on previous research on multimodal feedback ([13], [14], [29], [30]), we hypothesize that the use of multimodal feedback will improve performance of a computer-related task by older adults with varying levels of computer experience.

1.1. Background
According to the 2000 U.S. Census, only 28% of adults aged 65 and older have home computer access, compared to 51% of adults aged 55-64 and 65% of those aged 45-54 [20]. As the baby boom population ages, it will be the first generation in which the majority of the population will already have significant computer experience upon reaching the age of 65. Because computer experience affects performance on a computer task (e.g., [17]), we can infer that future generations of older adults will have different needs and skills compared to today's older adults. In order to develop more effective interactions with this user group, developers and researchers need a better understanding of needs driven by age-related changes in cognitive and motor abilities, as well as needs related to level of computer experience.

1.1.1. Age-related Changes in Abilities

Considerable research (e.g., [6], [22]) has been completed to help us better understand the aging process and the changes that accompany aging. Changes that affect computer use include: cognitive slowing, limited processing resources, lack of inhibition, and changes in visual perception and processing, hearing, speech, psychomotor abilities, attention, memory, and learning. These abilities are integral to human-computer interaction, which puts this population at a distinct disadvantage when using computers. For example, in a review of studies on older adults and attention, Hawthorn [10] finds that older adults have more difficulty with tasks that require selective attention or


divided attention. Therefore, interface components that help direct a user's attention may be beneficial to this user group. However, what makes the older adult population such a unique and interesting group is that declines in abilities related to aging are not homogeneous [22], and older adults tend to use other abilities to compensate for abilities in decline [6]. Therefore, a one-size-fits-all design approach results in sub-optimal performance for older adults due to the high variability of their abilities. Understanding how these changes influence computer-related tasks is critical to the design of software and hardware that enables older adults to more effectively use the vast array of tools and information available through computers.

1.1.2. Direct Manipulation
Shneiderman [24] notes that poor design, slow implementation, or inadequate functionality can undermine the positive impacts that the direct manipulation paradigm advocates. The declining abilities of older adults, paired with a design that does not meet their cognitive or physical needs, could impede the beneficial effects of a GUI for this user population. Because direct manipulation interfaces present different advantages for novice and expert users, age-related declines could have different impacts on the interaction of users with varying levels of computer experience. Due to its prevalence in GUIs, the drag-and-drop interaction is a critical technique for successful human-computer interaction in many direct manipulation interfaces. The drag-and-drop task merits this study's focus, as it has endured as an interaction technique that is highly representative of direct manipulation. It has endured despite being problematic under specific circumstances for some users, and despite the fact that other methods, such as point-click, have been identified as less error-prone and more efficient ways to acquire targets (e.g., [15]). Research on drag-and-drop can help identify design strategies that mitigate its negative potential for users with limited abilities. Our research investigates feedback mechanisms that may improve drag-and-drop task performance for an older population.

1.1.3. Multimodal Feedback
Feedback can play an augmentative role in target acquisition tasks such as the drag-and-drop [1]. The integration of additional feedback, such as haptic, visual, and auditory cues, gives designers an opportunity to assist users' interactions. Visual feedback is most commonly integrated into a drag-and-drop task in three forms: 1) continuous feedback about the location of the source icon as it is dragged into position, 2) feedback about the completion of the task when the source icon disappears after an accurate drop on the target object, and 3) a colored highlight of the source icon when it is in an acceptable location for a correct drop. Fraser and Gutwin [8] proposed a framework of assistive pointers for low vision users, introducing the idea that the presentation of different feedback forms could assist people who have visual impairments. Three modalities of feedback, 1) visual, 2) auditory, and 3) haptic, were suggested to have the capacity to better enable this segment of the user population. Work by Gaver [9] proposed that auditory icons could mitigate errors commonly associated with the drag-and-drop; auditory confirmation could be a more obvious indicator of feedback for users, indicating a true hit. Research by Akamatsu and Sato [2] demonstrates that tactile or force feedback can also be used effectively within the visual display to generate faster response times and provide wider effective target areas for users. The authors propose that multimodal systems can elicit performance advantages for users. In their review of the multimodal literature base, Vitense et al. [29] illustrate that there are gaps in the research literature comparing different modalities of feedback and their differing impacts on performance. In contrast, studies of multimodal input techniques are prevalent in the literature. Few studies report a comprehensive comparison of unimodal, bimodal, and trimodal feedback employing visual, auditory, and haptic modalities in a GUI. This research is crucial since effective feedback configurations may vary significantly between user groups due to differences in experience, training, physiological abilities, cognitive traits, and cue processing. As a consequence, a specific feedback configuration may not present effective cues in the varying contexts of several different direct manipulation tasks for several different users. Modalities of information can be combined so that their total effect on a person's perception is a signal that may be weaker than, stronger than, or conflicting with other information [16]. Through their empirical research, Van Beers, Wolpert, and Haggard [27] theorize that influencers of attentional resources in direction-dependent precision tasks (such as positioning a cursor) are likely a function of three cognitive features: 1) noise incoming through sensory information, 2) capabilities of the multisensory brain regions, and 3) experience. Differences among older adults in the capacity to integrate modality information in a computer-related task are therefore likely to be affected by incoming noise, capabilities of specific brain regions, and experience. The present paper proposes computer experience as a predictive metric of how older users integrate multimodal sensory cues, as reflected in their performance.

2. METHODS

2.1. Participants
Twenty-nine volunteers, including 18 females and 11 males, aged 61 to 91 years (mean 73.83 years), participated in this study. Participants were recruited with the assistance of the Bascom Palmer Eye Institute. Participants were selected based on the following criteria: age, right-handedness, visual acuity, and ocular health. Participants were provided with a clinical eye exam to obtain their best-corrected visual acuity. All participants had best-corrected visual acuity between 20/20 and 20/40 and no ocular disease. Each participant's self-reported computer experience was elicited through a verbally administered background questionnaire. Our approach for assessing computer experience is a synthesis of the methods used in other studies (e.g., [4], [17], [19]). An index of computer experience was calculated for each participant, based on a frequency-of-use rating scale from 0 (no use) to 5 (daily use) and the number of applications used (0 to 5). Based on this experience index, participants were stratified into three groups: No Experience (n=9), Limited Experience (n=9), and Experienced (n=11). Due to the inherent variability in the abilities of the participant population, tests of manual dexterity, mental health, and physical health were administered. Participants' manual dexterity was assessed using the Purdue Pegboard test of manual dexterity [26] prior to the computer task. Participants' mental and physical health were assessed by means of a verbally administered Short Form 12 (SF-12) health survey [31]. Analyses confirmed no significant differences (α=0.05) between the stratified groups on the factors of age, gender, dexterity, or SF-12 mental and physical health assessments.
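The experience index described above can be sketched in a few lines. The paper reports the two component scales (frequency of use, 0-5; number of applications, 0-5) but not the cutoff scores used to form the three groups, so the thresholds below are illustrative assumptions only:

```python
# Sketch of the computer-experience index from Section 2.1.
# The group cutoffs are NOT reported in the paper; the values
# used here are hypothetical, for illustration only.

def experience_index(frequency_of_use: int, num_applications: int) -> int:
    """Sum of the two self-report scales, each 0-5 (index range 0-10)."""
    assert 0 <= frequency_of_use <= 5 and 0 <= num_applications <= 5
    return frequency_of_use + num_applications

def experience_group(index: int) -> str:
    """Stratify into the three groups used in the study (cutoffs assumed)."""
    if index == 0:
        return "No Experience"
    elif index <= 5:
        return "Limited Experience"
    return "Experienced"

# A participant who uses a computer daily with four applications:
print(experience_group(experience_index(5, 4)))
```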

2.2. Apparatus & Experimental Task
Participants were seated approximately 24 inches from a 20-inch viewable Trinitron flat screen display. Screen resolution was set at 1152 × 864 pixels, with a 32-bit color setting. To perform the drag-and-drop task, participants used a Logitech WingMan Force Feedback Mouse, which provided haptic feedback. Multimodal AHV 2.0 software, developed for this study, presented a series of drag-and-drop tasks. The file icon and the target folder were 36.8 mm in size (diagonal distance), based on the findings of Jacko et al. [12], [13]. The experimental task was a simplified version of the drag-and-drop task employed in previous studies [3], [29]. Users received feedback to indicate that the file was positioned correctly for a successful drop into the folder. The feedback conditions, listed in Table 1, included uni-, bi-, and trimodal combinations of auditory, haptic, and visual feedback. The visual feedback was a purple coloration that highlighted the file icon. Auditory feedback consisted of an auditory icon that mimicked the sound of an object being pulled or sucked into a box; the volume level was adjusted for each participant to a level that was easily detectable. Haptic feedback was a vibration produced by the force feedback mouse. The visual unimodal condition served as the control condition since it closely resembles the feedback received by users in the standard Windows GUI.

Table 1: Feedback forms employed in this study
Condition | Feedback Form | Feedback Type
A | Auditory | Unimodal
H | Haptic | Unimodal
V | Visual | Unimodal
AH | Auditory, Haptic | Bimodal
AV | Auditory, Visual | Bimodal
HV | Haptic, Visual | Bimodal
AHV | Auditory, Haptic, Visual | Trimodal

2.3. Experimental Design
The experiment utilized a 7 x 3 factorial design with seven feedback conditions (within-subjects factor) and three computer experience groups (between-subjects factor). The seven feedback conditions (see Table 1) were presented to each participant in a randomized order using a 7 x 7 counterbalanced Latin square design. Within each feedback condition, each participant completed 15 trials, with the target folder in a different location for each trial. The order of locations was also randomized using a counterbalanced Latin square design. Dependent variables of Trial Time (TT), Final Target Highlight Time (FTHT), Total Target Highlight Time (TTHT), and Over-No-Drop (OND) errors were used to assess participants' efficiency and accuracy. TT is the interval from when the target folder is displayed until the file is successfully dropped in the target folder, measured in milliseconds. A comprehensive measure of performance, TT includes time caused by errors as well as time for the successful drop. Due to the number of factors that influence TT, it is difficult to draw conclusions on feedback effectiveness from TT alone; therefore, additional target highlight time measures were used to gain further insight. FTHT is the amount of time that feedback was provided to the user on the final, successful drop; this is the same as the Target Highlight Time reported in similar studies [3], [29]. TTHT measures the amount of time that feedback was provided to the user on the final, successful drop as well as on any previous, unsuccessful attempts. Therefore TTHT, like TT, includes aspects of both efficiency and accuracy. Accuracy was also assessed using the direct measure of OND errors. An OND error occurred when the participant correctly placed the file over the folder, thus receiving feedback, but moved off of the folder without dropping the file.

2.4. Data Analysis
The raw data were not normally distributed. Transforming the time-based metrics (TT, FTHT, and TTHT) using the log10 of these measures brought the data closer to a normal distribution, so analysis was performed on the transformed measures. Before transformation, the time-based measures ranged from 1284.72 to 91661.20 ms for TT, 418.59 to 1159.73 ms for TTHT, and 314.77 to 971.44 ms for FTHT. For the time-based metrics, a repeated measures general linear model (GLM) was used to examine significant differences in task performance related to the between-subjects factor of computer experience and the within-subjects factor of feedback condition. Tests of the within-subjects factors were completed using the Greenhouse-Geisser adjusted F-statistic to compensate for violations of sphericity [7]. When significant differences were identified, post-hoc tests were performed using the Bonferroni method. Error analysis was completed using non-parametric tests. The Wilcoxon-Mann-Whitney Rank Sum test was used to identify differences in accuracy between computer experience groups. The Chi-squared test was used to identify differences in frequency of errors between feedback conditions within a given user group.
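The counterbalanced condition ordering described in Section 2.3 can be sketched as follows. The paper does not reproduce its actual 7 × 7 square, so this simple cyclic construction, in which every condition appears once per row (participant order) and once per column (presentation position), is an illustrative assumption rather than the study's exact design:

```python
# Illustrative 7 x 7 Latin square over the feedback conditions of Table 1.
# Row i gives one presentation order; each condition occupies each
# serial position exactly once across rows.

CONDITIONS = ["A", "H", "V", "AH", "AV", "HV", "AHV"]

def latin_square(items):
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

square = latin_square(CONDITIONS)
for row in square:
    print(row)
```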

3. RESULTS

3.1. Time-based Metrics

For all three measures, the analyses identified main effects of feedback condition and computer experience group (see Tables 2 and 3). For FTHT and TTHT, there was also an interaction effect between feedback condition and group (see Tables 2 and 3). Figures 1, 2, and 3 graph the means of each transformed measure (TT, FTHT, and TTHT, respectively) by computer experience group and feedback condition.
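The Greenhouse-Geisser correction behind the within-subjects tests in Table 2 can be illustrated with a minimal numpy sketch. The function implements the standard epsilon estimate from the condition covariance matrix; the data below are synthetic placeholders, not the study's measurements:

```python
# Sketch of the Greenhouse-Geisser epsilon (Section 2.4) used to adjust
# repeated-measures F-test degrees of freedom for non-sphericity.
# `data` is an (n_subjects x k_conditions) array of log10-transformed times.
import numpy as np

def greenhouse_geisser_epsilon(data: np.ndarray) -> float:
    k = data.shape[1]
    S = np.cov(data, rowvar=False)      # k x k covariance across conditions
    mean_diag = np.trace(S) / k         # mean of diagonal entries
    grand_mean = S.mean()               # mean of all entries
    row_means = S.mean(axis=1)
    num = (k * (mean_diag - grand_mean)) ** 2
    den = (k - 1) * ((S ** 2).sum()
                     - 2 * k * (row_means ** 2).sum()
                     + k ** 2 * grand_mean ** 2)
    return float(num / den)

# Synthetic stand-in: 29 participants x 7 feedback conditions.
rng = np.random.default_rng(0)
times_ms = rng.lognormal(mean=8.0, sigma=0.3, size=(29, 7))
eps = greenhouse_geisser_epsilon(np.log10(times_ms))
print(round(eps, 3))  # epsilon always lies in [1/(k-1), 1]
```

The adjusted test multiplies both numerator and denominator degrees of freedom by epsilon, which is how Table 2 arrives at fractional df such as 5.727.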


Table 2: Tests for Significant Differences
Effect | F | df | p
Computer Experience Group
  TT* | 86.701 | 2 | < .001*
  FTHT* | 4.488 | 2 | .012*
  TTHT* | 8.271 | 2 | < .001*
Feedback Condition (Greenhouse-Geisser)
  TT* | 5.946 | 5.727 | < .001*
  FTHT* | 21.855 | 5.862 | < .001*
  TTHT* | 25.592 | 5.828 | < .001*
Feedback × Group Interaction (Greenhouse-Geisser)
  TT | 1.123 | 11.454 | .337
  FTHT* | 3.781 | 11.735 | < .001*
  TTHT* | 4.047 | 11.656 | < .001*
*Significant at the .05 level


Table 3: Feedback Condition Main Effects
X > Y indicates that condition X had a significantly higher time than condition Y at the p < .05 level. Columns organize significant differences by modality to illustrate trends.
Trial Time: H > A; V > A; AHV > A; V > AH; V > HV
FTHT: H > A; V > A; H > AH; V > AH; H > AV; V > AV; H > AHV; V > HV; V > AHV
TTHT: H > A; V > A; H > AH; V > AH; H > AV; V > AV; H > AHV; V > HV; V > AHV

Figure 1: Graph of Trial Time Transformed Means




The main effect of computer experience is not surprising. For all three measures, experienced users performed better than users with no experience. This effect was most evident in TT, where the No Experience (No Exp) group had higher times than the two experienced groups (p<.001) and the Limited Experience (LExp) group performed worse than the Experienced (Exp) group (p=.001). For FTHT and TTHT, the same trends were seen. For FTHT, the No Exp group performed worse than the Exp group (p=.01). With TTHT, the No Exp group had higher times than both the LExp group (p=.01) and the Exp group (p<.001). The performance improvement from computer experience is plainly demonstrated in Figures 1, 2, and 3. The No Exp group performed best under the auditory unimodal and auditory-haptic bimodal conditions. Similar to the LExp group, this group's performance was poor in the visual and haptic unimodal conditions. Unlike the other two groups, this group also performed poorly under the haptic-visual bimodal condition, suggesting that the haptic component combined with the visual component can be problematic for this group. These results indicate that the No Exp group can benefit from multimodal feedback that includes an auditory component. (See Table 4 for a complete list of interaction effects.)

Figure 2: Graph of FTHT Transformed Means





Figure 3: Graph of TTHT Transformed Means


Table 4: Interaction Effects of Feedback Condition and Computer Experience Group
X > Y indicates that condition X had a significantly higher time than condition Y for the given experience group. All differences are significant at the p < .05 level. Columns organize significant differences by modality to illustrate trends.

Experienced with Computers
  FTHT: V > A; V > H; V > AH; V > AV; V > HV; V > AHV
  TTHT: V > A; V > H; V > AH; V > AV; V > HV; V > AHV

Limited Computer Experience
  FTHT: H > AH; H > AV; H > HV; V > A; V > AH; V > AV; V > HV; V > AHV
  TTHT: H > A; H > AH; H > AV; H > HV; V > A; V > AH; V > AV; V > HV

No Computer Experience
  FTHT: H > AH; HV > A; HV > AH; V > A; V > AH; V > AHV
  TTHT: H > AH; HV > AHV; V > A; V > AH

3.2. Errors
The error analysis addressed over-no-drop (OND) errors. This error could occur multiple times within the course of a single trial. Figure 4 displays the percentage of trials in which each frequency of OND errors occurred. As seen in this figure, the error rate was relatively low.

The Wilcoxon-Mann-Whitney Rank Sum test was used to identify differences in accuracy between computer experience groups. There were very few statistical differences in the occurrence of errors. For the auditory unimodal condition, the No Exp group had a higher occurrence of OND errors than the LExp group (p=.004). Under the visual unimodal condition, the No Exp group had a higher error rate than the LExp group (p=.010). The auditory-haptic bimodal condition also resulted in more errors for the No Exp group than the Exp group (p=.010). Due to the low error rate, errors contributed only slightly to the overall TT and TTHT performance measures. Therefore the differences in the times between groups and feedback conditions are not due to an increase in errors. Rather, these differences in efficiency can be attributed primarily to the effects of the feedback condition and computer experience.

4. DISCUSSION
Based on the results of the experiment, several points of discussion emerge about the effects that computer experience and various feedback modalities have on the performance of older adults in a drag-and-drop task.

Lack of Errors
The analysis of OND errors indicated high accuracy for all three groups and no statistically significant increases in errors due to feedback condition within any group. This high level of accuracy across the board is likely due to the level of precision required for the task and the absence of distracters. Additional distractions in the task environment may increase error occurrence and should be studied and considered in design. An interesting conclusion that can also be drawn from the lack of errors is that none of the feedback conditions significantly decreased accuracy for the task within any of the three groups. This indicates that efficiency gains achieved by the most effective feedback conditions were not offset by decreases in accuracy.

Figure 4: Frequency of OND Errors. (A trial represents a single drag-and-drop task performed by a participant.)

The Friedman test was used to identify differences in the frequency of errors between feedback conditions within a given computer experience group. This analysis indicated no significant differences related to feedback condition.
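The nonparametric error analyses described above can be sketched with standard scipy routines: a rank-sum test between experience groups and a Friedman test across the seven feedback conditions. The counts below are synthetic placeholders, not the study's data:

```python
# Sketch of the Section 3.2 error analyses. mannwhitneyu implements the
# Wilcoxon-Mann-Whitney rank sum test; friedmanchisquare compares the
# seven related feedback conditions within one experience group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# OND errors per trial for two groups under one feedback condition
no_exp_errors = rng.poisson(0.6, size=9 * 15)   # 9 participants x 15 trials
exp_errors = rng.poisson(0.2, size=11 * 15)     # 11 participants x 15 trials
u_stat, p_groups = stats.mannwhitneyu(no_exp_errors, exp_errors)

# Per-participant error counts under each of the 7 feedback conditions
errors_by_condition = rng.poisson(0.4, size=(9, 7))
chi2, p_conditions = stats.friedmanchisquare(*errors_by_condition.T)

print(round(p_groups, 3), round(p_conditions, 3))
```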


4.1. Visual Effect
In reviewing the results of all three groups, the visual unimodal condition consistently demonstrated relatively poorer performance. This result is fairly intuitive, since a direct manipulation such as the drag-and-drop already requires a high concentration of visual attention on the display; visual-only feedback therefore does not add much value compared to other forms of feedback. Moreover, older adults have been shown to be more distracted by on-screen items and to process visual signals more slowly in short-term memory (STM) [5], [10]. Since this task presented no intentional distracters or other visual complexities, it is likely that visual unimodal feedback will result in even worse performance for older adults in more complex environments. Consequently, we can conclude that visual feedback alone is not enough, but it can be effective when combined with other modalities. Similar results have been seen in studies addressing younger users [29], so developers need to consider judicious use of non-visual output in computer interfaces. Future research needs to provide a better understanding of how to integrate visual and non-visual feedback to improve performance on more complex computer tasks and for other user groups.

4.2. Haptic Effect
In this experiment, results were mixed for haptic feedback, alone and in combination. The best results for haptic feedback were observed when haptic was combined with other modalities, especially auditory. It is interesting to note that, due to the age-dependent degeneration of the peripheral nervous system common in individuals 40 years and older [11], older adults may be less sensitive to haptic feedback. This could explain why the haptic unimodal feedback was less effective compared to other feedback conditions. From these results, we can conclude that haptic feedback, used in combination with other modalities, has the potential to improve performance for older adults. However, more research is needed to design appropriate haptic feedback for GUIs, since older adults may be less sensitive to certain types of haptic feedback.

4.3. Auditory Effect
Auditory feedback, alone and in combination, improved performance for all three groups in the experiment. Considering the visual/spatial nature of the drag-and-drop task, Wickens' Multiple Resource Theory (MRT) [32] provides strong support for these results. According to this theory of shared cognitive resources, the more that tasks operate on common modalities (visual vs. auditory) and/or codes (verbal vs. spatial), the more task interference will be observed. Auditory processing uses different mental resources than visual processing; thus, auditory feedback uses mental resources not already occupied by the visual/spatial nature of the task. Therefore, when auditory feedback is used, users experience an interactive effect, instead of interference, by combining auditory information with the other two modalities. Research also indicates that indirect auditory stimuli have the propensity to trigger shifts in attention when the signal mandates immediate action [23]. Since older adults tend to be more easily distracted, auditory feedback may refocus them on the task at hand, thereby improving performance. Despite the fact that the sound is not localized in the actual icons, the users may experience a ventriloquist effect [25], a simple misjudgment of the localization of sound based on information from other sensory modalities. The ventriloquist effect may contribute to the effectiveness of the auditory feedback by drawing the user's attention to the action on the screen. It is interesting to note that, at first glance, these results appear to contrast with some of the findings of Vitense et al. [29]. In their study of young adults completing a complex drag-and-drop task under uni-, bi-, and trimodal conditions, Vitense et al. found that haptic and haptic-visual feedback resulted in the best target highlight times, while auditory and auditory-visual had the best trial times. Similar feedback (visual and haptic) was used in both studies, with the exception of auditory: the Vitense experiment used an earcon (length: 1.9 sec.) compared to the auditory icon (length: 0.1 sec.) used in this experiment. Vitense et al. observed that subjects waited for the earcon to play completely before dropping the file, which likely explains some of the differences in the target highlight time results of the two studies. Furthermore, there may be differences in the ways that younger and older adults respond to haptic and/or auditory feedback, or differences in the effectiveness of auditory feedback related to the presence or absence of distracters. It is concluded from the results of this study that auditory feedback, alone and in combination, can improve performance for older adults. Developers and researchers need to understand the beneficial and detrimental effects of auditory feedback as it affects different user groups.

4.4. Multimodal Effect
Considering the overall results of the experiment, it is apparent that while all three user groups benefited from some form of multimodal feedback, users with more experience performed better with multimodal feedback than users with limited or no experience. The Exp group performed well with both bimodal and trimodal feedback. The LExp group performed well with bimodal feedback, but did not benefit as much from trimodal feedback. Comparatively, the No Exp group performed well with auditory-haptic, but had diminished performance with haptic-visual. These results are even more compelling when one considers the mental processing occurring in each of the three groups. For the Exp group, mental demands for completing the task are reduced due to the automation [21] of common computer tasks and improved mental models [28], leaving more mental resources available to process feedback from multiple modalities. The lack of automation of tasks for inexperienced users may have caused the LExp group to incur slightly higher mental demands, potentially causing these users to experience interference effects under the trimodal condition. The No Exp group has higher mental demands than the other groups, since its members are performing an unfamiliar task; they are therefore likely to be more affected by constraints on STM and mental processing. Despite the differences in some forms of multimodal feedback, all groups responded well under the auditory-haptic bimodal condition. As mentioned previously, this result is supported by Wickens' MRT, since audition uses mental resources separate from the visual/spatial resources required by the nature of the drag-and-drop task. Additionally, because the task is visually intensive, sensory receptors in the ears and hands are likely to have more resources available to process feedback. Based on these results, developers need to consider the specific needs of each experience group when integrating multimodal feedback into applications, although auditory-haptic appears to be the most effective for all groups.
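One way to read these findings as a design rule is sketched below. This mapping is our illustration of the Section 4.4 conclusions, not a mechanism from the study itself: default to the auditory-haptic pairing that helped all groups, and add the visual channel only for experienced users, who handled trimodal feedback well.

```python
# Hypothetical feedback-selection sketch motivated by the study's results.
# The group names match Section 2.1; the modality assignments summarize
# the Section 4.4 discussion and are illustrative, not prescriptive.

FEEDBACK_BY_EXPERIENCE = {
    "No Experience": {"auditory", "haptic"},          # AH helped; HV was problematic
    "Limited Experience": {"auditory", "haptic"},     # bimodal helped; trimodal less so
    "Experienced": {"auditory", "haptic", "visual"},  # all multimodal forms helped
}

def feedback_modalities(experience_group: str) -> set[str]:
    """Modalities to engage when the dragged icon is over its target."""
    # Auditory-haptic was effective for every group, so it is the default.
    return FEEDBACK_BY_EXPERIENCE.get(experience_group, {"auditory", "haptic"})

print(sorted(feedback_modalities("Experienced")))
```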

ACKNOWLEDGMENTS
The authors thank the Bascom Palmer Eye Institute (BPEI) for so generously providing the space within which this experimentation was conducted.

REFERENCES
[1] Akamatsu, M., MacKenzie, I. S., & Hasbroucq, T. (1995). A comparison of tactile, auditory, and visual feedback in a pointing task using a mouse-type device. Ergonomics, 38(4), 816-827.
[2] Akamatsu, M., & Sato, S. (1994). A multi-modal mouse with tactile and force feedback. International Journal of Human-Computer Studies, 40, 443-453.
[3] Brewster, S. A. (1998). Sonically-enhanced drag and drop. Proceedings of the International Conference on Auditory Display (ICAD'98), Glasgow, UK, 1-7.
[4] Czaja, S. J., Sharit, J., Ownby, R., Roth, D. L., & Nair, S. (2001). Examining age differences in performance of a complex search and retrieval task. Psychology and Aging, 16(4), 564-579.
[5] Ellis, R. D., & Kurniawan, S. (2000). Increasing the usability of online information for older users: A case study in participatory design. International Journal of Human-Computer Interaction, 2(12), 263-276.
[6] Fisk, A. D., & Rogers, W. A. (2000). Influence of training and experience on skill acquisition and maintenance in older adults. Journal of Aging and Physical Activity, 8, 373-378.
[7] Field, A. (2000). Discovering Statistics Using SPSS for Windows. Great Britain: The Cromwell Press, Ltd.
[8] Fraser, J., & Gutwin, C. (2000). A framework of assistive pointers for low vision users. Fourth Annual International ACM/SIGCAPH Conference on Assistive Technologies (ASSETS 2000), Arlington, VA, 9-16.
[9] Gaver, W. (1989). The SonicFinder: An interface that uses auditory icons. Human-Computer Interaction, 4(1), 67-94.
[10] Hawthorn, D. (2000). Possible implications of aging for interface designers. Interacting with Computers, 12, 507-528.
[11] Hilz, M. J., Axelrod, F. B., Hermann, K., Haertl, U., Duetsch, M., & Neundörfer, B. (1998). Normative values of vibratory perception in 530 children, juveniles and adults. Journal of the Neurological Sciences, 159, 219-225.
[12] Jacko, J. A., Rosa, R. H., Jr., Scott, I. U., Pappas, C. J., & Dixon, M. A. (2000). Visual impairment: The use of visual profiles in evaluations of icon use in computer-based tasks. International Journal of Human-Computer Interaction, 12(1), 151-164.
[13] Jacko, J. A., Scott, I. U., Sainfort, F., Barnard, L., Edwards, P. J., Emery, V. K., Kongnakorn, T., Moloney, K. P., & Zorich, B. S. (2003). Older adults and visual impairment: What do exposure times and accuracy tell us about performance gains associated with multimodal feedback? Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2003), 5(1), 33-40. Ft. Lauderdale, FL, USA.
[14] Jacko, J. A., Scott, I. U., Sainfort, F., Moloney, K. P., Kongnakorn, T., Zorich, B. S., & Emery, V. K. (2003). Effects of multimodal feedback on the performance of

5. Future Directions
The results of the experiment support our original hypotheses: 1) The performance in a computer-related task of older adults with varying levels of computer experience was improved with the use of some form of multimodal feedback; 2) The level of computer experience influenced how an older adults performance was affected by uni-, bi-, and trimodal feedback. The most important direction for future research is in effectively integrating multimodal feedback into the broader context of the GUI. Much research has been done on the effects of each modality individually and some has been done addressing integrating modalities for a specific task. However, we need to ensure that the benefits seen in evaluations of specific tasks and modalities hold when multimodal feedback is integrated into the larger GUI. Within the overall GUI, there is an increased potential for interference when multimodal feedback is used for a variety of tasks/situations. A framework for feedback inclusion for common GUI tasks would help researchers and developers understand which types of feedback are most effective for each common GUI task. The present study inspires the foundations for such a framework. Three valuable guidelines that emerge included: The employment of only visual unimodial feedback, applied to visual/spatial tasks, is likely an insufficient enhancement in terms of user performance. The consideration of computer experience and associated abilities can inform the strategic inclusion of multimodal feedback. Experienced users may have more mental resources available to attend to multiple sensory cues without sacrificing their performance. The integration of auditory-haptic bimodal feedback within the context of a drag and drop task, and as used in this study, can enhance user interactions.
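As a concrete illustration of the auditory-haptic drag-and-drop guideline, the core integration idea is that each interaction event fans out to every active feedback channel. A minimal sketch in Python follows; the class, event names, and cue strings are hypothetical illustrations of the pattern, not the apparatus used in this study:

```python
from dataclasses import dataclass, field

# Events in a drag-and-drop interaction (illustrative set)
EVENTS = ("pick_up", "over_target", "drop_success", "drop_miss")

@dataclass
class FeedbackDispatcher:
    """Fans a single GUI event out to every active feedback modality."""
    modalities: frozenset          # e.g. {"auditory", "haptic"}
    log: list = field(default_factory=list)

    def signal(self, event):
        """Emit one cue per active modality for a drag-and-drop event."""
        if event not in EVENTS:
            raise ValueError(f"unknown event: {event}")
        cues = []
        if "auditory" in self.modalities:
            cues.append(("auditory", f"earcon:{event}"))   # e.g. play a short earcon
        if "haptic" in self.modalities:
            cues.append(("haptic", f"pulse:{event}"))      # e.g. mouse tactile pulse
        if "visual" in self.modalities:
            cues.append(("visual", f"highlight:{event}"))  # e.g. target highlight
        self.log.extend(cues)
        return cues

# Auditory-haptic bimodal condition, mirroring the guideline above:
bimodal = FeedbackDispatcher(modalities=frozenset({"auditory", "haptic"}))
cues = bimodal.signal("over_target")  # one auditory and one haptic cue fire together
```

In a real GUI toolkit the cue strings would be replaced by calls into the platform's audio and force-feedback APIs; the point is only that a single event is dispatched to all active channels, so modality combinations can be varied per task without changing the interaction logic.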

This early development of such a framework has the potential to inform enhancements to GUIs. These enhancements may augment the performance for not only older adults, but also other significant populations [13], [28]. The continued extension of the framework to other forms of feedback in more complex situations is a necessary future direction. Only then will we be able to more closely make advancements towards the universal usability of GUIs.

This research was made possible through funding awarded to Julie A. Jacko by the Intel Corporation and the National Science Foundation (BES-9896304). The invaluable contributions of Mahima Ashok, Brynley S. Zorich, and Dr. Dagmar Lemus are gratefully acknowledged, as is the support of Reva Hurtes and the Mary and Edward Norton Library of Ophthalmology at


older adults with normal and impaired vision. Lecture Notes in Computer Science (LNCS), 2615, 3-22. [15] MacKenzie, I. S., Sellen, A., & Buxton, W. A. S. (1991). A comparison of input devices in elemental pointing and dragging tasks. Proceedings of the ACM Conference on Human Factors in Computing Systems: Reaching through Technology (CHI 91), New Orleans, 161-166. [16] McGee, M. R., Gray, P. D., & Brewster, S. A. (2001). The effective combination of haptic and auditory textural information. In S. A. Brewster & R. Murray-Smith (Eds.), Haptic-Human-Computer Interaction (Springer LNCS), 2058, pp.118-126. [17] Mead, S. E., Sit, R. A., Rogers, W. A., Jamieson, B. A., & Rousseau, G. K. (2000). Influences of general computer experience and age on library database search performance. Behaviour & Information Technology, 19(2), 107-123. [18] Meyer, J. (2001). Age: 2000. Census 2000 Brief. C2KBR/01-12, US Census Bureau, USA. [19] Morrell, R. W., Mayhorn, C. B., & Bennett, J. (2000). A survey of World Wide Web use in middle-aged and older adults. Human Factors, 42(2), 175-182. [20] Newburger, E. C. (2001). Home Computers and Internet Use in the United States: August 2000. Current Population Reports, Series P23-207, US Census Bureau, USA. [21] Proctor, R. W., & Vu, K.-P. (2002). Human information processing: Some implications for human-computer interaction. In J. A. Jacko & A. Sears (Eds.), The HumanComputer Interaction Handbook, pp. 35-51. Mahwah, NJ: Lawrence Erlbaum Associates. [22] Rogers, W. A. (1997). Individual differences, aging, and human factors: An overview. In A. D. Fisk & W. A. Rogers (Eds.), Handbook of Human Factors and the Older Adult, pp. 151-170. London: Academic Press.

[23] Sanders, M. S., & McCormick, E. J. (1993). Human Factors in Engineering and Design, pp. 160-219. New York: McGraw Hill. [24] Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human Computer Interaction. (Massachusetts: Addison Wesley). [25] Soto-Faraco, S., Lyons, J., Gazzaniga, M., Spence, C., & Kingstone, A. (2002). The ventriloquist in motion: Illusory capture of dynamic information across sensory modalities. Cognitive Brain Research, 14, 139-146. [26] Tiffin, J., & Asher, E.J. (1948). The Purdue Pegboard: Norms and Studies of Reliability and Validity. Journal of Applied Psychology, 32, 234-247. [27] Van Beers, R. J., Wolpert, D. M., & Haggard, P. (2002). When feeling is more important that seeing in sensorimotor adaptation. Current Biology, 12, 834-837. [28] Van der Veer, G. C., and Melguizo, M. (2002). Mental Models. In J. A. Jacko and A. Sears (Eds), The HumanComputer Interaction Handbook, pp.52-80. Mahwah, NJ: Lawrence Erlbaum Associates. [29] Vitense, H. S., Jacko, J. A., & Emery, V. K. (2003). Multimodal feedback: An assessment of performance and mental workload. Ergonomics, 46(1-3), 68-87. [30] Vitense, H. S., Jacko, J. A., and Emery, V. K. (2002). Foundations for improved interaction by individual with visual impairments through multimodal feedback. Universal Access in the Information Society (UAIS), 2(1), 76-87. [31] Ware, J. E., Kosinski, M., and Keller, S. D., 1995, SF-12: How to Score the SF-12 Physical and Mental Health Summary Scales (2nd Ed.). (Boston, MA: The Health Institute, New England Medical Center). [32] Wickens, C.D. (1984) Processing resources in attention. In R. Parasuraman & D. R. Davies (Eds.), Varieties of Attention. 63-102. New York, Academic Press.