
THE PENNSYLVANIA STATE UNIVERSITY

SCHREYER HONORS COLLEGE

THE SCHOOL OF ENGINEERING DESIGN, TECHNOLOGY, AND PROFESSIONAL PROGRAMS

IMPROVING THE USABILITY OF DESIGN TOOLS

MADISON REDDIE
FALL 2020

A thesis
submitted in partial fulfillment
of the requirements
for a baccalaureate degree
in Mechanical Engineering
with honors in Engineering Design

Reviewed and approved* by the following:

Matthew Parkinson
Professor of Engineering Design and Mechanical Engineering
Thesis Supervisor

Sven Bilén
Professor of Engineering Design, Electrical Engineering, and Aerospace Engineering
Honors Adviser

*Electronic approvals on file



Abstract

Design research ultimately seeks to advance engineering and design activities. However, industry professionals, not researchers, are responsible for designing most of the artifacts that surround us on a daily basis. Therefore, academic researchers must translate their work into a format that is usable by industry designers in order to see their research implemented on a large scale. Resources generated for this purpose are referred to as design tools. Despite the efforts made by researchers to create design tools and the presence of a thorough articulation of user needs for design tools in the literature, studies consistently find that such tools are seldom used in industry due to significant usability issues. This thesis uses the Virtual Fit Tool, an ergonomics resource, to investigate
and reconcile the disconnect between the literature regarding design tool requirements and users’
real experiences with design tools. An initial usability study of the Virtual Fit Tool highlights
usability problems within the tool and informs design changes, along with the literature. After
a comprehensive redesign process, a second usability study of the redesigned tool demonstrates
an improved user experience but also outstanding shortcomings and usability challenges that the
literature did not predict. Implications for design tool development that are applicable to research
in all engineering and design domains are drawn from these findings.

Table of Contents

List of Figures

List of Tables

Acknowledgements

1 Introduction

2 Background
2.1 Design Tool Uses and Benefits
2.2 Design Tool Failures
2.3 User Needs
2.3.1 Approachability
2.3.2 Usability
2.3.3 Value
2.4 Methods Used in Previous Work

3 Methods
3.1 The Case for the Virtual Fit Tool
3.1.1 Multivariate Accommodation
3.2 Initial Usability Study
3.2.1 VFT Versions
3.2.2 Initial Usability Study
3.2.3 Data Analysis
3.3 Redesign of the VFT
3.4 Usability Study of the Redesigned VFT
3.4.1 Data Analysis

4 Results
4.1 Initial Usability Study Results
4.2 Redesigned VFT Usability Study Results

5 Discussion
5.1 Next Steps for the Redesigned VFT
5.2 Implications for Design Tool Development
5.2.1 Constraining Information Presentation and User Actions
5.2.2 Considering the Efficiency-Usability Trade-off in Instruction Sets
5.2.3 Conducting Usability Tests to Identify and Address Failure Modes
5.3 Limitations

6 Conclusion

7 Appendices
7.1 Appendix A: VFT Interfaces and Instructions Used in Initial Usability Study
7.2 Appendix B: Initial Usability Study Survey
7.3 Appendix C: Redesigned VFT Usability Study Survey
7.4 Appendix D: Initial Usability Study Results
7.5 Appendix E: Redesigned VFT Usability Study Results

Bibliography

List of Figures

2.1 Cards from Lockton's Design with Intent Toolkit [32]
2.2 Example of a Quality Function Deployment matrix [33]

3.1 The Virtual Fit Tool
3.2 Buttock-popliteal length
3.3 Hip breadth
3.4 Popliteal height
3.5 Redesigned VFT launch site, 1 of 2
3.6 Redesigned VFT launch site, 2 of 2
3.7 Redesigned VFT for seating, 1 of 2
3.8 Redesigned VFT for seating, 2 of 2

4.1 Accommodation rates achieved with each VFT version in the initial usability study
4.2 Accommodation rates achieved with the original and redesigned VFTs in the second usability study
4.3 Accommodation rates achieved with the original VFT in the second usability study when used first and second
4.4 Accommodation rates achieved with the redesigned VFT in the second usability study when used first and second

7.1 Part of the full VFT interface used in the initial usability study in tool versions A and B
7.2 The partial VFT interface used in the initial usability study in tool versions C and D
7.3 The partial with range VFT interface used in the initial usability study in tool versions E and F
7.4 Page 1 of 2 of the original instruction set used in the initial usability study in tool versions A, C, and E
7.5 Page 2 of 2 of the original instruction set used in the initial usability study in tool versions A, C, and E
7.6 The prompting instruction set used in the initial usability study in tool versions B, D, and F
7.7 Page 1 of 2 of the survey used in the initial usability study
7.8 Page 2 of 2 of the survey used in the initial usability study
7.9 Survey used in the usability study of the redesign, 1 of 4
7.10 Survey used in the usability study of the redesign, 2 of 4 (this page presented twice, with the tool number and tool link changed)
7.11 Survey used in the usability study of the redesign, 3 of 4 (this page presented twice, with the tool number changed)
7.12 Survey used in the usability study of the redesign, 4 of 4

List of Tables

2.1 Design tool primary and secondary user needs

3.1 VFT versions used in initial usability study

4.1 Average tool version use times in the initial usability study
4.2 Average tool version characteristic ratings in the initial usability study
4.3 Average instruction clarity ratings in the initial usability study
4.4 Trust in tool versions in the initial usability study
4.5 Average tool version characteristic ratings in the second usability study
4.6 Average original tool characteristic ratings in the second usability study by order of tool use
4.7 Average redesigned tool characteristic ratings in the second usability study by order of tool use

7.1 Initial usability study trial times
7.2 Initial usability study open-ended feedback
7.3 Second usability study open-ended feedback

Acknowledgements

Thank you to Dr. Matt Parkinson for aiding in my decision to come to the Schreyer Honors
College at Penn State, introducing me to design, advising this thesis, making the redesigned VFT
possible, and continually pushing me to grow as a student and researcher. Thank you to Songlin
Wu for the advice and technical assistance. I would also like to thank Dr. John Carroll and Dr.
Sooyeon Lee for giving me the opportunities and experiences that have made me the researcher
that I am today. Finally, I thank my friends and family for their continuous support.

Chapter 1

Introduction

The purpose of this thesis is to investigate the sources and manifestations of the mismatch between design tools created by researchers and the needs of their intended users, industry professionals, and to demonstrate through a case study how that mismatch can be resolved. An initial usability study of one design tool, the Virtual Fit Tool, illuminates specific usability issues within the tool. The redesign of the tool based on the results of the initial usability study and the literature review, a subsequent usability study of the redesign, and an analysis of the results of the two studies illustrate how design tools can be developed to meet user needs and better support proper use by non-expert designers.
Design research is an interdisciplinary area of study with the goal of advancing design practice.
Current topics of interest include sustainable design, accessibility, design for additive manufacturing, and ergonomics. Much of this research culminates in valuable findings, but it is not practical
to expect industry professionals to keep up to date on all of the emerging literature from around
the world. Acknowledging this issue, many researchers create design tools to translate their work
into a more consumable format. These tools can exist in the form of guidelines, standards, process
maps, templates, web applications, or other media. Such resources can support design teams by
providing expertise in a specific domain, easing communication within the team and between the
team and other stakeholders, or facilitating various stages of the design process. For example, Luttropp and Lagerstedt synthesized a list of “Ten Golden Rules” for eco-design from a large body of literature to inform designers of environmentally-friendly design principles [1].
Despite the efforts by researchers to generate accessible tools, studies spanning the last four
decades have continuously reported that design tools are incompatible with industry practice and
used only minimally [2, 3, 4, 5, 6, 7]. As a result, design research has limited impact on popular
practice, and the quality of products being developed and sold stagnates in some respects. Many
design practices that are considered antiquated in academia remain dominant in practice (e.g.,
univariate analysis for evaluating ergonomic accommodation). Industry's slow adoption of recommendations from academia is especially detrimental to groups with specific needs that designers
may be unfamiliar with, like people with disabilities.
This thesis extends the current literature by using previously established user needs to redesign an existing design tool and, through the usability study of the redesign, identifying additional user needs and methods of improving design tool usability that are more powerful than considering user needs alone. The present work culminates in implications for the development of design tools that are applicable to all engineering and design research domains.

Chapter 2

Background

Design researchers have been developing tools for practitioners based on their work for many decades. Still, even modern design tools fail to meet the needs of design teams, and studies consistently find that industry utilization of design tools remains low [2, 3, 4, 5, 6, 7, 8, 9, 10]. Despite their potential to improve the quality of designed artifacts and positively impact society, design tools are generally not developed in a careful, thoughtful manner. Thus, they are not user-centric [11, 12]; they are difficult to apply [6, 8, 9, 13], are written in unfamiliar language [2, 9], fail to clearly communicate their contributions [9, 10, 14, 15], and do not accommodate designers' preferred ways of working [3, 7, 8, 9, 16, 17, 18, 19]. As a result, design research is not implemented in practice, and the diffusion of knowledge from academia to industry is impeded. For example, there are numerous free accessibility evaluation tools, and yet, many products still fail to meet basic accessibility objectives [3]. This section reviews the uses and importance of design tools and previously identified shortcomings, compiles and explains design requirements, and situates the current study within the existing literature.

2.1 Design Tool Uses and Benefits

Design is a complex, messy process that seeks to achieve many goals simultaneously (e.g.
efficiency, cost-effectiveness, functionality, etc.). Design tools support users by either providing
knowledge or capabilities relevant to project goals or by adding procedural structure to improve the design process.

Figure 2.1: Cards from Lockton's Design with Intent Toolkit [32]
Figure 2.2: Example of a Quality Function Deployment matrix [33]

Specialized knowledge in a number of domains may be necessary for a single design project.
Individuals, teams, and even firms may not be able to maintain all essential expertise internally
Design tools are a publicly available (often free) alternative way of accessing expert-level information. Examples of these types of tools include Luttropp and Lagerstedt's Ten Golden Rules for
eco-design [1], Lockton’s Design with Intent Toolkit (Figure 2.1) [21], Clarkson et al.’s Inclusive
Design Toolkit [22], and the ANSI/HFES 100 standard [23].
Other resources can introduce process structure [24, 25], encourage creative thinking [26, 27],
facilitate communication and knowledge transfer [4, 20, 21, 25], or promote team-building [28].
These functions help keep the design process organized and productive. Gantt charts [29] and
Quality Function Deployment (QFD) [30] belong to this category of tools (Figure 2.2) [31].
Design tools can come in many different forms. Some of the most popular are software, brief
written guidelines, longer booklets, card decks, static templates, and interactive templates. They
are generally accessible or downloadable from the internet or included in research articles. While
diverse in form and function, design tools' target users have similar characteristics, and the concepts and interfaces can be considered to have the same basic user needs.

2.2 Design Tool Failures

Radcliffe (2014) outlines causes of failure in design projects: “inadequate articulation of requirements, poor planning, inadequate technical skills and continuity, lack of teamwork, poor communication and coordination, insufficient monitoring of progress, [and] inferior corporate support.” Existing design tools can address all of these concerns, and studies have demonstrated a desire for design resources in industry [4, 7, 16, 34]. However, their use in industry practice is low. The concept of design tools is promising, but the execution is problematic and deters designers.
The poor quality of existing design tools has been independently established in the literature of
several different design subfields, including eco-design/Design for Environment, Universal Design,
and human factors and ergonomics. Shortcomings arise from the pervasive differences between
design researchers and industry design professionals and the exclusion of designers from the tool
development process [6, 20].
Researchers and industry professionals have different backgrounds and different ideas about
what is important. Researchers pay little attention to the investment associated with implement-
ing new methods, which is a primary consideration in industry. Practitioners are usually more
focused on specific product types than the general, abstract design theory of interest to researchers
[12]. Academia emphasizes the theoretical merits of tools while industry professionals feel more
comfortable with known, quantitative returns on investment [2, 17].
Researchers tend to prioritize academic interests over industry interests in the development of
design tools [12, 19], leading designers to find them “verbose” and “difficult to understand” [8].
Several studies report that design tools are developed without user input, so their failure to meet
designers’ needs is not surprising [6, 11, 20, 25]. Choi et al. surveyed design resource creators, and
some even responded that they were neutral about whether they believed their tools to be useful
[8].
A self-fulfilling feedback loop between researchers and designers forms: researchers do not
put much effort into design tools because they believe they will not be heavily used, and designers
do not use tools because they are poorly designed, validating researchers’ prediction. Zitkus et al.
(2013) calls the lack of care in design tool development and subsequent low rates of tool adoption a
“systemic problem.” To break this cycle and increase the utilization of design research, researchers
must develop tools that clearly meet user needs and appeal to designers before, during, and after
use.

2.3 User Needs

Discussions of design tool user needs are abundant in the literature, but the defined needs are
not consistent, and many are specific to a particular design domain. Table 2.1 displays the basic
user needs that are common across disciplines.

Table 2.1: Design tool primary and secondary user needs

Approachability: Aesthetics; Use of visuals
Usability: Understandability; Clear language; Clear instructions; Context; Interpretability; Confidence generation; Adaptability
Value: Clear benefit; Effectiveness; Efficiency

There are three primary user needs: approachability, usability, and value. The remaining user needs (secondary needs) all feed into these high-level, emergent characteristics. Many of the secondary needs are interrelated, but each is distinct to some degree and worth discussing.

2.3.1 Approachability

Approachability is the quality that encourages prospective users to pick up design tools for the
first time. A tool should have an inviting appearance that communicates a low use cost [2]. Burns
et al. (1997) found that designers frequently overestimate the cost of retrieving information from
design resources, leading them to choose not to seek out guidance from research. Conveying a low
use cost through an approachable interface mitigates user deterrence based solely on appearance.

Aesthetics

Aesthetics affect the approachability and perceived overall quality of design tools [11, 35].
Clean aesthetics make tools easier to read, navigate, and use properly. Poor aesthetics is a common
complaint encountered in user tests of design resources [21, 36, 37].

Use of Visuals

Design tools may need to communicate lots of information, but excessive text can look unattractive, be ineffective, and signal a higher use cost [15, 21]. Designers place value on visual displays of information, and incorporating visuals can break up large blocks of text [8, 36]. By making technical information more accessible, visually represented information can enhance user understanding, and visual results can simplify users' interpretation of tool outputs [11, 27, 38].

2.3.2 Usability

Lutters et al. (2014) and Lindahl (2005) point out that the effectiveness of a design tool is
determined entirely by its capacity to facilitate proper use. At a minimum, tools must be usable by
their target users. Making them usable by novices, however, can make valuable design methods
more accessible and further promote the ideologies contained within them. Furthermore, supporting an “enjoyable” experience encourages more thorough user engagement [27]. Design tools
should strive to be simple and intuitive so that users spend their time using the tool rather than
trying to figure out how to use it. Usability is a function of many other characteristics but can be
systematically evaluated through testing.

Understandability

Designers need to understand how, when, and why to appropriately employ design tools to
reap their full benefits [4]. One of the easiest and quickest ways to judge prototypes is to test
them oneself (self-referential evaluation), but a researcher who has spent lots of time creating a
tool such that it makes sense to them may not recognize possible sources of confusion for others.
Users should be able to understand how to use a design tool quickly with minimal instructions.
Clear demonstration of tools’ processes and underlying principles can help users to more fully
understand how and why tools work [27].

Clear Language

The overarching purpose of all design tools is to disseminate knowledge. Therefore, tools
should be accessible to non-experts. To encourage use and learning by novices, design tools should not be overly formal or esoteric [24, 31]. They should use language familiar to designers, different
from the language used by design researchers (i.e. “non-scientific” language [36]) [2, 38, 39].

Clear Instructions

Tool instructions may be the first part of a design tool that a user interacts with. A first impression conveying simplicity, clarity, and relevance will prime the user for confident use. To ensure that users give instructions the attention necessary to absorb the information they need to successfully utilize tools, instructions should be brief and engaging; visuals can help make them so.

Context

Design theory and methods research is most often non-specific and meant to be applied to
many design scenarios, and this generality can come off as irrelevance if authors are not careful
with their use of abstractions [2, 3, 10, 27]. Designers need to understand how tools are relevant to
their design problem and how to apply them to a particular context. A balance needs to be struck
between being concrete enough to show users where to start but abstract enough to be applicable
to a variety of design projects [27]. Straying too far in either direction will limit the usefulness of
the resource.

Interpretability

Research can be much more exploratory than industry design practice, where deliverables and
deadlines are the focus. Designers may want more definitive outputs than researchers, who are
comfortable with broad discussions of results [15]. Tool outputs can vary, and not all tools are
meant to provide decisive guidance, but results should nevertheless be actionable and easy to interpret [6, 35, 40].

Confidence Generation

If a tool does not generate confidence in its users, they may feel unsure of the results and
thus doubt the value of the design tool [11, 26, 35, 41, 42]. Uncertainty can lead users to spend
more time validating the output, increasing the use cost and decreasing the perceived efficiency.
Confidence generation is an emergent effect that can be weakened by several factors, including
poor aesthetics, ambiguity, excessive abstractions and lack of examples, confusing language, and
vague output. Yargin and Crilly (2015) recommend “continuous feedback” as a way to reassure
tool users that they are on the right track.

Adaptability

The nature of design work is fluid and not conducive to strictly structured, linear methods.
Individual designers and firms frequently modify or wish to modify methods from academia to fit
their exact needs [1, 5, 9, 27, 34, 40]. Designers prefer flexible, informal tools that accommodate
this personalization [6, 7, 11, 19, 21, 25, 38, 39]. Adaptability is also important for tools to function
in contexts that creators may not foresee.

2.3.3 Value

Design tools must add value to the design process in some way for industry professionals to
expend the time and effort to seek them out and use them [9]. The perceived added value of a
design tool is a result of weighing the apparent benefits and use cost [35]. The actual and apparent
benefits may not be identical; many design tools offer value but fail to successfully communicate
it [9]. Similarly, the apparent and actual use cost may be different if a tool has an unapproachable
appearance. These factors are most important for initial tool adoption. For complete, proper,
and continued use, design tools also need to prove to be both effective and efficient at assisting
designers to make progress toward project goals [9, 41].

Clear Benefit

Designers may pass judgement regarding the use cost and value of a tool before actually interacting with it, emphasizing the importance of clear communication of benefits and low use cost
before use [9, 10]. Value is best conveyed in terms of the outcome of the design process. For
example, advertising that a tool aids in accessible design is not compelling to a company that does
not consider accessibility a priority [19]. Expressing that the tool will lead to a product that is
usable and desired by more people provides a more concrete and widely appealing idea of the
tool’s benefit. The benefit of a resource should be clear to not only users but also management and
clients. These stakeholders also influence design tool implementation [3, 7, 27].

Effectiveness

Effectiveness, although seemingly obvious, can be a point of contention between researchers and designers. A tool that helps users explore a problem but does not provide actionable output may
be considered effective to researchers but not to industry professionals who have rigidly defined
deliverables and deadlines. Tools that are effective in theory but do not actually enhance the work
of designers or advance their project in some way will not be used in industry [9].

Efficiency

Efficiency is a primary consideration in use cost calculations [3, 9, 27, 35]. Requiring too
much time is a major reason given for the dismissal of design tools [4, 7, 11]. The time and effort
to use a tool can be affected by its construction, the clarity of its instructions, its usability, its
interpretability, and its confidence generation. The scope of research may be much larger than
what most tool users will be interested in, so only the necessary information should be included in
design tools [27].

2.4 Methods Used in Previous Work

Prior work has either studied how designers work to determine their needs or evaluated design tools among researchers or small groups of designers. The poor quality of design tools has been recognized many times over, and researchers have hypothesized ways to increase their use in industry, but these hypotheses have not been tested. This thesis will utilize the literature detailing designer preferences to demonstrate how existing design tools can be improved and how new ones can be developed to better meet user needs. Tests with non-expert users rather than self-referential evaluation will be used to assess an existing design tool, the Virtual Fit Tool, and a redesigned version to determine the effects of design choices grounded in the identified needs on overall usability.

Chapter 3

Methods

This section details the process through which the redesigned Virtual Fit Tool (VFT) was developed and evaluated. First, the rationale for using the VFT as the example for the present thesis is explained. Second, an initial usability study of the original VFT, and how it informed the redesign, is described. Third, the development of the redesigned tool and a study evaluating its usability are presented.

3.1 The Case for the Virtual Fit Tool

The VFT was selected for analysis and redesign in the present study for several reasons:

1. The VFT is intended to help designers explore anthropometric data, especially for
use in designing artifacts that interact with the human body. Ergonomics and product
fit with respect to users’ bodies are critical considerations in the design of physical
products, making the tool highly relevant and widely applicable to design practice.
2. The VFT is a free, publicly available resource residing in a downloadable Excel
spreadsheet on the Human Factors and Ergonomics Society’s (HFES) website [43].
3. The co-developer of and data embodied in the VFT were accessible. Professor Matt
Parkinson co-created the VFT, maintaining all rights to its use, and advised this thesis.
Figure 3.1: The Virtual Fit Tool

The VFT (Figure 3.1) is a powerful resource for determining how a product or physical artifact's dimensions will accommodate users' bodies. The Excel sheet includes 21 body dimensions drawn from anthropometric data representing the US civilian population. Users first specify the gender ratio in the row marked “Fraction Male.” They then go on to enter measurements into a measure's “Low” column if they wish to know the percentage of people with a measurement greater than the value that they enter, or into a measure's “High” column if they wish to know the percentage of people with a measurement less than the value that they enter. The male and female percentiles given serve as a reference. Entering values into both columns will calculate the percentage of people that fall within the range between the low and high measurement values. Accommodation percentages (the percentage of people whose bodies are fit) for men, women, and the total population appear in the three pink columns. Users can enter measurements for as few or as many of the 21 measures as they wish or as needed for their design problem. In the bottom right corner of the sheet, total multivariate accommodation (explained further in the following section) for men, women, and the total population is displayed.

3.1.1 Multivariate Accommodation

The VFT sets itself apart from standard tables of anthropometric data through its capacity
to calculate multivariate accommodation. Multivariate accommodation considers users’ fit on all
dimensions simultaneously, which is distinct from a univariate calculation that evaluates users’ fit
on each dimension separately.
As an illustrative example, consider the scenario of designing a seat (also used later in the
usability studies). The designer will want to consider users’ width when sitting, since they will
need to fit within the width of the seat; the length of the upper half of their legs, since they will
not want the front end of the seat putting pressure on the backs of their calves; and the length from
the bottom of their feet to the inside of their knees, assuming that they will not want their feet
to dangle. If the designer wanted the seat to accommodate 90% of the population, a univariate
analysis would ensure that 90% of people fit within the seat’s width, 90% of people do not feel
the seat on their calves, and 90% of people’s feet reach the ground, all independently. The flaw
in this method is that the widest 10% of people are not necessarily also a part of the 10% with
the shortest calves or the 10% with the longest thighs. 90% of people will be accommodated by
each of the three dimensions separately, but fewer than 90% will be accommodated by all three
at the same time, and a person is only considered fully accommodated if they are fit well on all
dimensions. Some may fit within the seat but have their feet dangle. Others may have their feet
touch the ground but have the seat pressing into their calves. As a result, more than 10% of people
will be uncomfortable, or disaccommodated, in at least one way, and the univariate analysis does
not lead to the intended percent accommodation goal. Alternatively, a multivariate analysis takes
all of an individual’s measurements into account simultaneously to determine whether or not they
are fully accommodated. This calculation is more complicated and requires more detailed data but
yields more accurate accommodation rate estimates than univariate analyses.
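
The shortfall of the univariate approach is easy to demonstrate numerically. The Monte Carlo sketch below draws a synthetic, correlated population for the three seat-relevant measures and compares per-dimension accommodation with joint accommodation; every mean, standard deviation, and correlation here is an illustrative placeholder, not a value from the CAESAR data behind the VFT.

# Monte Carlo sketch: univariate vs. multivariate accommodation for a seat.
# All distribution parameters are illustrative, not the VFT's actual data.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Synthetic population: hip breadth, buttock-popliteal length, popliteal
# height (mm), modeled as correlated normals for illustration.
means = np.array([390.0, 495.0, 430.0])
sds = np.array([35.0, 30.0, 28.0])
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.5],
                 [0.2, 0.5, 1.0]])
cov = corr * np.outer(sds, sds)
pop = rng.multivariate_normal(means, cov, size=n)

# Set each dimension at its own 90th/10th percentile (univariate targets).
width = np.quantile(pop[:, 0], 0.90)   # seat width: fit if hip breadth < width
depth = np.quantile(pop[:, 1], 0.10)   # seat depth: fit if length > depth
height = np.quantile(pop[:, 2], 0.10)  # seat height: fit if popliteal > height

fits = np.column_stack([pop[:, 0] < width, pop[:, 1] > depth, pop[:, 2] > height])
print("univariate:", fits.mean(axis=0))          # ~0.90 on each dimension
print("multivariate:", fits.all(axis=1).mean())  # noticeably below 0.90

With positively correlated measures, the joint rate falls between the independence floor of 0.9^3 ≈ 73% and the univariate 90%, which is why a multivariate calculation, rather than three separate percentile lookups, is needed to hit a stated accommodation target.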

3.2 Initial Usability Study

An initial study was designed to evaluate the usability of the original Virtual Fit Tool developed
by Professors Matt Parkinson and Matthew Reed. However, to extract as much information as
possible from the study and to promote the success of study participants, some modifications were
made to the tool. Six alternative versions of the VFT were created to provide slightly different
user experiences and inform the direction for the redesign. Factors identified as important in the
literature review were varied between versions to test their impact on overall tool usability and user
preference.

3.2.1 VFT Versions

The six versions are combinations of three different tool interfaces and two different sets of
instructions. Since the original VFT included only the names of the various measures, images
depicting each measure were added to all of the versions so that lack of familiarity with the formal
anthropometric language used would not present a barrier to study participants. The first tool
interface, hereafter referred to as the “full VFT,” included all 21 measures contained in the original
VFT (Figure 7.1 in Appendix A). The second interface, or “partial VFT,” (Figure 7.2 in Appendix
A) included only the three measures that were relevant to the design problem to be given to study
participants, buttock-popliteal length (Figure 3.2), hip breadth (Figure 3.3), and popliteal height
(Figure 3.4). The third interface, or “partial VFT with range,” also included only the three relevant
measures and displayed user entries on the image of each measure as a range of accommodated
measurements (Figure 7.3 in Appendix A).

Figure 3.2: Buttock-popliteal length
Figure 3.3: Hip breadth
Figure 3.4: Popliteal height
One instruction set is an abbreviated version of the instructions provided by HFES on their
website with the tool (with the slides not relevant to the study task removed). The relevant slides
were cropped and printed out in color on one sheet of paper, front and back. These instructions
(“original instructions,” shown in Figures 7.4 and 7.5 in Appendix A) explained where to enter
values but did not provide much information regarding how to contextualize a design problem

within the VFT. A second set of instructions, which will be referred to as “prompting instructions,”
was created as a simple numbered list with the goal of facilitating users’ thinking about how they
ought to use the tool (Figure 7.6 in Appendix A). For example, it contains questions, such as “How
do body parts interact with the product?” and “Are you minimizing, maximizing, or establishing
a range?” These questions are intended to prompt the user to consider factors that will influence
where they should enter values, which the original instructions did not emphasize. The prompting
instructions were altered slightly between the tool combinations to reflect the interface that they
were paired with.
The gender ratios were altered slightly between each tool so that they would provide marginally
different answers, and this also presented the opportunity to gauge how different permutations
affected user confidence in answers. The gender ratio row was hidden on the sheets that participants
interacted with. The versions are identified by a letter A-F and are shown in Table 3.1.

Table 3.1: VFT versions used in initial usability study

Identifier Interface Instructions Gender Ratio


A Full Original 0.50
B Full Prompting 0.45
C Partial Original 0.55
D Partial Prompting 0.60
E Partial with Range Original 0.40
F Partial with Range Prompting 0.65

The different interfaces and instruction sets were generated to examine the effects of direct
versus thought-provoking instructions and more versus fewer measures provided in terms of approachability, usability, value, and other user needs.

3.2.2 Initial Usability Study

The six versions of the VFT were tested through the presentation of a simple design problem to participants, each of whom used two versions to solve the problem. Participants were recruited verbally at the Penn State University Engineering Design Capstone Showcase and offered ice cream for their participation. They were told that they would be asked to complete a short design challenge using a tool, answer a few questions about it, then complete the challenge again with a second tool, answer a few questions about it, and then compare the two tools at the end.
Participants were given the following design problem:

“Beaver Stadium is getting new bleachers! We want the product to work for about
90% of the population in the most efficient way possible. A seat is considered to work
for a user if their hips fit within the arm rests, the front of the seat doesn’t rub the back
of their legs, and their feet are not dangling. What is the maximum depth, minimum
width, and maximum height the seat surface should be (in millimeters)?”

Participants were handed a survey (Figures 7.7 and 7.8 in Appendix B) with the prompt at the top
while they were waiting in line to participate so that they had time to familiarize themselves with
the design prompt before beginning.
Each participant was asked to come up with answers to the problem using two tool versions, one at a time. Thirty ordered permutations of two different tool versions exist. Each permutation was listed and numbered 1-30, and vectors containing the numbers 1-30 were randomly ordered in MATLAB. The randomly ordered numbers were then assigned to participant numbers 1-30, 31-60, etc. in the order that they were generated. The associated tool permutations were pre-assigned to the participant numbers and written on the survey forms. Participants were numbered in the order of their participation and received the corresponding tool permutation, as sketched below.
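
A minimal sketch of this counterbalancing scheme follows, written in Python as a stand-in for the MATLAB procedure; the per-block shuffling details and names are assumptions for illustration.

# Sketch of the tool-pair assignment: a Python stand-in for the study's
# MATLAB procedure (per-block shuffling details are assumed, not documented).
import itertools
import random

VERSIONS = list("ABCDEF")
# 30 ordered pairs of two distinct versions: order of use matters.
PAIRS = list(itertools.permutations(VERSIONS, 2))
assert len(PAIRS) == 30

def assign(participant_number):
    """Return the pre-assigned (first, second) tool versions for a participant.

    Each block of 30 participants receives every pair exactly once, in an
    independently shuffled order, so versions are balanced within a block.
    """
    block = (participant_number - 1) // 30
    order = random.Random(block).sample(range(30), 30)  # deterministic shuffle per block
    return PAIRS[order[(participant_number - 1) % 30]]

print(assign(1), assign(31))  # first participant of blocks 1 and 2

Because every ordered pair appears exactly once per block of 30 participants, each version is used first and second equally often once a block completes.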
Upon consenting to participate in the study and confirming that they understood the design prompt, participants were given their first tool and instruction set. They were told that they could look over the instructions and continue to reference them throughout the trial. They were also informed that they would be timed but that they were subject to no time constraints. Once they began using the VFT interface, a stopwatch began timing. After reaching answers using the tool, they informed the researcher and wrote their answers on the survey form. The researcher stopped the timer and recorded the time in an Excel spreadsheet with the participant number and tool permutation. Participants then rated the tool's approachability, aesthetics, usability, clarity of instructions, ease of understanding tool output, efficiency, and effectiveness on a five-point scale (from “very insufficient” to “more than sufficient”) and answered an open-ended question about what they believed could make the tool more approachable, usable, or valuable. Once finished with these questions, they were given their second tool and instruction set. The same procedure was followed, and participants wrote down their new answers and evaluated the second tool. Lastly, participants chose which tool's answers they felt more confident in if their answers differed, explained why, and indicated if they were a student, faculty member, or professional. Participant questions about the prompt and tools were answered minimally to help participants complete the trial. Observations were also recorded throughout.

Thirty participants (28 students and two professionals) completed the study, so every VFT version
was used the same number of times and was used first and second the same number of times.
The first two trials both involved the full VFT interface, and both participants were under the
impression that they had to use every row in the tool. This took significantly more time than
expected and caused frustration. After the second trial, participants assigned a full VFT interface
were told that they would not need to use all of the rows. No further explanation or details were
given. Two showcase attendees began the study but did not finish. No data from these participants
is included in the data analysis.

3.2.3 Data Analysis

After the study was completed, participants' answers were entered into the proper cells in the tool interfaces that they used, and the total accommodation that their design achieved was found and recorded. (Proper use of the tool entailed setting a low value for buttock-popliteal length, a high value for hip breadth, and a low value for popliteal height, such that the total accommodation was calculated to be around 90%.) Average times and characteristic ratings were also calculated for each version and each VFT interface. Times and ratings are compared across individual versions, interfaces, and instruction sets and between identical interfaces and instruction sets. Average instruction ratings were calculated for the two instruction sets. The number of times tool versions were trusted over another tool was counted for each version, interface, and instruction set. Conclusions are drawn from quantitative comparisons of the numerical data for the various versions, interfaces, and instruction sets. Finally, answers to the open-ended questions and observations made during the study were qualitatively analyzed for recurring themes and mapped to the other results where possible. Results pertinent to the redesign process are discussed briefly below, and all results are reported comprehensively in the next chapter.

3.3 Redesign of the VFT

In addition to the literature review, notable findings from the initial usability study informed priorities for the redesign of the VFT. First, one of the most striking and potentially problematic behaviors observed during the study was participants' minimal interaction with instructions. There were problems identified in both sets of instructions, and many participants simply did not want to take the time to carefully review either instruction set. A goal for the redesigned VFT was therefore to necessitate no instructions. Second, several participants using tool versions A and B believed that they needed to enter values for every measurement on the sheet, including those that were not relevant to the design problem. Another objective of the redesign was thus to minimize the risk of considering irrelevant measures. Third, many users thought that they needed to enter both a low and high value for every measure, while buttock-popliteal length required only a low value, hip breadth required only a high value, and popliteal height required only a low value, as implied by the prompt. The redesign also sought to prevent users from entering values unnecessarily. Guided by the above goals and the user needs identified in the literature, the redesigned VFT was iteratively prototyped on paper and then translated into an R Shiny web application, an interactive app coded in R, with the help of Professor Matt Parkinson.
The redesign is accessible from a webpage on the OPEN Design Lab’s website. The launch
page includes a brief three-sentence introduction to the Virtual Fit Tool; links to access the VFT
calibrated to display anthropometry (unaltered body measurements) for seated measures, standing
measures, or all measures (contained within the original VFT); links to access the VFT calibrated
to display design variables for four common ANSI/HFES 100-supported artifact types: seating,
seated workstations, standing workstations, and sit/stand workstations; links to access the VFT
calibrated to display seated design variables (body measurements plus clothing tolerances), standing design variables, and all design variables (from ANSI/HFES 100 [23]); and more information
about the VFT, including where the tool and data come from and how multivariate accommodation
is calculated (Figures 3.5 and 3.6). Like the original VFT, the data is CAESAR data [44] that has
been reweighted using NHANES data [45] by Professor Matt Parkinson.

Figure 3.5: Redesigned VFT launch site, 1 of 2
Figure 3.6: Redesigned VFT launch site, 2 of 2
These links are intended to lead to the R Shiny application calibrated to include only relevant
measures according to the link used. This automatic calibration is meant to mitigate confusion
regarding which measures are important. Only an R Shiny interface for seating scenarios was
generated for the usability study of the redesign. On the interface (Figures 3.7 and 3.8), the names
and images of each relevant measure are presented.
Next to each image are reference percentiles for men and women and a set of radio buttons
with the prompt “Set a:” and options “min,” “max,” “range,” and “none.” There are numeric input
fields for a minimum and a maximum, but neither appears unless a radio button necessitating
them is selected. The labels of the numerical input fields are worded “Fits [measure] above”
for minima and “Fits [measure] below” for maxima. The seating scenario is configured with the
proper button for each measure automatically selected (maximum for seated hip breadth, minimum
for buttock-popliteal length, and minimum for popliteal height), to preclude confusion regarding
which parameter(s) must be specified. Once the user inputs a value, accommodation for men,
women, and the population according to the gender ratio is calculated and displayed at right. Total
accommodation is calculated by clicking a button at the top of the page. The accommodation rate
outputs are displayed as “X% of [gender] fit.” The structure and functionality of the interface are
hypothesized to circumvent the need for explicit instructions. The redesigned VFT is evaluated in
the study that follows.
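
The per-measure input logic just described can be summarized compactly. The sketch below is a Python paraphrase of the R Shiny behavior, not the tool's source; the function name and the normal population model in the example are hypothetical.

# Sketch of the redesigned VFT's per-measure input logic. The real tool is an
# R Shiny app; this Python paraphrase uses hypothetical names.
from scipy.stats import norm

def measure_accommodation(choice, cdf, low=None, high=None):
    """Map the "Set a:" radio choice to the fraction fit on one measure.

    cdf(x) gives the fraction of the population measuring below x.
    'min' shows only the "Fits [measure] above" field, 'max' only the
    "Fits [measure] below" field, 'range' shows both, 'none' hides both.
    """
    if choice == "none":
        return 1.0                 # measure left unconstrained
    if choice == "min":
        return 1.0 - cdf(low)      # fit requires measure > low
    if choice == "max":
        return cdf(high)           # fit requires measure < high
    if choice == "range":
        return cdf(high) - cdf(low)
    raise ValueError(f"unknown choice: {choice}")

# Example, matching the tool's "X% of [gender] fit" output format
# (illustrative normal model, not the tool's data).
frac = measure_accommodation("min", norm(400, 28).cdf, low=400)
print(f"{frac:.0%} of women fit")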

Figure 3.7: Redesigned VFT for seating, 1 of 2
Figure 3.8: Redesigned VFT for seating, 2 of 2

3.4 Usability Study of the Redesigned VFT

In the usability study of the redesigned VFT, participants used the redesign and the original,
full-length VFT (with images of measures added) in a random order to address the same design
prompt given in the initial usability study (reworded slightly for clarity). The study was conducted
online via a web-based survey due to COVID-19 concerns. The original VFT and the abbreviated
version of the instruction set provided by HFES used in the initial usability study were both embedded into a web page, and the redesign was hosted on another web page. The gender ratio was
set to 65% male in the original and hidden, and the gender ratio in the redesign was set to 50%
male.
Engineering students and professionals were recruited to participate via email through Penn
State University and the Human Factors and Ergonomics Society. Included in the recruitment email
was a link to a consent form that redirected consenting participants to a survey that facilitated the
study (Figures 7.9-7.12 in Appendix C). The survey confirmed participant eligibility; confirmed
that each participant understood the design prompt; and asked each to rate and briefly explain their
experience with design, ergonomics, and human factors on a five-point scale from “no experience”
(1) to “expertise” (5) and specify whether or not they had used the VFT before. Participants were
then given a link to either the original VFT or the redesign; asked to provide answers to the prompt
and rate their confidence in their answers on a five-point scale (from “not confident at all” to “very
confident”); and then asked to rate the tool’s approachability, usability, ease of understanding tool
output, and efficiency on the same five-point scale used in the initial usability study survey (from
“very insufficient” to “more than sufficient”). Participants were also able to specify what they
liked and disliked about the tool in an open-ended section. Once finished with their first tool, the
survey presented the other and repeated the same questions for the second tool. At the end of the
survey, participants indicated which tool performed better on approachability, usability, ease of
understanding output, confidence generation, and efficiency and to what extent using a five-point
scale. They were also able to leave additional comments in an open-ended field. Thirty-two participants completed the study. Only one had used the VFT previously. One participant commented that they

did not use the original VFT, and another spent far less time on the survey than other participants
and gave uniform responses, so these two participants’ data is excluded from the data analysis.

3.4.1 Data Analysis

As in the initial usability study, participants’ answers were entered into the proper cells in the
tool interfaces that they used, and the total accommodation that their design achieved was calculated and recorded in connection with their survey responses. Possible failure modes were also
determined by entering participants’ answers into incorrect fields such that tools would output 90%
accommodation rates. The distribution of accommodation rates was analyzed by order in which
the tools were used, and the relationship between design experience level and accommodation
rates achieved was evaluated. Average characteristic ratings were calculated for the original and
redesigned VFTs and compared. Average ratings were also analyzed by order in which tools were
used. Finally, comments left in response to the open-ended questions were qualitatively analyzed
for recurring themes.
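
One way to operationalize that failure-mode search is sketched below: take a participant's three answers, try every assignment of answers to measures and bound types, and flag the combinations that would make a tool report roughly 90% accommodation. Here joint_accommodation is a hypothetical stand-in for the VFT's actual calculation.

# Sketch of the failure-mode search: which misplacements of a participant's
# three answers would make a tool report roughly 90% accommodation?
# joint_accommodation is a hypothetical stand-in for the VFT's calculation.
import itertools

MEASURES = ("hip_breadth", "buttock_popliteal_length", "popliteal_height")

def find_failure_modes(answers, joint_accommodation, target=0.90, tol=0.02):
    """Yield {measure: (bound_type, value)} assignments that hit the target.

    answers: the participant's three numeric answers, in any order.
    """
    for values in itertools.permutations(answers):      # which answer goes where
        for bounds in itertools.product(("low", "high"), repeat=3):
            entry = {m: (b, v) for m, b, v in zip(MEASURES, bounds, values)}
            if abs(joint_accommodation(entry) - target) <= tol:
                yield entry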

Chapter 4

Results

4.1 Initial Usability Study Results

The initial usability study's results reflect concerns in the literature about poor design tool usability. Participants struggled to use all of the VFT versions properly and to satisfy the design prompt. The target total accommodation rate was 90%. The mean overall accommodation for seat designs is 13.1%, and the median is 1%. Rounding each accommodation percentage to the nearest integer, 34 of the 60 attempts (56.7%) resulted in less than or equal to 1% accommodation. Only 16 attempts (26.7%) achieved greater than or equal to 10% accommodation. A single participant achieved the target of approximately 90% accommodation (90.8%). One other participant reached 99.2% accommodation, which surpasses the target. These successes were achieved with tool versions E (99.2% accommodation) and F (90.8% accommodation), partial with range interfaces with each set of instructions. Both successes occurred in the participants' second attempts. Two other attempts reached accommodations around 80%, which can be reached by seeking 90% accommodation rates on several individual measures (i.e., univariate analysis). Accommodation rates achieved with each tool version are shown in Figure 4.1.

Figure 4.1: Accommodation rates achieved with each VFT version in the initial usability study
The surprisingly low accommodation rates achieved by participants are a poor testament to the VFT's overall usability. Some of the issues participants ran into are articulated in the open-ended
sections of the surveys. All interfaces elicited confusion regarding how to interpret the VFT’s
output. Multiple participants commented that entering inputs into the interfaces could be more
intuitive. Three comments suggested allowing users to set a target accommodation rate and have
the tool work in both directions (i.e., being able to calculate accommodation from measures and
find satisfactory measures from a target accommodation).
Version B was used longest, on average, and version D was used for the least amount of time,
followed closely by version C. Even omitting the two trials completed before participants were told
that they would not need to use all rows of full interfaces, version B’s average use time is 6:59, and
it remains the longest used tool. Version A was used for less time than B but still longer than any
partial or partial with range version. The partial and partial with range versions took significantly
less time to use compared to the full versions, with the difference between the averages falling
close to three minutes. This equates to the full interfaces being used approximately 75% longer
than the others. Tools accompanied by the VFT’s original instructions were used for less time than
those with the prompting instructions. Trial times are summarized in Table 4.1, and the times of
all trials are recorded in Table 7.1 in Appendix D.

Table 4.1: Average tool version use times in the initial usability study

Tool Version(s) Average Time


A 6:22
B 7:46
C 3:38
D 3:36
E 4:07
F 5:03
Full interface 7:04
Partial and partial with range interfaces 4:06
Original instructions 4:42
Prompting instructions 5:28

Table 4.2: Average tool version characteristic ratings in the initial usability study

Tool Version(s)   Approachability   Aesthetics   Usability   Output Clarity   Conf. Gen.   Efficiency   Effectiveness   Instr. Clarity
A                 3.3               3.8          3.8         3.7              3.7          3.4          3.7             3.8
B                 3.4               3.7          3.5         2.7              2.8          2.9          3.1             3.3
C                 3.9               3.8          3.9         3.6              3.2          3.8          3.5             3.3
D                 3.8               3.3          4.2         3.9              3.4          4.0          3.6             3.7
E                 4.3               4.2          4.4         3.8              3.9          4.3          4.2             4.1
F                 3.9               3.9          4.2         3.5              3.7          4.0          3.8             2.8
Full              3.4               3.7          3.7         3.2              3.3          3.2          3.4             –
Partial           3.6               3.6          4.1         3.8              3.3          3.9          3.6             –
Partial/range     4.1               4.1          4.3         3.7              3.8          4.2          4.0             –

Average ratings for each version and interface appear in Table 4.2. The partial with range versions' characteristics were rated highest in all but one category, with ratings generally falling around 4/5 (defined as “sufficient” in the survey). The most variability between identical interfaces with different instruction sets occurred between versions A and B, with five of the eight characteristics being rated at least 0.5/5, or 10% on the five-point scale, differently. Between C/D and E/F, most characteristics were rated similarly. Only four out of 40 average tool characteristic scores were below 3/5 (or less than “barely sufficient”), with three of the four being version B attribute ratings.

The partial with range versions were rated highest on approachability at 4.1/5, followed by the partial versions at 3.6/5 and the full versions at 3.4/5. Seven comments requested that the full versions include fewer measures for simplicity. Aesthetics between the three interfaces scored within 10% of each other on the scale, which could be expected due to minimal aesthetic variability between them. The aesthetics were generally considered acceptable with room for improvement. Participant suggestions include using a more aesthetically pleasant platform (compared to Excel), replacing the anthropometric images with clearer ones, and providing a responsive visual representation of entries (e.g., a person whose measures change size).
Partial with range interfaces were rated the highest for usability at 4.3/5, and full interfaces
were rated lowest on this metric at 3.7/5. Several participants commented that they enjoyed the
partial and partial with range interfaces because they had fewer measures than the full interface.
However, a few participants commented that the partial and partial with range interfaces should
still be made more intuitive. The fairly positive overall characterization of the VFT’s usability tells
a very different story than does the actual success rate of participants; despite evaluations hovering
around “sufficient,” only one participant of 30 achieved the design goal.
Output clarity was rated approximately the same for the partial and partial with range interfaces,
slightly higher than for the full interfaces. This metric was evaluated as between "barely sufficient"
and "sufficient," on average, despite several comments requesting better instruction on how to
interpret tool output. Confidence generation ratings were lowest for version B (2.8/5) and highest
for version E (3.9/5). The full and partial interfaces scored the same, while the partial with
range versions averaged 0.5 points higher. Five comments expressed that the interfaces with fewer
measures generated more confidence because of their simplicity relative to the full versions.
Partial with range interfaces scored highest on efficiency (4.2/5), followed by partial interfaces
(3.9/5) and then full interfaces (3.2/5). Finally, effectiveness trended the same way, with partial
with range versions rated 4.0/5, partial versions 3.6/5, and full versions 3.4/5. No open-ended
participant feedback explicitly discussed efficiency or effectiveness.
The original instructions were rated roughly 0.5 points higher for clarity than the prompting
instruction set, a fairly small difference (see Table 4.3). The same instructions were rated differently
on average when accompanied by different interfaces, indicating interdependence between perceived
tool quality and perceived instruction quality. Instruction clarity was rated most differently
between identical interfaces E and F, with E's instructions scoring 1.3 points (26% of the scale)
higher than F's. Both positive and critical comments about the two instruction sets appeared in
responses to the open-ended survey questions. Participants liked the practicality and straightforwardness
of the original instructions, with their tutorial-like images of the tool, as well as the linear
thought process facilitated by the prompting instructions. Several participants were confused about
which column to enter values into, which the prompting instructions address more thoroughly.
Users also wanted better explanation of how to use tool output and how total accommodation
was calculated. Some also desired less wordy instructions.

Table 4.3: Average instruction clarity ratings in the initial usability study

Instruction Set          Average Rating (1-5)
Original instructions    3.7
Prompting instructions   3.3

Several participants looked over the instructions only briefly or tried to use the tools without
consulting the instructions at all, which could have contributed to some users' confusion and tool
misuse. For example, multiple participants were observed writing down one design specification
at a time with significant time in between, indicating that they were examining measures
independently and not utilizing the multivariate analysis that the VFT supports. Users were unlikely to
have known that they needed to use the multivariate total accommodation without carefully
consulting the instructions. Participants' lack of engagement with both instruction sets highlights the
importance of inherent usability and intuitiveness in design tools. Based on the behaviors observed
in this study, tools designed to require minimal instructions may be the most conducive to user
success.
Full VFT interfaces were trusted less often than partial and partial with range versions, with the
exception of version D being trusted the same number of times as A and B (3 of the 10 times they
were used), as shown in Table 4.4. Version C was trusted most often (8/10 times). Versions with
the original instructions were trusted more frequently than tools with the prompting instructions.
Only one participant declined to indicate trust in one of their tool versions over the other; the
remaining 29 all trusted one more than the other. Reasons cited included one tool having clearer
instructions, fewer measures making the non-full versions easier to use, more measures making the
full versions more robust, and better understanding on the second attempt. Instruction clarity was
the most frequently cited reason, most often in support of versions with the original instructions.

Table 4.4: Trust in tool versions in the initial usability study

Tool Version(s)          Times Trusted / Times Used
A                        3 / 10
B                        3 / 10
C                        8 / 10
D                        3 / 10
E                        7 / 10
F                        5 / 10
Full                     6 / 20
Partial                  11 / 20
Partial with range       12 / 20
Original instructions    18 / 30
Prompting instructions   11 / 30

Interestingly, participants who achieved 0% accommodation commented that the tools were
straightforward and rated tool characteristics, including usability, favorably. Although only a single
participant succeeded in reaching the target efficiently, most tool characteristic averages scored
"barely sufficient" or better, including clarity of instructions (except for version F). No participants
expressed doubt in more than one set of answers. These results suggest that users were not aware
that they were using the VFTs incorrectly.
Across interfaces and instruction sets, participants had trouble understanding which rows and
columns they should be entering values into. Multiple participants working with full interfaces
used unnecessary rows, and some thought they had to enter values into both columns and were then
confused about how to translate that into one design specification (e.g., seat width). According to
usability ratings and comments, providing users with only the rows that they need alleviates this
struggle. All comments from the open-ended survey questions and their frequency are presented
in Table 7.2 in Appendix D.

4.2 Redesigned VFT Usability Study Results

As in the initial usability study, participants struggled to achieve the target accommodation
with both the original VFT and the redesign. However, the redesigned VFT was preferred over
and rated more favorably than the original. Eighteen participants used the redesign first and then
the original, and 12 received the original first and then the redesign. The order in which
participants used the tools influenced their tool characteristic ratings.
The total accommodation rates achieved using both the original and redesigned VFTs appear
in Figure 4.2. Most attempts resulted in accommodation rates either below 30% or above 70%.
The mean and median accommodation rates reached using the original VFT are 21.9% and 1.4%,
respectively. The redesign's mean and median accommodation rates are 24.9% and 3.0%. As in
the previous study, many total accommodation rates are clustered in the 0-10% range.

Figure 4.2: Accommodation rates achieved with the original and redesigned VFTs in the second
usability study

Four participants achieved a roughly 90% accommodation rate using the original tool, and one
achieved an accommodation rate of approximately 80%. As noted previously, total multivariate
accommodation rates around 70-80% can be reached by seeking 90% accommodation rates on
multiple individual measures. One approximately 90% rate and the approximately 80% rate were
reached by participants who used the original tool first. The other three 90% rates were achieved
by participants who used the original second.
Five participants achieved an overall accommodation rate of about 90% using the redesign, and
three others reached accommodation rates in the 70s. One 90% accommodation rate and two rates
in the 70s were reached with the redesign by participants who used it first; four more roughly 90%
overall accommodation rates and one rate in the 70s were achieved using the redesign after the
original.

Two participants achieved 90% accommodation rates using both the original and redesign,
accounting for four of the nine successful attempts. While there was no relationship between self-
rated experience level and accommodation rate in general, one of the participants who succeeded
with both tools reported having 14 years of experience in human factors engineering and was the
only participant to report having used the VFT before. Of the other successes, none were achieved
in the participants’ first attempt. Two were accomplished using the original VFT second, and the
other three were accomplished using the redesign second.
Overall, the distributions of total accommodation rates achieved in the first and second attempts
were similar. As shown in Figure 4.3, despite having previously seen the correct measures and
parameters in the redesign, participants who used the original VFT second were not much more
successful than those who used the original first. As depicted in Figure 4.4, similar proportions
of participants achieved low and high accommodation rates using the redesign first and second as
well.
Participants’ responses contained some indications of specific failure modes. Many failed at-
tempts (six with the original VFT and 12 with the redesign) yielded a roughly 90% overall accom-
modation rate if all of the specified dimensions were set as maxima, or entered into the “High” col-
umn of the original VFT, though the design prompt required minimum values for buttock-popliteal
length and popliteal height. Despite the correct radio buttons being automatically selected in the re-
designed VFT, large depth and height answers suggest that several participants may have changed
the “min” selection to “max” on the buttock-popliteal length and popliteal height measures. One
participant also commented that the “range” option was difficult to use, showing that they had
changed the radio button selection. Others (five with the original and four with the redesign) seem
to have set a maximum for buttock-popliteal length rather than a minimum. These nine attempts
resulted in valid dimensions for seat width and height but a large depth that would disaccommodate
a large percentage of the population.
Figure 4.3: Accommodation rates achieved with the original VFT in the second usability study when used first and second

Answers from three failed attempts (one with the original and two with the redesign) yield
90% accommodation rates for each individual measure but overall accommodation rates in the
mid- to high-70s. Each of the three failed attempts was by a different participant. These three
participants likely answered without using the total overall accommodation calculation. Entering
a maximum value for all measures, entering a maximum for buttock-popliteal length instead of a
minimum, and seeking 90% accommodation for each individual measure could explain 18 of the
25 failed attempts using the redesigned VFT.
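This compounding effect can be illustrated with a short simulation. The sketch below (in R, the language of the redesigned tool) assumes a synthetic multivariate-normal population with placeholder means, standard deviations, and a common correlation; it is illustrative only and does not use the VFT's underlying anthropometric data.

    # Illustrative only: why ~90% accommodation on each measure yields
    # roughly 70-80% joint accommodation. Population parameters are
    # placeholders, not the VFT's actual data.
    library(MASS)  # for mvrnorm()

    set.seed(42)
    mu  <- c(390, 480, 420)  # mm: hip breadth, buttock-popliteal length, popliteal height
    sdv <- c(35, 30, 25)
    rho <- 0.4               # assumed common correlation between measures
    Sigma <- outer(sdv, sdv) * ((1 - rho) * diag(3) + rho)
    pop <- mvrnorm(100000, mu, Sigma)

    # Dimensions that accommodate ~90% of the population on each measure alone
    seat_width  <- quantile(pop[, 1], 0.90)  # fits hip breadths at or below this
    seat_depth  <- quantile(pop[, 2], 0.10)  # fits lengths at or above this
    seat_height <- quantile(pop[, 3], 0.10)  # fits heights at or above this

    fits_all <- pop[, 1] <= seat_width &
                pop[, 2] >= seat_depth &
                pop[, 3] >= seat_height
    mean(fits_all)  # joint accommodation: roughly 0.75 under these assumptions

Under independence, three measures at 90% each would give 0.9^3 ≈ 0.73 jointly; positive correlation between measures raises the joint rate somewhat, which is consistent with the mid- to high-70s rates observed here.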
Potential failure modes using the original are more varied and more difficult to predict. A
few participants’ answers seem to be too high or low to be derived from the proper measures (e.g.,
679mm recommended for seat depth when the population’s 95th percentile buttock-popliteal length
is 542mm, 850mm recommended for seat height while the population’s 95th percentile popliteal
height is 473mm). One participant’s seat width recommendation suggests that they set a minimum
value for hip breadth instead of a maximum, and another seems to have set maximum values for
buttock-popliteal length and popliteal height instead of minima. By including many more measures
and being (in participants' words) more "overwhelming," the original VFT inherently enables more
failure modes.

Figure 4.4: Accommodation rates achieved with the redesigned VFT in the second usability study when used first and second
While there was not a dramatic difference in accommodation rate distributions between the
original and redesigned VFTs, the redesign was rated much more favorably than the original.
Three participants reported that they preferred the original tool overall, and the other 27 preferred
the redesign. Participants rated their confidence in their answers higher with the redesign than the
original, and the redesign was rated better than the original on average on all attributes (approach-
ability, usability, ease of understanding output, and efficiency). Average numerical ratings appear
in Table 4.5. The original VFT’s attributes were generally rated around 3/5, or “barely sufficient,”
and the redesign’s attributes were generally rated around 4/5, or “sufficient.”

The largest differences in average ratings are those for approachability, usability, and efficiency,
with the redesign rated roughly 20% higher than the original VFT on each. The differences in
ratings for confidence in answers and ease of understanding output are smaller but still greater
than 10%. Although characteristics of the original were evaluated to be around "barely sufficient,"
12 participants commented that the large amount of unnecessary information was overwhelming,
and 10 said that it was difficult to use and understand. Ten critiqued various aspects of the
interface, and several mentioned that having to scroll through the tool was cumbersome.

Table 4.5: Average tool version characteristic ratings in the second usability study

Tool       Confidence in Answers   Approachability   Usability   Understanding Output   Efficiency
Original   3.0                     3.0               3.3         3.3                    3.3
Redesign   3.7                     4.2               4.4         3.9                    4.2

Despite limited success in addressing the design prompt, the redesign was rated 4.4/5, between
“sufficient” and “more than sufficient,” on usability. Comments about the redesign were mostly
favorable. The most frequent comments include that it was clear, straightforward, and simple com-
pared to the original; it was easy to input values; and it was easy to use and understand. However,
five comments requested better explanation of the parameter options (minimum, maximum, and
range). A few also expressed that it was easy to use “once you figure it out,” implying that they
did not find the tool immediately intuitive. Four participants liked that the correct parameter was
automatically selected for them, but based on the accommodation rates achieved and the likely
failure modes, many participants appear not to have recognized this feature.
The redesign’s approachability was also rated well, and participants said that it was polished,
easy to read and navigate, and aesthetically pleasing. Many appreciated that only the necessary
measures were shown and said that that contributed to its efficiency. Seven participants commented
that they liked the redesign’s output, with one specifying that it “jumped out” more than the output
of the original VFT.

Average characteristic ratings varied with the order in which the two tools were used. When
participants were able to compare the two tools, they viewed the original more poorly and the
redesign more favorably. The original VFT was rated lower on every metric when it was used after
the redesign compared to when it was used first, as shown in Table 4.6. The difference is largest
for usability and efficiency and smallest for ease of understanding output.

Table 4.6: Average original tool characteristic ratings in the second usability study by order of tool use

Order      Confidence in Answers   Approachability   Usability   Understanding Output   Efficiency
First      3.2                     3.3               3.7         3.4                    3.7
Second     2.8                     2.8               3.1         3.2                    3.1

The redesign was also rated better on every metric when it was used after the original, as shown
in Table 4.7. Ease of understanding output and efficiency ratings increased the most, and confi-
dence in answers and approachability changed the least. As may be expected, efficiency ratings
changed considerably (by more than 10%) for both the original VFT and the redesign, likely due
to the dramatic difference in scope and information content between the two.

Table 4.7: Average redesigned tool characteristic ratings in the second usability study by order of tool use

Order      Confidence in Answers   Approachability   Usability   Understanding Output   Efficiency
First      3.9                     4.0               4.2         3.7                    3.9
Second     4.2                     4.4               4.7         4.3                    4.7

Other comments regarding both the original and redesigned VFTs requested dynamic visual
output (e.g., having the images change to represent output) and a way to reach answers without
tedious guessing and checking. Participants also generally found the visuals and reference per-
centiles helpful. All comments from the open-ended survey questions and their frequency are
presented in Table 7.3 in Appendix E.

Chapter 5

Discussion

The initial usability study of the VFT revealed significant usability issues within the tool, echo-
ing the long record of poor design tool usability documented in the literature. Participants par-
ticularly struggled to navigate the large volume of information presented in the full-length VFT
versions and to understand which of the available fields they needed to use. Findings from the
initial usability study as well as the literature review informed the redesign of the VFT. How-
ever, the redesigned tool still demonstrated an extremely high failure rate in the second usability
study. One major objective of the redesign was to require no explicit instructions, but without
instructions, many users changed the tool’s calibrated settings and set incorrect parameters. The
assumption that users would understand that the redesign had been automatically calibrated cor-
rectly for them proved to be incorrect. Other design changes, such as including only the relevant
measures and developing a more approachable interface, were successful in improving the user
experience. Although users did not perform much better on the design challenge with the redesign
than with the original, they enjoyed interacting with the redesign much more: they rated it better
on all metrics, experienced and cited fewer sources of confusion, and appeared to exhibit fewer
failure modes than with the original VFT. Outstanding opportunities for improvement, outlined in
the section that follows, will be addressed after the conclusion of this thesis project, and the
redesigned VFT will be made publicly available online.

5.1 Next Steps for the Redesigned VFT

To complete the development of the redesigned VFT, configurations of the tool for scenarios
besides seating will be generated, and usability issues revealed through the usability study of the
redesign will be addressed. The likely failure modes discovered through the results of the second
usability study were changes to the pre-selected parameter options for measures and neglect of
the total accommodation calculator. To inhibit the first of these failure modes, the launch site
for the VFT will explain that the links for specific design scenarios lead to an interface that has
been properly calibrated for the selected scenario, information that was not communicated in the
usability study of the redesign. Second, a brief statement explaining to users how and why to use
the total accommodation calculator will be included in the launch site, and options for having the
total accommodation automatically update as inputs are entered (without having to click a button)
within R Shiny will be investigated.
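As a minimal sketch of that option (not the VFT's actual code), a plain reactive() expression re-executes whenever any input it references changes, replacing the button-driven eventReactive() pattern; the input IDs and the accommodation() stub below are hypothetical placeholders.

    library(shiny)

    # Hypothetical stand-in for the VFT's joint accommodation calculation
    accommodation <- function(width, depth, height) 0.9

    ui <- fluidPage(
      numericInput("seat_width",  "Seat width (mm)",  520),
      numericInput("seat_depth",  "Seat depth (mm)",  425),
      numericInput("seat_height", "Seat height (mm)", 345),
      textOutput("total_acc")
    )

    server <- function(input, output, session) {
      # Recomputes automatically on any input change; no action button needed
      total <- reactive({
        accommodation(input$seat_width, input$seat_depth, input$seat_height)
      })
      output$total_acc <- renderText({
        sprintf("Total accommodation: %.1f%%", 100 * total())
      })
    }

    shinyApp(ui, server)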
All text on the site will be kept brief and spread out so as to avoid the lack of user engagement
with tool instructions observed in the initial usability study. While many users did not seem inter-
ested in reading through the instructions provided with tools in the first study, assuming that users
could fully and easily intuit exactly how to use the tool was not a successful strategy. The final
form of the VFT will seek to strike a balance, providing users with enough context and direction
to use the tool correctly without requiring much time or effort to read.
As requested in a few comments in both the first and second studies, an option to input a
target overall accommodation will be implemented. This option will allow users to specify a
numerical target accommodation percentage rather than inputting dimensions for several measures.
Users will still need to determine which measures and parameters are relevant (or use one of the
specific scenarios linked on the launch site). Given a target accommodation and a set of parameters,
the program will suggest a set of dimensions for those parameters that satisfies the target overall
accommodation. For example, if a user enters a target accommodation of 90% for a 50% male and
50% female population in the seating scenario, the program might suggest a maximum of 520mm
for hip breadth/seat width, a minimum of 425mm for buttock-popliteal length/seat depth, and a
minimum of 345mm for popliteal height/seat height.
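One way this feature could work, sketched below under the same synthetic-population assumptions used earlier (pop is a matrix of the three seating measures), is to raise a shared per-measure percentile until the joint accommodation meets the target; this is an illustrative approach, not the planned implementation.

    # Illustrative inverse search for the seating scenario. Columns of pop:
    # hip breadth (needs a maximum), buttock-popliteal length (needs a
    # minimum), and popliteal height (needs a minimum), all in mm.
    suggest_dims <- function(pop, target = 0.90) {
      for (p in seq(target, 0.999, by = 0.001)) {
        w <- unname(quantile(pop[, 1], p))      # seat width: fit breadths <= w
        d <- unname(quantile(pop[, 2], 1 - p))  # seat depth: fit lengths >= d
        h <- unname(quantile(pop[, 3], 1 - p))  # seat height: fit heights >= h
        joint <- mean(pop[, 1] <= w & pop[, 2] >= d & pop[, 3] >= h)
        if (joint >= target) return(c(width = w, depth = d, height = h))
      }
      stop("Target accommodation not reachable within the search range")
    }

    suggest_dims(pop, 0.90)  # returns one suggested dimension per parameter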
Finally, to clarify what is meant by the parameter options “max,” “min,” and “range,” the word-
ing of the input field labels will be changed from “Fits [measure] below/above” to “Fits [measure]
greater than/less than,” as suggested by a participant.
These changes will be implemented in the seating scenario and other future scenarios. Scenar-
ios supported by the original VFT and the ANSI/HFES 100 standard for which new VFT interfaces
will be created include seated workstations, standing workstations, and sit-stand workstations. In-
terfaces containing seated, standing, and all anthropometric measures from the original VFT and
interfaces containing seated, standing, and all design variables will also be generated to support
other types of design problems. The measures included in these more extensive interfaces will all
have their default parameter option set to “none” so that users will have to select relevant measures
before input fields appear, which will limit the manipulation of unnecessary measures.

5.2 Implications for Design Tool Development

Previous work has studied how designers work in general to understand design tool user needs.
The present work extends the literature in this domain by further articulating these needs; iden-
tifying others in the context of a specific, concrete example; and illuminating the limitations of
designing tools based solely on the requirements in the literature. Findings from the two studies
conducted for this thesis suggest three major implications for design tool development:

1. Information presented and user actions allowed should be conditionally constrained.

2. The trade-off between design tool efficiency and usability must be considered in the development of tool instructions.

3. Design tool prototypes should undergo usability tests by users with a range of experience levels in order to identify and address likely failure modes.

5.2.1 Constraining Information Presentation and User Actions

Participants in both the first and second usability studies reported feeling overwhelmed by the
amount of information presented in the full-length/original VFTs. Information overload made the
tools not only unapproachable but also inefficient and difficult to use, with some users commenting
that they “got lost” in the tools. The large volume of unnecessary information, input fields, and
outputs demanded more time and effort from users than simpler versions as they had to determine
what information they needed to consider and which parts of the tools they needed to use. Addi-
tionally, the unnecessary fields increased the number of potential failure modes. Study participants
frequently used too many input fields, and there is no evidence to suggest that any used too few.
The presence of fields, to many users, suggested that the fields needed to be utilized. Particularly
in the initial usability study, many participants entered low and high values for each available mea-
sure, regardless of tool version. The frequency of this mistake was reduced in the redesign, which
showed through the “min” and “max” parameter options that entering only one value per mea-
sure is valid. Constraining the information and actions that users had access to increased tool
usability, according to both participants' comments and ratings and the apparent prevention of
some failure modes.
The structure of the redesigned VFT, with a launch page whose options lead to automatically
configured, simplified tool interfaces (plus a non-default option to access the full, unconstrained
tool), is one method of constraining information presentation and user actions within a tool while
still maintaining its versatility. This model can be applied in other design tools if the tool creators
can predict the common contexts in which, or problems to which, their tool might be applied.

5.2.2 Considering the Efficiency-Usability Trade-off in Instruction Sets

The literature hints at the importance of clear design tool instructions; however, a novel finding
of this work was the low level of user engagement with two very different, but both succinct,
instruction sets. Based on this finding, the redesigned VFT sought to avoid
having instructions altogether. Although participants in the usability study of the redesign liked
the time savings associated with the lack of instructions, this concept sacrificed usability. Many
participants made mistakes that could have been prevented with one or two sentences of instruction
(e.g., “The tool has been calibrated for you, so you do not need to change the selected parameter
options. Use the button at the top of the page to calculate total accommodation.”). Such brief
instructions would require marginally more reading time but could have prevented up to 18 of the
25 failures with the redesign.
There is an apparent trade-off between efficiency and tool usability when it comes to design
tool instructions. If instructions are too lengthy, users may not consult them or may be displeased
with the time and effort required to review them, but insufficient instruction regarding how to use
a tool may cause preventable failures. The initial usability study shows that the acceptable range
of instruction length is small: the two instruction sets provided were six slides and a three-quarter-page
numbered list, and user engagement was a recurring issue for both. The efficiency-usability trade-
off must therefore be carefully considered in the development of design tools and their instructions.
Tool developers should strive for a balance between efficiency and completeness of instructions
that promotes both user engagement with instructions and user understanding of the design tool.
Since engaging instruction sets must be brief, tools should be as intuitive as possible and constrain
information presentation and user actions to minimize the amount of content needed in instruction
sets.

5.2.3 Conducting Usability Tests to Identify and Address Failure Modes

The literature reviews user needs for design tools at length, but despite being well-informed by
this literature, the redesigned VFT exhibited serious and surprising usability problems. Consider-
ation of the list of user needs extracted from the literature is simply not enough to ensure design
tool usability. The usability study of the redesigned VFT emphasized the importance of identifying
failure modes. As discussed in the previous section, a couple of sentences of instruction can be
added to the redesigned VFT to avert the three most commonly observed failure modes. Seeing
a smaller number of failure modes with the redesign also suggested improved usability compared
to the original VFT, for which specific failure modes could often not be easily identified. Design
tool developers can thus understand their progress as well as further opportunities for improvement
by analyzing types and numbers of tool failure modes and whether they are preventable through
design.
The usability study of the redesign also suggests that usability tests with users with a range of
experience levels are critical for identifying possible failure modes. In the initial usability study,
participants often used the wrong measures and input values for the wrong parameters because
they were unsure of which of the many fields presented they were supposed to make use of. It was
hypothesized that the redesign would avert these failure modes by including only relevant measures
and automatically selecting the correct parameters for users. However, failure by setting incorrect
parameters dominated the usability study of the redesign. The unexpected remaining issues with
the redesign would not have come to light had novice users unfamiliar with the VFT not tested it.
Had the redesign been assumed to have fully addressed those failure modes in the previous design
iteration, or had only more experienced testers (who might not exhibit those failure modes) been
recruited, further design changes would not have been made, and many users may have been set up to misuse the
tool. Design tools should therefore be iteratively prototyped and tested with a range of users to
ensure that design goals are actually achieved.

5.3 Limitations

This thesis studied the usability of design tools through one specific example, the Virtual Fit
Tool. As all results emerged from studies of the VFT, some findings may not be generalizable to
the majority of design tools. For example, tools involving concepts better known than anthropometry
and design for human variability may not see user success rates as low as the VFT's in the
studies presented here. However, the implications for design tool development are non-specific
and can be of use to researchers creating design tools of any level of simplicity and
within any design domain.
The study samples were largely made up of engineering students, whereas design tools are
generally intended for professional audiences. Engineering and design professionals with more
experience may have had greater success using various versions of the VFT than students with less
experience. Usability tests with less experienced users, however, help to identify more potential
usability challenges and were therefore a useful exercise for the purposes of this work. A diverse
sample could similarly be useful in the development of other design tools seeking to be accessible
to broader audiences.

Chapter 6

Conclusion

Design tools are important products of design research that seek to facilitate the use of complex
academic research by industry professionals. However, design tools from a variety of design sub-
fields have been found to be severely lacking in usability such that they deter their intended users,
impeding the dissemination of critical new knowledge from academia to industry. This work
utilized an existing design tool, the Virtual Fit Tool (VFT), to investigate why, given a large body
of literature articulating design tool user needs, researchers continue to struggle to create design
tools considered usable by their intended audiences.
An initial usability study of the original VFT and simplified variants revealed significant us-
ability problems within the tool, evidenced by extremely low user success rates. Informed by the
results of the initial usability study and the literature, the VFT was redesigned in an effort to in-
crease its usability. A second usability study comparing the redesign and the original VFT yielded
low user success rates with both versions of the tool. However, users viewed the redesign much
more favorably, and fewer likely failure modes were identified with the redesign than with the
original.
Based on user feedback and failure modes observed in the two studies, the information pre-
sented in and user actions allowed by design tools should be conditionally constrained in order
to avoid overwhelming users and preclude as many failure modes as possible. The studies also
illuminated a trade-off between efficiency and usability that manifests in design tool instructions.
Users may neglect instructions that are too long in pursuit of efficiency but may not be able to
understand a tool if instruction is lacking. Design tool creators must therefore carefully consider
this trade-off and should seek to maximize inherent tool intuitiveness and minimize instructions.
Finally, it is critical that design tools are developed iteratively with frequent usability tests along
the way, preferably with users with a range of experience levels. Such tests identify expected
and unexpected failure modes that can then be addressed in future design iterations to increase
usability.

Chapter 7

Appendices

7.1 Appendix A: VFT Interfaces and Instructions Used in Initial Usability Study

Figure 7.1: Part of the full VFT interface used in the initial usability study in tool versions A and
B

Figure 7.2: The partial VFT interface used in the initial usability study in tool versions C and D

Figure 7.3: The partial with range VFT interface used in the initial usability study in tool versions
E and F

Figure 7.4: Page 1 of 2 of the original instruction set used in the initial usability study in tool
versions A, C, and E

Figure 7.5: Page 2 of 2 of the original instruction set used in the initial usability study in tool
versions A, C, and E

Figure 7.6: The prompting instruction set used in the initial usability study in tool versions B, D,
and F

7.2 Appendix B: Initial Usability Study Survey

participant #__________ tools _____ _____

Design Problem
Beaver Stadium is getting new bleachers! We want the product to work for about 90%
of the population in the most efficient way possible. A seat is considered to work for a
user if their hips fit within the arm rests, the front of the seat doesn’t rub the back of
their legs, and their feet are not dangling. What is the maximum depth, minimum
width, and maximum height the seat surface should be (in millimeters)?

Please write your answers down immediately after finishing using each tool, and do
not change them once you move on.

Tool #1
Based on the information from Tool #1, what are your recommendations for the depth, width, and
height of the seat?
depth ________ mm width ________ mm height ________ mm

Please evaluate Tool #1 by filling in the circle for the option you most agree with for each attribute below

very insufficient / insufficient / barely sufficient / sufficient / more than sufficient

approachability

aesthetics

usability

clarity of instructions

ease of understanding output

generating confidence in output

efficiency

effectiveness

What would make Tool #1 more approachable, useable, or valuable?

Figure 7.7: Page 1 of 2 of the survey used in the initial usability study

Tool #2
Based on the information from Tool #2, what are your recommendations for the depth, width, and
height of the seat?
depth ________ mm width ________ mm height ________ mm

Please evaluate Tool #2 by filling in the circle for the option you most agree with for each attribute below

very insufficient / insufficient / barely sufficient / sufficient / more than sufficient

approachability

aesthetics

usability

clarity of instructions

ease of understanding output

generating confidence in output

efficiency

effectiveness

What would make Tool #2 more approachable, useable, or valuable?

Tool Comparison
If the answers you obtained from the two tools are different, in which tool’s answers are
you more confident?
Tool #1 Tool #2

Why are you more confident in those answers?

About you
Are you a: student faculty professional

Figure 7.8: Page 2 of 2 of the survey used in the initial usability study

7.3 Appendix C: Redesigned VFT Usability Study Survey

Figure 7.9: Survey used in the usability study of the redesign, 1 of 4



Figure 7.10: Survey used in the usability study of the redesign, 2 of 4 (this page presented twice,
with the tool number and tool link changed)

Figure 7.11: Survey used in the usability study of the redesign, 3 of 4 (this page presented twice,
with the tool number changed)

Figure 7.12: Survey used in the usability study of the redesign, 4 of 4



7.4 Appendix D: Initial Usability Study Results

Table 7.1: Initial usability study trial times

Participant First Tool First Trial Time Second Tool Second Trial Time
1 E 2:26 B 14:57
2 B 6:44 A 4:13
3 A 8:56 D 1:30
4 D 4:18 A 5:19
5 C 2:29 A 5:28
6 F 8:31 E 1:33
7 B 15:15 C 2:05
8 F 1:13 C 1:00
9 D 3:17 B 2:54
10 C 2:41 E 1:50
11 B 10:30 D 2:35
12 A 5:51 F 3:55
13 F 6:30 D 2:20
14 E 6:15 A 3:20
15 E 9:35 F 5:00
16 F 11:40 B 4:10
17 F 2:30 A 4:15
18 D 2:45 E 2:41
19 C 7:31 F 3:28
20 A 5:05 B 4:00
21 E 7:56 C 3:35
22 A 11:02 C 1:50
23 A 10:15 E 0:55
24 D 4:10 C 2:05
25 E 6:15 D 3:50
26 C 5:00 D 2:50
27 B 3:22 E 1:45
28 D 8:25 F 4:56
29 B 9:00 F 2:50
30 C 8:08 B 6:48

Table 7.2: Initial usability study open-ended feedback

Subject Comment (x Frequency, if Greater Than 1)


Full interface -Include fewer measures/simplify x 7
-More measures makes it seem more complete/robust x 3
-Did not understand input
-Want explanation of how to use output
Partial interface -Having fewer measure options generated greater confidence
than full versions x 3
-Liked that it had fewer measures compared to full interfaces x 2
-Outputs should be labeled
-Want average dimensions provided
-Want it to be more intuitive
-Want explanation of how to use output
-Want output to be visual
-Straightforward
-There was less room for error than in full versions
-Output was more concise than in full versions
Partial w/ range interface -Having fewer measure options generated greater
confidence than full versions x 2
-Aesthetics were good x 2
-Easier to use than full versions x 2
-Want all cells to fit without needing to scroll
-Diagrams were easy to read
-Want it to be more intuitive
-Want entering inputs to be more intuitive
-Want average dimensions provided
-Straightforward
-Confusing
Original instructions -Preferred over the prompting instructions x 3
-Easier to use than a full version x 2
-Too wordy
-Want explanation of how to use output
-More detailed than the prompting instructions
-Simple
Prompting instructions -Clearer than the original instructions x 2
-Too wordy, want instructions in video form
-Vague
-Simple and linear
-Want explanation of how to use output
General -Want explanation of how total accommodation is calculated x 6
-Interface aesthetics need improvement x 5
-Should allow user to set a target accommodation x 3
-Want clearer images x 2
-Not familiar with Excel
-Want visuals that change size
-Confused about how to get one specification from a range obtained
using both columns

7.5 Appendix E: Redesigned VFT Usability Study Results

Table 7.3: Second usability study open-ended feedback

Subject Comment (x Frequency, if Greater Than 1)


Original -Overwhelming amount of irrelevant information x 12
-Difficult to use and understand x 10
-Interface could be improved x 10
-Do not like scrolling x 6
-Having more measures is good for other types of problems x 5
-Easy to use and understand x 3
-More difficult to use than the redesign x 2
-Hard to see total accommodation changing at the bottom as you change
measures
-Had to keep referring to instructions to remember what low and high columns
were for
-Having instructions for or seeing how calculations are done would improve
understanding
Redesign -Easy to input and adjust values x 5
-Want a better explanation of the parameter options x 5
-More polished and approachable than the original x 5
-Simple/easy to use and understand x 4
-Preferred over original x 4
-Like the output x 4
-Clearer/more straightforward/easier to use than original x 3
-Like seeing the output immediately x 3
-Easy to use once you figure it out x 3
-Interface is easy to read, navigate, and use x 3
-Like that only the relevant measures are shown x 3
-Want ability to input a target accommodation at the top x 2
-Want the unit of measurements to be given x 2
-Clean and aesthetically pleasing x 2
-Dislike having to click to calculate total accommodation x 2
-Less universal because it has fewer measures x 2
-More efficient than original because it only has the relevant measures
-Like the option to choose to set a maximum or minimum
-Like that it suggests how to use values, which is useful for those with less
experience
-Parameter options helped decision making
-Unsure of what to input at first
-Prompt where you enter the value is a bit ambiguous
-Unsure of why there is a difference between individual and total
accommodation rates
-Output calculation is confusing
-Instructions unclear
-Like that correct parameters are pre-selected
-Range option is hard to use
-Output jumped out more than in the original
-Like that you estimate a value and the tool does the calculation for you
-Gender ratio seems hidden on the side
-Like that it requires less time for reading instructions
-Did not waste time entering multiple dimensions for every measurement
General -Like figures showing the measures x 14
-Like the reference percentiles x 6
-Want dynamic graphical/pictorial output x 3
-Dislike guessing and checking to get answers x 2

Bibliography

[1] Conrad Luttropp and Jessica Lagerstedt. Ecodesign and the ten golden rules: generic advice
for merging environmental aspects into product development. Journal of Cleaner Produc-
tion, 14(15-16):1396–1408, 2006.

[2] R. B. Frost. Why does industry ignore design science? Journal of Engineering Design,
10(4):301–304, 1999.

[3] Emilene Zitkus, Patrick Langdon, and P. John Clarkson. Inclusive design advisor: Under-
standing the design practice before developing inclusivity tools. Journal of Usability Studies,
8(4):127–143, 2013.

[4] Mattias Lindahl. Engineering Designers’ Requirements on Design for Environment Methods
and Tools. PhD thesis, KTH, Stockholm, 2005.

[5] Belinda López-Mesa. Design methods and their sound use in practice. In Design Methods
for Practice, pages 87–94, 2006.

[6] Shamraiz Ahmad, Kuan Yew Wong, Ming Lang Tseng, and Wai Peng Wong. Sustainable
product design and development: A review of tools, applications and research prospects.
Resources, Conservation and Recycling, 132:49–61, 2018.

[7] Joy Goodman-Deane, Patrick Langdon, and P. John Clarkson. Key influences on the user-
centered design process. Journal of Engineering Design, 21(2-3):345–373, 2010.

[8] Young Sang Choi, Ji Soo Yi, Chris M. Law, and Julie A. Jacko. Are “universal design
resources” designed for designers? In Proceedings of the 8th International ACM SIGACCESS
Conference on Computers and Accessibility, pages 87–94. ACM, 2006.

[9] Eric Lutters, Fred J. A. M. van Houten, Alain Bernard, Emmanuel Mermoz, and Corné S. L.
Schutte. Tools and techniques for product design. CIRP Annals, 63(2):607–630, 2014.

[10] Catherine M. Burns, Kim J. Vicente, Klaus Christoffersen, and William S. Pawlak. Towards
viable, useful and useable human factors design guidance. Applied Ergonomics, 28(5-6):311–
322, 1997.

[11] Jorge Perez. Virtual human factors tools for proactive ergonomics: Qualitative exploration
and method development. Master’s thesis, Ryerson University, Toronto, ON, Canada, 2011.

[12] Xuezi Ma and James Moultrie. What stops designers from designing sustainable packaging?
–a review of eco-design tools with regard to packaging design. In Sustainable Design and
Manufacturing 2017: Selected papers on Sustainable Design and Manufacturing, volume 68,
pages 127–139. Springer, Cham, Switzerland, 2017.
[13] Robin Roy and James P. Warren. Card-based design tools: a review and analysis of 155 card
decks for designers and designing. Design Studies, 63:125–154, 2019.
[14] Mary Beth Rosson, Wendy Kellogg, and Susanne Maass. The designer as user: building re-
quirements for design tools from design practice. Communications of the ACM, 31(11):1288–
1298, 1988.
[15] Emilene Zitkus, Patrick Langdon, and P. John Clarkson. Accessibility evaluation: Assistive
tools for design activity in product development. In Proceedings of the Sustainable Intelligent
Manufacturing Conference, pages 659–670, 2011.
[16] Caroline Clarke Hayes and Farnaz Akhavi. Creating effective decision aids for complex tasks.
Journal of Usability Studies, 3(4):152–172, 2008.
[17] Nicklas Bylund, Christian Grante, and Belinda López-Mesa. Usability in industry of methods
from design research. In Proceedings of the 14th International Conference on Engineering
Design. The Design Society, 2003.
[18] Elies Jones. Eco-innovation: Tools to facilitate early-stage workshops. PhD thesis, Brunel
University School of Engineering and Design, 2003.
[19] Chris M. Law, Ji Soo Yi, Young Sang Choi, and Julie A. Jacko. A systematic examination of
universal design resources: part 1, heuristic evaluation. Universal Access in the Information
Society, 7(1-2):31–54, 2008.
[20] Wen-Chuan Chiang, Arunkumar Pennathur, and Anil Mital. Designing and manufacturing
consumer products for functionality: a literature review of current function definitions and
design support tools. Integrated Manufacturing Systems, 12(6-7):430–448, 2001.
[21] Daniel Lockton. Design with intent: A design pattern toolkit for environmental and social
behaviour change. PhD thesis, Brunel University School of Engineering and Design, 2013.
[22] John Clarkson, Roger Coleman, Ian Hosking, and Sam Waller. Inclusive design toolkit.
University of Cambridge, 2007.
[23] Human Factors Engineering of Computer Workstations. Standard, Human Factors and Er-
gonomics Society, 2007.
[24] Mattias Lindahl. Designers’ utilization of and requirements on design for environment (dfe)
methods and tools. In Fourth International Symposium on Environmentally Conscious Design
and Inverse Manufacturing, pages 224–231, 2005.
[25] Mattias Lindahl. Engineering designers’ experience of design for environment methods and
tools – requirement definitions from an interview study. Journal of Cleaner Production,
14(5):487–496, 2006.

[26] Benjamin Tyl, Jérémy Legardeur, Dominique Millet, and Flore Vallet. Stimulate creative
ideas generation for eco-innovation: an experimentation to compare eco-design and creativity
tools. In Proceedings of IDMME-Virtual Concept, pages 1–6, 2010.

[27] Gülsen Töre Yargin and Nathan Crilly. Information and interaction requirements for software
tools supporting analogical design. Artificial Intelligence for Engineering Design, Analysis
and Manufacturing: AIEDAM, 29(2):203–214, 2015.

[28] Svante Hovmark and Margareta Norell. The GAPT model: Four approaches to the application
of design tools. Journal of Engineering Design, 5(3):241–252, 1994.

[29] James M. Wilson. Gantt charts: A centenary appreciation. European Journal of Operational
Research, 149(2):430–437, 2003.

[30] Xiufen Zhang, Shuyou Zhang, Lichun Zhang, Junfang Xue, Rina Sa, and Hai Liu. Identifi-
cation of product’s design characteristics for remanufacturing using failure modes feedback
and quality function deployment. Journal of Cleaner Production, 239, 2019.

[31] Stephen R. Rosenthal and Mohan V. Tatikonda. Competitive advantage through design tools
and practices. In Integrating Design and Manufacturing for Competitive Advantage, USA,
1992. Oxford University Press.

[32] Dan Lockton. Design with intent. http://danlockton.com/design-with-intent/. Accessed: 2020-04-27.

[33] Peter Gregson. Dalhousie University Faculty of Engineering Electrical Computer Engineer-
ing ECED-2900 Design Methods 1. Dalhousie University, 2016.

[34] Debbie Kalish, Susan Burek, Amy Costello, Lawrence Schwartz, and John Taylor. Integrating
sustainability into new product development. Research-Technology Management, 61(2):37–
46, 2018.

[35] Wenwen Zhang, Charlie Ranscombe, David Radcliffe, and Simon Jackson. Creation of a
framework of design tool characteristics to support evaluation and selection of visualisation
tools. In Proceedings of the 22nd International Conference on Engineering Design, pages
1115–1124. The Design Society, 2019.

[36] Vicky A. Lofthouse. Ecodesign tools for designers: Defining the requirements. Journal of
Cleaner Production, 14(15-16):1386–1395, 2006.

[37] Flore Vallet, Benoît Eynard, and Dominique Millet. Investigating the use of eco-design
guides: Presentation of two case studies. In Proceedings of the 17th International Conference
on Engineering Design, pages 441–452, 2009.

[38] Gülsen Töre Yargin, Roxana Morosanu Firth, and Nathan Crilly. User requirements for ana-
logical design support tools: Learning from practitioners of bio-inspired design. Design
Studies, 58(C):1–35, 2018.

[39] Julia Kantorovitch, Ilkka Niskanen, Anastasios Zafeiropoulos, Aggelos Liapis, Jose
Miguel Garrido Gonzalez, Alexandros Didaskalou, and Enrico Motta. Knowledge extraction
and annotation tools to support creativity at the initial stage of product design: Requirements
and assessment. In Knowledge, Information and Creativity Support Systems, pages 145–159.
2016.

[40] Jungkyoon Yoon, Pieter M. A. Desmet, and Anna E. Pohlmeyer. Developing usage guide-
lines for a card-based design tool: A case of the positive emotional granularity cards. Archives
of Design Research, 29(4):5–19, 2016.

[41] Flore Vallet, Benoît Eynard, and Dominique Millet. Requirements and features clarifying for
eco-design tools. In Global Product Development, pages 127–135. Springer, Berlin, Heidel-
berg, 2011.

[42] Jeremy Garritano. Make dependable decisions: Using information wisely. In Integrating
Information into the Engineering Design Process, volume 31, pages 137–148. Purdue Uni-
versity Press, West Lafayette, Indiana, USA, 2013.

[43] Human Factors and Ergonomics Society. Virtual fit multivariate anthropometric tool. https://www.hfes.org/Go.aspx?MicrositeGroupTypeRouteDesignKey=f1ec6a41-06bd-4050-9838-6898d5a545e5&NavigationKey=7a3ee7f0-d508-4592-9972-ee24aeb334c2. Accessed: 2019-10-02.

[44] Civilian American and European Surface Anthropometry Resource Project (CAESAR). http://store.sae.org/caesar/.

[45] National Health and Nutrition Examination Survey. https://www.cdc.gov/nchs/nhanes/index.htm.
MADISON REDDIE

EDUCATION
B.S. in Mechanical Engineering with honors in Engineering Design & Certificate in Engineering Design
Schreyer Honors College, The Pennsylvania State University
Certificate in Mechanical Engineering Product Design
The National University of Singapore, June 2018

EXPERIENCE
Student R&D Consultant, Siemens Stiftung, Aug. 2020 - present
- Collaborating with Siemens Stiftung to develop and launch a venture bringing solar-battery powered irrigation to smallholder farmers in Kenya
- Using ethnographic research to craft a business model suited to a rural, agrarian Kenyan community

Research Assistant, Center for Human-Computer Interaction, Jan. 2018 - present, University Park, PA
- Co-designing new assistive technologies with visually impaired people
- Conducting ethnographic research and determining needs of blind users
- Authoring and presenting technical papers describing research findings

Research Assistant, OPEN Design Lab, Sept. 2018 - present, University Park, PA
- Evaluating the design and ergonomics of bus operator workstations
- Using novel statistical methods to generate bus operator anthropometry
- Reviewing literature and contributing to technical reports

New Product Development Engineering Intern, Herman Miller, Summer 2019, Holland, MI
- Conducted user research and ergonomic analysis to improve designs
- Designed mechanical systems for office seating applications
- Led an interdisciplinary team in redesigning the onboarding experience

Product Development Intern, The Bacchus Co., Summer 2018, Phoenix, AZ
- Guided clients through various stages of product development
- Developed improved ERP system to streamline company processes
- Improved overall quality of parts delivered through thorough inspections

PUBLICATIONS
Lee, S., Reddie, M., Tsai, C., Beck, J., Rosson, M. B., & Carroll, J. M., 2020, "The Emerging Professional Practice of Remote Sighted Assistance for People with Visual Impairments," ACM CHI Conference.
Carroll, J. M., Lee, S., Reddie, M., Beck, J., & Rosson, M. B., 2020, "Human-Computer Synergies in Prosthetic Interactions," IxDA, 44.

LEADERSHIP & INVOLVEMENT
Design thinking workshop organizer & facilitator, 2020
Global Conversations, Conversation Partner Volunteer, 2018-20
Tempe Sister Cities, Student Ambassador to Ecuador, 2016-17

COURSES & SKILLS
Design for the developing world; sustainable design; design thinking; product development; engineering sciences & mechanics; rapid prototyping & 3D modeling; human factors & ergonomics; MATLAB; social entrepreneurship; global health; technical writing & communication; Spanish proficiency

HONORS
Student Marshal, 2020
Evan Pugh Senior Scholar Award, 2020
Reeder Award in Mechanical Engineering, 2020
Vought Scholarship in Engineering, 2018-20
Dean's List, 2017-20
TSYS Future Scholar, 2017-20
Penn State Provost's Award, 2017-20
The President Sparks Award, 2019
Louis A. Harding Memorial Award, 2019
H. A. Everett Memorial Award, 2019
The President's Freshman Award, 2018
NASA Space Grant, 2017