
Introduction to Human-Computer Interaction
Lecture 11

Course Instructor: Ms. Farheen Ramzan

Department of Computer Science


University of Engineering and Technology Lahore
Reference

For the material covered in this lecture, read:
• Chapter 14 – Introducing Evaluation (Sections 14.1–14.3)
  (from Interaction Design: Beyond Human-Computer Interaction, 5th Edition by Helen Sharp et al.)
INTRODUCING EVALUATION
Overview
• Introducing Evaluation
• The Why, What, Where, and When of Evaluation
• Types of Evaluation Techniques
  – Controlled Settings Involving Users
  – Natural Settings Involving Users
  – Any Setting Not Involving Users
• Comparison of Evaluation Techniques

Goals
1. Explain the key concepts and terms used in evaluation
2. Introduce a range of different types of evaluation methods
3. Show how different evaluation methods are used for different purposes at different stages of the design process and in different contexts of use
4. Show how evaluators mix and modify methods to meet the demands of evaluating novel systems

Introduction to Evaluation
• Suppose you have designed an app for teenagers to share music, gossip, and photos.
  – How would you find out whether it would appeal to them and whether they would use it?
  – You would need to evaluate it—but how?

Evaluation
• Evaluation is integral to the design process.
  – It involves collecting and analyzing data about users’ or potential users’ experiences when interacting with a design artifact.
  – The central goal is to improve the artifact’s design.
  – It focuses on both the usability of the system and the users’ experience of it.
  – It helps ensure the design works for the target user population.

Why, What, Where, and When of Evaluation
Iterative design and evaluation is a continuous process that examines:
• Why evaluate?
• What to evaluate?
• Where to evaluate?
• When to evaluate?

Why Evaluate?
• Why: to check users’ requirements and confirm that users can use the product and that they like it
  – User experience involves all aspects of the user’s interaction with the product.
  – Well-designed products sell, so companies invest in evaluating the design of their products.
  – Include different types of users in your evaluations.
  – Activity: compare Facebook usage by adults and teenagers.

What to Evaluate?
• What: a conceptual model, early and subsequent prototypes of a new system, more complete prototypes, and a prototype to compare with competitors’ products
  – Ranges from low-tech prototypes to complete systems
  – From a particular screen function to the whole workflow
  – From aesthetic design to safety features
  – Examples: a web browser, a game app, a computerized system for controlling traffic lights, a website for disabled people, toys, digital music players, a company’s home page design, smartphone apps, and so on
  – Will users use it?

Where to Evaluate?
• Where: in natural, in-the-wild, and laboratory settings
• Where evaluation takes place depends on what is being evaluated.
  – Examples:
    • Web accessibility and handheld gaming devices: in the lab
    • Toys and user-experience aspects: in natural settings/in-the-wild studies
    • Online behavior: remote studies in users’ homes or places of work
  – Living labs are a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies.
    • Spaces embedded with technology

When to Evaluate?
• When: throughout design; finished products can be evaluated to collect information to inform new products
  – The stage in the product lifecycle when evaluation takes place depends on the type of product and the development process being followed.
  – Examples: a new concept, an upgrade to an existing product, a product in a rapidly changing market

When to Evaluate? (Cont.)
• Formative evaluations are conducted during design to check that a product continues to meet users’ needs.
• Summative evaluations are carried out to assess the success of a finished product.

Bruce Tognazzini tells you why you need to evaluate

“Iterative design, with its repeating cycle of design and testing, is the only validated methodology in existence that will consistently produce successful results. If you don’t have user-testing as an integral part of your design process you are going to throw buckets of money down the drain.”

See AskTog.com for topical discussions about design and evaluation.

Types of Evaluation
• Three broad categories, depending on the
setting, user involvement, and level of control.
1. Controlled settings directly involving users
2. Natural settings involving users
3. Any settings not directly involving users

Controlled Settings Involving Users
• Controlled settings directly involving users: users’ activities are controlled in order to test hypotheses and measure or observe certain behaviors
  – Examples: usability labs and research labs
  – Methods: usability testing, experiments
• Experiments and user tests are designed to control what users do, when they do it, and for how long.
  – Designed to reduce outside influences
• Usability testing measures user performance against a usability specification, that is, a set of measurable criteria such as target task-completion times or error rates (see the sketch below).

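A minimal sketch of how logged task times might be checked against one criterion in a usability specification; the task, target values, and data here are hypothetical illustrations, not material from the lecture:

    # Hypothetical criterion: at least 90% of users complete the
    # checkout task in under 120 seconds.
    TARGET_SECONDS = 120
    REQUIRED_PASS_RATE = 0.90

    # Task-completion times (in seconds) logged during a usability test.
    task_times = [95, 110, 132, 88, 101, 145, 99, 117, 104, 92]

    pass_rate = sum(t <= TARGET_SECONDS for t in task_times) / len(task_times)
    verdict = "meets" if pass_rate >= REQUIRED_PASS_RATE else "fails"
    print(f"Pass rate: {pass_rate:.0%} ({verdict} the specification)")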
Activity

Devices for monitoring activity and heart rate (a) Fitbit Charge
and (b) Polar A370

Living labs
• People’s use of technology in their
everyday lives can be evaluated in living
labs
• Such evaluations are too difficult to do in a
usability lab
• An early example was the Aware Home
that was embedded with a complex
network of sensors and audio/video
recording devices (Abowd et al., 2000)
Living labs (Cont.)
• More recent examples include whole blocks and cities that house hundreds of people, for example, Verma et al.’s (2017) research in Switzerland
• Mobile living lab: people fitted with wearables
• Many citizen science projects can also be
thought of as living labs, for instance,
iNaturalist.org
• These examples illustrate how the concept of a
lab is changing to include other spaces where
people’s use of technology can be studied in
realistic environments
Natural Settings Involving Users
• Natural settings involving users: There is little or no control of users’
activities to determine how the product would be used in the real
world.
– Examples: online communities, products that are used in public
places
– Methods: field studies and in-the-wild studies (interviews, observations, interaction logging, and so on)
• Field studies are used to:
– Help identify opportunities for new technology
– Establish the requirements for a new design
– Facilitate the introduction of technology or inform deployment of
existing technology in new contexts
• The goal is to be unobtrusive and not to affect people’s behavior.

Natural Settings Involving Users (Cont.)
• Disruptive technology
• A downside of handing over control is that it makes it difficult to anticipate what is going to happen and to be present when something interesting does happen.
  – Instead, the researcher has to rely on the participants recording and reflecting on how they use the product.
• Virtual field studies, where observations take place online
  – A goal is to examine social processes such as collaboration, confrontation, and cooperation.
  – Increasingly, online studies are partnered with a real-world experience to get the best of both situations (Cliffe, 2017).
Any Settings Not Involving Users
• Any settings not directly involving users: consultants and researchers critique, predict, and model aspects of the interface to identify the most obvious usability problems
  – Methods: inspections, heuristics, walkthroughs, models, and analytics
• Inspection methods utilize knowledge of usability, users’ behavior, the contexts in which a product is used, and the kinds of activities that users undertake.
• Heuristic evaluation applies knowledge of typical users, guided by rules of thumb.
• Cognitive walkthroughs involve stepping through a scenario or answering a set of questions for a detailed prototype.

Any Settings Not Involving Users (Cont.)
• Analytics is a technique for logging and analyzing data, either at a customer’s site or remotely.
  – Web analytics is the measurement, collection, analysis, and reporting of Internet data to understand and optimize web usage, for example, the number of visitors to a website (see the first sketch below).
  – Learning analytics is used for assessing the learning that takes place in online environments.
• Models are used for comparing the efficacy of different interfaces for the same application.
  – Fitts’ law for prediction (see the second sketch below)
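A minimal sketch of the kind of computation behind the web-analytics example above, counting page views and unique visitors; the log format and entries are hypothetical:

    from collections import Counter

    # Hypothetical log entries: "<timestamp> <visitor_id> <page>"
    log_lines = [
        "2024-03-01T10:00 u1 /home",
        "2024-03-01T10:02 u2 /home",
        "2024-03-01T10:05 u1 /pricing",
    ]

    page_views = Counter(line.split()[2] for line in log_lines)
    unique_visitors = {line.split()[1] for line in log_lines}

    print("Page views:", dict(page_views))           # {'/home': 2, '/pricing': 1}
    print("Unique visitors:", len(unique_visitors))  # 2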

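Fitts’ law predicts the time needed to reach a target from its distance D and width W; in the widely used Shannon formulation, MT = a + b * log2(D/W + 1). Below is a minimal sketch of using it to compare two layouts; the coefficients a and b are illustrative placeholders, since in practice they are fitted to measurements for a specific device:

    import math

    def fitts_movement_time(distance, width, a=0.2, b=0.1):
        """Predicted movement time in seconds for a target of the given
        width at the given distance; a and b are device-specific constants."""
        return a + b * math.log2(distance / width + 1)

    # Same button, two layouts: a small far target versus a large near one.
    print(fitts_movement_time(distance=400, width=20))   # slower
    print(fitts_movement_time(distance=100, width=80))   # faster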
Selecting and Combining Methods
• Combinations of methods
  – Used across the categories to obtain a richer understanding
  – Example: usability testing in the lab combined with observations in natural settings
• Opportunistic evaluations
  – Done early in the design process to provide designers with quick feedback about a design idea
  – Informal, and do not require many resources
  – Make it easier to change an evolving design
  – Hone the target audience

Benefits of Evaluation
• Controlled settings (pros):
  – Test hypotheses about specific features of the interface
  – Results can be generalized to the wider population
• Uncontrolled settings (pros):
  – Unexpected data can be obtained
  – Different insights into people’s perceptions and their experiences of using, interacting, or communicating in their context
Comparison of Evaluation Categories
• Lab-based studies
  – Pros: good at revealing usability problems
  – Cons: poor at capturing context of use
• Field studies
  – Pros: good at demonstrating how people use technologies in their intended setting
  – Cons: time-consuming and more difficult to conduct
• Modeling and predicting approaches
  – Pros: quick to perform
  – Cons: can miss unpredictable usability problems and subtle aspects of the user experience
• Analytics
  – Pros: good for tracking the use of a website
  – Cons: not good for finding out how users feel about a new color scheme or why they behave as they do
Questions?
