
Interaction Design Models

• Model Human Processor (MHP)

• Keyboard Level Model (KLM)

• GOMS

• Modeling Structure

• Modeling Dynamics

• Physical Models

Predictive/Descriptive Models

• Predictive models, such as the Model Human Processor (MHP) and the Keyboard Level Model (KLM), are a priori (pre-experience) models

– They give approximations of user actions before real users are brought into the testing
environment.

• Descriptive models, such as state networks and the Three-State Model, provide a framework for
thinking about user interaction

– They can help us to understand how people interact with dynamic systems.

Model Human Processor (MHP)

The Model Human Processor can make general predictions about human performance

• The MHP is a predictive model and is described by a set of memories and processors that
function according to a set of principles (principles of operation)

• Perceptual system (sensory image stores)

• Sensors

• eyes

• ears

• Buffers

• Visual image store (VIS)

• Auditory image store (AIS)

• Cognitive system

• Working memory (WM)—Short-term memory

• Long-term memory (LTM)


• Motor system

• arm-hand-finger system

• head-eye system

MHP – Working Memory

• WM consists of a subset of “activated” elements from LTM

• The SISs encode only the nonsymbolic physical parameters of stimuli.

• Shortly after the onset of a stimulus, a symbolic representation is registered in the WM

• The activated elements from LTM are called chunks.

• Chunks can be composed of smaller units, like the letters in a word

• A chunk might also consist of several words, as in a well-known phrase

MHP – Long-Term Memory

• The cognitive processor can add items to WM but not to LTM

• The WM must interact with LTM over a significant length of time before an item can be stored in
LTM

• This increases the number of cues that can be used to retrieve the item later

• Items with numerous associations have a greater probability of being retrieved

MHP – Processor Timing

• Perceptual—The perceptual system captures physical sensations by way of the visual and
auditory receptor channels

• Perceptual decay is shorter for the visual channel than for the auditory channel

• Perceptual processor cycle time is variable according to the nature of the stimuli

• Cognitive—The cognitive system bridges the perceptual and motor systems

• It can function as a simple conduit or it can involve complex processes, such as learning, fact
retrieval, and problem solving

• Cognitive coding in the WM is predominantly visual and auditory

• Cognitive coding in LTM is involved with associations and is considered to be predominantly semantic

• Cognitive decay time of WM varies over a large range

• Cognitive decay is highly sensitive to the number of chunks involved in the recalled item

• Cognitive decay of LTM is considered infinite

• Cognitive processor cycle time is variable according to the nature of the stimuli

• Motor—The motor system converts thought into action

• Motor processor cycle time is calculated in units of discrete micromovements
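
The processor cycle times above can be combined into rough response-time estimates. Below is a minimal sketch, assuming the commonly cited typical MHP cycle times (roughly 100 ms perceptual, 70 ms cognitive, 70 ms motor); these constants are illustrative middle-of-range values, not figures taken from this lecture.

```python
# Rough MHP estimate of a simple reaction task (perceive -> decide -> act).
# Cycle times are the commonly cited "typical" values; real values vary widely.
TAU_PERCEPTUAL = 100  # ms per perceptual cycle (range roughly 50-200 ms)
TAU_COGNITIVE = 70    # ms per cognitive cycle (range roughly 25-170 ms)
TAU_MOTOR = 70        # ms per motor cycle (range roughly 30-100 ms)

def simple_reaction_time(perceptual_cycles=1, cognitive_cycles=1, motor_cycles=1):
    """Sum the cycles each processor needs for a stimulus-response task."""
    return (perceptual_cycles * TAU_PERCEPTUAL
            + cognitive_cycles * TAU_COGNITIVE
            + motor_cycles * TAU_MOTOR)

# Pressing a key as soon as a light appears: one cycle of each processor.
print(simple_reaction_time())  # -> 240 ms
```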

Keyboard Level Model (KLM)

• The KLM is a practical design tool that can capture and calculate the physical actions a user will
have to carry out to complete specific tasks

The KLM can be used to determine the most efficient method and its suitability for specific contexts.

Given: – A task (possibly involving several subtasks)

– The command language of a system

– The motor skill parameter of the user


– The response time parameters

Predict: The time an expert user will take to execute the task using the system

– Provided that he or she uses the method without error

The KLM consists of:

– Operators

– Encoding methods

– Heuristics for the placement of mental (M) operators

KLM – Operators

• Operators

– K Press a key or button

– P Point with mouse

– H Home hands to keyboard or peripheral device

– D Draw line segments

– M Mental preparation

– R System response

• KLM example: type a transform in Goblin XNA, and then compile

KLM – Encoding Methods

• Encoding methods define how the operators involved in a task are to be written

For example, the task of typing the command ipconfig and then pressing RETURN would be written long-hand as

M K[i] K[p] K[c] K[o] K[n] K[f] K[i] K[g] K[RETURN]

It would be encoded in the short-hand version as

M 9K [ipconfig RETURN]

This results in a timing of 1.35 + 9 * 0.20 = 3.15 seconds for an average skilled typist.
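
The same arithmetic can be scripted so that alternative methods are easy to compare. A minimal sketch, assuming the standard KLM operator times (M = 1.35 s, K = 0.20 s for an average skilled typist, P = 1.10 s, H = 0.40 s); the helper function and the task encoding format are illustrative assumptions, not part of the original KLM notation.

```python
# Sum KLM operator times for an encoded task.
# Operator times (seconds) are the commonly cited KLM defaults; K varies with typing skill.
OPERATOR_TIME = {
    "K": 0.20,  # keystroke, average skilled typist
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands on keyboard or mouse
    "M": 1.35,  # mental preparation
}

def klm_time(operators):
    """operators: e.g. "M K K K" or ["M"] + ["K"] * 9."""
    if isinstance(operators, str):
        operators = operators.split()
    return sum(OPERATOR_TIME[op] for op in operators)

# "ipconfig" plus RETURN: one M operator and nine K operators.
print(round(klm_time(["M"] + ["K"] * 9), 2))  # -> 3.15 s
```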

KLM – Heuristics for M Operator Placement

• The KLM operators can be placed into one of two groups—physical or cognitive.

• The physical operators are defined by the chosen method of operation, such as clicking an icon
or entering a command string.

• The cognitive operators are governed by a set of heuristics


What the KLM Does Not Do

• The KLM was not designed to consider the following:

– Errors

– Learning

– Functionality

– Recall

– Concentration

– Fatigue

– Acceptability

APPLICATIONS

Case 1 (Mouse-Driven Text Editor)

– During the development of the Xerox Star, KLMs served as expert proxies

Case 2 (Directory Assistance Workstation)

– The KLM clarified the tradeoffs between the number of keystrokes entered in the query
and the number of returned fields

GOMS

Goal/task models can be used to explore the methods people use to accomplish their goals

• Card et al. suggested that user interaction could be described by defining the sequential actions
a person undertakes to accomplish a task.

• The GOMS model has four components:

– goals

– operators

– methods

– selection rules

Goals - Tasks are deconstructed as a set of goals and subgoals.

Operators - Tasks can only be carried out by undertaking specific actions.

Methods - Represent ways of achieving a goal

– Comprised of operators that facilitate method completion

Selection Rules - The method that the user chooses is determined by selection rules
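
A hedged sketch of how these four components fit together, using a hypothetical delete-a-word task; the goal name, methods, operators, and selection rule below are invented for illustration and are not taken from the lecture.

```python
# A toy GOMS-style decomposition: a goal contains methods, methods are
# sequences of operators, and a selection rule picks between methods.
goal = {
    "goal": "DELETE-WORD",
    "methods": {
        "MOUSE-METHOD": ["POINT-TO-WORD", "DOUBLE-CLICK", "PRESS-DELETE"],
        "KEYBOARD-METHOD": ["MOVE-CURSOR-TO-WORD", "SELECT-WORD", "PRESS-DELETE"],
    },
}

def select_method(hands_on_mouse):
    """Selection rule: use the mouse if the hand is already on it."""
    return "MOUSE-METHOD" if hands_on_mouse else "KEYBOARD-METHOD"

method = select_method(hands_on_mouse=True)
print(method, "->", goal["methods"][method])
```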
GOMS – CMN-GOMS

CMN-GOMS can predict behavior and assess memory requirements

• CMN-GOMS (named after Card, Moran, and Newell) – a detailed expansion of the general GOMS model

– Includes specific analysis procedures and notation descriptions

• Can judge memory requirements (the depth of the nested goal structures)

• Provides insight into user performance measures

GOMS – Other GOMS Models

• NGOMSL (Natural GOMS Language), developed by Kieras, provides a structured natural-language notation for GOMS analysis and describes the procedures for accomplishing that analysis (Kieras, 1997)

– NGOMSL Provides:

• A method for measuring the time it will take to learn specific method of
operation

• A way to determine the consistency of a design’s methods of operation

CPM-GOMS represents

– Cognitive

– Perceptual

– Motor operators

CPM-GOMS uses Program Evaluation Review Technique (PERT) charts

– Maps task durations using the critical path method (CPM).

CPM-GOMS is based directly on the Model Human Processor

– Assumes that perceptual, cognitive, and motor processors function in parallel

• Program Evaluation Review Technique (PERT) chart – resource flows
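
Because CPM-GOMS assumes the processors run in parallel, predicted task time is the length of the longest dependency chain (the critical path), not the sum of all operator times. A minimal sketch of that idea over a hypothetical operator graph; the operator names and durations are invented for illustration.

```python
from functools import lru_cache

# Longest path through a small DAG of operators.
# Each entry: operator -> (duration in ms, operators it must wait for).
OPERATORS = {
    "perceive-target": (100, []),
    "decide-target":   (70,  ["perceive-target"]),
    "move-cursor":     (300, ["decide-target"]),
    "perceive-label":  (100, []),                    # runs in parallel with the move
    "verify-label":    (70,  ["perceive-label"]),
    "click":           (70,  ["move-cursor", "verify-label"]),
}

@lru_cache(maxsize=None)
def finish_time(op):
    """Earliest finish time: own duration plus the latest predecessor."""
    duration, deps = OPERATORS[op]
    return duration + max((finish_time(d) for d in deps), default=0)

# Critical path here: perceive-target -> decide-target -> move-cursor -> click.
print(finish_time("click"))  # -> 540 ms
```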


Modeling Structure

• Structural models can help us to see the relationship between the conceptual components of a
design and the physical components of the system, allowing us to judge the design’s relative
effectiveness.

Modeling Structure – Hick's Law

Hick’s law can be used to create menu structures

• Hick’s law states that the time it takes to choose one item from n alternatives is proportional to
the logarithm (base 2) of the number of choices, plus 1.

• This equation is predicated on all items having an equal probability of being chosen

T = a + b log2(n + 1)

• The coefficients are empirically determined from experimental design

• Raskin (2000) suggests that a = 50 and b = 150 are sufficient placeholders for “back-of-the-envelope” approximations
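
A minimal sketch of this back-of-the-envelope estimate, using Raskin's placeholder coefficients (a = 50 ms, b = 150 ms); the flat-versus-nested menu comparison is an illustrative assumption, not a measured result.

```python
import math

def hick_time_ms(n_choices, a=50.0, b=150.0):
    """Estimated decision time (ms) for choosing among n equally likely items."""
    return a + b * math.log2(n_choices + 1)

# One flat menu of 16 items vs. two nested menus of 4 items each.
flat = hick_time_ms(16)
nested = 2 * hick_time_ms(4)
print(round(flat), round(nested))  # -> roughly 663 ms vs. 797 ms
```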

Menu listing order must be logical and relevant

• Menus are lists grouped according to some predetermined system

• If the rules are not understood or if they are not relevant to a particular task, their arrangement
may seem arbitrary and random, requiring users to search in a linear, sequential manner.

Modeling Dynamics

Understanding the temporal aspects of interaction design is essential to the design of usable and
useful systems

• Interaction designs involve dynamic feedback loops between the user and the system

– User actions alter the state of the system, which in turn influences the user’s
subsequent actions

– Interaction designers need tools to explore how a system undergoes transitions from
one state to the next

Modeling Dynamics – State Transition Networks

State Transition Networks can be used to explore:

– Menus

– Icons

– Tools
State Transition Networks can show the operation of peripheral devices

• State Transition Network

• STNs are appropriate for showing sequential operations that may involve choice on the part of
the user, as well as for expressing iteration.
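
An STN can be written down directly as a table of (state, event) → next-state pairs. A minimal sketch for a hypothetical two-level menu; the state and event names are invented for illustration.

```python
# STN as a dictionary: (current_state, user_event) -> next_state.
TRANSITIONS = {
    ("MAIN-MENU", "select File"):    "FILE-MENU",
    ("FILE-MENU", "select Open"):    "OPEN-DIALOG",
    ("FILE-MENU", "press Esc"):      "MAIN-MENU",
    ("OPEN-DIALOG", "press Cancel"): "FILE-MENU",
}

def run(start, events):
    """Walk the network, ignoring events with no defined transition."""
    state = start
    for event in events:
        state = TRANSITIONS.get((state, event), state)
        print(f"{event!r:16} -> {state}")
    return state

run("MAIN-MENU", ["select File", "select Open", "press Cancel", "press Esc"])
```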

Modeling Dynamics – Three-State Model

The Three-State Model can help designers to determine appropriate I/O devices for specific interaction
designs

• The TSM can reveal intrinsic device states and their subsequent transitions

– The interaction designer can use these to make determinations about the correlation
between task and device

– Certain devices can be ruled out early in the design process if they do not possess the
appropriate states for the specified task

• The Three-State Model (TSM) is capable of describing three different types of pointer
movements

– Tracked: A mouse device is tracked by the system and represented by the cursor
position

– Dragged: A mouse also can be used to manipulate screen elements using drag-and-drop
operations

– Disengaged movement: Some pointing devices can be moved without being tracked by
the system, such as light pens or fingers on a touchscreen, and then reengage the
system at random screen locations
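
The three states can also be expressed as a small per-device transition table, which makes the "rule a device out early" step concrete. A hedged sketch using Buxton's usual numbering (state 0 = out of range, state 1 = tracking, state 2 = dragging) and the lecture's framing that touching the screen enters state 2; the device list and the task requirement are illustrative assumptions.

```python
# Which state transitions each pointing device can express.
DEVICE_TRANSITIONS = {
    "mouse":       {(1, 2), (2, 1)},                   # always tracked; button toggles drag
    "touchscreen": {(0, 2), (2, 0)},                   # finger is off-screen or engaged
    "stylus":      {(0, 1), (1, 0), (1, 2), (2, 1)},   # hover tracking plus tip-down drag
}

def supports(device, required):
    """Rule a device out if it lacks any transition the task needs."""
    return required <= DEVICE_TRANSITIONS[device]

# A task that needs hover tracking (state 1) before dragging:
needed = {(1, 2), (2, 1)}
for device in DEVICE_TRANSITIONS:
    print(device, supports(device, needed))
```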
Modeling Dynamics – Glimpse Model

• Forlines et al. (2005):

– Because the pen and finger give clear feedback about their location when they touch
the screen and enter state 2, it is redundant for the cursor to track this movement

– Pressure-sensitive devices can take advantage of the state 1 redundancy and map pressure to other features

– Undo commands coupled with a preview function (Glimpse) can be mapped to a pressure-sensitive direct input device

• Some applications

– Pan and zoom interfaces—Preview different magnification levels

– Navigation in a 3D world—Quick inspection of an object from different perspectives
– Color selection in a paint program—Preview the effects of color manipulation

– Volume control—Preview different volume levels

– Window control—Moving or resizing windows to view occluded objects

– Scrollbar manipulation—Preview other sections of a document

Physical Models

• Physical models can predict efficiency based on the physical aspects of a design

• They calculate the time it takes to perform actions such as targeting a screen object and clicking
on it

Physical Models – Fitts’ Law

• Fitts’ law states that the time it takes to hit a target is a function of the size of the target and the
distance to that target

Fitts’ law can be used to determine the size and location of a screen object

• There are essentially three parts to Fitts’ law:

– Index of Difficulty (ID)—Quantifies the difficulty of a task based on width and distance

– Movement Time (MT)—Quantifies the time it takes to complete a task based on the
difficulty of the task (ID) and two empirically derived coefficients that are sensitive to
the specific experimental conditions

– Index of Performance (IP) [also called throughput (TP)]—Based on the relationship


between the time it takes to perform a task and the relative difficulty of the task

• Fitts described “reciprocal tapping”

– Subjects were asked to tap back and forth on two 6-inch-tall plates with width W of 2, 1,
0.5, and 0.25 inches
• Fitts proposed that ID, the difficulty of the movement task, could be quantified by the equation

ID = log2(2A/W)

Where:

A is the amplitude (distance to the target)

W is the width of the target

• This equation was later refined by MacKenzie: ID = log2(A/W + 1)

The average time for the completion of any given movement task can be calculated by the following
equation:

MT = a + b log2(A/W + 1)

Where: MT is the movement time

Constants a and b are arrived at by linear regression

• By calculating the MT and ID, we have the ability to construct a model that can determine the
information capacity of the human motor system for a given task.

– Fitts referred to this as the index of performance (throughput)

• Throughput is the rate of human information processing: TP = ID/MT
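
A minimal sketch of these three quantities using the MacKenzie formulation; the a and b coefficients are rough placeholders (in the spirit of the back-of-the-envelope values quoted later in these notes), not empirically fitted constants.

```python
import math

def fitts_id(amplitude, width):
    """Index of difficulty (bits), MacKenzie formulation."""
    return math.log2(amplitude / width + 1)

def fitts_mt(amplitude, width, a=0.05, b=0.15):
    """Movement time (seconds) with placeholder regression coefficients."""
    return a + b * fitts_id(amplitude, width)

def throughput(amplitude, width, a=0.05, b=0.15):
    """Index of performance TP = ID / MT (bits per second)."""
    return fitts_id(amplitude, width) / fitts_mt(amplitude, width, a, b)

# A 32-pixel-wide target 256 pixels away vs. the same target 64 pixels away.
for d in (256, 64):
    print(d, round(fitts_id(d, 32), 2), "bits,", round(fitts_mt(d, 32), 3), "s")
print("TP:", round(throughput(256, 32), 1), "bits/s")
```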

• Implications of Fitts’ Law

– Large targets and small distances between targets are advantageous

– Screen elements should occupy as much of the available screen space as possible

– The largest Fitts-based pixel is the one under the cursor

– Screen elements should take advantage of the screen edge whenever possible

– Large menus like pie menus are easier to use than other types of menus.

• Limitations of Fitts’ Law

– There is no consistent way to deal with errors

– It only models continuous movements

– It is not suitable for all input devices, for example, isometric joysticks

– It does not address two-handed operation

– It does not address the difference between flexor and extensor movements

– It does not address cognitive functions such as the mental operators in the KLM model
• Bivariate data

– Smaller-Of—The smaller of the width and height measurements:

IDmin(W, H ) = log2 [D/min (W, H ) + 1]

This is not a particularly good method. Why?

Amplitude Pointing (AP): One-dimensional tasks

– Only the target width (whether horizontal or vertical) is considered

– The constraint is based on W, and target height (H) is infinite or equal to W

– AP errors are controlled at “the final landing”


Directional Pointing (DP): If W is set at infinity then H becomes significant

– The constraint is based on H

– DP errors are corrected incrementally during the pointing movement

• Implications for interaction design:

– Overly elongated objects hold no advantage (W/H ratios of 3 and higher).

– Objects should be elongated along the most common trajectory path (widgets normally
approached from the side should use W, those approached from the bottom or top
should use H).

– Objects should not be offset from the screen edge (consistent with the Macintosh OS).

– Objects that are defined by English words generally have W>H and should be placed on
the sides of the screen. (However, greater amplitude measurements may be significant
on the normal “landscape”-oriented screens.)

DESIGN PRINCIPLES

• Principles of Interaction Design

• Comprehensibility

• Learnability

• Effectiveness/Usefulness

• Efficiency/Usability

• Grouping

• Stimulus Intensity

• Proportion

• Screen Complexity

• Resolution/Closure

• Usability Goals

Principles of Interaction Design

• How do we create elegant solutions to complex interaction problems?

• How do interaction designers succeed at creating great designs that are powerful and
aesthetically appealing?
Design principles can be used to guide design decisions

• Design principles do not prescribe specific outcomes; they function within the context of a
particular design project.

• Design principles guide interaction designers and help them make decisions that are based on
established criteria

Framework for Design Principles

The framework has the following components:

• Usability Goals

– There are two main usability goals in the framework: comprehensibility and learnability.

• Design Principle Categories

– The framework also divides the design principles into two main groups: efficiency principles and effectiveness principles.

• Format to Describe Design Principles

– The framework uses the format “serves the principle of … which promotes …” to
describe the different principles.

– Familiarity serves the principle of memorability, which promotes usability.



• Functionality - The system must have adequate functionality for a particular task.

• Presentation Filter - The functionality must be made accessible through the presentation filter
(interface).

• Comprehensibility Barrier - If the presentation is comprehensible, the comprehensibility barrier will be overcome. This depends on the degree of efficiency/usability in the interface design.

• Learnability Barrier – If the interface is comprehensible, it will be learnable; there is a direct relationship.

• Effectiveness/Usefulness - If the user can learn the interface, he or she can take advantage of the functionality, and the interface will, therefore, be useful.

Comprehensibility

An interface design that is easy to comprehend will be efficient and effective

• If a user does not understand the interface it will be useless

• A design’s comprehensibility is highly dependent on the way in which the interface communicates its functionality to the user

Learnability

An interface with high usability will be easier to learn

• The learnability of a design is based on comprehensibility: if you can’t understand it, you can’t
learn it

Comprehensibility and Learnability

• Learnability and comprehensibility are recursive: we start with comprehensibility which affects
learnability, which will in turn increase comprehensibility.
Principles of Interaction Design

• Effectiveness/Usefulness

– Utility

– Safety

– Flexibility

– Stability

• Efficiency/Usability

– Simplicity

– Memorability

– Predictability

– Visibility

Design Principle Categories

• Effectiveness/Usefulness

Effectiveness describes the usefulness of a design

• The effectiveness goal stipulates that a design must fulfill the user’s needs by affording the
required functionality

• Utility - The principle of utility relates to what the user can do with the system

• In other words: To what degree does this actually help the user? How much
functionality does it provide?

• Safety - If a design has a high degree of safety, it will prove more useful than a design that
involves a high degree of risk. (crashing, precariously placed buttons)

• For example, if a delete button is located in immediate proximity to a save button, a single misplaced click can destroy the user’s work

• Recovery - can be implemented in interaction designs by incorporating appropriate undo functionality and robust error recovery routines.

A computer shall not harm your work or, through inaction, allow your work to come to harm.


• Safety Example

– Document Recovery in Word 2003

Poor Button placement

• Flexibility - A tool that is flexible can be used in multiple environments and may address diverse
needs

– Customization - A tool would have greater flexibility if people were able to customize
the interface according to their personal preferences

– Example: Adobe Photoshop is used for all kinds of projects and allows a wide range of customizations to make it as suited to the user as possible.

• Stability - A stable system is a robust system.

– A system that functions consistently well will be more useful than a system that crashes
frequently

• Flexibility Example

– Workspace features in Adobe Photoshop CS5


Efficiency/Usability

Efficiency describes the usability of a design

• The efficiency goal stipulates that a design should enable a user to accomplish tasks in the
easiest and quickest way possible without having to do overly complex or extraneous
procedures.

• In other words, given its functionality, how easy/quick is it to do what you want it to do?

A computer shall not waste your time or require you to do more work than is strictly necessary.

• Simplicity - If things are simple they will be easy to understand and, therefore, easy to learn and
remember.

– 80/20 Rule - The 80/20 rule implies that 80% of an application’s usage involves 20% of
its functionality

– Satisficing - Balances the conflicting needs of finding the optimal solution that satisfies all the requirements and settling on a solution that will be sufficient to proceed with the design

• 80/20 Rule Example

– Windows 7

– Most operations are moving files, navigating folders, starting, closing, and switching between various programs


• Simplicity

– Progressive Disclosure - Show the user only what is necessary

– Microsoft Word 2003 Find and Replace

• Simplicity (continued)

– Constraints - Involve limiting the actions that can be performed in a particular design

• Controls the design’s simplicity

• Physical Constraints

– Paths - constrain movement to a designated location and direction

– Scrollbars, Sliders

– Axes - constrain the user’s movement to rotation around an axis

– Trackball

– Barriers - provide spatial constraints that can confine the user’s movement to the appropriate areas of the interface

– The bounds of the computer screen

• Constraints Example

– A graphic equalizer built with jQuery UI


• Simplicity (continued)

• Psychological Constraints

– Conventions - exploit learned behavior to influence a user’s actions

– File Menu, Edit menu

– Mapping - can influence the way in which people perceive relationships between
controls and effects

– Ribbon bar in Microsoft Office 2007+

– Symbols - can influence the way in which we interact with an interface by defining
meaning and constraining our possible interpretations of interface elements

– Shopping cart icon

• Mapping Example

– Microsoft Word 2010 Ribbon Interface

• Memorability - Interfaces that have high memorability will be easier to learn and use

– Many different parameters affect memorability:

• Location

– Where you’re used to seeing certain things

• Logical Grouping

– Similar things grouped together

• Conventions

– Things you’re used to that are accepted

• Redundancy

– Similar user interface elements even though tasks might be different


Predictability - Predictability involves a person’s expectations and his ability to determine the
results of his actions ahead of time.

• When you click a B on a toolbar in a text editor, you expect the text to become
bold

Consistency-Correctness

– Consistency reinforces our associations and, therefore, increases our ability to remember and predict outcomes and processes.

– Before we strive to be consistent, we must make sure we are correct

– Don’t decide to make the B button do something else

• UI Consistency Failure: Adobe CS4 (Dreamweaver, InDesign, Fireworks) on OS X


• Predictability (continued)

– Conventions: allow us to use our intuitions

– If you see a floppy icon, you usually assume it’s there to save whatever you’re working on

– Familiarity: familiar menu names and options help users locate objects and functions
more easily

– File/Edit menus, address bar in a browser

– Location, Location, Location: Not all areas on the screen are created equal

– Top of screen usually reserved for menus and functions


• UI Convention Example

– Notifications in iOS and Android

• Predictability (continued)

– Modes: Modes create instability in mental models because they change the way objects
function

– The shift button on a keyboard changes the input mode

– Text mode in Photoshop

– Equation editor mode in Word

• Visibility - The principle of visibility involves making the user aware of the system’s components
and processes, including all possible functionality and feedback from user actions.

• The principles of progressive disclosure and simplicity should be used in conjunction with the
principle of visibility to avoid overload

– Overload: Following the principle of visibility without also applying progressive disclosure can lead to visual overload

– Imagine if all of Word’s functionality were on the first ribbon

– Feedback: Direct Manipulation interfaces provide immediate visual feedback about user
actions. It is the task of the interaction designer to decide what form that feedback
takes

– Clicking a button usually simulates pushing a button in real life, letting the user know the application has acknowledged their selection


– Overload Example


– Feedback Example

– Button feedback for normal, hover and clicked states

• Visibility (continued)

– Recognition/Recall: The principle of visibility is based on the fact that we are better at
recognition than we are at recall

– Recognizing is better than remembering

– Orientation: People need to be able to orient themselves, especially in complex information spaces

– Title bars and headers in a web page


• Recognition / Recall Failure

• Bad website design courtesy of Saturn

• Orientation Failure

• Where am I? (Zune Journey)


Grouping

Gestalt Principles of Perception

• Low-level principles - used to make decisions about specific screen controls, menus, and layouts

Use visual cues to support the logical structure of the interface

• Gestalt Principles of Perception

– Gestalt psychology strives to explain the factors involved in the way we group things

– At the heart of Gestalt psychology is the idea that we strive to find the simplest
solutions to incomplete visual information

• Figure-Ground: Basic premise

– We perceive our environment by differentiating between objects and their backgrounds

The Rubin face/vase illusion – the Mac logo

• The Gestalt Principles of Perception:

– Proximity

– Similarity

– Common Fate

– Closure

– Good Continuity

– Area

– Symmetry

– Surroundedness

– Prägnanz
Proximity Principle – Objects that are close to each other will be seen as belonging together

Equidistant – horizontal proximity – vertical proximity

• Proximity - Adobe PhotoShop Preferences Dialog

• Similarity Principle – Objects that have similar visual characteristics, such as size, shape or color
will be seen as a group and therefore related

Rows of similar objects – columns of similar objects – grouped columns


• Property Pane from Macromedia’s Dreamweaver
– Our eyes pick up all of the text boxes because of the strong blue squares and the white
areas that they have in common


• Common Fate Principle – Objects that move together are seen as related

Unaligned drop-down menus – aligned drop-down menus

• Closure Principle – We tend to see things as complete objects even though there may be gaps in
the shape of the objects

][ ][ ][

[ ][ ][ ][
Closure Principle

• Good Continuity Principle – We tend to see things as smooth, continuous representations rather than abrupt changes
• The Area Principle – Objects with small area tend to be seen as the figure, not the ground (also
called the smallness principle)

• Symmetry Principle – Symmetrical areas tend to be seen as complete figures that form around their
middle

• Surroundedness Principle – An area that is surrounded will be seen as the figure and the area
that surrounds will be seen as the ground
• Prägnanz Principle – We tend to perceive things based on the simplest and most stable or
complete interpretation


Visual conflict with Common Fate – visual conflict with Surroundedness

Other Principles of Perception - Stimulus Intensity

• We respond first to the intensity of a stimulus and only then do we begin to process its meaning.
Other Principles of Perception – Proportion

• Proportion can be used to represent logical hierarchies

• Golden Ratio - The golden ratio expresses the relationship between two aspects of a form, such as height to width, and equals approximately 0.618 (the reciprocal of phi, 1.618)

• Examples of the Golden Ratio


• Fibonacci - A sequence of numbers in which each number is the sum of the two preceding
numbers.

– The ratio between successive numbers in the Fibonacci series approaches phi.

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, …

• Golden Spiral - A logarithmic spiral whose growth factor is related to the golden ratio.

– Gets wider by a factor of phi every quarter turn.

• Golden Triangle – The ratio of the longest side to the smallest side is phi.
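
A quick numerical check of the Fibonacci claim; the sketch simply prints successive ratios, which settle toward phi ≈ 1.618 (whose reciprocal is ≈ 0.618).

```python
# Ratios of successive Fibonacci numbers converge to the golden ratio.
a, b = 1, 1
for _ in range(12):
    a, b = b, a + b
    print(f"{b}/{a} = {b / a:.6f}")
# The printed values settle toward 1.618034 (phi); 1/phi is 0.618034.
```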

Other Principles of Perception - Screen Complexity

• The measure of complexity developed by Tullis (1984) can be used to calculate the relative
complexity, and therefore the difficulty, of a design.

- This measure of complexity uses information theory


• Formula for calculating the measure of complexity:

C = -N Σ pn log2 pn (summed over the m event classes)

Where:

C, complexity of the system in bits

N, total number of events (widths or heights)

m, number of event classes (number of unique widths or heights)

pn, probability of occurrence of the nth event class (based on the frequency of events within that class)

• To calculate the measure of complexity for a particular screen, do the following:

1. Place a rectangle around every screen element

2. Count the number of elements and the number of columns (vertical alignment points)

3. Count the number of elements and the number of rows (horizontal alignment points)


• Redesigned screen

• Simplified formula by Galitz (2002):

1. Count the number of elements on the screen

2. Count the number of horizontal (column) alignment points.

3. Count the number of vertical (row) alignment points.
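
A minimal sketch of the complexity calculation, following the variable definitions above (an event is one width or one height; an event class is a unique value); the element sizes are a made-up example screen, not data from Tullis.

```python
import math
from collections import Counter

def complexity_bits(values):
    """C = -N * sum(p_n * log2 p_n) over the classes of unique values."""
    n_total = len(values)
    counts = Counter(values)
    return -n_total * sum((c / n_total) * math.log2(c / n_total)
                          for c in counts.values())

# Hypothetical screen: widths and heights (pixels) of each bounded element.
widths = [200, 200, 200, 120, 120, 80]
heights = [24, 24, 24, 24, 48, 48]

horizontal = complexity_bits(widths)   # complexity from width classes
vertical = complexity_bits(heights)    # complexity from height classes
print(round(horizontal, 1), round(vertical, 1), round(horizontal + vertical, 1))
```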

• Complexity vs. Usability

– Comber and Maltby (1997) found that both overly simple and overly complex screens
were low in usability

• Usability

– Usability is defined in terms of:

• Effectiveness (Understandable)

• Learnability (Easy to Learn)

• Attitude (Appearance)
• Complexity Guidelines

– Optimize the number of elements on a screen within the limits of clarity and utility.

– Minimize the alignment points. Use grid structures.

• Tradeoffs between usability and complexity:

– As complexity decreased, predictability increased.

– As complexity decreased, it became harder to differentiate among screen objects; the screen became artificially regular.

– Decreased complexity meant that there were fewer ways to group objects.

– Excessive complexity made screens look artificially irregular.

– Increased complexity could occur from increased utility.

• Resolution/Closure - Relates to the perceived completion of a user’s tasks.

– When the user’s objective is satisfied, he or she will consider the task complete and
move on to the next goal

– Can lead to problems, as with ATMs: once users have their cash they may consider the task complete and walk away without retrieving their card

Usability Goals – Principles – Guidelines

Usability Goal—Easy to use

– Most people are interested in completing their tasks and do not enjoy struggling with
the tools they need to use. One of the most important goals of user-centered design is
to make things easy to use.

Design Principle—Simplicity

– Simple things require little effort and can often be accomplished without much thought.
If interaction designs are guided by the principle of simplicity, they will be easier to use.

Project Guideline—All dialogue boxes should present only the basic functions that are most often used; other, less-used functions should be accessible through an expandable dialogue with a link for “More Options.”

Design Models 3 – Hick’s and Fitts’ Laws

Interaction Design Models


• Model Human Processor (MHP)

• Keyboard Level Model (KLM)

• GOMS

• Modeling Structure
• Modeling Dynamics

• Physical Models

• Modeling Structure : Structural models can help us to see the relationship between the
conceptual components of a design and the physical components of the system, allowing us to
judge the design’s relative effectiveness.

Modeling Structure – Hick's Law

Hick’s law can be used to create menu structures

• Hick’s law states that the time it takes to choose one item from n alternatives is proportional to
the logarithm (base 2) of the number of choices, plus 1.

• This equation is predicated on all items having an equal probability of being chosen


T = a + b log2(n + 1)

• The coefficients are empirically determined from experimental design

• Raskin (2000) suggests that a = 50 and b = 150 are sufficient placeholders for “back-of-the-envelope” approximations.

Menu listing order must be logical and relevant

• Menus are lists grouped according to some predetermined system

• If the rules are not understood or if they are not relevant to a particular task, their arrangement
may seem arbitrary and random, requiring users to search in a linear, sequential manner.

Physical Models

• Physical models can predict efficiency based on the physical aspects of a design

• They calculate the time it takes to perform actions such as targeting a screen object and clicking
on it


Physical Models – Fitts’ Law

Fitts’ law states that the time it takes to hit a target is a function of the size of the target and the
distance to that target
Fitts’ law can be used to determine the size and location of a screen object

• There are essentially three parts to Fitts’ law:

– Index of Difficulty (ID)—Quantifies the difficulty of a task based on width and distance

– Movement Time (MT)—Quantifies the time it takes to complete a task based on the
difficulty of the task (ID) and two empirically derived coefficients that are sensitive to
the specific experimental conditions

– Index of Performance (IP) [also called throughput (TP)]—Based on the relationship


between the time it takes to perform a task and the relative difficulty of the task

• Fitts described “reciprocal tapping”

– Subjects were asked to tap back and forth on two 6-inch-tall plates with width W of 2, 1,
0.5, and 0.25 inches

• Fitts proposed that ID, the difficulty of the movement task, could be quantified by the equation

ID = log2(2A/W)

Where: A is the amplitude (distance to the target), W is the width of the target

• This equation was later refined by MacKenzie to align more closely with Shannon’s law (from
information theory):

ID = log2(A/W + 1)

The average time for the completion of any given movement task can be calculated by the following
equation: MT = a + b log2(A/W + 1)

Where: MT is the movement time

Constants a and b are arrived at by linear regression

(just like for Hick’s Law, a = 50 and b = 150 are sufficient placeholders for “back-of-the-envelope” approximations)

• By calculating the MT and ID, we have the ability to construct a model that can determine the
information capacity of the human motor system for a given task.

– Fitts referred to this as the index of performance (throughput)

• Throughput is the rate of human information processing

TP = ID/MT
• Implications of Fitts’ Law

– Large targets and small distances between targets are advantageous

– Screen elements should occupy as much of the available screen space as possible

– The largest Fitts-based pixel is the one under the cursor

– Screen elements should take advantage of the screen edge whenever possible

– Large menus like pie menus are easier to use than other types of menus.

• Limitations of Fitts’ Law

– There is no consistent way to deal with errors

– It only models continuous movements

– It is not suitable for all input devices, for example, isometric joysticks

– It does not address two-handed operation

– It does not address the difference between flexor and extensor movements

– It does not address cognitive functions such as the mental operators in the KLM model

• W is computed on the same axis as A

Horizontal and vertical trajectories – targeting a circular object

Bivariate data

– Smaller-Of—The smaller of the width and height measurements:

IDmin(W, H ) = log2 [D/min (W, H ) + 1]


Targeting a rectangular object.

W—The “apparent width” calculated along the approach vector

IDW = log2 (D/W + 1)

Apparent width.
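
A small sketch comparing the two bivariate formulations above on a hypothetical wide, short target approached from the side; the numbers are illustrative only.

```python
import math

def id_smaller_of(distance, width, height):
    """Smaller-Of model: use the tighter of the two target dimensions."""
    return math.log2(distance / min(width, height) + 1)

def id_apparent_width(distance, apparent_width):
    """W' model: use the target extent measured along the approach vector."""
    return math.log2(distance / apparent_width + 1)

# Wide, short target (80 x 20 px) approached horizontally from 300 px away:
# along the approach vector the target presents its full 80 px width.
print(round(id_smaller_of(300, 80, 20), 2))   # penalizes the short height
print(round(id_apparent_width(300, 80), 2))   # credits the full width
```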

Amplitude Pointing (AP): One-dimensional tasks

– Only the target width (whether horizontal or vertical) is considered

– The constraint is based on W, and target height (H) is infinite or equal to W

– AP errors are controlled at “the final landing”

Directional Pointing (DP): If W is set at infinity then H becomes significant

– The constraint is based on H
