
Heuristic Studies

Made By:
Zeeshan Arshad 18 ARID 5444
Saad ul Haq 18 ARID 5422
Ameer Hamza
Outline
• Usability heuristics
• Heuristic evaluation
Usability heuristics
• Heuristics:
– rules of thumb that describe features of usable systems
• As design principles:
– broad usability statements that guide design efforts
– derived by evaluating common design problems across
many systems
• As evaluation criteria:
– the same principles can be used to evaluate a system for
usability problems
– becoming very popular
• user involvement not required
• catches many design flaws
Usability heuristics
• Advantages
– the minimalist approach
• a few guidelines can cover many usability problems
• easily remembered, easily applied with modest effort
– discount usability engineering
• cheap and fast way to inspect a system
• can be done by usability experts and end users
• Problems
– principles are at the ‘motherhood’ level
• can’t be treated as a simple checklist
• there are subtleties involved in their use
Visibility of system status
• The system should always keep users informed
about what is going on
• Appropriate feedback within reasonable time
Visibility of system status
• What mode am I in now?
• What did I select?
• How is the system interpreting my actions?
Match between system and real world
• The system should speak the users' language
– Words, phrases and concepts familiar to the user
– Not system-oriented terms
• Follow real-world conventions
• Put information in a natural and logical order
User control and freedom
• Users often choose system functions by
mistake
– They need clearly marked “emergency exits”
– Let them get back without extended dialogue
• Support undo and redo
Consistency and standards
• Users should not have to wonder whether
different words, situations, or actions mean
the same thing.
• Follow platform conventions
Consistency and standards
• Inconsistent labels for the same pair of actions: "Ok / Cancel", "Cancel / Ok", "Done / Never Mind", "Accept / Dismiss"
Error prevention
• Even better than good error messages is a
careful design which prevents a problem from
occurring in the first place.
• Two options:
– Eliminate error-prone conditions
– Check for them and present users with a
confirmation option before they commit
Recognition rather than recall
• Minimize the user's memory load by making
objects, actions, and options visible
• The user should not have to remember
information from one part of the dialogue to
another
• Instructions for use of the system should be
visible or easily retrievable whenever
appropriate
Recognition rather than recall
Flexibility and efficiency of use
• Accelerators can speed up interaction for
expert users
– Accelerators are unseen by the novice user
– The system can cater to both inexperienced and
experienced users
• Allow users to tailor frequent actions.
Aesthetic and minimalist design
• Dialogues should not contain information that
is irrelevant or rarely needed
• Every extra unit of information in a dialogue
competes with relevant units, and diminishes
their visibility
Help users recognize, diagnose, and
recover from errors
• Error messages:
– Should be expressed in plain language (no codes)
– Should precisely indicate the problem
– Should constructively suggest a solution
Phases of Heuristic Evaluation
• Pre-evaluation training
– give evaluators needed domain knowledge & information
on the scenario
• Evaluation
– each individual evaluates the UI & makes a list of problems
• Severity rating
– determine how severe each problem is
• Aggregation
– group meets & aggregates problems (w/ ratings)
• Debriefing
– discuss the outcome with design team
Heuristic Evaluation Details
• Each inspector works alone with the interface
– A session typically lasts 1-2 hours
• The two-pass approach
– first pass: focus on specific interface elements
– second pass: higher level integration and flow
• If system is walk-up-and-use or evaluators are domain
experts, no assistance needed
– otherwise might supply evaluators with scenarios
• Each evaluator produces list of problems
– explain why with reference to heuristic or other
information
– be specific & list each problem separately
Severity Rating
• Used to allocate resources to fix problems
• Estimates of need for more usability efforts
• Combination of
– frequency
– impact
– persistence (one time or repeating)
• Should be calculated after all evals. are complete
• Should be done independently by all judges
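Once every judge has submitted independent ratings, the aggregation itself is mechanical. A minimal sketch of that step (the problem names, ratings, and layout are hypothetical illustrations, not data from these slides), assuming each judge rates every problem on the 0-4 scale:

```python
from statistics import mean

# Hypothetical example: problem -> independent 0-4 severity ratings,
# one per judge, collected only after all evaluations are complete.
ratings = {
    "no undo on delete":            [4, 3, 4],
    "inconsistent Ok/Cancel order": [2, 2, 3],
    "error message uses jargon":    [3, 2, 2],
}

# Average the independent judgments and sort, so the most severe problems
# get fixing resources first.
for problem, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{problem:30s} mean severity = {mean(scores):.1f}  judges: {scores}")
```

Averaging several independent ratings is what makes the estimate usable; a single evaluator's rating tends to be unreliable.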
Heuristic evaluation of usability
Content
• What is Heuristic evaluation
• Background
• Benefits
• Main advantages and drawbacks of the method
• Scenario and methods of evaluation
• 10 usability Heuristics in usability engineering
• How to conduct heuristic Evaluation
• Phases of the Evaluation Method
• Problems and Evaluators
• Seamlessness throughout the whole user experience
What is Heuristic Evaluation?
• Inadequate use of usability engineering methods in software development projects costs an estimated $30 billion per year in the US alone

• A generic name for a set of methods based on having evaluators inspect or examine usability-related aspects of a user interface

• A form of usability inspection in which usability specialists examine the interface and judge its compliance with recognized usability principles (the "heuristics")
Background
• Developed by Jakob Nielsen and Rolf Molich in 1990

• The main concept is to analyze the interface and judge its compliance with recognized usability principles (heuristics)

• External, carefully chosen, experienced evaluators examine usability-related aspects of the UI by
o Analyzing its quality attributes
o Applying a set of methods for improving ease of use during the design process

• Increases efficiency, consumer satisfaction rates and learning
Main advantages and drawbacks
• Quick, valid, useful and cost-effective

• Provides feedback early in the design process

• Uses a relevant set of criteria / heuristics

• Compatible with other usability testing methodologies

• Enables further examination of potential issues

On the other hand, the disadvantages include…

Cont…
Heuristic evaluation:

• Requires knowledge and experience for effective implementation

• Experts are sometimes hard to find and can be expensive

• Aggregating the results of multiple experts is complex

• Risk of incorrectly weighting issues

Scenario and methods of evaluation

• Define application objectives - know the system
• Define user goals - know the user!
• Conduct inquiry methods
• Informal usability testing
• Develop and administer the survey
• Apply heuristics - global and local principles
10 Usability Heuristics in usability engineering
• Visibility of system status
Keep users informed about what is going on

• Match between system and the real world
The application should speak the users' language

• User control and freedom
Users often choose app functions by mistake and will need a clearly marked emergency exit
Cont…
• Consistency and standards
Follow platform conventions

• Error prevention
Present a confirmation option before users commit to the action

• Recognition rather than recall
Minimize the user's memory load

• Flexibility and efficiency of use
Think of both inexperienced and experienced users
Cont…
• Aesthetic and minimalist design
Avoid rarely needed information as much as possible

• Help and documentation
Always provide users with more information when they are looking for it.
How to conduct heuristic evaluation
• Difficult for a single individual to do

• The effectiveness of the method can be improved significantly by involving multiple evaluators

• Performed by having each individual evaluator inspect the interface alone, based on the given heuristics
Phases of heuristic evaluation

1. Pre-evaluation training
2. Evaluation
3. Severity rating (Priority)
4. Debriefing
Problems & Evaluators
• No evaluator finds everything
• Some find more than others
Predictive Studies

This material has been developed by Georgia Tech HCI faculty, and continues
to evolve. Contributors include Gregory Abowd, Al Badre, Jim Foley, Elizabeth
Mynatt, Jeff Pierce, Colin Potts, Chris Shaw, John Stasko, and Bruce Walker.
Permission is granted to use with acknowledgement for non-profit purposes.
Last revision: January 2007.

Agenda

• Evaluation
– Overview

• Predictive evaluation
– Heuristic evaluation
– Discount usability testing
– Cognitive walkthrough

Evaluation

• Gathering data about the usability of a design by a specified group of users, for a particular activity, within a specified environment


Goals

• 1. Assess extent of system’s functionality

• 2. Assess effect of interface on user

• 3. Identify specific problems with system

Forms

• Formative
– As project is forming. All through the lifecycle. Early, continuous, iterative.
– “Evaluating the design”

• Summative
– After a system has been finished. Make
judgments about final item.
– “Evaluating the implementation”


Approaches

• Experimental (lab studies, quantitative)
– Typically in a closed, lab setting
– Manipulate independent variables to see effect on dependent variables

• Naturalistic (field studies, qualitative)
– Observation occurs in "real life" setting
– Watch process over time

Tradeoffs

• Experimental
+ Replicable
+ More "objective"
- Expensive, requires real users & lab
- Realistic?

• Naturalistic
+ "Ecologically valid"
+ Cheap, quick
- Not reproducible, user-specific results
- Not quantitative (how much better?)


Evaluation Methods

• 1. Experimental/Observational Evaluation
– Typically with users
– Experiments (usability specifications)

• 2. Predictive Evaluation (without users)

Predictive Evaluation

• Basis:
– Observing users can be time-consuming and
expensive
– Try to predict usage rather than observing it
directly
– Conserve resources (quick & low cost)


Approach

• Expert reviews (often used)
– HCI experts (not real users) interact with system, try to find potential problems, and give prescriptive feedback

• Best if
– Haven't used earlier prototype
– Familiar with domain or task
– Understand user perspectives

Predictive Eval. Methods

• 1. Heuristic Evaluation
• 2. Discount usability testing
• 3. Cognitive Walkthrough


1. Heuristic Evaluation

• Developed by Jakob Nielsen (www.useit.com)

• Several expert usability evaluators assess the system based on simple and general heuristics (principles or rules of thumb)

Essay: http://www.useit.com/papers/guerrilla_hci.html

Procedure

• 1. Gather inputs
• 2. Evaluate system
• 3. Debriefing and collection
• 4. Severity rating


Gather Inputs

• Who are evaluators?
– Need to learn about domain, its practices

• Get the prototype to be studied
– May vary from mock-ups and storyboards to a working system

Evaluation Method

• Reviewers evaluate system based on high-level heuristics (i.e., usability principles):

• use simple and natural dialog
• speak user's language
• minimize memory load
• be consistent
• provide feedback
• provide clearly marked exits
• provide shortcuts
• provide good error messages
• prevent errors


Updated Heuristics

• Stresses

• visibility of system status
• aesthetic and minimalist design
• user control and freedom
• consistency and standards
• error prevention
• recognition rather than recall
• flexibility and efficiency of use
• recognition, diagnosis and recovery from errors
• help and documentation
• match between system and real world

Process

• Perform two or more passes through system, inspecting
– Flow from screen to screen
– Each screen

• Evaluate against heuristics

• Find "problems"
– Subjective (if you think it is, it is)
– Don't dwell on whether it is or isn't


Debriefing

• Organize all problems found by different reviewers
– At this point, decide what are and aren't problems
– Group, structure
– Document and record them

Severity Rating

• 0-4 rating scale
– 4 is the most severe

• Based on
– frequency
– impact
– persistence
– market impact


Advantages

• Cheap, good for small companies who can't afford more

• Getting someone practiced in the method is valuable

Application

• Nielsen found that about 5 evaluations found 75% of the problems

• Above that you get more, but at decreasing efficiency

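The diminishing returns Nielsen describes are usually modeled with the Nielsen-Landauer formula, in which i independent evaluators find a proportion 1 - (1 - L)^i of the problems, where L is the chance that one evaluator finds a given problem. A small sketch; the value of L below is an illustrative assumption chosen to roughly match the 75% figure, not a number from these slides:

```python
# Nielsen-Landauer model: fraction of usability problems found by i evaluators,
# each finding any given problem independently with probability lam.
def fraction_found(i: int, lam: float = 0.24) -> float:
    return 1 - (1 - lam) ** i

for i in range(1, 11):
    print(f"{i:2d} evaluators -> about {fraction_found(i):.0%} of problems found")
# With lam = 0.24, five evaluators find roughly 75%, and each additional
# evaluator contributes progressively less.
```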

Somewhat Controversial

• Very subjective assessment of problems
– Depends on expertise of reviewers

• Why are these the right heuristics?
– Others have been suggested

• How to determine what is a true usability problem
– Some recent papers suggest that many identified "problems" really aren't

2. Discount Usability Testing
• Hybrid of empirical usability testing and heuristic evaluation

• Have 2 or 3 think-aloud user sessions with paper or prototype-produced mock-ups


Discount Usability in Action

• Mockups are not supposed to be perfect!


• A variety of approaches for mockups:
– Must be quick to create; economical in use of
resources
– Sketches most common
– Paper has its limitations; tends to focus on the
visual elements
– Sometimes awkward to use in usability testing

3. Cognitive Walkthrough

• Assess learnability and usability through simulation of the way users explore and become familiar with an interactive system

• A usability "thought experiment"

• Like a code walkthrough in s/w engineering

• From Polson, Lewis, et al. at UC Boulder


CW Process

• Construct carefully designed tasks from system spec or screen mock-up

• Walk through (cognitive & operational) activities required to go from one screen to another

• Review actions needed for task, attempt to predict how users would behave and what problems they'll encounter

Requirements

• Description of users and their backgrounds
• Description of task user is to perform
• Complete list of the actions required to complete task
• Prototype or description of system


Assumptions

• User has a rough plan
• User explores the system, looking for actions that might contribute to performing the task
• User selects the action that seems best for the desired goal
• User interprets the response and assesses whether progress has been made toward completing the task

Methodology

• Step through action sequence
– Action 1
– Response A, B, ...
– Action 2
– Response A
– ...

• For each one, ask four questions and try to construct a believable story


CW Questions

• 1. Will users be trying to produce whatever effect the action has?
• 2. Will users be able to notice that the correct action is available?
• 3. Once found, will they know it's the right action for the desired effect?
• 4. Will users understand the feedback after the action?
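In practice it helps to record, for every action in the sequence, the answer to each of the four questions together with the supporting evidence or failure story. A minimal sketch of such a record; the task, action text, and field names are hypothetical, not from these slides:

```python
from dataclasses import dataclass, field

CW_QUESTIONS = (
    "Will users be trying to produce the effect this action has?",
    "Will users notice that the correct action is available?",
    "Once found, will they know it is the right action for the effect?",
    "Will users understand the feedback after the action?",
)

@dataclass
class ActionReview:
    action: str
    answers: dict = field(default_factory=dict)  # question -> (answer, evidence or failure story)

    def note(self, question: str, ok: bool, evidence: str) -> None:
        self.answers[question] = ("yes" if ok else "no", evidence)

# Hypothetical step from the VCR-programming example
step = ActionReview("Press the PROG button to enter timer-record mode")
step.note(CW_QUESTIONS[0], True,  "Recording a show is part of the user's original task")
step.note(CW_QUESTIONS[1], False, "Button is unlabeled and hidden behind a flap: failure story")

for question, (answer, why) in step.answers.items():
    print(f"[{answer}] {question}  evidence: {why}")
```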
Answering the Questions

• 1. Will user be trying to produce effect?


– Typical supporting Evidence
• It is part of their original task
• They have experience using the system
• The system tells them to do it
– No evidence?
• Construct a failure scenario
• Explain, back up opinion


Next Question

• 2. Will user notice action is available?
– Typical supporting evidence
• Experience
• Visible device, such as a button
• Perceivable representation of an action such as a menu item

Next Question

• 3. Will user know it's the right one for the effect?
– Typical supporting evidence
• Experience
• Interface provides a visual item (such as prompt) to connect action to result effect
• All other actions look wrong


Next Question

• 4. Will user understand the feedback?
– Typical supporting evidence
• Experience
• Recognize a connection between a system response and what user was trying to do

Example

• Program VCR
– List actions
– Ask questions


IRB

• Need to move ahead for project now


• Prepare human subjects submission by
next Tuesday
– Sample consent forms available
– Do best job with survey instruments
– Must be forwarded to me
– Can be amended later

Administratia

• Missing survey forms


Upcoming

• Requirements gathering & Understanding


users
– Contextual inquiry
– Ethnography

• Task Analysis & User requirements

Introduction to User Studies
Three Types of Studies
1. Traditional studies of users and usage
(often of libraries)
2. Studies that include Internet uses and
similar measures
3. Studies of Digital Libraries and their users
(“digital communities,” just starting)
Traditional Studies

Focus of the study:

• Studies of Users
– socio-economic factors
– ethnicity, national origin, language, etc.
– non-users

• Studies of Usage
– transaction data, both automated and traditional
Basic Problems
• Is there a theoretical basis to the analysis?
• Are the results reproducible?
• What can be changed based on the findings?
• How do we know changes will help?

Possible solution:
- Develop theoretical approach early
Typical Steps in Social Science Research

• State general nature of problem, question, relationship


• Review previous studies
• Create and state “stylized facts”
• Develop testable hypotheses
• Discover or develop data to be used to test hypotheses
• Perform appropriate (statistical) tests
• Describe results
• Draw conclusions
Traditional Bibliometric Model of Scholarly Productivity

[Diagram: socio-economic factors, institutional factors, and funding feed into research output]
Revised Bibliometric Model of Scholarly Productivity

[Diagram: the traditional model plus computer skills/literacy and computer use / Internet use feeding into research output]
Revised Bibliometric Model of Scholarly Productivity

[Same diagram as above]

Major issues:
- What sample/universe to use?
- How to measure inputs?
- How to measure outputs?
- Functional form? (linear?, additive?)
List of Selected Processes

Mail Pnews archie cnrmenu elm eudora finger ftp gladis gopher
imap irc kermit lynx mail mailtool mailx melvyl netscape newmail
nn nslookup pine popper rlogin talk telnet tin trn trn-artc
weather webster xrn rsh rcp whois
Percentage of Faculty Using Each Service

[Bar chart: percentage of faculty using each Internet service (e-mail, listserver, newsgroups, telnet, FTP, WWW, gopher, WAIS, supercomputer, sequence analysis, e-journal), with values ranging from 94% down to 3%]
% of Faculty Using Internet Services from Questionnaire and Log

[Bar chart comparing, for each service (e-mail, telnet, FTP, WWW, gopher, newsgroups), the percentage of faculty reporting use on the questionnaire vs. the percentage observed in the logs]
Our main hypothesis in this research is that scholars'
Internet-use data add explanatory power to models of
scholarly productivity. The formal null and alternate
hypotheses are:
H0: Internet use data does not add explanatory
power to the Traditional Publication Model.
H1: Internet use data adds explanatory power to
the Traditional Publication Model.
We used multiple regressions to estimate the four models
listed in Figure 5. Since we are interested in whether a set
of one or more Internet-use variables “improves” the
explanatory power of the traditional model, the appropriate
measure is the F-statistic from a comparison of the
restricted (traditional) model with the unrestricted (new)
model.
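
For reference, the F statistic for comparing a restricted model with an unrestricted one that adds q regressors takes the standard textbook form below (notation assumed here, not quoted from the study), with n cases and k regressors in the unrestricted model:

```latex
F = \frac{\left(R^2_{ur} - R^2_{r}\right)/q}{\left(1 - R^2_{ur}\right)/(n - k - 1)}
```

The computed value is compared against the critical value of the F distribution with q and n - k - 1 degrees of freedom; if it exceeds that critical value, H0 is rejected and the Internet-use variables are judged to add explanatory power.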
Dependent Variable: PUBAV

Analysis of Variance

Source    DF   Sum of Squares   Mean Square   F Value   Prob>F
Model      6        56.72845        9.45474     1.230    0.3148
Error     35       269.07034        7.68772
Total     41       325.79879

Root MSE    2.77267    R-square   0.1741
Dep Mean    3.78048    Adj R-sq   0.0325
C.V.       73.34194

Parameter Estimates

Variable   DF   Parameter Estimate   Standard Error   T for H0: Parameter=0   Prob > |T|
INTERCEP    1           22.344070       8.16887630                   2.735       0.0097
AGE         1           -0.615779       0.24909898                  -2.472       0.0184
PHD_AGE     1            0.632716       0.25079432                   2.523       0.0163
TIMTOPHD    1            0.460290       0.33084979                   1.391       0.1729
RL          1           -0.020384       0.02346329                  -0.869       0.3909
CAR_SCR     1           -0.597347       0.54715391                  -1.092       0.2824
PIM1        1            0.051832       0.28491924                   0.182       0.8567

Figure 4: The best traditional model of publication


Model Variables:

Raw Internet Data:

1 AGE PHD_AGE RL PIM1 COMP1 COMP2

2 AGE PHD_AGE RL PIM2 COMP1 COMP2

Transformed Internet Data:

3 AGE PHD_AGE RL PIM1 COMP1 COMP2

4 AGE PHD_AGE RL PIM2 COMP1 COMP2

Figure 5: Publication models with Internet principal components


Eqn.   Model Variables                          Internet Data   N    R2r      R2ur     F       CV (5%)
1      AGE, PHD_AGE, RL, PIM1, COMP1, COMP2     Raw             23   0.1765   0.4811   4.04    3.63
2      AGE, PHD_AGE, RL, PIM2, COMP1, COMP2     Raw             23   0.1808   0.4806   4.02    3.63
3      AGE, PHD_AGE, RL, PIM1, COMP1, COMP2     Transformed     23   0.219    0.5662   6.4     3.63
4      AGE, PHD_AGE, RL, PIM2, COMP1, COMP2     Transformed     23   0.2199   0.5663   12.77   3.63

KEY:
N      Number of cases
R2r    R2 of the restricted model
R2ur   R2 of the unrestricted model
F      The value of the F statistic
CV     The critical value of the F distribution

Figure 6: Tests of Hypotheses—Models and Results
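
The CV (5%) column is consistent with an F distribution having 2 numerator degrees of freedom (the two added principal components) and 16 denominator degrees of freedom (23 cases, six regressors, one intercept). A small sketch of that lookup and of the decision rule, under those assumed degrees of freedom:

```python
from scipy.stats import f

q, df_denom = 2, 23 - 6 - 1  # assumed: 2 added regressors; 23 cases, 6 regressors + intercept
critical_value = f.ppf(0.95, q, df_denom)
print(f"5% critical value of F({q}, {df_denom}) = {critical_value:.2f}")  # about 3.63, as in Figure 6

# Decision rule: reject H0 when the model's F statistic exceeds the critical value.
for eqn, f_stat in {1: 4.04, 2: 4.02, 3: 6.4, 4: 12.77}.items():
    print(f"Eqn {eqn}: F = {f_stat:5.2f}  reject H0: {f_stat > critical_value}")
```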


Variable        Eqn. 1    Eqn. 2    Eqn. 3    Eqn. 4
Constant        17.1      17.2      11.2      11.2
Age             -0.313    -0.317    -0.216    -0.215
Ph.D.-age        0.197     0.201     0.138     0.138
RL               0.00860   0.00777   0.00563   0.00565
PIM             -0.0794   -0.051     0.0515    0.0392
COMP1 (raw)     19.6 *    19.7 *
COMP2 (raw)      2.99 *    2.99 *
COMP1 (trns)                         1.84 *    1.84 *
COMP2 (trns)                         0.328     0.324
R2               0.481     0.481     0.566     0.566

NOTE: * = statistically significant; p<0.05.

Figure 7: Regression Results (Internet Use Equations)


The exact equation, as estimated in Equation 1, is:

PUBAV = 17.1 – 0.313 AGE + 0.197 PHD-AGE + 0.0860 RL – 0.0794 PIM1

+ 19.6 (Login-FTP principal component)

+ 2.99 (Kermit principal component).


Variable   Mean     Std. Dev.   Mean + 10%   Z-score   Scoring Coeff.   Std. Regression Coeff.   Effect on PUBAV
ALLL        20.24     94.81       22.27       0.0214      0.4939            19.65                     0.207
FTP        160.1     609.4       176.2        0.0263      0.4927            19.65                     0.254
KERMIT       9.643    31.99       10.61       0.0301      0.4499             2.992                    0.041

Figure 8: Effects of Internet-Use Variables
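
The derived columns in Figure 8 appear to follow from the raw ones: the z-score is the 10% increment divided by the standard deviation, and the effect on PUBAV is the z-score times the scoring coefficient times the standardized regression coefficient. A small sketch of that arithmetic (the interpretation of the columns is inferred from the numbers, which it reproduces to within rounding):

```python
# variable -> (mean, std dev, scoring coeff., standardized regression coeff.) from Figure 8
fig8 = {
    "ALLL":   (20.24,  94.81, 0.4939, 19.65),
    "FTP":    (160.1,  609.4, 0.4927, 19.65),
    "KERMIT": (9.643,  31.99, 0.4499, 2.992),
}

for name, (mean, sd, scoring, std_reg) in fig8.items():
    z = (0.10 * mean) / sd          # z-score of a 10% increase in use
    effect = z * scoring * std_reg  # resulting change in predicted PUBAV
    print(f"{name:6s} z-score = {z:.4f}  effect on PUBAV = {effect:.3f}")
```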

Variable   Mean    Coefficient   10% of mean   Effect on PUBAV   Elasticity of Substitution (*)
AGE        52.4      -0.313         5.24           -1.6385              -7.91
PHD-AGE    23.0       0.197         2.3             0.4523               2.18
RL         57.6       0.00860       5.76            0.0495               0.24
PIM1        4.17     -0.0794        0.417          -0.0331              -0.16

NOTE: * calculated relative to change in ALLL

Figure 9: Effects of Other Variables and Elasticities
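
Figure 9's derived columns appear to follow the same pattern: the effect on PUBAV is the regression coefficient times a 10% change in the mean, and the elasticity of substitution is that effect divided by the corresponding ALLL effect from Figure 8 (0.207). A small sketch, again inferring the column definitions from the numbers (it reproduces them to within rounding):

```python
ALLL_EFFECT = 0.207  # effect on PUBAV of a 10% change in ALLL, from Figure 8

# variable -> (mean, regression coefficient) from Figure 9
fig9 = {
    "AGE":     (52.4, -0.313),
    "PHD-AGE": (23.0,  0.197),
    "RL":      (57.6,  0.00860),
    "PIM1":    (4.17, -0.0794),
}

for name, (mean, coeff) in fig9.items():
    effect = coeff * 0.10 * mean       # effect on PUBAV of a 10% change in the variable
    elasticity = effect / ALLL_EFFECT  # relative to the ALLL change
    print(f"{name:8s} effect = {effect:+.4f}  elasticity = {elasticity:+.2f}")
```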


Digital Libraries
and Collaborative Knowledge
Construction
Based on a presentation by
Nancy Van House
September 15, 1999
Digital Libraries
• Support knowledge work
• Knowledge work occurs in material and
social matrices
• DLs mutually constituted with work,
practices, tools, artifacts, knowledge,
knowledge communities
Evaluating DLs
• Direct effects: how well they meet users’
needs & expectations
• Higher-order effects: Understanding how
DLs support and potentially change
– Work
– Its practices and artifacts
– People/communities who do the work
– Institutions that support it
Possible Theoretical Bases for
DL Evaluation
• Situated action
– work and learning
• Science studies
– knowledge creation
– knowledge communities
• Actor-Network Theory
– knowledge creation and communities
– stabilization of socio-technical systems
Characteristics of Knowledge
Work
• Situated in place and time
• Distributed across people, places, time
• Social - collectively decide what is known,
how we determine what is known, how we
do the work
The UC Berkeley Digital Library
Project
• Supporting environmental planning
• Innovative tools
• Large, diverse testbed
This Research

• To better understand distributed, collaborative cognitive work, the role of information and of information practices and artifacts, and the potential effects of digital information

• To investigate the practices of environmental planning, particularly information use and production
This Study
• Not about the UC Berkeley DL but about
potential of DLs in general, and current
issues around sharing of digital data.
• Method: Interviews
Knowledge Communities
• Two engaged in work related to environmental
planning
– Water planning
– Botanical
• Commonalities
– Multiple scientific (mostly) disciplines
– Various organizational settings
– Large datasets of observations and summaries and
analyses
• collected by many, over time, differing methods
– Potentially major economic impact
– Often highly political
Findings
• Contributions
• Use
• Cooperation in creating and operating DL
Findings: Barriers to
Contribution
• Misuse of data
– People who ‘don’t understand’
– "Using our data against us"
• ‘Productizing’
– Burden
– Mismatched incentives
– Making invisible work visible
Findings: Use
• User must be willing to trust both content and
functionality of DL
• Assessing trustability of contents
– Assessing sources
• Ability and training
• Interests
– DL’s filtering policies and processes
• DL’s functionality
– E.g., search tools, GIS overlays
Findings: Cooperation in
Creating and Operating DL
• Both a research project and active testbed
• Groups to interest: computer scientists and
applications area specialists
• Cutting edge vs reliable, incremental improvement
over past
• Differences in decision-making styles:
hierarchical, planned vs distributed, emergent
• Articulation work
Possible Theoretical Bases for
DL Evaluation
• Situated action
– work and learning
• Science studies
– knowledge creation
– knowledge communities
• Actor-Network Theory
– knowledge creation and communities
– stabilization of sociotechnical systems
Two Broad Approaches to Social
Theory
• Reductionist: economics, behavioral
psychology, structural-functional sociology
• Irreductionist: ethnomethodology
(Garfinkel), practice/structuration theory
(Giddens, Bourdieu, Lave)
Irreductionist Approaches
• Social order/institutions not pre-given but continually recreated through concrete, day-to-day activities
• Interest in the processes by which people construct meaning and re/produce social order
• Emphasis on practices and artifacts which shape understanding and carry it across time and space
Situated Action, Situated
Learning
• Knowledge not inert substance transferred
from teacher to student
• Knowledge and learning on-going
construction of meaning through action and
interaction
• Knowing not activity of single mind but complex social and material phenomenon, a "nexus of relations between the mind at work and the world in which it works" (Lave, 1988).
Communities of Practice: the Social
Matrix of Knowledge Work
• People learn & work within groups who share
understanding, practices, technology, artifacts, and
language, e.g., professions, workgroups, and
disciplines.
• A person typically belongs to multiple
communities of practice
• Important task of communities of practice:
– deciding what is known;
– the processes and principles by which
knowledge claims are evaluated;
– who is entitled to participate in the discussion;
– whom to believe.
Information Artifacts
• Include texts, images, maps, databases,
thesauri, classification systems…
• Not simply reflections of knowledge but
instrumental in creating it, coordinating
work across space and time
• Tightly bound up with practices and
communities
Science and Technology Studies
• Science Studies
– practices of scientific work
– how the participants come to an agreement about what
counts as fact or discovery
– what inferences are made from facts
– what is regarded as rational or proper conduct
– how credibility of claims is assessed
• Science and Technology Studies (STS)
– Increased emphasis on technology, especially large sociotechnical systems
• Science as prototype of rational knowing, hence applicable to knowledge work generally
Approaches to Research
• Sci & tech knowledge neither purely natural nor
social
• Attention to processes of sci knowledge
production
• Methodological orientation
– “Follow the actor”
• Attention to activities
• Not deciding ahead of time where to draw
boundaries
– Funding as part of scientific activity
– Inscriptions play key role in sci knowledge production
• E.g., labs as producing papers
• Carrying work across space and time
Actor-Network Theory
(Bruno Latour, Michel Callon, John Law)
• Not a theory but a “method of looking at the actors’ own
world building activities”
• Doesn’t take the stability of sociotechnical systems as
given, but asks how they are created and maintained.
• "Organization is an achievement, a process... Its components -- hierarchies, organizational arrangements, power relations, flows of information -- are uncertain consequences of the ordering of heterogeneous materials... ANT is thus a theory of agency, of knowledge, and of machines." (Law, in systems science; emphasis added)
Key Concepts of ANT
• Action-at-a-distance
• The actor-network
• Translation
• Black-boxing
• Intermediaries
Key Concepts of ANT
• Action-at-a-distance is the problem of coordinating and sustaining collective activities over space and time.
• The actor-network “is most simply defined as any
collection of human, non-human, and hybrid
human/non-human actors who jointly participate
in some organized (and identifiable) collective
activity in some fashion for some period of time.”
– Perhaps the most radical contribution of ANT is the
inclusion of the non-human in its “heterogeneous
networks.”
Key Concepts, cont.
• Translation is a key process by which actor-
networks are created and stabilized, however
temporarily. Actors’ disparate interests get
translated into a set of interests that coincide in the
network.
• Black-boxing is a process of closing questions and
debates.
• An intermediary is an actor (of any type) that
stands between two others and translates between
them in such a way that their interaction can be
more effectively articulated.
– Inscriptions, Latour's "immutable, combinable mobiles"
Approaches to Research
of Situated Action, Science
Studies, ANT
• Contention that sci & tech knowledge is neither purely natural nor purely social
• Methodological orientation:
– “Follow the actor” (ethnography)
– Inscriptions
Applied to our DL
• Applications area: environmental planning
• The DL as a large sociotechnical system
• Method
Implications: Joining Findings & Theory
I. DLs and Knowledge Production

• DLs support knowledge work, which is situated, distributed, and social
• The DL is 'evaluated' according to its relationship with knowledge work and its social and material matrix
Users’ and Providers’ Concerns – the DL
and Knowledge Production
• Fear of misuse of info, inability to evaluate
credibility = crossing boundaries of communities
and assemblages within which/for which created
• Practices of (1) work and (2) assessing/demonstrating credibility undermined by
– Ready crossing of sociotechnical boundaries
– Fluid recombination of inscriptions
– Opening of black boxes, e.g. document, methods of
work
– Closing of others, e.g. analytical and search tools
Implications: Joining Findings & Theory
II. The DL as a Sociotechnical System

• A successful DL enrolls a dynamic network of users, sources (human and nonhuman), builders, and technology
• Requires on-going enrollment and multiple translations
• Stabilization is not equilibrium but an on-going achievement
Implications for DL Design
• Articulate with existing assemblage of practices,
artifacts, participants
– Attention to existing and emergent practices of trust and
credibility
• Design specific to communities and tasks
– Tension of specificity and generality
• Include high-order user involvement
• Design to be emergent and dynamic – on-going
enrollment and co-constitution
Implications for DL Evaluation
• Not just how well it meets users’ needs but
how co-constituted with assemblages of
work
• Target to specific user communities and
tasks
• Consider not just service delivery but
stabilization of DL as a heterogeneous
network
Evaluation Methods
• Ethnographic
– “Follow the actor”
– Study activities in natural settings in which they
occur
– Focus on what people actually do, not simply
their accounts of behavior
• Varied
– Different methods appropriate for different
issues
Conclusion
• Understanding knowledge work not simple
• Beginning to have an analytical approach from
which to better understand KW and DLs
• Applies to other areas of electronic collection,
delivery, manipulation of information where trust
and credibility are important
– E.g., knowledge management
