
Introduction

Interface, the electronic journal of the Berglund Center for Internet Studies, Pacific
University Oregon, is written and produced for the broad audience of those interested in
the impact of the Internet in all its manifold forms. These include
not only obvious ones, such as the rapid shift to electronic banking
and commerce, but also subtler ones, such as the Internet’s influence
on popular film, fiction, and even poetry, and upon the English
language itself.

At Interface, we study all of these phenomena. The flexibility of an
electronic journal, unlike the rigidly determined form of traditional
journals, lets us range widely and serve many audiences. At the
Berglund Center we not only publish the results of our authors’
research and thought, but also have the great pleasure of encouraging
such productivity through grants and scholarships as well as our
payments for articles.

This volume represents the content of our electronic journal, Interface, for 2008-09.
Future publications now in process will include original work from Chinese and British
scholars, as well as members of the Pacific University faculty. For more information on
this project, please see our web pages.

This is the first of several planned hard-copy publications to be produced by the Center.
While our work emphasizes the impact of the Internet, we are also aware of the
importance of hard copy publications for such purposes as archival durability, assessment
and tenure processes, and simply because as much as we love the Internet, most of us
still love a good book. We hope that you will find this such a book.

Acknowledgments
We are now in our eighth full year of publishing Interface. That it was ever published at all
is due entirely to the generosity of James and Mary Berglund, our far-sighted sponsors of
the Berglund Center. It is to them that we dedicate this first publication. We are also
grateful to our founding Director, Joe Howell, and to his successor, Steven Boone, each of
whom set us on the proper and highly timely path of studying the Internet.

In addition, we are grateful to the efforts of our present Program Coordinator, Theresa
Floyd, who brought this volume from virtual to physical reality, and to our former Program
Coordinator, Tara Fechter, who did the long-term planning. As well, we are grateful to
graduating senior Zach Bingaman, who redesigned all of our
publication itself.

And of course, we are grateful not only to this year’s Berglund Center
student staff, guided by Student Projects’ Director Maria Walters, but
to the eight years of staff
members who preceded them, many of whom are now employed in the industry or
running their own highly successful businesses.

And above all, we are grateful to our writers and editors, who labor
away steadily with little immediate recompense and have made the
content enclosed here as useful, informative, and occasionally
entertaining as it is.

Jeffrey Barlow
Director, Berglund Center for Internet Studies
Pacific University Oregon
Table of Contents

Issue 1- January-February
Articles

Feature: The Influence of Collectivistic and Individualistic Value


Orientations on the Acceptance of Individually-Tailored Internet
Communications
View Online at:
http://bcis.pacificu.edu/journal/2008/01/davis.php
By Shawn Davis, Ph.D 1
Legal: Can You Get Out of a Contract You Signed?
View Online at: http://bcis.pacificu.edu/journal/2008/01/duboff.php
By Leonard D. DuBoff 13
Gaming: Defining the All-Important Difficulty Curve
View Online at: http://bcis.pacificu.edu/journal/2008/01/pruett.php
By Chris Pruett 15
Education: Internet Distance Education: Object-oriented Modeling in
Instructional Design
View Online at: http://bcis.pacificu.edu/journal/2008/01/boulet.php
By Charles Boulet 19

Book and Site Reviews

R.V. Kelly 2’s Massively Multiplayer Online Role-Playing Games


View Online at: http://bcis.pacificu.edu/journal/2008/01/kelly2.php
Review by IProfess, Elvin Druid of Zuljin, Azeroth 31
Bruce Abramson’s The Secret Circuit
View Online at:
http://bcis.pacificu.edu/journal/2008/01/abramson.php
Review by Jeffrey Barlow 37

Editorial

Yo Ho Ho: Video Piracy, a Reappraisal and a Modest Suggestion


View Online at: http://bcis.pacificu.edu/journal/2008/01/edit.php
By Jeffrey Barlow 40
Issue 2- March-April
Articles

Feature: The Rise of Manipulatives in Video Games


View Online at: http://bcis.pacificu.edu/journal/2008/02/pruett.php
By Chris Pruett 43
Education: The Intuitive Artistry of Action Learning in Organizations
View Online at:
http://bcis.pacificu.edu/journal/2008/02/cockburn.php
By Tom Cockburn 49
Legal: Do You Understand Representations, Warranties and Boilerplate
Clauses?
View Online at: http://bcis.pacificu.edu/journal/2008/02/duboff.php
By Leonard D. DuBoff 56
Technology: Something's in the Air: Adobe's New Software Platform
View Online at: http://bcis.pacificu.edu/journal/2008/02/geraci.php
By Mike Geraci 60
Technology: A Quick Introduction to Thin Clients
View Online at: http://bcis.pacificu.edu/journal/2008/02/elliot.php
By Ben Elliot 64

Book and Site Reviews

Nicholas Carr’s The Big Switch


View Online at: http://bcis.pacificu.edu/journal/2008/02/carr.php
Review by Jeffrey Barlow 67
Tarleton Gillespie’s Wired Shut: Copyright and the Shape of Digital
Culture Review
View Online at: http://bcis.pacificu.edu/journal/2008/02/gillespie.php
Review by Jeffrey Barlow 70

Editorial

Confucius says: Privacy is Dead; Get over it...


View Online at: http://bcis.pacificu.edu/journal/2008/02/edit.php
By Jeffrey Barlow 73

Issue 3- August
Articles

Feature: Virtual Death vs Reality


View Online at:
http://bcis.pacificu.edu/journal/2008/03/hernandez.php
By Jenn Hernandez 77
Health: With a Little Help from my Online Friends: The Health Benefits of
Internet Community Participation
View Online at: http://bcis.pacificu.edu/journal/2008/03/davis.php
By Shawn Davis, Ph.D 82
Education: Web 2.0 and the Demise of the Shelf Concept
View Online at: http://bcis.pacificu.edu/journal/2008/03/rhine.php
By Steve Rhine 88
Technology: Photoshop Express: Web Photo Sharing Gets Interesting
View Online at: http://bcis.pacificu.edu/journal/2008/03/geraci.php
By Mike Geraci 93

Book and Site Reviews

Patrick O'Keefe’s Managing Online Forums


View Online at: http://bcis.pacificu.edu/journal/2008/03/okeefe.php
Review by Jeffrey Barlow 97
Fareed Zakaria’s The Post-American World
View Online at: http://bcis.pacificu.edu/journal/2008/03/zakaria.php
Review by Jeffrey Barlow 101

Editorial

Safer Practices in Financial Transactions on the Internet Editorial Essay


View Online at: http://bcis.pacificu.edu/journal/2008/03/edit.php
By Jeffrey Barlow 105

Issue 4- September
Articles

Game Worlds: Toons and Terrorism


View Online at: http://bcis.pacificu.edu/journal/2008/04/iprofess.php
By IProfess 112
Integrated Media: Can I really watch what I want when I want it on my
TV?
View Online at: http://bcis.pacificu.edu/journal/2008/04/irons.php
By Lynda Irons 116
Health: Managing Health Online: Developing a Personal Health Record
View Online at: http://bcis.pacificu.edu/journal/2008/04/davis.php
By Shawn Davis 122
Book and Site Reviews

M. J. Rose’s The Venus Fix


View Online at: http://bcis.pacificu.edu/journal/2008/04/rose.php
Review by Jeffrey Barlow 128
John Burdett’s Bangkok Haunts
View Online at: http://bcis.pacificu.edu/journal/2008/04/burdett.php
Review by Jeffrey Barlow 133

Editorial


X? XX? Or XXX? The Internet and Pornography Editorial Essay


View Online at: http://bcis.pacificu.edu/journal/2008/04/edit.php
By Jeffrey Barlow 137

Issue 5- October
Articles

Feature: Interactive Engagement Learning Strategies in an Optometry


Classroom Setting
View Online at:
http://bcis.pacificu.edu/journal/2008/05/hallbutler.php
By James J. Butler and Stephen C. Hall 145
Legal: Questions & Answers (In Plain English)®
View Online at: http://bcis.pacificu.edu/journal/2008/05/duboff.php
By Leonard D. DuBoff 159
Research Report: Learning to Co-operate: A Case Study in Ethical
Banking
View Online at: http://bcis.pacificu.edu/journal/2008/05/jahdi.php
By Khosro S. Jahdi and Tom Cockburn 162

Book and Site Reviews

Lee Siegel’s Against the Machine: Being Human in the Age of the
Electronic Mob
View Online at: http://bcis.pacificu.edu/journal/2008/05/siegel.php
Review by Jeffrey Barlow 173
John Palfrey and Urs Gasser’s Born Digital: Understanding the First
Generation of Digital Natives
View Online at: http://bcis.pacificu.edu/journal/2008/05/palfrey.php
Review by Jeffrey Barlow 177
Editorial

Dining, Whining, and Opining: From the Googleplex to Beijing Editorial
View Online at: http://bcis.pacificu.edu/journal/2008/05/edit.php
By Jeffrey Barlow 181
Byteing Off More...
View Online at: http://bcis.pacificu.edu/journal/2008/05/byte.php
187

Issue 6- November
Articles

Education: Seeing Beyond The Grand Illusion


View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=1
By Steve Rhine, Ed. D 189
Politics: Campaign 2.0
View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=2
By Jenn Hernandez 195
Legal: The World is a Showcase for Creative People
View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=3
By Leonard D. DuBoff 200
Health: Would You Like Virtual Fries with That?: The New Frontier of
Online Food Marketing
View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=4
By Shawn Davis, Ph.D 207

Book and Site Reviews

Matt Richtel’s Hooked: A Thriller About Love and Other Addictions


View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=5
Review by Tara Fechter 212
David Crystal’s Txtng: The Gr8 Db8
View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=6
Review by Jeffrey Barlow 215

Editorial

The Madness of Crowds: Recent Criticisms of Web 2.0
View Online at:
http://bcis.pacificu.edu/journal/2008/06/article.php?id=7
By Jeffrey Barlow 219

Issue 7- December
Articles

Media: Movies, the Internet, and Piracy


View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=10
By Lynda Irons 225
Technology: Does Microblogging Have a Future in Your Organization?
View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=11
By Michael Geraci 228
Security: Why The Shoemaker's Children have Flip-Flops
View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=12
By Pat McGregor 235

Gaming: On the Declining Viability of Testosterone


View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=13
By Chris Pruett 239

Book and Site Reviews

Sam Han’s Navigating Technomedia: Caught in the Web


View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=14
Review by Jeffrey Barlow 243
Eric Butow and Kathleen Taylor’s How to Succeed in Business Using
LinkedIn
View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=15
Review by Jeffrey Barlow 247
Editorial


Co-Dependence: The Chinese and American Economies and the World


Economic Problem
View Online at:
http://bcis.pacificu.edu/journal/2008/07/article.php?id=16
By Jeffrey Barlow 249
The Influence of Collectivistic and
Individualistic Value Orientations on the
Acceptance of Individually-Tailored
Internet Communications
by Shawn Davis, Ph.D. <davissh@pacificu.edu>
about
Pacific University—School of Professional Psychology
Berglund Fellow, 2007-2008

Introduction
Tailored communications are individualized messages intended to
reach one specific person and based on characteristics unique to
that person. The rationale for
utilizing a tailored communication is founded in the idea that
information elaboration is likely when a message is seen as
personally relevant and this increased elaboration leads to an
enhanced likelihood of adoption and utilization of the tailored
message [1]. Research has found tailored communications to be
effective in promoting behavioral change within a variety of areas
such as smoking cessation [2], weight loss [3], educational
achievement [4], and the adoption of healthy eating behaviors [5].
While the evidence in support of tailored communication is
promising, research has demonstrated that the overall effectiveness
of the tailoring approach might be limited [6]. For example, in
some studies aimed at detailing the effectiveness of the tailoring
approach, only about half of the participants indicated that the
study materials that they received applied to them specifically [7]
[8]. It has been suggested that limitations found in such an
approach might be the result of not including potentially relevant
and influential characteristics such as contextual, cultural, or
personality factors [9] [10]. While work detailing personality
factors that might enhance the tailoring approach has begun [e.g.,
11], an examination of cultural influences on the acceptance of
tailored forms of communication remains lacking.
One cultural dimension whose identification and use might bolster
the effectiveness of tailored Internet communications is the
distinction between a collectivist and an individualist
value orientation. The identification of such a culturally-based
value orientation has proven useful in a variety of areas including
career counseling [12], Internet shopping [13], and psychotherapy
[e.g., 14]. Though diverse in their investigative focus, such studies
consistently highlight the influence of an individual's cultural
values and beliefs on the subsequent acceptance of information.

This project is aimed at detailing the role that collectivist and
individualistic value orientations play in the acceptance and
eventual utilization of information delivered via the Internet that is
tailored (i.e., individualized) to the needs and characteristics of an
individual recipient. Participants from both a predominantly
collectivist culture and a predominantly individualistic culture
provided demographic, dietary, and health belief information used
in the creation of a series of Internet-based tailored health
messages. It was expected that the individuals receiving the
tailored communication message would be more likely to indicate
a personal connection to the information contained in the message
and would be more likely to utilize this information in the future
regardless of cultural value orientation.

Method
Participants
Participants within the present study were drawn from two diverse
locations. In particular, one group of study participants included
individuals currently living in the United States, and the other
included individuals currently living in Japan. Participants in the
US sample—students currently attending a graduate program in
the Portland, Oregon area—were recruited via e-mail
communication wherein they were presented with a brief
description of the study and hyperlink access to begin
participation. Participants in the Japan sample were obtained via a
snowball sampling methodology. In this sampling method, three
individuals were directly contacted by the principal researcher and
presented with a translated version of the recruitment e-mail used
for the US sample. These individuals, in turn, distributed this
e-mail recruitment message to other individuals within the target
population (Japan) with a request for them to also distribute the
recruitment message to additional individuals and so on. Such a
sampling method is often utilized in situations where either target
individuals are difficult to access, or when other survey methods
are not available [15].
Measures
For the US sample, all communications and study materials were
presented in English. Communication messages and study
materials for the Japan sample were translated versions of the
materials used in the US sample. A two-step translation method
was employed with materials first being translated into Japanese
by a native Japanese speaker currently living in the US. These
materials were then back-translated to
English by an independent translator to verify the maintenance of
cross-linguistic meaning. Within the present study, the following
assessments were presented to each research participant:
Demographic Assessment. Participants provided standard
demographic information used in the development of the tailored
health communication message (for those individuals within the
tailoring condition), and for use in future e-mail contact. This
demographic information included the participant's first name,
their age, their gender, their e-mail address, and country of
residence. Also, a series of questions were presented aimed at
assessing the participant's current health beliefs. For example,
participants were asked to indicate such things as how important
they believe their daily diet is in maintaining good health, their
perceptions of how likely they are to improve their dietary habits
in the future, and their beliefs of who and what influences their
dietary habits.
Tailoring Questionnaire. Within the tailoring questionnaire,
participants were presented with a series of questions aimed at
determining their current dietary practices. These questions were
drawn from standard dietary recommendations presented by the US
Department of Health and Human Services and the Ministry of
Health in Japan. The focus of this questionnaire was on those areas
of recommendation shared by both cultures, and included
questions on such things as the regular assessment of one's weight
and physical activity; regular intake of fruits, vegetables, and dairy
products; and the keeping of regular hours for meals.
Measure of Vertical and Horizontal Collectivism and
Individualism. To determine the cultural value orientation of each
participant along the collectivism and individualism dimension, a
measure developed by Singelis et al. [16] was utilized. A particular
strength of this measure is that it categorizes individuals within
each orientation along the additional dimension of vertical and
horizontal characteristics within each. This approach is widely
viewed as a preferred approach over a simple distinction between
the collectivist and individualist orientations [17]. This method
results in the relative placement of an individual within four
distinguishable categories. The first, horizontal collectivism (HC)
is a cultural pattern stressing equality wherein an individual views
oneself as an interdependent part of others within the group. The
second, vertical collectivism (VC), is a pattern stressing service
and sacrifice for the overall group wherein an individual (while
interdependent with others in the group) recognizes differences in
status within the group. The third, horizontal individualism (HI), is
a pattern stressing autonomy between group members wherein an
individual is viewed as independent yet more or less equal in
status with others within the group. The final pattern, vertical
individualism (VI), stresses competition between members within
the group and the individual is viewed as independent and a level
of inequality is expected [18].
Self-Esteem Scale. The Rosenberg Self-Esteem Scale [19]—a
10-item self-report measure—was utilized as a measure of global
feelings of self-worth and self-acceptance.
Resiliency Scale. The Neill and Dias [20] modification of the
Wagnild and Young measure [21] of resiliency was utilized in the
present study. Resilient individuals are characteristically
self-confident and understand their personal strengths and abilities.
Perseverance in the face of change and challenges highlights the
resilient individual.
General Self-Efficacy Scale. The General Self-Efficacy Scale
(GSES) [22] is a 10-item measure designed to assess an
individual's self-beliefs in their abilities to cope with a wide range
of life challenges. Rather than being a measure of optimism, the
GSES refers to personal agency—the belief that one's actions are
responsible for successful outcomes.
Multidimensional Health Locus of Control. Each participant's
beliefs of personal control over his or her health and well-being
were determined using the Multidimensional Health Locus of
Control Scale (MHLC) [23]. The MHLC is an 18-item measure
that contains three sub-scales that determine the relative influence
of three sources of control over an individual's health: internal,
powerful others (external), and chance (external).

Procedure
Data collection involved participation in three distinct study
sessions detailed below. The procedures to follow were the same
for both the US and the Japan sample.
Session 1
Upon entering the secure study website, each participant was
welcomed to the study and presented with a copy of an informed
consent document informing them of their rights as a research
participant. Upon agreeing to continue participation in the study,
they were then presented with, and asked to complete, the
assessment measures detailed previously. Upon completion of
these measures, participants were informed that they would be
contacted via e-mail in one week to continue their study
participation.
Session 2
One week after completing participation in the first session,
participants received an e-mail message containing an Adobe
Acrobat (.pdf) file attachment containing one of two types of
dietary/health information communication. Specifically,
participants were presented with a health communication message
detailing recommended healthy eating behaviors (4 printed pages
in length) either tailored to their individual characteristics as
determined through assessment in Session 1 or with a generic
equivalent.
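Concretely, tailoring of this kind can be pictured as rule-based text generation from the Session 1 profile. The following is a minimal hypothetical sketch in Python; the field names and rules are invented for this illustration and are not taken from the study’s materials:

```python
# A hypothetical sketch of rule-based message tailoring from a Session 1
# profile. Field names and rules are invented for illustration; the study's
# actual tailoring logic is not described at this level of detail.
def tailor_message(profile: dict) -> str:
    lines = [f"Hello {profile['first_name']},"]
    if profile["fruit_servings_per_day"] < 2:
        lines.append("Adding one more serving of fruit each day would move "
                     "you toward the recommended intake.")
    else:
        lines.append("You already meet the recommended fruit intake.")
    if not profile["keeps_regular_meal_hours"]:
        lines.append("Keeping regular hours for meals is a recommendation "
                     "shared by both the US and Japanese guidelines.")
    return "\n".join(lines)

# Example profile (invented):
profile = {
    "first_name": "Aiko",
    "fruit_servings_per_day": 1,
    "keeps_regular_meal_hours": False,
}
print(tailor_message(profile))
```

A generic equivalent would simply omit the profile lookups and present the same recommendations to every recipient.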
Participants were asked to read this material and were directed to a
second secure study website within which they rated their
particular health communication along a number of dimensions,
including the applicability of the information to them personally
(i.e., goodness of fit), their emotional reaction to the material, the
utility of the information, their understanding of the information,
and the "trustworthiness" of the information presented.
Participants were then informed that they would be contacted
again in one week to complete their participation in the research
project. The assessment procedure detailed above applied to all
study participants regardless of communication condition (tailored
v. generic health communication).
Session 3
One week following completion of the second session, study
participants were sent an e-mail message containing a hyperlink to
a third secure study website within which they provided
assessment information on how many times they revisited the
healthy eating materials and how influential the materials were in
helping them to eat healthier. Additionally, participants again
completed the self-esteem, resiliency, self-efficacy, and health
locus of control measures used in Session 1 to determine if
significant changes resulted from having been exposed to a
tailored communication message. Upon completing these
assessments, participants were informed that their participation
was now complete and were thanked for their time.

Results
Sixty-two individuals in total participated in the study. Of these
participants, 24 (11 male, 13 female) were individuals living in
Japan and 38 (8 male, 30 female) comprised the US sample. The
average age of participants in the Japan sample was 24.13 years
(range: 19 to 29 years) and 30.26 years (range: 22 to 58 years) for
the US sample. Given the very unequal representation of males
and females within the samples, no further analyses will be based
on participant gender unless specifically stated.
Participant Characteristics by Communication Condition
To determine if significant differences existed between participants
within the tailored and generic communication conditions on any
relevant study variables prior to more detailed analyses, a series of
t-test analyses were conducted between condition groups on initial
reports (Session 1) of age, demographic information, learning
style, cultural value orientation, self-esteem, self-efficacy,
resiliency, and health locus of control. No significant preexisting
differences were identified between groups; therefore, subsequent
analyses continued without the need for corrective measures.
Overall Tailoring Effectiveness at Session 2
Significant differences between communication conditions were
found in responses to the healthy eating materials in Session 2,
with individuals in the tailoring condition indicating that the
materials were more engaging (t (49) = -4.44, p < .001), more
attractive (t (49) = -2.60, p = .012), more informative (t (49) =
-5.05, p < .001), more interesting (t (49) = -4.93, p < .001), more
likely to bring about a change in eating habits (t (49) = -5.43, p <
.001), more applicable to them personally (t (49) = -5.99, p <
.001), and more trustworthy (t (49) = -2.73, p = .009).
When examined by cultural value orientation, the responses to the
healthy eating materials were found to differ somewhat. Table 1
below details message characteristics assessed and presents
significant differences that were found between message
conditions (tailored v. generic), both overall and by cultural value
orientation.
Table 1. Evaluation of Message Characteristics

The information in the healthy eating materials was engaging.¹,²
The information in the healthy eating materials was new to me.³
The healthy eating materials were attractive.¹
The healthy eating materials were informative.¹,²,³
The healthy eating materials were clear.
The healthy eating materials were interesting.¹,²,³
The healthy eating materials encouraged me to change my eating habits.¹,²,⁴
The healthy eating materials applied to me personally.¹,²,³,⁴
The healthy eating materials applied to me culturally.²
The healthy eating materials were trustworthy.¹,⁴
I will likely re-read and use the healthy eating materials.
I am likely to change my dietary routine.

1 - Significant differences between message condition (tailored v.
generic) overall (N = 51, p < .05)
2 - Significant differences between message condition (tailored v.
generic) HC (N = 20, p < .05)
3 - Significant differences between message condition (tailored v.
generic) VC (N = 9, p < .05)
4 - Significant differences between message condition (tailored v.
generic) HI (N = 20, p < .05)
Note: There were too few participants in the VI cultural value
condition to assess differences between message condition (N = 2).
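The tailored-versus-generic comparisons above are independent-samples t-tests. A minimal sketch of the procedure, using synthetic ratings rather than the study’s data:

```python
import numpy as np
from scipy import stats

# Synthetic 1-7 "personal applicability" ratings for the two message
# conditions; values are invented for illustration, NOT the study's data.
rng = np.random.default_rng(0)
tailored = rng.normal(5.5, 1.0, 26).clip(1, 7)
generic = rng.normal(4.0, 1.0, 25).clip(1, 7)

# Independent-samples t-test, the statistic behind the tailored-vs-generic
# comparisons; a negative t means the generic group scored lower.
t, p = stats.ttest_ind(generic, tailored)
df = len(tailored) + len(generic) - 2  # 26 + 25 - 2 = 49, as in t (49)
print(f"t ({df}) = {t:.2f}, p = {p:.3f}")
```

Note that with 51 raters the degrees of freedom work out to 49, matching the t (49) values reported above.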
Session 3 Outcomes Overall and by Cultural Value Orientation
Significant differences between message conditions (tailored v.
generic) were also found in responses made during the third study
session. Individuals in the tailored message condition were
significantly more likely to report having re-read the healthy eating
materials more often between Sessions 2 and 3 (t (42) = -2.91, p =
.006). Additionally, individuals in the tailored condition were
significantly more likely to indicate that the healthy eating
materials were influential in helping them eat healthier (t (42) =
-3.91, p < .001).
When examining responses made in Session 3, differences
between message conditions were found to be inconsistent across
cultural value orientations. In particular, individuals holding a
horizontal collectivist (HC) orientation who received the tailored
communication message indicated a higher likelihood of
re-reading the information than those individuals who received the
generic message (t (14) = -3.79, p = .002). No significant
difference was found, however, between message conditions for
the HC individuals in its perceived influence on healthy eating.
Individuals holding a vertical collective (VC) orientation who
received the tailored communication message were not found to
differ from those receiving the generic message in either their
having re-read the information or in its perceived influence on
their dietary habits. Individuals holding a horizontal individualism
(HI) orientation who received the tailored communication,
however, did not re-read the information significantly more than
those in the generic condition, but they did indicate a greater
degree of influence of the healthy eating materials on their dietary
habits (t (16) = -3.05, p = .008).
Self-Esteem, Self-Efficacy, Resiliency, and Health Locus of Control
As previously discussed, there were no significant differences
between message conditions in initially-reported self-esteem,
self-efficacy, resiliency, or health locus of control beliefs. No
significant differences were found between message conditions in
a change in these study variables from Session 1 to Session 3 when
examining the entire sample. Likewise, no significant differences
were found between Session 1 and Session 3 for any of these
variables within any of the cultural value conditions.
A series of one-way ANOVA procedures was conducted to
determine if significant differences existed between cultural value
orientations in initial reports on the self-esteem, self-efficacy,
resiliency, and health locus of control measures. A significant
difference was found between cultural orientation groups in
self-esteem (F (3, 58) = 4.94, p = .004) and self-efficacy (F (3, 58)
= 4.59, p = .006). Post hoc t-test analyses (using a Bonferroni
alpha correction to control for multiple comparisons) revealed the
differences to be between the horizontal collectivist and vertical
collectivist orientations for both. In particular, individuals in the
horizontal collectivist condition indicated higher levels of
self-esteem (M = 24.88) and self-efficacy (M = 30.04) than
individuals expressing a vertical collectivist orientation (Self-
Esteem M = 21.70, Self-Efficacy M = 25.20).
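The procedure described above, a one-way ANOVA followed by Bonferroni-corrected pairwise t-tests, can be sketched as follows; the group means and sizes are synthetic stand-ins, not the study’s data:

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Synthetic self-esteem scores by cultural value orientation; means and
# group sizes are invented for illustration, NOT the study's data.
rng = np.random.default_rng(1)
groups = {
    "HC": rng.normal(25, 3, 20),
    "VC": rng.normal(21, 3, 10),
    "HI": rng.normal(24, 3, 20),
    "VI": rng.normal(23, 3, 3),
}

# One-way ANOVA across the four orientations.
F, p = stats.f_oneway(*groups.values())
print(f"F = {F:.2f}, p = {p:.3f}")

# Post hoc pairwise t-tests with a Bonferroni alpha correction: the
# per-comparison alpha is .05 divided by the number of comparisons.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # 6 comparisons -> alpha ~= .0083
for a, b in pairs:
    t, p_pair = stats.ttest_ind(groups[a], groups[b])
    flag = "significant" if p_pair < alpha else "n.s."
    print(f"{a} vs {b}: t = {t:.2f}, p = {p_pair:.4f} ({flag})")
```

The Bonferroni correction divides the alpha level among the comparisons so that the family-wise error rate stays at .05.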
Differences between Study Locations
The focus of the present study is not on differences between Japan
and the United States in particular, but rather on the effectiveness
of the tailoring approach and the potential influence of cultural
value orientation. It is interesting to note, however, that a number
of significant differences were found in responses to study
assessments when study site is considered. In particular,
individuals in the US sample were significantly more likely to
endorse an internal health locus of control than were individuals
within the Japan sample (t (60) = -8.64, p < .001). Likewise,
individuals in the Japan sample were more likely to endorse a
powerful others (external) health locus of control (t (60) = 7.60,
p < .001). There was no difference between study sites, however, in
the endorsement of a health locus of control that emphasizes the
role of chance.
Significant differences between study sites were also found, with
individuals in the US sample indicating higher levels of reported
self-esteem (t (60) = -7.57, p < .001), self-efficacy (t (60) = -7.82,
p < .001), and resiliency (t (60) = -6.78, p < .001).

Discussion
Overall, the present study provides additional support for the
growing body of literature on the effectiveness of information
tailoring. Across the entire sample, individuals who received a
health communication message that was tailored to their current
dietary behaviors and health beliefs were more likely to feel an
initial connection to the health communication and were more
likely to utilize the information contained within the message than
were individuals who received a generic equivalent. These
findings echo those of previous studies, which found that individuals
receiving a tailored communication report more positive thoughts
regarding the communication [24], a more personal connection to the
communication [25], and a resulting change in behavior [26]. In the
present study, however, a
number of differences were found between individuals holding
different cultural value orientations. While individuals within all
cultural value orientations who received a tailored health
communication message indicated a greater personal connection to
the message than did those receiving a generic equivalent, the
level of engagement, attractiveness, and novelty of the information
presented within the tailored communication message did differ
between orientations.
These differences are further reflected in differences between
cultural value orientation groups in the continued reference to the
tailored materials and in the perceived utility of the tailored health
communication message. Individuals holding a horizontal
collectivist orientation were more likely to re-read the tailored
communication, but did not find it to be useful in bringing about a
change in dietary behavior. Individuals holding a horizontal
individualistic orientation, however, were more likely to view the
tailored communication message as useful in bringing about a
change in eating behavior. Individuals holding a vertical
collectivist orientation who received the tailored communication
were not more likely to re-read the information and did not find the
information any more useful than did those receiving the generic
equivalent. Thus, while an initial connection to the material was
indicated by all orientation groups, only those individuals holding
a horizontal individualist value orientation found eventual utility in
the tailored health communication. Given the disproportionate size
of the cultural orientation groups (and the fact that there were only
3 individuals representing the vertical individualism orientation),
this finding should be taken with caution and be further examined
in future research.
No significant changes in self-esteem, self-efficacy, resiliency, and
health locus of control beliefs were found between the initial
assessment of each and their assessment during Session 3 in either
the tailored or generic communication condition. Differences,
however, were found between individuals in regard to self-esteem
and self-efficacy with individuals holding a horizontal collectivist
orientation indicating a higher level of both when compared to
individuals holding a vertical collectivist orientation. Also, while
not a focus of the present study, it is interesting to note that
individuals in the Japan sample indicated significantly lower levels
of internal health locus of control, self-esteem, self-efficacy, and
resiliency than did the US sample. These findings, however,
support those of similar cross-cultural research and echo
previous study findings [27][28][29]. That is not to say that
Japanese individuals truly believe themselves lower on such
attributes. It is likely, however, that this particular pattern of
responses reflects a self-effacement norm prevalent in their society [30].

Limitations of the Current Study


Though the findings within the current study are promising,
significant limitations exist. Firstly, there was highly unequal
representation within the four cultural value orientation conditions.
This inequality limited the resulting analyses and conclusions that
could be drawn. The use of two culturally-diverse study locations
was an attempt to maximize the likelihood of obtaining samples
representing a variety of orientations. The range of cultural value
orientation within the US sample, however, was particularly
restricted and was not what one would characteristically expect in
that it was dominated by individuals holding either a horizontal
collectivist or horizontal individualist orientation. This is contrary
to the expected vertical individualist orientation previously found
within US samples [31]. A more diverse sample of cultural value
orientations, obtained through a more expansive sampling method, is called for.
Additionally, while no changes in self-efficacy, self-esteem,
resiliency, or health locus of control were found from Session 1 to
Session 3, it is premature to assume that information tailoring has
no influence on these variables. The time span between the first
and third sessions was likely too short and the influence of a single
communication message too minimal to bring about such changes.
Examination of the influence of more extensive tailored
communication messages presented over a longer period of time
on these variables is called for.

Conclusions
The present study represents an initial step toward a more
complete understanding of the role that cultural value orientation
plays in the acceptance of tailored Internet communication
messages. While the findings of the study provide further support
for the tailored communication methodology, the differences found
between cultural value orientations are preliminary and should be
taken with caution. Further research that addresses the limitations
mentioned previously is called for to better detail the differences
found within this study. Regardless, the
information tailoring methodology remains a beneficial approach
to the effective distribution of a wide variety of messages.

Endnotes
[1] Kreuter, M. W., Strecher, V. J., & Glassman, B. (1999). One
size does not fit all: The case for tailoring print materials. Annals
of Behavioral Medicine, 21(4), 276-283.
[2] Strecher, V. J., Kreuter, M. W., Den Boer, D. J., Korbin, S.,
Hospers, H., & Skinner, C. S. (1994). The effects of computer-
tailored smoking cessation messages in family practice settings.
Journal of Family Practice, 39(3), 262-270.
[3] Kreuter, M. W., Bull, F. C., Clark, E. M., & Oswald, D. L.
(1999). Understanding how people process health information: A
comparison of tailored and untailored weight loss materials.
Health Psychology, 18, 487-494.
[4] Davis, S. E. (2008, in preparation). An educational application
of internet-based tailored communication.
[5] Davis, S. E., Martinez, T., & Kurian, A. (2005, April). Tailored
communication and eating behaviors: The influence of learning
style. Poster presented at the 85th Annual Meeting of the Western
Psychological Association, Portland, OR.
[6] Kreuter, M. W., Bull, F. C., Clark, E. M., & Oswald, D. L.
(1999).
[7] Brug, J., Steenhaus, I., Van Assema, P., & de Vries, H. (1996).
The impact of computer-tailored nutrition intervention. Preventive
Medicine, 25, 236-242.
[8] Brug, J., Glanz, K., Van Assema, P., Kok, G., & VanBreukelen,
G. J. P. (1998). The impact of computer-tailored feedback and
iterative feedback on fat, fruit, and vegetable consumption. Health
Education and Behavior, 25(4), 517-531.
[9] Kreuter, M. W., Oswald, D. L., Bull, F. C., & Clark, E. M.
(2000). Are tailored health education materials always more
effective than non-tailored materials? Health Education Research,
15, 305-316.
[10] Kreuter, M. W., & Holt, C. L. (2001). How do people process
health information? Applications in an age of individualized
communication. Current Directions in Psychological Science, 10,
206-209.
[11] Davis, S. E., Martinez, T., & Kurian, A. (2005, April).
[12] Lowe, S. M. (2005). Integrating collectivist values into career
counseling with Asian Americans: A test of cultural
responsiveness. Journal of Multicultural Counseling and
Development, 33, 135-145.
[13] Lim, K. H., Leung, K., Sia, C. L., & Lee, M. K. (2004). Is
e-commerce boundary-less? Effects of individualism-collectivism
and uncertainty avoidance on Internet shopping. Journal of
International Business Studies, 35, 545-559.
[14] Kagawa-Singer, M., & Chung, R. (1994). A paradigm for
culturally based care for minority populations. Journal of
Community Psychology, 22(2), 192-208.
[15] http://www.socialresearchmethods.net/kb/sampnon.php
[16] Singelis, T. M., Triandis, H. C., Bhawuk, D. P. S., & Gelfand,
M. J. (1995). Horizontal and vertical dimensions of individualism
and collectivism: A theoretical and measurement refinement.
Cross-Cultural Research, 29(3), 240-275.
[17] Triandis, H. C., & Gelfand, M. J. (1998). Converging
measurement of horizontal and vertical individualism and
collectivism. Journal of Personality and Social Psychology, 74(1),
118-128.
[18] Singelis, T. M., Triandis, H. C., Bhawuk, D. P. S., & Gelfand,
M. J. (1995).
[19] Rosenberg, M. (1965). Society and the adolescent self-image.
Princeton University Press: Princeton, NJ.
[20] Neill, J. T., & Dias, K. L. (2001). Adventure education and
resilience: the double edged sword. Journal of Adventure
Education and Outdoor Learning, 1(2), 35-42.
[21] Wagnild, G. M., & Young, H. M. (1993). Development and
psychometric evaluation of the Resilience Scale. Journal of
Nursing Measurement, 1(2), 165-178.
[22] Schwarzer, R., & Jerusalem, M. (1993). The General
Self-Efficacy Scale. [Online] available: http://userpage.fu-berlin.de
/~health/engscal.htm.
[23] Wallston, K. A., Wallston, B. S., & DeVellis, R. (1978).
Development of the Multidimensional Health Locus of Control
(MHLC) scales. Health Education Monographs, 6, 160-170.
[24] Kreuter, M. W., Bull, F. C., Clark, E. M., & Oswald, D. L.
(1999).
[25] Brug, J., Steenhaus, I., Van Assema, P., & de Vries, H. (1996).
[26] Davis, S. E., Martinez, T., & Kurian, A. (2005, April).
[27] Muramoto, Y., & Yamaguchi, S. (1999, August). An
alternative route to self-enhancement among Japanese. Paper
presented at the Third Conference of the Asian Association of
Social Psychology, Taipei, Taiwan.
[28] Heine, S. J., Lehman, D. R., Markus, H. R., & Kitayama, S.
(1999). Is there a universal need for positive self-regard?
Psychological Review, 106, 766-794.
[29] Heine, S. J., Takata, T., & Lehman, D. R. (2000). Beyond
self-presentation: Evidence for self-criticism among Japanese.
Personality and Social Psychology Bulletin, 26, 71-78.
[30] Best, D. L., & Williams, J. E. (2001). Gender and culture. In
D. Matsumoto (Ed.), The Handbook of Culture and Psychology.
Oxford University Press: New York, NY.
[31] Singelis, T. M., Triandis, H. C., Bhawuk, D. P. S., & Gelfand,
M. J. (1995).

Can You Get Out of a Contract You
Signed?
by Leonard D. DuBoff © 2008
One of the most frequently misunderstood provisions of
contract law is the one that deals with the so-called
three-day cooling-off period. It is very common for a client or
prospective client to contact our office and ask us to confirm the
fact that the caller has the right to rescind a valid contract because
of the cooling-off period. Unfortunately, the answer is generally
that the three-day cooling-off period does not apply to the caller's
situation.
The cooling-off rule allows a buyer three days to cancel a sale only
if the amount of the sale exceeds $25 and the sale took place at (1)
the buyer's home, workplace or dormitory, or (2) a place rented by
a seller on a temporary basis, such as a motel room, fair grounds,
restaurant or the like. In addition, the rule applies only if the goods
or services are intended primarily for personal, family or
household purposes. There are some other requirements and
exceptions as well.
When the cooling-off period applies, the law requires the seller to
advise the prospective purchaser in writing that the right of
rescission is available. The document must comply with the rules
promulgated by the Federal Trade Commission (FTC) and, among
other things, must be in the same language in which the discussion
between the parties was held. It must provide the date of sale, as
well as the date by which the rescission must be completed. If the
seller fails to provide this document, then the three-day rescission
period does not begin to run until the buyer has been given the
appropriate notice. If the seller never provides the notice, and if
the rule applies, the buyer may rescind the transaction at any time.
All other contracts are binding and enforceable unless there is a
defense that undermines the validity of the contract. These
defenses would include, among other things, the fact that no
agreement was ever made or that the purchaser lacked the legal
capacity to enter into the transaction. Situations in which
purchasers lack such capacity include obvious mental incapacity
and obvious intoxication. Contracts by individuals who are under
the age of 18 may be voidable in some situations, though these
situations vary from state to state, and there is no universal right
for a minor to rescind an otherwise valid contract.
Another defense is that the contract was obtained through
misrepresentation or fraud. An egregious example of fraud would be
where a seller represents that a used vehicle has never been in an
accident and it is later learned that, in fact, it had been in
several. State laws in many jurisdictions also provide
the right of rescission where automobile odometers have been
tampered with.
The fact that a contract is not written does not necessarily mean
that it is unenforceable, though the law provides that some
contracts for future performance must be in writing. Nevertheless,
once the contract has been performed, it cannot be undone unless
one of the rights previously discussed is available.
Individuals who enter into transactions should assume that those
transactions will be valid, binding and enforceable. Buyers should
be careful to avoid transactions where sellers orally represent
qualities or attributes of products which sound too good to be true,
when the salesperson refuses to put the representations in writing,
or when the salesperson is extremely aggressive. A salesperson
who states "trust me" invites distrust unless there is some written
evidence of the qualities represented that becomes part of the sales
agreement.
If you want the right to change your mind and the three-day rule
does not apply, then you must insist on having the seller expressly
provide you with that right in the written agreement itself.
By being prudent and evaluating the merits of the transaction
before completing it or by obtaining the express right to rescind
the transaction within a set period of time, you may avoid having
buyer's remorse. Remember that the vast majority of transactions
will not provide buyers with the very limited cooling-off period
prescribed by the FTC. You can obtain more information on the
"cooling-off rule" at www.ftc.gov.

Defining the All-Important Difficulty Curve
by Chris Pruett <c_pruett@efn.org>
Many video game designers believe that the first few
minutes of a game are the most important. In the first
moments of play, the player must be so enthralled that he
will be willing to commit to the time required to complete the
game, which can be anywhere from ten to fifty hours. The player's
first impression of a game is likely to shape their opinion of the
title as a whole, especially if that opinion is negative. If a game
fails to hold the player's attention during its first few minutes of
play, many gamers will put the game down and never come back
to it.
Game design teams struggle with what exactly to show the player
during that precious time. Some games opt for non-interactive
videos that set up the game's story line or introduce the player to
the main characters. Others provide a training area where the
player can learn the mechanics of the game by following a tutorial.
Some games, such as Silent Hill 3 and Eternal Darkness, throw the
player into an extremely short action sequence in order to give
them a taste for game mechanics that, for plot progression reasons,
will not appear again until later in the game. The way that a game
begins will set the player's expectations for the rest of the
experience, and an introduction that is too easy or too hard can
cause gamers to quit before they even really begin to play; the
seminal 3D driving game Driver, for example, required players to
pass an exceedingly difficult test before they were able to play
normally, which caused a lot of players to give up on the game
almost immediately.
The problem game designers face is the definition of the difficulty
curve, a term used to describe the progression of challenge from
the beginning of a game to the end. While the introductory
moments are of particular importance, much of the work in
creating a successful game rests in the management of difficulty
over the course of the entire game experience. If the play proves
too difficult, the player will be frustrated and may end up with a
bad impression of the game. On the other hand, if the game is too
easy then the player may become bored and quit before the game
design has had a chance to really show off its potential. As
the player puts more time into the game, he will become
increasingly adept at the challenge it offers, so the game must
therefore increase in difficulty over time in order to stay
interesting. The speed at which the difficulty increases should
ideally define a curve, gradual at the beginning but increasing at a
steady rate until the end of the game. But actually making a game
conform to such a curve is quite hard because no two gamers play
the same way. Striking a middle ground that is challenging and fun
for a wide audience is an incredibly difficult task.
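The shape described above, gentle at first and steepening toward the finale, can be sketched as a simple power curve. The exponent and ceiling below are illustrative tuning values, not figures from any shipped game:

```python
def difficulty(progress, exponent=1.6, max_difficulty=10.0):
    """Map game progress (0.0 = start, 1.0 = end) onto a challenge level.

    An exponent above 1.0 keeps the early game gentle while steepening
    the climb toward the finale; both parameters are tunable guesses.
    """
    if not 0.0 <= progress <= 1.0:
        raise ValueError("progress must lie in [0, 1]")
    return max_difficulty * progress ** exponent

# The opening stays easy while the late game ramps up sharply.
samples = [round(difficulty(p), 2) for p in (0.1, 0.5, 0.9, 1.0)]
```

In practice no real game reduces its pacing to one formula, but a curve like this is a useful baseline against which designers can spot unintentional difficulty spikes.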
There are a lot of approaches to staging difficulty progression. The
Role Playing Game genre (so named because its mechanics are
based on pen and paper games like Advanced Dungeons and
Dragons), for example, often allows the player to control the
difficulty of the game themselves by managing statistics about
their characters. Games like Final Fantasy will typically require
that the player engage in combat to build up the strength of their
characters, and will provide an endless supply of monsters for the
player to defeat. This mechanic puts the pace of the game in the
player's hands: if a particular section is too difficult, he can simply
return to the previous section and battle more monsters to increase
the strength of his characters. Combat in most RPGs is a function
of random rolls of the dice rather than the dexterity of the player,
so the difficulty curve of these games is consequently defined by
how much time the player chooses to spend "grinding" his
characters into powerful heroes. The down side to this method is
that the player himself cannot improve at this kind of game; it is
the statistics related to his character that improve, not the player
himself. So there's no way for veteran players to short-cut the
process and jump ahead; everybody is forced to spend some time
developing their characters before they can continue with the
game.
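That stats-over-dexterity loop can be sketched as follows. The ten-points-per-level odds shift and the five-battles-per-level grind rate are hypothetical numbers, chosen only to show how time spent fighting, rather than reflexes, lowers the effective challenge:

```python
import random

def attack_succeeds(character_level, monster_level, rng=random):
    """Resolve one attack with a dice roll weighted by character stats.

    As in most RPGs, the outcome depends on statistics rather than
    reflexes: each level of advantage shifts the odds by ten points.
    """
    chance = 0.5 + 0.1 * (character_level - monster_level)
    chance = max(0.05, min(0.95, chance))  # always some risk, some hope
    return rng.random() < chance

def grind(character_level, fights):
    """Battling endlessly respawning monsters slowly raises the stats."""
    return character_level + fights // 5  # one level per five battles

# A stuck player can return to an earlier area, grind, and retry.
stronger = grind(character_level=3, fights=20)  # reaches level 7
```

Because the player sets `fights` himself, the effective difficulty curve is in his hands, which is exactly the design trade-off the genre makes.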
The Crash Bandicoot series, a resident of the Platformer genre,
employs a system called dynamic difficulty adjustment in which
the game changes subtly to match the prowess of the player. If
the player fails too many times in a specific section, the Crash
games will temporarily increase the player's health or modify the
section to decrease the level of challenge. Naughty Dog, the
developer of the Crash series, stated that their goal with this
system is for the user never to see the game over screen; in order
to be enjoyed by the widest range of players possible, they have
built in ways for the game to help the player complete difficult
sections.
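A minimal sketch of such a dynamic difficulty adjuster might look like this. The failure threshold and health bonus are invented for illustration; Naughty Dog has not published the Crash games' actual tuning:

```python
class DynamicDifficulty:
    """Quietly ease a section after repeated failures, in the spirit of
    the adjustment the Crash Bandicoot games use.

    All numbers here are hypothetical tuning values, not Naughty Dog's.
    """

    def __init__(self, fail_threshold=3, health_bonus=1):
        self.fail_threshold = fail_threshold
        self.health_bonus = health_bonus
        self.failures = 0

    def record_failure(self):
        self.failures += 1

    def record_success(self):
        self.failures = 0  # the aid is temporary; reset once the player clears it

    def player_health(self, base_health=3):
        """Grant extra health once the player has failed often enough."""
        if self.failures >= self.fail_threshold:
            return base_health + self.health_bonus
        return base_health

adjuster = DynamicDifficulty()
for _ in range(3):
    adjuster.record_failure()
boosted = adjuster.player_health()  # 3 base + 1 bonus = 4
```

The key design point survives even in this toy version: the adjustment is invisible to the player, which is what keeps it from wounding his pride.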
The Brawler genre takes the polar opposite approach in its design
by embracing crushing difficulty as a core theme. Brawlers are
games in which the player is pitted against wave after wave of
enemy characters, each of which must be individually dispatched
using a variety of flashy combat moves. Brawlers such as Devil
May Cry and Ninja Gaiden focus on player dexterity and hand-eye
coordination; though these games also allow limited capacity for
the player to "power up" his character, most of the challenge rests
in the player himself learning how to deftly control the game.
These games revel in their difficulty, and aim to create a feeling of
accomplishment in the player when he is finally able to overcome
a significant challenge. Devil May Cry starts out hard (the first
boss encounter is famous for its surprising difficulty) and gets
increasingly difficult over time; it has a well-defined difficulty
curve that happens to begin at a level of difficulty that many other
games never reach.
One of the interesting things about difficulty curve design is that
the mentality of the player has a significant impact on how quickly
the difficulty may be allowed to rise. Players who enjoy RPGs like
Final Fantasy and Dragon Quest are often drawn to the relaxed
pace and lack of mechanical challenges offered by the genre. On
the other hand, players who enjoy Devil May Cry and Ninja
Gaiden often see the game as a test of their gaming prowess, a
testament to their ability to see a task through to fruition despite
significant challenge.
This mentality is evident in the way game developers label their
various difficulty levels: rather than simply giving the player a
choice between "easy," "medium," or "hard," some games go out
of their way to play up the significance of the challenge that they
provide. The popular first-person shooting game Halo, for
example, provides difficulty levels named "Heroic" and "Legendary"
in addition to the more standard "Easy" and "Normal" modes. The
naming of these modes suggests that a player must be better than
average to meet the challenge that they provide. Viewtiful Joe,
another brawler, ingeniously labels its two difficulty levels "Kids"
and "Adults," thereby informing younger players which mode is
correct for them while simultaneously challenging the ego of older
gamers. Devil May Cry does not provide a difficulty setting up
front, but if the player dies too many times in the same spot the
game will ask the player if he would like to switch to easy mode (a
change that, once accepted, can never be reversed). This is a nice
feature for casual gamers who are not interested in intense
difficulty, but many Brawler aficionados find the suggestion that
they switch to an easier mode offensive to their pride. Indeed, even
the dynamic difficulty adjustment employed by the Crash
Bandicoot series is subtle and non-obvious; many players would
react negatively if they knew that the game was making itself
easier every time they failed.
Defining the way that players experience challenge, from the first
few moments of play to the ending credits, is one of the core
problems that game designers face. All of the games that are
famous for their high quality, from Halo to Super Mario Bros.,
feature expertly designed difficulty curves. Though there are many
strategies for ensuring that a game will increase in difficulty at a
rate appropriate to a wide range of players, few games are able to
maintain perfect difficulty balance from start to finish. Many
otherwise well-designed games are marred by unintentional spikes
in difficulty, which can lead to player frustration and ultimately to
a negative impression of the game overall. However, careful game
designers can manipulate the player into putting up with
frustration by convincing him that completion of the game is a task
worthy of a badge of honor, or by allowing the player to proceed at
his own pace. A successful game designer must not only create
challenge systems that can increase in difficulty, he must also
understand the psychology of his audience. And even if a designer
pours his heart and soul into the design of a game, many people
will never experience the fruits of his labor if the first few minutes
of play fail to grab them and hold their interest. Perhaps this is
why the truly terrific games are so few and far between; the
formula for success is so difficult that only a small percentage of
attempts can actually pull it off.

Internet Distance Education: Object-
oriented Modeling in Instructional Design
by Charles Boulet <cboulet@verizon.net>

Preface
This discussion centers on Distance Education (DE) in general and
introduces Object-oriented Distance Education modeling (OODE),
and remains largely neutral with respect to development or
deployment platforms. Drawing examples from the grade-school
classroom to large industrial and government applications, the goal
is to establish general guidelines for approaching instructional
design for online delivery models based on a synergy of
well-established technical and pedagogical constructs. There are
many ways through which DE can be delivered and managed, but
the final technological implementation is a secondary concern and
will only be considered tangentially.
For purposes of this discussion, Distance Education is defined as
Internet-mediated instruction where the learner is not physically
in the presence of the instructor and assumes varied degrees of
synchrony in interactions between the learner and the provider.
The discussion recognizes that any solution for Distance Education
can equally well be implemented on a local level to enhance, or run
in parallel with, more traditional models of delivery. Furthermore, given
the wide array of instructional purposes and delivery limitations, it
is impossible to do more than simply introduce and illustrate
concepts. Please consult the references for further detail.

Introduction
In the previous article, it was noted that educators had learned
some important lessons in implementing computer technology in
the classroom and school division. As military and government
bodies had known for decades before schools, computer technology
and networking can facilitate enormous growth in research and
development, and new and extended efficiencies internally to the
organization and in dealing with outside parties, organizations, and
clients. From the mid-1980s to 2000, school divisions for their part
learned that computer workstations and data storage lend
themselves particularly well to automation of routine reporting and
data collection, to name two notable examples, but they are
poor substitutes for instruction in the classroom. From the perspective of
curriculum development, educators learned that it is best to learn
to use computers to accomplish relevant and necessary tasks rather
than to understand how and why they work.
Today, educators have a unique opportunity in history, pioneering
the universe of Internet-mediated Distance Education. In the last
ten years, there has been explosive growth in web-based services
and technologies, yielding new services ranging from data management
and reporting to enrollment, curriculum, and support gateways.
Evolving in parallel, third-party contractors have proven
themselves to be worthy and capable of technical provision,
design, and management, functions that are often poorly managed and
under-funded in educational institutions. All this
leaves the door open to focus on new and effective ways of doing
what educators do — teach.
The current state of technology implementation in education
reflects a trend towards a more reasonable and balanced
integration of skills and expertise with educators using technology
to enhance and extend delivery, allowing new tools to facilitate the
process rather than confound it. In this article, we will continue the
discussion by focusing on the modeling of instructional constructs
for Distance Education from the dual perspective of technology
and pedagogy, the former from the view of Object-oriented
constructs, and the latter from the view of Bloom's taxonomy for
learning, teaching, and assessment. Several benefits of DE will be
exposed through a discussion of how formalized well-structured
instruction enhances the experience and production of online
education.

Contrasting Today and Tradition


More traditional approaches to teaching, by their nature, revolve
around the instructor. A student must be delivered to some center
of learning, entering as a guest under several conditions, and
becomes the pupil, a necessarily subordinate role. The student
waits upon the instructor and is completely dependent upon the
instructor, and to a lesser extent his fellow students, for learning.
These factors alone impose serious restrictions on the learner as he
is forced to internalize new rules of conduct and adapt to a new
environment and instructional style before he can fully avail
himself of the opportunity to learn. Even then, the learner is
further limited by the abilities of the instructor to assess his
learning style and level of comprehension then adapt the
instruction accordingly. Furthermore, the student runs the risk of
having an instructor who, for innumerable possible causes, cannot
effectively execute the task or might subject the student to unfair
treatment or bias. Whereas in the ideal state, the traditional
approach to instruction allows for a guided, personalized and
Socratic solution to instructional needs, the reality is that the
modern classroom, being overly dependent on the instructor for
delivery, falls short of that promise. This is not to say that good
teaching does not occur, but teachers do not always have the time or
skills to do the job effectively. They remain, however, the best
source for tailored and meaningful instruction and evaluation
compared to any automated system that might exist.
In its simplest form, Internet-mediated Distance Education offers
many options for pre-packaged rectilinear asynchronous training
solutions for a variety of needs. The student typically will pay a
fee for access to video content and will need to connect to the
provider's site in order to gain access to it. This approach typically
offers text, audio, and video content which the user works through
in a set sequence. If any questions arise with respect to content, the
user can play back the audio and video or re-read selected
passages but only rarely has access to experts to consult. There are
often exercises or sample files the student works with, most often
by mimicking what the recorded instructor does and there is
usually no facility for feedback on performance or troubleshooting
help. This format is particularly effective in technical skills
training for desktop computer productivity tools such as word and
image processing, web and paper publishing, and others where
instruction requires only that students be exposed to new
procedures of limited scope or more complex procedures consisting
of simpler tasks chained in a predictable sequence. While this
approach allows for a simple retail 'on-demand' approach to
training, it does not address concerns that arise from a need for
more interactive instruction and evaluation required by more
complex learning situations. It does, however, provide a great deal
of flexibility for the student as they can approach learning at a
self-determined pace and schedule. The student is further freed
from the constraints of space, instructor bias, and other
psychosocial factors present in a more traditional setting.
Today's web provides advanced facilities to more flexibly
approach individual learning styles and so the simple pre-packaged
solutions described above are not representative of what is
possible. Indeed, new DE paradigms are making use of advanced
algorithms and summative and formative assessment in order to
more closely approximate training solutions to students' specific
needs. In addition, the combined use of varied synchronous and
asynchronous communication modalities allows for human
interaction and professional intervention where required or
desired. Whereas facilities for managing and delivering data to the
student continue to evolve into more robust and flexible solutions for
DE, the foundation of effective teaching and learning remains in
sound instructional design. DE can then be implemented in such a
fashion as to meet the requirements of instruction, rather than
altering instruction to meet limits of technology.

Designing for DE
Does the mode of delivery impact on instructional design? Should
it? In practical terms, the answer to both questions is yes; the
design of an instructional solution depends on myriad factors
including but not limited to budget, availability of existing
solutions and components, staffing, target audience, numbers of
pupils versus instructors, availability of broadband and print
materials, mail service, and physical limitations of students and
instructors. In principle, however, the answer to both questions is
no. Effective instruction means learning is facilitated and guided
through key milestones leading to the final outcome, whatever that
might be, regardless of how interactions between student and
instructor are mediated.
The primary functional distinction between traditional and DE
delivery models is the role played by the instructor. The classroom
teacher, provided she knows the subject matter well enough, could
begin teaching with little to no advance preparation, correcting
herself as she moves through curricular goals; she becomes the
center of the process and the learning is entirely dependent on her
performance. DE, in contrast, forces the instructor to abstract her
lesson, to deconstruct the goals and objectives of curriculum and
very deliberately build lessons according to known rules and
predictable processes; the student assumes a more prominent,
central role while the instructor, though present, is not the focus.
The process of designing by object-oriented principles for DE, as
presented herein, illustrates several benefits of adopting a more
formal structure to instructional design. In particular, by borrowing
constructs from relational database management and object-
oriented programming and combining them with well-established
pedagogical principles, instructors can create learning solutions
that can enhance flexibility and efficacy of DE from the view of
programming and instruction, with similar positive effects when
implemented in conjunction with more traditional instruction.
Designing instruction from a detached perspective allows the
instructor to appreciate what is important and to lose what is
irrelevant and potentially confounding. Furthermore, following
more formalized principles of abstraction and atomicity leads to
simpler yet pedagogically comprehensive instructional models,
which are more easily adapted to varied learning styles. Students
appreciate, and are protected by, the clarity, predictability, and
lack of cultural and personal bias in well-abstracted models.
Additionally, thoughtful and formalized instruction protects the
instructor from grievances and frees her to concentrate more on
instruction.
From an administrative perspective, such modeling provides
maximum efficiency as each component defined is required and
only ever created once, as a class, for example. The full course
offering of the organization becomes a catalogue from which
customizable solutions may be drawn. These classes may be
combined in a variety of digital media from desktop applications
to web-based delivery modes to meet the needs of DE and locally
situated students.
Paradoxically, of the many great benefits of approaching
development in this way, perhaps the most significant advantages
relate directly to neither pedagogy nor technology. First of
all, adapting to this sort of design methodology requires only that
the instructor add more detailed form to his instruction and
remove himself from the process. In this way, the problem of
personality is eliminated. For the instructor, this simplifies
management of students, and the organization is buffered against
the transitional problems that occur with changing staff and
curriculum. Secondly, object-oriented DE (OODE) models provide maximum
technological accessibility and flexibility for all stakeholders from
students to managers and directors. This promotes productivity
and creates new opportunities for reaching new markets.

Data Normalization and Atomicity


Let us begin the discussion by considering briefly a relational
database concept, that of Normal Forms (NF). Databases, in the
most general terms, contain tables of data which are referenced
alone or in combination in order to add or retrieve simple and
complex values. Database tables are defined by columns first, then
rows. Columns define what the values represent and what
constraints might be applicable. For example, a table might have
columns for LastName and FirstName defined as text fields of
maximum length 50 characters each. The same table might also
have a column defined as a date field called BirthDay. By
comparison, each row of the table would include unique values for
each of the described columns; our table would contain numerous
rows, each one representing a single person.
First elaborated by Edgar F. Codd, the Normal Forms provide
criteria for designing database tables in such a way that risks of
data compromise and inconsistencies are limited. Normalizing
tables often also has the corollary effect of rendering data stores
more efficient and flexible. Let's consider briefly the core
principles of the first three (of several) Normal Forms for tables.
(This is not intended to be a detailed presentation of Normal
Forms, nor the controversies that might surround their definitions.)
First Normal Form (1NF):
Eliminate duplicate columns from the table (in other words,
eliminate redundancy) and
Create separate tables for each group of intrinsically related data,
and
Identify each row with a unique column (a key).
Second Normal Form (2NF):
Meet all the requirements of the first normal form, and
Remove subsets of data that apply to multiple rows of a table and
place them in separate tables, and
Create relationships between these new tables and their
predecessors through the use of foreign keys.
Third Normal Form (3NF):
Meet all the requirements of the second normal form, and
Remove columns that are not dependent upon the primary key.
In 1NF, we gather data into related groups and identify each group
(table) with an identifier or key. The key identifies the row and has
no other value to the dataset. In 2NF, we verify that all rows in the
table represent the same constructs - a table of people, for
example, should not include the name of a commercial enterprise.
Using keys, we create relationships between the separated tables
(TablePeople and TableEnterprises) so that, for example, John
Smith's key is associated with a row in TableEnterprises (Smith
and Co. Ltd.). 3NF takes another look at the tables and encourages
us to eliminate rows that are not intrinsically dependent on the
primary of the table. So, for TablePeople, each row would be
identified by a primary key (a.k.a. the 'key' or unique identifier)
and would contain information specific to the construct of 'person'.
A column in the TablePeople for bank accounts would not make
sense, according to 3NF, because the TablePeople key defines the
person as opposed to the banks they use. Another separate table,
TableAccounts, would be better suited to contain this information
as 'Bank Account' is a construct completely distinct from 'Person'.
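To make the normalization steps concrete, here is a minimal sketch in Python, representing tables as lists of dictionaries linked by keys. The table and column names follow the hypothetical TablePeople/TableAccounts example above; the specific values are invented for illustration.

```python
# Denormalized: 'person' and 'account' data mixed in one table.
# Bank details do not depend on the person key, which violates 3NF.
denormalized = [
    {"PersonID": 1, "LastName": "Smith", "FirstName": "John",
     "BankName": "First National", "AccountNo": "555-1"},
]

# Normalized: TablePeople holds only person attributes, and
# TableAccounts holds account data, linked back by a foreign key.
table_people = [
    {"PersonID": 1, "LastName": "Smith", "FirstName": "John"},
]
table_accounts = [
    {"AccountID": 10, "PersonID": 1,
     "BankName": "First National", "AccountNo": "555-1"},
]

def accounts_for(person_id):
    """Follow the foreign key to retrieve a person's accounts."""
    return [a for a in table_accounts if a["PersonID"] == person_id]
```

A query then joins the two tables through the key rather than duplicating bank details on every person row.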

Class-based Design
The second computing science concept called into play is that of
object-oriented design. Object-oriented programming (OOP), first
introduced in the 1960s through Simula 67, then formalized and
expanded as Smalltalk at Xerox PARC in the early 1970s, and by
others since, allows programmers to model the real world by creating
classes of objects, each of which has its own internal workings,
and defines its own data and behavior. Classes are used as a type of
flexible template for the creation of course and lesson objects.
Given the predictable and well-structured definition of the classes
and objects, they may also be recombined as building blocks of
new courses and lessons.
These classes, and the objects created from them, relate to one
another through exposed interfaces and otherwise hide the details
of their operations, in other words, they are self-contained. As a
metaphor, you might ask a car salesman to accept a certain price
on a vehicle and he will give you the answer without divulging the
details of how he worked out his response, but he might convey
some of the hidden information to a trusted partner, such as the
dealership's business manager. To further the example, the
salesman will understand a certain set of data constructs (such as
English words), but will not know how to manage others (such as
Finnish). This is to say that classes within an organization will
share common interfaces with constructs of similar names
behaving in similar fashions from department to department.
The following are key concepts in OOP as they relate to DE
design:
Class - A class lists the traits of a thing (object), in other words it
'abstracts' the thing - in this case students, instructors, courses and
lessons. The class includes the thing's characteristics (its attributes,
fields or properties) and the thing's behaviors (the things it can do,
or methods). For example, the class ClassStudent would consist of
traits shared by all students, such as firstName, lastName,
mailingAddress, emailAddress, and enrollmentStatus
(characteristics or 'properties'), and the ability to enrollNewClass,
dropExistingClass, makePayment, takeTest (behaviours or
'methods'). Classes provide modularity and structure in an object-
oriented computer program. The inner workings of a class (code to
a programmer, instructions to an educator) should be relatively
self-contained, in other words encapsulated. Collectively, the
properties and methods defined by a class are called members.
Object - A particular instance of a class. Whereas 'ClassStudent' is
a generalized construct, 'Sally_Peterson' is an instance of the class,
with specific properties defined. We say the object Sally_Peterson
is an instance of the ClassStudent class. The set of values of the
attributes of a particular object is called its state. The object
consists of a particular state and the behaviours defined in the
object's class.
Methods and Properties - As described earlier, methods are an
object's abilities, in other words, the things it is allowed to do and
designed to accomplish. enrollNewClass, dropExistingClass,
makePayment, takeTest are all examples of methods defined for
the ClassStudent and its instances (the objects created, or
instantiated, from it and the subclasses derived from it). Obviously,
it makes no sense to design and assign a method (ability) to a
student that the student would or could never use, though students
might have methods defined for them that they are conditionally
allowed to execute (perform). Properties are simply characteristics,
or traits, of the objects. Examples of properties of the ClassLesson
class might be NumberOfCredits (3 or 4 or whatever), Available
(yes or no), and Enrollment (being the number of students enrolled
currently).
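As an illustration only, the class and object concepts above might be sketched in Python. The property and method names (firstName, enrollNewClass, and so on) come from the examples in the text; the method bodies are invented placeholders.

```python
class ClassStudent:
    """Abstracts the traits shared by all students."""

    def __init__(self, first_name, last_name):
        # Characteristics ('properties') of the student.
        self.firstName = first_name
        self.lastName = last_name
        self.enrolledClasses = []

    # Behaviours ('methods') -- the things a student can do.
    def enrollNewClass(self, course):
        self.enrolledClasses.append(course)

    def dropExistingClass(self, course):
        self.enrolledClasses.remove(course)

# 'Sally_Peterson' is an object: an instance of ClassStudent with a
# specific state (the particular values of its properties).
sally_peterson = ClassStudent("Sally", "Peterson")
sally_peterson.enrollNewClass("Biology 101")
```

The class is the generalized construct; the object carries the state.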
Inheritance - 'Subclasses' are more specialized versions of a class,
which inherit attributes and behaviors from their parent classes,
and can introduce their own. For example, the class ClassStudent
might serve as a template for the sub-classes called
ClassGraduateStudent, ClassUndergraduateStudent,
ClassMedicalStudent, or ClassLawStudent. Clearly, all of these are
students, but they have important differences. The subclasses can
inherit properties and behaviours but can also add their own
tailored members. ClassMedicalStudent will inherit the basic
information required for all students and may add properties and
methods to describe medical specialty training and immunization
status, but this information would not be required for
undergraduates, for example. Finally, classes may draw traits from
not only their parents but also from other classes with which they
have permission to communicate.
Encapsulation - Encapsulation conceals the functional details of
the inner workings of the class from objects that communicate
with it. In terms of DE design, encapsulation requires that there be
no guessing as to how the lesson is to be carried out. The lesson may
reference antecedent work inherited through its parent class (that
is, all students must have some core set of properties and methods)
and so it does not always have to redefine them, but anything else
that has not been inherited must be deliberately and explicitly
defined. In practical terms, the student should not have to guess as
to requirements, instructions, or their status.
Abstraction - Abstraction is simplifying complex reality by
modeling classes appropriate to the problem, and determining
which of the parent class members to inherit to the new child.
Abstraction of the elements of a course seems intuitively simple
but can be difficult, even more so for a particular lesson, because it
demands that we define many things we typically take for granted,
have forgotten, or have simply integrated into our daily lives as
instructors. The instructor must adopt the outside view, that of the
student newly approaching the learning environment.
Polymorphism - Polymorphism allows you to treat derived class
members just like their parent class's members. More precisely,
polymorphism allows the programmer/designer to have the object
(an instance of a class) respond to the same method call in
accordance with those behaviours defined for them. In other
words, the same method will be handled differently depending on
who is carrying out the action. As an example,
ClassGraduateStudent and ClassUndergraduateStudent would both
have the method/ability to enroll in a course, call it the
enrollCourse method. Both students would make the same request
(call the method), but it might be handled differently from an
administrative perspective. The point is that simplicity arises when
the interface relies on predictable methods and procedures so that
whichever student you are dealing with has the same command
set. Flexibility arises because even though the same action is
requested by different classes of students, the request is handled
according to the rules defined for that particular class of student.
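Continuing the hypothetical Python sketch, inheritance and polymorphism might look like the following: both subclasses expose the same enrollCourse interface, but each handles the call according to its own rules. The advisor-approval step is an invented example of such a rule.

```python
class ClassStudent:
    def __init__(self, name):
        self.name = name
        self.courses = []

    def enrollCourse(self, course):
        # Default behaviour, inherited by all subclasses.
        self.courses.append(course)
        return f"{self.name} enrolled in {course}"

class ClassUndergraduateStudent(ClassStudent):
    pass  # inherits enrollCourse unchanged

class ClassGraduateStudent(ClassStudent):
    def enrollCourse(self, course):
        # Same interface, different handling: graduate enrollment
        # (hypothetically) routes through advisor approval first.
        self.courses.append(course)
        return f"{self.name} enrolled in {course} (advisor approved)"

# The caller uses one command set; each class applies its own rules.
students = [ClassUndergraduateStudent("Ann"), ClassGraduateStudent("Bo")]
results = [s.enrollCourse("Statistics") for s in students]
```

Whichever student the administrator is dealing with, the request looks identical; the class determines how it is processed.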
The point here is not to 'program' courses and lessons as though
they were computer programs, but rather to use these concepts as
guides in the deconstruction of learning needs and subsequent
reconstruction of more effective and flexible models of instruction.
Furthermore, not every lesson need reflect all of the principles
presented in order to be effective.

Fusion of Technology and Pedagogical Foundations


When these principles of deconstruction and reconstruction are
combined with the learning paradigms described by Bloom et al.
(1956) and Anderson & Krathwohl (2001), or by Guilford (1956,
1966, 1967) and Meeker (1969), the resulting structures can yield
formidable and efficient instructional models, which can easily
adapt to both DE and traditional delivery.
To review, Bloom's Taxonomy, or simply 'The Taxonomy',
elaborates the cognitive process dimensions of learning in terms of
six types, each of which is further sub-divided into measurable
behaviours. This represents the Cognitive Process Dimension:
Remember: Recognize, Recall
Understand: Interpret, Exemplify, Classify, Summarize, Infer,
Compare, Explain
Apply: Execute, Implement
Analyze: Differentiate, Organize, Attribute
Evaluate: Check, Critique
Create: Generate, Plan, Produce
Each of these is juxtaposed against the axis of the Knowledge
Dimension, which consists of Factual, Conceptual, Procedural, and
Meta-cognitive Knowledge. The resulting arrangement is a grid:

                 Remember  Understand  Apply  Analyze  Evaluate  Create
Factual
Conceptual
Procedural
Meta-cognitive
On the simplest level, these axes can be accounted for intuitively
in the design of the lesson classes. Put another way, the instructor
can 'wing it' and assume that she is covering the wide range of
cognitive skills required by the program of studies. The preferred
approach is to incorporate the skills in an array in the lesson
classes (hence objects) themselves, the table indicating which
elements have been taken into account in the particular lesson. In
this view, the skills are represented as properties of the class,
available to be referenced externally by other instructors who
might want to make use of the lesson object in question. Further,
the object taxonomy properties for an entire course could be
queried and assessed in order to determine the nature of the
cognitive skills and knowledge inherent in the program of studies.
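One hedged way to realize this in code: give each lesson object a taxonomy table, a mapping from knowledge dimension to the cognitive processes addressed, which can then be queried across an entire course. The class name follows the ClassLesson example earlier in the text; the aggregation function and sample lessons are invented for illustration.

```python
class ClassLesson:
    def __init__(self, title, taxonomy):
        self.title = title
        # Which cognitive processes this lesson addresses,
        # keyed by knowledge dimension.
        self.taxonomy = taxonomy

def course_coverage(lessons):
    """Aggregate the taxonomy properties across a whole course."""
    coverage = {}
    for lesson in lessons:
        for dimension, processes in lesson.taxonomy.items():
            coverage.setdefault(dimension, set()).update(processes)
    return coverage

course = [
    ClassLesson("Intro", {"Factual": {"Remember"},
                          "Conceptual": {"Understand"}}),
    ClassLesson("Lab", {"Procedural": {"Apply"},
                        "Conceptual": {"Analyze"}}),
]
```

Querying course_coverage(course) reveals which cells of the taxonomy grid the program of studies actually touches, and which remain empty.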

Approaching Instructional Design for DE


To achieve the best results in instructional design for DE, the same
guidelines apply, regardless of the nature of the learning task. As a
first priority, use well established models of learning to clearly
conceptualize specific learning outcomes incorporating the
requisite knowledge elements. The next step is to follow the
principles of relational databases and object-oriented programming
to guide the deconstruction of the learning requirements leading to
the reconstruction of the instructional model and lessons.
Consider what knowledge, skills, attitudes and, as appropriate,
affective responses are desired as relevant outcomes.
Sequence activities and course requirements in such a way that
students can achieve progressive successes in prerequisite skills as
they move forward in their learning.
Analyze activities individually ensuring all prescribed cognitive,
affective, and behavioral (skill) requirements have been
incorporated.
Plan out content as a function of outcomes bearing in mind the
specific learning dimensions addressed in the lesson. Ensure that
the curriculum and content is addressing not only the knowledge
but the cognitive skills required by the program.
Eliminate redundancies in the course and its lessons, unless
specifically required for enhancement and reinforcement of
learning.
Clearly define pre-requisites and post-requisites and other external
dependencies.
If the same outcomes are covered in another lesson or in another
course, consider using that lesson in the current instance.
If the information is available elsewhere, reference it instead of
creating a new instance of the data.
Eliminate unnecessary dependencies between elements. Is the
instructional sequence the way it is because it makes sense, or
rather because of tradition? Can the sequence be changed without
impacting achievement of global objectives?
Think in terms of Classes — Identify first the levels of abstraction
required; perhaps the highest level is the 'degree' or 'certificate', or
it may simply be 'course'. Each course is generally then divided
into lessons. Each lesson contains learning and evaluation
elements. Clearly define the methods (activities allowed the
students and instructors) and properties (resources, statistics, other
state elements) required for the highest classes and then inherit the
relevant members to the subclasses, making members more
specific according to the needs of the subclass.
Consider the Object — Let the nature of the final object guide the
design of the class. Consider, for example, not 'Sally Peterson' so
much as the class she represents. What makes her different from
others in her group? If the answer is that there is nothing relevant
that separates her from her peers, then they are all in the
appropriate class. Define then all the relevant common terms
between Sally and her peers including the things she needs to do,
or have done for her (the methods), and the information required to
manage Sally as a student (properties, state).
Methods and Properties — Avoid the specificity trap when dealing
with class and class member definitions. Approach the problem
from the view of determining those members which absolutely
must be defined, leaving everything else to be ignored.
Inheritance — The highest level of abstraction should be
determined by a central governing body in an organization and
inherited downward to departments and classrooms. By extension,
all departments inheriting from the same parent would be able to
freely interface with the others, making full use of available
libraries from other areas.
Encapsulation — Instructions and procedures should be
unequivocal and consistent throughout the entire program (degree
or course or lesson). Be sure to indicate where members are
polymorphic with respect to parent classes, but this need only be
noted within the child class itself as the specifics are irrelevant to
other classes.
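The levels of abstraction described in these guidelines might be sketched as a small hierarchy, with members defined once at the top and inherited downward. All names here (ProgramUnit, the institution attribute, and so on) are invented for illustration.

```python
class ProgramUnit:
    """Highest abstraction: members every unit of instruction shares."""
    institution = "Example University"  # defined once, inherited by all

    def __init__(self, title):
        self.title = title

class Course(ProgramUnit):
    def __init__(self, title):
        super().__init__(title)
        self.lessons = []  # each course is divided into lessons

class Lesson(ProgramUnit):
    def __init__(self, title, objectives):
        super().__init__(title)
        self.objectives = objectives  # learning and evaluation elements

course = Course("Biology 101")
course.lessons.append(Lesson("Cells", ["Remember", "Understand"]))
```

Because every department's classes descend from the same parent, each component is defined only once and can be freely recombined across the organization's catalogue.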

Conclusion
This paper outlines an approach to instructional
design that borrows a conceptual framework from computing
science and merges it with well-established pedagogical principles
in order to establish more efficient and flexible models of delivery.
Relational database normalization provides a paradigm for
atomizing data and rendering it more useful to varied aspects of
the training organization. Concepts drawn from object-oriented
programming further allow the designer to abstract learning and
instructional constructs in such a fashion as to render them more
useful to the organization and more easily comprehensible from
the perspective of students and instructors.
The next paper will present a generalized example of instructional
design following OODE principles. While OODE lends itself well
to delivery in the context of digital media, it will be shown that the
same principles will work equally well when implemented locally
in more traditional settings.
References:
Anderson, L. W., & Krathwohl, D. R. (Eds.), with Airasian, P. W., et
al. (2001). A taxonomy for learning, teaching, and assessing: A
revision of Bloom's taxonomy of educational objectives. New York:
Addison Wesley Longman.
Bloom, B. S. (Ed.), Engelhart, M. D., Furst, E. J., Hill, W. H., &
Krathwohl, D. R. (1956). Taxonomy of educational objectives:
Handbook I: Cognitive domain. New York: David McKay.
Guilford, J. P. (1956). The structure of intellect. American
Psychologist, 21(1).
Guilford, J. P. (1966). Intelligence: 1965 model. American
Psychologist, 21(1), 20-26.
Guilford, J. P. (1967). The nature of human intelligence. New York:
McGraw-Hill.
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of
educational objectives, the classification of educational goals;
Handbook II: The affective domain. New York: David McKay.
Meeker, M. N. (1969). The structure of intellect: Its interpretation
and uses (N. C. Kephart, Ed.). Charles E. Merrill.
The International Review of Research in Open and Distance
Learning - http://www.irrodl.org/index.php/irrodl/index
The Journal of Learning Design - http://www.jld.qut.edu.au/
Canadian Journal of Learning and Technology - http://www.cjlt.ca/
Australasian Society for Computers in Learning in Tertiary
Education - http://www.ascilite.org.au/
For a comprehensive listing of instructional design (ID) models,
see http://carbon.cudenver.edu/~mryder/itc/idmodels.html
Massively Multiplayer Online Role-Playing
Games
Review by IProfess, Elvin Druid of Zuljin, Azeroth
All communications care of <barlowj@pacificu.edu>
(Please place "For Iprofess" in subject line)
Kelly 2, R. V. Massively Multiplayer Online Role-Playing
Games. London: McFarland & Company, Inc., Publishers, 2004.
Editor's Foreword: This review is another anonymous contribution
purporting to be written by "IProfess," who presents himself as a
cartoon character--which he now rather grandly describes as an
"Avatar" — living in the cybernetic confines of an online game,
The World of Warcraft. Mr. IProfess' first contribution "The Tales
of Azeroth," can be found at: http://bcis.pacificu.edu/journal
/2005/03/iprofess.php and his second, a book review, Synthetic
Worlds, The Business and Culture of Online Games at:
http://bcis.pacificu.edu/journal/2006/01/castronova.php. While we
have pressed him for more original essays, he prefers now to
confine his possibly certifiable thoughts to reviewing occasional
books.
IProfess begins:
Ah, gentle readers, it has now been more than a year since last I
shared with you my rich experience as a resident of Zuljin, a most
excellent server in the World of Warcraft. At that time I was lured
into the shadow world that you inhabit to review Edward
Castronova's work, Synthetic Worlds, The Business and Culture of
Online Games. [1] I have insisted to your otherwise excellent
editor of this journal, in response to his constant importuning to
write more, that I was simply not interested. What can usefully be
said, has been said, or so I thought. Like all well-adjusted Avatars,
I prefer to play.
My withdrawal was somewhat enforced by an unforeseen event in
the World of Warcraft: an expansion was added, just as I reached
the apogee of the sixty possible levels in the original. And this
required that I begin again, questing and bashing monsters in The
Outlands, where I can fly! I can fly!
Well, as you can see, The Outlands are quite an exciting alternative
to the so-called Real World (TRW) where eyeballs are sold to
TV advertisers who extract them from their audiences by luring
them to observe passively completely non-interactive games, such
as the sad decline of Brittany, who began dancing before gnomes
and dwarves for silver pieces in Ironforge in Azeroth, and now
dances for far less in TRW.
But I digress--a continual problem in TRW where the self-imposed
disciplines of leveling are both vague and elusive and their
rewards seldom of epic quality! I return to TRW to write this
review essay not about a new book, but by the standards of
gaming, a very old one, the 2004 volume, Massively Multiplayer
Online Role-Playing Games.
This book was written by a highly qualified author, the amusingly
named R.V. Kelly 2 (I assume that the neo-classical RTD [2] Kelly
was probably taken on his server.) At the time of publication, this
was an excellent introduction to MMORPG, or, as his title would
rather have it, Massively Multiplayer Online Role-Playing Games.
Kelly 2 worked in virtual reality application development for the
Digital Equipment Corporation, a leader in TRW industries, which
briefly dominated the field. I thought it would be amusing, and
possibly even instructive — though I realize this latter process is
not one of much interest to denizens of TRW who prefer to be
endlessly diverted from their cruel lives — to use this book as a
sort of touchstone for where MMORPGs have gone since the
work's publication in 2004.
Upon reading and reflection, this work proved to be a worthy
choice because it has stood the test of time, at least in TRW where
time stomps by like a lagging Dying Clefthoof. Because Kelly 2
tried to make useful analytical points from the games of the time,
many of his observations proved not to be time-bound at all. It is
evident that he was a close observer of the industry as it
developed, and most of his points remain true today.
One of Kelly's concerns is the issue of addiction in MMORPG.
This, of course, presents some odd logical issues for me: am I, for
example, addicted to myself? However, this issue is indeed still a
timely one.
At present, the homeland of this estimable journal in TRW is
engaged in the quadrennial "Quest for the Presidency of the United
States" — no winners there, only several hundred million losers.
But it seems safe to hazard that a question in that election, issuing
no doubt from such saintly lips as those of candidates Romney or
Huckabee (!), both eager to become Master of the Reagan Guild,
will be whether or not to impose increased regulations
on game playing, lest even more eyeballs be lost to advertisers.
So the question of addiction has not changed. Kelly 2, however,
brings a sharp eye and much thoughtful time spent in MMPORGs
to the issue. His entire Chapter 4, "Attraction and Addiction" is
devoted to the issue. He wisely deals first with attraction and finds
that in many cases the games make up for deficits the player
experiences in TRW, including perhaps poor social skills, less than
attractive RW bodies, an inability to succeed in reaching goals, etc.
MMORPG provide a second chance, in effect, a place to excel,
where disabilities are potentially balanced by the development of
new skills, in new virtual identities.
For other players, the MMORPG are tension releasers,
entertainment more enveloping than television, new and beautiful
lands to explore, a new dimension in which to do art, even a place
to find a new sort of spiritual satisfaction in being kind, generous
and thoughtful with other players.
Some, however, are so attracted that they can properly be
described as addicted. Kelly 2 believes that this is true of 9 to 30%
of the populations of games in which he has played. His definition
is very loose: these players are in the games "all the time..." (p.
66). Kelly 2 unabashedly presents this addiction as physically
similar to drug addiction, and likely genetic in its origins.
While I am hardly an expert in this field, this is a position which is
highly controversial in the literature, and, in my view, probably not
ultimately defensible. In this regard at least, the field has advanced
since the publication of this book and if the discussions are no less
decisive, they are now conducted with much more information and
from a much wider perspective.
An area where Kelly 2 brings his innate thoughtfulness plus
considerable experience in games to bear is in delineating some of
the higher-level satisfactions experienced by players he meets
online. Many felt that they were creating a story both as participant
and author. Others enjoyed communities which seemed as real and
as useful to them as RW groups of friends.
Kelly 2's questioning in games must make him seem a very strange
sort of player to many other Avatars. He is seemingly always
playing with one hand on the mouse and the other on a pen, taking
notes. While this participant-observation technique is certainly
common among RW academics, Kelly 2's approach is more
impressionistic than systematic.
This aspect of game research, too, has changed. We now have
many more statistical studies done with much wider databases than
Kelly 2 can provide alone. [3] But nonetheless, Kelly 2's insights
are often quite startling to a calloused Avatar like myself who has
come to expect very little from a denizen of TRW who merely dips
in and out of virtual reality.
Kelly 2 devised many thoughtful experiments to test virtual world
reactions to some of the questions he wished to explore. For
example, he gave away valuable items to players, including large
amounts of game wealth. He then asked that they be returned in
exchange for even greater gifts. Few people thanked him, and none
returned the first gift, although some were very grateful. (Pp.
39-40) This makes an Avatar ashamed of his people, and one
wishes that Kelly 2 had been more specific so that warning signs
could be posted on the outskirts of this server, whichever one it
was!
Kelly 2, following on the heels of Edward Castronova's excellent
work, which I reviewed in these pages, also discusses gaming
economy, which he finds dreadfully Darwinian, stating: "Greed is
Creed."
Ultimately, Kelly 2 comes down firmly on the side of gaming
worlds as escapist, and states that "..the essential element that
defines all MMORPG cultures (is)-the ability of players to
experience the joys and triumphs of life without the physical risks
and punishing failures." (44)
This is, I hope, largely the experience of a non-gamer, one who has
approached the MMORPG culture more as an anthropologist than
as a true participant. At the same time, the book seems to take a
contrary view to this doleful assessment at many points as Kelly 2
chronicles the satisfactions experienced by others and the many
examples of altruism.
I myself have recently been speaking with soldiers playing in The
Outlands of WoW from TRW's outlands, Iraq. These men, while
clearly trying to distract themselves from an exceedingly harsh
reality, also seem to me to be approaching the virtual world as
altruists, whereas one would expect them to be savagely
virtual-self centered. I do hope that somebody as sensitive as Kelly
2 seems to be to virtual worlds does some work in this unique
environment.
One of Kelly 2's chapters is "Sampling the Games." Among other
examples, Kelly 2 uses games such as, Asheron's Call [4], City of
Heroes, A Tale in the Desert, Darkfall, Toontown, The Saga of
Ryzom, Lost Continents, Priest, Dragon Empires, Neocron, Dark
Age of Camelot, Anarchy Online, Project Entropia, Star Wars
Galaxies, Middle Earth Online, and, of course, Everquest. He also
mentions but does not go into any detail on several others,
including World of Warcraft, whose future success was
unforeseeable as he finished writing this work, probably in 2003.
The history and economies of MMORPGs are illustrated in the list
that Kelly 2 chose: five years after Kelly 2 published, Ultima
Online, City of Heroes, Lineage, The Saga of Ryzom, Neocron,
Dark Age of Camelot, Anarchy Online, Project Entropia, Star
Wars Galaxies, and Everquest are all going strong and into
subsequent versions and expansion packs.
Gone but not forgotten, still represented on the passage between
TRW and the cyber worlds, the Internet, are Earth & Beyond and
several other games which attracted a lasting fan base in their
brief lives in TRW. Darkfall is apparently still under development
five years later, but a loyal crew still works to bring it
out. Toontown, despite being an attempt by the Disney Studios to
cash in on MMORPGs while following the "play nice" rules of
TRW, seems to be moribund. Lost Continents seems never to have
gotten off TRW ground. Priest never really made it out of its home
in TRW, Korea. Dragon Empires died in 2005.
The failure of some of Kelly 2's game choices to survive
somewhat dates his book, but his chapter was really written to
permit him to make useful observations about games in general as
much as to simply list possible choices for the reader. It would be
nice, however, if the publisher were to update this list, or Kelly 2
would update it on the Internet. Sadly, this book seems to have
died an undeserved death in TRW.
And for each game on the list Kelly 2 put forward, there are at
least five or six which have developed and proven successful since
2004. [5]
Kelly 2's last chapters, "Making a MMORPG" and "The Future of
MMORPGs," are useful to straight citizens of the shadow world
(TRW) and to Avatars alike. Because of, no doubt, his own technical
background, he goes into quite a bit of detail that has, I must say,
given me an entirely new appreciation for myself and for my
worlds. His "Future" chapter seems to me rather weak; the
technology of the games has not changed as quickly as he
expected, perhaps because the industry has largely been taken over
by money men from TRW, and the artists and Avatars have been
driven back from a frontier where start-up costs now run to many
millions of dollars.
All in all I have found Kelly 2's little book a very worthwhile one,
and the exercise of comparing the world of the MMORPG in 2003
with the one we know today a somewhat surprising one in that it
seems to show that the essential nature and questions of the
environments have been stable for some time. This is
particularly odd in light of the continual spate of breathless tomes
that seek to explain these worlds to novice audiences. Not that
breathless now, and consumers not that novice, either. World of
Warcraft alone now has ten million subscribers.
[1] http://bcis.pacificu.edu/journal/2006/01/castronova.php
[2] Editor's note: it is impossible to resist pointing out that, in fact,
the candidate to raise this issue was Hillary Clinton:
http://blog.wired.com/27bstroke6/2007/12/clinton-would-c.html
[3] See Nicholas Yee's The Daedalus Project at
http://www.nickyee.com/daedalus/
[4] Which passed into the history of TRW in 2005; see the very
touching tribute by Clive Thompson at: http://www.wired.com
/gaming/gamingreviews/commentary/games/2005/12/69848.
Clive, relax, it still plays very well, just not in TRW!

[5] See a typical hybrid fan/commercial site at:
http://www.mmorpg.com/index.cfm?BHCP=1&bhcp=1 for lists of
such new games.

The Secret Circuit
Review by Jeffrey Barlow <barlowj@pacificu.edu>
about
Abramson, Bruce D. The Secret Circuit: The
Little-Known Court Where the Rules of the Information
Age Unfold. New York: Rowman & Littlefield
Publishers, Inc., 2007.
This has been, for me, a perplexing book to review. I find it well
researched and generally well written, particularly given the
complexity of its subject. For certain niche audiences, the work
will prove very valuable. However, I must question a number of
important decisions made by the author in organizing it, and
wonder precisely for which audience the work is intended.
The Secret Circuit is, simply put, an overview from a historical
and legal perspective of The United States Court of Appeals for
the Federal Circuit. The court is, as the subtitle asserts, little-
known, yet very important.
The Court was created in 1982 to hear particularly complex cases
dealing with patent law (including Intellectual Property). As well,
it serves as the appellate jurisdiction — the court of appeals — for
the courts of International Trade and Federal Claims.
The author is well qualified to write such a book. He held a
Clerkship at the court in 2003, and is familiar, as the useful notes
and bibliography demonstrate, with a wide range of literature
dealing with the issues covered.
In addition, Abramson is the author of Digital Phoenix, a 2005
MIT press work on the rise, fall, and the author argues, inevitable
second rise, of the digital economy. He is also the President of
Informationism, Inc., a name chosen because it apparently
represents the author's personal philosophy.
Because the author's philosophy continually intrudes into the
work, and a key effort of the book is to describe the location of the
court and its important decisions on the political spectrum running
from socialist to conservative, readers should be prepared to deal
with this issue.
I initially decided that the author was essentially, as I understand
the term, a libertarian. His later prolonged argument that he is, in
fact, a liberal I found interesting but unconvincing. An issue such as
this should, of course, be quite irrelevant to a work on the "Rules of the
Information Age," but the author views these discussions as
critical to an understanding of the court itself.
I found these discussions confusing and distracting. Doubtless
these responses result from my own intellectual shortcomings, but
I think it probable that many readers may share my reaction.

As well as writing an objective history of the court, Abramson
continually evaluates its work with the question "Is the policy we
have the policy we want?" This makes the work both descriptive
and prescriptive, and again many readers may decide that the
author's prescriptions are too deeply rooted in his personal opinions
as to the future we should collectively share.
Another flaw with the work is that it was, I feel, in part sold under
false colors. The sub-title, as well as the colorful cover of the work
with its background of digital numerals behind a classical balance
scale, implies that the work will deal quite directly with the
information age.
At points it does so, and very well indeed. It is for this reason that
I have chosen to review the work. The treatment of Intellectual
Property, the World Trade Organization's TRIPS agreement
(Trade-Related Aspects of Intellectual Property Rights), DRM
(Digital Rights Management) legislation, and the legal decisions
resulting from them is very clearly laid out and very informative. The author's
references to other relevant works are also quite valuable.
The author has no problem relating these topics to the
development of digital technology and the rise of the digital
economy. Unfortunately the work then begins to wander from this,
to us, useful focus, as it considers the court's additional appellate
roles in trade and federal claims.
At the last we must accept the argument that we are in the
information age, as defined by the author, and that this court is an
important legal jurisdiction in that age as adequate justification for
the sub-title. This position, however, seems quite strained as the
work progresses and many readers may hurry through the last half
of the book with some impatience.
However, there is no question but that those involved in any of the
issues covered in the work will find a quick reading of the relevant
portions, easily identifiable from its superb index,
useful. For example, I have been dealing at the Berglund Center
with an issue involving patents, and the work prepared me well for
a discussion with a legal specialist in the area, as well as with a
venture capitalist.
Attorneys will find the book useful as well, though there may well
be better sources for highly specialized readers. What the author
terms "policy wonks" might also find the discussion of current
policies vs. the author's perceived ideal policies useful.
But the work as a whole seems to me to be cluttered with the
author's continual idiosyncratic intellectual digressions. In the
end I think it potentially valuable for a number of discrete
audiences, but question the authorial decision to attempt to lump
such a broad grouping of topics together, despite their shared focus
in the work of the United States Court of Appeals for the Federal Circuit.
The Secret Circuit, for all its insights and value, might well have
been issued as several much shorter and more tightly edited
volumes, shorn of the author's personal perspectives insomuch as
this is ever possible.
See review by Steven Margolis at: http://www.independent.org
/publications/tir/article.asp?issueID=46&articleID=594
See the author's web site with links to his blog at:
http://www.theinformationist.com/index/ Pp 241-246.

Yo ho ho: Video Piracy, a Reappraisal and a
Modest Suggestion
by Jeffrey Barlow <barlowj@pacificu.edu>
(Written in Wenzhou, China, January 11, 2008)
I have been writing about video piracy in China for some
time now. It is an inescapable part of living and working in China.
I do not know any foreigner here who does not purchase such
videos, nor, for that matter, do I know any such Chinese.
The videos have, in the past five years, sold for 8 Ren Min Bi (the
Chinese currency; currently 1 USD = 7.2 RMB), or up to 15 RMB
for the unwary. They have most usually been packaged in thin,
poorly colored packages, often with mismatched blurbs on the
back; a package supposedly containing "Terminator II" might well
have a review of "Blue Crush" on the back of it.
Astonishingly, some of the blurbs, even if for the correct film,
would be incredibly critical ones taken from the Internet. "Sawing
Heads 14" — "Only a bloodthirsty idiot would sit through the first
ten minutes of this film. If you have watched the entire thing, seek
counseling immediately, in the interests of my safety, if not your
own."
Many of the films were shot in theaters with hand-held digital
cameras and not a few I have seen included arguments with nearby
patrons who began yelling at the perp. Often the film quality,
particularly the sound, was so low as to be unwatchable.
Another common problem is that the English-language market is
not the only one. I once tried to watch Mel Gibson's Mayan
flick Apocalypto (2006) in Russian, which was amusing for less
than five minutes.
Because of all these problems, I had assumed that this industry
was not a real long-term threat to intellectual property rights.
American companies have operated under the illogical assumption
that a pirate selling for less than two dollars here was depriving
them of the sale of a twenty-dollar U.S. copy. This is nonsense and
ignores all the market truths of supply and demand, not to mention
issues of localization and competition.
At the August 2007 Aspen Conference I had a discussion--warm,
if occasionally heated--with a Hollywood intellectual property
rights attorney over the degree to which such piracy was really all
that costly. After we went back and forth over the question as to
whether an urban Chinese with a daily income of about ten
dollars U.S., even at the upper end in a city as rich as Wenzhou,
would ever pay U.S. market price for such a video, he replied in
frustration that he considered that the cost of goods was often
overemphasized. I do hope that he either has no balloon mortgage
or that it has not yet come due.
My earlier thinking, from observing such pirate markets in Asia
for almost forty years now, was that Chinese incomes would
continue to improve and that the U.S. industry, as Warner Bros.
has done, would "get it" and release full high-quality versions of
its catalogues at lower prices, which would ultimately push the
cheap and unreliable copies out of the market here.
However, when I was in Hangzhou receiving an award for
educational contributions to Zhejiang province in April, I
discovered a new and much higher quality pirated product was
emerging on the market.
These are very slickly packaged, sometimes with graphic posters
and inserts not found on the American market as far as I know.
They are in a standard black push-to-close box, just like the videos
I rent at Blockbuster in Forest Grove. They are in a sense more
high-tech than U.S. copies in that they are frequently in advanced
formats, and commonly have a variety of subtitle or even audio
tracks not available on the U.S. releases.
I assumed that the new product reflected the Hangzhou local
production facilities or possibly the local market. Hangzhou has
been a production center in global trade for well over a thousand
years — Marco Polo wrote about its riches in the 13th century —
and I was not surprised to see it take the lead in piracy, too.
However, when I got to Wenzhou in December, I found that the
changes in quality were apparently at least region wide in that the
same slick products are now available here.
The price has gone up somewhat. I noticed that when "Atonement"
first came onto the market here (about three weeks after the U.S.
release) it was available only in one high-end store in the tony
Wuma Jie shopping district and cost 20 RMB, about three dollars
— an unheard-of price for a pirate DVD. No amount of spirited
haggling would get the price down, either. The clerk simply told
me I could not find it anyplace else, which was true--for about a
week.
Then it appeared simultaneously all over town, and in different
packaging as well. The classical 12 RMB version was available in
smudgy off-set printing, but so was the high-end beautifully
printed and boxed version, which I have heard, ahem, plays
flawlessly in a number of languages with a rich choice of subtitles.
I see now that my original hopeful prediction of a crossover point
for quality vs. cost was only partly accurate. Chinese viewers have
moved up-market as their disposable income has improved, but so
have the pirates moved up with them, improving quality of product
as well as the slickness of advertising and packaging.
It seems to me now that a solution will be found to this issue, but it
will not be the one I earlier envisioned. I now believe that market
realities will force American video entrepreneurs, perhaps the
studios themselves, to invest in the pirate companies here.
Such partnerships will give the current pirates first shot at new
films, done from original masters ensuring quality, and will make
the copyright and intellectual property rights issues an internal
Chinese problem. We would soon see reformed pirates, with real
knowledge of the industry and thus able to make a strong case in a
Chinese court ("I used to work for that guy, and his production and
supply chain looks like this...") suing the pants off unlicensed
competitors on the ultimate behalf of American companies.

The Rise of Manipulatives in Video Games
By Chris Pruett
My wife is a native of Japan, and we travel back to that
country to spend time with her family at least once a year.
Every time I visit Japan I make it a point to visit local
arcades to see what types of video games are currently popular.
Though they have all but vanished from the United States, game
arcades have managed to survive in Japan by appealing to a wide
audience: most include "print club" photo booths (aimed at teens)
on the first floor and gambling simulations (aimed at an older
crowd) on the upper levels, with traditional action-oriented games
wedged in-between. The level of commitment required roughly
correlates with the floor: the street-level floors are reserved for
casual gamers, while the upper floors are dimly lit and smoky
bar-like affairs, with most of the customers fixated intensely upon
large screens depicting virtual horse races. "Normal" games, the
kind that we are used to in the West, are somewhere in the middle.

The middle floors of these Japanese arcades are fun because many
of the games are played using elaborate devices. Namco's Taiko No
Tetsujin ("Taiko Drum Master"), for example, is played by
pounding plastic drumsticks into giant taiko drums that protrude
from the game's cabinet. Another Namco game, Rapid River
DX[1], simulates white-water rafting by forcing players to sit in a
miniature raft and manipulate a long paddle. Every time I visit a
Japanese arcade I am surprised to find some new game with a
unique control method.
But last year I began to notice a new phenomenon in Japanese
arcades. Off in the corner of the middle floors, where the lights
begin to dim and the atmosphere from the floors above begins to
bleed in, I found a number of people playing video games with
playing cards. I am not talking about virtual poker or games where
cards appear on the screen, but rather games where the player has
physical cards in a deck that he uses to interact with the game
machine. These games are a fairly recent development in the
Japanese arcade world, and from what I could see they appear to
be quite popular. Players control the game using special cards,
which are placed on a flat surface in front of the screen[2]. The
machine can tell which cards are in play and where they rest on the
playing field, and by manipulating the cards the player engages in
a strategy game against the computer or other players. In another
version aimed at younger kids, poker chip-like plastic discs were
used instead of playing cards.
In some ways, the idea of a video game driven by playing cards is
so obvious that it is amazing that such systems have appeared only
recently. Japan's affinity with card games stretches back hundreds
of years; Nintendo, the company that is the father of the modern
game industry, originated in 1889 as a company that produced a
card game called hanafuda. Fitting, then, that Nintendo is also the
force behind popular game-related card games today. In 1996,
Nintendo repurposed its incredibly popular Pokémon video game
brand as a collectable card game. Modeled after an American
game called Magic: The Gathering, Nintendo's Pokémon card
game requires players to assemble a collection of cards from
which a powerful deck can be carefully built. Like baseball cards,
Pokémon cards are sold in small packs, and the content of each
pack is random. Some cards are rarer than others, making them
more valuable among players and collectors. Once a player has
created a deck, he can challenge another player to a game. Play
involves placing cards that represent monsters and resources from
one's hand out on the play field, and using those cards to try to
defeat the opponent[3].
It is not hard to see why the Pokémon card game is popular,
especially among kids. The game taps into collection and
management mechanics common to all kinds of collectable cards,
and the random nature of each pack makes card collecting itself a
form of Skinner box. The game took off in Japan and remains
popular there today; subsequent releases in the US and other
countries were met with similar success. The idea that a card game
made by a video game company might be tied back into a video
game itself seems obvious. Indeed, Nintendo did attempt to sell a
peripheral for its GameBoy Advance system that could read
data strips printed on cards, but the idea never really took off. Now,
a few years later, other companies have filled this gap in the
market by producing arcade games that play like card games but
render the imaginary context of the game in realistic 3D graphics.
What I find most interesting about this trend is that playing cards
are the most recent example of arcade games in Japan surviving
because they provide physical manipulatives. When my wife was
studying for her Master's in education, her teachers regularly touted
the use of manipulatives as a way to give students, particularly
elementary and middle school students, a concrete understanding
of abstract concepts. The idea is that by rendering abstract ideas
into physical objects that can be manipulated and observed, a
teacher can communicate the details of the idea more effectively
than through lecture alone. A typical example of this approach
appears in Elizabeth G. Cohen's Designing Groupwork:

"Key concepts, such as linear coordinates, are embedded
into the activities. The child encounters linear coordinates
repeatedly in different forms and at different stations. For
example, at one station, students locate their homes on a
map, using the coordinates. At another station they work
with longitude and latitude on a globe. After repeated
experience with these abstract ideas in different media, the
child acquires a fundamental grasp of the idea that they
will transfer so that he or she will recognize it in new
settings, including in an achievement test."[4]
There's something about being able to hold an object in your hand
that makes the properties of that object obvious, and I think that
the appeal of arcade games is that they can actually provide a
physical object for the player to interact with that is less abstract
than a joystick. If the goal of manipulatives in education is to teach
concepts through physical interaction, perhaps video games that use
manipulatives work on the same principle. Games actively strive
to teach the player the rules by which the game world is governed,
but they also have another goal: to make the experience of using
the game intrinsically enjoyable. Just as linear coordinates may be too
abstract a concept for some children, the traditional game
controller itself may be a similar barrier for casual gamers. In his
book What Video Games Have to Teach Us About Learning and
Literacy[5], James Paul Gee argues that while video games are not
usually attempting to impart knowledge to their players that is
useful in the real world, they are nevertheless extremely effective
teachers. His thesis is that the mechanics used in video games to
communicate abstract ideas to the player are entirely applicable to
"important knowledge," the stuff that is taught in school. If he's
right, it makes sense that the reverse would also be true: that
manipulatives, having been found to be an effective tool in the
education world, would also have applications in the video game
industry.

Namco's Taiko no Tetsujin is played by hitting the giant drums.


Of course, interesting input devices in arcade games are nothing
new; many arcade games have featured steering wheels, guns, and
other types of unique controllers for years. Music games in
particular have been big arcade hits ever since Konami's Dance
Dance Revolution and Beat Mania provided ways for casual
gamers to act out fantasies of being dance masters and killer DJs.
The incredible international success of Dance Dance Revolution,
which is played by standing on top of a special platform and
hitting arrows on the ground with your feet, paved the way for a
series of music-oriented arcade games like Guitar Freaks and the
aforementioned Taiko No Tetsujin.
A few years later, these games became available on home consoles
as well. In fact, in the last five years or so, a number of games
using non-standard control devices have shipped for home use.
Almost all of these are music games, and some of the most notable
titles include Nintendo's Donkey Konga (which uses a pair of
conga drums as its controller) and Harmonix's Guitar Hero, which
has enjoyed intense popularity in America. Harmonix's latest
game, Rock Band, features full-size guitar, drum, and microphone
controllers, and allows players and their friends to "play" popular
music[6]. Though the controller phenomenon has been restricted
almost exclusively to music games, Sony's Eye of Judgement for
the PlayStation 3 is a version of the card-based video game trend
designed for the home[7]. And though it features a controller that
is not specifically designed to emulate a real-world object, I think
that Nintendo's Wii console is one of the most dramatic recent
examples of manipulatives in video games; its motion-based
control system has proven popular with a wide audience of both
casual and hard-core gamers.
The rise of manipulatives in video games is keeping arcades alive
in Japan and attracting new players here in the States. As
manipulatives diversify--be it through card-based video games or
home systems like the Nintendo Wii--I think that we will continue
to see games designed to engage the player through different forms
of physical interaction. The attractiveness of this method of play,
combined with the prospect of lucrative peripheral and trading
card sales, makes this particular trend too powerful for the game
industry to ignore. For my part, I'm looking forward to the new
types of games that this trend will inevitably spawn; visiting Japan
just would not feel right if the country ever lost its crazy arcade
games.
[1] Information about Rapid River DX:
http://www.coinopexpress.com/products/machines/sport_games
/Rapid_River_(DX)_1486.html
For full effect, check out video of people playing this game:
http://youtube.com/watch?v=MENuOWLAiIE

[2] Photo credit and a bit of information about one of these games
can be found here:
http://www.geekologie.com/2007/09
/japanese_card_game_looks_aweso.php
Here's a picture of a sports-related card-video game:
http://www.cynicaltravel.com/blog/images/GAMES5.jpg

[3] Wikipedia has an extensive article on the Pokémon card game:
http://en.wikipedia.org/wiki/Pokémon_Trading_Card_Game

[4] Cohen, E. (1994). Designing Groupwork. 2nd ed. New York:
Teachers College Press. pp. 155-156.

[5] Gee, J. (2003). What Video Games Have to Teach Us About
Learning and Literacy. New York: Palgrave Macmillan. pp. 20-50.

[6] Another factor worth noting is that controllers and other
peripherals can be very lucrative for the game industry. Rock Band
retails for $170.00, almost three times the cost of a regular video
game.

[7] A video demonstration of The Eye of Judgement is available
on YouTube:
http://www.youtube.com/watch?v=Bhlq_GhYGsM

The Intuitive Artistry of Action Learning in
Organizations
Tom Cockburn
Action Learning and action research are forms of learning
by doing. The method of action research was originally
credited to Kurt Lewin (1948: 202-3), but action learning
is a development most commonly associated today with the
pioneering work of Reg Revans (1982, 1998). Some forms of
action research and learning are gaining a greater level of
academic acceptability (Burgess, 2006). Practice or taking action
in the real world necessarily involves interacting at some level
with other people. For many action researchers, such activities
ought to be part of a democratic, participative process (Greenwood
& Levin, 1998; Kemmis, 2001). Heron argues that you can only
study people if you deal with them as persons. That is, deal with
the ‘subjects’ of your research as intentional ‘actors’ in their own
life-drama and thus as meaning makers as well as ‘meaning-takers’
(Heron, 1996).
Others have since differentiated categories or modes of learning
called Participatory Action Research and Participatory Action
Learning which basically focus upon the empowering aspect of
this approach to learning for individuals and/or communities. Yet
others have defined categories called action science and action
inquiry. However, as these are all derivatives of the two original
modes in many ways, I will focus on Action Research and Action
Learning. I tend not to separate the two and rather regard them as
two sides of the same coin. The two concepts have often been
regarded as separate activities: one about finding new knowledge
or re-interpreting old knowledge, the other about application of
that knowledge. That is, groups of learner-researchers work on
their own real, practical problems with a focus on learning geared
for implementation in the learners' work or social life. The groups
involved in action research and action learning are called learning
sets.
Models of action learning place the learner and their peers in the
foreground, central to their own learning and development,
more than other forms of learning do. Another key element is the
immediate relevance of the knowledge gained and the process of
learning. The Revans formula for learning is L = P+Q, where
L=learning, P=Programmed or taught components and
Q=questioning (of the processes, basic premises and substantive
content of the knowledge held to be true). Thus, by utilising the
combined knowledge and skills of a small group of people
(typically 6-8 in a group), combined with skilled questioning (of
‘evidence’ presented about a topic, issue or problem), they are
enabled to re-interpret old concepts, revise constraints and
thereby produce fresh ideas - often without needing new
knowledge inputs. Not only that, but Revans recognized that to be
effective, all learning had to take place faster than the rate of
change in the environment, hence the formula L > C.
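Taken together, the two Revans relations described above can be set out side by side; this is only a restatement of the formulas in the text, with C standing for the rate of change in the environment:

```latex
% Revans's learning equation: learning combines programmed
% (taught) knowledge P with questioning insight Q
L = P + Q
% Effectiveness condition: learning must outpace the rate of
% change C in the environment
L > C
```

Read together, they suggest that when the environment changes faster than taught knowledge alone can keep up, the questioning term Q must supply the difference.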
Knowledge derived in this way can also be clustered into
categories or taxonomies based on whether it refers to specific
objects of knowledge development (e.g. subjects or disciplines),
particular processes, products or learning events (Von Krogh et al,
1998). Alternatively, knowledge can be categorised in terms of
how it is acquired or where it is located e.g. such as embodied
tacitly in behaviour patterns or in skills of individuals, or
collectively in teams or in organisational databases. The above
discussion has often tended to privilege explicit knowledge and
intentional action. That ‘rationalism’ -- of action learning
proponents rather than the method per se -- has been criticized by
some, who argue the methodology has greater but as yet
unrealized potential (Willmott, 1994, 1997).
However, there are various levels of what could be called
‘intentional’ action; some of it is explicit and well-articulated,
other actions are tacit and unarticulated. Some actions by people
are unconscious, intuitive and/or unknown, whilst others form part
of the complexity of the richly-interactive social systems people
inhabit. The latter forms of action may then truly be called
unintentional, although they do have a tangible and explicit impact
on events, processes and systems and their affordances, to use
Cook and Brown’s terminology (Cook and Brown, 2002).
Affordances refers to what a system and context allows or affords
the people involved in terms of both constraints and opportunities.
The awareness and surfacing of the complex, tacit or internalized
as well as the explicit elements of interaction requires deeper
forms of reflection and challenge, in order to confront habitual or
routine approaches to self-knowledge as much as to the technical
forms of knowledge embodied in action learning processes -- the
sorts of thinking-in-action that got the learning set to the position
where it had a problem to deal with in the first place. Some of
these unintentional processes are most obvious or
more noticeable when people are new to a group, team or
organization.
Action Learning as legitimate peripheral participation
The concept of communities of practice (Lave and Wenger, 19XX)
is relevant. In this model the process of induction of new members
into such groups is called legitimate peripherality. That is, the
‘newbie’ begins on the periphery but is increasingly socialized or
drawn into full membership of the community as they gain greater
awareness and relevant skills regarding the aims and tasks of the
community of practice. The latter process involves more than
learning the job as per the job description since a key component is
‘fitting in’ with the rest of the community in terms of goals,
expectations, values, orientations to tasks, use of tools and skill
sets. Moving from ‘outsider’ to ‘insider’ therefore requires the
acquisition of unspoken tacit knowledge as much as, if not more
than, explicit technical knowledge. The idea of legitimate
‘peripherality’ has some resonance with action learning insofar as
the peripheral participant is engaged in a process of action learning
in order to become part of a community, in much the same way as
the individuals in a learning set. Of course, it is possible, once an
‘insider’, to lose sight of the embedded, internalized and embodied
features of the community of practice. The taken-for-granted
assumptions of the ‘culture’ can then become a constraint on future
learning and the community can become a closed, sect-like group
adhering to often unspoken values.
To overcome this, Revans stressed the need for critical evaluation
through a process of questioning within action learning sets, as
indicated in the L = P + Q formula above. That approach steps
aside from a traditional view of inquiry as a search for one valid
‘truth’. All understanding is seen as socially constructed from a
particular viewpoint (Denzin, 1997), and all action is therefore in
pursuit of a specific, even if unspoken, valued purpose. We are
then no longer simply pursuing action concerned with getting
‘right’ answers. There are parallels here with Rorty’s view of irony
and its use. He describes the ‘ironist’ as someone who owns up to
the ‘contingency’ of their own language, identity and community,
combining strong commitment with a clear awareness of their own
potential ‘bias’ (Reason, 2003; Rorty, 1989: 61).
My own research, outlined briefly in an earlier issue (see Interface
#2), concerns the review and interpretive analysis of a form of
action learning in the tacit domain. The study particularly focused
on mapping the learning and emotional regimes that MBA teams
developed. These landscapes were framed within a commitment
index based on axes of trust and anxiety. A typology of each
team’s emotional regime and an embodied multi-spiral model
describing individual as well as collective learning development
emerged. The research showed how people learned a lot about
their own behavior in the throes of learning to collaboratively
inquire into the dynamic process of their group and reflect on their
own contribution to this activity. They were developing a capacity
for self-reflective learning and responding to the invitation to
relate to others in
more open, authentic and equal relationships. As previously
indicated, not everyone traveled to the end of that road. Some fell
by the wayside and reverted to archetypal forms of behavior
related to the emotional regimes described. Some, such as the
‘suspicious mercenaries’, often struggled with the unacknowledged
power differentials and instrumentality of their own and others’
actions on projects. The notion of tacit knowledge has been further
developed by Eraut and Hirsch (2007) in considering how
managers make decisions. They describe three main types of tacit
knowledge:

1. Situational understanding, which is based largely on
experience and remains mainly tacit.

2. Standard, routinised procedures, which have been developed
to allow people to cope with work whilst minimizing
information overload. Once the competence stage has been
achieved, some of these procedures become what he calls
ʻautomatisedʼ and are performed almost robotically. Thus,
even though they may have begun their lifecycle as explicit
procedural knowledge, they reach a stage of routinisation
which renders them ʻinvisibleʼ. A common example might be
learning to drive a car. Initially the learner is fully aware and
conscious of the steering, signaling and gear manipulations
required, but as they ʻabsorbʼ such actions into their behavior
patterns they can go onto ʻautomatic pilotʼ and sometimes
reach a destination without even recalling how they got there.

3. Intuitive decision making, whereby pattern recognition allied
to rapid responses to developing situations is based on the
tacit application of tacit rules. Here patterns of cues in a
situation are internalized and stimulate somewhat fixed
patterns of responses.

Schon, in an earlier work, seems to have extended this concept
(number 3, above) with his notion of ‘intuitive artistry’, involving
the “...kinds of knowing embedded in competent practice”
(Schon, 1995: 29). He did not define this idea formally but, from
the examples he gives, it seems to refer to the tacit knowledge, or
“competence”, that practitioners apply in situations of complexity
and uncertainty (Schon, 1995: 29). That is, the
ability of skilled practitioners to ‘sense’, through the use of an
instrument or tool, qualities of the materials being worked and
respond accordingly. This is close to what Cook and Brown
describe as ‘affordances’ but refers to the solo practitioner skills
rather than the team or group system level.
Such forms of tacit knowledge harbor the seeds of groupthink and
conformity, and need to be challenged through discontinuous
forms of learning intervention: a break from the ‘tried and trusted’
recipes that allows for some reframing and reconfiguration of the
issues, constraints and resources. The learning set can then begin
to build up ways to pursue change, with an agenda or action plan
carrying measurable outcomes within a reasonable timeframe.
How can I use this approach in my organisation?
One way is to build in challenges to how decisions are made as
well as developing a community of practice that is both action-
oriented and reflective. Such action learning-based communities
need a learning space that is not only supported by senior
management but one where challenge and questioning are
incorporated and valued. This is a form of capacity building, since
such skills can form what Nonaka (1995) has called a ‘hypertext’
organizational structure, in which learning is exchanged as the
‘trained’ members of the action learning sets, openly supported by
senior management, become evangelists across the
organization. There are a number of current examples of the use of
action learning in large and small organizations in the public,
not-for-profit sector as well as the private sector and in services as
well as production in many countries (Payne and Keep, 2003).
Whilst these do not wholly conform to all of the prescriptions
referred to above, they do show the way forward in many respects
and serve to indicate a growing trend in the corporate sector.
Some U.S. examples of organizations using action learning type
methods for organizational development and change are briefly
described below.
USA
General Electric (GE)
General Electric (GE) has a process it calls ‘Work-Outs’. The
company forms action learning teams to work on organizational
problems that are real, relevant, and require decisions. Typically,
these consist of two teams of five to seven people from diverse
businesses and functions within GE working together on the
specified problem. Time is built in for the team members to reflect
on the total learning experience and it is hoped that the diversity of
the teams will engender sufficient challenge when there is
well-developed facilitation.
Federal Aviation Administration (FAA)

The Federal Aviation Administration (FAA) used action learning
as part of a 2-year development plan for middle managers. FAA
wanted managers to learn from their practice whilst they worked
on some real world problems. Senior managers were sponsors for
the action learning teams and also identified critical problems,
issues and concerns that were not only vehicles for learning in the
teams but also served the organization’s needs. Three teams met
over a six-month period, using action learning to resolve problems
on these projects. At the end of that time they presented their
findings and results to their senior managers, who were amazed at
the gains made.
Federal Deposit Insurance Corporation (FDIC)
The Federal Deposit Insurance Corporation (FDIC) found that
managers were prevented from performing optimally by the
myriad organizational issues and problems they faced. They also
realized that training alone would not solve the problem, so they
decided to use action learning as the backbone of core training for
their executives and managers. Action learning thus became a key
organizational problem-solving and decision-making method for
the FDIC.
It is worth noting, however, that the emergent emotional
landscapes of these teams are seldom referred to in publicly
available documents; my own research suggests that the emotional
regimes that spontaneously emerge require different approaches
and, in some dysfunctional cases, ‘therapies’.
Concluding thoughts
Action learning can be used to good effect in organizational
development and change, but its proponents have aspirations for it
to be more than simply another tool of management. There is
growing interest in this method of self-managed learning in the
various forms it takes, such as action research, action science,
action inquiry and participatory action research. The interest is not
confined to explicit or to tacit learning, or to particular academic
or industry sectors, and its span is global. Used with awareness of
its potential as well as its shortcomings, and with attention to the
processes involved, it can achieve real and sustainable change in
terms of personal and professional development, organizations and
systems and, importantly, knowledge gained.
References:
Burgess, J. (2006). Participatory action research: First-person
perspectives of a graduate student. Action Research, 4 (4):
419-437.
Cook, S.D.N. and Brown, J.S. (2002). Bridging Epistemologies:
The Generative Dance between Organisational Knowledge and
Organisational Knowing. In S. Little, P. Quintas and T. Ray
(Eds.), Managing Knowledge. Thousand Oaks: Sage.
Denzin, N. (1997). Interpretive Ethnography: Ethnographic
Practices for the 21st Century. Thousand Oaks, CA: Sage.
Eraut, M. and Hirsch, W. (2007). The Significance of Workplace
Learning for Individuals, Groups and Organisations. Cardiff:
SKOPE (www.skope.ox.ac.uk).
Greenwood, D. and Levin, M. (1998). Introduction to Action
Research. Thousand Oaks, CA: Sage.
Heron, J. (1996). Overview of Cooperative Inquiry. In his
Cooperative Inquiry: Research into the Human Condition.
London: Sage.
Kemmis, S. (2001). Exploring the relevance of critical theory for
action research: Emancipatory action research in the footsteps of
Jurgen Habermas. In P. Reason and H. Bradbury (Eds.), Handbook
of Action Research: Participative Inquiry and Practice (pp.
91-102). London: Sage.
Lewin, K. (1946). Action research and minority problems. Journal
of Social Issues, 2: 34-46.
Nonaka, I. (1995). The Knowledge-Creating Company. Paper
delivered at the Theseus Institute conference on The Emergent
Corporation, Sophia Antipolis, June 7-10, 1995.
Payne, J. and Keep, E. (2003). What Can the UK Learn from the
Norwegian and Finnish Experience of Attempts at Work
Re-organisation? SKOPE Research Paper No. 41. Coventry:
University of Warwick, SKOPE.
Reason, P. (2003). Doing Co-operative Inquiry. In J. Smith (Ed.),
Qualitative Psychology: A Practical Guide to Methods. London:
Sage.
Revans, R.W. (1982). The Origins and Growth of Action Learning.
Bromley, UK: Chartwell Bratt.
Revans, R. (1998). ABC of Action Learning. London: Lemos and
Crane.
Rorty, R.M. (1989). Contingency, Irony, and Solidarity.
Cambridge: Cambridge University Press.
Willmott, H. (1994). Management Education: Provocations to a
Debate. Management Learning, 25 (1): 105-36.
Willmott, H. (1997). Critical Management Learning. In J.
Burgoyne and M. Reynolds (Eds.), Management Learning.
London: Sage.

Do You Understand Representations,
Warranties and Boilerplate Clauses?
by Leonard D. DuBoff, copyright 2008
Most well drafted agreements contain provisions in which
one or more of the parties are required to confirm certain
facts. The provisions dealing with these circumstances are usually
called "representations and warranties."
A typical representation requires the party making that statement
to ensure that the event, fact or circumstance has or has not
occurred. For example, the seller of a commercial property may be
asked to confirm that there has never been a release of hazardous
materials on the property, or an author may be asked to confirm
that her book contains no defamatory language. Even if the seller
was unaware that the prior owner had leaked hazardous materials
on the property and the author was unaware that facts she obtained
from her source were false, the seller and the author will each still
be liable for breach of their respective agreements.
Experienced business lawyers recognize this problem and attempt
to neutralize representations made by their clients by stating that
they are "to the best knowledge" of the client. This modification
means that the client would be in breach of the agreement only if
he knew that the statement was inaccurate. On the other hand, the
other party will certainly prefer that there be no "best of
knowledge" qualification since, with such a clause, it will bear the
risk of any problem the representing party did not know about.
That is, the buyer in the example above would have no right to
reimbursement from the seller for any necessary environmental
clean-up if the seller made the representation to the best of its
knowledge and was not aware of the prior leak. Similarly, the
publisher would have no recourse against the author in the above
example if the author was unaware of the factual errors. Another
problem with such qualifiers is that whether or not the party
making the representations and warranties knew of the problem
can be difficult to establish.
The give and take of negotiating for a "best of knowledge"
modifier often seems like nitpicking to a client, but the
consequence of providing absolute representations and warranties
can be catastrophic. Conversely, if you are the party for whose
benefit the representations and warranties are being made, you
may find you must bear losses you thought would be the other
party's responsibility.
Another issue is that a minor violation neither party really cares
about may constitute a breach. For example, a representation and
warranty that you have complied with all laws means that even an
innocent violation not relevant to the contract, such as having a
parking meter run out while in a restaurant having lunch, would be
a breach of warranty. This problem could be avoided by limiting
the representation to, for instance, your "material" compliance
with "all laws pertaining to" the subject of the contract.
Clients also tend to ignore the so-called boilerplate provisions of
contracts but then realize how important these provisions are when
a problem arises.
Boilerplate provisions often include clauses dealing with the
identification of the jurisdiction where a dispute is to be
adjudicated. By agreeing to have all problems with respect to the
arrangement resolved in a location other than yours, you will be
forced to engage in long-distance litigation. This is often very
costly, and if it is another state or country, you will be required to
retain an attorney licensed to practice in that jurisdiction to assist
with the dispute. The extra cost, time and difficulty in handling a
dispute in a far-off place often results in parties being willing to
ignore many problems unless they are so significant that engaging
in the more costly and stressful far-away lawsuit is warranted. It
also means that the party who has the benefit of requiring disputes
to be adjudicated in that party's home court may be more
aggressive and more demanding.
Boilerplate clauses often deal with identification of the
jurisdiction's law that will be applied to interpret the transaction as
well. This "choice of law" provision is also significant. Laws differ
from state to state and from country to country. A transaction that
may be legitimate in one jurisdiction may be flawed in another.
Attorneys recognize this fact and commonly demand that the law
of the jurisdiction(s) in which they are licensed should apply to the
transaction at hand. Problems arise when lawyers in different
states or countries negotiate with each other and each wishes to
have the choice of law reflect their own home jurisdiction.
Another clause that is typically found in boilerplate provisions
deals with the payment of attorneys' fees. The American Rule is
that each party is responsible for the party's own attorneys' fees,
regardless of who is successful in the litigation. This rule has been
modified by legislation dealing with issues of public interest, such
as consumer protection, civil rights and intellectual property in
some circumstances. In all other cases, each party is to bear its
own legal fees. The law does provide, though, that the parties may
agree to modify this general rule by agreeing that if a dispute
arises and litigation results, the prevailing party in that litigation
will be entitled to recover, in addition to all other amounts
awarded, the reasonable attorneys' fees incurred in adjudicating the
matter.
Most agreements are the product of negotiation, which may result
in some confusion. For this reason, a typical boilerplate clause will
state that the four corners of the document contain the entire
arrangement between the parties and that there are no agreements
other than those set forth in the written document. This type of
"merger" clause is intended to prevent one of the parties from later
claiming that there was a side deal modifying the arrangement.
Use of a clause such as this means that the parties must be sure
that all of the terms they negotiated and are relying on find their
way into the written document. If one is omitted and a merger
clause is used, the party relying on the omitted provision will
likely be out of luck when attempting to claim that the term was
inadvertently left out even though the contract was signed anyway.
There is a dispute among lawyers and commentators regarding the
benefits of arbitration over litigation. Those who favor arbitration,
that is, the use of an arbitrator rather than a judge to resolve a
dispute, may include an arbitration clause in an agreement.
Arbitrators are professionals, who may or may not be attorneys,
selected by all parties to a dispute. The arbitrator acts as
both judge and jury in resolving the case, and the decision of an
arbitrator is typically binding. This means that there is generally
no right of appeal unless it can be shown that the arbitrator was
arbitrary and capricious. Arbitrators are paid by the parties to the
arbitration, each paying a pro rata share of the arbitrator's fees. It
should be noted that judges are paid by the municipality out of tax
revenue, and, thus, in some respect, arbitration may be more
expensive than litigation.
Mediation should be distinguished from arbitration, and some
agreements contain a boilerplate clause requiring that all disputes
first be mediated before either party may proceed with either
arbitration or litigation. A mediator is an impartial referee who
merely assists the parties in attempting to work out a resolution of
their dispute. A mediator is not a decision maker. Rather, a
mediator acts as a catalyst in aiding the parties to work out an
arrangement to which they can agree. The parties
themselves must reach an agreement in mediation.
There are often other boilerplate clauses in well drafted
agreements that should be reviewed and understood before signing
an agreement. If you do not understand what a clause means or the
effect it will have on the arrangement, then you should consult
with your attorney before signing the contract.
While most transactions are completed without serious problems,
occasionally something goes wrong, and then the boilerplate
proves to be either very beneficial or catastrophic. It is only by
understanding the importance of these clauses that you can
understand the consequences that may result when deals go awry.

Something's in the Air: Adobe's New
Software Platform
By Mike Geraci
On February 25, 2008, the software giant Adobe
Systems released version 1.0 of the Adobe Integrated
Runtime, a.k.a. "AIR," significantly reinforcing its
position as a purveyor of rich media on the Web and desktop. AIR
delivers on Adobe's strong multimedia technology
infrastructure by offering a free software development platform
that connects locally installed software with the power of Rich
Internet Applications (or RIAs). Seems like a mouthful of tech
buzzwords, doesn't it? What it means to the average computer user
is that Adobe has produced a system for the creation of software
that is poised to bring a new generation of specialized desktop
applications that are Internet savvy to our personal computers. Oh
yeah, and it's free.
In the halcyon days of software development, if I had an idea for a
piece of software, like a tool that lets you compose text documents
(let's call it a "word processor"), I would have to write all the code
in a particular programming language and then compile it into an
executable application. I would need to do this for each computing
platform that I wanted to deliver it to: Windows, Mac OS, Linux,
etc. Users of my application would have to purchase or download
the right installer for their computer and then remember to check
my Web site every so often for updates, patches, and new versions.
In recent years, we've seen tremendous growth in the number and
sophistication of applications like my word processor that run in
the Web browser such as the ever-expanding offerings from
Google. Adobe AIR combines the best of both these worlds, and it
does it with industry-standard tools and technologies which allow
non-programmers (well, at least technically inclined
non-programmers) to build and deploy applications to all major
computing platforms.
You may already have what it takes to build an application with
AIR. You see, AIR applications are created from one or more of the
following technologies: HTML, JavaScript, XML, AJAX, Flash,
and Flex. So if you can create a Web page or a Flash movie, you
can deliver it via AIR. Your end users can download it and use it
on their computers (Mac, Windows, Linux) as they would any
traditional piece of software. Doing so simply requires that they
first download and install the free Adobe AIR runtime, a "virtual
machine" application that runs behind the scenes. Just as Apple's
QuickTime and Adobe's Acrobat Reader give us access to a variety
of media types, AIR provides an abstraction layer where AIR
applications can interact with the operating system and other
applications installed on the local system such as a Web browser.
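To make this concrete, an HTML-based AIR application is essentially a folder of ordinary Web files plus an XML application descriptor that tells the runtime what to load. The sketch below assumes the AIR 1.0 descriptor namespace; the id, filename, and window values are invented for illustration rather than taken from any real application.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical application descriptor for an HTML/JavaScript AIR app.
     The id, filename and window settings are illustrative values. -->
<application xmlns="http://ns.adobe.com/air/application/1.0">
  <id>com.example.HelloAir</id>
  <filename>HelloAir</filename>
  <name>Hello AIR</name>
  <version>1.0</version>
  <initialWindow>
    <!-- index.html is an ordinary Web page packaged with the app -->
    <content>index.html</content>
    <visible>true</visible>
    <width>400</width>
    <height>300</height>
  </initialWindow>
</application>
```

Packaged and signed with the developer tool in Adobe's AIR SDK, such a folder becomes a single .air file that installs the same way on Mac, Windows, and Linux.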
What makes AIR special is that it is truly cross-platform. An
application need be created just once and it can be delivered to all
major platforms. Even the installer, which physically places the
application and all its resources on the user's system, is platform
agnostic. On top of this, AIR applications work like other
applications: they are persistent on the system (unlike Web apps
which go away and take their data with them when the browser is
closed) and they use the system's resources such as the task bar or
application tray in Windows or the Dock, Dashboard, or Menu bar
on the Mac. AIR apps use the local file system, meaning that they
store and access information on the local system, which makes for
a more responsive experience and much more storage capacity
than is possible with Web apps.
Things get interesting when we consider the "Internet-enabled"
aspect of the AIR platform. Once they are installed, AIR
applications can operate using standard Internet protocols, which
allows them to send and receive data from remote hosts, download
updates, and collaborate with other users without having to rely
upon other applications. A good example of this comes from
Adobe, who naturally want to highlight the power of this new
system. According to the AIR Web site, Adobe used AIR to create
a corporate contacts application that had every employee's contact
information, calendar, and office location in it. The application and
all of its data were installed and stored locally on employee
computers. So online or off, at work or on the road, employees had
access to the corporate contact database. However, once connected
to the Adobe network, the application automatically synchronized
the user's data with a master database and updated itself with new
features (or bug fixes, as the case may be).
As of this writing, there are over 125 AIR applications available
online, the majority of which are listed at the AIR apps wiki.
While the possibilities for AIR apps seem limitless, it appears that
a large percentage of the current offerings are companion products
to popular Web services. For example, followers of the social news
site Digg can now download a number of AIR applications that
allow them to keep track of stories, diggs, and their Digg friends
without having a Web browser running. Similarly, addicts... I mean
users... of other popular social Web networks such as Flickr,
Facebook, World of Warcraft, and all major Chat services can use
AIR applications to feed their... ummm... social pastimes.
On the more utilitarian side, there are AIR applications that let
Mac users search the massive AppleCare system, including the
knowledge base and discussion forums, or create and manage eBay
listings; and there is even a beta version of a project management
application, Agile Agenda, that provides real-time project tracking
and scheduling and lets you store your project data online so it can
be accessed from anywhere.
As an educator, I am excited to see that at least one of the online
course management systems, Ucompass, is planning on building
an AIR application that will allow users of their system to manage
their online courses in a more efficient and streamlined manner.
One can only assume that this will also happen at the student level,
thereby providing students with easy access to course information
and interactions.
It's fair to note that Adobe is not the first company to offer a
hybrid software environment like this. Google Gears, Microsoft
Silverlight, and Mozilla Prism are all similar efforts to bring
Internet-aware applications to the desktop. What makes Adobe
AIR a bit more compelling is Adobe's dominance in the rich media
landscape. They have the dominant creation tools in Dreamweaver,
Flash and Flex, and all of these applications are now AIR-enabled
via a free downloadable extension. Adobe has the ability
to combine their proprietary technologies like PDF (documents),
SWF (multimedia), and FLV (streaming audio and video) with
current open-source standards like AJAX, SQLite, and the Webkit
HTML engine. In short, AIR applications should "just work" and
will be buttressed by the immense user communities built around
today's most popular standards.
If you're the security-aware type of person, you're probably
thinking that AIR seems like yet another gaping access point for
malicious code and malware. It's true that any locally installed
application that has access to your file system presents a security
challenge, but Adobe is taking security seriously by having all
AIR applications "digitally signed" using industry-standard
certificates, and it has created an AIR sandbox model. All the
gritty security details are beyond the scope of this article, but for
now it's safe to assume that AIR applications will not simply be
released into the wild to run amok on users' computers.
In conclusion, I believe that Adobe's AIR platform is a harbinger
of what is to come; it's the next generation of software
development working in concert with Web technologies that, even
at this early stage, shows lots of promise for developers and users
alike.

For more information about Adobe AIR:

The Adobe AIR Web site
The Adobe AIR FAQ
Adobe AIR on Wikipedia
The AIR security model
Adobe AIR first look at Ars Technica
The AIR apps Wiki

A Quick Introduction to Thin Clients
By Ben Elliot

Introduction
A thin client is a computer which acts as a remote keyboard,
video, and mouse terminal for another machine, the terminal server,
which performs all computation and stores all data. Multiple thin
clients can access the server simultaneously. Together, the clients and
server make up the thin client computing model. Key differences
between this model and traditional computing are that a thin client
model can better utilize resources, improve the centralization and
integrity of data, and reduce total hardware costs.

Startup Process
A typical thin client only needs enough memory to power its display and
a network card to communicate with the server. Most computers
manufactured since 1998 are Preboot Execution Environment (PXE)
capable, meaning that they can use their network card as a boot media,
similar to starting up from a CD or hard drive. The server is set up to
recognize the PXE startup signal and will send enough software to
allow the computer to act as a thin client. Once fully booted, users can
log into the thin client, and can use the Operating System and
Applications installed on the server.

[Image: thin client network wiring diagram. Credit:
https://help.ubuntu.com/community/UbuntuLTSP/LTSPWiring]
Thin clients will usually be set up on their own private network,
separate from other machines. One network card of the server will be on
the thin client network, and the other network card will connect to the
regular network as usual. It is important that the two network cards are
not switched; the card on the private network will usually have a
DHCP service running, which would interfere with the regular network,
which already has its own DHCP service.
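As a concrete sketch, the DHCP and TFTP services on the private side can be provided by dnsmasq (LTSP installations typically generate an equivalent configuration automatically). In the fragment below, the interface name eth1, the address range, and the paths are illustrative assumptions, not values from any particular distribution.

```
# Hypothetical dnsmasq.conf fragment for the private thin-client side.
# eth1, the address range and the paths are illustrative assumptions.
interface=eth1                 # the card on the thin-client network only
dhcp-range=192.168.0.50,192.168.0.250,12h
dhcp-boot=pxelinux.0           # boot file handed to PXE clients
enable-tftp                    # serve the boot image over TFTP
tftp-root=/var/lib/tftpboot    # where the client boot files live
```

Binding dnsmasq to the private interface is what keeps its DHCP replies from leaking onto, and interfering with, the regular network.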

Resource Utilization
All of the computing power, memory, and hard drive space is on the
server side. These resources will be shared among all thin clients. These
resources are easier to allocate due to their centralized location.
On a thin client server, programs only need to be loaded into memory
once, regardless of the number of people using the program at the time.
A web browser in operation on 16 different workstations will have a
total memory footprint of 16 times the ordinary amount since it was
loaded in 16 different locations. If this same web browser were on a thin
client server, the program itself would only be in memory once, only
requiring additional space to hold user specific data.
Similarly, other resources such as the CPU can be more fully utilized
when all of the power is in a central location.
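The memory argument above can be made concrete with a back-of-the-envelope calculation. All of the megabyte figures below are invented for illustration; they are not measurements of any real browser.

```python
# Back-of-the-envelope memory comparison for 16 users of one program.
# All MB figures are illustrative assumptions, not measurements.
PROGRAM_MB = 80    # shared program image, loaded once on the server
PER_USER_MB = 25   # per-session state (open pages, caches, history)
USERS = 16

# Traditional lab: every workstation loads its own full copy.
standalone_total = USERS * (PROGRAM_MB + PER_USER_MB)

# Thin client server: one program image plus per-user state.
thin_client_total = PROGRAM_MB + USERS * PER_USER_MB
```

With these assumed figures the traditional lab needs 1680 MB in total, while the thin client server needs only 480 MB, because the program image is counted once rather than sixteen times.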

Data Centralization
The server will have direct access to all hard drives in the system.
Instead of having data spread among a number of workstations like in
the traditional computing model, all user data will be stored on the
server. This makes the process of finding and making backup copies of
the data much simpler, improving data integrity. If any single
workstation's hard drive fails in a traditional computer lab, data loss is
much more likely since workstations are not backed up as often as
servers. A server can survive such an incident through redundant hard
drives (RAID arrays) and automated backups, both of which are easier
to implement in one location than in many.
If a hardware failure occurs on a thin client, no saved data is lost; the
client can easily be replaced with another machine set up for network
booting. In the meantime, the user can log into another thin client and
be able to access the same environment they had on the previous
machine.

Hardware Costs
A good server can easily cost about six times as much as a typical
workstation, but that cost includes multiple CPUs, along with more
RAM and hard drive space than any workstation. However, with these
resources, a
thin client server can serve more clients for less total cost than buying
each workstation separately. This is made possible due to the economy
of scale provided by resource centralization.
The thin clients, due to the low hardware requirements of terminal
software, can be machines near the end of their life cycle that are unable
to comfortably run modern software. With no hard drive required for
these operations, it is not necessary to trust aging hard drives with
important data. Ideally, suitable thin clients will already be on hand,
making the cost of redeployment very low. The largest expense for these
workstations will probably be a new monitor if needed.
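As a back-of-the-envelope check on the economy-of-scale claim, suppose a server costs six times a typical workstation and each repurposed thin client needs only a new monitor. All dollar figures here are hypothetical, chosen only to illustrate the per-seat comparison:

```python
# Hypothetical per-seat cost: 16 new workstations vs. one server
# plus 16 repurposed thin clients. Dollar figures are illustrative
# assumptions, not quotes.
WORKSTATION = 500
SERVER = 6 * WORKSTATION   # "about six times a typical workstation"
MONITOR = 100              # likely the only per-client expense
SEATS = 16

fat_cost = SEATS * WORKSTATION        # 16 * 500 = 8000
thin_cost = SERVER + SEATS * MONITOR  # 3000 + 1600 = 4600

print(f"per seat, fat : ${fat_cost / SEATS:.2f}")   # $500.00
print(f"per seat, thin: ${thin_cost / SEATS:.2f}")  # $287.50
```

The crossover point depends entirely on the assumed prices; the sketch only shows why the per-seat cost falls as more seats share one server.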

Further Information
Here are several links for further basic information about thin client
networks:
General introduction: https://help.ubuntu.com/community/EdubuntuDocumentation/EdubuntuCookbook/ThinClient
Background information: https://help.ubuntu.com/community/EdubuntuDocumentation/EdubuntuCookbook/Background
A wiring guide: https://help.ubuntu.com/community/UbuntuLTSP/LTSPWiring

The Big Switch
Review by Jeffrey Barlow <barlowj@pacificu.edu>
about
Carr, Nicholas. The Big Switch. New York: W.W. Norton and
Company, 2008.
Nicholas Carr, the author of The Big Switch, wrote a controversial
essay, “IT Doesn’t Matter,” in the Harvard Business Review in May
2003,1 then extended the analysis in the 2004 book Does IT
Matter?2 His argument, reduced very simplistically, was that initially IT
(Information Technology) did matter a great deal, because early business
adopters achieved numerous competitive advantages in their operations.
But as it became more widespread, IT simply became a necessity;
everybody had to have it. Not having it was a spectacular, even fatal
competitive disadvantage; but having it was nothing special.
This argument, of course, was not particularly welcome to many .com
firms, industry savants, and manufacturers, simply because it reduced
what they preferred to think of as a magic bullet, worth any price in the
war against competitors, to the level of a pencil. “Nice tool, but hey,
everybody has one.”
Carr’s argument also kicked off a very heated round of debate which
much increased our understanding of the respects in which IT did, and
did not, matter.3 Any new book by Carr, then, is well worth examining.
He has a very strong track record as an analyst and his ideas are both
provocative and worthwhile.
The Big Switch promises to have considerably less impact than his
earlier works; it is probably impossible, after all, to antagonize a major
industry each time out. But it is, nonetheless, well worth reading. The
audience should have a good general interest in technology and be
comfortable with what may occasionally seem rather large conclusions
drawn from small bits of evidence. But those with a wide view of the
Internet and its social impacts will enjoy the book.
Those of us who have somehow survived from the initial impact of the
Internet to at least this writing may be somewhat tired of analogies
dealing with it: “Its impact is like that of the printing press! No, it is less
important than the air conditioner in enabling human civilization!” We
begin to wish the fabled elephant had simply stamped the blind Indian
wise men into jungle jam the moment they laid a hand on it.
Carr builds his new book around such an analogy: the invention,
development, and spread of electricity as a power source and as an
industry. But no matter with how much caution we may approach such a
reductionist argument, Carr makes it interesting, provocative, and,
ultimately, instructive.
Carr’s position is that computing will inevitably make a transition
similar to that of the earlier means of distributing electricity. Electricity
moved from initially incredibly expensive, highly localized, and vastly
complex devices served by a select priesthood, to a ubiquitous service.
That is, power for production was initially, of course, human generated,
then produced by the energy of animals, moving water, etc. All of these had
serious problems, notably the distribution of energy from the point at
which it was generated. Then electricity became a source that could be
easily distributed through the entire manufacturing enterprise of the
entrepreneur capable of buying or building a generator.
This change in distribution worked enormous changes. 19th century
factories driven by water power were dangerous, noisy, inefficient plants
which produced power at a water source and distributed it through a
complex system of axles, belts, and pulleys. But ultimately the usable
power was proportional to proximity to the water source, because so
much was lost in driving the transmission system.
Then voila! Thomas Edison appeared; and, arguably as importantly,
Samuel Insull, who arrived in New York in 1881 from England. Insull
saw that electricity, rather than being generated by every factory needing
power, could better be distributed from a common source, such as the
power plants of Consolidated Edison. Other men had to make equally
important contributions, each described interestingly and succinctly by
Carr, and soon electricity was a service. With electricity we enter
General Electric’s pavilion in Disney’s Tomorrow Land where
everything is clean, quiet, affordable, and, oh yes, American!
To Carr, computing has gone through analogous stages, from a time
when it was expensive investments in Wang, DEC, or Apollo systems
that enabled some to begin crunching data, paying employees, storing
records, etc, locally at their business. Then others began to see the
possibilities and farsighted entrepreneurs began to contract to provide
services for others. But soon, following Intel’s 1971 development of
the microprocessor, the approach switched again. A thousand PCs
bloomed in every big
firm and data processing returned to the home office.
Carr makes this extended comparison both interesting and illuminating.
What we learn is that, however different the units of comparison---
electrical generators and PCs--- may seem, in fact there is a sort of
common logical progression in terms of the social and economic
functions each industry served.
There are even apt comparisons in the development of each
technology--- winners and losers, like Thomas Edison on the one hand,
and Nikola Tesla on the other, who may ultimately have had the better
technology, but did not bring it to market as successfully. In computing,
many might argue, we have Microsoft vs. Apple, a virtually ubiquitous
OS which, however clunky and vulnerable, quickly outpaced the more
elegant but idiosyncratic Mac OS.
However, Carr argues, computing is capable of achieving, unlike
electrical generation, an important additional transition: from
everywhere to, in a sense, nowhere. Google is now providing us with the
distributed computing power, the off-site storage, and, via its core
business of searching, the means of integrating all data everywhere. We
now enter, metaphorically at least, the stratosphere, with computing
visualized as a cloud made up of users and producers, all held together
by the Internet itself.
While Carr goes to a great deal of trouble to construct his very detailed
comparative history of technology, the really exciting part of the book is
perhaps the many points at which he ties technology to its impact on our
social practice, even upon our very human psychology.
We learn, for example, that the more we share information with people
of similar opinions, the more extreme our common opinions grow.4 The
blogosphere is not truly heterogeneous; few political blogs are anything
more than a forum for like-minded people and very few (about 9% in
one study) cross-link to blogs on the other end of the left-right political
spectrum.5 As we become more connected, paradoxically, we also in
important ways become more divided.
In Carr’s final chapter, “iGod,” he perhaps takes himself a bit too
seriously as a savant and spins off into the clouds, but the entire journey
has been so interesting and the analysis so useful that we can forgive him
even that.
As a whole, the book is somewhat discursive. There are points at which
I, at least, wearied of the extended analogy between electricity and
computing, but the information that we encounter as we proceed through
the work is invariably provocative. Carr’s vision is a compelling one,
and once again he reduces the importance of computing to a scale which,
if it seems less revolutionary and more analogous to earlier changes in
production, at the last also promises to be far more transformative.
After all, as Carr points out, within several decades of the development of
urban power plants, still less than ten percent of power was produced in
that manner. Most was still produced and consumed locally. We are in
the very early days of fully realizing the impact of the Internet.
1 The essay can be downloaded on a paid basis at:
http://harvardbusinessonline.hbsp.harvard.edu/b01/en/common
/item_detail.jhtml;jsessionid=ENB34YRB5QWSGAKRGWDSELQBKE0YIISW?id=R0305B
2 See Carr’s home page together with book reviews at:
http://www.nicholasgcarr.com/doesitmatter.html Amazon.com, in its
usual dazzling display that IT can still matter a great deal, makes large
portions of it available at: http://books.google.com
/books?id=wrROE6SLJFEC&dq=carr+does+it+matter&pg=PP1&
ots=hvTj3027am&sig=VKGYfZHpK3kcC_-kEUySSGlA6L8&hl=en&
prev=http://www.google.com/search?q=Carr+Does+IT+matter&ie=utf-
8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&sa=X&
oi=print&ct=title&cad=one-book-with-thumbnail
3 See hundreds of thousands of sites at: http://www.google.com
/search?q=Carr+Does+IT+matter&ie=utf-8&oe=utf-8&aq=t&
rls=org.mozilla:en-US:official&client=firefox-a most of which have an
opinion on the issue.
4 Carr, p. 165.
5 Carr, p. 163.

Wired Shut: Copyright and the Shape of
Digital Culture
Review by Jeffrey Barlow <barlowj@pacificu.edu>
about
Gillespie, Tarleton. Wired Shut: Copyright and the Shape
of Digital Culture. Cambridge, Mass.: The MIT Press,
2007.
At first glance, this work will inevitably be taken as a highly
technical discussion of what may be simultaneously both the most
critical and the most boring issue relating to the impact of the
Internet: copyright law. However, Dr. Gillespie, an Assistant
Professor of Communications at Cornell University,1 utilizes the
topic to markedly enhance the reader’s understanding of a wide
variety of topics relating to culture in general, and to digital culture
in particular.
The work is also a very welcome one in that the author
convincingly shows that the current debate over digital rights,
particularly as reflected in long-running discussions of music and
piracy, has been very ably shaped and controlled by but one side in
the debate, at least at the public level.2 After reading Wired Shut,
any reader is going to be a much wiser consumer of information
bearing upon public and legal debates over copyright law, and
particularly over the technical fixes, such as digital rights
management software and hardware so often said to be the
solution to the “problem of piracy.”
Wired Shut lets us see very critical issues from a much broader
perspective, though the author’s biases are quite clear. He is very
much a netizen who fears that the potentially extraordinary utility
of the Internet for human development may well be choked off by
the desires for profit of a relatively few interested parties. From
this perspective, the group advocating stringent copyright law and
particularly those who seek to “wire shut” the technology by
incorporating various hard-wired schemes to limit copying, are
infringing upon hard-won civil liberties going back before the U.S.
constitution.
The author reminds us that the purpose of U.S. copyright law at
the national founding was not only to protect the rights of authors
and those who disseminate their work. The arguably more
important purpose was to “promote the progress of science and
useful arts...”3 Gillespie argues persuasively that the technological
fix may well retard both. If tinkerers are now to be punished for
opening up their computer, or their CD player literally informs the
corporation that produced it that it has been altered, then more is
lost, Gillespie believes, than when a CD is illicitly copied for
personal use.
Underlying the copyright protection of cultural expressions, in the
author’s view, is an inexact definition of such expressions as
“property.” There are, Gillespie shows, many critical differences
between the two.
Gillespie does not argue that “information (or music) wants to be
free.” He rather argues that we are in a new era, and that we must
be careful as to what legal frameworks we construct to protect
rights, lest we not only limit creativity, but in the long run, also
hobble our technological development because sheer profit motive
comes to be the dominant factor in creating culture.
The Internet is, the author believes, very different from previous
broadcast models of disseminating popular culture. Rather than a
simple one-to-many model, peer-to-peer applications make the
consumer also the distributor, creating a “cultural politics of
decentralization.” The purpose of so many attempts at copyright
protection, lawsuits, attempts to legislate permissible and
impermissible technology, are precisely to transform the Internet
into a broadcast medium, or a “client-server relationship” in which
consumers are allowed access to material under highly centralized
and carefully controlled conditions.4
The work is couched largely in postmodernist language and
interpretation, but happily this analysis is extremely accessible to
those of us in the pre-postmodern herd, because of its very tight
organization. The author continually explains where, in his view,
we are going, and why. He also gives us a very pithy summary at
the end of each chapter which simultaneously extends the analysis,
and reminds us where we have been. As he segues into his next
topic, we are then fully prepared to confront it however complex it
may be.
I am relieved in part to read this book because it somewhat lessens
my own guilt at my dramatically piratical past: see “China and
the Internet, Part 1: My Life as a Pirate,” http://bcis.pacificu.edu
/journal/2003/09/edit.php. While I cannot necessarily view myself
as a civil libertarian, since I facilitated the copying of rock music in my
callow youth while living in Taiwan, I now see that the issue is far
more complicated than simple piracy. I now think of myself as an
unauthorized distributor at that time.
Gillespie’s central argument is that the technological fix to
copying, the legal restriction of what sorts of machines can be built
to play or create digital materials, is both “strategic” and
“paradigmatic.” Such fixes are deliberately intended to change us from
creators and users of culture into consumers of culture.5 These
practices try to draw a sharp distinction between producers and
consumers, in a cultural world where creation has always had
elements of both consumption and production in it.
The problem, in Gillespie’s view, is that one side has all the guns.
Corporations have increasing control over the law. This is
particularly true in areas where digital technology is the field of
battle. The corporations, purchasing influence via our badly
crippled election system, in effect buy access, which permits them
to write the laws.
On the other side are corporations that refuse to cooperate in
restriction schemes, quite often because such schemes are
originally intended to increase the market share of a few big
players at the expense of potentially disruptive smaller firms. And
of course, also engaged are the endlessly creative hackers and
tinkerers who produce an unending stream of applications such as
Gnutella, Napster, etc. Too, there are courts and judges who can
understand the link between technology and the creation of
culture, and periodically intervene to protect that link.6 Some
congressmen as well, in the author’s view, understand the stakes.
It is difficult to say where the balance in this contest rests right
now. Hollywood is extremely reluctant to pursue its Holy
Grail---online download of first-run films---because in the absence
of the protections they want, it could be ultimately destructive of
their valuable productions. Music companies, however, are
seemingly much more interested in a subscription model or
pay-per-download on the iTunes model now that the sale of CDs
has fallen so rapidly.
These are all very complex issues. The best explanation, however,
that I have seen of both the legal and technological histories of the
problem is Wired Shut. All consumers and producers of digital
materials should read it.

1 See his personal WWW page at: http://www.tarletongillespie.org/ His
CV can be found at: http://www.tarletongillespie.org/cv.html
2 At http://www.wiredshut.org/ch1.html a PDF file of the entire
first chapter of the work can be downloaded. Gillespie is to be
commended for making so much of the work available online.
3 P. 22.
4 See pp. 40-47.
5 See discussion at p. 18 in PDF linked above.
6 A useful work in this regard is Abramson, Bruce D. The Secret
Circuit. The Little-Known Court Where the Rules of the
Information Age Unfold. New York: Rowman & Littlefield
Publishers, Inc., 2007. Reviewed in Interface at:
http://bcis.pacificu.edu/journal/2008/01/abramson.php

Confucius says: Privacy is Dead; Get over
it...
Editorial essay by Jeffrey Barlow
In 2006 I attended a conference in Victoria, B.C., on the
topic “Privacy and Security.”1 The government of British
Columbia sponsored the conference, and it seemed that the civil
service of the entire province was in attendance.
I presented a paper on attitudes toward personal privacy in China.
I argued simply that what is taken for totalitarian or “communist”
control over social relations and information flow in contemporary
China actually has deep traditional Confucian roots. The primary
issue is simply that the Confucian governmental mandate rested
rather firmly on those in power continually exhibiting superior
morality. In such a system, political opposition necessarily comes
to be configured as moral criticism and is in turn taken by those in
power to constitute a moral failing on the part of the critics. At the
time, the inclusion of my paper at the conference seemed to be
little more than a bow to diversity, and largely irrelevant to issues
facing Canada and the United States.
The other presenters, in addition to Canadian political figures and
information technology notables, seemed to fall into two groups:
The first was made up of academics and idealists who generally
pointed with alarm at the erosion of privacy as a result of the
vulnerability of electronic communications, notably the Internet, to
private or governmental surveillance. The second group was
composed of consultants and industry representatives, who saw the
whole issue as largely settled and, hence, tiresome.
In one well-attended panel session the two groups clashed openly.
As I recall, one noted intellectual opened the discussion with
gruesome details on the steadily mounting threat to privacy. His
counterpart came to the podium, shrugged, and announced:
“Privacy is dead. Get over it.” While he went on to fill that
perspective in somewhat, he had little more to say.
At the time, I was under the hopeful impression that the issue of
privacy was still open, in the West if not in China. What a
difference two years makes. The recent Eliot Spitzer imbroglio
persuades me that privacy, particularly in the U.S., is indeed dead,
and that its mortal remains are being eagerly cannibalized.
Our well-connected Interface audience doubtless is aware of the
Spitzer incident. But for those who are too sensitive or perhaps
had better things to read about at the time it broke, let’s remind
ourselves that Spitzer was the Governor and the former Attorney
General of the State of New York. He quickly resigned following
the initial revelations in The New York Times on March 10 that he
had been “linked to a prostitution ring.”2 There were, of course,
endlessly repetitious reports about the events and columnists
pointed with amusement or alarm for some weeks. Savvy
entrepreneurs quickly jumped aboard, and soon one could buy a
“Lov Gov Bear” or a “Love Gov T-shirt” on the web.3
Several issues seemed to commentators to be particularly salient.
First, Spitzer was incredibly wealthy, as he was thought to have
paid up to eighty thousand dollars for sex in the past ten years and
as much as four thousand for recent trysts. Secondly, Spitzer was
adjudged incredibly hypocritical, as he had made a political career
in an almost Confucian fashion by calling for a higher level of
morality among various public figures. He had, it is said, initially
gained public awareness when he investigated conflicts of
interest among investment bankers during the .com bubble.4 Then
he proceeded to indict or sue a parade of financiers and corporate
figures as New York State Attorney General, including Richard
Grasso, a former Chairman of the New York Stock Exchange.
This combination of Spitzer’s hypocrisy and the powerful enemies
he had accumulated, many among financial and .com circles, gave
the story staggering velocity on the web. It was said, possibly
apocryphally, that stockbrokers stood on their desks and cheered at
the news. It is probable that only the flaring up of China-Tibet-
Olympics conflicts displaced the story, at least for a while.
What we might not have predicted, however, is the tie that quickly
developed between the incident and the lower depths of the
Internet. The public’s demand for such information is clearly
insatiable. The most recent highly priced object of Spitzer’s
affection, Ashley Dupree, has become an instant celebrity with
456,000 links to her videos on the WWW as of this writing.5 A
man purporting to be her pimp appeared on Fox News, which
leaves no stomach unturned in its search for the truth, testifying to
her inestimable qualities as a prostitute.
Supposedly confidential legal documents pertaining to the case
were not only leaked but are now accessible from at least
hundreds, possibly thousands, of sites on the WWW. Mr. Spitzer
even has his own Wikipedia entry.6 Nicely formatted excerpts
from “client 9’s” messages as entered into the leaked evidence, can
be found in the Huffington Post.7
Some have felt that it was his political status and the enemies that
Spitzer made that ultimately brought him down, and that he was
deliberately targeted over a period of time in hopes that something
just like this issue would raise its ugly head. Others have argued
that the issue that ultimately provoked the investigation---what
seemed to be suspicious movements of cash on Spitzer’s
part---were, in fact, below the threshold that usually triggers
suspicion.8
We cannot comment on these issues, though we hope that they are
thoroughly investigated. But it is extremely unlikely that any
ultimate judgments can be made. There are so many exceptions
now to privacy laws and protections for electronic rights that a
case can be made on a number of grounds for opening up almost
anybody’s communications. And once the issue is made one of morality,
particularly a conflict between public face and private actions, all
rational discourse fades.
It is quite widely believed that the current administrations in
national investigative bodies, such as the Attorney General’s
office, have rarely been even-handed in their investigations. Local
New York bodies handed off the Spitzer investigation to the F.B.I.
on the grounds that his movements of cash, intended to mask
payments for sexual services, may somehow have been related to
bribery attempts. But the initial discovery seems to have been
related to the bank’s concerns about either terrorism or money
laundering. Then, it appears, a prolonged sting operation was set
up to ensure that if he was arrested it would be for interstate sexual
commerce, a far more serious charge than a local New York
charge. Clearly it will take a federal investigation to unravel, if
ever, the labyrinth of tangled political and legal motivations.
But, however much many may argue that the incident has deep
roots in partisan politics, there can be no Democratic Party outcry
calling for an investigation of the investigations of Spitzer. To do
so would be to empower the values-oriented Republican “base” of
neoconservatives. A perfect Confucian checkmate results: only
the immoral would question investigations into morality.
But what I believe, from living in China, is that motives do not
matter. Once it becomes possible that you are being eavesdropped
upon, that your communications are being archived against
possible future transgressions, or being “data mined” with
increasingly complex methods that seem simultaneously to be
terrifyingly thorough and terrifyingly capable of producing
erroneous conclusions, you effectively no longer have any privacy.
I have noticed in the past several years that an increasingly
common type of joke among both my students and my colleagues
relates to the issue of whether or not one’s email is being read.
Ironically, the more sophisticated the user, the more likely he or
she seems to be concerned, regardless of their obvious innocence,
apolitical attitudes, or squeaky clean moral values and practices.
Once you fear you have lost your privacy, you have, in fact, lost it,
whether anybody is listening or not. A process of self-censorship
properly sets in, and you do well to ask yourself how a casual
statement in email or on the telephone might be misinterpreted.
Interesting or provocative ideas and statements are thoughtfully
redacted.
In the Confucian value structure there was little or no separation
between the individual’s life, including his family life, and the
purview of the state. This is what totalitarianism means:
everything is properly the concern of the state. Eliot Spitzer has
learned this the hard way.
So what can a casual user do except self-censor? Perhaps all we
can do is recognize that privacy is well and truly dead, and that we
do need to get over it.
1 The 7th Annual Privacy and Security Conference, Victoria
Conference Center, Victoria, B.C. February 10, 2006. See:
http://www.rebootconference.com/privacy2006/
2 See Danny Hakim and William K. Rashbaum, “Spitzer Is
Linked to Prostitution Ring,” The New York Times, March 10,
2008. http://www.nytimes.com/2008/03/10/nyregion/10cnd-
spitzer.html?_r=1&oref=slogin
3 The “Lov Gov Bear” can be purchased at the usually staid
Vermont Teddy Bear site: http://shop.vermontteddybear.com
/luvgov.html The rather unimpressive Lov Gov T-shirt is found at:
http://www.zazzle.com/spitzer_the_love_gov_shirt-
235449409458462688
4 See Robert Kuttner, “Wake Up, Wall Street: Eliot Spitzer Is a
Hero” in
Business Week, May 19, 2003, at: http://www.businessweek.com
/magazine/content/03_20/b3833047_mz007.htm
5 http://www.google.com/search?q=ashley+dupree+video&
ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&
client=firefox-a
6 http://en.wikipedia.org/wiki/Eliot_Spitzer
7 See: http://www.huffingtonpost.com/2008/03/10/spitzer-
as-client-9-read_n_90787.html
8 Paul Campos, “CAMPOS: Was Spitzer targeted?” The Rocky
Mountain News, March 12, 2008, at:
http://www.rockymountainnews.com/news/2008/mar/12/campos-
was-spitzer-targeted/ When published also at “Common Dreams” a
progressive blog, the Campos article evoked a fascinating range of
comments. See them at: http://www.commondreams.org/archive
/2008/03/12/7639/

Virtual Death vs Reality
by Jenn Hernandez
The Internet lends itself to a separate reality and a sort of
immortality based on the permanence of content posted there. A
person's memory can outlive them, as long as there are browsers
visiting their blogs, memorials [1], and websites. The Internet is
global, so the bounds of communication are made endless and reality
follows suit as users are immersed in a separate cyber reality where
death may not seem truly final.
Part of a person's identity is social. People define and identify
themselves as others see them. According to the Social Identity
Theory as posted on the University of Twente in the Netherlands'
communications webpage:

Apart from the "level of self", an individual has multiple "social
identities". Social identity is the individual's self-concept derived
from perceived membership of social groups...people's sense of who
they are is defined in terms of 'we' rather than 'I'...The theory has
also implications on the way people deal with social and
organizational change. [2]

So people are remembered by the "we" of society (whoever that "we"
may be) when the individual ceases to exist. People seek out
"we"s within the reader and blogging sphere to create a place to
share thoughts so that they can be known to others, and therefore
hold a reality of having existed. When they die, this is the part that
lives on. In Judeo-Christian traditions, the individual and personal
dies utterly. There is no soul left as a hungry ghost to roam the
world; remembrances should be kept to a minimum, as the person
is dead--why bring them back to life in the memory if they're truly
dead? In The Book of Sirach in the Old Testament of The Bible, it
is stated that:

My son, shed tears for one who is dead with wailing and
bitter lament; as only is proper, prepare the body, absent
yourself not from his burial; Weeping bitterly, mourning fully,
pay your tribute of sorrow, as he deserves, one or two days to
prevent gossip; Then compose yourself after your grief, for
grief can bring on an extremity and heartache destroy one's
health. Turn not your thoughts to him again; Cease to recall
him; Think rather of the end. Recall him not for there is no
hope of his return; It will not help him, but will do you harm.
Sirach 38:16-21 [3]

This is contrary to the beliefs of other cultures, such as those of
Asia and ancient Greece, as evidenced in Achilles' monologue in
Homer's Odyssey:

But say, if in my steps my son proceeds,
And emulates his godlike father's deeds?
If at the clash of arms, and shout of foes,
Swells his bold heart, his bosom nobly glows?
Say if my sire, the reverend Peleus, reigns,
Great in his Phthia, and his throne maintains;
Or, weak and old, my youthful arm demands,
To fix the sceptre steadfast in his hands?
O might the lamp of life rekindled burn,
And death release me from the silent urn!
This arm, that thunder'd o'er the Phrygian plain,
And swell'd the ground with mountains of the slain,
Should vindicate my injured father's fame,
Crush the proud rebel, and assert his claim.' [4]

The ancient Greeks believed that a ghost had reality for as long as
the person was remembered and had a link to the living world. In
Achilles' case, his link is his son. If his son is successful, then
others will remember his father through his heroic deeds.
In Asian cultures, shrines are kept to give ancestors the reality of
remembrance, as evidenced in Lisa See's novel, Peony in Love, which
follows the main character, Peony, in her death and afterlife as a
hungry ghost, seeking remembrance. The last poem Peony writes while
still living reveals her view of death and hope for remembrance: "It
is not so easy to wake from a dream. My spirit, if sincere, will stay
forever under the moon or by the flowers..." [5]. Peony hopes that
the man who was to be her husband will remember her by dotting and
placing her ancestor tablet in his family shrine because, "the
dotting would allow [her] to be worshipped as an ancestor and give
[her] a place to inhabit on earth for all eternity..." [6].
Blogs are like cyber shrines in some cases, being devoted to
memorials written by or about someone before they've died. One
example of such a blog is Andy Olmsted's [7]. He writes a final post
before going to serve in Iraq, in case he should die. He does die,
but is strongly remembered and kept alive in the cyber reality through
his blog. He created a sense of community for himself online,
which is upheld in a memorial posted by a friend. This memorial
includes a link to Andy's last post, which he asked to have put up
on this friend's blog [8] in the event of his death. The final post,
"stirred so much interest that Olmsted's father, Wes Olmsted, said
it has since been translated into several languages, including
Hebrew, Farsi and Russian. 'He touched a lot of people around the
world,' Wes Olmsted said" [9]. So, just as other cultures use
different techniques to remember their dead, the Internet culture
can be as sacred as a gravesite and resting place for the deceased's
soul to continue contact with the world even after they are gone.
The blogs and websites individuals leave behind can give insight
into their lives, an aid to remembrance. When reading the work of
the deceased, one may even have trouble imagining they are dead,
because reading their words lends a sort of vitality, even
immortality, to the person's image.
On February 15th, 2008, a high school student killed himself, but
left behind a Xanga [10], a social blog, that he had kept updated
almost daily. Reading through it in hopes of figuring out what he
was trying to tell the world in his final days, I found many of the
entries so full of life and deep emotion that he seemed as alive as
ever.
Dustin Tran's social identity and mine are intertwined, as we went
to the same high school, so he continues to live in my memories of
him rather than in the reality he is no longer a part of. It would be
hard to imagine that he just stopped existing one day when his
memory is so alive online.
The last entry Dustin had left was a running checklist of things he
wanted to accomplish in the New Year:

1. Graduate.
2. Get accepted into a college/university. OSU! Chyeahh!
3. Finish the rest of my make-up work.
4. Get the Natural Helper's Club back on track.
5. Have a successful Clothing Drive.
6. Get lettered/corded in Key Club.
7. Get lettered/corded in Speech & Debate.
8. Get lettered in NHS.
9. Finish my Internship.
10. Get good Midterm and Final Grades.
11. Enjoy Prom.
12. Workout and get beefed up. ;]
13. Begin studying piano.
14. Sleeping and waking up early.
15. Get on Varsity Tennis.
16. Go on a road trip down to Cali. with some friends.
17. Make Senior Jagfest Court.
18. Become a better speaker.
19. Finish the STARS Video Project.
20. Make a new friend.
21. Become a legal adult.
22. Register to Vote.
23. Karaoke with friends.
24. Get started on making the Valentines Day gift. :]
25. Smile. [11]

Seeing what he'd already crossed off (get accepted to college) and
seeing what he had yet to accomplish (enjoy prom, make varsity
tennis) made it seem as though he hadn't left, though he has. His
list will remain forever frozen in cyberspace as a memorial and a
reminder. He'll never have a chance to enjoy prom or cross out
anything more on his list. I can finish his list, but he cannot. This
realization brought a nauseating visceral reaction: the pit of my
stomach contorted and then went numb as Dustin was wrenched
from my reality. As much as I don't want to believe it, he is gone.
I can live in the blogosphere where he is still alive all I want, but
in reality he is gone.
Blogs can also form online communities that might not be able to
exist otherwise. One such example is the blog of a death-row
inmate, Vernon [12]. Through his blog he shares reflections on a
life in which death is inevitable and it is just a matter of waiting
out the sentence. This applies to all of us; we are all awaiting
death, just not on so definite a timeline. Without contact through
the Internet with those who know their timeframe, we would have
no idea what people who do not have much time left are thinking,
and we would be less likely to reflect on our own lives as well.
Another example of this sort of blog is BH's blog [13]. He had
ALS and used his space to chronicle his thoughts and the rest of
his life on the Internet. By doing so he touched others' lives and
carved out a memory for himself even as he was losing his own.
As death becomes more openly discussed and mourned by online
communities through blogs and websites, a separate reality is
created, an epitaph of sorts. What remains on the Internet as an
epitaph is the final and lasting image of a person and their life. The
Internet can blur the boundaries of reality as we are given a
permanent reminder of those who have succumbed to a mortal
death, though perhaps not so mortal after all.
References:
[1] http://obit.mtscottfuneralhome.com/wrapper_gb.php?id=507386
[2] http://www.tcw.utwente.nl/theorieenoverzicht/Theory%20clusters/Interpersonal%20Communication%20and%20Relations/Social_Identity_Theory.doc/
[3] The New American Bible. World Bible Publishers, 1987.
[4] http://pd.sparknotes.com/lit/odyssey/section12.html
[5] See, Lisa. Peony in Love. New York: Random House, 2007. 95.
[6] Ibid., 112.
[7] http://andrewolmsted.com/archives/2008/01/final_post.html
[8] http://obsidianwings.blogs.com/obsidian_wings/2008/01/remembering-and.html
[9] http://www.rockymountainnews.com/news/2008/jan/16/farewell-for-soldier-who-moved-many/
[10] http://www.xanga.com/SmartChincBoi
[11] http://www.xanga.com/SmartChincBoi/635610462/item.html
[12] http://meetvernon.blogspot.com/
[13] http://brainhell.blogspot.com

With a Little Help from my Online Friends:
The Health Benefits of Internet Community
Participation
by Shawn Davis, Ph.D. <davissh@pacificu.edu>
Introduction
From its inception, the Internet has proved itself fertile
ground for the growth of online (virtual) communities.
Around 1997, the term online community first became
common for describing the various communication media that
enable individuals to come together online [1]. An online
community is a collection of individuals who interact via the
Internet rather than through face-to-face contact for a variety of
purposes, including social networking [2], professional
development [3][4], educational instruction [5], and a whole host
of other reasons [6]. Online communities grow and develop around
the same set of purposes as do other groups of like-minded
individuals who are centralized around a particular interest or
goal. Also, these online communities seem to mirror the function,
strengths, and limitations often found in traditional interpersonal
contact. This article will focus on some of the potential health
benefits that are seen through online community participation and
present a variety of factors found to be important in the
development of successful online communities. To best set the
stage, let’s begin with a look at a predecessor of online community
participation, journaling.
The Health Benefits of Journaling
Keeping a journal of one’s thoughts, feelings, and activities
represents a long-held tradition. Not only have these recordings
helped to provide a framework for future generations to better
understand the past, the act of putting one’s inner-self on paper has
been found to be individually beneficial as well. According to
James Pennebaker, a psychologist and researcher at the University
of Texas at Austin, evidence is accumulating that supports the idea
that journaling can positively impact one’s physical well-being [7].
In particular, Pennebaker finds that active journaling serves to
strengthen T-lymphocytes, immune cells within the body. These
physical benefits are highlighted in the reduction of asthma
symptoms and those of rheumatoid arthritis. The benefits gained
through journaling are proposed to be centered on the idea of
stress reduction. When an individual makes the effort to write
about a stressful or troubling event, they are initiating a series of
cognitions that help them come to terms with the event, and this
self-reflective catharsis serves to reduce the impact of the stressor
on their physical health.

While the benefits to one’s health have been demonstrated through
regular journaling, the catharsis gained through reflection and
externalizing of the inner self is a rather solitary activity. By
extending the act of journaling to an interactive, albeit virtual,
community, these health benefits can be multiplied.
The Benefit of Online Community
Not only do the benefits of traditional journaling translate well to
cyberspace, virtual journaling (e.g., blogging, instant messaging,
and discussion boards) within a group of individuals sharing
similar thoughts, concerns, or stressors has been found to provide a
host of psychological benefits as well [8]. James Baker and Susan
Moore, researchers from the Swinburne University of Technology
in Melbourne Australia, found that individuals who engaged in
regular blogging over a two-month period reported having larger
friendship networks and felt a greater degree of social support than
did a comparison group of individuals who did not blog. This
extended social network served to encourage higher levels of
confidence in those individuals who blogged regularly in that they
felt less isolated and more connected to a community of
like-minded individuals that they could rely on for help and
support. Furthermore, after two months of activity within the
online community, study participants reported feeling less stressed,
depressed, and anxious than they did prior to their involvement.
Online Health Communities
These benefits are further highlighted in the aid that such online
communities provide those currently suffering from illness [9].
Increasing technological availability and proficiency, as well as the
growing number of online health communities, provide
individuals with limited mobility, embarrassing medical
conditions, or other restrictive issues with the informational and
emotional support they desperately need. No longer are these
individuals isolated; through online community participation they
report feeling more empowered, more equipped in their decision
making, less alone, and indicate an improved quality of life.
While the benefits of online health communities have been
documented, to be maximally beneficial they need to be more than
just an open forum. Beyond cultural and linguistic issues, a
number of factors have been identified that, when properly
addressed, create an atmosphere in which social interaction,
knowledge, and safety operate in unison.
What Makes for a Beneficial Community?
Brennan and Fink describe the role of social networks as one of
affirming, supporting, and rewarding good health practices and of
providing emotional and social support [10]. This
supposition is in line with research findings that suggest that

successful support programs involve interactions that provide a
combination of affirmation, emotional support, and information
[11]. Also, for the benefits of online health communities to
flourish, the structure of the site must be easily navigated,
welcoming, and the interaction between visitors must be handled
in such a way so as to allow members to best evaluate the quality
of the information presented therein. To best meet these goals, it
has been suggested that the successful design of such a system
should be the result of an active collaboration between both
clinicians and patients [12]. The active participation of health care
providers is seen as particularly important in combating the spread
of misinformation that often results in entirely unmoderated
forums.
The fear that misinformation or incomplete information can be
spread through online health communities has been the subject of
considerable debate [13]. Traditionally, the health care provider
has been the guardian of medical knowledge. With the
proliferation of online communities centered on a variety of health
issues, some have expressed a concern that “a little knowledge can
be a bad thing” [14]. Others, however, support the notion that
individuals should position themselves as educated consumers of
health information presented online [15] and that the myriad health
benefits (both physical and psychological) to be gained from
online community involvement make such interaction a valuable
adjunct to a traditional doctor-patient relationship.
To explore the issue of community moderation, Wise, Hamman,
and Thorson observed 62 participants active within either a
moderated or unmoderated online community [16]. The study
participants, in response to questions related to their online
participation, indicated a higher intent to participate within the
moderated community. While value is seen in both moderated and
unmoderated communities, the structure provided through
moderation not only improves the flow of communication, it
provides a level of protection in that the information contained
within the discussion is more often valid.
In addition to addressing concerns over the value and accuracy of
information discussed within the context of online community
interaction, research has been conducted on ways to encourage
active participation in the online community. This stands in
opposition to a situation in which a few contributing "elders"
dominate the online conversation while the majority of others
remain mere "lurkers" [17]. Online communities have been found
most successful when individuals feel that their contributions
matter, when the information presented is appropriate, and when
control of the group is decentralized [18][19].

The “lurking” or under-contribution problem is a concern for many
online communities. Ling et al. applied social psychological
theories of social loafing and goal-setting in their examination of
various community designs [20]. Individuals participating in a
movie recommender community (established for the purpose of
the research study) were provided differing explanations for the
value of their individual contributions. It was found that
individuals contributed more often when they were provided
reminders of their uniqueness and when they were presented with
challenging goals associated with their contribution.
Furthermore, Lazar and Preece contend that communities
demonstrating high levels of sociability are those that possess
explicit social policies that support the purpose of the community
that are understandable, practicable, and socially acceptable [21].
Sociability, however, is but one part of the equation. Usability, the
making of interfaces that are consistent, predictable, and
controllable, must work in conjunction with the sociability of
interaction for the development and growth of a successful online
community.
Conclusion
The act of releasing one’s inner thoughts, concerns, and feelings
has been shown to provide a multitude of benefits for the
individual both in terms of their physical health and psychological
well-being. These benefits are enhanced when one is an active
participant within a community of like-minded individuals sharing
a similar set of circumstances. As discussed previously, however,
such a community must be viewed by the individual as more than
a dumping ground of one’s concerns. To be most beneficial, this
community must also be a source of information, a means to
establish true personal connection, and above all a place of safety
within which the individual can be honest, open, and vulnerable.
ENDNOTES
1. Preece, J. (2000). Online Communities: Supporting Sociability,
Designing Usability. Chichester: John Wiley & Sons Ltd.
2. ibid.
3. Lock, J. V. (2006). A new image: Online communities to
facilitate teacher professional development. Journal of Technology
and Teacher Education (JTATE), 14(4), 663-678.
4. Hur, J. W., & Hara, N. (2007). Factors cultivating sustainable
online communities for K-12 teacher professional development.
Journal of Educational Computing Research, 36(3), 245-268.
5. Poole, M. J. (2006). Participation, negotiation, and sociability:
Building online communities of practice in preservice teacher
education. In A. Schorr and S. Seltman (Eds.), Changing media
markets in Europe and abroad: New ways of handling information

and entertainment content. (pp. 393-415). Lengerich, Germany:
Pabst Science Publishers.
6. http://en.wikipedia.org/wiki/Online_Health_Communities
7. http://psychcentral.com/lib/2006/the-health-benefits-of-journaling/
8. http://www.abc.net.au/news/stories/2008/03/03/2178512.htm
9. http://en.wikipedia.org/wiki/Online_Health_Communities
10. Brennan, P.F., & Fink, S. V. (1997). Health promotion, social
support, and computer networks. In R. L. Street, Jr., W.R. Gold, &
T. Manning (Eds.), Health promotion and interactive technology:
Theoretical applications and future directions (pp. 157-169).
Mahwah, NJ: Lawrence Erlbaum.
11. Cwikel, J., & Israel, B. A. (1987). Examining mechanisms of
social support and social networks: A review of health related
intervention studies. Public Health Reviews, 15, 159-193.
12. http://en.wikipedia.org/wiki/Online_Health_Communities
13. Kral, G. (2006). Online communities for mutual help: Fears,
fiction, and facts. In M. Murero and R. Rice (Eds), The Internet
and health care: Theory, research, and practice. (pp. 215-232).
Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
14. Scheerhorn, D. (1997). Creating illness-related communities
in cyberspace. In R. L. Street, Jr., W.R. Gold, & T. Manning
(Eds.), Health promotion and interactive technology: Theoretical
applications and future directions (pp. 157-169). Mahwah, NJ:
Lawrence Erlbaum.
15. http://bcis.pacificu.edu/journal/2007/01/davis.php
16. Wise, K., Hamman, B., & Thorson, K. (2006). Moderation,
response rate, and message interactivity: Features of online
communities and their effects on intent to participate. Journal of
Computer-Mediated Communication, 12(1), 24-41.
17. Bishop, J. (2007). Increasing participation in online
communities: A framework for human-computer interaction.
Computers in Human Behavior. 23(4), 1881-1893.
18. Cosley, D. R. (2007). Helping hands: Design for member-
maintained online communities. Dissertation Abstracts
International: Section B: The Sciences and Engineering, 67(8-B),
4516.
19. Lin, H., & Lee, G. (2006). Determinants of success for online
communities: An empirical study. Behaviour & Information
Technology, 25(6), 479-488.
20. Ling, K., Beenen, G., Ludford, P., Wang, X., Chang, K., Li, X.,
Cosley, D., Frankowski, D., Terveen, L., Rashid, A. M., Resnick,
P., & Kraut, R. (2005). Using social psychology to motivate
contributions to online communities. Journal of Computer-
Mediated Communication, 10(4).

21. Lazar, J., & Preece, J. (2003). Social considerations in online
communities: Usability, sociability, and success factors. H. van
Oostendorp (Ed), Cognition in a digital world. (pp. 127-151).
Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Web 2.0 and the demise of the shelf
concept
Steve Rhine
Willamette University
There has been a gradual erosion of one of my core belief
systems. I have done my best to fend off repeated
advances on the concept, but I think it is succumbing to
pressures outside of my control. It certainly is making me nervous,
as I am not entirely clear on the implications if this concept is to
ultimately meet its demise. Fad or revolution? I'm not quite sold
that it is a revolution, but I am now feeling pangs of discomfort in
ways that I haven't before. Is the tide in my mind turning? The
concept?
The internet is just a really big book shelf.
With the advent of Alta Vista (the first search engine that I called
my own), I saw the internet light. I can look for any information in
the world with this tiny box. An easier card catalogue! The little
librarian inside that box will go out into the world and find my
"book" (article, picture, what-have-ya), pull it off of someone's
shelf (aka website), and bring it back to me so I can read it. What a
deal!
Wikipedia? I know encyclopedias! This is just like paging through
my monster old orange Columbia Encyclopedia or set of World
Book Encyclopedias I had as a kid, except I don’t have to pull the
volume off of the shelf and turn the pages! I just follow the links!
It saves my back!
I don’t have to empty my piggy bank for change, spend hours in
the stacks to find the journal I want, and struggle to mash the
impenetrable binding to the Xerox machine glass so I can get a
copy of that article? I just click "Full Text"? Wow! No more musty
smell!
As you can see, my shelf concept was very happy. Each new tool
of the internet was at home in the library of my mind.
Now as I merrily assimilated these new phenomena into my world
view there were the occasional, irritating thoughts that crept into
my brain. Questions such as: "Where did that website come from?"
and "Who wrote Wikipedia anyway?" gently gnawed at my
sensibilities, but the shelf concept remained.
Then along came Blogs, Wikis, Podcasts and Other Powerful Web
Tools for Classrooms by Will Richardson to tip the balance. I have
been skeptical of the blogosphere, wikis, and podcasts as I just
can’t understand how people have the time to randomly create
content let alone consume what everyone seems to be saying about
a topic. However, Mr. Richardson is gradually turning up the

dimmer switch so that the light is beginning to take hold in my
mind. Two light bulbs, in fact, are beginning to shine and
overcome the darkness. Two new ways of thinking are starting to
take shape and tear down the walls of the internet library.
Idea #1: The purpose of creative activity is to contribute to the
world’s understanding and initiate conversation.
The term "Web 2.0" was developed in 2004 by Dale Dougherty of
O’Reilly Media [1]. There is a bit of argument over exactly what it
means, but my sense is that it signifies the transformation of the
internet from a place for the masses to acquire information to a
space in which the masses could create content. The Pew Internet
& American Life Project reported in 2003 that 53 million
American adults, or almost half of adult internet users had created
content on the internet. [2] Ian Kallen of Technorati noted in
December of 2007 that 1.5 million blog posts are made each day.
[3] In 2007, Pew reported that 64% of online teenagers had created
content, up from 57% in 2004. [4] Web 2.0 is an internet that is
evolving away from being simply a library/storehouse of
information and towards an interactive, worldwide connection
place.
What does this have to do with me, as an educator? Well, I have
spent many hours grading papers and returning them to students
who quickly dispose of them. I have also spent many hours in the
classroom listening to students' presentations on one topic or
another. The parade of groups in front of the classroom is usually
engaging for the first few and then begins to drag. Time is eaten
away. In spite of my exhortations to students about the value of
their ideas, the only one the group is often talking to is me. Yet, I
truly believe in the value of getting students to develop and
communicate their thinking collaboratively. Students believe the
purpose of creating a product or making a presentation is to
accomplish the task assigned by the teacher: to make me happy. I
want the dynamic exchange of ideas. I want a greater, more valued
purpose. Hmmm...
Richardson shares 10 'Big Shifts' that he sees as the result of the
dawn of Web 2.0. Shift #10 is that the ultimate goal of activity is
not completion, but contribution. "Instead of simply handing in
countless assignments to teachers to be read, graded, handed back,
and most likely thrown away, we can now offer our students a
totally new way of looking at the work they do. It’s not meant for
the teacher or the class or even the school. It’s meant for the world,
literally. It’s not meant to be discarded or stored in a folder
somewhere; it’s meant to be added to the conversation and
potentially used to teach others."[5]
Putting information into the web means adding to the knowledge

base around a topic so that others might use it. The purpose of
creative activity is not to inform one person but to continue the
ongoing effort to build understanding. This requires that you have
something new to add to the conversation, not just rehash the same
ideas. What new ways do you have of looking at the topic? What
fresh perspective can you bring to the conversation? This also
requires that you listen to the conversation first in order to
understand the arguments currently being made. This certainly
makes me rethink how I design assignments for my students. Not
only can they help fill the library, but they can alter what is in it.
They can also develop a sense that what they have to say is of
value to others and develop a sense of responsibility to contribute
to the dialogue. The investment students might have in their work
and the quality of their effort might improve if they know that their
ideas will go beyond the four walls of my office.
Idea #2: Learning is achieved through "utilizing collective
intelligence."[6]
What is information and how do we learn? Up until recently, the
answer was "information is in a book or article". I want to learn
something, I pull that book off the shelf. However, Wikipedia has
highlighted the fact that information is dynamic and not static. I
was on the CNN website recently and a news flash came up that
Castro had just stepped down and his brother Raul was now
president. I wondered aloud "How old is Raul Castro?" and
decided to find out on Wikipedia. It turns out he is 76, but more
importantly it said on Wikipedia that he was now president of
Cuba as of an hour ago. Now, technically that information was
incorrect, because it was February 19th and the National Assembly
didn’t elect him until February 24th, but that moment
demonstrated two things. First, that the internet library has
"books" flying on and off the shelves so fast that I’m not so sure
the shelf concept is serving me well anymore. Information changes
every second and the internet makes it possible to keep up with the
dizzying pace of changing ideas. Second, that Wikipedia opens the
door to the concept of collective intelligence. Shortly after my visit
to the site, someone corrected the entry for Raul.
I was still hanging on to my shelf concept by a thread when
Richardson made the final snip. How do I generally learn
something? I might look up a book or article. I might ask some
friends. I might even type a word or phrase into Google. Those are
all shelf strategies. My field of view is constrained by my
knowledge base or network and the 10-20 hits I might look up on
Google. Richardson’s perspective of collective intelligence is an
exponential leap.
Tens, hundreds, or even thousands of people have looked for,

documented, and talked about the topic in which I am currently
interested. They have found books and articles, bookmarked
websites, blogged about perspectives, and developed databases. If
I could tap into all of their work, it would be like an army of
librarians working for me to find out about this topic. My network
has just enlarged incalculably. However, there is no way that I
could possibly get through all of it and find out just what is
relevant to me! Information overload!! Or is there...?
If you only have time to read 10 pages in your life, read Chapter 5.
Enter the world of RSS (Really Simple Syndication). Now I know
how to subscribe to a podcast with iTunes, so I understand RSS.
Every week Bill Moyers is downloaded to my computer without me
even thinking about it. Richardson takes it further. With an
aggregator such as Bloglines you can subscribe to anything that
has an RSS feed...news outlets, journals, blogs, etc. You can filter
that content by topic so you only get new information about just
what you are looking for. Any new information on your topic is
automatically found and brought to your doorstep. You can create
RSS feeds for search terms!
One way of doing this is to use Google News’ advanced search
feature. You can narrow the field to just the sources you want and
the exact terms you want in or out. When you hit "search" the
results are returned with an RSS button. If you like the results,
paste the feed into your aggregator and any time something new
about your topic is published it comes to you.
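The subscribe-and-filter workflow described above can be sketched in a few lines of Python. This is an illustrative sketch only, not how Bloglines or Google News actually work: the RSS 2.0 feed below is an invented sample, and matching_items is a hypothetical helper that keeps only the items whose titles mention your topic, the way a keyword-filtered feed subscription would.

```python
import xml.etree.ElementTree as ET

# Invented RSS 2.0 sample standing in for a real news feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Web 2.0 tools in the classroom</title>
        <link>http://example.com/1</link></item>
  <item><title>Local sports roundup</title>
        <link>http://example.com/2</link></item>
  <item><title>New Web 2.0 survey released</title>
        <link>http://example.com/3</link></item>
</channel></rss>"""

def matching_items(feed_xml, keyword):
    """Return (title, link) pairs for items whose title mentions the keyword."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        if keyword.lower() in title.lower():
            results.append((title, item.findtext("link", "")))
    return results

for title, link in matching_items(SAMPLE_FEED, "web 2.0"):
    print(title, "->", link)
```

An aggregator does essentially this on a schedule: fetch each subscribed feed, keep what matches, and deliver it to your doorstep.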
One thing all of us should worry about is the quality of
information we receive. With a site such as del.icio.us, the
concept of collective intelligence really starts to make sense to me.
Instead of having Google algorithms determine the relevancy and
quality of information coming your way, how about having
humans do it for you? del.icio.us is a social bookmarking site
where people collect website links about particular topics. They
tag the sites with key words. [7] del.icio.us takes all the sites that
have been tagged with the same key words you are looking for and
connects them and the people who tagged them. You find out who
is looking for what you are looking for and benefit from their
efforts. Wikipedia points out that the advantage of social
bookmarking is that "as people bookmark resources that they find
useful, resources that are of more use are bookmarked by more
users. Thus, such a system will 'rank' a resource based on its
perceived utility." [8] Richardson describes this process as "no
longer taxonomy but 'folksonomy.'" [9] People collectively decide
the value of information and raise it in the hierarchy. The more
people use a resource, the more likely it has value.
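At bottom, this folksonomy ranking is just counting: the more users who tag a resource, the higher it floats. A minimal sketch in Python, using invented bookmark data (the bookmarks list and rank_by_tag helper are hypothetical, not del.icio.us's real data model or API):

```python
from collections import Counter

# Each tuple is (user, url, tag) — invented bookmarks for illustration.
bookmarks = [
    ("ann", "http://example.com/rss-intro", "rss"),
    ("bob", "http://example.com/rss-intro", "rss"),
    ("cal", "http://example.com/feed-howto", "rss"),
    ("ann", "http://example.com/feed-howto", "web2.0"),
]

def rank_by_tag(bookmarks, tag):
    """Rank URLs carrying `tag` by how many people bookmarked them."""
    counts = Counter(url for _, url, t in bookmarks if t == tag)
    return counts.most_common()  # most-bookmarked first

print(rank_by_tag(bookmarks, "rss"))
```

Two people tagged the first URL "rss" and one person tagged the second, so the first ranks higher: collective human judgment, rather than an algorithm, deciding perceived utility.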
The Pew folks summarize Richardson’s points concisely: Web 2.0

applications "replace the authoritative heft of traditional
institutions with the surging wisdom of crowds." [10] The shelf is
dead, long live the Borg.
I’ll close with the words of poet, essayist, and former lyricist for
the Grateful Dead, John Perry Barlow at the dawn of the internet
age in 1994: "The economy of the future will be based on
relationship rather than possession. It will be continuous rather
than sequential. And finally, in the years to come, most human
exchange will be virtual rather than physical, consisting not of
stuff but the stuff of which dreams are made." [11]
1. "What is Web 2.0," by Tim O’Reilly. Published on the O’Reilly website on September 30, 2005: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.
2. "Reports: Online activities & pursuits," by Amanda Lenhart, Deborah Fallows, and John Horrigan. Published on the Pew Internet & American Life Project website on February 29, 2004: http://www.pewinternet.org/PPF/r/113/report_display.asp.
3. "Use The Technorati Percolator to Discover The Real Time Web," by Ian Kallen. Posted on Technorati on December 4, 2007: http://technorati.com/weblog/2007/12/406.html.
4. "Reports: Family, Friends, and Community," by Amanda Lenhart, Mary Madden, Alexandra Rankin Macgill, and Aaron Smith. Published on the Pew Internet & American Life Project website on December 19, 2007: http://www.pewinternet.org/PPF/r/230/report_display.asp.
5. Richardson, W. (2006). Blogs, Wikis, Podcasts, and Other Powerful Web Tools for Classrooms. Thousand Oaks, CA: Corwin Press, p. 132.
6. "Riding the Waves of Web 2.0," by Mary Madden and Susannah Fox. Published on the Pew Internet & American Life Project website on October 5, 2006: http://www.pewinternet.org/PPF/r/189/report_display.asp, p. 1.
7. The Apple Learning Exchange publishes an interesting lesson plan on "Tagging for Learning": http://edcommunity.apple.com/ali/story.php?itemID=12084.
8. "Social Bookmarking." Published on Wikipedia, retrieved March 1, 2008: http://en.wikipedia.org/wiki/Social_bookmarking.
9. Richardson, W. (2006), p. 92.
10. "Riding the Waves of Web 2.0," p. 2.
11. "The Economy of Ideas" by John Perry Barlow. Published in Wired Magazine in March, 1994: http://www.wired.com/wired/archive/2.03/economy.ideas_pr.html.

Photoshop Express: Web photo sharing
gets interesting
by Mike Geraci
In March 2008, Adobe Systems entered the Web photo-sharing
arena with the introduction of Photoshop Express (currently in
beta). Given Adobe's dominance of the photo-editing and Web and
interactive design markets, its offering is more than just a space
to show off your photographic abilities; it's a nicely polished set
of technologies that demonstrates what's possible on the Web these
days, provided you've got the bandwidth.
Photoshop Express is a Flash-based application that allows users
to upload, edit, organize, present, and share their photographs. The
service is compatible with Mac (10.4 and up) and Windows (XP,
Vista) computers running Explorer (6 & 7), Firefox (2.x) and
Safari (3.x) browsers. The free Flash Player plug-in (version 9.x)
is required but comes installed in the above browsers by default.
The site will let you know if you need to update your software and
will start the download and install process automatically.
To browse any of the approximately 50,000 photo galleries posted
by end users, you simply need to go to the site and click on the
Browse button. To post your own photos, you need to establish a
free account. Once you're official, you are given 2GB of storage
space on Adobe's servers to store your images.
Uploading photos is a simple process that is enhanced by the
ability to add multiple images from different locations on your
computer into a queue. Once you've got a batch ready, click the
upload button and the images are transferred. I uploaded 8MB of
images in under a minute on a cable modem connection, so the
response time is adequate. Once uploaded, your images appear in a
thumbnail gallery which feels a lot like dedicated photo
management applications like iPhoto and Lightroom. Thumbnails
are automatically date-stamped, and you can scale them up or
down, add captions, and assign star ratings. The photo library can
be viewed in a grid, list, or enlargement format, and thumbnails
can be rearranged by certain criteria (like star rating) or via drag
and drop.
Photoshop Express only accepts JPEG formatted images, so
die-hard photographers who shoot in Camera Raw or TIFF format
will have to convert images in a dedicated application before
uploading. To test the technical capabilities of the service, I
uploaded eight images that had different resolutions, dimensions,
and embedded color profiles. Photoshop Express handled them all
with aplomb, and I was not able to detect any loss of quality in the
uploaded images. However, they are scaled down on the fly to fit
in the browser window, which creates some pixel "shimmering" in
the images. Images larger than 2880 pixels on any dimension will
be shrunk down to this size when they are added to the library.
According to the documentation, single images over 10MB in size
cannot be uploaded.
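The 2880-pixel ceiling described above amounts to a simple proportional resize. Here is a minimal sketch of that arithmetic; the service's exact rounding behavior is an assumption, and the sample dimensions are invented:

```python
MAX_DIMENSION = 2880  # the library's size ceiling described above

def constrained_size(width, height, limit=MAX_DIMENSION):
    """Scale (width, height) down proportionally so neither side exceeds limit."""
    longest = max(width, height)
    if longest <= limit:
        return width, height  # already small enough; leave untouched
    scale = limit / longest
    return round(width * scale), round(height * scale)

print(constrained_size(4000, 3000))  # a 12-megapixel frame -> (2880, 2160)
print(constrained_size(1024, 768))  # under the limit -> (1024, 768)
```

The aspect ratio is preserved because both dimensions are multiplied by the same factor, derived from whichever side is longest.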
Things start to get interesting when you mouse over a thumbnail.
A "Photo Options" menu appears which gives you numerous
options for managing the photo. There are a few handy features
available in this menu, like copying a hyperlink directly to a
particular photo for pasting into an e-mail or onto a Web page, and
basic rotation options. The real feat comes in the "edit photo"
option, which opens the photo in an enlarged format and provides
a toolbar of editing controls that are too numerous to list here.
Suffice it to say that basic toning, color correction, cropping, and
special effects are all present.

Figure 1: The Edit Photo menu


Photoshop users won't be too excited about the editing options, as
there aren't any histograms, layers, or selection tools, but for a
Web-based service, I was impressed with the speed and granularity
of options available in Photoshop Express. I was especially
impressed with the ability to re-white balance images that may
have been taken under complex lighting situations and have
undesirable color casts as a result – a common error found in
digital photography. Edited images can be saved as new images,
giving you the ability to create multiple versions from a single
base image. Lastly, I'm somewhat awestruck by the fact that all
editing operations are applied in a non-destructive format, giving
the user the ability to remove individual edits (out of order) or
revert to the original image in a few clicks.
Once your images are ready for viewing, you can collect subsets of
photos into virtual albums. Photoshop Express extends the
experience by allowing you to drag thumbnails from the library or
existing albums into additional albums, just as you might create
playlists in iTunes. The mouse pointer changes to indicate you are
adding photos, and the thumbnails being added even shrink down
as they are "dropped" into the album. Albums can be shared
publicly on the site with the click of a button or, in an interesting
twist, you can invite only those you select to access your albums
by generating an e-mail to them from the service that provides
the necessary access.

Figure 2: Photo Albums


Albums can be viewed as standard thumbnails that can be clicked
to view enlargements, or animated slideshows can be created in
one of four formats, some including 3D effects (images move
in three-dimensional space). End users can only view slideshows,
but they have access to all the options for formatting, captions,
scale, duration, and auto or manual playback. Another feature that
exceeds the typical Web experience is the ability to switch
Photoshop Express into full-screen mode so that no remnants of
the Web browser or the operating system are visible while viewing
images. Galleries (either your own or those of others) can be
selected as your "favorites" which makes them easily accessible
whenever you log in. You can also access previously viewed
galleries from a history list.
The biggest feature that is missing from Photoshop Express is the
ability for users to tag photos with keywords, which would allow
for the creation of "meta" collections of images that all match
user-defined criteria. Imagine being able to search for a person or
an event and having all the photos tagged with matching terms
come up. Perhaps we can look forward
to this functionality in future revisions; it is still a "beta" version
after all.

Figure 3: Slideshow creation and settings panel
While I am enthusiastic about what's possible with Photoshop
Express, my experience was not without some technical glitches. I
was unable to upload photos from my primary desktop computer.
This might have been because I am running the beta version of
Flash Player 10; I had no issues on multiple other computers that
differed only in this respect. Beyond that, there is the occasional odd
behavior in accessing public galleries and performing searches on
the service. Both Firefox and Safari report infrequent minor errors
in performing functions in the source code or accessing an object
in the data structure. The mouse pointer tends to disappear during
slideshows, which requires moving the cursor rapidly near the top
of the browser to bring it back into view.
Overall, the glitches in the application don't take much away from
the well-designed and highly interactive experience offered by
Photoshop Express. As a whole, it is an amazing service that rivals
the commercial and non-commercial photo-sharing products
currently available on the desktop or the Web.
To see the sample gallery I posted for the purposes of this article
follow this link: http://geracim.photoshop.com
/?trackingid=BTAGC&wf=share&
galleryid=f965a03d2b96475f9813d56904a89bc6

Managing Online Forums
O'Keefe, Patrick. Managing Online Forums. AMACOM,
New York, 2008.
Managing Online Forums is an excellent book for those
interested in participating in or in creating and operating
such forums. In this work, "forums" refers primarily to bulletin
board systems permitting asynchronous community discussions.
These have been around for some time, dating back to the
pre-WWW stage of the Internet, when communication was limited
primarily to text, and that laboriously
produced. The number of potential users of such systems was
inherently limited because sending even short text strings was
complex and frustrating.
Today's forums are sites, usually within an HTML web-based
environment, on which registered members may communicate at
the time of their choosing, primarily by posting messages to each
other which may well be read and replied to days later. The
common forum applications "thread" these messages, organizing
them by topic, by author, or by time posted, in a variety of ways,
so as to build a community of discussants around common themes.
Here is what one forum, taken from a recent class I taught, looks
like (contributors' names removed):
Joined: 31 Jan 2008
Posts: 30
Post Posted: Mon Feb 04, 2008 2:49 am Post subject: Dispatches
Question Reply with quote The book only talks about U.S. troops
essentially going crazy from the war and always being frightened
of what may be around them ready to kill them, but were the Viet
Cong affected in the same way? Did a lot of them go home from
the war, like the U.S. troops, and have constant nightmares,
flashbacks, and basically not be able to re-enter society?
_________________
author:
Back to top
View user's profile Send private message AIM Address
Joined: 30 Aug 2007
Posts: 22
PostPosted: Mon Feb 04, 2008 4:27 am Post subject: Reply with
quote
I guess I have a similar question to the one posted above,
pertaining to the troops, and their reintegration into home life. Not
knowing much about this war, or any prior pertaining to the finer
details of troop-life, I am curious as to the idiosyncrasies of the
Vietnam War that contributed to the severe augmentation of troop
psyche, especially upon returning to the U.S. - what about this war
made it difficult, if not impossible for a reintegration into civilian
life (citing particularly examples in the book of soldiers incapable
of resuming a non-combatant lifestyle)? Or, if there is no
difference, then what about this war, and information covering it,
highlighted these issues, compared to the aftermath of previous
conflicts?
Back to top
View user's profile Send private message AIM Address
author
To see the discussion in its original HTML formatting, which
makes it much more readable, go to: http://bcis.pacificu.edu
/phpBB2/viewtopic.php?t=373&
sid=aa5b71c7226f31a6fb1efea2ace5f4cc
A well-run forum may have many hundreds, often thousands, of
participants interacting in smaller or larger groups as they choose.
Many users also appreciate the information or enjoy the interaction
of others but simply "lurk" and do not post.
Forums are different from the more familiar "chat rooms," which
permit only discussions between participants who are on line at the
same time ("synchronously"). Asynchronous forums are more
useful for many, who prefer the ability to post following careful
reading and consideration of previous postings, rather than
engaging in simultaneous instantaneous communications, which at
the least privilege speed typists.
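The threading described above, grouping asynchronous posts by topic and ordering each thread by posting time, can be modeled with a toy data structure. This is a sketch only; the post data, authors, and topic names here are hypothetical, loosely echoing the class discussion quoted earlier:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    topic: str
    author: str
    posted: datetime
    body: str

# Hypothetical posts modeled on the class forum quoted above.
posts = [
    Post("Dispatches", "student_a", datetime(2008, 2, 4, 2, 49),
         "Were the Viet Cong affected in the same way?"),
    Post("Dispatches", "student_b", datetime(2008, 2, 4, 4, 27),
         "A similar question about reintegration into home life..."),
    Post("Logistics", "student_a", datetime(2008, 2, 3, 9, 0),
         "When is the first response due?"),
]

def threaded(all_posts):
    """Group posts into threads by topic, each thread ordered by posting time."""
    threads = {}
    for post in sorted(all_posts, key=lambda p: p.posted):
        threads.setdefault(post.topic, []).append(post)
    return threads

for topic, thread in threaded(posts).items():
    print(topic, [p.author for p in thread])
```

Real forum packages such as phpBB (used for the class forum above) store essentially this structure in a database, adding replies, quoting, and per-author views on top of it.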
Because of the problems now presented by the Internet, such as
spamming, obscene or unwelcome postings, inappropriate
discourse, or simply personal conflicts among participants ("flame
wars"), these forums are usually managed or mediated by an editor
or editors.
These editors, or "managers" as O'Keefe prefers to call them, must
establish rules for behavior in the forum and then enforce them,
perhaps to the point of "banning" participants or blocking the
location from which they usually post. There is also, in these
days of digital rights management and its accompanying
violations, an accumulating body of law that must be scrupulously
observed. And even the seemingly anarchic World Wide Web has
its standards of etiquette and good practices, violations of which
may cause a cascade of negative comments to flood the web,
seriously damaging the forum that has offended.
Patrick O'Keefe is well qualified to guide the would-be forum
creator, manager, or participant through these many issues. He is
experienced in running a wide variety of such forums, including
managing electronic communities of Karate practitioners, despite
the fact that he himself has never studied any martial arts. That he
is able to do so effectively is a testament to his understanding of
how such communities are created, managed, and grown.
Managing online forums is, not surprisingly, complex--at one point
the author speaks of "cultivating" them which seems a precise
description of what is actually required of an owner-manager.
While some forums generate substantial income, most, as the
author ruefully admits, do not. These depend largely on volunteer
labor and on the strong support of the community members
themselves, making management of the forum as much a social
interaction as a business. O'Keefe has many useful lessons on how
to work with such groups, and seemingly has himself made every
possible mistake. The lessons derived from these errors are as
useful, of course, as those drawn from the successes.
At first glance, Managing Online Forums may strike some readers
as over-written, full of restatements of the same information
through repeated examples, and at points even simplistic. I
myself have been utilizing forums in classes and on-line
publications for many years, and thought that I had little to learn.
I was very wrong. The approach taken by O'Keefe, while
deliberately popular and sometimes overly obvious, is extremely
effective. His real-world examples, taken from a series of
communities--of which the reader soon comes to almost feel him-
or herself a member--are both humorous and instructive.
The author addresses a number of types of communities with
different purposes, and extracts useful lessons common to all. He
also discusses software for creating and managing forums,
thankfully sticking to the most commonly utilized packages. He
thus may avoid the rapid dating of his work, a common result of
authorial discussions of programs found in such a quickly evolving
environment as the Internet.
The work is fully supported by an on-line web site, and if O’Keefe
practices even a portion of what he preaches in this book, it seems
safe to assume that it will be continually updated and can itself
serve as a means of enabling reader-author discussions.
While the author continually insists that he himself is unable to
program and not particularly competent with computers, his use of
terminology is both appropriate and deft, so that those more
advanced than he can yet benefit from the work. Those working at
creating a community site will find the work a very useful
introduction to the technology, but by no means a fully adequate
resource. Many more technical works are to be found in O'Keefe’s
references and his useful appendix will give the reader a sufficient
grounding in vocabulary and in the definitions necessary to fully
understand the analysis.
There are many points at which some readers may also part
company with the author's single-minded focus on the smooth
"management" of the community, even if sometimes at the
expense of the freedom of speech of individual members. For him,
such discussions are seen as primarily legal ones, rather than
ethical issues. His readers are urged to keep discussions of politics
and religion, for example, out of any forums not specifically
dedicated to such content. Such rules can give a forum an
otherworldly aura in which discussions seem more virtual than
real.
While O'Keefe is quite careful to keep this work relevant to a very
broad group, the holy grail of many forums is to attract a
sufficiently large audience that advertising can be sold, or perhaps
some larger conglomerate will "acquire" it for its own purposes.
Although the author repeatedly describes such hopes as
unrealistic for most forum creators, one is left with the impression
that even for O'Keefe much of the purpose of cultivating a large
and smoothly functioning community is to somehow monetize it.
The book will be of lesser interest for those seeking primarily to
understand the impact of such forums, or their social, economic, or
political functions in an internet-enabled world. The current
buzzword with regard to web-enabled communications is "Web
2.0," a reference to the most recent stage of the web, facilitated by
the wealth of tools and on-line storage space produced in the last
five years or so, and by the eagerness of readers to themselves
become writers. This work, however, is about practicing within this
environment, primarily from the perspective of the owner-operator
rather than that of a participant.
Nonetheless, such forums can be useful adjuncts to such a wide
variety of Internet sites that the work will probably go through
many editions.

The Post-American World
Zakaria, Fareed, The Post-American World. New York:
W.W. Norton, 2008.
Review by Jeffrey Barlow, editor, Interface
Fareed Zakaria's recent work, The Post-American World,
is clearly destined to be an influential one, as shown by its
widespread reviews. This is probably because it come at a time
when American's are at last prepared to understand that they are in
the grips of major change, some aspects of which are far from
positive ones, such as a declining dollar and rising oil prices, and a
prolonged war. Zakaria is also a familiar and respected media
figure. He is an editor of Newsweek's International Edition, and a
prolific writer on economic issues. In addition he has an hourly TV
interview program on CNN. [1]
Zakaria has what has in the past been a tough argument for
Americans to swallow. He believes that the U.S. is clearly in an
inevitable relative decline in terms of its power, influence and
wealth. This decline is not caused by a Spenglerian [2] "Decline of
the West," but by the "Rise of the Rest," most notably by the rapid
progress of China and of India.
"Three Forces" have contributed to this decline, according to
Zakaria, "Politics, Economics, and Technology." [3] At Interface
we are primarily interested in the last of these, of course, as it is
fairly described as the impact of the Internet.
Works comparing Asia and the West are by no means new, nor was
Spengler the first to warn of inevitable decline. As early as the
17th century, European Jesuits living in China were writing
appreciations of Chinese society. Sometimes these were honest
attempts to understand a highly developed non-European society.
More often than not, however, such analysts used Asia as a tool
with which to criticize their own culture, perhaps for its
materialism, or its lack of stability and order.
This trope of the failing West and the admirable East has thus been
a staple of East-West comparisons for centuries. A recent
approach has been a cautionary one by
which intellectuals measure America in particular against an Asian
culture---most recently China or India---and find it either
gratifyingly successful or give warning of its imminent decline and
fall.
An example of the cautionary school was Paul Kennedy's 1987
work, The Rise and Fall of the Great Powers. While Kennedy was
mostly concerned about internal factors explaining the decline of
great empires, particularly expensive military overextension, he
directly applied his arguments to the United States. His position,
while widely discussed and ultimately influential, was also roundly
pilloried both by offended American conservatives and by critical
academics. [4]
Kennedy's work, among other factors, evoked a sort of intellectual
riposte from conservatives such as David M. Landes. [5] Landes
wrote a very well-received work, The Wealth and Poverty of
Nations: Why Some Are So Rich and Some So Poor, published
a decade later. Landes' theme was a familiar one of late 20th
century market (and American) triumphalism.
According to Landes, the West, and particularly the United States,
simply had a superior culture, particularly in its Judaeo-Christian
roots, which prized private property and thus encouraged
individual initiative. Asians, however, were too often in the grips
of strong states which prevented markets and individuals from
working their magic. By this time, a decade after Kennedy had
written and during which the United States had arguably ascended
rather than declined, Landes, like so many others, was able to
largely dismiss Kennedy as a typical academic alarmist. The
decade from Landes' publication in 1998 to 2008, however, has
seen a great many changes. The spectacular continuing "rise" of
China and the rapid recent decline of the American economy and
other economies closely linked to it, such as England's, have
brought a new perspective to the discussions.
National Public Radio recently reviewed a number of books
making quite a different argument from Landes's, by three
authors who, from different perspectives, hold positions much
closer to Kennedy's. [6] Zakaria, then, is suddenly well
within a very broad emerging analysis. Zakaria, however, is almost
soothing in his approach to what in the past has been viewed as an
alarmist if not downright terrifying perspective. In part this is, I
think, because Zakaria, despite his Indian birth and his eminence
in the media, seems, well, to be one of us. He too is paying much
more for his gasoline, and probably flying less and enjoying it less,
as well. And his position nicely takes the moral sting out of even
relative decline; it is not our fault.
The problem is not that we have changed; we have, in fact, been
highly successful. Much of the world wants what we want, and
intends to get it in much the same way as we did: through reliance
on capitalist market forces. "We" won, then. So why does it seem
so much like a loss to us? And what caused it?
Zakaria easily answers this latter question; we leave the former to
each of our readers to ruminate upon for him- or herself.
According to Zakaria: "Since the 1980s, these three forces---
politics, economics, and technology---have pushed in the same
direction to produce a more open, connected, exacting
international environment." [7]
In this review we choose to focus upon one of these forces,
obviously upon the impact of technology. But Zakaria gives a
wonderful treatment of both politics and economics as well, giving
just enough comparative information and historical background to
make the reader ultimately comfortable with his conclusions.
The relationship between technology and the "Rise of the Rest" is
easily summarized. It is a result of near-instantaneous
communications, which makes it possible for capital and labor,
those familiar factors of the productive processes, to rapidly seek
environments in which return upon them can be maximized.
Arbitrage, the movement of money, and outsourcing of both labor
and factories, then, have enabled others to meet and surpass
American standards.
The results of these changes, however, are neither simple nor
always negative for the United States, nor always positive for
others. Zakaria's
contacts and wide understanding of politics and economics lets
him move easily from his analysis of causes to a very thorough
analysis of the broad consequences.
It seems to us that of all the works we have reviewed in Interface
touching upon this element of our contemporary world, from
Landes to Greenspan, Zakaria's is the most thoughtful and readable
one, fully deserving its current widespread notice.
1. See his webpage at: http://www.fareedzakaria.com.
2. Oswald Spengler was, of course, an earlier critic of Western
progress who argued in 1918 that weaknesses in Western
philosophy inevitably doomed it to fail to progress and compete.
There is a sense in which Zakaria and Spengler might be said to
agree, in that Spengler believed that all great societies would
inevitably be superseded by newer, more vigorous ones. See a
useful but incomplete Wikipedia discussion at:
http://en.wikipedia.org/wiki/The_Decline_of_the_West.
3. For his discussion of the three forces begin at p. 21.
4. For one such review, downloadable in PDF, see Henry R. Nau,
"Why 'The Rise and Fall of the Great Powers' was wrong,"
Review of International Studies (2001), 27, 579–592, copyright
British International Studies Association, at:
http://www.nationalism.org/library/science/ir/nau/nau-ris-2001-27-04.pdf.
5. For two editorial pieces discussing the broader issue of Western
vs. Eastern economic standing, see Jeffrey Barlow "Paradigms of
World History and American Technology" at
http://bcis.pacificu.edu/journal/2005/06/edit.php and Jeffrey
Barlow "American Exceptionalism and Technological
Development" at http://bcis.pacificu.edu/journal/2005/07/edit.php.

6. Hear: "The Growing Economic Influence of China and India" at:
http://www.npr.org/templates/story/story.php?storyId=90030319.
7. p. 25.

Safer Practices in Financial Transactions
on the Internet
Editorial Essay by Jeffrey Barlow, editor, Interface.
Introduction: We have often addressed issues relative to
financial transactions on the Internet. [1] So far as the
impact of the Internet is concerned, on-line financial
security should, surely, have long since become a topic on
which very little remains to be said. Unfortunately this is not true,
but there is much information available to help us guard ourselves
against electronic mishaps. Here we address an important new
study on safe web practices for both financial institutions and their
customers.
Most of us are now so accustomed to electronic transactions,
whether through eBay or Amazon.com or the thousands of other
possibilities, that we tend to think little about them. We are often
amused by obvious phishing scams, [2] though, incredibly, many
still fall victim to them.
Yet the losses to various forms of electronic fraud continue to
mount annually. [3] It was once possible to easily find annual
estimates of such losses. We are now well beyond any simple
accounting. The schemes have become so varied and so common
that no satisfactory estimate is possible.
However, there are many signs that electronic fraud, despite all
precautions, is probably many times more costly than ever before.
One such index is the number of attempts at such fraud. If fraud
did not pay, it would die out. Instead, it has become
steadily more pervasive and more sophisticated. So have the
counter-measures, ranging from simple insurance offers to
elaborate electronic protection software and sites. Our
vocabularies have enlarged to include new meanings for such
terms as "virus" and "worm," and new words like "malware" and
"keylogger." Surely almost all of us have met or read about a victim
of identity theft.
Yet, while our awareness has surely increased so has our reliance
upon electronic transactions. I recently had the experience of
opening with my daughter a new account at a local bank into
which I will make electronic deposits from Oregon, while she later
will enthusiastically make withdrawals from Beijing. I asked the
very efficient and knowledgeable bank officer about security in
such transactions. And of course, we weren't discussing
transactions between two New York banks, but one between
perhaps one of the least sophisticated nodes of the banking
industry, small-town Oregon, and the home of many of the world's
most sophisticated hackers, China. "Oh, no problem," the officer
assured me.
We eventually completed our transactions, somewhat slowly
because the entire computer system of this nation-wide bank was
down and data had to be hand-written onto forms for later entry.
An Important Recent Study: Following our transactions, I returned
to my office at Pacific University's Berglund Center for Internet
Studies to find an e-mail from Ben Elliott, our former Systems
Manager, calling my attention to a sophisticated new study of
security design flaws on financial websites. This study is the
centerpiece for this editorial.[4]
The study has been widely reported upon, by the usual pointing-
with-alarm journalists, and by those hoping to sell us relief for
such fears. The paper is a carefully nuanced one and its
conclusions are, I feel, often overstated in the subsequent reports
on its appearance. One such alarmist notice, for example, shouts
"75 Percent Of Banking Websites Vulnerable To Cyber Thieves
Study Shows" while hawking software and services.[5]
In fact, what the paper seems to me to tell us is not that these
banks (The authors studied the electronic sites of 214 U.S.
financial institutions!) are necessarily vulnerable, but that flaws in
their page designs make it difficult or impossible for even
sophisticated users to judge whether or not they are vulnerable.
This conclusion, embedded as it is in an excellent analysis, is
actually more valuable to us than would be the extremist views of
its findings. None of us wants to lose our ability to use sites such
as those at financial institutions. We all, however, should want to
become more sophisticated users of such sites.
While the study is at points pretty heavy going, it can certainly be
recommended for anyone wanting to better understand best
practices in financial sites, and to better understand safe use of
such sites. It will also be of great interest to anyone capable of
reading source code with some awareness, meaning anyone who
has made a web page.
The Study's Conclusions: I shall try to boil down some of the
study’s conclusions for nonprogrammers, a group to which I
regretfully belong. We are not by any means suggesting that the
following discussion will make you safe on the Internet, nor that
you should use it as a primer with which to berate your banking
officers. Reading it will, however, make you a more secure
Internet user.
A preliminary discussion of the study's methodology is in order
because it reveals some of the limitations of the study. The authors
used a combination of software to automatically scan and save the
pages easily accessible at financial institutions. They then used
other software to identify potential weaknesses in such sites. They
tried throughout to draw only very conservative conclusions, and
gave the benefit of the doubt to many apparently flawed pages.
However, their purpose was not to specifically identify
vulnerabilities open to hackers, but to find badly designed sites
that would prevent concerned users from assessing the relative
security of such pages. In short, the authors believe that good
design should let consumers make reasoned judgments about
security. Bad design then, is not necessarily indicative of
vulnerabilities, but it erodes, or should erode, a user’s trust in such
sites.
The authors repeatedly state that pages with design flaws are an
indication that the flaws are not well understood by those
responsible for security. [6] And, if we are to draw an extremist
conclusion of our own from the study, we will go farther to say
that this lack of awareness makes us at least question the
designers’ knowledge of security in general. It may be that
underlying security devices not immediately accessible to the
study team or to the wary user are robust, even bullet-proof, but to
fail to follow best practices in design may well be an indication of
additional potentially compromising flaws in security itself.
The Importance of a Chain of Trust: What then, are best practices,
either for financial institutions or for users? First, users must have
a complete "chain of trust." [7] Simply put, this means that pages
that are secured according to best practices must be easily
distinguishable as such. This means for us, at a minimum, that the
site should use "SSL-protected pages" (Secure Socket Layer) or
some variant upon them. Such pages are on an authenticated
server: the user knows with a high level of confidence that the
server is what and where it claims to be. It is this protocol that
produces the often tiresome "Warning you are leaving a Secure
environment for an insecure environment. Do you wish to do so?"
or its equivalent. At the very least, if we leave an SSL protected
page in a financial site for an unprotected one, we should be
warned.
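The kind of check the study automated can be sketched in a few lines. The Python below is purely my illustration, not the authors' actual tooling; the bank and partner hostnames are invented. Given the address of an HTTPS page and the links found on it, it flags those links that would silently drop the user onto a plain-HTTP page, breaking the chain of trust.

```python
# Hypothetical sketch: flag links that break the "chain of trust" by
# leading from an HTTPS page to a plain-HTTP destination.
from urllib.parse import urljoin, urlparse

def broken_trust_links(page_url, hrefs):
    """Return the links on an HTTPS page that resolve to plain HTTP."""
    if urlparse(page_url).scheme != "https":
        return []  # the page itself is not secured; nothing to compare
    flagged = []
    for href in hrefs:
        target = urljoin(page_url, href)  # resolve relative links
        if urlparse(target).scheme == "http":
            flagged.append(target)
    return flagged

links = ["account.html",                       # relative: stays on HTTPS
         "http://partner.example.com/offers",  # breaks the chain
         "https://www.example-bank.com/help"]  # secure, different page
print(broken_trust_links("https://www.example-bank.com/login", links))
# → ['http://partner.example.com/offers']
```

A real audit would also have to follow redirects and examine the pages themselves, but even this toy version captures the study's core complaint: the transition from secure to insecure is detectable, so a well-designed site has no excuse for not warning the user.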
It is not unusual for financial sites to fail to provide this level of
warning. In fact, the study contends, 30% of financial sites break
the chain of trust. Most commonly this consists of sending the user
from a secured page on an authenticated server to an insecure page
on the same server.
The problem is that this insecure page is much more easily
accessible to a determined hacker (or to an insider) who may
change data on the page to direct a user to another site with evil
intentions, or perhaps may introduce malware to capture data, or
may not properly encrypt data moving from that page to other
Internet sites. So, as a general principle, a user should never leave
an SSL environment (there are variations on this protocol which are
also secure) without being warned.
Even worse, not infrequently the user may be directed from the
authenticated server to a server controlled by others, most
probably by a firm offering additional services to the institution.
We cannot know that the new destination is insecure; but we can
state that we should have been warned that we were leaving so that
we could choose to examine that new server’s authenticity by
looking at its certificate. Often, that certificate will show
ownership not by the financial institution or the company
providing the services, but perhaps by a third party which owns the
second company. Our “chain of trust” is now properly somewhat
attenuated.
How can we know that we have fallen afoul of any of these
problems? Most basically, the URL throughout our journey in a
financial site should always begin with HTTPS, meaning that the site
uses SSL protocols or one of the several common variations of them. On many
pages we may also find a graphic representation of a lock and a
statement that the page follows certain practices.
I suggest that to get an idea of how your own financial institution
works, you go to its main page and carefully read all the
information on security. Note that the main or welcome page is
probably not SSL protected. But once you enter your user name
and password you should be in HTTPS designated pages, or at
least warned you are not. At my bank’s site I was directed to an
unsecured page without warning, but I was also kicked out of my
personal site and had to log back in if I wanted to work with my
accounts. This is probably less than ideal security as it did expose
me briefly to an unsecured page, but I was, as best I am capable of
determining, at least in the bank’s sites throughout.
This is important because SSL requires that our browser examine the
site’s certificate and affirm that the site is what and where it
purports to be. It also means that the data being transferred is
properly encrypted and is much less vulnerable to interception en
route between the server and our browser.[8] This is, of course,
emphatically not total security.
Much can go wrong, particularly at our end, but it means that the
site is doing its best, given current practices.
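For readers curious about what this verification looks like in practice, here is a minimal sketch using Python's standard ssl module. This is my own illustration, not anything from the study: it simply shows that a default client context already enforces what this editorial describes, refusing unverified certificates and mismatched hostnames.

```python
# A minimal sketch of what "examining the certificate" involves,
# using only the Python standard library.
import ssl

context = ssl.create_default_context()

# The certificate chain must verify against the system's trusted roots:
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
# The name in the certificate must match the host we asked for:
print(context.check_hostname)                    # True

# An actual connection would then look like this (the hostname is
# illustrative, and the call needs network access, so it is left
# commented out as a sketch):
#
# import socket
# with socket.create_connection(("www.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
#         print(tls.getpeercert()["subject"])  # who the site claims to be
```

The point is that the checks are automatic and strict by default; it is the site designer who must avoid routing users around them.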
Secure Login: The second issue that should cause the aware reader
concern is the security of login options. This is particularly
problematic. My own bank places my user name login options on
an unsecured page. This is probably because they want as much
information as possible available to a user with a minimum of
hassles. It is, however, not a best practice according to authors
Falk, Prakash, and Borders in their study. The bank does not give
me a password option until I am in an SSL environment, so it is
probably still quite secure. Some banks, however, at least at the
time the study was done, did not. This means that it is possible that
an entire page has been spoofed or altered specifically to misdirect
the user into harm’s way.
An aware user should probably check his or her bank’s pages and
at least have a chat with an officer in the local branch if it provides
both login and password in an insecure environment as indicated
by the lack of an HTTPS url. The local officer will probably have
to make some phone calls, but you will perhaps learn a bit more
about your own security, and your bank may well learn something
too, perhaps that many customers now care about such arcane
issues.
Contact/Security Information on Insecure Pages: There are other
issues that must be considered as part of login practices. Does your
institution provide its security information on secure pages? Mine
does not. In theory, as the authors point out, this means that such
pages could be more easily spoofed: perhaps a phishing attack
directs me to a phony page with a dire warning and then asks me
to change my password, or directs me to call a given phone
number for some necessary procedure that results in my giving
up personal data. If the security pages are in a secure environment,
I am less likely to have been fooled, and the page is highly unlikely
to have been spoofed.
Inadequate Policies for User IDs and Passwords: None of us likes
unwieldy passwords or IDs. Few of us change them regularly. The
worst practice here is easily summarized: your institution should
not permit you to use Social Security numbers or email addresses
as user IDs or passwords.
Social Security numbers and email addresses are terribly easy to
collect. If you are in any way identifiable as someone likely to
have even moderate resources, it is worth the time of a criminal to
collect such data on you, perhaps by purchasing it from yet other
thieves. Alternatively, you may simply be unlucky enough to be
included in one of the many massive thefts of unencrypted data, as
in any number of recent cases. If your institution lets you use
either of these identifiers, change them immediately on that
institution’s site, and at the first opportunity chide them for the
practice.
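The recommended policy is easy to express in code. The sketch below is purely illustrative; the patterns and the acceptable_user_id name are mine, not any institution's actual rules. It rejects candidate user IDs that are really Social Security numbers or email addresses, the two trivially harvestable identifiers discussed above.

```python
# Hypothetical sketch of the policy recommended here: refuse user IDs
# that are really SSNs or email addresses. The regexes are deliberately
# simple illustrations, not production-grade validators.
import re

SSN_RE = re.compile(r"^\d{3}-?\d{2}-?\d{4}$")       # 123-45-6789 or 123456789
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # anything@host.tld

def acceptable_user_id(candidate):
    """Return False for IDs that are trivially harvestable identifiers."""
    if SSN_RE.match(candidate) or EMAIL_RE.match(candidate):
        return False
    return True

print(acceptable_user_id("123-45-6789"))    # False: an SSN
print(acceptable_user_id("jane@mail.com"))  # False: an email address
print(acceptable_user_id("blue_heron_42"))  # True
```

An institution enforcing even this crude filter at registration time would spare its customers the two worst ID choices.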
Why should we be concerned about the institution’s practices once
we have protected ourselves against them? I was once studying the
Nigerian email scams (See references at note 1 below.) and asked
my bank for permission to open a false account and to place a
minimal amount of money in it so that I could follow up the chain
of the scam. I offered to do anything reasonable to indemnify the
bank, to produce my bona fides, to get the FBI involved from the
beginning, etc.
I was quickly referred to a very alarmed security officer at the
home office. This person could barely speak of the sophistication
of hackers, Dutch, Russian, Nigerian, unknowns, all manner of
lurking threats, without sobbing. I was told that if such master
criminals got one valid account number, which I proposed to give
them of course, as part of my study, they would then be able to
identify nearby accounts in the numbered sequence with more
certainty, and perhaps hack into them. So none of us want any
accounts in our institutions to be vulnerable; one security hole may
well open others.
Email Security: The worst of all possible bad practices, by financial
institutions or by any user, is to move critical information by
email. Your email is not secure, no matter where it is stored or how
it is sent. One very sophisticated user recently told me that storing
email in one of the currently popular free email services on the
web was equivalent to leaving it in a public restroom, and the
archives were about as difficult to rummage through.
Conclusion: This editorial has not been a light-hearted romp, either
to write, or, I am sure, to read. It is dismaying to think that we as
individuals should bear so much responsibility for protecting
ourselves in environments that, after all, we are paying no small
amount of our limited resources to enter. But, sadly, this is the fact.
Educate yourself. Save yourself and others. Lobby for best
practices, including web site design that permits you to make
reasoned judgments about the level of security it provides.
1. While many of these issues are so familiar to even
unsophisticated users of the Internet as to have become common
knowledge, surprisingly, all of them still present dangers to the
unwary. See, for example: "To E- Or Not To E-: Financial
Transactions On the Internet" at http://bcis.pacificu.edu/journal
/2003/05/edit.php; Financial Transactions on the Internet, Part II
at: http://bcis.pacificu.edu/journal/2003/05/edit.php; Nigerian
Email Fraud: Recent Developments at http://bcis.pacificu.edu
/journal/2002/05/editorial.php; Data Intrusions: The Need for
Federal Legislation at: http://bcis.pacificu.edu/journal/2003/06
/edit.php; Globalism, Crime, and the Internet at:
http://bcis.pacificu.edu/journal/2002/04/editorial.php Although
many of the problems discussed in these pieces are now at least 6
years old, little progress has been made in solving them.
2. It is hard to imagine a reader who is at all interested in Interface
and also is unfamiliar with this term, but it is a generic name for
attempts to inveigle a user into providing information or, better,
money, to an online criminal. Usually these are some variant on
either promising something for nothing or, ironically, asking you to
provide data to protect yourself against other schemes: "You need to
reregister your user name/password/social security number/other
valuable data to continue using your bank/E-Bay", etc.
3. To see the endless creativity of these attempts, and the clever
way in which they often establish trust by tying into recent events,
such as rebate checks, see the FBI site at: http://www.fbi.gov
/cyberinvest/escams.htm
4. Laura Falk, Atul Prakash and Kevin Borders, "Analyzing
Websites for User-visible Security Design Flaws." The study can
be downloaded in PDF from http://www.eecs.umich.edu
/~aprakash/ See references at "Some Sampling of My Research"
5. See report at: http://www.nationalcybersecurity.com/blogs
/813/75-Percent-Of-Banking-Websites-Vulnerable-To-Cyber-
Thieves-Study-Shows.html
6. p. 1.
7. As the PDF is not paginated, we resort to citing sections of the
study: here, for example, section 3.1.
8. I have found the materials at PCmag.com useful in
understanding these issues. See: http://www.pcmag.com
/encyclopedia_term/0,2542,t=SSL&i=51944,00.asp See also Bank
of America's statements about security found at:
https://www.bankofamerica.com/privacy
/Control.do?body=privacysecur_bankofamerica for an example of
very thorough explanations.

Toons and Terrorism
By Iprofess,[1] Elvin Druid of Zuljin
A recurrent issue facing those of us who live largely in Azeroth, the virtual
World of Warcraft, now a population of ten million realworldwide[2], is
what is real, and why? We think that the boundaries between our virtual
world and your presumably real one are increasingly difficult to discern.
Here we explore that issue in the most serious of all the games found in TRW (The
Real World), national security!
In our last piece in this estimable journal, we reviewed a book by another Toon,
Sabert of Ever Quest, who passes in TRW as the economist Edward Castronova.
Here we quote both Sabert and ourself from that earlier review, thus becoming
pleasingly, doubly, self-referential:

"One of his (Sabert's) more interesting concepts, building on earlier


research, is to view "synthetic worlds" (ugly term!) as "a locus of
migration." His point, with which I am in full agreement, is that
increasing numbers of people from TRW are choosing to live
increasingly larger portions of their lives in online games because they
find those carefully selected and constructed lives more meaningful and
more satisfactory than their shadow lives in TRW. In our worlds you
are who you appear to be, though that may be an identity, even a
gender, quite different than the one which nature so capriciously
assigned you. And our worlds most often enforce a generally
cooperative and social existence, unlike the dreadfully competitive and
increasingly lose-lose calculus of TRW."

At the time of our review of Sabert's still earlier publication (2006 A.C.E., TRW),
this argument that people were migrating to cyberspace for the more pleasing life
style must have seemed somewhat strange to the simple citizens of TRW. But,
sadly, the once sharp divide between "reality" and Azeroth, let alone all other
game worlds, is blurring: not only are Toons like myself slipping back and
forth, but some very questionable "real" characters are coming the other way.
True, we Toons are not entirely blameless, though I think most of us would prefer
a good quest in Azeroth to some of the events occurring in the frontier between our
worlds. In Mexico City, it is reliably reported (by the standards of journalism in
TRW) that a male player repeatedly "ganked" (killed in game) a female Toon to
prevent her from progressing in Azeroth. He unwisely challenged her husband to
intervene, giving his address in Mexico City, TRW. He came to regret this
revelation when three men arrived (her guildies?), smashed his computer and
broke his wrist and a finger.[3] As the comments appended to the blog cited above
indicate, there is widespread disagreement as to who, exactly, is at fault here. Some
think he got his proper reward, others think she should have sucked it up and
soldiered on.
The rise of cyber harassment has been so rapid with the growth of the Web that
even The New York Times has spotted it.[4] It is, according to that source, now
called "trolling," in an effort to point with alarm with more linguistic specificity.
Unfortunately, these practices have resulted in real deaths. [5]
But for every Toon who crosses into TRW seeking revenge for virtual slights, it
appears that there well may now be an even stranger group entering the fray,
intelligence agents from TRW looking for evidence of terrorist plots in on-line
games.
Sabert had earlier warned of this possibility. Then, more recently, a number of
groups in TRW, mostly those who stand to benefit financially from paranoia,
became aware of this threat, even before either the U.S. Federal Government or
The New York Times discovered it. [6]

The U.S. Federal Government, fully apprised, now began to study the issue. The
federal project to fund the study is called "Reynard." In the report to Congress
detailing data mining programs by the Office of the Director of Intelligence, it was
said:

"Reynard is "seedling effort looking at the emerging social dynamics of


virtual worlds and on-line games, and their implications to the IC
(Intelligence Community?):
The cultural and behavioral norms of virtual worlds and gaming are
generally unstudied. Therefore, Reynard will seek to identify the
emerging social, behavioral and cultural norms in virtual worlds and
gaming environments. The project would then apply the lessons learned
to determine the feasibility of automatically detecting suspicious
behavior and actions in the virtual world.
If it shows early promise, this small seedling effort may increase its
scope to a full project. Reynard will conduct unclassified research in a
public virtual world environment. The research will use publicly
available data and will begin with observational studies to establish
baseline normative behaviors."

The above report is written with such dry wit and insouciance that it could only
have been written by a Blood Elf—the dark side esthetes of Azeroth. One can
hardly wait to read the follow up report in which "baseline normative behaviors"
are clearly established.
Evidence of the critical importance of this task is further indicated by the fact that
the U.S. Army has issued a Request for Proposals, stubbornly refusing to yield the
virtual battlespace to terrorist Toons.[7] The army's RFP, clearly also Blood Elf
prose, though usually they are more facile in using the possessive case, goes in
part:

The contractor shall conduct internet awareness services in support of
the Governments (sic) activities to include Indications and Warning,
Force Protection, and situational awareness.
The purpose of the services will be to identify and assess stated and
implied threat, antipathy, unrest, and other contextual data relating to
selected internet domains. The contractor will prioritize foreign
language domains that relate to specific areas of concern.
The contractor will analyze various web pages, chat rooms, blogs and
other internet domains to aggregate and assess data of interest to the
Government. It will also identify new Internet domains that directly
relate to the Governments (sic) specific local requirements.

I doff my helm to the Toons who pulled this one off— to be paid for living in
Azeroth and periodically reentering TRW only to report to gullible bureaucrats is a
dream job, indeed. And subsequently, other renegade Toons can comment on those
suppositions at various security-addled agencies and publications! It is the
equivalent of a Full Employment for Toons Program.
However, intelligence-wise, this is not an ideal approach. The intent of Reynard
and the army is to monitor Cyberspace from TRW. But I can do much better than
that! I hereby volunteer my services as a clandestine agent. Living even
periodically in TRW has gotten quite expensive lately, and I could use some
additional income, preferably in Euros, thank you, or better yet, gold delivered in
Azeroth.
I even have strong qualifications for this role as luck would have it. My very name,
"Iprofess," was originally chosen in Azeroth to indicate that I was somewhat
cranky and opinionated. As it turned out, however, the term "I Profess" is also a
ritual statement taken from the Koran, and in many sites on the web, the boundary
between our worlds, this is of course truncated as IProfess.[8]
And sure enough, once I established my credentials in Azeroth my fame began to
spread, a strong indication that there is something to this "implied threat, antipathy,
unrest, and other contextual data" and soon there was a musical group named after
me in SoCal (apparently a geographical region in TRW roughly equivalent to the
monster-ridden deserts of Silithus[9] in Azeroth, though more crowded).[10]
I am, in short, already "inside," if one were (as I understand it is illegal to do) to
profile apparent Muslims to scrutinize their normative behaviors. And if these
were not sufficient credentials—and in the shadowy world of cyber terrorism there
is probably some danger that they are, in fact, too good, and will disqualify me
from enlisting—I once played in a group of three in Azeroth which included a
teenager from Saudi Arabia. The third member of our party was an intrepid
huntress named Bodicea, who repeatedly saved us by sending her pet pig,
"Mizpiggy," at our mutual enemies.
I was quite curious about playing with a Saudi and we exchanged a great deal of
information while roving. I asked him how religious authorities in Saudi Arabia felt
about those playing in Azeroth. This seemed to cause him some concern, as though
I perhaps were already an undercover Toon for some religious agency. Then
Bodicea asked if it bothered him to be repeatedly saved by what was, after all, a
vicious assemblage of bacon that squealed joyfully as it went careening into
battle.
The Saudi youth regrettably responded to this latter query by abruptly leaving the
game. This might have been evidence of grievous trans-baseline normative
behavior, or perhaps it was just that his mom or the Imam was calling him. I
withhold judgment until the Reynard study bears fruit. But I stand ready to go back
undercover whenever needed.
[1] Iprofess purports to be a 'toon, by which he means a character in an on-line
role-playing game, the World Of Warcraft. We have published several pieces by
him, which can be found here: http://bcis.pacificu.edu/journal/indexes
/?index=author&p=I
Here we reprint our own cautionary note from his last publication: Because we are
uncomfortable with publishing an anonymous manuscript, we now require that any
responses resulting directly from the publication of this review be sent to Mr.
IProfess via our own office. Mr. IProfess, for his part, has agreed that not only any
responses from readers, but also his responses to readers will go through our
office.
[2] James Ransom-Wiley, "WoW surpasses 10 million subscribers, now half the
size of Australia," Joystiq, Jan 22nd, 2008. On line: http://www.joystiq.com
/2008/01/22/world-of-warcraft-surpasses-10-million-subscribers-now-half-the/
Brendan Peat, "4 times as many WoW players as farmers in the U.S." October
22nd, 2007, On line: http://www.wikinomics.com/blog/index.php/2007/10/22/4-
times-as-many-wow-players-as-farmers-in-the-us/
[3] Klink, Funtechtalk, "World Of Warcraft Player Claims A Bounty On His Head
In And Out Of Video Game" on line: http://www.funtechtalk.com/world-
of-warcraft-player-claims-a-bounty-on-his-head-in-and-out-of-game/
[4] Mattathias Schwartz, "The Trolls Among Us," The New York Times, on line:
http://www.nytimes.com/2008/08/03/magazine/03trolls-t.html
[5] A forum in WOW discussing trolling, on line: http://forums.wow-europe.com
/thread.html;jsessionid=9730CDD000D5A9F0563736DBCEF9A206.app06_07?topicId=5383591642&sid=1
Also related on line: http://forums.worldofwarcraft.com
/thread.html;jsessionid=14DBEBE2C8054A49EA21C696C2FB87B4.app05_05?topicId=9023786390&sid=1
[6] Jay Frazer "Extending the Discussion on Terrorism in a Virtual World" at:
Threats Watch, on line at: http://threatswatch.org/rapidrecon/2008/03/extending-
the-discussion-on-te/ This piece followed on an earlier one by Michael Tanji
"Getting Serious about 'Virtual' Terror. Some Informed Comment About Online

114
Hype and Reality" Threats Watch, February 26, 2008 on line:
http://threatswatch.org/commentary/2008/02/getting-serious-about-virtual/ For a
somewhat more measured response to this issue, see: Juan Cole, "Osama bin
Laden's "Second Life" In virtual worlds, does it take two terrorists to tango? And
how much should we worry about those secret stockpiles of cartoon weapons?" On
line: http://www.salon.com/opinion/feature/2008/02/25/avatars/print.html The
PDF of the study is found online: Office of the Director of National Intelligence,
15 Feb. 2008. "Data Mining Report" PDF blog.wired.com/27bstroke6/files
/dni_datamining_report_2008.PDF
[7] William Welsh, "Army Issues Internet awareness RFP" Washington
Technology, http://www.washingtontechnology.com/online/1_1/33024-1.html See
the RFP, if only in order to reassure yourself that I have not been spending too
much time in the Silithusian sun at: https://www.fbo.gov/index?s=opportunity&
mode=form&id=4d5a47be731171c3750a402d218cda1e&tab=core&_cview=0&
cck=1&au=&ck=
[8] For example, http://www.IProfess.com, an email group for Islamic converts.
[9] For a screen shot of Silithus showing its resemblance to SoCal see
http://www.wowwiki.com/Silithus
[10] See the band leader's pages on line: http://profile.myspace.com/musiciprofess

Can I really watch what I want when I want
it on my TV?
By Lynda R. Irons
My cassette player died yesterday. I shook it and banged
it gently — to no avail. I have not looked for a new one
yet, but I suspect that in these days of iPods, iPhones, and
other ithis and ithat, it may be challenging to find a
quality player. I belong to the old school where I listen to a tape of
an artist from beginning to end, although MP3 players and iPods
have revolutionized the way one listens to music. Consumers now
can pick and choose the songs they want to hear rather than the
ones someone else has chosen for them. This
self-selection and customization is the underlying mantra of
today's technology-savvy consumer. Witness the explosion of Web
2.0 tools, including MySpace, Facebook, and Flickr, just to name a
few. YouTube, now owned by Google, allows individuals to post
self-created videos for mass consumption.
This opportunity to tailor and create content has yet to be applied
to cable television programming. There has been some discussion
in the industry about allowing consumers to select
programming suited to their interests and desires. In fact, in a
2005 statement, FCC Chairman Kevin Martin felt that an "alternative
is for cable and DBS operators to offer programming in a more a la
carte manner, giving consumers more choice over which programs
they want to purchase."[1] As a subscriber to digital cable, I
heartily applaud this sentiment as I am paying for content that I do
not watch. However, these conversations have been short-lived.
While a la carte programming is not yet available, there are two
separate events that will have huge impacts on consumers and their
television habits: digital television and IPTV.
Congress has mandated that, beginning in February 2009, all
television broadcasting convert to digital, a change known as the
digital television (DTV) transition. Digital is a more
efficient transmission technology that allows broadcast stations to
offer improved picture and sound quality, as well as more
programming options for consumers through multiple broadcast
streams (multicasting).[2]
And while not new (in fact, companies have been "trying to bring
the Internet to your home since 1996"[3]), it is becoming clear that
IPTV is moving to the forefront of providing consumers with
choices of how, what, when, and where they watch TV content.
According to a recent MSN.com headline, "Is the Internet Finally
Killing TV?", NBC is "streaming 2,200 hours of live competition

116
in 25 sports on the NBCOlympics.com website."[4] Even the
major networks recognize the need and value for presenting
content in non-traditional formats. With streaming, for example,
viewers watch the Olympics on their computer screens.
With IPTV, they can watch Internet content on their TVs.
So what is IPTV? Internet Protocol television is "becoming a
common denominator for systems where television and/or video
signals are distributed to residential subscribers via a broadband
connection."[5] Or, in other words, it is "the delivery of television
content over IP technologies by Internet service providers to their
subscribers."[6] IPTV is also a digital platform that allows
consumers to "customize and create their own content."[7]
Consumers will now have the opportunity to view Internet
programming on their high-quality digital television sets.
Just because consumers can watch programming on their
high-quality TVs does not necessarily mean that quality content
will naturally follow. While YouTube enthusiasts love watching
the snippets on their personal computers, Albert Cheng, executive
vice president of digital media at Disney/ABC argues that
"viewers will expect high-quality, professionally produced
shows."[8]
The bottom line: IPTV service will "bring fully personalized,
on-demand experience to the TV environment."[9] According to
Michael Lantz, guest columnist for TVB Europe, IPTV (Internet
TV) is reaching a tipping point, and content producers and
service providers need not only to keep abreast of these
developments but to be at the forefront of change.[10]
The Web 2.0 revolution no longer has passive
participants who merely absorb content from a myriad of content
providers. Rather, Internet users are fully participating in their own
experience, even creating it. Lantz argues that IPTV brings
value-added content in two ways: truly non-linear experiences and
interactive services.[11] While IPTV provides consumers with choices,
Peter Weitzel, principal technology manager for Siemens IT
Solutions & Systems, notes that "there remain considerable limitations
on what can actually be delivered."[12] He feels that "IPTV
services complement television, but they can only flourish when
they offer more than just broadcast content."[13] Director Paul
Kafno observed that "consumers are increasingly looking to
specialist channels that can easily be assembled by hobby
enthusiasts and other special interest groups." He continues, "users
are empowered to make their own choices rather than be dictated
to by schedulers. They are also compelled to make their own
programming and share it with others."[14]
And the delivery of content is not without confusion. According to
Colin Dixon, a broadband media analyst for The Diffusion Group,
"people are going to get television from the Web through a real
hodgepodge of technologies, including game console with movie
download services, hybrid set-top boxes that let you download
shows and watch satellite or cable, and Internet-ready TVs."[15]
According to a 2004 Harris Poll, 66 percent of all U.S. adults are
now online.[16] Another study shows that "almost half (48
percent) of all families with children between the ages of 2 and 17
have all four of the media staples: a television, a VCR, video game
equipment, and a computer."[17] Accordingly, there will be a
convergence of technologies and content to afford the consumer
with an enriched and rewarding experience that is tailored to meet
his/her own needs.
However, IPTV has its cheerleaders and its detractors. Members
attending the 2007 Royal Television Society Thames Valley Centre
Technical Colloquium noted that the money invested in IPTV
"represents a huge gamble."[18] Alex Gibbons, director, Digital
Media Sales, Europe, Akamai, argues that the "industry should
concentrate on delivering programming to the television via the
Internet for real business opportunities."[19] It appears that
billions are being invested in a not-yet-proven commodity, and yet
businesses must capitalize on these emerging technologies in order
to stay competitive. Iain Morris notes that the United States is "one
of the most closely monitored IPTV markets in the world."[20] The
two major competitors, AT&T and Verizon, are slowly
encroaching into territory previously dominated by major cable
providers and may package their IPTV services differently
(Verizon markets its FiOS-branded service as IPTV, while AT&T's is
truly IPTV), but their customers are slowly being offered more
choices in how content is delivered, accessed, and customized.[21]
While AT&T and Verizon are slowly negotiating the IPTV
horizon, Telekom Austria's Helmut Leopold is chairman of the
Broadband Services Forum, a body of "representatives from IPTV
operators, content providers, and equipment manufacturers that is
committed to exploring new IPTV opportunities."[22] Not only
programming is called into question, but companies are forced to
investigate how to contain costs while building infrastructures to
ensure "high-quality delivery of content over the Internet."[23]
Erik Keith, senior analyst for broadband infrastructure for Current
Analysis, notes that "all operators are going to have to implement
IP video of some sort to compete effectively in the long term."[24]
And consumers want "good, clear video signals, and they want it at
a good price, and they want to be able to pay one bill."[25]
What does the future hold for IPTV? James Health, of Dittberner
Associates, puts it very succinctly:

You have to build this IPTV network to keep your
broadband Internet customers from leaving, and there
are going to be some new applications you can put on it
and make some money. In the interim, you might as
well offer IPTV because it helps defer some of the
costs of building the network in the first place.[26]

Works Cited
DTV is Coming Sooner Than You Think. FCC Consumer Facts
Sheet. http://www.fcc.gov/cgb/consumerfacts/digitaltv.html.
Accessed August 17, 2008.
El-Sayed, Ying Hu, Samrat Kulkarni, and Newman Wilson.
"Comparison of Transport Network Technologies for IPTV
Distribution." Bell Labs Technical Journal 11.2 (2006): 215. Wiley
InterScience. Pacific University, Forest Grove, OR. July 7, 2008.
http://www.interscience.wiley.com
Gibbons, Alex. "Successfully Delivering Internet TV." TVB
Europe 17.5 (May 2008): 40. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
"Is the Internet Finally Killing TV?"
http://articles.moneycentral.msn.com/Investing
/CompanyActionDyn.aspx?cp-documentid=9405650. Accessed
August 17, 2008.
Lantz, Michael. "IPTV: When Do I Need to Care?" TVB Europe
17.4 (April 2008): 49. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
Morris, Iain. "Conflicting IPTV visions." Telecommunications
41.11 (Nov. 2007): 19. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
Meszaros, Peggy S. "The Wired Family: Living Digitally in the
Postinformation Age." American Behavioral Scientist 48. 377
(2004): 380. Sage Journals Online. Pacific University, Forest
Grove, OR. July 7, 2008. http://online.sagepub.com
Oral Statement of Kevin J. Martin, Chairman, Federal
Communications Commission, Before the Committee on
Commerce, Science, and Transportation United States Senate.
http://fjallfoss.fcc.gov/edocs_public/attachmatch
/DOC-262484A1.doc. Accessed August 17, 2008.
Porges, Seth. "The Future of Web TV." PC Magazine 26.24 (Dec.
4, 2007): 19. Computer Source. EBSCO. Pacific University, Forest
Grove, OR. July 7, 2008. http://search.ebscohost.com
Stewart, David. "TV via IP: Friend or Foe?" TVB Europe 17.2
(Feb. 2008): 30. Computer Source. EBSCO. Pacific University,
Forest Grove, OR. July 7, 2008. http://search.ebscohost.com

Footnotes
[1] Oral Statement of Kevin J. Martin, Chairman, Federal
Communications Commission, Before the Committee on
Commerce, Science, and Transportation United States Senate.
http://fjallfoss.fcc.gov/edocs_public/attachmatch
/DOC-262484A1.doc. Accessed August 17, 2008.
[2] DTV is Coming Sooner Than You Think. FCC Consumer Facts
Sheet. http://www.fcc.gov/cgb/consumerfacts/digitaltv.html.
Accessed August 17, 2008.
[3] Porges, Seth. "The Future of Web TV." PC Magazine 26.24
(Dec. 4, 2007): 19. Computer Source. EBSCO. Pacific University,
Forest Grove, OR. July 7, 2008. http://search.ebscohost.com
[4] "Is the Internet Finally Killing TV?"
http://articles.moneycentral.msn.com/Investing
/CompanyActionDyn.aspx?cp-documentid=9405650. Accessed
August 17, 2008.
[5] El-Sayed, Ying Hu, Samrat Kulkarni, and Newman Wilson.
"Comparison of Transport Network Technologies for IPTV
Distribution." Bell Labs Technical Journal 11.2 (2006): 215. Wiley
InterScience. Pacific University, Forest Grove, OR. July 7, 2008.
http://www.interscience.wiley.com
[6] Gibbons, Alex. "Successfully Delivering Internet TV." TVB
Europe 17.5 (May 2008): 40. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
[7] Porges, 19.
[8] Ibid.
[9] Lantz, Michael. "IPTV: When Do I Need to Care?" TVB Europe
17.4 (April 2008): 49. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
[10] Ibid.
[11] Ibid.
[12] Stewart, David. "TV via IP: Friend or Foe?" TVB Europe
17.2 (Feb. 2008): 30. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
[13] Ibid.
[14] Ibid.
[15] Porges.
[16] Meszaros, Peggy S. "The Wired Family: Living Digitally in
the Postinformation Age." American Behavioral Scientist 48. 377
(2004): 380. Sage Journals Online. Pacific University, Forest
Grove, OR. July 7, 2008. http://online.sagepub.com
[17] Ibid., 382.
[18] Stewart.
[19] Gibbons.
[20] Morris, Iain. "Conflicting IPTV visions." Telecommunications
41.11 (Nov. 2007): 19. Computer Source. EBSCO. Pacific
University, Forest Grove, OR. July 7, 2008.
http://search.ebscohost.com
[21] Ibid., 20.
[22] Ibid., 23.
[23] Gibbons.
[24] Barthold, Jim. "IPTV: The Time Has Come, Ready or Not."
Telecommunications 42.4 (June/July 2008): 21. Computer
Source. EBSCO. Pacific University, Forest Grove, OR. July 7,
2008. http://search.ebscohost.com
[25] Barthold, 23.
[26] Barthold, 23.

Managing Health Online: Developing a
Personal Health Record
By Shawn E. Davis

Introduction
Perhaps one of the most important aspects of one's life is
one's personal health. Unfortunately, this is also one of the facets
over which the individual feels the least control. During the past
several years, however, there has been a significant push by the
American public to become a more active agent in their health
care. Advancements in the Internet, web technologies and
communications, and other electronic tools are allowing
individuals to become increasingly informed consumers of health
information and more actively engaged in their health care than
was previously possible. One such tool that has shown great
promise in returning to the individual the management of one's
health is the online personal health record.

What is a Personal Health Record?


The personal health record (PHR) is a tool for collecting,
organizing, and sharing important and relevant health information
for yourself or for someone within your care [1]. It is through use
of a PHR that an individual is positioned to make better health
decisions, as it allows easy and immediate access to their
health information for their personal use and for effective
communication of this information with others concerned with
their healthcare [2]. Note that a PHR is created and maintained
by the individual and is different from the electronic health records
that are maintained by one's health care provider [3].
In 2004, President Bush made the commitment that all citizens
within the United States would have access to their electronic
medical records within 10 years. This pledge reflects the belief
among many in the great potential of health information
technology to significantly improve an individual's health and
health care. It is expected that the personal health record will serve
as an adjunct to established medical record keeping and that it will
serve as a lifelong resource useful for the individual as they make
health care decisions [4].

What Information is Contained in a PHR?


Of course, the actual content of your personal health record will
depend greatly upon the health care that you have personally
received. There are, however, documents that are common to most
personal health records, including [5]:

An identification document including contact information and
relevant insurance information.
Documents describing your individual and family history
including any major illnesses, surgeries (including operative
reports and discharge summaries), health habits, and current
and past medications (including any allergies to medicines
that might exist).
A record detailing any physical examinations and the results
of these examinations.
Documentation of any medical tests and laboratory reports
including x-rays, medical scans, ultrasounds, mammograms,
cholesterol tests, blood tests, etc.
Reports of consultations with health care professionals and
any subsequent recommendations (including progress notes
when applicable).
A complete record of all immunizations received.
Documents detailing any authorizations and consent for
hospital admission, medical treatment, and the release of
medical information.
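For readers who keep their records digitally, the categories above can be captured in a simple structured file. The following Python sketch is only an illustration of one possible layout; the field names are our own invention, not part of any standard PHR format, and any real system should of course add the privacy safeguards discussed later in this article.

```python
import json
from dataclasses import dataclass, field, asdict

# Illustrative field names only; no standard PHR schema is implied.
@dataclass
class PersonalHealthRecord:
    contact_info: dict = field(default_factory=dict)   # identification and insurance
    history: list = field(default_factory=list)        # illnesses, surgeries, medications
    examinations: list = field(default_factory=list)   # physical exams and their results
    test_reports: list = field(default_factory=list)   # x-rays, blood tests, scans, etc.
    consultations: list = field(default_factory=list)  # provider reports and progress notes
    immunizations: list = field(default_factory=list)  # complete immunization record
    authorizations: list = field(default_factory=list) # consents and release documents

    def to_json(self) -> str:
        """Serialize the whole record, e.g. for backup or sharing."""
        return json.dumps(asdict(self), indent=2)

# Example of adding an entry and producing a printable backup copy.
phr = PersonalHealthRecord()
phr.immunizations.append({"vaccine": "tetanus", "date": "2007-05-12"})
print(phr.to_json())
```

The point of such a structure is simply that each category of document described above has one obvious place to go, and the whole record can be saved or printed in a single step.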

How is a Personal Health Record Useful?


By maintaining the information contained within one's personal
health record in a unified location, an individual is provided
convenient access to their health history, both from home and
when traveling. The information contained within the record is
easily updated and can contain information beyond that of the
formal health records maintained by one's physician including
dietary and exercise information, goals for present and future
behavior, and any herbal or other non-prescription medications
that one might be taking.
Fundamentally, by maintaining a personal health record an
individual can play a more active role in their healthcare [6]. With
a comprehensive PHR, the individual can more knowledgably
discuss pertinent health information and concerns with their
healthcare provider. In situations when one must seek out a new
healthcare provider, the PHR allows the individual to provide
accurate and complete information to the new provider. A PHR
allows the individual to access their health information at any
time; they are no longer bound by the operating hours of their
healthcare provider. Furthermore, a detailed PHR provides a useful
source of information when dealing with health insurance
companies and allows for a convenient method of tracking
developing health care concerns.

How Do You Create a Personal Health Record?


The first step toward the creation of a personal health record is
requesting a copy of your health records from your healthcare
provider. For a more complete recording of your health
information, this should include requesting information from any
specialists that you might have seen as well as from your general
practitioner. It might be possible for you to obtain these records in
an electronic format [7]. Next, determine which mechanism you
will utilize to maintain your information. This mechanism can be
as simple as a file folder or as interactive as most
online maintenance systems (see the section on Online Maintenance of
a Personal Health Record below). The system that you will utilize
will, no doubt, reflect the complexity of your health information
and the frequency with which this information must be updated. Online
systems are particularly useful for individuals who have a
considerable amount of information to maintain and for those
individuals who have particular need to make this available to
others who are geographically separated.
It is quite possible that you will not be able to gather all of the
information that you will ultimately include in your PHR
immediately, as much of our health information is distributed
among a number of practitioners and even among family members.
Remember to update your PHR as you have future
interactions with healthcare providers and when you have the need
to add information to reflect your changing health knowledge and
concerns. Because your PHR contains a significant amount of very
private information, ensure that you keep it in a safe and protected
location (in the case of a physical copy of your PHR) or determine
if any online system that you might utilize has safeguards in place
to adequately protect your data.

Online Maintenance of a Personal Health Record


Personal health records have their origin as a paper-based
recording approach. Internet-based systems that have developed
during the last decade, however, offer significant advantages over
such traditional recording systems. Internet-based systems allow
for easy entering and access of information directly from one's
home computer. While there are a number of free online PHR
systems (see below), these are often limited in their functionality
and some individuals opt to utilize either a pay or subscription
service.
As mentioned previously, the move toward online record keeping
of one's personal health information offers many benefits for the
individual. First, it is becoming more commonplace for one's
healthcare professional to use electronic health records that can be
accessed through a specified patient gateway [8]. Keep in mind,
however, that this information will be limited to those interactions
that one has had with that particular professional or treatment
group. A true PHR is often more comprehensive in that it can
contain information from multiple sources, as well as from the
patient themselves. If your physician does utilize such an online
system, this allows you to view the information that your
physician has on record and you can, therefore, suggest corrections
if any of this information is wrong or incomplete [9]. This is
generally a good place to start in your collection of information
that will ultimately comprise your PHR.
Information in Internet-based PHRs is stored on a remote server
and, as such, is able to be accessed and edited from most any
computer with web access. Often, such online systems have the
capability to backup data, print the entire PHR or select portions of
the record, import data from other compatible information
systems, and share information with approved relatives and health
care providers. Also, some PHRs provide services such as
electronic messaging between individuals and their health care
providers and even the ability to check potential drug interactions
[10].
Unfortunately, as with any other form of online information, there
is the threat that PHRs may be vulnerable to unauthorized access
[11]. Also, because this information is stored electronically, there is
the possibility of loss because of damage to the web server
wherein it is housed. It is recommended, therefore, that even if one
is utilizing an online PHR system, they either download a copy of
their records to their personal computer (or other storage device)
or periodically print a hard copy for safe keeping.
Beyond concerns of unauthorized access, the savvy consumer will
determine who will have authorized access to their online health
information before an Internet-based PHR is established. This
information should be detailed fully on the website of the PHR. If
it is not, either contact the site administrators directly or avoid this
particular site. According to Dr. Steven Schwaitzberg, Associate
Professor of Surgery at Harvard Medical School and a medical
informatics expert, if you do decide to establish an electronic
medical record, only include information, "you wouldn't mind
reading on the front page of your local newspaper" [12].

Online Resources
A host of Internet-based systems have been developed that greatly
simplify the processes of creating and keeping one's personal
health record. There is considerable variety in the features
provided by these systems and a wise consumer will take the time
to compare the functionality of these systems to best identify one
that will meet their individual needs. Following are some online
locations wherein an individual can begin development of their
own PHR. This is, by no means, an all-inclusive list; it is merely a
representation of the vast array of free Internet sites dedicated to
the development of a personal health record [13].

Dr I-Net - http://www.drinet.com/
EMRy Stick - http://www.emrystick.com/download.htm
Health Butler - http://healthbutler.com/
Healthy Circles - https://www.healthycircles.com/
iHealth Record - http://www.ihealthrecord.org/
MedicAlert - http://www.medicalert.org
/home/Homegradient.aspx
Microsoft Health Vault - http://www.healthvault.com/
MyHealth Folders - https://myhealthfolders.com/
MiVIA - https://www.mivia.org/
PersonalMD - http://www.personalmd.com
/medrecordintro.shtml
TeleMedical - http://www.telemedical.com/records.html
World MedCard - http://www.worldmedcard.com/

There are also a number of premium sites wherein you will pay for
service. If none of the sites listed above provides the services and
functionality that you desire, these pay sites can be found with a
simple Internet search.

Conclusion
While recent in their formal development, online personal health
records provide a convenient way for an individual to organize
their health information. They have proven useful not only in
terms of ease of access to information, but also as a mechanism for
individuals to better retain a level of control over their
health and health care.

ENDNOTES
[1] Hassol A., Walker J. M., Kidder D., Rokita K., Young D.,
Pierdon S., Deitz D., Kuck S., & Ortiz E. (2004). Patient
experiences and attitudes about access to a patient electronic
healthcare record and linked Web messaging. Journal of the
American Medical Informatics Association, 11(6), 505-513.

[2] http://www.myphr.com
[3] Ibid.
[4] http://www.cms.hhs.gov/perhealthrecords/
[5] Ball, M. J., & Gold, J. (2006). Banking on health: Personal
records and information exchange. Journal of Healthcare
Information Management, 20(2), 71-83.
[6] Jeffs, D., Nossar, V., Bailey, F., Smith, W., & Chey, T. (1994).
Retention and use of personal health records: A population-based
study. Journal of Paediatrics and Child Health, 30(3), 248-52.
[7] Kim, M. I., & Johnson, K. B. (2002). Personal health records:
Evaluation of functionality and utility. Journal of the American
Medical Informatics Association, 9(2), 171-180.
[8] http://en.wikipedia.org/wiki/Personal_health_record
[9] Sittig, D., Middleton, B., & Hazlehurst, B. L. (1999).
Personalized health care record information on the Web. Presented
at the Quality Healthcare Information on the 'Net '99 Conference
held Oct. 13, 1999, New York, NY.
[10] http://en.wikipedia.org/wiki/Personal_health_record
[11] http://www.cnn.com/2008/HEALTH/06/05/ep.online.records
/index.html
[12] Ibid.
[13] http://www.myphr.com/resources/phr_search.asp

The Venus Fix
Rose, M. J. The Venus Fix. Ontario, Canada: Mira,
2006.
Review by Jeffrey Barlow, editor, Interface.
Here we review a work of popular fiction, a murder
mystery, as an index of the impact of the Internet. In this
case it gives us yet another look at the topic of the Internet and
"pornography,"[1] discussed in several places in this issue of
Interface.
M. J. Rose is more than qualified to discuss this issue. Her success
as the best-selling author of nine works of fiction alone has
depended to a large extent on the Internet. Her initial work, Lip
Service, was self-published in 1998 after being turned down by
hard copy publishers.[2] She packaged the manuscript as a
downloadable e-book and it opened up a very successful career for
her.
In addition to writing fiction, Rose has also worked in advertising
and wrote with Angela Adair-Hoy How to Publish and Promote
Online, and with Doug Clegg Buzz Your Book, both works
instructing others on how best to use the Internet to sell books.
Not only does Rose have excellent credentials for discussing the
Internet, but she might also be considered an expert in the issue of
erotica. Most of her works have an overtly sexual theme. It has
been said of Rose that she "... writes erotic better than just about
anyone..."[3] However, at least The Venus Fix, the only one of her
works I have read, is quite mild by contemporary standards. If I
had to classify the book, it would probably be in the Romance
genre rather than as erotica.
The novel proceeds in the first person voice of the protagonist, Dr.
Morgan Snow, who is a sex therapist. She is drawn into a series of
cases involving murders and attempted murders of women who
make their living by appearing on erotic Internet sites, live-action
sexually themed video cams, or "sex cams".
The Venus Fix is driven by an uncomfortable number of
coincidences, though these are explained in such a fashion that no
one of them seems overly objectionable. But the totality of them
might tend to discomfit a reader concerned more with plot than
with character development and dialogue. So far as the latter are
concerned, the book unfolds in a lively and diverting fashion and
by the end we do identify with the protagonist, though she seems
to lead an awfully hectic life. We want to say to her not "Hey, get a
life!" but rather "Hey, choose one life!"
In reviewing a novel, of course, we are constrained by the need to
not give away any details critical to plot development or its
denouement. But with Rose, as might be expected from her own
experience with the Internet, we are drawn quickly into the
complicated question of the Internet and pornography.
In working to uncover the murderer, Dr. Snow is able to proceed
with police assistance, and can work from a list of those who have
visited one or more of the victims' sites or have otherwise been
added to the list of suspects. In following Dr. Snow's investigation,
then, we meet a number of users of Internet pornography and are
given various explanations for their behavior.
In some cases, we also meet the women, young and old, whose
lives are impacted by the fact that the men (or boys) in their lives
habitually view erotica on the Internet. The fact that the voice
shifts rapidly and that many characters come and go rather quickly
makes The Venus Fix seem somewhat like a series of Internet
pages rapidly loaded by a hyper-kinetic user.
It is this aspect of the book which makes it of value to us in trying
to understand the link between the Internet and pornography. Rose
shows us not any one version of the Internet, but rather a series of
vignettes in which the Internet is little more than a tool. It is not, in
and of itself, evil.
In this sense, the Internet, viewed from Rose's perspective, is
somewhat different than the Internet presented in the other work
reviewed in this issue, John Burdett's Bangkok Haunts (See review
here). Burdett's character, a Bangkok policeman, approaches the
Internet as a Buddhist. To him, it is a typical manifestation of
Western materialism, hence capable of doing great damage to
those trying to live peaceful and ordinary lives.[4] Conversely,
Rose's protagonist approaches the Internet as a tool which can be
misused, but possessing no inherent moral qualities for good or
evil.
The many characters whose lives intersect with those who use the
Internet can, however, be subject to what amounts to collateral
damage. Because somebody in their lives spends a great deal of
time and effort—erotic energy if you will—on the Internet, their
own lives are damaged as well.
This problem is presented as particularly acute for girls in their
late adolescence, and for the young males in their lives because all
are too inexperienced to see the women in the web cams as a
well-constructed fantasy but rather confuse them with reality, with
frequently devastating effects.
We emailed the author upon deciding to review The Venus Fix and
asked her about the impact of the Internet upon erotic materials.
She responded immediately and helpfully:

The Story Behind the Novel:


I was at a bookstore signing and a young teenage girl
approached me to tell me how much she liked my
work. We got to talking, and she told me about how she
and all of her friends read the sexy parts in my novels.
That led to her eventually telling me that she and her
friends were having a hard time dealing with how
deeply the guys at school were into Internet porn.
At that point, I was halfway through writing the second
novel in the Dr. Morgan Snow series (THE DELILAH
COMPLEX) and wasn't ready to think about the third,
but I knew I was hearing about something that
mattered.
I actually started researching the next day doing
research with women involved in the online sex
industry and with dozens of teenagers, as well as
several therapists, and adults addicted to Internet porn.
I researched every group I wrote about in the book.
I've never written a novel before in which the research
was as disturbing or as troublesome as it was this time.
The implications of what I found out were far reaching,
not in the least because no one seems to have any
solutions other than to turn their backs and ignore the
problem.
Access of Internet sex sites by adolescents is a
problem without a seeming solution. Given the nature
of the Internet, it is almost impossible to effectively
monitor such sites; similarly, it is extremely difficult to
restrict an adolescent from accessing them.
In my research, I found that the kids with the most
involved parents had the fewest problems. I'm not
talking about sitting in the room and not giving them
freedom; I'm talking about spending real time with
them and discussing what the dangers are out there ---
being part of their lives. Some of the most
well-adjusted kids had parents who took them online,
showed them the sex sites, and let them see what they
were talking about.
My goal with every novel is to thrill my readers and
keep them turning the pages. But my mission is to find
the dark, disturbing subjects that pain us and shine
some light on them.
I hope I've done both with this one. The subject matter
deserves it.
Cheers,
M.J. [5]

As is evident in the above personal communication, as well as the
treatment in The Venus Fix, Rose does not see the Internet as
yielding to legal control. Her position, which strikes us as realistic,
is that the genie is out of the bottle. Now we must learn to live
with it, both for its strengths, and for its troubling weaknesses.
The Venus Fix, then, is far from the "viewing with alarm" sort of
fiction that threatens us with shadowy Internet-enabled
killers. The Internet is rather a tool which can, in the proper hands,
facilitate good lives, even among young women who find that
erotic acting on the web can be a useful employment at some point
in their lives. It is extremely unlikely, however, that John Burdett's
Buddhist protagonist could possibly see such work as a form of
"Right Livelihood," that is work of which the Buddha would
approve, if only because such sex cam operations depend above all
on sexual desire, ultimately the worst snare of all human drives.
The issue of pornography is also in Rose's view a highly nuanced
one. This is not unexpected as doubtless there are many who
would condemn The Venus Fix itself as pornographic by a broad
definition. Sexual activity is portrayed, though, in a far from explicit
manner, and in most cases viewed as not only normal but also
positive.
On the other hand, Rose's protagonist treats a group of teen-age
boys who "...were severely addicted to Internet porn..." (p. 75).
Her killer, whose voice drives the narrative at critical points, while
clearly portrayed as unhinged in the degree of hatred for young
women who appear in sexual contexts on the Internet, nonetheless
might speak for many who see such sites as pornographic by any
definition when it is stated:

"No one, not therapists, not lawyers, not teachers, not
parents, has the experience or the knowledge to deal
with our troubled children because they are a
mutation—the first generation who have been suckled
by twenty-four hour, easily accessible and practically
free instant gratification. Twenty-four hour poison." (p.
254)

It seems to us on balance that "pornography", in Rose's
perspective, is defined not by what it is, but by what it does. If
sexual content harms the viewer or others, it perhaps crosses over
from eroticism into pornography.
[1] As in the rest of this issue, we place the term "pornography" in
quotes here to signal its contested meaning. The author of the work
under review might well have preferred that we use "erotica", or
perhaps "adult material" to characterize her book. We use
"pornography" because we are trying to discuss these materials as
themselves socially contested; that is, part of American society
believes that such materials should not be available to anyone,
precisely because of the issues discussed in the two books we
review here.
[2] See her home page at: http://www.mjrose.com/
[3] From bookbitch.com, cited in book cover blurb to The Venus
Fix.
[4] For an analysis of Buddhism and the Internet, see our review of
Hershock, Peter D. Reinventing the Wheel: A Buddhist Response to
the Information Age. Albany, State University of New York Press,
1999. At: http://bcis.pacificu.edu/journal/2007/05/hershock.php
[5] At M. J.'s request we link these sites which related to
references in the above review:

M.J. Rose, Author (http://www.mjrose.com/);


AuthorBuzz (http://www.authorbuzz.com/);
Buzz, Balls & Hype (http://mjroseblog.typepad.com
/buzz_balls_hype/)

Bangkok Haunts
Burdett, John. Bangkok Haunts. New York: Vintage
Books, 2007.
Review by Jeffrey Barlow, editor, Interface.
One of the more interesting impacts of the Internet is the
manner in which it appears in fiction. This month both of
the books we are reviewing have plots which deal to a
considerable extent with the Internet as a vehicle for
"pornography".[1] M. J. Rose's work, The Venus Fix, takes a
nuanced but highly negative view. (See the review here) John
Burdett's Bangkok Haunts is both nuanced and ultimately
equivocal.
Burdett is a well-known author in what may be a relatively minor
niche: novels set in Thailand. His three works[2] feature the
Buddhist police detective, Sonchai Jitpleecheep, product of a
liaison between an American soldier and a Thai prostitute during
the Vietnam War. Sonchai moves uneasily in what is, to
Westerners, the vice-ridden depths of Bangkok.
To many Thais and to Burdett, a long-term resident of Thailand,
this demimonde is simply a very human and necessary
accommodation to living in the phenomenal world of the Red
Dust, where Buddhist saints, local Thais warped by their karmic
passages through their lives, various spirits, and blundering, barely
aware Westerners careen between comedy and tragedy.
Sonchai is a deeply moral Buddhist, until, as is frequently the case,
things just get too much for him, then he smokes yaa
baa—meth—or shacks up with one or more of the girls who work
in his mother's brothel. His mother became very wealthy by
opening the market for the first Viagra-fueled Thai bordello
exclusively for geriatric Vietnam era vets recruited from American
senior citizen's groups via the Internet. As Sonchai's mom's
enterprise shows, Burdett, like his protagonist, has a deeply
mordant sense of humor which will bring most readers to rueful
laughter at many points in his works.
Sonchai's boss, the chief of a Bangkok police subdivision, is one
of the largest crime lords in the city. A recurrent character is his
deceased partner who advises Sonchai from time to time on how to
best advance his personal karma amidst the violent confusion of
modern Bangkok.
One of the recurrent tropes of Sonchai's inner musings (and
Burdett's outer ones[3]) is the Thais' broad acceptance of every
possible human sexual permutation; "polymorphous perversity" is,
in Burdett's Bangkok, simply the human condition.[4] This is
broadly and often humorously contrasted with the Western
perspective, which consists of the private pursuit of pleasures of
which most of us feel obliged to publicly disapprove.
In this particular work, we meet Damrong, Sonchai's former lover
who is "...every Farang's (Westerner) idea of the perfect Oriental
lover." "...a world-class triple-A ****** in other words." (p. 289)
This woman is both perpetrator and victim, and we can say very
little more about the plot without giving away key points. Here we
simply say that this work raises a great many issues dealing with
the impact of the Internet on pornography.
Burdett drives home the gap between Thai and Western values regarding
sexuality and pornography with something rarely found in crime
procedurals, whether set in Bangkok or not: an appendix
containing a long piece by Timothy Egan in The New York Times
from 2000, "Erotica Inc.—a Special Report: Technology Sent Wall
Street into Market for Pornography."[5] This report, as discussed in
detail in our editorial in this issue (http://bcis.pacificu.edu/journal
/08/04/edit.php) discussed the manner in which community
standards, under the impact of the much wider distribution of
digital adult materials by increasingly more powerful corporate
interests, have been eroded.
While Burdett plays in his novel with the question of corporate
involvement, he raises an even larger issue: What, if anything, is
always and in every case definable as pornographic? In Bangkok
Haunts, what we might regard as "straight pornography," that is,
graphic depictions of sexual acts between consenting adults of
whatever gender (and in Mr. Burdett's works, gender often veers
into "whatever" terrain) is, while not legal, broadly
unobjectionable.
There are, however, some things that even Thais find
pornographic, however much that definition may have been forced
upon them by globally held values. These include child
pornography, and protecting children from pornography, a problem
made much more salient by the impact of the Internet. This latter
issue continues to be a problem even for the American legal
system,[6] as practiced and experienced as it is in adjudicating
matters pornographic.
One of Bangkok Haunts' plot points centers on another
element of pornography universally defined as objectionable:
"snuff films," in this case, films of murder as a sexual act.[7] But
like everything else in Sonchai's complicated multidimensional
world, even snuff films cannot simply be considered in isolation,
like the question of "pornography" itself.
In the third world, including some parts of rural Thailand, many
people continue to be so poor and miserable that a life as the
object of pornographic productions, even including snuff films,
can seem preferable to the alternatives of slow starvation or living
in subhuman conditions. Mr. Burdett takes us into this world, and
introduces us to carefully drawn characters who may seem to us to
inhabit some dreadful alternative reality. But to Mr. Burdett they,
like us, are merely living their lives according to the reality which
they inhabit and understand.
Among the several questions that the novel implicitly raises is
"Where, after all, does pornography rank on the hierarchy of
possible crimes?" Here Mr. Burdett's response to a question from
Adam Dunn in their interview is pertinent.[8]

(Dunn question): Pornography itself is hardly new;
what is the catalyst for the burgeoning piped-in porn
market in hotel chains? (Is it just the fluidity of digital
media transmission, or simply "a growing American
market that wants pornography in the home"? [302]
Increased business travel? Fear of AIDS? Surely it
cannot be something so mundane as a renewed interest
in fidelity.)
Burdett: I think if we read the article from the NYT,
appended to the book, the answer is clear: porn is a
massive growth industry because digital media permits
it to be viewed in secret. Porn was negligible when it
came in the form of postcards and bulky "men's"
magazines, started to take off with video shops, then
when total anonymity was available in the form of
downloads from the Net the shame factor was
eliminated and porn exploded — and is continuing to
do so. I admire your unquenchable optimism when you
ask if porn is a consequence of "a renewed interest in
fidelity". Once again, don't you think you are being
slightly too literal? It's like asking if the media's
fascination with violence is a "consequence of renewed
interest in security". How faithful would you feel if
your mind was on the girl in the porn video while you
made love to your betrothed? (How faithful would you
feel your partner was being if her mind was on the jock
in the porn clip?)[9] I think to Sonchai porn is simply
part of the "functional barbarism" of our times — an
abuse of applied science which allows us to cop out
emotionally, and even erotically. That's why he prefers
prostitution.

Burdett's Bangkok Haunts takes us into a very unfamiliar world,
one where virtually all crimes are relative to a wide variety of
possible outcomes, and where all things are enmeshed in the
endless and timeless Buddhist world of Karmic cause and effect,
including the Internet, murder, and pornography.
[1] We put the term in quotation marks to indicate our awareness
that its meaning is much contested. We could have used erotica, or
perhaps adult material, each of which might have been more or
less acceptable to various groups of our readers. In general in these
reviews, we understand the term pornography to be largely a
matter of legal definition.
[2] His first was Bangkok 8, the second Bangkok Tattoo; Bangkok
Haunts is his most recent.
[3] There is an excellent interview with Burdett by Adam Dunn
found at http://www.john-burdett.com/2007/05/09/interview-
with-john-burdett-for-adam-dunn-at-cobrapostcom/
[4] While it is always possible that Burdett's viewpoint is no more
than his personal opinion, his depiction of Bangkok morality
corresponds to widely accepted views firmly set in the Western
consciousness from at least the Vietnam war era forward.
[5] See the report online at: http://query.nytimes.com
/gst/fullpage.html?res=9B01EEDA1631F930A15753C1A9669C8B63#
[6] On July 23, 2008, as I was researching for this review, "...the
3rd U.S. Circuit Court of Appeals upheld a 2007 lower-court
decision that the Child Online Protection Act violated the First
Amendment since it was not the most effective way to keep
children from visiting adult websites." This report, by Library and
Information Science News (http://www.lisnews.org
/child_online_protection_act_overturned), has appended to it a
substantial list of comments indicating the range of public
opinions on this issue.
[7] The 1976 Japanese film In the Realm of the Senses, directed by
Nagisa Oshima, raised some of the same issues as Bangkok Haunts.
It was immediately banned in Japan, and only recently has the
uncut version been available to Japanese audiences. The events
behind the story in the film lay in an erotic murder-suicide from
the 1930s in Japan. See the excellent if pedantically postmodern
analysis by Freda Freiberg, "The unkindest cut of all? Some
reflections on the recent cinematic release of the uncut version of
Nagisa Oshima's Ai no corrida (1976)," found at:
http://www.sensesofcinema.com/contents/01/12/senses.html
[8] http://www.john-burdett.com/2007/05/09/interview-with-john-
burdett-for-adam-dunn-at-cobrapostcom/4/
[9] These questions speak to an important issue raised by M.J.
Rose in the other work under review here, The Venus Fix.
http://bcis.pacificu.edu/journal/08/04/rose.php
X? XX? Or XXX? The Internet and
Pornography
Editorial Essay by Jeffrey Barlow, editor, Interface
One of the undeniable impacts of the Internet is that it has
clearly accelerated the distribution of
"pornography".[1] In our July Berglund Summer Institute
[2] our topic was "Web 2.0: The Wisdom and Madness of
Crowds." From our keynote speaker, Ward Cunningham, the
inventor of the Wiki, through all of our speakers, we heard much
of wisdom and very little of madness. As the last speaker, I felt it
my duty, and perhaps my special talent, to introduce at least a
touch of madness to the discussion.
I did so by raising some recent criticism of the Internet, and
particularly of Web 2.0, which is, of course, a general term used to
describe the user-created portion of the web, particularly
interactive social sites and other distributed sites where many
authors produce content. Two works that I chose to single out raise
the issue of pornography. These were:

Siegel, Lee. Against the Machine: Being Human in the
Age of the Electronic Mob. New York: Spiegel & Grau,
2008.
and
Keen, Andrew. The Cult of the Amateur: How Today's
Internet is Killing Our Culture. New York: Doubleday,
2007.

We have reviewed Keen earlier and will review Siegel in a future
posting of Interface.[3] While these books are quite different in
tone, both focus ostensibly upon Web 2.0 and generally agree on
the Internet's crimes and failings, including pornography.
According to Siegel, the Internet has commercialized pornography,
making it commonplace and less objectionable. Keen fulminates
that the Internet plays host to "...an infestation of anonymous
sexual predators and pedophiles."[4]
At Interface, we have, for quite some time, been searching for an
editor both capable and desirous of dealing with the inflammatory
issue of the Internet and "pornography".
Perhaps because of these very controversies, we have been
unsuccessful in finding such an editor and hence have decided to
take matters into our own hands. Classically, intellectuals who
write pornography or who write about it resort to pseudonyms or
risk losing their jobs or at least their reputations in polite company.
In preparation for writing this editorial we slunk down to Powell's
City of Books, to the intersection of "Mystery" and "Erotica."
There we hoped to find some works which would be useful
sources for our research. Though we did not find any immediately
relevant titles, we did purchase a couple of anthologies and
subsequently read them, out of a sense of duty, of course.[5]
Our reading of these works indicates that so far as anthologies go,
Susie Bright is certainly the editor to beat. She has been editing
erotic anthologies for a number of years and has at least fourteen
titles out at present.[6]
We emailed Ms. Bright, inviting her to write for us, discussing the
issue of the impact of the Internet on adult literature. She was very
helpful, but as might be expected from somebody who has
published at least fourteen books, also quite busy with lecturing
and writing, and at least temporarily, demurred. Hence, we are on
our own; we will attempt some observations then, based on our
extremely limited research.
In resorting first to written materials in order to enhance our
understanding we were, of course, following an old path. The first
extended discussion over pornography in the United States took
place following the 1934 publication in Paris of Henry Miller's
Tropic of Cancer. The work was then banned in the United States
but proved to be the book that substantially changed pornography
laws here when many intellectuals championed it, not under the
laws of free speech, but as "art." [7]
So for a long time, then, pornography meant first and foremost
written materials. But those attempting to see a connection
between written pornography and the Internet, as I initially
attempted to do, will be disappointed.
After reading several works of fiction and a number of
anthologies, and browsing sexually themed material on the
Internet, it seems to me incontestable that the impact of the
Internet on pornographic writing has been minimal in terms of
content, subject, style, plot—you name it. There is no indication in
anything that I have seen that we should consider the Internet as
anything other than an electronic means of publishing, so far as
written erotica is concerned. I would be delighted to learn of
pertinent works which contradict this conclusion.
What the Internet has clearly done, however, is to exponentially
increase access to a much wider array of erotic materials. This has,
for younger generations, changed the very definition of
pornography. The current college generation is largely puzzled at
the notion that writing could be pornographic in any legal sense.
To them, pornography is graphical—pictures.[8]
For quite some time, graphical erotica meant "men's magazines,"
notably of course, Playboy. After hitting a high of 6.6 million
copies in circulation in 1972 and expanding to include film
production, by 1986 circulation was down to 3.4 million; by 2005,
down to about 3 million. In the fall of 2005, it began to offer a
digital on-line edition.[9] In 1996 Congress, like Playboy, noticed
this seismic shift in the distribution of adult materials and passed
the Communications Decency Act in an unsuccessful attempt to
control porn on the Internet.[10]
At that time it was estimated that Americans spent about $5 billion
to $8 billion on porn each year; sex sites accounted for 40% of all
Internet traffic; satellite and cable operators earned about $800
million a year from adult movies, or about 40% of pay TV and
on-demand TV revenue.[11] All of these, with the exception of the
declining men's magazines and analogue video, are digital
materials, illustrating the impact of such materials on the print
industry.
This replacement of writing and print by digital materials is clearly
an impact of the Internet, and of digitization in general. Digital
porn is much cheaper to produce than printed materials or
analogue films, and, most importantly, far cheaper to disseminate
via the Internet.
The impact of the Internet was immediate and substantial. We
leave the definition of pornography up to attorneys, but clearly
there is a lot more of something out there. A Google search on the
term "sex cam" for example will turn up 7,280,000 hits, most of
them sites purveying apparently either downloadable or real-time
actual or simulated sexual activity. A search on "XXX" will turn
up an incredible 360,000,000 hits. X-rated films and film clips are
all over the Internet and, for the very patient who are willing to
invest in the fastest computer gear and Internet connections, are
now downloadable from vast archival sites. This has not only made
pornographic or adult materials much more widely available, it has
also introduced two new complexities.
One of these complexities is that this new digital pornography is,
emphatically, not high culture. One of the criticisms fundamental
to both the works of Siegel and of Keen cited above is precisely
this: the Internet has made it possible for a lot of amateur writers
to be read. Very often their voice is raw, even ugly—at least by
traditional or professional standards.
There is a world of difference between this new Internet erotica,
and Henry Miller's works. This shift in what we might call the
class basis of pornography[12] is one factor that makes it a
particularly controversial issue. While intellectuals might side with
a noted erotic writer, at least after a few recognizably important
literary figures have done so, few are likely to rally behind even a
small percentage of the digital adult materials as in any manner an
artistic expression. It is perhaps relevant to note that whereas adult
materials were once defended, like the Tropic of Cancer, on the
grounds that they were art or literature, the main defense of such
materials has now become that they are protected by freedom of
speech or expression.
The second complicating factor resulting from the rapid growth of
the Internet is its impact on the doctrine of community standards,
for some time the foundation of legal prosecutions of pornography.
One of the two books reviewed in Interface this month, John
Burdett's Bangkok Haunts, introduced us to an important report by
Timothy Egan in The New York Times from 2000, "Erotica Inc.—a
Special Report: Technology Sent Wall Street into Market for
Pornography"[13] which speaks directly to the issue of community
standards. It also deals not with written pornography, but with
graphical images, also an important element of the new
environment in which pornography is usually prosecuted or
discussed.
Egan's piece dealt with a widely reported case in a very
conservative county of Utah wherein a local video storeowner was
charged with renting pornographic films in violation of community
standards.
However, when the defense attorney on the case, a Mr. Spencer,
checked into video rentals at the Marriott hotel in Provo, Utah,
where the trial was being held, he found that the Marriott did a
booming business in renting adult videos, in some cases the same
titles that his client was charged with renting. As Egan relates:

Why file criminal charges against a lone video retailer,
Mr. Spencer argued, when some of the biggest
corporations in America, including a hotel chain whose
board of directors includes W. Mitt Romney, president
of the Salt Lake City Olympics organizing committee,
and a satellite broadcaster heavily backed by Rupert
Murdoch, chairman of the News Corporation, were
selling the same product?
"I despise this stuff -- some of it is really raunchy," said
Mr. Spencer, a public defender who described himself
as a devout Mormon. "But the fact is that an awful lot
of people here in Utah County are paying to look at
porn. What that says to me is that we're normal."[14]

The defendant was immediately acquitted. The Governor of Utah,
however, responded by seeking funding for a "pornography czar"
at the state level to help local communities successfully prosecute
subsequent cases.[15]
The legal argument for prosecuting, or even defining, pornography
in the United States has, for more than forty years, rested on the
issue of "community standards."[16] That is, pornography is what
the community (in practice we can define "community" as the
group from which jurors might be drawn in attempted prosecutions
of pornography cases) says it is.
This question of local standards, while establishing a legal basis
for defining pornography after courts groped for almost 20 years to
do so following the 1957 Supreme Court ruling in Roth v. United
States, opened up additional issues. It was the gap between public
protestations as voiced in laws, and private behavior as practiced
in large chain hotels that resulted in the Utah acquittal.
These two complexities are nothing, however, compared to the
coming storm threatening attempts to control or prosecute
pornography. The ability to download films on demand to one's
private space will have a number of immediate consequences.
First, the tattered community standards defense will be very
difficult to uphold; it will be almost impossible to prove that any
community as such was impacted. Secondly, the distributor, rather
than being easily arrested and tried, will likely be in some distant
land with little or no interest in such prosecutions. Thirdly, as Egan
points out, the fact that large commercial interests are even now
engaged in production and distribution means that any attempts to
legislate or prosecute will encounter opponents with very deep
pockets indeed.
All of these are reasons why, I think, the question of how to
control pornography is increasingly restricted to issues dealing
with children.[17] By so doing, we surely approach what is a
near-universal human value: children should not be sexually
exploited and may well be harmed by viewing sexual activity at an
age when they are unable to understand its nature.[18] This issue is
not proving any easier to adjudicate than were earlier attempts to
control pornography. But we are probably safe in thinking that
most of us know this particular form of pornography when we see
it, and that we may have a true community standard, hopefully
even a global one.
[1] We place this term into quotation marks upon first using it in
each of the articles in which it appears in this issue, to remind our
audience that the meaning of the term is highly contested, and that
one person's pornography may well be another's erotic writing, or
perhaps even "normal" or "ordinary."
[2] See http://bcis.pacificu.edu/summerinstitute/2008/
[3] See Keen review at http://bcis.pacificu.edu/journal/2007/05
/hershock.php
[4] Keen, 7.
[5] Russ Kick (ed.), Hot off the Net: Erotic and Other Sex Writings
from the Internet. San Francisco: Black Books, 1999, initially
seemed very promising, but the Internet in fact figures in none of
the works included, as far as I found. The work does contain a list
of erotic sites in the appendix, most of which no longer exist,
based on my sampling. We also picked up Susie Bright (ed.), The
Best of Best American Erotica 2008. New York: Touchstone, 2008.
[6] See her pages at: http://susiebright.blogs.com
/susie_brights_journal_/Resume_Susie_Bright.html
[7] There is a strongly appreciative piece on Miller in Wikipedia,
found at: http://en.wikipedia.org/wiki/Henry_Miller
[8] My source here is a discussion with our web master, Maria
Walters, a graduating math major at Pacific University.
[9] See USA Today, Money, "Playboy to hit Internet with digital
edition," 8/24/2005, at http://www.usatoday.com/money/media
/2005-08-24-playboy-online_x.htm
[10] See complete text at: http://www.fcc.gov/Reports
/tcom1996.txt The act, of course, was quickly ruled
unconstitutional.
[11] See Michael Brush, "Company Focus: Can Playboy pull a
rabbit out of the hat?" MSN Money, 11.23.2005, found at:
http://moneycentral.msn.com/content/P133917.asp
[12] There has always been a lower-class pornography, but those
raised safely in the middle classes rarely saw it. Now, we assert
totally without evidence, it is pushing out the safer sanitized
airbrushed versions that were once dominant.
[13] See the report online at: http://query.nytimes.com
/gst/fullpage.html?res=9B01EEDA1631F930A15753C1A9669C8B63#
[14] http://query.nytimes.com
/gst/fullpage.html?res=9B01EEDA1631F930A15753C1A9669C8B63#
[15] There is additional material on this case and its antecedents to
be found at: Michael Janofsky, "Utah Law Creates First
'Pornography Czar,'" The New York Times, March 16, 2000, found
at: http://query.nytimes.com
/gst/fullpage.html?res=9905E4D81F3BF935A25750C0A9669C8B63
[16] For an at least comprehensible discussion of the legal history
of this issue, with particular relevance to pornography and the
Internet, see: http://www.enotes.com/everyday-
law-encyclopedia/pornography We reproduce from the above site
the legal history of this issue here:
"Milestones in the development of Internet pornography law
include the following.

The Supreme Court established that obscenity is not
protected by the First Amendment in Roth v. United States
(1957), declaring obscenity to be "utterly without redeeming
social importance."
After subsequent cases showed the difficulty of finding a
conclusive definition of obscenity, the Court restated its
definition in Miller v. California (1973). It substituted a detailed
three-part test ultimately to be used by each locality--the
so-called "community standards" test.
The Court ruled that child pornography is not a form of
expression protected under the constitution in New York v.
Ferber (1982). It has also upheld a state law prohibiting the
possession and viewing of child porn in Osborne v. Ohio
(1990).
Seeking to control Internet porn, Congress first passed
legislation in 1996. The Communications Decency Act (CDA)
criminalized the dissemination over computer networks of
obscene or indecent material to children. Immediately
blocked from enforcement by the courts, it was ruled
unconstitutional under the First Amendment in 1997.
Seeking to update federal child pornography law for the
Internet, Congress passed the Child Pornography Prevention
Act (CPPA) of 1996. Among other features, the law
criminalized any visual depiction that "appears to be" child
pornography, including so-called virtual porn created by
computer. After lower courts struck down provisions of the
statute, the U.S. Supreme Court agreed to hear an appeal
in Ashcroft v. Free Speech Coalition, with a verdict expected
in late 2002.
The Child Online Protection Act (COPA) of 1998 revived the
CDA by modifying its scope. COPA criminalized the use of the
World Wide Web to sell material harmful to minors. Ruled
unconstitutional, the case remained on appeal before the
Supreme Court with a decision expected by summer 2002.
The Protection of Children from Sexual Predators Act of 1998
included Internet-specific provisions for reporting child
pornography to authorities and prohibiting federal prisoners
from being allowed unsupervised Internet usage.
Two federal laws regulate access to Internet pornography at
libraries and schools, the Children's Internet Protection Act
(CIPA) and the Neighborhood Internet Protection Act.
Together, they require so-called filtering software to be
installed on computers in public schools and libraries as a
condition for federal funding. Both laws were challenged in
court in early 2002, with their outcome uncertain."
[17] Note, in the list of legal milestones reproduced immediately
above, the extent to which legal issues have almost entirely
revolved around child pornography on the Internet in recent
decades.
[18] There are of course outlaw regimes and remote corners of the
world economy where the rule of law has not yet reached, but
these are increasingly circumscribed.
Interactive Engagement Learning
Strategies in an Optometry Classroom
Setting
By James J. Butler and Stephen C. Hall

I. Introduction
We teach optics to a large class of about 90 students in
Pacific University's College of Optometry. Like most instructors,
we worry about the effectiveness of our class. Are the students
getting the most from their classroom time? How do we keep
every student engaged? How do we get students to read the text
before class? Is there a better way to teach? For answers, we
turned to modern research about student learning. The recurring
message is that successful classrooms are ones in which students
are actively engaged in their learning. The Physics Education
Research (PER) community has developed a number of techniques
and strategies that have been shown to be successful at promoting
better and more efficient learning. We used modern technology to
implement two of these in our course. In one we used the Internet
to deliver questions and gather responses before each class. In the
other we armed our students with "clickers" and used our lecture
time to promote discussion amongst our students to help them
actively engage the material. We believe these changes resulted in
better conceptual understanding of basic optics by our students.
In order to motivate our use of PER-based teaching techniques, it is
best to begin with a look at some issues involved with traditional,
lecture-based classes. These classes tend to promote a passive
view of learning in which the teacher transmits information
directly into the students' minds through spoken and/or written
material. It is implicitly assumed that this information is
completely and thoroughly received by the students so that they
have been "educated." In this view, students are passive
participants in the process. This encourages students to come to
class unprepared. Why read the textbook if the teacher is going to
tell you all you need to know in class? In addition, lecture is often
a one-way street, with information only going from teacher to
learner. Because there is little information flowing from the
learners to the teacher, it is difficult for teachers to judge what
students find confusing and why. As a consequence, lecture is
inefficient since it spends equal time on that which students find
easy and that which they find difficult.
A great deal of research, in the fields of education, psychology and
physics education, has shown that there are many problems with
this view of teaching [1]. This work suggests a different view of
learning, one in which knowledge is not simply transmitted, but
must be constructed by the learner through social interactions
(both with the teacher and other students) involving the material of
interest. This view of learning suggests that in a successful class,
learners are actively engaged in constructing their own knowledge.
Consistent with this view, years of study by the Physics Education
Research (PER) community have shown that (at least in physics)
more effective teaching is done using Interactive Engagement (IE)
strategies. IE strategies involve some form of active engagement
of the students, such as studio style classes [2], prediction-
observation activities [3], peer instruction [4] or small-group
tutorials [5].
Some of the evidence for this view is provided by conceptual
diagnostic exams developed by the PER community that provide
objective assessment tools. For example, the Force Concept
Inventory tests basic understanding of forces, and can be used to
evaluate introductory physics classes [6]. These instruments are
usually multiple-choice exams that tend to emphasize conceptual
understanding, require little calculation, and are given pre- and
post-instruction. One of the most important uses of the diagnostic
exams is as a tool to assess the impact of a new curriculum idea or
an innovative pedagogy.
To compare students whose different initial backgrounds result in
different pre-test scores, one calculates the normalized gain <g>=
(post-test score - pre-test score)/(max possible score - pre-test
score) which represents the fraction of possible improvement
achieved.
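The normalized gain is simple to compute. As an illustrative sketch (the function name and the example scores are our own, not taken from any of the studies cited):

```python
def normalized_gain(pre, post, max_score):
    """Normalized gain <g> = (post - pre) / (max - pre):
    the fraction of possible improvement actually achieved."""
    if max_score == pre:
        raise ValueError("pre-test score already at maximum; gain is undefined")
    return (post - pre) / (max_score - pre)

# Example: a student scores 8 of 30 on the pre-test and 19 of 30 on
# the post-test, so 11 of the 22 available points were gained.
g = normalized_gain(pre=8, post=19, max_score=30)
print(round(g, 2))  # 0.5
```

Because the gain is normalized by the points still available, a student who starts near the ceiling is not unfairly compared with one who starts low.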
One of the most striking results to come from the use of concept
exams is the comparison of traditional, lecture-based instruction to
"interactive engagement" (IE) instruction.
Figure 1 shows the results of one such study by Redish et al.,
comparing standard lecture-based instruction classes to ones that
regularly incorporate an IE activity [7]. As seen in the figure, the
average gain in traditional classes was around 0.20. This means
that students only improved their conceptual understanding by 20
percent in these classes. On the other hand, the average gain in IE
classes was between 0.35 and 0.45. These data clearly show the
advantage of IE strategies: on average, students improve their
conceptual understanding by roughly twice as much as in
traditional lecture classes. More detailed analysis by the PER community
has been done that shows this trend is widespread, across class
size, school type and other factors [8]. In fact, this result is so well
accepted that an average normalized gain of 0.4 on a suitable
diagnostic exam has been taken as the benchmark to determine if
IE strategies have been successfully implemented in a class. These
results from the PER community provide a strong motivation for
us to incorporate IE activities into our classes.

Figure 1: Results of study by Redish et al. [7]
comparing average student gains in classes with
different learning styles. Plotted is the fractional gain
achieved on the FCI in three types of classes:
traditional, moderate active engagement (tutorial/group
problem solving), and strong active engagement (early
adopters of workshop physics). Histograms are
constructed for each group and fit with a Gaussian,
which is then normalized.

II. Our Optics Course


We implemented IE teaching strategies in our OPT 501/502
(Geometric and Physical Optics) class, which is a one-year,
introductory optics sequence taught in the College of Optometry
and taken by all first year optometry graduate students. Typical
enrollment is about 90 students per semester. Each student in the
College of Optometry is required to have a laptop, and the
classroom is equipped with a wireless network and a computer
projection system. There are three (Fall semester) or two (Spring
semester) 55 minute class periods each week, as well as a weekly
2 hour lab period. There are daily readings from the required text
and the instructors' notes are provided electronically before class.
Homework problems from the text are suggested but not graded.
There are two or three exams during the semester as well as a
cumulative final exam. The exams consist of both multiple-choice
questions and worked-out problems. The authors co-taught the course
during the 2007-08 academic year, generally alternating classes.
III. Pedagogical Techniques: Peer Instruction and Just in
Time Teaching
We implemented two pedagogical techniques to create an
interactively engaged classroom: Peer Instruction (PI),
implemented by a Personal Response System, and Just in Time
Teaching (JiTT). Both are well-established techniques developed
by Physics Education Researchers.
Peer Instruction is a technique to get students to teach each other,
recognizing that some of the best learning occurs when you must
explain your understanding to someone else. This technique is
particularly useful for promoting IE in large classroom
environments. We use the following PI cycle. We present the
students with a conceptual question and ask them to answer on
their own without talking with their classmates. We collect their
answers and display the results to the class without indicating
which answer is correct. Unless a large percentage of the students
(more than 80 percent) answer correctly, we ask the students to
find neighbors who disagree with their choice and discuss their
reasoning. After a short discussion period (2-3 minutes) we ask the
students to answer the question again and we display the results
once more. Typically, a greater number of students will have
selected the correct answer, and in fact in most cases the correct
answer will be selected by greater than 80 percent of the students.
If this is not the case, we take the opportunity to provide more
instruction on the topic. This process allows students to organize
their thoughts and create a coherent argument about the subject,
and has been shown to reinforce the conceptual understanding of
physics [4]. In a typical 55-minute class session, we spend about
15-20 minutes lecturing to highlight the most important concepts
or particularly subtle points from the reading. The rest of the class
time is spent on PI questions or having the students work sample
problems.
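The PI decision loop described above can be sketched in a few lines of code. This is a minimal illustration only; the `poll` callable is a hypothetical stand-in for the clicker system and none of the names come from the study.

```python
def peer_instruction_cycle(question, poll, threshold=0.80, max_rounds=2):
    """Sketch of one Peer Instruction cycle as described above.

    `poll(question)` is a hypothetical stand-in for the clicker
    system: it collects the class's answers and returns the
    fraction of students who chose the correct one.
    """
    for round_num in range(1, max_rounds + 1):
        fraction_correct = poll(question)   # collect and display answers
        if fraction_correct >= threshold:   # most of the class is correct:
            return "move on", round_num     # no discussion needed
        # otherwise: 2-3 minutes of peer discussion, then re-poll
    return "provide more instruction", max_rounds

# Illustrative run: 45% correct before discussion, 85% after.
results = iter([0.45, 0.85])
outcome, rounds = peer_instruction_cycle("ray 3", lambda q: next(results))
# outcome == "move on" after the second poll
```

The 80 percent threshold and the two-round limit mirror the cycle in the text; in practice the instructor, not a fixed loop, decides when to reteach.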
To collect answers quickly and accurately from 90 students we
used a Personal Response System (PRS), or "clickers". Clickers
are handheld devices that allow students to transmit their answers
to the instructor's computer where they may be displayed and
stored for future use. Older PRS systems used IR transmitters
while newer ones use RF transmitters, which can transmit more
data faster. We used RF transmitters from Interwrite [9] shown in
Figure 2. The devices allow students to answer a wide range of
question types including True/False, multiple choice, numerical
answer and multiple selection. The screen provides students
feedback when their answer has been received. Each clicker costs
about 30 dollars and the RF receiver that is connected to the
instructor's computer costs about 100 dollars. Free software is
provided to run the system and display the results.

Figure 2: The "clicker", part of the PRS system from
Interwrite [9] used in the study.

Figure 3 shows an example of a question used in a PI cycle. Figure
4 shows the student responses before and after discussion, showing
the dramatic improvement described above. Figure 5 shows
another example. In this case there are actually three PI questions,
one for each numbered incident light ray. Figure 6 shows the
student responses for ray 1 and ray 2. In each case, a large fraction
of the students chose the correct answer the first time. When this
happens, we do not complete the PI cycle by having the students
discuss their answer. In the case of ray 3, however, the first set of
answers showed greater student confusion, as shown in Figure 7, and
so the full PI cycle was done. After student discussion, the correct
answer was selected by a large fraction of the students.

Figure 3: Example of a question used in the Peer
Instruction cycle.

Figure 4: Student responses to the question shown in
Figure 3 before (blue) and after (red) discussion.

Figure 5: Another example of a question used in the
Peer Instruction cycle.

Figure 6: Student responses for (a) ray 1 and (b) ray 2
in the question shown in Figure 5. Because the majority
of students answered correctly, the Peer Instruction
cycle was not continued for these questions.

Figure 7: Student responses before (a) and after (b)
discussion for ray 3 in the question shown in Figure 5.

Since the Interwrite PRS allows a variety of question types in
addition to multiple choice, instructors can go beyond the standard
multiple choice style question. Figure 8 and Figure 9 show two
different kinds of questions that can be implemented with the
multiple selection question type. Ranking questions in particular
can be very powerful probes of student understanding [10].

Figure 8: An example of a multiple selection question,
in which several answers can be selected.

Figure 9: An example of a ranking task question.

A PRS provides a number of advantages to instructors. First, it
gives students a way to actively engage with the material during
class. Instead of simply sitting in their seat, listening to a lecture,
they must use their understanding of the material to answer a
question. Also, the students can answer anonymously, which frees
them from peer pressure and allows them to answer based on their
own reasoning. Second, a PRS provides immediate feedback to
both instructor and students. As opposed to hearing answers to
questions posed in class from just a small number of students,
clickers enable the instructor to collect answers from the whole
class. The instructor can quickly determine whether the class is
"getting it," which allows the instructor to tailor the class to the
students. If the students demonstrate understanding of a concept,
the instructor can move on. If not, the instructor can provide
additional instruction, including additional PI questions. The
feedback provided to the students is also important. Students get to
test their understanding and immediately find out if they are
understanding the material correctly. Also, they can see how their
classmates are struggling with the material. Very often students
feel like they are the only ones who are struggling in a class. It can
be reassuring for such students to see that other students are
confused, too. Additionally, a PRS allows the questions and
student responses to be archived. This can be useful for research
purposes and to allow instructors to evaluate the success of
individual questions used throughout a semester. The system also
allows instructors to track responses from individual students, to
assign grades or give credit for participation.
In conjunction with PI, we also used JiTT, a pedagogical technique
pioneered by Novak and Patterson that seeks to maximize the
effectiveness of class time [11]. It does this by motivating students
to read the textbook before class and by helping the instructor
understand what areas students find difficult. We accomplished this
via Web Warm-Ups, a small number (2-4) of short-answer or
multiple-choice questions that are posted on the web a day or more
before a class meeting. An example is shown in Figure 10. The
students are expected to read the required material and submit their
answers to the questions on the web before attending the class. The
questions are often designed to help the student focus on the most
important or most difficult material from the reading. The
instructor views the student responses before the class meets in
order to tailor the class to provide help in the areas that the
majority of students are struggling to understand. This makes the
in-class time more efficient since the majority of time is spent
where it is needed most.

Figure 10: A screen shot of the web interface used to
collect student responses to Web Warm-ups.

JiTT and PI are known to be effective IE strategies on their own.
However, we have discovered that the use of Web Warm-Ups and
PI together provides unexpected advantages. First, a Web
Warm-Up question can serve as the first half of a PI cycle. In this
case the instructor shows the student responses to the warm-up
question in class and then asks students to discuss with their
neighbors and answer again. As an example, Figure 11 shows a
question used in this way. Figure 12 shows the student responses
to this question on the Web Warm-up and after discussion in class,
showing the marked increase in the number of correct responses.
This use of Web Warm-Ups as part of the PI cycle frees up time in
the classroom for additional material or IE activities. Second,
written responses to short answer questions posed in Web
Warm-Ups can be used to gain insight into student misconceptions
and provide appropriate distracters for peer instruction questions.
This is a valuable aid for the instructor when developing new PI
questions.

Figure 11: Example question from a Web Warm-up
that was used as the first half of a PI cycle. The
question was shown in class and students answered
again after discussion. Student responses are shown in
Figure 12.

Figure 12: Student responses to the question shown in
Figure 11, from the Web Warm-up (blue) and in class
after discussion (red).

IV. Results
To measure whether our pedagogical changes were successful we
administered a diagnostic conceptual exam as discussed in the
introduction. Unfortunately, there does not exist a standard
diagnostic concept exam for optics. However, we were able to
obtain a draft concept exam from Sokoloff, who has helped design
several other concept exams that are commonly used in the physics
community [12]. We modified the exam slightly for our optometry
student audience, and split the exam into two parts covering
geometric optics and physical optics. The geometric optics exam,
administered in the fall, had 36 multiple-choice questions and took
about 30 minutes to complete. The physical optics exam
administered in the spring had 16 multiple-choice questions and
took about 20 minutes to complete. Each exam was given at the
start and end of the semester. The average normalized gain in the
fall was <g>=0.47 with a standard deviation of s=0.17, and in the
spring it was <g>=0.5 with a standard deviation of s=0.24. These
results are consistent with previous results from diagnostic concept
exams for an IE class (see Figure 1), which suggests that the
pedagogical techniques we adopted successfully promoted
interactive engagement by the students. However, we must be
cautious about our conclusion. The diagnostic concept exam we
used has not undergone the rigorous validation that the other
commonly used tools cited earlier have. Also, we do not have data
from a traditionally taught course since we were not able to give
the concept exam in our course before implementing the new
pedagogical techniques. Because of this, we cannot say with
certainty that our results represent an improvement over a
traditional course. However, previous research has consistently
shown that traditionally taught courses yield a normalized gain of
about 20 percent on diagnostic conceptual exams and there is no
reason to believe that the results from the optical concept exam
would be very different. If we accept that our results indicate that
our course is an IE one, we still cannot be certain that this is due to
the pedagogical techniques we adopted. Were we already
promoting interactive engagement in our class in the past? While
we did have students do some group problem solving, we certainly
did not systematically and thoroughly adopt IE strategies at the
level described here. Despite these uncertainties we provisionally
conclude that we succeeded in promoting IE in our class and that
our students achieved greater conceptual understanding than they
would have in a standard class.
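The normalized gain quoted above is Hake's measure [8]: the pre-test to post-test improvement divided by the maximum possible improvement. A minimal sketch, using hypothetical class averages rather than the study's raw scores:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain g = (post - pre) / (100 - pre):
    the fraction of the possible improvement actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical class averages: 40% before instruction, 68% after.
g = normalized_gain(40, 68)   # 28/60, roughly 0.47
```

On this definition a traditional lecture course's typical gain of about 0.2 means students close only a fifth of the gap between their pre-test score and a perfect score, whichever level they start from.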

V. Conclusion
We implemented two strategies designed to promote interactive
engagement in our classroom, Peer Instruction via clickers and
Just in Time Teaching via the web. Results from a diagnostic
conceptual exam suggest that we succeeded in fostering an IE
classroom and that our students developed greater conceptual
understanding of optics than they would have under standard
lecture-based instruction. In addition, we experienced a subjective
improvement in our class. Students appeared to enjoy using the
clickers and seemed to be truly engaging with the material when
answering the clicker questions. Many students commented that
they enjoyed the style of learning and found it helpful.
An important observation from our work is that clickers alone do
not make an IE class. The framework of Peer Instruction is what
promotes intellectual engagement of the students, which in turn is
responsible for improvements in conceptual understanding. The
clickers are an efficient tool for implementing PI, but are not
themselves responsible for improved student performance. One
could explore other technologies for gathering student responses.
In fact, our original proposal was to have the students use their
laptops to submit their responses, but we could not find free
software to do this well. Interwrite does sell a software version of
their clicker, but since a sufficient number of hardware clickers
were available to us we decided to use them.
In the future, we hope to have the diagnostic exam administered at
other colleges of Optometry in order to build up a more extensive
database of results for a variety of teaching techniques to verify
the trend that we have observed. We hope that the improved
student learning we have observed using IE techniques will
encourage other instructors to adopt such techniques in their
classrooms.

Endnotes
[1] An extensive bibliography can be found in L.C. McDermott
and E.F. Redish, "Resource letter on Physics Education Research,"
Am. J. Phys. 67 755 (1999). This bibliography is also available on
the Internet at http://www.phys.washington.edu/groups/peg/rl.htm.
[2] P.W. Laws, "Calculus-Based Physics Without Lectures,"
Physics Today, 44, 24 (1991); P. W. Laws, Workshop Physics
Activity Guide Modules 1 - 4, (John Wiley & Sons, New York,
1997); and see the project website at http://physics.dickinson.edu
/~wp_web/wp_homepage.html
[3] D. R. Sokoloff and R. K. Thornton, Interactive Lecture
Demonstrations, Active Learning in Introductory Physics, (John
Wiley & Sons, New York, 2006); D. R. Sokoloff, "Engaging
Students with Microcomputer-Based Laboratories and Interactive
Lecture Demonstrations," Proceedings of the National Science
Foundation Workshop on the Role of Faculty from the Scientific
Disciplines in the Undergraduate Education of Future Science and
Mathematics Teachers, pp. 38-48, (NSF, August 1993).

[4] C. H. Crouch and Eric Mazur, "Peer Instruction: Ten Years of
Experience and Results," Am. J. Phys., 69, 970-977 (2001); Eric
Mazur, Peer Instruction: A User's Manual, Series in Educational
Innovation (Prentice Hall, 1997)
[5] L.C. McDermott and P. S. Shaffer, Tutorials in Introductory
Physics, (Prentice Hall, 2002); L.C. McDermott and P.S. Shaffer,
"Research as a guide for curriculum development: An example
from introductory electricity, Part I: Investigation of student
understanding." Am. J. Phys. 60 (11) 994 (1992)
[6] D. Hestenes et al., "Force Concept Inventory," Phys. Teach. 30,
141-158 (1992)
[7] Figure from E. F. Redish and R. N. Steinberg, "Teaching
physics: Figuring out what works," Physics Today 52, 24-30 (Jan
1999), used with permission of E. F. Redish.
[8] R. R. Hake, "Interactive-engagement vs traditional methods: A
six-thousand-student survey of mechanics test data for
introductory physics courses," Am. J. Phys. 66, 64-74 (1998)
[9] http://www.interwritelearning.com/products/prs/index.html
[10] D. P. Maloney and A. W. Friedel, "Ranking Tasks Revisited,"
Journal of College Science Teaching, 25, 205-210; T. L. O'Kuma,
D. P. Maloney and C. J. Hieggelke, Ranking Task Exercises in
Physics: Student Edition, Benjamin Cummings (2003)
[11] Gregor Novak, Andrew Gavrin, Wolfgang Christian, Evelyn
Patterson, Just-In-Time Teaching: Blending Active Learning with
Web Technology, (Addison-Wesley, 1999)
[12] R. K. Thornton and D. R. Sokoloff, "Assessing Student
Learning of Newton's Laws: The Force and Motion Conceptual
Evaluation and the Evaluation of Active Learning Laboratory and
Lecture Curricula," Am. J. Phys. 66, 338-352 (1998); see
http://www.physics.umd.edu/perg/tools/diags.htm for an overview
of the diagnostic exams available.

Questions & Answers (In Plain English)®
by Leonard D. DuBoff, © 2008
We wrote the book on small business law.

If you want to ask Leonard DuBoff to answer a
question regarding intellectual property or
copyright issues, send your question to
barlowj@pacificu.edu with subject line "Ask Leonard."
Question: Some school districts require students to submit their
papers through the turnitin.com website, which is used to
determine whether such papers have been plagiarized. The website
compares submitted works against its database of Internet content,
commercial databases of articles and periodicals, and previously
submitted works. After the site reviews the submitted papers, it
may, if permitted by the school district, add those papers to the
site's database. Isn't reproducing a student's paper without
permission an infringement?
Answer: This is precisely the question that was raised by students
in the case of A.V., et al., v. iParadigms, LLC. Each of the
complaining students attended schools in districts that required
students to submit their papers through turnitin.com. Students not
doing so would receive a "zero" grade. Papers cannot be submitted
to the site unless the user agrees to a "limitation of liability"
arrangement. Although each of the students included a disclaimer
on the face of his/her work indicating s/he did not consent to such
archiving, the site archived the students' works, and they filed suit,
alleging that the reproduction of their papers without their consent
is an infringement.
The court in which this case was brought (the U.S. District Court
for the Eastern District of Virginia) disagreed. After upholding the
validity of the clickwrap agreement, including the limitation of liability
clause, the court addressed the infringement claim and held that
such reproduction is defensible as a "fair use."
In order to determine whether an unauthorized use is defensible as
a fair use, said the court, it is necessary to evaluate at least four
factors, namely:

the purpose and character of the use;
the nature of the copyrighted work;
the extent of copying; and
the effect this copying has on the copyright owner's potential
market for or value of the copyrighted work.

Focusing on the purpose and character of the use of the work, the
first fair use factor, the court held that the work is "transformative"
since the reproduction is for the purpose of preventing plagiarism,
a purpose completely different than that for which the papers were
originally drafted.
Referencing the second fair use factor, it could be argued that a
student paper is never intended to be copied without first obtaining
the student's permission. The court, however, held that this factor
favored neither party, since iParadigms uses the work only for
comparative purposes and does not make any use of the creative
aspects of the student's work.
With regard to the extent of copying, since the entire paper is
copied, it is clear that the result of this factor would ordinarily
favor the complaining student, but the court held that since the
paper can be viewed by the teacher only when a plagiarism alert
occurs and only for the purpose of comparing the works, this
factor also favored neither party.
When the effect the reproduction would have on the copyright
owner's market is considered, it becomes apparent that the site's
reproduction of the student papers is for comparative purposes
only. Further, the papers are not publicly accessible or
disseminated. The court actually felt that the defendant's use
helped the plaintiffs, since it helped prevent their papers from
being plagiarized.
The plaintiffs argued that the archiving does prevent them from selling
their works to websites that purchase original high school papers to sell
to other high school students, but the court held that the plaintiffs'
argument runs counter to the Copyright Act's purpose of
encouraging creative original work.
After considering all of the factors, the court held that the
copyright owner would have no viable claim and that iParadigms
should enjoy a fair use defense.
At least one commentator has suggested that the copyright in and
to a student paper may actually belong to the school attended by
the student, but it is well settled that the arrangement between the
student and the school attended by that student is based on
contract. Thus, the school would own the copyright if the
application the student signed in order to attend the school made it
clear that, as part of the contractual arrangement between the
school and the student, the student will assign the copyright in and
to any work created by the student during the time s/he is attending
the school. Absent any arrangement indicating that the student's
copyrights are automatically conveyed to the school, the student
would retain the copyright in and to any works created by him/her.
Unfortunately, this conclusion does not end the story since use of a
student's copyrighted work for purposes of determining whether
the work was actually created by the student and not plagiarized
from another is, said the federal court, a fair use of that work and,
thus, not actionable. Since this court's decision may not be
followed in other jurisdictions, teachers or school districts desiring
to take appropriate precautions should obtain an assignment from
the student of the copyright in and to works submitted while
attending the academic institution. With this form of express
agreement, students would be prevented from raising any
questions of infringement. Without such an assignment, the
teacher or institution would be forced to rely on the fair use
doctrine as set forth in the statute and interpreted by the courts.

Learning to Co-operate: A Case Study in
Ethical Banking
K. Jahdi and T. Cockburn

Introduction
There are many different types of co-operatives; some are
producer co-operatives producing goods and services ranging from
farm produce to washing machines. Others are consumer
cooperatives, set up using the capital of founding members to
retail goods and provide a dividend to members. The largest
consumer co-operative in the USA is REI, started in 1938 and with
3 million members currently [1]. REI specialises in outdoor
pursuits and recreational goods. REI charges a once-only fee and
also pays a dividend of about 10% based on members' purchases in
the previous year. REI's website declares: "At REI, we inspire,
educate and outfit for a lifetime of outdoor adventure and
stewardship" [2].
The title of largest consumer co-operative in the world goes to the
Co-operative Group in the UK, which now provides financial and
banking services as well as retailing goods. The UK-based
Co-operative Bank, which is the focus of this article, began life as
a small consumer co-operative in the nineteenth century. In 1844,
28 working men - the 'Rochdale Pioneers' - opened a small grocer's
shop in Toad Lane. The 1844 co-operative principles were as
follows:

1. Membership was open to all.
2. Governance was democratic, i.e. one person, one vote.
3. Profits were distributed amongst members and customers in
proportion to purchases made from the co-operative as a
member's 'dividend'.
4. Limited interest was paid on capital.
5. Political and religious neutrality.
6. Cash trading only.
7. Education and personal 'betterment' were encouraged.
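The pro-rata dividend of principle 3 is simple arithmetic, and can be sketched as follows. The figures are made up for illustration; they are not actual Rochdale or Co-operative Group numbers.

```python
def member_dividends(distributable_profit, purchases):
    """Split a co-operative's distributable profit amongst members
    in proportion to each member's purchases (principle 3 above)."""
    total = sum(purchases.values())
    return {member: distributable_profit * amount / total
            for member, amount in purchases.items()}

# Three hypothetical members with 3,000 pounds of combined purchases
# and 300 pounds of profit to distribute:
divs = member_dividends(300.0, {"Ann": 1500.0, "Bob": 1000.0, "Cal": 500.0})
# Ann receives 150, Bob 100, Cal 50.
```

The same pro-rata idea underlies REI's dividend of about 10% of a member's purchases in the previous year, mentioned earlier.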

The initial pioneers collaborated with other co-operatives and
diversified into other businesses. The original set of principles
was also extended and updated in the twentieth century to cover
wider social concerns such as anti-racism and anti-sexism.
However, they still retain their aim to educate and advocate for
their principles whilst operating responsibly and ethically.
In 1872 the Co-operative Wholesale Society opened a Loan and
Deposit Department, which became the CWS Bank four years
later. Almost one hundred years later, in 1971 the bank was
registered under the UK Companies' Act as Co-operative Bank
Limited. In 1974 it became the first UK bank to offer free banking
for those in credit. The mission statement was drawn up in 1988 to
reflect Co-operative principles. Following customer consultation,
in 1992 the Bank's Ethical policy was launched—a world first. In
1994 it was the first UK bank to offer Customer Service
Guarantees.
Two years later, in 1996 the Bank announced its Ecological
Mission Statement. 1997 saw the launch of the Bank's Partnership
Approach. The first published Partnership Report and audit of the
Bank's operations followed in 1998, when the bank became the first
in the world to issue an independently audited sustainability report
[3]. The Bank's Ecological Statement reads:

"We the Co-operative Bank, will continue to develop
our business taking into account the impact our
activities have on the environment and society at large.
The nature of our activities are such that our indirect
impact, by being selective in terms of the provision of
finance and banking arrangements, is more ecologically
significant than the direct impact of our trading
operations". [4]

In the period covering 2001-2003, the bank concentrated on its
commitment to assess the secondary impact of its products and
services: for instance, calculating how many miles Co-op bank
customers travelled as a result of the manner by which the Bank
delivers services, as well as how many miles staff travel on bank
business. When initially asked if the organisation considered itself
"green", the response (J. Middleton, 1999) was that "it is not a
term we would use. We would use 'ethical', but we are trying to
stay away from that as well...we would use 'socially and
environmentally responsible'".
Perhaps more than anything else the historical background of the
organisation and the basic principles of co-operation shaped its
socially and environmentally responsible marketing. Press reports
in 2003 referred to the massive loss of funds due to its vetting of
'unethical customers'. Indeed, the costs were £6.9 million and £8.7
million for the years 2003 and 2004 respectively. Over half of the
£10M of business turned away in 2005 was refused because of
environmental concerns about polluting business processes. A
further £1.9M, or 20 percent of companies applying for loans, was
refused because of poor animal welfare. Another 20 percent were
turned away because of human rights abuses - this also cost
£1.9M.

Prudent but principled bankers


The Bank is required to make a profit, but in an ethical manner,
according to its revised set of principles. Since it is also selective
with respect to its customers, as we have indicated, financial gains
appear as important as ethics, but not necessarily an overriding
objective. Profit is needed for the survival of the Co-operative bank;
however, making profits is not the Bank's 'raison d'etre'. Thus,
although the Co-op bank refers to 'profiting from our principles',
since 1998 it has annually reiterated that 'business should have a
purpose beyond profit'. It was also delighted to note that 'our
shareholder has decided not to take a dividend but to allow us to
reinvest profits in building an even better bank' [5].
Paraphrasing Crane (2000), this is not ethical marketing as such;
rather, it is 'marketing ethics'. Ethics is a Unique Selling
Proposition (USP) in the case of the Bank. It appears that the Bank
is adhering to its core ethical policies, by encompassing green
issues and environmental considerations- despite the financial
costs to the business. For other co-operatives, a green agenda
might be seen as less unexpected. For instance, REI also espouses
an environmental agenda, claiming on its webpage:

"Each year, REI donates millions of dollars to support
conservation efforts nationwide, and sends scores of
volunteers to build trails, clean up beaches, and teach
outdoor ethics to kids." [6]

However, banks have generally had a bad press with respect to
their business practices. Susan George, a respected author on Third
World debt, likened bankers to bomber pilots, suggesting that
bankers hide behind the smokescreen of 'neutral lending decisions'
[7]. In the wake of the sub-prime lending crisis, this reputation has
been reinforced for many people, with rising interest rates on housing
mortgages adding to their cost of living, if not actually causing
people to lose their homes.

Principled practice and organisational learning


The Co-operative bank has sought to learn what the public at large
and customers seek from a bank since its nineteenth-century
origins. An early 1990s survey of the Bank's customers
revealed that 20 percent had joined it for ethical reasons. A 1997
QCL market research survey based on interviews with 500
members of the UK public revealed that 70 percent thought it
important for banks to have a clear ethical policy. Of the Bank's
customers, 90 percent supported its Ethical Policy [8]. In a 2006
report the bank noted that the trend in ethical consumer spending
meant it had reached a new milestone, overtaking tobacco and
alcohol expenditure in the UK. However, they
also noted that "Overall, spend on ethical foods still only accounts
for 5 percent of the typical UK shopping basket" [9]. In reference
to the role of the consumer and society at large in shaping the
Bank's green/ethical policies, the Ecological Mission Statement
states:

"We, the Co-operative Bank, will continue to develop
our business taking into account the impact our
activities have on the environment and society at large.
The nature of our activities are such that our indirect
impact, by being selective in terms of the provision of
finance and banking arrangements, is more ecologically
significant than the direct impact of our trading
operations." (The Co-op Bank's Ecological Mission
Statement)

The Bank's ethical stance has been "well received by the general
public, leading to business growth in the personal and corporate
sectors" [10]. The fact that in 2005 the Bank attributed one third
of its £132 million profits to its sustainability and ethical policies
launched in 1992 is a clear testimony to its successful
organisational learning related to implementing socially
responsible policies based on listening to customers. There has
also been an increase of 29 percent in the number of loan and
savings account customers joining for ethical/green reasons [11].
In 2007, Williams indicated that a third of new customers
It appears that the Bank has inherited a USP in the shape of its
historical background and has built on this legacy of ethical
good-standing as a competitive tool. On the other hand, Devinney
et al. (2006) suggest that often consumers say they want to be
socially responsible, but when it comes to 'ethical' purchasing, their
actions belie their noble intentions, so perhaps the Co-op Bank
customers are a niche market. However, despite the principled
stand the bank has taken on investments it is not immune from
being criticised by some for not going far enough.

Shadows between the principles


In 2006 the bank was criticised for investing in GlaxoSmithKline
(GSK) and Vodaphone, investments it justified by citing GSK's
drug discounts for developing countries and Vodaphone's
environmental research [13]. A couple of weeks later the bank was
attacked for using 'weasel words' to 'greenwash' disreputable
activities by default, such as general lab testing of animals and
neglect of 'poor country' disease research at GSK, and GSK's and
Vodaphone's links to arms companies [14].
The Bank's record on employee relations is somewhat mixed. In
1997 the bank employed 3,983 staff, and all nine redundancies [15]
were on a voluntary basis. For bank contracted staff there were no
involuntary redundancies in 2003, and in 2005 the bank employed
3,911 staff. The figures exclude the 100 or so employed at offshoot
Unity Trust Bank. However, at CIS such staffing changes "...are
managed in consultation with affected staff and trades unions. For
CIS contracted staff, there were 140 redundancies during 2003, the
majority of which were involuntary." [16].
In 2004, however, 2,500 staff were cut from the Co-operative
group as a whole [17]. In May 2005, the parent company, the
Co-op Group, announced 600 redundancies at its Manchester head
office. However, in July 2007, the group's Co-operative Financial
Services arm announced plans to cut staff by 10 percent by 2008
from its then 10,000 staff, located in the 11 corporate and 90 retail
branches [18].

Have competitive forces 'greened' the Co-operative Bank?


The Bank acknowledges the notion that throughout the business
world there appears to be a greater awareness that organisations
should not exist merely to generate profits for shareholders. It
refers to a MORI survey [19] of consumers which suggests that
they '...are making it very clear they want to deal with companies
that take a much broader view of what their role should be'. The
UN has also suggested: '...Corporate citizenship does not oppose
the doctrine of profit maximization. Rather, a socially responsible
organisation stands to be more profitable in the long run.' [20]

These findings were subsequently confirmed elsewhere. A
Millennium poll of 1000 consumers from each of 23 nations on 6
continents found 49 percent cited corporate citizenship factors
such as business ethics, environmental practice and labour
management issues as the most significant determinant of their
impressions of companies [21]. Only 32 percent were most
influenced by basic business investment factors such as finance,
management or size of enterprise.
Marlin supported these data with United States trends in investment
spending, where well over one trillion dollars, or one in every
eight investment dollars, was at that time managed in social
responsibility investment vehicles [22]. In Europe, too, three of the
four scenarios outlined by the corporate consulting giant
PricewaterhouseCoopers in the late 1990s suggested that ethical
issues, especially those relating to the environment and genetics,
would have a major influence on the future economic as well as
social prosperity [23] and governance of Europe [24]. Twenty-first-century,
hard-nosed business perspectives, rather than nostalgia
about sustainability and public good, inform current corporate
brand image and profitability projections of CEOs [25].
In the light of the above, many organisations have begun to
reconsider their priorities. Measures such as Environmental
Auditing have gained prominence. At an interview with the Bank's
Ethics representative Jack(ie) Middleton, she said: "We had a look
at what our USP was and what came through, through consulting
our customers, was that we were an ethical organisation.... What we
had never done was to look at ourselves and realise who we
actually were, and our customers told us that". At the same
interview, competition was referred to as a driving force towards
green marketing. A further driver was the historical and political
background of the UK Co-operative Movement and one of its
offshoots, the Bank. The UK Co-operative Movement (though not
necessarily the Bank) has traditionally been left-of-centre
politically, and closely associated with the UK Labour Party.
The Co-op Bank decided, through its evolving Partnership
Approach, to address green issues and perhaps, by so doing, gain a
competitive advantage. Amongst its Unique Selling Propositions
(USPs), perhaps the most important is its Ethical Policy
(encompassing environmental issues). In order to accomplish this
policy, three areas of assessment were considered by the Bank, as
follows:

Delivering Value - by delivering real value to its partners the
Bank can ensure survival. The bank still has to be able to
successfully compete in a financial services marketplace.
Social Responsibility - delivering value with ethical integrity in
a socially responsible way will also act as a differentiating
feature in the market.
Ecological Sustainability - the critical value of this to the
bank's future success has been recognised and incorporated
into the Bank's business strategy.

Such endeavours have attracted a third of the Bank's new customers and
helped retain existing ones [26]. However, they also place the
organisation in the gaze of public scrutiny. As indicated above, any
activities that may be perceived as contradicting the Bank's
espoused ethical stance will be highlighted and publicised.
Damage to corporate image could either take a long time to repair
or may indeed be irreparable.
The Bank regards itself as socially responsible to all its Partners,
whether they are shareholders, customers, staff and their families,
suppliers, local communities, national and international society,
or past and future generations of co-operators. However, the Bank
admits that unlike ecological sustainability, there appears to be
little consensus as to what constitutes a socially responsible
business. It appears that the organisation's social responsibility has
been, as might logically be expected, subsumed into its Ethical
Policy. The Bank also recognises that its ethical and social
responsibility policies have to be reviewed, modified, updated and
re-considered. For this purpose, it is in regular consultation with
independent organisations such as Amnesty International, the
National Council for Civil Liberties, the RSPB, the World Wide Fund
for Nature and the New Consumer [27].
The Bank was also aware of its possible impact on communities as
the quote below suggests.

"Central to the Bank's Partnership Approach is an
appreciation of the manner in which the Bank's future
is linked to a whole variety of Partners. This includes a
recognition of the special commitment in which we are
based...(as well as) its charitable aid programme...we
look at the other side of the coin - what is the impact on
the community of the Bank withdrawing its presence
and services. More specifically, what effect has the
closure of a number of our branches made on
communities in which it had previously operated."[28]

The bank has thereby established communication channels with
organisations that broadly represent its core retail customers'
social, political and environmental profiles. However, the 1997
quote from Gribben looks somewhat compromised by the recent
decision to make 1,000 staff redundant [29] which, once again,
illustrates how such pronouncements can become a 'hostage to
fortune' for the organisation concerned.
Nevertheless, the Co-op Group is now breaking new ground by
encouraging partners to lobby MPs on climate change and
initiating crime-reduction projects [30]. This move into direct
action is still controversial [31]. The aims are to mainstream social
responsibility and lead from the front in the financial services
sector. This is another bold educational gamble likely to attract the
derision of the purists as well as potentially risking a 'muddying of
the waters' in terms of the market. Even if partners are loyal and
don't succumb to a form of 'activism fatigue' as seen in other
organisations such as NGOs, will a new government seek to
constrain the bank's role in the future?
There is also the danger of activism becoming a form of radical
populism or being perceived as interfering in the democratic
process and thus potentially alienating some current supporters and
investors. Alternatively, will it simply become the financial arm of
the political left and find that the bank's fortunes are inseparable in
the public's eyes from the fortunes of specific parties or
movements? The same movements may evolve in less agreeable
ways in future. Thus the current icon could end up with feet of
clay.

Concluding observations
The Bank has employed a fairly sophisticated marketing strategy,
based on learning about and from customers, thus creating and
sustaining a niche in the marketplace as a socially responsible,
ethical, and ecologically-aware organisation. Given the nature of
its customers' attitudes and ethical/green perceptions, the Bank, like
other retail banking organisations, is partially acting in a consumer-driven
manner, but it cannot be accused of simply 'surfing' a wave of
consumer eco-fashion. The bank is a learning organisation in more
than name; it practises what it preaches and, as can be seen in the
latest endeavour to mainstream social responsibility and green
issues, it also takes some risks.
However, the Co-op Bank is still not a mainstream bank, and thus
the majority of the UK population are not currently aligned with
its values to the extent that they wish to put their money into the
Co-op rather than the so-called 'Big Four' global banks. Its
educative mission therefore still has many hurdles to overcome before it can
attain the same financial status as its mainstream banking peers in
the UK, let alone elsewhere. As a financial institution it is in the
business of making money whilst adhering to strict ethical and
environmental codes of practice. Thus it has carved out a unique
niche market that provides a healthy financial return for partners.
There are some signs of a convergence on ethical values by
consumers, but there are also signs that suggest some hypocrisy
amongst consumers, as Devinney et al. suggest. So, the jury is still
out on the question of whether a majority of the next set of
consumers, i.e. Gen Y, will adopt the principled stance of many of
the current customers of the Bank.

End Notes
[1] see http://www.rei.com/aboutrei/about_rei.html, last accessed
on August 23rd, 2008
[2] http://www.rei.com/aboutrei/about_rei.html

[3] see http://www.cfs.co.uk/images/pdf/Overview311006.pdf, last accessed 18-09-07
[4] The Partnership Report, 1998, p 51
[5] The Partnership Report, 1998; 2003, 2004, 2005, 2006
sustainability reports on bank's webpage
[6] http://www.rei.com/aboutrei/about_rei.html
[7] Williams, 1996
[8] The Partnership Report, 1997, p 66
[9] see http://www.co-operativebank.co.uk/servlet/Satellite?c=Page&cid=1170748475331&pagename=CB%2FPage%2FtplStandard&loc=l, last accessed 18-09-07
[10] Williams, 1999, p 65; O'Hara, 2007
[11] The Observer, May 22, 2005
[12] O'Hara, 2007
[13] see http://www.schnews.org.uk/archive/news537.htm#8, last
accessed 18-09-07
[14] see http://www.schnews.org.uk/archive/news539.htm and
http://www.foe.co.uk/factorywatch, last accessed 18-09-07
[15] 2002 report, p 24
[16] see http://www.cfs.co.uk/sustainability2003/deliveringvalue
/staff.htm, last accessed 18-09-07
[17] Costello, 2007
[18] Vorster, 2007, Costello, 2007
[19] Partnership Report, 1998, p2
[20] UN Global compact report, 2005, p7
[21] Marlin, 2000
[22] Marlin, 2000
[23] McKie & Cockburn, 1999
[24] Pedler, 2002
[25] The McKinsey Quarterly, 2006, January, p. 4
[26] O'Hara, 2007
[27] Williams, 1999
[28] Gribben, 1997 p 61
[29] Vorster, 2007, Costello, 2007
[30] O'Hara, 2007
[31] O'Hara, 2007

References
Brown, K (2003). 'Trust me ethics really pay'. The Financial
Times, weekend April 5/6 2003.
CFS sustainability reports for 2003, 2004, 2005, 2006, last
accessed at http://www.co-operativebank.co.uk/servlet/Satellite?c=Page&cid=1177569063509&pagename=CB/Page/tplStandard
and http://www.cfs.co.uk/servlet/ContentServer?cid=1161931603427&pagename=CFSSustain/Page/tplSusBlank&c=Page
Costello, M (2007). Union anger over 1,000 job cuts to come at
Co-op Bank, Timesonline, July 21st, last accessed 19-09-07 at
http://business.timesonline.co.uk/tol/business/industry_sectors
/banking_and_finance/article2112857.ece
Crane, A (2000). 'Facing the backlash: green marketing and
strategic re-orientations in the 1990s'. Journal of Strategic
Marketing, 8, pp 277-296
Devinney, T.M., Auger, P., Eckhardt, G. & Birtchnell, T. (2006).
The Other CSR, Stanford Social Innovation Review, Fall, last
accessed on 1-06-07 at http://www.ssireview.org/articles/entry
/the_other_csr/
Fill, C. (2001). 'Integrated Marketing Communications'. BPP
publishing
Jahdi, KS (2006). 'A study of ethical green marketing'. PhD thesis,
Sheffield Hallam University
Marlin, A.T. (2000). Social and Ethical Quality in a Global
Competition, conference paper in: Innovation, Manufacturing and
Services: How to improve ethical quality? IESE, Graduate School
of Management, University of Navarre, 9-10 Nov., 2000.
McKie, D. and Cockburn, T. (1999), 'Strategic conversations:
Existing scenarios, public relations theory and futures'. Paper
published in conference proceedings, Public Relations, Public
Affairs and Corporate Communications in the New Millennium,
The Future. Ljubljana, Slovenia.
McKinsey Quarterly online survey of business executives, January
2006, last retrieved on 24-08-06 from
http://www.mckinseyquarterly.com
O'Hara, M (2007), Fresh take on responsibility, The Guardian,
July 18th, 2007, last accessed 19-09-07 at
http://www.guardian.co.uk/society/2007/jul
/18/ethicalmoney.money
Pedler, R. (ed.), (2002). European Union Lobbying: Changes in
the Arena New York: Palgrave
UN (2005) Global compact in South Asian corporate citizenship
report: Building Corporate Citizenship into Corporate Ethos, New
Delhi: UN/ Confederation of Indian Industry
Vorster, G (2007). 1000 jobs to go at Co-operative Financial
Services in bid to slash operating costs by £100m, Personnel Today,
last accessed on 9-09-07 at http://www.personneltoday.com
/Articles/2007/07/24/41660/.html.
Williams, RH (1996). 'European Union Spatial Policy and
Planning'. Paul Chapman:UK
Williams, S (1999). 'How principles benefit from the bottom line.
The experience of the Co-operative Bank'. In Human Rights
Standards and the Responsibility of Transnational Corporations.
MK Addo (Ed). Kluwer International, UK.

Against the Machine: Being Human in the
Age of the Electronic Mob
Siegel, Lee. Against the Machine: Being Human in the
Age of the Electronic Mob. New York: Spiegel & Grau,
2008.
Review by Jeffrey Barlow
For our Berglund 2008 Summer Institute, "The Wisdom
and Madness of Crowds, Web 2.0" we found it much easier to find
speakers and materials dealing with the wisdom of crowds rather
than with their madness. One conspicuous exception was Against
the Machine by Lee Siegel. Our group's discussion of his work
was animated and often highly critical. But as we study the impact
of the Internet in all of its aspects at the Berglund Center, Siegel is
a good balance for the many highly positive and laudatory works
available. It is just about as distant from the views of Palfrey and
Gasser, the authors of the second work reviewed in this issue, Born
Digital, as is possible. It would be ideal if the two books reviewed
this month could be read together to present a roughly balanced
view, as they both deal with those who can be called the Web 2.0
population. But, unfortunately, they are quite different in tone and
content.
Siegel's perspective can be summed up with these words:

What I have been describing is the surreal world of
Web 2.0, where the rhetoric of democracy, freedom,
and access is often a fig leaf for antidemocratic and
coercive rhetoric; where commercial ambitions dress
up in the sheep's clothing of humanistic values; and
where, ironically, technology has turned back the clock
from disinterested enjoyment of high and popular art to
a primitive culture of crude, grasping self-interest. [1]

Many readers may find Against the Machine more of a rant than an
analysis; if, however, you find it an attractive rant, this may be the
book you are seeking to bring to a head all of your own doubts and
fears about the impact of the Internet.
While ostensibly intended as a criticism of Internet 2.0, usually
defined as the interactive portions of the Internet, Against the
Machine reprises all the familiar criticisms of Internet 1.0, the
Internet, as we once knew it. These include:

The Internet destroys community.
It creates an illusory space which ill prepares us for "the
untamed, undigested, unrationalized[2], uncontrolled world..." [3]
It destroys privacy.
It has commercialized pornography and made it
commonplace and less objectionable.

While we merely list these elements here, Siegel has a gift for
inflammatory critical prose, sometimes openly ad hominem in its
focus, and the impact of his arguments is significantly more heated
than this list, if not always more persuasive.
The causes of these adverse impacts, as Siegel views them, are
perhaps less familiar:

The Internet commercializes culture.
The Internet encourages and abets uninformed amateurs in
pushing thoughtful professionals out of the public discourse.
This group of uninformed amateurs can be thought of as the
"electronic mob," especially when acting as bloggers and
Internet-enabled cranks.
This group grabs whatever is popular, regardless of its
quality, and spreads it through the culture by means of Web
2.0 publications.

Another major cause of the failures of the Internet is "Internet
Boosters," intellectuals who write about the web and praise it
uncritically, writing in "Internetese, this new, strangely robotic,
automatic-pilot style of writing."[4] The result is that critical
values give way to a very uncritical search for popularity,
measured above all as "clicks."
And these effects of the Internet are more than just a minor
annoyance. This process is destroying classical Western culture,
which has always answered the question of what it means to be
human.
In leveling these many charges about the cultural threat of the
Internet, Siegel relies upon a very restricted definition of "culture."
Although he references "mass culture," he insists on maintaining a
distinction between culture created for the masses (film, especially
black and white film) and culture created by the masses
(YouTube). The former is classical, appropriate, even enduring; the
latter is a lot of stupid film clips created by people earnestly
endeavoring not to create something new and valuable, but to copy
trash in an endless drive to be popular.
The perpetrators of these cultural crimes, the new Web 2.0 masses,
are driven by resentment of their betters. Siegel at no point uses
the language of "class" in a Marxist or even sociological sense, but
draws a distinction something like that between "high" culture and
"low culture." As Siegel says, this crowd is motivated by a
"...universal impatience with authority, with any kind of
superiority conferred by excellence and expertise."[5]
In Siegel's perspective, "What the new crude egalitarianism is
doing, in the name of democracy, is allowing the strongest
assertion to edge out the most conscientious talent." Thus the
downward spiral of Western culture.
This process was sufficiently dangerous earlier. But now Web 2.0
has multiplied that threat. He defines Web 2.0 as "...the Internet's
characteristically mechanistic term for the participatory culture
that it has now consummated and established as a social reality."
[6]
We might conclude this anti-diatribe with Siegel's list of "Five
Open Secrets," fatal weaknesses in the Internet:

1. Not everyone has something meaningful to say.
2. Few people have anything original to say.
3. Only a handful of people know how to write well. [7]
4. Most people will do almost anything to be liked.
5. "Customers" are always right, but "people" aren't. [8]

This book could be useful to many, but it is the sort of work that
will divide any audience into fiercely partisan groups, either "for"
or "against" the Internet. While he has some of the same concerns
as do Palfrey and Gasser, unlike them, he rarely suggests solutions
to the problems he raises, but rather condemns the alleged
perpetrators with angry blasts of rhetoric.
[1] Siegel, 134
[2] "Unrationalized" does not appear in the Merriam-Webster online
dictionary (http://www.merriam-webster.com/dictionary
/unrationalized), though it is in common usage, so I won't add a
snide "sic" at this point to the quotation, though given that one of
Siegel's complaints about the www is that "Only a handful of
people know how to write well," I am strongly tempted.
[3] Siegel, 17
[4] Siegel, 126
[5] Siegel, 141
[6] Siegel, 126
[7] While I have successfully resisted many impulses to lampoon
Siegel, this latter list, and particularly the hubristic #3, "Only a
handful of people know how to write well," cannot be overlooked.
So many questions come to mind: Does this mean then, that most
books are badly written and that the bad writing of the Internet is
not all that much of a departure? The little phrase "know how to"
is particularly evocative. While clearly resisting "Internetese,"
Siegel leaves some ambiguity here: Why did he not simply say:
"Only a handful of people write well?" Does he mean to imply that
there are people who know how to write well but choose not to do
so? Or does he mean that some people do not know how to write
well, but nonetheless do? Or is he simply trying to make a rather
short work as lengthy as possible by introducing extraneous prose
at every opportunity? But that would be a rank commercial
purpose, found largely among Internet boosters.
[8]Siegel 161

Born Digital: Understanding the First
Generation of Digital Natives
John Palfrey and Urs Gasser, Born Digital: Understanding
the First Generation of Digital Natives. New York: Basic
Books, 2008.
Review by Jeffrey Barlow
Born Digital is simply a wonderful book for anyone with
even a casual interest in the impact of the Internet. I believe that it
will dominate the discussion of a wide number of Internet-related
topics for at least the next year. Many discussions will begin or
close with the equivalent of "Well, Palfrey and Gasser say..."
The excellence of the work is no surprise. Palfrey is Faculty
Director of Harvard's Berkman Center for Internet and Society,
and Gasser is the faculty director of the University of St. Gallen's
Research Center for Information Law. Both men, in addition, are
attorneys and widely published with an abundance of contacts
among those who work in or study the information industries. For
all of their experience and erudition, and despite the fact that
between them they have read and understood most significant
studies bearing upon their topics, the book is written in the open
and light-hearted style necessary in discussing topics that attract
passionate audiences ranging from adolescent twitch gamers to
(ahem) mature scholars.
The focus of the work, as the subtitle suggests, is "Understanding
the First Generation of Digital Natives." The authors' thesis is that
those born after 1980 have grown up in a networked world and are
different, in sometimes mysterious ways, from those born before
them, no matter the depth of that older generation's participation in
digitally-enabled activities. In Palfrey and Gasser's view, the
digital natives promise to make extraordinary contributions to
mankind, but also face daunting problems. It is the authors' view
that we must do all we can to enable the progress of Digital
Natives while trying to create the institutions and values necessary
to protect them from the threats that they face, if we are to
eventually reap the unforeseeable riches of their potential
contributions.
This summary of the future impact of the contributions of Digital
Natives may seem high-flown, even idealistic; more often,
discussions of the future are conducted in breathless terms by
those pointing with alarm, or by those viewing the apparently
endless possibilities for profit through the lenses of naked
self-interest. It is the authors' gift to put these discussions into
clear, objective prose. In their words, "The purpose of this book is
to separate what we need to worry about from what's not so scary,
what we ought to resist from what we ought to embrace." [1]
The authors properly remind us that the digital natives are by no
means a "generation," but rather a select "population," because of
the gap between the digitally enabled wealthy and the very much
more numerous poor.[2] The work focuses, of course, on this
fortunate minority.
The work makes major contributions to both an intellectual
understanding of important issues, and to setting the terms for the
dialogues required to confront them. Many of these topics will
seem familiar: identity, privacy, piracy, learning, political
activism, etc. Yet the authors are able to sum up previous works
and to lead us into an appreciative understanding of the topics so
that they seem fresh and new, and we are able to view them in a
rounded perspective for what may seem to be the first time.
Each of the work's thirteen chapters merits an extended discussion,
but this is a review of the work, not an attempt at a full synthesis.
We have asked some of our contributors to write upon such
sections as bear upon their own disciplines, and in coming weeks
those comments should appear in our soon-to-be comment-enabled
journal. The authors also have created a wiki which is itself
certain to be a focus of continuing discussions. [3] Here we pull
out only several topics which have particularly attracted our own
interest.
The last chapter, "Synthesis," is interesting both for its content and
for its methodology. The chapter consists of a series of email
exchanges between the two authors, mostly written while
traveling, in which each discusses a topic relating to the book's
contents or its origins, then closes by inviting the other to discuss a
new topic. We see, then, both the origins of the work and its major
conclusions simultaneously. This gives the chapter a very fresh
and intimate feeling, almost as though it were an interactive
synchronous event rather than a conventional work. We learn that
the audience was intended to be parents and teachers of Digital
Natives, though we think the book is valuable to a much wider
group.
We also learn that the authors do not really expect Digital Natives
to read this work so much as to "skim" it, reinforcing their
argument that this group does not learn poorly so much as they
learn differently. Earlier they describe the research process
common to those born to the WWW as "grazing," a process of
sampling information sources, often followed by a "deep dive,"
that is, an intensive reading of selected sources. Then comes the
truly characteristic learning process of this population,
"interacting," in which the researcher enters into a "feedback loop"
in which they pass the news on to friends, comment upon it
positively or negatively in blogs, perhaps prepare podcasts or web
pages, etc.[4] This latter process, of course, as every teacher
knows, is ultimately "constructivist" and much enhances learning.
Sadly, however, although the Digital Natives are capable of very
sophisticated research and learning, they also have some
systematic problems. Characteristic of the group is constant
multitasking that gives them perhaps the illusion of broad learning.
But the research is now adequate to demonstrate that multitasking
is inappropriate for new materials or those difficult to grasp. One
should not, for example, try to learn foreign language vocabulary
or mathematical formulas while multitasking, although many
Digital Natives do so.
Digital Natives are also impatient, wanting to get their content in
quick, short chunks. Longer classical works of literature are of
little interest to them, and as Rachel Dretzin's PBS documentary
"Growing Up Online," demonstrated, many Digital Natives have
become very sophisticated at getting just enough information from
WWW materials to appear knowledgeable without having engaged
with an actual book. [5]
There are many additional issues. Digital Natives tend while
younger to be very uncritical users of electronic materials, making
decisions as to what to read online on the basis of the formatting of
the site rather than judging the value of its contents. By the time
they are in high school, however, they seem to have worked out for
themselves the standards for evaluating quality. They also believe
that information is free, and they freely cut and paste, ignoring
intellectual property rights, authority and authorship, origins of
materials, while creating what is to them a satisfactory pastiche.
The authors also give thoughtful advice as to how best to deal with
the Digital Natives as learners. First, they tell us, as parents and
teachers, we ourselves need to be familiar with the technology. Not
just by reading about it, but by engaging with it to the point where
we can "do" it.[6] Then, they tell us, we should not have teaching
and learning driven by the technology, but rather select the technology
to complement the pedagogy. Traditional tools such as discussion
and lecture still have their place in the classroom, and many tasks
are not enhanced by technology, but may even be diminished.
We close with an analogy from the last pages of the work. By the
time that a Digital Native turns twenty years old, they will have
been actively on the Internet for ten thousand hours. This is the
equivalent of the practice necessary to become a concert pianist.[7] Of
course, not all of them are going to play the digital equivalent of
Carnegie Hall; but as a group they have a great deal of experience
which they are undoubtedly going to deploy regularly in their lives
and careers, and it behooves us all to understand the strengths and
weaknesses that the impact of the Internet has given them. This
work is the best place to start that exploration of any single source
of which I am aware.
[1] p. 9.
[2] See the discussion at pages 14-15.
[3] http://www.digitalnative.org/wiki/Main_Page
[4] pp. 240-243.
[5] View the film online at: http://www.pbs.org/wgbh/pages
/frontline/kidsonline/
[6] p. 280.
[7] p. 289.

Dining, Whining, and Opining: From the
Googleplex to Beijing
Editorial by Jeffrey Barlow
All editorials are at bottom opinion pieces; otherwise we
would launch them under other, more objective titles. In
this October posting of Interface, I discuss recent travels
to Silicon Valley against the backdrop of the current
economic panic on Wall Street. I argue that the world market, due
to the impact of the Internet, is a unified one requiring that any
solutions to the crisis take into account opinions abroad,
particularly in Asia.
My travels began in mid-September when I had the good fortune
to travel to Google in Mountain View, California, where I
observed the opening several hours of this year's "Zeitgeist"
conference—Google's annual extravagant gift to its more
important partners. I was not included in this group, but while in
good faith looking for the conference to which I had been invited,
I was misdirected to the Zeitgeist auditorium just as the lights went
down, dazzling digitized displays came up, and James Fallows
introduced Jared Diamond, both particular heroes of mine. As I
had two hours to find my own conference and there were plenty of
extra seats, I saw no reason to hustle out.
Ninety very informative and entertaining minutes later, I went
from there to another part of the Googleplex for the conference of
the Family Online Safety Institute. There I viewed the PBS
documentary "Growing Up Online," with its producer Rachel
Dretzin and a number of representatives from various interest
groups and industry sectors who were also there to discuss the
question of safety on the Internet. This group was dressed very
differently from the Zeitgeist audience, less entertained than
concerned, and our lunch, while tasty and filling, was somewhat
south of the fine dining I had seen being prepared for the Zeitgeist
luncheon as I left.
Then that evening it was back to the Intel complex in Hillsboro,
Oregon, to hear Dr. Kanwal Rekhi, the most successful of
Indian-born entrepreneurs in the U.S., discuss "The Branding of
India, Indians, and Indians in America" at the local meeting of The
Indus Entrepreneurs, or TIE. While now immensely wealthy and
treated with open awe by the audience largely composed of young
Indian engineers, many from Intel or other Silicon Forest firms, as
we Oregonians like to call our own high-tech enclave, he
chronicled a period when Indians like himself were met with
suspicion and steered forcefully away from the front office to
careers as techies. More than once when seeking venture capital in
the late 1960s and early 1970s he basically was told that his small
group of Indian entrepreneurial friends was not fundable because
they had no white guys in management and sales. Since then, Dr.
Rekhi has gone on to found, work with, or invest in more than fifty
high tech firms worldwide.
In between my conferences at Google and at Intel, I met with a
variety of people in the industry, including members of our
Berglund Center board. Throughout this rapid series of events, I
was informed by my reading of one of the books reviewed in this
issue, Palfrey and Gasser's Born Digital. Palfrey and Gasser
concentrate upon a world of Internet-enabled "Digital Natives"
who promise, if conditions are right, to make immense
contributions to human progress. But while thoroughly enjoying
the work and finding it a useful guidebook to the people I was
meeting from the Silicon Valley to the Silicon Forest, it was also
oddly dissonant with the events unfolding around me.
My travel took place from September 16th to the 17th, while the
U.S. stock market fell almost 800 points and rose 700, at one point
gyrating over a range of more than 600 points in one day.
American financial authorities and the White House rushed to
create an immense and somewhat vague relief package that may or
may not stem the current outgoing tide of investment capital and
the associated decline of financial markets worldwide.
Behind this world wide financial panic is, of course, electronic
communication and computer-enabled investment practices
ranging from derivatives to currency arbitrage. The net has made
us one bourse. And the economic role of "foreigners"—a term that
while ostensibly merely descriptive can quickly become
pejorative—has altered significantly. In the 18th century we were
also one bourse, a net of precious metal extraction and trading that
took silver from Peruvian and Mexican mines to Europe, on to
India, and then finally to China, which was silver's ultimate
consumer. That market was held together not by electrons but by
slow and uncertain winds pushing creaking sailing ships, when all
worked properly, across vast oceans of space and time. Then, few
were aware that the price of silver in China was somehow related
to the ability of English or American entrepreneurs to purchase
iron and coal for emerging modern industries.
While the net of sailing ships also transformed economies, it did so
very slowly, and so subtly that the effect was easy to miss. Now
the transformations, occurring in Internet time, are so rapid that
they again defy comprehension. But it is apparent that the world
has just changed significantly. While many of those changes are
unknowable at present, certain elements are clear. One is the
simple truth that we are one bourse; foreigners, Americans, our
fates are very much intertwined.
Americans like to think that our influence upon the "outside"
world is disproportional to its influence on us. Clearly our
importance to the world market is disproportionate to our numbers,
even to our wealth, because of our reckless willingness to borrow,
both as individuals and as a nation. But our current reliance on
outside capital to finance the resultant deficit, and our willingness
to import oil even at catastrophic prices, makes us particularly
vulnerable to the perceptions of foreign investors.
We are also more dependent than we have ever been on the talents
of what many call "foreigners," even if those so designated are
naturalized citizens or even American-born, as my recent travels
demonstrated to me. Indian techies are ubiquitous in Silicon Valley
and of course, the meeting of TIE at Intel demonstrated their
importance in the Silicon Forest as well. The estimate of the Indian
techies at Intel was that they were more than one third of the total
engineering work force.
All of these people, Americans or not, Palfrey and Gasser call
"Digital Natives." But those authors remind us that this is by no
means a "generation" but rather a select "population," given the
gap between the digitally enabled wealthy and the far more
numerous poor. And this minority progresses within a cultural
context of multitudes who see the intrusion of a global culture
centered upon the values of science and the Western tradition, not
as an opportunity but as a threat to indigenous cultures and
beliefs.
Moreover, as Americans we have an additional problem. Our
recent high-handedness in international affairs has added greatly to
long-term resentments at our past pronounced economic, cultural,
and political dominance. For example, our daughter is studying in
Beijing at Tsinghua University, China's own geek and policy wonk
paradise, with a group of twenty-somethings, including Africans,
Russians, Palestinians, Iranians, and some few Americans. When
one of their number arrived at lunch recently and breathlessly
announced that the American economy had finally collapsed, the
group largely met the news with excited approval, seeing it in part
as a possible beginning of their own liberation from American
influence.
That luncheon table in Beijing is also the world of the Internet. It
is far more than an increasingly homogeneous—because
increasingly interactive—group of twenty-somethings bopping to
an MP3-delivered world beat. It includes many talented and
ambitious people, as well as many poor ones, who emphatically
dislike America, partly from past experience, if also from less
well-founded resentments.
The current financial panic presents Americans with a number of
unpalatable choices. We are now far enough from the initial
suggestion of a five hundred billion dollar bailout of troubled
firms—since grown to seven hundred billion—to have read the
polls, and to have had an opportunity for specialists and politicians
to speak. It is evident that there is much opposition to the proposed
bailout, currently a three-page document entirely lacking in details.
Here I wish to confine myself to one small corner of these issues,
the question of Asian support for such a bailout. Such support
might be expressed at the simplest level in rising Asian markets,
which would in turn generate capital to support the American
economy. Others hope that foreign institutions will kick into the
seven hundred billion pot in order to protect their own economies
from the consequence of what is feared to be, failing a bailout, at
least a prolonged American recession.
Some foreign observers have pointed out that when the Asian
economies slumped in the late '90s, our own advice, forcefully
delivered through the American dominated International Monetary
Fund and the World Bank, was that these countries had to suffer in
order to recover. Their firms, even if long-established flagships,
had to go down in the name of "market discipline." Market
discipline, however, seems suddenly to have become absolutely
unimportant—a fact much noted abroad.
Other Asians believe that the underlying problem is more than a
temporary capital shortage remediable with a massive transfusion.
Some feel that the underlying issue is a distorted financial system
lacking meaningful regulation which serves in large part to
regularly transfer huge sums to a small percentage of consumers,
leaving the vast majority to borrow in order to finance their
purchases of Asian products. As one influential Chinese investor
said, "You have been selling mirror images reflected in other
mirrors, and it is over."
The danger, it seems to me, is another sudden shock, perhaps a
rapid decline in the value of the dollar. This would presumably
stampede all doubters into signing on the dotted line, three-page
agreement or not. This is the implied threat held over the
American people and Congress.
There well could be some continuing decline—after all, the seven
hundred billion will not reduce our oil bill or our deficits, but
rather add rapidly to them. But there are factors which reduce the
probability of a rapid decline. Most Asian holders of American
dollars, notably China, Taiwan, and Japan, do not want to dump
that currency. Those holdings are by far the largest part of their own
accumulated wealth.
In addition, Asians truly want to see the American economy
recover, not temporarily, but in a manner that would promise more
than just a caesura before the inevitable subsequent panic. For
many of these observers, themselves members of highly
cooperative societies, a solution that would strengthen consumers
in general—perhaps by protecting those holding delinquent
mortgages—would also be a strongly positive indication that the
U.S. now intends to come to grips with another important part of
the underlying problem.
Another positive sign for Asians would be some indication that the
investment of seven hundred billion taxpayer dollars would
purchase public equity in the firms being bailed out; equity which
could later be sold, permitting the American state to reduce its
deficits, which would further strengthen the economy. This was the
model followed by Sweden in dealing with a very similar difficulty
in 1992.
And there are, after all, also very different models to examine.
When the Asian Contagion of 1997 hit Asian economies, a major
issue was how to support effective demand, how to bring value
back into the economies as world investment retreated and Asian
currencies collapsed. The Chinese chose to invest in education and
transportation infrastructure. This put money into the coffers of the
local and provincial governments which managed the projects, and
ultimately into the hands of workers. The result was that while
Chinese growth slumped, it still continued. After the smoke
cleared, China had thousands of miles of new roads, railways,
many new educational institutions, and a populace that felt that
such suffering as had occurred had been widely shared. Once we
can decide upon an appropriate solution, we still must decide who
is "us" and who is "them." Any bailout of troubled financial
institutions must include foreign ones as well. We need the talents
and the investment funds of foreigners, and to pull back into a
fortress America mentality would be worse than reckless; it would
be stupid. Yet it will be a tempting argument for many who do not
fully understand the roots of our problems, or who do not wish to
solve them in an equitable manner.
But it is much easier to say what can go wrong than to be
optimistic. We got into this mess in part because of the breakdown
not only of our banking system but also of our political system,
gridlocked in partisan struggles while
quietly ceding real power to those private interests which fund the
electoral system and effectively preclude real regulation. And our
traditional watchdog against such crimes, the press, has mutated
into some sort of infotainment monster that now gropes to
understand, let alone explain, where we are now and how we got
this way.
So many questions: will the captains of finance who put us on
these shoals be rewarded for their prior greed from the two
thousand dollars per American man, woman, and child required to
fund a bailout by even the most minimal estimate? Will the
incoming President be left with enough resources, let alone power,
to referee what promises to be yet another struggle over the corpse
of the market economy? Can either of the candidates understand
this mess sufficiently to help solve it? Does the electorate even
care? Can we build a consensus to look at new solutions or will we
depend yet again on the Old Time Religion to see us through?
And all these questions arise in an environment where information,
incorrect information, and outright misinformation circulate only
slightly slower than light. The world is not only watching now, but
will be a participant in this issue, and an increasingly skeptical
one.
I conclude with an incident from Google's Zeitgeist. One of the
speakers was Gao Xiqing, a very sophisticated graduate of Duke
Law, a former Wall Street attorney, and a protester at Tian An Men
in 1989. Now he is the head of the state institution which selects
Chinese investments abroad. He was asked how China might feel
about future investments in the United States including, of course,
those financial instruments which support our deficit. He paused
and, selecting his words carefully, replied that China was going to
behave as a rational investor and observe very carefully. The price
of silver in China—in this case, dollars—is again important, and
this time around we had better recognize that fact.
Byteing Off More...
This is a new feature for Interface. We have been
interested for some time in drawing readers' attention to
worthwhile discussions on the World Wide Web that are
short and can stand alone. It is also our intention to lead
readers to useful sources to which they themselves can
usually subscribe without charge.
For this month we lead off with an issue currently in the news, part
of the "Sarah! Sarah! Sarah!" phenomenon, referring of course to
Sarah Palin, the clearly charismatic Republican Vice-Presidential
candidate. The event we are following is the hacking of Palin's
email account. There is a great deal of material on the WWW
about this incident, but we have chosen to follow the thread in
successive postings in ComputerWorld because we find them
easily accessible and well written. [1]
We are less interested in the contents of Palin's email, which seem,
frankly, dull, than in what the rest of us can learn about protecting
our own email. After all, few of us pay all that much attention to
email security.
The most probable method used—although some doubt this would
in fact be effective[2]—was a simple password reset scheme. This
amounts to attempting to log into an email account of a second
party.
A target's email address and service provider are pretty easy to
identify for many people; such addresses are often scattered across
the Internet. Services such as http://www.contactvip.com/ sell
access to tens of thousands of them. For many of us, simply
accessing institutional or company pages would reveal such an
address—try finding your own; you will very probably be surprised
at how easy it is.
The next step is simply to try to log into the second party's
account. Many people and some sites utilize the user's email
address as the login ID. We already have that....
Now all that remains is the password. After a few failed tries you
will almost certainly encounter the "forgotten your password?"
screen. Yes? No problem, we can reset the password. But the next
step is often to answer a security question: "What is your mother's
maiden name?" Your favorite sport? Your dog's name? Etc.
Now it is back to the web to look the information up. Between
corporate bios, YouTube and other Web 2.0 social sites,
genealogical sites, blog postings, etc., most information can be
found. In Palin's case a small group working together apparently
answered three separate security questions in little time.
Answer the question correctly, then reset the password. Voilà!
While this might take some limited time on the WWW, this
approach would probably work for all but the truly
security-conscious among us.
What is the solution? Don't use your real mother's name, but the
mother you wish you had had, in my case, Lauren Bacall. (Sorry
mom.) Or the dog you wish you had had: Lassie! No more worry
about falling down wells, or at least of getting out of them! (Sorry
Buddy.)
Other good advice is to mix letters and numbers in your passwords,
logons, and identifications. Simply doing this will greatly delay
even a major "brute force" attack, wherein a large computer or
group of computers simply tries all possible combinations until the
intruder is in. Network security systems are also likely to have
their own safeguards against such attacks, and such attacks will
hopefully be noticed quickly in any event.
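To put a rough number on why mixing in digits delays a brute-force attack, here is an illustrative back-of-the-envelope calculation. The eight-character length and the particular character sets are assumptions for the sake of the example, not figures from this article:

```python
# How much larger the brute-force search space becomes when digits
# are mixed into an 8-character password.
# (Length and character sets are illustrative assumptions.)
LENGTH = 8
lowercase_only = 26 ** LENGTH          # letters a-z only
lowercase_plus_digits = 36 ** LENGTH   # letters a-z plus digits 0-9

print(lowercase_only)                  # 208827064576
print(lowercase_plus_digits)           # 2821109907456
print(round(lowercase_plus_digits / lowercase_only, 1))  # 13.5
```

Even this modest change multiplies the attacker's work more than thirteenfold; adding uppercase letters or symbols enlarges the space far more.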
Such passwords can be difficult to remember, so try this simple
variant: use whatever English word you were going to use, but
systematically convert some letters to digits. "L" might become
"1," "O" a "0," "J" a "6," and so on. Establish the
correspondences once, write them down someplace—the table, not
the passwords—and you are good to go forever.
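The letter-to-digit scheme above can be sketched in a few lines of code. The particular mapping here is only an example of the kind of table you might write down; any substitutions you can remember work the same way:

```python
# A sketch of the letter-to-digit substitution described above.
# The mapping is an illustrative assumption, not a recommendation
# of these specific pairs.
SUBSTITUTIONS = {"l": "1", "o": "0", "j": "6"}

def digitize(word: str) -> str:
    """Replace mapped letters with digits, leaving other characters unchanged."""
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in word)

print(digitize("hello"))    # he110
print(digitize("journal"))  # 60urna1
```

The memorable word stays memorable, but the resulting password no longer appears in any dictionary an attacker might try first.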
[1] We have relied primarily upon a posting by Gregg Keizer,
"Security researchers ponder possible Palin hacks. There are lots
of ways someone could hack her Yahoo e-mail, say experts",
found at: http://www.computerworld.com/action
/article.do?command=viewArticleBasic&articleId=9115100&
intsrc=news_ts_head and also by Gregg Keizer "Update: Hackers
claim to break into Palin's Yahoo Mail account It's 'incredibly
dangerous' to use a private account, says security expert" found at
http://www.computerworld.com/action
/article.do?command=viewArticleBasic&articleId=9114934 Also
useful was Jaikumar Vijayan "Web proxy firm working with FBI
to trace Palin e-mail hacker. The webmaster of a Ga. company
says he's been asked to save server logs"
http://www.computerworld.com/action
/article.do?command=viewArticleBasic&articleId=9115099
[2] As indicated in the sources used for this article, some have
questioned whether or not a password reset attack could have been
successful given the security at Yahoo.com, where the hack
supposedly occurred. We, however, as stated above, believe this to
have been the probable method utilized.
Seeing Beyond The Grand Illusion
Steve Rhine, Ed. D.
Willamette University
In 2003, Larry Cuban warned us that computers have
been Oversold and Underused[1]. In his experience in
schools in the Silicon Valley, he found computers either
not being used or being used as advanced typewriters. Clifford
Stoll urged us to reconsider our growing addiction to the Internet,
arguing in Silicon Snake Oil (1996)[2] that the Internet cannot
provide a richer or better life. He opened that book with a
comparison of "exploring" a virtual cave with a software program
and his fearful, memorable experience trudging through the mud
and darkness of a real cave. In High Tech Heretic (2000)[3] Stoll
contended that we need to consider the costs of technology along
with the benefits. Students are already overwhelmed with
information. What they need is the critical analysis required for
learning. Each of these authors questions the grand illusion that
technology is benefiting learning. However, each believes that
there are potential benefits of technology that we have not
consistently accessed in schools.
In my own experience visiting middle and elementary schools, I've
seen computer "learning centers" in which students play
"educational games." In my article, "Exorcising the Edutainment
Curse" for the journal TechLearning (1997),[4] I lamented the use
of computers in classrooms for edutainment—the illusion of
learning while having fun. A program such as Living Books' "Just
Grandma and Me" is a perfect example of parents and teachers
hoping that reading skills are being developed while students
mindlessly click on trees so squirrels run around. "Lacking
significant staff development, it is no wonder that teachers' use of
technology often diminishes to what is easiest to do with
technology: Sit students in front of the computer and let them play.
Hence, the edutainment curse is from programs that make
technology easily usable and fun while not maximizing the power
of computers as learning tools. Exorcising this curse will take
recognition of technology's possibilities and training to make them
happen."
This spring the National Education Association (NEA) and
American Federation of Teachers (AFT) published the report
"Access, Adequacy, and Equity in Education Technology,"[5]
indicating that we haven't made much progress in using
technology to its potential in the decade since I wrote that article.
There are some intriguing findings in the report that suggest
confusing dichotomies. Urban teachers were less likely to use
computers than suburban or rural teachers, while they were
strongest in their belief that computers could positively impact
student learning, likely due to the fact that they believed they had
less adequate equipment, software, and support. Elementary
teachers had more computers in the classroom than their
counterparts in higher grades, but used computers less for
instructional purposes. Middle and secondary teachers had
students use the Internet in labs and libraries more, particularly for
research, yet they also believed that overuse of information on the
Internet caused the quality of student research to decline and that
they did not have sufficient access to technology to do their jobs
effectively. While new educators were more likely to integrate
technology into their instruction, they were most likely to believe
they were inadequately prepared to use the Internet for research or
integrate technology into their instruction. Access and training are
key issues all around.
However, while less than one-third (32 percent) of teachers used
technology for instruction "at least a few times a week,"[6]
teachers continue to be "highly optimistic about the impact of
technology on their jobs and on their students, and they considered
technology essential to teaching and learning."[7] Ninety-five
percent of teachers believed that technology could improve
students' learning. Perhaps in agreement with Cuban, Stoll, and
others, we, as educators, continue to hang on to the belief that
technology's promise is just around the corner, but we are not
realizing that promise.
So, where do we go from here?
One direction is suggested in Steven Jones' book Against
Technology: From the Luddites to Neo-Luddism.[8] In the book
Jones tells the history of the "Luddites" who followed the
inspiration of Ned Ludd in smashing the new machinery of the
British textile industry. Some might say they were just fearful and
stubbornly resisting progress. Others may say they were trying to
preserve a way of life they valued, which was being threatened. As
I implore my son to get off the Internet and go outside or plead
with my daughter to unplug herself from her iPod so we can chat, I
can appreciate the sentiments that lead to wanting to rage against
the machine. However, I have seen students get excited about
learning, develop understanding of concepts in ways that were
impossible previously, and create multiple representations of ideas
that inspire new meaning. I've seen glimmers of that promise.
So, how do we move beyond that elusive illusion and capture the
potential? Two roads diverge in the woods.
With the advent of No Child Left Behind (NCLB) in 2002,
government funding for technology in schools ended for programs
such as "Preparing Tomorrow's Teachers to use Technology" and
moved into improving school testing and data-analysis.

Since the 2002 national policy shift, no policies have
made a systematic or broad-scale effort to channel the
resources and experimentation of states and districts
toward using technology as an assistive learning tool in
education to the degree that other fields and industries
have used technology to enhance performance.[9]

In our current era of accountability, the only things that count are
the ones that can be counted. The only way we know if we are
achieving something is if it can be measured objectively.
Professional development has been de-emphasized if it didn't
relate to testing.
O'Dwyer et al. (2005) and others have rightly critiqued studies of
the impact of educational technology as lacking in academic
rigor.[10] As a result, "achievement" is increasingly being
narrowed to success on standardized achievement tests in order to
address the issue of rigor. The article points out, however, that
these recent studies have their shortcomings as well, including:

a. weak or non-existent measures of student use of
technology;
b. measures of technology use that treat use as a
unidimensional construct rather than a multi-
faceted set of constructs;
c. failure to use a measure of prior achievement to
control for pre-existing differences in achievement;
d. use of total test scores as the outcome measure
rather than focusing on the sub-scores that are
most closely associated with the constructs
developed through a given use of technology;
e. use of analytic methods that do not consider the
multilevel structure of educational settings;
f. use of school-level rather than student-level
measures of achievement; and
g. failure to randomly assign participants, either at
the individual or classroom/school level, to control
and experimental conditions. [11]

Measuring the impact of technology is a complex task indeed.
In their study, O'Dwyer et al. purported to address each of these
variables with the exception of randomizing students or creating
control groups. In the end, they found that use of technology to
edit papers increased students' scores on the state standardized
achievement test. Oddly enough, considering Cuban's concerns,
advanced typewriters improved writing. Interestingly, they also
found that students' use of computers to create presentations, or
of a home computer for recreation, was associated with lower
test scores, especially in reading.
So, one road moving us beyond the illusion to a sense of progress
is narrow. The goal is to have a clear and focused connection
between the tool and the learning goal. This makes sense. Using
technology to edit papers helps your writing skills. However, is the
conclusion that we need to reduce education to what can be
measured with a multiple-choice test? And only use technologies
that can lead to clearly measured outcomes with standardized
tests?
The other road is the wide, holistic path. What about creativity,
critical thinking skills, problem solving, etc? Is part of the illusion
our inability to see and document what we are achieving? As my
experience with edutainment suggests, I wholeheartedly agree that
not all uses of technology are equal in the eyes of the achievement
gods. Some uses of technology can distract from learning. As
O'Dwyer's group discovered, there may be trade-offs as
well—students creating presentations may be developing critical
thinking and communication skills, but the time it consumes may
take away from developing other skills. Are we willing as a nation,
as a society, to accept some of those trade-offs? What do we want
of our graduates? I'm hoping it is more than the ability to bubble in
"C".
What about the concerns of teachers in the NEA/AFT report?
What about the persistent barriers to achieving technology's
promise? Stoll ultimately contends that administrators must
involve teachers in the planning and implementation of technology
plans. They should allow them more unstructured time, technical
support, and professional development opportunities. Suzie Boss,
in a recent article in Edutopia[12] counters that we cannot wait for
policy shifts or the pot of gold. She suggests five steps to
achieving some of that potential right now:

1. Innovate with tools you already have;
2. Seek out free, easy-to-use digital resources;
3. Overcome your fear of the unknown;
4. Start with small, fast projects that enhance learning; and
5. Learn with your students.

The group that I work with, the Oregon Technology in Education
Network, has found that small, targeted grants can do a world of
wonders in helping new teachers explore effective uses of
technology in their classrooms. [13]
Malcolm Gladwell's book, The Tipping Point, explains that change
often happens quickly and unexpectedly.[14] Little changes can
ultimately make a huge difference. While professional
development is at the top of the list regarding why technology use
in schools is not realizing the hopes of society, access is close
behind. Netbooks (small, Internet-oriented computers) are now at
the price that graphing calculators were ten years ago.[15] Cell
phones are
becoming ubiquitous and the iPhone represents a seismic shift in
what a phone can do, particularly with Internet access. Are we
nearing the tipping point for access to the Internet in schools? Will
teachers capitalize on the resources of the Internet if it does tip?
In conclusion, I have great hopes and expectations for the next few
years. While I concede that we have learned the value of focus
from the NCLB years, we also have ascertained once again that
reading, writing, and arithmetic are not everything when it comes
to education. Both roads, narrow and wide, have their value. We
need to develop new ways to demonstrate success in a variety of
forms, so achievement of non-3R learning is just as valued, so
technology's broad potential impact can be made tangible. On the
other hand, if a richer or better educational life can be found
without the technology, we need to be aware of that. A new
presidential administration will also bring new emphases on the
roles technology can play in schools. As the dawn of the Internet
age finally reaches the classroom, access will no longer be the
mantra of teachers wanting to maximize students' learning. Are we
ready to see beyond the illusion of learning with technology in
education? Are we prepared to make it real?

Endnotes
[1] Cuban, L. (2003). Oversold and Underused: Computers in the
Classroom. Harvard University Press.
[2] Stoll, C. (1996). Silicon Snake Oil: Second Thoughts on the
Information Highway. Anchor Books.
[3] Stoll, C. (2000). High Tech Heretic: Reflections of a Computer
Contrarian. Anchor Books.
[4] http://www.techlearning.com/db_area/archives/WCE/archives
/rhine.php, 1997; accessed 10/2008.
[5] http://www.nea.org/research/images
/08gainsandgapsedtech.pdf, 2008, accessed 10/2008.
[6] NEA/AFT report, p. 20.
[7] NEA/AFT report, p. 22.
[8] Jones, S. (2006). Against Technology: From Luddites to
Neo-Luddism. Routledge.
[9] NEA/AFT report, p. 23
[10] O'Dwyer, L. M., M. Russell, D. Bebell, and K. R.
Tucker-Seeley. (2005). "Examining the Relationship between Home and
School Computer Use and Students' English/Language Arts Test
Scores." The Journal of Technology, Learning and Assessment,
3(3).
[11] O'Dwyer et al., p. 36.
[12] Boss, S. (2008). "Overcoming Technology Barriers: How to
Innovate Without Extra Money or Support"
http://www.edutopia.org/technology-how-to-implement-classroom,
2008, accessed 10/2008.
[13] http://www.oten.info/grants/minigrants.html
[14] http://www.gladwell.com/tippingpoint/index.html
[15] http://cnettv.cnet.com/9742-1_53-50003413.html?tag=api

Comments

Theresa

Thank you for your article. I have wondered how effective the
edutainment model of education is in developing critical
thinking in the students. Children exposed to television and
now the internet at such an early age come to expect a lot of
entertainment with their learning. The high stimulus level of
these two medias seem counter to creativity.

Posted at 13:50 on February 13, 2009
Campaign 2.0
By Jen Hernandez, Berglund Student Fellow
A concrete characterization of the electronic aspects of the
2008 presidential campaign can be hard to put a finger on,
just as is the case with the elusive categorization of Web
2.0. Web 2.0, best paraphrased by Ian Davis,

is an attitude not a technology. It's about enabling and
encouraging participation through open applications
and services. By open I mean technically open...but
also, more importantly, socially open...Of course the
web has always been about participation, and would be
nothing without it...Technology has moved on and it's
important that the social face of the web keeps pace. [1]

It's time now for politicians to keep up with the emerging "social
face" of potential voters, as well as modern America. The 2008
campaign has tapped into trends and attitudes regarding increased
interactivity with the web as a platform. More attention is being
paid to the social side, as well as user interaction with news and
information online. Communication is changing, and so must the
messenger.
Communication has always been a critical part of social
interaction. That has not changed much; rather, the modes of
communication have. With faster bandwidths available, as well as
information supported by multiple devices, communication can be
nearly instantaneous. Also, according to David Talbot in an article
explaining the use of the social technology at the center of Barack
Obama's campaign for the presidency, "Americans are more able
to access media-rich content online; 55 percent have broadband
Internet connections at home, double the figure for spring 2004.
Social-networking technologies have matured, and more
Americans are comfortable with them." [2]
A Pew Research Center study found that 42 percent of
18-to-29-year-olds now regularly get information about the
campaign online, compared to 20 percent in 2004. [3]
Younger people also tend to rely on the Internet for campaign
information, while older people tend to use more traditional media.
David Talbot pointed out in his October 2008 article for the
Technology Review that

...at times last year, [McCain] made effective use of the
Internet. His staff made videos...celebrating his
wartime service—that gained popularity on YouTube.
But the McCain site is ineffectual for social
networking... McCain's organization is playing to an
older base of supporters... it seems not to have grasped
the breadth of recent shifts in communications
technology. [4]

While some of the new media are merely online versions of
televised news stations such as CNN, there are also strictly
Internet-based news sites, like Yahoo News, and even YouTube.
[5]
Social networking sites like MySpace [6] and Facebook [7] also
play a role in providing information, because according to the Pew
study, "more than a quarter of those younger than age 30 (27
percent)—including 37 percent of those ages 18-24—have gotten
campaign information from social networking sites. This practice
is almost exclusively limited to young people." [8]
However, according to Talbot, there is a growing population under
25 who, "are no longer using e-mails, not even using Facebook; a
majority are using text messaging" [9].
Another, more widely used form of instantaneous
communication—text messaging—has been fully exploited by the
Democratic Party, which uses text messages very creatively as a
political buffer. Leslie Sanchez states that, "The voters on Obama's
cell phone, e-mail, and text-messaging lists can be contacted
instantly, wherever they are. This will allow him to stay ahead of
negative news stories, and in a close race, it might make all the
difference." [10] The more instantaneous the communication, the
more quickly parties are able to combat bad publicity about their
candidates before the news even hits the unwired masses via
traditional modes.
For politicians, this presents a new challenge in getting their
message across to voters, active and disaffected alike. With
individuals in control of what they see, politicians must be just
as savvy in grabbing their attention, especially the attention of
voters who are using the Internet to test the waters and discern
what they can about the election. This freedom of choice on the
voters' part puts politicians at something of a disadvantage.
So we—the voters—benefit from this system because we can
choose when, where, and how we get our information. Should we
choose, we can sign up for text messages, browse websites, and seek
out only what we want to know. Besides benefiting on the
receiving end, voters can also become part of a dialogue with
others and with their party as a whole.
An article addressing this issue on CNN Politics.com states that,
"Between political blogs, social networking sites, online media
and video share sites, people need little more than an Internet
connection to become a more active part of the political
process—or at least keep up with it." [11]
There are two sides to every innovation, however. By seeking
out only what they want to see, especially when starting with a slant
toward one party or idea, viewers may completely close
themselves off to the views of the other candidates.
One of the drawbacks for both the people and the politicians is the
wealth of information the Internet provides; "The [Internet], like
many things in this world, is a very powerful tool when used
judiciously. Subscribing to sound bytes with no depth and
following sites that only affirm our currently held beliefs are just
as bad or even worse than not being informed at all" [12],
according to David Sanderson.
The Internet can be just as harrowing as it is helpful, but by
keeping this in mind, users can select information from competing
sources, even when those sources clash with currently held beliefs. In a
blog entry, Andrew Neubauer urges responsible citizens to, "use
the internet to broaden your views, not reinforce your own
preconceived notions." [13]
After Election Day, it should be interesting to look back at how the
candidates' social networking strategies played into the results and
turnout, if at all. And once we have a new President, it will be
interesting to see how each party continues its networking
strategy. Just think of the possibilities of an established network
connecting the people and their government. Will the dialogue
continue, or will it be cut off once whoever is in office is safely
there?

Sources:
[1] Davis, Ian. "Talis, Web 2.0 and All That." Internet Alchemy. 4
July 2005. http://iandavis.com/blog/2005/07/talis-web-20-
and-all-that.
[2] Talbot, David. "How Obama Really Did It: Social Technology
Helped Bring Him to the Brink of the Presidency." Technology
Review Oct. 2008: 78.
[3] Keeter, Scott. "Internet's Broader Role in Campaign 2008:
Social Networking and Online Videos Take Off." The Pew
Research Center for the People & the Press. 11 Jan. 2008.
http://people-press.org/report/384/internets-broader-role-in-
campaign-2008.
[4] Talbot, David. "How Obama Really Did It: Social Technology
Helped Bring Him to the Brink of the Presidency." Technology
Review Oct. 2008: 82.
[5] Keeter, Scott. "Internet's Broader Role in Campaign 2008:
Social Networking and Online Videos Take Off." The Pew
Research Center for the People & the Press. 11 Jan. 2008.
http://people-press.org/report/384/internets-broader-role-in-
campaign-2008.
[6] http://www.myspace.com
[7] http://www.facebook.com
[8] Keeter, Scott. "Internet's Broader Role in Campaign 2008:
Social Networking and Online Videos Take Off." The Pew
Research Center for the People & the Press. 11 Jan. 2008.
http://people-press.org/report/384/internets-broader-role-in-
campaign-2008.
[9] Talbot, David. "How Obama Really Did It: Social Technology
Helped Bring Him to the Brink of the Presidency." Technology
Review Oct. 2008: 82.
[10] Sanchez, Leslie. "Commentary: Obama's High-Tech Edge in
Presidential Politics." CNN.com. 2 Sept. 2008. http://www.cnn.com
/2008/TECH/09/01/sanchez.obama/index.html.
[11] "How Technology is Revolutionizing Democracy." CNN.com.
26 June 2008. http://www.cnn.com/2008/POLITICS/06/26
/technology.election/index.html.
[12] Sanderson, David. "Democracy Waits for the Responsible
Web-Surfer." eHub. 4 Sept. 2008. http://ehub.journalism.ku.edu
/2008/09/democracy_waits_for_the_respon.php.
[13] Neubauer, Andrew. "A Plea for Sanity." eHub. 3 Sept. 2008.
http://ehub.journalism.ku.edu/2008/09/a_plea_for_sanity.php.

Comments

Pat McGregor

I find it very interesting that the statistics show that the young
crowd is spending more time acquainting themselves with
issues online than those of us over 35. The blogs I subscribe
to, generally sites with a good mix of individuals but specifically
with those of us who are older, have had rabid interest in all
areas of the campaigns and propositions/initiatives, etc.

Now, there certainly is a certain amount of self-selection here,
but I do wonder how the statistics about the age groups were
gathered.

However, I do agree about how the Net has sparked the
interest and imagination of younger folk. It's gratifying to see
blogs, chat rooms, etc., full of folks 18-30 who are involved in
the process more than at any time I can remember since the
late 60's.

I look forward to seeing how the political analysts sift and
digest this data and shape upcoming campaigns based on their
visions of what it all means.

Posted at 13:05 on February 13, 2009

Ryan

As more and more moves online in terms of formal modes of
communication, the digital divide concerns me more and more.
I work with college students on a daily basis and their skills in
effectively using online information are often significantly
overstated. This is particularly true of young people who come
from economically disadvantaged areas. As literacy expanded,
more and more people could access information on their own,
and I fear that we will be leaving huge segments behind if we
do not take care.

Posted at 13:43 on February 13, 2009

The World is a Showcase for Creative
People
by Leonard D. DuBoff, © 2007
In a world where YouTube is a creative venue for aspiring
television writers and artists, and Second Life is used for
post-college job interviews, it is rare, and sometimes
undesirable, for a creative person's work to remain local.
With the World Wide Web's ubiquity and the removal of other
barriers to international commerce, most items find their way into
both national and international channels of commerce.
For instance, a graphic designer may be asked to create a logo for
a company or institutional website, perhaps according to a contract
or as part of a funded grant, or a photographer may be asked to
provide an image for a publication that will be distributed
throughout the English-speaking world and beyond. Even
inventors, such as university faculty who commercialize their
work through private or university-private consortia, may find that
their inventions are being handled by multinational companies,
such as, for example, Price Costco, with retail outlets throughout
much of the world.
The fact is that we have become a global economy and this
presents both benefits and challenges. It is gratifying to learn that
one's creative work is being appreciated by individuals all over the
globe, while it is frightening to realize that one's creative rights
may be at risk in far-off places. This raises interesting questions
for creative people regarding the ability to protect rights in foreign
jurisdictions and the necessity to comply with foreign law as well.

Copyright
It is well known that any original work of authorship, such as a
work of art, a novel, or a textbook, automatically provides its
creator with a copyright in and to the work, provided the work is
original, has some minimal degree of creativity associated with it,
and has been put into a tangible form.
This is true under American copyright law, but it may not be true
under the copyright law of every country of the world. Fortunately
for creative people in the United States, we have become parties to
at least three multinational copyright treaties that provide the
creative person with the ability to obtain protection in more than
160 countries of the world with minimal effort.
The oldest and most universal copyright treaty is the Berne
Convention. It is more than a century old; yet the United States
resisted membership until the 1980s, because it was felt that the
rights accorded creative people under that treaty were more
extensive than American public policy was prepared to permit. The
effective date of United States implementation of the Berne
Convention is March 1, 1989.
This attitude changed over time, and the political
pendulum swung in the other direction, so that the United
States not only became a party to the Berne Convention but also
enacted the Visual Artists Rights Act of 1990, which provided
creative Americans with rights in their work that had previously been
available only on the European continent.
The Berne Convention has been adopted by 163 countries of the
world, and, among other things, relaxes many of the formalities
previously required by the American copyright laws. The Berne
copyright treaty makes it clear that a creative person need not use
the copyright notice on a work in order to enjoy copyright
protection for that work. This is, unfortunately, misleading since
the rights obtained when one omits the copyright notice from a
protected work may be illusory. The U.S. copyright law provides
that anyone who relies in good faith on the fact that no notice is
used on a copyrighted work may be deemed an "innocent
infringer." Innocent infringers may not be held liable for damages
and may even be permitted to continue infringing. The law goes on
to state that the way to defeat the defense of innocent infringement
is to use the copyright notice. It is, therefore, clear that while the
notice is not required, it is certainly a good idea to use it.
Most of the provisions of the Berne Convention have been
incorporated into the newest multinational treaty affecting
copyright – the Agreement on Trade Related Aspects of
Intellectual Property Rights (TRIPs), administered by the World
Trade Organization (WTO). The moral rights provisions (that is,
the creative person's right to preserve the integrity of the work and
to require that the work be properly attributed) of the Berne
Convention have not been included and there are some additional
provisions found in TRIPs. For instance, TRIPs provides that
computer programs are protected as literary works and sets forth
rules regarding the protectability of databases.
The United States is also a party to the Universal Copyright
Convention (UCC) developed by UNESCO, which has about 62
member nations. This treaty also requires the person to comply
with the copyright law of the country where the work is created
and, in addition, to use the international symbol "©". The UCC
also requires disclosure of the year when the work was first
published, as that term is defined in the statute. Publication is a
technical concept and includes any distribution of copies to the
public by any means now known or hereafter developed. Copy is
defined as including the original work itself.
The UCC provides the copyright claimant with the ability to obtain
reciprocal copyright protection within all other UCC-member
nations; provided, however, that the claimant may not obtain any
better rights than would be available for individuals under the law
of the nation where rights are being claimed. This means that if
American rights are greater and more protective than the rights of
a country in which a copyright is being enforced under the UCC,
then the party desiring to protect those rights will be accorded the
same degree of protection given other nationals of the country in
which the copyright is to be enforced. An American seeking to
enforce a copyright in, for example, France will be entitled to only
the rights provided French nationals under the copyright laws of
France, even if those rights are less protective than the American
copyright law.
The United States is also a party to the Buenos Aires Convention,
which provides copyright protection throughout this hemisphere so
long as the treaty's requirements are fulfilled. That treaty provides
that citizens of member nations, such as the United States, may
obtain copyright protection in other member nations by complying
with the copyright laws of the copyright claimant's country and, in
addition, adding the words "All Rights Reserved" in Spanish,
Portuguese, or English, whichever is the official language of the
claimant's country. Since 2000, all Buenos Aires Convention
members have been members of the Berne Convention, so this
treaty is essentially obsolete.
As can be seen from this discussion, a person who creates
copyrightable work in the United States anticipating that it will be
used or displayed in other countries should permanently affix a
copyright notice to the work.

Trademarks
In addition to creating work that may be protected by copyright,
creative people often create works that are intended to identify the
source or sponsorship of a product or service. These are commonly
referred to as either trademarks or service marks, though the most
common term used for both is trademark.
The United States trademark law is known as the Lanham Act, and
the law provides that any name, symbol or logo when used in
interstate or foreign commerce may be protected under the federal
statute. In order for a trademark to be protectable, it must not be
generic (i.e., the noun describing the item in question, such as
chairs for chairs), and it must not be confusingly similar to the
protected mark of another. Under U.S. law, there are 34
international categories for goods and 12 categories for services.
Generally, marks that are protected in one category may be used
by another in other categories, so long as there is no likelihood of
confusion. This means that, for example, the word Cadillac was
permitted to be used to describe automobiles by the General
Motors Company and by another independent company for dog
food. It also means that the mark Nike had been used by an Oregon
shoe manufacturer for running shoes and by the U.S. government
for missiles. On September 28, 2006, however, the U.S. Congress
passed a new trademark dilution act that prohibits the use of marks
similar to famous marks if the use would dilute or undermine the
protected mark either by blurring or tarnishing the protected mark
– even if the products or services are in different categories and
even if there is no likelihood of confusion.
As of the date of this writing, there is no multinational trademark
treaty to which the United States is a party regarding trademark
enforcement, though TRIPs (discussed above) does require
member countries to implement minimum standards for the
protection of trademarks.
The United States has become a party to the Madrid Protocol,
which enables Americans to register their trademarks in 78
countries throughout the world through the U.S. Patent and
Trademark Office. Unfortunately, Canada has not become a party
to this treaty, and thus it is still necessary to register a trademark in
Canada by working with a Canadian correspondent.
The European Economic Community has, among other things,
created a so-called community mark within the EEC. In order to
obtain registration of a community mark, a correspondent in a
member nation must assist by applying through a member nation.
Moving in the other direction, every state in the United States has
its own state trademark law. Unfortunately, these laws have
application only within the geographic boundaries of the state, and
the remedies available for infringement of a state trademark often
differ from those available under the federal laws.
It is unlikely that a creative person will find that the state
trademark registries provide the kind of protection necessary in
today's global society. This is particularly true if the mark is to be
used as a domain name on the World Wide Web. In this event, the
word used as a URL will actually be displayed throughout the
globe, and local protection will not suffice.
As previously noted, there are numerous categories of trade and
service marks in which a word may be protected. This is true for
trademark purposes; yet, there is only one World Wide Web, and
the same name or abbreviation may be adopted by several different
businesses in several different countries. Thus, for example, the
American Bar Association, known as the ABA, found that it could
not obtain the URL for "aba" since that URL had already been
taken by the American Booksellers Association.
This situation can give rise to a great deal of confusion and
frustration, particularly when the look-alike domain name has been
adopted for the express purpose of causing market confusion by
deflecting potential customers away from the original owner of the
name.
In the early days of the Web, this gave rise to insoluble problems
for many individuals and businesses, who learned that there was
nothing they could do when the owner of an offending URL
was located in another state or country. Recently, the situation has
changed, and today the World Intellectual Property Organization
(WIPO) has established an online arbitration procedure whereby a
domain name owner can arbitrate the validity of another's use of a
confusingly similar URL, regardless of where the owner of the
offending domain name is located.
The proceeding occurs online, and great deference is given to
those who have registered their domain names as trademarks and
established that they have a priority of use as compared to the
other party who has acquired a confusingly similar domain name.
All that is available in this process is the opportunity to acquire the
offending domain name; damages may not be awarded. Despite
this fact, the process is quite beneficial, since it is fast and
comparatively inexpensive, particularly when compared to the
alternative of filing a lawsuit in federal district court.

Patents
A creative person who develops something that is new, useful,
and not obvious may be entitled to obtain a
patent to protect that invention. The patent laws of the United
States make it clear that the invention may not have been sold or
otherwise disclosed more than one year before the application for a
patent is filed, or the invention may not be patented.
As of the date of this writing, there is no multinational patent
treaty providing for the issuance of a single international patent,
though TRIPs does require member countries to implement
minimum standards for the protection of inventions. An inventor
applying for a patent in the United States must also apply in every
other country in which a patent is desired before a patent issues in
any country, since issuance of a patent discloses the invention.
This means that an inventor desiring international protection must
be prepared to invest a great deal of time, money, and energy in
applying for patents in all desired countries before any patent
issues.
Bills have been introduced into the United States Congress which,
if adopted, would revise the U.S. patent system and position this
country to consider participating in a multinational patent treaty. It
is, however, very unlikely that the United States will, in the
foreseeable future, become a party to such a treaty.

Trade Secrets
The National Conference of Commissioners on Uniform State
Laws is a body whose mission is the drafting of uniform laws for
adoption throughout the United States. It has drafted the Uniform
Trade Secrets Act, which has been adopted by virtually every
state. Under this body of law,
anything that provides its owner with a commercial advantage and
is kept secret may be protected as a trade secret. Anyone who
obtains a protected trade secret through improper means will be
vulnerable to a damage award, as well as an order preventing the
improper use. In addition, the Uniform Trade Secrets Act provides
the complaining party with the ability to recover attorney fees
incurred in the litigation.
Many countries have adopted laws that are comparable to the
Uniform Trade Secrets Act, and it is important for any person who
works with innovative work which is commercially valuable and
has been kept secret to determine whether any international
transactions will be with individuals or businesses in countries
having comparable laws. An experienced intellectual property
lawyer should be able to assist in making this determination.

And Other Requirements


Creative people who deal with personal care products or food
products must also comply with the labeling requirements of all
countries into which their products are imported and sold. Here too
it is important to work with an experienced attorney who can
research the requirements of the countries in which these products
are to be sold.
Most countries have recognized the importance of
protecting the environment in which we live. There has been a host
of international conferences dealing with environmental
protection, and environmental laws have sprung up throughout the
world. Unfortunately, the laws are not consistent or uniform. An
extensive discussion of the complex and often confusing
environmental laws that have blanketed the globe is beyond the
scope of this article, but by working with an experienced
international environmental lawyer, you should be able to
determine the requirements of the countries into which sales of
your products are targeted.
The world of international commerce provides creative people
with a host of opportunities. It also provides them with a vast array
of challenges. These include, among other things, the requirement
that one have the skill to create a work with global appeal. It also
means that the creative person must be sensitive to the laws,
treaties and practices that exist in the countries or venues in which
the creative work is to be marketed. The contract used by the
creative person to distribute work or provide services must reflect
myriad laws, treaties and practices which are present in the desired
channels of commerce.

Comments

Would You Like Virtual Fries with That?:
The New Frontier of Online Food Marketing
Shawn Davis, Ph.D.
Pacific University - School of Professional Psychology

Introduction
Let me begin with a confession. I'm finishing this article with an
ever-growing stack of game pieces for an online diversion
currently being pushed by a national fast-food chain. Yes, you
know the one I'm talking about [1]. During my search for the
ever-elusive Boardwalk piece (You're out there somewhere!), I
catch myself wondering how I've been so easily convinced to put
my waistline at risk one value meal dollar at a time. Sadly, it's not
just those individuals advertising their products who are to blame,
it's human nature and our adherence to the fundamentals of basic
learning theory that help make online marketing so effective.

Why Does it Work So Well?


Advertising itself, of course, isn't new. We are bombarded on a
minute-by-minute basis with advertising of all forms. For years,
television and radio broadcasters have sold advertising space for
economic support, and the time devoted to advertising during a
given hour of broadcasting has steadily increased [2]. This form of
advertising, however, differs from that encountered online in a
number of ways, with each distinction tapping into human
characteristics that make online advertising more effective.
First, there has traditionally been a clear difference between
entertainment content and advertising in a given broadcast.
Though this is changing with increased product placement within
the entertainment portion itself [3], for now there remains a
distinction. As such, we are able to shift our attentional focus away
from the advertising without missing out on the desired
entertainment content. Advertising is often contained in online
content without this clear demarcation (e.g., popup advertisements,
resulting links in online searches, and embedded hyperlinks), and
thus we aren't provided a cue to moderate our attention. As a
result, we provide online advertising the same level of attentional
focus that we utilize when dealing with desired web content. This
is product placement at its best. Though we might view this form
of advertising as a momentary annoyance, by devoting our
attention to the advertised product we have actively engaged in a
fundamental cognitive step toward remembering the information
contained within the advertisement and have increased the
potential for changes in future behavior.
A second way that online advertising has effectively tapped into
human cognitive processing is in the use of online games to
promote food. The increase in attentional focus that we find when
dealing with online static advertisements is multiplied when the
user is an active participant in a game either devoted to use of a
given food product or laden with references to the product. This
active participation leads to a "deeper" level of information
processing that has been found to have a substantial influence on
our future behavior. According to Craik and Lockhart and their
levels-of-processing approach, deep (i.e., meaningful) processing
of information leads to retention that is more permanent than
shallow (i.e., sensory) forms of processing [4] [5]. This is
particularly influential when the user is positioned as a central
character within the game itself. In an application of the levels-
of-processing approach, Rogers, Kuiper, and Kirker found that
information that is deemed personal in nature will be more easily
recalled via the self-reference effect [6]. As an explanation for the
self-reference effect, Bellezza suggests that the "self" is a rich
structure of internal cues to which new information can readily be
associated [7] [8]. Therefore, by actively engaging with such
online advertising in a personal way we open ourselves to the
messages embedded within the game itself.
This level of personal investment is further strengthened in
situations where the play of such online games holds the potential
for tangible gain for the user. Online games that offer
the possibility of winning food products as well as
monetary prizes are especially effective in encouraging continued
food purchases, as they place a value on the product reflective of
possible financial gain. The product itself now has an increased
and tangible value.
The fact that an individual often must purchase a food product for
the chance to participate in the online game places a higher level
of importance on the outcome of the game itself. This encourages
the participant to seek out future play when the outcome is
negative, because they have now made an investment whose loss
they need to recoup. In fact, situations in which reward is
only provided to the game participant after a non-continuous
number of plays have been found to be among the most effective
methods of encouraging future behavior. According to B. F.
Skinner, preeminent psychologist and founder of the theory of
operant conditioning, a variable ratio schedule of reinforcement
(as is established in such online game play) is an extremely
powerful method of encouraging future behavior in that it is
relatively easy to establish, yet it is very resistant to extinction [9].
In essence, the game player knows that it is their behaviors (i.e.,
food purchase and online participation) that establish the
possibility for reward and that losses that they encounter can only
be overcome with continued behavior. As with any other form of
gambling behavior, the game player associates winning with their
own behavior and easily develops feelings that "they are due" or
"next time for sure" when they lose.
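The variable-ratio schedule Skinner describes can be illustrated with a short simulation (a hypothetical sketch for this discussion, not part of any cited study): rewards arrive after a randomly varying number of responses whose average equals the schedule's ratio, so no particular response ever guarantees a win, yet playing "just one more time" always seems reasonable.

```python
import random

def variable_ratio_schedule(mean_ratio, n_responses, seed=0):
    """Simulate a variable-ratio (VR) reinforcement schedule.

    A reward is delivered after a randomly varying number of
    responses whose average is mean_ratio (a VR-5 schedule rewards,
    on average, every fifth response). Returns the indices of the
    responses that were rewarded."""
    rng = random.Random(seed)
    rewarded = []
    # Responses required before the next reward, drawn uniformly
    # from 1..(2 * mean_ratio - 1) so that its mean is mean_ratio.
    next_threshold = rng.randint(1, 2 * mean_ratio - 1)
    count = 0
    for i in range(n_responses):
        count += 1
        if count >= next_threshold:
            rewarded.append(i)      # this response "won"
            count = 0
            next_threshold = rng.randint(1, 2 * mean_ratio - 1)
    return rewarded

wins = variable_ratio_schedule(mean_ratio=5, n_responses=1000)
# Rewards average roughly one per five responses, but their exact
# timing is unpredictable, which is what makes the resulting
# behavior so resistant to extinction.
print(len(wins))
```

The gap between consecutive wins varies from play to play, mirroring why a losing game player so readily concludes they are "due" on the next purchase.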
This feeling of investment takes on a different form in a third
distinction between online and traditional forms of advertising.
Online surveys and viral forms of marketing in which an
individual's social network is targeted also encourage a level of
investment that serves to enhance personal identification with a
food product, thus increasing the likelihood of future purchasing
behavior. Many individuals actively participate in online surveys
for food products even in the absence of external reward for doing
so. The individual, however, is rewarded nonetheless, in that they
have been given the opportunity to share something of themselves
with another "individual". This internal reward is often
more effective than use of external or tangible rewards. In survey
situations, the individual willingly explores their attitudes or
beliefs in relation to a given food product, thus associating the
product to the "self". Even though information gained from the
survey participant is undoubtedly valuable for future marketing,
the simple act of survey participation is an immediate form of
marketing within itself.
In a viral marketing situation, wherein the individual either
provides contact information for another or forwards information
regarding a product to another person, the individual has
positioned himself or herself as either implicitly or explicitly
endorsing the product. Negative psychological outcomes (e.g.,
anxiety) can result from situations wherein an individual's external
behavior (product endorsement) doesn't match his/her internal
beliefs. To reduce the cognitive dissonance between attitude and
behavior, an individual is likely to engage in a change that resolves
this conflict [10]. If the individual has already engaged in an
external endorsing behavior (product endorsement), the most
likely outcome is to develop increasingly positive attitudes toward
the product. Needless to say, this is a win in terms of marketing
effectiveness.

Targeting Children
Of particular concern is the rising rate of websites and techniques
that food advertisers use to target children. In a recent
investigation, it was found that 85 percent of the leading food
brands that advertise to children on TV also have an Internet
presence with content geared at children [11]. The 4,000 sites
included in the study received 12.2 million visits from children
between the ages of 2 and 11 during the second quarter of 2005
alone. To better protect children from often exploitative marketing
techniques, a number of groups around the world are working
together to encourage greater regulation of marketing of food
products to children online [12]. Until then, the best thing parents
can do is to closely monitor their children's online activities. Not
only is this good practice in general, it provides an opportunity for
parents to discuss with their children the nature and content of any
online advertising that they come into contact with. Both the
Campaign for a Commercial-Free Childhood [13] and
Commonsense Media [14] suggest tips for parents and a number
of questions that parents can ask when talking to their children
about food advertising.

Conclusions
Online marketing of food products isn't going away anytime soon.
In fact, it would be safe to say that we will see even more creative
methods of online advertising develop in conjunction with better
ways to track our online behaviors (cookies aren't just for milk!).
As our Internet experience becomes increasingly tied to our
individual interests and activities, website developers have an
audience that is much more receptive than we have seen for any
previous form of advertising. Knowledge, however, remains the
best line of defense: knowledge of human learning and behavior,
and of how our human nature can be used against us. As a final
thought, I encourage you to eat your broccoli, take a walk, brush
your teeth, and be aware of the power in the messages that we
encounter in our online activities.

ENDNOTES
[1] http://www.mcdonalds.com/
[2] http://en.wikipedia.org/wiki/Advertising
[3] http://www.theinternetpatrol.com/tivo-blamed-for-massive-
increase-in-product-placement-in-television-shows
[4] Craik, F. I. M. (1979). Levels of processing: Overview and
closing arguments. In L. S. Cermak & F. I. M. Craik (Eds.), Levels
of Processing in Human Memory. Hillsdale, NJ: Erlbaum.
[5] Craik, F. I. M. & Lockhart, R. S. (1972). Levels of processing:
A framework for memory research. Journal of Verbal Learning
and Verbal Behavior, 11, 671-684.
[6] Rogers, T. B., Kuiper, N. A., & Kirker, W. S. (1977).
Self-reference and the encoding of personal information. Journal
of Personality and Social Psychology, 35, 677-688.
[7] Bellezza, F. S. (1984). The self as a mnemonic device: The role
of internal cues. Journal of Personality and Social Psychology, 47,
506-516.
[8] Bellezza, F. S. & Hoyt, S. K. (1992). The self-reference effect
and mental cueing. Social Cognition, 10, 51-78.
[9] http://wik.ed.uiuc.edu/index.php/Reinforcement_theory
[10] http://www.learningandteaching.info/learning/dissonance.htm
[11] http://www.kff.org/entmedia/upload/7536.pdf
[12] http://www.bizreport.com/2006/09
/should_internet_junk_food_advertising_ be_regulated.html
[13] http://www.commercialexploitation.org
/news/whatcanparentsdo.htm
[14] http://kids.yahoo.com/parents/blog/1603
/3--Talking+to+Your+Kids+About+Food+Advertising
/tag/talking+about+media/8

Hooked: A Thriller About Love and Other
Addictions
Review by Tara Fechter fech5029@pacificu.edu
Richtel, Matt. Hooked: A Thriller About Love and Other
Addictions. New York: Hachette Book Group USA, 2008.
Although Matt Richtel's Hooked is, at its core, largely
focused on the sudden disappearance and purported
re-appearance of the main character's deceased girlfriend,
the novel makes several interesting observations on technology
and our rapidly developing relationship with all things digital.
The author, like the novel's protagonist Nat Idle, is a reporter based
in San Francisco. While Richtel focuses on technology and
communications, however, Idle is a medical journalist, haunted by
the memory of Annie, his girlfriend, presumed dead for years.
Annie was, by all accounts, a princess of Silicon Valley. With a
venture capitalist father with a knack for founding online start-up
businesses, Annie felt the next big step for capitalism was the
elimination of human labor in favor of computer automation. [1]
Nat finds himself caught in the mix of both corporate and personal
secrets Annie hid from him, culminating in her disappearance and
presumed death. Only after Nat receives a note written in Annie's
handwriting instructing him to leave a cafe—which promptly
explodes with him outside—does her death look less and less
likely.
Before the ragged journey to find this woman begins, however,
Richtel aptly sets the tone we are to take regarding technology—it
is evolving, it is not always friendly, and it is here. There are
several references to other forms of information distribution as
being "old", with the Internet a far superior means of
communication. [2] Few would argue that the Internet makes
information travel more quickly, but there are many examples of how it
is not necessarily accurate. There was a time when people
believed that if a story was printed, it was true. Why else would it
be in the paper? Today's generation has an inherent distrust of what
they read—which is often justified.
This distrust of technology is not limited to information, however.
Richtel notes that most of us rely daily on electronics we, or at
least the vast majority of us, could not fix. [3] It was not so long
ago that most individuals in the developed world did not regularly
carry cell phones or own home computers. Now it appears that
we cannot live without them. I have known students to be late to
class because they forgot their cell phone in their apartment, or
peers who are completely lost without their BlackBerry, forgetting
meetings and appointments. I myself get precious little work done
if my computer is not working. To rely so heavily on something
the average individual knows so little about is more than
disconcerting.
We also explore the characteristics of computer addiction and
human-computer interaction. The term "computer addiction" has
existed for less than 20 years, which means that treatment for such
an affliction has been available for far less time. [4]
Richtel adds a twist to the stereotypical gamer glued to his
console, or the programmer neglecting family and friends. His
form of addiction is subtle, without the knowledge or even consent
of the "victim." It is easy to fall into this model created for us, with
our established distrust of technology.
While the novel's description of this new digital era could be
viewed as a "cautionary tale," the conclusion seemed too clean,
wrapped up in a bow, for me to feel sufficiently frightened by the
possibilities presented. As Richtel pointed out earlier, much of the
inner workings of technology are intimately known by only a
handful of programmers and techies, leaving the "rest of us" to its
mercy. On more than one occasion, Nat is able to take personal
command of the direction of this technology, a direction I find
more literary than truthful.
As a work of fiction, I found that Richtel's first book rang hollow on
occasion. I did not feel invested in Nat's numerous precarious
predicaments, real or perceived, and had difficulty identifying with
characters in the novel. Portions of the plot were predictable, with
the exception of some very notable twists, and Nat's vision of
Annie is, at times, tiring and repetitive. If, instead, the novel had
been devoted to Richtel's vision of the future of business and
technology, I would have found myself much more engaged.
In spite of the novel's flaws, Richtel paints a dark portrait of the future—a
future that should be taken seriously by everyone. Without
revealing the grand finale of the novel, I find it necessary to point
out a particular favorite among the incarnations of addiction presented in
Hooked:

Characteristics of the "illness" entail frequent,
compulsive multitasking, and a pressing urge to fill life
with stimulation or distraction. Sufferers feel bored in
the absence of something to do, and tend to seek out a
focus, an activity, even an intense discussion—the kind
of emotional spur that Freudian thinkers would refer to
as drama. [5]

Were this printed in a medical journal, I would find it prudent for
many of us to check in to our local hospital.

Endnotes
[1] Richtel; p. 80
[2] Richtel; p. 7, 10
[3] Richtel; p. 24
[4] Shotton, Margaret A. Computer Addiction? A study of
computer dependency. 1st edition, 1989.
[5] Richtel; p. 287

Txtng: The Gr8 Db8
Crystal, David. Txtng: The Gr8 Db8. Oxford: Oxford
University Press, 2008.
One of the truly controversial impacts of the Internet and
other electronic communication methods has been that
upon writing. Some see, for example, the rapid spread of
texting as disastrous. David Crystal's Txtng: The Gr8 Db8, should
make us all feel better, providing that we can decipher the title, of
course. One of its many benefits is that it teaches the reader to
understand texts.
Crystal is both a noted academic linguist, and a witty writer who
pays particular attention to the effects of digital communication
upon the English language. We have earlier reviewed others of his
works.[1] Here he discusses texting in his usual highly informative
and entertaining style.
At first glance Txtng: The Gr8 Db8 might be thought of as an
elegant trifle. It is published in a small format, bound with heavy
gold embossing and black covers and endpapers, and, of course,
issued by the Oxford University Press. Both the topic and the
relative brevity of the book suggest perhaps a quick, witty romp
through the latest linguistic crimes of the thumb tribes, those
individuals--usually young individuals--seen tapping away at cell
phones at every moment, whether convenient or inconvenient.
Crystal, however, is one of the principal analysts and historians of
English, and consequently has a very broad perspective. He
argues that texting is not only nothing new, but in general a
very positive development which promises, if anything, to enhance
the language.
Why, then, does it arouse so much resistance, if not outright
resentment in many? One possibility, speaking as a teacher, is
certainly that it can often be a terrible distraction, both for the
texter and for the unfortunate bystanders. In general, however, I
agree with Crystal that the source of our doubts is simply
ignorance. Texting is not, as anybody who has tried it for the first
time is aware, all that easy.
The very difficulty of manipulating the physical device, usually a
cell phone but increasingly also an internet-enabled tool such as a
BlackBerry, puts a premium upon simplicity and speed.
Capitalization is probably the first element to go, quickly followed
by commas, apostrophes, the possessive case, unnecessary vowels,
and soon entire words.
To Crystal, texting is a sort of demotic language, a dialect of the
language from which it is derived. And texting is seemingly
universal. Crystal's own grasp of at least the linguistic descriptions

215
of many languages, from Welsh to Italian via Japanese, Chinese,
Czech, Russian, etc., allows him to make many observations about
commonalities as well as distinctive differences between texters
world-wide.
And while short, the work is by no means simple. In order to read
it in any depth, we also need to be able to read texts. By the end of
the book, we have taken, if not necessarily passed with honors,
short courses in Texting 101, 102, and 103. At the beginning of the
book we are introduced to rules, such as usually droping repeated
leters, then bgining to abrev, ltr drpng vwls, nex punctuation (xcpt
whn clrt sez otherwise, thn voila! we r reding txt.
Texting also often has, like so much digital communication, a ludic
or playful element. We have had texted poetry contests, entire
novels, interactive multi-author works, and many other forms of
texted prose. Crystal draws on examples of many such works to
make his points, and in doing so introduces several interesting
new genres to us.
One poet, Norman Silver, has published two collections of
text-poetry. Here, shamelessly cribbed from "The virtual
linguist"[2] are two of his poems, also reproduced by Silver.

"txt commndmnts"
1 u shall luv ur mobil fone with all ur hart
2 u & ur fone shall neva b apart
3 u shall nt lust aftr ur neibrs fone nor thiev
4 u shall b prepard @ all times 2 tXt & 2 recv
5 u shall use LOL & othr acronyms in conversatns
6 u shall b zappy with ur ast*r*sks & exc!matns!!
7 u shall abbrevi8 & rite words like theyr sed
8 u shall nt speak 2 sum1 face2face if u cn msg em
insted
9 u shall nt shout with capitls XEPT IN DIRE
EMERGNCY+
10 u shall nt consult a ninglish dictnry

"langwij"
langwij
is hi-ly infectious
children
the world ova
catch it
from parence
by word of mouth
the yung
r specially vulnerable
so care
shud b taken how langwij
is spread
symptoms include acute
goo-goo
& the equally serious ga-ga
if NE child
is infected with langwij
give em
3 Tspoons of txt
b4 bedtime
& ½ a tablet of verse
after every meal [3]

Nor are the short forms and conventions of texting new. As
Crystal says, "Texting may be using a new technology, but its
linguistic processes are centuries old."[4] A surprising number of
common abbreviations are well-established ones, and many can be
found in classical English literature and poetry.
Crystal also studies the origins and the spread of texting, and its
impact. The largest concern has so far been, aside from some
worries about malformed or otherwise injured thumbs, that it will
begin to replace Standard English. Crystal has collected hundreds
if not thousands of examples of texting in many languages, and
assures us that even the very young have an appropriate sense of
when it is all right to use text, and when more formality is
required.
In addition, Crystal argues, because one must learn the
standard version of one's language in order to write at all, those
who text well almost invariably also have an excellent command
of their language. Those who text more seemingly also
write more, and better, Standard English.
Crystal, as this book makes evident, not only walks the walk but
also txts the txt. His blog, a delight for all those interested in the
many usages of English and other languages, can be found
online.[5] The work also contains a very useful glossary of
computer-mediated communication as well as lists of texting
conventions found in eleven different languages.
With the holiday season fast approaching, this work would make a
wonderful family gift, if you can get the kids to put down their
dam fone long enough to read it!

Endnotes
[1] We previously reviewed Crystal, David. English as a Global
Language. 2nd edition. Cambridge: 2003, found at:
http://bcis.pacificu.edu/journal/2004/06/crystal.php
[2] http://virtuallinguist.typepad.com/the_virtual_linguist/2008/09
/text-poetry.html
[3] Crystal, p. 83. For an introduction to Silver see: "Norman
Silver: Further Details" I, http://www.artscape.org.uk
/detail_page2.php?id=7969
[4] p. 27
[5] See it at: http://david-crystal.blogspot.com/search?q=texting

The Madness of Crowds: Recent Criticisms
of Web 2.0
Editorial, Jeffrey Barlow
The theme of the Berglund Institute for the summer of
2008 was "The Wisdom and Madness of Crowds: Web
2.0."[1] This theme offered a group of scholars, teachers,
and Pacific University staff members the opportunity to
reflect systematically upon "Web 2.0," the term now widely used
to denote the social, interactive part of the Web. Web 2.0 is said to
embrace both business practices—such as building your content
from users' contributions—and social applications, such as
YouTube, Facebook, and, most notably, Wikipedia. Our keynote
speaker was Ward Cunningham, the initial developer of the Wiki
software, who got the Institute off to a strong start. Ward was
followed by other theorists and practitioners, who presented a
variety of perspectives.
Throughout the Institute, our consensus was that Web 2.0 was a
marvelous new stage in the development of the Internet, or at least
a very intriguing new collection of applications. The interactive
elements of Web 2.0 might realize some of the early hopes that the
Web would stimulate the growth of on-line communities and
democratic discourse.
I knew from works I had reviewed recently, however, that many
feel that there is a much darker side to Web 2.0. In our enthusiasm
for what Web 2.0 could do for us, our highly wired group of
practitioners tended to brush aside such criticisms.
Here I intend to reflect upon these cautionary perspectives at more
length. I remain highly positive, but I do believe that some of these
cautions merit consideration. At the least, we must take care to see
that these dangers do not inevitably accompany the benefits of
Web 2.0.
Criticisms of the World Wide Web, of course, are not new. From
the mid-1990s a number of works were very critical. These
included Peter D. Hershock's Reinventing the Wheel: A Buddhist
Response to the Information Age (1999) [2], Ellen Rose's User
Error: Resisting Computer Culture (2003)[3], Clifford Stoll's
several works including Silicon Snake Oil (1995); High-Tech
Heretic: Reflections of a Computer Contrarian, (2000); and Neil
Postman's Conscientious Objections: Stirring Up Trouble About
Language, Technology and Education (1988) and Technopoly: The
Surrender of Culture to Technology (1992), among many others.
These early criticisms were often thoughtful and cautionary, at
other times seeming to be more contrarian attempts to cash in on
the remarkable interest in the impact of the Internet by appealing
to the fear of rapid unforeseeable changes. A recent spate of works
extends these criticisms into the era of Web 2.0, often seeing it as
the cultural juggernaut which has finally realized the worst of our
initial fears of the Web. These include, Andrew Keen's The Cult of
the Amateur: How Today's Internet is Killing Our Culture [4], Lee
Siegel's Against the Machine: Being Human in the Age of the
Electronic Mob [5], and recently, Nicholas Carr's "Is Google
Making Us Stupid?: What the Internet is Doing to Our Brains."
[6]
Clearly, some of the principal concerns of earlier critics of the
web, such as those mentioned above, have been rendered even
more salient by the impact of Web 2.0. While I think that the
Internet itself is still the fundamental issue, and that criticisms of
authority and authorship, and other issues dealing with digital data
are still central to recent critics, it seems to me that the basic
characteristics of Web 2.0 and its more successful applications
such as YouTube and Wikipedia are producing a much heightened
concern. This concern, in the case of the works of Keen and
Siegel, approaches a sort of rage against the machine, if by
machine we mean computers and their interlinks.
Lee Siegel, a Senior Editor of The New Republic, is the author of
Against the Machine. While the work is much more of a rant than
a critical analysis, some of his points are worth drawing out of the
"noise" of his fulminations.
Siegel opens with the formal tropes of Internet criticism: the
Internet destroys community, it creates an illusory space which ill
prepares us for "the untamed, undigested, unrationalized,
uncontrolled world..."[7], it destroys privacy, it has
commercialized pornography and made it commonplace and less
objectionable, to name only a few of his criticisms of the web.
Siegel sees two major causes of these problems. The first is the
commercialization of culture. Businessmen want profits regardless
of the cost to culture and have reduced all knowledge to mere
information and put a price on it. They have made it possible for
the uninformed and the amateur to push out the works of
thoughtful professionals. Instead of Casablanca, we get
"American Idol," the dancing babies of YouTube rather than the
Greek philosopher Epictetus.[8]
The second factor responsible for what Siegel believes to be a
deliberate attempt to produce a new answer to the question, "What
does it mean to be human?" is the "electronic mob," the largely
amateurish group who produce for YouTube, write for Wikipedia,
practice Citizen Journalism or blog endlessly and self-referentially.
This group is largely narcissistic above all else and wants,
basically, simply to be liked. They therefore instantly grab onto
whatever is popular and spread it through the culture by means,
largely, of Web 2.0 applications.
With Siegel, we enter a dystopian view of Web 2.0. By
empowering the electronic mob, it is destroying culture, perhaps
even the possibilities of "being human." While Siegel's views are
easily lampooned, he nonetheless does describe some of the worst
elements of Web 2.0, although largely from an established, if not
reactionary, intellectual perspective foreign to the cacophonous
hurly-burly of Web 2.0. What Web 2.0 has indeed done is to
empower the amateurs. It turns out that many such amateurs would
rather listen to each other than to Siegel, or certainly to me. While
this narcissism shows deplorable taste, at least in the latter
instance, it hardly seems a crime against culture.
Another of the dystopians is Andrew Keen, author of The Cult of
the Amateur: How Today's Internet is Killing Our Culture. And
"today's Internet" is, of course, Web 2.0. Like Siegel, Keen
reprises many of what we might think of as "Criticisms 1.0" of the
web. The Internet is anonymous, mined with questionable
information, and rather than being driven by commercial values as
in Siegel's view, it is in Keen's killing commerce by driving it onto
the web. And, of course, it is killing books, too.
Keen is, if anything, less measured in his criticisms than is Siegel.
The crimes of the Internet include, in Keen's estimation, the
destruction of network television, music, advertising, newspapers,
the movies, and the creation of "...an infestation of anonymous
sexual predators and pedophiles."[9] In Keen's view, taken
together, the Internet threatens the decline and fall of Western man.
Keen merits less discussion here because he is less thoughtful than
Siegel and his 2007 book is one year older. The similarities,
however, are overwhelming. Both Siegel and Keen work
repeatedly the familiar tropes of Web criticism.
A far more nuanced criticism is Nicholas Carr's "Is Google
Making Us Stupid?: What the Internet is Doing to Our
Brains."[10] This piece will, we think, have significant influence
because any criticism of the Internet published in a recognizably
intellectual but mass-market journal is bound to have significant
legs. "Is Google Making Us Stupid?" will be quoted in refutation
wherever the Internet is extolled. Probably it will be most often
summed up as, "It's been proven that the Internet can damage your
brain."
I do not find the article entirely persuasive, perhaps because I did
not think all that clearly even before encountering the Internet. But
the author cannot easily be dismissed. Carr has excellent
credentials and has written a number of important works.
Carr writes well and draws on a number of relevant sources well
worth following up on. Here we will not cite these, but simply
treat all the conclusions of the article as though they are Carr's
own. With this approach the article is easily summarized:

There may be a relationship between the way we read
and the way we think. Reading on the web is seldom
"deep reading" but more of a quick scanning, searching
for information. It is possible that there is even a
relationship between the way we read and our notions
of self.

From this point forward, the article, like Keen and Siegel, reprises
some pertinent "Criticism 1.0" charges: E-mail diffuses our
attention; information is increasingly commoditized.
Also like Keen and Siegel, Carr takes a rather narrow view of
culture. He summarizes at one point:

...the Net isn't the alphabet, and although it may replace
the printing press, it produces something altogether
different. The kind of deep reading that a sequence of
printed pages promotes is valuable not just for the
knowledge we acquire from the author's words but for
the intellectual vibrations those words set off within our
own minds. In the quiet spaces opened up by the
sustained, undistracted reading of a book, or by any
other act of contemplation, for that matter, we make our
own associations, draw our own inferences and
analogies, foster our own ideas. [11]

This is, as one of his sources, the playwright Richard Foreman,
states a few lines later, "what's at stake":

I come from a tradition of Western culture, in which the
ideal (my ideal) was the complex, dense and
"cathedral-like" structure of the highly educated and
articulate personality—a man or woman who carried
inside themselves a personally constructed and unique
version of the entire heritage of the West. [But now] I
see within us all (myself included) the replacement of
complex inner density with a new kind of
self—evolving under the pressure of information
overload and the technology of the "instantly
available."[12]

Unlike Keen and Siegel, however, Carr sees the possible outcome
of Web 2.0 not as the collapse of Western man or the loss of the
possibility of being truly human, but as unforeseeable sorts of
changes.
And Carr, unlike Keen and Siegel, brings some comforting
perspective to these possible outcomes. We have been here before:
new dominant forms of disseminating information, even the transition
to writing itself, have always been met with alarm.
The factors discussed above may mean that, yes, our culture, or
at least a small element of it, is threatened by Web 2.0. But the
element of culture at stake for Carr, as for Keen and Siegel, is
precisely "high culture." Part of our alarm comes from the prospect
of having to join the lonely crowd, working frantically on the
Internet, denied the time to reflect, to read deeply. This, however,
is not a social phenomenon caused by Web 2.0 but largely by
economic forces.
My own perspective is that Web 2.0 will be what we make of it. If
we are not to be overwhelmed by unreliable data and video clips of
dancing babies, then we must teach younger users how to
recognize, and produce, authoritative data. And, as a more
comforting work, John Palfrey and Urs Gasser's Born Digital:
Understanding the First Generation of Digital Natives[13],
suggests, most users of Web 2.0 are able both to skim quickly and
to read deeply. They also have a better sense of the appropriate use
of the web than most of us fear, though they do need proper
training to become truly critical users.[14]
The best solution to the problems of the Internet, whether 1.0 or
2.0—or of the currently nascent Web 3.0, the Semantic Web[15]—is,
I suggest, that we all pitch in and improve the understanding
of those who approach the Internet as young users. Writers like
Keen and Siegel would do well to put their shoulder to the wheel
rather than exclaiming in alarm from the comfortable sidelines.
After all, the babies only dance when they are watched.

Endnotes
[1] We are processing the video from the Summer Institute now
and it will be available for download in the near future. Check
back in the December issue of Interface for an announcement as to
its location.
[2] Reviewed in Interface at: http://bcis.pacificu.edu/journal
/2007/05/hershock.php
[3] See review at: http://bcis.pacificu.edu/journal/2003/09/rose.php
[4] Also reviewed in Interface at: http://bcis.pacificu.edu/journal
/2007/05/hershock.php
[5] Reviewed at: http://bcis.pacificu.edu/journal/2008/05
/siegel.php
[6] See our review of his latest book at: http://bcis.pacificu.edu
/journal/2008/02/carr.php
[7] Siegel 17
[8] Siegel 150
[9] Keen 7
[10] See the article, from the July/August 2008 issue of The
Atlantic Monthly, at: http://www.theatlantic.com/doc/200807
/google
[11] http://www.theatlantic.com/doc/200807/google
[12] http://www.edge.org/3rd_culture/foreman05
/foreman05_index.html
[13] Reviewed at: http://bcis.pacificu.edu/journal/2008/05
/palfrey.php
[14] See the review at: http://bcis.pacificu.edu/journal/2008/05
/palfrey.php
[15] For an introduction to this topic, begin with Aaron Swartz
'The Semantic Web In Breadth' at: http://logicerror.com
/semanticWeb-long and follow through some of its links such as
the much more complex "The Semantic Web (for Web
Developers)" at: http://logicerror.com/semanticWeb-webdev

Comments

Anonymous

An interesting and thoughtful look at these three articles.


Posted at 13:46 on February 13, 2009

Movies, the Internet, and Piracy
Lynda Irons
A few years ago, a friend downloaded full versions of
recently released-to-DVD films from a Russian website
for less than $5 per film. I asked why this did not raise
any red flags for her, and she said she checked the site
thoroughly and was convinced it was legitimate. I am not sure how
she "checked" the site, nor am I convinced that it was as genuine as
she thought it was, but this interaction illustrates one of the many
underlying assumptions many users have regarding the Internet.
One such assumption is that normal copyright laws somehow do not
apply in this virtual world, and that belief may be costing the
"worldwide picture industry, which includes foreign and domestic
producers, distributors, theaters, video stores, and pay-per-view
providers, an estimated 18.2 billion dollars to piracy." [1]
According to recent news accounts, pirates distributed early copies
of Ratatouille and American Gangster before these films' theatrical
releases. John Desmond, Vice President of SafeNet Inc.'s
MediaSentry, notes that, "the longer you can prevent a scenario
like this one with 'American Gangster' from occurring, the better
the return on investment for a studio."[2] Nonetheless, he stated
that if a movie is to be pirated, the ideal for studios is a
poor-quality bootleg copy, as consumers would rather experience a
high-quality film than the bootleg.[3]
The Motion Picture Association (MPA) estimates that in 2005,
MPA studios lost 6.1 billion dollars to worldwide piracy; 2.4
billion dollars to bootlegging; and 1.4 billion dollars to illegal
copying.[4] A major culprit in the piracy issues is the Asia-Pacific
region. The MPA, working in conjunction with the Taiwan
Foundation Against Copyright Theft, has reported more than 320
piracy cases so far in 2008, up from 300 in the previous year.[5]
Copyright law "guarantees to copyright owners certain exclusive
rights, including the exclusive right to make copies, distribute
copies, publicly perform the work, and make new derivative works
based on it."[6] It is this right of distribution that pirates
violate. The Motion Picture Association defines
Internet piracy as "the downloading or distribution of unauthorized
copies of intellectual property such as movies, television, music,
games, and software programs via the Internet."[7] And the MPA
is becoming increasingly aggressive in pursuing copyright
infringers; for example, it recently announced that a Singapore
court has sentenced Lee Eng Sent, 50, to 15 months' imprisonment
after he pleaded guilty to selling illegal DVDs.[8] However,
Stephanie Ardito reports that many media outlets are taking a
wait-and-see attitude. She feels the media giants will "eventually
calm down and learn to work with social networking sites and
video websites. Otherwise, they risk losing their customer
base."[9] While she is specifically targeting YouTube and
MySpace, the same principles apply in unlawfully distributing
copyrighted material. It appears that brokering or distributing
a motion picture is unlawful as well. In a recent court case,
MGM Studios filed suit against a popular peer-to-peer software
company, Grokster. MGM argued that "the company should be
held liable for encouraging its users to violate the copyright
law."[10] The U.S. Supreme Court ruled unanimously that
Grokster was indeed liable for infringing copyright law and was
held accountable for its actions.[11]
Motion picture studios have been relative latecomers to the
video streaming world, but they have been gaining momentum in
providing consumers with alternative methods to access their
products. Lucille Ponte notes that the film business should
"creatively experiment with ways to offer the public faster and
cheaper access to a broad selection of films."[12] Ponte further
argues that the industry should take another look at how it controls
Digital Rights Management endeavors as a way to allow
consumers to legitimately make copies for personal use. This, she
argues, could eliminate the piracy issues.[13] MPA notes there are
legal distribution services such as CinemaNow, iFilm, Movieflix,
or Movielink.[14] I can even download movies from my Netflix
account.
Unless copyright laws are re-evaluated in light of today's
technological landscape, unauthorized copying or distribution of
copyrighted material remains unlawful, and the motion picture
industry will continue to pursue infringers aggressively.

Endnotes
[1] "Economies." Motion Picture Association. http://www.mpaa.org
/piracy_Economies.asp. Accessed October 13, 2008.
[2] McBride, Sarah, and Peter Sanders. "Pirates Foil Hollywood's
High-Tech Security." The Wall Street Journal, October 26, 2007.
http://proquest.umi.com. Accessed October 13, 2008.
[3] Ibid.
[4] "Taiwan Pirate Websites Indicted." Motion Picture Association.
http://www.mpaa.org/press_releases
/taiwan_piratewebsiteraid_oct08.pdf. Accessed October 13, 2008.
[5] Ibid.
[6] Von Lohmann, Fred. "Fair Use, Film, and the Advantages of
Internet Distribution." Cinema Journal 46, no. 2 (Winter 2007): 129.
Project Muse. http://muse.jhu.edu/search. Accessed October 13, 2008.
[7] "Internet Piracy." Motion Picture Association.
http://www.mpaa.org/piracy_internet.asp. Accessed October 13, 2008.
[8] "Singapore Court Sentences Movie Pirate to 15 Months' Jail."
Motion Picture Association. http://www.mpaa.org/press_releases
/singapore_moviepiratesentence_oct08.pdf. Accessed October 13, 2008.
[9] Ardito, Stephanie. "MySpace and YouTube Meet the Copyright
Cops." Searcher 15, no. 5 (May 2007): 3. Academic Search Premier.
http://web.ebscohost.com. Accessed October 13, 2008.
[10] "Legal Cases." Motion Picture Association.
http://www.mpaa.org/NewsStand_Legal.asp. Accessed October 13, 2008.
[11] Ibid.
[12] Ponte, Lucille M. "Coming Attractions: Opportunities and
Challenges in Thwarting Global Movie Piracy." American
Business Law Journal 45, no. 2 (Summer 2008): 368. Wilson
OmniFile Full-Text Mega Edition. http://vnweb.hwwilsonweb.com.
Accessed October 13, 2008.
[13] Ibid.
[14] "Copy Protection Technologies." Motion Picture Association.
http://www.mpaa.org/piracy_FightPir.asp. Accessed October 13, 2008.

Comments

Why The Shoemaker's Children have
Flip-Flops
By Pat McGregor
Once upon a time, in a city far, far away (do you know the
way to San Jose?), there was a computer professional
who, because she lived in a very safe neighborhood, left
the sliding door on her patio open about 6 inches one
night because she liked the air, and, frankly, she had burnt a bag of
microwave popcorn and wanted to get the smell out before
morning.
She went to bed at 4 a.m. because, like many computer
professionals, she is at heart a night owl, and because she had just
gotten a brand new computer from Dell and she was gleefully
transferring information from her old computer to her new
computer. The big Dell box was sitting in her living room, empty
component boxes scattered around, because we never pick up and
put away the wrapping paper carefully when we're opening
presents, now, do we?
Because of the defects that had caused her to get a new computer
in the first place, she was transferring data by using 4 GB flash
drives rather than a network or other handy gadget.
Finally she went to bed, because the werewolf movie she was
watching while she worked was just too stultifying and because,
after all, it was 4 a.m. Even though the next day was Saturday and
she had a new computer to play with, sleep won out (she's not 20
anymore, and all-nighters get harder with every decade).
The next morning she woke up about 11 am, made a couple of
important phone calls while she was sitting in bed, and wandered
out into the living room in her jammies where she would check
email and then make breakfast. Email comes first for every
real computer geek, after all.
Hmm. There were no power cords stretched across the living room
floor. Blink. There were no computers sitting on the couch. She
looked again. No, they were really gone. She looked at the wallet
next to the place where the computers had been. Even though she
had a credit card casually tucked into the flap of the wallet, it was
still there! So were her cash, IDs, and other credit cards. So
was her new BlackBerry! Even all the legal copies of the software
that had come with the new laptop were still there! (some of us
have weird priorities, OK?)
She called the police. While they were on the way she changed
into real clothes, and then called a friend who is online almost all
the time. She walked this friend through changing the passwords
on important systems such as her bank, her PayPal account, her
credit card companies, and some others with real money attached.
She thought she had been careful not to let the computer remember
those passwords, but it is so tempting just to say yes when the
browser asks "Do you want me to remember this password?"
And while she had also set up the machines to require a
password to boot, such protections are breakable. In her heart of
hearts she hoped that the failing power source and corrupted hard
drive on the seven-year-old computer would fail completely on the
crooks, but she knew that was a mean and uncharitable and
probably completely unrealistic thought.
Then she called her apartment complex to notify them of the
break-in, and they dispatched a locksmith immediately to change
all the locks on her doors.
And then, because like every child of the television age she had
seen every CSI and Law and Order show ever made, she sat
quietly in the rocking chair in her living room, reading a book to
distract herself, carefully not disturbing the scene.
When the officers arrived, they went through all the usual
questions, including "was this door locked?" The security expert
admitted that no, the slider was open, but it was her ritual to latch
the screen and all the other doors and windows before she went to
bed. She couldn't say for sure that she had latched and locked
everything else that night, because it was a habitual act and she
couldn't distinguish one night from the other in her head. There
was a slit in the screen by the slider latch, which may have been
how the burglars got in.
The police were a little bit surprised that the computer professional
had slept through it all, but she explained that she had taken A
Leading Night-time Cold Medicine before she went to bed. She
also had a fan running in her room for white noise to cover noises
like, say, her apartment being broken into. (Although entering a
domicile while a person is present makes the crime First Degree
Burglary, surprising a burglar in the act can end up with the victim
getting hurt. Our heroine is glad that she slept through it, all in all.)
She did have to explain that the chaos in the living room (all those
boxes, you know) was hers, and that the computers had been tidily
picked up, including unplugging the power cords and
disconnecting the network cables, and that was all that was
missing.
When she was asked if either computer had any distinguishing
characteristics, she said that the new one was bright pink. Did she
know anything like serial numbers, etc? She handed the officer a
copy of the invoice for the computer, which included the serial
number for the CPU, the unique MAC address for the Ethernet
card, and other details about the internals of the machine.
The officer explained that his CSI (crime scene investigator, and
despite everything else it was kind of a thrill to hear this jargon
being thrown around in real life instead of just television) was on
the way. In the meantime, the officer asked more questions and
checked all the doors and windows. All of them turned out to be
unlocked.
Then the officer dusted the surge protector for fingerprints, since to
remove the power cords the thief would have had to use both
hands to get the plugs out.
When the CSI arrived, they decided that, unlike on television, they
couldn't get a clear print off the textured surface of the surge
protector. After all, these things are textured for two reasons: to
give people a better grip to insert and remove cords, and to keep
fingerprints from showing up. Sigh.
Having gotten permission, the victim had fetched a glass of water
and was sitting quietly in the rocker in her living room,
staying out of the officers' way. One of their radios squawked
something about a stolen car, and dread struck at the security
expert's heart. She ran to the back window and looked out — and
her parking space was empty. No, really. There was nothing there.
She looked again, to be sure she was looking at the right carport,
but, no, there was no car there.
Because our computer professional can relish a good ironic scene
as much as the next person, she went back into the living room
where the officers were making notes, drawing diagrams, and
fingerprinting the screen doors and slider.
"You'll need to add something else to that list of things stolen,
officers." After the expected "Oh?" she said, "They got my car,
too."
A certain amount of scurry resulted. It turned out that her main set
of keys, with the house keys and other people's house keys, was
still hanging on the key rack by the front door. So she surmised
they had taken her spare set, which was on the kitchen counter
where she had left them when she returned from traveling the few
days before.
When the officer started asking questions about the stolen car, she
described the license plate to him and then took her spare
insurance card out of her wallet. He copied down the VIN
and other identifying information.
At this point the apartment manager and the locksmith arrived, and
when the police gave permission, all the locks were changed,
including the mailbox lock. (The police and the computer
professional were amazed at the promptness with which the
apartment management took care of this. While this was the first
burglary in the complex in nine years, they were clearly prepared
to deal with it.)
While one officer was out photographing the carport with nothing
in it, he noticed that the screen in the spare window was lying in
the bushes. Had she noticed it missing before? No, the last time
she had closed and locked the window, after a friend's visit, it was
still in place.
The officers took the screen away to be more closely examined.
They gave her a card with the police report number on it and told
her someone would be in touch. (The Crime Scene technician also
told her that, because no one was hurt, it would be at least four
months before anyone got around to analyzing the fingerprints.
Sigh. Not like CSI at all. )
Our computer professional sat down and made all the other phone
calls that had to be made: insurance company, family, people who
were expecting email from her, and so on. When the mail came, it
included a letter from Dell with the details of her new computer
and a notice that it should have been there already. That ironic
moment wasn't as much fun, but glancing over the duplicate
invoice she noticed that her new computer had LoJack! If the
robbers turned on the computer anywhere near a wireless network,
the computer would phone home!
The police were very interested in hearing this. If the robbers
turned the computer on, instead of just wiping the hard drive clean
with a magnet and selling it immediately, they might be able to
find it.
They were also glad that her car had a FasTrak toll pass
transponder on it, so that if the car went over any of the bridges in
the area or any other areas that recorded FasTrak data, they would
know.
Chances are good that her car, a hybrid, will be recovered. Hybrids
are not as attractive to chop shops as other cars because the parts
are not in high demand. Fingers crossed. But because it is likely to
be dumped off somewhere, the police encouraged her to put the
car's description and distinguishing marks on every blog and
mailing list she can, so that more people will be thinking about it
and the chances of it being found are higher. (What a cool
expansion of the Neighborhood Watch concept!)
Today our computer professional is kicking herself for not having
taken all the protective steps in the list below. Her computers had
flip-flops instead of good, sturdy shoes. What's your excuse?

Things to consider
General Life:

Lock your doors and windows (but we should all know this by now).
Consider one of those alarms that go off when you open a
door or window. They are inexpensive and very noisy.
If you want to leave your sliding patio doors open and you live
on the first or second floor, buy a dowel and cut it 4-5" shorter
than your door is wide, and drop it in the track. It will prevent
anyone from opening the door any wider.
Keep duplicate copies of your paperwork for your
possessions on hand for the police report. It makes the report
go much more smoothly and you won't feel like so much of an
idiot.
Have renter's or home owner's insurance.
Keep an inventory on paper of your possessions and their
serial numbers.
Don't leave boxes with tempting contents in plain sight from
your windows.

Computer Security:

Back up your data! Back up your data! Back up your data!
Our professional had only 8 GB out of 38 on her flash
drives when this happened, so she's lost all her family
pictures, soft copies of her books, articles, and the cookbook
she has written, tax returns, and so on and so on. Since she's
hunting for a new job, she was happy her resumes and other
material were still there, but it might have been a total loss.
Protect everything with strong passwords. The best
passwords aren't words or numbers related to you, are at
least 8 characters long, and contain at least 3 of the following 4
character types:
capital letter
lowercase letter
number
"Special" — one of these characters (!#*&)({+@~^) or
others like them
Don't write your passwords down and don't tape them to the
bottom of the keyboard, on a sticky note nearby, or anything
else!
Back up your data!
At least put passwords on accounts with administrative rights
— this also helps so that if someone comes into your
machine from the network — it will be harder to assume your
privileges and use your machine for nefarious purposes.
Don't store your passwords to important places, such as your
bank, on your machine. If you have a security "wallet," such
as the ones Symantec and Norton offer, you can consider
using it, but you have to protect it with a strong password!!! A
good way to make a strong password is to use something you
find memorable and do a substitution process on it. It's pretty
easy, actually:
Say you graduated from college in Boston in 1987. Your
password might be:
Boston87 morphed into B0s+on87
Use words and symbols that make sense to you — for
example, substituting the "Zero" (0) for an Oh (o), or the
Plus Sign (+) for the Tee (t)
(Don't use this example! Make up one of your own.)
You can use the same password on multiple systems if
you've made up a good strong password and you
change it once in a while.
Consider using security functions built into programs such as
your online checking software or other programs where you
have stored personally sensitive data. Some programs, for
example, let you password-lock your data files in your
financial software when you close the program. The data is
encrypted and can't be decrypted without the password.
Back up your data! Terabyte drives that will hold everything
are now less than $100 and well worth the effort.
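The password rules above can be captured in a short check. This is a minimal sketch: the `is_strong` function and the substitution table are illustrative, not tools from the article, and a mechanical substitution replaces every occurrence of a letter (so "Boston87" becomes "B0s+0n87" rather than the article's selectively morphed "B0s+on87").

```python
import string

def is_strong(password: str) -> bool:
    """Apply the rules above: at least 8 characters, and at least
    3 of the 4 types (capital, lowercase, number, special)."""
    types = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= 8 and sum(types) >= 3

# The substitution trick from the text: Zero for Oh, Plus Sign for Tee.
subs = str.maketrans({"o": "0", "t": "+"})
morphed = "Boston87".translate(subs)  # every "o" and "t" is replaced

print(morphed, is_strong(morphed))
```

As the article says, don't use this exact example; pick a memorable phrase of your own and invent your own substitutions.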

PS:
Our computer professional's insurance company authorized an
immediate replacement for her laptop, both because it is a critical
tool for her work and because she's job hunting. They've also
authorized a rental car for three weeks while they wait to see if the
car shows up. Check your insurance and consider such coverage.
It is kind of creepy that someone was in the apartment and she
slept through it. And all her friends have been very kind to offer
her company and couches to sleep on. She's going to be OK.

Comments

Pat McGregor

I just thought I might mention that 10 days later my car was
recovered by the Mountain View police. It is beaten up some,
as there was some sort of accident, but is largely in good
condition. A significant number of items were not in the car,
including my new computer, but I did get the car back. Hoorah!

Posted at 13:50 on February 13, 2009

Does Microblogging Have a Future in Your
Organization?
By Michael Geraci
Just as you were finally able to convince your department
head or IT director that your organization could benefit
from the timely and highly specialized communications
afforded by a major blogging implementation for some of
your internal or external communications, it might be time to go
back and make the case for an even more curious
technological solution: a microblog.
In practice, microblogging is the conflation of blogging and text
messaging. Born from the lust for instantaneous communication
among one's peers that made social networking sites like
MySpace and Facebook such phenomena among today's youth,
microblogging has emerged as a potentially useful tool for small
organizations and/or multi-functional workgroups in an
organization that do not work in close proximity to one another.
By harnessing Web 2.0 constructs like content tagging (à la Flickr
and del.icio.us) and friend-following (as in LinkedIn, Facebook,
and Digg), it has developed into an instant communications
medium that is potentially more powerful than good ol' e-mail and
instant messaging.
While there are over 100 microblogging sites and services
available, Twitter seems to have captured the vast majority of
attention in its short two-year history. If you haven't heard of
Twitter, it is really a simple concept. As a user, you are able to
send out short (as in one or two sentence long) text messages that
are delivered instantly to those who have chosen to follow your
"feed". As such, Twitter messages, called "tweets", tend to be
highly timely and topical. For example: "Going to buy tickets for
tonight's premiere. Let me know if you're in." Twitter messages
can be composed and sent through the free service from any
computer or mobile device that is connected to the Internet, and
there are even low-cost services that transcribe voice messages
sent from a phone and post them to your Twitter feed. Those who
subscribe to your feed can receive the message in just about any
mode including e-mail, text message, FaceBook, and through
dedicated applications available for mobile devices.
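The follow-and-feed model just described can be sketched as a tiny publish/subscribe service. The class and method names below are hypothetical illustrations, not Twitter's actual API; the 140-character cap reflects Twitter's classic message limit.

```python
from collections import defaultdict

class MicroblogService:
    """Minimal sketch of the feed model described above: users post
    short messages, which are delivered to everyone following them."""
    MAX_LEN = 140  # Twitter's classic message limit

    def __init__(self):
        self.followers = defaultdict(set)  # author -> set of followers
        self.feeds = defaultdict(list)     # user -> messages received

    def follow(self, follower, author):
        self.followers[author].add(follower)

    def post(self, author, text):
        if len(text) > self.MAX_LEN:
            raise ValueError("message too long")
        # Deliver the tweet to every subscriber's feed.
        for user in self.followers[author]:
            self.feeds[user].append((author, text))

svc = MicroblogService()
svc.follow("alice", "bob")
svc.post("bob", "Going to buy tickets for tonight's premiere. Let me know if you're in.")
print(svc.feeds["alice"])
```

In the real services, of course, delivery fans out across e-mail, SMS, and dedicated clients rather than an in-memory list.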
It is estimated that Twitter had over 3 million users as of
September 2008, including Barack Obama, who is the most
followed Twitter user to date. Obama has over 130,000 people
following his tweets, which he himself composed and sent out
with some frequency while on the campaign trail, including this
one on November 5, 2008:

We just made history. All this happened because you
gave your time, talent and passion. All of this happened
because of you. Thanks.

Obama has since ceased his Twitter efforts, perhaps to make room
for Twitter's next big celebrity, Britney Spears, who likes to keep
her fans informed of her trips to the grocery store with the kids.
Despite its popularity and accessibility, Twitter is ill-suited for
workplace communications. Anybody in the world can choose to
follow anyone else, and the somewhat generic architecture of
Twitter lacks the features that could benefit targeted and threaded
communications among co-workers. A new start-up, Hollywood-based
Yammer, has emerged to fill this role. Yammer caught the
tech-industry spotlight when it won the top prize in September
2008's TechCrunch50 conference, where 50 tech start-ups are
vetted by a handful of industry moguls. The most promising
product or service receives 50,000 dollars in venture capital.
Yammer is free to anyone with a corporate e-mail account, i.e.,
gmail users need not apply. Individuals that share the same
corporate e-mail domain can use Yammer to send, receive, and
organize private communications which can include images,
documents, and videos. The idea is that co-workers involved in
collaborative efforts can have instant, timely, and threaded
discussions in a dedicated application. If it sounds like just another
way to do e-mail or instant messaging, it's not. Yammer is free
from all the clutter and noise that has become inherent in our
e-mail in-boxes, and it includes numerous features that add value
to the traditional instant messaging model.
Chief among the advantageous features in Yammer is the ability to
follow certain discussions, people, or projects rather than all of the
Yammer messages (yammerings?) being sent throughout the
organization. Let's say I am involved in a project that includes a
small subset of employees in my company. I can choose to follow
just the messages from all
those involved in the project, or just certain individuals with whom
I am working most closely. I can create groups of the Yammer
users in the company and track their discussions, or, thanks to
content tagging, I can filter out all communications except those
that have been tagged as being relevant to a particular project or
subject in which I am interested. For example, if I'm involved in a
company-wide re-branding effort, I can send out a message like
"Where are we with the latest #logo design?" to the design team.
The pound sign (#) designates the word "logo" as subject matter
that can now be followed as discussions proceed. Whenever other
users include the #logo tag in their messages or respond to tagged
messages, I will receive them.
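The tag-and-follow mechanics described above can be sketched in a few lines. The regex and function names below are illustrative, not Yammer's actual API: a message carries a tag when it contains a pound-sign word, and a follower sees only the messages carrying the tags they care about.

```python
import re

TAG = re.compile(r"#(\w+)")

def tags(message: str) -> set:
    """Extract the #tags from a message, e.g.
    'Where are we with the latest #logo design?' -> {'logo'}."""
    return set(TAG.findall(message))

def filter_by_tag(messages, wanted):
    """Keep only the messages tagged with the subject being followed."""
    return [m for m in messages if wanted in tags(m)]

stream = [
    "Where are we with the latest #logo design?",
    "Lunch anyone?",
    "New #logo draft is up for review.",
]
print(filter_by_tag(stream, "logo"))
```

Following the #logo discussion then amounts to filtering the whole message stream down to the two tagged messages.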
When it comes to sending and receiving these communications,
Yammer makes it easy. As a Yammer user, I can engage in
discussions through a variety of applications that provide me with
access regardless of where I am. There is a Web browser-based
Yammer client, a desktop application (built on the Adobe AIR
platform), and Blackberry and iPhone clients. Yammer messages
can also be sent and received via e-mail, instant messaging, and
SMS text messages, making it virtually impossible to miss out on
discussions.
If the IT folks are concerned about security, network traffic, or
managing users, a fee-based system can be implemented where
Yammer provides administration services that include such things
as limiting access to specific individuals or IP addresses,
requiring passwords, and establishing permissions for what
messages or groups users can follow. This administration service
costs organizations 1 dollar per user per month.
Tech entrepreneur Jason Calacanis, who uses Yammer among his
employees at his user-powered search service, Mahalo, says, "Any
company which isn't running Yammer right now is missing out on
the best communications and productivity tool available in the
market today. It's a massive game changer." Whether or not this is
true for all types of organizations, it is nice to see that Web 2.0
technologies are maturing in ways that can bring newfound utility
and benefit to today's diverse work environments.

Comments

Jeffrey Barlow

Michael: As always, your article leads us to


reconsider our operations at the Berglund Center.
A question for you: How are enterprises paying
staff for short bursts of work twittered or yammered
on, say, the weekend? I see the value of such a
system for emergency situations (server crashes, etc.),
but how do staff who participate get paid? Any ideas
or examples to relate?

Jeffrey

Posted at 12:49 on December 3, 2008

On the Declining Viability of Testosterone
By Chris Pruett
The traditional wisdom in the game industry is that tech
sells. The formula is simple: games are aimed primarily at
teenage and adult males, who also happen to be the target
market for gadgets and films in which helicopters
explode. This audience, it is believed, enjoys technology for the
sake of technology; they are the group that bought HDTVs before
anybody else, they are the group that outfits their car with
expensive stereos, and they are the group that will spend money on
new, technically advanced video games.
In order to keep selling to this group, the game industry must
continually improve the technology of its video games and
systems. Since the late 1980s, there has been a cycle of new video
game system releases, each iteration improving on the
computational performance of the last. The latest round of systems
includes Sony's PlayStation 3 and Microsoft's Xbox 360, which are
powerful computers. Guys buy this stuff, the industry believes,
because they enjoy knowing that their system is powerful enough
to produce entertainment unlike any other video game system
prior. The promise of raw performance is one of the industry's
main marketing strategies.
This mindset is something of a self-fulfilling prophecy. Because
the industry believes that males interested in exploding helicopters
and high-end game tech are its target audience, the industry
produces games that cater to that group almost exclusively. Games
like God of War, Gears of War, Halo 3, and Call of Duty 4 are all
squarely aimed at this group, and they are also shining examples of
technologically advanced game software. These games look better
than anything else on the market, and they are aimed at a group
that is particularly interested in owning the best.
But there is a problem with this formula. Firstly, by catering to a
single group exclusively, the game industry has systematically
excluded other audiences. Entire game genres such as Adventure
and Real Time Strategy have mostly died off because they were
deemed to be liabilities in a market supported exclusively by
young men. In 1993, Cyan Worlds released an adventure game
called Myst, which sold so well and to such a wide audience that
its sales numbers were not matched for a decade. Yet, today there
are no games like Myst being produced, and most game publishers
would not seriously consider revisiting its format. Over time, the
"tech sells" mindset has caused the intended audience for video
games to become more homogeneous, even as the size of that
audience has expanded.
The other problem with the game industry's formula is that
technology gets more and more expensive to produce every time it
improves. Since the game industry believes that it must jump on
the latest tech every five to seven years, it is always on the
bleeding edge and always paying bleeding-edge prices. This
generation, the PlayStation 3 debuted at 599 dollars, which was a
significant jump over Sony's previous offering (the PlayStation 2
sells for 149 dollars). At the same time, this new hardware
demands new, more complicated software, which in turn costs
much more to develop. Ten years ago, a hit game could be made
for less than a million dollars. Five years ago, the average was
somewhere around five million dollars. Today, development
budgets are in the thirty to fifty million-dollar range. Of course, the
visual fidelity of modern games is vastly improved, but that
improvement comes at a significant price.
So, the video game industry has arrived here in 2008 and has
discovered that it is quickly painting itself into a corner. The
increased cost of game consoles has slowed consumer adoption,
and since the cost of game development itself has also risen
dramatically, developers are now faced with a situation in which
they must spend much more money to sell their product to a much
smaller group of customers. Developers are assuming a huge
amount of risk, much more than has ever been necessary in the
past, and as a result the market has fewer games. Even worse,
because games have been aimed at such a narrow band of
consumers for such a long time, there is a sameness to many of the
titles currently reaching the market; the exploding-helicopter
genre occupies a significant share of the industry's output. There
is research to suggest that only the top 10 percent of games are
actually profitable.
However, not all sectors of the game industry are feeling trapped.
After placing third in console sales in the previous generation,
Nintendo has turned around and produced two systems, the Wii
and the DS, which are both leading the market by wide margins.
Nintendo's success is based on its willingness to actively reject the
old guard mentality that gamers want tech; instead of making a
yet-more-powerful game system, the Kyoto company is trying to
knock down the barriers that prevent non-gamers (that is, most
people who are not young males) from playing games. They
identified traditional game controllers as a source of intimidation
for non-gamers, and so the new Nintendo systems use
non-standard (and non-threatening) control systems: a motion-
sensitive remote on the Wii, and a stylus on the DS. Nintendo
realized that the ultra-realistic art style that so many games pursue
is not particularly attractive to many consumers, and has
consequently branded its game systems with cute cartoon
characters. The company has also aggressively targeted
non-traditional gamers by running ads in magazines and TV shows
aimed at middle-aged couples. Nintendo has even tried to reduce
the cost of game development by making its systems easy to
program and cutting the cost of its proprietary development kits. In
short, this current round of offerings from Nintendo is a calculated
and dramatic attempt to break away from the traditional game
market.
Interestingly, reaction to Nintendo's move within the industry is
split. Most game developers are themselves men in their twenties
and thirties, and many of them are having trouble coming to grips
with an industry which no longer revolves around their group.
Backlash at Nintendo from developers and gamers has taken the form of
accusations that Nintendo is "leaving the hard core fans behind"
or that it has "sold out." One developer I spoke with hated the
entire concept of the Wii because he's interested in making games
about exploding helicopters. "But Chris," he laments with a curl of
the lip, "I don't want to make games about bowling."
It is not that the traditional game development approach is invalid;
it is simply no longer capable of supporting itself on exploding
helicopter games aimed at young males alone. And the growth of
games into non-traditional markets is not limited to Nintendo.
Developers who are not comfortable with Nintendo's approach
might want to duck and cover at this point because the winds of
change are a-blowin', and many of them emanate from a company
that is not normally considered a force in the game industry.
Apple's iPhone has the potential to change the video game industry
dramatically. The target audience for the iPhone includes gamers,
but it also includes a wide and diverse group of people who
comprise a much larger customer base than just people who buy
games. The iPhone's technology is good enough that it can host
modern games, but at the same time the device is not sold on
promises of its computational power. Some analysts believe that
Apple will achieve sales of up to 45 million iPhones per year[1];
compare that to the 15 million PlayStation 3s that have sold in the
two years since it was released. Even the PlayStation 2, one of the
best-selling game systems of all time, only shipped 140 million
units over its eight-year lifespan.[2] By all accounts, the iPhone is
primed to eclipse every gaming system available in a very short
amount of time.
But the real value to the game industry is not just Apple's quickly-
expanding army of iPhone users. Games sold for the iPhone go
through the iTunes store and are downloaded directly to the phone,
with no physical packaging whatsoever. Apple takes a cut of each
sale, but the economics are such that even a moderately successful
application can be very profitable. The iPhone represents a
low-cost, low-risk way to make games for a much larger audience;
its revenue model is much less risky than a modern AAA video
game. The device is not the most powerful portable handheld
system (that honor goes to Sony's PlayStation Portable, another
system that is struggling to find a market), but the audience for
iPhone games is not likely to care about absolute visual fidelity or
tech spec badges of honor.
The Apple iPhone, the Nintendo Wii, and other similar devices
(such as Google's Android platform [3]) are primed to usher in a
dramatic change to the way that the game industry operates. If
profitability can be achieved without the need to constantly
increase the cost of game development, game companies will have
a way to experiment with new customers, marketing strategies,
and game designs. The old guard is powerful in the game industry,
and the makeup of the industry itself serves to reinforce its rather
narrow view of gamers. But cheap, mass-market devices like those
peddled by Nintendo and Apple are likely to force even the most
traditional game developers to consider alternatives to the status
quo.
Endnotes
[1] http://www.appleinsider.com/articles/07/06
/07/iphone_yearly_sales_rate_should_top_45_million_by_2009_says_firm.html
[2] http://en.wikipedia.org/wiki/PlayStation_2
[3] Full disclosure: I am a Google employee.
Navigating Technomedia: Caught in the
Web
Review by Jeffrey Barlow
There are many ways to approach the impact of the Web.
Many of us are simply pragmatic users, wanting to know
more about useful applications. Perhaps at the opposite
virtual pole from those users are theorists who
want to know what, finally, it all means. The author of
Technomedia, Sam Han, is one of the latter.
Mr. Han is a student in the Ph.D. program in Sociology at The
Graduate Center of the City University of New York (CCNY) and
teaches in the Department of Sociology, Anthropology and Social
Work at The College of Staten Island of CCNY. We assume that
Technomedia may derive from a Ph.D. thesis in progress, and that
he may have become, by the time of this writing, Dr. Sam Han.
Mr. Han is very much a postmodernist,[1] and that is perhaps the
primary value of this book. It is now evident that most scholarship
analyzing the nature of the World Wide Web is going to be
dominated indefinitely by postmodernist approaches. To those of
us who are perhaps less theoretical in our approach, or impatient to
arrive at a "real" or "final" understanding of the nature of the Web,
postmodernist theory often seems like a foreign language spoken
by those who are self-referentially passing through an infinite
regress. The minute they open one door and we believe we see a
glimmer of understanding, another one, barred to us by our lack of
vocabulary or unfamiliarity with the standards by which it was
constructed, swings closed.
Navigating Technomedia serves nicely as an introduction to the
broad issues in postmodernist analysis of electronic media and will
quickly bring a determined reader up to speed so as to better
understand emerging analysis. We emphasize "determined." The
work is anything but easy going, but neither is postmodernism.
It is not, I think, Sam Han's intention to arrive at a final analysis.
Navigating Technomedia offers, rather, an intellectual history of
the broad sweep of media analysis so far as the World Wide Web is
concerned. It is Han's position that we can no longer separate
"media" from "technology." They have become interrelated to an
extent that they are "radically challenging the fundamental
concepts of modernity, namely, knowledge, space/time,
subjectivity, and politics."[2] These four factors constitute the
"core themes" of the work.
As Han maintains, such topics as these, as related to technomedia,
challenge any attempt to utilize a simple "linear chronicling" of
events. That is, Han believes it impossible to write a simple
narrative history of technomedia, so intertwined has the subject
become with the various theoretical approaches to it. What he
offers us then is an intellectual history of the central (and
sometimes not so central) theorists who have contributed to
contemporary studies of technomedia, always bearing in mind that
the author believes the technology and its content to be
inseparable.
This interconnectedness leads to a sometimes-disjointed work,
though the focus on the four core themes is a very useful one.
Anyone who works often with the Web or uses it as an
entertainment device is usually aware, if thinking at all, that it
raises many fundamental questions for all of us. Where, exactly, is
"there" on the web? Who is it that speaks and who is it that listens?
What is "real" in an HTML enabled world? What is "true" on the
Web, and what is merely "truthy" in the comedian Stephen
Colbert's term?[3] Does the Web, in fact, as Han argues, change
the very meaning of time?
Han's analysis, and accumulated postmodernist analysis in general,
is founded on the assumption that the largely dominant postulates
of Modernism (think primarily here of science and objectivity) are
called into question by the World Wide Web.
Reading this book will not place the reader in a comfortable space
in which he or she finally answers the above questions. But it will
introduce the audience to the major thinkers and their theories in a
cogent and clear framework, which can orient the reader to the
various threads of the discussion.
While holding to his four core themes, Han also introduces
elements of the ongoing debates between various postmodernist
schools and their proponents. As befits a postmodernist
analysis, ideas are always presented as embedded in a specific
context. In particular, Han utilizes the very useful device of first
introducing an important theorist (most of them European
academics) and then, in a page or less, sketching that theorist's
career and introducing his or her important works.
It would not be fair to reduce the book to being merely an
introduction to postmodernist analysis because Han brings his own
perspective to the discussion, particularly his insistence that media
and technology are now, for all practical purposes, one and the
same. But it is certainly one of the work's main contributions that
it lets us begin to understand which thinkers might illuminate some
of our own questions. Navigating Technomedia, then, to return to
my metaphor, may be the first door that many of us could
reasonably open.

Endnotes
[1] See his review essay, "Tracking a Convergence Beyond
Postmodernism," a review of Charles Lemert, Postmodernism Is
Not What You Think: Why Globalization Threatens Modernity, 2nd
Edition (Boulder, Colorado: Paradigm Publishers, 2005), and
Anthony Elliot, Subject to Ourselves: Social Theory,
Psychoanalysis, and Postmodernity, 2nd Edition (Boulder,
Colorado: Paradigm Publishers, 2004), in The International
Journal of Baudrillard Studies, Volume 4, Number 2 (July 2007),
at http://www.ubishops.ca/baudrillardstudies/vol4_2/v4-2-
shan.html
[2] p. xxiii
[3] See the discussion of the origins of this term, in fact in use by
1800, according to the Oxford English Dictionary, at:
http://en.wikipedia.org/wiki/Truthiness

Comments

SH

Professor Barlow's review of my work is spot-on. My thanks to
him and the Journal for Education, Community and Values for
reviewing it.

I would just like to add one piece of corrective information. The
Graduate Center is the campus for all PhD programs in the City
University of New York, which is designated by the acronym
CUNY. "CCNY," the acronym that Professor Barlow used in the
review, is actually the shortened form of City College, which is
perhaps the "flagship" undergraduate campus of CUNY since it
is the most famous.
Apologies in advance for the finicky comment. Once again,
many thanks for the review!

Sam

Posted at 13:16 on January 13, 2009
How to Succeed in Business Using
LinkedIn
Review by Jeffrey Barlow
Perhaps one of the best things that reviewers might say
about a new book is that it made them realize that they
had neglected an important resource for better employing the
World Wide Web for what it does best: serving as a tool
for building communities and connections. Butow and Taylor's
How to Succeed in Business Using LinkedIn is such a book.
While I had been aware of LinkedIn's existence, due to friends
and colleagues asking me to join their networks from time to time,
and had once used it to track down a friend with whom I had had no
contact in almost thirty years, I had not appreciated the breadth
and depth of its utility. Butow and Taylor show LinkedIn to be the
very useful and sophisticated tool that it is.[1]
LinkedIn is a Web 2.0 social site clearly aimed at a niche audience,
those interested in building connections with others who, like
themselves, are largely interested in business ties. Butow and
Taylor bring a great deal of experience and expertise to the work.
Butow has written twelve books, many of them how-to guides to
other Web 2.0 applications such as MySpace, others more
technical guides to interface design and file virtualization.[2] His
wide experience may account for the book's clear crisp design with
an abundance of screen shots and useful graphics leading the
reader through step-by-step use of LinkedIn's many utilities.
Kathleen Taylor works as a corporate recruiter in the high tech
field and is an expert at using social networking sites for such
work.[3] She gives an abundance of tips on how to use
the many tools contained within LinkedIn, and also pays great
attention to the etiquette of making on-line contacts, perhaps
saving the reader from making egregious personal errors which
might decrease the utility of LinkedIn.
Following this guide, the novice user could quickly join, create an
appropriate profile, then utilize a number of utilities to make
connections with a wide variety of others in the same field, as well
as, of course, among the user's own friends and colleagues. Then the
reader can fine-tune his or her interests to identify experts and to
query them, to explore the many LinkedIn communities so as to
bring oneself to the attention of a noted member of the profession,
or to look for a job. As one becomes more successful, both as a
member of the LinkedIn community and as a member of one's
profession, the program has appropriate tools for building
legitimacy and reputation.
I suspect that LinkedIn might lend itself to a wide range of
community building functions--for example the use to which I put
it in locating a lost friend--but the primary utility of this book will
be for the business user. Almost all of the tools of LinkedIn are
explored for their immediate business potential. This treatment of
LinkedIn sometimes makes the application seem like a Darwinian
electronic jungle where every creature in the ecosystem is seeking
some evolutionary advantage. However, this is the obvious
intention of the application and the authors are refreshingly direct
about that purpose. And LinkedIn is clearly the most useful
weapon in that particular jungle....
Endnotes
[1] See LinkedIn at: http://www.linkedin.com/
[2] See his corporate pages at: http://www.butow.net/
[3] It seems particularly appropriate to link here to her own
LinkedIn page, found at: http://www.linkedin.com/in/kathytaylor
See also her corporate website at:
Co-Dependence: The Chinese and
American Economies and the World
Economic Problem
Editorial Essay by Jeffrey Barlow

Introduction
The Co-dependent Relationship
The Chinese Addiction
The American Addiction
Endnotes

Introduction
This editorial has both everything to do with the Internet, and
nothing. Everything, in the sense that the underlying cause of the
changes in Sino-American economic relations that we will be
discussing here is largely the impact of the Internet.
The World Wide Web has made possible electronic banking and
the instantaneous movement of capital, whether dollars or Ren
Min Bi (RMB), the People's Currency of China. It has therefore
made possible the complicated interlinking of national economies
in a process that is not always clearly understood, as the problems
with derivatives and other financial instruments reveal.
Too, the Web has facilitated very complex production chains
which see components manufactured in a variety of countries,
shipped just-in-time to second countries where they perhaps
undergo additional processing or pre-assembly, then sent to yet a
third country for final assembly, most often China.[1] Then of
course, they are exported, usually to America or Europe.
To some, however, the issue discussed here will seem to have little
to do with the Internet as we seldom refer to it directly. But it is
important to keep in mind that the current financial problem,
however we define it, is an event best understood as a result of the
third stage of globalization: the digitalization of media and of
economies alike.

The Issue
We are writing from Wenzhou, China. After spending a week in
Beijing in mid-October at a Technology Management conference,
we then came on to Wenzhou, where we have now been working for
almost a month. The news, of course, whether television such as
CNN or China Central Television (CCTV, the Chinese
governmental network[2]), or newspapers[3] is dominated by what
Americans prefer to call the "worldwide economic" problem, but
what Chinese see largely as an American, and secondarily as a
European, issue.
This problem is extremely broad in its scope. We are lecturing here
to Chinese students on the subject of globalization, viewed from a
historical perspective. Many cultural or economic historians would
argue that this is at least the third period of globalization with the
first stage visible by the 9th century A.D., the second by the 14th
century, and the current one largely an artifact of the digital age.
Others would argue, and I agree with them, that in fact there has
been one continuous process of globalization, marked by a world
economic system which has had its ups and downs, but has always
been present. And that system, over most of its existence, has been
dominated by Asia, and particularly by China.[4]
Europeans, and later Americans, enjoyed a relatively brief period
of ascendancy, mostly as a result of military superiority initiated
by English naval power in the 18th century and continued by
American atomic power throughout the 20th.[5] That ascendancy,
we believe, has just ended. We will now enter another period of
multipolarity, again marked by the importance of Asia, and
particularly of China.
This, of course, seems an outlandish view to most Americans who
have been sheltered for at least one hundred and fifty years by
self-satisfied visions of cultural superiority (best characterized as
Eurocentrism) buttressed by unthinking racism. In this happy
dream, the "West" has had inherent advantages over the "East"
(both largely meaningless concepts unless the world is viewed
from London) and Americans have had to fear only that those
tricky Asians would copy our products and ultimately our
economic systems. This is perhaps our most comforting refuge in
these confusing times: the rise of China is due to their emulation of
our system.
The current problem thus comes as a rather complex surprise. As
is too often the case, Americans are pretty much reeling in
confusion, largely obsessed with the issue of who to blame
(banking?), and its closely allied concern, who to save (the auto
industry?).[6] Here we argue that the crux of the problem can best
be understood as an issue in Sino-American relations.
This also is a natural scapegoating intellectual response for
Americans, particularly for the Democratic Party. "When we were
not looking, China stole our factories and our jobs, largely by
cheating in trade and manipulating its currency."[7]
I believe, rather, that we have been locked in a co-dependent
economic relationship with China and that the terms of that
relationship have just shifted with seismic force, as has long been
inevitable. We perhaps are at last leaving the post-9/11 world and
entering the post-decline age in that we have spent ourselves into
much reduced national power due to our persistent failure to
confront our excesses.
In doing so, we have damaged the world economy. We have given
almost everybody except our lapdog, Great Britain (and even
Britain is eyeing the doggie door in the event that the
Anglo-American house is indeed aflame), cause to reduce U.S. influence
in international financial institutions such as the International
Monetary Fund and the World Bank.
This problem, however, cannot be seen simply as the "Rise of
China." China, as is appropriate for a co-dependent, has been
ignoring its own problems and now must solve them.

The Co-dependent Relationship


The Sino-American financial relationship is quite complex, and in
understanding it I have found Charles Dumas's works, The Bill from
the China Shop (2006) and China and America, a Time of
Reckoning (2008) invaluable.[8] The accuracy of Dumas' analysis
is surely demonstrated by the fact that his 2006 work predicted the
2008 economic crisis in detail. We follow here his analysis,
augmented with our own direct observations while resident in
China for one month each of the last five years.[9]
If the Sino-American relationship is complex, our joint problem is
at bottom a simple one which is expressed by the economic laws
that have dominated the global trading system for more than a
thousand years: In the long run, markets must be in balance—there
must be sales for goods produced, or some producers eventually
must go out of business. Each buyer must one day pay up—or be
cut off from access to goods. There must be some store of value
(currency) to facilitate exchange and whether the currency is gold,
cowry shells or dollars, its value will fluctuate in complex
relationship to the supply of and demand for goods and services.
Unfortunately, China and America have found it to their mutual
interest to largely ignore these truths for the past several years. But
another truth is that, given appropriate circumstances, it is easier to
endlessly accumulate wealth, as has China, than it is to endlessly
accumulate debt, as has the United States.

The Chinese Addiction


China has attempted, and largely succeeded, in exporting its
way to economic growth via low labor costs. The rise of China has
been spectacular and is, in almost every way, praiseworthy. The
Chinese economic system has grown steadily since 1949 with
some scallops during periods of intense politicization, but has
generally progressed regardless of who was in charge or which
ideology was dominant. Since 1980, the Chinese GDP has
averaged 9.8% annual growth.[10] More people were lifted out of
poverty in the last half of the 20th century in China than ever
before in human history, and in fact almost all people lifted out of
poverty in the 20th century were Chinese.
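To give a sense of what that 9.8 percent average means in practice, a short illustrative compounding calculation (the growth rate is the figure cited above; the 1980-2008 window and the compounding itself are my own arithmetic):

```python
# Compound the cited 9.8% average annual GDP growth over 1980-2008.
growth_rate = 0.098
years = 2008 - 1980  # 28 years

multiplier = (1 + growth_rate) ** years
print(f"Economy roughly {multiplier:.1f}x its 1980 size")  # ~13.7x
```

At that rate the economy doubles roughly every seven and a half years, ending the period nearly fourteen times its starting size.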
Many factors can be given at least partial credit for this success
including Chinese savings (now running at 50 percent), direct
foreign investment in Chinese export industries, a controlled
currency, Chinese willingness to adapt their ideology to pragmatic
conditions, highly educated labor and equally highly educated
leadership. Yet, underpinning them all has been cheap but
productive labor vis-a-vis their American and European (and
Japanese) markets.
This combination of factors has made China an export monster.
Chinese exports have grown by as much as 50 percent annually, and it
is a poor year that sees export growth of less than 25 percent.
This method of growth has had some detractors, of course, but not
as many as might be supposed. Many of the exports have been
products assembled in China from parts
produced in Taiwan, Japan, Korea, Europe or the United States.
Any negative voices in those countries (such as those of displaced
workers) have then been offset by those of supporters who benefit
from the process. Hollywood may scream about pirated videos, but
WalMart, depending almost entirely on Chinese goods for its sales,
points out that these goods lower costs to the American consumer
and restrain inflation.
Endlessly rising exports, however, have both limitations and costs.
The limitation is that no country can depend indefinitely on low-cost
labor. The very success of such a strategy will increase both
wages and inflation, requiring still more wage increases, until
ultimately a new low-labor producer will enter the market. The
Chinese have responded to this in the classical fashion, by in turn
investing in those low-labor countries so as to benefit even from
that shift.
However, there is another powerful drive for the assembly-
exporting country—to move up the value chain by producing the
parts that go into the final assembly. China is, in fact, very quietly
doing just this, a factor very important to the current discussion,
which is largely ignored. We will come back to it at the
appropriate time. But any low-labor producer will be drawn to the
same strategy, ultimately "hollowing out" the original producer,
which happened to Japan and has now largely happened to Taiwan
as well.
The Chinese addiction then, is to the export market. Not only has
this produced great growth, but it has also cushioned one of
China's potentially most explosive problems: rapid population
growth (despite the one-child policy, given its vast base), which
must be met with an equally rapid increase in jobs. In general, the Chinese
believe that 8 percent GDP growth a year is necessary just to keep
up!
Additionally, China is well aware of the fact that rural areas lag far
behind urban ones, to the point where the Chinese distribution of
wealth is almost as skewed as is the American one. This means
that rural jobs, at least, must somehow be upgraded, but not at the
expense of urban incomes. The solution thus far has been to spin
off satellite production centers for the export market into the
hinterlands.
There are also very complex monetary and fiscal consequences
from the Chinese accumulation of export profits. These tend to
drive inflation through a variety of linkages, once again putting
pressure on wages.
It is then highly desirable for the Chinese to find alternatives to the
export model. But it is hard to argue with success, and there is an
additional factor, which is very difficult to calculate even for the
Chinese themselves. That is the political link between local party
bosses and the export industries. These bosses have long been
rewarded for increasing production from the export industries in
their regions, which has led to a variety of effects. These bosses
have little interest in labor problems, for example, but have
repeatedly called out local police forces to at least control if not
outright suppress labor unrest.
It is also almost impossible to believe that there is not a systematic
link of corruption between powerful local producers and equally
powerful bosses. The result has been scandal after scandal in
China over the last few years. Spectacular problems such as food
safety and vast pollution caused by illegal dumping often have
their origins in this corruption, which causes local areas to ignore
national policies, the Chinese equivalent of inadequate supervision
of corporate interests.
An important consequence of these unofficial links is that when
central government funds go into various local projects, there
are many pressures that will tend to funnel them into existing
export industries.[11] What is good for the central government and
for the national economy is often not at all seen as desirable at the
local level.
Ironically, in light of the usual American argument that China is a
highly centralized totalitarian economy (when we are not
celebrating its capitalist successes), the central government, in a
very important sense, lacks the power to enforce its policies at the
local level. To the historian, this is not surprising. The Chinese
central government has almost always had a very light political
"footprint" as a whole, but depended on value agreements
mediated by ideology—whether Confucian or Communist
—between local and national elites. Now that ideology says:
"grow!"
The Chinese then, are in effect addicted to exporting. But every
market must balance, so someplace there needs to be an import
junkie.

The American Addiction


That junkie is, of course, primarily the United States. Our
addiction to personal over-consumption (that is, consumption well
beyond our incomes) is well known. One estimate is that we have
been consuming 106 percent of our annual GDP.[12] Such a debt
is not necessarily a problem at the level at which the accounts of
countries are balanced. If America wants to buy more from China
than it sells, there are a number of ways of running such a deficit,
such as selling financial investments—debt instruments—in the
American economy to foreigners, especially to Chinese.
While these can become worrisome and a cause for national
reflection, these current account imbalances can be sustained for
very long periods, perhaps indefinitely given a generally strong
U.S. economy, and a willingness on the part of foreign
governments to continually finance yet more of our debt. And the
Chinese, in the interest of feeding their export habit, must do so.
Sino-American trade depended on both parties' willingness to enter
into this implied contract.
Yet, our over-consumption has tended to be consumer driven.
Business and industry have, in general, complex procedures which
act as checks and balances on income and outgo. Consumers,
however, are encouraged on a daily basis to over-consume and are
regularly provided a variety of means to do so.
The consumer has proven to be the flaw in this structure. The
edifice has depended for some time on the assumption that the
equity of consumers would continually increase, allowing greater
and greater amounts of debt. And most equity for Americans is in
their home.
The growth in home values required to ultimately fund this
over-consumption amounts, for a variety of reasons, to
more than the 6.5 percent mentioned above. Real growth in GDP
plus service costs must be added in as well, giving the American
consumer the burden of somehow financing over-consumption of
about 10 percent annually out of rising home equity.
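The arithmetic of the paragraph above can be sketched roughly as follows. The 106 percent consumption figure and the roughly 10 percent total come from the text; the split between real GDP growth and debt-service costs is my own illustrative assumption, not Dumas':

```python
# Rough sketch of the home-equity financing arithmetic described above.
consumption_share = 1.06                      # consuming 106% of income
over_consumption = consumption_share - 1.0    # ~6 points beyond income

real_gdp_growth = 0.025   # assumed, illustrative only
debt_service = 0.015      # assumed, illustrative only

required_equity_growth = over_consumption + real_gdp_growth + debt_service
print(f"Required home-equity appreciation: ~{required_equity_growth:.0%}")
```

The point of the sketch is simply that once growth and service costs are layered on top of the consumption gap, the required appreciation in home values approaches 10 percent annually, a pace no housing market sustains indefinitely.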
In the spring of 2006, this structure began to collapse, accelerated
by higher fuel costs,[13] generalized anxiety at endless wars, real
doubts about the future of the country, and stagnating home values.
The result was a cutback in spending, and the collapse of the
rickety support system of complex derivative instruments that had
allowed banks to assuage their doubts about the underlying
assumption of ever-increasing home equity. Values plummeted,
liquidity—the ability of banks to loan money on the assumptions
that the loans they had out on homes were, in fact, assets which
would one day be on the plus side of their balance sheets
—dissolved.
Other national economies, primarily Iceland, Great Britain, Spain and
Italy, and secondarily other European countries, were also
revealed to have been burning their consumers at both ends, and to
have been literally buying into the American derivatives markets.
To summarize, based on Dumas' analysis, the "Goldilocks
Economy," as he calls it, had four causes:

1. Globalization—China became the export monster, the United
States (and Europe) the monster consumer.
2. Consumers were encouraged to consume. National
authorities told us we owed it to the country to do so; Alan
Greenspan said the equivalent, both to the homeowner and
the banking industry, of "bubble-shmubble."
3. Chinese excess capital (export profits, savings) flowed into
asset prices, housing markets, and became the collateral for
our excess consumption.
4. The complex instruments created to cushion the threat to any
one bank, derivatives and other asset-backed securities,
slouched abroad via, once again, digital globalization.[14]

This then, in regrettably more than a nutshell, is the current
economic relationship between China and the United States. The
"world economic problem," then, is to a remarkable extent, a
Sino-American problem.
In our February issue, we will update this analysis, and turn to the
range of solutions that have been proposed. These are, as might be
expected, significantly different in China and in the United States,
though there are important areas of agreement that suggest the
possibility of a positive outcome for both, and for the world
economy. But such an outcome will require each country to kick
its habit...

Endnotes
[1] Charles Dumas, author of China and America, A Time of
Reckoning (London: Profile Books, 2008), states that China's
share of world final assembly processes now "has to be dominant
to the point of monopoly" (p. 9).
[2] See CCTV website at: http://english.cctv.com/index.shtml
[3] Thanks also to the Web, I have available a wide range of news
to supplement my daily dose of The China Daily, though I find the
latter extremely useful.
[4] See a series of editorials in Interface for example, Jeffrey
Barlow, "Development, Productivity, and the World Wide Web, an
Editorial Review Essay" found at: http://bcis.pacificu.edu/journal
/2007/03/atkinson.php; "The Internet, Securities, and Security" at:
http://bcis.pacificu.edu/journal/2002/06/editorial.php and
"Globalism and the Internet: Editorial Essay" at:
http://bcis.pacificu.edu/journal/2002/01/editorial01.php
[5] There have been conspicuous exceptions to that dominance, or
at least limits to its efficacy, among which I would list the Boxer
Rebellion and the Korean and Vietnam wars. All of these revealed
the inability of the U.S. to directly control China or its close allies
by means of military force.
[6] For my own confused reeling see: "Dining, Whining, and
Opining: From the Googleplex to Beijing" found at:
http://bcis.pacificu.edu/journal/2008/05/edit.php
[7] To his credit, President-elect Obama did not make this an issue
in the recent election, and the Republican candidate had, as is
usually the case for Republican spokesmen, little interest in
questioning an economic relationship which was long to the
advantage of American economic elites, an issue explored below.
[8] We are using the joint edition, which contains both works, and
our notes refer to the combined volume, China and America, a
Time of Reckoning (London: Profile Books, 2008). Dumas, of
course, is not the only one to hold these views. See Stephen
Roach, Chairman of Morgan Stanley, Asia, who holds similar
views at least with regard to the problems of the Chinese economy.
For these views see China Daily, interview by Wang Xu with
Roach, "How to Pick up the Economic Pieces," 11/11/2008, p. 14.
For Roach's views on the American problem, see Stephen S.
Roach, "Double Bubble Trouble," The New York Times, 3/5/2008
at: http://www.nytimes.com/2008/03/05/opinion/05roach.html. In
this piece Roach, citing the same factors as does Dumas (an asset
dependent credit bubble) accurately forecasts the October
downturn, more than 6 months before it entered into its recent
acute phase. The important disagreement between Roach and
Dumas would seem to be Roach's belief that the trade imbalance is
a more significant contributor to the problem than Dumas thinks.
Roach accordingly stresses encouraging American exports, while,
like Dumas, he considers significant investment in national
infrastructure an important step as well.
[9] For our observations on these visits see my blog, Chinatripper,
at: http://bcis.pacificu.edu/blogs/chinatripper/chinatripper.php
[10] Dumas, 14.
[11] Immediately after China announced its national bail-out plan,
the provinces and important municipal jurisdictions such as
Shanghai, which would in the normal course of events be
unveiling their own economic plans for the coming year in
December, made their responses to the government
plans. I have seen summaries of these plans from Guangdong,
Shanghai, and Guangxi and each, I would argue, plans additional
major investment in export-industry sectors.
[12] See Dumas' accounting at pp. 25-6.
[13] These of course, are not unrelated to the marked increase in
oil consumption by China. Its export economy and its
transportation infrastructure alike are very energy inefficient, and it
consumes relatively more fuel per unit of output than its Japanese or
American equivalents.
[14] See discussion at pp. 27-30.