Validity in Quantitative Content Analysis

Author(s): Liam Rourke and Terry Anderson


Source: Educational Technology Research and Development, Vol. 52, No. 1 (2004), pp. 5-18
Published by: Springer
Stable URL: https://www.jstor.org/stable/30220371
Validity in Quantitative Content Analysis

Liam Rourke

Terry Anderson

Over the past 15 years, educational technologists have been dabbling with a research technique known as quantitative content analysis (QCA). Although it is characterized as a systematic and objective procedure for describing communication, readers find insufficient evidence of either quality in published reports. In this paper, it is argued that QCA should be conceived of as a form of testing and measurement. If this argument is successful, it becomes possible to frame many of the problems associated with QCA studies under the well-articulated rubric of test validity. Two sets of procedures for developing the validity of a QCA coding protocol are provided, (a) one for developing a protocol that is theoretically valid and (b) one for establishing its validity empirically. The paper is concerned specifically with the use of QCA to study educational applications of computer-mediated communication.

The primary role of networked computers in higher education has shifted from presenting structured, preprogrammed learning materials to facilitating communication. In turn, the role of educational technology researchers has expanded to include the role of communication researcher. In the late 1980s, studies began to appear that incorporated new perspectives, new methods, and new techniques. One of the most promising was quantitative content analysis (QCA).

Berelson (1952) defined QCA as "a research technique for the systematic, objective, and quantitative description of the manifest content of communication" (p. 18). In this context, description is a process that includes segmenting communication content into units, assigning each unit to a category, and providing tallies for each category. Bullen (1998), for instance, studied participation and critical thinking in an online discussion by counting the number of times each student contributed to the discussion and by assigning each contribution to one of three categories of critical thinking.
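The mechanics of this descriptive process can be sketched in a few lines of code. The fragment below tallies a handful of invented coded units; the student identifiers, category labels, and data are illustrative assumptions rather than Bullen's actual scheme or data.

```python
from collections import Counter

# Invented coded units: each message has been segmented out of a transcript
# and assigned to one of three categories of critical thinking by a coder.
coded_messages = [
    {"student": "S1", "category": "extensive"},
    {"student": "S1", "category": "minimal"},
    {"student": "S2", "category": "moderate"},
    {"student": "S3", "category": "extensive"},
    {"student": "S2", "category": "moderate"},
]

# Participation: how many times each student contributed.
participation = Counter(m["student"] for m in coded_messages)

# Description: tallies for each category of critical thinking.
category_tallies = Counter(m["category"] for m in coded_messages)

print(participation)     # Counter({'S1': 2, 'S2': 2, 'S3': 1})
print(category_tallies)  # Counter({'extensive': 2, 'moderate': 2, 'minimal': 1})
```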
By 1999, enough of these types of studies had been conducted in the field of educational technology that we were able to prepare a literature review (Rourke, Anderson, Garrison, & Archer, 2001). We used published reports to illustrate many of the basic issues of QCA, including objectivity, reliability, units of analysis, types of content, research designs, and ethics. Our intent was to summarize the experiences of researchers like us who were struggling to apply this unfamiliar technique in a disciplined and efficient manner. We found the technique promising but chided researchers on the rigor of their reports, particularly on the lack of reliability data. Those who persisted with the technique found their own way to these conclusions, and today reviewers are more critical and reports are more informative.
Even so, demonstrating reliable data collection is only one premise in the ultimate argument toward which such reports build: the argument that will persuade readers that inferences drawn from the data are valid, an argument supported by empirical evidence and theoretical rationale. When QCA is used to describe the occurrence of wholly manifest content (e.g., the number of messages posted by each student), the argument is straightforward. When QCA is used to draw inferences about constructs (e.g., assessing the level of critical thinking in a computer conference transcript), the argument is not so clear-cut. In this article we review procedures for making a sound validity argument in the latter case.

The article is divided into three sections. The first section begins with the observation that QCA is a form of testing and measurement but notes that the procedures of test development codified in the psychometric literature are given meagre consideration in QCA research. The second section describes the process of constructing a coding protocol that is theoretically valid, or as Sheppard (1993) says, reasonable. This section draws on Crocker and Algina's (1986) presentation of essential steps in test construction. The third section follows Messick's (1989) discussion of several types of empirical studies that can be conducted to establish the validity of inferences derived from a testing procedure.

QCA AS TESTING AND MEASUREMENT

Description Versus Inference

A survey of QCA studies in the educational technology literature shows that they often involve making inferences about constructs. In the example cited earlier, Bullen's (1998) QCA culminated in conclusions about the students' level of critical thinking. This goes well beyond the purpose for which the standards of QCA were originally developed. Originally, media researchers such as Berelson (1952) used the technique in a more modest role to describe the surface content of communication. This is evident in Berelson's definition, which specified QCA's purely descriptive role. Studies of computer-mediated communication (CMC) that report the frequency with which students post messages (Bullen, 1998; Weiss & Morrison, 1998) or the average number of words in messages (Parson, 1996) adhere to Berelson's definition. It is through these types of studies that the test validity question peculiar to QCA coding protocols emerged: Does the procedure describe what it purports to describe (Krippendorff, 1980; Riffe, Lacy, & Fico, 1998)? In this context, the researcher is providing something closer to data than interpretation, and the data speak for themselves (Kaplan, 1964).

When researchers use QCA to make inferences about constructs, the data are no longer speaking for themselves. To make the transition from description to inference, a richer definition of test validity is required, such as the one proposed in Messick's (1989) landmark chapter:

Validity is an integrated evaluative judgment of the degree to which theoretical rationales and empirical evidence support the adequacy and appropriateness of interpretations and actions based on test scores or other methods of assessment. (p. 13)

The concepts included in this dense definition are particularly important when the frequency of certain types of content is used to offer interpretations about constructs such as higher-order learning, social processes, or critical thinking. For instance, using Henri's (1991) protocol, observers count (among other things) the number of times students in a computer conference formulate a proposition that proceeds from previous statements and then offer inferences about students' cognitive and metacognitive skills. Bereiter and Scardemalia (1987) questioned the validity of this type of procedure when they studied the cognitive processes that underlie composition:

It might seem that the way to begin an explanation of our research is by showing pieces of writing that exemplify knowledge telling and knowledge transforming. That would be misleading, however. Knowledge telling and knowledge transforming refer to mental processes by which texts are composed, not to texts themselves. You cannot tell by reading this chapter whether we have engaged in problem-solving and knowledge-transforming operations while writing it or whether we have
simply written down content that was already stored in memory in more or less the form presented here. (p. 13)

In this argument, it becomes apparent that assessing the surface characteristics of written composition--something the writing teacher does--and measuring the cognitive processes that underlie written composition--something a cognitive psychologist might want to do--are two different things. Bereiter and Scardemalia (1987) rejected the possibility of learning about one through descriptions of the other. For the content analyst engaged in a purely descriptive study, this argument is inconsequential. For the analyst engaged in an inferential study--those using Henri's (1991) procedure, for instance--this argument is fatal. Bereiter and Scardemalia would assert that one cannot say anything about students' cognitive or metacognitive skills based on how many times they formulate a proposition that proceeds from previous statements.

Psychometricians would disagree. Much of test theory and the day-to-day practice of testing and measuring is the attempt to make accurate judgments about unobservable constructs based on observable behavior: Candidates' potential for success in graduate school is gauged through their scores on paper-and-pencil aptitude tests; experimental subjects' attitudes are assessed using researchers' questionnaires; and applicants' suitability for jobs is predicted using personality measures, for instance.

Test theory accepts Bereiter and Scardemalia's (1987) argument as fundamental, and prescribes that if there is a gulf between that which one wishes to study and that which is directly observable, then some sort of correspondence between the two must be established before inference begins. The standards of testing and measurement corollary to Messick's (1989) definition of validity set out the steps for accomplishing this. To begin our discussion of these steps, we will first show that QCA is a form of testing and measurement.

Testing, Measurement, and QCA

A test, according to Crocker and Algina (1986), is "a standard procedure for collecting a sample of behavior from a specified domain" (p. 4). Their definition is deliberately general because the class of things encompassed by the term test is diverse. It subsumes an assortment of procedures and aims that range from standardized achievement tests to teacher-made multiple-choice quizzes, from published personality inventories to researchers' questionnaires, and beyond.

One specific form of testing that Crocker and Algina (1986) depicted is "a standard schedule and a list of behaviors that can be used by an observer who codes behavior displayed by subjects in a naturalistic setting" (p. 4). This depiction alone provides a fairly complete characterization of QCA. It is precisely how data were collected in the two examples that have been presented previously. Bullen (1998), for instance, provided observers with a list of 16 behaviors, such as defining terms and judging definitions, which they were to look for in the messages that students posted to an educational computer conference. Similarly, the cognitive skills section of Henri's (1991) coding protocol consists of 18 behaviors that coders seek in the paragraphs within students' postings to an online educational discussion (Hara, Bonk, & Angeli, 2000). Each of the elements and processes in Crocker and Algina's depiction of testing is apparent in these two examples. The naturalistic setting in both examples is the online class discussion. The lists of behaviors are the 16- and 18-item lists that are purportedly indicative of critical thinking and cognitive skills as these constructs manifest themselves in online discussions. These lists are provided to the observers or, as they are referred to in QCA, coders or raters; and the standard schedules that accompany these protocols are the individual messages or paragraphs within the messages in the online discussion. Note that physical or syntactical units of analysis (e.g., conference messages or paragraphs within the messages) replace temporal units (e.g., 30-sec intervals) when observation moves from the face-to-face classroom to the computer conference transcript. Clearly, there is room for QCA coding protocols under the general definition of tests and within the specific process that Crocker and Algina portrayed. Add to this the corollary process of measurement--assigning numbers to properties of objects or events, according to rules, so that the numbers reflect differences in the variable present in different objects or events (Lord & Novick, 1968; Rogers, 1999).
Bullen's (1998) report illustrates this form in QCA:

[The messages posted by] the students were sorted into one of three categories of critical thinking. The categories and corresponding scores were as follows: (3)--extensive use of critical thinking skills, (2)--moderate use of critical thinking skills, (1)--minimal use of critical thinking skills. (p. 14)

The objects or events that Bullen focused on were the messages posted by undergraduate students to their computer conference. The rules that were used to regulate the assignment of numbers to these messages included three things: (a) an overarching theory of critical thinking (Norris & Ennis, 1989) with which observers were familiarized, (b) a 16-item set of behaviors indicative of how three levels of critical thinking manifest themselves in text-based online discussion, and (c) an incremental numbering system corresponding to the hierarchical conceptualization of critical thinking. The manner in which numbers were assigned to the students' messages reflected differences in the types of critical thinking, and subsequent frequency counts of each of the categories reflected differences in the amount of critical thinking present across discussion weeks and between students.
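A minimal sketch of this measurement step, using invented data: the rule maps each coded category to its score, and frequency counts of each level are then compared across discussion weeks. The week labels, messages, and helper structure are hypothetical rather than Bullen's actual data.

```python
from collections import Counter

# Rule: map each category of critical thinking to its score (3, 2, 1).
SCORES = {"extensive": 3, "moderate": 2, "minimal": 1}

# Invented coded messages, tagged with the discussion week they came from.
coded = [
    ("week1", "minimal"), ("week1", "minimal"), ("week1", "moderate"),
    ("week2", "moderate"), ("week2", "extensive"), ("week2", "extensive"),
]

# Frequency of each score level, per week.
per_week = {}
for week, category in coded:
    per_week.setdefault(week, Counter())[SCORES[category]] += 1

print(per_week)  # {'week1': Counter({1: 2, 2: 1}), 'week2': Counter({3: 2, 2: 1})}
```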

Together, the complementary processes of testing and measurement provide a reasonably complete and accurate description of what the content analyst does. The generic definition of test and the depiction of one specific form reveal that QCA, as it is conceptualized and practiced, sits firmly within the rubric of testing and measurement.

This relationship has not been explicated in the past perhaps because it would have been unreasonable to bring the complex machinery of test theory to an operation that was purely descriptive. However, now that content analysts, particularly those in the field of educational technology, consistently use the technique in an inferential capacity, it is advantageous to position it in the appropriate psychometric context. This conceptualization could sensitize researchers to the tendency of data collection and analysis procedures to drift from the descriptive into the inferential, and it could alert them to the significance of this shift. Some content analysts have been unmindful of this distinction, or they have been mindful but have not known how to proceed in a valid manner. Locating QCA under the rubric of testing and measurement provides a well-articulated model of how to move, in a defensible manner, from frequency counts of directly observable behavior to insights about the complex constructs that they allegedly signify. The basic elements of this model are presented in the next section.

DEVELOPING A THEORETICALLY VALID PROTOCOL

The steps to developing a theoretically valid protocol are:

* Identifying the purpose of the coding data
* Identifying behaviours that represent the construct
* Reviewing the categories and indicators
* Holding preliminary tryouts
* Developing guidelines for administration, scoring, and interpretation of the coding scheme

Identifying the Purpose of the Coding Data

The first step in developing a coding protocol is to identify the purpose for which the coding data will be used. To continue with a previous example, Henri's (1991) first question should be What do I want to do, say, or infer, after I have counted the number of times that students in a computer conference, for instance, ask relevant questions? There are two general purposes for any kind of testing: making decisions and conducting research. Questions and hypotheses, focusing mainly on the types of communication that occur in CMC and their influence on learning, have been explored using QCA. Henri's (1991) instrument, for instance, yields information on the interactive, participative, social, cognitive, and metacognitive dimensions of communication in computer conferences.
Identifying the purpose of the data informs decisions about scaling, score interpretation models, and the types of validity evidence that are required. QCA protocols used in CMC studies typically use nominal scales of measurement. In these scales, numbers are used to categorize segments of transcripts, with the numbers reflecting nothing about the segments other than that they are different. Using Gunawardena, Lowe, and Anderson's (1997) scheme for coding social knowledge construction, messages are coded as 1 (statement of opinion), 2 (statement of agreement), 3 (corroborating example), 4 (clarifying detail), or 5 (problem definition). Statement of opinion (1) does not mean less social construction of knowledge than statement of agreement (2); the difference between 1 and 2 and 3 and 4 does not represent an equal difference in level of social knowledge construction; and 0 does not represent the absolute absence of social knowledge construction.
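The practical consequence of treating these codes as nominal can be illustrated with a short sketch (the coded sequence below is invented): frequency counts over the category labels are meaningful, whereas arithmetic on the code numbers themselves, such as averaging them, is not.

```python
from collections import Counter
from statistics import mean

# Invented sequence of nominal codes (1-5) assigned to messages in a transcript.
codes = [1, 1, 2, 3, 1, 2, 5, 1, 4, 2]

labels = {1: "statement of opinion", 2: "statement of agreement",
          3: "corroborating example", 4: "clarifying detail",
          5: "problem definition"}

# Meaningful on a nominal scale: how often each category occurs.
print(Counter(labels[c] for c in codes).most_common())

# Not meaningful: the codes are labels, so their mean (2.2 here) says nothing
# about the amount of social knowledge construction in the transcript.
print(mean(codes))
```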
The purpose of the data also influences the selection of score interpretation models. Criterion-referenced score interpretations focus on the classification of test takers--pass-fail, master-nonmaster--in conventional testing. Norm-referenced interpretations focus on relative comparisons of test takers--average, above average, below average. The former model is prevalent in applied decision-making contexts--selection, placement, and so forth--and requires established criteria and justified cut-scores with which one can judge mastery. So far, criteria have not been proposed for the issues that educational technologists study with QCA (e.g., interaction, participation, procession through the problem-solving process).

Norm-referenced interpretations characterize the whole of QCA studies in our domain. Generally, one transcript or one segment of a transcript is positioned relative to another. For instance, Chou (2001) compared levels of learner-learner interaction in synchronous versus asynchronous communication modes, and concluded, in a norm-referenced fashion, that there was a higher percentage of social-emotional interaction in synchronous mode than in asynchronous mode. It is not logically necessary to use nominal scales or norm-referenced interpretation models; it is simply conventional. However, there are implications for both of these decisions and therefore they should be made thoughtfully.

The purpose of the coding data also determines the types of validity evidence that need to be gathered. In keeping with the previous discussion, Cronbach (1990) distinguished between using a test to describe and to make decisions about a person. Deciding which candidate should receive a scholarship or which applicants should be admitted to a graduate program is consequential and necessitates a thorough investigation of several elements of an assessment procedure, including its relevance, utility, social consequences, value implications, and construct validity. In the context of basic research, in contrast, the researcher may neglect some of these criteria without placing hopeful students in any peril. In this latter situation, Messick (1989) and Cronbach (1971) encouraged test developers to direct finite resources into a systematic investigation of construct validity. More will be said about this type of validity in subsequent sections.

Identifying Behaviours That Represent the Construct

Once the purpose of coding data has been determined, the next step is to identify behaviors that represent the construct. The goal in this step is to ensure that a coding protocol neither leaves out behaviors that should be included, nor includes behaviors that should be left out. This is particularly important in QCA because the technique is essentially observational; therefore, in an operationist and behaviorist sense, the construct, at one level, comes to be defined by observable behavior. The precariousness of this enterprise can be illustrated with a simple example. One construct that has received continued attention from CMC researchers is participation, which, in QCA studies, is regularly defined by a single representative behavior--posting a message to a conference. If this specific behavior is interesting in itself, perhaps in a human-computer interaction study, then this operational definition is satisfactory.
However, this is not how participation is typically analyzed in the field of educational technology. Here, participation is used as a dependent or independent variable, and relationships are sought between it and other variables, such as moderating (Anderson, Rourke, Garrison, & Archer, 2001), learner characteristics (e.g., McLean & Morrison, 2000), achievement (e.g., Richardson & Swan, 2003), satisfaction (Gunawardena & Zittle, 1998), and higher-order learning (e.g., Gunawardena et al., 1997; Henri, 1991). To understand how participation relates to these factors, researchers may want to add at least one other representative behavior to the conventionally narrow definition. Sutton (2001) provided some evidence of one possible reason why. In his study of vicarious interaction, he found that many students who do not post messages in the online discussions learn adequately by observing and actively processing the interactions between other students. This mode of observation and active processing may be considered a legitimate type of participation depending on the aims of a study. Operationally defining a construct with a single representative behavior could lead to a distorted understanding of existing relationships.

Some of the procedures used to achieve construct representativeness in QCA protocols include literature review and qualitative forms of content analysis. Other methods used outside of QCA, such as the critical incident technique (Flanagan, 1954) and protocol analysis (Ericsson & Simon, 1993), may also be helpful. Conducting literature reviews is a common method for identifying representative behaviors. For instance, Curtis and Lawson (2001) wanted to examine student interaction in computer conferences from a collaborative learning perspective. Therefore, they referred to the extensive body of literature on collaborative and cooperative learning, including three decades of research by Johnson and Johnson (1979; 1986; 1989; 1992a; 1992b; 1994a; 1994b; 1996), books by Slavin (1991) and Sharan and Sharan (1992), and several meta-analyses (Johnson, Johnson, & Stanne, 2000; Johnson, Johnson, & Maruyama, 1983). Within this body of literature, they found a mature theory of collaborative learning, syntactical and operational definitions of the construct, and indicators that required only minor modification before they could be used in their QCA coding scheme.

Qualitative content analysis has also been used to gather behavioral indicators. Mason (1991) had experts examine a large sample of computer conference transcripts in order to catalog the types of communication in which students engage. She then reduced these to a parsimonious set of categories and indicators. Her typology included eminently codable behaviors such as use of personal experience related to course themes and reference to appropriate material outside the course package. Most QCA researchers move iteratively between these two methods--literature review and direct observation of transcripts--to develop relevant and representative coding indicators.

Beyond the domain of QCA, other procedures have been developed to identify representative behaviors. In the field of industrial and organizational psychology, the critical incident technique (Flanagan, 1954) is widely used to discover behaviors that contribute to success in specific situations. To analyze a domain using the critical incident technique, a researcher first asks people familiar with the context to describe particularly effective behaviors, that is, critical incidents. The researcher then identifies themes represented by the incidents, and these themes are used as the basis for indicators. In CMC studies, students and teachers could be interviewed in an effort to identify critical types of communication that enhance learning. These data could then be used to justify their inclusion in a coding protocol.

The preceding techniques are useful in analyzing manifest content or, as Potter and Levine-Donnerstein (1999) explained, content that resides on the surface of communication. However, CMC researchers often want to analyze latent content, or constructs that are signified by the communication, for example, cognition (Garrison, Anderson, & Archer, 2001; Henri, 1991). To identify behaviors that are representative of these types of constructs, it seems appropriate to refer to the techniques developed by cognitive psychologists. Researchers such as Snow, Federico, and Montague (1980) and

Embretson (1983) argued that the process of test construction should be informed by an understanding of the cognitive activities that are constituents of a task. Techniques such as computational and mathematical modeling, chronographic analysis, and protocol analysis have been used to this end. Bereiter and Scardemalia (1987), in fact, used the latter two techniques in their program of research on cognition during composition. Measuring the start-up times of students confronted with various writing tasks and analyzing students' think-aloud protocols enabled them to offer evidentiary statements about the cognitive processes of students. In a CMC context, students could be asked to think aloud as they read and responded to messages in computer conferences in order to identify representative behaviors.

There are few examples of these methods being applied in the educational CMC content analysis literature, but there are many examples of researchers proceeding without them. Garrison et al. (2001) developed a coding protocol to assess cognitive presence in computer conferencing, which they defined as the ability of learners to construct meaning through sustained communication. The indicators that constituted the protocol were based on the authors' four-stage model of critical thinking and included items such as puzzlement, brainstorming, connecting ideas, and defending solutions. When coders applied the indicators in an empirical study, they were unable to find any evidence of one of the four stages, coded a third of the transcript as other, and coded the remaining two thirds into the final two stages. This left the authors unable to interpret whether the data reflected shortcomings in (a) the coding protocol, (b) the instructional design of the course, or (c) the medium, the lack of (d) cognitive presence, or a combination of (e) all these factors and more.

Cronbach (1990) criticized the process by which constructs are translated into observable behaviors as largely private, informal, and undocumented. Our personal experience with the development of coding protocols corroborates his critique. "Such an approach," argued Crocker and Algina (1986), "results in a highly subjective and idiosyncratic operationalization of the construct" (p. 67). Worse still, it can result in data that are uninterpretable or rival interpretations that are more plausible than those offered by the content analyst.

Reviewing the Categories and Indicators

Most of the steps in this section are connotative of content validity, that is, procedures for ensuring that the categories and indicators in a QCA protocol adequately represent a performance domain or construct. Establishing content validity is largely a subjective operation that relies on the judgment of experts. This process ranges in levels of formality; however, a responsible developer will want a group of experts to evaluate the provisional coding categories and indicators to determine their relevance and representativeness. The term expert in this context refers to someone who has experience with QCA, knowledge of the construct, and familiarity with the context in which the coding protocol will be used. If several judges are available, some variation of a Delphi technique (Dalkey & Helmer, 1963) can be used, in which members of the group evaluate the indicators, compare their evaluations, and discuss deviant evaluations.

Holding Preliminary Tryouts

The next step in this process is a preliminary tryout of the coding protocol. There are several examples of this in the educational technology literature. In fact, this step accounts for most of the published QCA studies. Except for Fahy's (in press; 2002a; 2002b; 2001), Henri's (1991), and the Community of Inquiry's (2002) coding schemes, we know of no protocols that have been used in multiple studies.

Even rigorously developed coding protocols will be subject to unforeseeable shortcomings. During each successive use of Anderson et al.'s (2001) coding scheme for teaching presence and Rourke et al.'s (1999) scheme for social presence, substantial changes were made: (a) indicators that were not being used were abandoned, (b) indicators that were unreliable were reworded

or discarded, and (c) indicators that were conceptually misaligned were moved to more appropriate categories. Similar experiences have accrued with Henri's (1991) instrument, although no changes have been formally adopted.

Developing Guidelines for Administration, Scoring, and Interpretation of the Coding Scheme

Based on the work that has been conducted in the previous steps, developers are in a position to amass guidelines for administration, scoring, and interpretation of the protocol. Thorough QCA reports, such as those presented by Jonassen and Kwon (2001), contain information concerning intrarater and interrater agreement, training procedures for coders, and samples of coded transcripts. The frequency with which journal editors are requesting this information is increasing, and if researchers are eager to have their protocols used and a set of results replicated, it is requisite information.
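Reporting interrater agreement presupposes computing it. The sketch below shows one common way of doing so for two coders--simple percent agreement and Cohen's kappa--on an invented set of jointly coded messages; the category codes, coder data, and choice of statistic are illustrative assumptions, not details reported by Jonassen and Kwon (2001).

```python
# Two coders' category assignments for the same ten messages (invented data).
coder_a = ["DI", "FD", "FD", "ID", "DI", "FD", "ID", "ID", "DI", "FD"]
coder_b = ["DI", "FD", "ID", "ID", "DI", "FD", "ID", "FD", "DI", "FD"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # percent agreement

# Agreement expected by chance: sum over categories of the product of marginals.
categories = set(coder_a) | set(coder_b)
expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories)

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(round(observed, 2), round(kappa, 2))      # 0.8 0.7
```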
A useful element in interpreting the results of a QCA study would be a pool of normative data. Unfortunately, no such pools exist. Kamin, O'Sullivan, Younger, and Deterding (2001) used QCA to study the critical thinking of medical students during problem-based learning. Their discussion pointed to the problem of interpreting results in the absence of normative data:

Additional work is needed to determine the magnitude of critical thinking ratios that should be obtained in a typical 3rd-year medical students' problem-based learning group. Critical thinking could possibly be higher in other situations or this could be as high as it gets. Other researchers studying classroom discourse found critical thinking ratios between .44 and .87, but they sampled undergraduate students, not medical students or problem-based learning groups. (p. 33)

At the culmination of their study, Kamin et al. (2001) were able to describe their construct for the group they studied, but without normative data, they could not offer a meaningful interpretation of the data in a larger context.

The final step in test construction that Crocker and Algina (1986) discussed is designing and conducting validity studies for the final form of a coding protocol. This step signals the conclusion of the first stage of establishing validity in QCA studies and the first section of this article. In this section we translated some of the essential processes of responsible test development into the less paradigmatic testing and measuring context of QCA. Addressing these steps during the development process will increase the durability, persuasiveness, and validity of QCA studies.

Having completed the above steps, the content analyst has an instrument that is plausibly valid. In the next section, we present several research designs for gathering empirical evidence of the validity of a QCA protocol.

GATHERING EMPIRICAL EVIDENCE FOR VALIDITY

A rigorous and systematic process of construction yields a set of indicators that are reasonable (Sheppard, 1993). However, the appropriateness of inferences made from the protocol remains hypothetical until it is demonstrated empirically. One example of this, discussed earlier, is the problem that vicarious interaction (Sutton, 2001) presents for the validity of participation protocols. Another example is Bereiter and Scardemalia's (1987) cautious attitude toward writing samples as evidence of cognition. A further example is found in our work on social presence (Rourke & Anderson, 2002). Originally, we developed a scheme for coding the purely social elements of computer conferences, based on the belief that social communication is an important antecedent to critical discourse. However, interviews indicated that this interpretation was not universal among the students who were participating in computer conferences. As one participant said:

While I was not inhibited from commenting in general, I was reluctant to bring up points of dispute. The environment became much more social than useful in the exchange of ideas. I grew tired of the niceties of online protocol and wished that other participants would just get to the point. (Rourke & Anderson, 2002)

The problem this comment reveals is not with the power of the coding scheme to provide a tally of salutations, compliments, and other indicators in the protocol. It is with the inference that what the raters have observed, categorized, and counted is communication that supports critical discourse. Further investigation is required to warrant such an interpretation.

Messick (1989) discussed several types of investigations that should be conducted to establish the validity of any test. Of these, three are particularly germane to the development of coding protocols whose purpose is to collect descriptive data and generate inferential information in research contexts. These are:

1. Correlational analyses
2. Examinations of group differences, and
3. Experimental or instructional interventions.

In this section we will examine all three. To illustrate the discussion, we will draw primarily on examples from our own work, with which we are most familiar and which provides some of the few available examples.

Correlational Analyses

The first type of study Messick described is correlational analyses. In this type of study, the content analyst attempts to demonstrate that measurements of the construct made through QCA are consistent with measurements of the construct made through other methods. Rourke and Anderson (2002) performed this type of study on their social presence instrument. Working from a definition of social presence as the ability of learners to project themselves socially and affectively into a mediated community of inquiry, they constructed a coding scheme consisting of 15 representative behaviors (Rourke, Anderson, Garrison, & Archer, 1999). The purposes of their correlational study were to (a) explore the relevance of the indicators, (b) test that the frequency of the indicators was meaningful (an implicit assumption of QCA), and (c) assess social presence using an alternate method. To accomplish these goals they asked students to rate the frequency of each of the indicators. They also asked students to assess the social presence of their conference using a more traditional method, a 5-item semantic differential scale anchored by the positive adjectives warm, friendly, trusting, close, disinhibiting, and personal (Gunawardena & Zittle, 1998; Short, Williams, & Christie, 1976). The authors found that correlations between the frequency of the 15 indicators and the students' ratings of social presence were weak (r = 0.4, approximately). Furthermore, significant correlations were observed only between a subset of the indicators and a subset of the social presence dimensions represented on the semantic differential scale.
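The computation behind such a correlational check is straightforward; the sketch below pairs the frequency with which one indicator was coded for each student against that student's self-report rating and computes a Pearson correlation. The numbers and the per-student unit of analysis are invented for illustration, not data from the study.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented data: per-student frequency of one coded indicator (e.g., salutations)
# and the same students' semantic-differential ratings of social presence.
indicator_freq  = [2, 5, 1, 7, 4, 3, 6, 0]
presence_rating = [3.0, 3.5, 2.5, 4.0, 4.5, 2.0, 3.5, 3.0]

print(round(pearson_r(indicator_freq, presence_rating), 2))
```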
These results point to difficulties in at least two areas of validity. First, content representativeness--the coding scheme was not measuring all of the dimensions of social presence. Second, content relevance--some of the indicators did not correlate with any of the dimensions of social presence. Ultimately this presents a challenge to the appropriateness of the inferences that one would want to make from the coding data; that is, that observing, categorizing, and counting the occurrence of the 15 indicators would allow one to infer how well students could project themselves socially and affectively into a mediated community of inquiry.

We have also conducted studies in which multiple methods of data collection were used to corroborate the results of a QCA (Rourke & Anderson, 2002). To explore the validity of Anderson et al.'s (2001) teaching presence coding scheme, we combined a QCA with interviews and a questionnaire. The questionnaire items were derived explicitly from the teaching presence indicators. For example, the indicator drawing in participants was transposed into the questionnaire item: This week's discussion leader was effective at drawing in participants. Similarly, the interviews were structured to further address the same indicator. Data from each of the methods were then triangulated. We were relieved to find that most of the indicators the raters were observing, categorizing, and counting were perceptible to the students, were performing their hypothesized function, and were differentially effective based on their frequency. We were also able to determine that many of the segments of the transcript that were rated high by the QCA were the same ones that were rated high by the students who had participated in the conference.

Group Differences

Coding schemes for constructs such as critical thinking, group problem solving, or social communication are essentially embodiments of what their authors think the ideal process would look like. This belief gives rise to a type of study in which an ideal group engaging in the target process is compared to a group that is much less ideal. One hopes the instrument distinguishes between them appropriately. There are two forms of this type of study, cross-sectional and longitudinal. In a cross-sectional study, an effective protocol for coding social communication should distinguish between a cohort group in the final year of their program and a zero-history group. In a longitudinal study, the same protocol should be able to distinguish between the initial weeks and the final weeks of a computer conference. Demonstrating this ability is an important early step in validating a QCA protocol.

Anderson et al. (2001) were able to provide this type of evidence for their teaching presence instrument. Composed of the categories direct instruction, instructional design, and facilitating discourse, their instrument was able to distinguish between weeks of discussion led by students and weeks of discussion led by the instructor. As would be hypothesized by their model, instructor-led discussion contained more evidence of direct instruction and instructional design while student-led discussion contained more evidence of facilitating discourse (Rourke & Anderson, 2002).
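The logic of such a check can be sketched as follows: compare the frequency of one category between instructor-led and student-led weeks with an independent-samples (Welch) t statistic. The per-week counts are invented, and the choice of statistic is ours for illustration rather than something reported by Anderson et al. (2001).

```python
from math import sqrt
from statistics import mean, stdev

# Invented counts of direct instruction codes per week of discussion.
instructor_led = [14, 11, 16, 13]
student_led = [5, 7, 4, 6]

def welch_t(a, b):
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# A large positive t suggests the protocol separates the two kinds of weeks
# in the direction the teaching presence model predicts.
print(round(welch_t(instructor_led, student_led), 2))
```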

This particular study, however, is not a paradigmatic group differences test, nor are we aware of other more definitive studies in the educational technology literature. Because it is important to understand this key form of test validation, we have selected an example from outside the domain. Willms (1978, as cited in Rogers, 1999) developed a test to assess knowledge of mental retardation. Once a set of items had been amassed, Willms administered the provisional instrument to four intact groups who he hypothesized would have varying but predictable degrees of success. His predictions were confirmed when education students studying mental retardation scored highest, community college students enrolled in upgrading classes scored lowest, and the remaining two groups--high school tutors working with mentally retarded students and education students studying statistics--scored in the middle.

Experimental and Instructional Intervention

In the group differences scenario, naturally occurring criterion groups are identified that are expected to differ with respect to the construct being measured (e.g., teachers vs. students on moderating ability). In experimental or instructional intervention studies, researchers deliberately manipulate the groups or their environment to induce an alteration in the construct under study. An attempt is made to modify behavior in theoretically predicted ways to determine whether the coding protocol is sensitive to these changes. For example, Jonassen and Kwon (2001) studied students' problem-solving skills using a content analysis protocol developed by Poole and Holmes (1995). If valid, such an instrument should be sensitive to instructional interventions such as training students in the problem-solving process or providing expert assistance while students are engaged in problem-solving tasks. The following year, Jonassen and Kwon (2002) conducted such a study. Undergraduate economics students were divided into small teams and asked to resolve economics problems online. To communicate with each other, some teams used a standard computer conferencing system while others used an electronic argumentation-scaffolding system. Using Poole and Holmes's coding protocol, Jonassen and Kwon were able to identify clearly where the scaffolding system had been used and where it had not.
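One way to check whether a protocol registers an intervention of this kind is to compare the distribution of coded categories across the two conditions with a chi-square test of independence. The contingency table below is invented and the category labels are placeholders, not Poole and Holmes's (1995) actual scheme.

```python
# Rows: conditions; columns: invented coded categories (counts of coded units).
observed = {
    "scaffolded": {"claim": 40, "evidence": 35, "off_task": 10},
    "standard":   {"claim": 45, "evidence": 15, "off_task": 30},
}

categories = ["claim", "evidence", "off_task"]
row_totals = {cond: sum(row.values()) for cond, row in observed.items()}
col_totals = {c: sum(observed[cond][c] for cond in observed) for c in categories}
grand = sum(row_totals.values())

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells.
chi_sq = sum(
    (observed[cond][c] - row_totals[cond] * col_totals[c] / grand) ** 2
    / (row_totals[cond] * col_totals[c] / grand)
    for cond in observed for c in categories
)

print(round(chi_sq, 2))  # compare against a chi-square distribution with 2 df
```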
validation, we have selected an example from
outside the domain. Willms (1978, as cited in Correlational analyses, group differences
Rogers, 1999) developed a test to assess knowl- studies, and experimental or instructional inter-
edge of mental retardation. Once a set of items vention studies provide empirical evidence that
had been amassed, Willms administered the a coding protocol describes what it purports to
provisional instrument to four intact groups describe and that the inferences made from the

Without this evidence, inferences remain speculative. If protocol developers are convinced of the obviousness or appropriateness of their behavioral indicators, they will not be averse to an empirical evaluation of the coding protocol's utility. If the proper steps have been taken during the development stage, evaluations of the provisional instrument should not result in dramatic surprises or disappointments.

CONCLUSION

In a previous paper, we constructed a table in which the criteria of QCA constituted the columns and published studies the rows (Rourke et al., 2001). Studies that epitomized the use of a particular criterion were inserted in the appropriate cells. If we prepared a similar table for this paper, most of the cells would be empty. Examples of QCA research in which a coding protocol is developed methodically and validated systematically are rare. Consequently, the qualities that define QCA, distinguish it from qualitative forms of content analysis, and endow it with its appeal as a research technique fade away. Reeves (1995) chastised educational technologists for failing to establish the validity of their measurement procedures, and he regarded this continual failure as evidence of a "research malaise of epic proportions" (¶ 23). We have argued merely that attention to test validity will strengthen the claims of QCA studies and increase their information yield.

Our argument is relevant to the content analyst engaged in a purely descriptive study, but it is directed specifically at those who include inferential processes in the collection, analysis, and interpretation of their data. Some researchers continue to use QCA in the manner in which it was originally conceived. They systematically identify, categorize, and count the objective elements of communication and provide audiences with a summary of these data. The procedure is sound, the analysis leaves little room for counter-interpretation, and the results of descriptive studies are valuable, especially when they concern relatively new educational phenomena such as the use of CMC in teaching and learning.

Others have tried to deepen our understanding of these phenomena by using procedures that expand the traditional conception of QCA. They too begin by identifying and categorizing directly observable behaviors. These behaviors, however, are not of interest in and of themselves. They are taken as signs, evidence, or indicators of an underlying construct. Drawing conclusions about underlying constructs based on frequency counts of the surface content of communication is a complicated analytical process, though it is rarely recognized as such.

Seeking a model to guide this complicated process, we have drawn attention to a fact that is often overlooked: QCA is a form of testing and measurement. This ontological positioning connects content analysts to an assortment of well-defined procedural tools that will help them make inferences and interpretations that are theoretically and empirically defensible. Among these tools are procedures for amassing a set of legitimate behavioral indicators and a set of empirical studies designed to test their usefulness.

Researchers who are contemplating a QCA study may find this discussion disheartening; that has not been our purpose. Most book-length treatments of QCA begin by informing readers that "the first step in observational research is developing a coding scheme" (Bakeman & Gottman, 1997, p. 15), thus propelling investigators headlong into the tangle of issues discussed in this paper. Gall, Borg, and Gall's (1996) suggestion, located in the context of a broader discussion of research design issues, is immediately more appropriate: "Consider employing a coding system that has been used in previous research" (p. 359). Jonassen and Kwon (2001; 2002) took this approach in their studies of problem solving in CMC. Rather than postponing their study while they undertook an extended process of instrument development and validation, the authors proceeded directly with their investigation after selecting an instrument that had been developed by Poole and Holmes (1995).

Unfortunately, few researchers appear to be interested in conducting their studies with existing instruments. Those who do accomplish several things: They contribute to the accumulating evidence for the validity of an existing procedure, compare their results with a growing pool of normative data, and leapfrog over the instrument construction process.

In our research program, we dedicated two years and a considerable proportion of our research funds to the development of three QCA protocols (Community of Inquiry, 2002). Only then were we able to begin using the protocols to investigate the phenomena that had originally captured our attention. In our case, the trade-off was justified because instrument development was a central focus of our proposal, and we and other researchers continue to use the instruments in current studies. This is an exceptional case.

The purpose of this discussion has been to prompt some reflection about what is required before one can make inferences from frequency counts of communicative behavior. The discussion is incomplete. We have talked about correlational analyses, but have not mentioned factor analysis (or path analysis, structural equation modeling, or hierarchical linear modeling). We have touched on protocol analysis and chronometric analysis but not computational or mathematical modeling. Of Crocker and Algina's (1986) 10 steps of test construction, we have discussed only 6. And we have drawn minimally from Messick's (1989) 102-page chapter. There is a rich body of literature that researchers can refer to when developing a protocol and making inferences from coding data. Some preliminary considerations are outlined in this paper, but much work remains.

Liam Rourke [lrourke@ualberta.ca] is a Ph.D. candidate in the Department of Educational Psychology at the University of Alberta, Edmonton, AB.

Terry Anderson [terrya@athabascau.ca] is Professor and Canada Research Chair in Distance Education at Athabasca University in Alberta.

REFERENCES

Anderson, T., Rourke, L., Garrison, D.R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing environment. Journal of Asynchronous Learning Networks, 5(2). Retrieved March 6, 2002, from http://www.aln.org/alnweb/journal/jaln-vol5issue2v2.htm

Bakeman, R., & Gottman, J.M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). New York: Cambridge University Press.

Bereiter, C., & Scardemalia, M. (1987). The psychology of written composition. Hillsdale, NJ: Lawrence Erlbaum Associates.

Berelson, B. (1952). Content analysis in communication research. Glencoe, IL: Free Press.

Bullen, M. (1998). Participation and critical thinking in online university distance education. Journal of Distance Education, 13(2), 1-32.

Chou, C. (2001, November). A model of learner-centered computer-mediated interaction for collaborative distance learning. Paper presented at the annual meeting of the Association for Educational Communications and Technology, Atlanta, GA.

Community of Inquiry. (2002). Critical thinking in a text-based environment: Computer conferencing in higher education. Retrieved March 6, 2002, from http://www.atl.ualberta.ca/cmc

Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Toronto: Harcourt Brace Jovanovich College Publishers.

Cronbach, L. (1971). Test validation. In R.L. Thorndike (Ed.), Educational measurement (2nd ed.). Washington, DC: American Council on Education.

Cronbach, L. (1990). Essentials of psychological testing (5th ed.). New York: Harper & Row.

Curtis, D., & Lawson, M. (2001). Exploring collaborative learning online. Journal of Asynchronous Learning Networks, 5(1). Retrieved March 6, 2002, from http://www.aln.org/alnweb/journal/Vol5_issue1/Curtis/curtis.htm

Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458-467.

Embretson, S. (1983). Construct validity: Construct representation versus nomothetic span. Psychological Bulletin, 93, 179-197.

Ericsson, K., & Simon, H. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Fahy, P. (2001). Addressing some common problems in transcript analysis. International Review of Research in Open and Distance Learning, 1(2). Retrieved March 20, 2002, from http://www.irrodl.org/content/v1.2/research.html

Fahy, P. (2002a). Epistolary and expository interaction patterns in a computer conference transcript. Retrieved March 1, 2002, from http://cde.athabascau.ca/softeval/reportjde.pdf

Fahy, P. (2002b). Evaluating critical thinking in a computer conference transcript: A comparison of two models. Unpublished manuscript.

Fahy, P. (in press). Use of linguistic qualifiers and intensifiers in a computer conference. American Journal of Distance Education.

Flanagan, J. (1954). The critical incident technique. Psychological Bulletin, 51(4), 327-359.

Gall, M., Borg, W., & Gall, J. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.

Garrison, D.R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1).

Gunawardena, C.N., Lowe, C.A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.

Gunawardena, C.N., & Zittle, F. (1998). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. The American Journal of Distance Education, 11(3), 8-25.

Hara, N., Bonk, C., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115-152.

Henri, F. (1991). Computer conferencing and content analysis. In Collaborative learning through computer conferencing (pp. 117-136). Berlin: Springer-Verlag.

Johnson, D., & Johnson, R. (1979). Conflict in the classroom: Controversy and learning. Review of Educational Research, 49, 51-70.

Johnson, D., & Johnson, R. (1986). Computer-assisted cooperative learning. Educational Technology, 26(1), 12-18.

Johnson, D., & Johnson, R. (1989). Cooperation and competition: Theory and research. Edina, MN: Interaction.

Johnson, D., & Johnson, R. (1992a). Creative controversy: Intellectual challenge in the classroom. Edina, MN: Interaction.

Johnson, D., & Johnson, R. (1992b). Positive interdependence: Key to effective cooperation. In R. Hertz-Lazarowitz & N. Miller (Eds.), Interaction in cooperative groups: The theoretical anatomy of group learning (pp. 174-199). Cambridge, England: Cambridge University Press.

Johnson, D.W., & Johnson, F. (1994a). Joining together: Group theory and group skills (5th ed.). Englewood Cliffs, NJ: Prentice Hall.

Johnson, D., & Johnson, R. (1994b). Leading the cooperative school (2nd ed.). Edina, MN: Interaction.

Johnson, D., & Johnson, R. (1996). Cooperation and the use of technology. In D. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 1017-1044). New York: Simon and Schuster Macmillan.

Johnson, D.W., Johnson, R., & Maruyama, G. (1983). Interdependence and interpersonal attraction among heterogeneous and homogeneous individuals: A theoretical formulation and a meta-analysis of the research. Review of Educational Research, 53(5), 5-54.

Johnson, D., Johnson, R., & Stanne, M. (2000). Cooperative learning methods: A meta-analysis. Retrieved March 6, 2003, from http://www.co-operation.org/pages/cl-methods.html

Jonassen, D., & Kwon, H. (2001). Communication patterns in computer mediated versus face-to-face group problem solving. Educational Technology Research and Development, 49(1), 35-51.

Jonassen, D., & Kwon, H. (2002). The effects of argumentation scaffolds on argumentation and problem solving. Educational Technology Research and Development, 50(3), 5-21.

Kamin, C., O'Sullivan, P., Younger, M., & Deterding, R. (2001). Measuring critical thinking in problem-based learning discourse. Teaching and Learning in Medicine, 13(1), 27-35.

Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. Scranton, PA: Chandler.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage.

Lord, F., & Novick, M. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.

Mason, R. (1991). Analyzing computer conference interactions. Computers in Adult Education and Training, 2(3), 161-173.

McLean, S., & Morrison, D. (2000). Learners' sociodemographic characteristics and participation in computer conferencing. Journal of Distance Education, 15(2). Retrieved March 9, 2002, from http://cade.athabascau.ca/vol15.2/mclean.html

Messick, S. (1989). Validity. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: Macmillan.

Norris, S., & Ennis, R. (1989). Evaluating critical thinking. CA: Critical Thinking Press and Software.

Paisley, W. (1969). Studying style as deviation from encoding norms. In G. Gerbner, O. Holsti, K. Krippendorff, W. Paisley, & P. Stone (Eds.), The analysis of communication contents: Developments in scientific theories and computer techniques (pp. 44-58). New York: Wiley.

Parson, M. (1996). Look who's talking: A pilot study of the use of discussion lists by journalism educators and students. (ERIC Document Reproduction Service No. ED 400 562)

Poole, M., & Holmes, M. (1995). The longitudinal analysis of interaction. In B. Montgomery & S. Duck (Eds.), Studying interpersonal interaction (pp. 286-302). New York: Guilford.

Potter, J., & Levine-Donnerstein, D. (1999). Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27(3), 258-284.

Reeves, T. (1995). Questioning the questions of instructional technology research. [Online]. Available: http://www.hbg.psu.edu/bsed/intro/docs/dean/

Richardson, J., & Swan, K. (2003). Examining social presence in online courses in relation to students' perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1). Retrieved 2003, from http://www.aln.org/publications/jaln/v7n1/v7n1_richardson.asp

Riffe, D., Lacy, S., & Fico, F. (1998). Analyzing media messages: Using quantitative content analysis in research. Mahwah, NJ: Lawrence Erlbaum.

Rogers, W.T. (1999). Error of measurement and validity. Edmonton, AB: Available from author.

Rourke, L., & Anderson, T. (2002). Social communication in computer conferencing. Journal of Interactive Learning Research, 13(3), 259-275.

Rourke, L., Anderson, T., Garrison, D.R., & Archer, W. (1999). Assessing social presence in asynchronous, text-based computer conferencing. Journal of Distance Education, 14(3), 51-70.

Rourke, L., Anderson, T., Garrison, D.R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12(1), 8-22.

Sharan, Y., & Sharan, S. (1992). Group investigation: Expanding cooperative learning. New York: Teachers College Press.

Sheppard, L. (1993). Evaluating test validity. Review of Research in Education, 19, 405-450.

Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London: Wiley.

Slavin, R. (1991). Student team learning: A practical guide to cooperative learning (3rd ed.). Washington, DC: National Education Association.

Snow, R., Federico, P., & Montague, W. (Eds.). (1980). Aptitude, learning, and instruction (Vols. 1 & 2). Hillsdale, NJ: Lawrence Erlbaum.

Stevens, S. (1946). On the theory of scales of measurement. Science, 103, 677-680.

Sutton, L. (2001). The principle of vicarious interaction in computer mediated communication. International Journal of Educational Telecommunications, 7(3), 223-242.

Weiss, R., & Morrison, G. (1998). Evaluation of a graduate seminar conducted by listserve. (ERIC Document Reproduction Service No. ED 423 868)
