
Journal of Communication, September 2004

Evaluating the Effectiveness of Distance
Learning: A Comparison Using Meta-Analysis

By Mike Allen, Edward Mabry, Michelle Mattrey, John Bourhis, Scott Titsworth,
and Nancy Burrell

This article uses meta-analysis to summarize the quantitative literature comparing
the performance of students in distance education versus traditional classes. The
average effect (average r = .048, k = 39, N = 71,731) demonstrates that distance
education course students slightly outperformed traditional students on exams and
course grades. The average effect was heterogeneous, and the examination of sev-
eral moderating features (presence or absence of simultaneous interaction, type of
channel used in distance education, and course substance) failed to produce a
homogeneous solution. The results demonstrate, however, no clear decline in edu-
cational effectiveness when using distance education technology.

The profound impact that technological innovations are having in all facets of
education focuses attention on assessing relationships between changing modes
and practices of instruction and their outcomes. The emergence of new technolo-
gies does not change the goals of education. The new technologies change the
process of communication within an educational setting to accomplish those goals.
Research by communication scholars is needed to examine how changes in the
means of communicating content impact the goals of those engaged in a commu-
nication activity.
Understanding potential impacts of technologically driven differences between
traditional classrooms and distance learning contexts is clearly appropriate (Althaus,
1997; Boettcher, 1996; Greene & Meek, 1998; McHenry & Bozik, 1995; Verduin &

Mike Allen (PhD, Michigan State University) is a professor and chair of the Department of Communi-
cation at the University of Wisconsin, Milwaukee, where Edward Mabry (PhD, Bowling Green State
University) is an associate professor and Nancy Burrell (PhD, Michigan State University) is a professor.
Michelle Mattrey (MA, Cornell University) is a doctoral student in the Department of Communication,
Pennsylvania State University. John Bourhis (PhD, University of Minnesota) is a professor in the De-
partment of Communication and director of online education at Southwest Missouri State University.
Scott Titsworth (PhD, University of Nebraska) is an assistant professor at Ohio University. Correspon-
dence should be addressed to the first author at Dept. of Communication, University of Wisconsin,
Milwaukee, WI 53201; mikealle@uwm.edu.

Copyright © 2004 International Communication Association


Clark, 1991; Whittington, 1987). Merisotis and Phipps (1999) have concluded that
current research, although rapidly accumulating, generally lacks systematic com-
parisons of factors that can differentiate traditional classroom and distance learn-
ing outcomes. This sentiment was also present in Kuehn’s (1994) call for more
research on the consequences of using mediated communication technologies in
instructional settings and in Christ and Potter’s (1998) broader agenda connecting
media education to advances in media literacy. This review of the literature,
using meta-analysis, examines whether the performance of students differs based
on the method of instruction (traditional classrooms and distance learning).
Distance learning represents a change in the fundamental orientation of the
learning environment. Traditional, physically copresent classrooms and pedagogical
practices involve face-to-face (FtF) instructor-learner relationships. These physically
and socially immediate instructional contexts are transformed in distance
learning through, typically, the technological intermediation of communication
between teacher and students (Berge & Collins, 1995; Hiltz, 1994; Kuehn, 1994).
Distance education is not a one-dimensional construct. It refers to a wide range of
pedagogical choices and instructional tools (Berge, 2001; Greene & Meek, 1998;
Harasim, 1990). We define distance learning as a course in which the expectation
is that the student and instructor will not be physically copresent in the same
location.
The diversity of possible distance learning formats is large even with respect to
the constraining expectation that instructor and students will never be physically
copresent (Hiltz, 1994; Benbunan-Fich & Hiltz, 1999). Distance learning can be
conducted using time-independent (or, asynchronous) communication formats
like mail correspondence, electronic mail (e-mail), and taped or digitally com-
pressed video recordings; it can also be delivered using time-dependent (or syn-
chronous) communication formats like radio, television, telephone, and interac-
tive video or television. When describing a distance education course, it is often
necessary to provide a detailed explanation of its processes and the type(s) of
technology involved with its delivery.
Distance learning should be contrasted with computer-assisted instruction (CAI).
CAI represents the process of using computer-based software programs and simu-
lations to improve the educational process. Many forms of CAI can be used in
addition to traditional methods of instruction as well as a replacement for tradi-
tional instruction methods. Two meta-analyses (Clark, 1985; Kulik, Kulik, & Cohen,
1980) have demonstrated that CAI is an effective addition to traditional educa-
tional methods. The adoption of an online course shifts the class from a format
that integrates technology to one that uses the computer as the sole means of
instruction. Any computer application can be used either in conjunction with or
instead of the traditional classroom format (Hiltz, 1986, 1994).
The comparison of distance learning with other formats for education involves
a number of potential outcomes. Various educational issues like persistence (does
a student drop out or complete the degree), satisfaction, and cost effectiveness
can be considered, as well as whether the student masters the content of the course.
A critical question is whether the offering of a distance education format significantly
changes some outcome, when compared to a traditional course format.


The efficacy of distance education, however, is still controversial among educators.
Hiltz (1986) demonstrated that computer-mediated communication (CMC)
technologies supporting online courses (a common form of distance learning)
were perceived as an effective mode of instruction. However, Mottet’s (2000)
assessment of interactive television instruction indicated teachers’ preexisting posi-
tive attitudes and experiences produced positive impressions of distance teach-
ing, but teachers still perceived distance instruction negatively (even among gen-
erally approving teachers) because of diminished contact with students and a
loss of control over the classroom environment caused by technological intru-
siveness.
Distance learning raises important communication issues as well as pedagogi-
cal concerns. Since McLuhan’s (1964) observation that the “medium is the mes-
sage” (p. 7), the focus on the impact of the particulars of technology (often asso-
ciated with communication channels) has generated much attention. The perceptual
demands of a medium shape how communicators behaviorally integrate message
activities and how those message activities are perceived (Burgoon et al.,
2002; Lievrouw & Finn, 1990). Comparing how the particular channel of commu-
nication might impact message processing or outcomes remains a central consid-
eration in research (Allen, D’Alessio, & Brezgel, 1995; Allen, Emmers, Gebhardt, &
Giery, 1995; D’Alessio & Allen, 2000). The choice to provide a message using a
particular channel carries a number of implications about the reaction and pro-
cessing by the message receiver.
In the case of an educational setting, the question becomes whether those
differences in medium (or channel) relate to instrumental goals like learning.
Walker and Hackman (1992) point out the need for and importance of immediacy
in televised learning (however, a recent meta-analysis by Witt, Wheeless, & Allen,
in press, disputes the value of immediacy for cognitive learning). The problem is
that the particular behaviors that create a sense of immediacy between student
and teacher differ based on the particular channel of communication. Various
possibilities that exist in the traditional, copresent classroom (e.g., vocal expres-
sion, nonverbal gestures, and proxemics) do not exist when the course is con-
ducted in an online format. Communication over the computer through a Web
server will require different kinds of communication techniques and skills on the
part of both the student and the teacher. The method of communication will
generate different messages as well as different procedures for evaluation.
Technological communication changes the classroom by creating a different
learning environment that may interact with the learning style
of the student. Some students may learn better in a social mode that requires the
presence of other students whereas some students learn better in a solitary setting.
The implications of this preference for a method of communication and setting
on outcomes associated with the use of a distance learning environment are cause
for concern (Gunawardena & Zittle, 1997). The use of any means of communica-
tion in an educational setting may have a differential impact that favors or disad-
vantages different persons.
Mabry (2002) provided a basis for comparison when he considered how the
introduction of technology into group communication redefines the task-process
relationship and influences group outcome when groups go from a FtF method of
communication to a “virtual” group by using only CMC. A critical question for
scholars and teachers is whether the change in the format of communication will
impact the level of learning. The decision to teach (really communicate) in a
distance learning environment requires a change in expectations about how com-
munication between student and teacher will occur. The goal of instruction re-
mains the same; the process of communication to achieve that goal changes.
The advent of the newspaper, radio, television, and now the Web has changed
various aspects of the electoral process. However, the fundamental goal of the
politician remains the same—to obtain more votes than an opponent. Similarly,
the goal of the instructor remains the same—to maximize the objectives sought
through the educational process. The use of technology offers an opportunity to
incorporate additional methods of improving instruction (see Bourhis & Allen,
1998, for a meta-analysis about how the use of videotape feedback in public
speaking improves instruction). Understanding the process of adaptation to new
forms of communication within an existing structure (Hunter & Allen, 1992) pro-
vides clues to the sources of resistance as well as means of improving the adop-
tion of technological communication innovations.

Distance Learning and Instructional Effectiveness

There is evidence that technologies involved in distance learning have active
effects on the learning process. Hackman and Walker (1990) noted that communi-
cation technology influences learning outcomes. Students participating in interac-
tive television classes believed the technology had been effective when it worked
well. Besides perceptions of the technologies themselves, various human factors
(e.g., personality, attitudes, skill) emerge to influence user reactions to communi-
cation technologies used in distance education. Hiltz (1986) found that experience
and extensive orientations to class technologies among students participating in a
"virtual classroom" produced strong correlations with learning outcomes. Althaus's
(1997) study of students involved in e-mail discussion groups devised to support
traditional classroom instruction also showed a connection between experience
and participation. Students with more computer experience were more likely to
use the online discussion groups and perceive them as beneficial.
A recent meta-analysis compared distance learning and traditional classroom
environments on the basis of the level of satisfaction students experienced. Allen
et al. (2002) compared the satisfaction of students in a distance education course
format with that of students in a traditional copresent format for instruction. The
findings indicate that distance education courses compared to traditional courses
demonstrate a 22% drop in the level of satisfaction (r = -.107, k = 19, N = 4,492). The
result indicates that on average the student in a distance education course demon-
strates a lower level of satisfaction with the process compared to a person in a
traditional, copresent class. The question of satisfaction is important because of the
potential implications for issues like completion or dropout rates. However, satis-
faction may be less important than issues of performance, the question of whether
the ability to master content and skills is diminished when moving to a distance
education format.
Clearly, both user and technological efficacy are involved in the success of new
communication technologies applied in learning contexts. The critical question is
whether distance learning produces results equal to or better than traditional learning
environments. Benbunan-Fich and Hiltz (1999) summarized research on the use
of various types of CMC systems in a range of instructional situations. They con-
cluded that CMC systems generally produced either moderate positive effects, or
no significant differences, on learning outcomes compared to traditional modes of
instruction. Their review did not differentiate between distance learning and the
use of computers to assist FtF classes.
Effectiveness, for the purposes of this article, refers to demonstrations
of performance related to scores on tests, grades achieved, or other
similar evaluations of student performance. The basis for assessment considers
how students in a distance learning environment compared to students involved
in a traditional format in terms of mastering course content using either cross-
sectional posttest-only comparisons or longitudinal change-score comparisons.
Distance learning may, in some cases, represent one option among many that
are potentially available. The student, university, or educational system may be
faced with making a choice among particular educational formats. The question
is which format should be preferred as a means for student learning. If distance
education methods produce a
higher level of content mastery, then that method of instruction may become a
preferred method of instruction. If distance education methods are equivalent,
then the choice of technique may be based on other factors related to practical
issues, such as cost and availability. One primary goal of education is to demon-
strate that students have mastered some content, as measured by assignments,
exams, or course grades.

Assessing Distance Learning Using Meta-Analysis

More than a decade ago, McIsaac (1990) recognized the need for the application
of meta-analysis to the distance education literature. Since 1990, there have been
few applications of meta-analysis to summarize the efforts in this area—at least as
listed in such indexes as the Educational Resources Information Center
(ERIC). In conclusion, McIsaac wrote:

Meta-analysis offers a way to begin to synthesize research studies in this grow-
ing area. It is important to consider other ways of encouraging not only origi-
nal research reports of studies so that the results which are generalizable can
be made available around the world. (p. 15)

Nettles, Dziuban, Cioffi, Moskal, and Moskal (1999) pointed out that myriad con-
tradictory findings continue to support the proposition that "a complete meta-
analysis is required" (p. 3). Meta-analysis can serve a variety of purposes (Preiss &
Allen, 1995), in this case to compare the impact of a choice of method of commu-
nicating instructional content.
Besides the sensitizing question of traditional versus distance learning effec-
tiveness, there is clearly more to this issue than the effects of copresence on
instructional outcomes. Specifically with respect to the delivery of distance educa-
tion, the nature of the communication context constructed to support a distance
learning environment is critically important for understanding comparative out-
comes either between traditional and distance learning instruction or among vari-
ous distance learning approaches.
The types and properties of communication channels elected for distance in-
struction comprise the most accessible variables. Communication channels incor-
porate one or more of three modes of expression: audio, video, or written text.
Communication channels create connectivity between sender and receiver, or
teacher and learner. However, the connective properties of communication chan-
nels may be further defined by their interactivity (Burgoon et al., 2002; Rafaeli,
1988).
Interactivity denotes the capability of the channel to function similarly between
sender and receiver (Rafaeli, 1988). Thus, for example, receiving a televised lec-
ture broadcast over a public television channel is roughly equivalent to hearing a
radio broadcast of the same material delivered over public radio. Neither channel
permits the receiver to employ it as a means of communicatively reacting to the
sender. This constrained level of interactivity can be contrasted with the interactivity
of asynchronous e-mail, synchronous “chat” connections for exchanging textual
messages in real-time interaction, or telephony. These and similar channels pro-
vide connectivity that allows a two-way flow of messages between sender and
receiver.
Accounting for the communication context created by channel type and chan-
nel interactivity choices is indispensable in understanding how distance learning
is being accomplished. These factors can influence learning outcomes when com-
pared to each other or with traditional instructional contexts, and they may well
interact with other factors like the content (subject matter) of the course(s) being
compared.

Method

Literature Search
The search of the literature involved using a combination of manual and elec-
tronic reviews of available materials. The searches used the keywords “distance
learning” and “distance education.” We examined the complete set of journals
relating to distance education (American Journal of Distance Education, Distance
Education) as well as relevant indexes (Communication Index, ERIC, Psychlit)
and searched for and examined relevant Web-based materials. Various reviews of
the literature (Barry & Runyon, 1995; Clark, 1995; Cookson, 1989; Holmberg, 1977;
1981; Institute for Higher Education, 1999; Kuehn, 1994; Maushak, 1997; Maushak,
Simonson, & Wright, 1997; Mount & Walters, 1985; Perraton, 1992; Schlosser &
Anderson, 1994; Stickell, 1963; Wells, 1992; Whittington, 1987; Woodley & Kirkwood,
1995) also were examined for manuscripts that would potentially contribute use-
ful data for this summary. We also conducted a manual search of the archives for
the American Society for Distance Learning, located in State College, Pennsylva-
nia, for additional records and information. Once we had obtained the materials,
we examined reference sections for additional references. A complete bibliogra-
phy of the more than 500 manuscripts that are part of this long-term project, as
well as complete coding information for moderator variables and statistical issues,
is available from the first author.
Manuscripts were excluded if the focus was on technology in a traditional
classroom. Studies focusing on computer-assisted instruction (CAI) did not qualify
if the CAI served as a supplement to traditional teaching or merely as the choice
of a technology for delivering materials. For example, a CD-ROM used in a
foreign language course as a study aid would not be included unless the CD-ROM
was the actual means of education without traditional classroom access. In some
cases, the comparison between distance education and traditional education in-
volved both classrooms using a common technology (like a CD-ROM or other
materials). The key was that the distance learning experience had been one in
which the expectation was that the instructor and student (or other students)
would not be physically copresent for the instruction. To be included in this
report a manuscript had to meet the following criteria: (a) involve a comparison
between a distance learning course and a traditional format course (use of a
control condition); (b) involve at least one assessment of student performance in
the course related to the mastery of some content or skill taught in the course; and (c)
report sufficient statistical information to permit the calculation of an effect size.
A number of investigations were not included for a variety of reasons. Studies
employing nonquantitative assessments of distance learning could not be incor-
porated (LaRose & Whitten, 2000). We did not include studies that did not employ
a control group (Barrington, 1972; Fjortoft, 1995, 1996; Foell & Fritz, 1995; Guzley,
Avanzino, & Bor, 1999; Jegede, Naidu, Taylor, Carr-Spencer, & Atkinson, 1995;
Kember, Murphy, Siaw, & Yuen, 1991) or that used a comparison of distance
learning to a FtF class involving CAI (Althaus, 1997). Studies that did not provide
the necessary statistical information to compute an effect size were also excluded
(Boucher & Barron, 1986; Curran & Murphy, 1992). Some investigations were
comparisons of various issues for delivering distance education, but the compari-
son was not to a traditional classroom (Edwards & Rennie, 1991).

Coding for Potential Moderating Variables


Potential content or design issues may generate different clusters of outcomes in
the observed effects across the investigations. Those influences can be examined
statistically between groups of studies; such influences are labeled moderator
variables.
Synchronous or asynchronous learning. This potential moderator considers
whether the distance education technology incorporates a direct method of feed-
back for the student to the instructor during a live presentation. All educational
systems are interactive (a correspondence course permits feedback and interac-
tion, albeit delayed). An important distinction in communication, dating back to
the original Berlo (1960) model, is the nature of the feedback present in the
communication system. The use of various technologies changes the feedback
between instructor and student and may impact the level of learning.
To be designated as synchronous in this coding, a system had to permit simulta-
neous comments, questions, or feedback. Usually, the design would involve two-way
audiovisual links between or among two or more environments. In some cases
the teacher was broadcast live from a source to a destination where the students
could interact via some type of audio (telephone) system. The key is that a syn-
chronous system, in this view, involves a “live” instructor with whom the students
may directly communicate.
Conversely, an asynchronous system is one in which the student cannot com-
municate with the instructor in real time. For example, online courses conducted
via computer mediation would typically be considered asynchronous, even though
there may be delayed interaction between the student and instructor.
Channel of delivery. This potential moderator focuses on the method of in-
struction in terms of the communication channel used. The choices of instruc-
tional method could involve one of the following: (a) video, (b) audio, or (c)
written text. The video channel includes live televised as well as videotaped in-
struction. The video may broadcast via a television channel accessed at home or
one that requires the student to be at a designated space (classroom) at a particu-
lar time; the video information could be sent via mail as videotape; or the video
could be compressed or obtained through some vendor. The use of video methods
assumes audio information is included, and that was true for all examples in
this summary.
Audio methods of instruction involve the use of sound, either through radio or
cassette tapes, to provide instruction. The use of audio was very rare (only two
studies) but it does represent a possibility that should be considered as an alternative.
Written methods involve the use of text without any accompanying video or
audio information. Many online computer applications involve only the use of
text as well as some form of correspondence course where the materials and
assignments are sent via the mail. In this application, written refers to the type of
presentation and not the means of transmission of that material.
Instructional methods were coded in terms of the highest level of sensory infor-
mation available; that is, video was considered higher in sensory input than
audio, and audio higher in sensory content than written information. A course using
video was designated as video regardless of whether audio or written information
was used in addition. No course, for example, relied exclusively on video; all courses
used written information in addition to the video information provided. The result
is that the coding reflects not an exclusive system but a hierarchical
system in terms of sensory input.
Course content. One potential moderator requires consideration of the content
of the course. Courses in natural sciences might produce different results than
courses in the humanities. Particular types of content require different pedagogi-
cal approaches, and evaluations of instructional method should reflect this. Dis-
tance education may be effective for some course content and ineffective for
other material. The available studies in this investigation were classified on the
following basis: (a) natural sciences and mathematics (this included engineering
courses), (b) military training, (c) foreign language instruction, (d) classes in the
social sciences (sociology, communication, history), (e) classes in education, and
(f) classes combined across various content areas. Ideally, the coding might have
been better had it been based on some particular assessment of the use of tech-
nology related to particular types of assignments or course content (e.g., knowl-
edge, skills, tests, and papers). However, insufficient detail was available for this
to be considered as a possible means of evaluating variability in study findings.

Statistical Analysis
This particular meta-analysis used the variance-centered form of meta-analysis
developed by Hunter and Schmidt (1990). The variance-centered form of meta-
analysis considers not simply the average effect but also the issues of variability in
findings. The process takes the particular data and corrects for various sources of
bias and artifacts (attenuation in measurement, restriction in range, regression to
the mean). Studies had statistical information converted to a common metric (in
this case the correlation coefficient). The formulas for conversion are found in
Hunter and Schmidt (1990) and information for the process for each investigation
can be obtained from the first author.
We averaged the individual estimates for each investigation and weighted for
sample size. The average effect assumes that the individual contributions from
each study represent random departures from the average effect that are the result
of sampling error. To assess the variability, we conducted a homogeneity test
using a chi-square statistic. A significant chi-square indicates that the level of
variability among the observed effects is greater than one would expect due to
sampling error. A nonsignificant chi-square indicates that the observed effect rep-
resents an average across a sample of correlations that differ from each
other only as a function of sampling error.
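The procedure just described can be sketched in a few lines. This is a minimal, bare-bones illustration of the Hunter and Schmidt (1990) approach, assuming the common frequency-weighted formulas; the study values are invented for the example and are not the correlations analyzed in this article.

```python
# Hypothetical study-level data as (sample size N_i, correlation r_i) pairs.
# These numbers are invented for illustration only.
studies = [(120, 0.10), (80, -0.05), (200, 0.08), (60, 0.15), (150, 0.02)]

def bare_bones_meta(studies):
    """Sample-size-weighted mean r, observed and sampling-error variance,
    and a chi-square homogeneity statistic (df = k - 1)."""
    total_n = sum(n for n, _ in studies)
    k = len(studies)
    r_bar = sum(n * r for n, r in studies) / total_n
    # Frequency-weighted variance of the observed correlations.
    var_obs = sum(n * (r - r_bar) ** 2 for n, r in studies) / total_n
    # Variance expected from sampling error alone for studies of this size.
    var_err = (1 - r_bar ** 2) ** 2 * k / total_n
    # A significant chi-square indicates more variability than sampling
    # error would produce, i.e., a probable moderator variable.
    chi_sq = total_n * var_obs / (1 - r_bar ** 2) ** 2
    return r_bar, var_obs, var_err, chi_sq, k - 1

r_bar, var_obs, var_err, chi_sq, df = bare_bones_meta(studies)
print(f"weighted r = {r_bar:.3f}, chi-square({df}) = {chi_sq:.2f}")
```

Comparing chi_sq against the critical chi-square value for k - 1 degrees of freedom reproduces the homogeneity decision described in the text.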
Caution should be used in interpreting an average correlation when heteroge-
neity exists. The heterogeneity indicates that, rather than one distribution of ef-
fects, there probably are multiple (at least two) distributions of effects that need to
be identified. The identification process is the testing of potential moderator vari-
ables that can be used to classify the effects. A successful moderator analysis
would generate distributions that are each homogeneous and significantly differ-
ent from other distributions (Hall & Rosenthal, 1991). This condition is similar to
the ANOVA assumptions of homogeneity within a cell but mean differences be-
tween the cells for a significant effect.
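The ANOVA-like logic of the moderator analysis can be sketched the same way: partition the studies by a coded moderator and compute a weighted mean and homogeneity statistic within each subgroup. The moderator labels and values below are hypothetical, not drawn from the studies summarized in this article.

```python
# Hypothetical studies tagged with a binary moderator as
# (moderator level, sample size, correlation) triples. Invented values.
studies = [
    ("sync", 100, 0.10), ("sync", 150, 0.04), ("sync", 90, 0.08),
    ("async", 120, -0.02), ("async", 80, 0.03), ("async", 60, 0.01),
]

def subgroup_stats(studies, level):
    """Weighted mean r and chi-square homogeneity statistic for one
    moderator level (df = number of studies in the level - 1)."""
    subset = [(n, r) for g, n, r in studies if g == level]
    total_n = sum(n for n, _ in subset)
    r_bar = sum(n * r for n, r in subset) / total_n
    var_obs = sum(n * (r - r_bar) ** 2 for n, r in subset) / total_n
    chi_sq = total_n * var_obs / (1 - r_bar ** 2) ** 2
    return r_bar, chi_sq, len(subset) - 1

r_sync, chi_sync, df_sync = subgroup_stats(studies, "sync")
r_async, chi_async, df_async = subgroup_stats(studies, "async")
# A successful moderator yields homogeneous (nonsignificant chi-square)
# subgroups whose mean correlations differ from one another.
print(f"sync:  r = {r_sync:.3f}, chi-square({df_sync}) = {chi_sync:.2f}")
print(f"async: r = {r_async:.3f}, chi-square({df_async}) = {chi_async:.2f}")
```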

Results

Overall
The average effect demonstrates a small improvement in performance for the
courses using distance education technology (average r = .048, k = 39, N = 71,731,
variance = .007). The sample of correlations demonstrates a group of effects that
are heterogeneous, χ2 = 169.10 (38, N = 71,731), p < .05. The heterogeneity indi-
cates that the interpretation of the average effect should be made cautiously; the
significant chi-square indicates the probable existence of some moderator variable(s)
that should be examined. (A complete set of the coding and statistical information
for each study can be obtained from the first author).
A problem in the analysis may occur because one study (Chale, 1983) ac-
counted for 63,516 of the participants in a national study of the Tanzanian teacher-
training program over a 5-year period. Deleting this study and reanalyzing the
data to determine its impact is warranted. Removal of the study has only a slight
impact on the results (average r = .068, k = 38, N = 8,215, variance = .022),
and the results remain heterogeneous, χ2 = 171.33 (37, N = 8,215), p < .05. Subse-
quent analyses will not include that particular investigation because of the impact
it has on the moderator analysis. The incorporation of such a large study swamps
the contributions of the other individual studies.
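The sensitivity check performed here, recomputing the weighted average with the dominating study removed, can be illustrated with hypothetical numbers (the values below are invented and are not the effects reported above):

```python
def weighted_r(studies):
    """Sample-size-weighted mean correlation over (N, r) pairs."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Invented study set in which one very large study dominates the pooled
# estimate, analogous to the single national study discussed in the text.
studies = [(50, 0.12), (60, 0.09), (5000, 0.01), (70, 0.15)]

overall = weighted_r(studies)
# Drop the largest study and recompute to gauge how much it swamps
# the contributions of the remaining studies.
largest = max(studies, key=lambda s: s[0])
without_largest = weighted_r([s for s in studies if s != largest])
print(f"with largest: {overall:.3f}  without: {without_largest:.3f}")
```

Because the weights are the sample sizes, the 5,000-participant study dominates the pooled mean; removing it shifts the estimate toward the smaller studies, mirroring the shift from r = .048 to r = .068 reported above.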

Moderator Analysis
Use of synchronous systems. The coding reflects a simple binary distinction:
instruction was either synchronous or asynchronous. Twenty-seven of the investigations
used an educational design that was synchronous, and those approaches demon-
strated a small positive correlation, indicating distance education groups obtained
improved outcomes (average r = .066, k = 27, N = 6,847, variance = .023). The
distribution for the synchronous systems was heterogeneous, χ2 = 156.20 (26, N =
6,847), p < .05.
Ten studies analyzed asynchronous educational methods, demonstrating a simi-
lar pattern of a slight edge for the distance education courses (average r = .074,
k = 10, N = 1,319, variance = .017). The sample of correlations demonstrates a
group of effects that are heterogeneous, χ2 = 22.96 (9, N = 1,319), p < .05.
What this outcome suggests is that the presence of simultaneous interaction did
not affect the performance scores of the students. Although interaction may
influence other outcomes like retention or satisfaction, there is no demonstrable
impact on outcomes related to the level of learning as assessed by exam scores or
grades earned in a course.
Channel of delivery. The video method of distance education demonstrates a slightly higher level of performance when compared to traditional formats (average r = .077, k = 29, N = 6,547, variance = .024). A test for heterogeneity demonstrates that the correlations represent a distribution with a moderator probably present, χ² = 160.32 (28, N = 6,547), p < .05.
The audio method of delivery contained only two studies and demonstrated a slightly negative effect (average r = -.028, k = 2, N = 631, variance = .010). Generating a chi-square for the moderator test requires a minimum of three correlations, so no such test was conducted. The small number of studies and small sample size indicate that the effect should be treated with caution.
The use of written methods of instruction provided a small positive effect similar to the video method (average r = .068, k = 6, N = 988, variance = .012). The sample of correlations demonstrates a group of effects that are heterogeneous, χ² = 12.21 (5, N = 988), p < .05.

The analysis by channel used for instruction demonstrates that this variable does not adequately explain the heterogeneity of the observed effects. No real difference existed between the video and written methods. The audio channel of instruction, although discrepant, comprised only two studies and did not contain enough information to warrant any assessment. This indicates a possible difference, but one that requires further investigation.

Course Content
Natural science courses. A total of six investigations provided relevant data for this comparison. The average effect was essentially zero for this cluster (average r = .005, k = 6, N = 833, variance = .048). The sample of effects was considered heterogeneous, χ² = 40.08 (5, N = 833), p < .05.
Military courses. These courses were conducted either as part of the Reserve Officers' Training Corps (ROTC) or as part of normal military continuing education. The average effect for this cluster was negative (average r = -.183, k = 3, N = 210, variance = .019). The sample of effects was considered homogeneous, χ² = 4.01 (2, N = 210), p > .05. Although the effects are homogeneous, the small number of studies indicates that the effect is not stable, and any conclusion should be considered with caution.
Instruction in foreign language. The courses involving instruction in foreign language demonstrated a very large advantage for the distance education group (average r = .218, k = 3, N = 2,238, variance = .001). The sample of effects was considered homogeneous, χ² = 1.08 (2, N = 2,238), p > .05. The number of effects was small, but the sample size was relatively large. One note should be made about the design of these investigations: Distance education was typically used to permit students to interact with native speakers of the language, so the two-way video broadcasts were efforts to coordinate conversation between the students and native speakers. The impact of this particular instructional use was a relatively large increase in student performance.
Social sciences courses. The effect for social sciences classes was small and favored the distance education group (average r = .075, k = 9, N = 680, variance = .046). The sample of effects was considered heterogeneous, χ² = 31.39 (8, N = 680), p < .05. This sample of effects more closely resembled the general or overall analysis with respect to both the size of the effect and the degree of variability.
Education courses. Education courses represented the single largest concentration of investigations in this analysis. The effect was negative, although close to zero (average r = -.021, k = 13, N = 1,828, variance = .008). The distribution of effects demonstrated homogeneity, χ² = 13.75 (12, N = 1,828), p > .05. This set of studies demonstrated that the use of traditional educational methods offered only a small advantage over distance education methods.
Examinations across the curriculum. This set of studies represented those investigations that compared distance learning courses across a variety of content areas. The average effect demonstrated a small advantage for distance learning (average r = .036, k = 3, N = 2,377, variance = .003). The sample of effects was considered heterogeneous, χ² = 7.06 (2, N = 2,377), p < .05. The heterogeneity probably reflects variation in the mix of content areas represented across the individual studies.

Conclusion

The results demonstrate little distinction between traditional and distance learning classrooms on the basis of performance. The average effect, although slightly favoring distance learning, was heterogeneous and should be interpreted cautiously. Examination of several moderators generates solutions that only partly account for the variability observed in the outcomes of the investigations. The most complete or insightful explanations deal with course content, because several of the individual cells demonstrated homogeneity.
Course content as a moderator illustrates that for military-related instruction, the distance learning environment lowered performance (although the sample size was very small). For the natural sciences and education courses, the effect was virtually zero. For foreign language instruction, however, the use of distance technology demonstrated a clear superiority. Foreign language instruction in a distance environment permits the students to interact on a regular basis with native speakers of the language. Because the goal of most foreign language instruction is acquiring competency in performance, interaction with a native speaker would seem to be a very desirable method of generating a laboratory setting that contributes to the development and improvement of those skills. Certain content areas may be taught effectively using distance learning technology, whereas in other content areas employing distance learning diminishes the quality of the outcome. The data suggest that for many curricular areas, the outcome remains unaffected by the choice of technology.
One important finding is that there does not seem to be support for implementing synchronous interactive technologies or classrooms to increase performance: Performance did not differ as a result of the use of synchronous interactive technologies. The cost of developing and implementing synchronous interactive technologies may make distance education prohibitive or limit the ability to implement the effort, except on a regional basis. An online course offered through asynchronous CMC, for example, does not require that the students or instructor share the same time reference. Students' participation in distance learning can be located in any and all time zones and does not have to be coordinated with an instructor. The findings of this meta-analysis indicate no significant reduction in performance as a result of such designs.
Consistent with the foregoing observations, the examination of channel as a source of moderation demonstrated little impact. Given that the choice of instructional method may require different resources and effort, understanding how the channel of communication influences effectiveness is important. We conducted no assessment of channel relationships to particular course content; the small number of total studies made such an analysis impossible. The data may suggest, however, that particular channels of communication are better suited to producing effective instruction. Such an outcome requires empirical investigation.


One limitation of this meta-analysis is the inability to adequately compare various formats of distance learning against each other to assess the impact that a particular format has on learning. Learning outcomes may vary as particular forms of distance learning are compared to traditional copresent classrooms. Distance learning programs vary in structure in about as many ways as there are programs. Course formats are often “hybrids,” combinations of FtF instruction with multiple technologies and channels. Likewise, some distance learning formats require a great deal of discussion between students, whereas others are more self-paced, individualized instructional formats with little expectation of interaction between students. The expectation of exchanging messages with the instructor was not examined (and often not reported in the methods) and therefore may influence the results. This meta-analysis could not provide information suggesting that a particular format was capable of generating comparatively better learning than other formats.
We also made no assessment of the quality of the technology or the training of students with the technology. The quality of any educational experience depends on a variety of factors, including accessibility to the instructor and adequacy of feedback. An instructor with 300 students in a mass lecture may be more inaccessible to the average student than an online instructor with 10 students, depending on how the course and communication are structured. Students unprepared to use technology must master the technology at the same time they are trying to comprehend course content. This clearly reinforces Christ and Potter’s (1998) call for strengthening the academic links between media literacy and technological communication instruction. Nothing in the current studies permitted an assessment of this competency or the support mechanisms available to assist students in handling these issues.
An issue not addressed in the review was student motivation. Distance learning students may have more motivation to achieve than traditional students (Hiltz, 1994). Given the extra effort perceived to be required for distance learning (mastering the technology of the course), the achievement scores of distance learning students may be the product of higher levels of motivation than those of traditional copresent students. This self-selection issue may represent a fundamental threat to the comparisons found in the investigations. If distance learning students can be considered nontraditional (sometimes older) compared to FtF students, differential motivation may function as a form of biased sampling when the participant pools are drawn from groups with different reasons and motivations to participate in the particular instructional format. The issue of the age or level of the student could not be directly addressed; distance learning might be more appropriate for younger or older students. However, the current sample of studies used almost exclusively a college setting with undergraduates. The findings should be viewed with caution when applied to other settings.
Another review limitation was that performance was limited to tests and grades. For some types of learning (public speaking, foreign language), tests may be inappropriate or misleading. Tests may involve measures of short-term learning (or even be unrelated to the quality of instruction). In this case, the heterogeneity among results may reflect performance excellence and not instructional excellence, because the traditional test reflects what a person knows and not the relationship of learning to the instructional method.
In mapping a research agenda, the issue of learning style as it relates to performance and satisfaction with a particular instructional format needs to be addressed. One concern regarding comparisons of distance and traditional learning is that some students may simply learn more effectively in one format than another. Outcomes observed in this review may fail to consider the individual learner differences that exist. Individuals may learn more effectively in one format or another (reflecting an individual learning style preference), and researchers may need to identify the learning style of an individual and match the particular style of instruction to that person to maximize the effectiveness of the outcome (Foell & Fritz, 1995; Jegede et al., 1995). The authors plan to follow this meta-analysis with one that considers the relationship of learning style to effectiveness of and satisfaction with distance education.
The use of distance education or technological means of delivering instructional materials is going to continue and expand. Although there are significant arguments about the cost and effectiveness of such efforts, especially when compared to other means of instruction, the flexibility of new communication technology affords greater access to educational opportunities at all levels of education. A critical issue facing designers of educational systems is the question of the purpose and outcomes for evaluating the success of those systems. Are educational efforts designed to improve the level of literacy and competence of the population, or are educational systems expected to show a profit?
The current findings suggest that distance education technologies do not necessarily create a less effective learning environment and, in some instances, may enhance effectiveness. The broad base of studies indicates that, in fact, distance education students score slightly better than traditional students when considering exam scores or grades achieved in a particular course.

References
*indicates study contributing data used in this analysis

Allen, M., Bourhis, J., Mabry, E., Emmers-Sommer, T., Titsworth, S., Burrell, N., Mattrey, M., Crowell, T.,
Bakkar, A., Hamilton, A., Robertson, T., Scholl, J., & Wells, S. (2002). Comparing student satisfaction
of distance education to traditional classrooms in higher education: A meta-analysis. American Jour-
nal of Distance Education, 16, 83–97.

Allen, M., D’Alessio, D., & Brezgel, K. (1995). A meta-analysis summarizing the effects of pornography
II: Aggression after exposure. Human Communication Research, 22, 258–283.

Allen, M., Emmers, T., Gebhardt, L., & Giery, M. (1995, Winter). Pornography and rape myth accep-
tance. Journal of Communication, 45(1), 5–26.

Althaus, S. (1997). Computer-mediated communication in the university classroom: An experiment with on-line discussions. Communication Education, 46, 158–174.

Barrington, H. (1972). Instruction by television—two presentations compared. Educational Research, 14, 187–190.

Barry, M., & Runyan, G. (1995). A review of distance learning studies in the U.S. military. American
Journal of Distance Education, 9, 37–47.


*Beare, P. (1989). The comparative effectiveness of videotape, audiotape, and telelecture in delivering
continuing teacher education. American Journal of Distance Education, 3, 57–66.

*Benbunan-Fich, R. (1997). Effects of computer-mediated communication systems on learning, performance and satisfaction: A comparison of groups and individuals solving ethical scenarios. Unpublished doctoral dissertation, Rutgers University, Newark, NJ.

Benbunan-Fich, R., & Hiltz, S. R. (1999). Educational applications of CMCS: Solving case studies through
asynchronous learning networks. Journal of Computer-Mediated Communication, [On-line], 4(3).
Retrieved from http://www.ascusc.org/jcmc/vol4/issue3/benbunan-fich.html

Berge, Z. L. (Ed.). (2001). Sustaining distance training: Integrating learning technologies into the
fabric of the enterprise. San Francisco: Jossey-Bass.

Berge, Z. L., & Collins, M. (Eds.). (1995). Computer mediated communication and the online class-
room. Cresskill, NJ: Hampton Press.

Berlo, D. (1960). The process of communication. New York: Holt, Rinehart, & Winston.

Boettcher, J. (1996, July). Distance learning: Looking into the crystal ball. Tallahassee: Florida State
University. Retrieved from http://www.cren.net/~jboettch/eme5457/acti/jvb_cause.html

Boucher, T., & Barron, M. (1986). The effects of computer-based marking on completion rates and
student achievement for students taking a secondary-level distance education course. Distance Edu-
cation, 7, 275–280.

Bourhis, J., & Allen, M. (1998). The role of videotaped feedback in the instruction of public speaking:
A quantitative synthesis of published empirical literature. Communication Research Reports, 15, 256–
261.

*Bruning, R., Landis, M., Hoffman, E., & Grosskopf, K. (1993). Perspectives on an interactive satellite-
based Japanese language course. American Journal of Distance Education, 7, 22–38.

Burgoon, J. K., Bonito, J. A., Ramirez, A., Jr., Dunbar, N. E., Klein, K., & Fischer, J. (2002). Testing the interactivity principle: Effects of mediation, propinquity, and verbal and nonverbal modalities in interpersonal interaction. Journal of Communication, 52, 657–677.

*Chale, E. (1983). Teaching and training: An evaluation of primary school teachers in Tanzania fol-
lowing preservice training in teachers colleges and through distance education. Unpublished doc-
toral dissertation, University of London, England.

*Cheng, H., Lehman, J., & Armstrong, P. (1991). Comparison of performance and attitude in traditional
and computer conferencing classes. American Journal of Distance Education, 5, 51–64.

Christ, W. G., & Potter, W. J. (1998). Media literacy, media education, and the academy. Journal of
Communication, 48(1), 5–15.

Clark, R. (1985). Evidence for confounding in computer-based instruction studies: Analyzing the meta-
analysis. Educational Communication and Technology, 33(4), 249–262.

Clark, T. (1995, May). The IDEA: An Iowa approach to improving instructional quality. In M. Beaudoin
(Ed.), Distance Education Symposium 3: Instruction. Selected Papers at the Third Distance Education
Research Symposium (pp. 13–29). State College: American Center for the Study of Distance Educa-
tion, Pennsylvania State University, PA.

Cookson, P. (1989). Research on learners and learning in distance education: A review. American
Journal of Distance Education, 3, 22–35.

*Cross, R. (1996). Video-taped lectures for honours students on international industry based learning.
Distance Education, 17, 369–386.

Curran, C., & Murphy, P. (1992). Distance education at the second-level and for teacher education in


six African countries. In P. Murphy & A. Zhiri (Eds.), Distance education in Anglophone Africa:
Experience with secondary education and teacher training (pp. 17–40). EDI Development Policy
Case Studies Series, Analytical Case Studies, No. 9. Washington, DC: World Bank.

D’Alessio, D., & Allen, M. (2000). Media bias in presidential elections: A meta-analysis. Journal of
Communication, 50(4), 133–156.

*Dillon, C., Gunawardena, C., & Parker, R. (1992). Learner support: The critical link in distance educa-
tion. Distance Education, 13, 29–45.

*Doerfert, D., & Miller, W. (1997, June). Using diaries to assess the learning needs and course perfor-
mance of students served by three instructional delivery means. In N. Maushak, M. Simonson, & K.
Wright (Eds.), Encyclopedia of distance education research in Iowa (2nd rev. ed.; pp. 153–162).
Ames: Teacher Education Alliance of the Iowa Distance Education Alliance, Iowa’s Star Schools
Project, Technology Research and Evaluation Group, Iowa State University. (ERIC ED 409 001)

Edwards, I., & Rennie, L. (1991). Enhancing the learning of year 11 students with a videotaped lesson
in genetics. Distance Education, 12, 266–278.

*Ellis, L., & Mathis, D. (1985). College student learning from televised versus conventional classroom
lectures: A controlled experiment. Higher Education, 14, 165–173.

Fjortoft, N. (1995, October). Predicting persistence in distance learning programs. Paper presented at
the Mid-Western Educational Research Meeting, Chicago, IL. (ERIC ED 387 620)

Fjortoft, N. (1996). Persistence in a distance learning program: A case in pharmaceutical education.


American Journal of Distance Education, 10, 39–48

Foell, N., & Fritz, R. (1995). Association of cognitive style and satisfaction with distance learning.
Journal of Industrial Teacher Education, 33, 46–59.

*Gee, D. (1990). The effects of preferred learning style variables on student motivation, academic achieve-
ment, and course completion rates in distance education. Unpublished doctoral dissertation, Texas
Technological University, Lubbock.

Greene, B., & Meek, A. (1998). Distance education in higher education institutions: Incidence, audi-
ences, and plans to expand. (Report no. NCES-98–132). Washington, DC: U.S. Government Printing
Office.

*Grimes, P., Nielson, J., & Niss, J. (1988). The performance of nonresident students in the “Economics
U$A” telecourse. American Journal of Distance Education, 2, 36–43.

Gunawardena, C., & Zittle, F. (1997). Social presence as a predictor of satisfaction within a computer-
mediated conferencing environment. American Journal of Distance Education, 1, 8–26.

Guzley, R., Avanzino, S., & Bor, A. (1999, November). Video interactive distance learning: A test of
student motivation, satisfaction with interaction and mode of delivery, and perceived effectiveness.
Paper presented at the annual conference of the National Communication Association, Chicago.

Hackman, M. Z., & Walker, K. B. (1990). Instructional communication in the televised classroom: The
effects of system design and teacher immediacy on student learning and satisfaction. Communica-
tion Education, 39, 196–206.

Hall, J., & Rosenthal, R. (1991). Testing for moderator variables in meta-analysis: Issues and methods.
Communication Monographs, 58, 437–448.

*Hammond, R. (1997, August). A comparison of the learning experience of telecourse students in community and day sections. Paper presented at the Distance Learning Symposium, Utah Valley State College, Orem, UT. (ERIC ED 410 992)

Harasim, L. (Ed.). (1990). On-line education: Perspectives on a new medium. New York: Praeger/Greenwood.


Hiltz, S. R. (1986). The “virtual classroom”: Using computer-mediated communication for university
teaching. Journal of Communication, 36(2), 95–104.

Hiltz, S. R. (1994). The virtual classroom: Learning without limits via computer networks. Norwood, NJ:
Ablex.

*Hogan, R. (1997, July 25). Analysis of student success in distance learning courses compared to tradi-
tional courses. Paper presented at the annual conference on Multimedia in Education and Industry,
Chattanooga, TN.

Holmberg, B. (1977). Distance education: A survey and bibliography. London: Kogan Page.

Holmberg, B. (1981). Status and trends of distance education: A survey and bibliography. London:
Kogan Page.

Hunter, J., & Allen, M. (1992). Adaptation to electronic mail. Journal of Applied Communication Re-
search, 20, 254–274.

Hunter, J., & Schmidt, F. (1990). Methods of meta-analysis: Correcting for artifact and bias in research.
Beverly Hills, CA: Sage.

Institute for Higher Education. (1999, April). What’s the difference? A review of contemporary research
on the effectiveness of distance learning in higher education. Report prepared for the American
Federation of Teachers and National Education Association. Washington, DC: Institute for Higher
Education.

Jegede, O., Naidu, S., Taylor, J., Carr-Spencer, W., & Atkinson, R. (1995). Information processing
mechanisms underlying the novice to expert shift: Empirical evidence. In D. Sewart (Ed.), One world
many voices: Quality in open and distance learning. Selected papers from the 17th World Confer-
ence of the International Council for Distance Education (Vol. 2, pp. 93–97). United Kingdom: Open
University.

*Johnson, J. (1991). Evaluation report of the community college of Maine interactive television system.
Augusta: University of Southern Maine.

*Keene, S., & Cary, J. (1990). Effectiveness of distance education approach to U.S. Army Reserve
component training. American Journal of Distance Education, 4, 14–20.

Kember, D., Murphy, D., Siaw, I., & Yuen, K. (1991). Towards a causal model of student
progress in distance education: Research in Hong Kong. American Journal of Distance
Education, 5, 3–15.

Kuehn, S. (1994). Computer-mediated communication in instructional settings: A research agenda.


Communication Education, 43, 171–183.

Kulik, J., Kulik, C., & Cohen, P. (1980). Effectiveness of computer-based college teaching: A meta-
analysis of findings. Review of Educational Research, 50, 525–544.

*LaRose, R., Gregg, J., & Eastin, M. (1998, December). Audiographic telecourses for the web: An
experiment. Journal of Computer Mediated Communication, 4(2). Retrieved from http://
www.ascusc.org/jcmc/vol4/issue2/larose.html

LaRose, R., & Whitten, P. (2000). Re-thinking instructional immediacy for web courses: A social cogni-
tive explanation. Communication Education, 49, 320–338.

*Larson, M., & Bruning, R. (1996). Participant perceptions of a collaborative satellite-based mathemat-
ics course. American Journal of Distance Education, 10, 6–32.

Lievrouw, L. A., & Finn, T. A. (1990). Identifying the common dimensions of communication: The
communication systems model. In B. Ruben & L. A. Lievrouw (Eds.), Information and behavior: Vol.
3. Mediation, information and communication (pp. 37–65). New Brunswick, NJ: Transaction Press.


Mabry, E. (2002). Group communication and technology: Rethinking the role of communication mo-
dality in group work and performance. In L. Frey (Ed.), New directions in group communication (pp.
285–298). Thousand Oaks, CA: Sage.

*Martin, E., & Rainey, L. (1993). Student achievement and attitude in a satellite-delivered high school
science course. American Journal of Distance Education, 7, 54–61.

Maushak, N. (1997, June). Distance education: A review of the literature. In N. Maushak, M. Simonson,
& K. Wright (Eds.), Encyclopedia of distance education research in Iowa (2nd rev. ed.; pp. 221–234).
Ames: Teacher Education Alliance of the Iowa Distance Education Alliance, Iowa’s Star Schools
Project, Technology Research and Evaluation Group, Iowa State University. (ERIC ED 409 001)

Maushak, N., Simonson, M., & Wright, K. (1997, June). Distance education: A selected bibliography. In
N. Maushak, M. Simonson, & K. Wright (Eds.), Encyclopedia of distance education research in Iowa
(2nd rev. ed.; pp. 221–234). Ames: Teacher Education Alliance of the Iowa Distance Education
Alliance, Iowa’s Star Schools Project, Technology Research and Evaluation Group, Iowa State Univer-
sity. (ERIC ED 409 001)

*McCleary, I., & Egan, M. (1989). Program design and evaluation: Two-way interactive television.
American Journal of Distance Education, 3, 50–60.

McHenry, L., & Bozik, M. (1995). Communicating at a distance: A study of interaction in a distance
education classroom. Communication Education, 44, 362–371.

McIsaac, M. (1990). Problems affecting evaluation of distance education in developing countries. Re-
search in Distance Education, 37, 12–16.

McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw-Hill.

Merisotis, J., & Phipps, R. (1999, May/June). What’s the difference? Outcomes of distance vs. traditional
classroom-based learning. Change, 13–17.

*Miller, J., McKenna, M., & Ramsey, P. (1993). An evaluation of student content learning and affective
perceptions of a two-way interactive video learning experience. Educational Technology, 13(6), 51–
55.

Mottet, T. P. (2000). Interactive television instructors’ perceptions of students’ nonverbal responsiveness and their influence on distance education. Communication Education, 49, 146–164.

Mount, G., & Walters, S. (1985). Televised versus on-campus introductory psychology: A review and
comparison. Journal of Educational Technology Systems, 13(3), 165–174.

Nettles, K., Dziuban, C., Cioffi, D., Moskal, P., & Moskal, P. (1999, March). Technology and learning:
The “no significant difference” phenomenon: A structural analysis of research on technology en-
hanced instruction, “Birnam Wood is moving.” Unpublished paper, University of Central Florida.

*Patterson, N. (1999, November). An evaluation of graduate class interaction in face-to-face and asyn-
chronous computer groupware experiences: A collective case study. Paper presented at the 24th an-
nual conference of the Association for the Study of Higher Education (ASHE), San Antonio, TX. (ERIC
ED 437 015)

Perraton, H. (1992). A review of distance education. In P. Murphy & A. Zhiri (Eds.), Distance education
in Anglophone Africa: Experience with secondary education and teacher training (pp. 7–15). EDI
Development Policy Case Studies Series, Analytical Case Studies, No. 9. Washington, DC: World
Bank.

*Phelps, R., Wells, R., Ashworth, R., & Hahn, H. (1991). Effectiveness and costs of distance education
using computer-mediated communication. American Journal of Distance Education, 5, 7–19.

Preiss, R., & Allen, M. (1995). Understanding and using meta-analysis. Evaluation & the Health Profes-
sions, 18, 315–335.


Rafaeli, S. (1988). Interactivity: From new media to communication. Sage Annual Review of Communi-
cation Research, 16, 110–134.

*Ritchie, H., & Newby, T. (1989). Classroom lecture/discussion vs. live televised instruction: A compari-
son of effects on student performance, attitude, and interaction. American Journal of Distance Edu-
cation, 3, 36–45.

Schlosser, C., & Anderson, M. (1994, January). Distance education: Review of the literature. Report for
the U.S. Department of Education, Washington, DC. (ERIC ED 382 159)

*Simpson, H., Pugh, H., & Parchman, S. (1991). An experimental two-way video teletraining system:
Design, development and evaluation. Distance Education, 12, 209–231.

*Simpson, H., Pugh, H., & Parchman, S. (1993). Empirical comparison of alternative instructional TV
technologies. Distance Education, 14, 147–164.

*Souder, W. (1993). The effectiveness of traditional vs. satellite delivery in three management of technology master’s degree programs. American Journal of Distance Education, 7, 37–53.

Stickell, D. (1963). A critical review of the methodology and results of research comparing televised and face-to-face instruction. Unpublished doctoral dissertation, Pennsylvania State University, State College, PA.

*Timmins, K. (1989, October). Educational effectiveness of various adjuncts to printed study material in
distance education. Research in Distance Education, 36, 12–13.

*Treagust, D., Waldrip, B., & Horley, J. (1993). Effectiveness of ISDN video-conferencing: A case study
of two campuses and two different courses. Distance Education, 14, 315–330.

Verduin, J. R., & Clark, T. A. (1991). Distance education: The foundations of effective practice. San
Francisco: Jossey-Bass.

Walker, K., & Hackman, M. (1992). Multiple predictors of perceived learning and satisfaction: The
importance of information transfer and non-verbal immediacy in the televised course. Distance
Education, 13, 81–92.

Wells, R. (1992). Computer mediated communication for distance education: An international review
of design, teaching, and institutional issues. University Park: Pennsylvania State University, American
Centre for the Study of Distance Education. (ERIC ED 351 997)

Whittington, N. (1987). Is instructional television educationally effective? A research review. American


Journal of Distance Education, 1, 47–57.

Witt, P. L., Wheeless, L. R., & Allen, M. (in press). A meta-analysis of the relationship between teacher immediacy and learning. Communication Monographs.

Woodley, A., & Kirkwood, A. (1995). Evaluation in distance learning. In D. Sewart (Ed.), One world
many voices: Quality in open and distance learning. Selected papers from the 17th World Confer-
ence of the International Council for Distance Education (Vol. 1, pp. 283–303). United Kingdom:
Open University.
