The Effectiveness of Online and Blended Learning - A Meta-Analysis of The Empirical Literature
BARBARA MEANS
SRI International
YUKIE TOYAMA
SRI International
ROBERT MURPHY
SRI International
MARIANNE BAKI
SRI International
27628h_TCR_March2013_text_Layout 1 2/13/13 8:52 AM Page 2
graduate programs or professional training. The average learner age in a study ranged from
13 to 44.
Intervention/Program/Practice: The meta-analysis was conducted on 50 effects found in
45 studies contrasting a fully or partially online condition with a fully face-to-face instruc-
tional condition. Length of instruction varied across studies and exceeded one month in the
majority of them.
Research Design: The meta-analysis corpus consisted of (1) experimental studies using ran-
dom assignment and (2) quasi-experiments with statistical control for preexisting group dif-
ferences. An effect size was calculated or estimated for each contrast, and average effect sizes
were computed for fully online learning and for blended learning. A coding scheme was
applied to classify each study in terms of a set of conditions, practices, and methodological
variables.
Findings/Results: The meta-analysis found that, on average, students in online learning
conditions performed modestly better than those receiving face-to-face instruction. The
advantage over face-to-face classes was significant in those studies contrasting blended
learning with traditional face-to-face instruction but not in those studies contrasting purely
online with face-to-face conditions.
Conclusions/Recommendations: Studies using blended learning also tended to involve
additional learning time, instructional resources, and course elements that encourage inter-
actions among learners. This confounding leaves open the possibility that one or all of these
other practice variables contributed to the particularly positive outcomes for blended learn-
ing. Further research and development on different blended learning models is warranted.
Experimental research testing design principles for blending online and face-to-face instruc-
tion for different kinds of learners is needed.
CONCEPTUAL FRAMEWORK
As noted, online and blended learning are not clearly defined in the lit-
erature. For the purpose of this meta-analysis, we adopted the Sloan
Consortium’s definition of online learning as learning that takes place
entirely or in substantial portion over the Internet. We operationalized
the concept of “substantial portion” as 25% or more of the instruction on
the content assessed by a study’s learning outcome measure. This crite-
rion was used to avoid including studies of incidental uses of the Internet,
such as downloading references and turning in assignments.1 Cases in
types of variables that may influence effect sizes as those relating to the
online instruction practices, conditions under which the study was con-
ducted, and aspects of the study methodology. Within each of these cate-
gories, we specified a set of specific features that have been hypothesized
or found to influence learning outcomes in prior meta-analyses of dis-
tance learning (Bernard et al., 2004; Sitzmann et al., 2006; Zhao et al.,
2005).
All these variables were coded, and in cases in which an adequate num-
ber of studies with the necessary information was available, a variable was
tested as a potential moderator of the online learning effect size.
As discussed, from a practical perspective, one of the most fundamen-
tal distinctions among different online learning activities is whether they
are blended or conducted purely online. Purely online instruction serves
as a replacement for face-to-face instruction (e.g., a virtual course), with
attendant implications for school staffing and cost savings. Purely online
instruction may be an attractive alternative for cost reasons if it is equiva-
lent to traditional face-to-face instruction in terms of student outcomes.
Blended learning, on the other hand, is expected to be an enhancement of
RELATED META-ANALYSES
METHOD
LITERATURE SEARCH
• Involve learning that took place over the Internet. The use of the Internet
had to be a substantial part of the intervention. Studies in which the
Internet was only an incidental component of the intervention were
excluded. In operational terms, to qualify as online learning, a study
treatment needed to provide at least a quarter of the
instruction/learning of the content assessed by the study’s learning
measure by means of the Internet.
• Contrast conditions that varied in terms of use of online learning. Student
outcomes for classroom-based instruction had to be compared
against a condition falling into at least one of two study categories:
(1) purely online learning compared with offline/face-to-face learn-
ing, or (2) a combination of online plus offline/face-to-face learning
(i.e., blended learning) compared with offline/face-to-face learning
alone.
• Describe an intervention study that had been completed. Descriptions of
study designs, evaluation plans, or theoretical frameworks were
excluded. The length of the intervention/treatment could vary from
a few hours to a quarter, semester, year, or longer.
• Report a learning outcome that was measured for both treatment and control
study’s learning measure, and (2) lack of statistical control for prior
abilities in quasi-experiments.
From the 502 articles, analysts identified 522 independent studies
(some articles reported more than one study). When the same study was
reported in different publication formats (e.g., conference paper and
journal article), only the more formal journal article was retained for the
analysis. Of the 522 studies, 176 met all the criteria of the full-text screen-
ing process. Table 1 shows the bases for exclusion for the 346 studies that
did not meet all the criteria.
Table 1. Bases for Excluding Studies During the Full-Text Screening Process

Primary Reason for Exclusion | Number Excluded | Percentage Excluded
Did not use statistical control | 137 | 39

a Other reasons for exclusion included: (1) did not provide enough information, (2) was written in a language other than English, and (3) used different learning outcome measures for the treatment and control groups.
The precision of each effect estimate was determined by using the esti-
mated standard error of the mean to calculate the 95% confidence inter-
val for each effect.
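As an illustrative sketch of that computation (our Python, not the authors' code; they used the Comprehensive Meta-Analysis package, whose small-sample details may differ), Hedges' g and its standard-error-based 95% confidence interval can be derived from group summary statistics:

```python
import math

def hedges_g_with_ci(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g with a standard-error-based 95% confidence interval.

    A textbook formulation from group means, SDs, and sample sizes;
    illustrative only, not the software the authors used.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                   / (n_t + n_c - 2))
    d = (m_t - m_c) / sp
    # Hedges' small-sample correction factor J
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    g = j * d
    # Estimated standard error of g
    se = j * math.sqrt((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return g, (g - 1.96 * se, g + 1.96 * se)
```

Effects whose interval excludes zero are the ones that would be reported as statistically significant.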
During the data extraction phase, it became apparent that one set of
studies rarely provided sufficient data for the Comprehensive
Meta-Analysis software to calculate an effect size. Quasi-experimental
studies that used hierarchical linear modeling or analysis of covariance
to adjust for pretests and other learner characteristics typically did
not report some of the data elements needed to compute an effect size.
For studies using hierarchical linear modeling to analyze impacts, typi-
cally the regression coefficient on the treatment status variable (treat-
ment or control), its standard error, and a p value and sample sizes for
the two groups were reported. For analyses of covariance, typically the
adjusted means and F statistic were reported, along with group sample
sizes. In almost all cases, the unadjusted standard deviations for the two
groups were not reported and could not be computed because the
pretest-posttest correlation was not provided. To avoid eliminating all
these studies (which included some of the largest and most recent inves-
tigations), analysts used a conservative estimate of the pretest-posttest
correlation (r = .70) in order to estimate an effect size for those studies
in which the pretest was the same measure as the posttest, and a pretest-
posttest correlation of r = .50 for studies in which different measures were
used at pretest and posttest. These effect sizes were flagged in the coding
as “estimated effect sizes,” as were effect sizes computed from t tests, F
tests, and p levels. In extracting effect size data, analysts followed a set of
rules:
The review of the 99 studies to obtain the data for calculating effect size
produced 50 independent effect sizes (27 for purely online vs. face-to-
face and 23 for blended vs. face-to-face) from 45 studies. Fifty-four stud-
ies did not report sufficient data to support calculating an effect size.
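The correlation-based estimation described above can be sketched as follows for the analysis-of-covariance case. This is one common reconstruction under our assumptions (recover the adjusted SD from the reported F statistic, then inflate it using the assumed pretest-posttest correlation), not necessarily the exact Comprehensive Meta-Analysis procedure; the function name and inputs are ours:

```python
import math

def estimate_g_from_ancova(adj_mean_t, adj_mean_c, f_stat, n_t, n_c, r):
    """Estimate Hedges' g from ANCOVA-adjusted means, the F statistic,
    and an assumed pretest-posttest correlation r (.70 when pretest and
    posttest were the same measure, .50 otherwise, per the text).

    Sketch: the covariate removes about r**2 of the outcome variance,
    so the unadjusted SD equals the adjusted SD divided by
    sqrt(1 - r**2). A larger assumed r therefore yields a smaller,
    more conservative effect size.
    """
    diff = adj_mean_t - adj_mean_c
    t = math.sqrt(f_stat)                             # |t| for the adjusted difference
    se_diff = abs(diff) / t                           # SE of the adjusted mean difference
    sd_adj = se_diff / math.sqrt(1 / n_t + 1 / n_c)   # pooled adjusted SD
    sd_unadj = sd_adj / math.sqrt(1 - r ** 2)         # back out the unadjusted SD
    d = diff / sd_unadj
    j = 1 - 3 / (4 * (n_t + n_c) - 9)                 # small-sample correction
    return j * d
```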
All studies that provided enough data to compute an effect size were
coded for practices, conditions, and features of study methodology.
Building on the project’s conceptual framework (Figure 2) and the cod-
ing schemes used in several earlier meta-analyses (Bernard et al., 2004;
Sitzmann et al., 2006), a coding structure was developed and pilot-tested
with several studies. The top-level coding structure, incorporating refine-
ments made after pilot testing, is shown in Table 2.
To determine interrater reliability, two researchers coded 20% of the
studies, achieving an interrater reliability of 86% across those studies.
Analysis of coder disagreements resulted in the refinement of some defi-
nitions and decision rules for some codes; other codes that required
information that articles did not provide or that proved difficult to code
reliably were eliminated (e.g., whether or not the online instructor had
been trained in this method of instruction). A single researcher coded
the remaining studies.
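Interrater reliability of the kind reported (86% across the double-coded 20% sample) is simple percent agreement, the share of coding decisions on which the two researchers matched; a minimal sketch (our illustration):

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of items that two raters coded identically.

    Simple agreement only; chance-corrected statistics such as
    Cohen's kappa are stricter alternatives.
    """
    if len(codes_a) != len(codes_b):
        raise ValueError("both raters must code the same items")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)
```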
• Author
• Learner age
• Learner incentive for involvement in the study
• Learner type
• Learning setting
• Subject matter
• Type of publication
• Year of publication
• Attrition equivalence
• Contamination
• Curriculum material/instruction equivalence
• Equivalence of prior knowledge/pretest scores
• Instructor equivalence
• Sample size for unit of assignment
• Student equivalence
• Study design
• Time-on-task equivalence
• Unit of assignment to conditions
• Whether equivalence of groups at preintervention was described
DATA ANALYSIS
FINDINGS
Table 3. Purely Online Versus Face-to-Face Studies Included in the Meta-Analysis

Study | Title | g | SE | 95% Lower | 95% Upper | z | Retention (%), Treatment | Retention (%), Control | Sample sizea
Caldwell (2006) | A comparative study of traditional, web-based and online instructional modalities in a computer programming course | +0.132 | 0.310 | -0.476 | 0.740 | 0.43 | 100 | 100 | 60 students
(2008) | … versus face-to-face dyspnea self-management program for patients with chronic obstructive pulmonary disease: Pilot study | +0.292 | 0.316 | -0.327 | 0.910 | 0.93 | Unknown | Unknown | 39 participants
Ocker and Yaverbaum (1999) | Asynchronous computer-mediated communication versus face-to-face collaboration: Results on student learning, quality and satisfaction | -0.030 | 0.214 | -0.449 | 0.389 | -0.14 | Unknown | Unknown | 43 students
Padalino and Peres (2007) | E-learning: A comparative study for knowledge apprehension among nurses | +0.115 | 0.281 | -0.437 | 0.666 | 0.41 | Unknown | Unknown | 49 participants
Peterson and Bond (2004) | Online compared to face-to-face teacher preparation for learning standards-based planning skills | -0.100 | 0.214 | -0.520 | 0.320 | -0.47 | Unknown | Unknown | 4 sections
Schmeeckle (2003) | Online training: An evaluation of the effectiveness and efficiency of training law enforcement personnel over the Internet | -0.106 | 0.198 | -0.494 | 0.282 | -0.53 | Unknown | Unknown | 101 students
Schoenfeld-Tacher, McConnell and Graham (2001) | Do no harm: A comparison of the effects of online vs. traditional delivery media on a science course | +0.800 | 0.459 | -0.100 | 1.700 | 1.74 | 100 | 99.94 | Unknown
Sexton et al. (2002) | A comparison of traditional and World Wide Web methodologies, computer anxiety, and higher order thinking skills in the inservice training of Mississippi 4-H extension agents | -0.422 | 0.385 | -1.177 | 0.332 | -1.10 | Unknown | Unknown | 26 students
Turner et al. (2006) | Web-based learning versus standardized patients for teaching clinical diagnosis: A randomized, controlled, crossover trial | +0.242 | 0.367 | -0.477 | 0.960 | 0.66 | Unknown | Unknown | 4 classrooms
Vandeweerd et al. | Teaching veterinary radiography by e-learning versus … | +0.144 | 0.207 | -0.262 | 0.550 | 0.70 | Unknown | Unknown | 30 students
Wallace and Clariana (2000) | Achievement predictors for a computer-applications module delivered online | +0.109 | 0.206 | -0.295 | 0.513 | 0.53 | Unknown | Unknown | 92 students
Wang (2008) | Developing and evaluating an interactive multimedia … | -0.071 | 0.136 | -0.338 | 0.195 | -0.53 | Unknown | Unknown | 4 sections
Day, Raven, and Newman (1998) | The effects of World Wide Web instruction and …
… skills | +0.332 | 0.213 | -0.085 | 0.750 | 1.56 | Unknown | Unknown | 88 students
O'Dwyer, Carey, and Kleiman (2007) | A study of the effectiveness of the Louisiana algebra I online course | +0.373 | 0.094 | 0.190 | 0.557 | 3.99*** | 88.51 | 64.4 | Unknownb

a This number represents the assigned units at study conclusion; it excludes units that attrited.
b The study involved 18 online classrooms from six districts and two private schools; the same six districts were asked to identify comparable face-to-face classrooms, but the study does not report how many of those classrooms participated.
c Two independent contrasts were contained in this article, which therefore appears twice in the table.
*p < .05. **p < .01. ***p < .001. SE = standard error.

Teachers College Record, 115, 030303 (2013)
27628h_TCR_March2013_text_Layout 1 2/13/13 8:52 AM Page 29
MAIN EFFECTS
The overall finding of the meta-analysis is that online learning (the com-
bination of studies of purely online and of blended learning) on average
produces stronger student learning outcomes than learning solely
through face-to-face instruction. The mean effect size for all 50 contrasts
was +0.20, p < .001.
Next, separate mean effect sizes were computed for purely online ver-
sus face-to-face and blended versus face-to-face contrasts. The mean
effect size for the 27 purely online versus face-to-face contrasts was not
significantly different from 0 (g+ = +0.05, p = .46). The mean effect size
for the 23 blended versus face-to-face contrasts was significantly different
from 0 (g+ = +0.35, p < .0001).
A test of the difference between the purely online versus face-to-face
studies and the blended versus face-to-face studies found that the mean
effect size was larger for contrasts pitting blended learning against face-
to-face instruction than for those of purely online versus face-to-face
instruction (Q = 8.37, p < .01). Thus, studies of blended instruction found
a larger advantage relative to face-to-face instruction than did studies of
purely online learning.
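Under fixed-effect assumptions, the machinery behind these subgroup comparisons can be sketched as follows (our illustration; the weighting details of the authors' analysis may differ). Each contrast is weighted by the inverse of its squared standard error, and the between-subgroups Q statistic is referred to a chi-square distribution with one fewer degree of freedom than the number of subgroups:

```python
def weighted_mean_effect(effects, ses):
    """Inverse-variance (fixed-effect) mean effect size and its variance."""
    weights = [1.0 / se ** 2 for se in ses]
    mean = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    return mean, 1.0 / sum(weights)

def q_between(groups):
    """Between-subgroups heterogeneity Q for a moderator test.

    groups is a list of (effects, standard_errors) pairs, one pair per
    subgroup (e.g., blended vs. purely online contrasts). Compare the
    result against a chi-square with len(groups) - 1 degrees of freedom.
    """
    means, variances = zip(*(weighted_mean_effect(e, s) for e, s in groups))
    w = [1.0 / v for v in variances]
    grand = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    return sum(wi * (m - grand) ** 2 for wi, m in zip(w, means))
```

The farther apart the subgroup means, the larger Q grows; a Q of 8.37 on 1 degree of freedom, as reported above, exceeds the .01 critical value of 6.63.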
PRACTICE VARIABLES
CONDITION VARIABLES
To test whether study effect sizes varied with publication year, which
was taken as a proxy for the sophistication of available technology, the
study sample was split into two subsets: studies published between 1996
and 2003 and those published from 2004 through July 2008. Publication
period did not significantly moderate the effectiveness of online
learning.
To investigate whether online learning is more advantageous for some
types of learners than for others, the studies were divided into three sub-
sets of learner type: K–12 students, undergraduate students (the largest
single group), and other types of learners (graduate students or individ-
uals receiving job-related training). As noted previously, the studies cov-
ered a wide range of subjects, but medicine and health care were the
most common. Accordingly, these studies were contrasted against studies
in other fields. Neither learner type nor subject area emerged as a statis-
tically significant moderator of the effectiveness of online learning. In
summary, for the range of student types for which controlled studies are
available, online learning appeared more effective than traditional face-
to-face instruction in both older and newer studies, with both younger
and older learners, and in both medical and other subject areas. Table 6
provides the results of the analysis of these variables.
METHODS VARIABLES
The influence of study sample size was examined by dividing studies into
three subsets, according to the number of learners for which outcome
data were collected. Sample size was not found to be a statistically signifi-
cant moderator of online learning effects. Thus, there is no evidence that
the inclusion of small-sample studies in the meta-analysis was responsible
for the overall finding of a positive outcome for online learning.
Comparisons of the three designs deemed acceptable for this meta-
analysis (random-assignment experiments, quasi-experiments with statis-
tical control, and crossover designs) indicate that study design is not
significant as a moderator variable (see Table 7). Moreover, in contrast
with early meta-analyses in computer-based instruction and web-based
training, in which effect size was inversely related to study design quality
(Pearson et al., 2005; Sitzmann et al., 2006), those experiments that used
random assignment in the present corpus produced significantly positive
effects (+0.25, p < .001), whereas the quasi-experiments and crossover
designs did not (both p > .05).
Table 7. Tests of Study Features as Moderator Variables

Variable | Contrast | Number of Studies | Weighted Effect Size | Standard Error | Lower Limit | Upper Limit | Q-Statistic
Sample size | Fewer than 35 | 11 | 0.203 | 0.139 | -0.069 | 0.476 |
Sample size | From 35 to 100 | 20 | 0.209* | 0.086 | 0.039 | 0.378 | 0.01
Sample size | More than 100 | 19 | 0.199** | 0.072 | 0.058 | 0.339 |
Type of knowledge testeda | Declarative | 12 | 0.180 | 0.097 | -0.010 | 0.370 |
Type of knowledge testeda | Procedural and declarative | 30 | 0.239*** | 0.068 | 0.106 | 0.373 | 0.37
Type of knowledge testeda | Strategic knowledge | 5 | 0.281 | 0.168 | -0.047 | 0.610 |
Study design | Random assignment control | 32 | 0.249*** | 0.065 | 0.122 | 0.376 |
There is a particularly urgent need for more well-designed studies of
alternative models for younger and less advanced students.
require a better understanding of the kinds of online activities and
teacher supports that enable these students to learn effectively in online
environments. Future experimental and controlled quasi-experimental
designs should report the practice features of both experimental and
control conditions to support future meta-analyses of the effectiveness of
alternative online learning approaches for specific types of students.
Effective practices for learners with different levels of motivation and dif-
ferent senses of efficacy in the subject domain of the online experience
need to be studied as well.
Even with this expected expansion of the research base, however, meta-
analyses of online learning effectiveness studies will remain limited in
several respects. Inevitably, they do not reflect the latest technology inno-
vations. The cycle time for study design, execution, analysis, and publica-
tion cannot keep up with the fast-changing world of Internet technology.
In the present case, important technology practices of the last five years,
notably the use of social networking technology to create online study
groups and recommend learning resources, are not reflected in the cor-
pus of published studies included in this meta-analysis.
In addition, meta-analyses of effectiveness studies provide only limited
guidance for instructional design and implementation. Moderator vari-
able analyses, such as those reported here, yield reasonable hypotheses as
to factors that can influence the effectiveness of online instruction but
offer only general guidance to those engaged in developing purely
online or blended learning experiences. Feature coding of large num-
bers of studies to support moderator variable analysis necessarily sacri-
fices detailed description and context. Meta-analysis is better suited to
answering questions about whether to consider implementing online
learning or what features to look for in judging online learning products
than to guiding the myriad of decisions involved in actually designing
and implementing online learning.
We expect more well-designed studies of alternative online learning
models to emerge as this kind of instruction becomes increasingly main-
stream. But instructional design involves thousands, if not millions, of
decisions about the details of structuring a learner’s engagement with the
material to be learned. Moreover, some studies are finding that design
principles that have empirical support when applied to some kinds of
learning content prove ineffective with other content (Wylie, Koedinger,
& Mitamura, 2009). Under these circumstances, the resources available
for online learning research are sure to be outstripped by the sheer num-
ber of decisions to be made. Other research approaches, in which online
Acknowledgments
The revised analysis reported here benefits from input received from Shanna Smith Jaggars and Thomas
Bailey of the Community College Research Center of Teachers College, Columbia University, in response
to the 2009 technical report describing an earlier version of the analysis.
We would also like to thank Bernadette Adams Yates and her colleagues at the U.S. Department of
Education for giving us substantive guidance and support throughout the study. Finally, we also thank
members of a large project team at the Center for Technology in Learning–SRI International.
Support for this research was provided by the U.S. Department of Education, Office of Planning,
Evaluation, and Policy Development. Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the authors and do not necessarily represent the positions or poli-
cies of the Department of Education.
An earlier version of the analysis reported here appeared in a report published by the U.S. Department
of Education (2009). Subsequent to release of that technical report, several transcription errors were dis-
covered. Those errors have been corrected and all analyses rerun to produce the findings reported here.
Notes
1. A study by Ellis, Wood, and Thorpe (2004) provides an example of conditions that
we did not consider online learning based on this criterion. In this study, three instructional
delivery conditions were compared: (1) traditional face-to-face instruction, (2) face-to-face
instruction supplemented with printed materials, audio, video, software, and email, and
(3) independent study with a CD-ROM, which included “a virtual learning environment”
(p. 359). The second condition includes the use of emails, but email exchanges were not
described as a major component of the instruction. The virtual learning environment in
the third condition was not offered over the Internet.
2. After completion of this meta-analysis, in response to a suggestion from one of this
article’s reviewers, we looked at the studies of online learning listed on the NSD (No
Significant Difference) website established to document studies that have found no difference
in learning outcomes based on the modality of delivery
(http://www.nosignificantdifference.org/about.asp). A search with the term online generated 36 studies sorted by their
conclusion, of which 26 reported no significant difference (10 reported a significant differ-
ence, with 9 of these differences favoring the online condition). A quick review of the
nature of the NSD studies found that 8 of the listed articles were not empirical studies, 16
lacked control for potential preexisting group differences, 4 lacked an objective measure of
student learning, 2 were duplicates of studies listed earlier on the NSD website, 4 were not
available online for review, and 2 were outside the time frame for our meta-analysis.
References
*Aberson, C. L., Berger, D. E., & Romero, V. L. (2003). Evaluation of an interactive tutorial
for teaching hypothesis testing concepts. Teaching of Psychology, 30(1), 75–78.
*Al-Jarf, R. S. (2004). The effects of Web-based learning on struggling EFL college writers.
Foreign Language Annals, 37(1), 49–57.
Allen, I. E., & Seaman, J. (2003). Sizing the opportunity: The quality and extent of online education
in the United States, 2002 and 2003. Retrieved from http://sloanconsortium.org/publications/survey/sizing_the_opportunity2003
Allen, I. E., & Seaman, J. (2010). Learning on demand: Online education in the United States,
2009. Retrieved from http://www.sloanc.org/publications/survey/pdf/learningondemand.pdf
Barab, S. A., Squire, K., & Dueber, B. (2000). Supporting authenticity through participatory
learning. Educational Technology Research and Development, 48(2), 37–62.
Barab, S. A., & Thomas, M. K. (2001). Online learning: From information dissemination to
fostering collaboration. Journal of Interactive Learning Research, 12(1), 105–143.
Bates, A. W. (1997). The future of educational technology. Learning Quarterly, 2, 7–16.
*Beeckman, D., Schoonhoven, L., Boucque, H., Van Maele, G., & Defloor, T. (2008).
Pressure ulcers: E-learning to improve classification by nurses and nursing students.
Journal of Clinical Nursing, 17(13), 1697–1707.
*Bello, G., Pennisi, M. A., Maviglia, R., Maggiore, S. M., Bocci, M. G., Montini, L., &
Antonelli, M. (2005). Online vs. live methods for teaching difficult airway management
to anesthesiology residents. Intensive Care Medicine, 31(4), 547–552.
*Benjamin, S. E., Tate, D. F., Bangdiwala, S. I., Neelon, B. H., Ammerman, A. S., Dodds, J.
M., & Ward, D. S. (2008). Preparing child care health consultants to address childhood
overweight: A randomized controlled trial comparing web to in-person training. Maternal
and Child Health Journal, 12(5), 662–669.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., Wallet, P. A.,
. . . Huang, B. (2004). How does distance education compare with classroom instruction?
A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
Bersin, J. (2004). The blended learning book: Best practices, proven methodologies, and lessons
learned. San Francisco, CA: Wiley.
Bethel, E. C., & Bernard, R. M. (2010). Developments and trends in synthesizing diverse
forms of evidence: Beyond comparisons between distance education and classroom
instruction. Distance Education, 31(3), 231–256.
*Beyea, J. A., Wong, E., Bromwich, M., Weston, W. W., & Fung, K. (2008). Evaluation of a
particle repositioning maneuver web-based teaching module. The Laryngoscope, 118(1),
175–180.
Bhattacharya, M. (1999). A study of asynchronous and synchronous discussion on cognitive
maps in a distributed learning environment. In Proceedings of WebNet World Conference on
the WWW and Internet 1999 (pp. 100–105). Chesapeake, VA: AACE.
Biostat Solutions. (2006). Comprehensive Meta-Analysis (Version 2.2.027). Mt. Airy, MD:
Biostat Solutions.
Bonk, C. J., & Graham, C. R. (Eds.). (2005). Handbook of blended learning: Global perspectives,
local designs. San Francisco, CA: Pfeiffer.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to
meta-analysis. Chichester, England: Wiley.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experi-
ence, and school. Washington, DC: National Academy Press.
*Caldwell, E. R. (2006). A comparative study of three instructional modalities in a computer pro-
gramming course: Traditional instruction, web-based instruction, and online instruction
(Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database.
(UMI No. AAT 3227694)
Cavanaugh, C. (2001). The effectiveness of interactive distance education technologies in
K–12 learning: A meta-analysis. International Journal of Educational Telecommunications,
7(1), 73–78.
Cavanaugh, C., Gillan, K. J., Kromrey, J., Hess, M., & Blomeyer, R. (2004). The effects of dis-
tance education on K–12 student outcomes: A meta-analysis. Retrieved from
http://www.ncrel.org/tech/distance/index.html
*Cavus, N., Uzonboylu, H., & Ibrahim, D. (2007). Assessing the success rate of students
using a learning management system together with a collaborative tool in web-based
teaching of programming languages. Journal of Educational Computing Research, 36(3),
301–321.
Childs, J. M. (2001). Digital skill training research: Preliminary guidelines for distributed learning
(Final report). Retrieved from http://www.stormingmedia.us/24/2471/A247193.htm
Christensen, C. M., Horn, M. B., & Johnson, C. W. (2008). Disrupting class: How disruptive
innovation will change the way the world learns. New York, NY: McGraw-Hill.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational
Research, 53(4), 445–449.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
*Davis, J. D., Odell, M., Abbitt, J., & Amos, D. (1999, March). Developing online courses: A com-
parison of web-based instruction with traditional instruction. Paper presented at the Society for
Information Technology & Teacher Education International Conference, Chesapeake,
VA. Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=7520
*Day, T. M., Raven, M. R., & Newman, M. E. (1998). The effects of World Wide Web instruc-
tion and traditional instruction and learning styles on achievement and changes in stu-
dent attitudes in a technical writing in agricommunication course. Journal of Agricultural
Education, 39(4), 65–75.
*DeBord, K. A., Aruguete, M. S., & Muhlig, J. (2004). Are computer-assisted teaching meth-
ods effective? Teaching of Psychology, 31(1), 65–68.
Dede, C. (2000). The role of emerging technologies for knowledge mobilization, dissemination, and
use in education. Paper commissioned by the Office of Educational Research and
Improvement, U.S. Department of Education.
Dede, C. (Ed.). (2006). Online professional development for teachers: Emerging models and meth-
ods. Cambridge, MA: Harvard Education Publishing Group.
Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., . . . Sussex,
W. (2007). Effectiveness of reading and mathematics software products: Findings from the first stu-
dent cohort. Report to Congress. NCEE 2007-4006. Washington, DC: U.S. Department of
Education.
*El-Deghaidy, H., & Nouby, A. (2008). Effectiveness of a blended e-learning cooperative
approach in an Egyptian teacher education programme. Computers & Education, 51(3),
988–1006.
Ellis, R. C. T., Wood, G. D., & Thorpe, T. (2004). Technology-based learning and the pro-
ject manager. Engineering, Construction and Architectural Management, 11(5), 358–365.
*Englert, C. S., Zhao, Y., Dunsmore, K., Collings, N. Y., & Wolbers, K. (2007). Scaffolding
the writing of students with disabilities through procedural facilitation: Using an
Internet-based technology to improve performance. Learning Disability Quarterly, 30(1),
9–29.
Feng, M., Heffernan, N. T., & Koedinger, K. R. (2010). Using data mining findings to aid
searching for better cognitive models. In V. Aleven, J. Kay, & J. Mostow (Eds.), Proceedings
of the International Conference on Intelligent Tutoring Systems (pp. 368–370). Heidelberg,
Berlin, Germany: Springer.
Florida TaxWatch. (2007). Final report: A comprehensive assessment of Florida Virtual School.
Retrieved from http://www.floridataxwatch.org/resources/pdf/110507FinalReport
FLVS.pdf
*Frederickson, N., Reed, P., & Clifford, V. (2005). Evaluating Web-supported learning ver-
sus lecture-based teaching: Quantitative and qualitative perspectives. Higher Education,
50(4), 645–664.
Galvis, A. H., McIntyre, C., & Hsi, S. (2006). Framework for the design and delivery of effective
global blended learning experiences. Report prepared for the World Bank Group, Human
Resources Leadership and Organizational Effectiveness Unit. Unpublished manuscript.
*Gilliver, R. S., Randall, B., & Pok, Y. M. (1998). Learning in cyberspace: Shaping the future.
Journal of Computer Assisted Learning, 14(3), 212–222.
Graham, C. R. (2005). Blended learning systems: Definition, current trends, and future
directions. In C. J. Bonk & C. R. Graham (Eds.), Handbook of blended learning: Global per-
spectives, local designs (pp. 3–21). San Francisco, CA: Pfeiffer.
*Hairston, N. R. (2007). Employees’ attitudes toward e-learning: Implications for policy in industry
environments (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses
database. (UMI No. AAT 3257874)
*Harris, J. M., Elliott, T. E., Davis, B. E., Chabal, C., Fulginiti, J. V., & Fine, P. G. (2008).
Educating generalist physicians about chronic pain: Live experts and online education
can provide durable benefits. Pain Medicine, 9(5), 555–563.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic
Press.
Hermann, F., Rummel, N., & Spada, H. (2001). Solving the case together: The challenge of
net-based interdisciplinary collaboration. In P. Dillenbourg, A. Eurelings, & K.
Hakkarainen (Eds.), European perspectives on computer-supported collaborative learning (pp.
293–300). Proceedings of the European conference on computer-supported collabora-
tive learning. Maastricht, The Netherlands: McLuhan Institute.
Horn, M. B., & Staker, H. (2011). The rise of K-12 blended learning. Innosight Institute.
Retrieved from http://www.innosightinstitute.org/mediaroom/publications/education-
publications
*Hugenholtz, N. I. R., de Croon, E. M., Smits, P. B., van Dijk, F. J. H., & Nieuwenhuijsen, K.
(2008). Effectiveness of e-learning in continuing medical education for occupational
physicians. Occupational Medicine, 58(5), 370–372.
*Jang, K. S., Hwang, S. Y., Park, S. J., Kim, Y. M., & Kim, J. (2005). Effects of a Web-based
teaching method on undergraduate nursing students’ learning of electrocardiography.
Journal of Nursing Education, 44(1), 35–39.
Jonassen, D. H., Lee, C. B., Yang, C. C., & Laffey, J. (2005). The collaboration principle in
multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning
(pp. 247–270). New York, NY: Cambridge University Press.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (Vol. 49). Thousand Oaks, CA:
Sage.
*Long, M., & Jennings, H. (2005). “Does it work?”: The impact of technology and professional devel-
opment on student achievement. Calverton, MD: Macro International.
*Lowry, A. E. (2007). Effects of online versus face-to-face professional development with a team-based
learning community approach on teachers’ application of a new instructional practice (Doctoral
dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No.
AAT 3262466)
Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in
distance education. American Journal of Distance Education, 14(1), 27–46.
*Maki, W. S., & Maki, R. H. (2002). Multimedia comprehension skill predicts differential
outcomes of Web-based and lecture courses. Journal of Experimental Psychology: Applied,
8(2), 85–98.
Martyn, M. (2003). The hybrid online model: Good practice. Educause Quarterly. Retrieved
from http://www.educause.edu/ir/library/pdf/EQM0313.pdf
*Mentzer, G. A., Cryan, J., & Teclehaimanot, B. (2007). A comparison of face-to-face and
Web-based classrooms. Journal of Technology and Teacher Education, 15(2), 233–246.
*Midmer, D., Kahan, M., & Marlow, B. (2006). Effects of a distance learning program on
physicians’ opioid- and benzodiazepine-prescribing skills. Journal of Continuing Education
in the Health Professions, 26(4), 294–301.
National Survey of Student Engagement. (2008). Promoting engagement for all students: The
imperative to look within. Bloomington: Indiana University, Center for Postsecondary
Research. Retrieved from http://www.nsse.iub.edu
*Nguyen, H. Q., Donesky-Cuenco, D., Wolpin, S., Reinke, L. F., Benditt, J. O., Paul, S. M.,
& Carrieri-Kohlman, V. (2008). Randomized controlled trial of an Internet-based versus
face-to-face dyspnea self-management program for patients with chronic obstructive pul-
monary disease: Pilot study. Journal of Medical Internet Research. Retrieved from
http://www.jmir.org/2008/2/e9/
*Ocker, R. J., & Yaverbaum, G. J. (1999). Asynchronous computer-mediated communica-
tion versus face-to-face collaboration: Results on student learning, quality and satisfac-
tion. Group Decision and Negotiation, 8(5), 427–440.
*O’Dwyer, L. M., Carey, R., & Kleiman, G. (2007). A study of the effectiveness of the
Louisiana Algebra I online course. Journal of Research on Technology in Education, 39(3),
289–306.
*Padalino, Y., & Peres, H. H. C. (2007). E-learning: A comparative study for knowledge
apprehension among nurses. Revista Latino-Americana de Enfermagem, 15, 397–403.
Paradise, A. (2008). 2007 State of the industry report. Alexandria, VA: American Society of
Training and Development.
Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions:
2006-07. Washington, DC: National Center for Education Statistics, U.S. Department of
Education.
Pearson, P. D., Ferdig, R. E., Blomeyer, R. L., Jr., & Moran, J. (2005). The effects of technology
on reading performance in the middle school grades: A meta-analysis with recommendations for pol-
icy. Naperville, IL: Learning Point Associates.
*Peterson, C. L., & Bond, N. (2004). Online compared to face-to-face teacher preparation
for learning standards-based planning skills. Journal of Research on Technology in Education,
36(4), 345–361.
Picciano, A. G., & Seaman, J. (2007). K–12 online learning: A survey of U.S. school district admin-
istrators. Retrieved from http://www.sloanc.org/publications/survey/K-12_06.asp
Picciano, A. G., & Seaman, J. (2008). Staying the course: Online education in the United States.
Retrieved from http://www.sloanc.org/publications/survey/pdf/staying_the_
course.pdf
Riel, M., & Polin, L. (2004). Online communities: Common ground and critical differences
in designing technical environments. In S. A. Barab, R. Kling, & J. H. Gray (Eds.),
Designing for virtual communities in the service of learning (pp. 16–50). Cambridge, England:
Cambridge University Press.
*Rockman et al. (2007). ED PACE final report. Submitted to the West Virginia Department
of Education. Retrieved from http://www.rockman.com/projects/146.ies.edpace/final-
report
Rooney, J. E. (2003). Blending learning opportunities to enhance educational program-
ming and meetings. Association Management, 55(5), 26–32.
Rudestam, K. E., & Schoenholtz-Read, J. (2010). The flourishing of adult online education:
An overview. In K. E. Rudestam & J. Schoenholtz-Read (Eds.), Handbook of online learning
(pp. 1–18). Los Angeles, CA: Sage.
*Schilling, K., Wiecha, J., Polineni, D., & Khalil, S. (2006). An interactive web-based curricu-
lum on evidence-based medicine: Design and effectiveness. Family Medicine, 38(2),
126–132.
*Schmeeckle, J. M. (2003). Online training: An evaluation of the effectiveness and effi-
ciency of training law enforcement personnel over the Internet. Technology, 12(3),
205–260.
*Schoenfeld-Tacher, R., McConnell, S., & Graham, M. (2001). Do no harm: A comparison
of the effects of online vs. traditional delivery media on a science course. Journal of Science
Education and Technology, 10(3), 257–265.
Schwen, T. M., & Hara, N. (2004). Community of practice: A metaphor for online design.
In S. A. Barab, R. Kling, & J. H. Gray (Eds.), Designing for virtual communities in the service
of learning (pp. 154–178). Cambridge, England: Cambridge University Press.
*Sexton, J. S., Raven, M. R., & Newman, M. E. (2002). A comparison of traditional and
World Wide Web methodologies, computer anxiety, and higher order thinking skills in
the inservice training of Mississippi 4-H extension agents. Journal of Agricultural Education,
43(3), 25–36.
Shotsberger, P. G. (1999). Forms of synchronous dialogue resulting from web-based profes-
sional development. In J. Price et al. (Eds.), Proceedings of Society for Information Technology
& Teacher Education International Conference 1999 (pp. 1777–1782). Chesapeake, VA:
AACE.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness
of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664.
Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies.
American Psychologist, 32, 752–760.
development. In C. Dede (Ed.), Online professional development for teachers: Emerging models
and methods (pp. 13–29). Cambridge, MA: Harvard University Press.
Wise, B., & Rothman, R. (2010, June). The online learning imperative: A solution to three looming
crises in education. Washington, DC: Alliance for Excellent Education.
Wisher, R. A., & Olson, T. M. (2003). The effectiveness of web-based training. Alexandria, VA:
U.S. Army Research Institute.
Wylie, R., Koedinger, K. R., & Mitamura, T. (2009, July–August). Is self-explanation always
better? The effects of adding self-explanation prompts to an English grammar tutor. In
Proceedings of the 31st Annual Conference of the Cognitive Science Society. Amsterdam, The
Netherlands.
Young, J. (2002). “Hybrid” teaching seeks to end the divide between traditional and online instruc-
tion: By blending approaches, colleges hope to save money and meet students’ needs. Retrieved
from http://chronicle.com/free/v48/i28/28a03301.htm
*Zacharia, Z. C. (2007). Comparing and combining real and virtual experimentation: An
effort to enhance students’ conceptual understanding of electric circuits. Journal of
Computer Assisted Learning, 23(2), 120–132.
*Zhang, D. (2005). Interactive multimedia-based e-learning: A study of effectiveness.
American Journal of Distance Education, 19(3), 149–162.
*Zhang, D., Zhou, L., Briggs, R. O., & Nunamaker, J. F., Jr. (2006). Instructional video in e-
learning: Assessing the impact of interactive video on learning effectiveness. Information
and Management, 43(1), 15–27.
Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practi-
cal analysis of research on the effectiveness of distance education. Teachers College Record,
107(8), 1836–1884.
Zirkle, C. (2003). Distance education and career and technical education: A review of the
research literature. Journal of Vocational Education Research, 28(2), 161–181.