
Education Tech Research Dev (2013) 61:819–840

DOI 10.1007/s11423-013-9310-9

RESEARCH ARTICLE

A computer-based spatial learning strategy approach that improves reading comprehension and writing

Hector R. Ponce • Richard E. Mayer • Mario J. Lopez

Published online: 24 August 2013


© Association for Educational Communications and Technology 2013

Abstract This article explores the effectiveness of a computer-based spatial learning strategy approach for improving reading comprehension and writing. In reading comprehension, students received scaffolded practice in translating passages into graphic organizers. In writing, students received scaffolded practice in planning to write by filling in graphic organizers and in translating them into passages. Based on a cluster-randomized sampling process, 2,468 students distributed in 12 schools and 69 classrooms participated in the study. Schools were randomly assigned to the computer-based instruction (CBI) group or traditional instruction (TI) group. Teachers assigned to the CBI treatment integrated the applications into the language arts curriculum during one school semester. A standardized test was used to measure reading comprehension and writing. The data were analyzed through a statistical multilevel model. The findings showed that students in the CBI group improved their reading and writing skills significantly more than students under TI, yielding an effect size d = 0.30.

Keywords Computer-based instruction · Reading comprehension · Writing · Learning strategies · Multilevel analysis · Graphic organizers

H. R. Ponce (✉)
Faculty of Management and Economics, University of Santiago of Chile, Av. L. B. O’Higgins,
3363 Santiago, Chile
e-mail: hector.ponce@usach.cl

R. E. Mayer
Department of Psychological and Brain Sciences, University of California, Santa Barbara,
CA 93106-9660, USA
e-mail: rich.mayer@psych.ucsb.edu

M. J. Lopez
Department of Industrial Engineering, University of Santiago of Chile, Av. L. B. O’Higgins,
3363 Santiago, Chile
e-mail: mario.lopez@usach.cl


Introduction

Objective

The objective of this study was to assess the effectiveness of a computer-based instruc-
tional program for teaching reading comprehension and writing based on spatial learning
strategies—that is, visualizing the content of a passage in a spatial layout. In particular,
students learned to analyze or create text passages by completing graphic organizers, which
provide a spatial arrangement of key concepts based on common rhetorical structures, such
as cause-and-effect as shown in Fig. 1. A graphic organizer consists of a spatial
arrangement of words (or word groups) that represent the conceptual structure of a text
passage, such as cause-and-effect (as in a flow chart), compare-and-contrast (as in a
matrix), or a hierarchy (as in a hierarchical network) (Stull and Mayer 2007). Thus, the
main focus of the computer-based instructional program was to give students guided
practice in converting a passage into a graphic organizer during reading and converting a
graphic organizer into a passage during writing.
To accomplish this goal, a set of modular and independent pieces of software (i.e.,
software components) was first developed that implemented spatial learning strategies to
support specific aspects of reading comprehension and writing. These strategies relied
principally on graphic organizers (e.g., cause-and-effect diagram, comparison matrix, and
hierarchy network); with the additional inclusion of paragraph templates, and text editors
with functionalities for presenting text, highlighting main ideas, and summarizing. Once
the individual components were developed, a set of software applications was implemented
by integrating these visual schemas. The aim with these applications was to provide
teachers and students with a well-structured set of strategies to improve the cognitive skills
involved in understanding and producing text.
To evaluate the effectiveness of the applications, twelve schools from Santiago, Chile
were randomly selected and invited to participate in the study. The selection process was

Fig. 1 A cause-and-effect graphic organizer


based on a cluster-randomized sampling process. Six schools were assigned to the computer-based instruction (CBI) group and the other six to the traditional instruction (TI) group. In each school, all classrooms from fourth grade (i.e., ages 9–10), sixth grade (i.e., ages 11–12), and eighth grade (i.e., ages 13–14) were selected. A total of 33 classrooms were
part of the CBI group, which used the software applications during language arts
instruction (i.e., Spanish), whereas 36 classrooms were part of the TI group, which
received traditional language arts instruction without the software. Students in both groups
completed a standardized pretest and posttest on reading and writing. The instructional
activities implemented by participant teachers using the software applications lasted for
one school semester and covered approximately one-third of the standard curriculum in
each grade (i.e., an average of 14 sessions of 90 min each).

Theoretical framework

The theoretical basis for teaching students how to use a graphic organizer strategy is that it helps them learn to engage in generative processing during learning. According to Wittrock's (1989) and Mayer and Wittrock's (2006) generative theory of reading comprehension and Mayer's (2009) cognitive theory of multimedia learning, generative processing refers to the deeper cognitive processing required for making sense of the presented material.
In particular, as summarized in Table 1, asking learners to fill in graphic organizers as they read texts is intended to prime three crucial cognitive processes necessary for generative learning: selecting—paying attention to relevant information; organizing—mentally arranging it into a coherent structure; and integrating—relating it to prior knowledge.
Concerning the cognitive process of integrating, choosing a graphic organizer during
reading a passage primes relevant prior knowledge about a common rhetorical structure.
For example, using a compare-and-contrast graphic organizer in the form of a matrix
activates the learner’s compare-and-contrast schema from long-term memory in which two
or more items are compared along several dimensions. Concerning the cognitive processes
of selecting and organizing, using a graphic organizer (such as a compare-and-contrast
structure) helps students search for relevant pieces of information in the text and mentally
organize them within a structure. Repeated practice with constructing a graphic organizer
is intended to help learners develop a storehouse of schemas for basic rhetorical structures—including compare-and-contrast (as a matrix), cause-and-effect (as a flowchart), and
hierarchy (as a network). This storehouse of graphic organizers can be used to guide
reading and writing of expository text (Cook and Mayer 1988).
The key to the spatial learning strategy in this study is that it helps learners to build a
storehouse of schemas based on common rhetorical structures useful for reading and
writing text. A rhetorical structure is a generalized type of prose organization such as

Table 1 Three cognitive processes primed by practice in constructing a graphic organizer

Cognitive process | Description | Example for graphic organizer
Selecting | Focus on relevant information | Focus on descriptions of each item on each dimension (to fit in cells of a matrix)
Organizing | Arrange relevant information into a coherent structure | Create a matrix with each item in a row and each dimension of comparison in a column
Integrating | Prime relevant prior knowledge | Prime compare-and-contrast schema


compare-and-contrast, process, and classification, as summarized in Table 2 (Chambliss and Calfee 1998; Meyer and Poon 2001; Cook and Mayer 1988). In the present study,
students practiced on a diverse array of graphic organizers representing common rhetorical
structures. Lab-based research has shown that training in using graphic organizers for note
taking can improve learning new material based on immediate tests (Dansereau et al. 1979;
Holley et al. 1979; Kiewra et al. 1991), but few studies involve long-term classroom
interventions with randomized controls and assessments based on standardized tests of
reading and writing.

Literature review

Research on graphic organizers can be divided into (a) learning strategy studies in which
students learn to construct spatial representations (e.g., graphic organizers) and
(b) instructional design studies in which students receive a teacher-made spatial representation (e.g., graphic organizer along with text). The present study focuses on the first
issue—that is, helping students learn spatial learning strategies for reading and writing
based predominantly on the students’ construction of graphic organizers representing
common rhetorical structures.
The definition of graphic organizer presented in the introduction is broad enough to
include concept maps (Novak and Gowin 1984), knowledge maps (O’Donnell et al. 2002),
matrices (Kiewra et al. 1999; Robinson 1998), flowcharts (Chambliss and Calfee 1998),
outlines (Balluerka 1995), or ordered lists (Cook and Mayer 1988). Consistent with generative learning theory, researchers have asked students to generate graphic organizers in
order to foster deeper processing during learning (Alvermann 1981; Hall et al. 1997;
Katayama and Robinson 2000), assess learner understanding (McCagg and Dansereau
1991), and promote better connections with existing memory (Kiewra et al. 1988, 1991).
However, researchers have also noted that constructing graphic organizers requires
extensive training (Hall and Sidio-Hall 1994; McCagg and Dansereau 1991; Ponce et al.
2012) and instructor guidance (Robinson et al. 2003). In short, although converting text to
graphic organizers and vice versa has been proposed as a generative learning activity, its
effectiveness can be diminished when students are overwhelmed with extraneous processing—that is, cognitive processing that does not serve the instructional goal. In order to maximize the benefits of graphic organizers while minimizing the costs, the computer-based instructional program in this study involves extensive guided practice.
The present large-scale, school-based study is motivated by promising preliminary
evidence mainly from small-scale or short-term studies. For example, Alvermann and Boothby (1983) reported that 4th graders were better able to remember important information from a disorganized text if they had received training in using graphic organizers.
In a follow-up study, Alvermann and Boothby (1986) reported that a 14-day program in

Table 2 Examples of rhetorical structures represented by graphic organizers

Type | Description | Example | Format
Comparison | Compares two or more things along several dimensions | Compares two mammals and reptiles in terms of key features | Matrix
Process | Describes a cause-and-effect chain | Explains how the human ear works | Flow chart
Classification | Groups elements into categories | Lists and defines the types of clouds | Hierarchy


which students learned to fill in graphic organizers while reading text improved reading
comprehension on a subsequent test as compared to a group receiving no instruction.
Similarly, Englert et al. (1991) carried out an intervention study with 4th and 5th graders to
improve their reading and writing skills through teacher and student dialogue about
expository writing strategies, prose structure processes with graphic organizers, and self-
regulated learning. Compared with the control group, the experimental group showed
improved skills on writing text structures, such as explanations and comparison-and-contrast texts, and on near transfer tests of reading comprehension. In a study that lasted
one school semester, Ponce et al. (2012) tested the use of a computer application based on
visual schemas with several 4th grade classrooms aimed at improving reading comprehension. They found that students in the experimental classrooms showed a significantly
better performance than those in the control classrooms.
In a short-term classroom study, Williams et al. (2005) taught 2nd graders how to
comprehend compare-and-contrast expository text through an instructional program that
included graphic organizers. Compared to conventional instruction, the focused instructional program resulted in improved reading comprehension on new compare-and-contrast
texts but not on other text structures. This is a promising preliminary finding that
encourages a broader, longer-term study such as the current one.
In short-term lab studies with college students, some researchers found improved
learning when students were asked to fill in partially completed graphic organizers (Robinson et al. 2006; Stull and Mayer 2007). Explicit training in note-taking using graphic organizers has been shown to improve learning from text in short-term studies with
immediate tests (Cook and Mayer 1988; Holley et al. 1979).
The major advances in the present study include testing the effectiveness of spatial
learning strategy instruction (1) in authentic classrooms, (2) over a long term, (3) with a
large randomly assigned sample, and (4) using a standardized language arts test of reading
comprehension and writing.

Predictions

Based on generative learning theory, we expect that guided practice in constructing graphic
organizers for a variety of texts will improve students’ cognitive processes for reading and
writing by providing them with a storehouse of schemas for common rhetorical structures,
and skills in using them to select and organize relevant information. Thus, generative
learning theory predicts that students who receive computer-based training in how to use
graphic organizers (CBI group) will perform better on a subsequent standardized test of
reading and writing than students who receive conventional instruction in language arts (TI
group). This prediction is the primary focus of the study. If the treatment is as robust as we
expect, we also predict that the CBI will be equally effective for boys and girls and for each
grade level.

Method

Participants and sampling procedure

The participants were 2,468 students grouped in 69 classrooms from 12 schools. Table 3
shows the number of classrooms and students in each group at each grade level. There were
1,265 students in the CBI group and 1,203 in the TI group.


Table 3 Number of classrooms and students per grade

Grade | CBI classrooms | CBI students | TI classrooms | TI students
4th | 10 | 378 | 12 | 393
6th | 12 | 472 | 12 | 426
8th | 11 | 415 | 12 | 384
Total | 33 | 1,265 | 36 | 1,203

The schools were selected through a cluster-randomized sampling process (Hayes and Bennett 1999). This process involves two phases. First, to compute the student sample (n) in each school (cluster), we used

n = (z_{\alpha/2} + z_\beta)^2 (\sigma_0^2 + \sigma_1^2) / (\mu_0 - \mu_1)^2    (1)

Second, to compute the school sample, we used

c = 1 + (z_{\alpha/2} + z_\beta)^2 [ (\sigma_0^2 + \sigma_1^2)/n + k^2 (\mu_0^2 + \mu_1^2) ] / (\mu_0 - \mu_1)^2    (2)

The coefficient k is the standard deviation divided by the mean. If we define the effect size as d = (\mu_0 - \mu_1)/\sigma (Cohen 1988), then Eq. 1 reduces to n = 2(z_{\alpha/2} + z_\beta)^2 / d^2. Therefore, taking into account a statistical power of 0.80 (1 - \beta, z_\beta = 0.842), a confidence level of 0.95 (z_{\alpha/2} = 1.96), equal variances between groups (\sigma_0 = \sigma_1), and a medium effect size (d = 0.5) (Moran et al. 2008), the number of students required in each school is n = 63. A typical classroom in a Chilean school has between 30 and 45 students, so in order to have this many students participating in the intervention, a minimum of two classrooms per grade was needed. The data available for the computations required in Eq. 2 were the following. From the validation of the RW test developed by Medina et al. (2009), we used the mean score obtained by students in fourth grade in the Metropolitan region: 61 points with SD = 17.9. Meanwhile, to estimate the coefficient k, we considered the average scores in the Chilean standardized test named SIMCE¹ for reading in 2009. The results for the 35 schools were M = 268.66 and SD = 15.88. Therefore, k = 0.059 (i.e., 15.88/268.66). Thus, using the computed k and n, assuming equal variances, and replacing them in Eq. 2, we obtained c = 5 schools. To account for possible school attrition, we decided to select 6 schools at random. We recruited schools from three municipalities that shared a similar socio-economic background. Additionally, we only considered municipal schools or private schools subsidized by government vouchers (i.e., corresponding to around 90 % of all schools in Chile). Thirty-five schools located in the selected municipalities fulfilled these requirements.
During the selection process, a first group of six randomly selected schools was contacted, and then a second group of six, and finally, a third group of six, until twelve schools
accepted the invitation to be part of the CBI or TI group. According to data from the
Chilean Ministry of Education, the schools selected shared the following categorizations.
In the CBI group, three were municipal and three subsidized. In terms of socioeconomic

1 SIMCE ("Sistema de Medición de la Calidad de la Educación" or System to Measure the Quality of Education) is a compulsory, standardized test taken by all Chilean students in 4th, 8th and 12th grades that measures achievement in reading, math and science.


status, one school was classified as medium–low, four as medium, and one as medium–high. In the TI group, three schools were municipal and three subsidized. In terms of socioeconomic status, five schools were categorized as medium and one as medium–high.
Therefore, the schools selected for the CBI and TI groups were highly similar under these
parameters.

Measures and instruments

The independent variable was the instructional method for teaching reading comprehension and writing skills. Students in the CBI group were taught focusing on the spatial
learning strategies integrated into the software applications. Meanwhile, students in the TI
group were taught using more traditional methods and without the support of computer-based applications for reading or writing. The main dependent measure for both groups
was reading comprehension and writing score as measured by the RW Test. Additionally,
we measured, at the end of the implementation process, the performance of the CBI group
on the learning strategy test.

The RW test

To measure reading comprehension and writing we used the 4th, 6th, and 8th grade
versions of a standardized test called (in Spanish) Prueba de Comprensión Lectora y
Producción de Textos (Medina et al. 2009, 2010). We will refer to this test as the RW Test
because it assesses reading comprehension and writing; it was released in 2010. It is the
only test validated and standardized in Chile that measures both reading comprehension
and writing skills in one application. It has versions from kindergarten through eighth
grade.
The RW tests used share a similar structure. The first item asks students to read a text
twice, and without looking back at the text, they are asked to complete a cloze subtest, with
missing words from the original text. One point is given for each word correctly identified.
Next, a series of multiple-choice questions are presented regarding the type and content of
the text presented in the first item. After that, another item asks students to write a short
descriptive text by offering them an imaginary situation derived from the text in item one.
Subsequently, the RW test includes new texts (e.g., a map of Chile with information about
renewable energies and a short newspaper article with similar information) that students
are asked to read. A series of multiple-choice questions follows, along with questions that involve completing a matrix or a diagram with information related to the texts. Finally, students are
asked to write another text, usually an argumentative one, explaining a particular position
on a topic (e.g., why it is important to develop non-conventional renewable energies in
Chile). The maximum score for the RW test is 97, 117 and 114 points for fourth, sixth and
eighth grades, respectively.
We used the same paper-based version of the RW test in each grade as pretest and
posttest. The research team administered the test, and teachers did not participate in its administration. Testing took place during normal classroom hours and
lasted for 90 min in each instance. A group of three language arts teachers, members of the
research team and not associated with the participant schools, scored both the pretest and
posttest following the instructions and rubrics provided in the respective application
manuals. The tests were blindly scored. Each test was scored only once, due to the number
of tests involved, the complexity of the scoring process, and the limited resources available. To check the reliability of the marking process, the marking team double-scored a


sample of 55 tests; we found a high consistency (r = 0.81) among the evaluators. Details
of the validity (e.g., content and construct) and reliability (e.g., test–retest) of the RW Test
are described in Medina et al. (2009).

The learning strategy test

Additionally, we administered a learning strategy test to assess how well students performed on the set of strategies embedded in each application. This test was paper-based
and it was administered only to the CBI group at the end of the treatment. Three similar tests were devised, one for each grade, with texts and demands appropriate for each level. The
three tests assessed a common set of skills: (1) establishing hypotheses before reading a text (with information from its title), (2) identifying key information in a text using the highlighting strategy, (3) making comparisons, (4) identifying causal relationships, and (5) identifying sequences through graphic organizers. The ability to write a short story (two
paragraphs) was additionally assessed in fourth grade, and the ability to write an expository
text and define a word using a definition graphic organizer was added in eighth grade. To
evaluate these skills, students were given a set of texts, activities and precise instructions.
Each test consisted of nine items, and a set of rubrics was devised to evaluate and score them. The same team that assessed the RW tests scored the learning strategy test. The
tests were blindly scored. Each test was scored only once, due to the number of tests
involved, the complexity of the scoring process, and the limited resources available. The
marking team agreed on how to use the rubrics with a small sample of tests and we did not
score this test more than once.

The software applications

The development process followed a four-step approach. The first step was to identify
those basic cognitive skills and text types recommended in the Chilean learning standards
for language arts in fourth, sixth, and eighth grades (MINEDUC 2008a, b, 2009). As a
result, a list of cognitive skills (e.g., compare-and-contrast, sequence, simple comparison,
cause-and-effect relationships) and text types such as narratives (e.g., stories, fables, and
poems), and expository text (e.g., news articles and informational texts) were determined.
The second step was to establish those visual schemas, mainly graphic organizers and
paragraph templates, that were most appropriate to support the development of each cognitive skill and its rhetorical structures (McKnight 2010; Beyer 1997; Cook and Mayer
1988).
The third step involved the design and implementation of each graphic organizer and
paragraph template following a software component methodological approach (Almarza
et al. 2011; Laitkorpi and Jaaksi 1999). In other words, each organizer and template was
implemented as a self-contained piece of software (i.e. a component) and the components
were used to implement larger applications. Figures 1 and 2 represent two examples of
software components. Figure 1 shows a cause-and-effect graphic organizer (with an
example of the causes and effects of pollution) and Fig. 2 presents one possible corresponding paragraph structure. Additionally, text editors were implemented to integrate texts with the highlighting and summarizing strategies. All software components were
developed in Spanish, so the components shown in the figures were translated into English
for this article.
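The component idea can be sketched roughly as follows. This is a minimal illustration only: the class and method names are our invention, and the actual components were interactive, Spanish-language GUI elements rather than plain data classes.

```python
from dataclasses import dataclass, field

@dataclass
class CauseEffectOrganizer:
    """A self-contained 'software component': the slots a student fills in."""
    causes: list = field(default_factory=list)
    effects: list = field(default_factory=list)

    def to_paragraph(self):
        """Translate the completed organizer into a paragraph template,
        mirroring the link between the organizer (Fig. 1) and the
        paragraph structure (Fig. 2)."""
        return (f"Because {' and '.join(self.causes)}, "
                f"{' and '.join(self.effects)}.")

# A larger application composes such components into an instructional sequence.
organizer = CauseEffectOrganizer(
    causes=["factories emit smoke"],
    effects=["air quality declines"],
)
paragraph = organizer.to_paragraph()
```

The design point is that each organizer or template is self-contained, so the same component can be reused across the fourth-, sixth-, and eighth-grade applications.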
Once the software components were developed and tested, the fourth step was to
generate the applications following similar instructional sequences. The instructional


Fig. 2 A cause-and-effect paragraph structure

sequences included in the software covered how to analyze texts and the phases involved in writing texts. The analytical tools available to support reading comprehension
in the applications were reading (e.g., highlighting the important aspects of the text) and
analyzing (e.g., identifying cause-and-effect and similarities and differences using the
graphic organizers provided). The writing tools were structured to provide guidance on the
writing processes (Hayes and Flower 1980) of planning (e.g., using the brainstorming
graphic organizers to generate ideas), organizing information (e.g., using paragraph
structures), and writing and publishing the text (e.g., writing a summary using the embedded editor or a word processor). The main difference between the applications was the type of
texts used. In fourth grade, stories were used and an additional step was included to support
the planning phase in the writing process by adding functionalities to transform parts of the
story grammar (e.g., character or conflict) of the original story. In sixth grade, stories and
news articles were used, and in eighth grade, mainly expository texts. Figure 3 shows a
screen image of the application used in fourth grade.

Procedure

Teacher instruction

Twenty-eight teachers participated in the implementation of the computer-based instructional programs in the 33 classrooms. Five teachers taught more than one course. Before
the implementation, teachers were trained through a series of workshops devised in two
phases that totaled 24 h of training. The first phase consisted of three sessions of 4 h each
on the conceptual and procedural aspects involved in learning and teaching strategies to
improve reading comprehension and writing. In the second phase, participant teachers were
required to attend three practical workshops of 4 h each on the use of the software. Most
teachers reported that they had little experience using computer technology in the class-
room, and those who had some experience indicated that they basically showed videos
from the internet and some had used PowerPoint. Those teachers with less experience
using computer applications were offered additional training. During the training phase,


Fig. 3 Example of one of the applications used in the study. The screen shows the instructional sequence (reading, analyzing, transforming, revising, and writing), a text editor with functions to highlight main ideas, an instructional sequence to identify the story grammar (e.g., characters and conflict), and graphic organizers to support the analysis of story grammar.

teachers made some recommendations to improve the applications, such as adapting some features or adding others to fit their educational needs.

Classroom implementation

The implementation of the instructional methods occurred in the following way. First,
teachers from both the CBI and TI groups were contacted at the beginning of the school
year to set a date to administer the RW pretest. The pretest was administered during the last
3 weeks of March 2010 in all classrooms. In the CBI group, 1,163 students took the pretest,
and in the TI group 1,052 students (resulting in attrition rates of 8 and 13 % in each group,
respectively).
Second, the application of the CBI began in April in the CBI group while classrooms in
the TI group continued with their own planned instructional activities. Teachers in the CBI
group agreed to integrate our software-based instructional activities in the language arts
curriculum during the first school semester of the year 2010. We initially asked them to
dedicate 24 sessions of 90 min each to teach the strategies embedded in the applications;
however, as the implementation progressed it was difficult to achieve this goal. An average
of 14 sessions (SD = 4) in each classroom made use of the software and the associated
strategies, which corresponded to approximately 30 % of the sessions available in language arts. The other 70 % was spent on the standard curriculum for language arts,
previously planned by each teacher within their schools.
Third, the teaching and learning activities in the CBI group were conducted following
the instructional sequences embedded in the applications. The instructional activities were
implemented in their normal classrooms and in the computer lab. Each lesson began with
the teacher presenting a text to students, either in class using a computer and data projector
or in the computer lab. Students were asked to conjecture about the content of the text from
reading its title (pre-reading) and to formulate two or three hypotheses by completing
graphic organizers indicated by their teachers (e.g., use the "brainstorming" diagram to


generate ideas). If students were in the classroom, they completed a printed version of the
graphic organizer; if they were in the computer lab, they completed it directly in the
respective application. Teachers were encouraged to explain each learning strategy before
its use and to give feedback on students’ work. This type of pre-reading activity was
intended to activate student prior knowledge and generate conditions to better comprehend
a text. Next, teachers asked students to read the text, highlighting main ideas and key components, followed by a review of the hypotheses stated in the pre-reading session. Subsequently, in the analysis phase of the instructional sequence, students were
asked to review in more detail the key components of the text being analyzed (e.g., a story,
news or expository text) by selecting a graphic organizer and filling in the required
information. The applications offered the option to select among several graphic organizers, although teachers provided guidance on which organizers were the most relevant
for the source text.
After the respective text was analyzed, students were asked to write a new story, news
article or expository text by following the plan, revise and write process. The graphic
organizers were linked to paragraph structures so the writing process was facilitated by
offering students a spatial representation of the paragraph with connecting words and
transitions. Students planned their essays by filling in an appropriate graphic organizer.
Finally, the applications incorporated a text editor to help revise and write the text, with
functionality to copy the paragraph structures composed in the previous phase.
Due to the number of classrooms involved in the study and the need to more directly
support teachers with the software applications, a group of senior psychology undergraduate students was hired to support the implementation in each school. They spent an
average of 4 h a week in each school interacting with teachers and support staff. They were
particularly careful to make sure that teachers and students did not have problems using the
software applications, although the applications were relatively simple and easy to use.
These assistants were not permitted to teach any of the sessions, although on some
occasions teachers asked them to explain to students how to use the software in the computer lab.
Among the difficulties, we found strong initial resistance from teachers to taking their
students to the computer lab, despite an earlier agreement that they would use it as planned
in the workshops. One consequence was that, on average, they did not complete the
original 24 sessions we asked them to dedicate to our instructional program, as indicated
previously. Two reasons seem to explain this situation. First, we underestimated the
complexity of intervening in 33 classrooms across 12 schools and the additional support
required by teachers, especially in the lab sessions. Thus, it was demanding for the research
team to coordinate the start of the program in each school and classroom. Second, the
participating teachers' lack of experience with using information and communication
technology in the classroom, together with the lack of centralized computer support in
their schools, also significantly slowed the implementation of the program. Although all
schools had a computer lab with between 20 and 30 computers, these were not always
connected to the internet and/or not always in operation. A lack of dedicated computer
support staff was also common. In this context,
the role of the psychology students assisting the process was essential not only to overcome
the initial logistical difficulties but also to give us feedback on the use of the software in the
computer lab and to maintain a communication link between the research team and the
schools. Teachers were more comfortable using the software applications in the classroom,
with a data projector and computer, and students working on paper-based learning
material.


The computer applications were complemented with printed instructional material that
contained exercises on the learning strategies embedded in each application. This material
explained how each strategy worked and offered detailed examples for teachers and
students. Teachers also included their own texts within each application,
usually taken from the textbooks employed in class. The paper-based material used by
students was essential to begin using the applications since it gave teachers the opportunity
to use the available computer and data projector in each classroom.
The schools in the TI group followed the standard curriculum. None of these schools
used any computer-based instructional approach to teach reading comprehension
and writing strategies. Each session was organized according to the school's yearly
plan and followed the learning standards for the respective grade. In general, teachers used
the before-during-after reading approach to facilitate reading comprehension. This
approach consists of specific strategies to activate students’ background knowledge,
establish a reading purpose and predict text content before actually reading the text. During
reading, students are encouraged to use specific reading comprehension strategies, such as
identifying main ideas, recognizing the structure of the text and its main elements, and
confirming or rejecting predictions. Finally, after reading, students engage in drawing
conclusions, summarizing, and connecting the text with their prior knowledge (Condemarín
and Medina 2000). Silent reading in class was also widely used, followed by teacher-led
questions. A substantial proportion of class time was spent teaching spelling and grammar.
Less attention was given to developing writing, although students were normally asked to
write short texts. Similar to the CBI group, the TI group also
concentrated on stories, news articles and expository texts which are part of the standard
curriculum. We did not collect data about how teaching activities were actually implemented
in this group; instead, we interviewed a group of teachers from the TI group and asked
them how they carried out their lessons.
The posttest was administered during the first 3 weeks of September 2010. In the CBI
group, 970 students responded to the posttest and, in the TI group, 888 students (an
additional attrition rate of 16.6 % in the CBI group and 15.6 % in the TI group compared
with the pretest). The main reasons for attrition were that students were absent during the
administration of the tests and many students moved to other schools during the semester.
This phenomenon was observed in all schools.

Data analysis

Our main goal was to determine whether there was a significant difference in performance
on the RW Test between the CBI and TI groups after treatment. Given that the data were
collected at the student level (e.g., pretest and posttest) and the applications were imple-
mented at the classroom level (i.e., whole classrooms guided by their teachers), a statistical
multilevel approach was considered (De Wever et al. 2007). We computed the intra-class
correlation (ICC); its value was 0.22, much greater than the recommended maximum of
0.05 for justifying a single-level analysis (Heck et al. 2010). The specification of this model
in hierarchical form is as follows. The level-one (student-level) model is:
Posttestij = β0j + β1jPretest + β2jGender + β3jPretest × Gender + eij. The student-level
model includes pretest scores and gender as variables that allowed us to control for
pretreatment differences in RW Test scores and for achievement
differences between girls and boys. We also added the pretest-by-gender interaction term
to test whether differences in average posttest scores between girls and boys were due to
variations in their pretest scores. The level-two (classroom-level) model corresponds to:


β0j = γ00 + γ01Treatment + γ02Grade + μ0j; β1j = γ10 + γ11Treatment + γ12Grade + μ1j;
β2j = γ20 + γ21Treatment + γ22Grade; β3j = γ30.
The classroom-level model includes the variables treatment and grade. Treatment was
coded as 1 if the classroom was under CBI and 0 if it was under TI. Grade is a categorical
variable indicating whether the classroom was in grade 4, 6, or 8 (coded as 0, 1, and 2,
respectively). The intercept γ00 corresponds to the mean posttest score for the TI group and
the slope γ01 represents the mean difference between the CBI and TI groups, controlling
for the other variables included in the model. In other words, it indicates the main effect of
CBI compared with TI. The other intercepts and slopes are more clearly represented
in the full or mixed model: Posttestij = γ00 + γ01Treatment + γ02Grade + γ10Pretest +
γ11Pretest × Treatment + γ12Pretest × Grade + γ20Gender + γ21Gender × Treatment +
γ22Gender × Grade + γ30Pretest × Gender + (μ0j + μ1jPretest + eij).
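As an illustrative sketch (not the authors' SPSS code), a two-level model of this kind can be fitted in Python with statsmodels' MixedLM. All variable names and the synthetic dataset below are hypothetical stand-ins for the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic two-level data: students nested in classrooms, with a
# classroom-level treatment indicator and classroom random effects.
rng = np.random.default_rng(42)
rows = []
for c in range(30):                  # 30 classrooms
    treat = c % 2                    # half CBI (1), half TI (0)
    u0 = rng.normal(0, 0.4)          # random intercept u0j
    u1 = rng.normal(0, 0.1)          # random pretest slope u1j
    for _ in range(25):              # 25 students per classroom
        pre = rng.normal()
        post = 0.25 * treat + (0.5 + u1) * pre + u0 + rng.normal(0, 0.8)
        rows.append((c, treat, pre, post))
df = pd.DataFrame(rows, columns=["classroom", "treatment", "pretest", "posttest"])

# Random intercept and random pretest slope per classroom, mirroring
# the (u0j + u1j * Pretest + e_ij) term of the full model.
model = smf.mixedlm("posttest ~ treatment + pretest", df,
                    groups="classroom", re_formula="~pretest")
result = model.fit(reml=True)        # REML estimation, as in the study
print(result.fe_params)              # fixed effects: intercept, treatment, pretest
```

The fixed-effect estimate for `treatment` plays the role of γ01, the adjusted CBI-versus-TI difference.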
The term μ1jPretest represents the assumption that pretest slopes vary randomly among
classrooms. We also assumed that posttest differences due to gender (if any) would
be similar across classrooms; that is, we assumed only fixed effects (γ21 and γ22) and did
not include random effects. The full model explicitly represents the cross-level interactions.
Additionally, three relevant coefficients were considered: the ICC (ρ), effect size (d), and
proportion of variance explained between and within clusters. The ICC was computed as
ρ = τ/(τ + σ²), where τ is the variation between classrooms, σ² is the variation within
classrooms, and τ + σ² is the total variation. The standardized effect size of treatment was
computed as d = γ01/(τ + σ²)^(1/2), where γ01 represents the difference between the mean for the
CBI group and the mean for the TI group. The interpretation of the effect size is the same as
Cohen's d (Spybrook 2008). Finally, the proportion of within-classroom (student-level)
variance explained by the treatment model was computed as (σ²(null) − σ²(treatment))/σ²(null) and
the proportion of between-classroom variance explained by the treatment model was
computed as (τ(null) − τ(treatment))/τ(null) (Ma et al. 2008).
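To make these definitions concrete, the following sketch plugs the rounded variance estimates reported later in Table 6 into the formulas. Because the article's coefficients were computed from unrounded estimates, the results match the reported values only approximately.

```python
def icc(tau, sigma2):
    """Intra-class correlation: share of total variance between classrooms."""
    return tau / (tau + sigma2)

def effect_size(gamma01, tau, sigma2):
    """Standardized treatment effect, interpreted like Cohen's d."""
    return gamma01 / (tau + sigma2) ** 0.5

def prop_explained(v_null, v_treatment):
    """Proportional reduction in variance from the null to the treatment model."""
    return (v_null - v_treatment) / v_null

# Rounded estimates from Table 6: tau(null) = 0.23, sigma2(null) = 0.79,
# tau(treatment) = 0.15, sigma2(treatment) = 0.57, gamma01 = 0.25.
print(icc(0.23, 0.79))                # ~0.225 (reported ICC: 0.22)
print(effect_size(0.25, 0.15, 0.57))  # ~0.295 (reported d: 0.30)
print(prop_explained(0.79, 0.57))     # ~0.28 within classrooms (reported: 28 %)
print(prop_explained(0.23, 0.15))     # ~0.35 between (reported: 39 %, from unrounded values)
```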
The data were analyzed with IBM SPSS 17 using the mixed-model functionality with the
REML method, the main-effects option for fixed effects, and the unstructured method for
covariance estimates (Bickel 2007).

Results

Research questions

The primary question addressed in this study is: Does the CBI group outperform the TI
group on the RW Test? According to generative theory, the CBI group should perform
better because learning to use graphic organizers that represent basic rhetorical structures
should improve subsequent performance on reading comprehension and writing. Secondary
questions include: Does the CBI group outperform the TI group on overall improvement
for both girls and boys? Does the CBI group outperform the TI group in each grade level?
If the effects of training are robust, we expect the predicted superiority of the CBI to be
evident for all of these partitions of the data.

Multilevel analysis

To test the hypotheses of this study and answer the above questions, the first task was to
compute the parameters in the multilevel model. Results at student and classroom levels on


the RW Test are presented in Table 4. Table 5 presents the descriptive statistics for the
variables involved in the multilevel model. Data from students who took both the pretest
and posttest and responded to all subparts of the test are included in the analysis, which
reduced the final sample to 1,562 students. The pretest and posttest scores were
standardized as z-scores within each grade, creating a common and comparable score for the
analysis. Additionally, Table 5 presents data about gender (0 = boy, 1 = girl), indicating
that overall 54 % were girls. As indicated previously, the variable instructional method
was labeled treatment in the model; it was coded as 1 for classrooms using the
applications (CBI group) and 0 for classrooms under TI. Grade corresponded to fourth
(coded as 0), sixth (coded as 1), and eighth (coded as 2).
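The per-grade z-standardization described above can be sketched with pandas; the column names and data here are hypothetical.

```python
import pandas as pd

scores = pd.DataFrame({
    "grade": [4, 4, 4, 6, 6, 6],
    "raw":   [10.0, 12.0, 14.0, 20.0, 25.0, 30.0],
})

# Standardize within each grade so that scores from different grades
# share a common scale (mean 0, SD 1 per grade).
scores["z"] = scores.groupby("grade")["raw"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)
print(scores)
```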
The right column of Table 4 shows that gain scores are larger for the CBI group
compared with the TI group at both student and classroom levels. To determine whether
these differences are statistically significant and to account for variance due to students
nested in classrooms, we present the results in a multilevel analysis format. The results are
presented in Table 6, divided into two models: the null model and the treatment model. The
treatment model contains the effect of the CBI (γ01) and only those variables that were
statistically significant, in this case treatment and pretest.
Table 6 also includes the coefficients described in the previous section (i.e., data analysis).
The statistically significant between-classroom variance shows that average
posttest scores varied across participating classrooms (τ00 = 0.23, Wald Z = 4.91,
p < 0.01). The ICC was 0.22, indicating that 22 % of the total variance in the
posttest was attributable to classrooms, while 78 % was attributable to students. These
coefficients demonstrate that a multilevel analysis was the appropriate model in
this case, and they signal that classroom-level variables had an impact on student perfor-
mance on the RW Test. Furthermore, in comparison to the null model, the proportion of
variance explained by the treatment model, which includes the effect of the
software-based instructional approach, was 39 % at the classroom level and 28 % at the
student level. Both implied a substantial reduction in variability. Also, the null model

Table 4 Mean scores (and SDs) at student and classroom level on the pretest and posttest

Level       Group   N     Pretest         Posttest        Gain
                          M       SD      M       SD      M       SD
Student     TI      743   -0.02   0.95    -0.14   1.01    -0.11   0.98
            CBI     819    0.02   1.04     0.13   0.98     0.10   1.00
Classroom   TI      36    -0.05   0.38    -0.17   0.53    -0.11   0.48
            CBI     33    -0.02   0.43     0.08   0.49     0.10   0.32

Table 5 Descriptive statistics for the variables involved in the two-level model

Level       Variable    N       M      SD     Max    Min
Student     Pretest     1,562   0.00   1.00   4.34   -2.37
            Posttest    1,562   0.00   1.00   4.11   -3.40
            Gender      1,562   0.54   0.50   1      0
Classroom   Treatment   69      0.52   0.50   36     33
            Grade       3       1      1      0      2


Table 6 Statistical results of the instructional effects on the RW Test

                                   Null model                       Treatment model
Fixed effects                      Coefficient  SE    t      p      Coefficient  SE    t      p
Intercept (γ00)                    -0.04        0.06  -0.72  0.475  -0.17        0.10  -1.62  0.11
Treatment (γ01)                                                      0.25        0.10   2.40  0.02
Pretest (γ10)                                                        0.49        0.03  17.87  0.00

Random effects                     Variance  SE    Wald Z  p        Variance  SE    Wald Z  p
Between-classroom variance (τ00)   0.23      0.05  4.91    0.00     0.15      0.03  4.66    0.00
Within-classroom variance (σ²)     0.79      0.03  27.31   0.00     0.57      0.02  26.72   0.00

Proportion of variance explained
  Classroom level (between)                                          0.39
  Student level (within)                                             0.28
Effect size (d)                                                      0.30

indicates that for all students (N = 1,562) the mean posttest score was not statistically
different from zero (γ00 = -0.04, p > 0.05). Given that the pretest and posttest scores were
standardized, this was an expected value (as shown in Table 5).
The fit of this model was assessed by comparing it with a preliminary model that contained
only the effect of treatment on the posttest. The present model, which includes
pretest, grade, and gender, showed a significantly better fit than this preliminary model
(χ²(6) = 470, p < 0.01).

Does the computer-based instruction group perform better than the traditional
instruction group on the RW test?

As can be seen in Table 4, at both the student level and the classroom level, students in the
TI and CBI groups scored at about the same level on the pretest (averaging near 0 in
z-score), but the CBI group averaged a higher score on the posttest (a positive z-score) than
the TI group (a negative z-score), yielding a greater pretest-to-posttest gain for the CBI
group (positive) than for the TI group (negative). To determine whether those differences
were significant, we ran the
multilevel model; the results are presented in Table 6. This model yielded a significant
effect of the treatment variable: the computer-supported instructional methods had a
positive effect on student performance on the posttest (γ01 = 0.25, t(63) = 2.40, p < 0.05),
controlling for pretest, gender, and grade. The effect size was d = 0.30, which is
considered moderate in educational contexts (Hattie 2009). In other words, a student in the
CBI group who obtained an average pretest score (equal to zero in the model) scored 0.25
points (in z-score) higher on the posttest than an average student in the TI group. This
significant difference in learning gains favoring the treatment group is the primary finding
of the study.

Does the effectiveness of the CBI treatment differ for boys and girls?

A secondary hypothesis was that the treatment would be equally effective for boys and
girls. In the model, the variable gender (γ20) was not statistically significant,
F(1, 1,539) = 0.63, p > 0.05. Additionally, the interaction between gender and treatment
(γ21) was not statistically significant, F(1, 1,542) = 0.05, p > 0.05. Thus, there is no
evidence that the CBI affected boys and girls differently.

Does the effectiveness of the CBI treatment differ by grade level?

Another secondary issue concerns whether the CBI treatment would be effective across
grade levels. The effect of grade (γ02) was not statistically significant, F(2, 63) = 0.04,
p > 0.05, indicating that students in the CBI group outperformed students in the TI group
regardless of grade level. Additionally, we tested the interaction between grade and pretest
(γ12), which indicated that there was a significant difference in pretest scores only in fourth
grade, favoring the CBI group, F(2, 65) = 9.78, p < 0.05. Because the multilevel model
controlled for differences in pretest scores, the results are consistent with the observation
that the treatment was equally effective at all grade levels, despite initial differences in
pretest scores.

Does performance on the learning strategies have an effect on the RW test?

The learning strategy test was administered to assess how well students were performing
with each strategy embedded in the computer-based instructional methods. This test was
given only to students in the CBI group, since they, and not students in the TI classrooms,
were taught these particular learning strategies. At the student level, the mean score on this
test was 0.51 (SD = 0.18, N = 785) on a scale from 0 to 1; that is, students in the CBI
group answered an average of 51 % of the test correctly. To analyze the relation between
this test and the posttest, we computed a multilevel model similar to the one presented in
Table 6, adding the variable learning strategy (with slope γ30) and removing the effect of
treatment (γ01). In comparison to the null model, the proportion of variance explained by
this new model was 54 % at the classroom level and 33 % at the student level (35 and
17 %, respectively, if we use only learning strategy scores). The results from the multilevel
analysis indicate that the only two factors that significantly predicted posttest performance
in this new model for the CBI group were the pretest (γ10 = 0.41, t(26) = 9.65, p < 0.01)
and learning strategy (γ30 = 0.26, t(543) = 7.20, p < 0.01). Grade and gender were not
significantly related to performance on the posttest. We also tested all the interactions;
none were statistically significant.
Additionally, we analyzed the data at the student level (not in a multilevel model) for
each strategy assessed in the test. Students showed above-average performance on the
following strategies: generating hypotheses before reading a text (M = 0.888,
SD = 0.189), comparing two elements using a graphic organizer (M = 0.591, SD = 0.318),
and defining a word using a matrix for conceptualization (M = 0.576, SD = 0.336). On
the other hand, the strategies that showed below-average performance were identifying
causal relationships using a cause-effect diagram (M = 0.226, SD = 0.246), sequencing
events (M = 0.398, SD = 0.312), and writing a paragraph using a schema (M = 0.408,
SD = 0.344). One possible explanation for the high performance
on the hypothesis strategy is that this was always the first one in the sequence of strategies
implemented in the applications, so teachers tended to ask students to practice with it more
regularly during the study.
The correlation between scores on the RW posttest and the learning strategy test was
low but statistically significant (r = 0.32, p < 0.01). Regarding the strategies analyzed
individually, all were statistically significant but not highly correlated with the RW
posttest; the smallest correlation was with the highlighting strategy (r = 0.19, p < 0.01)
and the largest was with the sequence strategy (r = 0.24, p < 0.01).

Discussion

Empirical contributions

This study shows that students instructed with spatially-based learning strategies and
supported with the computer applications improved their reading and writing skills, as
measured by the RW Test, more than students in the TI group. The multilevel model
indicated that such improvement was statistically significant, with an effect size of
d = 0.30 for all students, supporting the main hypothesis of this study. We attribute these
effects to the instructional methods (e.g., asking students to translate passages into graphic
organizers during reading and translate graphic organizers into passages during writing)
rather than to the instructional medium (i.e., computer-based delivery). These results are in
line with previous findings on the positive impacts of teaching students learning strategies
for reading comprehension and writing (National Institute of Child Health and Human
Development 2000; Graham and Hebert 2010).
The effect size obtained in this intervention is broadly similar to those of other learning
strategy-based studies supporting instruction in reading comprehension and writing. A
review by the National Reading Panel in the United States concluded that instructional
methods using more than one strategy showed an average effect size of 0.32 when
standardized tests were used (National Institute of Child Health and Human Development
2000). In a recent meta-analysis of writing activities aimed at improving reading compre-
hension, Graham and Hebert (2010) found evidence that asking students to write about the
text they read improved students' reading comprehension, with an effect size of 0.40.
Similarly, teaching writing skills, such as the process of writing, text structures, and par-
agraph construction skills, also improved comprehension, although the mean effect size
was smaller, d = 0.18 (in both cases for standardized tests).
Studies examining the use of computer applications to support reading have reported
similar effect sizes. For example, a meta-analysis conducted by Murphy et al. (2002)
obtained an average weighted effect size of 0.35 for the use of discrete software to improve
reading comprehension. In a different study, Moran et al. (2008) estimated a mean
weighted effect size of 0.30 for the use of technology to support reading performance in
the middle school grades when standardized tests were employed (as in our study). Addi-
tionally, for studies that lasted five or more weeks (our intervention lasted 14 weeks
on average), the mean effect size was 0.34.
We did not find significant differences in the effectiveness of the CBI treatment based
on gender or grade level, thereby suggesting that the CBI treatment is robust across
different kinds of learners. In short, the use of spatially based learning strategies to teach
reading and writing was more effective than TI when tested in a large-scale, long-term
classroom study.

Theoretical contributions

The findings are consistent with cognitive theories of learning in which a spatial schema
provides a way of organizing information in working memory, helping learners distinguish
relevant from secondary information and thereby


facilitating reading comprehension and writing processes. In particular, the applications
used in the CBI classrooms implemented spatial schemas to identify, for example, story
grammars (e.g., characters, setting, conflict, and sequence) and to analyze key aspects of a
story, such as comparisons among characters (e.g., using a comparison matrix) and the
sequence of events in the story (e.g., using a scene sequence diagram). The positive
effects of using spatial learning strategies are in line with similar findings from short-term
lab studies that emphasize teaching text structure strategies (Meyer and Poon 2001; Loman
and Mayer 1983) and graphic organizers (Strangman et al. 2004; Stull and Mayer 2007) to
improve reading comprehension and writing. Similarly, the inclusion of spatial schemas to
summarize either the stories being read or the news being analyzed worked well with
teachers and students, which is in line with previous findings that proved the effectiveness
of the summarization strategy to foster writing skills (Graham and Perin 2007a, b). Fur-
thermore, the instructional sequence incorporated in the applications to develop writing
skills was grounded in a research-based theory of the cognitive processes in writing. It
follows the planning, translating, and revising model of writing by Hayes and Flower
(1980) by providing specific instructions and graphic organizers for work in each phase.
This is supported by previous research indicating that writing strategies for planning,
drafting, and revising have a strong effect on the quality of students' writing (Graham and
Perin 2007b), as our own study also shows.

Practical contributions

Care must be taken with how each strategy integrated into the applications is implemented in
the classroom. Our observation logs of some teaching sessions indicated that some teachers
asked students to use too many strategies in one session, without much time for reflection
and feedback, particularly during the computer lab sessions. From the perspective of
cognitive load theory (Sweller et al. 2011), overloading students with too many visual
schemas to choose from makes the decision about which aspects of the text to focus on
slow and confusing.
This overloading of schemas, although it may provide redundant support for comprehending
a text, does not appear to facilitate the development of metacognitive strategies to regulate the
writing process. Possible solutions that we considered were: (1) reducing the number of
elements to be analyzed in a text; (2) separating text analysis and text writing more clearly
in the application, since the latter requires additional cognitive skills (e.g., generating
ideas); and (3) including worked-out examples (Pollock et al. 2002) that guide teachers and
students in how to analyze and write texts.

Limitations and future directions

Due to the extent of this study, there were several aspects related to how teaching and
learning took place that were not evaluated. First, the instructional sequences embedded in
the applications consisted of interdependent procedures, and we were not able to establish
whether all of them are required to improve reading and writing skills or whether a smaller
set would be equally effective. Second, we focused on a general standardized test of reading and writing
to assess learning because our goal was to examine overall effectiveness, but future
research could use tests that are designed to assess specific skills particularly those aligned
with the instructional intervention (such as a more extensive version of the learning
strategies test). Third, the texts were not examined for signaling aids that may have
influenced reading comprehension (Meyer and Poon 2001). Fourth, feedback is an
important learning strategy (Hattie 2009), which may also have influenced the use of each


visual schema and instructional sequence embedded in the applications. Fifth, the degree to
which teachers implemented the instructional programs as intended (i.e., fidelity) was not
systematically measured apart from qualitative observations of some lessons. Sixth, school
management systems and teachers’ support play an important role in the curricular inte-
gration of technology, and particularly, in key subject areas such as language arts. During
the study we did not address this issue so it was difficult to establish how much influence it
had over the implementation of the programs in the classroom.
Regarding future directions, we outline a set of research avenues that can contribute to
improving the computer-based instructional approach. A first line of research involves
contextualizing the approach in classrooms: examining how teachers implement the
program and comparing gains in reading and writing for students in classes where the
program is implemented well versus students in classrooms where it is implemented
poorly. A second line of
research involves adjusting the graphic organizers. Some of the underlying rhetorical
structures may be more important for analyzing texts than others, and some of the resulting
graphic organizers may be more effective than others. Therefore, additional research is
needed to identify the most effective graphic organizers at each grade level. A third line of
research involves adjusting the target texts. Some of the target texts may be more effective
than others. A fourth research avenue concerns determining the proper dosage, that is, how
much practice is needed. Adjusting the tasks in response to student progress is another
potentially useful feature to add to the program in the
future. A fifth research issue involves providing opportunities for guidance. An interesting
issue concerns how much instructional guidance to build into the system and how much to
depend on the teacher to provide instructional guidance. A future version of the applica-
tions could include metacognitive prompts to guide the learner, explanative feedback after
students try each step in a learning episode, or worked-out examples showing how a
successful student carried out an exercise. A final research direction involves the inclusion
of embedded assessment. In particular, research is needed on how to assess learners'
progress, so that a future version of the applications could include embedded assessments
that keep track of each learner's progress on key skills, such as highlighting or completing
each type of graphic organizer.

Acknowledgments Funding for this study was provided by CONICYT (Comisión Nacional de Ciencia y
Tecnología) of Chile through FONDEF Projects TE04i1005 and D08i1010. We also wish to thank Nancy
Collins for her expert advice on the statistical procedures used in this study.

References

Almarza, F. A., Ponce, H. R., & Lopez, M. J. (2011). Software component development based on the
mediator pattern design: The interactive graphic organizer case. IEEE Latin America Transactions,
9(7), 1105–1111. doi:10.1109/tla.2011.6129710.
Alvermann, D. E. (1981). The compensatory effect of graphic organizers on descriptive text. The Journal of
Educational Research, 75(1), 44–48. doi:10.2307/27539864.
Alvermann, D. E., & Boothby, P. R. (1983). A preliminary investigation of the differences in children's
retention of "inconsiderate" text. Reading Psychology, 4(3), 237–246. doi:10.1080/
0270271830040304.
Alvermann, D. E., & Boothby, P. R. (1986). Children’s transfer of graphic organizer instruction. Reading
Psychology, 7(2), 87–100. doi:10.1080/0270271860070203.
Balluerka, N. (1995). The influence of instructions, outlines, and illustrations on the comprehension and
recall of scientific texts. Contemporary Educational Psychology, 20(3), 369–375.


Beyer, B. K. (1997). Improving student thinking: a comprehensive approach. Boston, MA: Allyn and
Bacon.
Bickel, R. (2007). Multilevel analysis for applied research: It’s just regression!. New York: Guilford Press.
Chambliss, M. J., & Calfee, R. C. (1998). Textbooks for learning: nurturing children’s minds. Malden, MA:
Blackwell Publishers.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Mahwah, NJ: Erlbaum.
Condemarín, M., & Medina, A. (2000). Evaluación auténtica de los aprendizajes: un medio para mejorar
competencias en lenguaje y comunicación. Santiago de Chile: Editorial Andrés Bello.
Cook, L. K., & Mayer, R. E. (1988). Teaching readers about the structure of scientific text. Journal of
Educational Psychology, 80, 448–456. doi:10.1037/0022-0663.80.4.448.
Dansereau, D. F., Collins, K. W., McDonald, B. A., Holley, C. D., Garland, J., Diekhoff, G., et al. (1979).
Development and evaluation of a learning strategy training program. Journal of Educational Psy-
chology, 71(1), 64–73.
De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2007). Applying multilevel modelling to content
analysis data: Methodological issues in the study of role assignment in asynchronous discussion
groups. Learning and Instruction, 17(4), 436–447. doi:10.1016/j.learninstruc.2007.04.001.
Englert, C. S., Raphael, T. E., Anderson, L. M., Anthony, H. M., & Stevens, D. D. (1991). Making strategies
and self-talk visible: Writing instruction in regular and special education classrooms. American
Educational Research Journal, 28(2), 337–372. doi:10.2307/1162944.
Graham, S., & Hebert, M. (2010). Writing to read: Evidence for how writing can improve reading. A Carnegie Corporation Time to Act report. Washington, DC: Alliance for Excellent Education.
Graham, S., & Perin, D. (2007a). A meta-analysis of writing instruction for adolescent students. Journal of
Educational Psychology, 99(3), 445–476. doi:10.1037/0022-0663.99.3.445.
Graham, S., & Perin, D. (2007b). Writing next: Effective strategies to improve writing of adolescents in middle and high schools: A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.
Hall, V. C., Bailey, J., & Tillman, C. (1997). Can student-generated illustrations be worth ten thousand
words? Journal of Educational Psychology, 89(4), 677–681. doi:10.1037/0022-0663.89.4.677.
Hall, R. H., & Sidio-Hall, M. A. (1994). The effect of color enhancement on knowledge map processing. The
Journal of Experimental Education, 62(3), 209–217. doi:10.1080/00220973.1994.9943841.
Hattie, J. (2009). Visible learning: a synthesis of over 800 meta-analyses relating to achievement. New
York: Routledge.
Hayes, R. J., & Bennett, S. (1999). Simple sample size calculation for cluster-randomized trials. Interna-
tional Journal of Epidemiology, 28(2), 319–326. doi:10.1093/ije/28.2.319.
Hayes, J., & Flower, L. (1980). Identifying the organization of writing processes. In L. Gregg & E. Steinberg
(Eds.), Cognitive processes in writing (pp. 3–30). Hillsdale, NJ: Erlbaum.
Heck, R. H., Thomas, S. L., & Tabata, L. N. (2010). Multilevel and longitudinal modeling with IBM SPSS.
New York: Routledge.
Holley, C. D., Dansereau, D. F., McDonald, B. A., Garland, J. C., & Collins, K. W. (1979). Evaluation of a
hierarchical mapping technique as an aid to prose processing. Contemporary Educational Psychology,
4(3), 227–237. doi:10.1016/0361-476X(79)90043-2.
Katayama, A. D., & Robinson, D. H. (2000). Getting students "partially" involved in note-taking using
graphic organizers. Journal of Experimental Education, 68(2), 119–133.
Kiewra, K. A., DuBois, N. F., Christian, D., & McShane, A. (1988). Providing study notes: Comparison of three types of notes for review. Journal of Educational Psychology, 80(4), 595–597. doi:10.1037/0022-0663.80.4.595.
Kiewra, K. A., DuBois, N. F., Christian, D., McShane, A., Meyerhoffer, M., & Roskelley, D. (1991). Note-
taking functions and techniques. Journal of Educational Psychology, 83(2), 240–245.
Kiewra, K. A., Kauffman, D. F., Robinson, D. H., DuBois, N. F., & Staley, R. K. (1999). Supplementing floundering text with adjunct displays. Instructional Science, 27(5), 373–401. doi:10.1023/a:1003270723360.
Laitkorpi, M., & Jaaksi, A. (1999). Extending the object-oriented software process with component-oriented
design. Journal of Object-Oriented Programming, 12(1), 41–50.
Loman, N. L., & Mayer, R. E. (1983). Signaling techniques that increase the understandability of expository
prose. Journal of Educational Psychology, 75(3), 402–412. doi:10.1037/0022-0663.75.3.402.
Ma, X., Ma, L., & Bradley, K. D. (2008). Using multilevel modeling to investigate school effects. In A. A. O'Connell & D. B. McCoach (Eds.), Multilevel modeling of educational data (pp. 59–110). Charlotte, NC: IAP-Information Age Publishing.
Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York: Cambridge University Press.


Mayer, R. E., & Wittrock, M. C. (2006). Problem solving. In P. A. Alexander & P. H. Winne (Eds.),
Handbook of educational psychology (pp. 287–303). New York: Routledge.
McCagg, E. C., & Dansereau, D. F. (1991). A convergent paradigm for examining knowledge mapping as a learning strategy. The Journal of Educational Research, 84(6), 317–324. doi:10.1080/00220671.1991.9941812.
McKnight, K. S. (2010). The teacher's big book of graphic organizers: 100 reproducible organizers that
help kids with reading, writing, and the content areas. San Francisco, CA: Jossey-Bass.
Medina, A., Gajardo, A. M., & Fundación-Educacional-Arauco. (2009). Prueba de comprensión lectora y
producción de textos (CL-PT): Kinder a 4to año básico. Santiago de Chile: Ediciones Universidad
Católica de Chile.
Medina, A., Gajardo, A. M., & Fundación-Educacional-Arauco. (2010). Pruebas de comprensión lectora y
producción de textos (CL-PT) 5to a 8vo año básico: Marco conceptual y manual de aplicación y
corrección. Santiago de Chile: Ediciones Universidad Católica de Chile.
Meyer, B. J. F., & Poon, L. W. (2001). Effects of structure strategy training and signaling on recall of text.
Journal of Educational Psychology, 93(1), 141–159. doi:10.1037/0022-0663.93.1.141.
MINEDUC. (2008a). Mapas de progreso del aprendizaje: sector lenguaje y comunicación, mapas de progreso de producción de textos. Santiago: Ministerio de Educación de Chile.
MINEDUC. (2008b). Mapas de progreso: Sector lenguaje y comunicación, mapas de progreso de la lectura. Santiago: Ministerio de Educación de Chile.
MINEDUC. (2009). Objetivos fundamentales y contenidos mínimos obligatorios de la educación básica y media. Santiago: Ministerio de Educación de Chile.
Moran, J., Ferdig, R. E., Pearson, P. D., Wardrop, J., & Blomeyer, R. L. (2008). Technology and reading
performance in the middle-school grades: A meta-analysis with recommendations for policy and
practice. Journal of Literacy Research, 40, 6–58. doi:10.1080/10862960802070483.
Murphy, R. F., Penuel, W. R., Means, B., Korbak, C., Whaley, A., & Allen, J. E. (2002). E-DESK: A review
of recent evidence on the effectiveness of discrete educational software. USA: Planning and Evaluation
Service, U. S. Department of Education.
National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel.
Teaching children to read: an evidence-based assessment of the scientific research literature on
reading and its implications for reading instruction: Reports of the subgroups (NIH Publication No.
00-4754). Washington, DC.
Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press.
O’Donnell, A., Dansereau, D., & Hall, R. (2002). Knowledge maps as scaffolds for cognitive processing.
Educational Psychology Review, 14(1), 71–86. doi:10.1023/a:1013132527007.
Pollock, E., Chandler, P., & Sweller, J. (2002). Assimilating complex information. Learning and Instruction,
12(1), 61–86. doi:10.1016/s0959-4752(01)00016-0.
Ponce, H. R., Lopez, M. J., Labra, J. E., & Toro, O. A. (2012a). Integración curricular de organizadores gráficos interactivos en la formación de profesores. Revista de Educación, 357, 397–422. doi:10.4438/1988-592x-Re-2010-357-066.
Ponce, H. R., Lopez, M. J., & Mayer, R. E. (2012b). Instructional effectiveness of a computer-supported
program for teaching reading comprehension strategies. Computers & Education, 59(4), 1170–1183.
doi:10.1016/j.compedu.2012.05.013.
Robinson, D. H. (1998). Graphic organizers as aids to text learning. Reading Research and Instruction,
37(2), 85–105. doi:10.1080/19388079809558257.
Robinson, D. H., Corliss, S. B., Bush, A. M., Bera, S. J., & Tomberlin, T. (2003). Optimal presentation of
graphic organizers and text: A case for large bites? Educational Technology Research and Develop-
ment, 51(4), 25–41. doi:10.1007/bf02504542.
Robinson, D. H., Katayama, A. D., Beth, A., Odom, S., Hsieh, Y.-P., & Vanderveen, A. (2006). Increasing
text comprehension and graphic note taking using a partial graphic organizer. The Journal of Edu-
cational Research, 100(2), 103–111.
Spybrook, J. (2008). Power, sample size, and design. In A. A. O’Connell & D. B. McCoach (Eds.),
Multilevel modeling of educational data (pp. 273–311). Charlotte, NC: IAP.
Strangman, N., Hall, T., & Meyer, A. (2004). Graphic organizers and implications for universal design for
learning: curriculum enhancement report. Washington, DC: The National Center on Accessing the
General Curriculum.
Stull, A. T., & Mayer, R. E. (2007). Learning by doing versus learning by viewing: Three experimental comparisons of learner-generated versus author-provided graphic organizers. Journal of Educational Psychology, 99(4), 808–820. doi:10.1037/0022-0663.99.4.808.
Sweller, J., Ayres, P. L., & Kalyuga, S. (2011). Cognitive load theory. New York: Springer.


Williams, J. P., Hall, K. M., Lauer, K. D., Stafford, K. B., DeSisto, L. A., & deCani, J. S. (2005). Expository text comprehension in the primary grade classroom. Journal of Educational Psychology, 97(4), 538–550. doi:10.1037/0022-0663.97.4.538.
Wittrock, M. C. (1989). Generative processes of comprehension. Educational Psychologist, 24(4), 345–376.

Hector R. Ponce is an associate professor at the University of Santiago of Chile and scientific director of the Visual Technology Laboratory (VirtuaLab) at the same university. His work concentrates on the research and development of software components that implement visual and spatial learning strategies, particularly to support reading comprehension and writing. He is also a visiting researcher at the University of California, Santa Barbara.

Richard E. Mayer is Professor of Psychology at the University of California, Santa Barbara. His research interests are in applying the science of learning to education, with a focus on multimedia learning. He served as President of Division 15 (Educational Psychology) of the American Psychological Association and Vice President of Division C (Learning and Instruction) of the American Educational Research Association. He is the winner of the Thorndike Award for career achievement in educational psychology and the Distinguished Contribution of Applications of Psychology to Education and Training Award, and was ranked the most productive educational psychologist in the world by Contemporary Educational Psychology. He serves on the editorial boards of 12 journals, mainly in educational psychology. He is the author of more than 400 publications, including 25 books, such as Applying the Science of Learning, e-Learning and the Science of Instruction (with R. Clark), Multimedia Learning, Learning and Instruction, and the Cambridge Handbook of Multimedia Learning (editor).

Mario J. Lopez is an associate professor at the University of Santiago of Chile and director of the Visual Technology Laboratory (VirtuaLab) at the same university. His work concentrates on the research and development of software components that implement visual and spatial learning strategies. He has directed several projects on educational technology funded by the Chilean Council for Science and Technology.

