
Educ Inf Technol (2007) 12:59–70

DOI 10.1007/s10639-007-9032-x

Effects of ICT: Do we know what we should know?

Margaret J. Cox & Gail Marshall

Published online: 11 May 2007


© Springer Science + Business Media, LLC 2007

Abstract Many decades after the introduction of ICT into classrooms there are still
unanswered questions about the impact of technology in the long and short term on
students’ learning, and how it has affected simple and complex learning tasks. These are
important for (a) forming government policies; (b) directing teacher education programmes;
(c) advancing national curricula; (d) designing or reforming classroom implementation and
(e) analysing costs and benefits. While a plethora of studies has been conducted on the
effects of ICT in education, major policy and methodological problems have precluded an
unambiguous answer to such questions as: “Does the way in which ICT is implemented
have a major/minor impact on students’ knowledge and understanding?” and “Does the
impact affect the surface or deep structure of students’ thinking and acting?” To date we
have had no large-scale longitudinal studies of ICT’s impact in education such as we have
in the form of studies of earlier major curriculum development projects. Nor have we had
many comprehensive studies of the complex interactions between various types of ICT
implementation and the effects of other factors such as school-based interventions, socio-
economic status and school expenditures which have been shown to have a greater impact
on education compared with other previous innovations in education. Furthermore we do
not know if previous research studies have used research methods that matched learning
objectives to instruments/procedures. Many previous studies are vague as to the actual
measures used, but we can infer that standardized tests were a frequent measure. In other
instances, “success” was certainly measured by ad hoc analyses whose criteria may have
varied from analyst to analyst and which were not “blind” analyses. All of these limitations
and uncertainties, and many more, point to the need for a thorough, rigorous, and
multifaceted approach to analysing the impact of ICT on students’ learning. This paper
draws on previous research evidence to identify relevant research strategies to address the
gaps in our knowledge about ICT and students’ learning explained above.

Keywords Research methods . Government policy . Assessment .
Evaluation and accreditation . Integration of ICT into education .
Impact of ICT on students’ learning

M. J. Cox (*)
Department of Education and Professional Studies, and The Dental Institute, King’s College London,
Franklin-Wilkins Building, Stamford Street, London SE1 9NN, UK
e-mail: mj.cox@kcl.ac.uk

M. J. Cox
Department of Mathematics and Computing, University of Melbourne, Melbourne, Australia

G. Marshall
Gail Marshall & Associates, 2393 Broadmont Court, Chesterfield, MO 63017, USA
e-mail: gailandtom@compuserve.com

1 Introduction

Scattered across the African continent is evidence of early humans’ tool making.
Anthropologists claim that the types of tools found tell us much about the ways those
early humans behaved, how they organized their lives and how versatile were their
cognitive processes. Similarly, anyone looking at the studies of ICT use over several
decades can glean information about what methods researchers have used and what they
have deemed to be important processes and outcomes. Previous research into the uptake
and use of ICT in education and employment has provided substantial convincing evidence
of its positive impact on learning gains (cf. Watson 1993; Bliss 1994; Liao 1999; Cox and
Abbott 2004); pupils’ motivation (cf. Gardner et al. 1994; Cox 1997a; Hennessy et al.
2005); and of changing the ways in which we teach (Cox 1997b; Loveless and Ellis 2001;
Webb 2002; Cox and Webb 2004; Sutherland et al. 2004; Pearson and Naylor 2006).
However, there is also previous evidence of barriers to teachers’ adoption of ICT (cf. Cox
1997b; Jones 2004); negative attitudes towards the use of ICT (cf. Gardner et al. 1993);
people’s inability to use advanced ICT applications (cf. Preston et al. 2000); and mainly
very specific uses of ICT having the most impact on attainment (Cox and Abbott 2004;
Wentling et al. 2006). The question of whether or not ICT has made significant impacts on
a wide variety of student learning outcomes is still in doubt because of the variety of
assumptions made in many research studies and the limited reliability of some research
methods. Many of the effects, which have been recorded in previous years, provide a basis
for analysing recent research data and reports, and the earlier results also provide evidence
as to the effectiveness of different research methods.
Research into the contribution of ICT to students’ thinking and acting reflects the social
and epistemological beliefs of the research community. The “tools” which researchers use
and thus the design of classroom experiences and the evaluation of impacts reflect several
different collective consciousnesses of the ICT community as well as the larger community
in which ICT use is conducted. Those constructions and implementations are part of social
reality, i.e. “a system of tacit norms ... specifying who does what ...” (Collins 1997, page 2).
The prevailing research and evaluation paradigms are macrostructural, i.e. “What
conditions occur when ICT is introduced in a few schools?” or “What is the impact of a
simple intervention on a small group of students?” Missing from most previous research
agendas are major policy analyses that encompass a wide range of settings and look for
commonalities and differences as a result of systemic conditions.

A literature review of ICT and attainment by Cox and Abbott (2004) showed that the
most robust evidence of ICT use enhancing pupils’ learning was from studies which
focussed on specific uses of ICT. Where the research aim has been to investigate the effects
of ICT on attainment without clearly identifying the range and type of ICT use, unclear
results were obtained, making it difficult to establish any repeatable impact of a type of ICT
use on pupils’ learning. Also missing from many previous research publications are
methodologically robust studies that might be based on large and varied samples, that are
conducted over several years and that provide unambiguous answers to questions such as:
“What impact have specific ICT uses had on students?”
“Does the way ICT is implemented have a major/minor impact on students’ learning?”
“Does the impact affect the surface or deep structure of students’ thinking and acting?”
We have also little evidence of the ways in which a wide range of ICT-based
instructional and curricular plans affect the learning of the students (Cox and Webb 2004).
The research tradition of ICT has yet to include the type of investigation conducted years
ago by Stallings (1975), where many different curriculum-based educational philosophies
were investigated by means of a range of different research instruments. The findings
show that different curricular plans have widely differing effects on what and how students
learn. Furthermore, previous research has also identified the important influence of the
teacher who decides how the ICT resources are chosen (Castillo 2006), how they are used
in schools and the classroom and how the pupils interact with the materials (Hennessy et al.
2005). Therefore the teacher’s input will crucially affect the impact of ICT use on students’
learning.
Finally, the current situation is based on several major problems with the tools we use to
assess students and conduct evaluations. First of all, researchers need to realize that when they
design research and evaluation studies they are operating within a framework dictated by a tacitly
designed social reality. One dominant social reality, at least in many Anglo-Saxon countries,
is the industrial model, i.e. inputs which can be quantitatively measured and which yield
and/or predict outputs. That reality, which was fostered by the accountability models of industry during
the early years of educational research (Callahan 1962), fails to take into account how children
think. Consequently, the goals of research and evaluation based on research conducted within
the input/output framework usually fail to examine the cognitive processes which students
employ to reach their goals and the instruments used in the assessments are simple tests to
determine how much students have learned. Examples of this can be found in the research on
Logo conducted by Pea and Kurland (1994) and Gillespie (2004).
An alternative model is based on a European tradition of asking “What and how do
students learn?” That model, which derives from the work of Pestalozzi, Montessori and
Piaget, is exemplified by research conducted by Kalas and Blaho (2003). The instrument/
processes of evaluation have sometimes failed to match the intrinsic goals of the ICT-based
content and processes and the subsequent data analyses have therefore failed the standard of
rigour, consistency and sophistication required to obtain robust and sustainable results. The
evidence of the literature review by Cox and Abbott (2004) showed that there was more
reliability and robust evidence in studies where the tests had been closely linked to the
clearly identified potential effects of ICT on pupils’ learning. Therefore although it is
possible to measure the effects of ICT on pupils’ attainment through large scale comparative
studies, the tests need to be designed and based on an extensive knowledge of the types of
ICT uses being made by the subjects in the study, and an understanding of the likely impact
of such uses on pupils’ learning and thought processes.

In the sections that follow we will analyse each of these five problems identified above
in more detail.

2 Problem analyses

The analysis below focuses on the five important problems which need to be understood
and addressed if further research into the effects of ICT on students’ learning is to provide
results which are robust and reliable in many different educational settings.

2.1 The problem of assumptions about how learners think

A classic example of the problem of failing to understand how learners think is a body of
studies examining the impact of Logo on young children. Many of the studies were
designed and conducted within a social reality framework which assumed that children
were essentially passive receptors who could be manipulated by external agents and events
(Pea and Kurland 1994). Their research was based on the hypothesis that young children’s
ability to plan could be enhanced by instruction in Logo. But earlier research summarized
by Ginsburg and Opper (1979) has shown us that young children’s information processing
systems are different from adults’. Another summary of research tells us that children aged
four to six are often “illogical and confused” in their thinking, and that children aged seven
to nine “focus on one and only one operation” (Biggs and Collis 1982). Therefore the
expectation that teaching children Logo for a short period of time, a school semester at
most, will induce planning skills in children below the age of nine is unrealistic. Despite a
wealth of information on the gap between children’s information processing skills and
adults’ skills, Pea & Kurland (ibid.), among others, conducted a series of research studies
which attempted to train students to perform Logo-based activities that were at odds with
their level of cognitive development. They found that their performance was limited by the
age of the pupils and their cognitive development.
Another study by Cox and Nikolopoulou (1997) on students’ abilities to analyse
complex data showed that the level and complexity of analyses which could be achieved
was limited by the ages of the pupils and by their intellectual skills which are dependent
upon their stage of development. Similar results from a range of studies on students’
modelling skills showed that young children, compared with those aged 13+ years, were less
able to build computer models than to manipulate existing ones (Mellar et al.
1994). Research into students’ understanding of different representations has shown that the
learner needs to understand the metaphors and symbolisms which are presented to them on
the screen (Mellar et al. 1994; Cheng et al. 2001). Research into the way in which new
technologies have changed the representation and codifying of knowledge, and how this
relates to learners’ mental models, has shown that learners develop new ways of reasoning
and hypothesizing their own and new knowledge (Bliss 1994; Cox 2005). Therefore
measuring the effect of
ICT on students’ learning needs to address the literacy of the students in the ICT medium as
well as the specific learning outcomes relating to the aims of the teacher or the ICT designer.
How the learner thinks about the problem or task will be influenced by their familiarity with
the ICT medium and with the type of ICT environment with which they are working.
As we know, ICT environments take very many forms and can be situated in different
software frameworks which can challenge the learner to investigate the same processes but
may have totally different representations. For example, an investigation of energy
consumption in the home could be carried out using the spreadsheet application Excel or
an educational modelling environment such as Model Builder (Cox and Webb 1994). These
two software environments have completely different representations of the same problem
because of the design of the modelling framework. In the case of Excel, the learner needs to
understand the relationship between mathematical equations and tabular means of
presenting and inserting these. In the case of Model Builder, the learner needs to learn a
new modelling syntax based on natural language and how this can be used in conjunction
with icons and images on the screen (Cox 2000). So we must ask: “What instructional
strategies, if any, accelerate/promote changes in the type or level of children’s thinking and
at what points in the child’s intellectual development do those changes take place?”

2.2 The problem about the different effects of specific types of ICT uses

Previous evidence from the literature shows that there is a wide range of access to ICT in
schools and at home in many countries across the world (cf. Watson and Andersen 2002;
Marshall and Katz 2003). Yet the actual range of ICT use within a single research study is
usually very narrow, being limited by the choices of the teacher and/or the researcher (Cox
and Abbott 2004; Cox and Webb 2004). Various government surveys have shown that
teachers’ ICT uses are usually confined to very few types, e.g. using an interactive white-
board for whole class demonstrations or using word-processing for creative writing (cf.
DfES 2003). Furthermore, regular uses reported by teachers may mean only a few minutes
of use by individual students, or extensive use by some and much less by others. This
variation in use will clearly affect the possible impact that using an ICT resource may have
on students’ learning (cf. Munro 2002; Cox and Abbott 2004; Wentling et al. 2006) and
therefore affect the instruments that need to be used by researchers. Previous research has
also shown that different types of ICT resources will have different effects on students’
learning, for example, using science simulations to correct students’ misconceptions and
alternative frameworks (Cox 2000); using data handling software to improve students’
abilities to apply binary logic (Cox and Nikolopoulou 1997); and using word-processing in
English to reduce punctuation and grammatical mistakes (Barker and Pearce 1995). It is
clear from these and numerous other examples that the contribution of ICT to students’
learning was very dependent upon the type of ICT resource and the subject in which it was
being used. Any impact on the students’ learning could be measured by investigating the
specific nature of the ICT-based tasks and the types of concepts, skills and processes which
it might affect. There is therefore a dilemma for researchers between investigating very
select uses of ICT through an in-depth case-study approach or conducting a larger scale
study which may produce more generalizable results but will be limited because of not
having sufficiently detailed data about the specific uses made by each learner.
Some of the large-scale studies have shown a statistically significant positive effect of
ICT on pupils’ learning, e.g. the Impact1 (Watson 1993) and Impact2 studies (Harrison et
al. 2002) and the large-scale meta-analysis conducted by Niemiec and Walburg (1992).
However in many cases it has not been possible to identify the actual types of ICT use
which have contributed to these learning gains. Therefore, although the outcome of such
research may be that ICT has had a positive effect on students’ learning, it is not known in
such large scale studies if this was due for example to using simulations, or problem solving
software or accessing additional relevant information over the Internet. Furthermore as was
explained above, the different forms of ICT representations will also have a varied effect on
students’ abilities to benefit from the ICT resource and their abilities to use it (e.g.
Hennessy 2000). We must therefore ask “what types of ICT resources suit the topic being
researched and how will these affect specific learning gains?”

2.3 The problem of how curricular design and implementation affect students

The evidence discussed above and reviewed in previous literature reviews (e.g. Cox and
Abbott 2004) has shown that whatever the type of ICT resource, how it is adopted for use
in the curriculum and implemented in the programme of study will influence its impact on
students. For example, research into “discovery learning” shows that Logo learners need
structured examples and activities in order to develop their own constructions of those rules
and procedures (Noss and Hoyles 1992). Furthermore, Clements and Meredith’s (1992)
review of Logo research supports the Noss and Hoyles report but they also found that
“research shows that teachers find it extremely difficult to create a learning environment
that fosters creativity within existing school and curricular structures.” More recently,
Johnson (2000), in the context of programming, observed that the expectation that the
programming environments themselves, e.g., LOGO micro-worlds, would become an
integral part of the school mathematics curriculum had clearly failed to gain the support of
the educational system. The opportunities provided by a range of ICT resources could only
therefore be taken up by students if the teachers themselves knew enough about ICT to be
able to design the curriculum activities to implement the ICT activity effectively into the
learning programme.
In a study of ICT teachers by Preston et al. (2000) it was shown that the use of ICT was
limited by the teacher’s expectations and understanding of the ICT resources and the
content of the ICT curriculum. Similarly, in a more recent analysis of the ICT curriculum by
Webb (2002), in relation to problem-solving, two key elements of the content
understanding were identified: 1) the concepts and techniques of representation of data,
knowledge and processes; and 2) metaphors and capabilities of types of application
software. Without this knowledge then it is not possible for ICT teachers to design the
curriculum activities to make appropriate uses of ICT.
In the case of the English curriculum, Mumtaz and Hammond (2002) found that word-
processing was not fully embedded and was often used superficially with pupils, with few
opportunities for the drafting and redrafting through which the most positive effects have
been identified. Similar evidence of limitations to curriculum
integration was found for the use of ICT in science when teachers were studied using
computer based modelling (Mellar et al. 1994). They found that: “Change can provide both
challenges and threats. At a personal and professional level it can call into question values,
beliefs and practices that were previously assumed and accepted by teachers.” (ibid, p.
210). “Broadly speaking, the acceptance and integration into routine classroom practice of
the modelling approach adopted by the project depended on the extent to which the teacher
agreed with the ideology.” (ibid, p.211) and that “Some may adopt the innovation whole-
heartedly because it reflects their own educational philosophy, others may reject it...our
experience suggests that teachers will engage in an adaptation of the innovation to suit their
own circumstances.” (ibid. p.213).
To date we have had no large scale longitudinal studies of ICT’s impact such as we have
in the form of studies of major curriculum development projects (Marshall and Herbert
1983). Nor have we had studies of the complex interactions between various types of ICT
implementation and their effects on different facets of children’s thinking as described
above such as those reported by Stallings (1975). Without such large-scale research
evidence of the factors involved in how, when and under what conditions teachers can
devise appropriate “discovery” methods through which competency is best attained, we
are left with only provocative hints but no clear-cut guidelines on how best to provide
instruction in such mathematically sophisticated settings as Logo instruction, and Logo is
one of the most widely researched ICT resources. If we still know little about the multiple
and complex interactions among setting, instructional goals and children’s cognitive status
in Logo environments (Papert 1980), how much less do we know about other uses of ICT
in classrooms? We must therefore ask: “What kinds of curriculum implementation and
classroom settings have most influence on the impact of ICT on students’ learning?”

2.4 Problems of the pedagogical approaches of the teachers

Many published studies have also shown as expected that teachers’ pedagogies have a large
impact on students’ uses of ICT and thereby on any changes in their learning. Cox and
Webb (2004) identified a range of activities in their literature review which related to
teachers’ ideas, beliefs and actions about ICT in teaching. These include: teachers’ beliefs
about how students learn; the types of ICT resources teachers choose to use; their
knowledge about their own subject and the potential for ICT to enhance their pupils’
learning; and their abilities to integrate ICT into their whole curriculum programme. The
evidence shows that when teachers used their knowledge of both the subject and the way
students understood the subject their use of ICT had a more direct effect on students’
attainment. This occurred more reliably when students were challenged to think and
question their own understanding, whether through students using topic-focussed ICT
software on their own or in pairs, or through a whole-class presentation.
Most of the practices discussed above require teachers to have a leading role in the
classroom. Research by Harkin, Birnik, Wubbels, Tartwijk and Brekelmans (Wubbels
1995) and others, has shown that the behaviour of teachers towards students and the
practices they adopt in traditional classes have a major influence on the cognitive
achievements of the students. A range of studies reported in the literature, where ICT is also
used, reveal changes in some aspects of pedagogy by some target teachers. For example,
in an earlier study by Underwood (1988) of teachers using information handling packages in
primary and secondary schools, Underwood found that the teachers believed that when using ICT in
their lessons, their role would be more of a classroom manager and facilitator in the
learning situation, rather than having a pivotal role in the learning activities of the students.
Similarly, a more recent study of teachers using Logo by Clements (2000) found that
although Logo had been shown to help students develop problem-solving abilities and
higher-level meta-cognitive skills, the success was dependent upon teachers’ guidance in
using Logo appropriately in the curriculum.
Previous research has also shown that the ways in which teachers might use ICT will
depend upon: their perceived value of ICT to their students’ learning (Preston et al. 2000;
Castillo 2006); their preferences and beliefs about teaching and the role of ICT (Webb and
Cox 2004; Scrimshaw 2004); and the attitudes and perceived behavioural control of the
teachers (Koutromanos 2004; Preston et al. 2000). These and other studies have found that
very few teachers have a comprehensive knowledge of ICT or are confident in using the
wide range of ICT resources now available in education. These limitations have been
shown to affect the way the lesson is conducted and therefore any research outcomes. In a
recent analysis of teachers’ pedagogical practices when using ICT, Webb and Cox (2004)
found that the pedagogical practices of the teachers ranged from only small enhancements of
practices using more traditional methods to fundamental changes in their philosophy of
teaching. These changes were in the way they taught their subject and the tasks required of the
students. ICT use has therefore had a limited impact on learning and teaching where teachers
fail to appreciate that interactivity requires a new approach to pedagogy and rethinking how
they plan their lessons and their whole curriculum. Although a minority of teachers has been
found to reorganize the delivery of their curriculum radically, the majority still use ICT to add
to or enhance their existing practices.
A major part of teachers’ pedagogies is in the planning, preparation and follow up of
lessons. This means that although many teachers report that when using ICT they
become a facilitator in the lesson instead of a leader, they still have a mainly leadership role
in their overall teaching because they have planned and monitored the direction of the
learning. Where some studies have shown that little planning has occurred, the evidence is
that the students’ class work was unfocussed and led to less than satisfactory outcomes.
There is therefore a fundamental misunderstanding by many teachers and even teacher
trainers about how to incorporate ICT in their whole teaching programme. From these and
other studies we can conclude that one of the most difficult aspects of evaluating the impact
of ICT in teaching is that researchers need to measure the degree of autonomy which the
students have, which will depend upon the nature of the software and the
strategies of the teacher. We therefore need to measure the IT expertise of the teacher and
students, and the relevance of the software and ICT activities to the whole curriculum. So
we must ask: “What pedagogies and pedagogical approaches promote what kinds of ICT
experiences and learning opportunities and intellectual competencies, and how do these
approaches affect different learners in different countries?”

2.5 The problem of selecting research instruments and interpreting the results

Finally, many of the previous studies which have been reviewed and analysed show that the
research instruments and procedures used do not always match the learning objectives and
the learning outcomes which the researchers have tried to measure (Cox 2003). Researchers
have sometimes measured the ‘wrong’ things, looking for improvements in traditional
processes and knowledge instead of new reasoning and new knowledge which might
emerge from the ICT use. As we have illustrated in the preceding sections there are many
factors which need to be taken into account when evaluating the effects of ICT on students.
These include the quality and depth of ICT use; the design of the attainment tests;
observations of students using ICT; the analysis of students’ products; and the students’ and
teachers’ questionnaires and records. Researchers also need to take account of ICT leading
to new forms of knowledge and knowledge representations and therefore new types of
achievement. It is possible to measure the effects of ICT on students’ attainment through
large-scale comparative studies. However, the tests to measure attainment need to be
designed and based on an extensive knowledge of the types of ICT uses being made by the
subjects in the study and the different forms of knowledge representation which the ICT
environment might offer (Cox and Abbott 2004). In large quantitative studies, methods
need to be included which will also measure or compensate for the effects of teachers’
different pedagogies and of the ways in which the ICT activities are incorporated into the
curriculum. If a large-scale quantitative study is set up to measure the impact of levels of
ICT use on attainment, then unless the specific uses of ICT are also recorded and analysed
it will not subsequently be possible to determine which particular ICT uses contributed to the
learning gains of the students.
Some studies involve conducting experiments where the ICT is predetermined and the
teachers are trained to use the ICT resources in specific ways (cf. Mellar et al. 1994). This
helps to avoid the effects of different undetermined ICT uses and different interpretations of
how it should be used by the participating teachers. However, such studies are often not
conducted in naturalistic settings, and it is therefore difficult to generalise the outcomes for
governments who might want to decide whether or not such ICT resource use will have an
effect in other educational settings, different curricular contexts or with different groups of
teachers. On the other hand, evaluating the effects of ICT in naturalistic settings, in which
the ICT uses are chosen by the teachers and schools or colleges themselves, needs very
complex instruments. These need to measure the different effects of different teachers, the
way in which the ICT resource is incorporated into the curriculum and the level of
intellectual development of the students being studied as explained above. There will also
be the influence of cultural and social differences from country to country, of the way in
which the curriculum is organised, and of the ICT priorities of each particular country.
There are many research studies reported here and/or reviewed previously in which the
researchers themselves do not appear to understand the attributes of the ICT resource nor
the ways in which teachers might use such resources. For example, there is no point in
attempting to measure the effects of using ICT on English and literacy attainment if the only
use being made by the teacher is for students to type up their hand-written products and
print them out. Similarly if a science teacher is mainly using word-processing with the
students to write up assignments then there is unlikely to be a positive effect of such ICT
use on students’ understanding of science concepts. It is therefore essential in any research
study that the actual types of uses of ICT are accurately recorded and measured as well as
the effects on students’ attainment. Studies which only record students’ ICT use but which
do not identify what specific uses occur cannot subsequently claim any useful relationship
between ICT use and learning outcome.
The evidence of the review by Cox and Abbott (2004) has shown that there is more
reliability and robust research evidence in studies where the tests have been closely linked
to the likely effects of ICT on students’ learning. For example, if using LOGO enhances
students’ geometry and problem-solving skills then the mathematics tests should focus on
measuring these and not include a range of other mathematical skills. Similarly, the use of
simulations in science has been shown to help students learn difficult science concepts by
confronting their own misconceptions and by presenting graphs on the screen. Tests to
measure these effects should relate closely not only to the students’ learning of those
concepts but also to the graphical interpretations of the concept relationships promoted by
the simulation.
Many previous qualitative studies of the impact of ICT use on students have included
detailed observations of students using ICT and of teachers’ classroom practices. These can
assess the students’ learning strategies, the effects of teacher interventions etc. and provide
more in-depth data about the learning experiences of the students which can then be related
to learning outcome measures. One of the most useful techniques noted from the studies is
the use of video recordings of students’ and teachers’ actions when using ICT. This
technique has enabled researchers to analyse the relationship between students’ on-screen
interactions and the teachers’ interventions. Other studies have
included the software itself recording the actions of the learner. The difficulty with this
method is that without also observing and recording the actual human–computer–teacher
interactions it is not possible when analysing the data afterwards to know, for example,
whether a long pause between two learners’ actions is due to the learners thinking
about the task, talking to a fellow student, or being distracted by something totally
unrelated. It is therefore almost always necessary to have direct observations as well to
clarify these ambiguities.
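The ambiguity of software-recorded logs can be illustrated with a small sketch. The log format, the action names and the two-minute threshold below are all hypothetical; the point is that such a log can locate a long pause but cannot, on its own, explain it.

```python
from datetime import datetime, timedelta

# Hypothetical software-recorded interaction log: (timestamp, action) pairs.
log = [
    ("09:00:05", "open_simulation"),
    ("09:00:40", "change_parameter"),
    ("09:04:55", "change_parameter"),  # long gap before this action
    ("09:05:10", "view_graph"),
]

def long_pauses(entries, threshold=timedelta(minutes=2)):
    """Flag gaps between consecutive logged actions that exceed the threshold.

    The log alone cannot say *why* a gap occurred (thinking about the task,
    talking to a fellow student, or a distraction); that requires direct
    observation alongside the log.
    """
    times = [datetime.strptime(t, "%H:%M:%S") for t, _ in entries]
    return [
        (entries[i][1], entries[i + 1][1], times[i + 1] - times[i])
        for i in range(len(times) - 1)
        if times[i + 1] - times[i] > threshold
    ]

for before, after, gap in long_pauses(log):
    print(f"{gap} pause between '{before}' and '{after}'")
```

Such an analysis would flag the 4-minute-15-second gap in this sample, but only a paired observation record could attribute it to reflection, discussion or distraction.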
In addition to using standard tests and observations some researchers have also collected
and analysed students’ own outcomes in the form of products (Cox 1993; Harrison et al.
2002). These include students’ word-processed work, models of processes, LOGO
microworlds etc. These have been shown to provide evidence of students’ attainment
and/or changes in understanding. However, in analysing students’ products these
techniques need to take account of other educational inputs and experiences, including
the intervention of the teacher, which may have contributed to the production process.
Another method of finding out how students and teachers have used ICT is through detailed
questionnaires or records of activities. Although previous research shows that questionnaire
surveys can provide evidence of teachers’ and pupils’ uses of ICT (cf. Cox and Abbott
2004; Koutromanos 2004) it is difficult to acquire reliable evidence using such survey
techniques of the quality of use. Therefore these methods, which provide useful baseline
data, need to be augmented with more detailed observational data to provide an adequately
comprehensive measure of the ICT learning experience.
In conclusion, a number of issues have been identified which limit the results of
many previous research studies, and we must ask: “What methods and instruments should
be used to determine reliable measurements of the impact of specific ICT types on students
in different educational settings and with different teacher influence and inputs?”

3 Implications and conclusions

The problems identified above have several important implications for government
programmes involving ICT in education. We have shown that evidence from previous
large-scale studies about the impact of ICT on students’ learning may provide only limited
information about the possible effects on the deep structure of students’ thinking and acting.
The instructional strategies of the teachers have been shown to have a significant impact on
the effects of ICT, which might accelerate or restrict changes to students’ learning and will
depend upon the selection and appropriateness of the ICT environment which they are using.
Governments therefore need to include in their ICT development policies clear guidelines
about the potential impact on learning of specific types of ICT resources and uses.
Teacher training programmes not only need to prepare and support teachers in the
appropriate choices and uses of ICT environments but they also need to challenge teachers’
fundamental beliefs about how to teach their subject and how specific ICT resources can
enhance and fundamentally change the way in which their students learn. Training
programmes need to show teachers new instructional strategies, introduce new forms of
knowledge representation, and help them rethink the curriculum and the classroom uses of
ICT. Teachers need to know that using ICT is not confined to the actual
hands-on uses in the classroom but includes the associated intellectual tasks of the students
away from the ICT resource and in many different settings, such as when using the Internet
or when analysing the outcomes of a learning activity with fellow students informally in
recreation time.
National curricula need to embrace the fact that knowledge can be represented in new
forms and this will have a fundamental impact on how a subject/topic is presented, taught
and assessed. This, in turn, requires professional development for all those involved in
designing and creating national and local curricula and examinations. Finally, new research projects
need to account for the limitations of previous research methods discussed in this paper
so that research outcomes are more generalizable, can be useful to many different
countries and cultures and provide a robust and reliable taxonomy of the relationship
between different ICT resources, teachers’ pedagogies and students’ learning. This will
enable governments to identify more effectively the cost benefits of ICT in their
education budgets and more securely plan and implement new innovation programmes
involving ICT in education.

References

Barker, R. T., & Pearce, C. G. (1995). Personal attributes and computer writing quality. Journal of
Educational Computing Research, 13(1), 17–26.
Biggs, J. B., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO Taxonomy. New York:
Academic.
Bliss, J. (1994). Causality and common sense reasoning. In H. Mellar, J. Bliss, R. Boohan, J. Ogborn, &
C. Tompsett (Eds.), Learning with artificial worlds: Computer based modelling in the curriculum. London:
Falmer.
Callahan, R. (1962). Education and the cult of efficiency. Chicago: University of Chicago Press.
Castillo, N. (2006). The Implementation of Information and Communication Technology (ICT): An
investigation into the level of use and integration of ICT by secondary school teachers in Chile. PhD
thesis. King’s College London, University of London.
Cheng, P. C.-H., Lowe, R. K., & Scaife, M. (2001). Cognitive science approaches to diagrammatic
representations. Artificial Intelligence Review, 15(1/2), 79–94.
Clements, D. H. (2000). From exercises and tasks to problems and projects—Unique contributions of
computers to innovative mathematics education. Journal of Mathematical Behavior, 19(1), 9–47.
Clements, D. H., & Meredith, J. (1992). Research on Logo: Effects and efficacy. New York: Logo Foundation.
Collins, F. (1997) Social reality. London: Routledge.
Cox, M. J. (1993). Information technology resourcing and use. In D. M. Watson (Ed.), Impact–an evaluation
of the impact of the information technology on children’s achievements in primary and secondary
schools. King’s College London.
Cox, M. J. (1997a). The effects of information technology on students’ motivation. Final report. National
Council for Educational Technology, Coventry.
Cox, M. J. (1997b). Identification of the changes in attitude and pedagogical practices needed to enable teachers to
use information technology in the school curriculum. In D. Passey & B. Samways (Eds.), IFIP: Information
technology: Supporting change through teacher education (pp. 87–94). London: Chapman & Hall.
Cox, M. J. (2000). Information and communication technologies: Their role and value for science education.
In M. Monk & J. Osborne (Eds.), Good practice in science teaching—What research has to say (pp.
142–158). UK: Open University Press.
Cox, M. J. (2003). How do we know that ICT has an impact on children’s learning? A review of techniques
and methods to measure changes in pupils’ learning promoted by the use of ICT. In G. Marshall & Y.
Katz (Eds.), Learning in school, home and community. ICT for early and elementary education.
Massachusetts, USA: Kluwer.
Cox, M. J. (2005). Educational conflict: The problems in institutionalizing new technologies in education. In
G. Kouzelis, M. Pournari, M. Stoeppler & V. Tselfes (Eds.), Knowledge in the new technologies.
Frankfurt, Berlin: Peter Lang.
Cox, M. J., & Abbott, C. (2004). ICT and attainment: A review of the research literature. Coventry and London:
British Educational Communications and Technology Agency/Department for Education and Skills.
Cox, M. J., & Nikolopoulou, K. (1997). What information handling skills are promoted by the use of data
analysis software? Education and Information Technologies Journal, 2(2), 105–120.
Cox, M. J., & Webb, M. E. (1994). Developing software and curriculum materials: The Modus Project. In H.
Mellar, J. Bliss, R. Boohan, J. Ogborn & C. Tompsett (Eds.), Learning with artificial worlds: Computer based
modelling in the curriculum (pp. 188–198). London: Falmer.
Cox, M. J., & Webb, M. E. (2004). ICT and pedagogy: A review of the research literature. Coventry and London:
British Educational Communications and Technology Agency/Department for Education and Skills.
DfES (2003). Fulfilling the potential: Transforming teaching and learning through ICT in schools. London:
Department for Education and Skills.
Gardner, D. G., Dukes, R. L., & Discenza, R. (1993). Computer use, self-confidence, and attitudes: A causal
analysis. Computers in Human Behavior, 9, 427–440.
Gardner, J., Morrison, H., Jarman, R., Reilly, C., & McNally, H. (1994). Learning with portable computers.
Computers and Education, 22(1/2), 161–171.
Gillespie, C. W. (2004). Seymour Papert’s vision for early childhood education? A descriptive study of the
Head Start and kindergarten students in discovery-based Logo-rich classrooms. Early Childhood
Research and Practice, 6(1). http://ecrp.uiuc.edu/v6n1/Gillespie.html
Ginsburg, H., & Opper, S. (1979). Piaget’s theory of intellectual development: An introduction. Englewood
Cliffs, NJ: Prentice-Hall.
Harrison, C., Comber, C., Fisher, T., Haw, K., Lewin, C., Linzer, E., et al. (2002). ImpaCT2: The impact of
information and communication technologies on pupil learning and attainment. Coventry: British
Educational Communications and Technology Agency.
Hennessy, S. (2000). Graphing investigations using portable (palmtop) technology. Journal of Computer
Assisted Learning, 16(3), 243–258.
Hennessy, S., Ruthven, K., & Brindley, S. (2005). Teacher perspectives on integrating ICT into subject
teaching: Commitment, constraints, caution and change. Journal of Curriculum Studies, 37, 155–192.
Johnson, D. C. (2000). Algorithmics and programming in the school mathematics curriculum: Support is
waning—Is there still a case to be made? Education and Information Technologies, 5(3), 201–214.
Jones, A. (2004). A review of the research literature on barriers to the uptake of ICT by teachers. Coventry:
Becta.
Kalas, I., & Blaho, A. (2003). Exploring visible mathematics with IMAGINE. In G. Marshall & Y. Katz
(Eds.), Learning in school, home and community: ICT for early and elementary education (pp. 54–64).
Boston: Kluwer.
Koutromanos, G. (2004). The effects of head teachers, head officers and school counsellors on the uptake of
information technology in Greek schools. PhD thesis, Department of Education and Professional Studies,
King’s College London, University of London.
Liao, Y. K. C. (1999). Effects of hypermedia on students’ achievement: a meta-analysis. Journal of
Educational Multimedia and Hypermedia, 8(3), 255–277.
Loveless, A., & Ellis, V. (Eds.) (2001). ICT, pedagogy and the curriculum: subject to change. London:
Falmer.
Marshall, G., & Herbert, M. (1983). Comprehensive school mathematics project. Denver, CO: McRel.
Marshall, G., & Katz, Y. (2003). Learning in school, home and community. ICT for early and elementary
education. Massachusetts, USA: Kluwer.
Mellar, H., Bliss, J., Boohan, R., Ogborn, J., & Tompsett, C. (Eds.) (1994). Learning with artificial worlds:
computer based modelling in the curriculum. London: Falmer.
Mumtaz, S., & Hammond, M. (2002). The word processor re-visited: Observations on the use of the word
processor to develop literacy at key stage 2. British Journal of Educational Technology, 33(3), 345–347.
Munro, R. (2002). Curriculum focused ICT—the critical resource. In D. M. Watson & J. Andersen (Eds.),
Networking the learner. Computers in education (pp. 179–188). Massachusetts, USA: Kluwer.
Niemiec, R. P., & Walberg, H. J. (1992). The effects of computers on learning. International Journal of
Educational Research, 17(1), 99–107.
Noss, R., & Hoyles, C. (1992). Looking back and looking forward. In C. Hoyles & R. Noss (Eds.), Learning
logo and mathematics. Cambridge, MA: MIT.
Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.
Pea, R. D., & Kurland, D. M. (1994). Logo programming and the development of planning skills. Technical
report no. 11. New York: Bank Street College of Education.
Pearson, M., & Naylor, S. (2006). Changing contexts: Teacher professional development and ICT pedagogy.
Education and Information Technologies, 11, 283–291.
Preston, C., Cox, M. J., & Cox, K. M. J. (2000). Teachers as Innovators: An evaluation of the motivation of teachers
to use information and communications technologies. Croydon: King’s College London and Mirandanet.
Scrimshaw, P. (2004). Enabling teachers to make successful use of ICT. Coventry: British Educational
Communications and Technology Agency (Becta).
Stallings, J. (1975). Implementation and child effects of teaching practices in follow through classroom.
Monographs of the Society for Research in Child Development, 40/78, Chicago.
Sutherland, R., Armstrong, V., Barnes, S., Brawn, R., Breeze, N., Gall, M., et al. (2004). Transforming
teaching and learning: Embedding ICT into everyday classroom practices. Journal of Computer Assisted
Learning, 20(6), 413–425.
Underwood, J. (1988). An Investigation of teacher intents and classroom outcomes in the use of information-
handling packages. Computers and Education, 12(1), 91–100.
Watson, D. M. (Ed.) (1993). Impact—An evaluation of the impact of the information technology on
children’s achievements in primary and secondary schools. King’s College London.
Watson, D. M., & Andersen, J. (Eds.) (2002). Networking the learner. Computers in education.
Massachusetts, USA: Kluwer.
Webb, M. E. (2002). Pedagogical reasoning: Issues and solutions for the teaching and learning of ICT in
secondary schools. Education and Information Technologies, 7(3), 237–255.
Webb, M. E., & Cox, M. J. (2004). A review of pedagogy related to ICT. Technology, Pedagogy and Education,
13(4).
Wentling, T. L., Park, J., & Peiper, C. (2006). Learning gains associated with annotation and communication
software designed for large undergraduate classes. JCAL, 23(1), 36–46.
Wubbels, T. (1995). An interpersonal perspective on teacher behaviour in the classroom. European
Conference on Educational Research, Bath, England.