REPORT 12:
ABOUT FUTURELAB
Futurelab:

• incubates new ideas, taking them from the lab to the classroom
• offers hard evidence and practical advice to support the design and use of innovative learning tools
• communicates the latest thinking and practice in educational ICT
• provides the space for experimentation and the exchange of ideas between the creative, technology and education sectors.
CONTENTS

EXECUTIVE SUMMARY
SECTION 4: CASE STUDIES OF LEARNING WITH TANGIBLES
SECTION 5: SUMMARY AND IMPLICATIONS FOR FUTURE RESEARCH, APPLICATIONS, POLICY AND PRACTICE
GLOSSARY
BIBLIOGRAPHY

FOREWORD

When we think of digital technologies in schools, we tend to think of computers, keyboards, sometimes laptops, and more recently whiteboards and data projectors. These tools are becoming part of the familiar educational landscape. Outside the walls of the classroom, however, there are significant changes in how we think about digital technologies – or, to be more precise, how we don't think about them, as they disappear into our clothes, our fridges, our cars and our city streets. This disappearing technology, blended seamlessly into the everyday objects of our lives, has become known as 'ubiquitous computing', which leads us to ask: what would a school look like in which the technology disappeared seamlessly into the everyday objects and artefacts of the classroom?

This review discusses evidence from educational research and psychology, and provides an overview of a wide range of challenging projects that have attempted to use such 'disappearing computers' (or tangible interfaces) in education – from digitally augmented paper, toys that remember the ways in which a child moves them, to playmats that record and play back children's stories. The review challenges us to think differently about our future visions for educational technology, and begins to map out a framework within which we can ask how best we might use these new approaches to computing for learning.

As always, we are keen to hear your comments on this review at research@futurelab.org.uk
EXECUTIVE SUMMARY
REPORT 12
LITERATURE REVIEW IN LEARNING WITH TANGIBLE TECHNOLOGIES
CLAIRE O’MALLEY, LEARNING SCIENCES RESEARCH INSTITUTE, SCHOOL OF PSYCHOLOGY, UNIVERSITY OF NOTTINGHAM
DANAE STANTON FRASER, DEPARTMENT OF PSYCHOLOGY, UNIVERSITY OF BATH
SECTION 1
INTRODUCTION
… and, it is hoped, the new possibilities for learning that they provide.

We have tried in this review to outline the interactional capabilities afforded by tangible technology by analysing the properties of the forms of action and representation embodied in their use. We have also reviewed some of the relevant research from developmental psychology and education in order to draw out implications for the potential benefits and limitations of these approaches to learning and teaching. Although many of the educational examples given here refer to use by young children, several also involve children of secondary school age. In general we feel that the interactional and educational principles involved can be applied to learning in a wide variety of age groups and contexts.

1.2 BEYOND THE SCHOOL COMPUTER

Few nowadays would disagree that information and communication technology (ICT) is important for learning and teaching, whether the argument is vocational (it's important for young people to become skilled or 'literate' in ICT to prepare them for life beyond school) or 'techno-romantic' (ICT provides powerful learning experiences not easily achieved through other means – see Simon 1987; Underwood and Underwood 1990; Wegerif 2002). There is no doubt that there have been tremendous strides in the take-up and use of ICT in both formal school and informal home settings in recent years, and that there have been many positive impacts on children's learning (see Cox et al 2004a; Cox et al 2004b; Harrison et al 2002). However, developments in the use of ICT in schools tend to lag behind developments in other areas of life (the workplace, the home). At the turn of the 21st century, the use of computers in schools largely concerns desktop (increasingly laptop) machines, whilst outside school the use of computers consists increasingly of personal mobile devices which bear little resemblance to those desktop machines of the 1980s.

Technology is now increasingly embedded in our everyday lives. For example, when we go shopping, barcodes are used by supermarkets to calculate the price of our goods. Microchips embedded in our credit cards and magnetic strips embedded in our loyalty cards are used by supermarkets and banks (as well as other agencies we may not know about) to debit our bank accounts, do stock control, predict our spending patterns and so on. Sensors in public places such as the street, stores and our workplace are used to record our activities (eg reading car number plates, or detecting our entrance and exit via radio frequency ID tags or pressure or motion sensors in buildings). We communicate by mobile phone, and our location can be identified when we do so. We use remote control devices to operate our entertainment systems and the opening and closing of our garages or other doors. Our homes are controlled by microcomputers in our washing machines, burglar alarms, lighting, heating and so on.

This is as true for school-aged children as it is for adults: a home with teenagers owns an average of 3.5 mobile phones (survey by ukclubculture, reported in The Observer, 17 October 2004); teenagers prefer to buy their music online rather than on CD; 80% of teenagers access the internet from …
… refer to physical things, but in computer science the term bits also refers to digital 'things' (ie binary digits). The phrase 'tangible bits' therefore attempts to consider both digital and physical things in the same way.

Tangible interfaces emphasise touch and physicality in both input and output. Often tangible interfaces are closely coupled to the physical representation of actual objects (such as buildings in an urban planning application, or bricks in a children's block-building task). Applications range from rehabilitation – for example, tangible technologies that enable individuals to practise making a hot drink following a stroke (Edmans et al 2004) – to object-based interfaces running over shared distributed spaces, creating the illusion that users are interacting with shared physical objects (Brave et al 1998).

The key issue to bear in mind in terms of the difference between typical desktop computer systems and tangible computing is that in typical desktop systems (so-called graphical user interfaces or GUIs) the mapping between the manipulation of the physical input device (eg the point and click of the mouse) and the resulting digital representation on the output device (the screen) is relatively indirect and loosely coupled. You are making one sort of movement to have a very different movement represented on screen. For example, if I use a mouse to select a menu item in a word processing application, I move the mouse on a horizontal surface (my physical desktop) in 2D in order to control a graphical pointer (the cursor) on the screen. The input is physical but the output is digital. In the case of a desktop computer, the mapping between input and output is fairly obviously decoupled because I have to move the mouse in 2D on a horizontal surface, yet the output appears on a vertical 2D plane. However, even if I use a stylus on a touchscreen (tablet or PDA), there is still a sense of decoupling of input and output because the output is in a different representational form to the input – ie the input is physical but the output is digital.

In contrast, tangible user interfaces (TUIs) provide a much closer coupling between the physical and the digital – to the extent that the distinction between input and output becomes increasingly blurred. For example, when using an abacus, there is no distinction between 'inputting' information and its representation – this sort of blending is what is envisaged by tangible computing.

1.4 DEFINING SOME KEY CONCEPTS: TANGIBLE COMPUTING, UBIQUITOUS COMPUTING AND AUGMENTED REALITY

Tangible computing is part of a wider concept known as 'ubiquitous computing'. The vision of ubiquitous computing is usually attributed to Mark Weiser, late of Xerox PARC (Palo Alto Research Centre), who published his vision for the future of computing in a pioneering article in Scientific American (Weiser 1991). In it he talked about a world in which the digital blends into the physical and becomes so much part of the background of our consciousness that it disappears. He draws an analogy with print: text is a form of symbolic representation that is completely ubiquitous and pervasive in our physical environment (eg on
billboards, signs, in shop windows etc). As long as we are skilled 'users' of text, it takes no effort at all to scan the environment and process the information. The text just disappears into the background and we engage, effortlessly, with the content it represents. Contrast this with the experience of visiting a foreign country where not only do you not know the language, but the alphabet is completely unfamiliar to you – eg an English visitor in China or Saudi Arabia. Suddenly the text is not only virtually impossible to decipher; it actually looks very strange to see the world plastered with what seem to you like hieroglyphs (which, of course, for the Chinese or Arabic reader, are 'invisible'). So the vision of ubiquitous computing is that computing will become so embedded in the world that we don't notice it: it disappears. To some extent this has already happened. Computers are embedded in light switches, cars, ovens, telephones, doorways and wristwatches.

In Weiser's vision, ubiquitous computing is 'calm technology' (Weiser and Brown 1996). By this he means that instead of occupying the centre of the user's attention all the time, technology moves seamlessly and without effort on our part between the periphery of our attention and the centre. Ishii and Ullmer (1997) also make a distinction in ubiquitous computing between the foreground or centre of the user's attention and the background or periphery of their attention. They talk about the need both to enable users to 'grasp and manipulate' foreground digital information using physical objects, and to provide peripheral awareness of information available in the ambient environment. The first case is what most people nowadays refer to as tangible computing; the second is what is generally referred to as 'ambient media' (or ambient intelligence, augmented space and so on). Both are part of the ubiquitous computing vision, but this review will focus only on the first case.

A third related concept in tangible computing is augmented reality (AR). Whereas in virtual reality (VR) the goal is often to immerse the user in a computational world, in AR the physical world is augmented with digital information. Paul Milgram coined the phrase 'mixed reality' to refer to a continuum between the real world and virtual reality (Milgram and Kishino 1994). AR is where, for example, video images of real scenes are overlaid with 3D graphics. In augmented virtuality, displays of a 3D virtual world (possibly immersive) are overlaid with video feeds from the real world.
Fig 1: The continuum of mixed reality environments (from Milgram and Kishino 1994).
As we shall see, many examples of so-called tangible computing have employed AR techniques, and there are some interesting educational applications. Generally, the distinction between ubiquitous computing (or 'ubicomp'), tangible technology, augmented reality and 'ambient media' has become blurred, or at least the areas overlap considerably. Dourish (2001), for example, refers to them all as tangible computing.

One of the earliest examples of tangible computing was Bricks, a 'graspable user interface' proposed by Fitzmaurice, Ishii and Buxton (Fitzmaurice et al 1995). This consisted of bricks (like LEGO bricks) which could be 'attached' to virtual objects, thus making the virtual objects physically graspable.

Fitzmaurice et al cite the following properties of graspable user interfaces. Amongst other things, these interfaces:

• allow for more parallel input specification by the user, thereby improving the expressiveness or the communication capacity with the computer
• take advantage of well developed, everyday, prehensile skills for physical object manipulations (cf MacKenzie and Iberall 1994) and spatial reasoning
• externalise traditionally internal computer representations
• afford multi-person, collaborative use.

We will return to this list of features of tangible technologies later on, to consider their benefits or affordances for learning.

The sheer range of applications means it is beyond the scope of this report to describe all tangible interface examples adequately. In this review we focus specifically on research in the area of tangible interfaces for learning.

1.5 WHY MIGHT TANGIBLES BE OF BENEFIT FOR LEARNING?

Historically, children have played individually and collaboratively with physical items such as building blocks, shape puzzles and jigsaws, and have been encouraged to play with physical objects to learn a variety of skills. Montessori (1912) observed that young children were intensely attracted to sensory development apparatus, and that they used materials spontaneously, independently, repeatedly and with deep concentration. Montessori believed that playing with physical objects enabled children to engage in self-directed, purposeful activity, and she advocated children's play with physical manipulatives as tools for development².

Resnick extended the tangible interface concept to the educational domain with the term 'digital manipulatives' (Resnick et al 1998), which he defined as familiar physical items with added computational power, aimed at enhancing children's learning. Here we discuss physical and tangible interfaces – physical in that the interaction is based on movement, and tangible in that objects are to be touched and grasped.

Figures 2 and 3 show two examples of recent applications of tangible technologies for educational use.
2 www.montessori-ami.org/ami.htm
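The 'digital manipulative' idea – a familiar physical object given computational behaviour, such as a toy that remembers how a child moves it – can be sketched in code. The following is a minimal illustrative simulation, not any of the published systems; the SmartBlock class and its gesture strings are hypothetical, and a real device would read sensor input (eg an accelerometer) rather than strings.

```python
# A sketch of a "digital manipulative": a toy block that records how it
# is moved and can play the moves back. Sensor input is simulated here.

class SmartBlock:
    """A toy block with added computational power: it remembers its own
    manipulation history and can re-enact it."""

    def __init__(self, colour):
        self.colour = colour
        self.recorded_moves = []   # the block "remembers" how it was handled

    def move(self, gesture):
        # Physical manipulation is the input; the block stores it digitally.
        self.recorded_moves.append(gesture)

    def replay(self):
        # The same object is also the output: it plays back the stored gestures.
        return list(self.recorded_moves)


block = SmartBlock("red")
for gesture in ["tilt-left", "shake", "tilt-right"]:
    block.move(gesture)

print(block.replay())  # → ['tilt-left', 'shake', 'tilt-right']
```

The point of the sketch is that input and output reside in the same physical object – the coupling this review contrasts with the mouse-and-screen separation of a GUI.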
Fig 2: These images show the MagiPlanet application developed at the Human Interface Technology Laboratory New Zealand (Billinghurst 2002).
Imagine a Year 5 teacher is trying to help her class understand planetary motion. They are
standing around in a circle looking down on a table on which are marked nine orbital paths
around the Sun. The children have to place cards depicting each of the planets in its correct
orbit. As they do, they can see overlaid onto the cards a 3D animation of that planet spinning
on its own axis and orbiting around the Sun. They can pick up the card and rotate it to see the
surface of the planet in more detail, or its moons.
Fig 3: These images show the 'I/O Brush' system developed by Kimiko Ryokai, Stefan Marti and Hiroshi Ishii of MIT Media Lab's Tangible Media Group⁶ (Ryokai et al 2004).
Familiar objects such as building bricks, balls and puzzles are physically manipulated to make changes in an associated digital world, capitalising on people's familiarity with their way of interacting in the physical world (Ishii and Ullmer 1997). In so doing, it is assumed that more degrees of freedom are provided …
6 http://web.media.mit.edu/~kimiko/iobrush/
SECTION 2
… command-line interfaces and keyboards, technologies were developed which enabled users to interact much more directly via a pointing device (the mouse) and a graphical user interface employing the now familiar desktop system of windows, icons and menus. The earliest example of such systems was the Star office application (Smith et al 1982), the precursor to the Microsoft Windows and Apple Macintosh operating systems.

Researchers in HCI called such systems 'direct manipulation' interfaces (Hutchins et al 1986; Shneiderman 1983). Hutchins et al (1986) analysed the properties of so-called direct manipulation interfaces and argued that there were two components to the impression the user could form about the directness of manipulating the interface: the degree to which input actions map onto output displays in an interactional or articulatory sense (psychologists refer to this as 'stimulus-response compatibility'), and the degree to which it is possible to express one's intentions or goals with the input language (semantic directness).

Articulatory directness refers to the extent to which the behaviour of an input action (eg moving the mouse) maps directly or otherwise onto the effects on the display (eg the cursor moving from one position to another). In other words, a system has articulatory directness to the extent that the form of the manipulation of the input device mirrors the corresponding behaviour on the display. This is similar to the concept of stimulus-response compatibility (SRC) in psychology. An example of SRC in the physical world is where the motion of turning a car steering wheel maps onto the physical act of the car turning in the appropriate direction.

Semantic directness in Hutchins et al's analysis referred to the extent to which the meaning of a digital representation (eg an icon representing a wastebasket) mapped onto its physical referent in the real world (ie an actual wastebasket).

In terms of tangible user interfaces, similar analyses have been carried out concerning the mapping between (i) the form of physical controls and their effects and (ii) the meaning of representations and the objects to which they refer. We outline these analyses below.

2.2 CONTROL AND REPRESENTATION

In this section we consider the extension of Hutchins et al's analysis of direct manipulation to the design of tangible user interfaces.

As argued earlier, in traditional human-computer interfaces (ie graphical user interfaces or GUIs) a distinction is typically made between input and output. For example, the mouse is an input device and the screen is the output device. In tangible user interfaces this distinction disappears, in two senses:

1 In tangible interfaces the device that controls the effects that the user wants to achieve may be both input and output at one and the same time.

2 In GUIs the input is normally physical and the output is normally digital, but in tangible user interfaces there can be a variety of mappings of digital-to-physical representations, and vice versa. Ullmer and Ishii explain this by analogy with one of the earliest forms of computer, the abacus: …
… devices of interfaces that could count as tangible and, in fact, many examples which claim to employ tangible user interfaces for learning do not lie at the strict, tightly coupled end of this continuum.

The strongest level of coherence (physical-digital coupling) in the scheme of Koleva et al is where there is the illusion that the physical and digital representations are the same object. An example in the world of tangible computing is the Illuminating Clay system (Piper et al 2002), where a piece of 'clay' can be manipulated physically and, as a result, a display projected onto its surface is altered correspondingly.

A second kind of distinction which Koleva et al make concerns the nature (meaning) of the representational mappings between physical and digital. In the case of a literal or one-to-one mapping, any actions with the physical devices should correspond exactly to the behaviour of the digital object. In the case of a transformational mapping there is no such direct correspondence (eg placing a digital object on a particular location might trigger an animation).

2.3 CONTAINERS, TOKENS AND TOOLS

Another classification of tangible technologies is provided by Holmquist et al (1999), who distinguish between containers, tokens and tools. They define containers as generic objects that can be associated with any type of digital information. For example, Rekimoto demonstrated how a pen could be used as a container to 'pick-and-drop' digital information between computers, analogous to the 'drag-and-drop' facility on a single computer screen (Rekimoto 1997). However, even though containers have some physical analogy in their functionality, they do not necessarily reflect that function, or their use, physically – they lack natural 'affordances' in the Gibsonian sense (Gibson 1979; Norman 1988, 1999).

In contrast, tokens are objects that physically resemble the information they represent in some way. For example, in the metaDESK system (Ullmer and Ishii 1997)⁷, a set of objects was designed to physically resemble buildings on a digital map. By moving the physical buildings around on the physical desktop, relevant portions of the map were displayed.

Finally, in Holmquist et al's analysis, tools are defined as physical objects which are used as representations of computational functions. Examples include the Bricks system referred to at the beginning of this review (Fitzmaurice et al 1995), where the bricks are used as 'handles' to manipulate digital information. Another example is the use of torches or flashlights to interact with digital displays, as in the Storytent system⁸ (Green et al 2002), described in more detail in Section 4.

2.4 EMBODIMENT AND METAPHOR

Fishkin (2004) provides a two-dimensional classification involving the dimensions 'embodiment' and 'metaphor'. By
7 http://tangible.media.mit.edu/projects/metaDESK/metaDESK.html
8 http://machen.mrl.nott.ac.uk/Challenges/Devices/torches.htm
embodiment Fishkin means the extent to which the users are attending to an object while they manipulate it. In terms of Weiser's notion of ubiquitous computing discussed previously (Weiser and Brown 1996), this is the dimension of central-peripheral attentional focus. At one extreme of Fishkin's dimension of embodiment, the output device is the same as the input device. Examples include the Tribble system⁹ (Lifton et al 2003), which consists of a 'sensate skin' that can react to being touched or stroked by changing lights on its surface, vibrating, and so on. A similar example is 'Super Cilia Skin'¹⁰ (Raffle et al 2003), which consists of computer-controlled actuators attached to an elastic membrane. The actuators change their physical orientation to represent information in a visual and tactile form. Just as clay will deform if pressed, these surfaces act as both input and output devices.

Slightly less tight coupling between input and output in Fishkin's taxonomy is seen in cases where the output takes place proximal, or near, to the input device. Examples include the Bricks system (Fitzmaurice et al 1995) and the I/O Brush system¹¹ (Ryokai et al 2004) referred to in the introduction to this review. The latter consists of a paintbrush whose bristles contain various sensors which allow the user, amongst other things, to 'pick up' the properties of an object (eg its colour) and 'paint' them onto a surface.

In the third case of embodiment, the output is 'around' the user (eg in the form of audio output) – what Ullmer and Ishii refer to as 'non-graspable' or ambient (Ullmer and Ishii 2000). Finally, the output can be distant from the input (eg on another screen).

Fishkin argues that as embodiment decreases, the 'cognitive distance' between input and output increases. He suggests that if it is important to maintain a 'cognitive dissimilarity' between input and output 'objects', then the degree of embodiment should be decreased. This may be important in learning applications, for example if the aim is to encourage the learner to reflect on the relationships between the two.

The second dimension in Fishkin's framework is metaphor. In terms of tangible interfaces this means the extent to which the user's actions are analogous to the real-world effect of similar actions. At one end of this continuum the tangible manipulation has no analogy to the resulting effect – an extreme example is a command-line interface. An example Fishkin cites from tangible interfaces is the Bit Ball system (Resnick et al 1998), where squeezing a ball alters audio output.

In other systems an analogy is made to the physical appearance (visual, auditory, tactile) of the device and its corresponding appearance in the real world. In terms of GUIs this corresponds to the appearance of icons representing physical documents, for example. However, the actions performed on or with such icons have no analogy with the real world (eg crumpling paper). In terms of TUIs, examples are where physical cubes are used as input devices and the particular picture on a face of a cube determines an operation (Camarata et al 2002).
9 http://web.media.mit.edu/~lifton/Tribble/
10 http://tangible.media.mit.edu/projects/Super_Cilia_Skin/Super_Cilia_Skin.htm
11 http://tangible.media.mit.edu/projects/iobrush/iobrush.htm
In contrast, other systems have analogies between manipulations of the tangible device and the resulting operations (as opposed to operands). Yet other systems combine analogies between the referents in the physical and digital worlds and the referring expressions or manipulations. For example, in GUIs the 'drag-and-drop' interface (eg dragging a document icon into the wastebasket) involves analogies concerning both the appearance of the objects (document, wastebasket) and the physical operation of dragging and dropping an object from the desktop into the wastebasket. An example in terms of TUIs is the Urp system described earlier (Underkoffler and Ishii 1999), where physical input devices resembling buildings are used to move buildings around a map in the virtual world.

Finally, Fishkin describes the other extreme of this continuum, where the digital system maps completely onto the physical system metaphorically. He refers to this as 'really direct manipulation' (Fishkin et al 2000). An example from GUIs is the use of a stylus or light pen to interact with a touchscreen or tablet (pen-tablet interfaces), where writing on the screen alters the document directly. An example from TUIs that he gives is the Illuminating Clay system (Piper et al, op cit).

2.5 SUMMARY

The dimensions highlighted in these frameworks are:

• The behaviour of the control devices and the resulting effects: a contrast is made between input devices where the form of user control is arbitrary and has no special behavioural meaning with respect to output (eg using a generic tool like a mouse to interact with the output on a screen), and input devices which have a close correspondence in behavioural meaning between input and output (eg using a stylus to draw a line directly on a tablet or touchscreen). This is what Hutchins et al (1986) refer to as articulatory correspondence. The form of such mappings may result in one-to-many relations between input and output (as in the arbitrary relations between a mouse, joystick or trackpad and various digital effects on a screen), or one-to-one relations (as in the use of special-purpose transducers where each device has one function).

• The semantics of the physical-digital representational mappings: this refers to the degree of metaphor between the physical and digital representation. It can range from a complete analogy, in the case of physical devices resembling their digital counterparts, to no analogy at all.

• The role of the control device: this refers to the general properties of the control device, irrespective of its behaviour and representational mapping. For example, a control device might play the role of a container of digital information, a representational token of a digital referent, or a generic tool representing some computational function.

• The degree of attention paid to the control device as opposed to that which it represents: completely embodied systems are those where the user's primary focus is on the object being manipulated rather than the tool being used to manipulate it. This can be more or less affected by the extent of the metaphor used in mapping between the control device and the resulting effects.
SECTION 3
This is supported by research by Goldin-Meadow (2003), who has shown over an extensive set of studies how gesture supports thinking and learning. Church and Goldin-Meadow (1986) studied 5-8 year-olds' explanations of Piagetian conservation tasks and showed that some children's gestures were mismatched with their verbal explanations. For example, they might say that a tall thin container has a large volume because it is taller, while their gesture indicated the width of the container. Children who produced these kinds of gestures tended to be those who benefited most from instruction or experimentation. The argument is that their gestures reflected an implicit or tacit understanding which wasn't open to being expressed in language. Goldin-Meadow also argues that gestures accompanying speech can, in some circumstances, reduce the cognitive load on the part of the language producer and, in other circumstances, facilitate parallel processing of information (Goldin-Meadow 2003).

Research has also shown that touching objects helps young children in learning to count – not just in order to keep track of what they are counting, but in developing one-to-one correspondences between number words and item tags (Alibali and DiRusso 1999). Martin and Schwartz (in press) studied 9 and 10 year-olds learning to solve fraction problems using physical manipulatives (pie wedges). They found that children could solve fraction problems by moving physical materials even though they often couldn't solve the same problems in their heads, even when shown a picture of the materials. However, action itself was not enough – its benefits depended on particular relationships between action and prior knowledge.

Recent neuroscientific research suggests that some kinds of visuo-spatial transformations (eg mental rotation tasks, object recognition, imagery) are interconnected with motor processes and possibly driven by the motor system (Wexler 2002). Neuroscientific research also suggests that the neural areas activated during finger-counting, which is a developmental strategy for learning calculation skills, eventually come to underpin numerical manipulation skills in adults (Goswami 2004).

3.2 THE USE OF PHYSICAL MANIPULATIVES IN TEACHING AND LEARNING

There is a long history of the use of manipulatives (physical objects) in teaching, especially in mathematics and especially in early years education (eg Dienes 1964; Montessori 1917). There are several arguments concerning why manipulation of concrete objects helps in learning abstract mathematical concepts. One is what Chao et al (2000) call the 'mental tool view' – the idea that the benefit of physical materials comes from using the mental images formed during exposure to the materials. Such mental images of the physical manipulations can then guide and constrain problem solving (Stigler 1984).

An alternative view is what Chao et al call the 'abstraction view' – the idea that the value of physical materials lies in learners' abilities to abstract the relation of interest from a variety of concrete instances (Bruner 1966; Dienes 1964). However, the findings concerning the effectiveness of manipulatives over traditional methods of instruction are inconsistent, and where …
SECTION 3
classification. If they can draw pictures of these groupings they are displaying an iconic understanding. If they can use verbal labels to represent the categories they are displaying a symbolic understanding. Iconic representations have a one-to-one correspondence with the objects they represent and they have some resemblance (in representational terms) to that which they represent. They serve as bridging analogies (cf Brown and Clement 1989) between an implicit sensorimotor representation (in Piagetian terms) and an explicit or articulated symbolic representation.

Piaget’s developmental theory involved the transformation of initially sensorimotor representations (from simple reflexes to more and more controlled repetition of sensorimotor actions to achieve effects in the world), to symbolic manipulations (ranging from simple single operations to coordinated multiple operations, but operating on concrete external representations), to fully-fledged formal operations (involving cognitive manipulation of complex symbol systems, as in hypothetico-deductive reasoning). Each transition through Piagetian stages of cognitive development involved increasing explicitation of representations, gradually rendering them more and more open to conscious, reflective cognitive manipulation. Similarly, Bruner’s theory of development (1966) also involved transitions from implicit or sensorimotor representations (which he terms ‘enactive’) to gradually more explicit and symbolic representations. The distinction, however, between Piaget’s and Bruner’s theories lies in the degree to which such changes are domain-general (for Piaget they were, for Bruner each new domain of learning involved transitions through these stages) and age-related or independent.

More recent theories from cognitive development also involve progressive ‘explicitation’ of representational systems – for example, Karmiloff-Smith’s theory of representational re-description (1992) describes transitions in children’s representations from implicit to more explicit forms. In her theory children start out learning in a domain (eg drawing, mathematics) by operating on solely external representations (similar to Bruner’s enactive phase). At this level they have learned highly specific procedures that are not open to revision, conscious inspection or verbal report and are triggered by specified environmental conditions. Gradually these representations become more flexible and can be changed internally (for example, children gradually learn to generalise procedures and appear to operate with internal rules for producing behaviour). Finally the child is able to reflect upon his or her representations and is able, for example, to transfer rules or procedures across domains or modes of representation.

The representational transformations described by Piaget, Bruner and Karmiloff-Smith are also mirrored in another highly influential theory of learning – that of John Anderson – but in the reverse sequence. Anderson’s theory of skill acquisition (Anderson 1993; Anderson and Lebiere 1998) describes the transitions from initial explicit or declarative representations that involve conscious awareness and control (learning facts or condition-action rules) to procedures which become more automated and implicit with practice. An example is in learning to drive, where rules of behaviour (eg depressing the clutch and moving the gear lever into position) are initially, for the novice driver, a matter of effortful recall and control and where it is
12 The term ‘constructionism’ is based on Piaget’s approach of ‘constructivism’ – the idea being that children develop
knowledge through action in the world (rather than, in terms of the preceding behaviourist learning theories, through
more passive response to external stimuli). By emphasising the construction of knowledge, Papert was stating that children
learn effectively by literally constructing knowledge for themselves through physical manipulation of the environment.
Papert outlined a number of principles for learning through such activities. These included the idea that otherwise implicit reasoning could be made explicit (ie the child’s own sensorimotor representations of moving themselves through space had to be translated into explicit conscious and verbalisable instructions for moving another body through space), that the child’s own reasoning and its consequences were therefore rendered visible to themselves and others, that this fostered planning and problem solving skills, that therefore errors in the child’s reasoning (literally, ‘bugs’ in programming terms) were made explicit and the means for ‘debugging’ such errors were made available, all leading to the development of metacognitive skills.

In earlier sections of this review we saw how various frameworks offered for understanding tangible computing referred to close coupling of physical and digital representations – the ideal case being ‘really direct manipulation’ where the user feels as if they are interacting directly with the object of interest rather than some symbolic representation of the object (Fishkin et al 2000). Papert’s vision also involves close coupling of the physical and digital. However, crucial to Papert’s arguments about the advantages of Logo, turtle geometry and ‘body-centered geometry’ (McNerney 2004) is the notion of reflection by the learner. The point about turtle geometry is not just that it is ‘natural’ for the learner to draw upon their own experience of moving their body through space, but that the act of instructing another body (whether it is a robot or a screen-based turtle) to produce the same behaviour renders the learner’s knowledge explicit.

So ‘really direct manipulation’ may not be the best for learning. For learning, the goal of interface design may not always be to render the interface ‘transparent’ – sometimes ‘opacity’ may be desirable if it makes the learner reflect upon their actions (O’Malley 1992). There are at least three layers of representational mapping to be considered in learning environments:

• the representation of the learning domain (eg symbol systems representing mathematical concepts)

• the representation of the learning activity (eg manipulating Dienes blocks on a screen)

• the representation embodied in the tools themselves (eg the use of a mouse to manipulate on-screen blocks versus physically moving blocks around in the real world).

When designing interfaces for learning the main goal may not be the speed and ease of creation of a product – when the educational activity is embedded within the task one may not want to minimise the cognitive load involved. Research by Golightly and Gilmore (1997) found that a more complex interface produced more effective problem solving compared to an easier-to-use interface. They argue that, for learning and problem solving, the rules of transparent direct manipulation interface design may need to be broken. However, designs should reduce the learner’s cognitive load for performing non-content related tasks in order to enable learners to allocate cognitive resources to understanding the educational content of the learning task. A similar point is made by Marshall et al (2003) in their analysis of tangible interfaces for learning.
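Papert’s point about turtle geometry can be made concrete with a short sketch. The following is a minimal, screen-less turtle written in Python purely for illustration (the class and its commands are our own simplification, not Logo itself): to make the turtle ‘walk a square’, the child must spell out, as explicit instructions, knowledge that is implicit in walking a square themselves.

```python
import math

class Turtle:
    """A minimal screen-less turtle: tracks only position and heading."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 = east
        self.path = [(0.0, 0.0)]

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def right(self, angle):
        self.heading = (self.heading - angle) % 360

def square(t, side):
    # The child's implicit body knowledge -- 'walk round a square' --
    # has to be re-expressed as four explicit repetitions of
    # 'go forward, then turn through a right angle'.
    for _ in range(4):
        t.forward(side)
        t.right(90)

t = Turtle()
square(t, 100)
```

Running the sketch brings the turtle back to its starting point, facing its original heading: the ‘total turtle trip’ of 360 degrees of turning, which in Papert’s account the child discovers by debugging their own instructions.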
Marshall et al discuss two forms of tangible technologies for learning: expressive and exploratory. They draw upon a concept from the philosopher Heidegger, termed ‘readiness-to-hand’ (Heidegger 1996), later applied to human-computer interface design by Winograd and Flores (1986) and more recently to tangible interfaces by Dourish (2001). The concept refers to the way that when we are working with a tool (eg a hammer) we focus not on the tool itself but on the task for which we are using the tool (ie hammering in a nail). The contrast to ‘readiness-to-hand’ is ‘present-at-hand’ – ie focusing on the tool itself. Building on Ackermann’s work (1996, 1999), they argue that effective learning involves being able to reflect upon, objectify and reason about the domain, not just to act within it. Marshall et al’s notion of ‘expressive’ tangible systems refers to technologies which enable learners to create their own external representations – as in Resnick’s ‘digital manipulatives’ (Resnick et al 1998), derived from the tradition of Papert and Logo programming. They argue that by making their own understanding of a topic explicit, learners can make visible inconsistencies, conflicting beliefs and incorrect assumptions.

Marshall et al contrast these expressive forms of technologies with exploratory tangible systems where learners focus on the way in which the system works rather than on the external representations they are building. Again, this is in keeping with Papert’s arguments about the value of ‘debugging’ for learning. When their program doesn’t run or the robot doesn’t work in the way in which they expected, learners are forced to step back and reflect upon the program itself or the technology itself to try and understand why errors or bugs have occurred. In so doing, they become more reflective about the technology they are using. This line of argument is also in keeping with a situated learning perspective, where opportunities for learning occur when there are ‘breakdowns’ in ‘readiness-to-hand’ (Lave and Wenger 1991).

Marshall et al suggest that effective learning stems from cycling between the two modes of ‘ready-to-hand’ (ie using the system to accomplish some task) and ‘present-at-hand’ (ie focusing on the system itself). They also suggest two kinds of activity for using tangibles in learning: expressive activity is where the tangible represents or embodies the learner’s behaviour (physically or digitally), and exploratory activity is where the learner explores the model embodied in the tangible interface that is provided by the designer (rather than themselves). The latter can be carried out either in some practical sense (eg figuring out how the technology works, such as the physical robot in LEGO/Logo – Martin and Resnick 1993), or in what they call a theoretical sense (eg reasoning about the representational system embodied in the model, such as the programming language in the case of systems like Logo).

A study by Sedig et al (2001) provides some support for the claims made by Marshall et al. The study examined the role of interface manipulation style on reflective cognition and concept learning. Three versions of a tangrams puzzle (geometrically shaped pieces that must be arranged to fit a target outline) were designed:

• Direct Object Manipulation (DOM) interface in which the user manipulates the visual representation of the objects
• Direct Concept Manipulation (DCM) interface in which the user manipulates the visual representation of the transformation being applied to the objects

• Reflective Direct Concept Manipulation (RDCM) interface in which the DCM approach is used with scaffolding.

Results from 44 children aged 11 and 12 showed that the RDCM group learned significantly more than the DCM group, who in turn learned significantly more than the DOM group.

Finally, research by Judy DeLoache and colleagues on the development of children’s understanding of and reasoning with symbols provides another note of caution in drawing implications for learning from the physical-digital representational mappings in tangibles. DeLoache et al (1998) argue that it cannot be assumed that even the most ‘obvious’ or iconic of symbols will automatically be interpreted by children as a representation of something other than itself. We might be tempted to assume that little learning is required for highly realistic models or photographs – ie symbols that closely resemble their referents. Even very young infants can recognise information from pictures. However, DeLoache et al argue that perceiving similarity between a picture and a referent is not the same as understanding the nature of the picture. She and her colleagues have shown over a number of years of research that problems in understanding symbols continue well beyond infancy and into childhood.

For example, young children (2.5 years) have problems understanding the use of a miniature scale model of a room as a model. They understand perfectly well that it is a model and can find objects hidden in the miniature room. However they have great difficulty in using this model as a clue to finding a corresponding real object hidden in an identical real room (DeLoache 1987). DeLoache argues that the problem stems from the dual nature of representations in such tasks. While the models serve as representations of another space, they are also objects in themselves – ie a miniature room in which things can be hidden. Young children find it difficult, she argues, to simultaneously reason about the model as a representation and as an interesting object, partly because it is highly salient and attractive as an object in its own right (DeLoache et al op cit). She tested this theory by having children use photographs of the real room rather than the scale model and showed that children found that task much easier. This and other research leads DeLoache and colleagues to issue caution in assuming that highly realistic representations somehow make the task of mapping from the symbol to the referent easier. In fact, they argue, it can have the opposite effect to that which is desired. They argue that the task for young children in learning to use symbols is to realise that the properties of the symbols themselves are less important than what they represent. So making symbols into highly salient concrete objects may make their meaning less, not more, clear to young children.

Such cautions may also apply to the use of manipulatives with older children. For example, Hughes (1986) found that 5-7 year-olds had difficulties using small blocks to represent a single number or simple addition problems. The difficulties the children had suggested that they didn’t really understand how the blocks were
supposed to relate to the numbers and the problem. DeLoache et al (op cit) make a very interesting comment on the difference between the ways in which manipulatives are used in mathematics teaching in Japan and in North America:

“In Japan, where students excel in mathematics, a single, small set of manipulatives is used throughout the elementary school years. Because the objects are used repeatedly in various contexts, they presumably become less interesting as things in themselves. Moreover, children become accustomed to using the manipulatives to represent different kinds of mathematics problems. For these reasons, they are not faced with the necessity of treating an object simultaneously as something interesting in its own right and a representation of something else. In contrast, American teachers use a variety of objects in a variety of contexts. This practice may have the unexpected consequence of focusing children’s attention on the objects rather than on what the objects represent.” (DeLoache et al 1998)

representations. The point is not that the objects are concrete and therefore somehow ‘easier’ to understand, but that physical activity itself helps to build representational mappings that serve to underpin later more symbolically mediated activity after practice and the resulting ‘explicitation’ of sensorimotor representations.

However, other research has shown that it is important to build in activities that support children in reflecting upon the representational mappings themselves. DeLoache’s work suggests that focusing children’s attention on symbols as objects may make it harder for them to reason with symbols as representations. Marshall et al (2003) argue for cycling between what they call ‘expressive’ and ‘exploratory’ modes of learning with tangibles.
SECTION 4

13 www.leapfrog.com
14 www.paperplusplus.net
4.2 PHICONS: THE USE OF PHYSICAL OBJECTS AS DIGITAL ICONS

The previous section gave examples of the use of paper and books which were augmented by a range of technologies (eg barcodes, RFID tags, video-based augmented reality) to trigger digital effects. In this section we review examples of the use of other physical objects, such as toys, blocks and physical tags, to trigger digital effects.

accompanied by the associated audio. Thus children can activate stories other children have told and also edit them and create their own stories.

In the KidStory project referred to earlier, RFID tags were embedded in toys as story characters. When the props were moved near to the tag reader it triggered corresponding events involving the characters on the screen (Fig 5).
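At implementation level, phicon couplings of the KidStory kind reduce to a lookup from tag identity to digital event. The following Python sketch is illustrative only – the tag IDs, story events and function names are invented, not taken from KidStory itself:

```python
# Each physical toy carries a tag; sensing a tag triggers the
# corresponding on-screen story event. All IDs and events below
# are invented for illustration.
STORY_EVENTS = {
    "tag-0451": "wolf enters the forest",
    "tag-0452": "grandmother opens the door",
    "tag-0453": "woodcutter arrives",
}

def on_tag_read(tag_id, screen):
    """Called whenever the reader senses a tag nearby."""
    event = STORY_EVENTS.get(tag_id)
    if event is None:
        return None            # unknown object: ignore it
    screen.append(event)       # stand-in for animating the character
    return event

screen = []
on_tag_read("tag-0451", screen)
on_tag_read("tag-0453", screen)
```

The design point is that the mapping itself is trivial; the educational interest lies in which physical objects are chosen to stand for which digital events.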
17 www.equator.ac.uk/Projects/Digitalplay/Chromarium.htm
A number of different ways of mixing colour were explored, using a variety of physical and digital tools. For example, one activity allowed them to combine colours using physical blocks, with different colours on each face. By placing two blocks together children could elicit the combined colour and digital animations on an adjacent screen. Children were intuitively able to interact with the coloured cubes, combining and recombining colours with immediate visual feedback. Another activity enabled children to use software tools, disguised as paintbrushes, on a digital interactive surface. Here children could drag and drop different coloured digital discs and see the resultant mixes. A third activity again allowed children to use the digital interactive surface, but this time their interactions with a digital image triggered a physical movement on an adjacent toy windmill. In their comparison of different types of these mappings between digital and physical representations, Rogers et al (2002) found the coupling of a familiar physical action with an unfamiliar digital effect to be effective in causing children to talk about and reflect upon their experience. The activities that enabled reversibility of colour mixing and immediate feedback were found to support more reflective activity; in particular the physical blocks provided a wider variety of physical manipulation and encouraged children to explore.

of the use of physical devices which have computational properties embedded in them.

Resnick’s group at MIT’s Media Lab have built on Papert’s pioneering work with Logo to develop a suite of tangible technologies dubbed ‘digital manipulatives’ (Resnick et al 1998). The aim was to enable children to explore mathematical and scientific concepts through direct manipulation of physical objects. Digital manipulatives are computationally enhanced versions of toys, enabling dynamics and systems to be explored. This work began with a collaboration with the LEGO toy company to create the LEGO/Logo robotics construction kit (Resnick 1993) by which children can create Logo programs to control LEGO assemblies. The kit consists of motors and sensors and children use Logo programs to control the items they build.

The next generation of these digital manipulatives was Programmable Bricks (P-Bricks) (Resnick et al 1996). These bricks contain output ports for controlling motors and lights, and input ports for receiving information from sensors (eg light, touch, temperature). As with LEGO/Logo, programs are written in Logo and downloaded to the P-Brick, which then contains the program and is autonomous. These approaches are commercially available under the LEGO Mindstorms brand 18.
18 http://mindstorms.lego.com
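A P-Brick program of this kind is, in essence, an autonomous sense-act loop: read the input ports, decide, drive the output ports. The sketch below simulates one tick-by-tick in Python purely for illustration; real P-Brick programs are written in Logo, and the sensor, threshold and behaviour here are invented:

```python
def run_brick(light_readings, threshold=50):
    """Simulate an autonomous P-Brick program: switch the motor on
    whenever the light sensor reads below a threshold (eg a robot
    that only drives in the dark). Returns the motor state per tick."""
    motor_states = []
    for reading in light_readings:   # one sensor reading per loop tick
        motor_on = reading < threshold
        motor_states.append(motor_on)
    return motor_states

# As the light level falls the motor switches on; as it rises again,
# the motor switches off.
states = run_brick([80, 60, 40, 30, 70])
```

Once downloaded, such a loop runs without any tether to a desktop computer, which is what makes the brick feel like an autonomous object rather than a peripheral.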
create the BitBall – a transparent rubbery ball. BitBalls have been used in scientific investigations with undergraduates and children in learning kinematics. For example, BitBalls can be thrown up into the air and can store acceleration data which can then be viewed graphically on a screen. Children as young as 10 years old used programmable bricks and Crickets to build and program robots to exhibit chosen behaviours. Crickets have been used by children to create their own scientific instruments to carry out investigations (Resnick et al 2000). For example, one girl built a bird feeder with a sensor attached so that when the bird landed to feed, the sensor triggered a photo of the bird to be taken. The girl could then see all the birds that had visited the feeder while she was away.

Curlybot (Frei et al 2000) is a palm-sized dome-shaped two-wheel toy that can record and replay physical motion. A child presses a button to record the movement of Curlybot and then presses a button to indicate recording has finished. The replay facility then enables the child to replay the motion and Curlybot repeats the action at the same speed and with the same pauses and motion. Curlybot can also be used with a pen attached to leave a trail as it moves. The authors highlight Curlybot as an educational tool encompassing geometric design, gesture and narrative.

More recently, Raffle et al produced Topobo (Raffle et al 2004), a 3D ‘constructive assembly system’ which is aimed at helping children to understand behaviours of complex systems. Topobo is embedded with kinetic memory – the ability to record and play back physical motion. By snapping together a combination of static and motorised components, children can assemble dynamic biomorphic figures, such as animals, and animate them by pushing, pulling and twisting them, and observe the system play back those motions. The designers argue that Topobo can be used to help children learn about dynamic systems.

The Telltale system 19 is a technology-enhanced language toy which is said to aid children in literacy development (Ananny 2002). It consists of a caterpillar-like structure and children can record a short audio clip in each segment of the body and can rearrange each segment to alter the story. Ananny found that Telltale’s segmented interface enabled children to produce longer and more linguistically elaborate stories than with a non-segmented interface.

StoryBeads (Barry et al 2000) are tangible/wearable computing elements designed to enable the construction of stories by allowing users to trade stories through pieces of images and text. The beads communicate by infrared and can beam items from bead to bead or can be traded in order to construct and share narratives.

Triangles (Gorbet et al 1998) were used as a tangible interface for exploring stories in a non-linear manner. Triangles are interconnecting physical shapes with magnetic edges. Each triangle contains a microprocessor which can identify when it is connected to another. Triangles have been used for a range of applications including storytelling. Audio, video and still images can be captured and stored within the triangles. The three-sided nature of
19 http://web.media.mit.edu/~ananny/thesis.html
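The ‘kinetic memory’ shared by Curlybot and Topobo can be understood as record-and-replay of timed motion commands. The following Python sketch is illustrative only – the one-button interaction is modelled on Curlybot’s, but the class and data format are our own invention, not the toy’s firmware:

```python
class KineticMemory:
    """Record a sequence of (duration, left_wheel, right_wheel)
    commands and play it back verbatim, preserving speed and pauses."""
    def __init__(self):
        self.tape = []
        self.recording = False

    def press_button(self):
        # A single button toggles between recording and idle,
        # as on Curlybot.
        self.recording = not self.recording

    def move(self, duration, left, right):
        if self.recording:
            self.tape.append((duration, left, right))

    def replay(self):
        # Replay repeats the action at the same speed and with the
        # same pauses (a pause is a segment with both wheels at 0).
        return list(self.tape)

bot = KineticMemory()
bot.press_button()           # start recording
bot.move(1.0, 5, 5)          # roll forward
bot.move(0.5, 0, 0)          # pause
bot.move(2.0, 5, -5)         # spin
bot.press_button()           # stop recording
```

The educationally interesting part is precisely that the replayed tape is an explicit, inspectable record of the child’s own gesture.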
As the children moved the Snooper around the room they could discover hidden food tokens which they could then try out in the well. The PDA displayed an image of the food token on the display as it was moved near to the location of the hidden tag. These and other technologies were evaluated with 6 and 7 year-olds hunting the Snark in pairs (Price et al 2003).

A device for tangible storytelling involves the use of ordinary flashlights or torches to interact with a display.

4.4 SENSORS AND DIGITAL PROBES

This section describes a range of tangible devices which are based on physical tools which act as sensors or probes of the environment (eg light, colour, moisture).

21 www.equator.ac.uk/Challenges/Devices/torches.htm
Children can move the brush over any physical surface and pick up colours and textures and then draw with them on canvas. The authors found that when the I/O brush was used in a kindergarten class, children talked explicitly about patterns and features available in their environment.

The SENSE project (Tallyn et al 2004) has been exploring the potential of sensor technologies to support a hands-on approach to learning science in the school classroom. Children at two participating schools designed and used their own pollution sensors within their local environment. The technology consisted of a PDA and carbon monoxide pollution sensor. The sensor was coloured differently on each side so that the direction in which the sensor was facing would be evident when children later inspected video data of the sensor in use. Children captured their own sensor data using this device, which was downloaded to additional visualisation technologies to help them analyse their data and to understand it in the context of similar data gathered by scientists carrying out related research (Equator Environmental e-Science project 22).

In the Ambient Wood project mentioned earlier (Price et al 2003; Rogers et al in press; Rogers et al 2002a; Rogers et al 2002b) groups of children used mobile technologies (PDAs) outdoors to support scientific enquiry about the biological processes taking place in a wood. One of the devices used (a probe tool) contained sensors enabling measurement of the light and moisture levels within the wood. A small screen was also provided which displayed the readings using graphical visualisations. Analysis of the patterns of interaction amongst the children showed that the technologies encouraged active exploration, the generation of ideas and the testing of hypotheses about the ecology of the woodland.

4.5 SUMMARY

We have very briefly reviewed a number of examples of tangible technologies which have been designed to support learning activities. We have discussed these under four headings: digitally augmented paper, physical objects as icons (phicons), digital manipulatives and sensors/probes.

The reason for treating digital paper as a distinct category is to make the point that, rather than ICT replacing paper and book-based learning activities, they can be enhanced by a range of digital activities, from very simple use of cheap barcodes printed on sticky labels and attached to paper, to the much more sophisticated use of video tracing in the augmented reality examples. However, as Adrian Woolard and his team at the BBC have shown, real classrooms can use AR technology very effectively, without the need for special head-mounted displays to view the augmentation. He and his team have worked with teachers using webcams and projectors to overlay videos and animations on top of physical objects. Even something as simple as projecting a display on top of paper, or on a tabletop with physical objects, could be used as an effective technique for getting children to see the relationships between objects and representations they create and manipulate in the physical world, and what happens in the computational world.
22 www.equator.ac.uk
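Underlying both the SENSE sensors and the Ambient Wood probe tool is a simple data-logging pattern: timestamp each reading in the field, then summarise the log for later analysis and comparison. A standard-library Python sketch of that pattern, with sensor names and values invented for illustration:

```python
from statistics import mean

def log_reading(log, timestamp, sensor, value):
    """Append one field reading, as the PDA would on each sample."""
    log.append({"t": timestamp, "sensor": sensor, "value": value})

def summarise(log, sensor):
    """Back in the classroom: summarise one sensor's readings so they
    can be compared with, say, data gathered by scientists."""
    values = [r["value"] for r in log if r["sensor"] == sensor]
    return {"n": len(values), "min": min(values),
            "max": max(values), "mean": round(mean(values), 2)}

log = []
log_reading(log, 0, "light", 820)
log_reading(log, 60, "light", 430)
log_reading(log, 60, "moisture", 0.34)
log_reading(log, 120, "light", 210)
summary = summarise(log, "light")
```

Keeping the raw timestamped log, rather than only a summary, is what lets children later relate readings to where and when they were taken, as in the SENSE video analysis.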
SECTION 5
In addition, the following claims have been made for tangible interfaces and digital manipulatives:

• they allow for parallel input (eg two hands), improving the expressiveness or the communication capacity with the computer

• they take advantage of well developed motor skills for physical object manipulations and spatial reasoning

• they externalise traditionally internal computer representations

• they afford multi-person, collaborative use

• physical representations embody a greater variety of mechanisms for interactive control

• physical representations are perceptually coupled to actively mediated digital representations

Other research reviewed here has also suggested that so-called ‘really direct manipulation’ (Fishkin et al 2000) may not be ideal for learning applications, where often the goal is to encourage the learner to reflect and abstract. This is borne out by research showing that ‘transparent’ or really easy-to-use interfaces sometimes lead to less effective problem solving. Sometimes more effort is required for learning to occur, not less. However, it would be wrong to conclude that interfaces for learning should be made difficult to use – the point is to channel the learner’s attention and effort towards the goal or target of the learning activity, not to allow the interface to get in the way. In the same vein, Papert argued that allowing children to construct their own ‘interface’ (ie build robots or write programs) focused the child’s attention on making their implicit knowledge explicit. Others (Marshall et al 2003) support this idea and argue that
effective learning should involve both expressive activity, where the tangible represents or embodies the learner’s behaviour (physically or digitally), and exploratory activity, where the learner explores the model embodied in the tangible interface.

Finally, some implications for design and use of tangibles could be drawn from research on multiple representations in cognitive science. Ainsworth (Ainsworth 1999; Ainsworth et al 2002; Ainsworth and VanLabeke 2004) has carried out research on the use of multiple representations (eg text, graphics, animations) across a number of domains. She argues that the use of more than one representation can support learning in several different ways. For example, multiple representations can present complementary information, or they can constrain interpretations of each other, or they can support the identification of invariant aspects common to all representations and so support abstraction. The research on the design of tangibles for learning has not so far drawn on research on multiple representations, but it is clearly potentially relevant and important.

5.2 IMPLICATIONS FOR POLICY AND PRACTICE

Many of the technologies reviewed here are still in early prototype form and not readily available for the classroom. However some of them are – Logo and Roamer have been used for many years now in primary classrooms – and some of them are potentially available now with a bit of programming effort (eg barcodes, RFID tags). Even the augmented reality software is available as a public domain toolkit 23 and can be used together with ordinary cameras or webcams and projectors to create effective augmented reality demonstrations for children – as seen in the innovative work by Adrian Woolard and colleagues at the BBC Creative R&D. Data-logging and sensors have been used for many years in secondary school science. New portable technologies such as PDAs extend the flexibility of such systems to create potentially powerful mobile learning environments for use outside of the classroom.

Even if the technologies aren’t yet available, the pedagogy underlying these approaches can be used as a source for ideas in thinking about using ICT in teaching and learning various subjects. Especially in the early years there is a need to recognise that younger children may not understand the representational significance of the objects they are asked to work with in learning activities (whether computer-based or not). They need scaffolding in reasoning about the relationship between the properties of an object (or symbol on the screen) and the more abstract properties they represent.

It is hoped that teachers might also take inspiration from the whole idea of technology-enhanced learning moving beyond the desktop or classroom computer by, for example, making links between ICT-based activities and other more physical activities. For example, interactive whiteboards are appealing because teachers can use them in familiar ways for whole class teaching. But they
[23] www.hitl.washington.edu/research/shared_space/download/
have much more potential than just being slightly enhanced versions of PowerPoint. Children could be encouraged to interact actively with whiteboard sessions by collecting their own data for presentation (eg via mobile phones or digital cameras). Physical displays around the classroom might be linked to digital displays via, for example, camera phones or digital cameras. Children could be encouraged to simulate screen-based activities with physical models, either on paper or using new forms of 3D objects. Digital displays could be projected in more imaginative ways than just the traditional vertical screen: for example, projecting onto the floor would enable children to sit in a circle around the shared screen. The display could be combined with the use of physical objects and, using some 'wizardry', teachers might make changes to the display by manipulating the mouse or keyboard 'behind the scenes'. Displays could also be projected onto tabletops, over, say, paper that children are working with. Such arrangements not only enable the links between computer-based and non-computer-based representations to be made more explicit for children, but the social or collaborative arrangement also changes in interesting ways – some suggestions, it is hoped, for seeing how physical and digital activities could be more integrated.

More research is needed on the benefits of tangibles for learning – so far, evaluation of these technologies has been rather scarce. More effort is also needed to translate some of the research prototypes into technologies and toolkits that teachers can use without too much technical knowledge. Teachers also need training in the use of non-traditional forms of ICT and in how to incorporate them into teaching across the curriculum. And finally, new forms of assessment are needed to reflect the potential of these new technologies for learning.
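The 'bit of programming effort' mentioned above for technologies such as barcodes can be quite modest. As an illustrative sketch of ours (not from the report; the function name and sample numbers are our own), here is the standard EAN-13 check-digit calculation that any classroom barcode activity would ultimately rely on:

```python
def ean13_check_digit(first12: str) -> int:
    """Standard EAN-13 checksum: weight the 12 data digits 1,3,1,3,...
    from the left; the check digit brings the total to a multiple of 10."""
    if len(first12) != 12 or not first12.isdigit():
        raise ValueError("expected exactly 12 digits")
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# A full EAN-13 barcode is the 12 data digits plus the computed check digit
print(ean13_check_digit("400638133393"))  # -> 1, giving the code 4006381333931
```

With a scanner (or a camera-phone decoder) feeding digit strings into a check like this, pupils can verify or construct their own codes for tagged classroom objects.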
GLOSSARY
Container in the context of tangible user interfaces, this means the use of a control device to 'contain' some digital properties, as in 'pick-and-drop' interfaces

Debugging the process of discovering and eradicating errors in computer programs (see 'bug')

Declarative representation usually refers to the way in which factual knowledge (knowing that) is stored in human memory. Used in contrast to procedural knowledge (knowing how)

Digital manipulative a term coined by Mitchel Resnick and colleagues (1998) to refer to the digital augmentation of physical manipulatives such as Dienes blocks

Digital toys physical toys that can be manipulated to produce digital effects (eg make sounds, trigger animations on a computer screen)

Direct manipulation a concept coined in the early 1980s by Ben Shneiderman and others to refer to the use of physical input devices such as mice to interact with graphical displays. Also refers to a theory concerning direct physical-digital mappings in human-computer interaction (Hutchins et al 1986)

Drag-and-drop a term used to refer to an interface technique where one uses an input device (usually a mouse) to select and 'drag' a screen-based object from one location to another

Embodiment a term used in tangible computing to refer to the degree to which an interaction technique (eg manipulation of a digital object) carries with it an impression of being a physical (as opposed to digital) activity

Enactive a term used by Jerome Bruner to refer to a sensorimotor representation – eg a child who is physically counting out from an array of objects is said to be enacting the process of counting

Explicitation a term used to refer to the transformation of an implicit or tacit representation (eg procedural knowledge of counting) to one that is open to verbalisation and reflection

Fiducial markers special graphical symbols overlaid on physical displays (eg paper) that are used by vision-based systems to register location and identity in order to project augmented reality displays

GUI Graphical User Interface

HCI Human-Computer Interaction – an interdisciplinary research field focused on the design and evaluation of ICT systems

Head-mounted display a projection system worn by a user – can be a helmet that completely immerses the user or a see-through display (eg glasses) that allows the user to see both the real world and the projected world at the same time

ICT Information and Communication Technology

Input device a physical device used to interact with a computer system

Interactive whiteboard a display that uses a touchscreen for direct input with a stylus or other pointing device (also referred to as a 'smartboard')

LED Light Emitting Diode (eg the display on a digital watch)

Manipulative a term used to refer to the use of physical objects such as Dienes blocks in teaching mathematics

Mixed reality a term used to refer to the merging of physical and digital representations

Output device usually refers to a screen or monitor in traditional graphical user interfaces, but can also refer to audio speakers, force-feedback devices and tactile displays

Palmtop a small hand-held device, usually with a touchscreen display, that can be manipulated with a stylus (see PDA)

PDA Personal Digital Assistant (eg Palm, iPaq)

Sensorimotor representation a term derived from Piaget's theory of intellectual development referring to a non-symbolic cognitive representation based in low-level perception and action rather than 'higher-level' cognition

Space-multiplexed with space-multiplexed input, each function to be controlled has a dedicated transducer, each occupying its own space. For example, a car has a brake, clutch, accelerator, steering wheel and gear stick, which are distinct, dedicated transducers each controlling a single specific task (see 'time-multiplexed')
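The space-multiplexed/time-multiplexed contrast drawn in the last entry can be sketched in code. This is a hypothetical illustration of ours, not from the report: the car binds every function to its own dedicated control, while a GUI mouse must be attached to one function at a time.

```python
class SpaceMultiplexedCar:
    """Space-multiplexed input: each function has its own dedicated
    transducer, so every control is always simultaneously available."""
    def __init__(self):
        self.controls = {"brake": 0.0, "accelerator": 0.0, "steering": 0.0}

    def actuate(self, control: str, value: float) -> None:
        self.controls[control] = value


class TimeMultiplexedMouse:
    """Time-multiplexed input: a single transducer (the mouse) is
    reassigned over time to whichever function it currently controls."""
    def __init__(self):
        self.attached_to = None
        self.functions = {}

    def attach(self, name: str, initial: float = 0.0) -> None:
        self.functions.setdefault(name, initial)
        self.attached_to = name  # reassign the one device

    def actuate(self, value: float) -> None:
        self.functions[self.attached_to] = value


car = SpaceMultiplexedCar()
car.actuate("brake", 1.0)       # brake and steering can be used together
car.actuate("steering", -0.5)

mouse = TimeMultiplexedMouse()
mouse.attach("volume_slider")   # must acquire each function in turn
mouse.actuate(0.8)
mouse.attach("scrollbar")
mouse.actuate(0.3)
```

The design point the glossary entry makes falls out of the sketch: the car's controls can be operated concurrently and eyes-free, whereas the mouse serialises access to functions.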
BIBLIOGRAPHY
Anderson, JR and Lebiere, C (1998). The Atomic Components of Thought. Mahwah, NJ: Lawrence Erlbaum Associates

Back, M, Cohen, J, Gold, R, Harrison, S and Minneman, S (2001). Listen Reader: an electronically augmented paper-based book. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'01), Seattle, WA, ACM Press, pp23-29

Barry, B, Davenport, G and McGuire, D (2000). Storybeads: a wearable for story construction and trade. Proceedings of the IEEE International Workshop on Networked Appliances

Billinghurst, M (2002). Augmented reality in education. New Horizons for Learning

Billinghurst, M and Kato, H (2002). Collaborative augmented reality. Communications of the ACM, 45(7), 64-70

Borovoy, R, McDonald, M, Martin, F and Resnick, M (1996). Things that blink: computationally augmented name tags. IBM Systems Journal, 35(3), 488-495

Brave, S, Ishii, H and Dahley, A (1998). Tangible interfaces for remote collaboration and communication. Proceedings of the Conference on Computer Supported Cooperative Work (CSCW'98), ACM Press

Brown, D and Clement, J (1989). Overcoming misconceptions via analogical reasoning: factors influencing understanding in a teaching experiment. Instructional Science, 18, 237-261

Bruner, J (1966). Toward a Theory of Instruction. New York: WW Norton

Bruner, JS, Olver, RR and Greenfield, PM (1966). Studies in Cognitive Growth. New York: John Wiley & Sons

Camarata, K, Do, EY, Gross, M and Johnson, BR (2002). Navigational blocks: tangible navigation of digital information. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'02 Extended Abstracts), Seattle, Washington, USA, pp752-754

Carroll, L (1962/1876). The Hunting of the Snark. Bramhall House

Chao, SJ, Stigler, JW and Woodward, JA (2000). The effects of physical materials on kindergartners' learning of number concepts. Cognition and Instruction, 18(3), 285-316

Church, RB and Goldin-Meadow, S (1986). The mismatch between gesture and speech as an index of transitional knowledge. Cognition, 23, 43-71

Colella, V (2000). Participatory simulations: building collaborative understanding through immersive dynamic modelling. Journal of the Learning Sciences, 9(4), 471-500

Cox, M, Abbott, C, Webb, M, Blakeley, B, Beauchamp, T and Rhodes, V (2004). A Review of the Research Literature Relating to ICT and Attainment. DfES/Becta

Cox, M, Webb, M, Abbott, C, Blakeley, B, Beauchamp, T and Rhodes, V (2004). A Review of the Research Evidence Relating to ICT Pedagogy. DfES/Becta

DeLoache, JS (1987). Rapid change in the symbolic functioning of very young children. Science, 238, 1556-1557

DeLoache, JS (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8(2), 66-70

DeLoache, JS, Pierroutsakos, SL, Uttal, DH, Rosengren, KS and Gottlieb, A (1998). Grasping the nature of pictures. Psychological Science, 9(3), 205-210
Green, J, Schnädelbach, H, Koleva, B, Benford, S, Pridmore, T and Medina, K (2002). Camping in the digital wilderness: tents and flashlights as interfaces to virtual worlds. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'02), Minneapolis, Minnesota, ACM Press, pp780-781

Harrison, C, Comber, C, Fisher, T, Haw, K, Lewin, C, Lunzer, E et al (2002). ImpaCT2: The Impact of Information and Communication Technologies on Pupil Learning and Attainment (ICT in Schools Research and Evaluation Series No 7). DfES/Becta

Heidegger, M (1996). Being and Time. Albany, NY: State University of New York Press

Holmquist, LE, Redström, J and Ljungstrand, P (1999). Token-based access to digital information. Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing (HUC'99), Springer, pp234-245

Hughes, M (1986). Children and Number: Difficulties in Learning Mathematics. Oxford: Blackwell

Hutchins, E, Hollan, JD and Norman, DA (1986). Direct manipulation interfaces. In DA Norman and SW Draper (Eds), User Centered System Design (pp87-124). Hillsdale, NJ: Lawrence Erlbaum Associates

Ishii, H and Ullmer, B (1997). Tangible bits: towards seamless interfaces between people, bits and atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'97), pp234-241

Karmiloff-Smith, A (1992). Beyond Modularity. MIT Press

Koleva, B, Benford, S, Ng, KH and Rodden, T (2003). A framework for tangible user interfaces. Proceedings of the Physical Interaction (PI03) Workshop on Real World User Interfaces, Mobile HCI Conference 2003, Udine, Italy

Lave, J and Wenger, E (1991). Situated Learning: Legitimate Peripheral Participation

Leslie, AM (1987). Pretense and representation: the origins of 'theory of mind'. Psychological Review, 94, 412-426

Lifton, J, Broxton, M and Paradiso, J (2003). Distributed sensor networks as sensate skin. Proceedings of the 2nd IEEE International Conference on Sensors (Sensors 2003), Toronto, Canada, IEEE, pp743-747

Luckin, R, Connolly, D, Plowman, L and Airey, S (2003). Children's interactions with interactive toy technology. Journal of Computer Assisted Learning, 19, 165-176

Luff, P, Tallyn, E, Sellen, A, Heath, C, Frohlich, D and Murphy, R (2003). User Studies, Content Provider Studies and Design Concepts (No IST-2000-26130/D4&5)

MacKenzie, CL and Iberall, T (1994). The Grasping Hand. Amsterdam: North-Holland, Elsevier Science

MacKinnon, KA, Yoon, S and Andrews, G (2002). Using 'Thinking Tags' to improve understanding in science: a genetics simulation. Proceedings of the Conference on Computer Support for Collaborative Learning (CSCL 2002), Boulder, CO

Marshall, P, Price, S and Rogers, Y (2003). Conceptualising tangibles to support learning. Proceedings of Interaction Design and Children (IDC'03), Preston, UK, ACM Press
Rekimoto, J (1997). Pick-and-drop: a direct manipulation technique for multiple computer environments. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (UIST'97), ACM Press, pp31-39

Resnick, M (1993). Behavior construction kits. Communications of the ACM, 36(7), 65-71

Resnick, M, Berg, R and Eisenberg, M (2000). Beyond black boxes: bringing transparency and aesthetics back to scientific investigation. Journal of the Learning Sciences, 9(1), 7-30

Resnick, M, Martin, F, Sargent, R and Silverman, B (1996). Programmable bricks: toys to think with. IBM Systems Journal, 35(3), 443-452

Resnick, M, Martin, F, Berg, R, Borovoy, R, Colella, V, Kramer, K et al (1998). Digital manipulatives: new toys to think with. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp281-287

Resnick, M and Wilensky, U (1997). Diving into complexity: developing probabilistic decentralized thinking through role-playing activities. Journal of the Learning Sciences, 7(2)

Rieser, JJ, Garing, AE and Young, MF (1994). Imagery, action and young children's spatial orientation: it's not being there that counts, it's what one has in mind. Child Development, 65(5), 1262-1278

Rogers, Y, Price, S, Randell, C, Stanton Fraser, D, Weal, M and Fitzpatrick, G (in press). Ubi-learning: integrating indoor and outdoor learning experiences. Communications of the ACM

Rogers, Y, Scaife, M, Gabrielli, S, Smith, H and Harris, E (2002). A conceptual framework for mixed reality environments: designing novel learning activities for young children. Presence: Teleoperators & Virtual Environments, 11(6), 677-686

Rogers, Y, Scaife, M, Muller, H, Randell, C, Moss, A, Taylor, I et al (2002). Things aren't what they seem to be: innovation through technology inspiration. Proceedings of Designing Interactive Systems, London, pp373-379

Ryokai, K, Marti, S and Ishii, H (2004). I/O Brush: drawing with everyday objects as ink. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'04), Vienna, Austria, ACM Press, pp303-310

Ryokai, K and Cassell, J (1999). StoryMat: a play space for collaborative storytelling. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, PA, ACM Press, pp272-273

Sedig, K, Klawe, M and Westrom, M (2001). The role of interface manipulation style and scaffolding on cognition and concept learning in learnware. ACM Transactions on Computer-Human Interaction (ToCHI), 8(1), 34-59

Shneiderman, B (1983). Direct manipulation: a step beyond programming languages. IEEE Computer, 16(8), 57-69

Simon, T (1987). Claims for LOGO: what should we believe and why? In J Rutkowska and C Crook (Eds), Computers, Cognition and Development (pp115-133). Chichester: John Wiley & Sons Ltd

Smith, DC, Irby, CH, Kimball, RB, Verplank, WH and Harslem, EF (1982). Designing the Star user interface. Byte, 7(4), 242-282
Futurelab
1 Canons Road
Harbourside
Bristol BS1 5UH
United Kingdom
www.futurelab.org.uk
REPORT 12
ISBN: 0-9548594-2-1
Futurelab © 2004