Metacognitive Structuring while Learning with Hypermedia

Author
Beven, Frederick Alan

Published
2010

Thesis Type
Thesis (PhD Doctorate)

School
School of Education and Professional Studies

DOI
https://doi.org/10.25904/1912/3556

Copyright Statement
The author owns the copyright in this thesis, unless stated otherwise.

Downloaded from
http://hdl.handle.net/10072/367457

Griffith Research Online


https://research-repository.griffith.edu.au
Metacognitive Structuring while Learning with Hypermedia

Frederick Alan Beven

Dip Teach (TAFE) (Brisbane College of Advanced Education)


Bachelor of Education (Brisbane College of Advanced Education)
Master of Education with Honours (Griffith University)

School of Education and Professional Studies (Brisbane, Logan)


Faculty of Education
Griffith University

Submitted in fulfilment of the requirements of the degree of


Doctor of Philosophy

February 2010
Declaration

This work has not previously been submitted for a degree or diploma in any university.
To the best of my knowledge and belief, the thesis contains no material previously
published or written by another person except where due reference is made in the thesis
itself.

Fred Beven
2010

Abstract

In this research, the presence and nature of learners’ metacognition were explored
through think-aloud protocols while they were engaged in autonomous learning in
hypermedia settings. This purpose was achieved through two studies. The first, a
preliminary investigation, established the viability of applying, in hypermedia learning
settings, a method of identifying and classifying metacognition designed by Meijer,
Veenman, and Van Hout Wolters (2005, 2006). This proved positive and informed the
major research, a multiple case study of the extent to which learners regard themselves
as autonomous in these settings and of whether the provision of metacognitive training
enhanced their awareness of being metacognitive and/or their autonomy. In this major
research, top-level structuring (Bartlett, 1978, 2008) was adopted as a second
classificatory technique, which allowed distinctions in a learner’s organisational systems
to be captured within the categories established through Meijer et al.’s system.

Findings of the major study were that learners were effectively managing their own
learning. A major driver of this effectiveness appears to be the deployment of well-
constructed chains of metacognitive processes. Each of the participants considered
himself or herself a competent user of hypermedia and an autonomous learner. Their
accounts of what enables this autonomy were that:
• They were able to establish a purpose or goal for their learning upfront.
• They were able to master the learning interface quickly and use it in ways that
suited their learning styles and preferences.
• They were able to establish ways of progressing and of realising their learning
goals from within or from outside the learning interface.
• They were able to monitor effectively both the learning interface and their
learning trajectory, using a range of top-level structuring.

Finally, there is tentative evidence that training can have a positive effect and raise
awareness amongst learners of their metacognitive activity. Higher ratings by the
learners were associated with higher totals of metacognitive activity and with a spread
across different and more complex types of top-level structure. These associations
suggest a nexus between instruction and the changes in ratings, which can also be
associated with greater autonomy and learner competence.

The case study method used in the major study was the most appropriate approach
because case study is a preferred strategy when ‘how’ and ‘why’ questions are being
posed, when the investigator has little control over events, and when the focus is on a
contemporary phenomenon within some real-life context. It is argued that qualitative
case studies are characterised by the discovery of new relationships, concepts and
understandings rather than the verification of predetermined hypotheses. A unique data
capture method, tested in the pilot study, was used: it captured a video of the screen-
based learning event overlaid with verbal data collected through a modified stimulated
recall method, a retrospective questioning technique.

Within the limitations of the research, the major implications for practice drawn from
this research are that learners in hypermedia settings have potential for autonomous
learning, and that they are responsive to training directed at refining their metacognitive
awareness and capability. The contributions made by the research to theory and method
are that the combination of language analysis and the underlying ideation in think-aloud
transcripts is accessible through a method of hierarchically representing and interpreting
content and relational networks, and that top-level structuring affords such analysis.
Accordingly, this research has provided some evidence that the depths of thought within
what learners do when monitoring, evaluating or undertaking any such metacognitive
activity may be more clearly understood if the ‘hows’ of these are considered in relation
to the sophistication of learners’ top-level structuring when listing, comparing, problem-
solving or using causal organisers.

Acknowledgments
A number of people have provided support and encouragement throughout the process of
completing this study. Foremost among these have been my past and present supervisors,
Professor John Stevenson and Professor Brendan Bartlett, both of whom have shown a
fortitude and patience of rare proportion. Their ability to provide rapid and insightful
feedback and to show a genuine interest in my study was the motivation to keep going
much of the time. Thanks also go to my past and present co-supervisors, Dr Charlie
McKavanagh and Dr Ivan Chester both of whom have played significant parts in the
formulation of this work.

Thanks must also go to my past and present colleagues at Griffith University. Stephen
Billett, Jean Searle, Leesa Wheelahan, Howard Middleton, and more recently Ian James
and Ann Kelly, all of whom have been wonderful sources of advice and encouragement.

To David, Lesley, Tammy, Judy and Ray (pseudonyms), my research participants: I
would like to thank you for the time you gave so freely and for such insightful reflections
on your learning. Thank you also to the group involved in the pilot study, who also
allowed me such free access to their learning settings.

Finally, I would like to acknowledge the most important and significant person of all, my
wife Carol, to whom I am indebted in so many ways. First, for her patience and support
during the many years of study that have culminated in this thesis. As an exceptional
educator herself, with her knowledge of language and its structures, she was able to
play so many critical roles in enabling me to craft this work. In particular, her guidance
and assistance with the language aspects of the data analysis were outstanding.
I have indeed been fortunate to have her there.

I dedicate this thesis to Carol, the memory of my father, Frederick Lawson Beven and my
beautiful grand-daughters Grace and Ada.

Table of Contents
Declaration........................................................................................................................... i
Abstract ............................................................................................................................... ii
Acknowledgments.............................................................................................................. iv
Table of Contents................................................................................................................ 1
List of Figures ..................................................................................................................... 5
List of Tables ...................................................................................................................... 7
Chapter 1 – Introduction ..................................................................................................... 9
Reason for the research................................................................................................... 9
Previous hypermedia research reported in the literature................................................. 9
Metacognitive concepts adopted in this thesis.............................................................. 12
Top-level structuring concepts adopted in this thesis ................................................... 14
Empirical studies........................................................................................................... 15
Significance and contribution of the research............................................................... 16
Summary and thesis structure ....................................................................................... 18
Chapter 2 - Review of the Literature and Pilot Study....................................................... 22
Introduction................................................................................................................... 22
Background ................................................................................................................... 22
Hypermedia................................................................................................................... 24
Introduction............................................................................................................... 24
A Definition of hypermedia ...................................................................................... 25
Claims about the value of hypermedia...................................................................... 28
Educational hypermedia............................................................................................ 29
Claims about the value of educational hypermedia .................................................. 31
Hypermedia and learning.......................................................................................... 33
Educational hypermedia design ................................................................................ 35
The pilot study .............................................................................................................. 39
Background ............................................................................................................... 39
Method ...................................................................................................................... 39
Cohort ....................................................................................................................... 40
The hypermedia ........................................................................................................ 41
Data analysis ............................................................................................................. 42
Findings..................................................................................................................... 43
Metacognition ............................................................................................................... 48
Metacognition as beliefs ........................................................................................... 50
Metacognition as the knowledge of one’s own cognitive processes ........ 51
Metacognition as executive control .......................................................................... 51
Metacognition as the steering of one’s own cognitive processes ............................. 52
Metacognitive training .............................................................................................. 53
A taxonomy of metacognitive activities ................................................................... 54
Top-level structuring..................................................................................................... 59
A general purpose metacognitive taxonomy for examining hypermedia learning
settings ..................................................................................................................... 62
Conclusion .................................................................................................................... 65
Chapter 3 - Method ........................................................................................................... 67

The research in context ................................................................................................. 67
Qualitative research perspectives.................................................................................. 68
The importance of theory in qualitative research ..................................................... 68
Case studies................................................................................................................... 69
Activity Analysis and Verbal Data ............................................................................... 71
Activity Analysis ...................................................................................................... 72
Verbal Data ............................................................................................................... 72
Thinking aloud methods ....................................................................................... 73
Stimulated Recall .................................................................................................. 73
Research Design - Setting, participants and procedures............................................... 75
Stages and Setting ..................................................................................................... 75
Stage One – Pilot Study ............................................................................................ 75
Stage Two – Major Study: Case Studies................................................................... 77
Introduction........................................................................................................... 77
Case study participants ......................................................................................... 77
Procedure .............................................................................................................. 77
Data collection ...................................................................................................... 78
Metacognitive training .............................................................................................. 79
Data preparation and analysis ................................................................................... 79
The process of data analysis ......................................................... 81
Issues of validity and reliability................................................................................ 81
Summary ....................................................................................................................... 83
Chapter 4 - Findings.......................................................................................................... 84
Introduction................................................................................................................... 84
The Cases ...................................................................................................................... 86
Case One – David ..................................................................................................... 86
Case Two – Lesley.................................................................................................... 86
Case Three – Tammy ................................................................................................ 86
Case Four – Judy....................................................................................................... 86
Case Five – Ray ........................................................................................................ 87
Case One – David ......................................................................................................... 88
Learning module 1 .................................................................................................... 88
Summary of learning module 1............................................................................... 101
Metacognitive activity ............................................................................................ 101
Top-level structure linguistic markers .................................................................... 101
Self awareness of learning autonomy 1st rating ...................................................... 102
Response to question 1............................................................................................ 102
Response to question 2............................................................................................ 104
Response to question 3............................................................................................ 104
Metacognitive training intervention........................................................................ 105
Learning module 2 .................................................................................................. 106
Summary of learning module 2............................................................................... 122
Metacognitive activity ............................................................................................ 122
Top-level structure linguistic markers .................................................................... 122
Self awareness of learning autonomy 2nd rating ..................................................... 123
Effect from metacognitive training......................................................................... 123

Case Two - Lesley ...................................................................................................... 125
Learning module 1 .................................................................................................. 125
Summary of learning module 1............................................................................... 140
Metacognitive activity ............................................................................................ 140
Top-level structure linguistic markers .................................................................... 140
Self awareness of learning autonomy 1st rating ...................................................... 141
Response to question 1............................................................................................ 141
Response to question 2............................................................................................ 143
Response to question 3............................................................................................ 144
Metacognitive training intervention........................................................................ 144
Learning module 2 .................................................................................................. 144
Summary of learning module 2............................................................................... 159
Metacognitive activity ............................................................................................ 159
Top-level structure linguistic markers .................................................................... 160
Self awareness of learning autonomy 2nd rating ..................................................... 160
Effect from metacognitive training......................................................................... 161
Case Three - Tammy................................................................................................... 163
Learning module 1 .................................................................................................. 163
Summary of learning module 1............................................................................... 184
Metacognitive activity ............................................................................................ 184
Top-level structure linguistic markers .................................................................... 184
Self awareness of learning autonomy 1st rating ...................................................... 185
Response to question 1............................................................................................ 185
Response to question 2............................................................................................ 187
Response to question 3............................................................................................ 188
Metacognitive training intervention........................................................................ 189
Learning module 2 .................................................................................................. 189
Summary of learning module 2............................................................................... 210
Metacognitive activity ............................................................................................ 210
Top-level structure linguistic markers .................................................................... 211
Self awareness of learning autonomy 2nd rating ..................................................... 211
Effect from metacognitive training......................................................................... 212
Case Four - Judy ......................................................................................................... 214
Learning module 1 .................................................................................................. 214
Summary of learning module 1............................................................................... 224
Metacognitive activity ............................................................................................ 225
Top-level structure linguistic markers .................................................................... 225
Self awareness of learning autonomy 1st rating ...................................................... 225
Response to question 1............................................................................................ 226
Response to question 2............................................................................................ 227
Response to question 3............................................................................................ 228
Metacognitive training intervention........................................................................ 228
Learning module 2 .................................................................................................. 229
Summary of learning module 2............................................................................... 242
Metacognitive activity ............................................................................................ 242
Top-level structure linguistic markers .................................................................... 242

Self awareness of learning autonomy 2nd rating ..................................................... 243
Effect from metacognitive training......................................................................... 243
Case Five - Ray........................................................................................................... 245
Learning module 1 .................................................................................................. 245
Summary of learning module 1............................................................................... 262
Metacognitive activity ............................................................................................ 262
Top-level structure linguistic markers .................................................................... 262
Self awareness of learning autonomy 1st rating ...................................................... 263
Response to question 1............................................................................................ 263
Response to question 2............................................................................................ 265
Response to question 3............................................................................................ 266
Metacognitive training intervention........................................................................ 266
Learning module 2 .................................................................................................. 267
Summary of learning module 2............................................................................... 286
Metacognitive activity ............................................................................................ 286
Top-level structure linguistic markers .................................... 286
Self awareness of learning autonomy 2nd rating ..................................................... 287
Effect from metacognitive training......................................................................... 287
Chapter 5 - Conclusions.................................................................................................. 289
Discussion of findings................................................................................................. 289
Question 1 ............................................................................................................... 290
Question 2 ............................................................................................................... 293
How did users see themselves as autonomous?.................................................. 293
How does this manifest itself in practice? .......................................................... 298
Summary ............................................................................................................. 303
Question 3 ............................................................................................................... 304
Summary ............................................................................................................. 308
Implications for hypermedia learning......................................................................... 309
How hypermedia learning might now proceed in a better way .................................. 314
Teaching and learning............................................................................................. 314
Instructional design................................................................................................. 315
Contributions to method and theory ........................................................................... 315
Method .................................................................................................................... 315
Theory ..................................................................................................................... 315
Limitations .................................................................................................................. 316
Recommendations for further research....................................................................... 317
References....................................................................................................................... 319
Appendix 1: Example page from data analysis tables ................................................... 331
Appendix 2 – CD containing data analysis files............................................................. 332

List of Figures
Figure 1: A representation of hypertext nodes and links ................................................. 26
Figure 2: What is a document? (Tricot et al., 2000, p. 104) ........................................... 27
Figure 3: The relationship of CBL and Hypermedia ....................................................... 30
Figure 4: Typical screen layout and navigational devices ............................................... 42
Figure 5: Sample data used for the preliminary analysis.................................................. 43
Figure 6: Unit modules .................................................................................................... 88
Figure 7: Search for the word catholic............................................................................. 96
Figure 8: Search for the word religion............................................................................. 96
Figure 9: Question 2 and answer ..................................................................................... 97
Figure 10: Kinetic energy diagram .................................................................................. 98
Figure 11: Energy storage examples................................................................................ 99
Figure 12: Unit outline................................................................................................... 100
Figure 13: Johari Window ............................................................................................. 109
Figure 14: Barriers to effective communication text ..................................................... 112
Figure 15: Video text and graphic ................................................................................. 114
Figure 16: Highlighted text............................................................................................ 115
Figure 17: Levels of conflict diagram............................................................................ 117
Figure 18: Identify conflict table ................................................................................... 118
Figure 19: Links for the topic: Self awareness in conflict situations............................ 119
Figure 20: Self check questions showing an answer ..................................................... 120
Figure 21: Learning interface......................................................................................... 126
Figure 22: Home page table........................................................................................... 130
Figure 23: Numbered navigation links .......................................................................... 130
Figure 24: Grid............................................................................................................... 135
Figure 25: Color Matters link ........................................................................................ 135
Figure 26: Drunk Tank Pink graphic ............................................................................. 136
Figure 27: Sensory Input................................................................................................ 137
Figure 28: Colour psychology ....................................................................................... 139
Figure 29: Toolboxes page............................................................................................. 145
Figure 30: Unit welcome ............................................................................................... 146
Figure 31: Activities Map .............................................................................................. 147
Figure 32: Bottom menu ................................................................................................ 148
Figure 33: Opening screen ............................................................................................. 150
Figure 34: Practical exercise question 1 ........................................................................ 156
Figure 35: Lesson Contents ........................................................................................... 157
Figure 36: Methods of Drilling opening page................................................................ 163
Figure 37: Page layout ................................................................................................... 164
Figure 38: Interactive screen.......................................................................................... 165
Figure 39: Shelf and contents ........................................................................................ 166
Figure 40: Selected items............................................................................................... 166
Figure 41: Notice board ................................................................................................. 168
Figure 42: Worksite graphic .......................................................................................... 169
Figure 43: Twin windows .............................................................................................. 170
Figure 44: Pre-start site checklist................................................................................... 173
Figure 45: Highlighted rubbish...................................................................................... 174
Figure 46: Drill Hole Stability page............................................................................... 175
Figure 47: Before you begin screen ............................................................................... 176
Figure 48: Maintenance activity .................................................................................... 178
Figure 49: Sampling exercise introductory screen......................................................... 180
Figure 50: Sampling exercise......................................................................................... 181
Figure 51: Final Air Sampling activity .......................................................................... 183
Figure 52: Lightbox Studio............................................................................................ 190
Figure 53: Getting Started.............................................................................................. 191
Figure 54: Activities window......................................................................................... 193
Figure 55: Workers in the Studio................................................................................... 194
Figure 56: Job One Screen............................................................................................. 195
Figure 57: Self Quiz....................................................................................................... 198
Figure 58: Self Check Quiz............................................................................................ 199
Figure 59: Breadcrumb trail........................................................................................... 200
Figure 60: Print Checklist .............................................................................................. 201
Figure 61: Storyboard Artist window ............................................................................ 203
Figure 62: Storyboard Page 1 ........................................................................................ 204
Figure 63: Storyboard Page 3 ........................................................................................ 206
Figure 64: Layout Artist screen ..................................................................................... 208
Figure 65: Module 1 Introduction screen....................................................................... 215
Figure 66: Text graphic.................................................................................................. 217
Figure 67: Course Facilitator Screen ............................................................................. 219
Figure 68: Timeline introduction ................................................................................... 221
Figure 69: Online Modules introduction........................................................................ 222
Figure 70: Project characteristics screen........................................................................ 224
Figure 71: Scenario 1 ..................................................................................................... 229
Figure 72: Scenario participants .................................................................................... 230
Figure 73: Ben Robbins’ profile .................................................................................... 231
Figure 74: Ben Robbins’ screen..................................................................................... 232
Figure 75: Text boxes .................................................................................................... 235
Figure 76: Daily eating habits screen with diary overlay .............................................. 237
Figure 77: Guidelines tab............................................................................................... 239
Figure 78: Diet analysis question................................................................................... 241
Figure 79: Female drinking............................................................................................ 246
Figure 80: The Liver graphic ......................................................................................... 247
Figure 81: Screenshot of How Long Does it Take text and graphic.............................. 247
Figure 82: Effects of Alcohol screen ............................................................................. 249
Figure 83: Alcohol Abuse text and audio ...................................................................... 251
Figure 84: Abusing Alcohol activity.............................................................................. 252
Figure 85: Etrainu learning interface navigation tabs.................................................... 254
Figure 86: My Stages page ............................................................................................ 258
Figure 87: My Slides page ............................................................................................. 259
Figure 88: Code of Practice page................................................................................... 260
Figure 89: Examples page.............................................................................................. 261
Figure 90: Lesson 1 screen ............................................................................................ 267
Figure 91: Learning interface......................................................................................... 268
Figure 92: Audio and translate interface........................................................................ 269
Figure 93: Review options ............................................................................................. 270
Figure 94: Right answer indicator.................................................................................. 270
Figure 95: Right and left arrows .................................................................................... 271
Figure 96: Quiz interface (showing missing components) ............................................ 272
Figure 97: Present simple verbs interface...................................................................... 274
Figure 98: Unit 10 - Vocabulary screen......................................................................... 277
Figure 99: Clothing vocabulary activity ........................................................................ 277
Figure 100: Read and Listen text ................................................................................... 279
Figure 101: Eyes graphics in question 5 ........................................................................ 281
Figure 102: Graphics for question 9 .............................................................................. 282

List of Tables
Table 1: Scaffolding design strategies for hypermedia learning (Shapiro, 2008 p. 35) .. 36
Table 2: Examples of Planning and Monitoring from the data........................................ 44
Table 3: Examples of Self Regulation from the Data ...................................................... 44
Table 4: Examples of Orientation and Execution from the Data..................................... 45
Table 5: Examples of Evaluation and Elaboration from the Data ................................... 45
Table 6: Taxonomy of Metacognitive Activities ............................................................. 57
Table 7: Top-level structuring rhetorical structures......................................................... 61
Table 8: Metacognitive classifications with Top-level structure associations................. 62
Table 9: An example of the analysis of a verbal protocol with a one to one association 64
Table 10: An example of the analysis of a verbal protocol with a one to several
association........................................................................................................ 64
Table 11: Outline of data analysis table with sample data............................................... 80
Table 12: Table of metacognitive activity with sample data ........................................... 80
Table 13: Table of top-level structuring activity with sample data ................................. 81
Table 14: Effectiveness rating Likert Scale ..................................................................... 85
Table 15: Metacognitive activity learning module 1 - David ........................................ 101
Table 16: Top-level structuring activity learning module 1 - David ............................. 102
Table 17: Metacognitive activity learning module 2 - David ........................................ 122
Table 18: Top-level structuring activity learning module 2 - David ............................. 123
Table 19: Collective data of metacognitive activity - David ......................................... 124
Table 20: Collective data of Top-level structuring activity - David.............................. 124
Table 21: Metacognitive activity learning module 1 - Lesley ....................................... 140
Table 22: Top-level structuring activity learning module 1 - Lesley ............................ 141
Table 23: Metacognitive activity learning module 2- Lesley ........................................ 160
Table 24: Top-level structuring activity learning module 2 - Lesley ............................ 160
Table 25: Collective data of metacognitive activity - Lesley ........................................ 161
Table 26: Collective data of Top-level structuring activity - Lesley ............................. 162
Table 27: Metacognitive activity learning module 1 - Tammy ..................................... 184
Table 28: Top level structuring activity learning module 1 - Tammy ........................... 185
Table 29: Metacognitive activity learning module 2 - Tammy ..................................... 210
Table 30: Top-level structuring activity learning module 2 - Tammy........................... 211
Table 31: Collective data of metacognitive activity - Tammy ...................................... 212
Table 32: Collective data of Top-level structuring activity - Tammy ........................... 212
Table 33: Metacognitive activity learning module 1 - Judy .......................................... 225
Table 34: Top-level structuring activity learning module 1 - Judy ............................... 225
Table 35: Observed metacognitive activity learning module 2 - Judy .......................... 242
Table 36: Top-level structuring activity learning module 2 - Judy ............................... 243
Table 37: Collective data of metacognitive activity - Judy ........................................... 244
Table 38: Collective data of Top-level structuring activity - Judy ................................ 244
Table 39: Metacognitive activity learning module 1 - Ray ........................................... 262
Table 40: Top-level structuring activity learning module 1 - Ray................................. 263
Table 41: Metacognitive activity learning module 2 - Ray ........................................... 286
Table 42: Top-level structuring activity learning module 2 - Ray................................. 287
Table 43: Collective data of metacognitive activity - Ray ............................................ 288
Table 44: Collective data of Top-level structuring activity - Ray ................................. 288
Table 45: Extract of Metacognitive Classification and Top-level structure Analysis ... 291
Table 46: Example of Simple Monitoring ..................................................................... 292
Table 47: Learning effectiveness scores of participants ................................................ 293
Table 48: Aggregated total of metacognitive processes across cases............................ 299
Table 49: Changes in effectiveness scores of participants............................................. 304

Chapter 1 – Introduction
“It has been disputed whether a person does or can directly monitor all or only
some of the episodes of his own private history; but, according to the official
doctrine, of at least some of these episodes he has direct and unchallengeable
cognizance. In consciousness, self–consciousness and introspection he is directly
and authentically apprised of the present states and operations of his mind. He
may have great or small uncertainties about concurrent and adjacent episodes in
the physical world but he can have none about at least part of what is
momentarily occupying his mind.” Gilbert Ryle (1963).

In this section the reason for the research is presented, together with an overview of
previous studies, an introduction to important concepts, and an outline of the thesis as a
whole. In the remainder of the chapter the empirical studies conducted as the basis of
this thesis and their significance are outlined.

Reason for the research

Learning with hypermedia can be seen as a situated cognitive interaction between the
learner and a complex device, and in this thesis an evidence-based account is given of the
metacognitive processes involved as learners engage with hypermedia as their medium
for learning. Several characteristics of the user and the device influence the interaction
between them and the thinking that occurs. Parameters such as cognitive abilities and
expertise in the domain affect how a learner uses and regards the complex medium. On
the system side, navigation tools, structure of information and shape and location of text
windows may influence the learner’s actions and interactions between learner and
medium. The research originated in an objective to understand better those aspects of
metacognition that appear to contribute to learner autonomy in hypermedia learning
settings.

Previous hypermedia research reported in the literature

The thesis commences with an examination of the hypermedia literature in order to
establish the current level of theoretical understanding of the research problem, to
identify whether or not there are gaps in this knowledge and to identify those aspects of
the research problem that are unique to the hypermedia environment. The review
highlights that computer-based learning environments (CBLEs) present important
opportunities for fostering learning (Lajoie & Azevedo, 2006). However, little is known
about how successful students take advantage of these environments (Winters, Greene, &
Costich, 2008). Learning technology tools such as web-based learning environments,
hypermedia, and other open-ended learning environments now have a 30-year history
and yet still raise several theoretical, empirical, and educational issues that, if left
unanswered, may undermine the potential of these powerful learning environments to
foster student learning (Azevedo, 2005). Current evidence suggests that theoretically, our
understanding of the underlying learning mechanisms that mediate students’ learning
with such environments has not kept pace with the technological advances that have been
made.

Today, CBLEs incorporate various kinds of computer technology to assist individuals in
learning for a specific educational purpose (Azevedo, 2005; Chen, 1995; Lajoie &
Azevedo, 2006). This technology can afford several different representations of
information including text, diagrams, and graphics, among others. CBLEs that
incorporate multiple representations of information are a type of multimedia learning
environment (Mayer, 2001, 2005). More specifically, those that allow for direct user
manipulation of these representations are called simulations or microworlds (Rieber,
2005), while CBLEs that allow for user selection of links between representations of
information are called hypermedia environments (Dillon & Jobst, 2005). It is these
hypermedia learning environments that are the focus of this thesis.

As learning typically involves the use of numerous self-regulatory processes, researchers
have begun to examine the role of students’ abilities to regulate several aspects of their
cognition, motivation and behaviour during learning with hypermedia. According to
Azevedo and Cromley (2004), this research has demonstrated that students have
difficulties benefitting from hypermedia environments because they fail to engage in key
mechanisms related to regulating their learning. Earlier, Williams (1996) argued that to
regulate their learning students need to make decisions about what to learn, how to learn
it, how much to learn, how much time to spend on it, how to determine whether or not
they understand the materials, when to abandon or modify plans and strategies and when
to increase effort. Azevedo and Cromley (2004) concluded that, specifically, learners
need to analyse their learning situation, set meaningful learning goals, determine which
strategies to use, assess the effectiveness of those strategies for a particular learning goal
(presumably concurrently, as part of implementation), and presumably make changes if
the strategies prove ineffective.

In this thesis it is hypothesised that in hypermedia settings one of the important drivers of
self-regulation is a learner’s capacity to deploy effective metacognitive strategies to the
presenting task. In much of the recent research, the measure of success has been
improvements in test scores. Whilst this has provided quantifiable evidence of success,
these studies provide little in the way of finely-grained evidence of the causes of that
success. In order to design hypermedia that might provide better and more effective
metacognitive strategies, more specific evidence about the causes of success is needed.
That is, the design strategies that encourage better metacognitive practice need to be
understood better, and in particular, we need now to explore how learners think about
their navigation choices and the relationships between the available links.

Research undertaken to date as reported in Chapter 2 suggests that there is a need for a
better understanding of the ‘guidance’ and the ‘control’ aspects of a learner’s engagement
with hypermedia, and while engaging in such investigation, to establish any causal
relationships between such guidance and control. Of these two aspects, it is the gap in
knowledge concerning a learner’s ‘control’ of his/her engagement with hypermedia that
this thesis seeks to address. As a preliminary step in identifying
learners’ cognitive and metacognitive activities, it was necessary to ascertain whether it is
possible to observe and capture these ‘control’ aspects. A pilot study was undertaken
which established the capacity of a methodology employing screen-capture software to
record the learning and capture the verbal protocols of learners engaged in learning with
hypermedia. Analyses of the pilot data enabled the identification of the underlying
cognitive and metacognitive activity. This affirmed that metacognition could indeed be
accessed through the method, and framed key elements of the design of the major study.

Metacognitive concepts adopted in this thesis

A review of the literature on metacognition was undertaken in order to establish the
current level of theoretical understanding, and to identify an empirically-based taxonomy
that might be used in its description. The review highlights that Flavell (1976, 1992),
Brown (1981), and Simon (1979) all made important contributions to our earliest
understandings of this concept and that subsequent contributions enable three properties
of metacognition to be distinguished. The first is metacognitive knowledge. This
understanding (e.g. Wellman, 1985; Simons, 1996) refers to the knowledge that people
have of their own and others’ cognition. A second understanding of metacognition (e.g.
Simons, 1996; Dweck, 1988) is as a complex set of beliefs that people have of their own
and others’ cognition. A third (e.g. Nelson, 2005; Stevenson, 1986a, 1986b) is that
metacognition is an active monitoring and steering of one’s on-going cognitive processes,
for which there is likely to be a set of general strategies. It is the last of these meanings,
the active monitoring and steering of on-going cognitive processes through a set of
general strategies, which aligns with the kind of metacognition hypermedia learners are
likely to be able to render through think-aloud protocols. That is, the focus of this
research is on the executive control functions of metacognition, rather than on
metacognitive knowledge or beliefs per se. Or, as put by Kratzig and Arbuthnott (2009),
metacognition is best seen as a person’s ability to think about their own thinking, to think
about their own cognitive ability and knowledge, and then to take the appropriate
regulatory steps. Adopting this construct raised further questions as to whether or not
this “ability to think about” could be learned, and if so, whether it was possible to train
learners to do so more effectively.

Reports of success in improving learning outcomes from metacognitive training programs
across a range of learning settings and student types can be found in recent literature.
Examples include undergraduate students in biology (Azevedo & Cromley, 2004),
university students (Bannert & Mengelkamp, 2008), algebraic reasoning in elementary
school teachers (Kramarski, 2008), high school students studying mathematics
(Mevarech & Amrany, 2008), primary school students (Ritchart, Turner, & Hadler,
2009), and young and older adults (Kratzig & Arbuthnott, 2009). Collectively, these
studies suggest that metacognitive training is effective across the lifespan and across
learning contexts.

Having established a theoretical description of the kinds of metacognitive activity likely
to occur in hypermedia learning settings and that this activity could be improved through
training, a further observation from the literature is that Meijer, Veenman, and Van Hout
Wolters (2005, 2006) had developed a recent hierarchical taxonomy of metacognitive
activities for the interpretation of think-aloud protocols. They had worked with students
in secondary education who studied texts on history and physics, and had drawn
extensively on earlier classificatory systems (e.g. Flavell, 1979; Pintrich & De Groot,
1990; O’Neil & Abedi, 1996; Schraw & Moshman, 1995) to guide their construction.
Initially, they developed an elaborate taxonomy, but pilot testing found that the inter-rater
correspondence was well below par and they concluded that the categories in the
taxonomy had been too highly specified. They deduced that self-regulation appeared to
be used as a general category and this led them to postulate a super-ordinate group of
categories within their taxonomy. They labelled these orientation, planning, execution,
monitoring, evaluation and elaboration.

Meijer et al. (2006) were able to demonstrate that this super-ordinate group of categories
showed a substantial correlation between metacognitive activities across both task-
domains in their study. This implied that within their taxonomy, these super-ordinate
categories of metacognitive activity transcended task-specificity, and were more
generalised in purpose. The general nature of these super-ordinate categories suggests
they ought to have applicability across learning settings and to other kinds of learners.
Moreover, the taxonomy’s initial association with the task-specific domains of reading
and problem-solving, two critical skills in using hypermedia successfully, makes it
particularly suitable for the research of these kinds of learning settings. The elaborated
set of categories, which includes several sub-ordinates of the six major ones, was
considered by the authors to be generally context-specific and not easily transferable to
other settings. Therefore, the taxonomy of choice for the present research was that
involving Meijer et al.’s (2006) super-ordinate categories.

The pilot study was able to identify metacognitive activity at a coarse-grained categorical
level of analysis. A finer-grained rendering of the metacognitive processes within
categories was needed, that is, a method was required to identify the more subtle
differences and patterns within the categorical data. The tool to do this would need to
complement and maintain the general purpose nature of the existing taxonomy, and to be
fine-grained enough to identify subtle differences that the verbal protocols may harbour.
Top-level structuring (Bartlett, 1978, 2008a) offered such potential.

Top-level structuring concepts adopted in this thesis

Top-level structuring describes the strategic processing involved in converting
newly-acquired metalinguistic knowledge of language structure into deliberate, procedural
‘know-how’ for learning from text (Bartlett, 1978). The theoretical construct behind
research on top-level structure and top-level structuring is that meaningful human
language beyond the word is a connected network of ideas and interrelationships
(Bartlett, 2008b). The former gives a communication its substantive content, while the
latter affords it coherence and cohesiveness. That is, the communicative and informing
character of a communication depends as much on the nature of the logical structure of
its ideas as on the ideas themselves. The ‘idea structure’ is a linguistic construct
depicting language hanging together logically as a communication which reveals how
ideas reconfigure semantically and grammatically through patterns of relationships both
within clusters and across them.

Bartlett (2008b) described top-level structuring as a two stage method for determining the
‘gist’ or main idea in a communication based on the assumption that when processing
information about a topic, the ideas fit, however loosely, into one main message. The
construct suggests that if we look at the ideas at the end of a statement or piece of
writing, the ways each idea interrelates with others, and the number of relations others
have to it, indicate its interconnectedness and its relative importance. Moreover, these
ideas, to which many others relate, are closer to the top of what is a hierarchical ordering
of the text. The idea with the most supporting information beneath it is at the top level
and is the main idea.

Bartlett (2008b) contends that research has shown that the way ideas fit together across a
text to project the main idea can be represented by just a few possible rhetorical
structures: list, comparison, cause and effect, and problem and solution.
Fletcher, Zuber-Skerritt, Piggot-Irvine and Bartlett (2008) reported that while there are
many sub-forms, to top-level structure a communication is to present a well-signalled
account of how things fit together in one of these four ways. Moreover, they assert that it
is unique in applying the structural features of language as a framework, and importantly
for this research, that in doing so it is relatively independent of the content domain used.

Top-level structuring would seem to be an ideal sub-ordinate level tool with which to
examine the think-aloud protocols of hypermedia learners. The top-level structuring
rhetorical structures ought to provide a more fine-grained (sub-ordinate level) analysis of
the think-aloud protocols which should assist with metacognitive classification and
provide a richer understanding of the structure and nature of each of the metacognitive
processes.

Empirical studies

An investigation into the capacity of a screen-capture based methodology to capture the
cognitive and metacognitive activity of learners engaged in learning with hypermedia
formed the first part of the empirical research. Study 1, the pilot study, reported in this
thesis (see Chapter 2) was undertaken to answer the first of the research questions:
Whilst learners are in situ in hypermedia settings, to what extent are their cognitive and
metacognitive activities accessible to recording using a video-capture software protocol?
The findings from the pilot, which answered the question in the affirmative, were then
used in conjunction with metacognitive and top-level structuring theory to refine a
methodology thought to be suitable to address the major research problem concerning
understanding learner autonomy in hypermedia learning settings. Study 2 was the main
study, comprising a set of case studies in which the metacognitive activity of five
hypermedia learners was captured through their verbal protocols and examined. Its
results and their interpretation are reported in Chapters 4 and 5. Study 2 included a small
intervention strategy aimed at improving the metacognitive capacities of the learners.
The metacognitive training literature formed the basis of the intervention strategy. Thus,
Study 2 set out to extend examination of the question posed in the pilot (through the lens
of an improved and more empirically-based methodology, reported in Chapter 3), and
two further research questions:
• To what extent (how) do learners see themselves as autonomous in such activity
and how does this manifest itself in practice?
• To what extent (how) will the provision of metacognitive training affect more
awareness of metacognitive activity and/or greater autonomy?

Significance and contribution of the research

The problem examined in this thesis is whether, and just how, the autonomy to control
learning is realised in hypermedia learning settings. The practical and theoretical
significance of the problem is addressed next.

The research is important for practice for three reasons. First, it has proven a
methodology for successfully collecting and examining metacognitive activity in
hypermedia learning settings. Second, there now is a tool with which to understand
better and improve these environments. The current literature highlights the fact that, as
yet, little is known about how successful students take advantage of hypermedia learning
environments (Winters et al., 2008). Therefore, tools that enable researchers to explore
these environments are critical to advancing their understanding. Bhavnani, Fleming,
Forsythe, Garrett and Shaw (1995), as cited in Chester (2006), pointed out that CAD
operators often “seemed more interested in simply ‘getting the job done’ than in learning
new and better ways to use the system” (p. 14). Bhavnani et al. (1995) theorised that this
was in part due to the absence of strategic knowledge on the part of CAD operators,
leading to the use of sub-optimal strategies. Given the paucity of knowledge about
learner autonomy and control in hypermedia learning settings the use of sub-optimal
strategies is also likely to be true of these settings. Third, the video capture technique of
a video that produces a data-rich picture of the learning event could have application in
research into a range of complex computer-based learning tasks.

The research is theoretically significant because computer-based learning environments
(CBLEs) present important opportunities for fostering learning (Lajoie & Azevedo,
2006). The use of technological tools now has a 30-year history and yet still raises
several theoretical, empirical, and educational issues that, if left unanswered, may
undermine the potential of these learning environments to foster students’ learning
(Azevedo, 2005). Theoretically, our understanding of the underlying learning
mechanisms that mediate students’ learning within such environments lags in comparison
to the technological advances that have been made in these same environments.

A central tenet of the effectiveness of learning with hypermedia is its capacity to afford
learners the autonomy to control their learning. Yet, the field is characterised by a lack of
a systematic theory of hypermedia. Currently, Mayer and colleagues’ (2001) Generative
Theory of Multimedia Learning is the only dedicated theory of hypermedia learning in
the existing literature, and the research lacks a focus on the learner (Azevedo, 2005).
Dillon and Jobst (2005) reported that while a more critical view of hypermedia and
cognition has since evolved, formal theories of hypermedia learning have not developed
in any substantial way. Instead, existing theoretical models from education and
psychology have been applied to certain aspects of hypermedia design and use.

Chapter 2 traverses the current literature to identify what underpins the central tenet, and
to indicate a gap: the literature had not established whether it is indeed possible
to capture learners’ cognitive and metacognitive activity whilst in situ in hypermedia
settings, nor, in the event that it were possible, explored whether a research-driven
manipulation of what learners do might enhance their control of learning and the
effectiveness of its outcomes.

The addition of top-level structuring as a subordinate data analysis mechanism within
the classifications of the metacognitive taxonomy affords greater trustworthiness about
category assignation for metacognitive events and strengthens the Meijer et al. (2005,
2006) system as a classificatory tool. Moreover, the use of top-level structuring to add
reliability to the metacognitive classification within the taxonomy and to provide a more
fine-grained analysis within each of the classifications advances its theoretical
boundaries.

Summary and thesis structure

The contribution of this thesis to an understanding of learner autonomy and how it is
realised in hypermedia learning settings, through the lens of a learner’s metacognitive
processes, is the manner in which the research is underpinned by a theoretical approach
that conceptualises the potential of educational hypermedia and metacognition as a key
driver of learner autonomy. It has been argued that an examination of a learner’s
metacognitive processes is likely to provide insights into how their learning autonomy is
realised.

Chapter 2 examines the evolution of hypermedia and shows how reported research has
advanced an understanding of hypermedia as a technology for learning. It also discusses
claims made about its educational worth; in particular, claims about its capacity to enable
learners’ autonomy. It is then hypothesised from this review that what has emerged as a
key driver of learner autonomy is a learner’s metacognitive capacity. Next, the chapter
addresses issues from the literature that underpin the initial research question: Whilst
learners are in situ in hypermedia settings, to what extent are their cognitive and
metacognitive activities accessible to recording using video capture software protocol?
A pilot study was conducted and the results confirmed that it was possible to capture the
cognitive and metacognitive activity of hypermedia learners. Next, the metacognition
literature is examined in order to identify the kinds of metacognitive processes that
hypermedia learners might employ, before an empirically based metacognitive taxonomy
able to be used across learning domains was identified for use in a follow-up study. The
pilot study also identified the need for a finer-grained secondary analysis of the verbal
protocols that would give rise to greater insights within the metacognitive categories.
The chapter concludes by arguing that top-level structuring could provide a finer-grained
analysis of learners’ verbal protocols that may add validity and reliability
to the metacognitive classifications. Moreover, it is argued that the rhetorical structures
of top-level structuring ought to provide a richer understanding of the structure and nature of
individual metacognitive processes.

Chapter 3 describes and justifies the methodological approach used in the collection of
data for both the pilot and main studies. It argues for a case study approach as a preferred
strategy when ‘how’ and ‘why’ questions are being posed, when the investigator has little
control over events, and when the focus is on a contemporary phenomenon within some
real-life context. Further, it argues that qualitative case studies are characterised by the
discovery of new relationships, concepts and understandings rather than the verification of
predetermined hypotheses. A unique data capture method, which had proved successful in the
pilot study, was used: a video of the screen-based learning event overlaid with
verbal data collected using a modified stimulated recall and retrospective
questioning technique. The modified technique attempted to address some of the
disadvantages associated with the thinking aloud, stimulated recall, retrospective
questioning and direct retrospection during the learning task methods when they are used
in isolation.

Chapter 4 presents the findings from the main study through a detailed analysis of the
metacognitive processes and top-level structuring with rhetorical structures identified
within the verbal protocols of each learning event. Each analysis includes tables of
quantitative data about both data types for further examination. The finer-grained
analysis afforded by the top-level structuring revealed rich chains of complex structures
within each of the metacognitive processes. The analyses reveal how these chains
varied the individual renditions of a metacognitive process, and how,
in turn, chains of these individual metacognitive processes appeared to be driving the learner
autonomy on display.

Chapter 5 presents the conclusions and limitations of the research along with
recommendations for future research. Three overall conclusions are presented. First, the
richer description of the metacognitive processes afforded by the top-level structuring
analysis provides a greater trustworthiness about category assignation for those
metacognitive events and strengthened the system as a classificatory tool. Second, it was
concluded that the character of each of the five learners’ cognitive and metacognitive
activity was clearly accessible, as each one spoke about his/her learning engagement as it
was happening in its vocational learning setting. Their reports further developed the
thesis that such activity in relation to engagement happens, is accessible, and may be
captured and analysed. This position addresses the first of the questions that had
stimulated the research originally.

The third conclusion was that the perception of one’s autonomy as a learner, while
differently present across the pre-intervention data amongst the cases, indeed was
present. Further, it was positively affected by training - but again in different ways and to
different degrees. In each case, shifts were recorded in what presented as preferences for
various metacognitive actions. Following training, an individual’s metacognition
changed. This shift has been documented according to descriptive categories for
metacognitive processes used in the analysis and in what seemed to be happening by way
of the top-level structuring that a learner used to organise his/her ideas when centring on
one or other of the categories.

Next, several limitations in this research are discussed. First, there are the limitations of a
case study methodology. While case study is an important research tool for looking at
deeper explanations, and its adoption in this research has helped to uncover and to go
beyond the large categories of metacognition, its purpose is for deep single-entity
research. Before generalisation to other learners, other times and other places can be
made, large-scale studies using different methodologies, particularly quantitative
methods, are needed. Nonetheless, the case study method has revealed issues that are
important to the research of hypermedia learning and therefore it has made an important
theoretical contribution. Second, the method proved to be sound and appropriate for
adult learners in vocational learning settings. In both the pilot and the main study these
learners showed a propensity to articulate and self report their learning experiences in
rich and meaningful ways. The extent to which the method is transferable to other kinds
of learners and settings needs to be examined and tested further. Third, the study
encountered, and did little to resolve, what Meijer et al. (2005, 2006) had indicated as a
specific procedural difficulty in discriminating categories around their “execution”
classification. One way this might be addressed in future research is by ensuring that
researchers elicit a specific utterance on identifying what appears to be an execution
process. This would need to be undertaken with some care so as not to interfere with a
respondent’s train of thought, or at the expense of confounding other associated
metacognitive categories or missing other important utterances.

Chapter 5 concludes with recommendations for further research. These are the
following:
1. Reviewing the data collection method in an attempt to address issues concerning
the execution category within the taxonomy;
2. Extending metacognitive training to include a focus on top-level structuring;
3. Using the methodology under experimental conditions to test the efficacy of the
metacognitive training; and
4. Testing the methodology across a range of learners to establish any
generalisability.

Chapter 2 - Review of the Literature and Pilot Study

Introduction

In this chapter, the review of relevant literature is presented as it informed this research on
the nature of hypermedia learning and the claims made about its educational value, how
learners realise their learning in hypermedia learning settings, and the metacognition
necessary to drive that learning. The purposes of a literature review are to situate
research ideas within the existing body of work in the field and to determine where the
current research will add to existing knowledge (Gay & Airasian, 2003; Wisker, 2001). Glesne
(1999) believes that the literature review for qualitative research should continue
throughout the project and should be broad enough to cover the research topic as it is
embedded in areas beyond the immediate context. In this research, the review of the
literature began before the research commenced, with the topic of hypermedia learning and, in
particular, the cognitive aspects of hypermedia learning. Areas of interest grounded in
that context were metacognition and top-level structure strategy. The review of the
literature was on-going throughout the research.

The chapter is organised around four topics. It begins with an overview of the
development of hypermedia, its evolution into learning settings and the claims made
about its learning potential. After that, a pilot study that determined that it was possible
to capture the cognitive activity of vocational learners in a hypermedia learning setting is
presented. Then a review of the current literature about metacognition identifies its many
guises before focusing on its capacity to assist learners to steer their own cognitive
processes. Finally, a review of the top-level structuring literature is presented.

Background

Computer-based learning environments (CBLEs) present important opportunities for
fostering learning (Lajoie & Azevedo, 2006). Yet little is known about how successful
students take advantage of these environments (Winters, Greene, & Costich, 2008). The
use of technological tools such as web-based learning environments, hypermedia, and
other open-ended learning environments now has a 30-year history and yet still raises
several theoretical, empirical, and educational issues that, if left unanswered, may
undermine the potential of these powerful learning environments to foster students’
learning (Azevedo, 2005). Theoretically, our understanding of the underlying learning
mechanisms that mediate students’ learning within such environments lags in comparison
to the technological advances that have been made in these same environments.

CBLEs incorporate various kinds of computer technology to assist individuals in learning
for a specific educational purpose (Azevedo, 2005; Chen, 1995; Lajoie & Azevedo,
2006). Computer technology can afford several different representations of information
including text, diagrams, and graphics, among others. CBLEs that use multiple
representations of information are a type of multimedia learning environment (Mayer,
2001, 2005). CBLEs that allow for direct user manipulation of these representations are
called simulations or micro worlds (Reiber, 2005), while CBLEs that allow for user
selection on links between representations of information are called hypermedia
environments (Dillon & Jobst, 2005). It is these hypermedia learning environments that
are the focus of this thesis.

A central tenet of the effectiveness of learning with hypermedia is its capacity to afford
learners the autonomy to control their learning. Yet, the field is characterised by a lack
of a systematic theory of hypermedia. Currently there is only one dedicated theory of
hypermedia learning, Mayer and colleagues’ (2001) Generative Theory of Multimedia
Learning, and there is a lack of focus on the learner in the research (Azevedo, 2005).
The problem examined in this thesis is whether, and just how, the autonomy to control
learning is realised. What follows in this chapter traverses the current literature to
identify what underpins the tenet, and to indicate a gap: the literature had not
established whether it is indeed possible to capture learners’ cognitive and
metacognitive activity whilst in situ in hypermedia settings, nor, in the event that it were
possible, explored whether a research-driven manipulation of what learners do might
enhance their control of learning and the effectiveness of its outcomes. These two
elements of the gap constitute the research under three specific questions:

1. Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording using video capture software
protocol?
2. To what extent (how) do users see themselves as autonomous in such activity and
how does this manifest itself in practice?
3. To what extent (how) will the provision of metacognitive training foster greater
awareness of metacognitive activity and/or greater autonomy?

Hypermedia

Introduction
The current literature on hypermedia is presented against three perspectives. First,
hypermedia is defined, and differences between hypermedia and its forerunner, hypertext,
are discussed. Initially, educators’ knowledge of hypertext was driven by their
conceptions of ‘text’ and ‘hyper’, and the advent of ‘hypermedia’ has extended their
interest both as ‘text’ became ‘media’ and as ‘hyper’ was presumed to have become more
intricate and complex while unchanged as a descriptive term. That is, presumption has
outpaced evidence about the nature of what learners are doing intellectually when
engaged in hypermedia activity. Second, hypermedia’s benefits for accessing
information are outlined along with strengths and weaknesses in where and how learner
characteristics are involved. Third, this examination reveals that while people generally,
and educators particularly, are well served by the current literature with definitional
explanations of hypermedia, notably in relation to its potential through the cognitive and
metacognitive operational actions of those who use it, the field lacks a systematic theory
of hypermedia focused on whether learners realise this potential and, if so, the extent to
which they do so autonomously and effectively.

A definition of hypermedia
Nelson (1965) is credited with coining the term, ‘hypertext’, and also with the first use of
the word, ‘hypermedia’. Earlier, Bush (1945) had conceived of them as computer-
supported documents (i.e. not software or databases). In context, a document is
considered to be structured material that enables the user to ‘build sense’ (Tricot, Pierre-
Demarchy, & El Boussarghini, 2000). The terms ‘hypermedia’ and ‘hypertext’ are
often used interchangeably by both authors and researchers (Dillon & Jobst, 2005).
However, there are important evolutionary differences between the two. ‘Hypertext’ is
an idea whose historical roots can be traced back to Vannevar Bush (1945). Bush,
with his emphasis on the role of association in cognition, dreamed of a technology that
would allow people to deal with an exponentially growing knowledge base by quickly
facilitating the selecting, retrieving and arranging of data. The advent of the computer
had allowed for the realisation of this idea. The concept of hypertext is a simple one
concerning connectivity: nodes of text being linked together.
Information is seen as an organised network in which nodes are text chunks (e.g., lists of
idea items, paragraphs, and pages) and links are relationships within and between them
providing a mapping of the conceptual logic of all content (Bartlett, 1978, 2003, 2008a).
Virtually any kind of relationship, and hence linkages, can be imagined between two text
passages (Rouet, Levonen, Dillon, & Spiro, 1996). For example, in a setting where a
learner is engaging with a new concept, one possible way to cue linkages would be to
create a glossary of terms, each of which has elaborations throughout the text/s.
Following various instances and elaborations of a word would conceivably enhance a
reader’s knowledge, and the discretion with which readers do this sort of patterning might
be manipulated through awareness-raising instruction. Figure 1 illustrates a set of
relationships between nodes that might exist in this instance.

Thus, hypertext systems have been seen as a potential means of facilitating the positive
interactions readers have with texts (Chen, 1995; Dede, 1996; Duchastel, 1990; Meyer,
1975; Meyer, Young, & Bartlett, 1993).

[Figure 1 depicts a ‘Glossary of Terms’ node linked to two ‘Text Node (explanation of
link term)’ panels and a ‘New Concept - Main Text’ panel, with hypertext-linked words
connected by arrows indicating possible link choices and possible return pathways.]
Figure 1: A representation of hypertext nodes and links
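The node-and-link structure just described can also be sketched in code. The following is a minimal, hypothetical illustration (the node names, glossary terms and texts are invented for this sketch, not drawn from the study) of a glossary term linking to elaborations across a text:

```python
# A hypothetical sketch of a hypertext network: nodes hold text chunks,
# and links map glossary terms to the nodes that elaborate them.
nodes = {
    "main": {
        "text": "A new concept is introduced here, using the term velocity.",
        "links": {"velocity": "gloss-velocity"},
    },
    "gloss-velocity": {
        "text": "Velocity: the rate of change of position, elaborated further.",
        "links": {"elaborated": "elab-velocity"},
    },
    "elab-velocity": {
        "text": "An elaboration of velocity with worked examples.",
        "links": {},  # a leaf node; the reader returns via a back pathway
    },
}

def follow(start, choices):
    """Trace a reader's path: from `start`, follow each linked term in turn."""
    path = [start]
    current = start
    for term in choices:
        current = nodes[current]["links"][term]
        path.append(current)
    return path

# One possible reader-selected pathway through the network:
print(follow("main", ["velocity", "elaborated"]))
# → ['main', 'gloss-velocity', 'elab-velocity']
```

The point of the sketch is simply that virtually any relationship can be encoded as a link, and the pathway taken is the reader's discretionary patterning rather than a fixed reading order.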

In contrast, hypermedia can be thought of as an extension of hypertext in which the
technologies of sound, graphics, animation and video have been added to that of text (Lai
& Waugh, 1995). According to Tricot, Pierre-Demarchy and El Boussarghini (2000),
hypermedia are electronic materials where the communication channel can be audio,
visual or both; the code used can be linguistic, iconic or analogical (e.g. sounds, pictures,
dynamic pictures); and the structure can be linear or nonlinear (see Figure 2). Dillon and
Jobst (2005) regard hypermedia as a multimedia form of hypertext, as regardless of the
term, the technology remains based on modes or chunks of information that are linked
together and which a user can explore by following links they deem relevant.

[Figure 2 depicts a taxonomy of documents along four dimensions: channel (audio,
audiovisual, visual), code (linguistic, iconic, analogical, no code), structure (linear,
non-linear), and access material (on-line, off-line, paper, magnetic, electronic).]
Figure 2: What is a document? (Tricot et al., 2000, p. 104)

As a result, the notion of a node is extended beyond that of just containing textual
information to one in which it includes sound, graphics, animation and video. The term
hypermedia thereby accounts for the inclusion of these additional technologies. It is this
more inclusive definition that more accurately portrays the current application of these
technologies. At its core, hypermedia still has a text base; however, sound, graphics,
animation and video are increasingly supplementing this. Collectively, these
technologies are referred to as (hyper) media, reflecting the move beyond just (hyper)
text.

For some time now hypermedia has been finding uses and having an impact on many fields
of human endeavour, including education (Jacobson & Azevedo, 2007; Jacobson &
Spiro, 1995; Lai & Waugh, 1995). A range of perspectives is emerging as to what
hypermedia is capable of in these fields of endeavour. However, Jacobson and Azevedo
(2007) argue that the results of principled research into learning with hypermedia, in
contrast to information dissemination and access, have been decidedly mixed. There is a
lack of consensus in this area, a situation due to the novelty of the technology, and to its
continuing transformation as technical possibilities continue to expand.

Educationally, hypermedia has been seen as a suitable vehicle for the implementation of
constructivist (Bruner, 1996; Vygotsky, 1978) ideas, reflected in its ability to allow the
learner to explore ideas through multiple types of media (text, images, video and audio).
Constructivism asserts that knowledge is not merely transmitted from teacher to student,
but rather, that it is actively constructed in the mind of the learner. Thus educationally,
hypermedia can be seen as catering to various learning styles and individual learning
needs.

Claims about the value of hypermedia


Hypermedia presents a new way to interact with media that differs from interacting with
standard printed media (such as textbooks, diagrams and charts). For example, within a
textbook the text is typically presented in a linear form in which there is a single way to
progress through it, starting at the beginning and reading to the end. In contrast, within
hypermedia, information can be represented in a semantic network where multiple related
sections of media are connected to one another (Foltz, 1996). Users may browse through
the sections of media, jumping from one node to another. This permits the reader to
select a path through the media that is most relevant to their interests, best builds
upon their current knowledge, or best suits their particular learning style. It is the
concept of user-selected pathways in hypermedia to retrieve and read information, and
the potential of this for learning, that has caused a great deal of interest, particularly
within the educational community (Mayer, 2005).

What is of importance in this thesis is the focus on the ‘hyper’ element, and of less
concern is the particular kind of media. That is, the focus is on the way learners employ
the ‘hyper’ to facilitate their learning. By ‘hyper’ is meant the choices the learner makes
when confronted with a linked choice. So, the term hypermedia is used throughout as the
more general term referring to information systems that offer hyper structuring,
regardless of whether they include media other than text.

Educational hypermedia
As a subset of the more general notion of information technology, Chen (1995) defines a
computer-based learning environment as “the display (e.g. text, graphics, speech and
animation) and user-computer interactions elicited by the program (e.g. the input required by
the learner) for a specific educational purpose (e.g. learning subject matter knowledge or
problem-solving skills)” (p. 185). This definition highlights three important aspects of these
environments: (i) the technology itself; (ii) the interaction of the learner with that
technology; and (iii) the learning taking place.

The use of computer technology to enhance learning began in the late 1960s (Riva, 2001).
Since that time the presence of computer technology in all forms of education has
increased dramatically and predictions are that this trend will continue to accelerate.
More recently, the appearance of internet-based information and communication
technologies has provided an additional dimension to the way in which education and
training are being conducted. The assertion of Federico (1999) that “we are in the midst
of a paradigm shift from classroom centric to network centric” education is representative
of the current view. For example, in recent years, the emergence of digital documents
has progressed from word-processed text, through stand-alone hypermedia, to the World
Wide Web. When directed at educational purposes, the technologies are collectively
referred to as educational hypermedia.

Initially some saw educational hypermedia (or previously hypertext) as a new Computer-
based Instruction (CBI) authoring environment (Park, 1991), whilst others saw a new
type of CBI application emerging - hypermedia assisted instruction (HAI) or learning
(Heller, 1991). Others saw hypermedia as an ideal knowledge representation format that
allowed for generative or adaptive learning (Dede, 1988; Jonassen, 1986, 1988); or as a
powerful environment for exploratory learning for ill-structured, advanced knowledge
domains or literacy education (Spiro, Feltovich, Jacobson, & Coulson, 1991; Spiro &
Jehng, 1990); or as a platform for multidisciplinary learning in the increasingly complex
and growing field of science (Davenport & Cronin, 1990; Marchionini & Shneiderman,
1988).

However, Chen (1995) argues that educational hypermedia should not be considered to be
another form of computer-based learning (CBL), but rather another aspect of the
information technology environment. Figure 3 depicts this relationship.

[Figure 3 depicts CBL and Hypermedia as two separate branches within Information
Technology.]
Figure 3: The relationship of CBL and Hypermedia

The point being made here is that hypermedia is not just another CBL, rather, that it is an
extension of information technology that has an educational application. Unlike many
previous computer-based training technologies, hypermedia is not constrained by the
linear nature of programming software, and therefore has the potential to develop a
greater range of, and different kinds of knowledge. For example, while most CBL
software has the capacity to provide the learner with learning choices, these choices are
usually moderated by the successful completion of previous learning tasks. That is,
advancement through the learning is usually dependent upon programmed rules within
the software. In contrast, advancement through hypermedia learning is regulated by the
choices made by learners and driven more by their knowledge of what they already know
and understand, and is not limited by the material provided.
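The contrast drawn above can be made concrete with a small sketch; the following is a hypothetical illustration (the module names and the gating rule are invented for this example, not taken from any particular CBL package):

```python
# A hypothetical contrast between rule-gated CBL progression and
# learner-driven hypermedia navigation.

def cbl_next(completed, modules):
    """CBL: advancement is moderated by programmed rules; the learner may
    only access the first module not yet completed, in a fixed order."""
    for module in modules:
        if module not in completed:
            return [module]
    return []

def hypermedia_next(current, links):
    """Hypermedia: advancement is regulated by the learner, who may follow
    any link from the current node, in any order."""
    return sorted(links.get(current, []))

modules = ["intro", "theory", "practice", "assessment"]
links = {"intro": {"theory", "practice", "assessment"}}

print(cbl_next({"intro"}, modules))     # → ['theory'] (only the next gated module)
print(hypermedia_next("intro", links))  # → ['assessment', 'practice', 'theory']
```

The design difference is the locus of control: in the first function the software's rules determine what is reachable, while in the second every linked node is available and the choice rests with the learner.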

Dillon and Jobst (2005) summarize this interest nicely in noting that educational
“Hypermedia proponents suggest that its ability to make information available in a
multitude of formats, provide individual control, engage the learner, and cater to various
learning styles and needs makes it the harbinger of a new learning revolution” (p. 569).

It has been the more recent merging of computer, communication, and information
technologies that has provided the catalyst for the current capabilities found in
hypermedia learning settings. The use of educational hypermedia to complement
customary instruction or even to provide entire courses has gathered a great deal of
momentum in educational communities. In fact, a large number of learning applications
using hypermedia have already been developed. However, an understanding of how
hypermedia technologies might support educational processes presents a number of
substantial challenges. The debate about its effectiveness continues as varying
conclusions are reported (Tudhope, 2007). Not only does the use of these technologies
not guarantee effective learning; inappropriate uses may in fact hinder it.
Nonetheless, educators and authors (Hall, 2000) argue that many
aspects of technology make it easier to create environments that fit the principles of
learning. A more detailed analysis of the learning claims made about educational
hypermedia follows.

Claims about the value of educational hypermedia


Some commentators and educators (Dryden, 1994; Landow, 1992) had predicted major
paradigm shifts in the manner in which we understand the learning experience and the
educational process as a result of hypermedia technologies. The
promise of hypermedia had been promoted as having “the potential to become a
significant application area, equalling or perhaps exceeding that of word processing,
spreadsheets and general database application” (Begoray, 1990). While hypermedia has
become a significant software application, the learning experiences of learners and the
educational processes that result from its use are still not well understood. It was the
nonlinear nature of hypermedia and assumptions
about its educational value that gave rise to these original predictions.

After the first decade of research, some authors (Dillon & Gabbard, 1998; McKnight,
Dillon, & Richardson, 1991, 1996) were suggesting that such strong claims were short of
supporting evidence from studies of learners. In 1996, Dillon argued that in the
previous decade much had been generally assumed about hypertext
and hypermedia, but rarely demonstrated, stating that ‘the unmistakable
advantages...have rung hollow’ (p. 26). At that time the lack of a general theory of
hypermedia learning and a tendency to overlook the lessons learned from user studies of
previous technologies were considered major reasons for these claims not being realised
(Dillon & Gabbard, 1998). Rouet and Levonen (1996) agreed that a major drawback in
the research and understanding of hypermedia at that time was its lack of a thorough
theoretical foundation. Their concerns were that there was neither a general theory of
hypermedia, nor a model of the cognitive processes involved in interacting with
hypermedia. Currently, there is only one dedicated theory of multimedia, the generative
theory of multimedia learning (Mayer, 2001), that can be drawn upon. Thus, there
remains a large gap between theories of knowledge and the actual capacities of
hypermedia systems. While, today, hypermedia as a learning medium continues to be
researched, apart from Mayer’s (2001) “Generative Theory of Multimedia Learning”, a
general theory of hypermedia appears still to be elusive. Chen and Dwyer (2003)
reviewed the existing research and reported that there remained little empirical evidence
showing that a hypermedia learning environment improves learning outcomes.
Moreover, Dillon and Jobst (2005) reported that while a more critical view of hypermedia
and cognition has since evolved, formal theories of hypermedia learning have not
developed in any substantial way. Instead, existing theoretical models from education
and psychology have been applied unproblematically to certain aspects of hypermedia
design and use.

Welch and Brownell (2000) have argued that technology is effective when developers
thoughtfully consider the merits and limitations of a particular application while
employing effective pedagogical practices. This stance argues that instructional
objectives should drive decisions as to what technology is to be used and how. Tessmer
(1993), almost a decade earlier, argued that developers should not just use a specific form
of technology because “we can”. Instead, he urged developers to conduct what he called
a “front end” evaluation to carefully consider the suitability of a multimedia format in
terms of the instructional objectives. This, he argued, would help to establish the extent
to which the medium enhances the learning experience. In order to do this, he suggested
that developers employ formative evaluation procedures to test prototypes of multimedia
products. He stated that there are at least three dimensions that ought to be considered:
(a) user interface, (b) multimedia integration, and (c) the learning experience. Resiner
(1987) considered the user interface to be the interaction between the technological tool
and the user. By multimedia integration, Tessmer (1993) referred to the seamless
organisation and utilisation of multimedia attributes in an effort to bridge the content to
the learner in various ways. Tessmer argued that the more “user-friendly” the
technology, the more a learner can concentrate on the learning experience.

A central requirement in this thesis is that, in determining the effectiveness of any new
technology in education an evaluation of the learning experience is necessary. One
purpose in undertaking such an evaluation would be to better understand the claims made
about the learning experiences that are associated with using educational hypermedia.
Central to the approach of this thesis is the need to examine learning experiences in terms
of the thinking (cognitive) and the self-regulated processes (metacognitive) involved.

Hypermedia and learning


In more recent times, and with the advent of the World Wide Web, applications have
developed which utilise web browsers as the underlying production and delivery engine.
The earliest entries into computer based learning were usually written using
programming languages and constructed in the main by computer programmers. The
pervasive nature of web browser technology, the underlying universally adopted
Hypertext Markup Language (HTML), and software that enables the lay person to
develop browser screens without any knowledge of HTML, have led to a vast array of
web-based products including educational products.

Hypermedia as a form of information access is highly attractive to learners because, on
the surface at least, it leaves them in full control of that access, while at the same time
making it extremely easy for them to navigate amongst the resources provided. Welsh
(1995) contends that hypermedia provides “learning environments that promote the
active, personal exploration of information for both comprehension and information” (p.
275). Thus, as a learning context, it is seen as turning control over to the learner, a
construct considered central to effective learning. Hypermedia as such espouses a very
constructivist (Bruner, 1996; Dewey, 1987; Piaget, 1990; Vygotsky, 1986) approach to
learning: a view of learning emphasising active and interpretative knowledge acquisition
as individuals integrate and extend their knowledge in an effort to maintain its viability.
Advocates of constructivism agree that it is the individual's processing of stimuli from the
environment, and the resulting cognitive structures, that produce adaptive behaviour,
rather than the stimuli themselves. Therefore, a key aspect of this thesis is to explore this
processing, the structures involved and the capacity of learners to regulate it.

Web-based learning has been lauded as ushering in a new era of learning, and like many
technologies in the past, as the mechanism to change the very nature of education and
schooling in the future. Yet, as Dillon and Jobst (2005) in their review of the hypermedia
literature warn, during the previous two decades the empirical evidence did not support
this view. They argue that the practice of hypermedia design has been accompanied by
an uncritical acceptance of a host of quasi-psychological notions of reading and
cognition. They go on to argue that, as a consequence, hypermedia has largely failed to
fulfil much of its early promise. They believe that this is in part due to an inappropriate
emphasis on the technology and not the learner. As with technologies and educational
innovations of the past, research tends to lag behind development. Finally, they believe
that the lack of a systematic theory of hypermedia means that some of the basic issues
raised by the technology remain unresolved. As to whether or not hypermedia is a real
step forward for learning, Dillon and Jobst (2005) cautiously report that “it might be”.

Much of the research into the use of hypermedia in education has focused on the
capability of hypermedia to manage information organisation and retrieval flexibly,
interface design, or mixed media. The use of hypermedia as a tool for mediating the
nature of the cognitive interactions that occur between learners and the computer has
been less thoroughly explored (Yang, 2002). In addition, not much attention has been
given to analysing the cognitive processes that go on in learners’ interactions with the
technology. Therefore, there is a need for further exploration of learners’ interactions
with hypermedia in order to understand better the cognitive processes it activates.

As stated earlier, hypermedia is highly attractive to educational users because, on the
surface at least, it leaves the learner in full control of their access to, and navigability of
the learning resources. This raises questions about the ways these various ‘guidance’ and
‘control’ features of hypermedia affect the learning that occurs. One question is: how is
educational hypermedia best organised to cater for various learning events? A second
question is: how are the learning experiences affected by the self-regulatory
(metacognitive) capacities of the learner (e.g. planning, monitoring, etc.)? The answers
to these questions are fundamental to understanding how hypermedia ought to be
developed and best organised (i.e. provide learner guidance), as well as what kinds of
cognitive and metacognitive capacities enable a learner to successfully manage this form
of learning (i.e. provide learner control). In order to address these questions, the
educational hypermedia design literature provides a starting point and is examined next.

Educational hypermedia design


There is a body of literature that to date has provided guidance about hypermedia design
strategies which rely on the principles of user-centred design. However, Shapiro (2008)
noted that much of the more recent effort has focused on developing learner-centred
hypermedia. She describes learner-centred hypermedia as ‘being designed to assist
learners to achieve their educational goals, rather than offer mere usability’ (p. 29).
She goes on to say that these efforts are being hampered somewhat by a lack of empirical
research on the topic. Recent research undertaken by Shapiro (1999; 2000) and others
(Clark & Mayer, 2003; Jacobson, 2006; Jacobson & Archodidou, 2000) has provided
some insights, and the empirical evidence does suggest that several system and user
characteristics influence outcomes of hypermedia-assisted learning (HAL). Shapiro
argues that among the most relevant of these factors are learners’ levels of metacognition
and prior knowledge, and the interaction between these factors and hypermedia
structures. Shapiro further argues that by capitalising on this research it ought to be
possible to create hypermedia that scaffolds learners in their quest to build knowledge.

The precise function of ‘scaffolding’ as a learning support mechanism varies between
authors. The scaffolding metaphor was introduced by Wood, Bruner and Ross (1976)
who used it to describe the support function of a human tutor based on Vygotsky’s
research on scaffolding to bridge learners to the next level of development, and Collins,
Brown and Newman (1989) who used it to describe the support function in their
cognitive apprenticeship model of learning. Since that time, the notion of scaffolding has
been used to describe a variety of learner support mechanisms, whether human or
technological. In general though, scaffolding serves as a tool that enables a learner to
overcome difficulties encountered when engaging in a learning task. Shapiro (2008)
reports that a good deal of attention has been paid to the use of scaffolding to support
learners in circumventing or overcoming the many factors that prevent them from
achieving their goals. Drawing upon recent empirical research findings, Shapiro has
compiled a table of hypermedia design strategies aimed at scaffolding learners engaged in
HAL.
Table 1: Scaffolding design strategies for hypermedia learning (Shapiro, 2008, p. 35)

Scaffolding purpose: Enhance learning for students with low prior knowledge
Design suggestions:
• Organize a hypertext with hierarchies or other well defined structures
• Provide site maps
• Structure the hypertext in a manner that is compatible with a learner’s goal
• Attach notations to links that explicate the relationships they represent
• Highlight or otherwise encourage use of particularly important links

Scaffolding purpose: Enhance learning for students with high prior knowledge
Design suggestions:
• Promote use of existing knowledge
• Provide minimum cues to cohesion
• Allow maximum learner control

Scaffolding purpose: Enhance metacognition
Design suggestions:
• Provide metacognitive prompts
• Avoid using differential link placement or style

Scaffolding purpose: Help students meet specific learning goals
Design suggestions:
• Structure the hypertext in a manner that is compatible with a learner’s goal
• Highlight or encourage the use of links that are relevant to learners’ goals

Shapiro argues that the fundamental components of a hypermedia system (links, nodes,
site maps, the global structure imposed on documents etc.) can be engineered to function
as scaffolding. She refers to this approach as ‘embedded scaffolding’ and notes that by
using support features that exist as a natural part of the hypermedia interface learners are
less likely to notice the scaffolding’s presence and that it has the potential to make fading
(i.e., the removal of scaffolding) less obvious.

Three scaffolding purposes emerge in Shapiro’s table. These are enhancing learning for
students with low/high prior knowledge, enhancing metacognition and helping students
to meet their specific learning goals. Shapiro (2008) reports that prior knowledge usually
varies widely amongst learners irrespective of the learning context and that research into
most forms of learning indicates that existing knowledge is an important predictor of
future learning. She argues that it is therefore not surprising that this variable has
received considerable attention in recent HAL research. Likewise, assisting learners to
achieve their learning goals is an important design feature of any learning setting.
Hypermedia settings can be structured in multiple ways that might highlight any number
of themes or perspectives; however, if they are not structured in a manner that is
compatible with a learner’s goals then they may work against such an outcome. Finally, it is
well understood in the educational community that good metacognitive skills lead to
enhanced learning outcomes. In fact, metacognitive training programs in a range of
learning settings have proven successful in improving learning outcomes (e.g. Azevedo
& Cromley, 2004; Bartlett, 2008b). However, to engage hypermedia learners in a
metacognitive training program is often impractical, particularly in informal learning
environments.

Shapiro (2008) highlights a very important tension here between the strategies used to
scaffold metacognition and the building of domain knowledge. In scaffolding for
knowledge, the key is to provide specific pointers to relationships between ideas thus
allowing novices to reduce the cognitive load associated with navigating unfamiliar
territory. In contrast, scaffolding to encourage metacognition requires learners to think
critically, question, and engage in self-monitoring (p. 38). A learner using strategies for
knowledge acquisition may be discouraged from invoking metacognitive strategies and
vice versa. She asserts that what are required are design strategies that simultaneously
encourage metacognition while providing the guidance and cues required by novice
learners. Kauffman (2002, 2004) demonstrated that learners can be encouraged to think
more critically about hypermedia content, whilst at the same time thinking critically
about their navigational choices.

Researchers have begun to examine the role of students’ ability to regulate several
aspects of their cognition, motivation and behaviour during learning with hypermedia
(Azevedo, Guthrie, & Seibert, 2004; Hadwin & Winne, 2001; Winne & Stockley, 1998).
Azevedo and Cromley (2004) concluded that this research has demonstrated that students
have difficulties benefitting from hypermedia environments because they fail to engage
in key mechanisms related to regulating their learning. To regulate their learning
students need to be able to make decisions about what to learn, how to learn it, how much
to learn, how much time to spend on it, how to determine whether or not they understand
the materials, when to abandon or modify plans and strategies and when to increase effort
(Williams, 1996). Specifically, they need to analyse the learning situation, set
meaningful learning goals and determine which strategies to use, assess their
effectiveness, and determine if the strategies are effective for a particular learning goal
(Azevedo & Cromley, 2004). In hypermedia settings it is argued that one of the
important drivers of this self-regulation is the learner’s capacity to deploy effective
metacognitive strategies to the task.

Therefore, focusing on the potential of hypermedia to press learners into using more
sophisticated metacognitive strategies needs further examination. In the majority of
recent research, the measure of success has been improvement in test scores. Whilst this
has provided quantifiable evidence of success, these studies offer little fine-grained
evidence of its causes. In order to design hypermedia
that might provide better and more effective metacognitive strategies, more specific
evidence about the causes of success is needed. That is, the design strategies that
encourage better metacognitive practice, and in particular how learners think about their
navigation choices and the relationships between the available links, need to be
understood better.

It is clear that the research undertaken so far suggests that there is a need for a better
understanding of the ‘guidance’ and the ‘control’ aspects of a learner’s engagement with
hypermedia, as well as establishing any causal relationships between them. Of these two
aspects, it is a more fine-grained understanding of the learner’s ‘control’ of their
engagement with hypermedia that is the gap in the knowledge that this thesis seeks to
address. Before such research is undertaken, it first needs to be established whether it is
possible to observe and capture these ‘control’ aspects. That is, can the cognitive and
metacognitive activities of the hypermedia learners be identified and captured? The next
section describes a pilot study undertaken to establish whether it was possible to capture
the cognitive and metacognitive activity of learners engaged in learning with hypermedia.

The pilot study

Background
The study used computer software to capture learners’ cognitive engagement with
educational hypermedia. It followed a qualitative paradigm and focused on gathering
data from students undertaking a web-based course in computer networking as they
interacted with the web-based courseware.

Method
Data were collected using screen-based video capture software (Camtasia) that recorded
the learner’s interactions with the software as a real time video in an AVI format (Beven,
2006). Immediately following the initial capture, the session was replayed to each
learner during which time she/he was continually asked to provide explanations of ‘what’
they were doing, of the thinking behind this action, and ‘why’ they were doing it.

This follow-up event was recorded using the capture software, and differed from the
original recording only through the addition of the learner’s recalls of their cognitive and
metacognitive actions. The original video could be paused while a subject’s response
continued. In that manner, lengthy explanations of the cognitive and metacognitive
processes being undertaken at a particular position in the original video could be
captured. This capture was thus related to the particular point of the original event. That
is, a ‘rich picture’ of the event was captured for later analysis. It is a ‘richer picture’
because learners were able to ‘relive’ the event and provide a much richer and more
detailed account of the thinking and actions that underpinned the learning. The same
level of detail might not be expected through the more traditional approaches to
stimulated recall because a learner’s memory of the fine-grained detail of past events
tends to decay with the passage of time. Moreover, the ‘reliving’ as opposed to the
‘recalling’ of the learning
that took place was a deliberate move to ensure more fidelity and richness in the data.

The steps of the study were as follows:


1. Data Collection
a. Capture Software sent to students for loading onto their computer;
b. Data collection undertaken with each individual student and transferred to CD.
2. Data analysis
a. Data transcribed to framework for initial review (see Figure 5);
b. Data specific taxonomy compiled from initial framework (see Tables 2, 3, 4 & 5).

Cohort
Participants were four students who self-selected by volunteering from a class group of 12.
This study group was chosen for a number of reasons. First, learners were engaged in on-
line learning. Second, as most of the web-based interactions happened on home
computers, it was critical that the computer on which they were working had the disk
capacity to hold video files and had a read and write CD-ROM for transferring back these
large files. Third, the student cohort was known to possess the technical skills needed to
load the capture software, activate the capture, and transfer the results to a CD-ROM. All
volunteers had computers of sufficient capacity and also had the necessary technical
skills.

The hypermedia
Courseware consisted of a series of topics constructed as a set of documents or screens.
Each respondent was asked to work through a topic she/he had not previously attempted.
The courseware allowed them to choose the manner in which they did this. For example,
it was possible for them to move through the screens in a linear fashion following the
structure of the courseware. Conversely, by using a number of navigational aids that
were part of the software, learners were able to select alternate learning pathways and
move between screens in a manner of their choosing. A sample screen is shown as
Figure 4.

Each of the screens provided learners with a consistent interface. The right hand side of
the screen contained text, whilst the left hand side provided a graphics display area. Not
all screens contained graphics, and in those cases where they did not, the graphics area
contained a watermarked logo to signify this. Some screens contained multiple graphics,
indicated by numbered buttons on the left side of the graphics segment (see Figure 4 as
an example). Additional graphics could be viewed by placing the mouse cursor over the
appropriate number. On many screens the quantity of text was greater than could fit in the
text window; in these cases scroll bars were provided. Also, a series of
navigational aids was provided across the bottom of the screen. These allowed learners
to move forward or backward in a linear fashion, or to move directly to some other part
of the courseware. The navigation bar also provided access to a set of review questions, a
quiz and a glossary. As a result, learners were able to follow a pathway offered by the
courseware, select an alternate learning pathway, or test their knowledge to help them
determine their next course of action.

Figure 4: Typical screen layout and navigational devices

Data analysis
In order to assist in later analysis of the data, learners were asked to place the cursor in
the area of the screen they were currently using. For example, if they were reading text
they moved the cursor down the text as they read. Further, the researcher sat in the
background both observing and recording the actions of the learner. These notes were
used to prime the stimulated recall captured as the data set.

The ‘rich picture’ recorded version of learners’ interactions and navigational decisions
formed the data set for initial analysis. A framework for initial review of the data was
developed to present a systematic outline of stimulus and responses. Each of the video
files was viewed and the data mapped onto a four-column table (see Figure 5).

The first data column contained a graphic of screens selected by learners in the order in
which they had worked through them. The second indicated whether or not users had
engaged with that screen or simply by-passed it. The third column captured those
utterances made by learners during the stimulated recall recording that were considered
by the researcher to be describing their cognitive and/or metacognitive actions. The final
column was a record of learners’ transactions identified on the recording as well as those
transactions noted by the researcher whilst observing the learning session. Together,
these provided a comprehensive picture of the navigational/learning track chosen by
learners, and learner and researcher insights about the cognition driving these actions.

Screen: Overview (Used: Yes)

Respondent’s statements:
• I started to read the text and glanced briefly at the graphic.
• There were some acronyms here that were new to me.
• I ignored the threaded case study as I don’t find it useful at this point.

User transactions:
• Started Notepad
• Typed acronyms in full in Notepad
• Cut & pasted acronyms
• Reviewed text
• Ignored case study
• Glanced at the diagram
Figure 5: Sample data used for the preliminary analysis.

The final step was to map the user transactions onto a metacognitive taxonomy (see
Tables 2, 3, 4 & 5) developed from the literature and discussed in more detail in the next
section of this chapter. The following text provides an example of a metacognitive
planning activity identified from a learner’s verbal protocol and assigned to the
taxonomy: ‘I am just opening Notepad because I like to cut and paste stuff to it rather
than having to write things down’.

Findings
The tables below present segments from the mapped taxonomy data which provided
evidence of the methodology’s capacity to capture and identify metacognitive activity
within these hypermedia learning events.

Table 2: Examples of Planning and Monitoring from the data

Metacognitive Strategy/Skill: Planning (Brown, Bransford, Ferrera, & Campion, 1983)
General Skill:
• Setting goals (Brown et al., 1983)
• Selection of procedures necessary for performing a task (Whitbread et al., 2009)
Subordinate Categories:
• Skimming a text before reading
• Generating questions before reading
• Undertaking a task analysis of the problem (Pintrich & Schrauben, 1992)
Metacognitive activities identified by the researcher within the data:
• Removing extraneous parts of the screen
• Setting up Notepad software
• Deciding where to start
• Undertaking software quiz to test level of knowledge

Metacognitive Strategy/Skill: Monitoring (of one’s thinking) (Brown et al., 1983)
General Skill:
• Checking comprehension (Pintrich, 1989)
• Noting inconsistencies (Meijer, Veenman, & Van Hout Wolters, 2005)
Subordinate Categories:
• Tracking of attention while reading a text or listening to a lecture
• Self-testing using questions about the text material (to check understanding)
• Monitoring speed and adjusting to time available (Pintrich & Schrauben, 1992)
Metacognitive activities identified by the researcher within the data:
• Use software Glossary to check term
• Re-reading text to check meaning
• Refer to software Index
• Undertake lab activity to check knowledge
• Undertaking quiz to check progress
• Deliberately pausing (Meijer et al., 2005)

Table 3: Examples of Self Regulation from the Data

Metacognitive Strategy/Skill: Regulating (Pintrich, 1989); Self regulation (Brown et al., 1983)
(Note: Appears sometimes to be used as a rather general category, encompassing finer
grained distinctions such as monitoring and evaluation.)
General Skill:
• Executive control functions (Pintrich, Wolters, & Baxter, 2000)
• Management of learning (Meijer et al., 2005)
Subordinate Categories:
• Rereading to monitor comprehension
• Slowing pace of reading for more difficult or less familiar text
• Reviewing any aspect of materials
• Skipping material and returning later (Pintrich & Schrauben, 1992)
Metacognitive activities identified by the researcher within the data:
• Re-reading previous pages
• Returning to a previous diagram
• Skipping forward as material is familiar
• Comparing a current diagram to a previous version showing less information

Table 4: Examples of Orientation and Execution from the Data

Metacognitive Strategy/Skill: Orientation (Meijer et al., 2005)
General Skill:
• Activities related to familiarization (Meijer et al., 2005)
• Activating prior knowledge (Pintrich et al., 2000)
Subordinate Categories:
• Observing tables and diagrams (Meijer et al., 2005)
Metacognitive activities identified by the researcher within the data:
• Establishing a starting point
• Observing diagrams
• Skipping parts that are already known
• Linking diagram to text

Metacognitive Strategy/Skill: Execution (Meijer et al., 2005)
General Skill:
• Observable actions within the learning that enable it to proceed (Meijer et al., 2005)
Subordinate Categories:
• Note taking
• Reading only a part of a text
• Estimating a solution to a problem (Meijer et al., 2006)
Metacognitive activities identified by the researcher within the data:
• Note taking on paper
• Note taking using Notepad software
• Cutting and pasting text to Notepad
• Skimming through text with cursor
• Opening links
• Changing screens
• Moving to next section

Table 5: Examples of Evaluation and Elaboration from the Data

Metacognitive Strategy/Skill: Evaluation (Meijer et al., 2005)
General Skill:
• Checking, error detection (Meijer et al., 2005)
• Comprehension monitoring (Whitbread et al., 2009)
Subordinate Categories:
• Verifying (Meijer et al., 2005)
Metacognitive activities identified by the researcher within the data:
• Comparing previous renditions of diagrams for differences
• Checking previous text to compare meaning

Metacognitive Strategy/Skill: Elaboration (Meijer et al., 2005)
General Skill:
• Paraphrasing
• Drawing conclusions (Meijer et al., 2005)
Subordinate Categories:
• Commenting on difficulty
• Recapitulating (Meijer et al., 2005)
Metacognitive activities identified by the researcher within the data:
• Comment: reviewing for forthcoming practical test
• Comment: subject matter already familiar
• Comment: this is all new material for me

While it had been possible to capture much of the learners’ cognitive activity, there are
some cautions. At times metacognitive activity was occurring so rapidly, so much in
parallel, and so contemporaneously that the linearity of the capture methods was
problematic. However, the data capture methodology developed did go some way to
addressing this; that is, capturing the combined data synchronously. Even so, it will
always remain somewhat problematic as to whether or not all metacognitive instances
can be captured and categorised accurately. Further, in some instances activity could
only be identified as either cognitive or metacognitive by inferences associated with its
context of use, or by relationships to preceding or succeeding activity, rather than from
the verbal protocols. This related particularly to the metacognitive classification of
execution, which proved to be different from the other categories in a number of
significant ways.

Execution seemed to be almost like punctuation in the story where the learner stopped
thinking and started acting. So in most cases execution as a metacognitive activity was
not literally put into words. As a result, execution was only rarely identified through the
aid of the verbal protocols. Rather, it manifested itself more frequently in overt ways (e.g.
the learner clicked on a hyperlink). These non-verbal ways were more difficult to label
as metacognitive, as doing so could only be done by inference from the surrounding
actions and story line. Meijer et al. (2006) drew attention to this issue in their work and
reported ‘Some subordinate categories of execution activities appear to be of a cognitive
nature rather than a metacognitive nature. However, they are mostly overt cognitive
activities from which covert metacognitive activities are inferred’ (p. 218).

In this pilot most of the activity labelled as execution was inferred and lacked the
additional validation afforded to the other categories through their associated verbal
protocols. Thus, the reliability and fidelity of this classification proved to be more
problematic. In contrast, the other categories of metacognition were identified and
validated only through the use of verbal protocols. Nonetheless, the data revealed more
instances of execution than any other category. Therefore, while execution was
problematic from a reliability of classification standpoint, it translated as an important
category.

Having established the capacity to identify successfully the cognitive and metacognitive
activities of hypermedia learners, within the limitations outlined above, it was now
possible to explore hypermedia learning settings in more detail. In order to advance to
such an examination, three further things would be necessary. First, it would be
necessary to identify, or develop, a theoretically robust taxonomy of metacognitive
activities that would underpin the accurate identification of metacognitive activities in a
hypermedia learning setting. Second, such a taxonomy would need to be general purpose
in nature and have a level of granularity that enabled it to support a study to identify not
just the metacognitive categories, but also the differences and nuances that might reveal
the characteristics of effective use and provide clues as to how they might drive or assist
the learning autonomy claimed to be possible in hypermedia learning settings. Finally,
account would need to be taken of the problematic nature of the execution category.

The final part of this chapter examines the metacognitive literature and identifies a
taxonomy that it is argued has a super-ordinate level of sufficient generality to examine
hypermedia learning. It also identifies and argues for the use of top-level structuring,
which has proven to be context-independent in its application, as a tool for undertaking a
sub-ordinate level of examination and providing the granularity necessary to identify the
characteristics of learning autonomy sought.

Metacognition

Brown (1981) cites Vygotsky (1978) as describing two phases in the development of
knowledge: first, its automatic unconscious acquisition followed by gradual increases in
active conscious control over that knowledge. Brown (1981) argues that this distinction
is essentially the separation between cognitive and metacognitive aspects of performance.
The distinction between knowledge and the understanding of that knowledge has been of
interest to developmental psychologists for some time (Wellman, 1985). Irrespective of
the learning approach adopted by a hypermedia author, the self-regulated nature of
hypermedia learning environments requires the learner to possess and apply both their
metacognitive and cognitive capacities to their learning.

As discussed in the earlier part of this chapter, one of the keys to effective hypermedia
learning is the active conscious ‘control’ over the acquisition of knowledge, that is, the
effectiveness of the metacognitive capacity of the learner. This section begins by
defining what developmental psychologists include under the heading of metacognition.
Metacognition has been identified as that body of knowledge and understanding that
reflects on cognition itself, or the mental activity for which other mental states or
processes become the object of reflection (Yussen, 1985). Metacognition is often
referred to as ‘thinking about thinking’, or thoughts about cognition, and has been a topic
of scholarly interest since the 1970s.

Flavell is most often cited as the originator of the term metacognition. He defined
metacognition as “one’s knowledge concerning one’s own cognitive processes and
products or anything related to them, e.g., the learning-relevant properties of information
or data.” (1976, p. 232); and in doing so he provided the following example.

I am engaging metacognition (metamemory, metalearning, metaattention,
metalanguage, or whatever) if I notice that I am having more trouble learning A
than B; if it strikes me that I should double-check C before accepting it as a fact; if
it occurs to me that I had better scrutinize each and every alternative in any
multiple-choice type task situation before deciding which is the best one; if I sense
that I had better make a note of D because I may forget it;…(more examples)…
Metacognition refers, among other things, to the active monitoring and consequent
regulation and orchestration of these processes in relation to the cognitive objects or
data on which they bear, usually in the service of some concrete goal or objective.
(1976, p. 232).

Prior to the 1970s, it was thought that children’s development of knowledge of memory,
or ‘metamemory development’, was completed somewhere around 12 years of age. In earlier
work, Flavell, Friedrichs and Hoyt (1970) found that young children appeared to be quite
limited in their knowledge and cognition about cognitive phenomena, or metacognition;
that is, they did little monitoring of their own memory and other cognitive enterprises.
It has since become less certain that metacognitive development is completed at around
12 years of age; indeed, adults may differ in metacognitive knowledge as a consequence
of the domains and problems studied. The nature and development of metacognition has
remained a continuing area of investigation. Yussen (1985) pointed out that by the 1980s
the use of the term metacognition had become highly variable, and that much of the
theory had been developed within a restrictive framework focused on children solving
well-defined problems. He argued that there are important reasons for examining the
metacognitive aspects of adults as well.

Stevenson (1986a) proposed that cognitive processing is hierarchical: first-order
procedures enable the automatic achievement of goals; second-order procedures are
executed in a controlled way, utilising first-order procedures to solve new problems;
and third-order executive procedures switch processing between orders. Thus
metacognition was seen as controlled cognitive processing.

By the mid-1990s, Simons (1996) reports that the concept of metacognition continued to
be used with different meanings. For example, he argues that sometimes the term is used
to describe the more general ideas and theories that people have about their own
cognition and the cognition of others, or what he terms metacognitive beliefs
(Wellman, 1985). At other times the term refers to knowledge about one’s own
and other people’s cognitive processes (Flavell, 1976). Finally, Brown (1981) proposed
the concept of metacognition as meaning the steering of one’s own cognitive processes.
In summary, Simons argues that “metacognition is primarily concerned with those human
reasoning processes that are necessary to solving problems for which no completely
developed or automated procedures are available. Both knowledge of these processes
and their control or regulation are typically subsumed in the concept of metacognition”
(p. 441). Each of these meanings is addressed in more detail next.

Metacognition as beliefs
Simons (1996) refers to Dweck’s (1988) focus on metacognitive beliefs about intelligence.
Dweck found that people had two kinds of theories of intelligence: the entity theory and
the incremental theory. In entity theory, intelligence is a fixed commodity and resistant
to change through mastery of new skills. In contrast, in incremental theory, intelligence
is not fixed, rather, it can be increased through effort and learning. Dweck further
contends that a learner’s conception of intelligence probably determines the kind of goals
a learner will adopt. Learners with an entity conception of intelligence will tend to adopt
performance goals, whilst learners with an incremental conception tend to choose
learning goals. Performance goals centre around a learner validating their competence,
in contrast, learning goals aim at increasing one’s competence at understanding, at
figuring out something new. Having learning goals is only possible when one has the
incremental view of intelligence. Thus changing learning strategies is only possible
when the learner has learning goals. Therefore, efforts to change learning strategies will
only be successful when learners have an incremental theory of intelligence. Simons
(1996) argues that an important problem for learning strategy research is how to
change people’s conception of intelligence so that they adopt learning goals instead of
performance goals.

Metacognition as the knowledge of one’s own cognitive processes
Flavell (1976, 1992) considered metacognitive knowledge to be the declarative
knowledge one has about the interplay between personal characteristics, task
characteristics and the available strategies in a learning situation. For example, in
returning to an on-screen learning unit, the learner might realise that what is presented is
material that they have already mastered and they move on quickly. In this instance, they
have used the knowledge that when encountering material that is already known, it is
most efficient to quickly move through that material.

Flavell (1992) identified four broad classes of knowledge that a learner might acquire
about some cognitive activity: tasks, self-knowledge, strategies, and interactions. Task
knowledge is knowledge about how the nature of the task influences performance on it,
for example, knowing that it is easier to recognise an on-screen icon than it is to
recall a labelled name. Self-knowledge concerns one’s own skills, strengths, and
weaknesses as a cognitive being, for example, knowing whether one is better at using
pull-down menu structures or at recalling the keystrokes associated with shortcut keys.
Strategy knowledge is knowledge about the value of alternative strategies for enhancing
performance, for example, knowing that when multiplying a number by nine, it is easier
to multiply by ten and subtract the original number once (e.g. 9 × 7 = 70 − 7 = 63).
Finally, interaction knowledge is knowledge about the ways in which the abovementioned
categories might interact with one another to influence the outcome of some cognitive
performance, for example, knowing that it helps to repeat a list of items in order to
remember them (task), as opposed to repeating an instruction that is not understood the
first time (strategy).

Metacognition as executive control


Flavell (1992) uses the term monitoring for what Brown (1981) calls executive control.
Flavell believes that the monitoring of a wide variety of cognitive tasks occurs “through
the actions of and interactions among four classes of phenomena: metacognitive
knowledge, metacognitive experiences, goals or tasks, and actions or strategies” (p. 4).
Flavell maintains that metacognitive knowledge is that part of a person’s stored world
knowledge that has to do with their being cognitive creatures with diverse cognitive
tasks, goals, actions, and experiences. In contrast, metacognitive experiences are any
conscious cognitive or affective experiences pertaining to any intellectual activity. Goals
or tasks refer to the objective of an intellectual activity and actions or strategies refer to
the behaviours employed to achieve them. The skills of metacognition are those
attributed to the executive in many theories of human memory and machine intelligence.
Simon (1979) describes an executive control process thus: “The control structure
governing the behaviour of thinking man is a strategy or program that marshals cognitive
resources for performance of a task” (p. 365).

Metacognition as the steering of one’s own cognitive processes


Brown (1981) believes there are two kinds of metacognitive knowledge – static and
strategic. She regards ‘static knowledge’ as the verbalisable things people state about
cognition. ‘Strategic knowledge’, by comparison, is the steps individuals take to regulate
and modify the progress of a cognitive activity as it is occurring. While acknowledging
that there might be a host of specific strategies to regulate particular cognitive activities,
she does suggest a list of general strategies that are present in almost all forms of
cognitive activity. Nelson (2005) reports on refinements of theories about metacognition;
in particular, new theory and data that help refine what is already known. He reports that
this includes contexts in which people deliberately initiate metacognitive activity, that is,
a context in which metacognition has a functional role affecting performance. He argues
that such contexts should help to extend ideas about metacognition away from laboratory
settings and help increase our understanding of applied situations where people are
monitoring and controlling their own cognitive activity. More research has since
occurred in applied situations (e.g. Azevedo & Cromley, 2004; Kratzig & Arbuthnott,
2009; Valot, 2002).

Yussen (1985) believes that these are the major ideas that have structured the definition
of metacognition as a field. The work of Flavell (1976, 1992), Brown (1981), and Simon
(1979) has made important contributions to our understanding of this concept. In
summary, metacognition has a degree of imprecision as a construct and can be
distinguished as being of three kinds: first, metacognitive knowledge, referring to the
knowledge people have of their own, and others’, cognition; second, metacognitive
beliefs that people hold about their own cognition; and finally, metacognition as the
active monitoring and steering of on-going cognitive processes, for which there is
likely to be a set of general strategies. It is the last of these meanings, the active
monitoring and steering of on-going cognitive processes through a set of general
strategies, which aligns with the kind of metacognition hypermedia learners are likely
to be able to render through think-aloud protocols. That is, the focus of this research
is on the executive control functions of metacognition, rather than on metacognitive
knowledge or beliefs per se. As succinctly put by Kratzig and Arbuthnott (2009), it
focuses on metacognition as a person’s ability to think about their own thinking, to
think about their own cognitive ability and knowledge, and then to take the appropriate
regulatory steps.

Metacognitive training
The literature continues to report the success of metacognitive training programs, from a
range of learning settings and student types, in improving learning outcomes. Recent
examples include undergraduate students in biology (Azevedo & Cromley, 2004),
university students (Bannert & Mengelkamp, 2008), algebraic reasoning in elementary
school teachers (Kramarski, 2008), high school students studying mathematics
(Mevarech & Amrany, 2008), primary school students (Ritchart, Turner, & Hadler,
2009), and young and older adults (Kratzig & Arbuthnott, 2009). Collectively these
studies suggest that metacognitive training can be effective across the lifespan and across
learning contexts.

In the study by Azevedo and Cromley (2004) the authors examined the effectiveness of
self-regulated training in facilitating learning with hypermedia. Subjects in an
experimental group were given a 30-minute training session on the use of specific
empirically based self-regulated learning variables designed to foster their conceptual
understanding. Pre-test, post-test and verbal protocol data were collected from both the
experimental and a control group, which showed that there had been a significant shift in
the experimental group’s mental models. The verbal protocol data indicated that this was
associated with the use of the Self-Regulated Learning (SRL) variables taught during
training. The collection of verbal protocol data was the crucial step that enabled the
researchers to explain the significant shift in mental models highlighted by the
quantitative pre-test and post-test data.

This study by Azevedo and Cromley (2004) is important for two reasons. First, it
demonstrates that metacognitive training can have a positive effect on hypermedia
learners. Second, it demonstrates that it is possible to use verbal protocols to collect and
examine metacognitive activity. In order to do so an empirically derived taxonomy of
metacognitive activities is necessary. Such a taxonomy is discussed next.

A taxonomy of metacognitive activities


Recently, Meijer, Veenman and Van Hout Wolters (2005, 2006) developed a hierarchical
taxonomy of metacognitive activities for the interpretation of think-aloud protocols of
students in secondary education who studied texts on history and physics. They drew
extensively on the work of others (e.g. Flavell, 1979; Pintrich & De Groot, 1990; O’Neil
& Abedi, 1996; Schraw & Moshman, 1995) and developed a taxonomy from the
similarities they identified, arguing that the taxonomy should first be related to other
known taxonomies of metacognitive activity in contemporary literature. Second, they
believed that if this relationship was absent then there would only be slight convergence
of the new method for coding think-aloud protocols with already existing methods.
Finally, they believed that the taxonomy should focus on metacognitive activities rather
than cognitive activities.

Initially they developed an elaborate taxonomy and on testing with multiple raters found
that the interrater correspondence was well below par and concluded that the categories
in the taxonomy were too highly specified. In interpreting the findings they argued that
metacognitive activities which involve an executive control function are often mentioned
under a range of labels to do with self regulation and executive control, self-management
of learning and self-regulated learning. They deduced that self-regulation appeared to be
used as a general category and this led them to postulate a super-ordinate group of
categories within their taxonomy: orientation, planning, execution, monitoring,
evaluation and reflection. However, further analysis saw the category, reflection,
renamed elaboration. They believed that the ways in which the super-ordinate group of
categories in the taxonomy was ordered “more or less reflect the temporal course of the
reading and problem-solving process” (p. 218). Moreover, they did suggest that there
would be shifts in this temporal organisation, giving the example that a learner may resort
to intermediate evaluative activities before they finished a task.

Meijer et al. (2006) concluded that for this super-ordinate group of categories, “a
substantial correlation between metacognitive activities across both tasks-domains was
established” (p. 231). This would imply that within their taxonomy these super-ordinate
categories of metacognitive activity transcended task-specificity, and were more general
purpose in nature. The general purpose nature of these super-ordinate categories suggests
they may have applicability to other learning settings and to other learners. Moreover,
the taxonomy’s initial association with the task-specific domains of reading and problem-
solving make it particularly suitable for this present research, given they are two critical
skills in successfully using hypermedia. However, the sub-ordinate set of categories
developed specifically to address the studying of texts on history and physics were
generally context specific and not easily transferable to other settings. Therefore, it is
argued that the taxonomy has application to this research at the super-ordinate level only.

Meijer et al. (2006) report that their six super-ordinate categories were derived as
follows. The first three categories were taken from Flavell’s (1979) original three-fold
classification of planning, monitoring and evaluation. Second, drawing on the work of
Schoenfeld (1992) and Van Streun (1990) they included orientation as it sometimes
preceded planning. Schoenfeld (1992) and Van Streun (1990) found that experts spent
much more time on orientation activities as compared to beginners. Third, they also
added an execution category which their studies showed occurred mostly directly after
planning and before monitoring. Fourth, in their initial work they argued that evaluation
was sometimes followed by reflection. They suggested that reflection was usually less
bound to the particular task, but rather aimed at the learning experience and consequences
for the future. Following an analysis of the super-ordinate level of their taxonomy they
found that none of their sub-ordinate categories appeared to fit the super-ordinate
reflection category. They concluded that metacognitive actions such as concluding,
inferring, paraphrasing, summarizing and commenting were more elaborative than
reflective and replaced the category, reflection with elaboration. Thus, their final super-
ordinate categories were orientation, planning, execution, monitoring, evaluation and
elaboration as outlined in the first column of Table 6.

A set of general descriptions was developed and added to inform category identification.
These were constructed from descriptions given by Meijer et al. (2006), as well as
Whitebread et al. (2009), whose category names and descriptors relate to both verbal and
non-verbal data. These general descriptions are necessary to guide data analysis. For
example, at the more specific level of the verbal protocol examples of orientation might
be - What is expected from me? And, where will I start? Examples of planning might
include - Let me check the assessment, and strategic statements like - First I’ll make a
summary of this piece and then I’ll read on. An example of monitoring might include
comprehension statements like - So, mega means large, and, I need to move on now. An
example of evaluation might include an error detection statement like - No, wait, that’s
wrong. Finally, examples of elaboration might include a paraphrasing statement like - If
that first sum is wrong then the second answer needs rechecking. And a concluding
statement like - Oh, well, then that’s the answer. The general description of execution is
somewhat different to the rest in that it is more often identified as a physical action that
can be observed rather than inferred from a verbalisation as is the case with the other
categories.

Table 6: Taxonomy of Metacognitive Activities


Category (Meijer et al., 2006) – General Description (Meijer et al., 2006; Whitebread et al., 2009)
Orientation Any verbalisation related to familiarisation activities.
Planning Any verbalisation related to the formulation of plans and strategic
statements that enable the selection of procedures necessary for
performing a task.
Execution Any physical action such as executing an action plan, note taking,
reading only part of a text and estimating a solution to a problem,
or on rare occasions a verbalisation thereof.
Monitoring Any verbalisation related to the on-going on-task assessment of
the quality of task performance and the degree to which
performance is progressing towards a desired goal.
Evaluation Any verbalisation related to comprehension monitoring, error
detection or reviewing task performance and evaluating the
quality of performance.
Elaboration Any verbalisation related to recapitulation, paraphrasing and
drawing conclusions.
Note: Additional detail on the metacognitive categorisation criteria is provided in the Method Chapter
(p.80) and Appendix 1.

This taxonomy was judged well suited for the current research for a number of reasons.
First, Meijer et al.’s (2006) work had led them to verify that metacognitive activities are
not always task specific and that there is evidence of the generality of metacognitive
activity across various tasks and domains. As a consequence the super-ordinate
categories of the taxonomy reflect this. Second, they have designed a taxonomy that is
suitable for the interpretation of statements in thinking-aloud protocols. Finally, the
taxonomy seems suited to addressing the theoretical gap that this research is attempting
to address, that is, the nature of the metacognitive activities related to learner autonomy.

Meijer et al. (2006) noted a concern with the execution category and reported that while
there appeared to be sub-ordinate categories of execution that are cognitive in nature,
they are mostly overt cognitive activities from which covert metacognitive activities are
inferred. They provided the following example to make this point: “to read only
particular sections of the text is a cognitive activity in itself, but the decision to select
only those sections is of a metacognitive nature” (p. 218). This concern became evident
in the pilot study data. Examples of activities that were categorised as execution
included: moving from page to page, opening hyperlinks, reading parts of the text, taking
notes, cutting and pasting text to Notepad, listening to an audio link, and watching an
animation. However, the classification of many of the activities as execution was, to use
Meijer et al.’s terminology, covertly inferred from the pilot video and audio data and not
generally supported by a verbal protocol. That is, while instances of the other five
classifications were derived exclusively from the verbal protocols (which was sometimes
supported by the video data), execution activities generally were not. In the pilot study,
as discussed earlier, this significantly impacted on the reliability of the execution
classification and presented as an issue that would need to be considered in further
studies.

The pilot study was able to identify metacognitive activity at what could be described as
a coarse-grained categorical level of analysis. Meijer et al. (2006) noted that a
disadvantage of increasing grain size in the descriptions of metacognitive activities is that
one loses sight of the exact nature of the activities involved. Therefore, what would be
more informative would be an analytical tool that provided a finer-grained rendering of
the metacognitive processes, both across and within categories. That is, a capacity to
identify the more subtle differences and patterns within the data. It is argued that these
differences and patterns are more likely to reveal what makes one learner more effective,
or autonomous, than another. What was needed was a way of further analysing the
verbal protocols within categories. The tool to do this would need to complement and
maintain the general purpose nature of the existing taxonomy. This need is accepted as a
natural progression to understanding better the role these metacognitive activities play
and any more subtle differences they may harbour. Top-level structuring (Bartlett, 1978,
2008a) offers such a potential tool and is discussed next.

Top-level structuring

The term ‘top-level structuring’ was coined by Bartlett (1978) to describe the strategic
processing involved in converting newly-acquired metalinguistic knowledge of language
structure into deliberate, procedural “know-how” for learning from text. He argues that
the theoretical construct behind his research on top-level structure and top-level
structuring is that meaningful human language beyond the word is a connected network
of ideas and interrelationships (Bartlett, 2008b). He argues that the former (ideas) give a
communication its substantive content, while the latter (interrelationships) afford it
coherence and cohesiveness. He explains that, from the viewpoint of the text itself, the
distinctive communicative and informing character of a communication depends as much
on the nature of the logical structure of ideas as on the ideas themselves. Bartlett
contends that the ‘idea structure’ is a linguistic construct depicting language hanging
together logically as a communication. Moreover, he argues that it reveals how ideas
reconfigure semantically and grammatically through patterns of relationships both within
clusters and across them. While he contends that different explanations exist for such
configurations and patterning and for analysing data gathered from people’s work in
composing and/or comprehending text, his is based on Meyer’s (1975) work, which
combined the case and propositional grammars from Grimes (1975) and Frederiksen
(1975).

The capacity for learners to use top-level structuring to improve their memory and
comprehension has been well established. Meyer (1971, 1975) in her initial work
showed that freshmen at Cornell University remembered best and longest when they
wrote recall using the same top-level element of the idea structure in the presenting text.
Bartlett (1978) found that 9th graders who used top-level structure to organize memory
and comprehension performances achieved significantly higher than the control group.
Other more recent studies (Roberts, 2004; Bartlett, 2008a) show how railway
workers, business executives and teachers have been able to learn about top-level
structuring quickly and to perform significantly better than they had done
previously at remembering. Bartlett (2008b) asserts that in learning about top-level
structuring the learners have been able to identify propositions of increasing complexity
in the written or transcribed protocols that communicators produce, and separate the
content into two main categories of information – ideas and relations. He goes on to
explain that ideas are seen in words that have substantive meaning whereas relations are
shown either explicitly by words that have signalling functions, or implicitly through
one’s intuition about elliptical connections.

Bartlett (2008b) describes top-level structuring as a two stage method for determining the
‘gist’ or main idea in a communication. He argues that it is based on the assumption that
when processing information about a topic, the ideas fit, however loosely, into one main
message. The construct suggests that if we look at the ideas across a statement
or piece of writing, the ways each idea interrelates with others, and the number of
relations others have to it, indicate its interconnectedness and its relative importance.
Moreover, these ideas, to which many others relate, are closer to the top of what is a
hierarchical ordering of the text. The idea with the most supporting information beneath
it is at the top level and is the main idea.

Bartlett (2008b) contends that research has shown that the way ideas fit together across a
text to project the main idea can be represented by just a few possible rhetorical
structures; these being a list, a comparison, a cause and effect, and a problem and
solution. Fletcher, Zuber-Skerritt, Piggot-Irvine and Bartlett (2008) report that while
there are many sub-forms, to top-level structure a communication is to present a well
signaled account of how things fit together in one of these four ways. Table 7 provides
an example of verbal protocols of each.

In the first example of a list, three separate aspects can be identified; the use of and
in the second sentence joins two of them. In the comparison example, the word but
provides the clue to the comparison being made. In the cause/effect example, the word
because provides the clue to the relationship between the cause and the effect: that is,
‘someone had put their name in’ (cause), and ‘it didn’t work’ (effect). Finally, the
problem/solution example shows that the problem, ‘in case I wanted to Google it or
whatever’, has a solution, ‘keep it on the clipboard’.

Table 7: Top-level structuring rhetorical structures


TLS Structure Example verbal protocols
List I remember that picture. It was text and I was trying to see what’s
there.
Comparison Never in any type of learning would I go to Wikipedia, but I know it
is meant to have less errors in it than other sources.
Cause/effect That didn’t work because there had already been someone who had
put their name in
Problem/solution I was going to go back and keep it on the clipboard in case I
wanted to Google it or whatever.
Note: Additional detail on the top-level structuring criteria is provided in the Method Chapter (p.80) and
Appendix 1.
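As a purely illustrative aside, the explicit signalling words in these examples suggest how a first-pass scan of transcript text for the four rhetorical structures might look in code. The signal-word lists below are assumptions for demonstration only, not Bartlett’s criteria, and the implicit (elliptical) relations he describes would still require human judgement.

```python
import re

# Illustrative signal words assumed for each rhetorical structure.
# Bartlett's method also relies on implicit (elliptical) relations
# that no keyword list can capture.
SIGNALS = {
    "comparison": ["but", "whereas", "in contrast"],
    "cause/effect": ["because", "therefore", "as a result"],
    "problem/solution": ["in case", "in order to", "the problem"],
    "list": ["and", "also", "first", "finally"],
}

def detect_structures(utterance):
    """Return the rhetorical structures whose explicit signal words
    appear in a think-aloud utterance."""
    text = utterance.lower()
    found = []
    for structure, words in SIGNALS.items():
        if any(re.search(r"\b" + re.escape(w) + r"\b", text) for w in words):
            found.append(structure)
    return found

# The cause/effect example from Table 7:
print(detect_structures("That didn't work because there had already "
                        "been someone who had put their name in"))
# prints ['cause/effect']
```

Such a scan could only pre-sort transcript segments for a human coder; deciding which structure carries the top level of the communication remains an interpretive step.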

Fletcher et al. (2008) argue that top-level structuring aligns with most forms of data
collection that rely on verbal behaviour as a means of transcript analysis. They further
argue that it is unique in applying the structural features of language as a framework, and
importantly for this research, that in doing so it is relatively independent of the content
domain used. Thus, top-level structuring would seem to be an ideal tool with which to
examine the think-aloud protocols of hypermedia learners. That is, the top-level
structuring rhetorical structures ought to provide a more fine-grained (sub-ordinate level)
analysis of the think-aloud protocols which should assist with metacognitive
classification and provide a richer understanding of the structure and nature of each of the
metacognitive processes.

How the Meijer et al. (2006) metacognitive taxonomy and the Bartlett (1978) top-level
structuring rhetorical structures could be combined as a multi-layer taxonomy for the
examination of the verbal protocols from hypermedia learning settings is discussed next.

A general purpose metacognitive taxonomy for examining hypermedia
learning settings

It is argued that the general purpose nature of both the super-ordinate level of the Meijer
et al. (2006) metacognitive taxonomy and the Bartlett (1978) top-level structuring
rhetorical structures, together with their empirically established capacity to examine
verbal protocols, provide a useful tool with which to examine metacognitive activity in
hypermedia learning settings. When used in combination they ought to produce a
synergy. That is, the top-level structuring linguistic markers, identified as a sub-ordinate
category within each of the metacognitive activities of the taxonomy, ought to
synergistically render a richer insight about the nature of the metacognitive activity, as
well as aid with its classification. Table 8 outlines how they would work together as an
analytical tool and the possible associations they could reveal.

Table 8: Metacognitive classifications with Top-level structure associations


Super-ordinate level (↓)    Sub-ordinate level (→): Top-level structuring linguistic markers
Metacognitive Activities    List    Cause/Effect    Problem/Solution    Comparison
Orientation
Planning
Execution
Monitoring
Evaluation
Elaboration

For example, one learner may monitor their learning using a list structure, while another
may monitor using a cause and effect and/or a problem and solution structure. While in
both cases the learners are monitoring their learning, they are doing so in different ways
and the recognition of this difference is important. That is, one kind of monitoring may
prove to be more effective than the other per se, or, one kind of monitoring may be more
effective only in a particular learning circumstance or condition.

These examples highlight the two ways in which the association between a metacognitive
classification and the top-level structuring linguistic markers can operate: the association
may be one-to-one or one-to-several. Instances in which the association is one-to-several
are in fact more sophisticated versions of the top-level structuring rhetorical structure,
list; that is, the list classification comprises two kinds with important differences. The
first kind, identified by a one-to-one association, can be considered simple lists. The
second kind, identified by a one-to-several association, can be considered complex lists
because they contain other kinds of linguistic markers (e.g. comparison or cause/effect).
Identifying this difference is important because a complex list structure suggests that a
learner has been able to articulate a richer understanding of their learning. As a
consequence, the top-level structuring linguistic marker list was expanded to incorporate
both kinds of lists.
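The simple/complex list decision amounts to counting the distinct top-level structuring markers coded within one protocol. The following is a minimal, illustrative sketch only; it assumes the markers have already been identified by a human coder, and the function name and marker labels are not part of the study's tooling:

```python
# Hypothetical sketch: deciding whether a verbal protocol's list
# classification is simple or complex. A one-to-one association
# (a single kind of marker) yields a simple list; a one-to-several
# association (a list element combined with other markers such as
# cause/effect or comparison) yields a complex list.

def classify_list_kind(markers):
    """markers: the top-level structuring markers coded for one protocol."""
    distinct = set(markers)
    if len(distinct) <= 1:
        return "list-simple"
    return "list-complex"

# The protocol analysed in Table 10: a simple statement plus a
# cause/effect pair, hence a complex list.
print(classify_list_kind(["list", "cause/effect"]))  # list-complex
```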

In order to demonstrate the capacity of the taxonomy to analyse verbal protocols, Tables
9 and 10 provide examples. Table 9 presents an analysis of a verbal protocol that has
been classified metacognitively as an evaluation, and which has a one-to-one association
with the top-level structure, comparison. In contrast, Table 10 presents an analysis of a
verbal protocol that has been classified metacognitively as orientation. In this example
the first part of the verbal protocol is a simple statement, I think I need to click here to
Start, which has been classified as a list-simple structure, while the final part, And it
actually had that arrow so it made it quite easy, has been classified as a cause/effect
structure. This means that the protocol, as a whole, has been analysed as having a
one-to-several association and classified as a complex list.

The analysis in Table 10 also demonstrates how the top-level structuring classification
can often assist with the validation of the metacognitive classification. In this example
the cause/effect structure in the latter part of the protocol highlights the basis on which
the learner recognised how to proceed, lending weight to the metacognitive classification
of orientation.

Table 9: An example of the analysis of a verbal protocol with a one-to-one
association

Verbal protocol: I think this is a really useful picture, but it is not like the other
pictures I looked at earlier.

Metacognitive Activities | List-Simple | List-Complex | Cause/Effect | Problem/Solution | Comparison
-------------------------+-------------+--------------+--------------+------------------+----------------------------
Orientation              |             |              |              |                  |
Planning                 |             |              |              |                  |
Execution                |             |              |              |                  |
Monitoring               |             |              |              |                  |
Evaluation ✓             |             |              |              |                  | ✓ "really useful picture";
                         |             |              |              |                  |   "not like the others"
Elaboration              |             |              |              |                  |

Table 10: An example of the analysis of a verbal protocol with a one-to-several
association

Verbal protocol: I think I need to click here to Start. And it actually had that arrow so
it made it quite easy

Metacognitive Activities | List-Simple     | List-Complex               | Cause/Effect        | Problem/Solution | Comparison
-------------------------+-----------------+----------------------------+---------------------+------------------+-----------
Orientation ✓            | "Click here to  | ✓ "Click here to start" +  | C: "Had that arrow" |                  |
                         |  start" →       |   C: "Had that arrow",     | E: "Made it quite   |                  |
                         |                 |   E: "Made it quite easy"  |  easy" ←            |                  |
Planning                 |                 |                            |                     |                  |
Execution                |                 |                            |                     |                  |
Monitoring               |                 |                            |                     |                  |
Evaluation               |                 |                            |                     |                  |
Elaboration              |                 |                            |                     |                  |

(The arrows → and ← indicate that the simple-list element and the cause/effect pair
combine into the complex-list classification.)

Conclusion

This chapter has discussed the evolution of educational hypermedia and its potential as a
learning tool. Claims about the value of educational hypermedia suggest that,
theoretically at least, it would seem to have the potential to enhance learning. However,
research to date suggests that this potential has not been realised and that realising it
requires a better understanding of how learners interact with it. A central tenet
of the potential of educational hypermedia is its capacity to support learner autonomy.
However, a theoretical gap in the knowledge about how this occurs currently exists. One
aspect of this gap is a lack of understanding about the kinds of metacognitive capacities
learners need to manage this form of learning and how they might best be manipulated to
secure successful learning outcomes.

In order to begin to address this theoretical gap a pilot study was developed and
undertaken. The purpose of the pilot was to establish whether the cognitive and
metacognitive activities of learners engaged in hypermedia learning were accessible to
recording using video capture software. The results of the pilot indicated that it was
possible to capture and categorise the cognitive activity of learners from their
think-aloud protocols while they engaged with educational hypermedia.

Next, the metacognitive literature was examined to determine the kinds of metacognitive
processes that might be deployed by learners in a hypermedia setting. This examination
outlined a taxonomy of metacognitive activity developed specifically for the
interpretation of thinking-aloud protocols of students in secondary education who studied
texts on history and physics. The taxonomy is hierarchical in structure and at the super-
ordinate level has been shown to be domain independent. While the domain
independence at the super-ordinate level makes the taxonomy suitable for examining
hypermedia learning settings, it could only provide a coarse-grained examination of the
protocols. In order to undertake a more fine-grained examination, a sub-ordinate
categorisation that was also domain independent was necessary.

In the final section of the chapter it is argued that top-level structuring, originally
developed to describe the strategic processing involved in converting newly acquired
metalinguistic knowledge of language structure into deliberate procedural “know-how”
for learning from text, is a suitable tool for further analysis of verbal protocols. Its
unique capacity to apply the rhetorical structures of language as a framework for further
analysis of verbal protocols in a manner independent of the content domain has proven
effective in transcript analysis. Thus, it is argued that it can provide a more fine-grained
secondary analysis of verbal protocols assigned to each of the metacognitive categories,
as well as afford greater reliability to that classification.

The review of literature has identified three research questions to be addressed through an
empirical study. These are:
1. Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording using video capture software
protocol?
2. To what extent (how) do users see themselves as autonomous in such activity and
how does this manifest itself in practice?

3. To what extent (how) will the provision of metacognitive training effect greater
awareness of metacognitive activity and/or greater autonomy?

The first of the questions has been partially answered in a pilot study discussed earlier in
this chapter. The following chapter describes the methodology for the main study
designed to address more fully question one and the two remaining questions.

Chapter 3 - Method

The research in context


Capturing and examining learners’ cognitive and metacognitive activity whilst engaged
in hypermedia learning settings is the focus of this research. A central requirement of a
hypermedia learning setting is its capacity to afford learners the autonomy to control their
learning. However, ways in which learners are able to realise this autonomy are not well
understood. Therefore, this research seeks to understand better a learner’s individual
cognitive and metacognitive learning experiences in these settings.

Learners in a vocational education setting were investigated to determine: (i) what of
their cognitive and metacognitive activity when engaged in a hypermedia learning setting
could be captured; (ii) to what extent they saw themselves as autonomous in such
activity; and (iii) whether a research-driven manipulation of what learners do might
enhance their control of learning and the effectiveness of its outcomes, including their
performance success. A qualitative research method was adopted to seek answers to the
following research questions through a pilot and main study.

A pilot study reported earlier was undertaken to answer the following question:

1. Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording, using video capture software
protocol?

A positive outcome from the pilot enabled a second investigation which constitutes the
main study reported here. It set out to examine:

2. To what extent (how) do users see themselves as autonomous in such activity and
how does this manifest itself in practice?

3. To what extent (how) will the provision of metacognitive training effect greater
awareness of metacognitive activity and/or greater autonomy?

Qualitative research perspectives


Research in education has been dominated by quantitative approaches, derived from
behavioural, developmental and cognitive psychology. More recently, researchers have
acknowledged that qualitative research such as case studies (Yin, 2003) can add
substance and subtlety to the field's knowledge base; in this instance enriching the
understanding of vocational learning, and the settings in which that learning takes place.
As a result, qualitative methods are now more widely used and accepted, and interest in
qualitative approaches has grown rapidly in the educational sphere (Bryman, 2006).

Marshall and Rossman (1999) justify qualitative research as research that delves in depth
into the complexities and processes of little-known phenomena, while Strauss and
Corbin (1998) contend that qualitative educational researchers seek to uncover the nature
of learning experiences, as well as to understand what lies behind much of the
phenomena about which little is yet known. In vocational education, qualitative
researchers look at learners’ experiences in a range of learning settings, attempting to see
the world from the learners’ points of view. For example, they analyse classroom,
workshop and work-based discourses, in an effort to understand what drives and shapes
the interactions of vocational learners in these settings. Moreover, they seek to
understand the systems of meaning that prevail in these various learning settings. A
qualitative approach was clearly relevant to the present research, in which enhanced
understandings were sought about the drivers of, and the relationships between,
metacognition in hypermedia practice and vocational learners' success.

The importance of theory in qualitative research


Greckhamer and Koro-Ljungberg (2005) argue that it is important for researchers to be
aware of, and lay open, the theoretical and epistemological foundations of their research,
which they describe as “a certain understanding of how we know what we know, e.g.
through objectivism, constructionism and subjectivism and their variants" (p. 737). It is
important because the type of data collected and analysis methods used are influenced by
theoretical perspectives and epistemology. Crotty (1998) explains that “Justification of
our choice and particular use of methodology and methods is something that reaches into
the assumptions about reality that we bring to our work. To ask about these assumptions
is to ask about our theoretical perspective” (p. 2). The argument has been progressed by
Greckhamer and Koro-Ljungberg (2005) who claim that processes of data collection and
analyses are interrelated, and serve the epistemological goal of producing particular
knowledge. Therefore, researchers should make explicit the theoretical, epistemological
and conceptual connections of the methods used. This study attempts to fulfil this
imperative through the use of an empirically based metacognitive taxonomy to analyse
the verbal protocols of hypermedia learners in situ in order to understand better how their
metacognitive activity is realised using a qualitative (case study) method.

Case studies

Yin (2003) states that “In general, case studies are the preferred strategy when ‘how’ and
‘why’ questions are being posed, when the investigator has little control over events, and
when the focus is on a contemporary phenomenon within some real life context” (p. 1).
The current research is, in essence, five qualitative case studies embedded within a ‘case’.

The literature is rich with definitions and descriptions of case study method. Rose
(1991), Merriam (1998), Sturman (1999), Cohen, Manion & Morrison (2000), Stake
(2003), and Yin (2003) all describe the nature and features of case studies. Merriam
(1998) suggested that "the single most defining characteristic of case study research lies
in delimiting the object of study, the case… a thing, a single entity, a unit around which
there are boundaries” (p. 27). The focus of this research has these clear parameters,
making an argument for the use of case study method. In addition, the research is closely
aligned with Merriam's criteria for an ‘interpretive’ case study, or ‘multicase studies’, as
would occur with five participants, in which the "researcher gathers as much information
about the problem as possible with the intent of analysing, interpreting, or theorising
about the phenomena" (p. 38).

Yin (2003) proposed a "technical definition" of a case study in two parts. The first
relates strongly to this research. Yin (2003) outlined the property as "an empirical
inquiry that investigates contemporary phenomena within its real life context, especially
when the boundaries between phenomena and context are not clearly evident". The
purpose of this research is to gather information about learners’ cognitive and
metacognitive activities (the phenomenon) as they occurred whilst learning in a
hypermedia learning setting (real-life context). The boundaries between the learner’s
more general metacognitive actions when learning and the specific press of the
hypermedia itself are not clearly evident. The second part of Yin’s (2003) definition
states that "the case study enquiry:
• copes with the technically distinctive situation in which there are many more
variables of interest than data points,
• and as one result relies on multiple sources of evidence, with data needing to
converge in a triangulating fashion,
• and as another result benefits from the prior development of theoretical
propositions to guide data collection and analysis." (p. 13)

Therefore, in this research, the case study method is used to explore the processes and
dynamics of learners' practice in a vocational education setting and to gain an in-depth
understanding of a situation where they are engaged in hypermedia learning and its
meaning for those involved. It carries with it a non-generalisable reliance on the
specificities of the case/s reported.

Qualitative case studies are characterised by the discovery of new relationships, concepts
and understandings rather than the verification of pre-determined hypotheses (Yin, 2003). As
Merriam (1998) described, "The interest is in the process rather than outcomes, in context
rather than a specific variable, in discovery rather than confirmation" (p. xii). In this
research, vocational learners were the focus cases. The phenomena explored were learners'
metacognitive experiences as they engaged with and interacted with hypermedia. Case
studies often rely on inductive reasoning from data grounded in the context that is the
focus of the research. This means that an examination of the data allows possible
generalisations, concepts or hypotheses to emerge (Yin, 2003).

Yin (2003) makes the very important distinction between the use of case studies in
research as opposed to other uses (e.g. for teaching), and in particular, the role of theory
in design work. He argues that a research design should include five components. The
research design should indicate what data are to be collected, as indicated by (a)
‘research’ questions, (b) its propositions, and (c) its units of analysis. The design should
also indicate what is to be done after the data have been collected, as indicated by (d) a
logic linking the data to the propositions and (e) the criteria for interpreting the findings
(p. 28).

Activity Analysis and Verbal Data

Methods of assessing the cognitive processes that learners engage in have been developed
and used in traditional learning settings and more recently in hypermedia settings. Stahl
(2004) argues that, depending on the data level, three groups of methods for assessing
process data can be identified. The first of these, Activity Analysis methods, focus on
collecting data about what the learner is doing. For example, how long does the learner
spend on a particular node? These data point to how a learner is using the system and can
be used to speculate about the kinds of cognitive activity they might be employing.
Second, more direct access to the kinds of cognitive activity in use can be elicited
through use of Verbal Data methods such as ‘thinking aloud’ and ‘retrospection’
(retrospective questioning and stimulated recall). A third method is that of employing
Learner Self-rating Systems during the learning process. This third method was not
adopted in this research as it was judged that it might not realise the kinds of data
required, that is, the capturing of the metacognitive activity of learners. Collectively, it
was considered that Activity Analysis and Verbal Data ought to provide rich pictures of
learner engagement. These methods are outlined next.

Activity Analysis
Determining students’ learning experiences and behaviour in hypermedia environments
provides particular challenges for researchers and educators alike. Electronic
learning settings are usually multifaceted and complex and learners are often remote from
the teachers. Gathering information about a learner’s engagement requires methods other
than a conventional survey or interview if the continuous monitoring of what is actually
happening is to be achieved.

Technology has been employed previously by educators and researchers to gather data
about learner’s interactions in the form of log files. Interactions can be recorded in these
files and provide information about such things as the frequency of web-site usage, time
spent on a site, and the time spent on individual pages. Federico (1999) suggests that log
files provide a researcher with an unobtrusive window for evaluating an individual’s on-
line learning without encroaching on their cognitive or metacognitive processes. A
number of studies have used log files to determine usage patterns (Ingram, 1999) and
some report relationships between usefulness and usability (Piguet & Peraya, 2000) based
on these kinds of data. However, Sheard, Ceddia, Hurst and Tuovinen (2003) argue that
the relationship between learning outcomes and website usage is more difficult to
establish. While data from log files might reveal ‘what’ the learner is doing they provide
little insight into ‘why’. The ‘why’ data are critical to understanding the cognitive and
metacognitive activities of learners.
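As a minimal illustration of the kind of measure log files afford, time spent per page can be derived from timestamped visit events. The log format below is invented for the example; real web-server or LMS logs would need their own parsing:

```python
# Hypothetical sketch: deriving time-on-page from a timestamped visit log.
from collections import defaultdict

def time_per_page(events):
    """events: ordered (timestamp_in_seconds, page) visit records.

    Time on a page is the gap until the next event; the dwell time on
    the final page cannot be known from the log alone, so it is omitted.
    """
    totals = defaultdict(float)
    for (t1, page), (t2, _) in zip(events, events[1:]):
        totals[page] += t2 - t1
    return dict(totals)

log = [(0, "intro"), (40, "task1"), (100, "intro"), (130, "summary")]
print(time_per_page(log))  # {'intro': 70.0, 'task1': 60.0}
```

Even this small example shows the limit noted above: the log reveals 'what' (where time went) but nothing about 'why'.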

Verbal Data
It is the ‘why’ data that are more likely to provide clues to the cognitive processes
learners are employing when engaged with hypermedia. One way of determining the
‘why’ data is to ask learners directly, allowing them to report what it is they are doing or
have done. Two methods that have been used in educational research for some time are
‘thinking aloud’ and ‘stimulated recall’. Using these protocols learners can provide
reasons for the various actions captured using activity analysis methods. These two
methods are outlined next.

Thinking aloud methods


These methods require learners to say everything that comes into their heads while
working on a task. The methods draw upon the idea that learners will verbalise the
current content of their short-term memories (Ericsson & Simon, 1993). The data should
provide a suitable basis for drawing conclusions on learner’s decisions and cognitive
processes while processing a task. However, Stahl (2004) warns that there may be a
problem with using these methods. He considers that they place a cognitive load on
learners over and above the effort already being applied whilst engaging with the
hypermedia. This additional cognitive load (Sweller, 1989; 1993) may impact on how
they cope with the task at hand. Nonetheless, this method has been employed
successfully in much of the research in educational settings, particularly in cases where
the learning event is not over an extended period and the focus is on capturing the
learner’s cognitive activity. One means of accommodating Stahl’s (2004) warning would
be to include opportunities at the end of the activity for a learner to self-report their
awareness of any interference.

Stimulated Recall
Two methods of stimulated recall are in common use and are discussed next. Both adopt
identical means of questioning the learner; however, the timing of the questioning
intervention differentiates them.

Retrospective questioning at the completion of a learning task


Retrospective questioning at the end of the investigation asks learners about their ‘doing
task’ strategies, approaches and decisions. The learning event can be recorded and
played back to learners at which time a series of questions can be put that ask why they
carried out specific actions. This is the procedure known as stimulated recall.

An advantage of this method is that learners are not interrupted during the learning
sessions. This eliminates problems identified in the thinking aloud process. However, a
replacement issue is that information may be lost given the lapse in time as such
questions address only some aspects of the ‘stimulated’ event. Further, the extent to
which the answers to retrospective questions might still reflect the actual decisions made
at the time is uncertain (Gerdes, 1997; Stahl, 2004). This method is also time consuming,
as detailed questioning of events is likely and many questions cannot be anticipated prior
to the learning. Moreover, justifications rather than explanations may be evoked.

Direct retrospection during the learning task


In direct retrospection, learners are asked about their thoughts and decisions at regular
intervals during the learning task. This requires them to stop work for short periods and
report on what they have been doing. Advantages of this method are that questions are
posed closer to the time that events actually take place, and the cognitive load is
somewhat less than that required during methods like ‘thinking aloud’. A disadvantage
of this method is that, like the ‘thinking aloud’ method, it might add interference to the
task. Further, progressive questions might influence learners shaping their cognitive
activity as the task proceeds to completion.

In this research, an attempt to overcome these disadvantages was incorporated into a
modified stimulated recall method: retrospective questioning at the completion of a
learning task. Outlined in more detail later, the learning event was recorded using screen
capture software that permitted learners to ‘relive’ the event immediately following the
learning. Given that learner’s cognitive and metacognitive activity was the essence of the
sought data, this method placed no additional cognitive load during the task and did not
interfere with the learner’s thinking during the learning event itself.

Research Design - Setting, participants and procedures

Stages and Setting


This research was conducted in a large metropolitan Institute of TAFE in Queensland,
Australia. It was undertaken in two distinct stages. The first stage was a pilot study
conducted with a group of adult vocational learners undertaking an Associate Diploma
in Computer Networking, as outlined earlier. This first stage was to test the extent to
which it was possible to capture learner’s cognitive and metacognitive activity when they
were engaged in learning with hypermedia, and from this experience, to suggest
refinements in the data collection process for the major study.

The second stage, the major investigation, became possible after the positive outcome of
the pilot. The investigation involved a series of case studies undertaken at the same
TAFE institution. It involved a group of teachers and educational designers
from a variety of disciplines who engaged in learning with hypermedia modules as part of
their daily activities. The case studies explored the extent to which a research-driven
manipulation of a metacognitive taxonomy could successfully identify the ways in which
these learners saw themselves as autonomous in their hypermedia activities, and how this
autonomy actually manifested itself in practice.

Stage One – Pilot Study


A brief summary only of the pilot study is provided next as that study is presented in
detail in Chapter 2. The study used computer software to capture learners’ cognitive
engagement with educational hypermedia. It followed a qualitative paradigm and
focused on gathering data from students undertaking a web-based course in computer
networking as they interacted with the web-based courseware. Participants were four
students who self-selected by volunteering from a class group of 12. Data were collected
using screen-based video capture software (Camtasia) that recorded the learner’s
interactions with the software as a real time video in an AVI format. Immediately
following the initial capture, the session was replayed to each learner during which time
she/he was continually asked to provide explanations of ‘what’ they were doing, of the
thinking behind this action, and ‘why’ they were doing it. This ‘rich picture’ of the event
provided the data for later analysis. As outlined earlier the steps of the pilot were as
follows:

1. Data Collection
• Capture software was sent to students for loading onto their computer.
• Data collection was undertaken with each individual student and transferred to a
CD-ROM.
2. Data analysis
• Data were transcribed to a framework for initial review
• Data specific taxonomy was compiled from the initial framework.

A framework for initial review of the data was developed to present a systematic outline
of stimulus and responses. Each of the video files was viewed and the data mapped onto
a four-column table (see Figure 5 on page 43). This map provided a comprehensive
picture of the navigational/learning track chosen by learners, and insights about the
cognition driving these actions. The final step was to map the learner’s transactions onto
a set of metacognitive processes (see Tables 2, 3, 4 & 5 on pages 44 and 45).
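The four-column framework can be pictured as one record per learner transaction. The following is a speculative sketch only; the field names are assumptions for illustration (the study's actual column headings appear in Figure 5), not a description of the original analysis procedure:

```python
# Illustrative sketch of a row in the four-column review framework.
from dataclasses import dataclass

@dataclass
class Transaction:
    screen_event: str      # what the hypermedia presented (stimulus)
    learner_action: str    # what the learner did (response)
    verbal_protocol: str   # the think-aloud/recall utterance
    classification: str    # metacognitive process mapped onto the row

row = Transaction(
    screen_event="Start page displaying an arrow icon",
    learner_action="Clicked 'Start'",
    verbal_protocol="I think I need to click here to start.",
    classification="Orientation",
)
print(row.classification)  # Orientation
```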

The data gathered from this pilot allowed the examination of the preliminary research
question:

Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording using video capture software
protocol?

An analysis of the data suggested that the question could be answered in the affirmative
and therefore, stage 2 was possible. The findings from the pilot required that some
modifications be made to the methodology implemented in stage 2, including
accommodation of the difficulties in assessing metacognition for the category
“execution”. The stage 2 methodology is discussed next.

Stage Two – Major Study: Case Studies

Introduction
The data capture method and procedures trialled in the pilot study were adopted in this
stage. The major changes were the adoption of a theoretically driven taxonomy and the
addition of a further level of analysis that examined the verbal protocols using top-
level structuring. The theoretical construct behind top-level structure and top-level
structuring is that meaningful human language beyond the word is a connected network
of ideas and interrelationships. The top-level structuring rhetorical structures were
expected to provide a more fine-grained (secondary level) analysis of the think-aloud
protocols in order to assist with metacognitive classification and provide a richer
understanding of the structure and nature of any metacognitive processes therein. These
changes led to the production of a much richer and more fine-grained data set for
analysis. Additional steps in the methodology reflected the more purposeful focus on
answering the set of research questions posed in the last Chapter (see page 63).

Case study participants


The participants were approached as a group at one of their formal staff meetings. After
learning about the study, five learners agreed to participate.

Procedure
Data collection in this stage of the study mirrored that trialled and successfully
implemented in the pilot study. Screen-based video capture software (Camtasia) was
used to record learners’ interactions with the hypermedia as a real time video in an AVI
format. As in the pilot study, immediately following the initial capture, the session was
replayed to each learner during which time they were continually asked to provide
explanations of what they were doing, of the thinking behind this action, and why they
were doing it. This process was once again captured. This second capture produced a
video of the original learning event, overlaid with the retrospective questioning, which
became the data set for analysis. Next, the learner participated in a metacognitive
training program. Finally, another hypermedia engagement was recorded using the steps
outlined previously.

Data collection
The steps for collecting data in each of the case studies were as follows:

1. Learning Event (1)


• A laptop computer containing the capture software was connected to the internet
to link to the computer learning network.
• Using this laptop with the capture software activated, the learner engaged in their
first hypermedia learning event of approximately 30 minutes. The researcher
observed the proceedings and made notes.
• Following a short break the learning event was replayed and the learner asked to
explain in as much detail as possible their thoughts and actions as they ‘relived’
the event (retrospective questioning). Once again the capture software was
activated and captured the learner’s ‘reliving’ of the event.
• Following a short break the learner was asked to reflect upon the learning
experience as a whole and answer the following three questions:
1. How would you describe your engagement with hypermedia?
2. To what extent do you see yourself as autonomous in such activities? And
3. How would you rate how effectively you engage with educational
hypermedia?
In order to answer question 3 each learner was asked to rate their effectiveness
using a 6 point Likert scale and to briefly outline their reasons for the rating.
Responses were captured on a voice recorder.
2. Metacognitive Training
• Data from the learning event 1 were transcribed and analysed (see Appendix 1).
A printed copy was then used to enable member checking as well as to inform
discussion during the metacognitive awareness activity described next.
• Each learner participated in a 30-minute metacognitive awareness (training)
activity prior to the second learning event. A more detailed description of this
event is provided in the next section.

3. Learning Event (2)
• Immediately following the metacognitive training, a second hypermedia learning
event was captured.
• Data were collected as in (1) above.
• Each learner was asked to rate their effectiveness using a 6 point Likert scale, as
in (1) above, and to briefly outline their reasons for the rating. Responses were
captured on a voice recorder.

Metacognitive training
The method used to structure the metacognitive training was based on the work of
Azevedo and Cromley (2004), discussed earlier in Chapter 2 (see page 53). Azevedo and
Cromley employed a 30 minute training session just prior to the learning event in which
they discussed with participants an empirically developed set of self-regulated learning
variables. The basis of this training session was the empirically based metacognitive
taxonomy discussed in Chapter 2 (see Table 6, page 57) and the top-level structuring
rhetorical structures (see Table 7, page 61). The metacognitive training was carried out
with each case study participant individually. The training session lasted approximately
30 minutes and was divided into three sections. First, a paper copy of the taxonomy and the
top-level rhetorical structures was given to the participant and the initial 10 minutes was
spent explaining and discussing each of the categories they contained. Second, a paper
copy of the analysis of the first learning event was given to the participant and the next
15 minutes were spent examining and discussing these data. Third, in the final 5 minutes
the learner was asked to reflect on their utilisation of metacognitive strategies and discuss
future utilisation.

Data preparation and analysis


An initial data analysis was carried out utilising the procedures successfully trialled in the
pilot study. That is, for each learning activity the learner’s activities and actions were
transferred to a table (see Table 11 and Appendix 1), along with any learner utterances
and the rich descriptions retrospectively collected, which provided an initial framework
for review. Following this, metacognitive classifications and a top-level structuring
analysis were applied to these data. Given the lack of verbal data associated with the
execution category, it was not possible to assign any top-level structure analyses to that
category.

Table 11: Outline of data analysis table with sample data

Time Code | Screen Characteristics/Usage/Observations | Researcher’s Remarks/Questions | Respondent’s Utterances | Meta Class | Top-Level Structure Analysis
01:28 | OB: The learner listened to an audio file of the text, and stopped the file after the word alcohol in the second line. | You stopped it there. | I thought it was just the same – a text reader, I didn’t want to listen to that. | EX; EV | C: Thought it was the same; E: Didn’t want to listen
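One way to see the shape of a row in this analysis table is as a simple record; in the sketch below the field names mirror the column headers and the sample values paraphrase the row shown above. The class itself is purely illustrative and was not part of the study's tooling.

```python
from dataclasses import dataclass, field

# Illustrative record for one row of the data analysis table (Table 11).
# "EX"/"EV" are codes from the metacognitive taxonomy; the TLS notes
# reproduce the sample row's top-level structure annotations.
@dataclass
class AnalysisRow:
    time_code: str                 # elapsed time, e.g. "01:28"
    observations: str              # screen characteristics / usage / observations
    researcher_remarks: str = ""   # researcher's remarks or questions
    utterances: str = ""           # respondent's utterances
    meta_classes: list[str] = field(default_factory=list)
    tls_notes: list[str] = field(default_factory=list)

row = AnalysisRow(
    time_code="01:28",
    observations="Listened to an audio file of the text; stopped after 'alcohol'.",
    researcher_remarks="You stopped it there.",
    utterances="I thought it was just the same - a text reader ...",
    meta_classes=["EX", "EV"],
    tls_notes=["C: Thought it was the same", "E: Didn't want to listen"],
)
```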

These analyses provided rich and detailed data sets of the learning events and of the
learning trajectory taken by each learner, and they captured much of the thinking that
drove the learning, from which many of the associated metacognitive processes could be
deduced.

From these completed data sets, quantitative data about the metacognitive and top-level
structuring activities were extracted and mapped for each learning event. Tables 12 and
13 show the format adopted. An arbitrary splitting of the learning event into slices of
time was undertaken in an effort to identify possible differences in usage. It was
hypothesised that, in line with more traditional learning settings, different patterns of use
might be seen. For example, it might be expected that more orientation and planning
would occur in the early stages of the sessions and less in the closing stages.

Table 12: Table of metacognitive activity with sample data

             | Orientation | Planning | Execution | Monitoring | Evaluation | Elaboration
First 5 mins |      2      |    7     |     6     |     10     |     5      |      4
Body         |      6      |    4     |    28     |     32     |     7      |      5
Last 5 mins  |      1      |    1     |     9     |     12     |     4      |      4
Total        |      9      |   12     |    43     |     54     |    16      |     13
%            |    6.12%    |  8.16%   |  29.26%   |   36.74%   |  10.88%    |   8.84%

Table 13: Table of top-level structuring activity with sample data

             | Simple TLS events              | More complex top-level structuring events
             | List - Simple | List - Complex | Cause/Effect | Problem/Solution | Comparison
First 5 mins |       4       |       3        |      4       |        1         |     5
Body         |      13       |       5        |      8       |        3         |    14
Last 5 mins  |       3       |       4        |      3       |        1         |     6
Total        |      20       |      12        |     15       |        5         |    25
%            |    25.97%     |    15.58%      |   19.48%     |      6.49%       |  32.48%
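The percentage rows in Tables 12 and 13 are each category's share of the total coded events. A minimal sketch of that calculation, using the Table 12 sample totals (note that a couple of the printed percentages sit 0.01 above plain rounding, presumably so the row sums to 100%):

```python
# Each category's percentage share of the total coded events, as in the
# bottom row of Table 12 (totals taken from the sample data above).
totals = {
    "Orientation": 9, "Planning": 12, "Execution": 43,
    "Monitoring": 54, "Evaluation": 16, "Elaboration": 13,
}
grand_total = sum(totals.values())  # 147 coded events in the sample

percentages = {cat: round(100 * n / grand_total, 2) for cat, n in totals.items()}
# e.g. percentages["Orientation"] -> 6.12, percentages["Evaluation"] -> 10.88
```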

The process of data analysis.


Issues of validity and reliability.
Rigour in scientific research, including research in the social sciences, is judged against
two central tenets: validity and reliability (Silverman, 1993).

‘Reliability’ refers to the degree of consistency with which instances are assigned to the
same category by different observers, or by the same observer on different occasions
(Hammersley, 1992). To address the issue of reliability in this research, three processes
were used. First, the coding categories were established from a grounded theoretical
framework. Second, top-level structure (TLS) categories were assigned to all of the
learner utterances by the researcher and a co-rater for the complete data set. Both sets
were compared, and where differences were found, these were discussed and a
categorisation assigned as agreed to by both raters. Third, for the metacognitive
classifications the coding process was explained to a co-rater (not the same person who
undertook the TLS coding) who replicated the coding process to establish the reliability
of the categorisation. The co-rater undertook coding of 10% of transcripts of data from
the first three cases and an inter-rater reliability of 96% was established. The coding
process was judged to be very reliable.
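The reported 96% figure can be read as simple percent agreement between the co-rater's coding and the original. A sketch of that calculation, where the category abbreviations and both code sequences are hypothetical sample data, not codings from the study:

```python
# Simple percent agreement between two raters assigning metacognitive
# categories to the same transcript segments (hypothetical codings).
rater_a = ["MON", "EV", "OR", "MON", "PL", "EL", "MON", "EV", "OR", "MON"]
rater_b = ["MON", "EV", "OR", "MON", "PL", "EL", "MON", "EV", "OR", "EV"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_pct = 100 * agreements / len(rater_a)  # 90.0 for this sample
```

Simple percent agreement does not correct for chance agreement; a chance-corrected statistic such as Cohen's kappa is a common alternative.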

What was considered to be less reliable though was whether or not all of the identified
cases of execution were in fact metacognitive. Categorising an action as an execution
was not problematic in itself. Rather, what was problematic was that in most cases the
execution action was identified only by observation and, due to the absence of an
associated verbal protocol, not able to be inferred from one. This meant that in most
cases a metacognitive inference could only be tenuously established. That is, the
thinking behind the action could not be as readily established as one inferred from a
verbal protocol. For this reason an accommodation was made to concentrate the study on
the five categories other than “execution” wherein reliability could be more faithfully
established.

‘Validity’, described by Hammersley (1992) as the extent to which an account accurately
represents the social phenomena to which it refers, was addressed in this research in four
ways: First, through the inter-rater reliability of the data coding; second, through the use
of multiple data collection strategies including observation, video-capture, retrospective
questioning at the completion of a learning task, and the use of learner ‘self-reporting’;
third, through the extensive member checking of the TLS and metacognitive data; and
fourth, through the cognitive awareness activity conducted at the completion of each
learning event. With member checking, the validity procedure shifted from the
researcher to participants in the research (Creswell & Miller, 2000). Lincoln and Guba
(1985) describe member checking as “the most crucial technique for establishing
credibility” in a study. This provided a sound basis for the triangulation of the data.

This research makes no claim for generalisability, but ultimately aims through rich
description and reporting of the research process to ensure transferability and further
development of the research themes (Lincoln & Guba, 1985; Patton, 1990).
Dependability and confirmability were addressed through the systematic recording and
transcription of the learning events, inter-rater reliability and member checking.

Summary

The theoretical perspectives underpinning the methodological approach have been
outlined to demonstrate how a qualitative methodology, a case study approach driven by
a theoretical model of metacognition, was appropriate for the research focus. Marshall
and Rossman (1999) justify qualitative research as research that delves in depth into
complexities and processes that examine little known phenomena, while Strauss and
Corbin (1998) contend that qualitative educational researchers seek to uncover the nature
of learning experiences, as well as to understand what lies behind much of the
phenomena about which little is yet known; the essential focus of the present research. In
general, case studies are the preferred strategy when ‘how’ and ‘why’ questions are being
posed, when the investigator has little control over events, and when the focus is on a
contemporary phenomenon within some real life context (Yin, 2003).

The participants and the procedure involved are described. Details of the data collection
and analysis, using retrospective questioning at the completion of a learning task and
mapping the responses to a theoretical frame of metacognition, are discussed. Given
that learners’ cognitive and metacognitive activity was the essence of the sought data, it
was argued that the method adopted placed no additional cognitive load on learners
during the task and did not interfere with their thinking during the learning event itself.

The chapter concludes with an examination of validity and reliability. It is argued that
processes were in place to ensure the credibility of the outcomes in the research.

Chapter 4 - Findings

Introduction

The capturing and examining of learners’ cognitive and metacognitive activity whilst
engaged in learning with hypermedia in vocational learning settings is the focus of this
research. The research is driven by three questions:
1. Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording using video capture software
protocol?
2. To what extent (how) do users see themselves as autonomous in such activity and
how does this manifest itself in practice?
3. To what extent (how) will the provision of metacognitive training affect more
awareness of metacognitive activity and/or greater autonomy?

Although the answer to question one was partially established in the pilot study reported
in Chapter 2, this chapter reports the more complete findings, as well as findings to
questions two and three. This has been achieved through the use of a more sophisticated
methodology (outlined in Chapter 3) in analysing the case studies, which improved the
fidelity with which metacognitive classifications were assigned. Using the metacognitive
linguistic markers identified in a top-level structuring (TLS) analysis of the learners’
responses, an inter-rater reliability of 96% was achieved. Moreover, the TLS analysis has
provided more systematic and insightful ways with which to examine the autonomy
displayed by the learners engaged in their practice.

Each of the learners was asked to participate in two meetings which were separated by a
four to six week time span. At the first meeting each was asked to undertake a
hypermedia learning task (learning module 1) for approximately 30 minutes, which was
followed by a 10-15 minute informal discussion. The discussion involved learners
reflecting upon and answering the following three questions:

1. How would you describe your engagement with hypermedia?
2. To what extent do you see yourself as autonomous in such activities?
3. How would you rate how effectively you engage with educational hypermedia?

As part of their response to question three, they were asked to rate themselves on a 6
point Likert Scale.

Table 14: Effectiveness rating Likert Scale

1 – Very ineffective | 2 – Ineffective | 3 – A little ineffective | 4 – Somewhat effective | 5 – Effective | 6 – Very effective

Following a preliminary analysis of the data from learning module 1, each of the
respondents was engaged in a second meeting. At this meeting each undertook a 30
minute training session followed by a second learning task. During the training, they
were provided with an explanation of the metacognitive taxonomy used in this research,
which in turn was used to facilitate a discussion about the metacognitive strategies they
had employed in their first learning task. Following a short break, they undertook a
further hypermedia learning task (learning module 2). Finally, they were asked to again
rate themselves using the 6 point scale adopted previously.

What follows are the analyses of each of the five case studies. Each case is presented in
the sequence in which the data were gathered, as outlined above, namely learning module
1, reflection on learner autonomy and finally learning module 2. The chapter concludes
with a summary of these analyses.

To assist in the reading of the case analyses, the following conventions have been
adopted within the text of each case. Italics have been used to identify the learner’s
utterances and responses. Underlining has been used to identify module and topic
headings, screen names, hyperlinks and navigational buttons. Finally, Courier New
has been used to signify text that has been typed or for software generated messages that
appear on screen.

At the end of each of the learning modules a numerical summary of the metacognitive
activity and top-level structure linguistic markers is provided. Further, at the end of each
case the summary data from both learning modules are presented.

There were five cases and the background information to each of these cases is provided
next.

The Cases
Case One – David
David is a male in his late 20s who is a trained teacher. He has experience with learning
with hypermedia and has used it on a number of occasions. The reason for his learning is
professional development.

Case Two – Lesley


Lesley is a female in her mid 50s who is working as a manager within an Institute of
TAFE and who had previously trained as a teacher. Lesley has considerable experience
in learning with hypermedia and the learning modules she undertook in this study were
for professional development purposes.

Case Three – Tammy


Tammy is a female in her mid 50s who is working as an administrative supervisor within
an educational institution. Tammy has previous experience in learning with hypermedia
and the learning modules she undertook in this study were for professional development
purposes.

Case Four – Judy


Judy is a female in her late 20s who is working as an instructional designer within a
product development unit within an institute of TAFE, although she had formally trained
as a graphic designer. Judy has had a lot of experience designing the graphical aspects of
media and some experience in learning through hypermedia. Having recently started as
an instructional designer, she was formally enrolled in a Diploma of Project Management,
a unit of which she used in the first learning module.

English is a second language for Judy, and while her verbatim utterances were not always
grammatically correct, in most cases their meanings were easily derived. Judy’s different
cultural background manifested itself in her being descriptively less forthcoming than the
others, and being linguistically less nimble meant that her descriptions are not always as
rich and as full. Despite this, in reflecting on this unit of work, Judy’s responses to the
researcher’s observations and questions were usually part of a unified dialogue that was
rich with metacognitive linguistic structures.

Case Five – Ray


Ray is a male in his early 30s who works as an educational designer within an
educational institution. Ray has previous experience in learning with hypermedia and the
learning modules he undertook in this study were for professional development purposes.

Case One – David
Learning module 1

The first learning module with which David engaged was the unit S207 The Restless
Universe from the Open University’s Science and Nature series. The unit was made up
of an introduction and two major topics, the second of which had a number of sub-topics.

Figure 6: Unit modules

David did not appear to read the introductory page and just selected the first link,
Introduction. The typeface was small and the text stretched across the screen to which
David remarked: Yeah, I just found that reading across things was a little bit frustrating
(00:02). He commenced skim reading the page and did not appear to spend a great deal
of time doing so: Yeah, I didn’t see the value of reading about what I was going to
learn. I thought well if I am going to learn it, I am going to learn it (00:30). He then
selected the next button to move on. These comments suggest that David commenced by
monitoring and evaluating on two levels, first in terms of his user interface, and second,
about how he was going to approach his learning.

This brought him to the first topic, Physics and the physical world where, as he had been
inclined to do in his first module, he scanned down the page getting some idea of what
was on the page: Yeah, where we were going, demonstrating his need to orientate himself
with the learning interface. I asked him if he had seen the unit outline on the top right of
his screen and he remarked: No, I did have a look at it at the start there. And thought,
OK, I mean at that stage I thought when I got to number 2 I didn’t know how big it was
going to be, and I didn’t bother to press the plus button because I knew I was going to get
there soon (00:53). These confirmation and cause/effect structures indicated he
continued to orientate himself to the learning interface and monitor his progress. He
returned to the top of the topic and his cursor movements indicated that he read through
the material quickly, but quite deliberately: I find that I use the mouse a lot to track
where I was reading and it actually crossed my mind at one point that you know why is it
just a cursor, why can’t I get something like a little pointer, a finger pointer or something
(01:21). Within the topic were a series of lightly shaded blue boxes which he appeared
not to take too much notice of: No, I didn’t really – actually I don’t think I read it at all
(01:49). These statements further indicate that he was still working to orientate himself
with the learning interface as he continued to monitor his progress.

David moved his cursor down the page to a diagram: I had seen this picture at the
bottom and I was interested to get down and have a look at it, and commented: I sort of
understood that concept too. Just from school, indicating he found it necessary to
evaluate the relative use of the diagrams. He continued to explore (monitor) aspects of
the diagram further: Yeah. Just to sort of see what they used as examples for those
different sizes (02:04). He then moved on to another diagram and remarked: That
diagram I thought was particularly interesting because I saw galactic down the bottom,
and I always thought galactic was this enormous thing, but it is actually talking about the
earlier on phase – like it is a time measurement rather than a size measurement so it took
me some time to get my head around that I think – but yeah then I figured out what was
going on (02:28). His response indicated that he had engaged in a connected series of
evaluation, monitoring and elaboration processes using a complex list of cause/effect and
comparison structures to guide his learning. He then used the next button to move on to
the next topic.

The topic started with a quote which David bypassed and then came back to: I nearly
didn’t read that quote. I thought, ah, it is just a quote. But then I thought often in this
science area quotes are laws and rules and stuff, so I bothered to read it (03:17). These
comparison and problem/solution structures indicate David continued with his
monitoring and evaluation strategy. This was followed by a large block of text in which
his cursor movements indicated that he started to read through systematically (not with a
skimming motion often adopted in the past) before making the text bigger and continued
to read in the same vein: That’s when I made the text a bit bigger because it was getting
a bit hard to read. The increased text size meant that the menu previously available was
now not on screen: I did have a look at the menu before I did that though and I thought
oh I have not too far to go in this unit 2. But it just seemed to go on and on (03:59).
Once again orientating himself to the interface and evaluating its impact on his progress.

David continued to read and reached a graphic of Roger Bacon which he seemed to
ignore and read the text that followed: No. It didn’t bother me what he looked like
(04:30), evaluating its usefulness. As his reading progressed he reached a section that
contained some formulas and an explanation of their constructs. He remarked: I often
had to read the formulas again a couple of times to sort of understand what was going
on. Being not real mathematically minded (05:00), in monitoring his progress. At this
point his cursor stalled, indicating that he was studying this section quite intently: It was
really when I got to the examples that I sort of gathered that I don’t think there is one for
that particular equation, but later on they used some examples and that’s when I actually
understood it (05:17). He elaborated further by observing that: later on when you started
to see examples it started to make sense, and: Oh, I left knowing the formula made sense.
Having some understanding of it. But I don’t think I could apply it, put it that way
(05:39). This series of comparison and cause/effect structures suggest a monitoring
activity underpinned by a set of interrelated evaluations was occurring.

David continued to read another large section of text, again, his cursor movements
indicated that he was continuing with the more deliberate and systematic style he had
adopted for this topic: Yeah, I read that pretty quickly though. Because it was sort of like
background information, I thought (06:06), and commented further: Yeah, or that I had
some idea of, and: Yeah, and I knew that they were mathematicians as well as scientists
so it was sort of like yeah (06:20). On reaching the next graphic he seemed to ignore it
once again: Didn’t look at the graphic. Immediately following the graphic was a
question: That question interested me because it talked about the Jesuits. And having
background – knowing a bit about religion I thought I would read it. So I thought about
the answer myself to this one (06:33). David’s cursor movement indicated that he read
and re-read the questions several times and then clicked on Now read the answer: Yeah
well I was interested in the Jesuits would talk about science – but then I thought the
Jesuits – I thought about the Chinese situation and put those pieces together myself so
really tried to put myself in place of the Jesuits there (06:54). He evaluated: I did have
an answer and it was pretty much what they said (07:11). This complex list of
comparison and cause/effect structures articulates a series of monitoring processes that,
although linked, were underpinned by a sub-series of evaluative processes that seemed to
determine both their linkage and direction. That is, while David was serially negotiating
a number of linked monitoring actions, he was concurrently having to process, in parallel
and at a sub-level, a series of evaluative processes as well. David clicked on the next link
to move on.

David was taken to the next sub-topic, The Clockwork Universe, where he adjusted the text
to a smaller size and skimmed through the first two paragraphs: Yeah. I did adjust the
text. And I thought about not reading it because I know about the different world view
and then I thought, oh, I had better. I probably read it a bit quicker than I read some of
the other things (07:42). These statements suggest that here he planned a change to the
learning interface as he evaluated his strategy to proceed with his learning. He then went
back to the first paragraph and his cursor indicated that he read selected parts of the text
just covered: Yeah, I was thinking about the And yet it moves statement there (08:18).
Evaluating what he had just read. He continued reading and seemed to ignore three
graphics while scrolling further through the text: I read that they talked about Keplerian
there and they hadn’t mentioned it before - that theory. So I figured it must be afterwards
and then it was (08:40). David continued monitoring his progress and evaluating the
whereabouts of missing theory. After looking at a graph that came next in the topic,
David scrolled back up the page: But I always think when I see an image or graph that
hasn’t been talked about – oh – should I be on to something there? (08:56), and when
asked if he had looked at the three graphics he seemingly ignored, he responded: No, just
a cursory look (09:13). The graphics were accompanied by explanatory text which David
appeared to read: I did only read the first couple of words of each one though. I went -
The earth-centred view, The Copernican, The Keplerian (09:20). These comparative
structures suggest David had a need to resolve (evaluate) in what way the graphic might
inform (monitor) his learning progress.

David continued to read and appeared to pause as he highlighted some text: Yeah. I
think I was struggling to keep reading there so I started to highlight to track where I was
up to (09:47) and elaborated further: It was just the amount of text (10:11). This break in
concentration appeared to require him to monitor his progress and reorientate his position
in the text. Next, he paused over a graphic and the explanatory text that followed. His
cursor movement became quite inconsistent as he moved back and forth between the
graphic and text he was studying: Yeah, I move my mouse around a lot (10:25),
suggesting that for David, the mouse plays an important role in navigating the learning
space and orientating his eyes and mind. David moved on to the next section of the
sub-topic, a graphic and text section on Isaac Newton, which his cursor movement suggested
he appeared to cover quite thoroughly: Yeah, I had a good read about that actually. He
was clearly more focused on this section: Just, you know, knowing a bit about his life I
suppose. And it was probably at that point reading about his life and looking at it that
way that I was most interested. Rather than – the facts were obviously embedded in that.
But the way it was presented I found a lot more interesting (11:06). David continued to
monitor and evaluate his progress.

The discourse on Isaac Newton contained a diagram of Trinity College at Cambridge
University as it was at the time of Newton, which David seemed to spend time studying.
He remarked: Yeah I had a close look at that to see what it was like (12:06) and
evaluated: I think that was when I was having a closer look at the diagram and it made
me think of That Incredible Mind movie. I don’t know why (12:22). He continued to read
the text and highlighted and copied the word Unitarian from within it, opened a new
browser window, and started Google. His explanation was: Yeah, which I imagined was
a Trinitarian that didn’t believe in you know – who was a Christian who didn’t believe in
– not a Christian but someone who didn’t believe in the Christian God as God. I just
wanted to check up that I was right about that. (12:48). These statements constructed
around a series of complex lists containing comparisons suggest that David continued to
monitor his learning as he systematically evaluated key aspects of the material presented
in this section.

David switched to the now open Google browser, pasted Unitarian and read through the
responses and selected the link to the free dictionary website: Yeah, just used the Google
definition (13:13). While waiting for that page to open he moved back on the topic page,
highlighted and copied the word Lucasian, opened a new browser page and searched for
the Wikipedia site: I thought I would check up – just out of interest – because I read that
and I thought what is a Lucasian? (13:29). While that search started, he returned to the
Google page and scanned the free dictionary entry for Unitarian for a few seconds and
returned to the topic page where he continued to read the text from where he had left off.
After a few seconds he briefly returned to the free dictionary tab and again scanned the
entry. Here David was clearly multi-tasking both at the learning interface level and at a
cognitive level. While on the surface his actions appeared linear, cognitively he was
dealing with a series of parallel user interfaces with their associated learning streams.
That is, at this point he was required to switch cognitively between the learning
interface, a Google search and a Wikipedia search. Therefore, as he continued to
monitor and evaluate his learning he also needed to monitor and evaluate his decisions
about the use of the multifaceted learning interface.

David returned to the topic page and scrolled through the information about Newton and
paused again on the word Unitarian and remarked: It has got to be pretty close to
Judaism (14:06). He then went back to the free dictionary definition of Unitarian on the
Google tab and reread it: I wanted to know that – but I wasn’t sort of stuck on it. So I
was happy to just click the search button and go back to my readings and return to it
when I was ready. So it was sort of giving me a break from the heavy text (14:34). So,
despite having to continue to switch within the complex interface, this appears to have
provided him with a cognitive break. David closed the Google tab before going back to
the Wikipedia tab and pasting the term Lucasian. He explained his action by commenting:
Well I think I chose Wikipedia for something that was more likely to be – I didn’t think it
would be a definition as such – but, um, something you would look up in an encyclopedia.
He returned to the topic tab and continued reading the text and after reading through
another section, highlighted some of the text several times: I was starting to struggle to
keep up with where I was at this point, which is why I started highlighting (15:22). He
elaborated further: I think it just keeps me up to where I am. And really forces me to
read the words at a speed I can highlight. Because I have noticed that I will skim read
over the top pretty quick (15:45). I asked him if at this point he had found the going
getting tough, and he responded: That part was (16:36). Clearly the cognitive load of
dealing with both the learning and the multiple learning interfaces was impacting on his
performance.

David continued through the topic and reached another graphic with some accompanying
text. He skimmed across it, and his cursor movement suggested that, for just a couple of
seconds before he returned to the Wikipedia tab, he had switched from skimming to
systematically reading the text. He skimmed through that Wikipedia text briefly before
closing the tab: Yeah. That’s because I am used to slower search engines and just being
able to click and go on (16:07). He elaborated further: I think probably an efficient use
of time as well. I will go back to it, read a bit more, and flick out of it. So it is almost like
a brain break sort of thing (16:34). This returned him to the topic tab where he continued
to read through the text. After a short time his cursor paused: Yeah. I think I was sitting
back and reading for a while – and then I thought this quote by Joseph whatever his
name is. Yeah, Lagrange, I thought that was interesting (17:00). After some further
skimming, his cursor slowed over the text of three key points: The minute I read key
points, I thought I had better read these really properly, so I spent a lot of time on these
(17:22). Elaborating further: I think they are new points (17:35). These responses
provided a list of evaluative processes that appear to be driven by an overarching
monitoring engagement.

David continued reading and started to play with a small ball that he manipulated with his
fingers: I do this without thinking. My hands are sore at the moment, so that is why – it
stretches my fingers and back (18:13). I asked if this is something he uses as a break, and
he responded: At the times I stretch my back, I will often look away from the screen, and
elaborated further: Which probably gives me just a few seconds to just process what I
have just read (18:45). He then made a further evaluation about the reading of on-screen
text: I think the really heavy side-to-side text reading is very difficult on the screen
(19:17).

David opened a new tab and connected to Wikipedia again and typed in the word
determinism and returned to the topic: I think I looked up determinism here. He
went on to explain: I actually did that for more information than they had given me
because they basically said that determinism is – what is it called – first cause and all
that sort of stuff. But I just wanted to see if it had any associations with any religion or
anything like that (19:22). Back on the topic, his cursor movements suggested that he
continued to read on, ignoring another graphic, and reaching a question in the text: Yeah.
That question I read very quickly and came up with an answer very quickly, which turned
out to be pretty much what they had said (20:11). These comparison and cause/effect
structures suggest that David continued to monitor his progress while again using a
multiple learning interface.

David moved to the Wikipedia tab and skimmed quickly through the page returning to a
link to determinism in western tradition, which he followed. He continued to scan
through the text which contained a number of further links. He used his cursor to jump
from link to link which appeared to be a way of managing his skimming of the text: I am
really skim reading here just looking for – I was actually looking at the blue words. He
then paused and opened the browser’s Find function and searched for the word catholic:
Yeah, well I wanted to find out what the church’s reaction to it would be because of the
whole free will thing (20:44).

Figure 7: Search for the word catholic

As this was unsuccessful, David next tried to find the word religion. He was successful
in doing this and was taken to several parts of the Wikipedia text which contained the
word.

Figure 8: Search for the word religion

I asked him if he had found that search to be useful and he responded: Not particularly.
Not for that particular question I was asking, but it did give me a little bit more –
probably confirmed what I thought I knew about it (20:54). He closed the Wikipedia tab
and returned to the topic where he continued to read on. David’s responses, which
contain cause/effect and comparison structures, suggest that this search for meaning
required him to engage in evaluative and elaborative cognitions.

On returning to the topic, David reread question two: I had actually read that before I
went to Wikipedia – that question – and when I came back I just read the key words
(21:30), and selected the Now read the answer link.

Figure 9: Question 2 and answer

I asked him if he found this useful. He responded: Yeah, I would say so. At least they
force you to think about it a little bit beforehand (21:45). He elaborated: Yeah, I was a
bit shocked when I clicked show answer, and it was such a big answer though – because I
had thought of about four words. Which ended up being basically what they said. I
asked him if he found the answer complex, to which he responded: Which they did say it
was an open ended question. So there is lots of answers (22:06). He spent some time
reading the answer thoroughly before moving on through the text. These responses
indicated that David was still engaged in a continuing period of evaluation.

David spent the next few minutes reading another substantial block of text, initially
without the use of his cursor, and then seemed to reread it with the aid of the cursor. He
reached a point in the text where he appeared to stall and reread a section again: Oh,
yeah. I struggled with that law of conservation, and then I realised that the weight/mass,
sorry, the um mass volume (undecipherable) (23:33). David elaborated further: I thought
a sort of diagram there would have helped (23:52). I remarked that he would normally
pay little attention to a diagram to which he responded: I think with concepts like that it
would be helpful though. Because why read text for a simple concept, when a diagram
would clearly say it. David continued to read the text until he encountered a diagram of
the principles of kinetic energy which he seemed to study.

Figure 10: Kinetic energy diagram

I observed that normally he had not been giving graphics much attention, to which he
responded: Yeah, I actually pictured myself throwing a ball up in the air. I think I
actually did that – I threw my ball up in the air at the time (24:04). Interestingly, David
drew upon prior knowledge as he continued to evaluate his learning.

David then reread some of the text that preceded the diagram: I also wondered how the
potential energy was lost. And the potential energy reaches its peak. I wondered how it
disappears. It must convert to kinetic energy (25:01). I asked him if that was what the
diagram was saying to him, to which he responded: Well it says potential energy drops
(25:10). He moved his cursor back through the diagram and the accompanying text,
seemingly checking it once more. The processes of evaluation and elaboration that David
was using continued. David then proceeded to a set of graphics that provided examples
of energy storage.

Figure 11: Energy storage examples

He spent little time examining them: I thought those diagrams were silly. The different
forms of energy. So I just clicked straight on to next. I went, yep, there is a battery
energy, go, (26:03) and selected the next button which took him to the next topic.

David started to read the text and then attempted to alter the text size. He selected
medium from the menu; however, this did not change the text size. He remarked: At that
point when I went up there I was trying to get my right menu back, which is why I have
come over (26:24). He moved his cursor across the page to display the unit outline:
Which is why I have come over. Just to see where I am up to because I thought I was
going to be through it a lot quicker than I was (26:26). It would appear that at this point
David had found it necessary to re-orientate himself.

Figure 12: Unit outline

I asked him if at this stage he felt lost. He responded: That’s right. And I thought how
much more reading is there before I get some sense of achievement. Because the next
button is not giving me anything. He elaborated further: And I have been sitting here for
20 minutes and haven’t been told I am doing a good job or anything, and: Yeah, that’s
what I mean. Feel some sort of achievement – or summary items – you have done that
(26:45).

David selected the forums link at the bottom of the unit outline before commencing
reading the topic material. He then started skimming the introductory paragraphs until
the forums page loaded and he then selected the forums tab: I thought maybe there is
something in these forums that provide another dimension to this learning. I went in
there and there was nothing. So I went out again (27:48). David returned to the topic
and continued reading. His cursor movement indicated that he did this in the systematic
fashion he had adopted in the latter part of this learning. He reached text where his
cursor seemed to hover for short periods of time, which suggested he was paying
particular attention to this text: I seem to move my mouse over something if I like it
(28:53). David completed his reading of the topic and ceased work.

Summary of learning module 1
In reflecting on this unit of work, David’s responses to my observations and questions
were usually part of a unified dialogue that was loaded with metacognitive linguistic
structures. The observations contributing to my analysis of his metacognitive activity
and top-level structure linguistic markers during the 30 minutes of the lesson have been
sliced into three purposive though arbitrary segments (see explanation page 80) to
represent David’s progression through the beginning, body and conclusion of the learning
event. These are represented in Tables 15 and 16.

Metacognitive activity
The totals of metacognitive activity identified for David showed that metacognitively he
drew heavily upon monitoring, execution and evaluation, and to a much lesser extent,
elaboration and orientation. In contrast, he engaged in very little planning. Typical
examples of each have been presented in the preceding narrative.

Table 15: Metacognitive activity learning module 1 - David


             Orientation  Planning  Execution  Monitoring  Evaluation  Elaboration
First 5 mins      4           0          5         11           8            1
Body              2           1         33         37          30            0
Last 5 mins       2           1          8          4           2            3
Total             8           2         46         52          40           13
%              4.97%       1.24%     28.57%     32.30%      24.84%        8.07%
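The percentage row in Tables 15 and 16 is simply each category's count divided by the total number of coded events. The calculation can be sketched as follows (the counts are taken from the Total row of Table 15; the variable names are my own, not part of the coding scheme):

```python
# Percentage distribution of coded metacognitive events (counts from Table 15).
counts = {
    "Orientation": 8,
    "Planning": 2,
    "Execution": 46,
    "Monitoring": 52,
    "Evaluation": 40,
    "Elaboration": 13,
}

total = sum(counts.values())  # total coded events across the session

# Each category's share of the total, rounded to two decimal places.
percentages = {name: round(100 * n / total, 2) for name, n in counts.items()}

for name, pct in percentages.items():
    print(f"{name}: {pct}%")
```

The same calculation applied to the Total row of Table 16 yields that table's percentage row.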

Top-level structure linguistic markers


A summary of the linguistic markers used to identify David’s metacognitive activity is
outlined below. These indicate that David relied heavily on cause/effect structures, to a
lesser extent on comparisons, and least upon problem/solution structures to underpin his
metacognition. He used more complex lists than simple lists in this instance.

Table 16: Top-level structuring activity learning module 1 - David
             Simple TLS event   More complex top-level structuring events
             List - Simple      List - Complex  Cause/Effect  Problem/Solution  Comparison
First 5 mins      10                  4              4                1              6
Body              13                 21             28                6             22
Last 5 mins        2                  5              8                2              2
Total             25                 30             40                9             30
%              18.66%             22.39%         29.84%            6.72%         22.39%

Self awareness of learning autonomy 1st rating


At the completion of learning module 1, David participated in a 10 minute session in
which he was asked to reflect upon and answer the following three questions:

1. How would you describe your engagement with hypermedia?
2. To what extent do you see yourself as autonomous in such activities?
3. How would you rate how effectively you engage with educational hypermedia, on a
6-point scale?

David’s responses to these questions follow.

Response to question 1
David opened by saying that: Basically I feel very comfortable when engaging with
hypermedia. He suggested that working on computers and having to scan for information
and engage with whatever is on the screen was part of his everyday life: I do it all the
time for both social and professional reasons. He suggested that his approach included:
Firstly scanning of pages; I typically only read the first few sentences of a paragraph to
get the gist of what that paragraph is about. This enables him to get through the material
quickly and focus on his reading where required. When he encountered anything: Tricky
to understand and the original resource is not helping: I will change out of that document
and go and do my own internet research in real time, straight away, enabling him to gain
immediate understanding instead of having to: jot things down and going back to them
later. Thus, understanding what is currently being learned before proceeding is
important to him.

In managing his learning, he saw it important to commence by: having a quick scan of
the entire document to see how much reading I have got to do; identify any activities I
have to do; also if there is any assessment. He said that he likes to determine these things
straight away. For example, in the first learning exercise he determined that there was no
assessment so that left him to: Make sure I knew where I was headed in that learning
experience. So David considers that orientating himself is important. Not only with
respect to the hypermedia and its structure, but also with respect to the way he intends to
go about his learning. For example, he says he commences the learning by: Scanning
through the introductory paragraphs and perhaps the final sentence of each paragraph,
as well as the introductory sentence, just to make sure I have understood the main topics.
He likes to scan read, which enables him to: Look for any words that were unfamiliar to
me; these words would be a highlight for me to go and find out more about that. He
reflects upon what he has just said, and then suggests that the very first thing he does is:
Understand how the page works, how I can interact with the page, in terms of where I
have to click to move on, where I would go for more help, where I could change the font
size or the colours, if those were options.

In talking about working with the learning materials, David suggested that he likes to
engage with all aspects of the materials being offered. For example, he suggested that: I
often interact pretty in-depth with a diagram; I see that information as being presented to
me very quickly, and I appreciate that. He suggested that he finds pictures less important
and recalls: I remember in this learning experience there was a picture of the people they
were talking about, and I thought that that was largely insignificant, because it didn’t
matter what they looked like. In comparison, he found that graphs were: quick to read
and easy to understand and value added to what I was learning. He suggested that in
general he is drawn by how professional he considered the resource to be.

Overall, David’s response suggests that he considers himself to be very comfortable and
confident in learning using hypermedia. His comments suggest that he spends time in
orientating and planning his learning and that he spends time monitoring his progress as
he enacts those plans.

Response to question 2
David saw himself as being: largely autonomous. For example, he suggested that in the
first learning experience he felt that he was a: Passive recipient of information, however,
as a learner he found that: I had an obligation to tend to interact with that information.
For example, going back over his work, linking text to graphs, and trying to associate
meaning, and looking for terms in the text not well understood are some of the
interactive strategies he said he employs. So he described himself as being able to: Go
from it being plain text, which is passive, to actually creating meaning for myself. David
believes that he is able to demonstrate his autonomy through his capacity to create his
own interactivity (or learning strategies), rather than this being provided to him by the
materials. That is, he regards autonomy to be his capacity to engage effectively in the
learning. He went on to suggest that he tends to adopt one of two approaches. First, he
tends to work things through in a linear fashion and if he finds: nothing new or exciting,
he will skip forward. Second, if it is content that he thinks is familiar, he will: go
directly to the assessment and link my way back to what I would need to do to satisfy the
assessment. This suggests that David is happy to use the linearity and structure afforded
by the materials where he is less sure about the content. In contrast, with more familiar
material he seems willing to exert higher levels of autonomy.

David believes that he is an autonomous learner when engaged in hypermedia learning.
He considers he has the tools to interact effectively with the different kinds of media, and
that he can draw from several learning strategies depending upon his prior knowledge of
the content.

Response to question 3
David rated himself as a 5 (Effective) on the 6-point Likert Scale presented. He
suggested that this rating was because: I knew what I needed to learn, and I learnt that,
and I was able to quickly sift out that information that I didn’t really need to cover. He
said that he did not select 6, Very effective, because: I did get bored, and I suppose that
is part of my generation to get a bit bored. He went on to suggest that this might also be
accounted for by a lack of variety in the media (materials): A very text heavy situation.
He thought that he would generally engage with any media that supplemented the text.

David’s rating appears to be supported by his response to the first two questions and in
particular his well-developed capacity to plan and monitor learning in this setting.

Metacognitive training intervention


The purpose of this intervention was to provide David with training and to raise his self-
awareness of his metacognitive activity in hypermedia learning prior to undertaking
learning module 2. A thirty (30) minute session was conducted in which David was:

(i) Provided with a paper copy of the metacognitive taxonomy (Table 6, page 57) and
top-level structuring rhetorical structures (Table 7, page 61) and the initial 10
minutes was spent explaining and discussing the categories they contained.
(ii) Provided with a paper copy of the analysis of his first learning event, and the next
15 minutes were spent examining and discussing these data.
(iii) Engaged in a 5 minute reflection in which he was asked to reflect on the
utilisation of his metacognitive actions and discuss future utilisation.

David was asked to reflect on these discussions prior to undertaking a second module.

Learning module 2
The second module in which David engaged was from e-training resources. The unit was
Utilise specialist communication skills to build strong relationships.

David commenced by reading the introduction to the module’s first topic. His opening
statement was constructed around a list of intentions that: At the very start I was trying to
read every sentence – I was going to get into this and read it properly – and you know
pretty straight forward. The comment suggests that at this very early stage he was
metacognitively orientating his position and monitoring his progress.

On completing the module’s introduction, David moved on to the next topic,
Development of the community service and disability sectors. He quickly read through
the opening paragraphs before pausing at a series of quotes from the Dalai Lama. His
comment indicated an active monitoring of progress, with a strong comparison between
what was interesting and what was necessary: And I saw the quote and to be honest
thought, they are not going to mark me on my knowledge of the quote. He also indicated
an evaluative stream of consciousness in sorting what was interesting from that which he
thought immediately useful to his next move: The Dalai Lama says some very clever
things obviously, but I just thought I don’t need to know that right now (00:11).

David stopped reading and moved to the next topic, Self awareness, where he quickly
scanned through the material: I just wanted to see how much content there was (01:06).
This suggested that he was again monitoring and evaluating his progress. He returned to
the beginning of the topic and commenced reading it. After a short stanza of reading, he
highlighted some text and in his commentary indicated a continuation of his monitoring
and a need to orientate his position within the learning: it sounded like a definition that
would be important to remember and I thought I would highlight it more but it was sort
of that I would remember that’s where it was (01:16). The text had introduced the
concept of emotional intelligence (EI) as content and provided a weblink to an EI
questionnaire which he opened and started to complete. He appeared to realise the
number of questions he had to answer and evaluated: I didn’t read the instructions
(02:11).

Answering questions entailed using the mouse to click and place a tick in the response
selected for each question. Initially, David appeared to find the interface difficult,
evaluating: It was a bit tricky because it kept jumping up (02:28). He then appeared to
realise that the answers to the questions were displayed on top of the question and
evaluated: When you did get the hang of it, it actually became quite good before
elaborating further: I didn’t have to scroll down, but the first time I did it I thought woo,
what have I done wrong? (02:40). His commentary with its cause/effect and embedded
problem/solution response suggested that his pattern of monitoring, orientating and
evaluating continued as he dealt with new requirements of the learning interface.
Following this revelation, David started to select answers very quickly and evaluated:
Some of the wordings of the questions are typical for emotional intelligence
questionnaires (03:03). However, at times, he showed some consternation as he
monitored the reason: When they asked a funny question – like when it had a negative –
like almost a double negative (04:10). In these instances within the questionnaire,
strongly disagreeing was actually agreeing with the topic of the sentence. The learner
partially attributed his speed to: yeah I was sort of pre-empting where my mouse would
go next (04:31). These actions and responses appeared to maintain an active pattern of
monitoring and orientating.

David spent 10 minutes answering a series of select-the-answer questions, at which time
he became frustrated with the questionnaire and commented: I gave up. I really went –
this is enough for me. At this point David had decided to halt his progress (monitor) and
concluded (evaluated) that he was going to abort the unsatisfying questionnaire and move
on. He did this and became aware that the introductory screen to the questionnaire had
clearly indicated there were 136 questions, and that it would take about 20 minutes to
complete, and commented: See, I didn’t read the instructions (5:00). This series of
cause/effect structures suggested that further monitoring and evaluation were being made
about his attitude to the questionnaire. He returned to the module.

Up to this point, David had been navigating the learning resources in a linear way as
prescribed by the structure of the learning resources. He continued to do so throughout
the session. He clicked on the link to Basic Communication Process, and then he clicked
on the Self awareness link before it had a chance to open. He seemed to monitor the
interface as these comparison and cause/effect structures suggest: Well it didn’t say that
it was going to another site. It went on to a Myers Briggs skills profile, which is actually
a website.com and then they send you advertising stuff. Next he appeared to anticipate
(evaluate) the likely outcome of this action: It is going to come up telling me I’m a type
four person (5:44).

The next topic was the Johari Window, a topic with which David was not familiar:
That’s why I have been having a bit of a read about it (05:58). Here, he took care to read
the short explanatory sentences. The next paragraph provided him with a web-link to a
Wikipedia entry for Johari Window which he ignored and remarked: I remember
thinking when I saw the Wikipedia there I thought to myself – never in any type of
learning would I go to Wikipedia, but I know it is meant to have less errors in it than
other sources, but I can’t see why the authors wouldn’t have, you know, put the content
on themselves (06:08). Within this list of statements he made a comparison between this
and previous learning, which suggested that he continued to monitor his learning
activities and evaluate and elaborate on his progress. The next activity in the module was
a web-link to an Interactive Johari Window, which David selected. While it was opening
he moved on to the next topic of the module, Basic Communication Processes. When the
Johari Window opened, David was asked to select five or six words that he felt best
described himself.

Figure 13: Johari Window

Using the cursor to explore his options he explained: I was actually looking for
something like innovative or something like that, but it wasn’t on there (06:30). David
selected six words and entered his name in the text box at the bottom and clicked on
Save. Nothing seemed to happen so he closed the Johari Window and selected the
module window. The following complex list of cause/effect and comparison structures
suggest that he appeared to have evaluated the reason for the window not saving: That
didn’t work because there had already been someone who had put their name in as David
– you need a unique identifier, and elaborated further: but I sort of saw it as something to
do – as a fun thing – that’s why I just switched straight back out of it (07:08). This
returned him to the topic he had selected earlier.

Upon returning to the topic Basic Communication Processes, David quickly scanned
down through the page - seemingly ignoring two graphics within the text: Didn’t look at
them (07:36). He appeared to think further about how he learns and commented: Ahhh –
I would say that I mostly learn by playing. Or arguing. Or being told I have to defend a
point or something. So in that sense probably textual (07:43). While these comparison
and cause/effect structures were only inferentially connected to this learning session,
they served to support and validate some of the inferences being drawn from it.

The last activity in this topic was a topic reading, which David opened momentarily
before returning to the Johari Window screen to discover that his attempt to save his
entries had failed. He attempted to resave his input, using a string of numbers in place of
his name. He returned again to the topic’s secondary window where the topic reading
had previously opened. David again returned to the Johari Window to discover his
second save attempt had resulted in an error message being displayed: At this point I just
thought this is too hard – if it doesn’t work the first time it is just some silly thing (08:12).
So he closed this window and returned to the current topic where he studied parts of the
text further. David clicked on the link to a Reading and continued to study the text and
diagram on screen as it loaded: I was reading the bit about – I was re-reading the stuff
about um the encode and decode. So reliving old university notes (08:49). The series of
complex lists containing cause/effect and comparison structures within his utterances
suggested that David continued to build upon a cycle of monitoring and evaluating his
progress while trying to orientate himself as he continued.

The reading had by now opened, and using the cursor, he scanned through the first part of
the document, spending very little time doing so: and I thought if there are additional
readings to the course, why is it repeating the same thing? He reached a section which
contained the explanation of a number of acronyms: And then I got into this which
seemed a little bit trickier. The AAC’s and so on, and I had heard about everything
except that Makaton Vocabulary so I highlighted that and tried to think about whether I
had ever heard about it before, and then read up on it, and I still really didn’t get it and I
actually thought about going and looking up then and there but I thought – nup – I will
wait because maybe it is covered a bit more in the learning material (09:00). This
complex list, constructed of a series of embedded comparison and cause/effect linguistic
structures, is indicative of the depth and complexity of the metacognitive processing
demonstrated by David so far. He seemed to be regularly monitoring and evaluating his
progress and periodically re-orientating his position within the module more generally.
This appeared to be both a regular and deliberate metacognitive strategy he adopted.

Using the cursor David continued skimming through the remaining sections, stopping
occasionally at some sections of text. When asked about stopping at one spot he said:
Yeah I thought this is an acronym. Where some of the others – I know what an alphabet
is (10:09). In responding to his skimming of this section generally: I read the topic – the
heading – and then went move on (10:28). David then returned to parts of the text he had
skimmed previously and highlighted the word, Makaton: Yeah. Because I was getting
ready to leave the document at that point I think, and I sort of thought I got to remember
this (10:57). He placed the word on the clipboard: Took it with me. That was to sort of
search for it – you know I was going to go back and keep it on the clipboard in case I
wanted to Google it or whatever (11:07).

The integrated problem/solution, cause/effect and comparison structures of these
comments suggest that while David was again monitoring his progress, at this juncture
the task had now become much more complex and demanding. That is, he now needed
additional strategies to cope with the increased cognitive demand placed on him by the
need to retain what he was learning. Up to this point his learning appeared to have been
effectively managed by following the linear discourse of the module’s materials.
However, he now found it necessary to employ multiple and parallel capture strategies to
aid with his knowledge retention. Moreover, this needed to be accomplished over and
above any assistance or scaffolding being afforded by the module materials. Therefore,
David was required to engage in a complex and often integrated set of metacognitive
activities to deal with this new level of activity. His commentary suggests that as he
engaged in a top layer set of monitoring activities, he needed to also engage with an
embedded sub-layer of other activities which included orientation, evaluation and
elaboration.

David then proceeded to the next topic Barriers to Effective Communication. The text
consisted of a series of lists of barriers and associated case examples.

Figure 14: Barriers to effective communication text

Again, David seemed to skim through these very quickly: I didn’t read the case
examples – I thought these are terms – the case examples are unnecessary I thought and
the fact that it was in a blue box also separated it from the content for me and to be
honest what I thought was, this is what I would read if I was a really super keen student
who didn’t understand. The white – the black text on the white background is obviously
the core stuff and the other things are there as a support (11:23). These comments list an
array of comparison and cause/effect structures that again suggested that a meta level of
monitoring was being underpinned by a set of evaluative metacognitive actions.

He then moved to the next topic Specific communication needs. He appeared to read the
first few paragraphs almost verbatim: I sort of read that top paragraph about the
different reasons (12:40). The text contained a graphic: I didn’t look at them (12:43). I
saw they were two heads pointing at each other but I didn’t read what the text said
because I could tell it was just text on a picture and it was just repeating whatever was in
there (12:49). David continued to read the text quite deliberately and highlighted the
word cueing: I thought that was interesting that paragraph about – what was it – I was
wondering what cueing was (13:06). The topic finished with a reading which he selected
and which opened in a separate window: Well it was the same reading so I sort of went
in and out of it (13:24). He used the cursor to scan through the reading quickly before
using the Microsoft Word function (Find and Replace) to search for the word cueing: I
looked for that cueing word because I thought maybe I have missed this. It wasn’t there
so I went, oh, they are not going to help me (13:39). David then undertook a Google
search by typing definition: cueing. Whilst waiting for the Google search result
he returned to the module and quickly scanned through the topic several times: That was
me just checking that you know I have read everything on this page (14:06). He then
returned to the Google page to view the result of his search which was unsatisfactory so
he adjusted his search term to include a space between definition and cueing. While
awaiting the outcome of that search, the learner moved on to the next topic, Working
with Interpreters.

During this section of learning, David continued to monitor his progress regularly. Here
his commentary was rich in comparison and cause/effect structures that again are
embedded and interrelated. This suggests that as his learning progressed, his initial much
simpler linear monitoring actions appear to have given way to more complex monitoring
tasks. Also, his learning interface had become more complex with the addition of the
Google search function.

David started to read through the opening paragraphs in a fashion noticeably more
deliberate than previously. The reading was slower, the cursor
movement was slower, and he leaned forward. The impression was that he was very
engaged with the content: That seemed to be like really common sense stuff – but I
flicked through it to make sure there were no – you know – anything unusual (14:37).
These paragraphs were followed by a reference to a video.

Figure 15: Video text and graphic

David made several attempts to click on the graphic: Thought this was a bit weird with
the video – I clicked on it like crazy trying to get it to work but then it is referring to a
video rather than being a video (14:48). And you would sort of anticipate it being in
there I thought (15:06). David appeared to encounter an extra layer of complexity in his
learning at this point. He now had to deal with the idiosyncrasies of the learning
interface that required him to parallel process metacognitively. The comparison and
problem/solution structures of his commentary suggest that he was evaluating and
elaborating on the effectiveness of this aspect of the learning interface. At this stage he went
back to the Google window to check the results of the cueing definition from his
amended search, he read them aloud and then commented: I assumed that it was but I
just wanted to make sure that it wasn’t some – you know. So that sort of confirms what I
thought (15:22), closed the Google window, and returned to the learning module.

David continued to read on and highlighted the words “culturally and linguistically
diverse”, and appeared to evaluate and plan how he might use them: That was another
acronym that I thought I would have to remember (15:45). Whilst reading he seemed to
skip over text contained in a series of pale green boxes and evaluated his action: I sort of
– the minute I saw them being a different colour. I thought these are additional like
helpful hint like things. And I took that as a cue to not worry about it (16:11). David then
opened Microsoft Notepad (a basic text processing program in Windows) and typed a
definition – culturally and linguistically diverse: I deliberately typed
that in there instead of copying and pasting it to see if I could remember it (16:29) and
commented further: So it wasn’t really that I was learning any new understanding – it
was just facts (16:50). The learner then returned to the learning module. This complex
list of cause/effect and comparison structures suggested that David continued to monitor and
evaluate both his actions and his progress. He had now added to the complexity of his
learning interface by having the Notepad text editor open alongside his topic window.

David then skimmed back over previous topics and located the Section One reading he
had opened several times beforehand. He opened it in a separate window and returned to
and re-read the definition of a Makaton vocabulary, returned to Notepad and typed
Makaton = symbol pictures cues. He then returned to the reading and
searched for and located the term AAC which he re-read. He moved to Notepad and
typed AAC augmentive or Alternative Communication Systems.
David explained: I sort of hang on to a couple of units at a time or often what I do is
rather than print it I would email them to myself so that next time I look into my email it
would reinforce that knowledge again and um yeah (17:48). David then typed Sender
– Decoder and returned to the learning module where he selected another topic
Building Rapport. This complex list of comparison and cause/effect structures suggested
that while David continued to monitor his progress, this monitoring was being moderated
by his drawing from a series of sub-level evaluations.

While undertaking this topic David skimmed quickly through the material moving
forward and backward a number of times and appeared to evaluate this action: I thought
about skipping it but I thought about going to the self check and seeing about how much I
could get because I was just knowing so much of it and I thought oh no, there might be
something here that’s interesting (18:31). On continuing to read he highlighted some
text,

Figure 16: Highlighted text

and explained: I highlighted the bit about using a person’s preferred name and speaking
with them because I make a point of trying to do that myself I suppose (18:53). This
cause/effect structure suggests that parallel metacognitive processes were occurring here.
First, a re-orientating within the learning text was occurring. Second, he was articulating
details of the cognitive hook he was using to secure the learning at this point, as
suggested by the cause/effect and comparison structures in the following explanation: Oh
I suppose what it does a lot to me is it marks it for me, like it reminds me of where I was
and what page, or sometimes what I will do is I will highlight it and copy it and maybe
paste it later to you know …. If I am still left wondering what it is – although other times
it sinks in (19:09). The final section of this topic was an activity link which he selected.
This opened in a separate window and he again skimmed and scrolled through the page:
I read these activities but couldn’t be bothered doing it basically because I thought it was
pretty clear what you should do (19:32). David closed this window and returned to the
topic page where he briefly re-checked the model answer of the case study. This
suggested that he had evaluated his options in order to plan his next move.

He continued to read on and reached an interactive case study where he was given the
opportunity to type a response before checking his answer. He did not type an answer;
however, he did check the model answer provided. David explained: You could type in
an answer, um, it was my take for them just to give me an answer though, I thought.
Because actually, I tried it with the first one and it wasn’t actually checking your answer,
just telling you what the answer is, it wasn’t doing any comparison (20:13). He
elaborated further: Well it doesn’t actually check it, it just gives you the answer, which is
fair enough, it would have to be awfully intuitive (20:34). Thus, David continued to
monitor his progress and evaluate the capacity of the user interface. He selected the next
topic, Managing mistrust and conflict.

In what had now been established as a familiar process, David skimmed through the topic
material, quickly scanning from top to bottom. A diagram seemed to take his attention
and he appeared to be trying to reconcile it with the adjacent text. The complex list of
comparison and cause/effect linguistic structures in his remarks that follow suggests that
he continued to monitor, evaluate and elaborate his understanding of the material being
presented: I did look at that diagram quite a lot actually, um, and I thought I would just
read the topic headings,

Figure 17: Levels of conflict diagram

and then I went back up here and saw these different things (see Figure 18),

Figure 18: Identify conflict table

and I thought how do they fit in to the diagram because they didn’t match up, so that’s
when I started reading what the actual bubbles said (20:49). When asked if the
relationship between them was clear, David elaborated: No, no, not really, so it actually
required some level of cognition I suppose in linking up where would this behaviour fit
in, would it be, you know, something that is likely to happen during tension or crisis. He
continued to move his cursor between the text and the diagram and monitored this
learning behaviour: I always seem to go down to the bottom to see how much there is on
the page first (21:55), and further elaborated: Yeah, (deciding) how much time am I
prepared to spend on this (22:04). With some linkage of meaning seemingly unresolved,
he decided to move on. It seemed that he had made a judgment (evaluation) that, while
he did not fully comprehend the message, he was not prepared to afford it any further
time and would move on (planning) nonetheless.

David then selected the next topic, Self awareness in conflict situations and started to
scan down through the text. He continued to read down the page and encountered a
series of links.

Figure 19: Links for the topic: Self awareness in conflict situations

He moved to the first of these links, a weblink, Non-violent communication, which he
clicked to open in a new window, and moved immediately back up the topic page and
highlighted some text, which he copied and added (pasted) into the Notepad page he had
previously used. Despite the fact that he had been skimming the text previously, he
seemed to have decided that this text was important: Yeah, but I also decided that I
couldn’t be bothered learning them right then and there because they were all common
sense stuff – it was just what these particular people were labelling them – these
particular behaviours. He elaborated further: also on that page there was a link to a
journal, like where you could participate in a journal, and my reaction was just not to do
that (22:25), referring to the second link available to him. He made a final evaluation: I
didn’t think a journal would be a valuable learning resource at that point (22:49). This
complex list of embedded comparison, cause/effect and problem solving structures
suggests that David was monitoring his progress, drawing upon a sub-level of actions in which
he evaluated his current position and planned what to do next before finally evaluating
his action.

On leaving Notepad, David returned to the topic, clicked on the link to the next topic,
Self check and then moved to the nonviolent communication weblink which had by now
downloaded into a separate window. The link presented him with a menu from where he
opened and read some of the sub-topic headings. He did not explore this, other than
superficially, and finally closed the window. This returned him to the new topic, Self
check, which he had selected earlier.

The Self check presented David with a series of questions and boxes into which he could
type his answer. He read through each of the first four questions and without making any
attempt at typing an answer, selected the check your answer button.

Figure 20: Self check questions showing an answer

His cursor movement suggested that he was comparing the question and answers. I asked
him why he did that: Yeah, it was to reinforce the readings and obviously if they are
going to ask a question about it, it is one of the core things. That’s probably coming from
my background (as a teacher) as well (23:25). On reaching question five he elected to
type an answer before choosing the check your answer button. He remarked: This one I
thought, oh I will have a go at this because it was the thing that I copied but as I said I
didn’t read it, so I thought let’s see if I can guess it – and I got it wrong. I knew it was
either b or d but I got it wrong (24:11). This complex list of cause/effect and comparison
linguistic structures suggested that David had planned a change to the answering of this
question and monitored and evaluated his attempt. David went to Notepad and checked
his answer, and evaluated: And then I automatically thought it was not managing and
checked it on the sheet (24:21). He returned to question five and evaluated his answer:
That was just that I noticed it had barriers within two words of each other (24:30), before
elaborating: And it is related to the message which I thought was interesting because I
don’t know poor listening skills are related to the message, it is related to the receiver.
Which is why I was sort of flipping through it (24:53). David read through questions six
and seven and without offering any answers chose check your answer for both, as he had
done previously. He then selected the final topic, Summary. This complex list of
cause/effect and comparison structures suggests that while David continued to monitor his
progress through the learning materials, he was also monitoring, evaluating and
elaborating on the extent of his learning.

The summary page consisted of a short summative paragraph and three dot points that
David briefly scanned before clicking on three topic links in quick succession. He
remarked: I read the dot points, but I didn’t read the other stuff. Often if there are dot
points, I find myself just reading those dot points. That’s again, because I am relatively
comfortable with the content, like it is not really higher order type stuff (25:16). The
succession of topic pages opened and shut before arriving at his final selection, Basic
communication processes. This complex list of comparison and cause/effect structures
suggests that a complex set of interrelated evaluative decisions was being made about the
value of this summary information and how to proceed.

David scanned down the page, stopping and reading various parts along the way. He
selected the reading link and scanned through the reading: Oh, I was looking up that
thing with the question – the last question we were talking about and wondering how it
was that you know that fatigue one, that third dot point, was related to the message
because I dead set thought it was the receiver and then I couldn’t find it in the document
and went whatever (25:33). At this point the learning session ended. This final complex
list of cause/effect and comparison structures in his statement suggests that David
continued to monitor an outstanding learning issue before deciding (evaluating) that he
would leave it unresolved.

Summary of learning module 2


In reflecting on this unit of work, David’s responses to my observations and questions
were usually part of a unified dialogue that was loaded with metacognitive linguistic
structures. The observations contributing to my analysis of David’s metacognitive
activity and top-level structure linguistic markers during the 30 minutes of the lesson
have been sliced into three arbitrary segments (see explanation page 80) which are
collectively represented in Tables 17 and 18.

Metacognitive activity
The totals of metacognitive activity identified for David showed that metacognitively he
drew heavily upon execution, moderately on evaluation and monitoring, and to a much
lesser extent, elaboration and orientation. In contrast, he engaged in very little planning.
Typical examples of each have been presented in the preceding narrative.

Table 17: Metacognitive activity learning module 2 - David


Orientation Planning Execution Monitoring Evaluation Elaboration
First 5 mins 3 1 12 8 7 1
Body 9 4 48 28 27 7
Last 5 mins 0 3 13 8 10 9
Total 12 8 73 44 44 17
% 6.06% 4.05% 36.87% 22.22% 22.22% 8.58%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify David’s metacognitive activity is
outlined below. These indicate that David relied heavily on cause/effect, comparisons
and to a much lesser extent problem/solution to underpin his metacognition. He used
more complex lists than simple lists in this instance.

Table 18: Top-level structuring activity learning module 2 - David


Simple TLS event More complex Top-level structuring events
List - Simple List - Complex Cause/Effect Problem/Solution Comparison
First 5 mins 3 4 7 1 6
Body 7 26 37 5 28
Last 5 mins 1 8 15 1 15
Total 11 38 59 7 49
% 6.71% 23.17% 35.97% 4.27% 29.88%

Self awareness of learning autonomy 2nd rating


At the completion of learning module 2, David was presented with the 6 point Likert
Scale and asked again to rate how effectively he considered he had engaged with
educational hypermedia on this occasion. David rated himself as a 6 (very effective). He
was then asked to comment on the reasons for the rating.

David suggested that this rating was because: My experience with hypermedia learning
means I am able to quickly establish what I need to know. This in turn enables him to:
find what I need to learn and I focus on that and ignore the padding that is sometimes
there. He said that he selected 6, very effective, because: Unlike the first module I did, I
did find the learning interesting and did not get bored. He added that he found: the
variety in the media easy to work with. He reiterated the point he had made when asked
the same question about engaging with the first learning module, that he: generally likes
to engage with any media that supplements the text.

Effect from metacognitive training


David rated himself higher in the second learning module:
Learning module 1 rating – 5/6
Learning module 2 rating – 6/6

Table 19: Collective data of metacognitive activity - David
Orientation Planning Execution Monitoring Evaluation Elaboration
S1 S2 S1 S2 S1 S2 S1 S2 S1 S2 S1 S2
First 5 min 4 3 0 1 5 12 11 8 8 7 1 1
Body 2 9 1 4 33 48 37 28 30 27 9 7
Last 5 min 2 0 1 3 8 13 4 8 2 10 3 9
Total 8 12 2 8 46 73 52 44 40 44 13 17
Comparison of total activity expressed as a percentage (%)
Session 1 4.97% 1.25% 28.57% 32.30% 24.84% 8.07%
Session 2 6.06% 4.05% 36.87% 22.22% 22.22% 8.58%

A comparison of the metacognitive data by percentage indicates that David engaged in
more orientation, planning, and execution activity following the training. In contrast, he
engaged in less monitoring and evaluation activity. However, his use of elaboration was
similar on both occasions.

Table 20: Collective data of Top-level structuring activity - David


Simple TLS event More complex Top-level structuring events
List – Simple List – Complex Cause/Effect Problem/Solution Comparison
S1 S2 S1 S2 S1 S2 S1 S2 S1 S2
First 5 min 10 3 4 4 4 7 1 1 6 6
Body 13 7 21 26 28 37 6 5 22 28
Last 5 min 2 1 5 8 8 15 2 1 2 15
Total 25 11 30 38 40 59 9 7 30 49
Comparison of total activity expressed as a percentage (%)
Session 1 18.66% 22.39% 29.85% 6.72% 22.39%
Session 2 6.71% 23.17% 35.97% 4.27% 29.88%

A comparison of the linguistic markers data for top-level structures by percentage
indicates that David used more cause/effect and comparison structures following the
training. In contrast, he used far fewer simple list and problem/solution structures.
However, his use of complex lists was similar on both occasions.

Case Two - Lesley

Learning module 1

The first module in which Lesley engaged was within the Australian Flexible Learning
Visual Design Toolbox – Film, Television, Radio and Multimedia Training Package.
The element of competency she undertook in this learning session was CUFMEM07A
Apply the principles of visual design and communication to the development of a
multimedia product.

Lesley commenced by clicking on the How to get around link from the welcome page
and used the cursor to randomly skim over the text: so I get an understanding of how this
is going to be presented to me. I was not reading word for word, it was just speed
reading type of thing just to get the gist of it. This list of actions suggests that she was
planning and orientating herself to both the interface and the task. She then ran the
cursor more carefully over some of the text: You can usually tell when I am reading
word for word because see how I seem to use the cursor to go through it. She elaborated
further: At the beginning I am more pedantic about it (00:19). She appeared to be
making sure she had a clear general idea of what to do before she started: I think I have
got the general overview of it, so now what I need to do is start and I can come back to
the screen if I need to (01:05). Her initial planning and orientation continued as she
evaluated its effectiveness. She went back to the front page and read the information
about the order in which to do the work, using the cursor to point to the text as she read,
then chose the first hyperlink on the page. Lesley again read through the text on the
screen and appeared a bit confused about the mention of a training plan, elaborating: it is
asking you to do that training plan, but it had not mentioned anything in the time that I
was working through it. At this stage she was monitoring her attempts to orientate
herself to the structure of the learning interface, using a series of comparative and causal
structures.

Lesley then continued to the next page by clicking the reports button, which was one of
two buttons to choose (see Figure 21) even though there had been no explanation of how
they were to be used.

As she waited for the screen to advance she made an evaluative comment about the
learning interface: I kind of like the design of it – you would hope so in visual design.

Figure 21: Learning interface

A dialogue box allowing her to open a Word document appeared and she clicked on the
Open button and commented: It was asking me to form my own training plan for what I
want to do (01:31), as she monitored her progress. A document containing a list of goals
needed to achieve the training plan opened and she skimmed through the text on the page,
commenting: That was about establishing my goals for learning etc. Oh – fifty hours –
that was the first thing I looked at – how long is this going to take? She followed this
monitoring comment with an evaluation of what to do next: So I don’t need to do that for
now, so I am going to go back (02:25), and clicked on the back button. She had resolved
that the training plan did not need to be completed straight away, and returned to the front
page, commenting: Probably a good idea I am thinking to have that training plan there
because it is making me think when I am going to do this – morning, afternoon, where I
am going to do it. So it has given me some disciplines (02:44). Lesley appeared to be
evaluating her situation and planning her way forward.

Lesley was returned to the learning module and then used the cursor to move back and
forth over the set of number buttons at the top of the page (see Figure 21 above) and then
clicked on 1.3. She was confused about the numbering system because of the colour of
the boxes containing the sets of numbers. She thought that she was already in number 1.2
out of a set of 6: But I wasn’t I was in 1.1, so that is a bit of a – because it is white and
the others are black that means I am in that one, but then I realised I wasn’t and it was
the subheading that I needed to look at. But it would be funny if you had the three in
black I think, it would look strange (03:00). This complex list of comparison,
cause/effect and list structures shows that Lesley appeared to be confused by this aspect
of the learning interface and was attempting to re-orientate herself through a series of
evaluative and elaborative processes.

Despite having had to resolve some confusion within the learning interface, Lesley
appeared to find the interface to her liking, as suggested by her evaluative comment: a
very clean design. It is not cluttered with colours in here or anything and I thought that
was good (03:30), appearing to come to terms with (evaluating) the design of the page.
She continued to read through the page using the cursor, before clicking on the Set 2 link.
However, on reaching the link she commented: I found that unusual – the set 1, set 2.
Then once I got into it I thought, oh yeah, that makes sense, providing more evaluative
evidence of her comfort with the design of the interface. Lesley looked at a sub-heading
Direction on the page and commented: It was something that I have to do. Then
elaborating further about the interface, she added: Probably would have preferred
Activity. If that was me I would use that word rather than Direction (03:48). She then
clicked on the hyperlink visual design which failed to open in a sub window, so she
closed it down and clicked on the Set 2, 2.2 link where she skimmed the text, and
commented: I am skimming there. And the reason I start to skim was because I started
to see what all these things meant – I thought oh – theory type stuff (04:33), monitoring
and evaluating her understanding of the learning. She spent some time on this page
before clicking on the Perception hyperlink which contained a diagram of a brain. She
commented: Brain diagram – yuk – nothing turns you off more than diagrams with
heaps and heaps of detail in them. Too much to take in. For me personally, if I was
needing to learn that, I would prefer to see some sort of flash activity that just showed me
the main things (05:24). These monitoring comments, uttered through a complex list of
cause/effect and comparison structures, highlight the evaluative and elaborative response
to the value Lesley perceived the diagrams had at this point in her learning.

She closed the window and clicked the next link How Eyes See and commented: Oh,
boring. I don’t want to know how things reflect off my eye (06:11). Lesley continued to
elaborate on the usefulness of the material to her learning. She decided to move on to the
next number set, 2.3. Once again she commented on the navigational interface: I am
actually liking the way this is set out at the top. Showing very clearly that I am in set 2.3
and I have only got to go to 2.4, then I can go to the next set. Now that’s really good if I
need to take a break for any reason, I would probably wait until I got to the next set,
indicating her continuing monitoring and evaluation of aspects of the learning interface.
She elaborated further: And having said that then, what I would say now is that I would
love to see set 3 of 10 sets. I don’t know. Maybe that would even help me more. She
reflected on her mastery of the learning interface and elaborated: that sense of progress –
Where am I in this now? (06:39). A complex list of cause/effect and comparison
structures was used by Lesley to work through this elaboration of navigating the
interface.

Lesley then hovered back and forth between the Reports and Network buttons and
clicked on the Reports button: because I didn’t explore this in previous screens (07:24),
and orientated herself to this aspect of the interface. She was unsure what Reports was
about until she evaluated its role: I realised that it was linked to that Direction, that
activity (07:30). She glanced at the document and then clicked back arrow to return to
the content screen. There she chose the Network button and read through the text and
commented: OK, so they are holding a place there for you to put a discussion forum in
(07:42), offering (evaluating) a possible reason. She then clicked the back button and
using the cursor, read the Directions in the activity line by line. She then hovered the
cursor between the Reports button and Communication hyperlink, before choosing
Reports. She appeared to be unsure of what to do next as she monitored her progress: I
thought do I need to go and have a look at this report again? She then realised
(evaluated) that: it said that I needed to write that down so the teacher could use that to
mark the visual (08:43).

After closing the Reports link down, Lesley moved the cursor between the Network
button and the Communication hyperlink several times before choosing the
Communication hyperlink. She then realised (evaluated) that: this has been shut down
since the toolbox was released, so closed the window and returned to the content page,
elaborating that hyperlinks: need to be checked constantly (09:02). Lesley then opened
the Stained Glass hyperlink and continuing to monitor, commented: I always like to go
into everything that is there. Just in case I want to delve. She elaborated further: But if
don’t I just skip over, I am just frightened if I don’t I will miss something important
(09:42). She looked at the image on the screen, monitoring its usefulness, but closed the
window, and elaborated: I found that a very unattractive site. I am not interested in that
– that is about the history of stained glass (09:58).

Lesley went back to the content page and in monitoring her way forward, commented:
So now I am on 2.4 and going on to set 3. However, she noticed the Home button, which
afforded her a choice of ways forward. This appeared to cause her to re-consider her
learning pathway. She chose the Home pathway over proceeding to Set 3, thereby re-
orientating her learning direction, and commented: I am intrigued, I can go home
anytime I like, so for some reason now I have decided I would like to explore that
(10:08). She skimmed (monitored) the Home page with her cursor and realised that it
showed the sub-topic number sets, displaying some of them in a different colour.

Figure 22: Home page table

As she orientated herself to the table, she used simple lists, comparison and cause/effect
structures to monitor, elaborate and evaluate: How to find your way around – I have
already done that. The progress bar shows that the briefs I have completed and the date
I last visited the brief. So saying that I don’t really see a progress bar. These are the
briefs here – that is what they are calling the sub sections of the set. It is great though if
you could do ticks or something to show you have done it (10:22).

Lesley ran the cursor over the numbered navigational links at the top of the page and
remarked: That’s good to go backwards and forwards (11:17), as she re-orientated
herself to this aspect of the learning interface.

Figure 23: Numbered navigation links

She clicked on Exploring Copyright from the list on the home page and rapidly moved
the cursor around the page. She commented: Look how much spare white space there is
and I think that as a learner people are almost squinting and I know that we can change
the resolution on this but we have to be really careful when designing programs for that.
It is almost like you need to recommend to people what resolution they need to use to
view it best (11:45). She elaborated further: Basically when you are an old lady like me
your sight is failing (12:15). The comparison and cause/effect structures within this
complex list suggests that she was continuing to monitor the usefulness of the design of
the learning interface before elaborating on what might be best for others.

Lesley’s cursor paused momentarily over the Australian Copyright Council hyperlink,
before moving on: I won’t be going to the Australian Copyright Council. I used to
manage the libraries and copyright and intellectual property is a nightmare. I hate it –
especially in this digital age (12:24). Instead she returned to the top of the page, chose
the number set 3.2 link, briefly scanned the text on computer images and clicked on the
number SET 4 link which took her to number set 4.1. The complex list of cause/effect
structures suggested that Lesley had elaborated on previous experience to re-orientate
her learning pathway. As she waited for the new page to open, she made a
comment about the design of the program: See I think it is good – you start to realise
that when there is an activity or when there is a direction, there is a report there - some
sort of template for you to use to do it. It is designed well (12:45). Lesley continued her
pattern of monitoring the effectiveness of the design of the learning interface
concurrently with her learning. She carefully read through the introductory text before
passing the cursor slowly over the text under the Direction link. Next, she clicked on
number set 4.2 and again carefully read through the text. While reading this text, she
commented: I never knew that pixel was short for picture element. Makes sense when
you look at the word, doesn’t it? (13:33) as she appeared to monitor and elaborate on this
new piece of knowledge.

Lesley then moved on to number set 4.3 and appeared to read it quite intently. She
commented: We are getting in to the actual use of something in the design. It is not the
theory behind it – like how the brain interprets colours and everything. And I am going
to have to start using these colours when I design. It is very relevant (13:46). This
complex list of cause/effect and comparison structures suggested that she was monitoring
her progress as she evaluated its meaning and elaborated on how she might put it to use.
Lesley then clicked on the SET 5 button to move to the next screen. She read through the
text, hovered the mouse over the www.scantips.com link without opening it, moved the
cursor back to the information under Direction and scanned through it. She then decided
to open the link on scanning tips and clicked on the hyperlink www.scantips.com which
opened. Lesley spent some time reading through a large block of text. She commented:
It took me a long time to think well where is it that I want to get my scanning tips, and
then I saw Scanning 101. I don’t know why, but everything 101 you know is the
beginning. So I thought OK it has to be down in here somewhere. I did not go to the
contents. I came over here to the basics of scanning (15:27). This complex list of
structures suggests that having moved from her learning interface to an unfamiliar
webpage, Lesley had to orientate herself to and evaluate the usefulness of these new
surroundings by using causal and comparative processes. There appeared to be an
element of frustration and she commented: This was annoying this (15:54), as she
elaborated on the new interface.

Lesley used the cursor to continue to scan through the content list under the heading of
Scanning 101 – The Basics, and clicked on a link to Photo resolution, when she appeared
to find a way forward, commenting: I found something that grabbed my attention about
what I wanted to know. See I never knew that 300 dpi’s is the most you should scan a
photograph at. It seemed to have taken her some time to re-orientate herself to these new
surroundings. She continued carefully reading through the text appearing to have
reconnected (re-orientated) to a line of learning, and through a complex list of
cause/effect structures elaborated: it was very interesting – relevant. Again, I need to
know it because what I want to do is scan all these photos I have got from years past for
my Mum so we can give her a digital photo frame (16:06). Lesley completed scanning
through the page, clicked on the Back button and was confused when the screen reverted
to the toolbox home page, and commented evaluatively: Shouldn’t use the back button
obviously, so she re-opened the toolbox, elaborating further: I still don’t know what I did
wrong there (16:48).

Lesley appeared to be struggling to get back to (re-orientate) the place in the learning
materials where she had been before hyper-linking out to the scanning tips, as suggested
by the following complex list of cause/effect and comparison structures: When I shut that
window down and come back into this, I expected that I could come back in and it would
quickly come to the home page of this and find my way around. Not finding a link of that
first page that will allow me to go anywhere (18:10). After an examination of the page
she decided to click on Apply the principles of visual design and communication to the
development of a multimedia product hyperlink, and evaluated: so I think that OK I have
to go into here. She appeared confused, and commented: I never saw any of this before,
which is setting out the competency requirements. But I don’t want that – I want to get
back into what I was learning (18:33). This list of comparison and cause/effect structures
suggests that Lesley continued to monitor the learning interface in an effort to find
(orientate) her way forward. She clicked on the back button, and then chose Brief 1.1
Framing a training plan, elaborating: so now I am back in here. I must have been in
Framing a training plan (18:55), elaborating further: Yes, I am. Lesley appeared to
believe that she was now closer (re-orientated) to where she wanted to recommence her
learning. Lesley clicked on the Home button.

Lesley was returned to the familiar table on the home page and started to scan it. She
appeared to recall those aspects of the learning she had completed. Working her way
down the table, she commented: Then I can come back to where I was, which is
exploring Copyright, and elaborated further: But then I was a long way ahead of that.
She then clicked on Exploring copyright link and advanced through the screens until she
came to her previous position. She remarked that: having that progress bar filled in
would have been great. I would have gone straight to the right spot. Having identified
the topic she was studying when she hyper-linked away, she started a more fine-grained
search (evaluation) of the sub-topics: 4.3 – Yes I have done that so I was up to set 5. And
I have done the scanning, so this is where I was when I accidentally shut the window down
(19:03). This series of comparison and cause/effect structures indicated that Lesley was
now monitoring the final stages of her return to the learning point she was seeking using
a series of evaluation and elaboration processes.

Lesley clicked on set 5.2 and spent some time reading through the screen before clicking
on Set 6, where she continued to read through the text. She reached a hyperlink, Design
elements, where she paused briefly before clicking on it with a cautionary remark: This
is where I shut the wrong thing down before, as she continued to monitor her pathway
back. While waiting for the new window to open she commented: As I was going to
that, I was thinking now be careful and notice whether this opens in a new window or not
because I accidentally put myself out of the program before. Elaborating further: I
actually like to see something about this will open in a new window (20:45). This list of
comparison and cause/effect structures demonstrated her continuing use of evaluative and
elaborative actions to drive her monitoring processes. A new page opened; however, the
hyperlink was broken and the browser reported that the page could not be found. Lesley
closed the window and moved on to the next number set, 6.2, and read carefully through
the text before clicking on the Lines hyperlink which failed to open, just as the link on the
previous page had. Lesley commented that: It really annoys me when you can’t find
links that they put in programs. And you don’t go to find anything else to take its place.
You just think – right – it can’t be that important (22:00), evaluating the relative merits of
these links to her progress through a complex list of comparison structures. She closed
the window and moved on to the next number set, 6.3.

Lesley read through the text which was supported by a diagram giving an example of a
grid drawing, and evaluated its merit: I actually found that interesting that you could do
a grid over a picture and then do your own grid to help you draw it, before elaborating
further: Because I always see myself as not being talented in that area. Well that’s a
good little tip – maybe I could do things if I was taught.

Figure 24: Grid

Lesley appeared to read more intently the text relating to the diagram, evaluating this
intensity as: you definitely have to be interested in what you are learning. Elaborating
further: If you do things because you have to do them to get the qual – oh (22:21). She
then moved on to SET 7, and took some time to read the text. She reached a hyperlink,
Colors which she then clicked on, and was taken to the Color Matters website which
opened in a separate window which she maximised. Lesley then used the cursor to look
down the navigation panel on the left hand side of the screen before choosing the second
topic, Effects on the body, and elaborated that she was: Really interested in this (23:59).
She then chose the topic, Color and appetite matters.

Figure 25: Color Matters link

She appeared to read this with great interest, evaluating: This is such a great example of
the use of colour to turn you on and off. I found this really, really informative (24:10).
Lesley quickly read through the text and elaborated on the impact of the visuals: That
speaks a million words doesn’t it? This is a great visual. That would turn you off being
hungry (24:16). Elaborating further that: Blue is an appetite suppressant, and that:
Weight loss plans suggest putting food on a blue plate – good lord (24:42). She then
clicked the back arrow and chose the next topic, and reflected evaluatively: So see, the
things that interest you are the things that cause you to stay delving around in.

She reached the topic, Drunk Tank Pink which commenced with a graphic followed by a
column of text.

Figure 26: Drunk Tank Pink graphic

She read the top section of the screen intently before scanning through the rest of the
page. The graphic and the opening text appeared to have an impact as she commented
evaluatively: This was good. This was about pink in a jail. She elaborated further: I
can’t imagine the guys with big tats in a pink jail (24:46). The screen was text dense and
Lesley appeared to monitor its impact: Yeah, a lot of information. I was looking for the
main facts about those colours. She clicked on the back arrow after she completed
scanning the text, and made an evaluative comment about one of the last paragraphs she
had read: This was interesting – they paint the opposing football team’s locker room in
pink. Then, she elaborated further, saying that: It saps them a little bit of their energy
(25:30). She then clicked on the next topic, Taking the color of Medications seriously.

Lesley read intently through the text on the screen, using the mouse as a pointer, and she
appeared to be progressively monitoring the content: Blue for an anti-depressant – red
for a pep up pill – those sorts of things (25:51), before she clicked on the back arrow and
chose the last topic, How color affect taste and smell. The screen opened and displayed
diagrams and text.

Figure 27: Sensory Input

Her immediate response was: That turned me off straight away – too technical,
elaborating its immediate impact. Despite her feelings about this particular page she
appeared to evaluate the usefulness of the site more generally, and commented: Now I
would bookmark that (site) as a really useful resource. Before finally elaborating: The
first thing that I have come across that I have found very useful (26:25). She closed the
window and returned to the content page.

Lesley studied the page briefly, seemingly re-orientating herself to its content and
evaluatively commented: The more I look at that, Fred, that design is beautiful (27:04),
before she clicked on the next set, 7.2. She scanned down the page using the cursor
before carefully reading the information under the heading Direction that contained
information about the colours. She made an evaluative comment about the change in
screen layout on this page: Something different happening here – just little changes on
each screen that you come to, without it being different/different. Elaborating further: I
didn’t even know what the primary colours were – red, blue and yellow (27:10). She
examined the diagrams explaining the colour hues and then noticed (monitored) that there
was a repeat of information: I thought wow what’s going on here – they have actually
repeated exactly the same information in that paragraph down in this paragraph (27:51).
This series of comparisons enabled Lesley to evaluate and monitor the content as she
moved forward.

Lesley then clicked on the Color theory hyperlink and used the cursor and scroll bar to
scan through the text. She appeared to be orientating herself with the information: I am
going in because it is about colour theory (27:59). She slowed the mouse over the
section on Color Interaction and commented: I found that bit boring – that bit there. I
thought what is the point here? That’s why I am hovering over it (28:41), as she
appeared to monitor its relevance to her learning. She then continued to scroll down the
page; however, the text was dense and she elaborated: There is too much – too much
(28:56). She closed the window and returned to the content page and chose the next set,
7.3.

Lesley read through the text before she clicked on the hyperlink, Flag database, which
opened in a new window. Her cursor movement indicated that she read through the first
two paragraphs before using the back arrow to exit. It appeared that she did not find the
database useful so she moved on to the next number set, 7.4.
Lesley scanned through the text and clicked on the hyperlink Analogous colors, which
opened in a new window. The page contained very little information which she quickly
scanned before exiting. On her return to the Content page, and monitoring her progress,
she realised she had completed all of the number sets, commenting: So, I have done set
7. So maybe that is the end (30:44). She clicked the Home button, and then realised that
additional number sets were listed on the home screen. She scrolled down the list on the
home page and commented: There is a Set 8. So why didn’t they take me to Set 8?
Maybe because it is a preview and it is not all there. Let’s see (30:56), as she monitored
and elaborated on her current situation. She clicked on 7.4 (the last number set
completed) to re-check the numbering system, but became frustrated because the internet
was working slowly. When the screen re-appeared, the number set contained SET 8,
which Lesley thought was very weird. She clicked on SET 8 and read the page intently,
before appearing to evaluate the material: I thought that was very interesting. All of that
(32:20). She used the cursor to point to words in the list on the screen and then clicked
on the hyperlink, Color psychology which opened in a new window that she maximised.
She appeared initially to be a bit confused, and on monitoring the page, commented
evaluatively: I thought this can’t be the right spot. But it is when you realise - when you
come further down (33:08).

Figure 28: Colour psychology


She scanned through a list of colours and their meanings and hovered over the line about
light blue text. She commented: I am particularly interested in pale blue, so I go
through here to find the things I am particularly interested in. Because I am thinking of
decorating at the moment and I think that is the colour that I want. Brings peace and
tranquillity to the home (33:30). This series of complex and simple lists containing
cause/effect and comparison structures indicated that Lesley had evaluated the learning
material and elaborated on its usefulness for her purposes. Lesley stopped the learning at
this point.

Summary of learning module 1


In reflecting on this unit of work, Lesley’s responses to my observations and questions
were part of a dialogue richly loaded with metacognitive linguistic markers. The
observations contributing to my analysis of her metacognitive activity and top-level
structure linguistic markers during the 33 minutes of the lesson have been sliced into
three arbitrary segments (see explanation page 80) which are collectively represented in
Tables 21 and 22.

Metacognitive activity
The totals of metacognitive activity identified for Lesley in this learning module showed
that she drew heavily upon execution, moderately upon evaluation and
monitoring, and to a lesser extent upon elaboration and orientation. In contrast, she engaged
in little planning. Typical examples of each have been presented in the preceding
narrative.

Table 21: Metacognitive activity learning module 1 - Lesley


             Orientation  Planning  Execution  Monitoring  Evaluation  Elaboration
First 5 mins       6          3         13          6          10           2
Body              18          0         57         29          31          30
Last 5 mins        0          0         15         10           7           4
Total             24          3         85         45          48          36
%              9.96%      1.25%     35.27%     18.67%      19.91%      14.94%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify Lesley’s metacognitive activity is
outlined below. These indicate that Lesley relied heavily on cause/effect and
comparison structures, and made little use of problem/solution structures to underpin her
metacognition. She made more use of complex lists than simple lists in this instance.

Table 22: Top-level structuring activity learning module 1 - Lesley


             Simple TLS     More complex top-level structuring events
             event
             List - Simple  List - Complex  Cause/Effect  Problem/Solution  Comparison
First 5 mins       3              9              9               1              11
Body              14             29             42               0              33
Last 5 mins        3              9             11               0               5
Total             20             47             62               1              49
%             11.17%         26.26%         34.64%           0.56%          27.37%
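The percentage rows in these tables are simply each category's share of the total number of coded events, reported to two decimal places. As an illustrative check only (not part of the study's coding procedure), the Table 22 row can be reproduced as follows:

```python
# Illustrative check: the percentage row of Table 22 is each category's
# share of all coded top-level structuring events, to two decimal places.
counts = {
    "List - Simple": 20,
    "List - Complex": 47,
    "Cause/Effect": 62,
    "Problem/Solution": 1,
    "Comparison": 49,
}

total = sum(counts.values())  # 179 coded events in total
percentages = {k: round(100 * v / total, 2) for k, v in counts.items()}

print(total)                          # 179
print(percentages["Cause/Effect"])    # 34.64, matching the table
```

The same calculation applied to the Table 21 counts yields its percentage row to within ±0.01, the small differences being attributable to rounding.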

Self-awareness of learning autonomy: 1st rating


At the completion of learning module 1, Lesley participated in a 10-minute session in
which she was asked to reflect upon and answer the following three questions:

1. How would you describe your engagement with hypermedia?
2. To what extent do you see yourself as autonomous in such activities?
3. How would you rate how effectively you engage with educational hypermedia on a
6-point scale?

Response to question 1
Lesley described her engagement with hypermedia as: A little tentative at first, as she
tried to: ascertain the level it is going to be pitched at when I first start. She described
her starting point as: I try to get the overall view of the structure so I can see how I am
going to progress through it. She liked to ascertain her: logical starting point - I like to
see some sort of direction. She believed that the ‘hyper’ in hypermedia is: a bit off-
putting. By its very name, hypermedia means you can go everywhere. Although on
reflecting on her learning she felt: that was good, because if there was something I didn’t
understand, in more cases than not I was able to link somewhere else; to dig down
deeper. She felt that this was important to her although: I always worried that I might
get lost and not be able to come back to where I was. She believed this to be: the
problem with hyper-linking etc.

She regarded the visual stimulus of: diagrams, learning objects, whatever it might be
within the program, is really important to me. She suggested that if there were only
words: then I like to be able to go somewhere and discuss things. She considered that in
hypermedia terms: this would mean a discussion board or chat or something like that.

Lesley remarked that she would normally take lots of notes; however, her decision to take
notes in a hypermedia setting: depends on whether I am going to be able to come back to
it or not – key points I’ll make note of, probably new points I will make note of. She
believed that the capacity to come back to them is important. Lesley remarked that in
hypermedia settings she applies this skill differently because: In a traditional classroom
much of what is said is lost if not recorded. She described the way in which she kept
notes in hypermedia settings is to: have (Microsoft) Word open and copy and paste url’s,
notes, and quotes that I think are really good and want to refer to later. She stated that if
for example she was reading an article, she would: Always have a cover sheet to make
annotations about that article; like the main idea and what the key points are, and any
quotes I really like. So when using hypermedia: I will often use in that word document
the same template. However, she remarked that if she were studying only using
hypermedia: I would definitely use that annotation method.

Lesley stated that her approach would differ with the purpose of her learning, and: it
would depend on the type of assessment I would be expecting. For example, for a closed
book or supervised assessment: that would be very different to if I was doing project
work or an assignment. For a closed book assessment Lesley stated: I think I would be
more pedantic about the amount of information I kept, rather than just the general theme
or flow of everything.

Lesley believed that she is normally guided by the structure of the hypermedia: Knowing
that minds better than mine have put it together that way, so that I can logically progress
from the known and all that stuff. However she remarked that: I really like though to
have that interaction. I keep coming back to that people interaction through discussion
forums or something. Adding further: It is really important for me to know how I am
working through that compared with other people. Lesley indicated that in the learning
just undertaken she: struggled with it, as little or no interaction was provided.

As a visual learner, Lesley sees graphics and diagrams as important. However, anything:
that is too over the top, if I come to something like that, I think, Oh! Got to read the
explanation. I’ve got to just keep looking back at it as I am working through the
information, trying to work it out, see if I agree. She noted that if she was getting tired
and was confronted with a lot of detail: I would say no – another day.

Overall, Lesley’s response suggests that although she is sometimes a little apprehensive
to start with, she considers herself to be both comfortable and confident in learning with
hypermedia. Her comments suggest that she spends time in gaining an overall view of
the task prior to commencing. Once she is orientated to the task and is assured of her
way forward, she undertakes the work with confidence, spending time monitoring her
progress.

Response to question 2
In talking about the previous learning, Lesley said: I was autonomous, and felt
comfortable about being autonomous. I knew that before I started. She felt she was
prepared for the autonomy of it all stating: I never feel as though there is nowhere to go,
when I am an autonomous learner using hypermedia though. Her autonomy is founded
upon her knowledge that: There is a whole range of things that I can do. Got all your
different search capabilities. You’ve probably got hyperlinks somewhere in that
hypermedia. Further, when faced with an unknown, she likes to: go out and find what it
means. I am quite able to do that. For the management of her learning she remarked: I
like to see that overall conceptual framework at the beginning if it’s there, so that if I get
lost I go back and think, oh gosh, what was I doing? She did acknowledge that some
systems are not well structured and you do get lost: Especially when they are sending
you off to look at things, and it’s not made clear that you are being directed away.

Response to question 3
Lesley rated herself as a 5 (effective) on the Likert Scale. She explained that this rating
was because she had undertaken the learning for a reason, and she felt she had achieved
her learning objectives, particularly in the short timeframe.
Lesley’s rating and its rationale appear to be supported by her responses to the first two
questions, and in particular by her expressed confidence in her capacity to deal with any
challenge the learning setting might present.

Metacognitive training intervention


The purpose of this intervention was to provide Lesley with training and to raise her self-
awareness of her metacognitive activity in hypermedia learning prior to undertaking
learning module 2. A thirty (30) minute session was conducted in which Lesley was:

(i) Provided with a paper copy of the metacognitive taxonomy (Table 6, page 57) and
top-level structuring rhetorical structures (Table 7, page 61), and the initial 10
minutes were spent explaining and discussing the categories they contained.
(ii) Provided with a paper copy of the analysis of her first learning event, and the next
15 minutes were spent examining and discussing these data.
(iii) Engaged in a 5-minute reflection in which she was asked to reflect on the
utilisation of her metacognitive actions and discuss their future utilisation.

Lesley was asked to reflect on these discussions prior to undertaking a second module.

Learning module 2
The second module in which Lesley engaged was within the Australian Flexible Learning
toolboxes - Accounting (207), FNB50299 Diploma of Accounting. The element of
Competency she undertook in this learning session was FNBACC158 Evaluate
Organisation’s Financial Performance.

Lesley commenced by going to the Australian Flexible Learning website and selecting
the Flexible Learning Toolbox hyperlink which took her to a list of toolboxes. Here she
scanned down the page: So what I am doing is looking for the Flexible Learning Toolbox
for Finance Services (00:08).

Figure 29: Toolboxes page

After scanning the page Lesley remarked: I thought I would see a menu there actually,
but no it wasn’t. I was looking for a menu because that says Toolboxes, and I am
thinking OK there will be a list of Toolboxes (00:20). She appeared not to find what she
was looking for and went back to the introductory screen and used the search facility to
try to locate the finance services toolbox. This provided her with a preview of the
toolbox, but she realised it was not the toolbox she was looking for: No that’s not it – it’s
Accounting, wrong one (00:41). She ran the mouse over the page but did not see the
toolbox for Accounting in the related toolbox navigational panel. Lesley then went back
to the introductory screen and typed in the search box, Accounting Toolbox and
clicked on the preview button. She scrolled down the list of topics on the page: Looking
for a fairly descriptive title of what the unit is going to tell me (01:58).

These comments suggest that from the outset Lesley was orientating her position and
already starting to monitor her progress and was metacognitive in that she had oversight
of these processes. The discursive structures of her metacognition include: I thought I
would…, but no it wasn’t, and I am thinking OK there will be.., and No, that’s not it.
These examples contain language expressly used as description of her own thinking and a
comparison top-level structure that frames up the purpose and outcome of her action.

On initially looking at the page, she appeared not to realise that it contained a list of
assignments, remarking later: I don’t know whether I realised that I was in assignments
at that stage (01:45). She appeared to be monitoring and evaluating her choices as she
scanned through the topics remarking: So this is talking about evaluating business
performance, so evaluating organisation’s financial performance – yeah, that’s the one
that I am going to choose (01:58). She selected the Unit of Competency, Evaluate
organisation’s financial performance. The link took her to the unit’s welcome page
where she scanned the introductory text and a list of unit topics.

Figure 30: Unit welcome

She selected the link, The unit site design, and remarked: I am going into the unit’s site
design because I need to have understanding of the overall composition of a unit on-line,
I can’t just dive in. She used the mouse to scroll over the information on the page: I
expected to see a visual structure of the site rather than the words that are presented
there (02:45). She read through the text and I asked her if that had been useful, she
replied: Limited. It is a bit like when someone’s got a kit for a coffee table and they open
it up and try to assemble it without reading the instructions. I guess I was trying to
curtail any difficulties before I did it (03:24). Lesley went back to the welcome page and
clicked on the next hyperlink – Using the Activities Map. This page provided an
introduction to the activities map that would follow. She read the information on the
page and commented: It is actually telling me what I am going to encounter when I go
through those areas (03:47).

She clicked on the Activities Map link and looked through the table listing associated
activities.

Figure 31: Activities Map

She remarked: That table – I thought it was going to link to the activities because they
were in red – but it didn’t. It just told me they were activities associated with each of
those areas (04:12). This series of cause/effect, problem/solution and comparison
structures suggests that she continued to use the various tools afforded to her by the
learning resources to orientate her to the learning ahead, and evaluate her options as she
attempted to plan her way forward.

Lesley then went back to the introductory page and selected the next topic, Spreadsheet
Models. She used the cursor to explore the page and spent some time reading the
content, commenting: I always read it. Pointing, as I go along (04:31). She then
returned to the introductory page and clicked on the next hyperlink, Using the Business
Simulations. Lesley looked over the page and commented: And that told me three parts
of 5/8th of nothing that one. I expected that they were going to tell me about hopping into
the simulated business environments, which it would eventually I guess, but it didn’t
explain it as well as I thought it would. It has mentioned something to me here about
something being linked from the bottom menu, so when it does that, am chuffing off here
to see what is actually on this bottom menu (04:42). She looked at the bottom menu and
did not initially find the link, but found it after revisiting the menu.

Figure 32: Bottom menu

She opened the box containing the background information on the business simulations,
and skimmed through the information quickly: I guess it is putting it into context. I
didn’t read further than that … that was all I needed to know – the names of the
companies (05:26), indicating that not all the background information was necessary to know
at that stage. This complex list of comparison, problem/solution and cause/effect
structures indicates that Lesley was monitoring her progress using a series of evaluative steps.

Lesley then progressed to the next topic from the introductory page, Suggested pathways
for learning. She found this useful because: It identified if I could have the prior
knowledge or no prior knowledge (06:02). Lesley read the information and then tried to
find the site map mentioned in the text. She continually ran the mouse over the icons at
the bottom of the page looking for the site map. She commented: Now I thought why is
there no site map below when it is telling me there is. And I wouldn’t believe it – I kept
going back and looking (06:38), as she attempted to evaluate and monitor her current
position before planning her next move. She gave up looking for the site map and instead
went back to the information about the choices of learning pathways on the same page: I
think I am one of those, she elaborated referring to text on the page that read, Go to the
self-assessment section to confirm areas of expertise (07:03). Lesley searched for the
self-assessment link in the same bottom menu and remarked: I am not understanding
where that is (07:15), as she tried to re-orientate her learning path and evaluate her next
move. She moved her cursor to the side menu: I am looking over there and thinking well
there is a review exercise, is that self-assessment? Because this is only a preview, it may
be that it is not there because of that (07:36). She continued to look at all parts of the
screen, looking for the self-assessment link and reported: Well now I am thinking, well it
has to be somewhere. Where is it? I thought I will just try everything. See I am looking
everywhere on the screen (07:42). She continued to search the page and appeared to be
frustrated and remarked: So, I am still looking. I am thinking it has to be here (08:20).
Finally, she then returned to the bottom navigation toolbar: So I am reading about the
bottom navigation toolbar again because I thought, OK, well what’s on it? What am I
missing? (08:35). In what appeared to be one last effort, she hovered over the icon Study
Help on the bottom navigation toolbar, but did not click on it. She remarked: Isn’t it
funny, I didn’t go to that, I haven’t clicked on it. I just looked to see what it is. That may
have helped me. We really don’t use on-line help things enough. That’s something that I
learnt with my study. We have an aversion to them for some reason (08:49). Finally, she
clicked on the Temp Office link on the bottom navigation toolbar: still looking for those
self-assessment things (08:59). This took her back to the initial screen.

Figure 33: Opening screen

She used the mouse to scan the text and icons and then realised that: OK this is a dead
end (09:25). Lesley engaged in a series of complex and simple lists containing
cause/effect and comparison structures while trying to orientate herself, then decided
(evaluated) to give up her search for the self-assessment task as she planned her way
forward.

Lesley’s next move appeared to re-orientate her within the learning interface as she chose
to explore the different business icons listed on the page. She clicked on VOC
Enterprises, maximised the screen and read through the information. She hovered over
the Click here link, but decided not to proceed, and commented: I have come in here
thinking that I am going to find out all this financial information about this business
(09:40). Her actions and discourse suggest she had been deliberately monitoring and
evaluating the information’s usefulness to her learning.

Lesley then chose to investigate the side navigation panel instead and scanned the list of
options. She attempted to: just go straight into those records (09:40). However, there
were no links from the topics on the side navigation panel. She commented: That was
annoying, because there was a pointy finger there, but no link. From just the records, it
was only under the um, sub-titles. So I waited a while for it to come up, and then gave up
and went to the next one (10:07). She went back to the Assignments page, then to the
introductory page: Oh that is when I decided I had enough of this – it is rubbish (10:26).
This series of comparison, cause/effect and problem/solution structures again indicate
Lesley’s ongoing monitoring of her progress to evaluate her position as a precursor to
planning her next move.

Lesley next checked the Desktop to see if the links had loaded in the background. She
then went to the VOC Enterprises page and clicked on The Financial Records link and
scanned the list of topics. She commented: I am thinking well which ones do I need?
(10:43). Lesley chose Analysis Report from the list of Financial Records. She appeared
to again become frustrated when she realised that these were only text files and that
they were very slow in opening. She commented: What I am feeling at the moment is, I
know I need to learn something, and I thought this might be able to show me, give me
some information it – it doesn’t appear that it is going to and probably shortly I am going
to give up and go into Google and do it (11:41). Lesley appeared to be still trying to
work her way forward using a set of evaluations to monitor her progress.

Lesley clicked on another topic from the Financial Record list, but then clicked Cancel
before it had opened: That is just going to be a report so I will just go back out of that -
wrong choice (12:04). She returned to the unit introductory page and quickly scanned
through the links. Then she used the mouse to scan the rest of the page before choosing
the Review Exercise link from the side navigational panel. She elaborated: Mm. I guess
a double check at the end to make sure I am making the right decision to leave it (12:25).
The Review Exercise link did not contain any exercises: Mm. That is what I had tried
before. I didn’t believe it obviously (12:38). Lesley then chose the Content of Unit link
from the side navigation panel: The content of the Unit is where I seem to need to be
going (12:55), as she again attempted to re-orientate her learning pathway. She quickly
scrolled down through the page without reading it and then went back to a diagram for a
brief look. She commented: I felt that it was more theoretical and what I was really
looking for was just that um definition around those things that I would find in those
financial reports. I didn’t really want to go into all the theoretical background (13:05).
This series of cause/effect and problem/solution structures suggests that she was still
struggling to move forward, and continued to evaluate and elaborate as part of her
monitoring function. Lesley scrolled back through the page and read the information
before deciding to close it down. She remarked: This is not giving me what I want
(13:40).

Lesley loaded Google and entered a search term and remarked: I have searched for
interpreting financials. Actually I thought I put in very good words there and got exactly
what I wanted (14:37), as she evaluated the effectiveness of the search. She scrolled
through the results of the search and chose the topic containing the word Introduction,
explaining: It was because it said an introduction and that is what attracted me to it
(14:56). The page opened and she scanned the text: I thought great, I looked down and
the contents was the first thing I looked for, and I thought right I am going to work my
way through this (15:14). She appeared to study the page quite intently: I was reading
everything carefully, and I think I was looking in the book at the same time because I
wanted to check. I thought that was going to be a hyperlink there because it was blue
(15:49). This series of cause/effect structures suggested she was now happier with the
information and its presentation, and she evaluated: I actually like the look of that
because I think it is a nice clean presentation and I thought that at the time. Good easy
navigation (16:09). Elaborating further: There just seems to be highlights in the right
places, not too much text to read on each screen, you keep getting an example of what
they are discussing (16:26). She then selected the link to the topic, The Balance Sheet.

Lesley worked through the information screen by screen using the Next button and cross-
checking the information on the screen against a financial statement she had from her
own organisation. She said: It was the Owner Equity that puzzled me. She elaborated
further: Because I kept thinking – interesting – Owners Equity - so that must mean
Southbank’s equity in its own business – so where do we get that equity from? It must
have been given to us by the department when we became a statutory authority (17:19).
She then appeared to reflect further on one of her learning goals: I was actually looking
that we were using the same terminology in ours as they were using in theirs and I was a
bit confused because we don’t use long term assets, it is called non current assets in that
book. That is interesting isn’t it? (17:55). At this point Lesley appeared to take stock
(evaluate) of where she was at with her learning and planned her way forward, reflecting:
So I guess I am wanting to apply what I am learning there – yeah it is a wanting to apply
right now to solve the problem – of understanding that booklet. Yeah, that’s why I was
doing it (18:20). Lesley continued to cross-check the information on the web page
against the financial statement: I guess I was checking the validity of that with accepted
actual practice. Can I trust that this is right? And then when I am starting to think that,
oh well, this is doing that, so you have got two completely separate sources doing the
same thing - they’re not, this must be the way it is done. This indicated that she was
evaluating metacognitively; that is, she was recapitulating and drawing conclusions.
Next, Lesley appeared to evaluate further as she reflected on how this learning might
apply to her workplace situation: For me personally, what I would really love to do
would be to get our chief financial officer to give me one hour and sit down with me and
just let me um give me an overview of what that means and me ask her specific questions
(18:35).

Following this, Lesley spent quite a bit of time looking at the Order of Listing page and
trying to come to an understanding of the concepts by cross-checking the website against
her financial statement. She used cause/effect and comparison structures, which seemed
to show she enjoyed the way the text and supporting examples were presented on the
webpage, as she orientated herself to the interface: because what they are talking about
up here, I can see down here. If there was just text on that screen and no diagram, it
would be boring. But I am very visual (20:05). Lesley continued reading carefully
through the text, hovering over anything that took her interest and commented: I think
this definitely makes you more of a … makes you follow more of a linear path the way it
is all presented, doesn’t it? Which is not a bad thing moving from the known to the
unknown (21:08). It would appear that she was metacognitively elaborating on her
learning trajectory and evaluating its relevance to her learning style. She then proceeded
to the next screen which contained a chart containing several hyperlinks of which she
appeared to be initially unaware. She commented: Oh. I didn’t even think they might be
hyperlinks. There you go – never even saw it – never saw it. All I focused on was Assets,
top left hand side of the Balance Sheet. Elaborating further: and I think it said something
here about the order things appear in the Balance Sheet – meaning those that are most
quickly converted to cash, you start with those first, and you roll your way down from
there. And I had never heard liabilities referred to as claims on assets, isn’t that funny?
(21:23). The list of utterances and cause/effect structures that underpin these responses
suggests that Lesley was now having to engage in parallel cognitive processing. That is,
she was metacognitively elaborating at the learning interface and with the learning
content concurrently.

She continued examining the chart and realised that the way she had scanned it had
caused her to miss information, commenting later: Still never noticed this. Could be that
I read from left to right. While monitoring her progress, it would seem that the learning
content had gained precedence over the learning interface as she struggled to sustain the
parallel cognitive processing deployed moments earlier. She came across a term that was
continually used in the learning materials and remarked: I keep coming across this term
Contributed Equity, so I need to understand that (22:18). As she continued to monitor
her progress she decided to open Google in a new tab because she was: worried that I
would lose my navigation (22:42), and entered the term Contributed Equity. This
choice suggested that she had made a planning decision to add the Google search
function to her learning interface and entered a phase of learning which required her to
manipulate parallel interfaces. The Google search did not provide her with a satisfactory
answer because her keywords were not specific enough. She commented: I am thinking
this isn’t giving me the definitions because what it is doing is picking up contributed
equity from different annual reports. She elaborated further: and that’s when I decided
to come back and type in you know – definition (23:10), and re-entered an expanded term,
Definition of Contributed Equity. On obtaining the result she appeared to
be more satisfied and remarked (evaluated): It has given me a better selection of things
to choose from. She elaborated further: I bet they use something like this to try and
pinpoint people’s search habits for Google (23:29). During the process of monitoring,
planning and elaboration, she engaged in a series of problem/solution and cause/effect
processes that enabled her to come to a satisfactory result.

Next, Lesley selected the first item from the Google list, and appeared frustrated by the
clutter of advertisements and text on the screen. She remarked: Ah. This threw me out -
all of these ads and everything. I thought oh – that really got me frustrated. Um Then I
read down there – it wasn’t a great help – this was good this bit down here. So it took me
a while to find it. So they are talking equity, but not contributed equity. Not that it makes
any difference perhaps. I don’t know (23:55). The initial cause/effect structure suggests
that she was monitoring her progress and the final comparison structure suggests that she
was evaluating her learning. She ignored the advertisements, elaborating on her reason:
All the ads on the side I don’t look at them. And I never look at them because I know if I
open them up I am going to be inundated with spam (24:43), and spent some time reading
the information on equity. She continued to use the financial statement to cross-check
the information in the learning module as she tried to understand the term: I am still
feeling puzzled about contributed equity. I mean it tells you someone has put some
money in but I wanted to understand how that would happen. Or they put something in,
not money perhaps, but something (25:21). These remarks indicated that she continued to
reflect on the term using evaluations and elaborations. She made a final attempt at
understanding the term contributed equity and remarked (elaborated): Assets received.
That’s where contributed equity comes in (25:43).

Lesley closed the second tab containing the Google search results and commented: I am
not going to read those, but at least I understand what it is. So go back out of that. This
indicated that she had made a judgment (evaluated) about the value of this material. She
elaborated further: And I was thinking to myself, thank God, I am pleased I am not an

155
accountant (26:06). She then went back to navigating through the learning module until
she came to a practice exercise.

The exercise consisted of five objective-type questions. On question one she selected
answers A and C, and in both cases was advised she was wrong.

Figure 34: Practical exercise question 1

She commented: Oh, this was terrible. If you don’t get positive responses, correct
marks, I believe, when you start an on screen assessment of some sort with your feedback
and you just keep getting told you are wrong, you just want to stop (26:36). She decided
to move to question two, commenting: Left the first one – decided I don’t want to know –
let’s see if this one is any easier (27:11). She selected an answer for question two, and
was again advised she was wrong. She hesitated over choosing an answer to question
three: I wasn’t game to press it because I didn’t want to be marked wrong (27:32). She
skipped questions three, four and five and selected the next button: I will skip that
because I don’t want to be told four times that I don’t know what I am doing. Despite the
fact that she had not been successful at this practical exercise, she appeared to enjoy the
interactivity it had afforded her: But I was pleased to see that there was some
interactivity in it. (27:47). As she proceeded through this exercise, Lesley’s lists and
comparison and cause/effect structures suggested that she continuously monitored her
progress using a series of evaluative and elaborative processes.

Lesley went to a new page containing information on The Income Statement, scanned
through the list of items and clicked the next button: Wasn’t really what I needed
(28:09), as she continued to monitor her progress. The next topic was Financial
Accounting vs Tax Accounting and Lesley read through the text and commented: That
was interesting too, because it made me realise that we do that in our own company – my
husband’s company – you have got your financial statements but then you have got your
tax statements, so you do actually, you do keep two separate books. You are not going to
tell the tax office all your profit – but I don’t mean cheating wise, there is so many other
things that have to go into it (28:16). Lesley’s response suggested that she was able to
elaborate on and monitor her learning using a complex list of comparison and
cause/effect structures that indicated she was evaluating the content. This was the final
page of the learning module and Lesley selected the Return to the beginning link.

Lesley was returned to the opening screen where she scanned the list of the lesson
content. When she initially started this module she commenced her learning at the topic
Balance Sheet, saying at the time that she knew all about the initial topics.

Figure 35: Lesson Contents

This time however, she started at the top of the list and selected the Overview and
Objectives link, and browsed through the screen in a linear fashion. She selected the next
button and was taken to the next topic, The Accounting Equation. She appeared to be
using this section both to monitor and evaluate her level of knowledge, commenting
through a complex list of comparison structures: And what I am thinking to myself here
as I started doing this was I know that, I know that – I will just go to the part that I really
need to know about, but when I got there, particularly when I got that feedback from the
assessment along the way – I thought you are an idiot. You should have just gone
through and done a refresher on the things that you think you know about. She seemed
to reflect on what she just said and elaborated further: So I changed from I know that to I
think I know that, and I don’t know it. I guess we are time poor always (29:16).

Lesley continued to read carefully through the page, engaging in a list of comparison and
cause/effect structures, suggesting that she continued to monitor and evaluate her
knowledge: I am really learning a lot here about something I didn’t think I was going to
learn about – I am actually learning – not learning what I really thought I was coming in
to find out, but the thing that has got me intrigued is this thing called owner’s equity, so I
am going off on a tangent I suppose (30:14). Lesley continued reading through the page,
and appeared to be still concerned that she did not fully understand the concept of
owner’s equity. She commented: it is still not telling me about the owner’s equity, it is
labelling owner’s equity, but it is still not giving me that (30:50). She continued to work
carefully through the information on the screen, trying to understand owner’s equity and
finally appeared to have worked out its meaning and elaborated: It is such a simple
concept when I look at it from here now. That’s the owner’s equity – it’s the assets minus
the liabilities. So why doesn’t it just say that to you earlier on? (31:33). Lesley clicked
on the next button and was taken to a practical exercise. This process of monitoring and
elaborating identified that Lesley used cause/effect, problem/solution and comparison
structures in order to clarify her understanding of the content.

Lesley commenced the practical test; however, this time she appeared to be drawing upon
(evaluating) the negative experience from the earlier test to plan her strategy for this test:
This is where I decided I am going to change how I do this. I am going to try to find the
right answer and work backwards. Quicker! Yes there are some correct answers
(31:48). She used a complex list of problem/solution and cause/effect structures to
enable her to formulate her planning. Lesley continued the practical exercise using her
new strategy of clicking the answers until the correct answer was identified. This
appeared to enable her to reflect upon the question by articulating the answer as an
explanation: I used the answer to explain to me how they worked this out. Her
elaboration of the answer to question one appeared to support this: Oh yeah. Well a
company has that in owner’s equity and that in liabilities, what are the assets? Owner’s
Equity. Assets equals liabilities plus Owner’s Equity. So it is one million. Whatever the
Owner’s Equity (32:30). At one stage she worked on two answers at once: Yes I am
working on that. I am reading what they have got there. I’m saying OK the correct
answer is that plus that plus that minus that OK. They have taken that and added this
and blah blah blah (33:26). In monitoring her progress she appeared to want to capitalise
on the success of this strategy: I will go back and do this one the same (33:47).
However, for the next question she articulated (monitored) a slight change in the strategy
she adopted: That was actually a process of elimination. She elaborated further with a
series of causal and comparative remarks: Because I thought that’s an asset, that’s an
asset, that’s an asset – retained earnings – what the heck are retained earnings – oh I
don’t think that is an asset. Did I choose it? Still haven’t decided. And I was thinking it
can’t be all of the above. Because they are. So that is really not a well constructed
exercise, that one. Because you can do it through the process of elimination (34:26).
Lesley stopped at this point.

Summary of learning module 2


In reflecting on this unit of work, Lesley’s responses to my observations and questions
were usually part of a unified dialogue that could be easily reduced to ‘confirmation’,
and were highly loaded with metacognitive linguistic markers that signalled her
metacognition. The observations contributing to my analysis of Lesley’s metacognitive
activity and top-level structure linguistic markers during the 30 minutes of the lesson
have been sliced into three purposive though arbitrary segments (see explanation page
80) to represent Lesley’s progression through the beginning, body and conclusion of the
learning event. These are represented in Tables 23 and 24.

Metacognitive activity
The totals of metacognitive activity identified for Lesley showed that metacognitively she
generally drew heavily upon execution, monitoring and evaluation, and to a lesser extent,
elaboration. In contrast, there was little evidence of the use of orientation and planning.
Typical examples of each were presented in the preceding narrative.

Table 23: Metacognitive activity learning module 2 - Lesley

              Orientation   Planning   Execution   Monitoring   Evaluation   Elaboration
First 5 mins   2             2         10          11            8            2
Body          11             6         50          37           39           29
Last 5 mins    0             1          4           8            6            6
Total         13             9         64          56           53           37
%             5.60%         3.88%      27.59%      24.14%       22.84%       15.95%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify Lesley’s metacognitive activity is
outlined below. These indicate that Lesley relied heavily on cause/effect and comparison
structures, and to a lesser extent problem/solution structures, to underpin her
metacognition. She made more use of complex lists than simple lists in this instance.

Table 24: Top-level structuring activity learning module 2 - Lesley

              Simple TLS event   More complex top-level structuring events
              List - Simple      List - Complex   Cause/Effect   Problem/Solution   Comparison
First 5 mins   3                  8                8              2                  7
Body          16                 31               53             11                 19
Last 5 mins    2                  8                7              2                  8
Total         21                 47               68             15                 34
%             11.35%             25.41%           36.76%         8.11%              18.37%

Self-awareness of learning autonomy - 2nd rating


At the completion of this second learning module, Lesley was again presented with the
6-point Likert scale and asked to rate how effectively she considered she had engaged
with educational hypermedia on this occasion.

Lesley rated herself as a 6 (very effective). She was then asked to comment on the
reasons for this rating.

Lesley suggested that this rating was because: I felt much more comfortable the second
time. I think that maybe that was because I feel more aware of my capabilities with
hypermedia learning, especially my metacognitive strategies. I think that my previous
experience with hypermedia learning helped also. She added that: I feel I get on top of
the interface easily now, having more experience, so this lets me get to the learning
quicker. So I am less concerned about the challenges that the interface may present. She
said that she selected 6 (very effective) because: I believe that I am an independent
learner and with my accumulated experience using hypermedia learning I use it very
effectively. She finished by saying: I would have to admit though that boring material
and lack of interactivity are a bit of a turn off.

Effect from metacognitive training


Lesley rated herself higher in the second learning module:
Learning module 1 rating 5/6
Learning module 2 rating 6/6

Table 25: Collective data of metacognitive activity - Lesley

              Orientation   Planning    Execution   Monitoring   Evaluation   Elaboration
              S1    S2      S1    S2    S1    S2    S1    S2     S1    S2     S1    S2
First 5 mins   6     2       3     2    13    10     6    11     10     8      2     2
Body          18    11       0     6    57    50    29    37     31    39     30    29
Last 5 mins    0     0       0     1    15     4    10     8      7     6      4     6
Total         24    13       3     9    85    64    45    56     48    53     36    37

Comparison of total activity expressed as a percentage (%)
Session 1     9.96%         1.25%       35.27%      18.67%       19.91%       14.94%
Session 2     5.60%         3.88%       27.59%      24.14%       22.84%       15.95%

A comparison of the metacognitive data by percentage indicates that, following the
training, Lesley engaged in more planning, monitoring and evaluation activity, and
slightly more elaboration. In contrast, she engaged in less orientation and execution
activity.

Table 26: Collective data of Top-level structuring activity - Lesley

              Simple TLS event   More complex top-level structuring events
              List - Simple      List - Complex   Cause/Effect   Problem/Solution   Comparison
              S1    S2           S1    S2         S1    S2       S1    S2           S1    S2
First 5 mins   3     3            9     8          9     8        1     2           11     7
Body          14    16           29    31         42    53        0    11           33    19
Last 5 mins    3     2            9     8         11     7        0     2            5     8
Total         20    21           47    47         62    68        1    15           49    34

Comparison of total activity expressed as a percentage (%)
Session 1     11.17%             26.26%           34.64%         0.56%              27.37%
Session 2     11.35%             25.41%           36.76%         8.11%              18.37%

A comparison by percentage of the linguistic marker data for top-level structures
indicates that Lesley used more cause/effect and problem/solution structures following
the training. In contrast, she used fewer comparison structures. However, her use of
simple and complex lists was similar on both occasions.

Case Three - Tammy

Learning module 1
The first module in which Tammy engaged was within the Australian Flexible Learning
toolboxes – DRT03 Drilling Industry Training Package. The course was the Certificate II
in Drilling and the element of competency she undertook in this learning session was
Methods of Drilling (706).

Tammy opened the Unit, read through the opening page, reviewed the four types of
drilling and clicked on the Take a Tour hyperlink, which opened in a separate window.

Figure 36: Methods of Drilling opening page

She explained her actions as she appeared to familiarise herself with the learning interface:
I just hovered my mouse over those icons just to see what they are about. Just there,
there was the invitation to take a tour, and I thought, oh, I will take a quick tour and that
might give me a bit of an overview of what I am about to do. She quickly scrolled
through the tour: But I could see basically that it was kind of more instructional material
I guess, for students and perhaps teachers as well. It didn’t really interest me so I
decided to skip through that (00:06), and closed the window. This complex list of
cause/effect and comparison structures indicated that she was attempting to orientate
herself to both the learning interface and the learning materials.

She then clicked on the Air hyperlink and articulated her reason: So I went to the upper
left and I thought I would work my way from left to right. So I started with Air first
(00:53), evaluating her options. She used the mouse to scroll down the page, offering an
insight into her progress: interestingly my eyes went to the big column on the right hand
side of the page and not the small column on the left. But obviously the small column on
the left provided the instruction of what to do. Yeah, first of all my eyes went to the big
hunk of material to the right, but later I realised that reading that little bit on the left was
important (01:01). This complex list of comparison structures suggests that she was both
monitoring her progress and orientating herself to the learning interface.

Figure 37: Page layout

Tammy then clicked the Continue button, and commented: So obviously after reading
through that text there is an activity to do. And I could see the instructions were there
on the left (01:34), indicating that she was continuing to monitor her progress. The next
screen was a different interface containing an interactive graphic and some instructions.

Figure 38: Interactive screen

Her cursor indicated she read the instruction on the left of the screen before she clicked
on the form on the notice board, as directed, which opened a form in a new window.
Tammy commented: So the instruction was to read the form on the notice board, so I
quickly read through that (01:54). The second part of the instruction asked her to select
her Personal Protective Equipment from the shelf. She clicked on the Move Right button
to move the diagram to the right and reveal the shelf and its contents.

Figure 39: Shelf and contents

Tammy used the cursor to explore the shelves: Just hovering over some of the things
before selecting them just to see what they are. Using the mouse, she started selecting
various items and the correct items were automatically placed in the bottom bar of the
screen. As she proceeded she appeared to evaluate her choices: I could see there were
two pairs of safety glasses and one was tinted and one wasn’t tinted, elaborating: so I
thought obviously I need to make a choice, and finally evaluating: So, I first selected the
wrong ones for the job (02:04).

Figure 40: Selected items

Tammy continued the activity, this time making a choice of boots: multiple choices with
the boots. By trial and error I thought that it does not hurt to click on them because if
you click on the wrong thing it is going to tell you anyway. So if unsure, just keep
clicking (02:28). This list of statements with both cause/effect and problem/solution
structures indicates that Tammy was monitoring her choices and elaborating on the
consequences of those choices. She completed the selection of items and clicked on the
move left button.

As the shift change form on the notice board was still highlighted, she clicked on the
form and it opened in a new window, and she read through the form. She commented:
Obviously I missed part of the instruction there around the sharp bits for the drill, so I
just had to go back in and read that again, as she monitored her progress, before
evaluating her next move: And then I realised that I would have to go over to the drill
sharpener on the bench and click on that, and elaborated: as one of the things I would
need to do before going out on site (02:56). She closed the shift change notice and
returned to the bench and clicked on the bit sharpener, and the item moved to the bottom
bar. She commented: Obviously that tells me I am right, as she monitored her action.
An instruction appeared in the left hand box advising her that she still needed to collect
the rest of her gear which she monitored: And I obviously still need to select a few things
(03:17). The activity would not let her continue until it was completed which caused her
to continue to monitor her actions up to that point: Oh, OK, so I had to go and do the job
log and I am just looking for the job log form. Which led her to evaluate: Perhaps it is
on the notice board. I sort of went back to the notice board there (03:35). She moved
the cursor around the screen and prompts appeared, as she kept monitoring: Ah, what is
the guy telling me to do. Back to the notice board. Where is the job log? I can’t see the
job log, before elaborating: Bit confused (03:40).

Figure 41: Notice board

She moved her cursor to the buttons at the top of the screen and evaluated their potential:
Ah, might be these things at the top, and added: Nope it is not one of them (03:52). She
continued to look for the job log form and scroll over items on the notice board and then
clicked on the move right button, again continuing to search other parts of the scene. She
elaborated: Mmm. Have I missed it? Where is it? Can’t find it, before evaluating:
Getting a bit frustrated, and elaborating further: I have been over that area before. It is
not there, it is not there (03:58). She then went back to the notice board and used the
cursor to review the items on the notice board. Finally, she looked at an area of the
screen outside of the diagram and found it, evaluating: Ah, there it is at the top (04:14).
She clicked on the Job Log link and opened the document. The job log took some time to
open in a new window, and monitoring its progress she commented: It didn’t seem to be
a very big document, but I think because it has got some graphics in it, before evaluating:
it might take a while to load down (04:43). During this metacognitive process, Tammy
engaged in simple and complex lists, cause/effect, comparison and problem/solution
linguistic structures, which enabled her to work through the problem and eventually find
the job log form.

Tammy then scrolled through the job log, scanning the text and its associated graphics,
commenting: So that is the job log. Obviously there is a bit of information in there that
is recapping on what I have already done. So I skipped through that pretty quickly. If I
had of had access to a printer, maybe it would have been opportune to print it right now
so that I could then have it beside me as I continued studying. Maybe fill it in as I go
(04:57). I have just scrolled through the PPE stuff there. And I can see that the next
thing that I am going to be required to do is this pre-start check. This complex list of
cause/effect structures suggests that she continued to monitor her learning trajectory,
elaborating on some of those decisions. She minimised the window explaining
(monitoring): so I just had a quick glance at that (screen) before clicking continue.
Expecting that the next thing would be the pre-start check (05:25). This took her to the
worksite graphic.

Figure 42: Worksite graphic

Tammy remarked: So, here we are on site, and referring to the instructions in the left
hand pane: and yep it is telling me to do a pre-start check to see if there is anything that
could be a hazard (05:41), as she continued to monitor her progress. She maximised the
job log form and commented: Back to the pre-start check list to see what the hints are
there (05:55), monitoring once again. She scrolled through the form before positioning it
adjacent to the worksite graphic (see Figure 43).

Figure 43: Twin windows

Tammy commented on her reason for setting the screen in this manner: because I have
never been on a drilling site before, I thought I should just check and then minimise that
so that it is kind of there and I can look at both things at the same time (05:57). She
moved the mouse over the screen: Ah, just rolling my mouse over to see if anything
highlights, monitoring her knowledge that this had been a characteristic of the last
interactive graphic. The rod rack highlighted and she evaluated its impact: Obviously
there is something there that might need actioning. The rod rack – so ah –do I click on
that now? No, bit later (06:07). She continued moving the mouse over the picture: OK,
there is the branch. Um, (06:23) evaluating the impact of a branch lying on the ground
which highlighted under her cursor. She clicked on the branch to remove it. She was
then presented with two action choices: the first to deal with the rod racks, and the second
to carry out pre-start site checks. Tammy chose to deal with the rod racks and clicked the Stabilise
Racks button. This caused her to monitor her action: OK, clicking on that now,
stabilizing the rod rack. I can see a few other things lying around the site so I just sort of
click on them to indicate they could be a potential hazard, before evaluating: nothing at
the end there – um (06:28). A message appeared in the left window telling her that there
was one more item that needed attention and she started searching with her cursor,
appearing to monitor her search: where is the item? Can’t see the item. Running my
mouse around – can’t find it – where is it – what have I missed? Having a look again
back to the right. Can’t find the item. There is nowhere on the page for me to over-ride
this (06:51).

Tammy clicked on the Job log window in what appeared to be an attempt to use another
approach to identify the missing item, and appeared to monitor her actions: OK, go back
to the checklist. See if there is anything there that I have missed, and evaluating: and I
think I have got most of those things on that list. Not really sure what I have missed. She
then minimised the job log window and began to search through the graphic activity once
again monitoring her actions: Have a look around – run my mouse around expecting
something to be highlighted when I hover over it. Can’t really find anything. It is a bit
tricky. Where is it? It is like hide and seek. I have been over this area before. What
have I missed? She then elaborated: Mmm. Getting frustrated. Looking for a way to
over-ride it – move on (07:15), as her search continued to prove fruitless.

Tammy next tried another tack and clicked the Text Version link and continued to
monitor her actions, commenting: Ah, I will check the text version to see if that provides
me with any clues. Just scanning through that. Most of that stuff I have covered. I have
covered the tree branch, and the water bottles, mm (08:08). I moved the list out to the
right. Maybe look at both together. Tammy moved the mouse from the text box back to
the activity and clicked on it, evaluating: But then when I click on the activity it
minimises that. She then maximised the text box again, and monitored her progress: so
quickly scanning for what I might have missed. She elaborated further: But
simultaneously thinking that this is actually a well thought out script. It has got lots of
alternatives, and it has put a script in for most of the things that may occur during the
activity (08:29).

Tammy then moved the cursor back to the activity and clicked the move right and move
left buttons to navigate through the picture and ran the mouse over all parts of the picture,
monitoring her actions as she commented: Running over everything again. A little bit
tighter this time, and evaluated: Is there a piece of equipment that I have missed?
(09:11). She then clicked back on the text version window and scanned through the text,
monitoring her previous check: Um, these rods. I have checked the rod rack, before
elaborating: Can’t seem to see anything that’s jumping out at me. Mm (09:24). She then
clicked on the Job log button at the top of the screen: What if I check the job log? Will
that provide any clues? Nup (09:49), as she continued to monitor her actions. She then
accidentally exited the toolbox and then re-opened it, and evaluated: Oops, clicked the
wrong thing there (10:00), before re-orientating herself: Just bringing the activity back –
looking for that one last item. She paused and appeared to take stock of her situation
(monitored): Let’s stumble across it this time. She continued moving the cursor over the
items in the picture and evaluated: Nothing very obvious. Must be hidden somewhere
(10:07). She pointed to the Pre-start site check buttons at the top of the screen and then
clicked on the Text version link in her bid to locate this final item, and evaluated this
move: Maybe there is some clues in these documents at the top (10:47). Tammy clicked
several times more on the link and got no response, so she then clicked on the Pre-site
check button and the document opened in a window. She looked through the pre-site
checklist and monitored its content: That is pretty well much the same list as I had
before, before evaluating that she: Can’t seem to see anything different (11:19). As
Tammy continued her search for the last of the missing hazards, she engaged a series of
comparison and cause/effect structures to link her train of thought logically during this
series of metacognitive processes.

Figure 44: Pre-start site checklist

Tammy continued her search, and moved her cursor down the checklist and monitored
her progress: Just checking through the drill rig and general setup – all that is familiar –
I think I have picked up most of that stuff in the activity. Steps and ladders – did I see a
ladder somewhere? (11:34). She then appeared to take stock and evaluated her next
move: Let me go back and check for steps and ladders. She closed the Pre-site checklist
window.

Back on to the activity screen, Tammy continued with her search by moving the cursor
around the picture, and monitored her progress: No, it is not that steppy thing he is
standing on. Her cursor passed over and highlighted some rubbish on the ground.
Having found the lost item, she evaluated: Ooops – there it is (11:52). Some rubbish – I
have missed the rubbish (12:02).

Figure 45: Highlighted rubbish

Tammy then clicked the Continue button and read the instructions in the left hand box
which she appeared to evaluate: OK, so I have got to do something with the truck.
Where do I park the truck? (12:10). She moved the cursor around the screen evaluating
her next move: Not in the front, because there does not seem to be much room and the
land is a bit unstable, before elaborating: So I will choose behind because that is a nice
flat area (12:21). She then chose the position to park the truck and clicked the Continue
button. She read the next instruction and evaluated her response: Ah, have to make a
choice about stabilising the rack. And now I need to do a pre-start check so I will go to
the checklist and see what I have to do (12:39). She clicked on the Job Log button. She
quickly scanned the job log and went back on to the activity, appearing to have decided
(monitored) on what to do next. At this point she appeared to come to terms with an
idiosyncrasy in the learning interface: I sort of worked out that I needed to click on the
job log every time, having evaluated: because even if I kept it minimised at the bottom of
the screen it would still not let me continue until I actually clicked the job log for that
particular activity (12:47).

Tammy then clicked Continue link and was taken to the next page on Drill Hole Stability.

Figure 46: Drill Hole Stability page

She scanned the activity and monitored her progress: just the different sorts of holes. A
little bit more information seems to be underneath, so I am looking at that (13:33). The
learning interface had changed slightly in design, and her attention appeared to be
momentarily directed towards its impact. She commented: And I can see now that they
are just hyperlinks I guess to the pictures, because the pictures – if you click on the
pictures – they will take you to the relevant information that is just sitting below the
pictures at the top. So having a quick scan of that information. I am not sure if I really
need to right now, so I might just quickly scan it and come back to it if I need it later.
Just exploring the different help there, or information that is available (13:48). Tammy
had monitored the effect on her learning pathway and evaluated how to proceed by using
a series of complex lists containing cause/effect and comparison structures.

Tammy then clicked the Next button and was taken to the Before you begin screen which
was very text intense compared with the previous screens. This appeared to have an
impact as she monitored the interface: OK, so a fair bit of text here. This appeared to be
only momentary as she moved quickly back to monitoring her learning: So just sort of
familiarising myself there with the rods. Looking at the box and the pin end (14:26).

Figure 47: Before you begin screen

She continued scanning the page and monitoring her reading of sections of the text:
familiarising myself with the different set ups for the rods and the rod slings. In parallel,
she needed to evaluate aspects of the learning interface, commenting: They don’t seem to
be interactive, so hovering over them I hoped there was something interactive that I
could click on to, but there is not, so it is just a straight graphic (14:44). She continued
scanning the document and evaluated her progress: I am thinking now that there is an
awful lot of text there and I am a bit bored with it – what do I really need to know to do
the next activity? Um – thinking that perhaps there is too much being presented to me all
at once, and I wouldn’t mind just knowing the little chunks of information that I need to
be able to do the next activity, rather than all this. She appeared to draw these thoughts
together and elaborated further: Thinking that I wouldn’t mind some sort of animation to
show me what to do (15:12).

The next picture in the document contained a film clip and as she monitored, she
commented: And lo and behold, there is a nice little film clip. She evaluated its impact:
I am quite impressed that the film clip shows a real life scenario. Like this is obviously
filmed in a real life job so that is quite impressive, other than a stereotypical graphic
animation. At this point, her attention appeared to momentarily switch back to the
learning interface itself as she elaborated on one of its attributes: Could probably do
without the music. When the film clip was finished, Tammy elaborated further that the
music in the clip was: A little distracting, and that: I like the fact that they pan out and
pan in. So that was good (15:45). She returned to reading the text and continued to
monitor her progress: I appreciated that clip and again we are back into text here and I
am thinking the text is very heavy. There is a lot of it – I am just skimming it. She then
appeared to pause and take stock (evaluate) of how effective her recent learning had
been: Hopefully some of the words like hoist, plug and those sorts of things, um, I will
store them in my memory so if I see them I can just come back to this point and refresh
and have a look at the text in a bit more detail (16:33). A complex list of cause/effect
structures was used to enable Tammy to draw these conclusions.

Tammy clicked the Next button, continued to work her way through the material
presented and monitored her progress: Here is another activity. Um, just reading
through the instructions. This is a little maintenance activity. She seemed to pause over
the graphic which contained a highlighted section and evaluated: And I can see that the
highlighted portion there is the part that I need to maintain (17:02). She continued
reading and discovered (monitored) she had a small problem with the learning interface:
There is no instructions for this activity. This appeared to cause her to think about
(evaluate) a way forward: so I am hoping that if I just click on something I can just intuit
what to do next. She clicked on the yellow shaft (see Figure 48) and the shaft
disappeared leaving only the hammer bit. This appeared to indicate to her that she was
on the right track as she monitored and evaluated her progress: So clicking on each of
those and I can see that this is the bit that I need to service (17:20).

Figure 48: Maintenance activity

Tammy continued the activity, appearing to ignore the messages in the left hand box, and
monitored her progress: Clicking on each of the things again, there is an absence of
instructions. So just clicking hoping that it will become apparent (17:38). Despite not
following the instructions and advice being displayed in the left hand side of the screen
she seemed to be making progress. A message appeared in the left hand box that said
Great work. Let’s put oil on it and put it back together and an oil
can appeared on the screen. She found that she was able to manipulate the oil can with
the cursor and moved it onto the hammer bit. While doing this she appeared to monitor
her progress: There is a little oil can. Yes, applying the oil to the thread there. Then
evaluated the outcome of these actions: That just seemed the logical step. Um I have
applied the oil and I have got some feedback. I am not sure what to do now (17:45). Her
comments indicated that she now had seen the message in the left hand box that read,
Yep, a thin film of oil is good. Despite this, she moved the oil can back over
the hammer bit and a message appeared, You’ve already oiled that part. After
a few seconds the software seemed to automatically return the shaft to the hammer bit
and posted a Well Done message. Tammy appeared to be confused and monitoring the
learning interface, commented: OK, so it just seems to have automatically restarted
(18:13), not realising that she had in fact completed the maintenance task. She clicked
the Close button.

Tammy next clicked on the Job log button which opened the log in a separate window.
This action appeared to be deliberately driven by her prior knowledge of the interface as
she monitored her progress: And again I have to fill out the job log knowing that I need
to click on it, and evaluated her action: I just can’t click continue, and elaborated: So I
have to wait for that to open before I can close it and then click continue (18:20).

The Job log opened and Tammy scrolled part way through before opening Microsoft
Word’s find and replace feature: Here I actually thought, OK, I will search in the word
document for this maintain drilling and sampling equipment because I wouldn’t mind just
having a look at the form that I am supposed to fill out. In the Find and Replace window
she typed in Maintain drilling and sampling equipment, commenting: So I
am doing a quick search of the document for those key words and I can’t find it. No
items were found in the document and she speculated: Obviously the search function is
disabled in this somehow, so I might just have to scroll through the document to find the
section on maintain drilling and sampling equipment (18:36). Tammy used the cursor to
scroll through the document: And there it is there. So I can see that it is related to some
competency units and I just need to fill in the boxes and write a short amount of text, so I
will do that later when I print it out (19:24). This sequence of interconnected
cause/effect, problem/solution and comparison structures suggests that Tammy, in
attempting to position herself correctly within the document, needed to guide the
monitoring of her search by drawing upon a series of evaluative cognitions. She scanned
the text briefly, then minimised the window and clicked the Continue button.

Tammy was taken to the next screen which provided an introduction to an Air Sampling
exercise using a familiar learning interface.

Figure 49: Sampling exercise introductory screen

Tammy read the instructions in the left hand box before she clicked the Start button. She
was presented with another graphical activity which she studied for a short time and
seemed to evaluate: Um, don’t really know what to do here, but I will just chance it and
take a look, before elaborating: And then I realised that I really don’t know how to do
this activity, I have had no preparation, so I might read the instructions (19:49).
Interrelated comparison, cause/effect and problem/solution structures formed the basis
of the metacognitive process that enabled Tammy to reach a decision about what she
needed to do to complete the activity.

Figure 50: Sampling exercise

She clicked on the Handle air chip samples link at the top of the page and a graphic of a
sample trailer opened in a separate window. She read the instructions and in response
moved her cursor across the graphic which caused various parts to highlight before
clicking on the Next button. She appeared to monitor her gaining of familiarity with the
parts of the trailer: I am rolling over each of the parts as per the instructions just looking
at what each part of the machine does – becoming familiar with the bolded words, before
evaluating her next move: not much there so click to Next (20:23). Tammy read through
the text on the next screen, and continued to monitor her progress: And this is where it
tells me about collecting the samples. I am just reading through the text. Um,
correlating what is written in the text with the picture that is next to it. She reached a
point in the text where she paused the cursor, and evaluated: Oh and the left hand side
words pop out at me, so I think OK I have to read this sentence (20:39). She then clicked
the Next button.

Tammy was taken to the next screen where she used the cursor to scan the document and
continued to monitor her progress: Obviously there is a way of stacking or ordering the
bags once they come out of the drilling samples. So just reading through that in a little
bit more detail, before elaborating further: Since I know that I am going to have to
demonstrate that in the activity (20:59). She then closed the window and returned to the
sampling activity. Momentarily she appeared to pause and take stock (monitor) before
commencing the activity, and commented: So I am ready, before she clicked on the
Ready button.

The empty sample bag on the trailer filled and a text instruction click to move the
bag appeared over the bag. Tammy attempted to grab it using the cursor; however, it
moved only a short distance and stopped, and she monitored her effort: I am clicking to
move the bag. Um, the bag just suddenly dropped and I lost it. Next, she clicked on the
bag again and was able to move it, trying unsuccessfully to place it in an area marked out
on the ground (see Figure 50 above), and evaluated: I had the impression from the
information that I read that I needed to start over on the right hand side of the area that
was laid out, but I can’t seem to drop the bag there. She managed to drop the bag,
evaluating: so I will try over by the peg, and elaborated: and that seems to work (21:22).
She continued to move the bags as they filled, realising (evaluating) that earlier she had
not actually dropped the bag: And by now I am just starting to work out that once the bag
comes out of the machine it needs to drop on the ground before I stack it. She continued
to monitor her progress: I am stacking each bag in an orderly fashion behind the first
bag in the column as per the instructions in the section previously (21:49), before she
reflected on those instructions and elaborated: And it did say in the instructions that
there were five rows so I know that I need to start a new column when I have reached five
(22:15). Again, Tammy engaged in a series of complex lists containing cause/effect,
problem/solution and comparison structures to help her problem solve.

After Tammy had filled three rows of bags the activity ended and she was presented with
buttons that gave her the option to either Do another hole or Take a break. She studied
the screen and noticed the time the activity had taken her. She seemed to monitor this
result: I have looked at my time – 57 seconds, before evaluating her effort: I am not very
impressed with that – I think I can do it a bit faster. As a consequence she clicked on the
Do another hole button, and elaborated: Um - and it is not a huge time investment – so
yeah – let’s just check my mouse dexterity and I will do that activity again a little bit
faster. She commenced a second activity and reflected (elaborated): So probably just on
a little bit of an ego trip here, to see whether I can better my time. As the activity
progressed she appeared to monitor her progress: It is all going pretty smoothly – I have
worked out what to do by this stage – it is just a mechanical exercise, but I am sure going
to beat that 57 seconds this time around (22:23). Tammy completed the activity and
evaluated her effort: I am pretty happy that I have done it in 30. She then clicked the
Take a break button having evaluated her next move: I think I have earned a break
(23:00). A message appeared Nice samples – good job, and Tammy responded
with: Ah, thanks for that feedback (23:07).

Tammy then clicked on the Job log button and monitored her progress through it: And
filling in the Job Log. So I will just quickly open that document (23:11). And here I am
just scrolling through to have a look at what the job log says about the sampling exercise.
And that seems to be the end of the job log (23:19). She closed the job log window,
clicked the Continue button and monitored her actions: So, continuing on, I am sort of
expecting that might be the end. She was presented with a final screen and monitored:
And yep, it is giving me the wrap up.

Figure 51: Final Air Sampling activity

Finally, she clicked the Return to the menu button and monitored her final action: And I
have to complete the reflection section (23:40).

Summary of learning module 1


In reflecting on this unit of work, Tammy’s responses to my observations and questions
were part of a dialogue that was richly loaded with metacognitive linguistic structures.
The observations contributing to my analysis of Tammy’s metacognitive activity and top-
level structure linguistic markers during the 24 minutes of the lesson have been sliced
into three purposive though arbitrary segments (see explanation page 80) to represent
Tammy’s progression through the beginning, body and conclusion of the learning event.
These are represented in Tables 27 and 28.

Metacognitive activity
The totals of metacognitive activity identified for Tammy in this learning module showed
that she drew heavily upon monitoring, execution and evaluation, and to a lesser extent
upon elaboration and orientation. There was no planning in this instance.
Typical examples of each have been presented in the preceding narrative.

Table 27: Metacognitive activity learning module 1 - Tammy

              Orientation  Planning  Execution  Monitoring  Evaluation  Elaboration
First 5 mins       4           0        18          15           9          10
Body               1           0        38          45          40          13
Last 5 mins        3           0         9          15          11           4
Total              8           0        65          75          60          27
%               3.41%       0.00%    27.66%      31.91%      25.53%      11.49%

Top-level structure linguistic markers


A summary of the linguistic structures used to identify Tammy’s metacognitive activity
is outlined below. These indicate that Tammy relied heavily on cause/effect structures,
to a lesser extent on comparison, and made very little use of problem/solution structures
to drive her metacognition. She made more use of complex lists than simple lists in this instance.

Table 28: Top level structuring activity learning module 1 - Tammy

              Simple TLS event   More complex top-level structuring events
              List - Simple      List - Complex  Cause/Effect  Problem/Solution  Comparison
First 5 mins        7                 11              13               3               7
Body               11                 29              40               5              22
Last 5 mins         4                  9              16               5               8
Total              22                 49              69              13              37
%              11.58%             25.79%          36.32%           6.84%          19.47%

Self-awareness of learning autonomy: 1st rating


At the completion of learning module 1, Tammy participated in a 10 minute interview in
which she was asked to reflect upon and answer the following three questions:

1. How would you describe your engagement with hypermedia?

2. To what extent do you see yourself as autonomous in such activities?

3. How would you rate how effectively you engage with educational hypermedia on a 6
point scale?

Tammy’s responses to each of these questions are outlined next.

Response to question 1
Tammy described her engagement with hypermedia as: Pretty positive in general, I like
learning via the internet and use hypermedia every day in the context of my work. I am
comfortable moving in that environment. Whilst she liked training in this environment
she: also likes face to face learning as well. What she liked about taking a structured
hypermedia course is that: someone has actually sat down and collected together in one
spot the material I need to know in order to achieve the outcomes of the course. She
thought that it was useful that: someone has already made the decision for me about
what are the boundaries of what I need to know. One thing in particular she liked about
hypermedia learning was: the opportunity for learners to explore further if they are
interested. So, she liked the structure and the boundaries afforded by the hypermedia
materials on the one hand, but also enjoyed the opportunity to be able to explore outside
of these boundaries when necessary. This indicates that she is a competent and confident
user of hypermedia.

Tammy believed that the purposeful structure of the materials provided by the
hypermedia author acted much like a teacher setting boundaries in the classroom, and that
it: stops me wasting my time and going down the rabbit warren of things that I may not
need to know. She described herself as often being time poor when it comes to learning
so: if someone has made the decision for me about what I need to know to get through
this course, then I am happy enough studying within those boundaries. If there are
aspects of the learning for which she has a special or deeper interest, then she: will go off
and research that a little bit more. She described herself as being very focussed when
she studies, as a consequence: I like to stick within the boundaries of the course and
exploring within those boundaries, not only just the breadth of the information, but also
the depth as well. She made the point that she would: rather go deeper than broader.

Tammy suggested that in managing her learning in a hypermedia environment: I am very
focussed, I’m time bound and quite methodical, and I do approach it in a linear way.
It would seem, therefore, that the structure and linearity are seen as learning aids;
however, she does suggest that: if there are elements of the learning that I know already,
then I am quite happy to skim those sections, although I tend not to omit them completely.
In the first learning segment reported earlier, she demonstrated this by scanning topic
headings and text on a regular basis. She described herself as a: reflective learner who
in her early learning made: copious notes. However, much of that early note-taking was
in classrooms and was: more the act of doing something because I was taught and
believed that taking notes aids learning. She appears, though, to have modified her
learning methods as a consequence of learning using hypermedia, stating: but now I find
that that is not necessarily the case, as I find that now reflecting aids my learning more.
She said that now she does not take a lot of notes, rather: I’ll just sit back and think
holistically, and process as I go through it. If someone gives me a handout I’ll look at it
and assess at that time if that impacts on my thinking and processing. If it doesn’t, I’ll
put it aside and come back to it later. In contrast, she believes that if it does impact on
her learning, then she is likely to: focus more on that handout a little bit more deeply at
that time, because it is contingent on what else is going on in my mind at the time.

I asked Tammy if she engaged with other learning objects in the hypermedia, for
example, exercises and interactive activities. She responded: I do when I am ready to
engage, um, I recall in the learning I just completed there were some hypermedia things
that I skipped over. For example there were some hyperlinks to a word, and the first
couple of times I clicked on them and discovered it was just a definition, so I only
selected those words where I was uncertain of the meaning. She also indicated that she
engaged with hypermedia differently when using it on subsequent occasions: Yes, I’d say
the second time round I would interact with it differently. The first time is getting an
overview of the breadth of the topic. Going back a second time, I’d be aware that my
learning has to be a little bit deeper and a little bit more time understanding and
processing, rather than the first time through just getting a picture. More specifically,
she said that if preparing for a test: I would be using the hypermedia probably a little bit
more intensively; I’d be slowing down the rate at which I skipped through it so that I was
ensuring that I was understanding enough to feel confident with the material.

Overall, Tammy describes her interaction with hypermedia as a positive experience in
which she engages with the material in a confident and competent manner.

Response to question 2
Tammy stated that: I would say that I am very autonomous, and suggested that she: likes
to do things my way, and in my time. She described herself as: confident when engaging
with hypermedia. She believed that this confidence is partly due to the fact that: I
appreciate the structure that’s in it and the fact that someone has taken the time to clump
it together under its discrete themes or topics. I suggested to her that some might see
such structure as an imposition upon their autonomy. Her response was that in her case it
was quite the opposite, and that: it actually provides guidance and it helps me make
sense of it. Elaborating further: someone’s done a little bit of the pre-work for me in
terms of clumping it all together and providing some very clear themes about the
material.

Her recollection of the learning discussed earlier was that: it provided an overview of an
area that was completely new to me. Everything I was learning I was learning from the
ground up. So, providing that overview information. On further reflection she reported:
I noticed also that I skipped the activities the first time I went through because I just
wanted to get that high level overview first, with the intention of returning again and
doing the activities in my own time. She summed up by saying: that to me, um, I guess
reinforces that I am fairly autonomous in the way I engage with hypermedia. In drawing
a parallel to her having a different approach she explained: If I was the sort of person
that didn’t need to get that high level overview first, you know the helicopter view before
diving into it in depth, if I was the sort of learner that took each chunk and worked
through each chunk as I came through it, and did the activities as I went through it, that
would allow me the same level of autonomy as the way that I chose.

Tammy sees herself as an autonomous learner. Her knowledge of, and her belief in her
learning capacity, enables her to use hypermedia in very sophisticated ways. For
example, rather than seeing the structure of the hypermedia as restricting her autonomy,
she sees it as a strength she can draw upon to focus her efforts, as well as providing her
with guidance as to the boundaries of her learning.

Response to question 3
Tammy rated herself as a 6 (very effective) on the Likert Scale presented. She suggested
that this rating was because: I have been using computers and media for a long time now.
She added that it was: integral to the way that I work and
integral to the way I study and learn. She reported that outside of work she engages with
media as a matter of course through: RSS, social bookmarking, blogs, that sort of thing,
so I am really comfortable working and operating in that environment on a daily basis.

Metacognitive training intervention
The purpose of this intervention was to provide Tammy with training and to raise her
self-awareness of her metacognitive activity in hypermedia learning prior to undertaking
learning module 2. A thirty (30) minute session was conducted in which Tammy was:

(i) Provided with a paper copy of the metacognitive taxonomy (Table 6, page 57) and
top-level structuring rhetorical structures (Table 7, page 61) and the initial 10
minutes was spent explaining and discussing the categories they contained.
(ii) Provided with a paper copy of the analysis of her first learning event, and the next
15 minutes were spent examining and discussing these data.
(iii) Engaged in a 5 minute reflection in which she was asked to reflect on the
utilisation of her metacognitive actions and discuss future utilisation.

Tammy was asked to reflect on these discussions prior to undertaking a second module.

Learning module 2
The second module in which Tammy engaged was within the Australian Flexible
Learning toolboxes – CUF01 Film, TV, Radio and Multimedia Training Package. The
course was the Certificate II in Screen, and the element of competency she undertook in
this learning session was Animation (405).

Tammy commenced by clicking on the Before you Begin link on the opening screen.
This opened a small secondary window on the screen and she quickly scrolled through
the text: so basically I was just scrolling through there to see what it was exactly they
wanted to tell me but it was just the usual stuff. So I just ran my eye over it (00:13). She
commented: So, eager to start, so I was happy to enter into the doors (00:34). Tammy
appeared to be orientating herself to the learning interface and monitoring the information
provided in order to evaluate how she would proceed with the task at hand. Tammy
clicked on the Click here to begin arrow to enter the Lightbox Animation Studio,
commenting: And it actually had that arrow so it made it quite easy (00:42). She
quickly ran the cursor over a number of the animated learners on the screen, which then
provided a short introductory message in a balloon, before clicking on the Getting Started
link.

Figure 52: Lightbox Studio

She commented: what I wanted to do was click first on the getting started, just because I
am assuming they are giving you this information to orient you so I kind of wanted to
work out if they are giving me a sequence of things to do, or find a general map of the
site. So really that was just telling me about who all the characters were (00:48). And
some of the names of the jobs were starting to appear there (01:25). There was little in
the way of instructions so Tammy continued trying to orientate herself to the learning
interface and monitor the information provided in order to work out how to proceed by
engaging in a complex list of cause/effect and comparison structures.

Figure 53: Getting Started

Tammy skimmed through the text and appeared to read some parts more thoroughly. She
commented: Well I guess obviously where there is a lot of dot points, you can skim quite
quickly through that. But sometimes if there is text you kind of need to slow down a little
bit. Because the text to me indicates that there is more content there – there is more
narrations, so I kind of need to slow down and just read some or just skim read some of
that just to make sure I am not overlooking something that is important (01:43). This
complex list of comparison and cause/effect structures indicates that she was in a
monitoring phase and used the language structures to come to grips with the learning
interface. She then moved the cursor back and forth between the symbols on the
Introduction page and the Getting started screen and continued to monitor and evaluate
progress: because I was starting to get some hints there about those graphical symbols
and what they meant, so I kind of wanted to just run my mouse over them to make sure
that yes, they were changing and it was consistent with the information they were giving
me there (02:15). She then scrolled down the Getting Started window and commented:
Then again this is all general stuff that I would expect to see so I kind of didn’t spend too
much time on that. Just sort of clicking through (02:39). The learning matrix, actually
when I saw that come up I was quite interested in it – I thought that was something that I
would like to look at a little closer. Because that might be a bit of a mud map for me. To
see how everything was laid out. So I thought I was going to go and have a look at that
later (02:48). This complex list of cause/effect structures suggested that Tammy had
moved from an orientation phase on to a phase of evaluation and planning. She continued
to scroll through the Getting Started screen and commented: I was reading through here
– stopped to read through what the jobs were because I thought this was perhaps
something that might be a little bit important to get familiar with the major tasks. So I
just kind of wanted to read and absorb a little bit of that without taking in too much of the
detail. Like I recognized that a lot of these concepts were new to me and I had not really
come across them before so I kind of just wanted to absorb the names and just get a
general feeling without trying to understand them in too much detail (03:03). This list of
comparison and cause/effect structures suggests that while monitoring the interface, she
was using a series of evaluations to enhance her learning. She paused and read more
carefully through a part of the instructions and commented: A fair bit of text there
(03:44). Because they are actually giving you sort of instructions there – like note this,
and do that in the final part, so I thought those things were kind of important. So I
wanted to spend a bit of time reading them instead of just skimming them (03:56). This
complex list of cause/effect and comparison structures suggests that she was not just
monitoring and evaluating, but now using the process to plan her way forward. She
continued scrolling through the text and commented: And again this stuff here looked
reasonably straight forward. The text I presume was being supported by the pictures so I
thought well that’s …(04:10), indicating she was still monitoring and evaluating her
progress. She then clicked on the Learning Matrix link as she seemed to re-orientate
herself: And you can see I went straight back to the Learning Matrix because that’s
something that interested me, before evaluating: However, when I got there I was a little
disappointed because basically that was just telling me about the job. And that job had
some activities in and was linked to the performance criteria (04:26).

Next, Tammy used the mouse to move over the job descriptions, and then moved to the
activities and performance criteria, continuing to monitor and evaluate: And then when I
clicked on performance criteria, it brought up this other box, which really wasn’t a
performance criteria at all. That was unit descriptor, so I was a bit confused there
thinking, oh well, that’s a little bit inaccurate. And then I started to lose faith in like
what they were telling me to do. This caused her to plan her next move: Like the
credibility was shot, so thought I would just try one or two more (04:42).
Tammy clicked on the Activities link, which opened in a secondary window, and elaborated
on her expectations: OK, yes they are activities. Yeah, I wanted to read about what was
expected of an Art Director and a Clean-up Artist, and planned her next move: so I kind
of went through that a little bit carefully, again, new names, just wanted to increase my
familiarity with them (05:05).

Figure 54: Activities window

She then closed the Learning matrix and clicked on the Site Map in the navigation panel.
The following complex list of cause/effect and comparison structures suggested that
Tammy was attempting to re-orientate herself to her learning: because I like to know
where I am going, like the mud map, I like to see the helicopter view first so the site map
is, in fact, I even do that on websites – I look at the site map rather than use the search
bar because I want to see how it is all laid out. And that kind of gives me in my mind’s
eye a mental map, and that will help me orient (05:23). She followed up by monitoring:
And here I could see that the jobs were colour coded, orientating: so again the colouring
was a bit of orientation navigation as well, and evaluating: So that kind of clicked in to
me that things were being colour coded as well (05:53).

She moved the mouse from left to right monitoring each worker sitting at the back of the
room: So here I was rolling over each of the persons – a quick look at the page – I
assumed that one should start at the left and roll to the right. She then seemed to
evaluate why she thought that: Don’t necessarily know why but it sort of seemed like a
bit of a process chain of one person handing off one thing to the next person, before
elaborating: and the fact that they were all in a line seemed to indicate that yes that
might be the chain progression (06:15). She continued moving the mouse progressively
through the workers at the back of the room, then moved across the side to the front
workers, and finally moved the mouse over the workers in the middle of the office.

Figure 55: Workers in the Studio

She then paused to monitor and evaluate her action: Of course when you come to the
little box in the offices you realise that the chain is breaking down a little bit and some of
these people may have different – ah – a different focus to their role within the group and
you could sort of see here that you are still in the technical stuff, but when you jumped
into the middle it sort of got a little bit adminney because they were answering phones
and – but yeah (06:50). And in turn, using comparative structures, she seemed to
evaluate the consequence: So that was just giving me an overview of the major roles
within the animation sequence and I wasn’t really bothered about learning their names,
before elaborating: but I guess just associating a job title with a very brief description
with what they do (07:30).

She then attempted to click on what appeared to be a link, Studio Tour (see Figure 39
above) and nothing seemed to happen: And I was a bit confused there for a moment, but
I thought perhaps that might be the internet connection, so I clicked it again – again
nothing really seemed to happen – so I thought OK abandon that activity and go on to
Job One (07:45). This complex list of comparison and cause/effect structures suggests she
found it necessary to monitor and evaluate her actions as she again struggled with the
learning interface. She appeared perplexed and clicked on the Job One link and
explained: So, opening up Job One, I was momentarily confused there because I thought
that there was more information – there could have been more information there, as she
evaluated the situation. She monitored her action: and I kind of wanted to maximise that
window and I saw, no, I couldn’t maximise it so they were really just presenting me with
that amount of information there, before evaluating its impact: which really seemed a bit
too light on for me (08:00).

Figure 56: Job One Screen

Tammy then clicked on the TVC link which opened a glossary in another window, and
commented: So, I knew what a TVC was because we talk about them here, but I wanted
to see what their definition of a TVC is. After looking at the definition of a TVC, she
started to explore the glossary further by casually scrolling through it. She seemed to
have temporarily suspended the line of learning she had been taking: So of course I
clicked on that and then got lost in the glossary for a while (08:30). She then appeared to
reflect (monitor) further on this action: So this was bringing up a glossary. Yeah, linked
to a glossary, brings up the glossary, and I saw that TVC was Television Commercial
which I knew that is what it was. Before she followed up with what appeared to be an
evaluation of what to do next: and I thought OK here is the glossary and let me spend
just a couple of minutes. This culminated with her planning her next move: I will start
from the top and I will just run my eye over the key headings there just so I am familiar
with the type of terminology that they could be using in the course. In trying to draw to a
close this diversion to her learning, she elaborated: I didn’t really want to spend any time
reading definitions or explanations (08:40). Tammy still appeared unable to get back on
her learning track and continued to scroll through the glossary. This required her to
orientate herself to the glossary’s structure and content: Although a few
things did tweak my interest, for example, cel, c e l there – I hadn’t heard of that before
so I stopped to read that. Same as cel paint. Kind of just skim read that one and then
continued on. However, she appeared to pause and evaluate, commenting: But what I
was trying to really do was just acquaint myself with a whole new system of terminology
and jargon that would probably make more meaning for me later on, but if I could just
get some of those key words into my head, then I would, you know, it would kind of make
sense to me when I was reading through perhaps some text later on (09:16). So it did
seem a bit tedious at the time (09:54). She continued scrolling through the glossary and
stopped to read some definitions. She clearly was covering a considerable amount of
new material, so I commented that despite this, she had not taken any notes. She
responded evaluatively: because I felt there was so much text on the screen and that this
was an approach that I would absorb by doing, so I really didn’t want to take notes. She
paused seemingly to monitor her position, and commented: I really wanted to go with the
notion of the activity that they were getting me to do. And I thought that I would get a
clearer understanding by following a process rather than by actually studiously taking
notes. She evaluated further: And I think in these sorts of toolboxes, I am familiar with
the format of them and I do know that in each of the activities they do tend to give you
instructions, so at the time I was confident that all would be revealed as I stepped
through the process (10:12).

Tammy next reflected (monitored) on her learning approach to the toolbox: I am aware
that toolboxes are something that support an institutional pathway so it is something that
supports things that you do in a class, or in a workshop, or on-line, or those sorts of
things. Or if I was doing this as a student, I would probably be going to classes and the
toolbox would be given to you as sort of homework or an activity to work through at your
own pace. She then seemed to evaluate the consequences: so I was aware that I was
missing content and context by not having gone to classes, but this still had sufficient
content for me to be interested. And I felt it had content that I could work my way
through it and it would give me an overview of what an animator does (11:42). She
continued and elaborated further: Interestingly I happened to walk past the animation
classroom here at Southbank one day and it was absolutely full – it was the biggest class
I had seen. In fact they had to put two classrooms together, the class was so big. So I
was just sort of aware that this is a pretty popular industry to go into, so that sort of
tweaked my curiosity at the time. I thought, oh, what is it that animators do? So hence
this is quite interesting (12:34).

Tammy closed the glossary and returned to the Job One screen where she momentarily
ran the cursor over the Key drawings hyperlink. Next, she selected the link to Kate the
Production Manager and orientated herself to the text: So here I am opening up Job One
and I am just looking at what the production manager is, and monitored the outcome:
And that is just outlining the job. She moved on to the Self Quiz and appeared to
evaluate its usefulness: I thought do I want to really take the self-quiz? And I didn’t
really. This evaluation using a complex list containing cause/effect structures seemed to
press her to plan her way forward: so I skipped to the content below the self-quiz box and
when I got to the end of that I thought, mmm, maybe I should take the self-quiz (13:02).
So I went back and did the self-check quiz and I was both delighted and disappointed that
there was only one question in it.

Figure 57: Self Quiz

Tammy continued reading through the information on the production manager and
reached a paragraph that discussed model sheets where her cursor showed she stopped
and re-read the material a number of times. The term Model sheet was a hyperlink,
which she clicked on and was then taken to the Glossary. She appeared to evaluate the
term: And of course not being overly familiar with the terminology, model sheets there, I
was just clarifying what model sheets were (13:31). She read through the definition
before she returned and continued reading. In the same paragraph the text contained
another hyperlinked term, scene folder which she paused at before moving on. She
appeared to monitor her progress: And I didn’t really go into the scene folder. Because I
sort of thought, yeah, I know what that will be. I didn’t need a detailed explanation, but I
thought this model sheet was quite key to the particular job that they are asking you to
do, before evaluating: so I thought I would just check exactly what that is (14:14).

Tammy had engaged in a complex list of cause/effect and comparison structures that
enabled her to work out what she needed to understand in the content.

Tammy next clicked on the Self Check Quiz link, which opened in a separate window,
and read the question.

Figure 58: Self Check Quiz

She seemed to attempt to orientate herself to the question: And what is a Ruff, before
monitoring her current knowledge about it: Had no idea, and evaluating her response:
But intuitively I thought that it would be a drawing from an animator. Tammy selected
answer B and was given a message that she was correct. She appeared to not know what
to do next, commenting: Momentarily confused – where is the next question. There are
no arrows (14:40), as she monitored the learning interface. Tammy had not noticed that
the Check Answer button had been replaced by a Next Question button and closed the
window. She was returned to the previous screen where she selected the Topic Menu
button and then the Welcome to the Job link. She realised (evaluated) that she had
previously seen this text: I wondered whether this was just the same information that I
had just been to and yes it was. So, spend no time there. She then attempted to move on
by clicking on what appeared to be a breadcrumb trail before selecting the Topic Menu
button followed by the Job Requirements button. She appeared to monitor this action:
Breadcrumbs at the top weren’t working, before planning her way forward: so I realised
that I would actually have to go back, and re-orientating her learning pathway: and
navigate through from the beginning again (15.23). The Job Requirements screen
opened. This series of problem/solution structures helped her to re-orientate her learning.

Figure 59: Breadcrumb trail

Tammy used the cursor to read through the text, appearing to orientate to the new
material: Yeah, and I hadn’t seen this before, so I spent a bit of time just reading through
what the job requirements were. A bit confusing that there was kind of double navigation
there. You could get to the same synopsis information the same way and yet this job
information stuff was hidden. She monitored her progress: Normally I might have
thought that OK the first of the two hyperlinks that I clicked on – the first one led me to
something that I had seen before – by extension, the second one should have also led me
to something I had seen before, as she proceeded. She then paused to evaluate,
commenting: But I was surprised that it hadn’t. So I thought well maybe that was a
design improvement that they could make next time (15:47).

Tammy then clicked on the Activity menu link, followed the Visiting the Team Members
activity link and read the short text. She next selected the Print Checklist link where she
seemed to monitor her action: then I realised that we were not set up here for a printer
(16:44). The checklist window opened and she continued to monitor, commenting: And
I could see here that this was a list of key people that I needed to consult. Next, she re-
planned her learning strategy and started to take notes, commenting: and I thought I had
better write those names down. So I wrote the names down thinking the names would
prompt me. Following her note-taking she returned to reading the screen and continued
to monitor her learning: but then realised that it was actually their positions that I should
have written down (16:46). This appeared to cause her to re-orientate to the learning
interface: I could see though that the list in the drop down menu – where it says
personnel there, was roughly following the top four positions, so then I thought well in
order to navigate through that I would just use the drop-down menu and then just
sequentially select each of the positions (17:03). The complex series of lists that Tammy
used contained comparison, cause/effect and problem/solution structures that enabled her
to formulate a logical pathway to navigate through the interface.

Figure 60: Print Checklist

Below the picture of Kate were a set of arrows which Tammy tried to click on next: I
tried here with the left and right arrows, but they didn’t seem to really go anywhere
(17:42), evaluating the result. She clicked on the Close Window link which returned her
to the Job One screen, and monitored the consequence: And then realised, oh, I have
closed that down, before planning her next move: and I will have to go back in again
(17:45). She hovered over the download resources button and then clicked the hyperlink
Kate the Production Manager. She tried the left and right arrows again and the next
screen appeared. She appeared to evaluate this outcome: Oh, they were working. Oh,
OK. It wasn’t working the first time, before elaborating on why: because I was already
on that page, mmm (18:05).

She then selected Production Manager from the dropdown personnel menu and then
chose the Activity Menu link, followed by the Visiting the Team Members hyperlink. She
commented: So here I am just going through the Personnel. Yes, working out the same
order (18:07), indicating she was still monitoring her progress. She then chose Director
from the drop down personnel list and read through the text. Her comments indicated
that she monitored her progress: And then just reading through these people’s jobs.
Which I kind of wanted to spend a bit of time doing – not just sort of skim read it, and
elaborated: but actually just get my head around what the function of each of the key
personnel was (18:30). Tammy engaged in a complex list of comparison and cause/effect
structures to come to a deeper understanding of the content.

As Tammy continued to choose each of the personnel and read the text about them, I
asked her how these examples contributed. She elaborated: Well, Kate had explained
that I needed to go and see each of these people on the list. This caused her to orientate
herself to each of the personnel: so I was just now clicking on each of those personnel to
see what they had to tell me before I actually commenced doing the job. As she
proceeded, she continued to monitor using cause/effect structures: So this is just really
reading through that really carefully to try and get a mental picture of the roles and
responsibilities of each of the key people (18:50). She continued to select and read about
each of the personnel and reflected: I was reading this quite carefully because I was
trying to kind of form in my mind a collage of the sorts of activities that I would be
required to do and again, even though I have looked at the glossary, it’s just becoming
more and more familiar with the terms model sheets, scene folder etc., so each repetition
of those just basically gives me more context and I can locate what that particular item is
in my newly acquired vocabulary (20:01). This complex list of cause/effect structures
suggests that her monitoring of this part of the learning caused her to orientate herself to
the activities of each of the different roles and to evaluate her knowledge of building
vocabulary.

Tammy chose the Storyboard Artist from the drop down personnel menu and read the
text briefly. She then clicked on Storyboard Page 1 link.

Figure 61: Storyboard Artist window


The first page of the storyboard opened and then Tammy maximised the screen to enable
her to read the text better.

Figure 62: Storyboard Page 1

She studied the graphics and accompanying text and commented: there is quite a lot of
new stuff here that I have not seen before. Obviously there is a drawing, then there is a
dialogue and then an action or effect. And then I was a bit confused about what would
go in that bottom box called Trans, but I could see that they were all blank, so I thought
all will be revealed later (20:40). She continued to review the page for a short time and
remarked: here I could also see up in the top left hand corner there is the scene one, and
I didn’t know what a bg was, and I thought, mm, I could possibly go and look at the
glossary, but maybe it is just – it seemed to have a place anyway and I didn’t know what
bg stood for – maybe it would become clearer later (21:04). These complex lists of
cause/effect and comparison structures suggest that Tammy drew upon a series of
evaluative cognitions as the means by which she regulated the monitoring of the complex
nature of this page, and the new information it contained.

Tammy then closed the window and clicked on the link Storyboard Page 2 and studied
the page. Although this second storyboard page was similar to the first, it still appeared
to be pressing Tammy cognitively: So here I was just trying to get my head around the
actual animation. Like I could see it was kind of like a cartoon almost. I mean I am quite
familiar with cartoons, but in the cartoons they usually just have the little balloon, and
the speech in the balloon. Obviously here on the storyboard they have the speech or
dialogue below. I was interested actually in the action/effects. Obviously they’re giving
a description of what’s actually happening in that scene. Whether it is a hand moving or
a person moving. And I didn’t think that the action and effects and the dialogue together
were really – I couldn’t really form in my mind a visual picture of what was happening
(21:38). She made a comment about the storyboard pages and the abbreviation bg: Well,
it is obviously sequential because they are numbered. No, didn’t really know and I didn’t
give it too much thought (22:30). I also, by reading the dialogue, action and looking at
the pictures, I didn’t really think it was that funny. I know it is a 30 second commercial
and you have got to get your message across fairly quickly (23:00). Once again this very
rich list of cause/effect and comparative statements suggest that Tammy continued to
draw upon a series of evaluative cognitions as the means used to regulate her monitoring
of this continuing supply of new content and interface knowledge.

Tammy closed the window and clicked on Storyboard Page 3 and spent some time
looking through the material before closing the window and returning to the Storyboard
Artist page. She then clicked on Storyboard Page 3 again and commented: Oh I thought, did I
go into page 3, or Storyboard page 3? And yes I did, so closed that down again and just
went into page 4, monitoring the mistake she had made in progressing. She evaluated
this further: It would have been helpful I guess if the colour of the hyperlinks changed for
the ones that you had visited (23:12).

Figure 63: Storyboard Page 3

Again, she spent some time looking at the screen and commented: camera trucks in
quickly to mid shot on LP. So I assume LP was lemon pops. Yes, but obviously that
indicates some kind of positional change, or transition as you say (23:58). These
cause/effect and comparison structures suggested that she continued to evaluate this
continuing stream of new knowledge.

Tammy then closed the window and clicked on Storyboard Page 5. This was the last
storyboard page and she appeared to study it in the same manner as the previous ones.
She seemed to be attempting to assess their overall impact on her learning, and commented:
And I guess realising this was the last storyboard, I was really keen to see the punch line.
And then I read the dialogue and the action and I just thought is that a punch line? Is
that going to make me laugh if I was the consumer? And I guess it struck me then that as
an animator, you know how invested are you in the actual message that you are creating
for (24:11). This list of comparison and cause/effect structures indicated that Tammy
was attempting to evaluate what she had learned from this series of storyboard pages and
relate this to the consequences for animators more generally, before elaborating further:
Yeah. Does the customer give them the storyline? And then they just have to make it a
reality (24:52).

Tammy closed the window, returned to the Storyboard Artist page once more, and
clicked on the Sign Post button which took her to the Head of Timing page. She scanned
through the materials and commented: And I was really pleased when I saw there was a
demo here. Because I thought I really do want to see what this ad looks like, as she
continued to monitor her progress. She next appeared to stop and orientate herself to a
new term and plan a way of becoming familiar with it: In my head I spent a few seconds
going over the word animatic because it is a word that I am not familiar with, so I was
really just pronouncing it in my head, before finally elaborating: so I would become
familiar when I saw it on the screen (25:10). She then clicked the Demo button, then
Play.

A window opened and a demonstration of the ad played which she watched carefully and
appeared to sum up what this all meant to her: And here I was very pleased that they
included a demo and called it an animatic. And seeing it all together, it now made sense
to me that obviously Lemon Pops, well people would think well lemon is bitter, if you
were having it with milk it wouldn’t go very well. They had to sell it to the audience that
it was a sweet breakfast cereal – with a twist obviously, a little bit of tang there, so it is
something unusual, so in that regard they were kind of getting the message across – you
know, don’t expect the cereal to be bitter – sweetened with sugar and a bit tangy – that’s
how I read it anyway (25:50). Once again this very rich list of cause/effect and
comparison statements suggests that Tammy continued to draw upon a series of
orientation, evaluation and elaboration cognitions as the means of drawing together her
conception of what she had just learned. She closed the Demo window after it had
finished playing.

Tammy moved on to the Layout Artist, which was the next entry in the personnel drop-
down box, and scanned through the material.

Figure 64: Layout Artist screen

She came across two unfamiliar terms: They were talking about layout and clean up in
betweens and I was sort of starting to have a query about OK, well I just have to get my
head around exactly what this clean up is (26:58). Tammy appeared to have evaluated
her lack of understanding of the term, as she planned to correct this. She clicked on
Glossary in the left hand navigational panel and used the alphabet buttons to search for
words. She located and read the definitions of clean-up and in-between. This appeared
to assist her with the reading of the learning materials and she commented: Because now
that I have seen the visuals, the job is becoming a little bit clearer to me. So if I look up
the words clean-up and in-between, I will get a much clearer idea of what I am expected
to do on job one, so now when I am reading the definition of clean-up it is actually
making sense to me, whereas before I was becoming familiar with the term, but reading
the definition here, I could actually see how it fits in with the process. So I had quite a
clear idea of what the terms clean-up and in-betweens means (27:24). Once again this
very rich list of cause/effect and comparison structures suggests that Tammy drew upon a
series of evaluating and elaborating cognitions as the means of drawing together her
conception of this section of the learning. Tammy closed the glossary.

Tammy was returned to the Layout Artist screen where she continued to read the text and
study each of the diagrams as she moved systematically down the page. She came across
a term she was unsure about, remarking: And again I think I went in and had a look at
the definition for key pose. Sort of got a visual indication there from the two diagrams
that they showed, but I thought I had better go back and just double check this key pose;
and I had remembered actually when I first went through the glossary that I did stop at
key pose. Because when I started to read the first line of the definition, I thought, yeah, I
remember reading that before. So having become familiar with those words from the first
reading of the definition in the glossary, and now seeing it in the flow of the job between
the various people, it is all sort of starting to make more sense to me (28:14). This
complex list of cause/effect and comparison statements suggests that Tammy engaged in a
series of planning, monitoring and evaluating cognitions as the means of making sense of
the term.

Tammy then clicked on Key Animator from the personnel drop-down box and took some
time to read the information on the job description. She commented: And here I actually
thought the content of the people’s job was becoming more interesting (29:00), as she
evaluated its impact. She then clicked on the Key drawings link which took her to the
glossary. She read the definition for key drawings and then clicked on the Welcome to
the Job page for the Key Animator, left the glossary at the back, and continued to read the
job description. She commented: I remember reading all of that stuff quite carefully.
And I think this explanation of the job was very clarifying for me. Having seen the
graphics, the cartoon strip – yes I recall it was pretty rough – and yes I can see what the
clean-up involves and the in-between involves. Tammy appeared to evaluate and monitor
the material as a way of clarifying her understanding, as well as indicating that her
planning of this had been successful: So that faith that I had in the beginning that it
would come clear to me has actually come to fruition now (29:33). She continued
reading the information on the key animator’s job description and using the glossary
hyperlink, clicked on the model sheet link. Monitoring her progress she commented:
And again here I was just double checking my understanding of some of those key terms
(30:31).

She then completed selecting the personnel from the drop-down box and read through
each of their job descriptions to complete this activity. On completion she remarked: In
fact I did do all of the people. And in fact I just want to run back to the animation room
and again run my mouse over those key people so I could actually visualise where they
were in the room (30:40). This final act of monitoring was followed by a plan of how she
might crystallise the learning further. Tammy stopped at this point.

Summary of learning module 2


In reflecting on this unit of work, Tammy’s responses to my observations and questions
were usually part of a unified dialogue that was richly loaded with metacognitive
linguistic structures. The observations contributing to my analysis of her metacognitive
activity and top-level structure linguistic markers during the 30 minutes of the lesson
have been sliced into three purposive though arbitrary segments (see explanation page
80) to represent Tammy’s progression through the beginning, body and conclusion of the
learning event. These are represented in Tables 29 and 30.

Metacognitive activity
The totals of metacognitive activity identified for Tammy in this learning module showed
that metacognitively she drew heavily upon monitoring, evaluation and execution, and to
a lesser extent, orientation, elaboration and planning. Typical examples of each have
been presented in the preceding narrative.

Table 29: Metacognitive activity learning module 2 - Tammy


Orientation Planning Execution Monitoring Evaluation Elaboration
First 5 mins 6 3 11 9 12 0
Body 18 11 33 40 33 12
Last 5 mins 3 4 10 10 10 3
Total 27 18 54 59 55 15
% 11.84% 7.89% 23.68% 25.88% 24.12% 6.58%

Top-level structure linguistic markers
A summary of the linguistic markers used to identify Tammy’s metacognitive activity is
outlined below. These indicate that Tammy relied heavily on cause/effect, and to a lesser
degree comparisons, and made very little use of problem/solution structures to drive her
metacognition. She used complex lists extensively, and made little use of simple lists.

Table 30: Top-level structuring activity learning module 2 - Tammy


Simple TLS More complex Top-level structuring events
event
List - Simple List - Complex Cause/Effect Problem/Solution Comparison
First 5 mins 5 10 18 0 6
Body 5 39 58 4 39
Last 5 mins 3 6 11 1 5
Total 13 55 87 5 50
% 6.19% 26.19% 41.43% 2.38% 23.81%

Self awareness of learning autonomy 2nd rating


At the completion of this second learning module Tammy was presented with the 6 point
Likert Scale and asked again to rate herself as to how effectively she considered she had
engaged with educational hypermedia on this occasion.

Tammy rated herself as a 6 (very effective). She was then asked to comment on the
reasons for the rating.

Tammy believed that: even though this was new material, I was confident in my ability
with hypermedia learning generally. As an example, she reflected on the problem she
had encountered when attempting to prepare the drilling site, and how she had resolved it.
She remarked: when I was trying to find the last item to clear the drilling site, although it
did take me a little time, I did think that my ability to draw upon my knowledge of user
interfaces enabled me to solve the problem. She added further: the interface was
graphical and interactive and I was able to work my way through it without any
assistance.

Effect from metacognitive training
Tammy rated herself equal in both learning modules:
Learning module 1 rating – 6/6
Learning module 2 rating – 6/6

Table 31: Collective data of metacognitive activity - Tammy


Orientation Planning Execution Monitoring Evaluation Elaboration
S1 S2 S1 S2 S1 S2 S1 S2 S1 S2 S1 S2
First 5 mins 4 6 0 3 18 11 15 9 9 12 10 0
Body 1 18 0 11 38 33 45 40 40 33 13 12
Last 5 mins 3 3 0 4 9 10 15 10 11 10 4 3
Total 8 27 0 18 65 54 75 59 60 55 27 15
Comparison of total activity expressed as a percentage (%)
Session 1 3.41% 0.00% 27.66% 31.91% 25.53% 11.49%
Session 2 11.84% 7.89% 23.68% 25.88% 24.12% 6.58%

A comparison of the metacognitive data by percentage indicates that Tammy engaged in
more orientation and planning activity following the training. In contrast, she engaged in
less execution, monitoring and elaboration activity. However, her use of evaluation was
similar on both occasions.

Table 32: Collective data of Top-level structuring activity - Tammy


Simple TLS event More complex Top-level structuring events
List – Simple List – Complex Cause/Effect Problem/Solution Comparison
S1 S2 S1 S2 S1 S2 S1 S2 S1 S2
First 5 mins 7 5 11 10 13 18 3 0 7 6
Body 11 5 29 39 40 58 5 4 22 39
Last 5 mins 4 3 9 6 16 11 5 1 8 5
Total 22 13 49 55 69 87 13 5 37 50
Comparison of total activity expressed as a percentage (%)
Session 1 11.58% 25.79% 36.32% 6.84% 19.47%
Session 2 6.19% 26.19% 41.43% 2.38% 23.81%

A comparison of the linguistic markers data for top-level structures by percentage
indicates that Tammy used more cause/effect and comparative structures following the
training. In contrast, she used fewer simple list and problem/solution structures. However,
her use of complex lists was similar on both occasions.

Case Four - Judy
Learning module 1
The first module in which Judy engaged was part of her formal Diploma of Project
Management studies. She had previously previewed the material and was now
undertaking it formally. In this session she worked through the program introduction and
the orientation.

Judy commenced by skimming through the contents of the welcome page, planning
where to commence: I was deciding where to start (01:15). She read part way through
the table of contents and then clicked on the Settings button and elaborated: I clicked on
settings as I was curious (02:05), as she was orientating herself to the material. The
settings page opened and provided her with an opportunity to change her email and
password. She did not seem to be interested and clicked the Contents button and returned
to the welcome page and its table of contents. She again scanned through the table of
contents and appeared to be deciding how to commence. She made a number of remarks
as she did this: I had started on this module before, (03:05) and: I am now choosing
where to start (03:09), as she continued to orientate herself. She clicked on the Module 1
link, was presented with a list of module topics, and clicked on the Introduction link
which she appeared to read before moving her cursor to the left side of the screen
containing a navigation bar.

Figure 65: Module 1 Introduction screen

She studied the navigation bar and remarked: Here I looked at the thumbnails – both of
them do the same thing except one is a thumbnail and one is just a list, evaluating this
aspect of the learning interface, and further elaborated: So I decided to go with the list.
See the outlines it goes a list of 1 to 10, and the thumbnails bit gives exactly the same
thing but with a thumbnail. So it goes longer and you need to scroll (03:19). This list
of comparison and problem/solution structures demonstrates Judy’s capacity to take up
the challenge that the learning interface presented.

She clicked on the Outline tab and tried to select the first link 2008 Diploma of Project
Management and received an on-screen message Please view all items on this
slide to continue, referring to the slides to the right. She appeared to evaluate her
situation: I tried to start from 1 but it wouldn’t let me until you finished seven first
(03:55), and elaborated further: So I clicked through them one by one to read them
through (04:32). She returned to the slides and continued reading them, and seemed to
monitor her progress: I read them through from top to bottom, every word, because it
was important and I didn’t want to miss any information, before elaborating: and
secondly, because all the text on every page is pretty short, so I didn’t feel the need to
rush and skim (04:42). This complex list of cause/effect and comparison structures is
indicative of her ability to monitor her learning. She continued to orientate herself to the
learning interface: I didn’t take notice of the graphics. I didn’t pay attention to them
(05:03). While she offered (evaluated) her reason for doing so: Because it was just like –
I may have glanced at the clock and didn’t think anything of it – it’s just that because it’s
pretty, she also speculated (elaborated) on other forms of graphics: If it was a graph or
something like a clickable thing I would have paid more attention to it (05:14).

Judy continued reading the text, and appeared to do so quite intently. She commented:
Um, these are the kind of course information like um - like for example, how quickly can I
complete this, the assessment process. Just all the basic course information stuff you
need to know before you get started (06:04). This list of cause/effect and comparison
statements suggests that she was monitoring and evaluating the material presented. I
mentioned that she did not write any information down, and she provided (evaluated) her
reason: Because they were quite straight forward and I could remember them in my head
(06:30).

The learning interface was providing material using a consistent format of text and
sometimes an accompanying graphic. Judy continued to read through the material and
appeared to pause on a graphic and orientate herself to it: The assessor. I looked at that
picture trying to just put a face to a name, to see who he was (06:50), before evaluating
its impact: but it did come into my mind that it could be just some picture, it might not be
the person (07:01). She continued scrolling and reading through the information and
appeared not to pay any attention to the accompanying graphics; the first a file folder, and
the second, a numeric keypad with a pencil in it. When asked about it later she evaluated
through a list of cause/effect structures: I don’t remember seeing that picture at all, and
elaborated further: I mean I would have glanced at all the pictures unconsciously and
made a decision not to investigate further (07:33). Further down the screen she
commented: I remember that picture. It was text and I was trying to see what’s there
(07:53).

Figure 66: Text graphic

She hesitated and appeared to orientate herself to its content: I just saw that picture and
it just grabbed more attention I guess because it’s sort of text – yeah (08:09).

Judy continued scanning through the large amount of text and encountered another
graphic of a schoolroom chalkboard and commented: I didn’t pay any attention to it
(08:26); suggesting that she chose to ignore it as she monitored her progress. She
continued to scan the text and monitor her progress: There is really not much to
remember because it is actually quite straight forward, like very standard kind of thing
(8:50). She came across a picture that had appeared previously and evaluated it,
commenting: And also I think - that’s a repeat picture, and then elaborated further: They
use very simple language and straight to the point (09:15). Judy continued to scroll
through the screen still focusing on the text and appearing to continue to ignore the
graphics and monitored the impact: I guess because I know that the text would provide
the information. She elaborated further: Like, if it was in a content area I would pay
more attention to pictures because it might be more relevant. Especially if something
comes in graph format, or flowchart or something, I would definitely pay attention to it
(09:40). This complex list of cause/effect and problem/solution structures provided an
insight into how she draws upon graphics to inform her learning. She summed up the
impact of the graphics in this section of the learning materials and indicated that she had
consistently monitored, albeit in her words unconsciously: Yeah. I mean if my eye
glanced on that page its – unconsciously I would look at it and go well that does not
mean anything, and I would not even look at it. Judy made a final comment about the
graphics, elaborating: I don’t think it matters whether it is there or not. It doesn’t get in
the way (10:40). She completed reading the topic.

Judy then clicked on the next topic link, Bricks and Mortar Support, located on the side
navigational panel, but closed it down straight away and clicked on 2008 Diploma of
Project Management, the first topic. This appeared to be the consequence of her
monitoring her action: Because I noticed on the menu on the left, it was going from the
FAQ point seven, straight to point eight, bricks and mortar, and then evaluating her next
step: So I knew it was going on to the next point, but I wanted to start from point one
(10:50). She seemed to continue to monitor: And after that you are allowed to navigate
through in the sequence you want, finally evaluating: So that is why I clicked on one,
which lasted eight seconds and we are on to two. So then I did it in sequence after that
(11:14). This series of comparison and cause/effect structures is indicative of the
monitoring and evaluating sequence adopted by Judy to drive the understanding in her
learning.

An introductory screen opened and displayed for a short time before advancing to the
next topic, Your Course facilitator. This provided Judy with the name and details about
her facilitator.

Figure 67: Course Facilitator Screen

Judy detected that there was sound associated with this screen and
searched for the sound control through the Start menu of the operating system: I couldn’t
hear him so I tried to find the volume control. It took me a while to find it. I should
know, it’s just that it is not my computer (11:25). This list of cause/effect structures
suggested that she was dealing with the sound problem by monitoring her progress. She
finally located the sound control and adjusted it; however, the presentation continued as
she had not paused it. This appeared to have been deliberate: Because I knew I could go
back and start again (11:58), indicating she had monitored her action. In making the
sound adjustment she had missed parts of the audio and decided to replay it: I went back
and started again because I missed it (11:58), re-orientating her learning pathway. She
clicked on the Your online Facilitator link and replayed the material. As she read the text
and listened to the audio, she appeared to hover over the graphic of the facilitator, and
monitored its impact, before evaluating: Oh, so this is the person. Yeah. – Oh, so that is
a true picture (12:35).

Judy continued to read the text and listen to the audio and I remarked that she had not
taken any notes. She elaborated: Because I knew I was only doing a half hour session,
before reflecting (monitoring) on why not: But I was actually planning to get around to
actually get really involved with this course. This appeared to instigate her to plan for
such note taking in the future: I will go through it and I will have a dedicated notebook
and I will actually be making notes to it. She elaborated: Um, I wouldn’t for that one,
and evaluated: Because I don’t think it’s relevant to the actual project management
process to know the particular objective of this part of the course (13:51), before she
completed listening to the audio. This complex list of comparison and cause/effect
structures shows the richness of the cognition deployed by Judy in resolving the issue of
not taking notes at that time. She re-orientated by clicking on the link that returned her to
the menu: You have to click on that button there before you can get to the menu (15:41).

Judy was returned to the Introduction page where she scanned the text briefly before
choosing one of three navigation buttons available to her, Online Content. Using the
scroll bar she made her way through the material which was dominated by a graphic at
the top of the screen. It appeared to have had an impact as she monitored the page: I did
look at that graphic, before evaluating its effect on her: and thought that it was pretty
nice (17:02). Judy’s cursor movement suggested that she continued to read through the
material in a traditional linear way. She completed reading the material in Online
Content and then worked her way through the contents of the remaining two navigation
buttons, Tutorials and Assessment. These materials were informational rather than
content, so she appeared to be simply browsing through them.

Judy clicked on the forward button and entered the next section – Timeline.

Figure 68: Timeline introduction

She commenced reading through the introductory text before she used the cursor to scan
the timeline at the bottom of the screen. She discovered that each of the five circles on
the timeline was a link. She clicked on the first link, Content Set 1, read through and
monitored her progress: Because I read about it previously so I knew that set one is four
weeks and set two is weeks five to eight. She then appeared to elaborate on the
usefulness of the timeline: I thought that it was really nice that it gave you a time marker
at the bottom (18:16). Using the forward button she systematically continued selecting
each of the remaining four links and read the text without comment.

The next section provided an overview of the Online Modules and Judy appeared to focus
firstly on the diagram and monitored: Yes, I saw that, before evaluating its impact: I
quite liked it. She continued to monitor her progress: Reading through that text tells you
what to click on, before elaborating on other aspects of the learning interface: I really
like the dragging bar because they are nice and big and chunky (20:49).

Figure 69: Online Modules introduction

This seemed to cause her to reflect on herself as a learner and she evaluated: I am a
visual person (21:30). She continued to monitor both sections of the diagram while using
the scroll bar to view the text associated with each before clicking on the forward button.
Judy used a series of comparison and cause/effect structures to enable her to evaluate
and monitor the interface.

Judy was taken back to the opening screen and its table of contents where she monitored
herself once more: So now we are finished six and now we are back onto seven (21:58).
She attempted to select the link to 8 Bricks and Mortar Support, before returning to the
previous screen and using the forward button to proceed. She appeared to re-orientate
herself: I tried to bypass seven to go to eight. It wouldn’t let me. So I had to click
through it until I go past (22:06), as she monitored this forced detour: So that was a little
bit annoying but wasn’t too annoying given that I knew how many questions there – that
how many pages to click through. This caused her to elaborate further: If there was lots
of pages, or I didn’t know how many pages to go through, then I didn’t know (22:30).
This complex list of comparative structures indicates how Judy effectively monitors and
elaborates her progress metacognitively. Judy was now able to select 8 Bricks and
Mortar Support and was taken to its opening screen. This page contained a large graphic
and audio which she listened to, and monitored: So I was listening with my ears because
I knew that it was going to stay static, before evaluating: and I didn’t have to pay
attention to the screen (23:24). The audio continued and the screen contents changed.
This provided her with a picture of the course assessors and their contact details. She
studied the page and elaborated: So, I identified the supporters. That was the one on the
left, from remembering him from before (23:56). And, um, at this point I did write down
the name and email address (24:05). Judy listened to the remainder of the audio, read the
screen advice to proceed to Online Module 2, and returned to a contents page and
observed (monitored) the unusual nature of the numbering: Yeah, it lists them like 1, 11,
12 and so on to 2 (25:50).

Module 2 was an orientation to the domain of project management which commenced
with an opening graphic and an audio playing in the background. Judy listened to the
audio while watching the synchronised text and graphics before proceeding to the next
screen. This screen was identical to the last and contained a sound file which outlined the
module objectives. Judy listened to a definition on the sound file but did not appear to
take any notes; however, she did appear to plan how she might engage with them in the
future: Yeah, I intend going back through this again at home so I am just going through
it at the moment. I had in the back of my mind that this session was approximately ½ an
hour and there is not much to go. The following list of cause/effect and comparison
structures demonstrates how she appeared able to metacognitively monitor and evaluate what she
had just said: But, if I was to take notes, I would probably not be taking this down
because I already know this. What I will take down is later on when things get more
technical (28:17). She evaluated further: These are the things I need to understand, like
project start to finish – which is pretty common. But I was looking for more like – um,
they might have more defined processes or steps or whatever (28:38). Judy continued
listening to the audio file and observing the synchronised graphics and text in a linear
manner. She came to a graphic depicting the characteristics of a project and appeared to
give it some attention.

Figure 70: Project characteristics screen

The following simple list and cause/effect structures highlight how Judy appeared to
monitor its impact: The text to the left is more useful, before elaborating: But I did pay
attention to that diagram because it was a diagram. I did look at what the numbers were
pointing to (29:28). The session stopped at this point.

Summary of learning module 1


In reflecting on this unit of work, Judy’s responses to my observations and questions
were usually part of a unified dialogue that was loaded with metacognitive linguistic
structures. The observations contributing to my analysis of Judy’s metacognitive activity
and top-level structure linguistic markers during the 29 ½ minutes of the lesson have
been sliced into three purposive though arbitrary segments (see explanation page 80) to
represent Judy’s progression through the beginning, body and conclusion of the learning
event. These are represented in Tables 33 and 34.

Metacognitive activity
The totals of metacognitive activity identified for Judy in this learning module showed
that metacognitively she drew heavily upon execution and monitoring, and to a lesser
extent elaboration and evaluation. In contrast, she engaged in far less orientation and
very little planning. Typical examples of each have been presented in the preceding
narrative.

Table 33: Metacognitive activity learning module 1 - Judy


Orientation Planning Execution Monitoring Evaluation Elaboration
First 5 mins 5 1 13 1 3 5
Body 9 1 33 25 16 18
Last 5 mins 0 2 7 5 3 2
Total 14 4 53 31 22 25
% 9.39% 2.68% 35.57% 20.81% 14.77% 16.78%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify Judy’s metacognitive activity is
outlined below. These indicate that Judy relied heavily on cause/effect and comparisons,
and drew least upon problem/solution structures to underpin her metacognition. She used
a similar number of simple and complex lists in this instance.

Table 34: Top-level structuring activity learning module 1 - Judy


Simple TLS More complex Top-level structuring events
event
List - Simple List - Complex Cause/Effect Problem/Solution Comparison
First 5 mins 4 2 5 1 6
Body 12 12 29 1 14
Last 5 mins 1 2 5 0 4
Total 17 16 39 2 24
% 17.35% 16.33% 39.79% 2.04% 24.49%

Self awareness of learning autonomy 1st rating


At the completion of learning module 1, Judy participated in a 10 minute session in
which she was asked to reflect upon and answer the following three questions:

1. How would you describe your engagement with hypermedia?
2. To what extent do you see yourself autonomous in such activities? and
3. How would you rate how effectively you engage with educational hypermedia – on a
6 point scale?

Judy’s responses to these questions follow.

Response to question 1
First, Judy advised that she had undertaken courses online previously and that she was
therefore: quite familiar with learning online. Her work in design of the graphical
aspects of multimedia means that she is: quite familiar with multimedia presented online.
She considers that she is therefore: more familiar than the average person based on my
past learning and work experience. She describes herself as a: visual person who also
prefers audio. She suggested that while some people prefer to read, she is: not so much
of a reader, and would much prefer rich media that comes with audio. She said that she
found the course she undertook (and described above) was: OK for me because there
was not a lot of text to read on-screen, and it was broken down into small chunks. She
found that: most slides were supported by audio, so it felt like a lecturer talking to you.

Judy found that in the learning she undertook previously the media afforded her the kind
of support she enjoyed. She said there were: a lot of visual aids like pictures and some
animation, and also the menu system was easy to navigate. So she never: felt lost in the
course. She said she liked doing things in: small chunks, so, I like how I can start and
stop for a time and stop and absorb and think about what I have just learned. The ability
to start and stop at a time of her choosing meant that she could also return to: things I am not sure
about. She stated that she also liked to: multitask, so that for example while I am
listening to the audio I could maybe open a webpage and Google something that was
mentioned.

Judy believed that she was competent at managing her learning in a hypermedia
environment. She believed that she had demonstrated this in the previous learning. She
said that for example, when she commenced the project management course: after
logging in, the first thing I did was check the navigation system, see what they had
available, for example check for what help was available, and then just play around with
the navigation to see how it works. She thought that she had demonstrated good
management skills when she contextualised her learning by: firstly playing around with
the navigation to see how it works, the structure of the site, and look at how many
modules I have to go through. She also liked to firstly enter a couple of modules to: see
the size of each module, and to click through them quickly to get a general idea of what I
am going to face, an overview, the big picture. She believes that this is important to her
because: I need to know the structure first, rather than going into the unknown and just
take what comes my way.

When managing the information being generated by her learning she believes that her
multitasking skills help. Her strategy is that: when I get to something important I just
type notes, open Microsoft Word and type notes.

Response to question 2
In general, Judy likes to feel in charge and choose when to study. However, she does like
the structure that is afforded by good hypermedia because: I trust that they would have
structured that in a logical sequence. She describes herself as being comfortable
following the linear nature of the materials. Referring to the previous learning she said:
It went from module 1 to 20, so I don’t see the point of jumping to module 20 and finding
that I need to have some prior knowledge in certain areas and then going back through
the modules trying to find where that information is. She felt that despite the fact that she
tended to follow the structure, it enabled her to manage her studies because: this week I
can do module one, and next week I can do two and three.

Judy suggested that she does not always follow the structure provided and gave an
example. She said: If I was doing an assessment and going back, obviously I would not
be going through module 1 to 20. I would look at the topic names and in my mind I
would have some idea of what that topic would cover. Then I would have in my mind
some idea of what I am looking for to do the assessment. In this instance her approach to
the learning would be a little different in that she: would just skim through it. I would go
to the first line and then put my mouse over the next button and go through each page
quickly until I find what I am looking for. In reflecting on how she engaged in her
previous learning she thought she: was autonomous, and felt comfortable.

Response to question 3
Judy rated herself as a 4 (somewhat effective) on the Likert Scale. She explained that
this rating was because she: felt that the material was quite dry and not that well
structured. She believed that the structure of the materials and the available help impacts
on how effective she is in hypermedia learning settings. She stated that she had
undertaken hypermedia learning in the past and that she would rate herself as being
highly effective because: it was a well structured course and provided sufficient
coverage of the topics.

Metacognitive training intervention


The purpose of this intervention was to provide Judy with training and to raise her self-
awareness of her metacognitive activity in hypermedia learning prior to undertaking
learning module 2. A thirty (30) minute session was conducted in which Judy was:

(i) Provided with a paper copy of the metacognitive taxonomy (Table 6, page 57) and
top-level structuring rhetorical structures (Table 7, page 61) and the initial 10
minutes was spent explaining and discussing the categories they contained.
(ii) Provided with a paper copy of the analysis of her first learning event, and the next
15 minutes were spent examining and discussing these data.
(iii) Engaged in a 5 minute reflection in which she was asked to reflect on the
utilisation of her metacognitive activity and discuss its future utilisation.

Judy was asked to reflect on these discussions prior to undertaking a second module.

Learning module 2
The second module in which Judy engaged was within the Australian Flexible Learning
Fitness online Toolbox – Fitness Industry Training Package. The qualification was the
Certificate IV in Fitness, and the element of competency she undertook in this learning
session was SRFFIT013B Provide information and exercise related to nutrition and body
composition.

Judy started off by choosing the competency from the available list. She read through the
text quite thoroughly appearing to be orientating herself to the task: To get all the
information I needed, and planning her approach: Also because it is the beginning of the
course, so you just don’t want to miss out on any important information (00:29). Her
cursor movement suggested that she read through the introductions to both scenarios
presented and then clicked Scenario 1. Judy then used the side scroll bar on the text box
and read the scenario text. The following complex list of cause/effect structures
illustrates how she was orientating and monitoring her progress: So I read that white text
box. Because it is just there. And then I followed the instruction on the bottom to start
reading the information. And I read the information thoroughly (01:15). She then
clicked on each tab on the left hand side of the screen and read through the information.
She appeared to be concerned with the interface and monitored its effect: I had to lean
towards the computer, because the text size is rather small (01:55).

Figure 71: Scenario 1

Judy continued using the scroll bar to read the text, and continued to monitor her
progress: So again, I read all the information (02:45). She then clicked on the Start
Scenario button and used the cursor to direct her reading of the text as she read the top
section of the page. She then pointed to the first person on the left and listened to the
audio message, elaborating: On the right hand side it says listen to what they have to say,
and before that it says by clicking on their picture, so I followed that (03:35). She then
clicked on the person in the middle and listened to what they had to say, then progressed
to the person on the right, monitoring her action: And just with a logical sequence, I went
to the right (03:53).

Figure 72: Scenario participants

Judy had been provided with a great deal of information; however, she was not taking
notes. This appeared to be a deliberate strategy. The following complex list of
cause/effect and problem/solution structures illustrates how she had evaluated the
process: I didn't feel the
need to take notes because from the instructions it seemed to me all they wanted you to
do was just listen to it. You don’t have to record any information. Further, she appeared
to have planned for the consequence: And I also thought subconsciously, that if I do need
to take notes I can just go back and listen to it again (04:17).

Judy then moved down the page using the cursor and continued to monitor her progress:
And I read the stuff in the box, before evaluating how to proceed: And that’s where I saw
the view button at the bottom, so I was just curious what it was. So I opened it and closed
it, and finally elaborating: It was quite useful. It tells you the path that you have taken,
so you know what you have done so far (05:15). She then clicked on the button Choice 1
Ben Robbins and a new screen opened.

Figure 73: Ben Robbins’ profile

Using the cursor, Judy read through the opening text and reached some instructions
which she followed. The complex list of cause/effect structures that follows highlights how
she monitored her actions: So I read the instructions, and then downloaded the two
documents. I saved them because you had to fill them out, before evaluating her next
move: But in a normal situation, if I have access to a printer, I would have printed them
out straight away (05:59). She hovered the mouse over the General Health link and
appeared to monitor her way forward: I saw the list and was wondering to myself were
they links, or just text, and evaluated: because links usually have underline, before
elaborating: But they didn’t, so I just wanted to check (06:35).

Judy next downloaded the Health Information Questionnaire and Daily Food Diary
documents, opened them, scanned the contents and closed them and evaluated her action:
Looking at the documents just showed me what kind of information I should keep an eye
out for (07:44). She then clicked on Show subtitles tab and scanned the text before she
clicked on the Audio on button. The audio volume was low, which she adjusted, and she
remarked (monitored): So now he is playing and I couldn’t hear the audio, and then
evaluated: so I click on the audio button, and I click on the subtitles so I can read the
subtitles (08:14).

Figure 74: Ben Robbins’ screen

She listened to the audio while the Health Information Questionnaire was on the screen
and typed information from the audio onto the questionnaire. She tried to highlight the
gender cell and monitored her action: So there, I tried to make it bold, before elaborating:
but realised that it was already bold, so I put underline on (08:44). The comparison and
problem/solution structures of her statements highlight the metacognitive actions. Next
she completed the date of birth cell from information provided by the audio. Judy
continued to read various parts of the text and her cursor movement indicated she was
moving between the material in the learning interface and the open document she was
completing. She moved the document window from in front of the learning materials and
used the mouse to read the revealed text. She appeared to be refreshing (monitoring) her
memory of the contents: I was just reading the subtitles (08:58).

Judy went back to the questionnaire and updated the information she had previously
entered into the height cell before changing the format of the information in the gender
cell: So there I didn’t like that underline. So I decided to change the colour (09:11). She
added information to some of the remaining cells of the questionnaire and returned to the
content page. She scanned the page briefly for additional information before she returned
to the questionnaire and scrolled through it once more. She appeared to be monitoring
her actions: So now I was ready to listen to the next bit (09:49). She then went back to
the questionnaire and continued to monitor her progress: Yeah, to see what else I need to
fill out. So if this was printed I wouldn’t have had to do that (10:00).

Judy clicked on the next link, Medication. A new page opened, which she read before
she clicked on the hide subtitles tab that provided information about the client's
medication. She returned to the Medication cells of the questionnaire and typed N/A and
evaluated her response: He said there he was not on any medication (10:37). She
returned to the learning materials, clicked on the next link, Injuries, and then clicked the
audio on button. She listened to the audio, returned to and scrolled down the
questionnaire and moved the cursor above a couple of the cells. She appeared to monitor
her action as highlighted by her comparative statement: I was deciding whether to put in
the first box or whether it was more relevant in the second box, before she evaluated her
choice: And I decided it was more relevant in the second one (10:55).

Judy returned to the learning materials, clicked on the next link Physical Activity Levels,
then clicked on the subtitles tab, and appeared to monitor her progress: And then I went
back to double check what he said (11:22). A short time later she clicked the audio on
button and monitored her action: And then I went to the next audio, before she planned
how to effectively transfer it to text: With my hand on the button already – ready to go
(11:33). She continued to listen to the audio and while she scrolled through the
questionnaire, she appeared to orientate the audio information to the correct cells: To see
where to put the information (11:40). She continued orientating the audio to the
questionnaire as she looked for the appropriate place to record the information: So I
found the place. Now I am still looking – deciding (12:00), and finally evaluating her
choice: And I decided I would put it there (12:10). Judy appeared to check (monitor) the
information she had just typed against the learning materials: Reading subtitles, and as a
result evaluated through a cause/effect structure, and corrected her typed responses:
Rearranging my sentence. She elaborated on her corrections using the comparative
statement: Just to make it shorter. And in my own words. But I guess if I was writing it,
I probably wouldn’t have done that (12:50).

Judy worked her way through the second last link in the scenario, Past Weight Loss
Methods using her established routine of first turning on the audio and then returning to
the questionnaire to locate and complete the appropriate cells using the audio information
being provided. On reaching the last link she monitored her progress: The last one
(13:42). She once again turned on the audio, returned to the questionnaire and sought the
appropriate cell (evaluated) to transfer the information by scrolling through the
questionnaire: So I was deciding where to put the information that was just talked about
(13:55). She returned to the learning materials.

Having completed all of the information links for this client, Judy moved on and read the
text under the heading Points to consider very briefly before continuing with Tutorial
hints.

Figure 75: Text boxes

She appeared to have monitored the text: Yeah, I started reading that, before evaluating
its usefulness: I decided there was nowhere to put it, so I didn’t put it anywhere (14:02).
Judy returned to the questionnaire and scrolled up and down appearing to monitor and
evaluate the content she had entered: So what I am doing here is checking to see I have
filled in all the boxes that I could possibly fill - before I move on (14:20). Part way
through she stopped scrolling and reflected (monitored) on one of her entries: I looked at
that, and then evaluated her response: and I decided no that should go into another box
so I cut it and pasted it (14:42). She continued to scroll through the document and
checked the remaining cells. She commented: OK, so there was nothing else (15:04).
She closed the questionnaire. Judy used a complex list of problem/solution and
cause/effect structures to reach her decision.

Next in the learning materials was a series of choices for Judy to make. She appeared to
review these choices and decide (evaluate) on one: So I chose Choice One (15:18).
She clicked on Choice 1 which opened in a new page, Advise Mr Robbins on portion
sizes of food. Judy then opened two documents associated with this section, the Daily
Food Diary and the Health Information Questionnaire and evaluated her selection
through a complex list of cause/effect structures: And because that was related to food, I
looked at the Food Diary to see what is required (15:28). She quickly scanned the
documents, minimised them and returned to the learning materials. She paused at that
point and appeared to consider her next move, and clicked on the back tab which returned
her to the previous screen where she clicked on the link General Diet, subtitles and audio
on. The speed of the selection of these links and tabs showed that this was by now a
familiar pattern of actions. It appeared that Judy had concluded (evaluated) that some of
the information for completing the Daily Food Diary might, at least in part, be available
in the previous section. So with the audio playing, she opened the diary and commented:
So I went back and tried to find out from the previous audio (15:49). She listened to the
audio which enabled her to complete parts of the form before deciding to move on.

Judy went back to Choice 1, elaborating: Because that was the next step (16:36). She
seemed to be reading through the material and appeared to stall for a short time: Must
have been reading the blue boxes, before she clicked on the Choice One link which
followed in the text. A new page Refer Mr Robbins opened. She read the opening
sentence and clicked on the link Referral/Medical Clearance Form, having evaluated her
options: Because you had three choices, and elaborated further: It is that scenario you
go down different parts, demonstrating the cause/effect process of her metacognition
(17:35). The form completed downloading and Judy saved it to the Desktop before
closing it. Returning to the learning materials, she used the cursor to continue to read
through the text and then select the Choice 1 link on this page which opened in a new
window. She read the opening text and stopped her cursor over the Daily Food Diary
link. She appeared to consider the option to download, but did not do so, having
remembered (evaluated): because I knew I already had one downloaded (18:22).

Judy moved the cursor to the top of the page and paused it momentarily before she
clicked on the back tab. The following statement suggested that she was considering
(evaluating) her next move: So now I tried to go back to the previous page. She
confirmed this when she evaluated her reason for doing so in the complex list of
cause/effect structures: I think at this point I realised that I made the wrong choice, so I
went to View, to see the options I had taken. The view button enabled Judy to review her
learning pathway. She scanned the details in the view window and elaborated: And I
decided that I actually made the wrong choice the first time (18:35). She then used the
back arrow twice more to return to the start of the scenario, where she appeared to
re-plan her way through the material: So now I am back to the beginning and this time
I made a different choice (18:59). Judy spent time looking at the screen and indicated
that she had evaluated her next move by clicking on the Choice 3 link: At the previous
point I decided it was the wrong choice so I went back and decided choice 3 was the most
logical (19:27). She used the back tab to once more read and reflect on (evaluate) the
information on the previous page: So choice three went straight to the dietary
information before that check. Seemingly satisfied, she returned and read the details on
the Choice 3 page and evaluated the outcome: So now at this point I believe I made the
right choice (20:10). Judy completed reading the content on that page.

Judy clicked on the Daily Eating Habits link, which opened on a new page, then clicked the
audio on button, and opened the Daily Food Diary. The cause/effect structure of her
statement illustrates how she had planned what to do next: So now I knew I had to fill
this out so I opened it up and started to fill it out (20:38).

Figure 76: Daily eating habits screen with diary overlay

As she listened to the audio, she made amendments to the form, correcting some entries
from the previous time. She then went back to the subtitled text (a verbatim copy of the
audio) and used the scroll bar to scan through it. She appeared to monitor her progress in
completing the diary: Because he talked faster than I could type, so I went back to get
the information I needed for the rest of the form. While she had been able to complete
some of the diary by following the audio: And I did fill in the morning tea and breakfast
by listening, she completed some sections by having to refer to information in the
subtitles. She reached the dinner category, appeared to reconsider (evaluate) one of the
entries, and moved the item beer to the category supper: Because he said he likes beer
with his dinner but in the evenings he has five cans, so really it sort of starts from dinner
and all throughout the night, and elaborated further: And I decided it was more
appropriate to supper (23:24). She then minimised the diary. Judy used a complex list
of cause/effect and comparison structures to enable her to organise the information into
the correct categories.

Judy then clicked on the second and final link on this page, Special Dietary
Requirements. It opened in a new screen where she clicked on the audio on button and
reopened the diary. She listened to the audio and made additional diary entries. She then
opened the Health Information document for this client and moved back and forth
between the two documents. She appeared to be monitoring the appropriate place to
record some of the information as indicated by her cause/effect structures: Because at
this point he was saying he had no allergies, which was in the other form, after evaluating
which one to use in this instance: Because I read the other form I know where to go
(23:33). She minimised both documents and continued. Judy completed reading the
remainder of the text, the final section of which asked her to make a choice of what to do
next. She studied the choices and monitored her decision: So I made choice 1, and
evaluated that choice: Because it was the most logical choice to me (24:38).

Choice 1 was to assess the client’s diet for nutrients against the guidelines for a healthy
diet. Judy’s cursor movement suggested that she read the introductory text and the text
specific to the client before she clicked on the Diet Assessment Form hyperlink and
downloaded and saved it. She appeared to have evaluated what best to do with the form:
In normal situations I would have printed it (25:13). She scanned the form briefly before
she minimised it and clicked on the Guidelines tab where she was asked to make an
assessment. She commented: Um, I think I was just reading what was in there (25:42).

Figure 77: Guidelines tab

Judy took some time to read the balance of food proportions and chose one, evaluating:
Because I thought that was the most accurate (26:04). A message appeared on the screen
telling her the choice was not the correct one. She then used the cursor to point to each
button and read the answers carefully before choosing the next answer. A message
indicating the choice was wrong appeared on the screen. She then used the cursor to
point to the last two buttons and read the answers and chose the last answer in the list. A
message indicating the answer was the wrong choice again appeared on screen. She then
chose the only button she had not clicked on and monitored the outcome: I got it
right the last time. I guess if I was a student that has done the course, I would know the
correct answer (26:11).

Judy clicked on the Analyse Diet link and read through the information which included a
calculation task and monitored her progress using a cause/effect structure: Here I was
trying to work out reading the information on the page – well the next thing you had to
fill in the box. Her next problem/solution structure suggested that she evaluated how to
undertake the task: And you need a calculator to calculate the percentage (26:47), before
she continued on through the exercise. She responded by opening the calculator available
in Windows accessories. Using the calculator she attempted to do the first calculation.
She appeared to be a little tentative as she attempted it: So 153 grams – then times the
calories. She paused and used her notepad to write down the figures, evaluating:
Because there is no way I can remember those numbers for all three (27:15). She moved
on to the next calculation and tentatively attempted it using the Windows calculator,
seemingly unsure (evaluating) of her effort: 183 was the next one - just thinking whether
I was doing the right thing, and elaborating: So I am just doing what I logically think is
right (28:00). She continued doing the calculations: So now I am calculating the
percentage. Judy’s problem/solution structure indicated that she checked (monitored) the
result: The number didn’t look right to me, and evaluated what to do next: so I decided
to calculate it again. She repeated the calculation and after obtaining the same result, she
evaluated its impact: So I decided that it was correct (29:30). She entered the answer in
the on-screen box provided, having to jump between the answer box and the calculator
several times, monitoring the transfer: I forgot the numbers, so went back (29:45). She
then moved on to the next calculation, continuing to monitor her progress: So now I was
calculating the total fat (30:00).

Judy then completed the calculations and transferred her answers to the on-screen answer
boxes. She appeared to peruse her answers and read the information about conversion
data just below the answer boxes. The following complex list of cause/effect and
problem/solutions structures indicate that this appeared to cause her to reflect on
(monitor) the manner in which she had just undertaken the calculations: So now what I
was doing – I wanted to see – because they are not rounded up. Because I didn’t see that
little comment – that tiny little comment there – round calculations up to the nearest
whole number. I didn’t see that at this point, so I added all three percentages and when I
added them up they came to 101 percent. As a result she appeared to evaluate the reason
why: So I decided not to round them up. And of course I was wrong. Oh, because I
didn’t round it to the whole numbers. Because at that point I didn’t read that line.
Finally she appeared to elaborate on the reasons why: And so that wiped it out, so that
made me really frustrated after I calculated all that. And I didn’t even write it down.
The percentages. And now I had to get the calculator out and do it all again (30:44).

Judy opened the Windows calculator and recalculated the figures. The following
complex list of cause/effect structures illustrates how she monitored her effort: But this
time I was faster because I had done it before. And this time I remembered the comment
at the bottom and I rounded up (31:50). She appeared to evaluate one of the critical
success factors she had adopted: At this point I learnt my lesson because I wrote the
percentages down (32:54). She clicked on the done button and was taken to the next
screen and monitored her success: I got it right, before she elaborated on the outcome:
Because it brought me to the next screen rather than wiping it out again (33:00).

On the next screen, Analyse Diet, Judy was asked to consider proportions within the
client’s diet. She appeared to read through the choices and her cursor movement
indicated that she was considering her answer for some time.

Figure 78: Diet analysis question

She finally chose an answer from each of the columns, and using a simple list structure,
evaluated: Deciding which one to click on to get the correct answer (33:20), and then
after some hesitation elaborated: And then I decided no, the first one I got it wrong, so I
changed it (33:35). She paused before making (monitoring) her choice for the final
column: And then I was thinking what was the answer for the third one? Following what
appeared to be a final reflection, she then clicked the done button which took her to the
next section: And I got it right (33:45). She made a selection of the next set of answers
and clicked on DONE and monitored her choice: So this was going with my logic. She
received a message that her answer was incorrect and evaluated: I got it wrong. Judy
progressed through this process using comparison and cause/effect structures. She
stopped the learning task at this point.

Summary of learning module 2


In reflecting on this unit of work, Judy’s responses to my observations and questions
were usually part of a unified dialogue that was loaded with metacognitive linguistic
markers. The observations contributing to my analysis of Judy’s metacognitive activity
and top-level structure linguistic markers during the 30 minutes of the lesson have been
sliced into three purposive though arbitrary segments (see explanation page 80) to
represent Judy’s progression through the beginning, body and conclusion of the learning
event. These are represented in Tables 35 and 36.

Metacognitive activity
The totals of metacognitive activity identified for Judy showed that metacognitively she
drew heavily upon execution and monitoring, and to a lesser extent, evaluation and
elaboration. In contrast, she engaged in very little orientation and planning. Typical
examples of each have been presented in the preceding narrative.

Table 35: Observed metacognitive activity learning module 2 - Judy


               Orientation   Planning   Execution   Monitoring   Evaluation   Elaboration
First 5 mins        2            2           7            6            2             2
Body                2            4          65           41           31            11
Last 5 mins         0            0          14           11            9             6
Total               4            6          86           58           42            19
%               1.86%        2.79%      40.00%       26.98%       19.53%         8.84%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify Judy’s metacognitive activity is
outlined below. These indicate that Judy relied heavily on cause/effect and comparison
structures, and very minimally on problem/solution structures, to underpin her
metacognition. She used a similar number of simple and complex lists.

Table 36: Top-level structuring activity learning module 2 - Judy


               Simple TLS event    More complex Top-level structuring events
               List - Simple   List - Complex   Cause/Effect   Problem/Solution   Comparison
First 5 mins         4               4               7                1                1
Body                21              21              37                4               12
Last 5 mins          6               5              13                3                5
Total               31              30              57                8               18
%               21.53%          20.83%          39.59%            5.55%           12.50%

Self-awareness of learning autonomy 2nd rating


At the completion of this second learning module, Judy was presented with the 6 point
Likert Scale and asked again to rate herself as to how effectively she considered she had
engaged with educational hypermedia on this occasion. Judy rated herself as a 5
(effective). She was then asked to comment on the reasons for the rating.

Judy suggested that this rating was because: I felt I was much more engaged the second
time because the material was presented more interestingly. She went on to explain that
she was: much more aware of the way I learn as a result of the feedback from the first
time, but I am not sure how that affected me. While I thought about some of the things I
found out about the way I go about my learning, most of the time I was focusing on the
material I was learning. She did think that: I was more aware of making connections
between the things I was learning, as we had discussed, but not sure how successful I was
doing that. She said that she selected 5, effective, because: I felt I had learned more
effectively, and with a bit more confidence.

Effect from metacognitive training


Judy rated herself higher in the second learning module:
Learning module 1 rating – 4/6
Learning module 2 rating – 5/6

Table 37: Collective data of metacognitive activity - Judy
               Orientation    Planning    Execution    Monitoring    Evaluation    Elaboration
                 S1    S2      S1    S2     S1    S2      S1    S2      S1    S2      S1    S2
First 5 mins      5     2       1     2     13     7       1     6       3     2       5     2
Body              9     2       1     4     33    65      25    41      16    31      18    11
Last 5 mins       0     0       2     0      7    14       5    11       3     9       2     6
Total            14     4       4     6     53    86      31    58      22    42      25    19
Comparison of total activity expressed as a percentage (%)
Session 1       9.39%          2.68%      35.57%        20.81%        14.77%        16.78%
Session 2       1.86%          2.79%      40.00%        26.98%        19.53%         8.84%

A comparison of the metacognitive data by percentage indicates that Judy engaged in
more execution, monitoring, and evaluation activity following the training. In contrast,
she engaged in less orientation and elaboration activity. However, her use of planning
was similar on both occasions.
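The session percentages reported in Table 37 are simple normalisations: each category total is divided by the session's overall number of coded metacognitive events. As a minimal illustrative sketch (the function and variable names are mine, not part of the coding instrument; small differences from the printed figures reflect rounding), the calculation can be reproduced as:

```python
# Illustrative check of how the session percentages in Table 37 are derived:
# each category total is divided by the session's overall count of coded
# metacognitive events. Counts are taken directly from the table.

categories = ["Orientation", "Planning", "Execution",
              "Monitoring", "Evaluation", "Elaboration"]
session1 = [14, 4, 53, 31, 22, 25]   # Judy, session 1 totals
session2 = [4, 6, 86, 58, 42, 19]    # Judy, session 2 totals

def as_percentages(counts):
    total = sum(counts)
    return [round(100 * c / total, 2) for c in counts]

for name, p1, p2 in zip(categories, as_percentages(session1),
                        as_percentages(session2)):
    print(f"{name}: {p1}% -> {p2}%")   # e.g. Execution: 35.57% -> 40.0%
```

The same normalisation underlies the percentage rows of the top-level structuring tables.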

Table 38: Collective data of Top-level structuring activity - Judy


Simple TLS event More complex Top-level structuring events
List – Simple List – Complex Cause/Effect Problem/Solution Comparison
S1 S2 S1 S2 S1 S2 S1 S2 S1 S2
First 5 mins 4 4 2 4 5 7 1 1 6 1
Body 12 21 12 21 29 37 1 4 14 12
Last 5 mins 1 6 2 5 5 13 0 3 4 5
Total 17 31 16 30 39 57 2 8 24 18
Comparison of total activity expressed as a percentage (%)
Session 1 17.35% 16.33% 39.79% 2.04% 24.49%
Session 2 21.53% 20.83% 39.59% 5.55% 12.50%

A comparison of the linguistic markers data for top-level structures by percentage
indicates that Judy used more simple list, complex list, and problem/solution structures
following the training. In contrast, she used fewer comparison structures. However, her
use of cause/effect was similar on both occasions.

Case Five - Ray

Learning module 1

The first module in which Ray engaged was from an accredited online course (under the
Australian Quality Training Framework – AQTF) for the Hospitality, Retail and
Construction industries. The course was THHBFB09B Responsible service of alcohol.

Ray commenced by opening the training website, logging in and going to his My
Training page. He immediately clicked on the link which took him to the Responsible
Service of Alcohol unit, without reading any of the information on the front page. He
appeared to be orientating himself to a starting point: I had already had a quick look at
that, before planning his way forward: So I thought I would get into the materials where
it is more interactive (00:44). The module loaded and presented him with a table of
contents with links to the various topics. He clicked on the link to Understanding
Alcohol and its Effects and skim read the text and monitored his action: because it just
seemed like an introduction (01:10). He clicked on the link to the audio file at the bottom
of the screen which was an audio of the text he had just skimmed. Ray briefly listened
then stopped the audio and evaluated its impact: I thought it was just the same – a text
reader. I didn’t want to listen to that (01:28). He moved on to the next screen and
clicked on the audio file and monitored his progress: I just wanted to check the sound
again now. After listening for one line he commented: turned that off. So I am just
skimming here. And I kind of understand those ideas. It is nothing sort of unusual
(01:40).

Ray moved to the next screen and moved the cursor around the picture (see Figure 79)
and monitored his action: So this one I was seeing if it was a roll over on that image
because it seemed like it might be bigger than it was, before he evaluated the outcome:
It was just something about that image (02:17).

Figure 79: Female drinking

He next clicked on the audio link at the bottom of the screen and monitored his progress:
Again there was no indication of what that sound is down the bottom, and evaluated what
this meant to him: It doesn’t say why it is there (02:29).

Ray’s actions and comments during this initial stage of his learning suggest that he was
able to impose structure on the way he managed his learning. He commenced by
undertaking an introductory orientation and planning episode to secure his learning
trajectory before he deployed a series of monitoring and evaluation processes in order to
confirm that trajectory. The discursive markers in his metacognition include: I had
already …, So I thought I would…, Because it just seemed…, I thought it was…, and I
wanted to check. These examples contain language used to describe his own thinking,
together with a rich array of cause/effect and comparison top-level structures that
serve to identify the purpose and outcomes of his actions.

He moved on to the next screen that contained two pictures and text and monitored his
progress: So I was looking at that image as well, before he evaluated its usefulness: And
I thought there is no roll over there again. There is only a little bit of text (02:45).

Figure 80: The Liver graphic

He skimmed over the second picture and evaluated its use to his learning: Didn’t interest
me at all. I thought it was just padding (03:13).

Figure 81: Screenshot of How Long Does it Take text and graphic

Ray’s cursor movement suggested that he skimmed through the text (see Figure 81) and
monitored his progress: Um. I saw a little bit of that (bold type) but I thought it was just
the 10 grams of alcohol – and I guess I am skimming because I think these kind of
learning things – you are given an assessment, before he evaluated: but really you have
to go back and look at stuff. He then re-evaluated his current approach to his learning:
So I am not remembering anything because I don’t want to remember, I just want to
understand what it is (03:31). This list of cause/effect and comparison linguistic
structures serves to highlight the sophistication with which Ray manages to self-regulate
his learning.

He moved to the next screen that again contained a picture and text and monitored their
impact: I had a bit of a closer look at that image because I wasn’t sure what that was. I
just saw that text on there (03:59). Ray’s cursor movement suggested that he appeared to
read this text more carefully and monitored his action: Well it seemed to be a little bit
more interesting, and evaluated: it wasn’t so introductory (04:11). He moved on to the
next page and clicked on the audio link and listened for a short time before returning to
reading the text. He appeared to monitor his progress: It seemed to be important, and
then evaluated the importance of the material just covered: I just wanted to check if there
was anything different there, but there didn’t seem to be. I wanted to make sure I knew
some of these details here. And so I was just reading carefully and slowly down through
there. I guess I don’t use the cursor to read. I find it gets annoying (04:25).

Ray moved to the next screen which contained a graphic and a number of thumbnail
images, each of which represented a different stage of the effect of alcohol.

Figure 82: Effects of Alcohol screen

He moved the mouse between the first and second thumbnails, and text associated with
the first thumbnail, Stage One, appeared in the text box. He appeared to monitor his
progress: I read that quickly, and evaluated the effect: And that was the same as what
happened before in the previous slide. He then elaborated further: But I was confused
about how to get this to go further. See I am trying to go over to the next image – it
wasn’t working. So I am seeing what other roll overs – they only seemed to be working
on the first one – and then I realised that the sound was being downloaded for that image
(5:22). This complex list of cause/effect and comparison linguistic structures highlights
how he reached an understanding of the reason for his not being able to progress. Next
he appeared to monitor what was transpiring: And then I noticed that the other image
appears now. It is highlighted now. It was a delay in the download and also you have to
wait until the sound plays, it seems before I can proceed with the next one. Finally, he
seemed to evaluate why: So it doesn’t come up until the sound is finished (05:57), before
he elaborated further through a comparison structure: Yes. It forces you to listen but it is
just reading the same thing. He followed this with the cause/effect structure: So you
know, and also it is the same information as the previous slide, so I was thinking – there
is no connection between those two (06:18), which suggested that he evaluated the
situation. He continued to click on each thumbnail in a linear manner and monitored his
progress: There is no indication that we are going to look at somebody in this stage. So
that’s what I find a bit strange about the design of this object (06:36). This appeared to
cause him to evaluate the object itself: This would be more useful if it was in point form.
And then they read about it, so it is like a presentation, so that could create more interest.
Because then it would give me a reason to listen because I don’t want to just read along
(07:08), and elaborated further: I would have listened to that. It would have given me a
reason to listen (07:28).

Ray clicked on the next thumbnail, still showing concern (evaluating) for the learning
interface: And also I kind of find this kind of learning is very one way – there is no kind
of prediction involved, before elaborating further: Like it is not saying, well, what do you
think would happen at this level? So instead of stimulating thinking, you are just being
given stuff and then tested, so I don’t really find that a stimulating experience (07:45).
He continued to evaluate the learning interface: And I found that text reader very
annoying. That’s why I didn’t want to listen to it – that text to audio. Just didn’t want to
listen to it (08:15). This complex list of comparison and cause/effect structures
demonstrates the level of cognitive maturity Ray brings to his evaluative capacities.

Ray moved to the next screen and used the mouse to skim over the text set out in a series
of dot points. He appeared to be orientating himself to the learning once again: So I
didn’t really look at the graphic. I just tried to look at the main points and see what I
know. So I was reading through that carefully to make sure that I know – because I
thought that was important in what I needed to know (8:30).

He moved to the next screen and read through the text before listening to the first of two
audio files.

Figure 83: Alcohol Abuse text and audio

He appeared to reflect (monitor) on the material presented: That was a kind of summary
of some of the things that would lead on from that. He then listened to the second audio
while still showing concern (evaluating) with the learning interface: It didn’t tell me
what kind of thing I was listening to – it gives no indication of that – so it is a policeman
who is going to talk about the social problems and the sound controls the scene (09:05).

Ray clicked on the play button at the bottom of the screen and listened to an audio file of
a real life scenario. He appeared to be attempting to orientate himself to the content: I
was just wondering what else they were talking about. The following cause/effect
structures suggest he then evaluated what had just been covered: and then I realised that
it was just some of the social problems that police face and I wasn’t that interested in
knowing that (09:48). He continued to read the text and monitored the learning interface:
I was also looking at the top. I wanted to see if there was a glossary at all that would
have some of these terms, after which he evaluated the material just covered: If they are
talking about intoxication, incontinence, those kind of things (10:28). He stopped the
audio file and moved to the next screen and commented (monitored) on why: So that sort
of rambled a bit for me. Went to the next one (10:55). He ran the mouse over the two
bullet lists and orientated himself to their contents: So I was just looking at those two
(11:04). He did not appear to notice the graphic and commented: I didn’t look at that,
before evaluating its value: Well, I saw it but I just thought it was padding really (11:07).

Ray moved to the next screen that contained an activity which required him to drag a
number of impacts of alcohol abuse to short and long term drop boxes, and monitored the
instructions: So this is just a drag and drop (11:26).

Figure 84: Abusing Alcohol activity

He scanned through the impact categories for a short period before he paused and
monitored his next step: I was looking at which one to put in. He then appeared to
evaluate the situation: But then I was thinking some could be long term, some could be
short term. Could be both really, like depression. Or damage to unborn babies (12:00).
These comparative structures highlight Ray’s continuing capacity to effectively monitor
and evaluate his understanding of the material provided. After the activity was
completed, he repeated it and monitored his reason: Because it says there that it is
incorrect (12:22). He went back to the screen prior to the activity that contained the
bullet lists on which the activity was based and recapped (monitored) the information: I
just had a look at the slide again and what I did - I went back and just remembered the
first short term effects. So that’s what I was doing (12:29). He returned to the activity
and deployed his strategy of focusing only on the short term impacts and monitored his
progress: So I am just looking for the short term ones, and evaluated his rationale for
doing so: because I thought it would be easier to do those first. And I was thinking that
some of these could be both really (12:36). This time he completed the activity
successfully and reflected on (monitored) the reason he had retried the activity: Oh, you
could have probably moved on. It is the object inside that gives that. But I thought I
would make sure and see what happens (13:04).

Ray then moved on to the next page that contained text and graphics. He appeared to
read through the first part of the text before he highlighted and copied it using the
Windows copy function. He evaluated his action: This one it seems that the formatting
is wrong. I think it was the columns that were wrong (13:19). He opened Microsoft
Notepad, pasted the text and used a complex list containing a problem/solution structure
to complete the process: I was thinking Notepad first (monitoring) but I then realised
that the text would just go right across for the paragraph (13:30). He closed Notepad
and opened Microsoft Word and pasted the text, which presented in a much larger type,
and correctly formatted in columns: So I had to open up Word after that (13:38). So
when I put it into Word I put it into columns then. He scanned the text in Word but
appeared to be still evaluating the mistake in the learning interface: I think it must be in
the code – the html code. It is a wonder they haven’t picked that up because it would be
quite straightforward to fix it up you would think (13:50). He appeared to orientate
himself to the new structure of the text: So I went down there first and then I came down
this side, before evaluating its message: and I thought well the main idea is just about the
number of drinks and then there is the link about alcohol guidelines. So I didn’t really
read that very carefully. Ray appeared to act on his evaluation and planned ahead: I
mean most of these things if I was going through this course and I thought there was
something useful I would save it in my bookmark in a folder and just come back to those
when I wanted to use it. So I use Google bookmark so I can access it anywhere (14:29).
He completed reading the text, closed the word document and returned to the main screen
where he clicked on the Australian Alcohol Guidelines link and elaborated his reasons:
And I would do that (bookmark) for this because that looked like an interesting website.
These complex lists of cause/effect, comparison and problem/solution linguistic
structures, embedded in Ray’s remarks, suggest that he continues to sustain a powerful
and effective capacity to metacognitively manage his learning.

Ray clicked back and forth on the opened tab buttons, appearing to orientate the interface
to his needs: So I wanted to have the two open, and monitoring his rationale for doing so:
So I could go back and forward if I had to (15:18). He kept the webpage at the front of
the screen and scanned through its contents, first, to orientate himself: And I had a quick
look at that and I wanted to see where this page was in the site so there was no other
navigation, and second, to monitor the content: So I went to the home page just to have a
look (15:34). He continued to scan the material and monitored his progress: And then I
saw that from the home page there was the Alcohol Guidelines there, before he evaluated
his actions: So that’s why I clicked in it just to make sure I knew what it was. And there
didn’t seem to be anything else at that moment so normally I would have saved this, I
think, because that looks like a useful site to look at (15:48).

Ray clicked on the etrainu Training tab and returned to the unit. He then clicked on My
Stages link, and appeared to investigate various parts of the screen. He had been using
the interface in a linear fashion up to this point and had not employed the navigation
buttons available through the interface.

Figure 85: Etrainu learning interface navigation tabs

As he investigated his navigational options he seemed to firstly be orientating himself to
this aspect of the interface: So what I wanted to do here is just – I was just thinking – I
just wanted to know how to navigate through this because I hadn’t really worked it out.
And I had noticed the top there – it says My Stages – My Slides but I was looking at the
page and it seemed to say stage 2. And that wasn’t really um. So it says here stage 2 and
then I realised that that was the second part of – the introduction was probably stage 1.
So I was just trying to get myself orientated again (16:39). He then seemed to plan how
he might best use them: I guess I just wanted to get an idea how much content there was
going to be up to the assessment. And just get a feeling for how much I needed to keep in
my head, before evaluating how this might impact on his learning: Or just how much I
would come back to so I just thought there was so much here that I would just come back
to it anyway (17:20). This learning interval is again rich in the cause/effect, comparison
and problem/solution discursive structures that highlight the depth and organisation of
Ray’s metacognitive activity.

Ray continued to click through and briefly scan each of the stages until the Review
screen appeared which provided a checklist for preparing for assessment. Ray scanned
the checklist and monitored the content: Well they just said tick if you have done
everything and it was the usual thing. He did note (evaluate) though that: They were
already there – it was just an image (17:50). Next Ray: opened up the assessment in a
new window so I could go back and forward between the two of them (18:05). He studied
the questions in the assessment window and then clicked back and forward between the
two opened tabs orientating himself and choosing the three responses required for
question one: So I was reading through here and go back just to check that I had
understood and that I had the right details. In a way I guess I just wanted to do the
assessment (18:13), before planning his approach: I didn’t want to remember everything.
And then I was thinking it would be good if the slide had a search function. I didn’t want
to have a look around for that because it didn’t seem that obvious (18:39). He continued
to engage with the assessment questions and continued to move between the content
pages and the assessment page. He appeared to reflect (monitor) on the set of questions:
So the Australian culture stuff, that didn’t seem to be mentioned in the test (18:59).

Ray switched back to the learning module and replayed an audio file and simultaneously
reflected on (monitored) a graphic on the page: I was just checking to see if there was
anything about that picture that I – but if I had known there was nothing explaining it.
And I read it again and it was clearly straightforward (19:24). He returned to the
assessment page and chose the answer to question two and monitored his answer: No 3 –
A person becomes intoxicated – Sorry I am just reading this (19:50). He went back to the
learning module and skimmed through the text, monitoring his action: looking at grams
for alcohol. Yeah pure alcohol (20:00). He continued to skim through the pages looking
for the answer and continued to monitor his search: See that number three. Yeah, I
wasn’t sure whether it was about – yeah it is about when you become intoxicated and in
the content it was saying – ah – I was looking to see if it affected you and if that affect
was intoxication, before evaluating the results: But then they are saying it has no visible
affect from 0.1 to 0.5 so I was confused there between what was in the content and what
the question was asking because I didn’t think it was clearly spelt out there (20:29). He
read a little more and continued to monitor the information he was reading: Like here it
says normal behaviour observed but it doesn’t mean you are not affected by the alcohol,
before evaluating: So that’s why I thought OK, well after you have had a few drinks, and
then elaborating further: So I guess the answer is after the first drink. But in there it says
normal behaviour (21:43). The highly integrated series of comparison and cause/effect
linguistic structures used by Ray during the last few minutes of learning serve to once
again highlight his metacognitive dexterity.

Ray continued to answer the questions by clicking back and forth between question and
content and paused to evaluate his progress: So some of them I think I had the answer
straight away, but I just wanted to check it to make sure. Because I have done these
things before where it is not so clear (22:01). He paused and appeared to orientate his
thoughts: So I have got 10 grams of alcohol – and some of the answers here were fairly
close together. Like one standard drink – 10 grams of alcohol. This seemed to allow
him to evaluate what he saw as inconsistencies in the material: But the slide before was
saying pure alcohol – so there were inconsistencies, and monitor the outcome: so I was
checking those to make sure. Like this one it said between zero and 0.05 but in the text it
is 0.01. So that is a pure alcohol one. It wasn’t – it just said alcohol so I was wondering
if there was a difference between how they were saying alcohol – pure alcohol. He
continued to read and answer the assessment questions and appeared to evaluate their
level of difficulty: These ones were easier because they seemed a bit straight forward
(23:41). He completed the answers to questions 9 and 10 without checking back with the
learning materials and evaluated his answers: The distracters weren’t – no – they were
pretty obvious (24:04), before he carried out what appeared to be a final monitor of the
assessment task: Then I had a quick look to see if everything is done – it seems to be OK
(24:21).

Ray clicked on Submit tab and a message appeared on the screen advising him that the
answer to question three was incorrect. He scrolled up and down the question page
attempting to change the response to question three and monitored his effort: I was
trying to find out, OK, it is incorrect, so I was clicking on that question to see if I had to
resubmit it, but it wasn’t working, so I clicked submit again. And nothing was happening
again, before he attempted to orientate himself on how to proceed: I think I pressed
cancel then to get out of it. And then when I went back to - I think I went to the end of the
– I went to the assessment again (24:43). He was presented with a screen message, In
Progress, to which he responded by clicking on the Take Assessment link. The
assessment screen appeared with question three marked in red and Ray appeared to
monitor this outcome: And the same thing happens. I think I tried to… it seems I could
not change it again, and re-orientate himself to the learning interface: I clicked submit,
then it was locked and I got that message to contact the assessor, before evaluating his
actions: So, then I am thinking it looks like you have to get everything correct but there is
no indication of that (25:40). He exited the assessment. The simple and complex lists
containing problem/solution, cause/effect and comparison linguistic structures here serve
to indicate Ray’s capacity to confront the need to metacognitively evaluate and
reorientate his learning in order to maintain a desired learning trajectory.

Ray was taken to the My Stages page, where he scanned through the opening text and
scanned the stage names, and he monitored his progress: So then I went to number two,
and he appeared to plan his next learning objective: And I just wanted to look through
how many stages were in there (26:03). He read through the list and then clicked on the
next page.

Figure 86: My Stages page

He was taken to the My Slides page where he scanned the initial text and examined the
list of slides. He monitored his progress: And then they have got a slide name. So I was
just looking to see what kind of stuff was in this overall, and evaluated how he might
proceed: I was wondering if I should keep going ahead with this. I was thinking was
there something else that would be more interesting to do? But I thought I would just go
through that. And I was just looking to see how many slides there were (26:37). He
continued to scan down the slide links and monitored: So I thought about the same
length as before, so maybe something similar as before (27:10).

Figure 87: My Slides page

Ray returned to page one and chose the first link, Restrictions on the Availability of
Alcohol. He skimmed the page and evaluated: So that is the same intro – pretty quick
(27:24). He moved to the next page, used the cursor to skim over the page and hovered
over the hyperlink to The Liquor Act 1992. He reflected (monitored) on the material:
And they talk about legislation here, so I just quickly looked at those links there and – I
have seen legislation before and it can be a bit dry so I didn’t want to have a look at that,
and appeared to evaluate its usefulness: So it seems like the last link would be the most
relevant one. And that might be the most recent one to look at (27:54). He opened the
link to The Responsible Service, Supply and Promotion of Liquor Supply Code of
Practice in a new window, and evaluated the document: And that said 2005, so I said OK
that is most probably the most relevant one (28:15).

The top part of the booklet appeared on the screen and Ray adjusted the screen size of the
document: And I was trying to resize the window. Ray used the menu bar to move
quickly between pages of the booklet and monitored his progress: I just had a quick look
to see what that was. He then clicked the zoom button and monitored: And then just to
zoom in to read it a bit more carefully. And then that seemed to be just background
information. Some definitions – I read those to make sure I understood that, before
appearing to evaluate what he had just read: And I was thinking some of that I didn’t
really know – like excessive drinking. And normally I would save this as a bookmark as
well, because that was a really good summary and seemed useful (28:43).

Figure 88: Code of Practice page

He used the cursor to skim through the pages and spent time reading the terminology
presented quite thoroughly. He appeared to show more interest in this section and was
orientating himself to the notion of risk, commenting: I was interested in this because of
how they classified different risks and how they named things, so like they have got here
– drink stockpiling, laybacks. He continued to monitor his progress through the material:
I was looking at the sort of things I hadn’t heard of before, or hadn’t thought of. Like test
tubes, or here they are talking about 50% higher for the drinks. The discount is high risk.
He appeared to pause and evaluate: So I thought it was interesting in terms of that detail.
And I was wondering what that meant, the emotive titles. So I was thinking about that
one (29:18), before elaborating further: Things like pyrotechnics and fireworks – I
thought that was unusual that they would put that in there. But I guess, you know
(30:12). This series of simple lists and cause/effect linguistic structures highlights how
Ray’s metacognitive capacities enable him to keep his learning on track, whilst drawing
upon his previous knowledge to cement his current learning.

He continued reading through the document and monitored the material: I realised that it
was a kind of summary of what that table had been before, and evaluated: But I was
interested in how they classified acceptable and unacceptable. He went on to elaborate
his understanding: Like for example they have got promotions involving low alcohol
liquor or you can give a standard drink, but not an all you can drink so it seemed a bit of
a grey area, for me anyway (30:31), and finally evaluate its usefulness: I thought it was a
really good document (31:10). Ray stopped the module at this point.

Figure 89: Examples page

Summary of learning module 1
In reflecting on this unit of work, Ray’s responses to my observations and questions were
usually part of a unified dialogue that was richly loaded with metacognitive linguistic
structures. The observations contributing to my analysis of his metacognitive activity
and top-level structure linguistic markers during the 31 minutes of the lesson have been
sliced into three purposive though arbitrary segments (see explanation page 80) to
represent Ray’s progression through the beginning, body and conclusion of the learning
event. These are represented in Tables 39 and 40.
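The three-way slicing described above can be sketched as follows. This is an illustrative reconstruction only, not the author's coding instrument: the function name and the seconds-based interface are my own assumptions.

```python
# Illustrative sketch of bucketing timestamped coded events into the three
# segments used in Tables 39 and 40: the first five minutes, the body, and
# the last five minutes of a recorded session.

def segment(timestamp_s, session_length_s):
    """Assign a coded event to a segment by its timestamp (in seconds)."""
    if timestamp_s < 5 * 60:
        return "First 5 mins"
    if timestamp_s > session_length_s - 5 * 60:
        return "Last 5 mins"
    return "Body"

# A 31-minute session, as in Ray's first learning module.
length = 31 * 60
print(segment(1 * 60 + 10, length))   # 01:10 -> "First 5 mins"
print(segment(13 * 60 + 4, length))   # 13:04 -> "Body"
print(segment(28 * 60 + 15, length))  # 28:15 -> "Last 5 mins"
```

Counting events per segment and per category then yields the rows of the tables that follow.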

Metacognitive activity
The totals of metacognitive activity identified for Ray showed that metacognitively he
drew heavily upon monitoring, execution and evaluation, and to a lesser extent,
orientation. In contrast, he engaged in very little elaboration and planning. Typical
examples of each have been presented in the preceding narrative.

Table 39: Metacognitive activity learning module 1 - Ray


Orientation Planning Execution Monitoring Evaluation Elaboration
First 5 mins 1 2 7 13 10 0
Body 17 4 43 47 33 6
Last 5 mins 3 1 13 8 12 2
Total 21 7 63 68 55 8
% 9.46% 3.15% 28.38% 30.64% 24.77% 3.60%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify Ray’s metacognitive activity is
outlined below. These indicate that Ray relied heavily on cause/effect, to a lesser degree
on comparisons and made little use of problem/solution structures to underpin his
metacognition. He used more complex lists than simple lists in this instance.

Table 40: Top-level structuring activity learning module 1 - Ray
Simple TLS event More complex Top-level structuring events
List - Simple List - Complex Cause/Effect Problem/Solution Comparison
First 5 mins 2 6 14 0 4
Body 16 27 40 2 26
Last 5 mins 6 5 5 1 4
Total 26 38 59 3 34
% 17.25% 23.75% 36.87% 1.88% 21.25%

Self awareness of learning autonomy 1st rating


At the completion of learning module 1, Ray participated in a 10 minute interview in
which he was asked to respond to the following three questions:

1. How would you describe your engagement with hypermedia?
2. To what extent do you see yourself autonomous in such activities? and
3. How would you rate how effectively you engage with educational hypermedia – on a
6 point scale?

Ray’s responses to each of these questions are outlined next.

Response to question 1
Ray described his engagement with the first learning module as: very uninspiring
example of an on-line course, mainly because I think it could have been put in a
print-based resource that you could read. There is no interaction involved on my part. It was
just working through each of the pages and at the end of that would be a piece of
assessment. Ray suggested that as a learner he: Didn’t think it was a very interesting
way. He went on to suggest that he: would want to have a bit more control of what’s
happening and a bit more access to just that content, of being able to explore different
things and participate when doing that. He gave some examples of the interactive
mechanisms he would prefer to use: Things like Gmail, Google Docs, other wikis and
blogs and so on that I find a lot more engaging. Ray made the point that it is a course in
which he has: no other engagement with other students and so, again, I would rather get
a print-based document, read through it and do a test.

Ray believed that he was quite familiar with using hypermedia and the technologies, for
example, and he commented that: one thing I really like is web conferencing. I think it is
a very beneficial way of working with a facilitator. He believed that the capacity to share
with other students and their documents and desktops: really benefits the on-line
learning experience. He reported that he had taken part in a number of on-line forums
and discussions and that: Some of these work well. Although it can depend on the
discussion question, and how much you have to offer, the lecturer’s participation, and the
other students’ participation.

Ray advised that he also liked participating in wikis and: Collaborating together and
building something, building the finished products based on that. He suggested that he
liked using home pages and publishing work: Publishing websites and so on, I think that
is a really useful way of working. He saw this as a really useful way of getting feedback
from others. I like it when I can apply what I am learning to my own work situation and
develop those skills, rather than being controlled too narrowly into developing for some
kind of assessment. It’s not that meaningful for my own learning needs.

Ray suggested that in managing his learning the first thing he needs to know is: What I
am going to be assessed on, when that’s due, and I would probably work backwards from
that. He suggested that to plan his time he would: probably do a calendar and organise
it myself. He suggested that, working full time and studying, he needed to be: fairly well
organised. He suggested that in organising materials he prefers to: keep weblinks, rather
than take notes because I would probably keep them after the course, particularly if they
were interesting. He suggested that he does not take many notes: particularly when the
content is already there and note-taking would be more to record what I needed to do for
assessment. When he does keep notes he tends to: copy and paste different bits from
documents or PDFs using a word processor or Google Docs. I have ways of noting
where that comes from, and linking it to other material I have found, or grouping it in
meaningful ways.

Ray’s approach to using the learning material was to: Look at the first module and see how
much detail there is in that, and think about how much time that would take me to get
through it, and then try to get an idea about what would be absolutely essential, and what
would just be extended reading to look at later. He believed he would: try to separate
into what is absolutely essential to get an idea, and then I might go through quickly and
look at the whole course to see if that is the same throughout the rest of the course. He
believed that it was: easy to get bogged down at the start too much, with stuff that might
not be that important. He summed up these remarks by saying he: tries to get a good
overview as soon as I can.

Overall, Ray describes his interaction with hypermedia as usually being a positive
experience, although sometimes not necessarily challenging. He believes he has effective
ways of engaging with, and managing, the learning materials and is confident about his
competence to do so.

Response to question 2
Ray stated that he considers himself to be an autonomous learner in hypermedia learning
settings although he does: tend to be more linear I think because it tends to lead me to
the assessment. He did caution though that he often tends to: move through it fairly fast
and realise I should have to go back and redo something. He referred to the first learning
module as offering him: no choice except to follow the material as presented. He stated
that: for me, the autonomy comes from the assessment really. Because if there is a project
involved that I can develop based on my own interest, then I think that’s how I get the
most out of the course material. This was because it allowed him to: really
focus on the bits that are relevant to what I am trying to do, or trying to apply. He went
on to say that: often I will look at course materials in a way that – how can I use them for
my own benefit, outside of the course as well.

Ray reflected on his approach to having autonomy over the materials he is given and
explained: So I might look at materials and maybe group things in my bookmarks, for
example, so I can come back to them later. So I guess I like that autonomy in the
assessment and that would drive how I would look at the materials. He believed that he

is not restricted to just the materials provided by the course writer. For example, he
stated that he is often guided by what he thinks is: missing from the materials. He said
that he has: done courses before where some things are just too general, or there is
insufficient content, and you have to go and research yourself to get a better
understanding. And so I am prepared to do that to make it worthwhile. He said that
while he recognised that some people: just want to get it done, I want it to be an
enjoyable experience as well.

Response to question 3
Ray rated himself as a 5 (effective) on the Likert scale presented. He suggested that this
rating was because: I knew what was there, I used everything there I think, but I don’t
know if I was that engaged in what was being presented. He suggested that the reasons
were because it was: not a very engaging experience. It was just a list of information;
there was no input from myself, with just an assessment tacked on the end.

Metacognitive training intervention


The purpose of this intervention was to provide Ray with training and to raise his self-
awareness of his metacognitive activity in hypermedia learning prior to undertaking
learning module 2. A thirty-minute session was conducted in which Ray was:

(i) Provided with a paper copy of the metacognitive taxonomy (Table 6, page 57) and
top-level structuring rhetorical structures (Table 7, page 61); the initial 10
minutes was spent explaining and discussing the categories they contained.
(ii) Provided with a paper copy of the analysis of his first learning event; the next
15 minutes were spent examining and discussing these data.
(iii) Engaged in a 5-minute reflection in which he was asked to reflect on his
utilisation of metacognitive actions and to discuss their future utilisation.

Ray was asked to reflect on these discussions prior to undertaking a second module.

Learning module 2
The second module in which Ray engaged was Intermediate Japanese. Ray has a
Japanese partner and was keen to become more proficient at speaking Japanese. He had
joined the Live Mocha social network service, where users learn languages through audio-
visual lessons, peer tutoring tools and support systems. Ray had completed the
introductory Japanese program and was now commencing the intermediate level
program, having set it up previously.

Ray commenced by loading the program and clicked on Looking to practice in a drop-
down box on the introductory page and appeared to orientate himself to the learning
interface: So I was loading up and I wasn’t sure what that was. And then I was just
looking at the screen, before monitoring his progress: This didn’t seem to be related to
this, but more of the collaboration stuff that you could do. He then appeared to evaluate
his position: The Looking to practice I thought might have some relevance to the course,
but it didn’t seem to (00:18). He clicked the start button and monitored his action: This
is one that I sort of set up before. This is the intermediate one, before he evaluated the
outcome: I think this is the only intermediate one that was available. He appeared to
pause and orientate himself once more: And I was looking at what was on the screen
there to get an idea of orientation. There is a learn and review and there seemed to be
read, listen and quiz (00:55).

Figure 90: Lesson 1 screen

Ray used the cursor to point to the Read, Listen and Quiz buttons, appearing to be
orientating himself: So I thought I would start – yeah I was just rolling over those to see
if they were links. They seemed to be links, so I thought I would start at the learn. He
clicked the learn button and a new screen appeared which he scanned and evaluated: So
even though it said past simple at the top, um – that is the present tense there. So, and
also, it didn’t say anything about clocks or time. So I found that a bit confusing (01:18).

Figure 91: Learning interface

Ray then moved the cursor around the screen and clicked on the Translate button. He
appeared to be orientating himself: And I was just trying to work out what those other
translate buttons would tell me there. He next clicked on the button below the Translate
button (see Figure 92) and elaborated: The Romaji button wasn’t working, so I ignored
that one. Even though this is Romaji, this text here, so I am not sure (01:57).

Figure 92: Audio and translate interface

As was the case in learning module 1, Ray’s actions and comments during this initial
stage of his learning suggest that he maintained his capacity to impose structure on the
way in which he managed his learning. He commenced by undertaking a series of
orientation processes to secure his learning trajectory and deployed some monitoring and
evaluation processes in order to confirm that trajectory. The discursive markers in his
metacognition include: So I was …, This didn’t seem to be …, And I was looking at …,
So I thought I would start…, and They seemed to be… . These examples contain
language he used to describe his own thinking, framed as a series of list, cause/effect
and comparison top-level structures that serve to clarify the purpose and outcomes of his
actions.

Ray next clicked on the right arrow button of the audio interface (see Figure 92 above) to
progress through the times and evaluated the response: And I could see that there was 40
of these, and I didn’t want to go through and listen to 40 different ways of telling the
time. So this one is saying it is half three. He continued working through the time
activity monitoring his progress: This one is saying 15 minutes before, it was half past
three, and evaluating the material presented: So there were different constructions there.
Saying 15 minutes before, or saying the time exactly. He chose the next time, listening to
the audio and monitoring: This says it is six o’clock (02:41), before choosing the next
and continuing to monitor: And this one says 30 minutes before – sorry 3 hours before it
was six o’clock (02:49). He paused and appeared to monitor and evaluate his progress
once again: So Ima is now, and then, I mean those kinds of things are fairly straight
forward. It is when you hear it in context of a conversation that is what you want to

practice. So rather than hearing them in isolation – so that is why I probably went to
look at something else (02:49).

Ray next clicked the Review button and when the screen opened he attempted to click on
the words Read and Listen (see Figure 93), which he appeared to think were links, and
evaluated: I wasn’t sure if they were links there. But they didn’t appear to be, so you
think there would be a link there. Instead of having them on the left menu (03:25).

Figure 93: Review options

He then clicked on Read in the left hand navigation panel. When the screen opened he
studied it briefly before he hovered the mouse over the thumbnails and monitored his
action: I was looking for the correct answer here. But I got, it seems that when you get a
correct answer to screen, there is no text to say you are correct, before evaluating the
outcome: Or – it seems it just goes to the next question if you are correct (03:49). He
appeared to elaborate this point more succinctly: If you are not correct it comes up red.
And lets you continue to look for the correct answer (04:09).

Figure 94: Right answer indicator

As he continued with the exercise he seemed to come to terms with how a right answer
was being flagged by the learning interface and evaluated: Green is indicating correct,
but there is no sort of directions around it or um and it didn’t – and then these are these
other navigation there – that I guess (04:16).

Figure 95: Right and left arrows

Ray next used the cursor to point to the right and left arrows (see Figure 95 above) to
review the exercise. He worked his way through the times very quickly, appearing not to
make any attempt to translate them or listen to the audio and monitored this action: Then
I thought I would just go through and see what happens, before evaluating: I didn’t want
to do just the times because it gets a bit boring. He appeared to still be grappling with
how the interface provided him with a clear indication of the correctness of his answers
and elaborated: And then that is when I realised then that the red one – that indicates
when it is wrong. I wanted to get one wrong and see what happens. So that wasn’t so
clear (04:41). This complex list of cause/effect structures demonstrates Ray’s capacity to
manage the challenge presented from time to time by the idiosyncrasies associated with
the learning interface, while keeping his learning on track. He then clicked on the Quiz
button, seemingly finished with, and bored with, undertaking the times exercise.

The quiz screen opened and some of the interface was clearly not displaying correctly.

Figure 96: Quiz interface (showing missing components)

Ray seemed to monitor his reaction: Here, at the quiz, I was looking at it to see what that
would do, and further indicated (elaborating) his disquiet: Because it wasn’t very
interactive to me - just looking at that. He returned to monitoring the quiz interface: And
here I could see that obviously the text wasn’t working properly, so there should have
been Romaji text there (05:14) (referring to the missing text between the numbers in
Figure 96 above), and elaborated further: 1 15 1 - then I thought it must be this one
here and he moved the mouse between the first two answers. Followed by: Because it
has 1 15 1 in the answer, so I thought well it is not going to help me do anything
because this isn’t working (05:25). He then chose the second answer which was correct
and moved on to the next question which again contained missing text and monitored its
impact. Not being able to complete this part of the exercise accurately he evaluated his
position: And the same on the next one - there is only one 7, so it could be any one that
has a seven in it. I didn’t worry about that (05:42). Ray stopped the activity.

Ray moved on to the listening activity and orientated himself to this section: So then I
thought I would have a look at the Listen section to see how that compares, and
evaluating the reason why: Because listening is often a really good practice. Ray
commenced the listening exercise using the cursor to select the time graphic that he
thought matched the audio being presented. After he had completed the third question,
he monitored his progress: so that one is a little bit more difficult because it is longer,
and elaborated on the reason why: And you have to listen more carefully. He continued
working through the questions and on the seventh question evaluated his progress: This
was fairly simple in a way. When he reached the eighth question, he appeared to be
frustrated by this exercise, discontinued and elaborated: But there is no score, - I was
looking at how many questions I had done – but there is no sense of how many I have got
right so far. So I didn’t think that was very useful. Like I have done other tests like this
where they would have a running score of how many you have got or they would indicate
which ones you had problems with so you could go back and do those again (06:01).
This complex list of cause/effect and comparison structures highlights Ray’s capacity to
metacognitively continue to deal with the unfamiliarity of the learning interface.

Ray then clicked on Course Home link and monitored: I will go back and look at what
else – what other vocabulary they had. He scanned the page and evaluated his options:
So it seemed that was all there was for that – that lesson, so I went to the next one
(07:06). He clicked on Lesson Two and read the screen and evaluated its worth to his
learning: So same – I guess same kind of organisation (07:25). He clicked on the Learn
button and a new screen, Present continuous/past simple verbs appeared.

Figure 97: Present simple verbs interface

This exercise required him to select a thumbnail (on the right of the screen) and listen to
the corresponding audio while he viewed the Romaji text. On the first thumbnail, he
evaluated his understanding: And some of these words I don’t know so - like matteimasu.
I wasn’t sure (07:44). He continued to the second thumbnail and continued to monitor
the material: That says she was waiting before – that is the past tense, before evaluating
the graphic: But it is kind of a strange picture (08:05). He then clicked on back arrow to
return to the previous thumbnail and monitored his understanding: I wanted to see the
difference between it, and evaluated: Because the picture wasn’t really good to do that,
and elaborated: So that is why I was looking at the translation (08:19).

He returned to thumbnail three and monitored the text presented: You know that says I
asked somebody where the restaurant was – um – and I just checked that translation to
see if I was correct there (08:39). He appeared to evaluate the exercise generally: But I

found that the pictures were a bit difficult because I don’t think it’s obvious. He
continued to thumbnail five and evaluated the audio: And I would have translated that a
bit differently too, so (09:05). He went on to thumbnail six, then returned to thumbnail
five, commenting: That’s why I went back to check those two. The past tense and this is
the past tense and present.

He continued listening to the audio and moved through the questions to thumbnail seven
and monitored the audio: So that one I know, before he moved to thumbnail eight, and
did the same: And I think I knew that one (09:35). On reaching thumbnail ten, he
appeared to reflect (monitor) on the text: So that is just the past tense and not the
present tense, before evaluating: Um – but again there is no kind of context to the - so it
is a bit hard to remember some of this. So what I found I was doing was remembering
the photos, rather than the language itself (09:55). He reached thumbnail eleven and
discontinued the learn activity, and monitored his position: And again it wasn’t - there is
only eleven questions out of 40, and elaborated further: I didn’t find that very interesting,
to just keep going through that. Um. Even though the listening ones were the most
interesting – most useful (10:12). This series of complex lists containing comparison and
cause/effect structures demonstrates Ray’s continuing capacity to monitor, evaluate and
elaborate on how best to manage his learning pathway.

Ray then chose the Review button which tested his knowledge of vocabulary words, as
well as reading, listening and word order skills. This exercise required him to select
words from a pool of words and order them correctly. The first question checked his
reading skills and he selected the correct answer. Question two tested word order skills,
which he attempted. The last word he selected appeared in red. He appeared to be
confused, so selected the √ button which checked the correctness of his answer and
monitored his action: So I thought I would try that. There is no – it wasn’t obvious what
to do after that, so I think I clicked the tick button eventually, and evaluated: I suppose
once you do it nearly once or twice you know what to do, so it doesn’t really matter
(10:49). The answer he had constructed was marked wrong (surrounded by a red box)
and he monitored the response: I went over that button then I realised – and then it was

wrong. It is showing me the red one (11:21). He tried to move forward and this caused
the text he had constructed to be cleared. He appeared perplexed and evaluated his
options: And then I thought, oh, OK, don’t want to go to the trouble of doing all of that
again (11:23). Ray moved on to question four and monitored his progress: I was just
remembering the pictures. He selected the Play button and listened to the audio once
again and monitored his action: And I could play it back again, so that was useful, which
caused him to seemingly evaluate its purpose: So I realised that it was a combination of
listening and reading happening here - mixed jumbled sentences, so I guess the listening
would probably be the best. So I think later on I’ll just look at mainly the listening ones
(11:38). The linguistic structures identified within Ray’s explanation of his engagement
with this vocabulary exercise suggest that he was effectively monitoring and evaluating
his learning metacognitively.

Ray completed eight out of forty review questions before exiting and clicking on the Quiz
button. He appeared to find no further value in the review questions and focused his
attention on the quiz and monitored: So here I am looking at the quiz, and quickly
established (evaluated) that this was not the best move: And I can see that it is not
working again, before elaborating: So I was looking at how to get out of there (12:12).
He used the mouse to move around the screen and then clicked on the Course Home
button. Ray was returned to the home page where he then chose Lesson 3 and evaluated
his action: I thought I would have a look at the next one – irregular verbs – because
sometimes they are a bit more difficult (12:39). He focused on a section of the screen that
indicated what had been completed and appeared to monitor his thoughts: I was also
wondering percent complete – what that meant – maybe after I did everything (12:50).

Ray then exited Lesson 3 without attempting any content, and clicked on the Unit 10 tab.
A new screen appeared identifying the vocabulary category as Clothing. He reviewed the
screen and appeared to monitor his progress: OK, so it comes up with the, um, the kind of
vocabulary there, and evaluated its impact on his understanding: So at least it gave me
some indication – rather than just the title (12:59).

Figure 98: Unit 10 - Vocabulary screen

He then clicked the Learn button. A picture of an item of clothing appeared in a box
along with an audio of the word in Japanese. He scrolled through the thumbnails using the arrows and
orientated himself to the change in interface: So I was just scrolling through to see what
happens when you learn, before evaluating: Initially for me if I was doing that I would
like to see that written in Japanese as well so I can remember that (13:28). He attempted
eight out of forty vocabulary exercises and then stopped that activity.

Figure 99: Clothing vocabulary activity

Ray then clicked on the Listen button. An audio of a Japanese word was heard and he
then chose the appropriate thumbnail and monitored his action: I am just checking the
correct answer, before elaborating further: I am listening because I think it is the best
one for me. He reflected (monitored) once again: I am listening and selecting (13:58). He

continued this activity and chose the first ten correctly. His answer to question eleven
was incorrect and he monitored his mistake: So that one I didn’t know what that was
(14:35). He clicked on a number of other thumbnails until the correct answer was
chosen. He continued and chose the correct answers for twelve, thirteen and fourteen.
After question fourteen he appeared to monitor and evaluate his learning progress to this
point: So that’s um, when I came to this I guess probably it was as far as I could have
got. I was hoping there might have been a glossary or something but there didn’t seem to
be anything like that (15:00). He then continued the activity and reflected on (monitored)
his progress: So I am just going through doing that (15:23), and following a pause,
elaborated on what he would do next: I guess the next thing to do would be to try and
find someone on line to do some lessons (15:34). Ray completed twenty-four of the forty
questions, paused and appeared to reflect (evaluated) on where he was: I think this isn’t
that useful a site – I think I got to a point where I want to see what other vocabulary there
was (15:44). Despite these thoughts, he continued the activity and monitored once again:
So I am still selecting. Um – trying to see what happens (15:55).

Ray continued to correctly answer the questions until question twenty-eight. He
appeared to be confused by the set of pictures and monitored his reaction to them: So
some of these I wasn’t sure about the picture (16:15). He clicked through the set of
thumbnails until the correct answer was chosen, and evaluated his choice: So that was
the ring. So I eventually got that (16:24). He continued to listen to the audio, select the
corresponding graphic correctly and reached question thirty-four. At this point he
appeared to monitor and evaluate the value of the exercise: A lot of these are Japanese
variations of the English word. So sometimes it is interesting to see how similar they are.
On reaching question thirty-seven he stopped and evaluated what to do next: When I got
this far I thought that’s probably enough for that (16:43). He clicked the Course Home
link.

Ray returned to the course Home Page and monitored his action: Let’s look at another
one. He selected Lesson 2 and appeared to realise that this was not what he had intended,
and evaluated his apparent mistake: So that one was all about clothing, and this one was

– oh no, I think I went into the wrong section (17:14). He then clicked on Lesson 2, Unit
10 and then the Learn button, and monitored the contents of the screen: So this about
Foods. Before the Learn section had a chance to load, he clicked on the Quiz button. As
had happened previously, the screen did not load correctly and he monitored its failure:
Again the Quiz is a problem (17:22), before he then moved on and clicked the Review
button. The review screen opened and he moved the mouse over the Read and Listen text
thinking that they were hyperlinks.

Figure 100: Read and Listen text

He evaluated his mistake: Again I was thinking these should be the places to click on –
that should be logical (17:55). He then chose Listen from the left hand navigational
panel and commenced the exercise. The linguistic structures in the text that follows
demonstrate how Ray was able to reflect generally (monitor, evaluate and elaborate) on
the usefulness of the resource and the listening exercises in particular: So again this is
the same kind of exercise – listening for the word. For me it is still good practice, but I
would have liked to have a list of words so at least I could review at the end. Rather than
scroll all the way through – and that kind of thing. So I guess that is about all – the
extent that I got to was realising that was what would be useful. I don’t know if I would
use this much again though. Even though it kind of looks nice (18:06). Ray continued
with the exercise and commented further: And also the speed is a little bit slow. And it is
a fairly clear pronunciation. But it is when you hear it in context that you really want to
listen to it (19:10). This complex list of comparison and cause/effect structures indicates
the extent to which Ray found this particular aspect of the learning useful, despite the fact
that he found aspects of the interface irritating. It also indicated how he viewed the
resource for possible future use. He completed twenty-one out of forty questions and
then discontinued this activity and clicked on the Course Home button.

Ray returned to the home page and scanned the options available to him and monitored
his progress: So again, I wanted to look at that. Have a look at another one (19:35). He
chose Lesson 3, and monitored its content: So that was about jobs, before he clicked on
the Review button and evaluated his choice: And go to the Review section because that
seems to be the best one (19:48). The top of the screen contained a progress bar about
which he appeared to be confused. He monitored his reaction: Looking at the progress
too – I am not sure exactly what that meant – whether you have to do everything (20:00).

Ray then clicked the Listen button. He commenced the first question and took three
attempts before he chose the correct answer and monitored his progress: Some of these I
don’t know, so, um – and I’m not sure about the pictures too so it was a bit tricky (20:18).
He continued on to question two and the answer was correct at the first attempt. He
replayed the audio for question three, and monitored: So, I didn’t pick that up – solo
means play and alay means wash, before deciding (evaluating) it: Must be dishwasher.
He listened to the audio for question four and chose the correct answer, and evaluated:
Um, so shefu – that is a bit easier. He continued to question five and evaluated the audio:
So that is a bus driver – intensu (20:41). He attempted question six and again evaluated
the audio: Um gagaku means drawing so I didn’t know what the first word meant, so I
just went with the first one. He hesitated at question seven and monitored: I was looking
at the pictures a little closer to see what they were. He then moved on to question eight
and evaluated the audio: So reji is register, so it must be cash register. He discontinued
the exercise after completing nine out of the forty questions and clicked on the Course
Home button. He appeared to reflect (evaluate) on the design of the activity: I had to
roll over and that’s how the image appeared. But it took me a while to realise that
(21:12).

From the Course Home screen Ray then chose Lesson 4 and monitored his action: So
this is another one, and clicked on the Learn button. Before the screen had loaded, he
chose the Read button and continued to monitor: I just wanted to see what the reading
was – whether that was working –something different (21:44). He commenced the

exercise and appeared to evaluate the interface: But that would be better if it was written
in Japanese because that would be better practice than the English (22:20). At question
five, he seemed to pause momentarily and evaluate the graphics: I found here that the
pictures were very similar, like eye; however, he managed to select the correct answer.

Figure 101: Eyes graphics in question 5

Ray made the incorrect choice for question six, and evaluated his choice: So me means
eye so I clicked eye, but me must mean two eyes (22:32). He continued to question seven,
selected the correct answer and evaluated his choice: So mayuge must be – I wasn’t sure
whether it was eyebrow or eyelash, before elaborating: So one of those (22:49). He
answered questions eight and nine correctly, evaluating his choice for question nine:
nodo is throat, even though this is a throat, the second one.

Figure 102: Graphics for question 9

Ray continued and answered question ten correctly on the second attempt. He appeared
to know the correct answer; however, he was confused by the graphics. He selected the
bottom graphic before selecting the top graphic (in Figure 102 above). The following list
of comparison and cause/effect structures shows how he appeared to evaluate his mistake:
Kuchibiru– kuchi means mouth but I wasn’t sure what that one meant – must be lips I
think. So I got that first one wrong. He selected the correct answer for questions eleven
and twelve and evaluated his responses: Kubi means neck - ah kuchi means mouth – so
again it was a little bit confusing (23:05). Ray selected a graphic for question thirteen
and as the following comparative statement suggests, evaluated his selection: Shita – I
didn’t know that, and elaborated further: So I just guessed it was a tongue maybe. So
that was correct. He selected several graphics before the correct one for question
fourteen. He appeared to be evaluating the reason behind his choice: Hana means nose
but it could also mean – maybe it is tooth (23:23). He selected the correct answer for
question fifteen and evaluated his choice: And se means back but I don’t know what bone
means, before correctly guessing (evaluating) the answer to question sixteen: Ago –
don’t know what that means – must be chin. He knew (evaluated) the answer to question
seventeen: Hone means bone, as well as to question eighteen: Chi is blood. (23:54)

Ray found question nineteen problematic and his initial selection was incorrect. He
evaluated his response: And kinniku - I didn’t realise what that one was, so I got that one
wrong. He next carried out a series of monitoring and evaluations as he correctly
answered the next five questions. Question twenty: Ah, hada means skin, question
twenty-one: Onaka means stomach, question twenty-two: senaka is back, question
twenty-three: uesuto is waist, and question twenty-four: and then mune is chest or
breasts. Again, his evaluation of question twenty-five appeared to be just a correct guess:
tenoyubi is finger, but I didn’t do that one (24:28). He stopped the exercise after
completion of twenty-five questions out of forty and then clicked on the Course Home
button.

Ray next selected Lesson 5, chose the Listen link and monitored his action: So I thought
I would just go and have a look at the last one and see what that was like. Lesson five
vocabulary was on the topic home/tools/materials, and he appeared to monitor his level
of understanding of words associated with these categories: I don’t really know those
very well, before he re-orientated himself: So I went to the listen, and evaluated: Because
I thought that would be better than the reading again (24:38).

Ray commenced the exercise, listened to the audio to question one, selected the correct
graphic and evaluated his selection: So uindo is window. He listened to the audio to
question two and tentatively scanned the graphics. He made an incorrect choice and
evaluated his selection: I didn’t know what that one was, yane. He moved the cursor
over the thumbnails once again and his cause/effect structured response suggested that he
evaluated his action: So I was looking at those more carefully to see what they were so I
didn’t know that one. Maybe it is roof (24:55). He selected the roof graphic which
proved to be correct.

Ray listened to the audio for question three, selected the correct graphic and evaluated his
choice: I didn’t know that one but ame means water, sorry rain, so I didn’t know what
that was (25:14). He listened to the audio of question four, appeared to consider it
momentarily, and then bypassed it and evaluated his decision: I didn’t know what that
was so I looked at those more carefully (25:20). He continued to question five and
listened to the audio and selected the correct thumbnail and monitored: Concrete. He
then pointed to another picture before the word had been announced. A red box
(meaning the answer was wrong) appeared around the picture and he evaluated his
mistake: I think I was too early there. He continued to question six, selected the correct
thumbnail and evaluated his choice: Um, uddo means wood, before selecting the correct
answer to question seven: And tiles – that’s English. He then listened to the audio for
question eight, selected the correct thumbnail and evaluated his choice: I didn’t know
that word, but it was the last one, so it had to be correct, bricks (25:52). Ray continued
the exercise, selected the correct thumbnail for question nine and monitored his answer:
Suponji – sponge (25:55). He listened to the audio for question ten, paused and replayed
it and evaluated: Ah, I didn’t know that one, so I replayed that one again and I still
didn’t know what it was. He chose a second thumbnail which was correct this time and
monitored: So it was that one (26:08).

Ray continued to question eleven, selected the correct thumbnail and monitored his
response: So that is a mop (26:14). He appeared to guess correctly the answer to
question twelve and evaluated his choice: I didn’t know that but I could guess that. He
selected the correct thumbnail for question thirteen and evaluated his answer: Naifu –
knife is an easy one. He then listened to the audio for question fourteen and selected an
incorrect thumbnail before choosing the correct one and monitored his progress: So I was
looking at what the pictures were and I got that one wrong – that one. He continued to
question fifteen, and once again seemed to guess the answer and evaluate his choice: I
didn’t know that one – ono - so it was a lucky guess (26:22). He selected the correct
thumbnail for question sixteen, however it appeared to be a guess as he evaluated: And
har - I don’t know what har means.

Ray answered question seventeen correctly and monitored his answer: It is screwdriver.
He listened to the audio for question eighteen, replayed it several times, before he chose
the correct thumbnail and monitored his response: Ah, I wasn’t sure about that because
that’s a wrench. He then listened to the audio for question nineteen, selected the correct

284
thumbnail and monitored: That’s an easy one. He listened to the audio for the next
question, answered it correctly and evaluated: So they are very similar those
pronunciations. He paused slightly at question twenty-one and appeared to monitor his
favoured choice: So, neji I thought was a screw. He used the cursor to scan the
thumbnails before he made a choice and evaluated: I just wanted to make sure what the
others were. He listened to question twenty-two and evaluated his selection: I didn’t
know that word but I knew it wasn’t those two there, so it must be nail, kugi. He correctly
answered questions twenty-three, twenty-four and twenty-five and monitored his
response to each: And that’s nuts…And bolt ... So that is a pencil (27:50).

Ray listened to question twenty-six and chose the thumbnail of a wheelbarrow and
monitored his choice: I didn’t know that one, but they said hand cart. He moved on to
question twenty-seven and correctly selected the thumbnail of a tape measure and
evaluated his selection: I didn’t know what that one was, so I just guessed that one. He
correctly selected thumbnails for questions twenty-eight and twenty-nine and monitored
his responses: And the bucket is the same…So, the drill. Ray listened to question thirty
and clicked on each of the thumbnails until he chose the correct answer and evaluated his
action: Nomi - I wasn’t sure what that meant. Must be chisel. So I eventually got that
one. He next answered question thirty-one and thirty-two correctly.

Ray listened to the audio for question thirty-three and was uncertain which thumbnail to
choose and appeared to guess correctly and monitored: Didn’t know it – shovel – yep.
He continued on to question thirty-four and paused and evaluated his choice: Hasami -
Hasami is scissors but I wasn’t sure which one that was. So it must be the secateurs. He
listened to the audio for questions thirty-five and thirty-six and appeared to just select
thumbnails until he had the correct answer, and monitored his action: I didn’t know
either of those, so I was just guessing. Ray clicked on the Course Home button, finished
the learning and monitored his action: And then I think that was about – it was the last
one. Throughout this activity, Ray engaged in a series of metacognitive processes that
were informed by simple and complex lists containing comparison, problem/solution
and cause/effect structures, which enabled him to select the correct answers. Ray ceased
working on the module.

Summary of learning module 2


In reflecting on this unit of work, Ray’s responses to my observations and questions were
part of a dialogue that was richly loaded with metacognitive linguistic structures. The
observations contributing to my analysis of his metacognitive activity and top-level
structure linguistic markers during the 29 minutes of the lesson have been sliced into
three purposive though arbitrary segments (see explanation page 80) to represent Ray’s
progression through the beginning, body and conclusion of the learning event. These are
represented in Tables 41 and 42.

Metacognitive activity
The totals of metacognitive activity identified for Ray in this learning module showed
that metacognitively he drew heavily upon execution, monitoring and evaluation, and to a
much lesser extent, elaboration and orientation. In contrast, he engaged in no planning.

Table 41: Metacognitive activity learning module 2 - Ray


Orientation Planning Execution Monitoring Evaluation Elaboration
First 5 mins 6 0 8 14 15 4
Body 2 0 76 55 50 16
Last 5 mins 2 0 40 23 20 0
Total 10 0 124 92 85 20
% 3.02% 0.00% 37.46% 27.79% 25.68% 6.05%

Top-level structure linguistic markers


A summary of the linguistic markers used to identify Ray’s metacognitive processes is
outlined below. These indicate that Ray relied heavily on cause/effect and comparison
structures and, to a lesser extent, on problem/solution structures to underpin his
metacognition. He used more simple lists than complex lists.

Table 42: Top-level structuring activity learning module 2 - Ray
Simple TLS event More complex Top-level structuring events
List - Simple List - Complex Cause/Effect Problem/Solution Comparison
First 5 mins 7 9 9 1 11
Body 27 28 41 5 39
Last 5 mins 16 6 8 5 16
Total 50 43 58 11 66
% 21.93% 18.86% 25.44% 4.82% 28.95%
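The percentage rows in Tables 41 and 42 are simple proportions of each column total against the grand total of coded events. As an illustrative sketch only (Python is not part of the thesis method, and the function name is mine), the Table 42 totals reproduce the reported percentage row:

```python
def percentage_distribution(counts, decimals=2):
    """Convert a list of event counts into percentages of the grand total."""
    total = sum(counts)
    return [round(100 * c / total, decimals) for c in counts]

# Totals from Table 42 (Ray, learning module 2):
# simple list, complex list, cause/effect, problem/solution, comparison
tls_totals = [50, 43, 58, 11, 66]
print(percentage_distribution(tls_totals))
# → [21.93, 18.86, 25.44, 4.82, 28.95], matching the table's percentage row
```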

Self-awareness of learning autonomy – 2nd rating


At the completion of this second learning module, Ray was presented with the 6-point
Likert scale and asked again to rate himself on how effectively he considered he had
engaged with educational hypermedia on this occasion. Ray rated himself as a 6 (very
effective). He was then asked to comment on the reasons for the rating.

Ray suggested that he found the second learning module: far more engaging than the
first. It was still a bit repetitive; however, engaging with the multiple forms of media was
a lot more interesting than just reading text. He stated that he scored himself as 6 this
time because: I think I used the full utility of the module and showed that I was able to
work through the module to suit my own learning needs. He summarised his response by
saying: I think I was able to make the module work for me. I felt that I easily imposed
my will on the interface and engaged only with those parts of it that suited me.

Effect from metacognitive training


Ray rated himself higher in the second learning module:
Learning module 1 rating – 5/6
Learning module 2 rating – 6/6

Table 43: Collective data of metacognitive activity - Ray
Orientation Planning Execution Monitoring Evaluation Elaboration
S1 S2 S1 S2 S1 S2 S1 S2 S1 S2 S1 S2
First 5 mins 1 6 2 0 7 8 13 14 10 15 0 4
Body 17 2 4 0 43 76 47 55 33 50 6 16
Last 5 mins 3 2 1 0 13 40 8 23 12 20 2 0
Total 21 10 7 0 63 124 68 92 55 85 8 20
Comparison of total activity expressed as a percentage (%)
Session 1 9.46% 3.15% 28.38% 30.64% 24.77% 3.60%
Session 2 3.02% 0.00% 37.46% 27.79% 25.68% 6.05%

A comparison of the metacognitive data by percentage indicates that Ray engaged in
more execution and elaboration in the episode following the training. In contrast, he
engaged in less orientation, planning and monitoring activity. However, his use of
evaluation was similar on both occasions.

Table 44: Collective data of Top-level structuring activity - Ray


Simple TLS event    More complex Top-level structuring events
List – Simple       List – Complex    Cause/Effect    Problem/Solution    Comparison
S1   S2             S1   S2           S1   S2         S1   S2             S1   S2
First 5 mins 2 7 6 9 14 9 0 1 4 11
Body 16 27 27 28 40 41 2 5 26 39
Last 5 mins 6 16 5 6 5 8 1 5 4 16
Total 24 50 38 43 59 58 3 11 34 66
Comparison of total activity expressed as a percentage (%)
Session 1 17.25% 23.75% 36.87% 1.88% 21.25%
Session 2 21.93% 18.86% 25.44% 4.82% 28.95%

A comparison of the linguistic markers data for top-level structures by percentage
indicates that Ray used more simple-list, problem/solution and comparative structures
following the training. In contrast, he used fewer complex-list and cause/effect structures.

Chapter 5 - Conclusions

Discussion of findings

Two general conclusions may be drawn from data reported in the previous chapter in
relation to the focal questions of the research. First, in all five cases the character of each
learner’s cognitive and metacognitive activity was clearly accessible as each spoke about
their learning engagement as it was happening in their vocational learning settings. Their
reports further supported the thesis that such activity occurs during engagement, is
accessible, and may be captured and analysed. This addresses the first of the questions
that had stimulated the research originally. Having been established in the pilot, this
secured the progression of the study into an examination of its second and third focal
questions.

A second conclusion was that a perception of one’s own autonomy as a learner was
present in every case, although to differing degrees across the pre-intervention data.
Further, it did appear to be positively affected by training (not statistically tested), but
again in different ways and to different degrees. In each case, shifts were recorded in
what presented as preferences for various metacognitive actions. Following training, each
individual’s metacognition changed. This shift has been documented according to
descriptive categories for metacognitive processes (Meijer et al., 2006) used in the
analysis and in what seemed to be happening by way of the top-level structuring (Bartlett,
2008) that the learners used to organise their ideas when centring on one or other of the
categories.

Consequently, in this chapter, an account is provided, within the strengths and limitations
of the study, of what has been contributed to a better understanding of the issues
underpinning the three questions of the research, of the implications of this contribution
for knowledge about learners using hypermedia to learn in vocational education settings,
and for ongoing investigation of its theoretical tenets. The three questions are:

1. Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording using video capture software
protocol?
2. To what extent (how) do users see themselves as autonomous in such activity and
how does this manifest itself in practice?
3. To what extent (how) will the provision of metacognitive training affect more
awareness of metacognitive activity and/or greater autonomy?
Findings addressing each of these questions are presented sequentially prior to an
integration of their representation in a general conclusion.

Question 1
Whilst learners are in situ in hypermedia settings, to what extent are their cognitive
and metacognitive activities accessible to recording using video capture software
protocol?

The pilot study established that it was possible to identify the metacognitive activity of
learners engaged in hypermedia learning settings using the methodology developed to
that point. However, a strengthening of the analysis of the metacognitive events in the
subsequent case studies, through the additional analysis of learners’ utterances using the
linguistic markers of top-level structuring (TLS), afforded a deeper and richer
understanding of this metacognition. The additional TLS classifications provided a more
fine-grained and descriptive data set that has had a threefold effect.

First, this richer description afforded greater trustworthiness about category assignation
for metacognitive events and strengthened the Meijer et al. (2007, 2008) system as a
classificatory tool. An example of this is apparent early in the data in the second learning
module from the first case study, David. The surface language in David’s comment: And
I saw the quote and to be honest thought, they are not going to mark me on my knowledge
of the quote, is sufficient to suggest that he was monitoring his progress. The literal
message does not include the comparison that is implicit in David’s statement. However,
the TLS analysis accounts for the elliptical comparison in David’s thinking – the content
and nature of information in the unspoken “if not knowledge of the quote, then what”.

The comparison described David’s sorting of what he thought and knew at this specific
point of thought-and-report. The strong comparison he made from working memory
between what was interesting and what was necessary set up a decision on what to retain.
Together with the content and decision-options, the sorting gave him preparatory
cognitive muscle to handle incoming information from other sources of the presenting
hypermedia as he moved on. The TLS analysis provided a window into what lay behind
what was otherwise incomplete in David’s explicit description. In so doing, it uncovered
further evidence of the nature of David’s metacognition at this moment in his
engagement. In such a contained element of what is a dynamic process of thought and
talk, the TLS-analysis strengthened the probability of the researcher’s classification of
this as monitoring.

Second, the TLS descriptors themselves have provided a further level of analysis within
each metacognitive classification. An example of this is in Judy’s data from the first case
study. Judy was articulating how she was going about reading a section of the text: I
read them through from top to bottom, every word, because it was important and I didn’t
want to miss any information, and secondly, because all the text on every page is pretty
short, so I didn’t feel the need to rush and skim. While at the general level these
comments suggest that she was monitoring and elaborating, further analysis using
top-level structuring provides an additional level of discrimination. Judy had used here a
complex and sophisticated list which contained an embedded series of cause/effect
actions that seemed to be logically connected. When subjected to a Top-level structuring
analysis this monitoring metacognitive event could be distinguished from other
monitoring events in the data (see Table 45).

Table 45: Extract of Metacognitive Classification and Top-level structure Analysis


Respondent’s Utterances: I read them through from top to bottom, every word, because it
was important and I didn’t want to miss any information, and secondly because all the
text on every page is pretty short, so I didn’t feel the need to rush and skim.
Meta Class: MO, EL
Top-level structure Analysis: List
- C: Was important / E: Read through from top to bottom
- C: Didn’t want to miss any info / E: Read from top to bottom
- C: Text is pretty short / E: Didn’t feel need to rush and skim

In this example, the learner constructed a complex list with a logically-linked elaborative
extension. The TLS analytical level details the complexity and sophistication of Judy’s
listing, setting it apart from other, less-elaborated instances where monitoring has been
classified on the basis of simple lists, singular cause/effect, comparison or
problem/solution statements (see example in Table 46), some of which contain logical
links to evaluations, orientations and planning extensions.

Table 46: Example of Simple Monitoring


Respondent’s Utterance: I remember that picture.
Meta Class: MO
Top-level structure Analysis: List
- Remember that picture

This capacity for qualitative differentiation within classifications, together with the
strengthening of trustworthiness about categorical decisions on classifications of types of
metacognition, is an important methodological contribution and has implications which
will be discussed later in the chapter.

A third advantage found in the TLS analysis was that it provided a means of accounting
for contemporaneous processing across several types of metacognitive process as a
learner engaged in sophisticated, dynamic acquisition, interpretation and retention of
information at any particular moment. This adds breadth to the linearity of Meijer et al.’s
(2007, 2008) lens on the metacognition that an individual’s language provides. The
following from the fourth case study (Tammy) exemplifies this benefit. Tammy
considered herself to be both a competent and autonomous hypermedia learner.
Collectively, her metacognitive actions typically followed a pattern where orientation,
planning or monitoring were linked in an evaluation. For example, while following a
video sequence storyboard in her second learning module she commented: Camera
trucks in quickly to mid shot on LP. So I assume LP was lemon pops. Yes, but obviously
that indicates some kind of positional change, or transition as you say. She monitored
her progress through the storyboard before evaluating the impact of the new information
on her understanding. However, TLS analysis provided a significant additional clue as to
how her learning autonomy was being realised at the moment of report. It shows that she
has used a cause/effect frame to sort what she was monitoring, and a comparison to link
her evaluation. Being able to comment beyond the components by qualitatively
explaining their link enables a richer statement about Tammy’s engagement and what
underpinned the autonomy she displayed in her learning in situ. Once again this has
important implications that will be discussed later in the chapter.

Question 2
To what extent (how) do users see themselves as autonomous in such activity and how
does this manifest itself in practice?

Data related to answering this question were captured in two ways. First, at the end of
the first learning module each participant was provided with three probes of self-
awareness of their learning autonomy. The first was: How would you describe your
engagement with hypermedia? The second was: To what extent do you see yourself
autonomous in such activities? The third was: Rate how effectively you engage with
educational hypermedia – using a 6-point scale. This question was designed to secure a
self-scoring of effectiveness. Second, data from each of the learning modules contained
descriptions of how each individual’s learning autonomy actually played out in practice.
These data are discussed next.

How did users see themselves as autonomous?


The data suggest that each of the learners in their individual ways believed that they had a
level of competence that enabled them to operate autonomously as hypermedia learners.
Each was asked to rate their learning effectiveness following module completion and
their self-ratings indicate their perceived high level of competence (Table 47).

Table 47: Learning effectiveness scores of participants


David Lesley Tammy Judy Ray
End of Learning Module 1 5/6 5/6 6/6 4/6 5/6
End of Learning Module 2 6/6 6/6 6/6 5/6 6/6

This confidence in their competence appeared to underpin their learning autonomy.
Individual descriptions provide a more strategic view of what constituted this autonomy.

David
David stated that he was very comfortable using computers and hypermedia and believed
that he was able to manage his learning effectively. He believed he adopted an holistic
approach to his learning and that orientating himself to the task was an important
strategy. He did this by scanning through the learning materials and establishing quickly
what was expected of him as well as what he expected from the learning. Understanding
the learning interface was also important. For example, he suggested that knowing how
to proceed and how to get more help when necessary were important. He liked to engage
with all aspects of the material. For example, he often interacted in-depth with diagrams
and graphics, although sometimes he found pictures less important.

David considered himself to be an autonomous hypermedia learner who did not enjoy
being just a passive recipient. In more structured learning (as was the case in his two
learning sessions), rather than just follow the ‘plot’ as laid out for him, he asserted his
autonomy by going back over the material and linking diagrams, graphics and charts to
the text. In this way he believed that he not only absorbed the text, he actually created his
own meaning. His view was that while he was happy to follow the linearity of the
materials, particularly when dealing with unfamiliar material, if this did not assist him to
achieve the learning goal he had set, he would deviate as necessary.

David appeared to render his autonomy through his capacity to orientate and plan his
learning effectively to establish its purpose and then to monitor his progress strategically
as he enacted that plan.

Lesley
Lesley considered herself as able to manage her learning with hypermedia very
effectively. She considered her starting point was getting an overall view of the structure
so that she could decide how to make her way through it. She stated that she liked to
ascertain her logical starting point. She was quite happy to work outside the structure of
the learning materials, although she conceded that there was a problem with hyper-
linking and ‘getting lost’. She regarded visual stimuli as important within the learning
materials as well as the capacity to ‘go somewhere and discuss things’, for example a
discussion board. Lesley stated that her approach would differ with the purpose of her
learning. For example, she would adopt a pedantic and systematic approach when
learning with assessment in mind.

Lesley saw herself as an autonomous learner and was not concerned about the unknown
of a hypermedia learning setting. She felt there was always somewhere to go and that she
had at her disposal a whole range of things that she could do. She suggested that for the
management of her learning she did like to establish the overall conceptual framework at
the beginning. Lesley appeared to be confident in her capacity to deal with anything that
the learning setting might challenge her with.

Lesley appeared to render her autonomy through her capacity to quickly establish a
conceptual framework that enabled her to orientate and plan her learning. This enabled
her to ascertain her starting point and also to work outside the structure of the learning
materials where she found it necessary or helpful to do so.

Tammy
Tammy regarded her engagement with hypermedia as being ‘pretty positive’. Since
undertaking much of her recent learning with hypermedia she now described herself as a
‘reflective learner’. She described herself as being very focussed when she studied, and
did not mind the structure imposed by the courseware because someone had sat down and
compiled the material needed. However, she also enjoyed the opportunity to explore
further, noting in particular that she would rather go deeper than broader. If there were
aspects of the learning for which she had a special or deeper interest, then she would
deviate from the courseware and research them further.

Tammy believed that she was very autonomous and liked to do things her way, and in her
time, and that structure acts as a guide to her rather than as an imposition. She stated that
the learning she undertook for this study was in a domain with which she was not
familiar. As a result, she said she had tended to skip some of the activities as she needed
to get a high level overview first; as she described it, a ‘helicopter’ view. She asserted
that her belief in her learning capacities enabled her to engage in hypermedia learning in
sophisticated ways.

Tammy appeared to render her autonomy through her capacity to orientate and plan her
learning after first establishing its purpose. While she was prepared to explore and
deviate from the expectations of the courseware, she continued to monitor her progress
as the learning unfolded.

Judy
Judy believed that her previous experience with hypermedia learning and her work as a
graphic designer had equipped her well as a competent and autonomous learner. She
considered herself to be a visual learner saying she preferred to interact with media rather
than read a lot of text. She liked to learn in chunks and preferred hypermedia that
allowed her to do that.

Judy reported that she liked initially to enter modules in order to see how the interface
worked and get a general idea of what she was going to face. She said she liked to get an
overview or big picture as this helped her structure her learning. She believed that her
ability to manage information helped her to keep on track, that she liked to feel in
charge, and that she enjoyed the structure afforded her by well-designed hypermedia. She
did note that she did not always follow the structure and felt comfortable digressing as
necessary.

Judy appeared to render her autonomy through her capacity to orientate herself to the
learning interface before establishing her learning goals as a big picture. She seemed
happy to rely on the structure of the hypermedia to advance her learning and to allow her
to focus on using the available media to supplement the text. While she was prepared to
digress from the structure of the hypermedia, she appeared to rely on it to assist her to
keep her learning on track and monitor her progress.

Ray
Ray reported that he was very familiar with hypermedia learning and the underlying
media technologies and saw himself to be autonomous in such settings. He made the
point that he preferred the interactivity possible through the media technologies and their
potential to improve his hypermedia learning experience. He believed that just working
through pages of essentially textual material was not interesting as it usually did not
provide him with the opportunity to explore other things. He liked the idea of more
communication and interactivity to share ideas and resources with other students.

He suggested that in organising his learning he liked to establish what he was going to be
assessed on and then work back from there, as this enabled him to plan his time. He
avoided note-taking, because the material was already there, and preferred to keep
documents and links to websites instead. He believed that he was not restricted just to the
material provided by the hypermedia and readily sought additional information he
thought was missing from the courseware.

Ray appeared to render his autonomy through his capacity firstly to establish his
learning goals and then plan his time. He saw the learning interface as a resource rather
than as a controller or monitor of his learning. He appeared to be unrestricted by the
structure of the interface and viewed the seeking of additional material as an essential
tool in his learning.

While these descriptions are varied, they do indicate that each of these learners
perceived themselves to be an autonomous learner. Typically, their accounts of what
enabled their autonomy were that:
a) They are able to establish a purpose or goal for their learning upfront (David,
Lesley, Tammy, Judy and Ray).

b) They are able to master the learning interface quickly and use it in ways that suit
their learning style and preferences (David, Lesley, Tammy and Judy).
c) They are able to establish ways of progressing and of realising their learning goals
from within or from outside of the learning interface (David, Tammy and Lesley).
d) They are able to monitor both the learning interface and their learning trajectory
effectively using a range of top-level structuring (David, Lesley, Tammy, Judy
and Ray).

How does this manifest itself in practice?


Data from the case studies suggest that for learners to engage effectively with the
hypermedia they need to drive their learning much more autonomously. The ways in
which they did this appeared to be somewhat different to what might occur in traditional
classrooms. An important aspect of this drive in most other learning situations is the
learning press normally provided by the teacher. Stevenson (1986a, 1986b; Stevenson, McKavanagh & Evans, 1994) used the term ‘press’ for the tendency of a learning environment to facilitate or impede individuals in goal attainment. He argues that the concept of
press can be further extended to include student perception of the atmosphere of a
learning setting; that is, that settings elicit behaviour from participants. He concludes that
the behaviour elicited from settings can be attributed to the participants’ cognitive
appraisal of the environment, leading to efforts to adapt to the setting to cope with it.

It would appear that the learning press exerted by the activities of the teacher cannot be
easily replicated in hypermedia learning settings. While some press can be provided in
other ways, for example, through the learning interface, it is the learner who has
substantial agency - the learner can apply different kinds of press for cognitive or
metacognitive activity. In traditional learning settings a key feature of learning press is
the scaffolding (Collins et al., 1989) provided by the teacher. However, much of the ad-
hoc or situational scaffolding that good teachers provide cannot be easily replicated in
hypermedia learning settings. Therefore, learners in such settings need to find ways of
compensating. In hypermedia learning settings, a learner might compensate through use of effective metacognitive strategies. Table 48 presents the totals of metacognitive
processes identified across the cases. The table includes the instances of execution for
completeness; however, because this category has no associated verbal protocol, its assignment was unreliable (as discussed in earlier chapters) and the accuracy of this figure is questionable. Its ranking as the most common category therefore remains speculative. Across all of the other metacognitive categories, the case study data reveal that the most commonly used metacognitive strategy was monitoring, and the cases show that the construction of these monitoring processes ranged from simple lists to complex and sophisticated structures. These different
metacognitive processes are discussed next.

Table 48: Aggregated total of metacognitive processes across cases


             Execution  Monitoring  Evaluation  Elaboration  Orientation  Planning
Count           708         575         504          211          141        56
Percentage     32.26%      26.20%      22.96%       9.61%        6.42%      2.55%
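As an arithmetic check, each percentage in Table 48 is that category's count as a share of the grand total of 2,195 coded instances. A minimal sketch reproduces the reported figures (the variable names are illustrative, not part of the study's coding scheme):

```python
# Counts of metacognitive processes aggregated across the five cases (Table 48).
counts = {
    "Execution": 708, "Monitoring": 575, "Evaluation": 504,
    "Elaboration": 211, "Orientation": 141, "Planning": 56,
}

# Grand total of coded instances across all six categories.
total = sum(counts.values())  # 2195

# Each percentage is the category count divided by the grand total,
# rounded to two decimal places as reported in the table.
percentages = {k: round(100 * v / total, 2) for k, v in counts.items()}
```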

Monitoring
Learners in each of the cases claimed to be, and appeared to be, learning autonomously.
Monitoring seemed to play a pivotal role in securing learner autonomy. The learning
interfaces used across the cases afforded learners little in the way of learner or learning
guidance, leaving them to make a range of decisions for themselves. This is in contrast to
other kinds of learning settings, particularly those which have a teacher or facilitator,
where learners tend to receive more direction, support and encouragement.

In its simplest form monitoring is often list-orientated and not necessarily about drawing connections. The data across cases suggest that monitoring often unfolded fairly passively until it raised awareness or interest, at which point it changed, or progressed logically, into orientation, planning or some other process. The prevalence of passive monitoring seemed to lie in its capacity to move learners on fairly quickly. However, learners appeared to reach a point at which they suddenly had a reason to slow their progress and to focus their attention through different or additional processing. It was at this point that their monitoring became a much more complex process.

In the more complex instances of monitoring, the data reveal that it often formed part of a chain of metacognitive activity. Some of these chains seemed to be used far more consistently than others. For example, monitoring was often followed by evaluation or elaboration, and sometimes by both. Some learners appeared to hold these associations between metacognitive actions closer to consciousness than others. For example, in her first learning module Judy did not seem to monitor as often as the other cases, so the perceptions each had of monitoring may have been different. Alternatively, the clustering of metacognitive actions may have been stylistic, perhaps reflecting a learning style. However, following the metacognitive training Judy monitored more than David and Lesley, and the TLS data showed that she also engaged in far more complex list structures than she had prior to the training.

Each of the five learners was not only different from the others but also brought very different facets to the ways in which he or she was learning. On the surface it may appear that a given learner was simply monitoring while watching the screen, recording, or doing various things. But monitoring, as codified and used in this research, is a fairly clumsy descriptor. What the cases showed is that learners sometimes monitor in order to answer a question, to solve a problem, or to address a dilemma. At other times they monitor merely to see where an item fits in comparison with what they already know, or know that they do not know. At still other times they monitor to see whether a presenting item should be added to a list of previous things held in a repository of remembered events. Such activity transcends simple views of how new information is recognised, sorted and acquired. For example, this study showed that Ray was monitoring, but monitoring with different purposes at different times and therefore with different learning intentions. David did the same; however, his monitoring was mostly done using complex list structures, which facilitated his simultaneous linking into other metacognitive processes such as evaluation. This information is valuable and will be taken up again later in the chapter.

Evaluation
Evaluation was used extensively and was second only to monitoring in usage (after
removing execution). Across the cases, the data indicated that evaluation was used as a
linked secondary process in combination with other metacognitive processes far more
than it was used on its own. At the metacognitive level evaluation was used most
frequently as a logical extension to monitoring. The TLS data provided a finer grained
view as to how the monitoring-evaluation nexus operated. In most cases linked
evaluations were characterised at the TLS level as a complex list. Embedded within these lists were cause/effect, comparison and problem/solution structures, each of which enabled additional information to be added as elaboration of a list’s major items or clusters of items. An example from learning module 1, in the case of Ray, illustrates
this structure. Ray found a link in the learning module to alcohol guidelines. He
monitored (with a linked evaluation) this discovery using a complex list with embedded cause/effect structures: ‘And then I saw that from the home page there was the Alcohol Guidelines there, so that’s why I clicked in it just to make sure I knew what it was. And there didn’t seem to be anything else at that moment so normally I would have saved this I think because that looks like a useful site to look at.’ The frequency with which this structure was found across the data suggested that it is important.

To a much lesser extent, evaluation was also associated with the other metacognitive
processes of elaboration, orientation and planning in a similar way in each of the cases
studied. It often followed them as a logical extension and when viewed at the level of the
TLS data was often structured as a complex list with embedded cause/effect, comparison
and problem/solution structures.

The frequency with which evaluation was used pointed to the importance the five
learners placed on it in facilitating their learning. The TLS data highlighted the nature of
this role. While evaluation played an important general purpose role in determining value, it also appeared to play a vital specific role where a learner sought to construct
extended or elaborated versions of monitoring, elaboration, orientation and planning. It
was these more sophisticated and extended chains of metacognitive processes that
appeared to be acting as enablers of learning autonomy.

Elaboration
After monitoring and evaluation, elaboration was the next most used metacognitive process, albeit much less frequently than the former two. This accords with the observation of Meijer et al. (2006) that elaboration was the last of their six metacognitive strategies in an ordering that more or less reflected the temporal course of the reading and problem-solving process. While Meijer et al. (2006) cautioned that shifts in this temporal organisation will occur, the metacognitive data across the cases seemed to support their assertion. Instances of elaboration in all of the cases were rarely a metacognitive process used in isolation; rather, they generally followed, or were associated with, other categories of metacognition. In most
instances elaboration followed metacognitive processes of monitoring and/or evaluation
and tended to be inferential, expansive, summarising or reflective in nature.

The TLS data across the cases showed that when elaborating, learners consistently used
cause/effect and comparative structures. Moreover, on examining the chaining of
metacognitive processes around a particular task, elaboration was often found to be the final link in the metacognitive chain, suggesting that it utilised causal and comparative structures to play this concluding role effectively. In those instances in the data where elaboration utilised simple list structures, it appeared to be playing the simpler and more traditional role of summarising.
However, the need for these elaborative processes to support learning autonomy as well
may have influenced their construction.

Orientation and Planning


Across the cases the least used metacognitive processes were orientation (6.42%) and
planning (2.55%). The structured nature of the learning interfaces used across the cases
may account for this. While the respondents’ levels of competence in hypermedia learning would seem to have provided opportunities for them to use alternative pathways
that might have drawn more heavily on orientation and planning, these learners ‘stuck to
the script’. This was surprising given that each of them indicated that they did not feel
constrained by the interface. Nonetheless, for the most part, these learners did follow the
sequencing of the hypermedia materials and therefore most of the planning and
orientation that did occur related to the use of the learning interface and any
idiosyncrasies of the learning material. This emphasised the importance of the structure
of the learning interface and learning materials and in particular their influence on the
orientation and planning potential of the learner.

Concurrency
Across the cases, learners reached stages in their learning where they needed to deal with
both the learning materials and the learning interface concurrently. These situations were punctuated by raised levels of metacognitive activity: learners were effectively multitasking metacognitively, constantly switching between the learning materials and the user interface. Because these were competent learners, such episodes were short and appeared to be well managed; however, they are likely to increase significantly the cognitive load of less competent users.

Summary
The data across cases indicated that learners were effectively driving their own learning.
A major driver of this effectiveness appeared to be the deployment of well constructed
chains of metacognitive processes. For example, monitoring was often followed by
evaluation and/or elaboration in a chain best able to reach a learning end. The top-level
structuring data suggested that what made these chains effective were the embedded top-
level structures they deployed. At times these structures appeared to be quite complex.
This metacognitive mastery appeared to play a critical role in establishing learner
autonomy. Each of the cases considered himself or herself to be a competent user of hypermedia and an autonomous learner. Their accounts of what enabled this autonomy were that:

• They were able to establish a purpose or goal for their learning upfront.
• They were able to quickly master the learning interface and use it in ways that suit
their learning style and preferences.
• They were able to establish ways of progressing and of realising their learning
goals from within or from outside of the learning interface.
• They were able to effectively monitor both the learning interface and their
learning trajectory using a range of top-level structuring.

Question 3
To what extent (and how) will the provision of metacognitive training effect greater awareness of metacognitive activity and/or greater autonomy?

The data related to answering this question were captured in two ways. First, each of the
case study participants was asked: How would you rate how effectively you engage with
educational hypermedia – on a six-point scale? Second, data concerning their use of metacognition in each of the learning modules were captured and aggregated. Each of
these data sets is examined next.

Table 49: Changes in effectiveness scores of participants


                             David   Lesley   Tammy   Judy   Ray
End of Learning Module 1      5/6      5/6      6/6     4/6    5/6
End of Learning Module 2      6/6      6/6      6/6     5/6    6/6
Change (+ / – / n/c)           +        +       n/c      +      +
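The change row of Table 49 follows mechanically from the two rating rows; a minimal sketch of that derivation (variable names are illustrative, not from the study):

```python
# Self-ratings of engagement effectiveness (out of 6) before and after the
# metacognitive training, as reported in Table 49.
module_1 = {"David": 5, "Lesley": 5, "Tammy": 6, "Judy": 4, "Ray": 5}
module_2 = {"David": 6, "Lesley": 6, "Tammy": 6, "Judy": 5, "Ray": 6}

# Derive the change row: '+' for an increase, '-' for a decrease,
# 'n/c' (no change) when the two self-ratings are equal.
change = {
    name: "+" if module_2[name] > module_1[name]
    else "-" if module_2[name] < module_1[name]
    else "n/c"
    for name in module_1
}
```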

Table 49 indicates that with the exception of Tammy, who rated herself 6/6 on both
occasions, each of the participants believed that their engagement with educational
hypermedia had improved after receiving the metacognitive awareness training just prior
to undertaking the second learning module. A summary of their justifications for this
improvement is outlined next.

David suggested that his second rating was because: ‘My experience with hypermedia learning means I am able to quickly establish what I need to know.’ The data suggested that he did spend time early in the learning modules orientating and planning, although the nature of his second learning module required him to do less planning. He believed that he was able to: ‘find what I need to learn and I focus on that and ignore the padding that is sometimes there.’ The data appeared to support this notion and indicated that he engaged in a number of evaluations early in his learning. He said that he selected 6/6, very effective, because: ‘Unlike the first module I did, I did find the learning interesting and did not get bored.’ David considered that he was more engaged in the second learning module, which was supported by the metacognitive data showing that he engaged in more orientation, planning, evaluation and elaboration (see Table 19, page 124). He made the point, after both learning modules, that he ‘generally likes to engage with any media that supplements the text’, and in the second module he found ‘the variety in the media easy to work with’. These comments seemed to be supported by the data, which reported increased metacognitive activity. The top-level structure linguistic marker data (see Table 20, page 124) indicated that David used fewer simple lists and problem/solution structures in the second episode, which suggested that he may have been engaging in more depth with the material. In contrast, he engaged in more complex list, cause/effect and comparison episodes, which might also indicate that he found the second learning module more engaging, resulting in greater metacognitive activity across the classifications. Therefore, David’s data set provided evidence of a positive effect from the metacognitive training, despite his existing level of competence.

Lesley suggested that her rating was because: ‘I felt much more comfortable the second time. I think that maybe that was because I feel more aware of my capabilities with hypermedia learning, especially my metacognitive strategies. I think that my previous experience with hypermedia learning helped also.’ In Lesley’s case the data seemed to support her evaluation (see Table 25, page 161). The data revealed a general increase in intensity as she engaged in more planning, monitoring, evaluation and elaboration in the second learning module. The top-level structure linguistic marker data (see Table 26, page 162) further supported her claim of greater awareness of her capabilities, as she engaged in more cause/effect and problem/solution structures. She added: ‘I feel I get on top of the interface easily now, having more experience, so this lets me get to the learning quicker. So I am less concerned about the challenges that the interface may present.’ She said that her second rating of 6/6, very effective, was because: ‘I believe that I am an independent learner and with my accumulated experience using hypermedia learning I use it very effectively.’ She finished by saying: ‘I would have to admit though that boring material and lack of interactivity are a bit of a turn off.’ The top-level structure linguistic marker data indicated that Lesley used fewer comparisons in the second episode; in contrast, she used more cause/effect and problem/solution structures while maintaining her usage of complex lists. The increased intensity of planning, monitoring, evaluation and elaboration may have been an outcome of the confidence flowing from her heightened awareness associated with the metacognitive training. This, together with her improved usage of several of the top-level structuring classifications, suggested that the metacognitive training had a positive effect.

Tammy, in commenting on her rating of the second learning module (identical to the first at 6/6), believed that: ‘even though this was new material, I was confident in my ability with hypermedia learning generally.’ Following the training, the metacognitive data (see Table 31, page 212) indicated that Tammy engaged in more orientation and planning in the second episode and less execution, monitoring, evaluation and elaboration. This may have been indicative of an increased confidence in her ability with hypermedia and a preparedness to rely less on being directed by the learning interface. Support for this conclusion can be found in her reflection on a problem she encountered in the second learning module, centred on her attempt to prepare the drilling site, and how she resolved it. She remarked: ‘when I was trying to find the last item to clear the drilling site, although it did take me a little time, I did think that my ability to draw upon my knowledge of user interfaces enabled me to solve the problem.’ She added further: ‘the interface was graphical and interactive and I was able to work my way through it without any assistance.’ The top-level structure linguistic marker data (see Table 32, page 212) indicated that Tammy used more complex lists, cause/effects and comparisons in the second episode, which would seem to provide further support for her claim regarding her confidence in her ability. In contrast, she engaged in fewer simple list and problem/solution episodes as she increased her usage of other structures. Following the training, Tammy’s increased confidence in her ability with hypermedia learning seemed to be underpinned by her use of a greater range of both metacognitive and top-level structuring episodes. Despite Tammy’s high level of confidence, her data set suggested there had been a positive effect from the metacognitive training.

Judy suggested that her rating was because: ‘I felt I was much more engaged the second time because the material was presented more interestingly.’ She went on to explain that she was: ‘much more aware of the way I learn as a result of the feedback from the first time, but I am not sure how that affected me. While I thought about some of the things I found out about the way I go about my learning, most of the time I was focusing on the material I was learning.’ Following the training, the metacognitive data (see Table 37, page 244) indicated that Judy engaged in more execution, monitoring, evaluation and planning. Of note is that in her first learning module her use of the range of metacognitive strategies was numerically lower than in all of the other cases, which might suggest that the training had resulted in an improvement. In contrast, in the second episode she engaged in less orientation and elaboration, which might also indicate a greater level of self-assurance. She did think that: ‘I was more aware of making connections between the things I was learning, as we had discussed, but not sure how successful I was doing that.’ She said that she selected 5, effective, because: ‘I felt I had learned more effectively, and with a bit more confidence.’ The top-level structure linguistic marker data (see Table 38, page 244) indicated that Judy used more simple lists, complex lists, cause/effect and problem/solution structures in the second episode, which provided further evidence of her improvement and of the effectiveness of the metacognitive training.

Ray suggested that he found the second learning module: ‘far more engaging than the first. It was still a bit repetitive. However, engaging with the multiple forms of media was a lot more interesting than just reading text.’ He stated that he scored himself 6 for the second learning module because: ‘I think I used the full utility of the module and showed that I was able to work through the module to suit my own learning needs.’ Following the training, the metacognitive data (see Table 43, page 288) indicated that Ray engaged in more execution, monitoring, evaluation and elaboration in the second episode. In contrast, his engagement in less orientation and planning may also be indicative of a more competent approach. He summarised his response by saying: ‘I think I was able to make the module work for me. I felt that I easily imposed my will on the interface and engaged only with those parts of it that suited me.’ The top-level structure linguistic marker data (see Table 44, page 288) indicated that Ray used more simple lists, complex lists, problem/solutions and comparisons in the second episode. These data appear to support his claim that he used the full utility of the module and engaged with those components that suited him. These comments and data give tentative support to the effectiveness of the metacognitive training.

Summary
Given that the metacognitive training was not undertaken under experimental conditions that would control for many of the variables, and as this was a set of case studies, any positive inferences are made tentatively. In many aspects of the cases, higher ratings by the learners were associated with higher percentages of metacognitive activity and with a spread of different and more complex types of top-level structure. While this association needs to be tested more rigorously through multi-subject research, here the associations suggest a positive relationship between the instruction and these shifts.
Individually each of the cases identified different levels of impact from the metacognitive
training. Collectively there is tentative evidence that such training can have a positive
effect.

Judy appears to be the case showing the most progress, as measured by higher totals and the use of more sophisticated top-level structures. Both Judy and Lesley mentioned being more aware because of the training. While this association appears to be a logical one, it needs further investigation to establish whether or not other users can, like Judy and Lesley, come to such a position of awareness. The data indicate that the other cases were already very competent; for them, it would therefore be harder to realise the gains that might be expected of less competent learners.

These were not experimental conditions; had the content been exactly the same across modules, one would predict that more of the same types of structure would be evident. The fact that in many of the cases the learner produced more metacognitive processes in total is more important because it provides two sets of information. First, learners were now structuring in a more obvious way, more obvious in terms of their own processing, and that was a real benefit of being metacognitive. Second, the different contents allowed the researcher to see that learners were able to move across the different possible top-level structures to select what was best for them at the time. This shows that there was flexibility in their work, and this flexibility (their way of changing their thinking in order to capture more, or to express themselves better) provided a benefit in terms of performance.

As the learning modules were not controlled, it may be that the changes in metacognitive activity had more to do with the differences in learning materials than with the metacognitive training. However, this does not explain the changes in the learners’ top-level structuring. These were much more likely due to the effects of the metacognitive training
and would imply that teaching needs to be directed at showing learners how to reframe
and be adaptive to content by exposing them to different types of top-level structures.

Implications for hypermedia learning

While hypermedia has become a significant software application, the learning experiences of learners and the educational processes that result from its use are still not well understood. The literature over the last two decades shows that little advance has been made in this understanding. In 1996, Dillon argued that in the previous decade of empirical work much had been generally assumed about hypertext and hypermedia, but rarely demonstrated, stating that ‘the unmistakable advantages...have rung hollow’ (p. 26). Almost a decade later Chen and Dwyer (2003)
reviewed the existing research and reported there remained little empirical evidence showing that a hypermedia learning environment improves learning outcomes. More
recently, Dillon and Jobst (2005) reported that while a more critical view of hypermedia
and cognition has since evolved, formal theories of hypermedia learning have not
developed in any substantial way. Instead, existing theoretical models from education
and psychology have been applied unproblematically to certain aspects of hypermedia
design and use. They argue that the practice of hypermedia design has been accompanied
by an uncritical acceptance of a host of quasi-psychological notions of reading and
cognition. They go on to argue that, as a consequence, hypermedia has largely failed to
fulfil much of its early promise.

At present there remain a number of gaps in the literature. Much of the research into the use of hypermedia in education has focused on the capability of hypermedia to manage information organisation and retrieval flexibly, on interface design, or on mixed media. The
use of hypermedia as a tool for mediating the nature of the cognitive interactions that
occur between learners and the computer has been less thoroughly explored (Yang,
2002). In addition, not much attention has been given to analysing the cognitive
processes that go on in learners’ interactions with the technology. Therefore, there is a
need for further exploration of learners’ interactions with hypermedia in order to
understand better the cognitive processes it activates.

However, Shapiro (2008) noted that many of the more recent efforts have focused on developing learner-centred hypermedia. She describes learner-centred hypermedia as ‘being designed to assist learners to achieve their educational goals, rather than offer mere usability’ (p. 29). She goes on to say that these efforts are being hampered somewhat by
a lack of empirical research on the topic. Recent research undertaken by Shapiro (1999;
2000) and others (Clark & Mayer, 2003; Jacobson, 2006; Jacobson & Archodidou, 2000)
has provided some insights, and the empirical evidence does suggest that several system
and user characteristics influence outcomes of hypermedia-assisted learning. Shapiro
argues that among the most relevant of these factors are learners’ levels of metacognition
and prior knowledge, and the interaction between these factors and hypermedia
structures.

Hypermedia is highly attractive to educational users because, on the surface at least, it leaves learners in full control of their access to, and navigation of, the learning resources. This raises questions about the ways these various ‘guidance’ and ‘control’
features of hypermedia affect the learning that occurs. One question is: how is
educational hypermedia best organised to cater for various learning events? A second
question is: how are the learning experiences affected by the self regulatory
(metacognitive) capacities of the learner (e.g. planning, monitoring, etc.)? The answers
to these questions are fundamental to understanding how hypermedia ought to be
developed and best organised (i.e. provide learner guidance), as well as what kinds of
cognitive and metacognitive capacities enable a learner to successfully manage this form
of learning (i.e. provide learner control).

Researchers have begun to examine the role of students’ ability to regulate several
aspects of their cognition, motivation and behaviour during learning with hypermedia (Azevedo, Guthrie, & Seibert, 2004; Hadwin & Winne, 2001; Winne & Stockley, 1998).
Azevedo and Cromley (2004) concluded that this research has demonstrated that
students have difficulties benefitting from hypermedia environments because they fail to
engage in key mechanisms related to regulating their learning. To regulate their learning
students need to be able to make decisions about what to learn, how to learn it, how much
to learn, how much time to spend on it, how to determine whether or not they understand
the materials, when to abandon or modify plans and strategies and when to increase effort
(Williams, 1996). Specifically, they need to analyse the learning situation, set
meaningful learning goals and determine which strategies to use, assess their
effectiveness, and determine if the strategies are effective for a particular learning goal
(Azevedo & Cromley, 2004). It is argued that in hypermedia settings one of the
important drivers of this self-regulation is the learner’s capacity to deploy effective
metacognitive strategies to the task.

Therefore, focusing on the potential of hypermedia to press (Stevenson, 1986a, 1986b) learners into using more sophisticated metacognitive strategies needs further examination. In the majority of recent research, the measure of success has been
improvements in test scores. Whilst this has provided quantifiable evidence of success,
these studies provide little in the way of more fine-grained evidence of the causes of that
success. In order to design hypermedia that might provide better and more effective
metacognitive strategies, more specific evidence about the causes of success is needed.
That is, the design strategies that encourage better metacognitive practice, and in
particular how learners think about their navigation choices and the relationships between
the available links, need to be understood better.

The research undertaken so far suggests that there is a need for a better
understanding of the "guidance" and the "control" aspects of a learner's engagement with
hypermedia, as well as for establishing any causal relationships between them. Of these
two aspects, it is a more fine-grained understanding of the learner's "control" of their
engagement with hypermedia that constitutes the gap in knowledge this thesis seeks to
address.

This thesis set out to realise a better understanding of the “guidance” and “control”
aspects of learners' engagement with hypermedia by examining the metacognitive
processes they deploy to render this “guidance” and “control”. One purpose in
undertaking such an evaluation was to better understand the claims made about the
learning experiences that are associated with using educational hypermedia. Central to
the approach of this thesis was the need to examine learning experiences in terms of the
thinking (cognitive) and the self-regulated processes (metacognitive) involved. In this
research, a case study approach was used to explore the processes and dynamics of learners'
practice in a vocational education setting and to gain an in-depth understanding of a
situation where they are engaged in hypermedia learning and its meaning for those
involved. Qualitative case studies are characterised by the discovery of new
relationships, concepts and understandings rather than verification of pre-determined
hypotheses (Yin, 2003). The phenomena explored were learners' metacognitive experiences
as they engaged with and interacted with hypermedia.

There are a number of implications for the improvement of hypermedia learning that can
be drawn from the discussion of findings presented earlier in the chapter. First, it was
established that it was possible to identify the metacognitive activity of learners engaged
in hypermedia learning settings. Moreover, the additional top-level structuring
classifications provided a more fine-grained and descriptive data set that provided a
means of accounting for contemporaneous processing across several types of
metacognitive process as a learner engaged in sophisticated, dynamic acquisition,
interpretation and retention of information at any particular moment.

Second, competent hypermedia learners believed that their autonomy was rendered in a
number of ways: (a) they were able to establish a purpose or goal for their learning
upfront; (b) they were able to master the learning interface quickly and use it in ways that
suit their learning style and preferences; (c) they were able to establish ways of
progressing and of realising their learning goals from within or from outside of the
learning interface; and (d) they were able to monitor both the learning interface and their
learning trajectory effectively using a range of top-level structuring.

Through the lens of their metacognitive activity this autonomy manifested itself in
practice through the deployment of well-constructed chains of metacognitive processes.
For example, monitoring was often followed by evaluation and/or elaboration in a chain
best able to reach a learning end. The top-level structuring data suggested that what
made these chains effective were the embedded top-level structures they deployed. At
times these structures appeared to be quite complex. This metacognitive mastery
appeared to play a critical role in establishing learner autonomy. Their accounts of
what enabled this autonomy can be synthesised into the same four capacities outlined
above: establishing a learning goal upfront, quickly mastering the learning interface,
finding ways of progressing towards their goals, and effectively monitoring both the
interface and their learning trajectory through a range of top-level structuring.

Third, there is tentative evidence that training can have a positive effect and raise
awareness amongst learners of their metacognitive activity. Higher ratings by the
learners were associated with higher totals of metacognitive activity and with a spread
across different and more complex types of top-level structure. These associations
suggest a nexus between instruction and the changes in ratings, which can in turn be
associated with greater autonomy and learner competence.

How hypermedia learning might now proceed in a better way

The results from this research have important implications for practice and in particular
for the way in which hypermedia learning might proceed in a more effective manner.
First, there are implications for teachers and the teaching and learning processes they
might adopt for this medium. Second, there are implications also for the developers and
instructional designers who are responsible for creating both the curricula and the media
for these environments. Each is addressed next.

Teaching and learning


The method employed in this research provides a potential tool for teachers to examine
their learners’ hypermedia practice. In particular, it would enable them to diagnostically
identify deficiencies in a learner’s metacognitive capacity to act autonomously. At a
more fine-grained level, an examination of the learner’s top-level structuring rhetorical
structures would provide important clues as to the level of sophistication they currently
deploy within their metacognitive actions. From this, teaching strategies could be
adopted that give greater attention to the development of a learner's range and use of
metacognitive actions, as well as develop their capacity to verbalise these actions through
a wider range of rhetorical structures.

This study, as well as previous research (e.g. Azevedo & Cromley, 2004; Bannert &
Mengelkamp, 2008; Kramarski, 2008; Kratzig & Arbuthnott, 2009; Mevarech &
Amrany, 2008), has tentatively shown the benefits of metacognitive training. The
diagnostic data (discussed above) could be used to create customised metacognitive

training for individual learners or groups of learners. This research has shown that a
learner’s metacognitive competence is related to their effectiveness as an autonomous
learner.

Instructional design
Hypermedia developers and instructional designers responsible for the development of
hypermedia learning materials also ought to be able to use the diagnostic data (discussed
above) to reform and reshape materials in ways that press (Stevenson, 1986a, 1986b)
learners into more effective metacognitive processes. Through the diagnostic data,
hypermedia authors will be able to identify both effective and ineffective metacognitive
practice and use this to render effective scaffolding mechanisms within the hypermedia
that model effective metacognitive practice. Moreover, text within the learning materials
might also be reconstructed and reorganised to include a range of top-level structuring
rhetorical structures that would enable the text to act as a subliminal coach. That is, the
learner would be coached in the use of these structures through their interaction with such
text.

Contributions to method and theory

Method
The use of top-level structuring both adds reliability to the metacognitive classification
within the taxonomy and provides a more fine-grained analysis within each of the
classifications, thereby advancing the taxonomy's theoretical boundaries. This capacity
for qualitative differentiation within classifications, together with greater
trustworthiness about categorical decisions, is an important methodological contribution
because it affords more confident category assignation for metacognitive events.

Theory
The use of top-level structure analysis as a within-category sub-ordinate analytical tool
has strengthened the Meijer et al. (2005, 2006) taxonomy as a classificatory tool.

Importantly, in doing so, it has maintained the across-learning-domain generalisability of
the super-ordinate level of the taxonomy (Meijer et al., 2006).

The theoretical boundaries of top-level structuring have also been advanced. This
research has advanced its use from that of a diagnostic tool to one with both
exploratory and classificatory capacities. The research has shown how this
exploratory capacity has been able to expose much of the inner fabric of the
metacognitive processes at play in hypermedia learning in a vocational setting. This
uncovers the possibility of its capacity to do this in other learning settings and with other
kinds of learners.

Limitations

Case study methodology is an important research tool for developing deeper explanations.
Its adoption in this research has helped to uncover, and to go beyond, the large
categories of metacognition. To that end, top-level structuring has proven to be a highly
suitable supplementary and supportive tool with which to delve into a deeper level of the
information structure. However, this is still a case study and, before generalisation to
other times and places can be made, large-scale studies using different methodologies,
particularly quantitative methods, are necessary. Nonetheless, the case study method has
revealed issues that are important to the research of hypermedia learning and therefore it
has made an important theoretical contribution in this sense.

The methodology has proved to be sound and appropriate for adult learners in vocational
learning settings. In both the pilot and the main study these learners have shown a
propensity to be able to articulate and self report their learning experiences in a rich and
meaningful way. Moreover, each of the learners in the cases was employed in some
aspect of teaching and learning in an educational setting. The extent to which the
methodology is transferable to other kinds of learners and settings needs to be examined
and tested further.

The lack of reliability in the metacognitive categorisation of the execution processes
needs to be addressed. A way to ensure that, where possible, more verbal data are
collected about this category needs to be identified, and the question remains whether
execution is a cognitive rather than a metacognitive activity. One way this might be
addressed is by ensuring that the researcher elicits a specific utterance on identifying
what appears to be an execution process. This would need to be undertaken with some
care so as not to interfere with the respondent’s train of thought, or at the expense of
missing other important utterances.

Recommendations for further research

A number of potential areas for additional research arise as a consequence of this
research. These include the following, outlined in the paragraphs below:

1. Reviewing the data collection method in an attempt to overcome the reliability
issue of the assignment of the execution category within the taxonomy,
2. Extending the metacognitive training to include a focus on top-level structuring,
3. Using the methodology under experimental conditions to test the efficacy of the
metacognitive training, and
4. Testing the methodology across a range of learners to establish any
generalisability.

1. The data collection method as it applies to the collection of the verbal protocols needs
further examination. It might be that the “video of a video” approach could be
refined to ensure that the researcher identifies and notes those actions that could be
classified as execution while the learner is engaged. When the video is being
replayed to the learner and the stimulated recall is being captured, the researcher
could ask the learner specifically about each of these execution events to elicit a
verbal response. Alternatively, the researcher might elicit a think-aloud response
which would be captured on the first video. Bannert and Mengelkamp (2008) found
that think-aloud does not affect learning performance, and that this provides indirect

evidence that it does not interfere with metacognition either. They recommend this
type of verbalization as a sensitive metacognitive on-line assessment method. Both
methods could potentially enable the execution category to be evaluated in the same
way as the other five metacognitive categories within the taxonomy.

2. For adult learners, particularly those similar to the cases discussed, who themselves
are in the business of education, pointing out to them the subtle differences identified
by a top-level structuring analysis of their metacognition may be an important way of
enhancing their self-management of learning. For example, consider a learner whose
monitoring is predominated by a need to see causal relationships. A researcher or
teacher might point this out, return to a particular screen, and invite the learner to
try monitoring for comparisons instead, or for placing another item into a list, to see
what happens when the learner deliberately manipulates their own metacognitive
action. This is an exciting prospect for future research: whether the self-management
of learning can be handled differently by a learner aware of the subtlety behind the
metacognitive category.

3. While the methodology has proven successful in a case study scenario its wider
applicability needs to be tested. The research could be undertaken under
experimental conditions using a control group and controlling for variables such as
prior knowledge, differences in the learning materials and time on task. This would
enable the significance of the training's effectiveness to be measured.

4. The research needs to be repeated on a range of learners and for different kinds of
learning purposes other than vocational learning (e.g. high school, primary school,
undergraduate and postgraduate learners) in order to establish any generalisability.
The taxonomy and top-level structuring tool have been shown to work across learning
domains, so establishing application across different kinds of learners seems a logical
way forward in extending the methodology as both an exploratory and diagnostic
tool.

References
Azevedo, R. (2005). Using hypermedia as a metacognitive tool for enhancing learning?
The role of Self-Regulated Learning. Educational Psychologist, 40(4), 193-197.
Azevedo, R., & Cromley, J. (2004). Does training on self-regulated learning facilitate
students' learning with hypermedia? Journal of Educational Psychology, 96(3), 523-
535.
Azevedo, R., Guthrie, J. T., & Seibert, D. (2004). The role of self-regulated learning in
fostering students’ conceptual understanding of complex systems with hypermedia.
Journal of Educational Computing Research, 29(1-2), 344-370.
Bannert, M. & Mengelkamp, C. (2008). Assessment of metacognitive skills by means of
instruction to think aloud and reflect when prompted. Does the verbalization method
affect learning? Metacognition and Learning, 3(1), 39-58.
Bartlett, B. J. (1978). Top-level structure as an organisational strategy for recall of
classroom text. (Doctoral Dissertation, Arizona State University, 1978), Dissertation
Abstracts International, May 1979, 7911113, p. 6641A.
Bartlett, B. J. (2003). Valuing the Situation: A Referential Outcome for Top-Level
Structurers. In B. J. Bartlett, F. Bryer & D. Roebuck (Eds.), Reimagining Practice:
Researching Change. (pp. 16-37): Proceedings of the 1st International Conference on
Cognition. Language and Special Education. School of Cognition, Language and
Special Education, Griffith University, Brisbane.
Bartlett, B. J. (2008a). I've been workin' on the railroad: Action research in changing
workplace climate. In E. Piggot-Irvine & B. J. Bartlett (Eds.), Evaluating Action
Research (pp. 167-189). Auckland, NZ: New Zealand Research Council.
Bartlett, B. J. (2008b). Learning about written language, literacy and meaning: A
metalinguistic gift. Paper presented at the Qualitative Research refined GABEK and
other methods, VII. Internationale GABEK Symposium 2008 Rathaus Sterzing,
Neustadt 21, 1 Stock, 1-39049 Sterzing, Italy. University of Innsbruck, Innsbruck,
Austria.

Begoray, J. (1990). An introduction to hypermedia issues, systems and application areas.
International Journal of Man-Machine Studies, 33, 121-147.
Beven, F. A. (2006). A methodology for capturing metacognitive activity in hypermedia
learning settings. Paper presented at the 4th Biennial Conference on Technology
Education Research, Gold Coast, Queensland.
Brown, A. L. (1981). Metacognitive Development and Reading. In R. J. Spiro, B. C.
Bruce & B. F. Brewer (Eds.), Theoretical Issues in Reading Comprehension (pp. 453-
481). Hillsdale, NJ: Erlbaum.
Brown, A. L., Bransford, J. D., Ferrera, R. A., & Campion, J. C. (1983). Learning,
remembering and understanding. In P. H. Mussen (Ed.), Handbook of Child
Psychology (Vol. 3, pp. 77-166). New York: J Wiley & Sons.
Bruner, J. (1996). The Culture of Education. Cambridge, MA: Harvard University Press.
Bryman, A. (2006). Qualitative Research 2. London: Sage Publications.
Bush, V. (1945). As we may think. Atlantic Monthly, 176(1), 101-108.
Chen, M. (1995). A methodology for characterizing computer-based learning
environments. Instructional Science, 23(1/3), 182-220.
Chen, W. F. & Dwyer, F. (2003). Hypermedia research present and future. International
Journal of Instructional Media, 30(2), 143-148.
Chester, I. (2006). Delineating and developing expertise in three-dimensional computer
aided design. Unpublished Doctor of Philosophy, Griffith University, Brisbane.
Clark, R. C., & Mayer, R. (2003). E-Learning and the science of instruction. San
Francisco, CA: Wiley.
Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th
Edition). London: Routledge Falmer.
Collins, A., Brown, J., & Newman, S. (1989). Cognitive apprenticeships: teaching the
crafts of reading, writing and mathematics. In L. Resnick (Ed.), Knowledge learning
and instruction: essays in honor of Robert Glaser (pp. 453-494). Hillsdale New
Jersey: Erlbaum Associates.
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry.
Theory into Practice, 39(3), 122-130.
Crotty, M. (1998). The foundations of social research. London: Sage Publications.

Davenport, E., & Cronin, B. (1990). Hypertext and the conduct of science. Journal of
Documentation, 46(3), 175-192.
Dede, C. (1988). The probable evolution of artificial intelligence based educational
devices. Technological Forecasting and Social Change, 34, 115-133.
Dede, C. (1996). The Evolution of Distance Education: Emerging Technologies and
Distributed Learning. The American Journal of Distance Education, 10(2), 4-69.
Dewey, J. (1987). How we think. New York: Dover Publications.
Dillon, A. (1996). Myths, misconceptions, and an alternative perspective on information
usage and the electronic medium. In J. Rouet, J. Levonen, A. Dillon & R. Spiro
(Eds.), Hypertext and Cognition. Mahwah NJ: Lawrence Erlbaum Associates.
Dillon, A., & Gabbard, R. (1998). Hypermedia as an educational technology: A review
of the quantitative research literature on learner comprehension, control, and style.
Review of Educational Research, 68(3), 322-349.
Dillon, A., & Jobst, J. (2005). Multimedia Learning with Hypermedia. In R. Mayer (Ed.),
The Cambridge Handbook of Multimedia Learning (pp. 569-588). Cambridge MA:
Cambridge University Press.
Dryden, L. M. (1994). Literature, student-centred classrooms, and hypermedia
environments. In C. Selfe & E. Hilligoss (Eds.), Literacy and computers: The
complications of teaching and learning with technology. New York: Modern
Language Association of America.
Duchastel, P. (1990). Examining cognitive processing in hypermedia usage. Hypermedia,
2(3), 221-233.
Dweck, C. S. (1988). Motivation. In A. Lesgold & R. Glaser (Eds.), Foundations for a
Psychology of Education. Hillsdale, New Jersey: Erlbaum.
Ericsson, K. A., & Simon, H. A. (1993). Protocol Analysis: Verbal reports as data (Rev.
ed.). Cambridge, MA: MIT Press.
Federico, P. (1999). Hypermedia environments and adaptive instruction. Computers in
Human Behaviour, 16(6), 653-692.
Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.),
The nature of intelligence. Hillsdale, NJ: Erlbaum.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-
developmental inquiry. American Psychologist, 34(10), 906-911.
Flavell, J. H. (1992). Metacognition and Cognitive Monitoring: A new area of cognitive-
developmental inquiry. In T. Nelson (Ed.), Metacognition Core Readings (pp. 3-8).
Needham Heights, MA: Allyn and Bacon.
Flavell, J. H., Friedrichs, A. G., & Hoyt, J. D. (1970). Developmental changes in
memorization processes. Cognitive Psychology, 1, 324-340.
Fletcher, M., Zuber-Skerritt, O., Piggot-Irvine, E., & Bartlett, B. J. (2008). Qualitative
research methods for evaluating action research. In E. Piggot-Irvine & B. J. Bartlett
(Eds.), Evaluating Action Research (pp. 53-89). Wellington, NZ: NZCER Press.
Foltz, P. (1996). Comprehension, coherence, and strategies in hypertext and linear text. In
J. Rouet, J. Levonen, A. Dillon & R. Spiro (Eds.), Hypertext and cognition. Mahwah
NJ: Lawrence Erlbaum Associates.
Frederikson, C. H. (1975). Representing logical and semantic structures of knowledge
acquired from discourse. Cognitive Psychology, 7, 371-458.
Gay, L. R., & Airasian, P. (2003). Educational research: Competencies for analysis and
application (7th Ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
Gerdes, H. (1997). Lernen mit text und hypertext. (Learning with Text and Hypertext).
Berlin: Pabst.
Glesne, C. (1999). Becoming qualitative researchers: An introduction. New York:
Longman.
Greckhamer, T., & Koro-Ljungberg, M. (2005). The erosion of a method: Examples
from grounded theory. International Journal of Qualitative Studies in Education,
18(6), 729-750.
Grimes, J. E. (1975). Transition network grammars: A guide. In J. E. Grimes (Ed.),
Network Grammars (pp. 47-84). Norman: Summer Institute of Linguistics of the
University of Oklahoma.
Hadwin, A., & Winne, P. (2001). CoNoteS2: A software tool for promoting self-
regulation. Educational Research and Evaluation, 7(2&3), 313-334.
Hall, R. H. (2000). Education, hypermedia and the World Wide Web: Old realities and
new visions. Cyber Psychology & Behaviour, 3(1), 1-7.

Hammersley, M. (1992). What's wrong with ethnography: Methodological explorations.
London: Routledge.
Heller, R. S. (1991). The role of hypermedia in education: A look at the research issues.
Journal of Research on Computing in Education, 22(4), 431-441.
Ingram, A. (1999). Using Web server logs in evaluating instructional Web sites. Journal
of Educational Technology, 28(2), 137-157.
Jacobson, M. J. (2006). From non-adaptive to adaptive educational hypermedia: Theory,
research, and design issues. In S. Chen & G. Magoulas (Eds.), Advances in Web-
based education: Personalized learning environments. Hershey PA: Idea Group.
Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments, cognitive
flexibility and the transfer of complex knowledge: An empirical investigation.
Journal of Educational Computing Research, 12(4), 301-333.
Jacobson, M. J., & Archodidou, A. (2000). The design of hypermedia tools for learning:
Fostering conceptual change and transfer of complex scientific knowledge. Journal of
the Learning Sciences, 9(2), 149-199.
Jacobson, M., & Azevedo, R. (2007). Advances in scaffolding learning with hypertext
and hypermedia: theoretical, empirical and design issues. Educational Technology
Research and Development, 51(1), 1-3.
Jonassen, D. H. (1986). Hypertext principles for text and courseware design. Educational
Psychologist, 21(4), 269-292.
Jonassen, D. H. (1988). Designing structured hypertext and structuring access to
hypertext. Educational Technology, 28(11), 13-16.
Kauffman, D. (2002). Self-regulated learning in web-based environments: Instructional
tools designed to facilitate cognitive strategy use, metacognitive processing, and
motivational beliefs. Paper presented at the 2002 Annual meeting of the American
Educational Research Association, New Orleans, LA.
Kauffman, D. (2004). Self-regulated learning in web-based environments: Instructional
tools designed to facilitate self-regulated learning. Journal of Educational Computing
research, 30, 139-162.
Kramarski, B. (2008). Promoting teachers' algebraic reasoning and self-regulation with
metacognitive guidance. Metacognition and Learning, 3(2), 83-89.

Kratzig, G. P., & Arbuthnott, K. D. (2009). Metacognitive learning: the effect of item-
specific experience and age on metamemory calibration and planning. Metacognition
and Learning, 4(2), 125-144.
Lai, Y., & Waugh, M. (1995). Effects of three different hypertextual menu designs on
various information searching activities. Journal of Educational Multimedia and
Hypermedia, 4(1), 25-52.
Lajoie, S. P., & Azevedo, R. (2006). Teaching and Learning in technology-rich
environments. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational
psychology (2nd ed., pp. 803-821). Mahwah, NJ: Erlbaum.
Landow, G. (1992). Hypertext: The convergence of contemporary critical theory and
technology. Baltimore: John Hopkins University Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage
Publications.
Marchionini, G., & Shneiderman, B. (1988). Finding facts vs browsing knowledge in
hypertext systems. IEEE Computer, 21(1), 70-80.
Marshall, C., & Rossman, G. (1999). Designing qualitative research. Thousand Oaks,
CA: Sage.
Mayer, R. E. (2001). Multimedia Learning. Cambridge: Cambridge University Press.
Mayer, R. E. (2005). The Cambridge handbook of multimedia learning. New York:
Cambridge University Press.
McKnight, C., Dillon, A., & Richardson, J. (1991). Hypertext in context. Cambridge,
England: Cambridge University Press.
McKnight, C., Dillon, A., & Richardson, J. (1996). User centred design of hypertext and
hypermedia for education. In D. H. Jonassen (Ed.), Handbook of research on
educational communications and technology (pp. 622-633). New York: McMillan.
Meijer, J., Veenman, M. V., & Van Hout Wolters, B. (2005). Intelligence, Metacognition
and learning in history and physics. Paper presented at the Earli 11th Biennial
Conference, Nicosia, Cyprus.
Meijer, J., Veenman, M. V., & Van Hout Wolters, B. (2006). Metacognitive Activities in
Text-Studying and Problem-Solving: Development of a taxonomy. Educational
Research and Evaluation, 12(3), 209-237.

Merriam, S. B. (1998). Qualitative research and case study applications in education.
San Francisco: Josey-Bass Publishers.
Mevarech, Z., & Amrany, C. (2008). Immediate and delayed effects of meta-cognitive
instruction on regulation of cognition and mathematics achievement. Metacognition
and Learning, 3(2), 147-157.
Meyer, B. J. F. (1971). Idea units recalled from prose in relation to their position in the
logical structure, importance, stability and order in the passage. Unpublished M.S.
Thesis, Cornell University.
Meyer, B. J. F. (1975). The organization of prose and its effects on memory. Amsterdam:
North Holland.
Meyer, B. J. F., Young, C. J., & Bartlett, B. J. (1993). Reading comprehension and the use
of text structure across the adult lifespan. In S. R. Yussen & M. Smith (Eds.),
Reading across the lifespan (pp. 161-189). New Jersey: Springer-Verlag.
Nelson, T. (1965). Complex information processing: a file structure for the complex, the
changing and the indeterminate. Paper presented at the Association for Computing
Machinery 20th National Conference, Cleveland, Ohio.
O’Neil, H. F., & Abedi, J. (1996). Reliability and validity of a state metacognitive
inventory: Potential for alternative assessment. The Journal of Educational Research,
89(4), 234-245.
Park, O. (1991). Hypermedia: Functional features and research issues. Educational
Technology, 31(8), 24-31.
Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA:
Sage Publications.
Piaget, J. (1990). The child's conception of the world. New York: Littlefield Adams.
Piguet, A., & Peraya, D. (2000). Creating web-integrated learning environments: An
analysis of WebCT authoring tools in respect to usability. Australian Journal of
Educational Technology, 16(3), 303-314.
Pintrich, P. R. (1989). The dynamic interplay of student motivation and cognition in the
college classroom. In M. L. Maehr & C. Ames (Eds.), Advances in motivation and
achievement (Vol. 6, pp. 117-160). London: JAI Press Inc.

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning
components of classroom academic performance. Journal of Educational Psychology,
82(1). 33-40.
Pintrich, P. R., & Schrauben, B. (1992). Students' Motivational Beliefs and Their
Cognitive Engagement in Classroom Academic Tasks. In D. H. Schunk & J. L.
Meece (Eds.), Student Perceptions in the Classroom (pp. 149-183). Hillsdale, New
Jersey: Lawrence Erlbaum Associates.
Pintrich, P. R., Wolters, C. A., & Baxter, G. P. (2000). Assessing metacognition and self-
regulated learning. In G. Schraw & J. C. Impara (Eds.), Issues in the measurement of
metacognition (pp. 43-97). Lincoln, NE: Buros Institute of Mental Measurements.
Rieber, L. P. (2005). Multimedia learning in games, simulations and microworlds. In R.
E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 549-568). New
York: Cambridge University Press.
Reisner, P. (1987). HCI, what is it and what research is needed? In J. Carroll (Ed.),
Interfacing Human Thought. (pp. 337-353). Cambridge MA: MIT Press.
Ritchart, R., Turner, T., & Hadler, L. (2009). Uncovering students' thinking about
thinking using concept maps. Metacognition and Learning, 4(2), 145-160.
Riva, G. (2001). Shared hypermedia: Communication and interaction in web-based
learning environments. Journal of Educational Computing Research, 23(3), 205-226.
Roberts, E. J. (2004). Implementing an Integrated Learning Workplace in Queensland
Rail Civil Infrastructure 1991-2000. Unpublished Doctoral Dissertation, University
of New England.
Rose, H. (1991). Case Studies. In G. Allan & C. Skinner (Eds.), Handbook for research
students in the social sciences (pp. 190-202). London: The Falmer Press.
Rouet, J., & Levonen, J. (1996). Studying and learning with hypertext: Empirical studies
and their implications. In J. Rouet, J. Levonen, A. Dillon & R. Spiro (Eds.), Hypertext
and cognition. Mahwah NJ: Lawrence Erlbaum Associates.
Rouet, J., Levonen, J., Dillon, A., & Spiro, R. (1996). An introduction to hypertext and
cognition. In J. Rouet, J. Levonen, A. Dillon & R. Spiro (Eds.), Hypertext and
cognition. Mahwah NJ: Lawrence Erlbaum Associates.
Ryle, G. (1963). The Concept of Mind. Harmondsworth, England: Penguin Books Ltd.

Schoenfeld, A. H. (1992). Learning to think mathematically: Problem solving,
metacognition and sense making in mathematics. In D. A. Grouws (Ed.), Handbook
of research on mathematics teaching and learning (pp. 334-370). New York:
Macmillan.
Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology
Review, 7(4), 351-371.
Shapiro, A. M. (1999). The relevance of hierarchies to learning biology from hypertext.
Journal of Learning Sciences, 8(2), 215-243.
Shapiro, A. M. (2000). The effect of interactive overviews on the development of
conceptual structure in novices learning from electronic texts. Journal of Educational
Multimedia and Hypermedia, 9, 57-78.
Shapiro, A. M. (2008). Hypermedia Design as learner scaffolding. Educational
Technology Research and Development, 56(1), 29-44.
Sheard, J., Ceddia, J., Hurst, J., & Tuovinen, J. (2003). Inferring student learning
behaviour from website interactions: A usage analysis. Education and Information
Technologies, 8(3), 245-266.
Silverman, D. (1993). Interpreting qualitative data: Methods for analysing talk, text and
interaction. London: Sage Publications.
Simon, H. A. (1979). Information processing models in psychology. Annual Review of
Psychology, 30, 363-396.
Simons, P. R. (1996). Metacognition. In E. De Corte & F. E. Weinert (Eds.),
International Encyclopedia of Developmental and Instructional Psychology (pp. 436-
440). New York: Elsevier Science.
Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive
flexibility, constructivism, and hypertext: Random access instruction for advanced
knowledge acquisition in ill-structured domains. Educational Technology, 31(5), 24-
33.
Spiro, R. J., & Jehng, J. C. (1990). Cognitive flexibility and hypertext: Theory and
technology for the nonlinear and multidimensional traversal of complex subject
matter. In D. Nix & R. J. Spiro (Eds.), Cognition, education and multimedia:
Exploring ideas in high technology (pp. 163-205). Hillsdale, NJ: Erlbaum
Associates.
Stahl, E. (2004). Methods of assessing cognitive processes during the construction of
hypertexts. In R. Bromme & E. Stahl (Eds.), Writing hypertext and learning:
Conceptual and empirical approaches (pp. 117-196). Amsterdam: Pergamon.
Stake, R. E. (2003). Case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), Strategies of
qualitative inquiry (pp. 134-164). Thousand Oaks, CA: Sage.
Stevenson, J. C. (1986a). Adaptability: Empirical studies. Journal of Structural
Learning, 9(2), 119-139.
Stevenson, J. C. (1986b). Adaptability: Theoretical considerations. Journal of
Structural Learning, 9(2), 107-117.
Stevenson, J. C., McKavanagh, C., & Evans, G. (1994). Measuring the press for skill
development. In J. Stevenson (Ed.), Cognition at work: the development of
vocational expertise (pp. 198-216). Adelaide: NCVER.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and
procedures for developing grounded theory (2nd ed.). Thousand Oaks, CA: Sage
Publications.
Sturman, A. (1999). Case study methods. In J. P. Keeves & G. Lakomski (Eds.), Issues in
education research (pp. 103-112). Amsterdam: Pergamon.
Sweller, J. (1989). Cognitive technology: Some procedures for facilitating learning and
problem-solving in mathematics and science. Journal of Educational Psychology,
81(4), 457-466.
Sweller, J. (1993). Some cognitive processes and their consequences for the organisation
and presentation of information. Australian Journal of Psychology, 45(1), 1-8.
Tessmer, M. (1993). Front-end and formative multimedia evaluation: Sharpening
"cutting edge" technology. Paper presented at the annual convention of the American
Educational Research Association, Atlanta, GA.
Tricot, A., Pierre-Demarchy, C., & El Boussarghini, R. (2000). Specific help devices for
educational hypermedia. Journal of Computer Assisted Learning, 4(4), 102-113.

Tudhope, D. (2007). Observing the users of digital educational technologies - theories,
methods and analytical approaches. New Review of Hypermedia and Multimedia,
13(2), 87-91.
Valot, C. (2002). An ecological approach to metacognitive regulation. In P. Chambres,
M. Izaute & P. J. Marescaux (Eds.), Metacognition: Process, function and use (pp.
135-151). Boston, MA: Kluwer Academic Publishers.
Van Streun, A. (1990). Heuristisch wiskunde-onderwijs [Heuristic mathematics
education]. Groningen, The Netherlands: Rijksuniversiteit Groningen.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological
processes. Cambridge, MA: Harvard University Press.
Vygotsky, L. S. (1986). Thought and language. Cambridge, MA: MIT Press.
Welch, M., & Brownell, K. (2000). The development and evaluation of a multimedia
course on educational collaboration. Journal of Educational Multimedia and
Hypermedia, 9(3), 169-194.
Wellman, H. M. (1985). The origins of metacognition. In D. L. Forrest-Pressley, G. E.
MacKinnon & T. G. Waller (Eds.), Metacognition, cognition and human
performance (pp. 1-31). Orlando, FL: Academic Press, Inc.
Welsh, T. M. (1995). Simplifying hypermedia usage for learners: The effect of visual
and manual filtering capabilities on efficiency, perceptions and usability, and
performance. Journal of Educational Multimedia and Hypermedia, 4(4), 275-304.
Whitebread, D., Coltman, P., Pasternak, D. P., Sangster, C., Grau, V., Bingham, S., et al.
(2009). The development of two observational tools for assessing metacognition and
self-regulated learning in young children. Metacognition and Learning, 4(1), 63-85.
Williams, M. (1996). Learner control and instructional technologies. In D. Jonassen
(Ed.), Handbook of research on educational communications and technology (pp.
957-983). New York: Scholastic.
Winne, P. H., & Stockley, D. (1998). Computing technologies as sites for developing
self-regulated learning. In D. Schunk & B. Zimmerman (Eds.), Self-regulated
learning: From teaching to self-reflective practice (pp. 106-136). New York:
Guilford Press.

Winters, F. I., Greene, J. A., & Costich, C. M. (2008). Self-regulation of learning within
computer-based learning environments: A critical analysis. Educational
Psychology Review, 20(4), 429-444.
Wisker, G. (2001). The postgraduate research handbook. Basingstoke, England: Palgrave.
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving.
Journal of Child Psychology and Psychiatry and Allied Disciplines, 17, 89-100.
Yang, S. (2002). Multidimensional taxonomy of learners' cognitive processing in
discourse synthesis with hypermedia. Computers in Human Behavior, 18, 37-68.
Yin, R. K. (2003). Case study research: Design and methods (3rd ed., Vol. 5).
Thousand Oaks, CA: Sage Publications.
Yussen, S. R. (1985). The role of metacognition in contemporary theories of cognitive
development. In D. L. Forrest-Pressley, G. E. MacKinnon & T. G. Waller (Eds.),
Metacognition, cognition and human performance (Vol. 1: Theoretical
perspectives, pp. 253-283). Orlando, FL: Academic Press Inc.

Appendix 1: Example page from data analysis tables
Columns in the original table: Time; Screen Characteristics/Usage/Observations; Researcher’s Remarks/Questions; Respondent’s Utterances; Meta Class Code; Top Level Structure Analysis. The Screen Characteristics/Usage/Observations column is empty in this excerpt.

Time: 00:53
Researcher’s Remarks/Questions: What about the unit outline on the side? I noticed in the early part that it was up there on the right hand side, but then you moved the text across, so it was not something that you came back to very often.
Respondent’s Utterances: No.
Top Level Structure Analysis: Confirmation
Respondent’s Utterances: I did have a look at it at the start there. And thought, OK, I mean at that stage I thought when I got to number 2 I didn’t know how big it was going to be, and I didn’t bother to press the plus button because I knew I was going to get there soon.
Meta Class Code: OR, MO
Top Level Structure Analysis: List
- Did have look
- Comparison
  > When got to number 2
  > Didn’t know how big
- C: Knew get there soon
  E: Didn’t press button

Time: 01:21
Researcher’s Remarks/Questions: And you seemed to read this first bit reasonably intently.
Respondent’s Utterances: I find that I use the mouse a lot to track where I was reading and it actually crossed my mind at one point that you know why is it just a cursor, why can’t I get something like a little pointer, a finger pointer or something.
Meta Class Code: OR, EV
Top Level Structure Analysis: Response; List
- I find that use mouse lot
- That crossed my mind
- Why can’t get little pointer
Researcher’s Remarks/Questions: Yeah, even if it changed the text to a different colour.
Respondent’s Utterances: Yeah. So I know I have tracked that
Meta Class Code: MO
Top Level Structure Analysis: Confirmation; List
- Have tracked that
Note: data files are attached on a CD as Appendix 2

Appendix 2: CD containing data analysis files
Case 1 David
1. DavidModule1.pdf
2. DavidModule2.pdf
Case 2 Lesley
1. LesleyModule1.pdf
2. LesleyModule2.pdf
Case 3 Tammy
1. TammyModule1.pdf
2. TammyModule2.pdf
Case 4 Judy
1. JudyModule1.pdf
2. JudyModule2.pdf
Case 5 Ray
1. RayModule1.pdf
2. RayModule2.pdf

