ABSTRACT
This paper discusses how an interpretive theory of action was explored and developed through iterative cycles of
grounded theory generation. We establish our motivation for employing the grounded theory method in an area
that is overflowing with theories of learning, then move on to the practicalities of generating an interpretive
grounded theory by following the "vapor trails" left by online learners. We describe how we incorporated the use
of mixed methods into an interpretive grounded theory process, with a theoretical sampling strategy that used
"complementary comparison" to feed back into a new cycle of constant comparison. We discuss ways in which
constant comparison may span researchers as well as data samples. Finally, we discuss how and why the
substantive theory of action that was generated by this process provides an original and groundbreaking
contribution to theories of collaborative online learning.
INTRODUCTION
When we examine learner engagement in online learning courses, we encounter a dearth of
useful theory about the strategies employed by learners to manage their engagement with the
course and about the effect that learner strategies have on course outcomes. It is widely assumed
in the literature that meta-cognitive learning strategies and social collaboration lead to improved
learning outcomes. But there is very little evidence to support this position (Bransford et al.,
2000). In addition, we have little understanding of what leads students to engage with course
materials and learning exercises. This is an area of research that is rife with anecdotal studies, but
progress depends on systematic analysis of learner engagement and learning strategies – not least
because learning outcomes are so difficult to assess in any meaningful way.
To select a suitable methodological approach to this research problem, we started by evaluating
extant theories of learner engagement and learning strategies as these applied to online learning
environments, from a review of the educational, psychology, and computer-supported
collaborative work and learning literatures. Our sense that these were inadequate – and that more
questions were raised by the existing research than answered by these studies – is summarized in
the next section. So we asked how we would define and analyze individual engagement in a
social community of learners. Building a sense of community and fully utilizing the socio-
technical capital imbued in it were obviously of key importance (Preece, 2000; Resnick, 2002).
As in any other community, members will not enjoy the maximum benefit if they feel themselves
to be outsiders (Wegerif, 1998). But while computer-mediated communication may make it
easier to cooperate at a distance, it also makes it easier to be more selfish than in face-to-face
communications (Kollock and Smith, 1996). The limitation of assumptional frameworks that
equate student engagement in community learning with collaboration is epitomized by the
phenomenon of lurkers, who belong to communities but make no material contributions to them
(Nonnecke and Preece, 2000). Although lurkers do not participate in overt collaboration, they
may still be using the community effectively, as they engage in "vicarious learning" (Bandura,
1977; Cox et al., 1999). By modeling how the world works, from the experiences and examples
of others, students engage in passive learning that requires less engagement and risk than active
participation. But passive learning is difficult to assess and we do not understand what
contributes to effective learning in online communities of inquiry, active or passive.
We decided to address the issue of how to follow the "vapor trails" left behind by online
participants in various forms of social learning community platform (Latour, 2005). Individuals
move in and out of visibility as they participate selectively in different ways and choose to
engage in different ways with different resources, technical tools, structures, and peers. Our
approach was to consider multiple sources of data in developing a grounded theory of online
learning: course community discussions, student perceptions, course grades (a blunt tool for
outcome assessment), student references to course resources, student references to peer-
suggested resources, and evidence of other forms of student interaction with the online learning
environment that were provided by the online learning platform, Blackboard. What follows here
is a summary of our approach in linking these elements into a coherent analysis using grounded
theory techniques, rather than a discussion of findings in detail.
of information and to direct their learning as part of a community of learners (Gabelnick et al.,
1990; Lipman, 1991). If we view learning as a community activity, behaviorist, cognitive, and
constructivist theories can clearly co-exist. Students can be viewed as constructing personal
knowledge from the learning experience itself (constructivism), yet we may still design learning
materials and environments for multiple learning styles within the larger group (cognitivism), or
structure learning materials in such a way that the "rules" of a contextual domain, procedure, or
skill are communicated through directed interactions with learning resources (behaviorism). So
while theories from Psychology may inform the design of learning resources, they do not aid in
understanding how people learn in a social context.
There are two competing theories of online learning that dominate the Information Systems (IS)
literature. The first originated from the work of Garrison, Anderson and Archer (2001), who
developed Dewey's (1916, 1933) social constructivist model of practical inquiry by identifying
three domains of learner participation in an online community of inquiry. Cognitive presence
indicates the degree to which a participant is able to construct meaning through sustained
communication. Social presence is the ability of learners to project their identity and personal
characteristics into the community of inquiry to encourage peer interaction. Teaching presence is
the ability of instructors or learners to provide structure and process in learning environments
(Garrison et al., 2001). An effective learning environment must operate across these three
domains. It must provide a scaffold for learning that structures technology interactions and
interactions with informational resources in meaningful ways to support distance and time-
independent learning (Anderson, 2008). But while we may employ these constructs to
understand how to design learning environments, they do not help us to understand how and why
students engage in behaviors that integrate sensemaking across the three domains.
The alternative theory is based on the work of Vygotsky, who demonstrated that a learner can
perform a task under instructor guidance or through peer collaboration that could not be achieved
alone (Vygotsky, 1978). This led to a focus which dominates much of the Computing & IS,
Computer-Supported Cooperative Learning, and Management Information Systems literatures on
learning, where individual learning is inferred from the construction of shared cultural artifacts
such as wikis or graphical representations of a mathematical problem in a chat room. For
example, Stahl presents a description of collaboration in learning where the construction of
personal understanding both precedes and results from the process of social knowledge building.
Learners iterate through cycles of pre-understanding, personal focus and personal
comprehension. They leave this cycle when ready to externalize their understanding, entering
repeated cycles of social knowledge building where they engage in negotiation of perspectives
until they reach a shared understanding or create a shared artifact. At this point they re-enter a
cycle of personal understanding (Stahl, 2006). However, this body of work often equates the
production of shared artifacts with the internalization of community knowledge without
exploring the shared understandings of these artifact-objects that permit their use in collaborative
problem resolution (Dohn, 2008). There is an incommensurability between the role of objects
and the social ties that make these roles meaningful that requires a careful analysis of both in
order to understand either concept (Latour, 2005). It is possible for individual students to
participate in the construction of shared artifacts without necessarily understanding underlying
concepts or developing transferable skills.
Both of these theories of learning focus on the design of environments and the impacts of
technological affordances on learning outcomes. They also both suffer from the same weakness:
the assumption that learners placed together in a course or learning group will identify with other
group members as a community and therefore develop a socially-situated understanding of
course materials and skills. As any online instructor will recognize, students do not always wish
to collaborate – and they certainly do not always identify with the course as a coherent
community. Nor do they develop shared perspectives easily. So we are left with a theoretical
vacuum when we ask how students develop online learning strategies, how they engage with
communities of learning, and how learning strategies or community engagement affect learning
outcomes.
These lacunae led to our research question:
How do students engage with a community of peer‐learners in ways that enhance shared
and/or individual learning?
RESEARCH METHOD
styles, etc." (Glaser and Strauss, 1967, pg. 177). We followed the Glaserian, or "classical"
method for generating Grounded Theory (Glaser, 1992; Glaser and Strauss, 1967), moving from
identifying broad categories of behavior (open coding), to identifying a core category that
represents the central idea or construct of the study and analyzing concepts in relation to that
core category (selective coding), and then on to theoretical coding, which generates concepts to
explain the integrated set of relationships between the core category and other elements of the
situation.
This sounds a great deal tidier than it tends to be in practice. As a substantive theory relates to a
core category that is grounded in specific mechanisms, contexts or environments, the core
category is often difficult to define. This leads many researchers to define multiple core
categories, as it is so difficult to determine which categories are the most significant in early
sampling iterations of grounded theory analysis (Gasson, 2009). Theoretical sampling often
involves parallel streams of coding, where fragmented subsets of relationships between the core
category and other conceptual categories are tested for how they fit with the data. Not only the
definition, but also the researcher's understanding of the core category's meaning, tends to evolve
across multiple iterations of theoretical sampling as researchers construct and discard hypotheses
and theoretical explanations of the situated phenomena that they encounter.
If this appears to lack rigor, the use of a systematic, evidence-based approach to theory
generation compensates for the confusion. The "troublesome trinity" of theoretical sampling,
constant comparison, and theoretical saturation generates considerable explanatory power that is
not found in other approaches (Hood, 2007). We briefly define these concepts here, but explain
their operationalization in our description of how the research was conducted, below.
Theoretical sampling is the selection of additional empirical data for analysis on the basis of
emerging codes or concepts (Glaser & Strauss, 1967). For example, if we discovered that some
students appeared not to understand course materials because they did not read widely enough,
we might choose to examine learning outcomes for students who did read widely, to compare
and contrast learning outcomes for these samples.
Constant comparison is the comparison of concepts developed in one data sample with the
concepts developed for similar situations in previous and ongoing data samples (Glaser &
Strauss, 1967). For example, if we discovered that some students appeared not to understand
course materials, we might compare these findings to other samples to discover if students who
also did not understand course materials shared similar attributes (such as not reading widely
enough) or different attributes (such as reading equally widely, which might indicate that the
materials chosen for the course lacked some requisite attribute for student comprehension).
Theoretical saturation is reached when additional data analysis reveals no new concepts related
to the "core category" (Glaser & Strauss, 1967). For example, the studies reported here revolve
around the processes by which thought-leaders lead, direct, and encourage individual learning in
an online course community. Our core category could be defined as 'student role-behavior.'
Theoretical saturation would be reached when no new process mechanisms, consequences,
effects, types, or attributes of student role-behaviors in online learning communities are found in
new data. This relies on a theoretical sampling strategy that selects appropriate new data to test
and extend the emerging theory, discussed in the next section.
understand how these behaviors might translate to other types of course and environment. All
online courses available to us used the Blackboard learning environment, focusing on the
presentation of course notes by means of annotated PowerPoint slides, Word documents, and
PDF readings. The discussion board used for all courses provided a threaded discussion forum,
where students could post contributions to a class discussion that was guided by instructor
questions or generated by student interest (for example, the instructor might post structured
questions based on a course reading, or a student might post an open question or comment based
on their insights or queries about a course topic or area of competency). The Blackboard
environment was useful for analysis as it could allow us to analyze not only discussion
contributions, but also their timing and frequency, and the number of times each student's post
was read by others. As the discussion board was threaded, we could analyze the depth and
complexity of discussion threads. For each course that we analyzed, we asked students to
volunteer demographic and employment information at the start of the course and administered a
short post-course questionnaire to evaluate student perceptions of the course and peer
interactions at the end. This achieved a near-100% response rate (only two students did not
respond, across all courses sampled). So we were able to relate course-
related factors to demographics and to identify "valued" peers who were later conceptualized as
"thought-leaders."
Our substantive theory emerged through four main iterations of analysis. The theoretical
sampling strategy that we employed evolved as our theory became progressively more complex –
and better understood. Table 1 relates successive iterations of the study to the logic of how we
selected data for analysis, the evolution of the core category (the category on which the theory
centers), and the emergence of our theory. We started by analyzing student behaviors in a single
course, then we added more and different courses (for the same online learning environment and
type of students). The Management of IS course was perceived as extremely complex by our
technical students and required novel modes of critical thinking. We therefore selected this
course for our initial study, as it presented the potential for deep student engagement. Subsequent
courses that we analyzed were progressively more structured in their use of problem-solving – a
deliberate strategy to examine the impact of problem-structuredness on socially-situated
learning. When we detected inconsistencies in the data, we employed a mixed methods approach,
locating quantitative data to reveal the vapor trails of "invisible" behavior behind the visible
learning behaviors. We analyzed access and read statistics for students on these courses, and then
we analyzed social networks of interaction so that we could relate the degree of interactions in
each course-week with discussion intensity, diversity and direction.
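The week-by-week network measures described above can be sketched in code. The snippet below is a minimal illustration, not the authors' actual instrumentation: the reply records, student IDs, and the `weekly_interaction_stats` helper are all hypothetical, and intensity, diversity, and direction are operationalized here simply as directed-tie counts, participant counts, and the most frequent reply tie.

```python
from collections import Counter, defaultdict

# Hypothetical reply records recovered from the discussion board:
# (course_week, poster, addressee). "All" marks a post not aimed at one peer.
replies = [
    (3, "S4", "All"), (3, "S11", "S4"), (3, "S11", "S4"),
    (3, "S12", "S11"), (3, "S11", "S12"), (3, "S13", "S11"),
    (4, "S9", "All"), (4, "S11", "S13"),
]

def weekly_interaction_stats(replies):
    """Summarize each course-week's discussion network: how many
    directed peer-to-peer ties it contains (intensity), how many
    distinct students take part (diversity), and who replies to whom
    most often (direction)."""
    by_week = defaultdict(list)
    for week, src, dst in replies:
        by_week[week].append((src, dst))
    stats = {}
    for week, edges in by_week.items():
        peer_edges = [e for e in edges if e[1] != "All"]
        participants = {s for e in edges for s in e if s != "All"}
        stats[week] = {
            "intensity": len(peer_edges),
            "diversity": len(participants),
            "direction": Counter(peer_edges).most_common(1),
        }
    return stats
```

A fuller analysis would feed the same edge lists into standard social-network measures (centrality, reciprocity), but even these simple counts make visible which weeks were dominated by a few dyads and which drew the whole class in.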
This evolution of theoretical sampling permitted us to achieve theoretical saturation to the degree
possible with available samples, while exploring a greater diversity of related constructs to
build a more concrete theory over successive studies. The iterations shown in Table 1 are
intended to provide a structure that explains how the grounded theory evolved through four
iterations of the analysis that are discussed below. We summarized the main sampling decisions
in Table 1, but we did not have space to summarize all of the detailed decisions that we made
during the analysis. For example, in the analysis for theory iteration 4, we decided to analyze
threads of discussion depth and discussion diversity backwards across previous samples, as well
as forwards across new samples, as our emerging core category required that we investigate the
drivers for social knowledge construction in more depth than we had in the analysis for previous
iterations of the theory.
Gasson & Waters (2013) European Journal of Information Systems 22, 95–118.
The rest of this paper is focused on the methodological considerations of how a grounded theory
was constructed from our analysis. Data and findings included in the following sections are
meant to represent the analysis, not to provide an exhaustive summary. For example, the
theoretical memos presented below are numbered consecutively in this paper, but these were
selected to represent significant turning-points in the analysis, not the total sequence of
theoretical memos generated.
Initial data sample: discussion board posts over a 10-week teaching quarter from a large
online class (to avoid small group constraints on student interaction) taking a
conceptually complex course (to select data with the potential for student engagement).
We started with an analysis of discussion board posts because the thread structure of discussions,
coupled with the conventions used in student discussions, allowed us to analyze patterns of
explicit (intended) interactions between students. Our initial open coding process focused on
Glaser's (1978) question 3 (what is actually happening in the data?), related to analytical
summaries of questions 1 and 2. These ideas developed over time: for example, the unit of
analysis was taken to be a post statement, rather than a single posted message, in Iteration 2 of
data analysis and theory generation, as discussed below.
Although we predominantly followed a Glaserian approach to the grounded theory analysis
method, our interpretive research philosophy led us to follow the evaluation approach of Strauss
and Corbin (1998), who suggest two types of criteria:
Criteria for the research process are presented as a set of questions that are concerned
with the approach to theory generation. These are concerned with the
grounding/rationale, and the development of criteria for theoretical sampling, data
analysis, and theory-construction.
Criteria for the empirical grounding of the study guide how the researcher may evaluate
the theoretical concepts, their relationships, the conditions under which the grounded
theory may vary, processes embedded in the theory, and the significance of the theory.
We used these criteria to develop reflexivity in our data analysis process, distinguishing between
theoretical memos (used to record insights relating to theoretical patterns in the data) and process
memos (used to record insights relating to our analysis process and its consequences). Where a
significant change in our analysis resulted from a theoretical insight, we have reproduced the
relevant theoretical memo from our original analysis to illustrate interactions between the
development of theory and our analytical process. Where a significant change to our process
resulted from a reflexive insight (one relating cause and effect in the analytical process), we have
reproduced the relevant process memo below.
OPEN CODING
We started by each generating open codes for one week's posts from the discussion board for the
same course (an average of 80-100 posts each), so that we could compare similarities and
differences in posts by the same group of 23 students, taking the same course (Graduate-level
Management of IS). We coded a single week of discussion each (weeks 1 & 2), then exchanged
data samples and coded the other's sample without having seen their coding. As we worked, we
each generated memos on our observations and coding issues, which we shared after we had
each coded each other's data sample. We compared the open codes that we had each generated
and discussed how to resolve discrepancies and how to deal with the coding issues from our
memos. We derived a joint set of open codes, which we then applied to two more weeks' posts
(one week each), again generating memos to record our observations and coding issues and
swapping samples to compare codes. The first major issue, raised by each coder independently,
was the need to place each post in the context of the discussion as the student would perceive it,
in order to interpret its intent and its contribution to the discussion:
Process memo 1: It is difficult to code each post without seeing it in the "flow" of discussion.
Blackboard Threads do not represent this – messages are often assigned to a new thread simply
because the previous thread has reached a certain depth. We need to sort the messages by date
& time, then sort them into separate threads of discussion, according to which message they
appear to be posted in response to.
To this end, we added a thread analysis, where threads of conversation were traced through the
discussions. As messages were posted asynchronously, a response might relate to a post that was
made several posts earlier in the discussion thread. At this point, we needed to de-anonymize the
data, as it was totally impractical to analyze peer interactions where these were often indicated
by the use of the student's name, for example "That's a great idea, Joe." We adapted our analysis
method, to anonymize posts after the sequence analysis.
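The reconstruction described in process memo 1 amounts to a small algorithm: order posts chronologically, then chain each post to the root of the message it appears to answer, rather than trusting Blackboard's own thread assignments. The sketch below illustrates this under stated assumptions; the `posts` records, the IDs, and the `rebuild_threads` helper are hypothetical stand-ins for the exported discussion data.

```python
from datetime import datetime

# Hypothetical post records: (post_id, timestamp, reply_to_id or None).
# reply_to is the message a post *appears* to answer, judged from its
# content, not from Blackboard's own thread assignment.
posts = [
    ("p3", datetime(2013, 1, 8, 14, 5), "p1"),
    ("p1", datetime(2013, 1, 8, 9, 30), None),
    ("p4", datetime(2013, 1, 9, 10, 0), "p2"),
    ("p2", datetime(2013, 1, 8, 11, 15), None),
]

def rebuild_threads(posts):
    """Sort posts by date & time, then group each post with the root of
    the conversation it responds to, recreating the 'flow' of
    discussion as a student would have seen it."""
    ordered = sorted(posts, key=lambda p: p[1])
    root_of = {}   # post_id -> id of its thread's opening post
    threads = {}   # root id -> chronological list of post ids
    for pid, _, reply_to in ordered:
        root = root_of.get(reply_to, pid) if reply_to else pid
        root_of[pid] = root
        threads.setdefault(root, []).append(pid)
    return threads
```

Because responses often arrive several posts after their target, chaining on the apparent reply target (rather than simple adjacency) is what keeps asynchronous contributions attached to the right conversation.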
Then we turned our attention to what was happening within the threads: how students were
engaging with the topic or with each other. The change in focus quickly generated a set of open
codes that we could not interpret without understanding to whom the student was responding
and why. Even with an agreed topic guide (whose interpretation the two of us had discussed),
after the first round of analysis our open codes agreed on only 32% of posts,
mainly because we were categorizing similar perceptions of posting behavior in different ways.
Table 2. Open Code Categories Demonstrating Similar Concepts Categorized Differently
Discussion post:
"Recently I was involved in a project to enable seeded EDI functionality out of Oracle. Sounds
like a straight-forward thing to do, but the implications to the business and to our customers was
astounding. We found, in the innocent questioning of stakeholders, what the impact of doing so
was and the response was - don't do it! Well... if we're going to do it then we need to customize.
The issue is that the data we have in the systems isn't the best. So if we automate our invoicing
process without any controls, bad invoices are sent to the customer - without any review or
intervention. The project came to a virtual stand-still. Instead what we did was gathered all the
pertinent process teams together to discuss the process flows, end-to-end, to ensure that we were
going to implement the right solution for the company; to minimize exposure, risk and keep our
customer relationships in a good place. We're now in the development stage of the project - 8
months later! It's better that we took the time upfront to iron it all out than to be in a panic and
back-pedal mode once everything moved to production."

Coder 1 open code categories:
- Standalone post in response to question
- Example of situation
- Demonstration of professional experience
- Topic = stakeholder involvement

Coder 2 open code categories:
- Initial post in response to question
- Example of professional expertise
- Complication of initial problem formulation
- Topic = defining system purpose
A major issue in discussions of the grounded theory method is whether to focus on a single, core
category, or whether to develop multiple categories that can be interrelated to provide a meta-
understanding of the core category as this emerges. We decided to use the latter approach,
simply because we were unsure what should constitute our core category: we had so many to
choose from! Because we had, very early on, focused on analyzing student contributions to
discussion boards in the context of the "flow" of discussion (i.e. attempting to see the discussion
flow as the student would perceive it at the time of posting), we started to see different types of
behavior emerging fairly rapidly. We had both noted (in theoretical memos) that students
appeared to be adopting roles in the discussion, roles that seemed to guide the direction of future
debate. This realization led us to categorize the memos that we had produced in the first two
rounds of coding to see what other similarities had escaped us, across the two coders. We
concluded that the majority of our theoretical memos (memos about emergent theoretical
patterns or insights that we observed in interactions between students) were concerned with what
we identified as three "dimensions" of student posting behaviors:
(i) the apparent intent of the post: ask question, respond to question, provide information
and/or resources, etc.;
(ii) the interaction behavior: standalone post, response to point made by another,
development of another's idea, etc.;
(iii) the use of course resources or personal knowledge and expertise: reference to course
materials, reference to reading, reference to professional experience of situation,
reference to personal experience of a similar problem, etc.
We agreed a mutual set of open codes around these three dimensions, then engaged in another
round of open coding. We applied our "dimension-oriented" codes retrospectively to our first
two data samples (a total of four weeks' posts, about 400 messages in total). This appeared to
remove many of the gray areas in coding responses: agreement between the two coders jumped
to 60–70%, and most of our "how can I categorize/interpret this?" issues disappeared.
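The agreement figures quoted here can be read as simple percent agreement between the two coders. The paper does not specify the computation, so the sketch below is an assumption-laden illustration: the `percent_agreement` helper and the sample code lists are hypothetical, and chance-corrected measures (e.g. Cohen's kappa) may be preferable for formal reporting.

```python
def percent_agreement(codes_a, codes_b):
    """Proportion of coded units (post statements) to which both
    coders assigned the same open-code category."""
    assert len(codes_a) == len(codes_b), "coders must code the same units"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical open codes for ten post statements from one week's sample.
coder1 = ["question", "response", "resource", "response", "question",
          "resource", "response", "question", "resource", "response"]
coder2 = ["question", "response", "resource", "question", "question",
          "resource", "response", "response", "resource", "response"]
```

On this toy sample the two coders disagree on two of ten statements, giving 80% agreement; tracking the measure across coding rounds makes the effect of a revised code set directly visible.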
But we were still not making much progress on understanding the patterns of interaction that
related to the social construction of knowledge. For example, one student might post a question
and several other students might post an answer: these were categorized as student‐question and
question‐response. One early theoretical memo comments that:
Theoretical memo 1: It appears that student‐question <drives> question‐response. Although this is
apparent from our data, it is not helpful in inspiring new theoretical perspectives!
We used these to jointly code the first two weeks' discussions from which we had generated our
open codes. As we discussed how to interpret the categories, the second author generated the
analogy column, which helped us both to understand better how the posting-behavior might be
conceived as playing a specific role in the community debate. We began to understand that a
student's role in the discussion might be a core category for us. Naming of the core category was
important to our analysis: the term "roles" could be interpreted as a static behavior, but it was
obvious after analyzing a few weeks of discussion that student roles were not static:
Theoretical memo 3: Student roles don't seem to be as static as the literature used in my
teacher training would suggest! While some students seem to "lead" the discussion more than
others, it is not always the same students who do this in different weeks, or for different
questions/topics. We need a different term than "role" to describe this behavior.
To indicate the dynamic nature of student roles in the discussions, we decided to refer to our core
category as "role-behavior." The unit of analysis was defined as an individual message-post in
the context of a topic discussion. We applied the role-behavior typology to the six weeks' of
discussion post data that we had already analyzed and found that this immediately generated
contingency-factors for various role-behaviors. From our analysis of posts, the first 5 role-
behaviors (Initiator, Closer, Complicator, Peer-knowledge-elicitor, Facilitator) appeared to
involve higher levels of interactive engagement with community knowledge construction,
whereas the latter 3 role-behaviors (vicarious-knowledge acknowledger, contributor, passive-
learner) appeared to consist of more passive forms of engagement.
could analyze sequences of interaction, in particular to which message the post was responding.
This facilitated a process analysis of interactions between course members in enabling peer-
learning. An example of a discussion-thread sequence from our analysis spreadsheet is provided
in Table 4, to illustrate this.
Table 4. Example of a Discussion-Thread Sequence
From S4 to All (Contributor):
"The value chain model doesn't fit today's business model… The correct sequence should put
marketing and sales in the first place. Manufacturing, storage, distribution should rely on
customer orders. …"

From S11 to S4 (Complicator):
"The value chain bugged me, too, when I first read it. On reading it again the light dawned. The
text is only showing one version of the value chain, when in fact there are several. Which one is
applicable to a business depends on their business model. … I am most familiar with
engineer-to-order (ETO) which is generally for very large, expensive, and one-of-a-kind items
like ships, communications satellites, and power plants… For ETO I would definitely place
"Design" as a primary value chain activity (rather than secondary as part R&D) …. R&D
involves discovery of new design techniques, design for a customer order uses already proven
techniques."

From S11 to S4 (Peer-Knowledge-Elicitor):
"I just showed my value chain bias too - I only talked about models that create a product. I
imagine there are a whole 'nother set of value-chains for companies that provide services in
different ways."

From S12 to S11 (Complicator):
"I think it is also difficult to understand the value chain because we are reading it from a static
standpoint. Depending on what phase the product is in, i.e. is it new and the first batch is being
processed, has it been around for a while. I would certainly like to see #4, Marketing & Sales to
be first or second. If initial market studies were unfavorable and potential customers did not
respond favorable to my product, I certainly would not want to invest heavily in the incoming
materials. I think this is where we could begin to look at IS in assisting with our supply chain."

From S11 to S12 (Vicarious Acknowledger):
"I agree with S12. Product maturity is not so relevant in an ETO world because most end items
are built only for one sale. But in a typical retail world I can see how there would be differences
based on product maturity. The text's value chain almost looks like it is for a brand new
product."

From S13 to S11 (Facilitator):
"I have to go the other way. To me, the text's value chain seems best suited for an established
product. As others have observed, there is no initial step where the need for the product is
determined. That seems to imply a known demand that is being met. … S11 made some great
points about different delivery processes. I think this model could be used for all of those."

From S9 to All (Closer):
"I've been doing some thinking in this area. Education is different in many respects from
manufacturing. … I got the sense that there is a kind of chicken/egg thing going on with the
value-chain model. Without sales there is no need for inputs, but without manufacturing, etc.
there is nothing to sell. Unless products are marketed, there is no sales. So it's circular."
Some role-behaviors were less common than others and we tried to understand the reasons why.
Thread or topic initiator behavior was rarest, with only four students suggesting a completely
new topic of discussion. This summary analysis also presented a problem with our categories, as
the initiated topics included both social posts and course-domain-related topics. We decided to
split the initiator category into social-contact-initiator and topic-initiator. The social-contact-
initiator category (for example: "Hi Jack – I did not know you were also an Eagles fan") was
extremely common in the "introduce yourself" posts of week 1, but not found elsewhere.
Combining these with topic-initiator posts was biasing our analysis of interactive learning
behavior. They also did not appear to influence connections that were later established in the
interactive learning discussions. For example, the most prolific social-contact initiator did not
appear to be regarded as influential or interesting later in the course (very few peer-learners read
their contributions). After examining our data with and without the "introduce yourself" posts
Gasson & Waters (2013) European Journal of Information Systems 22, 95–118.
(posted to a separate discussion-board from the topic discussions), we decided to exclude the
former from our analysis: social contact initiation did not affect later forms of interaction at all.
From now on, we used the term "Initiator" to refer solely to topic initiation behavior. We
repeated this comparison for later courses and found the same lack of effect.
Topic-initiator posts, where a student introduced a new topic of discussion, were rare (3.6% of
total posts). The Complicator role-behavior, where a student reframed the problem or evidence
being discussed, to redirect the ways in which these were treated by other students, was also rare
(7% of total). By far the most frequent type of contribution to the discussion was the Contributor
role-behavior (52.2% of posts). This was unsurprising in a course where contribution to class
discussion was mandatory. But it also reflects the difficulties with determining the intent of
online discussion posts, particularly whether a post was in response to another student's post, or
in response to the instructor. Facilitator behavior was fairly common (27.5% of posts), which
indicated an awareness that this was a social learning process. Peer-knowledge-elicitation (3.4%)
and vicarious-acknowledgement (2.2%) were also rare, but seemed directed at explicitly
encouraging the social exchange of knowledge. Closer behavior (4.1% of total) alternated among
a small number of students. As we progressed with the analysis, we became better at determining
the intent behind posts; in the process, we also became better at understanding what our
categories meant, in terms of posting role-behaviors. Surprisingly, this did not result in more or
different categories (after the split of the Initiator role-behavior). We might have expected our
theoretical sampling – categorizing discussion contributions for more or less difficult or
structured problems – to reveal some variations in role-behavior. But it did not, indicating that our categorization scheme was
applicable across different circumstances, a key indicator of theoretical saturation. However, as
the analysis proceeded, we did interpret these categories in subtly different ways, reflecting a
deeper understanding for each iterative analysis discussed below.
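As a rough sketch of how such a frequency summary can be derived from a coded-post spreadsheet, the tally below works over hypothetical (student, week, role-behavior) rows; the field layout and the data are illustrative, not the study's own coding records.

```python
from collections import Counter

# Hypothetical coded posts: (student, week, role_behavior) tuples standing in
# for the rows of a selective-coding spreadsheet.
coded_posts = [
    ("S4", 2, "Contributor"), ("S11", 2, "Complicator"),
    ("S11", 2, "Peer-Knowledge Elicitor"), ("S12", 2, "Complicator"),
    ("S11", 2, "Vicarious Acknowledger"), ("S13", 2, "Facilitator"),
    ("S9", 2, "Closer"), ("S21", 3, "Initiator"), ("S21", 3, "Contributor"),
]

def role_frequencies(posts):
    """Return each role-behavior's share of total posts, as a percentage."""
    counts = Counter(role for _, _, role in posts)
    total = sum(counts.values())
    return {role: round(100 * n / total, 1) for role, n in counts.items()}

print(role_frequencies(coded_posts))
```

Grouping the same tally by the week field (rather than over all posts) yields the per-week incidence profile that a chart like Figure 1 visualizes.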
When we analyzed posting behaviors for the class as a whole, by week, we found equally
dynamic patterns of role-behavior. These appeared to depend on the topic and the flow of
debate (as indicated by thread diversity, thread depth, and the number of participants in a thread).
Figure 1 illustrates the relative incidence of role-behavior categories by week (with the two
initiator categories combined).
[Figure 1: relative incidence of role-behavior categories by week (intro, weeks 1-10); vertical axis 0-80 posts; categories: Initiator, Complicator, Contributor, Facilitator, Peer-Eliciter, Vicarious-Ack, Closer.]
Table 5. Example of Coding Debates (left: student discussion posts; right: researchers' coding annotations, alternating voices separated by /)

| Discussion post | Researcher annotations |
|---|---|
| I was a little unsure about the domain integrity and I didn't even think of the member ID #! How do you think other fields with integers like phone numbers or addresses would be handled? Would they even be considered for domain integrity? | PEER-KNOWLEDGE ELICITOR or FACILITATOR? – MIGHT BE AN INTERESTING AMBIGUITY HERE. This matches FACILITATOR: "Attempt to draw out further debate on a question. Can include a question about a prior contribution i.e 'Why did you use this model not XXX'" And PEER-KNOWLEDGE ELICITOR: "A request for information from peers, including: What to do, What does something mean." / I think facilitator as I see it as an attempt to expand the debate. / OK - I agree with this, on reflection. |
| (S20, 2/19/08 9:26 AM) I figure that attributes like addresses that have both numbers and letters would be "text" rather than "integer." I'd probably have the street and zip code as required fields for members, as usually a person has to prove their residency to get a library card. | CONTRIBUTOR. I originally viewed this as a CLOSER tying up the previous posts, however as I look at it the "I figure" and "I'd probably" look like the poster is putting out a position for debate so I changed it to FACILITATOR – but it is far from clear cut. |
| But I would have phone number as not required - I'd probably recommend it so we could call them about reserves, etc. Something I tried to keep in mind about the member information is issues of privacy, so I'd want to keep as little as possible on file about each member. | A related ambiguity to the note above – this passage is information-providing, so it's not PEER-KNOWLEDGE-ELICITOR, but does it try to advance or draw out further debate? It seems to answer in a closing-kind of way, but it's not creating new knowledge, so I'm calling it CONTRIBUTOR even though that seems to be a loaded and pejorative term in your scheme, i.e., it's "merely contributing." |
| The one thing that I don't understand is that you list "address" as a descriptive element under several different entities such as employee, branch library, and members. I wasn't sure whether this was considered redundant or not. It makes sense to me that if it is used under each entity and is specific to that entity then it is o.k. | PEER-KNOWLEDGE ELICITOR. / I see this as a Complicator, pointing out an inconsistency (redundancy) and suggesting an alternative. / Agreed, although the "I wasn't sure..." part might still be in question – it's a stretch, but there seems to be an implied question here, and the "It makes sense..." answers it. |
| In my diagram, I made address its own entity because I figured that some employees would also be members, or at least conducting loan transactions, so I figured that making address its' own entity would allow all people within the county system to be identified by their address. All of this is just so exciting and it's really neat to see everyone's ideas. Nice work! Very Respectfully, S1 | CONTRIBUTOR. / OK – you're right about not making too many fine-grained divisions, or we'll have a bias to CONTRIBUTORS as follow-on. Let's remove the CONTRIBUTOR category here and look at this again for the ones that we have already coded. / OK – good idea. |
As the volume of data grew, each of us worked for some time with a doctoral student assistant
to help code it. This meant that we now had to communicate the rationale for
coding discourse contributions in a specific way, which led us to question our intersubjective
understanding of the selective codes – a useful exercise. There were many debates between the
two lead researchers and the doctoral assistants that questioned the way in which we analyzed
our data, as shown in Table 5. This led to our exchanging multiple process memos, theoretical
memos that related to our grounded theory process, rather than to our emerging substantive
theory of collaborative knowledge-building in online course interaction. Some of these were
contained in emails, some were attached to specific quotes in Atlas.ti, depending on the urgency.
An example of how process memos led to interactive/comparative analysis is given in Table 5,
from the (fairly technical) IS Requirements Analysis course.
| Week | Thread | Initiator | 2nd Poster | 3rd Poster | Closed | Thread depth | # Posters |
|---|---|---|---|---|---|---|---|
| 2 | 1.12 | S22 | Instr | | | 2 | 2 |
| 2 | 2.1 | S4 | S11 | S11 | Instr | 8 | 5 |
| 2 | 2.2 | S9 | Instr | S8 | S21 | 5 | 5 |
| 2 | 2.3 | S15 | S13 | S17 | S5 | 6 | 5 |
| 2 | 2.4 | S11 | S21 | | | 2 | 2 |
| 2 | 2.5 | S23 | Instr | | | 2 | 2 |
| 2 | 3.1 | S21 | S21 | S20 | S15 | 11 | 7 |
| 2 | 3.2 | S11 | S17 | S13 | S13 | 11 | 8 |
| 2 | 3.3 | S7 | S12 | S5 | | 3 | 3 |
| 2 | 3.4 | S2 | S6 | | | 2 | 2 |
| 2 | 3.5 | S8 | S21 | S20 | | 5 | 3 |
| 2 | 3.6 | S5 | S23 | S3 | S22 | 9 | 7 |
The use of a spreadsheet summary analysis of our various data categories provided visual cues to
stimulate our recognition of patterns in the data. It became obvious from a cursory glance down
the "Initiator" column that S11 and S21 were consistent initiators of discussion threads and that
S21 closed (summarized for others) more threads than any other student. This understanding
provoked the realization that students, apparently quite deliberately, adopted various roles in
the shared construction of knowledge through class discussions.
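The pattern-spotting described above can be sketched as a simple tally over thread-summary rows. The rows below are illustrative, modeled loosely on the spreadsheet excerpt; instructor ("Instr") closures are filtered out because the interest is in student role-taking.

```python
from collections import Counter

# Illustrative thread-summary rows: (week, thread, initiator, closer).
# Real rows also carried intermediate posters, thread depth, and poster counts.
threads = [
    (2, "2.1", "S4", "Instr"), (2, "2.2", "S9", "S21"),
    (2, "2.3", "S15", "S5"), (2, "2.4", "S11", None),
    (2, "3.1", "S21", "S15"), (2, "3.2", "S11", "S13"),
]

# Who starts threads, and who closes (summarizes) them for others?
initiations = Counter(t[2] for t in threads)
closures = Counter(t[3] for t in threads if t[3] and t[3] != "Instr")

print(initiations.most_common(2))
print(closures.most_common(2))
```

Sorting the tallies surfaces the same cue that a visual scan of the "Initiator" column gave the researchers: a small number of students recur at the top.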
Degrees of Social Engagement
We found that we could distinguish between three "levels" of individual student engagement
with the debate: participation, involvement, and social engagement. Once we became aware of
the distinction between participation and involvement, we related these to a similar distinction
found in the literature on user involvement in systems. Participation in a process, typically
assessed by the degree to which individuals perform specific activities, can be contrasted with
involvement in the process, which requires a psychological state of identification with some
object or goal, to the extent that it is perceived as both important and personally relevant (Barki
and Hartwick, 1989). We distinguished these categories from a third category which indicated
an interest in social engagement with others in the community of peer-learners, as indicated in
Table 7.
Table 7. Three Levels of Socially-Situated Engagement With The Peer Learning Community
| Level | Form of Activity | Observed Learning Interactions |
|---|---|---|
| Participation | Observable behavior that denotes interaction with course materials through passive activity and externalization (reproduction) of knowledge acquired in this way. | Predominantly contractual reproduction of knowledge, as student grades depend on the frequency and quality of discussion posts. This results in individual learning. |
| Involvement | Behavior that indicates a psychological state of identification with course objects, indicating the internalization of knowledge from other learners and the reuse (objectivation) of such knowledge in discussion posts. | Engaged students, who appear to be enthusiastic about the topic and who debate points raised by others. This results in a joint learning outcome (shared knowledge across peer-learners). |
| Social Engagement | Behavior indicating enthusiastic commitment to the facilitation and direction of sustained learning (cycles of knowledge externalization, objectivation, internalization, and reframing). Socially engaged students interacted with peers in the learning community as well as the topic. | Students who actively manage social interactions with peer-learners, explicitly facilitating or directing discussions to reframe the subject of discussion. This results in the active co-construction of knowledge with peer-learners. |
We found that analyzing cycles of internalization and externalization in the discussion threads
revealed distinct patterns of interaction, demonstrating the social construction of knowledge
across course members as a community. These patterns involved knowledge externalization
(reproduction of knowledge obtained from other sources, such as course readings, course notes,
or personal experience), knowledge objectivation (accepting someone's ideas as "fact"
independently of the person who first originated them), knowledge internalization (the adoption
of a perspective or understanding provided by others), and knowledge reframing (where the
application of knowledge is challenged and positioned with respect to a new frame of
reference)1. Knowledge externalization was equated with Contributor role-behavior, where the
ideas of reference sources or course materials were applied to a situation under discussion.
Knowledge objectivation was equated with Contributor role-behavior where ideas externalized
in earlier posts were accepted and built on as "fact," independently of who first originated them. Knowledge
internalization was equated with Contributor, Peer knowledge-elicitor or Vicarious-knowledge
acknowledger role-behaviors which used or queried knowledge externalized by others, but in a
different context of application. Reframing was equated with Initiator or Complicator role-
behaviors such as responses to the ideas of others with comments such as "but what if X
happened?" (which often initiated a new sub-thread), Peer knowledge-elicitor or Complicator,
role-behaviors where alternative perspectives were presented to compete with the dominant
perspective under discussion by the group, or Closer role-behaviors where the discussion was
synthesized to present a more coherent perspective.
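A minimal sketch of the role-to-process mapping just described, using this paper's category names; a thread can be treated as exhibiting a full social-construction cycle when all four processes appear in its role sequence. The set-valued mapping reflects the fact that one role-behavior can signal more than one knowledge process.

```python
# Role-behavior -> knowledge-process mapping, as described in the text.
ROLE_TO_PROCESS = {
    "Contributor": {"externalization", "objectivation", "internalization"},
    "Peer-Knowledge Elicitor": {"internalization", "reframing"},
    "Vicarious Acknowledger": {"internalization"},
    "Initiator": {"reframing"},
    "Complicator": {"reframing"},
    "Closer": {"reframing"},
    "Facilitator": set(),  # keeps the cycle going rather than adding a phase
}

def processes_in(thread_roles):
    """Union of knowledge processes signalled by a thread's role sequence."""
    seen = set()
    for role in thread_roles:
        seen |= ROLE_TO_PROCESS.get(role, set())
    return seen

# A full cycle appears when a thread exhibits all four processes.
cycle = {"externalization", "objectivation", "internalization", "reframing"}
print(processes_in(["Contributor", "Complicator", "Closer"]) >= cycle)
```

This is an interpretive aid, not a classifier: in the study the assignment of processes to posts was itself a matter of coder judgment and debate.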
Facilitation role-behaviors were required to keep a cycle going: these tended to originate from
the more socially-engaged individuals. Although the different levels of engagement were
predominantly displayed by individuals (i.e. indicated by specific sequences of role-behavior),
they were also group characteristics, in that a sub-group of the course community would engage
in a heated debate, where participants constantly complicated and reframed each other's
perspectives until they decided that they understood the topic of discussion in a collectively
satisfactory way. This was strongly indicative of socially-shared knowledge construction (Berger
and Luckmann, 1966):
Theoretical memo 4: It is unclear whether social engagement is an individual construct or a
group construct. While sequences of posts made by an individual seem to demonstrate the
cycles of externalization → objectivation → internalization → reframing, this only seems to
happen in specific sequences of discussion between students, indicating that the aim is shared
knowledge construction.
We also characterized topics of discussion by the area of interest, arriving at a set of high-level
categories: career-related, professional-practice, industry-specific techniques and tools, skills-
oriented, resource-related (relating to the interpretation of course readings or other materials) or
assignment-specific. We found that topics in the first three categories generated more
impassioned and interactive debate, which went on for much longer than topics in the latter three
categories. However, topics in the latter three categories could occasionally generate passion
and interactivity when students felt that these were relevant to their course performance. For
example, "how should we order the stages of the value-chain for different industries?" was a
student-initiated thread that related to a course reading. This topic dominated that week's
discussion and was raised frequently in subsequent weeks of discussion.
1. The concepts of internalization, objectivation and externalization derive from the work of Berger and Luckmann
(1966), who deal with the social construction of reality. These concepts are dealt with below, in our discussion of the
substantive grounded theory.
From this categorization, we were able to observe (across courses) that when discussion topics
focused on issues of individual concern, peer-interaction was significantly reduced, with
students posting their opinion, rather than debating issues. We also discovered that all three
courses showed the same pattern of engagement: interactive debate was higher in the early weeks
and tailed off towards the end of the course. From an analysis of the post content, we theorized
that this might be because students were preparing to disengage with the course community, as
well as because of assignment work:
Theoretical memo 5: The posts at the end of term do not contain as many threads or discuss
topics in as rich a manner as earlier threads. This might be because enthusiasm is waning, but it
also appears to be because the social bonding that we saw earlier is not needed at this stage.
This realization developed our understanding of what we understood by social engagement in
peer learning, providing us with the statement of theory iteration 2 that is summarized in Table 1.
THEORY ITERATION 3
Tracking The Vapor Trails of Non-Participation
The findings presented so far reveal a key problem with employing a qualitative grounded theory
analysis to analyze online interactions. We had defined eight categories of role-behavior, but we
were only able to account for seven of them in our analysis of online discussions. We were faced
with the research process problem of how to evaluate non-participant interactions with peer-
learners, course resources, and the instructor. Passive-learners may learn vicariously or may
bring learning from the online community into the real world. However, they decline to engage
actively in community debate, so there is little evidence of their presence. Lowe (1996) suggests
that a grounded theory can be developed, not just by constantly comparing the data across
samples, but also by looking for negative occurrences (absences) of relationships that previous
samples have suggested might hold, to verify or refute the existence of a category. Based on our
analysis of discussion posts from infrequent contributors, we had each generated theoretical
memos that hypothesized that students who did not engage in the more "active" role-behaviors
might be more (inter)active than they appeared. But we needed to discover how they were
interacting with course resources and concepts.
We had some clues about passive engagement, as we could now track the re-emergence of topics
from earlier weeks in later discussions. So we were able to detect, for instance, that some
students were acting anomalously:
Theoretical memo 6: Student S2 in course 1 (MIS) did not contribute any posts at all to
discussions in weeks 1‐5. In week 7, they reiterated ‐‐ and, in fact, complicated ‐‐ an issue raised
in the week 2 discussion of the value‐chain. This indicates that student S2 has read and reflected
on peer discussion posts in the early weeks, even if they did not actively participate.
We had no access to any qualitative data that would permit us to analyze S2’s behavior. We
concluded that interacting with students directly would affect the behavior that we wished to
observe "in the wild." The need to explore hidden behavior relevant to our emerging theory led
us to search for new data that would reveal how each student, whether participating actively or
not, read and used the posts of others. We decided to complement our qualitative analysis with a
quantitative analysis, summarizing our logic in a process memo:
Process memo 2: We need a way to trace passive participation, to explain what is happening
behind the explicit patterns of behavior that we are seeing. Instead of constant comparison
based on triangulation, we need a "complementary comparison" strategy, to explore the gaps in
our data that result from invisible participation (and perhaps even social engagement, if we
could see it). We need to see if we have access to any quantitative Blackboard access data that
would allow us to explore inferences relating to these behaviors.
We term this mixed methods, not because we deserted the principles of interpretive grounded
theory, but because we used summary statistics and simple correlations to confirm, explore, or
refute inferences from the "vapor trails" left in the qualitative data by student interactions with
the posts of others. A similar approach was employed in a study by Levina & Vaast, whose
emergent conceptual themes were explored by adding new interview questions for subsequent
informants, who would confirm or refute these propositions (Levina and Vaast, 2008).
Analyzing "Invisible" Participation
We had access to counts of the number of times a student accessed various sections of the course
environment. We could see how frequently each student accessed the discussion board, although
not which messages they were reading. We also had access to read-counts for individual
messages, and we could also see how frequently non-participants were accessing course
resources compared to active participants in any week's discussion. Using this data, we could
"triangulate" some of the student behavior that we were observing. We found, for instance, that
student S2 was reading the discussion board more widely than many active participants in the
discussion (a higher number of messages read than other students would indicate that they read
posts more widely than others, assuming that they did not repeatedly read the same messages).
S2 also visited more frequently than the average, indicating that they were motivated to
participate actively in vicarious peer-learning, even though they were not actively participating
in the peer-discussion. This pattern of behavior appeared widespread: non-participants in any
week's discussion actively read the posts of others and frequently used their ideas in framing their
own work.
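A sketch of how access-log counts can flag this kind of behavior: students whose read counts sit above the class average while their post counts sit below it. The counts and field names below are invented for illustration; they are not Blackboard's actual export format or the study's data.

```python
from statistics import mean

# Illustrative per-student activity counts derived from access logs:
# how many of others' posts each student read, and how many posts they wrote.
activity = {
    "S2":  {"reads": 310, "posts": 2},
    "S11": {"reads": 280, "posts": 25},
    "S21": {"reads": 150, "posts": 60},
    "S7":  {"reads": 90,  "posts": 12},
}

avg_reads = mean(a["reads"] for a in activity.values())
avg_posts = mean(a["posts"] for a in activity.values())

# "Engaged lurkers": read more widely than average while posting little.
lurkers = [s for s, a in activity.items()
           if a["reads"] > avg_reads and a["posts"] < avg_posts]
print(lurkers)
```

Note the built-in caveat from the text: a high read count only indicates wide reading if the student is not repeatedly re-reading the same messages, which these logs cannot distinguish.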
Across all students, we discovered that there was a significant correlation (0.94) between the
frequency of read accesses and the student’s course grade (we used a detailed percentage grade
for this analysis, adjusted for the professor's grading curve, not the rather blunt instrument of
letter grades). The correlation between the number of visits to the discussion board by a student
and the number of discussion board posts was 0.69, indicating that the more a student visited the
discussion board, the more they posted. This led us to the tentative theory that many students
read and reflected on others’ contributions, before contributing themselves:
Theoretical memo 7: We are seeing a type of "engaged lurker" behavior here, both in students
who post actively and students who do not. Students who adopt a Complicator role‐behavior
seem to have read more posts in that week than students who do not. Students consistently re‐
raise issues in later weeks that they could have only seen in discussions from earlier weeks. I can
only conclude that students read and reflect on the posts of others before posting themselves –
with the exception of a consistently‐small "core" of influential students, who just seem to pull
the threads together, straight off. But even these students interact a great deal with others. This
could be important.
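Correlations like those reported above can be computed on any such paired vectors with a plain Pearson coefficient. The read and grade vectors below are invented for illustration; the study's r = 0.94 (reads vs. grade) and 0.69 (visits vs. posts) come from its real access logs and percentage grades.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-student vectors: read accesses and percentage course grade.
reads  = [310, 280, 150, 90, 200, 120]
grades = [92, 95, 78, 70, 85, 74]

print(round(pearson(reads, grades), 2))
```

As in the study, a strong positive coefficient here would only license a tentative inference (reading widely accompanies better grades), which then has to be taken back to the qualitative data for interpretation.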
Given these data points, we returned to the qualitative data analysis and searched for the latter
behavior by examining the references to ideas from other students that we found in low-
frequency posters' messages. We did find strong support for our theory that students who read
the posts of others more tended to do significantly better in the course, whether or not they
posted ideas to the discussion board themselves. We also found strong indications that some
students appeared to act as "thought-leaders" for the learning community. This led us to
formulate theory iteration 3, summarized in Table 1. By the end of theory iteration 3, our core
category definition had evolved: from being defined as interactive student role‐behaviors, we were
now conceptualizing the core category as interactive peer‐learning behavior. We now characterized
peer-learning as interaction with the ideas of peer-learners, rather than interaction between peer-
learners; some of this interaction involved vicarious learning. So our understanding of social
engagement became more complex, involving indirect interactions with the ideas of others, as
well as direct interactions through debate. From conceptualizing passive learners as non-
participants, we began to understand that the majority of our passive learners were deeply
engaged in the process of peer-learning, by regularly reading the posts of others. Although they
chose not to contribute directly to community debates, they were proactively integrating the
products of those debates into their individual learning.
THEORY ITERATION 4
Developing The Thought-Leader Concept
We had noted in our theoretical memos that some students appeared more influential than others
in shaping the debate. Some students' ideas and suggested resources (such as online readings)
were referred to more frequently than those of others: this reference appeared to be related to
students who adopted Complicator, Initiator, and Facilitator role-behaviors more than other
students, but the link was unclear:
Theoretical memo 8: We are seeing a small core of influential students, the "thought‐leaders,"
who are referenced most and interacted with most, in each week's discussions. Are these the
students who post the most messages in any week? S11 made more Complicator contributions
than any other student. S21 posted more than twice as many contributions as S11 (including
over twice as many facilitation‐role postings). But S21's posts were much less likely to be read:
28% less frequently than S11. S11 also had the highest overall class grade, although none of the
cohort could have predicted this (or could they? – do they have some way of recognizing an
exceptional student?).
We had initially expected the most frequent posters to be the most read, as they would be
perceived as the best informed (or most visible) students. But there appeared to be no
relationship between the number of contributions made by individuals and the number of times
each student’s messages were read. S21 was identified as the single most significant initiator:
they posted 85% of the initiator messages and made twice as many contributions as the next most
frequent poster. They therefore stimulated many more direct interactions than other students. But
the number of accesses per message by other students for S21 was below average. It appears that
other students assigned less importance to this person’s contributions. While S21 achieved a
good grade, this student needed to rework assignments much more than any other student,
perhaps indicating less inclination to internalize what was read than to post thoughts as these
occurred. Messages posted by student S11 were more frequently read than any others. This
particular student exhibited a strong facilitation role in class discussions – S11 made more
facilitation-role contributions than any other student except for S21 – but posted less frequently
than many other students.
Relating Thought-Leaders To Social Network Development
We decided to use an alternative technique to identify which students appeared to be regarded as
thought-leaders by other students and to understand how thought leaders influenced social
engagement by other students. Following Hansen and Kautz (2005), who supplemented the
analytical coding techniques of the grounded theory method by using visualization tools to
suggest inductive theories of software development tool use, we decided to use a visualization
tool to explore our findings. We modeled interactions between students in online discussions,
using a social network analysis visualization, to understand the nexus of influence. An example
is presented in Figure 2.
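What such a visualization computes can be hinted at with a pure-Python sketch: tally directed reply edges between students and use in-degree (how often a student's posts attract direct replies) as a crude proxy for the nexus of influence. A dedicated social network analysis package would add graph layout and richer centrality measures; the edge list here is illustrative, not the study's network.

```python
from collections import Counter

# Directed reply edges (from_student, to_student), as would be extracted
# from the discussion threads. Values are invented for illustration.
replies = [
    ("S11", "S4"), ("S12", "S11"), ("S11", "S12"), ("S13", "S11"),
    ("S21", "S11"), ("S20", "S21"), ("S5", "S11"), ("S17", "S13"),
]

# In-degree: a simple proxy for influence in the reply network.
in_degree = Counter(dst for _, dst in replies)
print(in_degree.most_common(3))
```

Even this crude tally can separate a student who attracts many replies from one who merely posts often, which is exactly the distinction the thought-leader analysis turns on.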
We were also able to follow up our analysis of the theme developed in theoretical memo 5, that
there was less social interaction and fewer topics discussed in later debates than earlier in the
course. A content analysis of posts revealed that this was the case, but also that social cues had
not disappeared. Instead, the established social relationships in the community, especially the
identification of key thought-leaders appeared to allow the debate to become more focused, with
students choosing fewer peers with whom to interact and narrowing down the discussion to key
themes much earlier in each week's debate. At this point in our analysis, we became confident
that knowledge was being socially constructed among a "community of inquiry" and that
students themselves recognized this as an important part of the process, which allowed us to
formulate iteration 4 of our substantive theory of social engagement, highlighting the central role
played by thought leaders and elaborating on the mechanisms by which both interactive peer-
learning and vicarious peer-learning take place within a community of inquiry.
DISCUSSION OF FINDINGS
themes in the qualitative data, and (last but not least) with student questionnaire data that
confirmed or refuted our findings. Each of these sources was obtained by means of a new data
collection cycle; the analysis of each cycle sent us back to our original qualitative analysis to
question or develop our findings from that data.
Although we describe our approach as a mixed methods approach to generating grounded theory,
we wish to distinguish this approach from recent calls to generate multi-grounded theories of
action in the MIS field (Lind and Goldkuhl, 2006; Tan and Hall, 2007). Multi-grounded theory
approaches employ a post-positivist approach to data analysis, using objectivist data coding
techniques to "convert" qualitative data to a form that may be analyzed quantitatively, or
supplementing qualitative data with quantitative data (for example, supplementing interview
findings with large-scale survey data). Findings from a quantitative data analysis are triangulated
against findings from the qualitative data analysis, to provide a pragmatic approach to theory
evaluation. So they privilege commonalities in accounts, rather than abstracting concepts derived
from complementarities in accounts. In our approach, we used quantitative data analysis to
explore qualitatively-derived patterns and to fill gaps in the qualitative findings. We used both
quantitative and visualization techniques to suggest elements that we explored inductively using
an interpretive, qualitative analysis. In other words, our non-qualitative findings were used
inductively to allow us to interpret and question our qualitative findings – not solely for the
purposes of confirmation. We have attempted to demonstrate how this "complementary
comparison" strategy for theoretical sampling fed back into a constant comparative grounded
theory analysis. Ultimately, the use of non-qualitative analysis techniques helped us to follow the
vapor trails of students' socially-situated strategies for online learning, filling the gaps in our
original theory of action. This is akin to the method employed by Levina and Vaast in their
investigation of how differences in country and organizational context affected collaboration
between offshore project team members. After each interview, they produced analytical field
notes focused on what had been learned. These notes yielded emergent conceptual themes and
propositions, which were then explored by adding new interview questions for subsequent
informants chosen to confirm or challenge the perspectives collected so far (Levina and
Vaast, 2008).
CONCLUSIONS
We prefaced our analysis by arguing that there is little theory to explain student strategies for
online learning. Most studies in this area focus on learning environment design or experimental
analyses of course-scaffolding frameworks. These studies are based on assumptions about
students' learning experience and strategies that are not supported by a systematic analysis of
empirical evidence. Our study examined the assumption that online learning is social-
constructivist in nature and found the reality to be much more complex. Our substantive theory
of socially situated, constructivist learning demonstrates how learning occurs in the space
between the individual and the community, not simply through the iterative cycles of
externalization-objectification-internalization described by Berger and Luckman (1966).
We detected three levels of engagement with the community: participation, involvement,
and social engagement. We can view students' learning strategies through the lens of an
emerging community of inquiry, where some students become thought-leaders, forming a core
group of influence-makers who establish a set of intellectual and socio-cultural norms for the
community. Other students engage in legitimate peripheral participation in the community
(Lave and Wenger, 1991), some of which is made explicit through interactions with other
community members, some of which remains invisible and can only be detected through
following the vapor trails of participation. Our theory complicates the distinction between
involvement and social engagement, as we must define social engagement as interaction with the
ideas of others, rather than as direct interaction with peers in active debate. Both the active and passive
forms of social engagement have been demonstrated to result in constructivist learning
outcomes.
These findings contribute an innovative, substantive theory of action for online learning
communities. We could not have generated this theory without grounded theory techniques, nor
could we have arrived at these insights without incorporating mixed methods into the process of
grounded theory generation. Latour (2005) observes that the process of
relating to one group or another is "an on-going process made up of uncertain, fragile,
controversial and ever-shifting ties" (Latour, 2005, p. 28). Our study of online student
strategies has been performed across multiple groups with multiple course cultures. We end by
reflecting that this observation is as true for us, in our attempts to relate to the processes of
various learning communities, as it is for the student members of those communities.
REFERENCES
ALLY, M (2008) Foundations of educational theory for online learning. In: Theory and Practice of Online
Learning (2nd Edition). (ANDERSON, T and ELLOUMI, F, Eds.) Athabasca University, Edmonton,
Canada, pp 15-44 http://www.aupress.ca/index.php/books/120146, accessed Jan 2011.
ANDERSON, T (2008) Towards a Theory of Online Learning. In: Theory and Practice of Online Learning
(2nd Edition). (ANDERSON, T and ELLOUMI, F, Eds.) Athabasca University, Edmonton, Canada, 2008,
pp 45-74 http://www.aupress.ca/index.php/books/120146, accessed Jan 2011.
BABBIE, E (2009) The Practice of Social Research. Wadsworth Publishing, Belmont CA
BANDURA, A (1977) Social Learning Theory. Prentice Hall, Englewood Cliffs, NJ.
BARKI, H and HARTWICK, J (1989) Rethinking The Concept of User Involvement. MIS Quarterly (13:1)
pp 52-63.
BERGER, PL and LUCKMAN, T (1966) The Social Construction Of Reality: A Treatise In The Sociology of
Knowledge. Doubleday & Company Inc., Garden City N.Y.
BRANSFORD, JD, BROWN, AL and COCKING, RR [Eds.] (2000) How People Learn: Brain, Mind,
Experience, and School. National Academy Press, Washington, DC.
BRYANT, A and CHARMAZ, K (2007) Grounded Theory in Historical Perspective: An Epistemological
Account. In: The Sage Handbook of Grounded Theory. (BRYANT, A and CHARMAZ, K, Eds.) Sage,
Thousand Oaks, CA, pp 32-57.
CHARMAZ, K (2000) Grounded Theory: Objectivist and Constructivist Methods. In: Handbook of
Qualitative Research. (DENZIN, NK and LINCOLN, YS, Eds.) Sage, Thousand Oaks, CA, pp 509-535.
COBB, P (1994) Where is the mind? Constructivist and Sociocultural Perspectives on Mathematical
Development. Educational Researcher (23:7) pp 13-20.
COX, R, MCKENDREE, J, TOBIN, R, LEE, J and MAYES, JT (1999) Vicarious learning from dialogue and
discourse: A controlled comparison. Instructional Science (27) pp 431-458.
DEWEY, J (1916) Democracy and education. Macmillan.
DEWEY, J (1933) How we think. D.C. Heath & Co., Boston MA.
DEY, I (1999) Grounding Grounded Theory. Academic Press, San Diego CA.
DOHN, NB (2008) Web 2.0: Inherent tensions and evident challenges for education. International Journal
of Computer-Supported Collaborative Learning (4:3) pp 343-363.
GABELNICK, F, MACGREGOR, J, MATTHEWS, RS and SMITH, BL (1990) Learning communities: Creating
connections among students, faculty, and disciplines. Jossey-Bass, San Francisco.
Gasson & Waters (2013) European Journal of Information Systems 22, 95–118.
PREECE, J (2000) Introduction. In: Online Communities. John Wiley, pp 5-19.
RESNICK, P (2002) Beyond Bowling Together: SocioTechnical Capital. In: HCI in the New Millennium.
(CARROLL, JM, Ed.) Addison-Wesley, pp 247-272.
SCARDAMALIA, M and BEREITER, C (1994) Computer support for knowledge-building communities. The
Journal of the Learning Sciences (3:3) pp 265-283.
STOYANOVA, N and KOMMERS, P (2002) Concept mapping as a medium of shared cognition in computer-
supported collaborative problem solving. Journal of Interactive Learning Research (23) p 111.
STRAUSS, AL and CORBIN, J (1998) Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory (2nd Edition). Sage, Newbury Park CA.
TAN, MTK and HALL, W (2007) Beyond Theoretical and Methodological Pluralism in Interpretive IS
Research: The Example of Symbolic Interactionist Ethnography. Communications of AIS (19) pp
589-610.
URQUHART, C (2002) Regrounding Grounded Theory? - Or Reinforcing Old Prejudices? A Brief Reply
To Bryant. The Journal of Information Technology Theory and Application (4:3) pp 43-54.
VYGOTSKY, LS (1978) Mind in Society. Harvard University Press, Cambridge, MA.
WATERS, J (2008) Social Network Behavior, Thought-Leaders and Knowledge Building In An Online
Learning Community. Hawaii Intl. Conference on System Sciences (HICSS-41), Knowledge
Management Track, IEEE Digital Library, Hawaii, USA.
WATERS, J (2009) Engagement, role-behaviors and thought-leaders: An analysis of student behavior in
asynchronous online learning environments. Ph.D. Thesis, College of Information Science &
Technology, Drexel University, Philadelphia, PA, USA.
WATERS, J and GASSON, S (2005) Strategies Employed By Participants In Virtual Learning Communities.
Hawaii Intl. Conference on System Sciences (HICSS-38), Collaboration Systems and Technology Track,
IEEE Computer Society, Hawaii, January 2005, p 3b.
WATERS, J and GASSON, S (2006) Social Engagement in an Online Community of Inquiry. 27th
International Conference on Information Systems (ICIS), Milwaukee WI.
WATERS, J and GASSON, S (2007) Distributed Knowledge Construction In An Online Community Of
Inquiry. Hawaii Intl. Conference on System Sciences (HICSS-40), Jan. 2007. Knowledge Management
Track., IEEE Software Society, Manua, Hawaii, January 2007.
WEGERIF, R (1998) The social dimension of asynchronous learning networks. Journal of Asynchronous
Learning Networks (2:1).
ZIMMERMAN, BJ (1989) Self-regulated learning and academic achievement: An overview. Journal of
Educational Psychology (81:3) pp 329-339.