
Doing Grounded Theory

Notes for the aspiring qualitative analyst

University of Cape Town: Division of Geomatics

Compiled by Simon Hull


2013/12/13


Contents
1 Introduction: the Grounded Theory Approach
2 What is theory?
3 Doing Grounded Theory
  3.1 Finding a research topic
  3.2 Data collection
    3.2.1 Excellent research skills
    3.2.2 Excellent participants
    3.2.3 Effective, targeted sampling
    3.2.4 Saturation
  3.3 Coding, categorizing, and analysis
    3.3.1 Open coding
    3.3.2 Categorising
    3.3.3 Developing the categories
  3.4 Enhancing theoretical sensitivity
    3.4.1 Using questioning
    3.4.2 In-depth analysis
    3.4.3 Using comparisons
    3.4.4 Waving the red flag
  3.5 Memos and diagrams
  3.6 Axial coding
    3.6.1 Theoretical coding
    3.6.2 The coding paradigm
  3.7 Selective coding and sorting
    3.7.1 Finding the story line
    3.7.2 Relating categories and forming propositions
    3.7.3 Sorting memos and diagrams
    3.7.4 Validating propositions
    3.7.5 Filling in categories
  3.8 Theoretical sampling
  3.9 Theory-building
  3.10 Communicating theory
4 Concluding remarks
5 References


1 Introduction: the Grounded Theory Approach

“A grounded theory is … inductively derived from the study of the phenomenon it represents. … [It]
is discovered, developed, and provisionally verified through systematic data collection and analysis
of data pertaining to that phenomenon. … One does not begin with a theory, then prove it. Rather,
one begins with an area of study and what is relevant to that area is allowed to emerge.” (Strauss
& Corbin, 1990, p. 23)

Qualitative research yields findings that are not based on statistics or quantities; those are usually
associated with quantitative research (Ibid.). Yet the qualitative research method employed in the
grounded theory approach uses the same scientific logic as the quantitative approach. It is designed
such that if the procedures are correctly followed then the criteria of doing good science are met
(Corbin & Strauss, 1990; Strauss & Corbin, 1990). These are “significance, theory-observation
compatibility, generalizability, reproducibility, precision, rigor [sic], and verification” (Strauss & Corbin,
1990, p. 31).

The Grounded Theory Approach differs from the quantitative method with respect to its purpose,
approach, and the role of theory. It is a systematic approach with the purpose of developing (not
testing) an inductively (not deductively) derived theory that is focused on a concrete phenomenon. The
approach is designed to ensure analytic precision and rigour while allowing for creativity (Strauss &
Corbin, 1990).

“The grounded theory approach is a qualitative research method that uses a systematic set of
procedures to develop an inductively derived grounded theory about a phenomenon. … The
purpose of [this approach] is … to build theory that is faithful to and illuminates the area under
study.” (Strauss & Corbin, 1990, p. 24)

One of the dangers of the Grounded Theory Approach (GTA) is that the researcher will end up with a
comprehensive description of a phenomenon, but will not have generated a well-constructed theory. A
well-constructed grounded theory meets the following four criteria (Strauss & Corbin, 1990):

1. It fits with the area of study, i.e. there is a correspondence between reality and theory;
2. It is understandable and makes sense to those involved in the area of study, both practitioners
and academics;
3. It is abstract (generalizable) enough to have relevance beyond mere description of individual
cases; and
4. It is able to provide control with regard to goal-oriented action toward the phenomenon under
study.

To move from description to theory the researcher needs to be aware of two main points (Ibid.). Firstly,
theory uses concepts. This means that similar data are grouped together and given common conceptual
labels. This is defined as coding, and during the process of coding, interpretation of the data begins. The
researcher needs to be creative in asking pertinent questions of the data and in making comparisons
that yield new insights from the data. Secondly, the concepts are hierarchically related or organised
according to themes. This is defined as selective coding or sorting. The relationships between concepts
are described using statements of relationships from which theory is induced.

Effective coding and sorting require that the researcher employs theoretical sensitivity. This is a
personal quality of the researcher that “indicates an awareness of the subtleties of meaning of data”
(Strauss & Corbin, 1990, p. 41). It refers to the researcher’s attributes of having insight into the data,
ability to give meaning to the data, capacity to understand the data, and capability to separate what is
important from what is not. It comes from a number of different sources (Ibid.): literature, professional
and personal experience, and the analytic process itself. The first three refer to the background of the
researcher, i.e. what s/he brings to the study. The last one, the analytic process, comes from the
researcher’s interaction with the data. Insights and understanding about the phenomenon under
investigation emerge as the researcher collects data, asks questions about the data, makes comparisons,
formulates hypotheses, and develops theoretical frameworks about concepts and their relationships.

Classic grounded theory “is simply a set of integrated conceptual hypotheses systematically generated
to produce an inductive theory about a substantive area” (Glaser & Holton, 2007, p. 48). It is not a
description of the facts; it is a highly structured, straightforward, flexible methodology using
procedures of data collection and analysis that are explicit. “The pacing of these procedures is, at once,
simultaneous, sequential, subsequent, scheduled and serendipitous, forming an integrated
methodological ‘whole’ that enables the emergence of conceptual theory” (Ibid.). This theory is based
on a set of plausible, grounded hypotheses organised around and integrated with a core category.
Following the GTA based on the constant comparative method will yield substantive or formal theory (see Figure 1 and Section 2) that fits the data, works, is relevant, and is modifiable (Glaser
& Holton, 2007).

Generating theory in the GTA is an emergent process that involves continuously moving between the
integrated processes of collecting data, coding the data, and conceptually analysing the data with
memos (Glaser & Holton, 2007). This process is summarised in ten non-sequential categories (Patzelt, 2013), around which the rest of these notes is structured – see Table 1 below and Figure 4.
But first the question of what a theory is will be addressed.

Table 1 The process of doing grounded theory

# Heading
1 Finding a research topic
2 Data collection
3 Coding, categorizing, and analysis
4 Enhancing theoretical sensitivity
5 Memo-writing
6 Axial coding
7 Selective coding and sorting
8 Theoretical sampling
9 Theory-building
10 Communicating theory


2 What is theory?

A theory is a tested and testable concept used to explain an occurrence, usually created after observation and testing but sometimes suggested prior to research. Theories are organised bodies of concepts and principles designed to explain a phenomenon rationally and clearly (Leedy & Ormrod, 2010; McMahon, 2013). Their structure is typically general or summative, such that knowledge is
organised through the proposal of a general relationship between events (Robson, 1994). The process
of discovery through empirical observations includes “a large component of searching, speculating, and
discovering”, which is followed by verification and explanation through “the development of concepts
and propositions for interrelating and explaining such observations” (Pelto & Pelto, 1978, p. 22): the
development of theory. Theories help us to describe and understand what has happened, and are useful
in predicting what might happen under similar or different conditions (McAuley et al., 2007).

Theories group phenomena in terms of their perceived similarities and differences in order to help us
make sense of the world. At the heart of ‘making sense’ is simplification, or abstraction, of our
observations into concepts or categories that help us to understand a phenomenon. Theories link these
abstractions together in order to explain something (McAuley et al., 2007). They represent causal
relationships in different contexts and define, classify, or categorise different aspects of the world. They
propose reasons for actions and identify situations in which they will or will not operate, hence defining
the boundaries of interactions (Ibid.).

In order for a theory to be considered ‘scientific’ it must meet four requirements (Lee, 1989):

1. It must be falsifiable. This stems from the postpositivist thinking that it is not possible to truly
know whether something is true or not. “No scientific explanation … may ever be conclusively proven
true” (Lee, 1989, p. 39). Most scientific propositions cannot be directly verified because they cannot be
directly observed. Thus there is the “ever-present possibility for contradictory evidence to surface in a
subsequent test” (Ibid. p. 36). A falsifiable theory is thus considered to be a scientific theory. It is only by
surviving repeated attempts at falsification that a theoretical model gains credibility (Barry & Roux,
2012).
2. A scientific theory displays logical consistency: the deductions giving rise to the theory must not
be contradictory but must show consistent support for each other and for the theory being developed.
3. Regarding competing theories, a truly scientific theory must be at least as explanatory, or
predictive, as its rival theories.
4. Lastly, while showing itself to be falsifiable, a scientific theory must survive any attempts made
to prove it false.

Barry & Roux (2012) have developed the taxonomy of theory illustrated in Figure 1. They identify a
hierarchy of concepts upon which theories are built and within which theories find their place. Of
particular importance to the GTA are the ideas contained towards the middle of Figure 1: construct /
concept, proposition, hypothesis / research question, substantive and formal theory:

• Constructs are abstractions of concepts that are defined in association with relationships to
other constructs and observable entities in the material world. “Clear definitions and
descriptions of constructs and the relationships between them form the essence of good theory”
(Barry & Roux, 2012, p. 306). This describes the GTA (Allan, 2003).
• A proposition is a formal statement of a concept, and a hypothesis is a testable proposition.


• Building on what has been said before: theories are based on a set of related hypotheses and
condition statements.
o Substantive theory comprises a set of hypotheses that provide an explanation for a
particular phenomenon or area of study. Most grounded theories are developed at this
level (Glaser, 2007).
o Formal theories are generalised from substantive theories that have been validated in a
range of diverse situations (Barry & Roux, 2012). A formal grounded theory is a
conceptual extension of a substantive grounded theory’s core category through constant
comparison and theoretical sampling (Glaser, 2007).

Figure 1 A theory taxonomy (Barry & Roux, 2012, p. 305)


3 Doing Grounded Theory

3.1 Finding a research topic

A workable research topic needs to be based on an answerable question / researchable problem that
has relevance and can be sufficiently narrowed down to make it feasible (Corbin & Strauss, 2008). “All
research commences with the identification and clear formulation of a research problem … formulated
… in the form either of a research question or a research hypothesis” (Babbie & Mouton, 2001, p. 73).
There are four main sources of researchable problems (Corbin & Strauss, 2008):

1. suggestions from others working in a particular field,


2. literature (both technical and non-technical),
3. personal and professional experience, and
4. the research itself.

In the last case, the researcher may set out with only a vague notion about a particular field or
phenomenon of interest that becomes focused as data is gathered. The topic itself is not yet fully
developed, i.e. further exploration of the topic is necessary to increase understanding (Ibid.).

Having identified a topic the researcher then needs to formulate one or more research questions to
guide the inquiry. The research question is a statement that identifies the topic and informs the reader
that there is something about the topic that is of interest to the researcher. This determines the
research methods to be used and the boundaries of the study (Corbin & Strauss, 2008). The question
also needs to be framed in order to allow the researcher enough flexibility and freedom to explore the
topic in depth, while not being so broad as to give rise to unlimited possibilities. It is a good idea to start
with a broad research question and progressively narrow it down during the research process (Ibid.).

“The purpose of the question is to lead the researcher into the data where the issues and problems
important to the persons, organizations, groups, and communities under investigation can be
explored.” (Corbin & Strauss, 2008)

Babbie & Mouton (2001) distinguish between empirical and non-empirical research questions. The
former asks questions about real-life problems experienced in the world of everyday life. Because
grounded theories are derived from data collected by respondents dealing with their everyday issues,
research problems addressed using the GTA are usually guided by empirical questions. Non-empirical
questions are questions about the meaning of scientific concepts, scholarship trends or competing
theories. Formal grounded theories might ask non-empirical questions.

3.2 Data collection

Typical data sources when using a GTA are observation, interviews, analysis of documents, and possibly
even quantitative data (Charmaz, 2006; Strauss & Corbin, 1990), although grounded theory can use any
data (Glaser & Holton, 2007). Researchers can use one or more different sources alone or in
combination. Combining data sources allows for triangulation and facilitates verification of results
(Corbin & Strauss, 2008). The rule of thumb is to use whichever method of data collection enhances the
development of emergent concepts (Charmaz, 2006). Data collection should be guided by questions that
sensitise the researcher towards the emergent ideas (Ibid.).


Deciding on the sources of data is an important consideration that needs to be made at the outset of the
research project because the data sources largely determine what theory emerges. Initial decisions
about sampling may change as the project progresses. What happens once data collection is underway
is largely influenced by how well these initial decisions fit the reality of the data. Strauss & Corbin
(1990) give the following general considerations for sampling in GTA:

• The site or group to study should be chosen based on the main research question.
• The types of data that will be used should be chosen based on whichever ones best capture the
information sought.
• For longitudinal studies a decision needs to be made about whether to follow an individual
throughout the process, or certain individuals at varying points in the process.

According to Morse (2007) sampling techniques in grounded theory must cover both the scope and the
trajectory over time of the phenomenon of interest. The following three principles of qualitative
sampling are pertinent to researchers using a GTA (Ibid.): having excellent research skills is essential,
using excellent participants is imperative, and sampling techniques must be targeted and effective.

3.2.1 Excellent research skills


The researcher’s command of a particular technique has a direct bearing on the amount of data needed in a study.
Experience enables the researcher to gather good data with smaller samples. In an interview setting,
“the more targeted the content of the interviews, the better the data, the fewer interviews will be
necessary, and the lower the number of participants recruited into the study” (Morse, 2007, p. 230).

Gathering more data will not make bad data good. Becoming skilled at interviewing or observing
requires practice, self-awareness, and a careful critique of every interview and field note (Ibid.).
Maintaining confidentiality during interviews or observations and ensuring respondents’ anonymity
when reporting are essential to gathering data effectively (Corbin & Strauss, 2008).

Obtaining approval from ethics committees can be problematic for the grounded theorist because the
full range of questions and respondents is normally not known by the researcher up-front. The
researcher needs to employ theoretical sampling while being guided by the data (see Section 3.8).
Hence ethics approval applications can neither include all of the questions to be asked nor who will be
asked. Corbin & Strauss (2008) advise that proposals should include some questions to be asked of the
first few respondents based on the researcher’s experience in the field or on extant literature. The
application should include a few sentences indicating that “if a participant brings up another topic that
proves to be important to the investigation, the researcher will follow through on that topic … Adhering
rigidly to initial questions throughout a study hinders discovery because it limits the amount and type
of data that can be gathered” (Corbin & Strauss, 2008, p. 152).

a) Interviews
Data collection using the GTA is usually, but not exclusively, by interview (Allan, 2003). An interview is
a directed conversation – directed by someone (the researcher) who is still seeking the proper direction
during the conversation – with the goal of exploring a particular topic or experience in-depth, relying on
a respondent who has had relevant experience in this area (Charmaz, 2006). Unstructured interviews
tend to yield the most dense data, though researchers are cautioned to be prepared with some
questions to guide the respondent if necessary (Corbin & Strauss, 2008). Knowing when and how to use
a recording device can also be an important tool in getting respondents to relax and open up in an
interview: sometimes the most important reflections come at the end of the interview when the
recorder has been turned off (Ibid.). Semi-structured interviews make use of open-ended questions.
They are useful for standardising the results of several interviews while still giving respondents
opportunity to speak freely and allowing the interviewer to obtain clarity on particular points (Rule &
John, 2011).

For effective interviewing that generates deep, meaningful data, interviewers should use only a few
questions and keep them broad and open-ended (Charmaz, 2006). Questions should be framed to
encourage the respondent to explain, i.e. avoiding questions that elicit a yes/no response, but instead
asking questions concerning how or why something has happened. The interviewer should pursue
ideas or issues that emerge during the interview, following up on important topics as they emerge.
Pauses and other non-verbal cues should be observed and explored for their possible significance. The
interviewer should always express interest and desire to learn more, exploring the topic without
interrogating. Restating the important points is a useful way of checking for accurate understanding. At
all times the researcher should show respect for the respondent’s feelings and dignity.

b) Observation
Observations are a very useful form of data collection. While interviews provide a structured format for
data collection, respondents may say one thing in an interview but do something different in practice.
Only observation reveals this discrepancy (Corbin & Strauss, 2008). By observing participants engaging
with processes or phenomena the researcher can see what is going on and learn things that would
never be picked up in an interview or through reading documentation. But observation as a research
method on its own has drawbacks (Ibid.). The researcher might assign meaning to an action or
interaction that is different from the meaning the participants assign to it themselves. Nonverbal
behaviours are easily misinterpreted, especially cross-culturally. Hence observations should always be
backed up with interview and/or documentary evidence.

Charmaz (2006) gives the following suggestions for doing effective observations:

• Avoid merely describing what is happening; rather, give priority to the conceptual study of the
phenomenon of interest.
• Always go back and forth between observing and interpreting.
• Coding begins when writing field-notes: render observed actions as concepts.
• Make sure that field notes:
o Record both individual and collective actions;
o Contain full, detailed notes;
o Emphasize significant processes;
o Address what the participants describe as interesting and/or problematic;
o Attend to the language used by participants;
o Place the participants and their actions in context;
o Become progressively more focused on the key analytic ideas.

c) Documents
There are two main types of textual evidence (Charmaz, 2006): elicited texts produced in response to a
researcher’s request, e.g. open-ended questions in questionnaires; and extant texts that are already in
existence, e.g. novels, letters, diaries, journals, newspapers, etc. Extant texts are produced for purposes
not directly allied to the researcher’s interests and are shaped instead by their context and the
processes leading to their creation. They hence afford the researcher a valuable additional source of
information that could shed new light onto the phenomenon of interest. Charmaz (2006) advocates
situating all documentary evidence in its particular context. This will aid the researcher in better
understanding the message contained in the text.

Regarding extant texts, Corbin & Strauss (2008) differentiate between technical and non-technical
literature. They describe the following ways in which technical literature may be used as sources of
data and as an aid during the research process:

• As a source for comparison with data;
• As a means of enhancing the researcher’s sensitivity to subtle nuances in the data;
• As a source of questions for initial observations and interviews;
• As a stimulus for questions during analysis;
• As a guide for theoretical sampling;
• As confirmation of findings, or to show where the literature is deficient.

With respect to non-technical literature (e.g. letters, videotapes, memoirs, etc.), it can be used for all of
the purposes listed above as well as in forming a source of primary data and as a supplement to
interviews and observations (Corbin & Strauss, 2008).

Glaser & Holton (2007) caution that the use of literature in a GTA may influence the researcher into
applying pre-conceived codes onto the data, rather than allowing the codes to emerge from the data. By
engaging in a comprehensive literature review prior to data collection, the researcher may also run the
risk of lacking sufficient theoretical sensitivity to recognise the emergence of a completely new core
category that is not backed up by the literature. Instead, the literature should be treated as simply
another source of data to be integrated into the constant comparative process.

3.2.2 Excellent participants


Although there are many factors affecting the quality of the data analysis, one of the most important is
the quality of the materials being analysed (Corbin & Strauss, 2008). Getting good data is the first step
towards generating good grounded theory. Hence mastering the skills associated with data collection is
very important, but equally important is selecting excellent informants (Morse, 2007). An excellent
informant is one who has first- or second-hand experience of the phenomenon under investigation, is willing and able to participate, and is reflective and able to share their experiences
articulately (Ibid.). Choosing excellent participants means that the researcher needs to employ
purposeful sampling (see the next section).

3.2.3 Effective, targeted sampling


“Excessive data is an impediment to analysis” (Morse, 2007, p. 233). To ensure effective data analysis,
grounded theorists must sample their data sources. This introduces bias into the process, because the
sources have been deliberately sifted and selected (Ibid.). But Morse argues that this bias is an essential
component of qualitative research and does not affect the rigour of the research. By seeking the best, or
worst, examples of the study, the characteristics of the phenomenon become more obvious. Once these
have been clarified, the researcher can investigate less optimal examples.

This inherent bias in qualitative research means that the use of random sampling techniques, common
in quantitative studies, may impede and invalidate the inquiry (Morse, 2007). “Sampling in qualitative
inquiry must be purposeful, with participants invited into the study according to their knowledge about
the topic being researched, or type of information that is needed to complete or to complement our
understanding.” (Morse, 2007, p. 234) A randomly generated sample ensures a normal distribution of
data, meaning that we will gather insufficient data about the extremes of the sample and excessive data
about the norm. Qualitative data sets require a rectangular distribution: equal amounts of data along
the entire spectrum of possible responses. This allows for saturation of categories, ensuring replication,
validation and reliability.

“Excellent data are obtained through careful sampling.” (Morse, 2007, p. 235) The following sampling
methods are proposed (Ibid.):

1. Convenience sampling: This method may be used at the beginning of a project to identify the
scope, components and trajectory of the overall process. Participants are selected based on their
accessibility. Thereafter the researcher may request the respondent to suggest further possible
participants; this is called snowball sampling.
2. Purposeful sampling: Purposeful samples are selected based on the results of the initial
interview process. Excellent informants are chosen once the scope and stages of the process under
study are identified. “By comparing and contrasting between stages, researchers can recognize
differences, and then deliberately seek out additional participants who are in the midst of a particular
stage along the trajectory that has been identified from the scoping with the convenience sample.”
(Morse, 2007, p. 239)
3. Theoretical sampling: Participants are selected to add descriptions where the emerging
concepts and theory are lacking. Sampling is directed by the main categories and the researcher’s
understanding of the developing theory. Hence participants who have had particular experiences or can relate significant descriptions are deliberately sought. Negative cases might be uncovered. These
are instances in which participants have responded in an unanticipated way that is in contrast to the
majority of respondents. These are integrated into the emerging theory (Morse, 2007). See Section 3.8
for more details.
4. Theoretical group interviews: The emerging model is expanded on and verified through
theoretical group interviews. Participants are recalled in small groups and are shown the initial findings
and are asked to comment on these findings. Their insights are used to further modify and saturate the
emerging model.

3.2.4 Saturation
Sampling ceases once saturation has occurred, but how does the researcher know when this has
happened? Saturation is not the end point of the study; rather, it indicates that the researcher has
reached certainty about a particular category and can move on to the next one (Morse, 2007).
“Categories are saturated when gathering fresh data no longer sparks new theoretical insights, nor
reveals new properties of these core theoretical categories” (Charmaz, 2006, p. 113). It denotes the
development of categories in terms of their properties and dimensions, including variation, and the
delineation of relationships between categories (Corbin & Strauss, 2008). This is not the same as
finding repetition in the data. Theoretical saturation should be demonstrated by having cases for the
entire range of dimensions of all categories’ properties (Charmaz, 2006) – see Section 3.3.3.
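
As a purely illustrative bookkeeping aid (the judgement of saturation itself remains an analytic one), the short Python sketch below records which properties of a single category each interview contributes, and shows when fresh data stops adding new properties. The category, property, and interview names are all hypothetical.

from typing import Dict, Set

# Illustrative bookkeeping only: properties of one hypothetical category,
# "developing the cadastre", noted in each successive interview.
properties_per_interview: Dict[str, Set[str]] = {
    "interview_01": {"integration", "examination period"},
    "interview_02": {"examination period", "digitisation"},
    "interview_03": {"integration", "digitisation"},  # adds nothing new
}

def new_properties(data: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Return the properties each interview adds that earlier interviews did not."""
    seen: Set[str] = set()
    added: Dict[str, Set[str]] = {}
    for interview, props in data.items():
        added[interview] = props - seen
        seen |= props
    return added

for interview, fresh in new_properties(properties_per_interview).items():
    print(interview, "adds", sorted(fresh) if fresh else "nothing new - approaching saturation?")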

Charmaz (2006) gives the following suggestions of indicators for when data collection is yielding
quality data:

1. Sufficient background data is collected about people, processes and settings to understand and
portray the full range of contexts of the study;
2. Detailed descriptions are gained about the ranges of participants’ views and actions;
3. The data must reveal what lies beneath the surface;
4. For process questions, the data must be sufficient to reveal changes over time;


5. Data must allow multiple views of the participant’s ranges of actions;


6. There must be enough data to develop analytic categories;
7. Data must allow for desirable comparisons.

3.3 Coding, categorizing, and analysis

Coding refers to the process of breaking down, conceptualising, and re-assembling data (Strauss &
Corbin, 1990). It is a form of content analysis that is used to find and conceptualise the underlying
issues amongst the noise of the data (Allan, 2003). Codes relating to a common theme are grouped
together to form concepts, and concepts are themselves grouped under higher order commonalities to
form categories. By linking categories and investigating the connections between concepts, the theory
emerges (Ibid.).
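
For analysts who keep their coding work in a simple script or spreadsheet, the minimal Python sketch below illustrates the code → concept → category hierarchy just described. All of the labels are hypothetical examples, not codes from any actual study.

# Minimal sketch of the code -> concept -> category hierarchy.
# All labels are hypothetical examples.
codes_to_concepts = {
    "waiting for approval letters": "navigating bureaucracy",
    "seeking clarification from officials": "navigating bureaucracy",
    "re-capturing lost survey records": "managing records",
    "comparing paper and digital plans": "managing records",
}

concepts_to_categories = {
    "navigating bureaucracy": "institutional friction",
    "managing records": "developing the cadastre",
}

def category_for(code: str) -> str:
    """Trace a single open code up to its (provisional) category."""
    return concepts_to_categories[codes_to_concepts[code]]

for code in codes_to_concepts:
    print(code, "->", codes_to_concepts[code], "->", category_for(code))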

Coding is the central process by which grounded theories are derived: coding enables the analyst to
break through his/her inherent biases brought to (or derived during) the research process. Codes
provide the grounding, build the density, and help to develop sensitivity and integration that are
required to generate rich, explanatory theory that closely approximates reality. Codes are not labels
under which similar instances of the same phenomenon are counted. They are the basic elements of an
emerging theory derived from an interpretation of data; they are concepts that depict a part of reality,
arise from the data, and hence are grounded in reality (Patzelt, 2013). “Codes capture patterns and
themes and cluster them under a ‘title’ that evokes a constellation of impressions and analyses for the
researcher” (Lempert, 2007, p. 253).

Holton (2007), drawing on Glaser (1978), identifies two main types of coding: substantive coding and
theoretical coding. Substantive coding begins with open coding as codes related to the empirical
substance of the research domain are developed ad hoc. Through theoretical sampling and selective
coding, theoretical saturation is achieved: no new properties or dimensions emerge from continued
coding and comparison. The researcher then moves to theoretical coding. Theoretical codes establish
conceptual / hypothetical relationships between substantive codes and hence help to form theoretical
models based on the theoretical concepts that the researcher brings into the data collection and
analysis. They give the researcher integrative scope, broad pictures of the data, and a new perspective
for analysis (Glaser & Holton, 2007; Kelle, 2007a). “Theoretical codes are used … to combine
substantive codes to form a theoretical model about the domain under scrutiny” (Kelle, 2007a) and to
“enable the conceptual integration of the core and related concepts to produce hypotheses that account
for relationships between the concepts thereby explaining the latent pattern of social behaviour that
forms the basis of emergent theory” (Holton, 2007, p. 265). Coding in grounded theory goes hand in
hand with conceptual memo-ing (Ibid.) – see Section 3.5.

Strauss & Corbin (1990) identify three different types of substantive coding: open, axial and selective
coding, although the distinctions between different types are artificial. The different types do not take
place in stages, but the researcher instead moves between them, often without realising it. Open
coding is the process of breaking down, examining, comparing, conceptualising, and categorising the data; it is dealt with in the subsections below. Axial coding (Section 3.6) involves a set of procedures whereby
data are put back together in new ways after open coding. Identifying connections between categories
is crucial. Selective coding (Section 3.7) is the process of selecting a core category and systematically
relating this to other categories in need of further refinement and development.


3.3.1 Open coding


During open coding, data is interrogated by line, by sentence or by paragraph, or as a document in its
entirety, using comparisons and asking the following questions (Holton, 2007; Strauss & Corbin, 1990):

• What is this observation (event), sentence (idea), or paragraph (process) a study of?
• What category is indicated here?
• What is actually happening in the data?
• What is the main concern?
• What accounts for the continual resolving of the concern?

The answers to these questions direct the way in which data is named and categorised. These
conceptual labels represent the incident, idea or process and form the basis of all subsequent steps of
theory-building. The generated codes are not summaries of observations using short phrases; rather
they are a conceptualisation of the data using action words (Strauss & Corbin, 1990).

Line by line coding, while tedious, is important for the following reasons (Holton, 2007):

• it forces the researcher to verify and saturate categories;


• it minimises the chances of missing important categories;
• it ensures relevance
o of the generated codes because codes are generated that fit the area under study; and
o of the emergent theory by enabling researchers to see what direction to take in
theoretical sampling before becoming too selective or focused on a particular problem.

The result of line by line coding is a rich, dense theory from which nothing has been left out (Ibid.).

Once several categories have already been generated, it may be more advantageous to code by sentence
or paragraph, looking for the main idea contained therein. Having identified this concept, the researcher
may go back and do a more thorough analysis of that concept (Strauss & Corbin, 1990). One could also
ask of an entire document or interview: what seems to be going on here? Or what makes this document
or interview the same or different from previous ones? The answers to these questions could yield
further opportunities for deeper analysis of, e.g., the specific similarities and differences (Ibid.).
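
Purely as an illustration of line-by-line open coding (the interpretive work itself cannot, of course, be automated), the sketch below attaches provisional, action-oriented conceptual labels to lines of a hypothetical transcript; every label and quotation is invented.

# Illustration only: provisional open codes attached line by line to an
# invented transcript fragment. Codes are conceptualisations, not summaries.
transcript = [
    "We always have to wait months before the examination is finished.",
    "Half the records are still on paper, so we end up re-surveying parcels.",
]

open_codes = {
    0: ["enduring long examination periods"],
    1: ["working across paper and digital records", "duplicating survey effort"],
}

for line_no, text in enumerate(transcript):
    print(f"line {line_no}: {text}")
    for code in open_codes[line_no]:
        print(f"    code: {code}")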

3.3.2 Categorising
Open coding can produce hundreds of codes, leaving the analyst no wiser and hardly closer to theory
formulation than when they started. In order to recognise patterns in the data, similar codes have to be
grouped into categories (Strauss & Corbin, 1990). A category is a code on a more abstract level, and
categorising is directed by asking and answering the following questions with respect to every code
(Ibid.):

• To which class of phenomenon does this code pertain?


• What do the phenomena depicted by these codes have in common?
• What does this phenomenon (and related code) seem to be about?

Note that all coding and categorising is done on a provisional basis. Revisions can be done whenever the
analyst feels it is necessary to do so.

Deciding on the name for a category requires some imagination and sensitivity to the process or
phenomenon being categorised. Most importantly, the chosen name must be something memorable,
something that promotes thoughtful analysis, and something from which the researcher may draw
theoretical inspiration (Strauss & Corbin, 1990). Names may be derived from a common sense
interpretation of the data. They may also arise from previous experience or a review of relevant
literature, but the researcher must be wary of importing meanings from the literature instead of
deriving meanings from the data. Names may also be derived in vivo from concepts used by the
participants.

3.3.3 Developing the categories


Every category has properties and dimensions (Strauss & Corbin, 1990). Properties are the
characteristics or attributes of a category; dimensions represent the possible locations of properties on
some kind of continuum (Ibid.). The properties and dimensions of each category need to be recognised
and developed because they form the basis of relationships between categories and subcategories. Once
this has been done (akin to establishing the domain and range of the category) each instance of a
categorised phenomenon should be located on each dimension of each property. Hence a dimensional
profile is established for each occurrence of a category (see Table 2). Several dimensional profiles can
be grouped together to form a pattern. This will reveal to the researcher where gaps in the data lie, leading to further (theoretical) sampling.

Table 2 Properties and dimensions of categories

Category                  Properties           Dimensional range
Developing the cadastre   Integration          Low – High
                          Examination period   Short – Long
                          Digitisation         All paper – All digital

The purpose of analysing categories in this way is to facilitate theory-building and pattern recognition
(Ibid.). By comparing occurrences of categories across their dimensions, the studied phenomena can be
grouped and patterns can be recognised. By comparing occurrences of categories with respect to their
properties, their relationships can be established. Both comparisons stimulate and lead to the
development of theory.
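
To make the idea of a dimensional profile concrete, the sketch below records where two hypothetical instances of the ‘developing the cadastre’ category from Table 2 sit on each property’s dimensional range, so that their profiles can be compared; the cases and values are invented.

# Illustration only: dimensional profiles for two invented occurrences of
# the category "developing the cadastre" (cf. Table 2).
profiles = {
    "case_A": {"integration": "low", "examination period": "long", "digitisation": "all paper"},
    "case_B": {"integration": "high", "examination period": "short", "digitisation": "all digital"},
}

# Comparing cases property by property highlights patterns and gaps that
# may direct further (theoretical) sampling.
for prop in ["integration", "examination period", "digitisation"]:
    print(prop + ":", {case: values[prop] for case, values in profiles.items()})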

3.4 Enhancing theoretical sensitivity

“To remain truly open to the emergence of theory is among the most challenging issues
confronting those new to grounded theory … [Grounded] theory requires the researcher to enter
the research field with no preconceived problem statement, interview protocols, or extensive
review of literature. Instead, the researcher remains open to exploring a substantive area and
allowing the concerns of those actively engaged therein to guide the emergence of a core issue.”
(Holton, 2007, p. 269)

Researchers often fail to see all that is contained in the data they have gathered because they approach
analysis with their own inherent biases and prejudices. Holton (2007) calls this preconception. It is not
necessarily detrimental to analysis, but researchers need to be aware of and challenge their
assumptions in order to uncover phenomena and propose new theories (Strauss & Corbin, 1990).
Extensive engagement in extant literature prior to data collection and analysis can run the risk of
clouding the researcher’s ability to remain open to the emergence of new core categories (Holton,
2007). Yet ‘emergence’ is a problematic methodological concept: researchers always draw on their
existing theoretical knowledge to help them to make sense of empirically observed phenomena (Kelle,
2007a).

Hence researchers need theoretical sensitivity: “the ability to ‘see’ with analytic depth what is there”
(Strauss & Corbin, 1990, p. 76), especially at the beginning stages of a project. Later on researchers
become theoretically sensitive through working with the data (Ibid.). Theoretical sensitivity requires
that researchers have analytic temperament and competence (Glaser & Holton, 2007; Holton, 2007).
The former allows the researcher to “maintain analytic distance from the data, tolerate regression and
confusion, and facilitate a trust in the power of preconscious processing for conceptual development”
(Holton, 2007, p. 275). Analytic competence refers to the researcher’s ability to “develop theoretical
insights and abstract conceptual ideas from various sources and types of data” (Ibid.). Glaser (1998,
2005) advocates reading widely in other disciplines as a means of enhancing theoretical sensitivity
(Holton, 2007), while Strauss & Corbin (1990) propose several techniques for enhancing theoretical
sensitivity:

• using questioning,
• in-depth analysis of single words, phrases and sentences,
• using comparisons: flip-flop, systematic and far-out, and
• waving the red flag.

These techniques will be explained below, but first let us consider what the intention behind them is
(Ibid.). They are designed to:

1. Steer the researcher out of the confines of technical literature and personal experience.
2. Help researchers avoid standard ways of thinking about phenomena.
3. Stimulate inductive thinking.
4. Focus the researcher’s attention on what the data is saying.
5. Allow for clarification or debunking of assumptions made by respondents.
6. Help researchers attend to what respondents are saying and what they possibly mean.
7. Stop the researcher from missing important concepts.
8. Force the researcher to ask questions and give provisional answers.
9. Allow fruitful, provisional labelling.
10. Allow for exploration and clarification of the possible meanings embedded in the data.
11. Help to discover properties and dimensions of categories in the data.

It is also important to note, at the outset, that any concepts, categories, or hypotheses that emerge from the data as a result of using one of these techniques should be considered provisional, hypothetical possibilities (Ibid.). They function to guide the researcher in a particular direction and must be backed up by actual data or dropped altogether. These techniques should also be used only as aids to analysis, applied not to every piece of data but focused on the first few interviews, observations or
documents (Ibid.). Further development of theoretical sensitivity is addressed in Section 3.6.

3.4.1 Using questioning


The purpose of questioning is to open up the data. Questions lead the researcher to think about
potential categories, including their properties and dimensions, while coding (Strauss & Corbin, 1990).
The basic questions are who-, when-, where-, what-, how-, how much-, and why-type questions (Ibid.).
These questions should be directed toward the conditions, consequences, variations, and processes
related to the phenomenon under study. They should interrogate the temporal (frequency, duration,
rate, timing), spatial (size, space, storage capacity), and technological (equipment, skills) aspects of the
phenomenon.

The answers to these questions might not be found in the data already gathered, so these questions can
also be used to direct the next round of interviewing, observation, or document analysis. The questions
are not meant to reveal hidden information, but rather they help the researcher to become more
sensitive to the broader context in which the data are situated. Thus they highlight gaps in the available
information that direct how further information will be gathered (Ibid.).

3.4.2 In-depth analysis


The purpose of in-depth analysis of words, phrases and sentences is also to open up the data. This is
achieved by making the data a source of further inspiration, stirring sensitivity to the less obvious
meanings in the data, and illuminating the researcher’s own assumptions, forcing him/her to examine
and question them (Strauss & Corbin, 1990). The procedure is as follows (Ibid.):

• Scan through the document or a significant portion thereof;


• Return to any significant word, phrase or sentence;
• List all of the possible meanings of that word, phrase or sentence.

The goal of in-depth analysis is not to find the ‘correct’ meaning of the significant item, but rather to
sensitise the researcher to the possible range of meanings associated with that item and hence to
inspire useful follow-up questions during subsequent data collection. “Unless we validate possible
meanings during interaction with speakers, or train ourselves to ask what meanings the various
analytically salient terms have for our respondents, we limit the potential development of our theory”
(Strauss & Corbin, 1990, pp. 83–84).

3.4.3 Using comparisons


Comparisons are meant to stir the imagination and generate theoretical sensitivity, not answer
questions. They are also used to help break down assumptions and uncover specific dimensions of the
phenomenon of interest. This requires that the researcher draws on personal and professional
knowledge as well as the technical literature (Strauss & Corbin, 1990). There are three types of
comparisons used in the GTA, and each will be described individually below. See Strauss & Corbin
(1990) for examples.

a) The Flip-flop technique


This technique forces the researcher to think analytically, rather than descriptively, about the data and
helps to generate provisional categories and find their properties and dimensions. The central idea is to
look for and think about the opposite to a particular category, and to then make comparisons at the
extremes of one dimension. If opposites cannot be found in reality or the literature, then they may be
fabricated for the sole purpose of comparison. The goal is to gain sensitivity to possibly relevant
dimensions of the studied phenomenon and to hence be stimulated to find relevant categories. The
procedure is as follows:

• Starting with the data, find properties in whose dimensions the compared cases are most
dissimilar.
• Locate the phenomenon in the data.
• Do theoretical sampling (see Section 3.8) to collect data on real instances of the opposite case.


b) Systematic comparison
Systematic comparisons help researchers to break away from their inherent patterns of thinking that
are based on or derived from experience or literature, while still retaining what is important. The
central idea is to compare what is expected with its opposite. By questioning assumptions and following
through by collecting and analysing data that answers the questions, the researcher moves away from
the literature on the topic of interest and opens up possibilities for further exploration. The results
might agree with the literature, but the theoretical explanation will be denser because the analysis has
explored alternatives to the standard way of thinking.

c) Far-out comparison
The aim of far-out comparison is to detect properties and dimensions of the researched phenomenon that would otherwise have escaped detection. This is done by comparing things that are quite different – not opposites, since opposites still share common properties, but completely different things. This
draws the researcher’s attention to features that may have been dismissed previously as being
absolutely irrelevant, yet might actually be quite important.

3.4.4 Waving the red flag


The purpose behind this technique is to help the researcher to see beyond the obvious in the data
(Strauss & Corbin, 1990). Whenever a respondent uses absolute qualifiers such as ‘never’, ‘always’,
‘impossible’, ‘all’, ‘none’, etc. then the researcher should flag these responses for closer inspection. This
inspection again involves asking questions of the data:

• What is meant by these absolutes?


• Why are they used?
• What are the consequences of their use?
• Under what conditions do they apply?
• Are there strategies to overcome these absolutes?

The key to success in this arena is to never take anything for granted but to instead be sensitive to how
reality is constructed in the data.
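
As a simple mechanical aid (and no more than that), responses can be scanned for absolute qualifiers so that the analyst remembers to return to them with the questions listed above. The Python sketch below is a hypothetical example; the response text is invented.

import re

# Flag responses containing absolute qualifiers ("red flags") for closer
# inspection. The example responses are invented.
RED_FLAGS = re.compile(r"\b(never|always|impossible|all|none|everyone)\b", re.IGNORECASE)

responses = [
    "Everyone accepts the new system; there are never any objections.",
    "Some offices still examine plans on paper.",
]

for i, response in enumerate(responses):
    hits = sorted({h.lower() for h in RED_FLAGS.findall(response)})
    if hits:
        print(f"response {i}: follow up on {hits}")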

3.5 Memos and diagrams

“Memos and diagrams are essential aspects of analysis whether the research aim is description or
theory… [They] are more than just repositories of codes. They stimulate and document the analytic
thought processes and provide direction for theoretical sampling.” (Corbin & Strauss, 2008, p. 140)

Memos are “written records of analysis” (Strauss & Corbin, 1990, p. 197) that contain the results of our
analysis (Corbin & Strauss, 2008): a written record of our abstract thinking about data and categories.
Memo writing is essential to the GTA because it is the fundamental process by which researchers
engage with the data, the result of which is a grounded theory (Lempert, 2007). Through the process of
memo writing the researcher analytically interprets the data, discovers emerging patterns, and
develops theories about these patterns. Through memo writing the researcher is rooted in the data
while simultaneously increasing the level of abstraction of his/her ideas (Charmaz, 2006; Lempert,
2007).

Memos can take several forms, and these forms should be kept separate and distinguished from each other, or else information retrieval becomes difficult (Strauss & Corbin, 1990). There are memos for
(Corbin & Strauss, 2008):


• Open data exploration;


• Identifying and developing the properties and dimensions of concepts or categories;
• Making comparisons and asking questions;
• Elaborating the relationships between conditions, actions / interactions and consequences; and
• Developing a storyline.

But it is important for the researcher not to try to fit memos into one of the abovementioned categories,
because in doing so researchers may “lose the generative fluid aspect of memoing [sic]” (Corbin &
Strauss, 2008, p. 118). They should rather simply get into the habit of writing memos, no matter how
messy or abstract initial attempts are (Ibid.).

Memos evolve: they grow in complexity, density and accuracy as the researcher moves from data
collection to theorising (Strauss & Corbin, 1990). They form the foundation for later theory
construction and publication and help the researcher to maintain analytical distance from the data
(Ibid.). Through memos the researcher is able to conceptualise the data in a narrative form (Lempert,
2007). “The starting point of memo writing is the first idea that occurs to the researcher about his/her
data” (Lempert, 2007, p. 251).

Diagrams represent the relationships between analytic concepts visually (Corbin & Strauss, 2008;
Strauss & Corbin, 1990). They provide the researcher with conceptual maps of ideas that visually
represent the relationships between codes and categories. Using diagrams in this way is important for
clarifying ideas and generating order out of codes, categories, ideas, and the emerging grounded theory
(Charmaz, 2006). Drawing diagrams also forces the researcher to think about the data in a minimalist
way that “reduces the data to their essence” (Corbin & Strauss, 2008, p. 125), hence representing
categories and their linkages more precisely and concisely than through memos (Lempert, 2007).
Diagrams are useful for illustrating the emergent processes in a GTA; later the visual representation can
be described in detail using memos (Ibid.).

Writing memos and drawing diagrams are part of the analysis and help to move the analysis forward.
Corbin & Strauss (2008) present the following general notes about memos and diagrams:

1. They vary in content, degree of conceptualisation, and length depending on the phase of the
research, the researcher’s intent, and the materials being coded.
2. It is inadvisable for researchers to write memos on field notes.
3. Researchers are at liberty to develop their own style for doing memos and diagrams: there is no
right or wrong way.
4. In addition to their function of storing information, memos and diagrams force the analyst to
work with concepts rather than with raw data. They also stimulate creativity and imagination
that encourages the formulation of new insights, and reflect the analytic thought process of the
analyst.
5. They serve as a storehouse of analytic ideas that can be sorted, ordered and retrieved, and may
reveal concepts that are in need of further development / refinement.
6. Writing memos is part of analysis.
7. Writing summary memos or drawing integrative diagrams is a good way of synthesizing the
content of several memos / diagrams.

In addition to these general features of memos and diagrams, Corbin & Strauss (2008) provide specific
advice for memo-ing and drawing diagrams:


8. Use the date (and time) at which the memo / diagram was created as a reference to assist with
easy recall of the specific memo / diagram. It is also helpful to include a reference to the
document from which the memo / diagram was derived.
9. Creating a heading for each memo / diagram also facilitates accessibility.
10. Insert short quotes or phrases from the raw data in the memo.

Memos and diagrams should be regularly updated as the analysis progresses and new insights are
derived from the data. Once memos on different codes begin to look the same it may be an indication
that a category is nearing saturation, or it might be time to re-compare similarities and differences
between concepts (Ibid.).
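
Following the practical advice in points 8–10 above, a memo can be captured as a small structured record with a date, a heading, a reference to its source document, and short quotes from the raw data. The Python sketch below is illustrative only; the field names and example content are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative memo record (cf. points 8-10 above). All content is hypothetical.
@dataclass
class Memo:
    heading: str                    # a heading aids retrieval (point 9)
    source_document: str            # reference to the source data (point 8)
    body: str                       # the analytic content of the memo
    quotes: List[str] = field(default_factory=list)           # short raw-data quotes (point 10)
    created: datetime = field(default_factory=datetime.now)   # date and time (point 8)

memo = Memo(
    heading="Examination period as a dimension of 'developing the cadastre'",
    source_document="interview_03_transcript.txt",
    body="Respondents link long examination periods to paper records; "
         "compare with digital-only offices in the next round of sampling.",
    quotes=["We always have to wait months before the examination is finished."],
)

print(memo.created.isoformat(timespec="minutes"), "-", memo.heading)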

In conclusion, Lempert (2007) offers the following four fundamental principles of memo-writing …

1. The purpose of memo-writing is discovery and theory development, not application or


description.
2. Analysis of the extensive amounts of raw data acquired is done through memo-ing and drawing
diagrams.
3. Writing memos and drawing diagrams occurs throughout the research process, enabling the
shaping of the collection and analysis of the data.
4. Continuous memo-ing and revision leads to progressively more abstract levels of theorising.

… and Charmaz (2006) sets out the following guidelines for memo-ing:

1. Write a unique memo for each code, category or observed pattern;


2. Provide sufficient empirical evidence to support definitions of codes and categories and the
analytic claims made about them;
3. Interrogate codes and categories by expressly asking questions of them;
4. Write memos on comparisons between data and data, data and codes, codes and codes, codes
and categories, and categories and categories.
5. Use raw data (quotations) in memos as sources of inspiration;
6. Identify gaps in the analysis and suggest areas that need follow up in the field.

3.6 Axial coding

The initial idea behind the GTA was that categories and their properties and dimensions would emerge
from the data if researchers had sufficient theoretical sensitivity and applied the technique of constant
comparison, but this has proved difficult to realise in practice (Kelle, 2007b). “The most basic challenge
in grounded theory building is to reconcile the need of letting categories emerge from the material of
research … with the impossibility of abandoning previous theoretical knowledge” (Kelle, 2007b, p. 192).
The development of categories from empirical data relies on the researcher’s theoretical sensitivity and
the availability of an adequate theoretical base. Two techniques have emerged to address this challenge
(Ibid.): theoretical coding (Glaser, 1978) and the coding paradigm (Strauss & Corbin, 1990).

3.6.1 Theoretical coding


“The conceptualization of data through coding is the foundation of [Grounded Theory] development …
The essential relationship between data and theory is a conceptual code” (Glaser & Holton, 2007, p. 58).
During theoretical coding, concepts from diverse theoretical backgrounds and contexts are merged into
coding families that serve as a fund of concepts to guide researchers as they consider their empirical
observations in theoretical terms (Kelle, 2007b). Yet their ability to do so is limited because researchers can only make sense of coding families if they understand their inner relations and position within
greater conceptual networks. Using Glaser’s list of coding families requires an advanced understanding
of different schools of thought, terminologies and possible relations in order to choose the list that is
most useful for the data (Ibid.).

“The problem is not so much that Glaser's list of coding families would not be sufficient to stimulate
the discovery of possible theoretical relations between incidents in the data or between newly
developed categories. It is rather that the employment of such an unordered list for the
construction of grounded theories is very difficult if the researcher does not have a very broad
theoretical background knowledge at hand concerning the different theoretical perspectives
entailed in the list.” (Kelle, 2007b, p. 200)

The concept of theoretical coding offers the researcher an approach designed to overcome the limitations
of purely inductive inference. But it does not clarify how formal and substantive concepts should be
linked in order to develop empirically grounded theoretical models. The methodology is best suited to
experienced researchers with a broad knowledge of social theory, who may construct their own analytical
axes using theoretical concepts derived from different schools of thought (Kelle, 2007a).

3.6.2 The coding paradigm

Figure 2 The paradigm model

Open and axial coding are distinct analytic procedures between which the researcher will alternate
repeatedly while engaged in analysis. If the purpose of open coding is to fracture the data so that
categories, dimensions and properties can be identified, then the purpose of axial coding is to put the
data back together again in new ways. This is achieved by making connections between a category and
its subcategories, and developing categories beyond the analysis of their properties and dimensions
(Strauss & Corbin, 1990).

Axial coding requires an intense analysis of one category at a time, which forms the axis around which
further coding and category building is done (Kelle, 2007b). The central idea is to focus on each
phenomenon on its own, as depicted by a category, and then to assess it in terms of its subcategories.
Subcategories are defined as the category’s (Strauss & Corbin, 1990):

• Causal and intervening conditions;
• Context in which it is embedded;
• Strategies by which it is handled, managed or carried out; and
• Consequences, especially those pertaining to the action/interaction strategies.

These subcategories, with their own properties and dimensions, form an axis for ongoing analysis as
illustrated in Figure 2. Note that the axis is presented as circular because the outcome forms input for
ongoing analysis. This coding paradigm fulfils the same function as Glaser’s coding family –
representing a group of abstract theoretical terms that are used to develop categories from data and to
find relations between them – but with special emphasis on the intentions and goals of the actors
involved (Kelle, 2007b).

Subcategories are linked to their categories in a set of relationships, beginning with the causal conditions
and moving through the phenomenon, context, intervening conditions, and action/interaction strategies to,
finally, the consequences of the phenomenon, which feed back into the causal conditions. These linkages denote
causality and the whole is termed the paradigm model (Strauss & Corbin, 1990). Using this model
(proceeding along the axis of analysis) facilitates systematic thinking about the data and enables the
researcher to relate the data in complex ways. Disregarding this model will lead to grounded theory
that lacks density and precision (Ibid.). The components of the model are identified as follows (Strauss
& Corbin, 1990):

• Causal conditions: the events that lead to the occurrence or development of the phenomenon.
• Phenomenon: the central idea or event to which the actions/interactions under study are
related.
• Context: includes the specific set of categories’ properties and dimensions that pertain to the
phenomenon, and the particular set of conditions under which action/interaction strategies are
taken in response to a phenomenon.
• Intervening conditions: everything that shapes the context within which the phenomenon
unfolds.
• Action/interaction strategies: in any grounded theory there is action and interaction in
response to a phenomenon. Action/interaction has the following properties:
o It is evolving in nature and hence must be studied in terms of sequences, movements
and changes over time.
o It is purposeful / goal-oriented and hence can be studied in terms of strategies or tactics.
o Failure to act / interact is as important as action / interaction.
o There are intervening variables that facilitate or constrain action / interaction, and
these must also be discovered.
• Consequences: action / interaction taken, or not taken, in response to a phenomenon has
consequences that need to be traced. Consequences may become part of the intervening or
causal conditions or of the context of either the action / interaction or of the phenomenon under
study. Hence the consequences at one point in time may become the conditions of another point
in time.
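
To make the shape of the paradigm model easier to see, the following is a minimal sketch of how its components might be recorded for a single category. The class and field names are illustrative assumptions rather than part of Strauss & Corbin’s text, and the closing function simply mirrors the point above that consequences at one point in time may become conditions at another.

from dataclasses import dataclass
from typing import List

@dataclass
class ParadigmModel:
    phenomenon: str                      # the central idea or event
    causal_conditions: List[str]         # events leading to the occurrence of the phenomenon
    context: List[str]                   # conditions within which action/interaction strategies are taken
    intervening_conditions: List[str]    # broader conditions shaping the context
    strategies: List[str]                # action/interaction strategies (including failures to act)
    consequences: List[str]              # outcomes of action/interaction, traced by the analyst

def feed_consequences_forward(model: ParadigmModel) -> None:
    # The axis is circular: consequences may re-enter the analysis as conditions
    # or context for the phenomenon at a later point in time.
    model.intervening_conditions.extend(model.consequences)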

Linking and developing categories follows the procedure of asking questions and making comparisons
as described in Section 3.3.1, but axial coding is somewhat more complex than open coding. This is
because four distinct analytic tasks must be fulfilled together, and the comparing and questioning are
based on these tasks (Strauss & Corbin, 1990):

a) Hypothetically relating subcategories to categories through statements describing the nature of the relationships;
b) Verifying these hypotheses against actual data (grounding);


i) Looking, in the data, for evidence to support or refute these hypothesised relationships; if our
intuitions are supported by the data then they can become clear statements of relationships or
hypotheses to be checked later on;
ii) Looking, in the data, for falsifiers that add variation and depth to our understanding, and
hence add density to the emerging theory;
iii) The final theory is limited to those categories and subcategories, properties and
dimensions, and statements of relationships for which verification in the data could be
found.
c) Continual searching for properties and dimensions of categories and subcategories as well as
the dimensional locations of the data (recognising patterns) that are indicative of the categories
and subcategories.
d) Beginning to explore variation in the phenomenon by comparing each category and subcategory
with respect to different patterns that have been discovered.
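
Task (b), grounding, is an analytic judgement, but its bookkeeping can be caricatured in a few lines. The sketch below is only a schematic illustration under the assumption that incidents have already been coded; a crude co-occurrence tally stands in for the analyst’s reading of supporting and complicating evidence, and the data structure is an assumption, not a prescribed procedure.

from typing import Dict, List

def ground_relationship(incidents: List[Dict], category: str, subcategory: str) -> str:
    """Rough stand-in for task (b): look for incidents that support, or add
    variation to, a hypothesised category-subcategory relationship."""
    supporting = [i for i in incidents if category in i["codes"] and subcategory in i["codes"]]
    complicating = [i for i in incidents if subcategory in i["codes"] and category not in i["codes"]]
    if not supporting:
        return "no grounding found: drop or revise the hypothesis"
    if complicating:
        return "grounded, but the complicating incidents add variation to be explored"
    return "grounded: keep as a provisional statement of relationship"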

The coding paradigm is linked to a micro-sociological perspective. It may prove useful to novice
researchers because it provides clear structure and consists of theoretical terms of limited empirical
content. But researchers wanting to use a macro-sociological or system theory perspective may find the
coding paradigm restrictive (Kelle, 2007a).

3.7 Selective coding and sorting

Once the data has been coded and analysed as described above, all of the categories and subcategories
and propositions generated thus far need to be integrated into a single theory. This integration follows
a similar procedure to axial coding, but at a higher level of abstraction. The researcher will already have
the following at his/her disposal (Strauss & Corbin, 1990):

• Categories with their respective properties, dimensions and paradigmatic relationships;


• Noted relationships between major categories in terms of their properties and dimensions;
• An idea of what the sought-after theory will look like.

These can all be integrated into the emergent theory by following several steps, as outlined below.
These steps are neither fully distinct nor need they be taken in linear sequence. The researcher may
move back and forth between steps as needed. This form of coding is called selective coding because
only those categories that fit with the core category of the emerging theory are selected for further
analysis (Ibid.). The steps to be taken are as follows:

1. Choosing a core category and finding the story line of the theory under construction;
2. Relating categories to the core category using the paradigm model;
3. Forming propositions about the relationships between categories at the dimensional level;
4. Sorting of memos and diagrams;
5. Validating these propositions against actual data;
6. Filling in categories that need further refinement or development.

3.7.1 Finding the story line


The central means of achieving theory integration is to find and briefly describe the story about the
central phenomenon of study (Strauss & Corbin, 1990). This is identified by concisely answering the
following two questions (Ibid.):

• What about the area of study seems most striking?


• What is the main problem?

From the answers to these questions the researcher should be able to identify one or more central
phenomena, to which a conceptual label should be given. This label needs to be abstract, broad, but
telling, and should be derived from the categories already generated; one of the existing categories may
even serve as the core category. Around this core are arranged all the other relevant categories, ordered along a
story line. If two (or more) phenomena seem to qualify as equally important, then the researcher must
choose one as the core category and treat the other/s as related subcategories, because
building a story line around two or more core categories is too complex. If a choice cannot be made,
then the researcher will need to build two (or more) theories: one for each central phenomenon
(Glaser, 1978; Strauss & Corbin, 1990).

The general approach to finding a good story line is “to tell the story analytically” (Strauss & Corbin,
1990, p. 120) using the relevant categories generated during open and axial coding. To achieve this,
categories need to be related and memos need to be sorted as described below.

3.7.2 Relating categories and forming propositions


This is no more than a matter of axial coding using the paradigm model on a higher, more abstract level.
To begin with, the core category needs to be developed in terms of its properties and dimensions. This
is done based on memos written thus far and using the techniques described under open coding:
questioning, comparing, and enhancing theoretical sensitivity. The goal is to tell the analytic story (i.e.
to formulate theory) along the properties of the core category (Patzelt, 2013; Strauss & Corbin, 1990).
Thereafter the core category that depicts the central phenomenon is analysed in terms of the paradigm
model: that is, in terms of the conditions that lead to the phenomenon, the context in which it is embedded,
the resulting action / interaction strategies and, ultimately, the consequences (Ibid.; see Figure 2).

Categories have properties, and properties have dimensions. All of these depict features of the central
phenomenon under study: its context, conditions, actions / interactions, and consequences thereof.
Axial coding reveals the relations among these features that exist in the data, as well as patterns and
variations. All of this must be related to the core category of the emerging theory by again following the
procedure of axial coding: based on the memos and data, propositions are formulated on all of the
relations and patterns that seem to have relevance for the story line. This is done at the level of the
dimensions of properties of the selected categories (Patzelt, 2013; Strauss & Corbin, 1990).

3.7.3 Sorting memos and diagrams


“While ideational memos are the fund of grounded theory, the theoretical sorting of memos is the
key to formulating the theory for presentation to others … Sorting is an essential step in the
grounded theory process that cannot be skipped.” (Glaser, 1978, p. 116)

Generating the story line involves a sequential ordering of categories, subcategories and propositions
about the relationships and patterns of the theory. “Using such a story as a guideline, the analyst can
begin to arrange and rearrange the categories in terms of the paradigm until they seem to fit the story,
and to provide an analytic version of the story” (Strauss & Corbin, 1990, p. 127). This action of sorting
based on memos and diagrams will generate and test ideas for the story line. Strauss & Corbin (1990)
caution that this is a difficult process that requires a lot of thought and trial-and-error. But if sorting is
omitted, the resulting theory will be linear, thin, and not fully integrated. It may have an overall
integration but will lack the internal integration of connections among many different categories.
Proper sorting generates rich, multi-relation, multi-variate theory (Glaser, 1978).

Sorting of memos and diagrams is an important step towards theoretical integration of categories.
“Grounded theory sorting gives you a logic for organizing your analysis and a way of creating and
refining theoretical links that prompts you to make comparisons between categories” (Charmaz, 2006,
p. 115). It begins with a reassessment of each memo, facilitated by making sure that each memo has a
heading and a clear indication of the codes / categories to which it relates. Ideas for the order of
categories, propositions about relationships, or interpretations must be allowed to emerge while
sorting. It is also a good idea to write memos about these ideas. As a result of sorting, the relationships
between categories should become clearer. These insights should also be memo-ed (Ibid.).

Sorting involves ordering memos based on the similarities, connections, and concepts of categories and
their properties that are contained in the memos. This creates patterns among the memos, which in turn
form the outline of the theory. It also generates new ideas that should be captured in additional memos
as the analyst compares ideas to ideas. This leads to densification of the theory and saturation of lines of
thought (Glaser, 1978).
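
Sorting is an analytic act rather than a mechanical one, but its bookkeeping side can again be sketched. Building on the hypothetical Memo record sketched in Section 3.5, the fragment below (an illustration, not a prescribed tool) simply piles memos together by the codes or categories they reference, so that similar memos can be read side by side and compared.

from collections import defaultdict

def sort_memos(memos):
    """Group memos by the codes/categories they reference; memos that land in
    the same pile are then read against one another, and any new ideas are
    written up as further memos and re-sorted."""
    piles = defaultdict(list)
    for memo in memos:
        for code in memo.codes:
            piles[code].append(memo)
    return piles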

Glaser (1978) gives the following analytic rules that guide the construction of the emerging theory and
subsequent communication of theory. As such, these rules are as relevant here as they are in Section 3.10
(Communicating theory). They provide the necessary discipline for focussing on the central theme
as theory is generated. Yet these are not hard and fast rules that must be strictly adhered to; rather
rules should be allowed to emerge as they become relevant, and others are given up as they become
unnecessary (Ibid.).

1. Starting to sort: Start anywhere. It is tempting to try to find an analytic beginning but this is not
necessary. The beginning, middle and conclusion will emerge as sorting progresses.
2. Relate sorting to core variables: Locate the core variable and sort all other categories and
properties only as they relate to it. If a concept does not relate to the core variable then it should be left
out.
3. Promotion / demotion of core variables: The issue of having more than one core variable is dealt
with in Section 3.7.1. Essentially, there can be only one and the analyst must make difficult, reasoned
choices about which ones to include and which ones to leave out.
4. Memo-ing: Whenever new ideas present themselves during the process of sorting, the analyst
must stop sorting and memo, and then sort the new memo into the integrated list of memos.
5. Carry-forward: When categories are related to the core it is important to sort memos such that
the use of concepts builds up cumulatively.
6. Integrative fit: All ideas must fit in somewhere, or else the analyst will wander off the path of the
story line and necessary ideas and relations will be omitted. Fitting ideas into the theory requires
constant questioning and comparison of each idea with the emerging outline. This yields many
theoretical memos that are resorted into the theoretical outline.
7. Resorting: Sorting involves much resorting as the integrative fit of each memo is constantly
corrected and confirmed. Resorting also finds gaps in the theory that lead the researcher to theoretical
sampling (see Section 3.8).
8. Idea problems: Unused ideas should not be discarded but should instead be carried forward for
future research or publication. They come from several sources:
• Material that does not relate to the core category;
• Ideas that cannot fit the emerging theory;
• Pet ideas that have to be left out as irrelevant;
• Newly generated ideas that have occurred late in the sorting process.

9. Cutting off: Sorting stops when memos run out, when the core variable is theoretically
saturated, when the analyst is personally saturated, or when a state of theoretical completeness is
reached. This means that the theory is explained using the fewest possible concepts, with the greatest
possible scope, and with as much variation as possible. There must be sufficient explanation of the
concepts that fit, work, have relevance, and are saturated.
10. The mechanics of sorting: Memos should be broken down into smaller parts as much as possible
in order to increase their sort-ability. Make notes of memos that are to be carried forward. Record notes
on sorting rules and theoretical coding while sorting, and don’t forget to stop and memo as new ideas
occur. Move quickly without pondering too much.

3.7.4 Validating propositions


The analyst will know that they have found a sound sequential order (i.e. a convincing story line) when
their ordered memos follow each other smoothly and logically. If that cannot be achieved, then there
may be something lacking in the data or the emergent theory. If it is the data that is lacking, then the
researcher should return to the field and the whole process begins again. But if the problem is still not
resolved, one of three situations may be occurring (Strauss & Corbin, 1990):

• The case may represent a state of transition and hence exhibit aspects of two different cases. In
this instance the theory should include process.
• Intervening conditions may have an impact on the case causing puzzling variation. The solution
is to determine what conditions are causing variation and to build them into the theory.
• The case may have developed over time and hence early interviews and observations differ
from later ones. Then the case should be broken down into its proper pieces and these should
be placed at different points in the theory (analogous to treating the case as an embedded case:
see Yin, 2009).

3.7.5 Filling in categories


The story that is derived following the guidelines given above should be validated against the data to
make sure that it fits (see the criteria for good theory given on page 1). Assuming that this has been
done, the researcher may begin to feel like he/she is nearing the end of analysis. But if, during the
process of writing down the analytic story, it appears that some categories are not yet fully developed,
then there is still work to do.

Categories may be identified as requiring further development if not all of their relevant properties are
clear, or insufficient evidence has been collected regarding how well their dimensions are empirically
grounded. Accepting such gaps would result in a theory with less conceptual density and/or less
specificity than is possible and desirable (Strauss & Corbin, 1990). The solution is to revisit the data or
gather more data specifically aimed at filling these gaps. This process is typical of the GTA in that the
researcher must continually move back and forth between data, analysis and theory (Ibid.).

3.8 Theoretical sampling

Theoretical sampling is a method of data collection based on concepts derived from the data. The
method is not established before research begins but rather responds to the data in an open and flexible
manner. “Concepts are derived from data during analysis and questions about those concepts drive the
next round of data collection … This circular process continues until the research reaches the point of
saturation.” (Corbin & Strauss, 2008, pp. 144–145) By sampling theoretically the researcher is looking
for specific concepts to examine how they might vary under different conditions (Ibid.). The representativeness of the sample is of no concern; rather the researcher is looking for concepts and
incidents that illuminate them. Variation, not sameness, guides the choice of cases because it broadens
the concepts and scope of the theory (Ibid.).

Theoretical sampling means sampling on the basis of concepts that have proven theoretical relevance to
the evolving theory (Strauss & Corbin, 1990). Concepts with proven theoretical relevance are those that
are repeatedly and notably present (or notably absent) when comparing incident with incident, and that
have emerged from the coding process as categories (Ibid.). The aim of theoretical sampling is to sample
events, incidents, etc. related to the study area that are indicative of categories, including their
properties and dimensions, in order to develop those categories and conceptually relate them along
their properties and dimensions. Through sampling, cases are selected that stimulate the building of
theory about the phenomenon of which the cases are examples (Ibid.).

An example illustrates the process (Patzelt, 2013). Suppose that during open coding concepts A-F are
generated from the data. They share some attributes and are hence grouped into a category X. Inspection of the
instances coded as A-F yields properties K, L, and N of category X. While defining the dimensions of K it
appears that there is no evidence in the data for a maximum value of K. Likewise there is no evidence
for a minimum value of L. Hence more data needs to be collected to find these missing cases. If such
cases cannot be found then it could be that the conceptualisation of categories, properties and
dimensions needs revision (because the emerging theory must be grounded in data). Hence sampling is
done on the basis of evolving concepts.
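
The logic of this example can be restated schematically. The values below are invented purely for illustration (the categories, properties and ranges are the hypothetical A-F, K and L of the example above), and simple minimum/maximum checks stand in for what would, in practice, be the analyst’s judgement about dimensional coverage.

# Dimensional values of properties of category X observed so far in instances coded A-F.
observed = {
    "K": [0, 2, 5],    # low values of K found, but nothing near its expected maximum
    "L": [7, 9, 15],   # high values of L found, but nothing near its expected minimum
}

# Dimensional ranges the emerging conceptualisation implies should exist.
expected = {
    "K": (0, 10),
    "L": (0, 15),
}

def sampling_gaps(observed, expected):
    """Flag dimensional extremes for which no instance has yet been found; these
    gaps direct the next round of theoretical sampling (or, if no such cases can
    be found, a revision of the conceptualisation itself)."""
    gaps = []
    for prop, (low, high) in expected.items():
        values = observed.get(prop, [])
        if not values or min(values) > low:
            gaps.append("no evidence yet for a minimum value of " + prop)
        if not values or max(values) < high:
            gaps.append("no evidence yet for a maximum value of " + prop)
    return gaps

print(sampling_gaps(observed, expected))
# -> ['no evidence yet for a maximum value of K', 'no evidence yet for a minimum value of L']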

“Theoretical sampling is the process of letting the research guide the data collection. The basis for
sampling is concepts, not persons. Relevant concepts are elaborated upon and refined through
purposeful gathering of data pertaining to these concepts. It is through theoretical sampling that
concepts are elaborated and as such it forms the basis for thick rich description and theory
construction. Theoretical sampling continues until all categories are saturated.” (Corbin & Strauss,
2008, p. 157)

Theoretical sampling is guided by the questioning and comparing that evolves during open, axial, and
selective coding, because these make the researcher aware of the gaps in the data (Strauss & Corbin,
1990). It is cumulative, as concepts and their relationships accumulate “through the interplay of data
collection and analysis” (Strauss & Corbin, 1990, p. 178). Each event sampled builds on previous data
and analysis and contributes to further data collection and analysis that is more and more specific as
the researcher aims for saturation (Corbin & Strauss, 2008). It should be consistent in terms of
systematic gathering of data on each generated category, and it should ensure that variation, process
and density are catered for while important theoretical leads are followed. Theoretical sampling must
also be flexible in that the researcher should investigate areas that were not initially important but
subsequently appear to be. It should be “planned rather than haphazard” (Strauss & Corbin, 1990, p.
178). It also goes hand-in-hand with data analysis: sampling must be carried out “on the basis of the
evolving theoretical relevance of concepts” (Strauss & Corbin, 1990, p. 179).

When doing theoretical sampling we must cover:

• What actions / interactions do or do not occur;
• The range of conditions giving rise to those actions / interactions and their variations;
• How these conditions change over time;
• The impact of those changes;
• The consequences of action / interaction or the lack thereof.

Theoretical sampling differs according to the phase that the researcher is in (Strauss & Corbin, 1990).
During open coding the aim is inspiration for the creation of codes and openness guides all sampling
choices. During axial coding the aim is to find differences at the dimensional level and sampling must be
done deliberately and systematically. During selective coding the aim is to find opportunities for
verifying the story line and relationships between categories, and filling in poorly developed categories.

3.9 Theory-building

Figure 3 The conditional matrix (Strauss & Corbin, 1990, p. 163)

Theory-building is the goal of the GTA. Theory is derived from memo-ing and diagramming, both of which should be
on-going during the processes of coding and categorising. The steps thus far are as follows:

1. During open coding, observations are recorded as concepts;


2. Concepts are grouped into categories;
3. Categories are developed in terms of their properties and dimensions. During this process of
abstraction it is essential that the first pieces of the grounded theory are recorded as memos
and diagrams;
4. The conditions, contexts and consequences of every phenomenon under study, as well as the
action / interaction strategies associated therewith, are revealed and explored through the
paradigm model;
5. Selective coding and sorting develops and tests the story line for the emerging theory.
6. Finally the conditional matrix integrates all of the parts developed thus far.

The conditional matrix comprises a set of concentric circles centred on action – see Figure 3. It
reflects the layered structure of reality in abstract terms as a set of conditions that work both from the
top-down and bottom-up (Strauss & Corbin, 1990). The researcher uses it by filling in the specific
conditional features for each level that pertain to the chosen area of investigation. This assessment is
based on the data or on literature. Hence any phenomenon can be studied at any level of the matrix in
terms of its conditional relationship to the levels above and below it. The conditional matrix is thus an:

“… analytic aid, a diagram, useful for considering the wide range of conditions and consequences
related to the phenomenon under study. The matrix enables the analyst to both distinguish and
link levels of conditions and consequences.” (Strauss & Corbin, 1990, p. 158)

Events, incidents or happenings are tracked through the matrix along conditional paths in order to link
them directly to a phenomenon, either upwards from action via interaction, or downwards from the international
level to interaction and action (Ibid.). This tracing along the conditional path is done in
order to directly link conditions and consequences with action / interaction and helps to avoid pursuing
conditions that have no real relevance for the study. Paths are traced based on evidence in the data that
a particular condition produces an effect on the phenomenon through action / interaction. Ask
questions like: why has this occurred, what conditions were operating, how have the conditions
manifested themselves, and with what consequences? To answer these questions, the effects of conditions
must be systematically followed through the matrix (Ibid.).
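
The matrix itself appears here only as Figure 3, so the sketch below is an assumption about its shape rather than a reproduction of it: a list of levels running from action at the centre outwards to the international level (the text only names action, interaction and the international level; the intermediate labels follow common renderings of Strauss & Corbin’s matrix and may differ from the figure), together with a trivial helper that lists the levels crossed as a conditional path is traced upwards or downwards.

# Levels of the conditional matrix, innermost first (intermediate names are assumptions).
LEVELS = [
    "action",
    "interaction",
    "group / individual / collective",
    "sub-organisational",
    "organisational / institutional",
    "community",
    "national",
    "international",
]

def conditional_path(start, end):
    """List the levels crossed when tracing a conditional path from one level
    of the matrix to another, upwards or downwards."""
    i, j = LEVELS.index(start), LEVELS.index(end)
    if i <= j:
        return LEVELS[i:j + 1]
    return LEVELS[j:i + 1][::-1]

# e.g. tracing how an international-level condition reaches interaction and action:
print(conditional_path("international", "action"))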

Without this level of analysis, only a description of a phenomenon and its context is given. By tracing a
conditional path through the conditional matrix, the specific conditions can be connected to the
phenomenon in question through their effects on action / interaction (Ibid.). Using the matrix helps to
enhance theoretical sensitivity, especially in terms of the range of conditions that might bear upon the
phenomenon under study. It also enables theoretical sensitivity to the range of potential consequences
that result from action / interaction. The matrix assists with the systematic relation of conditions,
actions / interactions, and consequences to a phenomenon, and hence completes the process of
systematic theory-building that began with axial coding (Ibid.).

3.10 Communicating theory

“It is in the act of reading and writing that insights emerge … It is precisely in the process of
writing that the data of the research are … interpreted and that the fundamental nature of the
research question is perceived.” (van Manen, 2006, p. 715)

Communicating theory begins with the sorting of memos and diagrams as explained in Section 3.7.3:
“Writing grounded theory requires a ‘write up’ of the theoretical sorting of memos” (Glaser, 1978, p.
116). Through theoretical sorting the researcher gains several crucially important benefits for theory
writing (Ibid.):

• Analytically sorted memos provide the researcher with a map describing “where he is going and
what to write next” (Glaser, 1978, p. 116).
• A generalised, integrated model by which to write: theoretical sorting enhances the connections
between categories and properties;
• By preventing the regression back to a description of the data, it maintains the researcher’s
focus at the conceptual level;
• It generates dense, complex, complete theory;

• Through the process of sorting, more memos are generated, often at a higher conceptual level,
that further condense the theory;
• Sorting helps to integrate literature into the theory;
• It shows how the theory works and why each idea is placed as it is; and
• Preliminary sorting forms the basis of the initial draft of a paper or thesis.

Communicating theory is as important as developing it: a good idea is of little use if it cannot be read
and understood. Without presenting our findings in written or verbal form, professional knowledge
cannot be advanced nor can the implications thereof be tested or effected (Corbin & Strauss, 2008).
Hence it is important for the researcher to plan how his/her ideas will be communicated. Planning
begins with a review of the last diagrams and sorted memos so that there is no question about the main
analytic story line, because it is around this that the text needs to be organised (Strauss & Corbin,
1990). Then the following points should give guidance (Ibid.):

• Decide on the main analytic message and summarise this in a few sentences at the beginning of
the text.
• After careful consideration of the analytic logic that informs the story, construct a provisional
table of contents based on sorted memos.
• Decide on a strategy for describing the theory, e.g.:
o From a variety of different viewpoints (walking around the topic);
o With gradually increasing clarity and detail (going downhill towards the topic);
o As if visiting a house, i.e. moving from room to room and exploring the detail contained
in each room;
o Starting at the phenomenon and then broadening to the context (going uphill away from
the topic).

4 Concluding remarks

The GTA is one of the most widely known and comprehensive qualitative social research methods in
current use (Barry & Roux, 2013). Through application of the methodology described in the preceding
pages and sections, a researcher can develop a theory that is shaped by the views of participants to
explain a process, action, or interaction (Barry & Roux, 2013; Corbin & Strauss, 2008).

The approach employs a continual to-and-fro between data collection, coding, categorising, memo-ing,
sorting, validating, and writing, from which theory emerges. Analysis begins with collection of the data
and coding of the data using ideas derived from the data, not a priori (Barry & Roux, 2013). Coding
begins the process of abstraction as data are conceptualised and labelled in order to filter out noise and
reveal the important issues contained within the data (Hatch, 2006). Data collection, coding,
categorising, and analysis continue until categories are theoretically saturated, i.e. no new properties
emerge from sampling (Charmaz, 2006; Corbin & Strauss, 2008). Hence the choice of data sources is not
driven by the need for representativeness, but rather by gaps in the emerging theory.

Figure 4 The cyclical nature of the grounded theory approach

The cyclical nature of the GTA is attested to by Barry & Roux (2013) and illustrated in Figure 4. It begins
with finding a topic of interest and collecting suitable data on that topic. The data is initially conceptualised through the process of open coding. From these codes, categories are developed as
codes are compared and grouped based on their similarities. The categories are thence developed in
terms of their properties and dimensions, and thus the outer circle of the figure is completed. From this
point onward memo-ing should proceed in parallel to analysis and the researcher should employ the
techniques described previously with regards to enhancing theoretical sensitivity. The cycle then
passes by data collection again to illustrate that this should be on-going throughout the research
process. Next comes axial coding where the data that was fractured during open coding and
categorising is put back together again and subcategories are developed (the second ring). Next comes
selective coding during which the theory begins to be integrated and developed. Crucial to this process
is finding the story line. This is facilitated by relating categories and forming propositions (essentially
axial coding again, but at a higher conceptual level) and by sorting memos and diagrams. The propositions
formed thus far then need to be validated against data, which might mean returning to the field to collect more data.
Once this new data has gone through the whole process of coding, categorising, etc. any gaps in the
theory should be evident. These are filled in with more data as needed (and more coding, categorising,
etc. follows). This is supported by theoretical sampling. Eventually the conditional matrix is employed
to further densify theory. Lastly, the memos are referred to again as the newly developed theory is
communicated to the relevant audience.

The main advantage of using the GTA is that the process is explicitly designed to develop rich, detailed
theory inductively from data. If the process is rigorously followed then the derived theory should satisfy
the requirements of a scientific theory (refer back to Section 2). The main disadvantages are that it is
time-consuming, intensive, and laborious, and the analyst is always uncertain whether theory will
emerge or not, making it an inappropriate method for the inexperienced researcher (Barry & Roux,
2013).

5 References

Allan, G. (2003). A critique of using grounded theory as a research method. Journal of Business
Research, 2(1), 1–10.

Babbie, E., & Mouton, J. (2001). The practice of social research (SA ed.). Cape Town: Oxford
University Press.

Barry, M., & Roux, L. (2012). A change based framework for theory building in land tenure
information systems. Survey Review, 44(327), 301–314.

Barry, M., & Roux, L. (2013). The Case Study Method in Examining Land Registration Usage.
GEOMATICA, 67(1), 9–20.

Charmaz, K. (2006). Constructing grounded theory: a practical guide through qualitative analysis.
Los Angeles: Sage Publications.

Corbin, J., & Strauss, A. (1990). Grounded theory research: Procedures, canons, and evaluative
criteria. Qualitative Sociology, 13(1), 3–21.

Corbin, J., & Strauss, A. (2008). Basics of Qualitative Research: Techniques and procedures for
developing grounded theory (3rd ed.). Thousand Oaks: Sage Publications.

Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Mill
Valley: Sociology Press.

Glaser, B. G. (1998). Doing Grounded Theory: Issues and Discussion. Mill Valley: Sociology Press.

Glaser, B. G. (2005). The Grounded Theory Perspective III: Theoretical Coding. Mill Valley:
Sociology Press.

Glaser, B. G. (2007). Doing Formal Theory. In A. Bryant & K. Charmaz (Eds.), The SAGE
Handbook of Grounded Theory (pp. 96–114). Los Angeles: SAGE Publications Ltd.

Glaser, B. G., & Holton, J. (2007). Remodeling Grounded Theory. Historical Social Research,
Supplement, (19), 47–68.

Hatch, M. (2006). Organization theory: modern, symbolic and postmodern perspectives (2nd ed.).
Oxford: Oxford University Press.

Holton, J. (2007). The Coding Process and its Challenges. In A. Bryant & K. Charmaz (Eds.), The
SAGE Handbook of Grounded Theory (pp. 265–290). Los Angeles: SAGE Publications Ltd.

Kelle, U. (2007a). “Emergence” vs. “Forcing” of Empirical Data? A crucial problem of “Grounded
Theory” reconsidered. Historical Social Research, Supplement, (19), 133–156.

Kelle, U. (2007b). The Development of Categories: Different approaches in Grounded Theory. In K.
Charmaz & A. Bryant (Eds.), The SAGE Handbook of Grounded Theory (pp. 191–214). Los
Angeles: SAGE Publications Ltd.

Lee, A. (1989). A scientific methodology for MIS case studies. MIS Quarterly, (March), 32–49.

Leedy, P., & Ormrod, J. (2010). Practical Research: Planning and Design (9th ed.). Pearson
Education International.

Lempert, L. B. (2007). Asking Questions of the Data: Memo writing in the Grounded Theory
tradition. In K. Charmaz & A. Bryant (Eds.), The SAGE Handbook of Grounded Theory (pp.
245–265). Los Angeles: SAGE Publications Ltd.

McAuley, J., Duberley, J., & Johnson, P. (2007). Organization theory: Challenges and perspectives.
Essex: Pearson Education Limited.

McMahon, M. (2013). What is a Theory? Conjecture Corporation. Retrieved December 12, 2013,
from http://www.wisegeek.org/what-is-a-theory.htm

Morse, J. (2007). Sampling in Grounded Theory. In A. Bryant & K. Charmaz (Eds.), The SAGE
Handbook of Grounded Theory (pp. 229–245). Los Angeles: SAGE Publications Ltd.

Patzelt, W. (2013). An Introduction to Doing Grounded Theory. Stellenbosch: African Doctoral
Academy unpublished course notes.

Pelto, P., & Pelto, G. (1978). Anthropological Research: The structure of inquiry (2nd ed.).
Cambridge: Cambridge University Press.

Robson, C. (1994). Real World Research: A resource for social scientists and practitioner-
researchers. Oxford: Blackwell Publishers.

Rule, P., & John, V. (2011). Your guide to case study research. Pretoria, South Africa: van Schaik.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Techniques and procedures for
developing grounded theory. Newbury Park: Sage Publications.

Van Manen, M. (2006). Writing qualitatively, or the demands of writing. Qualitative Health
Research, 16(5), 713–722.

Yin, R. K. (2009). Case Study Research: Design and Methods (4th ed., Vol. 5). Thousand Oaks: Sage Publications.
