
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/278961843

Qualitative Data Analysis and Interpretation: Systematic Search for Meaning

Research · June 2015


DOI: 10.13140/RG.2.1.1375.7608


Patrick Ngulube
University of South Africa




Chapter 8

Qualitative Data Analysis and Interpretation: Systematic Search for Meaning
Patrick Ngulube

8.1 Introduction
In this chapter, we will discuss the analysis and interpretation of qualitative data as a
follow-through on the discussions in Chapter Seven. The approaches to
qualitative and quantitative data analysis are different, as illustrated in table 8.1
below. The remarkable growth of qualitative research in many disciplines, including
business and management (Myers, 2009), health and social sciences (Atkinson,
Coffey & Delamont, 2001; Flick, 2002), and psychology (Madill & Gough, 2008),
makes it imperative for researchers to be familiar with qualitative data analysis. An
understanding of qualitative data analysis is fundamental to their “systematic search
for meaning” (Hatch, 2002:148) in their data.

Qualitative data analysis is one of the most important steps in the qualitative
research process (Leech & Onwuegbuzie, 2007) because it assists researchers to
make sense of their qualitative data. The process of qualitative data analysis is
“labour intensive and time consuming” (Lofland, Snow, Anderson & Lofland,
2006:196). This is partly due to the fact that qualitative research produces “large
amounts of contextually laden, subjective, and richly detailed data” (Byrne,
2001:904).

The “true test of a competent qualitative researcher comes in the analysis of the
data” (Henning, Van Rensburg & Smit, 2004:101). Qualitative data analysis is
concerned with transforming raw data by searching, evaluating, recognising, coding,
mapping, exploring and describing patterns, trends, themes and categories in the
raw data, in order to interpret them and provide their underlying meanings. Patton
(2002:41) refers to this process as inductive analysis and creative synthesis.

After reading this chapter, you should be able to:

 Identify the main sources of qualitative data;
 Differentiate between qualitative and quantitative data analysis;
 Identify various approaches to analysing and interpreting qualitative data;
 Describe key qualitative data analysis procedures;
 Explore computer-based qualitative data analysis procedures;
 Outline data management procedures;
 Discuss common criteria for evaluating qualitative data; and
 Reflect on qualitative data interpretation.

1 How to cite this chapter: Ngulube, P. 2015. Qualitative data analysis and interpretation: systematic search for
meaning, in Mathipa, ER & Gumbo, MT. (eds). Addressing research challenges: making headway for developing
researchers. Noordwyk: Mosala-MASEDI Publishers & Booksellers cc, pp. 131-156.

This chapter examines various strategies for qualitative data analysis and
interpretation. Furthermore, it discusses quality and rigour in qualitative data
analysis. It is apparent from the list of references in this chapter that much has been
written on qualitative data analysis. The proliferation of literature has certainly
contributed to the complexity of the approach towards qualitative data analysis. Birks
(2014:224-225) writes that:

The complexities of qualitative research terminology and some of the original work
on the topic can leave a novice researcher a little overwhelmed when commencing
a study (and often throughout the later stages).

However, little guidance is provided on how the various types of qualitative data
analysis relate to a qualitative research project. It could be that the analysis of
qualitative data is still “a mysterious, half-formulated art”, as Miles asserted in 1979.
Consequently, the issues related to the analysis of qualitative data are still under the
spotlight more than three decades later. In fact, many qualitative researchers mistakenly
believe that the only way to analyse qualitative data is by means of constant
comparative or constant comparison analysis, as suggested by Glaser and Strauss
(1967) (Leech & Onwuegbuzie, 2007:562).

We concede that it is not always possible to separate data analysis from data
collection in qualitative studies, as analysis sometimes occurs during data collection,
but we are deliberately making this phase distinct in this chapter, because most of
the analysis of the qualitative data tends to take place at the end. It is at the end that
answers are sought to questions such as: What does this data mean? What are the
major themes emerging from the data? Does the data contribute to a further
understanding of the field?

There are a variety of approaches used to analyse qualitative data. Consequently,
this chapter provides a few illustrative examples using the framework described in
section 8.5. Researchers should take note of the fact that whichever approach to
analysing qualitative data is adopted, the data analysis procedure should be aligned
to the data that has been gathered and the assumptions of the research approaches.
Researchers should also note that “all forms of qualitative data analysis involve
interpretation and the researcher must always acknowledge the possibility that
alternative interpretations are possible” (Harding, 2013:139).

8.2 Differentiating features of quantitative and qualitative data analysis
Table 8.1 below illustrates the differences and similarities in data analysis between
the qualitative and quantitative methodologies. Although data analysis in qualitative
and quantitative traditions is based on different assumptions, data analysis
transcends the qualitative-quantitative divide, which is partially discussed in Chapter
Nine. Data analysis in both research traditions is not a series of binary oppositions. It
is possible for data analysis to have moments of qualitative and quantitative
approaches. This partly explains why some qualitative data analysis software
programs such as Atlas/ti easily interface with SPSS©, a statistical package. It is
unhelpful and unproductive to view data analysis in the two research traditions as
mutually exclusive.

Table 8.1: Similarities and differences between quantitative and qualitative data
analysis

Quantitative: Collecting and analysing data is usually straightforward and not stressful.
Qualitative: Collecting and analysing data is highly labour-intensive and generates a lot of stress (Miles, 1979). For instance, Adair and Pastori (2011) had 150 focus group interviews in multiple languages that they had to analyse.

Quantitative: Helpful in “answering questions of who, where, how many, how much, and what is the relationship between specific variables” (Adler, 1996:5).
Qualitative: Provides naturally occurring information and assists in answering why and how questions, while documenting the interventions of the researcher during the whole research process.

Quantitative: Hard data are collected, as they are in the form of numbers, counts and other statistical formulae.
Qualitative: Soft data are collected, as they are in the form of words (texts, images, artefacts, narratives) and everything else (Blaxter, Hughes and Tight, 2006).

Quantitative: Conventions for data analysis are clear and formulated, and the process is predictable.
Qualitative: Methods of data analysis are not clearly formulated (Miles, 1979:590) and the process is not predetermined (Suter, 2012:343).

Quantitative: Quantitative research produces a sequence of events independent of the researcher (Boulton and Hammersley, 2006:245).
Qualitative: Qualitative research produces narratives which document the course of the project and give an audit trail of the research (Boulton and Hammersley, 2006:245).

Quantitative: Data analysis is usually done at the end, when all data has been collected, in a linear fashion.
Qualitative: Data are analysed as they are collected (Glaser and Strauss, 1967; Miles and Huberman, 1994) because data “collection and analysis are interactive and occur in overlapping cycles” (McMillan and Schumacher, 2014:364).

Quantitative: Not flexible; it is usually difficult to follow up on promising hunches.
Qualitative: Flexible enough to make adjustments during data collection, as supplementary questions may be formulated during data collection to gather additional data.

Quantitative: Standardised data are collected by measuring either qualitative or quantitative variables.
Qualitative: Huge amounts of data are collected that need to be summarised and interpreted.

Quantitative: The researcher seeks to verify or test a theory, and the approach tends to be confirmatory.
Qualitative: The researcher puts “aside perceived notions about what the researcher expects to find in the research, and letting the data and the interpretation of it, guide analysis” (Corbin and Strauss, 2008:160), and the approach tends to be exploratory.

Quantitative: Relationships between independent and dependent variables are of major concern (tends to be variable-centric).
Qualitative: Focus is on the meaning of events and actions as expressed by the participants (case-centric) (Plowright, 2011).

Quantitative: Biased towards hypothesis testing and deductive-oriented.
Qualitative: Favours analytical induction.

Quantitative: Use of computer programs and software to analyse data is possible (e.g. SPSS©, linear structural relations (LISREL) and Statistical Analysis System (SAS)).
Qualitative: Use of computer programs and software to analyse data is possible (e.g. NVivo, previously NUD*IST, and Atlas/ti).

It is clear from the above table that qualitative data is generally analysed from the
beginning of the research, as suggested by Miles and Huberman (1994:50). The
analysis of the collected data assists the researcher to devise strategies to generate
more data, in order to answer the research question. The focus of qualitative
analysis is on the meaning of events and actions, rather than statistical significance
and relationships between variables. Table 8.1 invites an interesting comparison with
Table 12.1 in Suter (2012).

8.3 Sources of qualitative data


Qualitative research uses all sorts of data (Braun & Clarke, 2013). Depending on the
research questions informing a study, qualitative empirical materials may be
obtained through the utilisation of qualitative designs or approaches, such as the
case study (situated knowledge), historical research (knowledge of history),
grounded theory (knowledge of process and outcome), ethnography (knowledge of
culture), content analysis (knowledge of content), phenomenology (knowledge of
lived experience), action research (knowledge of process, outcome and change),
hermeneutics (knowledge and interpretation of the scriptures or text) and discourse
analysis (knowledge of discourse) (Mills, 2014:35). This is not an exhaustive list, as
other approaches can be found in major texts on qualitative methods. Each of these
designs has different purposes and prospective outcomes.

Many research methods texts confuse research designs with methods. According to
De Vaus (2001:9), “It is not uncommon to see research design treated as a mode of
data collection rather than as a logical structure of the inquiry”. For instance, Payne
and Payne (2004:175) refer to research designs as research methods. This
resonates with Mills’ (2014:36) conceptualisation of research methods as including
data generation and collection, analysis of data, quality and rigour, and the
interpretation of findings. However, Creswell (2013:5) uses the term “research
methods” to refer to techniques such as questionnaires; interviews; observation;
document analysis; and artefact analysis. With reference to Rule and John (2011)
and Creswell (2013), we use the term research methods to refer to techniques for
gathering data, while research designs or research approaches are ways of
designing and conducting research.

Qualitative research designs or approaches are as diverse as sources of qualitative
data. The major sources of qualitative data may be observations, interviews,
questionnaires, physical traces, document review and audio-visual materials (Patton,
2002; McMillan & Schumacher, 2014). However, most qualitative research mainly
relies on interview data (Perakyla & Ruusuvuori, 2011). This is also evident in
disciplines such as psychology (Madill & Gough, 2008) and education (Leech &
Onwuegbuzie, 2008).

8.4 Qualitative data management


Qualitative data are gathered and constructed from relatively few sources, but the
amount of data generated tends to be extensive. A structured mechanism for
managing research data contributes to the credibility of the research outcome (Birks,
2014). The way in which qualitative data and resources are managed contributes to
procedural precision and the preservation of the quality of the research (Birks &
Mills, 2011).

However, there is no widely accepted system of recording qualitative data
(Williamson, Given & Scifleet, 2013), but it is clear that some system is necessary
(Lofland et al., 2006). The major factor that should determine the researcher’s choice
is the logic and security that the system provides (Birks, 2014). Electronic files are
useful in storing transcribed interviews, observation data and memos. Asking the
following three questions suggested by Miles and Huberman (1994:46) will assist a
novice researcher in managing research data using computers:
 What will my files look like?
 How will they be organised?
 How can I get the information from them that I need?
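These three questions can be answered concretely with a simple, consistent folder scheme. The sketch below, in Python, is one possible layout; the folder and file names are hypothetical, not a prescribed standard:

```python
from pathlib import Path

# Hypothetical project layout answering Miles and Huberman's three questions:
# 1. What will my files look like?   -> plain-text transcripts, memos, codebook
# 2. How will they be organised?     -> one folder per data type, dated filenames
# 3. How can I get information back? -> predictable names make files easy to find
root = Path("qual_project")
for folder in ["transcripts", "observations", "memos", "codebook", "backups"]:
    (root / folder).mkdir(parents=True, exist_ok=True)

# A dated, participant-keyed filename is self-describing and sorts chronologically
interview = root / "transcripts" / "2015-06-22_participant01_interview.txt"
interview.write_text("I: How do you manage your records?\nP: ...\n")

# Retrieving all interview transcripts later is then a one-liner
found = sorted(root.glob("transcripts/*_interview.txt"))
print([f.name for f in found])
```

Whatever scheme is chosen, the point is that it supplies the logic and security Birks (2014) asks for: any file can be located from its name alone.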

The files must be backed up, irrespective of what system is utilised. Printed copies
may be necessary when analysing data, as it is easier to immerse oneself in one’s
data using hard copies than electronic copies (Williamson et al., 2013).

8.5 Creating a picture from pieces of gathered data


Creating meaning and making sense of the data is the main purpose of qualitative
data analysis. Miles and Huberman (1994:10) noted that “the strengths of qualitative
data rest on the competence with which their analysis is carried out”. The methods of
data analysis are based on three qualitative data analysis strategies identified by
Creswell (2007), including preparing and organising the data, coding, and presenting
the data in the form of text, tables or figures. There are various types of qualitative
data analysis and their utilisation depends on within which framework qualitative
research was adopted. Research questions are used as a guide for conducting the
analysis, as for instance, each question becoming a major coding category that is
broken down into sub-categories.

Although not all qualitative data analysis is inductive (Madill & Gough, 2008), the
inductive analytical process is a common characteristic of qualitative data analysis
(Curtis & Curtis, 2011). The common denominator for the data analysis procedures
described in this chapter is that they all involve data reduction, data display, and
conclusion drawing/verification (Miles & Huberman, 1994).

After having demonstrated how little focus is placed on qualitative data analysis,
Leech and Onwuegbuzie (2007:563) described seven commonly used techniques for
analysing qualitative data – method of constant comparison, keywords-in-context,
word count, classical content analysis, domain analysis, taxonomic analysis, and
componential analysis. Later on, they expanded the list to eighteen (Leech &
Onwuegbuzie, 2008:588). Dawson (2009:119-125) divided qualitative data analysis
into four components: thematic analysis, comparative analysis, content analysis and
discourse analysis. On the other hand, Madill and Gough (2008:257) categorised
methods of analysing qualitative data as discursive, thematic, structured and
instrumental. Drawing in part from Madill and Gough (2008:257), we have thus
framed our discussion of the different ways of doing qualitative data analysis into
these four groups. These categories are not mutually exclusive and it is possible to
combine some of these categories to illuminate the understanding of the
phenomenon under investigation and to achieve data analysis triangulation, as
conceptualised by Leech and Onwuegbuzie (2007).
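Two of the simpler techniques on Leech and Onwuegbuzie's list, word count and keywords-in-context (KWIC), can be sketched in a few lines of Python. The interview excerpt below is invented for illustration; this is a sketch of the idea, not a full analysis tool:

```python
from collections import Counter

# Hypothetical interview excerpt (invented for illustration)
text = ("access to records is limited because records staff are few "
        "and records storage is poor")
words = text.split()

# Word count: frequency of each word across the text
freq = Counter(words)
print(freq.most_common(2))  # 'records' should dominate

# Keywords-in-context (KWIC): show each keyword with its surrounding words
def kwic(words, keyword, window=2):
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            hits.append(" ".join(words[max(0, i - window):i + window + 1]))
    return hits

print(kwic(words, "records"))
```

Word counts flag which terms recur; KWIC then shows how each occurrence is used, guarding against counting a word whose meaning shifts with context.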

Although the list of Madill and Gough (2008) may not be exhaustive, it is sufficiently
comprehensive and offers a helpful framework for conceptualising and
understanding qualitative data analysis. Furthermore, a closer look at the various
data analysis techniques outlined by Leech and Onwuegbuzie (2007; 2008) reveals
a partial overlap with the categories outlined by Madill and Gough (2008), even
though they are articulated in other terms. The categories that were outlined by
Dawson (2009) partly cover the components of the typology of Madill and Gough
(2008). For the purpose of this chapter, one example of the data analysis procedure
under each category described by Madill and Gough (2008) is provided. The reader
is encouraged to further investigate the repertoire of data analysis procedures that
are mentioned in various subsections of this section.

Qualitative data analysis is at times guided by theory and theoretical concepts, as
explained in Chapter Four, but is “always shaped to some extent by the researcher’s
standpoint, disciplinary knowledge and epistemology” (Braun & Clarke, 2013:175).
Key questions that should inform qualitative data analysis and be asked continuously
during the process have been outlined by Hair, Jr and others (2011:282):
 What themes and common patterns are emerging that relate to the research
objectives?
 How are these themes and patterns related to the focus of the research?
 Are there examples of responses that are inconsistent with the typical
patterns and themes?
 Can these inconsistencies be explained or perhaps used to expand or redirect
the research?
 Do the patterns or themes indicate that additional data, perhaps in a new
area, needs to be collected? (If yes, then proceed to collect the data).
Earlier on, Henning, Van Rensburg and Smit (2004:106) pointed out that similar
questions may be asked after collected data has been coded and categorised, in
order for the qualitative researcher to see “the whole”.

8.5.1 Coding
According to Leech and Onwuegbuzie (2007:565), coding is a term used to refer to
the method of constant comparison analysis, which was conceptualised by the
fathers of grounded theory, Glaser and Strauss (1967). However, coding can be
used in other qualitative approaches, independently from grounded theory (Miles &
Huberman, 1994; Patton, 2002). Coding plays a key role in category identification in
qualitative data analysis (Williamson et al., 2013). It is a process that assists the
researcher to move data to a higher level of abstraction (Ng & Hase, 2008:159). The
use of codes was further reinforced by the increased use of computer management
programs in qualitative research (Grbich, 2013). The ability of computer programs to
process qualitative data mainly depends on how the data is codified into words,
phrases, sentences or paragraphs.

The aim of coding is to “break down and understand a text and to attach and develop
categories and put them into an order in the course of time” (Flick, 2002:178). The
coding process involves the grouping and labelling of segments of data. It also
assists in identifying and connecting bits of data. Miles and Huberman (1994:56)
state that:

Codes are tags or labels for assigning units of analysis to the descriptive or
inferential information compiled during a study. Codes are attached to “chunks” of
varying size – words, phrases, sentences, or whole paragraphs, connected or
unconnected to a specific setting.

This definition seems to suggest that although there are various qualitative research
approaches, there is not much variation when it comes to the process of coding data
or what Miles and Huberman (1994) refer to as data reduction. The fact that coding
is one of the most commonly used qualitative data analysis methods may have
contributed to the misconception by many qualitative researchers that constant
comparison analysis or coding is the only data analysis technique (see Leech &
Onwuegbuzie, 2007).
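Miles and Huberman's definition of codes as tags attached to "chunks" of data can be made concrete in a few lines of code. The chunks and code labels below are invented for illustration; in practice the chunks come from transcripts and the labels from the researcher's codebook:

```python
# Each chunk is a unit of analysis (here, a sentence) with researcher-assigned codes
chunks = [
    {"text": "We lost files when the office flooded.", "codes": ["storage", "risk"]},
    {"text": "Nobody knows who is responsible for filing.", "codes": ["responsibility"]},
    {"text": "Paper records pile up in the storeroom.", "codes": ["storage"]},
]

# Retrieval: gather all chunks tagged with a given code
def chunks_for(code):
    return [c["text"] for c in chunks if code in c["codes"]]

print(chunks_for("storage"))
```

Note that a chunk may carry several codes at once, which is what allows the analyst to compare everything said about, say, storage, regardless of where it appears in the data.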

Data analysis in various qualitative research approaches begins with coding
(Saldaña, 2009; Liamputtong, 2013). The stages of data analysis are similar, as the
process is iterative (i.e. moving backwards and forwards), revolving around the
research questions or theoretical frameworks identified from the literature and
reducing the data into segments and groupings, which are finally linked to the
literature and theory as data are interpreted. Figure 8.1 below gives examples of
some of the qualitative designs which use coding as the initial step in analysing data.

[Figure 8.1 depicts data coding and constant comparison as the shared initial step across several qualitative designs: grounded theory, discourse analysis, phenomenological analysis, content analysis, action research and critical ethnography.]

Figure 8.1: Examples of qualitative research designs that employ coding

There are a number of coding options available to researchers. Grbich (2013)
suggested the following options:
 Coding and developing themes or major codes manually or using a computer
program;
 Developing themes through thematic analysis and then coding the data
around the themes;
 Summarising and presenting data with minimal coding; and
 Using research questions to develop broad themes.
The last two coding options apply mostly to the phenomenology and auto-
ethnography research designs.
According to Boyatzis (1998: x-xi), elements of a good code include the following:
 Labels;
 Definitions of what each theme concerns;
 Descriptions of how to know when the theme occurs;
 Descriptions of any qualification or exclusions to identifying themes; and
 Examples to eliminate possible confusion when looking for themes.

These elements assist researchers to maintain an audit trail, in order to demonstrate
the trustworthiness and credibility of a study, as described in section 8.8.
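Boyatzis's five elements map naturally onto a structured codebook entry. The sketch below encodes them as a Python dataclass; the example entry and its wording are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CodebookEntry:
    """One code, structured around Boyatzis's five elements of a good code."""
    label: str        # 1. a label
    definition: str   # 2. what the theme concerns
    indicators: str   # 3. how to know when the theme occurs
    exclusions: str   # 4. qualifications or exclusions
    examples: list = field(default_factory=list)  # 5. examples to avoid confusion

storage = CodebookEntry(
    label="STORAGE",
    definition="References to where and how records are kept.",
    indicators="Mentions of shelves, storerooms, servers, filing space.",
    exclusions="Exclude talk about retrieval unless storage is also mentioned.",
    examples=["Paper records pile up in the storeroom."],
)
print(storage.label, "-", storage.definition)
```

Writing each code out in this form is itself part of the audit trail: a second coder can apply the code from the entry alone, which is one practical test of its quality.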

Bryman (2012) provides a useful list of coding steps that a researcher may use as a
starting point (see figure 8.2 below).

[Figure 8.2: a sequence of coding steps linked by two-way arrows, beginning with “Commence coding while collecting data”.]

Figure 8.2: Coding steps (Adapted from Bryman, 2012:575-577; see also Saldaña,
2009)

The arrows in figure 8.2 above are facing in both directions, in order to highlight the
fact that the process of coding is recursive or iterative. Codes are usually revised
and refined during analysis in a backwards and forwards fashion. Readers should
note that it is possible to use codes in more than one category. The researcher
constantly compares the categories in search of meaning. It is noteworthy that
categories in grounded theory are developed from data, in contrast to categories in
content research or citation analysis, where they are at times predetermined or
“brought to the empirical material” (Flick, 2002:190). Content research is discussed
in greater detail in section 8.5.4.1.

In a nutshell, the major tasks associated with coding include sampling, identifying
themes, building codebooks, marking texts, constructing models and testing the
models (Ryan & Bernard, 2000). Sampling involves identifying texts that are
analysed; identifying themes, which entails deriving themes from the data or the
literature; building codebooks, which include a listing of codes and their definitions;
marking texts, which refers to assigning codes to units of text; construction of
models, which establishes the link between themes and concepts; and testing of
models constructed in the previous step, which is the final task in coding.

At times, qualitative researchers use codes and themes interchangeably. This
tendency is understandable if one takes into consideration the fact that qualitative
data analysis mainly involved the identification of dominant themes or thematic
analysis, before Glaser and Strauss (1967) introduced the concept of coding.

However, Grbich (2013:259) cautions researchers to be “transparent about how they
are using such labels in the data analytic process”. Thematic analysis may be
undertaken without coding. Some researchers use codes to develop themes and
others start with themes in order to come up with codes. On the other hand, some
researchers use only themes or only codes.

8.5.2 Memos
Glaser (1978) points out that memos are fundamental to doing grounded theory,
because they make the researcher stop and analyse the data and codes from the
start of the research. However, memos may be used in various qualitative contexts
(Harding, 2013:110). Memos can serve as an audit trail of the research process.
They give details of what the researcher was doing during the process. Memos
enable researchers to step back from the data and move beyond codes as they think
aloud reflectively and conceptually (Miles & Huberman, 1994:72). Writing memos
assists the researcher to move directly into an analysis of the data and to
systematically examine, explore and elaborate on bits of data and early codes
(Charmaz, 1990:1169).

The main types of memos include procedural memos and analytic memos (Myers,
2009). Procedural memos provide a trail of the research process. They help a
researcher to be accountable and transparent during the research process, as
described in section 8.8. Analytical memos are the researcher’s commentary on
what the data may mean, as they are notes added to coded segments. They are
tools for developing concepts and themes. Analytical memos may be categorised as
code memos and theoretical memos. Code memos are utilised in open coding, while
theoretical memos are used in axial and selective coding.

8.5.3 Thematic data analysis


Thematic analysis is “possibly the most widely used method of data analysis, but not
“branded” as a specific method until recently” (Braun & Clarke, 2013:175). Thematic
data analysis procedures are related to qualitative methods such as grounded
theory, framework analysis, interpretative phenomenological analysis, critical
ethnography and template analysis (Madill & Gough, 2008). Thematic analysis is
considered to be the foundational approach to qualitative data analysis (Braun &
Clarke, 2006; Williamson et al., 2013). As explained in section 8.5.1, coding was only
introduced as a concept in qualitative data analysis in the 1960s.

Thematic analysis is “A method for identifying themes and patterns of meaning
across a dataset in relation to a research question...” (Braun & Clarke, 2013:175).
Flick (2002) identifies two types of thematic analysis, namely theoretical coding and
thematic coding. Theoretical coding was developed by Glaser and Strauss (1967) to
analyse data gathered for developing a grounded theory. Thematic coding differs
from theoretical coding, although it is premised on the same assumptions. The
process starts with specific data that is then transformed into categories and themes.
The conclusions are drawn based on observations from the transformed data.

The focal point of thematic analysis is category coding. Byrne (2001:904) suggests
that thematic analysis is analogous to sorting a box of buttons by grouping them
according to “size, number of holes, color, or type”.

8.5.3.1 Thematic data analysis as an example of thematic coding


Few texts provide guidelines on how themes are identified (Grbich, 2013). In contrast
to grounded theory, thematic analysis does not include theoretical sampling
(Liamputtong, 2013). Repetition of terms and typologies may assist in generating
analytic patterns or themes (Braun & Clarke, 2006:86).

The aim of thematic analysis is to generate thematic domains, in contrast to
developing core categories, as in theoretical coding. A case is analysed to determine
themes and the themes that emerge are used to analyse further cases in a
comparative fashion. Flick (2002:211) states that the focus of thematic analysis is
“on conducting case studies and only at a later stage is attention turned to
comparing and contrasting cases”. Open and selective coding may be used to
analyse the first case, and the themes that emerge are used as a basis for
comparison with further cases. The steps in thematic analysis are outlined in figure
8.3 below.

[Figure 8.3 outlines the steps in thematic analysis as an iterative sequence: transcribe; take note of items of interest; code across the entire data set; search for themes; review themes by mapping provisional themes and their relationships; define and name themes; finalise the analysis.]

Figure 8.3: Steps in thematic analysis (Adapted from Braun and Clarke, 2013; 2006)

However, Harding (2013:112) suggests four steps that are involved in analysing
themes:

 Identifying the theme and creating a category;
 Collating codes from different illustrative issues into the category;
 Creating sub-categories to reflect different elements of the themes; and
 Utilising the themes to explain relationships between different parts of the
data and building theory.

Although the steps in analysing themes differ in the two examples given above, it is
clear that there is an overlap between the steps provided in these examples.
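Harding's four steps can be sketched as a small routine that collates codes into theme categories. The themes, codes and data extracts below are invented for illustration:

```python
# Steps 1-2: identify themes (categories) and collate related codes into them
themes = {
    "infrastructure": ["storage", "equipment", "funding"],
    "people":         ["responsibility", "training"],
}

# Hypothetical coded segments (code -> list of data extracts)
coded = {
    "storage":        ["Paper records pile up in the storeroom."],
    "training":       ["Staff were never shown the filing system."],
    "responsibility": ["Nobody knows who is responsible for filing."],
}

# Step 3: each code becomes a sub-category within its theme
# Step 4: the collated view supports explaining relationships across the data
def collate(theme):
    return {code: coded.get(code, []) for code in themes[theme]}

people = collate("people")
print(sorted(people))
```

An empty sub-category (here, "equipment") is itself informative: it shows where a provisional theme is not yet supported by the data.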

8.5.3.2 Grounded theory data analysis as an example of theoretical coding


Regardless of which grounded theorist’s approach is adopted, grounded theory data
analysis comprises three phases of coding, as illustrated in table 8.2 below. Both the
Glaserian and Straussian grounded theorists agree that the aim of grounded theory
is to “... generate core concepts and develop a theoretical framework that specifies
their interrelationships” (Parker & Roffey, 1997:222). The data analysis phases
include the following (Mills, Birks & Hoare, 2014):

 initial coding;
 intermediate coding; and
 advanced coding.

Holton (2007) refers to the first two phases as substantive coding. On the other
hand, Charmaz (1995) collapses the last two phases of coding into what may be
loosely termed focused coding. Tan (2010: 102) states that: “Glaser and Strauss
(1967) originally did not clearly name the data analysis process as open coding or
theoretical coding, but emphasised the constant comparative method for generating
theory”. “Substantive coding”, which is comparing incident to incident to generate
categories and comparing new incidents to these categories; “theoretical coding”,
which is conceptualising how the substantive codes may relate to each other as
hypotheses to be integrated into a theory; and “coding families” as the analyst’s
coding procedure, were only introduced by Glaser (1978:72) in later works.

The traditional and evolved grounded theorists denote the initial phase of coding as
open coding, while the constructivists prefer the term initial coding. The traditional,
evolved and constructivist grounded theorists regard intermediate coding as
selective, axial and focused coding respectively. Advanced coding, which is the third
and last phase in the process of data analysis, is known as theoretical coding by the
traditional and constructivist grounded theorists, but the evolved grounded theorists
view it as selective coding. Table 8.2 summarises the conceptualisation of these
phases of data coding by the various grounded theorists.

Table 8.2: Phases of grounded theory coding

Genres of Grounded Theory Initial Intermediate Advanced


Traditional (Glaser and Strauss, 1967) Open coding Selective coding Theoretical coding
Evolved (Corbin and Strauss, 2008) Open coding Axial coding Selective coding
Constructivist (Charmaz, 2006) Initial coding Focused coding Theoretical coding
Adapted from Birks and Mills (2011) and Mills, Birks and Hoare (2014)

Flick (2002) cautions that the phases should not be treated as distinct, as they are
just different ways of dealing with textual sources. The phases are used to handle
data in a linear or iterative manner. Ultimately, “data are broken down,
conceptualised, and put together in new ways” (Strauss & Corbin, 1990:57). The
researcher may move back and forth between the phases as he or she interprets the
data. The process of data analysis involves constant comparison of the differences
and similarities in the data, in order to identify themes and patterns. This
comparison of incidents or indicators continues until theoretical saturation is
reached, that is, when no new codes are being generated.
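To make the constant-comparison loop concrete, here is a minimal sketch (Python). The incident texts, the `share_word` matcher and the three-incident saturation rule are invented toys for illustration, not part of any grounded theory software; real comparison is an interpretive act performed by the researcher.

```python
from typing import Callable

def constant_comparison(incidents: list[str],
                        matches: Callable[[str, list[str]], bool],
                        saturation_run: int = 3) -> dict[str, list[str]]:
    """Group incidents into categories by constant comparison.

    A new category is opened whenever an incident matches no existing
    category; saturation is flagged once `saturation_run` consecutive
    incidents fit existing categories without generating new ones.
    """
    categories: dict[str, list[str]] = {}
    run_without_new = 0
    for incident in incidents:
        placed = False
        for label, members in categories.items():
            if matches(incident, members):   # compare incident to category
                members.append(incident)
                placed = True
                break
        if placed:
            run_without_new += 1
        else:                                # no fit: open a new category
            categories[incident] = [incident]
            run_without_new = 0
        if run_without_new >= saturation_run:
            print("theoretical saturation reached")
            break
    return categories

# Toy matcher: incidents sharing any word belong to the same category.
def share_word(incident: str, members: list[str]) -> bool:
    words = set(incident.split())
    return any(words & set(m.split()) for m in members)
```

The sketch only mimics the mechanics; deciding what counts as a "match" and when saturation has genuinely been reached remains the analyst's judgement.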

a) Initial coding (descriptive codes)


The initial phase of coding that transforms data into codes is known as open coding
or initial coding. It is the “initial step of theoretical analysis that pertains to the initial
discovery of categories and their properties” (Glaser, 1992:39). Journal notes,
interviews and observations are broken down into phrases and keywords. Open
coding aims at describing the overall features of the data “by breaking down,
analysing, comparing and categorising the data” (Eriksson & Kovalainen, 2008:160-161). It expresses data in the form of concepts (Flick, 2002:177). As discussed in
Chapter Four, concepts are the building blocks of theory. Strauss and Corbin (1990)
reiterate this point when defining open coding:

Concepts are the building blocks of theory. Open coding in grounded theory method
is the analytic process by which concepts are identified and developed in terms of
their properties and dimensions. The basic analytic procedures by which this is
accomplished are: the asking of questions about the data; and making of
comparisons for similarities and differences between each event and other
instances of phenomena. Similar events and incidents are labelled and grouped to
form categories (Strauss & Corbin, 1990:74).

Coding may be applied to a text line by line (see Charmaz, 1995:39), sentence by
sentence or paragraph by paragraph, or a code may be linked to the whole text
(Flick, 2002:178). The advantage of line-by-line coding is that it “helps you to refrain
from imputing your motives, fears or unresolved personal issues to your respondents
and to your collected data” (Charmaz, 1995:37). The approach that a researcher
uses will be determined by the content and kind of text being coded.
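The mechanics of attaching descriptive codes line by line can be sketched as follows (Python). The keyword-to-code mapping is an invented toy: real open coding is an interpretive act, not a lookup, so treat this only as an illustration of how coded segments might be recorded.

```python
def assign_code(line: str, keyword_codes: dict[str, str]) -> str:
    """Toy coder: return the first descriptive code whose keyword appears
    in the line, or 'uncoded' so the analyst can revisit the segment."""
    lowered = line.lower()
    for keyword, code in keyword_codes.items():
        if keyword in lowered:
            return code
    return "uncoded"

def open_code(transcript: str,
              keyword_codes: dict[str, str]) -> list[tuple[str, str]]:
    """Pair every non-empty line of a transcript with an initial code."""
    return [(line.strip(), assign_code(line, keyword_codes))
            for line in transcript.splitlines() if line.strip()]

# Hypothetical codes and a two-line transcript fragment.
codes = {"wait": "delayed service", "nurse": "staff interaction"}
segments = open_code("I waited two hours.\nThe nurse apologised.", codes)
```

Segments left as "uncoded" mark places where the researcher must return to the data, which is closer to how open coding actually proceeds.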

Flick (2002:180) and Bohm (2004:271) provide a useful list of questions that may be
asked about the data when coding:

 What is the concern here?
 Which phenomenon is mentioned?
 Which actors are involved?
 What roles do they play?
 How do they interact?
 Which aspects of the phenomenon are mentioned or not addressed?
 When? How long? Where? (i.e. time, course and location).
 How intense or strong?
 What reasons are given or can be reconstructed?
 With what intentions, to which purpose?
 What means, tactics and strategies can be used to achieve the goal?

Charmaz (2003) also suggests asking more or less the same questions when coding
data at this stage. All these questions appear to be based on the set of questions
about the data formulated by Glaser (1998:140). Glaser (2004) suggests that
researchers should not proceed to selective coding (the intermediate phase in the
traditional genre; see Table 8.2) before a potential core category has emerged through theoretical
sampling. A core category or substantive code relates to many other categories and
their properties, and can explain their variation in a pattern of behaviour (Holton,
2007).

b) Intermediate coding (interpretive codes)


Open coding may result in hundreds of codes (Strauss & Corbin, 1995:65),
depending on the data being analysed. Intermediate coding is applied to condense
and discern the categories of codes through constant comparison. As illustrated in
Table 8.2 above, intermediate coding is also referred to as selective, axial and
focused coding (Mills et al., 2014). The concern at this stage is to establish linkages
and connections between categories, in order to understand the phenomenon to
which they relate and determine whether the data support emerging categories (Holton, 2007; Curtis & Curtis, 2011). This process continues until saturation point has been
reached. The constant comparison process feeds into theoretical sampling, a
process that enables the researcher to make a decision regarding what data to
collect next, in order to substantiate the emerging theory (Holton, 2007). Diagrams,
including matrices, tables, concept maps and cross tabulations, are visual tools that
may be utilised in mapping the relationship between categories.

c) Advanced coding (theoretical codes)


Advanced coding is variously known as theoretical coding or selective coding, as
shown in Table 8.2 above. Novice researchers experience considerable difficulty when
conducting the theoretical coding process (Holton, 2007; Hernandez, 2009). The
core code is selected from all those identified in the first two stages of
data analysis. The formulation of a theory and theoretical integration happen during
this final phase. All substantive codes/categories are related to the core category by
the theoretical code. Theoretical sensitivity, that is, the ability to generate concepts
from the data and relate them to theory, assists in conceptual integration (Glaser,
1978:1-17). Theoretical sensitivity may be improved partly through wide reading (Glaser,
1998:164-165).

The storyline is a tool that is used for theoretical integration (Mills et al., 2014). The
explanatory power of the storyline is enhanced through the use of theoretical codes,
which “are advanced abstractions that provide a framework for enhancing the
explanatory power of the storyline and its potential as theory” (Birks & Mills,
2011:123). Researchers may formulate predictive statements about the phenomenon
under study at this stage, although some may decline to do so because they
subscribe to an ontology which holds that social life is not predictable.

8.5.4 Structured data analysis


Structured data analysis procedures are related to qualitative methods such as
critical ethnography and hermeneutics. For instance, structured data analysis
methods are employed in content analysis, vignettes, Q-methodology and protocol
analysis. These methods mainly transform qualitative data into numbers
based on a coding scheme.

8.5.4.1 Content analysis


Created by journalists and then adopted by social scientists, content analysis is a
research technique that collects and analyses data from texts and messages that are
communicated in various ways, including books, newspapers and other physical
media (Curtis & Curtis, 2011). Stated differently:

Content analysis is a systematic coding and categorising approach you can use to
explore large amounts of existing textual information in order to ascertain the trends
and patterns of words used, their frequency, their relationships and structures,
contexts and discourses of communication (Grbich, 2013:190).

Content analysis or content research is “conceptually and logically straightforward”
(Curtis & Curtis, 2011:215). Content analysis may either be characterised as
enumerative content analysis or ethnographic content analysis. Ethnographic
content analysis is concerned with analysing documents for significance and
meaning, whereas in enumerative content analysis, the major concern is the
frequency of words and categories, including concordance and co-occurrence
(Grbich, 2013; Liamputtong, 2013). Qualitative data analysis (QDA) programs such
as MAXQDA, NVivo and WordStat 6.1 may be used to conduct content analysis.

Patterns are identified and interpreted in doing content research or content analysis.
Enumerative content analysis, or what Liamputtong (2013:246) refers to as
traditional quantitative content analysis, mainly transforms qualitative data into
quantitative forms. The technique tends to overlook the latent or covert elements of
messages or text, as its focus is mainly on the obvious elements that can be counted
(Curtis & Curtis, 2011). The technique can only describe social relations; it cannot
explain them. McNabb (2002:414) describes the major advantage and
disadvantage of content analysis as follows:

it provides the researcher with a structured method for quantifying the contents of a
qualitative or interpretive text, and does so in a simple, clear, and easily repeatable
format. Its main disadvantage is that it contains a built-in bias of isolating bits of
information from their context. Thus, the contextual meaning is often lost or, at the
least, made problematic.
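The counting at the heart of enumerative content analysis can be illustrated with a minimal sketch (Python). The coding scheme and documents below are invented toys; a real study would use a validated scheme, often supported by software such as NVivo or WordStat.

```python
import re
from collections import Counter
from itertools import combinations

def code_frequencies(documents: list[str], scheme: dict[str, set[str]]):
    """Count how often each predefined category's keywords occur,
    and how often pairs of categories co-occur in the same document."""
    freq = Counter()
    cooccur = Counter()
    for doc in documents:
        words = re.findall(r"[a-z']+", doc.lower())
        present = set()
        for category, keywords in scheme.items():
            hits = sum(1 for w in words if w in keywords)
            if hits:
                freq[category] += hits
                present.add(category)
        # Every pair of categories found in the same document co-occurs.
        for pair in combinations(sorted(present), 2):
            cooccur[pair] += 1
    return freq, cooccur

# Hypothetical coding scheme and documents for illustration only.
scheme = {"access": {"waiting", "queue", "distance"},
          "staff":  {"nurse", "doctor", "staff"}}
docs = ["The waiting queue was long and the nurse was busy",
        "Staff were friendly",
        "Distance to the clinic is a problem"]
freq, cooccur = code_frequencies(docs, scheme)
```

Note how the sketch embodies McNabb's criticism: the counts strip each keyword from its context, so the contextual meaning is lost unless the analyst returns to the documents.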

Conducting content analysis should be partly based on considerations suggested by
Grbich (2013:190). Thus, researchers should ensure that they:

 Have a sufficient number of documents and determine the aspects of the documents to be analysed;
 Establish the sampling approach when selecting documents;
 Decide on the level of analysis to be done;
 Decide on how the codes will be generated;
 Consider the relationships between concepts, codes and contexts;
 Record the number of times categories appear; and
 Ascertain the reliability of the coding scheme.

The decoder focuses on the predetermined themes or propositions that are
presumed to be in the data or messages.
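Grbich's final step, ascertaining the reliability of the coding scheme, is often operationalised as intercoder agreement. The following is a minimal sketch of Cohen's kappa (Python); it assumes two coders assign exactly one category per segment, which is a simplification of real coding work.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders assigning one category per segment:
    observed agreement corrected for agreement expected by chance.

    Degenerate case: if both coders use a single identical category
    throughout, expected agreement is 1 and kappa is undefined.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Proportion of segments on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal category frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Values near 1 indicate strong agreement and values near 0 indicate chance-level agreement; published CAQDAS packages compute the same statistic, so this sketch is only for understanding what the software reports.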

Ethnographic content analysis (ECA) may also start with predetermined categories
and themes, like enumerative content analysis, but some categories may emerge
during inductive analysis, as Bryman (2012:559) explains: “there is greater potential
for refinement of those categories and the generation of new ones”.

8.5.5 Discursive data analysis methods


Discursive data analysis procedures are related to qualitative methods such as
critical ethnography, historical research and hermeneutics. Flick (2002) also refers to
discursive data analysis methods as sequential data analysis procedures. Discursive
methods are used to analyse texts, for instance in discourse analysis and semiotic
analysis.

8.5.5.1 Discourse analysis


Discourse analysis is based on social constructivist assumptions. The fundamental
question is framed around how social reality can be understood and explained by
investigating discourses about certain situations and processes. LeGreco (2014:69)
provides an instructive list of handbooks and introductory texts for discourse
analysis. The ability of discourse analysis to deal with language, dialogue and texts
has persuaded scholars in fields such as anthropology, communication, sociology,
psychology, media studies, rhetoric, education, linguistics and health sciences to
explore its role in framing social patterns and practices (Liamputtong, 2013).
Researchers doing discourse analysis mainly deal with talk and text. Discourse
analysis is close to conversation analysis, another method of studying verbal and
non-verbal interaction in context.

There are more than fifty ways of doing discourse analysis (Liamputtong, 2013).
However, the two dominant discourse analysis approaches are Foucauldian
discourse analysis or post-structuralist discourse analysis (Braun & Clarke, 2013),
which is based on the work of Michel Foucault, and critical discourse analysis, which was developed by
Norman Fairclough (Grbich, 2013). According to Foucault (1972:49), discourses are
not about objects; they do not identify objects, they constitute them and in the
practice of doing so conceal their own invention.

Power is fundamental to the creation and sustenance of knowledge in society, and
any discourse should be understood in the context of power relations. On the other
hand, critical discourse analysis is based on the interaction of the text, discursive
practices and the social context (Fairclough, 2000).

Eriksson and Kovalainen (2008) and Braun and Clarke (2013) write of a third variety
of discourse analysis which is prevalent, especially in business and psychology
research. The third version of discourse analysis, which draws on constructionist
psychology and social psychology, is called social psychological discourse analysis.
The aim is to show “how social interaction is performative and persuasive”, and it is a
“negotiation about how we should understand the world and ourselves” (Eriksson &
Kovalainen, 2008:232).

There are many techniques of transcription in discourse analysis, but the
Jeffersonian system is gaining popularity (LeGreco, 2014:74). The analysis of text is
concerned with dialogue structures, discursive practices and conversation strategies
in social settings (LeGreco, 2014:74). Discourse tracing is an important technique for
data analysis (LeGreco & Tracy, 2009). Although the procedures for conducting
discourse analysis are greatly contested, the steps for using discourse analysis,
which are provided by Gill (2000), are instructive (see figure 8.4).

Figure 8.4: Steps for using discourse analysis (Adapted from Gill, 2000:178-179)

8.5.6 Instrumental data analysis methods


Instrumental approaches employ a variety of methods to fulfil “an overarching or ethical
commitment” (Madill & Gough, 2008:259). For instance, ethnography may utilise
discourse theory and forms of thematic analysis, even if it is committed to naturalistic
inquiry. Other examples of qualitative methods that employ instrumental procedures
of data analysis include ethnomethodology, feminist research, visual methodologies,
action research and media framing analysis (Madill & Gough, 2008).

8.5.6.1 Action research


Action research is different from other qualitative research designs because it is
action-oriented, solves current problems and empowers the research participants,
while extending the frontiers of knowledge. It moves the participants from the realm
of being mere subjects to a sphere where they are empowered to understand and
positively change their situation. It is an iterative research design proceeding through
stages of planning, action and review (Dick, 2014: 51). Action research is prevalent
in education, organisational change, community change and farmer research (Dick,
2014:52).

Data analysis is done collectively and collaboratively with research participants.
Some researchers suggest that even when one is not doing action research as such,
it is worth checking one’s draft analyses and interpretations with participants (Chilisa,
2012). Undertaking member checking with the participants to determine whether the
themes, arguments or assertions developed from the codes accurately reflect their
sentiments enhances descriptive validity (Maxwell, 2005). However, member
checking in action research mainly leads to an understanding of a certain situation,
and informs action. Some researchers use grounded theory and constant
comparison analysis for data analysis in action research (Dick, 2014:52).

8.6 Computer-based analysis
Qualitative data analysis (QDA) or computer-aided qualitative data analysis software
(CAQDAS) programs are increasingly used in the analysis of qualitative data, and
they have improved the process of qualitative data analysis. However, computer-
based qualitative data analysis software programs have not yet gained full
acceptance (Cambra-Fierro & Wilson, 2010:17), despite their potential. Computers
may be used in activities such as making field notes, writing up or transcribing notes,
editing, sorting, coding, data linking, memoing, storing, searching, indexing and
retrieving qualitative materials.

There are hopes, fears and fantasies associated with these technologies (Flick,
2002:250; Curtis & Curtis, 2011:51). Some fear that technology might distort
qualitative research practice. This fear is unfounded, and Flick (2002) regards it as a
phantasm because QDA software does not conduct the analysis; the researcher
still does the coding. Researchers should be cognisant of the fact that “nothing
takes the place of the researcher’s inductive analysis of the raw data” (McMillan &
Schumacher, 2014:395). Moreover, “creating a coding framework and making decisions
about the role of coding in a project still necessitates a great deal of conversation
and debates” (Adair & Pastori, 2011:32).

Current theory-generating computer programs, which use two-file systems,
developed from code-and-retrieve QDA programs, which had single-file systems that
simply stored and retrieved data (Grbich, 2013). There are at least 30 different software choices
(McMillan & Schumacher, 2014:409). The most popular products for social and
management science researchers are NVivo, previously NUD*IST, and ATLAS.ti
(Myers, 2009). ATLAS.ti software uses the grounded theory and theoretical coding
approaches in modelling the data (Flick, 2002). It can interface with statistical
packages such as SPSS©. It has the same capabilities as NVivo.

Paulus, Lester and Dempster (2014) provide a useful list of questions that may guide
a researcher when selecting a CAQDAS system:

 What features will support my analytical approach?
 Does the system allow me to annotate, link, search, code and visualise data?
 How does the software assist in data management?
 What are the benefits and constraints of the software package?

Grbich (2013:285) suggests that MAXQDA is a better program than the two in terms
of “capability, stability, ease of learning and use”. However, it is important to use a
product that is supported by the researcher’s institution, in order to take care of the
licensing fees, which at times may be high. Equally important is to determine the
purpose for which the software program was specifically developed. For instance, programs developed
from an ethnographic or grounded theory perspective may not be suitable for
analysing qualitative data from other contexts. Myers (2009) advises against the use
of QDA software for hermeneutics and narrative analysis, partly for the same reasons.

8.7 Interpretation of qualitative data: illustrating assertions and interpretations through results
Interpretation is the terminal phase of qualitative inquiry (Denzin & Lincoln,
2005:909). Discussing the interpretation of data in qualitative research is difficult, as it
is a hotly contested area. The major challenge in discussing the interpretation of
qualitative data stems from the fact that interpretation is regarded as an art that is
not amenable to formal rules, as the “processes that define the practices of
interpretation and representation are always ongoing, emergent, unpredictable, and
unfinished” (Denzin & Lincoln, 2005:909). However, with reference to Liamputtong
and Ezzy (2005), we have attempted to formalise the notion of interpretation.

The interpretation of data is the core of qualitative research (Flick, 2002:176). This
phase entails the assessment, analysis and interpretation of the empirical evidence
that has been collected. The different points of view of the participants are presented
in sufficient detail and depth, so that the reader may be able to gauge the accuracy
of the analysis. Stated differently, a thick description is presented in the form of an
“analytical narrative” (McMillan & Schumacher, 2014:361). The data are used to
illustrate and validate the interpretation of the data. Pertinent words and comments
of the participants are usually quoted. Verbatim quotations of the participants assist
in “revealing how meanings are expressed in the respondents’ words rather than the
words of the researcher” (Baxter & Eyles, 1997:508).

Chenail (2012:1) cautions that qualitative researchers should be able to refer to their
original data and be able “to construct evidence of the code from the data”.
Furthermore, qualitative researchers should say neither more nor less than what the
data before them show, as they:

need to be aware of making errors of deficiency and exuberance in reporting our
qualitative analysis of the quality we create from the data. By deficiency I mean
“Don’t try to say less than what the data show” and by exuberance I mean, “Don’t
try to say more than what data show.” (Chenail, 2012:1).

The researcher’s perceptions, biases and personal beliefs should also be accounted
for. In other words, the interpretation “includes the voices of participants, the
reflectivity of the researcher, and a complex description ... of the problem” (Creswell,
2007:37). Points related to previous research are good candidates for the
interpretation section of a qualitative report. Although qualitative presentations are
mainly in narrative form, statistical tools such as descriptive statistics may be used to
summarise data. The conclusions should be consistent with the findings. The
synthesis of the findings may be followed by the suggestion of a model or theory.
Models and theories were the subject of Chapter Four.

8.8 Criteria for evaluating qualitative research


Some scholars question the value of qualitative research (see Biesta, 2007;
Hammersley, 2007). It is noteworthy that:

much of the pressure for qualitative criteria comes not so much from the context of
researchers judging research, or even students learning to do this, but rather from
that of lay ‘users’ of research (notably policymakers and practitioners) assessing its
quality (Hammersley, 2007:289).

There is a perception that qualitative research is not rigorous, as the claim is that its
methods and processes are not rigidly controlled, unlike quantitative research, which
is subject to strict rules and standards in relation to the methodology that is used
(Liamputtong, 2013; Birks, 2014; Hammersley, 2007). Rigour is fundamental to any
research enterprise because it addresses matters of the quality of the research,
including the analysis and interpretation of generated data. Tobin and Begley
(2004:390) write that: “Without rigour, research is worthless, becomes fiction and
loses its utility”.

Although qualitative research is characterised by methodological pluralism, as
alluded to in section 8.3, Hammersley (2008), Spencer et al. (2003) and Tracy
(2010) attempted to identify criteria by which qualitative research may be assessed.
There is no agreement on whether or not a single set of criteria for assessing rigour
in qualitative research is possible, as a result of the multitude of research designs
used in qualitative research. However, Hammersley (2007:300) advises that
guidelines for evaluating qualitative research are useful, as long as they do not
become “a substitute for the practical capacity to assess research”. On the other
hand, Tracy (2010:837) states that any model for gauging quality in qualitative
research should “leave space for dialogue, imagination, growth and improvisation”.

With reference to Denzin and Lincoln (2005), we suggest criteria for evaluating
qualitative research that examine the results in relation to the foundations of truth
and knowledge, or the nature of reality (i.e. epistemology and ontology,
respectively). The knowledge claims criteria are based on what Denzin and Lincoln
(2005:909) term the foundationalist, quasi-foundationalist and non-foundationalist
positions. Many researchers who are interested in understanding the
criteria for evaluating qualitative research have found the categories to be useful in
this regard.

Foundationalists are of the opinion that the same criteria that are applied to
quantitative research should be used to evaluate qualitative inquiry. In evaluating
qualitative research, foundational scholars use the variants of the classical criteria,
such as internal validity, external validity, reliability and objectivity, which are
embedded in the positivist and post-positivist paradigms (Denzin & Lincoln, 2005;
Hammersley, 2007).

Quasi-foundationalists maintain that the evaluative criteria should be unique and
rooted in the constructivist epistemology. Consequently, qualitative research should
be evaluated in terms of plausibility (i.e. is the claim plausible?), credibility (i.e. is the
claim informed by credible evidence?), and relevance (i.e. what is the claim’s
relevance to knowledge about the world?) (Hammersley, 1995). This typology
underscores the need for a study to demonstrate trustworthiness, as articulated by
Lincoln and Guba (1985).

Maintaining the audit trail of the research project may enhance the credibility,
plausibility, authenticity and dependability of a qualitative study. Although Cutcliffe
and McKenna (2004) argue to the contrary and elevate the expertise of the
researcher as a major determinant of credibility and authenticity, Birks (2014)
strongly argues that expertise does not negate “the need to demonstrate precision in
research work”. Maintaining an audit trail fosters procedural precision and
transparent accountability. It also enhances procedural reliability (Flick, 2002;
Liamputtong, 2013) and ensures the dependability of the data and the analysis
thereof. Recording the research activities, any changes to the research plans and
the reasons for any deviations and exceptions not only demonstrates
professionalism, but also gives readers confidence in your results (Birks, 2014)
and leads them to judge those results credible. Documenting data collection and analysis
processes ensures that qualitative research is not “limited to a mechanistic analysis
and reporting of content” (Cambra-Fierro & Wilson, 2010:18). Documenting all the
steps and decisions taken during data collection and analysis provides the
researcher with an opportunity to deal with and report on the data that “jump out” in
contradiction.

Non-foundationalism arises as a result of the influence of the feminist and
communitarian ethic of empowerment, community and moral solidarity. Its
proponents contend that empirical claims to knowledge cannot be evaluated
epistemologically, because social science research serves a moral and political
purpose in a given context (Denzin & Lincoln, 2005:911). In other words, valid research should address
context (Denzin & Lincoln, 2005:911). In other words, valid research should address
matters of social inequality and improve the lives of the marginalised. A framework
for evaluating qualitative research should provide some value-relevant criteria, which
can be used to judge the validity and trustworthiness of any qualitative research. Of
course, this constitutes a tacit critique of Hammersley (2007), who does not wish to
see the introduction of value criteria as part of the process of doing research.

8.9 Conclusion
We have demonstrated in this chapter that qualitative data analysis involves the
identification, examination, comparison and interpretation of patterns and themes.
We have shown that qualitative data analysis is different to quantitative traditions of
analysis, hence requiring a different approach to the one discussed in Chapter
Seven when making sense of the data. We established that there are as many
sources of qualitative data as there are types of approaches used to analyse
qualitative data. We have described a typology of qualitative data analysis which
includes discursive, thematic, structured and instrumental methods of data analysis.
Computer-based analysis of qualitative data was also explained. We then reflected
on the interpretation of qualitative data and concluded by discussing the criteria for
evaluating qualitative research.

Window into understanding qualitative data analysis methods


Activity 8.1
Take a few minutes to jot down a few words that describe your understanding of the various
qualitative data analysis procedures outlined in this chapter.
Activity 8.2
On the one hand, the qualitative methods commonly used in business and management are
action research, case study, ethnography and grounded theory (Myers, 2009:29). On the other hand,
content analysis, grounded theory and discourse analysis are establishing themselves in the field of
psychology, as demonstrated by Carrera-Fernández, Guàrdia-Olmos and Peró-Cebollero (2014).
Trace the prevalent qualitative methods of data analysis in your field.
Activity 8.3
How do researchers achieve rigour in qualitative research?
Activity 8.4
To what extent can computer-based analysis eventually replace researchers in the inductive analysis
of qualitative data?

References
Adair, J.K. & Pastori, G. (2011). Developing qualitative coding frameworks for
educational research: immigration, education and the Children Crossing
Borders project. International Journal of Research and Methods in Education,
34(1): 31-47.
