https://www.researchtoaction.org/2020/06/gap-analysis-for-literature-reviews-and-advancing-useful-knowledge/
The basics of research are seemingly clear. Read a lot of articles, see what’s missing, and
conduct research to fill the gap in the literature. Wait a minute. What is that? “See what’s
missing?” How can we see something that is not there?
Imagine you are videoconferencing with a colleague who is showing you the results of their project. Suddenly, the screen and sound cut out for a minute. After pressing some keys, you manage to restore the link, only to have your colleague ask, “What do you think?” Of course, you know that you missed something from the presentation because of the disconnection. You can see that something is missing, and you know what to ask for: “Sorry, could you repeat that last minute of your presentation, please?” It’s not so easy when we’re looking at research results, proposals, or literature reviews.
While all research is useful to some extent, we’ve seen a lot of research that does not have the expected impact. That means wasted time, wasted money, under-served clients, and frustration on multiple levels. A big part of the problem is that directions for research are often chosen intuitively, in a sort of ad-hoc process. While we deeply respect the intuition of experts, that kind of process is not very rigorous.
In this post, we will show you how to “see the invisible”: how to identify the missing pieces in any study, literature review, or program analysis. With these straightforward techniques, you will be able to target your research more cost-effectively, filling knowledge gaps to develop more effective theories, plans, and evaluations.
The first step is to choose your source material. That can be one or more articles, reports, or
other study results. Of course, you want to be sure that the material you use is of high quality.
Next, you want to create a causal map of your source material.
We’re going to go a bit abstract on you here because people sometimes get lost in the “content” when what we are looking at here is more about the “structure.” Think of it like choosing whether to buy a house based on how well it is built, rather than what color it is painted. So, instead of using actual concepts, we’ll refer to them as concepts A, B, C… and so on.
So, the text might say something like: “Our research shows that A causes B, B causes C, and D
causes less C. Oh yes, and E is also important (although we’re not sure how it’s causally
connected to A, B, C, or D).”
When we draw causal maps from the source material we’ve found, we like to have key concepts
in circles, with causal connections represented by arrows.
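If it helps to make this concrete, the example map above can be sketched as a small directed graph. This is purely illustrative (the names and the +1/-1 sign convention are our own, not part of the original method):

```python
# Illustrative sketch only: the example map from the text, encoded as a
# directed graph. Each arrow maps (cause, effect) to a sign:
# +1 means "causes more," -1 means "causes less."
concepts = {"A", "B", "C", "D", "E"}
edges = {
    ("A", "B"): +1,  # A causes B
    ("B", "C"): +1,  # B causes C
    ("D", "C"): -1,  # D causes less C
}
# E appears in the source material, but its causal connections are unknown,
# so it has no arrows at all.
```

Drawing the same thing on paper, with concepts in circles and arrows between them, works just as well; the point is simply that a causal map is a structure you can inspect systematically.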
There are really three basic kinds of gaps for you to find: relevance/meaning, logic/structure, and data/evidence. Starting with structure, there is a gap any place where two circles are NOT connected by a causal arrow. It is also important to have at least two arrows pointing at each concept/circle, for the same reason we like to have multiple independent variables for each dependent variable (although, with more complex maps, we’re learning to see these as interdependent variables).
For example, there is no arrow between A and D. Also, there is no arrow between E and any of
the other concepts. Each of those is a structural gap – an opening for additional research.
You might also notice that there are two arrows pointing directly at C. Like having two independent variables for one dependent variable, that is the structure we want: at least two arrows pointing at each concept.
To get the greatest leverage for your research dollar, it is generally best to search for that second arrow. In short, good research questions would be: 1) What (aside from A) has a causal influence on B? 2) Is there a causal relationship between A and D? 3) Is there a causal relationship between E and any of the other concepts? 4) What are the causes of A, D, and E?
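This search for structural gaps is mechanical enough to automate. A minimal sketch, assuming the same toy map as above (the function name and representation are ours, for illustration only):

```python
from itertools import combinations

# The example map: A causes B, B causes C, D causes less C; E is unconnected.
concepts = {"A", "B", "C", "D", "E"}
edges = {("A", "B"): +1, ("B", "C"): +1, ("D", "C"): -1}

def structural_gaps(concepts, edges):
    """Find the two kinds of structural gap described in the text:
    pairs of concepts with no arrow between them, and concepts with
    fewer than two incoming arrows."""
    # Pairs with no arrow in either direction are openings for research.
    connected = {frozenset(pair) for pair in edges}
    unconnected_pairs = sorted(
        tuple(sorted(pair))
        for pair in combinations(concepts, 2)
        if frozenset(pair) not in connected
    )
    # Concepts with fewer than two arrows pointing at them are
    # under-explained (like a dependent variable with one predictor).
    incoming = {c: 0 for c in concepts}
    for _cause, effect in edges:
        incoming[effect] += 1
    under_explained = sorted(c for c, n in incoming.items() if n < 2)
    return unconnected_pairs, under_explained

gaps, under = structural_gaps(concepts, edges)
```

Running this on the toy map flags the unconnected pairs (A and D, E and everything, and so on) and shows that only C has the desired two incoming arrows; each flagged item corresponds to one of the research questions listed above.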
Now, let’s take a look at gaps in the data, evidence, or information upon which each causal arrow is established. Here, we add to the drawing by making a note showing (very briefly) the kind of data supporting each causal arrow. We like to put that note in a box, with a loopy line tying the evidence to the connection. You can also use different colors to more easily differentiate between the concepts and the evidence on your map, or write the note along the length of the arrow.
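In the same illustrative spirit as before, those evidence notes can be attached to each arrow, and any arrow left without a note is an evidence gap. The note texts here are hypothetical placeholders, not real studies:

```python
# Hypothetical evidence notes attached to each causal arrow in the toy map.
# An arrow with no note (None) is an evidence gap worth researching.
evidence = {
    ("A", "B"): "survey results (hypothetical example note)",
    ("B", "C"): None,  # no supporting data found yet
    ("D", "C"): "case-study interviews (hypothetical example note)",
}

evidence_gaps = sorted(edge for edge, note in evidence.items() if not note)
```

On paper, the equivalent move is simply scanning the map for any arrow that has no evidence box tied to it.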
Finally, the gap in meaning (relevance) asks whether those studies were done with the “right” people. By this, we mean people related to the situation or topic you are studying. Managers, line workers, clients, suppliers, those providing related services: all of those and more should be included. Similarly, you might look to a variety of academic disciplines, drawing expertise from psychology, sociology, business, economics, policy, and others.
What participants or stakeholders are actually part of your research depends on the project.
However, in general, having a broader selection of stakeholder groups results in a better map.
This applies to choosing what concepts go on the map and also who has been contacted for
interviews and surveys.
All three of these gaps (gaps in structure, data, and stakeholder perspectives) can and should be addressed to help you choose more focused directions for your research, generating results that will have more impact. As a final note, remember that many gaps may be filled with secondary research: a new literature review that fills the gaps in the logic/structure, data/information, and meaning/relevance of your map, so that your organization can have a greater impact.
Practical Mapping for Applied Research and Program Evaluation (SAGE) provides a “jargon free” explanation for every phase of research (especially Chapter 3):
https://us.sagepub.com/en-us/nam/practical-mapping-for-applied-research-and-program-evaluation/book261152
This paper uses theories for addressing poverty from a range of academic disciplines and
from policy centers from across the political spectrum as an example of interdisciplinary
knowledge mapping and synthesis:
https://www.emerald.com/insight/content/doi/10.1108/K-03-2018-0136/full/html
This approach helps you to avoid fuzzy understandings and the dangerous “pretense of knowledge” that occasionally crops up in some reports and recommendations. Everyone can see that a piece is missing, and so can more easily agree on where more research is needed to advance our knowledge and better serve our organizational and community constituents.
Authors’ Information:
Swallis@ProjectFAST.org
https://projectfast.org/
ORCID: https://orcid.org/0000-0001-5207-603X
bernadette@meaningfulevidence.com
http://meaningfulevidence.com/
Twitter: @MeaningflEvdenc
ORCID: https://orcid.org/0000-0002-1044-1323
Dr. Bernadette Wright founded Meaningful Evidence to help nonprofits leverage research to make a bigger impact. She has over two decades of experience designing, managing, and conducting research and evaluation that informs strategies, demonstrates impact, and shapes effective action. She is author, with Dr. Steven E. Wallis, of Practical Mapping for Applied Research and Program Evaluation (Sage Publications, 2019). With an interdisciplinary background in public policy and evaluation, her research experience has covered aging and disability, health, human services, racial equity/justice, education, and many other topics. Dr. Wright earned her PhD in public policy/program evaluation from the University of Maryland. She is a member of the American Evaluation Association and Washington Evaluators.