Article

American Journal of Evaluation
1–14
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1098214019886914
journals.sagepub.com/home/aje

Rapid, Responsive, and Relevant? A Systematic Review of Rapid Evaluations in Health Care

Cecilia Vindrola-Padros1,2, Eugenia Brage3, and Ginger A. Johnson4,2

Abstract
Changing health-care climates mean evaluators need to provide findings within shorter time frames, but challenges remain in the creation of rapid research designs capable of delivering quality data to inform decision-making processes. We conducted a review of articles to grapple with these challenges and explore the ways in which rapid evaluations have been used in health care. We found different labels being used to define rapid evaluations and identified a trend in the design of evaluations, where evaluators are moving away from short studies to longer evaluations with multiple feedback loops or cyclical stages. Evaluators are using strategies to speed up evaluations: conducting data collection and analysis in parallel, eliminating the use of transcripts, and utilizing larger evaluation teams to share the workload. Questions persist in relation to the suitability of rapid evaluation designs, the trustworthiness of the data, and the degree to which evaluation findings are used to make changes in practice.

Keywords
rapid evaluation methods, rapid feedback evaluations, rapid cycle evaluations, data validity, health
care

Introduction
Rapid evaluations are becoming more frequent in health-care contexts. Changing health-care
climates mean evaluators need to be responsive to changing priorities and deliver evaluation find-
ings within shorter time frames (Schneeweiss, Shrank, Ruhl, & Maclure, 2015). Policy transforma-
tions have produced a notable increase in innovations in health-care delivery, innovations that need

1 Department of Applied Health Research, University College London, United Kingdom
2 Rapid Research, Evaluation and Appraisal Lab (RREAL), University College London, United Kingdom
3 Centro de Estudos da Metrópole, Faculdade de Filosofia, Letras e Ciências Humanas, Universidade de São Paulo, Brazil
4 Health Section, UNICEF, Islamabad, Pakistan

Corresponding Author:
Ginger A. Johnson, Health Section, UNICEF, Islamabad, Pakistan; Rapid Research, Evaluation and Appraisal Lab (RREAL), University College London, London, United Kingdom.
Email: johnson.ginger@gmail.com
to be evaluated rapidly to inform continuity and potential scale-up (Skillman et al., 2019). Timeliness has become a feature that might determine whether and how findings can be used to inform decision-making processes (Grasso, 2003; McNall, Welch, Ruh, Mildner, & Soto, 2004). However, the delivery of findings at a time when they can be actionable remains a challenge, and research and evaluations continue to lag behind the needs of evidence-based decision-making (Glasgow et al., 2014; Riley, Glasgow, Etheredge, & Abernethy, 2013). Riley and colleagues (2013) argued that in order for research and evaluations to have an impact on health-care organization and delivery, they would need to align to the three Rs: rapid, responsive, and relevant. Alignment with the three Rs requires the creation of “rapid-learning research systems,” which bring together researchers, funders, practitioners, and community partners to ask relevant questions and use efficient and innovative research designs (Riley et al., 2013). The challenge that remains is the creation of research and evaluation designs capable of delivering findings to these systems when they can inform decision-making processes.
Researchers have been experimenting with different types of research designs to make evalua-
tions more efficient and to organize regular feedback loops so findings can be shared at key points in
time. Rapid assessment procedures (RAP), rapid appraisals, rapid ethnographic assessments (REA),
and rapid ethnographies were developed as research approaches (Beebe, 2014; Johnson & Vindrola-
Padros, 2017; Vindrola-Padros & Vindrola-Padros, 2018), but rapid evaluation designs were
also created through approaches such as rapid evaluation methods (REM), real-time evaluations
(RTE), rapid feedback evaluations (RFE), and rapid cycle evaluations (RCE). Over 10 years ago,
McNall and Foster-Fishman (2007) reviewed the landscape of rapid evaluation and appraisal meth-
ods (REAM), documenting the diversity of rapid approaches and highlighting the challenges they
shared. They identified an intrinsic tension between speed and trustworthiness and argued that rapid
approaches would need to address issues of validity and data quality to gain greater popularity in the
evaluation landscape (McNall & Foster-Fishman, 2007).
Despite evident advances in the field of rapid evaluations, challenges remain in the way we
define, design, and implement rapid evaluations. For example, there is variability in the ways in
which we define rapid time frames as well as what we mean by evaluation (McNall et al., 2004;
Nunns, 2009). Short time frames are often associated with evaluations that might appear to be
rushed, less rigorous, and lacking engagement with theory (McNall & Foster-Fishman, 2007). These
assumptions can influence how evaluation findings are viewed and, ultimately, used in practice
(McNall & Foster-Fishman, 2007). Evident trade-offs are present in relation to the breadth and depth of data, and some services, interventions, or contexts might be more amenable to rapid evaluations than others (Nunns, 2009). Finally, gaps remain in our understanding of the
value and use of rapid evaluations in comparison to longer term evaluations.
The purpose of this review was to grapple with these challenges and explore the ways in which
rapid evaluations have been used in health care. We considered common rapid evaluation designs,
issues in implementation, and strategies used for the sharing of findings, and offer recommendations
for the design of evaluations over short periods of time. We identified gaps in knowledge and future
areas of exploration.

Method
Design
This is a systematic review of the literature based on peer-reviewed academic articles. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was used to guide the reporting of the methods and findings (Moher, Liberati, Tetzlaff, & Altman, 2009). The review protocol was registered with PROSPERO (registration number CRD42017078530).

Research Questions
The review sought to answer the following questions:

1. When are rapid evaluations used in health-care contexts?


2. How are rapid evaluations defined?
3. What are the evaluation designs (including study time frames)?
4. How are findings disseminated and used in practice?
5. What are the benefits, limitations, and challenges of carrying out rapid evaluations?

Search Strategy
We used the PICOS (Population, Intervention, Comparison, Outcomes, Setting) framework (Gough,
Thomas, & Oliver, 2012) to develop the search strategy. We conducted a review of published
literature using multiple databases: MEDLINE, CINAHL Plus, Web of Science, and Proquest
Central. The searches were conducted in July 2018 and updated in June 2019. Results were com-
bined into RefWorks and duplicates were removed. The reference lists of included articles were
screened to identify additional relevant publications.
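Combining exports from several databases and removing duplicates is a mechanical step. The sketch below illustrates the kind of normalized-title matching that reference managers such as RefWorks perform; the records and the normalization rule are illustrative assumptions, not the exact procedure used in this review.

```python
import re

# Illustrative records exported from different databases (hypothetical examples).
records = [
    {"db": "MEDLINE", "title": "Rapid evaluation methods (REM) of health services performance"},
    {"db": "Web of Science", "title": "Rapid Evaluation Methods (REM) of Health Services Performance."},
    {"db": "CINAHL Plus", "title": "The data-to-action framework: a rapid program improvement process"},
]

def normalise(title):
    """Lower-case the title and strip punctuation so near-identical records
    exported from different databases compare as equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

seen = set()
deduplicated = []
for record in records:
    key = normalise(record["title"])
    if key not in seen:  # keep only the first copy of each title
        seen.add(key)
        deduplicated.append(record)

print(f"{len(records)} records combined, {len(deduplicated)} after deduplication")
```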

Selection
Two of the authors (blinded) screened the articles in three phases (title, abstract, and full text)
based on the following inclusion criteria: (1) article was published in a peer-reviewed journal, (2)
study focused on rapid evaluations, and (3) study presented findings from empirical studies or
models for rapid evaluations based on case studies. Since we were interested in the use of the rapid
evaluation “label,” we included articles that self-identified as rapid evaluations. We did not include
articles on rapid assessment as we followed the lineage of REAM proposed by McNall and Foster-
Fishman (2007), which groups rapid assessments in a separate category to rapid evaluations due to
their roots in anthropology and links with rapid ethnographic assessments. We did, however, include
articles that reported findings from short evaluations as well as those that used iterative or cyclical
approaches to rapid evaluation such as RFE and RCE (i.e., multiple short periods of data collection,
analysis, and feedback over time). Disagreements were discussed until consensus was reached. We
did not apply any restrictions in terms of language or date of publication.

Data Extraction and Management


The included articles were analyzed using a data extraction form developed in REDCap (Research
Electronic Data Capture). The categories used in the data extraction form are listed in Supplemental
Appendix 1. The form was developed after the initial screening of full-text articles. It was then
piloted independently by two of the authors (blinded) using a random sample of five articles.
Disagreements were discussed until consensus was reached. The form was changed based on the
findings from the pilot.
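The full list of extraction categories is given in Supplemental Appendix 1. As a rough illustration of how such a form can be represented for analysis, the sketch below defines a record with a subset of fields mirroring the columns reported in Table 1; the ExtractionRecord structure and field names are assumptions for illustration, not the actual REDCap form.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """Illustrative subset of data extraction fields, mirroring the columns of Table 1."""
    first_author: str
    year: int
    country: str
    evaluation_label: str            # e.g., "REM", "RFE", "RCE"
    design: str                      # qualitative, quantitative, or mixed methods
    methods: List[str] = field(default_factory=list)
    duration: Optional[str] = None   # free text, e.g., "6-10 days"
    sample: Optional[str] = None     # "NS" when not specified
    team_size: Optional[str] = None

# Example entry based on the Anker et al. (1993) row of Table 1.
anker = ExtractionRecord(
    first_author="Anker",
    year=1993,
    country="Botswana, Madagascar, Papua New Guinea, Uganda, Zambia",
    evaluation_label="REM",
    design="Qualitative",
    methods=["interviews", "focus groups", "record review", "observations"],
    duration="6-10 days",
    sample="NS",
    team_size="4-5 members per team",
)
print(anker.evaluation_label, anker.duration)
```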

Data Synthesis
Data were exported from REDCap, and the main article characteristics were synthesized. The
REDCap report presented quantitative summaries of some of the entries in our data extraction form
(for details, see Supplemental Appendix 1). The information entered in the free text boxes of the data
extraction form was exported from REDCap and analyzed using framework analysis (N. Gale,
Heath, Cameron, Rashid, & Redwood, 2013). The themes were based on our research questions,
but we were also sensitive to topics emerging from the data.
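Framework analysis charts data into a matrix of cases by themes. The sketch below shows one minimal way the exported free-text entries could be organized, assuming theme labels drawn from the research questions and short snippets standing in for the exported REDCap text; it is an illustration of the charting step, not the review's actual analysis file.

```python
from collections import defaultdict

# Themes drawn from the review's research questions.
themes = ["definition", "design", "dissemination", "challenges"]

# Hypothetical free-text fragments exported from the data extraction form.
extracted = [
    ("Anker 1993", "definition", "REM described as survey-based diagnostic activities"),
    ("Anker 1993", "design", "6-10 day field studies with teams of 4-5 members"),
    ("Zakocs 2015", "challenges", "deciding when there is 'good enough' evidence to act"),
]

# Build a framework matrix: rows are studies, columns are themes.
matrix = defaultdict(dict)
for study, theme, note in extracted:
    matrix[study].setdefault(theme, []).append(note)

for study, row in matrix.items():
    cells = [", ".join(row.get(theme, ["-"])) for theme in themes]
    print(study, "|", " | ".join(cells))
```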

Quality Assessment
We used the Mixed Methods Appraisal Tool (MMAT) to assess the quality of the articles (Pluye
et al., 2012; Pluye & Hong, 2014) as the review included studies with quantitative, qualitative, and
mixed-methods research designs. The MMAT includes questions on the research questions guiding
the studies, data sources, methods of analysis, consideration of how the findings are shaped by the
context and researcher, completeness of data, response rate, sampling strategies, integration of
different streams of work (in mixed-methods studies) and limitations (Pluye et al., 2011). Scores
vary from 25% or * (one criterion met) to 100% or **** (all criteria met; Pluye et al., 2011). Two of
the authors rated these articles independently. In cases of disagreement, the raters discussed their
responses until consensus was reached. Inter-rater reliability was calculated using the κ statistic (Landis & Koch, 1977).
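The κ statistic adjusts raw percent agreement for the agreement expected by chance. A minimal sketch of the calculation for two raters is shown below; the star ratings are invented for illustration and do not reproduce this review's appraisal data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical ratings to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions, summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical MMAT star ratings (* to ****) from two independent raters.
rater_1 = ["***", "***", "**", "****", "***", "**", "***", "****", "***", "**"]
rater_2 = ["***", "***", "**", "****", "**", "**", "***", "****", "***", "***"]
print(round(cohens_kappa(rater_1, rater_2), 2))
```

As a worked check, the 90% raw agreement and κ = 0.80 reported in the Results imply a chance agreement of 0.50, since 0.80 = (0.90 − 0.50)/(1 − 0.50).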

Results
Identification of Articles
The initial search yielded 2,667 published articles (Figure 1). These were screened based on title and
type of article, resulting in 249 articles. These articles were further screened on the basis of their
abstracts, which left 40 articles for full-text review. This full-text review led to 11 articles that
met the inclusion criteria. One additional article was identified by reviewing the bibliography,
ultimately leading to 12 articles included in the review.
We excluded articles that used rapid feedback methods for clinical testing or described rapid
evaluations carried out in non-health-care sectors. We also excluded study protocols and systematic
reviews.
The 12 studies were subjected to a quality appraisal process undertaken by two of the authors using the MMAT (see Table 1 for study-specific appraisal results). Inter-rater agreement was 90%, with a Cohen's κ indicating substantial agreement (κ = 0.80). Overall, most studies covered three of four criteria. Common limitations found in the articles were a lack of reporting of some aspects of the evaluation design (i.e., sample composition and size) and a lack of consideration of the evaluator's influence over the findings.

Characteristics of Included Studies


The characteristics of the 12 articles included in the review are presented in Table 1. The articles were published between 1993 and 2019, but half of the articles (n = 6) were published before 2010. Half of the studies (n = 6) took place in the United States, and one each in the UK, Bangladesh, India, and Brazil. One study was a comparative study drawing from evaluations in Botswana, Madagascar, Papua New Guinea, Uganda, and Zambia, and another compared services in Malawi, Uganda, and Kenya.

Definitions of Rapid Evaluations


The articles used different labels to describe their rapid evaluation designs. Six articles identified
their studies as using REM, three used RFE, and three used RCE. The definitions of each of these
terms are presented in Table 2. REMs were the oldest approach to rapid evaluations, followed by
RFEs and RCEs. There was an overlap in definitions between RFEs and RCEs, but studies using
RCEs tended to adapt the concept of rapid cycles to common iterative processes used in quality
improvement (i.e., Plan-Do-Study-Act cycles). Even though the articles described the studies imple-
mented as “evaluations,” only one article used the term “evaluation team” (McNall et al., 2004). All
of the other articles referred to teams as “research teams” and often used the term evaluation and
“research” interchangeably. Reference to evaluation theory was limited. The only articles that
explicitly engaged with theory drew from frameworks used frequently in implementation evaluations (Felisberto, Freese, Natal, & Alves, 2008; Keith, Crosson, O'Malley, Cromp, & Taylor, 2017; Skillman et al., 2019).

Figure 1. Study selection procedure. A total of 2,667 articles were identified through the database search (MEDLINE: 1,099; Web of Science: 666; CINAHL Plus: 273; ProQuest Central: 629). Of these, 2,418 were excluded based on titles and type of article (article did not focus on a human population; article had a clinical focus, i.e., rapid evaluation of symptoms; article was not health-care related), leaving 249 articles screened for further evaluation. A further 209 were excluded based on abstracts (article was a study protocol; article described a clinical evaluation; article described findings from a systematic review; article focused on the translation of a rapid assessment tool, but not its application), leaving 40 full-text articles assessed in more detail. Of these, 30 were excluded based on full-text assessment (article described a proposed evaluation model but did not explore how it was implemented), leaving 10 articles that met the inclusion criteria. Two additional articles were identified by searching the bibliographies of the included articles, giving 12 articles included in the review.

Reasons and Anticipated Benefits of Using Rapid Evaluations


All reviewed articles included a brief description of the reason why the authors decided to use a rapid
evaluation approach. The most frequent reason was the need for the quick turnaround of findings to
inform decision-making, programs, or service delivery (Keith et al., 2017; Schneeweiss et al., 2015).
Related to this reason, some authors argued that rapid evaluations can allow for midcourse program
corrections (Anker, Guidotti, Orzeszyna, Sapirie, & Thuriaux, 1993; Bjornson-Benson et al., 1993;
Skillman et al., 2019). Anker, Guidotti, Orzeszyna, Sapirie, and Thuriaux (1993) argued that rapid
evaluations tended to have a flexible study design, allowing evaluators to adapt to changing cir-
cumstances. Zakocs, Hill, Brown, Wheaton, and Freire (2015) indicated that rapid evaluations,
Table 1. Characteristics of Included Evaluations.

Anker*** (1993), Botswana, Madagascar, Papua New Guinea, Uganda, and Zambia. Aim: describe the basic components of REM and discuss methodological issues. Design: rapid evaluation methods (REM), qualitative. Methods: interviews, focus groups, record review, observations. Duration: 6–10 days. Sample: NS. Evaluation team: 4–5 members per team.

Bjornson-Benson*** (1993), United States. Aim: develop a monitoring and evaluation system of trial recruitment methods. Design: rapid feedback evaluation (RFE), quantitative. Methods: review of recruitment reports, costs. Duration: 6 months. Sample: 6,000 participants. Evaluation team: NS.

Chowdhury*** (2004), Bangladesh. Aim: evaluate a menstrual regulation programme. Design: REM, qualitative. Methods: interviews, observations, focus groups (FG). Duration: 3 months. Sample: 131 interviews, 35 FG participants, 36 observations. Evaluation team: 11 members.

McNall** (2004), United States. Aim: evaluate rates of a longitudinal HIV/AIDS care study that targeted a hard-to-retain population. Design: RFE, mixed methods. Methods: interviews, routinely collected data. Duration: NS. Sample: NS. Evaluation team: 3 members.

Aspray*** (2006), UK. Aim: identify barriers in access to home care for a vulnerable population living with diabetes. Design: REM, mixed methods. Methods: interviews, record review. Duration: 12 months. Sample: 1,604 participants. Evaluation team: 1 member.

Felisberto** (2008), Brazil. Aim: develop a self-evaluation model of a public health programme. Design: REM, mixed methods. Methods: case study cross-case comparison, self-evaluation matrix. Duration: 7 months. Sample: NS. Evaluation team: NS.

Grant*** (2011), Malawi, Uganda, and Kenya. Aim: describe patient, family, and local community perspectives on the impact of three community-based palliative care interventions in sub-Saharan Africa. Design: REM, mixed methods. Methods: interviews, observations, local reports. Duration: 1 week at each site over a period of 5 months. Sample: 150 participants. Evaluation team: approximately … members.

Schneeweiss*** (2015), United States. Aim: clarify how RCE alters policy decisions, develop the RAPID framework, and provide guidelines on evidence thresholds. Design: rapid cycle evaluation (RCE), quantitative. Methods: routinely collected data. Duration: 3–6 months. Sample: NS. Evaluation team: team-based (does not specify size).

Zakocs*** (2015), United States. Aim: describe the data-to-action framework, a process to guide evaluators and practitioners in using rapid feedback cycles in implementation. Design: RFE, mixed methods. Methods: interviews, record review, focus groups. Duration: 3 years, with 20 feedback cycles. Sample: NS. Evaluation team: team-based (does not specify size).

Keith*** (2017), United States. Aim: present an approach for supporting the rapid cycle evaluation of the implementation of health-care delivery interventions. Design: RCE, qualitative. Methods: interviews, observations. Duration: 4 months. Sample: staff from 21 practices (sample size NS). Evaluation team: 5 members and several data analysts (number NS).

Munday*** (2018), India. Aim: evaluate a palliative care model of care. Design: REM, mixed methods. Methods: interviews, document review, questionnaire, observations. Duration: 2 months. Sample: 44 interviews. Evaluation team: NS.

Skillman*** (2019), United States. Aim: describe the advantages and limitations of a framework for rapid-cycle, multisite mixed-method evaluation. Design: RCE, mixed methods. Methods: interviews, FG, document review. Duration: 9 months (quarterly feedback sessions). Sample: 693 interviews, 311 FGs. Evaluation team: 15 members.

Note. REM = rapid evaluation methods; NS = not specified; MMAT ratings: **** denotes the highest quality article (i.e., an article that met all assessment criteria).

Table 2. Definitions of Rapid Evaluation Labels.

Rapid evaluation methods (REM). Articles reviewed: Anker, Guidotti, Orzeszyna, Sapirie, and Thuriaux (1993); Chowdhury and Moni (2004); Aspray, Nesbit, Cassidy, and Hawthorne (2006); Felisberto, Freese, Natal, and Alves (2008); Grant, Brown, Leng, Bettega, and Murray (2011); Munday, Haraldsdottir, Manak, Thyle, and Ratcliff (2018). Definitions/key features: a set of observation and survey-based diagnostic activities, which provide a basis for identifying operational problems and taking action; active involvement and participation of all stakeholders. Cited literature (to support definition): WHO (2002); Pearson (1989).

Rapid feedback evaluation (RFE). Articles reviewed: Bjornson-Benson et al. (1993); McNall, Welch, Ruh, Mildner, and Soto (2004); Zakocs, Hill, Brown, Wheaton, and Freire (2015). Definitions/key features: data are continually collected, analyzed, and used to inform action within a short time period; aims at providing program managers with focused, timely evaluation conclusions. Cited literature (to support definition): McNall, Welch, Ruh, Mildner, and Soto (2004); Hargreaves (2014); Sonnichsen (2000); Wholey (1983).

Rapid cycle evaluation (RCE). Articles reviewed: Schneeweiss, Shrank, Ruhl, and Maclure (2015); Keith, Crosson, O'Malley, Cromp, and Taylor (2017); Skillman et al. (2019). Definitions/key features: provides timely feedback to funding organizations, program staff, and care providers; offers support for continuous quality improvement and allows observations of changes over time. Cited literature (to support definition): McNall and Foster-Fishman (2007) and Shrank (2013).

Note. WHO = World Health Organization.

particularly RFEs, are able to facilitate communication with stakeholders, increase buy-in to the project, and create a culture of learning.

Evaluation Designs
One of the aims of this review was to explore the ways in which rapid evaluations were carried out in
practice. In terms of duration, the evaluations varied considerably, from 6 days to 3 years (with 20
feedback cycles). Seven of the evaluations had a mixed-methods design (although one article only
reported findings from the qualitative strand), three were qualitative, and two were quantitative.
Seven evaluations presented detailed information on the design of formative evaluations with
multiple feedback loops or cycles (Bjornson-Benson et al., 1993; Felisberto et al., 2008; Keith
et al., 2017; McNall et al., 2004; Schneeweiss et al., 2015; Skillman et al., 2019; Zakocs, Hill,
Brown, Wheaton, & Freire, 2015), while we could assume the others used a summative design, sharing findings once the evaluation was complete (this was not explicitly reported; Anker et al.,
1993; Aspray, Nesbit, Cassidy, & Hawthorne, 2006; Chowdhury & Moni, 2004; Grant, Brown,
Leng, Bettega, & Murray, 2011; Munday, Haraldsdottir, Manak, Thyle, & Ratcliff, 2018). We have

Table 3. Examples of the Steps Involved in Rapid Feedback or Rapid Cycle Evaluations.

Rapid feedback evaluation (RFE), Zakocs, Hill, Brown, Wheaton, and Freire (2015): (1) clarify intent: purpose, questions, study protocol; (2) collect "good enough" data: collect and analyze data quickly; (3) produce brief memo: draft concise memo with main findings; (4) engage in reflective debrief: discuss findings with project team; (5) decide whether more information is needed, take action, or take no action; repeat feedback loops (Steps 2–5).

Rapid feedback evaluation (RFE), McNall, Welch, Ruh, Mildner, and Soto (2004): (1) collect existing data on program performance; (2) collect new data on program performance; (3) evaluate preliminary data; (4) share findings/recommendations with project team; (5) develop and analyze alternative designs for full-scale evaluation; (6) assist in developing policy and management decisions.

Rapid cycle evaluation (RCE), Schneeweiss, Shrank, Ruhl, and Maclure (2015): (1) review evaluation findings; (2) translate findings into actions; (3) make judgments based on findings; (4) initiate implementation; (5) make changes in implementation (if needed); repeat cycle (Steps 3–5).

Rapid cycle evaluation (RCE), Skillman et al. (2019): (1) develop an analytic framework; (2) collect data (first round); (3) analyze data and develop codebook; (4) report findings; (5) collect data (second round), adding quantitative data.

synthesized the main steps used in the four articles using rapid feedback or rapid cycles that
clearly described the steps undertaken by the evaluation teams (see Table 3).
The RCE model proposed by Schneeweiss, Shrank, Ruhl, and Maclure (2015) began once
findings were shared with decision makers and considered the steps involved in using the findings
to inform decisions. The other three models involved earlier stages of evaluation and followed
similar steps: (1) preliminary work to identify the aims, existing knowledge, and design, (2) an
initial data collection stage, (3) an analysis and reflection stage, (4) sharing of emerging findings
with key stakeholders, (5) decisions about how to move forward (e.g., informing changes in the
intervention/program being evaluated or the data collected for the evaluation), and (6) cycle or
feedback loop beginning again with another round of data collection.
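Viewed abstractly, the models that begin from these earlier stages share the same iterative structure. The following runnable Python sketch expresses that shared loop; the function names, placeholder bodies, and stopping rule are illustrative assumptions rather than any one team's procedure.

```python
def rapid_feedback_evaluation(max_cycles=4):
    """Generic loop shared by the RFE/RCE models summarized above:
    collect -> analyze -> share -> decide, repeated until no further action is needed."""
    define_aims_and_design()                      # step 1: preliminary work
    for cycle in range(1, max_cycles + 1):
        data = collect_data(cycle)                # step 2: short data collection period
        findings = analyze(data)                  # step 3: analysis and reflection
        share_with_stakeholders(findings, cycle)  # step 4: feedback to stakeholders
        decision = decide_next_action(findings)   # step 5: adapt program and/or evaluation
        if decision == "no further action":
            break                                 # otherwise the cycle begins again (step 6)

# Placeholder implementations so the sketch runs end to end.
def define_aims_and_design(): print("aims, existing knowledge, and design agreed")
def collect_data(cycle): return f"data from cycle {cycle}"
def analyze(data): return f"emerging findings ({data})"
def share_with_stakeholders(findings, cycle): print(f"cycle {cycle}: {findings} shared")
def decide_next_action(findings): return "no further action"

rapid_feedback_evaluation()
```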

Data Collection and Analysis


Qualitative evaluations tended to combine interviews, focus groups, and observations, while quan-
titative evaluations and those with mixed-methods designs relied on the analysis of routinely col-
lected data (with the exception of one article, which used questionnaires to collect quantitative data,
Munday et al., 2018). Sample sizes varied and, in the case of five evaluations, these were not
reported (Anker et al., 1993; Felisberto et al., 2008; McNall et al., 2004; Schneeweiss et al.,
2015; Zakocs et al., 2015).

Strategies for Speeding Up Evaluations


Some of the articles included in the review reflected on the strategies used to carry out evaluations
over a short period of time. These strategies could be grouped in the following categories: (1)
strategies to reduce evaluation duration, (2) strategies to increase engagement, and (3) strategies

for quality control. The strategies used to speed up the evaluation entailed carrying out data collec-
tion and analysis in parallel and, in the case of qualitative evaluations, engaging in multiple stages of
coding as data were being collected. The synthesis of data in manageable formats (e.g., tables) was
also mentioned as a strategy to make sense of the findings quickly and share them with stakeholders.
Skillman et al. (2019) also decided to eliminate the transcription of interviews and replace these with
detailed note-taking to reduce time and burden on the evaluators. Large evaluation teams were also
used to cover more ground in shorter amounts of time (Anker et al., 1993; Chowdhury & Moni, 2004; Grant et al., 2011; Keith et al., 2017; Skillman et al., 2019).
Anker et al. (1993) described the benefits of establishing a “core team” of stakeholders during
early stages of the evaluation to guarantee engagement and the use of findings. This group of
stakeholders took responsibility for the evaluation and participated in decisions related to design,
implementation, time line, and logistics. They also facilitated access to research sites and partici-
pants to prevent delays in the implementation of the evaluation (Anker et al., 1993).
As mentioned before, one of the key areas of concern in rapid evaluations is the validity of the
findings. Some of the evaluations included a series of quality control measures to ensure the data
reflected the realities being studied and to maintain consistency across teams of evaluators. Anker
and colleagues (1993) did this by piloting data collection instruments in the field and cross-checking
the data collected by multiple evaluators as data collection was ongoing. Other evaluations relied on
the cross-checking of coding during the analysis phase, the development of a codebook, and the
training of coders (Keith et al., 2017; Skillman et al., 2019).

Dissemination
One of the areas we were interested in exploring in the review was the common formats used to share findings and descriptions of how the findings were used in practice. Unfortunately, not all of the
evaluations included information on the dissemination of findings. The few evaluations that did
share dissemination strategies most often described sharing data in the form of a written report with a
selected group of stakeholders (Aspray et al., 2006; Chowdhury & Moni, 2004; Skillman et al.,
2019). Aspray, Nesbit, Cassidy, and Hawthorne (2006) provided a detailed description of the report
they generated at the end of their evaluation (20-page report shared with stakeholders). Bjornson-
Benson et al. (1993), drawing from a rapid feedback design, developed short weekly reports to
inform rapid decisions affecting the program under evaluation, while Anker et al. (1993) developed
tables with critical indicators within 7–10 days of the evaluation start date and fuller reports after
several weeks. Zakocs et al. (2015) shared information in the form of a brief seven-page memo,
which was distributed 2 weeks after data collection had ended and included information on imple-
mentation processes and recommended changes.

Challenges and Limitations in the Implementation of Rapid Evaluations


In this review, we were also interested in identifying any factors the authors highlighted as barriers
or challenges in the implementation of the rapid evaluations. One of the main challenges was making
sure the quality of the data was adequate to inform decision-making. Zakocs et al. (2015)
argued that their team had to make difficult decisions on the best action to take based on the
available data. The authors frequently came across the question “how do you decide when there
is enough evidence to make a change?” (Zakocs et al., 2015, p. 478). They included a stage in their
rapid feedback cycle titled collecting “good enough” data which alluded to the need to consider the
extent to which the team could make decisions based on data available at the time (even if the data
quality or depth could eventually be better; Zakocs et al., 2015).
Balancing project resources, mainly in the sense of covering evaluator time, was also mentioned
in two of the articles (Skillman et al., 2019; Zakocs et al., 2015). According to Skillman et al. (2019),

successful rapid evaluations required a relatively stable team of evaluators. This stability helped
maintain consistency in data collection and analysis, and particularly in the case of evaluations with
qualitative designs, it meant new evaluators did not have to secure access to sites and build relation-
ships with participants midway through the evaluation.
As mentioned above, some of the articles presented strategies they used to speed up data collec-
tion and analysis. In the case of one qualitative evaluation, Skillman and colleagues (2019) relied on
a combination of inductive and deductive approaches for data analysis. The authors created an
analytical framework based on findings from the literature as well as their research questions and
applied it to their data. Even though they were still open to topics emerging from the data, they
identified not being able to use a purely inductive approach as a limitation of their evaluation
(Skillman et al., 2019).

Discussion
In this review, we aimed to explore the ways in which rapid evaluations have been used in health
care, identify gaps in knowledge, and highlight future areas of exploration. We were interested in
identifying the extent to which rapid evaluations respond to the agenda set out in the health-care
research landscape, where research and evaluations that can be rapid, responsive, and relevant—the
three Rs—have been identified as being better suited to inform changes in policy and practice (Riley
et al., 2013).
We found that while half of the studies took place in the United States, a range of other
countries around the world—many of which are using a multisited comparative approach—are also
exploring the functionality of rapid evaluations. The most common reason for using rapid evalua-
tions was the need to inform decision-making in relation to a service, program, or intervention, but
authors also indicated that these types of designs granted additional flexibility to the evaluators (if
changes needed to be made in the design midway through the study) and facilitated communication
and engagement with stakeholders (particularly designs with feedback loops). We found variability
in the definition of the terms “rapid” and “evaluation,” with a wide range of study durations. Some
evaluations adopted a utilization-focused design (Patton, 1997), while others had more exploratory
or diagnostic purposes (Aspray et al., 2006).
We also found that three main labels were currently being used to define rapid evaluations: REM,
RFE, and RCE. We identified a trend in the design of rapid evaluations, where evaluators are moving away from single short studies toward longer studies made up of multiple short stages with feedback loops or
cycles. This change in design leads to evaluation approaches that are more centered on stakeholder
engagement and the continuous dissemination of findings.
While the articles we reviewed varied in their evaluation design—qualitative, quantitative, or
both (i.e., mixed methods)—qualitative evaluations tended to utilize primary data such as interviews, focus groups, and observations, while quantitative and mixed-method evaluations concentrated upon
analyzing routinely collected data. Strategies to reduce data analysis time frames included conduct-
ing data collection and analysis in parallel, eliminating the production of transcripts in favor of
detailed note-taking, and utilizing larger evaluation teams to share the workload.
Unfortunately, not all of the articles reviewed shared details of their data dissemination strategy.
Those that discussed how their findings were disseminated mentioned developing a written report to
be shared with select stakeholders. These reports were often described as “short” reports or “memos”
ranging from 7 to 20 pages in length with the inclusion of summary tables to aid stakeholders in
better understanding study findings and their associated recommendations.
Going back to the three Rs proposed by Riley and colleagues (2013), we can see that the field of
evaluations in health care is responding to the need for rapid findings in the following ways:

1. Rapid: Study time frames are either short or have built-in feedback loops/cycles for the
continuous sharing of findings.
2. Responsive: Rapid evaluation designs were identified as flexible designs that could be
adapted to changes in the health-care climate or the needs of stakeholders. Evaluation teams
tended to maintain close relationships with the evaluation users and other relevant stake-
holders to keep abreast of these changes.
3. Relevant: Several of the evaluations included in the review involved the participation of a
“core” group of stakeholders who were involved in different stages of the evaluation. In some
cases, these stakeholders advised on the evaluation design and implementation, while, in
other instances, findings were shared with this group on a continuous basis (through feedback
loops or cycles). This allowed teams to make sure the aims of the evaluation responded to the
needs of stakeholders and future users of the findings.

Successful evaluations seemed to be characterized by flexible designs, with feedback loops or


cycles defined early in the process and in collaboration with relevant stakeholders and users of the
findings. These findings seem promising, but additional work needs to be carried out to strengthen
the development of rapid evaluations. We found gaps in the reporting of information, particularly in
relation to sample size and composition. Our quality assessment also indicated that evaluations using
qualitative designs rarely engaged in a process of reflection on the role of the evaluator and how
their presence might influence the collection of data. Only some articles showed evidence of
engagement with evaluation theory. We also need more information on how dissemination is built
into evaluation designs (i.e., how feedback loops are negotiated with stakeholders), the formats that
are effective for the sharing of findings, and ultimately, the impact of sharing findings rapidly on
decision-making processes (i.e., how were these findings used?).
We also need to carry out additional work to critically examine the design and implementation of
rapid evaluations and address concerns over data quality and the validity of the findings. Questions
such as “Are longer evaluations always better?” need to be fully explored and could be answered
with comparative analyses of rapid and long-term evaluation designs. For example, recent work in
qualitative research has compared rapid and more “traditional” longer term approaches to qualitative
data analysis, finding that both approaches can lead to similar, valid results (R. Gale et al., 2019;
Taylor, Henshall, Kenyon, Litchfield, & Greenfield, 2018).
The review is limited in a number of ways. The literature search for academic articles was carried
out in July 2018 and updated in June 2019, but any articles published after this date were not
included. Furthermore, although we employed multiple broad search terms, it is possible that we
missed articles that did not use these terms. Some articles might have used other terms to describe
rapid evaluations that we are not aware of. We did not include gray literature, thus potentially
excluding an important number of rapid evaluations that have not been published in academic
journals. For instance, we are aware that the literature on RTE has mainly been published in the
form of reports (Sandison, 2003). The tool we used to assess the quality of the studies, the MMAT,
also has limitations, and these have been discussed elsewhere (Crowe & Sheppard, 2011; O’Cathain,
2010; O’Cathain, Murphy, & Nicholl, 2008).

Conclusions
Our review pointed to a wide range of approaches currently being used to design rapid evaluations
within the health-care landscape. Evaluators grappling with the need to deliver findings at a time
when they can be used to inform decision-making are turning to REM, rapid feedback, and RCE to
make sure their evaluations are rapid, responsive, and relevant. Nevertheless, questions still remain
in relation to the suitability of rapid evaluation designs, the trustworthiness of the data, and the

degree to which evaluation findings are used to make changes in practice. Future studies could
compare different rapid evaluation designs and explore the impact of rapid evaluations on the use of
findings to inform decision-making.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or pub-
lication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publica-
tion of this article: This work was supported by University College London.

ORCID iD
Ginger A. Johnson https://orcid.org/0000-0002-4728-3744

Supplemental Material
Supplemental Appendix 1 is available in the online version of the article at https://journals.sagepub.com/home/aje.

References
Anker, M., Guidotti, R. J., Orzeszyna, S., Sapirie, S. A., & Thuriaux, M. C. (1993). Rapid evaluation methods
(REM) of health services performance: Methodological observations. Bulletin of the World Health
Organization, 71, 15–21.
Aspray, T. J., Nesbit, K., Cassidy, T. P., & Hawthorne, G. (2006). Rapid assessment methods used for health-
equity audit: Diabetes mellitus among frail British care-home residents. Public Health, 120, 1042–1051.
Beebe, J. (2014). Rapid qualitative inquiry (2nd ed.). London, England: Rowman and Littlefield.
Bjornson-Benson, W. M., Stibolt, T. B., Manske, K. A., Zavela, K. J., Youtsey, D. J., & Buist, A. S. (1993).
Monitoring recruitment effectiveness and cost in a clinical trial. Controlled Clinical Trials, 14, 52S–67S.
Chowdhury, S. N. M., & Moni, D. (2004). A situation analysis of the menstrual regulation programme in
Bangladesh. Reproductive Health Matters, 12, 95–104.
Crowe, M., & Sheppard, L. (2011). A review of critical appraisal tools show they lack rigor: Alternative tool
structure is proposed. Journal of Clinical Epidemiology, 64, 79–89.
Felisberto, E., Freese, E., Natal, S., & Alves, C. K. (2008). A contribution to institutionalized health evaluation:
A proposal for self-evaluation. Cadernos de Saúde Pública, 24, 2091–2102.
Gale, N., Heath, G., Cameron, E., Rashid, S., & Redwood, S. (2013). Using the framework method for the
analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology,
13, 117.
Gale, R., Wu, J., Erhardt, T., Bounthavong, M., Reardon, C., Damschroder, L., & Midboe, A. (2019). Com-
parison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in
the Veterans health administration. Implementation Science, 14, 11.
Glasgow, R., Kessler, R. S., Ory, M. G., Roby, D., Gorin, S. S., & Krist, A. (2014). Conducting rapid, relevant
research: Lessons learned from the My Own Health Report project. American Journal of Preventive Medicine,
47, 212–219.
Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and methods.
Systematic Reviews, 1, 28–37.
Grant, L., Brown, J., Leng, M., Bettega, N., & Murray, S. A. (2011). Palliative care making a difference in rural
Uganda, Kenya and Malawi: Three rapid evaluation field studies. BMC Palliative Care, 10, 8.
Grasso, P. (2003). What makes an evaluation useful? Reflections from experience in large organizations.
American Journal of Evaluation, 24, 507–514.

Keith, R., Crosson, J. C., O’Malley, A. S., Cromp, D., & Taylor, E. F. (2017). Using the consolidated frame-
work for implementation research (CFIR) to produce actionable findings: A rapid-cycle evaluation approach
to improving implementation. Implementation Science, 12, 15.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics,
33, 159–174.
McNall, M., & Foster-Fishman, P. (2007). Methods of rapid evaluation, assessment, and appraisal. American
Journal of Evaluation, 28, 151–168.
McNall, M., Welch, V., Ruh, K., Mildner, C., & Soto, T. (2004). The use of rapid-feedback evaluation methods
to improve the retention rates of an HIV/AIDS healthcare intervention. Evaluation and Program Planning,
27, 287–294.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews
and meta-analyses: The PRISMA statement. The British Medical Journal, 339, 332–336.
Munday, D., Haraldsdottir, E., Manak, M., Thyle, A., & Ratcliff, C. M. (2018). Rural palliative care in North India:
Rapid evaluation of a program using a realist mixed method approach. Indian Journal of Palliative Care, 24, 3–8.
Nunns, H. (2009). Responding to the demand for quicker evaluation findings. Social Policy Journal of New
Zealand, 34, 89–99.
O’Cathain, A. (2010). Assessing the quality of mixed methods research: Towards a comprehensive framework.
In A. Tashakkori & C. Teddie (Eds.), Handbook of mixed methods in social and behavioural research
(pp. 531–555). Thousand Oaks, CA: Sage.
O’Cathain, A., Murphy, E., & Nicholl, J. (2008). The quality of mixed methods studies in health services
research. Journal of Health Services Research & Policy, 13, 92–98.
Patton, M. (1997). Utilization-focused evaluation. Thousand Oaks, CA: Sage.
Pluye, P., Bartlett, G., Macaulay, A. C., Salsberg, J., Jagosh, J., & Seller, R. (2012). Testing the reliability and
efficiency of the pilot mixed methods appraisal tool (MMAT) for systematic mixed studies review.
International Journal of Nursing Studies, 49, 47–53.
Pluye, P., & Hong, Q. N. (2014). Combining the power of stories and the power of numbers: Mixed methods
research and mixed studies reviews. Annual Review of Public Health, 35, 29–45.
Pluye, P., Robert, E., Cargo, M., Bartlett, G., O’Cathain, A., Griffiths, F., & Rousseau, M. C. (2011). Proposal:
A mixed methods appraisal tool for systematic mixed studies reviews. Retrieved August 31, 2019, from
http://mixedmethodsappraisaltoolpublic.pbworks.com
Riley, W., Glasgow, R. E., Etheredge, L., & Abernethy, A. P. (2013). Rapid, responsive, relevant (R3) research:
A call for a rapid learning health research enterprise. Clinical and Translational Medicine, 2, 10.
Sandison, P. (2003). Desk-based review of real-time evaluation experience. New York, NY: United Nations
Children’s Fund.
Schneeweiss, S., Shrank, W. H., Ruhl, M., & Maclure, M. (2015). Decision-making aligned with rapid-cycle
evaluation in health care. International Journal of Technology Assessment in Health Care, 31, 214–222.
Shrank, W. (2013). The Center For Medicare And Medicaid Innovation’s blueprint for rapid-cycle evaluation of
new care and payment models. Health Affairs, 32, 807–812.
Skillman, M., Cross-Barnet, C., Friedman Singer, R., Rotondo, C., Ruiz, S., & Moiduddin, A. (2019). A
framework for rigorous qualitative research as a component of mixed method rapid-cycle evaluation.
Qualitative Health Research, 29, 279–289.
Taylor, B., Henshall, C., Kenyon, S., Litchfield, I., & Greenfield, S. (2018). Can rapid approaches to qualitative
analysis deliver timely, valid findings to clinical leaders? A mixed methods study comparing rapid and
thematic analysis. BMJ Open, 8, e019993.
Vindrola-Padros, C., & Vindrola-Padros, B. (2018). Quick and dirty? A systematic review of the use of rapid
ethnographies in healthcare organisation and delivery. BMJ Quality & Safety, 27, 321–330.
Wholey, J. S. (1983). Evaluation and effective public management. Boston, MA: Little Brown.
Zakocs, R., Hill, J. A., Brown, P., Wheaton, J., & Freire, K. E. (2015). The data-to-action framework: A rapid
program improvement process. Health Education & Behavior, 42, 471–479.
