
ACADEMIA Letters

Seven Stages of a Survey Piloting Process


Jose Cossa, Pennsylvania State University
Lecia Barker, University of Colorado Boulder
Vicki Almstrum, University of Texas at Austin

This article describes the process of piloting a survey that contributed to a larger work
resulting in the article Motivation to Pursue a Ph.D. in Computing: Black Students in
HBCUs (Cossa & Barker, 2021).

Stage 1: Defining the Project


This stage requires thinking through what the project is about and what makes it important to pursue. In defining the project, we had to keep in mind that all questions were
to be anchored to this core aim. In addition, we had to think through sampling questions by
carefully defining the target population and strategically deciding on the target schools.

Stage 2: Mapping the Pilot


In this stage, we identified the key information we were seeking (i.e., beyond defining the scope and nature of the project), who would provide us with that information, and how we planned to obtain it. The purpose of a pilot is to learn as much as possible about how respondents perceive the questions as they read them and how they think the questions could be improved. Having one of the researchers facilitate a focus group discussion enhances reliability: the researcher can probe into the feedback provided and can also verify that the responses are, indeed, the respondents' own and not the result of consulting with another person who does not fit the criteria. It should go without saying that the more evidence there is to support claims of validity and reliability, the more credible the instrument.

Academia Letters, July 2021. ©2021 by the authors — Open Access — Distributed under CC BY 4.0
Corresponding Author: Jose Cossa, jxc6421@psu.edu
Citation: Cossa, J., Barker, L., Almstrum, V. (2021). Seven Stages of a Survey Piloting Process. Academia Letters, Article 2327. https://doi.org/10.20935/AL2327.
After identifying the sites and confirming that the budget could cover the site visits, one of our team members traveled to each site equipped with a set of documents: (a) a pilot process guide for the researcher, which listed the steps of the process and provided the necessary references and links to documents; (b) pilot session instructions, a compilation of the pilot session survey instructions, the pilot item evaluation, and the pilot evaluation form; (c) a hard copy of the pilot survey; and (d) a hard copy of the pilot evaluation form.[i]

Stage 3: Designing the Pilot Questions


This stage includes a lot of writing and re-writing of questions that will elicit answers
aligning with the purpose, identifying strategic placement of questions to allow for a
comfortable flow when taking the survey, and making sure that the survey length is
adequate by only including questions that elicit information indispensable for the project’s
core aim. In designing the questions, we needed to anticipate respondents' perceptions of the concepts we employed and to remain cognizant that our best attempt to select concepts that would resonate with respondents was nothing more than an assumption.
This is important because it allows the researcher to remain open to new perceptions about
concepts that may mean one thing in a certain context and another in a different context.
We started with twenty-two questions, and our final selection for the pilot comprised twenty-eight questions. This increase in the number of questions was because we aimed at having simpler questions; thus, we designed most of the questions to follow a multiple-choice format rather than a short-essay format.

Stage 4: Designing the Pilot Evaluation


The crux of the validity and reliability issues is addressed here. How we ascertain that the questions ask what we intend to ask, how we gather information that helps us determine that, and how we interpret the information gathered are critical to addressing questions of validity and reliability. In designing the pilot evaluation, the researcher must recognize various kinds of validity (Maxwell, 1992). While the national survey required that we address quantitative validity and reliability, our pilot process required that we address qualitative validity and reliability (Golafshani, 2003). Therefore, we took into consideration the following kinds of validity (Maxwell, 1992) when evaluating the survey: (a) descriptive validity, primarily concerned with the perspective of the researcher; (b) interpretive validity, concerned with the perspective of the respondent; (c) theoretical validity, concerned with the theory the researcher carries into the research process; (d) generalizability, concerned with whether or not the research findings can be generalized beyond the context of the study; and (e) evaluative validity, concerned with the overall evaluation of the data.

Stage 5: Planning the Pilot Site Visits


A key strategy for planning site visits is to identify the best contact person who will help
coordinate the visit. Such a person may be the department chair or an administrator, as they
are better positioned to refer students to take part in the pilot or to coordinate pilot-related
efforts. Our initial assumption was that the department chairs would be the coordinators of the visit, but we soon realized that they were only a doorway into the pilot and that having a student take the role of coordinator was a much more efficient approach. Students became our hosts, were enthusiastic about being part of the planning process, and saw the process as a valuable intellectual exercise. In instances where the chair was the coordinator, our success was minimal to none; whereas when a student coordinated the efforts, we were able to meet with all the students selected to participate.

Stage 6: Administering the Pilot


This stage requires that the researcher and coordinator secure an atmosphere conducive for
the exercise. In preparing for the session, the researcher should be equipped with all the
material prepared for the pilot (see Table 1) and take notes during the session.[ii]
Table 1: Pilot Process Guide for the Researcher
→ What to copy (one for yourself and one for each participant, plus an extra):

1. The instructions for the session (one page – the first page of Pilot-Instructions)
2. The instructions for the pilot evaluation step (one page – the second page of Pilot-Instructions)
3. The hard copy of the pilot survey (e.g., from SurveyMonkey)
4. The hard copy of the pilot item evaluation
5. The evaluation table, as backup (two pages – pages 3 and 4 of Pilot-Instructions)

→ Suggested session flow:
• Pass out document A, introduce the project and yourself
• Let them use their computers to take the survey
• Once they are all finished, pass out document B
• Walk through the four categories and discuss what they mean
• Pass out document C for their reference
• Either let them use their computers to evaluate the items, OR pass out document D for them to write on, OR pass out document E for them to write on
• Conduct the focus group process in whatever manner feels best

It is important that the researcher knows the location reserved for the pilot, arrives early, and is ready to begin on time. The researcher should make no promises as to how long the pilot will take, but should make respondents aware that the entire process may take a little longer or shorter than estimated; respondents must also be made aware that if they finish a phase of the pilot earlier than the rest of the group, they will have to wait for the group to catch up. These small details help the pilot run smoothly.
Inspired by Gloria Rogers’ (Rogers, 2015) classification categories, we asked
respondents to answer “yes” to understandable, if the item was easy to understand and the
meaning of the question was clear and straightforward (i.e., after only one reading); “yes”
to adequate, if the range of values for the scale covered an appropriate set of choices and
provided an appropriate way to respond; “yes” to applicable, if the response categories
were adequate, equally likely to offer good choices for any respondent, and fit well the
respondent’s situation; and, “yes” to neutral, if the wording of the item was neutral, no one
choice was obviously ‘right’, and it was not likely that most respondents would respond in
the same way. An answer of “no” for a category meant that the respondent thought the question did not meet that criterion. After the respondents completed the evaluation form, we discussed any items to which respondents gave one or more “no” responses, considered what was meant by each item, why we included it, and what its greater (contextual) purpose was.
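The flagging step described above (collect yes/no answers per item in each of the four categories, then discuss any item that drew at least one “no”) can be sketched as a short script. This is a minimal illustration, not the study’s actual instrument or data: the item labels, respondent answers, and function name are hypothetical placeholders.

```python
# Flag pilot items that received one or more "no" responses in any of the
# four evaluation categories (understandable, adequate, applicable, neutral).
# All data below is illustrative, not from the study.

CATEGORIES = ["understandable", "adequate", "applicable", "neutral"]

# Hypothetical responses: item label -> {category: list of yes/no answers}
responses = {
    "Q1": {"understandable": ["yes", "yes", "yes"],
           "adequate": ["yes", "no", "yes"],
           "applicable": ["yes", "yes", "yes"],
           "neutral": ["yes", "yes", "yes"]},
    "Q2": {c: ["yes", "yes", "yes"] for c in CATEGORIES},
}

def items_to_discuss(responses):
    """Return items with at least one 'no' in any category, with 'no' counts."""
    flagged = {}
    for item, by_category in responses.items():
        counts = {c: answers.count("no") for c, answers in by_category.items()}
        if any(counts.values()):  # at least one "no" somewhere
            flagged[item] = counts
    return flagged

print(items_to_discuss(responses))
```

Running this prints only Q1, with a per-category count of “no” answers, giving the focus group a concrete agenda of items to revisit.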
Stage 7: Improving the Survey

Polish the survey by making sure that all the feedback from the pilot is taken into consideration and selectively integrated. At this stage, researchers compare the survey, the evaluation form, and their notes to see what changes were suggested and, based on those suggestions and brainstorming, which questions make the best sense to include in the final survey. Researchers may decide to remove questions that disturb the flow and/or are not critical to eliciting the information needed for the research, and to include new questions. If opting to include new questions, these should not necessitate re-piloting the survey; instead, a follow-up with the students who participated should be sufficient.

References
Cossa, J., & Barker, L. (2021). Motivation to Pursue a Ph.D. in Computing: Black Students in HBCUs. Journal of Interdisciplinary Studies in Education, 10(1). Retrieved from https://ojed.org/index.php/jise/article/view/2685
Golafshani, N. (2003). Understanding Reliability and Validity in Qualitative Research. The Qualitative Report, 8(4), 597-607.
Maxwell, J. A. (1992). Understanding and Validity in Qualitative Research. Harvard Educational Review, 62(3), 279-300.
Rogers, G. (2015, April). Sample Protocol for Pilot Testing Survey Items. ABET. Retrieved 2020, from www.abet.org/WorkArea/DownloadAsset.aspx?id=1299
[i] Be sure to carry a set of electronic copies and a set of hard copies, as backup. The discussion in Stage 6 shows how we found that carrying a hard copy of the survey allowed us to discover more than just the convenience of a backup copy.
[ii] During each visit, our team representative and pilot administrator wrote notes to share
with the rest of the team members. These notes were useful when discussing changes to
the survey and provided further insight into respondents’ answers in the survey and
evaluation form.
