
So this unit is a lot about pretesting techniques and making sure that your questionnaire is in the best possible format. We'll start out with a subset of techniques, but let me first tell you why we do pretesting in the first place. Overall, we pretest to identify and reduce any possible errors in the questionnaire. Now, "any", that's a bold claim. Remember, it took them 400 years to measure longitude. So don't expect that you can build a good measurement instrument within the two weeks or the six weeks of this course, but the techniques we present here are a good step toward getting better at identifying specification error: seeing whether your decomposition has worked, whether you really tap into all the concepts that we discussed at the beginning. They will help you to identify operationalization error: is the question really measuring the construct, and do you see variability across the respondents, the interviewers, the times, or other issues with measurement in that regard? And then measurement error in general: remember the sensitive questions. Do you have question characteristics that lead the respondents to answer a certain way? And there are typical interviewer characteristics that interfere with certain questions, and issues of that nature. So all of that, hopefully, you can detect in pretests. It is rather uncommon that you would be able to administer a survey perfectly, measurement error free, without any pretesting, so you should use at least one of these techniques, ideally a multitude of them. We'll show a comparison of the techniques
at the end of the segment, and you will see they're not suitable for
all settings. And you certainly get the best picture
if you use several techniques at once. The first ones, expert reviews, focus groups, cognitive interviews, are all more qualitative techniques. And then we have behavior coding and other statistical methods. I guess behavior coding can be done on the qualitative interviews as well, but all of the others sort of assume that you have a larger set of cases for which you test whether the questions and the answers match what you're trying to measure. So, let's start with the expert reviews. The first thing you
need to ask yourself is,
who's an expert? Questionnaire design experts, that's one set. But you should probably also enlist subject matter experts to review your questionnaire, to make sure that it matches the concepts that the subject matter experts gave you to begin with. Questionnaire administration experts are another good set to enlist for the review; those would be interviewers. And then there are some computer-based expert systems, like QUAID, that have been developed; the link is available here, so you can test that out. What do these experts do? They identify potential response problems and make recommendations for improvement. Now, how do they do that? Do they work individually or in a group setting? Sometimes you get open-ended comments, sometimes you solicit codes for particular problems; that would be in the form of an appraisal system. In the end, you hopefully have a report with comments that help you improve, the noted problems and suggested revisions, and a summary that shows you the distribution of the problems. You definitely want to have a few experts review this, and not just rely on the answers from one.
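To make that appraisal idea concrete, here is a minimal sketch in Python of how coded expert feedback could be tallied into that kind of summary distribution. The problem codes, reviewer names, and data are hypothetical, not drawn from any particular appraisal system.

```python
from collections import Counter

# Hypothetical appraisal codes assigned by three expert reviewers.
# Each reviewer maps a question number to the problem codes they noted.
expert_codes = {
    "expert_A": {1: ["vague_term"], 2: ["double_barreled"], 3: []},
    "expert_B": {1: ["vague_term", "long_question"], 2: [], 3: ["sensitive_topic"]},
    "expert_C": {1: ["vague_term"], 2: ["double_barreled"], 3: ["sensitive_topic"]},
}

# Distribution of problem codes across all experts and questions.
problem_distribution = Counter(
    code
    for reviews in expert_codes.values()
    for codes in reviews.values()
    for code in codes
)

# How many experts flagged each question at all, as a crude look at agreement.
experts_flagging_question = Counter(
    question
    for reviews in expert_codes.values()
    for question, codes in reviews.items()
    if codes
)

print("Problem code distribution:", dict(problem_distribution))
print("Experts flagging each question:", dict(experts_flagging_question))
```

A tally like this only shows where the experts pile up on the same problems; as noted next, it is still qualitative evidence rather than a statistical test.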
Expert reviews are really qualitative in nature, though, so you won't be able to make any statistics off the problems you found. Now, experts are good at identifying problems related to the data analysis and to the questions themselves; they are good at spotting issues that pop up. In a quantitative analysis comparing expert reviews with more sophisticated techniques like latent class analysis, we found considerable agreement, but we'll come back to that point once we look closer at these LCAs. What's definitely good is that they're relatively cheap and fast; what's not so good is that the quality varies a lot in practice. We don't have a lot of literature on this, but one thing is known: there is large inconsistency and disagreement between experts. So with that, let's move to the next technique, which is focus groups. Focus groups are small groups,
five to ten people, that you bring together to
investigate a research topic. You try to find out what people think when they hear this topic, when they hear a particular question, so you can mix an open discussion with an evaluation of the survey questions themselves. Try to figure out how people feel about the vocabulary, about the terms used, about any key concepts that appear in the questionnaire. And keep in mind that interviewers and respondents don't always agree on these concepts. Let's say, for example, sources of income. For a questionnaire administered in a low-SES neighborhood, the investigator, and by extension the interviewer, might think of job salary, interest income, and dividends, whereas the respondent might think of ad hoc work, illicit activities, drug use, prostitution, and gambling. There you have it. So the key task of the focus group is to
explore what the respondent thinks and how the respondent thinks about the topic,
maybe both. You have to think about recruiting, about what the moderator does, identify good moderators, decide whether to audiotape or videotape; at the very least you should audio record, because it's very difficult to take sufficient notes. And then have a report written. A little bit more on recruitment. You really
would like to target participants that are also part of
your target population in the survey. You want to decide if you want
to have a homogeneous group or a more heterogeneous group. If you have the means
to have several groups, it could be good to
vary that a little bit. You will get a different group discussion depending on whether you do one or the other. But in general, if you only have one focus group, it's probably a good idea to have it diversified, but not too much, so each group has a little bit of a mixture in it. As for the moderator,
you want to give the moderator a guide, where you make clear what the purpose of the study is. You can lay out the flow, where you write out open-ended questions, and then you can see how the respondents will answer them. And keep in mind what problem should be solved and what information you are searching for. Here's an example of points that should be mentioned in the moderator guide: the flow from ground rules and the introduction, through the opening questions and the in-depth investigation, to some statements on how they can close the focus group. It's also good to give them a guideline
for good questions inside the focus group. Tell them that they should ask short
questions to get long answers. They should ask questions that are easy to say, address one issue at a time, use a conversational tone, ask open-ended questions, and ask positive questions before negative ones. A focus group is very good at providing qualitative information, as we said earlier. They're good at providing a range of information, lots of variety. But what you learn there is hardly generalizable. So it's qualitative, just like the expert reviews. The advantage is that they're efficient and small. The disadvantage is that they're more costly,
because you have to recruit, you have to incentivize, you often have to
rent a facility, in particular if you want to have outside observation through a one-way mirror and things of that nature. There are lots of good textbooks on focus groups; a couple of them are listed here, and more are listed on the syllabus on Coursera. Our next segment will move into cognitive interviewing, another qualitative technique that can be used to test questionnaires.
