
3 Ways to Implement Descriptive Research to Benefit Your Organization

The ways organizations use descriptive research are almost limitless. We already know that going into the survey design phase with clear research goals is critical, but how do we know that our research plan will provide fruitful information? To understand what your research goals should entail, let's take a look at the three main ways organizations use descriptive research today:

1. Defining a Characteristic of Your Respondents: All closed-ended questions aim
to better define a characteristic of your respondents. This could include gaining
an understanding of traits or behaviours, like asking your respondents to identify
their age group or report how many hours they spend on the internet each
week. It could also be used to ask respondents about opinions or attitudes, like
how satisfied they were with a product or their level of agreement with a political
platform.
In essence, all this information can be used by an organization to make better
decisions. For example, a retail store that discovers that the majority of its
customers browse sale items online before visiting the store gains insight
into where it should focus its advertising efforts.
2. Measuring Trends in Your Data: With the statistical capabilities of descriptive
research, organizations are able to measure trends over time. Consider a survey
that asks customers to rate their satisfaction with a hotel on a scale of 0-10. The
resulting value is mostly arbitrary by itself. What does an average score of 8.3
mean? However, if the hotel's management makes changes to better meet
its customers' needs, it can later conduct the same survey again and see
whether the new average score has risen or fallen. This allows the hotel to
effectively measure its progress on customer satisfaction over
time, as well as measure the effects of new initiatives and processes.
3. Comparing Groups and Issues: Organizations also use descriptive research to
draw comparisons between groups of respondents. For example, a shampoo
company creates a survey asking the general public several questions
measuring attitudes towards the company's products, advertising, and image. In
the same survey it may ask various demographic questions like age, gender,
and income.
Afterwards, the company will be able to analyse the data to compare different
groups of people and their attitudes. For example, the company can statistically
identify differences in opinion between genders and across age groups. Maybe it
finds a significantly lower opinion of the company's image among young adult
males. This could prompt a new line of products catering to this
demographic.
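The group comparison described in point 3 boils down to comparing summary statistics across demographic slices. A minimal sketch (the ratings and group labels here are hypothetical, not data from the article):

```python
from statistics import mean

# Hypothetical 0-10 company-image ratings, keyed by demographic group
ratings = {
    "young adult males":   [4, 5, 3, 6, 4, 5],
    "young adult females": [7, 8, 6, 7, 9, 8],
}

# Average rating per group
group_means = {group: mean(scores) for group, scores in ratings.items()}
for group, avg in group_means.items():
    print(f"{group}: {avg:.2f}")

# The gap between the two group means is the difference of opinion
# a real study would then test for statistical significance
gap = group_means["young adult females"] - group_means["young adult males"]
print(f"gap: {gap:.2f}")
```

In practice a researcher would follow this up with a significance test (for example, a t-test) before acting on the gap.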

If your research goals fit under one of these three categories, you should be on the
right track. Now all you have left to do is decide how the data collected will help your
organization take action on a certain issue or opportunity. Remember, conducting a
successful survey is only half the battle. It is what you do with the information gathered
that makes your research project useful!
Defining Survey Bias
As we already know from our previous discussion on bias, response bias can be
defined as the difference between the true values of variables in a study’s net sample
group and the values of variables obtained in the results of the same study. A form of
response bias, survey bias encompasses any error due to a study’s survey design.
Though survey bias can be found in any form of questionnaire, it is especially prevalent
in internet surveys since they are completed privately by respondents. Without the
supervision of an interviewer, it is difficult to track participant reactions to the way
questions are worded, the selected question types and design, the structure of the
survey, or its styling and colouring. Problems in any of these four areas can lead to a
dramatic climb in a study’s survey bias and drop-out rates. That is why a researcher
must learn how to correctly address each of these four sources of survey bias before
they begin fieldwork and start collecting data.

Question Wording
Survey bias can be subtle or blatantly obvious in a question’s wording, and can take
many forms. It is the survey designer’s job to remain impartial and avoid writing
questions that lead or confuse the respondent. This is usually easier said than done, as
writers often create biased questions due to their own lack of knowledge of a
subject or ignorance of other people's perspectives on it.
To help squash this bias, it is essential to remain neutral in all questions, no matter how
extreme the topic. It also helps to conduct secondary research to ensure you have a full
understanding of the topic under study. Furthermore, a constructive peer review by an
expert in the field of your survey topic will reveal any questions that are confusing or
erroneous for the subject of study and its target population.
Beyond this, there are several rules that survey creators should always abide by when
writing survey questions. Learn them all and reduce your survey bias by reviewing ‘Get
the Most Out of Your Survey: Tips for Writing Effective Questions’.

Question Type and Design


This type of survey bias stems from the selection of question forms (rating
scales, ranking, open-ended versus closed-ended) and the answer options provided
to the respondent. Both the question types selected and the options participants
must choose from can have a significant impact on the responses received.
To avoid running into this type of bias, it is crucial for the researcher to understand the
strengths and weaknesses of each question type they will be using. This way the
question and its options will not only be chosen correctly but can be tailored in order to
provide only the most useful data. For more information on how to use different question
types check out FluidSurveys’s video tutorials.
Survey Structure
One of the most overlooked forms of survey bias comes from poorly designed survey
structures. Survey structure usually pertains to the order in which the survey questions
are revealed to the respondent, but can also refer to the number of questions per page,
the survey logic, the survey length, and the introduction and conclusion. Each of these
portions of survey structure can contribute to survey bias and drop outs.
As with the other forms of survey bias, the best way to avoid making errors with your
survey structure is through studying how modifying each aspect of your survey will
affect your respondents' reactions to each question. For example, placing the most
threatening or personal questions at the end of your survey will decrease your
number of drop-outs. Acknowledging information like this will allow you to construct the
best possible surveys.

Styling and Colouring


This source of survey bias includes any form of flair added to a survey's design. It can
include colour schemes, font styles, logos, videos, sounds and any other type of
interactive element. Styling is important to provide stimulus to the participant and avoid
respondent fatigue. Moreover, using colours and logos allows respondents to recognize
a survey’s legitimacy. However, providing styling can also bias your survey. The fact is,
people respond in various ways to different colours and imagery.
It is important to use pretesting to ensure there are no issues with your choice of styling.
Ask your pretest team whether they can clearly see and read everything in the survey
and whether the style used affected how they felt about the survey questions. A rule of
thumb for styling is to ensure that the survey cannot be seen as directed towards one
demographic. Instead, any added styling or colouring should make the survey look
neutral while still being inviting and professional.
What is the Difference between Nonresponse Bias and
Response Bias?
As previously discussed, all error can be divided into two groups: random sampling
error and bias (also known as nonsampling error). Continuing further, all bias can be
placed under two categories: response bias and nonresponse bias. To understand bias,
it is important to go over both of these categories and explain the differences between
them.

Response bias can be defined as the difference between the true values of variables in
a study’s net sample group and the values of variables obtained in the results of the
same study. This means that response bias is caused by any element in the research
that makes its results different from the actual opinions or facts held by the respondents
participating in the sample. Most often, this type of bias is caused by respondents giving
inaccurate responses or by answers being incorrectly recorded or misanalysed. In later
blogs we will discuss the different forms of response bias, such as researcher
bias, survey bias, and respondent bias.

Nonresponse bias occurs when some respondents included in the sample do not
respond. The key difference here is that the error comes from an absence of
respondents instead of the collection of erroneous data. Put in more technical terms,
nonresponse bias is the variation between the true mean values of the original sample
list (people who are sent survey invites) and the true mean values of the net sample
(actual respondents). Most often, this form of bias is created by refusals to participate or
the inability to reach some respondents.
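Put as a toy calculation (all scores hypothetical): if we somehow knew the true satisfaction score of everyone on the invite list, nonresponse bias would be the gap between the net sample's mean and the original sample's mean:

```python
from statistics import mean

# Hypothetical true 0-10 satisfaction scores for everyone invited,
# with a flag for whether each person actually responded
invitees = [
    (9, True), (8, True), (9, True), (7, True),    # responders
    (3, False), (4, False), (2, False), (5, False)  # non-responders
]

# Mean of the original sample list (everyone who was sent an invite)
original_mean = mean(score for score, _ in invitees)

# Mean of the net sample (only those who responded)
net_mean = mean(score for score, responded in invitees if responded)

# Nonresponse bias: how far the observed mean drifts from the mean
# we would have seen with full participation
nonresponse_bias = net_mean - original_mean
print(original_mean, net_mean, nonresponse_bias)
```

Here the dissatisfied invitees stayed silent, so the observed average overstates satisfaction; in real fieldwork the non-responders' true values are unknown, which is why this bias cannot simply be measured away.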

As discussed in the previous blog on bias and error, to be considered a form of bias a
source of error must be systematic in nature. Nonresponse bias is not an exception to
this rule. If a survey method or design is created in a way that makes it more likely for
certain groups of potential respondents to refuse to participate or be absent during a
surveying period, it has created a systematic bias. Take these two examples for
instance:

1. Asking for sensitive information: Consider a survey measuring tax payment
compliance. Citizens who do not properly follow tax laws will be the most uncomfortable
filling out this survey and will be more likely to refuse. This will obviously bias the data
towards a more law-abiding net sample than the original sample. Nonresponse bias in
surveys asking for legally sensitive information has been shown to be even more
pronounced if the survey explicitly states that the government or another organization of
authority is collecting the data!

2. Invitation Issues: Many researchers create nonresponse bias because they do not
pretest their invites properly. For example, a large portion of young adults and business
sector workers answer the majority of their emails through their smartphones. If the
survey invite is provided through an email that doesn’t render well on mobile devices,
response rates among smartphone users will drop dramatically. This will create a net
sample that under-represents the opinions of the smartphone user demographic.

Ways to Avoid Nonresponse Bias


Nonresponse bias is almost impossible to eliminate completely, but there are a few
ways to reduce it as much as possible. Of course, having a professional,
well-structured and well-designed survey will help achieve higher completion rates, but here is a
list of five ways to tweak your research process to ensure that your survey has low
nonresponse bias:

1. Thoroughly Pretest Your Survey Mediums: As discussed in the example above, it
is very important to ensure that your survey and its invites run smoothly through any
medium or on any device your potential respondents might use. People are much more
likely to ignore survey requests if loading times are long, questions do not fit properly on
their screens, or they have to work to make the survey compatible with their device. The
best advice is to acknowledge your sample's different forms of communication software
and devices and pretest your surveys and invites on each, ensuring your survey runs
smoothly for all your respondents.

2. Avoid Rushed or Short Data Collection Periods: One of the worst things a
researcher can do is limit their data collection time in order to comply with a strict
deadline. Your study’s level of nonresponse bias will climb dramatically if you are not
flexible with the time frames respondents have to answer your survey. Fortunately,
flexibility is one of the main advantages of online surveys, since they do not require
interviews (phone or in person) that must be completed at certain times of the day.
However, keeping your survey live for only a few days can still severely limit a potential
respondent’s ability to answer. Instead, it is recommended to extend a survey collection
period to at least two weeks so that participants can choose any day of the week to
respond according to their own busy schedule.

3. Send Reminders to Potential Respondents: Sending a few reminder emails
throughout your data collection period has been shown to effectively gather more
completed responses. It is best to send your first reminder email midway through the
collection period and the second near the end of the collection period. Make sure you
do not harass the people on your email list who have already completed your survey!
You can manage your reminders and invites on FluidSurveys through the trigger
options found in the invite tool.

4. Ensure Confidentiality: Any survey that requires information that is personal in
nature should include reassurance to respondents that the data collected will be kept
completely confidential. This is especially the case in surveys that are focused on
sensitive issues. Make certain anyone reading your invite understands that the
information they provide will be viewed as part of the whole sample and not individually
scrutinized.
5. Use Incentives: Many people refuse to respond to surveys because they feel they do
not have the time to spend answering questions. An incentive is usually necessary to
motivate people into taking part in your study. Depending on the length of the survey,
the difficulty in finding the correct respondents (ie: one-legged, 15th-century spoon
collectors), and the information being asked, the incentive can range from minimal to
substantial in value. Remember, most respondents won't have a vested interest in
your study and must feel that the survey is worth their time!

Research experts have always emphasized the importance of obtaining more accurate
information in surveys through the elimination of error and bias. However, most
surveyors and research experts do not have a clear understanding of the different types
of survey error to begin with! Most professional researchers throw terms like response
bias or nonresponse error around the boardroom without a full comprehension of their
meaning. That is why we have decided to go over the different natures of error and bias,
as well as their impacts on surveys.

Defining Error and Bias


In survey research, error can be defined as any difference between the average values
that were obtained through a study and the true average values of the population being
targeted. Simply put, error describes how much the results of a study missed the mark,
by encompassing all the flaws in a research study. Take, for example, a study showing
that chocolate is the favourite ice cream flavour of 20% of people, when in actuality
it is the favourite of 25% of people. This difference could come from a
whole range of different biases and errors, but the total level of error in your study would
be 5 percentage points.
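That arithmetic is simply the absolute gap between the observed and true proportions (figures hypothetical):

```python
true_share = 0.25      # true population share favouring chocolate
observed_share = 0.20  # share observed in the study

# Total error: how far the study's result missed the true value
error = abs(observed_share - true_share)
print(f"{error:.0%}")  # prints "5%"
```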

Whereas error makes up all flaws in a study's results, bias refers only to error that is
systematic in nature. Research is biased when data is gathered in a way that makes its
value systematically different from the true value of the population of interest.
Survey research includes an incredible spectrum of different types of bias, including
researcher bias, survey bias, respondent bias, and nonresponse bias. Whether it is in
the selection process, the way questions are written, or the respondents’ desire to
answer in a certain way, bias can be found in almost any survey.

For example, including a question like “Do you drive recklessly?” in a public safety
survey would create systematic error and therefore be biased. The reason it is considered
systematic is that many respondents would answer the question falsely in one direction,
selecting “No” even if they are bad drivers.
The Effect of Random Sampling Error and Bias on
Research
But what about error that is not systematic in nature? This is called random sampling
error and is due to samples being an imperfect representation of the population of
interest. Unfortunately, no matter how carefully you select your sample or how many
people complete your survey, there will always be a percentage of error that has
nothing to do with bias. This is unavoidable in the world of probability because, as long
as your survey is not a census (collecting responses from every member of the
population), you cannot be certain that the true values resulting from your sample are
the same as the true values of the population.

However, random sampling error can be easily measured through the use of statistics.
Whenever a researcher conducts a probability survey they must include a margin of
error and a confidence level. This allows any person to understand just how much effect
random sampling error could have on a study’s results.
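As a rough sketch of how a margin of error for a proportion is commonly computed at a 95% confidence level (the sample size and proportion below are hypothetical, and this simple formula assumes a large simple random sample):

```python
import math

n = 400    # hypothetical number of respondents
p = 0.5    # observed proportion (0.5 gives the widest, most conservative margin)
z = 1.96   # z-score corresponding to a 95% confidence level

# Standard margin of error for a proportion from a simple random sample
margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"±{margin_of_error:.1%}")  # about ±4.9%
```

Reporting results as, say, "54% ± 4.9% at 95% confidence" is exactly how a researcher communicates random sampling error to readers.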

Bias, on the other hand, cannot be measured using statistics due to the fact that it
comes from the research process itself. Because of its systematic nature, bias slants
the data in an artificial direction that will provide false information to the researcher. For
this reason, eliminating bias should be the number one priority of all researchers. Over
the next few articles, we will discuss the several different forms of bias and how to avoid
them in your surveys.
Anyone who has experience in online research will tell you that they hear the words
‘questionnaire’ and ‘survey’ on a daily basis. Typically these two words are used by
professional researchers and the public interchangeably, and are thrown around at
complete random. Sometimes, when an individual is feeling extra adventurous, they will
go all in by combining the two words into the grammatical hybrid known as a ‘Survey
Questionnaire.’ Once or twice, I have even witnessed people throw caution completely
to the wind and use the dreadful ‘Questionnaire Survey.’ Living through this
phenomenon for far too long, I have decided that it was time to get to the bottom of this
etymological mystery! What is the difference between questionnaires and surveys?

Question(naire) Everything

After a preliminary brainstorm with my co-workers by the famous Fluidware kitchen, I
was bombarded with several insightful ideas and educated guesses as to the difference
between a questionnaire and a survey. Most agreed that the two words are
synonymous, with some believing that though both words are acceptable, questionnaire
is more commonly used in Europe whereas survey is more frequently used in North
America.

I graciously digested my co-workers’ collective thoughts and contributions to the
mystery, but felt dissatisfied. Why was there no consensus answer to this
burning question? Naturally, I began an epic descent into the catacombs of internet
answer websites and, like a man trying to self-diagnose a nagging ailment, was
provided with a multitude of unlikely and eccentrically horrifying responses. Finally, I did
the intelligent thing and turned my attention to the dictionary.

Surveys versus Questionnaires

As it turns out, there is a major difference between a questionnaire and a survey. A
survey is defined as the measure of opinions or experiences of a group of people
through the asking of questions. This is opposed to a questionnaire, which is defined as
a set of printed or written questions with a choice of answers, devised for the purposes
of a survey or statistical study.

So really, a questionnaire is a tool to be used for a survey. When conducting a survey,
your list of questions is called your questionnaire. A survey, on the other hand,
encompasses all aspects of the research process, including research design, survey
construction, sampling method, data collection, and response analysis.

With this knowledge, we have officially solved the ‘Survey Questionnaire’ riddle.
Regardless of the common uses of survey terminology, we now know that it is
appropriate to say phrases like, “This questionnaire is too long for my survey’s sample
group.”

So go ahead and impress your friends with your survey research know-how! If you have
any interesting questions about online survey research, don't feel too shy to ask them in the
comments below. I am always happy to help a fellow online researcher! Until next time,
happy questionnaire surveying (cringe)!

Hey gang! In this article we are going to discuss a challenge that every researcher is
confronted with when creating a survey: how to design the right question list for their
questionnaire. This seemingly straightforward task can become incredibly
difficult to accomplish without taking the proper steps. Not only is coming up with
questions without a plan more demanding, but a researcher who does not use caution
when creating their list of questions may collect information that later proves
unnecessary or misleading for their organization's needs. This will most likely lead to
wasted company resources and time and, even worse, poor management decisions
based on erroneous data.

As my fifth grade teacher used to say, “It is always hard to get the right answers when
you are asking the wrong questions.” Essentially, a survey is only as good as the
questions created will allow it to be. That is why the process of designing your questions
is so integral to the success of your study. In this blog, I will go over the steps I
personally use to develop a list of questions for any of my surveys.

Defining the Problem


The name of the game when making your survey is to only create questions that are
directly related to gaining the information you need! This will keep your survey short and
focused, which in turn prevents respondent fatigue and allows for a smooth data
analysis process. So before one begins to outline the questions of their survey, they
need to decide the reason this task is necessary in the first place. This begins with
defining a business problem and research problem. The business problem is goal
oriented and asks what the decision maker needs to do, whereas the research problem
is information oriented and asks what data is necessary to make an educated decision
on the business problem.

Say I owned a restaurant and noticed a decrease in my revenue. I may come up with
something like this:

Business Problem: What areas need to be improved and how should they be
improved in order to increase the revenue of my restaurant?
Research Problem: Determine the strengths and weaknesses of the restaurant by
gaining feedback from the restaurant’s customer base.

Creating the Research Purpose and Objectives


With the business and research problems defined, it is now time to build a research plan
that will be able to properly address the issue. It would be unwise for me to begin writing
questions before understanding the main objectives of my study. If I were to jump
directly into making the questions now, the survey structure would be loaded with a
mishmash of questions randomly put together. This leads to omitting questions
that would have been useful and including questions that are redundant
or misleading to the data. Instead, it is best to create a clear research purpose, followed
by a list of its research objectives.

The research purpose is a reiteration of your research problem, with the added
description of the type of survey that will be carried out. The research objectives should
break the research purpose into easy-to-manage parts. Continuing with my restaurant
example, we can see what my research purpose and objectives would look like:

Research Purpose: Measure the level of customer satisfaction for our restaurant and
collect feedback in order to better meet our customer needs.

Research Objectives: Measure the level of customer satisfaction and collect feedback
for each of the following aspects of our restaurant:
1) Food and Drink Menu
2) Customer Service
3) Restaurant Environment
4) Comfort and Cleanliness
5) Overall

With the research purpose broken into five distinct objectives, it is now easy to create a
questionnaire that devotes several questions to each objective. In essence, the
objectives organize your survey's overarching research purpose into separate
sections that focus the scope of your study. A survey without clear
research objectives will be disorganized, as the questions will likely be in a random
order and miss key parts of the topics that need to be researched.

Research objectives also work to subcategorize the survey into quantifiable sections for
data analysis. With regard to my restaurant study, I will be able to measure each aspect
of the restaurant against the others because my questions will be clearly separated into
different groups. This will allow me to identify which aspects of my restaurant are
strengths and which are weaknesses by providing an overall score for each objective, before I
begin to look at each question individually.
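That per-objective scoring can be sketched as follows, assuming hypothetical ratings grouped under each of the five objectives from the restaurant example:

```python
from statistics import mean

# Hypothetical 0-10 question ratings, grouped by research objective
responses = {
    "Food and Drink Menu":     [8, 7, 9],
    "Customer Service":        [5, 4, 6],
    "Restaurant Environment":  [7, 8, 7],
    "Comfort and Cleanliness": [9, 8, 9],
    "Overall":                 [7, 7, 8],
}

# Score each objective by averaging the ratings of its questions
scores = {objective: round(mean(r), 2) for objective, r in responses.items()}

# Listing objectives from weakest to strongest surfaces where to improve
for objective, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{objective}: {score}")
```

With these made-up numbers, Customer Service would surface as the weakest area, pointing the restaurant owner at the open-ended feedback collected under that objective.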

TIP: Do not forget that your survey questions should always seek underlying
information that will help you move forward. Because of this, each objective should be
designed with closed-ended questions to measure the strength of an aspect of my
restaurant, as well as an open-ended question to gain feedback on how to improve. For
more information on closed-ended and open-ended questions, visit my previous blog,
“Comparing Closed-Ended and Open-Ended Questions.”

Let Me Know What You Think


I have always felt it is the planning put into research that allows one to find the data that
will be truly useful. That being said, many surveyors use different techniques to plan out
their question list. I want to hear the steps you take to prepare your survey questions. If
you do not yet have an account in FluidSurveys, sign up today by visiting our account
page!
