
Research Instrument and its Validity and Reliability

Instruments
• These are tools or devices designed to measure data for a particular purpose; in this case, for research.
• A research instrument is one of the most significant elements in accomplishing an investigation.

Three Ways to Construct an Instrument
• ADOPT an instrument.
• MODIFY an existing instrument.
• CREATE your own instrument.

Designing the Questionnaire
• A questionnaire is an instrument for collecting data. It consists of a series of questions to which respondents provide answers in a research study.
• Generate the items or questions of the questionnaire based on the purpose and objectives of the research study.

Common response scales and their options:
Frequency of Occurrence: Very Frequently, Frequently, Occasionally, Rarely, Very Rarely
Frequency of Use: Always, Often, Sometimes, Rarely, Never
Degree of Importance: Very Important, Important, Moderately Important, Of Little Importance, Not Important
Quality: Strongly Agree, Agree, Undecided, Disagree, Strongly Disagree
Level of Satisfaction: Very Satisfied, Satisfied, Undecided, Unsatisfied, Very Unsatisfied
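Likert-type responses such as the agreement options listed above are usually coded numerically before analysis. A minimal Python sketch, assuming a 5-point agreement scale; the mapping and responses here are illustrative, not from the source:

```python
# Numeric coding for a 5-point agreement (Likert-type) scale.
AGREEMENT = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Undecided": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

def mean_score(responses):
    """Average the numeric codes of a list of labelled responses."""
    codes = [AGREEMENT[label] for label in responses]
    return sum(codes) / len(codes)

print(mean_score(["Agree", "Strongly Agree", "Undecided", "Agree"]))  # 4.0
```

The same coding works for the other 5-point scales; only the label-to-number mapping changes.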
Steps in Designing the Questionnaire
1. Background
2. Conceptualization
3. Validity
4. Reliability
5. Pilot Testing
6. Revision

GUIDELINES in MAKING QUESTIONS in your QUESTIONNAIRE
1. The questions should be clear, concise, and simple, using a minimum number of words. Avoid a lengthy and confusing layout.
2. Classify your questions under each statement based on your problem statement.
3. Questions should be consistent with the needs of the study.
4. Avoid sensitive or highly debatable questions.
• Choose the type of questions to use in developing statements:
➢ Dichotomous
➢ Open-ended
➢ Closed
➢ Rank-order scale
➢ Rating scale

Steps in Designing the Questionnaire

1. Background
• Do basic research on the background of the chosen variable or construct; choose a construct that you can use to craft the purpose and objective of the questionnaire.
• Examples of constructs are weight, height, age, IQ, and academic performance.
• After identifying the construct, you can easily state the purpose and objective of the questionnaire, and the research questions as well; only then can you frame the hypothesis of the study.

2. Questionnaire Conceptualization
• Choose the response scale to use. This is how your respondents will answer in your study. You can choose from the following response scales:
➢ Yes/No
➢ Yes/No/Don't Know
➢ Likert Scale

Likert Scale
• This is a very popular rating scale used by researchers to measure behaviors and attitudes quantitatively. It consists of choices that range from one extreme to another, from which respondents choose the degree of their opinion. It is the best tool for measuring the level of opinions.

3. Establishing Questionnaire Validity
• Validity is traditionally defined as the "degree to which a test measures what it claims or purports to be measuring".
• A questionnaire undergoes a validation procedure to make sure that it accurately measures what it aims to measure. A valid questionnaire helps to collect reliable and accurate data.
• The types of validity are: Face, Content, Criterion-Related (Concurrent, Predictive), and Construct.
1. Face Validity - This is a superficial or subjective assessment: the questionnaire appears to measure the construct or variable that the study is supposed to measure.
2. Content Validity - This is most often assessed by experts or people who are familiar with the construct being measured. The experts are asked to provide feedback on how well each question measures the variable or construct under study.
3. Criterion-related Validity - This type of validity measures the relationship between a measure and an outcome.
• Concurrent Validity - This type of validity measures how well the results of an evaluation or assessment correlate with other assessments measuring the same variables or constructs.
• Predictive Validity - This measures how well the results of an assessment can predict a relationship between the construct being measured and future behavior.
4. Construct Validity - This is concerned with the extent to which a measure is related to other measures as specified in a theory or previous research. It is an experimental demonstration that a test is measuring the construct it claims to be measuring.
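Expert feedback of the kind used for content validity is often summarized quantitatively. One common summary, not named in the notes above, is the item-level content validity index (I-CVI): the proportion of experts who rate an item as relevant. A minimal sketch with hypothetical ratings:

```python
# Item-level content validity index (I-CVI): the proportion of experts
# who rate an item as relevant (3 or 4 on a 1-4 relevance scale).
# The ratings below are hypothetical.
def item_cvi(ratings):
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

print(item_cvi([4, 3, 4, 2, 4]))  # 0.8 (4 of 5 experts rate the item relevant)
```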

4. Establishing Questionnaire Reliability


• Reliability indicates the accuracy or precision of the measuring instrument. It refers to a condition in which the measurement process yields consistent responses over repeated measurements.
• Test-retest
• Split-half
• Internal consistency

Test-retest reliability
• This is the simplest method of assessing reliability. The same test or questionnaire is administered twice, and the correlation between the two sets of scores is computed.
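The test-retest computation described above can be sketched directly: correlate the two administrations. A minimal Pearson correlation in plain Python, with made-up scores:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same questionnaire (scores are made up).
def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

time1 = [4, 3, 5, 2, 4]  # scores at the first administration
time2 = [5, 3, 4, 2, 4]  # scores at the second administration
print(round(pearson_r(time1, time2), 2))  # 0.81
```

A correlation close to 1 indicates that respondents answered consistently across the two administrations.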

Split-half method
• In this method, the items of a single test are split into two equivalent halves (for example, odd- versus even-numbered items), and the correlation between the scores on the two halves is calculated. When two different tests covering the same topics are used instead, the approach is called the equivalent or parallel-forms method.
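Once the two half-scores are correlated, the Spearman-Brown formula (a standard companion to the split-half method, though not named in the notes above) is commonly used to estimate the reliability of the full-length test from the half-test correlation; a minimal sketch:

```python
# Spearman-Brown "step-up" formula: estimates full-test reliability
# from the correlation between the two half-test scores.
def spearman_brown(r_half):
    return 2 * r_half / (1 + r_half)

print(round(spearman_brown(0.6), 2))  # 0.75
```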

Internal Consistency
• This method is used in assessing the reliability of questions measured on an interval or ratio scale. The reliability estimate is based on a single form of a test administered on a single occasion.
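A widely used internal-consistency estimate from a single administration is Cronbach's alpha (not named in the notes above, but the standard choice for this method); a minimal sketch with a made-up score matrix:

```python
# Cronbach's alpha from a single administration: rows are respondents,
# columns are questionnaire items (all scores are made up).
def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(scores):
    k = len(scores[0])                                   # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(data), 2))  # 0.9
```

Higher alpha (conventionally 0.7 or above) suggests the items consistently measure the same construct.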

5. Questionnaire Pilot Testing


• Pilot testing a questionnaire is important before you use it to collect data. Through this process, you can identify questions or statements that are not clear to the participants, as well as problems with the relevance of the questionnaire to the current study.
• After designing the questionnaire, you may find 10-15 people from your target group to pre-test the questionnaire. Design or provide spaces where the testers can freely indicate their remarks. Such remarks include the following:
• "Delete this statement. I don't understand the question."
• "Revise the question/statement."
• "Retain the question/statement."
• "There are missing options on the list of choices."
• "The question is too long."

Research Intervention
• A classic experimental design contains three key features:
(1) the independent and dependent variables,
(2) experimental and control groups, and
(3) pre-testing and post-testing (DeCarlo, 2018).
• In experimental research, the researcher manipulates the Independent Variable (IV) and measures its effect on the Dependent Variable (DV). The IV is also known as the treatment or intervention: the variable you are studying.
• The effect of these interventions can be tested by comparing two groups: the experimental group, also known as the treatment group, which is exposed to the intervention, and the control group, which is not exposed to the intervention.
• To measure the effect of these interventions, a pre-test and a post-test are conducted. As the terms imply, the pre-test is given prior to the exposure of the experimental group to the intervention, while the post-test is given after the intervention.
• According to Brown (2015), there are four characteristics of sound quantitative research: reliability, validity, replicability, and generalizability.
• Reliability is the degree to which research measurements or observations are consistent.
• Validity is the degree to which a study's measurements and observations represent what they are supposed to characterize.
• Replicability is the degree to which the research supplies sufficient information for the reader to verify the results by replicating or repeating the study.
• Generalizability is the degree to which the study is meaningful beyond the sample in the study, to the population that it represents.
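The pre-test/post-test comparison described above can be sketched with made-up scores; a simple summary is the mean gain (post-test minus pre-test) in each group:

```python
# Pre-test/post-test comparison: mean gain for the experimental group
# versus the control group (all scores are made up for illustration).
def mean_gain(pre, post):
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

exp_pre, exp_post = [10, 12, 11, 9], [15, 16, 14, 13]
ctl_pre, ctl_post = [10, 11, 12, 10], [11, 11, 13, 10]

print(mean_gain(exp_pre, exp_post))  # 4.0 (treatment group improved)
print(mean_gain(ctl_pre, ctl_post))  # 0.5 (little change without intervention)
```

In practice the difference in gains would also be tested for statistical significance rather than compared by eye.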
