
RESEARCH INSTRUMENT

 Discusses the instrument used to gather the data needed
to answer the specific problems posed in the study.
 Indicates whether the instrument is self-made; adapted
(using a conceptual framework, taking into consideration
the variables or constructs of the study); or adopted
(standardized/copyrighted).

Division of General Trias City 1


RESEARCH INSTRUMENT
 Clarifies the mode of responses and the scale to be
used as part of the assessment tool or measure.
Guidelines:
1. Are the statements or questions stated clearly?
2. Are the responses to the questions/items verifiable and
testable in terms of the hypothesis(es)?
3. Is the scale used appropriate to elicit the response
needed?
RESEARCH INSTRUMENT
 Research instruments are the basic tools researchers
use to gather data for specific research problems.
Common instruments are performance tests,
questionnaires, interviews, and observation
checklists.



RESEARCH INSTRUMENT
 In constructing the research instrument of a study,
many factors must be considered. The type of instrument,
the reasons for choosing that type, and the description and
conceptual definition of its parts are some of the decisions
to be made before constructing the instrument. It is also
very important to understand the scales used in research
instruments and how to establish the validity and
reliability of an instrument.
COMMON SCALES USED IN QUANTITATIVE RESEARCH
 Likert Scale. This is the most common scale used in
quantitative research. Respondents are asked to rate or
rank statements according to the scale provided.
 Example: A Likert scale that measures the attitude of
students towards distance learning
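Before analysis, Likert responses are typically coded as numbers. The sketch below is a minimal illustration in Python; the response labels and the sample responses are hypothetical, not taken from the slides:

```python
# Minimal sketch: coding 5-point Likert responses as numbers.
# Labels and sample responses are hypothetical, for illustration only.
SCALE = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]
scores = [SCALE[r] for r in responses]   # [4, 5, 3, 4, 2]
mean_score = sum(scores) / len(scores)   # 18 / 5 = 3.6
print(mean_score)
```

Once coded this way, item scores can be averaged or summed per respondent for later analysis.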



Adapt vs Adopt
 Adapt is used either when a change is made
to make something more suitable for a
particular use or when adjusting to a new
place.
 Adopt is used when something is taken over,
chosen, accepted or approved by choice.
https://www.trinka.ai/blog/adapt-vs-adopt-what-is-the-difference/#:~:text=Adapt%20is%20used%20either%20when,accepted%20or%20approved%20by%20choice.
VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT

Validity.
A research instrument is considered valid if it
measures what it is supposed to measure. When measuring
the oral communication proficiency of students, a speech
performance rated with a rubric or rating scale is more valid
than a multiple-choice test. Validity has several types: face,
content, construct, concurrent, and predictive validity.
VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT
 Face Validity. Also known as “logical validity,” it calls
for an intuitive judgment of the instrument as it “appears.”
Just by looking at the instrument, the researcher decides if it
is valid.
 Content Validity. An instrument judged to have
content validity meets the objectives of the study. It is
checked by examining whether the statements or questions
elicit the needed information. Experts in the field of interest
can also identify specific elements that should be measured
by the instrument.
VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT
 Construct Validity. It refers to how well the instrument
corresponds to the theoretical construct of the study, that is,
how a specific measure relates to other measures.
 For example, if a researcher develops a new questionnaire to
evaluate respondents' levels of aggression, the construct
validity of the instrument would be the extent to which it
actually assesses aggression as opposed to assertiveness, social
dominance, and so forth.



VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT
 Concurrent Validity. When the instrument can produce
results similar to those of similar tests that have already
been validated, it has concurrent validity.
 For example, an employment test may be administered to a
group of workers and then the test scores can be correlated
with the ratings of the workers' supervisors taken on the same
day or in the same week. The resulting correlation would be a
concurrent validity coefficient.



VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT
 Predictive Validity. When the instrument can produce
results similar to those of similar tests to be employed in the
future, it has predictive validity. It is the degree to which test
scores accurately predict scores on a criterion measure.
 This is particularly useful for aptitude tests.


VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT

 Reliability of Instrument
Reliability refers to the consistency of the measures or
results of the instrument.



VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT
1. Test-retest Reliability. It is achieved by giving the same test
to the same group of respondents twice. The consistency of
the two scores will be checked.

The test-retest reliability of a survey instrument, such as a
psychological test, is estimated by administering the same survey
to the same respondents at different points in time. The closer the
results, the greater the test-retest reliability of the instrument.
The correlation coefficient between the two sets of responses is
often used as a quantitative measure of test-retest reliability.
VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT

2. Equivalent Forms Reliability. It is established by administering
two parallel forms of a test, identical except for wording, to the
same group of respondents.

For example, one administers a test, say Test A, to students on
June 1, then administers its parallel form, Test B, to the same
students at a later date, say June 15. Scores from the same
person are correlated to determine the degree of association
between the two sets.
VALIDITY/RELIABILITY OF RESEARCH INSTRUMENT

3. Internal Consistency Reliability. It determines how well the
items measure the same construct. It is reasonable that when a
respondent gets a high score in one item, he or she will also get
a high score in similar items.
There are three ways to measure internal consistency:
the split-half coefficient, Cronbach’s alpha, and the
Kuder-Richardson formula.



CRONBACH ALPHA

$\alpha = \dfrac{k}{k-1}\cdot\dfrac{s_y^2 - \sum s_i^2}{s_y^2}$

where
$k$ = number of items
$\sum s_i^2$ = sum of the variances of each item
$s_y^2$ = variance of the total column
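A minimal sketch of this formula in Python, using made-up item scores (rows are respondents, columns are items); `statistics.variance` gives the sample variance, the same n − 1 form used in the variance computation on the next slide:

```python
# Cronbach's alpha, computed from the formula above with the
# standard library only. The response data are hypothetical.
from statistics import variance

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item scores."""
    k = len(scores[0])                                      # number of items
    item_vars = sum(variance(col) for col in zip(*scores))  # sum of item variances
    total_var = variance([sum(row) for row in scores])      # variance of total column
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5 respondents x 3 Likert items
responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(responses), 3))
```

Note that $1 - \sum s_i^2 / s_y^2$ is algebraically the same as $(s_y^2 - \sum s_i^2)/s_y^2$ in the formula above.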





SAMPLE VARIANCE COMPUTATION

Item No. 1 (X)    X − X̄    (X − X̄)²
      11            3          9
       8            0          0
      12            4         16
       7           −1          1
       5           −3          9
       3           −5         25
       4           −4         16
      15            7         49
       9            1          1
       6           −2          4
ΣX = 80    Σ|X − X̄| = 30    Σ(X − X̄)² = 130

n = 10,  X̄ = ΣX / n = 80 / 10 = 8

s² = Σ(X − X̄)² / (n − 1) = 130 / (10 − 1) = 130 / 9 ≈ 14.44
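The same computation can be checked in Python, using the ten hypothetical item scores whose sum is 80 and whose squared-deviation sum is 130:

```python
# Sample variance, following the slide's steps: deviations from the
# mean, squared, summed, then divided by n - 1.
data = [11, 8, 12, 7, 5, 3, 4, 15, 9, 6]   # Item No. 1 scores, n = 10
n = len(data)
mean = sum(data) / n                        # 80 / 10 = 8
ss = sum((x - mean) ** 2 for x in data)     # sum of squared deviations = 130
s2 = ss / (n - 1)                           # 130 / 9
print(round(s2, 2))                         # 14.44
```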