
Week 8

Understanding and
Systematically Collecting Data

Prepared by:

Mr. Ferdie G. Salao

Instruments in QNR

 Instruments are tools used to gather data for a particular research topic.
 Examples: questionnaires, interviews, and observations.
 A coder or evaluator can help you gather and analyze your data and check your instrument for subjectivity.

Inter-coder or inter-rater agreement

 Refers to the level of concurrence between the scores given by two or more raters.

Things to be considered in crafting the instrument

 Actual instrument used.
 Purpose of the instrument.
 Developer of the instrument.
 Number of items or sections in the instrument.
 Response format used (multiple choice, yes or no).
 Scoring for the responses.
 Reliability and validity of the instrument.

Ways of Developing an Instrument for QNR

 Adopt an existing instrument.
 Modify an existing instrument.
 Create your own instrument.

Example: ATMI (Attitudes Toward Mathematics Inventory) by Martha Tapia

Instrument Validity v. Instrument Reliability

 Validity refers to the degree to which an instrument measures what it is supposed to measure.
 Reliability refers to the consistency of the results.

Types of Validity

 Face validity – an instrument has face validity when it appears to measure the variables being studied.
 Content validity – degree to which an instrument covers a representative sample (specific elements) of the variables to be measured.
 Construct validity – degree to which an instrument measures an intangible or abstract variable such as personality, intelligence, or mood.
 Criterion validity – degree to which an instrument predicts the characteristics of a variable in a certain way.
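Criterion validity is commonly quantified by correlating scores on the new instrument with scores on an established criterion measure. A minimal sketch with hypothetical scores, computing Pearson's r by hand:

```python
# Hypothetical data: six participants' scores on a new instrument and on
# an established criterion measure. Values are made up for illustration.
instrument = [12, 15, 9, 18, 14, 11]   # new instrument scores
criterion = [55, 62, 48, 70, 60, 50]   # established criterion scores

n = len(instrument)
mx = sum(instrument) / n
my = sum(criterion) / n

# Pearson's r = covariance / (sd_x * sd_y), using population formulas.
cov = sum((x - mx) * (y - my) for x, y in zip(instrument, criterion)) / n
sx = (sum((x - mx) ** 2 for x in instrument) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in criterion) / n) ** 0.5
r = cov / (sx * sy)

print(round(r, 2))  # 0.99
```

A strong positive correlation like this would be taken as evidence of criterion validity; values near zero would suggest the instrument does not predict the criterion.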

Instrument Reliability

 Test-retest reliability is achieved by administering an instrument twice to the same group of participants and then computing the consistency of the scores.
 Equivalent forms reliability is measured by administering two tests identical in all aspects except the actual wording of the items (pretest-posttest).
 Internal consistency reliability is a measure of how well the items in a single instrument measure the same construct. Examples: split-half reliability, Cronbach's alpha, Kuder-Richardson formulas.
 Inter-rater reliability – measures the consistency of scores assigned by two or more raters on a certain set of results.
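Of these, Cronbach's alpha is perhaps the most widely reported. It compares the variance of individual items with the variance of the total score. A minimal sketch on hypothetical data (5 respondents, 4 Likert-type items):

```python
# Hypothetical scores: rows are respondents, columns are items on a
# 4-item Likert-type scale. Data are made up for illustration only.
scores = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

def variance(xs):
    # Population variance, as in the standard alpha formula.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(scores[0])  # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.94
```

Values of alpha close to 1 indicate that the items vary together, i.e. they appear to measure the same construct.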
Measures of Central Tendency and Variability

 Mean, median, mode
 Standard deviation

Example: 78, 78, 79, 80, 81, 86
90, 94, 97, 98, 98, 98, 99, 99, 99
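These measures can be computed directly with Python's standard `statistics` module; a sketch using the first example data set above:

```python
import statistics

# First example data set from the slide.
data = [78, 78, 79, 80, 81, 86]

print(statistics.mean(data))    # arithmetic average of the scores
print(statistics.median(data))  # 79.5 (middle of the sorted scores)
print(statistics.mode(data))    # 78 (most frequent score)

# pstdev treats the data as the whole population; use statistics.stdev
# instead when the data are a sample drawn from a larger population.
print(statistics.pstdev(data))
```

The mean is pulled upward by the high score of 86, while the median stays between the two middle values, which is why both are usually reported together.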

