3 Reliability and Validity
Reliability
A measurement procedure is reliable when it yields consistent scores while the phenomenon being measured is not changing. Reliability is the degree to which scores are free of measurement error: the consistency of measurement.
Validity
The extent to which measures indicate what they are intended to measure. The match between the conceptual definition and the operational definition.
Example
Measuring height with a reliable bathroom scale. Measuring aggression with observer agreement by observing a child hitting a Bobo doll.
Stability Reliability
Test-retest: the SAME TEST at DIFFERENT TIMES. Testing the phenomenon at two different times; the degree to which the two measurements of the same thing, using the same measure, are related to one another. This only works if the phenomenon is unchanging.
Example of Stability
Administering the same questionnaire at two different times. Re-examining a client before deciding on an intervention strategy. Running a trial twice (e.g., errors in tennis serving).
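The test-retest idea above can be sketched numerically: stability reliability is commonly estimated as the Pearson correlation between the two administrations. A minimal sketch, with entirely hypothetical questionnaire scores:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical: the same questionnaire given to five people at two times.
time1 = [10, 12, 9, 15, 11]
time2 = [11, 12, 10, 14, 11]
print(round(pearson_r(time1, time2), 2))  # ≈ 0.99: scores are highly stable
```

A correlation near 1 indicates stable scores across administrations; remember this is only interpretable as reliability if the phenomenon itself did not change between the two times.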
Equivalence Reliability
1. Inter-item (split-half)
2. Parallel forms [different versions of the same measure]
3. Interobserver agreement: is every observer scoring the same?
1. Inter-item Reliability
(Internal consistency): The association of answers to a set of questions designed to measure the same concept.
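Internal consistency is commonly quantified with Cronbach's alpha (the slide does not name a specific coefficient; alpha is the standard choice). A minimal sketch with hypothetical item scores:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency of a set of items.

    items: one list of scores per item; each list covers the same respondents.
    """
    k = len(items)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical questionnaire items answered by four respondents.
items = [[4, 5, 3, 5], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # ≈ 0.85
```

Higher alpha (conventionally above about 0.7) suggests the items are answered consistently and so plausibly tap the same concept.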
3. Interobserver Reliability
Correspondence between measures made by different observers.
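Interobserver agreement can be checked with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance (again, the slide names no specific statistic; kappa is a common choice for two observers using categorical codes). All data below are hypothetical:

```python
def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two observers' categorical codes."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    cats = set(rater1) | set(rater2)
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in cats)
    return (observed - expected) / (1 - expected)

# Two observers code ten hypothetical Bobo-doll episodes as "hit" or "no".
obs1 = ["hit", "hit", "no", "hit", "no", "no", "hit", "no", "hit", "hit"]
obs2 = ["hit", "no", "no", "hit", "no", "no", "hit", "no", "hit", "hit"]
print(round(cohens_kappa(obs1, obs2), 2))  # 0.8: strong agreement beyond chance
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.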
Note on Reliability
For statistics-minded readers, the following refers to the goodness of fit around a regression line in the presence of measurement error.
Secondary Definition of Reliability from a previous slide
…or that the measured scores change in direct correspondence to actual changes in the phenomenon.
Types of Validity
1. Content Validity
Face validity
Sampling validity (content validity)
2. Empirical Validity
Concurrent validity
Predictive validity
3. Construct Validity
Face Validity
Confidence gained from careful inspection of a concept to see if it is appropriate on its face. In our [collective] intersubjective, informed judgment, have we measured what we want to measure? (N.B. the use of good judgment.)
Content validity
Also called sampling validity. Establishes that the measure covers the full range of the concept's meaning, i.e., covers all dimensions of a concept. N.B. depends on good judgment.
*Note *
Actually, I think face and content validity are probably the same thing.
EMPIRICAL Validity
Establishes that the results from one measure match those obtained with a more direct or already-validated measure of the same phenomenon (the criterion). Includes concurrent and predictive validity.
Concurrent Validity
Concurrent validity exists when a measure yields scores that are closely related to scores on a criterion measured at the same time. Does the new instrument correlate highly with an old measure of the same concept that we assume (judge) to be valid? (Use of good judgment.)
Predictive Validity
Exists when a measure is validated by predicting scores on a criterion measured in the future. Are future events, which we judge to be a result of the concept we're measuring, anticipated [predicted] by the scores we're attempting to validate? (Use of good judgment.)
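Both empirical types reduce to correlating the new measure with a criterion: measured at the same time (concurrent) or later (predictive). A minimal sketch with entirely hypothetical numbers, e.g., an entrance test validated against grades earned later:

```python
def validity_coefficient(new_measure, criterion):
    """Pearson correlation of a new measure with a criterion measure."""
    n = len(new_measure)
    mx, my = sum(new_measure) / n, sum(criterion) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(new_measure, criterion))
    sx = sum((a - mx) ** 2 for a in new_measure) ** 0.5
    sy = sum((b - my) ** 2 for b in criterion) ** 0.5
    return cov / (sx * sy)

# Hypothetical: entrance-test scores vs. first-year grades measured later.
test_scores = [55, 70, 62, 80, 48, 75]
later_grades = [2.1, 3.0, 2.6, 3.5, 2.0, 3.2]
r = validity_coefficient(test_scores, later_grades)
print(round(r, 2))  # a high r supports predictive validity
```

The same computation with a criterion measured at the same time would be read as evidence of concurrent validity; the logic differs only in when the criterion is measured.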
Consider This:
If a construct is hard to conceptualize, doesn't it make sense that it'll be more difficult to operationalize and validate?
Construct validity
Established by showing (1) that a measure is related to a variety of other measures as specified in a theory (used when no clear criterion exists for validation purposes), (2) that the operationalization has a set of interrelated items, and (3) that the operationalization has not included separate concepts.
Construct validity
Check the intercorrelation of the items used to measure the construct. Use theory to predict a relationship, use a judged-to-be-valid measure of the other variable, then check for the relationship. Demonstrate that your measure isn't related to judged-to-be-valid measures of unrelated concepts.
Convergent Validity
Convergent validity: achieved when one measure of a concept is associated with different types of measures of the same concept (this relies on the same logic as measurement triangulation). The measures are intercorrelated.
Discriminant Validity
Discriminant validity: scores on the measure to be validated are compared to scores on measures of different but related concepts; discriminant validity is achieved if the measure to be validated is NOT strongly associated with those other measures. The measure is not related to unrelated concepts.
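Both checks can be sketched as correlations: a new measure should correlate highly with other measures of the same concept (convergent) and only weakly with measures of unrelated concepts (discriminant). All scores below are hypothetical:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

# Hypothetical scores for five people.
new_aggression_scale = [3, 8, 5, 9, 2]
observer_rated_aggression = [4, 7, 5, 9, 3]  # same concept, different method
shoe_size = [9, 7, 8, 10, 9]                 # unrelated concept

convergent = pearson_r(new_aggression_scale, observer_rated_aggression)
discriminant = pearson_r(new_aggression_scale, shoe_size)
print(round(convergent, 2), round(discriminant, 2))  # high vs. near zero
```

The pattern of a strong convergent correlation alongside a near-zero discriminant correlation is what supports construct validity; either result alone is weaker evidence.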
Using theory
A measure of a construct predicts what theory says it should. Example (diagram): a measure of a companionate relationship should predict relationship longevity and satisfaction.