Measurability

Published by peeyush on Feb 01, 2009
Copyright: Attribution Non-commercial
CHAPTER OVERVIEW

• The Measurement Process
• Levels of Measurement
• Reliability and Validity: Why They Are Very, Very Important
• A Conceptual Definition of Reliability
• Validity
• The Relationship Between Reliability and Validity
• A Closing (Very Important) Thought

THE MEASUREMENT PROCESS
• Two definitions
– Stevens: "assignment of numerals to objects or events according to rules"
– "…the assignment of values to outcomes"

• Chapter foci
– Levels of measurement
– Reliability and validity

LEVELS OF MEASUREMENT
Ratio
– For example: Rachael is 5'10" and Gregory is 5'5"
– Quality of level: Absolute zero

Interval
– For example: Rachael is 5" taller than Gregory
– Quality of level: An inch is an inch is an inch

Ordinal
– For example: Rachael is taller than Gregory
– Quality of level: Greater than

Nominal
– For example: Rachael is tall and Gregory is short
– Quality of level: Different from

• Variables are measured at one of these four levels
• Qualities of one level are characteristic of the next level up
• The more precise (higher) the level of measurement, the more accurate the measurement process
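The "qualities of one level are characteristic of the next level up" rule can be sketched in code. This is a toy illustration only; the names (`LEVELS`, `supported_operations`) are hypothetical and not from any measurement library.

```python
# Levels ordered from lowest to highest precision.
LEVELS = ["nominal", "ordinal", "interval", "ratio"]

# The comparison each level adds on top of the previous one.
ADDED_OPERATION = {
    "nominal": "different from",
    "ordinal": "greater than",
    "interval": "equal intervals",
    "ratio": "absolute zero",
}

def supported_operations(level):
    """Every comparison valid at `level`: its own plus those of all lower levels."""
    idx = LEVELS.index(level)
    return [ADDED_OPERATION[lvl] for lvl in LEVELS[: idx + 1]]

print(supported_operations("ordinal"))  # → ['different from', 'greater than']
print(supported_operations("ratio"))    # all four qualities
```

Asking for a higher level returns everything the lower levels support, mirroring the cumulative structure of the table.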

NOMINAL SCALE
Qualities: Assignment of labels
Examples: Gender (male or female); preference (like or dislike); voting record (for or against)
What you can say: Each observation belongs in its own category
What you can't say: An observation represents "more" or "less" than another observation

ORDINAL SCALE
Qualities: Assignment of values along some underlying dimension
Examples: Rank in college; order of finishing a race
What you can say: One observation is ranked above or below another
What you can't say: The amount by which one observation is more or less than another

INTERVAL SCALE
Qualities: Equal distances between points
Examples: Number of words spelled correctly; intelligence test scores; temperature
What you can say: One score differs from another on a measure with equal-appearing intervals
What you can't say: That the amount of difference is an exact representation of differences on the underlying variable being studied

RATIO SCALE
Qualities: A meaningful, nonarbitrary zero
Examples: Age; weight; time
What you can say: One value is twice as much as another, or that none of the variable exists
What you can't say: Not much!

WHAT IS ALL THE FUSS?
• Measurement should be as precise as possible
• In psychology, most variables are probably measured at the nominal or ordinal level
• But how a variable is measured can determine the level of precision

RELIABILITY AND VALIDITY
• Reliability: the tool is consistent
• Validity: the tool measures what it should
• Good assessment tools →
– Rejection of null hypotheses
OR
– Acceptance of research hypotheses

A CONCEPTUAL DEFINITION OF RELIABILITY

Observed Score = True Score + Error Score
where Error Score = Method Error + Trait Error

• Observed score
– The score actually observed
– Consists of two components: the true score and the error score

• True score
– A perfect reflection of the true value for the individual
– A theoretical score

• Error score
– The difference between the observed score and the true score

• Method error is due to characteristics of the test or testing situation
• Trait error is due to characteristics of the individual
• Reliability of the observed score increases as error is reduced!!
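The Observed = True + Error idea can be illustrated with a small simulation. Under classical test theory, reliability can be expressed as the ratio Var(True) / Var(Observed); the sketch below (function name and parameter values are hypothetical) shows that shrinking the error component pushes that ratio toward 1.0.

```python
import random
import statistics

random.seed(0)  # make the simulation repeatable

def simulate_reliability(n, true_sd, error_sd):
    """Estimate reliability as Var(True) / Var(Observed),
    where Observed = True + Error for each of n participants."""
    true = [random.gauss(100, true_sd) for _ in range(n)]
    observed = [t + random.gauss(0, error_sd) for t in true]
    return statistics.pvariance(true) / statistics.pvariance(observed)

# Reducing error (error_sd 10 -> 3) raises reliability toward 1.0.
# Theoretical values: 225/325 ≈ 0.69 and 225/234 ≈ 0.96.
print(simulate_reliability(5000, true_sd=15, error_sd=10))
print(simulate_reliability(5000, true_sd=15, error_sd=3))
```

The sample estimates land close to the theoretical ratios because n is large; the point is the direction of the change, not the exact numbers.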

INCREASING RELIABILITY → DECREASING ERROR
• Increase sample size
• Eliminate unclear questions
• Standardize testing conditions
• Use both easy and difficult questions
• Minimize the effects of external events
• Standardize instructions
• Maintain consistent scoring procedures

HOW RELIABILITY IS MEASURED
• Reliability is measured using a correlation coefficient, r(test1•test2)
• Reliability coefficients
– Indicate how scores on one test change relative to scores on a second test
– Can range from -1.0 to +1.0, where +1.0 indicates perfect reliability
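A test-retest coefficient is just the Pearson correlation between the two administrations. The sketch below computes it from the standard formula; the six pairs of scores are made-up for illustration.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two sets of paired scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented scores for the same six participants on two administrations.
test1 = [12, 15, 11, 18, 16, 14]
test2 = [13, 16, 10, 19, 15, 14]
print(round(pearson_r(test1, test2), 2))  # → 0.95 (high test-retest reliability)
```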

TYPES OF RELIABILITY

Test-Retest
– What it is: A measure of stability
– How you do it: Administer the same test/measure at two different times to the same group of participants
– What the coefficient looks like: r(test1•test2)

Parallel Forms
– What it is: A measure of equivalence
– How you do it: Administer two different forms of the same test to the same group of participants
– What the coefficient looks like: r(form1•form2)

Inter-Rater
– What it is: A measure of agreement
– How you do it: Have two raters rate behaviors and then determine the amount of agreement between them
– What the coefficient looks like: Percentage of agreements

Internal Consistency
– What it is: A measure of how consistently each item measures the same underlying construct
– How you do it: Correlate performance on each item with overall performance across participants
– What the coefficient looks like: Cronbach's alpha
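As a sketch of the internal-consistency row above, Cronbach's alpha can be computed with the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). The 3-item, 5-participant data below are invented for illustration.

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same participants."""
    k = len(items)
    item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant totals
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))

# Invented 3-item scale answered by 5 participants (rows = items).
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 1, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.92 (items hang together well)
```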

VALIDITY
• A valid test does what it was designed to do
• A valid test measures what it was designed to measure

A CONCEPTUAL DEFINITION OF VALIDITY
• Validity refers to the test's results, not to the test itself
• Validity ranges from low to high; it is not "either/or"
• Validity must be interpreted within the testing context

TYPES OF VALIDITY

Content
– What is it? A measure of how well the items represent the entire universe of items
– How do you establish it? Ask an expert whether the items assess what you want them to

Criterion (Concurrent)
– What is it? A measure of how well a test estimates a criterion
– How do you establish it? Select a criterion and correlate scores on the test with scores on the criterion in the present

Criterion (Predictive)
– What is it? A measure of how well a test predicts a criterion
– How do you establish it? Select a criterion and correlate scores on the test with scores on the criterion in the future

Construct
– What is it? A measure of how well a test assesses some underlying construct
– How do you establish it? Assess the underlying construct on which the test is based and correlate these scores with the test scores

HOW TO ESTABLISH CONSTRUCT VALIDITY OF A NEW TEST
• Correlate the new test with an established test
• Show that people with and without certain traits score differently
• Determine whether the tasks required on the test are consistent with the theory guiding test development

MULTITRAIT-MULTIMETHOD MATRIX

Two traits (Trait 1: Impulsivity; Trait 2: Activity Level), each measured by two methods (Method 1: Paper and Pencil; Method 2: Activity Level Monitor):

– Same trait measured by different methods (e.g., Impulsivity by Paper and Pencil vs. Impulsivity by Activity Level Monitor): Moderate correlation
– Different traits (e.g., Impulsivity vs. Activity Level, regardless of method): Low correlation

• Convergent validity: different methods of measuring the same trait yield similar results
• Discriminant validity: measures of different traits yield different results
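The convergent/discriminant pattern can be demonstrated with simulated data: two independent traits, each measured by two noisy (hypothetical) methods. Same-trait, cross-method correlations come out moderate; cross-trait correlations come out near zero.

```python
import random
import statistics

random.seed(1)  # repeatable simulation

def pearson_r(x, y):
    """Pearson correlation using population standard deviations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

n = 500
impulsivity = [random.gauss(0, 1) for _ in range(n)]
activity = [random.gauss(0, 1) for _ in range(n)]  # independent of impulsivity

# Each hypothetical method = true trait + its own measurement noise.
imp_paper   = [t + random.gauss(0, 1) for t in impulsivity]
imp_monitor = [t + random.gauss(0, 1) for t in impulsivity]
act_monitor = [t + random.gauss(0, 1) for t in activity]

# Convergent: same trait, different methods -> moderate correlation (~0.5 here).
print(round(pearson_r(imp_paper, imp_monitor), 2))
# Discriminant: different traits -> correlation near zero.
print(round(pearson_r(imp_paper, act_monitor), 2))
```

With equal trait and noise variances, the convergent correlation is expected to be about 0.5; the exact values vary with the random draw.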

THE RELATIONSHIP BETWEEN RELIABILITY AND VALIDITY
• A valid test must be reliable
But

• A reliable test need not be valid
