Test Development

Why Develop a New Test?
 To meet the needs of a special group of test takers
 To sample behaviours from a newly defined test domain
 To improve the accuracy of test scores for their intended purpose
 Because existing tests need to be revised

First Four Steps
 Defining the test universe, audience, and purpose
 Developing a test plan
 Composing the test items
 Writing the administration instructions

Continued Steps of Test Construction
Diagram of Test Construction (p. 234)
 Constructing Scales
 Piloting the Test
 Standardizing the Test
 Collecting Norms
 Validation & Reliability Studies
 Manual Writing
 Test Revision

Defining the Test Universe, Audience, & Purpose
 Defining the test universe
   – prepare a working definition of the construct
   – locate studies that explain the construct
   – locate current measures of the construct

Defining the Test Universe, Audience, & Purpose (cont.)
 Defining the target audience
   – make a list of characteristics of the persons who will take the test, particularly those characteristics that will affect how test takers respond to the test questions (e.g., disabilities, reading level, honesty, language)

Defining the Test Universe, Audience, & Purpose (cont.)
 Defining the purpose
   – includes not only what the test will measure, but also how scores will be used
   – e.g., will scores be used to compare test takers (normative approach) or to indicate achievement (criterion approach)?
   – e.g., will scores be used to test a theory or to provide information about an individual?

Developing a Test Plan
 A test plan includes a definition of the construct, the content to be measured (test domain), the format for the questions, and how the test will be administered and scored

Defining the Construct
 Define the construct after reviewing the literature about the construct and any available measures
 Operationalize the construct in terms of observable and measurable behaviours
 The definition provides boundaries for the test domain (what should and shouldn't be included)
 Specify the approximate number of items needed

Choosing the Test Format
 Test format refers to the type of questions the test will contain (usually one format per test, for ease of test takers and scoring)
 Test formats have two elements:
   – stimulus (e.g., a question or phrase)
   – mechanism for response (e.g., multiple choice, true/false)
 May be an objective or subjective test format

Composing the Test Items
 Test items are the stimuli presented to the test taker (they may or may not take the form of questions)
 The form chosen depends on decisions made in the test plan (e.g., audience, purpose, method of administration, scoring)

Test Types
 Structured Response
   – Multiple Choice
   – True/False, Forced Choice
   – Likert Scales
 Free Response
   – Essay, Short Answer
   – Interview Questions
   – Fill in the Blank
   – Projective Techniques

Multiple Choice
 Multiple choice is most common in educational testing (and also some personality and employment testing)
   – consists of a stem and a number of responses; there should be only one right answer
   – the wrong answers are called distractors because they may appear correct; they should be realistic enough to appeal to the uninformed test taker
   – easy to score, but the downside is that test takers can get some items correct by guessing (a scoring sketch follows below)
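
Side note, not from the original slides: a minimal Python sketch (the class and field names are invented for illustration) of how a stem-plus-distractors item might be represented and objectively scored.

from dataclasses import dataclass

@dataclass
class MultipleChoiceItem:
    stem: str           # the question or incomplete statement
    options: list       # one correct answer plus the distractors
    key: int            # index of the single correct option

    def score(self, response: int) -> int:
        # Objective scoring: 1 if the keyed option was chosen, else 0.
        return 1 if response == self.key else 0

item = MultipleChoiceItem(
    stem="A wrong answer written to appeal to uninformed test takers is a:",
    options=["stem", "distractor", "key", "norm"],
    key=1,
)
print(item.score(1))  # 1 (correct option chosen)
print(item.score(0))  # 0 (a distractor was chosen)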

Multiple Choice (cont.)
 Pros
   – more answer options (4–5) reduce the chance of guessing an item correctly
   – many items can aid in student comparison, reduce ambiguity, and increase reliability
 Cons
   – measures narrow facets of performance
   – reading time increases with more answer options
   – transparent clues (e.g., verb tenses, or whether the stem uses "a" or "an") may encourage guessing
   – difficult to write four or five reasonable choices
   – takes more time to write questions

True/False
 True/False is also used in educational testing and some personality testing
   – in educational testing the test taker can again gain some advantage by guessing

True/False (cont.)
 Ideally, a true/false question should be constructed so that an incorrect response indicates something about the student's misunderstanding of the learning objective
 This may be a difficult task, especially when constructing a true statement

Forced Choice Items
 Forced choice is similar to multiple choice but is used in personality and attitude tests (e.g., MBTI)
   – the test taker must choose between unrelated but equally acceptable responses

Forced Choice Items (cont.)
 Example: Place an "X" in the space to the left of the word in each pair that best describes your personality.
   1. ____ Sunny   ____ Friendly
   2. ____ Outgoing   ____ Loyal

Likert Scales
 Likert scales are usually reliable and highly popular (e.g., in personality and attitude tests)
   – the item is presented with an array of response options (e.g., a 1-to-5 or 1-to-7 scale), usually on an agree/disagree or approve/disapprove continuum

Test Types: Structured Response
 Advantages
   – Great breadth
   – Quick scoring
 Disadvantages
   – Limited depth
   – Difficult to assess higher levels of skills
   – Guessing/memorization vs. knowledge

Subjective Items
 Subjective items are less easily scored but provide the test taker with fewer cues and open wider areas for response; often used in education
   – essay questions: responses can vary in breadth and depth, and the scorer must determine to what extent the response is correct (often by examining the match with a predetermined correct response)

Essay Questions
 Provide a freedom of response that facilitates assessing higher cognitive behaviours (e.g., analysis and evaluation)
 Allow respondents to focus on what they have learned and do not limit them to specific questions

Interview Questions
 Interview questions are often used in organizational settings; the interviewer decides what is a good or poor answer
 The test plan should be based on the knowledge, skills, abilities, and other characteristics required to perform the job
 Information can be obtained from a job description, a job analysis, or a current job incumbent

Projective Techniques
 Projective techniques are often employed in clinical settings
   – use a highly ambiguous stimulus to elicit an unstructured response (i.e., the test taker "projects" his or her perception and perspective onto a neutral stimulus)
   – a variety of stimuli may be used (e.g., pictures, words); responses may be verbal or drawn pictures

Sentence Completion
 The sentence-completion format presents an incomplete sentence that the test taker completes (e.g., "I feel happiest when …")
 Subjective tests are at risk for judgment error, so inter-rater reliability is of particular importance; scoring keys and scorer training are important

Test Types: Subjective Items
 Advantages
   – Can test higher cognitive skills
   – Encourages test takers to organize and develop their thoughts
 Disadvantages
   – Difficult to grade
   – Judgment error (e.g., interrater reliability)
   – Requires an advance objective scoring key

Writing Good Items
 The basic building block of test construction
 Little attention is typically given to writing items
 An art that requires originality, creativity, etc., combined with knowledge of the test domain and good item-writing practices
 Not all items will perform as expected; some may be too easy or too difficult, or may be misinterpreted
 Rule of thumb: write at least twice as many items as you expect to use
 Broad vs. narrow items

Writing Good Items (cont.)
 Suggestions:
   – identify item topics by consulting the test plan (increases content validity)
   – ensure that each item presents a central idea or problem
   – write items drawn only from the testing universe
   – write each item in a clear and direct manner

Writing Good Items (cont.)
 Suggestions:
   – use vocabulary and language appropriate for the target audience (e.g., age, culture)
   – take into account sexist or racist language (e.g., mailman, fireman)
   – make all items independent (e.g., one question per question)
   – ask an expert to review items to reduce ambiguity and inaccuracy

Writing Administration Instructions
 Specify the testing environment to decrease variation or error in test scores
 Should address:
   – group or individual administration
   – requirements for the location (e.g., quiet)
   – required equipment
   – time limits or approximate completion time
   – a script for the administrator and answers to questions test takers may ask

Specifying Administration and Scoring Methods
 Determine such things as how the test will be administered (e.g., orally, written, or by computer; individually or in groups)
 Method of scoring: whether the test is scored by hand by the test administrator, accompanied by scoring software, or sent to the test publisher for scoring

g. or endorses higher numbers on a Likert scale) – correct responses or responses on Likert scale are summed – yields interval data that can be interpreted with reference to norms .. more “correct” answers.Scoring Methods Cumulative model: most common – assumes that the more a test taker responds in a particular fashion the more he/she has of the attribute being measured (e.

Scoring Methods (cont.)
 Categorical model: places test takers in a group
   – e.g., a particular pattern of responses may suggest a diagnosis of a certain psychological disorder
   – typically yields nominal data, because it places test takers in categories

Scoring Methods (cont.)
 Ipsative model: a test taker's scores are not compared to those of other test takers; instead, the scores on the various scales are compared WITHIN the test taker (which scores are high and which are low?)
   – e.g., a test taker may complete a measure of interpersonal problems of various types, and the test administrator may want to determine which type the test taker feels is most problematic for him or her (sketched below)
 The cumulative model may be combined with the categorical or ipsative model
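
Not from the original slides: a minimal Python sketch of the ipsative idea (scale names and numbers are invented): scales are ranked within one person rather than against a norm group.

scale_scores = {             # one person's totals on a hypothetical
    "assertiveness": 14,     # interpersonal-problems measure
    "intimacy": 22,
    "sociability": 9,
}

# Rank the scales within this individual, most to least problematic.
ranked = sorted(scale_scores, key=scale_scores.get, reverse=True)
print("Most problematic area:", ranked[0])    # intimacy
print("Least problematic area:", ranked[-1])  # sociability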

Response Bias
 In preparing an item review, each question can be evaluated from two perspectives: Is the item fair? Is the item biased?
 Tests are subject to error, and one source of error is the test takers themselves

Response Sets/Styles
 Patterns of responding that result in misleading information and limit the accuracy and usefulness of the test scores
 Reasons for misleading information:
   1. The information requested is too personal
   2. Test takers distort their responses
   3. Test takers answer items carelessly
   4. Test takers may feel coerced into completing the test

Response Style
 People always agree (acquiescence) or always disagree (criticalness) with statements without attending to the actual content
   – usually occurs when items are ambiguous
 Solution: use both positively and negatively keyed items (see the sketch below)
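
Not from the original slides: a minimal Python sketch (made-up data and function name) of why balanced keying helps. An acquiescent responder agrees with positively and negatively keyed items alike, which shows up as an implausibly high overall agreement rate instead of silently inflating the trait score.

def agreement_rate(responses, scale_max=5):
    # Fraction of items answered on the "agree" side of the midpoint.
    midpoint = (scale_max + 1) / 2
    return sum(r > midpoint for r in responses) / len(responses)

# Agrees with positively AND negatively keyed items alike -- suspicious.
acquiescent = [5, 5, 4, 5, 5, 4, 5, 5]
print(agreement_rate(acquiescent))  # 1.0 -> flag for review

# Consistent responder: agrees with positive items, disagrees with
# their reverse-keyed counterparts.
consistent = [5, 4, 2, 5, 1, 4, 2, 5]
print(agreement_rate(consistent))   # 0.625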

Social Desirability
 Some test takers choose socially acceptable answers or present themselves in a favourable light
 People often do not attend as much to the trait being measured as to the social acceptability of the statement
 This represents unwanted variance

Social Desirability (cont.)
 Example items:
   – Friends would call me spontaneous.
   – I would rather work in a group than by myself.
   – I often get stressed out in many situations.
   – People I know can count on me to finish what I start.

Faking
 Faking: some test takers may respond in a particular way to cause a desired outcome
   – may "fake good" (e.g., in employment settings) to create a favourable impression
   – may "fake bad" (e.g., in clinical or forensic settings) as a cry for help or to appear mentally disturbed
   – test developers may use subtle questions that are difficult to fake because they are not clearly face valid

"Faking Bad"
 People try to look worse than they really are
 A common problem in clinical settings
 Reasons:
   – a cry for help
   – wanting to plead insanity in court
   – wanting to avoid the draft into the military
   – wanting to show psychological damage
 Most people who fake bad overdo it

g.Impression Management – Mitigating IM: Use positive and negative impression scales (endorsed by 10% of the population) Use lie scales to “flag” those who score high (e. “I get angry sometime”). two different responses to two similar questions) (Use multiple assessment methods (other than self-report) .g. Inconsistency scales (e...

Random Responding
 Random responding may occur when test takers are unwilling or unable to respond accurately, do not want to be evaluated, or lack attention to the task
   – likely to occur when the test taker lacks the needed skills (e.g., reading)
   – can be detected by embedding a scale that yields clear results from the vast majority of respondents, such that a different result suggests the test taker was not cooperating

Random Responding (cont.)
 Detection (see the sketch below):
   – Duplicate items: "I love my mother." / "I hate my mother."
   – Infrequency scales: "I've never had hair on my head." / "I have not seen a car in 10 years."
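
Not from the original slides: a minimal Python sketch (invented item positions and cutoffs) of both detection strategies, using the slide's example items: endorsing both members of an opposite pair is contradictory, and infrequency items are endorsed by almost no attentive respondent.

def contradiction_count(responses, opposite_pairs):
    # Count opposite-item pairs (e.g., "I love my mother." /
    # "I hate my mother.") that were BOTH endorsed.
    return sum(responses[i] and responses[j] for i, j in opposite_pairs)

def infrequency_score(responses, infrequency_items):
    # Count endorsed infrequency items, such as
    # "I have not seen a car in 10 years."
    return sum(responses[i] for i in infrequency_items)

responses = [True, True, False, True, True]  # True = item endorsed
opposite_pairs = [(0, 1)]                    # items 0 and 1 contradict
infrequency_items = [3, 4]                   # rarely endorsed items

if contradiction_count(responses, opposite_pairs) > 0 \
        or infrequency_score(responses, infrequency_items) >= 2:
    print("Flag this protocol for possible random responding")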

Random Responding (cont.)
 May occur for several reasons:
   – people are not motivated to participate
   – reading or language difficulties
   – they do not understand the instructions or item content
   – they are too confused or disturbed to respond appropriately

Piloting and Revising Tests
 You can't assume the test will perform as expected
 A pilot test scientifically investigates the test's reliability and validity
 Administer the test to a sample from the target audience
 Analyze the data and revise the test to fix any problems uncovered; there are many aspects to consider

Setting Up the Pilot Test
 The test situation should match the actual circumstances in which the test will be used (e.g., in sample characteristics, setting)
 Developers must follow the American Psychological Association's code of ethics (e.g., strict rules of confidentiality, publishing only aggregate results)

Conducting the Pilot Test
 Depth and breadth depend on the size and complexity of the target audience
 Adhere strictly to the procedures outlined in the test administration instructions
 Generally requires a large sample
 May ask participants about the testing experience

Analyzing the Results
 Can gather both quantitative and qualitative information
 Use quantitative information for such things as item characteristics, internal consistency (sketched below), convergent and discriminant validity, and in some instances predictive validity
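
Not from the original slides: one standard internal-consistency statistic is Cronbach's alpha. A minimal sketch (Python with NumPy; the function name and pilot data are made up) computes it from a respondents-by-items score matrix.

import numpy as np

def cronbach_alpha(scores):
    # scores: respondents x items matrix of item scores.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

pilot = [[4, 5, 4, 5],   # each row: one pilot participant's item scores
         [2, 3, 2, 2],
         [5, 4, 5, 5],
         [3, 3, 2, 3]]
print(round(cronbach_alpha(pilot), 2))  # high for this tiny made-up sample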

Revising the Test
 Choosing the final items requires weighing each item's content validity, item difficulty and discrimination, interitem correlation, and bias (see the sketch below)
 When new items need to be added or existing items need to be revised, the items must again be pilot tested to ensure that the changes produced the desired results
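
Not from the original slides: a minimal sketch (Python with NumPy, made-up pilot data) of two classical item statistics used when choosing final items: difficulty (the proportion answering correctly) and discrimination (the item's correlation with the total of the remaining items).

import numpy as np

def item_stats(scores):
    # scores: respondents x items matrix of 0/1 item scores.
    scores = np.asarray(scores, dtype=float)
    stats = []
    for i in range(scores.shape[1]):
        difficulty = scores[:, i].mean()  # p-value: proportion correct
        rest = np.delete(scores, i, axis=1).sum(axis=1)
        discrimination = np.corrcoef(scores[:, i], rest)[0, 1]  # item-rest r
        stats.append((difficulty, discrimination))
    return stats

pilot = [[1, 1, 0, 1],
         [0, 1, 0, 0],
         [1, 1, 1, 1],
         [0, 0, 0, 1],
         [1, 1, 0, 1]]
for i, (p, r) in enumerate(item_stats(pilot)):
    print(f"item {i}: difficulty={p:.2f}, discrimination={r:.2f}")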

Validation and Cross-Validation
 Validation is the process of obtaining evidence that the test effectively measures what it is supposed to measure (i.e., reliability and validity)
 The first part, establishing content validity, is carried out as the test is developed; whether the test measures the construct (construct validity) and predicts an outside criterion is determined in subsequent data collection

Validation and Cross-Validation (cont.)
 When the final revision of a test yields scores with sufficient evidence of reliability and validity, test developers then conduct cross-validation: a final round of test administration to another sample
 Because of chance factors, the reliability and validity coefficients will likely be smaller in the new sample; this is referred to as shrinkage (a simulation sketch follows)
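
Not from the original slides: a minimal sketch (Python with NumPy, simulated data) of why shrinkage happens. Items picked because they correlated best with the criterion in the development sample capitalize on chance, so the same composite typically looks less valid in a fresh sample.

import numpy as np

rng = np.random.default_rng(42)
n_items, n = 20, 60

def make_sample():
    # Item scores plus a criterion weakly related to every item.
    items = rng.normal(size=(n, n_items))
    criterion = items.mean(axis=1) + rng.normal(scale=2.0, size=n)
    return items, criterion

dev_items, dev_crit = make_sample()  # development sample
cv_items, cv_crit = make_sample()    # independent cross-validation sample

# Keep the 5 items that correlate best with the criterion IN THE
# DEVELOPMENT SAMPLE -- a selection that capitalizes on chance.
item_rs = [np.corrcoef(dev_items[:, j], dev_crit)[0, 1] for j in range(n_items)]
best = np.argsort(item_rs)[-5:]

dev_r = np.corrcoef(dev_items[:, best].sum(axis=1), dev_crit)[0, 1]
cv_r = np.corrcoef(cv_items[:, best].sum(axis=1), cv_crit)[0, 1]
print(f"development r = {dev_r:.2f}, cross-validation r = {cv_r:.2f}")
# The cross-validation coefficient is typically the smaller one: shrinkage.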
