CONSTRUCTION OF AN ACHIEVEMENT
TEST – A SYSTEMATIC PROCESS
DR. SURAKSHA BANSAL, DR. SAROJ AGARWAL, PALLAVI SINGH
Practical Criteria:
Ease in administration:
A test is good only when the conditions of answering are simple (scientific
and logical). Its instructions should be simple and clear.
Cost:
A good test should be inexpensive, not only from the viewpoint of money
but also from the viewpoint of the time and effort taken in constructing the
test. Fortunately, there is no direct relationship between cost and quality.
1) Consistency (Reliability): -
The reliability of a measuring instrument depends on two factors:
1. Adequacy in sampling
2. Objectivity in scoring
A good instrument will produce consistent scores. An instrument’s
reliability is estimated using a correlation coefficient of one type or
another. For purposes of learning research, the major characteristics of
good scales include:
● Test-retest Reliability:
The ability of an instrument to give accurate scores from one time to
another. Also known as temporal consistency.
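As a concrete illustration (with made-up scores, not data from this text), test-retest reliability is simply the Pearson correlation between two administrations of the same test to the same group:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same seven examinees, tested two weeks apart
time1 = [12, 15, 9, 20, 17, 11, 14]
time2 = [13, 14, 10, 19, 18, 10, 15]

# A coefficient near 1.0 indicates good temporal consistency
print(round(pearson_r(time1, time2), 3))
```
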
● Split-half Reliability:
The correlation between scores on two halves of a single administration of
the test (for example, odd-numbered versus even-numbered items), corrected
up to full test length; an index of internal consistency.
2) Validity: -
● Construct validity:
This is the most important form of validity, because it really subsumes all
of the other forms of validity.
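Returning to the split-half idea above, a minimal sketch (with invented 0/1 item scores) correlates the two half-test totals and then applies the Spearman-Brown formula to estimate full-length reliability:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    # item_scores: one row of 0/1 item scores per examinee
    odd = [sum(row[0::2]) for row in item_scores]   # 1st, 3rd, 5th ... items
    even = [sum(row[1::2]) for row in item_scores]  # 2nd, 4th, 6th ... items
    r_half = pearson_r(odd, even)
    # Spearman-Brown correction: reliability of the full-length test
    return 2 * r_half / (1 + r_half)

# Hypothetical responses: five examinees, six items
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(round(split_half_reliability(scores), 3))
```
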
● Convergent validity:
Comparison and correlation of scores on an instrument with other variables
or scores that should theoretically be similar.
● Discriminant validity:
Comparison of scores on an instrument with other variables or scores from
which it should theoretically differ.
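Convergent and discriminant validity can be illustrated together (all scores below are invented): a new arithmetic test should correlate strongly with an established arithmetic test, but only weakly with a reading test:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for six examinees on three tests
new_math = [10, 8, 12, 6, 14, 9]
old_math = [11, 9, 13, 7, 13, 10]   # established measure of the same construct
reading  = [9, 8, 11, 10, 9, 10]    # theoretically different construct

convergent = pearson_r(new_math, old_math)     # should be high
discriminant = pearson_r(new_math, reading)    # should be near zero
print(round(convergent, 3), round(discriminant, 3))
```
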
● Factor structure:
A statistical look at the internal consistency of an instrument, usually one
that has subscales or multiple parts. The items that are theoretically
supposed to measure one concept should correlate highly with each other, but
have low correlations with items measuring a theoretically different concept.
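Internal consistency among items meant to measure one concept is commonly summarized by Cronbach's alpha, a statistic not named in this text but standard for the purpose; a sketch with invented 0/1 item data:

```python
def cronbach_alpha(item_scores):
    # item_scores: rows = examinees, columns = items on one subscale
    k = len(item_scores[0])            # number of items

    def variance(vals):                # sample variance
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

    item_vars = sum(variance([row[i] for row in item_scores]) for i in range(k))
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: five examinees, six items on one subscale
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 3))
```

Values near 1 indicate that the items hang together; low or negative values suggest the items are not measuring a single concept.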
● Content validity:
Establishes that the instrument includes items that comprise the relevant
content domain. (For example, a test of English grammar might include
questions on subject-verb agreement, but should not include items that test
algebra skills.)
● Face validity:
A subjective judgment about whether or not, on the "face of it," the tool
seems to be measuring what you want it to measure.
● Criterion-related validity:
The instrument "behaves" the way it should given your theory about the
construct.
● Concurrent validity:
Comparison of scores on some instrument with current scores on another
instrument. If the two instruments are theoretically related in some manner,
the scores should reflect the theorized relationship.
● Predictive validity:
Comparison of scores on some instrument with some future behavior or future
scores on another instrument. The instrument scores should do a reasonable
job of predicting future performance.
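Predictive validity reduces to correlating current test scores with a later criterion measure; a sketch with invented entrance-test scores and grade averages recorded a year later:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: entrance-test scores and grade average one year later
entrance = [55, 62, 48, 70, 66, 58]
later_gpa = [2.8, 3.1, 2.5, 3.6, 3.3, 2.9]

# A high correlation means the test predicts later performance well
print(round(pearson_r(entrance, later_gpa), 3))
```
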
General precautions:
Ebel, in his book Measuring Educational Achievement, has suggested the
following precautions in test construction:
1. It should be decided when the test has to be conducted, in terms of time
and frequency.
2. It should be determined how many questions have to be included in the test.
3. It should be determined what types of questions have to be used in the test.
4. The topics from which questions have to be constructed should be
determined, keeping the teaching objectives in view.
5. The level of difficulty of questions should be decided at the beginning of
the test.
6. It should be determined if any correction has to be carried out for guessing.
7. The format and type of printing should be decided in advance.
8. It should be determined what should be the passing score.
9. In order to control the personal bias of the examiner, there should be a
provision for central evaluation. A particular question should be checked by
the same examiner.
10. A rule book should be prepared before the evaluation of the scripts.
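The planning decisions in Ebel's checklist are often recorded as a test blueprint before any items are written; a hypothetical sketch (all topics, counts, and weights invented for illustration):

```python
# Hypothetical test blueprint capturing the planning decisions above
blueprint = {
    "administration": {"when": "end of term", "frequency": "once", "minutes": 90},
    "num_questions": 50,
    "question_types": {"multiple_choice": 40, "short_answer": 10},
    "topics": {  # weights chosen from the teaching objectives
        "fractions": 0.3,
        "decimals": 0.3,
        "word_problems": 0.4,
    },
    "guessing_correction": False,
    "passing_score": 0.5,   # proportion of the maximum score
}

# Number of questions per topic, rounded from the topic weights
per_topic = {t: round(w * blueprint["num_questions"])
             for t, w in blueprint["topics"].items()}
print(per_topic)
```

Writing the blueprint down first makes the later steps (item writing, difficulty control, and the scoring rule book) checkable against one agreed plan.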
First step: