CONSTRUCTION OF AN ACHIEVEMENT TEST – A SYSTEMATIC PROCESS

DR. SURAKSHA BANSAL, DR. SAROJ AGARWAL, PALLAVI SINGH

Achievement is the accomplishment or proficiency of performance in a given skill or body of knowledge. Therefore, it can be said that achievement implies the overall mastery of a pupil in a particular context. Any measuring instrument that measures a pupil's attainments or accomplishments must be valid and reliable.

Testing is a systematic procedure for comparing the behaviour of two or more persons. In this way, an achievement test is an examination that reveals the relative standing of an individual in the group with respect to achievement.

Characteristics of Good Measurement Instruments:

Measurement tools can be judged on a variety of merits. These include practical issues as well as technical ones. All instruments have strengths and weaknesses; no instrument is perfect for every task. Some of the practical issues that need to be considered include:

Criteria of a good measuring instrument

Practical Criteria:
* Ease in administration
* Cost
* Time and effort required for respondent to complete measure
* Acceptability

Technical Criteria:
* Reliability
* Validity

Practical Criteria:
Ease in administration:
A test is good only when the conditions of answering are simple (scientific and logical). Its instructions should be simple and clear.

Cost:
A good test should be inexpensive, not only from the viewpoint of money but also from the viewpoint of the time and effort taken in constructing the test. Fortunately, there is no direct relationship between cost and quality.

Time and effort required for respondent to complete measure:
The time available to students is generally in short supply, and students do not readily accept very long tests. Therefore, a test should be neither very long nor very short.

Acceptability:
A good test should be acceptable to the students to whom it is being given, regardless of the specific situation; that is, the questions given in the test should be neither very difficult nor very easy.
Technical Criteria:
Along with the practical issues, measurement tools may be judged on the
following:

1) Consistency (Reliability):
The reliability of a measuring instrument depends on two factors:
1. Adequacy in sampling
2. Objectivity in scoring
A good instrument will produce consistent scores. An instrument's reliability is estimated using a correlation coefficient of one type or another; a short computational sketch follows the list below. For purposes of learning research, the major characteristics of good scales include:

● Test-retest Reliability:
The ability of an instrument to give consistent scores from one administration to another. Also known as temporal consistency.

● Split-half Reliability:
The consistency of scores across two halves of the same test, usually estimated by correlating scores on the two halves and correcting the result for full test length.
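As a brief computational sketch (not part of the original article; all scores below are hypothetical), test-retest reliability can be estimated as the Pearson correlation between two administrations of the same test, and split-half reliability as the correlation between the two halves of a test corrected with the Spearman-Brown formula:

import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two lists of scores.
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Test-retest reliability: correlate the scores obtained by the same
# pupils on two administrations of the same test.
scores_first = [42, 37, 55, 48, 60, 33, 51]
scores_second = [44, 35, 57, 47, 58, 36, 50]
test_retest = pearson_r(scores_first, scores_second)

# Split-half reliability: correlate odd-item and even-item half scores,
# then correct for full test length with the Spearman-Brown formula.
items = np.array([            # rows = pupils, columns = items (1 = right)
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 1, 1],
    [1, 1, 1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 1, 1, 1],
])
half_r = pearson_r(items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1))
split_half = 2 * half_r / (1 + half_r)   # Spearman-Brown correction

print(f"test-retest r = {test_retest:.2f}, split-half r = {split_half:.2f}")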
2) Validity:

‡ Construct validity:
This is the most important form of validity, because it really subsumes all of the other forms of validity.

‡ Convergent validity:
Comparison and correlation of scores on an instrument with other variables or scores that should theoretically be similar.

‡ Discriminant validity:
Comparison of scores on an instrument with other variables or scores from which it should theoretically differ.

‡ Factor structure:
A statistical look at the internal consistency of an instrument, usually one that has subscales or multiple parts. The items that are theoretically supposed to measure one concept should correlate highly with each other, but have low correlations with items measuring a theoretically different concept.

‡ Content validity:
Establishes that the instrument includes items that cover the relevant content domain. (For example, a test of English grammar might include questions on subject-verb agreement, but should not include items that test algebra skills.)

‡ Face validity:
A subjective judgment about whether or not, on the "face of it", the tool seems to be measuring what you want it to measure.

‡ Criterion-related validity:
The instrument "behaves" the way it should, given your theory about the construct.

‡ Concurrent validity:
Comparison of scores on some instrument with current scores on another
instrument. If the two instruments are theoretically related in some manner, the
scores should reflect the theorized relationship.

‡ Predictive validity:
Comparison of scores on some instrument with some future behavior or future
scores on another instrument. The instrument scores should do a reasonable job of
predicting the future performance.
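As an illustration only (the measures and scores below are hypothetical and not from the article), correlational evidence for convergent, discriminant and predictive validity can be summarized in the same way:

import numpy as np

test_scores = np.array([42, 37, 55, 48, 60, 33, 51], dtype=float)

# Convergent evidence: correlation with a theoretically similar measure
# (here, hypothetical teacher ratings of the same subject) should be high.
teacher_ratings = np.array([7, 6, 9, 8, 9, 5, 8], dtype=float)

# Discriminant evidence: correlation with a theoretically unrelated
# variable (here, hypothetical pupil heights in cm) should be low.
heights_cm = np.array([150, 162, 155, 149, 171, 158, 160], dtype=float)

# Predictive evidence: correlation with a later criterion (here,
# hypothetical marks in the next term's examination) should be high.
next_term_marks = np.array([45, 40, 58, 50, 62, 35, 49], dtype=float)

for label, other in [("convergent", teacher_ratings),
                     ("discriminant", heights_cm),
                     ("predictive", next_term_marks)]:
    r = np.corrcoef(test_scores, other)[0, 1]
    print(f"{label:>12} evidence: r = {r:.2f}")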

Construction procedure of an Achievement Test:
If a test is to be made really valid, reliable and practical, it will have to be suitably planned, and qualitative improvement in the test will have to be effected. For this, the following points should be kept in view:
* The principles underlying available tests will have to be kept in view so that the test can be planned.
* Skill will have to be acquired in constructing and writing different types of questions. This requires careful thought, determination of teaching objectives, analysis of the content, and decisions about the types of questions to be given.

General precautions:
Ebel, in his book Measuring Educational Achievement, has suggested the
following precautions in test construction:
1. It should be decided when the test has to be conducted, in terms of time and frequency.
2. It should be determined how many questions have to be included in the test.
3. It should be determined what types of questions have to be used in the test.
4. The topics from which questions have to be constructed should be determined. This decision is taken keeping the teaching objectives in view.
5. The level of difficulty of the questions should be decided at the beginning.
6. It should be determined whether any correction has to be carried out for guessing (a sketch of a common correction formula follows this list).
7. The format and type of printing should be decided in advance.
8. The passing score should be determined.
9. In order to control the personal bias of the examiner, there should be a provision for central evaluation. A particular question should be checked by the same examiner.
10. A rule book should be prepared before the evaluation of the scripts.
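The article does not give a formula for precaution 6, but a commonly used correction for guessing on multiple-choice items is corrected score = R - W/(k - 1), where R is the number of right answers, W the number of wrong answers (omissions excluded) and k the number of choices per item. A minimal sketch with hypothetical numbers:

def corrected_score(num_right, num_wrong, num_choices):
    # Score corrected for guessing on items with num_choices options:
    # corrected = R - W / (k - 1).
    return num_right - num_wrong / (num_choices - 1)

# A pupil attempts 50 four-choice items: 38 right, 8 wrong, 4 omitted.
print(corrected_score(num_right=38, num_wrong=8, num_choices=4))   # about 35.33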

To construct an achievement test, the steps referred to below, if followed, will make the test objective, reliable and valid:

First step:
Selection of Teaching Objectives for Measurement: At first, those teaching objectives which are to be made the basis for test construction should be selected from all the teaching objectives of subject teaching. Several considerations related to the teaching determine this selection, such as how much content has been studied, what the needs of the students are, and what the importance of specific topics in the content is. For this, the following table can be used:
