Data Collection
Muhammad Luqman Qadir
M.Phil. (Molecular Biology and Forensic Science)
MOST COMMONLY USED TOOLS
• Questionnaires
• Interviews
• Observations
• Tests (achievement, aptitude, personality)
• Scales
• Document Analysis
TESTS
Tests are measurement tools which are employed to measure the performance
of an individual in some specific area of interest.
TYPES OF TESTS
There are four main types:
1. Achievement Test
2. Aptitude Test
3. Intelligence Test
4. Personality Test
ACHIEVEMENT TEST
• The main purpose of this test is to measure specific objectives that are
pre-defined operationally and behaviorally.
• Content validity is required for these tests.
▶ The cut-off or passing-marks criterion is set by either the
subject teacher or by the researcher.
Achievement and aptitude tests have different construction methods,
underlying goals, and methods for interpreting scores.
TYPES OF SCALES
1. Nominal scales
2. Ordinal scales
3. Interval scales
4. Ratio scales
5. Likert scales
6. Rating scales
NOMINAL SCALES
▶ The term means “to name”
▶ Each value belongs only to its own named category; the categories carry no
order or distance.
ORDINAL SCALES
▶ Values can be ranked as more or less than one another,
▶ but the scale does not specify how different the categories are from
each other.
INTERVAL SCALES
▶ The scale shows how distant the categories are from each other, with
equal intervals between adjacent values.
STANDARDIZED TESTS
▶ A standardized test is any form of test that requires all test takers to answer the same
questions, or a selection of questions from a common bank of questions, in the same way,
and that is scored in a “standard” or consistent manner, which makes it possible to
compare the relative performance of individual students or groups of students.
RELIABILITY VS. VALIDITY
• Reliability refers to the extent to which the instrument yields the
same results over multiple trials.
• Validity refers to the extent to which the instrument measures
what it was designed to measure.
VALIDITY
• The accuracy with which a test measures whatever it is supposed to measure.
Content validity
• Content validity measures the extent to which the items that comprise
the scale accurately represent or measure the information that is being
assessed.
• Are the questions that are asked representative of the possible
questions that could be asked?
Construct validity
• Construct validity measures what the calculated scores mean and if
they can be generalized. Construct validity uses statistical analyses,
such as correlations, to verify the relevance of the questions.
• Questions from an existing, similar instrument, that has been
found reliable, can be correlated with questions from the
instrument under examination to determine if construct validity is
present. If the scores are highly correlated, this is called convergent
validity. If convergent validity exists, construct validity is supported.
Criterion-related validity
• Criterion-related validity has to do with how well the scores from the
instrument predict a known outcome they are expected to predict.
Statistical analyses, such as correlations, are used to determine if
criterion-related validity exists.
• Scores from the instrument in question should be correlated with an
item they are known to predict. If a correlation of > .60 exists,
criterion-related validity exists as well.
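The criterion-related check above is just a correlation between instrument scores and the known outcome. A minimal sketch, using made-up aptitude-test scores and a hypothetical criterion (first-year GPA) they are supposed to predict:

```python
import numpy as np

# Hypothetical data: scores from the instrument under examination,
# and the known outcome (criterion) those scores should predict.
test_scores = np.array([55, 62, 70, 48, 66, 75])
gpa = np.array([2.8, 3.0, 3.4, 2.5, 3.1, 3.6])

# Pearson correlation between instrument scores and the criterion.
r = np.corrcoef(test_scores, gpa)[0, 1]
print(f"criterion correlation r = {r:.2f}")

# Apply the > .60 rule of thumb from the text.
print("criterion-related validity supported" if r > 0.60 else "not supported")
```

The same correlation-based check, run against a second, already-validated instrument instead of an outcome, is the convergent-validity test described in the construct-validity section.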
RELIABILITY
▶ Reliability is a characteristic of a test that refers to the accuracy and consistency of
the information obtained in a study.
▶ A well-developed scientific tool should give accurate results both now and
over time.
▶ A test with good reliability means that the test taker will obtain the same test score over
repeated testing, as long as no extraneous factors have affected the score.
▶ A good instrument will produce consistent scores. An instrument’s
reliability is estimated using a correlation coefficient.
ASSESSMENT OF RELIABILITY
Reliability can be assessed with the:
1. Test-retest Method
2. Alternative Form Method
3. Internal Consistency Method
4. The Split-halves Method
5. Inter-rater Reliability
TEST-RETEST METHOD
• The test-retest method administers the same
instrument to the same sample at two different points in
time, perhaps at one-year intervals.
• If the scores from the two administrations are highly
correlated (> .60), the instrument can be considered reliable.
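The test-retest computation can be sketched in a few lines. This is a minimal example with made-up scores for the same five people tested twice:

```python
import numpy as np

# Hypothetical scores from the same sample at two points in time.
time1 = np.array([85, 72, 90, 64, 78])  # first administration
time2 = np.array([83, 75, 91, 60, 80])  # same instrument, e.g. one year later

# Pearson correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")

# The > .60 rule of thumb from the text.
print("reliable" if r > 0.60 else "not reliable")
```

Because the two score sets here move together closely, r comes out well above the .60 threshold and the instrument would be judged reliable by this rule.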
ALTERNATIVE FORM METHOD
The alternative form method requires two different
instruments consisting of similar content. The same sample
must take both instruments and the scores from both
instruments must be correlated. If the correlations are high,
the instrument is considered reliable.
INTERNAL CONSISTENCY METHOD
Internal consistency uses one instrument administered only
once. The coefficient alpha (Cronbach’s alpha) is used to
assess the internal consistency of the items. If the alpha value
is .70 or higher, the instrument is considered reliable.
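Cronbach's alpha can be computed directly from an item-score matrix. A minimal sketch, assuming a made-up matrix of Likert-type responses (rows = respondents, columns = items):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 items on a 1-5 scale.
data = np.array([
    [4, 4, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(data)
print(f"Cronbach's alpha = {alpha:.2f}")
print("reliable" if alpha >= 0.70 else "not reliable")  # the .70 cutoff from the text
```

Because each respondent answers all four items consistently here, the items covary strongly and alpha lands well above the .70 cutoff.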