
VALIDITY AND RELIABILITY

Prof. Dr. Deep Bahadur Rawal
Central Campus of Management
Mid-Western University, Surkhet
Nepal
INTRODUCTION TO VALIDITY
The term validity means truth. Thus, validity refers to the degree to which a test measures what it claims to measure. It also refers to the extent to which you are measuring what you hope to measure. It indicates the accuracy of a measure.

Validity is the degree of agreement between the actual measurement and the proposed measurement. Data are considered to be valid when they measure what they are supposed to measure. Validity generally results from careful planning of questionnaire or interview questions. If we measure what we intend to measure, the measurement is said to be valid.
---
According to Thomas and Nelson (1996), "validity is the degree to which a test or instrument measures what it purports to measure."
Goode and Hatt define that "a scale possesses its validity when it actually measures what it claims to measure."
Vincent (1999) defines that "validity is the soundness or appropriateness of a test or instrument in measuring what it is designed to measure."
Prof. Joppe (2000) provides the following explanation of what validity is in quantitative research:
"Validity refers to the truthfulness of findings. It determines whether the research truly measures what it was intended to measure, or how truthful the research results are."
Thus, validity is the extent to which a test measures what it is supposed to measure.
---
Types of Validity
1) Content Validity / Face Validity
2) Construct Validity
3) Criterion-Related Validity
1) Content Validity / Face Validity
Content validity concerns the extent to which a measure adequately represents all facets of a concept. Content validity estimates systematic errors. The most common use of content validity is with multi-item measures. There is no numerical way to assess face or content validity. Content validity is the most common form of validation in applied research.
---
2) Construct Validity
Construct validity is concerned with knowing more than just that a measuring instrument works. It seeks agreement between a theoretical concept and a specific measuring device or procedure. It is concerned with the factors that lie behind the measurement scores obtained, that is, with what factors or characteristics account for, or explain, the variance in measurement scores. So construct validity is related to theory-testing studies.
---
3) Criterion-Related Validity
Criterion-related validity is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure which has been demonstrated to be valid. This validity is established when the measure differentiates individuals on a criterion it is expected to predict. The methods of assessing criterion-related validity are:
a) According to Tull & Hawkins (1993), concurrent validity is the extent to which one measure of a variable can be used to estimate an individual's current score on a different measure of the same, or a closely related, variable. It involves assessing
---
the extent to which the obtained score may be used to estimate an individual's present standing with respect to some other variable. Warner's ISC scale can be used to assess concurrent validity.
b) According to Tull & Hawkins (1993), predictive validity is the extent to which an individual's future level on some variable can be predicted by his or her performance on a current measurement of the same or a different variable. This validity involves assessing the extent to which the obtained score may be used to estimate an individual's future standing with respect to the criterion variable.
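As a minimal illustration of how criterion-related validity is often quantified, the sketch below correlates scores on a new measure with scores on an established criterion measure collected from the same respondents. The data, variable names, and use of Python with SciPy are illustrative assumptions, not part of the original slides.

```python
# Illustrative sketch with hypothetical data: criterion-related validity
# expressed as the correlation between a new measure and an established criterion.
import numpy as np
from scipy.stats import pearsonr

new_measure = np.array([12, 15, 11, 18, 14, 16, 10, 17])  # scores on the new instrument
criterion = np.array([34, 41, 30, 47, 38, 44, 28, 45])    # scores on the criterion measure

r, p_value = pearsonr(new_measure, criterion)
print(f"Criterion-related validity coefficient: r = {r:.2f} (p = {p_value:.3f})")
```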
RELIABILITY
Introduction to Reliability of Research Instruments
A measurement device is reliable when it consistently produces about the same results when applied to the same sample, or to different samples of the same size drawn from the same population.
Joppe (2000) defines reliability as "The extent to which results are consistent over time and an accurate representation of the total population under study is referred to as reliability, and if the results of a study can be reproduced under a similar methodology, then the research instrument is considered to be reliable."
---
Reliability of a test pertains to reliable measurement, which means that the measurement is accurate and free from any sort of error. Reliability is one of the most essential characteristics of a test. If a test gives the same result on different occasions, it is said to be reliable. So reliability means consistency of the test result, internal consistency, and consistency of results over a period of time.
According to Anastasi and Urbina (1982), "Reliability refers to the consistency of scores obtained by the same persons when they are re-examined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions."
---
Types of Reliability
a) Test-Retest Reliability
This type of reliability is estimated by the Pearson product-moment correlation coefficient between two administrations of the same inventory. Estimation is based on the correlation between scores of two or more administrations of the same inventory.
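A minimal sketch of this estimation, assuming hypothetical scores from two administrations of the same inventory to the same respondents; the data and the use of Python with NumPy are assumptions for illustration only.

```python
# Sketch: test-retest reliability as the Pearson product-moment correlation
# between two administrations of the same inventory (hypothetical scores).
import numpy as np

test_1 = np.array([23, 31, 28, 35, 22, 30, 27, 33])  # first administration
test_2 = np.array([25, 30, 27, 36, 21, 31, 26, 34])  # second administration, same respondents

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(test_1, test_2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```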
b) Alternative-Form Reliability
This method also requires two testings with the same people. However, the same test is not given each time. Each of the two tests must be designed to measure the same thing and should not differ in any systematic way. This method is viewed as superior to the retest method because a respondent's memory of test items is not as likely to play a role in the data received. One drawback of this method
---
is the practical difficulty in developing test items that are
consistent in the measurement of a specific phenomenon.
c) Split-Half Reliability
This is the simplest type of internal comparison. In this method, the inventory is divided into two equal halves and the correlation between scores on these halves is worked out. The measuring instrument can be divided in various ways, but the best way to divide it into two halves is into odd-numbered and even-numbered items. This correlation coefficient denotes the reliability of the half test.
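A brief sketch of the odd-even split described above, using hypothetical item responses. The final line applies the Spearman-Brown formula, the standard correction for stepping the half-test coefficient up to full-test reliability; that step, the data, and the use of Python with NumPy go beyond the slide text and are included only for illustration.

```python
# Sketch: split-half reliability from hypothetical item responses.
import numpy as np

# rows = respondents, columns = items (hypothetical 5-point responses)
responses = np.array([
    [4, 3, 5, 4, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 2, 3, 2],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 4, 3, 3, 3, 4, 3],
    [1, 2, 2, 1, 2, 1, 2, 2],
])

odd_half = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5, ... (odd-numbered)
even_half = responses[:, 1::2].sum(axis=1)  # items 2, 4, 6, ... (even-numbered)

r_half = np.corrcoef(odd_half, even_half)[0, 1]  # reliability of the half test
r_full = (2 * r_half) / (1 + r_half)             # Spearman-Brown correction for the full test
print(f"Half-test reliability: {r_half:.2f}; full-test (Spearman-Brown): {r_full:.2f}")
```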

Thank You!!!
