
Pilot Study

By:
Mrs. Sudershna P. Lal
Associate Professor

Pilot Study
• A pilot study is a smaller version of the proposed study. Its function is to
obtain information, assess the feasibility of the study, improve it, and
decide the plan for data analysis.

• A pilot study is conducted with a smaller number of subjects (about 10% of the
main study sample), in the same setting, and with the same data collection and
analysis techniques.
Pilot Study
• A pilot study refers to a small-scale preliminary tryout of the methods to
be used in the actual large study. It acquaints the researcher with problems
that can be corrected before the large study, and gives the researcher an
opportunity to try out the procedures, methods, and tools of data collection.
Purposes of Pilot Study
1. To study the feasibility and practicability of the research study.
2. To assess the availability of study subjects.
3. To assess the validity and reliability of tools.
4. To ensure the appropriateness of the method and procedure of data
collection.
5. To understand the study variables.
6. To refine the methodology.
7. To plan the analysis and interpretation of data on a large scale.
VALIDITY
• “Validity refers to the degree to which an instrument measures
what it is supposed to measure.”
…………..Polit & Hungler

• E.g., a thermometer is supposed to measure temperature; it cannot be
considered valid for measuring anything other than temperature.
VALIDITY
• Validity in relation to research is a judgment regarding the degree to which the
components of the research reflect the theory, concept, or variable under study
……………………..(Streiner & Norman, 1996).

• The validity of the instrument used and the validity of the research design as a
whole are important criteria in evaluating the worth of the results of the
research conducted.
• Internal validity refers to the likelihood that the experimental
manipulation was indeed responsible for the differences observed.

• External validity refers to the extent to which the results of the study
can be generalized to the larger population (Polit & Hungler, 1999).
TYPES OF VALIDITY
Four types of validity are used to judge the
accuracy of an instrument:

(1) Content validity

(2) Face validity

(3) Criterion validity
    a) Predictive validity  b) Concurrent validity

(4) Construct validity
1. CONTENT VALIDITY
The extent to which the different items in an assessment
measure the trait or phenomenon they were meant to measure.
A high level of content validity indicates that the test items
accurately reflect the trait being measured.

A questionnaire to assess anxiety, for example, would be
high in content validity if it included questions about known
symptoms of anxiety such as muscle tension and a rapid
pulse rate.
2. FACE VALIDITY
Face validity involves an overall look at an instrument regarding
its appropriateness to measure a particular attribute or
phenomenon.

Face validity is not a very important or essential type of
validity.

This aspect of validity refers to the face value or outward
appearance of an instrument.

E.g., when a Likert scale is designed to assess the attitude of nurses
toward AIDS patients, a researcher may judge the face value of the
instrument by its appearance, i.e., whether it looks good or not, but this
provides no guarantee of its appropriateness and completeness.
3. CRITERION VALIDITY

In this type of validity, a relationship is established between the
measurement of an instrument and some other external criterion.

E.g., a tool is developed to measure professionalism
among nurses. To assess criterion validity, the nurses are
separately asked about the number of research papers they
have published and the number of professional conferences they
have attended. Later, a correlation coefficient is calculated to assess
the criterion validity.
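The correlation coefficient in the example above could be computed with a short sketch like the following (plain Python, no external libraries; the tool scores and paper counts are invented purely for illustration):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two paired lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical data for five nurses: professionalism tool scores
# versus number of research papers published
tool_scores = [62, 75, 80, 88, 94]
papers_published = [1, 2, 2, 4, 5]
print(round(pearson_r(tool_scores, papers_published), 2))  # prints 0.95
```

A coefficient close to +1 would support the criterion validity of the tool; a value near 0 would suggest the tool does not relate to the external criterion.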
a) PREDICTIVE VALIDITY

The ability of an assessment measure to predict someone’s
future behaviour in a related but different situation.

An assessment measure with high predictive validity is
capable of making accurate predictions of future behaviour.
Low predictive validity means that a measure is of little use
in predicting a particular behaviour.
b) CONCURRENT VALIDITY

Concurrent validity reflects how well different measures of the
same trait agree with one another.

If a test possesses a high degree of concurrent validity, it
can be expected to give results very similar to other
measures of the same characteristic.
4. CONSTRUCT VALIDITY

The extent to which a theoretical construct, such as a
personality trait, can be empirically defined.
RELIABILITY

“Reliability is the degree of consistency
and accuracy with which an instrument
measures the attribute it is
designed to measure.”
RELIABILITY

The reliability of an instrument reflects its stability and
consistency within a given context.

Reliability is the consistency of measurement over time, i.e.
whether the instrument provides the same results on repeated trials.

It is defined as a characteristic of an instrument that reflects
the degree to which the instrument provokes consistent
responses.

For example,
a BP apparatus that reads 120/80 and, after some time, 150/82 is
not reliable.
Three characteristics of reliability are commonly
evaluated:

1. Stability,

2. Internal consistency, and

3. Equivalence.
1. STABILITY OR TEST-RETEST RELIABILITY

Stability refers to the degree to which research participants’
responses change over time.

The test-retest method is used to test the stability of a tool.

In this method, an instrument is given to the same
individuals on two occasions within a relatively short duration
of time.

A correlation coefficient is calculated to determine how
closely the participants’ responses on the second occasion
match their responses on the first occasion.
Interpretation of results

1.00 – perfectly reliable
0.00 – no reliability
Above 0.70 – acceptable level of reliability

The basic problem is that traits may change with time, such as
attitude, behaviour, mood, and knowledge.
2. SPLIT-HALF RELIABILITY OR INTERNAL
CONSISTENCY

• Internal consistency is a measure of reliability that is frequently used with
scales designed to assess psychosocial characteristics.

• Instruments can be assessed for internal consistency using the split-half
technique (i.e. answers to one half of the items are compared with
answers to the other half, e.g. odd-numbered versus even-numbered
items), or by calculating the alpha coefficient, or by using the
Kuder-Richardson formula.

• With the alpha coefficient and the Kuder-Richardson formula, a
coefficient that ranges from 0 to 1.00 usually results.
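As a sketch of how the alpha coefficient mentioned above could be computed (plain Python; the item scores below are invented purely for illustration):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.
    item_scores: one list per item, each holding that item's score
    for every respondent (all lists the same length)."""
    k = len(item_scores)        # number of items
    n = len(item_scores[0])     # number of respondents

    def variance(xs):           # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 3-item scale answered by 4 respondents;
# every item ranks the respondents identically, so alpha is 1.0
items = [[1, 2, 3, 4],
         [1, 2, 3, 4],
         [1, 2, 3, 4]]
print(cronbach_alpha(items))  # prints 1.0
```

For dichotomous (yes/no) items, this same formula reduces to the Kuder-Richardson coefficient (KR-20), so one sketch covers both cases mentioned in the slide.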
3. INTERRATER RELIABILITY OR THE
NOTION OF EQUIVALENCE

This is applicable when different observers use the same
instrument to collect data at the same time.
A coefficient can be calculated, or other statistical or
non-statistical procedures can be used, to assess the agreement
of the values.

E.g., a rating scale is developed to assess the cleanliness of a
bone marrow transplantation unit; this rating scale may be
administered to observe the cleanliness of the unit by two
different observers simultaneously but independently.
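One coefficient commonly calculated for two observers rating the same units is Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch (plain Python; the cleanliness ratings are invented purely for illustration):

```python
def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # observed proportion of exact agreement
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # agreement expected by chance, from each rater's marginal proportions
    p_chance = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical cleanliness ratings of 6 areas by two independent observers
observer_1 = ["clean", "clean", "dirty", "clean", "dirty", "clean"]
observer_2 = ["clean", "dirty", "dirty", "clean", "dirty", "clean"]
print(round(cohen_kappa(observer_1, observer_2), 2))  # prints 0.67
```

Kappa is 1.0 for perfect agreement and 0 when the observers agree no more often than chance would predict.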
