
Instrumentation

An important part of a research study is the instrument used in gathering the data, because the quality of the research output depends to a large extent on the quality of the research instruments used. Instrument is the generic term that researchers use for a measurement device, such as a survey or test.
To help distinguish between instrument and instrumentation, consider that the instrument is the device, while instrumentation is the course of action: the process of developing, testing, and using the device.
Researchers can choose the type of instrument to use based on their research questions or objectives. There are two broad categories of instruments, namely: (1) researcher-completed instruments and (2) subject-completed instruments. Examples are shown in the following table:
Researcher-Completed Instruments     Subject-Completed Instruments
Rating scales                        Questionnaires
Interview schedules/guides           Self-checklists
Tally sheets                         Attitude scales
Flowcharts                           Personality inventories
Performance checklists               Achievement/aptitude tests
Time-and-motion logs                 Projective devices
Observation forms                    Sociometric devices
Treece and Treece (1977) divided research instruments, or tools for gathering data, into two categories:
1) Mechanical devices include almost all tools (such as microscopes, telescopes, thermometers, rulers, and monitors) used in the physical sciences. In the social sciences and nursing, mechanical devices include equipment such as tape recorders, cameras, film, and videotape. Also included are the laboratory tools and equipment used in experimental research in the chemical and biological sciences, as well as in industry and agriculture.
2) Clerical tools are used when the researcher studies people and gathers data on the feelings, emotions, attitudes, and judgments of the subjects. Some clerical tools are records, histories, case studies, questionnaires, and interview schedules.

A critical portion of the research study is the instrument used to gather data. The validity of the findings and conclusions drawn from the statistical analysis will depend greatly on the characteristics of your instruments.
According to Calderon (1993), the following are the characteristics of a good research instrument:
1. The instrument must be valid and reliable.
2. It must be based upon the conceptual framework or what the researcher wants to
find out.
3. It must gather data suitable for and relevant to the research topic.
4. It must gather data that would test the hypotheses or answer the questions under
investigation.
5. It should be free from all kinds of bias.
6. It must contain only questions or items that are unequivocal.
7. It must contain clear and definite directions to accomplish it.
8. If the instrument is a mechanical device, it must be of the best or latest model.
9. It must be accompanied by a good cover letter.
10. It must be accompanied, if possible, by a letter of recommendation from a sponsor.
According to Falatado et al. (2016), the following are the general criteria of a good research instrument.
1. Validity - refers to the extent to which the instrument measures what it intends to measure and performs as it is designed to perform. It is unusual, and nearly impossible, for an instrument to be 100% valid, which is why validity is generally measured in degrees. As a process, validation involves collecting and analyzing data to assess the accuracy of an instrument. There are numerous statistical tests and measures to assess the validity of quantitative instruments, which generally involve pilot testing.
There are three major types of validity: content validity, construct validity, and criterion validity.

Type of Validity      Description
Content validity      The extent to which a research instrument accurately
                      measures all aspects of a construct.
Construct validity    The extent to which a research instrument (or tool)
                      measures the intended construct.
Criterion validity    The extent to which a research instrument is related to
                      other instruments that measure the same variables.
a) Content validity primarily focuses on the appropriateness, authenticity, and representativeness of the items of the test to measure the behavior or characteristics to be investigated. This is normally determined after a group of experts on the subject matter has systematically examined the test items. These items are pilot tested, and thereafter certain statistical calculations can be done on the results, depending on the type and purpose of the test. Item analysis, for example, may be done with respect to achievement tests to determine the difficulty and discrimination indices of each item. The difficulty index describes how easy or difficult the test items are. The discrimination index gives the ability of each item to distinguish between examinees who know the material and those who do not.
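To make the item-analysis idea concrete, here is a minimal Python sketch. It assumes a binary-scored response matrix (1 = correct, 0 = incorrect) and the common upper/lower 27% grouping; the function name, sample data, and group fraction are illustrative assumptions, not a prescribed procedure.

```python
import numpy as np

def item_analysis(scores, group_fraction=0.27):
    """Difficulty and discrimination indices for binary-scored items.

    scores: array of shape (n_examinees, n_items); 1 = correct, 0 = wrong.
    """
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    k = max(1, int(round(group_fraction * n)))  # size of upper/lower groups

    # Difficulty index: proportion of examinees who answered each item
    # correctly (a higher value means an easier item).
    difficulty = scores.mean(axis=0)

    # Rank examinees by total score, then compare the top and bottom groups.
    order = np.argsort(scores.sum(axis=1))
    lower, upper = scores[order[:k]], scores[order[-k:]]

    # Discrimination index: proportion correct in the upper group minus
    # proportion correct in the lower group, per item.
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

# Hypothetical responses: 4 examinees x 3 items.
responses = [[1, 1, 0],
             [1, 0, 0],
             [1, 1, 1],
             [0, 0, 0]]
print(item_analysis(responses))
```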
b) Construct validity refers to whether you can draw inferences about test scores related to the concept being studied. It describes the extent to which a test demonstrates a particular theoretical construct, developmental characteristic, or indicator. There are three types of evidence that can be used to demonstrate that a research instrument has construct validity:

✓ Homogeneity - the instrument measures one construct.
✓ Convergence - this occurs when the instrument measures concepts similar to those of other instruments. If there are no similar instruments available, this will not be possible to do.
✓ Theory evidence - this is evident when behavior is similar to the theoretical propositions of the construct measured in the instrument.

An example of this is when an instrument measures anxiety: one would expect participants who score high on the instrument for anxiety to also demonstrate symptoms of anxiety in their day-to-day lives.
c) Criterion-related validity is achieved by determining the effectiveness of the test in measuring results against a given set of criteria or standards. In achievement or performance tests, the desired competencies are used as the criteria. This type of validity is best understood statistically. A criterion is any other instrument that measures the same variable, and correlations can be conducted to determine the extent to which the different instruments measure the same variable. Criterion validity is measured in three ways, as shown in the list and the short sketch that follow:

• Convergent validity - shows that an instrument is highly correlated with instruments measuring similar variables.
Example: geriatric suicide correlates significantly and positively with depression, loneliness, and hopelessness.
• Divergent validity - shows that an instrument is poorly correlated with instruments that measure different variables.
Example: there should be a low correlation between an instrument that measures motivation and one that measures self-efficacy.
• Predictive validity - means that the instrument should correlate highly with a future criterion.
Example: a high self-efficacy score related to performing a task should predict the likelihood of a participant completing the task.
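Because all three forms of criterion validity reduce to correlations between instruments, a tiny Python sketch can show the computation. The scale names and scores below are invented for illustration; in practice you would use actual paired scores from the two instruments.

```python
import numpy as np

# Hypothetical scores for the same respondents on two instruments that
# are supposed to measure similar constructs (convergent validity).
anxiety_scale = np.array([12, 18, 9, 22, 15, 7, 19, 14])
worry_scale   = np.array([10, 20, 8, 25, 13, 6, 21, 12])

# Pearson correlation: values near +1 support convergent validity;
# for divergent validity one would expect a value near 0.
r = np.corrcoef(anxiety_scale, worry_scale)[0, 1]
print(f"correlation between the two instruments: r = {r:.2f}")
```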
2. Reliability - relates to the extent to which the instrument is consistent. The instrument should obtain the same responses when applied to respondents who are similarly situated. Likewise, when the instrument is applied at two different points in time, the responses must correlate highly with one another. Hence reliability can be measured by correlating the responses of subjects exposed to the instrument at two different time periods, or by correlating the responses of subjects who are similarly situated. An example of this is that a participant completing an instrument meant to measure motivation should give approximately the same responses each time the test is completed. Although it is not possible to give an exact calculation of reliability, an estimate of reliability can be achieved through different measures. The three attributes of reliability are the following:
Attributes of Reliability in Quantitative Research

Attribute                               Description
Internal consistency or homogeneity     The extent to which all items on a
                                        scale measure one construct.
Stability or test-retest correlation    The consistency of results using an
                                        instrument with repeated testing.
Equivalence                             Consistency among responses of
                                        multiple users of an instrument, or
                                        among alternate forms of an instrument.
• Internal consistency or homogeneity is when an instrument measures a specific concept. This concept is measured through questions or indicators, and each question must correlate highly with the total for that dimension. For example, if teaching effectiveness is measured in terms of seven questions, the score for each question must correlate highly with the total for teaching effectiveness.
There are three ways to check the internal consistency or homogeneity of an index (a short code sketch after the list illustrates all three):

a) Split-half correlation. We could split the index of "exposure to televised news" in half so that there are two groups of two questions, and see if the two sub-scales are highly correlated. That is, do people who score high on the first half also score high on the second half?
b) Average inter-item correlation. We can also determine the internal consistency for each question on the index. If the index is homogeneous, each question should be highly correlated with the other three questions.
c) Average item-total correlation. We can correlate each question with the total score of the TV news exposure index to examine the internal consistency of the items. This gives us an idea of the contribution of each item to the reliability of the index.
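Here is a minimal Python sketch of all three checks, assuming a small respondents-by-items score matrix such as the four-question TV news exposure index described above. The function name and sample data are hypothetical.

```python
import numpy as np

def internal_consistency(items):
    """Three quick checks on a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    total = items.sum(axis=1)

    # a) Split-half correlation: correlate the summed score of the first
    #    half of the items with the summed score of the second half.
    half = n_items // 2
    split_half = np.corrcoef(items[:, :half].sum(axis=1),
                             items[:, half:].sum(axis=1))[0, 1]

    # b) Average inter-item correlation: mean of the off-diagonal entries
    #    of the item-by-item correlation matrix.
    r = np.corrcoef(items, rowvar=False)
    inter_item = r[np.triu_indices(n_items, k=1)].mean()

    # c) Average item-total correlation: mean correlation of each item
    #    with the total index score.
    item_total = np.mean([np.corrcoef(items[:, j], total)[0, 1]
                          for j in range(n_items)])
    return split_half, inter_item, item_total

# Hypothetical 5 respondents x 4 questions on a "TV news exposure" index.
tv_news = [[3, 4, 3, 5],
           [1, 2, 1, 2],
           [4, 4, 5, 4],
           [2, 1, 2, 2],
           [5, 5, 4, 5]]
print(internal_consistency(tv_news))
```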
• Stability or test-retest correlation is an aspect of reliability in which many researchers report that a highly reliable test is stable over time. Test-retest correlation provides an indication of stability over time: it is the extent to which scores on a test are essentially invariant over time. This definition clearly focuses on the measurement instrument and the obtained test scores in terms of test-retest stability. An example of this is when we ask the respondents in our sample the four questions once in the month of September and again in December; we can then examine whether the two waves of the same measures yield similar results.
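As a sketch of the September/December example, test-retest reliability is again just a correlation, this time between the two waves of measurement; the scores below are invented for illustration.

```python
import numpy as np

# Hypothetical total index scores for the same respondents, measured
# once in September (wave 1) and again in December (wave 2).
september = np.array([14, 9, 17, 12, 20, 8, 15])
december  = np.array([13, 10, 16, 12, 19, 9, 14])

# A high correlation between the two waves indicates that scores are
# stable over time.
r = np.corrcoef(september, december)[0, 1]
print(f"test-retest correlation: r = {r:.2f}")
```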
• Equivalence reliability is measured by the correlation of scores between different versions of the same instrument, or between instruments that measure the same or similar constructs, such that one instrument can be reproduced by the other. It tells us the extent to which different investigators using the same instrument to measure the same individuals at the same time yield consistent results. Equivalence may also be estimated by measuring the same concepts with different instruments, for example a survey questionnaire and official records, on the same sample; this is known as multiple-forms reliability.

When you gather data, consider the readability of the instrument. Readability refers to the level of difficulty of the instrument relative to the intended users. Thus, an instrument written in English and applied to a set of respondents with no education will be useless and unreadable.
3. Practicality - this characteristic of a research instrument can be judged in terms of economy, convenience, and interpretability. From the operational point of view, the research instrument ought to be practical, i.e., it should be economical, convenient, and interpretable. The economy consideration suggests that some trade-off is needed between the ideal research project and what the budget can afford. The length of the measuring instrument is an important area where economic pressures are quickly felt. Although more items give greater reliability, as stated earlier, in the interest of limiting the interview or observation time we have to take only a few items for our study purpose. Similarly, the data-collection methods to be used are also at times dependent upon economic factors. The convenience test suggests that the measuring instrument should be easy to administer.
For this purpose, one should give due attention to the proper layout of the research instrument. For instance, a questionnaire with clear instructions (illustrated by examples) is certainly more effective and easier to complete than one which lacks these features. The interpretability consideration is especially important when persons other than the designers of the test are to interpret the results. The research instrument, in order to be interpretable, must be supplemented by:

a) detailed instructions for administering the test;
b) scoring keys;
c) evidence about the reliability; and
d) guides for using the test and for interpreting results.
