
Project MIMs

Grade 12 – Practical Research 2

G12 MIMs LC 16 & 20


RESEARCH INSTRUMENT

Learning Competencies:
• constructs an instrument and establishes its validity and reliability
• collects data using appropriate instruments

Objectives:
• construct the instrument of a research study and ensure its validity and reliability
• use the researcher-made instrument in data collection

REMEMBER:

RESEARCH INSTRUMENT

A research instrument is a tool used to obtain, measure, and analyze data from
subjects around the research topic. You need to decide which instrument to use based on
the type of study you are conducting: quantitative, qualitative, or mixed method. For
instance, for a quantitative study, you may decide to use a questionnaire, and for a
qualitative study, you may choose to use an interview guide. While it helps to use an
established instrument, as its efficacy is already established, you may, if needed, use a
new instrument or even create your own. You need to describe the instrument/s used in the
Methods section of the research paper. The most commonly used research instruments in
quantitative research studies include survey questionnaires and tests.

ENLIGHTEN:

SURVEY QUESTIONNAIRE

A survey questionnaire is a research instrument consisting of a series of questions
and other prompts for the purpose of gathering information from respondents. It is also
said to be the most widely used quantitative research instrument. In a quantitative survey
questionnaire, you may use short-answer responses, dichotomous questions, multiple-choice
answers, paragraphs, checkboxes, drop-downs, linear scales, multiple-choice grids, and
more. As you can see, there are various question formats that can be adapted to your
research needs.

Dichotomous Questions

This type of question is generally answered with “yes” or “no”.

Example:

Have you traveled to Guatemala?


___ Yes
___ No

Multiple Choice Questions

Example:

Where do you get the news from?


___ Television
___ Radio
___ Newspaper
___ Magazine
___ Word-of-mouth
___ Internet
Other: Please Specify _______________
For this type of question, it is important to include an "Other" category
because there may be other sources of news that you might have overlooked.

Rank Order Scaling Questions

Rank order scaling questions allow a certain set of brands or products to be
ranked based upon a specific attribute or characteristic. Perhaps we know that Toyota,
Honda, Mazda, and Ford are most likely to be purchased. You may request that the
options be ranked based upon a particular attribute. Ties may or may not be allowed;
if you allow ties, several options can have the same score.

Example:

Based upon what you have seen, heard, and experienced, please rank the
following brands according to their reliability. Place a "1" next to the brand that is most
reliable, a "2" next to the brand that is next most reliable, and so on. Remember, no two cars
can have the same ranking.

Honda __
Toyota __
Mazda __
Ford __
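
Rank-order responses like the example above are often summarized by averaging each
option's rank across respondents. The sketch below is a minimal, hypothetical
illustration (the respondents and their rankings are invented for demonstration):

```python
# Aggregate rank-order responses: a lower mean rank means more reliable overall.

def mean_ranks(responses):
    """Compute each brand's mean rank across all respondents."""
    totals = {}
    for response in responses:
        for brand, rank in response.items():
            totals[brand] = totals.get(brand, 0) + rank
    return {brand: total / len(responses) for brand, total in totals.items()}

# Three hypothetical respondents (1 = most reliable, 4 = least reliable).
responses = [
    {"Honda": 1, "Toyota": 2, "Mazda": 3, "Ford": 4},
    {"Honda": 2, "Toyota": 1, "Mazda": 4, "Ford": 3},
    {"Honda": 1, "Toyota": 2, "Mazda": 4, "Ford": 3},
]

ranks = mean_ranks(responses)
# Honda's mean rank is (1 + 2 + 1) / 3, the lowest, so it ranks first overall.
```

Note that if ties were allowed, two brands could end up with identical mean ranks,
which is exactly the situation described above.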

Rating Scale Questions

A rating scale question requires a person to rate a product or brand along a well-
defined, evenly spaced continuum. Rating scales are often used to measure the direction
and intensity of attitudes.

The following is an example of a comparative rating scale question: Which of the
following categories best describes your last experience purchasing a product or service
on our website? Would you say that your experience was:

___ Very pleasant


___ Somewhat pleasant
___ Neither pleasant nor unpleasant
___ Somewhat unpleasant
___ Very unpleasant

Semantic Differential Scale Questions

The semantic differential scale asks a person to rate a product, brand, or company
based upon a seven-point rating scale that has two bipolar adjectives at each end. The
following is an example of a semantic differential scale question.

Example:

(7) Very Attractive
(6)
(5)
(4)
(3)
(2)
(1) Very Unattractive

Notice that unlike the rating scale, the semantic differential scale does not label
a neutral or middle selection; only the two endpoint adjectives are labeled. A person
must choose, to a certain extent, one or the other adjective.

Stapel Scale Questions

The Stapel scale asks a person to rate a brand, product, or service according to a
certain characteristic on a scale from +5 to -5, indicating how well the characteristic
describes the product or service. The following is an example of a Stapel scale question:

When thinking about Data Mining Technologies, Inc. (DMT), do you believe that the
word "innovative" aptly describes or poorly describes the company? On a scale of +5 to
-5 with +5 being "very good description of DMT" and -5 being "poor description of DMT,"
how do you rank DMT according to the word "innovative"?

(+5) Describes very well
(+4)
(+3)
(+2)
(+1)
Innovative
(-1)
(-2)
(-3)
(-4)
(-5) Poorly Describes

Constant Sum Questions

A constant sum question permits collection of "ratio" data, meaning that the data is
able to express the relative value or importance of the options (option A is twice as important
as option B)

Example:

The following question asks you to divide 100 points between a set of options to
show the value or importance you place on each option. Distribute the 100 points giving the
more important reasons a greater number of points. The computer will prompt you if your
total does not equal exactly 100 points.

When thinking about the reasons you purchased our data mining software, please
rate the following reasons according to their relative importance.

Seamless integration with other software __________
User friendliness of software __________
Ability to manipulate algorithms __________
Level of pre- and post-purchase service __________
Level of value for the price __________
Convenience of purchase/quick delivery __________

Total 100 points

This type of question is used when you are relatively sure of the reasons for
purchase, or when you want input on a limited number of reasons you feel are important.
Responses must sum to exactly 100 points.

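
The 100-point check described above ("the computer will prompt you if your total does
not equal exactly 100 points") can be sketched as a small validation routine. The
option names and point values below are hypothetical:

```python
def validate_constant_sum(allocation, total=100):
    """Return True only if the respondent's points sum to exactly `total`."""
    return sum(allocation.values()) == total

# A hypothetical respondent's allocation over the options listed above.
allocation = {
    "Seamless integration": 30,
    "User friendliness": 25,
    "Algorithm manipulation": 10,
    "Pre-/post-purchase service": 10,
    "Value for price": 15,
    "Convenience of purchase": 10,
}

validate_constant_sum(allocation)  # True: 30+25+10+10+15+10 = 100
```

A survey form would re-prompt the respondent whenever this check returns False.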
Open-Ended Questions

The open-ended question seeks to explore the qualitative, in-depth aspects of a
particular topic or issue. It gives a person the chance to respond in detail. Although
open-ended questions are important, they are time-consuming and should not be overused.

Example:

(If the respondent indicates they did not find what they were looking for...)

What products or services were you looking for that were not found on our website?

If you want to add an "Other" answer to a multiple-choice question, you can use
branching instructions that lead to an open-ended question to find out what that
"Other" answer is.
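The branching described above can be sketched as follows. This is a hypothetical
flow, not tied to any particular survey platform; the question wording reuses the
news-source example from earlier:

```python
def ask_news_source(choice, followup_answer=None):
    """Record a multiple-choice answer, branching to an open-ended
    follow-up when the respondent picks 'Other'."""
    record = {"news_source": choice}
    if choice == "Other":
        # Branching instruction: route to the open-ended follow-up question.
        record["other_detail"] = followup_answer
    return record

ask_news_source("Internet")          # {'news_source': 'Internet'}
ask_news_source("Other", "Podcast")  # also stores the open-ended detail
```

Only respondents who choose "Other" ever see the open-ended follow-up, which keeps
the questionnaire short for everyone else.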

Demographic Questions

Demographic questions are an integral part of any questionnaire. They are used
to identify characteristics such as age, gender, income, race, geographic place of
residence, number of children, and so forth. For example, demographic questions will help
you to classify the difference between product users and non-users. Perhaps most of
your customers come from the Northeast, are between the ages of 50 and 65, and have
incomes between Php50,000 and Php75,000.

Demographic data helps you paint a more accurate picture of the group of persons
you are trying to understand. And by better understanding the type of people who use
or are likely to use your product, you can allocate promotional resources to reach these
people, in a more cost-effective manner.

TYPES OF SURVEYS

There are several types of surveys, such as telephone surveys, online surveys,
in-person surveys, and mobile surveys. These surveys are administered by interviewers
who have experience in research.

Production Tasks

Production tasks are usually used in education-related research. They can be
time-consuming, and you may use them for diagnostic purposes to see the beginning,
development, and end of a phenomenon. This tool is simply an exam to evaluate
knowledge. It can be a written test, an oral test, a reading or listening comprehension
test, or any other type of exam you might consider appropriate for your research purposes.

Checklist

A checklist, also known as a tick list or chart, works as an inventory of behaviors
or skills where the researcher checks indicators that are being observed. A checklist can
be a quantitative or qualitative tool. If you look for specific criteria with a yes/no
answer, it becomes a quantitative tool. On the other hand, if you look for specific
criteria or indicators and you want to deeply or briefly describe what you observe, it
becomes a qualitative tool. A checklist is a list of aspects to observe, such as content,
abilities, and behavior. It is a mechanism to verify whether certain indicators or
symptoms are present in a phenomenon. A checklist provides more information if the
researcher records additional comments on the context.

Research is a wide and changing field. The paradigm and type of study, as well as
your research questions, objectives, and hypotheses, will guide you to the instruments
to use for your research problem.

ADVANTAGES OF SURVEY QUESTIONNAIRE

• Survey questionnaires can reach a large number of people relatively easily and
economically.
• Survey questionnaires provide quantifiable answers.
• Survey questionnaires are relatively easy to analyze.
• Survey questionnaires are less time-consuming than interviews or observations.

STEPS IN SURVEY QUESTIONNAIRE CONSTRUCTION

1. Reviewing the Literature

Before constructing the questionnaire, the researcher must review all the related
literature to see if an already prepared questionnaire similar to the research topic is
available. This will save the time and effort required to construct an entirely new
questionnaire. Changes can be made as the study demands.

2. Deciding What Information should be Sought

List the specific objectives to be achieved by the survey questionnaire. The methods
of data analysis that will be applied to the returned questionnaires should also be kept
in mind.

3. Knowing Respondents

The researcher must know the target population in relation to occupation, special
sensitivities, education, ethnicity, language, etc.

4. Constructing Questionnaire Items

Each item on the survey questionnaire must be developed to measure a specific
aspect of the objectives or hypotheses. The researcher must be able to explain in detail
why a certain question is being asked and how the responses will be analyzed. Making dummy
tables that show how the item-by-item results of the study will be analyzed is a good idea.

5. Reexamination and Revision of the Questions

After formulating the survey questionnaire, revise it. Revision involves:

• supplementing one’s effort with the critical opinion of experts (who should
represent different approaches and belong to different social backgrounds)
• review by representatives of different groups such as minorities, racial
groups, women, etc.
• scrutinizing the survey questionnaire for any technical defects

6. Pretesting Survey Questionnaire

• Pretesting helps in identifying and solving unforeseen problems in the
administration of the survey questionnaire, such as the phrasing, sequence,
or length of the questions, and in identifying the need for any additional
questions or the elimination of undesired ones.
• The sample from the population used to pretest the survey questionnaire should
be similar in characteristics to those who will be included in the actual study.
• The data collection technique should be the same as planned for the actual study.
• After making necessary changes, a second pretest should be conducted.
Sometimes, in fact, a series of three or four or even more revisions and
pretests is required.

7. Editing the Survey Questionnaire and Specifying Procedure for Use

A final editing by the researcher is done to ensure that every element passes
inspection: the content, form and question sequence, spacing, arrangement, and appearance.

RULES FOR CONSTRUCTING SURVEY QUESTIONNAIRE ITEMS

1. Both open-ended and closed-ended questions can be used; however, closed-ended
questions are preferred.

2. Clarity of all the items is necessary to obtain valid results.

3. Short items are preferable to long items as they are easier to understand.

4. Negative items should be avoided as they are often misread by many respondents.

5. Avoid “double-barreled” items, which require the subject to respond to two separate
ideas with a single answer.

6. Avoid using technical terms, jargon, or big words that some respondents may not
understand.

7. When a general and a related specific question are to be asked together, it is
preferable to ask the general question first. Otherwise, asking the specific question
first will narrow the focus of the general question.

8. Avoid biased or leading questions.

RULES FOR SURVEY QUESTIONNAIRE FORMAT

• Make it attractive for the respondents
• Organize it so that it is easy to complete
• Number the survey questionnaire items and pages
• Instructions should be brief, clear, and in boldface
• Organize questions in a logical sequence
• The name and address of the person to whom the survey questionnaire is to be
returned should be given at the beginning as well as at the end
• Use examples before any item that might be confusing or difficult to understand
• Begin with a few interesting and non-threatening items
• Do not put important items at the end of a long survey questionnaire
• The words “questionnaire” and “checklist” should be avoided, as some people
might be prejudiced against them
• Keep it as short as possible
• For survey questionnaire items to be meaningful to respondents, enough
information should be included

TESTS

A test is a means of measuring the knowledge, skill, feeling, intelligence, or
aptitude of an individual or group. Tests produce numerical scores that can be used to
identify, classify, or evaluate test takers.

TYPES OF TESTS

Norm-Referenced Test

A norm-referenced test produces a score that tells how an individual’s performance
compares with that of other individuals. Tables of norms, based on scores obtained by a
relevant group of subjects tested by the test developer, are provided in the manual.
Interpretations based on relative performance are very useful for most of the
characteristics studied in the behavioral sciences, such as anxiety, creativity,
dogmatism, or racial attitudes. This type of test describes performance, such as
achievement, in relative terms.

Example:

How does the overall achievement of students in class A compare to the students in class B?

Domain-Referenced Test

A domain-referenced test measures the learner’s absolute level of performance in a
precisely defined content area or domain. It is increasingly used to measure
achievement-related performance. Domain-referenced tests estimate an individual’s domain
status, i.e., precisely what his or her level of performance is and what the specific
deficiencies are in the domain covered by the test.

Example:

What percentage of addition problems can Jose solve correctly?

INDIVIDUAL VS GROUP TESTS

• A group test is designed so that a sample of subjects can take the test all at one
time, whereas an individual test measures one individual at a time.

• Group tests are more objective in format. Individually administered measures are
used when the researcher is interested in studying the process rather than the product,
and they should be used only if they make an important contribution to the research.

Example:

Tests developed in clinical settings, such as the Rorschach Inkblot Technique and the
Thematic Apperception Test, are often individually administered so that clinicians can
measure not only a subject’s response but also learn why the subject gave a particular
response.

DEVELOPMENT OF TESTS

• Reviewing Literature
- Before developing a new test, review the available literature in order to seek a
test already available that can be used for the study, as test development is an
extremely complex process and requires training.

• Define Objectives
- Give careful thought to the specific outcomes or measures that are to be achieved.
E.g., the construction of achievement tests requires a careful description of the
knowledge or skill that the test should measure.

• Define Target Population
- A definition of the target population is required, as the characteristics of the
target population must be considered in many decisions on such questions as item type,
reading level, test length, and type of directions.

• Review Related Measures
- A careful review of tests that measure similar characteristics provides ideas on
methods for establishing validity, the application of different types of items, the
expected level of validity and reliability, and possible formats.
• Develop an Item Pool
- Before starting to write test items, the researcher needs to make decisions
regarding the type of items that should be used and the amount of emphasis that should
be given to each aspect of the characteristics or content area to be measured.

• Prepare a Prototype
- The first form of the test puts into effect the earlier decisions made regarding
the format, item type, etc., through tryouts.

• Evaluate the Prototype
- Obtain a critical review by at least three experts in test construction. This
review identifies needed changes, and the prototype is field-tested with a sample
from the targeted population.
- After collecting data, item analysis is conducted to identify good and bad items.
Analysis and interpretation depend upon the nature of the test. E.g., in developing a
norm-referenced achievement test, item analysis is usually concerned with the
difficulty level of each item and its ability to discriminate between good and poor
students.

• Revise Measure
- On the basis of the field-test experience and the results of item analysis, the
prototype is revised and field-tested again. This cycle may be repeated several times
in order to develop an effective test and to collect data on its reliability and
validity.
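
The item analysis mentioned under "Evaluate the Prototype" can be sketched as
follows. This assumes items scored 0/1, with difficulty computed as the proportion
of students answering correctly and discrimination as the upper-group minus
lower-group proportion; the 27% upper/lower split is a common rule of thumb, and
all scores below are hypothetical:

```python
def item_analysis(scores, item):
    """Difficulty and discrimination for one 0/1-scored item.

    scores: list of per-student lists of item scores (one inner list per student).
    """
    n = len(scores)
    # Difficulty: proportion of all students answering the item correctly.
    difficulty = sum(s[item] for s in scores) / n
    # Discrimination: upper 27% minus lower 27% of students by total score.
    ranked = sorted(scores, key=sum, reverse=True)
    k = max(1, round(0.27 * n))
    upper = sum(s[item] for s in ranked[:k]) / k
    lower = sum(s[item] for s in ranked[-k:]) / k
    return difficulty, upper - lower

# Hypothetical results: 4 students x 3 items.
scores = [
    [1, 1, 1],  # highest scorer
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],  # lowest scorer
]
diff, disc = item_analysis(scores, item=0)
# Item 0: difficulty 0.5; the top student got it right and the bottom student
# did not, so the item discriminates between good and poor students.
```

An item with very high or very low difficulty, or with discrimination near zero or
negative, would be flagged as a "bad item" and revised or dropped.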

STANDARDIZED TEST

• Constructed by experts
• Individual test items are revised and analyzed to meet standards of quality
• Directions for administering the test are available
• Objective
• Evidence of validity and reliability exists

TYPES OF STANDARDIZED TEST

Intelligence Test: Wechsler Adult Intelligence Scale (suitable for testing late
adolescents and adults)
Aptitude Test: Modern Language Aptitude Test
Achievement Test: Wide Range Achievement Test (in the areas of reading,
spelling, and arithmetic)
Diagnostic Test: Stanford Diagnostic Reading Test
Measures of Creativity: Word Fluency (the person writes words each containing a
specified letter)
Projective Techniques: Holtzman Inkblot Technique and Rorschach Test
Measures of Self-Concept: Tennessee Self Concept Scale (includes areas such as
self-criticism, physical self, personal self, and social self)
Attitude Scale: Thurstone-Type Scale (the individual expresses agreement or
disagreement with a series of statements about an attitude object)
Measures of Vocational Interest: Career Assessment Inventory
Self-Report Measures of Personality
a. General Inventories: Minnesota Multiphasic Personality Inventory (most
suitable for late adolescents and adults; measures variables such as response
sets and scales of ego strength, anxiety, and repression-sensitization in
addition to the original scales, e.g., depression, schizophrenia)
b. Specific Inventories: Rokeach Dogmatism Scale (designed to measure the
variable of closed-mindedness; often used in educational and psychological
research as a measure of authoritarianism)
c. Checklists: The Adjective Check List (measures adjectives such as
imaginative, stubborn, relaxed)

USES OF TESTS IN RESEARCH

• Tests are used in correlational studies. E.g., to determine the relationship
between quantitative ability and achievement in science among high school students,
the researcher may define quantitative ability as scores on the School and College
Ability Test, Series III, and science achievement as scores on the science sections
of the Sequential Tests of Educational Progress (STEP III).

• Experimental studies also use different tests, such as the Youth Program Quality
Assessment, Kansas City Youth Net Standards, etc.

VALIDITY AND RELIABILITY OF RESEARCH INSTRUMENT

Validity and reliability are two important factors to consider when developing and
testing any instrument (e.g., content assessment test, questionnaire) for use in a study.
Attention to these considerations helps to ensure the quality of your measurement and of
the data collected for your study.

UNDERSTANDING AND TESTING VALIDITY

Validity refers to the degree to which an instrument accurately measures what it
intends to measure. Three common types of validity for researchers and evaluators to
consider are content, construct, and criterion validity.

• Content validity indicates the extent to which items adequately measure or
represent the content of the property or trait that the researcher wishes to measure.
Subject matter expert review is often a good first step in instrument development to
assess content validity in relation to the area or field you are studying.

• Construct validity indicates the extent to which a measurement method accurately
represents a construct (e.g., a latent variable or phenomenon that cannot be
measured directly, such as a person’s attitude or belief) and produces an
observation distinct from that which is produced by a measure of another construct.
Common methods to assess construct validity include, but are not limited to, factor
analysis, correlation tests, and item response theory models (including the Rasch
model).

• Criterion-related validity indicates the extent to which the instrument’s scores
correlate with an external criterion (i.e., usually another measurement from a
different instrument), either at present (concurrent validity) or in the future
(predictive validity). A common measurement of this type of validity is the
correlation coefficient between two measures.
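
The correlation coefficient used for criterion-related validity can be computed as
below. This is a minimal Pearson r sketch; the instrument and criterion scores are
hypothetical, invented only to illustrate the calculation:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between instrument scores and an external criterion."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: a new instrument vs. an established criterion measure,
# both administered to the same five participants (concurrent validity).
instrument = [12, 15, 9, 20, 17]
criterion = [48, 55, 40, 71, 60]
r = pearson_r(instrument, criterion)
# An r close to +1 would count as evidence of concurrent validity.
```

For predictive validity, the criterion scores would simply be collected later
(e.g., grades the following semester) rather than at the same time.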

Oftentimes, when developing, modifying, and interpreting the validity of a given
instrument, rather than viewing or testing each type of validity individually,
researchers and evaluators test for evidence of several different forms of validity
collectively.

UNDERSTANDING AND TESTING RELIABILITY

Reliability refers to the degree to which an instrument yields consistent results.
Common measures of reliability include internal consistency, test-retest, and
inter-rater reliability.

• Internal consistency reliability looks at the consistency of the scores of
individual items on an instrument with the scores of a set of items, or subscale,
which typically consists of several items measuring a single construct. Cronbach’s
alpha is one of the most common methods for checking internal consistency
reliability. Group variability, score reliability, the number of items, sample size,
and the difficulty level of the instrument can also impact the Cronbach’s alpha
value.
• Test-retest reliability measures the correlation between scores from one
administration of an instrument to another, usually within an interval of two to
three weeks. Unlike pre- and post-tests, no treatment occurs between the first and
second administrations of the instrument when testing test-retest reliability. A
similar type of reliability, called alternate forms, involves using slightly
different forms or versions of an instrument to see if the different versions yield
consistent results.

• Inter-rater reliability checks the degree of agreement among raters (i.e., those
completing items on an instrument). Common situations where more than one rater is
involved occur when more than one person conducts classroom observations, uses an
observation protocol, or scores an open-ended test using a rubric or other standard
protocol. Kappa statistics, correlation coefficients, and intra-class correlation
(ICC) coefficients are some of the commonly reported measures of inter-rater
reliability.
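
Two of the reliability measures above can be sketched directly; test-retest
reliability is simply an ordinary correlation between the two administrations'
score lists. The data below (a 3-item Likert scale and two raters scoring six
essays) are hypothetical, and this is an illustration rather than a substitute
for statistical software:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of columns, one list of scores per item."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical 3-item scale answered by 5 respondents (Likert 1-5).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 1],
]
alpha = cronbach_alpha(items)  # values above roughly 0.7 are conventionally acceptable

# Hypothetical two raters scoring six essays as pass ("P") or fail ("F").
kappa = cohen_kappa(["P", "P", "F", "P", "F", "F"],
                    ["P", "P", "F", "F", "F", "F"])
```

Kappa is preferred over raw percent agreement because two raters will agree some of
the time purely by chance, and kappa subtracts that chance agreement out.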

Developing a valid and reliable instrument usually requires multiple iterations of
piloting and testing, which can be resource intensive. Therefore, when available, it is
suggested to use already established valid and reliable instruments, such as those
published in peer-reviewed journal articles. However, even when using these instruments,
you should re-check validity and reliability, using the methods of your study and your
own participants’ data, before running additional statistical analyses. This process will
confirm that the instrument performs as intended in your study with the population you
are studying, even when these are identical to the purpose and population for which the
instrument was initially developed.

LET’S TRY:

Instructions: Write your comprehensive learning about the following.

1. What are the processes and things to be considered in writing the research
instrument of a research study?
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
REINFORCEMENT:

Instructions: Based on your approved research topic and title, write the research instrument
of your research as part of the final requirement in Practical Research 2 by
considering the learning that you have in this module. Use the format below.

Research Instrument

Challenge!

Find four (4) different quantitative research and read the research instrument of the
study. Critique the research instrument of the study based on the learning you gained
using this module. Follow the format below.

1. Research Title:

Remarks on the Research Instrument

2. Research Title:

Remarks on the Research Instrument


3. Research Title:

Remarks on the Research Instrument

4. Research Title:

Remarks on the Research Instrument

Prepared by:

MR. JESTER G. DE LEON


Master Teacher I, MNHS – SHS
