
Instruments in Quantitative Research Design


 
We understand that a research instrument is a tool used to gather, measure, and analyze data related to your research interests. It can take the form of interviews, tests, surveys, or checklists. It is usually determined by the researcher and is tied to the methodology. According to Barrot (2017), some of the common instruments used in quantitative research are tests (performance-based or paper-and-pencil), questionnaires, interviews, and observations. The last two instruments are used more often in qualitative research; however, they can also be employed in quantitative research studies as long as the required responses or analyzed data are quantifiable or numerical in nature. It is worth noting that constructing the research instrument comes after the concepts have been defined and the units of analysis have been chosen; it is through the instrument that the concepts are operationalized.
 
Characteristics of a Good Research Instrument according to the Teachers College, Columbia
University:
 
 Valid and reliable
 Based on a conceptual framework, or the researcher's understanding of how the particular variables in the study connect with each other
 Gathers data suitable for and relevant to the research topic
 Able to test hypotheses and/or answer the proposed research questions under investigation
 Free of bias and appropriate for the context, culture, and diversity of the study site
 Contains clear and definite instructions for using the instrument
 
When using instruments that are prone to subjectivity – observation, interview, assessment of
performance tasks – you may consider having another coder or evaluator to help you gather and analyze
your data.  This is to improve the validity and reliability of your data (Barrot, 2017).
When describing your instrument, consider the following aspects:
1. The actual instrument used.
2. The purpose of the instrument.
3. The developer of the instrument (an institution or other researchers).
4. The number of items or sections in the instrument.
5. The response format used (e.g., multiple choice, yes or no).
6. The scoring of the responses.
7. The reliability and validity of the instrument.
There are three ways of developing an instrument for quantitative research:
 Adopting an instrument. This means that you will utilize an instrument that has been used by well-known institutions or in reputable studies and publications. Some popular sources of instruments include professional journals, websites, and the like. Adopting an instrument means that you do not have to spend time establishing its validity and reliability, since these have already been tested by its developers and other researchers.
 

Note: If the available tests do not generate the exact data that you want to obtain, you may either
modify an existing instrument or create your own instrument.

 
 In developing your instrument, be guided by the instruments used in studies similar to yours. Make sure, however, that the items contained in your instrument are aligned with your research questions or objectives (SOP). Keep in mind that inadequacies in your research instrument will yield inaccurate data, thereby making the results of your study questionable.
 
INSTRUMENT VALIDITY
 
Validity is the degree to which an instrument measures what it purports to measure. Invalid instruments can lead to erroneous research conclusions, which in turn can influence educational decisions.
 
According to Ghazali (2016), an instrument is valid when it measures what it is supposed to measure. In other words, when an instrument accurately measures a prescribed variable, it is considered a valid instrument for that particular variable. There are four types of validity: face validity, criterion validity, content validity, and construct validity. Face validity considers whether the test looks valid on its surface. Criterion validity is demonstrated in the actual study; establishing it requires "a good knowledge of theory relating to the concept and a measure of the relationship between our measure and those factors." Content validity examines the content of the items to see whether they really measure the concept being studied. Finally, construct validity measures the extent to which an instrument accurately measures the theoretical construct it is designed to measure.
 
 According to Barrot (2017),
Face Validity. An instrument has face validity when it “appears” to measure the variables being studied.
Hence, checking for face validity is a subjective process. It does not ensure that the instrument has actual
validity.
Content validity. It refers to the degree to which an instrument covers a representative sample or specific
elements of the variable to be measured. Similar to face validity, assessing content validity is a subjective
process which is done with the help of specifications. This list of specifications is provided by experts in
your field of study.
Construct validity. It is the degree to which an instrument measures the variables being studied as a whole. Thus, the instrument is able to detect what should exist theoretically. A construct is often an intangible or abstract variable such as personality or intelligence. If your instrument cannot detect this intangible construct, it is considered invalid.
Criterion validity. This refers to the degree to which an instrument predicts the characteristics of a variable in a certain way. This means that the instrument produces results similar to those of other instruments measuring the same variable; a correlation between the results obtained through the instrument and those of another is therefore expected. Hence, criterion validity is evaluated through statistical methods. Criterion validity can be classified as concurrent or predictive. An instrument has concurrent validity when it is able to produce results similar to those of tests already validated in the past. An example of testing concurrent validity is checking whether an admission test produces results similar to those of the National Achievement Test. On the other hand, an instrument has predictive validity when it produces results similar to those of another instrument that will be employed in the future. An example of testing predictive validity is using a college admission test in mathematics to predict the future performance of students in mathematics.
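Because criterion validity is evaluated through correlation, it can be illustrated with a short sketch. All scores below are hypothetical, and `pearson_r` is an illustrative helper written for this example, not a function from any cited source:

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical scores of the same ten students on a school admission
# test and on the National Achievement Test (NAT).
admission = [78, 85, 62, 90, 71, 88, 66, 95, 74, 81]
nat = [75, 88, 65, 92, 70, 85, 70, 96, 72, 84]

r = pearson_r(admission, nat)
print(round(r, 2))  # a value near 1 suggests concurrent validity
```

A coefficient close to +1 would suggest that the admission test has concurrent validity with respect to the already-validated test; a value near 0 would suggest that it does not.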
 
INSTRUMENT RELIABILITY
 
Another important factor you need to consider when preparing or selecting instruments is their reliability. Reliability refers to the consistency of the measures of an instrument; it is an aspect involved in the accuracy of measurement. Instrument reliability is defined as the extent to which an instrument consistently measures what it is supposed to. A child's thermometer would be a very reliable measurement tool, while a personality test would have less reliability.
 
There are four types of reliability.
 
1. Test-Retest Reliability is the correlation between two successive measurements with the same
test. For example, you can give your test in the morning to your pilot sample and then again in the
afternoon. The two sets of data should be highly correlated if the test is reliable. The pilot sample
should theoretically answer the same way if nothing has changed.
2. Equivalent Forms Reliability is the successive administration of two parallel forms of the same
test. A good example is the SAT. There are two versions that measure Verbal and Math skills.
Two forms for measuring Math should be highly correlated and that would document reliability.
3. Split Half Reliability is when, for example, you take the SAT Math test and divide its items into two parts. If the scores on the first half of the items correlate highly with the scores on the second half, the test is reliable.
4. Internal Consistency Reliability is used when only one form of the test is available and you want to ensure that the items are homogeneous, or all measuring the same construct. To do this, you use statistical procedures like the KR-20 or Cronbach's Alpha.
  Cronbach's alpha measures the reliability with respect to each item and construct being examined by the instrument. The Kuder-Richardson formula tests reliability for instruments of a dichotomous nature, such as yes-or-no tests.
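As a sketch of how internal consistency could be computed from pilot data, the following implements the standard Cronbach's alpha formula; the respondent scores are hypothetical, and for dichotomous (0/1) items the same formula reduces to the KR-20:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a table of scores:
    one row per respondent, one column per item."""
    k = len(scores[0])                           # number of items
    items = list(zip(*scores))                   # columns = items
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical pilot sample: 5 respondents x 4 Likert items (scored 1-4).
pilot = [
    [4, 3, 4, 4],
    [3, 3, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 2, 1],
]

alpha = cronbach_alpha(pilot)
print(round(alpha, 2))  # -> 0.93
```

Values of about 0.7 and above are conventionally taken to indicate acceptable internal consistency.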
 
Data collection is an activity that allows researchers to obtain relevant information regarding the specified research questions or objectives. It is performed by utilizing the instruments which the researcher has developed or adopted for the study. Some of the quantitative research instruments are questionnaires, tests, interviews, and observations.
A questionnaire is a paper-based or electronic tool for collecting information about a particular research interest. It is a list of questions or indicators that the participants need to answer. In quantitative research, a questionnaire typically uses a scale like the Likert scale, which uses ratings to indicate the participants' level of agreement with a specific statement. Another approach used in quantitative research is the conversion of responses into numerical values.
Questionnaires can be structured, semi-structured, or unstructured. A structured questionnaire uses closed-ended questions or indicators, while an unstructured questionnaire allows the participants to respond to open-ended questions. Semi-structured questionnaires have characteristics of both structured and unstructured types. In quantitative research, the structured type is frequently used because it is easier to standardize, as well as to code and interpret objectively.
Advantages of Using Questionnaires for Data Collection
1. They can help collect data quickly from a large number of participants.
2. Using a questionnaire can encourage the participants to be open to the researcher, since their identity can be made anonymous.
3. Questionnaires offer flexibility because the respondents can answer them at their own convenient time.
Limitations in the Use of Questionnaires
1. The questions can be interpreted differently by the participants, and this scenario is beyond your control as a researcher. The only solution to this problem is to explain the content to the participants.
2. Problems regarding the response rate of the participants can be encountered. Some may not be able to complete or return the questionnaire by the set deadline.
3. Questionnaires may lack depth, as they do not allow further probing into the answers of the participants.
Guidelines in using Questionnaires for Data Collection
Barrot (2018) listed the following guidelines for using questionnaires for data collection:
1. Decide on the method of administering the questionnaire. Choose the face-to-face method if you need to capture the non-verbal cues spontaneously displayed by your participants, as well as their emotions and behavior; this method also includes administering questionnaires through video chat or conferencing. The online method, on the other hand, involves administering the questionnaire through web-based forms.
2. Draft your questionnaire.
3. Divide your questionnaire into two or three parts: the personal information section, the main section, and the open-ended questions section (if necessary). The personal information section asks for details about the participant's background which are relevant to the study; asking for the name is optional, as it is important to ensure the confidentiality of the data. This section aims to establish that you are surveying the right people. The main section lists the specific questions that are aligned with the specific research questions. The open-ended questions section asks for additional information that may not have been covered by the main section.
4. Align the indicators or questions contained in your questionnaire with your specific research questions.
5. Provide clear directions for answering the questionnaire.
6. Use routing if there is a need to skip some items in the questionnaire.
7. When several related questions need to be asked, begin with the
general questions first followed by the specific ones.
8. Do not make an overly lengthy questionnaire, as this may discourage the participants from completing it.
9. Make sure that the predetermined responses match the nature of the questions. These predetermined responses must be translated into numerical values to make them quantitative.
 If the content is about belief, the responses should be about agreement (strongly agree, agree, disagree, strongly disagree).
 If the questionnaire is about behavior and how it manifests, the responses should be about extent (great extent, moderate extent, less extent, not at all).
 If the questions are about frequency, the responses should be about frequency (always, frequently, sometimes, never).
 If the content is about quality, the responses should also denote quality (excellent, good, fair, poor).
10. Avoid using highly technical terms.
11. Avoid using negative statements or statements that use the word "not."
12. Avoid including leading and biased questions.
13. Avoid using double-barreled questions.
14. Avoid overly sensitive questions, because you may not get truthful answers.
15. Use a reader-friendly layout.
16. Do not split a question over two pages, to avoid unnecessary interruption in reading it.
17. Before actually administering the questionnaire, it may be useful to pilot-test it first.
18. Contact the participants before distributing the questionnaires, and give them instructions to follow.
19. Attach a cover letter when conducting the actual data collection. The letter should contain the purpose of the study, the instructions for completing the questionnaire, the guarantee of confidentiality, and the procedure for returning the questionnaire.
20. Follow up with participants who fail to complete their questionnaires by the set deadline.
21. Immediately encode the data once collected and archive them digitally.
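The translation of predetermined responses into numerical values described in guideline 9 can be sketched as a simple lookup table; the 4-to-1 coding below is an assumption chosen for illustration, not a prescribed standard:

```python
# Hypothetical numeric codings for the four response scales mentioned
# above (assumed here: 4 = most positive, 1 = least positive).
SCALES = {
    "agreement": {"strongly agree": 4, "agree": 3,
                  "disagree": 2, "strongly disagree": 1},
    "extent":    {"great extent": 4, "moderate extent": 3,
                  "less extent": 2, "not at all": 1},
    "frequency": {"always": 4, "frequently": 3,
                  "sometimes": 2, "never": 1},
    "quality":   {"excellent": 4, "good": 3, "fair": 2, "poor": 1},
}

def encode(scale, response):
    """Translate a predetermined response into its numerical value."""
    return SCALES[scale][response.strip().lower()]

answers = ["Agree", "Strongly Agree", "Disagree"]
codes = [encode("agreement", a) for a in answers]
print(codes)  # -> [3, 4, 2]
```

Once responses are encoded this way, they can be summarized and analyzed with the usual statistical tools.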
 
Data Collection Procedure
 Before
1. Develop your data collection instruments and materials.
2. Seek permission from the authorities and heads of the institutions or
communities where you will conduct your study.
3. Select and screen the population using appropriate sampling techniques.
4. Train the raters, observers, experimenters, assistants, and other research
personnel who may be involved in data gathering.
5. Obtain informed consent from the participants. An informed consent form is a
document that explains the objectives of the study and the extent of the
participant’s involvement in the research. It also ensures the confidentiality of
certain information about the participants and their responses.
6. Pilot-test the instruments to determine potential problems that may occur when
they are administered.
 During
1. Provide instructions to the participants and explain how the data will be
collected.
2. Administer the instruments and implement the intervention or treatment, if
applicable.
3. As much as possible, utilize triangulation in your method. Triangulation is a technique for validating data using two or more sources and methods.
 After
1. Immediately encode or transcribe and archive your data.
2. Safeguard the confidentiality of your data.
3. Later, examine and analyze your data using the appropriate statistical tools.
Treatment in Experimental and Quasi- Experimental Studies
          In experimental and quasi-experimental studies, a section that details the intervention or treatment used must be incorporated. In this section, you need to clearly describe and distinguish the procedures you used for both the treatment group and the control group. Barrot (2018) enumerated the steps you can take in describing the intervention procedure.
1. Write the introductory paragraph that contains background information relevant to the experiment. Establish the context in which the experiment has been conducted and state the duration of implementing the procedure for both the control and treatment groups. For instance, if your study has been performed in connection with a specific school subject, mention this in your methodology. You also need to indicate, for instance, that your data collection procedure will take place for one semester. Describe the key differences and similarities between the control and treatment groups. For example, if you are conducting a study on the effectiveness of a curriculum you originally developed, you may state that the control group uses the standard curriculum, while the treatment group uses the curriculum you have developed.
2. Extensively describe the procedure that you will use in the control group. If your study will use a pretest and posttest design, explain how you will administer the pretests, implement the intervention, and administer the posttests. A pretest-posttest design measures the behavior and skills of the participants before and after implementing an intervention.
3. Clearly explain the procedure that you will use in the treatment group.  Describe
how you will control and manipulate the variables to achieve the intended
outcome. For instance, if your study will use a pretest-posttest design, describe
how you will administer the pretests, undertake the intervention, and administer
the posttests for the treatment group.
4. You need to explain the basis for undertaking a particular step in your intervention. For instance, if you are conducting a study on the linguistic proficiency of students, would like to implement a pretest-posttest design, and plan to prohibit the participants from using a dictionary, you need to explain the reason behind this decision. For example, you may say that you would like to further challenge the vocabulary proficiency of the participants.
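The comparison of control and treatment groups under a pretest-posttest design can be sketched numerically; all scores below are hypothetical:

```python
from statistics import mean

# Hypothetical pretest and posttest scores (out of 50) for the control
# and treatment groups in a pretest-posttest design.
control_pre    = [22, 25, 19, 30, 27]
control_post   = [24, 26, 21, 31, 28]
treatment_pre  = [21, 24, 20, 29, 26]
treatment_post = [30, 33, 28, 38, 35]

def mean_gain(pre, post):
    """Average improvement from pretest to posttest."""
    return mean(b - a for a, b in zip(pre, post))

print(mean_gain(control_pre, control_post))      # -> 1.4 (no intervention)
print(mean_gain(treatment_pre, treatment_post))  # -> 8.8 (with intervention)
```

A markedly larger mean gain in the treatment group than in the control group is what the design looks for; whether the difference is statistically significant would then be checked with an appropriate statistical test, such as a t-test.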
 
Steps on How to Use Prototyping Methodology
         Prototyping is used for many reasons. The creator may develop one just to see
what the end product looks like, or they may need a complete model to test the user
experience.

          According to Volchko (n.d.), a prototype is an early sample, model, or release of a product built to test a concept or process or to act as a thing to be replicated or learned from. A prototype is a sample implementation of the system; it provides the limited, main functional capabilities of the proposed system.

        The need for the prototype will guide the developer through each of the five stages of prototyping suggested by the Pacific Research Laboratories, Inc. (n.d.).

1. Define the vision


        At this point in the prototyping process, the developer needs to come up with an
overarching vision for their product. This phase may include sketches, but it can also
work as a verbal description as long as a few key questions are answered:

 What problem does it solve?
 Who is the key market?
 What other options are available?
 What’s the anticipated price point?
 What are the material and labor needs for creation?
Answering these questions gives critical clues as to whether the design will be useful
and if its demand will be able to justify the cost of creation. This way, the creator can
determine if prototyping is even necessary, or if they should reconsider their vision.

2. Focus on key features


       One common mistake of creators is trying to make their prototype identical to the
end product. While this is useful when the prototype is for demonstrating value to
investors, it’s not necessary for most other needs. The creator should single out one or
two key features of their product to focus on in their prototype. Less critical issues, like
the overall cosmetic look of the product or optional features, are not necessary unless
they are the sole reason for the prototyping.

3. Produce
       The actual building of the prototype is the lengthiest part of the process as the
creator has to consider all the various options involved. Some standard prototyping
methods to consider include:

 3D Printing: This is a great option when the creator has a clear vision and wants a fast way to test its efficacy and function. However, it may not be ideal for situations where a highly detailed, near-perfect model is required.
 CNC Machining: This is a bit more exacting, meaning it’s great when the creator needs a highly detailed model. It’s a process that eliminates potential human error, as much of the creation is automated and machine-controlled.
 Powder bed fusion: This is a process specifically designed for metal or aluminum materials with high melting points. If the prototype must be in these materials, it’s the ideal option.
 Mold making and casting: This process allows a lot of flexibility and is ideal for building custom parts and designs without the need for computer input.
        In some cases, a few different methods are employed in developing the prototype
for the best result. When time is of the essence, rapid prototyping may be the way to go.
Conventional options are more suited to detailed designs. The choice will generally
depend on materials used, timeline, and budget.

4. Test and refine


        After rolling out the initial prototype, the creator will want to evaluate it, consider
update options, and seek out ways to improve the overall process. This may require a
few simple tweaks, or it could involve scrapping the whole initial design and starting
over from step one. In any case, testing and refining should occur multiple times to
ensure the prototype is ready to be unveiled to stakeholders.

5. Present
      The presentation stage will differ based on the purpose of the prototype. It may
include creating multiple models for testing among consumers, sending the design in for
patenting, or showing it to potential investors. The presentation stage will both help
gauge interest and guide manufacturing methods, whether the need is for a simple short
run or if more mass production is necessary.
