
CHAPTER 2
PROCESS IN DEVELOPING AND VALIDATING TOOLS FOR

Leticia N. Aquino, PhD
2S ECED 04 - BASIC RESEARCH


Focus Question
How can we develop and validate a research tool/IM?
At the end of the discussion, you should be able to answer
these specific questions:
1. What is data?
2. What are the kinds of data?
3. What are the tools/instruments to collect data?
4. How do we develop the research tools/instruments?
5. How do we validate the tools/instruments?
What Are Data?
Data refers to the kinds of information that your team will obtain
from the subjects of your research. There are different kinds of data
that researchers intend to gather as part of a research study.

Data would include:


• demographic information, such as age, gender, marital status,
income, religion, education, occupation, and many others;
• scores from a standardized or researcher-made test;
• responses to the researcher’s questions in an oral interview or written replies to a questionnaire;
• essays written by students;
• grade point averages obtained from school records;
• performance logs kept by coaches; and,
• anecdotal records maintained by teachers or counselors.
One important decision that your team should make is to determine the
kind/s of data you plan to collect. The tool used to collect, measure, and
analyze data relevant to the subjects of your research is called an
instrument.

Research Tools/instruments include the following:


1. TESTS. Achievement, or ability, tests measure an individual’s knowledge or
skill in a given area or subject. They are mostly used in schools to measure
learning or the effectiveness of instruction.

2. QUESTIONNAIRES. Questionnaires are the most frequently used data collection method in educational and evaluation research. Questionnaires help gather information on knowledge, attitudes, opinions, behaviors, facts, and other information. In a questionnaire, the subjects respond to the questions by writing or, more commonly, by marking an answer sheet.
3. RATING SCALES. A rating is a measured judgment of some sort. When we rate
people, we make a judgment about their behavior or something they have
produced. Thus, both behaviors (such as how well a person gives an oral report)
and products (such as a written copy of a report) of individuals can be rated.

Attitude Scales. An attitude scale consists of a set of statements to which an individual responds. Attitude scales are often similar to rating scales in form, with words and numbers placed on a continuum. Subjects circle the word or number that best represents how they feel about the topics included in the questions or statements in the scale. A commonly used attitude scale in educational research is the Likert scale, named after Rensis Likert, who designed it (a simple scoring sketch follows this list of instruments).

4. CHECKLISTS. A self-checklist is a list of several characteristics or activities presented to the subjects of a study. The individuals are asked to study the list and then to place a mark opposite the characteristics they possess or the activities in which they have engaged for a particular length of time. Self-checklists are often used when researchers want students to diagnose or to appraise their own performance.
5. INTERVIEW GUIDES/PROTOCOLS. An interview guide or protocol involves the same kind of instrument as a questionnaire – a set of questions to be answered by the subjects of the study. Interviews are conducted orally, either in person or over the phone, and the answers to the questions are recorded by the researcher (or a trained assistant).
6. OBSERVATION FORMS. Paper-and-pencil observation forms (sometimes called
observation schedules) are fairly easy to construct. This form requires the
observer not only to record certain behaviors but also to evaluate some as they
occur.
7. TALLY SHEETS. A tally sheet is a device often used by researchers to record the frequency of student behaviors, activities, or remarks.
8. TIME-AND-MOTION LOGS. A time-and-motion study is the observation and
detailed recording over a given period of time of the activities of one or more
individuals (for example, during a 15-minute laboratory demonstration).
Observers try to record everything an individual does as objectively as possible
and at brief, regular intervals (such as every 3 minutes, with a 1-minute break
interspersed between intervals).
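
To make the coding of attitude-scale responses concrete, here is a minimal Python sketch (not part of the original slides): it converts verbal Likert responses into numbers and sums them into a scale score. The 5-point labels and the reverse-scoring of a negatively worded item are illustrative assumptions, not a prescribed procedure.

# Illustrative sketch: coding Likert-type attitude responses numerically.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "not sure": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score_response(label, reverse=False):
    """Convert a verbal Likert response to a number from 1 to 5."""
    value = LIKERT_CODES[label.lower()]
    # Negatively worded items are reverse-scored so that a higher number
    # always means a more favorable attitude.
    return 6 - value if reverse else value

# One respondent's answers to a hypothetical three-item attitude scale.
answers = [
    ("agree", False),           # positively worded item
    ("strongly agree", False),  # positively worded item
    ("disagree", True),         # negatively worded item, reverse-scored
]

item_scores = [score_response(label, rev) for label, rev in answers]
print(item_scores, sum(item_scores))  # [4, 5, 4] 13

Treating the total this way assumes the items measure the same attitude, which is exactly what the validity and reliability checks discussed later are meant to verify.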
Where did the instrument come from?

Fig. 1. Instrument development and validation process.
There are two basic ways for a
researcher to acquire an instrument: (1)
find and administer a previously existing
instrument or (2) administer an
instrument the researcher personally
developed.

The instrument development and evaluation process used in this study was designed by adapting the instrument development processes proposed by DeVellis (2017), Nasab et al. (2015), and Miller et al. (2013). The process involves ten steps that are grouped into two phases, namely development and validation.
Steps in developing a research tool:
1. Background. This step involves developing a thorough understanding of the research. In this initial step, a literature review and identification of key components/domains are carried out, and the purpose, objectives, research questions, and hypotheses of the proposed research are examined.
2. Questionnaire Conceptualization (development of the draft). The next step is to generate statements/questions for the questionnaire. In this step, content (from the literature/theoretical framework) is transformed into statements/questions. In addition, a link between the objectives of the study and their translation into content is established. For example, the researcher must indicate what the questionnaire is measuring, that is, knowledge, attitudes, perceptions, opinions, recall of facts, behavior change, etc. The major variables (independent) should also be identified at this step.
3. Format and Data Analysis
In Step 3, the focus is on writing statements/questions; selecting appropriate scales of measurement; deciding on the questionnaire layout, format, question ordering, font size, and front and back cover; and planning the proposed data analysis. Scales are devices used to quantify a subject's response on a particular variable. Understanding the relationship between the level of measurement and the appropriateness of data analysis is important. For example, if ANOVA (analysis of variance) is one mode of data analysis, the independent variable must be measured on a nominal scale with two or more levels (yes, no, not sure), and the dependent variable must be measured on an interval/ratio scale (strongly agree to strongly disagree).
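
As a concrete illustration of this point about levels of measurement, the following sketch (assuming Python with SciPy; the group labels and scores are invented) runs a one-way ANOVA in which the independent variable is nominal with three levels and the dependent variable is an interval-level attitude score.

# Illustrative sketch: one-way ANOVA with a three-level nominal
# independent variable and an interval-level dependent variable.
from scipy import stats

# Hypothetical attitude scores grouped by response to a nominal item.
yes_group = [18, 21, 19, 22, 20]
no_group = [15, 14, 16, 13, 17]
not_sure_group = [17, 18, 16, 19, 18]

f_statistic, p_value = stats.f_oneway(yes_group, no_group, not_sure_group)
print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")

A small p-value would indicate that mean scores differ across the three groups; had the dependent variable been merely nominal, ANOVA would not have been an appropriate analysis.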
4. Establishing Validity
Validity is the most important idea to consider when preparing or selecting an instrument for use, because researchers want the information they obtain through the use of an
instrument to serve their purposes. For example, to find out what teachers in a public
secondary school think about a recent policy passed by the school board, researchers
need both an instrument to record the data and some sort of assurance that the
information obtained will enable them to draw correct conclusions about teacher
opinions.
The drawing of correct conclusions based on the data obtained
from an assessment is what validity is all about. Thus, validity refers to
the degree to which evidence supports any inferences a researcher
makes based on the data he or she collects using a particular
instrument. It is the inferences about the specific uses of an
instrument that are validated, not the instrument itself. And, these
inferences should be appropriate, meaningful, correct, and useful.
TYPES OF VALIDITY
1. FACE VALIDITY is a process that requires selected respondents to evaluate the instrument based on the question interface, sentence structure, grammar, and other issues in the instrument that are deemed necessary. Besides checking that the items are passable measures of the conceptual variables, it also helps the researcher detect early on any questions that may be misunderstood or misinterpreted (Stangor, 2015; Zainudin Awang, 2015).
2. CONTENT VALIDITY. Content validation is the process of determining whether the variables adequately cover the full measured domain, and this is done with the help of experts (Clark & Creswell, 2015; Stangor, 2015). In this process, the domain needs to be clearly defined in order to facilitate the evaluation process (DeVellis, 2017). There is no fixed cut-off for the number of experts; however, Zamanzadeh et al. (2015) recommend 5-10 content experts so as to have enough control over chance agreement. Content validity concerns item sampling adequacy, that is, whether a specific set of items reflects the domain content (a small computational sketch follows this list).
3. CONSTRUCT VALIDITY is an investigation performed to ensure that the instrument correctly measures what it is intended to measure.
4. CRITERION VALIDITY occurs when the instrument has an empirical association with some criterion or standard (DeVellis, 2017; Nasab et al., 2015). There are two types of criterion validity: predictive and concurrent validity.
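
The slides do not prescribe a statistic for summarizing the experts' judgments, but one widely used option in content validation is the item-level content validity index (I-CVI). The sketch below (Python; the 4-point relevance scale, the number of experts, and the ratings are illustrative assumptions) shows how it can be computed.

# Illustrative sketch: item-level content validity index (I-CVI).
# Assumes each expert rates an item's relevance on a 4-point scale
# (1 = not relevant ... 4 = highly relevant).

def item_cvi(ratings):
    """Proportion of experts who rated the item 3 or 4 (relevant)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical ratings from six content experts for three draft items.
expert_ratings = {
    "item_1": [4, 4, 3, 4, 3, 4],
    "item_2": [2, 3, 4, 3, 2, 3],
    "item_3": [4, 4, 4, 4, 4, 3],
}

for item, ratings in expert_ratings.items():
    print(item, round(item_cvi(ratings), 2))

Items with a low I-CVI are candidates for revision or deletion, much like the omitted items shown in Table 3 below.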

Questions to ask about the validity of the tool:

1. Is the questionnaire valid? In other words, is the questionnaire measuring what it is intended to measure?
2. Does it represent the content?
3. Is it appropriate for the sample/population?
4. Is the questionnaire comprehensive enough to collect all the information needed to address the purpose and goals of the study?
5. Does the instrument look like a questionnaire?
Figure 1. Expert validation: example of questions.

Unclear item
- Initial item: The school leader includes all school members in the decision-making process.
- Experts’ comments: It is unclear who school members are. (E4-P) It is important to highlight that the school leader includes in the decision-making process those members who are competent to be part of it. (E5-U)
- Final item: The school leader includes all school members (experts, teachers, students, parents, administrative staff) in any decision-making process that is relevant for them.

Item that is not appropriate for the school context
- Initial item: The school leader organizes institutional structures that facilitate the decision-making process (e.g. co-management, leadership team, student projects).
- Experts’ comments: Co-management is not recognized in Croatian law. (E4-P) This is not applicable to the school context. (E5-U)
- Final item: The school leader creates teams that take responsibility for some school processes (e.g. team for self-evaluation, team for civic education, project teams).
Table 3. Omitted items.

Items
- A principal is willing to take responsibility for a decision a teacher makes according to his/her own initiative.
- A principal enables teachers to take the initiative in the decision-making process (new ideas, innovations and changes).
- A principal enables students to take the initiative in the decision-making process (new ideas, innovations and changes).
Establishing Reliability
Reliability refers to the consistency of the scores obtained – how consistent they are for
each individual from one administration of an instrument to another and from one set of
items to another. In other words, reliability refers to the repeatability of a measure: a reliable measurement yields consistent results every time it is used (Miller et al., 2013; Zainudin Awang, 2015; Nasab et al., 2015).

Consider a test designed to measure typing ability. If the test is reliable, a student who
gets a high score the first time he takes the test is expected to get a high score the next
time he takes the test. The scores may not be identical, but they should be close. The
scores obtained from an instrument can be quite reliable but not valid. For example, a
researcher gave a group of Grade 9 students two forms of a test designed to measure
their knowledge of the Philippine Constitution and found their scores to be consistent:
those who scored high on form A also scored high on form B; those who scored low on
A scored low on B; and so on. Suffice it to say, the scores were reliable. But suppose the researcher were to use the same test scores to predict the success of these students in their mathematics classes – how would you react?
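
As a rough illustration of the consistency described in the Grade 9 example, the sketch below (Python with NumPy; all scores are invented) correlates scores from two forms of the same test and also computes Cronbach's alpha, a common index of consistency "from one set of items to another". Both are sketches of familiar reliability statistics, not procedures taken from the slides.

import numpy as np

# Parallel-forms (or test-retest) reliability: scores from form A and
# form B of the same instrument should correlate highly.
form_a = np.array([85, 72, 90, 60, 78, 95, 66, 81])
form_b = np.array([83, 75, 88, 62, 80, 93, 68, 79])
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"parallel-forms reliability r = {r:.2f}")

# Internal consistency: Cronbach's alpha for a respondents-by-items matrix.
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

item_matrix = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
print(f"Cronbach's alpha = {cronbach_alpha(item_matrix):.2f}")

High values of either index indicate consistent scores, but, as the example stresses, they say nothing about whether those scores are valid for a different purpose such as predicting success in mathematics.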
THANK YOU
