
St. Augustine's School of Iba Inc.
Iba, Zambales

MODULE 2
SUBJECT: PRACTICAL RESEARCH
RESEARCH DESIGN | SAMPLING | VALIDITY AND RELIABILITY | DATA COLLECTION
Semester of A.Y. 2020-2021
Prepared by: Ms. Mary Antonnette Lao, LPT

Introduction
The research design refers to the overall plan and scheme for conducting the study. The
researcher may utilize, for example, a descriptive, historical, or experimental design.
Sampling is the process of selecting the respondents of the study at minimum cost, such that the resulting observations will be representative of the entire population. The
ultimate purpose of all the sampling designs is to imitate the behavior of the entire population
based on a few observations only. By studying the sample, you may fairly generalize your results
back to the population from which they were chosen.
Instruments are the data-gathering devices that will be used in the study. An instrument is a testing device for measuring a given phenomenon, such as a paper-and-pencil test, questionnaire, interview, research tool, or set of guidelines for observation. There are three characteristics of an instrument that we need to consider: usability, validity, and reliability.
Quantitative analysis is the technique utilized for analyzing the data gathered. Analysis of data
may be statistical or deterministic.

Intended Learning Outcomes

A. Choose an appropriate research design
B. Construct an instrument and establish its validity and reliability
C. Present a written methodology
D. Implement research design principles to produce research work
Discussion
Descriptive Research Design
A descriptive design is used to gather information on current situations and conditions. It helps provide answers to the questions of who, what, when, where, and how of a particular research study. Descriptive
research studies provide accurate data after subjecting them to a rigorous procedure and using
large amounts of data from a large number of samples. This design leads to logical conclusions
and pertinent recommendations. However, the descriptive research design is dependent on a
high degree of data collection instrumentation.
According to Polit and Hungler (1999), the following research designs are classified as descriptive designs:
 Survey
The Survey research design is usually used in securing opinions and trends through the
use of questionnaires and interviews. Surveys gather data from institutions, government, and businesses to support decision-making on changing strategies, improving practices, and analyzing views on product choices or market research.
 Correlation Research
is used in studies that aim to determine the existence of a relationship between two or more variables and the degree of that relationship. Examples of variable pairs that can be correlated are
mental ability and grade in math; gender and math performance; advertising and
sales; income and expenses.
 Evaluation
is conducted to elicit useful feedback from a variety of respondents in various fields to aid in decision-making or policy formulation.
Formative Evaluation is used to determine the quality of implementation of a project, the efficiency and effectiveness of a program, and the assessment of organizational processes such as procedures, policies, guidelines, human resource development, and the like.
Summative Evaluation is done after the implementation of the program. It examines
the outcomes, products, or effects of the program.
Examples of Formative Evaluation:
a. Needs Assessment – Evaluates the need for a program or project. How great is
the need for a remedial program in mathematics? Who needs the program? When
can the program start? Where or in what programs should it be implemented? What
are the materials needed?
b. Process Evaluation – Evaluates the process of implementation of the program. For
example, a study on the regulations implemented by the Inter-Agency Task Force
(IATF). How will the efficiency be assessed? Is it working well? What suggestions may
be implemented to improve the current program? When will the recommendations be taken into consideration and implemented?
c. Implementation Evaluation - Evaluates the efficiency or effectiveness of a project
or program. How effective are the IATF protocols? How many establishments strictly
implement the recommendations?
d. Program Monitoring – Evaluates the performance and implementation of an
unfinished program. The evaluation is done before the completion of the program.
It helps improve implementation and achieve better results.
Examples of Summative Evaluation:
a. Secondary Data Analysis – you may examine existing data for analysis.
b. Impact Evaluation – used to evaluate the overall effect of the program in its entirety.
c. Outcome Evaluation – to determine if the program has caused useful effects based
on the target outcomes.
d. Cost-Effectiveness Evaluation – compares the relative costs of different courses of action with their outcomes or results.

Exploratory Research Design
is often used to establish an initial understanding of, and background information about, a topic of interest, often when very few or no earlier related studies relevant to the study can be found.
Causal Research Design
is used to measure the impact that an independent variable (the cause) has on a dependent variable (the one being affected), or to explain why certain results are obtained. A valid conclusion may be derived when an association between the independent and dependent variables is established. It can also be used to identify the extent and nature of cause-and-effect relationships.

Describing Sample Size and Sampling Procedures

Sample Size Determination
A sample (n) is a selection of respondents for a study to represent the total population (N). Deciding on the sample size for a survey is important. A sample that is too large may mean a waste of resources, both human and financial. On the other hand, a sample that is too small decreases the usefulness of the results.

The formula for determining the sample size (Slovin's formula):

n = N / (1 + Ne²)    where: n = sample size; N = total population; e = margin of error
 The population (N) consists of the members of a group that a researcher is interested in studying; these members usually have common or similar characteristics.
 The margin of error (e) is the allowable error margin of the research. A confidence level of 98% gives a margin of error of 2%.
For example: A research group wants to conduct a survey. If the population of the Senior High School is 1500, find the sample size if the margin of error is 2%.

n = 1500 / (1 + (1500)(0.02)²)
n = 1500 / (1 + (1500)(0.0004))
n = 1500 / (1 + 0.6)
n = 1500 / 1.6
n = 937.5, or 938
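
To check this computation, the formula can be expressed in a few lines of Python; the function name below is illustrative, and any calculator or spreadsheet gives the same result.

import math

def slovin_sample_size(population, margin_of_error):
    # n = N / (1 + N * e^2), rounded up to the next whole respondent
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Senior High School example: N = 1500, e = 0.02
print(slovin_sample_size(1500, 0.02))  # prints 938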

Probability Sampling Procedures


The most important characteristic of the probability sampling procedure is the random
selection of the samples. Specifically, each sample (n) or element of the population (N) has an
equal chance of selection under a given sampling technique. Four probability sampling procedures are described below; a Python sketch of all four follows the list.
 Simple Random Sampling – This is characterized by the idea that the chance of selection is
the same for every member of the population.
 Systematic Random Sampling – follows specific steps and procedures in doing the random
selection of samples. It requires a list of the elements, and every nth element in the list is drawn for inclusion in the sample.
 Stratified Random Sampling – the population is first divided into two or more mutually
exclusive categories based on your variables of interest in the study. The population is
organized into homogenous subsets before drawing the samples. With stratified random
sampling, the population is divided into subpopulations called strata. If your variable of
interest is economic status based on the family combined income level, you can divide the
population into strata of different income levels (low, average, high income with the specific
numerical value of annual family income per level). When these have been determined,
you may draw a sample from each stratum with a separate draw from each of the different
strata. The sample size within the strata can now be determined.
 Cluster Sampling – is used when the target respondents in a study are spread across a
geographical location. In this method, the population is divided into groups called clusters, which are internally heterogeneous and mutually exclusive.
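
The sketch below illustrates the four procedures in Python using made-up numbers (a population of 1,500 members, a sample of about 100, and three income strata); the names and cutoffs are illustrative only.

import random

population = list(range(1, 1501))   # hypothetical population of 1,500 members

# Simple random sampling: every member has the same chance of selection.
simple = random.sample(population, k=100)

# Systematic random sampling: a random start, then every nth element.
n = len(population) // 100          # n = 15
start = random.randrange(n)
systematic = population[start::n]   # yields 100 members

# Stratified random sampling: a separate draw from each homogeneous stratum.
strata = {
    "low income": population[:500],
    "average income": population[500:1000],
    "high income": population[1000:],
}
stratified = [m for group in strata.values() for m in random.sample(group, k=33)]

# Cluster sampling: randomly select whole clusters, then take all their members.
clusters = [population[i:i + 100] for i in range(0, 1500, 100)]  # 15 clusters of 100
chosen_clusters = random.sample(clusters, k=2)
cluster_sample = [m for c in chosen_clusters for m in c]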
Non-Probability Sampling
 Convenience Sampling – sometimes called availability sampling. An example would be
conducting a survey or interview on a captive audience inside a mall, park, or school to
obtain a quick response for public opinion on an issue.
 Snowball Sampling – the researcher identifies a key informant on the research topic of interest and then asks that respondent to refer or identify another respondent who can participate in the study; one person refers the researcher to another respondent, and so on.
 Purposive Sampling – also called subjective sampling, employs a procedure in which samples
are chosen for a special purpose. It may involve members of a limited group.
 Quota Sampling – is gathering a representative sample from a group based on certain characteristics of the population chosen by the researcher. Usually, the population is divided into specific groups. For example, if both genders, males and females, are to be represented equally in a sample of 60 representatives, you would take 30 males and 30 females, 30 from each group (see the sketch below).

Designing the Questionnaire and Establishing Validity and Reliability


Designing the Questionnaire
A questionnaire is an instrument for collecting data. It consists of a series of questions that respondents answer to provide data for a research study.

STEP 1 – BACKGROUND
Main Variables:
1. Dependent Variables – the variables that you are trying to explain.
2. Independent Variables – variables that cause, influence, or explain a change in the dependent variable.
3. Control Variables – used to test for a possible spurious relationship between the dependent and independent variables.
4. Continuous Variables – variables that can take any value within a range. Examples: time, weight, length, or money.
5. Discrete Variables – variables that take only whole-number values.

STEP 2 – QUESTIONNAIRE CONCEPTUALIZATION


 Yes or No
 Likert Scale

Frequency of Occurrence: Very Frequently / Frequently / Occasionally / Rarely / Very Rarely
Frequency of Use: Always / Often / Sometimes / Rarely / Never
Degree of Importance: Very Important / Important / Moderately Important / Of Little Importance / Not Important
Quality: Strongly Agree / Agree / Undecided / Disagree / Strongly Disagree
Level of Satisfaction: Very Satisfied / Satisfied / Undecided / Unsatisfied / Very Unsatisfied
Agreement: Strongly Agree / Agree / Undecided / Disagree / Strongly Disagree

 Generate the items or questions of the questionnaire based on the purpose and objectives
of the study.
o Questions should be clear, concise, and simple, using a minimum number of words. Avoid a lengthy and confusing layout.
o Classify questions under each statement based on your problem statement.
o Questions should be consistent with the needs of the study.
o Avoid highly debatable questions.
 Choose the type of questions to use in developing the statements. The types of questions may be one of the following:
o Dichotomous Question
o Open-ended question
o Closed Questions / Multiple Choice
o Rank-order scale questions
o Rating scale question

STEP 3 – ESTABLISHING THE VALIDITY OF THE QUESTIONNAIRE


Ways to assess the validity of a set of measurements:
 Face Validity – a subjective assessment of whether the questionnaire appears to measure the construct or variable that the research study is supposed to measure.
 Content Validity – is most often measured by experts or people who are familiar with the
construct being measured. They provide feedback on how well each question measures
the variable or construct under study.
 Criterion-related validity – measures the relationship between a measure and an outcome.
o Concurrent Validity – how well the results of an evaluation or assessment correlate
with other assessments measuring the same variables or constructs.
o Predictive validity – can predict a relationship between the construct being
measured and future behavior.
 Construct Validity – concerned with the extent to which a measure is related to other
measures as specified in a theory or previous research.

STEP 4 – Establish the Reliability of the Questionnaire


According to Norland (1990), reliability indicates the accuracy or precision of the measuring instrument.

Ways to assess the reliability of a questionnaire:


 Test-retest reliability – the simplest method of assessing reliability. The same test or questionnaire is administered twice, and the correlation between the two sets of scores is computed (see the sketch after this list).
 Split-half method – a single test is divided into two halves (for example, odd- and even-numbered items), and the correlation between the two sets of scores is calculated. A related approach, the parallel or equivalent-forms method, administers two different tests covering the same topics.
 Internal consistency – used in assessing the reliability of questions measured on an interval or ratio scale. The reliability estimate is based on a single form of the test administered on a single occasion. One popular measure of internal consistency is Cronbach's alpha, which can be computed manually or electronically with software such as the Statistical Package for the Social Sciences (SPSS). Cronbach's alpha can range from 0 (poor reliability) to 1 (perfect reliability); anything above .70 is considered sufficiently reliable.
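
As a rough illustration, both computations can be done in plain Python (version 3.10 or later for statistics.correlation); the scores below are made-up values, not data from an actual study.

from statistics import correlation, pvariance  # correlation requires Python 3.10+

# Test-retest: correlate two administrations of the same questionnaire.
first_run  = [12, 15, 9, 18, 14, 11, 16, 13, 10, 17]
second_run = [13, 14, 10, 17, 15, 11, 15, 12, 11, 16]
print(round(correlation(first_run, second_run), 2))   # close to 1 = highly reliable

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
def cronbach_alpha(scores):
    # scores: one list of item scores per respondent
    k = len(scores[0])
    item_vars = sum(pvariance(col) for col in zip(*scores))
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

answers = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(answers), 2))              # 0.92, above the .70 threshold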

STEP 5 – Pilot testing the Questionnaire


After designing the questionnaire, you may ask 10-15 people from your target group to pre-test the questionnaire. Design or provide a space where the testers can leave comments or remarks, such as: "Delete the statement"; "I don't understand the statement"; "Revise the statement"; "The statement is good"; "The question is too long, revise"; and so on.

STEP 6 – Revise the Questionnaire


After identifying the problem areas in your questionnaire, revise the instrument as needed, based on the feedback provided during the pre-testing or pilot testing. The best questionnaire is one that has been edited and refined towards producing clear questions arranged logically and in sequential order. The questionnaire SHOULD MATCH YOUR OBJECTIVE.



Planning Data Collection Procedures


Data collection refers to the process of gathering information. The data that you collect should be able to answer the questions you posed in your Statement of the Problem. The data are collected, recorded, organized, translated into measurements and scales, and entered into a computer database for statistical computation using an appropriate software package such as Excel, SPSS, or SAS.

Types of Quantitative Data Collection Procedures


A. Observation – used in situations where the respondents cannot answer the researcher's questions directly. As a researcher, you have to prepare a checklist using appropriate rating scales that may categorize the behavior, attitude, or attribute that you are observing to answer the questions posed in your study.
B. Survey
a. Sample Survey – collects data from a sample of the population to estimate the
attributes or characteristics of the population.
b. Administrative data – data gathered from day-to-day operations. This kind of data is now supported by various ICT tools and software, making it easy for organizations, especially government agencies, schools, industries, and NGOs, to update their records efficiently and effectively and to set up their own Management Information System (MIS).
c. Census – data is collected from the entire population. It is an official count or survey of a population with details on demographic, economic, and social data.
d. Tracer Studies – In school settings, tracer studies are used by educational
institutions to follow up on their graduates. The survey is usually sent to a random sample one or two years after graduation.
C. Quantitative Interview – data from quantitative interviews can be analyzed by assigning numerical values to the responses of the participants (a sketch appears at the end of this section). The numeric responses may be entered into a data analysis computer program where you can run various statistical measures.
D. Questionnaire
Advantage of standardized usability questionnaire:
a. Validity – how well the questionnaire measures what is intended to be measured.
b. Reliability – how consistent responses are to the questions.
c. Sensitivity – how well the questionnaire can differentiate among respondents, even at a fraction of the sample size.
d. Objectivity – experts are requested to verify the statement of other practitioners
in the same field.
e. Quantification – the standardized questionnaire has undergone statistical analysis.
f. Norms – the standardized questionnaire has normalized references and databases which allow one to convert raw scores to percentile ranks.
The following discussion will guide you in formulating good questions in a questionnaire:
1. Avoid leading questions.
2. Be specific with what you like to measure.
3. Avoid words that the respondents might not be familiar with.
4. Multiple choice categories should be mutually exclusive to elicit clear choices.
5. Avoid personal questions, which may intrude on the privacy of the respondents, such as questions about income, family life, religious beliefs, or political affiliation.
6. Make your questions short and easy to answer.
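
As mentioned under Quantitative Interview above, responses are coded numerically before analysis. Below is a minimal sketch with made-up answers on the five-point agreement scale from Step 2.

from statistics import mean

# Map each Likert label to a numeric code (5 = Strongly Agree ... 1 = Strongly Disagree).
LIKERT = {"Strongly Agree": 5, "Agree": 4, "Undecided": 3,
          "Disagree": 2, "Strongly Disagree": 1}

responses = ["Agree", "Strongly Agree", "Undecided", "Agree", "Disagree"]
coded = [LIKERT[r] for r in responses]

print(coded)        # [4, 5, 3, 4, 2]
print(mean(coded))  # 3.6 – the average level of agreement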