
Research Methodology

Lesson 1. Essential Elements of Research Methodology and Research Design


Research methodology refers to the specific procedures or techniques used to identify, select, process, and analyze
information about a topic. In a research paper, the methodology section allows the reader to critically evaluate
a study's overall validity and reliability.

Learning Outcomes:
At the end of the lesson, students are expected to:
1. describe the essential elements of the research methodology; and
2. select the best research design for the thesis.

Input and Presentation Phase


How does the researcher answer the questions stated in Chapter 1?
Elements of Research Methodology
1. Research Design.
An important aspect of research methodology, the research design describes the research mode (whether the
study is quantitative or qualitative, and whether the researcher will use a specific research type,
e.g., descriptive, survey, historical, case study, or experimental). Research design is the framework of
research methods and techniques chosen by a researcher. The design allows researchers to hone in
on research methods that are suitable for the subject matter and to set their studies up for success.
Nieswiadomy (2004) categorized two major research designs, namely the quantitative and the
qualitative.

Quantitative Research vs. Qualitative Research

Quantitative Research:
- Aims to characterize trends and patterns
- Usually starts with a theory or a hypothesis about the relationship between two or more variables
- Uses structured research instruments like questionnaires or schedules
- Uses large sample sizes that are representative of the population
- Research of this kind can be replicated
- Used to gain a greater understanding of group similarities
- Uses structured processes
- Methods include census, survey, experiments, and secondary analysis

Qualitative Research:
- Involves processes, feelings, and motives: the why's and the how's (data are in-depth and holistic)
- Usually concerned with generating a hypothesis from data rather than testing a hypothesis
- Uses either unstructured or semi-structured instruments
- Uses small sample sizes chosen purposively
- Validity should be high
- Used to gain a greater understanding of individual differences in terms of feelings, motives, and experiences
- Uses more flexible processes
- Methods include field research, case study, and secondary analysis
Quantitative Research

Quantitative researchers gather empirical evidence, that is, evidence rooted in objective reality and gathered
directly or indirectly through the senses. Usually, the information gathered in such a study is quantitative, i.e.,
numeric information that results from some type of formal measurement and is analysed with statistical
procedures. In quantitative research, the researcher is concerned with the use of numbers and statistical
analyses. This is ideal for the traditional research approach, which must contend with the problems of
measurement. To study a phenomenon, quantitative researchers attempt to measure it, that is, to attach numeric
values to it.

Experimental Designs:
- True experimental designs: pretest-posttest control group design, posttest-only control group design, Solomon four-group design
- Quasi-experimental designs: non-equivalent control group design, time series design
- Pre-experimental designs: one-shot case study, one-group pretest-posttest design

Non-experimental Designs:
- Action studies
- Comparative studies
- Correlational studies
- Developmental studies
- Evaluation studies
- Meta-analysis studies
- Methodological studies
- Needs assessment studies
- Secondary analysis studies
- Survey studies

Experimental Design
Experimental research is concerned primarily with cause-and-effect relationships: all
experimental studies involve the manipulation or control of the independent variables (causes) and the measurement
of the dependent variables (effects). This design utilizes the principle of research known as the method of
difference. This means that the effect of a single variable applied to the situation can be assessed, and the
difference likewise determined (Mill, as cited in Sevilla, 2003).
Threats to Internal Validity
The controlled or experimental design enables the investigator to control for threats to internal and external
validity. Threats to internal validity compromise our confidence in saying that a relationship exists between the
independent and dependent variables.
1. Selection bias. Selection refers to how participants are selected for the various groups in the study. Selection bias
results when the subjects or respondents of the study are not randomly selected. Are the groups
equivalent at the beginning of the study? If the subjects were selected by random sampling and random
assignment, all had an equal chance of being in the treatment or comparison groups, and the groups are
equivalent; in that case, selection is not a threat. Were subjects self-selected into experimental and
comparison groups? If so, this could affect the dependent variable.
2. Maturation. It happens when the experiment is conducted over a long period of time during which
most of the subjects undergo physical, emotional, and/or psychological changes. Maturation is to be
avoided if such changes are not desired. Were changes in the dependent variable due to normal
developmental processes operating within the subject as a function of time? Maturation is a threat to the one-
group design but not to the two-group design, assuming that participants in both groups
change ("mature") at the same rate.
3. History. It refers to a threat to internal validity which happens when an unusual occurrence during the
conduct of the study affects the results of the experiment. Did some unanticipated events occur while
the experiment was in progress, and did these events affect the dependent variable? History is a threat
for the one-group design. In the one-group pretest-posttest design, the effect of the treatment is the
difference between the pretest and posttest scores; this difference may be due to the treatment or to history.
4. Instrumentation change. This threat occurs when the instrument used in gathering the data is changed or
replaced during the conduct of the study. The same instrument should be used for all the respondents or subjects.
5. Mortality. This is a threat to validity in which one or more subjects die, drop out, or transfer, as in the
case of a student who does not finish his/her participation in the experiment.
6. Testing. A testing threat that may occur in a study is when a pretest is given to subjects who have
knowledge of baseline data. Testing bias is the influence of the pretest or knowledge of baseline data
on the posttest scores. Subjects may remember the answers they put on the pretest and put the same
answers on the posttest.
7. Statistical regression. An effect that is the result of a tendency for subjects selected on the basis of
extreme scores to regress toward the mean on subsequent tests. When measurement of the
dependent variable is not perfectly reliable, there is a tendency for extreme scores to regress or move
toward the mean. The amount of statistical regression is inversely related to the reliability of the test.

Threats to External Validity
1. Experimenter effect. This threat appears when the characteristics of the researcher affect the
behaviour of the subjects or respondents. For example, a known personality conducting the
interview or observation may cause the subjects to be starstruck and the responses to be
superficial.
2. Hawthorne effect. It occurs when the respondents or subjects respond artificially to the treatment
because they know they are being observed as part of a research study.
3. Measurement effect. It is also called the reactive effect of the pretest. It occurs when subjects
have been sensitized to the treatment through taking the pretest. This sensitization might affect the
posttest results. If there is a prior announcement of the conduct of the study, the subjects might
prepare, and this will give a superficial result.

Types of Experimental Research Designs


1. True experimental designs. A design is considered a true experiment if the following criteria
are present: the researcher manipulates the experimental variables, i.e., the researcher has
control over the independent variables as well as the treatment and the subjects; there must be
one experimental group and one comparison or control group; and subjects are randomly assigned
either to the control group or the experimental group. The control group is the group that does not
receive the treatment.
2. Quasi-experimental design. It is a design in which either there is no control group, or the
subjects are not randomly assigned to groups.
3. Pre-experimental design. This experimental design is considered very weak, as the
researcher has little control over the research.

Types of Non-experimental Research Designs


1. Survey studies. The investigations are conducted through self-report. Surveys ask
respondents to report on their attitudes, opinions, perceptions, or behaviors. Survey studies aim
at describing characteristics, opinions, attitudes, and behaviors as they currently exist in a
population (Wilson, 1990).
Surveys can be categorized according to:
a. From whom the data is collected—sample, group, mass
b. Methods used to collect the data—telephone, text messages, snail mail, e-mail, face-to-face
c. Time orientation
i. Retrospective. The dependent variable is identified in the present and an attempt is
made to determine the independent variable that has occurred in the past.

ii. Cross-sectional. Data are collected at a single point in time. The design requires subjects
who, at different points, phases, or stages, are in the process of moving through an
experience. The subjects are assumed to represent data collected from these different
points in time. For example, if the researcher wants to determine the psychological
experience of oncology patients at different stages of cancer, data are gathered from
patients at the different stages at the same time.
iii. Longitudinal. The researcher collects data from the same people at different times. In
the same study of determining the psychological experience of oncology patients at
different stages of cancer, the researcher must have a sufficient number of patients in the
first or early stage who will be observed as they pass through the different stages. This
study is conducted over a longer period of time compared with the cross-sectional
survey.
d. Purpose or objective
i. Descriptive. This design aims to gather more information about characteristics
within a particular field of study. The purpose is to provide a picture of a situation as it
naturally happens. It may be used to develop theories, justify current clinical
practices or identify problems with them, aid in making professional judgements, or
determine what other practitioners in similar situations are doing.
ii. Comparative. This design is used to compare and contrast representative samples
from two or more groups of subjects in relation to certain designated variables that
occur in normal conditions. The results obtained from these analyses are frequently
not generalized to a population.
iii. Correlational. The design is used to investigate the direction and magnitude of
relationships among variables in a particular population. Likewise, it is designed to
study the changes in one characteristic or phenomenon which corresponds to the
changes in another or with one another. A wide range of variable scores is
necessary to determine the existence of a relationship. Thus, the sample should
reflect the full range of scores, if possible, on the variables being measured.

Qualitative Research
What is qualitative research?

 The naturalistic method of inquiry deals with the issue of human complexity by exploring it directly.
 The emphasis is on the complexity of humans, their ability to shape and create their own experience,
and the idea that truth is a composite of reality.

 Naturalistic investigations place heavy emphasis on understanding the human experience as it is lived,
usually through the careful collection and analysis of data that are narrative and subjective.
 It focuses on gaining insights on and an understanding of an individual’s perception of events. It is
concerned with in-depth descriptions of people or events and their interpretation of circumstances.
 Data are collected through such methods as unstructured interviews and participant observation.
 It emphasizes the dynamic, holistic, and individual aspects of human experience and attempts to
capture those aspects in their entirety within the context of those who are experiencing them.

Tasks/role/characteristics of qualitative researcher


 To synthesize the patterns and themes in the data instead of focusing on the testing of hypotheses
 Researcher must not be limited by existing theories but must be open to new ideas and new theories
 Researcher does not have to be concerned with numbers and complicated statistical analyses
What are the products of qualitative research?
 Recurrent themes or hypotheses
 Survey instrument measures
 Taxonomies
 Conceptual models (theories)

Fundamentals of qualitative research: Meaning, not numbers

QUALITATIVE
- Approach: Inductive
- Goal: Depth, local meanings, generate hypotheses
- Setting: Natural
- Sampling: Purposeful
- Data: Words, images; narrow but rich
- Data Analysis: Iterative interpretation
- Values: Personal involvement and partiality (subjectivity, reflexivity)

QUANTITATIVE
- Approach: Deductive
- Goal: Breadth, generalization, test hypotheses
- Setting: Experimental/Quasi-experimental
- Sampling: Probabilistic
- Data: Numbers; shallow but broad
- Data Analysis: Statistical tests, models
- Values: Detachment and impartiality (objectivity)

(MIXED approaches sit between the two.)

Fundamentals of qualitative research: No single answer


 Telling one story among many that could be told about the data
 Doesn’t mean that the story is fictional
- Plausible
- Coherent
- Grounded in the data
 Truth can be compelling without claiming to be absolute
Fundamentals of qualitative research: Context is important
 Data do not come “out of the ether”
- Data are produced within contexts by participants who are located in and come from specific contexts
 This contrasts with the positivist/quantitative ideal of obtaining “uncontaminated” data or knowledge, with all
biases removed
 In qualitative research, we recognize the subjectivity of the data we analyse and incorporate that subjectivity into
the analysis
Fundamentals of qualitative research: All sorts of data
 Production of data: what we get participants to do
 Selection of data: from existing materials, naturally occurring data
 Rich and shallow data: rich data are preferred
 The most important issue is that the data serve the purpose of the research
Fundamentals of qualitative research: Subjectivity and reflexivity
Subjectivity
 Researchers and participants bring their own history, assumptions, values, perspectives, and politics into the
research, and any knowledge produced is going to reflect these (even if only in a minor way).
Reflexivity
 Process of critically reflecting on the knowledge produced by the study and on our role in producing that knowledge
 Functional reflexivity explores how the form and nature of the specific study impact the knowledge that
is obtained, while disciplinary reflexivity explores the impact of approaching an issue from a specific
field of inquiry
 Personal reflexivity involves thinking about the ways in which the researchers’ own beliefs and opinions
influence the research, and about how the research has affected the researcher personally and
professionally.
Common Types of Qualitative Research
1. Phenomenological study. It examines human experiences (lived experiences) through descriptions
provided by subjects or respondents. The goal is to describe the meaning that experiences hold for
each subject. Some of the areas of concern for these studies are humanness, self-determination,
uniqueness, wholeness, and individualism. Thus, with this model, the researcher has to empathize with
the experience of the subjects as if he/she were the one experiencing the phenomenon. This description
consists of “what” they experienced and “how” they experienced it (Moustakas, 1994).
Philosophical assumptions rest on some common grounds:
 The study of lived experiences of persons
 The view that these experiences are conscious ones (van Manen, 1990)
 The development of descriptions of the essences of the experiences, not explanations or
analyses (Moustakas, 1994)
Example of a problem of this type: What are the common experiences encountered by a wife/husband with
a spouse who is undergoing rehabilitation?

With this example, the researcher has to discover the inner feelings, emotional hardships, the mental
disturbances that the respondent is experiencing.

2. Ethnographic study. It involves the collection and analysis of data about cultural groups or minorities.
In this type of research, the researcher frequently lives with the people and becomes a part of their
culture. Therein, he/she personally immerses and gets involved in the day-to-day activities of the
subjects. The rituals, ceremonies, norms, and traditions being undertaken in the setting will actually be
experienced by the researcher. He/she will more or less share the same feelings of the cultural groups.
During the immersion process, the researcher has to talk to the key persons and personalities called
key informants, who can provide the important data for the study. The main purpose of this kind of
study is the development of cultural theories.
3. Narrative study. The researchers describe the lives of individuals, collect stories about people’s lives,
and write narratives of individual experiences (Connelly & Clandinin, 1990). In narrative research, the
researcher does not follow a rigid, step-by-step approach but uses informal data collection procedures (Davies, 2009).
1. The narrative researcher captures personal life experiences and determines the research
problems or questions.
2. The researcher observes the individual’s life experiences and records field notes.
3. The researcher collects the different personal experiences and historical contexts.
4. The researcher analyses the gathered information and collaborates with participants.

Key characteristics of narrative design


1. It explores the individual’s experiences;
2. Researcher analyzes and writes about an individual life using time sequence or chronology of
events; researcher orders these events in a way that makes sense to a reader
3. Collects individual stories
4. Researcher gathers stories and analyses them for elements of the story; researcher rewrites the
story to place it in a chronological sequence (restorying). Restorying provides a causal link among
ideas; the information would include interaction, continuity, and situation.
5. Coding for themes. Themes provide the complexity of the story; themes add depth to the insight
about understanding an individual’s experiences; themes can be incorporated into the passage
retelling the individual’s experience or as a separate section of the study.
6. Context or setting includes the people involved in the story, the physical setting, may be described
before events or actions, or can be woven throughout the study.
7. Collaboration with participants. The inquirer actively involves the participant in the inquiry as it unfolds.

4. Case study. It is an in-depth examination of a person, a group of people, or an institution. Some of its
purposes are to gain insights into a little-known problem, provide background data for broader studies, and
explain socio-psychological and socio-cultural processes. A case study also involves a comprehensive and
extensive examination of a particular individual, group, or situation over a period of time. It provides information
from which to draw conclusions about the impact of a significant event on a person’s life (Sanchez, 2002).
Some of the disadvantages of a case study are the problems of generalizability, since the study focuses only on a
small group of individuals; the difficulty of determining the adequacy of the data; the possibility of biases; and the
expense entailed by the design.

Phases in a Qualitative Study (Polit et al., 2006)

1. Orientation and overview. A qualitative researcher usually embarks on a study with only a vague
idea of the topic. Therefore, the first phase is to determine what is salient about the phenomenon or
culture of interest.
2. Focused exploration. It involves a focused scrutiny and in-depth exploration of the aspects of the
phenomenon judged to be salient. The questions asked and the types of people invited are shaped
based on the outcome of the first phase.
3. Confirmation and closure. The researcher undertakes efforts to establish that his/her findings are
trustworthy, often going back to the study participants and discussing his/her understanding of it with
them.

Research Design Template Example: (Castro & Lombrio, 2020)

Assessment 1 (Final Term)

RESEARCH DESIGN ACTIVITY


Directions: Supply the following questions with the necessary information based on the
knowledge gained from the discussion to formulate your Research Design.
1. Compose the introductory paragraph of your research methodology.
2. Describe the general methodology that you will utilize in your proposed study (Is it qualitative or
quantitative?). Cite your references.
3. Why did you choose this type of research methodology?
4. What specific methodology will you utilize in your proposed study (e.g., experimental, descriptive
survey, correlational, phenomenological, etc.)?
5. Why did you choose this specific methodology?

Lesson 2: Measurement of Constructs

Theoretical propositions consist of relationships between abstract constructs. Testing theories (i.e., theoretical
propositions) requires measuring these constructs accurately, correctly, and in a scientific manner before the
strength of their relationships can be stated. Measurement refers to careful, deliberate observations of the real
world and is the essence of empirical research.

Learning outcomes:
At the end of the lesson, students are expected to:
1. examine the related processes of conceptualization and operationalization
for creating measures of such constructs

Input and Presentation Phase


Conceptualization is the mental process by which fuzzy and imprecise constructs (concepts) and their
components are defined in concrete and precise terms. For instance, we often use the word “prejudice,” and
the word conjures a certain image in our mind; however, we may struggle if we were asked to define exactly
what the term means. If women earn less than men for the same job, is that gender prejudice? If churchgoers
believe that non-believers will burn in hell, is that religious prejudice? Are there different kinds of prejudice, and if
so, what are they? Are there different levels of prejudice, such as high or low? Answering all of these questions
is the key to measuring prejudice correctly. The process of understanding what is included in and what is
excluded from the concept of prejudice is the conceptualization process. The definition of such constructs is not
based on any objective criterion, but rather on shared “inter-subjective” agreement between our mental
images (conceptions) of these constructs.

Operationalizing Constructs and Variables


Operationalization refers to the process of developing indicators or items for measuring these constructs. For
instance, if an unobservable theoretical construct such as socioeconomic status is defined as the level of
family income, it can be operationalized using an indicator that asks respondents the question: what is your
annual family income? Given the high level of subjectivity and imprecision inherent in social science
constructs, we tend to measure most of those constructs (except a few demographic constructs such as age,
gender, education, and income) using multiple indicators.

The process of “pinning down” the construct has to begin by realizing that what the researcher
would like to measure are the indicants of some property of the objects or entities. Van Dalen (1979) concretizes it
with the following example:

“A student cannot be measured, but indicants of his weight, intellectual capacity, achievement in Mathematics,
punctuality, and other properties can be.”

Property - a concept or logical construct that describes a particular characteristic which is common to all
members of a set, but on which members of a set vary. Example: Punctuality

Indicants - something that points to the property and helps define it.

Example: for punctuality

a. is never late for class
b. hands in term papers on or before the due date
c. is among the first people to arrive for meetings

Defining the constructs or properties in terms of indicants is called “operationalizing a construct or
concept”. This process is necessary so that the researcher is able to progress in his research plans
from the realm of theory to the realm of practical investigation. In this way, constructs are rendered more
measurable and become referred to as “variables”.

Variables are concepts that assume more than one value; they are qualities, properties, or characteristics of
persons, things, or situations that change or vary (Burns and Grove, 1995).

Two Kinds of Operational Definitions (Kerlinger, 1993)


1. Measured Operational Definition
- describes how a variable will be measured

2. Experimental Operational Definition
- spells out the details (operations) of the researcher’s manipulation of the variable

Since constructs cannot be directly seen and are not easily quantifiable, their presence may be inferred
from their operational definitions.
Kinds of Variables
1. Attribute and Active Variables

2. Continuous and Discrete Variables

3. Two-Category and Multiple Category variables

Two-category and multiple-category variables are expressed in terms of discrete or continuous categories.
Example (multiple categories based on a continuous variable):
- Height (tall, average, or short)
- Weight (heavy, average, or light)
Variable Traits
1. Exhaustive - includes all possible answerable responses
2. Mutually exclusive - no respondent should be able to have two attributes simultaneously
Classification of variables
1. According to functional relationship
 Independent Variable (or variate) – variable that influences another variable; variable that causes
the variation of another variable
 Dependent Variable (or criterion variable) – variable that is influenced by another variable
2. According to continuity of scale
 Continuous Variable – variable that can assume an unlimited number of intermediate values
within a specified range of values. Examples: Height, Weight, Age, Teaching Experience
Attitude towards Teaching
 Discrete Variable – variable that can take on only designated values (finite or countable)
Examples: Number of Children in a Family, Number of service vehicles of government agencies
Scores on a standardized multiple choice test in Science
3. According to scale of Measurement
Measurement – the assignment of numbers to the categories of a variable according to rules (may be
arbitrary rule or standard rule)
Variable: Sex
Categories: Male and Female
Measurement (Arbitrary): Assign 1 if sex is male; assign 0 if sex is female
Variable: Age
Categories: Years

Measurement (standard): Use the number of years as a measure of age
Variable: Attitude towards Teaching
Categories: Very Positive to Very Negative
Measurement (Arbitrary): Use a 5-point scale
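The assignment rules above can be expressed directly in code. The sketch below (with a hypothetical respondent record) applies the arbitrary rule for sex, the standard rule for age, and an arbitrary 5-point coding for attitude toward teaching.

```python
# Hypothetical sketch of measurement as rule-based assignment of numbers to categories.
respondent = {"sex": "male", "age_years": 34, "attitude": "Very Positive"}

sex_code = {"male": 1, "female": 0}                      # arbitrary rule
attitude_code = {"Very Negative": 1, "Negative": 2,      # arbitrary 5-point scale
                 "Neutral": 3, "Positive": 4, "Very Positive": 5}

measured = {
    "sex": sex_code[respondent["sex"]],
    "age": respondent["age_years"],                      # standard rule: number of years
    "attitude": attitude_code[respondent["attitude"]],
}
print(measured)  # {'sex': 1, 'age': 34, 'attitude': 5}
```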
Scale of Measurement
NOMINAL Scale (KEY WORD: LABEL)
• Establishes equivalence or difference between the attributes of the objects or respondents
• In this scale, numbers are used as mere labels of the categories of the variable;
• The numbers cannot be meaningfully ordered
Examples: Sex: 1 – male; 0 – female
Religion: 1 – Roman Catholic 2 – Protestant 3 – INC 4 – Others
Marital Status: 1 – single; 2 – married; 3 – others
ORDINAL Scale (KEY WORD: RANK)
• This scale possesses all characteristics of the nominal scale, i.e., numbers are used as labels
• The numbers can be meaningfully ordered
• But differences between successive categories may not be equal
Examples: SES: 1 – Low; 2 – Average; 3 - High
Educational Attainment: 3 – Doctorate Degree Holder 2 – Master’s Degree Holder 1 – Bachelor’s
Degree Holder
INTERVAL Scale (KEY WORDS: EQUAL INTERVAL)
• This scale possesses all characteristics of the ordinal scale, i.e., numbers are used as labels and they
can be meaningfully ranked.
• The differences between successive categories can be assumed equal
• But the scale has no true zero point (the point that indicates the complete absence of the characteristic
being measured).
Examples:
IQ Score
Temperature in Degree Celsius
Achievement Score on a Standardized Test
RATIO Scale (KEY WORDS: TRUE ZERO POINT)
• This scale possesses all the characteristics of the interval scale, i.e., numbers are used as labels, they
can be meaningfully ranked, and differences between successive categories are equal.
• The scale has a true zero point.
Examples: Age , Height, Weight, Work experience, Number of children in a family, Number of times
absent in a year
Common rating scales
1. Binary scales. Binary scales are nominal scales consisting of binary items that assume one of two
possible values, such as yes or no, true or false, and so on. For example, a typical binary scale for the
“political activism” construct may consist of the six binary items shown in Table 6.2. Each item in this
scale is a binary item, and the total number of “yes” indicated by a respondent (a value from 0 to 6)
can be used as an overall measure of that person’s political activism.

2. Likert scale. Designed by Rensis Likert, this is a very popular rating scale for measuring ordinal data
in social science research. This scale includes Likert items that are simply-worded statements to
which respondents can indicate their extent of agreement or disagreement on a five or seven-point
scale ranging from “strongly disagree” to “strongly agree”. A typical example of a six-item Likert scale
for the “employment self-esteem” construct is shown in Table 6.3. Likert scales are summated scales,
that is, the overall scale score may be a summation of the attribute values of each item as selected by
a respondent (a scoring sketch appears after this list).

3. Semantic differential. This is a composite (multi-item) scale where respondents are asked to indicate
their opinions or feelings toward a single statement using different pairs of adjectives framed as polar
opposites. For instance, the construct “attitude toward national health insurance” can be measured
using four items shown in Table 6.4. As in the Likert scale, the overall scale score may be a
summation of individual item scores. Notice that in Likert scales, the statement changes but the
anchors remain the same across items. However, in semantic differential scales, the statement
remains constant, while the anchors (adjective pairs) change across items. Semantic differential is
believed to be an excellent technique for measuring people’s attitude or feelings toward objects,
events, or behaviors.

4. Guttman scale. Designed by Louis Guttman, this composite scale uses a series of items arranged in
increasing order of intensity of the construct of interest, from least intense to most intense. As an
example, the construct “attitude toward immigrants” can be measured using five items shown in Table
6.5. Each item in the above Guttman scale has a weight (not indicated above) which varies with the
intensity of that item, and the weighted combination of each response is used as aggregate measure
of an observation.
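As noted for the Likert scale in item 2 above, summated scales can be scored by adding the selected attribute values across items. The sketch below assumes a 5-point coding (1 = strongly disagree to 5 = strongly agree) and hypothetical responses to a six-item scale; it is an illustration, not the scoring rule of any particular instrument.

```python
# Hypothetical sketch: summated score for a six-item, 5-point Likert scale
# (1 = strongly disagree ... 5 = strongly agree). The responses are invented.
item_responses = [4, 5, 3, 4, 2, 5]      # one respondent's selections on six items
overall_score = sum(item_responses)      # summated scale score (possible range 6-30)
print(overall_score)                     # 23
```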

Lesson 3: Scale of Validity and Reliability

The previous lesson examined some of the difficulties with measuring constructs in social science
research. For instance, how do we know whether we are measuring “compassion” and not the
“empathy”, since both constructs are somewhat similar in meaning? Or is compassion the same thing
as empathy? What makes it more complex is that sometimes these constructs are imaginary concepts
(i.e., they don’t exist in reality), and multi-dimensional (in which case, we have the added problem of
identifying their constituent dimensions). Hence, it is not adequate just to measure social science
constructs using any scale that we prefer. We also must test these scales to ensure that: (1) these
scales indeed measure the unobservable construct that we wanted to measure (i.e., the scales are
“valid”), and (2) they measure the intended construct consistently and precisely (i.e., the scales are
“reliable”). Reliability and validity, jointly called the “psychometric properties” of measurement scales,
are the yardsticks against which the adequacy and accuracy of our measurement procedures are
evaluated in scientific research.

Learning outcomes:
At the end of the lesson, students are able to:
1. differentiate validity from reliability; and
2. establish the validity and reliability of research instrument in their study

Input and Presentation Phase


A measure can be reliable but not valid, if it is measuring something very consistently but is
consistently measuring the wrong construct. Likewise, a measure can be valid but not reliable if it is
measuring the right construct, but not doing so in a consistent manner. Using the analogy of a
shooting target, as shown in Figure 7.1, a multiple-item measure of a construct that is both reliable and
valid consists of shots that are clustered within a narrow range near the center of the target. A measure
that is valid but not reliable will consist of shots centered on the target but not clustered within a narrow
range; the shots are instead scattered around the target. Finally, a measure that is reliable but not valid will
consist of shots clustered within a narrow range but off from the target. Hence, reliability and validity
are both needed to assure adequate measurement of the constructs of interest.

Validity
Validity is the ability of an instrument to measure what it purports to measure. Validity, often called
construct validity, refers to the extent to which a measure adequately represents the underlying construct that
it is supposed to measure. For instance, is a measure of compassion really measuring compassion, and not
measuring a different construct such as empathy? Validity can be assessed using theoretical or empirical
approaches, and should ideally be measured using both approaches. Theoretical assessment of validity
focuses on how well the idea of a theoretical construct is translated into or represented in an operational
measure. When a study investigates the common causes of absences, the content of the instrument must
focus on these variables and indicators.
Types of Validity (Kubiszyn & Borich, 2007)
1. Face validity. Also known as logical validity, face validity involves an analysis of whether the instrument
uses a valid scale. Just by looking at the instrument, the researcher decides whether it has face validity. This
includes the font size, spacing, the size of the paper used, and other details that should not distract
respondents from answering the questionnaire.
2. Content validity. Content validity is an assessment of how well a set of scale items matches with the
relevant content domain of the construct that it is trying to measure. For instance, if you want to measure
the construct “satisfaction with restaurant service,” and you define the content domain of restaurant
service as including the quality of food, courtesy of wait staff, duration of wait, and the overall ambience of
the restaurant (i.e., whether it is noisy, smoky, etc.), then for adequate content validity, this construct
should be measured using indicators that examine the extent to which a restaurant patron is satisfied with
the quality of food, courtesy of wait staff, the length of wait, and the restaurant’s ambience. Of course, this
approach requires a detailed description of the entire content domain of a construct, which may be difficult
for complex constructs such as self-esteem or intelligence. Hence, it may not always be possible to
adequately assess content validity. As with face validity, an expert panel of judges may be employed to
examine content validity of constructs.
3. Convergent validity refers to the closeness with which a measure relates to (or converges on) the
construct that it is purported to measure, and discriminant validity refers to the degree to which a measure
does not measure (or discriminates from) other constructs that it is not supposed to measure. Usually,
convergent validity and discriminant validity are assessed jointly for a set of related constructs. For
instance, if you expect that an organization’s knowledge is related to its performance, how can you assure
that your measure of organizational knowledge is indeed measuring organizational knowledge (for
convergent validity) and not organizational performance (for discriminant validity)?
4. Criterion-related validity or equivalent test. This type of validity is an expression of how scores from the
test are correlated with an external criterion.
a. Concurrent validity examines how well one measure relates to other concrete criterion that is
presumed to occur simultaneously. For instance, do students’ scores in a calculus class correlate well
with their scores in a linear algebra class? These scores should be related concurrently because they
are both tests of mathematics. Unlike convergent and discriminant validity, concurrent and predictive
validity are frequently ignored in empirical social science research.
b. Predictive validity is the degree to which a measure successfully predicts a future outcome that it is
theoretically expected to predict. For instance, can standardized test scores (e.g., Scholastic Aptitude
Test scores) correctly predict the academic success in college (e.g., as measured by college grade
point average)? Assessing such validity requires creation of a “nomological network” showing how
constructs are theoretically related to each other.
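As a hedged illustration of the predictive validity idea in item 4b, one simple check is to correlate standardized test scores with later college grade point averages. The paired values below are hypothetical, and a plain Pearson correlation stands in for a fuller nomological analysis.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical paired data: standardized test scores and later college GPAs for eight
# students. A strong positive correlation would support predictive validity.
test_scores = [1100, 1250, 980, 1400, 1180, 1320, 1050, 1500]
college_gpa = [2.9, 3.3, 2.5, 3.8, 3.0, 3.5, 2.7, 3.9]

print(f"Predictive validity (Pearson r): {correlation(test_scores, college_gpa):.2f}")
```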

Reliability
Reliability is the degree to which the measure of a construct is consistent or dependable. In other words, if we
use this scale to measure the same construct multiple times, do we get pretty much the same result every
time, assuming the underlying phenomenon is not changing? An example of an unreliable measurement is
people guessing your weight. Quite likely, people will guess differently, the different measures will be
inconsistent, and therefore, the “guessing” technique of measurement is unreliable. A more reliable
measurement may be to use a weight scale, where you are likely to get the same value every time you step on
the scale, unless your weight has actually changed between measurements.
Methods in Establishing Reliability
1. Inter-rater reliability. Inter-rater reliability, also called inter-observer reliability, is a measure of
consistency between two or more independent raters (observers) of the same construct. Usually, this
is assessed in a pilot study, and can be done in two ways, depending on the level of measurement of
the construct. If the measure is categorical, a set of all categories is defined, raters check off which

category each observation falls in, and the percentage of agreement between the raters is an estimate
of inter-rater reliability.
2. Test-retest reliability. Test-retest reliability is a measure of consistency between two measurements
(tests) of the same construct administered to the same sample at two different points in time. If the
observations have not changed substantially between the two tests, then the measure is reliable.

3. Split-half reliability. Split-half reliability is a measure of consistency between two halves of a


constructed measure. For instance, if you have a ten-item measure of a given construct, randomly split
those ten items into two sets of five (unequal halves are allowed if the total number of items is odd),
and administer the entire instrument to a sample of respondents. Then, calculate the total score for
each half for each respondent, and the correlation between the total scores in each half is a measure of
split-half reliability. The longer the instrument, the more likely it is that the two halves of the measure
will be similar (since random errors are minimized as more items are added), and hence, this technique
tends to systematically overestimate the reliability of longer instruments.
4. Internal consistency reliability. Internal consistency reliability is a measure of consistency between
different items of the same construct. If a multiple-item construct measure is administered to
respondents, the extent to which respondents rate those items in a similar manner is a reflection of
internal consistency. This reliability can be estimated in terms of average inter-item correlation,
average item-to-total correlation, or more commonly, Cronbach’s alpha.
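Internal consistency is most commonly reported as Cronbach's alpha, computed as alpha = (k / (k - 1)) x (1 - sum of item variances / variance of total scores). The sketch below applies this formula to a hypothetical respondent-by-item matrix; it is an illustration only, not a full psychometric analysis.

```python
from statistics import variance

# Hypothetical data: 6 respondents x 4 items measuring one construct (1-5 ratings).
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
    [4, 4, 4, 5],
]

k = len(scores[0])                                    # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])    # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```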
Other Criteria for Assessing Quantitative Measure (Polit, 2004)
1. Sensitivity. The instrument should be able to identify a case correctly, i.e., to screen or diagnose a
condition correctly.
2. Specificity. The instrument should be able to identify a non-case correctly, i.e., to screen out those
without conditions correctly.
3. Comprehensibility. Subjects and researchers should be able to comprehend the behaviour required to
secure accurate and valid measurements.
4. Precision. The instrument should discriminate among people who exhibit varying degrees of an
attribute as precisely as possible.
5. Speed. The researcher should not rush the measuring process so that he/she can obtain reliable
measurements.
6. Range. The instrument should be capable of detecting the smallest expected value of the variable to
the largest in order to obtain meaningful measurements.
7. Linearity. The researcher normally strives to construct measures that are equally accurate and
sensitive over the entire range of values.

8. Reactivity. The instrument should, as much as possible, avoid affecting the attribute being measured.

Lesson 4: Sampling

Sampling is the statistical process of selecting a subset (called a “sample”) of a population of interest for
purposes of making observations and statistical inferences about that population. Social science research is
generally about inferring patterns of behaviors within specific populations. We cannot study entire populations
because of feasibility and cost constraints, and hence, we must select a representative sample from the
population of interest for observation and analysis. It is extremely important to choose a sample that is truly
representative of the population so that the inferences derived from the sample can be generalized back to the
population of interest. Improper and biased sampling is the primary reason for often divergent and erroneous
inferences reported in opinion polls and exit polls conducted by different polling groups.

Learning Outcomes:
At the end of the lesson, students should be able to:
1. differentiate the various methods of sampling; and
2. formulate the criteria for choosing participants/respondents

Input and Presentation Phase


The sampling process comprises several stages. The first stage is defining the target population. A population
can be defined as all people or items (the unit of analysis) with the characteristics that one wishes to study. The
unit of analysis may be a person, group, organization, country, object, or any other entity that you wish to draw
scientific inferences about. Sometimes the population is obvious. For example, if a manufacturer wants to
determine whether finished goods manufactured at a production line meet certain quality requirements or must
be scrapped and reworked, then the population consists of the entire set of finished goods manufactured at that
production facility. At other times, the target population may be a little harder to understand.
The second step in the sampling process is to choose a sampling frame. This is an accessible
section of the target population (usually a list with contact information) from where a sample can be drawn. If
your target population is professional employees at work, because you cannot access all professional
employees around the world, a more realistic sampling frame will be employee lists of one or two local
companies that are willing to participate in your study.
The last step in sampling is choosing a sample from the sampling frame using a well-defined
sampling technique. Sampling techniques can be grouped into two broad categories: probability (random)
sampling and non-probability sampling.
KINDS OF SAMPLING
Probability Sampling
Probability sampling is a technique in which every unit in the population has a chance (non-zero
probability) of being selected in the sample, and this chance can be accurately determined. Sample statistics
thus produced, such as sample mean or standard deviation, are unbiased estimates of population parameters,
as long as the sampled units are weighted according to their probability of selection. All probability sampling
have two attributes in common: (1) every unit in the population has a known non-zero probability of being
sampled, and (2) the sampling procedure involves random selection at some point. The different types of
probability sampling techniques include:
Simple random sampling. In this technique, all possible subsets of a population (more accurately, of
a sampling frame) are given an equal probability of being selected: each of the NCn possible sets of n units out
of a total of N units in a sampling frame has the same probability, 1/NCn, of being selected. Hence, sample statistics are unbiased estimates of
population parameters, without any weighting. Simple random sampling involves randomly selecting
respondents from a sampling frame, but with large sampling frames, usually a table of random numbers or a
computerized random number generator is used. For instance, if you wish to select 200 firms to survey from a
list of 1000 firms, if this list is entered into a spreadsheet like Excel, you can use Excel’s RAND() function to
generate random numbers for each of the 1000 firms on that list. Next, you sort the list in increasing order of
their corresponding random number and select the first 200 firms on that sorted list. This is the simplest of
all probability sampling techniques; however, the simplicity is also the strength of this technique. Because the
sampling frame is not subdivided or partitioned, the sample is unbiased and the inferences are most
generalizable amongst all probability sampling techniques.
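The Excel RAND() procedure described above can be sketched just as easily in Python. This is a minimal illustration with a hypothetical frame of 1,000 firm identifiers; random.sample draws 200 of them without replacement, each subset being equally likely.

```python
import random

# Hypothetical sketch of simple random sampling: draw 200 firms from a sampling
# frame of 1,000 firm identifiers, each with an equal probability of selection.
sampling_frame = [f"Firm-{i:04d}" for i in range(1, 1001)]
sample = random.sample(sampling_frame, k=200)   # sampling without replacement
print(len(sample), sample[:5])
```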
Systematic sampling. In this technique, the sampling frame is ordered according to some criteria and
elements are selected at regular intervals through that ordered list. Systematic sampling involves a random
start and then proceeds with the selection of every kth element from that point onwards, where k = N/n, where
k is the ratio of sampling frame size N and the desired sample size n, and is formally called the sampling ratio.

It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from
within the first k elements on the list. In our previous example of selecting 200 firms from a list of 1000 firms,
you can sort the 1000 firms in increasing (or decreasing) order of their size (i.e., employee count or annual
revenues), randomly select one of the first five firms on the sorted list, and then select every fifth firm on the
list. This process will ensure that there is no overrepresentation of large or small firms in your sample, but
rather that firms of all sizes are generally uniformly represented, as it is in your sampling frame. In other
words, the sample is representative of the population, at least on the basis of the sorting criterion.
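A minimal sketch of the systematic procedure just described, assuming a hypothetical frame of 1,000 firms already sorted by size: the sampling ratio is k = N/n = 5, the start is chosen at random from the first k elements, and every kth firm is then selected.

```python
import random

# Hypothetical sketch of systematic sampling from a size-sorted frame of 1,000 firms.
frame = [f"Firm-{i:04d}" for i in range(1, 1001)]   # assumed sorted by firm size
N, n = len(frame), 200
k = N // n                                          # sampling ratio k = N/n = 5

start = random.randrange(k)                         # random start within the first k elements
sample = frame[start::k]                            # every kth element from that point onward
print(len(sample), sample[:3])
```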
Stratified sampling. In stratified sampling, the sampling frame is divided into homogeneous and non-
overlapping subgroups (called “strata”), and a simple random sample is drawn within each subgroup. In the
previous example of selecting 200 firms from a list of 1000 firms, you can start by categorizing the firms based
on their size as large (more than 500 employees), medium (between 50 and 500 employees), and small (less
than 50 employees). You can then randomly select 67 firms from each subgroup to make up your sample of
200 firms. However, since there are many more small firms in a sampling frame than large firms, having an
equal number of small, medium, and large firms will make the sample less representative of the population
(i.e., biased in favor of large firms that are fewer in number in the target population). This is called non-
proportional stratified sampling because the proportion of sample within each subgroup does not reflect the
proportions in the sampling frame (or the population of interest), and the smaller subgroup (large-sized firms) is
oversampled. An alternative technique will be to select subgroup samples in proportion to their size in the
population. For instance, if there are 100 large firms, 300 mid-sized firms, and 600 small firms, you can
sample 20 firms from the “large” group, 60 from the “medium” group and 120 from the “small” group. In this
case, the proportional distribution of firms in the population is retained in the sample, and hence this technique
is called proportional stratified sampling. Note that the non-proportional approach is particularly effective in
representing small subgroups, such as large-sized firms, and is not necessarily less representative of the
population compared to the proportional approach, as long as the findings of the non-proportional approach are
weighted in accordance with a subgroup’s proportion in the overall population.
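The proportional stratified example above (100 large, 300 medium, and 600 small firms, yielding 20, 60, and 120 sampled firms) can be sketched as follows; the firm identifiers are hypothetical.

```python
import random

# Hypothetical sketch of proportional stratified sampling (strata sizes as in the example above).
strata = {
    "large":  [f"L-{i:03d}" for i in range(100)],
    "medium": [f"M-{i:03d}" for i in range(300)],
    "small":  [f"S-{i:03d}" for i in range(600)],
}

population_size = sum(len(firms) for firms in strata.values())
sample_size = 200

sample = []
for name, firms in strata.items():
    n_stratum = round(sample_size * len(firms) / population_size)  # proportional allocation
    sample.extend(random.sample(firms, n_stratum))                 # simple random sample per stratum

print(len(sample))   # 200 firms: 20 large + 60 medium + 120 small
```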
Cluster sampling. If you have a population dispersed over a wide geographic region, it may not be
feasible to conduct a simple random sampling of the entire population. In such case, it may be reasonable to
divide the population into “clusters” (usually along geographic boundaries), randomly sample a few clusters,
and measure all units within that cluster. For instance, if you wish to sample city governments in the state of
New York, rather than travel all over the state to interview key city officials (as you may have to do with a
simple random sample), you can cluster these governments based on their counties, randomly select a set of
three counties, and then interview officials from every city government in those counties. However, depending on
between-cluster differences, the variability of sample estimates in a cluster sample will generally be higher than
that of a simple random sample, and hence the results are less generalizable to the population than those
obtained from simple random samples.
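A minimal sketch of the two cluster-sampling steps described above: randomly select a few clusters (hypothetical counties here), then measure every unit within the selected clusters.

```python
import random

# Hypothetical sketch of cluster sampling: city governments grouped by county,
# three counties selected at random, and every city within those counties included.
clusters = {
    "County A": ["City A1", "City A2", "City A3"],
    "County B": ["City B1", "City B2"],
    "County C": ["City C1", "City C2", "City C3", "City C4"],
    "County D": ["City D1", "City D2", "City D3"],
}

selected_counties = random.sample(list(clusters), k=3)               # randomly sample clusters
sample = [city for c in selected_counties for city in clusters[c]]   # take all units within them
print(selected_counties, sample)
```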

Matched-pairs sampling. Sometimes, researchers may want to compare two subgroups within one
population based on a specific criterion. For instance, why are some firms consistently more profitable than
other firms? To conduct such a study, you would have to categorize a sampling frame of firms into “high
profitable” firms and “low profitable firms” based on gross margins, earnings per share, or some other measure
of profitability. You would then select a simple random sample of firms in one subgroup, and match each firm
in this group with a firm in the second subgroup, based on its size, industry segment, and/or other matching
criteria. Now, you have two matched samples of high-profitability and low-profitability firms that you can study
in greater detail. Such matched-pairs sampling technique is often an ideal way of understanding bipolar
differences between different subgroups within a given population.
Multi-stage sampling. The probability sampling techniques described previously are all examples of
single-stage sampling techniques. Depending on your sampling needs, you may combine these single-stage
techniques to conduct multi-stage sampling. For instance, you can stratify a list of businesses based on firm
size, and then conduct systematic sampling within each stratum. This is a two-stage combination of stratified
and systematic sampling. Likewise, you can start with a cluster of school districts in the state of New York, and
within each cluster, select a simple random sample of schools; within each school, select a simple random
sample of grade levels; and within each grade level, select a simple random sample of students for study. In
this case, you have a four-stage sampling process consisting of cluster and simple random sampling.

Non-Probability Sampling
Nonprobability sampling is a sampling technique in which some units of the population have zero
chance of selection or where the probability of selection cannot be accurately determined. Typically, units are
selected based on certain non-random criteria, such as quota or convenience. Because selection is non-
random, nonprobability sampling does not allow the estimation of sampling errors and may be subject to
sampling bias. Therefore, information from a sample cannot be generalized back to the population. Types of
nonprobability sampling techniques include:
Convenience sampling. Also called accidental or opportunity sampling, this is a technique in which a
sample is drawn from that part of the population that is close to hand, readily available, or convenient. For
instance, if you stand outside a shopping center and hand out questionnaire surveys to people or interview
them as they walk in, the sample of respondents you will obtain will be a convenience sample. This is a non-
probability sample because you are systematically excluding all people who shop at other shopping centers.
The opinions that you would get from your chosen sample may reflect the unique characteristics of this
shopping center such as the nature of its stores (e.g., high-end stores will attract a more affluent demographic),
the demographic profile of its patrons, or its location (e.g., a shopping center close to a university will attract
primarily university students with unique purchase habits), and therefore may not be representative of the
opinions of the shopper population at large. Hence, the scientific generalizability of such observations will be
very limited. Other examples of convenience sampling are sampling students registered in a certain class or

sampling patients arriving at a certain medical clinic. This type of sampling is most useful for pilot testing,
where the goal is instrument testing or measurement validation rather than obtaining generalizable inferences.
Quota sampling. In this technique, the population is segmented into mutually exclusive subgroups
(just as in stratified sampling), and then a non-random set of observations is chosen from each subgroup to
meet a predefined quota. In proportional quota sampling, the proportion of respondents in each subgroup
should match that of the population. For instance, if the American population consists of 70% Caucasians,
15% Hispanic-Americans, and 13% African-Americans, and you wish to understand their voting preferences in
a sample of 98 people, you can stand outside a shopping center and ask people their voting preferences. But
you will have to stop asking Hispanic-looking people when you have 15 responses from that subgroup (or
African-Americans when you have 13 responses) even as you continue sampling other ethnic groups, so that
the ethnic composition of your sample matches that of the general American population.
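A minimal sketch of the proportional quota logic above, assuming a simulated (hypothetical) stream of shoppers: interviewing in each subgroup stops once its quota (70, 15, and 13 respondents out of 98) has been met.

```python
import random

# Hypothetical sketch of proportional quota sampling. Quotas follow the example above;
# the stream of intercepted shoppers is simulated, not real data.
quotas = {"Caucasian": 70, "Hispanic-American": 15, "African-American": 13}
counts = {group: 0 for group in quotas}

def next_shopper():
    """Simulate the subgroup of the next person walking by (hypothetical proportions)."""
    return random.choices(list(quotas), weights=[70, 15, 13])[0]

while sum(counts.values()) < sum(quotas.values()):
    group = next_shopper()
    if counts[group] < quotas[group]:        # stop sampling a subgroup once its quota is filled
        counts[group] += 1

print(counts)   # {'Caucasian': 70, 'Hispanic-American': 15, 'African-American': 13}
```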
Expert sampling. This is a technique where respondents are chosen in a non-random manner based
on their expertise on the phenomenon being studied. For instance, in order to understand the impacts of a
new governmental policy, you can sample a group of corporate accountants who are familiar with this act. The
advantage of this approach is that since experts tend to be more familiar with the subject matter than non-
experts, opinions from a sample of experts are more credible than a sample that includes both experts and
non-experts, although the findings are still not generalizable to the overall population at large.
Snowball sampling. In snowball sampling, you start by identifying a few respondents that match the
criteria for inclusion in your study, and then ask them to recommend others they know who also meet your
selection criteria. For instance, if you wish to survey computer network administrators and you know of only
one or two such people, you can start with them and ask them to recommend others who also do network
administration. Although this method hardly leads to representative samples, it may sometimes be the only
way to reach hard-to-reach populations or to proceed when no sampling frame is available.
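The referral process described above can be sketched as follows; the contact network, names, and helper function are entirely hypothetical and serve only to show how a sample might grow from one or two seed respondents.

from collections import deque

# Hypothetical referral network: each known respondent lists people they recommend.
referrals = {
    "admin_A": ["admin_B", "admin_C"],
    "admin_B": ["admin_D"],
    "admin_C": ["admin_D", "admin_E"],
    "admin_D": [],
    "admin_E": ["admin_F"],
    "admin_F": [],
}

def snowball_sample(seeds, target_size):
    """Grow a sample by following referrals until the target size is reached."""
    sample, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sample) < target_size:
        person = queue.popleft()
        sample.append(person)                      # interview this respondent
        for contact in referrals.get(person, []):  # ask for further referrals
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return sample

print(snowball_sample(["admin_A"], target_size=5))
# ['admin_A', 'admin_B', 'admin_C', 'admin_D', 'admin_E']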
Purposive sampling. This involves the deliberate handpicking of subjects; it is also called judgmental
sampling. For example, in a study about diabetic patients, the researcher handpicks the necessary number of
respondents from among known diabetic patients.
Considerations in determining the Sample Size
1. Sample sizes as small as 30 are generally adequate to ensure that the sampling distribution of the
mean will approximate the normal curve (Shott, 1990).
2. When the total population is equal to or less than 100, this same number may serve as the sample
size. This is called universal sampling.
3. Slovin's formula, n = N / (1 + Ne²), where N is the population size and e is the margin of error, is used
to compute the sample size (Sevilla, 2003); see the sketch after this list.
4. According to Gay (1976), the following are the acceptable sample sizes for the different types of research:
Descriptive research – 10% to 20% of the population may be required
Correlational research – 30 subjects or respondents
Comparative research – 15 subjects per group
Experimental design – 15 to 30 subjects per group
5. Using Calmorin's formula
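As a minimal sketch of item 3, Slovin's formula n = N / (1 + Ne²) can be computed as shown below; the population size and margin of error used here are assumed values chosen only for illustration.

import math

# Slovin's formula (item 3 above): n = N / (1 + N * e^2),
# where N is the population size and e is the margin of error.
def slovin(population_size, margin_of_error):
    return math.ceil(population_size / (1 + population_size * margin_of_error ** 2))

# Assumed values for illustration: a population of 1,200 and a 5% margin of error.
print(slovin(1200, 0.05))  # 300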

Assessment 2
Directions: Answer the following questions thoroughly to formulate your research sampling. The use of
additional references is encouraged. (10 pts.)
1. How would you describe the population and parameter of your study?
2. What sampling method will you use? Why do you prefer this method?

Lesson 5: Data Collection

Data collection is defined as the procedure of collecting, measuring and analyzing accurate insights for
research using standard validated techniques. A researcher can evaluate their hypothesis on the basis of
collected data. In most cases, data collection is the primary and most important step for research, irrespective
of the field of research. The approach of data collection is different for different fields of study, depending on
the required information.

The most critical objective of data collection is ensuring that information-rich and reliable data is collected
for statistical analysis so that data-driven decisions can be made for research.

Learning Outcomes:

At the end of the lesson, students should be able to determine the appropriate data collection tool and process
of their proposed thesis.

Input and Presentation Phase

Data Collection Methods: In-Person vs. Mail vs. Phone vs. Online

Essentially there are four choices for data collection – in-person interviews, mail, phone and online. There are
pros and cons to each of these modes.

1. In-Person Interviews
Pros: In-depth and a high degree of confidence in the data
Cons: Time consuming, expensive and can be dismissed as anecdotal
2. Mail Surveys
Pros: Can reach anyone and everyone – no barrier
Cons: Expensive, data collection errors, lag time
3. Phone Surveys
Pros: High degree of confidence in the data collected, reach almost anyone
Cons: Expensive, cannot self-administer, need to hire an agency
4. Web/Online Surveys
Pros: Cheap, can self-administer, very low probability of data errors
Cons: Not all your customers may have an email address or be on the internet; customers may be wary
of divulging information online.

In collecting the data, the researcher must decide:


 Which data to collect?
 How to collect the data?
 Who will collect the data?
 When to collect the data?
The selection of a method for collecting information depends upon the following:
 Resources available
 Credibility
 Analysis and reporting
 Skill of the evaluator
Types of Data

1. Primary Data – those which are collected for the first time and are original in character.
Primary data may be collected through:
 Experiments
 Surveys – the use of questionnaires
 Interview – a method of collecting data that involves the presentation of oral-verbal stimuli and replies in
terms of oral-verbal responses
 Observation – a method under which data are collected from the field through observation, with the
observer personally going to the field
Structured Observation
When the observation is characterized by a careful definition of the units to be observed,
the style of recording the observed information, standardized conditions of observation, and the
selection of pertinent data of observation.
Unstructured Observation
When the observation takes place without the aforecited characteristics.
Participant Observation
When the observer is a member of the group that he or she is observing, it is participant
observation.
Non-Participant Observation
When the observer observes people without giving any information to them, it is non-
participant observation.
Uncontrolled Observation
When the observation takes place under natural conditions, it is uncontrolled observation. It is
done to get a spontaneous picture of life and persons.
Controlled Observation
When observation takes place according to pre-arranged plans with an experimental procedure,
it is controlled observation, generally done in a laboratory under controlled conditions.

 Questionnaires
 schedules
2. Secondary Data – those which have already been collected by someone else and which have already
passed through some statistical analysis.

Examples of Data Gathering Procedures/Data Collection:

Qualitative – Phenomenological Research
Qualitative – Narrative Research: "THE LIVES OF OUTSTANDING ENGLISH TEACHERS DURING THE PANDEMIC"
Quantitative – Descriptive Correlation
Product Development Research
Assessment 3

Data Collection

State the data collection tool and process in your thesis proposal. (10pts.)

Lesson 6: Statistical Treatment/Data Analysis

Statistical treatment is the culmination of the long process of formulating a hypothesis, constructing the
instrument, and collecting data. It is very important to properly test the hypothesis, answer the
questions posed by the research, and present the results of the study in a clear and understandable manner.
In qualitative research, data are presented in purely verbal form, particularly in document analysis,
ethnomethodology, and observation studies. In quantitative research, however, in which the researcher
deals with numerical data, as in most surveys and experiments, it is logical to use statistical treatment.
Learning Outcomes:
At the end of the lesson, students should be able to:
1. discuss the commonly used statistical tools; and
2. design an appropriate statistical treatment for their research proposal.
Input and Presentation Phase
It is a requisite in any research that the researcher has a full knowledge of statistics. Statistics is the body of
logic and techniques useful for collection, organization, presentation, analysis, and interpretation of data.
Branches of Statistics

1. Descriptive statistics. It involves tabulating, depicting, and describing a collection of data. The data
are summarized to reveal overall patterns and to make them easily manageable.
2. Inferential statistics. It involves making generalizations about the population through a sample drawn
from it. It also involves hypothesis testing and sampling. It is concerned with a higher degree of
critical judgment and advanced mathematical methods, such as parametric (interval and ratio scale)
or non-parametric (nominal and ordinal data) statistical tools.

Common Statistical Tools


Descriptive statistics
1. Frequency distribution. It refers to the number of individuals or cases located in each category on the
scale of measurement.
2. Proportion. It is the frequency in each category divided by the total number of cases. It can be derived
from the frequency distribution.
3. Percentage. It is the proportion expressed in percent (i.e., multiplied by 100).
4. Measures of central tendency. They indicate where the center of the distribution tends to be located.
The central tendency refers to the typical or average score in a distribution.
a. Mode
b. Median
c. Mean
5. Variability or dispersion. It refers to the extent and manner in which the scores in a distribution differ
from each other.
a. Range
b. Average Deviation
c. Variance
d. Standard deviation
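A minimal sketch of the descriptive tools listed above, using only Python's standard library; the scores below are invented purely for illustration.

from collections import Counter
import statistics

# Invented scores, purely for illustration.
scores = [78, 85, 85, 90, 72, 85, 90, 78, 95, 88]

# Frequency distribution, proportion, and percentage per score category
freq = Counter(scores)
n = len(scores)
for score, f in sorted(freq.items()):
    print(score, f, f / n, f"{100 * f / n:.1f}%")

# Measures of central tendency
print("mode:", statistics.mode(scores))
print("median:", statistics.median(scores))
print("mean:", statistics.mean(scores))

# Measures of variability
print("range:", max(scores) - min(scores))
print("variance:", statistics.variance(scores))  # sample variance
print("std dev:", statistics.stdev(scores))      # sample standard deviation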
Inferential statistics
1. Parametric tests. These tests require a normal distribution. The level of measurements must either be
interval or ratio.
a. T-test. This test is used to compare two means: the means of two independent samples or two
independent groups, or the means of two correlated samples before and after a treatment. It is
typically used for small samples (fewer than 30 subjects).
b. Z-test. It is used to compare two means: the sample mean and the perceived population mean. It can
be used when the sample has 30 or more elements.
c. F-test. Also known as the analysis of variance (ANOVA), this is used when comparing the means of
two or more independent groups. One-way ANOVA is used when there is one independent variable
involved, and two-way ANOVA is used when there are two independent variables involved.
d. Pearson product-moment correlation coefficient. It is an index of the relationship between two variables.
e. Simple linear regression analysis. It is used when there is a significant relationship between the x and y
variables. It is used in predicting the value of y given the value of x.
f. Multiple regression analysis. It is used in predictions. The dependent variable can be predicted
given several independent variables.
2. Non-parametric tests. These tests do not require a normal distribution of scores. They can be utilized
when the data are nominal or ordinal. Several of the parametric and non-parametric tools in this section
are illustrated in the sketch after this list.
a. Chi-square test. It is a test of difference between the observed and the expected frequencies.
Three functions of the Chi-square test
1. The test of goodness of fit
2. The test of homogeneity
3. The test of independence
b. Spearman rho. The Spearman rank-order correlation is the nonparametric version of the Pearson
product-moment correlation. Spearman's correlation coefficient (ρ, also signified by rs) measures
the strength and direction of association between two ranked variables. It is mostly used for ordinal data.
c. Eta correlation. It is a non-parametric measure of association used for nominal data.
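The sketch below illustrates several of the tests listed above on invented data, assuming the NumPy and SciPy libraries are available; all values and variable names are made up purely for illustration and do not come from any actual study.

import numpy as np
from scipy import stats

# Invented data, purely for illustration; assumes SciPy is installed.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=75, scale=8, size=35)   # e.g., scores of group A
group_b = rng.normal(loc=80, scale=8, size=35)   # e.g., scores of group B
group_c = rng.normal(loc=78, scale=8, size=35)   # e.g., scores of group C

# t-test: compare the means of two independent groups
t_stat, p = stats.ttest_ind(group_a, group_b)
print("t-test:", t_stat, p)

# F-test (one-way ANOVA): compare the means of three independent groups
f_stat, p = stats.f_oneway(group_a, group_b, group_c)
print("ANOVA:", f_stat, p)

# Pearson product-moment correlation and simple linear regression
x = rng.normal(size=35)
y = 2 * x + rng.normal(scale=0.5, size=35)
r, p = stats.pearsonr(x, y)
print("Pearson r:", r, p)
reg = stats.linregress(x, y)                     # predict y from x
print("slope:", reg.slope, "intercept:", reg.intercept)

# Chi-square test of independence on an invented 2 x 3 contingency table
observed = np.array([[30, 25, 15],
                     [20, 35, 25]])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print("chi-square:", chi2, p, "df:", dof)

# Spearman rho on two sets of invented ranks (ordinal data)
judge_1 = [1, 2, 3, 4, 5, 6, 7, 8]
judge_2 = [2, 1, 4, 3, 6, 5, 8, 7]
rho, p = stats.spearmanr(judge_1, judge_2)
print("Spearman rho:", rho, p)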

Examples of Data Analysis Statement:


Quantitative Research

Qualitative Research

Assessment 4.
Craft the Statistical Treatment/Data Analysis of your thesis proposal.
PAGE \* MERGEFORMAT 33
References:
Bhattacherjee, A. (2012). Social Science Research: Principles, Methods, and Practices. Textbooks
Collection, Book 3.
Castro, J., & Lombrio, C. (2020, November 14). Language Preference: Its Influence on English Writing
Competence. Available at SSRN: https://ssrn.com/abstract=3948288 or
http://dx.doi.org/10.2139/ssrn.3948288
Creswell, J. (2013). Qualitative Research Methods.
Cristobal, A., & Cristobal, M. (2013). Research Made Easier: A Step-by-Step Process. C&E
Publishing, Inc.

Prepared by:
Jocelyn S. Castro
COED faculty

Qualitative Study
Chapter III
METHODOLOGY
Research Design
Research Locale
Participants of the Study
Sampling Procedure
Research Instrument
Data Gathering Procedures
Data Analysis
Validity of Findings
Bracketing/Research Reflexivity
Enhancement of Trustworthiness
Ethical Consideration
Semi-structured Interview Guide

PAGE \* MERGEFORMAT 33
Quantitative Study

Chapter III
METHODOLOGY

Research Design
Research Locale
Respondents of the Study
Sampling Procedure
Research Instrument
Data Gathering Procedures
Statistical Treatment
Survey Questionnaire
