CHAPTER 11

Nature and Definition of Sampling

Sampling Procedures
• Universe
• Population
• Sampling
• Sample

1. Universe - the totality of elements to which research findings may apply. This also refers to the target population: the group of people or objects from which the researcher intends to collect data and to which the findings of the study will be generalized.

Elements - the entities that make up the sample and the population.

2. Population - the accessible group of individuals from which the samples will be drawn by the researcher, consistent with specific criteria, or the total possible participants of the group in the study. This also refers to the portion of the universe accessible to the researcher.

Types of Population
a) Target Population - the group of individuals or objects which is of interest to the researcher and about which speculative information is desired.
b) Subject or Respondent Population - the group of individuals or objects chosen to provide the actual data and information needed in a research study.
c) Stratum - a mutually exclusive segment of the population, distinguished by one or more traits or qualifications.
d) Eligibility or Inclusion Criteria - criteria or characteristics specified in the population for inclusion in the study. Exclusion criteria are characteristics of the population that are not specified in the study and therefore disqualify a potential participant.

3. Sampling - the process of selecting a representative portion of the population to represent the entire population. This is a practical and efficient means of ensuring the quality of the data to be gathered.

Sampling Terms
a) Sampling Unit - the specific area or place used during the sampling process.
b) Sampling Frame - a complete list of sampling units from which the sample is drawn.
c) Sampling Design - the scheme that specifies the number of samples drawn from the population, the inclusion and exclusion criteria for their choice, and the sampling technique used, such as purposive, random, stratified, or convenience sampling.
d) Sample Size - the total number of samples who will participate in the study after the sampling design is completed.

Slovin's Formula - used to compute the sample size from the population:
n = N / (1 + Ne²)
where:
n - sample size
N - population size
e - sampling error, ranging from 1% to 10%

Sampling Bias - occurs when the data are influenced by external factors so that they no longer represent the entire population.

Sampling error may arise when the value of one sample differs from that of another sample drawn from the same population.

4. Sample - a portion of the population from which data will be solicited for purposes of the research. It is a subgroup of the population which constitutes the subjects or respondents of the study.

Samples may be categorized as follows:
• Respondents or participants of the study
• Subjects of the study
• Key informants / volunteer samples

5. Sample Size
A large sample size is required in instances such as the following:
• Presence of variables which cannot be controlled
• Expected differences on a variable of interest among the population
• Dividing the population into subgroups
• Expected dropout rate among subjects is high
• Statistical tests used require minimum sample sizes

An absolute minimum sample size should be declared at the beginning of a study, particularly in the scope and limitations section.

In quantitative studies, power is the standard for determining sample size adequacy.

Steps in Sampling
1) Identify the target population
2) Identify the respondent population
3) Specify the inclusion and exclusion criteria
4) Specify the sampling design
5) Recruit the subjects

Inclusion criteria - provide guidelines for choosing subjects with a predetermined set of characteristics.

Exclusion criteria - characteristics that exclude a potential subject from the study.

Types of Sampling

1. Non-probability Sampling - respondents or subjects are selected in a non-random way. The researcher uses subjects available at his convenience at any time during the period of the study.

Types of Non-Probability Sampling
A) Accidental or Convenience Sampling - uses the most readily available group of people as study respondents. Also called volunteer samples.

B) Quota Sampling - divides the population into homogeneous sub-populations to ensure the presence of representative proportions of the various strata in the sample.

C) Purposive or Judgment Sampling - subjects are handpicked for inclusion in the sampling frame based on certain qualities relevant to the purposes of the study.

D) Snowball, Network, or Chain Sampling - consists of identifying persons who meet the inclusion criteria of the study, who in turn refer other individuals who may be interviewed.

E) Modal Instance Sampling - used when one wishes to investigate the thoughts and actions of typical people, and when the researcher fears that significant data about this group might be lost in a more general study.
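Slovin's formula given earlier in this chapter (n = N / (1 + Ne²)) is straightforward to compute; the population size and margin of error in this sketch are illustrative values only:

```python
import math

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to whole subjects."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# A population of 1,000 at a 5% sampling error needs about 286 respondents:
print(slovin_sample_size(1000, 0.05))  # 286
```

Rounding up keeps the achieved sampling error at or below the chosen value of e.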
Advantages of non-probability sampling
• Convenient
• Economical

Disadvantages of non-probability sampling
• Biased samples / errors in judgment
• Cannot make precise estimates about the population
• Certain elements have no chance of being included in the sample

2. Probability Sampling - random selection of subjects or elements of the population. The goal is to examine representative elements of the population.

Types of Probability Sampling
A) Simple Random Sampling - selection of samples on a random basis from a sampling frame. Each element has an equal chance or probability of being chosen as a subject of the study.

B) Stratified Random Sampling - divides the population into homogeneous subgroups from which elements are selected at random.

C) Cluster Sampling or Multi-Stage Sampling - successive selection of random samples from larger to smaller units, using either simple random or stratified random methods.

D) Systematic or Sequential Sampling - selection of a sequence according to a predetermined modality, such as every 10th name in a list of patients, or the odd- or even-numbered entries.

Advantages:
• Less bias
• Everyone has an equal chance of being selected
Disadvantages:
• Time consuming
• Expensive
• Inconvenient, and sometimes impossible to obtain

Theoretical Sampling
• Used in grounded theory studies. It seeks to discover categories and their properties in order to explain interrelationships that exist within the theory that has already been studied.
• The goal is to gain a deeper understanding of the phenomena under study and to facilitate the development of theory.
• This method is also known as "handy sampling."

CHAPTER 13

Nature of Research Instruments

Research instruments serve as measurement tools and are an integral component of any nursing research study. They should be reliable, consistent, and valid.

Types of research instruments
1. Questionnaire
• A self-directing instrument structured with questions; a paper-and-pencil approach
• The most frequently used clerical research instrument

2. Scanning Questionnaires
• Used in face-to-face interviews, mail surveys, and interviews over the telephone
• Paper questionnaires that can be scanned
• More accurate than reading a complicated questionnaire

3. Interview
• The next most used research instrument
• A one-on-one dialogue with the subject

Types of Interview
A) Structured Interview
• The interview is guided by prepared questions
B) Unstructured Interview
• Questions are asked at random

Methods of interviewing are as follows:
A) Personal Interview - the interviewer faces the interviewee
B) Telephone Surveys - most popular; respondents are easily contacted
C) Mail Surveys - allow the respondent to answer during his or her leisure time; the least expensive method
D) Computer-Directed Interviews - answers are entered directly into the computer
E) E-mail Surveys - efficient and economical; limited to simple questions
F) Internet Surveys - fast, and can gather several thousand responses within a few hours; can elicit more honest answers

4. Scales - devices designed to assign a numeric score.
Types of scales:
A) Social psychological scales - used in quantitative studies; can discriminate among people having different attitudes, fears, motives, perceptions, etc.

B) Likert scale - consists of declarative statements or items that express a viewpoint on a topic. The respondent indicates the degree of agreement or disagreement.

C) Semantic Differential scale - a technique for measuring attitudes toward the variables being measured.

D) Visual analog scale - used to measure subjective experiences of the respondents.

E) Vignettes - brief descriptions of situations to which subjects are asked to react. Usually written descriptions, but they can also be videotaped.

F) Q sorts - a sorting technique. Subjects are given a set of cards and asked to sort them along a specified bipolar dimension.

5. Self-reports
• The subject reports his or her own behavior or mental contents
• Notoriously inaccurate and unreliable
• Least accurate when the subject is asked about his or her own past

6. Anecdotal Records - personal accounts of the researcher, written in a notebook or typewritten; primary data.

7. Mechanical Instruments

Types of questions asked in the interview:
1. Structured
• Formal interview
• An interview schedule is used
• Structured and well sequenced
• Easier to organize and analyze
2. Unstructured
• Informal interview
• Needs a longer time to organize and analyze
• Questions asked: knowledge questions, opinion questions, application questions, analysis questions, synthesis questions
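The probability sampling designs described earlier in this chapter can be sketched with Python's random module; the 100-name patient frame and the two ward strata below are invented purely for illustration:

```python
import random

# Hypothetical sampling frame: a complete list of 100 patient names
frame = [f"patient_{i:03d}" for i in range(1, 101)]

# A) Simple random sampling: every element has an equal chance of selection
simple = random.sample(frame, k=10)

# B) Stratified random sampling: divide the frame into homogeneous subgroups,
#    then select at random within each stratum
strata = {"ward_a": frame[:50], "ward_b": frame[50:]}
stratified = [name for group in strata.values() for name in random.sample(group, k=5)]

# D) Systematic sampling: every 10th name in the list
systematic = frame[9::10]
```

Cluster (multi-stage) sampling would simply repeat these selections at successively smaller units, e.g. hospitals, then wards, then patients.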
Open-ended questions - respondents answer in their own words rather than choosing from responses listed in the questionnaire.

Closed-ended questions - respondents answer from a number of fixed alternative responses.

Types of closed-ended questions
A) Dichotomous Items
• Two response alternatives, such as yes or no
• Useful in gathering factual data
• Hard to analyze; used only when no other type of question is appropriate

B) Multichotomous Items
• Allow respondents to answer questions with a range of responses, such as a multiple-choice test

C) Fixed-Alternative / Multiple-Choice Items
• Allow multiple response alternatives; good when the possible responses are few and clear-cut

D) Projective Questions
• Use vague questions in an attempt to project a person's attitudes from the response
• Use fill-in-the-blank sentences
• Difficult to analyze; better used for exploratory research

E) Cafeteria Questions - respondents answer according to their own viewpoint.

F) Rank-Order Questions - respondents rank answers from most to least important.

G) Checklist
• Also known as "matrix questions"
• Two-dimensional pattern
• Questions are written horizontally while respondents answer vertically

Types of response error:
A) Telescoping error - an error resulting from the tendency of respondents to remember events as occurring more recently than they actually did. Dominates for recent events.

B) Recall loss - occurs when respondents forget that an event occurred. Dominates for events that happened in the distant past.

The Pilot Study or Pre-test of Instruments
• A pilot study is a smaller version of a proposed study, conducted to refine the methodology.
• A pilot study is also known as the Field Test or Dry Run.
• A dry run is a trial version of the study.
• It is not accessible to anyone until it is published.

Purpose of conducting a Field Test
• Determine the feasibility of the study
• Validate the instruments for measuring the variables
• Check the reliability of the instruments
• Ensure their efficiency and effectiveness
• Ensure the use of correct language
• Assess and evaluate the study procedure
• Allow revisions to be made

Criteria for Evaluating the Instrument
1. Reliability - high degree of consistency or accuracy
2. Validity - measures precisely what it intends to measure
3. Efficiency - works within the desired time frame
4. Sensitivity - distinguishes characteristics of the individuals under study
5. Objectivity - able to gather factual and impartial data
6. Speed - quick, fast, and complete
7. Reactivity - should not influence the attributes being measured
8. Simplicity - clear and simple; easy to understand
9. Meaningfulness - valuable and practical

CHAPTER 14

Nature of Data Collection

Data collection is the precise, systematic process of gathering information relevant to the purpose of the research, or to the specific objectives, questions, or hypotheses of the study.

Types of Research Data
1. Cross-sectional Data
• Gathers present data on events occurring at that time
• Limited to the subjects at one point in time
• Best gathered when the time frame is of short duration

2. Retrospective Data
• Frequently called ex post facto studies
• Data are collected on events in the past, before the study design is completed, "after the fact"
• Events have occurred prior to the initiation of the study
• Results are limited by the availability of data

3. Prospective Data
• Refers to future data, or events that occur after the study design has been completed
• Prospective and retrospective studies are sometimes called longitudinal studies because they extend over a long period of time
• All experimental studies are prospective

Categories of Data Collection
1. Primary Data Collection
• The researcher personally collects data from the actual respondents

2. Secondary Data Collection
• The researcher uses data that were collected for another purpose
• Easier to collect and gather than primary data

Steps in data collection
1. Explain
2. Clarify
3. Administer
4. Describe

Methods of Collecting Data

1. Use of Already Existing or Available Data
a) Raw Data - from basic documents such as records of patients' admissions, birth dates, and discharges.

b) Tabular Data - indicate the number of patients admitted or discharged by year or month, the total number of deliveries, surgeries, and so on.

2. Use of Observers' Data

Types of Observers
a) Non-Participant Observer - does not share the same milieu with the subjects and is not a member of the group under study.

Types of Non-Participant Observers
1. Overt non-participant observer - identifies herself and her task of conducting research, informing the subjects of the type of data to be collected.
2. Covert non-participant observer - does not identify herself to the subjects she will observe.

b) Participant Observer - shares the same milieu and is better acquainted with the subjects.

Types of Participant Observers
1. Overt participant observer - is involved with the subjects, who have full knowledge and awareness that they are being observed.
2. Covert participant observer - interacts with the subjects and observes their behavior without their knowledge ("spying").

Two Methods of Observation
A) Structured Observation - the researcher has prior knowledge of the phenomenon of interest.
B) Unstructured Observation - the researcher attempts to describe events with no preconceived ideas of what will be seen; requires a high degree of attention and concentration.

3. Use of the Self-Recording or Reporting Approach

4. Use of the Delphi Technique - uses a series of questionnaires to gather a consensus of opinions from a group of experts.

Types of Delphi Technique
• Classic Delphi - questions are presented to a panel of informed individuals in a scientific field, asking their opinions on a particular issue or problem.
• Modified Delphi - uses interviews or focus groups to gather opinions on certain issues or trends.
• Policy Delphi - used mostly in organizations to examine and explore policy issues.
• Real-time Delphi - uses a structured time, avoiding the delays caused by pen-and-paper rounds; a face-to-face analysis is done by the researcher.
• E-Delphi - uses electronic means such as e-mail.

5. Critical Incident Technique - a set of principles for collecting data on observable human activities.

Coding - the process of transforming data into numerical symbols.

CHAPTER 15

Nature of Measurement of Variables
• Measurement is a procedure for assigning numerical values to variables.
• Numbers have quantitative meaning and are amenable to statistical analysis.
• "Level of measurement" and "scale of measure" are expressions that typically refer to the theory of scale types.
• There are four different types of scales: nominal, ordinal, interval, and ratio.
• Levels of measurement are determining factors of the type of statistics to be used in analyzing data.

1. Quantitative Measurement of Variables
• Data are defined in such a way that they can be explained according to the scale of measurement.
• Scale of Measurement - a device that assigns code numbers to subjects. Code numbers range from 0 to 100.
• Data in numerical form are easily manipulated and analyzed.

2. Qualitative Description of Variables in the Descriptive Analysis Phase

A) Nominal scale
• Used to classify variables or events into categories that cannot be ranked
• The lowest level of measurement, assigning characteristics of variables into categories
• Not measured on a scale
• An observation cannot be in more than one category
• The numbers are called categorical or classified variables
• If nominal data fall into two categories, they are referred to as binomial or dichotomous data

B) Ordinal Scale
• Used to show relative rankings of variables or events
• Used in ordering observations according to magnitude or intensity
• Rank-ordered from most to least, or highest to lowest

Types of Ordinal Scale
1. Likert Scale - respondents are asked to indicate the degree to which they agree or disagree with the ideas expressed by the indicator. Used to assess the attitude of respondents toward the variables.

2. Graphic Rating Scale - respondents are asked to respond on a bipolar continuum, such as from highest to lowest or most to least. A simple line on which one marks an X anywhere between the extremes, with an infinite number of places where the X can be placed.

3. Guttman Scale - used to assess the attitudes of respondents using a continuum of cumulative statements.

4. Semantic Differential Scale - used to measure the meaning of concepts, to determine the emotional, evaluative component of the respondents' attitude. Respondents are asked to rate their attitude.

a) Interval Measurement
• Shows the ranking of variables or events on a scale with equal intervals between the numbers
• Consists of real numbers
• Specifies the distance between ranks

b) Ratio Measurement
• Shows the ranking of variables or events on scales with equal intervals and an absolute zero.

Measurement Problems
1. A misplaced belief in precision
2. Measures that go against social conditions
3. The operational definition does not correspond to the conceptual definition
4. The researcher depends on certain statistics

Errors in Measurement
• Measurement error - the difference between the actual amount of the attribute (true score) and the amount of the attribute that was measured (observed score).
• Measurement error is a threat to internal validity, which leads to spurious or inaccurate data.

Types of Measurement Error
1. Random error - expected, and affected by a host of influences that may be present in an experiment. May be due to human factors such as confusion, bias, and variation.

2. Systematic error - seriously affects the results of a research study; the measure appears to be accurate while remaining consistently biased.

Sources of Measurement Error
• Environmental contaminants
• Variation in personal factors
• Quality of responses
• Variations in data collection
• Clarity of the instrument
• Sampling of items
• Format of the instrument

STRATEGIES TO MINIMIZE MEASUREMENT ERRORS

1. Measurability or Reliability
• The reliability of an instrument is the consistency with which it measures the attribute, like a BP apparatus that gives a reading of 120/80 at one minute and 130/80 after 5 minutes.
• Reliability also reflects the accuracy of an instrument: it shows the true score and minimizes the error component of the obtained scores.
• Reliability affects the precision of a measure.

a) Stability of measurement
• The extent to which the same scores are obtained when the instrument is used with the same samples on separate occasions.
• This is derived through test-retest reliability procedures.
• The instrument is administered to a sample on two occasions, and the scores are compared by computing the reliability coefficient, which measures the magnitude of the test's reliability.

b) Internal Consistency
• The instrument shows that all indicators or subparts measure the same characteristics or attributes of the variables.
• The split-half technique and Cronbach's alpha (coefficient alpha) are the two ways to measure internal consistency.

c) Equivalence
• The instrument shows high consistency, agreement, and congruence among the observations or ratings of different observers or raters of a certain phenomenon; hence the scores are likely to be accurate and reliable.

2. The split-half technique
• Used to determine the homogeneity of items in the instrument: the items are split in half and a correlational procedure is performed between the two halves.
• Measures the items on a scale that are split into two groups.

3. Cronbach's alpha or coefficient alpha
• A technique that gives an estimate of the split-half correlation for all the ways of dividing the measure into two halves, not just odd versus even items.
• The higher the reliability coefficient, the more accurate the measure of internal consistency.

4. Kuder-Richardson Coefficient
• Determines the homogeneity of an instrument that uses a dichotomous response format.

5. Validity
• The degree to which the instrument measures what it intends to measure.
• Face validity is one aspect to consider: the instrument should look like it is measuring the appropriate concepts.

4 Types of Validity
a) Content Validity
• Concerned with the adequacy of the content area being measured.
• Relevant for measuring cognitive and affective domains and complex psychosocial traits, and may be based on judgment.

b) Face Validity
• The instrument appears to measure the concepts or variables desired by the researcher in the study.
• An intuitive type of validity.
• Useful in pre-testing instruments to determine the readability and clarity of the items being measured before an actual investigation is done.

c) Criterion-Related Validity
• The degree to which scores on an instrument are correlated with some external criterion.
• The instrument is valid if it corresponds strongly with scores on the criterion.
• Predictive validity - the instrument's ability to differentiate between respondents' performance or behavior on some future criterion or outcome.
• Concurrent validity - the instrument's ability to measure differences in respondents' present status on some criterion.

d) Construct Validity
• The process of reducing the abstraction of a concept into a more concrete and suitable criterion-related validation.
• A hypothesis-testing approach is then used to validate the instrument.
• Known-groups technique - differences in the groups' attributes are measured and the scores are compared.
• Factor analysis - used to identify and group or cluster together related items on a scale to measure the different attributes.
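Coefficient alpha, described under internal consistency above, is usually written α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total)). A minimal sketch using population variances and invented scores (a real study would normally use a statistics package):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, each listing every respondent's score."""
    k = len(item_scores)
    sum_item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # each respondent's total score
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Two items that always agree yield perfect internal consistency:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Higher values indicate that the items measure the same underlying attribute, matching the interpretation given above.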
6. Sensitivity and Specificity
• Sensitivity is the ability of the instrument to correctly screen for or identify the variables to be manipulated and measured, to diagnose the condition.
• Specificity is the ability of the instrument to correctly identify non-cases or extraneous variables, and to screen out those conditions not necessary for manipulation.

Assessing Qualitative Data
1. Credibility - refers to confidence in the truthfulness of the data and their interpretations.

Steps to demonstrate credibility are:
a) Prolonged engagement
b) Persistent observation
c) Triangulation
d) Peer debriefing and member checks
e) Search for disconfirming evidence

2. Dependability - concerned with the stability of qualitative data over a long period of time.

3. Confirmability - refers to the objectivity and neutrality of the data, showing congruence between two or more independent sources to determine the accuracy, relevance, and meaning of the data.

4. Transferability - refers to the extent to which research findings can be transferred or applied to other settings.

5. Authenticity - a criterion for establishing the trustworthiness of qualitative studies. Refers to the extent to which the researcher describes truthfully and accurately the varied existence of different realities.

Nature of Statistics
• Statistics is a branch of mathematics used to summarize, organize, present, analyze, and interpret numerical data.

Kinds of Statistics
• Statistical methods can be used to summarize or describe a collection of data; this is called descriptive statistics.
• Descriptive statistics are useful in research, particularly when communicating the results of experiments.
• Randomness and uncertainty in the observations may be used to draw inferences about the process or population being studied; this is called inferential statistics.
• Inference is a vital element of scientific research, since it predicts a phenomenon based on data, leading to theory development.
• Descriptive statistics and inferential (or predictive) statistics together comprise applied statistics.

1. Descriptive Statistics or Summary Statistics - intended to organize and summarize numerical data from the population and sample.

Uses of Descriptive Statistics
a) Measure and condense data in:
• Frequency distribution - scores are listed from highest to lowest or from lowest to highest
• Graphic presentation - data are presented in graphic form

2. Inferential Statistics - concerned with the population, and with the use of sample data to predict future occurrences.

Uses of Inferential Statistics
a) To estimate a population parameter, the following are considered:
• Sampling error
• Sampling distribution
• Sampling bias

b) Testing the Null Hypothesis
• State the research hypothesis
• State the null hypothesis to be tested
• Choose the appropriate statistical test for the given data
• Determine the level of significance for the difference, relationship, or correlation between the given variables

• Normal curve - a theoretical frequency distribution of all possible values in a population, although no real distribution exactly fits the normal curve.
• Non-directional hypothesis - assumes that an extreme score can occur in either tail of the normal curve.
• One-tailed test of significance - tests a directional hypothesis, with the extreme statistical values occurring in a single tail of the curve.
• Two-tailed test of significance - analysis of a non-directional hypothesis.

3. Use of Decision Theory
• A theory based on the assumptions associated with the theoretical normal curve, used in testing for differences between groups, with the expectation that all the groups are members of the same population.
• The level of significance is set at 0.05 before data collection.

Two types of errors can occur:
• Type I error - the null hypothesis is rejected when in reality it is true. More likely to occur at a significance level of 0.05 than at 0.01; the risk of a Type I error decreases as the level of significance becomes more extreme.
• Type II error - occurs when the null hypothesis is regarded as true but is in fact false. There is a greater risk of this error at a significance level of 0.01 than at 0.05.

4. Power Analysis
• A way to control for Type II error.
• Determines the probability that the statistical test will detect significant differences that exist.

5. Degrees of Freedom (DOF)
• The interpretation of a statistical test depends on the number of values that are free to vary.

6. Frequency Distribution
• A method of organizing research data. Types of frequency distribution include:
• Grouped frequency distribution - used with nominal data or when continuous variables are being examined.
• Ungrouped frequency distribution - used when data are categorized and presented in tabular form to display all numerical values obtained for a particular variable.

7. Percentage Distribution
• Shows the percentage of subjects in a sample whose scores fall into a specific group, and the number of scores in that group.

Statistical Tools for Treatment of Data
1. Percentage (P) - computed to determine the proportion of a part to a whole.
2. Ranking - used to determine the order of decreasing or increasing magnitude of variables. The largest frequency is ranked 1, and so on.

3. Weighted Mean (WM) - refers to the overall average of the responses or perceptions of the study respondents.

4. Measures of Central Tendency
a) Arithmetic mean or average - describes the central tendency of the given criteria or variables.
b) Median - the central position. Arrange the ratings from highest to lowest or vice versa; the median is the middle score.
c) Mode - the score with the greatest number of observations.

5. Measures of Dispersion
a) Range - the simplest measure of dispersion; obtained by subtracting the lowest score from the highest score.
b) Standard deviation - determines the homogeneity (sameness of degree or dimension) of the given variables, or their heterogeneity (degree of dispersal or variance).
c) Variance - the standard deviation multiplied by itself; largely used for purposes of reference rather than for data description.
d) Interquartile range - for a set of ordered numbers in a series, the difference between the numbers at the 75th and 25th percentiles.

6. T-test - compares the responses of two respondent groups. Used to test for significant differences between two samples.

7. Analysis of Variance (ANOVA) - tests the differences between means, and can be used to examine data from two or more groups.
• One-Way Analysis of Variance is a statistical procedure for testing mean differences among three or more groups.
• Two-Way Analysis of Variance is used to test the effects of two independent variables on a dependent variable. This is commonly used in factorial designs.

8. Factor analysis - examines the relationships among a large number of variables and isolates those relationships to identify the clusters of variables that are most closely linked. Aids in identifying theoretical constructs and confirming their adequacy.

9. Regression Analysis - used to predict the value of one variable when the values of one or more other variables are known. The variable to be predicted in regression analysis is referred to as the dependent variable.

10. Multiple Regression Analysis - used to correlate more than two variables.

11. The Complete Randomized Block Design - the same as ANOVA, except that complete blocks are used instead of items.

CHAPTER 16

Nature of Data Analysis
• Data analysis is a process of sorting, reducing, organizing, and giving meaning to the data collected. The techniques of data analysis include descriptive and inferential statistics.

Strategy for Quantitative Analysis
• Quantitative analysis - the process of organizing numerical data through a series of steps for purposes of analysis and interpretation.

1. Pre-analysis phase - consists of various activities such as clerical and administrative tasks.

2. Preliminary Assessment of Data - the researcher outlines activities to anticipate major problems that may arise while dealing with the data.

3. Preliminary Actions - the researcher prepares the data for computer use prior to analysis and interpretation.

4. Principal Analysis - the researcher can now proceed with substantive data analysis, the data having been cleaned, problems regarding missing data resolved, and data transformations completed.

5. Interpretation of Results - research findings must have substantive bearing on the objectives of the study, existing theories, related literature, and research methods.
a) Credibility of the results - studies must be based on truthfulness.
b) Meaningfulness of the results - outcomes must be within the context.
c) Importance of the results - research findings are significant or important.
d) Generalizability of the results - applicable to a wider population.
e) Implications of the results - the credibility, meaning, importance, and generalizability of the results.

Writing a Research Report

1. Findings of the Study - study results are presented based on empirical data or facts. Data must be presented objectively and written in the present or past tense. Descriptive statistics are used to present findings, while inferential techniques are used to predict and generalize results after the hypotheses are tested.

2. Presentation of Findings
a) Textual form - consists of statements with numerals or numbers to describe the data, supported by direct quotes and a summary of findings.

b) Tabular form - used for organizing and presenting data in a systematic way, in which numerical or statistical data are arranged in rows and columns to make them easily understood and interpreted.
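The measures of central tendency and dispersion listed among the statistical tools above map directly onto Python's statistics module; the five ratings below are invented for illustration:

```python
import statistics

scores = [80, 85, 85, 90, 95]  # hypothetical ratings from five respondents

mean = statistics.mean(scores)            # arithmetic mean: 87
median = statistics.median(scores)        # middle score of the ordered list: 85
mode = statistics.mode(scores)            # most frequent score: 85
score_range = max(scores) - min(scores)   # simplest dispersion measure: 15
variance = statistics.variance(scores)    # sample variance: 32.5
sd = statistics.stdev(scores)             # standard deviation = sqrt(variance)
```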

Parts of the Statistical Table
1. Table number
2. Title
3. Headnote - written below the title
4. Stub - contains the subhead and the row labels
5. Box head - contains the master captions that describe the column captions
6. Main body, field, or text - contains all the quantitative and qualitative information
7. Footnote - appears immediately below the bottom line of the table
8. Textual presentation of the tabular data

• Figures - visual presentations of processed data. They include graphs, diagrams, line drawings, and photographs.

3. Interpretation of Findings - a subjective section of a research report which allows the researcher to discuss the findings; it also explains the results, and is considered a concrete means of presenting research results.

4. Interpretation of Results of the Test on the Null Hypothesis - involves examining the results from data analysis and forming conclusions. Research hypotheses predict study results based on theory or on previous research studies. Inferential statistics are used to either accept or reject the null hypothesis.

5. Conclusions - the logical outgrowth of the summary of findings; consist of conceptualizations and generalizations in response to the problems raised in the study. Focus on the answers to the major problem.

6. Recommendations - suggest solutions to the problems, to prevent their occurrence or minimize their impact or effect. Also intended to improve the particular discipline or field of study.
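As a closing illustration, the t-test listed among the statistical tools (comparing the responses of two respondent groups) can be sketched in its equal-variance (pooled) form; the scores are invented, and a real analysis would also look up the p-value for the resulting t:

```python
from statistics import mean, variance

def pooled_t(group1, group2):
    """Student's t statistic for two independent samples, assuming equal variances."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance combines both groups' sample variances
    sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

control = [70, 75, 72, 68, 74]       # hypothetical scores, group 1
experimental = [80, 78, 83, 79, 81]  # hypothetical scores, group 2
t = pooled_t(experimental, control)  # a large |t| suggests a significant difference
```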