
Chapter Six

DATA ANALYSIS (Quantitative, Qualitative, Mixed)
Statistics
Contents
• Introduction to Statistics
• Methods of Representing Data
• Measures of Central Tendency
• Measures of Variability or Spread
• Measures of Association/Correlation
• The Testing of Hypothesis
– Normal curve
– Z-test
– The t-test
– ANOVA
– Chi-square test
• Regression
– Linear
– Multiple
What to measure?
• To test hypotheses we need to measure variables. Variables are things that can
change or vary.
– Example: IQ, Behaviour, Location, Mood, Achievement, time, etc.

• Most hypotheses can be expressed in terms of two variables: a proposed cause and
a proposed outcome.
• Independent variable: A variable thought to be the cause of some effect. This
term is usually used in experimental research to denote a variable that the
experimenter has manipulated. Also called a predictor variable.

• Dependent variable: A variable thought to be affected by
changes in an independent variable. You can think of this variable
as an outcome. Also called an outcome variable.
Introduction to Statistics
• Statistics involves:
– Observation,

– Collection of data,

– Organisation of data,
– Presentation of data,

– Analysis of data,

– Interpretation of data, and

– Decision making.
Why study statistics?
• Statistical tests simply provide a tool for analysing the results
of any research.
• They are vital to the research process.
Types of Statistics
• There are two major components of the discipline of
Statistics:
– Descriptive statistics
– Inferential statistics
Descriptive statistics:
Methods of organising, presenting, and summarising data in a
convenient and informative way.
Example: mean, median, standard deviation, correlations,
percentages, etc.
Inferential statistics
• methods used to draw conclusions or inferences about
characteristics of populations based on sample data.
– Example: t-tests, ANOVA, Factor analysis, Regression analysis,
chi-square, etc.
Levels/scales of measurement

| | Nominal (categorical) | Ordinal (categorical) | Interval (continuous) | Ratio (continuous) |
| Function | Labels categories | Labels and ranks categories | Numeric scale without true zero | Numeric scale with true zero |
| Example | Male: 1, Female: 2 | V.diss: 1, Diss: 2, Satis: 3, V.satis: 4 | Temperature, test scores | Age, annual income |
| Calculations? | Do not use in arithmetic calculations | Caution with arithmetic calculations | OK to use in most calculations | OK to use in most calculations |
| Frequencies | ✓ | ✓ | ✓ | ✓ |
| Mode | ✓ | ✓ | ✓ | ✓ |
| Median | – | ✓ | ✓ | ✓ |
| Mean | – | – | ✓ | ✓ |
| Max/min | – | ✓ | ✓ | ✓ |
| Range | – | – | ✓ | ✓ |
| Standard deviation | – | – | ✓ | ✓ |
| Variance | – | – | ✓ | ✓ |
| Skewness | – | – | ✓ | ✓ |
| Kurtosis | – | – | ✓ | ✓ |
| Crosstabs | ✓ | ✓ | – | – |
| Suitable graphs | Bar chart, pie chart | Bar chart | Histogram, box plot, scatter plot | Histogram, box plot, scatter plot |
Methods of representing data
• Sequencing

• Tables

• Frequency distribution

• Graphs

• Measures of central tendency

• Measures of variation/spread/dispersion

• Measures of relative position


Sequencing (array)
• Arranging data in order of magnitude: ascending or descending

– Example:
If data consists of names, arrange in alphabetical order.
If they consist of objects, events, animals, etc., arrange according
to kinds, species, groups, etc.
Raw score

• 10, 15, 18, 12, 14, 15, 20, 15, 16, 11, 12,
14, 19, 20, 17, 18, 15, 13, 11, 12, 19, 13,
10, 14, 17, 19, 16, 15, 15, 15.
Frequency distribution

Score | Frequency
10 | 2
11 | 2
12 | 3
13 | 2
14 | 3
15 | 7
16 | 2
17 | 2
18 | 2
19 | 3
20 | 2
Total | 30
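As a quick check, a short Python sketch (standard library only) reproduces this frequency table from the raw scores:

```python
from collections import Counter

scores = [10, 15, 18, 12, 14, 15, 20, 15, 16, 11, 12, 14, 19, 20, 17,
          18, 15, 13, 11, 12, 19, 13, 10, 14, 17, 19, 16, 15, 15, 15]

# Count how often each score occurs, listed in ascending order of score
for score, freq in sorted(Counter(scores).items()):
    print(score, freq)

print("Total:", len(scores))  # 30
```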
Measures of central tendency (MCT)
• The central tendency of a distribution is an estimate of the centre
of a distribution of values.
• Measures of central tendency aim to quantify the “typical” or
“average” score in a data set.
• Three major types of estimates are distinguished:
– The Mode – in a distribution of data is simply the score that
occurs most frequently.
– The Median – of a distribution is the value that cuts the
distribution exactly in half (by definition the 50th percentile) –
median position = (n+1)/2.
– The Arithmetic Mean (M) – is the average: technically, the sum
of all data scores divided by the number of scores (n).
Sample Mean
• N = population size
• n = sample size

• Mean: $\bar{X} = \dfrac{\sum_{i=1}^{n} X_i}{n}$

• Interpretation: The sample mean is the average of a set of
scores. It estimates the population mean, μ.
Example to estimate central tendency

Scores: 2, 4, 5, 3, 2, 2, 4, 5, 1, 1, 1, 2, 3, 2, 3 (n = ….)
Sorted: 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5

Mode: most frequent score

Median: median position = (n+1)/2

Mean: average = sum of Scores/n


Example to estimate central tendency

Scores: 2, 4, 5, 3, 2, 2, 4, 5, 1, 1, 1, 2, 3, 2, 3 (n = 15)
Sorted: 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5

Mode: most frequent score = 2

Median: median position = (n + 1)/2 = 8th sorted score → median = 2

Mean: average = sum of scores/n = 40/15 ≈ 2.67
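To verify these three estimates, a minimal Python sketch using the standard-library statistics module:

```python
from statistics import mean, median, mode

scores = [2, 4, 5, 3, 2, 2, 4, 5, 1, 1, 1, 2, 3, 2, 3]

print(mode(scores))            # 2 -- the most frequent score
print(median(scores))          # 2 -- the 8th of the 15 sorted scores
print(round(mean(scores), 2))  # 2.67 -- 40 / 15
```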


Example to estimate central tendency

Scores: 2,3,4,4,5,6,7,2,4,3,4,4,5 (n = ….)


Sorted: 2,2,3,3,4,4,4,4,4,5,5,6,7
Mode: most frequent score = ?
Median: median position = (n+1)/2 = ?
Mean: average = sum of Scores/n = ?

Exercise: find the MCT for the following data:


108, 103, 252, 121, 93, 57, 40, 53, 22, 116, 98.
Remove the outlier (252) and find the mean and compare it with the
first mean.
Properties of the mean
• The mean is sensitive to the exact value of all the
scores in the distribution.
• The sum of the deviations about the mean equals
zero.
• The mean is very sensitive to extreme scores.
• The sum of the squared deviations of all the scores
about their mean is a minimum.
• Under most circumstances, of the measures used
for central tendency, the mean is least subject to
sampling variation.
Demonstrations

Xi = 2, 4, 6, 8; mean = 5

| Xi | (Xi − 3)² | (Xi − 4)² | (Xi − 5)² | (Xi − 6)² | (Xi − 7)² |
| 2 | 1 | 4 | 9 | 16 | 25 |
| 4 | 1 | 0 | 1 | 4 | 9 |
| 6 | 9 | 4 | 1 | 0 | 1 |
| 8 | 25 | 16 | 9 | 4 | 1 |
| Sum | 36 | 24 | 20 | 24 | 36 |

The sum of squared deviations is smallest (20) when taken about the mean (5).
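The table can be checked with a few lines of Python; the sum of squared deviations is computed about each candidate centre and is smallest about the mean:

```python
xs = [2, 4, 6, 8]

for c in [3, 4, 5, 6, 7]:
    ss = sum((x - c) ** 2 for x in xs)
    print(c, ss)  # 36, 24, 20, 24, 36 -- minimum at c = 5, the mean
```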
Measures of statistical variability

• The statistical variability of a distribution is an estimate of


how variable (dispersed) the scores tend to be.
• Measures of statistical variability aim to quantify the
“variability” of the scores in a data set.
• They transform the original scores to deviations from the mean.
• Four major types of estimates are distinguished:
– Range
– Sum of Squares (SS)
– Variance (s² or var)
– Standard Deviation (s or SD)
Statistical variability: Range

• The Range is simply the highest value (maximum) minus the
lowest value (minimum).
• Example:
Scores: 2, 4, 5, 3, 2, 2, 4, 5, 1, 1, 1, 2, 3, 2, 3
Sorted: 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5

Maximum = 5
Minimum = 1
Range = 4
Although the range …
• gives an idea of how far spread out the data are
– a higher range means the data are more spread apart
• and can be used to compare sample ranges to see which is spread the most,
• BUT the range can be fooled by extreme values (both data sets below have range = 10)

[Figure: two bar charts of data sets that both have range = 10,
illustrating how extreme values can make the range misleading.]
Statistical variability: Sum of Squares

Raw scores: 1, 2, 2, 3, 4.
Mean = (1 + 2 + 2 + 3 + 4) / 5 = 2.4
SS = (1 − 2.4)² + (2 − 2.4)² + (2 − 2.4)² + (3 − 2.4)² + (4 − 2.4)² = 5.2
Statistical variability: Sum of Squares

• With SS we convert each score to its difference from the mean
and square these differences.
• The more the scores differ from the mean, the larger the SS;
the less they differ from the mean, the smaller the SS.
• SS informs us about the difference between the observed scores
(real data) and the mean (hypothetical model), BUT squared, so
that positive and negative deviations do not cancel out.
• Deviations are also described as ERRORS; consequently, the SS
is also labelled the Sum of Squared Errors.
• The more data points, the higher the SS – to avoid large
numbers, the variance is introduced.
Sample Variance
• Variance is found from the squared differences of each
score from the mean:

$S^2 = \dfrac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}$

• Interpretation: the variance tells us about variability in
terms of distance from the mean, but it is a squared value.
It represents the “average” squared deviation from the mean,
and it estimates σ² in the population.
Statistical variability: Variance

The variance can be thought of as an averaged sum of squares
(errors).
While important in many statistical computations, the variance
has one problem: it represents a measure in squared units
(e.g., scores squared rather than scores).
Statistical variability: Standard Deviation

Standard Deviation (s or SD) is the square root of the


variance. It represents a measure of how well the MEAN
represents the data. Small SDs indicate that the single
scores are close to the Mean, while large SDs indicate
that the single scores are distant from the Mean.
Sample Standard Deviation
• The standard deviation is the square root of the variance:

$S = \sqrt{S^2} = \sqrt{\dfrac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}}$

• Interpretation: The standard deviation is in the original
units of the variable (that is, not squared). It tells us
about dispersion, or how spread out the scores are. It
represents the “average” deviation from the mean, and it
estimates σ in the population.
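A short Python sketch (standard library only) tying range, SS, variance, and standard deviation together for the earlier example scores:

```python
from math import sqrt

scores = [2, 4, 5, 3, 2, 2, 4, 5, 1, 1, 1, 2, 3, 2, 3]
n = len(scores)
mean = sum(scores) / n                     # ~2.67

ss = sum((x - mean) ** 2 for x in scores)  # Sum of Squares, ~25.33
var = ss / (n - 1)                         # sample variance S^2, ~1.81
sd = sqrt(var)                             # sample standard deviation S, ~1.35

print(max(scores) - min(scores))           # range = 4
print(round(ss, 2), round(var, 2), round(sd, 2))
```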
Shape of frequency distributions
• Normal distribution
• Skewed distribution
• Kurtosis
Normal distribution
Mean = median = mode.
Skewness = kurtosis = 0.
Bell-shaped.
The majority of scores lie at the centre.
Frequencies get smaller as we move away from the centre.
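These properties can be checked empirically; a small sketch (assuming numpy and scipy are available) simulates a large normal sample and confirms that mean ≈ median and that skewness and excess kurtosis are near zero:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=10, size=100_000)  # simulated normal scores

print(round(np.mean(data), 1), round(np.median(data), 1))  # both ~50
print(round(stats.skew(data), 2))      # ~0 for a normal distribution
print(round(stats.kurtosis(data), 2))  # excess kurtosis ~0 (Fisher definition)
```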
Skewed distribution
Not symmetrical.
• Positively skewed: most frequent values clustered at the left
(lower) end of the scale; mode < median < mean.
• Negatively skewed: most frequent values clustered at the right
(upper) end of the scale; mean < median < mode.
Distributions with positive kurtosis (leptokurtic, left figure)
and negative kurtosis (platykurtic, right figure)
• Leptokurtic: small standard deviation; pointy distribution.
• Platykurtic: high standard deviation; flatter distribution.
Measures of distribution shape

• Frequency distributions come in many different shapes and


sizes;
• What is the most common distribution (in
nature)/ideal/reference distribution?
• The Normal Distribution
– is characterised by the bell-shaped curve
– majority of scores are all around the central tendency values (mean,
mode, median)
– symmetric curve: mean, mode and median are the same
– the further we move away from the centre, the smaller the
number of scores; as scores deviate from the centre (mean),
their frequency decreases
Normal Distribution

(1) Symmetrical Distribution – left half of the distribution is a mirror of


the right half
(2) Majority of scores occur near centre
(3) Mean, Median and Mode are the same
Standard Normal Distribution

Standard Normal Distribution =


z Distribution
Mean = 0
SD = 1
Probability for particular scores
Standard score
• Definition
• Properties
• Assumptions
• Purpose/uses
• Area under the normal curve
– The 68–95–99.7 rule
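A minimal sketch of the standard score and the 68–95–99.7 rule, using the standard-library NormalDist (the example values are illustrative):

```python
from statistics import NormalDist

# Standard score: z = (x - mean) / SD (illustrative values)
x, mu, sd = 130, 100, 15
z = (x - mu) / sd
print(z)  # 2.0 -- the score lies 2 SDs above the mean

# Area under the standard normal curve within 1, 2, and 3 SDs of the mean
std = NormalDist(0, 1)
for k in (1, 2, 3):
    print(k, round(std.cdf(k) - std.cdf(-k), 4))  # ~0.6827, 0.9545, 0.9973
```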
Deviations from Normal: Skewness
Deviations from Normal: Kurtosis

platykurtic leptokurtic
What are we ideally looking for?

• We are looking for a symmetrical distribution, which is not


skewed and which is not too pointy or too flat.

• If the mean represents the data well then most of the scores
will cluster close to the mean and the resulting standard
deviation is small relative to the mean.
Is my sample representative of the population?

• The SD and the frequency distribution inform us how well the
mean represents the data, BUT we collect data from samples
rather than the entire population, AND samples can differ.
• The question is: how well does a particular sample represent
the population?
• Difference between sampling technique and statistics
• Statistical estimate: STANDARD ERROR (which is a different
concept than the standard deviation)
• Let us imagine we would take various samples from a
population
Standard Deviation and the shape of distribution
Standard Error

From each sample we can calculate the sample mean and compare
it to the population mean. This comparison informs us about the
deviation of the sample means from the population mean = the
Standard Error of the mean (SE).
In reality we cannot collect hundreds of samples, and therefore
we rely on approximations of the standard error (with large
sample sizes). Clever statisticians have developed ways to
calculate the SE from the SD using the sample size:

SE = SD/√N
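The logic of the SE can be demonstrated by simulation; a sketch (standard library only, simulated population) compares the SD of many sample means with SD/√n:

```python
import random
from statistics import mean, stdev

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]

n = 25
# Draw many samples and record each sample mean
sample_means = [mean(random.sample(population, n)) for _ in range(2_000)]

print(round(stdev(sample_means), 2))  # empirical SE of the mean
print(round(10 / n ** 0.5, 2))        # SD / sqrt(n) = 2.0 -- close to the above
```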
Standard error
• What is the difference between std. error and
std.?
– Std. error is the standard deviation of the sampling
distribution of the mean.
– SD tells the researcher how spread out the
responses are -- are they concentrated around the
mean, or scattered far & wide? Did all of your
respondents rate your product in the middle of your
scale, or did some love it and some hate it?
95% Confidence Intervals
• We use the mean and standard deviation to create
confidence intervals for the population mean.
• Can use Z-distribution if data are normally
distributed, or t-distribution if data are
approximately normally distributed
• We will use the t distribution (with n − 1 = 14 df)
• The CI for μ looks like:

$\bar{X} \pm t_{df,\,\alpha/2} \left( \dfrac{S}{\sqrt{n}} \right)$

$90.67 \pm 2.145 \left( \dfrac{39.10}{\sqrt{15}} \right) \;\Rightarrow\; 90.67 \pm 2.145\,(10.10)$

$90.67 \pm 21.66 \;\Rightarrow\; (69.01,\ 112.33)$
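The same interval can be reproduced in Python using the slide’s values (the critical t of 2.145 comes from a t table for df = 14):

```python
from math import sqrt

n, xbar, s = 15, 90.67, 39.10   # PPVT example: sample size, mean, SD
t_crit = 2.145                  # t critical value, df = 14, two-sided alpha = .05

se = s / sqrt(n)                # standard error of the mean, ~10.10
half_width = t_crit * se        # ~21.66
print(round(xbar - half_width, 2), round(xbar + half_width, 2))  # 69.01 112.33
```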
95% CI for m, for PPVT (n=15)
• Based on our sample results, we are 95% confident
that the true average PPVT score for the population
from which the sample was drawn, is between 69.01
and 112.33.
• Recall that the expression $S/\sqrt{n}$ is referred to as the
standard error of the mean (here $S_{\bar{X}} = 10.10$).
• It tells us about precision: how close our estimate may be to
the true population value.
• It is somewhat large here because the sample is very
small (n = 15).
Inferential statistics
Population and samples revisited
• Statistical inference is the process by which we
acquire information about populations from
samples.
[Diagram: a random sample is drawn from the population; a
statistic describes the sample, a parameter describes the
population.]
Symbolic notation for some sample and
population measures
Statistical measure | Sample statistic | Population parameter | Data type
Size | n | N | Qualitative/Quantitative
Mean | x̄ | μ | Quantitative
Variance | s² | σ² | Quantitative
Standard deviation | s | σ | Quantitative
Proportion | p̂ | p | Qualitative
Inferential Statistics
With inferential statistics we are trying to reach conclusions
that extend beyond the immediate data alone; whereas with
descriptive statistics we simply describe what’s going on in
our data.
Two methods of inferential statistics are distinguished:
1. To infer from the sample data to population - the estimation
of parameter(s)
2. To judge whether an observed difference/relationship is a
dependable difference/relationship (systematic) or one that
might have happened by chance – the testing of statistical
hypotheses
Statistical Hypothesis Testing

Statistical hypothesis testing (significance testing) is used to


make a judgement about a claim (assumption/hypothesis) by
addressing the question:
Can the observed difference/relationship be attributed to
chance?
Statistical hypothesis testing is conducted in four steps:
(1) Null Hypothesis
(2) Test statistic
(3) p Value and conclusion
(4) Interpretation
Null Hypothesis

When we conduct research that is hypothesis-driven, we can
make three kinds of predictions:
(1) non-directional predictions – there will be
differences/relationships but we do not state the directions of
the differences/relationships
Example: there is a relationship between A and B
(2) directional predictions – there will be
differences/relationships and we state the directions of the
differences/relationships
Example: A predicts B
(3) Null Hypothesis – there are no differences and
relationships
Example: A is not related to B
Null Hypothesis

The Null hypothesis is named null hypothesis because it can


be tested and thus nullified.
Consequently, to confirm a research hypothesis (alternative
hypothesis) we nullify (reject) the null hypothesis.

What is it that we test with inferential statistics?


These statistics enable us to assess the likelihood
(probability) that we would obtain results like ours IF we
assume that the Null Hypothesis is true/correct.
Test statistic

In the second step the test statistics is calculated from the data.
There are different test statistics and which you choose depends on
various factors:
Type of investigation – differences or relationships
Sample type – independent versus dependent
Number of samples – one, two or more samples
Level of measurement – ratio, interval, ordinal and nominal
Distribution of Data – parametric versus non-parametric tests
Amount of Data (Sample size)
Website:
http://www.gardenersown.co.uk/About/Mark/Choosestats.html
Common Test Statistics: Differences

Goal | Interval + Normal Distribution | Ordinal + Non-normal | Nominal (two possible outcomes)
Describe one group | Mean, SD | Median | Proportion
Compare one group to a hypothetical value (e.g. scale centre) | One-sample t test | Wilcoxon test | Chi-square
Compare two independent groups | Independent samples t test | Mann–Whitney test | Fisher’s test
Compare two dependent groups | Paired samples t test | Wilcoxon test | McNemar’s test
Compare three or more independent groups | One-way ANOVA | Kruskal–Wallis test | Chi-square
Compare three or more dependent groups | Repeated-measures ANOVA | Friedman test | Cochran’s Q
Common Test Statistics: Relationships

Goal | Interval + Normal Distribution | Ordinal | Nominal (two possible outcomes)
Quantify association between two variables | Pearson’s correlation | Spearman’s correlation | Contingency coefficients
Predict value from one other measured variable | Simple linear regression or nonlinear regression | Nonparametric regression | Simple logistic regression
Predict value from several measured variables | Multiple linear regression or multiple nonlinear regression | — | Multiple logistic regression
Test statistic

Irrespective of which test is used, we will get a test statistic
value and a probability value (p value), which indicates the
likelihood (probability) of obtaining the test value – and, by
implication, these data – if we assume the null hypothesis to
be true.

The probability value (p value) is provided by SPSS. If you
calculate the test statistic by hand, you need to look up the
p value in the appropriate tables.
Test statistics distributions

Test statistics have various distributions depending on the
degrees of freedom.
Test value and probability value

Probability is measured conventionally on a scale from zero to
one. An event with a probability of one is inevitable; an event
with a probability of zero is impossible. Most events in our
universe lie somewhere between these two extremes.
Example: imagine we have a test value of 4.89 and an associated
p value (probability) of .03.
This means: the probability of obtaining this value under the
null hypothesis is 3 in 100.
Note: Inferential statistics only tells us the probability that we
would get values like those we obtained if the null hypothesis is
true.
p Value and conclusion

In order to decide whether to accept or reject the null


hypothesis we need to set a criterion / significance level – by
which we test for statistical significance.

The rule is as follows: if we find that the probability


associated to our test value falls at or below our
criterion/significance level (near to zero) we reject the null
hypothesis; whereas if we find that the probability associated
with our test value falls above this criterion (near to one), we
accept the null hypothesis.
Criterion/significance level
Criterion/significance level: p = .05
Interpretation
Statistical significance testing, therefore, compares the
probability (p value) of a test value (e.g., a t value) with the
criterion (significance level, alpha level) of .05.
If the p value is at or smaller than .05, we decide that we have
sufficient evidence to reject the null hypothesis. Under this
condition we would speak of a “statistically significant
difference/relationship”.
If the p value is larger than .05 then we decide that we have
insufficient evidence to reject the null hypothesis.
Consequently, differences and relationships are described as
not statistically significant.
Type I and Type II errors

We could always be wrong! We might reject the null hypothesis even
though it is true (Type I error); or we might fail to reject the null
hypothesis even though it is false (Type II error).

Do we know the extent of these errors? YES, we know the probability of
making a Type I error – it is the significance level (.05). Because we
reject the null hypothesis every time our data reach this significance
level, we will, on average, make this mistake once in every 20 times
(5 in 100 times).

How to reduce Type I error? Use a more stringent significance level
(.01 – 1 in 100 times).
Type I vs. Type II Errors

• Type I Error (false positive): Concluding there is a


difference between the groups being studied when, in
fact, there is no difference.

• Type II Error (false negative): Concluding there is


no difference between the groups being studied when,
in fact, there is a difference.

Example 1: A doctor diagnoses a patient with
cancer
• Case I:
– Null hypothesis: the patient does not have cancer (which
is a true case).
– Research hypothesis: the patient has cancer.
• Researcher conclusion: the patient has cancer (wrong decision).
In this scenario, the researcher has committed a type I error.
• Case II:
– Null hypothesis: the patient does not have cancer (false
case).
– Research hypothesis: the patient has cancer (true case).
• Researcher conclusion: the researcher concludes that the
patient does not have cancer. In this case the researcher has
committed a type II error.
Example 2: court case
• Case 1:
– Null hypothesis: defendant is not guilty.
– Alternative hypothesis: defendant is guilty.
– Researcher conclusion: the defendant is guilty (wrong
decision). In this case, the researcher has committed a
Type I error: convicting an innocent person.
– Type II error: concluding the defendant is not guilty,
i.e., accepting a null hypothesis that is not true. This is
the same as setting a guilty person free.
Hypothesis Truth Table

Decision | Null hypothesis TRUE | Null hypothesis FALSE
ACCEPT | Correct decision | Type II error
REJECT | Type I error | Correct decision
Statistical significance

• Calculated value
• Critical value: found in tables or stored in
computer’s memory
• In general, if the calculated value of the
statistic (t, F, etc.) is relatively large, the
probability or p is small, (e.g., .05, .01, .001).

Cont…

• If the probability is less than the preset alpha
level (usually .05), we can say that the results
are statistically significant, or that they are
significant at the .05 level, or that p < .05. We
can also reject the null hypothesis of no
difference or no relationship.
Cont…
• Note that, using SPSS computer printouts, it is quite
easy to determine statistical significance because the
actual significance or probability level (p) is printed so
you do not have to look up a critical value in a table.
SPSS labels this value Sig. so all of the common
inferential statistics have a common metric, the
significance level or Sig.
Cont…
• This level is also the probability of a Type I error or the
probability of rejecting the null hypothesis when it is
actually true.
• Thus, regardless of what specific statistic you use, if the
sig. or p is small (usually less than .05) the finding is
statistically significant, and you can reject the null
hypothesis of no difference or no relationship.
Interpreting Inferential Statistics using the
SPSS Sig.

Sig | Meaning | Null hypothesis | Interpretation
1.00 | p = 1.00 | Don’t reject | Not statistically significant (could be due to chance)
0.50 | p = 0.50 | Don’t reject | Not statistically significant
.06 | p = 0.06 | Don’t reject | Not statistically significant
.05 | p ≤ .05 | Reject | Statistically significant (not due to chance)
.01 | p = .01 | Reject | Statistically significant
.000 | p < .001 | Reject | Statistically significant
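As an illustration of reading Sig. against the .05 criterion, a minimal sketch (hypothetical data; scipy assumed available) runs an independent-samples t test and applies the decision rule:

```python
from scipy import stats

# Hypothetical scores for two independent groups
group_a = [12, 15, 14, 10, 13, 16, 12, 14]
group_b = [18, 17, 16, 19, 15, 20, 18, 17]

t_value, p_value = stats.ttest_ind(group_a, group_b)
print(round(t_value, 2), round(p_value, 4))

if p_value <= 0.05:
    print("Reject the null hypothesis: statistically significant")
else:
    print("Insufficient evidence to reject the null hypothesis")
```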
Mixed methods research
• Mixed methods research takes advantage of using multiple ways to
explore a research problem.
• Basic Characteristics
• Design can be based on either or both perspectives.
• Research problems can become research questions and/or
hypotheses based on prior literature, knowledge, experience, or
the research process.
• Sample sizes vary based on methods used.
• Data collection can involve any technique available to researchers.
• Interpretation is continual and can influence stages in the research
process.
Why Use Mixed Methods?

The simple answer is to overcome the limitations of a


single design. A detailed answer involves:
•To explain and interpret.
•To explore a phenomenon.
•To develop and test a new instrument.
•To serve a theoretical perspective.
•To complement the strengths of a single design.
•To overcome the weaknesses of a single design.
•To address a question at different levels.
•To address a theoretical perspective at different
levels.
Cont…
• What are some strengths?
• Can be easy to describe and to report.
• Can be useful when unexpected results arise from a prior study.
• Can help generalize, to a degree, qualitative data.
• Helpful in designing and validating an instrument.
• Can position research in a transformative framework.
• What are some weaknesses?
• Time required.
• Resolving discrepancies between different types of data.
• Some designs generate unequal evidence.
• Can be difficult to decide when to proceed in sequential designs.
• Little guidance on transformative methods.
• Methodologist John Creswell suggested a systematic framework for approaching mixed
methods research. His framework involves four decisions to consider and six strategies.
Four Decisions for Mixed Method Designs
(Creswell, 2003)
• What is the implementation sequence of data
collection?
• What method takes priority during data
collection and analysis?
• What does the integration stage of findings
involve?
• Will a theoretical perspective be used?
Six Mixed Methods Design Strategies
(Creswell, 2003)
1. Sequential Explanatory
• Characterized by: collection and analysis of quantitative data
followed by collection and analysis of qualitative data.
• Purpose: to use qualitative results to assist in explaining and
interpreting the findings of a quantitative study.
2. Sequential Exploratory
• Characterized by: an initial phase of qualitative data collection
and analysis followed by a phase of quantitative data collection
and analysis.
• Purpose: to explore a phenomenon. This strategy may also be
useful when developing and testing a new instrument.
Cont…
3. Sequential Transformative
• Characterized by: collection and analysis of either quantitative
or qualitative data first. The results are integrated in the
interpretation phase.
• Purpose: to employ the methods that best serve a theoretical
perspective.
4. Concurrent Triangulation
• Characterized by: two or more methods used to confirm,
cross-validate, or corroborate findings within a study. Data
collection is concurrent.
• Purpose: generally, both methods are used to overcome a weakness
in using one method with the strengths of another.
Cont…
5. Concurrent Nested
• Characterized by: a nested approach that gives priority to one
of the methods, which guides the project, while the other is
embedded or “nested.”
• Purpose: the nested method addresses a different question than
the dominant method, or seeks information from different levels.
6. Concurrent Transformative
• Characterized by: the use of a theoretical perspective, reflected
in the purpose or research questions of the study, to guide all
methodological choices.
• Purpose: to evaluate a theoretical perspective at different
levels of analysis.
SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

Summary of findings
• The summary part usually includes a brief
restatement of the problem/s, the main features of
the methods and the most important findings.
• Upon completing the draft of this section, the
writer should check it carefully to determine
whether it gives a concise but reasonably
complete description of the study & its findings.
• S/He should also check to ascertain that no information has
been introduced here that had not been included in the
appropriate preceding sections.
Cont…
• It is a good idea to have a colleague read the conclusions
section to see if the author is communicating as well as he
intended to do.
• With respect to each finding, ask yourself: knowing what I
now know, what conclusion can I draw?
• Research findings are typically defined as the
researchers’ interpretations of the data they collected
or generated in the course of their studies.
Conclusions

• We should limit conclusions to those that have


direct support in the research findings. There is a
temptation to conclude too much.
• The hypotheses provide a convenient framework
for stating conclusions; that is, the writer should
indicate in this section whether or not the
findings support his hypotheses.
• Conclusions are assertions based on findings
and must therefore be warranted by the findings.
Cont…
• Conclusions must be logically tied to one another.
There should be consistency among your conclusions;
none of them should be at odds with any of the others.
• Conclusions should be confined to those justified by
the data of the research and limited to those for which
the data provide an adequate basis.
• Conclusions are based on an integration of the study
findings, analysis, interpretation and synthesis.
• Conclusions are not the same as findings. Neither are
conclusions the same as interpretations.
Cont…
• Conclusions are essentially conclusive statements of what
you now know, having done this research, that you did not
know before.
Recommendations
• Recommendations are the application of those conclusions.
• Recommendations are actionable; that is they suggest
implications for policy and practice based on the findings,
providing specific action planning and next steps.
• Recommendations support the belief that scholarly work
initiates as many questions as it answers, thus opening the
way for further practice and research.
Cont…
• Recommendations for research describe
topics that require closer examination and
that may generate new questions for
further study.
