LEARNING OBJECTIVES
1) To discuss the basic principles of statistics and probability
2) To enumerate the measures of central tendency and explain the
importance of normal distribution in statistics
3) To discuss the basic principles of statistical inference
I. INTRODUCTION TO STATISTICS
• Statistics can mean two things:
o Data
§ The numbers we get when we measure or count things
o Methods
§ A collection of procedures that allows us to analyze data (statistical tests)
• Why do we need to study all of this?
o To conclude that our sample estimates represent population data, or to establish a causal association, we need to rule out three things:
§ Chance
Þ Addressed by statistical analysis
Þ Main thing to rule out
§ Bias
Þ Addressed by study design
§ Confounding
Þ Addressed by both statistical analysis and study design

A. APPLICATION OF STATISTICS IN EPIDEMIOLOGY AND MEDICINE
• Determining probability of survival from a disease or medical procedure
o E.g. Comparing old and new medical procedures
• Effectiveness of drugs or medical procedures
o E.g. Clinical trials

Figure 1. Theoretical basis of inferential statistics

C. DEFINITION OF TERMS
• Data
o Values we collect from respondents or records
o E.g. From patient records in hospitals
• Variables
o Characteristic of a study subject that may vary from one respondent to another
o E.g. Age, sex, disease status, etc.

Qualitative Variables
• Aka categorical variables
• Characterizes a certain quality of a subject
• 3 types:
o Binary variables (Dichotomous)
§ Categorical variables that only have two values
§ E.g. Biological sex, disease status (sick or not sick), HIV+ or HIV-
o Nominal variables
§ Variables whose categories can be listed in any order
§ Have no inherent ordering
§ E.g. Religion, gender
Transcribed by TG 11: Ballelos, Cortez, Gamo, Lacerna, Lim, Monge, Pagayatan, Sanchez
YL6: 06.01
Checked by TG 23: Catacutan, Daco, Go, Luna, Mamaril, Mariano, Rita, Sing 1 of 11
o Ordinal variables
§ Variables whose categories have a natural or inherent ordering
§ E.g. Data from Likert scales (i.e., from strongly disagree to strongly agree)
Þ Note: Using quantitative statistical tests for data from Likert scales is highly discouraged
- Data from Likert scales are qualitative; thus they are inappropriate for quantitative statistical tests such as the t-test

Quantitative Variables
• Represents a counted or measured quantity
• 2 types:
o Interval variable
§ No true zero
§ Inherent ordering
§ Exact differences between values
§ E.g. Temperature in Celsius or Fahrenheit
o Ratio variable
§ Has a true zero
§ Inherent ordering
§ Exact differences between values
§ E.g. Temperature in Kelvin

Exposure Variable
• Aka Modified, Independent, or Predictor variable
• Variable of interest which you think could have an effect on the outcome variable
o Modified to assess its effect on an outcome
o E.g. In clinical trials, a patient given a drug is exposed, while a patient who does not receive the drug is unexposed
• Usually placed on the X-axis on graphs

EXPOSURE VARIABLE: MIX
• Modified variable
• Independent variable
• X-axis

Outcome Variable
• Aka Dependent or Response variable
• Variable of interest which you think is affected by the exposure variables/predictors
o In epidemiology or medicine, this is usually your disease status
§ Can also be other things such as HIV testing
o In an experiment, this is the variable that you are observing for a change as you vary your level of exposure
• Usually placed on the Y-axis

OUTCOME VARIABLE: DRY
• Dependent variable
• Response variable
• Y-axis

Intermediate Variable
• Variable that lies in the causal pathway between the exposure and outcome

D. RELATIONSHIPS BETWEEN VARIABLES

Confounder
• A variable that muddles or confounds the relationship between an exposure and an outcome
• Must satisfy all of the following criteria:
o Associated with the exposure
o Associated with the outcome
o Not in the causal pathway between the exposure and outcome
§ The variable should not be an intermediate variable
• It is important to do a thorough literature review to determine other variables that may confound the relationship between exposure and outcome
o The researcher must collect data on these variables to be able to control for them later in the analysis
• While the first two criteria can be tested statistically, note that there is no statistical test for confounding

Controlling Confounding Variables
• The researcher may control the confounding variables in the following stages using the enumerated methods:
o Design stage
§ Restriction
Þ E.g. Limiting subjects to studying females only or males only
Þ May be inefficient
§ Matching (case-control studies)
Þ Select a control with similar characteristics to a case
§ Randomization
Þ Often done in trials
o Analysis stage
§ Regression Analysis
§ Stratified Analysis

[EXAMPLE] CONFOUNDING VARIABLE
A study found that coffee drinkers (exposure) are 4x more likely to have lung cancer (outcome) compared to those who don't drink coffee. Consider if this relationship is actually true or if there is a confounding variable.

Figure 2. Confounding relationship of smoking, drinking coffee, and lung cancer

• Confounding Variable: Smoking
o People who smoke are more likely to drink coffee, and smoking can also cause lung cancer
§ Satisfies the first two criteria of a confounding variable
o Smoking is not an intermediate variable because it is not between the two other variables
§ Satisfies the third criterion
• Intermediate Variable: Caffeine levels
o Drinking coffee causes caffeine levels in the body to increase and, in most cases, it is also associated with lung cancer (spurious relationship)
o Should not be controlled for in the analysis
• In this study, it is important to control for smoking (confounding variable) but not for caffeine levels (intermediate variable)

SOURCE: Veincent Christian F. Pepito, MSc – Introduction to Statistics and Probability (2021)

NDTK: Confounding vs Intermediate Variables
• Confounding variable: should be controlled for in the analysis
• Intermediate variable: should NOT be controlled for in the analysis

Effect Measure Modifier
• Variable that alters the effect of the exposure on the outcome
• E.g. Given a hypothetical drug that cures cancer among female patients but has no effect on male patients:
o Sex is an effect measure modifier in the association between taking the drug and curing cancer
• Can be assessed statistically, unlike confounding variables

II. PROBABILITY

A. DEFINITION
• The proportion of times that we would observe an outcome if we repeated the experiment a large number of times
o E.g. What is the probability of:
§ Drawing a queen of spades in a standard deck of playing cards
Þ 1/52
§ Throwing a '4' in a six-faced fair die
Þ 1/6
• Values are always 0 ≤ x ≤ 1 or [0, 1]
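The long-run-proportion definition above can be illustrated with a short simulation (a minimal Python sketch, not part of the original trans; the die example and the fixed seed are illustrative choices):

```python
import random

random.seed(42)

def estimated_probability(trials: int) -> float:
    """Estimate P(rolling a 4) with a fair six-faced die by
    counting the proportion of trials where a 4 is observed."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 4)
    return hits / trials

# As the number of repetitions grows, the observed proportion
# approaches the theoretical value 1/6 ≈ 0.1667, and it is
# always a value in [0, 1].
for n in (100, 10_000, 1_000_000):
    print(n, round(estimated_probability(n), 4))
```
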
YL6: 06.01 RESEARCH & EPIDEMIOLOGY: Introduction to Statistics and Statistical Inference 2 of 11
B. RULES OF PROBABILITY

Additive Law

P(A or B) = P(A) + P(B)
Equation 1. Additive Law

• For mutually exclusive events, the probability of either event occurring is the sum of the probabilities for each event
• Mutually Exclusive Events
o When one outcome happens, the other outcome can no longer occur
o E.g. Tossing of a coin will result in either a heads or tails, never both

[EXAMPLE] ADDITIVE LAW
What is the probability of getting 1 or 6 in a fair six-faced die?
• Solution:
o P(1 or 6) = P(1) + P(6)
o P(1 or 6) = 1/6 + 1/6 = 2/6 = 0.33

Multiplicative Law
• Solution:
o P(AB) = 6/200 = 0.03
§ Where:
Þ 6 = Count for AB
Þ 200 = Total counts for all blood types
• Answer:
o 3 people in the next 100 will have blood type AB

• In the example above, 3 donors in the first 100 are expected to be group AB
• However, it cannot be said for certain that there will be 3 group AB donors in the first 100 due to:
o Random variation
§ Affects what is observed especially when the number of experiments is not sufficiently large (E.g. 100)
o Small number of observations
§ Makes the expected outcome of 3% imprecise

The Law of Large Numbers
• An experiment repeated many times will result in an observed value that is equal to the expected value
o E.g. In the previous example of blood types, repeating the experiment sufficiently and getting a total count of 10,000 or 1,000,000 will result in the observed value approaching the expected value of 3%

Figure 4. Example of a Normal Distribution Curve
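The roles of random variation and the Law of Large Numbers can be sketched in code (an illustrative Python simulation, assuming a 3% true proportion of group AB donors as in the example; the seed is arbitrary):

```python
import random

random.seed(1)

def observed_ab_proportion(n_donors: int) -> float:
    """Simulate n_donors donors, each group AB with probability 0.03,
    and return the observed proportion of AB donors."""
    count = sum(1 for _ in range(n_donors) if random.random() < 0.03)
    return count / n_donors

# With only 100 donors, random variation makes the count imprecise —
# it will not reliably be exactly 3:
print(observed_ab_proportion(100))

# With 1,000,000 donors, the observed proportion settles near the
# expected 3%, as the Law of Large Numbers predicts:
print(observed_ab_proportion(1_000_000))
```
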
Mean
• Tells the location (or center) of the distribution

z = (Value − Mean) / SD
Equation 3. Z-score Formula

Z-table
• Can be used to compute for the area between -1 and 1 SD, -2 and 2 SD, and so on
o Figure 8 depicts a standard normal distribution where the area between -1 and 1 SD is shown
• Can also be used to compute the areas outside a given range by taking the complement of the values within the range
o A table of the areas outside a given range is called a two-tailed z-table, as shown in Figure 9

Figure 8. The Area between -1 and 1 SD in a Standard Normal Distribution. This represents 68.3% of the population.
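Since a z-table simply tabulates areas under the standard normal curve, the same areas can be computed directly from the normal CDF (a short Python sketch, not from the trans; `phi` is a helper name introduced here):

```python
from math import erf, sqrt

def z_score(value: float, mean: float, sd: float) -> float:
    """Standard normal score: how many SDs a value is from the mean."""
    return (value - mean) / sd

def phi(z: float) -> float:
    """Standard normal CDF — the quantity a z-table tabulates."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Area between -1 and +1 SD, as in Figure 8:
area = phi(1) - phi(-1)
print(round(area, 3))      # 0.683 → 68.3% of the population

# The complement gives the two-tailed area outside the range,
# as read from a two-tailed z-table:
print(round(1 - area, 3))  # 0.317
```
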
D. SKEW
B. TARGET VS. STUDY POPULATION
• Recall:
o Target Population
§ Population about which we aim to generalize the findings of the study
o Study Population
§ Population from which we can obtain information

C. SAMPLING DISTRIBUTION
• The distribution of the proportion of smokers obtained from 1,000 different samples (see Figure 13)
o Most of the samples are close to the true prevalence π = 30%, and p ranges from 24–36%, which can happen due to chance
o Distribution is nearly symmetrical

SE(p) = √(π(1 − π)/n)    SE(x̄) = σ/√n
Equation 4. Standard Error

• Where:
o SE = standard error
o p = sample value
o π = population value
o n = number of observations
o x̄ = mean
o σ = population standard deviation

NDTK: Standard Deviation vs Standard Error
• Standard Deviation: variability in individual data or the sample
• Standard Error: standard deviation of a sampling distribution
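The link between the sampling distribution and the standard error can be checked by simulation (an illustrative Python sketch; π = 30% mirrors the smokers example, while the sample size n = 100 and the seed are assumptions made here):

```python
import random
from math import sqrt
from statistics import pstdev

random.seed(7)

PI = 0.30        # true population prevalence of smokers
N_SAMPLES = 1000 # number of repeated samples
N = 100          # size of each sample

def sample_proportion(n: int) -> float:
    """Draw one sample of size n and return the observed proportion."""
    return sum(random.random() < PI for _ in range(n)) / n

# Build the sampling distribution: 1,000 sample proportions.
estimates = [sample_proportion(N) for _ in range(N_SAMPLES)]

# The SD of the sampling distribution is the standard error,
# which the formula SE(p) = sqrt(pi(1-pi)/n) predicts directly.
empirical_se = pstdev(estimates)
formula_se = sqrt(PI * (1 - PI) / N)
print(round(formula_se, 3))  # 0.046
print(round(empirical_se, 3))
```

The two numbers agree closely; increasing N narrows the sampling distribution and shrinks both, in line with the characteristics of sampling distributions summarized later in the trans.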
Figure 15. Estimated Sampling Distributions. Probability of obtaining extreme samples (green and blue) is low.

D. CONFIDENCE INTERVALS
• The intervals around the estimated mean which we can be confident contain the true population mean
o "We are 95% confident that the true mean (or proportion) is between lower limit and upper limit"
• Calculated and presented because the values in the data are only estimates and not true values of the population
• Different confidence intervals are used in statistics, e.g., 90%, 95%, 99%, wherein 95% is the most commonly used
Þ An increasing confidence level means that the confidence interval also becomes wider
o Multipliers of commonly used CIs are:
§ 90% CI: 1.645
§ 95% CI: 1.96
§ 99% CI: 2.58

V. HYPOTHESIS TESTING

A. INTRODUCTION
• Testing an assumption regarding a population parameter
o Null hypothesis (Ho)
§ Always a statement of equivalence
§ E.g. There is no significant difference between the heights of males and females
o Alternative hypothesis (Ha)
§ Always a statement of disagreement
§ E.g. There is a significant difference between the heights of males and females
• Tests if the sample is different from the population value
• Involves calculation of the probability of obtaining the observed data if the null hypothesis were true

Steps in Hypothesis Testing
1) Clarify and state your null and alternative hypotheses
2) Collect data
3) Compute for the p-value using the appropriate statistical test
4) Make your conclusions

B. P-VALUES

Figure 16. P-value

• The probability of obtaining the observed or a more extreme sample estimate if the null hypothesis is true
• Not a measurement of how true the null hypothesis is
• Quantifies the strength of the evidence

Large P-Value
• Greater than the level of significance α
• The evidence against the null hypothesis is weak
• The chance of observing a value as extreme as the sampled value would be high if the null hypothesis is true
• Sampling variation alone can be the reason for the difference between the estimate and the parameter (or the null value)
o This means that the findings could just be due to chance
Þ E.g. If the p-value = 0.5, then there is a 50% chance (high chance) that you can observe a value as extreme as that, if your null hypothesis is true
Þ Since it might be due to chance, there might be no significant effect if you sample others
• Sample conclusion made from a large p-value:
o "There is little evidence that the heights of males and females significantly differ."

Small P-Value
• Less than the level of significance α
• The evidence against the null hypothesis is strong
• The chance of observing a value as extreme as the sampled value would be low if the null hypothesis is true
• Sampling variation alone is unlikely to be the reason for the difference between the estimate and the parameter (or null value)
o This means that the finding is most likely not due to chance
• Sample conclusion made from a small p-value:
o "There is strong evidence that the heights of males and females are significantly different."

C. TYPE I AND II ERRORS
• A significance test can never prove that a null hypothesis is either true or false
o It only gives an indication of the strength of the evidence against the null hypothesis

Types of Errors in Doing Hypothesis Tests

Type I Error
• Rejecting a null hypothesis when it is true
• A significant effect is stated even when there is none
• False positive

Type II Error
• Failing to reject a null hypothesis when it is false
• No significant effect is stated when there should be a significant effect
• False negative

Contingency Table in Hypothesis Testing
• True Positive
o There is a significant effect in reality and in the findings of the study
o The probability of getting a true positive result is equal to your study power
• True Negative
o There is no significant effect in reality and in the findings of the study
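As a rough illustration of how a p-value is obtained from a test statistic (a Python sketch; the height numbers and the standard error of the difference are hypothetical, not the lecture's data):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_sided_p_value(z: float) -> float:
    """Probability of a result at least as extreme as z,
    in either direction, if the null hypothesis is true."""
    return 2 * (1 - phi(abs(z)))

# Hypothetical example: mean male height 170 cm vs mean female
# height 158 cm, with a standard error of the difference of 4 cm.
z = (170 - 158) / 4          # z = 3.0
p = two_sided_p_value(z)
print(round(p, 4))           # 0.0027 — small p-value, strong evidence
# Compare against the level of significance alpha = 0.05:
print(p < 0.05)              # True → reject the null hypothesis
```
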
• False Positive (Type I Error)
o There is no significant effect in reality but the findings of the study state that there is a significant effect
• False Negative (Type II Error)
o There is a significant effect in reality but the findings of the study state that there is no significant effect

α: Level of Significance
• Probability of committing a type I error
• The threshold below which the p-value will be considered significant
• Usually set at 0.05
o Implies that 1 out of 20 results that you get will not be true

β
• Probability of committing a type II error
• β is minimized by increasing study/statistical power
o Study/statistical power
§ Probability of rejecting a null hypothesis when a true effect actually exists (true positive)
§ Power can be increased if sample size is increased
Þ When you increase your sample size, you are less likely to commit a type II error

HOW'S MY TRANSING?
Feedback Form: https://tinyurl.com/2024YL6gHMT
Errata Tracker: https://tinyurl.com/2024YL6ET06

QUICK REVIEW

SUMMARY OF CONCEPTS
• Statistics can mean two things:
o Data: the numbers we get when we measure or count things
o Methods: a collection of procedures that allows us to analyze data (statistical tests)
• Three things that must be ruled out when concluding that the sample represents population data:
o Chance: addressed by statistical analysis; main thing to rule out
o Bias: addressed by study design
o Confounding: addressed by both analysis and study design
• Descriptive statistics: describes a set of data and is used to organize, summarize, and present individual data values
o Categorical data: percentages or frequencies for data we can categorize
o Quantitative data: average values and the spread of the values for data which we count or measure
• Inferential statistics: uses methods of probability to make inferences about a population using data from a sample
• Data: values we collect from respondents or records
• Variables: characteristic of a study subject that may vary from one respondent to another
o Qualitative variables: categorical variables; characterizes a certain quality of a subject
§ Binary variables: dichotomous; variables that only have two values
§ Nominal variables: variables whose categories can be listed in any order
§ Ordinal variables: variables whose categories have a natural or inherent ordering
o Quantitative variables: represents a counted or measured quantity, with inherent ordering and exact differences between values
§ Interval variable: no true zero
§ Ratio variable: has a true zero
o Exposure variable: variable of interest which you think could have an effect on the outcome variable; modified to assess its effect on the outcome
o Outcome variable: variable of interest which you think is affected by the exposure variables/predictors
o Intermediate variable: variable that lies in the causal pathway between the exposure and outcome
o Confounder: variable that muddles or confounds the relationship between an exposure and an outcome
§ Associated with the exposure
§ Associated with the outcome
§ Not in the causal pathway between the exposure and outcome
§ Controlled in the design stage (restriction, matching, and randomization) and analysis stage (regression and stratified analyses)
o Effect Measure Modifier: variable that alters the effect of the exposure on the outcome

PROBABILITY
• The proportion of times that we would observe an outcome if we repeated the experiment a large number of times
• Additive Law: for mutually exclusive events, the probability of either event is the sum of the probabilities for each event
o Mutually Exclusive Events: when one outcome happens, the other outcome can no longer occur
• Multiplicative Law: for independent events, the probability of two independent events is given by the product of their individual probabilities
o Independent Events: when one outcome happens, it doesn't affect the probability of another event happening
• Random variation: affects what is observed especially when the number of experiments is not sufficiently large
• Law of Large Numbers: an experiment repeated many times will result in an observed value that is equal to the expected value

NORMAL DISTRIBUTION

MEASURES OF CENTRAL TENDENCY
• Mean: arithmetic average; sum of the observations divided by the number of observations
• Median: the value that divides the data by half
o Odd number of observations: the middle observation when the observations are tallied in ascending order
o Even number of observations: the average of the two middle observations when the observations are tallied in ascending order
• Mode: most common value appearing in the data
• Normal distribution: represents the distribution of values that would be observed if we could examine everybody in the population; bell-shaped
o Defined by:
§ Mean: location or center of the distribution
§ Standard deviation: measure of spread or dispersion of a set of data
• Standard Normal Distribution: used to determine areas under the curve
o In a standard normal distribution, mean = 0 and SD = 1
o Z-score: standard normal scores; tells how many standard deviations away a certain value is from the mean
o Z-table: used to compute for the area between -1 and 1 SD, -2 and 2 SD, and so on; also used to compute the areas outside a given range by taking the complement of the values within the range
• Skewed distributions: some variables are not normally distributed despite a large number of respondents; the distribution becomes skewed due to outliers
o Positive skew: skewed to the right, where mean > median > mode
o Negative skew: skewed to the left, where mean < median < mode
o The median is the measure used for skewed distributions

PRINCIPLES OF STATISTICAL INFERENCE
• Random sampling: every member of the population has an equal chance to be selected regardless of whether other members have already been picked
• Selection bias: can occur when respondents are not chosen at random; the sample is not representative of the target population and conclusions from the study may be erroneous and not generalizable to the target population
• Target population: population about which we aim to generalize the findings of the study
• Study population: population from which we can obtain information
• Sampling distribution: the distribution of estimates obtained from repeated samples (e.g., the proportion of smokers from 1,000 different samples); nearly symmetrical
o Standard error: standard deviation of the sampling distribution
o Characteristics of sampling distributions include:
§ The mean of the sampling distribution of the estimates obtained from different samples of identical size is the same as the population value, regardless of the size of the samples
§ The larger the sample size, the narrower the sampling distribution of the estimates obtained from the samples
§ The shape of a sampling distribution becomes closer to that of a normal distribution as the sample size increases
§ The standard error decreases as the sample size increases
o Central Limit Theorem: states that when the sample size is large enough, the sampling distribution of the estimates is always normal
• Confidence Intervals: the intervals around the estimated mean which we can be confident contain the true population mean
HYPOTHESIS TESTING
• Testing an assumption regarding a population parameter
• Null hypothesis (Ho): always a statement of equivalence
• Alternative hypothesis (Ha): always a statement of disagreement
• P-values: the probability of obtaining the observed or a more extreme sample estimate if the null hypothesis is true; quantifies the strength of the evidence
o Large p-value: greater than the level of significance α
§ The evidence against the null hypothesis is weak
§ The chance of observing a value as extreme as the sampled value would be high if the null hypothesis is true
o Small p-value: less than the level of significance α
§ The evidence against the null hypothesis is strong
§ The chance of observing a value as extreme as the sampled value would be low if the null hypothesis is true
• True Positive: there is a significant effect in reality and in the findings of the study
o The probability of getting a true positive result is equal to your study power
• True Negative: there is no significant effect in reality and in the findings of the study
• Type I Error/False Positive: rejecting a null hypothesis when it is true; a significant effect is stated even when there is none
• Type II Error/False Negative: failing to reject a null hypothesis when it is false; no significant effect is stated when there should be a significant effect
• α/Level of Significance: probability of committing a type I error
o The threshold below which the p-value will be considered significant
o Usually set at 0.05
• β: probability of committing a type II error
• Study/statistical power: probability of rejecting a null hypothesis when a true effect actually exists (true positive)

SUMMARY OF NEED-TO-KNOWS (NDTK)
• Confounding vs Intermediate Variables
o Confounding variable: should be controlled for in the analysis
o Intermediate variable: should NOT be controlled for in the analysis
• Standard Deviation vs Standard Error
o Standard Deviation: variability in individual data or the sample
o Standard Error: standard deviation of a sampling distribution

SUMMARY OF PROCESSES

STEPS IN HYPOTHESIS TESTING
1) Clarify and state your null and alternative hypotheses
2) Collect data
3) Compute for the p-value using the appropriate statistical test
4) Make your conclusions

SUMMARY OF MEMORY AIDS
• EXPOSURE VARIABLE: MIX
o Modified variable
o Independent variable
o X-axis
• OUTCOME VARIABLE: DRY
o Dependent variable
o Response variable
o Y-axis

SUMMARY OF EQUATIONS

Equation 1. Additive Law
P(A or B) = P(A) + P(B)
• Where:
o A, B = mutually exclusive events

Equation 2. Multiplicative Law
P(A and B) = P(A) × P(B)
• Where:
o A, B = independent events

Equation 3. Z-score Formula
z = (Value − mean) / SD
• Where:
o z = Z-score
o SD = standard deviation

Equation 4. Standard Error
SE(p) = √(π(1 − π)/n)    SE(x̄) = σ/√n
• Where:
o SE = standard error
o p = sample value
o π = population value
o n = number of observations
o x̄ = mean
o σ = population standard deviation

REVIEW QUESTIONS

1. James wanted to see if the amount of exercise a person gets in a week has an effect on mental status. In this experiment, what kind of variable is amount of exercise?
a) Response variable
b) Ordinal variable
c) Modified variable
d) Dependent variable

2. Which of the following is part of the theoretical basis of inferential statistics?
a) Population
b) Sample
c) Statistics
d) All of the above
e) None of the above

3. Which law refers to the probability of two events given by the product of their individual probabilities?
a) Additive
b) Multiplicative
c) Law of Large Numbers
d) None of the above

4. Which of the following is false?
a) Probability can have values that are 0 ≤ x ≤ 1
b) Effect measure modifier can be measured statistically
c) Additive law of probability considers independent events
d) None of the above

5. Which of the following is true about standard deviation?
a) It is the average deviation of the observations from the median value
b) Calculated by the square root of the mean
c) The more widely spread out the values, the smaller the standard deviation
d) NOTA

6. The mean score of batch 2024's 2nd Pharmacology comprehensive exam is 58 with a standard deviation of 14. What is the proportion of the batch who scored above 60?
a) 55.57%
b) 14.28%
c) 44.43%
d) 85.72%
e) 43.44%

7. Which of the following statements is true?
a) Statistical tests that assume normality may be used on data with a skewed distribution as is.
b) The study population is bigger than the target population.
c) The mode is the best measure of central tendency for skewed distributions.
d) Random sampling ensures that the sample is representative of the study population.
8. Which of the following may result from not selecting the sample randomly?
a) Type 1 error
b) Type 2 error
c) Type 3 error
d) Selection bias
e) Failure of study
9. This theorem states that when the sample size is large enough, the
sampling distribution of the estimates is always normal.
a) Sampling Distribution
b) Central Limit
c) Confidence Interval
d) Pythagorean
11. T/F: The standard error is the variability in individual data or the sample.
12. T/F: A type I error occurs when the study states the presence of a significant
effect even though in reality, there is none.
ANSWERS:
1C, 2D, 3B, 4C, 5D, 6C, 7D, 8D, 9B, 10C, 11F, 12T, 13C, 14B
EXPLANATIONS:
6. C – Calculate the z-score: z = (60 − 58)/14 = 0.1428. Find the area under the curve (0.5557). This represents the area to the left of z = 0.1428. Since we are looking for the proportion of the batch that scored higher than 60, we need to get the area of the curve to the right of z = 0.1428. To do that: 1 − 0.5557 = 0.4443 = 44.43%.
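The arithmetic in explanation 6 can be double-checked in a few lines of Python (illustrative; the table-based answer, 44.43%, rounds z to 0.14 before looking up the area, so the exact CDF gives a slightly different fourth decimal):

```python
from math import erf, sqrt

# Verify explanation 6: proportion scoring above 60
# when the mean is 58 and the SD is 14.
z = (60 - 58) / 14
area_left = 0.5 * (1 + erf(z / sqrt(2)))  # area to the left of z

print(round(z, 4))              # 0.1429
print(round(1 - area_left, 4))  # 0.4432 — about 44.4% scored above 60
```
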
REFERENCES
REQUIRED
(1) Veincent Christian F. Pepito, MSc. Introduction to Statistics and Probability
[Lecture slides].
(2) Veincent Christian F. Pepito, MSc. Measures of Central Tendency and
Normal Distribution [Lecture slides].
(3) Veincent Christian F. Pepito, MSc. Introduction to Statistical Inference
[Lecture slides].
(4) ASMPH 2023. 06.05: Inferential Statistics by Veincent Christian F. Pepito, MSc.
SUPPLEMENTARY
(5) Kirkwood, Betty; Sterne, Jonathan AC. Essential Medical Statistics. Massachusetts: Blackwell Science Ltd, 2003.
FREEDOM SPACE
APPENDIX