
MEASURES of LOCATION and VARIABILITY

Measures of Location:
Given numbers x_1, x_2, ..., x_n:
the sample mean x̄ = (x_1 + x_2 + ... + x_n)/n = (Σ x_i)/n.
Notes:
i) population mean denoted by μ.
ii) trimmed means eliminate a percentage of the most extreme observations.
iii) categorical data: the mean of 0/1 data is the sample proportion.
the sample median x̃:
the middle value if n is odd;
the average of the two middle values if n is even.
Measures of Variability
Given numbers x_1, x_2, ..., x_n:
the deviation from the mean: x_i - x̄.
Note: Σ (x_i - x̄) = 0.
the sample variance s² = Σ (x_i - x̄)² / (n-1);
s is the sample standard deviation.
Computation using shortcut method:
s² = [Σ x_i² - (Σ x_i)²/n] / (n-1).
the population variance σ² = Σ (x_i - μ)² / N;
σ is the population standard deviation.
Why use n-1 for s²?
intuitive answer: we do not know μ, so we use x̄ and overcompensate.
formal answer: s² is based on only n-1 degrees of freedom, since the n deviations sum to 0.
Simple Properties of x̄ and s:
Given x_1, ..., x_n and a constant c:
If y_i = x_i + c for each i,
then ȳ = x̄ + c and s_y = s_x.
If y_i = c·x_i for each i,
then ȳ = c·x̄ and s_y = |c|·s_x.
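As a sanity check, the definitional and shortcut formulas for s², and the shift property, can be verified numerically. A minimal Python sketch (the sample values are made up):

```python
import statistics

x = [2.0, 4.0, 4.0, 5.0, 7.0, 8.0]   # hypothetical sample
n = len(x)

xbar = sum(x) / n                                         # sample mean
s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)          # definition of s^2
s2_shortcut = (sum(xi**2 for xi in x) - sum(x)**2 / n) / (n - 1)  # shortcut formula

# Shift by c: the mean shifts by c, s is unchanged
c = 10.0
y = [xi + c for xi in x]
ybar, sy2 = statistics.mean(y), statistics.variance(y)
```

The shortcut form avoids a second pass over the data, though it can lose precision when x̄ is large relative to s.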

Boxplots:
Given numbers x_1, x_2, ..., x_n:
fourth spread f_s = upper fourth - lower fourth
lower fourth: median of the n/2 (n even) or (n+1)/2 (n odd) smallest x_i's
upper fourth: median of the n/2 (n even) or (n+1)/2 (n odd) largest x_i's
boxplot:
1. draw and mark a measurement axis
2. draw a box extending from the lower fourth to the upper fourth
3. draw the median line in the box
4. extend whiskers from the box edges to the farthest x_i's within 1.5·f_s of the
edges
5. mild outliers: draw open circles at each x_i from 1.5·f_s to 3·f_s from the box
edges
6. extreme outliers: draw solid circles at each x_i beyond 3·f_s from the box
edges
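The fourths and the 1.5·f_s and 3·f_s outlier cutoffs can be computed directly; a small sketch with a hypothetical sample containing one extreme outlier:

```python
import statistics

def fourths(data):
    """Lower/upper fourth: median of the smallest/largest half
    (n/2 values for n even, (n+1)/2 for n odd)."""
    xs = sorted(data)
    n = len(xs)
    half = n // 2 if n % 2 == 0 else (n + 1) // 2
    return statistics.median(xs[:half]), statistics.median(xs[-half:])

def box_distance(x, lo, hi):
    """Distance from x to the box [lo, hi] (0 if inside)."""
    return max(lo - x, x - hi, 0)

data = [3, 5, 7, 8, 9, 11, 13, 40]          # hypothetical sample
lo, hi = fourths(data)
fs = hi - lo                                 # fourth spread
mild = [x for x in data if 1.5 * fs < box_distance(x, lo, hi) <= 3 * fs]
extreme = [x for x in data if box_distance(x, lo, hi) > 3 * fs]
```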
SAMPLE SPACES and EVENTS
Sample Space S:

The set of all possible outcomes of an experiment.
Event:
A subset of outcomes in S.
A simple event consists of exactly one outcome;
a compound event consists of more than one outcome.
Set Theory
The union of two events A and B, A ∪ B,
is all outcomes in either A or B, or both.
The intersection of two events A and B, A ∩ B,
is all outcomes in both A and B.
The complement of event A, A',
is all outcomes in S not in A.
Disjoint (mutually exclusive) events A and B have no outcomes in common: A ∩ B = ∅.
PROBABILITY AXIOMS and PROPERTIES
Objective:
Given S, determine for each event A a number
P(A), the probability or chance that A will occur.
Axioms of Probability
1. For any event A, P(A) ≥ 0.
2. P(S) = 1.
3. If A_1, A_2, A_3, ... is a collection of mutually exclusive events,
P(A_1 ∪ A_2 ∪ A_3 ∪ ...) = Σ P(A_i).
Interpretation:
If an experiment with events from S is repeated many times, P(A) is the limiting relative
frequency for A.
Properties of Probability
For any event A, P(A) = 1 - P(A').
If A and B are mutually exclusive, P(A ∩ B) = 0.
For any two events A and B,
P(A ∪ B) = P(A) + P(B) - P(A ∩ B).
Note:
P(∅) = 0.
If E_1, E_2, ..., E_k are the simple events in compound event A,
P(A) = Σ P(E_i).
COUNTING TECHNIQUES
Key Formula:
If the size of S is N, all outcomes are equally likely, and the number of outcomes in A is N(A), then P(A) = N(A)/N.
Product Rule for Ordered Pairs
Rule: if the 1st element can be selected n_1 ways and the 2nd element n_2 ways, then the
number of pairs is n_1·n_2.
Use of tree diagrams.
General Product Rule
Use k-tuple to denote an ordered collection of k objects.
If there are n_1 choices for the 1st element, n_2 choices for the 2nd element, ..., n_k choices for the kth
element, then there are n_1·n_2·...·n_k possible k-tuples.
Permutations
An ordered subset of k objects taken from a set of n distinct objects is
a permutation of size k.
The number of permutations of size k from n distinct objects is P_{k,n} = n!/(n-k)!.
Combinations
An unordered subset of k objects taken from a set of n distinct objects is
a combination of size k.
The number of combinations of size k from n distinct objects
is (n choose k) = n!/[k!(n-k)!].
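Python's standard library exposes these counts directly; a quick sketch with arbitrary n = 10, k = 3:

```python
import math

n, k = 10, 3
perms = math.perm(n, k)      # n!/(n-k)!: ordered selections of size k
combs = math.comb(n, k)      # n!/(k!(n-k)!): unordered selections of size k
ratio = perms // combs       # each combination yields k! orderings
```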
CONDITIONAL PROBABILITY
Notation:
P(A | B) is the conditional probability of A given that B has occurred.
Definition:
If P(B) > 0 then
P(A | B) = P(A ∩ B)/P(B).
Multiplication Rule
P(A ∩ B) = P(A | B)·P(B).
Note: P(A ∩ B) = P(B | A)·P(A) as well.
Exhaustive:
Events A_1, ..., A_k are exhaustive if one must occur, so
that A_1 ∪ ... ∪ A_k = S.
Law of Total Probability:
If A_1, ..., A_k are exhaustive and mutually exclusive events, then for any other event B
P(B) = Σ P(B | A_i)·P(A_i).
Bayes' Theorem:
If A_1, ..., A_k are mutually exclusive and exhaustive events with all P(A_i) > 0, then for
any other event B with P(B) > 0
P(A_j | B) = P(B | A_j)·P(A_j) / Σ P(B | A_i)·P(A_i).
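A worked example of the law of total probability and Bayes' theorem, with made-up screening numbers (the prior and likelihoods are assumptions chosen for illustration):

```python
# Hypothetical screening problem: A1 = condition present (rare),
# A2 = condition absent; B = test is positive. All numbers are made up.
prior = {"A1": 0.01, "A2": 0.99}          # P(Ai): mutually exclusive, exhaustive
likelihood = {"A1": 0.95, "A2": 0.05}     # P(B | Ai)

# Law of total probability: P(B) = sum_i P(B|Ai) P(Ai)
p_b = sum(likelihood[a] * prior[a] for a in prior)

# Bayes' theorem: P(A1 | B) = P(B|A1) P(A1) / P(B)
posterior_a1 = likelihood["A1"] * prior["A1"] / p_b
```

Even with an accurate test, the posterior stays small here because the prior is small — the classic base-rate effect.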
INDEPENDENCE
Definition

Events A and B are independent if P(A | B) = P(A).
Two Independent Events

A and B are independent iff P(A ∩ B) = P(A)·P(B).
Mutually Independent Events

Events A_1, ..., A_n are mutually independent if for every subset of
indices i_1, ..., i_k with k ≥ 2
P(A_{i_1} ∩ ... ∩ A_{i_k}) = P(A_{i_1})·...·P(A_{i_k}).
PROBABILITY SUMMARY
Terms

Event, simple, compound, exclusive, sample space,
union, intersection, permutations, combinations,
conditional, exhaustive, independent.
Formulas
Union: A ∪ B, A or B (or both).
Intersection: A ∩ B, A and B.
P(A) ≥ 0, P(S) = 1.
P(A') = 1 - P(A).
P(A ∪ B) = P(A) + P(B) - P(A ∩ B);
= P(A) + P(B) for exclusive events.
P(A ∩ B) = P(A | B)·P(B);
= P(A)·P(B) for independent events.
Bayes formula:
P(A_j | B) = P(B | A_j)·P(A_j) / Σ P(B | A_i)·P(A_i) for exhaustive and exclusive A_i's and P(B) > 0.
Permutations of size k from n distinct objects: P_{k,n} = n!/(n-k)!.
Combinations of size k from n distinct objects:
(n choose k) = n!/[k!(n-k)!].
Diagrams
Venn
Tree
RANDOM VARIABLES
Definition

A random variable is any rule that associates a number with each outcome from some
sample space S.
Definition

A Bernoulli random variable is a random variable with only two outcomes 0 and 1.
Definition

A discrete set has either a finite number of elements or elements that can be listed in
sequence.
Definition

A discrete random variable has a discrete set of possible values.
DISCRETE RV PROBABILITY DISTRIBUTIONS
Probability Mass Function
A probability mass function (pmf),
p(x), for a discrete rv X is defined by
p(x) = P(X = x).
Pictorial representation: probability histogram.
Parameterized PMF's
A family of p(x)'s can depend on a parameter.
Example: Bernoulli rv's, with p(0) = 1 - α, p(1) = α,
depend on the parameter α.
Cumulative Distribution Function
A cumulative distribution function (cdf),
F(x), for a discrete rv is defined by
F(x) = P(X ≤ x) = Σ_{y ≤ x} p(y).
The graph of the cdf for a discrete rv is a step function.
For real a and b with a ≤ b,
P(a ≤ X ≤ b) = F(b) - F(a-), where a- is the largest possible X value strictly less than a.
EXPECTED VALUES for DISCRETE RV's
Expected Values
The expected value or mean value for rv X with
values x from some set D is
E(X) = μ_X = Σ_{x ∈ D} x·p(x).
The expected value of a function h(X) is
E[h(X)] = Σ_{x ∈ D} h(x)·p(x).
Rule for expected values: for constants a and b,
E(aX + b) = a·E(X) + b.
Variance of X
The variance of X is
V(X) = σ_X² = Σ_{x ∈ D} (x - μ)²·p(x) = E[(X - μ)²].
The standard deviation of X is σ_X = √V(X).
Shortcut formula:
V(X) = E(X²) - [E(X)]².
Rules for variance:
i) V(aX + b) = a²·V(X), and ii) σ_{aX+b} = |a|·σ_X.
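These definitions translate directly into code; a sketch using a hypothetical pmf:

```python
# Hypothetical pmf for a discrete rv X
pmf = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}

mu = sum(x * p for x, p in pmf.items())                    # E(X)
var = sum((x - mu) ** 2 * p for x, p in pmf.items())       # V(X) by definition
var_shortcut = sum(x * x * p for x, p in pmf.items()) - mu**2  # E(X^2) - [E(X)]^2

# Rules: E(aX+b) = aE(X)+b and V(aX+b) = a^2 V(X)
a, b = 2, 5
mu_y = a * mu + b
var_y = a**2 * var
```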
BINOMIAL DISTRIBUTION
Binomial Experiment
Conditions
1. Experiment has n trials, with n fixed in advance.
2. Trials are identical with S or F results only.
3. Trials are independent.
4. Probability of success for each trial is p.
Large population rule: an S or F without-replacement experiment from a
population of size N is approximately binomial if n << N.
Binomial RV's
A binomial random variable X is defined as
X = the number of successes for n trials.
Notation: X ~ Bin(n, p), with pmf p(x) = b(x; n, p).
b(x; n, p) = (n choose x)·p^x·(1-p)^(n-x), for x = 0, 1, ..., n.

E(X) = np.
V(X) = np(1-p); σ_X = √(np(1-p)).
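A minimal sketch of the binomial pmf and its mean (the values of n and p are arbitrary):

```python
import math

def binom_pmf(x, n, p):
    """b(x; n, p) = C(n, x) p^x (1-p)^(n-x)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3
mean = n * p                 # E(X) = np
var = n * p * (1 - p)        # V(X) = np(1-p)
total = sum(binom_pmf(x, n, p) for x in range(n + 1))   # pmf sums to 1
```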
BINOMIAL RELATED DISTRIBUTIONS
Hypergeometric Experiment Conditions
1. Population to be sampled has N objects.
2. Each object is labelled S or F, with M S's.
3. A sample of size n is drawn so that
each subset of size n is equally likely.
Hypergeometric RV's
A hypergeometric random variable X is
X = the number of successes for a sample of size n.
If max(0, n - N + M) ≤ x ≤ min(n, M),
h(x; n, M, N) = (M choose x)·(N-M choose n-x) / (N choose n).
E(X) = n·M/N.
V(X) = [(N-n)/(N-1)]·n·(M/N)·(1 - M/N).
Notes:
a) Let p = M/N; then E(X) = np and V(X) = [(N-n)/(N-1)]·np(1-p).
b) For large N and M, and with p = M/N fixed,
h(x; n, M, N) ≈ b(x; n, p).
BINOMIAL RELATED DISTRIBUTIONS
CONTINUED
Negative Binomial Experiment Conditions
1. Experiment consists of a sequence of independent trials.
2. Each trial result is S or F.
3. Probability p of success is constant for each trial.
4. Experiment continues until r S's are observed.
Negative Binomial RV's
A negative binomial random variable X is
X = the number of failures preceding the rth success.
For x an integer with x ≥ 0,
nb(x; r, p) = (x + r - 1 choose r - 1)·p^r·(1-p)^x.
E(X) = r(1-p)/p, V(X) = r(1-p)/p².
Note: If r = 1, nb(x; 1, p) = p(1-p)^x, the pmf for
the geometric distribution.
POISSON DISTRIBUTION
Poisson Process Assumptions
1. At most one event can occur at random at any time (or at any point in space).
2. The occurrence of an event in a given time (or space) interval is independent of
that in any other nonoverlapping interval.
3. The probability of occurrence of an event in a small interval is proportional (with
some constant α, the occurrence rate) to the width of the interval.
Poisson RV's
A Poisson random variable X is
X = the number of occurrences of the event in an interval.
p(x; μ) = e^(-μ)·μ^x / x!,
for x = 0, 1, 2, ..., and μ > 0.
E(X) = V(X) = μ.
Notes:
a) For large n and small p,
b(x; n, p) ≈ p(x; μ), with μ = np.
b) If occurrence rate α is given for some time t, μ = αt,
P(k occurrences in time t) = e^(-αt)·(αt)^k / k!.
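Note (a), the Poisson approximation to the binomial, can be checked numerically; n and p below are arbitrary choices with n large and p small:

```python
import math

def poisson_pmf(x, mu):
    return math.exp(-mu) * mu**x / math.factorial(x)

def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

# For large n and small p, b(x; n, p) ≈ p(x; mu) with mu = np
n, p = 1000, 0.002
mu = n * p
max_err = max(abs(binom_pmf(x, n, p) - poisson_pmf(x, mu)) for x in range(10))
```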
DISCRETE RV's SUMMARY
Terms:
Random variable, Bernoulli rv, discrete rv, probability mass function, cumulative
distribution function, expected value (mean), variance, standard deviation, binomial
experiment, hypergeometric experiment, negative binomial experiment, Poisson process.
Distributions
Binomial: X = # of successes for n trials;
b(x; n, p) = (n choose x)·p^x·(1-p)^(n-x), x = 0, 1, ..., n;
E(X) = np and V(X) = np(1-p).
Hypergeometric: X = # of S's for a sample of size n;
h(x; n, M, N) = (M choose x)·(N-M choose n-x) / (N choose n);
If p = M/N, E(X) = np and V(X) = [(N-n)/(N-1)]·np(1-p);
for large N and M, h(x; n, M, N) ≈ b(x; n, p).
Negative binomial: X = # of F's preceding the rth S;
nb(x; r, p) = (x + r - 1 choose r - 1)·p^r·(1-p)^x, x = 0, 1, 2, ...;
E(X) = r(1-p)/p and V(X) = r(1-p)/p²;
nb(x; 1, p) is the geometric distribution.
Poisson: X = # of event occurrences in some interval;
p(x; μ) = e^(-μ)·μ^x / x!, x = 0, 1, 2, ...;
for large n and small p, b(x; n, p) ≈ p(x; np).
CONTINUOUS RANDOM VARIABLES
Continuous Random Variables
Definition: an rv X is continuous
if its set of possible values is an interval.
Density: a probability density function (pdf) is a function f(x) with
P(a ≤ X ≤ b) = ∫_a^b f(x) dx.
Requirements: a) f(x) ≥ 0, and b) ∫_{-∞}^{∞} f(x) dx = 1. Density graph: the limit
of a sequence of histograms.
Some Properties:
i) P(X = c) = 0, and
ii) P(a ≤ X ≤ b) = P(a < X < b)
= P(a ≤ X < b) = P(a < X ≤ b).
Uniform distribution

A continuous rv has uniform distribution on [A, B] if
f(x; A, B) = 1/(B - A) for A ≤ x ≤ B, and 0 otherwise.
CONTINUOUS CDF's and EXPECTATIONS
Cumulative Distribution Function
The cdf for a continuous rv X, with pdf f(y), is
F(x) = P(X ≤ x) = ∫_{-∞}^x f(y) dy.
Probabilities: P(X > a) = 1 - F(a); P(a ≤ X ≤ b) = F(b) - F(a).
Cdf for uniform pdf: F(x) = (x - A)/(B - A) for A ≤ x ≤ B.
Pdf from cdf: f(x) = F'(x).
Percentiles
The 100p-th percentile, η(p), for X is defined by
p = F(η(p)) = ∫_{-∞}^{η(p)} f(y) dy.
The median, μ̃, for X is defined by F(μ̃) = 0.5.
Expectations
The mean of a continuous rv X is
μ_X = E(X) = ∫_{-∞}^{∞} x·f(x) dx.
The variance of a continuous rv X is
σ_X² = V(X) = ∫_{-∞}^{∞} (x - μ)²·f(x) dx = E(X²) - [E(X)]².
The standard deviation (SD) for X is σ_X = √V(X).
A symmetric pdf has μ̃ equal to the point of symmetry.
THE NORMAL DISTRIBUTION
Normal Distribution
The normal (μ, σ²) distribution has pdf
f(x; μ, σ) = (1/(σ√(2π)))·e^(-(x-μ)²/(2σ²)), -∞ < x < ∞.
Notes:
i) E(X) = μ and V(X) = σ².
ii) Notation: X ~ N(μ, σ²).
The standard normal distribution (μ = 0, σ = 1) has pdf
f(z; 0, 1) = (1/√(2π))·e^(-z²/2), with cdf Φ(z).
Percentiles
z_α denotes the value of z for which α of the area under the standard normal curve lies to the right of z_α:
P(Z ≥ z_α) = α, or Φ(z_α) = 1 - α.
z_α is the 100(1-α)-th percentile of the standard normal.
NORMAL DISTRIBUTION CONTINUED
Nonstandard Normal
If X ~ N(μ, σ²), then the
standardized variable Z = (X - μ)/σ; Z ~ N(0, 1).
P(a ≤ X ≤ b) = Φ((b - μ)/σ) - Φ((a - μ)/σ).
Percentiles:

100p-th percentile for N(μ, σ²)
= μ + σ·[100p-th percentile for N(0, 1)].
Many applications to discrete populations.
Normal Approximation to Binomial

If X ~ Bin(n, p) with np and nq both large,
P(X ≤ x) ≈ Φ((x + 0.5 - np)/√(npq)) (continuity correction).
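A sketch of the continuity-corrected approximation against the exact binomial cdf (n, p, and x below are arbitrary; the standard normal cdf is built from math.erf):

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, p = 50, 0.4                        # np = 20 and nq = 30: both large
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

# P(X <= x) ≈ Phi((x + 0.5 - np)/sqrt(np(1-p)))  (continuity correction)
x = 22
approx = phi((x + 0.5 - mu) / sigma)

# Exact binomial cdf for comparison
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))
```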
THE GAMMA DISTRIBUTION
Gamma Function
For real α > 0, the gamma function is
Γ(α) = ∫_0^∞ x^(α-1)·e^(-x) dx.
Properties:
a) for any α > 1, Γ(α) = (α - 1)·Γ(α - 1);
b) for a positive integer n, Γ(n) = (n - 1)!; c) Γ(1/2) = √π.
Gamma PDF
A gamma rv X has pdf, for α > 0 and β > 0,
f(x; α, β) = x^(α-1)·e^(-x/β) / (β^α·Γ(α)), x ≥ 0.
A standard gamma rv X (β = 1) has pdf, for x ≥ 0,
f(x; α) = x^(α-1)·e^(-x) / Γ(α).
Gamma CDF
For x > 0, the incomplete gamma function is
F(x; α) = ∫_0^x y^(α-1)·e^(-y) / Γ(α) dy.
A standard gamma rv X has cdf F(x; α),
with E(X) = α, V(X) = α.
A gamma rv X has cdf
F(x; α, β) = F(x/β; α),
with E(X) = αβ; V(X) = αβ².
GAMMA RELATED DISTRIBUTIONS
Exponential Distribution
For λ > 0 and x ≥ 0, an exponential rv X has pdf
f(x; λ) = λ·e^(-λx).
An exponential rv X has cdf
F(x; λ) = 1 - e^(-λx),
with E(X) = 1/λ; V(X) = 1/λ².
Elapsed times: If the number of events in a time interval of length t has Poisson
distribution with parameter αt, the distribution of elapsed time between two
successive events is exponential with λ = α.
Chi-Squared Distribution
For x ≥ 0 and a positive integer ν, a χ² rv X has pdf
f(x; ν) = x^(ν/2 - 1)·e^(-x/2) / (2^(ν/2)·Γ(ν/2)).
ν is the number of degrees of freedom.
A χ² rv X is gamma with α = ν/2 and β = 2, with cdf
F(x; ν/2, 2) = F(x/2; ν/2),
with E(X) = ν; V(X) = 2ν.
CONTINUOUS RV's SUMMARY
Terms:
Continuous rv, pdf, cdf, percentile, median, mean, variance, standard deviation(SD),
symmetric distribution, uniform distribution, standard Normal distribution, standardized
variable, Gamma function, standard Gamma, exponential, Weibull, lognormal and Beta
distributions.
CDF's and Expectations
The cdf, using pdf f(x), is
F(x) = P(X ≤ x) = ∫_{-∞}^x f(y) dy.
Expectations: mean μ = E(X) = ∫ x·f(x) dx;
variance σ² = V(X) = ∫ (x - μ)²·f(x) dx, with SD σ = √V(X).
E(aX + b) = a·E(X) + b; V(aX + b) = a²·V(X);
if Z = aX + b, σ_Z = |a|·σ_X.
The 100p-th percentile, η(p), is defined by p = F(η(p)),
with the median μ̃ defined by F(μ̃) = 0.5.
Distributions
Uniform: pdf f(x) = 1/(B - A), for A ≤ x ≤ B,
cdf F(x) = (x - A)/(B - A), with E(X) = (A + B)/2, V(X) = (B - A)²/12.
Standard Normal: pdf f(z) = (1/√(2π))·e^(-z²/2),
cdf Φ(z).
If X ~ N(μ, σ²), Z = (X - μ)/σ has cdf Φ(z).
If X ~ Bin(n, p) for large n, P(X ≤ x) ≈ Φ((x + 0.5 - np)/√(npq)).
Gamma function: for α > 0, Γ(α) = ∫_0^∞ x^(α-1)·e^(-x) dx
( Γ(n) = (n-1)! for a positive integer n ).
Standard gamma: pdf f(x; α) = x^(α-1)·e^(-x)/Γ(α),
with cdf F(x; α).
Exponential: pdf f(x; λ) = λ·e^(-λx), cdf F(x; λ) = 1 - e^(-λx).
JOINT DISTRIBUTIONS
Joint PMF's:
assume X and Y are discrete rv's.
The joint pmf is p(x, y) = P(X = x, Y = y).
If A is a set of (x, y)'s, P[(X, Y) ∈ A] = ΣΣ_{(x,y) ∈ A} p(x, y).
The marginal pmf's for X and Y are
p_X(x) = Σ_y p(x, y) and p_Y(y) = Σ_x p(x, y).
Joint PDF's
Assume X and Y are continuous rv's.
If the joint pdf for X and Y is f(x, y),
P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy.
If A = {(x, y): a ≤ x ≤ b, c ≤ y ≤ d},
P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_a^b ∫_c^d f(x, y) dy dx.
The marginal pdf's for X and Y are
f_X(x) = ∫ f(x, y) dy and f_Y(y) = ∫ f(x, y) dx.
Independence:
two rv's X and Y are independent if
p(x, y) = p_X(x)·p_Y(y) for discrete rv's;
f(x, y) = f_X(x)·f_Y(y) for continuous rv's.
Conditional Distributions
The conditional pdf of Y given X = x, for
continuous rv's X and Y, is f_{Y|X}(y|x) = f(x, y)/f_X(x).
The conditional pmf of Y given X = x, for
discrete rv's X and Y, is p_{Y|X}(y|x) = p(x, y)/p_X(x).
EXPECTED VALUES
Assume rv's X and Y with pmf p(x, y) or pdf f(x, y).
Expected Value:
the expected value of h(X, Y) is
E[h(X, Y)] = ΣΣ h(x, y)·p(x, y) (discrete), or ∫∫ h(x, y)·f(x, y) dx dy (continuous).
Covariance:
the covariance between X and Y is
Cov(X, Y) = E[(X - μ_X)(Y - μ_Y)] = E(XY) - μ_X·μ_Y.
Correlation
The correlation coefficient is
ρ_{X,Y} = Cov(X, Y)/(σ_X·σ_Y).
-1 ≤ ρ ≤ 1.
ρ measures the degree of linear relationship.
If X and Y are independent, then ρ = 0.
ρ = 1 or -1 iff Y = aX + b, with a ≠ 0.
If ρ = 0 then X and Y are uncorrelated.
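Covariance and correlation for a small, made-up joint pmf:

```python
import math

# Hypothetical joint pmf p(x, y) for discrete X, Y in {0, 1}
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.25, (1, 1): 0.25}

ex = sum(x * p for (x, y), p in joint.items())        # E(X)
ey = sum(y * p for (x, y), p in joint.items())        # E(Y)
exy = sum(x * y * p for (x, y), p in joint.items())   # E(XY)
cov = exy - ex * ey                                    # Cov(X,Y) = E(XY) - E(X)E(Y)

vx = sum(x * x * p for (x, y), p in joint.items()) - ex**2
vy = sum(y * y * p for (x, y), p in joint.items()) - ey**2
rho = cov / math.sqrt(vx * vy)                         # correlation coefficient
```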
STATISTICS and DISTRIBUTIONS
Statistics
Background:
before sampling, rv's X_1, ..., X_n denote possible observations; after sampling, sample
values x_1, ..., x_n denote actual observations.
A statistic is any quantity that can be calculated from sample data, e.g. mean,
variance, median.
The sampling distribution is the probability distribution of a statistic.
Sampling
An independent and identically distributed (iid)
random sample is an independent set of rv's X_1, ..., X_n that all have the same
probability distribution.
E.g. samples with replacement, or samples from a very large population.
Determination of a sampling distribution:
- by derivation:
determine the exact sampling distribution;
- by simulation:
use histograms to approximate the distribution.
SAMPLE MEAN DISTRIBUTION
Assume a random sample X_1, ..., X_n from some
distribution with mean μ and standard deviation σ.
Sample Mean Distribution
E(X̄) = μ.
V(X̄) = σ²/n; σ_X̄ = σ/√n.
If the sample total T_0 = X_1 + ... + X_n, then
E(T_0) = nμ, V(T_0) = nσ².
If X_i ~ N(μ, σ²), then X̄ ~ N(μ, σ²/n).
Central Limit Theorem (CLT)
If n is sufficiently large, then approximately
X̄ ~ N(μ, σ²/n) and T_0 ~ N(nμ, nσ²).
CLT can usually be applied if n > 30.
If only positive X_i's are possible, then the product X_1·X_2·...·X_n
is approximately lognormal.
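A quick simulation illustrating the sample mean distribution for a uniform(0,1) population (μ = 0.5, σ² = 1/12); the sample size and replication count are arbitrary:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

# Repeatedly draw samples of size n and record each sample mean
n, reps = 40, 2000
means = [statistics.mean(random.random() for _ in range(n)) for _ in range(reps)]

avg_of_means = statistics.mean(means)       # should be near mu = 0.5
var_of_means = statistics.variance(means)   # should be near sigma^2/n = 1/480
```

A histogram of `means` would look approximately normal, as the CLT predicts, even though the underlying population is uniform.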
DISTRIBUTION of LINEAR COMBINATIONS
Linear Combination Distribution

Given rv's X_1, ..., X_n and constants a_1, ..., a_n, the rv
Y = a_1·X_1 + ... + a_n·X_n
is called a linear combination of the X_i's.
Expected Value:
If X_i has mean μ_i for i = 1, ..., n,
E(Y) = a_1·μ_1 + ... + a_n·μ_n.
Variance:
If X_i has variance σ_i² for i = 1, ..., n and
if the X_i's are independent, then
V(Y) = a_1²·σ_1² + ... + a_n²·σ_n²,
with σ_Y = √V(Y).
For any X_i's,
V(Y) = Σ_i Σ_j a_i·a_j·Cov(X_i, X_j).
Difference:
E(X_1 - X_2) = μ_1 - μ_2, and
for independent X_1, X_2, V(X_1 - X_2) = σ_1² + σ_2².
Normal Case:
If X_i ~ N(μ_i, σ_i²), for i = 1, ..., n,
independently, then Y is normally distributed.
JOINT DISTRIBUTIONS and RANDOM SAMPLES SUMMARY
Terms:
Joint pmf's and pdf's, marginal pmf's and pdf's, independence, conditional pmf's and pdf's,
expected values, covariance, correlation coefficient, uncorrelated, statistic, sampling
distribution, iid, CLT, linear combination.
Joint Distributions:
assume X and Y are rv's.
Probabilities:
P[(X, Y) ∈ A] = ΣΣ_{(x,y) ∈ A} p(x, y) for discrete rv's;
P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy for continuous rv's.
Marginals:
p_X(x) = Σ_y p(x, y), discrete rv's;
f_X(x) = ∫ f(x, y) dy, continuous rv's.
Independence: X and Y are independent if
p(x, y) = p_X(x)·p_Y(y), for discrete rv's;
f(x, y) = f_X(x)·f_Y(y), for continuous rv's.
Conditionals: conditional distribution of Y given X = x:
p_{Y|X}(y|x) = p(x, y)/p_X(x) for discrete rv's;
f_{Y|X}(y|x) = f(x, y)/f_X(x) for continuous rv's.
Expected value of h(X, Y) is E[h(X, Y)]
= ΣΣ h(x, y)·p(x, y) for discrete rv's;
= ∫∫ h(x, y)·f(x, y) dx dy for continuous rv's.
Covariance: Cov(X, Y) = E(XY) - μ_X·μ_Y.
Correlation: ρ_{X,Y} = Cov(X, Y)/(σ_X·σ_Y);
X, Y independent ⇒ ρ = 0;
ρ = 0 ⇒ X, Y uncorrelated.
Sample Mean Distribution:

Assume iid rv's X_1, ..., X_n with mean μ and SD σ.
E(X̄) = μ, and V(X̄) = σ²/n.
If T_0 = Σ X_i, E(T_0) = nμ, V(T_0) = nσ².
If X_i ~ N(μ, σ²), then X̄ ~ N(μ, σ²/n).
CLT: if n is large, approximately X̄ ~ N(μ, σ²/n) and T_0 ~ N(nμ, nσ²).
Linear Combinations:
Assume rv's X_1, ..., X_n, and
Y = a_1·X_1 + ... + a_n·X_n, for some constants a_1, ..., a_n.
Expected Value: If E(X_i) = μ_i,
E(Y) = a_1·μ_1 + ... + a_n·μ_n.
Variance: If V(X_i) = σ_i², and the X_i's are independent,
V(Y) = a_1²·σ_1² + ... + a_n²·σ_n²; σ_Y = √V(Y).
For any X_i's, V(Y) = Σ_i Σ_j a_i·a_j·Cov(X_i, X_j).
Normal Case: If X_i ~ N(μ_i, σ_i²), independently,
Y is normally distributed.
POINT ESTIMATION CONCEPTS
Point Estimates

A point estimate of a parameter θ is a single number that is the most plausible value
for θ. A suitable statistic θ̂ for estimating θ is called a point estimator for θ.
Unbiased Estimators
θ̂ is an unbiased estimator of θ if E(θ̂) = θ.
If θ̂ is biased, E(θ̂) - θ is called the bias of θ̂.
Principle: choose an unbiased estimator.
If X ~ Bin(n, p), then the sample proportion p̂ = X/n is an unbiased estimator
of p.
Given rv's X_1, ..., X_n with E(X_i) = μ and V(X_i) = σ² for some distribution D, X̄ is unbiased
for μ and S² = Σ(X_i - X̄)²/(n-1) is unbiased for σ². For D symmetric, X̄, the median, or
any trimmed mean are unbiased for μ.
Minimum Variance Unbiased Estimators
a) MVUE Principle: among unbiased estimators, those with minimum variance (MVUE's) are preferred.
b) If X_i ~ N(μ, σ²), X̄ is an MVUE for μ.
c) A robust estimator has low variance for a variety of distributions (e.g. a 10-20%
trimmed mean).
The Standard Error
The standard error of an estimator θ̂ is σ_θ̂ = √V(θ̂).
The estimated standard error of an estimator θ̂ is denoted by σ̂_θ̂ or s_θ̂.
POINT ESTIMATION METHODS
Method of Moments
Assume
a random sample X_1, ..., X_n with pmf or pdf f(x).
The kth distribution moment is E(X^k).
The kth sample moment is (1/n)·Σ X_i^k.
Moment estimators use sample moments as
approximations to distribution moments to determine
estimates of the parameters θ_1, ..., θ_m.
Maximum Likelihood
Assume
X_1, ..., X_n have joint pmf or pdf f(x_1, ..., x_n; θ_1, ..., θ_m).
Maximum likelihood estimators (mle's) θ̂_1, ..., θ̂_m
maximize f, treated as a function of the θ_i's.
Invariance principle: if θ̂_1, ..., θ̂_m are mle's for θ_1, ..., θ_m, then the mle of
any h(θ_1, ..., θ_m) is h(θ̂_1, ..., θ̂_m).
For large n an mle is approximately an MVUE.
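A sketch of maximum likelihood for an exponential(λ) sample, where the likelihood λⁿ·e^(-λ·Σx_i) is maximized at λ̂ = n/Σx_i = 1/x̄ (the observations are made up):

```python
import math

x = [0.5, 1.2, 0.3, 2.0, 1.0]       # hypothetical exponential observations
n = len(x)

# mle: lam_hat = n / sum(x_i) = 1 / xbar
lam_hat = n / sum(x)

def log_likelihood(lam):
    """Log of lam^n * exp(-lam * sum(x_i))."""
    return n * math.log(lam) - lam * sum(x)

# lam_hat should beat nearby parameter values, as an mle must
better = all(log_likelihood(lam_hat) >= log_likelihood(l)
             for l in (lam_hat * 0.9, lam_hat * 1.1))
```

By the invariance principle, the mle of the mean 1/λ is then 1/λ̂ = x̄.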
POINT ESTIMATION SUMMARY
Terms:
Point estimate, estimator, unbiased, MVUE, standard error, distribution moments, sample
moments, MLE.
Unbiased Estimators
If X ~ Bin(n, p), then the sample proportion p̂ = X/n is an unbiased estimator
of p.
Given rv's X_1, ..., X_n with E(X_i) = μ and V(X_i) = σ²,
X̄ is unbiased for μ, and
S² = Σ(X_i - X̄)²/(n-1) is unbiased for σ².
If X_i ~ N(μ, σ²), X̄ is an MVUE for μ.
Estimation Methods
Moment estimators use sample moments (1/n)·Σ X_i^k as
approximations to distribution moments E(X^k) to
determine parameter estimates.
If X_1, ..., X_n have pmf or pdf f(x_1, ..., x_n; θ_1, ..., θ_m), the
MLE's θ̂_1, ..., θ̂_m maximize f.
CONFIDENCE INTERVAL PROPERTIES
Assume rv's X_1, ..., X_n with X_i ~ N(μ, σ²), σ known,
and observed values x_1, ..., x_n.
Confidence Intervals
The 95% confidence interval (95% CI) for μ:
P(X̄ - 1.96·σ/√n ≤ μ ≤ X̄ + 1.96·σ/√n) = 0.95,
so the 95% CI for μ is (x̄ - 1.96·σ/√n, x̄ + 1.96·σ/√n).
Interpretation: if the experiment is repeated many times, 95% of the CI's will
contain μ.
The 100(1-α)% CI for μ:
P(X̄ - z_{α/2}·σ/√n ≤ μ ≤ X̄ + z_{α/2}·σ/√n) = 1 - α,
so the 100(1-α)% CI is (x̄ - z_{α/2}·σ/√n, x̄ + z_{α/2}·σ/√n).
Choice of Sample Size

Suppose a 100(1-α)% CI of length L is desired.
Determine z_{α/2} from Φ(z_{α/2}) = 1 - α/2.
Solve L = 2·z_{α/2}·σ/√n to determine n = (2·z_{α/2}·σ/L)².
LARGE SAMPLE CI's for μ and p
Assume rv's X_1, ..., X_n with mean μ and SD σ, and
observed values x_1, ..., x_n, with n large.
Large Sample CI's for μ
By the CLT, Z = (X̄ - μ)/(σ/√n) is approximately N(0, 1), so
P(-z_{α/2} ≤ (X̄ - μ)/(σ/√n) ≤ z_{α/2}) ≈ 1 - α,
and the 100(1-α)% CI is x̄ ± z_{α/2}·σ/√n.
If σ is not known, s ≈ σ, so
x̄ ± z_{α/2}·s/√n is a large sample CI for μ,
with confidence level approximately 100(1-α)%.
Large Sample CI's for p
For any approximately normal, unbiased estimator θ̂,
θ̂ ± z_{α/2}·σ̂_θ̂ is a large sample CI for θ,
with confidence level approximately 100(1-α)%.
If X is the number of successes in a sample of size n,
an unbiased estimator for p is p̂ = X/n, with σ_p̂ = √(p(1-p)/n).
An estimator for σ_p̂ is √(p̂(1-p̂)/n), so
p̂ ± z_{α/2}·√(p̂(1-p̂)/n) is a large sample CI for p,
with confidence level approximately 100(1-α)%.
Given a desired CI length L, n = 4·z_{α/2}²·p̂(1-p̂)/L²,
but p̂ depends on n; use of p̂ = 0.5 always provides length ≤ L.
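Putting the large-sample CI for p together (the poll counts and the desired length L are made up; z = 1.96 is the usual 95% critical value):

```python
import math

# Hypothetical poll: 340 successes out of n = 500
n, x = 500, 340
p_hat = x / n
z = 1.96                                   # z_{alpha/2} for a 95% CI

half = z * math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half, p_hat + half)          # p_hat +/- z * sqrt(p_hat(1-p_hat)/n)

# Sample size guaranteeing length <= L: worst case p_hat = 0.5
# gives n = 4 z^2 (0.25) / L^2 = (z/L)^2
L = 0.05
n_needed = math.ceil((z / L) ** 2)
```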
NORMAL CONFIDENCE INTERVALS
Assume a random sample X_1, ..., X_n from N(μ, σ²).
The t Distribution
A t distribution with ν degrees of freedom (df) has pdf:
f(x; ν) = [Γ((ν+1)/2)/(√(νπ)·Γ(ν/2))]·(1 + x²/ν)^(-(ν+1)/2), -∞ < x < ∞.
Properties of t distributions:
1. f(x; ν) is symmetric about x = 0 and bell-shaped.
2. f(x; ν) is more spread out than the standard normal pdf.
3. As ν → ∞, f(x; ν) → the standard normal pdf.
The t critical value t_{α,ν} is the point where
P(T ≥ t_{α,ν}) = α, for T with a t distribution with ν df.
Normal CI's using t_{α,ν}'s
Theorem: T = (X̄ - μ)/(S/√n) has a t distribution with n-1 df.
P(-t_{α/2,n-1} ≤ T ≤ t_{α/2,n-1}) = 1 - α.
The 100(1-α)% CI for μ is
x̄ ± t_{α/2,n-1}·s/√n.
The prediction interval for a single future observation X_{n+1}, with level 100(1-α)%, is
x̄ ± t_{α/2,n-1}·s·√(1 + 1/n).
CI's for σ² and σ for Normal RV's
Assume a random sample X_1, ..., X_n from N(μ, σ²).
The χ² Distribution
A χ² distribution with ν degrees of freedom has pdf:
f(x; ν) = x^(ν/2 - 1)·e^(-x/2) / (2^(ν/2)·Γ(ν/2)), x ≥ 0.
Note: the χ² pdf is skewed, with mean ν.
The χ² critical value χ²_{α,ν} is the point where
P(X ≥ χ²_{α,ν}) = α, for X with a χ² distribution with ν df.
CI's for σ² and σ using χ²_{α,ν}'s
Theorem: The rv
(n-1)·S²/σ² has a χ² distribution with n-1 df.
P(χ²_{1-α/2,n-1} ≤ (n-1)·S²/σ² ≤ χ²_{α/2,n-1}) = 1 - α.
The 100(1-α)% CI for σ² is
( (n-1)·s²/χ²_{α/2,n-1} , (n-1)·s²/χ²_{1-α/2,n-1} ).
The 100(1-α)% CI for σ is
( √((n-1)·s²/χ²_{α/2,n-1}) , √((n-1)·s²/χ²_{1-α/2,n-1}) ).
CONFIDENCE INTERVALS SUMMARY
Terms:
100(1-α)% confidence interval, critical value,
t distribution, χ² distribution.
100(1-α)% CI for μ
σ known, X_i ~ N(μ, σ²) or n large:
x̄ ± z_{α/2}·σ/√n.
If a CI of length L is desired, use n = (2·z_{α/2}·σ/L)².
σ unknown, n ≥ 30:
x̄ ± z_{α/2}·s/√n.
If a CI of length L is desired, use n = (2·z_{α/2}·s/L)².
σ unknown, X_i ~ N(μ, σ²), n < 30:
x̄ ± t_{α/2,n-1}·s/√n.
HYPOTHESES and TEST PROCEDURES
Hypotheses
Statistical hypothesis: a claim about the value(s) of some population
characteristic(s).
Null hypothesis H_0: the claim initially assumed to be true.
Takes the form H_0: θ = θ_0 (θ_0 is the null value).
The alternative hypothesis H_a is the other claim.
Takes the form
1. H_a: θ > θ_0 (implicit null hypoth. θ ≤ θ_0),
2. H_a: θ < θ_0 (implicit null hypoth. θ ≥ θ_0), or
3. H_a: θ ≠ θ_0.
Test Procedures:
A test procedure is specified by
1. A test statistic, a function of the sample data on which the decision to reject or
not reject H_0 is based.
2. A rejection region R, the set of all test statistic values for which H_0 will be
rejected.
Errors in Hypothesis Testing
Type I error: H_0 is rejected, when true.
Let α = P(type I error) = P(H_0 rejected, when H_0 true).
Type II error: H_0 is not rejected, when false.
Let β = P(type II error) = P(H_0 not rejected, when H_0 false).
Decreasing R to obtain a smaller α results in a larger β.

100(1-α)% CI for p
n large, with p̂ = x/n:
p̂ ± z_{α/2}·√(p̂(1-p̂)/n).
If a CI of length L is desired, use n = 4·z_{α/2}²·p̂(1-p̂)/L²;
use of p̂ = 0.5 always provides length ≤ L.
100(1-α)% CI for σ²
X_i ~ N(μ, σ²), with sample variance s²:
( (n-1)·s²/χ²_{α/2,n-1} , (n-1)·s²/χ²_{1-α/2,n-1} ).
POPULATION MEAN TESTS Assume that the null hypothesis is: H_0: μ = μ_0.
Case I: Normal Population with Known σ
Test statistic: z = (x̄ - μ_0)/(σ/√n).
Rejection regions: z ≥ z_α for H_a: μ > μ_0; z ≤ -z_α for H_a: μ < μ_0;
|z| ≥ z_{α/2} for H_a: μ ≠ μ_0.
Sample size for a one-tailed test: n = [σ·(z_α + z_β)/(μ_0 - μ')]².
Sample size for a two-tailed test: n ≈ [σ·(z_{α/2} + z_β)/(μ_0 - μ')]².
Case II: Large Sample Tests
- use Case I with s in place of σ.
Case III: Normal Population, unknown σ, small n
Test statistic: t = (x̄ - μ_0)/(s/√n), with n-1 df.
Rejection regions: t ≥ t_{α,n-1}; t ≤ -t_{α,n-1}; |t| ≥ t_{α/2,n-1}.
Type II Error Probabilities: require use of graphs or complicated numerical
integration.
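A worked Case I example (all numbers hypothetical): H_0: μ = 100 vs H_a: μ > 100 with σ known.

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical data: sigma known, n = 36, observed xbar = 104
mu0, sigma, n, xbar = 100, 15, 36, 104
z = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic

alpha = 0.05
z_alpha = 1.645                              # upper-tail critical value z_{0.05}
reject = z >= z_alpha                        # rejection region: z >= z_alpha
p_value = 1 - phi(z)                         # upper-tailed P-value
```

Here z = 1.6 falls just short of the rejection region, so H_0 is not rejected at level 0.05; the P-value (just above 0.05) tells the same story.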
POPULATION PROPORTION TESTS Assume that the null hypothesis is: H_0: p = p_0.
Large Sample Tests:
assume n·p_0 ≥ 10 and n·(1 - p_0) ≥ 10.
Test statistic: z = (p̂ - p_0)/√(p_0·(1-p_0)/n).
Rejection regions: z ≥ z_α; z ≤ -z_α; |z| ≥ z_{α/2}.
One-tailed test sample size: n = [z_α·√(p_0·q_0) + z_β·√(p'·q')]²/(p' - p_0)².
Two-tailed test sample size: n ≈ [z_{α/2}·√(p_0·q_0) + z_β·√(p'·q')]²/(p' - p_0)².
Small Sample Tests:
use the Binomial distribution directly.
P-VALUES for HYPOTHESIS TESTING
P-Values
Definition: The P-value is the smallest level of
significance at which H_0 would be rejected when
a specified test procedure is used.
If P-value ≤ α then reject H_0 at level α.
If P-value > α then do not reject H_0 at level α.
Data is said to be significant if H_0 is rejected,
otherwise data is said to be insignificant.
P-Values for a z-test
P-value = 1 - Φ(z) for an upper-tailed test; Φ(z) for a lower-tailed test;
2·[1 - Φ(|z|)] for a two-tailed test.
P-Values for a t-test
P-value = the corresponding tail area(s) under the t curve with n-1 df.
These require interpolation in tables for the t distribution.
HYPOTHESIS TESTING SUMMARY
Terms:
hypothesis, null hypothesis, alternative hypothesis,
Type I, II errors, rejection region, P-value, significant data.
Population Mean Tests:
null hypothesis is H_0: μ = μ_0.
σ known, X_i ~ N(μ, σ²) or n large, using z = (x̄ - μ_0)/(σ/√n):
reject if z ≥ z_α (upper-tailed), z ≤ -z_α (lower-tailed), or |z| ≥ z_{α/2} (two-tailed).
Sample size for a one-tailed test: n = [σ·(z_α + z_β)/(μ_0 - μ')]².
Sample size for a two-tailed test: n ≈ [σ·(z_{α/2} + z_β)/(μ_0 - μ')]².
σ unknown, n ≥ 30: use previous with s in place of σ.
σ unknown, X_i ~ N(μ, σ²), n < 30: use
t = (x̄ - μ_0)/(s/√n) with n-1 df.