
Third Week

Part 1
Random Variable & Probability
Distribution

Adapted From :
Probability & Statistics for Engineers & Scientists, 8th Ed.
Walpole/Myers/Myers/Ye (c)2007

Chap 5-1
A Random Variable

• A random variable is a function that associates a real
  number with each element in a sample space.
• Example: 3 components can either be defective (D) or not (N).
  • What is the sample space for this situation?
  • S = {NNN, NND, NDN, NDD, DNN, DND, DDN, DDD}
  • Let random variable X denote the number of defective
    components in each sample point.
  • Describe P(X ≤ 2) in words.
    • "The probability that at most 2 of the 3 components are defective."
  • What are the values of P(X = x), for x = 0, 1, 2, 3?
  • Assuming all 8 outcomes are equally likely: .125, .375, .375, .125.
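Those probabilities can be checked by brute force; a minimal sketch, assuming each of the 8 outcomes is equally likely:

```python
from itertools import product
from fractions import Fraction

# Enumerate the sample space: 3 components, each Defective (D) or Not (N),
# assuming all 8 outcomes are equally likely.
sample_space = ["".join(s) for s in product("ND", repeat=3)]

# X = number of defective components in each sample point.
pmf = {x: Fraction(sum(s.count("D") == x for s in sample_space), len(sample_space))
       for x in range(4)}

print(pmf)  # {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}
```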

Chap 5-2
Discrete and Continuous
Sample Spaces

• A discrete sample space has a finite or countably infinite number of
  points (outcomes).
  • Example of countably infinite: the experiment consists of flipping a
    coin until a heads occurs.
  • S = ?
  • S = {H, TH, TTH, TTTH, TTTTH, TTTTTH, …}
  • S has a countably infinite number of sample points.
• A continuous sample space has an infinite number of points,
  equivalent to the number of points on a line segment.
• For discrete sample spaces, we know that the sum of all of the
  probabilities of the points in the sample space equals 1.
• Addition won't work for continuous sample spaces. Here,
  P(X = x) = 0 for any given value of x.

Chap 5-3
Discrete Distributions

• For a discrete random variable X, we generally
  look at the probability P(X = x) of X taking on
  each value x.
• Often, the probability can be expressed in a
  formula, f(x) = P(X = x).
• The set of ordered pairs (x, f(x)) is called the
  probability distribution or probability function of X.
• Note that f(x) ≥ 0, and Σx f(x) = 1.

Chap 5-4
Cumulative Distribution & Plotting

• The cumulative distribution, denoted F(x), of a
  discrete random variable X with distribution f(x),
  is F(x) = P(X ≤ x).
• How is F(x) calculated?
  • F(x) = Σt ≤ x f(t).
• It is useful to plot both a probability distribution
  and the corresponding cumulative distribution.
  • Typically, the values of f(x) versus x are plotted using
    a probability histogram.
  • Cumulative distributions are also plotted using a
    similar type of histogram/step function.
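As a quick sketch of that sum, here F(x) is computed for the two-coin pmf used later in these slides (values .25, .50, .25):

```python
# pmf of X = number of heads in two fair coin tosses
f = {0: 0.25, 1: 0.50, 2: 0.25}

def F(x):
    """Cumulative distribution F(x) = P(X <= x) = sum of f(t) for t <= x."""
    return sum(p for t, p in f.items() if t <= x)

print([F(x) for x in range(3)])  # [0.25, 0.75, 1.0]
```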
Chap 5-5
Continuous Distributions
• Continuous distributions have an infinite number of
  points in the sample space, so for a given value of x,
  what is P(X = x)?
  • P(X = x) = 0.
  • Otherwise the probabilities couldn't sum to 1.
• What we can calculate is the probability that X lies in a
  given interval, such as P(a < X < b), or P(X < c).
• Since the probability of any individual point is 0,
  P(a < X < b) = P(a ≤ X ≤ b);
  that is, the endpoints can be included or not.
• For continuous distributions, f(x) is called a probability
  density function.
Chap 5-6
Probability Density Functions

• If f(x) is a continuous probability density function,
  • f(x) ≥ 0, as before.
  • What corresponds to Σx f(x) = 1 for discrete distributions?
  • ∫-∞..∞ f(x) dx = 1.
  • And P(a ≤ X ≤ b) = ?
  • P(a ≤ X ≤ b) = ∫a..b f(x) dx.
• The cumulative distribution of a continuous random variable X is?
  • F(x) = P(X ≤ x) = ?
  • F(x) = ∫-∞..x f(t) dt.
• What is P(a < X ≤ b) in terms of F(x)?
  • P(a < X ≤ b) = F(b) - F(a).
  • If discrete, must use "a < X", and not "a ≤ X", above.
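These integral identities can be checked numerically. A minimal sketch with a hypothetical density f(x) = 2x on [0, 1] (so F(x) = x² there), using a midpoint-rule approximation of the integrals:

```python
# Hypothetical density: f(x) = 2x on [0, 1], 0 elsewhere; its CDF is F(x) = x**2 there.
def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0, 1)     # should be close to 1
p = integrate(f, 0.25, 0.5)    # P(0.25 <= X <= 0.5) = F(0.5) - F(0.25) = 0.25 - 0.0625 = 0.1875
```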

Chap 5-7
Joint Probability Distributions

• Given a pair of discrete random variables on the same sample
  space, X and Y, the joint probability distribution of X and Y is

    f(x,y) = P(X = x, Y = y)

  f(x,y) equals the probability that X = x and Y = y occur simultaneously.
• The usual rules hold for joint probability distributions:
  • f(x,y) ≥ 0
  • Σx Σy f(x,y) = 1
  • For any region A in the xy plane,
    P[(X,Y) ∈ A] = ΣΣ(x,y) ∈ A f(x,y)
• For continuous joint probability distributions, the sums above are
  replaced with integrals.

Chap 5-8
Marginal Distributions

• The marginal distribution of X alone or Y alone
  can be calculated from the joint distribution
  function as follows:
  • g(x) = Σy f(x,y) and h(y) = Σx f(x,y) if discrete
  • g(x) = ∫ f(x,y) dy and h(y) = ∫ f(x,y) dx if continuous
• In other words, for example, g(x) = P(X = x) is
  the sum (or integral) of f(x,y) over all values of y.
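A minimal sketch with a small hypothetical joint table, summing out each variable to get the marginals:

```python
# Hypothetical joint distribution f(x, y) for x in {0, 1}, y in {0, 1, 2}.
f = {(0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.10,
     (1, 0): 0.15, (1, 1): 0.30, (1, 2): 0.15}

xs = {x for x, _ in f}
ys = {y for _, y in f}

# Marginals: g(x) = sum over y of f(x, y); h(y) = sum over x of f(x, y).
g = {x: sum(f[x, y] for y in ys) for x in xs}
h = {y: sum(f[x, y] for x in xs) for y in ys}

print(g)  # g(0) ≈ 0.40, g(1) ≈ 0.60
print(h)  # h(0) ≈ 0.25, h(1) ≈ 0.50, h(2) ≈ 0.25
```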
Chap 5-9
Conditional Distributions
• For either discrete or continuous random variables X
  and Y, the conditional distribution of Y, given that
  X = x, is

    f(y | x) = f(x,y) / g(x)   if g(x) > 0

  and

    f(x | y) = f(x,y) / h(y)   if h(y) > 0

• X and Y are statistically independent if
    f(x,y) = g(x) h(y)
  for all x and y within their range.
• A similar equation holds for n mutually statistically
  independent jointly distributed random variables.

Chap 5-10
Statistical Independence

• The definition of independence is as before:
  • Previously, P(A | B) = P(A) and P(B | A) = P(B).
• How about in terms of the conditional distributions?
  • f(x | y) = g(x) and f(y | x) = h(y).
• The other way to demonstrate independence?
  • f(x,y) = g(x) h(y) ∀ x, y in range.
• Similar formulas also apply to more than two
  mutually independent random variables.
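An independence check can be sketched directly from the definitions; this hypothetical joint table deliberately fails the test:

```python
# Hypothetical joint distribution that is NOT independent.
f = {(0, 0): 0.4, (0, 1): 0.1,
     (1, 0): 0.1, (1, 1): 0.4}

g = {x: sum(p for (xx, y), p in f.items() if xx == x) for x in (0, 1)}  # marginal of X
h = {y: sum(p for (x, yy), p in f.items() if yy == y) for y in (0, 1)}  # marginal of Y

# Conditional distribution of Y given X = x: f(y | x) = f(x, y) / g(x).
cond = {(y, x): f[x, y] / g[x] for x in (0, 1) for y in (0, 1)}

# Independence holds iff f(x, y) == g(x) h(y) for every (x, y).
independent = all(abs(f[x, y] - g[x] * h[y]) < 1e-12 for x in (0, 1) for y in (0, 1))
print(independent)  # False: f(0,0) = 0.4 but g(0) h(0) = 0.25
```

Note that f(y | x) = 0.8 for y = 0 given x = 0, which differs from the marginal h(0) = 0.5, confirming dependence both ways.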

Chap 5-11
Third Week

Part 2
Mathematical Expectation

Adapted From :
Probability & Statistics for Engineers & Scientists, 8th Ed.
Walpole/Myers/Myers/Ye (c)2007

Chap 5-12
Mean of a Set of Observations

• Suppose an experiment involves tossing 2 coins. The
  result is either 0, 1, or 2 heads. Suppose the experiment
  is repeated 15 times, and suppose that 0 heads is
  observed 3 times, 1 head 8 times, and 2 heads 4 times.
• What is the average number of heads flipped?
  • x̄ = (0+0+0+1+1+1+1+1+1+1+1+2+2+2+2) / 15
      = ((0)(3) + (1)(8) + (2)(4)) / 15 = 1.07
• This could also be written as a weighted average,
  • x̄ = (0)(3/15) + (1)(8/15) + (2)(4/15) = 1.07
    where 3/15, 8/15, etc. are the fraction of times the given number
    of heads came up.
• The average is also called the mean.
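The two forms of that average agree, as a quick sketch shows:

```python
# The 15 repetitions described above: 0 heads 3 times, 1 head 8 times, 2 heads 4 times.
observations = [0] * 3 + [1] * 8 + [2] * 4

x_bar = sum(observations) / len(observations)

# Equivalent weighted-average form: each value times its relative frequency.
weighted = 0 * (3 / 15) + 1 * (8 / 15) + 2 * (4 / 15)

print(round(x_bar, 2))  # 1.07
```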
Chap 5-13
Mean of a Random Variable

• A similar technique, taking the probability of an
  outcome times the value of the random variable
  for that outcome, is used to calculate the mean
  of a random variable.
• The mean or expected value μ of a random
  variable X with probability distribution f(x) is

    μ = E(X) = Σx x f(x)          if discrete, or

    μ = E(X) = ∫-∞..∞ x f(x) dx   if continuous


Chap 5-14
Mean of a Random Variable
Depending on X

• If X is a random variable with distribution f(x),
  the mean μg(X) of the random variable g(X) is

    μg(X) = E[g(X)] = Σx g(x) f(x)          if discrete, or

    μg(X) = E[g(X)] = ∫-∞..∞ g(x) f(x) dx   if continuous

Chap 5-15
Expected Value for a Joint
Distribution

• If X and Y are random variables with joint probability
  distribution f(x,y), the mean or expected value μg(X,Y) of the
  random variable g(X,Y) is

    μg(X,Y) = E[g(X,Y)] = Σx Σy g(x,y) f(x,y)      if discrete, or

    μg(X,Y) = E[g(X,Y)] = ∫∫ g(x,y) f(x,y) dy dx   if continuous

• Note that the mean of a distribution is a single value, so it
  doesn't make sense to talk of "the mean" of the joint distribution
  f(x,y) itself; the mean is defined for a random variable such as g(X,Y).

Chap 5-16
Variance

• What was the variance of a set of observations?
• The variance σ² of a random variable X with distribution f(x) is

    σ² = E[(X - μ)²] = Σx (x - μ)² f(x)          if discrete, or

    σ² = E[(X - μ)²] = ∫-∞..∞ (x - μ)² f(x) dx   if continuous

• An equivalent and easier computational formula, also easy to
  remember, is

    σ² = E[X²] - E[X]² = E[X²] - μ²

  • "The expected value of X², minus the expected value of X... squared."
  • Derivation from the previous formula is simple.
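Both formulas give the same number, as a sketch with the two-coin pmf confirms:

```python
f = {0: 0.25, 1: 0.50, 2: 0.25}   # X = # heads in two fair coin tosses

mu = sum(x * p for x, p in f.items())                       # E[X]
var_def = sum((x - mu) ** 2 * p for x, p in f.items())      # E[(X - mu)^2]
var_short = sum(x * x * p for x, p in f.items()) - mu ** 2  # E[X^2] - mu^2

print(mu, var_def, var_short)  # 1.0 0.5 0.5
```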

Chap 5-17
Covariance
• If X and Y are random variables with joint probability
  distribution f(x,y), the covariance σXY of X and Y is defined as

    σXY = E[(X - μX)(Y - μY)]

• The better computational formula for covariance is

    σXY = E(XY) - μX μY

• Note that although the standard deviation σ can't be
  negative, the covariance σXY can be negative.
• Covariance will be useful later when looking at the linear
  relationship between two random variables.

Chap 5-18
Correlation Coefficient
• If X and Y are random variables with covariance σXY and standard
  deviations σX and σY respectively, the correlation coefficient ρXY is
  defined as

    ρXY = σXY / (σX σY)

• Correlation coefficient notes:
  • What are the units of ρXY? (None; it is dimensionless.)
  • What is the possible range of ρXY? (-1 ≤ ρXY ≤ 1.)
  • What is the meaning of the correlation coefficient?
  • If ρXY = 1 or -1, then there is an exact linear relationship between Y
    and X (i.e., Y = a + bX). If ρXY = 1, then b > 0, and if ρXY = -1,
    then b < 0.
  • Can show this by calculating the correlation of X and a + bX, which
    simplifies to b / √(b²) = b / |b| = ±1.
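That claim can be sketched numerically for the hypothetical line Y = 2 + 3X, with X the two-coin head count:

```python
f = {0: 0.25, 1: 0.50, 2: 0.25}  # pmf of X = # heads in two coin tosses

a, b = 2, 3                       # hypothetical exact linear relation Y = a + bX

def E(g):
    """Expected value of g(X) under the pmf f."""
    return sum(g(x) * p for x, p in f.items())

mu_x = E(lambda x: x)
mu_y = E(lambda x: a + b * x)

cov   = E(lambda x: x * (a + b * x)) - mu_x * mu_y        # sigma_XY = E(XY) - mu_X mu_Y
var_x = E(lambda x: x * x) - mu_x ** 2
var_y = E(lambda x: (a + b * x) ** 2) - mu_y ** 2

rho = cov / (var_x ** 0.5 * var_y ** 0.5)
print(round(rho, 10))  # 1.0, since b > 0
```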

Chap 5-19
Third Week

Part 3
Discrete Probability Distribution

Adapted From :
Probability & Statistics for Engineers & Scientists, 8th Ed.
Walpole/Myers/Myers/Ye (c)2007
Introduction to Business Statistics, 5e
Kvanli/Guynes/Pavur (c)2000
South-Western College Publishing
Statistics for Managers
Using Microsoft® Excel 4th Edition

Chap 5-20
Introduction to Probability
Distributions
• Random Variable
  • Represents a possible numerical value from
    an uncertain event
• Random variables are classified as either discrete
  random variables or continuous random variables.
Chap 5-21
Discrete Random Variables
• Can only assume a countable number of values

Examples:

• Roll a die twice.
  Let X be the number of times 4 comes up
  (then X could be 0, 1, or 2 times).

• Toss a coin 5 times.
  Let X be the number of heads
  (then X = 0, 1, 2, 3, 4, or 5).

Chap 5-22
Discrete Probability Distribution

Experiment: Toss 2 coins. Let X = # heads.

4 possible outcomes: TT, TH, HT, HH

Probability distribution:

  X value   Probability
  0         1/4 = .25
  1         2/4 = .50
  2         1/4 = .25

[Probability histogram: bars of height .25, .50, .25 at X = 0, 1, 2]
Chap 5-23
Discrete Random Variable
Summary Measures
• Expected Value of a discrete distribution
  (Weighted Average)

    μ = E(X) = Σ_{i=1}^{N} Xi P(Xi)

• Example: Toss 2 coins, X = # of heads;
  compute the expected value of X:

    X    P(X)
    0    .25
    1    .50
    2    .25

  E(X) = (0 × .25) + (1 × .50) + (2 × .25) = 1.0

Chap 5-24
Discrete Random Variable
Summary Measures
(continued)
• Variance of a discrete random variable

    σ² = Σ_{i=1}^{N} [Xi − E(X)]² P(Xi)

• Standard deviation of a discrete random variable

    σ = √σ² = √( Σ_{i=1}^{N} [Xi − E(X)]² P(Xi) )

  where:
    E(X) = expected value of the discrete random variable X
    Xi = the ith outcome of X
    P(Xi) = probability of the ith outcome of X

Chap 5-25
Probability Distributions

• Certain probability distributions occur over and
  over in the real world.
  • Probability tables are published, and means and standard
    deviations are calculated, to make applying them easier.
• The key is to apply the correct distribution based on
  the characteristics of the problem being studied.

Chap 5-26
Probability Distributions
Probability distributions fall into two families:

• Discrete probability distributions: Binomial,
  Hypergeometric, Poisson
• Continuous probability distributions: Normal,
  Exponential
Chap 5-27
The Binomial Distribution
Among the discrete probability distributions, we cover
the Binomial, Hypergeometric, and Poisson; first, the
Binomial.

Chap 5-28
Binomial Probability Distribution
Bernoulli process:
• A fixed number of observations, n (repeated trials)
  • e.g.: 15 tosses of a coin; ten light bulbs taken from a shipment
• Each trial results in an outcome that may be classified
  as a success or failure
  • e.g.: head or tail in each toss of a coin; defective or not defective
    light bulb
  • Generally called "success" and "failure"
  • Probability of success is p, probability of failure is q
• Constant probability of success (p) for each observation
  • e.g.: the probability of getting a tail is the same each time we toss
    the coin
• Repeated trials are independent
Binomial Probability Distribution
(continued)

• Observations are independent
  • The outcome of one observation does not affect the outcome
    of the others
• Two sampling methods yield independence:
  • Sampling from an infinite population without replacement
  • Sampling from a finite population with replacement

Chap 5-30
Possible Binomial Distribution
Settings

• A manufacturing plant labels items as
  either defective or acceptable
• A firm bidding for contracts will either get a
  contract or not
• A marketing research firm receives survey
  responses of "yes I will buy" or "no I will not"
• New job applicants either accept the offer
  or reject it
Chap 5-31
Rule of Combinations

• The number of combinations of selecting X
  objects out of n objects is

    (n choose X) = n! / ( X! (n − X)! )

  where:
    n! = n(n - 1)(n - 2) . . . (2)(1)
    X! = X(X - 1)(X - 2) . . . (2)(1)
    0! = 1 (by definition)

Chap 5-32
Binomial Distribution Formula

    P(X) = [ n! / ( X! (n − X)! ) ] p^X q^(n−X)

P(X) = probability of X successes in n trials,
       with probability of success p on each trial
X = number of 'successes' in sample (X = 0, 1, 2, ..., n)
n = sample size (number of trials or observations)
p = probability of "success"
q = 1 - p

Example: Flip a coin four times, let X = # heads:
  n = 4, p = 0.5, q = (1 - .5) = .5, X = 0, 1, 2, 3, 4
Chap 5-33
Example:
Calculating a Binomial Probability
What is the probability of one success in five
observations if the probability of success is .1?
X = 1, n = 5, and p = .1

P(X = 1) = [ n! / ( X! (n − X)! ) ] p^X q^(n−X)
         = [ 5! / ( 1! (5 − 1)! ) ] (.1)¹ (1 − .1)⁵⁻¹
         = (5)(.1)(.9)⁴
         = .32805
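The same calculation as a minimal sketch using the standard library:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(round(binomial_pmf(1, 5, 0.1), 5))  # 0.32805
```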
Chap 5-34
Binomial Distribution
• The shape of the binomial distribution depends on the
  values of p and n
  • Here, n = 5 and p = .1: the histogram of P(X) is
    right-skewed, with its mode at X = 0.
  • Here, n = 5 and p = .5: the histogram of P(X) is
    symmetric about X = 2.5.

[Probability histograms of P(X) versus X for n = 5, p = 0.1 and n = 5, p = 0.5]
Chap 5-35
Binomial Distribution
Characteristics

• Mean
    μ = E(X) = np

• Variance and standard deviation
    σ² = np(1 - p) = npq
    σ = √( np(1 - p) ) = √(npq)

  where n = sample size
        p = probability of success
        (1 – p) = probability of failure

Chap 5-36
Binomial Characteristics
Examples
• n = 5, p = .1:
    μ = np = (5)(.1) = 0.5
    σ = √( np(1 - p) ) = √( (5)(.1)(1 − .1) ) = 0.6708

• n = 5, p = .5:
    μ = np = (5)(.5) = 2.5
    σ = √( np(1 - p) ) = √( (5)(.5)(1 − .5) ) = 1.118

[Probability histograms for n = 5, p = 0.1 and n = 5, p = 0.5, as on the previous slide]
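Those summary values can be sketched directly from the formulas:

```python
from math import sqrt

def binomial_mean_sd(n, p):
    """Mean np and standard deviation sqrt(n p (1 - p)) of a binomial distribution."""
    return n * p, sqrt(n * p * (1 - p))

print(binomial_mean_sd(5, 0.1))  # ≈ (0.5, 0.6708)
print(binomial_mean_sd(5, 0.5))  # ≈ (2.5, 1.118)
```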

Chap 5-37
Using Binomial Tables
n = 10
x … p=.20 p=.25 p=.30 p=.35 p=.40 p=.45 p=.50
0 … 0.1074 0.0563 0.0282 0.0135 0.0060 0.0025 0.0010 10
1 … 0.2684 0.1877 0.1211 0.0725 0.0403 0.0207 0.0098 9
2 … 0.3020 0.2816 0.2335 0.1757 0.1209 0.0763 0.0439 8
3 … 0.2013 0.2503 0.2668 0.2522 0.2150 0.1665 0.1172 7
4 … 0.0881 0.1460 0.2001 0.2377 0.2508 0.2384 0.2051 6
5 … 0.0264 0.0584 0.1029 0.1536 0.2007 0.2340 0.2461 5
6 … 0.0055 0.0162 0.0368 0.0689 0.1115 0.1596 0.2051 4
7 … 0.0008 0.0031 0.0090 0.0212 0.0425 0.0746 0.1172 3
8 … 0.0001 0.0004 0.0014 0.0043 0.0106 0.0229 0.0439 2
9 … 0.0000 0.0000 0.0001 0.0005 0.0016 0.0042 0.0098 1
10 … 0.0000 0.0000 0.0000 0.0000 0.0001 0.0003 0.0010 0
… p=.80 p=.75 p=.70 p=.65 p=.60 p=.55 p=.50 x

Examples:
n = 10, p = .35, x = 3: b(3;10,.35) = .2522
n = 10, p = .75, x = 2: b(2;10,.75) = .0004

Chap 5-38
Binomial Distribution
Examples
1. A pharmaceutical company is concerned about 5 of its workers who are
   often late. The probability that any given worker arrives late is
   0.4, and the workers arrive independently of one another. What is
   the probability that 2 of the 5 workers arrive late?

   Answer: P(2) = [ 5! / ( 2! (5 – 2)! ) ] (0.4)² (0.6)³
                = (120/12) (0.16) (0.216)
                = 0.3456

2. The owner of an electronics store observes that the probability a
   visitor who comes in makes a purchase is 0.3. If 15 visitors come
   on a given day, what is:
   a. the probability that at least 1 visitor makes a purchase?
   b. the probability that no more than 4 visitors make a purchase?

   Answer: a. P(x ≥ 1) = 1 – P(0) = 1 – 0.0047 = 0.9953
           b. P(x ≤ 4) = P(0) + P(1) + P(2) + P(3) + P(4) = 0.5155

3. Harley Davidson, a quality-control supervisor at PT Kyoto, performs
   his routine task of checking automatic transmissions. The
   procedure: he pulls 10 transmission units and inspects them for
   defects. Past data show that 2% of transmissions are defective.
   a. What is the probability that more than 2 transmissions in the
      sample are defective? (Ans. 0.0009)
   b. What is the probability that no transmissions are defective?
      (Ans. 0.8171)
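The quoted answers can be reproduced with the binomial pmf; a minimal sketch:

```python
from math import comb

def b(x, n, p):
    """Binomial pmf: P(X = x) in n trials with success probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example 1: 2 of 5 workers late, p = 0.4
p1 = b(2, 5, 0.4)                                # ≈ 0.3456

# Example 2: 15 visitors, p = 0.3
p2a = 1 - b(0, 15, 0.3)                          # at least 1 buyer  ≈ 0.9953
p2b = sum(b(x, 15, 0.3) for x in range(5))       # no more than 4    ≈ 0.5155

# Example 3: 10 transmissions, p = 0.02
p3a = 1 - sum(b(x, 10, 0.02) for x in range(3))  # more than 2 defective ≈ 0.0009
p3b = b(0, 10, 0.02)                             # none defective        ≈ 0.8171
```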

Chap 5-39
The Hypergeometric Distribution

• "n" trials in a sample taken from a finite
  population of size N
• Sample taken without replacement
• Trials are therefore dependent
• Concerned with finding the probability of "x"
  successes in the sample when there are "k"
  successes in the population

Chap 5-40
Hypergeometric Distribution
Formula
(Two possible outcomes per trial)

    P(X) = h(x; N, n, k) = (k choose x)(N − k choose n − x) / (N choose n)

where
  N = population size
  k = number of successes in the population
  N – k = number of failures in the population
  n = sample size
  x = number of successes in the sample
  n – x = number of failures in the sample
Chap 5-41
Properties of the
Hypergeometric Distribution
• The mean of the hypergeometric distribution is

    μ = E(x) = nk / N

• The variance is

    σ² = [ (N − n) / (N − 1) ] · n · (k/N) · (1 − k/N)

Chap 5-42
Using the
Hypergeometric Distribution
• Example: 3 different computers are checked from 10 in
  the department. 4 of the 10 computers have illegal
  software loaded. What is the probability that 2 of the 3
  selected computers have illegal software loaded?

  N = 10, n = 3, k = 4, x = 2

  P(X = 2) = (4 choose 2)(6 choose 1) / (10 choose 3)
           = (6)(6) / 120
           = 0.3

  The probability that 2 of the 3 selected computers will have
  illegal software loaded is .30, or 30%.
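A minimal sketch of the same calculation:

```python
from math import comb

def hypergeom_pmf(x, N, n, k):
    """P(X = x) = C(k, x) * C(N - k, n - x) / C(N, n)."""
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

print(hypergeom_pmf(2, 10, 3, 4))  # 0.3
```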
Chap 5-43
The Poisson Distribution

• Properties of the Poisson distribution:
  • The number of outcomes occurring in one time interval or
    specified region is independent of the number that occurs in any
    other disjoint time interval or region of space.
  • The probability that a single outcome will occur during a very
    short time interval or in a small region is proportional to the
    length of the time interval or the size of the region, and does
    not depend on the number of outcomes occurring outside this time
    interval or region.
  • The probability that more than one outcome will occur in such a
    short time interval or fall in such a small region is negligible.
• The average number of outcomes per unit is λ (lambda).

Chap 5-44
Poisson Distribution Formula

    P(X) = p(x; λt) = e^(−λt) (λt)^x / x!

where:
  x = number of successes per unit = 0, 1, 2, …
  λ = average number of outcomes per unit time, distance, area,
      or volume
  e = base of the natural logarithm system (2.71828...)

Examples:
  # customers arriving in 15 min
  # defects per case of light bulbs
Chap 5-45
Poisson Distribution
Characteristics

• Mean
    μ = λt
• Variance and standard deviation
    σ² = λt
    σ = √(λt)

Chap 5-46
Using Poisson Tables
λ

X 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90

0 0.9048 0.8187 0.7408 0.6703 0.6065 0.5488 0.4966 0.4493 0.4066


1 0.0905 0.1637 0.2222 0.2681 0.3033 0.3293 0.3476 0.3595 0.3659
2 0.0045 0.0164 0.0333 0.0536 0.0758 0.0988 0.1217 0.1438 0.1647
3 0.0002 0.0011 0.0033 0.0072 0.0126 0.0198 0.0284 0.0383 0.0494
4 0.0000 0.0001 0.0003 0.0007 0.0016 0.0030 0.0050 0.0077 0.0111
5 0.0000 0.0000 0.0000 0.0001 0.0002 0.0004 0.0007 0.0012 0.0020
6 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003
7 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000

Example: Find P(X = 2) if λt = .50

    P(X = 2) = e^(−λt) (λt)^X / X! = e^(−0.50) (0.50)² / 2! = .0758
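The same table lookup can be sketched from the formula:

```python
from math import exp, factorial

def poisson_pmf(x, lam_t):
    """P(X = x) = e**(-lam_t) * lam_t**x / x!"""
    return exp(-lam_t) * lam_t**x / factorial(x)

print(round(poisson_pmf(2, 0.5), 4))  # 0.0758
```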

Chap 5-47
Graph of Poisson Probabilities
Graphically, for λt = .50:

    X    P(X)
    0    0.6065
    1    0.3033
    2    0.0758
    3    0.0126
    4    0.0016
    5    0.0002
    6    0.0000
    7    0.0000

  P(X = 2) = .0758

[Probability histogram of P(x) versus x for λt = .50]

Chap 5-48
Poisson Distribution Shape

• The shape of the Poisson distribution
  depends on the parameter μ = λt:

[Probability histograms of P(x) versus x: μ = 0.50 is sharply
right-skewed with its mode at x = 0; μ = 3.00 is more spread out
and nearly symmetric, with its mode near x = 3]

Chap 5-49
Poisson Distribution
Examples
1. Each week, an average of 5 pigeons die from striking the monument
   in the city park. An environmental group has asked the local
   government (PEMDA) to allocate funds to prevent this. The
   government will agree if the probability that more than 3 birds
   die exceeds 0.7. Will the government provide the funds?

   Answer: P(x > 3) = 1 – P(x ≤ 3) = 1 – P(0) – P(1) – P(2) – P(3)
                    = 0.735
   Since the probability is greater than 0.7, the government will
   provide the funds.

2. A pianist is bothered if, just as he is about to begin a concert,
   he hears coughing from the audience. At his last concert, he
   counted 8 coughs just before the concert began. The pianist has
   threatened the organizers that if he hears more than 5 coughs
   tonight, he will cancel the concert. What is the probability that
   he performs tonight?

   Answer: P(x ≤ 5) = P(0) + P(1) + P(2) + P(3) + P(4) + P(5)
                    = 0.1912

3. A photocopier jams once per 100 pages copied. If someone must copy
   500 pages, what is the probability of zero jams?

   Answer: for an interval of 100 pages, λt = 1; for 500 pages,
   λt = 5. So P(x = 0) = 0.0067.

4. During peak hours, customers arrive at 90 per hour. What is the
   probability that 15 or more customers arrive during 6 minutes of
   peak time? (Ans. 0.0414)
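The quoted answers can be reproduced with the Poisson pmf and its cumulative sum; a minimal sketch (for example 4, λt = 90 × 6/60 = 9):

```python
from math import exp, factorial

def poisson_pmf(x, lam_t):
    """P(X = x) = e**(-lam_t) * lam_t**x / x!"""
    return exp(-lam_t) * lam_t**x / factorial(x)

def poisson_cdf(x, lam_t):
    """P(X <= x)."""
    return sum(poisson_pmf(k, lam_t) for k in range(x + 1))

p1 = 1 - poisson_cdf(3, 5)    # more than 3 birds, mean 5/week        ≈ 0.735
p2 = poisson_cdf(5, 8)        # at most 5 coughs, mean 8              ≈ 0.1912
p3 = poisson_pmf(0, 5)        # 0 jams in 500 pages, rate 1/100 pages ≈ 0.0067
p4 = 1 - poisson_cdf(14, 9)   # 15+ arrivals in 6 min at 90/hour      ≈ 0.041
```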

Chap 5-50
Summary :
Discrete Probability Distributions

• Binomial: number of successes in n independent
  trials, with each trial having probability of success p
  and probability of failure q (= 1 - p).
• Hypergeometric: a sample of size n is selected
  from N items without replacement, where k items are
  classified as successes (N - k are failures).
• Poisson: if λ is the rate of occurrence of an event
  (number of outcomes per unit time), gives the probability
  that x outcomes occur in a time interval of length t.

Chap 5-51
