
Lecture 3

Discrete random variables and Probability Distributions


Discrete Random Variables
A discrete random variable can only assume a countable number of values.

Examples:
- Roll a die twice. Let X be the number of times 6 comes up (then X could be 0, 1, or 2).
- Toss a coin 5 times. Let X be the number of heads (then X = 0, 1, 2, 3, 4, or 5).
- Count the cars arriving at a tollbooth. Let X be the number of cars arriving per day (then X = 0, 1, 2, 3, 4, …).


Discrete Probability Distribution
Experiment: Toss 2 Coins.
4 possible outcomes: TT, TH, HT, HH

Let X = # heads.

Probability Distribution:

X value   Probability
0         1/4 = 0.25
1         2/4 = 0.50
2         1/4 = 0.25

(Bar chart of the distribution: probability 0.25 at X = 0, 0.50 at X = 1, 0.25 at X = 2.)

Probability Distribution Required Properties
p(x) ≥ 0 for any value of x.
The individual probabilities sum to 1:

∑ p(x) = 1
x

(The notation indicates summation over all possible x values)

Expected Value
Expected Value (mean) of a discrete distribution (weighted average):

µ = E(X) = ∑ x p(x)   (sum over all x)

Example: Toss 2 coins, X = # of heads.

x     p(x)     x·p(x)
0     0.25     0×0.25 = 0
1     0.50     1×0.50 = 0.50
2     0.25     2×0.25 = 0.50

E(X) = 0 + 0.50 + 0.50 = 1.0
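The weighted-average calculation above can be checked in a few lines of Python (an illustrative sketch; the code is not part of the original slides):

```python
# X = number of heads in 2 coin tosses; p(x) from the table above
p = {0: 0.25, 1: 0.50, 2: 0.25}

# E(X) = sum over x of x * p(x)
expected_value = sum(x * px for x, px in p.items())  # 1.0
```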

Variance and Standard Deviation
Variance of a discrete random variable X:

σ² = E[(X − µ)²] = ∑ (x − µ)² p(x)   (sum over all x)

Standard Deviation of a discrete random variable X:

σ = √σ² = √( ∑ (x − µ)² p(x) )

Standard Deviation Example
Example: Toss 2 coins, X = # heads; compute the standard deviation (recall E(X) = 1).

σ = √( ∑ (x − µ)² p(x) )
σ = √( (0 − 1)²(.25) + (1 − 1)²(.50) + (2 − 1)²(.25) ) = √0.50 = 0.707

Possible number of heads = 0, 1, or 2.

Example: The number of mortgages approved per week

Home mortgages approved/week (x)   Probability p(x)   x p(x)            (x − µ)² p(x)
0                                  0.10               (0)(0.10) = 0.0   (0 − 2.8)²(0.10) = 0.784
1                                  0.10               (1)(0.10) = 0.1   (1 − 2.8)²(0.10) = 0.324
2                                  0.20               (2)(0.20) = 0.4   (2 − 2.8)²(0.20) = 0.128
3                                  0.30               (3)(0.30) = 0.9   (3 − 2.8)²(0.30) = 0.012
4                                  0.15               (4)(0.15) = 0.6   (4 − 2.8)²(0.15) = 0.216
5                                  0.10               (5)(0.10) = 0.5   (5 − 2.8)²(0.10) = 0.484
6                                  0.05               (6)(0.05) = 0.3   (6 − 2.8)²(0.05) = 0.512
Total                              1.00

µ = E(X) = 2.8
σ² = Var(X) = 2.46
σ = √2.46 = 1.57
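As a sketch, the mortgage calculation can be reproduced directly from the definitions (Python, not part of the original slides):

```python
import math

# Probability function for mortgages approved per week, from the table above
p = {0: 0.10, 1: 0.10, 2: 0.20, 3: 0.30, 4: 0.15, 5: 0.10, 6: 0.05}

mu = sum(x * px for x, px in p.items())               # E(X) = 2.8
var = sum((x - mu) ** 2 * px for x, px in p.items())  # Var(X) = 2.46
sigma = math.sqrt(var)                                # about 1.57
```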

Functions of Random Variables
If p(x) is the probability function of a discrete random variable X, and g(X) is some function of X, then the expected value of the function g is

E[g(X)] = ∑ g(x) p(x)   (sum over all x)

Computational formula for variance:

σ² = Var(X) = ∑ x² p(x) − µ² = E(X²) − [E(X)]²

Example: The number of mortgages approved per week (revisited)

Home mortgages approved/week (x)   Probability p(x)   x p(x)            x² p(x)
0                                  0.10               (0)(0.10) = 0.0   (0)²(0.10) = 0.0
1                                  0.10               (1)(0.10) = 0.1   (1)²(0.10) = 0.1
2                                  0.20               (2)(0.20) = 0.4   (2)²(0.20) = 0.8
3                                  0.30               (3)(0.30) = 0.9   (3)²(0.30) = 2.7
4                                  0.15               (4)(0.15) = 0.6   (4)²(0.15) = 2.4
5                                  0.10               (5)(0.10) = 0.5   (5)²(0.10) = 2.5
6                                  0.05               (6)(0.05) = 0.3   (6)²(0.05) = 1.8
Total                              1.00

µ = E(X) = 2.8
E(X²) = 10.3
σ² = 10.3 − 2.8² = 2.46
σ = √2.46 = 1.57
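The computational formula gives the same variance with less arithmetic; a quick Python check (illustrative, not from the slides):

```python
# Same mortgage distribution as above
p = {0: 0.10, 1: 0.10, 2: 0.20, 3: 0.30, 4: 0.15, 5: 0.10, 6: 0.05}

mu = sum(x * px for x, px in p.items())        # E(X) = 2.8
e_x2 = sum(x * x * px for x, px in p.items())  # E(X^2) = 10.3
var = e_x2 - mu ** 2                           # 10.3 - 2.8^2 = 2.46
```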

Linear Functions of Random Variables
Let a and b be any constants.

a) E(a) = a and Var(a) = 0
   i.e., if a random variable always takes the value a, it will have mean a and variance 0.

b) E(bX) = bµX and Var(bX) = b²σX²
   i.e., the expected value of b·X is b·E(X).

Linear Functions of Random Variables (continued)
Let random variable X have mean µX and variance σX², and let a and b be any constants. Then

E(a + bX) = a + bµX
Var(a + bX) = b²σX²

Linear Functions of Random Variables (continued)
Let random variable X have mean µX and variance σX². Let

Z = (X − µX)/σX

Then Z = a + bX, where a = −µX/σX and b = 1/σX, so that

E(Z) = a + bE(X) = −µX/σX + µX/σX = 0
Var(Z) = b²Var(X) = (1/σX)² Var(X) = 1
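A numeric sanity check of the standardization result, using the two-coin distribution from earlier (Python sketch, not from the slides):

```python
import math

p = {0: 0.25, 1: 0.50, 2: 0.25}  # X = # heads in 2 tosses
mu = sum(x * px for x, px in p.items())
sigma = math.sqrt(sum((x - mu) ** 2 * px for x, px in p.items()))

# Z = (X - mu)/sigma keeps the probabilities but shifts/rescales the values
z_mean = sum((x - mu) / sigma * px for x, px in p.items())        # 0
# E(Z^2); this equals Var(Z) here because E(Z) = 0
z_var = sum(((x - mu) / sigma) ** 2 * px for x, px in p.items())  # 1
```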

Bernoulli Distribution
Bernoulli trial: a random experiment that gives rise to two mutually exclusive and collectively exhaustive outcomes, e.g., head or tail in each toss of a coin, or a light bulb being defective or not defective.
The outcomes are generally called "success" and "failure".
Probability of success is p; probability of failure is 1 − p.
Define random variable X: X = 1 if "success" and X = 0 if "failure".
The Bernoulli probability function is:

P(X=0) = 1 − p,  P(X=1) = p

Mean and Variance of the Bernoulli Distribution
The mean is µ = p:

µ = E(X) = ∑ x P(x) = (0)(1 − p) + (1)p = p

The variance is σ² = p(1 − p):

σ² = E[(X − µ)²] = ∑ (x − µ)² p(x)
   = (0 − p)²(1 − p) + (1 − p)²p = p(1 − p)
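A small helper makes the Bernoulli formulas concrete (illustrative sketch; the function name is ours, not from the slides):

```python
def bernoulli_mean_var(p):
    """Mean and variance of Bernoulli(p), straight from the definitions."""
    mean = 0 * (1 - p) + 1 * p                             # E(X) = p
    var = (0 - mean) ** 2 * (1 - p) + (1 - mean) ** 2 * p  # p(1 - p)
    return mean, var

m, v = bernoulli_mean_var(0.3)  # m = 0.3, v = 0.21
```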

Binomial Experiment
Properties of a binomial experiment:
1. The experiment consists of a sequence of n identical Bernoulli trials: for each trial, two mutually exclusive and collectively exhaustive outcomes ("success" and "failure") are observed.
2. The trials are independent: the outcome of one observation does not affect the outcome of the others.
3. Constant probability of success (p) for each trial.

Binomial Random Variable
Let X be the number of successes observed during the n Bernoulli trials; then X is called a binomial random variable, denoted Bin(n, p).
Note that X is a discrete random variable taking values 0, 1, 2, …, n.

Example: Probability distribution of a binomial random variable
A coin is tossed 3 times. Let X = number of heads and p = probability of heads.
X is distributed as Bin(3, p) due to the following:
1. n = 3 identical trials.
2. Two mutually exclusive outcomes with "success" = "head" and "failure" = "tail".
3. Probability of heads (p) remains constant for the 3 tosses.
4. Trials are independent of each other. (Why?)

Tree Diagram for the Coin Tossing Example

Outcome      Value of X
(H, H, H)    3
(H, H, T)    2
(H, T, H)    2
(T, H, H)    2
(H, T, T)    1
(T, H, T)    1
(T, T, H)    1
(T, T, T)    0

The distribution function of X for the coin tossing example (writing q = 1 − p):

P(X = 0) = P(TTT) = q³
P(X = 1) = P(HTT) + P(THT) + P(TTH) = 3pq²
P(X = 2) = P(HHT) + P(HTH) + P(THH) = 3p²q
P(X = 3) = P(HHH) = p³

Rule of Combinations
The number of combinations of selecting X objects out of n objects is

C(n, X) = n! / (X! (n − X)!)

where n! = (n)(n − 1)(n − 2)…(2)(1), X! = (X)(X − 1)(X − 2)…(2)(1), and 0! = 1 (by definition).

E.g., toss a coin three times: the number of outcomes resulting in exactly one head is C(3, 1) = 3 (HTT, THT, TTH).
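Python's standard library exposes this rule directly as math.comb (a quick illustration, not part of the slides):

```python
import math

# C(n, X) = n! / (X! (n - X)!)
ways_one_head = math.comb(3, 1)  # 3 outcomes: HTT, THT, TTH
total_outcomes = sum(math.comb(3, x) for x in range(4))  # 1+3+3+1 = 8 = 2^3
```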

Binomial Distribution Formula

P(X = x) = p(x) = C(n, x) p^x (1 − p)^(n−x)

p(x) = probability of x successes in n trials, with probability of success p on each trial
x = number of 'successes' in the sample (x = 0, 1, 2, …, n)
n = sample size (number of trials or observations)
p = probability of "success"

Example: Flip a coin four times and let x = # heads:
n = 4, p = 0.5, 1 − p = 0.5, x = 0, 1, 2, 3, 4

Example: Calculating a Binomial Probability
What is the probability distribution of the # of successes in five observations if the probability of success is 0.1?

First, compute the combinations:

C(5, 0) = 5!/(0! 5!) = 1
C(5, 1) = 5!/(1! 4!) = (5 × 4!)/4! = 5
C(5, 2) = 5!/(2! 3!) = (5 × 4 × 3!)/(2 × 3!) = 10
C(5, 3) = 5!/(3! 2!) = 10
C(5, 4) = C(5, 1) = 5
C(5, 5) = C(5, 0) = 1

Example (Cont'd)

P(X = 0) = C(5, 0)(0.1)⁰(1 − 0.1)⁵ = (0.9)⁵ = 0.59049
P(X = 1) = C(5, 1)(0.1)¹(1 − 0.1)⁴ = (5)(0.1)(0.9)⁴ = 0.32805
P(X = 2) = (10)(0.1)²(0.9)³ = 0.0729
P(X = 3) = (10)(0.1)³(0.9)² = 0.0081
P(X = 4) = (5)(0.1)⁴(0.9)¹ = 0.00045
P(X = 5) = (1)(0.1)⁵(0.9)⁰ = 0.00001
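These six probabilities follow mechanically from the binomial formula; a short Python sketch (not from the slides):

```python
import math

def binom_pmf(x, n, p):
    # P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

probs = [binom_pmf(x, 5, 0.1) for x in range(6)]
# approximately [0.59049, 0.32805, 0.0729, 0.0081, 0.00045, 0.00001]
```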

Shape of the Binomial Distribution
The shape of the binomial distribution depends on the values of n and p.

n = 5 and p = 0.1:
P(X=0) = 0.59049, P(X=1) = 0.32805, P(X=2) = 0.07290, P(X=3) = 0.00810, P(X=4) = 0.00045, P(X=5) = 0.00001
(Bar chart: right-skewed, with the highest bar at X = 0.)

n = 5 and p = 0.5:
P(X=0) = 0.03125, P(X=1) = 0.15625, P(X=2) = 0.31250, P(X=3) = 0.31250, P(X=4) = 0.15625, P(X=5) = 0.03125
(Bar chart: symmetric, with the highest bars at X = 2 and X = 3.)

Shape of the Binomial Distribution (Cont'd)
(Bar charts of the binomial probability function for n = 20, n = 50, and n = 100, each shown for p = 0.1 and p = 0.5.)

Binomial Distribution Characteristics
Mean: µ = E(X) = np
Variance and standard deviation: σ² = np(1 − p), σ = √(np(1 − p))

where n = sample size, p = probability of success, and (1 − p) = probability of failure.

Binomial Characteristics Examples

n = 5, p = 0.1:
µ = np = (5)(0.1) = 0.5
σ = √(np(1 − p)) = √((5)(0.1)(1 − 0.1)) = 0.6708

n = 5, p = 0.5:
µ = np = (5)(0.5) = 2.5
σ = √(np(1 − p)) = √((5)(0.5)(1 − 0.5)) = 1.118

Example
Large lots of incoming products at a manufacturing plant are inspected for defects. An engineer draws 10 items from each lot with replacement and inspects them.
Decision: reject the lot if the number of defects > 1; accept the lot if the number of defects ≤ 1.
Assume 5% of the products are defective. What is the probability that a lot will be accepted?
Let X be the number of defects; then X is distributed as Bin(10, 0.05). (Please verify the three conditions.)

P(accept) = P(X = 0) + P(X = 1)
          = C(10, 0)(0.05)⁰(1 − 0.05)¹⁰ + C(10, 1)(0.05)¹(1 − 0.05)⁹
          = (0.95)¹⁰ + 10(0.05)(0.95)⁹
          = 0.5987 + 0.3151 = 0.9138
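The acceptance probability can be verified with the same pmf calculation (Python sketch, not from the slides; the slide's 0.9138 comes from rounding each term to four places before adding):

```python
import math

def binom_pmf(x, n, p):
    # P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

# Accept the lot if at most 1 of the 10 sampled items is defective
p_accept = binom_pmf(0, 10, 0.05) + binom_pmf(1, 10, 0.05)
```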

Joint Probability Distribution
A real estate agent is interested in the relationship between the number of lines (X) in a newspaper advertisement for an apartment and the volume of enquiries (Y) from potential renters. Historical records are used to compute the following joint probability table:

                           Number of Lines (X)
Number of enquiries (Y)    3      4      5      Total
0                          0.09   0.07   0.03   0.19
1                          0.14   0.23   0.10   0.47
2                          0.07   0.16   0.11   0.34
Total                      0.30   0.46   0.24   1

Joint Probability Functions
A joint probability function is used to express the probability that X takes the specific value x and simultaneously Y takes the value y, as a function of x and y:

p(x, y) = P(X = x and Y = y)

Properties:
0 ≤ p(x, y) ≤ 1, and ∑∑ p(x, y) = 1 (summing over all x and y)

The marginal probabilities are

pX(x) = ∑ p(x, y)   (sum over y)
pY(y) = ∑ p(x, y)   (sum over x)

Conditional Probability Functions
The conditional probability function of the random variable Y expresses the probability that Y takes the value y when the value x is specified for X:

p(y|x) = p(x, y) / pX(x)

Similarly, the conditional probability function of X, given Y = y, is:

p(x|y) = p(x, y) / pY(y)

Independence
The jointly distributed random variables X and Y are said to be independent if and only if their joint probability function is the product of their marginal probability functions:

p(x, y) = pX(x) pY(y)   for all possible pairs of values x and y

Equivalently, X and Y are independent if

p(x|y) = pX(x)   or   p(y|x) = pY(y)

A set of k random variables are independent if and only if

p(x₁, x₂, …, x_k) = pX₁(x₁) pX₂(x₂) ⋯ pX_k(x_k)

Example: Stock Returns
Let X and Y be random variables of possible percent returns (0%, 5%, 10%, and 15%) for each of two stocks, A and B. The joint probability distribution is given in the table below:

             Y return
X return     0%       5%       10%      15%
0%           0.0625   0.0625   0.0625   0.0625
5%           0.0625   0.0625   0.0625   0.0625
10%          0.0625   0.0625   0.0625   0.0625
15%          0.0625   0.0625   0.0625   0.0625

05 − 0.03125 36 .0625 0.075) 2 (0.0625 0.25 5% 0.152 (0.25 0.25) + (0.0625 0.25) + (0.10 − 0.25 0.0625 0.0625 0.0625 0.25 pX(x) 0.052 (0.0752 = 0.0625 0.25 1 We can compute the mean and variance of X (and Y) µ X = ∑ xP( X = x) x = 0(0.25) + 0.0625 0.003125 or equivalently σ X 2 = ∑ x 2 P( X = x) − µ X 2 x = 0 2 (0.10 2 (0.25) + 0.0625 0.0625 0.25) + 0.0625 0.0625 0.075) 2 (0.075 σ X 2 = ∑ ( x − µ X ) 2 P( X = x) x = (0 − 0.25) + 0.075) 2 (0.25) = 0.0625 0.15 − 0.0625 0.25) + 0.15(0.25) − 0.05(0.25) + 0.10(0.25) = 0.25) + (0.0625 0.075) 2 (0.25 0.Y return X return 0% 5% 10% 15% pY(y) 0% 0.0625 0.25 15% 0.25 10% 0.

             Y return
X return     0%       5%       10%      15%      pX(x)
0%           0.0625   0.0625   0.0625   0.0625   0.25
5%           0.0625   0.0625   0.0625   0.0625   0.25
10%          0.0625   0.0625   0.0625   0.0625   0.25
15%          0.0625   0.0625   0.0625   0.0625   0.25
pY(y)        0.25     0.25     0.25     0.25     1

Conditional probability function:
P(X=0|Y=0) = P(X=0, Y=0)/P(Y=0) = 0.0625/0.25 = 0.25

Are X and Y independent? We just showed that P(X=0|Y=0) = P(X=0); we can also show that P(X=x|Y=y) = P(X=x) for all possible x values and y values (x, y = 0, 0.05, 0.10, 0.15). Therefore, X and Y are independent.

Equivalently, we can look at the joint distribution:
P(X=x, Y=y) = 0.0625
P(X=x)P(Y=y) = 0.25 × 0.25 = 0.0625
P(X=x, Y=y) = P(X=x)P(Y=y) for all possible values of x and y.
Therefore, X and Y are independent.

Covariance
Let X and Y be discrete random variables with means µX and µY. The expected value of (X − µX)(Y − µY) is called the covariance between X and Y. For discrete random variables:

Cov(X, Y) = E[(X − µX)(Y − µY)] = ∑∑ (x − µx)(y − µy) p(x, y)

Computational formula for covariance:

Cov(X, Y) = E(XY) − µxµy = ∑∑ xy p(x, y) − µxµy

Covariance and Independence
The covariance measures the strength of the linear relationship between two variables.
If two random variables are statistically independent, the covariance between them is 0.
The converse is not necessarily true.

             Y return
X return     0%       5%       10%      15%      pX(x)
0%           0.0625   0.0625   0.0625   0.0625   0.25
5%           0.0625   0.0625   0.0625   0.0625   0.25
10%          0.0625   0.0625   0.0625   0.0625   0.25
15%          0.0625   0.0625   0.0625   0.0625   0.25
pY(y)        0.25     0.25     0.25     0.25     1

The covariance between X and Y for the stock return example is

Cov(X, Y) = ∑∑ xy p(x, y) − µxµy
          = 0×0×0.0625 + 0×0.05×0.0625 + … + 0.15×0.15×0.0625 − 0.075×0.075
          = 0.005625 − 0.005625 = 0
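A direct computation of the covariance over all 16 cells (Python sketch, not from the slides):

```python
returns = [0.0, 0.05, 0.10, 0.15]
joint_p = 0.0625  # every (x, y) cell has the same probability

mu_x = sum(x * 0.25 for x in returns)
mu_y = sum(y * 0.25 for y in returns)
e_xy = sum(x * y * joint_p for x in returns for y in returns)
cov = e_xy - mu_x * mu_y  # 0, as expected for independent X and Y
```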

Conditional Mean and Variance
The conditional mean of Y given X = x:

E(Y|x) = E(Y|X = x) = ∑ y p(y|x)

The conditional variance given X = x:

Var(Y|x) = Var(Y|X = x) = ∑ (y − E(Y|x))² p(y|x)

A computational formula for the conditional variance:

Var(Y|x) = ∑ y² p(y|x) − {E(Y|x)}² = E(Y²|x) − {E(Y|x)}²

Note that E(Y|x) = E(Y) and Var(Y|x) = Var(Y) when X and Y are independent of each other.

09 0.11 0.2333 1 P(Y=y|X=4) 0.16 0.03 0.07 0.09/0.46 0.24 2 0.14 0.07/0.07/0.3 = 0.16/0.3 = 0.10 0.11/0.3 = 0.10/0.23 0.47 2 0.46 0.03/0.46 P(Y=y|X=5) 0.14/0.Conditional mean and variance Number of Lines (X) 3 4 5 Total Number of enquiries (Y) 0 0.07 0.24 .46 0.24 Total 1 1 42 P(Y=y|X=3) 0.3 0.34 Total 0.30 0.23/0.24 1 (example) ( l ) Conditional distributions: y 0 1 0.4667 0.19 1 0.

E(Y|X=3) = ∑ y P(Y=y|X=3) = 0×0.3 + 1×0.4667 + 2×0.2333 = 0.9333

E(Y²|X=3) = ∑ y² P(Y=y|X=3) = 0²×0.3 + 1²×0.4667 + 2²×0.2333 = 1.4

Var(Y|X=3) = E(Y²|X=3) − [E(Y|X=3)]² = 1.4 − 0.9333² = 0.5289

We can also compute E(Y|X=4) = 1.196 and E(Y|X=5) = 1.333.

3 + 1.933 × 0.34 = 1 15 0.24 = 1.333 × 0.15 In general E (Y ) = ∑ E (Y | X = x) p X ( x ) x The Law of iterated expectations states that E (Y ) = E [E (Y | X )] 44 .47 + 2 × 0.Therefore ∑ E (Y | X = x) p ( x ) X x = 0.15 The unconditional mean of Y is . E (Y ) = ∑ ypY ( y ) y = 0 × 0 19 + 1× 0.19 0 47 0 34 1.46 + 1.196 × 0.

Linear Function of Random Variables
Let a and b be constants, and consider the linear function of X and Y:

W = aX + bY

The mean value for W is

µW = E[W] = E[aX + bY] = aµX + bµY

The variance for W is

σW² = a²σX² + b²σY² + 2ab Cov(X, Y)

When X and Y are independent, σW² = a²σX² + b²σY².

In particular, when X and Y are independent:

E(X + Y) = µX + µY,  Var(X + Y) = σX² + σY²
E(X − Y) = µX − µY,  Var(X − Y) = σX² + σY²

More generally, if X₁, X₂, …, Xn are independent random variables:

E(∑ Xi) = ∑ µXi,  Var(∑ Xi) = ∑ σXi²

In particular, if X₁, X₂, …, Xn are independent and identically distributed (i.i.d.) random variables with mean µ and variance σ²:

E(∑ Xi) = nµ,  Var(∑ Xi) = nσ²
E(X̄) = µ,  Var(X̄) = σ²/n

Example: Portfolio Analysis
An investor has $1000 to invest and two investment opportunities, each requiring a minimum of $500. Let X and Y be the profit per $100 investment from the first and second investment opportunity, respectively. The probability functions for X and Y are

P(X=−5) = 0.4 and P(X=20) = 0.6
P(Y=0) = 0.6 and P(Y=25) = 0.4

X and Y are independent.

Example (cont'd)
The investor has the following possible strategies:
a. $1000 in the first investment
b. $1000 in the second investment
c. $500 in each investment

Find the mean and variance of the profit from each strategy. Let W be the profit; then
a. Wa = 10X
b. Wb = 10Y
c. Wc = 5X + 5Y

Example (Continued)

µX = E(X) = ∑ x P(x) = (−5)(0.4) + (20)(0.6) = $10
σX² = ∑ x² P(x) − µX² = (−5)²(0.4) + (20)²(0.6) − 10² = 150

µY = E(Y) = ∑ y P(y) = (0)(0.6) + (25)(0.4) = $10
σY² = ∑ y² P(y) − µY² = (0)²(0.6) + (25)²(0.4) − 10² = 150

E(Wa) = 10µX = $100,  σWa² = 100σX² = 15000
E(Wb) = 10µY = $100,  σWb² = 100σY² = 15000
E(Wc) = 5µX + 5µY = $100,  σWc² = 25σX² + 25σY² = 7500

Note the reduction in risk as a result of diversification!
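The three strategies can be compared numerically (Python sketch, not from the slides):

```python
px = {-5: 0.4, 20: 0.6}  # profit per $100 in investment 1
py = {0: 0.6, 25: 0.4}   # profit per $100 in investment 2

mu_x = sum(x * p for x, p in px.items())                   # 10
var_x = sum(x * x * p for x, p in px.items()) - mu_x ** 2  # 150
mu_y = sum(y * p for y, p in py.items())                   # 10
var_y = sum(y * y * p for y, p in py.items()) - mu_y ** 2  # 150

# Strategy c (Wc = 5X + 5Y, X and Y independent) halves the variance
mean_c = 5 * mu_x + 5 * mu_y     # 100
var_c = 25 * var_x + 25 * var_y  # 7500, vs 15000 for strategies a and b
```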