
Training Session 1

D. Majumdar
Random Variables
• Coin Tossing Experiment
• Ω1 = {H,T} – Toss the coin once
• Ω2 = {HH, HT, TH, TT} – Toss the coin twice
• Ω3 = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT} – Toss the coin thrice
• A set of all possible outcomes of a random experiment is called a Sample Space and is
denoted by “Ω”
• Let’s define 2 R.V.s:
• A. x = No. of Heads
• B. y = No. of Tails
• x(Ω3) = {3, 2, 2, 2, 1, 1, 1, 0}
• y(Ω3) = {0, 1, 1, 1, 2, 2, 2, 3}
• Definition of a Random Variable – an R.V. is a real-valued function that assigns a real value to each outcome of the sample space.
• Let an outcome be denoted by “ω”; then x(ω) -> Real Value
• Note that x and y operate on the same Sample Space, but each is defined in such a way that its values differ (see the sketch below)
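A minimal Python sketch of this definition (the names omega3, x and y are ours, purely illustrative): the sample space is a plain list, and each R.V. is an ordinary function on it.

```python
# Sample space for three coin tosses.
omega3 = ["HHH", "HHT", "HTH", "THH", "HTT", "THT", "TTH", "TTT"]

# A random variable is a real-valued function on the sample space.
def x(outcome):
    return outcome.count("H")   # number of Heads

def y(outcome):
    return outcome.count("T")   # number of Tails

print([x(w) for w in omega3])   # [3, 2, 2, 2, 1, 1, 1, 0]
print([y(w) for w in omega3])   # [0, 1, 1, 1, 2, 2, 2, 3]
```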
Random Variables Example
• Ω2 = {HH, HT, TH, TT}
• Let there be a Stock: whenever the Toss turns up Heads the Stock goes up by a certain value, and whenever it turns up Tails the Stock goes down by a certain value

S0 ─┬─ S1(H) ─┬─ S2(HH)
    │         └─ S2(HT)
    └─ S1(T) ─┬─ S2(TH)
              └─ S2(TT)
Random Variables Example
• S1 – is a Random Variable
• S2 – is a Random Variable
• S0 – what about this one? The value of S0 is known at t = 0 and doesn’t depend on the first 2 coin tosses.
• These kinds of Random Variables are called Degenerate Random Variables
• Interesting Fact – when we defined a Random Variable we did not use the concept of Probability anywhere.
• Probability comes in when we talk about the distribution of a Random Variable.
Probability
• Ω3 = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT} – Toss the
coin thrice
• Let P(H) = p, P(T) = q
• p+q=1
• Coin tosses are independent of one another, so P(HHH) = p·p·p = p³
• Let there be an event A = [1st coin toss is a Head]. What is
the Probability of such an event?
• By intuition the answer should be p
• P(A) = P(HHH) + P(HHT) + P(HTH) + P(HTT)
• P(A) = p³ + p²q + p²q + pq²
• P(A) = p²(p + q) + pq(p + q) = p² + pq = p(p + q) = p
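The algebra is easy to check numerically. A hedged sketch: each outcome of Ω3 gets the product of per-toss probabilities, and P(A) sums over the event (p = 0.3 is an arbitrary choice of ours).

```python
from itertools import product

p = 0.3                                   # arbitrary choice of P(H)
q = 1 - p                                 # P(T), since p + q = 1

def prob(outcome):
    """Independence: P(outcome) is the product of per-toss probabilities."""
    result = 1.0
    for toss in outcome:
        result *= p if toss == "H" else q
    return result

omega3 = ["".join(t) for t in product("HT", repeat=3)]
A = [w for w in omega3 if w[0] == "H"]    # event: 1st coin toss is a Head
print(sum(prob(w) for w in A))            # 0.3..., i.e. p, matching the algebra
```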
Finite Probability Space
• A Finite Probability Space is the Probability Space we get from an experiment with a finite number of outcomes, for example “Ω3”
• Ω = {ω} – A generic Sample Space
• Important points for a Probability space
a) A sample space Ω
b) A probability measure P
• P(ω) -> [0,1] … it is a function on Ω
• Σω∈Ω P(ω) = 1, i.e. P(Ω) = 1
• A = a subset of Ω. In our example above we had defined such a subset (A = [1st coin toss is a Head])
• P(A) = Σω∈A P(ω)
• Note – Random Variables and Probability are 2 separate concepts. Later on we will come to Real World Probability and Risk Neutral Probability. In both cases the Random Variable won’t change; only the distribution of the Random Variable will change.
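One way to picture the two defining properties in code – a sketch of ours, not any library’s API: the measure is a dict whose weights sum to 1, and P(A) sums the weights over a subset (here Ω2 with the fair measure).

```python
from fractions import Fraction

# A finite probability space as a plain dict: outcome -> P(omega).
P = {"HH": Fraction(1, 4), "HT": Fraction(1, 4),
     "TH": Fraction(1, 4), "TT": Fraction(1, 4)}

assert all(0 <= w <= 1 for w in P.values())   # each P(omega) lies in [0, 1]
assert sum(P.values()) == 1                   # P(Omega) = 1

def prob(event, measure):
    """P(A) = sum of P(omega) over omega in the subset A."""
    return sum(measure[w] for w in event)

print(prob({"HH", "HT"}, P))                  # event "1st toss is a Head" -> 1/2
```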
Probability Distribution
• Ω3 = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT} – Toss the coin thrice
• Let’s define 2 R.V.s:
a) x = No. of Heads
b) y = No. of Tails
• x = {3, 2, 2, 2, 1, 1, 1, 0}
• y = {0, 1, 1, 1, 2, 2, 2, 3}
• What is the distribution of an R.V.? It is the set of probabilities with which the R.V. takes its various values.
• Let’s define P(H) = 1/2, P(T) = 1/2
• We would like to find the Probability Distribution of the R.V. x
• P(x=0) = P(TTT) = 1/8
• P(x=1) = P{HTT, THT, TTH} = 3/8
• P(x=2) = P{HHT, HTH, THH} = 3/8
• P(x=3) = P{HHH} = 1/8
• So this is the Probability Distribution of the R.V. x under the Probability Measure P(ω) = 1/8, where P(H) = 1/2, P(T) = 1/2
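This table can be reproduced by grouping outcomes by their x-value. A sketch using exact fractions (the function name is ours, not from any library):

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

def distribution_of_heads(p):
    """Return {k: P(x = k)} where x = number of Heads in three tosses."""
    q = 1 - p
    dist = defaultdict(Fraction)
    for outcome in product("HT", repeat=3):
        weight = Fraction(1)
        for toss in outcome:
            weight *= p if toss == "H" else q
        dist[outcome.count("H")] += weight
    return dict(sorted(dist.items()))

print(distribution_of_heads(Fraction(1, 2)))
# {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}
```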
Change of Probability Measure
• Let’s say P(H) = 2/3, P(T) = 1/3
• P(x=0) = P(TTT) = 1/27
• P(x=1) = P{HTT, THT, TTH} = 6/27
• P(x=2) = P{HHT, HTH, THH} = 12/27
• P(x=3) = P{HHH} = 8/27
• Now we have a different Probability Distribution of the R.V. x, but the R.V. x itself has not changed
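The sketch from the previous section makes the point concrete: the function x is untouched; only the measure we feed in changes.

```python
# Reusing distribution_of_heads from the sketch in the previous section:
print(distribution_of_heads(Fraction(2, 3)))
# {0: Fraction(1, 27), 1: Fraction(2, 9), 2: Fraction(4, 9), 3: Fraction(8, 27)}
# i.e. 1/27, 6/27, 12/27, 8/27 -- Fraction merely reduces 6/27 and 12/27.
```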
Expectations
• Ω2 = {HH, HT, TH, TT} – Toss the coin twice
• Let’s define P(H) = 1/2, P(T) = 1/2, so P(ω) = 1/4
• Let’s assume a Stock Price Process where the stock doubles whenever the toss is Heads and halves whenever it is Tails

S0 = 4 ─┬─ S1(H) = 8 ─┬─ S2(HH) = 16
        │             └─ S2(HT) = 4
        └─ S1(T) = 2 ─┬─ S2(TH) = 4
                      └─ S2(TT) = 1
Expectations
• E[S2] = 16·(1/4) + 4·(1/4) + 4·(1/4) + 1·(1/4) = 25/4
• When an R.V. is defined on the Probability Space (Ω, P), E[x] = Σω∈Ω x(ω)P(ω)
• E[ax + by] = aE[x] + bE[y]
• In the case of a Linear Function l(x) = a + bx, E[l(x)] = E[a + bx] = a + bE[x] = l(E[x])
• In case of Convex Functions, E[g(x)] >= g(E[x]),
also known as Jensen’s Inequality
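A sketch that recomputes E[S2] from the tree above and spot-checks Jensen’s inequality with the convex choice g(s) = s² (the dict and function names are ours):

```python
from fractions import Fraction

# Terminal stock values S2(omega) under P(omega) = 1/4 for each path.
s2 = {"HH": 16, "HT": 4, "TH": 4, "TT": 1}
P  = {w: Fraction(1, 4) for w in s2}

def expect(rv, measure):
    """E[X] = sum over omega of X(omega) * P(omega)."""
    return sum(rv[w] * measure[w] for w in rv)

e_s2 = expect(s2, P)
print(e_s2)                                    # 25/4

# Jensen's inequality with the convex function g(s) = s**2:
g_s2 = {w: v * v for w, v in s2.items()}
print(expect(g_s2, P), ">=", e_s2 ** 2)        # 289/4 >= 625/16
```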
Jensen’s Inequality Proof
• On the X-axis we have x, and on the Y-axis the values of g(x), a convex function
• Let’s draw a tangent at the point X = E[x]; the Tangent line can be denoted as a Linear Function l(x) = a + bx
• At the point of tangency the Y values coincide: l(E[x]) = g(E[x])
• Take another point x’, so we have l(x’) and g(x’); since g is convex, its graph lies on or above the tangent, so g(x’) >= l(x’)
• So we can safely say that at all points g(x) >= l(x)
• Hence, E[g(x)] >= E[l(x)]
• E[g(x)] >= E[a + bx] = a + bE[x] = l(E[x]) = g(E[x])
• Proved: E[g(x)] >= g(E[x])
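The tangent-line argument can also be walked through numerically; a sketch with g(x) = x² and the R.V. x from the three-toss example (the slope 2m is just g’(m) for this particular g):

```python
# Numeric walk-through of the tangent-line argument for g(x) = x**2.
xs = [3, 2, 2, 2, 1, 1, 1, 0]      # x(omega) over Omega3, fair-coin measure
pw = [1 / 8] * 8                   # P(omega) = 1/8 for each outcome

m = sum(xi * pi for xi, pi in zip(xs, pw))   # E[x] = 1.5

def g(s):                          # a convex function
    return s * s

def l(s):                          # tangent to g at m, slope g'(m) = 2m
    return g(m) + 2 * m * (s - m)

assert all(g(xi) >= l(xi) for xi in xs)      # g lies above its tangent line
Eg = sum(g(xi) * pi for xi, pi in zip(xs, pw))
print(Eg, ">=", g(m))                        # 3.0 >= 2.25
```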
