Group # 5-C
Group Members:
Submission # 1
I. Kolmogorov and his axioms of probability
In 1933, Kolmogorov put forward his three axioms of probability, which laid the foundation of the modern mathematical theory of probability. The axioms showed that the general theory of probability can be described technically on the basis of concepts such as measures, measurable spaces, measurable sets and measurable functions. Let us have a look at what the axioms are:
(1) Axiom 1: Let (Ω, F, P) be a probability space, where Ω represents the set of all possible outcomes of a random experiment, i.e. the sample space, F represents a σ-algebra of subsets of Ω, and P represents a measure on the measurable space (Ω, F) such that P(Ω) = 1. Kolmogorov called the sample space "the space of elementary events". Each element of F corresponds to an event related to the experiment, and for any event A ∈ F, P(A) represents the probability of the event A and is a number in the interval [0, 1]. Mathematically, it can be written as:
0 ≤ P(A) ≤ 1 for all A ∈ F.
(2) Axiom 2: Let X: Ω → R represent a random variable, so that {ω ∈ Ω : X(ω) < a} is an event in the σ-algebra F for every a ∈ R, and let P_X represent "the law of X", given by:
P_X(B) = P(X⁻¹(B)),
where B represents a Borel subset of the real line. The random variable thus induces a probability measure on the Borel σ-algebra of the real line. The probability of occurrence of at least one event from the sample space Ω is equal to 1, i.e. P(Ω) = 1. This is also called the "unit measure" assumption.
(3) Axiom 3: The third axiom states that for mutually exclusive (disjoint) events A and B with A, B ∈ F,
P(A ∪ B) = P(A) + P(B).
This can be extended to any countable collection of pairwise disjoint events A1, A2, … ∈ F:
P(A1 ∪ A2 ∪ …) = P(A1) + P(A2) + …
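The three axioms can be checked mechanically on a finite sample space. Below is a minimal sketch, assuming a fair six-sided die where F is the full power set of Ω; the setup and names are our own illustration, not from the text.

```python
from fractions import Fraction
from itertools import chain, combinations

# Toy probability space for a fair six-sided die (illustrative example):
# omega is the sample space, F its power set, and P the uniform measure.
omega = frozenset({1, 2, 3, 4, 5, 6})

def power_set(s):
    """All subsets of s -- the sigma-algebra F for a finite sample space."""
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def P(event):
    """Uniform measure: each of the 6 outcomes has probability 1/6."""
    return Fraction(len(event), len(omega))

F = power_set(omega)

# Axiom 1: every event's probability lies in [0, 1].
assert all(0 <= P(A) <= 1 for A in F)

# Axiom 2 (unit measure): P(Omega) = 1.
assert P(omega) == 1

# Axiom 3 (additivity): for disjoint events, P(A u B) = P(A) + P(B).
A, B = frozenset({1, 2}), frozenset({5})
assert A.isdisjoint(B) and P(A | B) == P(A) + P(B)
```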
These three axioms lead to various theorems which helped formulate the mathematical aspects of the theory of probability. Some of the theorems that were proven using these axioms are listed below:
(1) The complement rule: P(A) = 1 − P(A^c), where A^c denotes the complementary set of A.
(2) The strong law of large numbers, which states that as the number of observations becomes very large, the sample mean converges (almost surely) to the population mean.
(3) The 0-1 law, which these axioms helped Kolmogorov to prove: any event A in an asymptotic (tail) σ-algebra has probability either 0 or 1. For a sequence of random variables X1, X2, …, we define the asymptotic σ-algebra as the intersection of the σ-algebras F_n = σ(X_n, X_{n+1}, …) for n ≥ 1. Mathematically:
F_∞ = F_1 ∩ F_2 ∩ F_3 ∩ …
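The strong law of large numbers listed above can be illustrated numerically. The die-rolling Monte Carlo below is our own sketch, not from the text:

```python
import random

# Strong-law illustration (our own toy setup): sample means of fair-die
# rolls should approach the population mean 3.5 as the sample size grows.
random.seed(0)

def sample_mean(n):
    """Mean of n independent fair-die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

population_mean = 3.5

# The deviation from 3.5 is small for a large number of observations.
assert abs(sample_mean(100_000) - population_mean) < 0.05
```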
The work of Kolmogorov acted as the foundation of the calculus of probability, on which other theories, such as the theory of Markov processes, were developed.
II. Markov chains
Definition: A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. A countably infinite sequence, in which the chain moves state at discrete time steps, is called a discrete-time Markov chain (DTMC), named after the Russian mathematician Andrey Markov.
As time goes by, the process loses the memory of the past. Formally, the Markov property states that
P(X_{t+1} = j | X_t = i, X_{t−1} = i_{t−1}, …, X_0 = i_0) = P(X_{t+1} = j | X_t = i).
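A DTMC can be simulated directly from this definition: the next state is drawn using only the current state. The two-state weather chain below is our own illustrative sketch, not from the text.

```python
import random

# Minimal discrete-time Markov chain simulator (illustrative example).
random.seed(1)

# Transition probabilities: transitions[now][next] = P(next | now).
transitions = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state):
    """Sample the next state using only the current state (Markov property)."""
    states = list(transitions[state])
    weights = [transitions[state][s] for s in states]
    return random.choices(states, weights=weights)[0]

def simulate(start, n_steps):
    """Return the path [X_0, X_1, ..., X_n]."""
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path

path = simulate("sunny", 10)
assert len(path) == 11
```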
MScFE 620 Discrete Time Stochastic Processes
The matrix describing the Markov chain is called the transition matrix, which is the most important tool for analysing Markov chains. The transition matrix is usually given the symbol P. In the transition matrix P, the rows represent Now, or From (X_t), and the columns represent Next, or To (X_{t+1}). Entry (i, j) is the conditional probability that Next = j given that Now = i, i.e. the probability of going from state i to state j. To summarize:
1. The transition matrix P must list all possible states in the state space S.
2. P is a square matrix (N × N), because X_{t+1} and X_t both take values in the same state space S (of size N).
3. The rows of P should each sum to 1, which means that X_{t+1} must take one of the listed values.
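These three properties can be checked programmatically. Below is a small validator sketch; the example matrices are our own, not from the text.

```python
# Validator for the properties of a transition matrix listed above:
# square shape, rows summing to 1, and non-negative entries.
def is_valid_transition_matrix(P, tol=1e-9):
    n = len(P)
    square = all(len(row) == n for row in P)                       # property 2
    rows_sum_to_one = all(abs(sum(row) - 1.0) < tol for row in P)  # property 3
    nonneg = all(p >= 0 for row in P for p in row)
    return square and rows_sum_to_one and nonneg

# A valid 2-state example and an invalid one (row sums to 1.1).
assert is_valid_transition_matrix([[0.9, 0.1],
                                   [0.5, 0.5]])
assert not is_valid_transition_matrix([[0.9, 0.2],
                                       [0.5, 0.5]])
```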
Example: In our example, we will model the transitions of a stock market. There are three possible states for stock movements: Bear, Bull and Stagnant. The figure below shows the state diagram of the probabilities of transition between the different states.
The lines in the state diagram show the probabilities of transitioning between the different states. For example, there is a 0.15 probability of transitioning from a Bear market to a Bull market, and a 0.8 probability of a Bear market transitioning to another Bear market. The state transition probabilities can be captured in a 3×3 matrix, called a transition matrix, as shown below:
We can extend this example of a 3×3 matrix to show that the t-step transition probabilities are given by the matrix power P^t for any t: entry (i, j) of P^t is the probability of going from state i to state j in t steps.
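The t-step computation can be sketched in code. Only the Bear-row probabilities 0.15 and 0.8 come from the example above; the remaining matrix entries below are hypothetical fill-ins, chosen so that each row sums to 1.

```python
# t-step transition probabilities as the matrix power P^t.
STATES = ["Bull", "Bear", "Stagnant"]

P = [
    [0.90, 0.075, 0.025],   # Bull row: hypothetical entries
    [0.15, 0.80,  0.05],    # Bear row: 0.15 and 0.8 from the example
    [0.25, 0.25,  0.50],    # Stagnant row: hypothetical entries
]

def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, t):
    """P^t: entry (i, j) is the probability of moving i -> j in t steps."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(t):
        result = mat_mul(result, P)
    return result

P5 = mat_pow(P, 5)
# Each row of P^t is still a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P5)
```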
Application: Research has reported the application and use of Markov chains in a wide range of topics such as physics, chemistry, biology, medicine, music, game theory and sports. Markov chains have applications as statistical models of real-world processes such as cruise control systems in motor vehicles, queues of customers arriving at an airport, call-centre queues, currency exchange rates and animal population dynamics.
III. Martingales
A martingale is the mathematical description of the outcome of a fair game: given the entire history of the game, the expected gain from the next play is zero, i.e. E[X_{t+1} | X_0, …, X_t] = X_t.
Martingales allow a glimpse into the behaviour of non-independent random variables as far as their sequencing and distributions are concerned.
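The fair-game interpretation can be illustrated with cumulative winnings from fair coin flips; the simulation below is our own sketch, not from the text.

```python
import random

# Fair-game sketch: wealth after n fair +-1 bets is a martingale, so the
# expected one-step gain given the history is zero (illustrative example).
random.seed(2)

def play(n_flips):
    """Path of cumulative winnings [X_0, ..., X_n] from n fair +-1 bets."""
    wealth = 0
    path = [wealth]
    for _ in range(n_flips):
        wealth += random.choice([-1, 1])
        path.append(wealth)
    return path

# Empirically, the average one-step gain over many plays is near zero.
gains = [play(1)[1] for _ in range(100_000)]
assert abs(sum(gains) / len(gains)) < 0.02
```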
Doob's Theorems
The pioneering work of Doob can be summarized in the following well-known results:
Doob Decomposition
Every submartingale X has a unique Doob decomposition X = M + A:
X_n = M_n + A_n,
where M is a martingale (with respect to the same filtration as X), and A is a predictable, integrable, non-decreasing process with A_0 = 0.
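A standard worked example (our own, not from the text): for X_n = S_n², with S_n a simple symmetric random walk, the one-step conditional increment is E[X_n − X_{n−1} | F_{n−1}] = 1, so the predictable part is A_n = n and the martingale part is M_n = S_n² − n. The sketch below builds this decomposition pathwise.

```python
import random

# Doob decomposition of the submartingale X_n = S_n^2, where S_n is a
# simple symmetric random walk: A_n = n (predictable, non-decreasing,
# A_0 = 0) and M_n = S_n^2 - n (martingale).
random.seed(3)

def decompose(n_steps):
    """Return the paths (X, M, A) of the decomposition X_n = M_n + A_n."""
    s = 0
    X, M, A = [0], [0], [0]
    for n in range(1, n_steps + 1):
        s += random.choice([-1, 1])
        X.append(s * s)
        A.append(n)            # predictable drift part
        M.append(s * s - n)    # martingale part
    return X, M, A

X, M, A = decompose(50)
# The decomposition holds pathwise, and A has the required properties.
assert all(x == m + a for x, m, a in zip(X, M, A))
assert A[0] == 0 and all(A[i] <= A[i + 1] for i in range(len(A) - 1))
```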
Optional Stopping Theorem
If a sequence S(0), S(1), S(2), … is a bounded martingale and T is a stopping time, then the expected value of S(T) is S(0).
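The theorem can be checked exactly for a small case (our own example): a symmetric random walk stopped when it first hits ±3, with the stopping time capped at 10 steps. Enumerating all 2¹⁰ equally likely paths gives the exact expectation of S(T), which matches S(0) = 0.

```python
from itertools import product

# Exact optional-stopping check: symmetric +-1 walk, stopped at the first
# hit of the barrier +-3, or at step 10 at the latest (bounded stopping time).
def stopped_value(flips, barrier=3):
    """Value S(T) of the walk along one path of coin flips."""
    s = 0
    for f in flips:
        s += f
        if abs(s) == barrier:
            break
    return s

# Sum S(T) over all 2^10 equally likely paths; dividing by 2^10 would give
# E[S(T)], so a total of 0 means E[S(T)] = 0 = S(0).
total = sum(stopped_value(flips) for flips in product([-1, 1], repeat=10))
assert total == 0
```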
One of the most prominent applications of martingales is in mathematical finance, in the theory of arbitrage and the risk-neutral pricing of financial assets.
Since one can always invest in riskless interest-bearing bonds, it is clear that one has to discount
prices by a numéraire, e.g. to divide asset prices by the price of the riskless bond.
A martingale measure is a probability measure Q such that the discounted asset prices form a
martingale.
In a real market, no player (with their own beliefs, expressed by the player’s own probability
measure P, sometimes called the real-world probability measure) expects the asset prices to be
martingales since no one gains by betting on a martingale.
On the other hand, in many situations there is an equivalent (with respect to P) martingale
measure Q such that the discounted asset prices become martingales under the conditional
expectations induced by Q. By the fundamental theorem of asset pricing the existence of a
martingale measure guarantees that there is no arbitrage.
Moreover, the market is complete (i.e. each derivative can be ‘hedged’ or ‘replicated’ by a
trading strategy) if the martingale measure is unique. In a discrete-time setting the existence of
an equivalent martingale measure is even equivalent to an arbitrage-free complete market.
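A one-period binomial market is the simplest setting in which to see an equivalent martingale measure explicitly. The sketch below uses our own example parameters: with up factor u, down factor d and riskless growth R per period, the measure Q given by q = (R − d) / (u − d) makes the discounted stock price a martingale.

```python
# One-period binomial market (illustrative parameters, our own example):
# stock moves from S0 to S0*u or S0*d; the riskless bond grows by R.
S0, u, d, R = 100.0, 1.2, 0.9, 1.05

# Risk-neutral probability of the up move.
q = (R - d) / (u - d)

# Q is a genuine probability measure equivalent to any P with 0 < p < 1.
assert 0.0 < q < 1.0

# Martingale property: the discounted Q-expectation of the next price
# equals today's price.
discounted_expectation = (q * S0 * u + (1 - q) * S0 * d) / R
assert abs(discounted_expectation - S0) < 1e-9
```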
In the classical Black–Scholes model where the asset prices are modelled by a geometric
Brownian motion, it is always possible to find a martingale measure Q with the help of
Girsanov’s theorem. In order to calculate the fair price of a derivative (e.g. a share option) one
performs the following steps:
1. Determine the pay-off of the derivative.
2. Construct an equivalent martingale measure Q.
3. Compute the discounted price process of the derivative as the expectation (under Q) of its pay-off.
Since everything is, by construction, a martingale, no arbitrage is possible.
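The three steps can be sketched in discrete time with a binomial (Cox–Ross–Rubinstein) approximation to the Black–Scholes model; the parameters below are our own example. With many periods, the binomial price converges to the Black–Scholes value (about 10.45 for these inputs).

```python
from math import comb, exp, sqrt

def binomial_call(S0, K, r, sigma, T, n):
    """Risk-neutral price of a European call in an n-step CRR binomial tree."""
    dt = T / n
    u = exp(sigma * sqrt(dt))      # up factor per step
    d = 1 / u                      # down factor per step
    R = exp(r * dt)                # riskless growth per step
    q = (R - d) / (u - d)          # step 2: equivalent martingale measure Q
    # Step 1: pay-off of the call after k up moves and n - k down moves.
    payoff = lambda k: max(S0 * u**k * d**(n - k) - K, 0.0)
    # Step 3: discounted Q-expectation of the pay-off.
    expectation = sum(comb(n, k) * q**k * (1 - q)**(n - k) * payoff(k)
                      for k in range(n + 1))
    return expectation / R**n

price = binomial_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
assert abs(price - 10.45) < 0.05
```

With K = 0 the call's pay-off is the stock itself, and the pricing formula returns S0 exactly, which is the martingale property of the discounted stock price under Q.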