
Submission Number: 1

Group Number: 5C

Group Members:

Name             Location (Country)   E-Mail Address               Non-Contributing Member (X)
Ravi Shankar     Mumbai (India)       Ravishnkr.sinha@gmail.com
Gaurav Roongta   Kolkata (India)      roongtagaurav@gmail.com
Hitesh Sachani   Nagpur (India)       hiteshsachanI@gmail.com
MScFE 620 Discrete Time Stochastic Processes

Group # _5-C_

Submission # 1
I. Kolmogorov and his axioms of probability

In 1933, Kolmogorov put forward his three axioms of probability, which laid the foundation of the modern mathematical theory of probability. The axioms made us realize that the general theory of probability can be described technically on the basis of concepts such as measures, measurable spaces, measurable sets and measurable functions. Let us have a look at what the axioms are:

(1) Axiom 1: Let (Ω, F, P) be a probability space, where Ω represents the set of all possible
outcomes of a random experiment, i.e. the sample space, F represents a σ-algebra of
subsets of Ω, and P represents a measure on the measurable space (Ω, F) such that P(Ω) = 1.
Kolmogorov called the sample space "the space of elementary events". The elements of F
correspond to events related to the experiment, and for any event A ∈ F, P(A) represents the
probability of the event A; it is a number in the interval [0, 1]. Mathematically, it can be written as:

P(A) ∈ R, P(A) ≥ 0 ∀ A ∈ F.

(2) Axiom 2: Let X: Ω → R represent a random variable, so that {ω ∈ Ω : X(ω) < a} represents an
event in the σ-algebra F for every a ∈ R, and let PX represent "the law of X", given by:

PX(B) = P(X⁻¹(B))

where B represents a Borel subset of the real line. The random variable thus induces a probability
measure on the Borel σ-algebra of the real line. The probability of occurrence of at least one event
from the sample space Ω is equal to 1, i.e. P(Ω) = 1. This is also called the "unit measure" assumption.

(3) Axiom 3: The third axiom states that for mutually exclusive (disjoint) events a and b with
a, b ∈ F,

P(a ∪ b) = P(a) + P(b).

This can be extended to any countable collection of pairwise disjoint events a1, a2, . . . ∈ F:

P(a1 ∪ a2 ∪ a3 ∪ · · · ∪ an) = P(a1) + P(a2) + · · · + P(an)

These three axioms lead to various theorems which helped formulate the mathematical aspects of
the theory of probability. Some of the theorems that were proven using these axioms are listed below:

(1) The probability of the empty set is 0: P(∅) = 0.



(2) The complement rule: P(A) = 1 − P(Aᶜ), where Aᶜ denotes the complement of A.

(3) The rule of monotonicity: If A⊆B, then P(A)≤P(B).

(4) The strong law of large numbers, which states that as the number of observations becomes
very large, the sample mean converges to the population mean.

(5) These axioms helped Kolmogorov prove the 0-1 law, which states that any event A in
an asymptotic (tail) σ-algebra has probability either 0 or 1. For a sequence of random
variables, the asymptotic σ-algebra is defined as the intersection of the σ-algebras
generated by the tails of the sequence; denoting by Fn the σ-algebra generated by
Xn, Xn+1, . . . for n ≥ 1, mathematically:

F∞ = F1 ∩ F2 ∩ F3 ∩ · · · ∩ Fn ∩ · · ·
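The axioms and the derived rules above can be checked mechanically on a small finite probability space. A minimal sketch in Python (the fair-die example and the function names are our own illustration, not part of the text):

```python
from fractions import Fraction

# Finite probability space for a fair six-sided die: Omega = {1, ..., 6},
# F = all subsets of Omega, P = uniform measure with P(Omega) = 1.
omega = set(range(1, 7))
mass = {w: Fraction(1, 6) for w in omega}

def prob(event):
    """P(A) = sum of the point masses of the outcomes in A."""
    return sum(mass[w] for w in event)

A = {1, 2}        # "roll at most 2"
B = {1, 2, 3}     # "roll at most 3"; note A is a subset of B

assert prob(omega) == 1                       # Axiom 2: unit measure
assert prob(set()) == 0                       # theorem (1): P(empty set) = 0
assert prob(omega - A) == 1 - prob(A)         # theorem (2): complement rule
assert prob(A) <= prob(B)                     # theorem (3): monotonicity
assert prob(A | {5}) == prob(A) + prob({5})   # Axiom 3: finite additivity
```

Using exact `Fraction` arithmetic avoids floating-point noise, so each rule holds as an exact identity rather than an approximation.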

The work of Kolmogorov acted as the foundation of the calculus of probability theory, on the
basis of which other theories such as that of Markov processes were developed.

II. Markov Chain Process

Definition: A Markov chain is a stochastic process describing a sequence of possible events in
which the probability of each event depends only on the state attained in the previous event. A
countable sequence in which the chain moves between states at discrete time steps is called a
discrete-time Markov chain (DTMC), named after the Russian mathematician Andrey Markov.

Equation: A stochastic process {Xn, n ≥ 0}, taking values in a set S, is
called a discrete-time Markov chain (DTMC) if it has the Markov property for all n ≥ 0 and all
states i0, . . ., in−1, i, j ∈ S:

P(Xn+1 = j | Xn = i, Xn−1 = in−1, . . ., X0 = i0) = P(Xn+1 = j | Xn = i)

As time progresses, the process loses memory of the past. If

P(Xn+1 = j | Xn = i)

is independent of n, then X is said to be a time-homogeneous Markov chain, which can be
represented by a transition graph.

For time n, Xn is the present, X0, X1, · · ·, Xn−1 are the past, and Xn+1 = j is
the future. The Markov property states that, given the present, the future
and the past are independent of each other. The values taken by the process are also called the
states of the process; thus the set S is also called the state space.
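The Markov property translates directly into simulation: the next state is sampled using only the current state's row of the transition matrix. A minimal Python sketch (the two-state chain and its probabilities are our own illustrative choices):

```python
import numpy as np

# One-step transition matrix on the state space S = {0, 1};
# row i holds P(X_{n+1} = j | X_n = i).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def simulate(P, x0, n_steps, rng):
    """Sample a path X_0, ..., X_n; each step looks only at the current state."""
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))  # Markov property
    return path

rng = np.random.default_rng(0)
path = simulate(P, x0=0, n_steps=10, rng=rng)
print(path)  # a path of 11 states, each in {0, 1}
```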


The matrix describing a Markov chain is called the transition matrix. It is the most important tool
for analysing Markov chains. The transition matrix is given the symbol P.

In the transition matrix P, the rows represent Now, or From (Xt), and the columns represent Next,
or To (Xt+1). The entry (i, j) is the conditional probability that Next = j given that Now = i, the
probability of going from state i to state j. To summarize:

pij = P(Xt+1 = j | Xt = i)

Properties of the transition matrix:

1. The transition matrix P should list all possible states in the state space S.

2. P is a square (N × N) matrix, as Xt+1 and Xt take values in the same state space S (of size
N).

3. The rows of P always sum to 1.

4. The columns of P need not sum to 1.


Example: In this example, we will model the transitions of a stock market. There are three possible
states for stock movements: Bull, Bear and Stagnant. The figure below shows the state diagram of
the probabilities of transition between the different states:

The lines in the state diagram show the probabilities of transitioning between the three states. For
example, there is a 0.15 probability of transitioning from a Bear market to a Bull market, and a
0.8 probability of a Bear market remaining a Bear market. The state transition
probabilities can be captured in a 3 × 3 matrix called the transition matrix, as shown below:

           Bull    Bear    Stagnant
Bull     [ 0.9     0.075   0.025 ]
Bear     [ 0.15    0.8     0.05  ]
Stagnant [ 0.25    0.25    0.5   ]

We can extend this 3 × 3 matrix to show that the t-step transition probabilities are given by the
matrix power Pᵗ for any t:

Pᵗ(i, j) = P(Xt = j | X0 = i)
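This can be checked numerically. A short Python sketch using the matrix above (with the Bear row's last entry taken as 0.05 so that the row sums to 1):

```python
import numpy as np

# Transition matrix, state order: Bull, Bear, Stagnant.
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

assert np.allclose(P.sum(axis=1), 1.0)   # each row of P sums to 1

# The t-step transition matrix is the t-th matrix power:
# (P^t)[i, j] = P(X_t = j | X_0 = i).
P5 = np.linalg.matrix_power(P, 5)
print(P5.round(3))                       # P^5 is again a stochastic matrix
```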


Application: Markov chains find application in areas such as physics, chemistry, biology,
medicine, music, game theory and sports. Markov chains are used to model many real-world
processes such as car cruise-control systems, the queueing of passengers at airports, the queueing
of calls at customer-care centres, and currency exchange rates.

III. J. L. Doob and Development of Martingales

Definition of a Martingale, Early History of Dr. J.L. Doob in Martingale theory

A martingale is mathematically described as the outcome of a fair game: the expected net
gain or loss from further play, independent of the history, is 0.

Martingales allowed one to study, for the first time, the behavior of sums and sequences of
random variables which are not independent. Martingale theory is one of the cornerstones of
modern mathematical probability theory with wide-ranging applications in stochastic analysis
and mathematical finance.

In order to appreciate Doob’s early achievements, one must first have some understanding of the
state of probability at the beginning of the 1930s. It was not clear at the time whether probability
was part of mathematics or part of physical science. Although many important results had been
established, there was no “theory” of probability. Then in 1933, Kolmogorov proposed the
axiomatic system for probability based on measure theory. Today this is almost universally
accepted as the appropriate framework for mathematical probability.

Doob's work was in probability and measure theory; in particular, he studied the relations
between probability and potential theory. Doob made major contributions to topics such as
separability, stochastic processes, martingales, optimal stopping, and classical potential
theory and its probabilistic counterpart.

Doob's Theorems
The pioneering work of Doob can be summarized by these three well-known results –

Doob Decomposition
Every submartingale S of class D has a unique Doob–Meyer decomposition S = M + A, where M
is a martingale and A is a predictable drift process starting at 0.
In the discrete-time case this reads as follows:
let (Xn)n∈N be a submartingale with respect to a filtration (Fn)n∈N.
Then there exists a unique decomposition

Xn = Mn + An

where M is a martingale (with respect to the same filtration) and A is a predictable,
integrable, non-decreasing process with A0 = 0.
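As a numerical illustration (a standard example of ours, not from Doob's text): for the simple symmetric random walk Wn, the process Sn = Wn² is a submartingale, and its Doob decomposition has predictable part An = n and martingale part Mn = Wn² − n. A short simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 100_000, 20
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
W = np.cumsum(steps, axis=1)            # W_n for n = 1, ..., n_steps

S = W**2                                # submartingale: E[S_{n+1} | F_n] = S_n + 1
A = np.arange(1, n_steps + 1)           # predictable, non-decreasing, A_0 = 0
M = S - A                               # martingale part M_n = W_n^2 - n

# E[S_n] grows like n, while E[M_n] stays at M_0 = 0.
print(S[:, -1].mean(), M[:, -1].mean())
```

Averaging over many paths, the sample mean of S at n = 20 sits near 20, while the sample mean of M stays near 0, as the decomposition predicts.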


Doob's Optional Stopping Theorem –

If the sequence S(0), S(1), S(2), . . . is a bounded martingale, and T is a stopping time, then the
expected value of S(T) is S(0).
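A minimal simulation sketch of the theorem (the walk and the barrier are our own illustrative choices): a simple symmetric random walk stopped at the first hitting time of ±5 is a bounded martingale, so the expected stopped value should equal S(0) = 0.

```python
import numpy as np

rng = np.random.default_rng(2)

def stopped_value(rng, barrier=5):
    """Run a symmetric random walk until |S| first hits the barrier; return S(T)."""
    s = 0
    while abs(s) < barrier:                   # T = first hitting time of +/-barrier
        s += int(rng.integers(0, 2)) * 2 - 1  # a fair +/-1 step
    return s

samples = [stopped_value(rng) for _ in range(10_000)]
print(np.mean(samples))  # close to S(0) = 0
```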

Doob's Convergence Theorem –

Let (Fn)n≥0 be a filtration defined on a probability space and let (Xn)n≥0 be a martingale
with respect to this filtration whose paths are left-limited and right-continuous. The
following properties are equivalent:

1. As n → ∞, Xn converges in L¹;
2. As n → ∞, Xn converges almost surely to an integrable and F∞-measurable
random variable X∞ that satisfies Xn = E[X∞ | Fn] for all n;
3. The family (Xn)n≥0 is uniformly integrable.

Applications of Martingales in Finance – Theory of Asset Pricing, Fair Game and No
Arbitrage Rule:

Martingales have found many important applications also outside of probability theory.

One of the most prominent applications is in mathematical finance, in the theory of arbitrage and
risk-neutral pricing of financial assets. In a market, an arbitrage opportunity is a trading strategy
that requires no initial capital, has a positive probability of making money, and zero probability
of losing money.

It is a money-making machine, the ultimate unfair game. On the other hand, a martingale
represents a fair game and will not allow for arbitrage.

Since one can always invest in riskless interest-bearing bonds, it is clear that one has to discount
prices by a numéraire, e.g. to divide asset prices by the price of the riskless bond.

A martingale measure is a probability measure Q such that the discounted asset prices form a
martingale.

In a real market, no player (with their own beliefs, expressed by the player’s own probability
measure P, sometimes called the real-world probability measure) expects the asset prices to be
martingales since no one gains by betting on a martingale.

On the other hand, in many situations there is an equivalent (with respect to P) martingale
measure Q such that the discounted asset prices become martingales under the conditional
expectations induced by Q. By the fundamental theorem of asset pricing, the existence of a
martingale measure guarantees that there is no arbitrage.

Moreover, the market is complete (i.e. each derivative can be ‘hedged’ or ‘replicated’ by a
trading strategy) if the martingale measure is unique. In a discrete-time setting the existence of
an equivalent martingale measure is even equivalent to an arbitrage-free complete market.

In the classical Black–Scholes model where the asset prices are modelled by a geometric
Brownian motion, it is always possible to find a martingale measure Q with the help of
Girsanov’s theorem. In order to calculate the fair price of a derivative (e.g. a share option) one
performs the following steps:

First one determines the pay-off of the derivative; then one constructs an equivalent martingale
measure Q; finally, the fair price of the derivative is the expectation (under Q) of its discounted
pay-off. Since everything is, by construction, a martingale, no arbitrage is possible.
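The three steps above can be sketched in the simplest discrete-time setting, the one-period binomial model (the numerical parameters u, d, r, S0 and K below are our own illustrative choices, not from the text):

```python
import math

S0, u, d = 100.0, 1.2, 0.8    # stock price today; up and down factors
r = 0.05                      # one-period riskless rate (continuous compounding)
K = 100.0                     # strike of a European call, pay-off max(S_1 - K, 0)

disc = math.exp(-r)           # discount by the riskless bond (the numeraire)
# Step 2: the equivalent martingale measure Q is fixed by requiring that the
# discounted stock price be a Q-martingale: q*u + (1-q)*d = e^r.
q = (math.exp(r) - d) / (u - d)
assert 0 < q < 1              # no-arbitrage condition: d < e^r < u

# Step 3: the fair price is the Q-expectation of the discounted pay-off.
payoff_up, payoff_down = max(S0 * u - K, 0.0), max(S0 * d - K, 0.0)
price = disc * (q * payoff_up + (1 - q) * payoff_down)

# Sanity check: the discounted stock price is indeed a Q-martingale.
assert abs(disc * (q * S0 * u + (1 - q) * S0 * d) - S0) < 1e-9
print(round(price, 4))
```

Note that the real-world probability of an up-move never enters the calculation; only the martingale measure Q matters, exactly as the text describes.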

