
Submission Number: 1

Group Number: 5C

Group Members:

Name             Location (Country)   E-Mail Address               Non-Contributing Member (X)
Ravi Shankar     Mumbai (India)       Ravishnkr.sinha@gmail.com
Gaurav Roongta   Kolkata (India)      roongtagaurav@gmail.com
Hitesh Sachani   Nagpur (India)       hiteshsachanI@gmail.com
MScFE 620 Discrete Time Stochastic Processes

Group # _5-C_

Submission # 1
I. Kolmogorov and his axioms of probability

In 1933, Kolmogorov put forward his axioms of probability, which laid the foundation of the
modern mathematical theory of probability. The axioms showed that the general theory of
probability can be described rigorously in terms of concepts from measure theory: measures,
measurable spaces, measurable sets and measurable functions. Let us have a look at what the
axioms are:

(1) Axiom 1: Let (Ω, F, P) be a probability space, where Ω represents the set of all possible
outcomes of a random experiment, i.e. the sample space; F represents a σ-algebra of subsets
of Ω; and P represents a measure on the measurable space (Ω, F) such that P(Ω) = 1.
Kolmogorov called the sample space “the space of elementary events”. Each element of F
corresponds to an event related to the experiment, and for any event A ∈ F, P(A) represents
the probability of the event A, a number in the interval [0, 1]. Mathematically:

P(A) ∈ R, P(A) ≥ 0 ∀ A ∈ F.

(2) Axiom 2: Let X: Ω → R represent a random variable, so that {ω ∈ Ω : X(ω) < a} is an
event in the σ-algebra F for every a ∈ R, and let Px represent “the law of X”, given by:

Px(B) = P(X⁻¹(B)),

where B represents a Borel subset of the real line. The random variable thus induces a
probability measure on the Borel σ-algebra of the real line. The probability that at least
one outcome from the sample space Ω occurs is equal to 1, i.e. P(Ω) = 1. This is also called
the “unit measure” assumption.
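The law of X can be illustrated concretely on a finite sample space. Below is a minimal
Python sketch; the die example, the parity random variable and the helper name law_of_X are
all our own hypothetical choices, not from the text:

```python
from fractions import Fraction

# Hypothetical finite probability space: a fair six-sided die.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}

# A random variable X: Omega -> R; here, the parity indicator.
def X(w):
    return w % 2

# The law of X: Px(B) = P(X^-1(B)) for a set B of real values.
def law_of_X(B):
    preimage = [w for w in omega if X(w) in B]
    return sum(P[w] for w in preimage)

print(law_of_X({1}))  # P(X = 1) = 1/2: the die shows an odd number
```

The pushforward construction is visible here: the probability of a Borel set B is read off
from the probability of its preimage in the sample space.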

(3) Axiom 3: The third axiom states that for mutually exclusive (disjoint) events a and b
with a, b ∈ F,

P(a ∪ b) = P(a) + P(b).

This can be extended to any finite collection of pairwise disjoint events a1, a2, …, an ∈ F:

P(a1 ∪ a2 ∪ a3 ∪ … ∪ an) = P(a1) + P(a2) + … + P(an).
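The three axioms can be checked mechanically on a finite probability space. A minimal
Python sketch, assuming a fair six-sided die (a hypothetical example, not from the text):

```python
from fractions import Fraction

# Hypothetical sample space: a fair six-sided die; events are subsets of omega.
omega = frozenset(range(1, 7))

def P(A):
    return Fraction(len(A), len(omega))

low, high = frozenset({1, 2}), frozenset({5, 6})  # mutually exclusive events
assert not (low & high)

# Axiom 3: additivity over disjoint events.
assert P(low | high) == P(low) + P(high)

# Non-negativity and unit measure (Axiom 1 and the unit-measure assumption).
assert P(low) >= 0 and P(omega) == 1
print(P(low | high))  # 2/3
```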

These three axioms lead to various theorems which helped formulate the mathematical aspects
of the theory of probability. Some of the theorems proven using these axioms are listed below:

(1) The probability of the empty set is 0.



(2) The complement rule: P(A) = 1 − P(Ac), where Ac denotes the complementary set of A.

(3) The rule of monotonicity: If A⊆B, then P(A)≤P(B).

(4) The strong law of large numbers, which states that as the number of observations becomes
very large, the sample mean converges (almost surely) to the population mean.

(5) These axioms helped Kolmogorov prove the 0-1 law, which states that any event A in
an asymptotic (tail) σ-algebra has probability either 0 or 1. Given a sequence of random
variables X1, X2, …, let Fn denote the σ-algebra generated by Xn, Xn+1, Xn+2, … for n ≥ 1.
The asymptotic σ-algebra is then the intersection of this decreasing sequence:

F∞ = F1 ∩ F2 ∩ F3 ∩ … ∩ Fn ∩ …

The work of Kolmogorov acted as the foundation of the calculus of probability, on which
other theories, such as that of Markov processes, were developed.
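The strong law of large numbers mentioned above can be illustrated numerically. A minimal
Monte Carlo sketch in Python; the die experiment and the sample sizes are hypothetical
choices of ours:

```python
import random

random.seed(0)

# Monte Carlo sketch of the strong law of large numbers: sample means
# of fair-die rolls approach the population mean 3.5 as n grows.
def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, sample_mean(n))
```

Running this shows the sample mean tightening around 3.5 as n increases.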

II. Markov Chain Process

Definition: A Markov chain is a stochastic model describing a sequence of possible events in
which the probability of each event depends only on the state attained in the previous event.
A countably infinite sequence in which the chain moves between states at discrete time steps
is called a discrete-time Markov chain (DTMC), named after the Russian mathematician
Andrey Markov.

Equation: A stochastic process {Xn, n ≥ 0}, taking values in a countable set S, is called a
discrete-time Markov chain (DTMC) if it has the Markov property, that is, if for all n ≥ 0
and all states i0, …, in−1, i, j ∈ S,

P(Xn+1 = j | Xn = i, Xn−1 = in−1, …, X0 = i0) = P(Xn+1 = j | Xn = i).

As time goes by, the process loses the memory of the past. If

P(Xn+1 = j | Xn = i)

is independent of n, then X is said to be a time-homogeneous Markov chain.

We can represent a time-homogeneous Markov chain by a transition graph.


At time n, Xn is the “present”, X0, X1, …, Xn−1 are the “past”, and Xn+1 is the “future”.
The Markov property states that, given the present, the future and the past are independent
of each other. The term chain comes from the fact that {Xn, n ≥ 0} takes values in a
denumerable set S. The values taken by the process are also called the states of the
process; thus S is also called the state space.


The matrix describing the Markov chain is called the transition matrix; it is the most
important tool for analysing Markov chains. The transition matrix is usually given the
symbol P.

In the transition matrix P, the rows represent Now, or From (Xt), and the columns represent
Next, or To (Xt+1). Entry (i, j) is the conditional probability that Next = j given that
Now = i, i.e. the probability of going From state i To state j. To summarize:

P(i, j) = P(Xt+1 = j | Xt = i).

Properties of the transition matrix:

1. The transition matrix P must list all possible states in the state space S.

2. P is a square matrix (N × N), because Xt+1 and Xt both take values in the same state
space S (of size N).

3. The rows of P each sum to 1, since Xt+1 must take one of the listed values.

4. The columns of P do not, in general, sum to 1.
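These properties are easy to verify mechanically. A short Python sketch, using a
hypothetical 3-state matrix in the spirit of the stock-market example that follows (rows as
From, columns as To):

```python
# Hypothetical 3-state transition matrix (rows: From, columns: To).
P = [
    [0.9,  0.075, 0.025],  # Bull
    [0.15, 0.8,   0.05],   # Bear
    [0.25, 0.25,  0.5],    # Stagnant
]

# Property 2: P is square.
assert all(len(row) == len(P) for row in P)

# Property 3: every row sums to 1.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)

# Property 4: the columns need not sum to 1.
col_sums = [sum(row[j] for row in P) for j in range(len(P))]
print(col_sums)  # approximately [1.3, 1.125, 0.575]
```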


Example: In our example, we will model transitions of the stock market. There are three
possible states for stock movements: Bear, Bull and Stagnant. The figure below shows the
state diagram of the probabilities of transition between the different states.

The lines in the state diagram show the probabilities of transitioning between the different
states. For example, there is a 0.15 probability of transitioning from a Bear market to a
Bull market, and a 0.8 probability of a Bear market transitioning to another Bear market.
The state transition probabilities can be captured in a 3×3 matrix, called a transition
matrix, as shown below:

             Bull    Bear    Stagnant
Bull       [ 0.9     0.075   0.025 ]
Bear       [ 0.15    0.8     0.05  ]
Stagnant   [ 0.25    0.25    0.5   ]

We can extend this 3×3 example to show that the t-step transition probabilities are given
by the matrix power P^t for any t.
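The matrix power P^t can be computed directly. A small pure-Python sketch under the same
hypothetical stock-market matrix (the helper names matmul and matrix_power are ours):

```python
# t-step transition probabilities as the matrix power P^t,
# using the hypothetical stock-market matrix above.
P = [
    [0.9,  0.075, 0.025],
    [0.15, 0.8,   0.05],
    [0.25, 0.25,  0.5],
]

def matmul(A, B):
    # Plain matrix multiplication over nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matrix_power(M, t):
    out = M
    for _ in range(t - 1):
        out = matmul(out, M)
    return out

# P^2 gives the probability of going from state i to state j in 2 steps.
P2 = matrix_power(P, 2)
print(round(P2[0][0], 4))  # Bull -> Bull in two steps: 0.8275
```

Note that P^t is again a valid transition matrix: each of its rows sums to 1.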


Application: Research has reported the application and use of Markov chains in a wide range
of topics such as physics, chemistry, biology, medicine, music, game theory and sports.
Markov chains have applications as statistical models of real-world processes such as cruise
control systems in motor vehicles, queues of customers arriving at an airport, call-centre
queues, currency exchange rates and animal population dynamics.

III. J. L. Doob and Development of Martingales

Definition of a Martingale, and the Early History of J. L. Doob in Martingale Theory

A martingale is mathematically described as the outcome of a fair game: the expected gain
from the next play, given the entire history of the game, is zero. Equivalently, the
conditional expectation of the next value, given all past values, equals the present value.

Martingales allow a glimpse into the behavior of non-independent random variables as far as
their sequencing and distributions are concerned.

Due to the lack of a proper understanding of probability theory, many important developments
in stochastics were hindered for want of a proper framework. However, after Kolmogorov's
1933 publication of the axiomatic system of probability, the foundation was laid for Doob's
work on probability and measure theory. Doob made major contributions to separability,
stochastic processes, martingales, optimal stopping, and classical potential theory and its
probabilistic counterpart.

Doob's Theorems

The pioneering work of Doob can be summarized in these three well-known results –

Doob Decomposition
Every submartingale X has a unique Doob decomposition X = M + A, where M is a martingale
and A is a predictable drift process starting at 0.

More precisely, let (Xn), n ∈ N, be a submartingale with respect to a filtration (Fn),
n ∈ N. Then there exists a unique decomposition

Xn = Mn + An,

where M is a martingale (with respect to the same filtration) and A is a predictable,
integrable, non-decreasing process with A0 = 0.
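A concrete special case makes the decomposition tangible. For a simple symmetric random
walk Sn, the process Xn = Sn² is a submartingale, and its predictable part can be computed
step by step: E[Xn − Xn−1 | Fn−1] = 1, so An = n and Mn = Sn² − n is a martingale. A
hedged Monte Carlo sketch (the walk and the sample sizes are our own hypothetical example):

```python
import random

random.seed(1)

# Sketch: for a simple random walk Sn with +-1 steps, Xn = Sn^2 is a
# submartingale. Its Doob decomposition is Xn = Mn + An, where the
# predictable part is An = n (each step adds E[Xn - X(n-1) | F(n-1)] = 1)
# and the martingale part is Mn = Sn^2 - n.
def squared_walk(n):
    S = 0
    for _ in range(n):
        S += random.choice((-1, 1))
    return S * S

n, trials = 20, 50_000
# The martingale part has constant expectation: E[Mn] = M0 = 0.
mean_M = sum(squared_walk(n) - n for _ in range(trials)) / trials
print(mean_M)  # close to 0
```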


Doob's Optional Stopping Theorem

If the sequence S(0), S(1), S(2), … is a bounded martingale and T is a stopping time, then
the expected value of S(T) is S(0).
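This can be checked by simulation. A minimal Monte Carlo sketch, assuming a fair ±1 random
walk stopped when it first hits ±5, with a cap on the number of steps so that both the
stopped process and the stopping time stay bounded (all parameters are hypothetical):

```python
import random

random.seed(2)

# Monte Carlo sketch of the optional stopping theorem for a fair random
# walk S (a martingale): stop at the first time |S| = 5, with a cap of
# 200 steps so that both S and T remain bounded.
def stopped_value():
    S = 0
    for _ in range(200):
        S += random.choice((-1, 1))
        if abs(S) == 5:
            break
    return S

trials = 50_000
estimate = sum(stopped_value() for _ in range(trials)) / trials
print(estimate)  # close to E[S(T)] = S(0) = 0
```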

Doob's Convergence Theorem

Let (Fn), n ∈ N, be a filtration over the probability space, and let (Xn) be a martingale
with respect to this filtration. Then:
When sup_n E[|Xn|] < ∞, Xn converges almost surely toward an integrable, F∞-measurable
random variable X∞;
When, in addition, the family (Xn) is uniformly integrable, Xn also converges in L¹ and
satisfies Xn = E[X∞ | Fn].

Applications of Martingales in Finance – Theory of Asset Pricing, Fair Game and No
Arbitrage Rule:

One of the most prominent applications is in mathematical finance in the theory of arbitrage and
risk-neutral pricing of financial assets.

Since one can always invest in riskless interest-bearing bonds, it is clear that one has to discount
prices by a numéraire, e.g. to divide asset prices by the price of the riskless bond.

A martingale measure is a probability measure Q such that the discounted asset prices form a
martingale.

In a real market, no player (with their own beliefs, expressed by the player’s own probability
measure P, sometimes called the real-world probability measure) expects the asset prices to be
martingales since no one gains by betting on a martingale.

On the other hand, in many situations there is an equivalent (with respect to P) martingale
measure Q such that the discounted asset prices become martingales under the conditional
expectations induced by Q. By the fundamental theorem of asset pricing the existence of a
martingale measure guarantees that there is no arbitrage.

Moreover, the market is complete (i.e. each derivative can be ‘hedged’ or ‘replicated’ by a
trading strategy) if the martingale measure is unique. In a discrete-time setting the existence of
an equivalent martingale measure is even equivalent to an arbitrage-free complete market.


In the classical Black–Scholes model where the asset prices are modelled by a geometric
Brownian motion, it is always possible to find a martingale measure Q with the help of
Girsanov’s theorem. In order to calculate the fair price of a derivative (e.g. a share option) one
performs the following steps:

First, one determines the pay-off of the derivative; then one constructs an equivalent
martingale measure Q; finally, the discounted price of the derivative is the expectation
(under Q) of its discounted pay-off. Since everything is, by construction, a martingale,
no arbitrage is possible.
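The pricing recipe above can be made concrete in the simplest possible setting, a one-period
binomial model rather than the Black–Scholes model. All numbers below are hypothetical and
chosen only for illustration:

```python
# Hedged sketch: a one-period binomial model illustrating risk-neutral
# pricing. All numbers here are hypothetical, not from the text.
S0, u, d, R = 100.0, 1.2, 0.8, 1.05  # spot, up/down factors, gross riskless return
K = 100.0                            # strike of a European call

# The martingale measure Q is the unique q with S0 = (q*u*S0 + (1-q)*d*S0)/R,
# i.e. the discounted asset price is a Q-martingale.
q = (R - d) / (u - d)
assert 0 < q < 1  # equivalent to the no-arbitrage condition d < R < u

# Fair price = discounted expectation of the pay-off under Q.
payoff_up = max(u * S0 - K, 0.0)
payoff_down = max(d * S0 - K, 0.0)
price = (q * payoff_up + (1 - q) * payoff_down) / R
print(round(q, 4), round(price, 4))  # 0.625 11.9048
```

Because the model has one risky asset and two states, the martingale measure is unique and
the market is complete: the call can be replicated exactly by a position in the stock and
the riskless bond.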

