
June 16, 2010

ALGORITHMIC TRADING WITH MARKOV CHAINS

HENRIK HULT AND JONAS KIESSLING

Abstract. An order book consists of a list of all buy and sell offers, repre-
sented by price and quantity, available to a market agent. The order book
changes rapidly, within fractions of a second, due to new orders being entered
into the book. The volume at a certain price level may increase due to limit
orders, i.e. orders to buy or sell placed at the end of the queue, or decrease
because of market orders or cancellations.
In this paper a high-dimensional Markov chain is used to represent the state
and evolution of the entire order book. The design and evaluation of optimal
algorithmic strategies for buying and selling is studied within the theory of
Markov decision processes. General conditions are provided that guarantee
the existence of optimal strategies. Moreover, a value-iteration algorithm is
presented that enables finding optimal strategies numerically.
As an illustration a simple version of the Markov chain model is calibrated
to high-frequency observations of the order book in a foreign exchange mar-
ket. In this model, using an optimally designed strategy for buying one unit
provides a significant improvement, in terms of the expected buy price, over a
naive buy-one-unit strategy.

1. Introduction
The appearance of high frequency observations of the limit order book in or-
der driven markets has radically changed the way traders interact with financial
markets. With trading opportunities existing only for fractions of a second it has
become essential to develop effective and robust algorithms that allow for instan-
taneous trading decisions.
In order driven markets there are no centralized market makers, rather all par-
ticipants have the option to provide liquidity through limit orders. An agent who
wants to buy a security is therefore faced with an array of options. One option is
to submit a market order, obtaining the security at the best available ask price.
Another alternative is to submit a limit order at a price lower than the ask price,
hoping that this order will eventually be matched against a market sell order. What
is the best alternative? The answer will typically depend both on the agent’s view
of current market conditions as well as on the current state of the order book. With
new orders being submitted at a very high frequency the optimal choice can change
in a matter of seconds or even fractions of a second.
In this paper the limit order book is modelled as a high-dimensional Markov chain
where each coordinate corresponds to a price level and the state of the Markov chain
represents the volume of limit orders at every price level. For this model many tools
from applied probability are available to design and evaluate the performance of
different trading strategies. Throughout the paper the emphasis will be on what we
call buy-one-unit strategies and making-the-spread strategies. In the first case an
agent wants to buy one unit of the underlying asset. Here one unit can be thought
of as an order of one million EUR on the EUR/USD exchange. In the second case
an agent is looking to earn the difference between the buy and sell price, the spread,
by submitting a limit buy order and a limit sell order simultaneously, hoping that
both orders will be matched against future market orders.
Consider strategies for buying one unit. A naive buy-one-unit strategy is exe-
cuted as follows. The agent submits a limit buy order and waits until either the
order is matched against a market sell order or the best ask level reaches a prede-
fined stop-loss level. The probability that the agent's limit order is executed, as
well as the expected buy price, can be computed using standard potential theory
for Markov chains. It is not optimal to follow the naive buy-one-unit strategy.
For instance, if the order book moves to a state where the limit order has a small
probability of being executed, the agent would typically like to cancel and replace
the limit order either by a market order or a new limit order at a higher level.
Similarly, if the market is moving in favor of the agent, it might be profitable to
cancel the limit order and submit a new limit order at a lower price level. Such
more elaborate strategies are naturally treated within the framework of Markov
decision processes. We show that, under certain mild conditions, optimal strate-
gies always exist and that the optimal expected buy price is unique. In addition a
value-iteration algorithm is provided that is well suited to find and evaluate optimal
strategies numerically. Sell-one-unit strategies can of course be treated in precisely
the same way as buy-one-unit strategies, so only the latter will be treated in this
paper.
In the final part of the paper we apply the value-iteration algorithm to find close
to optimal buy strategies in a foreign exchange market. This provides an example
of the proposed methodology, which consists of the following steps:
(1) parametrize the generator matrix of the Markov chain representing the
order book,
(2) calibrate the model to historical data,
(3) compute optimal strategies for each state of the order book,
(4) apply the model to make trading decisions.
The practical applicability of the method depends on the ability to make sufficiently
fast trading decisions. As the market conditions vary there is a need to
recalibrate the model regularly. For this reason it is necessary to have fast calibra-
tion and computational algorithms. In the simple model presented in Sections 5
and 6 the calibration (step 2) is fast and the speed is largely determined by how
fast the optimal strategy is computed. In this example the buy-one-unit strategy
is studied and the computation of the optimal strategy (step 3) took roughly ten
seconds on an ordinary notebook, using Matlab. Individual trading decisions can
then be made in a few milliseconds (step 4).
Today there is an extensive literature on order book dynamics. In this paper the
primary interest is in short-term predictions based on the current state and recent
history of the order book. The content of this paper is therefore quite similar in
spirit to [4] and [2]. This is somewhat different from studies of market impact
and its relation to the construction of optimal execution strategies of large market
orders through a series of smaller trades. See for instance [13] and [1].
Statistical properties of the order book are a popular topic in the econophysics
literature. Several interesting studies have been written over the years, two of
which we mention here. In the enticingly titled paper ’What really causes large
price changes?’, [7], the authors claim that large changes in share prices are not
due to large market orders. They find that statistical properties of prices depend
more on fluctuations in revealed supply and demand than on their mean behavior,
highlighting the importance of models taking the whole order book into account.
In [3], the authors study certain static properties of the order book. They find
that limit order prices follow a power–law around the current price, suggesting that
market participants believe in the possibility of very large price variations within
a rather short time horizon. It should be pointed out that the mentioned papers
study limit order books for stock markets.
Although the theory presented in this paper is quite general the applications
provided here concern a particular foreign exchange market. There are many simi-
larities between order books for stocks and exchange rates but there are also some
important differences. For instance, orders of unit size (e.g. one million EUR) keep
the volume at rather small levels in absolute terms compared to stock markets.
In stock market applications of the techniques provided here one would have to
bundle shares by selecting an appropriate unit size of orders. We are not aware
of empirical studies, similar to those mentioned above, of order books in foreign
exchange markets.
Another approach to study the dynamical aspects of limit order books is by
means of game theory. Each agent is thought to take into account the effect of the
order placement strategy of other agents when deciding between limit or market
orders. Some of the systematic properties of the order book may then be explained
as properties of the resulting equilibrium, see e.g. [10] and [14] and the references
therein. In contrast, our approach assumes that the transitions of the order book
are given exogenously as transitions of a Markov chain.
The rest of this paper is organized as follows. Section 2 contains a detailed
description of a general Markov chain representation of the limit order book. In
Section 3 some discrete potential theory for Markov chains is reviewed and applied
to evaluate a naive buy strategy and an elementary strategy for making the spread.
The core of the paper is Section 4, where Markov decision theory is employed
to study optimal trading strategies. A proof of existence of optimal strategies is
presented together with an iteration scheme to find them. In Section 5, a simple
parameterization of the Markov chain is presented together with a calibration tech-
nique. For this particular choice of model, limit order arrival rates depend only
on the distance from the opposite best quote, and market order intensities are as-
sumed independent of outstanding limit orders. The concluding Section 6 contains
some numerical experiments on data from a foreign exchange (EUR/USD) market.
The simple model from Section 5 is calibrated on high-frequency data and three
different buy-one-unit strategies are compared. It turns out that there is a sub-
stantial amount to be gained from using more elaborate strategies than the naive
buy strategy.

2. Markov chain representation of a limit order book


We begin with a brief description of order driven markets. An order driven
market is a continuous double auction where agents can submit limit orders. A limit
order, or quote, is a request to buy or sell a certain quantity together with a
worst allowable price, the limit. A limit order is executed immediately if there are
outstanding quotes of opposite sign with the same (or better) limit. Limit orders
that are not immediately executed are entered into the limit order book. An agent
having an outstanding limit order in the order book can at any time cancel this
order. Limit orders are executed using time priority at a given price and price
priority across prices.
Following [7], orders are decomposed into two types: an order resulting in an
immediate transaction is an effective market order and an order that is not exe-
cuted, but stored in the limit order book, an effective limit order. For the rest
of this paper effective market orders and effective limit orders will be referred to
simply as market orders and limit orders, respectively. As a consequence, the limit
of a limit buy (sell) order is always lower (higher) than the best available sell (buy)
quote. For simplicity it is assumed that the limit of a market buy (sell) order is
precisely equal to the best available sell (buy) quote. Note that it is not assumed
that the entire market order will be executed immediately. If there are fewer quotes
of opposite sign at the level where the market order is entered than the size of the
order, then the remaining part of the order is stored in the limit order book.

2.1. Markov chain representation. A continuous time Markov chain X = (X_t)
is used to model the limit order book. It is assumed that there are d ∈ N possible
price levels in the order book, denoted π^1 < · · · < π^d. The Markov chain X_t =
(X_t^1, . . . , X_t^d) represents the volume at time t ≥ 0 of buy orders (negative values)
and sell orders (positive values) at each price level. It is assumed that X_t^j ∈ Z
for each j = 1, . . . , d; that is, all volumes are integer valued. The state space
of the Markov chain is denoted S ⊂ Z^d. The generator matrix of X is denoted
Q = (Q_{xy}), where Q_{xy} is the transition intensity from state x = (x^1, . . . , x^d) to
state y = (y^1, . . . , y^d). The matrix P = (P_{xy}) is the transition matrix of the jump
chain of X. Let us already point out that for most of our results only the
jump chain will be needed; it will also be denoted X = (X_n)_{n=0}^∞, where n is the
number of transitions from time 0.
For each state x ∈ S let
jB = jB (x) = max{j : xj < 0},
jA = jA (x) = min{j : xj > 0},
be the highest bid level and the lowest ask level, respectively. For convenience it
will be assumed that xd > 0 for all x ∈ S; i.e. there is always someone willing to
sell at the highest possible price. Similarly x1 < 0 for all x ∈ S; someone is always
willing to buy at the lowest possible price. It is further assumed that the highest
bid level is always below the lowest ask level, jB < jA . This will be implicit in the
construction of the generator matrix Q and transition matrix P . The bid price is
defined to be πB = π jB and the ask price is πA = π jA . Since there are no limit
orders at levels between jB and jA , it follows that xj = 0 for jB < j < jA . The
distance jA − jB between the best ask level and the best bid level is called the
spread. See Figure 1 for an illustration of the state of the order book.
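To make this bookkeeping concrete, the following small Python sketch (our own illustration; the helper name best_levels is not from the paper) reads off jB, jA and the spread from a volume vector such as the one in Figure 1.

import numpy as np

def best_levels(x):
    # x: volume vector, negative entries are buy volume, positive entries sell volume.
    # Returns 1-based indices (j_B, j_A) of the best bid and best ask levels.
    x = np.asarray(x)
    j_B = int(np.max(np.where(x < 0)[0])) + 1   # highest level with buy orders
    j_A = int(np.min(np.where(x > 0)[0])) + 1   # lowest level with sell orders
    return j_B, j_A

# hypothetical order book over d = 9 levels
x = [-5, -3, -2, -4, 0, 3, 2, 4, 1]
j_B, j_A = best_levels(x)
print(j_B, j_A, j_A - j_B)   # 4 6 2 (spread = 2)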
The possible transitions of the Markov chain X defining the order book are given
as follows. Throughout the paper ej = (0, . . . , 0, 1, 0, . . . , 0) denotes a vector in Zd
with 1 in the jth position.

Limit buy order. A limit buy order of size k at level j is an order to buy k units
at price π j . The order is placed last in the queue of orders at price π j . It may be
Figure 1. State of the order book. The negative volumes to the left indicate limit buy orders and the positive volumes indicate limit sell orders. In this state jA = 44, jB = 42, and the spread is equal to 2.

interpreted as k orders of unit size arriving instantaneously. Mathematically it is


a transition of the Markov chain from state x to x − kej where j < jA and k ≥ 1.
That is, a limit buy order can only be placed at a level lower than the best ask
level jA . See Figure 2.

Figure 2. Left: Limit buy order of size 1 arrives at level 42. Right: Limit sell order of size 2 arrives at level 45.

Limit sell order. A limit sell order of size k at level j is an order to sell k units
at price π j . The order is placed last in the queue of orders at price π j . It may be
interpreted as k orders of unit size arriving instantaneously. Mathematically it is
a transition of the Markov chain from state x to x + kej where j > jB and k ≥ 1.
That is, a limit sell order can only be placed at a level higher than the best bid
level jB . See Figure 2.

Market buy order. A market buy order of size k is an order to buy k units at the
best available price. It corresponds to a transition from state x to x − kejA . Note
that if k ≥ xjA the market order will knock out all the sell quotes at jA , resulting
in a new lowest ask level. See Figure 3.

Market sell order. A market sell order of size k is an order to sell k units at the
best available price. It corresponds to a transition from state x to x + kejB . Note
that if k ≥ |xjB | the market order will knock out all the buy quotes at jB , resulting
in a new highest bid level. See Figure 3.
Figure 3. Left: Market buy order of size 2 arrives and knocks out level 44. Right: Market sell order of size 2 arrives.

Cancellation of a buy order. A cancellation of a buy order of size k at level


j is an order to instantaneously withdraw k limit buy orders at level j from the
order book. It corresponds to a transition from x to x + kej where j ≤ jB and
1 ≤ k ≤ |xj |. See Figure 4.
Figure 4. Left: A cancellation of a buy order of size 1 arrives at level 40. Right: A cancellation of a sell order of size 2 arrives at level 47.

Cancellation of a sell order. A cancellation of a sell order of size k at level j is


an order to instantaneously withdraw k limit sell orders at level j from the order
book. It corresponds to a transition from x to x−kej where j ≥ jA and 1 ≤ k ≤ xj .
See Figure 4.
Summary. To summarize, the possible transitions are such that Qxy is non-zero
if and only if y is of the form
$$y = \begin{cases}
x + k e_j, & j \geq j_B(x),\ k \geq 1,\\
x - k e_j, & j \leq j_A(x),\ k \geq 1,\\
x - k e_j, & j \geq j_A(x),\ 1 \leq k \leq x^j,\\
x + k e_j, & j \leq j_B(x),\ 1 \leq k \leq |x^j|.
\end{cases} \tag{1}$$

To fully specify the model it remains to specify what the non-zero transition
intensities are. The computational complexity of the model does not depend heav-
ily on the specific choice of the non-zero transition intensities, but rather on the
dimensionality of the transition matrix. In Section 5 a simple model is presented
which is easy and fast to calibrate.

3. Potential theory for evaluation of simple strategies


Consider an agent who wants to buy one unit. There are two alternatives. The
agent can place a market buy order at the best ask level jA or place a limit buy
order at a level less than jA . In the second alternative the buy price is lower but
there is a risk that the order will not be executed, i.e. matched by a market sell
order, before the price starts to move up. Then the agent may be forced to buy at
a price higher than π^{j_A}. It is therefore of interest to compute the probability that
a limit buy order is executed before the price moves up as well as the expected
buy price resulting from a limit buy order. These problems are naturally addressed
within the framework of potential theory for Markov chains. First, a standard
result on potential theory for Markov chains will be presented. A straightforward
application of the result enables the computation of the expected price of a limit
buy order and the expected payoff of a simple strategy for making the spread.

3.1. Potential theory. Consider a discrete time Markov chain X = (Xn ) on a


countable state space S with transition matrix P . For a subset D ⊂ S the set
∂D = S \ D is called the boundary of D and is assumed to be non-empty. Let τ
be the first hitting time of ∂D, that is τ = inf{n : Xn ∈ ∂D}. Suppose a running
cost function vC = (vC (s))s∈D and a terminal cost function vT = (vT (s))s∈∂D are
given. The potential associated to vC and vT is defined by φ = (φ(s))s∈S where
$$\phi(s) = \mathbb{E}\Big[\sum_{n=0}^{\tau - 1} v_C(X_n) + v_T(X_\tau)\, I\{\tau < \infty\} \,\Big|\, X_0 = s\Big].$$

The potential φ is characterized as the solution to a linear system of equations.

Theorem 3.1 (e.g. [12], Theorem 4.2.3). Suppose vC and vT are non-negative.
Then φ satisfies
$$\begin{cases}\phi = P\phi + v_C & \text{in } D,\\ \phi = v_T & \text{in } \partial D.\end{cases} \tag{2}$$
Theorem 3.1 is all that is needed to compute the success probability of buy-
ing/selling a given order and the expected value of simple buy/sell strategies. De-
tails are given in the following section.
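For a finite state space, (2) reduces to a linear system: writing P_DD for the restriction of P to D and P_{D∂D} for the transitions from D into the boundary, the unknown values satisfy (I − P_DD) φ_D = v_C + P_{D∂D} v_T. The sketch below (our own generic implementation, not code from the paper) solves this system with NumPy.

import numpy as np

def potential(P, is_boundary, v_C, v_T):
    # P: (n, n) transition matrix; is_boundary: boolean mask of boundary states;
    # v_C: running cost (used on D); v_T: terminal cost (used on the boundary).
    D = ~is_boundary
    P_DD = P[np.ix_(D, D)]
    rhs = v_C[D] + P[np.ix_(D, is_boundary)] @ v_T[is_boundary]
    phi = v_T.astype(float)
    phi[D] = np.linalg.solve(np.eye(D.sum()) - P_DD, rhs)
    return phi

# toy example: symmetric random walk on {0,...,4} absorbed at the endpoints;
# with v_T = (1,0,0,0,0) the potential is the probability of hitting 0 before 4
P = np.zeros((5, 5))
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5
phi = potential(P, np.array([True, False, False, False, True]),
                np.zeros(5), np.array([1.0, 0, 0, 0, 0.0]))
print(phi)   # [1.0, 0.75, 0.5, 0.25, 0.0]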

3.2. Probability that a limit order is executed. In this section X = (Xn )


denotes the jump chain of the order book described in Section 2 with possible
transitions specified by (1). Suppose the initial state of the order book is X0 . The
agent places a limit buy order at level J0 . The order is placed instantaneously at
time 0 and after the order is placed the state of the order book is X_0' = X_0 − e_{J_0}.
Consider the probability that the order is executed before the best ask level is at
least J1 > jA (X0 ).
As the order book evolves it is necessary to keep track of the position of the
agent’s buy order. For this purpose, an additional variable Yn is introduced repre-
senting the number of limit orders at level J0 that are in front of the agent’s order,
including the agent’s order, after n transitions. Then, Y0 = X0J0 − 1 and Yn can
only move up towards 0 and does so whenever there is a market order at level J0
or an order in front of the agent’s order is cancelled.
The pair (X_n, Y_n) is also a Markov chain, with state space S ⊂ Z^d × {0, −1, −2, . . . }
and transition matrix again denoted P. The state space is partitioned into
two disjoint sets S = D ∪ ∂D, where

∂D = {(x, y) ∈ S : y = 0, or xj ≤ 0 for all J0 < j < J1 }.


Define the terminal cost function v_T : ∂D → R by
$$v_T(x, y) = \begin{cases}1 & \text{if } y = 0,\\ 0 & \text{otherwise,}\end{cases}$$
and let τ denote the first time (Xn , Yn ) hits ∂D. The potential φ = (φ(s))s∈S given
by
φ(s) = E[vT (Xτ , Yτ )I{τ < ∞} | (X0 , Y0 ) = s],
is precisely the probability that the agent's limit order is executed before the best
ask moves to or above J1, conditional on the initial state. To compute the desired
probability all that remains is to solve (2) with vC = 0.

3.3. Expected price for a naive buy-one-unit strategy. The probability that
a limit buy order is executed is all that is needed to compute the expected price of
a naive buy-one-unit strategy. The strategy is implemented as follows:
(1) Place a unit size limit buy order at level J0 .
(2) If best ask moves to level J1 , cancel the limit order and buy at level J1 .
This assumes that there will always be limit sell orders available at level J1 . If
p denotes the probability that the limit buy order is executed (from the previous
subsection) then the expected buy price becomes
E[ buy price ] = pπ J0 + (1 − p)π J1 .
Recall that, at the initial state, the agent may select to buy at the best ask price
π jA (X0 ) . This suggests that it is better to follow the naive buy-one-unit strategy
than to place a market buy order whenever E[ buy price ] < π jA (X0 ) . In Section 4
more elaborate buy strategies will be evaluated using the theory of Markov decision
processes.
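As a quick numerical illustration with made-up values (not data from the paper): if the limit order is placed at π^{J_0} = 1.2340, the stop-loss level has price π^{J_1} = 1.2344, and the execution probability is p = 0.75, then
$$\mathbb{E}[\text{buy price}] = 0.75 \cdot 1.2340 + 0.25 \cdot 1.2344 = 1.2341,$$
so the naive strategy is preferable to a market order whenever the current best ask exceeds 1.2341.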

3.4. Making the spread. We now proceed to calculate the expected payoff of
another simple trading strategy. The aim is to earn the difference between the bid
and the ask price, the spread. Suppose the order book starts in state X0 . Initially
an agent places a limit buy order at level j0 and a limit sell order at level j1 > j0 .
In case both are executed the profit is the price difference between the two orders.
The orders are placed instantaneously at n = 0 and after the orders are placed the
state of the order book is X0 − ej0 + ej1 . Let J0 and J1 be stop-loss levels such that
J0 < j0 < j1 < J1 . The simple making-the-spread strategy proceeds as follows.
(1) If the buy order is executed first and the best bid moves to J0 before the
sell order is executed, cancel the limit sell order and place a market sell
order at J0 .
(2) If the sell order is executed first and the best ask moves to J1 before the
buy order is executed, cancel the limit buy order and place a market buy
order at J1 .
This strategy assumes that there will always be limit buy orders available at J0
and limit sell orders at J1 .
It will be necessary to keep track of the positions of the agent’s limit orders. For
this purpose two additional variables Yn0 and Yn1 are introduced that represent the
number of limit orders at levels j0 and j1 that are in front of and including the
agent’s orders, respectively.
It follows that Y_0^0 = X_0^{j_0} − 1 and Y_0^1 = X_0^{j_1} + 1, that Y_n^0 is non-decreasing,
and that Y_n^1 is non-increasing. The agent's buy (sell) order has been executed when
Y_n^0 = 0 (Y_n^1 = 0).
The triplet (X_n, Y_n^0, Y_n^1) is also a Markov chain with state space S ⊂ Z^d ×
{0, −1, −2, . . .} × {0, 1, 2, . . .}. Let P denote its transition matrix.
The state space S is partitioned into two disjoint subsets S = D ∪ ∂D, where
$$\partial D = \{(x, y^0, y^1) \in S : y^0 = 0 \text{ or } y^1 = 0\}.$$
Let the function p_B(x, y^0) denote the probability that a limit buy order placed
at j_0 is executed before the best ask moves to J_1. This probability is computed in
Section 3.2. If the sell order is executed first, so y^1 = 0, then there will be a positive
income of π^{j_1}. The expected expense in state (x, y^0, y^1) for buying one unit is
p_B(x, y^0)π^{j_0} + (1 − p_B(x, y^0))π^{J_1}. Similarly, let the function p_A(x, y^1) denote the
probability that a limit sell order placed at j_1 is executed before the best bid moves
to J_0. This can be computed in a similar manner. If the buy order is executed
first, so y^0 = 0, then this will result in an expense of π^{j_0}. The expected income in
state (x, y^0, y^1) for selling one unit is p_A(x, y^1)π^{j_1} + (1 − p_A(x, y^1))π^{J_0}. The above
argument leads us to define the terminal cost function v_T : ∂D → R by
$$v_T(x, y^0, y^1) = \begin{cases}\pi^{j_1} - p_B(x, y^0)\,\pi^{j_0} - (1 - p_B(x, y^0))\,\pi^{J_1} & \text{for } y^1 = 0,\\ p_A(x, y^1)\,\pi^{j_1} + (1 - p_A(x, y^1))\,\pi^{J_0} - \pi^{j_0} & \text{for } y^0 = 0.\end{cases}$$
Let τ denote the first time (Xn , Yn0 , Yn1 ) hits ∂D. The potential φ = (φ(s))s∈S
defined by
φ(s) = E[vT (Xτ , Yτ0 , Yτ1 )I{τ < ∞} | (X0 , Y00 , Y01 ) = s],
is precisely the expected payoff of this strategy. It is a solution to (2) with vC = 0.

4. Optimal strategies and Markov decision processes


The framework laid out in Section 3 is too restrictive for many purposes, as it
does not allow the agent to change the initial position. In this section it will be
demonstrated how Markov decision theory can be used to design and analyze more
flexible trading strategies.
The general results on Markov decision processes are given in Section 4.1 and
the applications to buy-one-unit strategies and strategies for making the spread
are explained in the following sections. The general results that are of greatest
relevance to the applications are the last statement of Theorem 4.3 and Theorem
4.5, which lead to Algorithm 4.1.
4.1. Results for Markov decision processes. First the general setup will be
described. We refer to [12] for a brief introduction to Markov decision processes
and [6] or [9] for more details.
Let (X_n)_{n=0}^∞ be a Markov chain in discrete time on a countable state space S
with transition matrix P. Let A be a finite set of possible actions. Every action
can be classified as either a continuation action or a termination action. The set
of continuation actions is denoted C and the set of termination actions T. Then
A = C ∪ T, where C and T are disjoint. When a termination action is selected the
Markov chain is terminated.
Not every action is available in every state of the chain. Let A : S → 2^A be
a function associating a non-empty set of actions A(s) to each state s ∈ S. Here
2^A is the power set consisting of all subsets of A. The set of continuation actions
available in state s is denoted C(s) = A(s) ∩ C and the set of termination actions
T(s) = A(s) ∩ T. For each s ∈ S and a ∈ C(s) the transition probability from s to
s' when selecting action a is denoted P_{ss'}(a).
For every action there is an associated cost. The cost of continuation is denoted
v_C(s, a); it can be non-zero only when a ∈ C(s). The cost of termination is denoted
v_T(s, a); it can be non-zero only when a ∈ T(s). It is assumed that both v_C and
v_T are non-negative and bounded.
A policy α = (α_0, α_1, . . . ) is a sequence of functions α_n : S^{n+1} → A such that
α_n(s_0, . . . , s_n) ∈ A(s_n) for each n ≥ 0 and (s_0, . . . , s_n) ∈ S^{n+1}. If after n transitions
the Markov chain has visited (X_0, . . . , X_n), then α_n(X_0, . . . , X_n) is the action to
take when following policy α. In the sequel we often encounter policies where the
nth decision α_n is defined as a function S^{k+1} → A for some 0 ≤ k ≤ n. In that
case the corresponding function from S^{n+1} to A is understood as a function of the
last k + 1 coordinates: (s_0, . . . , s_n) ↦ α_n(s_{n−k}, . . . , s_n).
The expected total cost starting in X_0 = s and following a policy α until termination
is denoted by V(s, α). In the applications to come it can be interpreted as
the expected buy price. The purpose of Markov decision theory is to analyze optimal
policies and optimal (minimal) expected costs. A policy α_* is called optimal
if, for all states s ∈ S and policies α,
$$V(s, \alpha_*) \leq V(s, \alpha).$$
The optimal expected cost V_* is defined by
$$V_*(s) = \inf_\alpha V(s, \alpha).$$

Clearly, if an optimal policy α∗ exists, then V∗ (s) = V (s, α∗ ). It is proved in


Theorem 4.3 below that, if all policies terminate in finite time with probability 1,
an optimal policy α∗ exists and the optimal expected cost is the unique solution to
a Bellman equation. Furthermore, the optimal policy α∗ is stationary. A stationary
policy is a policy that does not change with time. That is α∗ = (α∗ , α∗ , . . . ), with
α∗ : S → A, where α∗ denotes both the policy as well as each individual decision
function.
The termination time τα of a policy α is the first time an action is taken from
the termination set. That is, τα = inf{n ≥ 0 : αn (X0 , . . . , Xn ) ∈ T(Xn )}. The
expected total cost V(s, α) is given by
$$V(s, \alpha) = \mathbb{E}\Big[\sum_{n=0}^{\tau_\alpha - 1} v_C\big(X_n, \alpha_n(X_0, \ldots, X_n)\big) + v_T\big(X_{\tau_\alpha}, \alpha_{\tau_\alpha}(X_0, \ldots, X_{\tau_\alpha})\big) \,\Big|\, X_0 = s\Big].$$

Given a policy α = (α_0, α_1, . . . ) and a state s ∈ S, let θ_s α be the shifted policy
θ_s α = (α_0', α_1', . . . ), where α_n' : S^n → A with α_n'(s_0, . . . , s_{n−1}) = α_n(s, s_0, . . . , s_{n−1}).
Lemma 4.1. The expected total cost of a policy α satisfies
$$V(s, \alpha) = I\{\alpha_0(s) \in C(s)\}\Big(v_C(s, \alpha_0(s)) + \sum_{s' \in S} P_{ss'}(\alpha_0(s))\, V(s', \theta_s\alpha)\Big) + I\{\alpha_0(s) \in T(s)\}\, v_T(s, \alpha_0(s)). \tag{3}$$
Proof. The claim follows from a straightforward calculation:
$$
\begin{aligned}
V(s, \alpha) &= \sum_{a \in C(s)} \Big\{ v_C(s, a) + \mathbb{E}\Big[\sum_{n=1}^{\tau_\alpha - 1} v_C\big(X_n, \alpha_n(X_0, \ldots, X_n)\big) + v_T\big(X_{\tau_\alpha}, \alpha_{\tau_\alpha}(X_0, \ldots, X_{\tau_\alpha})\big) \,\Big|\, X_0 = s\Big]\Big\} I\{\alpha_0(s) = a\} \\
&\quad + \sum_{a \in T(s)} v_T(s, \alpha_0(s))\, I\{\alpha_0(s) = a\} \\
&= \sum_{a \in C(s)} \Big\{ v_C(s, a) + \mathbb{E}\big[V(X_1, \theta_s\alpha) \mid X_0 = s\big]\Big\} I\{\alpha_0(s) = a\} + \sum_{a \in T(s)} v_T(s, \alpha_0(s))\, I\{\alpha_0(s) = a\} \\
&= \sum_{a \in C(s)} \Big\{ v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V(s', \theta_s\alpha)\Big\} I\{\alpha_0(s) = a\} + \sum_{a \in T(s)} v_T(s, a)\, I\{\alpha_0(s) = a\}. \qquad\square
\end{aligned}
$$
A central role is played by the function V_n, the minimal incurred cost before
time n with termination at n. It is defined recursively as
$$
V_0(s) = \min_{a \in T(s)} v_T(s, a), \qquad
V_{n+1}(s) = \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_n(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}, \tag{4}
$$
for n ≥ 0. It follows by induction that V_{n+1}(s) ≤ V_n(s) for each s ∈ S. To see this,
note first that V_1(s) ≤ V_0(s) for each s ∈ S. Suppose V_n(s) ≤ V_{n−1}(s) for each
s ∈ S. Then
$$V_{n+1}(s) \leq \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_{n-1}(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\} = V_n(s),$$
which proves the induction step.
For each s ∈ S the sequence (Vn (s))n≥0 is non-increasing and bounded below by
0, hence convergent. Let V∞ (s) denote its limit.
Lemma 4.2. V_∞ satisfies the Bellman equation
$$V_\infty(s) = \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_\infty(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}. \tag{5}$$
Proof. Follows by taking limits. Indeed,
$$
\begin{aligned}
V_\infty(s) &= \lim_n V_{n+1}(s) \\
&= \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \lim_n \sum_{s' \in S} P_{ss'}(a)\, V_n(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\} \\
&= \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_\infty(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\},
\end{aligned}
$$
where the last step follows by monotone convergence. □
The following theorem states that there is a collection of policies ℵ for which
V∞ is optimal. Furthermore, if all policies belong to ℵ, which is quite natural in
applications, then V∞ is in fact the expected cost of a stationary policy α∞ .
Theorem 4.3. Let ℵ be the collection of policies α that terminate in finite time,
i.e. P[τα < ∞ | X0 = s] = 1 for each s ∈ S. Let α∞ = (α∞ , α∞ , . . . ) be a
stationary policy where α∞ (s) is a minimizer to
$$a \mapsto \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_\infty(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}. \tag{6}$$

The following statements hold.


(a) For each α ∈ ℵ, V (s, α) ≥ V∞ (s).
(b) V∞ is the optimal expected cost for ℵ. That is, V_∞(s) = inf_{α∈ℵ} V(s, α).
(c) If α∞ ∈ ℵ, then V∞ (s) = V (s, α∞ ).
(d) Suppose that W is a solution to the Bellman equation (5) and let αw denote
the minimizer of (6) with V∞ replaced by W . If αw , α∞ ∈ ℵ then W = V∞ .
In particular, if all policies belong to ℵ, then V∞ is the unique solution to the
Bellman equation (5). Moreover, V∞ is the optimal expected cost and is attained
by the stationary policy α∞ .
Remark 4.4. It is quite natural that all policies belong to ℵ. For instance, suppose
that P_{ss'}(a) = P_{ss'} does not depend on a ∈ A and the set {s : C(s) = ∅} is non-empty.
Then the chain terminates as soon as it hits this set. It follows that
all policies belong to ℵ if P[τ < ∞ | X_0 = s] = 1 for each s ∈ S, where τ is the first
hitting time of {s : C(s) = ∅}.
Proof. (a) Take α ∈ ℵ. Let T_n α = (α_0, α_1, . . . , α_{n−1}, α_T) be the policy α terminated
at n. Here α_T(s) is a minimizer of a ↦ v_T(s, a) over T(s), i.e. an optimal termination
action. That is, the policy T_n α follows α until time n − 1 and then terminates.
In particular, P[τ_{T_n α} ≤ n | X_0 = s] = 1 for each s ∈ S.
We claim that
(i) V (s, Tn α) ≥ Vn (s) for each policy α and each s ∈ S, and
(ii) limn V (s, Tn α) = V (s, α).
Then (a) follows since
$$V(s, \alpha) = \lim_n V(s, T_n\alpha) \geq \lim_n V_n(s) = V_\infty(s).$$

Statement (i) follows by induction. First note that
$$V(s, T_0\alpha) = \min_{a \in T(s)} v_T(s, a) = V_0(s).$$

Suppose V(s, T_n α) ≥ V_n(s) for each policy α and each s ∈ S. Then
$$V(s, T_{n+1}\alpha) = \sum_{a \in T(s)} v_T(s, a)\, I\{\alpha_0(s) = a\} + \sum_{a \in C(s)} \Big\{ v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V(s', \theta_s T_{n+1}\alpha)\Big\} I\{\alpha_0(s) = a\}.$$
Since θ_s T_{n+1}α = (α_1, . . . , α_n, α_T) = T_n θ_s α it follows by the induction hypothesis
that V(s', θ_s T_{n+1}α) ≥ V_n(s'). The expression in the last display is then greater
than or equal to
$$\sum_{a \in T(s)} v_T(s, a)\, I\{\alpha_0(s) = a\} + \sum_{a \in C(s)} \Big\{ v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_n(s')\Big\} I\{\alpha_0(s) = a\} \geq \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_n(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\} = V_{n+1}(s).$$

Proof of (ii). Note that one can write
$$V(s, \alpha) = \mathbb{E}\Big[\sum_{t=1}^{\tau_\alpha - 1} v_C\big(X_t, \alpha_t(X_0, \ldots, X_t)\big) \,\Big|\, X_0 = s\Big] + \mathbb{E}\big[v_T\big(X_{\tau_\alpha}, \alpha_{\tau_\alpha}(X_0, \ldots, X_{\tau_\alpha})\big) \,\big|\, X_0 = s\big],$$
and
$$V(s, T_n\alpha) = \mathbb{E}\Big[\sum_{t=1}^{\tau_\alpha \wedge n - 1} v_C\big(X_t, \alpha_t(X_0, \ldots, X_t)\big) \,\Big|\, X_0 = s\Big] + \mathbb{E}\big[v_T\big(X_{\tau_\alpha \wedge n}, \alpha_{\tau_\alpha \wedge n}(X_0, \ldots, X_{\tau_\alpha \wedge n})\big) \,\big|\, X_0 = s\big].$$
From monotone convergence it follows that
$$\mathbb{E}\Big[\sum_{t=1}^{\tau_\alpha - 1} v_C\big(X_t, \alpha_t(X_0, \ldots, X_t)\big) \,\Big|\, X_0 = s\Big] = \mathbb{E}\Big[\lim_{n \to \infty}\sum_{t=1}^{\tau_\alpha \wedge n - 1} v_C\big(X_t, \alpha_t(X_0, \ldots, X_t)\big) \,\Big|\, X_0 = s\Big] = \lim_{n \to \infty}\mathbb{E}\Big[\sum_{t=1}^{\tau_\alpha \wedge n - 1} v_C\big(X_t, \alpha_t(X_0, \ldots, X_t)\big) \,\Big|\, X_0 = s\Big].$$

Let C ∈ (0, ∞) be an upper bound for v_T. It follows that
$$\Big|\mathbb{E}\big[v_T(X_{\tau_\alpha}, \alpha_{\tau_\alpha}) - v_T\big(X_{\tau_\alpha \wedge n}, \alpha_{\tau_\alpha \wedge n}(X_0, \ldots, X_{\tau_\alpha \wedge n})\big) \,\big|\, X_0 = s\big]\Big| \leq 2C\, \mathbb{E}\big[I\{\tau_\alpha > n\} \,\big|\, X_0 = s\big] = 2C\, \mathbb{P}[\tau_\alpha > n \mid X_0 = s].$$
By assumption α ∈ ℵ, so P[τ_α > n | X_0 = s] → 0 as n → ∞ for each s ∈ S. This
shows that lim_{n→∞} V(s, T_n α) = V(s, α), as claimed.
(b) Note that by (a), inf_{α∈ℵ} V(s, α) ≥ V_∞(s). From the first part of Theorem 4.5
below it follows that there is a sequence of policies α_{n:0} with V(s, α_{n:0}) = V_n(s).
Since V_n(s) → V_∞(s) it follows that inf_{α∈ℵ} V(s, α) ≤ V_∞(s). This proves (b).
(c) Take s ∈ S. Suppose first that α_∞(s) ∈ T(s). Then
$$V_\infty(s) = \min_{a \in T(s)} v_T(s, a) = V(s, \alpha_\infty).$$

It follows that V∞ (s) = V (s, α∞ ) for each s ∈ {s : α∞ (s) ∈ T(s)}.


Take another s ∈ S such that α_∞(s) ∈ C(s). Then
$$V_\infty(s) = v_C(s, \alpha_\infty(s)) + \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\, V_\infty(s'),$$
and
$$V(s, \alpha_\infty) = v_C(s, \alpha_\infty(s)) + \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\, V(s', \alpha_\infty).$$

It follows that
$$
\begin{aligned}
|V_\infty(s) - V(s, \alpha_\infty)| &\leq \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\, |V_\infty(s') - V(s', \alpha_\infty)| \\
&= \sum_{s' :\, \alpha_\infty(s') \in C(s')} P_{ss'}(\alpha_\infty(s))\, |V_\infty(s') - V(s', \alpha_\infty)| \\
&= \mathbb{E}\big[|V_\infty(X_1) - V(X_1, \alpha_\infty)|\, I\{\tau_{\alpha_\infty} > 1\} \,\big|\, X_0 = s\big] \\
&\leq \mathbb{E}\big[\mathbb{E}\big[|V_\infty(X_2) - V(X_2, \alpha_\infty)|\, I\{\tau_{\alpha_\infty} > 2\} \,\big|\, X_1\big] \,\big|\, X_0 = s\big] \\
&\leq \cdots \leq \mathbb{E}\big[|V_\infty(X_n) - V(X_n, \alpha_\infty)|\, I\{\tau_{\alpha_\infty} > n\} \,\big|\, X_0 = s\big] \\
&\leq 2C\, \mathbb{P}[\tau_{\alpha_\infty} > n \mid X_0 = s],
\end{aligned}
$$
where n ≥ 1 is arbitrary. Since P[τ_{α_∞} < ∞ | X_0 = s] = 1 the last expression
converges to 0 as n → ∞. This completes the proof of (c).
Finally, to prove (d), let W be a solution to (5). That is, W satisfies
$$W(s) = \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, W(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}.$$
Proceeding as in the proof of (c) it follows directly that W(s) = V(s, α_w). By (a)
it follows that W(s) ≥ V_∞(s). Consider the termination regions {s : α_w(s) ∈ T(s)}
and {s : α_∞(s) ∈ T(s)} of α_w and α_∞. Since W(s) ≥ V_∞(s), and both are solutions
to (5), it follows that
$$\{s : \alpha_\infty(s) \in T(s)\} \subset \{s : \alpha_w(s) \in T(s)\},$$
and V_∞(s) = min_{a∈T(s)} v_T(s, a) = W(s) on {s : α_∞(s) ∈ T(s)}. To show equality
for all s ∈ S it remains to consider the continuation region of α_∞. Take s ∈ {s :
α_∞(s) ∈ C(s)}. As in the proof of (c) one writes
$$
\begin{aligned}
W(s) &\leq \min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, W(s')\Big) \leq v_C(s, \alpha_\infty(s)) + \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\, W(s') \\
&= v_C(s, \alpha_\infty(s)) + \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\big(W(s') - V_\infty(s')\big) + \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\, V_\infty(s') \\
&= V_\infty(s) + \sum_{s' \in S} P_{ss'}(\alpha_\infty(s))\big(W(s') - V_\infty(s')\big) \\
&= V_\infty(s) + \sum_{s' :\, \alpha_\infty(s') \in C(s')} P_{ss'}(\alpha_\infty(s))\big(W(s') - V_\infty(s')\big) \\
&= V_\infty(s) + \mathbb{E}\big[(W(X_1) - V_\infty(X_1))\, I\{\tau_{\alpha_\infty} > 1\} \,\big|\, X_0 = s\big] \\
&\leq \cdots \leq V_\infty(s) + \mathbb{E}\big[(W(X_n) - V_\infty(X_n))\, I\{\tau_{\alpha_\infty} > n\} \,\big|\, X_0 = s\big].
\end{aligned}
$$
Since E[(W(X_n) − V_∞(X_n)) I{τ_{α_∞} > n} | X_0 = s] → 0 as n → ∞, it follows that
W(s) ≤ V_∞(s) on {s : α_∞(s) ∈ C(s)}. This implies W(s) = V_∞(s) for all s ∈ S
and the proof is complete. □
In practice the optimal expected total cost V_∞ may be difficult to find, and hence
also the policy α_∞ that attains V_∞. However, it is easy to come close. Since V_n(s)
converges to V_∞(s), a close to optimal policy is obtained by finding one whose
expected cost is at most V_n(s) for large n.
For s ∈ S, let α_0(s) be a minimizer of a ↦ v_T(s, a) and, for n ≥ 1, let α_n(s) be a
minimizer of
$$a \mapsto \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_{n-1}(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}. \tag{7}$$

Theorem 4.5. The policy αn:0 = (αn , αn−1 , . . . , α0 ) has expected total cost given
by V (s, αn:0 ) = Vn (s). Moreover, if the stationary policy αn = (αn , αn , . . . ) satisfies
αn ∈ ℵ, then the expected total cost of αn satisfies
Vn (s) ≥ V (s, αn ) ≥ V∞ (s).
Proof. Note that α_0 is a termination action and V(s, α_0) = V_0(s). The first claim
then follows by induction. Suppose V(s, α_{n:0}) = V_n(s). Then
$$
\begin{aligned}
V(s, \alpha_{n+1:0}) &= \sum_{a \in C(s)} \Big\{ v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V(s', \alpha_{n:0})\Big\} I\{\alpha_{n+1}(s) = a\} + \sum_{a \in T(s)} v_T(s, a)\, I\{\alpha_{n+1}(s) = a\} \\
&= \sum_{a \in C(s)} \Big\{ v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_n(s')\Big\} I\{\alpha_{n+1}(s) = a\} + \sum_{a \in T(s)} v_T(s, a)\, I\{\alpha_{n+1}(s) = a\} \\
&= \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_n(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\} = V_{n+1}(s),
\end{aligned}
$$
and the induction proceeds.
The proof of the second statement proceeds as follows. For n ≥ 0 and k ≥ 0 let
$$\alpha_{n:0}^k = (\underbrace{\alpha_n, \ldots, \alpha_n}_{k \text{ times}}, \alpha_{n-1}, \ldots, \alpha_0).$$
Then α_{n:0}^0 = α_{n−1:0}. By induction it follows that V(s, α_{n:0}^k) ≥ V(s, α_{n:0}^{k+1}). Indeed,
note first that
$$V(s, \alpha_{n:0}^0) - V(s, \alpha_{n:0}^1) = V_{n-1}(s) - V_n(s) \geq 0.$$
Suppose V(s, α_{n:0}^{k−1}) − V(s, α_{n:0}^k) ≥ 0. If s is such that α_n(s) ∈ T(s), then
$$V(s, \alpha_{n:0}^k) - V(s, \alpha_{n:0}^{k+1}) = v_T(s, \alpha_n(s)) - v_T(s, \alpha_n(s)) = 0.$$
If α_n(s) ∈ C(s), then
$$V(s, \alpha_{n:0}^k) - V(s, \alpha_{n:0}^{k+1}) = \sum_{s' \in S} P_{ss'}(\alpha_n(s))\big(V(s', \alpha_{n:0}^{k-1}) - V(s', \alpha_{n:0}^{k})\big) \geq 0.$$
This completes the induction step and the induction proceeds. Since α_n ∈ ℵ it
follows that V(s, α_n) = lim_k V(s, α_{n:0}^k). Indeed,
$$|V(s, \alpha_n) - V(s, \alpha_{n:0}^k)| \leq C\, \mathbb{P}(\tau_{\alpha_n} > k) \to 0,$$
as k → ∞. Finally, by Theorem 4.3,
$$V_\infty(s) \leq V(s, \alpha_n) = \lim_k V(s, \alpha_{n:0}^k) \leq V(s, \alpha_{n:0}^1) = V_n(s),$$
and the proof is complete. □
From the above discussion it is clear that the stationary policy α_n converges to
an optimal policy and that V_n provides an upper bound on the expected cost of
following it. Consequently, Algorithm 4.1 determines, in the limit, the optimal cost
and an optimal policy.

Algorithm 4.1 Optimal trading strategies

Input: Tolerance TOL, transition matrix P, state space S, continuation actions
C, termination actions T, continuation cost v_C, termination cost v_T.
Output: Upper bound V_n of the optimal cost and an almost optimal policy α_n.
Let V_0(s) = min_{a∈T(s)} v_T(s, a), for s ∈ S.
Let n = 1 and d > TOL.
while d > TOL do
    Put
    $$V_n(s) = \min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_{n-1}(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}, \quad \text{for } s \in S,$$
    and
    $$d = \max_{s \in S}\big(V_{n-1}(s) - V_n(s)\big),$$
    n = n + 1.
end while
Define α : S → C ∪ T as a minimizer of
$$\min\Big\{\min_{a \in C(s)}\Big(v_C(s, a) + \sum_{s' \in S} P_{ss'}(a)\, V_{n-1}(s')\Big),\ \min_{a \in T(s)} v_T(s, a)\Big\}.$$

Algorithm 4.1 is an example of a value–iteration algorithm. There are other


methods that can be used to solve Markov decision problems, such as policy–
iteration algorithms. See Chapter 3 in [16] for an interesting discussion on algo-
rithms and Markov decision theory. Typically value–iteration algorithms are well
suited to solve Markov decision problems when the state space of the Markov chain
is large.
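A direct implementation of Algorithm 4.1 is straightforward for a finite state space. The following Python sketch (our own generic version, not the authors' Matlab code) takes the model as dictionaries of actions, transition probabilities and costs and returns the upper bound V_n together with an almost optimal stationary policy.

def value_iteration(states, C, T, P, v_C, v_T, tol=1e-9):
    # states : iterable of hashable states s
    # C[s], T[s] : continuation / termination actions available in state s
    # P[(s, a)] : dict {s2: prob} of transition probabilities under continuation action a
    # v_C[(s, a)], v_T[(s, a)] : continuation / termination costs
    inf = float("inf")
    V = {s: min((v_T[(s, a)] for a in T[s]), default=inf) for s in states}   # V_0
    d = tol + 1.0
    while d > tol:
        V_new = {}
        for s in states:
            best = min((v_T[(s, a)] for a in T[s]), default=inf)   # best termination value
            for a in C[s]:                                          # compare with continuation
                best = min(best, v_C[(s, a)]
                           + sum(p * V[s2] for s2, p in P[(s, a)].items()))
            V_new[s] = best
        d = max(V[s] - V_new[s] for s in states)
        V = V_new
    policy = {}                                                     # extract a minimizing action per state
    for s in states:
        cand = [(v_T[(s, a)], a) for a in T[s]]
        cand += [(v_C[(s, a)] + sum(p * V[s2] for s2, p in P[(s, a)].items()), a)
                 for a in C[s]]
        policy[s] = min(cand, key=lambda c: c[0])[1]
    return V, policy

For the keep-or-cancel strategy of Section 4.2, for instance, C(s) is {0} or empty, T(s) ⊂ {−1, −2}, v_C ≡ 0, and v_T is given by the price levels π^{j_A(x)} and π^{j_0}.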
4.2. The keep-or-cancel strategy for buying one unit. In this section a buy-
one-unit strategy is considered. It is similar to the buy strategy outlined in Section
3.3 except that the agent has the additional optionality of early cancellation and
submission of a market order. Only the jump chain of (X_t) is considered. Recall
that the jump chain, denoted (X_n)_{n=0}^∞, is a discrete-time Markov chain where X_n is the
state of the order book after n transitions.
Suppose the initial state is X0 . An agent wants to buy one unit and places a
limit buy order at level j0 < jA (X0 ). After each market transition, the agent has
two choices. Either to keep the limit order or to cancel it and submit a market buy
order at the best available ask level jA (Xn ). It is assumed that the cancellation
and submission of the market order is processed instantaneously. It will also be


assumed that the agent has decided upon a maximum price level J > jA (X0 ). If
the agent’s limit buy order has not been processed and jA (Xn ) = J, then the agent
will immediately cancel the buy order and place a market buy order at level J. It
will be implicitly assumed that there always are limit sell orders available at level
J. Buying at level J can be thought of as a stop-loss.
From a theoretical point of view assuming an upper bound J for the price level
is not a serious restriction as it can be chosen very high. From a practical point of
view, though, it is convenient not to take J very high because it will significantly
slow down the numerical computation of the solution. This deficiency may be
compensated by defining π J appropriately large, say larger than π J−1 plus one
price tick.
Recall the Markov chain (Xn , Yn ) defined in Section 3.2. Here Xn represents the
order book after n transitions and Yn is negative with |Yn | being the number of
quotes in front of and including the agent's order at level j0. The state space in this
case is S ⊂ Zd × {. . . , −2, −1, 0}.
Let s = (x, y) ∈ S. Suppose y < 0 and jA (x) < J so the agent’s order has
not been executed and the stop-loss has not been reached. Then there is one
continuation action C(s) = {0} representing waiting for the next market transition.
The continuation cost vC (s) is always 0. If jA (x) = J (stop-loss is hit) or y = 0
(limit order executed) then C(s) = ∅ so it is only possible to terminate.
There are two termination actions, T = {−2, −1}. If y < 0 the only termination
action available is −1 ∈ T, representing cancellation of the limit order and
submission of a market order at the ask price. If y = 0 the Markov chain always
terminates since the limit order has been executed. This action is represented by
−2 ∈ T. The termination costs are
$$v_T(s, -1) = \pi^{j_A(x)}, \qquad v_T(s, -2) = \pi^{j_0}.$$
The expected total cost may, in this case, be interpreted as the expected buy
price. In a state s = (x, y) with jA (x) < J following a stationary policy α =
(α, α, . . . ) it is given by (see Lemma 4.1)
$$V(s, \alpha) = \begin{cases}\sum_{s'} P_{ss'}\, V(s', \alpha) & \text{for } \alpha(s) = 0,\\ \pi^{j_A(x)} & \text{for } \alpha(s) = -1,\\ \pi^{j_0} & \text{for } \alpha(s) = -2.\end{cases} \tag{8}$$
When s = (x, y) is such that j_A(x) = J, then V(s, α) = π^J. It follows immediately
that
$$\pi^{j_0} \leq V(s, \alpha) \leq \pi^J,$$

for all s ∈ S and all policies α. The motivation of the expression (8) for the expected
buy price is as follows. If the limit order is not processed, so y < 0, there is no cost
of waiting. This is the case α(s) = 0. The cost of cancelling and placing the market
buy order is π^{j_A(x)}, the current best ask price. When the limit order is executed,
so y = 0, the incurred cost is π^{j_0}, the price of the limit order.
The policies in the collection ℵ, i.e. those with P[τ_α < ∞ | X_0 = s] = 1 for each
s ∈ S, are the only reasonable policies here. It does not seem desirable to risk having
to wait an infinite amount of time to buy one unit.
By Theorem 4.3 an optimal keep-or-cancel strategy for buying one unit is the
stationary policy α∞ , with expected buy price V∞ satisfying, see Lemma 4.2,
$$V_\infty(s) = \min\Big\{\min_{a \in C(s)} \sum_{s' \in S} P_{ss'}\, V_\infty(s'),\ \min_{a \in T(s)} v_T(s, a)\Big\}
= \begin{cases}\min\big\{\sum_{s' \in S} P_{ss'}\, V_\infty(s'),\ \pi^{j_A(x)}\big\} & \text{for } j_A(x) < J,\ y < 0,\\ \pi^{J} & \text{for } j_A(x) = J,\ y < 0,\\ \pi^{j_0} & \text{for } y = 0.\end{cases}$$
The stationary policy αn in Theorem 4.5 provides a useful numerical approximation
of an optimal policy, and Vn (s) in (4) provides an upper bound of the expected buy
price. Both Vn and αn can be computed by Algorithm 4.1.

4.3. The ultimate buy-one-unit strategy. In this section the keep-or-cancel


strategy considered above is extended so that the agent may at any time cancel
and replace the limit order.
Suppose the initial state of the order book is X0 . An agent wants to buy one
unit. After n transitions of the order book, if the agent’s limit order is located
at a level j, then jn = j represents the level of the limit order, and Yn represents
the outstanding orders in front of and including the agent’s order at level jn . This
defines the discrete Markov chain (Xn , Yn , jn ).
It will be assumed that the agent has decided upon a best price level J0 and
a worst price level J1 where J0 < jA (X0 ) < J1 . The agent is willing to buy at
level J0 and will not place limit orders at levels lower than J0 . The level J1 is the
worst case buy price or stop-loss. If jA (Xn ) = J1 the agent is committed to cancel
the limit buy order immediately and place a market order at level J1 . It will be
assumed that it is always possible to buy at level J1 . The state space in this case
is S ⊂ Z^d × {. . . , −2, −1, 0} × {J_0, . . . , J_1 − 1}.
The set of possible actions depend on the current state (x, y, j). In each state
where y < 0 the agent has three options:
(1) Do nothing and wait for a market transition.
(2) Cancel the limit order and place a market buy order at the best ask level
jA (x).
(3) Cancel the existing limit buy order and place a new limit buy order at
any level j' with J_0 ≤ j' < j_A(x). This action results in the transition to
j_n = j', X_n = x + e_j − e_{j'} and Y_n = x^{j'} − 1.
In a given state s = (x, y, j) with y < 0 and j_A(x) < J_1 the set of continuation
actions is
$$C(x, y, j) = \{0, J_0, \ldots, j_A(x) - 1\}.$$
Here a = 0 represents the agent being inactive and awaiting the next market transition,
and the actions j', where J_0 ≤ j' < j_A(x), correspond to cancelling the outstanding
order and submitting a new limit buy order at level j'. The cost of continuation is
always 0: v_C(s, 0) = v_C(s, j') = 0. If y = 0 or j_A(x) = J_1, then C(s) = ∅ and only
termination is possible.
termination is possible.
As in the keep-or-cancel strategy there are two termination actions, T =
{−2, −1}. If y < 0 the only termination action available is −1, representing can-
cellation of the limit order and submission of a market order at the ask price. If
y = 0 the Markov chain always terminates since the limit order has been executed.
This action is represented by −2.
The expected buy price V(s, α) from a state s = (x, y, j) with j_A(x) < J_1, following
a stationary policy α = (α, α, . . . ), is
$$V(s, \alpha) = \begin{cases}\sum_{s'} P_{ss'}\, V(s', \alpha) & \text{for } \alpha(s) = 0,\\ V(s_{j'}, \alpha) & \text{for } \alpha(s) = j',\ J_0 \leq j' < j_A(x),\\ \pi^{j_A(x)} & \text{for } \alpha(s) = -1,\\ \pi^{j} & \text{for } \alpha(s) = -2.\end{cases}$$
In the second line s_{j'} refers to the state (x', y', j') where x' = x + e_j − e_{j'} and y' = x^{j'} − 1.
If s = (x, y, j) with j_A(x) = J_1, then V(s, α) = π^{J_1}. Since the agent never places
limit orders below level J_0 and it is assumed that it is always possible to buy at level J_1,
it follows immediately that
$$\pi^{J_0} \leq V(s, \alpha) \leq \pi^{J_1},$$
for all s ∈ S and all policies α.
By Theorem 4.3 an optimal buy strategy is the stationary policy α∞ , with
expected buy price V∞ satisfying, see Lemma 4.2,
$$V_\infty(s) = \min\Big\{\min_{a \in C(s)} \sum_{s' \in S} P_{ss'}\, V_\infty(s'),\ \min_{a \in T(s)} v_T(s, a)\Big\},$$
which implies that
$$V_\infty(s) = \min\Big\{\sum_{s' \in S} P_{ss'}\, V_\infty(s'),\ V_\infty(s_{J_0}), \ldots, V_\infty(s_{j_A(x)-1}),\ \pi^{j_A(x)}\Big\},$$
for j_A(x) < J_1, y < 0, and
$$V_\infty(s) = \begin{cases}\pi^{J_1} & \text{for } j_A(x) = J_1,\ y < 0,\\ \pi^{j} & \text{for } y = 0.\end{cases}$$
The stationary policy αn in Theorem 4.5 provides a useful numerical approximation
of an optimal policy, and Vn (s) in (4) provides an upper bound of its expected buy
price. Both αn and Vn can be computed by Algorithm 4.1.

4.4. Making the spread. In this section a strategy aimed at earning the difference
between the bid and the ask price, the spread, is considered. An agent submits two
limit orders, one buy and one sell. In case both are executed the profit is the
price difference between the two orders. For simplicity it is assumed at first that
before one of the orders has been executed the agent only has two options after
each market transition: cancel both orders or wait until next market transition.
The extension which allows for cancellation and resubmission of both limit orders
with new limits is presented at the end of this section.
Suppose X0 is the initial state of the order book. The agent places the limit
buy order at level j0 and the limit sell order at level j1 > j0 . The orders are
placed instantaneously and after the orders are placed the state of the order book
is X0 − ej0 + ej1 .
Consider the extended Markov chain (Xn , Yn0 , Yn1 , jn0 , jn1 ). Here Xn represents the
order book after n transitions and Yn0 and Yn1 represent the limit buy (negative)
and sell (positive) orders at levels jn0 and jn1 that are in front of and including the
agent's orders, respectively. It follows that Y_0^0 = X_0^{j_0} − 1 and Y_0^1 = X_0^{j_1} + 1, where
Y_n^0 is non-decreasing and Y_n^1 is non-increasing. The agent's buy (sell) order has
been processed when Y_n^0 = 0 (Y_n^1 = 0).
Suppose the agent has decided on a best buy level JB0 < jA (X0 ) and a worst buy
level JB1 > jA (X0 ). The agent will never place a limit buy order at a level lower
than JB0 and will not buy at a level higher than JB1 , and it is assumed to always
be possible to buy at level JB1 . Similarly, the agent has decided on a best sell price
JA1 > jB (X0 ) and a worst sell price JA0 < jB (X0 ). The agent will never place a
limit sell order at a level higher than JA1 and will not sell at a level lower than
JA0, and it is assumed to always be possible to sell at level JA0. The state space of
this Markov chain is S ⊂ Z^d × {. . . , −2, −1, 0} × {0, 1, 2, . . . } × {JB0, . . . , JB1 − 1} ×
{JA0 + 1, . . . , JA1}.
The possible actions are:

(1) Before any of the orders has been processed the agent can wait for the next
market transition or cancel both orders.
(2) When one of the orders has been processed, say the sell order, the agent
has an outstanding limit buy order. Then the agent proceeds according to
the ultimate buy-one-unit strategy presented in Section 4.3.

Given a state s = (x, y 0 , y 1 , j 0 , j 1 ) of the Markov chain the optimal value function
V∞ is interpreted as the optimal expected payoff. Note, that for making-the-spread
strategies it is more natural to have V∞ as a payoff than as a cost and this is how
it will be interpreted. The general results in Section 4.1 still hold since the value
functions are bounded from below and above. The optimal expected payoff can be
computed as follows. Let V_∞^B(x, y, j) denote the optimal (minimal) expected buy
price in state (x, y, j) for buying one unit, with best buy level JB0 and worst buy
level JB1. Similarly, V_∞^A(x, y, j) denotes the optimal (maximal) expected sell price
in state (x, y, j) for selling one unit, with best sell level JA1 and worst sell level JA0.
The optimal expected payoff is then given by
$$V_\infty(s) = \begin{cases}\max\big\{\sum_{s' \in S} P_{ss'}\, V_\infty(s'),\ 0\big\} & \text{for } y^0 < 0,\ y^1 > 0,\\ \pi^{j^1} - V_\infty^B(x, y^0, j^0) & \text{for } y^1 = 0,\ y^0 < 0,\\ V_\infty^A(x, y^1, j^1) - \pi^{j^0} & \text{for } y^0 = 0,\ y^1 > 0.\end{cases}$$
The term $\sum_{s' \in S} P_{ss'}\, V_\infty(s')$ is the value of waiting and 0 is the value of cancelling
both orders.
In the extended version of the making-the-spread strategy it is also possible to
replace the two limit orders before the first has been executed. Then the possible
actions are as follows.

(1) Before any of the orders has been processed the agent can wait for the next
market transition, cancel both orders or cancel both orders and resubmit
at new levels k 0 and k 1 .
(2) When one of the orders have been processed, say the sell order, the agent
has an outstanding limit buy order. Then the agent proceeds according to
the ultimate buy-one-unit strategy presented in Section 4.3.
It is assumed that JB0, JB1, JA0, and JA1 are the upper and lower limits, as above.
Then the optimal expected payoff is given by
$$V_\infty(s) = \begin{cases}\max\big\{\sum_{s' \in S} P_{ss'}\, V_\infty(s'),\ \max V_\infty(s_{k^0 k^1}),\ 0\big\} & \text{for } y^0 < 0,\ y^1 > 0,\\ \pi^{j^1} - V_\infty^B(x, y^0, j^0) & \text{for } y^1 = 0,\ y^0 < 0,\\ V_\infty^A(x, y^1, j^1) - \pi^{j^0} & \text{for } y^0 = 0,\ y^1 > 0.\end{cases}$$
In the first line the max V_∞(s_{k^0 k^1}) is taken over all states s_{k^0 k^1} = (x̃, ỹ^0, ỹ^1, k^0, k^1)
where JB0 ≤ k^0 < JB1, JA0 < k^1 ≤ JA1, x̃ = x + e_{j^0} − e_{k^0} − e_{j^1} + e_{k^1}, ỹ^0 = x^{k^0} − 1,
and ỹ^1 = x^{k^1} + 1. Here k^0 and k^1 represent the levels of the new limit orders.

5. Implementation of a simple model


In this section a simple parameterization of the Markov chain for the order book
is presented. The aim of the model presented here is not to be very sophisticated
but rather to allow for simple calibration.
Recall that a Markov chain is specified by its initial state and generator matrix
Q (see for instance Norris [12], Chapter 2). Given two different states x, y ∈ S, Qxy
denotes the transition intensity from x to y. The waiting time until next transition
is exponentially distributed with parameter
$$-Q_{xx} = \sum_{y \neq x} Q_{xy}. \tag{9}$$
The transition matrix of the jump chain, P_{xy}, gives the probability that a transition
in state x takes the Markov chain to state y. It is obtained from Q via
$$P_{xy} = \frac{Q_{xy}}{\sum_{y' \neq x} Q_{xy'}}.$$
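In code, the conversion from generator to jump chain is just a row normalization after zeroing the diagonal (a small sketch with our own naming; it assumes every state has at least one outgoing transition).

import numpy as np

def jump_chain(Q):
    R = np.array(Q, dtype=float)
    np.fill_diagonal(R, 0.0)                    # keep only off-diagonal intensities
    return R / R.sum(axis=1, keepdims=True)     # each row sum equals -Q_xx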

Recall from Section 2 the different order types (limit order, market order, cancellation)
that dictate the possible transitions of the order book. The model is
then completely determined by the initial state and the non-zero transition
intensities in (1).
In this section the limit, market and cancellation order intensities are specified
as follows.
• Limit buy (sell) orders arrive at a distance of i levels from the best ask (bid)
level with intensity λ_L^B(i) (λ_L^S(i)).
• Market buy (sell) orders arrive with intensity λ_M^B (λ_M^S).
• The sizes of limit and market orders follow discrete exponential distributions
with parameters α_L and α_M, respectively. That is, the distributions (p_k)_{k≥1}
and (q_k)_{k≥1} of limit and market order sizes are given by
$$p_k = (e^{\alpha_L} - 1)e^{-\alpha_L k}, \qquad q_k = (e^{\alpha_M} - 1)e^{-\alpha_M k}.$$
• The size of cancellation orders is assumed to be 1. Each individual unit-size
buy (sell) order located at a distance of i levels from the best ask (bid)
level is cancelled at rate λ_C^B(i) (λ_C^S(i)). At the cumulative level, cancellations
of buy (sell) orders at a distance of i levels from the opposite best
ask (bid) level arrive at a rate proportional to the volume at that level:
λ_C^B(i)|x^{j_A−i}| (λ_C^S(i)|x^{j_B+i}|).
In mathematical terms the transition rates are given as follows.

Limit orders:
    x → x + k e_j,   j > j_B(x), k ≥ 1,   with rate p_k λ_L^S(j - j_B(x)),
    x → x - k e_j,   j < j_A(x), k ≥ 1,   with rate p_k λ_L^B(j_A(x) - j).

Cancellation orders, except at the best ask/bid level:
    x → x - e_j,   j > j_A(x),   with rate λ_C^S(j - j_B(x)) |x_j|,
    x → x + e_j,   j < j_B(x),   with rate λ_C^B(j_A(x) - j) |x_j|.

Market orders of unit size and cancellations at the best ask/bid level:
    x → x + e_{j_B(x)},   with rate q_1 λ_M^S + λ_C^B(j_A(x) - j_B(x)) |x_{j_B(x)}|,
    x → x - e_{j_A(x)},   with rate q_1 λ_M^B + λ_C^S(j_A(x) - j_B(x)) |x_{j_A(x)}|.

Market orders of size at least 2:
    x → x + k e_{j_B(x)},   k ≥ 2,   with rate q_k λ_M^S,
    x → x - k e_{j_A(x)},   k ≥ 2,   with rate q_k λ_M^B.
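To make the dynamics concrete, the following Python sketch performs a single
transition of the chain in the Gillespie fashion: it lists the possible events with
their rates, draws an exponential waiting time with the total rate, and then draws
one event. The sketch works at the level of incoming orders (limit orders of random
size, unit-size cancellations, market orders of random size), which induces the rates
listed above; unit-size market orders and cancellations at the best quote appear as
separate events whose rates add up to the combined rate in the list. All parameter
names are placeholders, and boundary effects, such as a market order exceeding the
volume at the best quote, are ignored.

    import numpy as np

    rng = np.random.default_rng(0)

    def best_bid(x): return max(j for j, v in enumerate(x) if v < 0)
    def best_ask(x): return min(j for j, v in enumerate(x) if v > 0)

    def disc_exp(alpha):
        # p_k = (e^alpha - 1) e^{-alpha k}, k = 1, 2, ..., is a geometric
        # distribution with success probability 1 - e^{-alpha}.
        return int(rng.geometric(1.0 - np.exp(-alpha)))

    def simulate_step(x, par):
        """One transition of the order book chain.  x lists the signed volume per
        price level (buy < 0, sell > 0).  par holds lam_L_B, lam_L_S, lam_C_B,
        lam_C_S (equal-length arrays indexed by distance-1 from the opposite best
        quote) and the scalars lam_M_B, lam_M_S, alpha_L, alpha_M.
        Returns (waiting_time, new_x)."""
        jB, jA = best_bid(x), best_ask(x)
        events = []  # entries: (rate, level, sign, size_sampler)
        for i in range(1, len(par["lam_L_B"]) + 1):
            if jA - i >= 0:                        # limit buy, i levels below best ask
                events.append((par["lam_L_B"][i - 1], jA - i, -1,
                               lambda: disc_exp(par["alpha_L"])))
            if jB + i < len(x):                    # limit sell, i levels above best bid
                events.append((par["lam_L_S"][i - 1], jB + i, +1,
                               lambda: disc_exp(par["alpha_L"])))
            if jA - i >= 0 and x[jA - i] < 0:      # cancel one unit of buy volume
                events.append((par["lam_C_B"][i - 1] * abs(x[jA - i]), jA - i, +1,
                               lambda: 1))
            if jB + i < len(x) and x[jB + i] > 0:  # cancel one unit of sell volume
                events.append((par["lam_C_S"][i - 1] * abs(x[jB + i]), jB + i, -1,
                               lambda: 1))
        events.append((par["lam_M_S"], jB, +1, lambda: disc_exp(par["alpha_M"])))  # market sell
        events.append((par["lam_M_B"], jA, -1, lambda: disc_exp(par["alpha_M"])))  # market buy
        rates = np.array([e[0] for e in events])
        dt = rng.exponential(1.0 / rates.sum())               # waiting time, cf. (9)
        _, level, sign, draw_size = events[rng.choice(len(events), p=rates / rates.sum())]
        y = list(x)
        y[level] += sign * draw_size()
        return dt, y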

The model described above is an example of a zero intelligence model: transition
probabilities are state independent except for their dependence on the location of
the best bid and ask. Zero intelligence models of market microstructure were
considered already in 1993, in the work of Gode and Sunder [11]. Despite the
simplicity of such models, they capture many important aspects of order driven
markets. Based on a mean field theory analysis, the authors of [15] and [5]
derive laws relating the mean spread and the short-term price diffusion rate to the
order arrival rates. In [8], the validity of these laws is tested on high-frequency
data for eleven stocks traded at the London Stock Exchange. The authors find that
the model does a good job of predicting the average spread and a decent job of
predicting the price diffusion rate.
It remains to estimate the model parameters from historical data. This is the
next topic.

5.1. Model calibration. Calibration of the Markov chain for the order book
amounts to determining the order intensities and the order size parameters

    (λ_L, λ_C, λ_M^B, λ_M^S, α_L, α_M),

where λ_L and λ_C denote the families of limit order and cancellation intensities
λ_L^B(i), λ_L^S(i) and λ_C^B(i), λ_C^S(i).

Suppose a historical sample of the order book containing all limit, market and
cancellation orders during a period of time T is given. If N_L^B(i) denotes the
number of limit buy orders that arrived at a distance of i levels from the best ask,
then

    \hat{\lambda}_L^B(i) = N_L^B(i) / T

is an estimate of the arrival rate of limit buy orders at that level. The rates λ_L^S,
λ_M^B, and λ_M^S are estimated similarly.

The cancellation rate is proportional to the number of orders at each level. To
estimate λ_C^B(i) (λ_C^S(i)) one first calculates the average number of outstanding
buy (sell) orders at a distance of i levels from the opposite best quote, b_*^i. If b_t^i
denotes the number of buy orders i levels from the best ask at time t, then

    b_*^i = \frac{1}{T} \int_0^T b_t^i \, dt.

Then an estimate of the cancellation rate λ_C^B(i) is

    \hat{\lambda}_C^B(i) = \frac{N_C^B(i)}{b_*^i \, T},

where N_C^B(i) is the number of observed cancellations of buy orders at that distance,
and similarly for λ_C^S.
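As an illustration, these rate estimators amount to a few lines of Python once the
order counts and a piecewise-constant depth series have been extracted from the
data; the function and variable names below are hypothetical.

    import numpy as np

    def arrival_rate(counts, T):
        """lambda_hat(i) = N(i) / T; counts[i-1] is the number of observed orders
        at a distance of i levels from the opposite best quote."""
        return np.asarray(counts, dtype=float) / T

    def time_avg_depth(times, depths, T):
        """b_*^i = (1/T) * int_0^T b_t^i dt for a piecewise-constant depth series:
        times[k] is the snapshot time, depths[k, i-1] the number of outstanding
        orders at distance i from then until the next snapshot (or until T)."""
        durations = np.diff(np.append(np.asarray(times, dtype=float), T))
        return (durations[:, None] * np.asarray(depths, dtype=float)).sum(axis=0) / T

    def cancellation_rate(cancel_counts, avg_depth, T):
        """lambda_hat_C(i) = N_C(i) / (b_*^i * T)."""
        return np.asarray(cancel_counts, dtype=float) / (np.asarray(avg_depth, dtype=float) * T)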


The market and limit order size parameters are estimated by maximum like-
lihood. Given an independent sample (m_1, ..., m_N) from a discrete exponential
distribution with parameter α, the maximum likelihood estimate is given by

    \hat{\alpha} = \log \frac{\bar{m}}{\bar{m} - 1},   with   \bar{m} = N^{-1} \sum_{i=1}^N m_i.

As a consequence, if there are N_L limit orders of sizes l_1, ..., l_{N_L} and N_M market
orders of sizes m_1, ..., m_{N_M} in the historical data, then the parameters α_L and α_M
are estimated by

    \hat{\alpha}_L = \log \frac{\bar{l}}{\bar{l} - 1},        \hat{\alpha}_M = \log \frac{\bar{m}}{\bar{m} - 1}.
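Since the estimate has a closed form, a sketch of the fit is short; the sample of
order sizes below is invented purely for illustration.

    import numpy as np

    def fit_discrete_exponential(sizes):
        """Maximum-likelihood estimate alpha_hat = log(mean / (mean - 1)) for the
        discrete exponential distribution p_k = (e^alpha - 1) e^{-alpha k}."""
        m = np.mean(sizes)
        return np.log(m / (m - 1.0))

    # e.g. alpha_M from a (made-up) list of observed market order sizes
    print(fit_discrete_exponential([1, 1, 2, 1, 3, 1, 2, 1]))   # ~1.10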
Remark 5.1. It is often the case that one does not have access to the complete
order book. For instance, the data discussed below only contained time-stamped
quotes of the first five non-zero levels on each side of the book. Empirical studies
indicate that arrival rates should decay as a power law (see e.g. [3]), at least for
stock market order books. As the quantities of interest in this paper do not depend
on levels far away from the best bid/ask, this issue will be ignored. Only intensities
at levels where data is available will be estimated.
A more serious problem is the fact that the data is most often presented as a
list of deals and quotes. In other words, one only observes the impact of limit and
cancellation orders, not the orders themselves. As a consequence, high frequency
data is needed to distinguish different orders from each other.

6. Numerical results
In this section the simple model introduced in Section 5 is calibrated to data
from the foreign exchange market and the performance of the different buy-one-
unit strategies is evaluated.

6.1. Description of the data. In this section the EUR/USD exchange rate traded
on a major foreign exchange market is considered. The data consists of time-
stamped sequences of prices and quantities of outstanding limit orders (quotes)
and market orders (deals) for the first five non-zero levels on each side of the order
book. The trades are in units of one million. That is, a limit buy order of unit size
at 1.4342 is an order to buy one million EUR for 1.4342 million USD. Samples of
the data are presented in Tables 1 and 2. Note that level k in Table 1 refers to the
k’th non-zero bid/ask quote. It does not refer to the distance to the best bid/ask.
Quotes are updated every 100 milliseconds and it will be assumed that this updating
frequency is sufficiently high to be able to distinguish different orders (cf. Remark
5.1).
Note that a deal and a market order are not the same thing: the size of a market
order might be larger than the outstanding volume at the best price level, in which
case it gives rise to several deals. The deal list can however be used to distinguish
deals from cancellations.
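A minimal sketch of this idea, under the simplifying assumption that matching a
volume decrease against the deal list by time window and price is sufficient, could
look as follows; it is not the exact reconstruction procedure used for this data set.

    def classify_volume_drop(t_prev, t_curr, price, deals):
        """deals is an iterable of (time, price, volume) records from the deal list.
        Attribute a decrease in quoted volume at `price` between the snapshots at
        t_prev and t_curr to a deal if the deal list contains a matching record,
        otherwise to a cancellation."""
        hit = any(t_prev < t <= t_curr and p == price for t, p, _ in deals)
        return "deal" if hit else "cancellation"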

Table 1. Data sample – quotes

TIME BID/ASK LEVEL PRICE VOLUME


04:44:20.800 B 1 1.4342 4
04:44:20.800 B 2 1.4341 8
04:44:20.800 B 3 1.4340 8
04:44:20.800 B 4 1.4339 6
04:44:20.800 B 5 1.4338 6
.. .. .. .. ..
04:44:21.500 A 1 1.4344 2
04:44:21.500 A 2 1.4345 5
04:44:21.500 A 3 1.4346 9
04:44:21.500 A 4 1.4347 6
04:44:21.500 A 5 1.4348 3

Table 2. Data sample – deals

TIME BUY/SELL - ORDER PRICE VOLUME


04:44:20.600 B 1.4343 1
04:44:29.700 B 1.4344 1
04:44:29.800 S 1.4344 9

6.2. Calibration result. Extracting all limit, market and cancellation orders from
the data sample enabled calibration according to the procedure explained in Section
5.1. In Table 3 we show the result of the calibration. These particular parameters
were obtained using orders submitted during a 120 minute period on a single
day, from 02:44:20 am to 04:44:20 am. During this period there were a total of
14 294 observed units entered into the order book via 10 890 limit orders. In all,
there were 13 110 cancelled orders and 1 029 traded units, distributed over 660
market orders.

Table 3. Example of market parameters – calibrated to a data
sample of 120 minutes.

   i:           1        2        3        4        5
   λ_L^B(i)     0.1330   0.1811   0.2085   0.1477   0.0541
   λ_L^S(i)     0.1442   0.1734   0.2404   0.1391   0.0584
   λ_C^B(i)     0.1287   0.1057   0.0541   0.0493   0.0408
   λ_C^S(i)     0.1308   0.1154   0.0531   0.0492   0.0437

   λ_M^S = 0.0467        α_L = 0.5667
   λ_M^B = 0.0467        α_M = 0.4955

6.3. Optimal buy price. Following Section 4.3, the optimal expected buy price
can be evaluated for the ultimate buy-one-unit strategy. Once the expected price
has been computed, it is straightforward to extract an optimal strategy using
Algorithm 4.1.
Recall the setup. An agent wants to buy 1 million EUR at the best possible price.
For simplicity, only three price levels will be considered: 1.4342, 1.4343 and 1.4344.
Suppose that, initially, 1.4342 is the best bid level, 1.4344 the best ask and there
are no quotes at 1.4343. The agent has the option of either submitting a market
order at 1.4344 or a limit order at 1.4342 or 1.4343. After each market transition,
the agent may revise the position. A stop-loss is placed at 1.4345.
If the best ask moves above 1.4344, a market buy order is immediately submitted.
This order is assumed to be processed at 1.4345.
Assuming that the number of outstanding orders at each level is bounded, by 15
say, results in a problem with a finite state space. The optimal buy price and the
optimal strategy are obtained using Algorithm 4.1 with a tolerance of 10^{-6}.
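Algorithm 4.1 is stated in Section 4, so only the stopping rule used in this
computation is illustrated here: a generic value-iteration loop over a finite state
space, where bellman_update stands in for the model-specific maximisation over
the admissible actions (keep the order, cancel and resubmit, submit a market order)
and is an assumption rather than the paper's implementation.

    import numpy as np

    def value_iteration(bellman_update, n_states, tol=1e-6, max_iter=1_000_000):
        """Iterate V <- T(V) until the sup-norm change is below tol.
        bellman_update(V) must return, for every state, the maximum expected
        payoff over the admissible actions given continuation values V."""
        V = np.zeros(n_states)
        for _ in range(max_iter):
            V_new = bellman_update(V)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
        raise RuntimeError("value iteration did not converge")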
The result of the computation is illustrated in Figures 5, 6 and 7. Initially the
agent needs to decide on order type (limit or market) and price level. It turns out
that it is optimal to submit a limit order at 1.4342, independent of the volumes at
1.4342 and 1.4344, see Figure 5.
As new orders arrive, changing the state of the order book, the agent will re-
evaluate the situation. As an example, suppose that after some market transitions
the order book is in a state where there is only one limit buy order of unit size
at 1.4342 in front of the agent’s. The agent will now act differently depending
on the number of outstanding limit orders at 1.4343 and 1.4344. If there are sell
limit orders at 1.4343, then the agent will almost always keep the limit order in
place. However, if there is more than one buy order at 1.4343, the agent will submit
a cancellation followed by either a new limit order at 1.4343 or a market order,
depending on the situation. The optimal decision, along with the corresponding
optimal expected buy price, in this situation, is illustrated in Figure 6.
As long as there are no limit buy orders at 1.4343, Figure 5 indicates that it will
be optimal to keep the limit order at 1.4342. However, if a limit buy order appears
at 1.4343, then the situation changes dramatically. Depending on the number of
limit orders ahead of the agent’s and on the number of sell orders at 1.4344, the
agent will act differently. The optimal decision along with the optimal expected
buy price is illustrated in Figure 7.

6.4. Comparison of strategies for buying one unit. Three different strategies
for buying one unit have been described: the ultimate strategy described above
and in Section 4.3, the keep-or-cancel strategy from Section 4.2, and the
naive buy strategy from Section 3.3. In the naive strategy the agent submits a
limit order at 1.4342 and does nothing until either the order is matched against a
market order or the best ask moves above 1.4344, in which case the limit order is
cancelled and a market order is submitted. In the keep-or-cancel strategy the agent
submits a limit order at 1.4342. This order can then be cancelled and replaced by
a market order at any time. However, it cannot be replaced by a limit order at
1.4343.
There are extra computational costs when determining the ultimate strategy,
compared with the naive and keep-or-cancel strategies. It is therefore reasonable to
check whether this increase in complexity generates any relevant cost reduction. The

difference in expected buy price between the ultimate strategy and the other two
is presented in Figure 8. The situation is the same as the initial state considered
above. That is, there are no limit orders at 1.4343.
It turned out that the extra possibility of replacing the limit order with a market
order made a substantial difference to the expected price compared to the naive
strategy. The further possibility of replacing the order with a limit order at
1.4343 yielded an additional, though much smaller, reduction in expected buy
price.

Remark 6.1. In this example only three levels were considered. Note that the
state space, and hence the complexity, grows exponentially with the number of
levels. As a consequence, Algorithm 4.1 is only applicable when the number of
levels is relatively small.

Acknowledgements. The authors are grateful to Ebba Ankarcrona and her col-
leagues at SEB for interesting discussions as well as data access and analysis.

References
[1] Aurélien Alfonsi, Antje Fruth, and Alexander Schied. Optimal execution strategies in limit
order books with general shape functions. Quant. Finance, 10(2):143–157, 2010.
[2] Marco Avellaneda and Sasha Stoikov. High-frequency trading in a limit order book. Quant.
Finance, 8(3):217–224, 2008.
[3] Jean-Philippe Bouchaud, Marc Mézard, and Marc Potters. Statistical properties of stock
order books: empirical results and models. Quant. Finance, 2(4):251–256, 2002.
[4] Rama Cont, Sasha Stoikov, and Rishi Talreja. A stochastic model for order book dynamics.
SSRN eLibrary, 2008. http://ssrn.com/paper=1273160.
[5] Marcus G. Daniels, J. Doyne Farmer, László Gillemot, Giulia Iori, and Eric Smith. Quanti-
tative model of price diffusion and market friction based on trading as a mechanistic random
process. Phys. Rev. Lett., 90(10):108102, Mar 2003.
[6] Hans Michael Dietz and Volker Nollau. Markov decision problems with countable state spaces,
volume 15 of Mathematical Research. Akademie-Verlag, Berlin, 1983.
[7] J. Doyne Farmer, László Gillemot, Fabrizio Lillo, Szabolcs Mike, and Anindya Sen. What
really causes large price changes? Quant. Finance, 4(4):383–397, 2004.
[8] J. Doyne Farmer, Paolo Patelli, and Ilija Zovko. The predictive power of zero intelligence
models in financial markets. Proceedings of the National Academy of Sciences of the United
States of America, 102(6):2254–2259, 2005.
[9] Eugene A. Feinberg and Adam Shwartz, editors. Handbook of Markov decision processes.
International Series in Operations Research & Management Science, 40. Kluwer Academic
Publishers, Boston, MA, 2002.
[10] Thierry Foucault. Order flow composition and trading costs in a dynamic limit order market.
Journal of Financial Markets, 2(2):99 – 134, 1999.
[11] Dhananjay K. Gode and Shyam Sunder. Allocative efficiency of markets with zero-intelligence
traders: Market as a partial substitute for individual rationality. Journal of Political Econ-
omy, 101(1):119–137, 1993.
[12] James R. Norris. Markov chains, volume 2 of Cambridge Series in Statistical and Probabilistic
Mathematics. Cambridge University Press, Cambridge, 1998.
[13] Anna A. Obizhaeva and Jiang Wang. Optimal Trading Strategy and Supply/Demand Dy-
namics. SSRN eLibrary, 2005. http://ssrn.com/paper=666541.
[14] Christine A. Parlour. Price dynamics in limit order markets. Rev. Financ. Stud., 11(4):789–
816, 1998.
[15] Eric Smith, J. Doyne Farmer, László Gillemot, and Supriya Krishnamurthy. Statistical
theory of the continuous double auction. Quantitative Finance, 3(6):481–514, 2003.
[16] Henk C. Tijms. Stochastic models. Wiley Series in Probability and Mathematical Statistics:
Applied Probability and Statistics. John Wiley & Sons Ltd., Chichester, 1994.

[Figure 5 plot: "Initial level choice and expected payoff" – choice matrix and
expected buy price over the volumes at 1.4342 and 1.4344.]
Figure 5. Expected buy price and optimal initial choice for buy
order placement in the ultimate buy-one-unit strategy. The right
plot shows expected buy price given different volumes at price lev-
els 1.4342 and 1.4344. Level 1.4343 contains no quotes. The dark
grey in the choice matrix to the left shows that it is always optimal
to place a limit order at 1.4342.
[Figure 6 plot: "Buy order at 1.4342, different volumes at 1.4343 and 1.4344" –
choice matrix and expected buy price.]
Figure 6. Expected buy price and optimal choice of buy order


placement in the ultimate buy-one-unit strategy. The buy order
currently has place 2 out of a total of 10 limit buy orders at price
level 1.4342. The choice matrix to the left shows optimal choice
given different volumes at levels 1.4343 and 1.4344. Dark grey
indicates that the buy order should be kept in place. Light grey
indicates that the buy order should be cancelled and resubmitted at
level 1.4343. In the white region, the buy order should be cancelled
and replaced by a market order. The plot to the right shows the
optimal expected buy price.

[Figure 7 plot: "Buy order at different positions at 1.4342" – choice matrix over the
number of quotes ahead of the buy order and the volume at 1.4344, and expected
buy price.]

Figure 7. Expected buy price and optimal choice of buy order


placement in the ultimate buy-one-unit strategy. In these plots,
the total volume at level 1.4342 is −13 and the volume at 1.4343
is −1. The plot to the left shows the optimal choice given different
number of quotes ahead of the buy order at 1.4342. That is, if
there are 4 quotes ahead of the buy order, then these 4 orders need
to be either cancelled or matched against market orders before the
buy order can be executed. In the dark grey region the buy order
should be kept at its current position. In the lighter grey region,
the order should be moved to level 1.4343, where it is behind only
1 order. In the white region finally, the limit order should be
cancelled and replaced by a market order. The plot to the right
shows the corresponding expected buy price.

(H. Hult) Department of Mathematics, KTH, 100 44 Stockholm, Sweden


E-mail address: hult@kth.se

(J. Kiessling) Department of Mathematics, KTH, 100 44 Stockholm, Sweden


E-mail address: jonkie@kth.se

[Figure 8 plot: "Three different buy strategies compared" – expected price differences
over the volumes at 1.4342 and 1.4344.]

Figure 8. Difference between expected buy price under different


buy strategies. In both plots, level 1.4343 is empty. The plot to
the left shows the difference between the expected buy price under
the keep-or-cancel policy described in Section 4.2 and the ultimate
policy, for different volumes at 1.4342 and 1.4344. The plot to the
right shows the difference in expected buy price between the naive
buy-one-unit strategy and the ultimate policy. It is clear that the
option of early cancellation has a substantial impact on the final
buy price.
