
A short introduction to the Lindblad Master Equation

Daniel Manzano1
1
Electromagnetism and Condensed Matter Department and Carlos I Institute for Theoretical
and Computational Physics. University of Granada. E-18071 Granada. Spain ∗
The theory of open quantum systems is one of the most essential tools for the development of
quantum technologies. Furthermore, the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad)
Master Equation plays a key role as it is the most general generator of Markovian dynamics in
quantum systems. In this paper, we present this equation together with its derivation and methods
of resolution. The presentation tries to be as self-contained and straightforward as possible to be
useful to readers with no previous knowledge of this field.

I. INTRODUCTION

Open quantum system techniques are vital for many studies in quantum mechanics [1–3]. This happens because
closed quantum systems are just an idealisation of real systems¹, as in Nature nothing can be completely isolated. In
practical problems, the interaction of the system of interest with the environment cannot be avoided, and we require an approach
in which the environment can be effectively removed from the equations of motion.
The general problem addressed by Open Quantum Theory is sketched in Figure 1. In the most general picture, we
have a total system that constitutes a closed quantum system by itself. We are mostly interested in a subsystem of the
total one (we call it just “system” instead of “total system”). Therefore, the whole system is divided into our system of
interest and an environment. The goal of Open Quantum Theory is to infer the equations of motion of the reduced
system from the equation of motion of the total system. For practical purposes, the reduced equations of motion
should be easier to solve than the full dynamics of the total system. Because of this requirement, several approximations
are usually made in the derivation of the reduced dynamics.

FIG. 1: A total system divided into the system of interest, “System”, and the environment.

∗ manzano@onsager.ugr.es
1 The same happens with closed classical systems.

One particularly interesting case of study is the dynamics of a system connected to several baths modelled by
a Markovian interaction. In this case the most general quantum dynamics is generated by the Lindblad equation
(also called the Gorini-Kossakowski-Sudarshan-Lindblad equation) [4, 5]. It is difficult to overemphasize the importance
of this Master Equation. It plays an important role in fields such as quantum optics [1, 6], condensed matter [7–10], atomic
physics [11, 12], quantum information [13, 14], decoherence [15, 16], and quantum biology [17–19].
The purpose of this paper is to provide basic knowledge about the Lindblad Master Equation. In Section II,
the mathematical requirements are introduced, while in Section III there is a brief review of the quantum mechanical
concepts that are required to understand the paper. Section IV includes a description of a mathematical framework,
the Fock-Liouville space, that is especially useful for this problem. In Section V, we define the concept of
CPT-maps, derive the Lindblad Master Equation from two different approaches, and discuss several properties of
the equation. Finally, Section VI is devoted to the resolution of the master equation using different methods. To
illustrate the techniques for solving the Lindblad equation, an example consisting of a two-level system with decay
is analysed throughout the paper, illustrating the content of every section. The problems proposed are solved using Mathematica
notebooks that can be found at [20].

II. MATHEMATICAL BASIS

The primary mathematical tool in quantum mechanics is the theory of Hilbert spaces. This mathematical framework
allows extending many results from finite linear vector spaces to infinite ones. In any case, this tutorial deals only
with finite systems and, therefore, the expressions ‘Hilbert space’ and ‘linear space’ are equivalent. We assume that
the reader is skilled in operating in Hilbert spaces. To deepen one's knowledge of Hilbert spaces we recommend the book
by Debnath and Mikusiński [21]. If the reader needs a brief review of the main concepts required for understanding
this paper, we may recommend Nielsen and Chuang’s Quantum Computing book [22]. Some basic knowledge of
calculus, such as integration, differentiation, and the solution of simple differential equations, is also required.
To help the reader, we have made a glossary of the most used mathematical terms. It can also be used as a checklist
of terms the reader should be familiar with.
Glossary:
• H represents a Hilbert space, usually the space of pure states of a system.
• |ψ⟩ ∈ H represents a vector of the Hilbert space H (a column vector).
• ⟨ψ| represents a vector of the dual Hilbert space of H (a row vector).
• ⟨ψ|φ⟩ ∈ C is the scalar product of the vectors |ψ⟩ and |φ⟩.
• ∥|ψ⟩∥ is the norm of the vector |ψ⟩, ∥|ψ⟩∥ ≡ √⟨ψ|ψ⟩.
• B(H) represents the space of bounded operators acting on the Hilbert space, B : H → H.
• 1_H ∈ B(H) is the identity operator of the Hilbert space H, s.t. 1_H |ψ⟩ = |ψ⟩, ∀|ψ⟩ ∈ H.
• |ψ⟩⟨φ| ∈ B(H) is the operator such that (|ψ⟩⟨φ|) |ϕ⟩ = ⟨φ|ϕ⟩ |ψ⟩, ∀|ϕ⟩ ∈ H.
• O† ∈ B(H) is the Hermitian conjugate of the operator O ∈ B(H).
• U ∈ B(H) is a unitary operator iff U U† = U†U = 1.
• H ∈ B(H) is a Hermitian operator iff H = H†.
• A ∈ B(H) is a positive operator (A > 0) iff ⟨φ|A|φ⟩ ≥ 0, ∀|φ⟩ ∈ H.
• P ∈ B(H) is a projector iff P P = P.
• Tr[B] represents the trace of the operator B.
• ρ(H) represents the space of density matrices, meaning the space of positive bounded operators acting on H with
trace 1.
• |ρ⟩⟩ is a vector in the Fock-Liouville space.
• ⟨⟨A|B⟩⟩ = Tr[A†B] is the scalar product of operators A, B ∈ B(H) in the Fock-Liouville space.
• L̃ is the matrix representation of a superoperator in the Fock-Liouville space.



III. (VERY SHORT) INTRODUCTION TO QUANTUM MECHANICS

The purpose of this chapter is to refresh the main concepts of quantum mechanics necessary to understand the
Lindblad Master Equation. Of course, this is NOT a full quantum mechanics course. If a reader has no background
in this field, just reading this chapter would be insufficient to understand the rest of this tutorial. Therefore,
if the reader is unsure of his/her capacities, we recommend first going through a quantum mechanics course or
carefully reading an introductory book. There are many great quantum mechanics books on the market. For beginners,
we recommend Sakurai’s book [23] or Nielsen and Chuang’s Quantum Computing book [22]. For more advanced
students looking for a solid mathematical description of quantum mechanics methods, we recommend Galindo and
Pascual [24]. Finally, for a more philosophical discussion, the reader may consult Peres’ book [25].
We start by stating the quantum mechanics postulates that we need to understand the derivation and application of
the Lindblad Master Equation. The first postulate is related to the concept of a quantum state.

Postulate 1 Associated to any isolated physical system, there is a complex Hilbert space H, known as the state space
of the system. The state of the system is entirely described by a state vector, which is a unit vector of the Hilbert space
(|ψ⟩ ∈ H).

As quantum mechanics is a general theory (or a set of theories), it does not tell us which is the proper Hilbert
space for each system. This is usually decided system by system. A natural question to ask is if there is a one-to-one
correspondence between unit vectors and physical states, meaning whether every unit vector corresponds to a physical
state. This is resolved by the following corollary, which is a primary ingredient of quantum computation theory (see
Ref. [22], Chapter 7).

Corollary 1 All unit vectors of a finite Hilbert space correspond to possible physical states of a system.

Unit vectors are also called pure states. If we know the pure state of a system, we have all physical information
about it, and we can calculate the probabilistic outcomes of any potential measurement (see the next postulate). This
is a very improbable situation as experimental settings are not perfect, and in most cases, we have only imperfect
information about the state. Most generally, we may know that a quantum system can be in one state of a set {|ψ_i⟩}
with probabilities p_i. Therefore, our knowledge of the system is given by an ensemble of pure states described by the
set {|ψ_i⟩, p_i}. If more than one p_i is different from zero the state is not pure anymore, and it is called a mixed state.
The mathematical tool that describes our knowledge of the system, in this case, is the density operator (or density
matrix),

ρ ≡ Σ_i p_i |ψ_i⟩⟨ψ_i|.    (1)

Density matrices are bounded operators that fulfil two mathematical conditions:
1. A density matrix ρ has unit trace (Tr[ρ] = 1).
2. A density matrix is a positive matrix, ρ > 0.
Any operator fulfilling these two properties is considered a density operator. It can be proved trivially that density
matrices are also Hermitian.
If we are given a density matrix, it is easy to verify if it corresponds to a pure or a mixed state. For pure states, and
only for them, Tr[ρ²] = Tr[ρ] = 1. Therefore, if Tr[ρ²] < 1 the system is mixed. The quantity Tr[ρ²] is called the
purity of the state, and it fulfils the bounds 1/d ≤ Tr[ρ²] ≤ 1, with d the dimension of the Hilbert space.
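This bound is easy to check numerically. Below is a minimal sketch in Python/NumPy (the Mathematica notebooks of Ref. [20] are the reference implementation; the particular states used here are arbitrary choices made for the example).

import numpy as np

# Pure state |+> = (|0> + |1>)/sqrt(2), written as a density matrix
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# Maximally mixed state of a qubit
rho_mixed = np.eye(2) / 2

def purity(rho):
    """Tr[rho^2], equal to 1 only for pure states."""
    return np.trace(rho @ rho).real

print(purity(rho_pure))    # 1.0
print(purity(rho_mixed))   # 0.5 = 1/d for d = 2, the lower bound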
If we fix an arbitrary basis {|i⟩}_{i=1}^N of the Hilbert space, the density matrix in this basis is written as
ρ = Σ_{i,j=1}^N ρ_{i,j} |i⟩⟨j|, or

ρ = ⎛ ρ_00  ρ_01  ···  ρ_0N ⎞
    ⎜ ρ_10  ρ_11  ···  ρ_1N ⎟
    ⎜  ⋮     ⋮     ⋱    ⋮   ⎟
    ⎝ ρ_N0  ρ_N1  ···  ρ_NN ⎠ ,    (2)

where the diagonal elements are called populations (ρ_ii ∈ ℝ⁺₀ and Σ_i ρ_ii = 1), while the off-diagonal elements are
called coherences (ρ_ij ∈ C and ρ_ij = ρ*_ji). Note that this notation is basis-dependent.


Box 1. State of a two-level system (qubit)


The Hilbert space of a two-level system is just the two-dimension lineal space H2 . Examples of
this kind of system are 12 -spins and two-level atoms. We can define a basis of it by the orthonormal
vectors: {|0i, |1i}. A pure state of the system would be any unit vector of H2 . It can always be
2 2
expressed as a |ψi = a|0i + b|1i with a, b ∈ C s. t. |a| + |b| = 1.
A mixed state is therefore represented by a positive unit trace operator ρ ∈ O(H2 ).

 
ρ00 ρ01
ρ= = ρ00 |0ih0| + ρ01 |0ih1| + ρ10 |1ih0| + ρ11 |1ih1|, (3)
ρ10 ρ11

ant it should fulfil ρ00 + ρ11 = 1 and ρ01 = ρ∗10 .

Once we know the state of a system, it is natural to ask about the possible outcomes of experiments (see Ref. [23],
Section 1.4).

Postulate 2 All possible measurements in a quantum system are described by a Hermitian operator or observable.
Due to the Spectral Theorem we know that any observable O has a spectral decomposition of the form²

O = Σ_i a_i |a_i⟩⟨a_i|,    (4)

with a_i ∈ R the eigenvalues of the observable and |a_i⟩ their corresponding eigenvectors. The probability of obtaining
the result a_i when measuring the property described by the observable O in a state |ψ⟩ is given by

P(a_i) = |⟨ψ|a_i⟩|².    (5)

After the measurement we obtain the state |a_i⟩ if the outcome a_i was measured. This is called the post-measurement
state.

This postulate allows us to calculate the possible outcomes of a measurement, the probability of these outcomes, as well as
the post-measurement state. A measurement usually changes the state, as the state can only remain unchanged if it was
already in an eigenstate of the observable.
It is possible to calculate the expectation value of the outcome of a measurement defined by the operator O in a state
|ψ⟩ by just applying the simple formula

⟨O⟩ = ⟨ψ|O|ψ⟩.    (6)


With a little algebra we can translate this postulate to mixed states. In this case, the probability of obtaining an
output a_i that corresponds to an eigenvector |a_i⟩ is

P(a_i) = Tr[|a_i⟩⟨a_i| ρ],    (7)

and the expectation value of the operator O is

⟨O⟩ = Tr[Oρ].    (8)

² For simplicity, we assume a non-degenerate spectrum.



Box 2. Measurement in a two-level system.

A possible test to perform in our minimal model is to measure the energetic state of the system, assuming
that both states have different energies. The observable corresponding to this measurement
would be

H = E₀ |0⟩⟨0| + E₁ |1⟩⟨1|.    (9)

This operator has two eigenvalues {E₀, E₁} with two corresponding eigenvectors {|0⟩, |1⟩}.
If we have a pure state |ψ⟩ = a|0⟩ + b|1⟩ the probability of measuring the energy E₀ would be
P(E₀) = |⟨0|ψ⟩|² = |a|². The probability of finding E₁ would be P(E₁) = |⟨1|ψ⟩|² = |b|². The
expected value of the measurement is ⟨H⟩ = E₀|a|² + E₁|b|².
In the more general case of having a mixed state ρ = ρ_00|0⟩⟨0| + ρ_01|0⟩⟨1| + ρ_10|1⟩⟨0| + ρ_11|1⟩⟨1| the
probability of finding the ground state energy is P(E₀) = Tr[|0⟩⟨0|ρ] = ρ_00, and the expected value
of the energy would be ⟨H⟩ = Tr[Hρ] = E₀ρ_00 + E₁ρ_11.
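For readers who prefer to check these formulas numerically, here is a minimal sketch in Python/NumPy of Eqs. (7) and (8) applied to the observable of Box 2 (the numerical values of E₀, E₁ and of the mixed state are arbitrary assumptions made for this example).

import numpy as np

E0, E1 = 0.0, 1.0
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
H = E0 * np.outer(ket0, ket0) + E1 * np.outer(ket1, ket1)

# An example mixed state: 70% ground, 30% excited, with some coherence
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

P0 = np.trace(np.outer(ket0, ket0) @ rho).real   # Eq. (7): P(E0) = rho_00
avg_H = np.trace(H @ rho).real                   # Eq. (8): <H> = E0*rho_00 + E1*rho_11

print(P0, avg_H)                                 # 0.7 and 0.3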

Another natural question to ask is how quantum systems evolve. The time-evolution of a pure state of a closed
quantum system is given by the Schrödinger equation (see [24], Section 2.9).

Postulate 3 Time evolution of a pure state of a closed quantum system is given by the Schrödinger equation

d/dt |ψ(t)⟩ = −(i/ℏ) H |ψ(t)⟩,    (10)

where H is the Hamiltonian of the system, a Hermitian operator acting on the Hilbert space of the system states (from
now on we avoid including Planck's constant by selecting units such that ℏ = 1).

The Hamiltonian of a system is the operator corresponding to its energy, and obtaining it can be non-trivial.
The Schrödinger equation can be formally solved in the following way: if at t = 0 the state of the system is given by |ψ(0)⟩,
at time t it will be

|ψ(t)⟩ = e^{−iHt} |ψ(0)⟩.    (11)

As H is a Hermitian operator, the operator U = e^{−iHt} is unitary. This gives us another way of phrasing Postulate 3.

Postulate 3’ The evolution of a closed system is given by a unitary operator of the Hilbert space of the system

|ψ(t)⟩ = U |ψ(0)⟩,    (12)

with U ∈ B(H) s.t. U U† = U†U = 1.

It is easy to prove that unitary operators preserve the norm of vectors and, therefore, transform pure states into pure
states. As we did with the state of a system, it is reasonable to wonder if any unitary operator corresponds to the
evolution of a real physical system. The answer is yes.

Lemma 1 All unitary evolutions of states belonging to a finite Hilbert space can be implemented in several physical
platforms, such as photons and cold atoms.

The proof of this lemma can be found at [22].



The time evolution of a mixed state can be calculated just by combining Eqs. (10) and (1), giving the von Neumann
equation

ρ̇ = −i[H, ρ] ≡ Lρ,    (13)

where we have used the commutator [A, B] = AB − BA, and L is the so-called Liouvillian superoperator.
It is easy to prove that the Hamiltonian dynamics does not change the purity of a system,

d/dt Tr[ρ²] = Tr[ dρ²/dt ] = Tr[2ρρ̇] = −2i Tr[ρ(Hρ − ρH)] = 0,    (14)

where we have used the cyclic property of the trace. This result illustrates that the degree of mixing of a state does not
change under unitary quantum evolution.

Box 3. Time evolution of a two-level system.

The evolution of our isolated two-level system is described by its Hamiltonian

H_free = E₀ |0⟩⟨0| + E₁ |1⟩⟨1|.    (15)

As the states |0⟩ and |1⟩ are eigenstates of the Hamiltonian, if at t = 0 the atom is in the excited state
|ψ(0)⟩ = |1⟩, after a time t the state will be |ψ(t)⟩ = e^{−iHt}|1⟩ = e^{−iE₁t}|1⟩.
As the system was already in an eigenvector of the Hamiltonian, its time evolution consists only in
adding a phase to the state, without changing its physical properties. (If an excited state does not
change, why do atoms decay?) Without loss of generality we can fix the energy of the ground
state to zero, obtaining

H_free = E |1⟩⟨1|,    (16)

with E ≡ E₁. To make the model more interesting we can include a driving that coherently
switches between both states. The total Hamiltonian would then be

H = E |1⟩⟨1| + Ω (|0⟩⟨1| + |1⟩⟨0|),    (17)

where Ω is the driving frequency. By using the von Neumann equation (13) we can calculate
the populations (ρ_00, ρ_11) as a function of time. The system is then driven between the states, and
the populations present Rabi oscillations, as shown in Fig. 2.

FIG. 2: Population dynamics under purely coherent dynamics (parameters are Ω = 1, E = 1). The
blue line represents ρ_11 and the orange one ρ_00.
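The Rabi oscillations of Fig. 2 can be reproduced in a few lines. The sketch below (plain Python/NumPy, offered as an alternative to the Mathematica notebooks of Ref. [20]) evolves the initially excited state under the Hamiltonian (17) using the exact propagator of Eq. (11) and prints the populations.

import numpy as np
from scipy.linalg import expm

E, Omega = 1.0, 1.0
H = np.array([[0.0, Omega], [Omega, E]], dtype=complex)   # Eq. (17) in the {|0>, |1>} basis

rho0 = np.diag([0.0, 1.0]).astype(complex)                # atom initially excited, rho_11 = 1

for t in np.linspace(0.0, 5.0, 6):
    U = expm(-1j * H * t)                                 # Eq. (11): U = exp(-iHt)
    rho_t = U @ rho0 @ U.conj().T                         # unitary evolution of the mixed state
    print(f"t = {t:.1f}  rho_00 = {rho_t[0, 0].real:.3f}  rho_11 = {rho_t[1, 1].real:.3f}")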

Finally, as we are interested in composite quantum systems, we need to postulate how to work with them.

Postulate 4 The state space of a composite physical system, composed of N subsystems, is the tensor product of the
state spaces of its components, H = H₁ ⊗ H₂ ⊗ · · · ⊗ H_N. The state of the composite physical system is given by a
unit vector of H. Moreover, if each subsystem belonging to H_i is prepared in the state |ψ_i⟩ the total state is given by
|ψ⟩ = |ψ₁⟩ ⊗ |ψ₂⟩ ⊗ · · · ⊗ |ψ_N⟩.

The symbol ⊗ represents the tensor product of Hilbert spaces, vectors, and operators. If we have a composite mixed
state where each component is prepared in the state ρ_i, the total state is given by ρ = ρ₁ ⊗ ρ₂ ⊗ · · · ⊗ ρ_N.
States that can be expressed in the simple form |ψ⟩ = |ψ₁⟩ ⊗ |ψ₂⟩, in some specific basis, are very particular and they
are called separable states. (For this discussion, we use a bipartite system as an example. The extension to a general
multipartite system is straightforward.) In general, an arbitrary state has to be described as |ψ⟩ = Σ_{i,j} c_{ij} |ψ_i⟩ ⊗ |ψ_j⟩
(or ρ = Σ_{i,j} p_{ij} ρ_i ⊗ ρ_j for mixed states). Non-separable states are called entangled states.
Now we know how to compose systems, but we may also be interested in going the other way around. If we have
a system belonging to a bipartite Hilbert space of the form H = H_a ⊗ H_b, we can be interested in studying some
properties of the subsystem corresponding to one of the subspaces. To do so, we define the reduced density matrix. If
the state of our system is described by a density matrix ρ, the reduced density operator of the subsystem a is defined
by the operator

ρa ≡ Trb [ρ] , (18)

where Tr_b is the partial trace over the subspace b, defined as [22]

Tr_b [ Σ_{i,j,k,l} |a_i⟩⟨a_j| ⊗ |b_k⟩⟨b_l| ] ≡ Σ_{i,j} |a_i⟩⟨a_j| Tr[ Σ_{k,l} |b_k⟩⟨b_l| ].    (19)

The concepts of reduced density matrix and partial trace are essential in the study of open quantum systems. If we
want to calculate the equations of motion of a system affected by an environment, we should trace out this environment
and deal only with the reduced density matrix of the system. This is the main idea of the theory of open quantum
systems.

Box 4. Two two-level atoms

If we have two two-level systems, the total Hilbert space is given by
H = H₂ ⊗ H₂. A basis of this Hilbert space would be given by the set
{|00⟩ ≡ |0⟩₁ ⊗ |0⟩₂, |01⟩ ≡ |0⟩₁ ⊗ |1⟩₂, |10⟩ ≡ |1⟩₁ ⊗ |0⟩₂, |11⟩ ≡ |1⟩₁ ⊗ |1⟩₂}. If both systems
are in their ground state, we can describe the total state by the separable vector

|ψ⟩_G = |00⟩.    (20)

A more complex, but still separable, state can be formed if both systems are in superposition,

|ψ⟩_S = (1/√2)(|0⟩₁ + |1⟩₁) ⊗ (1/√2)(|0⟩₂ + |1⟩₂)
      = ½ (|00⟩ + |10⟩ + |01⟩ + |11⟩).    (21)

An entangled state would be

|ψ⟩_E = (1/√2)(|00⟩ + |11⟩).    (22)

This state cannot be separated into a direct product of the two subsystems. If we want to obtain a
reduced description of subsystem 1 (or 2) we have to use the partial trace. To do so, we need first
to calculate the density matrix corresponding to the pure state |ψ⟩_E,

ρ_E = |ψ⟩⟨ψ|_E = ½ (|00⟩⟨00| + |00⟩⟨11| + |11⟩⟨00| + |11⟩⟨11|).    (23)

We can now calculate the reduced density matrix of subsystem 1 by using the partial trace,

ρ_E^{(1)} = ⟨0|₂ ρ_E |0⟩₂ + ⟨1|₂ ρ_E |1⟩₂ = ½ (|0⟩⟨0|₁ + |1⟩⟨1|₁).    (24)

From this reduced density matrix, we can calculate all the measurement statistics of subsystem 1.
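The partial trace of Box 4 is also easy to verify numerically. A minimal sketch in Python/NumPy, assuming the ordering {|00⟩, |01⟩, |10⟩, |11⟩} of the composite basis:

import numpy as np

# Entangled state |psi>_E = (|00> + |11>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                      # Eq. (23)

# Partial trace over subsystem 2: reshape to indices (i, k, j, l) and sum over k = l
rho_1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(rho_1)   # [[0.5, 0], [0, 0.5]]  -> Eq. (24), the maximally mixed state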

IV. THE FOCK-LIOUVILLE HILBERT SPACE. THE LIOUVILLE SUPEROPERATOR

In this section, we review a useful framework for both analytical and numerical calculations. It is clear that some
linear combinations of density matrices are valid density matrices (as long as they preserve positivity and trace 1).
Because of that, we can create a Hilbert space of density matrices just by defining a scalar product. This is clear
for finite systems because, in this case, a linear space with a scalar product and a Hilbert space are the same thing.
It also happens to be true for infinite spaces. This allows us to define a linear space of matrices, converting the matrices
effectively into vectors (ρ → |ρ⟩⟩). This is called the Fock-Liouville space (FLS). The usual scalar product of matrices φ and ρ
is defined as ⟨⟨φ|ρ⟩⟩ ≡ Tr[φ†ρ]. The Liouville superoperator from Eq. (13) is now an operator acting on the Hilbert
space of density matrices. The main utility of the FLS is that it allows a matrix representation of the evolution operator.

Box 5. Time evolution of a two-level system.

The density matrix of our system (3) can be expressed in the FLS as

|ρ⟩⟩ = ⎛ ρ_00 ⎞
       ⎜ ρ_01 ⎟
       ⎜ ρ_10 ⎟
       ⎝ ρ_11 ⎠ .    (25)

The time evolution of a mixed state is given by the von Neumann equation (13). The Liouvillian
superoperator can now be expressed as a matrix

L̃ = ⎛   0    iΩ   −iΩ    0  ⎞
    ⎜  iΩ    iE    0   −iΩ  ⎟
    ⎜ −iΩ    0   −iE    iΩ  ⎟
    ⎝   0   −iΩ   iΩ     0  ⎠ ,    (26)

where each row is calculated just by observing the output of the operation −i[H, ρ] in the compu-
tational basis of the density matrix space. The time evolution of the system now corresponds to
the matrix equation d|ρ⟩⟩/dt = L̃|ρ⟩⟩, which in matrix notation reads

⎛ ρ̇_00 ⎞   ⎛   0    iΩ   −iΩ    0  ⎞ ⎛ ρ_00 ⎞
⎜ ρ̇_01 ⎟ = ⎜  iΩ    iE    0   −iΩ  ⎟ ⎜ ρ_01 ⎟
⎜ ρ̇_10 ⎟   ⎜ −iΩ    0   −iE    iΩ  ⎟ ⎜ ρ_10 ⎟
⎝ ρ̇_11 ⎠   ⎝   0   −iΩ   iΩ     0  ⎠ ⎝ ρ_11 ⎠ .    (27)
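The matrix (26) does not have to be built by hand. For the row-major ordering (ρ_00, ρ_01, ρ_10, ρ_11) used above, vectorisation obeys vec(AρB) = (A ⊗ Bᵀ) vec(ρ), so the von Neumann Liouvillian is L̃ = −i(H ⊗ 1 − 1 ⊗ Hᵀ). Here is a minimal sketch of this construction in Python/NumPy (the identity used and the consistency check are standard linear algebra, not taken from Ref. [20]).

import numpy as np

E, Omega = 1.0, 1.0
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
I2 = np.eye(2)

# Row-major (C-order) vectorisation: vec(A rho B) = (A kron B^T) vec(rho)
L = -1j * (np.kron(H, I2) - np.kron(I2, H.T))
print(np.round(L, 3))          # reproduces the matrix of Eq. (26)

# Consistency check against -i[H, rho] for an example density matrix
rho = np.array([[0.6, 0.1 + 0.2j], [0.1 - 0.2j, 0.4]])
lhs = (L @ rho.reshape(-1)).reshape(2, 2)
rhs = -1j * (H @ rho - rho @ H)
print(np.allclose(lhs, rhs))   # True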

V. CPT-MAPS AND THE LINDBLAD MASTER EQUATION.

A. Completely positive maps

The problem we want to study is to find the most general Markovian transformation between density matrices.
Until now, we have seen that quantum systems can evolve in two ways: by a coherent evolution (Postulate 3) and
by collapsing after a measurement (Postulate 2). Many efforts have been made to unify these two ways of evolving
[16], without giving a definite answer so far. It is reasonable to ask what is the most general transformation that can
be performed on a quantum system, and what is the dynamical equation that describes this transformation.
We are looking for maps that transform density matrices into density matrices. We define ρ(H) as the space of all
density matrices in the Hilbert space H. Therefore, we are looking for a map of this space onto itself, V : ρ(H) → ρ(H).
To ensure that the output of the map is a density matrix, it should fulfil the following properties:

• Trace preserving. Tr [VA] = Tr [A] , ∀A ∈ O(H).


• Completely positive (see below).

Any map that fulfils these two properties is called a completely positive and trace-preserving map (CPT-map). The
first property is quite apparent, and it does not require more thinking. The second one is a little more complicated,
and it requires an intermediate definition.

Definition 1 A map V is positive iff ∀A ∈ B(H) s.t. A ≥ 0 ⇒ VA ≥ 0.

This definition is based on the idea that, as density matrices are positive, any physical map should transform positive
matrices into positive matrices. One could naively think that this condition must be sufficient to guarantee the
physical validity of a map. It is not. As we know, there exist composite systems, and our density matrix could be the
partial trace of a more complicated state. Because of that, we need to impose a more general condition.

Definition 2 A map V is completely positive iff ∀n ∈ N, V ⊗ 1n is positive.

To prove that not all positive maps are completely positive, we need a counterexample. A canonical example of an
operation that is positive but fails to be completely positive is matrix transposition. If we have a Bell state of the
form |ψ_B⟩ = (1/√2)(|01⟩ + |10⟩), its density matrix can be expressed as

ρ_B = ½ (|0⟩⟨0| ⊗ |1⟩⟨1| + |1⟩⟨1| ⊗ |0⟩⟨0| + |0⟩⟨1| ⊗ |1⟩⟨0| + |1⟩⟨0| ⊗ |0⟩⟨1|),    (28)
with a matrix representation

ρ_B = ½ [ ⎛ 1 0 ⎞ ⊗ ⎛ 0 0 ⎞ + ⎛ 0 0 ⎞ ⊗ ⎛ 1 0 ⎞ + ⎛ 0 1 ⎞ ⊗ ⎛ 0 0 ⎞ + ⎛ 0 0 ⎞ ⊗ ⎛ 0 1 ⎞ ].    (29)
          ⎝ 0 0 ⎠   ⎝ 0 1 ⎠   ⎝ 0 1 ⎠   ⎝ 0 0 ⎠   ⎝ 0 0 ⎠   ⎝ 1 0 ⎠   ⎝ 1 0 ⎠   ⎝ 0 0 ⎠

A little algebra shows that the full form of this matrix is

ρ_B = ½ ⎛ 0 0 0 0 ⎞
        ⎜ 0 1 1 0 ⎟
        ⎜ 0 1 1 0 ⎟
        ⎝ 0 0 0 0 ⎠ ,    (30)

and it is positive.

FIG. 3: A total system (belonging to a Hilbert space HT , with states described by density matrices ρT , and with
dynamics determined by a Hamiltonian HT ) divided into the system of interest, ‘System’, and the environment.

It is easy to check that the transformation 1 ⊗ T₂, meaning that we transpose the matrix of the second subsystem,
leads to a non-positive matrix:

(1 ⊗ T₂) ρ_B = ½ [ ⎛ 1 0 ⎞ ⊗ ⎛ 0 0 ⎞ + ⎛ 0 0 ⎞ ⊗ ⎛ 1 0 ⎞ + ⎛ 0 1 ⎞ ⊗ ⎛ 0 1 ⎞ + ⎛ 0 0 ⎞ ⊗ ⎛ 0 0 ⎞ ].    (31)
                   ⎝ 0 0 ⎠   ⎝ 0 1 ⎠   ⎝ 0 1 ⎠   ⎝ 0 0 ⎠   ⎝ 0 0 ⎠   ⎝ 0 0 ⎠   ⎝ 1 0 ⎠   ⎝ 1 0 ⎠

The total matrix is

(1 ⊗ T₂) ρ_B = ½ ⎛ 0 0 0 1 ⎞
                 ⎜ 0 1 0 0 ⎟
                 ⎜ 0 0 1 0 ⎟
                 ⎝ 1 0 0 0 ⎠ ,    (32)

which has −1/2 as an eigenvalue. This example illustrates how the non-separability of quantum mechanics restricts the
operations we can perform on a subsystem. By imposing these two conditions, we can derive a unique master equation
as the generator of any possible Markovian CPT-map.
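This counterexample is easy to reproduce numerically. A minimal sketch in Python/NumPy, where the partial transposition is performed by swapping the column index of the second subsystem:

import numpy as np

# Bell state |psi_B> = (|01> + |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)                                                # Eq. (30), a positive matrix

# Transpose only the second subsystem: rho[i,k,j,l] -> rho[i,l,j,k]
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)    # Eq. (32)

print(np.linalg.eigvalsh(rho).round(3))      # [0, 0, 0, 1]
print(np.linalg.eigvalsh(rho_pt).round(3))   # [-0.5, 0.5, 0.5, 0.5] -> not positive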

B. Derivation of the Lindblad Equation from microscopic dynamics

The most common derivation of the Lindblad master equation is based on Open Quantum Theory. The Lindblad
equation is then an effective equation of motion for a subsystem that belongs to a more complicated system. This
derivation can be found in several textbooks like Breuer and Petruccione’s [2] as well as Gardiner and Zoller’s [1].
Here, we follow the derivation presented in Ref. [26]. Our starting point is displayed in Figure 3. A total system
belonging to a Hilbert space H_T is divided into our system of interest, belonging to a Hilbert space H, and the
environment living in H_E.
The evolution of the total system is given by the von Neumann equation (13),

ρ̇_T(t) = −i[H_T, ρ_T(t)].    (33)



As we are interested in the dynamics of the system without the environment, we trace over the environment degrees
of freedom to obtain the reduced density matrix of the system, ρ(t) = Tr_E[ρ_T]. To separate the effect of the total
Hamiltonian on the system and the environment we divide it in the form H_T = H ⊗ 1_E + 1_S ⊗ H_E + αH_I, with
H ∈ B(H), H_E ∈ B(H_E), and H_I ∈ B(H_T), and with α a measure of the strength of the system-environment interaction.
Therefore, we have a part acting on the system, a part acting on the environment, and the interaction term. Without
loss of generality, the interaction term can be decomposed in the following way,

H_I = Σ_i S_i ⊗ E_i,    (34)

with S_i ∈ B(H) and E_i ∈ B(H_E)³.


To better describe the dynamics of the system, it is useful to work in the interaction picture (see Ref. [24] for
a detailed explanation about Schrödinger, Heisenberg, and interaction pictures). In the interaction picture, density
matrices evolve with time due to the interaction Hamiltonian, while operators evolve with the system and environment
Hamiltonian. An arbitrary operator O ∈ B(H_T) is represented in this picture by the time-dependent operator Ô(t),
and its time evolution is

Ô(t) = e^{i(H+H_E)t} O e^{−i(H+H_E)t}.    (35)

The time evolution of the total density matrix is given in this picture by

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(t)].    (36)

This equation can be easily integrated to give

ρ̂_T(t) = ρ̂_T(0) − iα ∫₀ᵗ ds [Ĥ_I(s), ρ̂_T(s)].    (37)

By this formula, we can obtain the exact solution, but it still has the complication of calculating an integral in the
total Hilbert space. It is also troublesome that the state ρ̂_T(t) depends on the integral of the density matrix over
all previous times. To avoid that we can introduce Eq. (37) into Eq. (36), giving

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(s)]].    (38)

By applying this method one more time we obtain

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(t)]] + O(α³).    (39)

After this substitution, the integration over the previous states of the system is included only in the terms that are O(α³)
or higher. At this moment, we perform our first approximation by considering that the strength of the interaction
between the system and the environment is small. Therefore, we can neglect the high orders in Eq. (39). Under this
approximation we have

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(t)]].    (40)

We are interested in finding an equation of motion for ρ, so we trace over the environment degrees of freedom

³ From now on we will not write the identity operators of the Hamiltonian parts explicitly when they can be inferred from the context.

dρ̂(t)/dt = Tr_E[ dρ̂_T(t)/dt ] = −iα Tr_E[Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds Tr_E[Ĥ_I(t), [Ĥ_I(s), ρ̂_T(t)]].    (41)

This is not a closed time-evolution equation for ρ̂(t), because the time derivative still depends on the full density
matrix ρ̂_T(t). To proceed, we need to make two more assumptions. First, we assume that at t = 0 the system
and the environment are in a separable state of the form ρ_T(0) = ρ(0) ⊗ ρ_E(0). This means that there are no
correlations between the system and the environment. This may be the case if the system and the environment have
not interacted at previous times or if the correlations between them are short-lived. Second, we assume that the
initial state of the environment is thermal, meaning that it is described by a density matrix of the form ρ_E(0) =
exp(−H_E/T) / Tr[exp(−H_E/T)], with T the temperature and taking the Boltzmann constant as k_B = 1. By using
these assumptions, and the expansion of H_I (34), we can calculate an expression for the first term on the r.h.s. of
Eq. (41),

Tr_E[Ĥ_I(t), ρ̂_T(0)] = Σ_i ( Ŝ_i(t) ρ̂(0) Tr_E[Ê_i(t) ρ̂_E(0)] − ρ̂(0) Ŝ_i(t) Tr_E[ρ̂_E(0) Ê_i(t)] ).    (42)

To calculate the explicit value of this term, we may use that ⟨E_i⟩ = Tr[E_i ρ_E(0)] = 0 for all values of i. This
looks like a strong assumption, but it is not. If our total Hamiltonian does not fulfil it, we can always rewrite it
as H_T = (H + α Σ_i ⟨E_i⟩ S_i) + H_E + αH_I′, with H_I′ = Σ_i S_i ⊗ (E_i − ⟨E_i⟩). It is clear that now ⟨E_i′⟩ = 0, with
E_i′ = E_i − ⟨E_i⟩, and the system Hamiltonian is changed just by the addition of an energy shift that does not affect the
system dynamics. Because of that, we can assume that ⟨E_i⟩ = 0 for all i. Using the cyclic property of the trace, it is
easy to prove that the term of Eq. (42) is equal to zero, and the equation of motion (41) reduces to

dρ̂(t)/dt = −α² ∫₀ᵗ ds Tr_E[Ĥ_I(t), [Ĥ_I(s), ρ̂_T(t)]].    (43)

This equation still includes the entire state of the system and environment. To unravel the system from the envi-
ronment, we have to make a more restrictive assumption. As we are working in the weak coupling regime, we may
suppose that the system and the environment remain uncorrelated during the whole time evolution. Of course, this is
only an approximation. Due to the interaction Hamiltonian, some correlations between system and environment are
expected to appear. On the other hand, we may assume that the timescales of correlation (τ_corr) and relaxation of the
environment (τ_rel) are much smaller than the typical system timescale (τ_sys), as the coupling strength is very small
(α ≪ 1). Therefore, under this strong assumption, we can take the environment state to be always thermal and
decoupled from the system state, ρ̂_T(t) = ρ̂(t) ⊗ ρ̂_E(0). Eq. (43) then transforms into

dρ̂(t)/dt = −α² ∫₀ᵗ ds Tr_E[Ĥ_I(t), [Ĥ_I(s), ρ̂(t) ⊗ ρ̂_E(0)]].    (44)

The equation of motion now involves only the system and is local in time. It is still non-Markovian, as it depends
on the initial state preparation of the system. We can obtain a Markovian equation by realising that the kernel of the
integral decays sufficiently fast, so that we can extend the upper limit of the integration to infinity with no real change in the outcome.
By doing so, and by changing the integration variable to s → t − s, we obtain the famous Redfield equation [?],

dρ̂(t)/dt = −α² ∫₀^∞ ds Tr_E[Ĥ_I(t), [Ĥ_I(t − s), ρ̂(t) ⊗ ρ̂_E(0)]].    (45)

It is known that this equation does not guarantee the positivity of the map, and it sometimes gives rise to density
matrices that are non-positive. To ensure complete positivity, we need to perform one further approximation, the
rotating wave approximation. To do so, we use the spectrum of the superoperator H̃A ≡ [H, A], ∀A ∈ B(H).
The eigenvectors of this superoperator form a complete basis of the space B(H) and, therefore, we can expand the
system operators S_i from Eq. (34) in this basis,

S_i = Σ_ω S_i(ω),    (46)

where the operators S_i(ω) fulfil

[H, S_i(ω)] = −ω S_i(ω),    (47)

with ω the eigenvalues of H̃. Taking the Hermitian conjugate we also obtain

[H, S_i†(ω)] = ω S_i†(ω).    (48)

To apply this decomposition, we need to change back to the Schrödinger picture the part of the interaction
Hamiltonian acting on the system’s Hilbert space. This is done by the expression Ŝ_k(t) = e^{iHt} S_k e^{−iHt}. By using the
eigen-expansion (46) we arrive at

Ĥ_I(t) = Σ_{k,ω} e^{−iωt} S_k(ω) ⊗ Ê_k(t) = Σ_{k,ω} e^{iωt} S_k†(ω) ⊗ Ê_k†(t).    (49)

To combine this decomposition with the Redfield equation (45), we first expand the commutators,

dρ̂(t)/dt = −α² Tr_E [ ∫₀^∞ ds Ĥ_I(t) Ĥ_I(t − s) ρ̂(t) ⊗ ρ̂_E(0) − ∫₀^∞ ds Ĥ_I(t) ρ̂(t) ⊗ ρ̂_E(0) Ĥ_I(t − s)
            − ∫₀^∞ ds Ĥ_I(t − s) ρ̂(t) ⊗ ρ̂_E(0) Ĥ_I(t) + ∫₀^∞ ds ρ̂(t) ⊗ ρ̂_E(0) Ĥ_I(t − s) Ĥ_I(t) ].    (50)

We now apply the eigenvalue decomposition in terms of S_k(ω) for Ĥ_I(t − s) and in terms of S_k†(ω′) for Ĥ_I(t). By
using the permutation property of the trace and the fact that [H_E, ρ_E(0)] = 0, and after some non-trivial algebra, we
obtain

dρ̂(t)/dt = Σ_{ω,ω′} Σ_{k,l} ( e^{i(ω′−ω)t} Γ_{kl}(ω) [S_l(ω) ρ̂(t), S_k†(ω′)] + e^{i(ω−ω′)t} Γ*_{lk}(ω′) [S_l(ω), ρ̂(t) S_k†(ω′)] ),    (51)

where the effect of the environment has been absorbed into the factors

Γ_{kl}(ω) ≡ ∫₀^∞ ds e^{iωs} Tr_E[Ê_k†(t) Ê_l(t − s) ρ_E(0)],    (52)

where we are writing the environment operators of the interaction Hamiltonian in the interaction picture (Ê_l(t) =
e^{iH_E t} E_l e^{−iH_E t}). At this point, we can already perform the rotating wave approximation. By considering the time
dependence in Eq. (51), we conclude that the terms with |ω − ω′| ≫ α² will oscillate much faster than the typical
timescale of the system evolution. Therefore, they do not contribute to the evolution of the system. In the low-
coupling regime (α → 0) we can consider that only the resonant terms, ω = ω′, contribute to the dynamics and
remove all the others. By applying this approximation, Eq. (51) reduces to

dρ̂(t)/dt = Σ_ω Σ_{k,l} ( Γ_{kl}(ω) [S_l(ω) ρ̂(t), S_k†(ω)] + Γ*_{lk}(ω) [S_l(ω), ρ̂(t) S_k†(ω)] ).    (53)

To divide the dynamics into Hamiltonian and non-Hamiltonian parts we now decompose the coefficients Γ_{kl} into Hermitian
and non-Hermitian parts, Γ_{kl}(ω) = ½ γ_{kl}(ω) + i π_{kl}(ω), with

π_{kl}(ω) ≡ (−i/2) (Γ_{kl}(ω) − Γ*_{kl}(ω)),
γ_{kl}(ω) ≡ Γ_{kl}(ω) + Γ*_{kl}(ω) = ∫_{−∞}^{∞} ds e^{iωs} Tr[Ê_k†(s) E_l ρ̂_E(0)].    (54)

By these definitions we can separate the Hermitian and non-Hermitian parts of the dynamics, and we can transform
back to the Schrödinger picture,

ρ̇(t) = −i[H + H_Ls, ρ(t)] + Σ_ω Σ_{k,l} γ_{kl}(ω) ( S_l(ω) ρ(t) S_k†(ω) − ½ {S_k†(ω) S_l(ω), ρ(t)} ).    (55)

The Hamiltonian dynamics is now influenced by a term H_Ls = Σ_{ω,k,l} π_{kl}(ω) S_k†(ω) S_l(ω). This is usually called the Lamb
shift Hamiltonian and its role is to renormalise the system energy levels due to the interaction with the environment.
Eq. (55) is the first version of the Markovian Master Equation, but it is not in the Lindblad form yet.
It can be easily proved that the matrix formed by the coefficients γ_{kl}(ω) is positive, as they are the Fourier
transform of a positive function Tr[Ê_k†(s) E_l ρ̂_E(0)]. Therefore, this matrix can be diagonalised. This means that
we can find a unitary operator O s.t.

O γ(ω) O† = ⎛ d₁(ω)   0    ···    0    ⎞
            ⎜   0   d₂(ω)  ···    0    ⎟
            ⎜   ⋮     ⋮     ⋱     ⋮    ⎟
            ⎝   0     0    ···  d_N(ω) ⎠ .    (56)
We can now write the master equation in a diagonal form,

ρ̇(t) = −i[H + H_Ls, ρ(t)] + Σ_{i,ω} ( L_i(ω) ρ(t) L_i†(ω) − ½ {L_i†(ω) L_i(ω), ρ(t)} ) ≡ Lρ(t).    (57)

This is the celebrated Lindblad (or Lindblad-Gorini-Kossakowski-Sudarshan) Master Equation. In the simplest case,
there will be only one relevant frequency ω, and the equation can be further simplified to

ρ̇(t) = −i[H + H_Ls, ρ(t)] + Σ_i ( L_i ρ(t) L_i† − ½ {L_i† L_i, ρ(t)} ) ≡ Lρ(t).    (58)

The operators L_i are usually referred to as jump operators.
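As a quick numerical sanity check of the structure of Eq. (58), the sketch below builds the right-hand side for randomly generated jump operators and verifies that it preserves the trace and the Hermiticity of ρ, as any generator of CPT dynamics must (the Hamiltonian, jump operators and state here are random examples, not taken from the derivation above).

import numpy as np

rng = np.random.default_rng(0)
d = 3

def random_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return A + A.conj().T

H = random_herm(d)                                   # system (plus Lamb-shift) Hamiltonian
Ls = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(2)]  # jump operators

rho = random_herm(d)
rho = rho @ rho.conj().T                             # positive matrix ...
rho /= np.trace(rho)                                 # ... normalised to a density matrix

def lindblad_rhs(rho):
    """Right-hand side of Eq. (58)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        out += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return out

drho = lindblad_rhs(rho)
print(abs(np.trace(drho)) < 1e-12)                   # True: the trace of rho is conserved
print(np.allclose(drho, drho.conj().T))              # True: rho stays Hermitian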

C. Derivation of the Lindblad Equation as a CPT generator

The second way of deriving the Lindblad equation comes from the following question: What is the most general
(Markovian) way of mapping density matrices onto density matrices? This is usually the approach of quantum
information researchers, who look for general transformations of quantum systems. We analyse this problem following
mainly Ref. [28].
To start, we need to know what the form of a general CPT-map is.

Lemma 2 Any map V : B (H) → B (H) that can be written in the form Vρ = V † ρV with V ∈ B (H) is positive.

The proof of the lemma requires a little algebra and a known property of positive matrices.

Proof.
If ρ ≥ 0 then ρ = A†A, with A ∈ B(H). Therefore, Vρ = V†ρV ⇒ ⟨ψ|V†ρV|ψ⟩ = ⟨ψ|V†A†AV|ψ⟩ = ∥AV|ψ⟩∥² ≥ 0, ∀|ψ⟩ ∈ H.
Therefore, if ρ is positive, the output of the map is also positive.
End of the proof.

This is a sufficient condition for the positivity of a map, but it is not necessary. It could happen that there are maps
that cannot be written in this form, but they are still positive. To go further, we need a more general condition, and
this comes in the form of the next theorem.

Theorem 1 Choi’s Theorem.

A linear map V : B(H) → B(H) is completely positive iff it can be expressed as

Vρ = Σ_i V_i† ρ V_i,    (59)

with V_i ∈ B(H).

The proof of this theorem requires some algebra.

Proof
The ‘if’ implication is a trivial consequence of the previous lemma. To prove the converse, we need to extend the
dimension of our system by the use of an auxiliary system. If d is the dimension of the Hilbert space of pure states,
H, we define a new Hilbert space of the same dimension, H_A.
We define a maximally entangled pure state in the bipartition H_A ⊗ H in the way

|Γ⟩ ≡ Σ_{i=0}^{d−1} |i⟩_A ⊗ |i⟩,    (60)

with {|i⟩} and {|i⟩_A} arbitrary orthonormal bases of H and H_A.


We can extend the action of our original map V, which acts on B(H), to the extended Hilbert space by defining the
map V₂ : B(H_A) ⊗ B(H) → B(H_A) ⊗ B(H) as

V₂ ≡ 1_{B(H_A)} ⊗ V.    (61)

Note that the idea behind this map is to leave the auxiliary subsystem invariant while applying the original map to
the original system. This map is positive because V is completely positive. This may appear trivial, but as has been
explained before, complete positivity is a more restrictive property than positivity, and we are looking for a condition
to ensure complete positivity.
We can now apply the extended map to the density matrix corresponding to the maximally entangled state (60),
obtaining

V₂(|Γ⟩⟨Γ|) = Σ_{i,j=0}^{d−1} |i⟩⟨j| ⊗ V|i⟩⟨j|.    (62)

Now we can use the maximal entanglement of the state |Γ⟩ to relate the original map V and the action V₂(|Γ⟩⟨Γ|) by
taking matrix elements with respect to H_A,

V|i⟩⟨j| = ⟨i|_A ( V₂(|Γ⟩⟨Γ|) ) |j⟩_A.    (63)

To relate this operation to the action of the map on an arbitrary vector |ψ⟩ ∈ H_A ⊗ H, we can expand it in this basis
as

|ψ⟩ = Σ_{i=0}^{d−1} Σ_{j=0}^{d−1} α_{ij} |i⟩_A ⊗ |j⟩.    (64)

We can also define an operator V_{|ψ⟩} ∈ B(H) s.t. it transforms |Γ⟩ into |ψ⟩. Its explicit action would be written as

(1_A ⊗ V_{|ψ⟩}) |Γ⟩ = Σ_{i,j=0}^{d−1} α_{ij} (1_A ⊗ |j⟩⟨i|) Σ_{k=0}^{d−1} |k⟩_A ⊗ |k⟩ = Σ_{i,j,k=0}^{d−1} α_{ij} (|k⟩_A ⊗ |j⟩) ⟨i|k⟩
                    = Σ_{i,j,k=0}^{d−1} α_{ij} (|k⟩_A ⊗ |j⟩) δ_{i,k} = Σ_{i,j=0}^{d−1} α_{ij} |i⟩_A ⊗ |j⟩ = |ψ⟩.    (65)

At this point, we have related the vectors in the extended space H_A ⊗ H to operators acting on H. This can only be
done because the vector |Γ⟩ is maximally entangled. We go now back to our extended map V₂. Its action on |Γ⟩⟨Γ| is
given by Eq. (62) and, as the output is a positive operator, it can be expanded as

V₂(|Γ⟩⟨Γ|) = Σ_{l=0}^{d²−1} |v_l⟩⟨v_l|,    (66)

with |v_l⟩ ∈ H_A ⊗ H. The vectors |v_l⟩ can be related to operators in H as in Eq. (65),

|v_l⟩ = (1_A ⊗ V_l) |Γ⟩.    (67)

Based on this result we can calculate the product of an arbitrary vector |i⟩_A ∈ H_A with |v_l⟩,

⟨i|_A |v_l⟩ = ⟨i|_A (1_A ⊗ V_l) |Γ⟩ = Σ_{k=0}^{d−1} ⟨i|k⟩_A V_l |k⟩ = V_l |i⟩.    (68)

This is the last ingredient we need for the proof.


We come back to the original question: we want to characterise the map V. We do so by applying it to an arbitrary
basis element |i⟩⟨j| of B(H),

V(|i⟩⟨j|) = (⟨i|_A ⊗ 1) V₂(|Γ⟩⟨Γ|) (|j⟩_A ⊗ 1) = (⟨i|_A ⊗ 1) ( Σ_{l=0}^{d²−1} |v_l⟩⟨v_l| ) (|j⟩_A ⊗ 1)
          = Σ_{l=0}^{d²−1} [ (⟨i|_A ⊗ 1) |v_l⟩ ] [ ⟨v_l| (|j⟩_A ⊗ 1) ] = Σ_{l=0}^{d²−1} V_l |i⟩⟨j| V_l†.    (69)

As |i⟩⟨j| is an arbitrary element of a basis, any operator can be expanded in this basis. Therefore, it is straightforward
to prove that

Vρ = Σ_{l=0}^{d²−1} V_l ρ V_l†,

which is of the form (59) after the relabelling V_l → V_l†.

End of the proof.

Thanks to Choi’s Theorem, we know the general form of CP-maps, but there is still an issue to address. As density
matrices should have trace one, we need to require any physical map to also be trace-preserving. This requirement
gives us a new constraint that completely defines all CPT-maps. It comes in the form of the following theorem.

Theorem 2 Choi-Kraus’ Theorem.

A linear map V : B(H) → B(H) is completely positive and trace-preserving iff it can be expressed as

Vρ = Σ_l V_l† ρ V_l,    (70)

with V_l ∈ B(H) fulfilling

Σ_l V_l V_l† = 1_H.    (71)

Proof.
We have already proved that this is a completely positive map; we only need to prove that it is also trace-preserving
and that all trace-preserving maps of this form fulfil Eq. (71). The ‘if’ part is quite simple, by applying the cyclic permutation
and linearity properties of the trace,

Tr[Vρ] = Tr[ Σ_l V_l† ρ V_l ] = Tr[ ( Σ_l V_l V_l† ) ρ ] = Tr[ρ].    (72)

We also have to prove that a map of the form (70) is trace-preserving only if the operators V_l fulfil (71). We start
by noting that if the map is trace-preserving, by applying it to any arbitrary element of a basis of B(H) we should
obtain

Tr[V(|i⟩⟨j|)] = Tr[|i⟩⟨j|] = δ_{i,j}.    (73)


As the map has the form given by (70) we can calculate this same trace in an alternative way,

Tr[V(|i⟩⟨j|)] = Tr[ Σ_l V_l† |i⟩⟨j| V_l ] = Tr[ Σ_l V_l V_l† |i⟩⟨j| ]
             = Σ_k ⟨k| ( Σ_l V_l V_l† ) |i⟩⟨j|k⟩ = ⟨j| ( Σ_l V_l V_l† ) |i⟩,    (74)

where {|k⟩} is an arbitrary basis of H. As both expressions should agree, we obtain

⟨j| ( Σ_l V_l V_l† ) |i⟩ = δ_{i,j},    (75)

and therefore the condition (71) should be fulfilled.


End of the proof.

Operators V_i of a map fulfilling condition (71) are called Kraus operators. Because of that, CPT-maps
are sometimes also called Kraus maps, especially when they are presented as a collection of Kraus operators. Both concepts
are ubiquitous in quantum information science. Kraus operators can also be time-dependent as long as they fulfil
relation (71) at all times.
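As an illustration, here is a minimal sketch of a well-known CPT-map, the amplitude-damping (decay) channel of a qubit, written in the convention of Eqs. (70) and (71); the value of the decay probability γ is an arbitrary choice made for the example.

import numpy as np

gamma = 0.3                                          # decay probability for one application of the map
V0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])     # Kraus operators of the amplitude-damping channel,
V1 = np.array([[0, 0], [np.sqrt(gamma), 0]])         # in the convention V(rho) = sum_l V_l† rho V_l

# Condition (71): sum_l V_l V_l† = 1
print(np.allclose(V0 @ V0.conj().T + V1 @ V1.conj().T, np.eye(2)))   # True

rho = np.array([[0.2, 0.3], [0.3, 0.8]])             # an example density matrix
rho_out = V0.conj().T @ rho @ V0 + V1.conj().T @ rho @ V1            # Eq. (70)

print(np.trace(rho_out))                             # 1.0: the map is trace-preserving
print(rho_out)                                       # population moves from |1><1| to |0><0|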
At this point, we already know the form of CPT-maps, but we do not yet have a master equation, that is, a differential
equation generating a continuous time evolution. This means that we know how to perform an arbitrary operation on a system, but we do
not have an equation describing its time evolution. To obtain one, we need to find a time-independent generator L such
that

d/dt ρ(t) = Lρ(t),    (76)

so that our CPT-map can be expressed as V(t) = e^{Lt}. The following calculation is about finding the explicit
expression of L. We start by choosing an orthonormal basis of the space of bounded operators B(H), {F_i}_{i=1}^{d²}. To be
orthonormal it should satisfy the condition

⟨⟨F_i|F_j⟩⟩ ≡ Tr[F_i† F_j] = δ_{i,j}.    (77)

Without any loss of generality, we select one of the elements of the basis to be proportional to the identity, F_{d²} = (1/√d) 1_H.
It is trivial to prove that the norm of this element is one, and it is easy to see from Eq. (77) that all the other elements
of the basis should have trace zero,

Tr[F_i] = 0,  ∀ i = 1, . . . , d² − 1.    (78)
The closure relation of this basis is 1_{B(H)} = Σ_i |F_i⟩⟩⟨⟨F_i|. Therefore, the Kraus operators can be expanded in this
basis by using the Fock-Liouville notation,

V_l(t) = Σ_{i=1}^{d²} ⟨⟨F_i|V_l(t)⟩⟩ F_i.    (79)

As the map V(t) is of the form (59), we can apply (79) to obtain⁴

V(t)ρ = Σ_l ( Σ_{i=1}^{d²} ⟨⟨F_i|V_l(t)⟩⟩ F_i ) ρ ( Σ_{j=1}^{d²} F_j† ⟨⟨V_l(t)|F_j⟩⟩ ) = Σ_{i,j=1}^{d²} c_{i,j}(t) F_i ρ F_j†,    (80)

where we have absorbed the summation over the Kraus operators into the coefficients c_{i,j}(t) = Σ_l ⟨⟨F_i|V_l(t)⟩⟩⟨⟨V_l(t)|F_j⟩⟩. We now go
back to the original problem by applying this expansion to the time derivative in Eq. (76),

 2 
d
dρ 1 X †
= lim (V(∆t)ρ − ρ) = lim  ci,j (∆t)Fi ρFj − ρ
dt ∆t→0 ∆t ∆t→0
i,j=1
 2 2
dX−1 dX −1
= lim  ci,j (∆t)Fi ρFj† + ci,d2 Fi ρFd†2
∆t→0
i,j=0 i=1
2

dX −1
+ cd2 ,j (∆t)Fd2 ρFj† + cd2 ,d2 (∆t)Fd2 ρFd†2 − ρ , (81)
j=1

where we have separated the summations to take into account that F_{d²} = (1/√d) 1_H. By using this property the equation
simplifies to

dρ/dt = lim_{Δt→0} (1/Δt) ( Σ_{i,j=1}^{d²−1} c_{i,j}(Δt) F_i ρ F_j† + (1/√d) Σ_{i=1}^{d²−1} c_{i,d²}(Δt) F_i ρ
      + (1/√d) Σ_{j=1}^{d²−1} c_{d²,j}(Δt) ρ F_j† + (1/d) c_{d²,d²}(Δt) ρ − ρ ).    (82)

The next step is to eliminate the explicit time dependence. To do so, we define new coefficients that absorb the
time intervals,

g_{i,j} ≡ lim_{Δt→0} c_{i,j}(Δt)/Δt    (i, j < d²),
g_{i,d²} ≡ lim_{Δt→0} c_{i,d²}(Δt)/Δt    (i < d²),
g_{d²,j} ≡ lim_{Δt→0} c_{d²,j}(Δt)/Δt    (j < d²),
g_{d²,d²} ≡ lim_{Δt→0} ( c_{d²,d²}(Δt) − d ) / Δt.    (83)

⁴ For simplicity, in this discussion we omit the explicit time dependence of the density matrix.

Introducing these coefficients in Eq. (82) we obtain an equation with no explicit dependence on time,

dρ/dt = Σ_{i,j=1}^{d²−1} g_{i,j} F_i ρ F_j† + (1/√d) Σ_{i=1}^{d²−1} g_{i,d²} F_i ρ + (1/√d) Σ_{j=1}^{d²−1} g_{d²,j} ρ F_j† + (g_{d²,d²}/d) ρ.    (84)
As we are already summing over all the basis operators, it is useful to define a new operator

F ≡ (1/√d) Σ_{i=1}^{d²−1} g_{i,d²} F_i.    (85)

Applying it to Eq. (84) we obtain

dρ/dt = Σ_{i,j=1}^{d²−1} g_{i,j} F_i ρ F_j† + Fρ + ρF† + (g_{d²,d²}/d) ρ.    (86)

At this point, we want to separate the dynamics of the density matrix into a Hermitian part (equivalent to the von Neumann
equation) and an incoherent part. We split the operator F into a Hermitian and an anti-Hermitian part,

F = (F + F†)/2 + i (F − F†)/(2i) ≡ G − iH,    (87)

where we have used the notation H because this Hermitian operator will play the role of a Hamiltonian. If we take this
definition to Eq. (86) we obtain

dρ/dt = Σ_{i,j=1}^{d²−1} g_{i,j} F_i ρ F_j† + {G, ρ} − i[H, ρ] + (g_{d²,d²}/d) ρ.    (88)
We define now the last operator for this proof, G₂ ≡ G + (g_{d²,d²}/2d) 1_H, and the expression for the time derivative becomes

dρ/dt = Σ_{i,j=1}^{d²−1} g_{i,j} F_i ρ F_j† + {G₂, ρ} − i[H, ρ].    (89)

Until now we have imposed the complete positivity of the map, as we have required it to be written in terms of Kraus
operators, but we have not used the trace-preserving property. We impose now this property and, by using the cyclic
property of the trace, we obtain a new condition,

Tr[ dρ/dt ] = Tr[ ( Σ_{i,j=1}^{d²−1} g_{i,j} F_j† F_i + 2G₂ ) ρ ] = 0.    (90)

Therefore, G₂ should fulfil

G₂ = −(1/2) Σ_{i,j=1}^{d²−1} g_{i,j} F_j† F_i.    (91)

By applying this condition, we arrive at the Lindblad master equation

dρ/dt = −i[H, ρ] + Σ_{i,j=1}^{d²−1} g_{i,j} ( F_i ρ F_j† − ½ {F_j† F_i, ρ} ).    (92)

Finally, by definition the coefficients g_{i,j} can be arranged to form a Hermitian, and therefore diagonalisable, matrix.
By diagonalising it, we obtain the diagonal form of the Lindblad master equation,

d/dt ρ = −i[H, ρ] + Σ_k Γ_k ( L_k ρ L_k† − ½ {L_k† L_k, ρ} ) ≡ Lρ.    (93)

D. Properties of the Lindblad Master Equation

Some interesting properties of the Lindblad equation are:

• Under Lindblad dynamics, if all the jump operators are Hermitian, the purity of a system fulfils
d/dt Tr[ρ²] ≤ 0. The proof is given in Appendix A.
• The Lindblad Master Equation is invariant under unitary transformations of the jump operators,

√Γ_i L_i → √Γ_i′ L_i′ = Σ_j v_{ij} √Γ_j L_j,    (94)

with v representing a unitary matrix. It is also invariant under inhomogeneous transformations of the form

L_i → L_i′ = L_i + a_i,
H → H′ = H + (1/2i) Σ_j Γ_j ( a_j* L_j − a_j L_j† ) + b,    (95)

where a_i ∈ C and b ∈ R. The proof of this can be found in Ref. [2] (Section 3).
• Thanks to the previous properties, it is possible to choose traceless jump operators without loss of generality.

Box 6. A master equation for a two-level system with decay.

Continuing our example of a two-level atom, we can make it more realistic by including the pos-
sibility of atomic decay by the emission of a photon. This emission happens due to the interaction
of the atom with the surrounding vacuum stateᵃ. The complete quantum system would be in
this case the ‘atom+vacuum’ system, and its time evolution would be given by the von Neumann
equation (13), where H represents the total ‘atom+vacuum’ Hamiltonian. This system belongs to
an infinite-dimensional Hilbert space, as the radiation field has infinitely many modes. If we are interested
only in the state of the atom, we can derive a Markovian master equation for the
reduced density matrix of the atom (see for instance Refs. [1, 2]). The master equation we will
study is

d/dt ρ(t) = −i[H, ρ] + Γ ( σ⁻ ρ σ⁺ − ½ {σ⁺σ⁻, ρ} ),    (96)

where Γ is the coupling between the atom and the vacuum, and σ⁻ = |0⟩⟨1|, σ⁺ = |1⟩⟨0| are the atomic lowering and
raising operators.


In the Fock-Liouville space (following the same ordering as in Eq. (3)) the Liouvillian corre-
sponding to the evolution (96) is

L̃ = ⎛   0        iΩ         −iΩ        Γ   ⎞
    ⎜  iΩ     iE − Γ/2        0       −iΩ  ⎟
    ⎜ −iΩ        0       −iE − Γ/2     iΩ  ⎟
    ⎝   0       −iΩ          iΩ       −Γ   ⎠ .    (97)

Expressing explicitly the set of differential equations we obtain

ρ̇_00 = iΩρ_01 − iΩρ_10 + Γρ_11,
ρ̇_01 = iΩρ_00 + (iE − Γ/2) ρ_01 − iΩρ_11,
ρ̇_10 = −iΩρ_00 + (−iE − Γ/2) ρ_10 + iΩρ_11,
ρ̇_11 = −iΩρ_01 + iΩρ_10 − Γρ_11.    (98)

a This is why atoms decay.



VI. RESOLUTION OF THE LINDBLAD MASTER EQUATION

A. Integration

To calculate the time evolution of a system determined by a Master Equation of the form (96) we need to solve a
set of coupled equations with as many equations as elements of the density matrix. In our example, this means solving
a set of four equations, but the dimension of the problem increases exponentially with the system size. Because
of this, for bigger systems dimension-reduction techniques are required.
To solve systems of ordinary differential equations there are several canonical algorithms. This can be done analytically
only for a few simple systems and by using sophisticated techniques such as damping bases [29]. In most cases, we have
to rely on approximate numerical methods. One of the most popular approaches is the 4th-order Runge-Kutta
algorithm (see, for instance, [30] for an explanation of the algorithm). By integrating the equations of motion, we can
calculate the density matrix at any time t.
The steady state of a system can be obtained by evolving it for a long time (t → ∞). Unfortunately, this method
presents two difficulties. First, if the dimension of the system is big, the computing time would be huge. This means
that for systems beyond a few qubits, it will take too long to reach the steady state. Even worse is the problem of
stability of the algorithms for integrating differential equations. Due to small errors in the calculation of derivatives by
the use of finite differences, the trace of the density matrix may not remain equal to one. This error accumulates
during the propagation of the state, giving non-physical results after a finite time. One solution to this problem is the
use of algorithms specifically designed to preserve the trace, such as the Crank-Nicolson algorithm [31]. The problem with
this kind of algorithm is that it consumes more computational power than Runge-Kutta, and therefore it is
not useful for calculating the long-time behaviour of big systems. An analysis of different methods and their advantages
and disadvantages can be found in Ref. [32].
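A minimal sketch of this integration for Eq. (96), using a plain 4th-order Runge-Kutta step in Python/NumPy instead of the Mathematica notebooks of Ref. [20] (the parameter values are arbitrary choices, similar to those of Fig. 4):

import numpy as np

E, Omega, Gamma = 1.0, 1.0, 0.1
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)          # sigma^- = |0><1|
sp = sm.conj().T                                        # sigma^+ = |1><0|

def rhs(rho):
    """Right-hand side of the master equation (96)."""
    return (-1j * (H @ rho - rho @ H)
            + Gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm)))

def rk4_step(rho, dt):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.diag([0.0, 1.0]).astype(complex)               # initial state: excited atom
dt, T = 0.01, 10.0
for step in range(int(T / dt)):
    rho = rk4_step(rho, dt)

print(rho[1, 1].real, np.trace(rho).real)               # decayed population rho_11, and trace = 1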

Box 7. Time dependence of the two-level system with decay.

In this box we show some results of solving Eq. (96) and calculating the density matrix as a function
of time. A Mathematica notebook solving this problem can be found at [20]. To illustrate the time
behaviour of this system, we calculate the evolution for different parameters. In all cases,
we start with an initial state in which the atom is excited (ρ_11 = 1), with no coherence
between the states, meaning ρ_01 = ρ_10 = 0. If the decay parameter Γ is equal to zero, the
problem reduces to solving the von Neumann equation, and the result is displayed in Figure 2. The other
extreme case would be a system with no coherent dynamics (Ω = 0) but with decay. In this case,
we observe an exponential decay of the population of the excited state. Finally, we can calculate
the dynamics of a system with both coherent driving and decay. In this case, both behaviours
coexist, and there are oscillations and decay.

FIG. 4: Left: Population dynamics under purely incoherent dynamics (Γ = 0.1, n = 1, Ω = 0, E = 1).
Right: Population dynamics under both coherent and incoherent dynamics (Γ = 0.1, n = 1, Ω = 1, E = 1).
In both panels the blue line represents ρ_11 and the orange one ρ_00.

B. Diagonalisation

As we have discussed before, in the Fock-Liouville space the Liouvillian corresponds to a matrix (in general complex,
non-Hermitian, and non-symmetric). By diagonalising it we can calculate both the time evolution and the
steady state of the density matrix. For most purposes, in the short-time regime integrating the differential equations
may be more efficient than diagonalising. This is due to the high dimensionality of the Liouvillian, which makes the
diagonalisation process very costly in computing power. On the other hand, in order to calculate the steady state,
diagonalisation is the most used method, due to the problems of integrating the equations of motion discussed in
the previous section.
Let us first see how we use diagonalisation to calculate the time evolution of a system. As the Liouvillian matrix is
non-Hermitian, we cannot apply the spectral theorem to it, and it may have different left and right eigenvectors. For
a specific eigenvalue Λ_i we can obtain the eigenvectors |Λ_i^R⟩⟩ and ⟨⟨Λ_i^L| s.t.

L̃ |Λ_i^R⟩⟩ = Λ_i |Λ_i^R⟩⟩,
⟨⟨Λ_i^L| L̃ = Λ_i ⟨⟨Λ_i^L|.    (99)

An arbitrary state can be expanded in the eigenbasis of L̃ as [1, 33]

|ρ(0)⟩⟩ = Σ_i |Λ_i^R⟩⟩ ⟨⟨Λ_i^L|ρ(0)⟩⟩.    (100)

Therefore, the state of the system at a time t can be calculated in the form

|ρ(t)⟩⟩ = Σ_i e^{Λ_i t} |Λ_i^R⟩⟩ ⟨⟨Λ_i^L|ρ(0)⟩⟩.    (101)

Note that in this case, to calculate the state at a time t we do not need to integrate over the interval [0, t], as we have to
do if we use a numerical solution of the set of differential equations. This is an advantage when we want to calculate
the long-time behaviour. Furthermore, to calculate the steady state of a system, we can look at the eigenvector that has
zero eigenvalue, as this is the only one that survives when t → ∞.
For any finite system, Evans’ Theorem ensures the existence of at least one zero eigenvalue of the Liouvillian
matrix [34, 35]. The eigenvector corresponding to this zero eigenvalue is the steady state of the system. In
exceptional cases, a Liouvillian can present more than one zero eigenvalue due to the presence of symmetries in the
system [26, 27, 36]. This is a non-generic case, and for most purposes, we can assume the existence of a unique
fixed point in the dynamics of the system. Therefore, diagonalisation can be used to calculate the steady state without
calculating the full evolution of the system. This can be done even analytically for small systems, and when numerical
approaches are required this technique gives better precision than integrating the equations of motion. The spectrum
of Liouvillian superoperators has been analysed in several recent papers [33, 37].
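A minimal sketch of this procedure in Python/NumPy for the two-level system with decay of Eq. (96) (parameter values are arbitrary): build the Liouvillian matrix, locate the eigenvalue closest to zero, and normalise the corresponding right eigenvector to unit trace.

import numpy as np

E, Omega, Gamma = 1.0, 1.0, 0.1
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)          # sigma^-
I2 = np.eye(2)

def left_right(A, B):
    """Matrix of rho -> A rho B in the row-major Fock-Liouville representation."""
    return np.kron(A, B.T)

# Liouvillian of Eq. (96) as a 4x4 matrix acting on (rho00, rho01, rho10, rho11)
L = (-1j * (left_right(H, I2) - left_right(I2, H))
     + Gamma * (left_right(sm, sm.conj().T)
                - 0.5 * left_right(sm.conj().T @ sm, I2)
                - 0.5 * left_right(I2, sm.conj().T @ sm)))

evals, evecs = np.linalg.eig(L)
i0 = np.argmin(np.abs(evals))                 # Evans' theorem: one eigenvalue is (numerically) zero
rho_ss = evecs[:, i0].reshape(2, 2)
rho_ss /= np.trace(rho_ss)                    # normalise the right eigenvector to unit trace

print(np.round(evals, 4))                     # the spectrum contains exactly one zero eigenvalue
print(np.round(rho_ss, 4))                    # steady state; its populations agree with Eq. (102) for n = 0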

Box 8. Spectrum analysis of the Liouvillian for the two-level system with decay.
Here we diagonalise (97) and obtain its steady state. A Mathematica notebook solving this problem
can be downloaded from [20]. This specific case is straightforward to diagonalise as the dimension
of the system is very low. We obtain 4 different eigenvalues; two of them are real, while the other
two form a complex-conjugate pair. Figure 5 displays the spectrum of the superoperator L given in (97).


FIG. 5: Spectrum of the Liouvillian matrix given by (97) for the general case of both coherent and
incoherent dynamics (Γ = 0.2, n = 1, Ω = 0, E = 1).

As there only one zero eigenvalue we can conclude that there is only one steady-state, and any
initial density matrix will evolve to it after an infinite-time evolution. By selecting the right
eigenvector corresponding to the zero-eigenvalue and normalizing it we obtain the density matrix.
This can be done even analytically. The result is the matrix:

\rho_{SS} = \begin{pmatrix}
\dfrac{(1+n)\left(4E^2+(\Gamma+2n\Gamma)^2\right)+4(1+2n)\Omega^2}{(1+2n)\left(4E^2+(\Gamma+2n\Gamma)^2+8\Omega^2\right)} &
\dfrac{2\left(-2E-i(\Gamma+2n\Gamma)\right)\Omega}{(1+2n)\left(4E^2+(\Gamma+2n\Gamma)^2+8\Omega^2\right)} \\[2mm]
\dfrac{2\left(-2E+i(\Gamma+2n\Gamma)\right)\Omega}{(1+2n)\left(4E^2+(\Gamma+2n\Gamma)^2+8\Omega^2\right)} &
\dfrac{n\left(4E^2+(\Gamma+2n\Gamma)^2\right)+4(1+2n)\Omega^2}{(1+2n)\left(4E^2+(\Gamma+2n\Gamma)^2+8\Omega^2\right)}
\end{pmatrix}   (102)
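This result can also be checked numerically. Since the explicit matrix (97) is not reproduced here, the sketch below rebuilds a Liouvillian for a two-level system with thermal decay using the vectorisation rule vec(AρB) = (A ⊗ B^T) vec(ρ); the Hamiltonian H = E σ_z + Ω σ_x and the jump operators σ∓ with rates Γ(1 + n) and Γn are an assumed parametrisation that may differ from the conventions of the main text by factors or signs, and the steady_state helper is the one sketched in the previous section.

import numpy as np

# Two-level operators (basis ordering {|0>, |1>} assumed).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
sp = sm.conj().T                                 # raising operator

def liouvillian(H, jumps):
    """Lindblad superoperator as a matrix, row-major vectorisation: vec(A rho B) = kron(A, B.T) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for rate, A in jumps:
        AdA = A.conj().T @ A
        L += rate * (np.kron(A, A.conj())
                     - 0.5 * np.kron(AdA, I)
                     - 0.5 * np.kron(I, AdA.T))
    return L

# Parameters of Fig. 5 (assumed model).
E, Omega, Gamma, n = 1.0, 0.0, 0.2, 1.0
H = E * sz + Omega * sx
L = liouvillian(H, [(Gamma * (n + 1), sm), (Gamma * n, sp)])

print(np.round(np.linalg.eigvals(L), 4))   # the four Liouvillian eigenvalues (cf. Fig. 5)
print(np.round(steady_state(L, 2), 6))     # steady-state density matrix

For the parameters of Fig. 5 (Ω = 0) the printed populations reduce to (1 + n)/(1 + 2n) = 2/3 and n/(1 + 2n) = 1/3, which is the Ω → 0 limit of Eq. (102) and is independent of E.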

VII. ACKNOWLEDGEMENTS

The author wants to acknowledge the Spanish Ministry and the Agencia Española de Investigación (AEI) for
financial support under grant FIS2017-84256-P (FEDER funds).

[1] C.W. Gardiner and P. Zoller. Quantum Noise. Springer, Berlin, 2000.
[2] H.P. Breuer and F. Petruccione. The theory of open quantum systems. Oxford University Press, 2002.
[3] A. Rivas and S. Huelga. Open Quantum Systems. An Introduction. Springer, New York, 2012.
[4] G. Lindblad. On the generators of quantum dynamical semigroups. Commun. Math. Phys., 48:119, 1976.
[5] V. Gorini, A. Kossakowski, and E.C.G. Sudarshan. Completely positive dynamical semigroups of N-level systems. J. Math. Phys., 17:821, 1976.
[6] D. Manzano and E. Kyoseva. An atomic symmetry-controlled thermal switch. Scientific Reports, 6:31161, 2016.
[7] T. Prosen. Open XXZ spin chain: Nonequilibrium steady state and a strict bound on ballistic transport. Phys. Rev. Lett., 106:217206, 2011.

[8] D. Manzano, M. Tiersch, A. Asadian, and H.J. Briegel. Quantum transport efficiency and Fourier’s law. Phys. Rev. E,
86:061118, 2012.
[9] D. Manzano, C. Chuang, and J. Cao. Quantum transport in d-dimensional lattices. New J. Physics, 18:043044, 2015.
[10] B. Olmos, I. Lesanovsky, and J.P. Garrahan. Facilitated spin models of dissipative quantum glasses. Phys. Rev. Lett., 109:020403, 2012.
[11] J. Metz, M. Trupke, and A. Beige. Robust entanglement through macroscopic quantum jumps. Phys. Rev. Lett., 97:040503, 2006.
[12] R. Jones, J.A. Needham, I. Lesanovsky, F. Intravaia, and B. Olmos. Modified dipole-dipole interaction and dissipation in an atomic ensemble near surfaces. Phys. Rev. A, 97:053841, 2018.
[13] D.A. Lidar, I.L. Chuang, and K. B. Whaley. Decoherence-free subspaces for quantum computation. Phys. Rev. Lett.,
81(12):2594, 1998.
[14] B. Kraus, H.P. Büchler, S. Diehl, A. Kantian, A. Micheli, and P. Zoller. Preparation of entangled states by quantum Markov processes. Phys. Rev. A, 2008.
[15] T.A. Brun. Continuous measurements, quantum trajectories, and decoherent histories. Phys. Rev. A, 61:042107, 2000.
[16] M. Schlosshauer. Decoherence and the Quantum-to-Classical Transition. Springer, New York, 2007.
[17] M. Plenio and S. Huelga. Dephasing-assisted transport: quantum networks and biomolecules. New J. Phys., 10:113019,
2008.
[18] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik. Environment-assisted quantum walks in photosynthetic energy transfer. Journal of Chemical Physics, 129:174106, 2008.
[19] D. Manzano. Quantum transport in quantum networks and photosynthetic complexes at the steady state. PLoS ONE,
8(2):e57041, 2013.
[20] https://ic1.ugr.es/manzano/Descargas/Lindblad/Lindblad_Manzano.zip
[21] L. Debnath and P. Mikusiński. Introduction to Hilbert Spaces with Applications. Elsevier Academic Press, 2005.
[22] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge Univ. Press, Cambridge,
2000.
[23] J.J. Sakurai. Modern Quantum Mechanics. Addison-Wesley Publishing Co., 1994.
[24] A. Galindo and P. Pascual. Quantum Mechanics I. Springer, Berlin, 1990.
[25] A. Peres. Quantum Theory: Concepts and Methods. Kluwer Academic Publishers, 1995.
[26] D. Manzano and P.I. Hurtado. Harnessing symmetry to control quantum transport. Adv. Phys, 67:1, 2018.
[27] D. Manzano and P.I. Hurtado. Symmetry and the thermodynamics of currents in open quantum systems. Phys. Rev. B, 90:125138, 2014.
[28] M.M. Wilde. Quantum Information Theory. Cambridge Univ. Press, Cambridge, 2017.
[29] H.J. Briegel and B.G. Englert. Quantum optical master equation: The use of damping bases. Phys. Rev. A, 47:3311, 1993.
[30] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes. Cambridge Univ. Press, Cambridge,
2007.
[31] A. Goldberg, H. Schey, and J.L. Schwartz. Computer-generated motion pictures of one-dimensional quantum-mechanical
transmission and reflection phenomena. Am. J. Phys., 35:177, 1967.
[32] M. Riesch and C. Jirauschek. Analyzing the positivity preservation of numerical methods for the Liouville-von Neumann equation. J. Comp. Phys., 390:290, 2019.
[33] J. Thingna, D. Manzano, and J. Cao. Dynamical signatures of molecular symmetries in nonequilibrium quantum transport.
Scientific Reports, 6:28027, 2016.
[34] D.E. Evans. Irreducible quantum dynamical semigroups. Commun. Math. Phys., 54:293, 1977.
[35] D.E. Evans and H. Hanche-Olsen. The generators of positive semigroups. Journal of Functional Analysis, 32:207, 1979.
[36] B. Buča and T. Prosen. A note on symmetry reductions of the Lindblad equation: Transport in constrained open spin
chains. New J. Physics, 14:073007, 2012.
[37] V.V. Albert and L. Jiang. Symmetries and conserved quantities in Lindblad master equations. Phys. Rev. A, 89:022118,
2014.

Appendix A: Proof of $\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] \leq 0$

In this appendix we prove that, under the Lindblad dynamics given by Eq. (93), the purity of a density matrix fulfils $\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] \leq 0$ if all the jump operators of the Lindblad dynamics are Hermitian.
We start by interchanging the trace and the derivative. As the trace is a linear operation it commutes with differentiation, and we have

\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] = \mathrm{Tr}\left[\frac{d\rho^2}{dt}\right] = \mathrm{Tr}\left[2\rho\dot{\rho}\right],   (A1)

where we have used the cyclic property of the trace operator⁵. By inserting the Lindblad Eq. (93) into the r.h.s. of (A1) we obtain

\frac{d}{dt}\mathrm{Tr}\left[\rho^2\right] = -\frac{i}{\hbar}\mathrm{Tr}\left[2\rho\left(H\rho-\rho H\right)\right]
 + 2\sum_k \Gamma_k \mathrm{Tr}\left[\rho L_k \rho L_k^\dagger\right]
 - 2\sum_k \Gamma_k \mathrm{Tr}\left[\rho^2 L_k^\dagger L_k\right].   (A2)

The first term is zero, as by the cyclic property of the trace $\mathrm{Tr}\left[\rho H \rho\right] = \mathrm{Tr}\left[\rho\rho H\right] = \mathrm{Tr}\left[\rho^2 H\right]$. Therefore, the inequality we want to prove becomes equivalent to

\sum_k \Gamma_k \mathrm{Tr}\left[\rho L_k \rho L_k^\dagger\right] \leq \sum_k \Gamma_k \mathrm{Tr}\left[\rho^2 L_k^\dagger L_k\right].   (A3)

As the density matrix is Hermitian we can diagonalise it to obtain its eigenvalues ($\Lambda_i \in \mathbb{R}$) and its corresponding eigenvectors ($|\Lambda_i\rangle$). The density matrix is diagonal in its own eigenbasis and can be expressed as⁶

\rho \to \tilde{\rho} = \sum_i \Lambda_i |\Lambda_i\rangle\langle\Lambda_i|,   (A4)

where we assume an ordering of the eigenvalues in the form $\Lambda_0 \geq \Lambda_1 \geq \cdots \geq \Lambda_d$.


We denote the jump operators expressed in this basis as $\tilde{L}_k$. Expanding each term of the inequality (A3) in this basis we obtain

 !   
h i X
Γk Tr ρ Lk ρ L†k =
X X X
Γk Tr  Λi |Λi ihΛi | L̃k  Λj |Λj ihΛj | L̃k 
k k i j
h i X X  2 

X X
= Γk Λi Λj Tr L̃k |Λi ihΛi |L̃k |Λj ihΛj | = Γk Λi Λj Tr hΛi |L̃k |Λj i

k i,j k i,j
(k)
X X
= Γk Λi Λj xij , (A5)
k i,j

2
(k)
where we have introduced the oefficients xij ≡ hΛi |L̃k |Λj i . As the operators Lk are Hermitian these coefficients

(k) (k)
fulfil xij = xji
The second term is expanded as

⁵ This property is used throughout the demonstration without explicitly mentioning it.
⁶ This eigenbasis changes with time, of course, but the proof is valid as the inequality should be fulfilled at any time.

 !  
h i
Γk Tr ρ2 L†k Lk = Λj |Λj ihΛj | L̃†k L̃k 
X X X X
Γk Tr  Λi |Λi ihΛi | 
k k i j
h i h i
L̃k |Λi ihΛj |L̃†k hΛi |Λj i Λ2i Tr L̃k |Λi ihΛi |L̃†k
X X X X
= Γk Λi Λj Tr = Γk
k ij k i
  

Λ2i Tr L̃k |Λi ihΛi |L̃†k 


X X X
= Γk |Λj ihΛj |
k i j
X X h i X X
= Γk Λ2i Tr hΛj |L̃k |Λi i + hΛi |L̃k |Λj i = Γk Λ2i xij , (A6)
k ij k ij
P
where we have used the closure relation in the density matrix eigenbasis, 1H = j |Λj ihΛj |. The inequality can be
written now as

\sum_k \Gamma_k \sum_{i,j} \Lambda_i \Lambda_j\, x^{(k)}_{ij} \leq \sum_k \Gamma_k \sum_{i,j} \Lambda_i^2\, x^{(k)}_{ij}.   (A7)

As $x^{(k)}_{ij} = x^{(k)}_{ji}$, we can re-order the sum over $i$ and $j$ in the following way

\sum_k \Gamma_k \left[\sum_i \sum_{j<i} 2\Lambda_i \Lambda_j\, x^{(k)}_{ij} + \sum_i \Lambda_i^2\, x^{(k)}_{ii}\right]
 \leq \sum_k \Gamma_k \left[\sum_i \sum_{j<i} \left(\Lambda_i^2 + \Lambda_j^2\right) x^{(k)}_{ij} + \sum_i \Lambda_i^2\, x^{(k)}_{ii}\right].   (A8)

Therefore, we can reduce the proof of this inequality to the proof of a set of inequalities

2\Lambda_i \Lambda_j\, x^{(k)}_{ij} \leq \left(\Lambda_i^2 + \Lambda_j^2\right) x^{(k)}_{ij} \qquad \forall\, (k, i, j).   (A9)

It is obvious that (A9) ⇒ (A8) (but not the other way around). The inequalities (A9) are easily proved by taking into account that $x^{(k)}_{ij} \geq 0$ and that $2\Lambda_i\Lambda_j \leq \Lambda_i^2 + \Lambda_j^2$, which follows from $(\Lambda_i - \Lambda_j)^2 \geq 0$.
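A quick numerical sanity check of this result can also be reassuring. The sketch below (in Python, an illustration only and not part of the original proof) draws a random Hamiltonian, random Hermitian jump operators, and a random density matrix, builds ρ̇ from the Lindblad equation with ħ = 1 assumed, and evaluates d/dt Tr[ρ²] = Tr[2ρρ̇], which should come out non-positive.

import numpy as np

rng = np.random.default_rng(0)
d, n_jumps = 4, 3

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return 0.5 * (A + A.conj().T)

def random_density_matrix(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T                  # positive semidefinite by construction
    return rho / np.trace(rho)

H = random_hermitian(d)
jumps = [(rng.uniform(0.1, 1.0), random_hermitian(d)) for _ in range(n_jumps)]
rho = random_density_matrix(d)

# Lindblad right-hand side (hbar = 1)
rho_dot = -1j * (H @ rho - rho @ H)
for g, Lk in jumps:
    rho_dot += g * (Lk @ rho @ Lk.conj().T
                    - 0.5 * (Lk.conj().T @ Lk @ rho + rho @ Lk.conj().T @ Lk))

purity_rate = 2 * np.trace(rho @ rho_dot).real   # d/dt Tr[rho^2], Eq. (A1)
print(purity_rate)                               # non-positive whenever all jump operators are Hermitian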
