Physica A ( ) –


Ensemble and trajectory thermodynamics:
A brief introduction
C. Van den Broeck a,∗, M. Esposito b
a Hasselt University, B-3590 Diepenbeek, Belgium
b Complex Systems and Statistical Mechanics, University of Luxembourg, L-1511 Luxembourg, Luxembourg

• We review stochastic thermodynamics at the ensemble level.
• We formulate first and second laws at the trajectory level.
• The stochastic entropy production is the log-ratio of trajectory probabilities.
• We establish the relation between the stochastic grand potential, work and entropy production.
• We derive detailed and integral fluctuation theorems.

article info

Article history:
Available online xxxx

Keywords:
Fluctuation theorem
Markov process

abstract

We revisit stochastic thermodynamics for a system with discrete energy states in contact with a heat and particle reservoir.

© 2014 Published by Elsevier B.V.

1. Introduction

Over the last few years, it has become clear that one can extend thermodynamics, which is traditionally confined to the description of equilibrium states of macroscopic systems or to the transition between such states, to cover the nonequilibrium dynamics of small-scale systems. This extension has been carried out at several levels of description, including systems described by discrete and continuous Markovian and non-Markovian stochastic dynamics, by classical and quantum Hamiltonian dynamics, and by thermostatted systems. These developments can be seen, on one side, as extending the work of Onsager and Prigogine [1–4] by including microscopic dynamical properties into the far-from-equilibrium irreversible realm. On the other side, they have led to a reassessment of the cornerstone of thermodynamics, namely the second law, which is replaced by a much deeper symmetry relation, embodied in the integral and detailed fluctuation theorems. On the more practical side, the new formulation allows one to address new questions, related either to nonequilibrium properties, such as efficiency at maximum power or information-to-work conversion, or to the thermodynamic description of small systems, e.g., the discussion of Brownian motors and refrigerators or the efficiency of quantum dots and other small-scale devices. In this paper, we present a brief introduction to stochastic thermodynamics. We refer to the literature for more advanced reviews [5–13].

∗ Corresponding author. Tel.: +32 11268214.
E-mail address: (C. Van den Broeck).
0378-4371/© 2014 Published by Elsevier B.V.

2. Ensemble thermodynamics

We consider a system, with discrete non-degenerate states, in contact with a single (ideal, non-dissipative) heat and particle reservoir at temperature T (β = 1/(k_B T)) and chemical potential µ. The states are identified by an index m, with corresponding energy ϵ_m and particle number n_m. For simplicity we consider a single type of particle. We assume that the system can also exchange work with an (ideal, non-dissipative) work source which controls its energy levels ϵ_m(λ) via a time-dependent control parameter λ = λ(t). The particle number in a given state is however supposed to be fixed.

In the ensemble picture, the state of the system is described by a probability distribution P_m to be in the state m, with \sum_m P_m = 1. Note that this distribution does not have to be of the equilibrium form, so that "traditional equilibrium concepts", such as temperature and chemical potential, need not exist for the system. They are however well defined for the ideal reservoir, and when appearing in the formulas below, T and µ refer to this reservoir. The time evolution of the state is described by a Markovian master equation:

  d_t P_m = \sum_{m'} W_{m,m'} P_{m'}.    (1)

Here W_{m,m'} is the probability per unit time to make a transition from state m' to m. We use the shorthand notation with diagonal elements defined as W_{m,m} = -\sum_{m' \ne m} W_{m',m}, or alternatively:

  \sum_m W_{m,m'} = 0,    (2)

a property that guarantees the conservation of normalization.

The transition rates have to satisfy an additional property. In the steady state, the system is at equilibrium with the reservoir, and statistical physics prescribes that the steady state distribution is given by the grand canonical equilibrium distribution P_m^{eq} [14]:

  P_m^{eq} = \exp\{-\beta(\epsilon_m - \mu n_m - \Omega^{eq})\}.    (3)

The (equilibrium) grand potential \Omega^{eq} follows from the normalization of P^{eq}:

  \exp\{-\beta \Omega^{eq}\} = \sum_m \exp\{-\beta(\epsilon_m - \mu n_m)\}.    (4)

The crucial property that is required from the rates is the so-called condition of detailed balance, i.e., at equilibrium every transition, say from m to m', and its inverse, from m' to m, have to be equally likely:

  W_{m,m'} P_{m'}^{eq} = W_{m',m} P_m^{eq}.    (5)

Combined with the explicit expression (3) of the equilibrium distribution, this gives:

  k_B \ln \frac{W_{m',m}}{W_{m,m'}} = \frac{\epsilon_m - \epsilon_{m'} - \mu(n_m - n_{m'})}{T} = \frac{q_{m,m'}}{T},    (6)

where q_{m,m'} is the "elementary" heat absorbed by the system to make the transition from m' to m. We stress that in the presence of driving, which is shifting the energy levels in time, this relation is supposed to hold at each moment in time, hence the rates also become time-dependent. This condition will be crucial to obtain the correct formulation of the second law.

We next introduce the basic state functions – quantities that depend on the probability distribution P_m of the system, but not on the way this distribution was achieved – namely the ensemble-averaged and in general nonequilibrium values of energy, particle number and entropy:

  E = \sum_m \epsilon_m P_m = \langle \epsilon_m \rangle,    (7)
  N = \sum_m n_m P_m = \langle n_m \rangle,    (8)
  S = -k_B \sum_m P_m \ln P_m = \langle -k_B \ln P_m \rangle.    (9)

It is clear from the above formulas that these state variables can change due to two different mechanisms: a change in occupation of the levels, i.e., a modification of P_m, or a shift of the energy levels, i.e., a change of the energy level ϵ_m. In particular, the rate of energy change is given by:

  d_t E = \sum_m \{\epsilon_m d_t P_m + d_t\epsilon_m P_m\}    (10)
      = \dot Q + \dot W_{chem} + \dot W.    (11)
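The relaxation encoded in Eqs. (1)–(5) can be illustrated with a minimal numerical sketch. All numerical values below (energies, particle numbers, temperature, rates) are arbitrary example choices, not taken from the text: rates are constructed to satisfy detailed balance for a hypothetical three-state system, and the master equation is integrated until the distribution reaches the grand-canonical form (3).

```python
import numpy as np

# Hypothetical 3-state example (assumed values): energies eps_m, particle
# numbers n_m, one reservoir at temperature T and chemical potential mu.
kB, T, mu = 1.0, 1.0, 0.5
beta = 1.0 / (kB * T)
eps = np.array([0.0, 1.0, 2.0])
n = np.array([0, 1, 1])

# Grand-canonical equilibrium distribution, Eqs. (3)-(4)
w = np.exp(-beta * (eps - mu * n))
P_eq = w / w.sum()

# Detailed-balance rates: W[m, m'] = P_eq[m] for m != m', so that
# W[m, m'] P_eq[m'] = W[m', m] P_eq[m] holds automatically (Eq. (5)).
W = np.tile(P_eq[:, None], (1, 3))
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))   # Eq. (2): columns sum to zero

# Integrate d_t P = W P from a nonequilibrium start; it must relax to P_eq.
P = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(5000):
    P = P + dt * W @ P

print(np.allclose(P, P_eq, atol=1e-6))
```

The zero-column-sum property (2) guarantees that normalization is preserved exactly along the Euler integration.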

This decomposition reproduces the first law. Work rate (power) \dot W, particle flow d_t N and heat flow \dot Q are given by:

  \dot W = \sum_m d_t\epsilon_m P_m = d_t\lambda \, \partial_\lambda E,    (12)
  d_t N = \sum_m n_m d_t P_m,    (13)
  \dot Q = \sum_m \epsilon_m d_t P_m - \dot W_{chem},    (14)

with the chemical work rate

  \dot W_{chem} = \mu \, d_t N.    (15)

In words, work is the result of the energy shift of an occupied state, while heat and chemical work correspond to transitions between states, implying a change in occupation probability. As is well known, heat and work are not state functions (or the difference of state functions), as they depend on the way the transition between states is carried out. We also mention for later use the following expression for the heat flux, cf. (13)–(15):

  \dot Q = \sum_m (\epsilon_m - \mu n_m) d_t P_m.    (16)

Turning to the second law, we will show that the above definition of the nonequilibrium entropy is a proper choice: it reduces to the standard thermodynamic entropy at equilibrium, but with the additional advantage that it preserves – in nonequilibrium – the basic features of the second law, namely the relation between heat and entropy exchange and the positivity of the entropy production. Explicitly:

  d_t S = \dot S_e + \dot S_i,    (17)
  \dot S_e = \dot Q / T,    (18)
  \dot S_i \ge 0.    (19)

The proof goes as follows:

  d_t S = -k_B \sum_m d_t P_m \ln P_m = -k_B \sum_{m,m'} W_{m,m'} P_{m'} \ln P_m    (20)
   = \frac{k_B}{2} \sum_{m,m'} [W_{m,m'} P_{m'} - W_{m',m} P_m] \ln \frac{P_{m'}}{P_m}    (21)
   = \frac{k_B}{2} \sum_{m,m'} [W_{m,m'} P_{m'} - W_{m',m} P_m] \ln \frac{W_{m,m'} P_{m'}}{W_{m',m} P_m} + \frac{k_B}{2} \sum_{m,m'} [W_{m,m'} P_{m'} - W_{m',m} P_m] \ln \frac{W_{m',m}}{W_{m,m'}}.    (22)

We thus identify the entropy production and the entropy flow as:

  \dot S_i = \frac{k_B}{2} \sum_{m,m'} [W_{m,m'} P_{m'} - W_{m',m} P_m] \ln \frac{W_{m,m'} P_{m'}}{W_{m',m} P_m} \ge 0,    (23)
  \dot S_e = \frac{k_B}{2} \sum_{m,m'} [W_{m,m'} P_{m'} - W_{m',m} P_m] \ln \frac{W_{m',m}}{W_{m,m'}} = \sum_{m>m'} J_{m,m'} \frac{q_{m,m'}}{T},    (24)

with

  J_{m,m'} = W_{m,m'} P_{m'} - W_{m',m} P_m,    (25)
  X_{m,m'} = k_B \ln \frac{W_{m,m'} P_{m'}}{W_{m',m} P_m},    (26)

and q_{m,m'} = \epsilon_m - \epsilon_{m'} - \mu(n_m - n_{m'}), cf. (6). Hence:

  \dot S_i = \sum_{m>m'} J_{m,m'} X_{m,m'}.    (27)
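The entropy balance (17)–(27) can be verified numerically. The sketch below uses a hypothetical three-state system with assumed example values: it evaluates d_t S directly from (9), the entropy flow from (16) and (18), and the entropy production from (23), and checks that the decomposition closes with \dot S_i > 0.

```python
import numpy as np

# Hypothetical 3-state example (assumed values, one reservoir).
kB, T, mu = 1.0, 1.0, 0.5
beta = 1.0 / (kB * T)
eps = np.array([0.0, 1.0, 2.0])
n = np.array([0, 1, 1])

w = np.exp(-beta * (eps - mu * n))
P_eq = w / w.sum()
W = np.tile(P_eq[:, None], (1, 3))   # rates obeying detailed balance, Eq. (5)
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))

P = np.array([0.7, 0.2, 0.1])        # an arbitrary nonequilibrium state
dP = W @ P                           # master equation, Eq. (1)

# d_t S directly from S = -kB sum_m P_m ln P_m, Eq. (9)
# (the "+1" term drops out because sum_m dP_m = 0)
dS = -kB * np.sum(dP * (np.log(P) + 1.0))

# Entropy flow, Eq. (18), with the heat flux of Eq. (16)
Se_dot = np.sum((eps - mu * n) * dP) / T

# Entropy production, Eq. (23)
Si_dot = 0.0
for m in range(3):
    for mp in range(3):
        if m != mp:
            J = W[m, mp] * P[mp] - W[mp, m] * P[m]
            Si_dot += 0.5 * kB * J * np.log((W[m, mp] * P[mp]) / (W[mp, m] * P[m]))

print(Si_dot > 0.0, np.isclose(dS, Se_dot + Si_dot))
```

The check is an exact algebraic identity, so the comparison holds to floating-point accuracy for any positive P.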

\dot S_i is clearly non-negative. In fact a stronger property holds: it is the sum of contributions due to all pairwise transitions between different states m and m', and each of these terms is non-negative. J_{m,m'} is the net rate of transitions from m' to m and X_{m,m'} the corresponding thermodynamic force. Concerning \dot S_e, one finds using (6) and (16):

  \dot S_e = k_B \sum_{m,m'} W_{m,m'} P_{m'} \ln \frac{W_{m',m}}{W_{m,m'}}    (28)
   = \sum_{m,m'} W_{m,m'} P_{m'} \frac{\epsilon_m - \epsilon_{m'} - \mu(n_m - n_{m'})}{T}    (29)
   = \sum_{m,m'} W_{m,m'} P_{m'} \frac{\epsilon_m - \mu n_m}{T}    (30)
   = \sum_m \frac{\epsilon_m - \mu n_m}{T} d_t P_m = \frac{\dot Q}{T},    (31)

which is indeed the expected expression for the entropy flow. (In the step from (29) to (30) we used (2), and in (31) the master equation (1).)

Besides energy, number of particles and entropy, it is convenient to introduce another state function, namely the grand potential:

  \Omega = E - TS - \mu N.    (32)

One immediately finds that:

  d_t \Omega = d_t E - T d_t S - \mu d_t N = \dot W - T \dot S_i,    (33)

or

  T \dot S_i = \dot W - d_t \Omega \ge 0.    (34)

As a consequence, the rate of decrease -d_t\Omega of the grand potential is an upper bound for the corresponding output power -\dot W.

The above description provides a generalization of the usual equilibrium thermodynamics to far-from-equilibrium states. It is reassuring that at equilibrium it reproduces the properties of the equilibrium state. By filling in the equilibrium expression (3) for the probability distribution in (7)–(9) and (32), one finds the corresponding results for the energy, particle number, entropy and grand potential: E^{eq}, N^{eq}, S^{eq}, \Omega^{eq}. In particular, (32) thus reproduces the equilibrium Euler relation \Omega^{eq} = E^{eq} - TS^{eq} - \mu N^{eq}. Note furthermore that the knowledge of the equilibrium grand potential \Omega^{eq} = \Omega^{eq}(T, \lambda, \mu), as a function of T, λ and µ, provides a fundamental relation, i.e., a full characterization of the system: the energy–particle spectrum that characterizes the system can be recovered through an inverse Laplace transform [14]. This is obviously no longer true for the nonequilibrium potential Ω. We however point out the following revealing property: the nonequilibrium potential is always larger than its equilibrium value:

  \Omega - \Omega^{eq} = k_B T I,    (35)
  I = D(P \| P^{eq}) = \sum_m P_m \ln \frac{P_m}{P_m^{eq}} \ge 0.    (36)

Here D(P‖P^{eq}) is the relative entropy or Kullback–Leibler distance [15] between the distributions P and P^{eq}. Combined with the second law under the form (34), we conclude that any nonequilibrium state has the potential to generate an output work of at most k_B T I, an upper limit reached for a reversible scenario (zero entropy production).

We finally consider a quasi-static transformation generated by the work source. In this case the equilibrium shape of the distribution is preserved in time, P_m = P_m^{eq}(t). Hence, as the energy levels are shifted via work, a corresponding instantaneous redistribution over the levels has to take place, with a concomitant heat and particle exchange. One immediately finds that for such a transformation:

  d_t S = -k_B \sum_m d_t P_m^{eq} \ln P_m^{eq}    (37)
   = \sum_m d_t P_m^{eq} \frac{\epsilon_m - \mu n_m - \Omega^{eq}}{T}    (38)
   = \frac{\dot Q}{T} = \dot S_e.    (39)

This also implies \dot S_i = 0 and \dot W = d_t \Omega^{eq}. These relations reproduce the familiar thermodynamic statement for quasi-static processes: dS = dQ/T.
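The identity (35)–(36) relating the nonequilibrium grand potential to the relative entropy can be checked in a few lines. The example below (assumed values, hypothetical three-state system) evaluates both sides independently.

```python
import numpy as np

# Hypothetical 3-state example (assumed values).
kB, T, mu = 1.0, 1.0, 0.5
beta = 1.0 / (kB * T)
eps = np.array([0.0, 1.0, 2.0])
n = np.array([0, 1, 1])

Z = np.sum(np.exp(-beta * (eps - mu * n)))
Omega_eq = -kB * T * np.log(Z)                      # Eq. (4)
P_eq = np.exp(-beta * (eps - mu * n - Omega_eq))    # Eq. (3)

P = np.array([0.5, 0.3, 0.2])        # arbitrary nonequilibrium distribution
E = np.sum(eps * P)                  # Eq. (7)
N = np.sum(n * P)                    # Eq. (8)
S = -kB * np.sum(P * np.log(P))      # Eq. (9)
Omega = E - T * S - mu * N           # Eq. (32)

D = np.sum(P * np.log(P / P_eq))     # relative entropy, Eq. (36)
print(np.isclose(Omega - Omega_eq, kB * T * D), D >= 0.0)
```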

3. Multiple reservoirs

The case of multiple energy and particle reservoirs is of obvious theoretical and technological interest. On the theory side, we mention the study of nonequilibrium steady states. With respect to applications, engines, pumps and refrigerators all involve contact with multiple reservoirs. We briefly explain how the above formalism can be extended to cover this situation.

The main new ingredient is the fact that the transitions in the system are now due to the coupling with the different reservoirs, which we identify by the index ν (temperature T^ν, chemical potential µ^ν, etc.). We assume that these contacts do not interfere, hence:

  W_{m,m'} = \sum_\nu W^\nu_{m,m'},    (40)

with W^ν the transition matrix due to the coupling with reservoir ν. This latter satisfies detailed balance with respect to the equilibrium distribution P^{eq,ν} imposed by reservoir ν at each moment in time (i.e., for the prevailing energy spectrum ϵ_m = ϵ_m(t)):

  W^\nu_{m,m'} P^{eq,\nu}_{m'} = W^\nu_{m',m} P^{eq,\nu}_m,    (41)
  P^{eq,\nu}_m = \exp\{-\beta^\nu(\epsilon_m - \mu^\nu n_m - \Omega^{eq,\nu})\},    (42)
  \exp\{-\beta^\nu \Omega^{eq,\nu}\} = \sum_m \exp\{-\beta^\nu(\epsilon_m - \mu^\nu n_m)\}.    (43)

We can now write:

  d_t P_m = \sum_\nu \dot P^\nu_m,    (44)
  \dot P^\nu_m = \sum_{m'} W^\nu_{m,m'} P_{m'}.    (45)

Hence, the particle, heat and chemical work fluxes can be separated into contributions from each reservoir:

  d_t N = \sum_\nu \dot N^\nu,    (46)
  \dot Q = \sum_\nu \dot Q^\nu,    (47)
  \dot W_{chem} = \sum_\nu \dot W^\nu_{chem},    (48)

with

  \dot N^\nu = \sum_m n_m \dot P^\nu_m,    (49)
  \dot Q^\nu = \sum_m (\epsilon_m - \mu^\nu n_m) \dot P^\nu_m,    (50)
  \dot W^\nu_{chem} = \mu^\nu \sum_m n_m \dot P^\nu_m.    (51)

The first law (10) remains of course valid. For the second law, the derivation is modified as follows:

  d_t S = -k_B \sum_m d_t P_m \ln P_m
   = \sum_\nu \frac{k_B}{2} \sum_{m,m'} [W^\nu_{m,m'} P_{m'} - W^\nu_{m',m} P_m] \ln \frac{W^\nu_{m,m'} P_{m'}}{W^\nu_{m',m} P_m} + \sum_\nu \frac{k_B}{2} \sum_{m,m'} [W^\nu_{m,m'} P_{m'} - W^\nu_{m',m} P_m] \ln \frac{W^\nu_{m',m}}{W^\nu_{m,m'}}.    (52)

One thus finds:

  \dot S_i = \sum_\nu \dot S^\nu_i,    (53)
  \dot S_e = \sum_\nu \dot S^\nu_e,    (54)
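The per-reservoir bookkeeping of Eqs. (44)–(51) can be made concrete with a small steady-state example. The following sketch (all parameter values are arbitrary assumptions) couples a two-level system, with no particle exchange (µ^ν = 0), to two heat reservoirs at different temperatures, solves for the steady state, and checks that the heat fluxes (50) balance while the total entropy production, which at steady state equals -\sum_\nu \dot Q^\nu / T^\nu, is positive.

```python
import numpy as np

# Assumed example: two-level system (energies 0 and eps), two heat reservoirs.
kB, eps, Gamma = 1.0, 1.0, 1.0
T = [1.0, 2.0]                       # T^1 (cold), T^2 (hot); assumed values

def rates(Tnu):
    # Per-reservoir rate matrix obeying detailed balance, Eq. (41)
    b = 1.0 / (kB * Tnu)
    W = np.array([[0.0, Gamma],                      # down rate W[0, 1]
                  [Gamma * np.exp(-b * eps), 0.0]])  # up rate W[1, 0]
    np.fill_diagonal(W, -W.sum(axis=0))
    return W

Wnu = [rates(t) for t in T]          # Eq. (40): total rates are the sum
Wtot = Wnu[0] + Wnu[1]

# Steady state: Wtot @ P = 0 with sum(P) = 1
A = np.vstack([Wtot, np.ones(2)])
P = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

Q = [np.sum(np.array([0.0, eps]) * (W @ P)) for W in Wnu]   # Eq. (50)
Si_dot = -sum(q / t for q, t in zip(Q, T))                  # d_t S = 0 at steady state

print(abs(Q[0] + Q[1]) < 1e-12, Q[1] > 0.0, Si_dot > 0.0)
```

Heat enters from the hot reservoir (Q̇² > 0) and leaves to the cold one, in accordance with (53)–(54).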

with

  \dot S^\nu_i = \sum_{m>m'} J^\nu_{m,m'} X^\nu_{m,m'} \ge 0,    (55)
  \dot S^\nu_e = \frac{\dot Q^\nu}{T^\nu} = \sum_{m>m'} J^\nu_{m,m'} \frac{q^\nu_{m,m'}}{T^\nu},    (56)

and

  J^\nu_{m,m'} = W^\nu_{m,m'} P_{m'} - W^\nu_{m',m} P_m,    (57)
  X^\nu_{m,m'} = k_B \ln \frac{W^\nu_{m,m'} P_{m'}}{W^\nu_{m',m} P_m},    (58)

where q^\nu_{m,m'} = \epsilon_m - \epsilon_{m'} - \mu^\nu(n_m - n_{m'}), in analogy with (6). In particular, one can thus identify the separate contributions of the different reservoirs, and each \dot S^\nu_i is non-negative.

The above formulas allow one to investigate the far-from-equilibrium thermodynamics of small-scale systems, for example of quantum dots in contact with leads [16–19]. Another question that has received a lot of attention is the universal features of the efficiency of such machines when operating at maximum power [20–29]. We finally mention the simplification in the absence of particle transport, achieved by setting all the chemical potentials in the above formulas equal to zero:

  \mu^\nu = 0, \quad \forall \nu.    (59)

4. Trajectory thermodynamics

In the limit of a very large system (with no long-range correlations), one expects by the law of large numbers that the properties are self-averaging, so that an ensemble description in terms of average quantities is sufficient. In a small-scale system, this is no longer the case, and the quantities that are measured will vary from one experiment to another. One could of course still apply the above presented ensemble thermodynamics to describe the average outcome upon repeating the experiment many times. Nevertheless, due to the spectacular progress in nano-technology, the experimental measurement of trajectory properties is now within reach. Furthermore, one can wonder about the stochastic properties that are revealed at the trajectory level. But there is an even more important motivation: it turns out, as we proceed to show, that the trajectory properties reveal a much deeper formulation of the second law, a trademark of their thermodynamic content. We thus consider a small system and focus on its trajectory in the course of time in a single realization of the experiment. We will limit our presentation, for clarity and simplicity of notation, to the case of a single particle and energy reservoir.

The state of the system at time t is its actual state m = m(t). The analogues of the above introduced ensemble-averaged state functions E, N, S and Ω for this particular state are the stochastic energy e, the stochastic number of particles n, the stochastic entropy s introduced by Seifert [30], and the stochastic grand canonical potential ω:

  e = \epsilon_{m(t)}(t),    (60)
  n = n_{m(t)},    (61)
  s = -k_B \ln P_{m(t)}(t),    (62)
  \omega = e - Ts - \mu n.    (63)

We will henceforth use the lower-case notation to distinguish the stochastic variables from the corresponding ensemble-averaged quantities (it should be clear from the context not to confuse the energy e with Euler's number e = 2.718...). We have assumed that the particle number of a given state is fixed, so that n_{m(t)} has no explicit time dependence. Note that at the trajectory level an essential difference appears between the variables e and n on the one hand, and s and ω on the other hand: the latter retain an ensemble character, as they depend not only on the actual state m(t), but also on the probability distribution P_{m(t)}. Even though we are monitoring single trajectories, their thermodynamic properties are defined with respect to the stochastic dynamics of the system under consideration.

The above definitions are consistent with the ensemble description: E = ⟨e⟩, N = ⟨n⟩, S = ⟨s⟩ and Ω = ⟨ω⟩, where the averaging brackets refer to an average with respect to the probability distribution P_{m(t)}(t).

In the following it will be convenient, in order to take time-derivatives, to extract the time-dependence on the actual state m(t) with a delta-Kronecker function:

  f_{m(t)}(t) = \sum_m \delta^{Kr}_{m,m(t)} f_m(t),    (64)
  d_t f_{m(t)}(t) = \sum_m \{\dot\delta^{Kr}_{m,m(t)} f_m(t) + \delta^{Kr}_{m,m(t)} d_t f_m(t)\}.    (65)

Note that the contribution in \dot\delta^{Kr}_{m,m(t)} corresponds to a change of level occupation, from a state m' to a state m, and will give rise to a Dirac delta function contribution centered at the times t* of the jumps, with amplitude f_m(t*) - f_{m'}(t*).
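The consistency of the trajectory picture with the ensemble one, E = ⟨e⟩ etc., can be illustrated by sampling trajectories m(t) of the master equation with the Gillespie algorithm. The sketch below is an assumed example (hypothetical three-state system, arbitrary values, not from the text): after a long time the sampled states reproduce the grand-canonical distribution, and the trajectory average of e matches the ensemble energy (7).

```python
import numpy as np

# Hypothetical 3-state example (assumed values), one reservoir, static levels.
rng = np.random.default_rng(1)
kB, T, mu = 1.0, 1.0, 0.5
beta = 1.0 / (kB * T)
eps = np.array([0.0, 1.0, 2.0])
n = np.array([0, 1, 1])
w = np.exp(-beta * (eps - mu * n))
P_eq = w / w.sum()
W = np.tile(P_eq[:, None], (1, 3))   # off-diagonal rates, detailed balance
np.fill_diagonal(W, 0.0)

t_final, n_traj = 20.0, 4000
finals = []
for _ in range(n_traj):
    m, t = 0, 0.0                    # start in state 0: P_m(0) = delta_{m,0}
    while True:
        esc = W[:, m].sum()          # total escape rate out of m
        t += rng.exponential(1.0 / esc)
        if t >= t_final:
            break
        m = rng.choice(3, p=W[:, m] / esc)
    finals.append(m)

finals = np.array(finals)
P_emp = np.bincount(finals, minlength=3) / n_traj
e_avg = eps[finals].mean()           # trajectory average of e, Eq. (60)
E_eq = np.sum(eps * P_eq)            # ensemble E at equilibrium, Eq. (7)
print(np.allclose(P_emp, P_eq, atol=0.05), abs(e_avg - E_eq) < 0.1)
```

The tolerances are generous compared with the expected statistical error for 4000 samples.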

The first law at the trajectory level is obtained by differentiating (60) with respect to time:

  d_t e = \dot q + \dot w_{chem} + \dot w,    (66)
  \dot w = \sum_m \delta^{Kr}_{m,m(t)} d_t\epsilon_m(t),    (67)
  \dot q = \sum_m \epsilon_m(t) \dot\delta^{Kr}_{m,m(t)} - \dot w_{chem},    (68)
  \dot w_{chem} = \sum_m \mu n_m \dot\delta^{Kr}_{m,m(t)}.    (69)

The interpretation is essentially the same as in the ensemble description: work corresponds to the shift of an occupied level, while heat plus chemical work are the result of a transition between levels.

The second law at the trajectory level is obtained by calculating the time-derivative of the stochastic entropy (62) as follows:

  d_t s = -k_B \sum_m \left\{ \dot\delta^{Kr}_{m,m(t)} \ln \frac{P_m(t)}{P^{eq}_m(t)} + \delta^{Kr}_{m,m(t)} \frac{d_t P_m(t)}{P_m(t)} \right\} - k_B \sum_m \dot\delta^{Kr}_{m,m(t)} \ln P^{eq}_m(t).    (70)

Here we introduced the instantaneous equilibrium distribution, see also (3):

  P^{eq}_m(t) = \exp\{-\beta[\epsilon_m(t) - \mu n_m - \Omega^{eq}(t)]\}.    (71)

The crucial point is to identify the last term in the r.h.s. of (70) as the stochastic entropy flow; using (68) and (69):

  \dot s_e = -k_B \sum_m \dot\delta^{Kr}_{m,m(t)} \ln P^{eq}_m(t) = \frac{1}{T} \sum_m \dot\delta^{Kr}_{m,m(t)} \{\epsilon_m(t) - \mu n_m\} = \frac{\dot q}{T}.    (72)

From the decomposition

  d_t s = \dot s_i + \dot s_e,    (73)

first obtained by Seifert in Ref. [30], we conclude that the stochastic entropy production is given by:

  \dot s_i = -k_B \sum_m \left\{ \dot\delta^{Kr}_{m,m(t)} \ln \frac{P_m(t)}{P^{eq}_m(t)} + \delta^{Kr}_{m,m(t)} \frac{d_t P_m(t)}{P_m(t)} \right\}.    (74)

It consists of two different terms. The first one, in \dot\delta^{Kr}, gives discontinuous contributions, appearing at the instances when the actual state of the system changes: the stochastic entropy production jumps by an amount k_B \ln [P_{m'} P^{eq}_m / (P_m P^{eq}_{m'})] for a change of the state from m' to m. The second one corresponds to a smooth contribution: it is positive or negative depending on whether the actual state becomes less or more probable in time. Note that neither of the above contributions has a definite sign.

By averaging the result (72) for the entropy flow with P_{m(t)}(t), one recovers its ensemble version (18). To do the same for the entropy production requires a little bit more work. The probability per unit time for a jump from m' to m is W_{m,m'} P_{m'}. By summation over all the actual states m(t) = m' (i.e., this index becomes a dummy summation variable), one finds that the second term in the stochastic entropy production (74) averages out to zero. The average ⟨\dot s_i⟩ of the stochastic entropy production thus becomes (note that the averaging brackets now refer to an average with respect to W_{m,m'} P_{m'}, and that we have dropped the explicit t-dependence for simplicity of notation):

  \dot S_i = k_B \sum_{m,m'} W_{m,m'} P_{m'} \ln \frac{P^{eq}_m P_{m'}}{P^{eq}_{m'} P_m} = \frac{k_B}{2} \sum_{m,m'} \{W_{m,m'} P_{m'} - W_{m',m} P_m\} \ln \frac{W_{m,m'} P_{m'}}{W_{m',m} P_m} \ge 0,    (75)

where we used the detailed balance condition (5) in the form P^{eq}_m / P^{eq}_{m'} = W_{m,m'} / W_{m',m}. This is indeed the expression for the ensemble entropy production given earlier.
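The trajectory first law (66)–(69) can be checked by explicit bookkeeping along one simulated realization. The sketch below is an assumed example (driven two-level system, ϵ_0 = 0 and ϵ_1 = λ(t) ramped from 0 to 1, µ = 0 so that ẇ_chem = 0; all parameter values arbitrary): work accrues while the occupied level shifts, heat accrues at jumps, and their sum equals the total change of the stochastic energy.

```python
import numpy as np

# Assumed driven two-level example: eps_0 = 0, eps_1 = lambda(t), mu = 0.
rng = np.random.default_rng(7)
kB, T, Gamma, tau, dt = 1.0, 1.0, 1.0, 5.0, 1e-3
beta = 1.0 / (kB * T)

m, lam = 0, 0.0
q = w = 0.0
e0 = 0.0                             # e(0) = eps_{m(0)} with m(0) = 0, Eq. (60)
for _ in range(int(tau / dt)):
    dlam = dt / tau                  # protocol: lambda ramped linearly 0 -> 1
    if m == 1:
        w += dlam                    # work: shift of the occupied level, Eq. (67)
    lam += dlam
    rate = Gamma * np.exp(-beta * lam) if m == 0 else Gamma
    if rng.random() < rate * dt:
        q += lam if m == 0 else -lam  # heat: energy change at a jump, Eq. (68)
        m = 1 - m

e_final = lam if m == 1 else 0.0
print(np.isclose(e_final - e0, q + w))
```

The balance holds exactly (up to floating-point rounding) for every realization, since each elementary energy change is booked either as work or as heat.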

Turning to the stochastic grand potential (63), we note that for a distribution of the equilibrium form, P_m = P^{eq}_m(t), in particular for a quasi-static process, the stochastic grand potential becomes independent of the state and reduces to the instantaneous ensemble value (using the explicit expression (62) of the stochastic entropy):

  \omega^{eq}(t) = \Omega^{eq}(t).    (76)

Combining the stochastic first and second laws, one finds (compare with (34)):

  T \dot s_i = \dot w - d_t \omega.    (77)

For a transition between an initial and a final equilibrium state (but with no conditions on the state of the system in the intermediate process), ∆ω = ∆Ω^{eq} by (76), and one can thus write by integration of (77):

  T \Delta_i s = w - \Delta\Omega^{eq},    (78)

with the important consequence that the statistical properties of the entropy production ∆_i s and of the work w are the same, apart from a shift by ∆Ω^{eq} and a rescaling by T. Furthermore, the difference of the nonequilibrium and equilibrium stochastic grand potential can be written as follows:

  \omega - \omega^{eq} = k_B T i,    (79)
  i = \ln \frac{P_m}{P^{eq}_m},    (80)

which is the stochastic analogue of (35).

5. Path description

The expression (74) for the entropy production is neither physically nor mathematically transparent. A more revealing expression is obtained by considering the cumulated entropy production ∆_i s = ∆_i s(m) along a trajectory. We represent a system trajectory by m, with m(t) referring to the state of the system as a function of time t ∈ [t_i = 0, t_f = τ]. The corresponding probability for such a trajectory will be denoted by P(m). We also need to define an "inverse experiment", corresponding to reversing the time-dependence of the "driving", i.e., of the transition rates; for example t̃ = τ − t, t̃_i = τ − t_f = 0, etc. We will denote by a superscript "tilde" the quantities related to the reverse experiment: m̃ refers to the time-inverse trajectory of m, with m(t) = m̃(t̃), and P̃(m̃) to the probability for the time-reversed path in the time-reversed experiment, while using as starting probability for the states in the inverse experiment the final distribution of the states in the forward experiment, P_{m(t_f)}(t_f) = P̃_{m̃(t̃_i)}(t̃_i).

We will now prove the following main result: the cumulated entropy production ∆_i s(m) along a forward trajectory m is the log-ratio of the probabilities to observe this trajectory in the forward and backward experiments, respectively:

  \Delta_i s(m) = k_B \ln \frac{P(m)}{\tilde P(\tilde m)}.    (81)

Note that the probability for a trajectory is in fact a probability density defined in the function space of trajectories, but as only ratios of such quantities appear (and the Jacobian for the transformation to the time-inverse variables is one), the above expression is well defined.

The proof is surprisingly simple. One can identify three contributions to the probability for a trajectory. First, we have the starting probabilities for the direct and reverse paths, P_{m(t_i)}(t_i) and P̃_{m̃(t̃_i)}(t̃_i) = P_{m(t_f)}(t_f), the latter by construction. The log-ratio of these quantities gives the stochastic entropy change of the forward trajectory, ∆s(m) = k_B ln P_{m(t_i)}(t_i) − k_B ln P_{m(t_f)}(t_f). Second, we have the probabilities to make jumps, say from m to m' in the forward process and hence from m' to m in the reverse experiment. The log-ratio of these probabilities is ln(W_{m',m}/W_{m,m'}) = −q_{m',m}/(k_B T) (evaluated at the time of the jump), which is the heat −q_{m',m} leaving the system divided by k_B times the temperature. When summed over all transitions (times k_B), it gives −∆_e s(m). Finally, the probabilities for not making jumps are at every instant of time the same in the forward and reverse experiments, so these contributions cancel out. We conclude that the log-ratio of the path probabilities is ∆s(m) − ∆_e s(m) = ∆_i s(m).

6. Fluctuation and work theorem

The cumulated entropy production ∆_i s is a random variable: it has the value specified in (81) when the system has followed the specified trajectory m, which happens with probability P(m). The resulting probability for ∆_i s is given by the following path integral:

  P(\Delta_i s) = \int dm \; \delta\left(\Delta_i s - k_B \ln \frac{P(m)}{\tilde P(\tilde m)}\right) P(m).    (82)
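The path identity (81) can be made concrete for the simplest possible trajectory. The sketch below is an illustrative assumed example (undriven two-level system, one jump, arbitrary values): it evaluates the forward and reverse path densities explicitly and checks that their log-ratio equals ∆s − ∆_e s. The waiting-time factors are written out even though they cancel.

```python
import numpy as np

# Assumed two-level example: static levels, no particles, one reservoir.
kB, T = 1.0, 1.0
beta = 1.0 / (kB * T)
eps = np.array([0.0, 1.0])
W = np.array([[0.0, 1.0],
              [np.exp(-beta * (eps[1] - eps[0])), 0.0]])  # detailed balance, Eq. (5)
a = W.sum(axis=0)                    # escape rates a_m
Wfull = W - np.diag(a)               # generator of Eqs. (1)-(2)

tau, t1, m0, m1 = 2.0, 0.7, 0, 1     # trajectory: one jump m0 -> m1 at time t1
P_i = np.array([0.8, 0.2])           # arbitrary initial distribution

# final ensemble distribution at time tau (fine Euler integration)
P_f = P_i.copy()
for _ in range(20000):
    P_f = P_f + (tau / 20000) * (Wfull @ P_f)

# forward path density, and reverse path density (same rates, started from P_f)
path_f = P_i[m0] * np.exp(-a[m0] * t1) * W[m1, m0] * np.exp(-a[m1] * (tau - t1))
path_r = P_f[m1] * np.exp(-a[m1] * (tau - t1)) * W[m0, m1] * np.exp(-a[m0] * t1)
di_s_path = kB * np.log(path_f / path_r)            # Eq. (81)

ds = kB * np.log(P_i[m0]) - kB * np.log(P_f[m1])    # Delta s, from Eq. (62)
de_s = (eps[m1] - eps[m0]) / T                      # heat absorbed in the jump / T
print(np.isclose(di_s_path, ds - de_s))
```

Since the waiting-time exponentials appear identically in both path densities, the comparison is an exact identity for any choice of P_i, t1 and tau.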

Due to the very specific structure of ∆_i s, namely the fact that it is the log-ratio of probabilities, one can perform the following trick:

  P(\Delta_i s) = \int dm \; \delta\left(\Delta_i s - k_B \ln \frac{P(m)}{\tilde P(\tilde m)}\right) P(m)
   = \exp\left(\frac{\Delta_i s}{k_B}\right) \int dm \; \delta\left(\Delta_i s - k_B \ln \frac{P(m)}{\tilde P(\tilde m)}\right) \tilde P(\tilde m)
   = \exp\left(\frac{\Delta_i s}{k_B}\right) \int d\tilde m \; \delta\left(-\Delta_i s - k_B \ln \frac{\tilde P(\tilde m)}{P(m)}\right) \tilde P(\tilde m)
   = \exp\left(\frac{\Delta_i s}{k_B}\right) \tilde P(-\Delta_i s),    (83)

where we have used the fact that the Jacobian for the transformation from m to m̃ is equal to one. This result is usually written under the following form:

  \frac{P(\Delta_i s)}{\tilde P(-\Delta_i s)} = \exp\left(\frac{\Delta_i s}{k_B}\right),    (84)

also known as the detailed fluctuation theorem [31]. In words: a stochastic entropy increase in the forward dynamics is exponentially more probable than the corresponding decrease in the reverse dynamics.

At this point it is important to clarify the meaning of:

  \tilde P(-\Delta_i s) = \int d\tilde m \; \delta\left(-\Delta_i s - k_B \ln \frac{\tilde P(\tilde m)}{P(m)}\right) \tilde P(\tilde m).    (85)

This is obviously the probability density to observe, in the reverse dynamics, a trajectory m̃ whose inverse trajectory m has an entropy production ∆_i s. It is quite natural to wonder what is the relation, if any, with the entropy production ∆_i s̃(m̃) of the reverse trajectory m̃ in the reverse scenario. By applying (81) to the reverse process, one finds that the corresponding entropy production reads:

  \Delta_i \tilde s(\tilde m) = k_B \ln \frac{\tilde P(\tilde m)}{\tilde{\tilde P}(\tilde{\tilde m})}.    (86)

It is obvious that m̃̃ = m. Indeed, the transition rates of the doubly tilded stochastic process (twice time-inversion of the driving) are again the original ones, W̃̃ = W, so that P and P̃̃ obey the same master equation, and it is tempting to assume that P̃̃ = P, implying ∆_i s̃(m̃) = −∆_i s(m). But P̃̃ = P also requires that their initial distributions coincide. For (86) to correspond to the entropy production in the reverse process, the initial probability distribution of P̃̃ has to be the final probability of P̃. In general the latter distribution will be different from the initial distribution of the forward process, so that P̃̃ ≠ P. Hence, there is in general no simple relation between the entropy productions of the forward and backward processes, and the fluctuation theorem (84) is a statement only about the forward entropy production.

There are however several cases of interest where the initial conditions match. One prominent situation is a system starting and ending in a stationary state. This is obviously the case for nonequilibrium steady states (same stationary state all the time), but also if, both before and after the time-dependent perturbation, the system has enough time to relax to the corresponding steady state. A particularly interesting special case of this type arises if we consider a system starting and ending in an equilibrium state. Indeed, the stochastic entropy production is then directly related to the work performed on the system, cf. (78), and the statistical properties of the entropy production carry over to the work, leading to the so-called Crooks relation [32–34]:

  \frac{P(w)}{\tilde P(-w)} = \exp\left(\frac{w - \Delta\Omega^{eq}}{k_B T}\right).    (87)

A key remark at this stage is that the fluctuating work (67) naturally stops evolving when the time-dependent perturbation stops. The ending state can therefore be any arbitrary nonequilibrium state, with ∆Ω^{eq} evaluated at the final value of the time-dependent perturbation. The Crooks relation thus only requires the initial condition to be an equilibrium state.

We next turn to another relation that derives from (84), irrespective of whether or not P̃̃ = P. Indeed we have:

  \int d\Delta_i s \; \exp\left(\frac{-\Delta_i s}{k_B}\right) P(\Delta_i s) = \int d\Delta_i s \; \tilde P(-\Delta_i s) = 1,    (88)

where we use the fact that P̃ is a probability density, hence normalized to one.
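Multiplying the Crooks relation (87) by P̃(−w) and integrating over w gives the work relation ⟨exp(−w/k_B T)⟩ = exp(−∆Ω^{eq}/k_B T). The sketch below checks this numerically for an assumed protocol (two-level system, ϵ_0 = 0, ϵ_1 = λ(t) ramped linearly from 0 to 1, equilibrium initial condition; all parameter values are arbitrary choices, not from the text), and also checks the second-law inequality ⟨w⟩ ≥ ∆Ω^{eq}.

```python
import numpy as np

# Assumed protocol: eps_1 = lambda(t) ramped 0 -> 1 in time tau; mu = 0.
rng = np.random.default_rng(0)
kB, T, Gamma, tau, dt = 1.0, 1.0, 1.0, 1.0, 1e-3
beta = 1.0 / (kB * T)
steps = int(tau / dt)
dlam = dt / tau
lams = (np.arange(steps) + 1) * dlam       # lambda after each step
ups = Gamma * np.exp(-beta * lams)         # up rates, detailed balance at each lam

Om = lambda lam: -kB * T * np.log(1.0 + np.exp(-beta * lam))  # Eq. (4), mu = 0
dOmega = Om(1.0) - Om(0.0)

works = np.empty(5000)
for i in range(works.size):
    m = 1 if rng.random() < 0.5 else 0     # equilibrium start: P_1^eq(0) = 1/2
    w = 0.0
    for k in range(steps):
        if m == 1:
            w += dlam                      # work: shift of the occupied level
        rate = ups[k] if m == 0 else Gamma
        if rng.random() < rate * dt:
            m = 1 - m
    works[i] = w

jar = np.mean(np.exp(-beta * works))
print(abs(jar - np.exp(-beta * dOmega)) < 0.02, works.mean() > dOmega)
```

The exponential average converges to exp(−β∆Ω^{eq}) even though the driving is fast and the final state is out of equilibrium, in line with the remark above.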

We thus obtain the so-called integral fluctuation theorem, which is valid irrespective of any conditions on the initial or final states [30]:

  \left\langle \exp\left(\frac{-\Delta_i s}{k_B}\right) \right\rangle = 1.    (89)

If the initial state corresponds to equilibrium, we obtain from (78) the famous Jarzynski relation [35,36] (usually written in the absence of particle exchange, with the Helmholtz free energy F replacing the grand potential Ω):

  \left\langle \exp\left(\frac{-w}{k_B T}\right) \right\rangle = \exp\left(\frac{-\Delta\Omega^{eq}}{k_B T}\right).    (90)

As for the Crooks relation, the final state need not be at equilibrium.

As a closing remark, we mention that the integral fluctuation relation, and a fortiori the detailed fluctuation relation, imply the positivity – on average – of the entropy production:

  \langle \Delta_i s \rangle \ge 0.    (91)

This result can be reproduced in a more direct and revealing way from the result (81) for the stochastic entropy production. Its average is given by:

  \langle \Delta_i s \rangle = k_B D(P(m) \| \tilde P(\tilde m)),    (92)

where D is the relative entropy introduced earlier:

  D(P(m) \| \tilde P(\tilde m)) = \int dm \; P(m) \ln \frac{P(m)}{\tilde P(\tilde m)}.    (93)

This quantity is zero if and only if P(m) = P̃(m̃), ∀m, i.e., if every single trajectory and its reverse are equally probable. This reveals again the stringent conditions associated with the absence of entropy production, namely the vanishing of any time-asymmetry. This observation is in accordance with the founding principle of the second law, namely the absence of a perpetuum mobile of the second kind: any time-asymmetry in a trajectory could in principle be used to extract work, hence the system cannot be at equilibrium under such a condition.

7. Discussion

Stochastic thermodynamics offers a good marriage between statistical physics and thermodynamics. We described the system via its microscopic energy states, with Markovian dynamics induced by its contact with idealized work and reservoirs. One may wonder whether the obtained results are limited to this particular setting, or are in fact much more general and deep. As mentioned in the introduction, many of the results, and in particular the detailed and integral work and fluctuation theorems, have been obtained in other settings, including surprising findings such as integral and detailed fluctuation theorems for quantities other than the entropy production, lending credence to the belief that we are indeed dealing with a more profound reformulation of thermodynamics. The further development and consolidation of this theory is in fact in full swing. We have not discussed here the microscopic origin of irreversibility: the master equation is irreversible from the very start. Nevertheless, the picture presented by stochastic thermodynamics is fully consistent with a microscopic derivation that addresses this issue, see in particular [37].

Acknowledgments

This research is supported by the research network "Exploring the Physics of Small Devices" of the European Science Foundation. C.V.d.B. is supported by the "Fonds voor Wetenschappelijk Onderzoek Vlaanderen" (grant G021514N), and M.E. by the "National Research Fund of Luxembourg" (projects FNR/A11/02 and INTER/FWO/13/09).

References

[1] L. Onsager, Phys. Rev. 37 (1931) 405.
[2] L. Onsager, Phys. Rev. 38 (1931) 2265.
[3] I. Prigogine, Thermodynamics of Irreversible Processes, Wiley-Interscience, 1961.
[4] S. de Groot, P. Mazur, Non-Equilibrium Thermodynamics, Dover, 1984.
[5] D. Evans, G. Morris, Statistical Mechanics of Nonequilibrium Liquids, Cambridge University Press, 2008.
[6] C. Bustamante, J. Liphardt, F. Ritort, Phys. Today 58 (7) (2005) 43.
[7] U. Seifert, Rep. Progr. Phys. 75 (2012) 126001.
[8] M. Esposito, U. Harbola, S. Mukamel, Rev. Modern Phys. 81 (2009) 1665.
[9] C. Jarzynski, Annu. Rev. Condens. Matter Phys. 2 (2011) 329.
[10] M. Campisi, P. Hänggi, P. Talkner, Rev. Modern Phys. 83 (2011) 771.
[11] C. Van den Broeck, Il Nuovo Cimento (2012), http://dx.doi.org/10.3254/978-1-61499-278-3-155.

Qian. Esposito. Phys. Rev. C. Lett. 78 (1997) 2690. [17] M. Brandes. . [14] L. Brandner. K. C. Rev. Crooks. Lett. Harbola. Phys. Van den Broeck. M. Van den Broeck. [26] U. Rutten. R. H. EPL 89 (2010) 20003. Esposito. [37] M. [20] C. EPL 85 (2009) 60010. 106 (2011) 020601. Phys. Wiley. 108 (2012) 120603. U. Seifert. C. Kawai. 2006. Mukamel. Van den Broeck. Qian. Rev. B. New J. E 61 (2000) 2361. Elements of Information Theory. Thomas. Rutten. 95 (2005) 190602. Strasberg. 510 (2012) 87. B 80 (2009) 235122. Phys. [31] M. Phys. E. M. Esposito. Van den Broeck. A 41 (2008) 312003. Phys. [36] C. Lindenberg. Rev. Lett. 98 (2007) 108301.-J. Esposito. H. A. K. Rev. M. Crooks. Lindenberg. Esposito. M. 510 (2012) 1.5812. Phys. [24] B. B. Schaller. J. B. G. R. K. Phys. Lett. Rev. Seifert. T. Jarzynski. Lindenberg. C. C. Rev. E. [28] K. [30] U. 104 (2010) 090601. Lindenberg. Stat. [33] G. [21] T. 12 (2010) 013013. Esposito. Wiley. C. Phys. K. Van den Broeck. 90 (1998) 1481. Rev. Van den Broeck. Seifert. K. Phys. Rev. K. M. A Modern Course in Statistical Physics. Van den Broeck. Cover. Qian. Van den Broeck. Lett. Saito. 75 (2007) 155316. Crooks. Rev. Lett. Rev. C. Lett. Rev. [16] M. arXiv:1312. S. J. 102 (2009) 130602. 95 (2005) 040602. Jarzynski. [23] M. Rev. Cleuren. Phys. Phys. [34] G. J. Rep. Qian. M. Cleuren. Phys. [27] B. C. Rep. [19] P. Lett. Lindenberg. E 56 (1997) 5018. U. Ge. E 60 (1999) 2721. C. Tu. Esposito. [25] M. E. 110 (2013) 070603. Reichl. Rev. C. Van den Broeck. Phys. Esposito. Lett. E. Esposito. Phys. 2013. E 81 (2010) 041106. [15] T. [32] G. [35] C. Tu. Phys. [13] H. [22] Z. Rev. Zhang. Phys. Esposito / Physica A ( ) – 11 [12] X. Kawai. E 88 (2013) 062107. Seifert. Phys. U. Phys. 2009. Phys. [29] Z. [18] M. Rev. Phys. Schmiedl.