A Modern Course in Statistical Physics 2nd Edition L. E. REICHL A Wiley-Interscience Publication JOHN WILEY & SONS, INC. New York + Chichester - Weinheim + Brisbane - Singapore - Toronto This book is printed on acid-free paper. ©. Copyright © 1998 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (508) 750-8400, fax (508) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ @ WILEY.COM. Library of Congress Cataloging-in-Publication Data: Reichl, L. E. ‘A modern course in statistical physics/by L. E. Reichl. — 2nd ed. Poem, Includes bibliographical references and index. ISBN 0-471-59520-9 (cloth : alk. paper) 1. Statistical physics. I. Title QCI174.8.R44 1997 97-13550 530.15°95—de21 cIP Printed in the United States of America 10987654 This book is dedicated to Ilya Prigogine for his encouragement and support and because he has changed our view of the world. CONTENTS Preface xix 1. Introduction 1 1A. Overview 1 1.B. Plan of Book 2 1.C. Use as a Textbook 5 PART ONE THERMODYNAMICS 2. Introduction to Thermodynamics 9 2.A. Introductory Remarks 9 2.B. State Variables and Exact Differentials 11 2.C. Some Mechanical Equations of State 16 2.C.1. Ideal Gas Law 16 2.C.2. Virial Expansion 17 2.C.3. Van der Waals Equation of State 18 2.C.4. — Solids 19 2.C.5. _ Elastic Wire or Rod 19 2.C.6. Surface Tension 20 2.C.7.__ Electric Polarization 20 2.C.8. Curie’s Law 21 2D. The Laws of Thermodynamics 21 2.D.1. Zeroth Law 22 2.D.2. First Law 22 2.D.3. | Second Law 23 2.D.4. Third Law 31 2.E, Fundamental Equation of Thermodynamics 33 2.R Thermodynamic Potentials 36 2.1. — Internal Energy 37 2.2. — Enthalpy 40 2.F3. Helmholz Free Energy 42 2.F4. Gibbs Free Energy 45 2.2.5. Grand Potential 48 viii CONTENTS 2.G. Response Functions 50 2.G.1. Thermal Response Functions (Heat Capacity) 50 2.G.2. Mechanical Response Functions 53 2.H. Stability of the Equilibrium State 55 2.H.1. Conditions for Local Equilibrium in a PVT System 55 2.H.2. Conditions for Local Stability in a PVT System 57 2.H.3. Implications of the Stability Requirements for the Free Energies 63 $2.A. Cooling and Liquefactions of Gases 66 $2.A.1. The Joule Effect: Free Expansion 66 $2.A.2, The Joule—Kelvin Effect: Throttling 68 $2.B. Entropy of Mixing and the Gibbs Paradox 72 S2.C. Osmotic Pressure in Dilute Solutions 74 $2.D. The Thermodynamics of Chemical Reactions 78 S2.D.1. The Affinity 78 S2.D.2. Stability 82 S2.E. The Thermodynamics of Electrolytes 86 References 89 Problems 90 3. The Thermodynamics of Phase Transitions 96 3.A. Introductory Remarks 96 3.B. Coexistence of Phases: Gibbs Phase Rule 98 3.C. Classification of Phase Transitions 100 3.D. Pure PVT Systems 103 3.D.1. Phase Diagrams 103 3.D.2.__ Coexistence Curves: Clausius—Clapyron Equation 105 3.D.3. Liquid-Vapor Coexistence Region 110 3.D.4.__ The van der Waals Equation 115 3.E. Superconductors 118 3.F. 
The Helium Liquids 123 3.F1. Liquid He* 123 Liquid He? 124 3.R: Liquid He?-He* Mixtures 126 3.G. Landau Theory 128 3.G.1. Continuous Phase Transitions 128 3.G.2. _ First-Order Transitions 134 CONTENTS, 3.H. Critical Exponents 135 3.H.1. Definition of Critical Exponents 136 3.H.2. The Critical Exponents for Pure PVT Systems 137 $3.A. Surface Tension 142 $3.B. Thermomechanical Effect 146 §3.C. The Critical Exponents for the Curie Point 149 $3.D. Tricritical Points 151 S3.E. Binary Mixtures 153 S3.E.1. Stability Conditions 154 $3.E.2. Equilibrium Conditions 155 S3.E.3. Coexistence Curve 160 S3.F. The Ginzburg-Landau Theory of Superconductors 162 References 166 Problems 167 PART TWO CONCEPTS FROM PROBABILITY THEORY 4. Elementary Probability Theory and Limit Theorems 173 4.A. Introduction 173 4.B. Permutations and Combinations 174 4.C. Definition of Probability 175 4.D. Stochastic Variables and Probability 177 4.D.1. Distribution Functions 178 4.D.2. | Moments 180 4.D.3. Characteristic Functions 182 4.D.4. Jointly Distributed Stochastic Variables 183 4.E. Binomial Distributions 188 4.E.1. The Binomial Distribution 188 4.E.2. The Gaussian (For Normal) Distribution 191 4.E.3. The Poisson Distribution 192 4.E4, Binomial Random Walk 194 4.7, A Central Limit Theorem and Law of Large Numbers 197 4.F1. A Central Limit Theorem 197 4.F2. The Law of Large Numbers 198 S4.A. Lattice Random Walk 199 $4.A.1. One-Dimensional Lattice 200 $4.A.2. Random Walk in Higher Dimension 203 x S4.B. S4.C. S4.D. S4.E. References Problems CONTENTS Infinitely Divisible Distributions $4.B.1. Gaussian Distribution $4.B.2. Poisson Distribution S4.B.3. Cauchy Distribution S4.B.4. Levy Distribution The Central Limit Theorem S4.C.1. Useful Inequalities S4.C.2. Convergence to a Gaussian Weierstrass Random Walk $4.D.1. Discrete One-Dimensional Random Walk S4.D.2. Continuum Limit of One-Dimensional Discrete Random Walk S4.D.3. Two-Dimensional Discrete Random Walk (Levy Flight) General Form of Infinitely Divisible Distributions S4.E.1. Levy-Khintchine Formula S4.E.2. Kolmogorov Formula 5. Stochastic Dynamics and Brownian Motion 5.A. 5.B. 5.C. 5.D. 5.E. S5.A. S5.B. Introduction General Theory Markov Chains 5.C.1. Spectral Properties 5.C.2. | Random Walk The Master Equation 5.D.1. Derivation of the Master Equation 5.D.2. Detailed Balance 5.D.3. Mean First Passage Time Brownian Motion 5.E.1. Langevin Equation 5.E.2. The Spectral Density (Power Spectrum) Time Periodic Markov Chain Master Equation for Birth-Death Processes S5.B.1. The Master Equation §5.B.2. Linear Birth-Death Processes $5.B.3. Nonlinear Birth-Death Processes 207 208 209 209 210 211 212 213 214 215 217 218 221 222 223 225 225 229 229 231 234 234 240 242 244 250 251 254 258 260 260 261 265 CONTENTS xi S5.C. The Fokker-Planck Equation 266 $5.C.1. Probability Flow in Phase Space 266 $5.C.2. Probability Flow for Brownian Particle 267 $5.C.3. The Strong Friction Limit 270 $5.C.4. Solution of Fokker-Planck Equations with One Variable 271 S5.D. Approximations to the Master Equation 216 References 278 Problems 279 6. The Foundations of Statistical Mechanics 285 6.A. Introduction 285 6.B. The Liouville Equation of Motion 286 6.C. Ergodic Theory and the Foundation of Statistical Mechanics 296 6D. The Quantum Probability Density Operator 303 S6.A. Reduced Probability Densities and the BBGKY Hierarchy 310 S6.B. Reduced Density Matrices and the Wigner Distribution 314 S6.C. Microscopic Balance Equations 319 S6.D. Mixing Flow 321 S6.E. Anharmonic Oscillator Systems 326 S6.F. 
Newtonian Dynamics and Irreversibility 334 References 335 Problems 336 PART THREE EQUILIBRIUM STATISTICAL MECHANICS 7. Equilibrium Statistical Mechanics 341 7.A. Introduction 341 7.B. The Microcanonical Ensemble 343 7.C. Einstein Fluctuation Theory 349 7.C.1. General Discussion 349 7.C.2. Fluid Systems 351 7.D. The Canonical Ensemble 354 7.D.1. Probability Density Operator 354 7.D.2. Systems of Indistinguishable Particles 357 7.D.3. Systems of Distinguishable Particles 362 7.E. Heat Capacity of a Debye Solid 364 LE. 7G. 7.H. S7.A. S7.B. S7.C. References Problems CONTENTS Order-Disorder Transitions 7.F1. Exact Solution for a One-Dimensional Lattice 7.F2. Mean Field Theory for a d-Dimensional Lattice The Grand Canonical Ensemble Ideal Quantum Gases 7.H.1, Bose-Einstein Gases 7.H.2. FermiDirac Ideal Gases Heat Capacity of Lattice Vibrations on a One- Dimensional Lattice—Exact Solution S7.A.1. Exact Expression—Large N $7.A.2. Continuum Approximation—Large N Momentum Condensation in an Interacting Fermi Fluid The Yang-Lee Theory of Phase Transitions 8. Order-Disorder Transitions and Renormalization Theory 8.A. 8.B. 8&.C. 8.D. S8.A. S8.B. References Problems Introduction Static Correlation Functions and Response Functions 8.B.1. General Relations 8.B.2. Application to the Ising Lattice Scaling 8.C.1. | Homogeneous Functions 8.C.2. Widom Scaling 8.C.3. Kadanoff Scaling Microscopic Calculation of Critical Exponents Critical Exponents for the S* Model Exact Solution of the Two-Dimensional Ising Model $8.B.1. Partition Function $8.B.2. Antisymmetric Matrices and Dimer Graphs S8.B.3. Closed Graphs and Mixed Dimer Graphs $8.B.4. Partition Function for Infinite Planar Lattice 9. Interacting Fluids 9.A. 9B. Introduction Thermodynamics and the Radial Distribution Function 369 370 372 377 381 383 392 401 404 406 407 418 422 423 427 427 428 429 431 433 433 434 437 440 448 462 462 466 469 475 485 486 488 489 CONTENTS xiii 9.C. Virial Expansion of the Equation of State 492 9.C.1. Virial Expansions and Cluster Functions 493 9.C.2. The Second Virial Coefficient 500 9.C.3. Higher-Order Virial Coefficients 506 S9.A. The Pressure and Compressibility Equations 507 S9.A.1. The Pressure Equation 508 S9.A.2. The Compressibility Equation 509 S9.B. Omstein-Zernicke Equation 510 S9.C. Third Virial Coefficient 513 S9.C.1. Square-Well Potential 514 $9.C.2. Lennard-Jones 6-12 Potential 515 S9.D. Virial Coefficients for Quantum Gases 517 References 526 Problems 527 PART FOUR NONEQUILIBRIUM STATISTICAL MECHANICS 10. Hydrodynamic Processes Near Equilibrium 531 10.A. Introduction 531 10.B. Navier-Stokes Hydrodynamic Equations 533 10.B.1. Balance Equations 534 10.B.2._ Entropy Source and Entropy Current 537 10.B.3. Transport Coefficients 541 10.C. Linearized Hydrodynamic Equations 544 10.C.1. Linearization of the Hydrodynamic Equations 545 10.C.2. Transverse Hydrodynamic Modes 549 10.C.3. Longitudinal Hydrodynamic Modes 550 10.D. Dynamic Equilibrium Fluctuations and Transport Processes 552 10.D.1. Onsager’s Relations 553 10.D.2. Weiner-Khintchine Theorem 557 10.E. Linear Response Theory and the Fluctuation—Dissipation Theorem 561 10.E.1. The Response Matrix 562 10.E.2. Causality 563 10.E.3. The Fluctuation—Dissipation Theorem 568 10.E.4. Power Absorption 570 10.F. Transport Properties of Mixtures 574 10.F.1. Entropy Production in Multicomponent Systems 574 S10.A. $10.B. $10.C. $10.D. S10.E. S10.F $10.G. $10.H. S101. References Problems CONTENTS 10.R2. Fick’s Law for Diffusion 10.F.3. Thermal Diffusion 10.F.4. 
Electrical Conductivity and Diffusion in Fluids Onsager’s Relations When a Magnetic Field is Present Microscopic Linear Response Theory Light Scattering $10.C.1. Scattered Electric Field $10.C.2. Intensity of Scattered Light Thermoelectricity $10.D.1. The Peltier Effect $10.D.2. The Seebeck Effect $10.D.3. Thomson Heat Entropy Production in Discontinuous Systems $10.E.1. Volume Flow Across a Membrane S10.E.2. Ion Transport Across a Membrane Stochastic Hydrodynamics $10.F.1. Stochastic Hydrodynamic Equations $10.F.2. Properties of Equilibrium Correlation Functions $10.F.3. Random Current Correlation Functions Long-Time Tails $10.G.1. Fluid Flow Around the Brownian Particle $10.G.2. Drag Force on the Brownian Particle $10.G.3. Velocity Autocorrelation Function Superfluid Hydrodynamics $10.H.1. Superfluid Hydrodynamic Equations $10.H.2. Sound Modes General Definition of Hydrodynamic Modes $10.11. Projection Operators $10.1.2. Conserved Quantities $10.13. Hydrodynamic Modes Due to Broken Symmetry 11. Transport Theory 11.A. 11.B. Introduction Elementary Transport Theory 11.B.1. The Maxwell-Boltzmann Distribution 11.B.2. The Mean Free Path 580 582 583 586 589 592 594 597 600 603 605 605 606 610 612 613 614 617 620 621 623 624 631 631 635 639 640 642 649 650 656 656 657 657 658 CONTENTS 11L.C. 1D. 11.G. SILA. References Problems 11.B.3. The Collision Frequency 11.B.4. Self-Diffusion 11.B.5. The Coefficients of Viscosity and Thermal Conductivity 11.B.6. The Rate of Reaction The Boltzmann Equation 11.C.1. Two-Body Scattering 11.C.2. Derivation of the Boltzmann Equation 11.C.3. Boltzmann’s H Theorem Linearized Boltzmann and Lorentz—Boltzmann Equations 11.D.1. Kinetic Equations for a Two-Component Gas 11.D.2. Collision Operators Coefficient of Self-Diffusion 11.E.1. Derivation of the Diffusion Equation 11.E.2. Eigenfrequencies of the Lorentz— Boltzmann Equation Coefficients of Viscosity and Thermal Conductivity 11.F1. Derivation of the Hydrodynamic Equations 1 Eigenfrequencies of the Boltzmann Equation 11.3. Shear Viscosity and Thermal Conductivity Computation of Transport Coefficients Sonine Polynomials Diffusion Coefficient Thermal Conductivity 11.G.4. Shear Viscosity Beyond the Boltzmann Equation 12. Nonequilibrium Phase Transitions 12.A. 12.B. 12.C. 12.D. Introduction Nonequilibrium Stability Criteria 12.B.1. Stability Conditions Near Equilibrium 12.B.2. Stability Conditions Far From Equilibrium The Schlog! Model The Brusselator 12.D.1. The Brusselator—A Nonlinear Chemical Model 12.D.2. Boundary Conditions xv 659 661 664 666 670 671 679 680 682 682 684 688 688 690 691 692 700 701 702 703 704 708 710 17 718 721 721 722 723 726 732 735 736 737 xvi CONTENTS 12.D.3, Linear Stability Analysis 739 12.E. The Rayleigh-Bénard Instability 742 12.E.1. Hydrodynamic Equations and Boundary Conditions 743 12.B.2. Linear Stability Analysis 747 $12.A. Fluctuations Near a Nonequilibrium Phase Transition 753 $12.A.1. Fluctuations in the Rayleigh-Bénard System 753 $12.A.2. Fluctuations in the Brusselator 760 $12.A.3. The Time-Dependent Ginzburg-Landau Equation 764 References 765, Problems 767 APPENDICES A. Balance Equations 768 A.l. General Fluid Flow 768 A.2. General Balance Equation mm References 773 B. Systems of Identical Particles 7714 B.1. Position and Momentum Eigenstates 774 B.1.1. Free Particle 7715 B.1.2. Particle in a Box 716 B.2. Symmetrized N-Particle Position and Momentum Eigenstates 7717 B.2.1. Symmetrized Momentum Eigenstates for Bose-Einstein Particles 778 B.2.2. 
_ Antisymmetrized Momentum Eigenstates for Fermi-Dirac Particles 779 B.2.3. Partition Functions and Expectation Values 780 B.3. The Number Representation 781 B.3.1. The Number Representation for Bosons 782 B.3.2._ The Number Representation for Fermions 785 B.3.3. Field Operators 788 References 790 C. Stability of Solutions to Nonlinear Equations 791 C.1. Linear Stability Theory 791 CONTENTS C2. Limit Cycles C.3. _ Liapounov Functions and Global Stability References Author Index Subject Index 795 796 798 799 804 PREFACE In 1992 after finishing my book, “The Transition of Chaos,” I realized that I needed to write a new edition of “A Modern Course in Statistical Physics”. I wanted to adjust the material to better prepare students for what I believe are the current directions of statistical physics. I wanted to place more emphasis on nonequilibrium processes and on the thermodynamics underlying biological processes. I also wanted to be more complete in the presentation of material. It turned out to be a greater task than I had anticipated, and now five years later I am finally finishing the second edition. One reason it has taken so long is that I have created a detailed solution manual for the second edition and I have added many worked out exercises to the text. In this way I hope I have made the second edition much more student and instructor friendly than the first edition was. There are two individuals who have had a particularly large influence on this book and whom I want to thank, even though they took no part in writing the book. (Any negative features of this book are totally my responsibility.) The biggest influence has been Ilya Prigogine, and for that reason I have dedicated this book to him. When I first came to the University of Texas to join the Physics faculty, I became a member of what was then the Center for Thermodynamics and Statistical Mechanics (now known as the Prigogine Center for Statistical Mechanics and Complex Systems). My training was in equilibrium statistical mechanics. But that changed when I learned that the focus of this unique research Center, deep in the heart of Texas, was on nonequilibrium nonlinear phenomena, most of it far from equilibrium. I began to work on nonequilibrium and nonlinear phenomena, but followed my own path. The opportunity to teach and work in this marvelous research center and to listen to the inspiring lectures of Ilya Prigogine and lectures of the many visitors to the Center has opened new worlds to me, some of which I have tried to bring to students through this book. The other individual who has had a large influence on this book is Nico van Kampen, a sometimes visitor to the University of Texas. His beautiful lectures on stochastic processes were an inspiration and spurred my interest in the subject. I want to thank the many students in my statistical mechanics classes who helped me shape the material for this book and who also helped me correct the manuscript. This book covers a huge range of material. I could not reference all the work by the individuals who have contributed in all these areas. I have referenced work which most influenced my view of the subject and which could lead students to other related work. I apologize to those whose work I have not been able to include in this book. L. E. Reichl Austin, Texas September 1997 1 INTRODUCTION 1.4. OVERVIEW The field of statistical physics has expanded dramatically in recent years. 
New results in ergodic theory, nonlinear chemical physics, stochastic theory, quantum fluids, critical phenomena, hydrodynamics, transport theory, and biophysics have revolutionized the subject, and yet these results are rarely presented in a form that students who have little background in statistical physics can appreciate or understand. This book has been written in an effort to incorporate these subjects into a basic course on statistical physics. It includes in a unified and integrated manner the foundations of statistical physics and develops from them most of the tools needed to understand the concepts underlying modern research in all of the above fields. In the field of ergodic theory, for example, chaos theory has deepened our understanding of the structure and dynamical behavior of a variety of nonlinear systems and has made ergodic theory a modern field of research. Indeed, one of the frontiers of science today is the study of the spectral properties of decay Processes in nature, based on the chaotic nature of the underlying dynamics of those systems. Advances in this field have been aided by the development of ever more powerful computers. In an effort to introduce this field to students, a careful discussion is given of the behavior of probability flows in phase space, including specific examples of ergodic and mixing flows. Nonlinear chemical physics is still in its infancy, but it has already given a conceptual framework within which we can understand the thermodynamic origin of life processes. The discovery of dissipative structures (nonlinear spatial and temporal structures) in nonlinear nonequilibrium chemical systems has opened a new field in chemistry and biophysics. In this book, material has been included on chemical thermodynamics, chemical hydrodynamics, and nonequilibrium phase transitions in chemical and hydrodynamic systems. The use of stochastic theory to study fluctuation phenomena in chemical and hydrodynamic systems, along with its growing use in population dynamics and complex systems theory, has brought new life to this field. The discovery of scaling behavior at all levels of the physical world, along with the appearance of Levy flights which often accompanies scaling behavior, has forced us to think 2 INTRODUCTION beyond the limits of the Central Limit Theorem. In order to give students some familiarity with modern concepts from the field of stochastic theory, we have placed probability theory in a more general framework and discuss, within that framework, classical random walks, Levy flights, and Brownian motion. The theory of superfluids rarely appears in general textbooks on statistical physics, but the theory of such systems is incorporated at appropriate places throughout this book. We discuss the thermodynamic properties of superfluid and superconducting systems, the Ginzburg-Landau theory of superconductors, the BCS theory of superconductors, and superfluid hydrodynamics. Also included in the book is an extensive discussion of properties of classical fluids and their thermodynamic and hydrodynamic properties. The theory of phase transitions has undergone a revolution in recent years. In this book we define critical exponents and use renormalization theory to compute them. We also derive an exact expression for the specific heat of the two-dimensional Ising system, one of the simplest exactly solvable systems which can exhibit a phase transition. At the end of the book we include an introduction to the theory of nonequilibrium phase transitions. 
Hydrodynamics is a very powerful tool for understanding long-wavelength phenomena in classical fluids, solids, liquid crystals, superfluids, and biological systems. This book contains a thorough grounding in hydrodynamics based on the underlying symmetries and stability properties of matter. We discuss properties of correlation functions, causality, the fluctuation—dissipation theorem, the theory of light scattering, and the origin of hydrodynamics in terms of conserved quantities and broken symmetries. We also include a variety of applications of the hydrodynamics of mixtures, a subject essential for biophysics. Transport theory is discussed from many points of view. We derive Onsager’s relations for transport coefficients. We derive expressions for transport coefficients based on simple “back of the envelope” mean free path arguments. The Boltzmann and Lorentz—Boltzmann equations are derived and microscopic expressions for transport coefficients are obtained, starting from spectral properties of the Boltzmann and Lorentz—Boltzmann collision operators. The difficulties in developing a convergent transport theory for dense gases are also reviewed. Concepts developed in statistical physics underlie all of physics. Once the forces between microscopic particles are determined, statistical physics gives us a picture of how microscopic particles act in the aggregate to form the macroscopic world. As we see in this book, what happens on the macroscopic scale is sometimes surprising. 1.B. PLAN OF BOOK Thermodynamics is a consequence and a reflection of the symmetries of nature. It is what remains after collisions between the many degrees of freedom of PLAN OF BOOK 3 macroscopic systems randomize and destroy most of the coherent behavior. The quantities which cannot be destroyed, due to underlying symmetries of nature and their resulting conservation laws, give rise to the state variables upon which the theory of thermodynamics is built. Thermodynamics is therefore a solid and sure foundation upon which we can construct theories of matter out of equilibrium. That is why we place heavy emphasis on it in this book. The book is divided into four parts. Chapters 2 and 3 present the foundations of thermodynamics and the thermodynamics of phase transitions. Chapters 4 through 6 present probability theory, stochastic theory, and the foundations of statistical mechanics, Chapters 7 through 9 present equilibrium statistical mechanics, with emphasis on phase transitions and the equilibrium theory of classical fluids. Chapters 10 through 12 deal with nonequilibrium processes, both on the microscopic and macroscopic scales, both near and far from equilibrium. The first two parts of the book essentially lay the foundations for the last two parts. There seems to be a tendency in many books to focus on equilibrium statistical mechanics and derive thermodynamics as a consequence. As a result, students do not get the experience of traversing the vast world of thermodynamics and do not understand how to apply it to systems which are too complicated for statistical mechanics. For this reason, we begin the book with a thorough grounding in thermodynamics. In Chapter 2 we review the foundations of thermodynamics and thermodynamic stability theory and devote a large part of the chapter to a variety of applications which do not involve phase transitions, such as the cooling of gases, mixing, osmosis, and chemical thermodynamics. 
Chapter 3 is devoted to the thermodynamics of phase transitions and the use of thermodynamic stability theory in analyzing these phase transitions. We discuss first-order phase transitions in liquid—vapor-solid transitions, with particular emphasis on the liquid—vapor transition and its critical point and critical exponents. We also introduce the Ginzburg-Landau theory of continuous phase transitions and discuss a variety of transitions which involve broken symmetries. Having developed some intuition concerning the macroscopic behavior of complex equilibrium systems, we then turn to microscopic foundations. Chapters 4 through 6 are devoted to probability theory and the foundations of Statistical mechanics. Chapter 4 contains a review of basic concepts from probability theory and then uses these concepts to describe classical random walks and Levy flights. The Central Limit Theorem and the breakdown of the Central Limit Theorem for scaling processes is described. In Chapter 5 we study the dynamics of discrete stochastic variables based on the master equation. We also introduce the theory of Brownian motion and the idea of separation of time scales, which has proven so important in describing nonequilibrium phase transitions. The theory developed in Chapter 5 has many applications in chemical physics, laser physics, population dynamics, and biophysics, and it prepares the way for more complicated topics in statistical mechanics. S INTRODUCTION Chapter 6 lays the probabilistic foundations of statistical mechanics, starting from ergodic theory. In recent years, there has been a tendency to sidestep this aspect of statistical physics completely and to introduce statistical mechanics using information theory. The student then misses one of the current frontiers of modern physics, the study of the spectral behavior of decay processes in nature, based on the chaotic nature of the underlying dynamics of those systems. While we cannot go very far into this subject in this book, we at least discuss the issues. We begin by deriving the Liouville equation, which is the equation of motion for probability densities, both in classical mechanics and in quantum mechanics. We look at the types of flow that can occur in mechanical systems and introduce the concepts of ergodic and mixing flows, which appear to be minimum requirements if a system is to decay to thermodynamic equilibrium. Chapters 7-9 are devoted entirely to equilibrium statistical mechanics. In Chapter 7 we derive the probability densities (the microcanonical, canonical, and grand canonical ensembles) for both closed and opened systems and relate them to thermodynamic quantities and the theory of fluctuations. We then use them to derive the thermodynamic properties of a variety of model systems, including harmonic lattices, spin systems, ideal quantum gases, and super- conductors. In Chapter 8 we introduce the equilibrium fluctuation theory of spin systems and show qualitatively how the spatial extent of correlations between fluctua- tions diverges as we approach the critical point. We also introduce the idea of scaling and use renormalization theory to obtain microscopic expressions for the critical exponents of spin lattices. Finally we conclude Chapter 8 by obtaining an exact expression for the heat capacity of the two-dimensional Ising lattice, and we compare our exact expressions to those of mean field theory. Chapter 9 is devoted to the equilibrium theory of classical fluids. 
In this chapter we relate the thermodynamic properties of classical fluids to the underlying radial distribution function, and we use the Ursell-Mayer cluster expansion to obtain a virial expansion of the the equation of state of a classical fluid. We also discuss how to include quantum corrections for nondegenerate gases. The last part of the book, Chapters 10-12, deals with nonequilibrium processes. Chapter 10 is devoted to hydrodynamic processes for systems near equilibrium. We begin by deriving the Navier-Stokes equations from the symmetry properties of a fluid of point particles, and we use the derived expression for entropy production to obtain the transport coefficients for the system. We use the solutions of the linearized Navier-Stokes equations to predict the outcome of light-scattering experiments. We go on to derive Onsager’s relations between transport coefficients, and we use causality to derive the fiuctuation—dissipation theorem. We also derive a general expression for the entropy production in systems with mixtures of particles which can undergo chemical reactions. We then use this theory to describe thermal and chemical transport processes in mixtures, across membranes, and in electrical circuits. The hydrodynamic equations describe the behavior of just a few slowly USE AS A TEXTBOOK 5 varying degrees of freedom in fluid systems. If we assume that the remainder of the fluid can be treated as a background noise, we can use the fluctuation— dissipation theorem to derive the correlation functions for this background noise. In Chapter 10 we also consider hydrodynamic modes which result from broken symmetries, and we derive hydrodynamic equations for superfiuids and consider the types of sound that can exist in such fluids. In Chapter 11 we derive microscopic expressions for the coefficients of diffusion, shear viscosity, and thermal conductivity, starting both from mean free path arguments and from the Boltzmann and Lorentz—Boltzmann equations. In deriving microscopic expressions for the transport coefficients from the Boltzmann and Lorentz—Boltzmann equations, we use a very elegant method which relies on use of the eigenvalues and eigenfunctions of the collision operators associated with those equations. We obtain explicit microscopic expressions for the transport coefficients of a hard sphere gas. Finally, in Chapter 12 we conclude with the fascinating subject of nonequilibrium phase transitions. We discuss thermodynamic stability theory for systems far from equilibrium. We also show how nonlinearities in the rate equations for chemical reaction—diffusion systems lead to nonequilibrium phase transitions which give rise to chemical clocks, nonlinear chemical waves, and spatially periodic chemical structures, while nonlinearities in the Rayleigh— Benard hydrodynamic system lead to spatially periodic convection cells. We shall also examine the nature of fluctuations in the neighborhood of the critical point for these transitions and show that they are characterized by a critical slowing down of certain unstable modes. 1.C. USE AS A TEXTBOOK Even though this book contains a huge amount of material, it has been designed to be used as a textbook. In each chapter the material has been divided into core topics and special topics. The core topics provide key basic material in each chapter, while special topics illustrate these core ideas with a variety of applications. 
The instructor can select topics from the special topics sections, according to the emphasis he/she wishes to give the course. In many sections, we have included nontrivial demonstration exercises to help the students understand the material and to help in solving homework Problems. Each chapter has a variety of problems at the end of the chapter that can be used to help the students test their understanding. Even if one covers only the core topics of each chapter, there may be too much material to cover in a one-semester course. However, the book is designed So that some chapters may be omitted completely. The choice of which chapters to use depends on the interests of the instructor. Our suggestion for a basic well-rounded one-semester course in statistical physics is to cover the core topics in Chapters 2, 3, 4, 7, 10, and 11 (only Section 11.B if time is running short). 6 INTRODUCTION The book is intended to introduce the students to a variety of subjects and resource materials which they can then pursue in greater depth if they wish. We have tried to use standardized notation as much as possible. In writing a book which surveys the entire field of statistical physics, it is impossible to include or even to reference everyone’s work. We have included references which were especially pertinent to the points of view we take in this book and which will lead students easily to other work in the same field. PART ONE THERMODYNAMICS 2 INTRODUCTION TO THERMODYNAMICS 2.A. INTRODUCTORY REMARKS The science of thermodynamics began with the observation that matter in the aggregate can exist in macroscopic states which are stable and do not change in time. These stable “equilibrium” states are characterized by definite mechanical properties, such as color, size, and texture, which change as the substance becomes hotter or colder (changes its temperature). However, any given equilibrium state can always be reproduced by bringing the substance back to the same state. Once a system reaches its equilibrium state, all changes cease and the system will remain forever in that state unless some external influence acts to change it. This inherent stability and reproducibility of the equilibrium states can be seen everywhere in the world around us. Thermodynamics has been able to describe, with remarkable accuracy, the macroscopic behavior of a huge variety of systems over the entire range of experimentally accessible temperatures (10~* K to 10° K). It provides a truly universal theory of matter in the aggregate. And yet, the entire subject is based on only four laws, which may be stated rather simply as follows: Zeroth Law— it is possible to build a thermometer; First Law—energy is conserved; Second Law—not all heat energy can be converted into work; and Third Law—we can never reach the coldest temperature using a finite set of reversible steps. However, even though these laws sound rather simple, their implications are vast and give us important tools for studying the behavior and stability of Systems in equilibrium and, in some cases, of systems far from equilibrium. The core topics in this chapter focus on a review of various aspects of thermodynamics that will be used throughout the remainder of the book. The special topics at the end of this chapter give a more detailed discussion of some applications of thermodynamics which do not involve phase transitions. Phase transitions will be studied in Chapter 3. 
We shall begin this chapter by introducing the variables which are used in thermodynamics and the mathematics needed to calculate changes in the thermodynamic state of a system. As we shall see, many different sets of 10 INTRODUCTION TO THERMODYNAMICS mechanical variables can be used to describe thermodynamic systems. In order to become familiar with some of these mechanical variables, we shall write the experimentally observed equations of state for a variety of thermodynamic systems. As we have mentioned above, thermodynamics is based on four laws. We shall discuss the content of these laws in some detail, with particular emphasis on the second law. The second law is extremely important both in equilibrium and out of equilibrium because it gives us a criterion for testing the stability of equilibrium systems and, in some cases, nonequilibrium systems. There are a number of different thermodynamic potentials that can be used to describe the behavior and stability of thermodynamic systems, depending on the type of constraints imposed on the system. For a system which is isolated from the world, the internal energy will be a minimum for the equilibrium state. However, if we couple the system thermally, mechanically, or chemically to the outside world, other thermodynamic potentials will be minimized. We will intro- duce the five most commonly used thermodynamic potentials (internal energy, enthalpy, Helmholtz free energy, Gibbs free energy, and the grand potential), and we will discuss the conditions under which each one is minimized at equili- brium and why they are called potentials. When experiments are performed on thermodynamic systems, the quantities which are easiest to measure are the response functions. Generally, we change one parameter in the system and see how another parameter responds to that change, under highly controlled conditions. The quantity that measures the way in which the system responds is called a response function. In this chapter we shall introduce a variety of thermal and mechanical response functions and give relations between them. Isolated equilibrium systems are systems in a state of maximum entropy. Any fluctuations which occur in such systems must cause a decrease in entropy if the equilibrium state is to be stable. We can use this fact to find relations between the intensive state variables for different parts of a system if those parts are to be in mechanical, thermal, and chemical equilibrium. In addition, we can find restrictions on the sign of the response functions which must be satisfied for stable equilibrium. We shall find these conditions and discuss the restrictions they place on the Helmholtz and Gibbs free energy. Thermodynamics becomes most interesting when it is applied to real systems. In order to demonstrate its versatility, in the section on special topics, we shall apply it to a number of systems which have been selected for their practical importance or conceptual importance. We begin with a subject of great practical and historic importance, namely, the cooling of gases. it is often necessary to cool substances below the temperature of their surroundings. The refrigerator most commonly used for this purpose is based on the Joule-Kelvin effect. There are two important ways to cool gases. 
We can let them do work against their own intermolecular forces by letting them expand freely (Joule effect); or we can force them through a small constriction, thus causing cooling at low temperatures or heating at high STATE VARIABLES AND EXACT DIFFERENTIALS st temperatures (Joule-Kelvin effect). The Joule-Kelvin effect is by far the more effective of the two methods. We shall discuss both methods in this chapter and use the van der Waals equation of state to obtain estimates of the coolling effects for some real gases. For reversible processes, changes in entropy content can be completely accounted for in terms of changes in heat content. For irreversible processes, this is no longer true. We can have entropy increase in an isolated system, even though no heat has been added. Therefore, it is often useful to think of an increase in entropy as being related to an increase in disorder in a system. One of the most convincing illustrations of this is the entropy change which occurs when two substances, which have the same temperature and pressure but different identities, are mixed. Thermodynamics predicts that the entropy will increase solely due to mixing of the substances. When the entropy of a system changes due to mixing, so will other thermo- dynamic quantities. One of the most interesting examples of this is osmosis. We can fill a container with water and separate it into two parts by a membrane permeable to water but not salt, for example. If we put a small amount of salt into one side, the pressure of the resulting salt solution will increase markedly because of mixing. Chemical reactions can be characterized in a rather simple way in terms of a thermodynamic quantity called the affinity. The affinity gives a measure of the distance of a chemical reaction from thermodynamic equilibrium and will be useful in later chapters when we discuss chemical systems out of equilibrium. We can obtain an expression for the affinity by using the conditions for thermodynamic equilibrium and stability introduced in Chapter 2, and at the same time we can learn a number of interesting facts about the thermodynamic behavior of chemical reactions. A special example of a type of reaction important to biological systems is found in electrolytes, which consist of salts which can dissociate but maintain an electrically neutral solution. 2.B. STATE VARIABLES AND EXACT DIFFERENTIALS Thermodynamics describes the behavior of systems with many degrees of freedom after they have reached a state of thermal equilibrium—a state in which all past history is forgotten and all macroscopic quantities cease to change in time. The amazing feature of such systems is that, even though they contain many degrees of freedom (~ 107%) in chaotic motion, their thermo- dynamic state can be specified completely in terms of a few parameters—called State variables. In general, there are many state variables which can be used to Specify the thermodynamic state of a system, but only a few (usually two or three) are independent. In practice, one chooses state variables which are accessible to experiment and obtains relations between them. Then, the “machinery” of thermodynamics enables one to obtain the values of any other State variables of interest. 12 INTRODUCTION TO THERMODYNAMICS State variables may be either extensive or intensive. Extensive variables always change in value when the size (spatial extent and number of degrees of freedom) of the system is changed, and intensive variables need not. 
Certain pairs of intensive and extensive state variables often occur together because they correspond to generalized forces and displacements which appear in expressions for thermodynamic work. Some examples of such extensive and intensive pairs are, respectively, volume, V, and pressure, P; magnetization, M, and magnetic field strength, H; length, L, and tension, J; area, A, and surface tension, σ; electric polarization, P, and electric field, E. The pair of state variables related to the heat content of a thermodynamic system are the temperature, T, which is intensive, and the entropy, S, which is extensive. There is also a pair of state variables associated with the "chemical" properties of a system. They are the number of particles, N, which is extensive, and the chemical potential per particle, μ, which is intensive. In this book we shall sometimes use the number of moles, n, and the chemical potential per mole (the molar chemical potential), or the mass of a substance, M, and the chemical potential per unit mass (the specific chemical potential), as the chemical state variables. If there is more than one type of particle in the system, then there will be a mole number and a chemical potential associated with each type of particle.

Other state variables used to describe the thermodynamic behavior of a system are the various response functions, such as heat capacity, C; compressibility, κ; magnetic susceptibility, χ; and various thermodynamic potentials, such as the internal energy, U; enthalpy, H; Helmholtz free energy, A; Gibbs free energy, G; and the grand potential, Ω. We shall become thoroughly acquainted with these state variables in subsequent sections.

If we change the thermodynamic state of our system, the amount by which the state variables change must be independent of the path taken. If this were not so, the state variables would contain information about the history of the system. It is precisely this property of state variables which makes them so useful in studying changes in the equilibrium state of various systems. Mathematically, changes in state variables correspond to exact differentials [1]; therefore, before we begin our discussion of thermodynamics, it is useful to review the theory of exact differentials. This will be the subject of the remainder of this section.

Given a function F = F(x₁, x₂) depending on two independent variables x₁ and x₂, the differential of F is defined as follows:

dF = (∂F/∂x₁)_{x₂} dx₁ + (∂F/∂x₂)_{x₁} dx₂,   (2.1)

where (∂F/∂x₁)_{x₂} is the derivative of F with respect to x₁ holding x₂ fixed. If F and its derivatives are continuous and

[∂/∂x₂ (∂F/∂x₁)_{x₂}]_{x₁} = [∂/∂x₁ (∂F/∂x₂)_{x₁}]_{x₂},   (2.2)

then dF is an exact differential. If we denote

c₁(x₁, x₂) = (∂F/∂x₁)_{x₂}  and  c₂(x₁, x₂) = (∂F/∂x₂)_{x₁},

then the variables c₁ and x₁ and the variables c₂ and x₂ are called "conjugate" variables with respect to the function F. The fact that dF is exact has the following consequences:

(a) The value of the integral

F(B) − F(A) = ∫_A^B dF = ∫_A^B (c₁ dx₁ + c₂ dx₂)

is independent of the path taken between A and B and depends only on the end points A and B.

(b) The integral of dF around a closed path is zero:

∮ dF = ∮ (c₁ dx₁ + c₂ dx₂) = 0.

(c) If one knows only the differential dF, then the function F can be found to within an additive constant.

If F depends on more than two variables, then the statements given above generalize in a simple way. Let F = F(x₁, x₂, ..., xₙ); then the differential, dF, may be written

dF = Σᵢ₌₁ⁿ (∂F/∂xᵢ)_{xⱼ≠ᵢ} dxᵢ.   (2.3)
The notation (∂F/∂xᵢ)_{xⱼ≠ᵢ} means that the derivative of F is taken with respect to xᵢ holding all variables but xᵢ constant. For any pair of variables, the following relation holds:

[∂/∂xᵢ (∂F/∂xⱼ)_{xₖ≠ⱼ}]_{xₖ≠ᵢ} = [∂/∂xⱼ (∂F/∂xᵢ)_{xₖ≠ᵢ}]_{xₖ≠ⱼ}.   (2.4)

An example for the case of three independent variables is

dF = c₁ dx₁ + c₂ dx₂ + c₃ dx₃.

Eq. (2.4) leads to the result

(∂c₁/∂x₂)_{x₁,x₃} = (∂c₂/∂x₁)_{x₂,x₃},   (∂c₁/∂x₃)_{x₁,x₂} = (∂c₃/∂x₁)_{x₂,x₃},   (∂c₂/∂x₃)_{x₁,x₂} = (∂c₃/∂x₂)_{x₁,x₃}.

Differentials of all state variables are exact and have the above properties. Given four state variables, x, y, z, and w, where w is a function of any two of the variables x, y, or z, one can obtain the following useful relations along paths for which F(x, y, z) = 0:

(∂x/∂y)_z (∂y/∂x)_z = 1,   (2.5)

(∂x/∂y)_z (∂y/∂z)_x (∂z/∂x)_y = −1,   (2.6)

(∂x/∂w)_z = (∂x/∂y)_z (∂y/∂w)_z,   (2.7)

(∂x/∂y)_z = (∂x/∂y)_w + (∂x/∂w)_y (∂w/∂y)_z.   (2.8)

It is a simple matter to derive Eqs. (2.5)–(2.8). We will first consider Eqs. (2.5) and (2.6). Let us choose variables y and z to be independent, x = x(y, z), and then choose x and z to be independent, y = y(x, z), and write the following differentials: dx = (∂x/∂y)_z dy + (∂x/∂z)_y dz and dy = (∂y/∂x)_z dx + (∂y/∂z)_x dz. If we eliminate dy between these equations, we obtain

dx = (∂x/∂y)_z (∂y/∂x)_z dx + [(∂x/∂y)_z (∂y/∂z)_x + (∂x/∂z)_y] dz.

Because dx and dz may be varied independently, the coefficient of dx must equal one and the coefficient of dz must equal zero. The result is Eqs. (2.5) and (2.6). To derive Eq. (2.7) we let y and z be independent so that x = x(y, z) and write the differential for dx. If we then divide by dw, we obtain

dx/dw = (∂x/∂y)_z (dy/dw) + (∂x/∂z)_y (dz/dw).

For constant z, dz = 0 and we find Eq. (2.7). Finally, to derive Eq. (2.8) we let x be a function of y and w, x = x(y, w). If we write the differential of x, divide it by dy, and restrict the entire equation to constant z, we obtain Eq. (2.8).

When integrating exact differentials, one must be careful not to lose terms. In Exercise 2.1, we illustrate two different methods for integrating exact differentials.

EXERCISE 2.1. Consider the differential dφ = (x² + y) dx + x dy. (a) Show that it is an exact differential. (b) Integrate dφ between the points A = (x_A, y_A) and B = (x_B, y_B) along two different paths: path 1, which runs parallel to the x axis from A to (x_B, y_A) and then parallel to the y axis to B, and path 2, the straight line joining A and B. (c) Integrate dφ between points A and B using indefinite integrals.

Answer: (a) From the expression dφ = (x² + y) dx + x dy, we can write (∂φ/∂x)_y = x² + y and (∂φ/∂y)_x = x. Since [∂/∂y (∂φ/∂x)_y]_x = [∂/∂x (∂φ/∂y)_x]_y = 1, the differential dφ is exact.

(b) Let us first integrate the differential dφ along path 1:

φ_B − φ_A = ∫_{x_A}^{x_B} (x² + y_A) dx + ∫_{y_A}^{y_B} x_B dy = (1/3)x_B³ + x_B y_B − (1/3)x_A³ − x_A y_A.

Let us next integrate the differential dφ along path 2. Note that along path 2, y = y_A + (Δy/Δx)(x − x_A), where Δy = y_B − y_A and Δx = x_B − x_A. If we substitute this into the expression for dφ, we find

dφ = (x² + y) dx + x dy = [x² + y_A + (Δy/Δx)(2x − x_A)] dx.

Therefore

φ_B − φ_A = ∫_{x_A}^{x_B} [x² + y_A + (Δy/Δx)(2x − x_A)] dx = (1/3)x_B³ + x_B y_B − (1/3)x_A³ − x_A y_A.

Note that the change in φ in going from point A to point B is independent of the path taken. This is a property of exact differentials.

(c) We now integrate the differential dφ in a different way. Let us first do the indefinite integral

∫ (∂φ/∂x)_y dx = ∫ (x² + y) dx = (1/3)x³ + xy + K₁(y),   (3)

where K₁(y) is an unknown function of y. Next do the integral

∫ (∂φ/∂y)_x dy = ∫ x dy = xy + K₂(x),   (4)

where K₂(x) is an unknown function of x. In order for Eqs. (3) and (4) to be consistent, we must choose K₂(x) = (1/3)x³ + K₃ and K₁(y) = K₃, where K₃ is a constant. Therefore, φ = (1/3)x³ + xy + K₃ and, again, φ_B − φ_A = (1/3)x_B³ + x_B y_B − (1/3)x_A³ − x_A y_A.
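The path independence demonstrated in Exercise 2.1 is easy to check numerically. The short Python sketch below is our own illustration (the endpoint values and function names are arbitrary choices, not from the text); it integrates dφ = (x² + y) dx + x dy along the two paths of the exercise and compares the results with the closed-form answer.

```python
# Numerical check of Exercise 2.1: the exact differential
#   d(phi) = (x^2 + y) dx + x dy
# integrates to the same value along different paths from A to B.
# Endpoints below are illustrative, not taken from the text.

def c1(x, y):
    return x**2 + y      # (d phi / d x)_y

def c2(x, y):
    return x             # (d phi / d y)_x

def line_integral(path, n=200000):
    """Approximate the integral of c1 dx + c2 dy along path(t), t in [0, 1]."""
    total = 0.0
    x0, y0 = path(0.0)
    for i in range(1, n + 1):
        x1, y1 = path(i / n)
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)   # midpoint of the step
        total += c1(xm, ym) * (x1 - x0) + c2(xm, ym) * (y1 - y0)
        x0, y0 = x1, y1
    return total

xa, ya, xb, yb = 1.0, 2.0, 3.0, 5.0   # points A and B (arbitrary)

def path1(t):
    # A -> (xb, ya) -> B, two straight segments
    if t <= 0.5:
        return xa + 2.0 * t * (xb - xa), ya
    return xb, ya + (2.0 * t - 1.0) * (yb - ya)

def path2(t):
    # straight line from A to B
    return xa + t * (xb - xa), ya + t * (yb - ya)

exact = (xb**3 / 3.0 + xb * yb) - (xa**3 / 3.0 + xa * ya)
print(line_integral(path1), line_integral(path2), exact)   # all three agree
```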
2.C. SOME MECHANICAL EQUATIONS OF STATE

An equation of state is a functional relation between the state variables for a system in equilibrium which reduces the number of independent degrees of freedom needed to describe the state of the system. It is an equation which relates the thermal state variables, T or S, to the mechanical and chemical state variables for that system, and it contains a great deal of information about the thermodynamic behavior of the system. It is useful to give some examples of empirically obtained equations of state.

2.C.1. Ideal Gas Law

The best-known equation of state is the ideal gas law,

PV = nRT,   (2.9)

where n is the number of moles, T is the temperature in degrees Kelvin, P is the pressure in Pascals, V is the volume in cubic meters, and R = 8.314 J/mol·K is the universal gas constant. The ideal gas law gives a good description of a gas which is so dilute that the effect of interaction between particles can be neglected. If there are m different types of particles in the gas, then the ideal gas law takes the form

PV = Σᵢ₌₁ᵐ nᵢRT,   (2.10)

where nᵢ is the number of moles of the ith constituent.

2.C.2. Virial Expansion [2]

The virial expansion,

P = (nRT/V)[1 + (n/V)B₂(T) + (n/V)²B₃(T) + ···],   (2.11)

expresses the equation of state of a gas as a density expansion. The quantities B₂(T) and B₃(T) are called the second and third virial coefficients and are functions of temperature only. As we shall see in Chapter 9, the virial coefficients may be computed in terms of the interparticle potential. Comparison between experimental and theoretical values for the virial coefficients is an important method for obtaining the force constants for various interparticle potentials. In Fig. 2.1 we have plotted the second virial coefficient for helium and argon. The curves are typical of most gases. At low temperatures, B₂(T) is negative because the kinetic energy is small and the attractive forces between particles reduce the pressure. At high temperatures the attractive forces have little effect and corrections to the pressure become positive. At high temperature the second virial coefficient has a maximum. For an ideal classical gas all virial coefficients, Bᵢ (i ≥ 2), are zero, but for an ideal quantum gas (Bose-Einstein or Fermi-Dirac) the virial coefficients are nonzero. The "statistics" of quantum particles give rise to corrections to the classical ideal gas equation of state.

Fig. 2.1. A plot of the second virial coefficients for helium and argon in terms of the dimensionless quantities B* = B₂/b₀ and T* = k_B T/ε, where b₀ and ε are constants, k_B is Boltzmann's constant, and T is the temperature. For helium, b₀ = 21.07 × 10⁻⁶ m³/mol and ε/k_B = 10.22 K. For argon, b₀ = 49.8 × 10⁻⁶ m³/mol and ε/k_B = 119.8 K. (Based on Ref. 2.)
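To see what the virial expansion, Eq. (2.11), implies numerically, the following sketch compares the ideal gas pressure with the pressure corrected by the second virial coefficient. The value used for B₂(T) is an assumed, order-of-magnitude placeholder, not a tabulated number.

```python
# Pressure from the virial expansion, Eq. (2.11), truncated at B2:
#   P = (n R T / V) * [1 + (n/V) * B2(T)]
# The B2 value below is an illustrative placeholder, not a tabulated number.

R = 8.314            # J / (mol K), universal gas constant
n = 1.0              # moles
T = 300.0            # K
B2 = -1.5e-5         # m^3 / mol, assumed (negative: attraction dominates)

for V in (1.0e-3, 1.0e-2, 1.0e-1):           # volumes in m^3
    P_ideal = n * R * T / V                  # Eq. (2.9)
    P_virial = P_ideal * (1.0 + (n / V) * B2)
    print(f"V = {V:7.1e} m^3   P_ideal = {P_ideal:10.1f} Pa   "
          f"P_virial = {P_virial:10.1f} Pa")
# The correction matters only when the molar density n/V is large.
```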
2.C.3. Van der Waals Equation of State [3]

The van der Waals equation of state is of immense importance historically because it was the first equation of state which applies to both the gas and liquid phases and exhibits a phase transition. It contains most of the important qualitative features of the gas and liquid phases, although it becomes less accurate as density increases. The van der Waals equation contains corrections to the ideal gas equation of state which take into account the form of the interaction between real particles. The interaction potential between molecules in a gas contains a strong repulsive core and a weaker attractive region surrounding the repulsive core.

For an ideal gas, as the pressure is increased, the volume of the system can decrease without limit. For a real gas this cannot happen because the repulsive core limits the close-packed density to some finite value. Therefore, as pressure is increased, the volume tends to some minimum value, V = V_min = nb, where b is an experimental constant. The ideal gas equation of state must be corrected to take account of the existence of the repulsive core and assumes the form

P = nRT/(V − nb).

The attractive region of the potential causes the pressure to be decreased slightly relative to that of a noninteracting gas because it introduces a "cohesion" between molecules. The decrease in pressure will be proportional to the probability that two molecules interact; this, in turn, is proportional to the square of the density of particles (N/V). We therefore correct the pressure by a term proportional to the square of the density, which we write a(n²/V²). The constant a is an experimental constant which depends on the type of molecule being considered. The equation of state can now be written

(P + an²/V²)(V − nb) = nRT.   (2.12)

In Table 2.1 we have given values of a and b for simple gases.

Table 2.1. Van der Waals Constants for Some Simple Gases [4]

        a (Pa·m⁶/mol²)    b (10⁻³ m³/mol)
H₂      0.02476           0.02661
He      0.003456          0.02370
CO₂     0.3639            0.04267
H₂O     0.5535            0.03049
O₂      0.1378            0.03183
N₂      0.1408            0.03913

The second virial coefficient for a van der Waals gas is easily found to be

B₂^(vdW)(T) = b − a/(RT).   (2.13)

We see that B₂^(vdW)(T) will be negative at low temperatures and will become positive at high temperatures, but it does not exhibit the maximum observed in real gases. Thus, the van der Waals equation does not predict all the observed features of real gases. However, it describes enough of them to make it a worthwhile equation to study. In subsequent chapters, we will repeatedly use the van der Waals equation to study the thermodynamic properties of interacting fluids.

2.C.4. Solids

Solids have the property that their coefficient of thermal expansion, α_P = (1/v)(∂v/∂T)_P, and their isothermal compressibility, κ_T = −(1/v)(∂v/∂P)_T, where v = V/n is the molar volume, are very small. Therefore, for solids at fairly low temperature we can expand the molar volume of a solid in a Taylor series about its zero-temperature, zero-pressure value, v₀, and obtain the following equation of state:

v = v₀(1 + α_P T − κ_T P),   (2.14)

where T is measured in Kelvins. Typical values [5] of κ_T are of the order of 10⁻¹⁰/Pa, or 10⁻⁵/atm. For example, for solid Ag (silver) at room temperature, κ_T = 1.3 × 10⁻¹⁰/Pa (for P = 0 Pa), and for diamond at room temperature, κ_T = 1.6 × 10⁻¹²/Pa (for P = 4.0 × 10⁸ Pa – 10¹⁰ Pa). Typical values of α_P are of the order 10⁻⁴/K. For example, for solid Na (sodium) at room temperature we have α_P = 2 × 10⁻⁴/K, and for solid K (potassium) we have α_P = 2 × 10⁻⁴/K.

2.C.5. Elastic Wire or Rod

For a stretched wire or rod in the elastic limit, Hooke's law applies and we can write

J = A(T)(L − L₀),   (2.15)

where J is the tension measured in Newtons, A(T) is a temperature-dependent coefficient, L is the length of the stretched wire or rod, and L₀ is the length of the wire when J = 0. The coefficient A(T) may be written A(T) = A₀ + A₁T, where A₀ and A₁ are constants. The constant A₁ is negative for most substances but may be positive for some substances (including rubber).
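Before turning to surface effects, it is worth putting numbers into the van der Waals corrections. The sketch below is our own illustration: it evaluates the pressure from Eq. (2.12) and the second virial coefficient from Eq. (2.13) for CO₂, using the constants of Table 2.1 (with b converted to m³/mol); the chosen temperature and volume are arbitrary.

```python
# Van der Waals equation of state, Eq. (2.12), solved for P:
#   P = n R T / (V - n b) - a n^2 / V^2
# and the van der Waals second virial coefficient, Eq. (2.13):
#   B2(T) = b - a / (R T)
# Constants for CO2 from Table 2.1 (b converted to m^3/mol).

R = 8.314                 # J / (mol K)
a_CO2 = 0.3639            # Pa m^6 / mol^2
b_CO2 = 0.04267e-3        # m^3 / mol

def vdw_pressure(n, V, T, a=a_CO2, b=b_CO2):
    """Pressure of a van der Waals gas (Pa)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

def vdw_B2(T, a=a_CO2, b=b_CO2):
    """Van der Waals second virial coefficient (m^3/mol), Eq. (2.13)."""
    return b - a / (R * T)

n, V, T = 1.0, 1.0e-3, 300.0          # 1 mol of CO2 in 1 liter at 300 K
print("ideal gas    :", n * R * T / V, "Pa")
print("van der Waals:", vdw_pressure(n, V, T), "Pa")

for T in (200.0, 400.0, 800.0, 1600.0):
    print(f"T = {T:6.0f} K   B2 = {vdw_B2(T):+.2e} m^3/mol")
# B2 changes sign at the Boyle temperature, T_B = a / (R b).
```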
2.C.6. Surface Tension [6]

Pure liquids in equilibrium with their vapor phase have a well-defined surface layer at the interface between the liquid and vapor phases. The mechanical properties of the surface layer can be described by thermodynamic state variables. The origin of the surface layer is the unequal distribution of intermolecular forces acting on the molecules at the surface. Molecules in the interior of the liquid are surrounded by, and interact with, molecules on all sides. Molecules at the surface interact primarily with molecules in the liquid, since the vapor phase (away from the critical point) is far less dense than the liquid. As a result, there is a strong tendency for the molecules at the surface to be pulled back into the liquid and for the surface of the liquid to contract. The molecular forces involved are huge. Because of this tendency for the surface to contract, work must be done to increase the free surface of the liquid. When the surface area is increased, molecules from the interior must be brought to the surface, and therefore work must be done against interior molecular forces. The work per unit area needed to extend the surface area is called the surface tension of the liquid. For most pure liquids, the surface tension does not depend on the area, and the equation of state has the form

σ = σ₀(1 − t/t′)ⁿ,   (2.16)

where t is the temperature in degrees Celsius, σ₀ is the surface tension at t = 0°C, t′ is an experimentally determined temperature within a few degrees of the critical temperature, and n is an experimental constant which has a value between one and two.

2.C.7. Electric Polarization [6-8]

When an electric field E is applied to a dielectric material, the particles composing the dielectric will be distorted and an electric polarization field, P (P is the induced electric dipole moment per unit volume), will be set up by the material. The polarization is related to the electric field, E, and the electric displacement, D, by the equation

D = ε₀E + P,   (2.17)

where ε₀ is the permittivity constant, ε₀ = 8.854 × 10⁻¹² C²/(N·m²). The electric field, E, has units of Newtons per coulomb (N/C), and the electric displacement and electric polarization have units of coulombs per square meter (C/m²). E results from both external and surface charges. The magnitude of the polarization field, P, will depend on the temperature. A typical equation of state for a homogeneous dielectric is

P = (a + b/T)E,   (2.18)

for temperatures not too low. Here a and b are experimental constants and T is the temperature in degrees Kelvin.

2.C.8. Curie's Law [6-8]

If we consider a paramagnetic solid at constant pressure, the volume changes very little as a function of temperature. We can then specify the state in terms of the applied magnetic field and the induced magnetization. When the external field is applied, the spins line up to produce a magnetization M (magnetic moment per unit volume). The magnetic induction field, B (measured in units of teslas, 1 T = 1 Wb/m²), the magnetic field strength, H (measured in units of amperes per meter), and the magnetization are related through the equation

B = μ₀H + μ₀M,   (2.19)

where μ₀ is the permeability constant (μ₀ = 4π × 10⁻⁷ T·m/A). The equation of state for such a system at room temperature is well approximated by Curie's law,

M = (nD/T)H,   (2.20)

where n is the number of moles, D is an experimental constant dependent on the type of material used, and the temperature, T, is measured in Kelvins.

2.D. THE LAWS OF THERMODYNAMICS [6]

Thermodynamics is based upon four laws.
Before we can discuss these laws in a meaningful way, it is helpful to introduce some basic concepts. A system is in thermodynamic equilibrium if the mechanical variables do not change in time and if there are no macroscopic flow processes present. Two systems are separated by a fixed insulating wall (a wall that prevents transfer of matter heat and mechanical work between the systems) if the thermodynamic state variables of one can be changed arbitrarily without causing changes in the thermodynamic state variables of the other. Two systems are separated by a conducting wall if arbitrary changes in the state variables of one cause changes in the state variables of the other. A conducting wall allows transfer of heat. An insulating wall prevents transfer of heat. 22 INTRODUCTION TO THERMODYNAMICS It is useful to distinguish among three types of thermodynamic systems. An isolated system is one which is surrounded by an insulating wall, so that no heat or matter can be exchanged with the surrounding medium. A closed system is one which is surrounded by a conducting wall so that heat can be exchanged but matter cannot. An open system is one which allows both heat and matter exchange with the surrounding medium. It is possible to change from one equilibrium state to another. Such changes can occur reversibly or irreversibly. A reversible change is one for which the system remains infinitesimally close to the thermodynamic equilibrium—that is, is performed quasi-statically. Such changes can always be reversed and the system brought back to its original thermodynamic state without causing any changes in the thermodynamic state of the universe. For each step of a reversible process, the state variables have a well-defined meaning. An irreversible or spontaneous change from one equilibrium state to another is one in which the system does not stay infinitesimally close to equilibrium during each step. Such changes often occur rapidly and give rise to flows and “friction” effects. After an irreversible change the system cannot be brought back to its original thermodynamic state without causing a change in the thermodynamic state of the universe. With these ideas in mind, we an now discuss the four laws of thermodynamics. 2.D.1. Zeroth Law: Two Bodies, Each in Thermodynamic Equilibrium with a Third System, are in Thermodynamic Equilibrium with Each Other The zeroth law is of fundamental importance to experimental thermodynamics because it enables us to introduce the concept of a thermometer and to measure temperatures of various systems in a reproducible manner. If we place a thermometer in contact with a given reference system, such as water at the triple point (where ice, water, and vapor coexist), then the mechanical variables describing the thermodynamic state of the thermometer (e.g., the height of a mercury column, the resistance of a resistor, or the pressure of a fixed volume container of gas) always take on the same values. If we then place the thermometer in contact with a third system and the mechanical variables do not change, then we say that the third system, the thermometer, and water at the triple point all have the same “temperature.” Changes in the mechanical variables of the thermometer as it is cooled or heated are used as a measure of temperature change. 2.D.2. 
First Law: Energy Is Conserved

The first law tells us that there is a store of energy in the system, called the internal energy, U, which can be changed by causing the system to do work, đW, or by adding heat, đQ, to the system. (We use the notation đW to indicate that the differential is not exact.) The change in the internal energy which results from these two processes is given by

dU = đQ − đW.   (2.21)

The work, đW, may be due to changes in any relevant extensive "mechanical" or chemical variable. In general it can be written

đW = P dV − J dL − σ dA − E·dP − H·dM − φ de − Σⱼ μⱼ dNⱼ,   (2.22)

where dU, dV, dL, dA, dP, dM, de, and dNⱼ are exact differentials, but đQ and đW are not because they depend on the path taken (on the way in which heat is added or work is done). The meaning of the first five terms in Eq. (2.22) was discussed in Section 2.C. The term −φ de is the work the system must do if it has an electric potential, φ, and increases its charge by an amount de. The last term, −μⱼ dNⱼ, is the chemical work required for the system to add dNⱼ neutral particles if it has chemical potential μⱼ. We may think of −P, J, σ, E, H, φ, and μⱼ as generalized forces, and we may think of dV, dL, dA, dP, dM, de, and dNⱼ as generalized displacements. It is useful to introduce a generalized mechanical force, Y, which denotes quantities such as −P, J, σ, E, H, and φ, and a generalized displacement, X, which denotes the corresponding displacements V, L, A, P, M, and e, respectively. Then the first law can be written in the form

dU = đQ + Y dX + Σⱼ μⱼ dNⱼ.   (2.23)

Note that μⱼ is a chemical force and dNⱼ is a chemical displacement. Note also that the pressure, P, enters with a sign opposite to that of the other generalized forces: if we increase the pressure, the volume decreases, whereas if we increase the force, Y, in any of the other cases, the corresponding extensive variable, X, increases.

2.D.3. Second Law: Heat Flows Spontaneously from High Temperatures to Low Temperatures

There are a number of ways to state the second law, the one given above being the simplest. Three alternative versions are [6]:

(a) The spontaneous tendency of a system to go toward thermodynamic equilibrium cannot be reversed without at the same time changing some organized energy, work, into disorganized energy, heat.

(b) In a cyclic process, it is not possible to convert heat from a hot reservoir into work without at the same time transferring some heat to a colder reservoir.

(c) The entropy change of any system and its surroundings, considered together, is positive and approaches zero for any process which approaches reversibility.

The second law is of immense importance from many points of view. From it we can compute the maximum possible efficiency of an engine which transforms heat into work. It also enables us to introduce a new state variable, the entropy, S, which is conjugate to the temperature. The entropy gives us a measure of the degree of disorder in a system, gives us a means for determining the stability of equilibrium states, and, in general, forms an important link between reversible and irreversible processes.

The second law is most easily discussed in terms of an ideal heat engine first introduced by Carnot. The construction of all heat engines is based on the observation that, if heat is allowed to flow from a high temperature to a lower temperature, part of the heat can be turned into work.
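Before turning to Carnot's construction, it may help to see the first law at work in the simplest setting. The Python sketch below (an illustration; the gas, temperature, and volumes are arbitrary choices) integrates đW = P dV along a reversible isotherm of an ideal gas. Since U depends only on T for an ideal gas, dU = 0 on the path, and the heat absorbed equals the work done, Q = W = nRT ln(V₂/V₁).

    # First-law bookkeeping for a reversible isothermal expansion of an ideal gas.
    # dU = 0 along an isotherm, so the heat absorbed equals the work done.
    import math

    R = 8.314
    n, T = 1.0, 300.0
    V1, V2 = 1.0e-3, 2.0e-3        # m^3

    # integrate dW = P dV numerically with P = nRT/V (midpoint rule)
    steps = 100000
    dV = (V2 - V1) / steps
    W = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(steps))

    W_exact = n * R * T * math.log(V2 / V1)
    print(W, W_exact)              # both about 1.73e3 J
    print("Q =", W_exact, "J, since dU = 0 on the isotherm")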
Carnot observed that temperature differences can disappear spontaneously without producing work. Therefore, he proposed a very simple heat engine consisting only of reversible steps, thereby eliminating wasteful heat flows. The Carnot engine consists of the four steps shown in Fig. 2.2. These include: (a) Isothermal (constant temperature) absorption of heat AQ, from a reservoir at a high temperature 7, (we use A to indicate a finite rather than an infinitesimal amount of heat) (the process 1 — 2). (b) Adiabatic (constant heat content) change in temperature from 7, to the lower value 7, (the process 2 — 3). isothermal ( y (m) x Fig. 2.2. A Carnot engine which runs on a substance with state variables, X and ¥. The processes 1 + 2 and 3 — 4 occur isothermally at temperatures 7, and 7,, respectively. The processes 2 — 3 and 4 — 1 occur adiabatically. The heat absorbed is AQ» and the heat ejected is AQ«3. The shaded area is equal to the work done during the cycle. The whole process takes place reversibly. THE LAWS OF THERMODYNAMICS 25 (c) Isothermal expulsion of heat AQ,43 into a reservoir at temperature 7, (the process 3 — 4). (d) Adiabatic return to the initial state at temperature 7, (the process 4 — 1). The work done by the engine during one complete cycle can be found by integrating the differential element of work Y dX about the entire cycle. We see that the total work AW, done by the engine is given by the shaded area in Fig. 2.2. The total efficiency 7) of the heat engine is given by the ratio of the work done to heat absorbed: _ AW ot 1= 5" (2.24) Since the internal energy U is state variable and independent of path, the total change AUjo for one complete cycle must be zero. The first law then enables us to write AU ror = AQror — AWior = 0 (2.25) and thus AW = AQrot = AQ12 + AO34 = AQi2 — AQa3. (2.26) If we combine Egs. (2.24) and (2.26), we can write the efficiency in the form oie AQss AQn (2.27) A 100% efficient engine is one which converts all the heat it absorbs into work. However, as we shall see, no such engine can exist in nature. The great beauty and utility of the Carnot engine lies in the fact that it is the most efficient of all heat engines operating between two heat reservoirs, each at a (different) fixed temperature. This is a consequence of the second law. To Prove this let us consider two heat engines, A and B (cf. Fig. 2.3), which run between the same two reservoirs 7}, and 7,. Let us assume that engine A is a heat engine with irreversible elements and B is a reversible Carnot engine. We will adjust the mechanical variables X and Y so that during one cycle both engines Perform the same amount of work (note that X4 and Y,4 need not be the same mechanical variables as Xp and Yg): Awa, = Aw®, = Aw. (2.28) Let us now assume that engine A is more efficient than engine B: ma > (2.29) 26 INTRODUCTION TO THERMODYNAMICS, AQ AQh AQh- AW AQ, - AW Fig. 2.3. Two heat engines, A and B, work together. Engine B acts as a heat pump while engine A acts as a heat engine with irreversible elements. Engine A cannot have a greater efficiency than engine B without violating the second law. and thus AW AW ae > ADE 2.30 Bok ” BOR 20) or AQ’, > AGA. (2.31) We can use the work produced by engine A to drive the Carnot engine as a refrigerator. Since the Carnot engine is reversible, it will have the same efficiency whether it runs as a heat engine or as a heat pump. 
The work, AW, produced by A will be used to enable the Carnot engine B to pump heat from the low-temperature reservoir to the high-temperature reservoir. The net heat extracted from reservoir 7, and delivered to reservoir 7, is AQ®, — AW — (AQ, — AW) = AQ?, — Ags. (2.32) If engine A is more efficient than engine B, then the combined system has caused heat to flow from low temperature to high temperature without any work being expended by an outside source. This violates the second Jaw and therefore engine A cannot be more efficient than the Carnot engine. If we now assume that both engines are Carnot engines, we can show, by similar arguments, that they both must have the same efficiency. Thus, we reach the following conclusion: No engine can be more efficient than a Carnot engine, and all Carnot engines have the same efficiency. From the above discussion, we see that the efficiency of a Carnot engine is completely independent of the choice of mechanical variables X and Y and therefore can only depend on the temperatures 7, and 7, of the two reservoirs. This enables us to define an absolute temperature scale. From Eq. (2.27) we see that AQs AQn =f (ty 7)» (2.33) THE LAWS OF THERMODYNAMICS 27 x Fig. 2.4. Two Carnot engines running between three reservoirs with temperatures 1h > T > Te have the same overall efficiency as one Carnot engine running between reservoirs with temperatures 7, > 7c. where f(Th,7-) is some function of temperatures 7, and 7,. The function (th) T) has a very special form. Let us consider two heat engines running between three reservoirs tT, > 7’ > 7. (cf. Fig. 2.4). We can write AQ«3 BO Lm ™)s (2.34) Qs _ ays 20s =f(r',t); (2.35) and AQss _ BOn =f (Thy Te)> (2.36) so that Ff (tay Te) =F (ts TMF (7's Te)- (2.37) Thus, f (7h, 7) = (7h)g7!(z) where g(r) is some function of temperature. One of the first temperature scales proposed but not widely used is due to W. Thomson (Lord Kelvin) and is called the Thomson scale {9}. It has the form AQss _e* non a cd (2.38) The Thomson scale is defined so that a given unit of heat AQ; flowing between temperatures r° > (r°—1) always produces the same amount of work, tegardless of the value of 7°. 28 INTRODUCTION TO THERMODYNAMICS A more practical scale, the Kelvin scale, was also introduced by Thomson. It is defined as AQy3 _ Te =<. 2.39) AQT (239) As we will see below, the Kelvin scale is identical to the temperature used in the ideal gas equation of state and is the temperature measured by a gas thermometer. For this reason, the Kelvin scale is the internationally accepted temperature scale at the present time. The units of degrees Kelvin are the same as degrees Celsius. The ice point of water at atmospheric pressure is defined as 0°C, and the boiling point is defined as 100°C. The triple point of water is 0.01°C. To obtain a relation between degrees Kelvin and degrees Celsius, we can measure pressure of a real dilute gas as a function of temperature at fixed volume. It is found experimentally that the pressure varies linearly with temperature and goes to zero at te = —273.15°C. Thus, from the ideal gas law, we see that degrees Kelvin, T, are related to degrees Celsius, f., by the equation T = (te + 273.15). (2.40) The triple point of water is fixed at T = 273.16 K. In Exercise 2.2, we compute the efficiency of a Carnot engine which uses an ideal gas as an operating substance. However, Carnot engines can be constructed using any of a variety of substances (some examples are left as problems). 
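As a small numerical companion to Exercise 2.2, the Python sketch below (illustrative only; the reservoir temperatures are arbitrary) converts Celsius temperatures to the Kelvin scale of Eq. (2.40) and evaluates the Carnot efficiency η = 1 − T_c/T_h.

    # Carnot efficiency between two reservoirs, using the Kelvin scale of Eq. (2.40).
    # Illustrative only; the reservoir temperatures are arbitrary.
    def kelvin(t_celsius):
        # Eq. (2.40): T = t_C + 273.15
        return t_celsius + 273.15

    def carnot_efficiency(th_celsius, tc_celsius):
        # eta = 1 - Tc/Th, with both temperatures in kelvins
        Th, Tc = kelvin(th_celsius), kelvin(tc_celsius)
        return 1.0 - Tc / Th

    print(carnot_efficiency(500.0, 20.0))   # about 0.62; no engine operating
                                            # between these reservoirs can do better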
Regardless of the operating substance, all Carnot engines have the same efficiency. | mi Exercise 2.2. Compute the efficiency of a Camot cycle (shown in the | figure below) which uses a monatomic ideal gas as an operating substance. whe] | | WY Va Va Va | Answer: The equation of state of an ideal gas is PV = nRT, where P = —Y | is the pressure and V = X is the volume, Tis the temperature in Kelvins, and | nis the number of moles. The internal energy is U = (3/2)nRT. The Carnot | THE LAWS OF THERMODYNAMICS 29 | cycle for an ideal gas is shown in the figure below. The part 1 — 2 is an | isothermal expansion of the system, and the path 3 — 4 is an isothermal | contraction. It is clear from the equation of state that the temperature, 7}, of | path 1 2 is higher than the temperature, T,, of path 3 4. The path | 2-3 is an adiabatic expansion of the system, and the path 4 1 is an adiabatic contraction. We shall assume that n is constant during each cycle. | Let us first consider the isothermal paths. Since the temperature is | constant along these paths, dU =3nRdT =0. Thus, along the path 12,40 =€W =nRT;(dV/V). The heat absorbed along the path | 1 2is | dV V2 AQi2 = nRT, — = mRT;|n{ — }. 1 O12 +], Vv h (7?) (1) AQsa = nRTeIn (F)- 2) Vs, Since V2 > V;, AQi2 > 0 and heat is absorbed along the path 1 — 2. Since V3 > Va, AQs4 < 0 and heat is ejected along the path 3 > 4. Let us next consider the adiabatic paths. Along the adiabatic path, 4Q = 0 = dU + PdV = (3/2)nRdT + PdV. If we make use of the equation | | of state, we find (3/2)(dT/T) = —(dV/V). We now integrate to find | T3/2y = constant for an adiabatic process. Thus, along the paths 2 3 and | 4-41, respectively, we have | | | The heat absorbed along the path 3 > 4 is | | | | T.V33 = T,V2? and TeV2 = T,V77. ¢) | For the entire cycle, we can write AU =AQi — AW = 0. Thus | AW = AQue = AQi2 + AQs. The efficiency of the Carnot cycle is ya bet AO _Telm(Vs/Va) _y Te AQn AQ Th In(V2/V1) T,’ | | | since from Eq. (3) we have | (4) V3 \ Va We can use the Carnot engine to define a new state variable called the entropy. All Carnot engines have an efficiency AQs _,_Te n= 1-560 i, (2.41) 30 INTRODUCTION TO THERMODYNAMICS Fig, 2.5. An arbitrary reversible heat en- gine is composed of many infinitesimal Camot engines. The area enclosed by the curve is equal to the work done by the heat X ~ engine. (cf. Exercise 2.2) regardless of operating substance. Using Eq. (2.41), we can write the following relation for a Carnot cycle: AQi2 | AQss T * T (2.42) (note the change in indices in AQs,4). Equation (2.42) can be generalized to the case of an arbitrary reversible heat engine because we can consider such an engine as being composed of a sum of many infinitesimal Carnot cycles (cf. Fig. 2.5). Thus, for an arbitrary reversible heat engine we have #2 _ | Pao. (2.43) The quantity =#2 ds = (2.44) is an exact differential and the quantity S, called the entropy, may be considered a new state variable since the integral of dS about a closed path gives zero. No heat engine can be more efficient than a Carnot engine. Thus, an engine which runs between the same two reservoirs but contains spontaneous or irreversible processes in some part of the cycle will have a lower efficiency, and we can write AQs | Te iad 2.45. AQn~ Th (245) and AQn AQ: 7 <% (2.46) For an arbitrary heat engine which contains an irreversible part, Eq. (2.46) gives THE LAWS OF THERMODYNAMICS 31 the very important relation je <0. 
(2.47) For an irreversible process, @Q/T can no longer be considered an exact differential. A system may evolve between two thermodynamic states either by a reversible path or by a spontaneous, irreversible path. For any process, reversible or irreversible, the entropy change depends only on the initial and final thermodynamic states of the system, since the entropy is a state variable. If the system evolves between the initial and final states via a reversible path, then we can compute the entropy change along that path. However, if the system evolves between the initial and final states via an irreversible path, then we must construct a hypothetical reversible path between the initial and final states in order to use the equations of thermodynamics to compute the entropy change during the spontaneous process. For the irreversible path, the heat absorbed by the system will be less than that along the reversible path [cf. Eqs. (2.45)— (2.47)). Therefore, {.,@Q/T < [,,@/T. This means that for an irreversible process, [..y#Q/T does not contain all contributions to the entropy change. Some of it comes from the disorder created by spontaneity. This result is usually written in the form #2 T where d;S denotes the entropy production due to spontaneous processes. For a reversible process, d;S = 0 and dS = (1/T)@@Q so the entropy change is entirely due to a flow of heat into or out of the system. For a spontaneous (irreversible)) process, djS > 0. For an isolated system we have ¢Q = 0, and we obtain the important relation ds =" +45, (2.48) dS = dS >0, (2.49) where the equality holds for a reversible process and the inequality holds for a spontaneous or irreversible process. Since the equilibrium state is, by definition, a state which is stable against spontaneous changes, Eq. (2.49) tells us that the equilibrium state is a state of maximum entropy. As we shall see, this fact gives an important criterion for determining the stability of the equilibrium state for an isolated system. 2.D.4. Third Law: The Difference in Entropy Between States Connected by a Reversible Process Goes to Zero in the Limit T > 0 K [9-11] The third law was first proposed by Nernst in 1906 on the basis of experimental observations and is a consequence of quantum mechanics. Roughly speaking, a 32 INTRODUCTION TO THERMODYNAMICS So s Fig. 2.6, The fact that curves ¥ = 0 and Y = Y; must approach the same point (the third law) makes it impossible to reach absolute zero by a finite number of reversible steps. system at zero temperature drops into its lowest quantum state and in this sense becomes completely ordered. If entropy can be thought of as a measure of disorder, then at T = 0 K it must take its lowest value. An alternative statement of the third law, and a direct consequence of the above statement, is, It is impossible to reach absolute zero in a finite number of steps if a reversible process is used. This alternative statement is easily demonstrated by means of a plot in the S-T7 plane. In Fig. 2.6 we have plotted the curves as a function of S and T for two states Y = 0 and Y = Y; for an arbitrary system. (A specific example might be a paramagnetic salt with Y =H.) We can cool the system by alternating between the two states, adiabatically and isothermally. From Eqs. (2.5) and (2.6), we write (5) (Gs), 0° (or) es As we shall show in Section 2.H, thermal stability requires that (05/87), > 0. 
Equation (2.50) tells us that if 7 decreases as Y increases isentropically, then S must decrease as Y decreases isothermally, as shown in Fig. 2.6. For the process 1 — 2 we change from state Y = Y; to state Y = 0 isothermally, thus squeezing out heat, and the entropy decreases. For process 2 — 3, we increase Y adiabatically from Y = 0 to Y = Y; and thus decrease the temperature. We can repeat these processes as many times as we wish. However, as we approach T = 0K, we know by the third law that the two curves must approach the same point and must therefore begin to approach each other, thus making it impossible to reach T = 0 K in a finite number of steps. Another consequence of the third law is that certain derivatives of ‘the entropy must approach zero as T — 0 K. Let us consider a process at T = 0 K such that Y > Y + d¥ and X — X + dX. Then the change in entropy if ¥, T, and FUNDAMENTAL EQUATION OF THERMODYNAMICS 33 N are chosen as independent variables is (assume dN = 0) os ds= (5) dy, (2.51) OY] v=o or if X, 7, and N are chosen as independent we obtain ds = ) dx. (2.52) OX] y,r=0 Thus, if the states (Y,7=OK) and (Y+dY,7=OK) or the states (X,T =0K) and (X + dX,T = 0K) are connected by a reversible process, we must have dS = 0 (third law) and therefore (Be) ynea™ o (2.53) (Fanon? (2.54) Equations (2.53) and (2.54) appear to be satisfied by real substances. and 2.E. FUNDAMENTAL EQUATION OF THERMODYNAMICS [10] The entropy plays a central role in both equilibrium and nonequilibrium thermodynamics. It can be thought of as a measure of the disorder in a system. As we shall see in Chapter 7, entropy is obtained microscopically by state counting. The entropy of an isolated system is proportional to the logarithm of the number of states available to the system. Thus, for example, a quantum system in a definite quantum state (pure state) has zero entropy. However, if the same system has finite probability of being in any of a number of quantum States, its entropy will be nonzero and may be quite large. The entropy is an extensive, additive quantity. If a system is composed of a number of independent subsystems, then the entropy of the whole system will be the sum of the entropies of the subsystems. This additive property of the entropy is expressed mathematically by the relation S(AU, AX, {ANj}) = AS(U,X, {Ni})- (2.55) That is, the entropy is a first-order homogeneous function of the extensive state variables of the system. If we increase all the extensive state variables by a factor 2, then the entropy must also increase by a factor \. 34 INTRODUCTION TO THERMODYNAMICS Differential changes in the entropy are related to differential changes in the extensive state variables through the combined first and second laws of thermodynamics: TaS > dQ = dU — Ydx — wd. (2.56) 7 The equality holds if changes in the thermodynamic state are reversible. The inequality holds if they are spontaneous or irreversible. Equations (2.55) and (2.56) now enable us to define the Fundamental Equation of thermodynamics. Let us take the derivative of AS with respect to 2: d as da os d 59 = (5) ga!) * (BR) ya +E ( ea ) aon) (2.57) i ON} UX{Negs} dx However, from Eq. (2.56) we see that 2s) oS) = 2.58 (= x{N} T ( ) 2s) y as — (2.59 e Ae ) and as ) 4 I =-4. (2.60) Ge, ux(Mwy 7 Equations (2.58)(2.60) are called the thermal, mechanical, and chemical equations of state, respectively. The mechanical equation of state, Eq. (2.59), is the one most commonly seen and is the one which is described in Section 2.C. 
If we now combine Eqs (2.57)-(2.60), we obtain TS =U -XY- (2.61) Equation (2.61) is called the Fundamental Equation of thermodynamics (it is also known as Euler’s equation) because it contains all possible thermodynamic information about the thermodynamic system. If we take the differential of Eq. (2.61) and subtract Eq. (2.56) (we will take the reversible case), we obtain FUNDAMENTAL EQUATION OF THERMODYNAMICS 35 another important equation, (2.62) SaT + XA¥ + > Nydyyy = i which is called the Gibbs-Duhem equation. The Gibbs~Duhem equation relates differentials of intensive state variables. For a monatomic system, the above equations simplify somewhat if we work with densities. As a change of pace, let us work with molar densities. For single component system the Fundamental Equation can be written TS = U— YX — yn and the combined first and second laws (for reversible processes) can be written TdS = dU — YdX — pidn. Let us now introduce the molar entropy, s = S/n, the molar density, x = X/n, and the molar internal energy, u = U/n. Then the Fundamental Equation becomes Ts =u—Yx—p, (2.63) and the combined first and second laws become (for reversible processes) Tds = du — Ydx. (2.64) Therefore, (Os/Ou), = 1/T and (As/Ax), = —Y/T. The Gibbs-Duhem equa- tion is simply du = -sdT — xdY, (2.65) and therefore the chemical potential has the form, » = 4(T,Y), and is a function only of the intensive variables, Tand Y. Note also that s = —(0u/8T) and x = —(0u/OY);. In Exercise 2.3, we use these equations to write the Fundamental Equation for an ideal monatomic gas. @ EXERCISE 2.3. The entropy of n moles of a monatomic ideal gas is S = (5/2)nR + nRin[(V/Vo)(no/n)(T/To)*/], where Vo,no, and Ty are Constants (this is called the Sackur—Tetrode equation). The mechanical equation of state is PV =nRT. (a) Compute the internal energy. (b) Compute the chemical potential. (c) Write the Fundamental Equation for an ideal monatomic ideal gas and show that it is a first-order homogeneous function of the extensive state variables. Answer: It is easiest to work in terms of densities. The molar entropy can be | written s = (5/2)R + Rin{(v/v0)(T/To)”](v = V/n is the molar volume), | | and the mechanical equation of state is Pv = RT. 36 INTRODUCTION TO THERMODYNAMICS (a) The combined first and second law gives du = Tds— Pav. If we further note that ds = (8s/8T),dT + (As/Av),dv, then | (Ne (54) a+ ir() Fle = Spar, (1) | since (0s/8T), = (3R/2T) and (Os/Ov), =R/v. Therefore, the molar internal energy is u = RT + up where uo is a constant, and the total internal energy is U = nu = 3nRT + Up, where Up = nuo. (b) Let us rewrite the molar entropy in terms of pressure instead of molar volume. From the mechanical equation of state, v = (RT/P) and vo = (RTo/Po). Therefore, s = $R + Rln{(Po/P)(T/T»)*'”]. From the Gibbs—Duhem equation, (0/8T)p = —s = —(3R+ Rin[(Po/P) (T/T)°"}) and (Ou/OP); =v = RT/P. If we integrate these we | obtain the following expression for the molar chemical potential: | 5/27 | p= -RTIn [Po (F) |. (2) LP To J | | (c) Let us rewrite the entropy in terms of the internal energy, volume, and | number of moles. We obtain . 5 V nos? (U\*?] S=2nR+nRinj—(™@)" (=) |. get n(n) (a) j (3) Equation (3) is the Fundamental Equation for an ideal monatomic gas. It clearly is a first-order homogeneous function of the extensive variables. It is interesting to note that this classical ideal gas does not obey the third law of thermodynamics and cannot be used to describe systems at very low temperatures. 
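The result of Exercise 2.3 is easy to check numerically. The Python sketch below (the reference values V₀, n₀, T₀ and the state point are arbitrary) verifies that the fundamental equation of the ideal gas is first-order homogeneous in (U, V, n) and that the thermal equation of state 1/T = (∂S/∂U)_{V,n} reproduces U = (3/2)nRT.

    # Numerical check of Exercise 2.3: the ideal-gas fundamental equation
    #   S = (5/2) n R + n R ln[ (V/V0) (n0/n)^(5/2) (U/U0)^(3/2) ]
    # is first-order homogeneous, and (dS/dU)_{V,n} = 1/T gives U = (3/2) n R T.
    import math

    R = 8.314
    V0, n0, T0 = 1.0e-3, 1.0, 300.0
    U0 = 1.5 * n0 * R * T0

    def S(U, V, n):
        return 2.5 * n * R + n * R * math.log((V / V0) * (n0 / n)**2.5 * (U / U0)**1.5)

    U, V, n = 1.2 * U0, 2.0e-3, 1.0

    lam = 3.0                                   # extensivity: S(lam U, lam V, lam n) = lam S(U, V, n)
    print(S(lam * U, lam * V, lam * n), lam * S(U, V, n))

    h = 1.0e-6 * U                              # thermal equation of state by finite differences
    dSdU = (S(U + h, V, n) - S(U - h, V, n)) / (2 * h)
    print(1.0 / dSdU, 2.0 * U / (3.0 * n * R))  # both give T = 360 K here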
At very low temperatures we must include quantum corrections to the ideal gas equation of state. 2.F, THERMODYNAMIC POTENTIALS [11] In conservative mechanical systems, such as a spring or a mass raised in a gravitational field, work can be stored in the form of potential energy and subsequently retrieved. Under certain circumstances the same is true for thermodynamic systems. We can store energy in a thermodynamic system by doing work on it through a reversible process, and we can eventually retrieve that energy in the form of work. The energy which is stored and retrievable in the form of work is called the free energy. There are as many different forms of free energy in a thermodynamic system as there are combinations of THERMODYNAMIC POTENTIALS 37 constraints. In this section, we shall discuss the five most common ones: internal energy, U; the enthalpy, H; the Helmholtz free energy, A; the Gibbs free energy, G; and the grand potential, 2. These quantities play a role analogous to that of the potential energy in a spring, and for that reason they are also called the thermodynamic potentials. 2.F.1. Internal Energy From Eq. (2.61) the fundamental equation for the internal energy can be written (2.66) where 7, ¥, and pj are considered to be functions of S, X, and {Nj} (ef. Eqs. (2.58)-(2.60)]. From Eq. (2.56), the total differential of the internal energy can be written dU < TaS + YaX + )~ waNj. (2.67) 7 The equality holds for reversible changes, and the inequality holds for changes which are spontaneous. From Eq. (2.67) we see that ‘av T= (3) , (2.68 8S] x.) ; a) Y=(|~ 2.69) (Fe sion (2.69) and au = (5) (2.70) 1 NON 5.x, ti) We can use the fact that dU is an exact differential to find relations between derivatives of the intensive variables, 7, ¥, and H;. From Eq. (2.4) we know, for example, that Ole Bulge 2 XV AST xo} sey, (8 \°X/ sc} cays . 38 INTRODUCTION TO THERMODYNAMICS From Eqs. (2.68), (2.69), and (2.71), we obtain ar ay eee 2.72) (sy (ay (2.72) (i+ 1) additional relations like Eq. (2.72) exist and lead to the identities fu! i ON) sxivigy OST x.¢n53 Bean) 7 (Bt) pal =(2) (2.74) (ar sxmigy OX sin) and Ou ' (ae) = (Ht) . (2.75) ONi) sx,¢0a) NONI 5.x} Equations (2.72)-(2.75) are extremely important both theoretically and experimentally because they provide a relation between rates of change of seemingly diverse quantities. They are called Maxwell relations. For a substance with a single type of particle, the above equations simplify if we work with densities. Let u = U/n denote the molar internal energy. Then the Fundamental Equation can be written u = Ts + Yx +p, where s is the molar entropy and x is a molar density. The combined first and second laws (for reversible processes) are du = Tds + Ydx. Therefore we obtain the identities (Qu/Os), =T and (Ou/dx), = Y. Maxwell relations reduce to (O7/0x), = (dY/ds),. The internal energy is a thermodynamic potential or free energy because for processes carried out reversibly in an isolated, closed system at fixed X and {Nj}, the change in internal energy is equal to the maximum amount of work that can be done on or by the system. As a specific example, let us consider a PVT system (cf. Fig. 2.7). We shall enclose a gas in an insulated box with fixed total volume and divide it into two parts by a movable conducting wall. We can do work on the gas or have the gas do work by attaching a mass in a gravitational field to the partition via a pulley and insulated string. 
To do work reversibly, we assume that the mass is composed of infinitesimal pieces which can be added or removed one by one. If PA + mg > P2A, then work is done on the gas by the mass, and if P,A + mg < P2A, the gas does work on the mass. The first law can be written Au = AQ- Aw, (2.76) THERMODYNAMIC POTENTIALS: 39 Fig. 2.7. For a reversible process in a closed, insulated box of fixed size (AS = 0, AV = 0, AN; = 0), the work done in lifting the weight will be equal to the change in the intemal energy, (AU) yy = —AWree- where AU is the change in total internal energy of the gas, AQ is the heat flow through the walls, and AW can be divided into work done due to change in size of the box, [ PdV, and work done by the gas in raising the weight, AWree: AW= fra + AWyree- (2.77) For a reversible process, AQ = f TdS. For the reversible process pictured in Fig. 2.7, AQ = 0, AV = 0, and AN, = 0 (if no chemical reactions take place). Therefore, (AU) 5 yy) = —SWeee- (2.78) Thus, for a reversible process at constant S, V, and Nj, work can be stored in the form of internal energy and can be recovered completely. Under these conditions, internal energy behaves like a potential energy. For a spontaneous process, work can only be done at constant S, V, and {Nj} if we allow heat to leak through the walls. The first and second laws for a spontaneous process take the form [au = au <[ras— [Pav - awn + ¥> | wan, (2.79) d where the integrals are taken over a reversible path between initial and final States and not the actual spontaneous path. We can do work on the gas Spontaneously by allowing the mass to drop very fast. Then part of the work oes into stirring up the gas. In order for the process to occur at constant entropy, some heat must leak out since AQ < fT dS=0. Thus, for a spontaneous process (AU) sv qu} < ~AWree- (2.80) Not all work is changed to internal energy and is retrievable. Some is wasted in stirring the gas. (Note that for this process the entropy of the universe has increased since heat has been added to the surrounding medium.) 40 INTRODUCTION TO THERMODYNAMICS For processes involving mechanical variables Y and X we can write Eqs. (2.78) and (2.80) in the form (AU) sx (wy) S (—AWeree), (2.81) where AWyee is any work done by the system other than that required to change X. For a reversible process at constant S, X, and {Nj}, work can be stored as internal energy and can be recovered completely. If a process takes place in which no work is done on or by the system, then Eq. (2.81) becomes (AU)sx4n SO (2.82) and the internal energy either does not change (reversible process) or decreases (spontaneous process). Since a system in equilibrium cannot change its state spontaneously, we see that an equilibrium state at fixed S, X, and {Nj} is a state of minimum internal energy. 2.F.2. Enthalpy The internal energy is the convenient potential to use for processes carried out at constant X, S, and {Nj}. However, it often happens that we wish to study the thermodynamics of processes which occur at constant S, ¥, and {Nj}. Then it is more convenient to use the enthalpy. The enthalpy, H, is useful for systems which are thermally isolated and closed but mechanically coupled to the outside world. It is obtained by adding to the internal energy an additional energy due to the mechanical coupling: H=U-XY=ST +)" WN. (2.83) 7 The addition of the term —XY has the effect of changing the independent variables from (S, X, Nj) to (S, ¥, N;) and is called a Legendre transformation. If we take the differential of Eq. 
(2.83) and combine it with Eq. (2.67), we obtain dH wah; (2.84) 7 and, therefore, OH T=(— : 2.85, (55) samy ae aH -(=> , 2.86 (FF) soy oe THERMODYNAMIC POTENTIALS 41 and oH = (a) . (2.87) UT) SAM) Since dH is an exact differential, we can use Eq. (2.4) to obtain a new set of Maxwell relations: an) () 2) =-(S) , (2.88) (or simp \8S/ v.40) ay 4) -(# i) (2.89) ONS) 5.x {xis} rN) ay (3) aon = “COPD! a0 ON) samen — \OY/ samp and Bona” (am) (2.91) (ae svtiad ONG) sy tv which relate seemingly diverse partial derivatives. For a substance with a single type of molecule, Eqs. (2.84)-(2.91) become particularly simple if we work with densities. Let h = H/n denote the molar enthalpy. Then the fundamental equation for the molar enthalpy can be written h=u-—xY =sT +. The exact differential of the molar enthalpy is dh = Tds — xdY (for reversible processes), which yields the identities (0h/Os)y = T and (8h/@Y), = x. Maxwell’s relations reduce to (87 /OY), = —(Ox/As)y. In Exercise 2.4, we compute the enthalpy for a monatomic ideal gas in terms of its natural variables. @ EXERCISE 2.4. Compute the enthalpy for n moles of a monatomic | ideal gas and express it in terms of its natural variables. The mechanical | equation of here is PV = nRT and the entropy is S = 3nR + nRIn{(V/Vo) (r0/n)(1/To)"”). | Answer: Let us write the molar entropy in terms of temperature and | pressure. It is s = $R + Rln|(Po/P)(T/To)”””]. Also note that when P = Po | and T = To, s = so = $R. Now since dh = Tds + vdP we have | i oh PYF tomy | i (#) =r-n(%) els) /50 (a) | 42 INTRODUCTION TO THERMODYNAMICS. | and | oh RT | (&) ap @) | If we integrate, we find h = $RTo(P/Po)”/*e*-%)/" = $RT. In terms of ; temperature, the enthalpy is h = $RT. There is an easier way to obtain these | results. From Exercise 2.3, the molar intemal energy is u=3RT. The | fundamental equation for the molar enthalpy is h = u + vP, where v=V/n | is the molar volume. Since v = RT/P, we obtain h = 3RT and H = 3nRT. For a YXT system, the enthalpy is a thermodynamic potential for reversible processes carried out at constant ¥. The discussion for the enthalpy is completely analogous to that for the internal energy except that now we allow the extensive variable X to change and maintain the system at constant ¥ We then find AH< [ras- [xar- AWeee +> [wan (2.92) i where the equality holds for a reversible process and the inequality holds for a spontaneous process (AWree is defined in Section 2.F.1). Therefore, (AH)s yc S (—AWiree) (2.93) and we conclude that, for a reversible process at constant S, Y, and {Nj}, work can be stored as enthalpy and can be recovered completely. If a process takes place at constant S, ¥, and {Nj} in which no work is done on or by the system, then (AF) sy,¢n) $0. (2.94) Since the equilibrium state cannot change spontaneously, we find that the equilibrium state at fixed S, Y, and {Nj} is a state of minimum enthalpy. 2.F.3. Helmholtz Free Energy For processes carried out at constant 7, X, and {Nj}, the Helmholtz free energy corresponds to a thermodynamic potential. The Helmholtz free energy, A, is useful for systems which are closed and thermally coupled to the outside world but are mechanically isolated (held at constant X). We obtain the Helmholtz free energy from the internal energy by adding a term due to the thermal coupling: A=U-ST = YX + Spiny. (2.95) i THERMODYNAMIC POTENTIALS 43 The addition of —ST is a Legendre transformation which changes the independent variables from (S,X,{Nj}) to (T,X, {Nj}). If we take the differential of Eq. 
(2.95) and use Eq. (2.67), we find dA < -SdT +¥dX + )— WaNj. 7 Therefore, --() OT) xy" *) Y=(— (i TAN} and w= (a) NON) rx) Again, from Eq. (2.4), we obtain Maxwell relations (Boy) 7 Gr) OX) remy NOT) xy ase ONS) 4.x,¢Ny) AT] xu’ (2) _ ON;/ 7. (045) 8 nr ON) 7x,40m4:5 SONI) rx,Q0%95 and for the system. (2.96) (2.97) (2.98) (2.99) (2.100) (2.101) (2.102) (2.103) We can write the corresponding equations in terms of densities. Let us Consider a monatomic substance and let a = A/n denote the molar Helmholtz free energy. Then the fundamental equations for the molar Helmholtz free energy is a=u—sT =xY +. The combined first and second laws (for Teversible processes) can be written da = —sdT + ¥dx so that (Oa/8T), = —s 44 INTRODUCTION TO THERMODYNAMICS and (da/dx); = Y. Maxwells relations reduce to (Os/Ax), = —(0Y/@T),. In Exercise 2.5, we compute that Helmholtz free energy for a monatomic ideal gas in terms of its natural variables. , ML EXERCISE 2.5, Compute the Helmholtz free energy for n moles of a monatomic ideal gas and express it in terms of its natural variables. The | mechanical equation of state is PV =nRT and the entropy is S = $nR + nRin|(V/Vo)(n0/n)(T/To)””). Answer: Since da = ~sdT ~ Pdv we have Bete") 0 and | (5), --P--%. (2) | | If we integrate, we find a=—RT—RT In{(v/vo)(T/To)""] and A= =nRT ~ nRT In{(V/Vo)(no/n)(T/To)*”. | For a YXT system, the Helmholtz free energy is a thermodynamic potential for reversible processes carried out at constant 7, X, and {Nj}. For a change in the thermodynamic state of the system, the change in the Helmholtz free energy can be written DA< ~ | sar+ [vax — Wine +3 | wa, (2.104) i where the inequality holds for spontaneous processes and the equality holds for reversible processes (AWree is defined in Section 2.66). For a process carried out at fixed 7, X, and {Nj}, we find (AA)rx(yy S (—AWiree), (2.105) and we conclude that for a reversible process at constant T, X, and {Nj}, work can be stored as Helmholtz free energy and can be recovered completely. If no work is done for a process occurring at fixed T, X, and {Nj}, Eq. (2.105) becomes (AA)z. x,¢ny S 9- (2.106) Thus, an equilibrium state at fixed T, X, and {Nj} is a state of minimum Helmholtz free energy. THERMODYNAMIC POTENTIALS 45 2.F.4. Gibbs Free Energy For processes carried out at constant ¥, 7, and {Nj}, the Gibbs free energy corresponds to the thermodynamic potential. Such a process is coupled both thermally and mechanically to the outside world. We obtain the Gibbs free energy, G, from the internal energy by adding terms due to the thermal and mechanical coupling, G=U-TS~xY¥ =" uN. (2.107) 7 In this way we change from independent variables (S, X, {N;}) to variables (T, ¥, {Ni}). If we use the differential of Eq. (2.106) in Eq. (2.67), we obtain dG < -SdT — Xd¥ +) ujdNj, (2.108) 7 so that ‘a Sa -() , (2.109) OT) ym) ag x--(3) , (2.110) OY) rin and aG ; = (im) . (2.111) 7 AON ry.ANia} The Maxwell relations obtained from the Gibbs free energy are a) () oS) ee 2.112 (sr ryy — \OT/ yn en) oe (#) =- (#) , (2.113) ON) 1.y,4Nig} OT} yw) 3) = -(#) . (2.114) Ni) TY. (Niah OY) ram and 7 OH = (94 2.115 (ae rxtmiad ONT ry mig) us) and again relate seemingly diverse partial derivatives. 46 INTRODUCTION TO THERMODYNAMICS As we found in earlier sections, we can write the corresponding equations in terms of densities. We will consider a monomolecular substance and let g = G/n denote the molar Gibbs free energy. 
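Before specializing to molar quantities for the Gibbs free energy, the result of Exercise 2.5 can be checked numerically. The Python sketch below (using finite differences; the reference values and the state point are arbitrary) confirms that the molar Helmholtz free energy a(T, v) = −RT − RT ln[(v/v₀)(T/T₀)^(3/2)] reproduces the pressure through P = −(∂a/∂v)_T and the molar entropy through s = −(∂a/∂T)_v.

    # Check of Exercise 2.5: the ideal-gas molar Helmholtz free energy generates
    # the mechanical equation of state and the molar entropy by differentiation.
    import math

    R, v0, T0 = 8.314, 1.0e-3, 300.0

    def a(T, v):
        return -R * T - R * T * math.log((v / v0) * (T / T0)**1.5)

    T, v = 350.0, 2.5e-3
    hT, hv = 1.0e-4, 1.0e-9

    P = -(a(T, v + hv) - a(T, v - hv)) / (2 * hv)     # P = -(da/dv)_T
    s = -(a(T + hT, v) - a(T - hT, v)) / (2 * hT)     # s = -(da/dT)_v

    print(P, R * T / v)                                        # both about 1.16e6 Pa
    print(s, 2.5 * R + R * math.log((v / v0) * (T / T0)**1.5)) # molar entropies agree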
Then the fundamental equation for the molar Gibbs free energy is g=u—sT—xY = and the molar Gibbs free energy is equal to the chemical potential (for a monomolelcular substance). The combined first and second laws (for reversible processes) can be written dg = —sdT —xdY so that (Og/8T), = —s and (Og/OY), = —x. Maxwells relations reduce to (0s/OY); = +(0x/8T)y. For a monatomic substance, the molar Gibbs free energy is equal to the chemical potential. For a YXT system, the Gibbs free energy is a thermodynamic potential for reversible processes carried out at constant 7, ¥, and {Nj}. For a change in the thermodynamic state of the system, the change in Gibbs free energy can be written AG < ~ [sar —[xav— AW + [1a (2.116) j where the equality holds for reversible processes and the inequality holds for spontaneous processes (AWfree is defined in Section 2.F.1). For processes at fixed T, ¥, and {Nj}, we obtain (AG)ry,yy S (—AWiee)- (2.117) Thus, for a reversible process at constant T, Y, and {Nj}, work can be stored as Gibbs free energy and can be recovered completely. For a process at fixed T, ¥, and {Nj} for which no work is done, we obtain (AG)ry¢yy $9, (2.118) and we conclude that an equilibrium state at fixed T, Y, and {Nj} is a state of minimum Gibbs free energy. @ EXERCISE 2.6. Consider a system which has the capacity to do work, @W =—-YdX +@W’. Assume that processes take place spontaneously so that dS = (1/T)4Q + d;S, where d;S is a differential element of entropy due to the spontaneity of the process. Given the fundamental equation for the | Gibbs free energy, G = U — XY — TS, show that —(dG),, = dW’ + Td,S. Therefore, at fixed Y and 7, all the Gibbs free energy is available to do work for reversible processes. However, for spontaneous processes, the amount of | work that can be done is diminished because part of the Gibbs free energy is | used to produce entropy. This result is the starting point of nonequilibrium thermodynamics. THERMODYNAMIC POTENTIALS 47 | Answer: From the fundamental equation for the Gibbs free energy, we [ know that dG =dU —XdY¥—YdX —TdS—SdT. Also we know that dU =40+YdX—4W’', so we can write dg=4Q- AW’! —XdY— | [dS — SdT. For fixed Y and T we have (dG)y ; = 4Q — dW’ — TdS. Now | remember that dS = (1/T)@Q + d,S. Then we find (dG) y = —¢W’ — Td;S. | Note that the fundamental equation, G = U ~ XY — TS contains the starting | Point of nonequilibrium thermodynamics. | For mixtures held at constant temperature, 7, and pressure, P, the Gibbs free energy is a first-order homogeneous function of the particle numbers or particle mole numbers and this allows us to introduce a “partial” free energy, a “partial” volume, a “partial” enthalpy, and a “partial” entropy for each type of particle. For example, the chemical potential of a particle of type i, when written as jt; = (8G/Oni)r. ¢y,,)» 18 a partial molar Gibbs free energy. The total Gibbs free energy can be written Ny G= omni ie) . (2.119) i=1 TPAnjgi} The partial molar volume for a particle of type i is vi = (OV/Oni)z pjn,,,)> and the total volume can be written “(vy v dona on (Fret (2.120) The partial molar entropy for particle of type i is s; = (BS/8ni)p-p fa,» and the total entropy can be written “(as Ss rsa don "(rp (2.121) Amigsd? Because the enthalpy is defined, H = G + TS, we can also define a partial molar enthalpy, hi = (OH/Onj)7 ping} = Hi + Ts. Then the total enthalpy can be written H = 7, njh;. These quantities are very useful when we describe the Properties of mixtures in later chapters. @ EXERCISE 2.7. 
Consider a fluid with electric potential, ¢, containing v | different kinds of particles. Changes in the internal energy can be written, dU = 4Q ~ PdV + dde + Y-¥ wydny. Find the amount of Gibbs free energy | needed to bring dn; moles of charged particles of type i into the system at | fixed temperature, pressure, and particle number, n,(j # i), in a reversible | 48 INTRODUCTION TO THERMODYNAMICS, { manner. Assume that particles of type i have a valence, z. Note that the amount of charge in one mole of protons is called a Faraday, F. Answer: The fundamental equation for the Gibbs free energy, G=U+PV —TS, yields dG = dU + PdV + VdP ~ T dS — SdT. There- fore, dG = 40+ dde + DY wjdnj + VdP — TdS —SdT. For a reversible process, dS = (1/T)#Q, and dG = dde + Sy pjdnj + VdP — SdT. Now note that the charge carried by dn; moles of particles of type, i, is de = z;Fdn;. Thus, the change in the Gibbs free energy can be written y | dG = ode + VAP — SaT + > pusdny 7 D (1) | = +VaP -SdT + (Fo + mi)dnj + > pdr. j#i For fixed P, 7, and nj(j # i), the change in the Gibbs free energy is (4G) pry, = (FO + pi)dni- (2) From Exercise 2.6, we see that this is just the work needed to add dn; moles of charged particles of type, i, keeping all other quantities fixed. The quantity fi = UFO + Hi, (3) is called the electrochemical potential. 2.F.5. Grand Potential A thermodynamic potential which is extremely useful for the study of quantum systems is the grand potential. It is a thermodynamic potential energy for processes carried out in open systems where particle number can vary but 7, X, and {1;} are kept fixed. The grand potential, 2, can be obtained from the internal energy by adding terms due to thermal and chemical coupling of the system to the outside world: Q=U~TS~ 7 uN; = xy. (2.122) 7 The Legendre transformation in Eq. (2.122) changes the independent variables from (S,X, {N;}) to (T,X, {u}}). If we add the differential of Eq. (2.122) to Eg. (2.67), we obtain dQ < -SdT + ¥ dx — Nya, (2.123) i THERMODYNAMIC POTENTIALS 49 and thus 02) s=-(S) , (2.124) (ar xu) Y= (3) , (2.125) OX ry and N=- (33) . (2.126) ou 4) TXAuhy) The Maxwell relations obtained from the grand potential are () = -(F) ; (2.127) OX ry NOT xy (%) = () , (2.128) Hi) 1xAuiy) Xu (=) --@ eu Hy TXAdg) Tui} and (*) = (3) (2.130) Hi TY Aig} BD TY Logs} and are very useful in treating open systems. The grand potential is a thermodynamic potential energy for a reversible process carried out at constant T, X, and {pi}. For a change in the thermodynamic state of the system, the change in the grand potential can be written Ams - | sar [ydx— AWiee — Do [Nae (2.131) i where the equality holds for reversible changes and the inequality holds for spontaneous changes (A Wiree is defined in Section 2.F.1). For a process at fixed 50 INTRODUCTION TO THERMODYNAMICS T, X, and {y)}, we obtain (Ar. x44) S (—AWeree)- (2.132) Thus, for a reversible process at constant T, X, and {uj}, work can be stored as grand potential and can be recovered completely. For a process at fixed 7, X, and {yu} for which no work is done, we obtain AMp x4 $ (2.133) and we find that an equilibrium state at fixed T, X, and {y;} is a state of minimum grand potential. 2.G. RESPONSE FUNCTIONS The response functions are the thermodynamic quantities most accessible to experiment. They give us information about how a specific state variable changes as other independent state variables are changed under controlled conditions. 
As we shall see in later chapters, they also provide a measure of the size of fluctuations in a thermodynamic system. The response functions can be divided into (a) thermal response functions, such as heat capacities, (b) mechanical response functions, such as compressibility and susceptibility, and (c) chemical response functions. We shall introduce some thermal and mechanical response functions in this section. 2.G.1. Thermal Response Functions (Heat Capacity) The heat capacity, C, is a measure of the amount of heat needed to raise the temperature of a system by a given amount. In general, it is defined as the derivative, C = (¢Q/dT). When we measure the heat capacity, we try to fix all independent variables except the temperature. Thus, there are as many different heat capacities as there are combinations of independent variables, and they each contain different information about the system. We shall derive the heat capacity at constant X and {N;}, Cx,,,}, and we shall derive the heat capacity at constant Y and {Nj},Cy,(w). We will derive these heat capacities in two different ways, first from the first law and then from the definition of the entropy. To obtain an expression of Cx,¢x,, we shall assume that X, 7, and {Nj} are independent variables. Then the first law can be written au [au i #Q=aU -Ydx-5 wan, = (54) aT + () -Y|dX TT NOT) x43 L\OX/ ry) | +E|Gr, i ) - 1] aN;. (2.134) T,X,{Nigi} RESPONSE FUNCTIONS 51 For constant X and {Nj}, we have [€Q]¥ iv.) = Cx,yj@7 and we find au’ cree (5). (2.135) for the heat capacity at constant X and {Nj}. To obtain an expression for Cy,(v,j, we shall assume that ¥, 7, and {Nj} are independent variables. Then we can write ax ax ax aX = () a+ () av + 3) aN, (2.136 aT) yw) aY) rw) x ON) rxingy ) If we substitute the expression for dX into Eq. (2.134), we obtain au) Ox) =4¢, *\) -vi(& aT od {cua Ce 1) san} [ aU’ J] (ax —— -Yi(— dY * (Fe) roy J (=r) TAN} r 5 au) (2) (22) \ +) Ilan -Yi(=7 t+lar —H; paNj. 7 Ge riny J \ON rytnyy SONI rxima J” (2.137) For constant Y and {Nj} we have (#Qly jw) = CyynjaT and we find [au ] (ax Cyn = Cxyny + | (i) -Y} 3) 2.138 vn} = Cx) NBR) ey | NOP vey (2.138) for the heat capacity at constant Y and {Nj}. For n moles of a monatomic substance, these equations simplify. Let us write them in terms of molar quantities. We can write the heat capacity in the form Cxn = (0U/OT)y,, = n(Ou/AT),, where u = U/n is the molar internal energy and x = X/n is a molar density of the mechanical extensive variable. The molar heat capacity is then cy = (Ou/OT), so that Cy,, = ncy. Similarly, let us note that (OX/8T),,, =n(8x/OT)y and (9U/OX),,, = (Au/Ox)r. Therefore, the molar heat capacity at constant Y is cy = cx + ((Ou/Ox)7 — Y] (Ax/AT)y. It is useful to rederive expressions for Cx,{uj} and Cy,{y} from the entropy. Let us first assume that 7, X, and {Nj} are independent. Then for a reversible Process, we obtain as as as #o=ras-1(5) ar +1(5) dX + (3) aNj. OT] x40) OX) rN) x ON) 1 x,(My) (2.139) 52 INTRODUCTION TO THERMODYNAMICS For a processes which occurs at constant X and {Nj}, Eq. (2.139) becomes os [4Qlx, -7(3) a, (2.140) wi) ar) ius and therefore os PA Crone (5) = -7(53) . (2.141) HONE) x0) OT?) x00 The second term comes from Eq. (2.97). Let us now assume that 7, Y and {Nj} are independent. For a reversible process, we obtain as as as #0 =Tds= 1 ) ar+1(3) a+ (5 ) aN; OT} ym) OY r.40y) ON) ryNig) (2.142) If we combine Eqs. 
(2.136) and (2.139), we can also write as as ax 40-7451 1(ar) uy * (3K) om Bt) nan OT) xy \OX/ r40y3 OT] vt as ax) (at OX) 705 OY) ru ax as rel), (of) Jaw cn >| OX) my ON) ry.thigy SON TxA) ( If we now compare Egs. (2.142) and (2.143), we find as as ax Cum =7 (FP) = Catny +P (Fe) say Ot) aap 0G = -1(53) . (2.144) OF?) yn The last term in Eq. (2.144) comes from Eq. (2.109). We can obtain some additional useful identities from the above equations. If we compare Eqs. (2.100), (2.138), and (2.144), we obtain the identity a) 1 () yl -(2) a =a1(— -Y|= 2.145 C rANt r| OX J 7,4) OT) x.y) ae RESPONSE FUNCTIONS 53 Therefore, BP) gy" Fae) OY\V 1 (Ax ; (2.146) Ce xiyp TX OX Dray where we have used Egs. (2.4) and (2.145). For a monatomic substance, it is fairly easy to show that the molar heat capacity at constant mechanical molar density, x, is c, = T(0s/OT), = -T(0°a/8T*),, and the molar heat capacity at constant Y is cy = T(8s/8T)y = -T(0?a/AT?),. We also obtain the useful identities (8s/8x), = (1/T)[(Ou/Ox)p —Y\=—(OY/8T), and (2¥ /ATD= =(1/T){0cx/8x)>. 2.G.2. Mechanical Response Functions There are three mechanical response functions which are commonly used. They are the isothermal susceptibility, i) (5s) (ee 2.147 Xn G ry \OP Jr ny eat the adiabatic susceptibility, (OX OH vom =(2) ==), a SONA) sim) OV) sup and the thermal expansivity, Ox ayn) = (=) oat (2.149) Using the identities in Section 2.B, the thermal and mechanical response functions can be shown to satisfy the identities Xr.twy(Cr,qyy — Cx) = Tovey) (2.150) Cy. guy (xray ~ Xssuy) = Tart)” (2.151) and Craniy — 2.152) Cut) XS00) (2.182) The derivation of these identities is left as a homework problem. 54 INTRODUCTION TO THERMODYNAMICS: For PVT systems, the mechanical response functions have special names. Quantities closely related to the isothermal and adiabatic susceptibilities are the isothermal compressibility, 1 (F) 1 3) ee ee , (2.153) NSS VK AP) ng VN OPFD rnp and adiabatic compressibility, 1 oy) + (Se) Re (2.154) SAM} i (ap si) V\OPE] simy ) respectively. The thermal expansivity for a PVT is defined slightly differently from above. It is 1 /aV an) = 5 (sr) (2.155) For a monatomic PVT system the mechanical response functions become even simpler if written in terms of densities. The isothermal and adiabatic compressibilities are «r= —(1/v)(Ov/OP), and K, = —(1/v)(8v/OP),, respectively, where u = V/n is the molar volume. The thermal expansivity is ap = (1/¥)(0v/OT)p | gy EXERCISE 2.8. Compute the molar heat capacities, cy and cp, the | compressibilities, kr and &,, and the thermal expansivity, ap, for a | monatomic ideal gas. Start from the fact that the molar entropy of the gas is | s=4R+Rln[(v/vo)(T/To)"”] (v = V/N is the molar volume), and the mechanical equation of state is Pv = RT. | | (a) The molar heat capacity, cy: The molar entropy is s=$R+ | Rin{(v/v0)(T/To)*”]. Therefore (8s/8T), = (3R/2T) and cy = T(ds/8T), = 3R/2. | (®) The molar heat capacity, gp: The molar entropy can be written 5 =$R+RIn{(Po/P)(T/To)””]. Then (85/9T)p = 5R/2T and cp = T(85/OT)p = 5R/2. (©) The isothermal compressibility, xp: From the mechanical equation of | state, we have v=(RT/P). Therefore, (Ov/OP), = —(v/P) and | ker = —(1/v)(0v/OP)z = (1/P). STABILITY OF THE EQUILIBRIUM STATE 55 (d) The adiabatic compressibility, «,: We must first write the molar | volume as a function of s and P. From the expressions for the molar entropy and mechanical equation of state given above we find | v = vo(Po/P)°* exp{(2s/5R) — 1]. 
Then (@v/OP), = —(3v/SP) and | Ks, = —(1/v)(0v/P), = (3/5P). | (e) Thermal Expansivity, ap: Using the mechanical equation of state, we find ap = (1/v)(@v/OT), = (1/T). 2.H. STABILITY OF THE EQUILIBRIUM STATE The entropy of an isolated equilibrium system (cf. Section 2.D.3) must be a maximum. However, for a system with a finite number of particles in thermodynamic equilibrium, the thermodynamic quantities describe the average behaviour of the system. If there are a finite number of particles, then there can be spontaneous fluctuations away from this average behaviour. However, fluctuations must cause the entropy to decrease. If this were not so, the system could spontaneously move to new equilibrium state with a higher entropy because of spontaneous fluctuations. For a system in a stable equilibrium state, this, by definition, cannot happen. ‘We can use the fact that the entropy must be maximum to obtain conditions for local equilibrium and for local stability of equilibrium systems. We will restrict ourselves to PVT systems. However, our arguments also apply to general YXT systems. 2.H.1. Conditions for Local Equilibrium in a PVT System Let us consider a mixture of / types of particles in an isolated box of volume, Vr, divided into two parts, A and B, by a conducting porous wall which is free to move and through which particles can pass (cf. Fig. 2.8). With this type of dividing wall there is a free exchange of heat, mechanical energy, and particles between A and B. One can think of A and B as two different parts of a fluid (gas or liquid), or perhaps as a solid (part A) in contact with its vapor (part B). We Shall assume that no chemical reactions occur. Since the box is closed and isolated, the total internal energy, Ur, is Ur = Y° Ua, (2.156) a=A,B where U, is the internal energy of compartment a. The total volume, Vz, is Vr= 0 Vay (2.157) a=AB 56 INTRODUCTION TO THERMODYNAMICS Fig. 2.8. An isolated, closed box containing fluid separated into two parts by a movable porous membrane. where V, is the volume of compartment a. The total number of particles, Nj, of type j is Nir = Do Nias (2.158) o=AB where Nj. is the total number of particles of type j in compartment a. The total entropy, Sr, is Sr= Yo So, (2.159) amAB where S, is the total entropy of compartment a, Let us now assume that spontaneous fluctuations can occur in the energy, volume, and particle number of each cell subject to the constraints AUr = AVr = ANjr =0 (2.160) (assume that no chemical reactions occur) so that AU, = —AUsz, AV, = —AVsp, and AN; =—ANj,. The entropy change due to these spontaneous fluctuations can be written [ (as, Sx AS; = (se) AUa + (= ) AVa Xi Wa) v,,{Nja} Wa) v,,{N0} * ¥ “(ar From Eqs. (2.58)-(2.60) and (2.160), we can write Eq. (2.161) in the form -(2 Pa_P a (Hi Hi As = (F,-7) ava F-Fave F(A Fata + fr NTA (2.162) ) ania fee (2.161) VaWVar{Negiad STABILITY OF THE EQUILIBRIUM STATE 87 where T, and P, are the temperature and pressure, respectively, of the material in compartment a, and yj, is the chemical potential of particles of type j in compartment a. For a system in equilibrium, the entropy is a maximum. Therefore, any spontaneous changes must cause the entropy to decrease. However, AU,, AVa, and ANj,4 can be positive or negative. Thus, in order to ensure that AS; < 0, we must have Ts =Tp, (2.163) Pa = Pa, (2.164) and (2.165) yo Hija = Hibs Equations (2.163)-(2.165) give the conditions for local equilibrium in a system in which no chemical reactions occur. 
2.H.2. Conditions for Local Stability in a PVT System [12, 13]

Stability of the equilibrium state places certain conditions on the sign of the response functions. To see this, let us consider a closed isolated box with volume $V_T$, total entropy $S_T$, total internal energy $U_T$, and a total number of particles $N_{j,T}$ of type j, where $j = 1,\ldots,l$. We shall assume that the box is divided into M cells which can exchange thermal energy, mechanical energy, and particles. We shall denote the equilibrium volume, entropy, internal energy, and number of particles of type j for the $\alpha$th cell by $V_\alpha^0$, $S_\alpha^0$, $U_\alpha^0$, and $N_{j,\alpha}^0$, respectively. The equilibrium pressure, temperature, and chemical potentials of the various cells are $P^0$, $T^0$, and $\mu_j^0$, respectively (they must be the same for all the cells).

Because there are a finite number of particles in the box, there will be spontaneous fluctuations of the thermodynamic variables of each cell about their respective equilibrium values. These spontaneous fluctuations must be such that $V_T$, $U_T$, and $N_{j,T}$ remain fixed. However, since the equilibrium state is stable, fluctuations must cause $S_T$ to decrease. If it did not decrease, the equilibrium state would be unstable and spontaneous fluctuations would cause the system to move to a more stable equilibrium state of higher entropy.

We shall assume that fluctuations about the equilibrium state are small and expand the entropy of the $\alpha$th cell in a Taylor expansion about its equilibrium value $S_\alpha(U_\alpha^0, V_\alpha^0, \{N_{j,\alpha}^0\})$:

$$S_\alpha(U_\alpha, V_\alpha, \{N_{j,\alpha}\}) = S_\alpha(U_\alpha^0, V_\alpha^0, \{N_{j,\alpha}^0\}) + \left(\frac{\partial S_\alpha}{\partial U_\alpha}\right)^0_{V_\alpha,\{N_{j,\alpha}\}}\Delta U_\alpha + \left(\frac{\partial S_\alpha}{\partial V_\alpha}\right)^0_{U_\alpha,\{N_{j,\alpha}\}}\Delta V_\alpha + \sum_j\left(\frac{\partial S_\alpha}{\partial N_{j,\alpha}}\right)^0_{U_\alpha,V_\alpha,\{N_{i\neq j,\alpha}\}}\Delta N_{j,\alpha}$$
$$\qquad + \frac{1}{2}\left[\Delta\!\left(\frac{\partial S_\alpha}{\partial U_\alpha}\right)\Delta U_\alpha + \Delta\!\left(\frac{\partial S_\alpha}{\partial V_\alpha}\right)\Delta V_\alpha + \sum_j\Delta\!\left(\frac{\partial S_\alpha}{\partial N_{j,\alpha}}\right)\Delta N_{j,\alpha}\right] + \cdots. \qquad (2.166)$$

In Eq. (2.166), we define

$$\Delta\!\left(\frac{\partial S_\alpha}{\partial U_\alpha}\right) \equiv \left(\frac{\partial^2 S_\alpha}{\partial U_\alpha^2}\right)^0\Delta U_\alpha + \left(\frac{\partial^2 S_\alpha}{\partial V_\alpha\,\partial U_\alpha}\right)^0\Delta V_\alpha + \sum_j\left(\frac{\partial^2 S_\alpha}{\partial N_{j,\alpha}\,\partial U_\alpha}\right)^0\Delta N_{j,\alpha}. \qquad (2.167)$$

A similar expression holds for $\Delta(\partial S_\alpha/\partial V_\alpha)$. For $\Delta(\partial S_\alpha/\partial N_{j,\alpha})$, we have

$$\Delta\!\left(\frac{\partial S_\alpha}{\partial N_{j,\alpha}}\right) = \left(\frac{\partial^2 S_\alpha}{\partial U_\alpha\,\partial N_{j,\alpha}}\right)^0\Delta U_\alpha + \left(\frac{\partial^2 S_\alpha}{\partial V_\alpha\,\partial N_{j,\alpha}}\right)^0\Delta V_\alpha + \sum_i\left(\frac{\partial^2 S_\alpha}{\partial N_{i,\alpha}\,\partial N_{j,\alpha}}\right)^0\Delta N_{i,\alpha}. \qquad (2.168)$$

In Eqs. (2.166)-(2.168), the superscripts 0 on partial derivatives indicate that they are evaluated at equilibrium. The fluctuations $\Delta U_\alpha$, $\Delta V_\alpha$, and $\Delta N_{j,\alpha}$ are defined as $\Delta U_\alpha = U_\alpha - U_\alpha^0$, $\Delta V_\alpha = V_\alpha - V_\alpha^0$, and $\Delta N_{j,\alpha} = N_{j,\alpha} - N_{j,\alpha}^0$ and denote the deviation of the quantities $U_\alpha$, $V_\alpha$, and $N_{j,\alpha}$ from their absolute equilibrium values.

We can obtain the total entropy by adding the contributions to the entropy from each of the M cells. Because the entropy can only decrease due to fluctuations, the terms which are first order in the fluctuations $\Delta U_\alpha$, $\Delta V_\alpha$, and $\Delta N_{j,\alpha}$ must add to zero (this is just the condition for local equilibrium obtained in Section 2.H.1). Therefore, the change in the entropy due to fluctuations must have the form

$$\Delta S_T = \frac{1}{2}\sum_{\alpha=1}^{M}\left[\Delta\!\left(\frac{\partial S_\alpha}{\partial U_\alpha}\right)\Delta U_\alpha + \Delta\!\left(\frac{\partial S_\alpha}{\partial V_\alpha}\right)\Delta V_\alpha + \sum_j\Delta\!\left(\frac{\partial S_\alpha}{\partial N_{j,\alpha}}\right)\Delta N_{j,\alpha}\right] + \cdots. \qquad (2.169)$$
Equation (2.169) can be written in simpler form if we make use of Eqs. (2.58)-(2.60). We then find

$$\Delta S_T = \frac{1}{2}\sum_{\alpha=1}^{M}\left[\Delta\!\left(\frac{1}{T_\alpha}\right)\Delta U_\alpha + \Delta\!\left(\frac{P_\alpha}{T_\alpha}\right)\Delta V_\alpha - \sum_{j=1}^{l}\Delta\!\left(\frac{\mu_{j,\alpha}}{T_\alpha}\right)\Delta N_{j,\alpha}\right] \qquad (2.170)$$

or

$$\Delta S_T = -\frac{1}{2T^0}\sum_{\alpha=1}^{M}\left[\Delta T_\alpha\Delta S_\alpha - \Delta P_\alpha\Delta V_\alpha + \sum_{j=1}^{l}\Delta\mu_{j,\alpha}\Delta N_{j,\alpha}\right]. \qquad (2.171)$$

In Eq. (2.171), we have used the relation $T\Delta S = \Delta U + P\Delta V - \sum_j\mu_j\Delta N_j$. Equation (2.171) gives the entropy change, due to spontaneous fluctuations, in a completely general form. We can now expand $\Delta S_T$ in terms of any set of independent variables we choose. Let us choose T, P, and $N_j$ as the independent variables. Then we can write

$$\Delta S_\alpha = \left(\frac{\partial S}{\partial T}\right)^0_{P,\{N_j\}}\Delta T_\alpha + \left(\frac{\partial S}{\partial P}\right)^0_{T,\{N_j\}}\Delta P_\alpha + \sum_{j=1}^{l}\left(\frac{\partial S}{\partial N_j}\right)^0_{T,P,\{N_{i\neq j}\}}\Delta N_{j,\alpha}, \qquad (2.172)$$

$$\Delta V_\alpha = \left(\frac{\partial V}{\partial T}\right)^0_{P,\{N_j\}}\Delta T_\alpha + \left(\frac{\partial V}{\partial P}\right)^0_{T,\{N_j\}}\Delta P_\alpha + \sum_{j=1}^{l}\left(\frac{\partial V}{\partial N_j}\right)^0_{T,P,\{N_{i\neq j}\}}\Delta N_{j,\alpha}, \qquad (2.173)$$

and

$$\Delta\mu_{j,\alpha} = \left(\frac{\partial\mu_j}{\partial T}\right)^0_{P,\{N_i\}}\Delta T_\alpha + \left(\frac{\partial\mu_j}{\partial P}\right)^0_{T,\{N_i\}}\Delta P_\alpha + \sum_{i=1}^{l}\left(\frac{\partial\mu_j}{\partial N_i}\right)^0_{T,P,\{N_{k\neq i}\}}\Delta N_{i,\alpha}. \qquad (2.174)$$

If we now substitute Eqs. (2.172)-(2.174) into Eq. (2.171) and use the Maxwell relations (2.112)-(2.115), the entropy change becomes

$$\Delta S_T = -\frac{1}{2T^0}\sum_{\alpha=1}^{M}\left[\frac{C^0_{P,\{N_j\}}}{T^0}(\Delta T_\alpha)^2 - 2\left(\frac{\partial V}{\partial T}\right)^0_{P,\{N_j\}}\Delta T_\alpha\Delta P_\alpha - \left(\frac{\partial V}{\partial P}\right)^0_{T,\{N_j\}}(\Delta P_\alpha)^2 + \sum_{i=1}^{l}\sum_{j=1}^{l}\left(\frac{\partial\mu_j}{\partial N_i}\right)^0_{T,P,\{N_{k\neq i}\}}\Delta N_{i,\alpha}\Delta N_{j,\alpha}\right] + \cdots. \qquad (2.175)$$

If we make use of Eqs. (2.6), (2.8), and (2.100), we can write

$$C_{P,\{N_j\}} - C_{V,\{N_j\}} = T\left(\frac{\partial P}{\partial T}\right)_{V,\{N_j\}}\left(\frac{\partial V}{\partial T}\right)_{P,\{N_j\}} = -T\left(\frac{\partial V}{\partial T}\right)^2_{P,\{N_j\}}\bigg/\left(\frac{\partial V}{\partial P}\right)_{T,\{N_j\}}. \qquad (2.176)$$

If we plug this into Eq. (2.175), we obtain

$$\Delta S_T = -\frac{1}{2T^0}\sum_{\alpha=1}^{M}\left[\frac{C^0_{V,\{N_j\}}}{T^0}(\Delta T_\alpha)^2 - \frac{(\Delta[V_\alpha])^2}{(\partial V/\partial P)^0_{T,\{N_j\}}} + \sum_{i=1}^{l}\sum_{j=1}^{l}\left(\frac{\partial\mu_j}{\partial N_i}\right)^0_{T,P,\{N_{k\neq i}\}}\Delta N_{i,\alpha}\Delta N_{j,\alpha}\right] + \cdots, \qquad (2.177)$$

where

$$\Delta[V_\alpha] \equiv \left(\frac{\partial V}{\partial T}\right)^0_{P,\{N_j\}}\Delta T_\alpha + \left(\frac{\partial V}{\partial P}\right)^0_{T,\{N_j\}}\Delta P_\alpha. \qquad (2.178)$$

Because the fluctuations $\Delta T_\alpha$, $\Delta P_\alpha$, and $\Delta N_{j,\alpha}$ are independent, the requirement that $\Delta S_T \le 0$ for a stable equilibrium state leads to the requirement that

$$C_{V,\{N_j\}} \ge 0, \qquad \kappa_{T,\{N_j\}} = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T,\{N_j\}} \ge 0, \qquad \text{and} \qquad \sum_{i=1}^{l}\sum_{j=1}^{l}\left(\frac{\partial\mu_i}{\partial N_j}\right)_{T,P,\{N_{k\neq j}\}}\Delta N_i\,\Delta N_j \ge 0. \qquad (2.179)$$

Conditions (2.179) are a realization of Le Chatelier's famous principle: If a system is in stable equilibrium, then any spontaneous change in its parameters must bring about processes which tend to restore the system to equilibrium.

The first condition in Eq. (2.179), $C_{V,\{N_j\}} \ge 0$, is a condition for thermal stability. If a small excess of heat is added to a volume element of fluid, then the temperature of the volume element must increase relative to its surroundings so that some of the heat will flow out again. This requires that the heat capacity be positive. If the heat capacity were negative, the temperature would decrease and even more heat would flow in, thus leading to an instability. From Eqs. (2.150) and (2.179) we can also show that

$$C_{P,\{N_j\}} \ge C_{V,\{N_j\}} \ge 0. \qquad (2.180)$$

The second condition in Eq. (2.179), $\kappa_{T,\{N_j\}} \ge 0$, is a condition for mechanical stability. If a small volume element of fluid spontaneously increases, the pressure of the fluid inside the fluid element must decrease relative to its surroundings so that the larger pressure of the surroundings will stop the growth of the volume element. This requires that the compressibility be positive. If the compressibility were negative, the pressure would increase and the volume element would continue to grow, thus leading to an instability. From Eqs. (2.151) and (2.179) we can show that

$$\kappa_{T,\{N_j\}} \ge \kappa_{S,\{N_j\}} \ge 0. \qquad (2.181)$$

The third condition, $\sum_{i=1}^{l}\sum_{j=1}^{l}(\partial\mu_i/\partial N_j)_{T,P,\{N_{k\neq j}\}}\Delta N_i\Delta N_j \ge 0$, where $\Delta N_i$ and $\Delta N_j$ are arbitrary variations, is the condition for chemical stability. We can write the condition for chemical stability in matrix form:

$$(\Delta N_1, \Delta N_2, \ldots, \Delta N_l)\begin{pmatrix}\mu'_{1,1} & \mu'_{1,2} & \cdots & \mu'_{1,l}\\ \mu'_{2,1} & \mu'_{2,2} & \cdots & \mu'_{2,l}\\ \vdots & \vdots & & \vdots\\ \mu'_{l,1} & \mu'_{l,2} & \cdots & \mu'_{l,l}\end{pmatrix}\begin{pmatrix}\Delta N_1\\ \Delta N_2\\ \vdots\\ \Delta N_l\end{pmatrix} \ge 0, \qquad (2.182)$$

where $\mu'_{i,j} = (\partial\mu_i/\partial N_j)_{T,P,\{N_{k\neq j}\}}$. Because of the Maxwell relation $\mu'_{i,j} = \mu'_{j,i}$ [cf. Eq. (2.115)], the matrix

$$\bar{\mu}' = \begin{pmatrix}\mu'_{1,1} & \mu'_{1,2} & \cdots & \mu'_{1,l}\\ \mu'_{2,1} & \mu'_{2,2} & \cdots & \mu'_{2,l}\\ \vdots & \vdots & & \vdots\\ \mu'_{l,1} & \mu'_{l,2} & \cdots & \mu'_{l,l}\end{pmatrix} \qquad (2.183)$$

is symmetric. In addition, in order to satisfy the condition for chemical stability, the matrix $\bar{\mu}'$ must be a positive definite matrix. A symmetric matrix is positive definite if $\mu'_{i,i} > 0$ ($i = 1,\ldots,l$) and if every principal minor is positive or zero.
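In practice the chemical stability condition is checked exactly as for any quadratic form. The sketch below (Python/NumPy, with an arbitrary, made-up 3x3 matrix of derivatives rather than data for any particular mixture) tests the criterion both through the leading principal minors and through the eigenvalues, which give an equivalent test of positive semidefiniteness.

```python
import numpy as np

# Illustrative (made-up) matrix mu'_{ij} = (d mu_i / d N_j)_{T,P};
# the Maxwell relation guarantees it is symmetric for a real mixture.
mu_prime = np.array([[ 2.0, -0.5,  0.1],
                     [-0.5,  1.5, -0.3],
                     [ 0.1, -0.3,  0.8]])

assert np.allclose(mu_prime, mu_prime.T), "mu'_{ij} must be symmetric"

# Leading principal minors: determinants of the upper-left k x k blocks.
minors = [np.linalg.det(mu_prime[:k, :k]) for k in range(1, mu_prime.shape[0] + 1)]

# Eigenvalues: all >= 0 is equivalent to
# sum_ij mu'_{ij} dN_i dN_j >= 0 for every fluctuation (dN_1, ..., dN_l).
eigvals = np.linalg.eigvalsh(mu_prime)

print("leading principal minors:", np.round(minors, 4))
print("eigenvalues             :", np.round(eigvals, 4))
print("chemically stable       :", bool(np.all(eigvals >= 0)))
```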
EXERCISE 2.9. A mixture of particles, A and B, has a Gibbs free energy of the form

$$G = n_A\mu_A^0(P,T) + n_B\mu_B^0(P,T) + RTn_A\ln(x_A) + RTn_B\ln(x_B) + \lambda\frac{n_An_B}{n},$$

where $n = n_A + n_B$, $x_A = n_A/n$, and $x_B = n_B/n$ (n indicates mole number), and $\mu_A^0$ and $\mu_B^0$ are functions only of P and T. Plot the region of thermodynamic instability in the $x_A$-T plane.

Answer: For chemical stability, the matrix

$$\begin{pmatrix}\mu'_{A,A} & \mu'_{A,B}\\ \mu'_{B,A} & \mu'_{B,B}\end{pmatrix} \qquad (1)$$

must be symmetric positive definite. This requires that $(\partial\mu_A/\partial n_A)_{P,T,n_B} \ge 0$ and $(\partial\mu_B/\partial n_B)_{P,T,n_A} \ge 0$, together with $(\partial\mu_A/\partial n_A)(\partial\mu_B/\partial n_B) - (\partial\mu_A/\partial n_B)(\partial\mu_B/\partial n_A) \ge 0$. The chemical potential of the A-type particle is

$$\mu_A = \left(\frac{\partial G}{\partial n_A}\right)_{P,T,n_B} = \mu_A^0(P,T) + RT\ln(x_A) + \lambda\frac{n_B}{n} - \lambda\frac{n_An_B}{n^2} = \mu_A^0(P,T) + RT\ln(x_A) + \lambda x_B^2. \qquad (2)$$

A condition for thermodynamic stability is

$$\left(\frac{\partial\mu_A}{\partial n_A}\right)_{P,T,n_B} = \frac{RTx_B}{nx_A} - \frac{2\lambda x_B^2}{n} \ge 0, \qquad (3)$$

or $x_A^2 - x_A + RT/2\lambda \ge 0$. For $T > \lambda/2R$, this is always satisfied. A plot of $T = (2\lambda/R)(x_A - x_A^2)$ is given below. The shaded region corresponds to $x_A^2 - x_A + RT/2\lambda < 0$ and is thermodynamically unstable. The unshaded region is thermodynamically stable. For $T < \lambda/2R$, there are, for each value of T, two values of $x_A$ which satisfy $x_A^2 - x_A + RT/2\lambda = 0$. These two values of $x_A$ lie on either side of the shaded region and are the mole fractions of two coexisting phases of the binary mixture, one rich in A and the other rich in B. For $T > \lambda/2R$, no value of $x_A$ satisfies this condition, so only one phase of the substance exists. (As we shall see in Chapter 3, a thermodynamically stable state may not be a state of thermodynamic equilibrium. For thermodynamic equilibrium we have the additional condition that the free energy be a minimum or the entropy be a maximum. A thermodynamically stable state which is not an equilibrium state is sometimes called a metastable state. It can exist in nature but eventually will decay to an absolute equilibrium state.)

2.H.3. Implications of the Stability Requirements for the Free Energies

The stability conditions place restrictions on the derivatives of the thermodynamic potentials. Before we show this, it is useful to introduce the concept of concave and convex functions [14]:

(a) A function f(x) is a convex function if $d^2f(x)/dx^2 \ge 0$ for all x (cf. Fig. 2.9). For any $x_1$ and $x_2$ the chord joining the points $f(x_1)$ and $f(x_2)$ lies above or on the curve f(x) for all x in the interval $x_1 \le x \le x_2$.

SPECIAL TOPICS

► S2.A. Cooling and Liquefaction of Gases [6]

All neutral gases (if we exclude gravitational effects) interact via a potential which has a hard core and, outside the core, a short-ranged attractive region. If such a gas is allowed to expand, it must do work against the attractive forces and its temperature will decrease. This effect can be used to cool a gas, although the amount of cooling that occurs via this mechanism alone is very small. We shall study two different methods for cooling: one based solely on free expansion and one which involves throttling of the gas through a porous plug or constriction. The second method is the basis for gas liquefiers commonly used in the laboratory.
► S2.A.1. The Joule Effect: Free Expansion

Experiments which attempted to measure cooling due to free expansion were first performed by Gay-Lussac in 1807 and were improved upon by Joule in 1843. The free expansion process is shown schematically in Fig. 2.12. The gas is initially confined to an insulated chamber with volume $V_i$ at pressure $P_i$ and temperature $T_i$. It is then allowed to expand suddenly into an insulated evacuated chamber with volume $V_f$. Since the gas expands freely, no work will be done and $\Delta W = 0$. Furthermore, since both chambers are insulated, no heat will be added and $\Delta Q = 0$. Thus, from the first law the internal energy must remain constant, $\Delta U = 0$, during the process. The only effect of free expansion is a transfer of energy between the potential energy and kinetic energy of the particles. Because free expansion takes place spontaneously, the entropy of the gas will increase even though no heat is added.

During the expansion we cannot use thermodynamics to describe the state of the system because it will not be in equilibrium, even locally. However, after the system has settled down and

Fig. 2.12. The Joule effect. Free expansion of a gas from an insulated chamber of volume $V_1$ into an insulated evacuated chamber of volume $V_2$ causes cooling.

reached a final equilibrium state, we can use thermodynamics to relate the initial and final states by finding an imaginary reversible path between them. During the expansion the internal energy does not change and the particle number does not change. Thus, the internal energies and particle numbers of the initial and final states must be the same, and for our imaginary reversible path we can write

$$[dU]_n = 0 = \left(\frac{\partial U}{\partial T}\right)_{V,n}dT + \left(\frac{\partial U}{\partial V}\right)_{T,n}dV, \qquad (2.188)$$

where n is the number of moles of gas. From Eq. (2.188), we can write

$$dT = -\frac{(\partial U/\partial V)_{T,n}}{(\partial U/\partial T)_{V,n}}\,dV = \left(\frac{\partial T}{\partial V}\right)_{U,n}dV, \qquad (2.189)$$

where we have made use of the chain rule, Eq. (2.6). The quantity $(\partial T/\partial V)_{U,n}$ is called the Joule coefficient.

Let us compute the Joule coefficient for various gases. From Eq. (2.145), we know that

$$\left(\frac{\partial U}{\partial V}\right)_{T,n} = T\left(\frac{\partial P}{\partial T}\right)_{V,n} - P. \qquad (2.190)$$

For an ideal gas we have the equation of state, $PV = nRT$, and we find that $(\partial U/\partial V)_{T,n} = 0$. Therefore, for an ideal gas the Joule coefficient, $(\partial T/\partial V)_{U,n} = 0$, and the temperature of an ideal gas cannot change during free expansion. For a van der Waals gas [cf. Eq. (2.12)], we have

$$\left(\frac{\partial U}{\partial V}\right)_{T,n} = \frac{an^2}{V^2}. \qquad (2.191)$$

For this case, there will be a change in internal energy due to interactions as the volume changes if temperature is held fixed. To obtain the Joule coefficient, we must also find the heat capacity of the van der Waals gas. From Eq. (2.146), we obtain

$$\left(\frac{\partial C_{V,n}}{\partial V}\right)_{T,n} = T\left(\frac{\partial^2 P}{\partial T^2}\right)_{V,n} = 0. \qquad (2.192)$$

Therefore, for a van der Waals gas the heat capacity, $C_{V,n}$, is independent of volume and can depend only on mole number and temperature:

$$C_{V,n} = C_{V,n}(T,n). \qquad (2.193)$$

Since the heat capacity, $C_{V,n}$, contains no volume corrections due to interactions, it is thermodynamically consistent to choose its value to be equal to that of an ideal gas (at least in the regime where the van der Waals equation describes a gas). If we neglect internal degrees of freedom, we can write $C_{V,n} = \frac{3}{2}nR$. The Joule coefficient for a van der Waals gas then becomes

$$\left(\frac{\partial T}{\partial V}\right)_{U,n} = -\frac{2an}{3RV^2}. \qquad (2.194)$$

Using Eq. (2.194), we can integrate Eq. (2.189) between initial and final states.
We find

$$T_f - T_i = \frac{2an}{3R}\left(\frac{1}{V_f} - \frac{1}{V_i}\right). \qquad (2.195)$$

The fractional change in temperature is therefore

$$\frac{\Delta T}{T_i} = \frac{T_f - T_i}{T_i} = \frac{2an}{3RT_i}\left(\frac{1}{V_f} - \frac{1}{V_i}\right). \qquad (2.196)$$

If $V_f > V_i$, the temperature will always decrease. We can use values of the van der Waals constant, a, given in Table 2.1 to estimate the change in temperature for some simple cases. Let us assume that $V_i = 10^{-3}\,\mathrm{m^3}$, $T_i = 300\,\mathrm{K}$, $n = 1\,\mathrm{mol}$, and $V_f = \infty$ (this will give maximum cooling). For oxygen we have $\Delta T/T_i \approx -0.037$, and for carbon dioxide we obtain $\Delta T/T_i \approx -0.097$. We must conclude that free expansion alone is not a very effective way to cool a gas.
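These estimates are easy to reproduce numerically. The short Python sketch below evaluates Eq. (2.196); it assumes representative handbook values of the van der Waals constant, $a \approx 0.138\,\mathrm{Pa\,m^6/mol^2}$ for O₂ and $a \approx 0.366\,\mathrm{Pa\,m^6/mol^2}$ for CO₂ (values of the order listed in Table 2.1, which is not reproduced here).

```python
R = 8.314  # J/(mol K)

def joule_cooling_fraction(a, n=1.0, T_i=300.0, V_i=1.0e-3, V_f=float('inf')):
    """Fractional temperature drop Delta T / T_i for free expansion of a
    van der Waals gas, Eq. (2.196): (2 a n / 3 R T_i)(1/V_f - 1/V_i)."""
    return (2.0*a*n/(3.0*R*T_i))*(1.0/V_f - 1.0/V_i)

# Assumed van der Waals constants a (Pa m^6 / mol^2); representative handbook values.
gases = {"O2": 0.138, "CO2": 0.366}

for name, a in gases.items():
    frac = joule_cooling_fraction(a)
    print(f"{name}: Delta T / T_i = {frac:+.3f}  (Delta T = {300.0*frac:+.1f} K)")
# O2 : Delta T / T_i ~ -0.037,  CO2 : Delta T / T_i ~ -0.098
```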
► S2.A.2. The Joule-Kelvin Effect: Throttling

Throttling of a gas through a porous plug or a small constriction provides a much more efficient means of cooling than free expansion and is the basis of most liquefaction machines. The throttling process in its simplest form is depicted in Fig. 2.13. A gas initially at a pressure $P_i$, temperature $T_i$, and volume $V_i$ is forced through a porous plug into another chamber, maintained at pressure $P_f < P_i$. All chambers and the plug are insulated so $\Delta Q = 0$ for the process. The gas inside the plug is forced through narrow twisting channels irreversibly. Work must be done to force the gas through the plug. Even though the entire process is irreversible, we can use thermodynamics to relate the initial and final states. The net work done by the gas is

$$\Delta W = \int_{V_i}^{0}P_i\,dV + \int_{0}^{V_f}P_f\,dV = P_fV_f - P_iV_i. \qquad (2.197)$$

Fig. 2.13. The Joule-Kelvin effect. Throttling of a gas through a porous plug can cause cooling or heating.

From the first law, $\Delta U = -\Delta W$ since $\Delta Q = 0$. Thus,

$$U_f + P_fV_f = U_i + P_iV_i \qquad (2.198)$$

or, in terms of enthalpy,

$$H_f = H_i. \qquad (2.199)$$

Thus, the throttling process is one which takes place at constant enthalpy. Let us now construct a hypothetical reversible path to describe the constant-enthalpy process. For each differential change along the reversible path, we have (assuming the total particle number remains constant)

$$[dH]_n = 0 = T\,dS + V\,dP. \qquad (2.200)$$

We see that the increase in entropy due to the throttling process is accompanied by a decrease in pressure. It is convenient to use temperature and pressure as independent variables rather than entropy and pressure. We therefore expand the entropy as

$$[dS]_n = \left(\frac{\partial S}{\partial T}\right)_{P,n}dT + \left(\frac{\partial S}{\partial P}\right)_{T,n}dP \qquad (2.201)$$

and obtain

$$[dH]_n = 0 = C_{P,n}\,dT + \left[V - T\left(\frac{\partial V}{\partial T}\right)_{P,n}\right]dP. \qquad (2.202)$$

In Eq. (2.202) we have used Eqs. (2.144) and (2.112). Equation (2.202) can be rewritten in the form

$$dT = \left(\frac{\partial T}{\partial P}\right)_{H,n}dP, \qquad (2.203)$$

where $(\partial T/\partial P)_{H,n}$ is the Joule-Kelvin coefficient and is defined

$$\left(\frac{\partial T}{\partial P}\right)_{H,n} = \frac{1}{C_{P,n}}\left[T\left(\frac{\partial V}{\partial T}\right)_{P,n} - V\right]. \qquad (2.204)$$

Let us now compute the Joule-Kelvin coefficient for various gases. For an ideal gas, $(\partial V/\partial T)_{P,n} = V/T$ and therefore the Joule-Kelvin coefficient, $(\partial T/\partial P)_{H,n}$, equals 0. There will be no temperature change during the throttling process for an ideal gas. Furthermore, since $T_f = T_i$ for ideal gases, $P_fV_f = P_iV_i$ and no net work will be done ($\Delta W = 0$). If work had been done on or by the gas, we would expect a temperature change since the process is adiabatic. For a van der Waals gas, assuming that $C_{V,n} = \frac{3}{2}nR$, we find

$$\left(\frac{\partial T}{\partial P}\right)_{H,n} = \frac{1}{R}\left[\frac{\dfrac{2a}{RT}\left(\dfrac{v-b}{v}\right)^2 - b}{\dfrac{5}{2} - \dfrac{3a}{RTv}\left(\dfrac{v-b}{v}\right)^2}\right], \qquad (2.205)$$

where $v = V/n$ is the molar volume. Equation (2.205) is straightforward to obtain from the right-hand term in Eq. (2.204) and from Eqs. (2.12), (2.144), and (2.100). For an interacting gas, such as the van der Waals gas, the Joule-Kelvin coefficient can change sign. This is easiest to see if we consider low densities so that $RTv \gg a$ and $v \gg b$. Then

$$\left(\frac{\partial T}{\partial P}\right)_{H,n} \approx \frac{2}{5R}\left[\frac{2a}{RT} - b\right]. \qquad (2.206)$$

For low temperatures $(\partial T/\partial P)_{H,n} > 0$, and gases cool in the throttling process, but at high temperatures we have $(\partial T/\partial P)_{H,n} < 0$, and they heat up. Two effects determine the behaviour of the Joule-Kelvin coefficient. On the one hand, the gas expands, which gives rise to cooling. On the other hand, work can be done on or by the gas. If $P_iV_i > P_fV_f$, then net work is done on the gas, which causes heating. If $P_iV_i < P_fV_f$, then net work is done by the gas, which causes cooling.

The inversion temperature (the temperature at which the sign of $\mu_{JK}$ changes) for the Joule-Kelvin coefficient will be a function of pressure. Since $C_{P,n} > 0$, the condition for inversion [from Eq. (2.204)] is

$$\left(\frac{\partial V}{\partial T}\right)_{P,n} = \frac{V}{T} \qquad (2.207)$$

or, for a van der Waals gas [cf. Eq. (2.205)],

$$\frac{2a}{RT}\left(\frac{v-b}{v}\right)^2 = b. \qquad (2.208)$$

We can use the van der Waals equation, (2.12), to write Eq. (2.208) in terms of pressure and temperature. First solve Eq. (2.208) for v as a function of R, T, a, and b, and substitute into the van der Waals equation. This gives

$$P^{\mathrm{inv}} = \frac{a}{b^2}\left(1 - \sqrt{\frac{RTb}{2a}}\right)\left(3\sqrt{\frac{RTb}{2a}} - 1\right). \qquad (2.209)$$

The inversion curve predicted by the van der Waals equation has the shape of a parabola with a maximum at $T^{\mathrm{inv}} = 8a/9bR$. For CO₂, $T^{\mathrm{inv}}_{\mathrm{vdW}} = 911\,\mathrm{K}$ while the experimental value [15] is $T^{\mathrm{inv}}_{\mathrm{exp}} = 1500\,\mathrm{K}$. For H₂, $T^{\mathrm{inv}}_{\mathrm{vdW}} = 99\,\mathrm{K}$ while the experimental value [15] is $T^{\mathrm{inv}}_{\mathrm{exp}} = 202\,\mathrm{K}$. In Fig. 2.14, we plot the van der Waals and the experimental inversion curves for N₂. The van der Waals equation predicts an inversion curve which lies below the experimental curve but qualitatively has the correct shape. For nitrogen at $P = 10^5\,\mathrm{Pa}$, $\mu_{JK} = 1.37\times10^{-7}\,\mathrm{K/Pa}$ at $T = 573\,\mathrm{K}$, $\mu_{JK} = 1.27\times10^{-6}\,\mathrm{K/Pa}$ at $T = 373\,\mathrm{K}$, $\mu_{JK} = 6.40\times10^{-6}\,\mathrm{K/Pa}$ at $T = 173\,\mathrm{K}$, and $\mu_{JK} = 2.36\times10^{-5}\,\mathrm{K/Pa}$ at $T = 93\,\mathrm{K}$. (For experimental values of $\mu_{JK}$ for other substances, see the International Critical Tables [5].) We see that the cooling effect can be quite large for throttling.

Fig. 2.14. A plot of the inversion temperature versus pressure for the Joule-Kelvin coefficient of N₂. The solid line is the experimental curve [6]. The dashed line is the curve predicted by the van der Waals equation for $a = 0.1408\,\mathrm{Pa\,m^6/mol^2}$ and $b = 3.913\times10^{-5}\,\mathrm{m^3/mol}$.

A schematic drawing of a liquefaction machine which utilizes the Joule-Kelvin effect is shown in Fig. 2.15. Gas is precooled in a vessel, A, below its inversion temperature and expanded through a small orifice into vessel B, thus causing it to cool due to the Joule-Kelvin effect. The cooled gas is allowed to circulate about the tube in B so that the gas in the tube becomes progressively

Fig. 2.15. Schematic drawing of a gas liquefier using the Joule-Kelvin effect.

cooler before expanding through the orifice. The process is run continuously and eventually the gas liquefies and collects below. At times the Joule-Kelvin effect can lead to serious difficulties. For example, highly compressed H₂, which has a low inversion temperature, can ignite spontaneously when leaking from a damaged container, because of Joule-Kelvin heating.
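The van der Waals inversion curve of Fig. 2.14 can be generated directly from Eq. (2.209). The Python sketch below does this for N₂, assuming the constants quoted in the figure caption ($a = 0.1408\,\mathrm{Pa\,m^6/mol^2}$, $b \approx 3.913\times10^{-5}\,\mathrm{m^3/mol}$), and also prints the zero-pressure inversion temperatures and the temperature $8a/9bR$ at which the curve peaks.

```python
import numpy as np

R = 8.314                  # J/(mol K)
a, b = 0.1408, 3.913e-5    # van der Waals constants for N2 (Pa m^6/mol^2, m^3/mol)

def inversion_pressure(T):
    """Inversion curve, Eq. (2.209): with u = sqrt(R T b / 2a), Eq. (2.208) gives
    v = b/(1 - u), and the van der Waals equation then gives
    P_inv = (a/b^2)(1 - u)(3u - 1)."""
    u = np.sqrt(R*T*b/(2.0*a))
    return (a/b**2)*(1.0 - u)*(3.0*u - 1.0)

T_lo = 2.0*a/(9.0*R*b)     # lower zero-pressure inversion temperature
T_hi = 2.0*a/(R*b)         # upper zero-pressure inversion temperature
T_peak = 8.0*a/(9.0*R*b)   # temperature at which P_inv is largest

print(f"inversion window at P -> 0 : {T_lo:.0f} K to {T_hi:.0f} K")
print(f"peak of the curve          : T = {T_peak:.0f} K, "
      f"P = {inversion_pressure(T_peak)/1e7:.2f} x 10^7 Pa")

for T in (123.0, 223.0, 323.0, 423.0, 523.0, 623.0):
    print(f"T = {T:5.0f} K   P_inv = {inversion_pressure(T)/1e7:6.2f} x 10^7 Pa")
```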
► S2.B. Entropy of Mixing and the Gibbs Paradox [6]

If entropy is a measure of disorder in a system, then we expect that an entropy increase will be associated with the mixing of distinguishable substances, since this causes an increase in disorder. As we shall now show, this does indeed occur even for ideal gases.

Let us consider an ideal gas containing $n_1$ moles of atoms of type $A_1$, $n_2$ moles of atoms of type $A_2$, ..., and $n_m$ moles of atoms of type $A_m$ in a box of volume V and held at temperature T. The equation of state for the ideal gas mixture can be written

$$P = \sum_{j=1}^{m}P_j, \qquad (2.210)$$

where $P_j$ is the pressure of the jth component (partial pressure) and is defined as follows:

$$P_j = \frac{n_jRT}{V}. \qquad (2.211)$$

The mole fraction of atoms of type $A_j$ is defined as follows:

$$x_j = \frac{n_j}{\sum_{i=1}^{m}n_i}. \qquad (2.212)$$

The change in the Gibbs free energy for an arbitrary change in the variables P, T, $n_1,\ldots,n_m$ is

$$dG = -S\,dT + V\,dP + \sum_{j=1}^{m}\mu_j\,dn_j, \qquad (2.213)$$

where $\mu_j$ is the chemical potential and S, V, and $\mu_j$ are given by Eqs. (2.109)-(2.111), but with mole number replacing particle number.

Let us now find the entropy of mixing for a mixture of distinguishable monatomic ideal gases. The Gibbs free energy of n moles of a monatomic ideal gas is

$$G(T,P,n) = -nRT\ln\left[\frac{T^{5/2}}{P}\right] + G^{(0)}, \qquad (2.214)$$

where $G^{(0)}$ is a constant [cf. Eq. (2.107) and Exercise 2.3]. We will first consider a box held at temperature T and pressure P partitioned into m compartments, and let us assume the following: compartment 1 contains $n_1$ moles of atoms of type $A_1$ at pressure P and temperature T; compartment 2 contains $n_2$ moles of atoms of type $A_2$ at pressure P and temperature T; and so on. The compartments are separated by walls that can transmit heat and mechanical energy, so the pressure and temperature are uniform throughout the system. The Gibbs free energy of the system is the sum of the free energies of each compartment and can be written

$$G_i(P,T,n_1,\ldots,n_m) = -\sum_{j=1}^{m}n_jRT\ln\left[\frac{T^{5/2}}{P}\right] + G^{(1)}, \qquad (2.215)$$

where $G^{(1)}$ is a constant. If we now remove the partitions and let the gases mix so that the final temperature and pressure are T and P, the Gibbs free energy of the mixture will be

$$G_f(P,T,n_1,\ldots,n_m) = -\sum_{j=1}^{m}n_jRT\ln\left[\frac{T^{5/2}}{P_j}\right] + G^{(2)} = G_i(P,T,n_1,\ldots,n_m) + \sum_{j=1}^{m}n_jRT\ln(x_j) + G^{(2)} - G^{(1)}, \qquad (2.216)$$

where $G^{(2)}$ is a constant and we have used the relation $P_j = Px_j$. The change in the Gibbs free energy during the mixing process is therefore

$$\Delta G = G_f - G_i = \sum_{j=1}^{m}n_jRT\ln(x_j) + G^{(2)} - G^{(1)}. \qquad (2.217)$$

From Eqs. (2.109) and (2.217), the increase in entropy due to mixing is

$$\Delta S_{\mathrm{mix}} = -\sum_{j=1}^{m}n_jR\ln(x_j). \qquad (2.218)$$

If $x_j = 1$ (one compartment and one substance), then $\Delta S_{\mathrm{mix}} = 0$, as it should be. If there are two compartments, each containing one mole, then $x_1 = 1/2$ and $x_2 = 1/2$ and $\Delta S_{\mathrm{mix}} = 2R\ln(2)$, and the entropy increases during mixing.

It is important to note that Eq. (2.218) contains no explicit reference to the type of particles in the various compartments. As long as they are different, mixing increases the entropy. However, if they are identical, Eq. (2.218) tells us that there will still be an increase in entropy when the partitions are removed, even though the concept of mixing loses its meaning. Clearly, Eq. (2.218) does not work for identical particles. This was first noticed by Gibbs and is called the Gibbs paradox. The resolution of the Gibbs paradox lies in quantum mechanics, as we shall see when we come to statistical mechanics (cf. Chapter 7). Identical particles must be counted in a different way from distinguishable particles (they have different "statistics"). This difference between identical and distinguishable particles persists even in the classical limit and leads to a resolution of the Gibbs paradox.
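Equation (2.218) is trivial to evaluate numerically. The short Python sketch below is a minimal check: it reproduces the one-mole-plus-one-mole result $\Delta S_{\mathrm{mix}} = 2R\ln 2$ and also handles an arbitrary list of mole numbers.

```python
import numpy as np

R = 8.314  # J/(mol K)

def mixing_entropy(moles):
    """Delta S_mix = -R sum_j n_j ln(x_j), Eq. (2.218)."""
    n = np.asarray(moles, dtype=float)
    x = n/n.sum()
    return -R*np.sum(n*np.log(x))

print(mixing_entropy([1.0, 1.0]))          # 2 R ln 2 ~ 11.53 J/K
print(2*R*np.log(2.0))                     # the same number, for comparison
print(mixing_entropy([0.25, 0.25, 0.5]))   # three distinguishable components
```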
► S2.C. Osmotic Pressure in Dilute Solutions

Each spring, when the weather begins to warm up, sap rises in the trees, and the yearly cycle of life starts again. The rising of sap is one of many examples in biological systems of the phenomenon called osmosis. We can easily demonstrate the effect in the laboratory. Let us take a beaker of water and immerse a long tube (open at both ends) in it. The water levels of the tube and of the beaker will be the same. Next, close off the bottom end of the tube with a membrane which is permeable to water but not sugar. The water levels will still be the same in the tube and the beaker. Now add a bit of sugar to the water in the tube. Water will begin to enter the tube through the membrane, and the level of the sugar solution will rise a distance h above the level of the water in the beaker (cf. Fig. 2.16). The excess pressure created in the tube, $\pi = \rho_shg$, is called the osmotic pressure ($\rho_s$ is the density of the sugar solution and g is the acceleration of gravity). After equilibrium is reached, the pressure on the side of the membrane with sugar solution will be greater than that on the water side, by the amount $\pi$. The membrane must sustain the unbalanced force.

Fig. 2.16. The osmotic pressure of the sugar solution is $\pi = \rho_shg$, where $\rho_s$ is the density of the sugar solution and g is the acceleration of gravity.

Fig. 2.17. A schematic representation of osmosis.

It is instructive to show the same phenomenon in another way (cf. Fig. 2.17). We will consider a system consisting of pure water, separated by a permeable (to water) membrane from a solution of sugar and water. The entire system is kept at a fixed temperature T, and the membrane is rigid. At equilibrium, there will be an imbalance in the pressures of the two sides. If $P_0$ is the pressure of the pure water, then the sugar solution will have a pressure $P = P_0 + \pi$, where $\pi$ is the osmotic pressure. This imbalance of pressures is possible because the membrane is rigid and cannot transmit mechanical energy. Since the water is free to move through the membrane, the chemical potential of the pure water must be equal to the chemical potential of the water in the sugar solution.

Let us first write the thermodynamic relations for this system. First consider the sugar solution. A differential change in the Gibbs free energy, $G = G(P,T,n_w,n_s)$, of the sugar solution (with $n_w$ moles of water and $n_s$ moles of sugar) can be written

$$dG = -S\,dT + V\,dP + \mu_w\,dn_w + \mu_s\,dn_s, \qquad (2.219)$$

where $S = -(\partial G/\partial T)_{P,n_w,n_s}$ is the entropy of the solution, $V = (\partial G/\partial P)_{T,n_w,n_s}$ is the volume of the solution, and $\mu_w = (\partial G/\partial n_w)_{P,T,n_s}$ and $\mu_s = (\partial G/\partial n_s)_{P,T,n_w}$ are the chemical potentials of the water and sugar, respectively, in the solution. The chemical potentials are intensive and depend only on the ratio $n_s/n_w$. It is convenient to introduce mole fractions

$$x_w = \frac{n_w}{n_w+n_s} = \frac{1}{1+n_s/n_w} \qquad \text{and} \qquad x_s = \frac{n_s}{n_w+n_s} = \frac{n_s/n_w}{1+n_s/n_w}. \qquad (2.220)$$

Since $x_w + x_s = 1$, the chemical potentials can be written as a function of the mole fraction $x_s$. Thus, $\mu_w = \mu_w(P,T,x_s)$ and $\mu_s = \mu_s(P,T,x_s)$. At equilibrium the chemical potentials of the pure water and the water in the sugar solution will be equal. If we let $\mu_w^{(0)}(P_0,T)$ denote the chemical potential of the pure water, we can write

$$\mu_w^{(0)}(P_0,T) = \mu_w(P,T,x_s) \qquad (2.221)$$

as the condition for thermodynamic equilibrium.
We want to obtain an expression for the osmotic pressure in terms of measurable quantities. We will simplify properties of the solution as much as possible. We will assume that the solution is dilute so that $n_s/n_w \ll 1$ and $x_s \approx n_s/n_w \ll 1$. We can construct a fairly simple model to describe the solution. We write the Gibbs free energy of the solution in the form

$$G(P,T,n_s,n_w) = n_w\mu_w^{(0)}(P,T) + n_s\mu_s^{(0)}(P,T) - \lambda\frac{n_sn_w}{n} + n_wRT\ln(x_w) + n_sRT\ln(x_s). \qquad (2.222)$$

The chemical potential $\mu_w^{(0)}$ ($\mu_s^{(0)}$) contains contributions to the Gibbs free energy due to the presence of water (sugar) molecules and due to self-interactions. The term $-\lambda(n_sn_w/n)$ gives the contribution to the free energy due to interactions between sugar and water molecules. The last two terms on the right give contributions to the free energy due to mixing [cf. Eq. (3.57)]. The chemical potential of the water in the solution can now be written

$$\mu_w(P,T,x_s) = \left(\frac{\partial G}{\partial n_w}\right)_{P,T,n_s} = \mu_w^{(0)}(P,T) - \lambda x_s^2 + RT\ln(1-x_s), \qquad (2.223)$$

where $\mu_w^{(0)}(P,T)$ is the chemical potential of pure water at pressure P and temperature T. For a dilute solution, $x_s \approx n_s/n \ll 1$ and $\ln(1-x_s) = -x_s - \frac{1}{2}x_s^2 - \cdots$. Thus, to lowest order in $x_s \approx n_s/n$, we find

$$\mu_w(P,T,x_s) \approx \mu_w^{(0)}(P,T) - x_sRT \qquad (2.224)$$

for the chemical potential of water in a dilute sugar solution.

We now can find an expression for the osmotic pressure, $\pi = P - P_0$. Let us note that water, as well as most liquids, is very incompressible. The compressibility, $\kappa_T$, of water at 0°C is $\kappa_T = 4.58\times10^{-11}\,\mathrm{cm^2/dyne}$. Therefore the quantity $(\partial\mu_w^{(0)}/\partial P)_{T,n_w} = (\partial V^0/\partial n_w)_{P,T} = v_w^0$ ($v_w^0$ is the partial molar volume of water in the absence of solute and $V^0$ is the volume of water in the absence of solute) remains approximately constant for small changes in pressure. With this observation we can integrate $(\partial\mu_w^{(0)}/\partial P)_{T,n_w}$ to find

$$\mu_w^{(0)}(P,T) - \mu_w^{(0)}(P_0,T) \approx v_w^0(P - P_0) = v_w^0\pi. \qquad (2.225)$$

Let us now assume that the change in the volume of water as we increase the number of moles is proportional to the number of moles so that $V^0 = n_wv_w^0$. Also, for very small concentrations of solute, we can assume that the change in the volume of water due to the presence of the solute is negligible so that $V^0 \approx V$, where V is the volume of the mixture. Then we can combine Eqs. (2.221), (2.224), and (2.225) to obtain

$$\pi = \frac{n_sRT}{V}. \qquad (2.226)$$

Equation (2.226) is called van't Hoff's law and, surprisingly, looks very much like the ideal gas law, although we are by no means dealing with a mixture of ideal gases. Equation (2.226) is well verified for all dilute neutral solvent-solute systems.

EXERCISE 2.10. An experiment is performed in which the osmotic pressure of a solution, containing $n_{\mathrm{suc}}$ moles of sucrose (C₁₂H₂₂O₁₁) and 1 kg of water (H₂O), is found to have the following values [2]: (a) for $n_{\mathrm{suc}} = 0.1$, $\pi = 2.53\times10^5\,\mathrm{Pa}$; (b) for $n_{\mathrm{suc}} = 0.2$, $\pi = 5.17\times10^5\,\mathrm{Pa}$; and (c) for $n_{\mathrm{suc}} = 0.3$, $\pi = 7.81\times10^5\,\mathrm{Pa}$. Compute the osmotic pressure of this system using van't Hoff's law. How do the computed values compare with the measured values?

Answer: The molecular weight of water (H₂O) is $M_{\mathrm{H_2O}} = 18\,\mathrm{g/mol}$. Therefore, 1 kg of water contains 55.56 mol of water. The molar volume of water is $v_{\mathrm{H_2O}} = 18\times10^{-6}\,\mathrm{m^3/mol}$. The osmotic pressure of the solution, according to van't Hoff's law, is

$$\pi = \frac{n_{\mathrm{suc}}\,(8.317\,\mathrm{J/mol\,K})(303\,\mathrm{K})}{55.56\,\mathrm{mol}\times18\times10^{-6}\,\mathrm{m^3/mol}}.$$

The computed values are as follows: (a) For $n_{\mathrm{suc}} = 0.1$, $\pi = 2.52\times10^5\,\mathrm{Pa}$; (b) for $n_{\mathrm{suc}} = 0.2$, $\pi = 5.04\times10^5\,\mathrm{Pa}$; and (c) for $n_{\mathrm{suc}} = 0.3$, $\pi = 7.56\times10^5\,\mathrm{Pa}$. The predictions of van't Hoff's law are good for a dilute solution of sucrose in water, but begin to deviate as the mole fraction of sucrose increases.
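The comparison in Exercise 2.10 can be tabulated directly. The sketch below evaluates van't Hoff's law, Eq. (2.226), with the approximation $V \approx n_wv_w^0$ used above (55.56 mol of water, $v_w^0 = 18\times10^{-6}\,\mathrm{m^3/mol}$, T = 303 K) and lists, for reference, the measured values quoted in the exercise.

```python
R = 8.314                 # J/(mol K)
T = 303.0                 # K
n_w, v_w = 55.56, 18e-6   # moles of water in 1 kg, molar volume (m^3/mol)
V = n_w*v_w               # solution volume approximated by the volume of the water

measured = {0.1: 2.53e5, 0.2: 5.17e5, 0.3: 7.81e5}   # Pa, values quoted in Exercise 2.10

for n_suc, pi_exp in measured.items():
    pi_vh = n_suc*R*T/V                               # van't Hoff's law, Eq. (2.226)
    err = 100.0*(pi_vh - pi_exp)/pi_exp
    print(f"n_suc = {n_suc:.1f} mol : pi = {pi_vh/1e5:.2f} x 10^5 Pa "
          f"(measured {pi_exp/1e5:.2f} x 10^5 Pa, {err:+.1f}%)")
```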
► S2.D. The Thermodynamics of Chemical Reactions [16-18]

Chemical reactions occur in systems containing several species of molecules (which we will call A, B, C, and D), which can transform into one another through inelastic collisions. A typical case might be one in which molecules A and B can collide inelastically to form molecules C and D. Conversely, molecules C and D can collide inelastically to form molecules A and B. The collisions occur at random and can be either elastic or inelastic. To be inelastic and result in a reaction, the two molecules must have sufficient energy to overcome any potential barriers to the reaction which might exist.

Chemical equilibrium is a dynamical state of the system. It occurs when the rate of production of each chemical species is equal to its rate of depletion through chemical reactions. The chemical reactions themselves never stop, even at equilibrium. In the early part of this century a Belgian scientist, de Donder, found that it was possible to characterize each chemical reaction by a single variable $\xi$, called the degree of reaction. In terms of $\xi$, it is then possible to determine when the Gibbs free energy has reached its minimum value (chemical reactions usually take place in systems with fixed temperature and pressure) and therefore when the chemical system reaches chemical equilibrium. It is important to notice that the concept of degree of reaction assumes that we can generalize the concept of Gibbs free energy to systems out of equilibrium.

► S2.D.1. The Affinity

Let us consider a chemical reaction of the form

$$|\nu_A|\,\mathrm{A} + |\nu_B|\,\mathrm{B} \;\underset{k_2}{\overset{k_1}{\rightleftharpoons}}\; \nu_C\,\mathrm{C} + \nu_D\,\mathrm{D}. \qquad (2.227)$$

The quantities $\nu_A$, $\nu_B$, $\nu_C$, and $\nu_D$ are called stoichiometric coefficients; $\nu_j$ is the number of molecules of type j needed for the reaction to take place. By convention, $\nu_A$ and $\nu_B$ are negative. The constant $k_1$ is the rate constant for the forward reaction, and $k_2$ is the rate constant for the backward reaction. The rate constants give the probability per unit time that a chemical reaction takes place. If the rate constants are known, then one can find the rate of change in the number of each type of molecule involved in the chemical reaction. For example, the rate of change in the number, $N_A$, of molecules of type A is given by

$$\frac{dN_A}{dt} = -k_1N_A^{|\nu_A|}N_B^{|\nu_B|} + k_2N_C^{\nu_C}N_D^{\nu_D}, \qquad (2.228)$$

where $|\nu_A|$ denotes the absolute value of $\nu_A$. Equation (2.228) reflects the fact that $|\nu_A|$ molecules of A and $|\nu_B|$ molecules of B must collide to destroy A molecules while $|\nu_C|$ molecules of C and $|\nu_D|$ molecules of D must collide to create A molecules.

Let us now assume that initially there are $n_A = -\nu_An_0$ moles of A, $n_B = -\nu_Bn_0 + N_B^0$ moles of B, $n_C = \nu_Cn_0'$ moles of C, and $n_D = \nu_Dn_0' + N_D^0$ moles of D. The reaction to the right will be complete when $n_A = 0$, $n_B = N_B^0$, $n_C = \nu_C(n_0 + n_0')$, and $n_D = \nu_D(n_0 + n_0') + N_D^0$. We next define the degree of reaction by the equation

$$\xi = (n_0 + n_0') + \frac{n_A}{\nu_A}. \qquad (2.229)$$

As we have defined it, $\xi$ has the units of moles. In terms of $\xi$, the number of moles of each substance can be written

$$n_A = -\nu_A(n_0 + n_0') + \nu_A\xi, \qquad (2.230)$$

$$n_B = -\nu_B(n_0 + n_0') + N_B^0 + \nu_B\xi, \qquad (2.231)$$

$$n_C = \nu_C\xi, \qquad (2.232)$$

and

$$n_D = \nu_D\xi + N_D^0. \qquad (2.233)$$

Any changes in the concentrations due to the reaction can therefore be written

$$dn_A = \nu_A\,d\xi, \qquad dn_B = \nu_B\,d\xi, \qquad dn_C = \nu_C\,d\xi, \qquad dn_D = \nu_D\,d\xi \qquad (2.234)$$

or

$$\frac{dn_A}{\nu_A} = \frac{dn_B}{\nu_B} = \frac{dn_C}{\nu_C} = \frac{dn_D}{\nu_D} = d\xi. \qquad (2.235)$$

Equations (2.234) and (2.235) are very important because they tell us that any changes in the thermodynamic properties of a system due to a given reaction can be characterized by a single variable.
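The statement that chemical equilibrium is a dynamical balance can be illustrated by integrating a rate equation of the form (2.228). The Python sketch below is only an illustration: it uses the elementary reaction A + B ⇌ C + D (all stoichiometric coefficients equal to ±1) with arbitrary, made-up rate constants and initial concentrations. The concentrations settle to values at which the forward and backward rates are equal, even though neither rate ever vanishes.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 0.5          # illustrative forward/backward rate constants (arbitrary units)

def rates(t, y):
    A, B, C, D = y
    r = k1*A*B - k2*C*D     # net forward rate of A + B <-> C + D
    return [-r, -r, +r, +r]

sol = solve_ivp(rates, (0.0, 10.0), y0=[1.0, 0.8, 0.0, 0.1])

A, B, C, D = sol.y[:, -1]
print(f"late-time concentrations: A={A:.3f} B={B:.3f} C={C:.3f} D={D:.3f}")
print(f"forward rate k1*A*B = {k1*A*B:.4f},  backward rate k2*C*D = {k2*C*D:.4f}")
# the two rates agree at equilibrium although both remain nonzero
```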
From Eq. (2.108), differential changes in the Gibbs free energy may be written

$$dG = -S\,dT + V\,dP + \sum_j\mu_j\,dn_j = -S\,dT + V\,dP + \sum_j\nu_j\mu_j\,d\xi, \qquad (2.236)$$

where the sum is over the species which participate in the reaction. Therefore

$$\left(\frac{\partial G}{\partial\xi}\right)_{P,T} = \sum_j\nu_j\mu_j, \qquad (2.237)$$

and the quantity

$$A \equiv \sum_j\nu_j\mu_j \qquad (2.238)$$

is called the affinity (in some books the affinity is defined with an opposite sign). At chemical equilibrium, the Gibbs free energy must be a minimum and, therefore, at chemical equilibrium the affinity must be zero,

$$A^0 = \sum_j\nu_j\mu_j^0 = 0 \qquad (2.239)$$

(the superscript 0 denotes equilibrium). We can easily find the sign of the affinity as the system moves toward chemical equilibrium from the left or right. At constant P and T, the Gibbs free energy, G, must always decrease as the system moves toward chemical equilibrium (at equilibrium G is a minimum). Therefore,

$$[dG]_{P,T} = \left(\frac{\partial G}{\partial\xi}\right)_{P,T}d\xi = A\,d\xi \le 0. \qquad (2.240)$$

If the reaction goes to the right, then $d\xi > 0$ and $A < 0$. If the reaction goes to the left, then $d\xi < 0$ and $A > 0$. This decrease in the Gibbs free energy is due to spontaneous entropy production resulting from the chemical reactions (see Exercise 2.6).

If there are r chemical reactions in the system involving species j, then there will be r parameters, $\xi_k$, needed to describe the rate of change of the number of moles, $n_j$:

$$dn_j = \sum_{k=1}^{r}\nu_{jk}\,d\xi_k. \qquad (2.241)$$

The sum over k is over all chemical reactions in which molecules of type j participate.

Table 2.2. Values of the Chemical Potential, $\mu^0$, for Some Molecules in the Gas Phase at Pressure $P_0 = 1\,\mathrm{atm}$ and Temperature $T_0 = 298\,\mathrm{K}$

    Molecule     μ⁰ (kcal/mol)
    H₂             0.00
    HI             0.31
    I₂             4.63
    N₂             0.00
    NO₂           12.39
    NH₃           -3.98
    N₂O₄          23.49

Using ideal gas laws, some useful relations can be obtained for reactions in the gas phase. Consider a gas composed of four different kinds of molecules (A, B, C, and D) which undergo the reaction in Eq. (2.227). If the partial pressure of the ith constituent is $P_i$, the chemical potential of the ith constituent can be written

$$\mu_i(P_i,T) = \mu_i^0(P_0,T_0) - RT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P_i}\right)\right], \qquad (2.242)$$

where $\mu_i^0(P_0,T_0)$ is the chemical potential of the ith constituent at pressure $P_0$ and temperature $T_0$. Values of $\mu_i^0$, with $P_0 = 1\,\mathrm{atm}$ and $T_0 = 298\,\mathrm{K}$, have been tabulated for many kinds of molecules [20]. A selection is given in Table 2.2. If we use Eq. (2.242), the Gibbs free energy can be written

$$G(T,P,\xi) = \sum_i n_i\mu_i = \sum_i n_i\mu_i^0(P_0,T_0) - \sum_i n_iRT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P_i}\right)\right]$$
$$\qquad = \sum_i n_i\mu_i^0(P_0,T_0) - \sum_i n_iRT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P}\right)\right] + RT\ln\left(x_A^{n_A}x_B^{n_B}x_C^{n_C}x_D^{n_D}\right), \qquad (2.243)$$

and the affinity can be written

$$A(T,P,\xi) = \sum_i\nu_i\mu_i = \sum_i\nu_i\mu_i^0(P_0,T_0) - \sum_i\nu_iRT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P}\right)\right] + RT\ln\left(\frac{x_C^{\nu_C}x_D^{\nu_D}}{x_A^{|\nu_A|}x_B^{|\nu_B|}}\right), \qquad (2.244)$$

where $P = \sum_iP_i$ is the pressure and T is the temperature at which the reaction occurs.

For "ideal gas reactions" the equilibrium concentrations of the reactants can be deduced from the condition that at equilibrium the affinity is zero, $A^0 = 0$. From Eq. (2.244) this gives the equilibrium condition

$$\ln\left(\frac{x_C^{\nu_C}x_D^{\nu_D}}{x_A^{|\nu_A|}x_B^{|\nu_B|}}\right) = -\frac{1}{RT}\sum_i\nu_i\mu_i^0(P_0,T_0) + \sum_i\nu_i\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P}\right)\right], \qquad (2.245)$$

where $P_0 = 1\,\mathrm{atm}$ and $T_0 = 298\,\mathrm{K}$. Equation (2.245) is called the law of mass action. As we shall show in Exercise 2.11, we can use it to compute the value of the degree of reaction, and therefore the mole fractions, at which chemical equilibrium occurs as a function of pressure and temperature.
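As a simple application of the law of mass action, consider the gas-phase reaction H₂ + I₂ ⇌ 2HI at $T = T_0 = 298\,\mathrm{K}$ and $P = P_0 = 1\,\mathrm{atm}$, using the values of $\mu^0$ from Table 2.2. Because $\sum_j\nu_j = 0$ for this reaction, the temperature and pressure prefactors in Eq. (2.245) drop out at $T = T_0$, $P = P_0$, and the equilibrium condition reduces to $x_{\mathrm{HI}}^2/(x_{\mathrm{H_2}}x_{\mathrm{I_2}}) = \exp(-\sum_j\nu_j\mu_j^0/RT_0)$. The Python sketch below evaluates this and solves for the equilibrium composition starting from an equimolar H₂/I₂ mixture; the starting mixture and the use of SciPy's brentq root finder are illustrative choices, not part of the original discussion.

```python
import numpy as np
from scipy.optimize import brentq

R_kcal = 1.9872e-3                           # gas constant in kcal/(mol K)
T0 = 298.0
mu0 = {"H2": 0.00, "I2": 4.63, "HI": 0.31}   # kcal/mol, from Table 2.2
nu  = {"H2": -1,   "I2": -1,   "HI": 2}

# Law of mass action, Eq. (2.245), with sum_j nu_j = 0 and P = P0, T = T0:
K = np.exp(-sum(nu[s]*mu0[s] for s in nu)/(R_kcal*T0))
print(f"K = x_HI^2/(x_H2 x_I2) = {K:.0f}")

# Start from 1 mol H2 + 1 mol I2; in terms of the degree of reaction xi,
# n_H2 = n_I2 = 1 - xi and n_HI = 2 xi, while the total mole number stays 2.
def f(xi):
    return (2*xi)**2/((1 - xi)**2) - K

xi_eq = brentq(f, 1e-9, 1 - 1e-9)
print(f"xi_eq = {xi_eq:.4f};  x_HI = {xi_eq:.4f}, x_H2 = x_I2 = {(1 - xi_eq)/2:.4f}")
```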
► S2.D.2. Stability

Given the fact that the Gibbs free energy for fixed P and T is a minimum at equilibrium, we can deduce a number of interesting general properties of chemical reactions. First, let us note that at equilibrium we have

$$\left(\frac{\partial G}{\partial\xi}\right)^0_{T,P} = A^0 = 0 \qquad (2.246)$$

and

$$\left(\frac{\partial^2 G}{\partial\xi^2}\right)^0_{T,P} = \left(\frac{\partial A}{\partial\xi}\right)^0_{T,P} > 0. \qquad (2.247)$$

Equations (2.246) and (2.247) are statements of the fact that the Gibbs free energy, considered as a function of P, T, and $\xi$, is a minimum at equilibrium for fixed T and P.

From the fundamental equation, $H = G + TS$, we obtain several important relations. First, let us note that at equilibrium

$$\left(\frac{\partial H}{\partial\xi}\right)^0_{P,T} = T\left(\frac{\partial S}{\partial\xi}\right)^0_{P,T} \qquad (2.248)$$

[we have used Eq. (2.246)]. Thus, changes in enthalpy are proportional to the changes in entropy. The left-hand side of Eq. (2.248) is called the heat of reaction. It is the heat absorbed per unit reaction in the neighborhood of equilibrium. For an exothermic reaction, $(\partial H/\partial\xi)^0_{P,T}$ is negative. For an endothermic reaction, $(\partial H/\partial\xi)^0_{P,T}$ is positive. From Eq. (2.109), Eq. (2.248) can be written

$$\left(\frac{\partial H}{\partial\xi}\right)^0_{P,T} = -T\left[\frac{\partial}{\partial\xi}\left(\frac{\partial G}{\partial T}\right)_{P,\xi}\right]^0_{T,P} = -T\left[\frac{\partial}{\partial T}\left(\frac{\partial G}{\partial\xi}\right)_{T,P}\right]^0_{P,\xi} = -T\left(\frac{\partial A}{\partial T}\right)^0_{P,\xi}. \qquad (2.249)$$

For an "ideal gas reaction," we can use Eqs. (2.244) and (2.249) to obtain an explicit expression for the heat of reaction. We find

$$\left(\frac{\partial H}{\partial\xi}\right)^0_{P,T} = \sum_j\nu_j\left\{\frac{5}{2}RT + RT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P_j}\right)\right]\right\} = \frac{5}{2}RT\sum_j\nu_j + RT\ln\left\{\prod_j\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P_j}\right)\right]^{\nu_j}\right\}. \qquad (2.250)$$

If the total number of particles changes during the reaction ($\sum_j\nu_j \neq 0$), there will be contributions to the heat of reaction from two sources: (1) there will be a change in the heat capacity of the gas due to the change in particle number, and (2) there will be a change in the entropy due to the change in the mixture of the particles. If the total number of particles remains unchanged ($\sum_j\nu_j = 0$), the only contribution to the heat of reaction will come from the change in the mixture of particles (assuming we neglect changes to the heat capacity due to changes in the internal structure of the molecules).

Let us now obtain some other general properties of chemical reactions. From the chain rule [Eq. (2.6)] we can write

$$\left(\frac{\partial\xi}{\partial T}\right)^0_{P,A} = -\frac{(\partial A/\partial T)^0_{P,\xi}}{(\partial A/\partial\xi)^0_{P,T}} = \frac{1}{T}\,\frac{(\partial H/\partial\xi)^0_{P,T}}{(\partial^2 G/\partial\xi^2)^0_{P,T}}. \qquad (2.251)$$

The denominator in Eq. (2.251) is always positive. Thus, at equilibrium any small increase in temperature causes the reaction to shift in a direction in which heat is absorbed.

Let us next note the Maxwell relation

$$\left(\frac{\partial V}{\partial\xi}\right)_{T,P} = \left(\frac{\partial A}{\partial P}\right)_{T,\xi} \qquad (2.252)$$

[cf. Eqs. (2.236) and (2.238)]. It enables us to write

$$\left(\frac{\partial\xi}{\partial P}\right)^0_{T,A} = -\frac{(\partial A/\partial P)^0_{T,\xi}}{(\partial A/\partial\xi)^0_{T,P}} = -\frac{(\partial V/\partial\xi)^0_{T,P}}{(\partial^2 G/\partial\xi^2)^0_{T,P}}. \qquad (2.253)$$

At equilibrium an increase in pressure at fixed temperature will cause the reaction to shift in a direction which decreases the total volume.

EXERCISE 2.11. Consider the reaction N₂O₄ ⇌ 2NO₂, which occurs in the gas phase. Start initially with 1 mol of N₂O₄ and no NO₂. Assume that the reaction occurs at temperature T and pressure P. Use ideal gas equations for the chemical potential. (a) Compute and plot the Gibbs free energy, $G(T,P,\xi)$, as a function of the degree of reaction, $\xi$, for (i) P = 1 atm and T = 298 K and (ii) P = 1 atm and T = 596 K. (b) Compute and plot the affinity, $A(T,P,\xi)$, as a function of the degree of reaction, $\xi$, for (i) P = 1 atm and T = 298 K and (ii) P = 1 atm and T = 596 K. (c) What is the degree of reaction, $\xi$, at chemical equilibrium for P = 1 atm and temperature T = 298 K? How many moles of N₂O₄ and NO₂ are present at equilibrium? (d) If initially the volume is $V_0$, what is the volume at equilibrium for P = 1 atm and T = 298 K? (e) What is the heat of reaction for P = 1 atm and T = 298 K?

Answer: The number of moles can be written $n_{\mathrm{N_2O_4}} = 1 - \xi$ and $n_{\mathrm{NO_2}} = 2\xi$.
The mole fractions are

$$x_{\mathrm{N_2O_4}} = \frac{1-\xi}{1+\xi}, \qquad x_{\mathrm{NO_2}} = \frac{2\xi}{1+\xi}. \qquad (1)$$

(a) The Gibbs free energy is

$$G(T,P,\xi) = \sum_i n_i\left\{\mu_i^0(P_0,T_0) - RT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P}\right)\right]\right\} + RT\ln\left[\frac{(1-\xi)^{(1-\xi)}(2\xi)^{2\xi}}{(1+\xi)^{(1+\xi)}}\right], \qquad (2)$$

where $i = (\mathrm{N_2O_4}, \mathrm{NO_2})$. From Table 2.2, for $P_0 = 1\,\mathrm{atm}$ and $T_0 = 298\,\mathrm{K}$, $\mu^0_{\mathrm{N_2O_4}} = 23.49\,\mathrm{kcal/mol}$ and $\mu^0_{\mathrm{NO_2}} = 12.39\,\mathrm{kcal/mol}$. Plots of $G(T,P,\xi)$ are given below. Chemical equilibrium occurs for the value of $\xi$ at the minimum of the curve.

(b) The affinity is

$$A(T,P,\xi) = \sum_i\nu_i\left\{\mu_i^0(P_0,T_0) - RT\ln\left[\left(\frac{T}{T_0}\right)^{5/2}\left(\frac{P_0}{P}\right)\right]\right\} + RT\ln\left[\frac{4\xi^2}{(1-\xi)(1+\xi)}\right]. \qquad (3)$$

Plots of $A(T,P,\xi)$ are given in the figures.

(c) Chemical equilibrium occurs for the value of $\xi$ at which A = 0. From the plot of the affinity, the equilibrium value of the degree of reaction is $\xi_{\mathrm{eq}} \approx 0.166$. Thus, at equilibrium $n_{\mathrm{N_2O_4}} = 0.834$ and $n_{\mathrm{NO_2}} = 0.332$. At equilibrium the mole fractions are $x_{\mathrm{N_2O_4}} = (0.834/1.166) = 0.715$ and $x_{\mathrm{NO_2}} = (0.332/1.166) = 0.285$.

(d) Initially there is $n_{\mathrm{N_2O_4}} = 1\,\mathrm{mol}$ of N₂O₄ and $n_{\mathrm{NO_2}} = 0\,\mathrm{mol}$ of NO₂, and a total of 1 mol of gas present. At chemical equilibrium, there are $n_{\mathrm{N_2O_4}} = 0.834\,\mathrm{mol}$ of N₂O₄ and $n_{\mathrm{NO_2}} = 0.332\,\mathrm{mol}$ of NO₂, and a total of 1.166 mol of gas present. The reaction occurs at temperature $T_0$ and pressure $P_0$. Therefore, the initial volume is $V_0 = (1)RT_0/P_0$ and the final volume is $V = (1.166)RT_0/P_0 = 1.166\,V_0$.

(e) The heat of reaction for the reaction occurring at temperature $T_0$ and pressure $P_0$ is

$$\left(\frac{\partial H}{\partial\xi}\right)^0_{P,T} = \frac{5}{2}RT_0 - RT_0\ln\left[\frac{(0.285)^2}{0.715}\right] = 4.68\,RT_0. \qquad (4)$$
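The numbers quoted in Exercise 2.11 follow from a few lines of arithmetic. The Python sketch below solves $A(T_0,P_0,\xi) = 0$, using Eq. (3) of the exercise and the $\mu^0$ values of Table 2.2, and then evaluates the equilibrium mole fractions, the volume ratio, and the heat of reaction; SciPy's brentq root finder is used simply for convenience.

```python
import numpy as np
from scipy.optimize import brentq

R_kcal = 1.9872e-3                 # gas constant in kcal/(mol K)
T0 = 298.0
mu0_N2O4, mu0_NO2 = 23.49, 12.39   # kcal/mol (Table 2.2)
dmu0 = 2*mu0_NO2 - mu0_N2O4        # sum_j nu_j mu_j^0 for N2O4 -> 2 NO2

def affinity(xi):
    # Eq. (3) of the exercise at T = T0 and P = P0 (the log prefactors vanish):
    # A = dmu0 + R T0 ln[ 4 xi^2 / ((1 - xi)(1 + xi)) ]
    return dmu0 + R_kcal*T0*np.log(4*xi**2/((1 - xi)*(1 + xi)))

xi_eq = brentq(affinity, 1e-6, 1 - 1e-6)
x_N2O4 = (1 - xi_eq)/(1 + xi_eq)
x_NO2 = 2*xi_eq/(1 + xi_eq)

# Heat of reaction at equilibrium, Eq. (4): (dH/dxi)^0 = (5/2) R T0 - R T0 ln(x_NO2^2 / x_N2O4).
heat_in_RT0 = 2.5 - np.log(x_NO2**2/x_N2O4)

print(f"xi_eq = {xi_eq:.3f}")                         # ~ 0.166
print(f"x_N2O4 = {x_N2O4:.3f}, x_NO2 = {x_NO2:.3f}")  # ~ 0.715, 0.285
print(f"V/V0 = {1 + xi_eq:.3f}")                      # ~ 1.166
print(f"(dH/dxi)^0 = {heat_in_RT0:.2f} R T0")         # ~ 4.68 R T0
```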
