
Chemistry 351 and 352

Physical Chemistry I and II

Darin J. Ulness

Fall 2006 – 2007


Contents

I Basic Quantum Mechanics 15

1 Quantum Theory 16
1.1 The “Fall” of Classical Physics . . . . . . . . . . . . . . . . . . . . 16
1.2 Bohr’s Atomic Theory . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.1 First Attempts at the Structure of the Atom . . . . . . . . 17

2 The Postulates of Quantum Mechanics 22


2.1 Postulate I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.2 How to normalize a wavefunction . . . . . . . . . . . . . . . . . . 23


2.3 Postulates II and III . . . . . . . . . . . . . . . . . . . . . . . . . 24

3 The Setup of a Quantum Mechanical Problem 27


3.1 The Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 The Quantum Mechanical Problem . . . . . . . . . . . . . . . . . 27
3.3 The Average Value Theorem . . . . . . . . . . . . . . . . . . . . . 29
3.4 The Heisenberg Uncertainty Principle . . . . . . . . . . . . . . . . 30

4 Particle in a Box 31
4.1 The 1D Particle in a Box Problem . . . . . . . . . . . . . . . . . . 31
4.2 Implications of the Particle in a Box problem . . . . . . . . . . . 34

5 The Harmonic Oscillator 38


5.1 Interesting Aspects of the Quantum Harmonic Oscillator . . . . . 40

5.2 Spectroscopy (An Introduction) . . . . . . . . . . . . . . . . . . . 42

II Quantum Mechanics of Atoms and Molecules 45

6 Hydrogenic Systems 46
6.1 Hydrogenic systems . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.2 Discussion of the Wavefunctions . . . . . . . . . . . . . . . . . . . 49
6.3 Spin of the electron . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.4 Summary: the Complete Hydrogenic Wavefunction . . . . . . . . 52

7 Multi-electron atoms 55
7.1 Two Electron Atoms: Helium . . . . . . . . . . . . . . . . . . . . 55
7.2 The Pauli Exclusion Principle . . . . . . . . . . . . . . . . . . . . 56
7.3 Many Electron Atoms . . . . . . . . . . . . . . . . . . . . . . . . 58
7.3.1 The Total Hamiltonian . . . . . . . . . . . . . . . . . . . . 59

8 Diatomic Molecules and the Born Oppenheimer Approximation 60


8.1 Molecular Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8.1.1 The Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . 61
8.1.2 The Born—Oppenheimer Approximation . . . . . . . . . . 62
8.2 Molecular Vibrations . . . . . . . . . . . . . . . . . . . . . . . . . 63
8.2.1 The Morse Oscillator . . . . . . . . . . . . . . . . . . . . . 64
8.2.2 Vibrational Spectroscopy . . . . . . . . . . . . . . . . . . . 66

9 Molecular Orbital Theory and Symmetry 67


9.1 Molecular Orbital Theory . . . . . . . . . . . . . . . . . . . . . . 67
9.2 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

10 Molecular Orbital Diagrams 72


10.1 LCAO–Linear Combinations of Atomic Orbitals . . . . . . . . . 72
10.1.1 Classification of Molecular Orbitals . . . . . . . . . . . . . 73
10.2 The Hydrogen Molecule . . . . . . . . . . . . . . . . . . . . . . . 74
10.3 Molecular Orbital Diagrams . . . . . . . . . . . . . . . . . . . . . 76
10.4 The Complete Molecular Hamiltonian and Wavefunction . . . . . 78

11 An Aside: Light Scattering–Why the Sky is Blue 79


11.1 The Classical Electrodynamics Treatment of Light Scattering . . . 79
11.2 The Blue Sky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
11.2.1 Sunsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
11.2.2 White Clouds . . . . . . . . . . . . . . . . . . . . . . . . . 83

III Statistical Mechanics and The Laws of Thermodynamics 88

12 Rudiments of Statistical Mechanics 89


12.1 Statistics and Entropy . . . . . . . . . . . . . . . . . . . . . . . . 89
12.1.1 Combinations and Permutations . . . . . . . . . . . . . . . 90
12.2 Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

13 The Boltzmann Distribution 94


13.1 Partition Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 96
13.1.1 Relation between the Q and W . . . . . . . . . . . . . . . 97
13.2 The Molecular Partition Function . . . . . . . . . . . . . . . . . . 99

14 Statistical Thermodynamics 103

15 Work 107
15.1 Properties of Partial Derivatives . . . . . . . . . . . . . . . . . . . 107
15.1.1 Summary of Relations . . . . . . . . . . . . . . . . . . . . 107
15.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
15.2.1 Types of Systems . . . . . . . . . . . . . . . . . . . . . . . 108
15.2.2 System Parameters . . . . . . . . . . . . . . . . . . . . . . 109
15.3 Work and Heat . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
15.3.1 Generalized Forces and Displacements . . . . . . . . . . . 110
15.3.2 P V work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

16 Maximum Work and Reversible changes 113


16.1 Maximal Work: Reversible versus Irreversible changes . . . . . . . 113
16.2 Heat Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
16.3 Equations of State . . . . . . . . . . . . . . . . . . . . . . . . . . 116
16.3.1 Example 1: The Ideal Gas Law . . . . . . . . . . . . . . . 116
16.3.2 Example 2: The van der Waals Equation of State . . . . . 117
16.3.3 Other Equations of State . . . . . . . . . . . . . . . . . . . 118

17 The Zeroth and First Laws of Thermodynamics 119


17.1 Temperature and the Zeroth Law of Thermodynamics . . . . . . . 119
17.2 The First Law of Thermodynamics . . . . . . . . . . . . . . . . . 121
17.2.1 The internal energy state function . . . . . . . . . . . . . . 121

18 The Second and Third Laws of Thermodynamics 124


18.1 Entropy and the Second Law of Thermodynamics . . . . . . . . . 124
18.1.1 Statements of the Second Law . . . . . . . . . . . . . . . . 127
18.2 The Third Law of Thermodynamics . . . . . . . . . . . . . . . . . 127
18.2.1 The Third Law . . . . . . . . . . . . . . . . . . . . . . . . 128
18.2.2 Debye’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . 129
18.3 Time's Arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

IV Basics of Thermodynamics 134

19 Auxiliary Functions and Maxwell Relations 135


19.1 The Other Important State Functions of Thermodynamics . . . . 135
19.2 Enthalpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
19.2.1 Heuristic definition: . . . . . . . . . . . . . . . . . . . . . . 137
19.3 Helmholtz Free Energy . . . . . . . . . . . . . . . . . . . . . . . . 137
19.3.1 Heuristic definition: . . . . . . . . . . . . . . . . . . . . . . 138
19.4 Gibbs Free Energy . . . . . . . . . . . . . . . . . . . . . . . . . . 138
19.4.1 Heuristic definition: . . . . . . . . . . . . . . . . . . . . . . 139
19.5 Heat Capacity of Gases . . . . . . . . . . . . . . . . . . . . . . . . 139
19.5.1 The Relationship Between CP and CV . . . . . . . . . . . 139
19.6 The Maxwell Relations . . . . . . . . . . . . . . . . . . . . . . . . 140

20 Chemical Potential 142


20.1 Spontaneity of processes . . . . . . . . . . . . . . . . . . . . . . . 142
20.2 Chemical potential . . . . . . . . . . . . . . . . . . . . . . . . . . 144
20.3 Activity and the Activity coefficient . . . . . . . . . . . . . . . . . 146
20.3.1 Reference States . . . . . . . . . . . . . . . . . . . . . . . 147
20.3.2 Activity and the Chemical Potential . . . . . . . . . . . . 148

21 Equilibrium 151
21.0.3 Equilibrium constants in terms of KC . . . . . . . . . . . . 153
21.0.4 The Partition Coefficient . . . . . . . . . . . . . . . . . . . 153

22 Chemical Reactions 156


22.1 Heats of Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . 156
22.1.1 Heats of Formation . . . . . . . . . . . . . . . . . . . . . . 157
22.1.2 Temperature dependence of the heat of reaction . . . . . . 157
22.2 Reversible reactions . . . . . . . . . . . . . . . . . . . . . . . . . . 158
22.3 Temperature Dependence of Ka . . . . . . . . . . . . . . . . . . . 159
22.4 Extent of Reaction . . . . . . . . . . . . . . . . . . . . . . . . . . 160

23 Ionics 161
23.1 Ionic Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
23.1.1 Ionic activity coefficients . . . . . . . . . . . . . . . . . . . 162
23.2 Theory of Electrolytic Solutions . . . . . . . . . . . . . . . . . . . 163
23.3 Ion Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
23.3.1 Ion mobility . . . . . . . . . . . . . . . . . . . . . . . . . . 165

24 Thermodynamics of Solvation 169


24.1 The Born Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
24.1.1 Free Energy of Solvation for the Born Model . . . . . . . . 173
24.1.2 Ion Transfer Between Phases . . . . . . . . . . . . . . . . . 174
24.1.3 Enthalpy and Entropy of Solvation . . . . . . . . . . . . . 174
24.2 Corrections to the Born Model . . . . . . . . . . . . . . . . . . . . 175

25 Key Equations for Exam 4 177

V Quantum Mechanics and Dynamics 180

26 Particle in a 3D Box 181


26.1 Particle in a Box . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
26.2 The 3D Particle in a Box Problem . . . . . . . . . . . . . . . . . . 183

27 Operators 187
27.1 Operator Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
27.2 Orthogonality, Completeness, and the Superposition Principle . . 191

28 Angular Momentum 192


28.1 Classical Theory of Angular Momentum . . . . . . . . . . . . . . 192
28.2 Quantum theory of Angular Momentum . . . . . . . . . . . . . . 193
28.3 Particle on a Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
28.4 General Theory of Angular Momentum . . . . . . . . . . . . . . . 195
28.5 Quantum Properties of Angular Momentum . . . . . . . . . . . . 199
28.5.1 The rigid rotor . . . . . . . . . . . . . . . . . . . . . . . . 200
29 Addition of Angular Momentum 201
29.1 Spin Angular Momentum . . . . . . . . . . . . . . . . . . . . . . . 201
29.2 Addition of Angular Momentum . . . . . . . . . . . . . . . . . . . 202
29.2.1 The Addition of Angular Momentum: General Theory . . 202
29.2.2 An Example: Two Electrons . . . . . . . . . . . . . . . . . 203
29.2.3 Term Symbols . . . . . . . . . . . . . . . . . . . . . . . . . 204
29.2.4 Spin Orbit Coupling . . . . . . . . . . . . . . . . . . . . . 205

30 Approximation Techniques 207


30.1 Perturbation Theory . . . . . . . . . . . . . . . . . . . . . . . . . 207
30.2 Variational method . . . . . . . . . . . . . . . . . . . . . . . . . . 209

31 The Two Level System and Quantum Dynamics 211


31.1 The Two Level System . . . . . . . . . . . . . . . . . . . . . . . . 211
31.2 Quantum Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 214

VI Symmetry and Spectroscopy 220

32 Symmetry and Group Theory 221


32.1 Symmetry Operators . . . . . . . . . . . . . . . . . . . . . . . . . 222
32.2 Mathematical Groups . . . . . . . . . . . . . . . . . . . . . . . . . 222
32.2.1 Example: The C2v Group . . . . . . . . . . . . . . . . . . 223
32.3 Symmetry of Functions . . . . . . . . . . . . . . . . . . . . . . . . 223
32.3.1 Direct Products . . . . . . . . . . . . . . . . . . . . . . . . 225
32.4 Symmetry Breaking and Crystal Field Splitting . . . . . . . . . . 225

33 Molecules and Symmetry 228


33.1 Molecular Vibrations . . . . . . . . . . . . . . . . . . . . . . . . . 228
33.1.1 Normal Modes . . . . . . . . . . . . . . . . . . . . . . . . 229
33.1.2 Normal Modes and Group Theory . . . . . . . . . . . . . . 229
34 Vibrational Spectroscopy and Group Theory 231
34.1 IR Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
34.2 Raman Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . 233

35 Molecular Rotations 235


35.1 Relaxing the rigid rotor . . . . . . . . . . . . . . . . . . . . . . . . 236
35.2 Rotational Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . 236
35.3 Rotation of Polyatomic Molecules . . . . . . . . . . . . . . . . . . 237

36 Electronic Spectroscopy of Molecules 240


36.1 The Structure of the Electronic State . . . . . . . . . . . . . . . . 240
36.1.1 Absorption Spectra . . . . . . . . . . . . . . . . . . . . . . 241
36.1.2 Emission Spectra . . . . . . . . . . . . . . . . . . . . . . . 241
36.1.3 Fluorescence Spectra . . . . . . . . . . . . . . . . . . . . . 242
36.2 Franck—Condon activity . . . . . . . . . . . . . . . . . . . . . . . 243
36.2.1 The Franck—Condon principle . . . . . . . . . . . . . . . . 243

37 Fourier Transforms 245


37.1 The Fourier transformation . . . . . . . . . . . . . . . . . . . . . 245

VII Kinetics and Gases 249

38 Physical Kinetics 250


38.1 kinetic theory of gases . . . . . . . . . . . . . . . . . . . . . . . . 250
38.2 Molecular Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . 252

39 The Rate Laws of Chemical Kinetics 254


39.1 Rate Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
39.2 Determination of Rate Laws . . . . . . . . . . . . . . . . . . . . . 258
39.2.1 Differential methods based on the rate law . . . . . . . . . 259
39.2.2 Integrated rate laws . . . . . . . . . . . . . . . . . . . . . . 259
40 Temperature and Chemical Kinetics 261
40.1 Temperature Effects on Rate Constants . . . . . . . . . . . . . . . 261
40.1.1 Temperature corrections to the Arrhenius parameters . . 262
40.2 Theory of Reaction Rates . . . . . . . . . . . . . . . . . . . . . . 262
40.3 Multistep Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . 265
40.4 Chain Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267

41 Gases and the Virial Series 269


41.1 Equations of State . . . . . . . . . . . . . . . . . . . . . . . . . . 269
41.2 The Virial Series . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
41.2.1 Relation to the van der Waals Equation of State . . . . . . 271
41.2.2 The Boyle Temperature . . . . . . . . . . . . . . . . . . . 272
41.2.3 The Virial Series in Pressure . . . . . . . . . . . . . . . . . 272
41.2.4 Estimation of Virial Coefficients . . . . . . . . . . . . . . . 273

42 Behavior of Gases 274


42.1 P, V and T behavior . . . . . . . . . . . . . . . . . . . . . . . . . 274
42.1.1 α and κT for an ideal gas . . . . . . . . . . . . . . . . . . . 275
42.1.2 α and κT for liquids and solids . . . . . . . . . . . . . . . . 275
42.2 Heat Capacity of Gases Revisited . . . . . . . . . . . . . . . . . . 276
42.2.1 The Relationship Between CP and CV . . . . . . . . . . . 276
42.3 Expansion of Gases . . . . . . . . . . . . . . . . . . . . . . . . . . 279
42.3.1 Isothermal and Adiabatic expansions . . . . . . . . . . . . 279
42.3.2 Heat capacity CV for adiabatic expansions . . . . . . . . . 280
42.3.3 When P is the more convenient variable . . . . . . . . . . 281
42.3.4 Joule expansion . . . . . . . . . . . . . . . . . . . . . . . . 282
42.3.5 Joule-Thomson expansion . . . . . . . . . . . . . . . . . . 283

43 Entropy of Gases 286


43.1 Calculation of Entropy . . . . . . . . . . . . . . . . . . . . . . . . 286
43.1.1 Entropy of Real Gases . . . . . . . . . . . . . . . . . . . . 288
VIII More Thermodynamics 292

44 Critical Phenomena 293


44.1 Critical Behavior of fluids . . . . . . . . . . . . . . . . . . . . . . 293
44.1.1 Gas Laws in the Critical Region . . . . . . . . . . . . . . . 294
44.1.2 Gas Constants from Critical Data . . . . . . . . . . . . . . 295
44.2 The Law of Corresponding States . . . . . . . . . . . . . . . . . . 296
44.3 Phase Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . 296
44.3.1 The chemical potential and T and P . . . . . . . . . . . . 297
44.3.2 The Clapeyron Equation . . . . . . . . . . . . . . . . . . . 298
44.3.3 Vapor Equilibrium and the Clausius-Clapeyron Equation . 298
44.4 Equilibria of condensed phases . . . . . . . . . . . . . . . . . . . . 299
44.5 Triple Point and Phase Diagrams . . . . . . . . . . . . . . . . . . 300

45 Transport Properties of Fluids 301


45.1 Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
45.2 Viscosity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
45.3 Thermal conductivity . . . . . . . . . . . . . . . . . . . . . . . . . 305
45.3.1 Thermal Conductivity of Gases and Liquids . . . . . . . . 306
45.3.2 Thermal Conductivity of Solids . . . . . . . . . . . . . . . 307

46 Solutions 308
46.1 Measures of Composition . . . . . . . . . . . . . . . . . . . . . . . 308
46.2 Partial Molar Quantities . . . . . . . . . . . . . . . . . . . . . . . 308
46.2.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
46.2.2 Partial Molar Volumes . . . . . . . . . . . . . . . . . . . . 310
46.3 Reference states for liquids . . . . . . . . . . . . . . . . . . . . . . 311
46.3.1 Activity (a brief review) . . . . . . . . . . . . . . . . . . . 311
46.3.2 Raoult’s Law . . . . . . . . . . . . . . . . . . . . . . . . . 312
46.3.3 Ideal Solutions (RL) . . . . . . . . . . . . . . . . . . . . . 314
46.3.4 Henry’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . 316
46.4 Colligative Properties . . . . . . . . . . . . . . . . . . . . . . . . . 318
46.4.1 Freezing Point Depression . . . . . . . . . . . . . . . . . . 318
46.4.2 Osmotic Pressure . . . . . . . . . . . . . . . . . . . . . . . 319

47 Entropy Production and Irreversible Thermodynamics 322


47.1 Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
47.2 The Second Law . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
47.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
47.3.1 Entropy Production due to Heat Flow . . . . . . . . . . . 326
47.3.2 Entropy Production due to Chemical Reactions . . . . . . 328
47.4 Thermodynamic Coupling . . . . . . . . . . . . . . . . . . . . . . 330
47.5 Echo Phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Chemistry 351: Physical
Chemistry I

Solved Problems

I make up most of the problems on the problem sets, so it might be helpful to you to see some of these problems worked out.

Even though there aren't many "book" problems assigned during the year, you can still learn a lot by working these and looking at their solutions in the solution manual.

Keep in mind this chapter provides some examples of how to solve problems for both physical chemistry I and physical chemistry II. Consequently, early in the course some of the examples might seem very intimidating. Simply skip those examples as you scan through this chapter.

Tips for solving problems

Working problem sets is the heart and soul of learning physical chemistry. The only way that you can be sure that you understand a concept is to be able to solve the problems associated with it.

This takes time and hard work.

But there are some things that you can do to help yourself with these problems.

Tips

1. Remember nobody cares if you solve any particular problem on the problem
set. They have all been solved before, so if you solve them you will not
become famous nor will you save the world. The only reason you work them
is to learn.

2. Budget your time so that you don’t have to work on an overwhelming number
of problems at a time. Try to whip-off a few on the same day that you get
the problem set. Then work on them consistently during the week. This
will make the problem sets much more efficient at helping you learn.

3. You can do the problem. I don't assign problems that you cannot do. If you
think you can't do the problem then maybe you need to try a different way of
thinking about it.

4. Part of the trouble is simply understanding what the problem is asking you
to do. There is a tendency to try to start solving the problem before fully
understanding the question.

• Read the question carefully


• Try to think about what topic(s) in lecture and in the notes the problem
is dealing with.
• Do not worry about not knowing how to solve it yet.
• Just identify the general ideas that you think you might need.
• Determine whether you need to approach the problem mathematically
or conceptually or both.
• If the question is long, try to identify subsections of it.

5. For problems that require a mathematical approach...

• Do not be afraid. Try to figure out what mathematical techniques you


need to express the solution to the problem.

• Do the math; either you will be able to do this or you won’t. It might
take some review on your part.
• Always check to see if the math makes sense when you are done.

6. For problems that require a conceptual approach...

• Make sure that the physical idea that you are using in your argument is
correct. If you are not sure, start with a related concept that is better
known by you.
• Look for self-consistency. Does your final answer jibe with what you
know?

Problems Dealing With Quantum Mechanics

Problem: What is the periodicity of the following functions

• f (x) = sin2 x

• f (x) = cos x

• f(x) = e−2ix

Solution: For the first function it is easiest to see the periodicity by writing the
function as f(x) = (sin x)(sin x). We know that this function repeats its zeros
whenever sin x = 0. This occurs at x = nπ, n = 0, ±1, ±2, . . ., so the periodicity
is π. The second function we should remember from trig as having a period of 2π.
Finally, for the last function it is best to use Euler's identity and write

e^(−2ix) = cos 2x − i sin 2x.   (1)

The real part of this function, cos 2x, has a period of π, as does the imaginary
part, sin 2x. Therefore the entire function has a period of π.

Problem: Which of the following functions are eigenfunctions of the momentum
operator, p̂x = −iℏ d/dx?

• ψ(x) = e^(ikx)

• ψ(x) = e^(−αx²)

• ψ(x) = cos kx

Solution: We need to determine if p̂x ψ(x) = λψ(x) where λ is a constant. If
this equation is true then the function is an eigenfunction with eigenvalue λ. For
the case of momentum all we need to do is take the derivative of each function,
multiply by −iℏ and check to see if the eigenvalue equation holds.

For the first function,

p̂x ψ(x) = −iℏ dψ(x)/dx = −iℏ d(e^(ikx))/dx = ℏk e^(ikx) = ℏk ψ(x),   (2)

so, yes, this function is an eigenfunction of the momentum operator.

For the second function,

p̂x ψ(x) = −iℏ dψ(x)/dx = −iℏ d(e^(−αx²))/dx = 2iℏαx e^(−αx²) = 2iℏαx ψ(x),   (3)

but the factor 2iℏαx is not a constant, so, no, this function is not an eigenfunction of the momentum operator.

For the last function,

p̂x ψ(x) = −iℏ dψ(x)/dx = −iℏ d(cos kx)/dx = iℏk sin kx,   (4)

which is not a constant multiple of cos kx, so, no, this function is not an eigenfunction of the momentum operator.
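If you want to check eigenvalue tests like these by computer, a short Python/sympy sketch does the same bookkeeping (sympy is my suggestion here; the notes themselves use Mathematica): apply p̂x and see whether the result is a constant multiple of the original function.

import sympy as sp

x = sp.symbols('x', real=True)
k, alpha, hbar = sp.symbols('k alpha hbar', positive=True)

# The three trial functions from the problem
trials = [sp.exp(sp.I*k*x), sp.exp(-alpha*x**2), sp.cos(k*x)]

for psi in trials:
    p_psi = -sp.I*hbar*sp.diff(psi, x)   # act with p_x = -i*hbar d/dx
    ratio = sp.simplify(p_psi/psi)       # eigenfunction iff this is x-independent
    print(psi, '->', ratio, '| eigenfunction:', not ratio.has(x))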

Problem: A quantum object is described by the wavefunction ψ(x) = e^(−αx²).
What is the probability of finding the object further than α away from the origin
(x = 0)?

Solution: First of all we do not know if this wavefunction is normalized, so we
should assume that it isn’t. We could normalize this wavefunction, but we won’t.
We are interested in finding the probability that the object is outside of the region
−α < x < α. To do this using an unnormalized wavefunction we must evaluate
P(|x| > α) = [ ∫ from −∞ to −α |ψ(x)|² dx + ∫ from α to ∞ |ψ(x)|² dx ] / ∫ from −∞ to ∞ |ψ(x)|² dx.   (5)

The first integral in the numerator gives the probability that the object is at a
position x < −α and the second integral in the numerator gives the probability
for x > α. The denominator accounts for the fact that the wavefunction is un-
normalized. The limits of the integral in the denominator represent all space for
the object. If you were working with a normalized wavefunction the denominator
would be equal to 1 and hence not needed. Plugging in the wavefunctions we have

P(|x| > α) = [ ∫ from −∞ to −α e^(−2αx²) dx + ∫ from α to ∞ e^(−2αx²) dx ] / ∫ from −∞ to ∞ e^(−2αx²) dx.   (6)

Mathematica can assist with these integrals to give the final answer of

P(|x| > α) = erfc(√2 α^(3/2)).   (7)
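The same integrals can be done with the free sympy library instead of Mathematica; this is just a sketch (not part of the original notes) that should reproduce Eq. (7).

import sympy as sp

x, a = sp.symbols('x alpha', positive=True)
prob_density = sp.exp(-2*a*x**2)                   # |psi(x)|^2, unnormalized

num = 2*sp.integrate(prob_density, (x, a, sp.oo))  # the two tails are equal
den = sp.integrate(prob_density, (x, -sp.oo, sp.oo))
print(sp.simplify(num/den))                        # should print erfc(sqrt(2)*alpha**(3/2))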

Problem: A quantum object is described by the wavefunction ψ(x) = e−γx over


the range 0 ≤ x < ∞. Normalize this wavefunction.

Solution: Following our general procedure from the notes if we have some unnor-
malized wavefunction, ψunnorm we know that this function must simply be some
constant N multiplied by the normalized version of this function:

ψ unnorm = Nψnorm (8)

We have shown generally that N is given by


N = √( ∫ over all space |ψunnorm(x)|² dx ).   (9)

Which for this case is

N = √( ∫ from 0 to ∞ |e^(−γx)|² dx ) = √( ∫ from 0 to ∞ e^(−2γx) dx ) = √(1/(2γ)).   (10)

So finally we get the normalized wavefunction by rearranging ψunnorm = Nψnorm:

ψnorm(x) = √(2γ) e^(−γx).   (11)

Problem: A quantum object is described by the wavefunction ψ(x) = e−γx over


the range 0 ≤ x < ∞. What is the average position of the object?

Solution: We need to work with the normalized wavefunction that we found in
the previous problem, ψ(x) = √(2γ) e^(−γx). Generally an average is calculated as

⟨ô⟩ = ∫ over all space ψ*(x) ô ψ(x) dx,   (12)

which in this case is

⟨x̂⟩ = ∫ from 0 to ∞ √(2γ) e^(−γx) x √(2γ) e^(−γx) dx = 2γ ∫ from 0 to ∞ x e^(−2γx) dx = 1/(2γ).   (13)

So on average you will find the object at x = 1/(2γ).
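A sympy sketch (again my own check, not part of the notes) covering this problem and the previous one: normalize ψ(x) = e^(−γx) on 0 ≤ x < ∞ and then compute ⟨x⟩.

import sympy as sp

x, g = sp.symbols('x gamma', positive=True)
psi_unnorm = sp.exp(-g*x)

N = sp.sqrt(sp.integrate(psi_unnorm**2, (x, 0, sp.oo)))   # N = 1/sqrt(2*gamma)
psi = psi_unnorm/N                                        # normalized wavefunction
print(sp.integrate(psi**2, (x, 0, sp.oo)))                # 1, as required
print(sp.integrate(psi*x*psi, (x, 0, sp.oo)))             # <x> = 1/(2*gamma)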

Problem: What is the probability of finding an electron in the 1s state of hydrogen


further than one Bohr radius away from the nucleus?

Solution: We need to evaluate

P(r > a0) = ∫ from 0 to 2π ∫ from 0 to π ∫ from a0 to ∞ |ψ1s|² r² sin θ dr dθ dφ.   (14)

Remember the extra r² sin θ is needed when integrating in spherical polar coordi-
nates. The normalized 1s wavefunction is

ψ1s = (1/√(πa0³)) e^(−r/a0).   (15)

We can do this integral by hand or have Mathematica help us to give

P(r > a0) = 5/e² = 0.677.   (16)

So, about 68% of the time the electron would be found at some distance greater
than one Bohr radius from the proton.
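Here is the corresponding sympy check of Eq. (14), a sketch I am adding (the notes use Mathematica for this integral).

import sympy as sp

r, theta, phi, a0 = sp.symbols('r theta phi a_0', positive=True)
psi_1s = sp.exp(-r/a0)/sp.sqrt(sp.pi*a0**3)        # normalized 1s wavefunction

P = sp.integrate(psi_1s**2 * r**2 * sp.sin(theta),
                 (r, a0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(sp.simplify(P), float(P))                    # 5*exp(-2), about 0.677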

Problem: A free particle in three dimensions is described by the Hamiltonian
Ĥ = −(ℏ²/2m)∇². Express the wavefunction (in Cartesian coordinates) as a product
state.

Solution: This problem appears hard at first since we are not studying three
dimensional systems, but all it is asking is to express the wavefunction, which is
a function of the three spatial dimensions, Ψ(x, y, z), as a product state. We know
that if the wavefunction is to be a product state then the Hamiltonian must be
made up of a sum of independent terms. To see this we write out the Laplacian
to get

Ĥ = −(ℏ²/2m) ( ∂²/∂x² + ∂²/∂y² + ∂²/∂z² ).   (17)

We see that indeed the Hamiltonian is a sum of a term that depends only on x,
a term depending only on y and a term that depends only on z. Therefore the
appropriate product state is

Ψ(x, y, z) = ψ(x)ψ(y)ψ(z).   (18)

Problem: Expand the Morse potential in a Taylor’s series about Req . Verify that
the coefficient for the linear term is zero. What is the force constant associated
with the Morse potential?

Solution: The Morse potential is

V(R) = De [1 − e^(−β(R−Req))]².   (19)

The Taylor series about Req for this function is

V(R) = V(R)|Req + dV/dR|Req (R − Req) + (1/2!) d²V/dR²|Req (R − Req)² + ··· ,   (20)

where the constant term and the linear term both vanish and the quadratic coefficient
is (1/2!) d²V/dR²|Req = β²De. So, yes, the coefficient of the linear term (the term
involving (R − Req) to the first power) is zero. This will always be true when you
perform a Taylor series expansion about a minimum (or maximum). The force
constant is k = d²V/dR²|Req, so that V ≈ (1/2)k(R − Req)² near the minimum; in
this case k = 2β²De.
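A short sympy sketch (my addition) confirms the expansion; writing u = R − Req for the displacement:

import sympy as sp

u = sp.symbols('u', real=True)                # u = R - Req
De, beta = sp.symbols('D_e beta', positive=True)
V = De*(1 - sp.exp(-beta*u))**2               # Morse potential in terms of u

print(sp.series(V, u, 0, 3))                  # D_e*beta**2*u**2 + O(u**3): no linear term
print(sp.diff(V, u, 2).subs(u, 0))            # k = V''(Req) = 2*D_e*beta**2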

Problem: Without performing any calculations, compare hRi as a function of


the vibrational quantum number for a diatomic modelled as a harmonic oscillator
versus a Morse oscillator.

Solution: This problem requires that we think qualitatively about the wavefunc-
tions and the potentials for the harmonic oscillator and the Morse oscillator. The
potential for the harmonic oscillator is described by a parabola centered about the
equilibrium bond length. Hence, no matter what the vibrational quantum number is,
there is just as much of the wavefunction on either side of equilibrium, thus ⟨R⟩ = Req
for any quantum number. The Morse potential does not have this symmetry. It
is steeper on the "short" side of equilibrium and softer on the "long" side of equi-
librium, and this "softness" increases with increasing quantum number. Therefore,
without performing any calculations, we can at least say that ⟨R⟩ increases as the
quantum number increases.

Problems Dealing With Statistical Mechanics and Thermodynamics

Problem: A vial containing 10²⁰ benzene molecules is at 300 K. How many mole-
cules are in the first excited state of the 'ring breathing' mode (992 cm⁻¹)? How
many are in the first excited state of the symmetric C-H vibrational mode (3063 cm⁻¹)?

Solution: This is a problem that deals with the Boltzmann distribution (here
208 cm⁻¹ is kT at 300 K expressed in wavenumbers). So,

N(v=1, ring breathing) = 2 sinh(992/(2×208)) × e^(−3×992/(2×208)) × 10²⁰ = 8.41 × 10¹⁷   (21)

and

N(v=1, C-H) = 2 sinh(3063/(2×208)) × e^(−3×3063/(2×208)) × 10²⁰ = 4.02 × 10¹³.   (22)

We see that about (8.41×10¹⁷/10²⁰) × 100% = 0.841% of the benzene molecules are in the
first vibrational excited state for the ring breathing mode and (4.02×10¹³/10²⁰) × 100% =
0.0000402% of the benzene molecules are in the first excited state for the C-H
stretching mode.
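These numbers are easy to reproduce with a few lines of Python (a sketch, not part of the notes); kT at 300 K is taken as 208 cm⁻¹, the rounded value used in the worked numbers above.

import math

N_total = 1e20
kT = 208.0          # cm^-1 at 300 K (rounded; a more precise value is 208.5 cm^-1)

def n_v1(nu):
    """Number of molecules in v = 1 for a harmonic mode of wavenumber nu (cm^-1)."""
    x = nu/kT
    return N_total*2.0*math.sinh(x/2.0)*math.exp(-1.5*x)

for label, nu in [('ring breathing, 992 cm^-1', 992.0),
                  ('C-H stretch, 3063 cm^-1', 3063.0)]:
    n = n_v1(nu)
    print(f'{label}: N(v=1) = {n:.2e} ({100*n/N_total:.2g}% of the molecules)')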

Problem: Consider a linear chain of N atoms. Each of the atoms can be in one
of three states A, B or C, except that an atom in state A can not be adjacent to
an atom in state C. Find the entropy per atom for this system as N → ∞. To
solve this problem it is useful to define the set of three dimensional column vectors
V (j) such that the three elements are the total number of allowed configurations of
a j-atom chain having the j th atom in state A, B or C. For example,
V(1) = [1, 1, 1]ᵀ,   V(2) = [2, 3, 2]ᵀ,   V(3) = [5, 7, 5]ᵀ,   ··· .   (23)

The V (j+1) can be found from the V (j) vector using the matrix equation,

V (j+1) = MV (j) , (24)

where for this example

M = [[1, 1, 0], [1, 1, 1], [0, 1, 1]].   (25)

The matrix M is the so-called transfer matrix for this system. It can be shown
that the number of configurations is W = Tr[M^N]. Now for large N, Tr[M^N] ≈ λmax^N,
where λmax is the largest eigenvalue of M. So

W = lim as N→∞ of λmax^N.   (26)

1. Use M to find V(4).

2. Verify V(3) explicitly by drawing all the allowed 3-atom configurations.
3. Verify W = Tr[M^N] for N = 1 and N = 2.
4. Use Boltzmann's equation to find the entropy per atom for this chain
as N goes to infinity.

Solution: For part (a) we simply use the transfer matrix as directed in the
problem (we are given V(3)):

V(4) = M V(3) = [[1, 1, 0], [1, 1, 1], [0, 1, 1]] [5, 7, 5]ᵀ = [12, 17, 12]ᵀ.
For part (b) we need to list all states for the case of N = 3 and verify that we get
the same result as calculated using the transfer matrix. Remembering that V(3)
gives us the number of sequences that end in a given state, we should organize our
list in the same manner:

States ending in A: AAA, ABA, BAA, BBA, CBA (5 states)
States ending in B: AAB, ABB, BAB, BBB, BCB, CBB, CCB (7 states)
States ending in C: ABC, BBC, BCC, CBC, CCC (5 states)

States like AAC are not allowed because A and C are neighbors.
For part (c) we evaluate W = Tr[M^N] for N = 1 and 2. For N = 1, W =
Tr[M] = 3. This corresponds to the three distinguishable microstates A, B, and
C. For N = 2,

W = Tr[M²] = Tr of [[2, 2, 1], [2, 3, 2], [1, 2, 2]] = 2 + 3 + 2 = 7.   (27)

This corresponds to the seven distinguishable microstates AA, AB, BA, BB, BC,
CB and CC (remember C and A cannot be neighbors).
For part (d) we use

S/N = (k/N) ln W = lim as N→∞ of (k/N) ln(λmax^N) = lim as N→∞ of (k/N) N ln λmax = k ln λmax.   (28)

So, we simply need to find the maximum eigenvalue of the transfer matrix. Using
Mathematica we find λmax = 1 + √2. Therefore the limiting entropy per atom
is

S/N = k ln(1 + √2).   (29)
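A numpy sketch (my own check, not part of the problem set) verifies each part of this solution numerically.

import numpy as np

M = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)      # transfer matrix

print(M @ np.array([5, 7, 5]))              # V(4) = [12, 17, 12]
print(np.trace(M), np.trace(M @ M))         # W for N = 1 and N = 2: 3, 7

lam_max = max(np.linalg.eigvals(M).real)
print(lam_max, 1 + np.sqrt(2))              # both are about 2.4142
# Entropy per atom as N -> infinity: S/N = k*ln(lam_max) = k*ln(1 + sqrt(2))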

Problem: Using the classical theory of light scattering, calculate the positions of
the Rayleigh, Stokes and anti-Stokes spectral lines for benzene. Assume benzene
has only two active modes (992cm −1 and 3063cm −1 ) and assume the Laser light
used to do the scattering is at 20000cm −1 (this is 500nm–green light).

Solution: Since there are two vibrational modes we expect two Stokes lines to
the red of 20000cm−1 , one at 20000cm−1 − 992cm−1 = 19008cm−1 and one at
20000cm−1 − 3063cm−1 = 16937cm−1 . Likewise we expect two anti-Stokes lines,
one at 20000cm−1 + 992cm−1 = 20992cm−1 and one at 20000cm−1 + 3063cm−1 =
23063cm−1 . There is only one Rayleigh line and it is at the same frequency as the
input laser beam which, in this case, is 20000cm−1 .

Problem: A simple model for a crystal is a “gas” of harmonic oscillators. De-
termine A, S, and U from the partition function for this model.

Solution: For this model the crystal is modelled as a collection of harmonic
oscillators, so we need the partition function for the harmonic oscillator:

Qcrystal = (qHO)^N = ( 1/(2 sinh(βℏω/2)) )^N.   (30)

From our formulas for statistical thermodynamics,

A = −kT ln Qcrystal = +NkT ln(2 sinh(βℏω/2)),   (31)

where we used properties of logs to pull the N out front and move the sinh term
from the denominator to the numerator,

S = −kβ ∂(ln Qcrystal)/∂β + k ln Qcrystal   (32)
  = (Nkβℏω/2) coth(βℏω/2) − Nk ln(2 sinh(βℏω/2)),

and

U = −∂(ln Qcrystal)/∂β = (Nℏω/2) coth(βℏω/2).   (33)
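The three derivatives are easy to check with sympy (a sketch I am adding, with β = 1/kT so that A = −(ln Q)/β, U = −∂(ln Q)/∂β and S = kβ(U − A)).

import sympy as sp

beta, hbar, omega, N, k = sp.symbols('beta hbar omega N k', positive=True)

lnQ = -N*sp.log(2*sp.sinh(beta*hbar*omega/2))    # ln Q for the oscillator "gas"

A = -lnQ/beta                                    # Helmholtz free energy (kT = 1/beta)
U = -sp.diff(lnQ, beta)                          # internal energy
S = k*beta*(U - A)                               # entropy, S = (U - A)/T

print(sp.simplify(A))   # N*log(2*sinh(beta*hbar*omega/2))/beta
print(sp.simplify(U))   # N*hbar*omega*coth(...)/2 (sympy may leave it as cosh/sinh)
print(sp.simplify(S))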
Problem: Express the equation of state for internal energy for a Berthelot gas.

Solution: The equation representing a Berthelot gas is

P = nRT/(V − nb) − n²a/(TV²).   (34)
We are interested in an equation of state for U(T, V). Writing out the total
derivative of U(T, V) we get

dU = (∂U/∂T)V dT + (∂U/∂V)T dV.   (35)

Now (∂U/∂T)V is just the heat capacity, CV, but (∂U/∂V)T is nothing convenient, so we must
proceed. We employ the "useful relation"

(∂U/∂V)T = T (∂P/∂T)V − P   (36)

to eliminate U in favor of P so that we can use the equation of state for a Berthelot
gas. One obtains

T (∂P/∂T)V − P = T [ nR/(V − nb) + n²a/(T²V²) ] − nRT/(V − nb) + n²a/(TV²) = 2n²a/(TV²).   (37)

Hence the equation of state for internal energy of a Berthelot gas is

dU = CV dT + [ 2n²a/(TV²) ] dV.   (38)
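This kind of manipulation is also easy to verify symbolically; the sketch below (my addition, not part of the notes) checks Eq. (37) with sympy.

import sympy as sp

n, R, T, V, a, b = sp.symbols('n R T V a b', positive=True)
P = n*R*T/(V - n*b) - n**2*a/(T*V**2)       # Berthelot equation of state

dUdV_T = sp.simplify(T*sp.diff(P, T) - P)   # (dU/dV)_T = T*(dP/dT)_V - P
print(dUdV_T)                               # 2*a*n**2/(T*V**2)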

Problem: Use the identities for partial derivatives to eliminate the (∂P/∂T)V factor
in

Cp = Cv + T (∂V/∂T)P (∂P/∂T)V   (39)

so that all derivatives are at constant pressure or temperature.

Solution: Here we either remember an identity or turn to our handout of partial
derivative identities to employ the cyclic rule to (∂P/∂T)V:

(∂P/∂T)V = −(∂P/∂V)T (∂V/∂T)P.   (40)

This eliminates the constant V term and so,

Cp = Cv − T (∂V/∂T)P² (∂P/∂V)T.   (41)
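As a quick sanity check of Eq. (41) (this check is mine, not part of the notes), substituting the ideal gas law should give the familiar Cp − Cv = nR.

import sympy as sp

n, R, T, P, V = sp.symbols('n R T P V', positive=True)

V_ideal = n*R*T/P                                   # V(T, P) for an ideal gas
dVdT_P = sp.diff(V_ideal, T)                        # (dV/dT)_P = nR/P
dPdV_T = sp.diff(n*R*T/V, V).subs(V, V_ideal)       # (dP/dV)_T evaluated at V = nRT/P

print(sp.simplify(-T*dVdT_P**2*dPdV_T))             # n*R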

Part I

Basic Quantum Mechanics

1. Quantum Theory

The goal of science is unification.

• Many phenomena described by minimal and general concepts.

1.1. The “Fall” of Classical Physics

A good theory:

• explain known experimental results

• self consistent

• predictive

• minimal number of postulates

Around the turn of the century, experiments were being performed in which the re-
sults defied explanation by means of the current understanding of physics. Among
these experiments were

1. The photoelectric effect

2. Low temperature heat capacity

3. Atomic spectral lines

4. Black body radiation and the ultraviolet catastrophe

5. The two slit experiment

6. The Stern-Gerlach experiment

∗ ∗ See Handouts ∗ ∗

1.2. Bohr’s Atomic Theory

1.2.1. First Attempts at the Structure of the Atom

The “solar system” model.

• The electron orbits the nucleus with the attractive coulomb force balanced
by the repulsive centrifugal force.

Flaws of the solar system model



• Newton: OK

• Maxwell: problem

— As the electron orbits the nucleus, the atom acts as an oscillating dipole.
— The classical theory of electromagnetism states that oscillating dipoles
emit radiation and thereby lose energy.
— The system is not stable and the electron spirals into the nucleus. The
atom collapses!

Bohr’s model: Niels Bohr (1885—1962)

• Atoms don’t collapse =⇒ what are the consequences

Experimental clues

• Atomic gases have discrete spectral lines.

• If the orbital radius was continuous the gas would have a continuous spec-
trum.

• Therefore atomic orbitals must be quantized.


r = 4πε0 N² ℏ² / (Z me e²),   (1.1)

where Z is the atomic number, me and e are the mass and charge of the
electron respectively, and ε0 is the permittivity of free space. N is a positive
integer called the quantum number. ℏ = h/2π is Planck's constant
divided by 2π.

The constant quantity 4πε0ℏ²/(me e²) appears often and is given the special symbol
a0 ≡ 4πε0ℏ²/(me e²) = 0.52918 Å and is called the Bohr radius.

The total energy of the Bohr atom is related to its quantum number
EN = −Z² (e²/2a0) (1/N²).   (1.2)
Tests of the Bohr atom

• Ionization energy of Hydrogen atoms

— The ionization energy for Hydrogen atoms (Z = 1) is the minimum
energy required to completely remove an electron from its ground state,
i.e., N = 1 → N = ∞:

Eionize = E∞ − E1 = −Z²(e²/2a0)(1/∞² − 1/1²) = e²/2a0.   (1.3)

— Eionize = e²/2a0 = 13.606 eV = 109,667 cm⁻¹ = R. R is called the Rydberg
constant.
— Eionize experimentally observed from spectroscopy is 13.605 eV (very
good agreement).

• Spectroscopic lines from Hydrogen represent the difference in energy between


the quantum states

— Bohr theory gives the energy differences

Ej − Ek = −(e²/2a0)(1/Nj² − 1/Nk²) = R (1/Nk² − 1/Nj²)   (1.4)

Initial state Nk   Final states Nj   Series Name

1                  2, 3, 4, ···      Lyman
2                  3, 4, 5, ···      Balmer
3                  4, 5, 6, ···      Paschen
4                  5, 6, 7, ···      Brackett
5                  6, 7, 8, ···      Pfund

— Since the orbitals are quantized, the atom may only change its orbital
radius by discrete amounts.
— Doing this results in the emission or absorption of a photon with energy

ν̃ = ΔE/(hc)   (1.5)
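A few lines of Python (added here as an illustration, not part of the original notes) generate the first few lines of each series from Eq. (1.4) using R = 109,667 cm⁻¹:

R = 109667.0   # Rydberg constant in cm^-1

series = {'Lyman': 1, 'Balmer': 2, 'Paschen': 3, 'Brackett': 4, 'Pfund': 5}
for name, Nk in series.items():
    # wavenumbers of the first three lines: R*(1/Nk^2 - 1/Nj^2), Nj = Nk+1, Nk+2, Nk+3
    lines = [R*(1.0/Nk**2 - 1.0/Nj**2) for Nj in range(Nk + 1, Nk + 4)]
    print(name, [round(v, 1) for v in lines], 'cm^-1')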

Failure of the Bohr model

• No fine structure predicted (electron-electron coupling)

• No hyperfine structure predicted (electron-nucleus coupling)

• No Zeeman effect predicted (response of spectrum to magnetic field)

• Spin is not included in theory

The Bohr quantization idea points to a wavelike behavior for the electron.

The wave must satisfy periodic boundary conditions much like a vibrating ring

∗ ∗ ∗ See Fig. 11.9 Laidler&Meiser ∗ ∗∗

The wave must be continuous and single valued

Particles have wave-like characteristics

The Bohr atom was an important step towards the formulation of quantum theory

• Erwin Schrödinger (1887—1961): Wave mechanics

• Werner Heisenberg (1901—1976): Matrix mechanics

• Paul Dirac (1902—1984): Abstract vector space approach

2. The Postulates of Quantum
Mechanics

2.1. Postulate I

Postulate I: The state of a system is defined by a wavefunction, ψ, which con-


tains all the information that can be known about the system.

We will normally take ψ to be a complex valued function of time and coordi-


nates: ψ(t, x, y, z) and, in fact, we will most often deal with time independent
“stationary” states ψ(x, y, z)

Note: In general the wavefunction need not be expressed as a function of coordinates.
It may, for example, be a function of momentum.

The wavefunction ψ represents a probability amplitude and is not directly observ-


able.

However the mod-square of the wavefunction, ψ∗ ψ = |ψ|2 , represents a probability


distribution which is directly observable.

That is, the probability of finding a particle which is described by ψ(x, y, z) at the
position between x and x+dx, y and y +dy and z and z +dz is |ψ(x, y, z)|2 dxdydz
(or |ψ(r, θ, φ)|2 r2 sin θdrdθdφ in spherical coordinates).

Properties of the wavefunction

• Single valuedness

• continuous and finite

• continuous and finite first derivative


• ∫ over all space |ψ(x, y, z)|² dxdydz < ∞

Normalization of the wavefunction

In order for |ψ(x, y, z)|2 to be exactly interpreted as a probability dis-


tribution, ψ(x, y, z) must be normalizable.
That is, ψunnorm = Nψnorm, where N = √( ∫ over all space |ψunnorm(x, y, z)|² dxdydz ).

This assures that ∫ over all space |ψnorm|² dxdydz = 1, as expected for a proba-
bility distribution.
From now on we will always normalize our wavefunctions.

2.2. How to normalize a wavefunction

If we have some unnormalized wavefunction, ψunnorm we know that this function


must simply be some constant N multiplied by the normalized version of this
function:
ψ unnorm = Nψnorm . (2.1)
Now, we take the mod-square of both sides and then integrate both sides of this
equation over all space
∫ |ψunnorm|² dxdydz = ∫ |Nψnorm|² dxdydz.   (2.2)

The N is just a constant so it can be pulled out of both the mod-square and the
integral,

∫ |ψunnorm|² dxdydz = N² ∫ |ψnorm|² dxdydz,   (2.3)

but

∫ |ψnorm|² dxdydz = 1   (2.4)

because that is the very definition of a normalized wavefunction. Thus wherever
we see ∫ |ψnorm|² dxdydz we can replace it with 1. So,

∫ |ψunnorm|² dxdydz = N² × 1 = N².   (2.5)

This gives us an expression for N. Taking the square root of both sides gives

N = √( ∫ over all space |ψunnorm(x, y, z)|² dxdydz ).   (2.6)

So finally we get the normalized wavefunction by rearranging ψunnorm = Nψnorm:

ψnorm = (1/N) ψunnorm.   (2.7)

Notice that nowhere did we ever specify what ψunnorm or ψnorm actually were;
therefore this is a general procedure that will work for any wavefunction.

To find the probability for the particle to be in a finite region of space we simply
evaluate (here a 1D case)

P(x1 < x < x2) = ∫ from x1 to x2 |ψ(x)|² dx / ∫ from −∞ to ∞ |ψ(x)|² dx,

which, if ψ(x) is normalized, reduces to P(x1 < x < x2) = ∫ from x1 to x2 |ψ(x)|² dx.   (2.8)

2.3. Postulates II and III

Postulate II: Every physical observable is represented by a linear (Hermitian)


operator.

An operator takes a function and turns it into another function

Ôf (x) = g(x) (2.9)

This is just like how a function takes a number and turns it into another number.

So in quantum mechanics operators act on the wavefunction to produce a new


wavefunction

The two most important operators as far as we are concerned are

• x̂ = x

• p̂x = −iℏ ∂/∂x

and of course the analogous operators for the other coordinates (y, z) and coordi-
nate systems (spherical, cylindrical, etc.).

Nearly all operators we will need are algebraic combinations of the above.

Postulate III: The measurement of a physical observable will give a result that
is one of the eigenvalues of the corresponding operator.

There is a special operator equation called the eigenvalue equation which is

Ôf (x) = λf (x) (2.10)

where λ is just a number.

For a given operator only a special set of function satisfy this equation. These
functions are called eigenfunctions.

The number that goes with each function is called the eigenvalue.

So solution of the eigenvalue equation gives a set of eigenfunctions and a set of


eigenvalues.

Example
Let Ô in the eigenvalue equation be the operator that takes the derivative: Ô = d̂ = d/dx.

So we want a solution to

d̂ f(x) = λf(x)   ⟹   df(x)/dx = λf(x)   (2.11)
So, we ask ourselves what function is proportional to its own derivative? ⇒
f (x) = eλx .

So the eigenfunctions are the set of functions f (x) = eλx and the eigenvalues are
the numbers λ

3. The Setup of a Quantum
Mechanical Problem

3.1. The Hamiltonian

The most important physical observable is that of the total energy E.

The operator associated with the total energy is called the Hamiltonian operator
(or simply the Hamiltonian) and is given the symbol Ĥ.

The eigenvalue equation for the Hamiltonian is

Ĥψ = Eψ. (3.1)

This equation is the (time independent) Schrödinger equation.

This equation is the most important equation of the course and we will use it many
times throughout our discussion of quantum mechanics and statistical mechanics.

3.2. The Quantum Mechanical Problem

Nearly every problem one is faced with in elementary quantum mechanics is han-
dled by the same procedure as given in the following steps.

1. Define the classical Hamiltonian for the system.

• The total energy for a classical system is

Ecl = T + V, (3.2)

where T is the kinetic energy and V is the potential energy.


• The kinetic energy is always of the form
T = (1/2m)(px² + py² + pz²)   (3.3)
• The potential energy is almost always a function of coordinates only

V = V (x, y, z) (3.4)

• Note: Some quantum systems don’t have classical analogs so the Hamil-
tonian operator must be hypothesized.

2. Use Postulate II to replace the classical variables, x, px etc., with their


appropriate operators. Thus,
T̂ = −(ℏ²/2m)∇̂² = −(ℏ²/2m)∇²,   (3.5)

where ∇² ≡ ∂²/∂x² + ∂²/∂y² + ∂²/∂z², and

V̂ = V (x̂, ŷ, ẑ) = V (x, y, z). (3.6)

So,

Ĥ = T̂ + V̂ = −(ℏ²/2m)∇² + V(x, y, z).   (3.7)

3. Solve the Schrödinger equation, Ĥψ = Eψ, which is now a second order
differential equation of the form

[ −(ℏ²/2m)∇² + V(x, y, z) ] ψ = Eψ   ⟹   −(ℏ²/2m)∇²ψ + (V(x, y, z) − E)ψ = 0   (3.8)

• Note: It is solely the form of V (x, y, z) which determines whether this
is easy or hard to do.
• For one-dimensional problems

−~2 d2
ψ + (V (x) − E) ψ = 0 (3.9)
2m dx2

3.3. The Average Value Theorem

Postulate III implies that if ψ is an eigenfunction of a particular operator rep-


resenting a physical observable, then all measurements of that physical property
will yield the associated eigenvalue.

However, If ψ is not an eigenfunction of a particular operator, then all measure-


ments of that physical property will still yield an eigenvalue, but we cannot predict
for certain which one.

We can, however, give an expectation, or average, value for the measurement.


This is given by

⟨α̂⟩ = ∫ over all space ψ* α̂ ψ dxdydz   (3.10)

For example,

⟨x̂⟩ = ∫ ψ* x̂ ψ dxdydz = ∫ x |ψ|² dxdydz   (3.11)

and

⟨p̂x⟩ = ∫ ψ* p̂x ψ dxdydz = −iℏ ∫ ψ* (∂ψ/∂x) dxdydz   (3.12)

3.4. The Heisenberg Uncertainty Principle

In quantum mechanics certain pairs of variables cannot, even in principle, be
simultaneously known to arbitrary precision. Such variables are called comple-
mentary.

This idea is the Heisenberg uncertainty principle and is of profound im-


portance.

The general statement of the Heisenberg uncertainty principle is

δα δβ ≥ (1/2) |⟨[α̂, β̂]⟩|,   (3.13)

where the notation [α̂, β̂] means the commutator of α̂ and β̂. The commutator is
defined as

[α̂, β̂] ≡ α̂β̂ − β̂α̂.   (3.14)

The most important example of complementary variables is position and momen-
tum. We see

δpx δx ≥ (1/2)|⟨[p̂x, x̂]⟩| = (1/2)|⟨p̂x x̂ − x̂ p̂x⟩|
       = (1/2)| ∫ ψ* (ℏ/i)( ∂/∂x x − x ∂/∂x ) ψ dx |
       = |ℏ/(2i)| = ℏ/2.   (3.15)

So, at the very best we can only hope to simultaneously know position and momen-
tum such that the product of the uncertainty in each is ℏ/2. (n.b., δpx δy = 0; we can
know, for example, the y position and the x momentum to arbitrary precision.)
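The commutator itself can be checked symbolically; this is a small sympy sketch (my addition, not part of the notes) applying p̂x x̂ − x̂ p̂x to an arbitrary test function f(x).

import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x)

def p(g):                        # momentum operator p_x = -i*hbar d/dx
    return -sp.I*hbar*sp.diff(g, x)

commutator = p(x*f) - x*p(f)     # [p_x, x] acting on f(x)
print(sp.simplify(commutator))   # -I*hbar*f(x), so |<[p_x, x]>| = hbar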

Suppose we know the position of a particle perfectly, what can we say about its
momentum?

4. Particle in a Box

We now will apply the general program for solving a quantum mechanical problem
to our first system: the particle in a box.

This system is very simple which is one reason for beginning with it. It also can
be used as a “zeroth order” model for certain physical systems.

We shall soon see that the particle in a box is a physically unrealistic system and,
as a consequence, we must violate one of our criteria for a good wavefunction.
Nevertheless it is of great pedagogical and practical value.

4.1. The 1D Particle in a Box Problem

Consider the potential, V (x), shown in the figure and given by




V(x) = ∞ for x ≤ 0,   0 for 0 < x < a,   ∞ for x ≥ a.   (4.1)

Because of the infinities at x = 0 and x = a, we need to partition the x-axis into


the three regions shown in the figure.

Now, in region I and III, where the potential is infinite, the particle can never
exist so, ψ must equal zero in these regions.

The particle must be found only in region II.

The Schrödinger equation in region II is (V (x) = 0)

Ĥψ = Eψ   ⟹   −(ℏ²/2m) d²ψ(x)/dx² = Eψ,   (4.2)

which can be rearranged into the form

d²ψ(x)/dx² + (2mE/ℏ²) ψ(x) = 0.   (4.3)
The general solution of this differential equation is

ψ(x) = A sin kx + B cos kx, (4.4)


where k = √(2mE/ℏ²).

Now ψ must be continuous for all x. Therefore it must satisfy the boundary
conditions (b.c.): ψ(0) = 0 and ψ(a) = 0.

From the ψ(0) = 0 b.c. we see that the constant B must be zero because
cos kx|x=0 = 1.

So we are left with ψ(x) = A sin kx for our wavefunction.

As can be inferred from the following figure, the second b.c., ψ(a) = 0, places
certain restrictions on k.

In particular,

kn = nπ/a,   n = 1, 2, 3, ··· .   (4.5)

The values of k are quantized. So, now we have

ψn(x) = A sin(nπx/a).   (4.6)

The constant A is the normalization constant. We obtain A from

∫ from −∞ to ∞ ψn*(x) ψn(x) dx = 1 = A² ∫ from 0 to a sin(nπx/a) sin(nπx/a) dx.   (4.7)

Letting u = πx/a, du = (π/a) dx, this becomes

1 = A² (a/π) ∫ from 0 to π sin²(nu) du = A² (a/π)(π/2) = A² a/2.   (4.8)

Solving for A gives

A = √(2/a).   (4.9)

Thus our normalized wavefunctions for a particle in a box are

ψn(x) = 0 in regions I and III,   ψn(x) = √(2/a) sin(nπx/a) in region II.   (4.10)

Is this wavefunction OK?

We can get the energy levels from kn = √(2mEn/ℏ²) and kn = nπ/a:

En = n²π²ℏ²/(2ma²) = n²h²/(8ma²)   (using ℏ = h/2π).   (4.11)

4.2. Implications of the Particle in a Box problem

Zero Point Energy

The smallest value for n is 1 which corresponds to an energy of

E1 = h²/(8ma²) ≠ 0.   (4.12)
That is, the lowest energy state, or ground state, has nonzero energy. This residual
energy is called the zero point energy and is a consequence of the uncertainty
principle.

If the energy was zero then we would conclude that momentum was exactly zero,
δ p̂ = 0. But we also know that the particle is located within a finite region of
space, so δx̂ 6= ∞.

Hence, δx̂δp̂ = 0 which violates the uncertainty principle.

Features of the Particle in a Box Energy Levels

• The energy level spacing is

ΔE = En+1 − En = (n+1)²h²/(8ma²) − n²h²/(8ma²) = (n² + 2n + 1 − n²) h²/(8ma²),

so

ΔE = (2n + 1) h²/(8ma²).   (4.13)

• This spacing increases linearly with quantum level n

• This spacing decreases with increasing mass

• This spacing decreases with increasing a

• It is this level spacing that is what is measured experimentally

The Curvature of the Wavefunction

The operator for kinetic energy is T̂ = −(ℏ²/2m) d²/dx². The important part of this is d²/dx².

From freshman calculus we know that the second derivative of a function describes
its curvature, so a wavefunction with more curvature will have a larger second
derivative and hence it will possess more kinetic energy.

This is an important concept for the qualitative understanding of wavefunctions


for any quantum system.

Applying this idea to the particle in a box we can anticipate both zero point energy
and the behavior of the energy levels with increasing a.

• We know the wavefunction is zero in regions I and III. We also know that
the wave function is not zero everywhere. Therefore it must do something
between x = 0 and x = a. It must have some curvature and hence some zero
point energy.

• As a is increased, the wavefunction is less confined and so the curvature does


not need to be as great to satisfy the boundary conditions. Therefore the
energy levels decrease in energy as does their difference.

The particle in a box problem illustrates some of the many strange features of
quantum mechanics.

We have already seen such nonclassical behavior as quantized energy and zero
point energy.

As another example consider the expectation value of position for a particle in


the second quantum level:
⟨x⟩ = ∫ from −∞ to ∞ ψ2*(x) x ψ2(x) dx = (2/a) ∫ from 0 to a x sin²(2πx/a) dx = a/2,   (4.14)

yet the probability of finding the particle at x = a/2 is zero: ψ2(a/2) = 0. There is
a node at x = a/2. So even though the particle may be found anywhere else in the
box and it may get from the left side of the node to the right side, it can never
be found at the node.
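A two-line sympy check (mine, not in the original notes) makes this concrete for the n = 2 state.

import sympy as sp

x, a = sp.symbols('x a', positive=True)
psi2 = sp.sqrt(2/a)*sp.sin(2*sp.pi*x/a)            # n = 2 particle-in-a-box state

print(sp.integrate(psi2*x*psi2, (x, 0, a)))        # <x> = a/2
print(psi2.subs(x, a/2))                           # 0: a node right at the average position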

5. The Harmonic Oscillator

The harmonic oscillator model is simply a mass undergoing simple harmonic
motion. The classical example is a ball on a spring.

The harmonic oscillator is arguably the single most important model in all of
physics.

We shall begin by reviewing the classical harmonic oscillator and then we will
turn our attention to the quantum oscillator.

The force exerted by the spring in the above figure is F = −k(R − Req ), where k
is the spring constant and Req is the equilibrium position of the ball.

Setting x = R − Req we can measure the displacement about the equilibrium


position.

From Newton's law of motion, F = ma = m d²x/dt², we get

m d²x/dt² = −kx   ⟹   d²x/dt² + (k/m) x = 0.   (5.1)
This is a second order differential equation which we already know the solutions to:

x = A sin ωt + B cos ωt,   (5.2)

where ω = √(k/m) and A and B are constants which are determined by the initial
conditions.

For quantum mechanics it is much more convenient to talk about energy rather
than forces, so in going to the quantum oscillator, we need to express the force of
the spring in terms of potential energy V . We know
V = −∫ F dx = (1/2)kx² + C.   (5.3)

Since energy is on an arbitrary scale we can set C = 0. Thus V = (1/2)kx².

By postulate III the Schrödinger equation becomes

Ĥψ = Eψ   ⟹   [ −(ℏ²/2m) d²/dx² + (1/2)kx² ] ψ = Eψ,   (5.4)

where the first term in the brackets is the kinetic energy and the second is the
potential energy.

This can be rearranged into the form

−(ℏ²/2m) d²ψ/dx² + ( (1/2)kx² − E ) ψ = 0   (5.5)

This differential equation is not easy to solve (you can wait to solve it in graduate
school).

The equation is very close to the form of a known differential equation called Her-
mite's differential equation, the solutions of which are called the Hermite polyno-
mials.

As it turns out, the solutions (the eigenfunctions) to the Schrödinger equation for
the harmonic oscillator are
ψn(y) = An Hn(y) e^(−y²/2),   y = (km/ℏ²)^(1/4) x,   An = 1/√(2ⁿ n! √π),   (5.6)

where An is the normalization constant for the nth eigenfunction and Hn (y) are
the Hermite polynomials.

The eigenvalues (the energy levels) are


En = (n + 1/2)ℏω,   (5.7)

where again ω = √(k/m).

Note the energy levels are often written as


En = (n + 1/2)hν0,   (5.8)

where ν0 = (1/2π)√(k/m) and is called the vibrational constant.

∗ ∗ ∗ See Fig. 11.12 Laidler&Meiser ∗ ∗∗
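Since the Hermite polynomials are built into sympy, it is easy to verify these solutions directly (a sketch of my own, not from the notes). In the dimensionless variable y, the Schrödinger equation (5.5) reduces to −ψ″ + y²ψ = (2n + 1)ψ, and the factor (2n + 1) corresponds to En = (n + 1/2)ℏω.

import sympy as sp

y = sp.symbols('y', real=True)

for n in range(4):
    psi = sp.hermite(n, y)*sp.exp(-y**2/2)          # eigenfunction (unnormalized)
    lhs = -sp.diff(psi, y, 2) + y**2*psi            # dimensionless Hamiltonian acting on psi
    print(n, sp.simplify(lhs/psi))                  # 2*n + 1 for each n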

5.1. Interesting Aspects of the Quantum Harmonic Oscillator

It is interesting to investigate some of the unintuitive properties of the oscillator


as we have gone quantum mechanical

1. Consider the ground state (the lowest energy level)

• There is residual energy in the ground state because


E0 = (0 + 1/2)ℏω.
• Just like for the particle in a box, this energy is called the zero point
energy.
• It is a consequence of uncertainty principle
— If the ground state energy was really zero, then we would conclude
that the momentum of the oscillator was zero.
— On the other hand, we would conclude the particle was located at
the bottom of the potential well (at x = 0)
— Thus we would have δp = 0, δx = 0, so δpδx = 0 Not allowed!
— The uncertainty principle forces there to be some residual zero
point energy.

2. Consider the wavefunctions.

• The wavefunctions penetrate into the region where the classical particle
is forbidden to go
— The wavefunction is nonzero past the classical turning point.
• The probability distribution |ψ|2 becomes more and more like what is
expected for the classical oscillator when v → ∞.
— This is a manifestation of the correspondence principle which
states that for large quantum numbers, the quantum system must
behave like a classical system. In other words the quantum me-
chanics must contain classical mechanics as a limit.

3. Interpretation of the wavefunctions and energy levels

• Remember the wavefunctions are time independent and the energy lev-
els are stationary
• If a molecule is in a particular vibrational state it is NOT vibrating.

5.2. Spectroscopy (An Introduction)

The primary method of measuring the energy levels of a material is through the
use of electromagnetic radiation.

Experiments involving electromagnetic radiation—matter interaction are called


spectroscopies.

Atoms and molecules absorb or emit light only at specific (quantized) energies.

These specific values correspond to the energy level difference between the initial
and final states.

Key Equations for Exam 1

Listed here are some of the key equations for Exam 1. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• The short cut for getting the normalization constant (1D, see above for 3D).
N = √( ∫ over all space |ψunnorm(x)|² dx ).   (5.9)

• The normalized wavefunction:


ψnorm = (1/N) ψunnorm.   (5.10)

• The Schrödinger equation (which should be posted on your refrigerator),

Ĥψ = Eψ. (5.11)

• The Schrödinger equation for 1D problems as a differential equation,
−(ℏ²/2m) d²ψ/dx² + (V(x) − E)ψ = 0.   (5.12)
• How to get the average value for some property (1D version),
⟨α̂⟩ = ∫ over all space ψ* α̂ ψ dx.   (5.13)

• The momentum operator



p̂x = −i~ . (5.14)
∂x
• Normalized wavefunctions for the 1D particle in a box,
ψn(x) = √(2/a) sin(nπx/a).   (5.15)
• The energy levels for the 1D particle in a box,
En = n²π²ℏ²/(2ma²) = n²h²/(8ma²).   (5.16)
• The energy level spacing for the 1D particle in a box,
ΔE = (2n + 1) h²/(8ma²).   (5.17)
• The wavefunctions for the harmonic oscillator are
ψn(y) = An Hn(y) e^(−y²/2),   y = (km/ℏ²)^(1/4) x,   An = 1/√(2ⁿ n! √π),   (5.18)
where An is the normalization constant for the nth eigenfunction and Hn (y)
are the Hermite polynomials.

• The energy levels are


r
1 k
En = (n + )~ω, ω = (5.19)
2 m
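For the problem sets it can help to evaluate these formulas numerically. The short
Python sketch below does this for the particle in a box and the harmonic oscillator;
the box length, mass, and force constant are arbitrary illustrative values, not numbers
taken from these notes.

import numpy as np

hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s
me = 9.1093837015e-31    # kg, electron mass

# Particle in a box: En = n^2 h^2 / (8 m a^2); illustrative box length a = 1 nm
a = 1.0e-9
for n in range(1, 4):
    En = n**2 * h**2 / (8 * me * a**2)
    print(f"box  n={n}:  E = {En:.3e} J")

# Level spacing: Delta E = (2n + 1) h^2 / (8 m a^2)
n = 1
dE = (2 * n + 1) * h**2 / (8 * me * a**2)
print(f"box  spacing E2 - E1 = {dE:.3e} J")

# Harmonic oscillator: En = (n + 1/2) hbar omega, omega = sqrt(k/m)
k = 500.0      # N/m, illustrative force constant
m = 1.66e-27   # kg, roughly one atomic mass unit
omega = np.sqrt(k / m)
for n in range(3):
    En = (n + 0.5) * hbar * omega
    print(f"oscillator  n={n}:  E = {En:.3e} J")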

Part II

Quantum Mechanics of Atoms and Molecules
6. Hydrogenic Systems

Now that we have developed the formalism of quantum theory and have discussed
several important systems, we move onto the quantum mechanical treatment of
atoms.

Hydrogen is the only atom for which we can exactly solve the Schrödinger equation.
So this will be the first atomic system we discuss.

The Schrödinger equation for all the other atoms on the periodic table must be
solved by approximate methods.

6.1. Hydrogenic systems

Hydrogenic systems are those atomic systems which consist of a nucleus and one
electron. The Hydrogen atom (one proton and one electron) is the obvious exam-
ple

Ions such as He+ and Li2+ are also hydrogenic systems.

These systems are centrosymmetric. That is, they are completely symmetric about
the nucleus.

The obvious choice for the coordinate system is to use spherical polar coordinates
with the origin located on the nucleus.

The classical potential energy for these hydrogenic systems is

V(r) = −Ze²/(4πε0 r).    (6.1)

So the Hamiltonian is

Ĥ = −(~²/2me)∇̂² − Ze²/(4πε0 r̂).    (6.2)

Schrödinger's equation (in spherical polar coordinates) becomes

Eψ = Ĥψ    (6.3)
Eψ = [ −(~²/2me)∇̂² − Ze²/(4πε0 r̂) ]ψ
Eψ = { −(~²/2me)[ (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂/∂θ) + (1/(r² sin²θ)) ∂²/∂φ² ] − Ze²/(4πε0 r) }ψ

The Hamiltonian is (almost) the sum of a radial part (only a function of r) and
an angular part (only a function of θ and φ):

Ĥ = Ĥrad + (1/r²)Ĥang,    (6.4)

Ĥrad = −(~²/2me)(1/r²) ∂/∂r (r² ∂/∂r) − Ze²/(4πε0 r)    (6.5)

and

Ĥang = −(~²/2me)[ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin²θ) ∂²/∂φ² ]    (6.6)

Since the Hamiltonian is the sum of two terms, ψ must be a product state:

ψ(r, θ, φ) = ψrad(r)ψang(θ, φ)    (6.7)

It turns out that solving the Schrödinger equation,

Ĥang ψang (θ, φ) = Eψang (θ, φ), (6.8)

yields

ψang(θ, φ) = Ylm(θ, φ),    (6.9)

where the Ylm (θ, φ)’s are the spherical harmonic functions characterized by quan-
tum numbers l and m. The spherical harmonics are known functions. (Mathematica
knows them and you can use them just like any other built-in function like sine
or cosine.)

We shall use the spherical harmonics more next semester when we develop the
quantum theory of angular momentum.

It also turns out that the energy associated with Ĥang is found to be

E = El = l(l + 1)~²/(2me).    (6.10)

So,

Ĥang ψang(θ, φ) = [l(l + 1)~²/(2me)] ψang(θ, φ)    (6.11)

Now let’s denote the radial part of the wavefunction as ψ rad (r) = R(r).

The full Schrödinger equation becomes

Ĥψ(r, θ, φ) = Eψ(r, θ, φ)    (6.12)
ĤR(r)Ylm(θ, φ) = ER(r)Ylm(θ, φ)
[ Ĥrad + (1/r²)Ĥang ] R(r)Ylm(θ, φ) = ER(r)Ylm(θ, φ).

Operating with Ĥang we get

[ Ĥrad + l(l + 1)~²/(2me r²) ] R(r)Ylm(θ, φ) = ER(r)Ylm(θ, φ)    (6.13)

The Ylm(θ, φ) can now be cancelled to leave a one dimensional differential equation:

[ −(~²/2me)( (1/r²) ∂/∂r (r² ∂/∂r) − l(l + 1)/r² ) − Ze²/(4πε0 r) ] R(r) = ER(r).    (6.14)
This differential equation is very similar to a known equation called Laguerre’s
differential equation which has as solutions the Laguerre polynomials Lln (x).

In fact, the solutions to our differential equation are closely related to the Laguerre
polynomials.
Rnl(σ) = Anl (2σ/n)^l e^(−σ/n) L^(2l+1)_(n+1)(2σ/n),    (6.15)

where the normalization constant, Anl, depends on the n and l quantum numbers
as

Anl = −√[ (2Z/(na0))³ (n − l − 1)! / (2n[(n + l)!]³) ]    (6.16)

The energy eigenvalues, i.e., the energy levels, are given by

En = −Z²R/n²    (6.17)

Note: The energy levels are determined by n alone–l drops out.

Also Note: the energy levels are the same as for the Bohr model.

So, the total wavefunction that describes a hydrogenic system (ignoring the spin
of the electron, which will be briefly discussed later) is

ψ nlm (r, θ, φ) = Rnl (r)Ylm (θ, φ) (6.18)
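A quick numerical check of Eq. (6.17): the Python sketch below evaluates the hydrogenic
energy levels using the Rydberg energy R ≈ 13.606 eV and computes the 2 → 1 transition
for hydrogen. The use of electron-volts here is only for convenience.

RYDBERG_EV = 13.605693   # Rydberg energy in eV

def E_n(n, Z=1):
    """Hydrogenic energy level E_n = -Z^2 R / n^2 (in eV)."""
    return -Z**2 * RYDBERG_EV / n**2

# Energy levels of hydrogen
for n in range(1, 4):
    print(f"n={n}:  E = {E_n(n):8.3f} eV")

# The 2 -> 1 transition energy; should be about 10.2 eV
dE = E_n(2) - E_n(1)
print(f"2 -> 1 transition: {dE:.2f} eV")

# He+ (Z = 2) ground state is four times deeper than hydrogen's
print(f"He+ ground state: {E_n(1, Z=2):.2f} eV")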

6.2. Discussion of the Wavefunctions

We are now very close to having the atomic orbitals familiar from freshman chem-
istry.

We have explicitly derived the “physicists” picture of the atomic orbitals

orbital   n   l   m     wavefunctions (σ = r/a0)
1s        1   0   0     ψ1s = ψ100 = e^(−σ)
2s        2   0   0     ψ2s = ψ200 = (1 − σ/2) e^(−σ/2)
2p        2   1   0     ψ2p0 = ψ210 = σ e^(−σ/2) cos θ
          2   1   ±1    ψ2p±1 = ψ21±1 = σ e^(−σ/2) sin θ e^(±iφ)
3d        3   2   0     ψ3d0 = ψ320 = σ² e^(−σ/3) (3cos²θ − 1)
          3   2   ±1    ψ3d±1 = ψ32±1 = R32(r) cos θ sin θ e^(±iφ)
          3   2   ±2    ψ3d±2 = ψ32±2 = R32(r) sin²θ e^(±2iφ)

The wavefunctions in the “physicists” picture are complex (they have real and
imaginary components). The wavefunctions that chemists like are pure real. So
one needs to form linear combinations of these orbitals such that these combina-
tions are pure real.

The atomic orbitals you are used to from freshman chemistry are the “chemists’”
picture of atomic orbitals.

In the above table ψ1s , ψ 2s , ψ 2p0 , ψ 3d0 are pure real and so these are the same in
the “chemists” picture as in the “physicists” picture.

The table below lists the atomic orbitals in the “chemists” picture as linear com-
binations of the “physicists” picture wave functions.

orbital   n   l   m     wavefunctions (σ = r/a0)
1s        1   0   0     ψ1s = ψ1s
2s        2   0   0     ψ2s = ψ2s
2p        2   1   0     ψ2pz = ψ2p0
          2   1   ±1    ψ2px = (1/√2)[ψ2p1 + ψ2p−1]
          2   1   ±1    ψ2py = (1/(i√2))[ψ2p1 − ψ2p−1]
3d        3   2   0     ψ3dz² = ψ3d0
          3   2   ±1    ψ3dxz = (1/√2)[ψ3d1 + ψ3d−1]
          3   2   ±1    ψ3dyz = (1/(i√2))[ψ3d1 − ψ3d−1]
          3   2   ±2    ψ3dx²−y² = (1/√2)[ψ3d2 + ψ3d−2]
          3   2   ±2    ψ3dxy = (1/(i√2))[ψ3d2 − ψ3d−2]

6.3. Spin of the electron

As we know from freshman chemistry, electrons also possess an intrinsic property
called spin.

Spin is actually rather peculiar so we will put off a more detailed discussion until
next semester.

For now we must be satisfied with the following:

• There are two quantum numbers associated with spin: s and ms

• s is the spin quantum number and for an electron s = 1/2 (always).

• ms is the spin orientation quantum number and ms = ±1/2 for electrons.

The spin wavefunction is a function in spin space not the usual coordinate space,
so we can not write down an explicit function of the coordinate space variables.

We simply denote the spin wavefunction generally as χs,ms and “tack it on” as
another factor of the complete wavefunction.

When a particular spin state is needed a further notation is commonly used:

α ≡ χ1/2,1/2 (the “spin-up” state) and β ≡ χ1/2,−1/2 (the “spin-down” state)

6.4. Summary: the Complete Hydrogenic Wavefunction

We are now in position to fully describe all properties of hydrogenic systems


(except for relativistic effects)

The full wave function is

Ψn,l,m,s,ms = ψn,l,m χs,ms    (6.19)
           = Rnl(r)Yl,m(θ, φ) χs,ms

The energy is given by

En = −Z²R/n²,    (6.20)

where R is the Rydberg constant. Again note that for a free hydrogenic system the
total energy depends only on the principal quantum number n.

The quantum numbers of the hydrogenic system

• The principal quantum number, n: determines the total energy of the sys-
tem and the atomic shells.

— The principal quantum number, n, can take on values of 1, 2, 3, . . .

• The angular momentum quantum number, l: determines the total angular
momentum of the system. It also determines the atomic sub-shells.

— The angular momentum quantum number, l, can take on values of 0,
1, . . . , (n − 1)
— For historical reasons l = 0 is called s, l = 1 is called p, l = 2 is called
d, l = 3 is called f etc.

• The orientation quantum number, m: determines the projection of the an-
gular momentum onto the z-axis. It also determines the orientation of the
atomic sub-shells.

— The magnetic quantum number, m, can take on values of 0, ±1, . . . , ±l.

• The spin quantum number, s: determines the total spin angular momentum.

— For electrons s = 1/2.

• The spin orientation quantum number, ms : determines the projection of the


spin angular momentum onto the z-axis (i.e., spin-up or spin-down).

— For electrons ms = ±1/2
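These counting rules are easy to check by brute force. The sketch below enumerates
every allowed (n, l, m, ms) combination for a given shell and confirms that a shell
holds 2n² spin states.

def shell_states(n):
    """List all allowed (n, l, m, ms) combinations for principal quantum number n."""
    states = []
    for l in range(n):                  # l = 0, 1, ..., n-1
        for m in range(-l, l + 1):      # m = -l, ..., +l
            for ms in (+0.5, -0.5):     # spin up or down
                states.append((n, l, m, ms))
    return states

for n in (1, 2, 3):
    states = shell_states(n)
    print(f"n = {n}: {len(states)} states (expected 2n^2 = {2 * n * n})")

# e.g. the n = 2 shell: 2 states in the 2s subshell and 6 in the 2p subshell.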

We have accomplished quite a bit. We have determined all that we can about the
hydrogen atom within Schrödinger’s theory of quantum mechanics.

This is not the full story however. The Schrödinger theory is a non-relativistic
one; that is, it can not account for relativistic effects which show up in spectral
data. We also had to add spin in an ad hoc manner to account for what we know
experimentally–spin did not fall out of the theory naturally.

Dirac, in the late 1920’s, developed a relativistic quantum theory in which the
well established phenomenon of spin arose naturally. His theory also made the
bold prediction of the existence of anti-matter, which has now been verified time
and again.

The Dirac theory was still not fully complete, because there still existed exper-
imental phenomena that were not properly described. In 1948 Richard Feynman
developed the beginnings of quantum electrodynamics (QED). QED is the best
theory ever developed in terms of matching with experimental data.

Both the relativistic Dirac theory and QED are beyond our reach, so we limit
ourselves to the non-relativistic Schrödinger theory.

7. Multi-electron atoms

7.1. Two Electron Atoms: Helium

We now consider a system consisting of two electrons and a nucleus; for example,
helium.

Although the extension from hydrogen to helium seems simple, it is actually ex-
tremely complicated. In fact, it is so complicated that it can’t be solved exactly.

The helium atom is an example of the “three-body problem,” which is difficult to
handle even in classical mechanics; one cannot get a closed form solution.

The Hamiltonian for helium is


Ĥ = −(~²/2me)∇₁² − (~²/2me)∇₂² − Ze²/(4πε0 r1) − Ze²/(4πε0 r2) + e²/(4πε0 r12),    (7.1)

(the terms are, in order, the kinetic energy of electron 1, the kinetic energy of
electron 2, the potential energy of electron 1, the potential energy of electron 2,
and the electron-electron repulsion)

where r12 = |r1 − r2 | is the distance between the electrons.

The electron—electron repulsion term is responsible for the difficulty of the prob-
lem. It makes a closed form solution impossible.

The problem must be solved by one of the following methods

• Numerical solutions (we will not discuss this)

• Perturbation theory (next semester)

• Variational theory (next semester)

• Ignore the electron—electron repulsion (good for qualitative work only)

7.2. The Pauli Exclusion Principle

Electrons are fundamentally indistinguishable. They cannot truly be la-
belled.

All physical properties of a system where we have labelled the electrons as, say, 1
and 2 must be exactly the same as when the electrons are labelled 2 and 1.

Now, only |ψ|2 is directly measurable–not ψ itself.

All this implies that

ψ(1, 2) = +ψ(2, 1)  (symmetric)    or    ψ(1, 2) = −ψ(2, 1)  (antisymmetric).    (7.2)

The Pauli exclusion principle states: The total wavefunctions for fermions
(e.g., electrons) must be antisymmetric under the exchange of indistinguishable
fermions.

Note: a similar statement exists for bosons (e.g., photons): The total wavefunction
for bosons must be symmetric under exchange of indistinguishable bosons.

Let us consider the two electron atom, helium

The total wavefunction is

Ψ = ψ(1, 2)χ(1, 2)    (7.3)

Since a complete solution for helium is not possible we must use approximate
wavefunctions. Since we are doing this, we may as well simplify matters and use
product state wavefunctions (products of the hydrogenic wavefunctions):

Ψ = ψ(1)ψ(2) χ(1)χ(2),    (7.4)
    (spatial part) (spin part)

where the single particle wavefunctions are that of the hydrogenic system.

The Pauli exclusion principle implies that if the spatial part is even with respect
to exchange then the spin part must be odd. Likewise if the spatial part is odd
then the spin part must be even.
Now let’s blindly list all possibilities for the ground state wave function of helium

Ψa = ψ1s (1)α(1)ψ1s (2)α(2) (7.5)


Ψb = ψ1s (1)α(1)ψ1s (2)β(2)
Ψc = ψ1s (1)β(1)ψ1s (2)α(2)
Ψd = ψ1s (1)β(1)ψ1s (2)β(2)

These appear to be four reasonable ground state wavefunctions which would im-
ply a four-fold degeneracy. However considering the symmetry with respect to
exchange we see the following

• Ψa has symmetric spatial and spin parts and is therefore symmetric overall. It
must be excluded.

• Similarly for Ψd .

• Ψb and Ψc have symmetric spatial parts, but the spin part is neither sym-
metric nor antisymmetric. So, one must make an antisymmetric linear com-
bination of the spin parts.

The appropriate linear combination is

α(1)β(2) − α(2)β(1). (7.6)

So the ground state wave function for helium is

Ψg = ψ1s (1)ψ1s (2) [α(1)β(2) − α(2)β(1)] . (7.7)

Consequences of the Pauli exclusion principle

• No two electrons can have the same five quantum numbers

• Electrons occupying the same orbital must have opposite spins

7.3. Many Electron Atoms

The remaining atoms on the periodic table are handled in a manner similar to
helium.

Namely, the wavefunction is a product state that must be antisymmetrized in ac-
cordance with the Pauli exclusion principle.

The product wavefunction for the ground state is determined by applying the
aufbau principle. The aufbau principle states that the ground state wavefunction
is built-up of hydrogenic wavefunctions

To arrive at an antisymmetric wavefunction we construct the Slater determinant:

        | ψ1s(1)α(1)   ψ1s(1)β(1)   · · ·   ψn(1)α(1)   ψn(1)β(1) |
        | ψ1s(2)α(2)   ψ1s(2)β(2)   · · ·   ψn(2)α(2)   ψn(2)β(2) |
Ψ =     |     ...           ...      ...       ...          ...   |    (7.8)
        | ψ1s(N)α(N)   ψ1s(N)β(N)   · · ·   ψn(N)α(N)   ψn(N)β(N) |

The reason one can be sure that this wavefunction is properly antisymmetrized is
that we know from linear algebra that the determinant is antisymmetric under
exchange of rows (which corresponds to exchanging two electrons). It is also
antisymmetric under exchange of columns.

Another property of the determinant is that if two columns are the same (which
corresponds to two electrons in the same state) the determinant is zero. This agrees
with the Pauli exclusion principle.
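These determinant properties are easy to demonstrate numerically. In the sketch below
the matrix entries are just random numbers standing in for spin-orbital values φj(xi);
the point is only the linear algebra: swapping two rows (electrons) flips the sign, and
duplicating a column (two electrons in the same spin-orbital) gives zero.

import numpy as np

rng = np.random.default_rng(0)

# A 3x3 "Slater matrix": rows = electrons, columns = spin-orbitals.
M = rng.normal(size=(3, 3))
print("det(M)              =", np.linalg.det(M))

# Exchange electrons 1 and 2 (swap rows 0 and 1): the determinant changes sign.
M_swapped = M[[1, 0, 2], :]
print("det after row swap  =", np.linalg.det(M_swapped))

# Put two electrons in the same spin-orbital (duplicate a column): determinant is zero.
M_double = M.copy()
M_double[:, 1] = M_double[:, 0]
print("det with equal cols =", np.linalg.det(M_double))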

As an example consider lithium:

• There are three electrons so we need three hydrogenic wavefunctions: ψ1s α,


ψ1s β, and ψ2s α (or ψ 2s β).

• We construct the Slater determinant as

         | ψ1s(1)α(1)   ψ1s(1)β(1)   ψ2s(1)α(1) |
Ψ1 =     | ψ1s(2)α(2)   ψ1s(2)β(2)   ψ2s(2)α(2) |    (7.9)
         | ψ1s(3)α(3)   ψ1s(3)β(3)   ψ2s(3)α(3) |

or

         | ψ1s(1)α(1)   ψ1s(1)β(1)   ψ2s(1)β(1) |
Ψ2 =     | ψ1s(2)α(2)   ψ1s(2)β(2)   ψ2s(2)β(2) |    (7.10)
         | ψ1s(3)α(3)   ψ1s(3)β(3)   ψ2s(3)β(3) |

• The short hand notation for these states is (1s)²(2s)¹

7.3.1. The Total Hamiltonian

The total Hamiltonian for a many-electron atom (ignoring spin-orbit coupling, which
will be discussed next semester) is

Ĥ = Σ_{i=1}^{N} [ −(~²/2me)∇ᵢ² − Ze²/(4πε0 ri) + Σ_{j>i} e²/(4πε0 rij) ]    (7.11)

8. Diatomic Molecules and the Born
Oppenheimer Approximation

Now that we have applied quantum mechanics to atoms, we are able to begin the
discussion of molecules.

This chapter will be limited to diatomic molecules.

8.1. Molecular Energy

A diatomic molecule with n electrons requires that 3n+6 coordinates be specified.

Three of these describe the center of mass position.

3n of these describe the position of the n electrons.

This leaves three degrees of freedom (R, θ, φ) which describe the position of the
nuclei relative to the center of mass. R determines the internuclear separation
and θ and φ determine the orientation.

8.1.1. The Hamiltonian

In the center of mass coordinates the Hamiltonian for a diatomic molecule is

Ĥ = T̂N + T̂e + V̂N N + V̂Ne + V̂ee . (8.1)

T̂N is the nuclear kinetic energy operator and is given by

T̂N = −(~²/2μ)∇̂²_N = −(~²/(2μR²)) ∂/∂R (R² ∂/∂R) + Ĵ²/(2μR²),    (8.2)

where Ĵ is the angular momentum operator for molecular rotation and
μ = m1m2/(m1 + m2) is the reduced mass of the diatomic molecule.

T̂e = Σi −(~²/2me)∇̂²_ei is the kinetic energy operator for the electrons.

V̂NN = ZAZBe²/(4πε0 R) is the nuclear-nuclear potential energy operator.

V̂Ne = −Σi [ ZAe²/(4πε0 rAi) + ZBe²/(4πε0 rBi) ] is the nuclear-electron potential energy operator.

V̂ee = Σ_{i>j} e²/(4πε0 rij) is the electron-electron potential energy operator.
8.1.2. The Born—Oppenheimer Approximation

The Born—Oppenheimer approximation: The nuclei move much slower than


the electrons. (classical picture)

We put the Born—Oppenheimer approximation to work by first defining an effec-
tive Hamiltonian

Ĥeff = T̂e + V̂NN + V̂Ne + V̂ee.    (8.3)

The approximation comes in by treating R as a parameter rather than an operator
(or variable). So one writes

Ĥeff ψe(R, {ri}) = Ee(R)ψe(R, {ri}).    (8.4)

ψe is the so-called electronic wavefunction.

Now the Schrödinger equation for the diatomic molecule is

(T̂N + Ĥeff) ψ(R, {ri}) = Eψ(R, {ri}).    (8.5)

Since the Hamiltonian is a sum of two terms, one can write the wavefunction
ψ(R, {ri}) as a product wavefunction

ψ = ψN ψe,    (8.6)

where ψN is the so-called nuclear wavefunction.

Substituting the product wavefunction into the Schrödinger equation gives

(T̂N + Ĥeff) ψN ψe = E ψN ψe    (8.7)
(T̂N + Ee(R)) ψN ψe = E ψN ψe
(T̂N + Ee(R)) ψN = E ψN,

where the common factor ψe has been cancelled in the last step.

The last equation is exactly like a Schrödinger equation with a potential equal to
Ee (R).

One now models Ee (R) or determines it experimentally.

8.2. Molecular Vibrations

As stated earlier R is the internuclear separation and θ and φ determine the


orientation. Consequently, R is the variable involved with vibration whereas θ
and φ are involved with rotation.

Considering only the R part of the Hamiltonian (under the Born—Oppenheimer
approximation), we have

[ −(~²/2μ) ∂²/∂R² + Ee(R) ] ψvib = Evib ψvib.    (8.8)

It is convenient at this point to expand Ee(R) in a Taylor series about the equi-
librium position, Req:

Ee(R) = E⁰ + (∂E/∂R)|Req (R − Req) + (1/2!)(∂²E/∂R²)|Req (R − Req)² + · · · .    (8.9)

Now E⁰ is just a constant which, by choice of the zero of energy, can be set to an
arbitrary value.

Since we are at a minimum, (∂E/∂R)|Req must be zero, so the linear term vanishes.

One defines (∂²E/∂R²)|Req ≡ ke as the force constant.

The remaining terms in the expansion can collectively be defined as O[(R − Req)³] ≡
Vanh, the anharmonic potential.

As a first approximation we can neglect the anharmonicity. With this, the Schrödinger
equation becomes

[ −(~²/2μ) ∂²/∂R² + (1/2)ke(R − Req)² ] ψvib = Evib ψvib.    (8.10)

If we let x = (R − Req) this becomes

[ −(~²/2μ) ∂²/∂x² + (1/2)ke x² ] ψvib = Evib ψvib,    (8.11)

which is exactly the harmonic oscillator equation. Hence

ψvib,n = An Hn(√α x) e^(−αx²/2),    (8.12)

where α ≡ √(ke μ)/~.

And

Evib,n = hcω̃e (n + 1/2),    (8.13)

where ω̃e ≡ (1/(2πc))√(ke/μ).
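As a numerical example of Eq. (8.13), the Python sketch below computes the reduced
mass and the harmonic vibrational constant ω̃e in cm⁻¹. The force constant and masses
are approximate literature values for ¹H³⁵Cl, used only for illustration.

import numpy as np

c = 2.99792458e10         # speed of light, cm/s
amu = 1.66053906660e-27   # kg

# Approximate values for 1H35Cl (illustrative)
ke = 516.0                # N/m, force constant
m1, m2 = 1.008 * amu, 34.969 * amu
mu = m1 * m2 / (m1 + m2)  # reduced mass

omega_tilde = np.sqrt(ke / mu) / (2 * np.pi * c)   # cm^-1
print(f"reduced mass  mu = {mu:.3e} kg")
print(f"harmonic constant  omega_e ~ {omega_tilde:.0f} cm^-1")
# This comes out near 3000 cm^-1, the right ballpark for the H-Cl stretch.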

8.2.1. The Morse Oscillator

Neglecting anharmonicity and using the harmonic oscillator approximation works


well for low energies. However, it is a poor model for high energies.

For high energies we need a more realistic potential–one that will allow for bond
dissociation.

The Morse potential is

Ee(R) = De[1 − e^(−β(R−Req))]²,    (8.14)

where De is the well depth and β = 2πcω̃e√(μ/(2De)) is the Morse parameter. Note:
this expression for the Morse potential has the zero of energy at the bottom of
the well (i.e. R = Req, Ee(Req) = 0).

The Morse potential can also be written as

Ee(R) = De[e^(−2β(R−Req)) − 2e^(−β(R−Req))].    (8.15)

Now the zero of energy is the dissociated state (i.e. R → ∞, Ee(R → ∞) = 0).

We approach this quantum mechanical problem exactly like all the others.

The Schrödinger equation is

[ −(~²/2μ) ∂²/∂R² + De(1 − e^(−β(R−Req)))² ] ψvib = Evib ψvib    (8.16)

This is another differential equation that is difficult to solve.

As it turns out, this Schrödinger equation can be transformed into one of a broad
class of known differential equations called confluent hypergeometric equations–
the solutions of which are the confluent hypergeometric functions, 1F1.

Doing this yields wavefunctions of the form

ψvib,n(z) = z^(Apn) e^(−z) 1F1(−n, 1 + 2Apn, 2z),    (8.17)

z = (√(2De μ)/(β~)) e^(−βx),
A = √(2μ)/(β~),
pn = √De − (n + 1/2)/A,

and energy levels of the form

Evib,n = −De + hcω̃e(n + 1/2) − hcω̃e xe (n + 1/2)²,    (8.18)

where ω̃e xe together is the anharmonicity constant, with xe = hcω̃e/(4De).
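The anharmonic correction makes the level spacing shrink as n increases. The sketch below
evaluates Eq. (8.18) with illustrative, roughly HCl-like values ω̃e = 2990 cm⁻¹ and
ω̃exe = 53 cm⁻¹ and prints the spacing between successive levels; the constant −De is
dropped since it cancels in the differences.

def E_vib(n, we=2990.0, wexe=53.0):
    """Morse energy levels in cm^-1 (relative to the well minimum; -De dropped)."""
    return we * (n + 0.5) - wexe * (n + 0.5)**2

levels = [E_vib(n) for n in range(6)]
for n in range(5):
    print(f"n = {n} -> {n + 1}: spacing = {levels[n + 1] - levels[n]:6.0f} cm^-1")
# The fundamental (0 -> 1) is we - 2*wexe; each successive spacing drops by another 2*wexe.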

∗ ∗ ∗ See Handout ∗ ∗∗

8.2.2. Vibrational Spectroscopy

Infrared (IR) and Raman spectroscopy are the two most widely used techniques
to probe vibrational levels.

The spectral peaks appear at ν̃ = ΔE/hc (in units of wavenumbers, cm⁻¹).

The transition from the n = 0 to the n = 1 state is called the fundamental


transition.

Transitions from n = 0 to n = 2, 3, 4 · · · are called overtone transitions.

Transitions from n = 1 to 2, 3, 4 · · · , n = 2 to 3, 4, 5 · · · , etc. are called hot


transitions (or hot bands)

Since the energy levels depend on mass, isotopes will have a different transition
energy and hence appear in a different place in the spectrum. Heavier isotopes
have lower transition energies.

9. Molecular Orbital Theory and
Symmetry

9.1. Molecular Orbital Theory

One of the most important concepts in all of chemistry is the chemical bond.

In freshman chemistry we learn of one model for chemical bonding–VSEPR (va-


lence shell electron-pair repulsion) theory, where hybridized atomic orbitals deter-
mine the bonding geometry of a given molecule.

We are now prepared to discuss a bonding theory that is more rigorously based
in quantum mechanics.

Basically we will treat the molecules in the same way as all our other quantum
mechanical problems (e.g., particle in a box, harmonic oscillator, etc.)

As you might expect, it is not possible to obtain the exact wavefunctions and
energy levels so, we must settle for approximate solutions.

As a first example, let us consider the molecular hydrogen ion H2+.

The Hamiltonian for H2+ is

Ĥ = T̂N + T̂el + V̂Nel + V̂NN    (9.1)
We use the Born-Oppenheimer approximation and treat the nuclear coordinates
as parameters rather than as variables. So we only worry about the parts of the
Hamiltonian that deal with the electron.

The effective Hamiltonian becomes

Ĥ = T̂el + V̂Nel    (9.2)
  = −(~²/2me)∇² − e²/(4πε0 rA) − e²/(4πε0 rB).
The eigenfunctions of this Hamiltonian are called molecular orbitals.

The molecular orbitals are the analogues of the atomic orbitals.

• Atomic orbitals: Hydrogen is the prototype and all other atomic orbitals
are built from the hydrogen atomic orbitals.

• Molecular orbitals: The hydrogen molecular ion is the prototype and all
other molecular orbitals are built from the hydrogen molecular ion molecular
orbitals.

There is one significant difference between the two cases: the hydrogen atomic
orbitals are exact, whereas the hydrogen molecular ion molecular orbitals are not
exact.

In fact, we shall see that these molecular orbitals are constructed as linear com-
binations of atomic orbitals.

9.2. Symmetry

Let the atoms of the hydrogen molecular ion lie on the z-axis of the center of mass
coordinate system.

Inversion symmetry

• The potential field of the hydrogen molecular ion is cylindrically symmetric


about the z-axis.

• Because of the symmetry the electron density at (x, y, z) must equal the
electron density at (−x, −y, −z).

• The above symmetry therefore requires that the molecular orbitals be eigen-
functions of the inversion operator, ı̂. That is

ı̂ψ(x, y, z) = ψ(−x, −y, −z) = aψ(x, y, z). (9.3)

• Moreover the eigenvalue a can be either +1 or −1.

• If a = +1 the molecular wavefunction is even with respect to inversion and


is called gerade and labelled with a “g”: ı̂ψ g = ψg

• If a = −1 the molecular wavefunction is odd with respect to inversion and


is called ungerade and labelled with a “u”: ı̂ψ u = −ψu

• The terms gerade and ungerade apply only to systems that possess inversion
symmetry.

Cylindrical symmetry

• The cylindrical symmetry implies that the potential energy cannot depend
on φ.

• The molecular wavefunction is described by an eigenvalue λ = 0, ±1, ±2, . . .

— We use λ to label the molecular orbitals as shown in the table


λ 0 ±1 ±2 · · ·
label σ π δ ···

Mirror plane symmetry

• There is also a symmetry about the x-y plane called horizontal mirror plane
symmetry: operator σ̂ h .

• Thus the molecular wavefunction must be an eigenfunction of σ̂ h with eigen-


value ±1.

— If the eigenvalue is +1 (even with respect to σ̂ h ) the molecular orbital


is called a bonding orbital.
— If the eigenvalue is −1 (odd with respect to σ̂ h ) the molecular orbital
is called an antibonding orbital.

• There are also vertical mirror plane symmetries, but we will put that dis-
cussion off for the time being.

10. Molecular Orbital Diagrams

10.1. LCAO–Linear Combinations of Atomic Orbitals

Now that we know what symmetry the molecular orbitals must possess, we need
to find some useful approximations for them.

Useful can mean qualitatively useful or quantitatively useful.

Unfortunately we can’t have both.

We will discuss the approximation which models the molecular orbitals as linear
combinations of atomic orbitals (LCAO).

LCAO is qualitatively very useful but it lacks quantitative precision.

Let us again consider the hydrogen molecular ion H2+: let one H atom be labelled
A and the other labelled B.

A linear combination of the 1s atomic orbital from each H atom is used for the
molecular orbital of H2+:

(1sA) = ke^(−rA/a0)    (10.1)

and

(1sB) = ke^(−rB/a0)    (10.2)

We construct two molecular orbitals as

Φ+ = C+ (1sA + 1sB ) (10.3)

and
Φ− = C− (1sA − 1sB ) (10.4)
The normalization condition is

∫ Φ± Φ± dΩ = 1    (10.5)

As can be seen from the above figure, Φ+ represents a situation in which the
electron density is concentrated between the nuclei and thus represents a bonding
orbital.

Conversely Φ− represents a situation in which the electron density is very low


between the nuclei and thus represents an antibonding orbital

10.1.1. Classification of Molecular Orbitals

With atoms we classified atomic orbitals according to angular momentum.

For molecular orbitals we shall also classify them according to angular momentum.
But we shall also classify them according to their inversion symmetry and whether
or not they are bonding or antibonding.

The classification according to angular momentum is as follows.

λ 0 ±1 ±2 · · ·
orbital symbol σ π δ ···

Atomic orbitals with m = 0 form σ type molecular orbitals, e.g., s ⇒ σ, pz ⇒ σ.

Those with m = ±1 form π type molecular orbitals, e.g., px ⇒ π etc.

The classification according to inversion symmetry is simply a subscript “g” or


“u”. For example, σ g or σ u etc.

For the classification according to bonding or antibonding, an asterisk is used to
denote antibonding. For example, σg is a bonding orbital and σ*u is an antibonding
orbital.

10.2. The Hydrogen Molecule

Let us now consider the hydrogen molecule. This molecule is a homonuclear
diatomic with two electrons.

If the two atoms are infinitely far apart, the ground state of the system would
consist of two separate hydrogen atoms in their ground atomic states: (1s)¹
As the atoms are brought closer together, their respective s orbitals begin to over-
lap.

It is now more appropriate to speak in terms of molecular orbitals, so one forms


linear combinations of the atomic orbitals.

There are two acceptable linear combinations. These are

σg = 1sA + 1sB (10.6)

and
σ ∗u = 1sA − 1sB . (10.7)

It can be shown mathematically that the energy level associated with σg is lower
than σ∗u .

We can intuit this qualitatively however since the σ ∗u orbital must have a node
whereas the σ g does not.

It is also to be expected since we know H2 is a stable molecule.

10.3. Molecular Orbital Diagrams

The energy levels associated with the molecular orbitals are drawn schematically
in what is called a molecular orbital diagram.

The molecular orbital diagram for H2 is shown below

Molecular orbital diagrams can be drawn for any molecule. Some get very compli-
cated. We will focus on the second row homonuclear diatomics and some simple
heteronuclear diatomics.

The molecular orbital diagrams for the second row homonuclear diatomics are
rather simple.

∗ ∗ ∗ See Supplement ∗ ∗∗

The supplement that follows this section contains examples for each of the second
row diatomics.

Heteronuclear diatomics are somewhat more complicated since there is a disparity
in the energy levels of the atomic orbitals for the separated atoms. This disparity
is not present for homonuclear diatomics.

A consequence of this energy level disparity is that molecular orbitals may be


formed from nonidentical atomic orbitals. For example a high lying 1s orbital
may combine with a low lying 2s orbital to form a σ molecular orbital.

The supplement that follows this section contains some examples of heteronuclear
diatomics.

Bond order

• One important property that can be predicted from the molecular orbital
diagrams is bond order.

• Bond order is defined as

BO = (1/2)(# of bonding electrons − # of antibonding electrons)    (10.8)

• Examples follow in the supplement.
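Equation (10.8) is simple enough to code directly. The electron counts used below
follow the standard ground-state MO fillings for H2, N2, and O2 (valence electrons
only); they are listed here for illustration rather than taken from the supplement.

def bond_order(n_bonding, n_antibonding):
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return 0.5 * (n_bonding - n_antibonding)

# (bonding, antibonding) valence-electron counts from standard MO fillings
examples = {
    "H2": (2, 0),   # (sigma_g 1s)^2
    "N2": (8, 2),   # sigma2s^2 sigma*2s^2 pi2p^4 sigma2p^2
    "O2": (8, 4),   # ... plus pi*2p^2
}
for molecule, (nb, na) in examples.items():
    print(f"{molecule}: bond order = {bond_order(nb, na):.1f}")
# Gives 1, 3, and 2, matching the familiar Lewis-structure bond orders.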

10.4. The Complete Molecular Hamiltonian and Wavefunc-
tion

We have discussed molecular vibrations which under the Born-Oppenheimer ap-


proximation are governed by the vibrational Hamiltonian and described by the
vibrational wavefunction.

Likewise we have discussed molecular orbitals which are the electronic wavefunc-
tions.

Next semester we will discuss molecular rotations and just like for vibrations
and electronic transitions they are governed by the rotational Hamiltonian and
described by the rotational wavefunction.

We can succinctly express the Schrödinger equation for a molecule as follows.
(Next semester we will look at the details of this for polyatomic molecules.)

Ĥmol Ψmol = Emol Ψmol    (10.9)
(Ĥele + Ĥvib + Ĥrot) ψele ψvib ψrot = (Eele + Evib + Erot) ψele ψvib ψrot

11. An Aside: Light Scattering–Why
the Sky is Blue

This chapter addresses the topic of light scattering from two different perspectives.

• Classical electrodynamics

• Classical statistical mechanics

Since this is not a course on electrodynamics, we have to take several key results
from that theory on faith.

11.1. The Classical Electrodynamics Treatment of Light Scat-


tering

As usual we work under the electric dipole approximation and only focus on the
interaction of the electric field part of light with a dipole.

When the light interacts with the molecule an electric dipole is induced according
to
μ = αE, (11.1)

where α is the polarizability of the molecule describing the “flexibility” of its


electron cloud.

For light, the electric field part is

E(t) = E0 cos ωt. (11.2)

The polarizability also depends on the positions of nuclei to some degree. That
is, there is a vibrational (and rotational) contribution to the polarizability:

α(t) = α0 + α1 cos ωv t (11.3)

(here for simplicity we assume only one vibrational mode).

Thus the light—matter interaction is described as

μ(t) = α(t)E(t) = (α0 + α1 cos ωv t) E0 cos ωt    (11.4)
     = α0 E0 cos ωt + α1 E0 cos ωv t cos ωt
     = α0 E0 cos ωt + (α1E0/2) cos(ω − ωv)t + (α1E0/2) cos(ω + ωv)t,
       (Rayleigh)      (Stokes Raman)          (anti-Stokes Raman)
where a trig identity was used in the last step.

According to classical electrodynamics an oscillating dipole emits an electromag-


netic field at the oscillation frequency.

In this case we see the dipole oscillates at three distinct frequencies: ω, ω − ωv


and ω + ω v as part of three terms in the above expression.

The first term corresponds to Rayleigh scattering where the scattered light is at
the same frequency as the incident light.

The second term corresponds to Stokes Raman scattering where the scattered
light is shifted to the red of the incident frequency.

The third term corresponds to anti-Stokes Raman scattering where the scattered
light is shifted to the blue of the incident frequency.

Classical electrodynamics can describe exactly how the oscillating electric dipole
emits electromagnetic radiation. It can be shown that the emitted intensity is

I = (ω⁴/3c³) μ0²,    (11.5)
where μ0 = α0 E0 for the case of Rayleigh scattering and μ0 = α1 E0 /2 for the case
of Raman scattering.

To explicitly derive this expression we would need a fair bit of electrodynamics


and so the derivation is not shown here.

The important point to note is that I ∝ ω 4 or alternatively I ∝ 1/λ4 . There is a


very strong dependence on frequency (or wavelength).

This quartic scattering dependence is, in fact, the reason why the sky is blue (from
the point of view of classical electrodynamics) and is called the Rayleigh scattering
law.
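To see how strong the λ⁻⁴ dependence is, the sketch below compares the relative
Rayleigh scattering intensity of blue light (taken as 450 nm) to red light (taken as
650 nm); the two wavelengths are just representative choices.

def rayleigh_relative_intensity(wavelength_nm):
    """Relative Rayleigh scattering intensity, I proportional to 1/lambda^4."""
    return 1.0 / wavelength_nm**4

blue, red = 450.0, 650.0
ratio = rayleigh_relative_intensity(blue) / rayleigh_relative_intensity(red)
print(f"Blue light scatters about {ratio:.1f} times more strongly than red.")
# (650/450)^4 is roughly 4.4, so blue dominates the scattered light.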

11.2. The Blue Sky

The spectrum of visible light from the sun incident on the outer atmosphere is
essentially flat as shown below.

We just learned that light scatters as it traverses the atmosphere according to
Rayleigh’s scattering law: I(λ) ∝ 1/λ4 .

The following figures illustrate why Rayleigh scattering implies that the sky is
blue.

11.2.1. Sunsets

We have focused on a blue sky, but red sunsets occur for the same reason–
Rayleigh scattering.

If we look directly at the sun during a sunset (or sunrise) it appears red because
most of the blue light has scattered in other directions.

This is more pronounced at dawn or dusk since the light must traverse more of the
atmosphere at those times than at noonday, at which time the sun appears yellow
in color.

11.2.2. White Clouds

We might expect that clouds should be highly colored since they consist of droplets
of water which scatter light very effectively.

The key difference between light scattering by clouds versus by the atmosphere is
the size of the scatterer.

The water droplets are much larger than the wavelength of the light–quite the
opposite of the case above.

In this limit an entirely different analysis is made–one does not have Rayleigh
scattering but instead has a process called Mie scattering.

In some contexts, particularly in liquid suspensions, Mie scattering is referred to


as Tyndall scattering

Key Equations for Exam 2

Listed here are some of the key equations for Exam 2. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• The wavefunctions for the hydrogenic system are

ψnlm (r, θ, φ) = Rnl (r)Ylm (θ, φ) (11.6)

• The radial part is

Rnl(σ) = Anl (2σ/n)^l e^(−σ/n) L^(2l+1)_(n+1)(2σ/n),    (11.7)

where the normalization constant, Anl, depends on the n and l quantum
numbers as

Anl = −√[ (2Z/(na0))³ (n − l − 1)! / (2n[(n + l)!]³) ]    (11.8)

• The energy levels for the hydrogenic system are given by

En = −Z²R/n²    (11.9)

• The wavefunctions for the harmonic oscillator are

ψn(y) = An Hn(y) e^(−y²/2),   y = (km/~²)^(1/4) x,   An = 1/√(2ⁿ n! √π),    (11.10)

where An is the normalization constant for the nth eigenfunction and Hn(y)
are the Hermite polynomials.

• The energy levels are

En = (n + 1/2)~ω,   ω = √(k/m)    (11.11)

• The Morse potential is

Ee(R) = De[1 − e^(−β(R−Req))]²,    (11.12)

where De is the well depth and β = 2πcω̃e√(μ/(2De)) is the Morse parameter.
Note: this expression for the Morse potential has the zero of energy at the
bottom of the well (i.e. R = Req, Ee(Req) = 0).

• The Morse potential can also be written as

Ee(R) = De[e^(−2β(R−Req)) − 2e^(−β(R−Req))].    (11.13)

Now the zero of energy is the dissociated state (i.e. R → ∞, Ee(R → ∞) = 0).

• The energy levels for the Morse oscillator are of the form

Evib,n = −De + hcω̃e(n + 1/2) − hcω̃e xe (n + 1/2)²,    (11.14)

where ω̃e xe together is the anharmonicity constant, with xe = hcω̃e/(4De).

• Bond order is defined as

BO = (1/2)(# of bonding electrons − # of antibonding electrons)    (11.15)

• The Rayleigh scattering law is

I(λ) ∝ 1/λ4 ∝ ω 4 (11.16)

Part III

Statistical Mechanics and The Laws of Thermodynamics
12. Rudiments of Statistical
Mechanics

When we study simple systems like a single molecule, we use a very detailed
theory, quantum mechanics.

However, most of the time in the real world we are dealing with macroscopic
systems, say, at least 100 million molecules.

It is simply impossible, even with the fastest computers, to write down and solve
the Schrödinger equation for those 100 million molecules, let alone Avogadro's
number of molecules.

So we need a less detailed theory called statistical mechanics, which allows one to
handle macroscopic sized systems without losing too much of the rigor.

12.1. Statistics and Entropy

Probability and statistics is at the heart of statistical mechanics.

We will need some definitions

• Ensemble: A large collection of equivalent macroscopic systems. The sys-


tems are the same except that each one is in a different so-called microstate.

• Microstate: The single particular state of one member of the ensemble given
by listing the individual states of each of the microscopic systems in the
macroscopic state.

• Configuration: The collection of all equivalent microstates. The number of


possible configurations is defined as W.

Boltzmann developed an equation to connect the microscopic properties of an


ensemble to the macroscopic properties. The Boltzmann equation is

S = k ln W (12.1)

Where S is entropy and k is Boltzmann’s constant.

12.1.1. Combinations and Permutations

Consider a random system that when measured can appear in one of two outcomes
(e.g., flipping coins).

One valuable piece of statistical information about a system is knowing how many
different ways the system can give, say, outcome 1 p times in N measure-
ments.

This is given by the mathematical formula for combinations

C(N, p) = N! / [p!(N − p)!].    (12.2)

The number C(N, p) is also called the binomial coefficient because it gives the
coefficient for the pth order term in the expansion

(1 + x)^N = Σ_{p=0}^{N} C(N, p) x^p.    (12.3)

This formula will allow us to derive a normalization constant so that we can obtain
the probability of obtaining p measurements of state 1.

Set x = 1 in the above. This gives

X
N
N
(1 + 1) = C(N, p)(1)p (12.4)
p=0

X
N
2N = C(N, p).
p=0

So the probability of any one outcome of N measurements is


1 1 N!
P (N, p) = N
C(N, p) = N (12.5)
2 2 p!(N − p)!

For combinations we did not care in what order the results of the measurements
occurred.

Sometimes the order is important.

So rather than a particular combination, we are interested in a particular permu-
tation. The number of distinct permutations is given by

W(N, {Ni}) = N! / (N1! N2! N3! · · ·)    (12.6)

where N is the total number of measurements and Ni is the number of indistin-
guishable results of type i.

∗ ∗ ∗ See Examples on Handout ∗ ∗∗
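As a small sketch of Eqs. (12.2), (12.5), and (12.6), using coin flips as the example
(the handout examples are not reproduced here): the Python below computes the number
of ways of getting p heads in N flips, the corresponding probability, and the number of
permutations for a set of occupation numbers.

from math import comb, factorial, prod

def P(N, p):
    """Probability of exactly p 'heads' in N fair two-outcome measurements."""
    return comb(N, p) / 2**N

def W(occupations):
    """Number of distinct permutations W = N! / (N1! N2! ...)."""
    N = sum(occupations)
    return factorial(N) // prod(factorial(Ni) for Ni in occupations)

print("C(10, 5) =", comb(10, 5))          # 252 ways of getting 5 heads in 10 flips
print("P(10, 5) =", P(10, 5))             # about 0.246
print("W for {4, 3, 3} =", W([4, 3, 3]))  # permutations of 10 objects in three groups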

For both combinations and permutations we need to evaluate factorials.

This is no problem for small numbers, but when we consider macroscopic systems
(10²⁰ or so molecules) no calculator can handle factorials of such large numbers.

Stirling's Approximation:

• In place of evaluating factorials of large numbers one can use Stirling's ap-
proximation to approximate the value of the factorial.

• Stirling's approximation is

ln(N!) ≈ N ln N − N    (12.7)
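The quality of Stirling's approximation improves quickly with N, as the sketch below
shows by comparing ln(N!) (computed exactly via the log-gamma function) against
N ln N − N.

from math import lgamma, log

def ln_factorial(N):
    """Exact ln(N!) via the log-gamma function."""
    return lgamma(N + 1)

for N in (10, 100, 1000, 10**6):
    exact = ln_factorial(N)
    stirling = N * log(N) - N
    rel_err = (exact - stirling) / exact
    print(f"N = {N:>8}: ln N! = {exact:14.2f},  N ln N - N = {stirling:14.2f},  rel. error = {rel_err:.2e}")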

12.2. Fluctuations

When we list the macroscopic properties of a material such as a beaker of benzene


or the air of the atmosphere, we speak of the average value of the property.

Macroscopic equilibrium is a dynamic rather than static equilibrium. Conse-


quently, the value of a certain property fluctuates about the average value. Often
this fluctuation is not important, but sometimes it is important.

The fluctuation about an average value for any observable property O is described
by the variance, which is defined as

σO² ≡ ⟨O²⟩ − Ō²,    (12.8)

the average of O² minus the square of the average. σO is considered the range
(spread) of the observable property.

It can be shown that

σO/Ō ≈ 1/√N,    (12.9)

where N is the number of particles. So, for example, if N = 10²⁴ then 1/√N = 10⁻¹².

For ensembles having large numbers of particles measured values of a property are
extremely sharply peaked about the average value.

13. The Boltzmann Distribution

Consider an isolated system of N molecules that has the set of energy levels {εi}
associated with it.

Since the system is isolated the total energy, E, and the total number of particles
will be constant.

The total energy is given by

E = Σi Ni εi,    (13.1)

where Ni is the number of particles in energy state i.

The total number of particles is, of course,

N = Σi Ni    (13.2)

The number of configurations for the system is then given by the number of
distinct permutations of the system

W = N! / (N1! N2! · · ·).    (13.3)

A system in equilibrium always tries to maximize entropy and minimize energy


and so the equilibrium configuration is a compromise between these two cases.

For the moment let us relax the isolation constraint.

Maximizing entropy corresponds to maximizing W (via S = k ln W ). This would


be the situation in which every particle was in a different energy state. That is
all Ni = 1 or 0.

Minimizing energy would be the case where all the particles are in the ground
state (say ε1).

These two situations are contradictory and some compromise must be obtained.

We start by considering our original system–that being one with constant energy,
E and number of particles N

To determine the equilibrium configuration we must find the maximum W subject


to the constraint of constant energy and constant number of particles.

This is done using the mathematical technique of Lagrange multipliers (page 951
of your calc book).

We will not discuss this method in detail and consequently we cannot derive the
equilibrium configuration.

The derivation using Lagrange multipliers arrives at the configuration in which

Ni = N [ gi e^(−βεi) / Σj gj e^(−βεj) ] = N pi,    (13.4)

where β ≡ 1/(kT) and gj denotes the degeneracy of states having energy εj.

The pi represents the probability that a randomly chosen particle or
system has energy εi. This is the Boltzmann distribution

pi = gi e^(−βεi) / Σj gj e^(−βεj)    (13.5)
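The sketch below evaluates the Boltzmann distribution, Eq. (13.5), for a made-up
three-level system (non-degenerate levels at 0, 200, and 400 cm⁻¹); the level energies
and temperatures are illustrative only.

import numpy as np

k_cm = 0.6950348   # Boltzmann constant in cm^-1 / K

def boltzmann_populations(levels_cm, degeneracies, T):
    """Fractional populations p_i = g_i exp(-e_i/kT) / sum_j g_j exp(-e_j/kT)."""
    levels = np.asarray(levels_cm, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    weights = g * np.exp(-levels / (k_cm * T))
    return weights / weights.sum()

levels = [0.0, 200.0, 400.0]   # cm^-1, illustrative
g = [1, 1, 1]
for T in (100.0, 300.0, 1000.0):
    pops = boltzmann_populations(levels, g, T)
    print(f"T = {T:6.0f} K: populations = {np.round(pops, 3)}")
# Higher levels become appreciably populated only when kT is comparable to the spacing.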

Since we started with an isolated system, β and hence T are constants. A given
energy E will correspond to a unique temperature T.

The analysis readily generalizes to variable energy i.e., nonisolated systems by


considering T as a variable.

13.1. Partition Functions

We have already come across both the partition functions that we will use in this
class.

The first is W –the number of configurations. This is called the microcanonical


partition function.

This partition function is not very useful to us so we will not discuss it further.

The second partition function is

Q = Σj gj e^(−βEj)    (13.6)

and is called the canonical partition function.

This was first encountered as the denominator of the Boltzmann distribution and
it is extremely important in statistical mechanics. (Note: the symbol Z is also
often used for the canonical partition function.)

The partition function is to statistical mechanics as the wavefunction is to quan-


tum mechanics. That is, the partition function contains all that can be known
about the ensemble.

We shall see in the next chapter that the partition function will provide a link
between the microscopic (quantum mechanics or classical mechanics) and the
macroscopic (thermodynamics).

In fact we have already seen this in S = k ln W. But this is an inconvenient
connection because, among other reasons, energy levels and temperature do
not explicitly appear.

There are other partition functions that are useful in different situations but we
will do nothing more than list two important ones here: i) the grand canonical
partition function and ii) the isothermal—isobaric partition function

13.1.1. Relation between the Q and W

When we get to connecting quantum mechanics with thermodynamics it will prove


convenient to use Boltzmann’s equation (S = k ln W ) but as was stated earlier it
is not convenient to use the microcanonical partition function (W ).

In the following we give an argument which provides a relation between the par-
tition functions. It is not an exact relation as we derive it, but it is a very good
approximation for large numbers of particles.

The microcanonical partition function describes a system at fixed energy E. In
fact W is the number of available states of the ensemble at the particular energy
E. This is essentially the same as the degeneracy of the ensemble gE .

Conversely the canonical partition function describes a system with variable en-
ergy.

However, based on our previous discussion of fluctuations, even though the energy
of the ensemble is allowed to vary, the number of states with energy equal to the
average energy Ē is overwhelmingly large. That is, almost every state available
to the ensemble has energy Ē.

We can express these ideas mathematically to come up with a relation between


W and Q.

The canonical partition function is

Q = Σj gj e^(−βEj),    (13.7)

but to a good approximation

Q ≈ gĒ e^(−βĒ).    (13.8)

Now since the degeneracy is essentially the microcanonical partition function we
have

Q ≈ W e^(−βĒ).    (13.9)

So the canonical partition function is a Boltzmann weighted version of the micro-


canonical partition function.

We will soon make use of Boltzmann's equation in terms of the canonical
partition function:

ln Q ≈ ln(W e^(−βĒ)) = ln W + ln(e^(−βĒ))    (13.10)
     = ln W − Ē/(kT),

where ln W = S/k. So,

S = k ln Q + Ē/T    (13.11)

13.2. The Molecular Partition Function

We ended the previous chapter by stating the total molecular energy (about the
center of mass) as

ε = εele + εvib + εrot.    (13.12)

This is a consequence of the Born-Oppenheimer approximation.

If we include the center of mass translational motion this is

ε = εele + εvib + εrot + εtrans    (13.13)

The ith total energy level is

εi = εele,n + εvib,v + εrot,J + εtrans,m.    (13.14)

Now suppose we have a collection of molecules in a macroscopic system. A given con-
figuration (say, configuration j) of that system has total energy Ej.

So the canonical partition function is

Q = Σj gj e^(−βEj)    (13.15)

But, each Ej is made up of the contributions of all of the molecules:

Ej = εl^a + εm^b + εn^c + · · ·    (13.16)

where the superscripts a, b, c, . . . label the molecules. The partition function can
then be factored as

Q = Σj gj e^(−βEj) = Σ_{l,m,n,···} (gl^a gm^b gn^c · · ·) e^(−β(εl^a + εm^b + εn^c + ···))    (13.17)
  = [Σl gl^a e^(−βεl^a)] [Σm gm^b e^(−βεm^b)] [Σn gn^c e^(−βεn^c)] · · ·
  = qmol,a qmol,b qmol,c · · ·,

where the qmol,i are the molecular partition functions.

The total canonical partition function is the product of the molecular partition
functions.

For the case where the molecules are the same, all the qmol,i are the same
(qmol,i = qmol), thus

Q = qmol^N / N!.    (13.18)

This allows us to focus only on a single molecule:

qmol = Σi gi e^(−βεi) = Σ_{n,v,J,m} gele,n gvib,v grot,J gtrans,m e^(−β(εele,n + εvib,v + εrot,J + εtrans,m))    (13.19)
     = [Σn gele,n e^(−βεele,n)] [Σv gvib,v e^(−βεvib,v)] [ΣJ grot,J e^(−βεrot,J)] [Σm gtrans,m e^(−βεtrans,m)]
     = qele qvib qrot qtrans

We now collect below the expressions for each of these partition functions. You
will get the chance to derive each of these for your homework.

The Translational Partition Function

qtrans = V / Λ³    (13.20)

where

Λ ≡ h / √(2πmkT)    (13.21)

is the thermal de Broglie wavelength.

The Rotational Partition Function (linear molecules)

We will discuss rotations next semester.

However, the high temperature limit, which works for all gases (of linear molecules)
except H2, is

qrot ≈ T / (σθr)    (13.22)

where θr ≡ h²/(8π²Ik) (I is the moment of inertia) and σ is the so-called symmetry
number, in which σ = 1 for unsymmetrical molecules and σ = 2 for symmetrical
molecules.

The Vibrational Partition Function

qvib = e^(−β~ω/2) / (1 − e^(−β~ω)) = 1 / (2 sinh(β~ω/2))    (13.23)
Note this is for the harmonic oscillator. At temperatures well below the dissocia-
tion energy this is a very good approximation. (You will derive this as a homework
problem.)

The Electronic Partition Function

There are usually only a very few electronic states of interest. Only at exceedingly
high temperatures does any state other than the ground state(s) become important,
so

qele = Σi gele,i e^(−βεele,i) ≈ gele,ground    (13.24)
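Putting the pieces together, the sketch below evaluates qtrans, qrot, and qvib for a
diatomic at 298 K. The molecular constants (mass, rotational temperature θr,
vibrational wavenumber) are rough, N2-like values chosen only to illustrate the
relative magnitudes; qele is taken as the ground-state degeneracy, 1.

import numpy as np

k = 1.380649e-23          # J/K
h = 6.62607015e-34        # J s
c = 2.99792458e10         # cm/s
amu = 1.66053906660e-27   # kg

T = 298.0
V = 0.0248                # m^3, roughly the molar volume of a gas at 1 bar, 298 K

# Rough N2-like constants (illustrative)
m = 28.014 * amu
theta_r = 2.88            # K, rotational temperature
sigma = 2                 # symmetry number for a homonuclear diatomic
nu_tilde = 2359.0         # cm^-1, vibrational wavenumber

# Translational: q = V / Lambda^3, Lambda = h / sqrt(2 pi m k T)
Lambda = h / np.sqrt(2 * np.pi * m * k * T)
q_trans = V / Lambda**3

# Rotational (high-temperature limit): q = T / (sigma theta_r)
q_rot = T / (sigma * theta_r)

# Vibrational (harmonic, zero of energy at the bottom of the well)
x = h * c * nu_tilde / (k * T)          # beta * hbar * omega
q_vib = np.exp(-x / 2) / (1 - np.exp(-x))

print(f"q_trans ~ {q_trans:.3e}")
print(f"q_rot   ~ {q_rot:.1f}")
print(f"q_vib   ~ {q_vib:.3e}")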

14. Statistical Thermodynamics

The partition function allows one to calculate ensemble averages which correspond
to macroscopically measurable properties such as internal energy, free energy,
entropy etc.

In this chapter we will obtain expressions for internal energy, U, pressure, P,


entropy, S, and Helmholtz free energy, A. With these quantities in hand we will,
in the subsequent chapters, formally develop thermodynamics with no need to
refer back to the partition function.

Ensemble averages
The ensemble average of any property is given by

Ō = (1/Q) Σi Oi gi e^(−βEi).    (14.1)

Internal energy
One critical property of an ensemble is the average (internal) energy U:

U ≡ Ē = (1/Q) Σi Ei gi e^(−βEi).    (14.2)

Let us look closer at the above expression. Recall that

Q = Σi gi e^(−βEi).    (14.3)
Now taking the derivative of Q with respect to β gives

(∂Q/∂β)n,V = (∂/∂β Σi gi e^(−βEi))n,V = Σi gi (∂e^(−βEi)/∂β)n,V    (14.4)
           = −Σi gi Ei e^(−βEi).

By comparing this to the expression for U, we see

U = −(1/Q)(∂Q/∂β)n,V = −(∂ ln Q/∂β)n,V,    (14.5)

where we used the identity (1/y)(∂y/∂x) = ∂ ln y/∂x.

Pressure
Another important property is pressure.

When the ensemble is in the particular state i, dEi = −pi dV. So at constant
temperature and number of particles

pi = −(∂Ei/∂V)n,β    (14.6)

Thus the ensemble average pressure is given by

P = p̄ = −(1/Q) Σi gi (∂Ei/∂V)n,β e^(−βEi).    (14.7)

Multiplying by β/β we get

P = −(1/(βQ)) Σi gi (∂Ei/∂V)n,β β e^(−βEi).    (14.8)

Using the chain rule in reverse, i.e.,

∂e^(−βEi)/∂V = (∂e^(−βEi)/∂Ei)(∂Ei/∂V) = −β e^(−βEi) (∂Ei/∂V),    (14.9)

we proceed as

P = (1/(βQ)) Σi gi (∂e^(−βEi)/∂V)n,β = (1/(βQ)) (∂/∂V Σi gi e^(−βEi))n,β    (14.10)
  = (1/(βQ)) (∂Q/∂V)n,β = (1/β)(∂ ln Q/∂V)n,β.

Entropy
We have already obtained the expression for entropy. It is

S = U/T + k ln Q    (14.11)
  = −kβ(∂ ln Q/∂β)n,V + k ln Q
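To see these formulas in action, the sketch below uses sympy on a toy two-level system
(non-degenerate levels at 0 and ε, an assumption made purely for illustration): it builds
Q, obtains U from −∂ ln Q/∂β, and S from U/T + k ln Q.

import sympy as sp

beta, eps, k = sp.symbols('beta epsilon k', positive=True)

# Two-level system: levels 0 and epsilon, both non-degenerate
Q = 1 + sp.exp(-beta * eps)

# Internal energy: U = -d(ln Q)/d(beta)
U = sp.simplify(-sp.diff(sp.log(Q), beta))
print("U =", U)          # epsilon / (exp(beta*epsilon) + 1)

# Entropy: S = U/T + k ln Q, with beta = 1/(kT)
T = 1 / (k * beta)
S = sp.simplify(U / T + k * sp.log(Q))
print("S =", S)

# Limits: at very high T (beta -> 0), U -> epsilon/2 and S -> k ln 2
print("U at beta -> 0:", sp.limit(U, beta, 0))
print("S at beta -> 0:", sp.limit(S, beta, 0))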

Helmholtz Free Energy
Free energy is the energy contained in the system which is available to do work.
That is, it is the energy of the system minus the energy that is “tied-up” in the
random (unusable) thermal motion of the particles in the system: A ≡ U − TS

Free energy is probably the key concept in thermodynamics and so we will discuss
it in much greater detail later. We will make the distinction between the Helmholtz
free energy and the more familiar Gibbs free energy (G) later as well.

The Helmholtz free energy has the most direct relation to the partition function
as can be seen from

A ≡ U − TS = −(∂ ln Q/∂β)n,V + kTβ(∂ ln Q/∂β)n,V − kT ln Q    (14.12)
           = −kT ln Q

Any thermodynamic property can now be obtained from the above functions as
we shall see in the following chapters.

15. Work

We now begin the study of thermodynamics.

Thermodynamics is a theory describing the most general properties of macroscopic


systems at equilibrium and the process of transferring between equilibrium states.

Thermodynamics is completely independent of the microscopic structure of the


system.

15.1. Properties of Partial Derivatives

Of critical importance in mastering thermodynamics is to become proficient with


partial derivatives.

∗ ∗ ∗ See Handout ∗ ∗∗

15.1.1. Summary of Relations

1. The total derivative of z(x, y):

dz = (∂z/∂x)y dx + (∂z/∂y)x dy    (15.1)

2. The chain rule for partial derivatives:

(∂z/∂x)y = (∂z/∂u)y (∂u/∂x)y    (15.2)

3. The reciprocal rule:

(∂z/∂x)y (∂x/∂z)y = 1    (15.3)

4. The cyclic rule:

(∂z/∂x)y = −(∂z/∂y)x (∂y/∂x)z    (15.4)

5. Finally

(∂z/∂x)u = (∂z/∂x)y + (∂z/∂y)x (∂y/∂x)u    (15.5)
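The cyclic rule is worth testing on a concrete equation of state. The sympy sketch below
checks that (∂P/∂V)T (∂V/∂T)P (∂T/∂P)V = −1 for the ideal gas law (introduced formally
in the next chapter), treating n and R as constants.

import sympy as sp

P, V, T, n, R = sp.symbols('P V T n R', positive=True)

P_of = n * R * T / V        # P(V, T)
V_of = n * R * T / P        # V(T, P)
T_of = P * V / (n * R)      # T(P, V)

dPdV = sp.diff(P_of, V)     # (dP/dV) at constant T
dVdT = sp.diff(V_of, T)     # (dV/dT) at constant P
dTdP = sp.diff(T_of, P)     # (dT/dP) at constant V

product = dPdV * dVdT * dTdP
# The mixed variables cancel once P is substituted back in terms of V and T.
print(sp.simplify(product.subs(P, P_of)))   # prints -1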

15.2. Definitions

System: a collection of particles

Macroscopic systems: Systems containing a large number of particles.

Microscopic systems: Systems containing a small number of particles.

Environment: Everything not included in the system (or set of systems)

Note that the distinction between the system and the environment is arbitrary
and is chosen as a matter of convenience.

15.2.1. Types of Systems

Isolated system: A system that cannot exchange matter or energy with its envi-
ronment.

Closed system: A system that cannot exchange matter with its environment but
may exchange energy.

Open system: A system that may exchange matter and energy with its environ-
ment.

Adiabatic system: A closed system that also can not exchange heat energy with
its environment.

15.2.2. System Parameters

Extensive parameters (or properties): properties that depend on the amount of


matter.

• For example, volume, mass, heat capacity.

Intensive parameters (or properties): properties that are independent of the


amount of matter.

• For example, temperature, pressure, density.

Extensive properties can be “converted” to intensive properties through ratios:

Extensive property / Extensive property → Intensive property.    (15.6)

For example, mass/volume = density, volume/moles = molar volume, and
heat capacity/mass = specific heat.

15.3. Work and Heat

A system may exchange energy with its environment or another system in the
form of work or heat.

• Work is exchanged if external parameters are changed during the process.

• Heat is exchanged if only internal parameters are changed during the process.

Convention
Work, w, is positive (w > 0) if work is done on the system.

Work is negative (w < 0) if work is done by the system.

Heat, q, is positive (q > 0) if heat is absorbed by the system.

Heat is negative (q < 0) if heat is released from the system.

15.3.1. Generalized Forces and Displacements

In physics you learned that an infinitesimal change in work is given by the product
of force, F, times an infinitesimal change in position, dx:

dw = F dx.    (15.7)

For thermodynamics, we need a more general definition of infinitesimal work.

Any given external parameter, A may be considered as a ‘generalized force’ which


is coupled to a particular internal parameter, a, which acts as ‘generalized dis-
placement.’

Note that the generalized force need not have units of force (e.g., Newtons) and
the generalized displacement need not have units of position (e.g., meters), but
the product of the two must have units of energy (e.g., Joules).

The infinitesimal amount of work done on the system is then given by

dw = Ada, (15.8)

or more generally as
X
dw = Ai dai (15.9)
i

if more than one set of parameters change.

The following table gives some examples of generalized forces and displacements
Generalized Force, A Generalized Displacement, a Contribution to dw
Pressure, −P Volume, dV −P dV
Stress, σ Strain, dε σdε
Surface tension, γ Surface area, dA γdA
Voltage, E Charge, dQ EdQ
Magnetic Field, H Magnetization, dM HdM
Chemical Potential, μ Moles, dn μdn
Gravity, mg Height, dh mgdh

15.3.2. P V work

In principle all work is interchangeable so that without loss of generality we will


develop the formal aspects of thermodynamics assuming all work is due to changes
in volume under a given pressure. That is

dw = −P dV, (15.10)

this is called P V work.

When we get to applications of thermodynamics we will then be concerned with


the various forms of work like those shown in the table above.

Expanding Gases

Consider the work done by a gas expanding in a piston from volume V1 to V2 against
some constant external pressure P = Pex (see figure).

The force exerted on a gas by a piston is equal to the external pressure times the
area of the piston: F = Pex A ⇒ Pex = F/A.

Recall from physics that work is the (path) integral over force: w = −∫x1^x2 F dx.
This can be manipulated as

w = −∫x1^x2 F dx = −∫x1^x2 (F/A) A dx = −∫V1^V2 Pex dV    (15.11)

If Pex is independent of V then

w = −∫V1^V2 Pex dV = −Pex ∫V1^V2 dV = −Pex ΔV    (15.12)

16. Maximum Work and Reversible
changes

Now that we have learned about PV work we will consider the situation where
the system does the maximum amount of work possible.

16.1. Maximal Work: Reversible versus Irreversible changes

The value of w depends on Pex during the entire expansion.

In the figure

wA = −∫V1^V2 Patm dV = −Patm(V2 − V1)    (16.1)

and

wB = w1 + w2,    (16.2)

where

w1 = −∫V1^Vi Patm+2W dV = −Patm+2W (Vi − V1)    (16.3)

and

w2 = −∫Vi^V2 Patm dV = −Patm(V2 − Vi)    (16.4)

Hence it is clear that |wB | > |wA | .

Now consider the case in the figure below.

The expansion is reversible. That is, there is always an intermediate equilibrium
throughout the expansion. Namely Pgas = Pex. So,

wrev = −∫V1^V2 Pgas dV    (16.5)

This is the limiting case of path B in the previous figure. Thus wrev is the maxi-
mum possible work that can be done in an expansion. wrev = wmax .

16.2. Heat Capacity

Temperature and heat are different.

Temperature is not the amount of heat.

Temperature is an intensive property and heat is an extensive property.

However, heat is related to temperature through the heat capacity

C(T) = dq/dT. (16.6)
n.b., heat capacity is a function of T ; it is not a constant.

From this equation

dq = C(T) dT. (16.7)

That is, when the temperature of a substance having a heat capacity C(T) is
changed by dT, an amount dq of heat energy is transferred.

The heat capacity also depends on the conditions during the temperature change,
e.g., CV(T) = (dq/dT)_V and CP(T) = (dq/dT)_P are not the same.

Heat capacity is an extensive property. To make an intensive property one can:

1. divide by the number of moles to get the molar heat capacity

   CVm(T) = (1/n) (dq/dT)_V (16.8)

2. divide by the mass to get the specific heat

   cV = (1/m) (dq/dT)_V (16.9)

We will discuss heat capacity more later.

16.3. Equations of State

The macroscopic properties of matter are related to one another via a phenomenological
equation of state.

The state of a pure, homogeneous material (in the absence of external fields) is
given by the values of any two intensive properties.

(More complicated systems require more than two independent variables, but
behave in the same way as the more simple pure system, so we will focus our
development of thermodynamics on simple systems.)

The functional dependence of any property on the two independent variables is
an equation of state. For example, if T and P are chosen as the independent
variables, then the heat capacity is a function of T and P: C(T, P).

16.3.1. Example 1: The Ideal Gas Law

The equation of state for volume of an ideal gas is

P V = nRT , (16.10)

where R is the gas constant (8.315 J K−1 mol−1 ) and n is the number of moles.

The ideal gas equation of state can be expressed in terms of intensive variables
only,
P Vm = RT, (16.11)
where Vm = V/n.

The equation of state can also be expressed in terms of the density ρ = m/V (and the
molar mass M = m/n):

ρ = mP/(nRT) = MP/(RT). (16.12)

16.3.2. Example 2: The van der Waals Equation of State

A more realistic equation of state was presented by van der Waals:

P = nRT/(V − nb) − n²a/V². (16.13)

The parameter a attempts to account for the attractive forces among the particles

The parameter b attempts to account for the repulsive forces among the particles

b originates from hard sphere collisions (see figure):

In terms of intensive variables

P = RT/(Vm − b) − a/Vm². (16.14)
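A minimal numerical sketch comparing the ideal gas and van der Waals pressures (the a and b values quoted are typical tabulated numbers for CO2 and should be checked against a data table; the temperature and molar volume are arbitrary):

    # Ideal gas vs. van der Waals pressure in intensive form (Vm = V/n)
    R = 8.315            # J / (K mol)
    T = 300.0            # K (assumed)
    Vm = 1.0e-3          # m^3 / mol (assumed molar volume)
    a = 0.3640           # Pa m^6 mol^-2, van der Waals a for CO2 (typical tabulated value)
    b = 4.267e-5         # m^3 mol^-1,   van der Waals b for CO2 (typical tabulated value)

    P_ideal = R * T / Vm
    P_vdw = R * T / (Vm - b) - a / Vm**2
    print(P_ideal, P_vdw)    # the a term lowers P, the excluded volume b raises it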

16.3.3. Other Equations of State

The van der Waals equation of state is not the only one that has been proposed.
Some other equations of state are

• Berthelot

  P = nRT/(V − nb) − n²a/(T V²) = RT/(Vm − b) − a/(T Vm²) (16.15)

• Dieterici

  P = nRT e^(−an/(RTV))/(V − nb) = RT e^(−a/(RT Vm))/(Vm − b) (16.16)

• Redlich–Kwong

  P = nRT/(V − nb) − n²a/(√T V(V − nb)) = RT/(Vm − b) − a/(√T Vm(Vm − b)) (16.17)

17. The Zeroth and First Laws of
Thermodynamics

Over the course of the next two lectures we will discuss the four core laws of
thermodynamics.

Today we will cover the zeroth and first laws, which deal with temperature and
total energy respectively.

Next time we will cover the second and third laws which both deal with entropy.

17.1. Temperature and the Zeroth Law of Thermodynamics

Temperature tells us the direction of thermal energy (heat) flow.

• Heat flows from high T to low T.

Temperature scales

• Celsius: A relative scale based on water (T = 0◦ C for melting ice and


T = 100◦ C for boiling water)

• Kelvin: An absolute temperature scale based on the ideal gas law. The
temperature at which (for fixed V and n) the pressure is zero is defined as
T =0K

• T (Kelvin) = T (Celsius) + 273.15

Standard conditions

• standard temperature and pressure (STP): T = 273.15 K and P = 1 atm.


(Vm (STP) = 22.414 L/mol)

• standard ambient temperature and pressure (SATP): T = 298.15 K and


P = 1 bar. (Vm (SATP) = 24.789 L/mol)

Diathermic wall: A wall that allows heat to flow through it.

Adiabatic wall: A wall that does not allow heat to flow through it.

Thermal equilibrium: If two systems are in contact along a diathermic wall and
no heat flows across the wall, then the systems are in thermal equilibrium.

The zeroth law of thermodynamics

• Mathematical statement:

If TA = TB and TB = TC , then TA = TC (17.1)

This is the mathematical statement of transitivity.

• Verbal statement: If system A is in thermal equilibrium with system B


and system B is in thermal equilibrium with system C, then system A is also in
thermal equilibrium with system C.

The zeroth law implies that if an arbitrary system, C, is chosen as a thermometer


then it will read the same temperature when it is in thermal contact along a
diathermic wall with system A as when it is in thermal contact along a diathermic
wall with system B.

17.2. The First Law of Thermodynamics

Definitions:

• State: the state of a system is defined by specifying a minimum number of
  intensive variables

• State Function: A function of the chosen independent variables that de-


scribes a property of the state (e.g., V (T, P )). The value of the state func-
tion depends only on that given state and on no other possible state of the
system.

17.2.1. The internal energy state function

For characterizing the change in energy of a system, one is concerned with the
work done on the system (w) and the heat supplied to the system (q). The energy
of a system is called the internal energy (U) of the system.

The first law of thermodynamics:

• Mathematical statement:
ΔU = q + w (17.2)
or in differential form
dU = dq + dw (17.3)

• Verbal statement: The change in internal energy of a system is equal to the


amount of work done on the system plus the amount of heat provided to the
system.

So for a system where all the work is PV work the first law becomes

ΔU = q − ∫_{V1}^{V2} Pex dV (17.4)

in differential form this is
dU = dq − Pex dV (17.5)

Although U can be expressed as a function of any two state variables, the most
convenient at this time are V and T. U → U(T, V ).

The total differential of U(T, V) is

dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV (17.6)

Consider adding heat at constant volume (dV = 0); then

dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV = dq − Pex dV. (17.7)

So,

(∂U/∂T)_V dT = dq =⇒ (∂U/∂T)_V = dq/dT = CV. (17.8)

Hence the slope (∂U/∂T)_V is the heat capacity.

The other slope, (∂U/∂V)_T, is called the internal pressure (it has no standard symbol).

A useful relation (derivation to come) is

(∂U/∂V)_T = T (∂P/∂T)_V − P (17.9)

Example: A van der Waals gas

P = nRT/(V − nb) − n²a/V²  ⇒  (∂P/∂T)_V = nR/(V − nb) (17.10)

so the useful relation becomes

(∂U/∂V)_T = T nR/(V − nb) − P = nRT/(V − nb) − nRT/(V − nb) + n²a/V²
          = n²a/V². (17.11)
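This little calculation is easy to check symbolically. The sketch below (using sympy; the symbol names are arbitrary) verifies that T(∂P/∂T)_V − P reduces to n²a/V² for the van der Waals equation of state:

    import sympy as sp

    # Internal pressure of a van der Waals gas: T*(dP/dT)_V - P
    n, R, T, V, a, b = sp.symbols('n R T V a b', positive=True)
    P = n*R*T/(V - n*b) - n**2*a/V**2

    internal_pressure = sp.simplify(T*sp.diff(P, T) - P)
    print(internal_pressure)   # -> a*n**2/V**2, in agreement with eq. (17.11)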

The equation of state for U : Express U in terms of T, V, and P.

Start with the total differential of U,

dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV, (17.12)

but (∂U/∂T)_V = CV and (∂U/∂V)_T = T (∂P/∂T)_V − P (the useful relation). Hence

dU = CV dT + [T (∂P/∂T)_V − P] dV (17.13)

is the equation of state for U.

A useful approximation is ΔU = CV ΔT, which is valid for

i) heat capacity nearly constant over ΔT and with no phase transitions,
ii) an ideal gas, or at constant volume.

18. The Second and Third Laws of
Thermodynamics

18.1. Entropy and the Second Law of Thermodynamics

We learned from statistical mechanics that entropy, S, is a measure of the disorder
of the system and is expressed via Boltzmann's equation S = k ln W (where W is
the microcanonical partition function).

We expressed Boltzmann's law in terms of the more convenient canonical partition
function as

S = Ē/T + k ln Q. (18.1)
Now, the average energy of the system Ē is in fact what we call internal energy:
U ≡ Ē.

Furthermore we derived the simple relation between the Helmholtz free energy
and the canonical partition function as A = −kT ln Q.

Hence,

S = U/T − A/T = (1/T)(U − A). (18.2)

Since U, A, and T are state functions, S is also a state function.

So we may write

dS = (1/T)(dU − dA) (18.3)
for an isothermal process.

Recall the definition of Helmholtz free energy–the energy of the system available
to do work.

We learned previously that the maximum amount of work one can extract from
the system is the work done during a reversible process. Hence dA = dwrev .

For now let us limit the discussion to reversible processes. Then

dS = (1/T)(dU − dwrev) = (1/T)(dqrev + dwrev − dwrev) (18.4)
   = dqrev/T. (Reversible process)

Note: An alternative approach to thermodynamics, which makes no reference to
molecules or statistical mechanics, is to simply begin by defining entropy as dS ≡ dqrev/T.

The principle of Clausius

• “The entropy of an isolated system will always increase in a spontaneous


process”

• Mathematical statement: (dS)U,V ≥ 0

For a general process: dU = dq − Pex dV

For a reversible process Pex = P and dq = T dS so dU = T dS − P dV

Since U, S, T, P, and V are state functions, dU = T dS − P dV holds for any
process, but in general, T dS is not heat and −P dV is not work. (see figure)

T dS is heat and −P dV is work only for reversible processes.

For some dU,

dq − Pex dV = T dS − P dV ⇒ T dS = dq − Pex dV + P dV, (18.5)

that is, T dS = dq + (P − Pex) dV.

• Case i) Pex > P then (spontaneous) dV is negative so (P −Pex )dV is positive.

• Case ii) P > Pex then (spontaneous) dV is positive so (P −Pex )dV is positive.

• Case iii) P = Pex then (spontaneous) dV is zero so (P − Pex )dV is zero.

Thus for any spontaneous process T dS ≥ dq.

This is a mathematical statement of the second law of thermodynamics

18.1.1. Statements of the Second Law

Unlike the first law, the second law has a number of equivalent statements

1. A cyclic process must transfer heat from a hot to cold reservoir if it is to


convert heat into work.

2. Work must be done to transfer heat from a cold to a hot reservoir.

3. A useful perpetual motion machine does not exist.

4. The entropy of the universe is increasing

5. Spontaneous processes are irreversible in character.

6. The entropy of an isolated system will always increase in a spontaneous


process (the principle of Clausius)

18.2. The Third Law of Thermodynamics

Consider the first law for a reversible change at constant volume.

dU = dq + dw = dq − Pex dV (18.6)

From our earlier discussion of heat capacity dq = CV dT (CV since constant vol-
ume). So,

dU = CV dT (18.7)

but also dU = T dS. So

dS = CV dT/T =⇒ ΔS = ∫_{T1}^{T2} (CV/T) dT. (18.8)

A very similar derivation can be done for a reversible change at constant pressure
(we cannot do it quite yet) to yield

ΔS = ∫_{T1}^{T2} (CP/T) dT. (18.9)

18.2.1. The Third Law

Verbal statement
The third law of thermodynamics permits the absolute measurement of entropy.

To derive the mathematical statement of the third law we start with

ΔS = ∫_{T1}^{T2} (CP/T) dT; (18.10)

now let T1 → 0:

ΔS = S2 − S0 = ∫_0^{T2} (CP/T) dT. (18.11)

Hence the mathematical statement of the third law is

S(T2) = ∫_0^{T2} (CP/T) dT + S0. (18.12)
From a macroscopic point of view S0 is arbitrary. However, a microscopic point of
view suggests S0 = 0 for perfect crystals of atoms or of totally symmetric molecules
(e.g., Ar, O2 etc.). S0 ≠ 0 for imperfect crystals and crystals of asymmetric
molecules (e.g., CO).

Alternative statement of the third law: Absolute zero is unattainable.

Consider the heat capacity as T → 0.

For S0 to have significance, CP/T must remain finite (not infinite) as T → 0. Thus CP → 0.

But CP = dq/dT → 0 implies dT/dq → ∞.

In other words, an infinitesimal amount of heat causes an infinite change in tem-


perature.

In view of what we have learned about fluctuations, the ever present random
fluctuations in energy provide the infinitesimal amount of heat and so you can
never reach absolute zero corresponding to an average energy of zero.

18.2.2. Debye’s Law

Heat capacity data only goes down so far. So one needs a theoretical extrapolation
down to T = 0. (Debye)

Postulate: CPm = aT³. That is, at low temperatures the heat capacity goes as the
cube of the temperature.

CP*m and T* are the values at the lowest-temperature data point. So, a = CP*m/T*³.

The molar entropy is

Sm(T*) = ∫_0^{T*} (CPm/T) dT = (CP*m/T*³) ∫_0^{T*} T² dT    (using CPm = aT³)
       = (CP*m/T*³) (T³/3)|_0^{T*} = CP*m/3. (18.13)
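Numerically the Debye extrapolation is trivial to apply. A minimal sketch (the heat capacity and temperature of the lowest data point are invented for illustration):

    # Debye extrapolation: with CPm = a*T^3 below T*, Sm(T*) = CPm(T*)/3
    CPm_star = 0.45     # J / (K mol), assumed molar heat capacity at the lowest data point
    T_star = 12.0       # K, assumed lowest temperature with data

    a = CPm_star / T_star**3          # Debye coefficient
    Sm_star = a * T_star**3 / 3       # integral of (a*T^3)/T from 0 to T*
    print(Sm_star, CPm_star / 3)      # identical: 0.15 J / (K mol)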

18.3. Time's Arrow

Entropy and the second law give a direction to time.

For example, if we see a picture of your PChem book in mint condition and a
picture of your PChem book all battered and beaten, we know which picture
was taken first.

The interesting thing is that each molecule in a macroscopic system obeys time
invariant dynamics. Both Newton’s laws and Quantum dynamics (next semester)
are the same if you replace t with −t.

Yet, the behavior of the macrosystem definitely changes if you replace t with −t.

Thus the simple fact that you have an enormous number of particles induces a
perceived asymmetry in time.

Key Equations for Exam 3

Listed here are some of the key equations for Exam 3. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• The Boltzmann equation is


S = k ln W. (18.14)

• The Boltzmann distribution:

  p_i = g_i e^(−βε_i) / Σ_j g_j e^(−βε_j) (18.15)

• The canonical partition function is

  Q = Σ_j g_j e^(−βE_j) (18.16)

• The relation between the canonical partition function and the molecular partition
  function is

  Q = q_mol^N / N!. (18.17)
• The translational partition function is

  q_trans = V/Λ³, (18.18)

  where
  Λ ≡ h/√(2πmkT) (18.19)
  is the thermal de Broglie wavelength.

• The rotational partition function (linear molecules) is

  q_rot ≈ T/(σ θ_r), (18.20)

  where θ_r ≡ h²/(8π²Ik) (I is the moment of inertia) and σ is the so-called sym-
  metry number, in which σ = 1 for unsymmetrical molecules and σ = 2 for
  symmetrical molecules.

• The vibrational partition function is

  q_vib = 1/(2 sinh(½βħω)). (18.21)
• The ensemble average of any property is given by

  Ō = (1/Q) Σ_i O_i g_i e^(−βε_i). (18.22)

• The relations between the canonical partition function and the thermodynamic
  variables are

  Helmholtz free energy   A = −kT ln Q
  Internal energy         U = −(1/Q)(∂Q/∂β)_{n,V} = −(∂ ln Q/∂β)_{n,V}
  Entropy                 S = −kβ (∂ ln Q/∂β)_{n,V} + k ln Q
  Pressure                P = (1/βQ)(∂Q/∂V)_{n,β} = (1/β)(∂ ln Q/∂V)_{n,β}

• P V work is
dw = −P dV. (18.23)

• Heat capacity:
  dq = C(T) dT. (18.24)

• General forms of the first law:

ΔU = q + w, (18.25)

in differential form this is

dU = dq − Pex dV. (18.26)

Also,
dU = T dS − P dV. (18.27)

• The second law


T dS ≥ dq. (18.28)

• The third law

  S(T2) = ∫_0^{T2} (CP/T) dT + S0 (18.29)
• Debye's law for entropy at very low temperatures:

  Sm(T*) = CP*m/3, (18.30)
where CP∗ m is the molar heat capacity at the lowest temperature for which
there is data.

Part IV

Basics of Thermodynamics

19. Auxiliary Functions and Maxwell
Relations

We have stated that thermodynamics as we are studying it deals with states in


equilibrium or transitions between equilibrium states.

Consequently, the concept of equilibrium plays a key role in much of what we will
discuss for the remainder of the year.

The equilibrium constant for a thermodynamic process, K, (which you are familiar
with from general chemistry) serves as a common point which connects thermo-
dynamics, electrochemistry, and kinetics–topics we will encounter throughout
the year.

19.1. The Other Important State Functions of Thermodynamics

As was the case in quantum mechanics, here too energy is the key property with
which to work.

So far we have encountered two state functions which characterize the energy of a
macroscopic system–the internal energy and, briefly the Helmholtz free energy.

From the first law as stated as

dU = T dS − P dV (19.1)

we say that the natural (most convenient) variables for the equation of state for
U are S and V . This is U = U(S, V )

Unfortunately S cannot be directly measured, and most often P is a more conve-
nient variable than V.

Because of this fact, it is handy to define state functions which have different pairs
of natural variables, so that no matter what situation arises we have convenient
equations of state to work with.

The other pairs of natural variables being (S and P ), (T and V ) and (T and P )

The table below lists these state functions

State function Symbol Natural variables Definition Units


Internal Energy U S and V energy
Enthalpy H S and P H ≡ U + PV energy
Helmholtz free energy A T and V A ≡ U − TS energy
Gibbs free energy G T and P G ≡ H − TS energy

We consider each of these functions in turn

19.2. Enthalpy

We want a state function whose natural variables are S and P

Let us try the definition H ≡ U + P V.

Now formally
dH = dU + d(PV) = dU + P dV + V dP, (19.2)

but dU = T dS − P dV, so

dH = T dS − P dV + P dV + V dP (19.3)
   = T dS + V dP.

Hence Enthalpy does indeed have the desired natural variables.

19.2.1. Heuristic definition:

Enthalpy is the total energy of the system plus the pressure–volume energy. So
a change in enthalpy is the change in internal energy adjusted for the PV work
done. If the process occurs at constant pressure then the enthalpy change is the
heat given off or taken in.

For example, consider a reversibly expanding gas under constant pressure (dP =
0) and adiabatic (dq = 0) conditions.

The system does work during the expansion; in doing so it must lose energy. Since
the process is adiabatic no heat energy can flow in to compensate for the work
done and the gas cools.

The total internal energy decreases. The enthalpy of the system on the other hand
does not change–it is the internal energy adjusted by an amount of energy equal
to the PV work done by the system. As freshmen we learn this as ΔH = qP.

19.3. Helmholtz Free Energy

Now we want a state function whose natural variables are T and V

Let us try the definition A ≡ U − T S.

Formally
dA = dU − d(TS) = dU − T dS − S dT, (19.4)
but dU = T dS − P dV, so

dA = T dS − P dV − T dS − S dT (19.5)
   = −P dV − S dT.

Hence Helmholtz free energy does indeed have the desired natural variables.

19.3.1. Heuristic definition:

As we have said before Helmholtz free energy is the energy of the system which is
available to do work–It is the internal energy minus that energy which is “used
up” by the random thermal motion of the molecules.

19.4. Gibbs Free Energy

Finally we want a state function whose natural variables are T and P

Let us try the definition G ≡ H − T S.

Now formally
dG = dH − d(TS) = dH − T dS − S dT, (19.6)
but from above dH = T dS + V dP, so

dG = T dS + V dP − T dS − S dT (19.7)
   = V dP − S dT.

Hence Gibbs free energy does indeed have the desired natural variables.

19.4.1. Heuristic definition:

Gibbs free energy is the energy of the system which is available to do non-PV
work–it is the internal energy minus both the energy which is "used up" by the
random thermal motion of the molecules and the energy used up in doing the PV work.

19.5. Heat Capacity of Gases

19.5.1. The Relationship Between CP and CV

To find how CP and CV are related we begin with

dH = T dS + V dP; (19.8)

at constant pressure and reversible conditions

dH = T dS = dq, (19.9)

but
dq = CP dT. (19.10)

The constant-pressure heat capacity can then be expressed in terms of enthalpy as

CP = (∂H/∂T)_P. (19.11)

So,

CP = (∂(U + PV)/∂T)_P = (∂U/∂T)_P + P (∂V/∂T)_P; (19.12)

note (∂U/∂T)_P is not CV; we need (∂U/∂T)_V. Use an identity of partial derivatives:

(∂U/∂T)_P = (∂U/∂T)_V + (∂U/∂V)_T (∂V/∂T)_P (19.13)

thus

CP = (∂U/∂T)_V + [(∂U/∂V)_T + P] (∂V/∂T)_P (19.14)
   = CV + (∂V/∂T)_P [(∂U/∂V)_T + P].

Recall the expression for the internal pressure, (∂U/∂V)_T = T (∂P/∂T)_V − P. Then

CP = CV + (∂V/∂T)_P [T (∂P/∂T)_V − P + P]. (19.15)

Finally

CP = CV + T (∂V/∂T)_P (∂P/∂T)_V (19.16)

Example: Ideal gases

1. Ideal gas (equation of state: PV = nRT): This equation is easily made
   explicit in either P or V so we don't need any of the above replacements.

   CP = CV + T (∂V/∂T)_P (∂P/∂T)_V (19.17)
      = CV + T (nR/P)(nR/V) = CV + nR (nRT/PV) = CV + nR.

   Thus CP = CV + nR, or

   CPm = CVm + R. (19.18)
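The same result can be checked symbolically. The following sketch (sympy, with arbitrary symbol names) evaluates the general relation CP − CV = T(∂V/∂T)_P(∂P/∂T)_V for the ideal gas equation of state:

    import sympy as sp

    # CP - CV = T * (dV/dT)_P * (dP/dT)_V evaluated for P V = n R T
    n, R, T, P, V = sp.symbols('n R T P V', positive=True)

    dV_dT_P = sp.diff(n*R*T/P, T)      # (dV/dT)_P from V = nRT/P
    dP_dT_V = sp.diff(n*R*T/V, T)      # (dP/dT)_V from P = nRT/V

    diff_CP_CV = T * dV_dT_P * dP_dT_V
    print(sp.simplify(diff_CP_CV.subs(V, n*R*T/P)))   # -> R*n, i.e. CP = CV + nR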

19.6. The Maxwell Relations

Summary of thermodynamic relations we’ve seen so far

Definitions and relations:

• H = U + PV

• A = U − TS

• G = H − TS
• CV = (∂U/∂T)_V,  CP = (∂H/∂T)_P

basic equations        Maxwell relations                 working equations

dU = T dS − P dV       (∂T/∂V)_S = −(∂P/∂S)_V            dU = CV dT + [T(∂P/∂T)_V − P] dV
dH = T dS + V dP       (∂T/∂P)_S = (∂V/∂S)_P             dH = CP dT − [T(∂V/∂T)_P − V] dP
dA = −P dV − S dT      (∂S/∂V)_T = +(∂P/∂T)_V            dS = (CV/T) dT + (∂P/∂T)_V dV
dG = V dP − S dT       (∂S/∂P)_T = −(∂V/∂T)_P            dS = (CP/T) dT − (∂V/∂T)_P dP

We will get plenty of practice with derivations based on these equations and on
the properties of partial derivatives. (See handout and Homework)

20. Chemical Potential

20.1. Spontaneity of processes

Two factors drive spontaneous processes

1. The tendency to minimize energy

2. The tendency to maximize entropy

Let us begin with Helmholtz free energy

The total differential of A is (A = U − T S)

dA = dU − T dS − SdT = dq − Pex dV − T dS − SdT (20.1)

For constant T and V, (dA)T,V = dq − T dS

From the second law, T dS ≥ dq for a spontaneous process, so (dA)T,V ≤ 0 for a
spontaneous process.
Hence at equilibrium (dA)T,V = 0.

For chemistry it is most often more convenient to use Gibbs free energy

The total differential of G is

dG = dH − T dS − SdT = dq − Pex dV + P dV + V dP − T dS − SdT (20.2)

For constant T and P = Pex , (dG)T,P = dq − T dS

Again from the second law, T dS ≥ dq for a spontaneous process, so (dG)T,P ≤ 0 for
a spontaneous process.

Hence at equilibrium (dG)T,P = 0.

So free energy provides a measure of the thermodynamic driving force towards


equilibrium.

Note free energy provides no information about how fast a process proceeds to
equilibrium.

The free energy functions are the workhorses of applied thermodynamics so we


want to get a feel for them.

Returning to the total differentials of free energy,

dA = dU − T dS − SdT (20.3)

and
dG = dH − T dS − SdT. (20.4)
Expressing dU and dH generally as dU = T dS − P dV and dH = T dS + V dP
(remember that in general T dS cannot be identified with dq and −P dV cannot be
identified with dw).

Plugging these into the total differentials of free energy gives

dA = −SdT − P dV (20.5)

and
dG = −SdT + V dP (20.6)

These expressions are quite general, but they assume i) only PV work and ii) a closed system.

The total differential of A is also

dA = dq + dw − T dS − SdT. (20.7)

For a reversible process dq = T dS and work is maximal.

Hence (dA)T = dwmax =⇒ (ΔA)T = wmax, as we have stated in words a number
of times before.

The total differential of G is also

dG = dq + dw + P dV + V dP − T dS − SdT. (20.8)

In general dw = dw′ − Pex dV, where dw′ is the non-PV work.

The total differential of G becomes

dG = dq + dw′ − Pex dV + P dV + V dP − T dS − S dT. (20.9)

For constant T and P = Pex, (dG)T,P = dq + dw′ − T dS.

For reversible processes dq = T dS and this becomes

(dG)T,P = dw′max =⇒ (ΔG)T,P = w′max. (20.10)

So, as stated earlier, the Gibbs free energy is the energy of the system available
to do non-PV work.

20.2. Chemical potential

What if the amount of substance can change?

Extensive properties depend on the amount of “stuff”

For example A(T, V) now becomes A(T, V, n) and the total differential becomes

dA(T, V, n) = (∂A/∂T)_{V,n} dT + (∂A/∂V)_{T,n} dV + (∂A/∂n)_{V,T} dn (20.11)

Let's focus on the slope (∂A/∂n)_{V,T}.

• This is a measure of the change in Helmholtz free energy of a system (at


constant T and V ) with the change in the amount of material.

• Physically, this is a measure of the potential to change the amount of mate-


rial.
• It defines the chemical potential μ ≡ (∂A/∂n)_{V,T}.

So we can also write


dA = −SdT − P dV + μdn (20.12)

What about the relation of the chemical potential to the Gibbs free energy?

G = H − TS = U − TS + PV = A + PV, so

dG = dA + P dV + V dP (20.13)
   = −S dT − P dV + μ dn + P dV + V dP
   = −S dT + V dP + μ dn,

but from

dG = (∂G/∂T)_{P,n} dT + (∂G/∂P)_{T,n} dP + (∂G/∂n)_{P,T} dn (20.14)

we see that

μ = (∂G/∂n)_{P,T}. (20.15)

So, μ is also a measure of the change in Gibbs free energy of a system (at constant
T and P ) with the change in the amount of material and it still has the same
physical meaning.

The Gibbs free energy per mole (Gm ) for a pure substance is equal to the chemical
potential. (Gm = μ)

20.3. Activity and the Activity coefficient

When, for example, a solute is dissolved in a solvent, there exist complicated


interactions which cause deviations from ideal behavior.

To account for this one must introduce the concept of activity and the activity
coefficient.

Activity is hard to define in words and indeed it has an awkward mathematical


definition as we will soon see.

The activity coefficient has a more convenient definition which is that it is the
measure of how a particular real system deviates from some reference system
which is usually taken to be ideal.

The mathematical definition of the activity ai of some species i is implicitly stated as

lim_{ζ→ζ°} (ai/g(ζ)) = 1, (20.16)

where g(ζ) is any reference function (e.g., pressure, mole fraction, concentration,
etc.), and ζ° is the value of ζ at the reference state.

This implicit definition is awkward, so for convenience one defines the activity
coefficient as the argument of the above limit,

γi ≡ ai/g(ζ), (20.17)

which we can rearrange as

ai = γi g(ζ). (20.18)

The definition of activity implies that γi = 1 at g(ζ°) (the reference state).

20.3.1. Reference States

Thermodynamics is founded on the concept of energy, which we know to have an
arbitrary scale. That is, we can define our zero of energy anywhere we want.

Because of this it is always necessary to specify a reference state to which our real
state can be compared.

The choice of this state is completely up to us, but it is often the case that the
reference state is chosen to be some ideal state.

For example, if we are talking about a gas we will most likely choose the ideal
gas law in terms of pressure (P = nRT/V) as our reference function and the
reference state being P = 0, since we know all gases behave ideally in the
limit of zero pressure.

Let us consider the activity of a real gas for the above reference function and
reference state. Note: the activity of gases as referenced to pressure has the
special name fugacity (fugacity is a special case of activity).

Our reference function is very simple: g(ζ) = ζ = P, so

γ = a/P  ⇒  a = γP. (20.19)

Thus the activity of our real gas is given by the activity coefficient times the
pressure of an ideal gas under the same conditions.

Based on the condition that γ → 1 as we approach the reference state (P = 0


in this case) we see that the activity (or fugacity) of a real gas becomes equal to
the pressure at low pressures.

20.3.2. Activity and the Chemical Potential

One cannot measure absolute chemical potentials, only relative potentials can be
measured. By convention we chose a standard state and measure relative to that
state.

The deviation of the chemical potential at the state of interest versus at the
reference state is determined by the activity at the current state (the activity at
the reference state is unity by definition).

μi − μ°i = RT ln ai. (20.20)

Rather than referencing to the standard state one can also reference to any con-
venient "ideal" state. This ideal state is in turn referenced to the standard state.
For the state of interest

μi = μ°i + RT ln ai (20.21)

and for the ideal state

μi^id = μ°i + RT ln ai^id  ⇒  μ°i = μi^id − RT ln ai^id. (20.22)

Thus,

μi = μi^id − RT ln ai^id + RT ln ai (20.23)
μi − μi^id = RT ln ai − RT ln ai^id
           = RT ln (ai/ai^id).
Example: Real and ideal gases at constant temperature, but any pressure.
Starting from the beginning (dT = 0),

dμ^id = dGm = −Sm dT + Vm dP = Vm dP = (RT/P) dP. (20.24)

Now we integrate from the reference state to the current state of interest:

∫_{μ°}^{μ^id} dμ^id = ∫_{P°}^{P} (RT/P) dP. (20.25)

This gives

μ^id − μ° = RT ln (P/P°). (20.26)

The usual standard state is the ideal gas at P° = 1, so

μ^id = μ° + RT ln P. (20.27)

(Note that as P → 1, μ^id → μ°.)

Let's say our gas is not ideal; then at a given pressure

μ = μ° + RT ln a. (20.28)

For gases the activity is usually called the fugacity and given the symbol f, so a = f for
real gases. Thus
μ = μ° + RT ln f. (20.29)

Let's say that instead of referencing to the ideal gas at P = 1, we want to reference
to the ideal gas at the current pressure P.

This is easily done by using μ° = μ^id − RT ln P in the above equation for μ:

μ = μ^id − RT ln P + RT ln f
  = μ^id + RT ln (f/P).

Example: The barometric equation for an ideal gas.


We have an ideal gas, so

μ^id = μ° + RT ln P, (20.30)

where we will take the reference state to be at sea level, i.e. P° = 1 atm.

So at sea level

μ^id(0) = μ° + RT ln 1 = μ°  (since ln 1 = 0), (20.31)
and at elevation h

μ^id(h) = μ° + RT ln Ph. (20.32)

The gas feels the gravitational force, which gives it a potential energy per mole
of Mgh at height h. We add this energy per mole term to the chemical potential
(which is free energy per mole); thus at equilibrium

μid (0) = μid (h) + Mgh (20.33)

Referencing to the reference state we get

μ° = μ° + RT ln Ph + Mgh (20.34)
RT ln Ph = −Mgh
Ph = e^(−Mgh/RT).

The last line is the barometric equation and it shows that pressure is an exponentially
decreasing function of altitude.
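A quick numerical sketch of the barometric equation (the molar mass of air and the temperature are rough, assumed values; the atmosphere is taken as isothermal):

    import math

    # P_h / P_sea-level = exp(-M g h / (R T)) for an assumed isothermal ideal-gas atmosphere
    R = 8.315         # J / (K mol)
    g = 9.81          # m / s^2
    M = 0.029         # kg / mol, approximate molar mass of air (assumed)
    T = 298.0         # K (assumed)

    for h in (0.0, 1000.0, 5000.0, 10000.0):          # heights in m
        print(h, math.exp(-M * g * h / (R * T)))      # fraction of sea-level pressure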

21. Equilibrium

First let us consider the equilibrium A ⇌ B.

Since A and B are in equilibrium their chemical potentials must be equal

μA = μB (21.1)

Now,
μA = μ°A + RT ln aA (21.2)

and
μB = μ°B + RT ln aB.

So the equilibrium condition becomes

μ°A + RT ln aA = μ°B + RT ln aB (21.3)
−Δμ° = μ°A − μ°B = RT ln aB − RT ln aA
−Δμ° = RT ln (aB/aA).

Since chemical potential is free energy per mole, if we multiply the above by n
moles we have
−ΔG° = nRT ln (aB/aA)
as a consequence of the equilibrium condition.

The quantity aB/aA defines the equilibrium constant, Ka, for this process.

Say the system A → B is not in equilibrium; then we cannot write

μA = μB,

but we can write

μA + Δμ = μB,  where Δμ ≡ μB − μA. (21.4)

Proceeding as above we get

μ°A + RT ln aA + Δμ = μ°B + RT ln aB (21.5)
Δμ = μ°B − μ°A + RT ln aB − RT ln aA
Δμ = Δμ° + RT ln (aB/aA).

Again multiplying by n gives

ΔG = ΔG° + nRT ln (aB/aA).
If ΔG < 0 then the transition A → B proceeds spontaneously as written.

Consider a more complicated equilibrium

aA + bB ⇌ cC + dD. (21.6)

The equilibrium condition is

aμA + bμB = cμC + dμD . (21.7)

In a manner similar to the above,

aμ°A + aRT ln aA + bμ°B + bRT ln aB = cμ°C + cRT ln aC + dμ°D + dRT ln aD. (21.8)

Rearranging gives

aμ°A + bμ°B − cμ°C − dμ°D (≡ −Δrxn G°) = RT ln (aC^c aD^d / (aA^a aB^b)); (21.9)

the equilibrium constant is

Ka = aC^c aD^d / (aA^a aB^b)  =⇒  ΔG° = −RT ln Ka. (21.10)

Note: n is absent in the above since the molar values are implied by the stoi-
chiometry.

21.0.3. Equilibrium constants in terms of KC

Equilibrium constant in terms of a condensed phase concentration:

K′C = [C]^c [D]^d / ([A]^a [B]^b), (21.11)

which is related to Ka by

Ka = K′C (γC^c γD^d / (γA^a γB^b)). (21.12)

If the reactants are solutes, then as the solution is diluted all the activity coefficients
go to unity and K′C → Ka.

21.0.4. The Partition Coefficient

Up to now we have only considered miscible solutions.

We now consider the problem of determining the equilibrium concentrations of a


solute A in both phases of an immiscible mixture.

The equilibrium equation is

Aα ⇌ Aβ. (21.13)

The equilibrium expression for this process is

ΔGα→β = 0 = ΔG°α→β + nRT ln Ka, (21.14)

where ΔG°α→β ≡ G°β − G°α. The equilibrium constant for this process has a special
name; it is called the partition coefficient, P^(β/α) ≡ K^(β/α)_part, for species A in the α–β
mixture.

We can solve for the partition coefficient to yield

P^(β/α) = a^β_A / a^α_A = e^(−ΔG°α→β / nRT). (21.15)

For low concentrations

P^(β/α) ≃ [A]β / [A]α. (21.16)

Knowledge of the partition coefficient is important in the delivery of drugs because,
to enter the body, the drugs must transfer between an aqueous phase and an oil
phase.

For most drugs

0 < P^(o/w)_part < 4. (21.17)

Partition coefficient              Delivery mechanism
low P^(o/w)_part (likes water)     injection
medium P^(o/w)_part                oral
high P^(o/w)_part (likes oil)      skin patch/ointment

Factors other than the partition coefficient influence the drug delivery choice. For
example, can the drug handle the acidic environment of the stomach?

22. Chemical Reactions

Up to now we have only been considering systems in the absence of chemical


reactions. After chemical reactions take place the system is in a final “product”
thermodynamic state that is in general different from the initial “reactant” state.

For any extensive property

• Δrxn(Property) = property of products − property of reactants

• Example

— Reaction: aA+bB= cC+dD


— ΔrxnS = cSm,C + dSm,D − aSm,A − bSm,B

22.1. Heats of Reactions

Exothermic reaction: heat is given off to the surroundings


Endothermic reaction: heat is taken in from the surroundings.

At constant pressure (Pex = P),

q = ΔrxnU − w = ΔrxnU + P ΔrxnV = ΔrxnH. (22.1)

ΔrxnH < 0 for exothermic reactions.

ΔrxnH > 0 for endothermic reactions.

22.1.1. Heats of Formation

Hess's law of heat summation: ΔrxnH is independent of chemical pathway.


Example: C2H2 + H2 = C2H4.
This direct reaction is not easy but it can be done in steps:
C2H2 + (5/2)O2 → 2CO2 + H2O(liq)     ΔrxnH° = −1299.63 kJ
2CO2 + 2H2O(liq) → C2H4 + 3O2        ΔrxnH° = +1410.97 kJ
H2 + (1/2)O2 → H2O(liq)              ΔrxnH° = −285.83 kJ
C2H2 + H2 = C2H4                     ΔrxnH° = −174.49 kJ

The heat of formation ΔfH° is the ΔrxnH at STP in forming a compound from
its constituent elements in their natural states.

O2, H2, C(graphite) are examples of elements in their natural state.

Example: Formation of water

• H2 + (1/2)O2 = H2O, not 2H2 + O2 = 2H2O

• ΔrxnH = Σ_i νi ΔfH(i), where νi is the stoichiometric factor of the ith com-
  ponent.

Example: H2O(liq) → H2O(gas) at SATP

H2 + (1/2)O2 = H2O(gas)    ΔfH° = −241.818 kJ
H2 + (1/2)O2 = H2O(liq)    ΔfH° = −285.830 kJ
H2O(liq) → H2O(gas)        ΔrxnH° = −241.818 − (−285.830) = 44.012 kJ
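The bookkeeping ΔrxnH = Σ_i νi ΔfH(i) is easy to automate. A minimal sketch that reproduces the vaporization example above from the two heats of formation (stoichiometric factors are +1 for products and −1 for reactants):

    # Delta_rxn H from heats of formation for H2O(liq) -> H2O(gas)
    dHf = {"H2O(gas)": -241.818, "H2O(liq)": -285.830}   # kJ/mol, from the table above
    nu  = {"H2O(gas)": +1,       "H2O(liq)": -1}         # stoichiometric factors

    dH_rxn = sum(nu[s] * dHf[s] for s in nu)
    print(dH_rxn)   # 44.012 kJ, as in the table above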

22.1.2. Temperature dependence of the heat of reaction

ΔrxnH(T2) = ΔrxnH(T1) + ∫_{T1}^{T2} ΔrxnCP dT (22.2)

22.2. Reversible reactions

Recall the requirement for a spontaneous change: ΔG < 0 for constant T and P.

ΔrxnG = G(products) − G(reactants) = Σ_i νi μi (22.3)

(remember μi = Gm,i for pure substance i).

As we saw before, μi can be defined in terms of activity:

μi = μ°i + RT ln ai. (22.4)

So,

ΔrxnG = Σ_i νi μ°i + RT Σ_i νi ln ai, (22.5)

where the first sum is ΔrxnG°.

Using the property of logarithms a ln x + b ln y = ln(x^a y^b), the above expression
becomes

ΔrxnG = ΔrxnG° + RT ln Π_i ai^νi (22.6)
ΔrxnG = ΔrxnG° + RT ln Q,

where Q ≡ Π_i ai^νi is the activity quotient.

At equilibrium, ΔrxnG = 0 and Q = Ka (the thermodynamic equilibrium constant).

Ka depends on T but is independent of P.
For the reaction aA + bB = cC + dD,

Ka = aC^c aD^d / (aA^a aB^b). (22.7)

• Note that the activity of any pure solid or liquid is for all practical purposes
equal to 1.

• For ideal gases, ai = Pi/P° = Xi P/P° (P° = 1 bar). This leads to the sometimes
  useful relation

  KP = PC^c PD^d / (PA^a PB^b) = (P° aC)^c (P° aD)^d / ((P° aA)^a (P° aB)^b) = Ka (P°)^(c+d−a−b), (22.8)

  or more generally KP = Ka (P°)^(Σ_i νi).

So at equilibrium, ΔrxnG = ΔrxnG° + RT ln Q becomes

0 = ΔrxnG° + RT ln Ka  ⇒  ΔrxnG° = −RT ln Ka. (22.9)

22.3. Temperature Dependence of Ka

Starting with G = H − TS, or G/T = H/T − S.
From this, (∂(G/T)/∂(1/T))_P = H.
Applying this to
ΔrxnG°/T = ΔrxnH°/T − ΔrxnS (22.10)
gives
(∂(ΔrxnG°/T)/∂(1/T))_P = ΔrxnH°. (22.11)

Using ΔrxnG° = −RT ln Ka, we get

(∂ ln Ka/∂(1/T))_P = d ln Ka/d(1/T) = −ΔrxnH°/R (22.12)

(independent of P), or (using d/d(1/T) = (dT/d(1/T)) d/dT = −T² d/dT)

d ln Ka/dT = ΔrxnH°/(RT²). (22.13)

Integration gives

ln Ka(T2) = ln Ka(T1) + (1/R) ∫_{T1}^{T2} (ΔrxnH°m/T²) dT. (22.14)

For a reasonably small range T2 − T1 this is well approximated by

ln Ka(T2) = ln Ka(T1) − (ΔrxnH°m/R)(1/T2 − 1/T1). (22.15)

22.4. Extent of Reaction

There are other equilibrium "constants" that are used in the literature.

• From Pj = Xj P:  KX = KP P^(−Δνg)

• From nj = Pj V/(RT) (ideal gas approximation):  Kn = KP (RT/V)^(−Δνg)

• From the concentration Cj = nj/V = Pj/(RT):  KC = KP (RT)^(−Δνg)

Equilibrium "constants"

"constant"   expression                                                relation to Ka                           situation used
Ka           activity(products)/activity(reactants)                    –                                        when an exact answer is needed
KP           partial pressure(products)/partial pressure(reactants)    Ka/[Kγ (P°)^(−Δνg)]                      gas reactions
KX           mole fraction(products)/mole fraction(reactants)          [Ka/(Kγ (P°)^(−Δνg))] P^(−Δνg)           when the equilibrium P is known
Kn           moles(products)/moles(reactants)                          [Ka/(Kγ (P°)^(−Δνg))] (RT/V)^(−Δνg)      when V is known and constant
KC           concentration(products)/concentration(reactants)          [Ka/(Kγ (P°)^(−Δνg))] (RT)^(−Δνg)        when concentrations are known

23. Ionics

Many chemical processes involve electrolytes and or acids and bases.

To understand these processes we must know something about how ions behave
in solution.

23.1. Ionic Activities

Consider a salt in solution

Mv+ Xv− → v+ M z+ (aq) + v− X z− (aq), (23.1)

where v+ (v− ) is the number of cations (anions) and z+ (z− ) is the charge on the
cation (anion).
The chemical potential for the salt may be written in terms of the chemical po-
tential for each of the ions:

μsalt = v+ μ+ + v− μ− (23.2)

To determine the activity we start with

ln aj = (μj − μ°j)/RT,  j = + or −, (23.3)

and

ln asalt = (μsalt − μ°salt)/RT. (23.4)

Substituting the expression for μsalt into this gives

ln asalt = (v+ μ+ + v− μ− − v+ μ°+ − v− μ°−)/RT (23.5)
         = v+ (μ+ − μ°+)/RT + v− (μ− − μ°−)/RT
         = v+ ln a+ + v− ln a−.

So,
ln asalt = v+ ln a+ + v− ln a− (23.6)

or, alternatively,
asalt = a+^(v+) a−^(v−). (23.7)

It is the case that 1 mole of salt behaves like v = v+ + v− moles of nonelectrolytes
in terms of the colligative properties. This suggests that the interesting quantity
is μsalt/v:

μsalt/v = μ°salt/v + RT ln asalt^(1/v). (23.8)

We see that

asalt^(1/v) = (a+^(v+) a−^(v−))^(1/v) ≡ a±. (23.9)

The quantity a± is the mean ionic activity.

23.1.1. Ionic activity coefficients

The activity coefficients for ionic solutions can also be defined via

a+ = γ+ m+,  a− = γ− m−, (23.10)

where m+ = v+ m and m− = v− m.

The mean ionic activity coefficient is

γ± = (γ+^(v+) γ−^(v−))^(1/v). (23.11)

The quantities a+ , a− , γ + and γ − cannot be measured individually.

One can use the colligative properties to measure the ionic activity coefficients.

It is convenient to redefine the osmotic coefficient as

φ = −(1000 g/kg)/(v m M1) ln a1, (23.12)

where the subscript 1 refers to the solvent.

Similarly the freezing point depression is redefined as

θ = v φ Kf m. (23.13)

So, vφ corresponds to the empirical factor i discussed earlier.

Recall how γ was calculated from the Gibbs–Duhem equation:

ln γ± = −j − ∫_0^m (j/m′) dm′, (23.14)

where j = 1 − φ.

23.2. Theory of Electrolytic Solutions

Ionic strength is defined as

I = (1/2) Σ_i zi² mi, (23.15)

where zi is the charge of ion i and mi its concentration (molality).

Results from Debye–Hückel theory: a point charge in a continuum.

The Debye–Hückel equation:

ln γ± = −α |z+ z−| √I / (1 + B a0 √I), (23.16)

where

α = [e³/(εkT)^(3/2)] (2πρ•L/1000)^(1/2), (23.17)

B = [8πLe²ρ•/(1000εkT)]^(1/2), (23.18)

a0 is the radius of closest approach, e is the charge on the electron, ρ• is the
density of the pure solvent, ε is the dielectric constant of the pure solvent and L
is Avogadro's number.

Notice that the parameters α and B depend only on the solvent.

One important approximation to this equation is to neglect the B term to get the
Debye–Hückel limiting law (DHLL):

ln γ± = −α |z+ z−| √I. (23.19)

This gives the dependence of ln γ± for dilute solutions (m → 0). It is seen that
the DHLL correctly predicts the √m dependence of ln γ±, which is observed ex-
perimentally (recall I = (1/2) Σ_i zi² mi).
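A minimal numerical sketch of the DHLL (the value of α for water at 25 °C is an approximate number and should be checked against a table; the 0.001 molal 1:1 electrolyte is an arbitrary example):

    import math

    # Ionic strength and the Debye-Hueckel limiting law for a 1:1 electrolyte
    alpha = 1.17                      # (kg/mol)^(1/2), approximate value for water at 25 C
    m = 0.001                         # mol/kg
    ions = [(+1, m), (-1, m)]         # (charge z_i, molality m_i)

    I = 0.5 * sum(z**2 * mi for z, mi in ions)          # ionic strength
    ln_gamma = -alpha * abs((+1) * (-1)) * math.sqrt(I) # DHLL
    print(I, math.exp(ln_gamma))                        # gamma_+- a little below 1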

A useful empirical approximation is to set B a0 = 1 and to add an empirical
correction term to get:

ln γ± = −α |z+ z−| √I / (1 + √I) + 2βm (v+² + v−²)/(v+ + v−). (23.20)

This equation works well to ionic strengths of about I = 0.1.

23.3. Ion Mobility

Current, I, is given by the rate of change (in time) of charge, Q:

I = dQ/dt. (23.21)

(Electrical) work, w, is required to move a charge through a potential (or voltage),
ε:
w = −εQ. (23.22)

Power is given by the product of the voltage and the current:

p = −εI. (23.23)

Resistance is given by the ratio of the voltage to the current:

R = ε/I.

Conductance is the inverse of the resistance (R⁻¹).

Some relevant constants

• charge of an electron e = 1.602177 × 10−19 C.

• Faraday’s constant F = Le = 96485 C/mol (Avogadro’s number of electrons)

23.3.1. Ion mobility

The total current passing through an ionic solution is determined by the sum of
the current carried by the cations and by the anions

I = I+ + I− (23.24)

Now

Ii = dQi/dt = |zi| e dNi/dt, (23.25)

where i = +, −.

For a uniform ion velocity (vi) the number of ions arriving at the electrode during
any given time interval Δt is

ΔNi = (Ni/V) A vi Δt  =⇒  dNi/dt = (Ni/V) A vi, (23.26)

so
Ii = |zi| e (Ni/V) A vi. (23.27)
Recall Coulomb's law,

Fi = zi e E (in vacuum), (23.28)

where E is the electric field (the gradient of the potential ε).
Also recall Newton's law,

Fi = m ai = m dvi/dt = zi e E. (23.29)

The moving ions experience a viscous drag f that is proportional to their velocities.
So the total force on the ions is a sum of the Coulomb force and the viscous drag:

Fi = zi e E − f vi (in solution). (23.30)

The ions quickly reach terminal velocity, i.e., the viscous drag equals the Coulomb
force. Hence Fi = 0 and

zi e E = f vi  =⇒  vi = zi e E / f. (23.31)
The drag f has three basic origins.

1. Stokes' law type force

• “spherical” ion moving through a continuous medium


• this contribution is independent of the other ions

2. Electrophoretic effect.

• oppositely charged ions “pull” at each other

3. Relaxation effects

• the solvation shell must re-adjust as the ion moves: a "dressed" ion.

A more fundamental quantity than the ion velocity is the ion mobility, ui, which is the
ion's velocity per unit field,

ui = vi/E. (23.32)

For the case of parallel plate capacitors E = ε/l, where l is the separation of the
plates. So,

ui = vi l/ε. (23.33)

Here the current carried by ion i is

Ii = |zi| e (Ni/V) A ui ε/l. (23.34)
Suppose a salt has a degree of dissociation α (α = 1 for strong electrolytes) to
produce ν + cations and ν − anions, then each mole of salt gives: N+ = αν + Ln
and N− = αν − Ln.

The current then becomes

Ii = |zi| e (α νi L n/V) A ui ε/l = α νi n |zi| ui A F ε/(V l), (23.35)

where F = Le has been used.
It is of interest to determine the ratio of the current carried by the cation versus
the anion:

I+/I− = (α ν+ n |z+| u+ A F ε/(V l)) / (α ν− n |z−| u− A F ε/(V l)) = (ν+ |z+| u+)/(ν− |z−| u−) = u+/u−, (23.36)

since ν+|z+| = ν−|z−| (electroneutrality). Thus the ratio of the currents is determined
by simply the ratio of the mobilities.

24. Thermodynamics of Solvation

An extremely important application of thermodynamics is to that of ion solvation.

Solvation describes how a solute dissolves in a solvent.

We will focus on ions in solution.

As a basic treatment of solvation we shall consider the solvent as a non-structural


continuum and the ion as a charged particle.

Of course this is an approximation and numerous statistical mechanical models


for solvents which incorporate a more realistic structure can be used, but we will
stick with this simple thermodynamic model.

The way to investigate the ion—solvent interaction upon solvation from a thermo-
dynamics point of view is to consider the change in the properties of the ion in a
vacuum versus the ion in solution.

Primarily we will determine ΔGv→s ≡ Gion in solv. − Gion in vac.

Since Gibbs free energy corresponds to non-P V work, 4Gv→s can be determined
by calculating the reversible work done in transferring an ion into the bulk of the
solvent.

24.1. The Born Model

The Born model is a simple solvation model in which the ions are taken to be
charged spheres and the solvent is taken to be a continuum with dielectric constant
εs.

ΔGv→s for the Born model is obtained by considering the following contributions
to the work of ion transfer from the vacuum state to the solvated state (see figure).

• Begin with the state in which the charged sphere (the ion) is in a vacuum.

• Determine the work, wdis , done in discharging the sphere.

• Assume the uncharged sphere can pass from the (neutral) vacuum to the
neutral solvent without doing any work, wtr = 0. (This is an approximation).

• Determine the work, wch , done in charging the sphere which is now in the
solvent.

So,
ΔGv→s = wdis + wtr + wch = wdis + wch (24.1)

Work done in discharging the sphere:


The act of discharging a sphere involves bringing out to infinity from the surface
infinitesimal amounts of charge.

The work done in discharging is somewhat complicated since, as one removes the
charge, the work required to remove more charge changes according to the amount
of charge currently on the sphere.

This is expressed mathematically as

wdis = ∫_{ze}^{0} ∫_{ri}^{∞} σ/(4πε0 r²) dr dσ (24.2)
     = ∫_{ze}^{0} σ/(4πε0 ri) dσ
     = −(ze)²/(8πε0 ri),

where z is the oxidation state of the ion, e is the charge of the electron, ri is the
radius of the sphere (ion) and ε0 is the permittivity of free space.

Work done in charging the sphere:

The only difference in charging the sphere is that the sign of the work will be differ-
ent and that, since we are charging in a solvent, we must multiply the permittivity
of free space by the dielectric constant of the solvent.

So,
wch = +(ze)²/(8πε0 εs ri). (24.3)

24.1.1. Free Energy of Solvation for the Born Model

Combining the above two expressions for the work gives

ΔGv→s = −(ze)²/(8πε0 ri) + (ze)²/(8πε0 εs ri) (24.4)
       = (ze)²/(8πε0 ri) (1/εs − 1).

The above expression is ΔGv→s per ion. For n moles of ions (nL = N)

ΔGv→s = N(ze)²/(8πε0 ri) (1/εs − 1). (24.5)

The dielectric constant of any solvent is always greater than unity, so 1/εs − 1 is
always negative; hence ΔGv→s < 0. Thus ions always exist more stably in solution
than in a vacuum.
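An order-of-magnitude sketch of eq. (24.5) (the ionic radius and charge are arbitrary assumptions; the dielectric constant of water is an approximate value):

    import math

    # Born free energy of solvation for one mole of a monovalent ion in water
    e = 1.602177e-19      # C, charge of the electron
    eps0 = 8.854e-12      # C^2 / (J m), permittivity of free space
    L = 6.022e23          # 1/mol, Avogadro's number
    z = 1                 # oxidation state (assumed)
    r_i = 2.0e-10         # m, radius of the ion (assumed)
    eps_s = 78.5          # dielectric constant of water (approximate)

    N = 1.0 * L           # one mole of ions
    dG = N * (z*e)**2 / (8 * math.pi * eps0 * r_i) * (1.0/eps_s - 1.0)
    print(dG / 1000.0)    # roughly -3e2 kJ; negative, as argued above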

24.1.2. Ion Transfer Between Phases

We can quickly generalize the Born model to describe ion transfer between phases
in a solution of two immiscible phases

Consider an immiscible solution of two phases α and β having dielectric constants


εα and εβ .

Since Gibbs free energy is a state function we can write the change in free energy
for transfer of an ion from the β phase to the α phase as

ΔGβ→α = ΔGβ→v + ΔGv→α,  where ΔGβ→v = −ΔGv→β, (24.6)
       = −N(ze)²/(8πε0 ri) (1/εβ − 1) + N(ze)²/(8πε0 ri) (1/εα − 1)
       = N(ze)²/(8πε0 ri) (1/εα − 1/εβ).

The Partition Coefficient

We can now write the partition coefficient for the Born model as

Pi^(α/β) = e^(−ΔG°β→α / nRT) = e^(−[L(ze)²/(8π ri ε0 RT)] (1/εα − 1/εβ)). (24.7)

24.1.3. Enthalpy and Entropy of Solvation

We may employ the standard thermodynamic relations which we have derived


earlier to obtain the entropy and enthalpy for the Born model.

From

(∂G/∂T)_P = −S  ⇒  (∂ΔGv→s/∂T)_P = −ΔSv→s, (24.8)

we find the entropy to be

ΔSv→s = −∂/∂T [N(ze)²/(8πε0 ri) (1/εs − 1)]. (24.9)

The only variable in the above equation that has a temperature dependence is the
dielectric constant of the solvent, so

ΔSv→s = −N(ze)²/(8πε0 ri) ∂/∂T (1/εs) = N(ze)²/(8πε0 ri εs²) ∂εs/∂T. (24.10)

Enthalpy is obtained via the relation

ΔHv→s = ΔGv→s + T ΔSv→s (24.11)
       = N(ze)²/(8πε0 ri) (1/εs − 1) + N(ze)² T/(8πε0 ri εs²) ∂εs/∂T
       = N(ze)²/(8πε0 ri) [1/εs + (T/εs²) ∂εs/∂T − 1].

24.2. Corrections to the Born Model

The Born model is very valuable because of its simplicity–qualitative statements


about solvation and ion transfer between phases can be made.

Unfortunately however, the Born model does not make quantitatively correct pre-
dictions in many cases.

We simply list here several phenomena that more sophisticated theories of solva-
tion must consider

1. The solvophobic effect: a cavity must form in the solvent to accommodate
the ion.

2. Changes in solvent structure: the local environment of the ion has a different
arrangement of solvent molecules than that of the bulk solvent, so the initial
structure of the solvent must breakdown and the new structure must form.

3. Specific interactions: any interaction energy specific to the particular ion-


solvent pair: Hydrogen bonding being the prime example.

4. Annihilation of defects: A small ion may be captured in a micro-cavity


within the solvent releasing the energy of the micro-cavity defect.

25. Key Equations for Exam 4

Listed here are some of the key equations for Exam 4. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• Some thermodynamic relations

H = U + PV dH = T dS + V dP
A = U − TS dA = −SdT − P dV
G = H − TS dG = −SdT + V dP

• The chemical potential equation

μi = μ°i + RT ln ai (25.1)

• The ΔG equation (this should be posted on your refrigerator)

ΔG = ΔG° + RT ln Q. (25.2)

At equilibrium ΔG = 0 and

ΔG° = −RT ln Ka (25.3)

• For an ideal gas

CPm = CVm + R (25.4)

• The Debye–Hückel limiting law (DHLL):

ln γ± = −α |z+ z−| √I. (25.5)

• The ratio of the current carried by the cation versus the anion in terms of
ion mobility is

I+/I− = u+/u− (25.6)

• ΔG for the Born model:

ΔGv→s = N(ze)²/(8πε0 ri) (1/εs − 1) (25.10)

• ΔG for transfer of an ion from the β phase to the α phase:

ΔGβ→α = N(ze)²/(8πε0 ri) (1/εα − 1/εβ) (25.11)

Chemistry 352: Physical
Chemistry II

Part V

Quantum Mechanics and


Dynamics

26. Particle in a 3D Box

We now return to quantum mechanics and investigate some of the important


models that we omitted from the first semester.

In particular we will look at the particle in a box in more than one dimension.

We will also solve models which deal with rotations.

26.1. Particle in a Box

Recall the important ideas from the 1D particle in a box problem.

The potential, V(x), is given by

V(x) = ∞ for x ≤ 0;  0 for 0 < x < a;  ∞ for x ≥ a. (26.1)

Because of the infinities at x = 0 and x = a, we need to partition the x-axis into
the three regions shown in the figure.

Now, in region I and III, where the potential is infinite, the particle can never
exist so, ψ must equal zero in these regions.

The particle must be found only in region II.

The Schrödinger equation in region II is (V(x) = 0)

Ĥψ = Eψ  =⇒  (−ħ²/2m) d²ψ(x)/dx² = Eψ. (26.2)

The general solution of this differential equation is

ψ(x) = A sin kx + B cos kx, (26.3)

where k = √(2mE/ħ²).

Now ψ must be continuous for all x. Therefore it must satisfy the boundary
conditions (b.c.): ψ(0) = 0 and ψ(a) = 0.

From the ψ(0) = 0 b.c. we see that the constant B must be zero because
cos kx|x=0 = 1.
So we are left with ψ(x) = A sin kx for our wavefunction.

The second b.c., ψ(a) = 0, places certain restrictions on k.

In particular,

kn = nπ/a,  n = 1, 2, 3, · · · . (26.4)

The values of k are quantized. So, now we have

ψn(x) = A sin(nπx/a). (26.5)

The constant A is the normalization constant.

Solving for A gives

A = √(2/a). (26.6)

Thus our normalized wavefunctions for a particle in a box are (in region II)

ψn(x) = √(2/a) sin(nπx/a). (26.7)

We found the energy levels to be

En = n²π²ħ²/(2ma²) = n²h²/(8ma²)    (using ħ = h/2π). (26.8)

26.2. The 3D Particle in a Box Problem

We now consider the three dimensional version of the problem.

The potential is now

V(x, y, z) = 0 for 0 < x < a, 0 < y < b, 0 < z < c, and ∞ elsewhere. (26.9)

Now the Schrödinger equation is

Ĥψ = Eψ  ⇒  (−ħ²/2m) ∇²ψ = Eψ
          ⇒  (−ħ²/2m) (∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z²) = Eψ. (26.10)
It is generally true that when the Hamiltonian is a sum of independent terms, we
can write the wavefunction as a product of wavefunctions

ψ(x, y, z) = ψx (x)ψ y (y)ψ z (z). (26.11)

This lets us perform a mathematical trick which is sometimes useful in solving


partial differential equations.

Subbing the product wavefunction into the Schrödinger equation we get

(−ħ²/2m) (∂²(ψx ψy ψz)/∂x² + ∂²(ψx ψy ψz)/∂y² + ∂²(ψx ψy ψz)/∂z²) = E ψx ψy ψz (26.12)
(−ħ²/2m) (ψy ψz ∂²ψx/∂x² + ψx ψz ∂²ψy/∂y² + ψx ψy ∂²ψz/∂z²) = E ψx ψy ψz.

We now divide both sides by ψx ψy ψz to get

(−ħ²/2m) ((1/ψx) ∂²ψx/∂x² + (1/ψy) ∂²ψy/∂y² + (1/ψz) ∂²ψz/∂z²) = E. (26.13)
This equation is now of the form

f (x) + g(y) + h(z) = C, (26.14)

where C is a constant.

If we take the derivative with respect to x we get

d/dx [f(x) + g(y) + h(z)] = dC/dx,
df(x)/dx + dg(y)/dx + dh(z)/dx = dC/dx,
df(x)/dx = 0. (26.15)

184
So, f (x) is a constant. Similarly for g(y) and h(z)

Applying this to our Schrödinger equation means that we have converted our
partial differential equation into three independent ordinary differential equations,

(−ħ²/2m) (1/ψx) d²ψx/dx² = Ex  =⇒  (−ħ²/2m) d²ψx/dx² = Ex ψx, (26.16)
(−ħ²/2m) (1/ψy) d²ψy/dy² = Ey  =⇒  (−ħ²/2m) d²ψy/dy² = Ey ψy,
(−ħ²/2m) (1/ψz) d²ψz/dz² = Ez  =⇒  (−ħ²/2m) d²ψz/dz² = Ez ψz,

which we recognize as the 1D particle in a box equations.

Hence we immediately have

ψx = √(2/a) sin(nxπx/a), (26.17)
ψy = √(2/b) sin(nyπy/b),
ψz = √(2/c) sin(nzπz/c),

and

Ex,nx = nx²h²/(8ma²), (26.18)
Ey,ny = ny²h²/(8mb²),
Ez,nz = nz²h²/(8mc²).

The total wavefunction is

ψ = √(8/(abc)) sin(nxπx/a) sin(nyπy/b) sin(nzπz/c) (26.19)

and the total energy is

E = Ex,nx + Ey,ny + Ez,nz. (26.20)

Degeneracy
The 3D particle in a box model brings up the concept of degeneracy.

When n(> 1) states have the same total energy they are said to be n-fold degen-
erate.

Let the 3D box be a cube (a = b = c) then the states

(nx = 2, ny = 1, nz = 1), (26.21)


(nx = 1, ny = 2, nz = 1),
(nx = 1, ny = 1, nz = 2)

have the same total energy and thus are degenerate.
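Counting degeneracies by hand quickly becomes tedious; here is a minimal sketch for the cubic box (energies in units of h²/8ma²; the quantum numbers are truncated at 4 purely for illustration):

    from collections import defaultdict
    from itertools import product

    # Degeneracies of the cubic-box levels: E ~ nx^2 + ny^2 + nz^2
    levels = defaultdict(list)
    for nx, ny, nz in product(range(1, 5), repeat=3):
        levels[nx**2 + ny**2 + nz**2].append((nx, ny, nz))

    for E in sorted(levels)[:6]:
        print(E, len(levels[E]), levels[E])
    # E = 6 is 3-fold degenerate: (2,1,1), (1,2,1), (1,1,2), as stated above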

27. Operators

27.1. Operator Algebra

We now take a mathematical excursion and discuss the algebra of operators.

Definitions

• Function: A function, say f, describes how a dependent variable, say y, is


related to an independent variable, say x: y = f(x)

— e.g., y = x2 , y = sin x, etc.

• Operator: An operator, say Ô, transforms a function, say f , into another


function, say g: Ôf(x) = g(x).

• Algebra: An algebra is a specific collection of rules applied to a set of objects


and a particular operation

— Rules
∗ Transitivity
∗ Associativity
∗ Existence of an identity
∗ Existence of an inverse
— e.g., Addition on the set of real numbers, Multiplication on the set of
real numbers

— Note: Commutativity is not a requirement of an algebra
  ∗ example 1: multiplication on the set of real numbers is commutative:
    ab = ba
  ∗ example 2: multiplication on the set of n × n matrices is not com-
    mutative: ab ≠ ba in general. e.g. (matrix rows separated by semicolons),

    [1 0; 2 1][3 1; 1 1] = [3 1; 7 3] (27.1)

    but

    [3 1; 1 1][1 0; 2 1] = [5 1; 3 1] ≠ [3 1; 7 3] (27.2)

Algebraic rules for operators

1. Equality:
if α̂ = β̂, then α̂f (x) = g(x) = β̂f (x) (27.3)

2. Addition:

if α̂f (x) = g(x) and β̂f (x) = h(x), (27.4)


then (α̂ + β̂)f (x) = α̂f (x) + β̂f (x) = g(x) + h(x)

3. Multiplication:

   α̂β̂f(x) = α̂(β̂f(x)) (27.5)
   β̂α̂f(x) = β̂(α̂f(x)),

   but in general α̂β̂f(x) ≠ β̂α̂f(x).

4. Inverse:

   if α̂f(x) = g(x) and β̂g(x) = f(x), (27.6)

   then β̂ = α̂⁻¹ and β̂ is said to be the inverse of α̂.

Linear operators:

• A special and important class of operators

• They obey all of the above properties in addition to

— α̂ (f (x) + g(x)) = α̂f (x) + α̂g(x), and


— α̂(λf (x)) = λα̂f (x), where λ is a complex number.

Hermitian operators:

• A special class of linear operators

• All observables in quantum mechanics are associated with Hermitian oper-


ators

• The eigenvalues of Hermitian operators are real

Some important operators

• x̂: x̂f(x) = x f(x)

• d̂: d̂f(x) = (d/dx) f(x)

• d̂²: d̂²f(x) = d̂(d̂f(x)) = (d/dx)(d/dx) f(x) = (d²/dx²) f(x)

• ı̂: ı̂f(x, y, z) = f(−x, −y, −z)

• ∇̂: ∇̂f(x, y, z) = (∂/∂x ex + ∂/∂y ey + ∂/∂z ez) f(x, y, z)

• ∇̂²: ∇̂²f(x, y, z) = (∂²/∂x² + ∂²/∂y² + ∂²/∂z²) f(x, y, z)

Commutators:
We have seen that in general α̂β̂ ≠ β̂α̂. This leads to the construction of the
commutator, [◦, ◦]:

[α̂, β̂] ≡ α̂β̂ − β̂α̂. (27.7)

If α̂β̂ = β̂α̂, then [α̂, β̂] = 0 and α̂ and β̂ are said to commute with one another.

The eigenvalue equation:


If α̂f (x) = g(x) and g(x) = af(x), then the operator equation, α̂f (x) = g(x)
becomes the eigenvalue equation

α̂f (x) = af (x). (27.8)

The eigenvalue equation is of fundamental importance in quantum theory. We


shall see that eigenvalues of certain operator can be identified as experimental
observables.

Commuting operators and simultaneous sets of eigenfunctions.

If α̂f (x) = af (x) and β̂ and α̂ commute, then β̂f (x) = bf (x).

The proof goes as follows: On the one hand,

β̂ (α̂f ) = β̂ (af) = aβ̂f (27.9)

because f is an eigenfunction of α̂.

On the other hand,

β̂(α̂f) = α̂(β̂f)      (27.10)

because β̂ and α̂ commute.


Thus
α̂(β̂f) = a(β̂f),      (27.11)

which states that β̂f is an eigenfunction of α̂ with eigenvalue a. The only way for
this to be true is if β̂f = bf.

190
27.2. Orthogonality, Completeness, and the Superposition
Principle

Theorem 1: The eigenfunctions of a Hermitian operator corresponding to differ-


ent eigenvalues are orthogonal:
∫_space ψ*j ψk = 0,   j ≠ k.      (27.12)

Theorem 2: The eigenfunctions of a Hermitian operator form a complete set

Corollary (the superposition principle): Any arbitrary function ψ in the


space of eigenfunctions {ϕi } can be written as a superposition of these eigenfunc-
tions:
ψ = Σi aiϕi      (27.13)

191
28. Angular Momentum

We will encounter several different types of angular momenta, but fortunately


they are all described by a single theory

Before starting with the quantum mechanical treatment of angular momentum,


we first review the classical treatment.

28.1. Classical Theory of Angular Momentum

The classical angular momentum, L, is given by

L=x×p (28.1)

The vector cross-product can be computed by finding the following determinant:


    | ex  ey  ez |
L = | x   y   z  | = (ypz − zpy)ex + (zpx − xpz)ey + (xpy − ypx)ez      (28.2)
    | px  py  pz |

Hence,

Lx = (ypz − zpy ) , (28.3)


Ly = (zpx − xpz ) , (28.4)
Lz = (xpy − ypx ) . (28.5)

Another quantity that we will find useful is

L2 = L · L = L2x + L2y + L2z (28.6)

192
28.2. Quantum theory of Angular Momentum

So, in accordance with postulate II, we replace the classical variables with their
operators. That is,
L̂x = (ŷp̂z − ẑp̂y) = (ℏ/i)(y ∂/∂z − z ∂/∂y),      (28.7)
L̂y = (ẑp̂x − x̂p̂z) = (ℏ/i)(z ∂/∂x − x ∂/∂z),      (28.8)
L̂z = (x̂p̂y − ŷp̂x) = (ℏ/i)(x ∂/∂y − y ∂/∂x).      (28.9)
Recall the basic commutators,
[∂/∂u, u] = 1,      (28.10)
[∂/∂u, v] = 0,
where u, v = x, y, or z and u ≠ v.

From these basic commutators one can derive


[L̂x, L̂y] = iℏL̂z,   [L̂y, L̂z] = iℏL̂x,   [L̂z, L̂x] = iℏL̂y      (28.11)
and
[L̂², L̂x] = [L̂², L̂y] = [L̂², L̂z] = 0      (28.12)

It is often convenient to express the angular momentum operators in spherical


polar coordinates as follows.
L̂x = iℏ(sin φ ∂/∂θ + cot θ cos φ ∂/∂φ),      (28.13)
L̂y = −iℏ(cos φ ∂/∂θ − cot θ sin φ ∂/∂φ),      (28.14)

193

L̂z = −iℏ ∂/∂φ,      (28.15)
L̂² = −ℏ²(∂²/∂θ² + cot θ ∂/∂θ + (1/sin²θ) ∂²/∂φ²)      (28.16)

28.3. Particle on a Ring

Consider a particle of mass μ confined to move on a ring of radius R.

The moment of inertia is I = μR2

The Hamiltonian is given by


Ĥ = L̂z²/(2I) = −(ℏ²/2I) d²/dφ²      (28.17)
(note that we use d rather than ∂ since the problem is one-dimensional).

The Schrödinger equation becomes


−(ℏ²/2I) d²ψ/dφ² = Eψ      (28.18)
Notice that this Schrödinger equation is exactly the same form as the particle in
a box. The only difference is the boundary conditions.

The boundary condition for the particle in a box was that ψ be zero outside the box.

Now the boundary condition is that ψ(φ) = ψ(φ + 2π). The wavefunction must
be 2π periodic.

The allowable wavefunctions are




          ⎧ A cos mφ
ψm(φ) =   ⎨ A sin mφ ,      (28.19)
          ⎩ Ae^(imφ)

194
m = 0, ±1, ±2, ±3, . . .
These wavefunctions are really the “same.” It will be most convenient to use
ψm (φ) = Aeimφ as our wave functions.

Plugging ψm (φ) = Aeimφ into the Schrödinger equation gives


−(ℏ²/2I) d²(Ae^(imφ))/dφ² = Em Ae^(imφ)      (28.20)
(ℏ²m²/2I) Ae^(imφ) = Em Ae^(imφ)
Therefore the energy levels (the eigenvalues) for a particle on a ring are
Em = ℏ²m²/(2I) = m²h²/(8π²I).      (28.21)
Next we need to find the normalization constant, A.
1 = ∫₀^2π ψ*ψ dφ      (28.22)
1 = A² ∫₀^2π e^(−imφ) e^(imφ) dφ
1 = A² ∫₀^2π dφ = 2πA²,
thus
A = √(1/(2π)).      (28.23)

Hence the normalized wavefunctions for a particle on a ring are
ψm = (1/√(2π)) e^(imφ).      (28.24)
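As a numerical illustration of Eq. (28.21), the sketch below evaluates the first few ring energies; the particle and ring size (an electron on a 1 Å ring) are arbitrary assumptions chosen only for illustration.

import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
R = 1.0e-10              # m, assumed ring radius
I = m_e * R**2           # moment of inertia

for m in range(0, 4):
    E = hbar**2 * m**2 / (2 * I)
    print(f"m = ±{m}: E = {E:.3e} J")
# every level with m != 0 is doubly degenerate (+m and -m)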

28.4. General Theory of Angular Momentum

To discuss angular momentum in a more general way it is convenient to define


two so-called ‘ladder’ operators

L̂+ ≡ L̂x + iL̂y (28.25)

195
and
L̂− ≡ L̂x − iL̂y (28.26)
We collect here the commutators of L̂+ and L̂− :
[L̂z, L̂+] = L̂+  ⟹  L̂+L̂z = L̂zL̂+ − L̂+      (28.27)
[L̂z, L̂−] = −L̂−  ⟹  L̂−L̂z = L̂zL̂− + L̂−      (28.28)

Now, since L̂z and L̂2 commute there must exist a set of simultaneous eigenfunc-
tions {ψi }
L̂z ψi = mψ i (28.29)
and
L̂2 ψi = k2 ψi (28.30)
Physically, k~ represents the length of the angular momentum vector and m~
represents the projection onto the z-axis. (Note: for simplicity in writing we are
‘hiding’ the ~ in the wavefunctions.)

On these physical grounds we conclude |m| ≤ k, i.e., k sets an upper and lower
limit on m.

Let’s define the maximum value of m to be a new quantum number l ≡ mmax .


(Thus l ≤ k).
And let’s define the minium value of m to be a new quantum number l0 ≡ mmin .
(Thus −l0 ≤ k)

Now, at least one of the eigenfunctions in the set {ψ i } yields the eigenvalue mmax
(or l) when operated on by L̂z . Let’s call that eigenfunction ψl ;

L̂z ψl = lψ l . (28.31)

Now we can operate on both sides of this equation with L̂− :

L̂− L̂z ψl = L̂− lψ l (28.32)

196
Using the commutator relation L̂− L̂z = L̂z L̂− + L̂− we get
(L̂zL̂− + L̂−)ψl = lL̂−ψl      (28.33)
L̂zL̂−ψl + L̂−ψl = lL̂−ψl
Bringing the second term on the left hand side over to the right hand side gives
L̂zL̂−ψl = lL̂−ψl − L̂−ψl      (28.34)
L̂z(L̂−ψl) = (l − 1)(L̂−ψl).

We see that L̂− ψl ≡ ψl−1 is in fact an eigenfunction of L̂z (with associated eigen-
value (l − 1)) and is thus a member of {ψi } .

The eigenfunction ψl−1 has an associated eigenvalue that is one unit less than the
maximum value.

The above procedure can be repeated n times so that L̂n− ψl = ψl−n provided n
does not exceed l − l′.

The eigenfunction ψl−n has an associated eigenvalue that is n units less than the
maximum value, i.e.,
L̂z ψl−n = (l − n)ψ l−n . (28.35)
The largest value of n is l − l′. For that case,
L̂zψl′ = (l − l + l′)ψl′ = l′ψl′.      (28.36)

Similar behavior is seen for the operator L̂+ , except in the opposite direction–the
eigenvalue is increased by one unit for each action of L̂+ . For example
L̂+L̂zψl′ = L̂+ l′ψl′      (28.37)
(L̂zL̂+ − L̂+)ψl′ = l′L̂+ψl′
L̂zL̂+ψl′ = (l′ + 1)L̂+ψl′.

197
The raising and lowering nature of L̂+ and L̂− is why they are called ladder
operators.

We can not act with L̂+ and L̂− indefinitely since we are limited by l–we reach
the ends of the ladder. This requires that

L̂−ψl′ = 0      (28.38)

(we can’t go lower than the lowest step) and

L̂+ ψl = 0 (28.39)

(we can’t go higher than the highest step).

Oftentimes the ladder operators appear in tandem, either as L̂−L̂+ or L̂+L̂−, so it
is useful to list some identities for these products

L̂− L̂+ = L̂2 − L̂2z − L̂z (28.40)

and
L̂+ L̂− = L̂2 − L̂2z + L̂z (28.41)

We can use these identities to derive a relation between the quantum numbers k
and l.

We begin with
L̂−L̂+ψl = L̂−(L̂+ψl) = 0,      (28.42)

but from the first of the above identities

L̂−L̂+ψl = (L̂² − L̂z² − L̂z)ψl = (k² − l² − l)ψl.      (28.43)

Therefore
k² − l² − l = 0  ⟹  k = √(l(l+1)).      (28.44)

198
We can also consider

L̂+L̂−ψl′ = L̂+(L̂−ψl′) = 0      (28.45)

and

L̂+L̂−ψl′ = (L̂² − L̂z² + L̂z)ψl′ = (k² − l′² + l′)ψl′.      (28.46)
Substituting in the relation we just found for k gives

l(l + 1) − l′² + l′ = 0;      (28.47)

simplifying gives
l = −l′.      (28.48)
Thus mmax = l, mmin = −l and so m = l, l − 1, l − 2, . . . , −l + 1, −l.

This also implies that the number of ‘rungs’ is 2l + 1 and that l must be either an
integer or a half-integer.
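The ladder structure can be made concrete by building matrix representations of L̂z and L̂± in the |l, m⟩ basis (with ℏ hidden, as above). This is a sketch only, assuming the standard matrix elements ⟨l, m±1|L̂±|l, m⟩ = √(l(l+1) − m(m±1)); the helper name is arbitrary. It verifies [L̂z, L̂+] = L̂+ and that L̂² has the constant eigenvalue l(l + 1).

import numpy as np

def angular_momentum_matrices(l):
    """Return Lz, L+, L- as (2l+1)x(2l+1) matrices in the |l, m> basis (hbar = 1)."""
    ms = np.arange(l, -l - 1, -1)              # m = l, l-1, ..., -l
    dim = len(ms)
    Lz = np.diag(ms).astype(float)
    Lp = np.zeros((dim, dim))
    for i, m in enumerate(ms[1:], start=1):    # L+ raises m by one unit
        Lp[i - 1, i] = np.sqrt(l * (l + 1) - m * (m + 1))
    Lm = Lp.T                                  # L- is the transpose (real matrices)
    return Lz, Lp, Lm

Lz, Lp, Lm = angular_momentum_matrices(1)
L2 = Lp @ Lm + Lz @ Lz - Lz                    # rearranged form of L+L- = L^2 - Lz^2 + Lz
print(np.allclose(Lz @ Lp - Lp @ Lz, Lp))      # True: [Lz, L+] = L+
print(np.diag(L2))                             # all entries equal l(l+1) = 2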

28.5. Quantum Properties of Angular Momentum

The eigenfunctions of angular momentum are entirely specified by two quantum


numbers l and m: ψlm .

L̂2 ψlm = l(l + 1)ψ lm L̂z ψlm = mψ lm (28.49)

If we write out the first of these explicitly in spherical polar coordinates as a


partial differential equation we obtain
∂²ψlm/∂θ² + cot θ ∂ψlm/∂θ + (1/sin²θ) ∂²ψlm/∂φ² + l(l + 1)ψlm = 0      (28.50)
The solutions to this partial differential equation are known to be the spherical
harmonic functions
ψ lm = Ylm (θ, φ). (28.51)

199
The spherical harmonics are functions of two variables, but they are a product of
a function only of θ and a function only of φ,
ψlm = Ylm(θ, φ) = A Pl^|m|(θ) e^(imφ),      (28.52)
where the Pl^|m|(θ) are the associated Legendre polynomials and A is a normalization constant.
Both the spherical harmonics and the Legendre polynomials are tabulated. They
are also built-in functions of Mathematica.

The spherical harmonics (and hence the angular momentum wavefunctions) are
orthonormal; meaning,
∫₀^2π ∫₀^π Y*l′m′(θ, φ) Ylm(θ, φ) sin θ dθ dφ = { 1 if l′ = l and m′ = m;  0 if l′ ≠ l or m′ ≠ m }      (28.53)
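This orthonormality is easy to check numerically. The sketch below uses scipy's built-in spherical harmonics; note that scipy.special.sph_harm names the azimuthal angle "theta" and the polar angle "phi" (the reverse of the convention here), and the crude grid sum and helper name are illustrative choices only.

import numpy as np
from scipy.special import sph_harm

def overlap(l1, m1, l2, m2, n=400):
    """Crude grid evaluation of the integral of Y*_{l1 m1} Y_{l2 m2} sin(theta)."""
    theta = np.linspace(0, np.pi, n)         # polar angle (scipy calls this phi)
    phi = np.linspace(0, 2 * np.pi, 2 * n)   # azimuthal angle (scipy calls this theta)
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    Y1 = sph_harm(m1, l1, PH, TH)
    Y2 = sph_harm(m2, l2, PH, TH)
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    return np.sum(np.conj(Y1) * Y2 * np.sin(TH)) * dtheta * dphi

print(f"{abs(overlap(2, 1, 2, 1)):.4f}")   # close to 1 (normalized)
print(f"{abs(overlap(2, 1, 1, 1)):.4f}")   # close to 0 (orthogonal)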

28.5.1. The rigid rotor

Rotational energy
For general rotation in three dimensions the Hamiltonian is
Ĥ = (ℏ²/2I) L̂²,      (28.54)
so the Schrödinger equation is
Ĥψlm = Elmψlm  ⟹  (ℏ²/2I)L̂²ψlm = Elmψlm  ⟹  (ℏ²/2I)l(l + 1)ψlm = Elmψlm.      (28.55)
Thus
Elm = l(l + 1)ℏ²/(2I) = l(l + 1)h²/(8π²I) = El.      (28.56)
There is no m dependence for the energy. In other words, the energy levels are
determined only by the value of l.

We know that there are 2l + 1 different m values for a particular l value. All 2l + 1
of these wavefunctions correspond to the same energy. We say that there is a 2l + 1
degeneracy of the energy levels.

200
29. Addition of Angular Momentum

29.1. Spin Angular Momentum

We learned above that l may take on integer or half-integer values.

Systems in which l takes on half-integer values are peculiar.

These systems have no classical analogs.

One example of such a system is the spin of an electron, l = s = 1/2. The values
of m = ms are limited to +1/2 and −1/2.

One peculiarity of this system is that the wavefunctions are 4π periodic (and 2π
antiperiodic):
ψs (θ) = −ψs (θ + 2π) (29.1)

and
ψ s (θ) = ψs (θ + 4π). (29.2)

That means that the system has to ‘rotate’ twice (in spin space not coordinate
space) to get back to its original state.

∗ ∗ ∗ See in-class demonstration: the belt trick ∗ ∗∗

201
29.2. Addition of Angular Momentum

In atoms there are a number of sources of angular momentum: the l's and s's of
each of the electrons.

One measures, however, the total angular momentum, J.

The electrons in many electron atoms couple. There are two main coupling schemes
which account for the total angular momentum of the atom.

1. LS coupling (also called Russell-Saunders coupling)

• works well for low atomic weight atoms (first couple of rows of the
periodic table)
• find the total spin angular momentum S = MS,max, where MS = Σi msi
• find the total orbital angular momentum L = Mmax, where M = Σi mi
• then J = L + S

2. jj coupling

• applies to higher atomic weight atoms


• find subtotal angular momentum for each electron ji = li + si
• then find the total angular momentum by J = Σi ji.
• we will not use this method.

29.2.1. The Addition of Angular Momentum: General Theory

Consider two sources of angular momentum for a system represented by the op-
erators Jˆ1 and Jˆ2 (Jˆ1 and Jˆ2 could be L̂ or Ŝ angular momentum; we use Jˆ when
we speak generally.)

202
The total angular momentum is JˆT = Jˆ1 + Jˆ2 .

The total z-component of the angular momentum is JˆzT = Jˆz1 + Jˆz2

The last statement implies that the orientation quantum number of the total
system is simply the sum of that for the components

M = m1 + m2 (29.3)

We need to determine the allowed values of the total angular momentum quantum
number J.

The maximum value of J is determined by the maximum value of M by

Jmax = Mmax = m1max + m2max = j1 + j2 (29.4)

This corresponds to a situation in which component angular momentums add in


the most favorable manner

The minimum value of J is determined by the case when the components add in
the least favorable manner. That is,

Jmin = |j1 − j2 | . (29.5)

The total angular momentum is quantized in exactly the same manner as any
other angular momentum. Thus the allowed values of J are

J = j1 + j2 , j1 + j2 − 1, . . . , |j1 − j2 | + 1, |j1 − j2 | . (29.6)

29.2.2. An Example: Two Electrons

The table below shows the total spin angular momentum S for a two electron
system

203
spin state                   ms1     ms2     MS     S
α(1)α(2)                     1/2     1/2      1     1
β(1)β(2)                    −1/2    −1/2     −1     1
α(1)β(2) + β(1)α(2)           0       0       0     1
α(1)β(2) − β(1)α(2)           0       0       0     0

Counting states:
The spin degeneracy, gS , of the states is given by 2S + 1. In the above example
the degeneracy is gS = 3 for the S = 1 states and gS = 1 for the S = 0 states.

29.2.3. Term Symbols

We have already seen several term symbols, those being 1 S and 3 S during our
discussion of helium.

Term symbols are simply shorthand notation used to identify states. Term symbols
are useful for predicting and understanding spectroscopic data. So, it is worthwhile
to briefly discuss them.

In general the term symbol simply notates the total orbital angular momentum
and spin degeneracies of a particular set of states (or a state in the case of a singlet
state).

The orbital degeneracy is given by gL = 2L + 1.

For historical reasons L values are associated with a letter like the l values of a
hydrogenic system are.

L        0   1   2   3   4   5
symbol   S   P   D   F   G   H

204
The term symbol for a particular state is constructed from the following general
template: the spin degeneracy gS = 2S + 1 is written as a left superscript on the
letter for L, and J is written as a right subscript, i.e., ^(gS)L_J.
Many electron atoms have term symbols associated with their states.

Rules:

1. All closed shells have zero spin and orbital angular momentums: L = 0,
S = 0. These states are all singlet S states, notated by 1 S

2. An electron and a “hole” lead to equivalent term symbols.

• E.g., p1 and p5 have the same term symbol.

3. Hund’s Rule for the ground state only.

1. The ground state will have maximum multiplicity.


2. If several terms have the same multiplicity then ground state will be
that of the largest L.
3. Lowest J value (regular) “electron”, Highest J value (inverted) “hole”

29.2.4. Spin Orbit Coupling

A charge possessing angular momentum has a magnetic dipole associated with it.

An electron has orbital and spin magnetic dipoles.

These dipoles interact with a certain spin—orbit interaction energy ESO .

The spin—orbit Hamiltonian is


ĤSO = hcA L̂·Ŝ      (29.7)
ĤSO = (hcA/2)(Ĵ² − L̂² − Ŝ²),

205
where A is the spin—orbit coupling constant.
From the Hamiltonian the spin—orbit interaction energy is
ESO = (hcA/2)[J(J + 1) − L(L + 1) − S(S + 1)]      (29.8)
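As a concrete illustration of Eq. (29.8), the sketch below evaluates ESO in units of hcA for the two levels of a ²P term (L = 1, S = 1/2, J = 3/2 or 1/2); the choice of term and the helper name are only illustrative assumptions.

def E_so_over_hcA(J, L, S):
    """Spin-orbit energy in units of hcA, following Eq. (29.8)."""
    return 0.5 * (J * (J + 1) - L * (L + 1) - S * (S + 1))

# A 2P term (L = 1, S = 1/2) splits into J = 3/2 and J = 1/2 levels
for J in (1.5, 0.5):
    print(f"J = {J}: E_SO = {E_so_over_hcA(J, 1, 0.5):+.2f} hcA")
# J = 3/2: +0.50 hcA,  J = 1/2: -1.00 hcA  (splitting of 1.5 hcA)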

206
30. Approximation Techniques

As we learned last semester, there are very few models for which we can obtain
an exact solution.

Consequently we must be satisfied with using approximation methods.

Last semester, we always took the simplest approximation to give the qualitative
properties of the unsolvable system.

Now we will consider two important quantitative approximation methods: (i)


perturbation theory and (ii) variational theory

30.1. Perturbation Theory

The basic procedure of perturbation theory

• Find a solvable system that is similar to the system at hand.

• Treat the difference between the two systems as a perturbation to the solv-
able system

• Use the solvable system’s wavefunctions as a zeroth order approximation to


the wavefunctions for the unsolvable system.

• These wavefunctions are used to find a first order correction to the energy.

207
• The first order energy is then used to make a first order approximation to
the wavefunction.

• The procedure is repeated to get higher and higher order approximations.

This process gets algebraically intensive so we will only go as far as listing the first
order energy correction.

The nth state energy in perturbation theory:

En = En(0) + En(1) + . . . , (30.1)


where En⁽⁰⁾ is the nth state energy for the unperturbed (solvable) system and En⁽¹⁾
is the first order correction. This is given by
En⁽¹⁾ = ∫_all space ψn⁽⁰⁾* Ĥ⁽¹⁾ ψn⁽⁰⁾ dx,      (30.2)

where Ĥ (1) is the first order correction to the Hamiltonian–the perturbation.


Example: the quartic oscillator

• Consider the quartic oscillator described by the potential V(x) = ½kx² + ax⁴,
where a is very small and can be treated as a perturbation.

• The obvious solvable system is the harmonic oscillator:


Ĥ = −(ℏ²/2m) d²/dx² + ½kx².      (30.3)
This has energy levels En = ℏω(n + ½) and wavefunctions ψn = An Hn(√α x) e^(−αx²/2),
where α = √(km)/ℏ.

• The perturbative part of the Hamiltonian is

Ĥ (1) = ax4 . (30.4)

208
• For example, the ground state energy correction is then calculated from
E₀⁽¹⁾ = ∫_−∞^∞ ψ₀⁽⁰⁾* Ĥ⁽¹⁾ ψ₀⁽⁰⁾ dx      (30.5)
     = ∫_−∞^∞ A₀e^(−αx²/2) ax⁴ A₀e^(−αx²/2) dx
     = aA₀² ∫_−∞^∞ x⁴ e^(−αx²) dx
     = 3√π aA₀² / (4α^(5/2)),
so the first order ground state energy for a quartic oscillator is
E₀ ≈ ℏω/2 + 3√π aA₀² / (4α^(5/2)).
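A quick numerical check of this result: carrying out the integral in Eq. (30.5) with the normalized ground state Gaussian reproduces 3√π aA₀²/(4α^(5/2)), which equals 3a/(4α²). The parameter values below are arbitrary illustrative choices.

import numpy as np
from scipy.integrate import quad

alpha, a = 2.0, 0.1                        # arbitrary illustrative values
A0 = (alpha / np.pi) ** 0.25               # normalization of the ground state
psi0 = lambda x: A0 * np.exp(-alpha * x**2 / 2)

numeric, _ = quad(lambda x: psi0(x) * a * x**4 * psi0(x), -np.inf, np.inf)
analytic = 3 * a / (4 * alpha**2)          # = 3*sqrt(pi)*a*A0**2 / (4*alpha**2.5)
print(numeric, analytic)                   # both 0.01875 for these parameters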

30.2. Variational method

The basic idea behind the variational method is to use a trial wavefunction with
an adjustable parameter. The value of the parameter which minimizes the energy,
Etrial , gives a trial wavefunction which is closest to the real wavefunction.

The basis for this is the variation theorem which states

Etrial ≥ E.

We will not prove this theorem here.

The trial energy is calculated by


Etrial = ∫_all space ψ*trial Ĥ ψtrial dx / ∫_all space ψ*trial ψtrial dx      (30.6)

The trial energy is now a function of the adjustable parameter, p, that we use to
minimize the trial energy by setting
dEtrial/dp = 0      (30.7)

209
and solving for p. (Strictly speaking we should check that we have a minimum
and not a maximum or inflection point, but with reasonably good trial functions
one is pretty safe in having a minimum.)
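A minimal worked example (Python/sympy, in units where ℏ = m = k = 1, an assumption made only to keep the algebra clean): a Gaussian trial function e^(−px²) for the harmonic oscillator. Because the exact ground state happens to be Gaussian, minimizing Etrial recovers the exact energy; for other trial functions or systems one obtains only an upper bound.

import sympy as sp

x = sp.symbols("x", real=True)
p = sp.symbols("p", positive=True)                 # the adjustable parameter
psi = sp.exp(-p * x**2)                            # trial wavefunction
H_psi = -sp.Rational(1, 2) * sp.diff(psi, x, 2) + sp.Rational(1, 2) * x**2 * psi

num = sp.integrate(psi * H_psi, (x, -sp.oo, sp.oo))
den = sp.integrate(psi * psi, (x, -sp.oo, sp.oo))
E_trial = sp.simplify(num / den)                   # p/2 + 1/(8p)

candidates = sp.solve(sp.diff(E_trial, p), p)
p_opt = [s for s in candidates if s.is_positive][0]
print(p_opt, E_trial.subs(p, p_opt))               # p = 1/2, E = 1/2 (the exact value)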

210
31. The Two Level System and
Quantum Dynamics

Our entire discussion of quantum mechanics thus far has dealt only with time
independent quantum mechanics.

The time variable never appears in any expression.

Obviously there are cases where quantum objects move with time. For example,
firing an electron down a particle accelerator.

We shall finally get to quantum dynamics in this chapter, but first we will discuss
the very important model of the two level system.

31.1. The Two Level System

If the harmonic oscillator is the most important model in all of physics, the two
level system is a close second.

The spin system discussed above is an example of a two level system.

The two level system is inherently quantum mechanical in nature. Unlike the
harmonic oscillator it has no classical analogue.

211
Consequently, we can not use our usual procedure of writing down the classical
Hamiltonian and then replacing the variables with their corresponding operators.

The two level system consists of two states ψ1 and ψ2 separated by energy Δ =
ε2 − ε1 as shown below

The states ψ1 and ψ2 are orthonormal:


∫_TLS ψ*j ψk dΩ = { 1 if j = k;  0 if j ≠ k },      (31.1)
where ∫_TLS dΩ means integration over the two level space (which is really just the
sum Σ_{i=1}^{2}).

The states ψ1 and ψ2 are eigenfunctions of the two level Hamiltonian,
Ĥ = ε1 δ̂1,◦ + ε2 δ̂2,◦,      (31.2)
where δ̂j,◦ "projects out" the jth state of the wavefunction being acted on.

212
For example, let some arbitrary wavefunction be ψ = aψ1 + bψ2; then
Ĥψ = (ε1δ̂1,◦ + ε2δ̂2,◦)(aψ1 + bψ2)      (31.3)
    = ε1δ̂1,◦(aψ1 + bψ2) + ε2δ̂2,◦(aψ1 + bψ2)
    = aε1ψ1 + bε2ψ2.

Another orthonormal set of wavefunctions are the so-called ‘left’


ψL = (1/√2)ψ1 + (1/√2)ψ2      (31.4)
and 'right'
ψR = (1/√2)ψ1 − (1/√2)ψ2      (31.5)
states.

We can invert the above equations and solve for ψ1 and ψ2 in terms of ψL and ψR:
ψ1 = (1/√2)ψL + (1/√2)ψR      (31.6)
and
ψ2 = (1/√2)ψL − (1/√2)ψR.      (31.7)

213
31.2. Quantum Dynamics

So far we have been concerned with the eigenfunctions and eigenvalues (energy
levels) of the various quantum systems that we have discussed.

What has been kept hidden up to now is the fact that the eigenfunctions are really
multiplied by a time dependent phase factor of the form e^(−iEnt/ℏ):
Ψn(x, t) ≡ ψn(x) e^(−iEnt/ℏ)      (31.8)

We can verify this by obtaining the time independent Schrödinger equation from
the more general time dependent equation,
iℏ ∂Ψn(x, t)/∂t = ĤΨn(x, t)      (31.9)
iℏ ∂[ψn(x)e^(−iEnt/ℏ)]/∂t = Ĥψn(x)e^(−iEnt/ℏ)
iℏ ψn(x) ∂e^(−iEnt/ℏ)/∂t = Ĥψn(x)e^(−iEnt/ℏ)
iℏ ψn(x)(−iEn/ℏ)e^(−iEnt/ℏ) = Ĥψn(x)e^(−iEnt/ℏ)
En ψn(x)e^(−iEnt/ℏ) = e^(−iEnt/ℏ) Ĥψn(x)
En ψn(x) = Ĥψn(x)      (31.10)

Does this mean the eigenstates are not stationary states? To determine this we
need to calculate the probability of finding the particle in the same eigenstate at
some future time. This is given by
P(x, t) = |∫ Ψ*n(x, 0) Ψn(x, t) dx|²      (31.11)
        = |∫ ψ*n(x) ψn(x) e^(−iEnt/ℏ) dx|²
        = |e^(−iEnt/ℏ) ∫ ψ*n(x) ψn(x) dx|²
        = |e^(−iEnt/ℏ) (1)|² = 1,

214
so no matter what time t we check we will always find the system in the same
eigenstate. Thus the eigenstates are stationary states.

In general the state of the system need not be in one particular eigenstate; it may
be in a superposition of any number of eigenstates.

The “left” and “right” wavefunctions that we saw in the discussion of the two
level system are examples of superposition states.

The phase factor does become important for superposition states.

As an example consider the state


Φ(x, t) = (1/√2)Ψ1(x, t) + (1/√2)Ψ2(x, t);      (31.12)
exposing the phase factors we get
Φ(x, t) = (1/√2)ψ1(x)e^(−iE1t/ℏ) + (1/√2)ψ2(x)e^(−iE2t/ℏ).      (31.13)
Let’s now track the probability of finding the particle in the same superposition
state. Similar to before we calculate
P(x, t) = |∫ Φ*(x, 0) Φ(x, t) dx|²      (31.14)
        = |∫ [(1/√2)ψ*1(x) + (1/√2)ψ*2(x)][(1/√2)ψ1(x)e^(−iE1t/ℏ) + (1/√2)ψ2(x)e^(−iE2t/ℏ)] dx|²
        = |(1/2) ∫ [ψ*1(x)ψ1(x)e^(−iE1t/ℏ) + ψ*1(x)ψ2(x)e^(−iE2t/ℏ)
                  + ψ*2(x)ψ1(x)e^(−iE1t/ℏ) + ψ*2(x)ψ2(x)e^(−iE2t/ℏ)] dx|².

The “cross-terms” (those of the form ψ∗1 (x)ψ 2 (x) and ψ ∗2 (x)ψ 1 (x)) are zero when

215
integrated because the eigenfunctions are orthogonal. This leaves
P(x, t) = |∫ Φ*(x, 0) Φ(x, t) dx|²      (31.15)
        = |(1/2) ∫ [ψ*1(x)ψ1(x)e^(−iE1t/ℏ) + ψ*2(x)ψ2(x)e^(−iE2t/ℏ)] dx|²
        = |(1/2)(e^(−iE1t/ℏ) ∫ ψ*1(x)ψ1(x)dx + e^(−iE2t/ℏ) ∫ ψ*2(x)ψ2(x)dx)|²
        = |(1/2)(e^(−iE1t/ℏ) + e^(−iE2t/ℏ))|² = (1/4)(e^(+iE1t/ℏ) + e^(+iE2t/ℏ))(e^(−iE1t/ℏ) + e^(−iE2t/ℏ))
        = (1/4)(1 + e^(+i(E1−E2)t/ℏ) + e^(−i(E1−E2)t/ℏ) + 1) = (1/2)(1 + cos((E1 − E2)t/ℏ)).

The probability of finding the system in its original superposition state is not one
for all times t.
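The final line is easy to explore numerically. The sketch below (arbitrary units; the values of E1, E2, and ℏ are illustrative assumptions) evaluates P(t) = (1/2)[1 + cos((E1 − E2)t/ℏ)] and shows the survival probability oscillating between 1 and 0.

import numpy as np

E1, E2, hbar = 1.0, 2.0, 1.0        # arbitrary units, for illustration only
t = np.linspace(0, 4 * np.pi, 9)

P = 0.5 * (1 + np.cos((E1 - E2) * t / hbar))
for ti, Pi in zip(t, P):
    print(f"t = {ti:5.2f}   P = {Pi:.3f}")
# P oscillates between 1 and 0 with period 2*pi*hbar/|E1 - E2|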

216
Key Equations for Exam 1

Listed here are some of the key equations for Exam 1. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• The short cut for getting the normalization constant:

N = √( ∫_space |ψunnorm(x, y, z)|² dxdydz ).      (31.16)

• The normalized wavefunction:


ψnorm = (1/N) ψunnorm.      (31.17)

• How to get the average value for some property,


⟨α̂⟩ = ∫_space ψ* α̂ ψ dxdydz.      (31.18)

217
• The Laplacian
∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².      (31.19)
• Normalized wavefunctions for the 3D particle in a box,

ψ = (2√2/√(abc)) sin(nxπx/a) sin(nyπy/b) sin(nzπz/c).      (31.20)

• The energy levels for the 3D particle in a box,

Enx,ny,nz = nx²h²/(8ma²) + ny²h²/(8mb²) + nz²h²/(8mc²).      (31.21)

• Orthonormality:
∫_space ψ*j ψk = { 1, j = k;  0, j ≠ k }.      (31.22)

• Superposition:
ψ = Σi aiϕi      (31.23)

• Commonly used commutators of the angular momentum operators are

[L̂x, L̂y] = iℏL̂z,   [L̂y, L̂z] = iℏL̂x,   [L̂z, L̂x] = iℏL̂y      (31.24)

and
[L̂², L̂x] = [L̂², L̂y] = [L̂², L̂z] = 0.      (31.25)

• The energy levels for a particle on a ring are

Em = ℏ²m²/(2I) = m²h²/(8π²I).      (31.26)

• The normalized wavefunctions for a particle on a ring are


ψm = (1/√(2π)) e^(imφ).      (31.27)

218
• The eigenfunctions of angular momentum are entirely specified by two quan-
tum numbers l and m: ψ lm .

L̂2 ψlm = l(l + 1)ψ lm L̂z ψlm = mψ lm (31.28)

• The energy levels for the rigid rotor are

El = l(l + 1)ℏ²/(2I).      (31.29)

• Degeneracy for general angular momentum is

gJ = 2J + 1. (31.30)

• The first order energy correction in perturbation theory is

En⁽¹⁾ = ∫_all space ψn⁽⁰⁾* Ĥ⁽¹⁾ ψn⁽⁰⁾ dx.      (31.31)

• The trial energy in variation theory is calculated by


Etrial = ∫_all space ψ*trial Ĥ ψtrial dx / ∫_all space ψ*trial ψtrial dx      (31.32)

• In general
Ψn(x, t) ≡ ψn(x) e^(−iEnt/ℏ)      (31.33)

• The left and right superposition states are


ψL = (1/√2)ψ1 + (1/√2)ψ2      (31.34)
and
ψR = (1/√2)ψ1 − (1/√2)ψ2      (31.35)

219
Part VI

Symmetry and Spectroscopy

220
32. Symmetry and Group Theory

We now take a short break from physical chemistry to discuss ideas from the
mathematical field of group theory.

Inherent to group theory is symmetry.

As far as we are concerned, we will

• determine the symmetry of a particular molecule.

• The types of symmetry it has will determine to which symmetry group it


belongs.

• The mathematical properties of all the possible groups have been worked
out

• These mathematical properties translate into a wide variety of physical


properties including

— Bonding
— Properties of wavefunctions
— Vibrational modes
— Many more applications

221
32.1. Symmetry Operators

Any operator that leaves |ψ|² invariant is a symmetry operator for that particular
system:
Ô |ψ|2 = |ψ|2 . (32.1)

This implies
Ôψ = ±ψ. (32.2)

That is, the eigenvalues for the particular symmetry operator are 1 or −1.

For molecules we will be dealing with point group symmetry operators. These
operators deal with symmetry about the center of mass.

We have seen two such operators in ı̂ and σ̂ h .

An example of a symmetry operator that is not a point group symmetry operator


would be an operator that performs some sort of translation in space. This type
of operator arises in the treatment of extended crystal structures.

∗ ∗ ∗ See Handout on Symmetry Elements ∗ ∗∗

32.2. Mathematical Groups

In mathematics the term “group” has special meaning. It is a set of objects and
a single operation, which has the following properties.

1. The group is associative (but not necessarily commutative) with respect to


the operation.

2. An identity element exists and is a member of the group

222
3. The “product” of any two members of the group yield a member of the
group.

4. The inverse of every member of the group is also in the group. In other
words, for any member of the group one can find another member of the
group which, upon “multiplication,” yields the identity element.

∗ ∗ ∗ See Handout on Naming Point Groups ∗ ∗∗

∗ ∗ ∗ See Handout on Assigning Point Groups ∗ ∗∗


Associated with a given group is a “multiplication” table.

32.2.1. Example: The C2v Group

The C2v group consists of the symmetry elements Ê, Ĉ2 , σ̂ v (in-plane) and σ̂ 0v
(transverse).

Water is an example of a molecule described by this point group.

The multiplication table for the C2v group is

C2v Ê Ĉ2 σ̂ v σ̂ 0v
Ê Ê Ĉ2 σ̂ v σ̂ 0v
Ĉ2 Ĉ2 Ê σ̂ 0v σ̂ v
σ̂ v σ̂ v σ̂ 0v Ê Ĉ2
σ̂ 0v σ̂ 0v σ̂ v Ĉ2 Ê

32.3. Symmetry of Functions

In the absence of degeneracy, the wavefunctions must be symmetric or antisym-


metric with respect to all elements of the group.

223
Connecting with the C2v group example, let's consider the wavefunctions for water.

In this case one can collect the eigenvalues (either +1 or −1) for each of the four
symmetry operators as a four component vector. As it turns out there are four
possible sets of eigenvalues–hence four different vectors:

A1 = (1, 1, 1, 1)
A2 = (1, 1, −1, −1)
B1 = (1, −1, 1, −1)
B2 = (1, −1, −1, 1).

To see where these four vectors come from, consider the following.

• The first value has to be +1 since the only eigenvalue of Ê is 1

• The eigenvalue of Ĉ2 can be +1 or −1

— When it is +1 the vectors are labelled A


— When it is −1 the vectors are labelled B

• The eigenvalue of σ̂ v can be either +1 or −1

— When it is +1 the vectors are labelled with a subscript 1


— When it is −1 the vectors are labelled with a subscript 2

• The eigenvalue of σ̂ 0v can be either +1 or −1

• Finally there is a restriction due to the fact that the eigenvalues must obey
the group multiplication table.

— This restriction forces the eigenvalues of σ̂v and σ̂ 0v to be the same for
the A type vectors and opposite for the B type vectors.

224
The above considerations leave four vectors.

In fact, there will always be the same number of vectors as symmetry elements.

Altogether, the vectors represent what is called an irreducible representation of the


group.

These vectors make up the character table:

C2v Ê Ĉ2 σ̂ v σ̂ 0v
A1 1 1 1 1
A2 1 1 −1 −1
B1 1 −1 1 −1
B2 1 −1 −1 1

∗ ∗ ∗ See Handout on Character Tables ∗ ∗∗

32.3.1. Direct Products

The direct product of two vectors is defined as

(x1 , x2 , x3 , . . .) ⊗ (y1 , y2 , y3 , . . .) = (x1 y1 , x2 y2 , x3 y3 , . . .) (32.3)

For the example of the C2v group consider

B1 ⊗ B2 = (1, −1, 1, −1) ⊗ (1, −1, −1, 1)


= (1, 1, −1, −1) = A2 (32.4)
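Because the direct product is just a component-wise multiplication, it is trivial to automate. The sketch below encodes the C2v character vectors and looks up the product irrep; the dictionary layout and function name are implementation choices, not part of the notes.

# Character vectors of the C2v irreducible representations (order: E, C2, sigma_v, sigma_v')
irreps = {
    "A1": (1, 1, 1, 1),
    "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1),
    "B2": (1, -1, -1, 1),
}

def direct_product(a, b):
    """Component-wise product of two character vectors; returns the irrep label."""
    prod = tuple(x * y for x, y in zip(irreps[a], irreps[b]))
    return next(name for name, chars in irreps.items() if chars == prod)

print(direct_product("B1", "B2"))  # A2, as in Eq. (32.4)
print(direct_product("A2", "B1"))  # B2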

32.4. Symmetry Breaking and Crystal Field Splitting

We shall investigate how degeneracies of energy levels are broken as one reduces
the overall symmetry of the system.

225
In doing this we will, for simplicity, consider only proper rotations (Cn ). Mirror
symmetry will not be considered (although in real applications one must consider
all symmetry).

First consider a free atom. In this case there is complete rotational symmetry.
Thus the symmetry group is the spherical group (see character table handout.)

This is the group associated with the particle on a sphere model and the angular
part of the hydrogen atom. The vectors are labeled according to the angular
momentum quantum numbers S, P, D, F, etc.

The degeneracies of these vectors are 1 for S, 3 for P, 5 for D and so on as is


familiar to us already.

Now consider the free atom being placed in a crystal lattice of octahedral sym-
metry. For example placed at the center of a cube which has other atoms at the
centers of each face of the cube.

When moving to octahedral symmetry we now must look at the character table for
such a case–the O group (remember we are considering only proper rotations).

The S vector has the symmetry of a sphere (x2 + y 2 + z 2 ) and hence is totally
symmetric. It is also nondegenerate so it will be, of course, nondegenerate in
the octahedral case. It remains totally symmetric so it is now represented by the
vector A1 .

The P vector is triply degenerate and has the symmetry of x, y and z as we see
from the character table for the spherical group. In the octahedral crystal the
degeneracy remains intact and these states are represented by the T1 group.

226
The D vector has a degeneracy of five and the symmetry of 2z 2 − x2 − y 2 , xz, yz,
xy, x2 − y 2 . Looking at the table for the O group we see the degeneracy splits:
two states become E type and the remaining three become T2 type.

The F states have a degeneracy of 7 and the symmetry of z 3 , xz 2 , yz 2 , xyz,


z(x2 − y 2 ), x(x2 − 3y 2 ) and y(3x2 − y 2 ). In an octahedral environment the states
split with one becoming A2 , three becoming T1 and three becoming T2 . This is
not readily apparent from the character tables so one needs to inspect a little
harder to see it (see homework).

The octahedral group is still highly symmetric. Let's say that two atoms on oppo-
site sides of the cube are moved slightly inward. The remaining four atoms remain
in place.

This breaks the octahedral symmetry and the system now assumes D4 symmetry.

Now the A1 vector of the O group becomes the A1 vector of the D4 group. The
triply degenerate T1 vector splits into a A2 state and a doubly degenerate E state.

The E states from the O group become a A1 type state and a B1 type state.

The T1 states from the O group become a A2 type state and a E type state.

The T2 states from the O group become a B2 and a E type state.

227
33. Molecules and Symmetry

From our chapter on diatomic molecules last semester we have learned a great
deal which carries over directly to polyatomic molecules.
So, in this chapter we simply investigate some of the specific details regarding
polyatomic molecules.

33.1. Molecular Vibrations

As for diatomic molecules, it is convenient to work with center of mass coordinates.


With polyatomic molecules one needs to specify the coordinates of N nuclei rather
than just two nuclei.
To do so we begin with the 3N nuclear degrees of freedom.
As for the diatomic case 3 degrees of freedom determine the center of mass motion.
That leaves us with 3N − 3 coordinates to specify.
One must now consider two different types of polyatomic molecules: Linear and
Nonlinear.

• For linear molecules there are 2 rotational degrees of freedom

• For nonlinear molecules there are 3 rotational degrees of freedom

This now leaves one with 3N − 5 vibrational degrees of freedom for linear poly-
atomic molecules and 3N − 6 vibrational degrees of freedom for nonlinear mole-
cules.

228
33.1.1. Normal Modes

Polyatomic molecules can undergo very complicated vibrational motion.


Regardless of what type of vibrational motion is taking place, however, that mo-
tion is some linear combination of fundamental vibrational motions called normal
modes.
This is analogous to writing an arbitrary wavefunction as a linear combination of
eigenfunctions. One example was the “left” and “right” states of the two level
system.

The number of normal modes equals the number of vibrational degrees of freedom.

At low energies the normal modes are well approximated as harmonic oscillators.

33.1.2. Normal Modes and Group Theory

The symmetry of the normal modes are associated with entries in the character
table of the point group of any particular polyatomic molecule.

Example: Water
The point group symmetry of the water molecule is C2v . The character table is

C2v Ê Ĉ2 σ̂ v σ̂ 0v
A1 1 1 1 1
A2 1 1 −1 −1
B1 1 −1 1 −1
B2 1 −1 −1 1

Water has three nuclei and it is nonlinear so it has 3(3) − 6 = 3 normal modes.
The three modes are the bending vibration, the symmetric stretching vibration
and the asymmetric stretch.

229
The normal modes are associated with a particular vector (row) of the character
table by considering the action of the each of the symmetry elements on the normal
mode.

For the bending mode, the vibration is completely unchanged by any of the sym-
metry elements. Consequently the bending mode is associated with A1

The same is true for the symmetric stretching mode. It too is associated with A1 .

The asymmetric stretch, however, is associated with B1 since Ĉ2 and σ̂0v transform
the mode into its opposite and σ̂ v leaves it unchanged.

230
34. Vibrational Spectroscopy and
Group Theory

We now investigate how group theory and, in particular, the character tables can
be used to determine IR and Raman spectra and selection rules for polyatomic
molecules

34.1. IR Spectroscopy

IR absorption is exactly the same as regular electronic absorption except the


frequency of the electromagnetic radiation is much less.

The typical “energies” for IR absorption are from 400 to 4000 cm−1 . This is in
the Infrared region of the electromagnetic spectrum.

As for electronic absorption one typically employs the electric dipole approxima-
tion.

The electric dipole approximation

• Molecule is viewed as a collection of charges

• Multipole expansion

monopole + dipole + quadrupole + · · ·      (34.1)

231
• Light—matter interaction is dominated by the light—dipole coupling so the
other interactions are ignored.

In order for absorption of the electromagnetic radiation to take place, it must be


able to couple to a changing (oscillating) electric dipole.

The electric dipole is


μ = μx ex + μy ey + μz ez (34.2)

where μx = qx, μy = qy, μz = qz.

The upshot of all this, as far as group theory is concerned, is the following
selection rule:

• The vibrational coordinates for an IR active transition must have the same
symmetry as either x, y, or z for the particular group.

Example: Water
Recall that the point group symmetry of the water molecule is C2v .

We now need a column of the character table which we have ignored up to this
point.

The character table is

C2v Ê Ĉ2 σ̂ v σ̂ 0v Functions


A1 1 1 1 1 z, x2 , y 2 , z 2
A2 1 1 −1 −1 xy
B1 1 −1 1 −1 x, xz
B2 1 −1 −1 1 y, yz

The last column describes the symmetry of several important functions for the
point group.

232
Among these functions are x, y, and z.

So we can see immediately that the IR active modes of any molecule having this
point group will be A1 , B1 , and B2 .

The A2 mode is IR forbidden and any vibrations having this symmetry will not
appear in the IR spectrum (or it may appear as a very weak line).

From before we know the modes of water have A1 and B1 symmetry and hence
are all IR active and appear in the IR spectrum

34.2. Raman Spectroscopy

Raman spectroscopy is somewhat different than IR spectroscopy in that vibra-


tional frequencies are measured by way of inelastic scattering of high frequency
(usually visible) light.

The light loses energy to the material in an amount equal to the vibrational energy
of the molecules in the sample.

This loss of energy shows up in the scattered light as a new, down-shifted frequency
from that of the original input light frequency.

Unlike IR absorption which is based on the electric dipole, Raman scattering is


based on the polarizability of the molecule

Roughly speaking the polarizability of a molecule determines how the electron


density is distorted through interaction with an electromagnetic field.

233

The molecular quantity of interest is the polarizability tensor, α.

We will not get into tensors in this course except to say the polarizability tensor
elements are proportional to the quadratic functions, x2 , y 2 , z 2 , xy, xz, yz, (or
any combinations thereof).

One can now inspect the character table to determine which modes will be Raman
active.

For the example of water, all modes are Raman active

Rule of Mutual exclusion

• Vibrational mode can be both IR and Raman active or inactive

• If, however, the molecule has inversion symmetry (contains ı̂ as a symmetry


element) then no modes will be both IR and Raman active.

234
35. Molecular Rotations

Recall that the three degrees of freedom that described the position of the nuclei
about the center of mass were (R, θ, φ). The R was involved in vibrations. We
now turn our attention to the angular components to describe rotations.

Recall also the Kinetic energy operator for the nuclei in the center of mass coor-
dinates
T̂N = −(ℏ²/2μ)∇̂²N = −(ℏ²/2μR²) ∂/∂R (R² ∂/∂R) + (ℏ²/2μR²) Ĵ².      (35.1)
We will now be concerned only with the angular part,

(ℏ²/2I) Ĵ².      (35.2)
Now, under the Born-Oppenheimer approximation, R is a parameter. For constant
R the rotational energy is given by
Erot = J(J + 1)ℏ²/(2μR²) = J(J + 1)h²/(8π²I).      (35.3)
This is the so-called rigid rotor energy.

It is common to define
Be ≡ h/(8π²I)      (35.4)
as the rotational constant. Then

Erot = J(J + 1)hBe (35.5)

with a degeneracy of
gJ = 2J + 1 (35.6)
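To attach numbers to Be and to the rigid rotor levels, the sketch below estimates Be for CO from an assumed equilibrium bond length of 1.128 Å (the molecule and bond length are illustrative assumptions, not taken from the notes) and lists the first few rotational energies and degeneracies.

import numpy as np

h = 6.62607015e-34       # J s
c = 2.99792458e10        # cm/s
amu = 1.66053907e-27     # kg

m1, m2, r = 12.000 * amu, 15.995 * amu, 1.128e-10   # CO with an assumed bond length (m)
mu = m1 * m2 / (m1 + m2)
I = mu * r**2

Be = h / (8 * np.pi**2 * I * c)                     # rotational constant in cm^-1
print(f"Be = {Be:.3f} cm^-1")                       # about 1.93 cm^-1
for J in range(4):
    print(f"J = {J}: E = {Be * J * (J + 1):.2f} cm^-1, g = {2 * J + 1}")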

235
35.1. Relaxing the rigid rotor

Of course the rigid rotor is not a perfectly correct model for a diatomic molecule.
There are two corrections we will now make

1. Vibrational state dependence:

• The R value is dependent on the particular vibrational level.


• One defines a rotational interaction constant that depends on the vi-
brational level, n:
Bn ≡ Be − (n + ½)αe,      (35.7)
where αe is an empirical rotational—vibrational interaction constant.

2. Centrifugal stretching:

• Rotation tends to stretch the diatomic distance R.


• This is corrected for by the term

−J²(J + 1)²Dc,      (35.8)

where
Dc ≡ 4Be³/ω̃e²      (35.9)
is the centrifugal stretching constant.

35.2. Rotational Spectroscopy

A rotational transition can occur in the same vibrational level n. This is called a
pure rotational transition. Alternatively, a rotational transition can accompany a
vibrational transition.

In either case the selection rule for the transition is ΔJ = ±1.

236
It turns out that typical rotational energy gaps are on the order of a few wavenum-
bers or less.

Thermal energy, kT, at room temperature is about 200 cm−1. This means that
at room temperature many excited rotational states are populated.
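The relative thermal population of level J is proportional to gJ e^(−Erot/kT) = (2J + 1)e^(−hcBeJ(J+1)/kT). The sketch below (with an assumed Be of 2 cm⁻¹, an illustrative value) shows that many J levels are appreciably populated at room temperature, which is what produces the multi-peaked ro-vibrational band.

import numpy as np

Be = 2.0          # cm^-1, assumed rotational constant for illustration
kT = 207.0        # cm^-1, thermal energy near 298 K

J = np.arange(0, 25)
pop = (2 * J + 1) * np.exp(-Be * J * (J + 1) / kT)
pop /= pop.sum()                      # normalize to relative populations

print("most populated J:", J[np.argmax(pop)])
for j in (0, 5, 10, 20):
    print(f"J = {j:2d}: relative population = {pop[j]:.3f}")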

∗ ∗ ∗ See Handout ∗ ∗∗

The selection rules and the thermalized states combine to yield a multi-peaked
ro-vibrational spectrum.

∗ ∗ ∗ See Handout ∗ ∗∗

35.3. Rotation of Polyatomic Molecules

There are a few additional details regarding rotations for polyatomic molecules as
compared to diatomics

Of course one could set up an arbitrary center of mass coordinate system. But
one system is special–the principal axes coordinate system.

The principal axes coordinate system is the one in which the z-axis is taken to be
along the principal symmetry axis.

The total moment of inertia, I = Ixx + Iyy + Izz

The Hamiltonian in the principal axes system is

Ĥ = (ℏ²/2)[Ĵx²/Ixx + Ĵy²/Iyy + Ĵz²/Izz]      (35.10)

237
There are four classes of polyatomic molecules regarding rotations

1. Linear (e.g., carbon dioxide)

• Izz = 0, Ixx = Iyy


• Jˆ2 = Jˆx2 + Jˆy2
• The Hamiltonian is
Ĥ = (ℏ²/2Ixx) Ĵ²      (35.11)
• The rotational energy is

Erot = hBJ(J + 1), (35.12)

where
B = h/(8π²Ixx)      (35.13)
2. Symmetric tops (e.g., benzene)

• Ixx = Iyy
• Jˆ2 = Jˆx2 + Jˆy2 + Jˆz2
• The Hamiltonian is
Ĥ = (ℏ²/2)[(Ĵx² + Ĵy²)/Ixx + Ĵz²/Izz]      (35.14)

• The rotational energy is

Erot = hBJ(J + 1) + h(A − B)K 2 , (35.15)

where
A = h/(8π²Izz),      (35.16)
B = h/(8π²Ixx)      (35.17)
and K is the quantum number describing the projection of the angular
momentum onto the z-axis

238
3. Spherical tops (e.g., methane)

• Ixx = Iyy = Izz


• Jˆ2 = Jˆx2 + Jˆy2 + Jˆz2
• The Hamiltonian is
Ĥ = (ℏ²/2Ixx) Ĵ²      (35.18)
• The rotational energy is

Erot = hBJ(J + 1), (35.19)

where
B = h/(8π²Ixx)      (35.20)
4. Asymmetric tops

• Ixx ≠ Iyy ≠ Izz


• These are more complicated and we will not discuss them in detail

239
36. Electronic Spectroscopy of
Molecules

The electronic spectra of molecules are quite different than that of atoms.

Atomic spectra consist of single sharp lines due to transitions between energy
levels.

Molecular spectra, on the other hand, have numerous lines (bands) due to the
fact that electronic transitions are accompanied by vibrational and rotational
transitions.

36.1. The Structure of the Electronic State

Last semester we saw that under the Born—Oppenheimer approximation we were


able to write the molecular wavefunction as a product of an electronic part and a
nuclear part.

We found that in doing so the electronic energy level, Ee , was parameterized by


the internuclear distance, R.

Ee as a function of R describe the effective potential for the nuclei.

It had a qualitative shape similar to the Morse potential.

240
In the figure below the ground and first excited electronic levels (as a function of
R) are shown.

Note: The potential minima are not at the same value of R for each of the
electronic states.

36.1.1. Absorption Spectra

In absorption spectroscopy, light promotes an electron from the ground electronic


state (and usually from the ground vibrational state too) to the excited electronic
state and any of the excited vibrational states of the excited electronic state.

∗ ∗ ∗ See Spectroscopy Supplement p1 ∗ ∗∗

36.1.2. Emission Spectra

In emission spectroscopy, light demotes an electron from the ground vibrational


state of the excited electronic state to any one of a number of excited vibrational
levels in the ground electronic state.

241
∗ ∗ ∗ See Spectroscopy Supplement p2 ∗ ∗∗

36.1.3. Fluorescence Spectra

All during the process of absorption, the process of fluorescence is taking place.

∗ ∗ ∗ See Spectroscopy Supplement p3 ∗ ∗∗

As seen in the supplement the fluorescence spectrum is shifted to lower energies


(red shifted) from the absorption spectrum.

This is known as the Stokes shift.

The mainstream explanation for the Stokes shift is as follows

• Light promotes the system from the ground vibrational and ground elec-
tronic state to excited vibrational levels in the excited electronic state.

• The system then very rapidly (on the order of tens to hundreds of fem-
toseconds) relaxes to the ground vibrational state of the excited electronic
state.

• This process is called vibrational relaxation.

• The molecule then emits a photon to drop back down into an excited vibra-
tional state of the ground electronic state.

• This requires a lower energy (or “more red”) photon. Hence the Stokes shift.

242
36.2. Franck—Condon activity

We have seen that an electronic transition involves not only a change in the
electronic state but also in the vibrational state in general (and in the rotational
state as well, but we will ignore this).

Assuming the electronic transition is allowed one must calculate the probability of
the vibrational transition as well. This is done by evaluating the Franck—Condon
integral.

36.2.1. The Franck—Condon principle

When the Born—Oppenheimer approximation is applied to spectroscopic transi-


tions, one obtains the Franck—Condon principle.

The Franck—Condon principle states that the nuclei do not move during an elec-
tronic transition.

Physically this means that for a particular transition to be Franck—Condon ac-


tive there must be good overlap of the vibrational wavefunctions involved in the
transition.

Mathematically this means that the strength of a transition from Ψi = ψel,i ψvib,i →
Ψf = ψel,f ψvib,f is given by
|∫_all space Ψ*f μ̂el Ψi|² = |∫_el space ∫_vib space ψ*el,f ψ*vib,f μ̂el ψel,i ψvib,i|²,      (36.1)

243
where μ̂el is the electronic transition dipole. We can separate the integrals as
|∫_el space ψ*el,f μ̂el ψel,i|² × |∫_vib space ψ*vib,f ψvib,i|².      (36.2)
The first (electronic) factor must be nonzero for an allowed transition; the second
factor is the Franck—Condon factor.
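These vibrational overlap integrals can be evaluated numerically. The sketch below does so for two harmonic potentials of equal frequency whose minima are displaced by d, in dimensionless units; the displacement value and the helper name are arbitrary assumptions made only for illustration.

import numpy as np
from math import factorial
from scipy.special import eval_hermite

def ho_wavefunction(n, x):
    """Harmonic oscillator eigenfunction in dimensionless units (alpha = 1)."""
    norm = 1.0 / np.sqrt(2.0**n * factorial(n)) * np.pi**-0.25
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2)

d = 1.5                                  # assumed displacement of the upper-state minimum
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

psi_i = ho_wavefunction(0, x)            # ground vibrational level of the lower state
for n in range(5):
    psi_f = ho_wavefunction(n, x - d)    # vibrational level n of the displaced upper state
    fc = (np.sum(psi_f * psi_i) * dx) ** 2
    print(f"0 -> {n}: Franck-Condon factor = {fc:.3f}")
# the factors summed over all n equal 1; the maximum sits near n ~ d^2/2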

244
37. Fourier Transforms

As a spectroscopist it is imperative to have a deep understanding of the relation-


ship between time and frequency.

Spectroscopic data is obtained either in the time domain or in the frequency


domain and one should readily be able to look at data in one domain and know
what is happening in the other domain.

One should be familiar with qualitative aspects of this time—frequency relation,


such as if a signal oscillates in time it will have a peak in its frequency spectrum
at the frequency with which it is oscillating.

Furthermore, if the signal decays rapidly it will have a broad spectrum and, con-
versely, if the signal decays slowly it will have a narrow spectrum. The mathemat-
ics which governs these qualitative statements is Fourier transform theory which
we now review.

37.1. The Fourier transformation

The Fourier transformation, ℱ, of a function f(t) will, in this work, be denoted


by a tilde, f̃(ω), and is given by
ℱ[f(t)] = f̃(ω) = ∫_−∞^∞ f(t) e^(iωt) dt.      (37.1)

245
The Fourier transformation is unique and it has a unique inverse, ℱ⁻¹, which is
given by
ℱ⁻¹[f̃(ω)] = f(t) = (1/2π) ∫_−∞^∞ f̃(ω) e^(−iωt) dω.      (37.2)

The above two relations form the convention used throughout this work.

Other authors use different conventions, so one must take care to know exactly
which convention is being used.

For simplicity the symbol ℱ will be used to represent the Fourier transformation
operation, i.e., ℱ[f(t)] = f̃(ω), whereas the symbol ℱ⁻¹ will represent the inverse
Fourier transformation, i.e., ℱ⁻¹[f̃(ω)] = f(t).
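As a numerical illustration of this time–frequency relationship, the sketch below (numpy, with arbitrary illustrative parameters) Fourier transforms a decaying cosine: the magnitude spectrum peaks at the oscillation frequency, and its width tracks the decay rate.

import numpy as np

dt = 0.01
t = np.arange(0, 100, dt)
omega0, gamma = 5.0, 0.2                      # assumed oscillation frequency and decay rate
f = np.cos(omega0 * t) * np.exp(-gamma * t)   # decaying oscillation in the time domain

spectrum = np.abs(np.fft.rfft(f)) * dt        # approximate |f~(omega)| for omega >= 0
omega = 2 * np.pi * np.fft.rfftfreq(len(t), dt)

print("peak at omega =", omega[np.argmax(spectrum)])   # close to omega0 = 5
# a slower decay (smaller gamma) gives a narrower peak, and vice versa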

246
Key Equations for Exam 2

Listed here are some of the key equations for Exam 2. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• Vibrational degrees of freedom

— linear: 3N − 5
— not linear: 3N − 6

• The so-called rigid rotor energy is

Erot = J(J + 1)hBe . (37.3)

where
Be ≡ h/(8π²I)      (37.4)
is the rotational constant.

247
• The degeneracy of the rigid rotor is

gJ = 2J + 1 (37.5)

• Franck—Condon Factor:
|∫_vib space ψ*vib,f ψvib,i|²      (37.6)

• The Fourier transformation is


ℱ[f(t)] = f̃(ω) = ∫_−∞^∞ f(t) e^(iωt) dt.      (37.7)

• The inverse Fourier transformation is


ℱ⁻¹[f̃(ω)] = f(t) = (1/2π) ∫_−∞^∞ f̃(ω) e^(−iωt) dω.      (37.8)

248
Part VII

Kinetics and Gases

249
38. Physical Kinetics

We now turn our attention to the molecular level and in particular to molecular
motion.

38.1. kinetic theory of gases

A microscopic view of gases

Consider a gas of point masses m, where m is the molecular (or atomic) mass.

• Each particle of mass m has velocity v, hence a momentum of p = mv and


a kinetic energy of KE = ½ mv·v = ½ mv².

• A sample of N molecules is characterized by its number density n* = N/V.

• From the ideal gas law P V = nRT = (N/L)RT (L is Avogadro's number):
N/V = P L/(RT) = n*

Consider the ith particle at position xi = (x, y, z) in coordinate (position) space.

Its velocity is vi = dxi/dt = (dxi/dt, dyi/dt, dzi/dt). This can be represented in
velocity space by the vector vi = (vxi, vyi, vzi).
the vector vi = (vxi , vyi , vzi ).

The velocities of the particles are characterized by a probability distribution func-


tion for velocities F (vx , vy , vz , t) which is in general a function of time, t.

250
The number of particles, NVv , having velocities in a macroscopic volume, Vv , in
velocity space is
NVv = N ∫_Vv F(v, t) dv = N ∫∫∫_Vv F(vx, vy, vz, t) dvx dvy dvz      (38.1)

It is more convenient to switch to spherical polar coordinates in velocity space


(v, θ, φ); n.b., v is simply a magnitude (not a vector)–it is the speed.

The probability distribution function then becomes F (v, θ, φ, t)

If we choose the origin of our coordinate system to be at the center of mass of the
gas, then for many cases the velocity distribution will be isotropic–independent
of θ and φ.
F (v, θ, φ, t) = F (v, t). (38.2)

Furthermore, stationary distributions–those independent of time–are often en-


countered.
F (v, θ, φ, t) = F (v, θ, φ). (38.3)

251
We shall consider stationary isotropic distributions F (v). So F (v) represents a
distribution of speeds.
It can be shown from first principles that
F(v) = 4π (m/(2πkbT))^(3/2) e^(−mv²/(2kbT)) v²      (38.4)

where kb = 1.380658 × 10⁻²³ J K⁻¹ is Boltzmann's constant. This is the Maxwell


distribution (of speeds).
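A quick numerical check that F(v) is normalized and that ∫₀^∞ vF(v) dv reproduces √(8kT/(πm)); the gas (N2 at 298 K) and the finite integration cutoff are illustrative assumptions.

import numpy as np
from scipy.integrate import quad

kb = 1.380649e-23          # J/K
T = 298.0                  # K
m = 0.028 / 6.022e23       # kg, one N2 molecule (illustrative choice)

# Maxwell distribution of speeds, Eq. (38.4)
F = lambda v: 4 * np.pi * (m / (2 * np.pi * kb * T))**1.5 * np.exp(-m * v**2 / (2 * kb * T)) * v**2

norm, _ = quad(F, 0, 5000)                       # integrates to ~1 (normalized)
v_mean, _ = quad(lambda v: v * F(v), 0, 5000)    # average speed, ~475 m/s for N2
print(norm, v_mean, np.sqrt(8 * kb * T / (np.pi * m)))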

38.2. Molecular Collisions

The average speed of a particle can be calculated from Maxwell's distribution:

⟨v⟩ = v̄ = ∫₀^∞ v F(v) dv = ∫₀^∞ 4π (m/(2πkT))^(3/2) v³ e^(−mv²/(2kT)) dv      (38.5)
    = √(8kT/(πm)) = √(8RT/(πM))
(using Lk = R and Lm = M).

It will be convenient to define the number density as n* ≡ N/V, where N is the
number of particles. For an ideal gas (V = nRT/P), n* = NP/(nRT) = LP/(RT),
since N/n = L.
A simple model for molecular collisions:

252
• Particles are hard spheres of radius σ.

• A particle moving at v sweeps out a cylinder of radius σ and length vΔt ⟹


V = πσ²vΔt.
∗ ∗ ∗ See Handout ∗ ∗∗

• The number of collisions equals the number of particles with their centers
in V :
number of collisions = n*πσ²vΔt      (38.6)

• The collision frequency = n*πσ²v

For the above model we need to find the average collision frequency. Since the
molecules are moving relative to one another we must find the average relative
velocity, v̄12 = ⟨|v1 − v2|⟩.

It can be shown that

v̄12 = √(16RT/(πM)) = √2 v̄  ⟹  collision frequency = √2 n*πσ²v̄.      (38.7)

From the above expression one defines the mean free path λ to be

λ = v̄/(√2 n*πσ²v̄) = 1/(√2 n*πσ²) = RT/(√2 P Lπσ²)      (38.8)

(using n* = LP/RT).

Example: Ar at SATP (T = 298 K, P = 1 bar):

v̄ = 380.48 m/s,
collision frequency = 5.25 × 10⁹ s⁻¹,
λ = 72.5 nm
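The sketch below evaluates v̄, the collision frequency, and λ from the formulas above, using an assumed hard-sphere collision diameter of 3.6 Å for Ar (an illustrative value); small differences from the numbers quoted above reflect that assumed diameter and the precise value used for v̄.

import numpy as np

R, L = 8.314, 6.022e23     # J/(mol K), 1/mol
T, P = 298.0, 1.0e5        # K, Pa
M = 0.039948               # kg/mol, argon
sigma = 3.6e-10            # m, assumed collision diameter for Ar

v_bar = np.sqrt(8 * R * T / (np.pi * M))
n_star = P * L / (R * T)                             # number density, 1/m^3
z = np.sqrt(2) * n_star * np.pi * sigma**2 * v_bar   # collision frequency, 1/s
lam = v_bar / z                                      # mean free path, m

print(f"v_bar = {v_bar:.0f} m/s, z = {z:.2e} 1/s, lambda = {lam*1e9:.0f} nm")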

253
39. The Rate Laws of Chemical
Kinetics

Thermodynamics described chemical systems in equilibrium. For the study of


chemical reactions it is important to understand systems that can be very far from
equilibrium. For this we turn to the field of chemical kinetics.

We can, from thermodynamics, address the question: Will the reaction occur?

We need kinetics, however, to answer the question: How fast will the reaction occur?

39.1. Rate Laws

Consider a general four component reaction

aA + bB = cC + dD (39.1)

The time dependence of this reaction can be observed by following the disap-
pearance of either of the reactants or appearance of either of the products. That
is,
−d[A]/dt   or   −d[B]/dt   or   d[C]/dt   or   d[D]/dt      (39.2)
BUT this is ambiguous because a moles of A reacts with b moles of B and a does
not, in general, equal b. We must account for the stoichiometry.

254
We define the reaction velocity as

v = (1/vi) d[I]/dt      (39.3)
where vi = −a, −b, c or d and I = A, B, C, or D.

This definition is useful but must be used with caution since for complicated
reactions all the v’s may not be equal. An example of this is
½
bB → cC + dD
aA + (39.4)
b B0 → c0 C0 + d0 D0
0

A rate law is the mathematical statement of how the reaction velocity depends
on concentration.
v = f (conc.) (39.5)

For the most part, rate laws are empirical.

Many, but certainly not all, rate laws are of the form

v = k[A1]^(xA1) [A2]^(xA2) · · · [An]^(xAn).      (39.6)

The reaction is said to be of order xAi in species Ai and it is of overall order


Σi xAi.

In general an overall reaction is made up of so called elementary reactions

Reactant = Product overall rxn (39.7)


Reactant → Intermediates → Product

Note that we shall use an equal sign when talking about the overall reaction and
arrows when talking about the elementary reactions

Example

255
Let
2A + 2B = C + D (39.8)

be the overall reaction. One possible set of elementary steps could be

elementary rxn            molecularity
A + A → A′                Bimolecular
A′ → A′′                  Unimolecular
A′′ + 2B → C + D          Trimolecular

The rate laws for elementary reactions can be determined from the stoichiometry

molecularity     elementary rxn             rate law
Unimolecular     A → Product                v = k[A]
Bimolecular      A + A → Product            v = k[A]²
Bimolecular      A + B → Product            v = k[A][B]
Trimolecular     A + A + A → Product        v = k[A]³
Trimolecular     A + A + B → Product        v = k[A]²[B]
Trimolecular     A + B + C → Product        v = k[A][B][C]

Conversely, rate laws for overall reactions can not be determined by stoichiometry.

Connection to thermodynamics
Consider the overall or elementary reaction
aA + bB ⇌ cC + dD,      (39.9)

where kf is the rate constant for the reaction to proceed in the forward direction
and kr is the rate constant for the reaction to proceed in the reverse direction.

Now, at equilibrium vf = vb which implies

kf [A]a [B]b = kr [C]c [D]d (39.10)

256
bringing kr to the LHS and [A][B] to the RHS we get
kf/kr = [C]^c[D]^d / ([A]^a[B]^b) = K′c,      (39.11)
where K′c is the thermodynamic equilibrium “constant.”

So, we have succeeded in connecting thermodynamics to kinetics BUT we have


done so through the ratio of rate constants. The velocity of a reaction is lost
in this ratio and hence we still can not determine the speed of a reaction from
thermodynamics.

Examples of rate laws


Consider the (overall) reaction between molecular hydrogen and molecular iodine,

H2 + I2 = 2HI. (39.12)

The observed rate laws are vf = kf [H2 ][I2 ] and vr = kr [HI]2 . This suggests that the
reaction is elementary. In fact, the reaction is not elementary. Moral: Kinetics
is very much an empirical science.

Next consider the reaction between molecular hydrogen and molecular bromine,

H2 + Br2 = 2HBr. (39.13)

The observed rate law for this reaction is very complicated,


v = k[H2][Br2]^(1/2) / (1 + k′[HBr]/[Br2]),

this does not obey any common form.

The above two example are seemingly very similar but they have very different
observed rate laws. Moral: Kinetics is very much an empirical science.

Objectives of chemical kinetics

257
• To establish empirical rate laws

• To determine mechanisms of overall reactions

• To empirically study elementary reactions

• To establish theoretical links to statistical mechanics and quantum mechan-


ics

— This involve nonequilibrium thermodynamics–more difficult

• To study chemical reaction dynamics

— the dynamics of molecular collisions that result in reactions

39.2. Determination of Rate Laws

Concentrations c(t) are measured, not rates. To obtain the rate from the concen-
tration we must take its time derivative dc(t)/dt. That is, we must measure c(t) as a
function of time and find the rate of change of this concentration curve.

The rates of chemical reactions vary enormously from sub-seconds to years. Con-
sequently no one experimental technique can be used.

• For slow reactions (hrs/days) almost any technique for measuring the con-
centration can be used.

• For medium reactions (min) either a continuous monitoring technique or a stopping technique can be used.

— A stopping technique uses rapid cooling or destruction of the catalyst to stop a reaction at a given point.

• Very fast (sec/subsec) reactions cause problems because the reaction goes
faster than one can mix the reactants.

39.2.1. Differential methods based on the rate law

Methods based directly on the rate law rely on the determination of the time
derivative of the concentration.

The main problem with such a method is that randomness in the concentration
measurements gets amplified when taking the derivative.

1. Method of initial velocities

• for v = k[A]x [B]y rate laws.


• initially v_0 = k a^x b^y, where a and b are the initial concentrations of A and B respectively
• taking the log of both sides gives ln v_0 = ln[k a^x b^y] = ln k + x ln a + y ln b (a numerical sketch follows after this list)
• a and b can be varied independently, so both x and y can be determined.
• problems
1. if the concentration drops very sharply
2. if there is an induction period

2. Method of isolation

• for v = k[A]x [B]y rate laws


• start with initial concentrations a and b equal to the stoichiometry; this gives the overall order x + y
• flood with, say, A so v ≈ k a^x [B]^y
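A minimal sketch of the method of initial velocities, not taken from the notes: the orders x and y and the rate constant k are recovered by least squares on ln v_0 = ln k + x ln a + y ln b. The initial-rate data below are invented purely for illustration.

```python
import numpy as np

a = np.array([0.10, 0.20, 0.10, 0.40])   # initial [A] (M), assumed data
b = np.array([0.10, 0.10, 0.20, 0.20])   # initial [B] (M), assumed data
v0 = 0.05 * a**2 * b                     # synthetic rates with k = 0.05, x = 2, y = 1

# Design matrix for ln v0 = ln k + x ln a + y ln b
X = np.column_stack([np.ones_like(a), np.log(a), np.log(b)])
coef, *_ = np.linalg.lstsq(X, np.log(v0), rcond=None)
ln_k, x, y = coef
print(f"k = {np.exp(ln_k):.3g}, x = {x:.2f}, y = {y:.2f}")
```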

39.2.2. Integrated rate laws

The above differential methods look directly at the rate law which is a differential
equation. The differential equation is not solved.

We now solve the differential equations to yield what are called the integrated rate laws.

The differential equations (rate law) and their solutions (integrated rate law) are
simply listed here for a few rate laws.
type          rate law^a)                          integrated rate law^a)
1st order     (1/ν_i) d[I]/dt = k[I]               [I] = [I_0] e^{ν_i k t}
2nd order     (1/ν_i) d[I]/dt = k[I]^2             1/[I] = 1/[I_0] − ν_i k t
nth order^b)  (1/ν_i) d[I]/dt = k[I]^n             1/[I]^{n−1} = 1/[I_0]^{n−1} − (n − 1)ν_i k t
enzyme        (1/ν_i) d[I]/dt = k[I]/(k_m + [I])   k_m ln([I_0]/[I]) + ([I_0] − [I]) = −ν_i k t

a) [I] is the concentration of one of the reactants in an elementary reaction and ν_i is the stoichiometric factor for [I] (n.b., ν_i is a negative number).
b) The order need not be an integer. For example, n = 3/2 is a three-halves order rate law.
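A minimal sketch of the first- and second-order integrated rate laws from the table above, using ν_i = −1 for a reactant consumed with unit stoichiometry. The values of k and [I_0] are illustrative, not from the notes.

```python
import numpy as np

k, I0, nu = 0.25, 1.0, -1          # rate constant, initial conc. (M), stoichiometric factor
t = np.linspace(0.0, 10.0, 6)      # s

first_order  = I0 * np.exp(nu * k * t)          # [I] = [I0] e^{nu k t}
second_order = 1.0 / (1.0/I0 - nu * k * t)      # 1/[I] = 1/[I0] - nu k t

for ti, c1, c2 in zip(t, first_order, second_order):
    print(f"t = {ti:4.1f} s   1st order [I] = {c1:.3f} M   2nd order [I] = {c2:.3f} M")
```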

40. Temperature and Chemical
Kinetics

40.1. Temperature Effects on Rate Constants

An empirical rate constant expression was proposed by Arrhenius:


d ln k/dT = E_a/(RT^2)   (40.1)

or

d ln k/d(1/T) = −E_a/R,   (40.2)

where E_a is the Arrhenius activation energy.

Integration of the above yields


ln k = ln A − E_a/(RT)  ⇒  k = A e^{−E_a/RT}   (40.3)

(A is the constant of integration). This is the Arrhenius equation.
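A minimal sketch (with illustrative numbers, not from the notes) of extracting E_a and A from rate constants measured at two temperatures, using ln(k_2/k_1) = −(E_a/R)(1/T_2 − 1/T_1).

```python
import math

R = 8.314                    # J mol^-1 K^-1
T1, k1 = 300.0, 1.0e-3       # K, s^-1 (assumed)
T2, k2 = 320.0, 5.0e-3       # K, s^-1 (assumed)

Ea = -R * math.log(k2 / k1) / (1.0/T2 - 1.0/T1)   # activation energy
A  = k1 * math.exp(Ea / (R * T1))                 # pre-exponential factor
print(f"Ea = {Ea/1000:.1f} kJ/mol,  A = {A:.3g} s^-1")
```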

Recall the equilibrium constant can also be obtained from kinetics,

K_c^0 = k_f/k_r ≃ K_a.   (40.4)

Now, take the log of this:

ln K_a = ln(k_f/k_r) = ln k_f − ln k_r.   (40.5)

Substituting the Arrhenius equation for the rate constants gives

ln K_a = ln[A_f e^{−E_{af}/RT}] − ln[A_r e^{−E_{ar}/RT}]   (40.6)
       = ln(A_f/A_r) + (E_{ar} − E_{af})/(RT).

40.1.1. Temperature corrections to the Arrhenius parameters

The Arrhenius parameters A and E_a are treated as constants.

Theoretical approaches to reaction rates predict rate constants of the form

k = aT^j e^{−E′/RT}.   (40.7)

Forcing this to coincide with the Arrhenius form implies

E_a = E′ + jRT   (40.8)

and

A = aT^j e^j.   (40.9)

We can verify this by starting with the Arrhenius equation and substituting the above expressions,

k = A e^{−E_a/RT} = aT^j e^j e^{−(E′+jRT)/RT} = aT^j e^j e^{−j} e^{−E′/RT} = aT^j e^{−E′/RT}.   (40.10)

40.2. Theory of Reaction Rates

Simple collision theory (SCT)

• Bimolecular reactions (A,B)

• Reaction rate determined by molecular collisions

— Collision frequency (per unit volume) for A–B collisions:

z_AB = πσ_AB^2 L^2 √(8RT/(πLμ)) [A][B],   (40.11)

where μ ≡ m_A m_B/(m_A + m_B) is the reduced mass and σ_AB is the collision diameter.

• The maximum reaction velocity is v_max = z_AB/L, but intuitively the actual reaction velocity will be less because

— the ability to react depends on orientation ⇒ a steric factor p

— a minimum amount of collisional energy is required ⇒ a factor e^{−E_min/RT}

• The actual reaction velocity is

v = p z_AB e^{−E_min/RT} / L.   (40.12)

• The rate constant for a bimolecular reaction is

k = v/([A][B]),   (40.13)

so SCT predicts

k = p z_AB e^{−E_min/RT} / (L[A][B]) = p πσ_AB^2 L √(8RT/(πLμ)) e^{−E_min/RT}.   (40.14)

• Comparison to the (temperature corrected) Arrhenius equation suggests

A = p πσ_AB^2 L √(8RT/(πLμ)) e^{1/2}   (40.15)

and

E_a = E_min + (1/2)RT.   (40.16)

Activated complex theory (ACT)

• An intermediate active complex is formed during the reaction, e.g.,

A + B → (AB)‡ → products. (40.17)

ACT is not limited to bimolecular reactions.

• The active complex is a state in the thermodynamic sense, thus we can apply
thermodynamics to it.

• For the above example, the equilibrium constant is defined as

K_a^‡ = a_‡/(a_A a_B) ≃ [‡]/([A][B])   (at low concentration).   (40.18)

• Definition: transmission factor, f

— accounts for the fraction of activated complex that becomes product.


— From statistical mechanics, it can be shown that f = kb T /h where kb
is Boltzmann’s constant and h is Planck’s constant.

• The reaction rate constant for reactants going to products for ACT is

k = f K_a^‡ = (k_b T/h) K_a^‡.   (40.19)
• Thermodynamics tells us that

ΔG^‡ = −RT ln K_a^‡,   (40.20)

which can be written as

K_a^‡ = e^{−ΔG^‡/RT} = e^{−ΔH^‡/RT} e^{ΔS^‡/R},   (40.21)

where ΔG^‡ = ΔH^‡ − TΔS^‡.

• The ACT reaction rate constant now becomes

k = (k_b T/h) e^{−ΔH^‡/RT} e^{ΔS^‡/R}.   (40.22)

This is Eyring's equation.
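A minimal sketch of evaluating Eyring's equation (40.22). The activation parameters below are illustrative assumptions, not values from the notes.

```python
import math

kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314   # J/K, J s, J mol^-1 K^-1
dH = 50.0e3      # ΔH‡ in J/mol (assumed)
dS = -30.0       # ΔS‡ in J/(mol K) (assumed)

for T in (280.0, 300.0, 320.0):
    k = (kB * T / h) * math.exp(-dH / (R * T)) * math.exp(dS / R)
    print(f"T = {T:5.1f} K   k = {k:.3e} s^-1")
```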

40.3. Multistep Reactions

Up to now, the reactions we have studied have been single step reactions.

In general, there are many steps from initial reactants to final products.

Reactions may occur in series or in parallel or both, in what is called a reaction network.

Parallel reactions:

• Parallel reactions are of the form, for example,

A + B_1 → C   (rate constant k_1),
A + B_2 → D   (rate constant k_2).   (40.23)

• The rate constant for the disappearance of [A] is simply the sum of the two
rate constants: k = k1 + k2

Series reactions:

• Series reactions necessarily include an intermediate product. They are of the form

A → B → C   (rate constants k_1 and k_2, respectively).   (40.24)

• The concentrations of A, B and C are determined by the system of differential equations:

−d[A]/dt = k_1[A]
 d[B]/dt = k_1[A] − k_2[B]
 d[C]/dt = k_2[B],

which, when solved, yields

[A] = [A_0] e^{−k_1 t}
[B] = (k_1[A_0]/(k_2 − k_1)) (e^{−k_1 t} − e^{−k_2 t})
[C] = [A_0] − [A] − [B] = [A_0] (1 − (k_2 e^{−k_1 t} − k_1 e^{−k_2 t})/(k_2 − k_1))

• See in class animation
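A minimal sketch of the series mechanism A → B → C using the closed-form solutions above; k_1, k_2 and [A_0] are illustrative values (the formula requires k_1 ≠ k_2).

```python
import numpy as np

k1, k2, A0 = 1.0, 0.3, 1.0            # s^-1, s^-1, M (assumed)
t = np.linspace(0.0, 15.0, 7)         # s

A = A0 * np.exp(-k1 * t)
B = (k1 * A0 / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t))
C = A0 - A - B                        # conservation of material

for ti, a, b, c in zip(t, A, B, C):
    print(f"t = {ti:5.2f} s   [A] = {a:.3f}   [B] = {b:.3f}   [C] = {c:.3f}")
```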

40.4. Chain Reactions

Chain reactions are reactions which have at least one step that is repeated indef-
initely. The simplest chain reactions have three distinct steps (discussed below)

Chain reactions are extremely important in polymer chemistry

Steps of a chain reaction

1. Initiation: Typically a molecule M reacts to form some highly reactive radical,

M → R·.

2. Propagation: The radical formed in the initiation step reacts with some molecule M′ to form another molecule M′′ and another radical R′·. This step repeats an indefinite number of times.

R· + M′ → M′′ + R′·.

3. Termination: The radicals interact with each other or with the walls of the container to form a stable molecule,

R′· + R′· → M′′′

or

R′· + wall → removed.

41. Gases and the Virial Series

Unlike in liquids and solids, a particular particle in a gas has much less significant interactions with the other particles.

This simplifies the theoretical treatment of gases.

We will now look in detail at the gases.

41.1. Equations of State

Recall from last semester several of the equations of states for gases.

• The ideal gas equation of state

P V = nRT.   (41.1)

The equation of state can also be expressed in terms of the density ρ = m/V:

ρ = mP/(nRT).   (41.2)
• The van der Waals gas equation of state

P = nRT/(V − nb) − n^2 a/V^2   (41.3)

or

P = RT/(V_m − b) − a/V_m^2,   (41.4)

where the parameter a accounts for the attractive forces among the particles and the parameter b accounts for the repulsive forces among the particles.

• Berthelot

P = nRT/(V − nb) − n^2 a/(TV^2) = RT/(V_m − b) − a/(TV_m^2)   (41.5)

• Dieterici

P = nRT e^{−an/(RTV)}/(V − nb) = RT e^{−a/(RTV_m)}/(V_m − b)   (41.6)

• Redlich-Kwong

P = nRT/(V − nb) − n^2 a/(√T V(V − nb)) = RT/(V_m − b) − a/(√T V_m(V_m − b))   (41.7)

41.2. The Virial Series


Definition: Compressibility Factor: z = PV/(nRT) = PV_m/(RT).

• z is unity for an ideal gas because for such a gas PV = nRT.

• For a real gas z must approach unity upon dilution (n/V → 0).

• z can be expanded in a power series called the virial series.

The virial series in powers of n/V is

z = 1 + B(T)(n/V) + C(T)(n/V)^2 + D(T)(n/V)^3 + · · · ,   (41.8)

or

z = 1 + B(T)(1/V_m) + C(T)(1/V_m)^2 + D(T)(1/V_m)^3 + · · · .   (41.9)
B(T ), C(T ), etc. are called the virial coefficients.

Conceptually, B(T) represents pair-wise interactions of the particles, C(T) represents triplet interactions, etc.

41.2.1. Relation to the van der Waals Equation of State

Recall the van der Waals equation

P = RT/(V_m − b) − a/V_m^2.   (41.10)

Multiply both sides by V_m/RT to get

PV_m/RT = V_m/(V_m − b) − a/(RTV_m) = 1/(1 − b/V_m) − a/(RTV_m).   (41.11)

But PV_m/RT = z, so

z = 1/(1 − b/V_m) − a/(RTV_m).   (41.12)

The first term is of the form 1/(1 − x), which has the power series expansion

1/(1 − x) = 1 + x + x^2 + · · · .   (41.13)

Therefore

z = −a/(RTV_m) + 1 + b/V_m + (b/V_m)^2 + · · · .   (41.14)

The −a/(RTV_m) term is proportional to 1/V_m, and so it can be combined with the 1/V_m term in the series expansion, hence

z = 1 + (b − a/RT)(1/V_m) + (b/V_m)^2 + · · · .   (41.15)

This series can now be compared term by term to the virial series to give expressions for the virial coefficients:

B(T) = b − a/RT,   C(T) = b^2,   D(T) = b^3,   etc.   (41.16)

41.2.2. The Boyle Temperature

The temperature at which B(T ) = 0 is called the Boyle temperature, Tb .

The virial series at T_b becomes

z(T = T_b) = 1 + 0·(1/V_m) + C(T)(1/V_m)^2 + D(T)(1/V_m)^3 + · · ·
           = 1 + C(T)(1/V_m)^2 + D(T)(1/V_m)^3 + · · · .   (41.17)

The lowest order correction is now of order (1/V_m)^2. The gas behaves more like an ideal gas at T_b than at other temperatures.
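A minimal sketch of the van der Waals estimate of the second virial coefficient, B(T) = b − a/RT (Eq. 41.16), and the Boyle temperature where B(T) = 0, i.e. T_b = a/(bR). The a and b values for N2 are approximate literature numbers, so the T_b obtained is only an estimate.

```python
R = 8.314            # J mol^-1 K^-1
a = 0.137            # Pa m^6 mol^-2 (N2, approximate)
b = 3.87e-5          # m^3 mol^-1   (N2, approximate)

def B(T):
    """Second virial coefficient (m^3/mol) from the van der Waals constants."""
    return b - a / (R * T)

Tb = a / (b * R)     # Boyle temperature estimate
print(f"Boyle temperature ~ {Tb:.0f} K")
for T in (200.0, Tb, 600.0):
    print(f"T = {T:6.1f} K   B(T) = {B(T)*1e6:8.2f} cm^3/mol")
```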

41.2.3. The Virial Series in Pressure

One can also expand the compressibility factor in pressure

z = 1 + B′(T)P + C′(T)P^2 + D′(T)P^3 + · · · .   (41.18)

The relation of this expansion to the one in 1/V_m can be obtained. One finds (see homework)

B′(T) = B(T)/RT,   (41.19)

C′(T) = (C(T) − B(T)^2)/(RT)^2   (41.20)

and

D′(T) = (D(T) − 3B(T)C(T) + 2B(T)^3)/(RT)^3.   (41.21)

41.2.4. Estimation of Virial Coefficients

The virial coefficients can be estimated using empirical equations and tabulated
parameters.

• Estimates based on Beattie-Bridgeman constants:

B(T) = B_0 − A_0/RT − c/T^3,   (41.22)
C(T) = A_0 a/RT − B_0 b − B_0 c/T^3,   (41.23)
D(T) = B_0 b c/T^3,   (41.24)

where A_0, B_0, a, b, c are tabulated constants.

• Estimates based on critical values (we will discuss critical values shortly; for now treat them as empirical parameters):

B(T) = (9RT_c/128P_c)(1 − 6T_c^2/T^2).   (41.25)

42. Behavior of Gases

42.1. P, V and T behavior

We shall briefly consider the P, V and T behavior of dense fluids (e.g., liquids).

Taking volume as a function of P and T, we consider the total derivative

dV(T, P) = (∂V/∂T)_P dT + (∂V/∂P)_T dP.   (42.1)

We can change this from an extensive property equation to an intensive property equation by dividing by V:

dV/V = (1/V)(∂V/∂T)_P dT + (1/V)(∂V/∂P)_T dP ≡ α dT − κ_T dP.

α is the coefficient of thermal expansion.

• At a given pressure, α describes the change in volume with temperature.

• Positive α means the volume of the fluid increases with increasing temper-
ature.

κT is the isothermal compressibility

• At a given temperature, κT describes the change in volume with pressure.

• Positive κT means the volume of the fluid decreases with increasing pressure.

• κT is different from z, the compressibility factor.

42.1.1. α and κT for an ideal gas

As an exercise we shall calculate α and κ_T using the ideal gas equation of state (n.b., it is, of course, absurd to treat a liquid as an ideal gas). Starting with the ideal gas law, V = nRT/P:

κ_T = −(1/V)(∂V/∂P)_T = −(1/V) ∂(nRT/P)/∂P = −(1/V)(−nRT/P^2) = nRT/(PV·P) = 1/P   (42.2)

and

α = (1/V)(∂V/∂T)_P = (1/V) ∂(nRT/P)/∂T = nR/(VP) = nR/(nRT) = 1/T.   (42.3)

42.1.2. α and κT for liquids and solids

In general, the compressibility and expansion of liquids (and solids) are very small.
So one can expand the volume in a Taylor series about a known pressure, P0 .

At constant T,

V(P) = V_0 + (∂V/∂P)_T (P − P_0) + · · · ,   (42.4)

where (∂V/∂P)_T = −V_0 κ_T, so

V(P) ≈ V_0[1 − κ_T(P − P_0)].   (42.5)

This approximation is quite good even over a rather large pressure range (P −P0 =
100 atm or so).

275
Likewise, at constant P,

V(T) = V_0 + (∂V/∂T)_P (T − T_0) + · · · ,   (42.6)

where (∂V/∂T)_P = V_0 α, so

V(T) ≈ V_0[1 + α(T − T_0)].   (42.7)

As one final point, we can apply the cyclic rule for partial derivatives to determine the ratio α/κ_T:

α/κ_T = (∂V/∂T)_P / [−(∂V/∂P)_T] = (∂P/∂T)_V   (by the cyclic rule).   (42.8)

42.2. Heat Capacity of Gases Revisited

This section is a review from the first semester with an additional example beyond
the ideal gas.

42.2.1. The Relationship Between CP and CV

To find how C_P and C_V are related we begin with

C_P = (∂H/∂T)_P,   H = U + PV,   (42.9)

so

C_P = (∂(U + PV)/∂T)_P = (∂U/∂T)_P + P(∂V/∂T)_P.   (42.10)

Note (∂U/∂T)_P is not C_V; we need (∂U/∂T)_V. Use an identity of partial derivatives:

(∂U/∂T)_P = (∂U/∂T)_V + (∂U/∂V)_T (∂V/∂T)_P,   (42.11)

thus

C_P = (∂U/∂T)_V + [(∂U/∂V)_T + P](∂V/∂T)_P = C_V + (∂V/∂T)_P [(∂U/∂V)_T + P].   (42.12)
Recall the expression for the internal pressure, (∂U/∂V)_T = T(∂P/∂T)_V − P. Then

C_P = C_V + (∂V/∂T)_P [T(∂P/∂T)_V − P + P].   (42.13)

Finally,

C_P = C_V + T(∂V/∂T)_P (∂P/∂T)_V.   (42.14)

For solids and liquids:

(∂V/∂T)_P = Vα,   (∂P/∂T)_V = α/κ_T,   (42.15)

so

C_P = C_V + α^2 T V/κ_T.   (42.16)

For gases we need the equation of state, which often is conveniently explicit in P or V but not both.

1. Explicit in P: Replace

(∂V/∂T)_P with −(∂P/∂T)_V / (∂P/∂V)_T.   (42.17)

2. Explicit in V: Replace

(∂P/∂T)_V with −(∂V/∂T)_P / (∂V/∂P)_T.   (42.18)

Examples

1. Ideal gas (equation of state: PV = nRT): This equation is easily made explicit in either P or V, so we don't need any of the above replacements.

C_P = C_V + T(∂V/∂T)_P (∂P/∂T)_V = C_V + T(nR/P)(nR/V) = C_V + nR(nRT/PV) = C_V + nR.   (42.19)

Thus C_P = C_V + nR, or C_Pm = C_Vm + R.

2. One-term virial equation (equation of state: V = nRT/P + nB). This is explicit in V, so use case 2 above:

C_P = C_V + T(∂V/∂T)_P (∂P/∂T)_V = C_V − T(∂V/∂T)_P (∂V/∂T)_P / (∂V/∂P)_T.   (42.20)

The partial derivatives are

(∂V/∂T)_P = nR/P + nB′,   (∂V/∂P)_T = −nRT/P^2,   (42.21)

so

−(∂V/∂T)_P / (∂V/∂P)_T = (nR/P + nB′)/(nRT/P^2) = P(R + PB′)/(RT).   (42.22)

Thus

C_P = C_V + T(nR/P + nB′)·P(R + PB′)/(RT) = C_V + nR(1 + PB′/R)^2,   (42.23)

or

C_Pm = C_Vm + R(1 + PB′/R)^2.   (42.24)

42.3. Expansion of Gases

Expanding gases do work:

−w = ∫_{V1}^{V2} P_ex dV.   (42.25)

As we learned last semester the value of w depends on Pex during the expansion.

Recall that if the expansion is reversible, there is always an intermediate equilibrium throughout the expansion; namely, P_gas = P_ex. So,

−w_rev = ∫_{V1}^{V2} P_gas dV.   (42.26)

For an ideal gas (P = nRT/V) this becomes

−w_rev = ∫_{V1}^{V2} (nRT/V) dV = nRT ln(V_2/V_1).   (42.27)

Also recall that −wrev is the maximum possible work that can be done in an
expansion. −wrev = −wmax .

42.3.1. Isothermal and Adiabatic expansions

We shall consider two limits for the expansion of gases

1. Isothermal expansion T is constant

2. Adiabatic expansion q = 0.

Isothermal expansion

• For the case of an ideal gas, U(T, V) = U(T) (independent of V). So for an isothermal expansion ΔU = 0 = q + w ⇒ q = −w.

Adiabatic expansion

• Since q = 0, dU = dw = −Pex dV = −P dV (reversible).

• For an ideal gas

dU = −P dV = −(nRT/V) dV.   (42.28)

42.3.2. Heat capacity CV for adiabatic expansions

Consider an ideal gas going adiabatically from (T_1, V_1) to (T_2, V_2).

Recall

C_V = (∂U/∂T)_V ⇒ dU = C_V dT.   (42.29)

So from above,

C_V dT = −(nRT/V) dV ⇒ (C_V/T) dT = −(nR/V) dV.   (42.30)

Going from (T_1, V_1) to (T_2, V_2):

∫_{T1}^{T2} (C_V/T) dT = ∫_{V1}^{V2} (−nR/V) dV.   (42.31)

If C_V(T) is reasonably constant over the interval T_1 to T_2 then this is approximately

C̄_V ln(T_2/T_1) = −nR ln(V_2/V_1),   (42.32)

where C̄_V = (1/2)(C_V(T_1) + C_V(T_2)). Or, in terms of molar heat capacity,

C̄_Vm ln(T_2/T_1) = −R ln(V_2/V_1).   (42.33)
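A minimal sketch of Eq. (42.33): for a reversible adiabatic expansion of an ideal gas, C_Vm ln(T_2/T_1) = −R ln(V_2/V_1), so T_2 = T_1 (V_1/V_2)^{R/C_Vm}. A monatomic gas (C_Vm = 3R/2) is assumed purely for illustration.

```python
R = 8.314
CVm = 1.5 * R                 # assumed monatomic ideal gas
T1, V1, V2 = 300.0, 1.0, 2.0  # K, L, L (illustrative)

T2 = T1 * (V1 / V2) ** (R / CVm)
print(f"T2 = {T2:.1f} K after doubling the volume adiabatically")
```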

42.3.3. When P is the more convenient variable

What if P is the more convenient variable? Then use H instead of U.

Let us still consider an adiabatic expansion.

H = U + PV, so dH = dU + P dV + V dP (because both P and V can, in general, change):

dH = dq + dw + P dV + V dP = 0 − P dV + P dV + V dP,
dH = V dP.   (42.34)

Now,

C_P = (∂H/∂T)_P ⇒ dH = C_P dT = V dP.   (42.35)

For an ideal gas this becomes

C_P dT = (nRT/P) dP.   (42.36)

Going from (T_1, P_1) to (T_2, P_2):

∫_{T1}^{T2} (C_P/T) dT = ∫_{P1}^{P2} (nR/P) dP.   (42.37)

If C_P(T) is reasonably constant over the interval T_1 to T_2 then this is approximately

C̄_P ln(T_2/T_1) = nR ln(P_2/P_1),   (42.38)

where C̄_P = (1/2)(C_P(T_1) + C_P(T_2)). Or, in terms of molar heat capacity,

C̄_Pm ln(T_2/T_1) = R ln(P_2/P_1).   (42.39)

From the above two cases,

ln(T_2/T_1) = (R/C̄_Pm) ln(P_2/P_1) = −(R/C̄_Vm) ln(V_2/V_1).   (42.40)

So

ln(P_2/P_1) = −(C̄_Pm/C̄_Vm) ln(V_2/V_1),   where γ ≡ C̄_Pm/C̄_Vm,   (42.41)

hence

ln(P_2/P_1) = −γ ln(V_2/V_1) = γ ln(V_1/V_2) = ln(V_1/V_2)^γ.   (42.42)

Thus

P_2/P_1 = (V_1/V_2)^γ ⇒ P_2 V_2^γ = P_1 V_1^γ,   (42.43)

but the states (P_i, V_i) are arbitrary, so this implies P V^γ = constant.

42.3.4. Joule expansion

Consider a gas expanding adiabatically against a vacuum (Pex = 0). In this case
q = 0 (adiabatic) and w = 0 (since −dw = Pex dV ).

This implies

ΔU = q + w = 0.   (42.44)

Internal energy is constant.

We want to find (∂T/∂V)_U.

Identity:

(∂T/∂V)_U = −(∂T/∂U)_V (∂U/∂V)_T = −(1/C_V)(∂U/∂V)_T.   (42.45)

For an ideal gas (∂U/∂V)_T = 0 (since U(T, V) = U(T)). Thus, inasmuch as the gas can be considered ideal, (∂T/∂V)_U = 0. That is, for a Joule type expansion the temperature of the gas does not change. For real gases this is not strictly equal to zero.

42.3.5. Joule-Thomson expansion

Consider the adiabatic expansion as illustrated by the figure below

The work done on the left is

w_L = −P_1 ΔV = −P_1(0 − V_1) = P_1 V_1.   (42.46)

The work done on the right is

w_R = −P_2 ΔV = −P_2(V_2 − 0) = −P_2 V_2.   (42.47)

Now,

ΔU = U_2 − U_1 = w_L + w_R = P_1 V_1 − P_2 V_2.   (42.48)

Thus

U_2 + P_2 V_2 = U_1 + P_1 V_1 ⇒ H_2 = H_1.   (42.49)

For a Joule-Thomson expansion the enthalpy is constant.

We want to find (∂T/∂P)_H ≡ μ (the Joule-Thomson coefficient).

Identity:

μ = (∂T/∂P)_H = −(∂T/∂H)_P (∂H/∂P)_T = −(1/C_P)(∂H/∂P)_T.   (42.50)

Recall the useful identity

(∂H/∂P)_T = V − T(∂V/∂T)_P.   (42.51)

Thus

μ = [−V + T(∂V/∂T)_P]/C_P.   (42.52)

Example: the one-term virial equation (equation of state V = nRT/P + nB, i.e. PV = nRT + nBP):

μ = (1/C_P)[−nRT/P − nB + nRT/P + nTB′] = (−B + TB′)/C_Pm.   (42.53)
Limits:

• Low T: B′ is positive and B is negative, so μ is positive; the gas cools upon expansion.

• High T: B′ is nearly zero and B is positive, so μ is negative; the gas warms upon expansion.

• The Joule-Thomson inversion temperature is the temperature where μ = 0.
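A minimal sketch combining Eq. (42.53), μ = (−B + TB′)/C_Pm, with the van der Waals estimate B(T) = b − a/RT (so B′ = a/RT^2). This gives μ = (2a/RT − b)/C_Pm and an inversion temperature T_inv = 2a/(bR). The N2 parameters and C_Pm ≈ 29.1 J/(mol K) are approximate literature values, so the numbers are only estimates.

```python
R = 8.314
a, b = 0.137, 3.87e-5        # Pa m^6 mol^-2, m^3 mol^-1 (N2, approximate)
CPm = 29.1                   # J mol^-1 K^-1 (N2, approximate)

def mu_JT(T):
    """Joule-Thomson coefficient (K/Pa) from the one-term virial/vdW estimate."""
    B  = b - a / (R * T)
    Bp = a / (R * T**2)
    return (-B + T * Bp) / CPm

T_inv = 2 * a / (b * R)
print(f"Estimated inversion temperature ~ {T_inv:.0f} K")
for T in (200.0, 400.0, 1000.0):
    print(f"T = {T:6.1f} K   mu = {mu_JT(T)*1e5:+.3f} K/bar")
```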

43. Entropy of Gases

43.1. Calculation of Entropy

Entropy must be calculated along reversible paths. This is not a problem though
since entropy is a state function.

Entropy change for changes in temperature.

• At constant V:

— dU = dq + dw ⇒ dU = C_V dT (since dq = C_V dT), but also dU = T dS. So

dS = (C_V/T) dT ⇒ ΔS = ∫_{T1}^{T2} (C_V/T) dT.   (43.1)

• At constant P (use H = U + PV instead of U):

— dH = dU + P dV + V dP = dq − P dV + P dV + V dP. So dH = dq ⇒ dH = C_P dT (since dq = C_P dT), but also dH = T dS. So

dS = (C_P/T) dT ⇒ ΔS = ∫_{T1}^{T2} (C_P/T) dT.   (43.2)

Isothermal expansion of an ideal gas (PV = nRT):

• Recall that for isothermal expansion of an ideal gas dU = 0 = T dS − P dV ⇒ dS = P dV/T.

• Using the equation of state,

dS = nR dV/V ⇒ ΔS = ∫_{V1}^{V2} (nR/V) dV = nR ln(V_2/V_1).   (43.3)

• Using the equation of state to express V_1 and V_2 in terms of P_1 and P_2,

ΔS = nR ln(V_2/V_1) = nR ln[(nRT/P_2)/(nRT/P_1)] = −nR ln(P_2/P_1).   (43.4)

If two variables change in going from the initial to final states break the path into
two paths in which only one variable changes at a time.

Entropy of Mixing of an ideal gas

• Since the gas is ideal, there are simply two separate equations:

ΔS_A = n_A R ln((V_A + V_B)/V_A),   ΔS_B = n_B R ln((V_B + V_A)/V_B),   (43.5)

and

ΔS_mix = ΔS_A + ΔS_B.   (43.6)

• Recall Avogadro's principle: n ∝ V for an ideal gas. So,

ΔS_mix = R[n_A ln((n_A + n_B)/n_A) + n_B ln((n_B + n_A)/n_B)] = −R(n_A ln X_A + n_B ln X_B).   (43.7)

43.1.1. Entropy of Real Gases

Consider the question: How does S → S ideal as P → 0 ?

Use the Maxwell relation (∂S/∂P)_T = −(∂V/∂T)_P and the single-term virial equation, V = nRT/P + nB.

So

(∂S/∂P)_T = −(∂V/∂T)_P = −nR/P − nB′.   (43.8)

Hence

dS = (−nR/P − nB′) dP ⇒ S_2 − S_1 = −nR ln(P_2/P_1) − nB′(P_2 − P_1).   (43.9)

For an ideal gas B′ = 0, so

S_2^ideal − S_1^ideal = −nR ln(P_2/P_1).   (43.10)

Thus

S_2 − S_1 = S_2^ideal − S_1^ideal − nB′(P_2 − P_1).   (43.11)

Letting P_1 → 0 (where S_1 = S_1^ideal, so those terms cancel) and P_2 → P^θ (standard pressure, 1 bar), this becomes

S_2 = S_2^ideal − nB′(P_2 − P_1).   (43.12)

Defining S_2^ideal at P_2 → P^θ as S^θ, we have

S(P^θ) = S^θ − nB′P^θ.   (43.13)

The entropy at any P and T can then be expressed as

S(T, P) = S^ideal(T, P) − nB′P.   (43.14)

Thus

S(T, P) = S^θ(T) − nR ln(P/P^θ) − nB′P.   (43.15)

Key Equations for Exam 3

Listed here are some of the key equations for Exam 3. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• The Maxwell distribution of speeds is

F(v) = 4π (m/(2πk_bT))^{3/2} e^{−mv^2/(2k_bT)} v^2.   (43.16)

• The average speed of a particle is

⟨v⟩ = √(8RT/(πM)).   (43.17)

• The mean free path is

λ = RT/(√2 P L π σ^2).   (43.18)

• The reaction velocity is

v = (1/ν_i) d[I]/dt.   (43.19)

• The relation between the rate constants and the thermodynamic equilibrium constant is

K_c = k_f/k_r.   (43.20)

• The Arrhenius equation:

k = A e^{−E_a/RT}.   (43.21)

• Important thermodynamic relation:

ΔG = ΔH − TΔS.   (43.22)

• Eyring's equation is

k = (k_bT/h) e^{−ΔG^‡/RT} = (k_bT/h) e^{−ΔH^‡/RT} e^{ΔS^‡/R}.   (43.23)

• The van der Waals gas equation of state:

P = RT/(V_m − b) − a/V_m^2.   (43.24)

• Compressibility factor:

z = PV/(nRT) = PV_m/(RT).   (43.25)

• The virial series is

z = 1 + B(T)(1/V_m) + C(T)(1/V_m)^2 + D(T)(1/V_m)^3 + · · · .   (43.26)

• Relation between heat capacities for an ideal gas:

C_Pm = C_Vm + R.   (43.27)

Part VIII

More Thermodynamics

44. Critical Phenomena

44.1. Critical Behavior of fluids

The point on the top of the coexistence curve is called the critical point. It is
characterized by a critical temperature, Tc , and a critical density ρc .

Law of rectilinear diameters: The average density, ρ_ave = (1/2)(ρ_liq + ρ_vap), is linear in temperature.

44.1.1. Gas Laws in the Critical Region

The vapor pressure of a substance is taken from the gas laws as the pressures
where A1 = A2 in the above figure.

Simple gas laws do not work well near critical points.

44.1.2. Gas Constants from Critical Data

Consider the van der Waals equation at the critical point (P_c, T_c, V_mc):

P_c = RT_c/(V_mc − b) − a/V_mc^2.   (44.1)

There is an inflection point (dP/dV_m = 0, d^2P/dV_m^2 = 0) at the critical point. So, setting the first and second derivatives at the critical point equal to zero we get

dP/dV_m|_c = −RT_c/(V_mc − b)^2 + 2a/V_mc^3 = 0   (44.2)

and

d^2P/dV_m^2|_c = 2RT_c/(V_mc − b)^3 − 6a/V_mc^4 = 0.   (44.3)

Solving these three equations for P_c, T_c and V_mc gives

V_mc = 3b,   (44.4)
T_c = 8a/(27bR),   (44.5)
P_c = a/(27b^2).   (44.6)

These values can be used to find the compressibility factor, z, at the critical point:

z_c = P_c V_mc/(RT_c) = 3/8 = 0.375.   (44.7)

Notice that both a and b, whose values depend on the particular gas, have dropped out. That is, (for the van der Waals equation) z_c = 0.375 for all gases.

The other equations of state give similar results:

      van der Waals    Berthelot       Dieterici       Redlich-Kwong
z_c   3/8 = 0.375      3/8 = 0.375     2/e^2 ≈ 0.27    0.33
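A minimal sketch of Eqs. (44.4)-(44.7): the critical constants computed from the van der Waals a and b. The CO2 parameters are approximate literature values.

```python
R = 8.314
a = 0.3640        # Pa m^6 mol^-2 (CO2, approximate)
b = 4.267e-5      # m^3 mol^-1   (CO2, approximate)

Vmc = 3 * b
Tc  = 8 * a / (27 * b * R)
Pc  = a / (27 * b**2)
zc  = Pc * Vmc / (R * Tc)

print(f"Vmc = {Vmc*1e6:.1f} cm^3/mol")
print(f"Tc  = {Tc:.1f} K")
print(f"Pc  = {Pc/1e5:.1f} bar")
print(f"zc  = {zc:.3f}   (3/8 = 0.375 for any van der Waals gas)")
```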

44.2. The Law of Corresponding States

We have found that z_c is predicted by the equations of state to be independent of the particular gas. This is actually not too far from the truth experimentally.

One can define unitless "reduced" variables T_r = T/T_c, P_r = P/P_c, and V_r = V/V_c. Then z_r = P_r V_r/(RT_r).

z_r is a "universal" function; it is nearly the same for all gases.

∗ ∗ See Fig. 1.18 Laidler&Meiser ∗ ∗

44.3. Phase Equilibrium

Consider a homogeneous substance consisting of two phases α and β at constant T and V.

Suppose some amount of material, dn, goes from α → β:

• (dA_α)_T = −P dV_α − μ_α dn
• (dA_β)_T = −P dV_β + μ_β dn
• (dA)_{T,V} = −P(dV_α + dV_β) + (μ_β − μ_α) dn, where the first term is zero since V is constant.

For a spontaneous process A decreases (dA < 0).

At equilibrium dA = 0. This implies μ_β = μ_α is the condition for equilibrium.

When α, β denote liquid (or solid) and vapor phases, then for a given T, the pressure of the system when μ_β = μ_α is called the vapor pressure of the material at temperature T.

For phase changes at constant T and P, (dG)_{T,P} = (μ_β − μ_α) dn. So again μ_β = μ_α is the condition for equilibrium.

44.3.1. The chemical potential and T and P

How does μ vary with T and P ?

Generally, for homogeneous substances,

dG = −S dT + V dP + μ dn.   (44.8)

Now,

S = −(∂G/∂T)_{P,n},   (44.9)

so

(∂S/∂n)_{P,T} = −(∂/∂n)(∂G/∂T) = −(∂/∂T)(∂G/∂n) = −(∂μ/∂T)_P.   (44.10)

But S = nS_m(T, P), so

(∂μ/∂T)_P = −S_m.   (44.11)

Similarly,

(∂μ/∂P)_T = V_m.   (44.12)

Now the total differential of μ is

dμ(T, P) = (∂μ/∂T)_P dT + (∂μ/∂P)_T dP = −S_m dT + V_m dP.   (44.13)

44.3.2. The Clapeyron Equation

At equilibrium μ_β = μ_α, so

−S_mα dT + V_mα dP = −S_mβ dT + V_mβ dP.   (44.14)

Now

dP/dT = (S_mα − S_mβ)/(V_mα − V_mβ) = Δ_φS_m/Δ_φV_m = Δ_φH_m/(TΔ_φV_m)   (using ΔS = ΔH/T).   (44.15)

This is the Clapeyron equation:

dP/dT = Δ_φH_m/(TΔ_φV_m).   (44.16)

44.3.3. Vapor Equilibrium and the Clausius-Clapeyron Equation

The above Clapeyron equation applies to any phase transition; consider the liquid-
vapor phase transition.

Now

Δ_vV = V_m,vap − V_m,liq ≃ V_m,vap.   (44.17)

Assuming the vapor phase obeys the ideal gas equation of state,

Δ_vV = RT/P.   (44.18)

Substituting this into the Clapeyron equation gives

dP/dT = Δ_vH_m/(T·RT/P) = Δ_vH_m P/(RT^2).   (44.19)

Collecting the T's on one side of the equation and the P's on the other, we get

dP/P = (Δ_vH_m/R)(dT/T^2).   (44.20)

Now we identify dP/P = d(ln P) and dT/T^2 = −d(1/T), so this becomes

d(ln P) = −(Δ_vH_m/R) d(1/T).   (44.21)

Rearranging again leads to

d(ln P)/d(1/T) = −Δ_vH_m/R.   (44.22)

This is the Clausius-Clapeyron equation.
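A minimal sketch of the integrated Clausius-Clapeyron equation, ln(P_2/P_1) = −(Δ_vH_m/R)(1/T_2 − 1/T_1), assuming Δ_vH_m is constant. Water is used with an approximate Δ_vH_m of 40.7 kJ/mol and the normal boiling point as the known reference state.

```python
import math

R = 8.314
dHvap = 40.7e3              # J/mol, approximate for water
T1, P1 = 373.15, 101325.0   # known point: normal boiling point (K, Pa)

for T2 in (350.0, 360.0, 380.0):
    P2 = P1 * math.exp(-(dHvap / R) * (1.0/T2 - 1.0/T1))
    print(f"T = {T2:5.1f} K   estimated vapor pressure ~ {P2/1000:6.1f} kPa")
```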

44.4. Equilibria of condensed phases

Examples

• Solid—liquid

— ice—water, most other common liquids

• Solid—solid

— rhombic sulfur—monoclinic sulfur


— grey tin—white tin
— graphite—diamond

For example a diamond at STP is metastable with respect to graphite.

“A diamond is not forever!”

At equilibrium μ_α = μ_β; this implies (for incompressible liquids and solids)

μ_α° + V_mα(P − P°) = μ_β° + V_mβ(P − P°).   (44.23)

This can be rearranged so that the terms independent of pressure (the standard chemical potentials) are on one side and the terms that depend on pressure are on the other side:

μ_α° − μ_β° = (V_mβ − V_mα)(P − P°).   (44.24)

Thus for any given T only one P allows for equilibrium.

Recall the Clapeyron equation

dP/dT = Δ_fH_m/(TΔ_fV_m) = (H_mβ − H_mα)/[T(V_mβ − V_mα)].   (44.25)

We make the good approximation that Δ_fH_m is independent of T and solve the Clapeyron equation:

dT/T = (Δ_fV_m/Δ_fH_m) dP ⇒ ln(T_f/T_f°) = Δ_fV_m(P − P°)/Δ_fH_m,   (44.26)

where T_f° is the freezing temperature at standard pressure (1 bar).

44.5. Triple Point and Phase Diagrams

Definitions

• Phase Diagram: A graph of P vs. T for a system which shows the lines
of equal chemical potential

• Critical Point: The terminal point of the liquid-vapor line. At temper-


atures above the critical point there is no distinction between vapor and
liquid.

• Triple Point: The point where all three phases coexist in equilibrium:

μsolid = μliq = μvap (44.27)

45. Transport Properties of Fluids

Transport properties of matter deal with the flow (or flux) of some property along
a gradient of some other property.

Flux: movement of something through a unit area.

We now consider three transport properties of fluids:

1. Diffusion: The flux of material down a concentration gradient

2. Viscosity: The flux of momentum down a velocity gradient

3. Thermal Conductivity: The flux of energy down a temperature gradient

∗ ∗ See Transport Phenomena handout ∗ ∗

45.1. Diffusion

At equilibrium the concentration in a bulk solution will be uniform.

So if there exists a concentration gradient there will be a net flux, J, of material from high concentration to low concentration so as to establish an equilibrium:

J = (1/A) dn/dt.   (45.1)

The flux of material through a plane depends on the concentration gradient,

J = −D dC/dx ⇒ (1/A) dn/dt = −D dC/dx,

where D is the diffusion constant:

(1/A) dn/dt = −D dC/dx.   (45.2)

This is Fick's first law of diffusion (in one dimension).

The change in concentration in a lamina between x and x + dx with time is given by the flux in minus the flux out of the lamina:

∂C/∂t = [J(x) − J(x + dx)]/dx = −∂J/∂x.   (45.3)

Using Fick's first law for J,

∂C/∂t = ∂/∂x (D ∂C/∂x).   (45.4)

If D is truly constant we get Fick's second law of diffusion:

∂C/∂t = D ∂^2C/∂x^2.   (45.5)

The solution of this partial differential equation depends on the boundary condi-
tions. Numerous methods of solution exist for this equation but they are beyond
the scope of the course.

The solution for two special boundary conditions are of interest and will simply
be presented here without derivation

1. Point source solution:

C(x, t) = (C_0/(2√(πDt))) e^{−x^2/(4Dt)}.   (45.6)

2. Step function solution:

C(x, t) = C_0 [1/2 − (1/√π) ∫_0^{x/√(4Dt)} e^{−y^2} dy]
        = (C_0/2)[1 − erf(x/√(4Dt))]
        = (C_0/2) erfc(x/√(4Dt)),   (45.7)

where erf and erfc are tabulated functions respectively called the error function and the complementary error function.
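A minimal sketch of the step-function solution (45.7), C(x, t) = (C_0/2) erfc(x/√(4Dt)), for an initially sharp boundary. The diffusion constant is a typical small-molecule-in-water value (~1e-9 m^2/s), chosen only for illustration.

```python
import math

C0 = 1.0          # M
D  = 1.0e-9       # m^2/s (assumed)

def C(x, t):
    """Concentration profile for the step-function initial condition."""
    return 0.5 * C0 * math.erfc(x / math.sqrt(4.0 * D * t))

for t in (60.0, 3600.0, 86400.0):          # 1 min, 1 h, 1 day
    prof = [C(x * 1e-3, t) for x in (0.0, 0.5, 1.0, 2.0)]   # x in mm
    print(f"t = {t:8.0f} s   C at 0, 0.5, 1, 2 mm: " +
          "  ".join(f"{c:.3f}" for c in prof))
```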

45.2. Viscosity

Viscosity, η, is the resistance to differential fluid flow, i.e., the tendency of a liquid to flow at the same velocity throughout.

The frictional (viscous) force is F = ηA dv/dx. (The units of η are mass/(length·time); 1 poise = 1 g/(cm·s).)

Poiseuille's Formula

• Applies to laminar (nonturbulent) flow.

• For a liquid flowing through a tube (radius r, length l), the volume of flow ΔV in time Δt is

ΔV/Δt = −πr^4 ΔP/(8ηl),   (45.8)

where ΔP is the driving pressure, i.e., the difference in pressure on either side of the tube.

• For a gas,

ΔV/Δt = πr^4 (P_i^2 − P_f^2)/(16ηl P_0),   (45.9)

where P_i is the inlet pressure, P_f is the outlet pressure and P_0 is the pressure at which the volume is read.

Stokes' law: spheres falling through fluids

• The frictional force (exerted upwards) is proportional to velocity: F_f = −fv. Stokes showed f = 6πηr.

• Gravitational force (exerted downwards): F_g = (4πr^3/3)(ρ − ρ_0)g, where g is the gravitational acceleration (9.8 m/s^2).

• Terminal velocity is reached when F_f + F_g = 0, giving

−f v_term + (4πr^3/3)(ρ − ρ_0)g = 0,   (45.10)
v_term = 4πr^3(ρ − ρ_0)g/(3f).

Using f = 6πηr,

v_term = 4πr^3(ρ − ρ_0)g/(3·6πηr) = 2r^2(ρ − ρ_0)g/(9η).   (45.11)

• Relation to the diffusion constant (with f = 6πηr):

D = kT/f = kT/(6πηr).   (45.12)
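A minimal sketch of Eqs. (45.11) and (45.12): the terminal (sedimentation) velocity and the Stokes-Einstein diffusion constant for a small sphere in water at 298 K. The radius, densities and viscosity are illustrative values.

```python
import math

kB  = 1.380649e-23    # J/K
T   = 298.0           # K
eta = 8.9e-4          # Pa s, water (approximate)
r   = 1.0e-7          # m, particle radius (assumed)
rho, rho0 = 1500.0, 1000.0   # kg/m^3, particle and fluid densities (assumed)
g = 9.8               # m/s^2

v_term = 2 * r**2 * (rho - rho0) * g / (9 * eta)   # Eq. (45.11)
D = kB * T / (6 * math.pi * eta * r)               # Eq. (45.12)
print(f"terminal velocity ~ {v_term:.2e} m/s")
print(f"diffusion constant ~ {D:.2e} m^2/s")
```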

45.3. Thermal conductivity

(This section closely follows parts of chapter 8 in Transport Phenomena by R.B. Bird, W.E. Stewart and E.N. Lightfoot, Wiley, New York, 1960.)

The thermal conductivity, κ, of a material is a measure of the tendency of energy in the form of heat to flow through the material.

Consider a slab of solid material of area A between two large parallel plates a
distance D apart. The plates are held at constant but different temperatures T1
and T2 (T1 > T2 ) for a sufficiently long time that a steady state exists.

Under such conditions, a linear steady state temperature distribution across the material is established, and a constant rate of heat flow dq/dt is needed to maintain the temperature difference ΔT = (T_1 − T_2):

(1/A) dq/dt = −κ ΔT/D.   (45.13)

If we take the limit where D becomes infinitesimally small (D → dx) we obtain a differential form of this equation:

(1/A) dq/dt = Q_f = −κ dT/dx,   (45.14)

where Q_f is the heat flux. This is called Fourier's law of heat conduction (one-dimensional version).

Thermal conductivities are positive quantities, so Fourier's law says that heat flows down a temperature gradient, i.e., from hot to cold.

45.3.1. Thermal Conductivity of Gases and Liquids

∗ ∗ See Reduced thermal conductivity handout ∗ ∗


From this handout we see that typically the thermal conductivity of gases at low densities increases with increasing temperature, whereas the thermal conductivity of most liquids decreases with increasing temperature.

45.3.2. Thermal Conductivity of Solids

For the most part, the thermal conductivity of solids has to be determined experimentally because many factors contributing to the thermal conductivity are difficult to predict.

In general metals are better heat conductors than nonmetals and crystals are
better heat conductors than amorphous materials.

Dry porous materials are poor heat conductors

Rule of Thumb: Thermal conductivity and electrical conductivity go hand in


hand.

The Wiedemann–Franz–Lorenz equation relates the thermal conductivity to the electrical conductivity, κ_el, for pure metals:

κ/(κ_el T) = L = const.,   (45.15)

where L is the Lorenz number (typically 22 to 29 × 10^−9 V^2/K^2).

The Lorenz number is taken as constant because it is only a very weak function
of temperature with a change of 10 to 20% per 1000 degrees being typical.

The Wiedemann–Franz–Lorenz equation breaks down at low temperature because metals become superconductive. There is no analog to superconductivity for thermal conductivity.

46. Solutions

Solutions are mixtures of two or more pure substances. So, in addition to the
parameters needed to characterize a pure substance, one also needs to keep track
of the amount of individual species in solution

46.1. Measures of Composition

There are several measures of composition of solutions

• mole ratio: r = n_1/n_2

• mole fraction: X_2 = n_2/(n_1 + n_2), X_1 = 1 − X_2

• molality: m = 1000X_2/(M_1X_1), where M_1 is the molecular weight of species 1

• molarity: c_2 = n_2/(L solution)

46.2. Partial Molar Quantities

Thermodynamic properties, in general, change upon mixing:

Δ_mix = properties of soln − Σ properties of pure.   (46.1)

For example,

Δ_mixV = V_soln − V_solute − V_solvent.   (46.2)

Consider a thermodynamic quantity, say, volume.

In general, it is a function of T, P, n_1 and n_2: V(T, P, n_1, n_2). So, the total derivative is

dV = (∂V/∂T)_{P,n1,n2} dT + (∂V/∂P)_{T,n1,n2} dP + (∂V/∂n_1)_{T,P,n2} dn_1 + (∂V/∂n_2)_{T,P,n1} dn_2,   (46.3)

where (∂V/∂n_i)_{T,P,nj} ≡ V̄_i, the partial molar volume.

Similarly,

dG = (∂G/∂T)_{P,n1,n2} dT + (∂G/∂P)_{T,n1,n2} dP + (∂G/∂n_1)_{T,P,n2} dn_1 + (∂G/∂n_2)_{T,P,n1} dn_2,   (46.4)

where (∂G/∂n_i)_{T,P,nj} ≡ μ_i.

So now, for the more general case of mixtures, the chemical potential of a species is the partial molar free energy for that species, rather than simply the molar free energy as it was earlier.

46.2.1. Notation

The study of solutions brings with it a large number of symbols which we collect
here for future reference.
Material                         Volume   Enthalpy   Entropy   Free energy
Pure liquid i                    V_i•     H_i•       S_i•      G_i•
Pure liquid i per mole           V_mi•    H_mi•      S_mi•     μ_i•
Whole solution                   V        H          S         G
Solution/(total moles)           V_m      H_m        S_m       G_m
Partial molar of i in solution   V̄_i     H̄_i       S̄_i      μ_i
Apparent molar (of solute)       φV       φH
Reference state                  V_i°     H_i°       S_i°      μ_i°

46.2.2. Partial Molar Volumes

Consider the partial molar volume.

For constant T and P,

dV = V̄_1 dn_1 + V̄_2 dn_2.   (46.5)

Now, V̄_i depends on concentration, so change each amount of substance in proportion to the amount of substance present,

dn_1 = n_1 dλ,   dn_2 = n_2 dλ.   (46.6)

So,

dV = (V̄_1 n_1 + V̄_2 n_2) dλ ⇒ V = V̄_1 n_1 + V̄_2 n_2.   (46.7)

That is, the total volume of the solution is equal to the sum of the partial molar volumes, each weighted by its respective number of moles.

The total volume, however, is not necessarily the mole-weighted sum of the volumes of each component in its pure (unmixed) state. More specifically,

Δ_mixV = V − (V_m1• n_1 + V_m2• n_2)
       = (V̄_1 n_1 + V̄_2 n_2) − (V_m1• n_1 + V_m2• n_2)
       = (V̄_1 − V_m1•) n_1 + (V̄_2 − V_m2•) n_2.   (46.8)

4mix V can be positive, negative or zero.


For example,

1. one unit of baseballs are mixed with one unit of basketballs. 4mix V < 0.

2. one unit of baseballs are mixed with one unit of books. 4mix V > 0.

46.3. Reference states for liquids

For liquids there are two more convenient ideal states

1. neat (pure) solvent limit

1. all neighboring molecules are same as the given molecule


2. the ideal state for Raoult’s law

2. infinite dilution limit

1. all neighboring molecules are different than the given molecule


2. the ideal state for Henry’s law

[Figures: the Raoult's law limit and the Henry's law limit]

46.3.1. Activity (a brief review)

Recall that activity gives a measure of the deviation of the real state from some
reference state

Also recall that the mathematical definition of the activity a_i of some species i is implicitly stated as

lim_{ζ→ζ°} a_i/g(ζ) = 1,   (46.9)

where g(ζ) is any reference function (e.g., pressure, mole fraction, concentration, etc.), and ζ° is the value of ζ at the reference state.

This implicit definition is awkward, so for convenience one defines the activity coefficient as the argument of the above limit,

γ_i ≡ a_i/g(ζ),   (46.10)

which we can rearrange as


ai = γ i g(ζ). (46.11)

The definition of activity implies that γ i = 1 at g(ζ ª ) (the reference state)

That is γ i → 1 as the real system approaches the reference state.

Connecting with the chemical potential, we saw last semester that the deviation of the chemical potential at the state of interest versus at the reference state is determined by the activity at the current state (the activity at the reference state is unity by definition):

μ_i − μ_i° = RT ln a_i.   (46.12)

46.3.2. Raoult’s Law

In discussing both Raoult’s law and Henry’s law, we are describing the behavior
of a liquid solution by measuring the vapor (partial) pressures of the components

For simplicity we consider here only a two component solution.

dG = μ_1 dn_1 + μ_2 dn_2.   (46.13)

Take the differential change along a line of constant concentration, so

dG = (μ_1 n_1 + μ_2 n_2) dλ,   (46.14)

then

G = μ_1 n_1 + μ_2 n_2.   (46.15)

Recall that

Δ_mixG = G(soln) − G(pure components).   (46.16)

Hence,

Δ_mixG = μ_1 n_1 + μ_2 n_2 − μ_1• n_1 − μ_2• n_2 = (μ_1 − μ_1•) n_1 + (μ_2 − μ_2•) n_2.   (46.17)

Now,

μ_i − μ_i• = RT ln(a_i/a_i•) ≃ RT ln(P_i/P_i•)   (at low P),   (46.18)

where P_i is the vapor pressure of the i-th component above the solution.

Thus

Δ_mixG = RT[n_1 ln(a_1/a_1•) + n_2 ln(a_2/a_2•)]   (46.19)

or, at low P,

Δ_mixG = RT[n_1 ln(P_1/P_1•) + n_2 ln(P_2/P_2•)].   (46.20)

46.3.3. Ideal Solutions (RL)

Raoult’s Law:

Pi = Xi Pi• (46.21)
That is, the vapor partial pressure of a component of a mixture is equal to the
mole fraction of the component times the vapor pressure that the component
would have if it were pure.

The change in free energy upon mixing for solutions ideally obeying Raoult's law is

Δ_mixG^{id(RL)} = RT[n_1 ln(X_1P_1•/P_1•) + n_2 ln(X_2P_2•/P_2•)]   (46.22)

Δ_mixG^{id(RL)} = RT(n_1 ln X_1 + n_2 ln X_2).   (46.23)

Again, this is for an ideal solution in the Raoult's law sense.

From

S = −(∂G/∂T)_P   and   H = (∂(G/T)/∂(1/T))_P,   (46.24)

the entropy of mixing for an ideal Raoult solution is

Δ_mixS^{id(RL)} = −R(n_1 ln X_1 + n_2 ln X_2)   (46.25)

and the enthalpy of mixing is

Δ_mixH^{id(RL)} = 0   (46.26)

(since G/T is independent of 1/T ).

The Reference State (RL)

Let us apply the definition of activity for the Raoult’s law reference state.

The reference function is g(ζ) = ζ = X_i, and the reference state is X_i = 1.

So,

lim_{X_i→1} a_i^{(RL)}/X_i = 1   (46.27)

implies

a_i^{(RL)} = γ_i^{(RL)} X_i,   (46.28)

and γ_i^{(RL)} → 1 as X_i → 1.

Deviations from Raoult’s Law

Raoult’s law is a purely statistical law. It does not require any kind of interaction
among the constituent particle making up the solution.

Since, in reality, there are specific interactions between particles, real solutions
generally deviate from Raoult’s law.

The physical interpretation of deviation from Raoult’s law is

• positive deviation: the molecules prefer to be around themselves rather than around other types of molecules.

• negative deviation: the molecules prefer to be around other types of molecules rather than themselves.

• no deviation: the molecules have no preference.

It is very important to note that this deviation from Raoult’s law is a property of
the solution and NOT any given component.

For example, for a given component, mixing with one substance may lead to
a positive deviation but mixing with another substance may lead to a negative
deviation.

[Figures: positive deviation from Raoult's law; negative deviation from Raoult's law]

46.3.4. Henry’s Law

Henry’s Law:
Pi = kXi Xi , (46.29)

where kXi is the Henry’s law constant,


µ ¶
Pi
kXi = lim (46.30)
Xi →0 Xi

Henry’s law applies to the solute not to the solvent and becomes more correct for
real solution as the concentration of solute goes to zero (Xi → 0), i.e., at infinite
dilution.

The Reference State (HL)

Referring to the definition of activity again, we see that the reference function is g(ζ) = ζ = X_i and the reference state is now X_i = 0.

So,

lim_{X_i→0} a_i^{(HL)}/X_i = 1   (46.31)

implies

a_i^{(HL)} = γ_i^{(HL)} X_i,   (46.32)

and γ_i^{(HL)} → 1 as X_i → 0.

If, instead of mole fraction, molality or molarity is used, then

a_i^{(HL)} = γ_{mi}^{(HL)} m_i   (46.33)

and

a_i^{(HL)} = γ_{Mi}^{(HL)} M_i,   (46.34)

respectively.

Comparison of Raoult’s Law and Henry’s Law

Both Raoult’s law and Henry’s law become better approximations for real solu-
tions as the solution becomes pure. But, they apply to opposite species in the
solution. Raoult’s law applies to the dominant species, X1 → 1, whereas Henry’s
law applies to the subdominant species X2 → 0. So, in summary

• Raoult’s law: γ 1 → 1 as X1 → 1

• Henry’s law: γ 2 → 1 as X2 → 0

46.4. Colligative Properties

Colligative properties: Properties of dilute solutions that are independent of


the chemical nature of the solute
Examples

• Freezing point depression

• Boiling point elevation

• Vapor pressure lowering

• Osmotic pressure

We will consider the examples of freezing point depression and osmotic pressure

46.4.1. Freezing Point Depression

At T_f (freezing point), μ_1(solid) ≡ μ_1^s = μ_1(soln).

Using the Raoult's law reference state (since we are interested in the behavior of the dominant species), μ_1(soln) = μ_1• + RT ln a_1:

μ_1^s = μ_1• + RT ln a_1.   (46.35)

Rearranging this and taking the derivative with respect to T yields

ln a_1 = (μ_1^s − μ_1•)/(RT) ⇒ ∂ln a_1/∂T = (1/R) ∂/∂T[(μ_1^s − μ_1•)/T].   (46.36)

Now, using the Gibbs-Helmholtz relation ∂(μ/T)/∂T = −H/T^2 and integrating, we get

d ln a_1 = −(1/RT^2)(H_1^s − H_1•) dT = (Δ_fH/RT^2) dT,   (46.37)

ln a_1 = ∫_{T_f•}^{T_f} (Δ_fH/RT^2) dT.

For small changes in the freezing point we may approximate T by T_f• in the integrand. So,

ln a_1 ≃ ∫_{T_f•}^{T_f} (Δ_fH/RT_f•^2) dT = −(Δ_fH/RT_f•^2) Θ,   (46.38)

where Θ ≡ T_f• − T_f. The freezing point depression is

Θ = −RT_f•^2 ln a_1 / Δ_fH.
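A minimal sketch of the freezing point depression formula derived above, Θ = −RT_f•^2 ln a_1/Δ_fH, with the ideal-dilute approximation a_1 = X_1. The water parameters (T_f• = 273.15 K, Δ_fH ≈ 6010 J/mol, M_1 = 18.02 g/mol) are approximate literature values; for small molalities this reproduces the familiar K_f ≈ 1.86 K·kg/mol.

```python
import math

R, Tf, dHfus, M1 = 8.314, 273.15, 6010.0, 18.02e-3   # SI units (water, approximate)

def theta(molality):
    """Freezing point depression (K) for a dilute ideal solute of given molality."""
    X2 = molality * M1 / (1.0 + molality * M1)        # mole fraction of solute
    ln_a1 = math.log(1.0 - X2)                        # ideal solution: a1 = X1
    return -R * Tf**2 * ln_a1 / dHfus

for m in (0.1, 0.5, 1.0):
    print(f"m = {m:.1f} mol/kg   Θ ≈ {theta(m):.3f} K")
```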

46.4.2. Osmotic Pressure

We consider the osmotic pressure at a constant temperature, T. (so, dG = V dP ).

319
In the above figure μ1 (left) = μ1 (right), hence

μ•1 = μ•1 + RT ln a1 + V̄1 Π, (46.39)

where V̄1 is the partial molar volume of the solvent in solution (difficult to measure)
and Π is the hydrostatic (osmotic) pressure.

From the above equation


V̄1 Π
ln a1 = (46.40)
RT

Now we make the approximations V̄1 = Vm1 , a1 = X1 = 1 − X2 :

Vm1 Π
ln(1 − X2 ) = (46.41)
RT
For dilute solutions X2 is small so ln(1 − X2 ) may be expanded as

X22 X23
ln(1 − X2 ) = −X2 + − − · · · ' −X2 , (46.42)
2 3
n2 n2
but X2 = n1 +n2
' n1
for dilute solutions. Thus

1 V•
z }| {
• •
n2 Vm1 Π n1 Vm1 Π
' =⇒ n2 ' , (46.43)
n1 RT RT

320
or,
n2
Π= RT = cRT, (46.44)
V1•
|{z}
'c

where c is the concentration of the solute.

Note the similarity of this equation with the ideal gas equation: P = cRT. Thus
the solute in a very dilute solution behaves as if it were an ideal gas.

47. Entropy Production and
Irreversible Thermodynamics

We have seen that thermodynamics tells us if a process will occur and kinetics
tells us how fast a process will occur.

These two areas of physical chemistry appear to be rather disjoint.

We now consider the thermodynamics of nonequilibrium states and investigate how (and how fast) these states move towards equilibrium.

This allows us to make a stronger connection between thermodynamics and ki-


netics.

The main concept of this approach is the idea of entropy production and, ulti-
mately, entropy production per unit time–how fast we are producing entropy.

47.1. Fundamentals

We know the difference between reversible and irreversible processes from before.

However, we will state their respective definitions here in a manner best suited
for this chapter.

Reversible process: dynamical equations are invariant under time inversion
(t → −t).

• e.g., the one-dimensional wave equation,

(1/c^2) ∂^2u/∂t^2 = ∂^2u/∂x^2  →(t→−t)→  (1/c^2) ∂^2u/∂(−t)^2 = ∂^2u/∂x^2  ⇒  (1/c^2) ∂^2u/∂t^2 = ∂^2u/∂x^2,   (47.1)

is invariant under time reversal.

Irreversible process: dynamical equations are not invariant under time inversion (t → −t).

• e.g., the one-dimensional heat equation,

(1/κ) ∂T/∂t = ∂^2T/∂x^2  →(t→−t)→  (1/κ) ∂T/∂(−t) = ∂^2T/∂x^2  ⇒  −(1/κ) ∂T/∂t = ∂^2T/∂x^2,   (47.2)

is not invariant under time reversal.

We will be concerned with the change in entropy, dS, which can be split into two
components dS = de S + di S.

Definitions

• de S is the change in entropy due to interactions with the exterior environ-


ment.

• di S is the change in entropy due to internal changes of the system

The quantity di S is called the entropy production.

Splitting up dS into these two parts permits an easy discussion of both open and
isolated systems–the difference between the two appearing only in de S.

General criteria for irreversibility:

• di S = 0 (reversible change)

• di S > 0 (irreversible change)

For isolated systems we have d_iS = dS, and the principle of Clausius, d_iS = dS ≥ 0, holds.

47.2. The Second Law

As you might expect, the second law underlies all the concepts of this chapter.

We need a “local” formulation of the second law:

• Absorption of entropy in one part of the system, compensated by a sufficient


production in another part is prohibited

— i.e., in every macroscopic region of the system the entropy production


due to irreversible processes is positive.

This is simply another in our long list of alternative statements of the second law.

[Figure: an isolated system composed of two subsystems, I and II]

Considering the above figure of an isolated system, we write the principle of Clausius as

dS = dS^I + dS^II ≥ 0.   (47.3)

The local formulation statement implies

d_iS^I ≥ 0 and d_iS^II ≥ 0,   (47.4)

and the possibility of, for example, d_iS^I < 0 and d_iS^II > 0 such that d_i(S^I + S^II) > 0 is excluded.

47.3. Examples

The idea of entropy production can be applied to any of the processes we have
talked about; mixing, phase changes, heat flow, chemical reactions, etc. As exam-
ple we now consider the last two of these: heat flow and chemical reactions.

47.3.1. Entropy Production due to Heat Flow

Recall from the lecture on transport phenomena that the heat flux Qf is given by

Q_f = −κ ΔT/D.   (47.5)

We are now interested in exposing the time dependence, so, using Q_f = q/(AΔt), we get

q/Δt = −κAΔT/D;   (47.6)

in differential form this is

dq/dt = −κA dT/dx.   (47.7)
Example: Find the entropy production in a system consisting of two identical
connected blocks of metal (I and II), one of which is held at temperature T1 and
the other at T2 (take T1 > T̄ > T2 ) where T̄ is the temperature at the interface.

Considering the whole system,

dS = dq_I/T_1 + dq_II/T_2 = (d_eq_I/T_1 + d_eq_II/T_2) + (d_iq_I/T_1 + d_iq_II/T_2),   (47.8)

where the first parenthesis is d_eS and the second is d_iS. The quantity d_eq_j is the amount of heat supplied by the environment to hold block j at its fixed temperature.

Furthermore, the heat going out of I through the connecting wall is equal to the heat coming into II through the connecting wall:

d_iq_I = −d_iq_II.   (47.9)

Using this we see that the entropy production is

d_iS = d_iq_I (1/T_1 − 1/T_2),   (47.10)

which we see is positive because d_iq_I < 0 when T_1 > T_2.

We have still not made a connection to kinetics.

To do so we must consider the entropy production per unit time, d_iS/dt.

For this example,

d_iS/dt = (d_iq_I/dt)(1/T_1 − 1/T_2).

From the transport chapter we know

d_iq_I/dt = −AκΔT/D.   (47.11)

So,

d_iS/dt = (−AκΔT/D)(1/T_1 − 1/T_2).   (47.12)

To determine T̄ we use the fact that the heat flow out of I is equal to the heat flow into II:

d_iq_I/dt = −d_iq_II/dt.   (47.13)

Using the above expression for heat flow gives us T̄, since

−(κA/D)(T_1 − T̄) = −(κA/D)(T̄ − T_2) ⇒ T̄ = (T_1 + T_2)/2;   (47.14)

a result we might have guessed.

47.3.2. Entropy Production due to Chemical Reactions

Definitions:

1. Chemical affinity: a ≡ −(Δ_rxnG)_{T,P} = −Σ_i ν_iμ_i, and likewise a ≡ −(Δ_rxnA)_{T,V} = −Σ_i ν_iμ_i.

2. Extent of reaction: ξ is defined by dξ = dn_i/ν_i, where n_i is the number of moles of the i-th component and ν_i the stoichiometric factor of the i-th component.

• e.g., for the reaction N_2 + 3H_2 → 2NH_3,

dξ = dn_{N2}/(−1) = dn_{H2}/(−3) = dn_{NH3}/(2)   (47.15)

and

a = −(2μ_{NH3} − μ_{N2} − 3μ_{H2}) = μ_{N2} + 3μ_{H2} − 2μ_{NH3}.   (47.16)

The connection to kinetics: reaction rate v = dξ/dt.

The connection to thermodynamics:

(dA)_{T,V} = Σ_i μ_i dn_i = (Σ_i ν_iμ_i)(dn_i/ν_i) = −a dξ,   (47.17)

but

(dA)_{T,V} = (dU)_{T,V} − T dS ⇒ dS = dq/T − (dA)_{T,V}/T,   (47.18)

so

dS = dq/T + a dξ/T,   (47.19)

where the first term is d_eS and the second is d_iS.

The entropy production per unit time for a chemical reaction is a function of both the chemical affinity and the reaction rate:

d_iS/dt = (a/T)(dξ/dt) = (a/T) v ≥ 0.   (47.20)

We see that for a spontaneous process the entropy production per unit time is positive. This is because a = −(Δ_rxnA)_{T,V} is positive, as is v.

Simultaneous Reactions

For N simultaneous chemical reactions, the entropy production per unit time generalizes to

d_iS/dt = (1/T) Σ_{j=1}^{N} a_j v_j ≥ 0.   (47.21)

The second law requires that the total entropy production for simultaneous reactions is positive. It says nothing about the entropy production of the individual component reactions other than that the sum of all the component entropy productions must be positive.

For example in a system of two coupled reactions we could have a1 v1 < 0, a2 v2 > 0
such that a1 v1 + a2 v2 > 0.

47.4. Thermodynamic Coupling

Processes may be what is called thermodynamically coupled such that a process


that normally is not thermodynamically favored can be coupled to another process
that is thermodynamically favored so as to allow for the unfavorable process to
proceed spontaneously.

We just saw an example of such a situation with the discussion of simultaneous


reactions.

Thermodynamic coupling need not be confined to coupling between the same


types of processes.

That is, diffusion is the flux of matter down a concentration gradient. The so-
called Soret effect is flux of matter down a temperature gradient. Conversely, the
so-called Dufour effect is heat flux down a concentration gradient

The following table lists a number of thermodynamically coupled phenomena

Gradient \ Flux   q (heat)                m (bulk flow)             material           Q (charge)
T                 Thermoconductivity      Thermomechanical effect   Soret effect       Seebeck effect
P                 Mechanocaloric effect   Hydrodynamic flow         Reverse osmosis    Potential of flow
C                 Dufour effect           Osmosis                   Diffusion          Nernst potential
ε                 Peltier effect          Electrophoresis           Migration          Electroconductivity

47.5. Echo Phenomena

Consider an ensemble that is perturbed away from thermal equilibrium by some


means such as by applying a field.

If the perturbation is released the system will begin to evolve in time as it heads
back towards the thermalized equilibrium state.

The ensemble evolves in two ways

• Reversibly

— A second perturbation can “undo” or reverse the evolution.

• Irreversibly

— The evolution towards equilibrium cannot be undone–it is irreversible

Example: The spin echo in pulsed NMR

• A radio frequency pulse prepares an ensemble of nuclear spins such that


they are all spinning coherently.

• A strong signal is seen because all the spinning nuclei cooperate.

• Each nucleus is in a slightly different environment so each spin frequency is


slightly different.

• The different environment (spin frequencies) cause the ensemble spinning


nuclei to dephase

• Dephasing causes a decrease in the observed signal because now not all nuclei
are cooperating.

• Now a radio pulse with the opposite phase is applied to make the nuclei spin
in the opposite direction

• This undoes or reverses the dephasing process and the signal regains strength

• The full signal is not recovered however since all the while random ther-
malization is taking place to irreversibly destroy the coherence among the
nuclei.

• This cannot be undone with the second radio pulse.

Key Equations for Exam 4

Listed here are some of the key equations for Exam 4. This section should not
substitute for your studying of the rest of this material.

The equations listed here are out of context and it would help you very little to
memorize this section without understanding the context of these equations.

The equations are collected here simply for handy reference for you while working
the problem sets.

Equations

• The Clapeyron equation is

dP/dT = Δ_φH_m/(TΔ_φV_m).   (47.22)

• The Clausius-Clapeyron equation is

d(ln P)/d(1/T) = −Δ_φH_m/R.   (47.23)

• Fick's first law of diffusion is

(1/A) dn/dt = −D dC/dx.   (47.24)

• Fick's second law of diffusion:

∂C/∂t = D ∂^2C/∂x^2.   (47.25)

• Relation between the viscosity and the diffusion constant (with f = 6πηr):

D = kT/f = kT/(6πηr).   (47.26)

• Fourier's law of heat conduction is

(1/A) dq/dt = Q_f = −κ dT/dx.   (47.27)
• Mixing:

Δ_mix = properties of soln − Σ properties of pure.   (47.28)

• Chemical potential:

μ = μ° + RT ln a.   (47.29)

• Raoult's law:

P_i = X_i P_i•.   (47.30)

• Raoult's law reference:

a_i^{(RL)} = γ_i^{(RL)} X_i,   γ_i^{(RL)} → 1 as X_i → 1.   (47.31)

• Henry's law:

P_i = k_{Xi} X_i,   (47.32)

where k_{Xi} is the Henry's law constant,

k_{Xi} = lim_{X_i→0} (P_i/X_i).   (47.33)

• Henry's law reference:

a_i^{(HL)} = γ_i^{(HL)} X_i,   γ_i^{(HL)} → 1 as X_i → 0.   (47.34)

Index

absorption spectroscopy 241 Bohr radius 19


activity 146, 311 Boltzmann distribution 10, 96, 131
mathematical definition of 146 Boltzmann’s equation 90, 97, 124,
activity coefficient 146, 312 131
adiabatic expansion 280 bond order 77
and heat capacity 280 bonding orbital 71
adiabatic wall 120 Born model 170
angular momentum corrections to 175
addition of 202 enthalpy of solvation 174
classical 192 entropy of solvation 174
eigenfunctions for 199, 219 free energy of solvation 173, 178
jj coupling 202 partition coefficient 174
LS coupling 202 Born—Oppenheimer approximation
quantum numbers 199, 219 62, 99, 235, 240
spin 201 and the Franck—Condon princi-
angular momentum quantum num- ple 243
ber 52 bosons 56
antibonding orbital 71 Boyle temperature 272
Arrhenious activation energy 261 chain rule
Arrhenious equation 261, 291 for partial derivatives 107
temperature corrected 262 character table
atomic orbitals 49 for the C2v group 225
chemists picture 50 chemical affinity 328
physicists picture 50 chemical potential 144
aufbau principle 58 for a salt 161
average value theorem 29 relation to activity 148
Berthelot gas 13, 270 relation to Gibbs free energy
binominal coefficient 90 145
blue sky 81 relation to Helmhotz free en-
Bohr model 18 ergy 145

Clapeyron equation 298, 300, 333 nuclear-nuclear potential energy
Clausius-Clapeyron equation 299, operator for 61
333 Schrodinger equation for 62
coefficient of thermal expansion 274 Dieterici gas 270
coexistence curve 293 diffusion 301
colligative properties 318 diffusion constant 302
commutator 30, 189 eigenfunction 5
completeness 191 eigenvalue 5
complimentary variables 30 eigenvalue equation 190
compressibility factor electric dipole approximation 79, 231
at the critical point 295 electrolytes
compressibilty factor 270, 291 strong 161
configuration 90 electrophoretic effect 167
confluent hypergeometric functions elementary reactions 255
65 and stoichiometry 256
correspondence principle 41 molecularity 256
critical point 300 emission spectroscopy 241
cyclic rule 14, 108 enemble 89
cylindrical symmetry 69 ensemble average 103, 132
Debye—Huckel limiting law 164, 178 enthalpy 136
Debye—Huckel theory 163 entropy 105
Debye—Huckel—Guggenheim equation change for changes in temper-
164 ature 286
Debye’s law 129, 133 change for isothermal expansion
degeneracy 186 286
of the ensemble 98 change for mixing 287
diathermic wall 120 of real gases 288
diatomic molecules entropy production 322, 323
electron-electron potential en- due to chemical reactions 328
ergy operator for 61 due to heat flow 326
electronic kinetric energy oper- equation of state 116
ator for 61 for a Berthelot gas 118
electronic wavefunction for 62 for a Dieterici gas 118
Hamiltonian for 61 for a Redlich—Kwang gas 118
nuclear kinetic energy operator for a van der Waals gas 117
for 61 for an ideal gas 116
nuclear-electron potential energy for gases 269
operator for 61 equilibrium constant 135

336
equlibrium constant 153 classical 27
Euler’s identity 4 harmonic oscillator 38
expansion energy levels for 40, 44, 86
of gases 111 potential energy 39
reversible 114 Schrodinger equation for 39
extent of reaction 328 heat 109
Eyring’s equation 265, 291 sign convention 110
fermions 56 heat capacity 115, 133
Fick’s first law 302, 333 Heisenberg uncertainty principle 30
Fick’s second law 302, 334 and the harmonic oscillator 41
first law of thermodynamics 121, helium 55
133 electron-electron repulsion term
flipping coins 90 55
fluctuation 92 Hamiltonian 55
fluorescence 242 Helmholtz free energy 106
stokes shift 242 Henry’s law 311, 316, 334
Fourier’s law of heat conduction 306, Henry’s law constant 316, 334
334 Hermite polynominals 40
Franck—Condon integral 243 hot bands 66
Franck—Condon principle 243 Hund’s rule 205
free energy hydrogen atom
Gibbs 138 ioniztion energy of 19
Helmholtz 137 hydrogen molecule 74
fugacity 147 hydrogenic systems 46
fundamental transistions 66 energy levels for 49, 86
general equlibrium 151 Hamiltonian 47
generalized displacement 110 normalization constant 49, 85
generalized force 110 potential energy for 47
gerade 69 Schrodinger equation for 47
Gibb’s free energy 106 wavefunction (no spin) 49
Gibbs-Duhem equation 163 wavefunction (with spin) 52
good theory 16 ideal solution
group Raoult’s law 314
mathematical definition of 222 immiscible solutions 153
multiplication table 223 infrared spectroscopy 66
group theory 221 internal energy 103, 121
Hamiltonian operator 27 intramolecular vibrational relaxation
Hamitonian (IVR) 242

337
inversion symmetry 69 molecular hydrogen ion 67
operator 69 Hamiltonian for 67
ion mobility 166 molecular orbital diagram 76
and current 168 molecular orbitals 68
ion transfer 174 molecular rotations 235
IR spectroscopy 231 asymmetric tops 239
and the character table 232 centrifugal stretching 236
isothermal compressibility 274 linear tops 238
isothermal expansion 279 polyatomic molecules 237
Joule expansion 282 spherical tops 239
Joule-Thomson expansion 283 symmetric tops 238
kinetic theory of gases 250 vibrational state dependence of
Lagrange multipliers 95 236
Laguerre polynominals 49 molecular vibrations 228
laminar flow 304 molecule
law of corresponding states 296 Scrodinger equation for 78
law of rectilinear diameters 293 momentum operator 5
Legendra polynomials 200 Morse oscillator 64
linear combinations of atomic or- energy levels for 65, 86
bitals (LCAO) 72 Schrodinger equation for 65
Lorenz number 307 wavefunction for 65
many electron atom Morse potential 64, 86, 240
Hamlitonian for 59 force constant associated with
maximal work 113 9
Maxwell relations 140 Taylor series expansion of 8
Maxwell’s distribution of speeds 252, normal modes 229
290 operator
mean free path 253, 290 Hermitian 189
mean ionic activity 162 ladder 195
mean ionic activity coefficient 162 linear 189
method of initial velocities 259 symmetry 222
method of isolation 259 operator algebra 187
microstate 90 orientation quantum number 53
Mie scattering 84 orthogonality 191
mirror plane symmetry 70 overtone transitions 66
molar heat capacity 115 parameters
molecular collisions extensive 109
simple model for 252 intensive 109

338
particle in a box 31, 181 Poiseuille’s formula 304
energy levels 183 polarizability 79
energy levels for 34, 44, 218 postulate I (of quantum mechan-
features of the energy levels 35 ics) 22
normalization constant for 33 postulate II (of quantum mechan-
potenial energy 31 ics) 24
Schrodinger equation for 32 postulate III (of quantum mechan-
three dimensional 183 ics) 25
three dimensional energy levels pressure 104
185 principle of Clausius 125, 324
three dimensional wavefunction principle quantum number 52
185 probability amplitude 22
wavefunction for 183 probability distribution 22
wavefunctions for 34, 44, 218 PV work 111, 133
particle on a ring 194 Raman scattering 80
boundary conditions 194 Raman spectroscopy 66, 233
energy levels for 195, 218 and the character table 234
Hamitonian for 194 Raoult’s law 311, 312, 314, 334
wavefunctions for 195, 218 deviations from 315
partition coefficient 154 reference state 315
and drug delivery 155 rate law 255
for the Born model 174 rate laws 254
partition function determination of 258
canonical 96, 131 integrated 259
electronic 101 Rayleigh scattering 80
grand canonical 97 Rayleigh scattering law 81, 82, 87
isothermal—isobaric 97 reaction velocity 255, 291
microcanonical 96 reciprocal rule 108
molecular 100 red sunsets 82
rotational 101, 132 Redlich-Kwang gas 270
translational 101, 132 reference states 147
vibrational 101, 132 relationship between CP and CV
Pauli exclusion principle 56 139, 276
consquences of 58 relaxation effects 167
perturbation theory 207 rigid rotor 200
example of the quartic oscilla- degeneracy of 235, 248
tor 208 energy 235, 247
phase diagram 300 rotational energy levels 200, 219

339
degeneracy of 200 types of 108
rotational Hamiltonian 200 temperature 115
rule of mutual exclusion 234 term symbols 204
Rydberg constant 20 thermal conductivity 301
SATP 120 of gases 306
Schrodinger equation of liquids 306
time dependent 214 thermal equilibrium 120
time independent 27 third law of thermodynamics 128,
second law 133
“local” formulation 324 tips for solving problems 2
second law of thermodynamics 126, total derivative 107
133 transfer matrix 11
statements of 127 triple point 300
simple collision theory 262 two level system 211
Slater determinant 58 ‘left’ and ‘right’ states 213, 219
for lithium 59 Hamiltonian for 212
solar system model 17 Tyndall scattering 84
solvation 169 ungerade 69
solvophobic effect 176 van der Waals equation
specific heat 115
spherical harmonic functions 48, 200
spin 201
quantum number 51, 53
wavefunction 51
spin orientation
quantum number 51, 53
spin-orbit
coupling 205
Hamiltonian 205
interaction energy 205
spontaneous process 142
state function 121
table of important ones 136
Sterlings approximation 92
Stoke’s law 167, 304
STP 120
superposition 191
systems

340
