
Lagrange Mechanics

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Fri, 09 May 2014 12:29:06 UTC

Contents

Articles
  Introduction
    Lagrangian
    Lagrangian mechanics
    Hamiltonian system
    Generalized coordinates
    Legendre transformation
    Canonical coordinates
    Generalized forces
    Hamiltonian mechanics
    Integrable system
  Mathematics
    Symplectic manifold
    Phase space
    Symplectic vector field
    Liouville's theorem
    Poisson bracket
    Lie algebra
    Symplectomorphism
    Dynamical system
    Hamiltonian vector field
    Generalized forces
    Hamiltonian mechanics
    Integrable system
    Cotangent bundle

References
  Article Sources and Contributors
  Image Sources, Licenses and Contributors

Article Licenses
  License

Introduction
Lagrangian
This article is about the Lagrangian function in Lagrangian mechanics. For other uses, see Lagrangian
(disambiguation).
The Lagrangian, L, of a dynamical system is a function that summarizes the dynamics of the system. It is named
after the Italian-French mathematician and astronomer Joseph-Louis Lagrange. The concept of a Lagrangian was
introduced in Lagrange's reformulation of classical mechanics, now known as Lagrangian mechanics.

Definition
In classical mechanics, the natural form of the Lagrangian is defined as the kinetic energy, T, of the system minus its
potential energy, V. In symbols,

    L = T − V.

If the Lagrangian of a system is known, then the equations of motion of the system may be obtained by a direct
substitution of the expression for the Lagrangian into the Euler–Lagrange equation. The Lagrangian of a given
system is not unique: two Lagrangians describing the same system can differ by the total derivative with respect
to time of some function f(q, t), but solving with either of two equivalent Lagrangians gives the same equations of motion.
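To make this non-uniqueness concrete, here is a short check (added for illustration; f is an arbitrary differentiable function of q and t, not from the original article) that adding the total time derivative of f(q, t) to a Lagrangian leaves the Euler–Lagrange equation unchanged:

```latex
L' = L + \frac{\mathrm{d}f(q,t)}{\mathrm{d}t}
   = L + \frac{\partial f}{\partial q}\,\dot q + \frac{\partial f}{\partial t}
\quad\Longrightarrow\quad
\frac{\partial L'}{\partial \dot q} = \frac{\partial L}{\partial \dot q} + \frac{\partial f}{\partial q},
\qquad
\frac{\partial L'}{\partial q} = \frac{\partial L}{\partial q}
  + \frac{\partial^2 f}{\partial q^2}\,\dot q + \frac{\partial^2 f}{\partial q\,\partial t},

\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L'}{\partial \dot q} - \frac{\partial L'}{\partial q}
= \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}
  + \underbrace{\frac{\partial^2 f}{\partial q^2}\,\dot q + \frac{\partial^2 f}{\partial t\,\partial q}
  - \frac{\partial^2 f}{\partial q^2}\,\dot q - \frac{\partial^2 f}{\partial q\,\partial t}}_{=\,0}.
```

So L and L′ yield identical equations of motion.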

The Lagrangian formulation


Simple example
[Figure: The trajectory of a thrown ball is characterized by the sum of the Lagrangian values at each time being a (local)
minimum.]

The Lagrangian L can be calculated at several instants of time t, and a graph of L against t can be drawn. The area
under the curve is the action. Any different path between the initial and final positions leads to a larger action than
the one chosen by nature. Nature chooses the smallest action: this is the Principle of Least Action.

    If Nature has defined the mechanics problem of the thrown ball in so elegant a fashion, might She have
    defined other problems similarly? So it seems now. Indeed, at the present time it appears that we can describe
    all the fundamental forces in terms of a Lagrangian. The search for Nature's One Equation, which rules all of
    the universe, has been largely a search for an adequate Lagrangian.
    Robert Adair, The Great Design: Particles, Fields, and Creation[1]

Using only the principle of least action and the Lagrangian, we can deduce the correct trajectory, by trial and error or
by the calculus of variations.
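The least-action claim can be checked numerically. The following sketch (an added illustration, not part of the original article) discretizes the action S = ∫(½mv² − mgy) dt for a thrown ball and compares the true parabolic path against a perturbed path with the same endpoints; the perturbed action comes out larger:

```python
# Compare the action of the true trajectory of a thrown ball with that of
# a perturbed trajectory sharing the same endpoints (m = 1 kg assumed).
import math

def action(y, dt, g=9.81, m=1.0):
    """Discretized action S = sum (0.5*m*v^2 - m*g*y) dt, forward differences."""
    S = 0.0
    for k in range(len(y) - 1):
        v = (y[k + 1] - y[k]) / dt
        y_mid = 0.5 * (y[k] + y[k + 1])
        S += (0.5 * m * v**2 - m * g * y_mid) * dt
    return S

g, T, v0 = 9.81, 1.0, 4.905          # v0 chosen so the ball lands at t = T
N = 1000
dt = T / N
t = [k * dt for k in range(N + 1)]

true_path = [v0 * ti - 0.5 * g * ti**2 for ti in t]      # solution of Newton's law
perturbed = [y + 0.1 * math.sin(math.pi * ti / T)        # same endpoints, deformed middle
             for y, ti in zip(true_path, t)]

S_true, S_pert = action(true_path, dt), action(perturbed, dt)
print(S_true < S_pert)   # the physical path has the smaller action
```

Any endpoint-preserving perturbation η(t) changes the action by ½∫η̇² dt at leading order, so the comparison is robust to the choice of perturbation.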

Importance
The Lagrangian formulation of mechanics is important not just for its broad applications, but also for its role in
advancing deep understanding of physics. Although Lagrange only sought to describe classical mechanics, the action
principle that is used to derive the Lagrange equation was later recognized to be applicable to quantum mechanics as
well.
Physical action and quantum-mechanical phase are related via Planck's constant, and the principle of stationary
action can be understood in terms of constructive interference of wave functions.
The same principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical
conserved quantities to continuous symmetries of a physical system.
Lagrangian mechanics and Noether's theorem together yield a natural formalism for first quantization by including
commutators between certain terms of the Lagrangian equations of motion for a physical system.

Advantages over other methods


• The formulation is not tied to any one coordinate system; rather, any convenient variables may be used to
describe the system. These variables are called "generalized coordinates" qi and may be any quantitative attributes
of the system (for example, strength of the magnetic field at a particular location; angle of a pulley; position of a
particle in space; or degree of excitation of a particular eigenmode in a complex system) which are functions of
the independent variable(s). This trait makes it easy to incorporate constraints into a theory by defining
coordinates that only describe states of the system that satisfy the constraints.
• If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under
that symmetry. This characteristic is very helpful in showing that theories are consistent with either special
relativity or general relativity.

Cyclic coordinates and conservation laws


An important property of the Lagrangian is that conservation laws can easily be read off from it. For example, if the
Lagrangian L does not depend on a coordinate q_i itself, then the generalized momentum conjugate to q_i, given by:

    p_i = ∂L/∂q̇_i,

is a conserved quantity, because of Lagrange's equations:

    ṗ_i = d/dt (∂L/∂q̇_i) = ∂L/∂q_i = 0.

It doesn't matter if L depends on the time derivative q̇_i of that generalized coordinate, since the Lagrangian's
independence of the coordinate always makes the above partial derivative zero. This is a special case of Noether's
theorem. Such coordinates are called "cyclic" or "ignorable".

For example, the conservation of the generalized momentum

    p_2 = ∂L/∂q̇_2,

say, can be directly seen if the Lagrangian of the system is of the form

    L(q_1, q_3, q_4, ...; q̇_1, q̇_2, q̇_3, q̇_4, ...; t).

Also, if the time t does not appear in L, then the Hamiltonian is conserved. This is energy conservation, unless
the potential energy depends on velocity, as in electrodynamics.[2][3]
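A numeric sanity check of such a conservation law (an illustrative sketch, not from the article): for a particle in a central potential, the polar-coordinate Lagrangian L = ½m(ṙ² + r²θ̇²) − V(r) has no θ dependence, so p_θ = m r² θ̇, the angular momentum, is conserved. Integrating the equivalent Cartesian equations for V(r) = −1/r with a velocity-Verlet step and monitoring L_z = m(x v_y − y v_x):

```python
# Angular momentum conservation for motion in the central potential V(r) = -1/r.
# The polar-coordinate Lagrangian has no explicit theta dependence, so
# p_theta = m r^2 thetadot = m (x*vy - y*vx) should stay constant (m = 1 here).
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3          # acceleration from -grad V

x, y, vx, vy = 1.0, 0.0, 0.0, 1.2    # initial conditions: a bound, non-circular orbit
dt, steps = 0.001, 20000
Lz0 = x * vy - y * vx                # initial angular momentum

ax, ay = accel(x, y)
for _ in range(steps):               # velocity-Verlet integration
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax_new, ay_new = accel(x, y)
    vx += 0.5 * (ax + ax_new) * dt
    vy += 0.5 * (ay + ay_new) * dt
    ax, ay = ax_new, ay_new

Lz = x * vy - y * vx
print(abs(Lz - Lz0))                 # drift stays at the level of rounding error
```

The velocity-Verlet scheme preserves angular momentum exactly (up to round-off) for central forces, so the observed drift measures only floating-point error.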

Explanation
The Lagrangian in many classical systems is a function of generalized coordinates qi and their velocities dqi/dt.
These coordinates (and velocities) are, in their turn, parametric functions of time. In the classical view, time is an
independent variable and qi (and dqi/dt) are dependent variables, as is often seen in phase space explanations of
systems. This formalism was generalized further to handle field theory. In field theory, the independent variable is
replaced by an event in spacetime (x, y, z, t), or more generally still by a point s on a manifold. The dependent
variables (q) are replaced by the value of a field at that point in spacetime, φ(x, y, z, t), so that the equations of motion
are obtained by means of an action principle, written as:

    δS/δφ_i = 0,

where the action,

    S[φ_i] = ∫ 𝓛(φ_i(s), {∂φ_i(s)/∂s^α}, {s^α}) dⁿs,

is a functional of the dependent variables φ_i(s) with their derivatives and s itself,
and where s = {s^α} denotes the set of n independent variables of the system, indexed by α = 1, 2, 3, ..., n. Notice L is
used in the case of one independent variable (t) and 𝓛 is used in the case of multiple independent variables (usually
four: x, y, z, t).
The equations of motion obtained from this functional derivative are the Euler–Lagrange equations of this action.
For example, in the classical mechanics of particles, the only independent variable is time, t. So the Euler–Lagrange
equations are

    d/dt (∂L/∂q̇_i) = ∂L/∂q_i.
Dynamical systems whose equations of motion are obtainable by means of an action principle on a suitably chosen
Lagrangian are known as Lagrangian dynamical systems. Examples of Lagrangian dynamical systems range from
the classical version of the Standard Model, to Newton's equations, to purely mathematical problems such as
geodesic equations and Plateau's problem.

An example from classical mechanics


In Cartesian coordinates
Suppose we have a three-dimensional space in which a particle of mass m moves under the influence of a
conservative force F. Since the force is conservative, it corresponds to a potential energy function V(r)
given by F = −∇V(r). The Lagrangian of the particle can be written

    L = ½ m ṙ² − V(r) = ½ m (ẋ_1² + ẋ_2² + ẋ_3²) − V(x_1, x_2, x_3).

The equations of motion for the particle are found by applying the Euler–Lagrange equation

    d/dt (∂L/∂ẋ_i) − ∂L/∂x_i = 0,

where i = 1, 2, 3.
Then

    ∂L/∂x_i = −∂V/∂x_i,    ∂L/∂ẋ_i = m ẋ_i,

and

    d/dt (∂L/∂ẋ_i) = m ẍ_i.

Thus

    m r̈ = −∇V(r) = F,

which is Newton's second law of motion for a particle subject to a conservative force. Here the time derivative is
written conventionally as a dot above the quantity being differentiated, and ∇ is the del operator.
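To illustrate "direct substitution into the Euler–Lagrange equation" concretely, here is an added sketch (with assumed units m = k = 1, not from the original text): for the harmonic-oscillator Lagrangian L = ½ẋ² − ½x², the trajectory x(t) = cos t should make d/dt(∂L/∂ẋ) − ∂L/∂x = ẍ + x vanish, which can be checked with finite differences:

```python
# Verify that x(t) = cos(t) satisfies the Euler-Lagrange equation
# d/dt(dL/dxdot) - dL/dx = xddot + x = 0  for  L = 0.5*xdot^2 - 0.5*x^2.
import math

def x(t):
    return math.cos(t)

def el_residual(t, h=1e-3):
    # Second central difference approximates xddot; residual = xddot + x.
    xddot = (x(t + h) - 2.0 * x(t) + x(t - h)) / (h * h)
    return xddot + x(t)

worst = max(abs(el_residual(0.1 * k)) for k in range(100))
print(worst < 1e-5)    # the residual vanishes up to discretization error
```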

In spherical coordinates
Suppose we have a three-dimensional space using spherical coordinates (r, θ, φ) with the Lagrangian

    L = ½ m (ṙ² + r² θ̇² + r² sin²θ φ̇²) − V(r).

Then the Euler–Lagrange equations are:

    m r̈ − m r (θ̇² + sin²θ φ̇²) + dV/dr = 0,
    d/dt (m r² θ̇) − m r² sin θ cos θ φ̇² = 0,
    d/dt (m r² sin²θ φ̇) = 0.

Here the set of parameters s_i is just the time t, and the dynamical variables φ_i(s) are the trajectories r(t), θ(t), φ(t) of the
particle.
Despite the use of standard variables such as x, the Lagrangian allows the use of any coordinates, which do not need
to be orthogonal. These are "generalized coordinates".

Lagrangian of a test particle


A test particle is a particle whose mass and charge are assumed to be so small that its effect on the external system is
insignificant. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real
particles like electrons and up quarks are more complex and have additional terms in their Lagrangians.

Classical test particle with Newtonian gravity


Suppose we are given a particle with mass m kilograms, and position r meters in a Newtonian gravitational field with
potential Φ in J·kg⁻¹. The particle's world line is parameterized by time t seconds. The particle's kinetic energy is:

    T(t) = ½ m |ṙ(t)|²,

and the particle's gravitational potential energy is:

    V(t) = m Φ(r(t), t).

Then its Lagrangian is L joules, where

    L(t) = T(t) − V(t) = ½ m |ṙ(t)|² − m Φ(r(t), t).

Varying r(t) in the action integral (equivalent to the Euler–Lagrange differential equation), we get

    δ∫ L(t) dt = ∫ ( m ṙ(t) · δṙ(t) − m ∇Φ(r(t), t) · δr(t) ) dt = 0.

Integrate the first term by parts and discard the total integral. Then divide out the variation δr(t) to get

    −m r̈(t) − m ∇Φ(r(t), t) = 0,

and thus

    (1)    m r̈(t) = −m ∇Φ(r(t), t)

is the equation of motion: two different expressions for the force.

Special relativistic test particle with electromagnetism


In special relativity, the energy (rest energy plus kinetic energy) of a free test particle is

    E = m c² / √(1 − v²/c²) = γ m c².

However, the term in the Lagrangian that gives rise to the derivative of the momentum is no longer the kinetic
energy. It must be changed to

    L_free = −m c² √(1 − v²/c²) = −m c² dτ/dt = −m c² + ½ m v² + ...,

where c is the vacuum speed of light in m·s⁻¹ and τ is the proper time in seconds (i.e. time measured by a clock moving
with the particle). The second term in the series is just the classical kinetic energy. Suppose the
particle has electrical charge q coulombs and is in an electromagnetic field with scalar potential φ volts (a volt is a
joule per coulomb) and vector potential A in V·s·m⁻¹. The Lagrangian of a special relativistic test particle in an
electromagnetic field is:

    L = −m c² √(1 − v²/c²) − q φ + q v · A.

Varying this with respect to r, we get

    d/dt (γ m v + q A) = −q ∇φ + q ∇(v · A),

which is

    d/dt (γ m v) = q (−∇φ − ∂A/∂t) + q v × (∇ × A),

which is the equation for the Lorentz force, where:

    E = −∇φ − ∂A/∂t,    B = ∇ × A

are the fields and potentials.

General relativistic test particle


In general relativity, the first term generalizes (includes) both the classical kinetic energy and the interaction with the
gravitational field. It becomes:

    −m c √(−g_{αβ} (dx^α/dt)(dx^β/dt)).

The Lagrangian of a general relativistic test particle in an electromagnetic field is:

    L = −m c √(−g_{αβ} (dx^α/dt)(dx^β/dt)) + q A_α (dx^α/dt).

If the four spacetime coordinates x^α are given in arbitrary units (i.e. unitless), then g_{αβ} in m² is the rank-2 symmetric
metric tensor, which is also the gravitational potential. Also, A_α in V·s is the electromagnetic 4-vector potential.

Notice that a factor of c has been absorbed into the square root because it is the equivalent of

    c √(−g_{αβ} (dx^α/dt)(dx^β/dt) / c²).

This notion has been directly generalized from special relativity.

Lagrangians and Lagrangian densities in field theory


The time integral of the Lagrangian is called the action, denoted by S. In field theory, a distinction is occasionally
made between the Lagrangian L, of which the action is the time integral:

    S = ∫ L dt,

and the Lagrangian density 𝓛, which one integrates over all spacetime to get the action:

    S[φ_i] = ∫ 𝓛(φ_i(x)) d⁴x.

General form of Lagrangian density:

    𝓛 = 𝓛(φ_i, ∂_μ φ_i, x^μ).

The relationship between L and 𝓛:[4]

    L = ∫ 𝓛 d³x,

where ∂_μ ≡ ∂/∂x^μ (see 4-gradient).

In field theory, the independent variable t is replaced by an event in spacetime (x, y, z, t) or still more generally
by a point s on a manifold.
The Lagrangian is then the spatial integral of the Lagrangian density. However, 𝓛 is also frequently simply called
the Lagrangian, especially in modern use; it is far more useful in relativistic theories since it is a locally defined,
Lorentz scalar field. Both definitions of the Lagrangian can be seen as special cases of the general form, depending
on whether the spatial variable is incorporated into the index i or the parameters s in φ_i(s). Quantum field theories
in particle physics, such as quantum electrodynamics, are usually described in terms of 𝓛, and the terms in this form
of the Lagrangian translate quickly to the rules used in evaluating Feynman diagrams.

Selected fields
To go with the section on test particles above, here are the equations for the fields in which they move, allowing
those fields to be calculated. The equations below will not give the equations of motion of a test particle in the field,
but will instead give the potential (field) induced by quantities such as mass or charge density at any point r. For
example, in the case of Newtonian gravity, the Lagrangian density integrated over spacetime gives an equation
which, if solved, would yield Φ(r). This Φ(r), when substituted back in equation (1), the Lagrangian equation for
the test particle in a Newtonian gravitational field, provides the information needed to calculate the acceleration of
the particle.

Newtonian gravity
The Lagrangian (density) is 𝓛 in J·m⁻³. The interaction term m Φ is replaced by a term involving a continuous mass
density ρ in kg·m⁻³. This is necessary because using a point source for a field would result in mathematical
difficulties. The resulting Lagrangian for the classical gravitational field is:

    𝓛(r, t) = −ρ(r, t) Φ(r, t) − (1/(8πG)) (∇Φ(r, t))²,

where G in m³·kg⁻¹·s⁻² is the gravitational constant. Variation of the integral with respect to Φ gives:

    δ𝓛(r, t) = −ρ(r, t) δΦ(r, t) − (1/(4πG)) ∇Φ(r, t) · ∇(δΦ(r, t)).

Integrate by parts and discard the total integral. Then divide out by δΦ to get:

    0 = −ρ(r, t) + (1/(4πG)) ∇²Φ(r, t),

and thus

    4πG ρ(r, t) = ∇²Φ(r, t),

which yields Gauss's law for gravity.

Electromagnetism in special relativity


The interaction terms −q φ + q v · A are replaced by terms involving a continuous charge density ρ in A·s·m⁻³ and
current density j in A·m⁻². The resulting Lagrangian for the electromagnetic field is:

    𝓛(r, t) = −ρ(r, t) φ(r, t) + j(r, t) · A(r, t) + (ε₀/2) E(r, t)² − (1/(2μ₀)) B(r, t)².

Varying this with respect to φ, we get

    0 = −ρ(r, t) + ε₀ ∇ · E(r, t),

which yields Gauss' law.

Varying instead with respect to A, we get

    0 = j(r, t) + ε₀ ∂E(r, t)/∂t − (1/μ₀) ∇ × B(r, t),

which yields Ampère's law.

Electromagnetism in general relativity


For the Lagrangian of gravity in general relativity, see Einstein–Hilbert action. The Lagrangian of the
electromagnetic field is:

    𝓛(x) = j^α(x) A_α(x) − (1/(4μ₀)) F_{αβ}(x) F_{γδ}(x) g^{αγ}(x) g^{βδ}(x) √(−det g(x) / c²).

If the four spacetime coordinates x^α are given in arbitrary units, then: 𝓛 in J·s is the Lagrangian, a scalar density;
j^α in coulombs is the current, a vector density; and F_{αβ} in V·s is the electromagnetic tensor, a covariant antisymmetric
tensor of rank two. Notice that the determinant under the square root sign is applied to the matrix of components of
the covariant metric tensor g_{αβ}, and g^{αβ} is its inverse. Notice that the units of the Lagrangian changed because we are
integrating over (x⁰, x¹, x², x³), which are unitless, rather than over (t, x, y, z), which have units of s·m³. The
electromagnetic field tensor is formed by anti-symmetrizing the partial derivative of the electromagnetic vector
potential; so it is not an independent variable. The square root is needed to convert that term into a scalar density
instead of just a scalar, and also to compensate for the change in the units of the variables of integration. The factor
of (−1/c²) inside the square root is needed to normalize it so that the square root will reduce to one in special
relativity (since the determinant is −c² in special relativity).

Electromagnetism using differential forms


Using differential forms, the electromagnetic action S in vacuum on a (pseudo-)Riemannian manifold M can be
written (using natural units, c = μ₀ = 1) as

    S[A] = −∫_M ( ½ F ∧ ⋆F − A ∧ J ).

Here, A stands for the electromagnetic potential 1-form, and J is the current 3-form. This is exactly the same
Lagrangian as in the section above, except that the treatment here is coordinate-free; expanding the integrand into a
basis yields the identical, lengthy expression. Variation of the action leads to

    d ⋆ dA = J.

These are Maxwell's equations for the electromagnetic potential. Substituting F = dA immediately yields the
equations for the fields,

    dF = 0,    d ⋆ F = J.
Dirac Lagrangian
The Lagrangian density for a Dirac field is:[5]

    𝓛 = ψ̄ (iℏc ∂̸ − m c²) ψ,

where ψ is a Dirac spinor (annihilation operator), ψ̄ = ψ† γ⁰ is its Dirac adjoint (creation operator), and ∂̸ is Feynman
slash notation for γ^σ ∂_σ.

Quantum electrodynamic Lagrangian


The Lagrangian density for QED is:

    𝓛 = ψ̄ (iℏc D̸ − m c²) ψ − (1/(4μ₀)) F_{μν} F^{μν},

where F^{μν} is the electromagnetic tensor, D is the gauge covariant derivative, and D̸ is Feynman notation for
γ^σ D_σ.

Quantum chromodynamic Lagrangian


The Lagrangian density for quantum chromodynamics is:[6][7][8]

    𝓛 = Σ_n ψ̄_n (iℏc D̸ − m_n c²) ψ_n − ¼ G^α_{μν} G_α^{μν},

where D is the QCD gauge covariant derivative, n = 1, 2, ..., 6 counts the quark types, and G^α_{μν} is the gluon field
strength tensor.

Mathematical formalism
Suppose we have an n-dimensional manifold, M, and a target manifold, T. Let 𝒞 be the configuration space of
smooth functions from M to T.

Examples
• In classical mechanics, in the Hamiltonian formalism, M is the one-dimensional manifold ℝ, representing time,
and the target space is the cotangent bundle of the space of generalized positions.
• In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any
given point. For example, if there are m real-valued scalar fields, φ_1, ..., φ_m, then the target manifold is ℝ^m. If
the field is a real vector field, then the target manifold is isomorphic to ℝ³. There is actually a much more
elegant way using tangent bundles over M, but we will just stick to this version.

Mathematical development
Consider a functional, S : 𝒞 → ℝ, called the action. Physical considerations require it be a mapping to ℝ (the set of
all real numbers), not ℂ (the set of all complex numbers).

In order for the action to be local, we need additional restrictions on the action. If φ ∈ 𝒞, we assume S[φ] is the
integral over M of a function of φ, its derivatives and the position, called the Lagrangian, 𝓛(φ, ∂φ, ∂∂φ, ..., x). In
other words,

    S[φ] = ∫_M 𝓛(φ(x), ∂φ(x), ∂∂φ(x), ..., x) dⁿx.

It is assumed below, in addition, that the Lagrangian depends on only the field value and its first derivative but not
the higher derivatives.
Given boundary conditions, basically a specification of the value of φ at the boundary if M is compact, or some limit
on φ as x → ∞ (this will help in doing integration by parts), the subspace of 𝒞 consisting of functions φ such that
all functional derivatives of S at φ are zero and φ satisfies the given boundary conditions is the subspace of on-shell
solutions.
The solution is given by the Euler–Lagrange equations (thanks to the boundary conditions),

    δS/δφ = −∂_μ ( ∂𝓛/∂(∂_μ φ) ) + ∂𝓛/∂φ = 0.

The left hand side is the functional derivative of the action with respect to φ.

Uses in Engineering
Fifty years ago, Lagrangians were a general part of the engineering curriculum, but a quarter of a century later, even with
the ascendancy of dynamical systems, they were dropped as requirements from the majority of engineering
programs and considered to be the domain of physics. A decade ago this changed dramatically: Lagrangians are
not only a required part of many ME and EE curricula, but are now seen as far more than the province of physics.
This is true of pure and applied engineering, as well as the more physics-related aspects of engineering, or
engineering optimization, which itself is more the province of Lagrange multipliers.

Today, Lagrangians find their way into hundreds of direct engineering solutions, including robotics; turbulent flow
analysis (Lagrangian and Eulerian specification of the flow field); signal processing; microscopic component contact
and nanotechnology (superlinear convergent augmented Lagrangians); gyroscopic forcing and dissipation;
semi-infinite supercomputing (which also involves Lagrange multipliers in the subfield of semi-infinite
programming); chemical engineering (specific heat linear Lagrangian interpolation in reaction planning); civil
engineering (dynamic analysis of traffic flows); optics engineering and design (Lagrangian and Hamiltonian optics);
aerospace (Lagrangian interpolation); force stepping integrators; and even airbag deployment (coupled
Eulerian-Lagrangians as well as SELM, the stochastic Eulerian Lagrangian method).

Notes
[1] Robert K. Adair, The Great Design: Particles, Fields, and Creation (New York: Oxford University Press, 1989), pp. 22–24
[2] T.W.B. Kibble, Classical Mechanics, European Physics Series, McGraw-Hill (UK), 1973, ISBN 0-07-084018-0
[3] L.N. Hand, J.D. Finch, Analytical Mechanics, Cambridge University Press, 2008, ISBN 978-0-521-57572-0
[4] F. Mandl, G. Shaw, Quantum Field Theory, chapter 2
[5] Itzykson & Zuber, eq. 3-152
[6] http://www.fuw.edu.pl/~dobaczew/maub-42w/node9.html
[7] http://smallsystems.isn-oldenburg.de/Docs/THEO3/publications/semiclassical.qcd.prep.pdf
[8] http://www-zeus.physik.uni-bonn.de/~brock/teaching/jets_ws0405/seminar09/sluka_quark_gluon_jets.pdf

References
David Tong, Classical Dynamics (http://www.damtp.cam.ac.uk/user/tong/dynamics.html) (Cambridge
lecture notes)

Lagrangian mechanics

Lagrangian mechanics is a re-formulation of classical mechanics using the principle of stationary action (also
called the principle of least action). Lagrangian mechanics applies to systems whether or not they conserve energy or
momentum, and it provides conditions under which energy, momentum or both are conserved. It was introduced by
the Italian-French mathematician Joseph-Louis Lagrange in 1788.
In Lagrangian mechanics, the trajectory of a system of particles is derived by solving the Lagrange equations in one
of two forms, either the Lagrange equations of the first kind, which treat constraints explicitly as extra equations,
often using Lagrange multipliers; or the Lagrange equations of the second kind, which incorporate the constraints
directly by judicious choice of generalized coordinates. The fundamental lemma of the calculus of variations shows
that solving the Lagrange equations is equivalent to finding the path for which the action functional is stationary, a
quantity that is the integral of the Lagrangian over time.
The use of generalized coordinates may considerably simplify a system's analysis. For example, consider a small
frictionless bead traveling in a groove. If one is tracking the bead as a particle, calculation of the motion of the bead
using Newtonian mechanics would require solving for the time-varying constraint force required to keep the bead in
the groove. For the same problem using Lagrangian mechanics, one looks at the path of the groove and chooses a set


of independent generalized coordinates that completely characterize the possible motion of the bead. This choice
eliminates the need for the constraint force to enter into the resultant system of equations. There are fewer equations
since one is not directly calculating the influence of the groove on the bead at a given moment.

Conceptual framework
Generalized coordinates
Concepts and terminology
For one particle acted on by external forces, Newton's second law forms a set of 3 second-order ordinary
differential equations, one for each dimension. Therefore, the motion of the particle can be completely described by
6 independent variables: 3 initial position coordinates and 3 initial velocity coordinates. Given these, the general
solutions to Newton's second law become particular solutions that determine the time evolution of the particle's
behaviour after its initial state (t = 0).

[Figure: Illustration of a generalized coordinate q for one degree of freedom, of a particle moving in a complicated
path. Four possibilities of q for the particle's path are shown. For more particles, each with their own degrees of
freedom, there are more coordinates.]

The most familiar set of variables for position r = (r1, r2, r3) and velocity are Cartesian coordinates and their time
derivatives (i.e. position (x, y, z) and velocity (vx, vy, vz) components). Determining forces in terms of standard
coordinates can be complicated, and usually requires much labour.

An alternative and more efficient approach is to use only as many coordinates as are needed to define the position of
the particle, at the same time incorporating the constraints on the system, and writing down kinetic and potential
energies. In other words, to determine the number of degrees of freedom the particle has, i.e. the number of possible
ways the system can move subject to the constraints (forces that prevent it moving in certain paths). Energies are
much easier to write down and calculate than forces, since energy is a scalar while forces are vectors.
These coordinates are generalized coordinates, denoted q_j, and there is one for each degree of freedom. Their
corresponding time derivatives are the generalized velocities, q̇_j. The number of degrees of freedom is usually not
equal to the number of spatial dimensions: multi-body systems in 3-dimensional space (such as Barton's Pendulums,
planets in the solar system, or atoms in molecules) can have many more degrees of freedom, incorporating rotations
as well as translations. This contrasts with the number of spatial coordinates used with Newton's laws above.


Mathematical formulation
The position vector r in a standard coordinate system (like Cartesian, spherical etc.) is related to the generalized
coordinates by some transformation equation:

    r = r(q_1, q_2, ..., q_m, t),

where there are as many q_i as needed (the number of degrees of freedom in the system). Likewise for velocity and
generalized velocities.
For example, for a simple pendulum of length ℓ, there is the constraint of the pendulum bob's suspension
(rod/wire/string etc.). The position r depends on the x and y coordinates at time t, that is, r(t) = (x(t), y(t)); however, x
and y are coupled to each other in a constraint equation (if x changes, y must change, and vice versa). A logical choice
for a generalized coordinate is the angle of the pendulum from vertical, θ, so we have r = (x(θ), y(θ)) = r(θ), in
which θ = θ(t). Then the transformation equation would be

    r(θ) = (ℓ sin θ, −ℓ cos θ)

(taking the origin at the pivot, with y measured upwards), and so

    ṙ(θ) = (ℓ θ̇ cos θ, ℓ θ̇ sin θ),

which corresponds to the one degree of freedom the pendulum has. The term "generalized coordinates" is really a
holdover from the period when Cartesian coordinates were the default coordinate system.
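The single generalized coordinate θ makes simulation straightforward. A minimal sketch (assuming ℓ = 1 m, g = 9.81 m/s², per unit mass; not part of the original text): with r(θ) = (ℓ sin θ, −ℓ cos θ), the Lagrangian L = ½ℓ²θ̇² + gℓ cos θ yields θ̈ = −(g/ℓ) sin θ, which can be integrated directly with no constraint force ever computed; conservation of E = ½ℓ²θ̇² − gℓ cos θ serves as a consistency check:

```python
# Pendulum integrated in its single generalized coordinate theta:
# thetaddot = -(g/l) sin(theta).  No constraint (tension) force appears.
import math

g, l = 9.81, 1.0
theta, omega = 1.0, 0.0              # initial angle (rad) and angular velocity
dt, steps = 1e-4, 100000             # integrate for 10 seconds

def energy(theta, omega):
    return 0.5 * l * l * omega**2 - g * l * math.cos(theta)   # per unit mass

E0 = energy(theta, omega)
for _ in range(steps):               # leapfrog (kick-drift-kick) step
    omega += 0.5 * dt * (-(g / l) * math.sin(theta))
    theta += dt * omega
    omega += 0.5 * dt * (-(g / l) * math.sin(theta))

drift = abs(energy(theta, omega) - E0)
print(drift < 1e-5)                  # leapfrog keeps the energy error bounded
```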
In general, from m independent generalized coordinates q_j, the following transformation equations hold for a system
composed of n particles:

    r_i = r_i(q_1, q_2, ..., q_m, t),    i = 1, 2, ..., n,

where m indicates the total number of generalized coordinates. An expression for the virtual displacement
(infinitesimal) δr_i of the system, for time-independent constraints or "velocity-dependent constraints", is of the same
form as a total differential:

    δr_i = Σ_{j=1}^{m} (∂r_i/∂q_j) δq_j,

where j is an integer label corresponding to a generalized coordinate.


The generalized coordinates form a discrete set of variables that define the configuration of a system. The continuum
analogue for defining a field is a set of field variables, say ρ(r, t), which represent a density function varying with position
and time.

D'Alembert's principle and generalized forces


D'Alembert's principle introduces the concept of virtual work due to applied forces F_i and inertial forces, acting on a
three-dimensional accelerating system of n particles whose motion is consistent with its constraints.
Mathematically, the virtual work done δW on a particle of mass m_i through a virtual displacement δr_i (consistent with
the constraints) is given by d'Alembert's principle:

    δW = Σ_{i=1}^{n} (F_i − m_i a_i) · δr_i = 0,

where a_i are the accelerations of the particles in the system and i = 1, 2, ..., n simply labels the particles. In terms of
generalized coordinates,

    δW = Σ_{j=1}^{m} Σ_{i=1}^{n} (F_i − m_i a_i) · (∂r_i/∂q_j) δq_j = 0.

This expression suggests that the applied forces may be expressed as generalized forces, Q_j. Dividing by δq_j gives the
definition of a generalized force:

    Q_j = Σ_{i=1}^{n} F_i · (∂r_i/∂q_j).

If the forces F_i are conservative, there is a scalar potential field V in which the gradient of V is the force:

    F_i = −∇_i V,    so    Q_j = −Σ_{i=1}^{n} ∇_i V · (∂r_i/∂q_j) = −∂V/∂q_j,

i.e. generalized forces can be reduced to a potential gradient in terms of generalized coordinates. The previous result
may be easier to see by recognizing that V is a function of the r_i, which are in turn functions of q_j, and then applying
the chain rule to the derivative of V with respect to q_j.
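The reduction Q_j = −∂V/∂q_j can be checked on a concrete system (an illustrative sketch, with m = 1 kg, ℓ = 1 m assumed; not from the article): for a pendulum with r(θ) = (ℓ sin θ, −ℓ cos θ) and gravity F = (0, −mg), the generalized force Q_θ = F · ∂r/∂θ = −mgℓ sin θ should equal −∂V/∂θ for V = −mgℓ cos θ:

```python
# Generalized force for a pendulum: Q_theta = F . dr/dtheta  versus  -dV/dtheta.
import math

m, gacc, l = 1.0, 9.81, 1.0

def dr_dtheta(theta):
    # r(theta) = (l sin(theta), -l cos(theta))  =>  derivative below
    return (l * math.cos(theta), l * math.sin(theta))

def Q_theta(theta):
    Fx, Fy = 0.0, -m * gacc           # gravity in Cartesian components
    dx, dy = dr_dtheta(theta)
    return Fx * dx + Fy * dy          # definition of the generalized force

def minus_dV_dtheta(theta, h=1e-6):
    V = lambda th: -m * gacc * l * math.cos(th)
    return -(V(theta + h) - V(theta - h)) / (2.0 * h)

worst = max(abs(Q_theta(th) - minus_dV_dtheta(th))
            for th in [0.0, 0.3, 1.0, 2.0, -1.5])
print(worst < 1e-6)
```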

Kinetic energy relations


The kinetic energy, T, for the system of particles is defined by

    T = ½ Σ_{i=1}^{n} m_i ṙ_i · ṙ_i.

The partial derivatives of T with respect to the generalized coordinates q_j and generalized velocities q̇_j are:

    ∂T/∂q_j = Σ_{i=1}^{n} m_i ṙ_i · ∂ṙ_i/∂q_j,    ∂T/∂q̇_j = Σ_{i=1}^{n} m_i ṙ_i · ∂ṙ_i/∂q̇_j.

Because q_j and q̇_j are independent variables:

    ∂ṙ_i/∂q̇_j = ∂r_i/∂q_j.

Then:

    ∂T/∂q̇_j = Σ_{i=1}^{n} m_i ṙ_i · ∂r_i/∂q_j.

The total time derivative of this equation is

    d/dt (∂T/∂q̇_j) = Σ_{i=1}^{n} ( m_i r̈_i · ∂r_i/∂q_j + m_i ṙ_i · ∂ṙ_i/∂q_j ),

resulting in the generalized equations of motion:

    Q_j = d/dt (∂T/∂q̇_j) − ∂T/∂q_j.

Newton's laws are contained in it, yet there is no need to find the constraint forces because virtual work and
generalized coordinates (which account for constraints) are used. This equation in itself is not actually used in
practice, but is a step towards deriving Lagrange's equations (see below).[2]


Lagrangian and action


The core element of Lagrangian mechanics is the Lagrangian function, which summarizes the dynamics of the entire
system in a very simple expression. The physics of analyzing a system is reduced to choosing the most convenient
set of generalized coordinates, determining the kinetic and potential energies of the constituents of the system, then
writing down the equation for the Lagrangian to use in Lagrange's equations. It is defined by[3]

    L = T − V,

where T is the total kinetic energy and V is the total potential energy of the system.
The next fundamental element is the action S, defined as the time integral of the Lagrangian:

    S = ∫_{t1}^{t2} L dt.

This also contains the dynamics of the system, and has deep theoretical implications (discussed below). Technically,
the action is a functional, rather than a function: its value depends on the full Lagrangian function for all times between
t1 and t2. Its dimensions are the same as those of angular momentum.
In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field φ(r, t) defined
over a region of 3d space. Associated with the field is a Lagrangian density 𝓛, defined in terms of the field and
its derivatives at a location r. The total Lagrangian is then the integral of the Lagrangian density over 3d space (see
volume integral):

    L = ∫ 𝓛 d³r,

where d³r is a 3d differential volume element. The action becomes an integral over space and
time:

    S = ∫_{t1}^{t2} ∫ 𝓛 d³r dt.

Hamilton's principle of stationary action


Let q0 and q1 be the coordinates at respective initial and final times t0 and t1. Using the calculus of variations, it can
be shown that Lagrange's equations are equivalent to Hamilton's principle:

    The trajectory of the system between t0 and t1 has a stationary action S.

By stationary, we mean that the action does not vary to first order under infinitesimal deformations of the trajectory,
with the end-points (q0, t0) and (q1, t1) fixed. Hamilton's principle can be written as:

    δS = 0.

Thus, instead of thinking about particles accelerating in response to applied forces, one might think of them picking
out the path with a stationary action.
Hamilton's principle is sometimes referred to as the principle of least action; however, the action functional need only
be stationary, not necessarily a maximum or a minimum value.
We can use this principle instead of Newton's laws as the fundamental principle of mechanics; this allows us to use
an integral principle (Newton's laws are based on differential equations, so they form a differential principle) as the
basis for mechanics. However, it is not widely stated that Hamilton's principle is a variational principle only with
holonomic constraints; if we are dealing with nonholonomic systems, then the variational principle should be
replaced with one involving d'Alembert's principle of virtual work. Working only with holonomic constraints is the
price we have to pay for using an elegant variational formulation of mechanics.


Lagrange equations of the first kind


Lagrange introduced an analytical method for finding stationary points using the method of Lagrange multipliers,
and also applied it to mechanics.
For a system subject to the constraint equation on the generalized coordinates:

F(q_1, \dots, q_m, t) = A

where A is a constant, then Lagrange's equations of the first kind are:

\frac{\partial L}{\partial q_j} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_j} + \lambda\frac{\partial F}{\partial q_j} = 0

where λ is the Lagrange multiplier. By analogy with the mathematical procedure, we can write:

\frac{\delta L}{\delta q_j} + \lambda\frac{\partial F}{\partial q_j} = 0

where

\frac{\delta L}{\delta q_j} = \frac{\partial L}{\partial q_j} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_j}

denotes the variational derivative.


For e constraint equations F1, F2, ..., Fe, there is a Lagrange multiplier for each constraint equation, and Lagrange's equations of the first kind generalize to:

Lagrange's equations (1st kind)

\frac{\delta L}{\delta q_j} + \sum_{i=1}^{e} \lambda_i\frac{\partial F_i}{\partial q_j} = 0, \qquad j = 1, 2, \dots, m

This procedure does increase the number of equations, but there are enough to solve for all of the multipliers. The number of equations generated is the number of constraint equations plus the number of coordinates, i.e. e + m. The advantage of the method is that (potentially complicated) substitution and elimination of variables linked by constraint equations can be bypassed.
There is a connection between the constraint equations Fj and the constraint forces Nj acting in the conservative system (forces are conservative):

N_j = \sum_{i=1}^{e} \lambda_i\frac{\partial F_i}{\partial q_j}
which is derived below.


Derivation of connection between constraint equations and forces
The generalized constraint forces are given by (using the definition of generalized force above):

and using the kinetic energy equation of motion (blue box above):

For conservative systems (see below)

so

and


equating leads to

and finally equating to Lagrange's equations of the first kind implies:

So each constraint equation corresponds to a constraint force (in a conservative system).

Lagrange equations of the second kind


Euler–Lagrange equations
For any system with m degrees of freedom, the Lagrange equations include m generalized coordinates and m
generalized velocities. Below, we sketch out the derivation of the Lagrange equations of the second kind. In this
context, V is used rather than U for potential energy and T replaces K for kinetic energy. See the references for more
detailed and more general derivations.
The equations of motion in Lagrangian mechanics are the Lagrange equations of the second kind, also known as the Euler–Lagrange equations:[4]

Lagrange's equations (2nd kind)

\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_j}\right) - \frac{\partial L}{\partial q_j} = 0

where j = 1, 2, ..., m represents the jth degree of freedom, the q_j are the generalized coordinates, and the \dot{q}_j are the generalized velocities.
Although the mathematics required for Lagrange's equations appears significantly more complicated than Newton's laws, the approach does point to deeper insights into classical mechanics than Newton's laws alone: in particular, symmetry and conservation. In practice it is often easier to solve a problem using the Lagrange equations than Newton's laws, because a minimal set of generalized coordinates q_j can be chosen by convenience to exploit symmetries in the system, and constraint forces are incorporated into the geometry of the problem. There is one Lagrange equation for each generalized coordinate q_j.
For a system of many particles, each particle can have different numbers of degrees of freedom from the others. In
each of the Lagrange equations, T is the total kinetic energy of the system, and V the total potential energy.
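The Euler–Lagrange recipe can be checked numerically: along the exact solution of a harmonic oscillator (chosen here as an arbitrary worked instance, not taken from the text), the residual d/dt(∂L/∂q̇) − ∂L/∂q evaluated by finite differences should vanish. A minimal sketch:

```python
import math

# Finite-difference check that q(t) = cos(w t) satisfies the Euler-Lagrange
# equation for L = (1/2) m qdot^2 - (1/2) k q^2 (a harmonic oscillator).
m, k = 2.0, 8.0
w = math.sqrt(k / m)
q = lambda t: math.cos(w * t)

def dL_dqdot(t, h=1e-5):
    qdot = (q(t + h) - q(t - h)) / (2 * h)  # central difference for qdot
    return m * qdot                          # since dL/dqdot = m*qdot

def el_residual(t, h=1e-4):
    # d/dt (dL/dqdot) - dL/dq, which Lagrange's equations set to zero;
    # here dL/dq = -k*q.
    ddt = (dL_dqdot(t + h) - dL_dqdot(t - h)) / (2 * h)
    return ddt - (-k * q(t))

for t in (0.3, 1.1, 2.7):
    assert abs(el_residual(t)) < 1e-3
```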

Derivation of Lagrange's equations


Hamilton's principle
The Euler–Lagrange equations follow directly from Hamilton's principle, and are mathematically equivalent. From the calculus of variations, any functional of the form:

J = \int_{x_1}^{x_2} F(x, y(x), y'(x))\,\mathrm{d}x

leads to the general Euler–Lagrange equation for a stationary value of J (see the main article for the derivation):

\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial F}{\partial y'} - \frac{\partial F}{\partial y} = 0

Then making the replacements:

x \to t, \qquad y \to q_j, \qquad y' \to \dot{q}_j, \qquad F \to L
yields the Lagrange equations for mechanics. Since mathematically Hamilton's equations can be derived from Lagrange's equations (by a Legendre transformation) and Lagrange's equations can be derived from Newton's laws, all of which are equivalent and summarize classical mechanics, this means classical mechanics is fundamentally governed by a variational principle (Hamilton's principle above).
Generalized forces
For a conservative system, since the potential field is only a function of position, not velocity, Lagrange's equations
also follow directly from the equation of motion above:

simplifying to

This is consistent with the results derived above and may be seen by differentiating the right side of the Lagrangian with respect to \dot{q}_j and time, and solely with respect to q_j, adding the results and associating terms with the equations for F_i and Q_j.
Newton's laws
As the following derivation shows, no new physics is introduced, so the Lagrange equations can describe the
dynamics of a classical system equivalently as Newton's laws.
Derivation of Lagrange's equations from Newton's 2nd law and D'Alembert's principle
Force and work done (on the particle)
Consider a single particle with mass m and position vector r, moving under an applied conservative force F, which can be expressed as the negative gradient of a scalar potential energy function V(r, t):

\mathbf{F} = -\nabla V

Such a force is independent of third- or higher-order derivatives of r.


Consider an arbitrary infinitesimal displacement δr of the particle. The work done by the applied force F is

\delta W = \mathbf{F} \cdot \delta\mathbf{r}

Using Newton's second law:

\delta W = m\,\ddot{\mathbf{r}} \cdot \delta\mathbf{r}
Since work is a physical scalar quantity, we should be able to rewrite this equation in terms of the generalized coordinates and velocities. On the left hand side,

\delta W = -\nabla V \cdot \delta\mathbf{r} = -\sum_i \frac{\partial V}{\partial q_i}\,\delta q_i

On the right hand side, carrying out a change of coordinates to generalized coordinates, we obtain:

m\,\ddot{\mathbf{r}} \cdot \delta\mathbf{r} = \sum_i m\,\ddot{\mathbf{r}} \cdot \frac{\partial\mathbf{r}}{\partial q_i}\,\delta q_i
Now integrating by parts the summand with respect to t, then differentiating with respect to t:

allows the sum to be written as:

Recognizing that

\frac{\partial \dot{\mathbf{r}}}{\partial \dot{q}_i} = \frac{\partial \mathbf{r}}{\partial q_i}
we obtain:

Kinetic and potential energy


Now, by changing the order of differentiation, we obtain:

Finally, we change the order of summation:

Which is equivalent to:

where T is the total kinetic energy of the system.


Applying D'Alembert's principle
The equation for the work done becomes

However, this must be true for any set of virtual displacements δq_i, so we must have

for each generalized coordinate qi. We can further simplify this by noting that V is a function solely of r and t, and
r is a function of the generalized coordinates and t. Therefore, V is independent of the generalized velocities:

\frac{\partial V}{\partial \dot{q}_i} = 0
Inserting this into the preceding equation and substituting L = T − V, called the Lagrangian, we obtain Lagrange's equations:

When qi = ri (i.e. the generalized coordinates are simply the Cartesian coordinates), it is straightforward to check that
Lagrange's equations reduce to Newton's second law.

Dissipation function
Main article: Rayleigh dissipation function
In a more general formulation, the forces could be both potential and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form:[3]:271

D = \frac{1}{2} \sum_{j=1}^{m} \sum_{k=1}^{m} C_{jk}\,\dot{q}_j \dot{q}_k

where the Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then[3]:271

Q_j = -\frac{\partial V}{\partial q_j} - \frac{\partial D}{\partial \dot{q}_j}

and

\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_j}\right) - \frac{\partial L}{\partial q_j} + \frac{\partial D}{\partial \dot{q}_j} = 0
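A minimal sketch of the idea for a single coordinate with D = ½cẋ², where the Lagrange equation with the Rayleigh term gives mẍ + cẋ + kx = 0, i.e. a damped oscillator whose mechanical energy decays at the rate 2D. The parameter values and the simple stepping scheme are arbitrary choices:

```python
# Damped oscillator from a Rayleigh dissipation function D = (1/2) c xdot^2:
# the modified Lagrange equation gives m*xddot + c*xdot + k*x = 0.
m, k, c = 1.0, 4.0, 0.5
x, v = 1.0, 0.0
dt, steps = 1e-4, 50000            # integrate for 5 time units

E0 = 0.5 * m * v ** 2 + 0.5 * k * x ** 2
for _ in range(steps):
    a = -(k * x + c * v) / m       # from the dissipative Lagrange equation
    v += a * dt
    x += v * dt
E = 0.5 * m * v ** 2 + 0.5 * k * x ** 2
assert 0.0 < E < E0                # energy decreases but stays non-negative
```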
Examples
In this section two examples are provided in which the above concepts are applied. The first example establishes that
in a simple case, the Newtonian approach and the Lagrangian formalism agree. The second case illustrates the power
of the above formalism, in a case that is hard to solve with Newton's laws.
Falling mass
Consider a point mass m falling freely from rest. By gravity a force F = mg is exerted on the mass (assuming g
constant during the motion). Filling in the force in Newton's law, we find
from which the solution

follows (by taking the antiderivative of the antiderivative, and choosing the origin as the starting point). This result
can also be derived through the Lagrangian formalism. Take x to be the coordinate, which is 0 at the starting point.
The kinetic energy is T = 12mv2 and the potential energy is V = mgx; hence,

Then

which can be rewritten as

, yielding the same result as earlier.
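The closed form can be confirmed by stepping the equation of motion directly (the step size is an arbitrary choice):

```python
# Step xddot = g (the mass cancels) and compare with x(T) = g T^2 / 2.
g, dt, T = 9.81, 1e-4, 2.0
x, v = 0.0, 0.0
for _ in range(int(round(T / dt))):
    v += g * dt                    # semi-implicit Euler: velocity first
    x += v * dt
assert abs(x - 0.5 * g * T ** 2) < 0.01
```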

Pendulum on a movable support


Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ from the vertical.
The kinetic energy can then be shown to be

T = \frac{1}{2} M \dot{x}^2 + \frac{1}{2} m \left( \dot{x}^2 + 2\ell\dot{x}\dot{\theta}\cos\theta + \ell^2\dot{\theta}^2 \right)

[Figure: Sketch of the situation with definition of the coordinates]

and the potential energy of the system is

V = -mg\ell\cos\theta

The Lagrangian is therefore

L = T - V = \frac{1}{2}(M + m)\dot{x}^2 + m\ell\dot{x}\dot{\theta}\cos\theta + \frac{1}{2} m\ell^2\dot{\theta}^2 + mg\ell\cos\theta
Now carrying out the differentiations gives for the support coordinate x

\frac{\mathrm{d}}{\mathrm{d}t}\left[(M + m)\dot{x} + m\ell\dot{\theta}\cos\theta\right] = 0

therefore:

(M + m)\dot{x} + m\ell\dot{\theta}\cos\theta = \mathrm{const.}

indicating the presence of a constant of motion. Performing the same procedure for the variable θ yields:

\frac{\mathrm{d}}{\mathrm{d}t}\left[m\ell\dot{x}\cos\theta + m\ell^2\dot{\theta}\right] + m\ell\dot{x}\dot{\theta}\sin\theta + mg\ell\sin\theta = 0

therefore

\ell\ddot{\theta} + \ddot{x}\cos\theta + g\sin\theta = 0
These equations may look quite complicated, but finding them with Newton's laws would have required carefully
identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases,
the correctness of this system can be verified: for example, \ddot{x} = 0 should give the equations of motion for a pendulum that is at rest in some inertial frame, while \ddot{x} = a should give the equations for a pendulum in a
constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting
conditions and a chosen time step, by stepping through the results iteratively.
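Following that remark, here is a sketch of such an iterative integration. Solving the two Lagrange equations for the accelerations (a standard manipulation) gives the expressions in `accel()`; the fourth-order Runge–Kutta scheme and all parameter values are arbitrary illustrative choices. The x equation implies the constant of motion p_x = (M + m)ẋ + mℓθ̇cosθ noted above, which the code checks:

```python
import math

# Pendulum on a movable support, stepped iteratively.
M, m, l, g = 2.0, 1.0, 1.0, 9.81

def accel(th, thdot):
    # Accelerations obtained by combining the two Lagrange equations.
    xdd = m * math.sin(th) * (g * math.cos(th) + l * thdot ** 2) \
          / (M + m * math.sin(th) ** 2)
    thdd = -(xdd * math.cos(th) + g * math.sin(th)) / l
    return xdd, thdd

def deriv(s):
    x, th, xd, thd = s
    xdd, thdd = accel(th, thd)
    return [xd, thd, xdd, thdd]

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = deriv([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = deriv([a + dt * b for a, b in zip(s, k3)])
    return [a + dt / 6.0 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(s, k1, k2, k3, k4)]

def p_x(s):
    _, th, xd, thd = s
    return (M + m) * xd + m * l * thd * math.cos(th)

s = [0.0, 0.5, 0.0, 0.0]           # initial x, theta, xdot, thetadot
p0 = p_x(s)
for _ in range(5000):
    s = rk4_step(s, 1e-3)
assert abs(p_x(s) - p0) < 1e-6     # momentum conjugate to x is conserved
```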
Two-body central force problem
The basic problem is that of two bodies in orbit about each other attracted by a central force. The Jacobi coordinates
are introduced; namely, the location of the center of mass R and the separation of the bodies r (the relative position).
The Lagrangian is then[5]

L = \frac{1}{2} M \dot{\mathbf{R}}^2 + \frac{1}{2}\mu\dot{\mathbf{r}}^2 - U(r)

where M is the total mass, μ is the reduced mass, and U the potential of the radial force. The Lagrangian is divided into a center-of-mass term and a relative motion term. The R equation from the Euler–Lagrange system is simply:

M\ddot{\mathbf{R}} = 0

resulting in simple motion of the center of mass in a straight line at constant velocity. The relative motion is expressed in polar coordinates (r, φ):

L = \frac{1}{2}\mu\left(\dot{r}^2 + r^2\dot{\varphi}^2\right) - U(r)

which does not depend upon φ, which is therefore an ignorable (cyclic) coordinate. The Lagrange equation for φ is then:

\frac{\mathrm{d}}{\mathrm{d}t}\left(\mu r^2\dot{\varphi}\right) = 0 \quad\Rightarrow\quad \mu r^2\dot{\varphi} = \ell

where ℓ is the conserved angular momentum. The Lagrange equation for r is:

\frac{\mathrm{d}}{\mathrm{d}t}\left(\mu\dot{r}\right) - \mu r\dot{\varphi}^2 + \frac{\mathrm{d}U}{\mathrm{d}r} = 0

or:

\mu\ddot{r} = \mu r\dot{\varphi}^2 - \frac{\mathrm{d}U}{\mathrm{d}r}
This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. If the angular velocity is replaced by its value in terms of the angular momentum,

\dot{\varphi} = \frac{\ell}{\mu r^2}

the radial equation becomes:

\mu\ddot{r} = -\frac{\mathrm{d}U}{\mathrm{d}r} + \frac{\ell^2}{\mu r^3}

which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dU/dr and a second outward force, called in this context the centrifugal force:

F_{\mathrm{cf}} = \mu r\dot{\varphi}^2 = \frac{\ell^2}{\mu r^3}
Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter
of the external outward force, and its interpretation as angular momentum depends upon the more general
two-dimensional problem from which the one-dimensional problem originated.
If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the
centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using
the generalized coordinates (r, φ) and simply following the Lagrangian formulation without thinking about frames at
all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:
"Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence
depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian
coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself,
which provides the centripetal force for a curved motion.
This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the
Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is
(possibly unconsciously) selected by the choice of coordinates.[6] Unfortunately, this usage of "inertial force"
conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the
acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of
coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial
forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand
when he says (p. 155) "we deal always with generalized forces, velocities, accelerations, and momenta. For brevity,
the adjective "generalized" will be omitted frequently."
It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious
forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear,
sometimes found by exploiting the symmetry of the system.
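Returning to the two-body problem, the conservation of angular momentum that made φ an ignorable coordinate can be confirmed numerically by integrating the relative motion in Cartesian coordinates for a sample inverse-square potential. All numerical values here are arbitrary test choices:

```python
import math

# Relative two-body motion under U(r) = -kf/r, integrated with RK4; the
# angular momentum l = mu*(x*vy - y*vx) should stay constant.
mu, kf = 1.0, 1.0                  # reduced mass and force constant

def deriv(s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -kf * x / (mu * r3), -kf * y / (mu * r3)]

def rk4(s, dt):
    k1 = deriv(s)
    k2 = deriv([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = deriv([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = deriv([a + dt * b for a, b in zip(s, k3)])
    return [a + dt / 6.0 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(s, k1, k2, k3, k4)]

def ang_mom(s):
    x, y, vx, vy = s
    return mu * (x * vy - y * vx)

s = [1.0, 0.0, 0.0, 0.8]           # a bound, slightly eccentric orbit
l0 = ang_mom(s)
for _ in range(20000):
    s = rk4(s, 1e-3)
assert abs(ang_mom(s) - l0) < 1e-6
```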

Extensions of Lagrangian mechanics


The Hamiltonian, denoted by H, is obtained by performing a Legendre transformation on the Lagrangian, which
introduces new variables, canonically conjugate to the original variables. This doubles the number of variables, but
makes differential equations first order. The Hamiltonian is the basis for an alternative formulation of classical
mechanics known as Hamiltonian mechanics. It is a particularly ubiquitous quantity in quantum mechanics (see
Hamiltonian (quantum mechanics)).
In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum
mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and
final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it.
In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle
in optics.
Dissipation (i.e. non-conservative systems) can also be treated with effective Lagrangians formulated by a certain
doubling of the degrees of freedom; see.[7][8][9][10]

References
[1] http://en.wikipedia.org/w/index.php?title=Template:Classical_mechanics&action=edit
[2] Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0
[3] Torby 1984, p. 270
[4] The Road to Reality, Roger Penrose, Vintage books, 2007, ISBN 0-679-77631-1
[5] The Lagrangian also can be written explicitly for a rotating frame. See
[6] For example, see for a comparison of Lagrangians in an inertial and in a noninertial frame of reference. See also the discussion of "total" and "updated" Lagrangian formulations in
[7] B. P. Kosyakov, "Introduction to the classical theory of particles and fields", Berlin, Germany: Springer (2007)
[8] "Classical Mechanics of Nonconservative Systems" by Chad Galley (http://authors.library.caltech.edu/38643/1/PhysRevLett.110.174301.pdf)
[9] "Radiation reaction at the level of the action" by Ofek Birnholtz, Shahar Hadar, and Barak Kol (http://arxiv.org/abs/1402.2610)
[10] "Theory of post-Newtonian radiation and reaction" by Ofek Birnholtz, Shahar Hadar, and Barak Kol (http://journals.aps.org/prd/abstract/10.1103/PhysRevD.88.104037)

Further reading
Landau, L.D. and Lifshitz, E.M. Mechanics, Pergamon Press.
Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988).
Goldstein, Herbert, Classical Mechanics, Addison Wesley.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University
Press, 2013.

External links
Tong, David, Classical Dynamics (http://www.damtp.cam.ac.uk/user/tong/dynamics.html) Cambridge
lecture notes
Principle of least action interactive (http://www.eftaylor.com/software/ActionApplets/LeastAction.html)
Excellent interactive explanation/webpage
Joseph Louis de Lagrange - Œuvres complètes (http://portail.mathdoc.fr/cgi-bin/
oetoc?id=OE_LAGRANGE__1) (Gallica-Math)

Hamiltonian system
This article is about the classical theory. For other uses, see Hamiltonian.
A Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system
describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field.
These systems can be studied in both Hamiltonian mechanics and dynamical systems theory.

Overview
Informally, a Hamiltonian system is a mathematical formalism developed by Hamilton to describe the evolution
equations of a physical system. The advantage of this description is that it gives important insight about the
dynamics, even if the initial value problem cannot be solved analytically. One example is the planetary movement of
three bodies: even if there is no simple solution to the general problem, Poincaré showed for the first time that it
exhibits deterministic chaos.
Formally, a Hamiltonian system is a dynamical system completely described by the scalar function H(\mathbf{q}, \mathbf{p}, t), the Hamiltonian. The state of the system, (\mathbf{q}, \mathbf{p}), is described by the generalized momentum \mathbf{p} and the generalized position \mathbf{q}, where both \mathbf{p} and \mathbf{q} are vectors with the same dimension N. So, the system is completely described by the 2N-dimensional vector (\mathbf{q}, \mathbf{p}), and the evolution equation is given by Hamilton's equations:

\dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}, \qquad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}

The trajectory (\mathbf{q}(t), \mathbf{p}(t)) is the solution of the initial value problem defined by Hamilton's equations and the initial condition (\mathbf{q}(0), \mathbf{p}(0)) = (\mathbf{q}_0, \mathbf{p}_0).

Time independent Hamiltonian system


If the Hamiltonian is not explicitly time dependent, i.e. if H(\mathbf{q}, \mathbf{p}, t) = H(\mathbf{q}, \mathbf{p}), then the Hamiltonian does not vary with time:

\frac{\mathrm{d}H}{\mathrm{d}t} = \frac{\partial H}{\partial \mathbf{q}}\cdot\dot{\mathbf{q}} + \frac{\partial H}{\partial \mathbf{p}}\cdot\dot{\mathbf{p}} = \frac{\partial H}{\partial \mathbf{q}}\cdot\frac{\partial H}{\partial \mathbf{p}} - \frac{\partial H}{\partial \mathbf{p}}\cdot\frac{\partial H}{\partial \mathbf{q}} = 0

and thus the Hamiltonian is a constant of motion, whose value equals the total energy of the system: H = E.
Examples of such systems are the pendulum, the harmonic oscillator and dynamical billiards.

Example
Main article: Simple harmonic motion
One example of a time-independent Hamiltonian system is the harmonic oscillator. Consider the system defined by the coordinates q and p, whose Hamiltonian is given by

H = \frac{p^2}{2m} + \frac{1}{2} m\omega^2 q^2

The Hamiltonian of this system does not depend on time and thus the energy of the system is conserved.
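A minimal numerical sketch of this example, using a standard harmonic-oscillator Hamiltonian (the specific values of m, ω and the step size are arbitrary choices): Hamilton's equations give q̇ = ∂H/∂p and ṗ = −∂H/∂q, and stepping them with the semi-implicit (symplectic) Euler scheme keeps H nearly constant.

```python
# Hamilton's equations for H = p^2/(2m) + (1/2) m w^2 q^2.
m, w, dt = 1.0, 2.0, 1e-3
q, p = 1.0, 0.0

H = lambda q, p: p * p / (2 * m) + 0.5 * m * w * w * q * q
H0 = H(q, p)
for _ in range(10000):
    p -= m * w * w * q * dt        # pdot = -dH/dq
    q += (p / m) * dt              # qdot =  dH/dp (using the updated p)
assert abs(H(q, p) - H0) < 1e-2    # energy conserved up to O(dt)
```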


Symplectic structure
One important property of a Hamiltonian dynamical system is that it has a symplectic structure. Writing

\mathbf{x} = \begin{pmatrix} \mathbf{q} \\ \mathbf{p} \end{pmatrix}

the evolution equation of the dynamical system can be written as

\dot{\mathbf{x}} = J\,\nabla H(\mathbf{x})

where

J = \begin{pmatrix} 0 & I_N \\ -I_N & 0 \end{pmatrix}

and I_N is the N×N identity matrix.

One important consequence of this property is that an infinitesimal phase-space volume is preserved. A corollary of this is Liouville's theorem: the phase-space volume of a closed surface is preserved under time evolution.
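Liouville's theorem can be illustrated in miniature: evolve three nearby phase-space points of a harmonic oscillator with a symplectic Euler step (which, like the exact flow, is an area-preserving map) and check that the triangle they span keeps its area. All values below are illustrative choices:

```python
# Phase-space area preservation for H = p^2/(2m) + (1/2) m w^2 q^2.
m, w, dt = 1.0, 2.0, 1e-2

def step(q, p):
    p = p - m * w * w * q * dt     # update p first, then q: det(Jacobian) = 1
    q = q + (p / m) * dt
    return q, p

def tri_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

pts = [(1.0, 0.0), (1.01, 0.0), (1.0, 0.01)]
A0 = tri_area(pts)
for _ in range(5000):
    pts = [step(q, p) for q, p in pts]
assert abs(tri_area(pts) - A0) < 1e-9   # area unchanged by the flow
```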

Examples
Dynamical billiards
Planetary systems, more specifically, the n-body problem.
Canonical general relativity

References
Further reading
Almeida, A. M. (1992). Hamiltonian systems: Chaos and quantization. Cambridge monographs on mathematical physics. Cambridge: Cambridge University Press.
Audin, M., & Babbitt, D. G. (2008). Hamiltonian systems and their integrability. Providence, R.I: American
Mathematical Society
Dickey, L. A. (2003). Soliton equations and Hamiltonian systems. Advanced series in mathematical physics, v.
26. River Edge, NJ: World Scientific.
Treschev, D., & Zubelevich, O. (2010). Introduction to the perturbation theory of Hamiltonian systems.
Heidelberg: Springer
Zaslavsky, G. M. (2007). The physics of chaos in Hamiltonian systems. London: Imperial College Press.

External links
Hamiltonian Systems (http://www.scholarpedia.org/article/Hamiltonian_Systems) at Scholarpedia, curated by
James Meiss.

Generalized coordinates

In analytical mechanics, specifically the study of the rigid body dynamics of multibody systems, the term
generalized coordinates refers to the parameters that describe the configuration of the system relative to some
reference configuration. These parameters must uniquely define the configuration of the system relative to the
reference configuration. The generalized velocities are the time derivatives of the generalized coordinates of the
system.
An example of a generalized coordinate is the angle that locates a point moving on a circle. The adjective
"generalized" distinguishes these parameters from the traditional use of the term coordinate to refer to Cartesian
coordinates: for example, describing the location of the point on the circle using x and y coordinates.
Although there may be many choices for generalized coordinates for a physical system, parameters are usually
selected which are convenient for the specification of the configuration of the system and which make the solution of
its equations of motion easier. If these parameters are independent of one another, then the number of independent
generalized coordinates is defined by the number of degrees of freedom of the system.

Constraint equations
Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent, which means that they are related by one or more constraint equations.

Holonomic constraints
If the constraints introduce relations between the generalized coordinates qi, i = 1, ..., n and time, of the form

f_j(q_1, \dots, q_n, t) = 0, \qquad j = 1, \dots, k

they are called holonomic. These constraint equations define a manifold in the space of generalized coordinates qi, i = 1, ..., n, known as the configuration manifold of the system. The degree of freedom of the system is d = n − k, which is the number of generalized coordinates minus the number of constraints.:260

[Figure: Generalized coordinates for one degree of freedom (of a particle moving in a complicated path). Instead of using all three Cartesian coordinates x, y, z (or other standard coordinate systems), only one is needed and is completely arbitrary to define the position. Four possibilities are shown. Top: distances along some fixed line, bottom left: an angle relative to some baseline, bottom right: the arc length of the path the particle takes. All are defined relative to a zero position, again arbitrarily defined.]

It can be advantageous to choose independent generalized coordinates, as is done in Lagrangian mechanics, because this eliminates the need for constraint equations. However, in some situations, it is not possible to identify an unconstrained set. For example, when dealing with nonholonomic constraints or when trying to find the force due to any constraint, holonomic or not, dependent generalized coordinates must be employed. Sometimes independent generalized coordinates are called internal coordinates because they are mutually independent, otherwise unconstrained, and together give the position of the system.


Non-holonomic constraints
A mechanical system can involve constraints on both the generalized coordinates and their derivatives. Constraints of this type are known as non-holonomic. First-order non-holonomic constraints have the form

g(q_1, \dots, q_n, \dot{q}_1, \dots, \dot{q}_n, t) = 0

An example of such a constraint is a rolling wheel or knife-edge that constrains the direction of the velocity vector. Non-holonomic constraints can also involve next-order derivatives such as generalized accelerations.

[Figure: Top: one degree of freedom, bottom: two degrees of freedom, left: an open curve F (parameterized by t) and surface F, right: a closed curve C and closed surface S. The equations shown are the constraint equations. Generalized coordinates are chosen and defined with respect to these curves (one per degree of freedom), and simplify the analysis since even complicated curves are described by the minimum number of coordinates required.]

Example: Simple pendulum


The relationship between the use of generalized coordinates and
Cartesian coordinates to characterize the movement of a mechanical
system can be illustrated by considering the constrained dynamics of a
simple pendulum.[1]

Coordinates
A simple pendulum consists of a mass M hanging from a pivot point so
that it is constrained to move on a circle of radius L. The position of
the mass is defined by the coordinate vector r=(x, y) measured in the
plane of the circle such that y is in the vertical direction. The
coordinates x and y are related by the equation of the circle

x^2 + y^2 = L^2

that constrains the movement of M. This equation also provides a constraint on the velocity components,

x\dot{x} + y\dot{y} = 0

[Figure: Dynamic model of a simple pendulum.]

Now introduce the parameter θ, which defines the angular position of M from the vertical direction. It can be used to define the coordinates x and y, such that

x = L\sin\theta, \qquad y = -L\cos\theta

The use of θ to define the configuration of this system avoids the constraint provided by the equation of the circle.

Virtual work
Notice that the force of gravity acting on the mass M is formulated in the usual Cartesian coordinates,

\mathbf{F} = (0, -Mg)

where g is the acceleration of gravity.

The virtual work of gravity on the mass M as it follows the trajectory r is given by

\delta W = \mathbf{F} \cdot \delta\mathbf{r}

The variation δr can be computed in terms of the coordinates x and y, or in terms of the parameter θ,

\delta\mathbf{r} = (\delta x, \delta y), \qquad \delta\mathbf{r} = (L\cos\theta\,\delta\theta,\; L\sin\theta\,\delta\theta)

Thus, the virtual work is given by

\delta W = -Mg\,\delta y = -MgL\sin\theta\,\delta\theta

Notice that the coefficient of δy is the y-component of the applied force. In the same way, the coefficient of δθ is known as the generalized force along the generalized coordinate θ, given by

F_\theta = -MgL\sin\theta
Kinetic energy
To complete the analysis consider the kinetic energy T of the mass, using the velocity,

\mathbf{v} = (\dot{x}, \dot{y}) = (L\cos\theta\,\dot{\theta},\; L\sin\theta\,\dot{\theta})

so,

T = \frac{1}{2} M\,\mathbf{v}\cdot\mathbf{v} = \frac{1}{2} M L^2 \dot{\theta}^2
Lagrange's equations
Lagrange's equations for the pendulum in terms of the coordinates x and y are given by,

\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial T}{\partial \dot{x}} - \frac{\partial T}{\partial x} = F_x + \lambda\frac{\partial f}{\partial x}, \qquad \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial T}{\partial \dot{y}} - \frac{\partial T}{\partial y} = F_y + \lambda\frac{\partial f}{\partial y}

where f = x^2 + y^2 - L^2 = 0 is the constraint. This yields the three equations

M\ddot{x} = 2\lambda x, \qquad M\ddot{y} = -Mg + 2\lambda y, \qquad x^2 + y^2 = L^2

in the three unknowns, x, y and λ.

Using the parameter θ, Lagrange's equations take the form

\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial T}{\partial \dot{\theta}} - \frac{\partial T}{\partial \theta} = F_\theta

which becomes,

M L^2 \ddot{\theta} = -MgL\sin\theta

or

\ddot{\theta} + \frac{g}{L}\sin\theta = 0
This formulation yields one equation because there is a single parameter and no constraint equation.
This shows that the parameter θ is a generalized coordinate that can be used in the same way as the Cartesian coordinates x and y to analyze the pendulum.
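The single equation in θ reduces to θ̈ = −(g/L) sin θ, which is easy to step numerically; for a small amplitude the swing should return to its start after the familiar period 2π√(L/g). The values below are illustrative:

```python
import math

# Integrate thetaddot = -(g/L)*sin(theta) for one small-angle period.
g, L = 9.81, 1.0
th, om = 0.05, 0.0                 # small initial angle, released from rest
dt = 1e-5
T = 2 * math.pi * math.sqrt(L / g)

for _ in range(int(T / dt)):
    om -= (g / L) * math.sin(th) * dt   # symplectic Euler step
    th += om * dt
assert abs(th - 0.05) < 1e-3       # back near the starting angle
```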

Example: Double pendulum


The benefits of generalized coordinates become apparent with the
analysis of a double pendulum. For the two masses mi, i=1, 2, let
ri=(xi, yi), i=1, 2 define their two trajectories. These vectors satisfy the
two constraint equations,

A double pendulum

The formulation of Lagrange's equations for this system yields six equations in the four Cartesian coordinates x_i, y_i, i = 1, 2 and the two Lagrange multipliers λ_i, i = 1, 2 that arise from the two constraint equations.

Coordinates
Now introduce the generalized coordinates θ_i, i = 1, 2 that define the angular position of each mass of the double pendulum from the vertical direction. In this case, we have

The force of gravity acting on the masses is given by,

where g is the acceleration of gravity. Therefore, the virtual work of gravity on the two masses as they follow the
trajectories ri, i=1,2 is given by
The variations δr_i, i = 1, 2 can be computed to be


Virtual work
Thus, the virtual work is given by

and the generalized forces are

Kinetic energy
Compute the kinetic energy of this system to be

Lagrange's equations
Lagrange's equations yield two equations in the unknown generalized coordinates θ_i, i = 1, 2, given by[2]
and

The use of the generalized coordinates θ_i, i = 1, 2 provides an alternative to the Cartesian formulation of the dynamics of the double pendulum.

Generalized coordinates and virtual work


The principle of virtual work states that if a system is in static equilibrium, the virtual work of the applied forces is zero for all virtual movements of the system from this state, that is, δW = 0 for any variation δr. When formulated in terms of generalized coordinates, this is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is F_i = 0.
Let the forces on the system be F_j, j = 1, ..., m, applied to points with Cartesian coordinates r_j, j = 1, ..., m; then the virtual work generated by a virtual displacement from the equilibrium position is given by

\delta W = \sum_{j=1}^{m} \mathbf{F}_j \cdot \delta\mathbf{r}_j

where δr_j, j = 1, ..., m denote the virtual displacements of each point in the body.
Now assume that each δr_j depends on the generalized coordinates q_i, i = 1, ..., n; then

\delta\mathbf{r}_j = \sum_{i=1}^{n} \frac{\partial\mathbf{r}_j}{\partial q_i}\,\delta q_i

and

\delta W = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} \mathbf{F}_j \cdot \frac{\partial\mathbf{r}_j}{\partial q_i} \right) \delta q_i

The n terms

F_i = \sum_{j=1}^{m} \mathbf{F}_j \cdot \frac{\partial\mathbf{r}_j}{\partial q_i}

are the generalized forces acting on the system. Kane[3] shows that these generalized forces can also be formulated in terms of the ratio of time derivatives,

F_i = \sum_{j=1}^{m} \mathbf{F}_j \cdot \frac{\partial\mathbf{v}_j}{\partial \dot{q}_i}

where vj is the velocity of the point of application of the force Fj.


In order for the virtual work to be zero for an arbitrary virtual displacement, each of the generalized forces must be zero, that is

F_i = 0, \qquad i = 1, \dots, n

References
[1] Richard Fitzpatrick, Newtonian Dynamics, http://farside.ph.utexas.edu/teaching/336k/Newton/Newtonhtml.html (http://farside.ph.utexas.edu/teaching/336k/Newton/node90.html).
[2] Eric W. Weisstein, Double Pendulum (http://scienceworld.wolfram.com/physics/DoublePendulum.html), scienceworld.wolfram.com. 2007.
[3] T. R. Kane and D. A. Levinson, Dynamics: theory and applications, McGraw-Hill, New York, 1985.

Legendre transformation
In mathematics, the Legendre transformation or Legendre transform, named after Adrien-Marie Legendre, is an involutive transformation on the real-valued convex functions of one real variable. Its generalization to convex functions of affine spaces is sometimes called the Legendre–Fenchel transformation. It is commonly used in thermodynamics and to derive the Hamiltonian formalism of classical mechanics out of the Lagrangian formulation, as well as in the solution of differential equations of several variables.

[Figure: The function f(x) is defined on the interval [a, b]. The difference px − f(x) takes a maximum at x'. Thus, f*(p) = px' − f(x').]

For sufficiently smooth functions on the real line, the Legendre transform g of a function f can be specified, up to an additive constant, by the condition that the first derivatives are inverse functions of one another:

Df = (Dg)^{-1}
Definition
Let I \subseteq \mathbb{R} be an interval, and f : I \to \mathbb{R} a convex function; then its Legendre transform is the function f^* : I^* \to \mathbb{R} defined by

f^*(x^*) = \sup_{x \in I}\left(x^* x - f(x)\right)

with domain

I^* = \left\{ x^* \in \mathbb{R} : \sup_{x \in I}\left(x^* x - f(x)\right) < \infty \right\}

The transform is always well-defined when f(x) is convex.

The generalization to convex functions f : X \to \mathbb{R} on a convex set X \subseteq \mathbb{R}^n is straightforward: f^* : X^* \to \mathbb{R} has domain

X^* = \left\{ x^* \in \mathbb{R}^n : \sup_{x \in X}\left(\langle x^*, x\rangle - f(x)\right) < \infty \right\}

and is defined by

f^*(x^*) = \sup_{x \in X}\left(\langle x^*, x\rangle - f(x)\right), \qquad x^* \in X^*

where \langle x^*, x\rangle denotes the dot product of x^* and x.

The function f^* is called the convex conjugate function of f. For historical reasons (rooted in analytic mechanics), the conjugate variable is often denoted p, instead of x^*. If the convex function f is defined on the whole line and is everywhere differentiable, then

f^*(p) = \sup_{x}\left(px - f(x)\right)

can be interpreted as the negative of the y-intercept of the tangent line to the graph of f that has slope p.
The Legendre transformation is an application of the duality relationship between points and lines. The functional
relationship specified by f can be represented equally well as a set of (x, y) points, or as a set of tangent lines
specified by their slope and intercept values.

Properties
The Legendre transform of a convex function is convex.
Let us show this for the case of a doubly differentiable f with a non-zero (and hence positive, due to convexity) second derivative.
For a fixed p, let x maximize px − f(x). Then f^*(p) = px − f(x), noting that x depends on p. The maximization condition gives

f'(x) = p

The derivative of f is itself differentiable with a positive derivative and hence strictly monotonic and invertible. Thus x = g(p), where g = (f')^{-1}, meaning that g is defined so that f'(g(p)) = p.
Note that g is also differentiable, with the following derivative

g'(p) = \frac{1}{f''(g(p))}

Thus f^*(p) = pg(p) − f(g(p)) is the composition of differentiable functions, hence differentiable.
Applying the product rule and the chain rule we have

\frac{\mathrm{d}f^*}{\mathrm{d}p} = g(p) + \left(p - f'(g(p))\right)g'(p) = g(p)

giving

\frac{\mathrm{d}^2 f^*}{\mathrm{d}p^2} = g'(p) = \frac{1}{f''(g(p))} > 0

so f^* is convex.
As we now show, it follows that the Legendre transformation is an involution, i.e., f^{**} = f. Using the above equalities for g(p), f^*(p) and its derivative, the maximizing p in \sup_p\left(xp - f^*(p)\right) satisfies (f^*)'(p) = g(p) = x, i.e. p = f'(x), so

f^{**}(x) = x f'(x) - f^*\!\left(f'(x)\right) = x f'(x) - \left(f'(x)\,x - f(x)\right) = f(x)


Examples
Example 1
Let f(x) = cx² defined on R, where c > 0 is a fixed constant.

For x* fixed, the function x*x − f(x) = x*x − cx² of x has first derivative x* − 2cx and second derivative −2c; there
is one stationary point at x = x*/(2c), which is always a maximum. Thus, I* = R and

f*(x*) = c*(x*)²,

where c* = 1/(4c).

Clearly,

f**(x) = (1/(4c*)) x² = cx²,

namely f** = f.
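The closed form in Example 1 can be checked numerically by maximizing x*x − f(x) over a grid. This is an illustrative sketch; the value of c, the grid bounds, and the helper name `legendre` are arbitrary choices, not from the article.

```python
# Numerical check of Example 1: for f(x) = c x^2 the Legendre
# transform is f*(p) = p^2 / (4c), attained at x = p / (2c).

def legendre(f, p, xs):
    """Approximate f*(p) = sup_x (p*x - f(x)) over sample points xs."""
    return max(p * x - f(x) for x in xs)

c = 3.0
f = lambda x: c * x * x
xs = [i / 1000.0 for i in range(-5000, 5001)]  # grid on [-5, 5]

for p in (-2.0, 0.0, 1.5):
    approx = legendre(f, p, xs)
    exact = p * p / (4.0 * c)    # the closed form from Example 1
    assert abs(approx - exact) < 1e-3
```

The same grid-based helper can be reused to test any of the one-dimensional examples in this section.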

Example 2
Let f(x) = x² for x ∈ I = [2, 3].

For x* fixed, x*x − f(x) is continuous on the compact interval I, hence it always takes a finite maximum on it; it
follows that I* = R. The stationary point x = x*/2 is in the domain [2, 3] if and only if 4 ≤ x* ≤ 6, otherwise the
maximum is taken either at x = 2, or x = 3. It follows that

f*(x*) = 2x* − 4 for x* < 4,  (x*)²/4 for 4 ≤ x* ≤ 6,  3x* − 9 for x* > 6.
Example 3
The function f(x) = cx is convex, for every x (strict convexity is not required for the Legendre transformation to be
well defined). Clearly x*x − f(x) = (x* − c)x is never bounded from above as a function of x, unless x* − c = 0. Hence
f* is defined on I* = {c} and f*(c) = 0.

One may check involutivity: of course, xx* − f*(x*) is always bounded as a function of x* ∈ {c}, hence I** = R.
Then, for all x one has

sup_{x* ∈ {c}} (xx* − f*(x*)) = xc,

and thus f**(x) = cx = f(x).


Example 4 (many variables)

Let f(x) = ⟨x, Ax⟩ be defined on X = Rn, where A is a real, positive definite matrix. Then f is convex, and
⟨p, x⟩ − f(x) has gradient p − 2Ax and Hessian −2A, which is negative; hence the stationary point x = A⁻¹p/2 is a
maximum. We have X* = Rn, and

f*(p) = (1/4) ⟨p, A⁻¹p⟩.

An equivalent definition in the differentiable case


Equivalently, two convex functions f and g defined on the whole line are said to be Legendre transforms of each
other if their first derivatives are inverse functions of each other,

Df = (Dg)⁻¹,

in which case one writes equivalently f* = g and g* = f. We can see this by first taking the derivative of f*,

(f*)′(p) = g(p).

This equation, taken together with the previous equation resulting from the maximization condition, results in the
following pair of reciprocal equations,

p = f′(x),  x = (f*)′(p).

From these, it is evident that Df and Df* are inverses, as stated. One may exemplify this by considering f(x) = exp x
and hence g(p) = p log p − p.

They are unique, up to an additive constant, which is fixed by the additional requirement that

f(x) + f*(p) = xp.
The symmetry of this expression underscores that the Legendre transformation is its own inverse (involutive).
In practical terms, given f(x), the parametric plot of xf′(x) − f(x) versus f′(x) amounts to the graph of g(p) versus p.
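This parametric construction can be sketched numerically, using f(x) = exp x and its conjugate g(p) = p log p − p from the example above; the sample points are arbitrary illustrative choices.

```python
import math

# For f(x) = exp(x), the points (f'(x), x f'(x) - f(x)) should lie
# on the graph of the conjugate g(p) = p log p - p.

f = math.exp
fprime = math.exp                         # f'(x) = exp(x)
g = lambda p: p * math.log(p) - p

for x in (-1.0, 0.0, 0.5, 2.0):
    p = fprime(x)                         # slope of the tangent at x
    value = x * fprime(x) - f(x)          # x f'(x) - f(x)
    assert abs(value - g(p)) < 1e-12      # the point lies on g
```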
In some cases (e.g. thermodynamic potentials, below), a non-standard requirement is used, amounting to an
alternative definition of f* with a minus sign,

f(x) − f*(p) = xp.
Behavior of differentials under Legendre transforms


The Legendre transform is linked to integration by parts, p dx = d(px) − x dp.
Let f be a function of two independent variables x and y, with the differential

df = (∂f/∂x) dx + (∂f/∂y) dy.

Assume that it is convex in x for all y, so that one may perform the Legendre transform in x, with p = ∂f/∂x the
variable conjugate to x. Since the new independent variable is p, the differentials dx and dy devolve to dp and dy,
i.e., we build another function with its differential expressed in terms of the new basis dp and dy. We thus consider
the function g(p, y) = f − px, so that

dg = df − p dx − x dp = −x dp + (∂f/∂y) dy.

The function g(p, y) is the Legendre transform of f(x, y), where only the independent variable x has been supplanted
by p. This is widely used in thermodynamics, as illustrated below.

Applications
Hamilton-Lagrange mechanics
A Legendre transform is used in classical mechanics to derive the Hamiltonian formulation from the Lagrangian
formulation, and conversely. A typical Lagrangian has the form

L(v, q) = (1/2) ⟨v, Mv⟩ − V(q),

where (v, q) are coordinates on Rn × Rn, M is a positive definite real matrix, and ⟨·, ·⟩ is the dot product.

For every q fixed, L(v, q) is a convex function of v, while V(q) plays the role of a constant.

Hence the Legendre transform of L(v, q) as a function of v is the Hamiltonian function,

H(p, q) = (1/2) ⟨p, M⁻¹p⟩ + V(q).
In a more general setting, (v, q) are local coordinates on the tangent bundle TM of a manifold M. For each q,
L(v, q) is a convex function on the tangent space at q. The Legendre transform gives the Hamiltonian H(p, q) as a
function of the coordinates (p, q) of the cotangent bundle T*M; the inner product used to define the Legendre
transform is inherited from the pertinent canonical symplectic structure.

Thermodynamics
The strategy behind the use of Legendre transforms in thermodynamics is to shift from a function that depends on a
variable to a new (conjugate) function that depends on a new variable, the conjugate of the original one. The new
variable is the partial derivative of the original function with respect to the original variable. The new function is the
difference between the original function and the product of the old and new variables. Typically, this transformation
is useful because it shifts the dependence of, e.g., the energy from an extensive variable to its conjugate intensive
variable, which can usually be controlled more easily in a physical experiment.
For example, the internal energy is an explicit function of the extensive variables entropy, volume, and chemical
composition,

U = U(S, V, {N_i}),

which has a total differential

dU = T dS − P dV + Σ_i μ_i dN_i.
By using the (non-standard) Legendre transform of the internal energy, U, with respect to volume, V, it is possible to
define the enthalpy as

H = U + PV,
which is an explicit function of the pressure, P. The enthalpy contains all of the same information as the internal
energy, but is often easier to work with in situations where the pressure is constant.


It is likewise possible to shift the dependence of the energy from the extensive variable of entropy, S, to the (often
more convenient) intensive variable T, resulting in the Helmholtz and Gibbs free energies. The Helmholtz free
energy, A, and Gibbs energy, G, are obtained by performing Legendre transforms of the internal energy and
enthalpy, respectively,

A = U − TS,  G = H − TS.
The Helmholtz free energy is often the most useful thermodynamic potential when temperature and volume are held
constant, while the Gibbs energy is often the most useful when temperature and pressure are held constant.

An example variable capacitor


As another example from physics, consider a parallel-plate capacitor, in which the plates can move relative to one
another. Such a capacitor would allow transfer of the electric energy which is stored in the capacitor into external
mechanical work, done by the force acting on the plates. One may think of the electric charge as analogous to the
"charge" of a gas in a cylinder, with the resulting mechanical force exerted on a piston.
Compute the force on the plates as a function of x, the distance which separates them. To find the force, compute the
potential energy, and then apply the definition of force as the gradient of the potential energy function.
The energy stored in a capacitor of capacitance C(x) and charge Q is

U(Q, x) = (1/2) Q²/C(x),
where the dependence on the area of the plates, the dielectric constant of the material between the plates, and the
separation x are abstracted away as the capacitance C(x). (For a parallel plate capacitor, this is proportional to the
area of the plates and inversely proportional to the separation.)
The force F between the plates due to the electric field is then

F(x) = −∂U/∂x.
If the capacitor is not connected to any circuit, then the charges on the plates remain constant as they move, and the
force is the negative gradient of the electrostatic energy,

F(x) = −∂U/∂x = (Q²/(2C(x)²)) dC/dx.
However, suppose, instead, that the voltage between the plates V is maintained constant by connection to a battery,
which is a reservoir for charge at constant potential difference; now the charge is variable instead of the voltage, its
Legendre conjugate. To find the force, first compute the non-standard Legendre transform,

U* = U − QV = −(1/2) C(x) V².
The force now becomes the negative gradient of this Legendre transform, still pointing in the same direction,

F(x) = −∂U*/∂x = (V²/2) dC/dx.
The two conjugate energies happen to stand opposite to each other, only because of the linearity of the
capacitance, except that now Q is no longer a constant. They reflect the two different pathways of storing energy into
the capacitor, resulting in, for instance, the same "pull" between a capacitor's plates.
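The agreement of the two forces for a linear capacitance can be illustrated numerically. The parameter values, the finite-difference helper, and the variable names below are illustrative assumptions, not from the article.

```python
# Variable parallel-plate capacitor: the force computed at constant
# charge Q equals the force computed at constant voltage V once
# Q = C(x) V, because the capacitance is linear in the charge.

eps_A = 2.0                       # permittivity * plate area (illustrative units)
C = lambda x: eps_A / x           # parallel-plate capacitance vs. separation x

def deriv(func, x, h=1e-6):
    """Central finite difference."""
    return (func(x + h) - func(x - h)) / (2 * h)

x0, V = 0.5, 3.0                  # separation and battery voltage (illustrative)
Q = C(x0) * V                     # the charge held at this separation

U = lambda x: Q * Q / (2 * C(x))            # energy at constant charge
U_star = lambda x: -C(x) * V * V / 2        # non-standard transform U - QV

F_const_Q = -deriv(U, x0)         # force with the capacitor disconnected
F_const_V = -deriv(U_star, x0)    # force with the battery attached

assert abs(F_const_Q - F_const_V) < 1e-4    # same "pull" either way
```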


Probability theory
In large deviations theory, the rate function is defined as the Legendre transformation of the logarithm of the
moment generating function of a random variable. An important application of the rate function is in the calculation
of tail probabilities of sums of i.i.d. random variables.

Geometric interpretation
For a strictly convex function, the Legendre transformation can be interpreted as a mapping between the graph of the
function and the family of tangents of the graph. (For a function of one variable, the tangents are well-defined at all
but at most countably many points, since a convex function is differentiable at all but at most countably many
points.)
The equation of a line with slope p and y-intercept b is given by y = px + b. For this line to be tangent to the graph of
a function f at the point (x0, f(x0)) requires

f(x0) = p x0 + b

and

p = f′(x0).

f′ is strictly monotone as the derivative of a strictly convex function. The second equation can be solved for x0,
allowing elimination of x0 from the first, giving the y-intercept b of the tangent as a function of its slope p,

b = f(x0) − p x0 = −f*(p).
Here, f* denotes the Legendre transform of f.

The family of tangents of the graph of f parameterized by p is therefore given by

y = px − f*(p),

or, written implicitly, by the solutions of the equation

F(x, y, p) = y + f*(p) − px = 0.
The graph of the original function can be reconstructed from this family of lines as the envelope of this family by
demanding

∂F(x, y, p)/∂p = 0.

Eliminating p from these two equations gives

y = sup_p (px − f*(p)).

Identifying y with f(x) and recognizing the right side of the preceding equation as the Legendre transform of f*,
yields

f(x) = f**(x).

Legendre transformation in more than one dimension


For a differentiable real-valued function on an open subset U of Rn the Legendre conjugate of the pair (U, f) is
defined to be the pair (V, g), where V is the image of U under the gradient mapping Df, and g is the function on V
given by the formula

g(y) = ⟨y, x⟩ − f(x),  where x solves y = Df(x),

and ⟨·, ·⟩ is the scalar product on Rn. The multidimensional transform can be interpreted as an encoding of the convex hull of
the function's epigraph in terms of its supporting hyperplanes.[1]
Alternatively, if X is a real vector space and Y is its dual vector space, then for each point x of X and y of Y, there is a
natural identification of the cotangent spaces T*Xx with Y and T*Yy with X. If f is a real differentiable function over
X, then its differential df is a section of the cotangent bundle T*X and as such, we can construct a map from X to Y.
Similarly, if g is a real differentiable function over Y, then dg defines a map from Y to X. If both maps happen to be
inverses of each other, we say we have a Legendre transform.
When the function is not differentiable, the Legendre transform can still be extended, and is known as the
Legendre-Fenchel transformation. In this more general setting, a few properties are lost: for example, the Legendre
transform is no longer its own inverse (unless there are extra assumptions, like convexity).

Further properties
Scaling properties
The Legendre transformation has the following scaling properties: For a > 0,

f(x) = a·g(x)  ⇒  f*(p) = a·g*(p/a),
f(x) = g(a·x)  ⇒  f*(p) = g*(p/a).

It follows that if a function is homogeneous of degree r, then its image under the Legendre transformation is a
homogeneous function of degree s, where 1/r + 1/s = 1. (Thus f(x) = x^r/r, with r > 1, implies f*(p) = p^s/s.) Thus, the
only monomial whose degree is invariant under Legendre transform is the quadratic.
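The monomial case f(x) = x^r/r ⇒ f*(p) = p^s/s can be checked numerically by maximizing over a grid; restricting to x ≥ 0, with grid bounds and the exponent r chosen purely for illustration.

```python
# Check that f(x) = x^r / r has conjugate f*(p) = p^s / s,
# where 1/r + 1/s = 1 (here for x, p >= 0).

def legendre(f, p, xs):
    """Approximate f*(p) = sup_x (p*x - f(x)) over sample points xs."""
    return max(p * x - f(x) for x in xs)

r = 3.0
s = r / (r - 1.0)                            # conjugate exponent, 1/r + 1/s = 1
f = lambda x: x ** r / r
xs = [i / 1000.0 for i in range(0, 4001)]    # grid on [0, 4]

for p in (0.5, 1.0, 2.0):
    assert abs(legendre(f, p, xs) - p ** s / s) < 1e-3
```

For r = 2 the transform reproduces a quadratic, consistent with the invariance noted above.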

Behavior under translation

f(x) = g(x) + b  ⇒  f*(p) = g*(p) − b,
f(x) = g(x + y)  ⇒  f*(p) = g*(p) − p·y.
Behavior under inversion

Behavior under linear transformations


Let A : Rn → Rm be a linear transformation. For any convex function f on Rn, one has

(A f)* = f* A*,

where A* is the adjoint operator of A defined by

⟨Ax, y*⟩ = ⟨x, A* y*⟩,

and A f is the push-forward of f along A,

(A f)(y) = inf { f(x) : x ∈ Rn, Ax = y }.
A closed convex function f is symmetric with respect to a given set G of orthogonal linear transformations,

f(Ax) = f(x) for all x and all A ∈ G,

if and only if f* is symmetric with respect to G.

Infimal convolution
The infimal convolution of two functions f and g is defined as

(f □ g)(x) = inf { f(x − y) + g(y) : y ∈ Rn }.

Let f1, ..., fm be proper convex functions on Rn. Then

(f1 □ ⋯ □ fm)* = f1* + ⋯ + fm*.
Fenchel's inequality
For any function f and its convex conjugate f*, Fenchel's inequality (also known as the Fenchel–Young inequality)
holds for every x ∈ X and p ∈ X*, i.e., for independent x, p pairs:

⟨p, x⟩ ≤ f(x) + f*(p).
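As a concrete instance, f(x) = x²/2 is its own conjugate, so Fenchel's inequality reduces to Young's inequality px ≤ x²/2 + p²/2; the sample points below are arbitrary.

```python
# Fenchel-Young inequality for the self-conjugate f(x) = x^2/2,
# whose conjugate is f*(p) = p^2/2: p*x <= f(x) + f*(p), with
# equality exactly when p = f'(x) = x.

f = lambda x: x * x / 2
fstar = lambda p: p * p / 2

for x in (-2.0, 0.3, 1.0):
    for p in (-1.0, 0.5, 1.0):
        assert p * x <= f(x) + fstar(p) + 1e-12

# equality when p = x:
assert abs(1.0 * 1.0 - (f(1.0) + fstar(1.0))) < 1e-12
```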

References
[1] http://maze5.net/?page_id=733

Courant, Richard; Hilbert, David (2008). Methods of Mathematical Physics 2. John Wiley & Sons. ISBN 0471504394.
Arnol'd, Vladimir Igorevich (1989). Mathematical Methods of Classical Mechanics (2nd ed.). Springer. ISBN 0-387-96890-3.
Fenchel, W. (1949). "On conjugate convex functions". Canad. J. Math. 1: 73–77.
Rockafellar, R. Tyrrell (1996) [1970]. Convex Analysis. Princeton University Press. ISBN 0-691-01586-4.
Zia, R. K. P.; Redish, E. F.; McKay, S. R. (2009). "Making sense of the Legendre transform". American Journal
of Physics 77 (7): 614. arXiv:0806.1147 (http://arxiv.org/abs/0806.1147). doi:10.1119/1.3119512 (http://dx.doi.org/10.1119/1.3119512).

Further reading
Touchette, Hugo (2005-07-27). "Legendre-Fenchel transforms in a nutshell" (http://www.maths.qmw.ac.uk/~ht/archive/lfth2.pdf) (PDF). Retrieved 2013-10-13.
Touchette, Hugo (2006-11-21). "Elements of convex analysis" (http://www.maths.qmul.ac.uk/~ht/archive/convex1.pdf) (PDF). Retrieved 2013-10-13.

External links
Legendre transform with figures (http://maze5.net/?page_id=733) at maze5.net
Legendre and Legendre-Fenchel transforms in a step-by-step explanation (http://www.onmyphd.com/?p=legendre.fenchel.transform) at onmyphd.com


Canonical coordinates

In mathematics and classical mechanics, canonical coordinates are sets of coordinates which can be used to
describe a physical system at any given point in time (locating the system within phase space). Canonical
coordinates are used in the Hamiltonian formulation of classical mechanics. A closely related concept also appears in
quantum mechanics; see the Stone–von Neumann theorem and canonical commutation relations for details.
As Hamiltonian mechanics is generalized by symplectic geometry and canonical transformations are generalized by
contact transformations, so the 19th century definition of canonical coordinates in classical mechanics may be
generalized to a more abstract 20th century definition of coordinates on the cotangent bundle of a manifold.

Definition, in classical mechanics


In classical mechanics, canonical coordinates are coordinates q_i and p_i in phase space that are used in the
Hamiltonian formalism. The canonical coordinates satisfy the fundamental Poisson bracket relations:

{q_i, q_j} = 0,  {p_i, p_j} = 0,  {q_i, p_j} = δ_ij.
A typical example of canonical coordinates is for q_i to be the usual Cartesian coordinates, and p_i to be the
components of momentum. Hence in general, the p_i coordinates are referred to as "conjugate momenta."

Canonical coordinates can be obtained from the generalized coordinates of the Lagrangian formalism by a Legendre
transformation, or from another set of canonical coordinates by a canonical transformation.

Definition, on cotangent bundles


Canonical coordinates are defined as a special set of coordinates on the cotangent bundle of a manifold. They are
usually written as a set of (q^i, p_i) or (x^i, p_i), with the x's or q's denoting the coordinates on the underlying
manifold and the p's denoting the conjugate momenta, which are 1-forms in the cotangent bundle at point q in
the manifold.
A common definition of canonical coordinates is any set of coordinates on the cotangent bundle that allow the
canonical one-form to be written in the form

Σ_i p_i dq^i
up to a total differential. A change of coordinates that preserves this form is a canonical transformation; these are a
special case of a symplectomorphism, which are essentially a change of coordinates on a symplectic manifold.
In the following exposition, we assume that the manifolds are real manifolds, so that cotangent vectors acting on
tangent vectors produce real numbers.


Formal development
Given a manifold Q, a vector field X on Q (or equivalently, a section of the tangent bundle TQ) can be thought of as
a function acting on the cotangent bundle, by the duality between the tangent and cotangent spaces. That is, define a
function

P_X : T*Q → R

such that

P_X(q, p) = p(X_q)

holds for all cotangent vectors p in T*_qQ. Here, X_q is a vector in T_qQ, the tangent space to the manifold Q at
point q. The function P_X is called the momentum function corresponding to X.
In local coordinates, the vector field X at point q may be written as

X = Σ_i X^i(q) ∂/∂q^i,

where the ∂/∂q^i are the coordinate frame on TQ. The conjugate momentum then has the expression

P_X(q, p) = Σ_i X^i(q) p_i,

where the p_i are defined as the momentum functions corresponding to the vectors ∂/∂q^i:

p_i = P_{∂/∂q^i}.

The q^i together with the p_j form a coordinate system on the cotangent bundle T*Q; these coordinates
are called the canonical coordinates.

Generalized coordinates
In Lagrangian mechanics, a different set of coordinates is used, called the generalized coordinates. These are
commonly denoted as (q^i, q̇^i), with q^i called the generalized position and q̇^i the generalized velocity. When a
Hamiltonian is defined on the cotangent bundle, then the generalized coordinates are related to the canonical
coordinates by means of the Hamilton–Jacobi equations.

References
Goldstein, Herbert; Poole, Charles P., Jr.; Safko, John L. (2002). Classical Mechanics (http://www.pearsonhighered.com/educator/product/Classical-Mechanics/9780201657029.page) (3rd ed.). San Francisco: Addison Wesley. pp. 347–349. ISBN 0-201-65702-3.

Generalized forces
Generalized forces find use in Lagrangian mechanics, where they play a role conjugate to generalized coordinates.
They are obtained from the applied forces, Fi, i=1,..., n, acting on a system that has its configuration defined in terms
of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the
variation of a generalized coordinate.

Virtual work
Generalized forces can be obtained from the computation of the virtual work, δW, of the applied forces.

The virtual work of the forces, Fi, acting on the particles Pi, i = 1, ..., n, is given by

δW = Σ_i Fi · δri,

where δri is the virtual displacement of the particle Pi.

Generalized coordinates
Let the position vectors of each of the particles, ri, be a function of the generalized coordinates, qj, j = 1, ..., m. Then
the virtual displacements δri are given by

δri = Σ_j (∂ri/∂qj) δqj,

where δqj is the virtual displacement of the generalized coordinate qj.


The virtual work for the system of particles becomes

δW = Σ_i Fi · Σ_j (∂ri/∂qj) δqj.

Collect the coefficients of δqj so that

δW = Σ_j (Σ_i Fi · ∂ri/∂qj) δqj.

Generalized forces
The virtual work of a system of particles can be written in the form

δW = Q_1 δq_1 + ... + Q_m δq_m,

where

Q_j = Σ_i Fi · (∂ri/∂qj),

are called the generalized forces associated with the generalized coordinates qj, j = 1, ..., m.
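As an illustrative sketch (not from the article), the generalized force for a planar pendulum can be computed from this definition with a finite-difference ∂r/∂θ, recovering the closed form Q_θ = −mgl sin θ; parameter values and helper names are arbitrary.

```python
import math

# Generalized force Q_theta = F . (dr/dtheta) for a planar pendulum:
# position r(theta) = (l sin theta, -l cos theta), gravity F = (0, -m g).

m, g, l = 1.0, 9.81, 2.0
r = lambda th: (l * math.sin(th), -l * math.cos(th))
F = (0.0, -m * g)                       # the single applied force

def Q(th, h=1e-6):
    """Finite-difference dr/dtheta, dotted with the applied force."""
    drx = (r(th + h)[0] - r(th - h)[0]) / (2 * h)
    dry = (r(th + h)[1] - r(th - h)[1]) / (2 * h)
    return F[0] * drx + F[1] * dry

for th in (0.0, 0.4, math.pi / 2):
    assert abs(Q(th) - (-m * g * l * math.sin(th))) < 1e-6
```

Here Q_θ is a torque, as expected when the generalized coordinate is an angle.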


Velocity formulation
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the
velocities of the system. For the n particle system, let the velocity of each particle Pi be Vi; then the virtual
displacement δri can also be written in the form[1]

δri = Σ_j (∂Vi/∂q̇j) δqj.

This means that the generalized force, Qj, can also be determined as

Qj = Σ_i Fi · (∂Vi/∂q̇j).

D'Alembert's principle
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force
(apparent force), called D'Alembert's principle. The inertia force of a particle, Pi, of mass mi is

Fi* = −mi Ai,

where Ai is the acceleration of the particle.

If the configuration of the particle system depends on the generalized coordinates qj, j = 1, ..., m, then the generalized
inertia force is given by

Qj* = Σ_i Fi* · (∂Vi/∂q̇j).

D'Alembert's form of the principle of virtual work yields

δW = (Q_1 + Q_1*) δq_1 + ... + (Q_m + Q_m*) δq_m.
References
[1] T. R. Kane and D. A. Levinson, Dynamics, Theory and Applications (http://www.amazon.com/Dynamics-Theory-Applications-Mechanical-Engineering/dp/0070378460), McGraw-Hill, NY, 2005.


Hamiltonian mechanics

Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics and predicts the same
outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more
abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which
later contributed to the formulation of quantum mechanics.
Hamiltonian mechanics was first formulated by William Rowan Hamilton in 1833, starting from Lagrangian
mechanics, a previous reformulation of classical mechanics introduced by Joseph Louis Lagrange in 1788.

Overview
In Hamiltonian mechanics, a classical physical system is described by a set of canonical coordinates
(q_1, ..., q_n, p_1, ..., p_n), where each component of the coordinate q_i, p_i is indexed to the
frame of reference of the system.

The time evolution of the system is uniquely defined by Hamilton's equations:[1]

dp_i/dt = −∂H/∂q_i,  dq_i/dt = +∂H/∂p_i,

[Figure: Illustration of a generalized coordinate q for one degree of freedom, of a particle moving in a complicated
path. Four possibilities of q for the particle's path are shown. For more particles, each with their own degrees of
freedom, there are more coordinates.]

where H = H(q, p, t) is the Hamiltonian, which corresponds to the total energy of the system. For a closed
system, it is the sum of the kinetic and potential energy in the system.
In classical mechanics, the time evolution is obtained by computing the total force being exerted on each particle of
the system, and from Newton's second law, the time-evolutions of both position and velocity are computed. In
contrast, in Hamiltonian mechanics, the time evolution is obtained by computing the Hamiltonian of the system in
the generalized coordinates and inserting it in the Hamiltonian equations. It is important to point out that this
approach is equivalent to the one used in Lagrangian mechanics. In fact, as will be shown below, the Hamiltonian is
the Legendre transform of the Lagrangian, and thus both approaches give the same equations for the same
generalized momentum. The main motivation to use Hamiltonian mechanics instead of Lagrangian mechanics comes
from the symplectic structure of Hamiltonian systems.
While Hamiltonian mechanics can be used to describe simple systems such as a bouncing ball, a pendulum or an
oscillating spring in which energy changes from kinetic to potential and back again over time, its strength is shown
in more complex dynamic systems, such as planetary orbits in celestial mechanics. Naturally, the more degrees of
freedom the system has, the more complicated its time evolution is and, in most cases, it becomes chaotic.

Basic physical interpretation


A simple interpretation of Hamiltonian mechanics comes from its application to a one-dimensional system
consisting of one particle of mass m under no external forces applied. The Hamiltonian represents the total energy of
the system, which is the sum of kinetic and potential energy, traditionally denoted T and V, respectively. Here q is
the coordinate and p is the momentum, mv. Then

H = T + V,  T = p²/(2m),  V = V(q).

Note that T is a function of p alone, while V is a function of q alone.


In this example, the time-derivative of the momentum p equals the Newtonian force, and so the first Hamilton
equation means that the force equals the negative gradient of potential energy. The time-derivative of q is the
velocity, and so the second Hamilton equation means that the particle's velocity equals the derivative of its kinetic
energy with respect to its momentum.
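For this one-dimensional picture, Hamilton's equations can be integrated numerically. The sketch below uses a harmonic potential V(q) = kq²/2 and the semi-implicit (symplectic) Euler method, both illustrative choices not fixed by the text.

```python
# Integrating Hamilton's equations  dq/dt = dH/dp,  dp/dt = -dH/dq
# for H = p^2/(2m) + k q^2/2 (a mass on a spring), with the
# semi-implicit (symplectic) Euler method.

m, k = 1.0, 4.0                 # illustrative mass and spring constant
q, p = 1.0, 0.0                 # initial coordinate and momentum
dt, steps = 1e-3, 10000

H = lambda q, p: p * p / (2 * m) + k * q * q / 2
H0 = H(q, p)                    # initial total energy

for _ in range(steps):
    p -= k * q * dt             # dp/dt = -dH/dq = -k q  (the force)
    q += p / m * dt             # dq/dt =  dH/dp =  p/m  (the velocity)

assert abs(H(q, p) - H0) / H0 < 1e-2   # energy nearly conserved
```

A symplectic integrator is the natural choice here because it preserves the phase-space structure that Hamiltonian mechanics is built on.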

Calculating a Hamiltonian from a Lagrangian


Given a Lagrangian in terms of the generalized coordinates q_i, generalized velocities q̇_i, and time,

L(q_i, q̇_i, t),

1. The momenta are calculated by differentiating the Lagrangian with respect to the (generalized) velocities:

p_i(q_i, q̇_i, t) = ∂L/∂q̇_i.

2. The velocities q̇_i are expressed in terms of the momenta p_i by inverting the expressions in the previous step.

3. The Hamiltonian is calculated using the usual definition of H as the Legendre transformation of L:

H = Σ_i q̇_i (∂L/∂q̇_i) − L = Σ_i q̇_i p_i − L.

Then the velocities are substituted for using the previous results.
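The three steps can be traced numerically for the simple Lagrangian L = mv²/2 − V(q); the potential and sample point below are illustrative assumptions, and the derivative in step 1 is taken by finite differences.

```python
# Steps 1-3 for L(v, q) = m v^2 / 2 - V(q), carried out numerically.

m = 2.0
V = lambda q: q ** 4 - q ** 2          # an arbitrary illustrative potential
L = lambda v, q: m * v * v / 2 - V(q)

def dL_dv(v, q, h=1e-6):
    """Central finite difference of L with respect to v."""
    return (L(v + h, q) - L(v - h, q)) / (2 * h)

q0, v0 = 0.7, 1.3
p0 = dL_dv(v0, q0)                     # step 1: p = dL/dv  (= m v here)
v_of_p = lambda p: p / m               # step 2: invert p = m v
H = p0 * v_of_p(p0) - L(v_of_p(p0), q0)   # step 3: H = p v - L

# H should equal the total energy T + V at this point:
assert abs(H - (m * v0 * v0 / 2 + V(q0))) < 1e-5
```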


Deriving Hamilton's equations


Hamilton's equations can be derived by looking at how the total differential of the Lagrangian depends on time,
generalized positions q_i and generalized velocities q̇_i:[2]

dL = Σ_i (∂L/∂q_i) dq_i + Σ_i (∂L/∂q̇_i) dq̇_i + (∂L/∂t) dt.

Now the generalized momenta were defined as

p_i = ∂L/∂q̇_i.

If this is substituted into the total differential of the Lagrangian, one gets

dL = Σ_i (∂L/∂q_i) dq_i + Σ_i p_i dq̇_i + (∂L/∂t) dt.

We can rewrite this as

dL = Σ_i (∂L/∂q_i) dq_i + Σ_i [d(p_i q̇_i) − q̇_i dp_i] + (∂L/∂t) dt

and rearrange again to get

d(Σ_i p_i q̇_i − L) = −Σ_i (∂L/∂q_i) dq_i + Σ_i q̇_i dp_i − (∂L/∂t) dt.

The term on the left-hand side is just the Hamiltonian that we have defined before, so we find that

dH = −Σ_i (∂L/∂q_i) dq_i + Σ_i q̇_i dp_i − (∂L/∂t) dt.

We can also calculate the total differential of the Hamiltonian H(q, p, t) with respect to time directly, as we did
with the Lagrangian above, yielding:

dH = Σ_i (∂H/∂q_i) dq_i + Σ_i (∂H/∂p_i) dp_i + (∂H/∂t) dt.

It follows from the previous two independent equations that their right-hand sides are equal with each other. Thus we
obtain the equation

Σ_i (∂H/∂q_i) dq_i + Σ_i (∂H/∂p_i) dp_i + (∂H/∂t) dt = −Σ_i (∂L/∂q_i) dq_i + Σ_i q̇_i dp_i − (∂L/∂t) dt.

Since this calculation was done off-shell, we can associate corresponding terms from both sides of this equation to
yield:

∂H/∂q_i = −∂L/∂q_i,  ∂H/∂p_i = q̇_i,  ∂H/∂t = −∂L/∂t.

On-shell, Lagrange's equations tell us that

d/dt (∂L/∂q̇_i) − ∂L/∂q_i = 0.

We can rearrange this to get

∂L/∂q_i = ṗ_i.

Thus Hamilton's equations hold on-shell:

ṗ_i = −∂H/∂q_i,  q̇_i = +∂H/∂p_i.

As a reformulation of Lagrangian mechanics


Starting with Lagrangian mechanics, the equations of motion are based on generalized coordinates q_j and matching
generalized velocities q̇_j. We write the Lagrangian as

L(q_j, q̇_j, t),
with the subscripted variables understood to represent all N variables of that type. Hamiltonian mechanics aims to
replace the generalized velocity variables with generalized momentum variables, also known as conjugate momenta.
By doing so, it is possible to handle certain systems, such as aspects of quantum mechanics, that would otherwise be
even more complicated.
For each generalized velocity, there is one corresponding conjugate momentum, defined as:

p_j = ∂L/∂q̇_j.
In Cartesian coordinates, the generalized momenta are precisely the physical linear momenta. In circular polar
coordinates, the generalized momentum corresponding to the angular velocity is the physical angular momentum.
For an arbitrary choice of generalized coordinates, it may not be possible to obtain an intuitive interpretation of the
conjugate momenta.
One thing which is not too obvious in this coordinate dependent formulation is that different generalized coordinates
are really nothing more than different coordinate patches on the same symplectic manifold (see Mathematical
formalism, below).
The Hamiltonian is the Legendre transform of the Lagrangian:

H = Σ_j q̇_j p_j − L.
If the transformation equations defining the generalized coordinates are independent of t, and the Lagrangian is a
sum of products of functions (in the generalized coordinates) which are homogeneous of order 0, 1 or 2, then it can
be shown that H is equal to the total energy E = T + V.
Each side in the definition of H produces a differential:

dH = Σ_j [(∂H/∂q_j) dq_j + (∂H/∂p_j) dp_j] + (∂H/∂t) dt
   = Σ_j [q̇_j dp_j + p_j dq̇_j − (∂L/∂q_j) dq_j − (∂L/∂q̇_j) dq̇_j] − (∂L/∂t) dt.
Substituting the previous definition of the conjugate momenta into this equation and matching coefficients, we
obtain the equations of motion of Hamiltonian mechanics, known as the canonical equations of Hamilton:

∂H/∂q_j = −ṗ_j,  ∂H/∂p_j = q̇_j.
Hamilton's equations consist of 2n first-order differential equations, while Lagrange's equations consist of n
second-order equations. However, Hamilton's equations usually don't reduce the difficulty of finding explicit
solutions. They still offer some advantages, since important theoretical results can be derived because coordinates
and momenta are independent variables with nearly symmetric roles.

Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, such that a
coordinate does not occur in the Hamiltonian, the corresponding momentum is conserved, and that coordinate can be
ignored in the other equations of the set. Effectively, this reduces the problem from n coordinates to (n − 1)
coordinates. In the Lagrangian framework, the result that the corresponding momentum is conserved of course still
follows immediately, but all the generalized velocities still occur in the Lagrangian; we still have to solve a system
of equations in n coordinates.
The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in the theory of classical
mechanics, and for formulations of quantum mechanics.

Geometry of Hamiltonian systems


A Hamiltonian system may be understood as a fiber bundle E over time R, with the fibers Et, t ∈ R, being the
position space. The Lagrangian is thus a function on the jet bundle J over E; taking the fiberwise Legendre transform
of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T*Et,
which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian.

Generalization to quantum mechanics through Poisson bracket


Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential
equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at
any point in time. However, the equations can be further generalized to then be extended to apply to quantum
mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over p and q to the
algebra of Moyal brackets.
Specifically, the more general form of Hamilton's equation reads

df/dt = {f, H} + ∂f/∂t,

where f is some function of p and q, and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket
without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a
Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie
algebra, as proven by H. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See the
phase space formulation and Weyl quantization). This more algebraic approach not only permits ultimately
extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson
bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.
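For one degree of freedom, the Poisson bracket is {f, g} = (∂f/∂q)(∂g/∂p) − (∂f/∂p)(∂g/∂q); a finite-difference sketch (the helper and sample points are illustrative assumptions) recovers the canonical relation {q, p} = 1 and the equation of motion dq/dt = {q, H}.

```python
def poisson(f, g, q, p, h=1e-5):
    """Finite-difference Poisson bracket {f, g} at the point (q, p)."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

H = lambda q, p: p * p / 2 + q * q / 2     # harmonic oscillator (m = k = 1)
fq = lambda q, p: q
fp = lambda q, p: p

assert abs(poisson(fq, fp, 0.3, -1.2) - 1.0) < 1e-6    # {q, p} = 1
assert abs(poisson(fq, H, 0.3, -1.2) - (-1.2)) < 1e-6  # dq/dt = {q, H} = p
assert abs(poisson(H, H, 0.3, -1.2)) < 1e-6            # {H, H} = 0: energy conserved
```

The last assertion illustrates the conserved-quantity statement: any f with {f, H} = 0 and no explicit time dependence is a constant of the motion.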

Mathematical formalism
Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system. The
function H is known as the Hamiltonian or the energy function. The symplectic manifold is then called the phase
space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector
field.
The Hamiltonian vector field (a special type of symplectic vector field) induces a Hamiltonian flow on the manifold.
This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called
the time); in other words an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each
symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced
by the Hamiltonian flow is commonly called the Hamiltonian mechanics of the Hamiltonian system.
The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the
manifold the structure of a Lie algebra.
Given a function f,

df/dt = ∂f/∂t + {f, H}.

If we have a probability distribution ρ, then (since the phase space velocity (q̇_i, ṗ_i) has zero divergence, and
probability is conserved) its convective derivative can be shown to be zero and so

∂ρ/∂t = −{ρ, H}.
This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter
family of symplectomorphisms and if { G, H } = 0, then G is conserved and the symplectomorphisms are symmetry
transformations.
A Hamiltonian may have multiple conserved quantities Gi. If the symplectic manifold has dimension 2n and there
are n functionally independent conserved quantities Gi which are in involution (i.e., { Gi, Gj } = 0), then the
Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville-integrable
Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities Gi as
coordinates; the new coordinates are called action-angle coordinates. The transformed Hamiltonian depends only on
the Gi, and hence the equations of motion have the simple form

dGi/dt = 0,   dφi/dt = Fi(G1, ..., Gn)

for some function F (Arnol'd et al., 1988). There is an entire field focusing on small deviations from integrable
systems governed by the KAM theorem.
The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic;
concepts of measure, completeness, integrability and stability are poorly defined. At this time, the study of
dynamical systems is primarily qualitative, and not a quantitative science.

Riemannian manifolds
An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as

H(q, p) = ½⟨p, p⟩q,

where ⟨·, ·⟩q is a smoothly varying inner product on the fibers T*qQ, the cotangent space to the point q in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term.
If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear
isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one
can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the
metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on
the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of
such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See
also Geodesics as Hamiltonian flows.
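As an illustrative sketch (a hypothetical constant diagonal metric, not from the article), the following checks that Hamilton's flow for a purely kinetic Hamiltonian traces a straight geodesic on flat space while conserving H:

```python
# Kinetic Hamiltonian H(q, p) = 1/2 <p, p>_q on a flat 2-D configuration
# space with a constant diagonal metric (hypothetical values). The cometric
# is the inverse of the metric, and Hamilton's flow is the geodesic flow:
# here the geodesics are the straight lines q(t) = t * g^{-1} p.
g = [2.0, 0.5]                       # diagonal Riemannian metric diag(2, 0.5)
g_inv = [1.0 / gi for gi in g]       # the cometric (inverse matrix)

def H(p):
    return 0.5 * sum(gi * pi * pi for gi, pi in zip(g_inv, p))

q = [0.0, 0.0]
p = [2.0, 1.0]
dt, steps = 0.01, 100
H0 = H(p)
for _ in range(steps):
    # Hamilton's equations: dq/dt = dH/dp = g^{-1} p, dp/dt = -dH/dq = 0.
    q = [qi + dt * gi * pi for qi, gi, pi in zip(q, g_inv, p)]

t = dt * steps                       # elapsed time, 1.0
assert all(abs(qi - t * gi * pi) < 1e-12 for qi, gi, pi in zip(q, g_inv, p))
assert abs(H(p) - H0) < 1e-12        # energy is conserved along the flow
```

For a position-dependent metric the momentum equation picks up Christoffel-symbol terms, but the structure of the flow is the same.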

Sub-Riemannian manifolds
When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as
one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at
every point q of the configuration space manifold Q, so that the rank of the cometric is less than the dimension of the
manifold Q, one has a sub-Riemannian manifold.
The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely
determines the cometric, and vice-versa. This implies that every sub-Riemannian manifold is uniquely determined by
its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique
sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii
theorem.
The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the
Heisenberg group, the Hamiltonian is given by

H(x, y, z, px, py, pz) = ½(px² + py²);

pz is not involved in the Hamiltonian.

Poisson algebras
Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth
functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real
Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable
topology) such that for any element A of the algebra, A² maps to a nonnegative real number.
A further generalization is given by Nambu dynamics.

Charged particle in an electromagnetic field


A good illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an
electromagnetic field. In Cartesian coordinates (i.e. qi = xi), the Lagrangian of a non-relativistic classical particle
in an electromagnetic field is (in SI units):

L = Σi ½mẋi² + Σi eẋiAi − eφ,

where e is the electric charge of the particle (not necessarily the electron charge), φ is the electric scalar potential,
and the Ai are the components of the magnetic vector potential (these may be modified through a gauge
transformation). This is called minimal coupling.


The generalized momenta are given by:

pi = ∂L/∂ẋi = mẋi + eAi.

Rearranging, the velocities are expressed in terms of the momenta:

ẋi = (pi − eAi)/m.

If we substitute the definition of the momenta, and the definitions of the velocities in terms of the momenta, into the
definition of the Hamiltonian given above, and then simplify and rearrange, we get:

H = Σi ẋipi − L = Σi (pi − eAi)²/(2m) + eφ.
This equation is used frequently in quantum mechanics.
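These steps can be checked with concrete numbers. The sketch below (hypothetical values, constant potentials for simplicity; not from the article) verifies that the Legendre transform H = Σi ẋipi − L of the minimally coupled Lagrangian reproduces the closed form (p − eA)²/(2m) + eφ:

```python
# Numeric sanity check (hypothetical values) of the Legendre transform for a
# charged particle: with L = 1/2 m v^2 + e v.A - e phi, the canonical
# momentum is p = m v + e A, and eliminating v gives
# H = (p - e A)^2 / (2 m) + e phi.
m, e = 2.0, 1.5
v   = [0.3, -0.2, 0.7]           # velocity
A   = [0.1, 0.4, -0.5]           # magnetic vector potential (constant here)
phi = 2.0                        # electric scalar potential

p = [m * vi + e * Ai for vi, Ai in zip(v, A)]      # canonical momentum

# H = p.v - L, evaluated directly ...
L = (0.5 * m * sum(vi ** 2 for vi in v)
     + e * sum(vi * Ai for vi, Ai in zip(v, A)) - e * phi)
H_legendre = sum(pi * vi for pi, vi in zip(p, v)) - L

# ... must agree with the closed form (p - eA)^2 / (2m) + e*phi:
H_closed = sum((pi - e * Ai) ** 2 for pi, Ai in zip(p, A)) / (2 * m) + e * phi
assert abs(H_legendre - H_closed) < 1e-12

# and equals kinetic plus potential energy, 1/2 m v^2 + e*phi:
assert abs(H_closed - (0.5 * m * sum(vi ** 2 for vi in v) + e * phi)) < 1e-12
```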


Relativistic charged particle in an electromagnetic field


The Lagrangian for a relativistic charged particle is given by:

L(t) = −mc²√(1 − ẋ(t)²/c²) + eẋ(t)·A(x(t), t) − eφ(x(t), t).

Thus the particle's canonical (total) momentum is

p(t) = ∂L/∂ẋ = γmẋ(t) + eA(x(t), t),

that is, the sum of the kinetic momentum and the potential momentum.
Solving for the velocity, we get

ẋ(t) = (p − eA) / √(m² + (p − eA)²/c²).

So the Hamiltonian is

H(t) = ẋ·p − L = c√((p − eA)² + m²c²) + eφ.

From this we get the force equation (equivalent to the Euler–Lagrange equation)

dp/dt = −∂H/∂x = e∇(ẋ·A) − e∇φ,

from which one can derive

d/dt(γmẋ) = e(E + ẋ × B),

where E = −∇φ − ∂A/∂t and B = ∇ × A are the electric and magnetic fields.
An equivalent expression for the Hamiltonian uses the relativistic (kinetic) momentum P = γmẋ(t) = p − eA:

H(t) = √(P²c² + m²c⁴) + eφ.

This has the advantage that the kinetic momentum P can be measured experimentally, whereas the canonical
momentum p cannot. Notice that the Hamiltonian (total energy) can then be viewed as the sum of the relativistic
energy (kinetic + rest), E = γmc², plus the potential energy, V = eφ.
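The relativistic energy relation can be checked with concrete (hypothetical) numbers: the canonical momentum is p = γmv + eA, and the Hamiltonian built from the kinetic momentum equals γmc² + eφ:

```python
import math

# Sketch (hypothetical 1-D values): for a relativistic charge, the
# Hamiltonian H = sqrt((p - eA)^2 c^2 + m^2 c^4) + e*phi equals
# gamma*m*c^2 + e*phi, since p - eA = gamma*m*v is the kinetic momentum.
m, e, c = 1.0, 1.0, 3.0
v = 1.8                              # speed, with v < c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
A, phi = 0.4, 2.0

p = gamma * m * v + e * A            # canonical momentum
H = math.sqrt((p - e * A) ** 2 * c ** 2 + m ** 2 * c ** 4) + e * phi

assert abs(H - (gamma * m * c ** 2 + e * phi)) < 1e-12
```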

References
Footnotes
[1] Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0
[2] This derivation is along the lines as given in

Other
Arnol'd, V. I. (1989), Mathematical Methods of Classical Mechanics, Springer-Verlag, ISBN 0-387-96890-3
Abraham, R.; Marsden, J. E. (1978), Foundations of Mechanics, London: Benjamin-Cummings, ISBN 0-8053-0102-X
Arnol'd, V. I.; Kozlov, V. V.; Neishtadt, A. I. (1988), "Mathematical aspects of classical and celestial mechanics", Encyclopaedia of Mathematical Sciences, Dynamical Systems III, vol. 3, Springer-Verlag

Vinogradov, A. M.; Kupershmidt, B. A. (1981), The structure of Hamiltonian mechanics (http://diffiety.ac.ru/djvu/structures.djvu) (DjVu), London Math. Soc. Lect. Notes Ser. 60, London: Cambridge Univ. Press

External links
Binney, James J., Classical Mechanics (lecture notes) (http://www-thphys.physics.ox.ac.uk/users/JamesBinney/cmech.pdf), University of Oxford, retrieved 27 October 2010
Tong, David, Classical Dynamics (Cambridge lecture notes) (http://www.damtp.cam.ac.uk/user/tong/dynamics.html), University of Cambridge, retrieved 27 October 2010
Hamilton, William Rowan, On a General Method in Dynamics (http://www.maths.tcd.ie/pub/HistMath/People/Hamilton/Dynamics/), Trinity College Dublin

Integrable system
In mathematics and physics, there are various distinct notions that are referred to under the name of integrable
systems.
In the general theory of differential systems, there is Frobenius integrability, which refers to overdetermined
systems. In the classical theory of Hamiltonian dynamical systems, there is the notion of Liouville integrability.
More generally, in differentiable dynamical systems integrability relates to the existence of foliations by invariant
submanifolds within the phase space. Each of these notions involves an application of the idea of foliations, but they
do not coincide. There are also notions of complete integrability, or exact solvability in the setting of quantum
systems and statistical mechanical models. Integrability can often be traced back to the algebraic geometry of
differential operators.

Frobenius integrability (overdetermined differential systems)


A differential system is said to be completely integrable in the Frobenius sense if the space on which it is defined has
a foliation by maximal integral manifolds. The Frobenius theorem states that a system is completely integrable if and
only if it generates an ideal that is closed under exterior differentiation. (See the article on integrability conditions for
differential systems for a detailed discussion of foliations by maximal integral manifolds.)

General dynamical systems


In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant,
regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are
invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of
the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as
complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this
context.
An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can
be adapted to describe evolution equations that either are systems of differential equations or finite difference
equations.
The distinction between integrable and nonintegrable dynamical systems thus has the qualitative implication of
regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be
explicitly integrated in exact form.


Hamiltonian systems and Liouville integrability


In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. Liouville
integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the
Hamiltonian vector fields associated to the invariants of the foliation span the tangent distribution. Another way to
state this is that there exists a maximal set of Poisson commuting invariants (i.e., functions on the phase space
whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish).
In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of
constants), then it must have even dimension 2n, and the maximal number of independent Poisson commuting
invariants (including the Hamiltonian itself) is n. The leaves of the foliation are totally isotropic with respect to the
symplectic form and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems
(i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time dependent) have at least one
invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are
compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle"
variables. The cycles of the canonical one-form are called the action variables, and the resulting canonical
coordinates are called action-angle variables (see below).
There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well
as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the
dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than
maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When
there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting,
and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable.
If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.

Action-angle variables
When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level
sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as
mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that
the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the
Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the torus. The
motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.
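For the one-dimensional harmonic oscillator with unit mass (a standard worked example, not taken from the article), the action I = H/ω and angle θ = atan2(ωq, p) can be verified directly: along the exact flow the action is constant and the angle advances at the constant rate ω:

```python
import math

# Harmonic oscillator, unit mass: H = (p^2 + w^2 q^2) / 2.
# Action-angle variables: I = H / w, theta = atan2(w*q, p).
w = 2.0
q, p = 1.0, 0.0
T = 1.0                                   # elapsed time

def action(q, p):
    return (p * p + w * w * q * q) / (2.0 * w)

def angle(q, p):
    return math.atan2(w * q, p)

I0, th0 = action(q, p), angle(q, p)

# Exact flow of the oscillator: rotation of the vector (p, w q) at rate w.
qT = q * math.cos(w * T) + (p / w) * math.sin(w * T)
pT = p * math.cos(w * T) - w * q * math.sin(w * T)

assert abs(action(qT, pT) - I0) < 1e-12               # action is invariant
dtheta = (angle(qT, pT) - th0) % (2 * math.pi)
assert abs(dtheta - (w * T) % (2 * math.pi)) < 1e-9   # angle moves linearly
```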

The Hamilton–Jacobi approach


In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations
are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology,
this is described as determining a transformation to a canonical set of coordinates consisting of completely
ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical
"position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In
the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the
general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends
on n independent constants of integration, where n is the dimension of the configuration space), exists in very
general cases, but only in the local sense. Therefore the existence of a complete solution of the Hamilton–Jacobi
equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be
"explicitly integrated" involve a complete separation of variables, in which the separation constants provide the
complete set of integration constants that are required. Only when these constants can be reinterpreted, within the
full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a
Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense.

Solitons and inverse spectral methods


A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons,
which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation
(which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing
these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach
for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often
reducible to Riemann–Hilbert problems), which generalize local linear methods like Fourier analysis to nonlocal
linearization, through the solution of associated integral equations.
The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and
which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably
generalized sense) is invariant under the evolution. This provides, in certain cases, enough invariants, or "integrals of
motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of
freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability.
However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a
transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly
infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a
transformation to action-angle variables, although typically only a finite number of the "position" variables are
actually angle coordinates, and the rest are noncompact.
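The invariant-spectrum idea can be sketched in finite dimensions with a Lax pair: if a matrix ("Lax operator") L evolves as dL/dt = [M, L], its eigenvalues are constants of motion. The toy 2×2 example below (hypothetical matrices, one explicit Euler step; not from the article) checks that the trace is conserved exactly and the determinant to first order:

```python
# Isospectral evolution dL/dt = [M, L] = M L - L M preserves the spectrum.
# One small Euler step: tr[M, L] = 0 always, so the trace (sum of
# eigenvalues) is conserved exactly; the determinant (product) up to O(dt^2).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(M, L):
    ML, LM = matmul(M, L), matmul(L, M)
    return [[ML[i][j] - LM[i][j] for j in range(2)] for i in range(2)]

L = [[1.0, 2.0], [3.0, 4.0]]
M = [[0.0, 1.0], [-1.0, 0.0]]        # antisymmetric generator
dt = 1e-4

C = commutator(M, L)
L1 = [[L[i][j] + dt * C[i][j] for j in range(2)] for i in range(2)]

trace = lambda A: A[0][0] + A[1][1]
det = lambda A: A[0][0] * A[1][1] - A[0][1] * A[1][0]

assert abs(trace(L1) - trace(L)) < 1e-12      # conserved exactly
assert abs(det(L1) - det(L)) < 1e-6           # conserved to O(dt^2)
```

The inverse spectral methods described above apply the same principle with differential operators in place of finite matrices.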

Quantum integrable systems


There is also a notion of quantum integrable systems. In the quantum setting, functions on phase space must be
replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by
commuting operators.
To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body
reducible. A quantum system is said to be integrable if the dynamics are two-body irreducible. The Yang-Baxter
equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved
quantities. All of these ideas are incorporated into the Quantum inverse scattering method where the algebraic Bethe
Ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model,
the Hubbard model and several variations on the Heisenberg model.

Exactly solvable models


In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as
exactly solvable models. This obscures the distinction between integrability in the Hamiltonian sense, and the more
general dynamical systems sense.
There are also exactly solvable models in statistical mechanics, which are more closely related to quantum
integrable systems than classical ones. Two closely related methods: the Bethe ansatz approach, in its modern sense,
based on the Yang–Baxter equations, and the quantum inverse scattering method, provide quantum analogs of the
inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.
An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some
previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself,
rather than the purely calculational feature that we happen to have some "known" functions available, in terms of
which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known"
functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such
"known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic
validity, it often implies the sort of regularity that is to be expected in integrable systems.

List of some well-known classical integrable systems


1. Classical mechanical systems (finite-dimensional phase space):

Harmonic oscillators in n dimensions


Central force motion (exact solutions of classical central-force problems)
Two center Newtonian gravitational motion
Geodesic motion on ellipsoids
Neumann oscillator
Lagrange, Euler and Kovalevskaya tops
Integrable Clebsch and Steklov systems in fluids
Calogero–Moser–Sutherland model
Swinging Atwood's Machine with certain choices of parameters

2. Integrable lattice models


Toda lattice
Ablowitz–Ladik lattice
Volterra lattice
3. Integrable systems of PDEs in 1 + 1 dimension

Korteweg–de Vries equation


Sine–Gordon equation
Nonlinear Schrödinger equation
AKNS system
Boussinesq equation (water waves)
Nonlinear sigma models
Classical Heisenberg ferromagnet model (spin chain)
Classical Gaudin spin system (Garnier system)
Landau–Lifshitz equation (continuous spin field)
Benjamin–Ono equation
Dym equation
Three-wave equation

4. Integrable PDEs in 2 + 1 dimensions


Kadomtsev–Petviashvili equation
Davey–Stewartson equation
Ishimori equation
5. Other integrable systems of PDEs in higher dimensions
Self-dual Yang–Mills equations


References
V. I. Arnold (1997). Mathematical Methods of Classical Mechanics, 2nd ed. Springer. ISBN 978-0-387-96890-2.
M. Dunajski (2009). Solitons, Instantons and Twistors. Oxford University Press. ISBN 978-0-19-857063-9.
L. D. Faddeev, L. A. Takhtajan (1987). Hamiltonian Methods in the Theory of Solitons. Addison-Wesley. ISBN 978-0-387-15579-1.
A. T. Fomenko, Symplectic Geometry. Methods and Applications. Gordon and Breach, 1988. Second edition 1995, ISBN 978-2-88124-901-3.
A. T. Fomenko, A. V. Bolsinov, Integrable Hamiltonian Systems: Geometry, Topology, Classification. Taylor and Francis, 2003, ISBN 978-0-415-29805-6.
H. Goldstein (1980). Classical Mechanics, 2nd ed. Addison-Wesley. ISBN 0-201-02918-9.
J. Harnad, P. Winternitz, G. Sabidussi, eds. (2000). Integrable Systems: From Classical to Quantum. American Mathematical Society. ISBN 0-8218-2093-1.
V. E. Korepin, N. M. Bogoliubov, A. G. Izergin (1997). Quantum Inverse Scattering Method and Correlation Functions. Cambridge University Press. ISBN 978-0-521-58646-7.
V. S. Afrajmovich, V. I. Arnold, Yu. S. Il'yashenko, L. P. Shil'nikov. Dynamical Systems V. Springer. ISBN 3-540-18173-3.
Giuseppe Mussardo. Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics. Oxford University Press. ISBN 978-0-19-954758-6.

External links
Hazewinkel, Michiel, ed. (2001), "Integrable system" (http://www.encyclopediaofmath.org/index.php?title=p/
i051330), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4


Mathematics
Symplectic manifold
In mathematics, a symplectic manifold is a smooth manifold, M, equipped with a closed nondegenerate differential
2-form, ω, called the symplectic form. The study of symplectic manifolds is called symplectic geometry or
symplectic topology. Symplectic manifolds arise naturally in abstract formulations of classical mechanics and
analytical mechanics as the cotangent bundles of manifolds. For example, in the Hamiltonian formulation of classical
mechanics, which provides one of the major motivations for the field, the set of all possible configurations of a
system is modeled as a manifold, and this manifold's cotangent bundle describes the phase space of the system.
Any real-valued differentiable function, H, on a symplectic manifold can serve as an energy function or
Hamiltonian. Associated to any Hamiltonian is a Hamiltonian vector field; the integral curves of the Hamiltonian
vector field are solutions to Hamilton's equations. The Hamiltonian vector field defines a flow on the symplectic
manifold, called a Hamiltonian flow or symplectomorphism. By Liouville's theorem, Hamiltonian flows preserve
the volume form on the phase space.

Motivation
Symplectic manifolds arise from classical mechanics, in particular, they are a generalization of the phase space of a
closed system.[1] In the same way the Hamilton equations allow one to derive the time evolution of a system from a
set of differential equations, the symplectic form should allow one to obtain a vector field describing the flow of the
system from the differential dH of a Hamiltonian function H. As Newton's laws of motion are linear differential
equations, such a map should be linear as well.[2] So we require a linear map TM → T*M, or equivalently, an
element of T*M ⊗ T*M. Letting ω denote a section of T*M ⊗ T*M, the requirement that ω be non-degenerate
ensures that for every differential dH there is a unique corresponding vector field VH such that dH = ω(VH, ·). Since
one desires the Hamiltonian to be constant along flow lines, one should have dH(VH) = ω(VH, VH) = 0, which
implies that ω is alternating and hence a 2-form. Finally, one makes the requirement that ω should not change under
flow lines, i.e. that the Lie derivative of ω along VH vanishes. Applying Cartan's formula, this amounts to

L_VH(ω) = d(ι_VH ω) + ι_VH(dω) = ι_VH(dω) = 0

(since d(ι_VH ω) = d(dH) = 0), which is equivalent to the requirement that ω should be closed.

Definition
A symplectic form on a manifold M is a closed non-degenerate differential 2-form ω.[3] Here, non-degenerate means
that for all p ∈ M, if there exists an X ∈ TpM such that ω(X, Y) = 0 for all Y ∈ TpM, then X = 0. The skew-symmetric
condition (inherent in the definition of differential 2-form) means that for all p ∈ M we have ω(X, Y) = −ω(Y, X) for
all X, Y ∈ TpM. In odd dimensions, antisymmetric matrices are not invertible; since ω is a differential two-form, the
skew-symmetric condition implies that M has even dimension. The closed condition means that the exterior
derivative of ω vanishes, dω = 0. A symplectic manifold consists of a pair (M, ω) of a manifold M and a symplectic
form ω. Assigning a symplectic form to a manifold M is referred to as giving M a symplectic structure.


Linear symplectic manifold


There is a standard linear model, namely a symplectic vector space R2n. Let R2n have the basis {v1, ..., v2n}. Then we
define our symplectic form ω so that for all 1 ≤ i ≤ n we have ω(vi, vn+i) = 1, ω(vn+i, vi) = −1, and ω is zero for all
other pairs of basis vectors. In this case the symplectic form reduces to a simple quadratic form. If In denotes the
n × n identity matrix, then the matrix Ω of this quadratic form is given by the 2n × 2n block matrix:

Ω = [  0   In ]
    [ −In   0 ]
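This block matrix can be constructed and checked directly; the short sketch below (an illustration, not part of the article) verifies antisymmetry and Ω² = −I, which implies nondegeneracy:

```python
# The standard symplectic form on R^(2n) as the block matrix
# Omega = [[0, I], [-I, 0]]: antisymmetric, with Omega^2 = -Identity.
n = 2
dim = 2 * n
Omega = [[0.0] * dim for _ in range(dim)]
for i in range(n):
    Omega[i][n + i] = 1.0       # omega(v_i, v_{n+i}) = 1
    Omega[n + i][i] = -1.0      # omega(v_{n+i}, v_i) = -1

# Antisymmetry: Omega^T = -Omega
assert all(Omega[i][j] == -Omega[j][i]
           for i in range(dim) for j in range(dim))

# Omega^2 = -Identity, hence det(Omega) != 0: the form is nondegenerate.
sq = [[sum(Omega[i][k] * Omega[k][j] for k in range(dim)) for j in range(dim)]
      for i in range(dim)]
assert all(sq[i][j] == (-1.0 if i == j else 0.0)
           for i in range(dim) for j in range(dim))
```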

Lagrangian and other submanifolds


There are several natural geometric notions of submanifold of a symplectic manifold.
symplectic submanifolds (potentially of any even dimension) are submanifolds where the symplectic form is
required to induce a symplectic form on them.
isotropic submanifolds are submanifolds where the symplectic form restricts to zero, i.e. each tangent space is an
isotropic subspace of the ambient manifold's tangent space. Similarly, if each tangent subspace to a submanifold
is co-isotropic (the dual of an isotropic subspace), the submanifold is called co-isotropic.
The most important case of the isotropic submanifolds is that of Lagrangian submanifolds. A Lagrangian
submanifold is, by definition, an isotropic submanifold of maximal dimension, namely half the dimension of the
ambient symplectic manifold. One major example is that the graph of a symplectomorphism in the product
symplectic manifold (M × M, ω × −ω) is Lagrangian. Their intersections display rigidity properties not possessed by
smooth manifolds; the Arnold conjecture gives the sum of the submanifold's Betti numbers as a lower bound for the
number of self intersections of a smooth Lagrangian submanifold, rather than the Euler characteristic in the smooth
case.

Lagrangian fibration
A Lagrangian fibration of a symplectic manifold M is a fibration where all of the fibres are Lagrangian
submanifolds. Since M is even-dimensional we can take local coordinates (p1, ..., pn, q1, ..., qn), and by Darboux's
theorem the symplectic form ω can be, at least locally, written as ω = Σk dpk ∧ dqk, where d denotes the exterior
derivative and ∧ denotes the exterior product. Using this set-up we can locally think of M as being the cotangent
bundle T*Rn, and the Lagrangian fibration as the trivial fibration π : T*Rn → Rn. This is the canonical picture.

Lagrangian mapping
Let L be a Lagrangian submanifold of a symplectic manifold (K, ω)
given by an immersion i : L → K (i is called a Lagrangian
immersion). Let π : K → B give a Lagrangian fibration of K. The
composite (π ∘ i) : L → K → B is a Lagrangian mapping. The critical
value set of π ∘ i is called a caustic.
Two Lagrangian maps (π1 ∘ i1) : L1 → K1 → B1 and (π2 ∘ i2) : L2 →
K2 → B2 are called Lagrangian equivalent if there exist
diffeomorphisms σ, τ and ν such that both sides of the diagram given
on the right commute, and τ preserves the symplectic form. Symbolically:

τ ∘ i1 = i2 ∘ σ,   ν ∘ π1 = π2 ∘ τ,   τ*ω2 = ω1,

where τ*ω2 denotes the pull back of ω2 by τ.



Special cases and generalizations


A symplectic manifold endowed with a metric that is compatible with the symplectic form is an almost Kähler
manifold in the sense that the tangent bundle has an almost complex structure, but this need not be integrable.
Symplectic manifolds are special cases of a Poisson manifold. The definition of a symplectic manifold requires
that the symplectic form be non-degenerate everywhere, but if this condition is violated, the manifold may still be
a Poisson manifold.
A multisymplectic manifold of degree k is a manifold equipped with a closed nondegenerate k-form.[4]
A polysymplectic manifold is a Legendre bundle provided with a polysymplectic tangent-valued
(n + 2)-form; it is utilized in Hamiltonian field theory.[5]

Notes
[1] Ben Webster: What is a symplectic manifold, really? http://sbseminar.wordpress.com/2012/01/09/what-is-a-symplectic-manifold-really/
[2] Henry Cohn: Why symplectic geometry is the natural setting for classical mechanics http://research.microsoft.com/en-us/um/people/cohn/thoughts/symplectic.html
[3] Maurice de Gosson: Symplectic Geometry and Quantum Mechanics (2006) Birkhäuser Verlag, Basel ISBN 3-7643-7574-4. (page 10)
[4] F. Cantrijn, L. A. Ibort and M. de León, J. Austral. Math. Soc. Ser. A 66 (1999), no. 3, 303–330.
[5] G. Giachetta, L. Mangiarotti and G. Sardanashvily, Covariant Hamiltonian equations for field theory, Journal of Physics A32 (1999) 6629–6642; arXiv: hep-th/9904062 (http://xxx.lanl.gov/abs/hep-th/9904062).

References
Dusa McDuff and D. Salamon: Introduction to Symplectic Topology (1998) Oxford Mathematical Monographs,
ISBN 0-19-850451-9.
Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London ISBN
0-8053-0102-X See section 3.2.
Maurice A. de Gosson: Symplectic Geometry and Quantum Mechanics (2006) Birkhäuser Verlag, Basel ISBN
3-7643-7574-4.
Alan Weinstein (1971). "Symplectic manifolds and their lagrangian submanifolds". Adv. Math. 6 (3): 329–46. doi:
10.1016/0001-8708(71)90020-X (http://dx.doi.org/10.1016/0001-8708(71)90020-X).

External links
Ü. Lumiste (2001), "Symplectic Structure" (http://www.encyclopediaofmath.org/index.php?title=s/s091860),
in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Sardanashvily, G., Fibre bundles, jet manifolds and Lagrangian theory. Lectures for theoreticians, arXiv:
0908.1886 (http://xxx.lanl.gov/abs/0908.1886)
Examples of symplectic manifolds (http://planetmath.org/?op=getobj&from=objects&id=3672),
PlanetMath.org.


Phase space
For other uses, see Phase space (disambiguation).
In mathematics and physics, a phase space
is a space in which all possible states of a
system are represented, with each possible
state of the system corresponding to one
unique point in the phase space. For
mechanical systems, the phase space usually
consists of all possible values of position
and momentum variables (i.e. the cotangent
space of configuration space). The concept
of phase space was developed in the late
19th century by Ludwig Boltzmann, Henri
Poincaré, and Willard Gibbs.
A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase
diagram. Phase diagram, however, is more usually reserved in the physical sciences for a diagram showing the
various regions of stability of the thermodynamic phases of a chemical system, which consists of pressure,
temperature, and composition.

Figure: Phase space of a dynamic system with focal instability, showing one phase space trajectory.
In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional
space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For
every possible state of the system, or allowed combination of values of the system's parameters, a point is included in
the multidimensional space. The system's evolving state over time traces a path (a phase space trajectory for the
system) through the high-dimensional space. The phase space trajectory represents the set of states compatible with
starting from one particular initial condition, located in the full phase space that represents the set of states
compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can
be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may
contain a great many dimensions. For instance, a gas containing many molecules may require a separate dimension
for each particle's x, y and z positions and momenta as well as any number of other properties.
In classical mechanics, any choice of generalized coordinates qi for the position (i.e. coordinates on configuration
space) defines conjugate generalized momenta pi which together define co-ordinates on phase space. More
abstractly, in classical mechanics phase space is the cotangent space of configuration space, and in this interpretation
the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural
local Darboux coordinates for the standard symplectic structure on a cotangent space.
The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of
points in such systems obeys Liouville's Theorem, and so can be taken as constant. Within the context of a model
system in classical mechanics, the phase space coordinates of the system at any given time are composed of all of the
system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the
future or the past, through integration of Hamilton's or Lagrange's equations of motion.
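The point made above, that integrating the equations of motion carries a phase-space point forward or backward in time, can be sketched numerically. The following Python fragment (a minimal illustration, not from any particular library; the function and parameter names are ours) integrates Hamilton's equations for a simple pendulum with a symplectic leapfrog scheme and records the resulting phase-space trajectory:

```python
import numpy as np

def leapfrog_pendulum(q0, p0, dt=1e-3, steps=10000, g_over_l=1.0):
    """Integrate Hamilton's equations q' = p, p' = -(g/l) sin q for a
    simple pendulum (unit mass and length) with a symplectic leapfrog
    scheme, returning the traced phase-space path."""
    q, p = q0, p0
    path = np.empty((steps + 1, 2))
    path[0] = q, p
    for i in range(steps):
        p -= 0.5 * dt * g_over_l * np.sin(q)  # half kick
        q += dt * p                            # drift
        p -= 0.5 * dt * g_over_l * np.sin(q)  # half kick
        path[i + 1] = q, p
    return path

path = leapfrog_pendulum(q0=0.5, p0=0.0)
# The energy H = p^2/2 - cos q is (nearly) conserved along the
# trajectory, so the path closes into a loop in the (q, p) plane.
energy = 0.5 * path[:, 1]**2 - np.cos(path[:, 0])
```

Because the state (q, p) at any time determines the whole trajectory, running the same loop with dt negated recovers the past of the system equally well.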


Examples
Low dimensions
Main articles: phase line (mathematics) and phase plane

[Figure: Illustration of how a phase portrait would be constructed for the motion of a simple pendulum.]

For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one
has an autonomous ordinary differential equation in a single variable,

\frac{dy}{dt} = f(y),

with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system
being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth/decay
model (one unstable/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable).
The phase space of a two-dimensional system is called a phase plane,
which occurs in classical mechanics for a single particle moving in one
dimension, and where the two variables are position and velocity. In
this case, a sketch of the phase portrait may give qualitative
information about the dynamics of the system, such as the limit cycle
of the Van der Pol oscillator shown in the diagram.
Here, the horizontal axis gives the position and vertical axis the
velocity. As the system evolves, its state follows one of the lines
(trajectories) on the phase diagram.
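The phase-plane picture described above can be reproduced numerically: integrating the Van der Pol equation x'' = mu (1 - x^2) x' - x from two different starting points shows both trajectories settling onto the same limit cycle. A hedged sketch (the integrator is a hand-rolled classical RK4, so no external solver is assumed):

```python
import numpy as np

def vdp_rhs(state, mu=1.0):
    """Van der Pol oscillator as a first-order system in (x, v)."""
    x, v = state
    return np.array([v, mu * (1.0 - x**2) * v - x])

def rk4(f, state, dt, steps):
    """Classical 4th-order Runge-Kutta integration of state' = f(state)."""
    out = np.empty((steps + 1, 2))
    out[0] = state
    for i in range(steps):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i + 1] = state
    return out

# Two different starting points in the (x, v) phase plane:
inner = rk4(vdp_rhs, np.array([0.1, 0.0]), dt=0.01, steps=5000)
outer = rk4(vdp_rhs, np.array([4.0, 0.0]), dt=0.01, steps=5000)

# Both trajectories approach the same limit cycle; for moderate mu its
# amplitude in x is close to 2.
amp_inner = np.abs(inner[-2000:, 0]).max()
amp_outer = np.abs(outer[-2000:, 0]).max()
```

Plotting the `inner` and `outer` arrays against each other gives exactly the kind of phase portrait shown in the diagram.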

[Figure: Evolution of an ensemble of classical systems in phase space (top). The systems are a massive particle in a one-dimensional potential well (red curve, lower figure). The initially compact ensemble becomes swirled up over time.]

Chaos theory
Classic examples of phase diagrams from chaos theory are:
the Lorenz attractor
population growth (i.e. the logistic map)
the parameter plane of complex quadratic polynomials with the Mandelbrot set

Quantum mechanics
In quantum mechanics, the coordinates p and q of phase space normally become hermitian operators in a Hilbert
space.

[Figure: Phase portrait of the Van der Pol oscillator]

But they may alternatively retain their classical interpretation, provided functions of them compose in novel
algebraic ways (through Groenewold's 1946
star product), consistent with the uncertainty principle of quantum mechanics. Every quantum mechanical observable
corresponds to a unique function or distribution on phase space, and vice versa, as specified by Hermann Weyl
(1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H J
Groenewold (1946). With J E Moyal (1949), these completed the foundations of the phase space formulation of
quantum mechanics, a complete and logically autonomous reformulation of quantum mechanics. (Its modern
abstractions include deformation quantization and geometric quantization.)
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the
density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner
quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map
facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with
deformation parameter ħ/S, where S is the action of the relevant process. (Other familiar deformations in physics
involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the
deformation of Newtonian gravity into General Relativity, with deformation parameter Schwarzschild
radius/characteristic-dimension.)
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum
corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the
noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.

Thermodynamics and statistical mechanics


In thermodynamics and statistical mechanics contexts, the term phase space has two meanings. First, it is used in the
same sense as in classical mechanics. If a thermodynamic system consists of N particles, then a point in the
6N-dimensional phase space describes the dynamic state of every particle in that system, as each particle is
associated with three position variables and three momentum variables. In this sense, a point in phase space is said to
be a microstate of the system. N is typically on the order of Avogadro's number, thus describing the system at a
microscopic level is often impractical. This leads us to the use of phase space in a different sense.

Second, phase space can refer to the space that is parametrized by the macroscopic states of the system, such as pressure,
temperature, etc. For instance, one can view the pressure-volume diagram or entropy-temperature diagrams as
describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may
easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could
have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase
space in which the system in question lies: for example, the liquid phase, or solid phase, etc.
Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of
much larger dimensions than the second sense. Clearly, many more parameters are required to register every detail of
the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the
system.

Phase Integral
In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the
partition function (sum over states) known as the phase integral.[1] Instead of summing the Boltzmann factor over
discretely spaced energy states (defined by appropriate integer quantum numbers for each degree of freedom) one
may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the
momentum component of all degrees of freedom (momentum space) and integration of the position component of all
degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical
partition function by multiplication of a normalization constant representing the number of quantum energy states
per unit phase space. As shown in detail in the reference, this normalization constant is simply the inverse of Planck's constant
raised to a power equal to the number of degrees of freedom for the system.
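The recipe above (integrate the Boltzmann factor over momentum and position, then divide by one power of h per degree of freedom) can be checked against a case with a closed form: the one-dimensional harmonic oscillator H = p^2/2m + m omega^2 q^2/2, whose classical partition function is Z = 1/(beta hbar omega). A sketch in units where m = omega = hbar = beta = 1 (the integration limits are an assumption, chosen wide enough that the Boltzmann factor is negligible outside them):

```python
import numpy as np

hbar = m = omega = beta = 1.0
h = 2.0 * np.pi * hbar  # Planck's constant: phase-space area per quantum state

q = np.linspace(-10.0, 10.0, 1001)
p = np.linspace(-10.0, 10.0, 1001)
dq, dp = q[1] - q[0], p[1] - p[0]
Q, P = np.meshgrid(q, p)
H = P**2 / (2.0 * m) + 0.5 * m * omega**2 * Q**2

# Phase integral: integrate exp(-beta H) over momentum space and
# configuration space, then normalize by h.
phase_integral = np.exp(-beta * H).sum() * dq * dp
Z = phase_integral / h

Z_exact = 1.0 / (beta * hbar * omega)
```

The numerical value agrees with the closed form to high accuracy, illustrating the normalization by the inverse of Planck's constant described in the text.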

References
[1] Laurendeau, Normand M. Statistical Thermodynamics: Fundamentals and Applications. New York: Cambridge University Press, 2005.
164-66. Print.

External links
Hazewinkel, Michiel, ed. (2001), "Phase space" (http://www.encyclopediaofmath.org/index.php?title=p/p072590),
Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4


Symplectic vector field


In physics and mathematics, a symplectic vector field is one whose flow preserves a symplectic form. That is, if
(M, ω) is a symplectic manifold with smooth manifold M and symplectic form ω, then a vector field X in the Lie
algebra 𝔛(M) is symplectic if its flow preserves the symplectic structure. In other words, the Lie derivative of the
symplectic form along the vector field must vanish:

\mathcal{L}_X \omega = 0.
An alternative definition is that a vector field is symplectic if its interior product with the symplectic form is closed.
(The interior product gives a map from vector fields to 1-forms, which is an isomorphism due to the nondegeneracy
of a symplectic 2-form.) The equivalence of the definitions follows from the closedness of the symplectic form and
Cartan's magic formula for the Lie derivative in terms of the exterior derivative.
If the interior product of a vector field with the symplectic form is an exact form (and in particular, a closed form),
then it is called a Hamiltonian vector field. If the first de Rham cohomology group H^1(M) of the manifold is
trivial, all closed forms are exact, so all symplectic vector fields are Hamiltonian. That is, the obstruction to a
symplectic vector field being Hamiltonian lives in H^1(M). In particular, symplectic vector fields on simply
connected manifolds are Hamiltonian.
The Lie bracket of two symplectic vector fields is Hamiltonian, and thus the collection of symplectic vector fields
and the collection of Hamiltonian vector fields both form Lie algebras.

References
This article incorporates material from Symplectic vector field on PlanetMath, which is licensed under the Creative
Commons Attribution/Share-Alike License.


Liouville's theorem
This article is about Liouville's theorem in Hamiltonian mechanics. For other uses, see Liouville's theorem
(disambiguation).
In physics, Liouville's theorem, named after the French mathematician Joseph Liouville, is a key theorem in
classical statistical and Hamiltonian mechanics. It asserts that the phase-space distribution function is constant along
the trajectories of the system; that is, the density of system points in the vicinity of a given system point travelling
through phase space is constant with time.
There are also related mathematical results in symplectic topology and ergodic theory.

Liouville equations
The Liouville equation describes the time evolution of the phase
space distribution function. Although the equation is usually referred to
as the "Liouville equation", Josiah Willard Gibbs was the first to
recognize the importance of this equation as the fundamental equation
of statistical mechanics.[1] It is referred to as the Liouville equation
because its derivation for non-canonical systems utilises an identity
first derived by Liouville in 1838.[2] Consider a Hamiltonian dynamical system with canonical coordinates q_i and
conjugate momenta p_i, where i = 1, ..., n. Then the phase space distribution ρ(p, q) determines the probability
ρ(p, q) d^n q d^n p that the system will be found in the infinitesimal phase space volume d^n q d^n p. The Liouville
equation governs the evolution of ρ(p, q; t) in time t:

\frac{\partial \rho}{\partial t} + \sum_{i=1}^{n} \left( \frac{\partial \rho}{\partial q_i} \dot{q}_i + \frac{\partial \rho}{\partial p_i} \dot{p}_i \right) = 0.

[Figure: Evolution of an ensemble of classical systems in phase space (top). Each system consists of one massive particle in a one-dimensional potential well (red curve, lower figure). Whereas the motion of an individual member of the ensemble is given by Hamilton's equations, Liouville's equation describes the flow of the whole ensemble. The motion is analogous to a dye in an incompressible fluid.]

Time derivatives are denoted by dots, and are evaluated according to Hamilton's equations for the system. This
equation demonstrates the conservation of density in phase space (which was Gibbs's name for the theorem).


Liouville's theorem states that


The distribution function is constant along any trajectory in phase space.
A proof of Liouville's theorem uses the n-dimensional divergence theorem. This proof is based on the fact that the
evolution of ρ obeys an n-dimensional version of the continuity equation:

\frac{\partial \rho}{\partial t} + \sum_{i=1}^{n} \left( \frac{\partial (\rho \dot{q}_i)}{\partial q_i} + \frac{\partial (\rho \dot{p}_i)}{\partial p_i} \right) = 0.

That is, the 3-tuple (ρ, ρq̇_i, ρṗ_i) is a conserved current. Notice that the difference between this equation and
Liouville's equation are the terms

\rho \sum_{i=1}^{n} \left( \frac{\partial \dot{q}_i}{\partial q_i} + \frac{\partial \dot{p}_i}{\partial p_i} \right) = \rho \sum_{i=1}^{n} \left( \frac{\partial^2 H}{\partial q_i \, \partial p_i} - \frac{\partial^2 H}{\partial p_i \, \partial q_i} \right) = 0,

where H is the Hamiltonian, and Hamilton's equations have been used. That is, viewing the motion through phase
space as a 'fluid flow' of system points, the theorem that the convective derivative of the density, dρ/dt, is zero
follows from the equation of continuity by noting that the 'velocity field' (q̇, ṗ) in phase space has zero divergence
(which follows from Hamilton's relations).

Another illustration is to consider the trajectory of a cloud of points through phase space. It is straightforward to
show that as the cloud stretches in one coordinate, p_i say, it shrinks in the corresponding q_i direction so that the
product Δp_i Δq_i remains constant.
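This area preservation can be checked numerically: finite-difference the flow map of a pendulum Hamiltonian and verify that its Jacobian determinant is 1, so a small phase-space parallelogram keeps its area. A sketch (the integrator, times, and step sizes are illustrative choices of ours):

```python
import numpy as np

def flow(q, p, t=1.0, dt=1e-3):
    """Hamiltonian flow map for H = p^2/2 + (1 - cos q) (a pendulum),
    integrated with the symplectic leapfrog scheme."""
    steps = int(round(t / dt))
    for _ in range(steps):
        p -= 0.5 * dt * np.sin(q)
        q += dt * p
        p -= 0.5 * dt * np.sin(q)
    return q, p

def jacobian_det(q, p, eps=1e-6):
    """Central-difference Jacobian determinant of the flow map at (q, p)."""
    qQ, pQ = flow(q + eps, p); qq, pq = flow(q - eps, p)
    qP, pP = flow(q, p + eps); qp, pp = flow(q, p - eps)
    dqdq = (qQ - qq) / (2 * eps); dpdq = (pQ - pq) / (2 * eps)
    dqdp = (qP - qp) / (2 * eps); dpdp = (pP - pp) / (2 * eps)
    return dqdq * dpdp - dqdp * dpdq

# Liouville's theorem: phase-space area is preserved, so det = 1.
det = jacobian_det(0.7, 0.3)
```

Stretching in one direction is exactly compensated by shrinking in the conjugate direction, which is what the unit determinant expresses.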

Equivalently, the existence of a conserved current implies, via Noether's theorem, the existence of a symmetry. The
symmetry is invariant under time translations, and the generator (or Noether charge) of the symmetry is the
Hamiltonian.

Other formulations
Poisson bracket
The theorem is often restated in terms of the Poisson bracket as

\frac{\partial \rho}{\partial t} = -\{\rho, H\},

or in terms of the Liouville operator or Liouvillian,

i\hat{L} = \sum_{i=1}^{n} \left( \frac{\partial H}{\partial p_i} \frac{\partial}{\partial q_i} - \frac{\partial H}{\partial q_i} \frac{\partial}{\partial p_i} \right),

as

\frac{\partial \rho}{\partial t} + i\hat{L}\rho = 0.
Ergodic theory
In ergodic theory and dynamical systems, motivated by the physical considerations given so far, there is a
corresponding result also referred to as Liouville's theorem. In Hamiltonian mechanics, the phase space is a smooth
manifold that comes naturally equipped with a smooth measure (locally, this measure is the 6n-dimensional
Lebesgue measure). The theorem says this smooth measure is invariant under the Hamiltonian flow. More generally,
one can describe the necessary and sufficient condition under which a smooth measure is invariant under a flow. The
Hamiltonian case then becomes a corollary.


Symplectic geometry
In terms of symplectic geometry, the phase space is represented as a symplectic manifold. The theorem then states
that the natural volume form on a symplectic manifold is invariant under the Hamiltonian flows. The symplectic
structure is represented as a 2-form, given as a sum of wedge products of dpi with dqi. The volume form is the top
exterior power of the symplectic 2-form, and is just another representation of the measure on the phase space
described above. One formulation of the theorem states that the Lie derivative of this volume form is zero along
every Hamiltonian vector field.
In fact, the symplectic structure itself is preserved, not only its top exterior power. For this reason, in this context, the
symplectic structure is also called the Poincaré invariant. Hence the theorem about the Poincaré invariant is a
generalization of Liouville's theorem.

Quantum Liouville equation


The analog of the Liouville equation in quantum mechanics describes the time evolution of a mixed state. Canonical
quantization yields a quantum-mechanical version of this theorem, the von Neumann equation. This procedure,
often used to devise quantum analogues of classical systems, involves describing a classical system using
Hamiltonian mechanics. Classical variables are then re-interpreted as quantum operators, while Poisson brackets are
replaced by commutators. In this case, the resulting equation is[3][4]

\frac{\partial \rho}{\partial t} = \frac{1}{i\hbar} [H, \rho],

where ρ is the density matrix.

When applied to the expectation value of an observable, the corresponding equation is given by Ehrenfest's theorem,
and takes the form

\frac{d}{dt} \langle A \rangle = -\frac{1}{i\hbar} \langle [H, A] \rangle,

where A is an observable. Note the sign difference, which follows from the assumption that the operator is
stationary and the state is time-dependent.
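The von Neumann equation can be illustrated numerically for a two-level system: evolve ρ(t) = U ρ(0) U† under a Hamiltonian H and check both that the trace is preserved and that dρ/dt agrees with (1/iħ)[H, ρ]. A sketch with ħ = 1 (the specific Hamiltonian and initial state are arbitrary examples of ours):

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)   # a Hermitian example
rho0 = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # a valid density matrix

def propagator(H, t):
    """U(t) = exp(-i H t / hbar), via the eigendecomposition of H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T

def rho_t(t):
    """Unitary evolution of the mixed state: rho(t) = U rho(0) U^dagger."""
    U = propagator(H, t)
    return U @ rho0 @ U.conj().T

t, eps = 0.3, 1e-6
drho_dt = (rho_t(t + eps) - rho_t(t - eps)) / (2 * eps)        # numerical d(rho)/dt
von_neumann_rhs = (H @ rho_t(t) - rho_t(t) @ H) / (1j * hbar)  # (1/i hbar)[H, rho]

trace_error = abs(np.trace(rho_t(t)) - 1.0)
eq_error = np.abs(drho_dt - von_neumann_rhs).max()
```

Trace preservation here plays the role that conservation of phase-space density plays in the classical theorem.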

Remarks
The Liouville equation is valid for both equilibrium and nonequilibrium systems. It is a fundamental equation of
non-equilibrium statistical mechanics.
The Liouville equation is integral to the proof of the fluctuation theorem from which the second law of
thermodynamics can be derived. It is also the key component of the derivation of Green-Kubo relations for linear
transport coefficients such as shear viscosity, thermal conductivity or electrical conductivity.
Virtually any textbook on Hamiltonian mechanics, advanced statistical mechanics, or symplectic geometry will
derive[5] the Liouville theorem.[6][7][8]


References
Modern Physics, by R. Murugeshan, S. Chand publications
Liouville's theorem in curved space-time: Gravitation §22.6, by Misner, Thorne and Wheeler, Freeman
[1] J. W. Gibbs, "On the Fundamental Formula of Statistical Mechanics, with Applications to Astronomy and Thermodynamics." Proceedings of
the American Association for the Advancement of Science, 33, 57-58 (1884). Reproduced in The Scientific Papers of J. Willard Gibbs, Vol II
(1906), p. 16 (http://archive.org/stream/scientificpapers02gibbuoft#page/16/mode/2up).
[2] J. Liouville, Journ. de Math., 3, 349 (1838).
[3] The Theory of Open Quantum Systems, by Breuer and Petruccione, p. 110 (http://books.google.com/books?id=0Yx5VzaMYm8C&pg=PA110).
[4] Statistical Mechanics, by Schwabl, p. 16 (http://books.google.com/books?id=o-HyHvRZ4VcC&pg=PA16).
[5] For a particularly clear derivation see "The Principles of Statistical Mechanics" by R. C. Tolman, Dover (1979), pp. 48-51.
[6] http://hepweb.ucsd.edu/ph110b/110b_notes/node93.html Nearly identical to the proof in this article. Assumes (without proof) the
n-dimensional continuity equation. Retrieved 01/06/2014.
[7] http://www.nyu.edu/classes/tuckerman/stat.mech/lectures/lecture_2/node2.html A rigorous proof based on how the Jacobian volume
element transforms under Hamiltonian mechanics. Retrieved 01/06/2014.
[8] http://www.pma.caltech.edu/~mcc/Ph127/a/Lecture_3.pdf Uses the n-dimensional divergence theorem (without proof). Retrieved
01/06/2014.

Poisson bracket
In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian
mechanics, playing a central role in Hamilton's equations of motion, which govern the time-evolution of a
Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate-transformations,
called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. (A
"canonical coordinate system" consists of canonical position and momentum variables that satisfy canonical
Poisson-bracket relations.) The set of possible canonical transformations is always very rich. For instance, often it is
possible to choose the Hamiltonian itself H= H(q, p; t) as one of the new canonical momentum coordinates.
In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on
a Poisson manifold is a special case. These are all named in honour of Siméon Denis Poisson.

Canonical coordinates
In canonical coordinates (also known as Darboux coordinates) (q_i, p_i) on the phase space, given two functions
f(p_i, q_i, t) and g(p_i, q_i, t), the Poisson bracket takes the form

\{f, g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right).
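The coordinate formula above is easy to implement with a computer algebra system. A sketch using SymPy (the helper name poisson_bracket is ours, not a SymPy built-in) checking the canonical relation {q, p} = 1, antisymmetry, and the Jacobi identity on sample functions:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson_bracket(f, g):
    """{f, g} = df/dq dg/dp - df/dp dg/dq for one canonical pair (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Canonical relation:
pb_qp = sp.expand(poisson_bracket(q, p))  # equals 1

# Antisymmetry and the Jacobi identity on sample functions:
f, g, h = q**2 * p, sp.sin(q) + p**3, q * p
antisym = sp.expand(poisson_bracket(f, g) + poisson_bracket(g, f))
jacobi = sp.expand(
    poisson_bracket(f, poisson_bracket(g, h))
    + poisson_bracket(g, poisson_bracket(h, f))
    + poisson_bracket(h, poisson_bracket(f, g)))
```

Both `antisym` and `jacobi` expand to zero identically, mirroring the algebraic properties used throughout this article.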

Hamilton's equations of motion


Hamilton's equations of motion have an equivalent expression in terms of the Poisson bracket. This may be most
directly demonstrated in an explicit coordinate frame. Suppose that f(p, q, t) is a function on the manifold. Then
from the multivariable chain rule, one has

\frac{d}{dt} f(p, q, t) = \frac{\partial f}{\partial p} \dot{p} + \frac{\partial f}{\partial q} \dot{q} + \frac{\partial f}{\partial t}.

Further, one may take p = p(t) and q = q(t) to be solutions to Hamilton's equations; that is,

\dot{q} = \frac{\partial H}{\partial p} = \{q, H\}, \qquad \dot{p} = -\frac{\partial H}{\partial q} = \{p, H\}.

Then, one has

\frac{d}{dt} f(p, q, t) = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial q} \frac{\partial H}{\partial p} - \frac{\partial f}{\partial p} \frac{\partial H}{\partial q} = \frac{\partial f}{\partial t} + \{f, H\}.


Thus, the time evolution of a function f on a symplectic manifold can be given as a one-parameter family of
symplectomorphisms (i.e. canonical transformations, area-preserving diffeomorphisms), with the time t being the
parameter: Hamiltonian motion is a canonical transformation generated by the Hamiltonian. That is, Poisson
brackets are preserved in it, so that at any time t in the solution to Hamilton's equations, q(t) = exp(−t{H, ·}) q(0),
p(t) = exp(−t{H, ·}) p(0), can serve as the bracket coordinates. Poisson brackets are canonical invariants.
Dropping the coordinates, one has

The operator in the convective part of the derivative, iL̂ = −{H, ·}, is sometimes referred to as the Liouvillian (see
Liouville's theorem (Hamiltonian)).

Constants of motion
An integrable dynamical system will have constants of motion in addition to the energy. Such constants of motion
will commute with the Hamiltonian under the Poisson bracket. Suppose some function f(p, q) is a constant of motion.
This implies that if p(t), q(t) is a trajectory or solution to Hamilton's equations of motion, then

0 = \frac{df}{dt}

along that trajectory. Then one has

0 = \frac{d}{dt} f(p, q) = \frac{\partial f}{\partial p} \dot{p} + \frac{\partial f}{\partial q} \dot{q} = \frac{\partial f}{\partial p} \left( -\frac{\partial H}{\partial q} \right) + \frac{\partial f}{\partial q} \frac{\partial H}{\partial p} = \{f, H\},

where, as above, the intermediate step follows by applying the equations of motion. This equation is known as the
Liouville equation. The content of Liouville's theorem is that the time evolution of a measure (or "distribution
function" on the phase space) is given by the above.
If the Poisson bracket of f and g vanishes ({f,g} = 0), then f and g are said to be in involution. In order for a
Hamiltonian system to be completely integrable, all of the constants of motion must be in mutual involution.
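As a concrete instance of involution, for a particle in a central potential V(r) in the plane, the angular momentum L = x p_y − y p_x Poisson-commutes with the Hamiltonian, so it is a constant of motion in involution with H. A SymPy check (the bracket helper `pb` is ours):

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')
V = sp.Function('V')

H = (px**2 + py**2) / 2 + V(sp.sqrt(x**2 + y**2))  # central-force Hamiltonian
L = x * py - y * px                                # angular momentum about the origin

def pb(f, g):
    """Poisson bracket with canonical pairs (x, p_x) and (y, p_y)."""
    return (sp.diff(f, x) * sp.diff(g, px) - sp.diff(f, px) * sp.diff(g, x)
            + sp.diff(f, y) * sp.diff(g, py) - sp.diff(f, py) * sp.diff(g, y))

bracket_LH = sp.simplify(pb(L, H))  # vanishes: L is a constant of motion
```

Since {L, H} = 0, the pair (H, L) is in mutual involution, which is exactly the condition for complete integrability of this two-degree-of-freedom system.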

The Poisson bracket in coordinate-free language


Let M be a symplectic manifold, that is, a manifold equipped with a symplectic form: a 2-form ω which is both closed
(i.e. its exterior derivative dω = 0) and non-degenerate. For example, in the treatment above, take M to be ℝ²ⁿ and
take

\omega = \sum_{i=1}^{n} dq_i \wedge dp_i.

If ι_v ω is the interior product or contraction operation defined by (ι_v ω)(u) = ω(v, u), then non-degeneracy is
equivalent to saying that for every one-form α there is a unique vector field Ω_α such that ι_{Ω_α} ω = α. Then if H is a
smooth function on M, the Hamiltonian vector field X_H can be defined to be Ω_{dH}. It is easy to see that

X_{p_i} = \frac{\partial}{\partial q_i}, \qquad X_{q_i} = -\frac{\partial}{\partial p_i}.


The Poisson bracket {·, ·} on (M, ω) is a bilinear operation on differentiable functions, defined by

\{f, g\} = \omega(X_f, X_g);

the Poisson bracket of two functions on M is itself a function on M. The Poisson bracket is antisymmetric because

\{f, g\} = \omega(X_f, X_g) = -\omega(X_g, X_f) = -\{g, f\}.

Furthermore,

\{f, g\} = X_g f = \mathcal{L}_{X_g} f.    (1)

Here X_g f denotes the vector field X_g applied to the function f as a directional derivative, and L_{X_g} f denotes the
(entirely equivalent) Lie derivative of the function f.


If α is an arbitrary one-form on M, the vector field Ω_α generates (at least locally) a flow φ_x(t) satisfying the
boundary condition φ_x(0) = x and the first-order differential equation

\frac{d\phi_x}{dt} = \left. \Omega_\alpha \right|_{\phi_x(t)}.

The φ_x(t) will be symplectomorphisms (canonical transformations) for every t as a function of x if and only if
L_{Ω_α} ω = 0; when this is true, Ω_α is called a symplectic vector field. Recalling Cartan's identity
L_X ω = d(ι_X ω) + ι_X dω and dω = 0, it follows that L_{Ω_α} ω = d(ι_{Ω_α} ω) = dα. Therefore Ω_α is a symplectic
vector field if and only if α is a closed form. Since d(df) = 0, it follows that every Hamiltonian vector
field X_f is a symplectic vector field, and that the Hamiltonian flow consists of canonical transformations. From (1)
above, under the Hamiltonian flow X_H,

\frac{d}{dt} f(\phi_x(t)) = X_H f = \{f, H\}.
This is a fundamental result in Hamiltonian mechanics, governing the time evolution of functions defined on phase
space. As noted above, when {f, H} = 0, f is a constant of motion of the system. In addition, in canonical coordinates
(with X_{p_i} = ∂/∂q_i and X_{q_i} = −∂/∂p_i), Hamilton's equations for the time evolution of the system
follow immediately from this formula.

It also follows from (1) that the Poisson bracket is a derivation; that is, it satisfies a non-commutative version of
Leibniz's product rule:

\{fg, h\} = f\{g, h\} + g\{f, h\}, \quad \text{and} \quad \{f, gh\} = g\{f, h\} + h\{f, g\}.    (2)

The Poisson bracket is intimately connected to the Lie bracket of the Hamiltonian vector fields. Because the Lie
derivative is a derivation,

\mathcal{L}_v \iota_w \omega = \iota_{\mathcal{L}_v w} \omega + \iota_w \mathcal{L}_v \omega = \iota_{[v, w]} \omega + \iota_w \mathcal{L}_v \omega.

Thus if v and w are symplectic, using L_v ω = 0, Cartan's identity, and the fact that ι_w ω is a closed form,

\iota_{[v, w]} \omega = \mathcal{L}_v \iota_w \omega = d(\iota_v \iota_w \omega) + \iota_v d(\iota_w \omega) = d(\iota_v \iota_w \omega).

It follows that [v, w] = X_{ω(w, v)}, so that

[X_f, X_g] = X_{\omega(X_g, X_f)} = -X_{\omega(X_f, X_g)} = -X_{\{f, g\}}.    (3)

Thus, the Poisson bracket on functions corresponds to the Lie bracket of the associated Hamiltonian vector fields.
We have also shown that the Lie bracket of two symplectic vector fields is a Hamiltonian vector field and hence is
also symplectic. In the language of abstract algebra, the symplectic vector fields form a subalgebra of the Lie algebra
of smooth vector fields on M, and the Hamiltonian vector fields form an ideal of this subalgebra. The symplectic
vector fields are the Lie algebra of the (infinite-dimensional) Lie group of symplectomorphisms of M.
It is widely asserted that the Jacobi identity for the Poisson bracket,

\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0,

follows from the corresponding identity for the Lie bracket of vector fields, but this is true only up to a locally
constant function. However, to prove the Jacobi identity for the Poisson bracket, it is sufficient to show that

\operatorname{ad}_{\{f, g\}} = -[\operatorname{ad}_f, \operatorname{ad}_g],

where the operator ad_g on smooth functions on M is defined by ad_g(·) = {·, g} and the bracket on the
right-hand side is the commutator of operators, [A, B] = AB − BA. By (1), the operator ad_g is equal to the
operator X_g. The proof of the Jacobi identity follows from (3) because the Lie bracket of vector fields is just their
commutator as differential operators.
The algebra of smooth functions on M, together with the Poisson bracket forms a Poisson algebra, because it is a Lie
algebra under the Poisson bracket, which additionally satisfies Leibniz's rule (2). We have shown that every
symplectic manifold is a Poisson manifold, that is, a manifold with a "curly-bracket" operator on smooth functions
such that the smooth functions form a Poisson algebra. However, not every Poisson manifold arises in this way,
because Poisson manifolds allow for degeneracy which cannot arise in the symplectic case.

A result on conjugate momenta


Given a smooth vector field X on the configuration space, let P_X be its conjugate momentum. The conjugate
momentum mapping is a Lie algebra anti-homomorphism from the Poisson bracket to the Lie bracket:

\{P_X, P_Y\} = -P_{[X, Y]}.

This important result is worth a short proof. Write a vector field X at point q in the configuration space as

X_q = \sum_i X^i(q) \frac{\partial}{\partial q^i},

where the ∂/∂q^i are the local coordinate frame. The conjugate momentum to X has the expression

P_X(q, p) = \sum_i X^i(q) \, p_i,

where the p_i are the momentum functions conjugate to the coordinates. One then has, for a point (q, p) in the phase
space,

\{P_X, P_Y\}(q, p) = \sum_{i,j} \left( p_i \frac{\partial X^i}{\partial q^j} Y^j - p_i \frac{\partial Y^i}{\partial q^j} X^j \right) = -\sum_i p_i \, [X, Y]^i(q) = -P_{[X, Y]}(q, p).

The above holds for all (q, p), giving the desired result.


Quantization
Poisson brackets deform to Moyal brackets upon quantization, that is, they generalize to a different Lie algebra, the
Moyal algebra, or, equivalently in Hilbert space, quantum commutators. The Wigner–İnönü group contraction of
these (the classical limit, ħ → 0) yields the above Lie algebra.

References
Arnold, V. I. (1989). Mathematical Methods of Classical Mechanics (2nd ed.). New York: Springer.
ISBN 978-0-387-96890-2.
Landau, L. D.; Lifshitz, E. M. (1982). Mechanics. Course of Theoretical Physics. Vol. 1 (3rd ed.).
Butterworth-Heinemann. ISBN 978-0-7506-2896-9.
Karasëv, M. V.; Maslov, V. P.: Nonlinear Poisson Brackets: Geometry and Quantization. Translated from the
Russian by A. Sossinsky [A. B. Sosinski] and M. Shishkova. Translations of Mathematical Monographs, 119.
American Mathematical Society, Providence, RI, 1993.

External links
Hazewinkel, Michiel, ed. (2001), "Poisson brackets" [1], Encyclopedia of Mathematics, Springer,
ISBN 978-1-55608-010-4
Eric W. Weisstein, "Poisson bracket [2]", MathWorld.

References
[1] http://www.encyclopediaofmath.org/index.php?title=p/p073270
[2] http://mathworld.wolfram.com/PoissonBracket.html


Lie algebra

"Lie bracket" redirects here. For the operation on vector fields, see Lie bracket of vector fields.
In mathematics, Lie algebras (pronounced /ˈliː/, not /ˈlaɪ/) are algebraic structures which were introduced to study the concept of
infinitesimal transformations. The term "Lie algebra" (after Sophus Lie) was introduced by Hermann Weyl in the
1930s. In older texts, the name "infinitesimal group" is used.
Related mathematical concepts include Lie groups and differentiable manifolds.

Definitions
A Lie algebra is a vector space 𝔤 over some field F together with a binary operation [·, ·]: 𝔤 × 𝔤 → 𝔤 called the
Lie bracket, which satisfies the following axioms:

Bilinearity:
[ax + by, z] = a[x, z] + b[y, z], [z, ax + by] = a[z, x] + b[z, y]
for all scalars a, b in F and all elements x, y, z in 𝔤.

Alternating on 𝔤:
[x, x] = 0
for all x in 𝔤.

The Jacobi identity:
[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0
for all x, y, z in 𝔤.


Note that the bilinearity and alternating properties imply anticommutativity, i.e., [x, y] = −[y, x], for all elements x, y
in 𝔤, while anticommutativity only implies the alternating property if the field's characteristic is not 2.[2]
It is customary to express a Lie algebra in lower-case fraktur, like 𝔤. If a Lie algebra is associated with a Lie group,
then the spelling of the Lie algebra is the same as that of the Lie group. For example, the Lie algebra of SU(n) is written as
𝔰𝔲(n).

Generators and dimension


Elements of a Lie algebra 𝔤 are said to be generators of the Lie algebra if the smallest subalgebra of 𝔤 containing
them is 𝔤 itself. The dimension of a Lie algebra is its dimension as a vector space over F. The cardinality of a
minimal generating set of a Lie algebra is always less than or equal to its dimension.

Homomorphisms, subalgebras, and ideals


The Lie bracket is not associative in general, meaning that [[x, y], z] need not equal [x, [y, z]]. Nonetheless,
much of the terminology that was developed in the theory of associative rings or associative algebras is commonly
applied to Lie algebras. A subspace 𝔥 ⊆ 𝔤 that is closed under the Lie bracket is called a Lie subalgebra. If a
subspace I ⊆ 𝔤 satisfies the stronger condition that

[𝔤, I] ⊆ I,

then I is called an ideal in the Lie algebra 𝔤.[3] A homomorphism between two Lie algebras (over the same base
field) is a linear map that is compatible with the respective commutators:

f([x, y]) = [f(x), f(y)]

for all elements x and y in 𝔤. As in the theory of associative rings, ideals are precisely the kernels of
homomorphisms; given a Lie algebra 𝔤 and an ideal I in it, one constructs the factor algebra 𝔤/I, and the first
isomorphism theorem holds for Lie algebras.

Let S be a subset of 𝔤. The set of elements x such that [x, s] = 0 for all s in S forms a subalgebra called the
centralizer of S. The centralizer of 𝔤 itself is called the center of 𝔤. Similar to centralizers, if S is a subspace, then
the set of x such that [x, s] is in S for all s in S forms a subalgebra called the normalizer of S.

Direct sum
Given two Lie algebras 𝔤 and 𝔤′, their direct sum is the Lie algebra consisting of the vector space 𝔤 ⊕ 𝔤′, of the
pairs (x, x′) with x ∈ 𝔤 and x′ ∈ 𝔤′, with the operation

[(x, x′), (y, y′)] = ([x, y], [x′, y′]).

Properties
Admits an enveloping algebra
See also: Enveloping algebra
For any associative algebra A with multiplication ∗, one can construct a Lie algebra L(A). As a vector space, L(A) is
the same as A. The Lie bracket of two elements of L(A) is defined to be their commutator in A:

[a, b] = a ∗ b − b ∗ a.

The associativity of the multiplication ∗ in A implies the Jacobi identity of the commutator in L(A). For example, the
associative algebra of n×n matrices over a field F gives rise to the general linear Lie algebra 𝔤𝔩_n(F). The
associative algebra A is called an enveloping algebra of the Lie algebra L(A). Every Lie algebra can be embedded
into one that arises from an associative algebra in this fashion; see universal enveloping algebra.
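The construction L(A) can be sanity-checked numerically: for matrices, the commutator [a, b] = ab − ba is bilinear, alternating, and satisfies the Jacobi identity. A sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

def comm(u, v):
    """Lie bracket on L(A): the commutator in the associative algebra A."""
    return u @ v - v @ u

# Alternating property: [a, a] = 0 exactly.
alternating = np.abs(comm(a, a)).max()

# Jacobi identity: the cyclic sum of nested commutators vanishes.
jacobi = np.abs(comm(a, comm(b, c))
                + comm(b, comm(c, a))
                + comm(c, comm(a, b))).max()
```

The Jacobi residual is zero up to floating-point roundoff, exactly as the associativity argument in the text predicts.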


Representation
Given a vector space V, let 𝔤𝔩(V) denote the Lie algebra enveloped by the associative algebra of all linear
endomorphisms of V. A representation of a Lie algebra 𝔤 on V is a Lie algebra homomorphism

π: 𝔤 → 𝔤𝔩(V).

A representation is said to be faithful if its kernel is trivial. Every finite-dimensional Lie algebra has a faithful
representation on a finite-dimensional vector space (Ado's theorem).

For example, ad: 𝔤 → 𝔤𝔩(𝔤) given by ad(x)(y) = [x, y] is a representation of 𝔤 on the vector space 𝔤 called the
adjoint representation. A derivation on the Lie algebra 𝔤 (in fact on any non-associative algebra) is a linear map
δ: 𝔤 → 𝔤 that obeys Leibniz' law, that is,

δ([x, y]) = [δ(x), y] + [x, δ(y)]

for all x and y in the algebra. For any x, ad(x) is a derivation; this is a consequence of the Jacobi identity. Thus, the
image of ad lies in the subalgebra of 𝔤𝔩(𝔤) consisting of derivations on 𝔤. A derivation that happens to be in the
image of ad is called an inner derivation. If 𝔤 is semisimple, every derivation on 𝔤 is inner.

Examples
Vector spaces
Any vector space V endowed with the identically zero Lie bracket becomes a Lie algebra. Such Lie algebras are
called abelian, cf. below. Any one-dimensional Lie algebra over a field is abelian, by the antisymmetry of the Lie
bracket.
The real vector space of all n×n skew-hermitian matrices is closed under the commutator and forms a real Lie
algebra denoted 𝔲(n). This is the Lie algebra of the unitary group U(n).

Subspaces
The subspace of the general linear Lie algebra 𝔤𝔩_n(F) consisting of matrices of trace zero is a subalgebra,[4] the
special linear Lie algebra, denoted 𝔰𝔩_n(F).

Real matrix groups


Any Lie group G defines an associated real Lie algebra 𝔤 = Lie(G). The definition in general is somewhat
technical, but in the case of real matrix groups, it can be formulated via the exponential map, or the matrix
exponential. The Lie algebra 𝔤 consists of those matrices X for which exp(tX) ∈ G for all real numbers t.
The Lie bracket of 𝔤 is given by the commutator of matrices. As a concrete example, consider the special
linear group SL(n, R), consisting of all n×n matrices with real entries and determinant 1. This is a matrix Lie
group, and its Lie algebra consists of all n×n matrices with real entries and trace 0.
This Lie algebra is related to the pseudogroup of diffeomorphisms of M.
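The identity det(exp(X)) = exp(tr X) makes this example easy to check numerically: any trace-zero matrix exponentiates into SL(n, R). A small sketch with NumPy and SciPy (the specific matrix is an arbitrary illustration):

```python
import numpy as np
from scipy.linalg import expm

# A random real 3x3 matrix projected onto trace zero, i.e. an element of sl(3, R).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
X -= np.trace(X) / 3 * np.eye(3)
assert abs(np.trace(X)) < 1e-12

# det(exp(tX)) = exp(t * tr X) = 1 for all t, so exp(tX) lies in SL(3, R).
for t in [-2.0, 0.5, 1.0, 3.0]:
    g = expm(t * X)
    assert abs(np.linalg.det(g) - 1.0) < 1e-9
```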


Three dimensions
The three-dimensional Euclidean space R3 with the Lie bracket given by the cross product of vectors becomes a three-dimensional Lie algebra.
The Heisenberg algebra H3(R) is a three-dimensional Lie algebra generated by elements x, y and z with Lie brackets
[x, y] = z, [x, z] = 0, [y, z] = 0.
It is explicitly realized as the space of 3×3 strictly upper-triangular matrices, with the Lie bracket given by the matrix commutator. Any element of the Heisenberg group is thus representable as a product of group generators, i.e., matrix exponentials of these Lie algebra generators.
The commutation relations between the x, y, and z components of the angular momentum operator in quantum mechanics form a representation of a complex three-dimensional Lie algebra, which is the complexification of the Lie algebra so(3) of the three-dimensional rotation group:
[Lx, Ly] = iħLz, together with its cyclic permutations.
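The Heisenberg relations can be verified directly in the strictly upper-triangular realization. A minimal sketch, with the generators placed in the conventional matrix positions (an assumed but standard choice):

```python
import numpy as np

# Generators of the Heisenberg algebra as 3x3 strictly upper-triangular matrices.
x = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
z = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])

def bracket(a, b):
    return a @ b - b @ a

# The defining relations: [x, y] = z, and z is central.
assert np.allclose(bracket(x, y), z)
assert np.allclose(bracket(x, z), 0)
assert np.allclose(bracket(y, z), 0)
```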

Infinite dimensions
An important class of infinite-dimensional real Lie algebras arises in differential topology. The space of smooth vector fields on a differentiable manifold M forms a Lie algebra, where the Lie bracket is defined to be the commutator of vector fields. One way of expressing the Lie bracket is through the formalism of Lie derivatives, which identifies a vector field X with a first-order partial differential operator LX acting on smooth functions by letting LX(f) be the directional derivative of the function f in the direction of X. The Lie bracket [X, Y] of two vector fields is the vector field defined through its action on functions by the formula
L[X,Y](f) = LX(LY(f)) − LY(LX(f)).
This Lie algebra is related to the pseudogroup of diffeomorphisms of M.
A Kac–Moody algebra is an example of an infinite-dimensional Lie algebra.


The Moyal algebra is an infinite-dimensional Lie algebra which contains all classical Lie algebras as subalgebras.


Structure theory and classification


Lie algebras can be classified to some extent. In particular, this has an application to the classification of Lie groups.

Abelian, nilpotent, and solvable


Analogously to abelian, nilpotent, and solvable groups, defined in terms of the derived subgroups, one can define
abelian, nilpotent, and solvable Lie algebras.
A Lie algebra 𝔤 is abelian if the Lie bracket vanishes, i.e. [x, y] = 0 for all x and y in 𝔤. Abelian Lie algebras correspond to commutative (or abelian) connected Lie groups such as vector spaces K^n or tori T^n, and are all of the form k^n, meaning an n-dimensional vector space with the trivial Lie bracket.

A more general class of Lie algebras is defined by the vanishing of all commutators of given length. A Lie algebra 𝔤 is nilpotent if the lower central series
𝔤 ⊇ [𝔤, 𝔤] ⊇ [[𝔤, 𝔤], 𝔤] ⊇ [[[𝔤, 𝔤], 𝔤], 𝔤] ⊇ ...
becomes zero eventually. By Engel's theorem, a Lie algebra is nilpotent if and only if for every u in 𝔤 the adjoint endomorphism ad(u): 𝔤 → 𝔤, ad(u)(v) = [u, v], is nilpotent.
More generally still, a Lie algebra 𝔤 is said to be solvable if the derived series
𝔤 ⊇ [𝔤, 𝔤] ⊇ [[𝔤, 𝔤], [𝔤, 𝔤]] ⊇ ...
becomes zero eventually.
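The lower central series can be computed concretely for the Heisenberg algebra from the examples above: the first term is spanned by the central element z, and the next term already vanishes, so the algebra is nilpotent. A sketch in NumPy (the matrix realization is the standard upper-triangular one):

```python
import numpy as np

# Heisenberg algebra basis as strictly upper-triangular 3x3 matrices.
x = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
z = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])

def bracket(a, b):
    return a @ b - b @ a

def series_step(g, h):
    # All brackets [a, b] with a in g, b in h (a spanning set for [g, h]).
    return [bracket(a, b) for a in g for b in h]

g = [x, y, z]
g1 = series_step(g, g)    # [g, g]: spanned by z alone
g2 = series_step(g, g1)   # [g, [g, g]]: zero, so the series terminates

assert any(np.allclose(m, z) for m in g1)
assert all(np.allclose(m, 0) for m in g2)
```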


Every finite-dimensional Lie algebra has a unique maximal solvable ideal, called its radical. Under the Lie
correspondence, nilpotent (respectively, solvable) connected Lie groups correspond to nilpotent (respectively,
solvable) Lie algebras.

Simple and semisimple


A Lie algebra 𝔤 is simple if it has no non-trivial ideals and is not abelian. A Lie algebra 𝔤 is called semisimple if its radical is zero. Equivalently, 𝔤 is semisimple if it does not contain any non-zero abelian ideals. In particular, a simple Lie algebra is semisimple. Conversely, it can be proven that any semisimple Lie algebra is the direct sum of its minimal ideals, which are canonically determined simple Lie algebras.
The concept of semisimplicity for Lie algebras is closely related with the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, any finite-dimensional representation of a semisimple Lie algebra is semisimple (i.e., a direct sum of irreducible representations). In general, a Lie algebra is called reductive if the adjoint representation is semisimple. Thus, a semisimple Lie algebra is reductive.

Cartan's criterion
Cartan's criterion gives conditions for a Lie algebra to be nilpotent, solvable, or semisimple. It is based on the notion of the Killing form, a symmetric bilinear form on 𝔤 defined by the formula
K(u, v) = tr(ad(u) ad(v)),
where tr denotes the trace of a linear operator. A Lie algebra 𝔤 is semisimple if and only if the Killing form is nondegenerate. A Lie algebra 𝔤 is solvable if and only if K(𝔤, [𝔤, 𝔤]) = 0.
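Cartan's criterion can be checked numerically for so(3), where the adjoint matrices in the standard basis coincide with the basis matrices themselves. The Killing form works out to −2 times the identity, which is nondegenerate, so so(3) is semisimple. A sketch with NumPy:

```python
import numpy as np

# Standard basis of so(3); in this basis ad(e_i) = e_i.
e1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
e2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
e3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
basis = [e1, e2, e3]

# Killing form K(u, v) = tr(ad(u) ad(v)), computed on the basis.
K = np.array([[np.trace(a @ b) for b in basis] for a in basis])

# K = -2 * identity: nondegenerate, so so(3) is semisimple by Cartan's criterion.
assert np.allclose(K, -2 * np.eye(3))
assert abs(np.linalg.det(K)) > 1e-9
```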


Classification
The Levi decomposition expresses an arbitrary Lie algebra as a semidirect sum of its solvable radical and a semisimple Lie algebra, almost in a canonical way. Furthermore, semisimple Lie algebras over an algebraically closed field have been completely classified through their root systems. However, the classification of solvable Lie algebras is a 'wild' problem and cannot be accomplished in general.

Relation to Lie groups


Although Lie algebras are often studied in their own right, historically they arose as a means to study Lie groups.
Lie's fundamental theorems describe a relation between Lie groups and Lie algebras. In particular, any Lie group
gives rise to a canonically determined Lie algebra (concretely, the tangent space at the identity); and, conversely, for
any Lie algebra there is a corresponding connected Lie group (Lie's third theorem; see the
Baker–Campbell–Hausdorff formula). This Lie group is not determined uniquely; however, any two connected Lie
groups with the same Lie algebra are locally isomorphic, and in particular, have the same universal cover. For
instance, the special orthogonal group SO(3) and the special unitary group SU(2) give rise to the same Lie algebra,
which is isomorphic to R3 with the cross-product, while SU(2) is a simply-connected twofold cover of SO(3).
Given a Lie group, a Lie algebra can be associated to it either by endowing the tangent space to the identity with the
differential of the adjoint map, or by considering the left-invariant vector fields as mentioned in the examples. In the
case of real matrix groups, the Lie algebra consists of those matrices X for which exp(tX) ∈ G for all real numbers
t, where exp is the exponential map.
Some examples of Lie algebras corresponding to Lie groups are the following:
- The Lie algebra gl(n, C) for the group GL(n, C) is the algebra of complex n×n matrices.
- The Lie algebra sl(n, C) for the group SL(n, C) is the algebra of complex n×n matrices with trace 0.
- The Lie algebras o(n) for the group O(n) and so(n) for SO(n) are both the algebra of real anti-symmetric n×n matrices (see Antisymmetric matrix: Infinitesimal rotations for a discussion).
- The Lie algebra u(n) for the group U(n) is the algebra of skew-Hermitian complex n×n matrices, while the Lie algebra su(n) for SU(n) is the algebra of skew-Hermitian, traceless complex n×n matrices.
In the above examples, the Lie bracket [X, Y] (for X and Y matrices in the Lie algebra) is defined as [X, Y] = XY − YX.
Given a set of generators Ta, the structure constants f abc express the Lie brackets of pairs of generators as linear combinations of generators from the set, i.e., [Ta, Tb] = f abc Tc. The structure constants determine the Lie brackets of elements of the Lie algebra, and consequently nearly completely determine the group structure of the Lie group. The structure of the Lie group near the identity element is displayed explicitly by the Baker–Campbell–Hausdorff formula, an expansion in Lie algebra elements X, Y and their Lie brackets, all nested together within a single exponent:
exp(tX) exp(tY) = exp(tX + tY + ½t²[X, Y] + O(t³)).
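The second-order Baker–Campbell–Hausdorff expansion can be tested numerically: for small t, the exact matrix logarithm of exp(tX) exp(tY) should agree with tX + tY + ½t²[X, Y] up to an O(t³) error. A sketch using SciPy (the matrices are arbitrary small random samples):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) * 0.1
Y = rng.standard_normal((3, 3)) * 0.1

t = 1e-2
Z = logm(expm(t * X) @ expm(t * Y))                 # exact log of the product
bch2 = t * (X + Y) + 0.5 * t**2 * (X @ Y - Y @ X)   # BCH through second order

# The truncation error of the second-order expansion is O(t^3).
assert np.linalg.norm(Z - bch2) < 10 * t**3
```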
The mapping from Lie groups to Lie algebras is functorial, which implies that homomorphisms of Lie groups lift to
homomorphisms of Lie algebras, and various properties are satisfied by this lifting: it commutes with composition, it
maps Lie subgroups, kernels, quotients and cokernels of Lie groups to subalgebras, kernels, quotients and cokernels
of Lie algebras, respectively.
The functor L which takes each Lie group to its Lie algebra and each homomorphism to its differential is faithful and
exact. It is however not an equivalence of categories: different Lie groups may have isomorphic Lie algebras (for
example SO(3) and SU(2) ), and there are (infinite dimensional) Lie algebras that are not associated to any Lie
group.
However, when the Lie algebra 𝔤 is finite-dimensional, one can associate to it a simply connected Lie group having 𝔤 as its Lie algebra. More precisely, the Lie algebra functor L has a left adjoint functor Γ from finite-dimensional (real) Lie algebras to Lie groups, factoring through the full subcategory of simply connected Lie groups.[5] In other
words, there is a natural isomorphism of bifunctors
Hom(Γ(𝔤), H) ≅ Hom(𝔤, L(H)).
The adjunction 𝔤 → L(Γ(𝔤)) (corresponding to the identity on Γ(𝔤)) is an isomorphism, and the other adjunction Γ(L(H)) → H is the projection homomorphism from the universal cover group of the identity
component of H to H. It follows immediately that if G is simply connected, then the Lie algebra functor establishes a
bijective correspondence between Lie group homomorphisms GH and Lie algebra homomorphisms L(G)L(H).
The universal cover group above can be constructed as the image of the Lie algebra under the exponential map. More generally, the exponential map is a local homeomorphism from a neighborhood of zero in the Lie algebra to a neighborhood of the identity in the Lie group. But globally, if the Lie group is compact, the exponential map will not be injective, and if the Lie group is not connected, not simply connected, or not compact, the exponential map need not be surjective.
If the Lie algebra is infinite-dimensional, the issue is more subtle. In many instances, the exponential map is not even
locally a homeomorphism (for example, in Diff(S1), one may find diffeomorphisms arbitrarily close to the identity
that are not in the image of exp). Furthermore, some infinite-dimensional Lie algebras are not the Lie algebra of any
group.
The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of
Lie groups and the related matter of the representation theory of Lie groups. Every representation of a Lie algebra
lifts uniquely to a representation of the corresponding connected, simply connected Lie group, and conversely every
representation of any Lie group induces a representation of the group's Lie algebra; the representations are in one to
one correspondence. Therefore, knowing the representations of a Lie algebra settles the question of representations
of the group.
As for classification, it can be shown that any connected Lie group with a given Lie algebra is isomorphic to the
universal cover mod a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the
discrete subgroups of the center, once the classification of Lie algebras is known (solved by Cartan et al. in the
semisimple case).

Category theoretic definition


Using the language of category theory, a Lie algebra can be defined as an object A in Vec_k, the category of vector spaces over a field k of characteristic not 2, together with a morphism [·,·]: A ⊗ A → A, where ⊗ refers to the monoidal product of Vec_k, such that
[·,·] ∘ (id + τ) = 0
[·,·] ∘ ([·,·] ⊗ id) ∘ (id + σ + σ²) = 0,
where τ(a ⊗ b) := b ⊗ a and σ is the cyclic permutation braiding (id ⊗ τ_{A,A}) ∘ (τ_{A,A} ⊗ id).


Notes
[1] http://en.wikipedia.org/w/index.php?title=Template:Lie_groups&action=edit
[2] Humphreys p. 1
[3] Due to the anticommutativity of the commutator, the notions of a left and right ideal in a Lie algebra coincide.
[4] Humphreys p. 2
[5] The adjoint property is discussed in more general context in Hofman & Morris (2007) (e.g., page 130), but is a straightforward consequence of, e.g., Bourbaki (1989) Theorem 1 of page 305 and Theorem 3 of page 310.

References
Beltita, Daniel. Smooth Homogeneous Structures in Operator Theory, CRC Press, 2005. ISBN
978-1-4200-3480-6
Boza, Luis; Fedriani, Eugenio M. & Núñez, Juan. A new method for classifying complex filiform Lie algebras,
Applied Mathematics and Computation, 121 (2-3): 169–175, 2001
Bourbaki, Nicolas. "Lie Groups and Lie Algebras - Chapters 1-3", Springer, 1989, ISBN 3-540-64242-0
Erdmann, Karin & Wildon, Mark. Introduction to Lie Algebras, 1st edition, Springer, 2006. ISBN 1-84628-040-0
Hall, Brian C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer, 2003. ISBN
0-387-40122-9
Hofman, Karl & Morris, Sidney. "The Lie Theory of Connected Pro-Lie Groups", European Mathematical
Society, 2007, ISBN 978-3-03719-032-6
Humphreys, James E. Introduction to Lie Algebras and Representation Theory, Second printing, revised.
Graduate Texts in Mathematics, 9. Springer-Verlag, New York, 1978. ISBN 0-387-90053-5
Jacobson, Nathan, Lie algebras, Republication of the 1962 original. Dover Publications, Inc., New York, 1979.
ISBN 0-486-63832-4
Kac, Victor G. et al. Course notes for MIT 18.745: Introduction to Lie Algebras, math.mit.edu (http://www.
math.mit.edu/~lesha/745lec/)
O'Connor, J.J. & Robertson, E.F. Biography of Sophus Lie, MacTutor History of Mathematics Archive,
www-history.mcs.st-andrews.ac.uk (http://www-history.mcs.st-andrews.ac.uk/Mathematicians/Lie.html)
O'Connor, J.J. & Robertson, E.F. Biography of Wilhelm Killing, MacTutor History of Mathematics Archive,
www-history.mcs.st-andrews.ac.uk (http://www-history.mcs.st-andrews.ac.uk/Mathematicians/Killing.html)
Serre, Jean-Pierre. "Lie Algebras and Lie Groups", 2nd edition, Springer, 2006. ISBN 3-540-55008-9
Steeb, W.-H. Continuous Symmetries, Lie Algebras, Differential Equations and Computer Algebra, second
edition, World Scientific, 2007, ISBN 978-981-270-809-0
Varadarajan, V.S. Lie Groups, Lie Algebras, and Their Representations, 1st edition, Springer, 2004. ISBN
0-387-90969-9


Symplectomorphism
In mathematics, a symplectomorphism is an isomorphism in the category of symplectic manifolds. In classical
mechanics, a symplectomorphism represents a transformation of phase space that is volume-preserving and
preserves the symplectic structure of phase space, and is called a canonical transformation.

Formal definition
A diffeomorphism f: (M, ω) → (N, ω′) between two symplectic manifolds is called a symplectomorphism if
f*ω′ = ω,
where f*ω′ is the pullback of ω′ by f. The symplectic diffeomorphisms from M to M are a (pseudo-)group, called the symplectomorphism group (see below).


The infinitesimal version of symplectomorphisms give the symplectic vector fields. A vector field
is called symplectic, if
Also,

is symplectic, iff the flow

Lie-subalgebra of

of

is symplectic for every

. These vector fields build a

Examples of symplectomorphisms include the canonical transformations of classical mechanics and theoretical
physics, the flow associated to any Hamiltonian function, the map on cotangent bundles induced by any
diffeomorphism of manifolds, and the coadjoint action of an element of a Lie group on a coadjoint orbit.

Flows
Any smooth function on a symplectic manifold gives rise, by definition, to a Hamiltonian vector field and the set of
all such form a subalgebra of the Lie algebra of symplectic vector fields. The integration of the flow of a symplectic
vector field is a symplectomorphism. Since symplectomorphisms preserve the symplectic 2-form and hence the
symplectic volume form, Liouville's theorem in Hamiltonian mechanics follows. Symplectomorphisms that arise
from Hamiltonian vector fields are known as Hamiltonian symplectomorphisms.
Since {H,H} = XH(H) = 0, the flow of a Hamiltonian vector field also preserves H. In physics this is interpreted as
the law of conservation of energy.
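Both facts, conservation of H along the flow and preservation of phase-space area (Liouville's theorem), can be seen concretely for the harmonic oscillator, whose Hamiltonian flow is an exact rotation of the phase plane. A small sketch (the specific Hamiltonian and initial point are illustrative):

```python
import numpy as np

# Harmonic oscillator H(q, p) = (q^2 + p^2) / 2 on the phase plane.
# Hamilton's equations q' = p, p' = -q have the exact flow: rotation by angle t.
def flow(t, q, p):
    c, s = np.cos(t), np.sin(t)
    return c * q + s * p, -s * q + c * p

H = lambda q, p: 0.5 * (q**2 + p**2)

q0, p0 = 1.3, -0.4
for t in [0.1, 1.0, 7.5]:
    q, p = flow(t, q0, p0)
    # The flow preserves H: conservation of energy.
    assert abs(H(q, p) - H(q0, p0)) < 1e-12
    # The Jacobian of the flow map is a rotation matrix, so it has
    # determinant 1 and preserves the area form dq ^ dp.
    J = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    assert abs(np.linalg.det(J) - 1.0) < 1e-12
```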
If the first Betti number of a connected symplectic manifold is zero, symplectic and Hamiltonian vector fields
coincide, so the notions of Hamiltonian isotopy and symplectic isotopy of symplectomorphisms coincide.
We can show that the equations for a geodesic may be formulated as a Hamiltonian flow.

The group of (Hamiltonian) symplectomorphisms


The symplectomorphisms from a manifold back onto itself form an infinite-dimensional pseudogroup. The
corresponding Lie algebra consists of symplectic vector fields. The Hamiltonian symplectomorphisms form a
subgroup, whose Lie algebra is given by the Hamiltonian vector fields. The latter is isomorphic to the Lie algebra of
smooth functions on the manifold with respect to the Poisson bracket, modulo the constants.
The group of Hamiltonian symplectomorphisms of (M, ω) is usually denoted as Ham(M, ω).

Groups of Hamiltonian diffeomorphisms are simple, by a theorem of Banyaga. They have natural geometry given by
the Hofer norm. The homotopy type of the symplectomorphism group for certain simple symplectic four-manifolds,
such as the product of spheres, can be computed using Gromov's theory of pseudoholomorphic curves.


Comparison with Riemannian geometry


Unlike Riemannian manifolds, symplectic manifolds are not very rigid: Darboux's theorem shows that all symplectic
manifolds of the same dimension are locally isomorphic. In contrast, isometries in Riemannian geometry must
preserve the Riemann curvature tensor, which is thus a local invariant of the Riemannian manifold. Moreover, every
function H on a symplectic manifold defines a Hamiltonian vector field XH, which exponentiates to a one-parameter
group of Hamiltonian diffeomorphisms. It follows that the group of symplectomorphisms is always very large, and
in particular, infinite-dimensional. On the other hand, the group of isometries of a Riemannian manifold is always a
(finite-dimensional) Lie group. Moreover, Riemannian manifolds with large symmetry groups are very special, and a
generic Riemannian manifold has no nontrivial symmetries.

Quantizations
Representations of finite-dimensional subgroups of the group of symplectomorphisms (after ħ-deformations, in
general) on Hilbert spaces are called quantizations. When the Lie group is the one defined by a Hamiltonian, it is
called a "quantization by energy". The corresponding operator from the Lie algebra to the Lie algebra of continuous
linear operators is also sometimes called the quantization; this is a more common way of looking at it in physics. See
Weyl quantization, geometric quantization, non-commutative geometry.

Arnold conjecture
A celebrated conjecture of Vladimir Arnold relates the minimum number of fixed points for a Hamiltonian
symplectomorphism φ on M, in case M is a closed manifold, to Morse theory. More precisely, the conjecture states
that φ has at least as many fixed points as the number of critical points a smooth function on M must have
(understood as for a generic case, Morse functions, for which this is a definite finite number which is at least 2).
It is known that this would follow from the Arnold–Givental conjecture named after Arnold and Alexander Givental,
which is a statement on Lagrangian submanifolds. It is proven in many cases by the construction of symplectic Floer
homology.

References
McDuff, Dusa & Salamon, D. (1998), Introduction to Symplectic Topology, Oxford Mathematical Monographs,
ISBN 0-19-850451-9.
Abraham, Ralph & Marsden, Jerrold E. (1978), Foundations of Mechanics, London: Benjamin-Cummings,
ISBN 0-8053-0102-X. See section 3.2.
Symplectomorphism groups
Gromov, M. (1985), "Pseudoholomorphic curves in symplectic manifolds", Inventiones Mathematicae 82 (2):
307–347, Bibcode:1985InMat..82..307G [1], doi:10.1007/BF01388806 [2].
Polterovich, Leonid (2001), The geometry of the group of symplectic diffeomorphism, Basel; Boston: Birkhäuser
Verlag, ISBN 3-7643-6432-7.

References
[1] http://adsabs.harvard.edu/abs/1985InMat..82..307G
[2] http://dx.doi.org/10.1007%2FBF01388806

Dynamical system
This article is about the general aspects of dynamical systems. For technical details, see Dynamical system
(definition). For the study, see Dynamical systems theory.
"Dynamical" redirects here. For other uses, see Dynamical (disambiguation).
A dynamical system is a concept in mathematics where a fixed rule
describes the time dependence of a point in a geometrical space.
Examples include the mathematical models that describe the swinging
of a clock pendulum, the flow of water in a pipe, and the number of
fish each springtime in a lake.
At any given time a dynamical system has a state given by a set of real
numbers (a vector) that can be represented by a point in an appropriate
state space (a geometrical manifold). Small changes in the state of the
system create small changes in the numbers. The evolution rule of the
dynamical system is a fixed rule that describes what future states
follow from the current state. The rule is deterministic; in other words,
for a given time interval only one future state follows from the current
state.

The Lorenz attractor arises in the study of the Lorenz oscillator, a dynamical system.

Overview
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and
engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the
system for only a short time into the future. (The relation is either a differential equation, difference equation or other
time scale.) To determine the state for all future times requires iterating the relation many times, each advancing
time a small step. The iteration procedure is referred to as solving the system or integrating the system. Once the
system can be solved, given an initial point it is possible to determine all its future positions, a collection of points
known as a trajectory or orbit.
Before the advent of fast computing machines, finding an orbit required sophisticated mathematical techniques and
could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic
computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too
complicated to be understood in terms of individual trajectories. The difficulties arise because:
The systems studied may only be known approximatelythe parameters of the system may not be known
precisely or terms may be missing from the equations. The approximations used bring into question the validity or
relevance of numerical solutions. To address these questions several notions of stability have been introduced in
the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical
system implies that there is a class of models or initial conditions for which the trajectories would be equivalent.
The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic,
whereas others may wander through many different states of the system. Applications often require enumerating
these classes or maintaining the system within one class. Classifying all possible trajectories has led to the
qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear
dynamical systems and systems that have two numbers describing a state are examples of dynamical systems
where the possible classes of orbits are understood.

The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter
is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical
system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in
the transition to turbulence of a fluid.
The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute
averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic
systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the
probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of
chaos.
It was in the work of Poincaré that these dynamical systems themes developed.

Basic definitions
Main article: Dynamical system (definition)
A dynamical system is a manifold M called the phase (or state) space endowed with a family of smooth evolution
functions Φt that for any element t ∈ T, the time, map a point of the phase space back into the phase space. The
notion of smoothness changes with applications and the type of manifold. There are several choices for the set T.
When T is taken to be the reals, the dynamical system is called a flow; and if T is restricted to the non-negative reals,
then the dynamical system is a semi-flow. When T is taken to be the integers, it is a cascade or a map; and the
restriction to the non-negative integers is a semi-cascade.

Examples
The evolution function Φt is often the solution of a differential equation of motion
ẋ = v(x).
The equation gives the time derivative, represented by the dot, of a trajectory x(t) on the phase space starting at some point x0. The vector field v(x) is a smooth function that at every point of the phase space M provides the velocity vector of the dynamical system at that point. (These vectors are not vectors in the phase space M, but in the tangent space TxM of the point x.) Given a smooth Φt, an autonomous vector field can be derived from it.
There is no need for higher order derivatives in the equation, nor for time dependence in v(x), because these can be eliminated by considering systems of higher dimensions. Other types of differential equations can be used to define the evolution rule:
G(x, ẋ) = 0
is an example of an equation that arises from the modeling of mechanical systems with complicated constraints.
The differential equations determining the evolution function Φt are often ordinary differential equations: in this case the phase space M is a finite dimensional manifold. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds (those that are locally Banach spaces), in which case the differential equations are partial differential equations. In the late 20th century the dynamical system perspective on partial differential equations started gaining popularity.


Further examples

Logistic map
Complex quadratic polynomial
Dyadic transformation
Tent map
Double pendulum
Arnold's cat map
Horseshoe map
Baker's map is an example of a chaotic piecewise linear map
Billiards and outer billiards
Hénon map
Lorenz system
Circle map
Rössler map
Kaplan-Yorke map
List of chaotic maps
Swinging Atwood's machine

Quadratic map simulation system


Bouncing ball dynamics

Linear dynamical systems


Main article: Linear dynamical system
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a
linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented
by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle:
if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so
will u(t)+w(t).

Flows
For a flow, the vector field v(x) is a linear function of the position in the phase space, that is,
ẋ = v(x) = Ax + b,
with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:
x(t) = x0 + bt.
When b is zero and A ≠ 0, the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,
x(t) = exp(At) x0.
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the
eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the
origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either
converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive
dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not
sufficient) conditions for chaotic behavior.
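The solution formula x(t) = exp(At) x0 and the eigenvalue criterion can be illustrated numerically: when every eigenvalue of A has negative real part, every orbit converges to the equilibrium at the origin. A sketch using SciPy (A and x0 are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

# A stable linear flow: both eigenvalues of A (-1 and -3) have negative real part.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
x0 = np.array([1.0, 1.0])

# The orbit of x' = A x is x(t) = exp(At) x0; it converges to the origin.
norms = [np.linalg.norm(expm(t * A) @ x0) for t in (0.0, 1.0, 5.0, 10.0)]
assert all(a > b for a, b in zip(norms, norms[1:]))  # monotone decay
assert norms[-1] < 1e-3
```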


[Figure: linear vector fields and a few trajectories.]

Maps
A discrete-time, affine dynamical system has the form
x_{n+1} = A x_n + b,
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)⁻¹ b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^n x0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points αu1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
There are also many other discrete dynamical systems.
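The change of coordinates above amounts to solving for the fixed point x* = (1 − A)⁻¹ b, and when the spectral radius of A is below one, iterates from any start converge to it. A sketch in NumPy (the particular A, b and starting point are arbitrary illustrations):

```python
import numpy as np

# Affine map x -> A x + b with both eigenvalues of A inside the unit circle.
A = np.array([[0.5, 0.1], [0.0, 0.8]])
b = np.array([1.0, 2.0])

# The fixed point solves x* = A x* + b, i.e. x* = (I - A)^{-1} b.
x_star = np.linalg.solve(np.eye(2) - A, b)
assert np.allclose(A @ x_star + b, x_star)

# Iterates from any start converge to x*, since the spectral radius of A is < 1.
x = np.array([10.0, -10.0])
for _ in range(200):
    x = A @ x + b
assert np.allclose(x, x_star)
```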

Local dynamics
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is
sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will
remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth
deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic
orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of
dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but
computable) that makes the dynamical system as simple as possible.

Rectification
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field
v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of
parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight
line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the
whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire
phase space. There may be singular points in the vector field (where v(x)=0); or the patches may become smaller
and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out
in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops
around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.


Near periodic orbits


In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an
approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit and
consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré
section S(γ, x0) of the orbit γ. The flow now defines a map, the Poincaré map F: S → S, for points starting in S and
returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the
time it takes x0.
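The return map can be made concrete numerically. The sketch below is a hypothetical example, not from the text: it integrates a planar flow with an attracting periodic orbit on the unit circle using RK4, and detects upward crossings of the section S = {y = 0, x > 0}; the step size and tolerances are illustrative choices.

```python
# Numerical sketch of a Poincare return map for the planar flow
#   x' = x - y - x*(x^2 + y^2),  y' = x + y - y*(x^2 + y^2),
# which has an attracting periodic orbit on the unit circle.
# Section S: the half-line {y = 0, x > 0}; we record upward crossings.

def v(s):
    x, y = s
    r2 = x * x + y * y
    return (x - y - x * r2, x + y - y * r2)

def rk4_step(s, h):
    k1 = v(s)
    k2 = v((s[0] + h / 2 * k1[0], s[1] + h / 2 * k1[1]))
    k3 = v((s[0] + h / 2 * k2[0], s[1] + h / 2 * k2[1]))
    k4 = v((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def poincare_map(x0, h=1e-3):
    """Start at (x0, 0) in S and return the x-coordinate at the next
    upward crossing of S (the Poincare map F: S -> S)."""
    s = (x0, 0.0)
    prev_y = 0.0
    while True:
        s = rk4_step(s, h)
        if prev_y < 0.0 <= s[1] and s[0] > 0.0:
            return s[0]
        prev_y = s[1]

x = 0.3
for _ in range(3):
    x = poincare_map(x)
# Iterating F drives x toward the fixed point x = 1, where the
# periodic orbit intersects the section.
print(abs(x - 1.0) < 1e-2)  # True
```

Iterates of the map converge to the fixed point of F, illustrating the fixed-point analysis described next.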
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a
translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J·x + O(x²), so a change
of coordinates h can only be expected to simplify F to its linear part

h ∘ F = J ∘ h

This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major
tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the
process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J, they will be resonant if one
eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − Σ (multiples of
other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also
known as the small divisor problem.

Conjugation results
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree
of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be
complex numbers. When the eigenvalues of J are not on the unit circle, the dynamics near the fixed point x0 of F is
called hyperbolic, and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous
function that maps the neighborhood of the fixed point of the map to the linear map J·x. The hyperbolic case is also
structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map, and these
small changes will be reflected in small changes in the position of the eigenvalues of J in the complex plane, implying that
the map is still hyperbolic.
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.

Bifurcation theory
Main article: Bifurcation theory
When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the
phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase
space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical
system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus)
and studies its behavior as a function of the parameter. At the bifurcation point the structure may change its
stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps
and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog
the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family F can be characterized by the eigenvalues of the
first derivative of the system DF(x0) computed at the bifurcation point. For a map, the bifurcation will occur when
there are eigenvalues of DF on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary
axis. For more information, see the main article on Bifurcation theory.

Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario
describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example,
Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling
bifurcations.
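The period-doubling cascade can be observed directly by iterating the logistic map. The sketch below is an illustrative script (transient length and rounding are ad hoc choices, not from the text): it counts the points on the attractor for parameter values on either side of the first two period-doubling bifurcations.

```python
# Period-doubling in the logistic map x -> r*x*(1 - x): count the
# distinct points on the attractor after discarding a transient.
def attractor_size(r, x0=0.5, transient=2000, sample=64):
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    points = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        points.add(round(x, 6))
    return len(points)

print(attractor_size(2.9))  # 1: stable fixed point
print(attractor_size(3.2))  # 2: period-2 orbit after the first doubling
print(attractor_size(3.5))  # 4: period-4 orbit after the second doubling
```

Each bifurcation doubles the size of the attracting periodic orbit, the qualitative change in phase space that bifurcation theory catalogs.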

Ergodic systems
Main article: Ergodic theory
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a
ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws
as long as the coordinates are the position and the momentum and the volume is measured in units of
(position) × (momentum). The flow takes points of a subset A into the points Φt(A), and invariance of the phase
space means that

vol(A) = vol(Φt(A))

In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum
such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville
measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial
condition. Because of energy conservation, only the states with the same energy as the initial condition are
accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume
of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the
phase space has a finite Liouville volume and let F be a phase-space volume-preserving map and A a subset of the
phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used
by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space
averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory
spends in a region A is vol(A)/vol(Ω).
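For a simple transformation this equality of time and space averages can be checked numerically. The sketch below is an illustrative example, not part of the original text: it uses the irrational rotation of the circle, which preserves Lebesgue measure and is ergodic, so the visit frequency of A = [0, 0.3) approaches vol(A)/vol(Ω) = 0.3.

```python
# Time average vs. space average for the irrational rotation
# x -> x + alpha (mod 1), which preserves Lebesgue measure on [0, 1).
import math

alpha = math.sqrt(2) - 1        # an irrational rotation number
x, hits, n = 0.1, 0, 200_000
for _ in range(n):
    if x < 0.3:                 # is the orbit currently in A = [0, 0.3)?
        hits += 1
    x = (x + alpha) % 1.0

time_average = hits / n
print(abs(time_average - 0.3) < 1e-2)  # True: matches vol(A)/vol(Omega)
```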
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical
mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical
systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a
function that associates a number to each point of the phase space (say instantaneous pressure, or average height).
The value of an observable can be computed at another time by using the evolution function Φt. This introduces an
operator Ut, the transfer operator,

(Ut a)(x) = a(Φt(x))

By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties
of Φt. In using the Koopman approach of considering the action of the flow on an observable function, the
finite-dimensional nonlinear problem involving Φt gets mapped into an infinite-dimensional linear problem
involving U.
The Liouville measure restricted to the energy surface is the basis for the averages computed in equilibrium
statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the
Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of
dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are
defined on attractors of chaotic systems.


Nonlinear dynamical systems and chaos


Main article: Chaos theory
Simple nonlinear dynamical systems and even piecewise linear systems can exhibit a completely unpredictable
behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This seemingly
unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that
exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent space perpendicular to a
trajectory can be well separated into two parts: one with the points that converge towards the orbit (the stable
manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is
not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather
to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the
possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
Note that the chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve
complex, even chaotic, behavior. Chaos theory has been so surprising because chaos can be found within almost
trivial systems. The logistic map is only a second-degree polynomial; the horseshoe map is piecewise linear.
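A short experiment shows how trivial the ingredients can be. The script below is illustrative (the starting point and the size of the perturbation are arbitrary): it iterates the logistic map at r = 4 from two initial conditions differing by 1e-10 and watches their separation grow to order one.

```python
# Sensitive dependence on initial conditions in the logistic map at r = 4.
def step(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # two nearly identical initial conditions
max_gap = 0.0
for _ in range(100):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap > 0.1)  # True: the difference of 1e-10 has grown to order one
```

The separation roughly doubles per iteration (the Lyapunov exponent of this map is ln 2), so predictability is lost after a few dozen steps.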

Geometrical definition
A dynamical system is the tuple ⟨M, f, T⟩, with M a manifold (locally a Banach space or Euclidean space), T
the domain for time (non-negative reals, the integers, ...) and f an evolution rule t ↦ f^t (with t ∈ T) such that f^t
is a diffeomorphism of the manifold to itself. So, f is a mapping of the time-domain T into the space of
diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T.
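Although not spelled out above, the evolution rule of a flow composes like a one-parameter group: f^(t+s) = f^t ∘ f^s, with f^0 the identity. A minimal check for the hypothetical flow f^t(x) = x·e^t of x' = x on M = R:

```python
# The evolution rule of a flow satisfies f^(t+s) = f^t o f^s.
import math

def f(t, x):
    return x * math.exp(t)   # flow of x' = x on M = R

x, t, s = 1.7, 0.4, 1.1
print(abs(f(t + s, x) - f(t, f(s, x))) < 1e-12)  # True: group property
print(f(0.0, 2.5) == 2.5)                        # True: f^0 is the identity
```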

Measure theoretical definition


Main article: Measure-preserving dynamical system
A dynamical system may be defined formally as a measure-preserving transformation of a sigma-algebra, the
quadruplet (X, Σ, μ, τ). Here, X is a set, and Σ is a sigma-algebra on X, so that the pair (X, Σ) is a measurable space.
μ is a finite measure on the sigma-algebra, so that the triplet (X, Σ, μ) is a probability space. A map τ: X → X is said to
be Σ-measurable if and only if, for every σ ∈ Σ, one has τ⁻¹(σ) ∈ Σ. A map τ is said to preserve the measure if
and only if, for every σ ∈ Σ, one has μ(τ⁻¹(σ)) = μ(σ). Combining the above, a map τ is said to be a
measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is
measure-preserving. The quadruple (X, Σ, μ, τ), for such a τ, is then defined to be a dynamical system.
The map τ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates
τⁿ = τ ∘ τ ∘ ... ∘ τ for integer n are studied. For continuous dynamical systems, the map τ is understood to be a
finite time evolution map and the construction is more complicated.
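As a concrete instance of a measure-preserving map, the sketch below (illustrative; the grid resolution is an arbitrary choice) checks numerically that the doubling map T(x) = 2x mod 1 preserves Lebesgue measure on X = [0, 1): the preimage of an interval has the same measure as the interval itself.

```python
# The doubling map T(x) = 2x mod 1 preserves Lebesgue measure:
# mu(T^-1([a, b))) = mu([a, b)) for every interval [a, b) in [0, 1).

def T(x):
    return (2.0 * x) % 1.0

def measure_of_preimage(a, b, n=100_000):
    # Estimate mu(T^-1([a, b))) by counting midpoints of a uniform grid
    # whose image lands in [a, b).
    hits = sum(1 for i in range(n) if a <= T((i + 0.5) / n) < b)
    return hits / n

print(abs(measure_of_preimage(0.2, 0.7) - 0.5) < 1e-3)  # True: b - a = 0.5
```

The preimage here is two intervals of half the length, so the measures add back up to b − a.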

Examples of dynamical systems

Arnold's cat map


Baker's map, an example of a chaotic piecewise linear map
Circle map
Double pendulum
Billiards and Outer Billiards
Hénon map

Horseshoe map
Irrational rotation
List of chaotic maps

Logistic map
Lorenz system
Rössler map

Multidimensional generalization
Dynamical systems are defined over a single independent variable, usually thought of as time. A more general class
of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such
systems are useful for modeling, for example, image processing.

References
Further reading
Works providing a broad coverage:
Ralph Abraham and Jerrold E. Marsden (1978). Foundations of mechanics. Benjamin–Cummings.
ISBN 0-8053-0102-X. (available as a reprint: ISBN 0-201-40840-6)
Encyclopaedia of Mathematical Sciences (ISSN 0938-0396) has a sub-series on dynamical systems with reviews
of current research.
Christian Bonatti, Lorenzo J. Díaz, Marcelo Viana (2005). Dynamics Beyond Uniform Hyperbolicity: A Global
Geometric and Probabilistic Perspective. Springer. ISBN 3-540-22066-6.
Stephen Smale (1967). "Differentiable dynamical systems". Bulletin of the American Mathematical Society 73
(6): 747–817. doi:10.1090/S0002-9904-1967-11798-1 (http://dx.doi.org/10.1090/S0002-9904-1967-11798-1).
Introductory texts with a unique perspective:
V. I. Arnold (1982). Mathematical methods of classical mechanics. Springer-Verlag. ISBN 0-387-96890-3.
Jacob Palis and Wellington de Melo (1982). Geometric theory of dynamical systems: an introduction.
Springer-Verlag. ISBN 0-387-90668-1.
David Ruelle (1989). Elements of Differentiable Dynamics and Bifurcation Theory. Academic Press.
ISBN 0-12-601710-7.
Tim Bedford, Michael Keane and Caroline Series, eds. (1991). Ergodic theory, symbolic dynamics and hyperbolic
spaces. Oxford University Press. ISBN 0-19-853390-X.
Ralph H. Abraham and Christopher D. Shaw (1992). Dynamics: the Geometry of Behavior, 2nd edition.
Addison-Wesley. ISBN 0-201-56716-4.
Textbooks
Kathleen T. Alligood, Tim D. Sauer and James A. Yorke (2000). Chaos. An Introduction to Dynamical Systems.
Springer Verlag. ISBN 0-387-94677-2.
Oded Galor (2011). Discrete Dynamical Systems. Springer. ISBN 978-3-642-07185-0.
Anatole Katok and Boris Hasselblatt (1996). Introduction to the modern theory of dynamical systems. Cambridge.
ISBN 0-521-57557-5.
Guenter Ludyk (1985). Stability of Time-variant Discrete-Time Systems. Springer. ISBN 3-528-08911-3.
Stephen Lynch (2010). Dynamical Systems with Applications using Maple, 2nd Ed. Springer.
ISBN 0-8176-4389-3.
Stephen Lynch (2007). Dynamical Systems with Applications using Mathematica. Springer. ISBN 0-8176-4482-2.
Stephen Lynch (2004). Dynamical Systems with Applications using MATLAB. Springer. ISBN 0-8176-4321-4.
James Meiss (2007). Differential Dynamical Systems. SIAM. ISBN 0-89871-635-7.

Morris W. Hirsch, Stephen Smale and Robert Devaney (2003). Differential Equations, Dynamical Systems, and an
Introduction to Chaos. Academic Press. ISBN 0-12-349703-5.
Julien Clinton Sprott (2003). Chaos and time-series analysis. Oxford University Press. ISBN 0-19-850839-5.
Steven H. Strogatz (1994). Nonlinear dynamics and chaos: with applications to physics, biology, chemistry and
engineering. Addison Wesley. ISBN 0-201-54344-3.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems (http://www.mat.univie.ac.at/
~gerald/ftp/book-ode/). Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Stephen Wiggins (2003). Introduction to Applied Dynamical Systems and Chaos. Springer. ISBN 0-387-00177-8.
Popularizations:
Florin Diacu and Philip Holmes (1996). Celestial Encounters. Princeton. ISBN 0-691-02743-9.
James Gleick (1988). Chaos: Making a New Science. Penguin. ISBN 0-14-009250-1.
Ivar Ekeland (1990). Mathematics and the Unexpected (Paperback). University Of Chicago Press.
ISBN 0-226-19990-8.
Ian Stewart (1997). Does God Play Dice? The New Mathematics of Chaos. Penguin. ISBN 0-14-025602-4.

External links
Interactive applet for the Standard and Henon Maps (http://complexity.xozzox.de/nonlinmappings.html) by A.
Luhn
A collection of dynamic and non-linear system models and demo applets (http://vlab.infotech.monash.edu.au/
simulations/non-linear/) (in Monash University's Virtual Lab)
Arxiv preprint server (http://www.arxiv.org/list/math.DS/recent) has daily submissions of (non-refereed)
manuscripts in dynamical systems.
DSWeb (http://www.dynamicalsystems.org/) provides up-to-date information on dynamical systems and its
applications.
Encyclopedia of dynamical systems (http://www.scholarpedia.org/article/
Encyclopedia_of_Dynamical_Systems) A part of Scholarpedia peer reviewed and written by invited experts.
Nonlinear Dynamics (http://www.egwald.ca/nonlineardynamics/index.php). Models of bifurcation and chaos
by Elmer G. Wiens
Oliver Knill (http://www.dynamical-systems.org) has a series of examples of dynamical systems with
explanations and interactive controls.
Sci.Nonlinear FAQ 2.0 (Sept 2003) (http://amath.colorado.edu/faculty/jdm/faq-Contents.html) provides
definitions, explanations and resources related to nonlinear science
Online books or lecture notes:
Geometrical theory of dynamical systems (http://arxiv.org/pdf/math.HO/0111177). Nils Berglund's lecture
notes for a course at ETH at the advanced undergraduate level.
Dynamical systems (http://www.ams.org/online_bks/coll9/). George D. Birkhoff's 1927 book already takes a
modern approach to dynamical systems.
Chaos: classical and quantum (http://chaosbook.org). An introduction to dynamical systems from the periodic
orbit point of view.
Modeling Dynamic Systems (http://www.embedded.com/2000/0008/0008feat2.htm). An introduction to the
development of mathematical models of dynamic systems.
Learning Dynamical Systems (http://www.cs.brown.edu/research/ai/dynamics/tutorial/home.html).
Tutorial on learning dynamical systems.
Ordinary Differential Equations and Dynamical Systems (http://www.mat.univie.ac.at/~gerald/ftp/book-ode/
). Lecture notes by Gerald Teschl
Research groups:

Dynamical Systems Group Groningen (http://www.math.rug.nl/~broer/), IWI, University of Groningen.
Chaos @ UMD (http://www-chaos.umd.edu/). Concentrates on the applications of dynamical systems.
Dynamical Systems (http://www.math.sunysb.edu/dynamics/), SUNY Stony Brook. Lists of conferences,
researchers, and some open problems.
Center for Dynamics and Geometry (http://www.math.psu.edu/dynsys/), Penn State.
Control and Dynamical Systems (http://www.cds.caltech.edu/), Caltech.
Laboratory of Nonlinear Systems (http://lanoswww.epfl.ch/), École Polytechnique Fédérale de Lausanne
(EPFL).
Center for Dynamical Systems (http://www.math.uni-bremen.de/ids.html/), University of Bremen
Systems Analysis, Modelling and Prediction Group (http://www.eng.ox.ac.uk/samp/), University of Oxford
Non-Linear Dynamics Group (http://sd.ist.utl.pt/), Instituto Superior Técnico, Technical University of Lisbon
Dynamical Systems (http://www.impa.br/), IMPA, Instituto Nacional de Matemática Pura e Aplicada.
Nonlinear Dynamics Workgroup (http://ndw.cs.cas.cz/), Institute of Computer Science, Czech Academy of
Sciences.
Simulation software based on Dynamical Systems approach:
FyDiK (http://fydik.kitnarf.cz/)
iDMC (http://idmc.googlecode.com), simulation and dynamical analysis of nonlinear models

Hamiltonian vector field


In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field, defined for any
energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a
Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral
curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The
diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as
canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics.
Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two
Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with
the Hamiltonian given by the Poisson bracket of f and g.

Definition
Suppose that (M, ω) is a symplectic manifold. Since the symplectic form ω is nondegenerate, it sets up a
fiberwise-linear isomorphism

ω♭ : TM → T*M

between the tangent bundle TM and the cotangent bundle T*M, with the inverse

ω♯ = (ω♭)⁻¹ : T*M → TM

Therefore, one-forms on a symplectic manifold M may be identified with vector fields, and every differentiable
function H: M → R determines a unique vector field XH, called the Hamiltonian vector field with the Hamiltonian
H, by requiring that for every vector field Y on M, the identity

dH(Y) = ω(XH, Y)

must hold.
Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to be mindful of varying
conventions in physical and mathematical literature.


Examples
Suppose that M is a 2n-dimensional symplectic manifold. Then locally, one may choose canonical coordinates (q1,
..., qn, p1, ..., pn) on M, in which the symplectic form is expressed as

ω = Σi dqi ∧ dpi

where d denotes the exterior derivative and ∧ denotes the exterior product. Then the Hamiltonian vector field with
Hamiltonian H takes the form

XH = (∂H/∂pi, −∂H/∂qi) = Ω dH

where Ω is the 2n × 2n square matrix

Ω = (  0   In )
    ( −In   0 )

and dH denotes the gradient of H with respect to the coordinates (q1, ..., qn, p1, ..., pn).
Suppose that M = R2n is the 2n-dimensional symplectic vector space with (global) canonical coordinates.
If H = pi, then XH = ∂/∂qi;
if H = qi, then XH = −∂/∂pi;
if H = ½ Σi pi², then XH = Σi pi ∂/∂qi;
if H = ½ Σi,j aij qi qj with aij = aji, then XH = −Σi,j aij qi ∂/∂pj.

Properties
The assignment f ↦ Xf is linear, so that the sum of two Hamiltonian functions transforms into the sum of the
corresponding Hamiltonian vector fields.
Suppose that (q1, ..., qn, p1, ..., pn) are canonical coordinates on M (see above). Then a curve γ(t) = (q(t), p(t)) is an
integral curve of the Hamiltonian vector field XH if and only if it is a solution of Hamilton's equations:

q̇i = ∂H/∂pi,   ṗi = −∂H/∂qi
The Hamiltonian H is constant along the integral curves, because

d/dt H(γ(t)) = dH(γ̇(t)) = dH(XH) = ω(XH, XH) = 0

That is, H(γ(t)) is actually independent of t. This property corresponds to the conservation of energy in
Hamiltonian mechanics.
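This conservation law can be seen numerically. The sketch below is an illustrative script, not from the text: it integrates the Hamiltonian vector field XH = (p, −q) of the harmonic oscillator H = (q² + p²)/2 with a classical RK4 scheme and checks that H stays constant over a full period.

```python
# Energy conservation along integral curves of X_H for the
# harmonic oscillator H(q, p) = (q**2 + p**2) / 2, where X_H = (p, -q).
import math

def flow(q, p, t, steps=100_000):
    h = t / steps
    for _ in range(steps):           # classical RK4 on (q', p') = (p, -q)
        k1 = (p, -q)
        k2 = (p + h / 2 * k1[1], -(q + h / 2 * k1[0]))
        k3 = (p + h / 2 * k2[1], -(q + h / 2 * k2[0]))
        k4 = (p + h * k3[1], -(q + h * k3[0]))
        q += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = flow(q0, p0, 2 * math.pi)            # one full period
print(abs((q1**2 + p1**2) / 2 - 0.5) < 1e-9)  # True: H is constant
print(abs(q1 - q0) < 1e-6)                    # True: the orbit closes
```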
More generally, if two functions F and H have a zero Poisson bracket (cf. below), then F is constant along the
integral curves of H, and similarly, H is constant along the integral curves of F. This fact is the abstract
mathematical principle behind Noether's theorem.
The symplectic form ω is preserved by the Hamiltonian flow. Equivalently, the Lie derivative L_XH ω = 0.


Poisson bracket
The notion of a Hamiltonian vector field leads to a skew-symmetric, bilinear operation on the differentiable
functions on a symplectic manifold M, the Poisson bracket, defined by the formula

{f, g} = ω(Xg, Xf) = dg(Xf) = L_Xf g

where L_X denotes the Lie derivative along a vector field X. Moreover, one can check that the following identity
holds:

X_{f,g} = [Xf, Xg]

where the right hand side represents the Lie bracket of the Hamiltonian vector fields with Hamiltonians f and g. As a
consequence (a proof at Poisson bracket), the Poisson bracket satisfies the Jacobi identity

{{f, g}, h} + {{g, h}, f} + {{h, f}, g} = 0

which means that the vector space of differentiable functions on M, endowed with the Poisson bracket, has the
structure of a Lie algebra over R, and the assignment f ↦ Xf is a Lie algebra homomorphism, whose kernel consists
of the locally constant functions (constant functions if M is connected).
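In canonical coordinates on R², the bracket has the familiar coordinate expression {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q (up to the sign conventions noted earlier). The sketch below is an illustrative sympy script with arbitrarily chosen test functions: it verifies the canonical relation {q, p} = 1 and the Jacobi identity.

```python
# The canonical Poisson bracket on R^2, checked symbolically with sympy.
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f, g, h = q**2 * p, sp.sin(q), p**3 + q   # arbitrary smooth functions

print(pb(q, p))  # 1: the canonical relation {q, p} = 1
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
print(sp.simplify(jacobi) == 0)  # True: the Jacobi identity holds
```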

References
Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin–Cummings.
ISBN 0-8053-0102-X. See section 3.2.
Arnol'd, V.I. (1997). Mathematical Methods of Classical Mechanics. Berlin etc: Springer. ISBN 0-387-96890-3.
Frankel, Theodore (1997). The Geometry of Physics. Cambridge: Cambridge University Press.
ISBN 0-521-38753-1.
McDuff, Dusa; Salamon, D. (1998). Introduction to Symplectic Topology. Oxford Mathematical Monographs.
ISBN 0-19-850451-9.


Generalized forces
Generalized forces find use in Lagrangian mechanics, where they play a role conjugate to generalized coordinates.
They are obtained from the applied forces, Fi, i=1,..., n, acting on a system that has its configuration defined in terms
of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the
variation of a generalized coordinate.

Virtual work
Generalized forces can be obtained from the computation of the virtual work, δW, of the applied forces.
The virtual work of the forces, Fi, acting on the particles Pi, i = 1, ..., n, is given by

δW = Σi Fi · δri

where δri is the virtual displacement of the particle Pi.

Generalized coordinates
Let the position vectors of each of the particles, ri, be a function of the generalized coordinates, qj, j = 1, ..., m. Then
the virtual displacements δri are given by

δri = Σj (∂ri/∂qj) δqj

where δqj is the virtual displacement of the generalized coordinate qj.

The virtual work for the system of particles becomes

δW = Σi Fi · Σj (∂ri/∂qj) δqj

Collect the coefficients of δqj so that

δW = Σj (Σi Fi · ∂ri/∂qj) δqj

Generalized forces
The virtual work of a system of particles can be written in the form

δW = Q1 δq1 + ... + Qm δqm

where

Qj = Σi Fi · ∂ri/∂qj,   j = 1, ..., m,

are called the generalized forces associated with the generalized coordinates qj, j = 1, ..., m.
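As a worked instance (a hypothetical pendulum example, not taken from the text), the script below computes Qθ for a particle on a massless rod of length l under gravity, where r(θ) = (l sin θ, −l cos θ); a finite-difference derivative reproduces the analytic generalized force −m g l sin θ.

```python
# Generalized force Q_theta = F . dr/dtheta for a simple pendulum.
import math

m, g, l = 2.0, 9.81, 0.5   # illustrative mass, gravity, rod length

def r(theta):
    # position of the bob, measured from the pivot
    return (l * math.sin(theta), -l * math.cos(theta))

def generalized_force(theta, eps=1e-6):
    F = (0.0, -m * g)                        # gravity on the bob
    xp, yp = r(theta + eps)
    xm, ym = r(theta - eps)
    dr = ((xp - xm) / (2 * eps), (yp - ym) / (2 * eps))  # dr/dtheta
    return F[0] * dr[0] + F[1] * dr[1]

theta = 0.7
expected = -m * g * l * math.sin(theta)      # analytic result
print(abs(generalized_force(theta) - expected) < 1e-6)  # True
```

Here the single generalized coordinate θ replaces the two Cartesian coordinates of the bob, and Qθ is exactly the torque of gravity about the pivot.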


Velocity formulation
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the
velocities of the system. For the n particle system, let the velocity of each particle Pi be Vi; then the virtual
displacement δri can also be written in the form[1]

δri = Σj (∂Vi/∂q̇j) δqj

This means that the generalized force, Qj, can also be determined as

Qj = Σi Fi · ∂Vi/∂q̇j
D'Alembert's principle
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force
(apparent force), called D'Alembert's principle. The inertia force of a particle, Pi, of mass mi is

Fi* = −mi Ai

where Ai is the acceleration of the particle.
If the configuration of the particle system depends on the generalized coordinates qj, j = 1, ..., m, then the generalized
inertia force is given by

Qj* = Σi Fi* · ∂Vi/∂q̇j

D'Alembert's form of the principle of virtual work yields

δW = Σj (Qj + Qj*) δqj

References
[1] T. R. Kane and D. A. Levinson, Dynamics, Theory and Applications (http://www.amazon.com/Dynamics-Theory-Applications-Mechanical-Engineering/dp/0070378460), McGraw-Hill, NY, 2005.


Hamiltonian mechanics

Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics and predicts the same
outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more
abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which
later contributed to the formulation of quantum mechanics.
Hamiltonian mechanics was first formulated by William Rowan Hamilton in 1833, starting from Lagrangian
mechanics, a previous reformulation of classical mechanics introduced by Joseph Louis Lagrange in 1788.

Overview
In Hamiltonian mechanics, a classical physical system is described by a set of canonical coordinates r = (q, p),
where each component of the coordinates qi, pi is indexed to the frame of reference of the system.

[Figure: Illustration of a generalized coordinate q for one degree of freedom, of a particle moving in a complicated
path. Four possibilities of q for the particle's path are shown. For more particles each with their own degrees of
freedom, there are more coordinates.]

The time evolution of the system is uniquely defined by Hamilton's equations:[1]

dp/dt = −∂H/∂q,   dq/dt = +∂H/∂p

where H = H(q, p, t) is the Hamiltonian, which corresponds to the total energy of the system. For a closed
system, it is the sum of the kinetic and potential energy in the system.
In classical mechanics, the time evolution is obtained by computing the total force being exerted on each particle of
the system, and from Newton's second law, the time-evolutions of both position and velocity are computed. In
contrast, in Hamiltonian mechanics, the time evolution is obtained by computing the Hamiltonian of the system in
the generalized coordinates and inserting it in the Hamiltonian equations. It is important to point out that this
approach is equivalent to the one used in Lagrangian mechanics. In fact, as will be shown below, the Hamiltonian is
the Legendre transform of the Lagrangian, and thus both approaches give the same equations for the same
generalized momentum. The main motivation to use Hamiltonian mechanics instead of Lagrangian mechanics comes
from the symplectic structure of Hamiltonian systems.
While Hamiltonian mechanics can be used to describe simple systems such as a bouncing ball, a pendulum or an
oscillating spring in which energy changes from kinetic to potential and back again over time, its strength is shown
in more complex dynamic systems, such as planetary orbits in celestial mechanics. Naturally, the more degrees of
freedom the system has, the more complicated its time evolution is and, in most cases, it becomes chaotic.

Basic physical interpretation


A simple interpretation of Hamiltonian mechanics comes from its application to a one-dimensional system
consisting of one particle of mass m. The Hamiltonian represents the total energy of
the system, which is the sum of kinetic and potential energy, traditionally denoted T and V, respectively. Here q is
the coordinate and p is the momentum, mv. Then

H = T + V,   T = p²/(2m),   V = V(q)

Note that T is a function of p alone, while V is a function of q alone.


In this example, the time-derivative of the momentum p equals the Newtonian force, and so the first Hamilton
equation means that the force equals the negative gradient of potential energy. The time-derivative of q is the
velocity, and so the second Hamilton equation means that the particle's velocity equals the derivative of its kinetic
energy with respect to its momentum.

Calculating a Hamiltonian from a Lagrangian


Given a Lagrangian L(qi, q̇i, t) in terms of the generalized coordinates qi, the generalized velocities q̇i, and time t:

1. The momenta are calculated by differentiating the Lagrangian with respect to the (generalized) velocities:

pi = ∂L/∂q̇i

2. The velocities q̇i are expressed in terms of the momenta pi by inverting the expressions in the previous step.
3. The Hamiltonian is calculated using the usual definition of H as the Legendre transformation of L:

H = Σi q̇i pi − L(qi, q̇i, t)

Then the velocities are substituted for using the previous results.
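The three steps can be carried out symbolically. The sketch below is an illustrative sympy script for the harmonic-oscillator Lagrangian L = m v²/2 − k q²/2 (symbol names are made up for the example); it reproduces the expected Hamiltonian p²/(2m) + k q²/2.

```python
# Steps 1-3 of the Legendre transformation for L = m*v**2/2 - k*q**2/2.
import sympy as sp

q, v, p, m, k = sp.symbols('q v p m k', positive=True)
L = m * v**2 / 2 - k * q**2 / 2

p_of_v = sp.diff(L, v)                       # step 1: p = dL/dv = m*v
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]    # step 2: v = p/m
H = sp.simplify(p * v_of_p - L.subs(v, v_of_p))  # step 3: H = p*v - L

print(sp.simplify(H - (p**2 / (2 * m) + k * q**2 / 2)) == 0)  # True
```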


Deriving Hamilton's equations


Hamilton's equations can be derived by looking at how the total differential of the Lagrangian depends on time,
generalized positions qi and generalized velocities q̇i:[2]

dL = Σi (∂L/∂qi) dqi + Σi (∂L/∂q̇i) dq̇i + (∂L/∂t) dt

Now the generalized momenta were defined as pi = ∂L/∂q̇i.
If this is substituted into the total differential of the Lagrangian, one gets

dL = Σi (∂L/∂qi) dqi + Σi pi dq̇i + (∂L/∂t) dt

We can rewrite this as

dL = Σi (∂L/∂qi) dqi + d(Σi pi q̇i) − Σi q̇i dpi + (∂L/∂t) dt

and rearrange again to get

d(Σi pi q̇i − L) = −Σi (∂L/∂qi) dqi + Σi q̇i dpi − (∂L/∂t) dt

The term on the left-hand side is just the Hamiltonian that we have defined before, so we find that

dH = −Σi (∂L/∂qi) dqi + Σi q̇i dpi − (∂L/∂t) dt

We can also calculate the total differential of the Hamiltonian H with respect to time directly, as we did with the
Lagrangian L above, yielding:

dH = Σi (∂H/∂qi) dqi + Σi (∂H/∂pi) dpi + (∂H/∂t) dt

It follows from the previous two independent equations that their right-hand sides are equal with each other. Thus we
obtain the equation

−Σi (∂L/∂qi) dqi + Σi q̇i dpi − (∂L/∂t) dt = Σi (∂H/∂qi) dqi + Σi (∂H/∂pi) dpi + (∂H/∂t) dt

Since this calculation was done off-shell, we can associate corresponding terms from both sides of this equation to
yield:

∂H/∂qi = −∂L/∂qi,   ∂H/∂pi = q̇i,   ∂H/∂t = −∂L/∂t

On-shell, Lagrange's equations tell us that

d/dt (∂L/∂q̇i) − ∂L/∂qi = 0

We can rearrange this to get ∂L/∂qi = ṗi. Thus Hamilton's equations hold on-shell:

ṗi = −∂H/∂qi,   q̇i = ∂H/∂pi,   ∂H/∂t = −∂L/∂t


As a reformulation of Lagrangian mechanics


Starting with Lagrangian mechanics, the equations of motion are based on generalized coordinates (q1, ..., qN)
and matching generalized velocities (q̇1, ..., q̇N). We write the Lagrangian as

L(q1, ..., qN, q̇1, ..., q̇N, t)
with the subscripted variables understood to represent all N variables of that type. Hamiltonian mechanics aims to
replace the generalized velocity variables with generalized momentum variables, also known as conjugate momenta.
By doing so, it is possible to handle certain systems, such as aspects of quantum mechanics, that would otherwise be
even more complicated.
For each generalized velocity, there is one corresponding conjugate momentum, defined as:

pi = ∂L/∂q̇i
In Cartesian coordinates, the generalized momenta are precisely the physical linear momenta. In circular polar
coordinates, the generalized momentum corresponding to the angular velocity is the physical angular momentum.
For an arbitrary choice of generalized coordinates, it may not be possible to obtain an intuitive interpretation of the
conjugate momenta.
One thing which is not too obvious in this coordinate dependent formulation is that different generalized coordinates
are really nothing more than different coordinate patches on the same symplectic manifold (see Mathematical
formalism, below).
The Hamiltonian is the Legendre transform of the Lagrangian:

H(qi, pi, t) = Σi q̇i pi − L(qi, q̇i, t)
If the transformation equations defining the generalized coordinates are independent of t, and the Lagrangian is a
sum of products of functions (in the generalized coordinates) which are homogeneous of order 0, 1 or 2, then it can
be shown that H is equal to the total energy E = T + V.
Each side in the definition of H produces a differential:

dH = Σi [(∂H/∂qi) dqi + (∂H/∂pi) dpi] + (∂H/∂t) dt
   = Σi [q̇i dpi − (∂L/∂qi) dqi] − (∂L/∂t) dt

Substituting the previous definition of the conjugate momenta into this equation and matching coefficients, we
obtain the equations of motion of Hamiltonian mechanics, known as the canonical equations of Hamilton:

∂H/∂qj = −ṗj,   ∂H/∂pj = q̇j,   ∂H/∂t = −∂L/∂t

Hamilton's equations consist of 2n first-order differential equations, while Lagrange's equations consist of n
second-order equations. However, Hamilton's equations usually don't reduce the difficulty of finding explicit
solutions. They still offer some advantages, since important theoretical results can be derived because coordinates
and momenta are independent variables with nearly symmetric roles.

Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, such that a
coordinate does not occur in the Hamiltonian, the corresponding momentum is conserved, and that coordinate can be
ignored in the other equations of the set. Effectively, this reduces the problem from n coordinates to (n − 1)
coordinates. In the Lagrangian framework, the result that the corresponding momentum is conserved still follows
immediately, but all the generalized velocities still occur in the Lagrangian; we still have to solve a system of
equations in n coordinates.
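As an example of this reduction (standard material, not specific to this article), take a particle in a central potential. The angle θ does not occur in the Hamiltonian, so its conjugate momentum is conserved and the problem collapses to one radial degree of freedom:

```latex
H = \frac{p_r^2}{2m} + \frac{p_\theta^2}{2 m r^2} + V(r),
\qquad
\dot p_\theta = -\frac{\partial H}{\partial \theta} = 0
\;\Longrightarrow\;
p_\theta = \ell = \text{const},
\qquad
H_{\text{eff}}(r, p_r) = \frac{p_r^2}{2m} + \frac{\ell^2}{2 m r^2} + V(r).
```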
The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in the theory of classical
mechanics, and for formulations of quantum mechanics.

Geometry of Hamiltonian systems


A Hamiltonian system may be understood as a fiber bundle E over time R, with the fibers E_t, t ∈ R, being the
position space. The Lagrangian is thus a function on the jet bundle J over E; taking the fiberwise Legendre transform
of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T*Et,
which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian.

Generalization to quantum mechanics through Poisson bracket


Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential
equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at
any point in time. However, the equations can be generalized so as to apply to quantum mechanics as well as to
classical mechanics, through the deformation of the Poisson algebra over p and q to the
algebra of Moyal brackets.
Specifically, the more general form of Hamilton's equation reads

$\frac{df}{dt} = \{f, H\} + \frac{\partial f}{\partial t}$
where f is some function of p and q, and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket
without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a
Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie
algebra, as proven by H. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See the
phase space formulation and Weyl quantization). This more algebraic approach not only permits ultimately
extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson
bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.
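For one degree of freedom the bracket is {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q, and it can be checked numerically. The following sketch (our own construction, not from the article) verifies the canonical relation {q, p} = 1 and that {H, H} = 0, i.e. the energy is conserved along its own flow:

```python
def poisson_bracket(f, g, q, p, h=1e-5):
    """Central-difference approximation of
    {f, g} = df/dq * dg/dp - df/dp * dg/dq  (one degree of freedom)."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

H = lambda q, p: 0.5 * (p * p + q * q)   # harmonic-oscillator Hamiltonian

b_qp = poisson_bracket(lambda q, p: q, lambda q, p: p, 0.3, 0.7)  # expect 1
b_HH = poisson_bracket(H, H, 0.3, 0.7)                            # expect 0
```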

Mathematical formalism
Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system. The
function H is known as the Hamiltonian or the energy function. The symplectic manifold is then called the phase
space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector
field.
The Hamiltonian vector field (a special type of symplectic vector field) induces a Hamiltonian flow on the manifold.
This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called
the time); in other words an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each
symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced
by the Hamiltonian flow is commonly called the Hamiltonian mechanics of the Hamiltonian system.
The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the
manifold the structure of a Lie algebra.
Given a function f of the phase-space coordinates and time, its total time derivative along the flow is

$\frac{d}{dt} f = \frac{\partial f}{\partial t} + \{f, H\}$
If we have a probability distribution ρ, then (since the phase-space velocity $(\dot p_i, \dot q_i)$ has zero
divergence, and probability is conserved) its convective derivative can be shown to be zero, and so

$\frac{\partial \rho}{\partial t} = -\{\rho, H\}$
This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter
family of symplectomorphisms and if { G, H } = 0, then G is conserved and the symplectomorphisms are symmetry
transformations.
A Hamiltonian may have multiple conserved quantities Gi. If the symplectic manifold has dimension 2n and there
are n functionally independent conserved quantities Gi which are in involution (i.e., { Gi, Gj } = 0), then the
Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville integrable
Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities Gi as
coordinates; the new coordinates are called action-angle coordinates. The transformed Hamiltonian depends only on
the Gi, and hence the equations of motion have the simple form

$\dot G_i = 0, \qquad \dot \varphi_i = F_i(G_1, \ldots, G_n)$

for some functions F_i (Arnol'd et al., 1988). There is an entire field focusing on small deviations from integrable
systems governed by the KAM theorem.
The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic;
concepts of measure, completeness, integrability and stability are poorly defined. At this time, the study of
dynamical systems is primarily qualitative, and not a quantitative science.

Riemannian manifolds
An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be
written as

$H(q, p) = \tfrac{1}{2} \langle p, p \rangle_q$

where $\langle \cdot, \cdot \rangle_q$ is a smoothly varying inner product on the fibers $T^*_q Q$, the cotangent space to the point q in the
configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term.
If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear
isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one
can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the
metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on
the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of
such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See
also Geodesics as Hamiltonian flows.
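In local coordinates, with $g^{ij}$ the inverse metric (the cometric), this kinetic Hamiltonian and its Hamilton equations read as follows (a standard computation, included here for orientation):

```latex
H(q, p) = \tfrac{1}{2} g^{ij}(q) \, p_i p_j,
\qquad
\dot q^i = \frac{\partial H}{\partial p_i} = g^{ij} p_j,
\qquad
\dot p_k = -\frac{\partial H}{\partial q^k}
         = -\tfrac{1}{2} \frac{\partial g^{ij}}{\partial q^k} \, p_i p_j ,
```

which, after eliminating the momenta, is equivalent to the geodesic equation $\ddot q^k + \Gamma^k_{ij} \dot q^i \dot q^j = 0$.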


Sub-Riemannian manifolds
When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as
one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at
every point q of the configuration space manifold Q, so that the rank of the cometric is less than the dimension of the
manifold Q, one has a sub-Riemannian manifold.
The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely
determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by
its sub-Riemannian Hamiltonian, and conversely that every sub-Riemannian manifold has a unique
sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii
theorem.
The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the
Heisenberg group, the Hamiltonian is given by

$H(x, y, z, p_x, p_y, p_z) = \tfrac{1}{2} \left( p_x^2 + p_y^2 \right);$

$p_z$ is not involved in the Hamiltonian.

Poisson algebras
Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth
functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real
Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable
topology) such that for any element A of the algebra, A² maps to a nonnegative real number.
A further generalization is given by Nambu dynamics.

Charged particle in an electromagnetic field


A good illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an
electromagnetic field. In Cartesian coordinates, the Lagrangian of a non-relativistic classical particle
in an electromagnetic field is (in SI units):

$L = \tfrac{1}{2} m \dot{\mathbf{x}}^2 + e \, \dot{\mathbf{x}} \cdot \mathbf{A} - e \phi$

where e is the electric charge of the particle (not necessarily the electron charge), $\phi$ is the electric scalar potential,
and the $A_i$ are the components of the magnetic vector potential (these may be modified through a gauge
transformation). This is called minimal coupling.


The generalized momenta are given by:

$p_i = \frac{\partial L}{\partial \dot x_i} = m \dot x_i + e A_i$

Rearranging, the velocities are expressed in terms of the momenta:

$\dot x_i = \frac{p_i - e A_i}{m}$

If we substitute the definition of the momenta, and the definitions of the velocities in terms of the momenta, into the
definition of the Hamiltonian given above, and then simplify and rearrange, we get:

$H = \sum_i \dot x_i p_i - L = \frac{\left( \mathbf{p} - e \mathbf{A} \right)^2}{2m} + e \phi$
This equation is used frequently in quantum mechanics.
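As a numerical sanity check (a sketch of our own; the values of m, e, B and the helper names are illustrative, not from the article), Hamilton's equations for H = (p − eA)²/2m in a uniform magnetic field B ẑ, with vector potential A = (−By/2, Bx/2), reproduce the Lorentz force e v × B acting on the kinetic momentum mv:

```python
def grad(f, args, i, h=1e-6):
    """Central-difference partial derivative of f with respect to args[i]."""
    a = list(args)
    a[i] += h
    fp = f(*a)
    a[i] -= 2 * h
    fm = f(*a)
    return (fp - fm) / (2 * h)

m, e, B = 1.0, 1.0, 2.0   # illustrative values (assumed, not from the text)

def A(x, y):              # vector potential of a uniform field B along z
    return (-B * y / 2.0, B * x / 2.0)

def H(x, y, px, py):      # H = (p - eA)^2 / (2m), scalar potential set to zero
    Ax, Ay = A(x, y)
    return ((px - e * Ax) ** 2 + (py - e * Ay) ** 2) / (2.0 * m)

pt = (0.4, -0.3, 1.1, 0.2)   # sample phase-space point (x, y, px, py)
vx = grad(H, pt, 2)          # dx/dt  =  dH/dpx
vy = grad(H, pt, 3)          # dy/dt  =  dH/dpy
fx = -grad(H, pt, 0)         # dpx/dt = -dH/dx
fy = -grad(H, pt, 1)         # dpy/dt = -dH/dy

# The *kinetic* momentum m*v = p - eA changes at the Lorentz rate e(v x B):
lorentz_x = e * vy * B
lorentz_y = -e * vx * B
kinetic_rate_x = fx + e * (B / 2.0) * vy   # dAx/dt = -(B/2) vy along the motion
kinetic_rate_y = fy - e * (B / 2.0) * vx   # dAy/dt =  (B/2) vx along the motion
```

Here `kinetic_rate_*` is d(mv)/dt = ṗ − e dA/dt evaluated along the motion; both components match e(v × B).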


Relativistic charged particle in an electromagnetic field


The Lagrangian for a relativistic charged particle is given by:

$L = - m c^2 \sqrt{1 - \frac{\dot{\mathbf{x}}^2}{c^2}} + e \, \dot{\mathbf{x}} \cdot \mathbf{A} - e \phi$

Thus the particle's canonical (total) momentum is

$\mathbf{P} = \frac{\partial L}{\partial \dot{\mathbf{x}}} = \gamma m \dot{\mathbf{x}} + e \mathbf{A},$

that is, the sum of the kinetic momentum and the potential momentum.
Solving for the velocity, we get

$\dot{\mathbf{x}} = \frac{c \left( \mathbf{P} - e \mathbf{A} \right)}{\sqrt{\left( \mathbf{P} - e \mathbf{A} \right)^2 + m^2 c^2}}$

So the Hamiltonian is

$H = \dot{\mathbf{x}} \cdot \mathbf{P} - L = c \sqrt{\left( \mathbf{P} - e \mathbf{A} \right)^2 + m^2 c^2} + e \phi$
From this we get the force equation (equivalent to the Euler–Lagrange equation)

$\frac{d \mathbf{P}}{dt} = -\frac{\partial H}{\partial \mathbf{x}} = e \, \nabla \left( \dot{\mathbf{x}} \cdot \mathbf{A} \right) - e \, \nabla \phi$

from which one can derive

$\frac{d}{dt} \left( \gamma m \dot{\mathbf{x}} \right) = e \mathbf{E} + e \, \dot{\mathbf{x}} \times \mathbf{B}$
An equivalent expression for the Hamiltonian as a function of the relativistic (kinetic) momentum
$\mathbf{p} = \gamma m \dot{\mathbf{x}} = \mathbf{P} - e \mathbf{A}$ is

$H = \dot{\mathbf{x}} \cdot \mathbf{p} + \frac{m c^2}{\gamma} + e \phi = \sqrt{p^2 c^2 + m^2 c^4} + e \phi$

This has the advantage that $\mathbf{p}$ can be measured experimentally whereas $\mathbf{P}$ cannot. Notice that the Hamiltonian
(total energy) can be viewed as the sum of the relativistic energy (kinetic + rest), $E = \gamma m c^2$, plus the potential
energy, $V = e \phi$.

References
Footnotes
[1] Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0
[2] This derivation is along the lines as given in

Other
Arnol'd, V. I. (1989), Mathematical Methods of Classical Mechanics, Springer-Verlag, ISBN 0-387-96890-3
Abraham, R.; Marsden, J.E. (1978), Foundations of Mechanics, London: Benjamin-Cummings,
ISBN 0-8053-0102-X
Arnol'd, V. I.; Kozlov, V. V.; Neshtadt, A. I. (1988), "Mathematical aspects of classical and celestial mechanics",
Encyclopaedia of Mathematical Sciences, Dynamical Systems III 3, Springer-Verlag

Vinogradov, A. M.; Kupershmidt, B. A. (1981), The structure of Hamiltonian mechanics (http://diffiety.ac.ru/
djvu/structures.djvu) (DjVu), London Math. Soc. Lect. Notes Ser. 60, London: Cambridge Univ. Press

External links
Binney, James J., Classical Mechanics (lecture notes) (http://www-thphys.physics.ox.ac.uk/users/
JamesBinney/cmech.pdf), University of Oxford, retrieved 27 October 2010
Tong, David, Classical Dynamics (Cambridge lecture notes) (http://www.damtp.cam.ac.uk/user/tong/
dynamics.html), University of Cambridge, retrieved 27 October 2010
Hamilton, William Rowan, On a General Method in Dynamics (http://www.maths.tcd.ie/pub/HistMath/
People/Hamilton/Dynamics/), Trinity College Dublin

Integrable system
In mathematics and physics, there are various distinct notions that are referred to under the name of integrable
systems.
In the general theory of differential systems, there is Frobenius integrability, which refers to overdetermined
systems. In the classical theory of Hamiltonian dynamical systems, there is the notion of Liouville integrability.
More generally, in differentiable dynamical systems integrability relates to the existence of foliations by invariant
submanifolds within the phase space. Each of these notions involves an application of the idea of foliations, but they
do not coincide. There are also notions of complete integrability, or exact solvability in the setting of quantum
systems and statistical mechanical models. Integrability can often be traced back to the algebraic geometry of
differential operators.

Frobenius integrability (overdetermined differential systems)


A differential system is said to be completely integrable in the Frobenius sense if the space on which it is defined has
a foliation by maximal integral manifolds. The Frobenius theorem states that a system is completely integrable if and
only if it generates an ideal that is closed under exterior differentiation. (See the article on integrability conditions for
differential systems for a detailed discussion of foliations by maximal integral manifolds.)

General dynamical systems


In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant,
regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are
invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of
the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as
complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this
context.
An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can
be adapted to describe evolution equations that are either systems of differential equations or finite difference
equations.
The distinction between integrable and nonintegrable dynamical systems thus has the qualitative implication of
regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be
explicitly integrated in exact form.


Hamiltonian systems and Liouville integrability


In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. Liouville
integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the
Hamiltonian vector fields associated to the invariants of the foliation span the tangent distribution. Another way to
state this is that there exists a maximal set of Poisson commuting invariants (i.e., functions on the phase space
whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish).
In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of
constants), then it must have even dimension 2n, and the maximal number of independent Poisson commuting
invariants (including the Hamiltonian itself) is n. The leaves of the foliation are totally isotropic with respect to the
symplectic form and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems
(i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time dependent) have at least one
invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are
compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle"
variables. The cycles of the canonical 1-form are called the action variables, and the resulting canonical
coordinates are called action-angle variables (see below).
There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well
as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the
dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than
maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When
there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting,
and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable.
If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.

Action-angle variables
When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level
sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as
mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that
the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the
Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the torus. The
motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.
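The harmonic oscillator is the standard worked example (textbook material, not from this article): with $H = p^2/2m + \tfrac{1}{2} m \omega^2 q^2$, the action is the phase-space area over $2\pi$,

```latex
I = \frac{1}{2\pi} \oint p \, dq = \frac{E}{\omega},
\qquad
H = \omega I,
\qquad
\dot\theta = \frac{\partial H}{\partial I} = \omega
\;\Longrightarrow\;
\theta(t) = \omega t + \theta_0 ,
```

so the motion is indeed linear in the angle variable.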

The Hamilton–Jacobi approach


In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations
are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology,
this is described as determining a transformation to a canonical set of coordinates consisting of completely
ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical
"position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In
the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the
general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends
on n independent constants of integration, where n is the dimension of the configuration space), exists in very
general cases, but only in the local sense. Therefore the existence of a complete solution of the Hamilton–Jacobi
equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be
"explicitly integrated" involve a complete separation of variables, in which the separation constants provide the
complete set of integration constants that are required. Only when these constants can be reinterpreted, within the
full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a
Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense.

Solitons and inverse spectral methods


A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons,
which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation
(which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing
these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach
for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often
reducible to Riemann–Hilbert problems), which generalize local linear methods like Fourier analysis to nonlocal
linearization, through the solution of associated integral equations.
The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and
which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably
generalized sense) is invariant under the evolution. This provides, in certain cases, enough invariants, or "integrals of
motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of
freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability.
However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a
transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly
infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a
transformation to action-angle variables, although typically only a finite number of the "position" variables are
actually angle coordinates, and the rest are noncompact.
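The prototypical instance of such a linear operator is the Lax pair for the KdV equation; in one common sign convention (conventions differ across references):

```latex
L = -\partial_x^2 + u(x, t),
\qquad
A = -4 \partial_x^3 + 6 u \partial_x + 3 u_x ,
```

and the Lax equation $\dot L = [A, L]$ is equivalent to $u_t = 6 u u_x - u_{xxx}$. Since the evolution of $L$ is by conjugation ("isospectral"), the spectrum of $L$ is preserved, yielding the conserved quantities.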

Quantum integrable systems


There is also a notion of quantum integrable systems. In the quantum setting, functions on phase space must be
replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by
commuting operators.
To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body
reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter
equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved
quantities. All of these ideas are incorporated into the Quantum inverse scattering method where the algebraic Bethe
Ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb-Liniger Model,
the Hubbard model and several variations on the Heisenberg model.

Exactly solvable models


In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as
exactly solvable models. This obscures the distinction between integrability in the Hamiltonian sense, and the more
general dynamical systems sense.
There are also exactly solvable models in statistical mechanics, which are more closely related to quantum
integrable systems than classical ones. Two closely related methods, the Bethe ansatz approach, in its modern sense,
based on the Yang–Baxter equations, and the quantum inverse scattering method, provide quantum analogs of the
inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.
An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some
previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself,
rather than the purely calculational feature that we happen to have some "known" functions available, in terms of
which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known"
functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such
"known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic
validity, it often implies the sort of regularity that is to be expected in integrable systems.

List of some well-known classical integrable systems


1. Classical mechanical systems (finite-dimensional phase space):

Harmonic oscillators in n dimensions


Central force motion (exact solutions of classical central-force problems)
Two center Newtonian gravitational motion
Geodesic motion on ellipsoids
Neumann oscillator
Lagrange, Euler and Kovalevskaya tops
Integrable Clebsch and Steklov systems in fluids
Calogero–Moser–Sutherland model
Swinging Atwood's Machine with certain choices of parameters

2. Integrable lattice models


Toda lattice
Ablowitz–Ladik lattice
Volterra lattice
3. Integrable systems of PDEs in 1 + 1 dimension

Korteweg–de Vries equation


Sine–Gordon equation
Nonlinear Schrödinger equation
AKNS system
Boussinesq equation (water waves)
Nonlinear sigma models
Classical Heisenberg ferromagnet model (spin chain)
Classical Gaudin spin system (Garnier system)
Landau–Lifshitz equation (continuous spin field)
Benjamin–Ono equation
Dym equation
Three-wave equation

4. Integrable PDEs in 2 + 1 dimensions


Kadomtsev–Petviashvili equation
Davey–Stewartson equation
Ishimori equation
5. Other integrable systems of PDEs in higher dimensions
Self-dual Yang–Mills equations


References
V.I. Arnold (1997). Mathematical Methods of Classical Mechanics, 2nd ed. Springer. ISBN 978-0-387-96890-2.
M. Dunajski (2009). Solitons, Instantons and Twistors. Oxford University Press. ISBN 978-0-19-857063-9.
L.D. Faddeev, L.A. Takhtajan (1987). Hamiltonian Methods in the Theory of Solitons. Addison-Wesley.
ISBN 978-0-387-15579-1.
A.T. Fomenko, Symplectic Geometry. Methods and Applications. Gordon and Breach, 1988. Second edition 1995,
ISBN 978-2-88124-901-3.
A.T. Fomenko, A.V. Bolsinov, Integrable Hamiltonian Systems: Geometry, Topology, Classification. Taylor and
Francis, 2003, ISBN 978-0-415-29805-6.
H. Goldstein (1980). Classical Mechanics, 2nd ed. Addison-Wesley. ISBN 0-201-02918-9.
J. Harnad, P. Winternitz, G. Sabidussi, eds. (2000). Integrable Systems: From Classical to Quantum. American
Mathematical Society. ISBN 0-8218-2093-1.
V.E. Korepin, N.M. Bogoliubov, A.G. Izergin (1997). Quantum Inverse Scattering Method and Correlation
Functions. Cambridge University Press. ISBN 978-0-521-58646-7.
V.S. Afrajmovich, V.I. Arnold, Yu.S. Il'yashenko, L.P. Shil'nikov. Dynamical Systems V. Springer.
ISBN 3-540-18173-3.
Giuseppe Mussardo. Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics.
Oxford University Press. ISBN 978-0-19-954758-6.

External links
Hazewinkel, Michiel, ed. (2001), "Integrable system" (http://www.encyclopediaofmath.org/index.php?title=p/
i051330), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4


Cotangent bundle

In mathematics, especially differential geometry, the cotangent bundle of a smooth manifold is the vector bundle of
all the cotangent spaces at every point in the manifold. It may be described also as the dual bundle to the tangent
bundle.

The cotangent sheaf


Smooth sections of the cotangent bundle are differential one-forms.

Definition of the cotangent sheaf


Let M be a smooth manifold and let M × M be the Cartesian product of M with itself. The diagonal mapping Δ sends a
point p in M to the point (p, p) of M × M. The image of Δ is called the diagonal. Let I be the sheaf of germs of
smooth functions on M × M which vanish on the diagonal. Then the quotient sheaf I/I² consists of equivalence
classes of functions which vanish on the diagonal modulo higher order terms. The cotangent sheaf is the pullback of
this sheaf to M:

$\Gamma_{T^*M} = \Delta^* \left( I / I^2 \right)$
By Taylor's theorem, this is a locally free sheaf of modules with respect to the sheaf of germs of smooth functions of
M. Thus it defines a vector bundle on M: the cotangent bundle.

Contravariance in manifolds
A smooth morphism $\phi\colon M \to N$ of manifolds induces a pullback sheaf $\phi^* T^* N$ on M. There is an induced map of
vector bundles $\phi^* \left( T^* N \right) \to T^* M$.

The cotangent bundle as phase space


Since the cotangent bundle X = T*M is a vector bundle, it can be regarded as a manifold in its own right. Because of
the manner in which the definition of T*M relates to the differential topology of the base space M, X possesses a
canonical one-form θ (also tautological one-form or symplectic potential). The exterior derivative of θ is a
symplectic 2-form, out of which a non-degenerate volume form can be built for X. For example, as a result X is
always an orientable manifold (meaning that the tangent bundle of X is an orientable vector bundle). A special set of
coordinates can be defined on the cotangent bundle; these are called the canonical coordinates. Because cotangent
bundles can be thought of as symplectic manifolds, any real function on the cotangent bundle can be interpreted to
be a Hamiltonian; thus the cotangent bundle can be understood to be a phase space on which Hamiltonian mechanics
plays out.

The tautological one-form


Main article: Tautological one-form
The cotangent bundle carries a tautological one-form θ also known as the Poincaré 1-form or Liouville 1-form. (The
form is also known as the canonical one-form, although this can sometimes lead to confusion.) This means that if we
regard T*M as a manifold in its own right, there is a canonical section of the vector bundle T*(T*M) over T*M.
This section can be constructed in several ways. The most elementary method is to use local coordinates. Suppose
that $x^i$ are local coordinates on the base manifold M. In terms of these base coordinates, there are fibre coordinates $p_i$:
a one-form at a particular point of T*M has the form $p_i \, dx^i$ (Einstein summation convention implied). So the manifold
T*M itself carries local coordinates $(x^i, p_i)$ where the x are coordinates on the base and the p are coordinates in the
fibre. The canonical one-form is given in these coordinates by

$\theta = p_i \, dx^i$

Intrinsically, the value of the canonical one-form at each fixed point of T*M is given as a pullback. Specifically,
suppose that $\pi\colon T^*M \to M$ is the projection of the bundle. Taking a point in $T^*_x M$ is the same as choosing a point
x in M and a one-form α at x, and the tautological one-form θ assigns to the point (x, α) the value

$\theta_{(x, \alpha)} = \pi^* \alpha$

That is, for a vector v in the tangent bundle of the cotangent bundle, the application of the tautological one-form to
v at (x, α) is computed by projecting v into the tangent bundle at x using $d\pi\colon T(T^*M) \to TM$ and applying α to this
projection. Note that the tautological one-form is not a pullback of a one-form on the base M.

Symplectic form
The cotangent bundle has a canonical symplectic 2-form on it, as an exterior derivative of the canonical one-form,
the symplectic potential. Proving that this form is, indeed, symplectic can be done by noting that being symplectic is
a local property: since the cotangent bundle is locally trivial, this definition need only be checked on $\mathbb{R}^n \times \mathbb{R}^n$.
But there the one-form defined is the sum of $y_i \, dx^i$, and the differential is the canonical symplectic form, the sum
of $dy_i \wedge dx^i$.

Phase space
If the manifold M represents the set of possible positions in a dynamical system, then the cotangent bundle T*M
can be thought of as the set of possible positions and momenta. For example, this is a way to describe the phase
space of a pendulum. The state of the pendulum is determined by its position (an angle) and its momentum (or
equivalently, its velocity, since its mass is not changing). The entire state space looks like a cylinder, which is
the cotangent bundle of the circle. The above symplectic construction, along with an appropriate energy function,
gives a complete determination of the physics of the system. See Hamiltonian mechanics for more information, and the
article on geodesic flow for an explicit construction of the Hamiltonian equations of motion.
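The cylinder picture can be made concrete with a small simulation (our own sketch; the units and names are illustrative, not from the article). Take H(θ, p) = p²/2 − cos θ for a unit pendulum; the angle is wrapped to (−π, π], i.e. onto the circle factor of the cylinder, and a symplectic step keeps the energy nearly constant:

```python
import math

def pendulum_flow(theta, p, dt, steps):
    """Symplectic-Euler integration of H(theta, p) = p**2/2 - cos(theta)
    (unit mass, unit length, unit gravity -- illustrative units)."""
    for _ in range(steps):
        p -= math.sin(theta) * dt   # dp/dt     = -dH/dtheta = -sin(theta)
        theta += p * dt             # dtheta/dt =  dH/dp     = p
        theta = (theta + math.pi) % (2 * math.pi) - math.pi  # wrap onto the circle
    return theta, p

def energy(theta, p):
    return 0.5 * p * p - math.cos(theta)

theta0, p0 = 2.0, 0.0                # librating (oscillating) initial state
E0 = energy(theta0, p0)
theta1, p1 = pendulum_flow(theta0, p0, 1e-3, 20000)
E1 = energy(theta1, p1)
```

The state (θ, p) never leaves the cylinder, and the energy function foliates it into the familiar closed libration curves and open rotation curves.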

References
Jürgen Jost, Riemannian Geometry and Geometric Analysis, (2002) Springer-Verlag, Berlin ISBN
3-540-63654-4.
Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London ISBN
0-8053-0102-X.
Stephanie Frank Singer, Symmetry in Mechanics: A Gentle Modern Introduction, (2001) Birkhäuser, Boston.

Linas, Marco Polo, Oleg Alexandrov, Pensil, Red Act, Reddi, Schmiteye, Theaucitron, 18 anonymous edits
Generalized coordinates Source: http://en.wikipedia.org/w/index.php?oldid=606696094 Contributors: AJDaviesOU, BenFrantzDale, Brews ohare, Charles Matthews, ChrisChiasson, Count
Truthstein, Crowsnest, Danmichaelo, F=q(E+v^B), Giftlite, Gmcastil, Hublolly, Jgates, Jimp, JohnBlackburne, Maschen, Niceguyedc, Owlbuster, Plasticup, Prof McCarthy, Ptjackyll, Renatops,
Rmilson, Sylviaelse, The Anome, Thurth, TimBentley, Tom harrison, Valenciano, VictorAnyakin, WikiPedant, Zueignung, , 29 anonymous edits
Legendre transformation Source: http://en.wikipedia.org/w/index.php?oldid=605240888 Contributors: A. Pichler, Amire80, AmitAronovitch, Amitushtush, Andrewman327, Blotwell, Charles
Matthews, Chuunen Baka, Complexica, Count Truthstein, Cuzkatzimhut, Cyberjoac, Dan Gluck, Darklilac, David Shear, Djinnome, Drorata, Dylan Thurston, Esagherardo, Every Creek Counts,
Flyer22, Freiddie, Georgelulu, Giftlite, GuidoGer, Headbomb, IllusoryAltruist, Ingle.atul, Iyerkri, JCMPC, JabberWok, Jason Quinn, Jenks24, Jess Riedel, Jim.belk, John of Reading, JordiGH,
Katterjohn, KephasC, Kiefer.Wolfowitz, Larsobrien, Lauro Chieza de Carvalho, Lavaka, Linas, LokiClock, LorCC, Manoguru, Mark viking, MarkHudson, MathMartin, Michael Hardy, Nbarth,
Netrapt, Nigellwh, NuclearPhysicist87, Octahedron80, Oleg Alexandrov, PAR, Paolo.dL, Paul White, Peterlin, Phys, Quantling, QuantumOfHistory, Quietbritishjim, Red Act, Red Slash,
Reddevyl, Ricky81682, Rjwilmsi, Sagi Harel, SimonWillerton, Stemonitis, Sunny house, That Guy, From That Show!, The Anome, The Disambiguator, TheMaestro, Thielum, Thurth, Tobias
Bergemann, TobinFricke, TomyDuby, Tourigny, Tovrstra, Unara, Xxanthippe, YouRang?, Zueignung, , 104 anonymous edits
Canonical coordinates Source: http://en.wikipedia.org/w/index.php?oldid=599761133 Contributors: Colonies Chris, Count Truthstein, Ebyabe, Geometry guy, Jason Quinn, KSmrq, Linas,
Lseixas, Lumidek, Maschen, Mathieu Perrin, Omnipaedista, Point-set topologist, Rausch, Senderista, Sgoder, Szteven, Thurth, Venny85, Yevgeny Kats, 12 anonymous edits
Generalized forces Source: http://en.wikipedia.org/w/index.php?oldid=567381556 Contributors: ChrisChiasson, Finemann, JCarlos, Prof McCarthy, Qrystal, Radagast83, 2 anonymous edits
Hamiltonian mechanics Source: http://en.wikipedia.org/w/index.php?oldid=601172392 Contributors: 7segment, Akriasas, Alexey Muranov, Andrej.westermann, Archelon, Army1987,
AstroMark, Atoll, AugPi, Bachrach44, Barbacana, Bb vb, Bender235, BertSen, Bevo, Borgx, CYD, Cgwaldman, Charles Matthews, Chris Howard, ChrisGualtieri, Chuunen Baka, Commutator,
Complexica, Craig Pemberton, Crasshopper, Crowsnest, Cuzkatzimhut, Cyde, D6, DS1000, Darrel francis, David R. Ingham, David spector, Dchristle, Dimoroi, Doctor Zook, Dratman, Dzustin,
Ebyabe, Epq, Errarel, F=q(E+v^B), Felix116, Frederic Y Bois, Freeskyman, Frdrick Lacasse, Gazok, Gbmaizol, Gene Nygaard, Geometry guy, Gerasime, Giftlite, HHHEB3, Headbomb,
Helptry, Hessammehr, HorsePunchKid, Hublolly, Hyandat, Isnow, JRSpriggs, Jamontaldi, Jason Quinn, Jcc2011, JerroldPease-Atlanta, JerrySteal, Jheald, Jitse Niesen, JohnBlackburne,
Jorgecarleitao, K3wq, KSmrq, Kylarnys, Lethe, Linas, Linuxlad, Looxix, MK8, Maschen, Maurice Carbonaro, Mbell, Mct mht, Mebden, Michael Hardy, Mjb, Movementarian, Netheril96,
OS2Warp, Paquitotrek, PerryTachett, PhoenixPinion, Phys, Pit, RE, Razimantv, Rbeas, Red Act, Reddi, Rephorm, Reyk, Rico402, SPat, Samuel Blanning, Sanpaz, Sbandrews, Sheliak, Srleffler,
Stefansquintet, Stevan White, StewartMH, Sverdrup, Teply, Thurth, TimBentley, Tobias Bergemann, UrsusArctosL71, Voorlandt, Vrenator, Xenure, YohanN7, Zhen Lin, Zueignung, Zundark,
152 anonymous edits
Integrable system Source: http://en.wikipedia.org/w/index.php?oldid=600201954 Contributors: Alexeytuzhilin, Auntof6, AxelBoldt, Barstaw, Bomazi, Charles Matthews, Crowsnest, Delaszk,
Dratman, Enigmaman, Enyokoyama, Giftlite, Gsard, Hans Lundmark, JPINFV, KarlJacobi, Korepin, Lcarr, Linas, Mdd, Mhym, Michael Hardy, Mogism, Njerseyguy, NotWith, Omnipaedista,
Paquitotrek, Pariefracture, Pierscoleman, Policron, R physicist, R.e.b., RDBury, RE, Rpchase, Ryan Reich, Silly rabbit, SophomoricPedant, Subhroneel, TakuyaMurata, Wavelength, 79
anonymous edits
Symplectic manifold Source: http://en.wikipedia.org/w/index.php?oldid=607153790 Contributors: 192.115.22.xxx, 198.81.16.xxx, 212.29.241.xxx, 9258fahsflkh917fas, AHusain314,
AmarChandra, Anonymous editor, ArnoldReinhold, AxelBoldt, BenBaker, BenFrantzDale, Bobo192, Brad7777, CUSENZA Mario, Cadoi, Charles Matthews, Chris Howard, ChrisGualtieri,
Conversion script, Darkwind, David Farris, DefLog, Dratman, Dysprosia, Eas6489, Enyokoyama, Erkabo, Eubulides, Fly by Night, Fropuff, Gaius Cornelius, Geometry guy, Giftlite, Graham87,
Gsard, Gzornenplatz, Headbomb, Ht686rg90, I.M.S., Jitse Niesen, JohnPritchard, KSmrq, Katzmik, KnightRider, Linas, Longhair, Loren Rosen, Mark viking, Maschen, Mathieu Perrin, Maurice
Carbonaro, Mgvongoeden, Miguel, Mild Bill Hiccup, Msh210, Nbarth, NielsenGW, Orthografer, Pfortuny, Phys, Pjacobi, Point-set topologist, Pred, R.e.b., Rausch, Silly rabbit, Sympleko,
Sawomir Biay, Tobias Bergemann, Tosha, Txebixev, VectorPosse, Wavelength, Wesley, Whiterabbit74, Wikidsp, , 54 anonymous edits
Phase space Source: http://en.wikipedia.org/w/index.php?oldid=590702380 Contributors: Adam majewski, BMF81, Beowulf333, Bovlb, CarrieVS, Charles Matthews, Chris Howard,
Complexica, Cuzkatzimhut, D.H, DEMcAdams, DVdm, Danman3459, Deville, Dohn joe, Dougher, Eb.hoop, Edsanville, Electricmuffin11, ErNa, Evilphoenix, Fugacity88, Galaksiafervojo,
Giftlite, Gpvos, Headbomb, Jesse V., Jheald, Jmath666, K-UNIT, KasugaHuang, Kernsters, Linas, Linuxlad, Loresayer, Lowellian, Marasmusine, Mct mht, Meisam, Merlion444, Mernst, Michael
Hardy, Nbarth, OceanEngineerRI, Oleg Alexandrov, Paddles, Ploncomi, Policron, Preethikasanilwiki, Quondum, Rjwilmsi, RogierBrussee, Rror, Rudolf.hellmuth, Sadi Carnot, Shoeofdeath,
SigmaAlgebra, Srleffler, Teply, ThorinMuglindir, TimBentley, Ulner, Victor Blacus, Viriditas, Vugluskr, Will Gladstone, XaosBits, 69 anonymous edits
Symplectic vector field Source: http://en.wikipedia.org/w/index.php?oldid=605339749 Contributors: 777sms, AHusain314, BD2412, David Eppstein, David Farris, Ebyabe, Geometry guy,
Jitse Niesen, Jtwdog, Linas, Mark viking, 1 anonymous edits
Liouville's theorem Source: http://en.wikipedia.org/w/index.php?oldid=596269985 Contributors: AHusain314, AK456, AmarChandra, Anville, Aspects, AvicAWB, BenFrantzDale, BeteNoir,
Brad7777, Cdjfacer, Cederal, Charles Matthews, Chphe, Complexica, D.H, Denevans, EmilJ, Eumolpo, Fromage2009, Galoa2804, Ganeshsashank, Geometry guy, Giftlite, Gogobera, Guy
vandegrift, Hairy Dude, Hamiltondaniel, Hg264, HolIgor, Jbhayet, Jfr26, Joke137, Jorgecarleitao, Kingturtle, LGNR, Linas, Linuxlad, Lionel sittler, Liusm, Mct mht, Michael Hardy, Nanite,
Njerseyguy, OlEnglish, Owlbuster, PV=nRT, Phys, Sbyrnes321, Slawekb, Sodin, Srleffler, Sawomir Biay, Template namespace initialisation script, Topbanana, Wigie, WuTheFWasThat,
Xxxsemoi, Zbxgscqf, 70 anonymous edits
Poisson bracket Source: http://en.wikipedia.org/w/index.php?oldid=605318251 Contributors: Ancheta Wis, Andreas Rejbrand, Archelon, B. Wolterding, BD2412, Charles Matthews, Chris
Howard, ChrisGualtieri, Chuunen Baka, Complexica, CryptoDerk, Cuzkatzimhut, Darij, DefLog, Dicklyon, Ebyabe, Fropuff, Gauge, Geminatea, Geometry guy, Giftlite, Hadrianheugh,
HappyCamper, Headbomb, Jitse Niesen, Joshua Davis, Keenan Pepper, Kier07, Lethe, Linas, Linuxlad, Lionelbrits, Lseixas, Mandarax, Mark viking, Melchoir, Michael Hardy, Michael K.
Edwards, Natkuhn, Nick Levine, Night Jaguar, Owlbuster, Pamputt, PetaRZ, Phys, Policron, Qmechanic, RDBury, RJFJR, Rdengler, Red Act, Rpchase, Sammy1339, ShelfSkewed, Skittleys,
SophomoricPedant, StarLight, Sympleko, Sawomir Biay, Tkuvho, Tobias Bergemann, Zfeinst, Zueignung, , 49 anonymous edits
Lie algebra Source: http://en.wikipedia.org/w/index.php?oldid=603963264 Contributors: 314Username, Adam cohenus, AlainD, Arcfrk, Arthena, Asimy, AxelBoldt, B-80, BD2412,
BenFrantzDale, BlackFingolfin, Bogey97, CSTAR, Chameleon, Charles Matthews, Conversion script, Count Truthstein, Crasshopper, CryptoDerk, CsDix, Curps, Cuzkatzimhut, Dachande,
Danielbrice, Darij, David Gerard, Dd314, DefLog, Deflective, Delilahblue, Dirac1933, Doctor Zook, Drbreznjev, Drorata, Dysprosia, Englebert, Enyokoyama, Fatchat, Flbsimas, Foobaz,

Forgetfulfunctor00, Freiddie, Fropuff, Gauge, Geometry guy, Giftlite, Grendelkhan, Grokmoo, Grubber, Hairy Dude, Harold f, Headbomb, Hesam7, IkamusumeFan, Incnis Mrsi, Iorsh, Isnow,
JackSchmidt, Jason Quinn, Jason Recliner, Esq., Jenny Lam, Jeremy Henty, Jkock, Joel Koerwer, JohnBlackburne, Jrw@pobox.com, Juniuswikiae, Kaoru Itou, Kmarinas86, Kragen,
Kwamikagami, Lenthe, Lethe, Linas, LokiClock, Loren Rosen, Lotje, MarSch, Mark L MacDonald, Masnevets, Maurice Carbonaro, Michael Hardy, Michael Larsen, Michael Slone, Miguel,
Mikhail Ryazanov, Msh210, NatusRoma, Nbarth, Ndbrian1, Niout, Noegenesis, Oleg Alexandrov, Paolo.dL, Pfeiferwalter, Phys, Pj.de.bruin, Prtmrz, Pt, Pyrop, Python eggs, Quondum, R'n'B,
Rausch, Reinyday, RexNL, RobHar, Roentgenium111, Rossami, Rschwieb, Saung Tadashi, Sbyrnes321, Second Quantization, Shirulashem, Silly rabbit, Slawekb, Spangineer, Stca74, Stephenb,
StevenJohnston, Suisui, Supermanifold, TakuyaMurata, Teika kazura, Thomas Bliem, Tobias Bergemann, Tosha, Twri, Vanish2, Veromies, Vsmith, Walterpfeifer, Wavelength, Weialawaga,
Wood Thrush, Wshun, Zundark, 101 anonymous edits
Symplectomorphism Source: http://en.wikipedia.org/w/index.php?oldid=579289774 Contributors: 9258fahsflkh917fas, Arcfrk, Bender235, Charles Matthews, Chtito, David Farris,
Enyokoyama, Fropuff, Geometry guy, Giftlite, Hadal, Headbomb, JHunterJ, Jevansen, Joe Decker, LarRan, Linas, Mark viking, Marsupilamov, Phe, Phys, Red Act, Rjwilmsi, TakuyaMurata,
TheTito, Tosha, 19 anonymous edits
Dynamical system Source: http://en.wikipedia.org/w/index.php?oldid=597584535 Contributors: 0, 195.186.254.xxx, 7&6=thirteen, Aaronp808, Adam majewski, Aleksandar Guzijan, Aliotra,
Altenmann, Americanhero, AntOnTrack, Ap, Athkalani, AxelBoldt, Bluemoose, Brazzouk, Byelf2007, CBM, CX, Caesium, Charles Matthews, Chetvorno, Chopchopwhitey, Complex01,
Complexica, Cumi, Cutler, Daniele.tampieri, Dhollm, Dinhxuanduyet, Dino, Djfrost711, Dmharvey, Dratman, Duoduoduo, Dysprosia, EPM, El C, Epbr123, Epolk, Everyking, Evilphoenix,
Filemon, Filur, Finn-Zoltan, Frederick.d.abraham, Fredrik, Funandtrvl, Gandalf61, Gareth Griffith-Jones, Giftlite, Graham87, Headbomb, Hesam7, Highlightened, Hve, Hydroli, Iamthedeus,
Jabernal, Jay Gatsby, Jeff3000, Jeffrey Smith, JerrySteal, Jitse Niesen, JocK, Jowa fan, Juenni32, Jugander, K-UNIT, KConWiki, KYPark, Karol Langner, Kayvan45622, Kenneth M Burke,
Klemen Kocjancic, Kotepho, Kzzl, LaHonda MBA, Lakinekaki, LeilaniLad, Levineps, Lightmouse, Linas, LokiClock, ManiacK, Marj Tiefert, Math.geek3.1415926, MathMartin, Mathfreak231,
Mathieu Perrin, Mathmanta, Mct mht, Mdd, Meersan, Michael Hardy, Milly.mortimer, MrOllie, Msh210, Myasuda, Neelix, Neruo, Nick Number, Nn123645, Noeckel, Oldtimermath, Oleg
Alexandrov, Orange Death, OrgasGirl, Paine Ellsworth, Patrickdepinguin, PetaRZ, Pgan002, Phys, PlatypeanArchcow, Profyorke, Purnendu Karmakar, R'n'B, RDBury, Randomguess, RedWolf,
Reddi, Reinderien, Revolver, Rhetth, Rhythmiccycle, Rich Farmbrough, Rintrah, Rjwilmsi, Roesser, Rschwieb, SEIBasaurus, Sadi Carnot, Salgueiro, Salix alba, Sam Korn, Samuelbf85,
Schmloof, SilverSurfer314, Snoyes, Solace098, Sverdrup, Tedder, Template namespace initialisation script, The Anome, The wub, Tiled, Tobias Hoevekamp, Tomisti, Tommyjs, Torcini, Tosha,
Ttrantow, Volfy, Voretus, Waitati, WaysToEscape, Wclxlus, WhiteC, WillowW, XJaM, XaosBits, Yellow octopus, Zsniew, ^musaz, 162 anonymous edits
Hamiltonian vector field Source: http://en.wikipedia.org/w/index.php?oldid=604298196 Contributors: AHusain314, Akriasas, Arcfrk, BD2412, Cong06, David Farris, Ebyabe, Geometry guy,
Guido Magnano, Headbomb, Jbergquist, Jtwdog, KSmrq, Keyi, Linas, Lseixas, ManuBerlin, Molinagaray, PV=nRT, Red Act, Saaska, Stephane Redon, TakuyaMurata, Vincent Semeria, 14
anonymous edits
Cotangent bundle Source: http://en.wikipedia.org/w/index.php?oldid=607143587 Contributors: Ardonik, Ben R-B, Billlion, Buka, Charles Matthews, CryptoDerk, Dysprosia, Fropuff, Giftlite,
Haseldon, Isnow, JoaoRicardo, Lethe, Linas, LokiClock, Mairi, N4nojohn, Ozob, Phys, Point-set topologist, Rausch, Rich Farmbrough, Rjwilmsi, Robinh, Shadowjams, Silly rabbit,
TakuyaMurata, Tobias Bergemann, Tong, Tosha, Wikidsp, 20 anonymous edits

Image Sources, Licenses and Contributors

File:Generalized coordinates 1df.svg Source: http://en.wikipedia.org/w/index.php?title=File:Generalized_coordinates_1df.svg License: Public Domain Contributors: User:Maschen
File:pendulumWithMovableSupport.svg Source: http://en.wikipedia.org/w/index.php?title=File:PendulumWithMovableSupport.svg License: Creative Commons Attribution-ShareAlike 3.0
Unported Contributors: CompuChip
File:Generalized coordinates 1 and 2 df.svg Source: http://en.wikipedia.org/w/index.php?title=File:Generalized_coordinates_1_and_2_df.svg License: Public Domain Contributors:
User:Maschen
File:Pendulum.gif Source: http://en.wikipedia.org/w/index.php?title=File:Pendulum.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Gurjete Ukaj
File:Double-Pendulum.svg Source: http://en.wikipedia.org/w/index.php?title=File:Double-Pendulum.svg License: GNU Free Documentation License Contributors: User JabberWok on
en.wikipedia
Image:Legendre transformation.png Source: http://en.wikipedia.org/w/index.php?title=File:Legendre_transformation.png License: GNU Free Documentation License Contributors:
Esagherardo
Image:TIKZ PICT FBN.png Source: http://en.wikipedia.org/w/index.php?title=File:TIKZ_PICT_FBN.png License: Creative Commons Attribution 3.0 Contributors: Fly by Night
Image:Focal stability.png Source: http://en.wikipedia.org/w/index.php?title=File:Focal_stability.png License: Public Domain Contributors: BMF81, EugeneZelenko, Mdd, Pieter Kuiper, 2
anonymous edits
File:Pendulum Phase Portrait.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pendulum_Phase_Portrait.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Kernsters
File:Hamiltonian flow classical.gif Source: http://en.wikipedia.org/w/index.php?title=File:Hamiltonian_flow_classical.gif License: Creative Commons Zero Contributors: User:Nanite
Image:Limitcycle.svg Source: http://en.wikipedia.org/w/index.php?title=File:Limitcycle.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Gargan
File:E8Petrie.svg Source: http://en.wikipedia.org/w/index.php?title=File:E8Petrie.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Jgmoxness
File:Liealgebra.png Source: http://en.wikipedia.org/w/index.php?title=File:Liealgebra.png License: Public Domain Contributors: Phys
File:Lorenz attractor yb.svg Source: http://en.wikipedia.org/w/index.php?title=File:Lorenz_attractor_yb.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors:
User:Dschwen, User:Wikimol
File:LinearFields.png Source: http://en.wikipedia.org/w/index.php?title=File:LinearFields.png License: Creative Commons Attribution 2.5 Contributors: XaosBits

License

Creative Commons Attribution-Share Alike 3.0
//creativecommons.org/licenses/by-sa/3.0/