
Wasserstein metric

In mathematics, the Wasserstein or Kantorovich–Rubinstein metric or distance is a distance function defined between
probability distributions on a given metric space M.

Intuitively, if each distribution is viewed as a unit amount of "dirt" piled on M, the metric is the minimum "cost" of turning one
pile into the other, which is assumed to be the amount of dirt that needs to be moved times the mean distance it has to be moved.
Because of this analogy, the metric is known in computer science as the earth mover's distance.

The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after the Russian mathematician Leonid Vaseršteĭn
who introduced the concept in 1969. Most English-language publications use the German spelling "Wasserstein" (attributed to the
name "Vaseršteĭn" being of German origin).


Definition
Let (M, d) be a metric space for which every probability measure on M is a Radon measure (a so-called Radon space). For
p ≥ 1, let P_p(M) denote the collection of all probability measures μ on M with finite p-th moment; that is, there exists some
x_0 in M such that:

    ∫_M d(x, x_0)^p dμ(x) < ∞.

The p-th Wasserstein distance between two probability measures μ and ν in P_p(M) is defined as

    W_p(μ, ν) := ( inf_{γ ∈ Γ(μ, ν)} ∫_{M×M} d(x, y)^p dγ(x, y) )^{1/p},

where Γ(μ, ν) denotes the collection of all measures on M × M with marginals μ and ν on the first and second factors
respectively. (The set Γ(μ, ν) is also called the set of all couplings of μ and ν.)

The above distance is usually denoted W_p(μ, ν) (typically among authors who prefer the "Wasserstein" spelling) or
ℓ_p(μ, ν) (typically among authors who prefer the "Vaserstein" spelling). The remainder of this article will use the W_p notation.

The Wasserstein metric may be equivalently defined by

    W_p(μ, ν) = ( inf E[ d(X, Y)^p ] )^{1/p},

where E[Z] denotes the expected value of a random variable Z and the infimum is taken over all joint distributions of the
random variables X and Y with marginals μ and ν respectively.
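On the real line, the p = 1 case of this definition has a closed-form solution in terms of the two cumulative distribution functions, and SciPy implements it for empirical distributions given as sample lists. A minimal sketch (the sample values are illustrative):

```python
# Sketch: the 1-Wasserstein distance between two empirical
# distributions on the real line, computed by SciPy from the
# closed-form integral of the difference of the two CDFs.
from scipy.stats import wasserstein_distance

# Two small empirical distributions (equal weights on each sample).
u = [0.0, 1.0, 3.0]
v = [5.0, 6.0, 8.0]

# The optimal coupling pairs the sorted samples: 0->5, 1->6, 3->8,
# so each unit of mass moves exactly 5 units and W1 = 5.
print(wasserstein_distance(u, v))  # -> 5.0
```
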

Intuition and connection to optimal transport


One way to understand the motivation of the above definition is to consider the
optimal transport problem. That is, for a distribution of mass μ(x) on a space X,
we wish to transport the mass in such a way that it is transformed into the
distribution ν(x) on the same space; transforming the 'pile of earth' μ to the pile
ν. This problem only makes sense if the pile to be created has the same mass as
the pile to be moved; therefore without loss of generality assume that μ and ν
are probability distributions containing a total mass of 1. Assume also that there
is given some cost function

    c(x, y) ≥ 0

that gives the cost of transporting a unit mass from the point x to the point y. A
transport plan to move μ into ν can be described by a function γ(x, y) which
gives the amount of mass to move from x to y. In order for this plan to be
meaningful, it must satisfy the following properties:

    ∫ γ(x, y) dy = μ(x)
    ∫ γ(x, y) dx = ν(y)

[Figure: Two one-dimensional distributions μ and ν, plotted on the x and y axes,
and one possible joint distribution that defines a transport plan between them.
The joint distribution/transport plan is not unique.]

That is, the total mass moved out of an infinitesimal region around x must be equal to μ(x) dx and the total mass moved into
a region around y must be ν(y) dy. This is equivalent to the requirement that γ be a joint probability distribution with marginals
μ and ν. Thus, the infinitesimal mass transported from x to y is γ(x, y) dx dy, and the cost of moving is c(x, y) γ(x, y) dx dy,
following the definition of the cost function. Therefore, the total cost of a transport plan γ is

    ∬ c(x, y) γ(x, y) dx dy.

The plan γ is not unique; the optimal transport plan is the plan with the minimal cost out of all possible transport plans. As
mentioned, the requirement for a plan to be valid is that it is a joint distribution with marginals μ and ν; letting Γ denote the set
of all such measures as in the first section, the cost of the optimal plan is

    C = inf_{γ ∈ Γ(μ, ν)} ∬ c(x, y) γ(x, y) dx dy.

If the cost of a move is simply the distance between the two points, then the optimal cost is identical to the definition of the W_1
distance.
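When μ and ν are supported on finitely many points, the infimum above is a finite linear program over the entries of γ. The sketch below (variable names are illustrative, cost c(x, y) = |x − y|) solves it with scipy.optimize.linprog:

```python
# Sketch: discrete optimal transport as a linear program.
# mu, nu are probability vectors on points xs, ys; the plan gamma
# is an m-by-n matrix with row sums mu and column sums nu.
import numpy as np
from scipy.optimize import linprog

xs = np.array([0.0, 1.0])        # support of mu
ys = np.array([0.0, 2.0])        # support of nu
mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])

# Cost c(x, y) = |x - y|, flattened to match gamma.ravel().
C = np.abs(xs[:, None] - ys[None, :])
m, n = C.shape

A_eq, b_eq = [], []
for i in range(m):               # mass leaving x_i must equal mu_i
    row = np.zeros((m, n)); row[i, :] = 1
    A_eq.append(row.ravel()); b_eq.append(mu[i])
for j in range(n):               # mass arriving at y_j must equal nu_j
    col = np.zeros((m, n)); col[:, j] = 1
    A_eq.append(col.ravel()); b_eq.append(nu[j])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None))
# Optimal plan: keep the mass at 0 in place, move 0.5 from 1 to 2.
print(res.fun)  # -> 0.5
```

Since the cost here is the distance itself, res.fun is exactly W_1(μ, ν) for these two discrete measures.
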

Examples

Point masses (degenerate distributions)


Let μ_1 = δ_{a_1} and μ_2 = δ_{a_2} be two degenerate distributions (i.e. Dirac delta distributions) located at points a_1 and a_2 in ℝ.
There is only one possible coupling of these two measures, namely the point mass δ_{(a_1, a_2)} located at (a_1, a_2). Thus, using
the usual absolute value function as the distance function on ℝ, for any p ≥ 1, the p-Wasserstein distance between μ_1 and μ_2 is

    W_p(μ_1, μ_2) = |a_1 − a_2|.

By similar reasoning, if μ_1 = δ_{a_1} and μ_2 = δ_{a_2} are point masses located at points a_1 and a_2 in ℝ^n, and we use the usual
Euclidean norm on ℝ^n as the distance function, then

    W_p(μ_1, μ_2) = ‖a_1 − a_2‖_2.
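This can be sanity-checked numerically for p = 1: SciPy treats a single-element sample list as a point mass, so the distance between two such lists should be the absolute difference of the points. A minimal sketch (the values are illustrative):

```python
# Sketch: W_p between two point masses equals the distance between
# the points; check the p = 1 case on the real line with SciPy.
from scipy.stats import wasserstein_distance

a1, a2 = 2.0, 7.5
d = wasserstein_distance([a1], [a2])  # one-sample lists act as Dirac masses
print(d)  # -> 5.5
```
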

Normal distributions
Let μ_1 = N(m_1, C_1) and μ_2 = N(m_2, C_2) be two non-degenerate Gaussian measures (i.e. normal distributions) on ℝ^n, with
respective expected values m_1, m_2 ∈ ℝ^n and symmetric positive semi-definite covariance matrices C_1 and C_2.
Then,[1] with respect to the usual Euclidean norm on ℝ^n, the 2-Wasserstein distance between μ_1 and μ_2 is

    W_2(μ_1, μ_2)^2 = ‖m_1 − m_2‖_2^2 + trace( C_1 + C_2 − 2 ( C_2^{1/2} C_1 C_2^{1/2} )^{1/2} ).

This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case p = 2),
since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term
disappears and only the term involving the Euclidean distance between the means remains.
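The closed form above is straightforward to evaluate numerically. A minimal sketch (the helper name gaussian_w2 is ours, not a library function), using scipy.linalg.sqrtm for the matrix square roots:

```python
# Sketch: closed-form 2-Wasserstein distance between the Gaussians
# N(m1, C1) and N(m2, C2), following the formula above.
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, C1, m2, C2):
    """2-Wasserstein distance between two Gaussian measures."""
    rC2 = sqrtm(C2)
    cross = sqrtm(rC2 @ C1 @ rC2)
    # sqrtm may return tiny imaginary parts for PSD inputs; discard them.
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2 * np.real(cross))
    return np.sqrt(max(w2_sq, 0.0))

# With equal covariances the trace term vanishes and the distance
# reduces to the Euclidean distance between the means.
m1, m2 = np.array([0.0, 0.0]), np.array([3.0, 4.0])
C = np.eye(2)
print(gaussian_w2(m1, C, m2, C))  # -> 5.0
```
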

Applications
The Wasserstein metric is a natural way to compare the probability distributions of two variables X and Y, where one variable is
derived from the other by small, non-uniform perturbations (random or deterministic).

In computer science, for example, the metric W1 is widely used to compare discrete distributions, e.g. the color histograms of two
digital images; see earth mover's distance for more details.

In their paper 'Wasserstein GAN', Arjovsky et al.[2] use the Wasserstein-1 metric as a way to improve the original framework of
Generative Adversarial Networks (GANs), to alleviate the vanishing-gradient and mode-collapse issues.

Properties

Metric structure
It can be shown that W_p satisfies all the axioms of a metric on P_p(M). Furthermore, convergence with respect to W_p is equivalent
to the usual weak convergence of measures plus convergence of the first p-th moments.
Dual representation of W1
The following dual representation of W_1 is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when μ
and ν have bounded support,

    W_1(μ, ν) = sup { ∫_M f(x) d(μ − ν)(x) : continuous f : M → ℝ, Lip(f) ≤ 1 },

where Lip(f) denotes the minimal Lipschitz constant for f.
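One practical consequence of the dual formula is that any single 1-Lipschitz function f yields a lower bound |E_μ[f] − E_ν[f]| ≤ W_1(μ, ν). A quick numeric sketch with f(x) = x on two empirical distributions (sample sizes and seed are arbitrary):

```python
# Sketch: Kantorovich-Rubinstein duality. Any 1-Lipschitz f gives
# the lower bound |E_mu[f] - E_nu[f]| <= W1(mu, nu); the supremum
# over all such f attains W1 exactly.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)   # samples from mu
y = rng.normal(2.0, 1.0, 1000)   # samples from nu

f = lambda t: t                  # f(t) = t is 1-Lipschitz
lower = abs(np.mean(f(x)) - np.mean(f(y)))
w1 = wasserstein_distance(x, y)
print(lower <= w1 + 1e-9)        # -> True
```
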

Compare this with the definition of the Radon metric:

    ρ(μ, ν) := sup { ∫_M f(x) d(μ − ν)(x) : continuous f : M → [−1, 1] }.

If the metric d is bounded by some constant C, then

    2 W_1(μ, ν) ≤ C ρ(μ, ν),

and so convergence in the Radon metric (identical to total variation convergence when M is a Polish space) implies
convergence in the Wasserstein metric, but not vice versa.
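The failure of the converse can be seen with point masses δ_0 and δ_ε: their W_1 distance is ε, which tends to 0, while their supports stay disjoint, so the total variation (and hence Radon) distance stays maximal. A small sketch (the total-variation value for disjoint point masses is written down directly rather than computed):

```python
# Sketch: W1 convergence does not imply total variation convergence.
# delta_0 and delta_eps have disjoint supports for every eps > 0, so
# their total variation distance stays 1 while W1 = eps -> 0.
from scipy.stats import wasserstein_distance

for eps in [1.0, 0.1, 0.001]:
    w1 = wasserstein_distance([0.0], [eps])  # equals eps
    tv = 1.0                                 # disjoint point masses: TV stays maximal
    print(eps, w1, tv)
```
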

Separability and completeness


For any p ≥ 1, the metric space (P_p(M), W_p) is separable, and is complete if (M, d) is separable and complete.[3]

See also
Lévy metric
Lévy–Prokhorov metric
Total variation distance of probability measures
Transportation theory
Earth mover's distance

References
1. Olkin, I.; Pukelsheim, F. (1982). "The distance between two random vectors with given dispersion matrices".
Linear Algebra Appl. 48: 257–263. doi:10.1016/0024-3795(82)90112-4. ISSN 0024-3795.
2. Arjovsky, Martin; Chintala, Soumith; Bottou, Léon (2017). "Wasserstein GAN". arXiv:1701.07875.
3. Bogachev, V. I.; Kolesnikov, A. V. (2012). "The Monge–Kantorovich problem: achievements, connections, and
perspectives". Russian Math. Surveys. 67: 785–890. doi:10.1070/RM2012v067n05ABEH004808.

Villani, Cédric (2008). Optimal Transport, Old and New. Springer. ISBN 978-3-540-71050-9.
Ambrosio, L.; Gigli, N.; Savaré, G. (2005). Gradient Flows in Metric Spaces and in the Space of Probability
Measures. Basel: ETH Zürich, Birkhäuser Verlag. ISBN 3-7643-2428-7.
Jordan, Richard; Kinderlehrer, David; Otto, Felix (1998). "The variational formulation of the Fokker–Planck
equation". SIAM J. Math. Anal. 29 (1): 1–17. doi:10.1137/S0036141096303359. ISSN 0036-1410. MR 1617171.
Rüschendorf, L. (2001) [1994]. "Wasserstein metric". In Hazewinkel, Michiel (ed.), Encyclopedia of Mathematics.
Springer Science+Business Media B.V. / Kluwer Academic Publishers. ISBN 978-1-55608-010-4.

External links
"What is the advantages of Wasserstein metric compared to Kullback–Leibler divergence?". Stack Exchange.
August 1, 2017. https://stats.stackexchange.com/q/295617

Retrieved from "https://en.wikipedia.org/w/index.php?title=Wasserstein_metric&oldid=903316923"

This page was last edited on 25 June 2019, at 00:19 (UTC).
