Marc Niethammer
1 Background
The derivative for “regular” functions on $\mathbb{R}$ is defined as
$$\dot f(x) := \lim_{\epsilon \to 0} \frac{f(x + \epsilon) - f(x)}{\epsilon} = \frac{df}{dx}.$$
The directional derivative, in the direction $u$, for a function defined on $\mathbb{R}^n$ is
$$f_u(x) = \lim_{\epsilon \to 0} \frac{f(x + \epsilon u) - f(x)}{\epsilon} = \left. \frac{\partial}{\partial \epsilon} f(x + \epsilon u) \right|_{\epsilon = 0}.$$
To find a stationary point (i.e., a local maximum or minimum) the derivative $\dot f(x)$ needs to vanish. In higher dimensions, the gradient $\nabla f$ needs to be identically zero. The latter may also be expressed as $f_u = 0\ \forall u$¹.
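As a quick numeric sanity check of the directional-derivative definition (a sketch; the function $f(x,y) = x^2 + y^2$, the evaluation point, and the direction $u$ below are arbitrary choices):

```python
# Numeric check (arbitrary choices throughout): the directional derivative
# f_u(x) agrees with grad f(x) . u for f(x, y) = x^2 + y^2, where grad f = (2x, 2y).
f = lambda x, y: x * x + y * y

x0, y0 = 1.0, -0.5
u = (0.6, 0.8)  # a unit-length direction

eps = 1e-6
f_u = (f(x0 + eps * u[0], y0 + eps * u[1]) - f(x0, y0)) / eps  # limit quotient
grad_dot_u = 2 * x0 * u[0] + 2 * y0 * u[1]
print(abs(f_u - grad_dot_u) < 1e-4)  # True
```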
In a sense these simple derivatives are the most basic form of the calculus of variations. Usually, however, one talks about calculus of variations in the context of determining functions that minimize functionals. In other words, the argument for which a given quantity (the functional) is extremal is no longer a point in space, but a function out of an appropriate class of functions. Given a functional $J : \mathcal{F} \to \mathbb{R}$ (where $\mathcal{F}$ is the desired space of functions) the Gâteaux variation is (compare this to the directional derivative)
$$\delta J(y; v) := \lim_{\epsilon \to 0} \frac{J(y + \epsilon v) - J(y)}{\epsilon} = \left. \frac{\partial}{\partial \epsilon} J(y + \epsilon v) \right|_{\epsilon = 0}, \qquad y \in \mathcal{F},\ \epsilon \in \mathbb{R},$$
where $v$ is a test function that needs to be chosen consistent with the problem’s boundary conditions. (More on this later; for now, just think of the example of a function with two fixed end points: at these fixed endpoints the test function $v$ cannot vary, since the solution is known there, so $v$ needs to be fixed to 0.) The test function $v$ plays the role of the $u$ in the previously introduced directional derivative. $\delta J$ needs to vanish for all $v$ to have an extremum. Figure 1 illustrates the finite-dimensional derivatives and the concept of the Gâteaux variation.
Before moving on to an example, let’s review the differentiation rule for integrals. Given
$$I(\epsilon) = \int_{x_1(\epsilon)}^{x_2(\epsilon)} f(x, \epsilon)\, dx,$$
the derivative with respect to $\epsilon$ is
$$\frac{dI}{d\epsilon} = f(x_2, \epsilon)\, \frac{dx_2}{d\epsilon} - f(x_1, \epsilon)\, \frac{dx_1}{d\epsilon} + \int_{x_1(\epsilon)}^{x_2(\epsilon)} \frac{\partial f}{\partial \epsilon}\, dx.$$
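This rule can be verified numerically. In the sketch below, the choices of $f$, $x_1$, and $x_2$ are arbitrary smooth examples, and the integrals are plain trapezoid sums:

```python
# Numeric check of the differentiation rule above; f, x1, x2 are arbitrary
# smooth choices and the integrals are plain trapezoid sums.
def f(x, e):      return x * x + e * x
def df_de(x, e):  return x            # partial of f with respect to epsilon
def x1(e):        return e            # lower bound, x1'(eps) = 1
def x2(e):        return 1.0 + e * e  # upper bound, x2'(eps) = 2*eps

def trapz(g, a, b, n=4000):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

def I(e):
    return trapz(lambda x: f(x, e), x1(e), x2(e))

e, h = 0.3, 1e-5
lhs = (I(e + h) - I(e - h)) / (2 * h)                      # numeric dI/deps
rhs = (f(x2(e), e) * 2 * e - f(x1(e), e) * 1.0
       + trapz(lambda x: df_de(x, e), x1(e), x2(e)))       # the rule above
print(abs(lhs - rhs) < 1e-6)  # True
```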
Together with the integration by parts rule
$$\int \frac{d}{dx}(gf)\, dx = gf\ (+c) = \int g \dot f\, dx + \int \dot g f\, dx,$$

¹ This will be connected to calculus of variations later.
[Figure 1 reproduces a slide (“Variational Methods and Dynamic Curve Evolution — Calculus of Variations,” UNC, May 7, 2007) with three panels: (a) $f(x) = x^2$ with $\frac{df(x)}{dx} = \lim_{\epsilon \to 0} \frac{f(x+\epsilon) - f(x)}{\epsilon} = 0$; (b) $f(x, y) = x^2 + y^2$ with $f_u(x) = \lim_{\epsilon \to 0} \frac{f(x + \epsilon u) - f(x)}{\epsilon} = \left.\frac{\partial}{\partial \epsilon} f(x + \epsilon u)\right|_{\epsilon = 0} = 0\ \forall u$; (c) a curve $C = (x(p), y(p))^T$ perturbed to $C + \epsilon V$, with $E(C) = \int_{p=0}^{1} \|C_p\|\, dp = L$ and $\delta E(C; V) = \lim_{\epsilon \to 0} \frac{E(C + \epsilon V) - E(C)}{\epsilon} = \left.\frac{\partial}{\partial \epsilon} E(C + \epsilon V)\right|_{\epsilon = 0} = 0\ \forall V$. The slide’s summary: variational calculus is “differential calculus for functions.”]

Figure 1: Illustration of the principle of calculus of variations as “differential calculus for functions.”
or with definite integration bounds
$$\int_a^b \dot g f\, dx = [gf]_a^b - \int_a^b g \dot f\, dx,$$
this is pretty much all one needs to know in practice to do calculus of variations with functionals having functions defined on $\mathbb{R}$ as their argument. Here is an unpractical example: what is the Gâteaux variation of the functional
$$F(y) = \int_a^b f(x, y, \dot y)\, dx\,?$$
Sticking strictly to the previous definition (probably always the safest bet, instead of using ready-made solutions) yields
$$\delta F(y; v) = \left. \frac{\partial}{\partial \epsilon} F(y + \epsilon v) \right|_{\epsilon = 0} = \left. \frac{\partial}{\partial \epsilon} \int_a^b f(x, y + \epsilon v, \dot y + \epsilon \dot v)\, dx \right|_{\epsilon = 0} = \int_a^b \frac{\partial f}{\partial y} v + \frac{\partial f}{\partial \dot y} \dot v\, dx.$$
For an extremal point, $\delta F(y; v) = 0\ \forall v$. To check what the condition on $y$ needs to be, integration by parts brings relief, since
$$\delta F(y; v) = \int_a^b \frac{\partial f}{\partial y} v - \frac{d}{dx}\left(\frac{\partial f}{\partial \dot y}\right) v\, dx + \left[\frac{\partial f}{\partial \dot y} v\right]_a^b = \int_a^b \left(\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial \dot y}\right)\right) v\, dx.$$
The boundary term dropped out since we assume the values of $y$ fixed at the boundary and thus $v(a) = v(b) = 0$. Alternatively, the boundary term yields the natural boundary conditions, i.e., $\frac{\partial f}{\partial \dot y} = 0$ at $x = a$ and/or $x = b$. Finally, the last result shows that for the Gâteaux variation to vanish for all $v$, the condition (known as the Euler-Lagrange equation)
$$\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial \dot y}\right) = 0 \qquad (1)$$
needs to hold. (This derivation easily extends to cases with an arbitrary order of derivatives as the arguments of the functional, leading to more complex Euler-Lagrange equations.) If the solution to the Euler-Lagrange equation is known, the problem is solved. Unfortunately, many times a closed-form solution is not known. One way to cope with this problem is to use a gradient descent technique. Incidentally, the left-hand side of the Euler-Lagrange equation can be regarded as an infinite-dimensional gradient in the $L^2$ sense, i.e.,
$$\nabla F = \frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial \dot y}\right),$$
yielding the gradient descent scheme
$$y_t = -\nabla F.$$
Why is this? We can write the Gâteaux variation in inner product form (which we have done all along, since the quantities of interest are real functions, but so far never explicitly spelled out). Then
$$\delta F(y; v) = \int g v\, dx = \langle g, v \rangle.$$
The Cauchy-Schwarz inequality is
$$|\langle g, v \rangle|^2 \le \langle g, g \rangle \cdot \langle v, v \rangle.$$
The function $g$ is given. If we assume without loss of generality that
$$\langle v, v \rangle = c^2 \langle g, g \rangle, \qquad c = \text{const.} \in \mathbb{R},$$
then
$$|\langle g, v \rangle|^2 \le c^2 \langle g, g \rangle^2.$$
Consequently, $v = cg$ maximizes the inner product. Thus, $g$ can be regarded as an infinite-dimensional gradient, $g = \nabla F$. Moving in its opposite direction amounts to moving in the direction decreasing the energy $F$ maximally fast, as desired.
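The Cauchy-Schwarz argument can be illustrated on a grid, where functions become sample vectors and $\langle f, g \rangle = \int fg\, dx$ becomes a Riemann sum (a sketch; the particular $g$ below is an arbitrary choice):

```python
import math, random

# Sketch of the Cauchy-Schwarz argument on a grid: sample "functions" as
# vectors, approximate <f, g> = integral of f*g dx by a Riemann sum, and
# observe that among all v with <v, v> = 1 the choice v proportional to g
# maximizes <g, v>. The particular g below is an arbitrary choice.
random.seed(0)
n, h = 100, 0.01

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) * h

g = [math.sin(2 * math.pi * i * h) + 0.5 for i in range(n)]
norm_g = math.sqrt(inner(g, g))

best = -float("inf")
for _ in range(200):
    v = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(inner(v, v))
    v = [x / s for x in v]          # normalize so that <v, v> = 1
    best = max(best, inner(g, v))

v_star = [x / norm_g for x in g]    # v proportional to g, also <v, v> = 1
print(inner(g, v_star) >= best)     # True: the aligned v gives the largest <g, v>
```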
2 Example: Shortest distance between two points
Here is a more practical example. Assume the path connecting two points $(a, y(a))$ and $(b, y(b))$ can be expressed as a function. Then the curve is given by
$$C(x, y(x)) = \begin{pmatrix} x \\ y(x) \end{pmatrix}$$
and its derivative with respect to $x$ is
$$\dot C(x, y(x)) = \begin{pmatrix} 1 \\ \dot y \end{pmatrix}.$$
The length of the curve may be written (this is the functional to be minimized) as
$$L = \int_a^b \|\dot C\|\, dx = \int_a^b \sqrt{1 + \dot y^2}\, dx. \qquad (2)$$
With $f(x, y, \dot y) = \sqrt{1 + \dot y^2}$ it follows
$$\frac{\partial f}{\partial y} = 0, \qquad \frac{\partial f}{\partial \dot y} = \frac{\dot y}{\sqrt{1 + \dot y^2}}.$$
Thus the Euler-Lagrange equation (plugging into Equation 1) becomes
$$\frac{d}{dx}\left(\frac{\dot y}{\sqrt{1 + \dot y^2}}\right) = 0.$$
This implies
$$\frac{\dot y}{\sqrt{1 + \dot y^2}} = \text{const.} \implies \dot y = \text{const.},$$
which proves that the shortest distance between two points, in terms of the length defined in Equation 2, is given by the length of the straight line connecting the two.
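To make the gradient-descent idea concrete on this example, here is a sketch that minimizes a direct discretization of the length functional of Equation 2; the grid size, step size, iteration count, and initial bump are all arbitrary choices:

```python
import math

# Sketch (not from the notes): gradient descent on a discretization of the
# length functional of Equation 2,
#     F(y) ~ sum_i sqrt(1 + ((y[i+1]-y[i])/h)^2) * h,
# with the endpoint values held fixed, mirroring v(a) = v(b) = 0.
n = 21
h = 1.0 / (n - 1)

def F(y):
    return sum(math.sqrt(1.0 + ((y[i+1] - y[i]) / h) ** 2) * h for i in range(n - 1))

def phi(s):
    # d/ds sqrt(1 + s^2) = s / sqrt(1 + s^2)
    return s / math.sqrt(1.0 + s * s)

y = [math.sin(math.pi * i * h) for i in range(n)]  # bumpy start with y(0) = y(1) = 0
f0 = F(y)
for _ in range(2000):
    # exact gradient of the discrete F with respect to the interior samples
    g = [0.0] * n
    for i in range(1, n - 1):
        g[i] = phi((y[i] - y[i - 1]) / h) - phi((y[i + 1] - y[i]) / h)
    y = [yi - 0.01 * gi for yi, gi in zip(y, g)]

print(F(y) < f0)                      # True: the descent shortens the curve
print(max(abs(v) for v in y) < 0.2)   # True: it approaches the straight line y = 0
```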
3 Example: Shortest distance between two points, the parametric version
Assuming that the curve connecting two points in a plane (as in the previous section) can be expressed as a function is an oversimplification of the problem. If a point in the plane is given by the coordinates $(\alpha, \beta)$ and both these coordinates are independently described as a function over a parametrization $p \in [0, 1]$, then
$$C = \begin{pmatrix} \alpha(p) \\ \beta(p) \end{pmatrix}, \qquad C_p = \begin{pmatrix} \alpha_p \\ \beta_p \end{pmatrix}.$$
The length functional then becomes
$$L = \int_{p=0}^{1} \|C_p\|\, dp.$$
The Gâteaux variation is
$$\delta L(C; V) = \left. \frac{\partial}{\partial \epsilon} \int_{p=0}^{1} \|C_p + \epsilon V_p\|\, dp \right|_{\epsilon = 0}.$$
Going through the usual motions yields
$$\delta L(C; V) = \int_{p=0}^{1} \frac{1}{\|C_p\|}\, C_p \cdot V_p\, dp = \int_{p=0}^{1} \underbrace{T}_{\text{tangent}} \cdot\, V_p\, dp$$
$$= \int_{p=0}^{1} -\frac{1}{\|C_p\|} \frac{\partial}{\partial p}(T) \cdot V\, \underbrace{\|C_p\|\, dp}_{ds} + [T \cdot V]_{p=0}^{1} = \int_0^l -\frac{\partial}{\partial s}(T) \cdot V\, ds,$$
where $s$ denotes arclength. Since the variation must vanish for all admissible $V$, it follows that $\partial T / \partial s = 0$, i.e., $T = \text{const.}$ Thus the curve connecting the two points is a straight line.
4 Example: Multidimensional domain, Laplace’s equation and the heat equation
Before delving into the multidimensional problem it is useful to review Green’s theorem (here in 2D), which will lead to the analogue of integration by parts in multiple dimensions. Green’s theorem in two dimensions states
$$\iint_\Omega \left(\frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y}\right) dx\, dy = \oint_{\partial\Omega} (P\, dy - Q\, dx).$$
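A quick numeric check of this form of Green’s theorem on the unit square, with the arbitrary test functions $P = xy^2$ and $Q = x^2 y$ (the exact common value is $2/3$):

```python
# Numeric check of the divergence form of Green's theorem above, on the
# unit square with the arbitrary test functions P = x*y**2 and Q = x**2*y.
def trapz(g, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

P  = lambda x, y: x * y**2
Q  = lambda x, y: x**2 * y
Px = lambda x, y: y**2      # dP/dx
Qy = lambda x, y: x**2      # dQ/dy

# area integral of P_x + Q_y over [0,1] x [0,1]
area = trapz(lambda x: trapz(lambda y: Px(x, y) + Qy(x, y), 0.0, 1.0, 200),
             0.0, 1.0, 200)

# boundary integral of (P dy - Q dx), traversed counterclockwise;
# the bottom (y = 0) and left (x = 0) edges contribute 0 for these P, Q
bnd = trapz(lambda y: P(1.0, y), 0.0, 1.0)    # right edge, dy > 0
bnd += trapz(lambda x: Q(x, 1.0), 0.0, 1.0)   # top edge: right-to-left, -Q dx flips sign
print(abs(area - bnd) < 1e-4)  # True
```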
Substituting $P = \eta G$ and $Q = \eta F$ results in the two-dimensional analogue of integration by parts
$$\iint_\Omega \begin{pmatrix} G & F \end{pmatrix} \nabla\eta\, dx\, dy = -\iint_\Omega \eta \left(\frac{\partial G}{\partial x} + \frac{\partial F}{\partial y}\right) dx\, dy + \oint_{\partial\Omega} \eta\, (G\, dy - F\, dx), \qquad (3)$$
where $\Omega$ is the domain of integration and $\partial\Omega$ is its boundary. Given the functional
$$L(\nabla I) = \int_\Omega \frac{1}{2} \|\nabla I\|^2\, d\Omega,$$
the Gâteaux variation is
$$\delta L(I; V) = \left. \frac{\partial}{\partial \epsilon} \int_\Omega \frac{1}{2} \|\nabla I + \epsilon \nabla V\|^2\, d\Omega \right|_{\epsilon = 0} = \int_\Omega \nabla I \cdot \nabla V\, d\Omega. \qquad (4)$$
With $\eta = V$, $G = I_x$, $F = I_y$, and the help of Equation 3, Equation 4 becomes
$$\delta L(I; V) = -\int_\Omega (I_{xx} + I_{yy})\, V\, d\Omega + \oint_{\partial\Omega} V (I_x\, dy - I_y\, dx) = -\int_\Omega \Delta I\, V\, d\Omega + \oint_{\partial\Omega} V \frac{\partial I}{\partial n}\, dS, \qquad (5)$$
where $dS$ denotes the surface element, $n$ the surface normal, and $\Delta I$ the Laplacian of $I$. The boundary term drops out for appropriate Dirichlet and Neumann conditions, and the Euler-Lagrange equation becomes
$$\Delta I = 0,$$
which is Laplace’s equation. Interpreting the Euler-Lagrange equation as an infinite-dimensional gradient leads to
$$I_t = \Delta I,$$
which is the heat equation.
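The connection can be observed numerically: an explicit finite-difference heat step decreases the Dirichlet energy $\int \frac{1}{2}\|\nabla I\|^2\, d\Omega$. In this sketch the grid size, time step, and random initialization are arbitrary; the time step is chosen to satisfy the 2-D explicit stability bound $\Delta t / h^2 \le 1/4$.

```python
import random

# Sketch: an explicit finite-difference heat step I_t = Laplacian(I) on a 2-D
# grid, with I fixed to 0 on the border (Dirichlet condition), decreases the
# Dirichlet energy. dt/h^2 = 0.2 satisfies the stability bound 1/4.
random.seed(1)
n, h, dt = 20, 1.0, 0.2
I = [[random.random() if 0 < i < n - 1 and 0 < j < n - 1 else 0.0
      for j in range(n)] for i in range(n)]

def energy(I):
    e = 0.0
    for i in range(n - 1):
        for j in range(n - 1):
            e += 0.5 * (((I[i+1][j] - I[i][j]) / h) ** 2
                        + ((I[i][j+1] - I[i][j]) / h) ** 2)
    return e * h * h

e0 = energy(I)
for _ in range(50):
    J = [row[:] for row in I]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            lap = (I[i+1][j] + I[i-1][j] + I[i][j+1] + I[i][j-1] - 4 * I[i][j]) / h**2
            J[i][j] = I[i][j] + dt * lap
    I = J
print(energy(I) < e0)  # True: the heat flow decreases the Dirichlet energy
```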
5 Example: Optical flow
Optical flow is the simplest approach to perform image registration, i.e., to map two images to each other. Section 5.1 discusses the classic Horn and Schunck optical flow formulation. Section 5.2 discusses an optical flow formulation that does not use the optical flow constraint. Both approaches are based on the assumption of image intensity constancy (for corresponding points), i.e.,
$$I(x, t) = \text{const.} \qquad (6)$$
5.1 Horn and Schunck
Taking the total derivative of Equation 6 yields the optical flow constraint equation
$$\frac{dI}{dt}(x, t) = I_t + \nabla_x I \cdot \frac{\partial x}{\partial t} = I_t + \nabla I \cdot v = 0, \qquad (7)$$
with $v = (v^1, v^2)^T$ the velocity vector. The Horn and Schunck energy (in two dimensions) is
$$L(v) = \frac{1}{2} \int_\Omega (I_t + \nabla I \cdot v)^2\, d\Omega + \alpha \frac{1}{2} \int_\Omega \|\nabla v^1\|^2 + \|\nabla v^2\|^2\, d\Omega,$$
where the second term is used to regularize the resulting vector field to cope with the aperture problem; we know its corresponding variation from the Laplace equation example above. Define
$$L_1(v) = \frac{1}{2} \int_\Omega (I_t + \nabla I \cdot v)^2\, d\Omega.$$
Its Gâteaux variation is
$$\delta L_1(v; V) = \left. \frac{\partial}{\partial \epsilon} \int_\Omega \frac{1}{2} (I_t + \nabla I \cdot (v + \epsilon V))^2\, d\Omega \right|_{\epsilon = 0} = \int_\Omega (I_t + \nabla I \cdot v)\, \nabla I \cdot V\, d\Omega.$$
Combining the result for $\delta L_1(v; V)$ with the Laplace equation result (5) for vanishing boundary conditions yields the Euler-Lagrange equation
$$(I_t + \nabla I \cdot v)\, \nabla I - \alpha \begin{pmatrix} \Delta v^1 \\ \Delta v^2 \end{pmatrix} = 0.$$
The corresponding gradient descent scheme is thus
$$v^1_t = -(I_t + \nabla I \cdot v)\, I_x + \alpha \Delta v^1,$$
$$v^2_t = -(I_t + \nabla I \cdot v)\, I_y + \alpha \Delta v^2.$$
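A minimal sketch of this gradient descent on a tiny synthetic pair (the Gaussian-blob images, the value of $\alpha$, the step size, and the iteration count are all arbitrary choices, not from the original formulation):

```python
import math

# Sketch of the Horn and Schunck gradient descent on a tiny synthetic pair:
# I2 is (approximately) I1 shifted by one pixel in x, so the true flow is
# roughly v = (1, 0). All parameter values are arbitrary choices.
n = 32
I1 = [[math.exp(-((i - 12)**2 + (j - 16)**2) / 20.0) for j in range(n)] for i in range(n)]
I2 = [[math.exp(-((i - 13)**2 + (j - 16)**2) / 20.0) for j in range(n)] for i in range(n)]
v1 = [[0.0] * n for _ in range(n)]
v2 = [[0.0] * n for _ in range(n)]
alpha, dt = 0.5, 0.05

def residual():
    # data term: sum of squared optical-flow-constraint residuals
    r = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            Ix = (I1[i+1][j] - I1[i-1][j]) / 2.0
            Iy = (I1[i][j+1] - I1[i][j-1]) / 2.0
            It = I2[i][j] - I1[i][j]
            r += (It + Ix * v1[i][j] + Iy * v2[i][j]) ** 2
    return r

r0 = residual()
for _ in range(100):
    w1 = [row[:] for row in v1]
    w2 = [row[:] for row in v2]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            Ix = (I1[i+1][j] - I1[i-1][j]) / 2.0
            Iy = (I1[i][j+1] - I1[i][j-1]) / 2.0
            It = I2[i][j] - I1[i][j]
            e = It + Ix * v1[i][j] + Iy * v2[i][j]
            lap1 = v1[i+1][j] + v1[i-1][j] + v1[i][j+1] + v1[i][j-1] - 4 * v1[i][j]
            lap2 = v2[i+1][j] + v2[i-1][j] + v2[i][j+1] + v2[i][j-1] - 4 * v2[i][j]
            w1[i][j] = v1[i][j] + dt * (-e * Ix + alpha * lap1)
            w2[i][j] = v2[i][j] + dt * (-e * Iy + alpha * lap2)
    v1, v2 = w1, w2
print(residual() < r0)  # True: the data term decreases
```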
5.2 “Nonlinear” optical flow
Instead of using the optical flow constraint for the minimization, the intensity mapping can be formulated explicitly, without the need to resort to a linear approximation (by taking the derivative). The energy then becomes
$$L(v) = \frac{1}{2} \int_\Omega (I_1(x - v) - I_2(x))^2\, d\Omega + \alpha \frac{1}{2} \int_\Omega \|\nabla v^1\|^2 + \|\nabla v^2\|^2\, d\Omega,$$
where $I_1$ and $I_2$ denote the images at time points $t_1$ and $t_2$, respectively. The Gâteaux variation of the second term is again already known; the one of the first is
$$\delta L_1(v; V) = \left. \frac{\partial}{\partial \epsilon} \int_\Omega \frac{1}{2} (I_1(x - v - \epsilon V) - I_2(x))^2\, d\Omega \right|_{\epsilon = 0} = \int_\Omega (I_1(x - v) - I_2(x))\, \nabla I_1(x - v) \cdot (-V)\, d\Omega.$$
Thus the gradient descent based on the Euler-Lagrange equation becomes
$$v^1_t = (I_1(x - v) - I_2(x))\, I_x(x - v) + \alpha \Delta v^1,$$
$$v^2_t = (I_1(x - v) - I_2(x))\, I_y(x - v) + \alpha \Delta v^2.$$
These equations are structurally very similar to the ones obtained for the Horn and Schunck optical flow. However, they make use of the explicit coordinate mapping.
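A 1-D sketch of the idea, restricted for simplicity to a single constant displacement parameter $v$ (the Gaussian image, grid, step size, and iteration count are arbitrary choices):

```python
import math

# 1-D sketch of the explicit matching term: gradient descent on
#     E(v) = 1/2 * sum (I1(x - v) - I2(x))^2 * dx,
# where, for simplicity, the displacement v is a single constant parameter
# and I2 is I1 shifted by d. All concrete choices are arbitrary.
d = 0.4
I1  = lambda x: math.exp(-x * x)
dI1 = lambda x: -2.0 * x * math.exp(-x * x)   # I1'
I2  = lambda x: I1(x - d)

xs = [i * 0.05 - 3.0 for i in range(121)]
v = 0.0
for _ in range(300):
    # dE/dv = sum (I1(x - v) - I2(x)) * (-I1'(x - v)) * dx
    grad = sum((I1(x - v) - I2(x)) * (-dI1(x - v)) for x in xs) * 0.05
    v -= 0.5 * grad
print(abs(v - d) < 1e-4)  # True: the descent recovers the shift
```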
6 A tiny little bit of differential geometry for planar curves
Given the planar parametrized curve
$$C = \begin{pmatrix} x(p) \\ y(p) \end{pmatrix},$$
the tangent to it is
$$T = \frac{C_p}{\|C_p\|} = C_s,$$
where
$$\frac{\partial}{\partial s} = \frac{1}{\|C_p\|} \frac{\partial}{\partial p}.$$
Since $\|T\|^2 = 1$, it follows that
$$\frac{\partial}{\partial s} \|T\|^2 = T_s \cdot T + T \cdot T_s = 2\, T_s \cdot T = 0.$$
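These identities are easy to verify numerically on a circle of radius $r$, where the curvature should come out as $\kappa = 1/r$ and $C_{ss}$ should be orthogonal to $T$ (a sketch; the radius, grid resolution, and sample point are arbitrary choices):

```python
import math

# Numeric check on a circle of radius r, parametrized by arclength s:
# C_ss should be orthogonal to T and have magnitude kappa = 1/r
# (with C_ss pointing inward, toward the center).
r, m = 2.0, 4000
h = 2 * math.pi * r / m   # arclength step

def C(s):
    return (r * math.cos(s / r), r * math.sin(s / r))

s0 = 1.234                # an arbitrary point on the circle
xm, ym = C(s0 - h); x0, y0 = C(s0); xp, yp = C(s0 + h)
T   = ((xp - xm) / (2 * h), (yp - ym) / (2 * h))              # C_s, central diff
Css = ((xp - 2 * x0 + xm) / h**2, (yp - 2 * y0 + ym) / h**2)  # C_ss = T_s

dot = T[0] * Css[0] + T[1] * Css[1]
kappa = math.hypot(*Css)
print(abs(dot) < 1e-6)               # True: T_s is orthogonal to T
print(abs(kappa - 1.0 / r) < 1e-3)   # True: |C_ss| recovers the curvature 1/r
```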
Thus, $T_s$ is orthogonal to $T$ and
$$T_s = C_{ss} = \kappa N,$$
where $N$ is the unit inward normal to the curve and
$$\kappa = C_{ss} \cdot N$$
is the signed Euclidean curvature. From before, the Gâteaux variation for $L = \int \|C_p\|\, dp$ is
$$\delta L(C; V) = \int_C -\frac{\partial}{\partial s}(T) \cdot V\, ds + \underbrace{[T \cdot V]_{p=0}^{1}}_{=0 \text{ (closed curve)}}.$$
With the new-found differential geometry knowledge this reduces to
$$\delta L(C; V) = \int_C -\kappa N \cdot V\, ds.$$
Thus the gradient descent for a length-minimizing flow is
$$C_t = \kappa N,$$
the curvature flow.
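A discrete sketch of the curvature flow: move each vertex of a closed polygon by a finite-difference approximation of $C_{ss}$ and watch the length decrease (the ellipse, time step, step count, and number of vertices are arbitrary choices):

```python
import math

# Discrete sketch of the curvature flow C_t = kappa * N on a closed polygon.
# Each vertex moves by a second difference of its neighbors, rescaled by the
# squared local spacing to approximate the arclength derivative C_ss.
m = 64
C = [(2 * math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
     for k in range(m)]  # an ellipse as the initial closed curve

def length(C):
    return sum(math.dist(C[k], C[(k + 1) % m]) for k in range(m))

L0 = length(C)
dt = 0.001
for _ in range(200):
    new = []
    for k in range(m):
        xm, ym = C[k - 1]
        x0, y0 = C[k]
        xp, yp = C[(k + 1) % m]
        hp = math.dist((x0, y0), (xp, yp))
        hm = math.dist((x0, y0), (xm, ym))
        h2 = ((hp + hm) / 2.0) ** 2
        new.append((x0 + dt * (xp - 2 * x0 + xm) / h2,
                    y0 + dt * (yp - 2 * y0 + ym) / h2))
    C = new
print(length(C) < L0)  # True: the curvature flow shortens the curve
```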
7 Example: Deriving the geodesic active contour
With the knowledge from Section 6 the evolution equation for the geodesic active contour can be derived.
The functional is
$$L(C, C_p) = \int_0^1 g(C)\, \|C_p\|\, dp = \int_0^l g(C)\, ds,$$
where $g > 0$. It corresponds to a weighted length. Using the developed machinery for calculus of variations, the Gâteaux variation is
$$\delta L(C; V) = \left. \frac{\partial}{\partial \epsilon} \int_0^1 g(C + \epsilon V)\, \|C_p + \epsilon V_p\|\, dp \right|_{\epsilon = 0}$$
$$= \int_0^1 \left. \frac{\partial}{\partial \epsilon} g(C + \epsilon V) \right|_{\epsilon = 0} \|C_p\| + g(C) \left. \frac{\partial}{\partial \epsilon} \|C_p + \epsilon V_p\| \right|_{\epsilon = 0} dp$$
$$= \int_0^1 \nabla g(C) \cdot V\, \|C_p\| + g(C)\, T \cdot V_p\, dp$$
$$= \int_0^1 \nabla g(C) \cdot V\, \|C_p\| - \frac{\partial}{\partial p}(g(C)\, T) \cdot V\, dp + \underbrace{[g(C)\, T \cdot V]_0^1}_{=0 \text{ (closed curve)}}$$
$$= \int_0^1 \nabla g(C) \cdot V\, \|C_p\| - \frac{\partial}{\partial s}(g(C)\, T) \cdot V\, \|C_p\|\, dp$$
$$= \int_0^l (\nabla g(C) - (\nabla g(C) \cdot T)\, T - g(C)\, \kappa N) \cdot V\, ds$$
$$= \int_0^l ((\nabla g(C) \cdot N)\, N - g(C)\, \kappa N) \cdot V\, ds.$$
The corresponding gradient descent flow is the evolution equation of the geodesic active contour (geometric curve evolution):
$$C_t = g(C)\, \kappa N - (\nabla g(C) \cdot N)\, N.$$
8 Where to go from here ...
For many problems in computer vision and image analysis, calculus of variations boils down to
1) Being able to take derivatives and becoming a master in the chain rule.
2) Being able to perform integration by parts, also in higher dimensions (Green’s theorem).
3) Choosing the right numerical implementation to solve the resulting partial differential equations (finite differences, finite elements, boundary elements, spectral approaches, etc.).
See [2, 1] for more details, for example on how to integrate constraints through Lagrange multipliers
and how to use variational calculus for dynamic problems in the context of optimal control. In particular,
Lanczos’ classic book [1] is well worth the read and motivates everything from a mechanics perspective.
References
[1] C. Lanczos. The Variational Principles of Mechanics. Courier Dover Publications, 1986.
[2] J. L. Troutman. Variational Calculus and Optimal Control. Springer, 1995.