CS 4487/9587 Algorithms for Image Analysis
The University of Western Ontario

2D Segmentation (part II): Deformable Models

Acknowledgements: many slides from the University of Manchester,
demos from the Visual Dynamics Group (University of Oxford)
Deformable Models in 2D
• Active Contours or "snakes"
  – "snakes" vs. "live-wire"
  – (discrete) energy formulations for snakes
  – relation to Hidden Markov Models (HMM)
• Optimization (discrete case)
  – Gradient Descent
  – Dynamic Programming (DP), Viterbi algorithm
  – DP versus Dijkstra

Extra Reading: Sonka et al., Sections 5.2.5 and 8.2;
"Active Contours" by Blake and Isard
"Live-wire" vs. "Snakes"
• Intelligent scissors [Mortensen, Barrett 1995]
• Live-wire [Falcao, Udupa, Samarasekera, Sharma 1998]

Shortest paths on an image-based graph connect seeds placed on the object boundary.
"Live-wire" vs. "Snakes"

• Snakes, active contours [Kass, Witkin, Terzopoulos 1987]
• In general, deformable models are widely used

Given: an initial contour (model) near the desirable object
Goal: evolve the contour to fit the exact object boundary
Tracking via deformable models

1. Use the final contour/model extracted at frame t as an initial solution for frame t+1
2. Evolve the initial contour to fit the exact object boundary at frame t+1
3. Repeat steps 1 and 2 for t := t+1


Acknowledgements: Visual Dynamics Group, Dept. Engineering Science, University of Oxford.

Applications:
• Traffic monitoring
• Human-computer interaction
• Animation
• Surveillance
• Computer-assisted diagnosis in medical imaging

Tracking Heart Ventricles

"Snakes"

(evolved via gradient descent w.r.t. some function describing the snake's quality)

• A smooth 2D curve which matches the image data
• Initialized near the target, iteratively refined
• Can restore missing data

(figure: initial → intermediate → final)

Q: How does that work? ...


Preview

For simplicity, assume that the "snake" is a vector (i.e. a point) in R^1.

Assume some energy function f(x), x ∈ R^1, describing the snake's "quality".

Gradient descent for 1D functions:

    x_{i+1} = x_i − Δt · f'(x_i)

Starting from x_0, the iterates x_1, x_2, ... move towards a local minimum x̂ of f.

Q: Is a snake (contour) a point in some space? ... Yes
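The 1D update rule above can be sketched in a few lines; the toy energy f(x) = (x − 2)^2 and the step size below are illustrative choices, not part of the slides.

```python
# A minimal sketch of 1D gradient descent: iterate x <- x - dt * f'(x).
def gradient_descent_1d(df, x0, dt=0.1, steps=100):
    """Follow the negative derivative until (approximately) reaching a local minimum."""
    x = x0
    for _ in range(steps):
        x = x - dt * df(x)
    return x

# Toy energy f(x) = (x - 2)^2 with derivative f'(x) = 2*(x - 2); minimum at x = 2
x_hat = gradient_descent_1d(lambda x: 2 * (x - 2), x0=10.0)
print(round(x_hat, 3))  # -> 2.0
```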


Parametric Curve Representation (continuous case)

• A curve can be represented by two functions of a parameter s:

    v(s) = ( x(s), y(s) ),   0 ≤ s ≤ 1

(the curve may be open or closed)

Note: in computer vision and medical imaging the term "snake" is commonly associated with
such parametric representation of contours. (Other representations will be discussed later!)

Here, a contour is a point in R^∞ (a space of functions):

    C = { v(s) | s ∈ [0,1] }
Parametric Curve Representation (discrete case)

• A curve can be represented by a set of 2D points:

    v_i = ( x_i, y_i ),   0 ≤ i < n

Here, a contour is a point in R^(2n):

    C = ( v_i | 0 ≤ i < n ) = ( x_0, y_0, x_1, y_1, ..., x_{n−1}, y_{n−1} )
Measuring the snake's quality: Energy function

Contours can be seen as points C in R^∞ (or in R^(2n)):

    C = ( v_0, v_1, v_2, ..., v_{n−1} ) = ( x_0, y_0, x_1, y_1, ..., x_{n−1}, y_{n−1} )

We can define an energy function E(C) that assigns a number (quality measure) to every possible snake:

    E : R^(2n) → R
        (contours C) ↦ (scalars)

WHY? A somewhat philosophical question, but specifying a quality function E(C) is an objective
way to define what "good" means for contours C. Moreover, one can find "the best" contour
(segmentation) by optimizing the energy E(C).

Q: Did we use any function (energy) to measure the quality of segmentation results in
1) image thresholding? NO
2) region growing? NO
3) live-wire? YES (will compare later)
Energy function

Usually, the total energy of a snake is a combination of internal and external energies:

    E = E_in + E_ex

Internal energy encourages smoothness or any particular shape. It incorporates prior knowledge
about the object boundary, allowing the boundary to be extracted even if some image data is missing.

External energy encourages the curve to move onto image structures (e.g. image edges).
Internal Energy (continuous case)

• The smoothness energy at contour point v(s) can be evaluated as

    E_in( v(s) ) = α · | dv/ds (s) |^2  +  β · | d^2v/ds^2 (s) |^2

      (elasticity/stretching)      (stiffness/bending)

Then the interior energy (smoothness) of the whole snake C = { v(s) | s ∈ [0,1] } is

    E_in = ∫_0^1 E_in( v(s) ) ds
Internal Energy (discrete case)

    C = ( v_0, v_1, v_2, ..., v_{n−1} ) ∈ R^(2n),    v_i = ( x_i, y_i )

Elastic energy (elasticity):

    dv/ds ≈ v_{i+1} − v_i

Bending energy (stiffness):

    d^2v/ds^2 ≈ ( v_{i+1} − v_i ) − ( v_i − v_{i−1} ) = v_{i+1} − 2·v_i + v_{i−1}

Summing over all nodes gives the discrete internal energy:

    E_in = Σ_{i=0}^{n−1}  α · | v_{i+1} − v_i |^2  +  β · | v_{i+1} − 2·v_i + v_{i−1} |^2

             (elasticity)                  (stiffness)
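The discrete internal energy above can be sketched with numpy; storing the closed contour as an (n, 2) array of points is a hypothetical layout choice for illustration.

```python
import numpy as np

# Discrete internal energy for a closed contour C (an (n, 2) array of (x, y) points):
#   E_in = sum_i alpha*|v_{i+1}-v_i|^2 + beta*|v_{i+1}-2*v_i+v_{i-1}|^2
def internal_energy(C, alpha=1.0, beta=1.0):
    d1 = np.roll(C, -1, axis=0) - C                               # finite difference ~ dv/ds
    d2 = np.roll(C, -1, axis=0) - 2 * C + np.roll(C, 1, axis=0)   # ~ d^2v/ds^2
    elastic = np.sum(d1 ** 2)                                     # elasticity (stretching)
    bending = np.sum(d2 ** 2)                                     # stiffness (bending)
    return alpha * elastic + beta * bending

# A unit square: each of the 4 sides has length 1, so the elastic term is 4
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(internal_energy(square, alpha=1.0, beta=0.0))  # -> 4.0
```

Note that `np.roll` implements the cyclic neighbor indexing of a closed contour; an open contour would use one-sided differences at the end points instead.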
External energy

• The external energy describes how well the curve matches the image data locally
• Numerous forms can be used, attracting the curve toward different image features


• Suppose we have an image I(x,y)
• We can compute the image gradient ∇I = ( I_x, I_y ) at any point
• Edge strength at pixel (x,y) is | ∇I(x,y) |
• The external energy of a contour point v = (x,y) could be

    E_ex(v) = −| ∇I(v) |^2 = −| ∇I(x,y) |^2

• The external energy term for the whole snake is

    continuous case, C = { v(s) | s ∈ [0,1] }:   E_ex = ∫_0^1 E_ex( v(s) ) ds
    discrete case,   C = { v_i | 0 ≤ i < n }:    E_ex = Σ_{i=0}^{n−1} E_ex( v_i )
Basic Elastic Snake

• The total energy of a basic elastic snake is

    continuous case, C = { v(s) | s ∈ [0,1] }:

        E = α · ∫_0^1 | dv/ds |^2 ds  −  ∫_0^1 | ∇I( v(s) ) |^2 ds

    discrete case, C = { v_i | 0 ≤ i < n }:

        E = α · Σ_{i=0}^{n−1} | v_{i+1} − v_i |^2  −  Σ_{i=0}^{n−1} | ∇I( v_i ) |^2

(the first sum is the elastic smoothness term (interior energy),
 the second is the image data term (exterior energy))
Basic Elastic Snake (discrete case)

    C = ( v_i | 0 ≤ i < n ) = ( x_0, y_0, x_1, y_1, ..., x_{n−1}, y_{n−1} )

    E_in = α · Σ_{i=0}^{n−1} L_i^2 = α · Σ_{i=0}^{n−1} ( x_{i+1} − x_i )^2 + ( y_{i+1} − y_i )^2

(note: this term alone can make a curve shrink to a point)

    E_ex = − Σ_{i=0}^{n−1} | ∇I( x_i, y_i ) |^2 = − Σ_{i=0}^{n−1} | I_x( x_i, y_i ) |^2 + | I_y( x_i, y_i ) |^2

• The problem is to find the contour C = ( x_0, ..., x_{n−1}, y_0, ..., y_{n−1} ) ∈ R^(2n) that minimizes

    E(C) = α · Σ_{i=0}^{n−1} ( x_{i+1} − x_i )^2 + ( y_{i+1} − y_i )^2  −  Σ_{i=0}^{n−1} | I_x( x_i, y_i ) |^2 + | I_y( x_i, y_i ) |^2

• This is an optimization problem for a function of 2n variables
  – can compute local minima via gradient descent (coming soon)
  – a potentially more robust option: dynamic programming (later)
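The full objective E(C) can be sketched as a single function of the contour; the step-edge image, the contours, and the use of `np.gradient` below are illustrative assumptions.

```python
import numpy as np

# Basic elastic snake energy for an open contour C, an (n, 2) array of (x, y) points:
#   E(C) = alpha * sum |v_{i+1} - v_i|^2  -  sum |grad I(v_i)|^2
def snake_energy(I, C, alpha=1.0):
    Iy, Ix = np.gradient(I.astype(float))
    xs, ys = C[:, 0].astype(int), C[:, 1].astype(int)
    data = -np.sum(Ix[ys, xs] ** 2 + Iy[ys, xs] ** 2)   # exterior (image) term
    diffs = np.diff(C, axis=0)                          # v_{i+1} - v_i (open curve)
    smooth = alpha * np.sum(diffs ** 2)                 # interior (elastic) term
    return smooth + data

I = np.zeros((8, 8))
I[:, 4:] = 1.0                                       # vertical step edge
C_on  = np.array([[4, 1], [4, 3], [4, 5]], float)    # contour lying on the edge
C_off = np.array([[1, 1], [1, 3], [1, 5]], float)    # same shape, flat region
print(snake_energy(I, C_on) < snake_energy(I, C_off))  # -> True
```

Both contours have the same interior energy here, so the one sitting on strong image gradients wins on the data term.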

Basic Elastic Snake: synthetic example

(figure: four stages (1)-(4) of the snake's evolution)

Basic Elastic Snake: dealing with missing data

• The smoothness constraint can deal with missing data

Basic Elastic Snake: relative weighting

• Notice that the strength of the internal elastic component can be controlled by a parameter α:

    E_in = α · Σ_{i=0}^{n−1} L_i^2

• Increasing α increases the stiffness of the curve

(figure: results for large α, medium α, small α)


Encouraging point spacing

• To stop the curve from shrinking to a point:

    E_in = α · Σ_{i=0}^{n−1} ( L_i − L̂ )^2

  – encourages a particular point separation L̂

Simple shape prior

• If the object is some smooth variation on a known shape, use

    E_in = α · Σ_{i=0}^{n−1} ( v_i − v̂_i )^2

  – where { v̂_i } are the points of the base shape

• One may also use a statistical (Gaussian) shape model:

    E_in = −ln N( v | v̂ ) = ( v − v̂ )^T · Σ^{−1} · ( v − v̂ ) + const

Alternative External Energies

• Directed gradient measures:

    E_ex = − Σ_{i=0}^{n−1}  u_{x,i} · I_x( v_i ) + u_{y,i} · I_y( v_i )

  – where u_i = ( u_{x,i}, u_{y,i} ) is the unit normal to the boundary at contour point v_i
  – This gives a strong response when the boundary has the same direction as the edge,
    but a weaker response when it does not
Additional Constraints

• Snakes were originally developed for interactive image segmentation
• The initial snake result can be nudged where it goes wrong
• Simply add extra external energy terms to
  – pull nearby points toward the cursor, or
  – push nearby points away from the cursor

Interactive (external) forces

• Pull points towards cursor position p:

    E_pull = − Σ_{i=0}^{n−1}  r^2 / | v_i − p |^2

  – nearby points get pulled hardest
  – the negative sign gives better (lower) energy for positions near p

• Push points away from cursor position p:

    E_push = + Σ_{i=0}^{n−1}  r^2 / | v_i − p |^2

  – nearby points get pushed hardest
  – the positive sign gives better (lower) energy for positions far from p
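Both interactive terms can be sketched with one small function; the cursor position p, the scale r, and the sample contours are hypothetical values for illustration.

```python
import numpy as np

# Interactive energies E_pull = -sum r^2/|v_i - p|^2 and E_push = +sum r^2/|v_i - p|^2
def interactive_energy(C, p, r=1.0, mode="pull"):
    d2 = np.sum((C - np.asarray(p, float)) ** 2, axis=1)  # squared distances |v_i - p|^2
    sign = -1.0 if mode == "pull" else 1.0
    return sign * np.sum(r ** 2 / d2)

C_near = np.array([[1.0, 0.0], [2.0, 0.0]])
C_far  = np.array([[10.0, 0.0], [20.0, 0.0]])
p = (0.0, 0.0)
# pulling: contours near the cursor have lower (better) energy
print(interactive_energy(C_near, p, mode="pull") < interactive_energy(C_far, p, mode="pull"))  # -> True
```

The 1/|v_i − p|^2 falloff is what makes nearby points respond hardest while distant points are barely affected.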
Dynamic snakes

• Add motion parameters as variables (for each snake node)
• Introduce energy terms for motion consistency
• Primarily useful for tracking (nodes represent real tissue elements with mass and kinetic energy)

Open and Closed Curves

 0  n 0 open curve
 n 1
closed curve

n 1 n2
Ein   (
i 0
i 1  i )
2
Ein   (
i 0
i 1  i ) 2

 When using an open curve we can impose constraints on


the end points (e.g. end points may have fixed position)

– Q: What are similarities and differences with the live-wire


if the end points of an open snake are fixed?
Discrete Snakes Optimization

• At each iteration we compute a new snake position within proximity to the previous snake
• The new snake energy should be smaller than the previous one
• Stop when the energy cannot be decreased within a local neighborhood of the snake (a local energy minimum)

Optimization Methods:
1. Gradient Descent
2. Dynamic Programming

Gradient Descent

• Example: minimization of a function of two variables E(x,y)

    ∇E = ( ∂E/∂x , ∂E/∂y )

• The negative gradient at a point (x,y) gives the direction of the steepest descent
  towards lower values of the function E
• Update equation for a point p = (x,y):

    p' = p − Δt · ∇E

    i.e.  ( x', y' ) = ( x, y ) − Δt · ( ∂E/∂x , ∂E/∂y )

• Stop at a local minimum, where ∇E = 0
• High sensitivity w.r.t. the initialisation!
Gradient Descent for Snakes

Simple elastic snake energy (a function of 2n variables):

    E( x_0, ..., x_{n−1}, y_0, ..., y_{n−1} ) = − Σ_{i=0}^{n−1} ( | I_x( x_i, y_i ) |^2 + | I_y( x_i, y_i ) |^2 )
                                                + α · Σ_{i=0}^{n−1} ( ( x_{i+1} − x_i )^2 + ( y_{i+1} − y_i )^2 )

Update equation for the whole snake:

    C' = C − Δt · ∇E

Update equation for each node:

    v'_i = v_i + Δt · F_i,    where   F_i = − ( ∂E/∂x_i , ∂E/∂y_i )

The partial derivatives are

    ∂E/∂x_i = −2·( I_x·I_xx + I_y·I_yx ) − α·2·( x_{i+1} − x_i ) + α·2·( x_i − x_{i−1} )
    ∂E/∂y_i = −2·( I_x·I_xy + I_y·I_yy ) − α·2·( y_{i+1} − y_i ) + α·2·( y_i − y_{i−1} )

(the first term comes from the exterior (image) energy,
 the remaining terms from the interior (smoothness) energy)

Q: Do points move independently?
NO, the motion of point i depends on the positions of the neighboring points.

Equivalently, the force at node v_i is

    F_i = ∇( | ∇I |^2 ) |_(x_i, y_i)  +  2α · d^2v/ds^2

i.e. motion of v_i towards a higher magnitude of image gradients, plus motion of v_i
reducing the contour's bending. (The second term for v_i depends on the neighbors
v_{i−1} and v_{i+1}.)
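One update step of this scheme can be sketched as below, assuming a closed contour stored as an (n, 2) array; nearest-pixel sampling (instead of bilinear interpolation) and `np.gradient` are simplifying choices for illustration.

```python
import numpy as np

# One gradient-descent step for a simple elastic snake:
#   v_i' = v_i + dt * F_i,
#   F_i  = grad(|grad I|^2) at v_i  +  2*alpha*(v_{i+1} - 2*v_i + v_{i-1})
def snake_step(I, C, alpha=0.5, dt=0.1):
    Iy, Ix = np.gradient(I.astype(float))
    edge = Ix ** 2 + Iy ** 2                 # edge strength |grad I|^2
    gy, gx = np.gradient(edge)               # grad(|grad I|^2): pulls nodes to strong edges
    xs = np.clip(C[:, 0].round().astype(int), 0, I.shape[1] - 1)
    ys = np.clip(C[:, 1].round().astype(int), 0, I.shape[0] - 1)
    F_image = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)
    # elastic force 2*alpha*(v_{i+1} - 2*v_i + v_{i-1}) for a closed contour
    F_elastic = 2 * alpha * (np.roll(C, -1, axis=0) - 2 * C + np.roll(C, 1, axis=0))
    return C + dt * (F_image + F_elastic)

# On a flat image the image force vanishes, so the elastic force shrinks a square
I = np.zeros((16, 16))
square = np.array([[4, 4], [12, 4], [12, 12], [4, 12]], float)
C1 = snake_step(I, square)
print(np.all(np.abs(C1 - 8) < np.abs(square - 8)))  # -> True (moved toward the centroid)
```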
Discrete Snakes: "Gradient Flow" evolution

Contour evolution via "gradient flow":

    dC/dt = −∇E

Update equation for each node:

    v'_i = v_i + Δt · F_i,    i = 0, ..., n−1

Stopping criterion:

    F_i = 0 for all i   ⟺   ∇E = 0

i.e. a local minimum of the energy E.
Difficulties with Gradient Descent

• It is very difficult to obtain accurate estimates of high-order derivatives on images
  (discretization errors)
  – e.g., estimating ∇E_ex requires computation of the second image derivatives I_xx, I_xy, I_yy
• Gradient descent is not trivial even for one-dimensional functions; robust numerical
  performance for a 2n-dimensional function can be problematic
  – the choice of the step size Δt is non-trivial
    · if Δt is too small, the algorithm may be too slow
    · if Δt is too large, the algorithm may never converge
  – even if it has "converged" to a good local minimum, the snake is likely to oscillate near it
Alternative solution for 2D snakes: Dynamic Programming

• In many cases, the snake energy can be written as a sum of pair-wise interaction potentials:

    E_total( v_0, ..., v_{n−1} ) = Σ_{i=0}^{n−1} E_i( v_i, v_{i+1} )

• More generally, it can be written as a sum of higher-order interaction potentials
  (e.g. triple interactions):

    E_total( v_0, ..., v_{n−1} ) = Σ_{i=0}^{n−1} E_i( v_{i−1}, v_i, v_{i+1} )

Snake energy: pair-wise interactions

Example: the simple elastic snake energy

    E_total( x_0, ..., x_{n−1}, y_0, ..., y_{n−1} ) = − Σ_{i=0}^{n−1} ( | I_x( x_i, y_i ) |^2 + | I_y( x_i, y_i ) |^2 )
                                                     + α · Σ_{i=0}^{n−1} ( ( x_{i+1} − x_i )^2 + ( y_{i+1} − y_i )^2 )

can be rewritten as

    E_total( v_0, ..., v_{n−1} ) = − Σ_{i=0}^{n−1} || ∇I( v_i ) ||^2 + α · Σ_{i=0}^{n−1} || v_{i+1} − v_i ||^2
                                 = Σ_{i=0}^{n−1} E_i( v_i, v_{i+1} )

where

    E_i( v_i, v_{i+1} ) = −|| ∇I( v_i ) ||^2 + α · || v_{i+1} − v_i ||^2

Q: Give an example of a snake with triple-interaction potentials.


DP Snakes [Amini, Weymouth, Jain, 1990]

(figure: control points v_1, ..., v_n, each with a small search box around it)

First-order interactions:

    E( v_1, v_2, ..., v_n ) = E_1( v_1, v_2 ) + E_2( v_2, v_3 ) + ... + E_{n−1}( v_{n−1}, v_n )

The energy E is minimized via Dynamic Programming.

Iterate until the optimal position of each point is the center of its box,
i.e. the snake is optimal in the local search space constrained by the boxes.
Dynamic Programming (DP): Viterbi Algorithm

Here we will concentrate on first-order interactions:

    E_1( v_1, v_2 ) + E_2( v_2, v_3 ) + ... + E_{n−1}( v_{n−1}, v_n )

(figure: a trellis with sites v_1, v_2, ..., v_n and m candidate states per site;
 cumulative costs Ê_i(k) are propagated from left to right, starting from Ê_1(k) = 0)

Complexity: O(n·m^2);  worst case = best case.
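The forward/backward passes of Viterbi on such a chain can be sketched as follows; the pairwise cost tables below are illustrative stand-ins for real snake costs E_i(v_i, v_{i+1}).

```python
import numpy as np

# Viterbi / DP for a chain energy E(v_1..v_n) = sum_i E_i(v_i, v_{i+1}),
# m candidate states per site; complexity O(n * m^2).
def viterbi(pairwise):
    """pairwise: list of (m, m) arrays, pairwise[i][a, b] = E_i(v_i = a, v_{i+1} = b)."""
    m = pairwise[0].shape[0]
    cost = np.zeros(m)                     # cumulative cost of reaching each state of site 1
    back = []
    for E in pairwise:                     # forward pass: propagate cumulative costs
        total = cost[:, None] + E          # total[a, b]: best cost of ending with pair (a, b)
        back.append(np.argmin(total, axis=0))
        cost = np.min(total, axis=0)
    path = [int(np.argmin(cost))]          # backward pass: recover the optimal states
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1], float(np.min(cost))

# Tiny example: 3 sites, 2 states; staying in state 0 is free, anything else costs 1
E = np.array([[0.0, 1.0], [1.0, 1.0]])
path, best = viterbi([E, E])
print(path, best)  # -> [0, 0, 0] 0.0
```

Unary terms (e.g. the image term of a snake) can simply be folded into the pairwise tables before running the recursion.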
Dynamic Programming and Hidden Markov Models (HMM)

• DP is widely used in speech recognition

(figure: words word_1, word_2, word_3, word_4 ordered in time;
 hidden variables (words) are estimated from the observed signal)

    E_1( v_0, v_1 ) + ... + E_i( v_{i−1}, v_i ) + ... + E_n( v_{n−1}, v_n )

with unary terms −ln Pr( signal(t_i) | word_i ) and pairwise terms −ln Pr( word_i | word_{i−1} ).
Snakes can also be seen as Hidden Markov Models (HMM)

• The positions of snake nodes are the hidden variables
• Temporal order is replaced with spatial order
• The observed audible signal is replaced with the image

    E_1( v_1, v_2 ) + ... + E_i( v_i, v_{i+1} ) + ... + E_{n−1}( v_{n−1}, v_n )

with image terms −|| ∇I( v_i ) ||^2 and pairwise terms E_elastic( v_i, v_{i+1} ).
Dynamic Programming for a closed snake?

Clearly, DP can be applied to optimize an open-ended snake:

    E_1( v_1, v_2 ) + E_2( v_2, v_3 ) + ... + E_{n−1}( v_{n−1}, v_n )

Can we use DP for a "looped" energy in the case of a closed snake?

    E_1( v_1, v_2 ) + E_2( v_2, v_3 ) + ... + E_{n−1}( v_{n−1}, v_n ) + E_n( v_n, v_1 )
Dynamic Programming for a closed snake

    E_1( v_1, v_2 ) + E_2( v_2, v_3 ) + ... + E_{n−1}( v_{n−1}, v_n ) + E_n( v_n, v_1 )

1. We can use Viterbi to optimize the snake energy when v_1 = c is fixed
   (in this case the energy above effectively has no loop).
2. For an exact solution: use Viterbi to optimize the snake for all m possible values of c
   and choose the best of the obtained solutions
   (the complexity increases to O(n·m^3)).

DP has problems with "loops" (even one loop increases the complexity).
However, some approximation tricks can be used in practice:

1. Use DP to optimize the snake energy with v_1 fixed (according to a given initial snake position).
2. Use DP to optimize the snake energy again, this time fixing the position of an intermediate
   node v_{n/2} = v̂_{n/2}, where v̂ is the optimal solution obtained in step 1.

This is only an approximation, but the complexity is good: O(n·m^2).
Dynamic Programming for snakes with higher-order interactions

    E_1( v_1, v_2, v_3 ) + E_2( v_2, v_3, v_4 ) + ... + E_{n−2}( v_{n−2}, v_{n−1}, v_n )

(e.g. if bending energy is added into the "model" of the snake)

The Viterbi algorithm can be generalized to the 3-clique case, but its complexity
increases to O(n·m^3).

One approach: combine each pair of neighboring nodes into one super-node. Each triple
interaction can then be represented as a pair-wise interaction between two super-nodes.
The Viterbi algorithm will need m^3 operations for each super-node (why?).
DP snakes (open case): Summary of Complexity

    energy type (order of interactions d)                       complexity
    ----------------------------------------------------------  -----------------------------
    Σ_{i=1}^{n}   E_i( v_i )               unary (d=1)           O(n·m)
    Σ_{i=1}^{n−1} E_i( v_i, v_{i+1} )      pair-wise (d=2)       O((n−1)·m^2) *
    Σ_{i=1}^{n−2} E_i( v_i, v_{i+1}, v_{i+2} )  triple (d=3)     O((n−2)·m^3) *
    E( v_1, v_2, ..., v_n )   complete connectivity (d=n)        O(m^n) (exhaustive search)

    * adding a single loop increases the complexity by a factor of m^(d−1)
Problems with snakes

• Depends on the number and spacing of control points
• The snake may over-smooth the boundary
• It is not trivial to prevent the curve from self-intersecting
• Cannot follow topological changes of objects


• May be sensitive to initialization
  – may get stuck in a local energy minimum near the initial contour
• Numerical stability can be an issue for gradient descent and variational methods
  (continuous formulation)
  – e.g. they require computing second-order derivatives
• The general concept of snakes (deformable models) does generalize to 3D (deformable
  meshes), but many robust optimization methods suitable for 2D snakes do not apply in 3D
  – e.g. dynamic programming only works for 2D snakes

• External energy: one may need to diffuse the image gradients, otherwise the snake does not
  really "see" object boundaries in the image unless it gets very close to them

(figure: image gradients ∇I are large only directly on the boundary)
Diffusing Image Gradients ∇I

Image gradients can be diffused via Gradient Vector Flow (GVF):

Chenyang Xu and Jerry Prince, 1998
http://iacl.ece.jhu.edu/projects/gvf/
Alternative Way to Improve External Energy

n 1 n 1
 Use Eex   D( v i ) instead of Eex  | I ( v i ) | where D() is
i 0 i 0

• Distance Transform (for detected binary image features, e.g. edges)


Distance Transform can
be visualized as a gray-
scale image

binary image features Distance Transform D ( x, y )


(edges)

• Generalized Distance Transform (directly for image gradients)


5-61
Distance Transform (see pp. 20-21 of the textbook)

Example (zeros mark image feature pixels; each entry is the distance to the nearest feature):

    1 0 1 2 3 4 3 2
    1 0 1 2 3 3 2 1
    1 0 1 2 3 2 1 0
    1 0 0 1 2 1 0 1
    2 1 1 2 1 0 1 2
    3 2 2 2 1 0 1 2
    4 3 3 2 1 0 1 2
    5 4 4 3 2 1 0 1

The Distance Transform is a function D(·) that assigns to each image pixel p a non-negative
number D(p) corresponding to the distance from p to the nearest feature in the image I.
Distance Transform can be very efficiently computed

• The forward-backward pass algorithm computes shortest paths in O(n) on a grid graph
  with regular 4-neighborhood connectivity and homogeneous edge weights of 1

• Alternatively, Dijkstra's algorithm can also compute a distance map (a trivial
  generalization for multiple sources), but it would take O(n·log n)
  – Dijkstra is slower, but it is a more general method applicable to arbitrary weighted graphs
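The two-pass (forward-backward) computation can be sketched for the Manhattan metric as below; the tiny feature mask is an illustrative input.

```python
import numpy as np

# Two-pass Manhattan distance transform on a 4-connected grid with unit weights.
# Feature pixels start at 0, all others at "infinity".
def distance_transform(features):
    H, W = features.shape
    INF = 10 ** 9
    D = np.where(features, 0, INF).astype(np.int64)
    for y in range(H):                      # forward pass: top-left to bottom-right
        for x in range(W):
            if y > 0:
                D[y, x] = min(D[y, x], D[y - 1, x] + 1)
            if x > 0:
                D[y, x] = min(D[y, x], D[y, x - 1] + 1)
    for y in range(H - 1, -1, -1):          # backward pass: bottom-right to top-left
        for x in range(W - 1, -1, -1):
            if y < H - 1:
                D[y, x] = min(D[y, x], D[y + 1, x] + 1)
            if x < W - 1:
                D[y, x] = min(D[y, x], D[y, x + 1] + 1)
    return D

f = np.zeros((3, 5), dtype=bool)
f[1, 2] = True                               # a single feature pixel
print(distance_transform(f))
```

Each pixel is visited exactly twice regardless of the input, which is why the worst case equals the best case.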

Distance Transform: an alternative way to think about it

• Assuming

    F(p) = 0  if pixel p is an image feature,   F(p) = ∞  otherwise,

then

    D(p) = min_q { || p − q || + F(q) } = min_{q : F(q)=0} || p − q ||

is the standard Distance Transform (of the image features).

(figure: F(p) is 0 at the locations of binary image features and ∞ elsewhere;
 D(p) is the resulting lower envelope of distance cones)
Distance Transform vs. Generalized Distance Transform

• For a general F(p),

    D(p) = min_q { α · || p − q || + F(q) }

is called the Generalized Distance Transform of F(p).

(figure: D(p) may prefer the "strength" of F(p) to proximity)

F(p) may represent non-binary image features (e.g. the image intensity gradient).
Generalized Distance Transforms (see Felzenszwalb and Huttenlocher, IJCV 2005)

• The same "forward-backward" algorithm can be applied to any initial array
  – binary (0 / ∞) initial values are non-essential
• If the initial array contains the values of a function F(x,y), then the output of the
  "forward-backward" algorithm is a Generalized Distance Transform:

    D(p) = min_{q ∈ I} ( α · || p − q || + F(q) )

• The "scope of attraction" of image gradients can be extended via the external energy
  E_ex = Σ_{i=0}^{n−1} D( v_i ) based on a generalized distance transform of
  F(x,y) = g( | ∇I(x,y) | )
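The generalized version is the same two passes, just initialized with F and propagating cost α per grid step (Manhattan metric); the 1D example array and the value of α are illustrative.

```python
import numpy as np

# Generalized Distance Transform via forward-backward passes:
#   D(p) = min_q ( alpha * ||p - q||_1 + F(q) ), initialized with F itself.
def generalized_distance_transform(F, alpha=1.0):
    D = F.astype(float).copy()
    H, W = D.shape
    for y in range(H):                      # forward pass
        for x in range(W):
            if y > 0:
                D[y, x] = min(D[y, x], D[y - 1, x] + alpha)
            if x > 0:
                D[y, x] = min(D[y, x], D[y, x - 1] + alpha)
    for y in range(H - 1, -1, -1):          # backward pass
        for x in range(W - 1, -1, -1):
            if y < H - 1:
                D[y, x] = min(D[y, x], D[y + 1, x] + alpha)
            if x < W - 1:
                D[y, x] = min(D[y, x], D[y, x + 1] + alpha)
    return D

F = np.full((1, 5), 10.0)
F[0, 0] = 0.0                               # one strong feature with cost 0
print(generalized_distance_transform(F, alpha=2.0))  # -> [[0. 2. 4. 6. 8.]]
```

With larger α the transform prefers proximity; with smaller α it prefers "strong" (low-F) features, matching the trade-off described above.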
Metric properties of discrete Distance Transforms

Forward/backward mask pairs and the metric they induce:

• weights 1 on the 4-neighbors:

      forward:      backward:
         1             0  1
      1  0             1

  gives the Manhattan (L1) metric (equidistant points form diamonds)

• weights 1 on the 4-neighbors and 1.4 on the diagonals:

      forward:         backward:
      1.4  1  1.4         0   1
       1   0           1.4  1  1.4

  gives a better approximation of the Euclidean (L2) metric

In fact, an "exact" Euclidean Distance Transform can be computed fairly efficiently
(in linear or near-linear time) without bigger masks:
1) www.cs.cornell.edu/~dph/matchalgs/
2) Fast Marching Method (Tsitsiklis; Sethian)
HW assignment 2

• DP Snakes
  – Use the elastic snake model E = E_int + λ·E_ext (the value of λ is important)
  – Compare E_int = Σ_{i=0}^{n−1} L_i^2  vs.  E_int = Σ_{i=0}^{n−1} | L_i − L̂ |^2
  – Compare E_int = Σ_{i=0}^{n−1} L_i^2  vs.  E_int = Σ_{i=0}^{n−1} | L_i |
  – Compare E_ext = Σ_{i=0}^{n−1} F( v_i )  vs.  E_ext = Σ_{i=0}^{n−1} D( v_i ),  where D is a
    generalized distance transform of F = g( | ∇I | ) such that
    D(p) = min_q { α · || p − q || + F(q) }  (the value of α is important)
  – Use the Viterbi algorithm for optimization
  – Incorporate edge alignment
  – Use a 3×3 search box for each control point