


A Dissertation

Presented to

the Faculty of the Department of Computer Science

University of Houston

In Partial Fulfillment

of the Requirements for the Degree

Doctor of Philosophy


Alberto Santamaría-Pang

November 2007


Alberto Santamaría-Pang


Ioannis A. Kakadiaris, Ph.D., Chairman

Dept. of Computer Science and
Dept. of Electrical and Computer Engineering

Costa M. Colbert, Ph.D., M.D.

Dept. of Biology and Biochemistry

Christoph Eick, Ph.D.

Dept. of Computer Science

Yuriy Fofanov, Ph.D.

Dept. of Computer Science and
Dept. of Biology and Biochemistry

Peter Saggau, Ph.D.

Dept. of Neuroscience,
Baylor College of Medicine

George Zouridakis, Ph.D.

Dept. of Computer Science

Dean, College of Natural Sciences and Mathematics



An Abstract of a Dissertation
Presented to
the Faculty of the Department of Computer Science
University of Houston

In Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy

Alberto Santamaría-Pang
November 2007

A central goal of modern neuroscience is to elucidate the computational principles
and cellular mechanisms that underlie brain function, in both normal and diseased
states. Notably, neuronal morphologies are broadly affected by age, genetic diseases
such as Down’s Syndrome, and degenerative diseases such as Alzheimer’s disease. A
major obstacle to this research is the lack of automated methods for reconstruction of
morphology to produce libraries of neurons with quantitative measurements suitable
for simulation.
This dissertation presents a novel framework for automatic three dimensional
morphological reconstruction of nerve cells from optical images. More specifically,
we propose: i) a new algorithm for 3D noise removal in optical images; ii) a novel
method for detection of volumetric irregular tubular structures; and iii) a robust
algorithm for morphological reconstruction. Our results are comparable with those
obtained by human experts and outperform state-of-the-art computer algorithms.
Our novel methodology opens the way for the creation of neuron libraries and for
guiding online functional imaging experiments in live neurons.


1 Introduction
1.1 Objectives
1.2 Challenges
1.3 Contributions
1.4 Outline

2 Literature Review
2.1 Existing Morphological Reconstruction Methods
2.1.1 Semi-Automatic Methods
2.1.2 Computer-Aided Manual Methods
2.2 Previous Work in Denoising
2.3 Previous Work in Segmentation Methods of Tubular Objects
2.3.1 Deformable Models Methods
2.3.2 Tubular Enhancing Filtering
2.3.3 Medial Axis Extraction
2.3.4 Hybrid Methods

3 Methods
3.1 Experimental Data
3.2 Approach for Automatic Cell Reconstruction
3.3 Deconvolution
3.4 Denoising
3.4.1 Construction of 3D Non-separable Parseval Frame
3.4.2 Frame-based Denoising
3.5 Registration
3.6 Dendrite Segmentation
3.6.1 Anisotropic Tubular Feature Extraction
3.6.2 Support Vector Machines
3.6.3 Tubular Shape Learning
3.7 Morphological Reconstruction
3.7.1 Level Set Formulation
3.7.2 Neuron Morphological Reconstruction
3.7.3 Soma-Pipette Segmentation
3.7.4 Isotropic 3D Front Propagation
3.7.5 Detection of Terminal Points
3.7.6 Anisotropic 3D Front Propagation
3.7.7 Centerline Extraction and Tree Reconstruction
3.7.8 Diameter Estimation

4 Results and Discussion
4.1 Results in Frames Shrinkage
4.1.1 Denoising Results in Synthetic Data
4.1.2 Denoising Results in Confocal and Multi-photon Microscopy Data
4.2 Dendrite Detection
4.2.1 Validation
4.2.2 Real Data
4.3 Morphological Reconstruction
4.3.1 Qualitative and Quantitative Analysis

5 Conclusion
5.1 Future Work
5.1.1 Denoising
5.1.2 Tubular Shape Learning
5.1.3 Morphological Reconstruction
5.2 Conclusion

Bibliography

List of Figures

1.1 A CA1 hippocampal neuron cell acquired with a confocal microscope.

2.1 Classification of segmentation methods for tubular structures.

3.1 Neuron morphology.
3.2 Neuron Morphological Reconstruction System.
3.3 Comparison of beads.
3.4 Deconvolution of the average bead.
3.5 Depiction of the directional derivatives.
3.6 Results of applying our denoising algorithm in confocal imaging of a neuron cell.
3.7 Maximum intensity projection of volume data from synthetic neuron n120.
3.8 Depiction of the local neighborhood associated with noise removal.
3.9 Registration of three volume stacks.
3.10 A typical dendrite segment.
3.11 Overview of our algorithm for dendrite detection.
3.12 Anisotropic structural features.
3.13 Synthetic volumetric data and isotropic structural features.
3.14 Labels used to train a synthetic regular tubular model.
3.15 Structural features.
3.16 Comparison of dendrite enhancement in different regions.
3.17 Comparison of dendrite enhancement in different cells.
3.18 Schematic of shape learning from two different models.
3.19 Level set embedding.
3.20 Schematic of propagation forces normal to the curve.
3.21 Schematic of the one-dimensional case of the Eikonal equation.
3.22 Soma and pipette segmentation.
3.23 Schematic of ending point detection.
3.24 Visualization of the 3D front propagation along the centerline of the tubular object.
3.25 Schematic depicting the general principle to construct a single connected tree component.
3.26 Parametrization of segments as generalized cylinders.
3.27 Centerline extraction and diameter estimation.

4.1 Synthetic phantom data.
4.2 Denoising results from different algorithms on synthetic noisy volumes.
4.3 Maximum intensity projections in the x−y and x−z planes of the volume of interest.
4.4 Comparison results of applying our denoising, anisotropic diffusion, median filtering, and the 3D Haar wavelet.
4.5 Performance evaluation of the length as a function of the detected largest component volume.
4.6 A confocal imaging volume with selected region of interest.
4.7 Results in a selected region of the confocal imaging volume.
4.8 Energy distribution other than the low-pass filter in each subband of the UH Lifted Spline Filterbank (UH-LSF).
4.9 Synthetic tubular model constructed from cubic splines.
4.10 Comparison of tubularity measures in a volumetric example with variation in diameter.
4.11 Comparative results in synthetic data.
4.12 Results in a typical stack for the CA1 pyramidal cell type.
4.13 Results of applying different methods to detect 3D tube-like objects.
4.14 Comparison of tubular measures in a dendrite segment.
4.15 Results for the spiny striatal neuron type.
4.16 Generalization of the synthetic tubularity measure.
4.17 Visual comparison of morphological reconstructions.
4.18 Comparison of the quality of reconstruction in different cells.
4.19 A variation of Sholl analysis as performance metric.
4.20 Selected subtree to perform quantitative analysis.
4.21 Comparison of performance metrics.
4.22 Comparison of performance metrics.
4.23 Comparison of performance metrics performed in the subtree.
4.24 Quantitative analysis of diameter estimation for Cell A.
4.25 Comparison of the minimum intensity projection and the morphological model.
4.26 Comparison of the volumetric data and the morphological model.
4.27 Visualization of the extracted centerline in the phantom data depicted in Fig. 4.10(a).
4.28 Visualization of the results of centerline extraction when applied to CTA data.

List of Tables

2.1 Morphological Reconstruction Methods
2.2 Denoising Methods
2.3 Segmentation of Tubular Objects – Deformable Models
2.4 Segmentation of Tubular Objects – Tubular Enhancing
2.5 Segmentation of Tubular Objects – Tracking-based Methods
2.6 Segmentation of Tubular Objects – Hybrid Methods

3.1 Lifted Spline Filterbank: Selected Frame Elements

4.1 Performance Evaluation on the noisy volume depicted in Fig. 4.1.
4.2 Performance Evaluation on the noisy volume depicted in Fig. 4.2.
4.3 Performance Evaluation – Time (unit: seconds).
4.4 Performance Evaluation
4.5 Performance Evaluation – Total Dendrite Length and Surface Area
4.6 Performance Evaluation – Diameter Statistics
4.7 Performance Evaluation – Length Statistics
4.8 Performance Evaluation – Path from Soma
4.9 Performance Evaluation – Subtree

Chapter 1

Introduction


Dendritic morphology is a key determinant of neuronal computation (Mainen and Sejnowski [139], Hama et al. [81], Samsonovich et al. [184], Yuste et al. [266], Rall et al. [175]). Long thin dendrites provide both a maximal surface-to-volume ratio for synaptic contacts to be made and electrical isolation of synaptic and voltage-gated potentials to allow dendritic compartments to function independently (Mel [144], Pongracz et al. [170], Tada et al. [208], Korkotian et al. [119], Benavides et al. [24]). Because neuronal integration involves complex interactions between large numbers of nonlinear voltage-gated processes, it is not straightforward to infer the relationships between the structure and specific functions of neuronal dendrites. Thus, model building and computer simulation are essential to produce viable theories of neuronal computation (Samsonovich et al. [40], Hoffman et al. [92], Poirazi et al. [169], Ascoli and Atkeson [13], Ascoli et al. [11], Mizrahi et al. [151]). Fortunately, it has become relatively easy to perform complex simulations incorporating both neuronal morphologies and ion channel kinetics due to the availability of powerful simulation tools (Hines and Carnevale [91], Beaman et al. [21], Cuntz et al. [49]). However, for a variety of reasons reflecting both the complexity and individual variability of neurons and the relatively weak constraints of the models, one cannot simply present an input and produce an output that necessarily reflects biological reality (Holmes et al. [93]).

To better understand the role of dendritic arbors in neuronal computation, the production of databases of neuronal morphologies that can be used for computer simulation is an important goal (Migliore et al. [150], Ascoli [12], Schmidt et al. [191], Wiseman et al. [251]). Available databases have been limited in scope because of the relatively large effort involved in the largely manual computer-aided methods currently in use (Neurolucida, MicroBrightfield). To address this need, a number of semi-automated and automated systems are under development (Evers et al. [64], Wearne et al. [245], Brown et al. [34]); by reducing the need for the investigator to make individual measurements, throughput, consistency, and accuracy should in principle improve. Our own interest in developing a system for automated reconstructions comes from the need to acquire electrophysiological data and morphological reconstructions from the same neurons. Advances in optical imaging have improved our ability to monitor physiological processes (e.g., Ca2+ concentration) from multiple sites and from fine structures (Hoogland et al. [96]). Despite this capability, however, there is always a limit on the overall bandwidth. That is, the acquisition methods allow either high temporal or high spatial resolution, but not both simultaneously. Thus, there is a need to determine the optimal sites for functional imaging. If the morphology of the neuron is known at the outset, quantitative criteria can be used to decide where imaging has a high likelihood of yielding useful (i.e., constraining) information. Even in cases where such precision is not necessary, producing neuronal libraries of paired morphological and functional data nevertheless remains a goal.

An online experiment consists of an initial structural imaging phase, where confocal or multiphoton image stacks are acquired. During an intermediate phase, a morphological reconstruction is produced and used to choose functional imaging sites. Functional imaging comprises the final phase. Such a scenario places a number of requirements on the imaging and computational approaches. Here we describe these requirements and our progress on an imaging and computational pipeline.

1.1 Objectives

The goal of this dissertation is to develop the methodology and the computational framework to allow automatic reconstruction of neuron cells from confocal and multi-photon microscopes towards on-line functional imaging and the creation of libraries of neuron cells. These objectives include:

1. Developing a framework for denoising three dimensional optical images.

2. Developing a segmentation algorithm for irregular and regular tubular structures.

3. Developing an algorithm to automatically reconstruct the cell morphology in terms of cylindrical lengths and diameters.

1.2 Challenges

There are two major challenges for automatic morphological reconstruction of neuron cells. The first challenge involves the structural imaging of the biological specimen (typically alive) and can be summarized as:

1. The point spread function imposed by the optics of the microscope.

2. Important structures near the limit of imaging resolution, approximately 0.2 µm (spines and small dendrites).

3. Low signal-to-noise ratio due to different sources of noise [164], which generally does not follow a Gaussian distribution (Fig. 1.1(b)).

4. Uneven distribution of fluorescent dye in the cell (Fig. 1.1(a)).

The second major challenge refers to rapid and accurate shape modeling of the cell as a single tree in terms of cylindrical lengths and diameters; these two parameters (the lengths and diameters) are of decisive importance to constrain a realistic computational model. The difficulties in expressing a realistic morphological model are:

1. Removal of external objects that do not belong to the cell structure (e.g., the pipette) but are attached to the cell body (soma).

2. Accurate estimation of the cylindrical diameters, especially in thin structures.

3. Memory requirements: a single cell may require allocating several gigabytes of RAM (typically two gigabytes).

4. Rapid reconstruction during the limited time frame for which the cell is alive.

Under these considerations, our reconstruction algorithm must overcome these challenges and yet provide an accurate reconstruction for further morphological analysis.

1.3 Contributions

Our specific contributions are:

1. We have developed a framework for three dimensional frame-based image denoising. Unlike the majority of image denoising methods, we developed a multidirectional filter bank which preserves edge information and is not as computationally expensive as algorithms with similar performance. The advantage over the state-of-the-art methods is that structural information of objects of interest is preserved (specifically the topology of three dimensional tubular structures) while removing different types of noise (Gaussian and Poisson). The proposed denoising method is not constrained only to three dimensions,



Figure 1.1: A CA1 hippocampal neuron cell acquired with a confocal microscope.
(a) Volume rendering of the original data, (b) a detail depicting the variability in
morphology, and (c) an example of the typical image artifacts created due to spilling
of the fluorescent dye in the volume of interest.

nor to a specific imaging modality. Its application is straightforward to different medical imaging modalities without special assumptions.

2. We have developed a novel framework for learning and predicting irregular tubular structures. Classical methods for enhancing tubular models assume: i) a smooth cylindrical shape (as in coronary arteries); and ii) a prior shape (cylindrical or elliptical). Diverging from these classical approaches, we have constructed a framework for intelligent shape learning and prediction in a two-step fashion. In the first step, a model is trained to learn complex tubular shapes from a tubular example. The shape descriptors are the eigenvalues of the Hessian matrix, which is constructed from second-order derivatives. Such shape descriptors provide a structural-feature space. In the second step, the trained model is used for predicting unseen tubular shapes with considerable shape variations. Support vector machines are used to create an intelligent tubular shape model. Under this formulation, we generalize the classical approaches for detecting regular tubular models to detecting both regular and irregular models.

3. We have developed a robust morphological reconstruction of neuron cells for generalized tubular models. The core of our framework is the representation of tubular models as probability maps; we then evolve a three dimensional front with maximum curvature in the center of the tubular object. Front evolution is posed in a level set framework and is performed by solving the Eikonal equation.
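To illustrate the idea behind this third contribution, the sketch below approximates the arrival time T of a front satisfying the Eikonal equation |∇T| = 1/F on a 2D grid using a Dijkstra-style update, with the speed F playing the role of a tubularity probability map (high probability means fast propagation, so arrival times grow slowest along the centerline). This is an illustrative toy, not our actual anisotropic 3D front-propagation scheme, and all names are ours:

```python
import heapq
import numpy as np

def travel_time(speed, seed):
    """Dijkstra-style approximation of the Eikonal solution |grad T| = 1/F
    on a 2D grid: the front moves fastest where `speed` (e.g. a tubularity
    probability map) is high."""
    T = np.full(speed.shape, np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                nt = t + 1.0 / max(speed[ni, nj], 1e-6)  # step cost = 1/F
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return T
```

With uniform unit speed, the arrival time reduces to the grid (Manhattan) distance from the seed; a true fast-marching scheme would instead use upwind finite differences for sub-grid accuracy.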

1.4 Outline

In Chapter 2, we review the state-of-the-art methods, including: i) existing methods for morphological reconstruction of neurons; ii) noise removal methods in optical imaging; and iii) segmentation of tubular structures. In Chapter 3, we present our methodology to automatically generate a three dimensional cylindrical model of a neuron cell. Results and discussion are presented in Chapter 4, and conclusions and future work are presented in Chapter 5.
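Before moving on, the structural-feature space mentioned in Section 1.3 (eigenvalues of the Hessian of second-order derivatives) can be made concrete with a small sketch. This is an illustrative single-scale computation with names of our own choosing; the feature extraction developed in Chapter 3 is more elaborate (anisotropic, with learned classification on top):

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel eigenvalues of the 3x3 Hessian of second derivatives,
    sorted by increasing magnitude. For a bright tube, the eigenvalue of
    smallest magnitude corresponds to the tube axis and the two large
    negative eigenvalues to the cross-section."""
    grads = np.gradient(volume.astype(float))
    H = np.empty(volume.shape + (3, 3))
    for a in range(3):
        second = np.gradient(grads[a])      # derivatives of d(volume)/da
        for b in range(3):
            H[..., a, b] = second[b]
    eig = np.linalg.eigvalsh(H)             # eigenvalues, ascending
    order = np.argsort(np.abs(eig), axis=-1)
    return np.take_along_axis(eig, order, axis=-1)
```

For an idealized tube aligned with the z axis (intensity falling off quadratically in x and y), interior voxels yield eigenvalues close to (0, −c, −c), which is exactly the signature classical tubularity filters look for.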

Chapter 2

Literature Review

In this chapter, we review the state-of-the-art methods for: i) neuron morphological reconstruction; ii) image denoising; and iii) segmentation of tubular structures. We compare the most relevant methods according to key points demanded by our specific problem (neuron morphological reconstruction).

2.1 Existing Morphological Reconstruction Methods


2.1.1 Semi-Automatic Methods

The goal of algorithms for morphological reconstruction of neurons can be stated as the representation of the neuron cell as a single connected tree component in terms of cylindrical lengths and diameters. These two parameters (the lengths and diameters) are crucial to constrain a computational model to guide an on-line functional imaging experiment.

Morphological reconstruction algorithms for neuron cells can be categorized as: i) skeletonization-based methods [183, 243, 116, 115, 57, 192, 64, 220, 225], where morphology is reconstructed from the centerline; and ii) cylindrical extraction-based methods [90, 7, 179, 246, 245, 32], where morphology is reconstructed directly from a given cylindrical model.

Skeletonization-Based Methods

An algorithm for dendrite centerline extraction and spine detection only at selected dendritic segments was proposed by Koh et al. [116, 115]. The dendrite medial axis is extracted by applying the algorithm proposed by Lee et al. [127] only at dendrite segments. To extract the dendrite medial axis, a denoising step is required by applying a deblurring algorithm. Then, the medial axis is extracted from the segmented neuron cell. Once the medial axis is extracted, the algorithm performs spine detection. Results are presented for a limited number of dendritic segments rather than for the entire cell. One of the limitations of this method is that the deblurring algorithm, if not used properly, can remove small structures (especially spines, which are about 1.0 µm wide), making centerline extraction and spine detection not straightforward.

A neuron segmentation method is proposed by Dima et al. [57]. The proposed method is based on the 3D discrete wavelet transform, which is used for denoising, segmentation, and dendrite centerline extraction. Denoising was performed by a variation of the orthogonal wavelet shrinkage approach described by Donoho et al. [59]. In further steps, the method depends highly on the detection of the edges (gradient) across different scales. However, the 3D discrete wavelet transform imposes decimation (downsampling). Since the skeletonization algorithm relies on the gradient information, the final skeleton reconstruction may have significant gaps in regions where the gradient is not detected properly (specifically in regions with dendrites of small diameter and different variations of intensity). Results are depicted for regions of neurons with a relatively small signal-to-noise ratio, showing the medial axis reconstruction as a graph representation.
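The wavelet-shrinkage idea behind such denoising can be sketched in one dimension: transform the signal, shrink the detail coefficients toward zero (Donoho-style soft thresholding), and invert. The following toy one-level Haar analogue (our own illustrative code, not Dima et al.'s 3D transform) shows the mechanics:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Donoho-style soft shrinkage: move coefficients toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_denoise(signal, t):
    """One-level 1D Haar shrinkage: keep the pairwise averages (low-pass),
    shrink the pairwise differences (detail), then reconstruct."""
    x = np.asarray(signal, dtype=float)        # length must be even
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    det = soft_threshold(det, t)               # noise lives in small details
    out = np.empty_like(x)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out
```

Small detail coefficients (mostly noise) are zeroed while large ones (edges) survive with a bias of t; the frame-based shrinkage of Chapter 3 applies the same principle with a redundant, multidirectional 3D transform instead of a decimated orthogonal one.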

Schmitt et al. [192, 64] presented a semi-automated method for neuron morphology reconstruction where branching points are manually selected to estimate centerlines and diameters of cylindrical segments. Centerline and diameter estimation are based on a given active contour model (Kass et al. [112]), enforcing centerline smoothness.

In order to maintain the medial axis close to the brightest voxels and possibly close to the centerline, a medialness measure is defined with the following assumptions: i) higher intensity values are in the center of the dendrite circular cross-section; and ii) the magnitude of the gradient vector field estimated from the intensity image reaches a maximum in the center of the dendrite cross-section. In general these assumptions may not hold: dye concentration can be constant across dendrite cross-sections, so the centerline cannot be found as a function of the maximum intensity value; in reality dendritic cross-sections do not exhibit a regular circular cross-section (due to the effects of the point spread function imposed by the microscope); and since the medialness measure depends on the magnitude of the gradient, it may not be adequate to detect the centerline, especially in one- or two-voxel-wide structures.

Cylindrical Extraction-Based Methods

A generalized cylindrical-based model for morphological reconstruction was presented by Al-Kofahi et al. [7], where a generalized cylinder grows dynamically guided by a set of multidirectional (orthogonal) filters.

Since the tracing algorithm is highly dependent on the gradient, centerline tracing has severe limitations in small structures (two or three voxels wide). The proposed approach requires initializing seed points close to the real dendrite centerlines. To accomplish this, a medialness measure is defined by projecting multidirectional rays from each point. Medialness values above a user-defined threshold are considered center points. A postprocessing step is required to eliminate points which do not actually belong to the dendritic centerline. This method does not involve any technique to enhance the quality of the image (i.e., deconvolution or noise removal); rather, it assumes that the input image is almost noise free. Results are reported on images with dimensions of 512 × 480 × 301 in the x, y, and z axes with a depth of 8 bits/pixel (70 MB). Morphological reconstruction was performed on a single stack.
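The ray-projection idea can be illustrated in 2D: for each candidate point, rays are cast in several directions and the distance to the background is recorded, so that interior (center-like) points score higher than points near the boundary. This is an illustrative toy with hypothetical names, not Al-Kofahi et al.'s actual implementation:

```python
import numpy as np

def medialness(mask, point, n_rays=8, max_len=50):
    """Toy ray-casting medialness on a binary 2D mask: the shortest
    distance from `point` to the background over n_rays directions."""
    angles = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    best = max_len
    for a in angles:
        dx, dy = np.cos(a), np.sin(a)
        for r in range(1, max_len):
            x = int(round(point[0] + r * dx))
            y = int(round(point[1] + r * dy))
            if (x < 0 or y < 0 or x >= mask.shape[0] or y >= mask.shape[1]
                    or mask[x, y] == 0):
                best = min(best, r)   # ray left the object after r steps
                break
    return best
```

Thresholding this score keeps only points deep inside the object, which is analogous to keeping medialness values above a user-defined threshold as candidate center points.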

Rodriguez et al. [179] presented an approach for reconstructing dendrite surfaces based on the following steps: i) performing blind deconvolution [102, 94], i.e., a theoretical point spread function is used to enhance the quality of the 3D image; ii) performing registration along the x−y and y−z projections; and iii) performing surface reconstruction by applying a standard marching cubes algorithm (Lorensen et al. [133]). The major limitations of the proposed system are: i) blind deconvolution does not necessarily reflect the nonlinear transformation of the optical microscope (Sarder et al. [188]); and ii) the marching cubes segmentation algorithm is not suitable for dendritic structures due to the high variation in intensity and shape. Results are depicted for multiphoton images.

Wearne et al. [246, 245] expanded the work of Rodriguez et al. [179] by incorporating a ray-burst or ray-projection technique for 3D neuronal shape analysis. This method improves the segmentation by performing a ray-projection technique along the dendrite centerline at multiple scales. Segmentation then depends highly on the width of the dendrites, making it a challenging task, especially when the data contain high variants of Poisson noise near the object of interest (Dima et al. [57] and Sarder et al. [188]).

Broser et al. [32] presented an algorithm for neuron skeletonization. The proposed method is based on nonlinear anisotropic diffusion filtering [166] of three-dimensional images. Results are presented in a single volume.
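The core of such nonlinear diffusion can be sketched in 2D in the spirit of Perona-Malik filtering: flat regions are smoothed while a conductance function g decays across strong edges, preserving thin bright structures. Parameters and boundary handling are simplified here (periodic borders via `np.roll`), and the function is our own illustration rather than Broser et al.'s filter:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, dt=0.2):
    """Explicit 2D nonlinear diffusion: each step adds fluxes to the four
    neighbours, weighted by an edge-stopping conductance g."""
    u = np.asarray(img, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance: ~0 across edges
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # differences to 4 neighbours
        ds = np.roll(u, 1, axis=0) - u        # (periodic borders for brevity)
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the neighbour fluxes come in equal and opposite pairs, the mean intensity is conserved; with kappa small relative to the edge contrast, a bright tube on a dark background is smoothed along its length but not blurred across its boundary.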

AutoNeuron (AN) is a commercial software package included with Neurolucida. This package aims to perform morphological reconstruction of neurons with minimum human intervention. Regarding the tracing method, first a number of "seed" points are generated; then the centerline is extracted by connecting the seed points. We observe two major limitations of this method: i) the morphological reconstruction is not guaranteed to be a single connected tree (usually the user has to reconnect individual subtrees); and ii) since centerline extraction simply connects seed points, the accuracy of the centerline is compromised by the number of seed points; therefore the extracted centerline may not represent a "realistic" dendrite path.

Both approaches (cylindrical-based and skeleton-based) have their advantages and limitations [77]. Cylindrical approaches assume a tubular-like average shape, although dendrites differ significantly from conventional cylindrical models. Skeleton-based approaches assume the object of interest has already been segmented, but in reality segmentation is one of the most difficult steps towards morphological reconstruction.

2.1.2 Computer-Aided Manual Methods

A number of commercial systems for manual reconstruction of neuron morphology are currently available in the market. One of the most popular is Neurolucida™, which is marketed by MicroBrightField Inc. [149] and was originally proposed by Glaser et al. [75] in the early 1990s. Neurolucida™ is a software system designed to operate in conjunction with the optical microscope. Three dimensional point-wise selection and measurement is performed by a joystick (that determines lateral movement) and an independent motorized device (called "Z-control") that performs displacements along the z axis.

In this case, the performance of the morphological reconstructions highly depends on: i) the precise matching of the object of interest and the displayed image (the entire dendritic tree cannot be observed since the dendrites go out of focus); and ii) the calibration of the lens of the microscope.

With the previous considerations, the overall dendritic tree can be manually reconstructed. However, when precise and accurate morphological measurements are required, this system may not be suitable due to the accuracy of the mechanical devices involved in the process (in particular, measurements in the z axis can be rather difficult as the user is required to adjust the Z-control along the centerline of the dendrite tree). Therefore, morphological cell reconstructions are dependent on human expertise, making the reconstruction process very time consuming (about six to eight hours to reconstruct a single neuron cell).

Neurotracer 3D™ [72] is another commercial system for "off-line" neuron morphology reconstruction from confocal or multiphoton images. Neuron reconstruction is based on a single image stack. The major limitation of this method is the poor performance in image segmentation and the high sensitivity of the software to the presence of noise and image artifacts in the dendritic tree.

The complexity of these methods makes morphological reconstruction of neurons an extremely time-consuming task for the user, and the result is highly subjective since it depends on the knowledge and experience of the user.

Table 2.1 presents a comparative analysis of the previous methods. The properties compared are: i) image modality: Confocal (C) or Multiphoton (M); ii) use of A priori Knowledge (AK); iii) preprocessing steps such as registration of Multiple Stacks (MS), Deconvolution (DCV), and Denoising (DNS); and iv) morphological reconstruction in terms of a Connected Tree Representation (CTR).

Table 2.1: Morphological Reconstruction Methods

Algorithm                        Year   Properties (Modality, Dimension, Preprocessing, CTR)
Rusakov et al. [183]             1995   √ √
Watzel et al. [243, 244]         1996   √ √
Koh et al. [116, 115]            2002   √ √
Dima et al. [57]                 2002   √ √ √
Evers et al. [64]                2005
Uehara et al. [220]              2004   √ √ √
Urban et al. [225]               2006
AutoNeuron [149]                 2007   √ √
Santamaría-Pang et al. [187]     2007   √ √ √ √ √ √ √
Cylindrical extraction-based:
Herzog et al. [90]               1997   √ √
Al-Kofahi et al. [7]             2002   √ √
Wang et al. [241]                2003   √ √ √
Rodriguez et al. [179]           2003   √ √ √ √
Weaver et al. [246]              2004   √ √ √ √
Broser et al. [32]               2004   √ √
Wearne et al. [245]              2005   √ √ √ √ √

Modality: Confocal (C), Multiphoton (M). AK: A priori Knowledge.
Preprocessing: Multiple Stacks (MS), Deconvolution (DCV), Denoising (DNS).
CTR: Connected Tree Representation.

2.2 Previous Work in Denoising

Photon-limited imaging systems such as confocal and multiphoton microscopes obtain very high-resolution optical sections through relatively thick specimens (e.g., live brain tissue). By acquiring individual 2D sections at different depths through the whole specimen, imaging of the entire volume can be achieved.

However, noise is introduced during acquisition of the volume data. Several sources of noise can be identified in photon-limited imaging data (Pawley [164]); typically, they include: i) thermal noise induced by the photomultiplier; ii) photon shot noise, which accounts for the part of the noise that varies locally with intensity and is best described by a Poisson noise model; iii) biological background or autofluorescence noise, which can be described by an additive white Gaussian noise model; and iv) non-uniform fluorophore noise. There are also other imaging artifacts due to possible mismatches of refractive indices and to tissue scattering.

Denoising of photon-limited imaging data has been an active research field over the last several decades, and a number of effective methods have been proposed. Most of the early methods employed statistical techniques in the spatial domain because of the special properties of Poisson noise [234, 178]. The Maximum Likelihood Estimator (MLE) was the most popular and was routinely applied in scientific and clinical practice [171, 89, 230, 76, 67, 51]. Later, with the fast development of wavelet theory and its great success in signal estimation, wavelet-based denoising methods for photon-limited imaging data were developed [60, 117, 248, 27, 213]. These methods make full use of the excellent ability of the wavelet transform to sparsely represent the underlying intensity function.

Dima et al. [56, 57] proposed an effective wavelet denoising method for 3D confocal microscopy imaging data. The technique rests on the "à trous" pyramidal decomposition scheme and includes a multiresolution validation scheme to detect edges and to suppress the responses due to noise and variations of contrast. Edge detection is performed as:

E_s(x) = \begin{cases} 1, & \text{if } M_s(x) > \max\left(M_s(x - \gamma_s(x)),\; M_s(x + \gamma_s(x))\right), \\ 0, & \text{otherwise,} \end{cases} \qquad (2.1)

where γ_s is the gradient direction and M_s its magnitude at scale s. The downside of this method is that the structure of fine neurons can be highly corrupted.
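The validation of Eq. 2.1 is essentially a non-maximum test of the gradient magnitude along the gradient direction. The following is a minimal 1-D sketch of that test (our illustration, not the authors' implementation; the step γ_s is fixed to one sample):

```python
import numpy as np

def edge_validation_1d(m):
    """Mark sample i as an edge when its gradient magnitude m[i] exceeds
    both neighbors along the gradient direction (Eq. 2.1, unit step)."""
    e = np.zeros(len(m), dtype=int)
    for i in range(1, len(m) - 1):
        if m[i] > max(m[i - 1], m[i + 1]):
            e[i] = 1
    return e

mag = np.array([0.1, 0.5, 0.9, 0.4, 0.2])
print(edge_validation_1d(mag))  # the only local maximum is at index 2
```

In 2D or 3D the neighbors are sampled at x ± γ_s(x) along the gradient direction, typically with interpolation, and the test is repeated at every scale s of the pyramid.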

Willett et al. [248] employed translation-invariant Haar denoising for Poisson data by deriving the relationship between maximum penalized likelihood tree-pruning decisions and the undecimated wavelet transform coefficients. However, this method is limited to 2D denoising.

Recently, Wegner et al. [237] developed a denoising method which operates in the spatio-temporal domain, i.e., on 2D images over time. Similarly to Dima et al. [57], the technique is based on the discrete wavelet transform, where a "hard threshold operator" is applied to the wavelet coefficients depending on the resolution scale. Although this method is applied to optical imaging, the authors do not present results on neuron cells.
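The hard-threshold operator at the core of such wavelet schemes can be sketched with a single-level Haar transform (a deliberately minimal stand-in for the full multiscale discrete wavelet transform used in the cited work):

```python
import numpy as np

def haar_hard_threshold(x, t):
    """One-level Haar transform of an even-length 1-D signal, hard
    thresholding of the detail coefficients, exact inverse transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    detail[np.abs(detail) < t] = 0.0          # hard threshold operator
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out
```

With t = 0 the transform is inverted exactly; a larger t suppresses small detail coefficients, which is where most of the noise energy lives.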

Denoising based on non-linear anisotropic filtering was proposed by Broser et al. [33]. Anisotropic diffusion was formulated by Perona et al. [166] in terms of a Partial Differential Equation (PDE) as:

I_t = \operatorname{div}\left(g(|\nabla I|)\, \nabla I\right), \qquad I(x, 0) = I_0(x), \qquad (2.2)

where the function g, which controls the sensitivity to edges, can be defined as g(|\nabla I|) = e^{-(|\nabla I|/k)^2} or g(|\nabla I|) = \frac{1}{1 + (|\nabla I|/k)^2}. In the application to optical imaging, noise is averaged along the local axis of the neuron's tube-like dendrites in order to maintain morphological structure.
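A minimal explicit-scheme sketch of Eq. 2.2 (plain grid-based Perona-Malik with the exponential edge-stopping function; the variant of Broser et al. additionally orients the diffusion along the dendrite axis, which is not shown here):

```python
import numpy as np

def perona_malik(img, n_iter=20, k=1.0, dt=0.2):
    """Explicit scheme for I_t = div(g(|grad I|) grad I) (Eq. 2.2) with
    g(s) = exp(-(s/k)^2); borders are periodic (np.roll) for brevity."""
    g = lambda d: np.exp(-(np.abs(d) / k) ** 2)
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # flux-weighted update; dt <= 0.25 keeps the scheme stable
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because g falls off with the local gradient, smoothing is strong inside homogeneous regions and nearly suppressed across strong edges.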

A different approach is to estimate the point spread function of the confocal microscope and use it to perform denoising (Tiedemann et al. [236] and Ulfendahl et al. [221]). Tiedemann et al. [236] presented an adaptive estimation of the point spread function to "remove noise in the image". Although denoising and deconvolution serve different (but related) objectives, the proposed technique uses deconvolution as a denoising method. The advantage of this approach is that it tries to "calibrate" the optical apparatus of the microscope. However, the major disadvantage is that in practice it is very challenging to estimate such a point spread function. Limiting factors are: i) the point spread function changes with depth, so an adaptive scheme must account for these non-linear changes; and ii) the point spread function is greatly affected by the medium in which the specimen is immersed, so different media lead to significant changes in the point spread function. As pointed out by Monvel et al. [30], image restoration can be achieved by using denoising algorithms; in their case, wavelet algorithms were implemented and compared with deconvolution techniques, producing similar results.

Tomasi et al. [214] presented a denoising method based on the bilateral filter. The basic principle is to perform smoothing (thus producing homogeneous regions) while preserving the edges as much as possible. In general:

h(x) = k^{-1}(x) \int f(w)\, c(x, w)\, s(f(x), f(w))\, dw, \qquad (2.3)

where c and s are the spatial and range kernels, w ranges over a local neighborhood, and k is a normalization term computed as:

k(x) = \int c(x, w)\, s(f(x), f(w))\, dw, \qquad (2.4)

with s(f(x), f(w)) = e^{-\frac{(f(x) - f(w))^2}{2\sigma_s^2}}, where σ_s is a parameter. The major limitation of this method is that it is computationally very expensive.
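A brute-force 1-D sketch of Eqs. 2.3-2.4 (illustrative only) that also makes the computational cost visible, since every output sample requires a weighted sum over its whole neighborhood:

```python
import numpy as np

def bilateral_1d(f, sigma_c=2.0, sigma_s=0.1, radius=5):
    """Brute-force 1-D bilateral filter: Gaussian spatial kernel c,
    Gaussian range kernel s, normalized by k(x) as in Eq. 2.4."""
    f = np.asarray(f, dtype=float)
    out = np.empty_like(f)
    idx = np.arange(len(f))
    for x in range(len(f)):
        w = idx[max(0, x - radius): x + radius + 1]   # local neighborhood
        c = np.exp(-((w - x) ** 2) / (2 * sigma_c ** 2))
        s = np.exp(-((f[w] - f[x]) ** 2) / (2 * sigma_s ** 2))
        weights = c * s
        out[x] = np.sum(weights * f[w]) / np.sum(weights)
    return out
```

Samples on opposite sides of a strong step receive a near-zero range weight, which is why the edge survives while each flat side is smoothed.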

In general, separable wavelet systems fail to deal efficiently with edges in high dimensions, and several new systems have been developed [165, 38, 209]. In contrast with separable systems, non-separable wavelet systems [118, 186, 187, 109] allow one to analyze a function in a multidirectional fashion; hence, one can deal with straight and curved singularities more effectively. Anisotropic diffusion methods [249, 214] depend on edge information (weak edges tend to be lost) and tend to remove subtle details such as spines.

Despite the advances mentioned above, denoising of photon-limited imaging data of neurons is considered one of the most challenging problems due to the high complexity of the different noise sources. Furthermore, a typical 3D image of a neuron has a low ratio of structure-occupied voxels to total voxels. Special attention is therefore needed in the morphological reconstruction of neuron cells (where the structure of interest occupies less than 5% of the total volume), since small but important structures (dendrites) must be denoised without corrupting their structural information.

Table 2.2 presents a comparative analysis of the previous methods.

Table 2.2: Denoising Methods

Algorithm                        Year   Properties (Dimension: 1D-2D, 3D; AK; OI; Noise type: GN, PN, TS)
Perona et al. [166]              1990
Barash et al. [18]               2002
Broser et al. [33]               2004   √ √ √
McGraw et al. [142]              2005   √ √
Pantelic et al. [162]            2007   √ √
Orthogonal Wavelets:
Dima et al. [56]                 1999   √ √
Timmermann et al. [213]          1999   √ √ √
Sendur et al. [194]              2002
Donoho et al. [59]               2003   √ √
Willett et al. [248]             2004   √ √ √ √
Pennec et al. [165]              2005   √ √
Ville et al. [229]               2007   √ √
Chen et al. [43]                 2007   √ √
Non-Orthogonal Wavelets:
Ashino et al. [14]               2004   √ √
Shen et al. [196, 197]           2005   √ √
Konstantinidis et al. [118]      2005   √ √ √ √
Santamaría-Pang et al. [186]     2006   √ √ √ √
Restoration-based methods:
Kempen et al. [230]              1996   √ √ √
Vovk et al. [238]                2004   √ √
Bernad et al. [171]              2005   √ √
Lukac et al. [136]               2007   √ √ √

AK: A priori Knowledge, OI: Optical Imaging, GN: Gaussian Noise, PN: Poisson Noise, TS: Tubular Structure.

2.3 Previous Work in Segmentation Methods of Tubular Objects

To date, segmentation of tubular structures is one of the most active research areas, not only in the computer vision community [224] but also in the biomedical imaging community [108, 114, 39, 77, 202]. The importance of automatically detecting tubular objects comes not only from the pure computer vision point of view, but also from a diverse number of biomedical research areas [84, 6, 251, 191, 108, 68, 222, 207]. In this section, we describe the state-of-the-art methods related to segmentation of irregular and regular tubular structures, with special attention to those methods suitable for dendrite segmentation. In general, the majority of these methods can be categorized as:

1. Deformable models
   1.1 Parametric deformable models
   1.2 Geometric deformable models
2. Tubular-enhancing filtering
3. Medial axis extraction
   3.1 Skeletonization-based methods
   3.2 Tracking-based methods
4. Hybrid methods

Figure 2.1: Classification of segmentation methods for tubular structures.

Broadly, the different approaches for segmentation of regular and irregular tubular structures are as follows. Deformable model based methods [128, 143, 256, 124, 50, 98, 259, 228, 210, 261, 158, 105, 82] are in general based on geometric properties of structures which are dynamically deformed under the influence of external (image) and internal (shape) forces. Tubular-enhancing filtering methods [254, 255, 180, 124, 205, 204, 71, 189] are designed to capture the tubular morphology of 2D or 3D volumetric structures by enhancing the tubular topology of the given object; they regularly integrate strong shape priors⁴ (i.e., synthetic models of cylinders or ellipses). Medial axis extraction methods aim to extract the centerline or skeleton of the tubular object of interest. Some approaches mimic the actions of a robot in a given environment, extracting a path as the robot navigates [253, 9, 152, 110, 74, 73], while others express the tubular morphology in terms of a skeleton [217, 215, 153, 219, 83, 122, 86, 264, 106, 240, 88, 58, 113, 211, 52, 265, 28, 247, 69]. Hybrid methods [20, 37, 62, 267, 79, 103, 125, 101, 173, 263, 46, 163] combine different techniques to perform segmentation; under this category, volumetric tubular objects can be segmented according to specific properties of the object of interest [173, 86, 223]. A review of vessel extraction techniques can be found in Kirbas et al. [114].

We compare all these approaches in Tables 2.3, 2.4, 2.5, and 2.6. The comparison is based on the following key points: i) dimension: 2D or 3D; ii) type of tubular structure: regular or irregular; iii) use of a probabilistic framework, specifically whether a technique for shape learning is used; and iv) the complete representation of a tree structure in terms of cylindrical lengths and diameters.

⁴ Relevant shape variations such as curvature and irregular diameter variations are not taken into account.

2.3.1 Deformable Model Methods

Parametric Deformable Models

Parametric deformable models aim to express general shapes in terms of a parameter space, such that for a given shape, once the shape parameters are estimated, the shape is known. The family of parametric deformable models for segmentation of tubular structures can be categorized into: i) 2D images (two parameters are needed); ii) 3D images (three parameters); and iii) 3D plus time (four parameters).

Active contour models (Kass et al. [111]), also known as snake models, provided an initial formulation for segmentation of tubular objects posed as an energy minimization problem that integrates three energy terms: E_internal, E_external, and E_constrain. These energy terms depend on a parameterized contour C(υ), υ ∈ [0, 1], and the gradient ∇I of the intensity image. The formulation can be written as:

E(C(\upsilon)) = \underbrace{\alpha \int |C'(\upsilon)|^2\, d\upsilon}_{E_{\mathrm{internal}}} + \underbrace{\beta \int |C''(\upsilon)|^2\, d\upsilon}_{E_{\mathrm{external}}} - \underbrace{\lambda \int |\nabla I(C(\upsilon))|^2\, d\upsilon}_{E_{\mathrm{constrain}}}, \qquad (2.5)

where α, β and λ are real positive values.
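A discrete evaluation of Eq. 2.5 for a polygonal closed contour can be sketched as follows (our illustration; finite differences stand in for the derivatives of C):

```python
import numpy as np

def snake_energy(contour, grad_mag, alpha=0.1, beta=0.01, lam=1.0):
    """Discrete version of Eq. 2.5 for a closed contour given as an
    (n, 2) array of (row, col) points on a gradient-magnitude image."""
    c = np.asarray(contour, dtype=float)
    d1 = np.roll(c, -1, axis=0) - c                              # C'(v)
    d2 = np.roll(c, -1, axis=0) - 2 * c + np.roll(c, 1, axis=0)  # C''(v)
    e_int = alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)
    rows = np.clip(c[:, 0].astype(int), 0, grad_mag.shape[0] - 1)
    cols = np.clip(c[:, 1].astype(int), 0, grad_mag.shape[1] - 1)
    e_img = -lam * np.sum(grad_mag[rows, cols] ** 2)
    return e_int + e_img
```

Minimizing this energy (by gradient descent or dynamic programming) pulls the contour toward high-gradient pixels while keeping it short and smooth.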

Under this formulation, Han et al. [82] integrated a minimal-path active contour model for vessel segmentation. The formulation is expressed as the search for the contour of minimal path that satisfies:

\min \int \left(\omega + g(|\nabla I(C(v))|)\right) |C'(v)|\, dv, \qquad (2.6)

where ω is a parameter that regularizes the length of the contour, g(|\nabla I|) = \frac{1}{1 + |\nabla I|^p} is the edge potential function, and p is equal to 1 or 2. Once the energy functional is defined, energy minimization is performed with a graph search technique. The method was applied to detect "semi-circular objects" in 2D. Similarly, Yang et al. [259] developed a snake-based method for segmentation of semi-circular objects in 2D images.

Valverde et al. [227] added an energy term to the general formulation in Eq. 2.5 which incorporates sensitivity to "granular" noise. The formulation is expressed as:

\int \alpha(s)E_{\mathrm{internal}} + \beta(s)E_{\mathrm{external}} + \gamma(s)E_{\mathrm{constrain}} + \delta(s)E_{\mathrm{noise}}\, ds, \qquad (2.7)

where the energy terms are as defined previously. This formulation causes the snake model to attach to tubular objects, where the term E_noise is a penalty value for pixels in the background.

Three-dimensional parametric deformable models perform shape representation as physics-based deformable models [212, 148, 147]. This class of deformable models can provide a parametric representation of a three-dimensional cylinder C with a parametric curve l(u) and cross-sectional planes a(u, v):

C(u, v) = \begin{pmatrix} l_1(u) + a_1(u, v) \\ l_2(u) + a_2(u, v) \\ l_3(u) + a_3(u, v) \end{pmatrix}, \qquad (2.8)

where -\frac{\pi}{2} \le u \le \frac{\pi}{2}, -\pi \le v \le \pi, l(u) = [l_1(u), l_2(u), l_3(u)]^\top, and a(u, v) = [a_1(u, v), a_2(u, v), a_3(u, v)]^\top. The main aim is to recover the parameters that represent the shape of the tubular object C. Usually, Lagrangian formulations construct an energy functional that integrates the shape parameters of the model C.

A deformable volume which integrates a mass-spring energy model with the shape of a 3D line is presented by Anshelevich et al. [9]. For an undeformed shape U, a deformation is defined by a deformation function γ. The total energy of the system for a given deformation γ is \int_{\gamma(U)} \varphi(t)\, dt, with energy density

\varphi(v) = \lambda\, (\operatorname{trace}(e))^2 + \mu\, \operatorname{trace}(e^2), \qquad (2.9)

where the trace of a matrix (a_{ij}) is \sum_i a_{ii}, e is the Green-Lagrange tensor of the deformation defined as e = \frac{1}{2}(F^\top F - I), F is the matrix of partial derivatives of γ, and λ, μ are constants related to the physical properties of the material. This approach creates a model of a deformable "cable" that is able to deform and navigate in an ideal environment.

Formulations for vessel shape representation in which edge information plays a crucial role were proposed in [172, 158, 100]. By adding smoothing constraints, Yim et al. [261] developed a vessel surface reconstruction method with a deformable tubular model. The position of every point in the tubular model is determined by the parametric radial function R_i(a, φ), where a is the axial and φ the circumferential location at the i-th iteration. The model axis is designated by a parametric function, and the deformation of the generalized cylinder can be written as:

R_{i+1}(a, \varphi) = R_i(a, \varphi) + K_1 \sum_{a_n, \varphi_n} \left(R_i(a_n, \varphi_n) - R_i(a, \varphi)\right) + K_2\, \hat{\rho}_{a\varphi} \cdot \nabla I\!\left(\rho_0\, R_i(a, \varphi)\right), \qquad (2.10)

where a_n and φ_n are the axial and angular locations adjacent to a and φ, K_1 and K_2 are constants representing the time step size and the relative weight of the forces, and ∇I is the gradient vector after convolving the image with a Gaussian kernel. The forces acting on the model depend on the gradient vector field, and deformation stops once the forces reach equilibrium.

A parametric deformable model that integrates the shape of the coronary arteries over time was proposed by Chen et al. [44]. A quantitative analysis of the coronary artery tree is estimated from biplane angiograms.

Bruijne et al. [50] presented an adapted active shape model framework for 3D segmentation of tubular structures in medical images. The model is based on similarity profiles, where the dissimilarity of a profile g_s is the squared Mahalanobis distance

f(g_s) = (g_s - \bar{g})^\top S_g^{-1} (g_s - \bar{g}), \qquad (2.11)

where \bar{g} is the mean profile and S_g is the covariance matrix. Deformations are based on statistical shape variations of tubular structures. Results are presented on cross-sections of tubular structures in CT images.
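The profile dissimilarity of Eq. 2.11 can be sketched directly (assuming the covariance matrix of the training profiles is invertible):

```python
import numpy as np

def profile_dissimilarity(g_s, profiles):
    """Squared Mahalanobis distance of a profile g_s from the mean of a
    set of training profiles (rows of `profiles`), as in Eq. 2.11."""
    mean = profiles.mean(axis=0)
    cov = np.cov(profiles, rowvar=False)
    d = g_s - mean
    return float(d @ np.linalg.inv(cov) @ d)
```

The distance vanishes at the mean profile and grows with deviation, weighted by the inverse covariance, so directions with little training variation are penalized most.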

Recently, an innovative approach for vascular segmentation in an artificial intelligence framework was proposed by McIntosh et al. [143]. The approach defines a "deformable organism" which is endowed with: i) sensors; ii) a cognitive center; iii) behaviors; iv) locomotion characteristics; and v) geometrical properties that allow deformations. The method includes preprocessing steps to enhance tubular structures (by applying Frangi's vesselness measure [71]); the deformable organism then deforms to segment vascular tree structures. One limitation of this method is that it is computationally very expensive, and a number of parameters need to be estimated, making the implementation very specific to the problem itself.

A 3D active shape method for anatomical segmentation based on statistical shape modeling was presented by Lekadir et al. [128]. In this work, a shape metric invariant to linear transformations (i.e., scaling, rotation and translation) is defined. The main idea is to construct a "fitness" measure that integrates gray level appearance and phase information expressed in the phase profile ϕ:

\varphi = \left(|\varphi_1 - \varphi_s|, \ldots, |\varphi_h - \varphi_s|\right), \qquad (2.12)

where h is the number of profiles and ϕ_s is the stationary phase. The intensity profile g_i and phase profile ϕ_i are normalized as:

\hat{g}_i = \frac{g_i - g_{\min}}{g_{\max} - g_{\min}}, \qquad \hat{\varphi}_i = \frac{\varphi_i - \varphi_{\min}}{\varphi_{\max} - \varphi_{\min}}, \qquad (2.13)

and combined to produce a profile p. This method combines intensity and phase for searching in a given feature space. Results are depicted on a single segment of a tubular structure with approximately constant diameter. In addition, there is a strong assumption on shape priors (circular shapes), which in general may make the method unsuitable for irregular tubular structures with considerable shape variations.

Geometric Deformable Models

Compared to parametric deformable models, the shape is here represented as an implicit function embedded in a higher dimensional space, and shape deformation is achieved by evolving this implicit function over time to match a given shape. The "parametric contours in 2D or 3D" of the previous subsection are now expressed in terms of the implicit function (a curve in 2D or a closed surface in 3D).

The surface is expressed as the zero level set C(υ, 0) = {(x, y, z) : ϕ(x, y, z) = 0}, where ϕ(x, y, z, t) : R³ × [0, T) → R. Geometric deformable models evolve a curve or surface of the form:

C_t(p, t) = F\mathbf{N}, \qquad (2.14)

where the function F determines the magnitude of the deformations, which are normal to the surface of the level set. In general, curve evolution is guided by a PDE.


Different formulations to segment smooth and regular tubular objects have been proposed (Lorigo et al. [135] and Bemmel et al. [228]). The key idea is to evolve a closed curve or surface over time, where the deformation forces are normal to the curve or the surface.

Extensions of such models include: i) hierarchical approaches (Holtzman-Gazit et al. [95]), where local level sets drive the evolution locally (Manniesing et al. [140] and Law et al. [124]); and ii) symmetry evolution approaches, where deformations occur according to the 2D curve⁵ (Kuijper et al. [122]). Physics-based approaches integrate elastic properties via the active contour models proposed by Xiang et al. [256]. In Yan et al. [258], the evolution of the level set was inspired by the physical formulation of a capillary tube immersed in a liquid. Antiga et al. [10] integrated fluid-dynamics models to segment human arteries. The integration of prior knowledge to guide the level set evolution was included in Unal et al. [223].

⁵ In three-dimensional space it is not straightforward to define an axis of symmetry.

The general formulation in Eq. 2.5 can be posed in a level set framework as:

C_t = g\kappa \mathbf{N} - \langle \nabla g, \mathbf{N} \rangle \mathbf{N}, \qquad (2.15)

where the function g is the stopping criterion of the curve or surface evolution.

Xiang et al. [256] proposed a level set model of the form:

\varphi_t = \mu \nabla \cdot \left( g\, \frac{\nabla \varphi}{|\nabla \varphi|} \right) + v\, |\nabla \varphi|, \qquad (2.16)

where φ is the level set function, the function g is the stopping criterion (which tends to zero close to the edges), μ is a small positive constant, v = -\frac{1}{4} \int \langle r, \nabla (G_\sigma * I + \alpha H(\varphi)) \rangle\, dx'\, dy', and H is the Heaviside function defined as:

H(\varphi) = \begin{cases} 0, & \text{if } \varphi \le -\sigma, \\ \frac{1}{2}\left(\sin\frac{\pi \varphi}{2\sigma} + 1\right), & \text{if } -\sigma < \varphi < \sigma, \\ 1, & \text{if } \varphi \ge \sigma, \end{cases} \qquad (2.17)

where σ is the same parameter used to smooth the image with a Gaussian kernel.
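The regularized Heaviside of Eq. 2.17 can be written down directly; a small sketch:

```python
import numpy as np

def smoothed_heaviside(phi, sigma):
    """Regularized Heaviside of Eq. 2.17: 0 below -sigma, 1 above sigma,
    and a continuous sine ramp in between."""
    phi = np.asarray(phi, dtype=float)
    ramp = 0.5 * (np.sin(np.pi * phi / (2.0 * sigma)) + 1.0)
    return np.where(phi <= -sigma, 0.0, np.where(phi >= sigma, 1.0, ramp))
```

The ramp meets the two constant branches continuously at ±σ, which keeps the level set update differentiable near the interface.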

Lorigo et al. [135] proposed an elegant method for curve evolution in a variational level set framework. Let C : [0, 1] → R³ be a curve and φ : R³ → [0, ∞) a function whose zero level set is C. Ambrosio and Soner [8] proved that evolving Eq. 2.14 (C_t = F\mathbf{N}) is equivalent to the evolution φ_t = G(∇φ, ∇²φ), where the function G is defined by the matrix P_{\nabla \varphi} \nabla^2 \varphi\, P_{\nabla \varphi}, with P_q = I - \frac{q q^\top}{|q|^2}, q ≠ 0. The curve evolution is then expressed as:

C_t = \kappa \mathbf{N} - H\!\left(\frac{\nabla g}{g}, \frac{\nabla I}{|\nabla I|}\right), \qquad (2.18)

where H is the projection operator onto the normal space of C. Hence the PDE formulation is expressed in terms of the level set of C as:

\varphi_t = G(\nabla \varphi, \nabla^2 \varphi) + \rho(\langle \nabla \varphi, \nabla I \rangle) \left\langle \nabla \varphi,\; H\!\left(\frac{\nabla g}{g}, \frac{\nabla I}{|\nabla I|}\right) \right\rangle, \qquad (2.19)

where the multiplier term ρ(⟨∇φ, ∇I⟩) is a stopping term.

Yan et al. [258] proposed a method for segmentation of tubular objects inspired by the physical action of "capillary forces". When tubular objects are submerged in liquids, the energy formulation involves the wetted surface S_w of the capillary object, the adhesion coefficient β, the area S_w^* in contact with the outer medium (air), and the corresponding adhesion coefficient β^*. The energy functional that drives the capillary can then be written as:

E(S_w(t)) = \beta S_w + \beta^* S_w^* = \alpha \left( \int C(t, x)\, dx + \lambda \int |C_x(t, x)|\, dx \right), \qquad (2.20)

where α = β - β^* and λ is a Lagrange multiplier (which acts as a surface area regularization term). In terms of curve evolution, it can be written as:

C_t = g(\kappa + c)\mathbf{N} - \langle \nabla g, \mathbf{N} \rangle \mathbf{N} + \alpha (1 + \lambda \hat{\kappa}^2)\, f(1 - \cos^2\theta)\, \mathbf{N}, \qquad (2.21)

where α controls the propagation, advection, and capillary forces, and the constants c and λ act like balloon forces. The zero level set φ embedded in S then evolves according to:

\varphi_t = g(\kappa + c)\, |\nabla \varphi| + \langle \nabla g, \nabla \varphi \rangle + \alpha (1 + \lambda \hat{\kappa}^2)\, |\nabla \varphi|\, f(1 - \cos^2\theta), \qquad (2.22)

where \mathbf{N} = -\frac{\nabla \Psi}{|\nabla \Psi|}, \cos\theta = \frac{\langle \nabla \Psi, \nabla g \rangle}{|\nabla \Psi|\, |\nabla g|}, and f is the parametric sigmoid function f(x) = \frac{1}{1 + e^{-\frac{x - b}{a}}}, in which a and b control the "shape" of the sigmoid. The corresponding PDE can be discretized as:

\varphi^{n+1} = \varphi^n + \varphi_t\, \Delta t, \qquad (2.23)

where Δt is the time step; the 3D volume is previously convolved with a Gaussian kernel and the level set φ is periodically reinitialized. The method was applied to segment tubular structures from magnetic resonance angiograms, producing "smooth" segmentations.
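The time stepping of Eq. 2.23 can be sketched with a deliberately simple speed law (a constant normal speed instead of the full capillary force, and no reinitialization; our illustration only):

```python
import numpy as np

def evolve_level_set(phi, speed=1.0, dt=0.2, n_iter=5):
    """Explicit Euler update phi^{n+1} = phi^n + phi_t * dt (Eq. 2.23)
    for the simple law phi_t = -speed * |grad phi|: with phi negative
    inside, a positive speed moves the zero level set outward."""
    phi = phi.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        phi -= dt * speed * np.sqrt(gx ** 2 + gy ** 2)
    return phi

# signed distance to a circle of radius 3 on a 21x21 grid
y, x = np.mgrid[-10:11, -10:11]
phi0 = np.sqrt(x ** 2 + y ** 2) - 3.0
phi1 = evolve_level_set(phi0)  # the front expands outward
```

In a full implementation, upwind differencing and periodic reinitialization of φ to a signed distance function are needed to keep the evolution stable.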


Recently, Li et al. [129] proposed a method to extract the vasculature tree in MRI angiography data. The approach is based on a generalization of the Eikonal equation to estimate vessel centerlines and diameters.

The advantages of geometric deformable models over parametric ones can be listed as: i) they naturally handle topology changes; ii) they are suitable for irregular shapes; and iii) they are independent of the initial position. The disadvantages include: i) leakage (over-segmentation); ii) the design of the stopping forces; iii) high computational cost; and iv) the difficulty of using them to segment large datasets.

Table 2.3 presents a comparison of the described methods.

Table 2.3: Segmentation of Tubular Objects – Deformable Models

Algorithm                       Year  Dim.  Properties (Regular, Irregular, Prob. Framework, Centerline, Diameter, Tree Struct.)

Parametric Deformable Models:
Potel et al. [172]              1983  3D    √ √ √ √
Donnell et al. [158]            1994  3D    √ √ √
Ingrassia et al. [100]          1999  2D    √ √ √
Anshelevich et al. [9]          2000  3D    √ √ √
Yim et al. [261]                2001  3D    √ √ √
Han et al. [82]                 2001  2D    √ √ √
Chen et al. [44]                2002  3D    √ √ √ √
Behrens et al. [22]             2003  3D    √ √ √ √
Bruijne et al. [50]             2003  3D    √ √ √ √
Yang et al. [259]               2003  2D    √ √ √
Valverde et al. [227]           2004  2D    √ √
Lekadir et al. [128]            2006  3D    √ √ √ √
McIntosh et al. [143]           2006  3D    √ √ √ √ √

Geometric Deformable Models:
Lorigo et al. [135]             2001  3D    √ √ √
Deschamps et al. [52]           2001  3D    √ √
Telea et al. [210]              2002  3D    √ √
Bemmel et al. [228]             2003  3D    √ √ √
Holtzman-Gazit et al. [95]      2003  3D    √ √ √
Dey et al. [55]                 2003  3D    √ √
Antiga et al. [10]              2003  3D    √ √ √ √
Wink et al. [250]               2004  2D    √ √ √
Manniesing et al. [140]         2004  3D    √ √ √
Jianfei et al. [105]            2004  3D    √ √
Kuijper et al. [122]            2005  3D    √ √
Yan et al. [258]                2006  3D    √ √
Unal et al. [223]               2006  2D    √ √ √ √
Xiang et al. [256]              2006  3D    √ √ √
Law et al. [124]                2006  2D    √ √ √
Wong et al. [252]               2007  3D    √ √ √ √

2.3.2 Tubular-Enhancing Filtering

The enhancement of tubular objects is a fundamental topic in modern computer vision, and it has been extensively investigated over the past decades, leading to significant progress in the shape modeling of tubular structures, especially in the segmentation of vascular organs (regular and smooth tubular objects) or objects with tubular-like shape. In this subsection, we review the most relevant methods for enhancing tubular structures.

A number of methods [71, 189, 134, 53, 130] for vessel enhancement have been proposed, relying on structural features that can discriminate tubular objects. Some of them [71, 189, 134] are model-based, as they rely on an ideal cylindrical or elliptical model.

From a probabilistic standpoint, the general formulation for the enhancement of tubular objects can be stated as constructing a likelihood function V that assigns a likelihood value to each voxel x that belongs to a tubular structure. More precisely, such a function depends on a scale σ, such that:

F(x) = \max_{\sigma_{\min} \le \sigma \le \sigma_{\max}} V(x, \sigma). \qquad (2.24)

Usually, the structural features are defined as the ordered eigenvalues |λ_1| ≤ |λ_2| ≤ |λ_3| of the Hessian matrix constructed from the 2nd-order derivatives, approximated with a Gaussian kernel of radius σ.
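A minimal 2-D sketch of this eigenvalue computation (finite differences in place of Gaussian derivative filters; in a full multiscale implementation, the image is smoothed at each scale σ and the response normalized by σ² before taking the maximum of Eq. 2.24):

```python
import numpy as np

def hessian_eigenvalues_2d(img):
    """Per-pixel eigenvalues of a 2-D Hessian estimated with finite
    differences (periodic borders via np.roll), sorted so |l1| <= |l2|."""
    hyy = np.roll(img, -1, 0) - 2 * img + np.roll(img, 1, 0)
    hxx = np.roll(img, -1, 1) - 2 * img + np.roll(img, 1, 1)
    gy = np.gradient(img, axis=0)
    hxy = np.gradient(gy, axis=1)
    # closed-form eigenvalues of the symmetric matrix [[hxx, hxy], [hxy, hyy]]
    tr, det = hxx + hyy, hxx * hyy - hxy ** 2
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    a, b = tr / 2.0 - disc, tr / 2.0 + disc
    swap = np.abs(a) > np.abs(b)
    return np.where(swap, b, a), np.where(swap, a, b)

# a smooth bright ridge along x: l2 strongly negative across it, l1 ~ 0
y = np.arange(31)[:, None]
ridge = np.exp(-((y - 15) ** 2) / 8.0) * np.ones((1, 31))
l1, l2 = hessian_eigenvalues_2d(ridge)
```

The eigenvalue pattern is what the measures below exploit: a small |λ1| along the tube axis and large negative eigenvalues across it signal a bright tubular point.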

Frangi et al. [71] proposed one of the earliest and still most used tubularity measures, based on the model of an ideal cylinder, enhancing tubular objects by constructing a function of the Hessian eigenvalues as:

V_{\mathrm{Frangi}}(x, \sigma) = \begin{cases} 0, & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0, \\ \underbrace{\left(1 - e^{-\frac{R_A^2}{2\alpha^2}}\right)}_{\text{sheet structures}} \underbrace{e^{-\frac{R_B^2}{2\beta^2}}}_{\text{blob structures}} \underbrace{\left(1 - e^{-\frac{S^2}{2c^2}}\right)}_{\text{noise sensitivity}}, & \text{otherwise,} \end{cases} \qquad (2.25)

where R_A = \frac{|\lambda_2|}{|\lambda_3|} distinguishes sheet-like objects, R_B = \frac{|\lambda_1|}{\sqrt{|\lambda_2 \lambda_3|}} detects blob-like objects, and S = \sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2} controls the sensitivity to noise.
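Eq. 2.25 can be evaluated directly on precomputed eigenvalues; the sketch below assumes the bright-structure-on-dark-background convention and illustrative default parameters α = β = 0.5, c = 15:

```python
import numpy as np

def frangi_vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=15.0):
    """Eq. 2.25 on Hessian eigenvalues ordered |l1| <= |l2| <= |l3|;
    bright tubes on a dark background require l2 < 0 and l3 < 0."""
    ra = np.abs(l2) / np.maximum(np.abs(l3), 1e-12)
    rb = np.abs(l1) / np.maximum(np.sqrt(np.abs(l2 * l3)), 1e-12)
    s = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    v = (1.0 - np.exp(-ra ** 2 / (2 * alpha ** 2))) \
        * np.exp(-rb ** 2 / (2 * beta ** 2)) \
        * (1.0 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where((l2 > 0) | (l3 > 0), 0.0, v)
```

An ideal bright tube (λ1 ≈ 0, λ2 ≈ λ3 ≪ 0) scores high, while a blob (all three eigenvalues comparable and negative) is suppressed by the R_B factor.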

Independently, Sato et al. [189] generalized the model of an ideal cylinder to incorporate elliptical shapes as:

V_{\mathrm{Sato}}(x, \sigma) = \begin{cases} \sigma^2 |\lambda_3| \left(\frac{|\lambda_2|}{|\lambda_3|}\right)^{\xi} \left(1 + \frac{\lambda_1}{|\lambda_2|}\right)^{\tau}, & \text{if } \lambda_3 < \lambda_2 < \lambda_1 < 0, \\ \sigma^2 |\lambda_3| \left(\frac{|\lambda_2|}{|\lambda_3|}\right)^{\xi} \left(1 - \rho \frac{\lambda_1}{|\lambda_2|}\right)^{\tau}, & \text{if } \lambda_3 < \lambda_2 < 0 < \lambda_1 < \frac{|\lambda_2|}{\rho}, \end{cases} \qquad (2.26)

where ξ > 0 specifies the asymmetry of the ideal cylinder, τ ≥ 0 controls the sensitivity to blob-like structures, 0 < ρ ≤ 1 controls the sensitivity to vessel curvature, and σ² normalizes across scales. Similarly, Lorenz et al. [134] followed the same approach, integrating a likelihood score for vessels.

Recently, Descoteaux et al. [53] reported a multi-scale bone enhancement measure, introducing a morphological operator (a sheetness measure) from multi-scale eigenanalysis defined as:

V_{\mathrm{Descoteaux}}(x, \sigma) = \begin{cases} 0, & \text{if } \lambda_3 > 0, \\ \underbrace{e^{-\frac{R_A^2}{2\alpha^2}}}_{\text{sheet structures}} \underbrace{\left(1 - e^{-\frac{R_B^2}{2\beta^2}}\right)}_{\text{blob structures}} \underbrace{\left(1 - e^{-\frac{S^2}{2c^2}}\right)}_{\text{noise sensitivity}}, & \text{otherwise.} \end{cases} \qquad (2.27)

Filter-based approaches for vessel enhancement have also been reported by Li et al. [130]. The filtering is based on "ideal" models of: i) a dot (or blob): d(x, y, z) = e^{-\frac{x^2 + y^2 + z^2}{2\sigma^2}}; ii) a line: l(x, y, z) = e^{-\frac{y^2 + z^2}{2\sigma^2}}; and iii) a plane: p(x, y, z) = e^{-\frac{x^2}{2\sigma^2}}. The filters are then defined in terms of the magnitudes of the eigenvalues and a likelihood score as:

V_{\mathrm{Li}}^{\mathrm{dot}}(x, \sigma) = \begin{cases} \frac{|\lambda_3|^2}{|\lambda_1|}, & \text{if } \lambda_1 < 0, \lambda_2 < 0, \lambda_3 < 0, \\ 0, & \text{otherwise,} \end{cases} \qquad (2.28)

V_{\mathrm{Li}}^{\mathrm{line}}(x, \sigma) = \begin{cases} \frac{|\lambda_2|(|\lambda_2| - |\lambda_3|)}{|\lambda_1|}, & \text{if } \lambda_1 < 0, \lambda_2 < 0, \\ 0, & \text{otherwise,} \end{cases} \qquad (2.29)

V_{\mathrm{Li}}^{\mathrm{plane}}(x, \sigma) = \begin{cases} |\lambda_1| - |\lambda_2|, & \text{if } \lambda_1 < 0, \\ 0, & \text{otherwise.} \end{cases} \qquad (2.30)

Note that this measure is designed for pulmonary data in CT scans and is a variation of the measure proposed by Sato [189].

A probabilistic vessel model to enhance vessels and junctions has been proposed by Agam et al. [5, 4]. The basic idea is to find a direction v orthogonal to the gradient vector field (in local windows) that minimizes the squared projections onto v of all the gradients in a local window. The projection of all gradients is given by the following energy functional:

E(v) = \frac{1}{n} \sum_i \left((g_i)^\top v\right)^2 = v^\top G G^\top v, \qquad (2.31)

where G = \frac{1}{\sqrt{n}}[g_1, \ldots, g_n] and GG^\top is a 3 × 3 matrix (the structure tensor). The idea is then to exploit the structure of this tensor via eigenvalue decomposition (similar to the case of the Hessian matrix). The probabilistic framework is based on the model of an ideal vessel defined by a Cauchy distribution:

f_p(x \mid \hat{\Theta}) = \sum_{i=1}^{M-1} \frac{\hat{\alpha}_i\, \pi^{-d}\, |\hat{\Sigma}_i|^{-1/2}}{(x - \hat{\mu}_i)^\top \hat{\Sigma}_i^{-1} (x - \hat{\mu}_i) + 1} + \hat{\alpha}_M\, p_M(x), \qquad (2.32)

where d is the dimensionality of the data, \hat{\mu}_i and \hat{\Sigma}_i are the estimated parameters with \hat{\Theta} = [\hat{\alpha}_1, \ldots, \hat{\alpha}_M, \hat{\theta}_1, \ldots, \hat{\theta}_M], and p_M(x) is a uniform density function. Parametric models for "nodules", "vessels" or "junctions" can be modeled by selecting the appropriate density function p_M.
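The minimizer of Eq. 2.31 is the eigenvector of the structure tensor GGᵀ associated with its smallest eigenvalue; a minimal sketch:

```python
import numpy as np

def local_orientation(gradients):
    """Direction v minimizing the summed squared projections of local
    gradients (Eq. 2.31): the eigenvector of the structure tensor with
    the smallest eigenvalue. `gradients` is an (n, 3) array."""
    g = np.asarray(gradients, dtype=float)
    tensor = g.T @ g / len(g)             # 3x3 structure tensor G G^T
    vals, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
    return vecs[:, 0]                     # estimated vessel axis
```

Inside a tube, the gradients lie mostly in the cross-sectional plane, so the least-varying direction of the tensor points along the vessel axis.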

Yang et al. [260] used prior knowledge of regions to calculate posterior probabilities with Bayes' formula, and the vessel segmentation was obtained via maximum a posteriori probability classification of the smoothed posteriors. The method assumes three regions: vessel, myocardium, and lung. Manual segmentation is performed and the density parameters, mean μ_c and standard deviation σ_c, are estimated. The probability density function is then estimated by:

\Pr(V(x) = v \mid x \in c) = \frac{1}{\sqrt{2\pi}\, \sigma_c}\, e^{-\frac{(v - \mu_c)^2}{2\sigma_c^2}}, \qquad (2.33)

and from Bayes' formula:

\Pr(x \in c \mid V(x) = v) = \frac{\Pr(V(x) = v \mid x \in c)\, \Pr(x \in c)}{\sum_{\gamma} \Pr(V(x) = v \mid x \in \gamma)\, \Pr(x \in \gamma)}. \qquad (2.34)

The classification of voxels is obtained via maximum a posteriori probability estimation:

C(x) = \arg\max_{c \in \{\text{vessel, myocardium, lung}\}} \Pr\nolimits^*(x \in c \mid V(x) = v), \qquad (2.35)

where C(x) is the class that the voxel x belongs to, and Pr* is a smoothed version of the posterior probability obtained by applying anisotropic diffusion [166]. This method only applies to CT data, since the intensity values correspond to anatomical structures.

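As an illustration of the MAP rule of Eqs. 2.33-2.35 (without the posterior smoothing step), the following sketch classifies a voxel intensity; the class means, standard deviations, and priors are made-up values, not those of Yang et al.:

```python
import numpy as np

# Per-class intensity means and standard deviations, as would be estimated
# from a manual segmentation (the values below are illustrative).
classes = ["vessel", "myocardium", "lung"]
mu    = np.array([300.0, 100.0, -800.0])
sigma = np.array([ 60.0,  40.0,  100.0])
prior = np.array([  0.2,   0.5,    0.3])

def map_classify(v):
    """MAP classification of a voxel intensity v (Eqs. 2.33-2.35,
    without the anisotropic-diffusion smoothing of the posteriors)."""
    likelihood = np.exp(-(v - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    posterior = likelihood * prior          # Bayes numerator, Eq. 2.34
    posterior /= posterior.sum()            # normalize by the evidence
    return classes[int(np.argmax(posterior))]

label = map_classify(310.0)   # intensity close to the vessel mean
```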

Abdul-Karim et al. [3] presented an algorithm for automatic selection of pa-

rameters for neurite segmentation. Neurite segmentation is posed in a probabilistic

framework incorporating a minimum description length principle, and in general can

be stated as the following minimization problem:

arg min_{M_i} { |L_IM(I|M_i)| + |L_M(M_i)| },  (2.36)

where |·| represents the number of bits needed to describe the data using the models L_M and L_IM. The key idea is then to estimate the posterior probability of a pixel belonging to a tubular structure by integrating Frangi's measure [71] as: |L_IM(I|M_i)| = −Σ_{x ∈ {F_i, B_i}} log₂ P(V_Frangi(x)). This method was implemented for 2D confocal images, and it was intended to segment neurites without providing a morphological description of the tubular structure.

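The description-length term −Σ log₂ P(V_Frangi(x)) can be sketched as follows; the probability values are illustrative, not produced by an actual vesselness filter:

```python
import numpy as np

def description_length(p_frangi):
    """Number of bits needed to encode the labeling given per-pixel
    vesselness probabilities: -sum log2 P(V_Frangi(x))."""
    p = np.clip(np.asarray(p_frangi, dtype=float), 1e-12, 1.0)
    return float(-np.sum(np.log2(p)))

# A parameter setting that makes the model confident (probabilities near 1)
# yields a shorter description than an uncertain one; the MDL principle
# selects the parameters minimizing this cost.
confident = description_length([0.99, 0.98, 0.97])
uncertain = description_length([0.60, 0.55, 0.50])
```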
Mahadevan et al. [138] proposed a nonlinear vessel enhancement framework that integrates not only shape constraints but also inherent properties of the image modality (e.g., variations in intensity, texture, tubular width, orientation, scale, and noise). Rohr et al. [180] developed synthetic tubular models with analytical bounds on the width of 3D tubular structures. These models are based on a 3D Gaussian distribution with a given width σ. Under these assumptions, the position and the diameter of 3D tubular structures were estimated.

Jiang et al. [107] adapted Frangi's measure [71] to enhance microtubules in electron tomography images. The key idea is to incorporate gradient information in the computation of the structural features of tubular structures. To incorporate such information, the eigenvalues of the Weingarten matrix were estimated, enhancing microtubules globally at a single given scale. This adaptation is a natural choice in this particular imaging modality, since microtubules have "weak" edges. Similarly, Worz et al. [255] and Law et al. [123] developed classes of filters that have a high response to tubular structures.

Stathis et al. [80] extracted microtubules in consecutive segments via Hamilton-Jacobi equations in fluorescence microscopy imaging; the proposed method is better suited for 2D images.

Krissian et al. [121, 120] designed a nonlinear filter based on anisotropic diffusion to enhance 3D tubular objects as:

I_t = div(F) + β(I_0 − I),  I(x, 0) = I_0,  (2.37)

where the vector field F, defined as F = Σ_i φ_i(u_{e_i}) e_i, incorporates multiple directions according to the orthonormal basis {e_0, e_1, e_2} of R³, β is a data-attachment coefficient, and the function φ controls the diffusion process as proposed in [166].

In the optical imaging arena, Willett et al. [249] incorporated nonparametric multi-scale platelet algorithms suited to photon-limited medical imaging, including variations of Poisson noise. Tubular or semi-tubular structures are enhanced by the platelet representation.


All the approaches previously mentioned are mostly designed to enhance vessels in the human body, as the methods make the assumption that vessels are regular and smooth tubular structures. Most of the existing methods were developed from the need to enhance vessels in imaging modalities such as CT, MRI, and PET; therefore, when considering different imaging modalities and different structures (not vessels), they do not necessarily apply.

Table 2.4 depicts our comparative analysis.

Table 2.4: Segmentation of Tubular Objects – Tubular Enhancing. Each method is compared by year, dimensionality, the type of tubular structure handled (regular or irregular), the use of a probabilistic framework, and whether the centerline, diameter, and tree structure are recovered.

Lorenz et al. [134] (1997, 3D)
Frangi et al. [71] (1998, 3D)
Sato et al. [189] (1998, 3D)
Li et al. [130] (2003)
Streekstra et al. [206] (2002, 3D)
Mahadevan et al. [138] (2004, 2D)
Yang et al. [260] (2004, 3D)
Agam et al. [4] (2005, 3D)
Desobry et al. [54] (2005, 3D)
Descoteaux et al. [53] (2005, 3D)
Abdul-Karim et al. [3] (2005, 3D)
Rohr et al. [180] (2006, 3D)
Worz et al. [255] (2006, 3D)
Palagyi et al. [161] (2006, 3D)
Jiang et al. [107] (2006, 3D)
Mendonca et al. [145] (2006, 3D)
Xiong et al. [257] (2006, 3D)
Law et al. [123] (2007, 3D)
Krissian et al. [120] (2000, 3D)
Stathis et al. [80] (2005, 2D)

2.3.3 Medial Axis Extraction

Medial axis extraction, or skeletonization, of tubular models aims to perform segmentation by first recovering the general shape of the tubular object and subsequently performing diameter estimation. These methods are suitable for expressing tubular morphology in terms of cylindrical lengths and diameters, as they identify junction points. We classify this class of methods into skeleton-based and tracking-based methods.

Skeleton-based Methods

Dey et al. [55] offer a general representation of shape in terms of skeletons. Representations of geometric objects in 2D and 3D as medial manifolds are presented by Yushkevich et al. [265]. Shape is represented by using: i) the medial axis, and ii) deformable templates. The medial axis is defined as:

f (x, u, v) = |x − m(u, v)|2 − r(u, v)2 = 0, (2.38)

where m is the medial surface.

Moll et al. [152] presented a path planning method with minimal-energy curves. The method simulates stable configurations of a "flexible" wire as intermediate paths. The formulation is inspired by the Frenet frame of a parametric curve, in terms of the curvature k, the torsion τ, the tangent vector T, and the binormal vector B, as D = τT + kB (i.e., the rotational strain). The energy functional depends on geometric properties such as torsion and curvature, and is defined as:

arg min_q { E(q) + K · err },  (2.39)

where the first term penalizes curvature and torsion and the second is an error term, K is a penalty cost, err is an error function, E(q) = Σ_i (k_i² + τ_i²) · s_i, and q is an n × 3 matrix whose row i contains the parameters (k_i, τ_i, s_i), 1 ≤ i ≤ n.

Vasilevskiy et al. [235] proposed a flux maximizing geometric flow. The key idea

was to evolve a curve or a surface by incorporating not only the magnitude but also

the direction of a vector field.

Let V be a vector field defined in R³. The total inward flux of the vector field through the surface is defined by the surface integral:

Flux(t) = ∫_{A(t)} ⟨V, N⟩ dS,  (2.40)

where A(t) is the surface area of the evolving surface. Based on this result, Vasilevskiy

et al. [235] proved that the direction in which the inward flux of the vector field V through the surface S is increasing most rapidly is given by:

Ct = div(V)N, (2.41)

where the vector field V is defined in terms of the normalized image gradient, V = φ ∇I/|∇I|. This was later applied by Dimitrov et

al. [58], where a flux invariant technique was developed to distinguish between medial and non-medial points. A skeleton is obtained using a Hamilton-Jacobi skeletonization algorithm. Results are depicted on 2D binary synthetic images.

Later, Torsello et al. [217, 215, 216] introduced a Hamilton-Jacobi method for skeletonization. The main idea relies on a variation of the work of Dimitrov et al. [58] and is based on the Divergence Theorem, which relates the flux Φ_A(F) of the vector field F over an arbitrary surface area A as:

∫_A ∇ · F(x) dx = ∫_{∂A} F · n dl = Φ_A(F),  (2.42)

where dl is the "length" differential on the boundary ∂A. Therefore, in areas where the divergence is well defined, we have:

∇ · F = lim_{|A|→0} Φ_A(F)/|A|.  (2.43)

Under this definition, a "skeleton" or "shock" point is defined as a point where ∇ · F < 0. The key idea is to approximate the flux by integrating multiple directions by incorporating a Frenet frame curve as:

∇ · F(p) = −k(p),  (2.44)

where k(p) is the curvature of a front orthogonal to F. This expression can be seen as a regularization term where the flux is not conservative. The approach takes into account a normalized flux with the inward evolution of the object boundary at non-skeletal points. Depending on the sign associated with the flux, the skeleton is computed based on curvature constraints. Since the proposed method is highly dependent on curvature, the image needs to be smoothed, and the method applies directly only to 2D images.

Fast extraction of minimal paths in 3D images, with applications to virtual endoscopy, is presented by Deschamps et al. [52]. The method is based on a wave propagation computed with Fast Marching Methods, and the medial axis is extracted by tracing back from the ending point to the source of the propagation. This approach is general in the sense that it applies not only to binary images but also to gray-level images. The minimal action U is defined as:

U(p) = inf_{A_{p0,p}} ∫_Ω P(C(s)) ds,  (2.45)

where Ω = [0, 1], A_{p0,p} is the set of all 3D paths between p_0 and p, and the potential (or cost) function is defined as a function of the image gradient as:

P(x) = |∇G_σ ∗ I(x)| + w,  (2.46)

where G_σ is a Gaussian filter and w is the weight of the model. Under these considerations, multiple front propagations were used to extract centerlines.

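As a discrete analogue of this minimal-path idea (not the Fast Marching implementation of Deschamps et al.), Dijkstra's algorithm on a grid accumulates the same kind of cost and backtracks from the end point to the source:

```python
import heapq

def minimal_path(cost, start, goal):
    """Dijkstra on a 2D grid where cost[i][j] plays the role of the
    potential P in the minimal-action map U (Eq. 2.45)."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        i, j = node
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = node
                    heapq.heappush(heap, (nd, (ni, nj)))
    # Backtrack from the goal to the source, as in centerline tracing.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low cost along the middle row models the inside of a bright tube.
grid = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path = minimal_path(grid, (1, 0), (1, 3))
```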
A general framework for computing flight paths is presented by Hassouna et al. [86]. In this framework, the authors propose a variational approach based on the distance transform and the gradient vector flow. The application presented is to estimate paths in virtual endoscopy from a binary object. The method is applied directly to binary images with no shape priors, and results are depicted for 3D volumes. The key idea is to express the cost function in terms of the Gradient Vector Flow (GVF), which is sensitive to concave regions. The GVF is defined as the vector field V that minimizes the following energy functional:

E(V) = ∫ [ µ |∇V(x)|² + |∇f(x)|² |V(x) − ∇f(x)|² ] dx,  (2.47)

where µ is a regularization parameter and f(x) is an edge map. The cost function F is then computed in terms of the GVF V. The key idea is to take into account smooth and concave regions of the object of interest; hence, this method is not suitable for irregular tubular objects.

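A minimal 2D sketch of the GVF minimization (gradient descent on Eq. 2.47); the parameter values and the edge map are illustrative:

```python
import numpy as np

def gradient_vector_flow(f, mu=0.2, iters=200, dt=0.1):
    """Iterative minimization of the GVF energy on a 2D edge map f:
    V evolves by mu * Laplacian(V) - |grad f|^2 (V - grad f)."""
    fy, fx = np.gradient(f)
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += dt * (mu * lap_u - mag2 * (u - fx))
        v += dt * (mu * lap_v - mag2 * (v - fy))
    return u, v

# Edge map of a bright disk: GVF extends the edge forces into the
# homogeneous interior, which plain gradients do not reach.
yy, xx = np.mgrid[0:32, 0:32]
f = ((xx - 16) ** 2 + (yy - 16) ** 2 < 100).astype(float)
u, v = gradient_vector_flow(f)
```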
A segmentation-free approach for the skeletonization of gray-scale images via anisotropic vector diffusion is reported by Yu et al. [264]. In this approach, a skeleton strength map is calculated from the diffused vector field, which is defined as the vector field (u, v) that minimizes an energy functional, evolved as:

u_t = µ · div(g(α) · ∇u) − (u − f_x)(f_x² + f_y²),
v_t = µ · div(g(α) · ∇v) − (v − f_y)(f_x² + f_y²),  (2.48)

where the first term of each equation is the diffusion term, α is the angle between the central vector and the vectors in a given neighborhood, g is a monotonically decreasing function, and f(x, y) = ‖∇G_σ(x, y) ∗ I(x, y)‖² is an edge strength map of the original image. The minimization of this energy functional is performed by solving a PDE and relies mainly on the computation of the strength map f and on the selection of the neighborhood used to compute α. Results are presented on 2D medical images where the boundaries of the structures are well defined, and the skeleton is not extracted as a single connected tree component.

He et al. [87, 88] presented a skeletonization method to extract dendrite centerlines. Skeletonization is performed from a binary volume by performing connectivity testing to re-connect isolated dendrite segments. The authors reported results on volumes of average size 383 × 328 × 150 voxels (17.9 MB); the volumes were deconvolved using a theoretical point spread function. Prior to skeletonization, the authors do not perform any noise removal, which may have a major impact on the segmentation; another constraint, as the authors point out, is that the algorithm may not work in general due to the "softness" of the images. In addition, the sizes of the images to reconstruct are relatively small. In order to reconnect two three-dimensional points (x_i, y_i, z_i) and (x_j, y_j, z_j), the following heuristic function is proposed:

C(i, j) = α θ(i, j)/π + β d(i, j)/∆,  (2.49)

where the first term accounts for direction and the second for distance, θ(i, j) is the angle and d(i, j) the distance between two disconnected points, and α, β, and ∆ are user-specified parameters. To reconnect two points, the cost is computed and the minimal-cost reconnection is produced. Once the reconnection of points is performed, an algorithm to generate a minimum spanning tree is applied. The major drawback of this approach is that the reconnection of "dendrites" is performed taking into account only the 2D projection of the points; this projection operator may lose important information in the 3D volume when two disconnections occur and one dendrite lies over the other.

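The reconnection cost of Eq. 2.49 followed by a minimum-spanning-tree step can be sketched as follows; the parameter values, the fixed tangent, and the use of full 3D points (rather than 2D projections) are our illustrative choices:

```python
import math

def reconnection_cost(p, q, tangent, alpha=1.0, beta=1.0, delta=10.0):
    """Heuristic cost of Eq. 2.49: a direction term (angle between the gap
    vector and the local dendrite tangent, normalized by pi) plus a
    distance term normalized by delta."""
    gap = [qi - pi for pi, qi in zip(p, q)]
    d = math.sqrt(sum(g * g for g in gap))
    dot = sum(g * t for g, t in zip(gap, tangent)) / (d + 1e-12)
    theta = math.acos(max(-1.0, min(1.0, dot)))
    return alpha * theta / math.pi + beta * d / delta

def minimum_spanning_tree(points, tangent):
    """Prim's algorithm over the reconnection costs (the paper applies an
    MST step after candidate reconnections are scored)."""
    n = len(points)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = min((reconnection_cost(points[i], points[j], tangent), i, j)
                   for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
tree = minimum_spanning_tree(pts, tangent=(1.0, 0.0, 0.0))
```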
An algorithm to find piecewise linear skeletons by principal curves is proposed by Kegl et al. [113]. This approach estimates smooth curves which pass through the "middle" of a set of points. The general principle is based on a fitting-smoothing step given by:

E(G) = ∇(G) + λ P(G),  (2.50)

where ∇(G) is an average squared distance term and P(G) a penalty term, defined as:

∇(G) = (1/n) Σ_{i=1}^{n} ∆(x_i, G),  (2.51)

P(G) = (1/m) Σ_{i=1}^{m} P_v(v_i),  (2.52)

where ∆(x_i, G) is the squared Euclidean distance of the point x_i to the nearest point of the graph G, and P_v(v_i) is the curvature penalty at the vertex v_i. The method is suitable for 2D shapes, specifically for handwritten characters in binary images.

Rumpf et al. [182, 211] proposed a skeletonization method based on level set methods, specifically a fast marching algorithm based on the construction of a distance function from the boundary of the object. The method applies to 2D and 3D binary images. An augmented Fast Marching Method for computing skeletons and centerlines is presented by Telea et al. [210]. The authors perform a parametrization of the boundary of the region of interest, and the 3D skeleton is computed from 2D projections of the volume. Alongside, skeleton extraction of 3D objects using radial basis functions is proposed by Wan-Chun et al. [240]. The approach is based on constructing a distance field from a binary object and connecting local maxima regions to generate the skeleton. The key idea is to formulate the skeleton of a surface S as the set of points M(S) such that for each point q on the surface S there exists a point p in M(S), where M satisfies: i) a neighborhood property; ii) a uniformity property; and iii) a compactness property. These properties induce a pseudo-inverse mapping M⁻¹ used to construct the skeleton. Similarly, Lien et al. [132] present a general skeletonization method, but it is mostly designed for synthetic images.

Morrison et al. [153, 154] proposed a skeletonization method based on an adaptive selection of contour points, performing a non-pixel-based analysis in which a constrained Delaunay triangulation is used. The proposed method is suitable for thin structures with well-defined boundaries, and it requires 2D binary images. Tran et al. [219] define a 3D voxel coding algorithm based on the discrete Euclidean distance transform. In this approach, topology is preserved in complex shapes. However, strong constraints are introduced by constructing medial points that depend on a cluster-labeling heuristic function. Similar work on discrete skeletons was performed by Niethammer et al. [157].

Tracking-based Methods

Al-Kofahi et al. [7] described morphological neuron reconstruction using an adaptive exploratory search at the voxel intensity level. Directional filters are used to describe the neuron's morphology, assuming there is no preprocessing. However, this method is only well suited for images without significant noise or artifacts, which can otherwise lead to an improper reconstruction.

Wink et al. [250] presented a multiscale vessel tracking method for 2D images. The method propagates a wave between two user-selected points using a scale-selective approach to ignore irrelevant off-branches which might cause the path to take a "shortcut" (dynamic programming). The cost function C(σ) is defined piecewise (Eq. 2.53): one value when the vesselness response R(σ) vanishes and another otherwise, where R_l is the minimum value of the image and R is the vessel measure of Frangi [71].

A method for the identification of abnormal vascular structures, specifically in angiography data, was first proposed by Wong et al. [253]. The proposed method applies to binary (already segmented) data, and the major focus of the analysis is to identify abnormalities in vessel shape. Rather than achieving segmentation, this method discriminates circular-like shapes. Later, an adaptation integrating a probabilistic approach for vessel axis tracing and segmentation [252] was proposed. The method is formulated in terms of stream surfaces with a minimum-cost path formulation as:

x_{i+1} = x_i + [ t̂_i  t̂_i^⊥  (t̂_i × t̂_i^⊥) ] (d cos θ_x^{i+1} sin φ_x^{i+1},  d sin θ_x^{i+1} sin φ_x^{i+1},  d cos φ_x^{i+1})^T.  (2.54)

Tracking is performed from the following functional:

p_{i+1} = arg max_{p∈Ω} f(p | q_i, f̂) = arg max_{p∈Ω} f(f̂ | q_i, p) f(p | q_i),  (2.55)

where the left-hand side is the posterior and the right-hand side is the product of the likelihood and the prior, p_{i+1} is the solution vector of the axis tracing problem, x_{i+1} and t̂_{i+1} are expressed in spherical coordinates (θ_x^{i+1}, φ_x^{i+1}) and (θ_t^{i+1}, φ_t^{i+1}), the solution vector is expressed as [θ_x^{i+1}, φ_x^{i+1}, θ_t^{i+1}, φ_t^{i+1}, r_{i+1}]^T, and q_i = [x_i, t̂_i, t̂_i^⊥]^T.

The major limitations of this model are: i) it is difficult to express non-regular shapes; and ii) it is not straightforward to represent the entire morphology of the tubular structure (especially bifurcation or junction points).

An application to Computed Tomography colonography is presented by Kang et al. [110]. Centerline extraction of the colon is posed as a path planning problem in which a robot has to travel along the colon guided by a camera. The camera position p(t) at a given time t is expressed as a function of: i) the direction d(t); ii) the centerline c(t); and iii) a thickness function T(t), as:

d(t) = k T(t) (∂c(t)/∂t) / ‖∂c(t)/∂t‖,  (2.56)

where k is a constant, the factor k T(t) gives the magnitude, and the normalized derivative of the centerline gives the direction. Under this formulation, the authors proposed a "double" navigation method to extract the centerlines, especially in regions of "high curvature", where the result is a single-line trajectory.

Fleuret et al. [69] developed an algorithm to reconstruct neuron morphology from 2D images. The algorithm identifies dendrites from the "intensity" information in the image, without estimating structural features. The method relies on two steps: i) first, the probability of every pixel to be on a filament is estimated (only from the intensity information); and ii) second, an optimal spanning tree over the detected filaments is built. The probability is estimated via a modified version of the EM algorithm and is expressed as the probability map ξ(x, y) = P(Z = 1 | I(x, y)). The computation of the maximum-likelihood tree T reduces to finding the tree that maximizes Σ_{(x,y)∈T} Ψ(x, y), which is derived from:

log P(I | T) = Ψ_o + Σ_{(x,y)∈T} Ψ(x, y),  (2.57)

where T is the set of pixels in the filament tree and the function Ψ is expressed as:

Ψ(x, y) = log [ P(I(x, y) | Y(x, y) = 1) / P(I(x, y) | Y(x, y) = 0) ].  (2.58)

Then dendrite tracking is performed only at selected dendrite segments in 2D images.

Robust segmentation of tubular structures with an elliptical model is developed by Behrens et al. [22]. The method includes a vessel enhancement step integrating an extended randomized Hough transform. The analytical expression of the Hough transform is x² + y² + U(y² − x²) − 2V xy − Rx − Sy − T = 0, with parameters:

U = cos(2θ) (1 − e²)/(1 + e²),
V = sin(2θ) (1 − e²)/(1 + e²),
R = 2x_c(1 − U) − 2y_c V,
S = 2y_c(1 + U) − 2x_c V,
T = 2a²b²/(a² + b²) − x_c R/2 − y_c S/2,  e = b/a,  (2.59)

and then a solution of the form (x_c, y_c), a, b, θ is found by solving a linear system of equations derived from the previous equations. For tracking, a Kalman filter is used. Let x_k be a point on the cylinder axis at time k∆t; the estimate at time instant (k + 1)∆t is found from the quadratic model:

x_{k+1} = x_k + ∆t ẋ_k + (1/2) ∆t² ẍ_k,
ẋ_{k+1} = ẋ_k + ∆t ẍ_k,  (2.60)
ẍ_{k+1} = ẍ_k.

Experiments were performed to segment the aortic arch and the spinal cord in 3D MR angiography data. Similarly, Florin et al. [70] proposed a tracking method based on particle filtering to extract centerlines of the coronary arteries in CT data. Likewise, Aylward et al. [15] presented a tracking system based on detecting and following ridges in 3D images. These methods are compared in Table 2.5.

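The constant-acceleration (quadratic) model of Eq. 2.60 is the state-transition step of the Kalman filter; a one-axis sketch (the step size and state values are illustrative):

```python
import numpy as np

# State [x, xdot, xddot] per axis; Eq. 2.60 is the constant-acceleration
# state transition used to predict the next point on the cylinder axis.
dt = 1.0
F = np.array([[1.0, dt, 0.5 * dt ** 2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])

state = np.array([0.0, 2.0, 0.5])  # position, velocity, acceleration
pred = F @ state                   # x_{k+1} = x_k + dt*xdot_k + dt^2/2 * xddot_k
```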
Table 2.5: Segmentation of Tubular Objects – Tracking-based methods. Each method is compared by year, dimensionality, the type of tubular structure handled (regular or irregular), the use of a probabilistic framework, and whether the centerline, diameter, and tree structure are recovered.

Ferreira et al. [66] (1999, 2D)
Bitter et al. [28] (2001, 3D)
Hanger et al. [83] (2002, 3D)
Tran et al. [219] (2005, 3D)
Hassouna et al. [86, 85] (2007, 3D)
Torsello et al. [217] (2006, 2D)
Reniers et al. [177] (2007, 3D)
Yim et al. [262] (2000, 2D)
Wan et al. [239] (2002, 3D)
Kegl et al. [113] (2002, 2D)
Quek et al. [174] (2001, 3D)
Rami et al. [176] (2004, 2D)
Yushkevich et al. [265] (2003, 3D)
Ji et al. [104] (2004, 2D)
Huang et al. [97] (2003, 2D)
Couprie et al. [48] (2007, 3D)
Wang et al. [242] (2007, 3D)
Morrison et al. [154] (2006, 2D)
Bertrand et al. [26] (2006, 2D)
Schlecht et al. [190] (2007, 3D)
Moll et al. [152] (2006, 3D)
Kang et al. [110] (2005, 3D)
Ge et al. [74] (2005, 2D)
Gayle et al. [73] (2005, 3D)
Wan et al. [240] (2001, 3D)
Dimitrov et al. [58] (2003, 2D)
Rumpf et al. [182, 211]
He et al. [88] (2003, 3D)
Maddah et al. [137] (2002, 3D)
Soltanian et al. [201] (2005, 3D)
Toumoulin et al. [218] (2001, 3D)
Aylward et al. [15] (2002, 3D)
Wesarg et al. [247] (2006, 3D)
Fleuret et al. [69] (2002, 2D)
Lee et al. [126] (2007, 3D)

2.3.4 Hybrid Methods

A segmentation-free skeletonization of intensity images via PDEs is presented by Chung et al. [46]. The method was applied to 2D images and is based on a particular PDE that performs morphological operations. The approach consists of preprocessing steps (denoising and contrast enhancement), followed by skeletonization (morphological operations in a PDE framework), and finally a postprocessing step (centerline detection). Morphological operators such as erosion and dilation are integrated in the following PDE:

∂Φ(x, y, t)/∂t = ± sup_{r(θ)∈β} ( ⟨r(θ), ∇Φ⟩ ),  (2.61)

where β is a binary (flat) structuring element parameterized by θ, and Φ(x, y, 0) = Φ(x, y)_0 is the initial condition; the plus sign corresponds to erosion, while the minus sign refers to dilation. Under this formulation, erosion is expressed as:

Φ_t = (|Φ − η| / M) ‖∇Φ‖,  (2.62)

where η is the intensity value of the image and M is the maximum intensity value. This formulation produces a slow propagation of the front near η. Results are shown on 2D images. Similarly, Jianfei et al. [105] followed a morphology-based approach.


Barbu et al. [20] developed a learning-based method for the detection and segmentation of 3D tubular structures, specifically for CT colonoscopy images. The method incorporates a hierarchical scheme to detect parts of tubular structures based on a voting scheme. Dynamic programming is incorporated by detecting "short" tubes T_i = (X_1, X_2, R_1, R_2), where X_1, X_2 are the ending points and R_1, R_2 the respective radii. The set of all tubes is denoted by: i) the set T = {T_1, ..., T_n}; and ii) the graph G = (T, E), where the nodes of G are the tubes T. Two tubes T_i, T_j are connected depending on the orientation of their ending points. The orientation measure is defined by E_ij = |α_ij − π| tan(|α_ij − π|), and the unary cost of each tube is:

c(T) = −ln(P(T)) + (R_2 − R_1)²,  (2.63)

and under a dynamic programming framework the cost C_ij^k of the best chain of length k is given by the following recursion:

C_ij^{k+1} = min_s ( C_is^k + E_sj + c(T_j) ).  (2.64)


This method assumes high homogeneity in intensity values inside the structure of

interest (which is the case for CT colonoscopy images). However, this condition is

not applicable to different imaging modalities such as MRI and optical imaging.

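The chain recursion over short tubes can be sketched with a small dynamic program; the tube graph, edge costs, and unary costs below are made up for illustration:

```python
def best_chain_cost(tubes, edge_cost, unary_cost, max_len):
    """C[k][j] is the best cost of a chain of k tubes ending at tube j,
    computed with the recursion C_ij^{k+1} = min_s(C_is^k + E_sj + c(T_j))."""
    n = len(tubes)
    C = [[unary_cost[j] for j in range(n)]]          # chains of length 1
    for _ in range(max_len - 1):
        prev = C[-1]
        nxt = []
        for j in range(n):
            best = min((prev[s] + edge_cost[s][j] for s in range(n)
                        if edge_cost[s][j] is not None),
                       default=float("inf"))
            nxt.append(best + unary_cost[j])
        C.append(nxt)
    return C

INF = None  # no edge between the two tubes
tubes = [0, 1, 2]
unary = [1.0, 1.0, 1.0]
edges = [[INF, 0.5, 5.0],
         [INF, INF, 0.5],
         [INF, INF, INF]]
C = best_chain_cost(tubes, edges, unary, max_len=3)
```

The chain 0 → 1 → 2 accumulates the small orientation costs and beats the direct 0 → 2 connection.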
A combination of an optical flow technique and normalized cuts for segmentation of blood vessels in retinal images is presented by Cai et al. [37]. The optical flow constraint,

∇I · [u, v]^T + I_t = 0,  (2.65)

is solved by a least-squares method to find a solution in terms of [u, v]. Rewriting it in terms of sums of products of the image derivatives over a patch p,

[ Σ_p I_x I_x   Σ_p I_x I_y ; Σ_p I_y I_x   Σ_p I_y I_y ] · [u, v]^T = −[ Σ_p I_x I_t , Σ_p I_y I_t ]^T,  (2.66)

the eigenvalues of the gradient matrix provide structural information about lines, and this defines a feature space of "structural features". The problem of segmenting vessels is then posed as "learning" structural features by integrating a normalized graph cut method as:

NCut(A, B) = Σ_{p∈A, q∈B} w_pq ( 1/Σ_{p∈A} D_p + 1/Σ_{q∈B} D_q ),  (2.67)

where A, B are two segments of the original set V, w_pq is the similarity of the vertices p and q as a function of both intensity and distance, and D_p = Σ_q w_pq is the degree of the vertex p. The weights are expressed as a function of intensity and distance as w_ij = e^{−∇I(i,j)²/σ_1²} e^{−D(i,j)²/σ_2²}, where σ_1², σ_2² are parameters set by the user. One of the limitations of this method is that it is computationally very expensive. The method also assumes a strong response of the gradient at the boundaries of the vessels; however, this assumption may not hold for small vessels. An application to 2D vessel segmentation in retinal images is presented.

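The least-squares step of Eq. 2.66 is essentially a Lucas-Kanade window solve; a sketch with synthetic derivatives (the window data and the true motion below are fabricated for illustration):

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It):
    """Least-squares solution of grad(I).[u,v]^T + I_t = 0 over a window:
    solve the normal equations built from the flattened derivatives."""
    A = np.stack([Ix, Iy], axis=1)  # n x 2 system matrix
    b = -It                         # move I_t to the right-hand side
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic window translating by (1, 0.5) pixels per frame:
# I_t = -(Ix*u + Iy*v) holds exactly, so the solver recovers the motion.
rng = np.random.default_rng(1)
Ix = rng.normal(size=50)
Iy = rng.normal(size=50)
true_uv = np.array([1.0, 0.5])
It = -(Ix * true_uv[0] + Iy * true_uv[1])
u, v = lucas_kanade_window(Ix, Iy, It)
```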
El-Baz et al. [62] proposed a probabilistic model for segmenting vessels. The method is based on separating blood vessels from other regions of interest by approximating a marginal distribution with a linear combination of Gaussian functions, rather than using a mixture of Gaussian or Rician functions. Results are depicted on images where vessels are clearly visible and boundaries are well defined. Zeng et al. [267] presented an approach for the automatic extraction and measurement of tubular structures in minirhizotron images. This method integrates a classifier to discriminate tubular structures from the background; a labeling method is then implemented to identify each tubular structure in the image. The features are computed with an oriented kernel of the form:

K_{θ,σ}(x, y) = e^{−x_θ²/(2σ²)} if |x_θ| ≤ L, and 0 otherwise,  (2.68)

where the zero-mean kernel K̄_{θ,σ}(x, y) = K_{θ,σ}(x, y) − µ_{θ,σ} is used; the method implements a version of AdaBoost as:

H(x) = sign( Σ_n α_n h_n(x) ),  (2.69)

where α_n = (1/2) ln((1 − ε_n)/ε_n).

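The strong classifier of Eq. 2.69 combines weak hypotheses weighted by α_n = ½ ln((1−ε_n)/ε_n); a sketch with illustrative threshold stumps and training errors:

```python
import math

def adaboost_predict(x, weak_learners, errors):
    """Strong classifier of Eq. 2.69: sign of the alpha-weighted vote of
    the weak learners, with alpha_n = 0.5*ln((1-eps_n)/eps_n)."""
    score = 0.0
    for h, eps in zip(weak_learners, errors):
        alpha = 0.5 * math.log((1.0 - eps) / eps)
        score += alpha * h(x)
    return 1 if score >= 0 else -1

# Illustrative weak learners: simple threshold stumps on a scalar feature.
stumps = [lambda x: 1 if x > 0.3 else -1,
          lambda x: 1 if x > 0.6 else -1,
          lambda x: 1 if x < 0.9 else -1]
errs = [0.2, 0.3, 0.4]  # lower training error -> larger vote weight

label = adaboost_predict(0.7, stumps, errs)
```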
An approach to vessel segmentation computing wavelet features with statistical classifiers is presented by Soares et al. [200, 125]. The method is based on the 2D discrete wavelet transform, where the image is analyzed at different scales. This method provides results on 2D images where there is almost no noise present. However, when decomposing the image into different resolutions (i.e., applying the down-sampling operator required by the discrete wavelet transform), small and fine vessel structures may be lost in the segmentation.

Isgum et al. [101] proposed a pattern recognition approach for segmentation of the coronary arteries in CTA data. The method is based on computing a number of features to describe: i) the size; ii) the shape; iii) the position; and iv) the appearance of tubular structures. A K-nearest-neighbor classifier is then used to perform the segmentation of the coronary arteries in the presence of contrast agent. The authors claim that a total of eight features are the most relevant ones. However, there is no strong evidence that the same set of features can characterize generalized tubular structures.


An algorithm for the detection of vascular structures in CT lung images was proposed by Prasad et al. [173]. This approach integrates elements of machine learning theory through relational learning from potential bronchovascular pairs. The approach assumes a manual segmentation of candidate structures, making the method specifically designed for the given application.

Hanger et al. [83] proposed a method for the skeletonization of vascular tree structures in medical images. The proposed reconstruction method works under the hypothesis that the minimum intensity occurs at the center of gravity of an orthogonal cross-section of the vessel. Results are depicted for a limited number of vascular structures. A wavelet skeleton method for ribbon-like shapes is proposed by You et al. [263]. Wavelet analysis is used to characterize medial points in ribbon-like structures at different scales. The method is designed for 2D structures.

Table 2.6 depicts our comparative analysis.

Table 2.6: Segmentation of Tubular Objects – Hybrid Methods. Each method is compared by year, dimensionality, the type of tubular structure handled (regular or irregular), the use of a probabilistic framework, and whether the centerline, diameter, and tree structure are recovered.

Chung et al. [46] (2000, 2D)
Pitas et al. [167] (2001, 3D)
Nedzved et al. [156] (2001, 2D)
Selle et al. [193] (2002, 3D)
Passat et al. [163] (2005, 3D)
Huysmans et al. [98] (2006)
Barbu et al. [20, 19] (2007, 3D)
Cai et al. [37] (2006, 2D)
El-Baz et al. [62] (2006, 3D)
Zeng et al. [267] (2006, 2D)
Soares et al. [200] (2006, 2D)
Isgum et al. [101] (2004, 3D)
Prasad et al. [173] (2004, 3D)
Gu et al. [78] (2004, 2D)
You et al. [263] (2005, 2D)
Marquering et al. [141] (2005, 3D)
Mosaliganti et al. [155] (2006, 3D)
Zhang et al. [268, 269] (2007, 3D)
Bai et al. [16] (2007, 3D)
Cheng et al. [45] (2007, 3D)

Chapter 3

In this chapter, we present our methodology to enable automatic morphological reconstruction of neuron cells from optical imaging. We remind the reader that some of the major challenges for reliable automatic reconstruction are the poor quality of the data and the need for a tree morphological representation in terms of cylindrical lengths and diameters.

3.1 Experimental Data

Our database of neuron cell images consists of twelve CA1 pyramidal neuron cells from rat hippocampi. We have acquired data with a confocal and a multiphoton microscope. Multiphoton data were acquired with a customized multiphoton Galvo microscope, with cells loaded with Alexa Fluor 594 dye. We collected twelve image datasets consisting of seven or more partially overlapping stacks of approximate size 640 × 480 × 150 each, with a voxel size of 0.3 µm in the x-y plane and 1.0 µm along the z axis. The excitation wavelength was set to 810 nm, while the lens and index of refraction both correspond to water. In addition, we acquired data with an Olympus FluoviewTM confocal microscope, with each cell loaded with Alexa Fluor 555 dye. The confocal imaging datasets consist of three or more partially overlapping stacks of 1024 × 1024 × 110 voxels each, with a voxel size of 0.25 µm in the x and y axes and 0.5 µm along the z axis. The excitation and emission wavelengths were set to 543 nm and 567 nm, respectively, while the numerical aperture and pinhole diameter were set to 0.9 and 150 µm, respectively. Both the lens and index of refraction correspond to water.

Figure 3.1: Neuron morphology.

Realistic modeling of neuronal morphology includes centerline extraction from dendrites, which are highly irregular tubular structures. The challenges of centerline extraction for these structures include: i) a poor signal-to-noise ratio; ii) objects of interest at the limit of optical imaging (imaging resolution is typically on the order of 0.2 µm); iii) a non-homogeneous distribution of optical intensity throughout the cell; and, most importantly, iv) an extreme variation in shape among dendrites.

We assume that the mathematical model of an acquired image I is the following:

I(x, y, z) = (P ∗ O)(x, y, z) + N (x, y, z), (3.1)

where P is the Point Spread Function (PSF) of the optical microscope, O is the true image, and N is a source of noise.

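The image-formation model of Eq. 3.1 can be simulated in 1D with FFT-based convolution (by the convolution theorem); the Gaussian PSF width and noise level below are illustrative, not the microscope's calibrated values:

```python
import numpy as np

# Minimal 1D simulation of the model I = (P * O) + N (Eq. 3.1).
n = 64
O = np.zeros(n)
O[30:34] = 1.0                                       # "true" object
x = np.arange(n) - n // 2
P = np.exp(-x ** 2 / (2 * 2.0 ** 2))
P /= P.sum()                                         # normalized Gaussian PSF
rng = np.random.default_rng(0)
N = 0.01 * rng.standard_normal(n)                    # additive noise

# Convolution theorem: FFT(P * O) = FFT(P) * FFT(O); ifftshift recenters
# the PSF so the convolution does not shift the object.
I = np.real(np.fft.ifft(np.fft.fft(O) * np.fft.fft(np.fft.ifftshift(P)))) + N
```

Because P is normalized, the blur conserves total intensity while lowering the peak, which is the behavior deconvolution tries to invert.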
3.2 Approach for Automatic Cell Reconstruction

Our approach consists of the steps listed below. A detailed description of each step follows (Fig. 3.2):

Figure 3.2: Neuron Morphological Reconstruction System.

1. Deconvolution;

2. Frames-shrinkage denoising;

3. Registration;

4. Dendrite Segmentation; and

5. Morphological Reconstruction.

3.3 Deconvolution

We present a technique to improve image quality by performing deconvolution on the confocal data. For multiphoton imaging this step is not required, since the apparatus of the microscope provides images of better quality than confocal imaging.

Images from 3D fluorescence optical imaging suffer from a number of distortions imposed by the microscope [179, 29]. The point spread function is the response of the optical device to a point source, in the sense that it is a measure of how an ideal point is in reality imaged; it is highly dependent on the microscope optics. In the case of 3D confocal images, the major effect is blurring, an elongation along the z axis of the images. To alleviate this effect, an experimental PSF is estimated by performing an imaging experiment on a volume of sparsely-scattered latex beads absorptive of the fluorescent dye used in these experiments. In general, a model for a 3D fluorescence optical image I can be expressed in terms of a PSF P and different noise sources N (thermal, photon shot, and biological background noise), and it can

be expressed as:

I(x, y, z) = ∫∫∫ P(x − x1, y − y1, z − z1) O(x1, y1, z1) dx1 dy1 dz1 + N(x, y, z)    (3.2)
           = (P ∗ O)(x, y, z) + N(x, y, z),    (3.3)

where each integral runs from −∞ to +∞.

where O is the original object. The problem of deconvolution then consists of recovering the image O. By considering the dual problem in Fourier space, Eq. 3.2 can be expressed as:

Î(u, v, z) = Ô(u, v, z) P̂(u, v, z) + N̂(u, v, z),    (3.4)

where Î, Ô, P̂, N̂ are the Fourier transforms of I, O, P, N, respectively. Under ideal circumstances, where there is no noise, one solution can be expressed as:

Ô(u, v, z) = Î(u, v, z) / P̂(u, v, z);    (3.5)

this is the so-called Fourier-quotient method. However, this method is very sensitive to noise and cannot be applied in real applications.
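To see why, note that wherever |P̂| is small the quotient divides the noise term N̂ by a near-zero number. A common remedy (shown here only for illustration; it is not the maximum-likelihood method used in this work) is to damp the quotient with a small regularization constant. A minimal numpy sketch, with `eps` an assumed tuning parameter:

```python
import numpy as np
from numpy.fft import fftn, ifftn

def fourier_quotient(image, psf, eps=1e-3):
    """Regularized Fourier-quotient deconvolution (Eq. 3.5 with damping).

    Dividing by P-hat directly amplifies noise wherever |P-hat| is small;
    the eps term caps that amplification (a Wiener-style stabilizer).
    The psf is expected centered and of the same shape as the image.
    """
    I_hat = fftn(image)
    P_hat = fftn(np.fft.ifftshift(psf))
    O_hat = I_hat * np.conj(P_hat) / (np.abs(P_hat) ** 2 + eps)
    return np.real(ifftn(O_hat))
```

With eps = 0 this reduces exactly to Eq. 3.5, including its instability.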

To deconvolve the acquired data, we robustly estimate an experimental PSF. First, latex beads immersed in tissue (the medium) were imaged with an Olympus Fluoview™ confocal microscope. The diameter of the beads was 0.2 µm; the resolution was set to 0.076 µm in the x and y axes and 0.2 µm along the z axis, giving a voxel aspect ratio of about 1:1:3. The excitation and emission wavelengths were set to 490 nm and 520 nm, respectively, while the numerical aperture and pinhole diameter were set to 0.9 and 150 µm, respectively. Both the lens and the index of refraction correspond to water. The same microscope parameters were used to acquire the neuron datasets. Second, to robustly estimate the PSF, individual beads from a given 3D image stack were averaged; Fig. 3.3 compares beads at different depths in the same image stack. Third, deconvolution was performed with the Huygens software using a standard Maximum Likelihood Estimation method [2]. Fig. 3.4 depicts the effect of deconvolution on the average bead obtained from the image stack.
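The bead-averaging step can be sketched as follows; `centers` and the patch half-width are assumed inputs (bead detection itself, e.g. by thresholding and connected components, is outside this sketch):

```python
import numpy as np

def average_beads(stack, centers, half=4):
    """Average bead subvolumes to estimate an experimental PSF.

    `centers` are (z, y, x) bead locations found beforehand; averaging
    individual beads suppresses the noise of any single bead image.
    """
    patches = []
    for z, y, x in centers:
        p = stack[z - half:z + half + 1,
                  y - half:y + half + 1,
                  x - half:x + half + 1].astype(float)
        if p.shape == (2 * half + 1,) * 3:   # skip beads cut off at borders
            patches.append(p / p.sum())      # normalize each bead
    psf = np.mean(patches, axis=0)
    return psf / psf.sum()                   # unit-integral PSF estimate
```

Normalizing each patch before averaging keeps a dim, deep bead from being dominated by a bright, shallow one.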

Figure 3.3: Comparison of beads. Maximum intensity projections of beads: (a)-(c) x−y view of beads in tissue, at depths z = 69.4 µm, 48.4 µm, and 18.4 µm, respectively; (d)-(f) x−z view of beads in tissue.

Figure 3.4: Deconvolution of the average bead obtained from the beads in Fig. 3.3. (a),(b) projection onto the x−y plane of the average bead after and before deconvolution, respectively; (c),(d) projection onto the x−z plane after and before deconvolution.

3.4 Denoising

Using a general result on lifting frames, we constructed a non-separable 3D frame

capable of robust edge detection from three 1D filters that generate a 1D Parseval

frame [181]. This lifted frame incorporates robust edge detectors along the main

diagonals of the 3D space. Edge information is captured by an ensemble thresholding

approach of the filtered data. The denoising uses a hysteresis thresholding step

and an affine thresholding function that takes full advantage of the filter adaptive

threshold bounds.

3.4.1 Construction of 3D Non-separable Parseval Frame

In this section, we describe a method to construct 3D non-separable frames in Hilbert spaces using existing frames based on digital filters. More specifically, we first define the notion of a Parseval frame, then briefly describe the mathematical framework for constructing and lifting frames using filterbanks, and finally construct the new 3D non-separable filterbanks.

Parseval Frame

The theory of frames in Hilbert spaces plays a fundamental role in signal and im-

age processing. A frame is a redundant collection of vectors in a given Hilbert

space, generalizing the notion of orthogonal basis. A frame satisfies the property

of perfect reconstruction: any vector of the Hilbert space can be recovered from its

inner products with the frame vectors. The linear frame transform, from the initial space to the space of coefficients, obtained by taking the inner product of a vector with the frame vectors, is injective and hence admits a left inverse [41]. Perfect reconstruc-

tion together with redundancy make the use of frames successful in a broad spectrum

of applications, including signal/image processing and quantum information theory

[196, 198, 25, 17, 63].

Let us recall that a digital filter is a vector K ∈ `2 (Zd ) for which the Fourier

transform K̂ = k is a bounded function. This filter acts on every digital signal by

the convolution operator CK , defined as CK (s) = s ∗ K, for all s ∈ `2 (Zd ). On `2 (Zd )

we will also consider the translation operator Tn , defined by Tn s(m) = s(m − n), for

every n, m ∈ Zd and s ∈ `2 (Zd ).

A frame in a Hilbert space H, with inner product < ·, · >, is a collection of

vectors {vi }i∈I ⊂ H, which satisfies the frame inequalities:

A‖x‖² ≤ Σ_{i∈I} |⟨x, vi⟩|² ≤ B‖x‖², for all x ∈ H,    (3.6)

where A ≤ B are positive constants called frame bounds. For our purposes, I is a

countable index set. A Parseval frame is a frame for which A = B = 1; for this frame

the inequality above becomes the well-known Parseval identity. Parseval frames

generalize orthogonal bases: the same vectors used in analysis (decomposition) can

be used in synthesis (reconstruction). In summary, using the notation above, for a

Parseval frame {vi }i∈I ⊂ H we have the following perfect reconstruction formula:

x = Σ_{i∈I} ⟨x, vi⟩ vi, for all x ∈ H.    (3.7)
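A small numerical illustration of Eqs. 3.6-3.7, using the classical three-vector "Mercedes-Benz" Parseval frame in R² (chosen here purely for illustration; it is not part of the construction that follows):

```python
import numpy as np

# Three equiangular vectors scaled by sqrt(2/3): a redundant Parseval
# frame for R^2 (more vectors than the dimension, yet Eq. 3.7 holds).
angles = 2 * np.pi * np.arange(3) / 3
V = np.sqrt(2 / 3) * np.column_stack([np.cos(angles), np.sin(angles)])

x = np.array([1.7, -0.3])
coeffs = V @ x                # analysis: the inner products <x, v_i>
x_rec = V.T @ coeffs          # synthesis: sum_i <x, v_i> v_i

assert np.allclose(x_rec, x)                              # Eq. 3.7
assert np.isclose(np.sum(coeffs ** 2), np.sum(x ** 2))    # Parseval identity
```

Any vector is recovered exactly from three coefficients, even though two would suffice for a basis; that redundancy is what the denoising algorithm later exploits.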

Augmenting a Frame

The power and efficiency of frames comes from their redundancy, a key ingredient

in accurate reconstruction. To enhance redundancy, we will add elements to a given

frame in a structurally stable way, thus obtaining new improved frames. We use our

previous framework for augmenting frames [196, 198].

A finite set {K0 , ..., Kl } of `2 (Zd ) generates a frame in `2 (Zd ) if the family

{Tn Kj : n ∈ Zd , j = 0, ..., l} of all possible translates is a frame in `2 (Zd ). The

following result provides a characterization of the sets of digital filters that generate frames.

Proposition 1. Let Kj, j = 0, ..., l, be a finite set of digital filters. Then {Tn Kj : n ∈ Zd, j = 0, ..., l} is a frame of ℓ2(Zd) if and only if there exist constants A, B > 0 such that for almost every w ∈ [−π, π)d the following inequality holds:

A ≤ Σ_{j=0}^{l} |K̂j(w)|² ≤ B.    (3.8)

Moreover, this frame is a Parseval frame if and only if A = B = 1.

As a consequence of this characterization, we have the following useful Corollary, which allows us to augment frames. For a given positive integer Q, let U be a 2πZd-periodic (Q + 1) × (R + 1) matrix-valued function whose entries (U(ω))q,r are continuous. The matrix multiplication:

U(ω)(K̂0(ω), K̂1(ω), . . . , K̂R(ω))t = (F̂0(ω), F̂1(ω), . . . , F̂Q(ω))t,    (3.9)

defines a new family of digital filters Fq , with q = 0, 1, . . . , Q. As a consequence of

Proposition 1, under certain assumptions on the matrix U , the new family of filters

will generate a frame.

Corollary 1. If there exists A > 0 such that for almost every ω ∈ [−π, π)d we have

Akxk ≤ kU (ω)xk for all x ∈ CR+1 , then the integer translates of the new family of

digital filters Fq , q = 0, 1, . . . , Q also form a frame for `2 (Zd ). If, in particular, U (ω)

is an isometry for almost every ω ∈ [−π, π)d , then the resulting and the original

frames have the same frame bounds.

Note that Corollary 1 is a general tool that can be used to construct frames in any dimension. Below, we construct a separable Parseval frame for the Hilbert space ℓ2(Z3), and then augment it with the lifting scheme from Corollary 1 using a constant matrix U that implements an isometry (i.e., preserves distances). As a result, we are able to build a non-separable Parseval frame incorporating multidirectional edge detectors.

Non-separable 3D Frame from Multi-directional Filters

We begin our construction with the 1D frame described by Ron and Shen [181] as the simplest example of a compactly supported tight spline frame. Consider the following Riesz scaling function φ and corresponding wavelets ψ1 and ψ2:

φ̂(w) = sin²(w/2) / (w/2)²,   ψ̂1(w) = i√2 cos(w/4) sin³(w/4) / (w/4)²,   and   ψ̂2(w) = −sin⁴(w/4) / (w/4)².    (3.10)

The associated low-pass k0, band-pass k1, and high-pass k2 filters are defined as:

k0(ω) = cos²(ω/2),   k1(ω) = i(√2/2) sin(ω),   and   k2(ω) = sin²(ω/2).    (3.11)
The filters are normalized so that:

|k0 (ω)|2 + |k1 (ω)|2 + |k2 (ω)|2 = 1, (3.12)

for all ω ∈ [−π, π). By Proposition 1, the translates Tn (n ∈ Z) of the corresponding impulse responses K0 = (1/4)[1, 2, 1], K1 = (1/4)[√2, 0, −√2], and K2 = (1/4)[−1, 2, −1] form a 1D Parseval frame for ℓ2(Z). Note that K1 is a first-order singularity detector (edge detector), while K2 is a second-order singularity detector.

To extend to 3D, we take the 3-fold tensor product of this frame with itself.

Fourier calculus, the perfect reconstruction condition (Eq. 3.7) and Proposition 1 are

needed to show that the Fourier transforms of:

k_{p·3² + q·3 + r}(ω1, ω2, ω3) = kp(ω1) kq(ω2) kr(ω3),    (3.13)

with p, q, r ∈ {0, 1, 2}, are digital filters that generate a separable 3D Parseval frame

with 27 filters. The term separable refers to the fact that the 3D filters are obtained by

direct multiplication of filters from lower dimensions, in our case 1D filters. This set

of 3D filters preserves the perfect reconstruction property, and incorporates detection

of first-order singularities along the coordinate axes in the 3D space. Next, we will

further augment this frame to incorporate additional first-order singularity detectors.
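The separable construction of Eq. 3.13 can be checked numerically. The sketch below builds the 27 tensor-product filters from the impulse responses given above and verifies the normalization of Eq. 3.12 on a frequency grid:

```python
import numpy as np

# 1D impulse responses of the spline filters (from Eq. 3.11).
K = [np.array([1.0, 2.0, 1.0]) / 4,
     np.array([np.sqrt(2), 0.0, -np.sqrt(2)]) / 4,
     np.array([-1.0, 2.0, -1.0]) / 4]

# The 27 separable 3D filters k_{p*9 + q*3 + r} of Eq. 3.13.
filters3d = [np.einsum('i,j,k->ijk', K[p], K[q], K[r])
             for p in range(3) for q in range(3) for r in range(3)]

# Eq. 3.12 on a frequency grid: the squared magnitudes of the 1D
# frequency responses sum to one at every frequency, which carries
# over to the 3D tensor products and yields the Parseval property.
w = np.linspace(-np.pi, np.pi, 257)
k0 = np.cos(w / 2) ** 2
k1 = 1j * (np.sqrt(2) / 2) * np.sin(w)
k2 = np.sin(w / 2) ** 2
total = np.abs(k0) ** 2 + np.abs(k1) ** 2 + np.abs(k2) ** 2
assert np.allclose(total, 1.0)
```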

We focus our attention on the following set of filters: K1, K3, K9, the impulse responses of k1, k3, k9, respectively. These separable, unidirectional edge-detector operators are tuned to detect edges along the three principal axes. We wish to augment our frame with non-separable filters capable of detecting edges along other desired directions (e.g., the main diagonals in 3D space). Let θ ∈ [0, 2π) be the angle in 3D measured counterclockwise from (0, 1, 0) towards (0, 0, 1) while on the positive x-axis, and let ϕ ∈ [0, π/2]. Then,

K(θ, ϕ) = K9 cos ϕ + K3 cos θ sin ϕ + K1 sin θ sin ϕ,    (3.14)

represents, up to normalization, a unidirectional edge detector along the vector (cos ϕ, cos θ sin ϕ, sin θ sin ϕ), while K9 corresponds to the choice ϕ = 0, K3 corresponds to ϕ = π/2, θ = 0, and K1 corresponds to ϕ = π/2, θ = π/2.

To incorporate additional directions to the frame, we would need to add more

choices of pairs of angles in such a way that the resulting set of filters and their

translates still form a Parseval frame. We apply Corollary 1 to the above frame, with R = 27 and Q = R + N, N being the number of new directional filters. We choose U to be a constant (R + N) × R matrix implementing an isometry; since it preserves distances, U automatically satisfies the hypothesis required in Corollary 1. We may reorganize the filters into a vector so that K9, K3, and K1 are the last 3 elements, in this order. We will only use the last 3 columns of U to augment the frame, since only these columns affect the last 3 elements of

the input vector. Therefore, we can write U as the block matrix:

U = ( I_{R−3}        0_{R−3,3}
      0_{N+3,R−3}    U1 ),    (3.15)

where Ik is the k × k identity matrix and 0k,l is the k × l zero matrix. U is an isometry whenever U1 is, and we define U1 as:
U1 = ( a             0                  0
       0             a                  0
       0             0                  a
       b·cos ϕ1      b·cos θ1 sin ϕ1    b·sin θ1 sin ϕ1
       ...           ...                ...
       b·cos ϕN      b·cos θN sin ϕN    b·sin θN sin ϕN ).    (3.16)

For U1 to be an isometry, it suffices that its columns are orthogonal and of norm one:

b² Σ_{i=1}^{N} cos ϕi sin θi sin ϕi = 0,    a² + b² Σ_{i=1}^{N} cos² ϕi = 1,    (3.17)

b² Σ_{i=1}^{N} cos ϕi cos θi sin ϕi = 0,    a² + b² Σ_{i=1}^{N} cos² θi sin² ϕi = 1,    (3.18)

and finally:

b² Σ_{i=1}^{N} cos θi sin θi sin² ϕi = 0,    a² + b² Σ_{i=1}^{N} sin² θi sin² ϕi = 1.    (3.19)

We have augmented the frame with two choices of angles and constants a, b. One of our choices for the angles is N = 4, ϕi = π/2, i = 1, ..., 4, and θi = π/4 + (i − 1) · π/2. This leads to a = √3/2 and b = √2/4, and the filters obtained by applying U to (K̂0(ω), K̂1(ω), . . . , K̂R(ω))t are edge detectors along the main diagonals in 3D.

Table 3.1 presents our choice of U by listing the result of applying the operations associated with the augmentation process. We call the resulting filterbank the UH Lifted Spline Filterbank (UH-LSF).

Table 3.1: Lifted Spline Filterbank: Selected Frame Elements

F1 = (√3/2) K1
F3 = (√3/2) K3
F9 = (√3/2) K9
F27 = (1/4)(K9 + K3 + K1)
F28 = (1/4)(K9 − K3 + K1)
F29 = (1/4)(K9 + K3 − K1)
F30 = (1/4)(K9 − K3 − K1)

All the other 23 frame elements that are not

listed remain unchanged. The new frame incorporates F1, F3, F9, which are scaled versions of the original separable filters; they are edge detectors capable of detecting edges parallel to the coordinate axes. It also covers a set of new directions through the non-separable filters (F27, F28, F29, and F30), which are tuned along the main diagonals, as shown in Fig. 3.5. For example, F27 estimates the directional derivative in the direction of the vector (1, 1, 1)t, while F30 estimates the directional derivative in the direction of the vector (1, −1, −1)t.

Figure 3.5: Depiction of the directional derivatives.

Another choice for the angles is N = 8, ϕi = π/2, and θi = (i − 1) · π/4 for i = 1, ..., 8. For this choice one can easily verify that a = √2/2 and b = √2/4. This frame contains, in addition to the edge detectors along the main diagonals, detectors for edges along all the diagonals in the coordinate-axes planes.

A similar construction can be performed starting from any set of 1D filters that generate a frame. For example, the filters corresponding to the Haar scaling and wavelet functions are (1/√2)[1, 1] and (1/√2)[1, −1], respectively. The first filter is the low-pass, averaging filter and retains most of the energy. The second filter is the detail filter, again an edge detector. Since they are created via a multi-resolution analysis, the integer translates of these filters will again generate a 1D frame.

Figure 3.6(a) depicts the maximum intensity projection of a subvolume with

original data while Fig. 3.6(b) depicts the maximum intensity projection along the

z axis of the denoised volume.

3.4.2 Frame-based Denoising

Based on the constructed 3D non-separable filterbank, UH-LSF, we present a simple but effective algorithm for noise removal in 3D photon-limited images. The algorithm thresholds the noisy frame coefficients based on two adaptive threshold bounds that depend on subsets of the frame elements, which is very different from the traditional wavelet-shrinkage algorithms in the literature. To determine the optimal thresholds and evaluate the performance of the algorithm, we develop a generalized method to construct computational phantoms that resemble real fluorescence microscopy data of neurons.

Figure 3.6: Results of applying our denoising algorithm to confocal imaging of a neuron cell. Maximum intensity projection along the z axis for (a) the original data from confocal images with noise and (b) the image after denoising with our algorithm.

Frame-based Affine Hysteresis Thresholding

Assume that {Fr}r=0,...,R is a filterbank whose integer translates form a Parseval frame for ℓ2(Zd). Let I = {(r, n) : 0 ≤ r ≤ R, n ∈ Zd} be the index set and let v(r,n) = Tn(Fr) for all (r, n) ∈ I. With this notation, our assumption is that {v(r,n) | (r, n) ∈ I} is a Parseval frame. Let X represent a d-dimensional input noisy signal (d ≥ 1). By padding X in all directions with zeros, we can embed X in ℓ2(Zd); we will always consider input signals as elements of ℓ2(Zd). A simple computation shows that the perfect reconstruction condition (Eq. 3.7) can be written as:

X = Σ_{r=0}^{R} (X ∗ F_r^t) ∗ Fr,    (3.20)

where F_r^t is a copy of Fr flipped about the origin: F_r^t(n) = Fr(−n) for all n ∈ Z3. To see this, let Yr = X ∗ F_r^t. Then {Yr}r contains all the frame coefficients of the translates of Fr. Furthermore:

Yr(n) = Σ_{m∈Z3} X(m) F_r^t(n − m) = Σ_{m∈Z3} X(m) Fr(m − n) = ⟨X, Tn(Fr)⟩ = ⟨X, v(r,n)⟩.    (3.21)
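Perfect reconstruction in the form of Eq. 3.20 can be verified directly in 1D with the spline filters K0, K1, K2: analysis is a correlation (convolution with the flipped filter) and synthesis a convolution, summed over the filterbank.

```python
import numpy as np

K = [np.array([1.0, 2.0, 1.0]) / 4,
     np.array([np.sqrt(2), 0.0, -np.sqrt(2)]) / 4,
     np.array([-1.0, 2.0, -1.0]) / 4]

rng = np.random.default_rng(1)
x = rng.standard_normal(64)

# Analysis: X * F_r^t, i.e. convolution with the flipped filter ...
Y = [np.convolve(x, k[::-1]) for k in K]
# ... synthesis: convolve the coefficients with F_r and sum over r.
x_rec = sum(np.convolve(y, k) for y, k in zip(Y, K))

# The full convolutions grow the signal by 2 samples per pass; the
# central part reproduces x exactly (perfect reconstruction, Eq. 3.20).
assert np.allclose(x_rec[2:-2], x)
```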


Let Λr be the maximum of the absolute values of the frame coefficients corresponding to Fr. For denoising purposes, we set a low threshold bound B1(r) = a1 · Λr and an upper threshold bound B2(r) = a2 · Λr, where a1 and a2 are constants whose optimal values we determine below. To take full advantage

of the redundancy of the frame, we implement a hysteresis algorithm that modifies

the frame coefficients as follows. If a coefficient's absolute value |c(r,n)| exceeds B2(r), the

coefficient remains unmodified. If this absolute value is less than B1 (r), then the

coefficient will be replaced by 0. If the absolute value is between B1 (r) and B2 (r), to

decide whether to retain this coefficient we check all other filters for coefficients that

correspond to the same spatial location (i.e. voxel in 3D) given by n. If there is at

least one more filter Fr̃ with r̃ 6= r and for which |c(r̃,n) | is above the lower threshold

bound B1 (r̃), the coefficient c(r,n) is retained but modified with an affine threshold

function given by:

ρ_{B1,B2}(x) = (B2 / (B2 − B1)) · (x − sgn(x) B1).    (3.22)

In summary, the affine hysteresis thresholding is formulated as follows:

c̃(r,n) =  c(r,n),                    if |c(r,n)| > B2(r);
          ρ_{B1(r),B2(r)}(c(r,n)),   if B1(r) < |c(r,n)| ≤ B2(r) and |c(r̃,n)| > B1(r̃) for some r̃ ≠ r;
          0,                         otherwise.    (3.23)

The choice of the affine function was motivated by the fact that it enhances the smoothness of the structure of interest. With Ỹr(n) = c̃(r,n) containing the altered frame coefficients, the reconstructed volume is X̃ = Σr Ỹr ∗ Fr.
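A direct implementation of Eqs. 3.22-3.23 for a list of coefficient arrays might look as follows (a sketch only; the actual system applies this inside the multi-level decomposition of Algorithm 1 below):

```python
import numpy as np

def affine_hysteresis(coeffs, a1=0.5, a2=0.75):
    """Affine hysteresis thresholding of frame coefficients (Eq. 3.23).

    `coeffs` is a list of same-shaped arrays, one per filter F_r, with
    per-filter bounds B1(r) = a1 * max|c_r| and B2(r) = a2 * max|c_r|.
    """
    B1 = [a1 * np.abs(c).max() for c in coeffs]
    B2 = [a2 * np.abs(c).max() for c in coeffs]
    # For the voting step: which voxels exceed each filter's low bound.
    above_low = [np.abs(c) > b1 for c, b1 in zip(coeffs, B1)]
    out = []
    for r, c in enumerate(coeffs):
        mag = np.abs(c)
        keep = mag > B2[r]                       # large: keep unmodified
        mid = (mag > B1[r]) & ~keep              # intermediate band
        # Supported if some OTHER filter is above its low bound there.
        support = np.zeros(c.shape, dtype=bool)
        for s, al in enumerate(above_low):
            if s != r:
                support |= al
        # Eq. 3.22: affine shrinkage mapping [B1, B2] onto [0, B2].
        affine = B2[r] / (B2[r] - B1[r]) * (c - np.sign(c) * B1[r])
        out.append(np.where(keep, c, np.where(mid & support, affine, 0.0)))
    return out
```

Note how an intermediate coefficient survives only when a neighboring filter "votes" for the same voxel, which is the redundancy argument made above.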


In contrast with the classical wavelet threshold approach, our method takes ad-

vantage of filter information with a voting-based correction scheme. Augmented

frames provide at each voxel location a more detailed information than a standard

separable wavelet decomposition and the correction scheme will use this detailed in-

formation to decide if a coefficient will be modified or not. The proposed algorithm

can be summarized as follows:

Algorithm 1. Input: The noisy data X and the number of decomposition levels L. Output: Denoised data.

Step 1: Recursively decompose the volume X up to level L using the filterbank to obtain {Yr}r.

Step 2: Compute {Ỹr}r by applying the approach described in Eq. (3.23).

Step 3: Reconstruct X̃ from {Ỹr}r using the same filterbank.

As compared with most existing wavelet-based denoising algorithms, our algorithm processes all high-frequency subbands but keeps the lowpass subband unchanged.

Computational Phantom and Validation Strategy

In the following, we will outline the construction of computational phantoms for

fluorescence microscopy data that can be used for the validation of the performance

of a denoising algorithm. We will use these computational phantoms to determine

the optimal threshold bounds for our denoising algorithm.

The construction has the following steps: i) create binary volume; ii) simulate

intensity decay; iii) create tubular neighborhoods; and iv) add noise.

Create binary volume: Based on the Duke-Southampton database of cells [1], we create a volume sampled at the desired resolution where the voxels occupied by the cylinders are labeled 1 and background voxels are labeled 0 (note that such a binary volume can be used, for example, as the ground truth for dendrite morphology reconstruction tasks).

Simulate intensity decay: Construct a volume in which the intensity decays linearly in the voxels that correspond to the neuron, simulating the diffusion of the dye in the dendrites. The collection of cylinders is represented in a tree-like description with the root in the soma. The linear intensity decay is based on the tree-distance of the cylinders to the soma. Such a volume is used as the original for the denoising task considered here.

Create tubular neighborhoods: Create tubular neighborhoods around the neuron based on a prescribed ratio of volumes and surfaces. We can create several neighborhoods in which the neuron occupies, for example, at least 5% of the total volume of the neighborhood. These neighborhoods are used both to create several types of synthetic noise and to define accurate metrics.

Add noise: Add decaying speckle noise with randomly generated statistical parameters to the tubular neighborhoods. To obtain the local Poisson noise, we apply Poisson noise with variance equal to, for example, 0.1 of the local intensity. To compensate for the intensity gap between the layers of noise, we filter the image with zero-mean Gaussian noise with a local-intensity-dependent variance. For more realistic effects, we convolve the volume with the theoretical point spread function derived using the parameters of the microscope used to acquire the real data.
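The noise-addition step can be sketched as below; the straight tube, the distance cutoff, and the noise parameters are illustrative stand-ins for the phantom actually built from the Duke-Southampton morphologies:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

rng = np.random.default_rng(2)

# Binary "neuron" volume: a straight tube stands in for the morphology.
binary = np.zeros((32, 32, 32))
binary[14:18, 14:18, :] = 1.0

# Tubular neighborhood: voxels within a fixed distance of the structure.
dist = distance_transform_edt(binary == 0)
neighborhood = dist < 6

# Poisson shot noise inside the neighborhood, plus zero-mean Gaussian
# noise everywhere, then a blur with a (theoretical) Gaussian PSF.
phantom = 100.0 * binary
noisy = phantom.copy()
noisy[neighborhood] = rng.poisson(phantom[neighborhood] + 5.0)
noisy += rng.normal(0.0, 2.0, noisy.shape)
noisy = gaussian_filter(noisy, sigma=(0.8, 0.8, 2.0))
```

Restricting the shot noise to the neighborhood mimics the layered noise structure described above while keeping the far background clean.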

Figure 3.7(a) depicts a binary volume with dimensions of 374 × 158 × 57 and isotropic voxels, created using the description of n120.swc from the Duke-Southampton database [1]. Noise is added in several layers of tubular neighborhoods to obtain Fig. 3.7(b), as described above.

Using the constructed computational phantoms, we will determine the optimal

threshold bounds for our denoising algorithm. As pointed out by Dima et al. [56],

for 3D neuron data denoising application, the Mean Square Error (MSE) computed

at the level of the entire volume would not produce good results due to the small

structure-to-volume ratio in images depicting neurons. In fact, Dima et al. do not

consider this metric at all, concluding that its value is meaningless for this class

of volumes. We hence need to define several new metrics that account for sparse

structures of neurons. Our strategy, as outlined in Fig. 3.8, is to consider metrics

in the regions close to the structure, according to tubular neighborhood constructed

above. Our approach permits a local neighborhood evaluation. Let I be the original

and Id be the denoised image, both with dimensions n1 × n2 × n3 . Let χLN be the

indicator function of the local neighborhood in which the metrics are evaluated.

Figure 3.7: Maximum intensity projection of volume data from the synthetic neuron n120. (a) Binary volume; (b) with added noise.

We define new metrics, namely, local neighborhood MSE (LN-MSE), local neighborhood

Root MSE (LN-RMSE), local neighborhood signal-to-noise-ratio (LN-SNR) and local

neighborhood peak-signal-to-noise-ratio (LN-PSNR) as follows:

LN-MSE(I, Id) = ‖(I − Id) · χLN‖₂² / (n1 n2 n3),    (3.24)

LN-RMSE(I, Id) = √(LN-MSE(I, Id)),    (3.25)

LN-SNR(I, Id) = 10 log10 ( ‖I · χLN‖₂² / ‖(I − Id) · χLN‖₂² ),    (3.26)

LN-PSNR(I, Id) = 20 log10 ( max I / LN-RMSE(I, Id) ).    (3.27)
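Eqs. 3.24-3.27 translate directly into code; in the sketch below, `chi` is the boolean indicator of the tubular neighborhood:

```python
import numpy as np

def ln_metrics(I, Id, chi):
    """Local-neighborhood metrics of Eqs. 3.24-3.27.

    `chi` is the (boolean) indicator of the tubular neighborhood.
    """
    diff2 = np.sum(((I - Id) * chi) ** 2)
    ln_mse = diff2 / I.size                    # Eq. 3.24 (n1*n2*n3 voxels)
    ln_rmse = np.sqrt(ln_mse)                  # Eq. 3.25
    ln_snr = 10 * np.log10(np.sum((I * chi) ** 2) / diff2)   # Eq. 3.26
    ln_psnr = 20 * np.log10(I.max() / ln_rmse)               # Eq. 3.27
    return ln_mse, ln_rmse, ln_snr, ln_psnr
```

Because the indicator masks the error term, voxels far from the structure cannot dilute the score, which is exactly the concern raised by Dima et al.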

Similarly, for the evaluation of preservation of structure, we can also define the

local confusion matrix [118, 186].

Figure 3.8: Depiction of the local neighborhood associated with noise removal.

Obviously, the lower and upper thresholds of our algorithm play a key role in noise removal from photon-limited data. It is natural to determine the optimal thresholds using the computational phantoms constructed above. The simplest and most straightforward criterion is to choose the thresholds that lead to the minimum of the LN-MSE. Experimentally, we found that the optimal value for the lower threshold is in the range [0.5, 0.6]Λr, and for the upper threshold in the range [0.7, 0.8]Λr. In addition, we found that these optimal thresholds do not change significantly with the data and noise considered, which is indeed desirable for practical applications. We set the optimal lower and upper thresholds to 0.5Λr and 0.75Λr, respectively.

Figure 3.9: Registration of 3 volume stacks.

3.5 Registration

Many dendrites are larger than the typical field of view of laser-scanning microscopes. Multiple image volumes are therefore necessary to fully capture the neuron structure, and we are thus required to align and merge the multiple datasets into a single volume. The experimentalist supplies estimated x, y, z offsets between each stack (which are obtainable when moving the microscope from one area of interest to the next). To measure similarity during registration, we use the sum of mean-squared differences over the voxels of the two images. This measure is then minimized using limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS-B) minimization with simple bounds [35, 270, 36].
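The registration step can be sketched as follows, refining the experimentalist's offset by minimizing the mean squared voxel difference with bounded L-BFGS-B (scipy is assumed here, and the ±5-voxel bounds are an illustrative choice):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def register_offset(fixed, moving, offset0):
    """Refine an estimated (x, y, z) stack offset by minimizing the mean
    squared voxel difference with bounded L-BFGS-B.

    `offset0` is the experimentalist's initial guess; the +/- 5 voxel
    bounds are an illustrative choice, not a fixed system parameter.
    """
    def cost(t):
        # Resample the moving stack at the candidate offset.
        moved = nd_shift(moving, t, order=1, mode='constant')
        return np.mean((fixed - moved) ** 2)

    res = minimize(cost, offset0, method='L-BFGS-B',
                   bounds=[(t - 5.0, t + 5.0) for t in offset0])
    return res.x
```

Bounding the search around the supplied guess keeps the optimizer from drifting to a spurious overlap between stacks.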

3.6 Dendrite Segmentation

We now derive a general structural measure to enhance regular and irregular volumetric structures. The key idea is to use prior knowledge of the topology of a representative tubular-like structure to learn an association rule between structural features (the eigenvalues of the structure tensor for a given tubular object) and the tubular structure itself. The association rule assigns high probability values inside the volumetric tubular object (maximal at the center); low probability values at the border (minimal at the border itself); and zero probability values outside the tubular structure (the background).

Figure 3.10: A typical dendrite segment.

The enhancement and detection of tube-like structures is a crucial step in a broad number of image analysis applications. Most of the existing algorithms assume an elliptical or semi-elliptical shape for the cross section of the object of interest (typically vessels). However, when detecting objects with extremely high irregularities in shape (i.e., not semi-elliptical), these algorithms may not perform well, since the assumptions of an ideal or elliptical cylinder no longer hold, due to:

1. irregular shape of the dendrites: dendrites do not present a circular or elliptical cross-sectional shape, as vessels do;

2. adjoining structures: these are structures attached to the dendrites that play a crucial role in neuron physiology; therefore it is desirable to enhance and detect them;

3. image formation: confocal imaging is based on exciting photons from a fluorescent substance, leading to a different noise model as compared to CT or MR imaging.

Accurate detection of such irregular shapes is thus necessary for a comprehensive morphological description [225, 220, 187, 185]. The key advantages of our method


1. no multi-scale analysis is needed: in order to detect tubular and semi-tubular

structures, multi-scale analysis is avoided by integrating in the training phase

not only the medial axis of the tubular structure but the entire tubular object

as a whole (i.e., including properties such as variations in diameter, intensity,

curvature, and noise);

Figure 3.11: Overview of our algorithm for dendrite detection.

2. learning structure and noise: using this machine learning approach, not only

are geometrical shapes being learned, but also the noise variations intrinsic to

the imaging modality;

3. generality: the application of our method is straightforward for detecting dif-

ferent tubular shapes in data from various imaging modalities.

We propose a method for the enhancement of 3D objects with irregular cross

sectional shape and considerable radius variations. Our method is based on statistical

learning theory. Specifically, Support Vector Machines (SVMs) are used to learn

tube-like shapes and to estimate the posterior probability distribution for a given

tubular structure. An overview of our Learning Irregular Tubular Structures (LITS)

algorithm is depicted in Fig. 3.11.

3.6.1 Anisotropic Tubular Feature Extraction

Figure 3.12: Anisotropic structural features. (a) maximum intensity projection in the x−y plane of a typical dendrite segment; (b)-(d) eigenvalues λ1 to λ3.

In this subsection we present our approach to extracting anisotropic structural features for volumetric tubular objects. We consider the general case of tubular objects with an anisotropic aspect ratio; the particular case of tubular objects with an isotropic aspect ratio is covered by this general formulation.

We derive two classes of tubularity measures obtained from: i) dendrites; and ii) a synthetic model, depicted in Fig. 3.13(a), whose morphological properties include: i) variation of intensity, ii) radius variation from 0.5 to 1.5 µm, iii) a variety of branching sections, and iv) high- and low-curvature segments. The voxel size was isotropic and set to 1 µm.

To the best of our knowledge, only McIntosh et al. [143] have reported the extraction of structural tubular features based on estimation of the Hessian matrix, and only for the segmentation of the spinal cord in MRI data. Rather than performing expensive (and sometimes unfeasible) data resampling, we propose to estimate structural features by taking into consideration the nature of confocal and multiphoton images.

Our approach is to estimate tubularity features from the eigenvalues of the Hessian matrix, estimating the second partial derivatives at the acquired resolution (especially in the z axis). Classically, structural features in 2D and 3D have been computed on isotropic data [71, 189, 134]. We should emphasize that confocal and multiphoton data are anisotropic by nature, with an aspect ratio in the x, y, and z axes of 1:1:3; hence we construct the Hessian matrix based on the existing aspect ratio.

For a fixed σxy in the x, y axes and a fixed σz in the z axis, the Hessian matrix is computed as:

∇²I(x; σxy; σz) = ( Ixx(x; σxy; σxy)   Ixy(x; σxy; σxy)   Ixz(x; σxy; σxy)
                    Iyx(x; σxy; σxy)   Iyy(x; σxy; σxy)   Iyz(x; σxy; σxy)
                    Izx(x; σz; σxy)    Izy(x; σz; σxy)    Izz(x; σz; σxy) ),    (3.28)

where, for example,

Ixy(x; σxy; σxy) = {(∂²/∂x∂y) G(x; σxy; σxy)} ∗ I(x),    (3.29)

Izx(x; σz; σxy) = {(∂²/∂z∂x) G(x; σz; σxy)} ∗ I(x),    (3.30)

represent the approximations to the second partial derivatives after convolving the image I with an "anisotropic" Gaussian function with standard deviations σxy in the x-y plane and σz in the z axis. Let λ1(x; σxy; σz), λ2(x; σxy; σz), λ3(x; σxy; σz) be the eigenvalues of ∇²I(x; σxy; σz).

We observe that eigenvalues λ1 , λ2 , λ3 of O2 I(x; σxy ; σz ) are real and positive

since the matrix O2 I(x; σxy ; σz ) is symmetric and positive-definite. The information

derived from the eigenvalues of Jσxy ;σz encodes structural information in a local neigh-

borhood controlled by the parameter σ. If we order the eigenvalues λ1 ≤ λ2 ≤ λ3 then

different combinations reveal structural information. For example, if λ1 ≈ λ2 ≈ λ3 ≈ 0 then no structure is present; if λ1 ≈ 0, |λ1| ≪ |λ2|, and λ2 ≈ λ3, then the structure resembles an 'ideal tubular structure' (Frangi et al. [71]); and if λ1 > 0 and λ1 ≈ λ2 ≈ λ3, then the structure resembles a blob. From these configurations of the eigenvalues, analytical functions to enhance structural shapes



Figure 3.13: Synthetic volumetric data and isotropic structural features. (a) Synthetic tubular model based on a spline centerline model; (b) a 2D slice of the synthetic model; (c)-(e) estimated eigenvalues λ1, λ2, and λ3.

can be derived (Sato et al. [189]). However, analytical expressions are limited to ideal models, as they represent an ideal point in a geometrical model.
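These ideal-case eigenvalue rules can be made concrete with a small decision function. The sketch below is our own illustration; the thresholds are hypothetical, chosen only to mirror the configurations listed above (the dissertation replaces such hand-made rules with learning in Sec. 3.6.2):

```python
def classify_structure(lams, tol=1e-3):
    """Toy classifier over ordered Hessian eigenvalues l1 <= l2 <= l3,
    following the ideal-model heuristics of Frangi/Sato described above.
    Thresholds are hypothetical, for illustration only."""
    l1, l2, l3 = sorted(lams)
    if max(abs(l1), abs(l2), abs(l3)) < tol:
        return "none"                 # l1 ~ l2 ~ l3 ~ 0: no structure
    if abs(l1) < tol and abs(l1) < 0.1 * abs(l2) and abs(l2 - l3) < 0.5 * abs(l3):
        return "tube"                 # l1 ~ 0, |l1| << |l2|, l2 ~ l3
    if l1 > 0 and abs(l1 - l3) < 0.5 * abs(l3):
        return "blob"                 # l1 > 0, l1 ~ l2 ~ l3
    return "other"
```

Real dendrites, as argued below, fall mostly into the "other" bin, which is precisely why a learned model is preferred.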

In the rest of this chapter we will denote a tubular feature vector (TFV) for a

fixed σ as:

Tσxy ;σz (x) = (λ1 (x; σxy ; σz ), λ2 (x; σxy ; σz ), λ3 (x; σxy ; σz )). (3.31)

For isotropic data we simply consider the case σxy = σz, and we express isotropic tubular feature vectors as:

Tσ (x) = (λ1 (x; σ), λ2 (x; σ), λ3 (x; σ)). (3.32)

3.6.2 Support Vector Machines

SVMs were proposed by Cortes and Vapnik [47, 233] as a method for data classification. SVMs are based on statistical learning theory and estimate a decision function f(x) for any x ∈ Rⁿ. The function f(x) is estimated from a set of training vectors xi ∈ Rⁿ, i = 1, ..., l with labels yi ∈ {−1, 1}. SVMs then solve the following quadratic optimization problem:

$$\min_{w,b,\xi} \;\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}\xi_i \quad (3.33)$$

$$\text{subject to: } y_i\big(\langle w, \phi(x_i)\rangle + b\big) \ge 1 - \xi_i, \qquad \xi_i \ge 0,$$

where the training vectors {xi}i=1,...,l are mapped by the feature map φ : Rⁿ → H into a finite- or infinite-dimensional Hilbert space H, w is the normal vector to the hyperplane that represents the decision boundary, the constant C > 0 is the penalty parameter for the hyperplane separation error, and the ξi are slack variables used to penalize the objective function.

The dual solution to the minimization problem posed in Eq. 3.33 is to maximize

the quadratic form:

$$\max_{\alpha} \;\; \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{l}\alpha_i\alpha_j y_i y_j \langle\phi(x_i),\phi(x_j)\rangle \quad (3.34)$$

$$\text{subject to: } \sum_{i=1}^{l} y_i\alpha_i = 0, \qquad 0 \le \alpha_i \le C.$$

The class prediction for an instance x is then estimated from:

$$f(x) = \mathrm{sign}\Big(\sum_{i=1}^{l} y_i\alpha_i \langle\phi(x_i),\phi(x)\rangle + b\Big). \quad (3.35)$$

The points for which αi > 0 are called the support vectors and lie closest to the

hyperplane. In order to minimize the computational cost, kernels are used to replace

the inner product ⟨φ(xi), φ(xj)⟩. A kernel K : Rⁿ × Rⁿ → R that satisfies the Mercer condition [146] implements a dot product of some feature map φ, i.e.:

K(xi , xj ) = < φ(xi ), φ(xj ) >, (3.36)

and it can be directly used in solving the dual problem.

Conventional SVMs solve a binary classification problem (Eq. 3.35) from the

training data. However, as proposed by Platt [168], SVMs can robustly estimate

a posterior probability density function. The posterior probability p(y = 1|f ) is

approximated by a parametric sigmoid function:

$$p(y=1|f) = \frac{1}{1 + \exp(Af(x) + B)}. \quad (3.37)$$

Let $D_T = D_+ \cup D_-$ be a subset of the l training vectors, where $x \in D_+$ iff f(x) = y = 1 and $x \in D_-$ iff f(x) = y = −1. The parameters A and B are estimated as follows (see [131]):

$$\min_Z F(Z) = -\sum_i \big( t_i \log(p_i) + (1 - t_i)\log(1 - p_i) \big), \quad (3.38)$$

$$p_i = \frac{1}{1 + \exp(A f_i + B)}, \quad f_i = f(x_i), \qquad t_i = \begin{cases} \dfrac{N_+ + 1}{N_+ + 2} & \text{if } y_i = 1, \\[1ex] \dfrac{1}{N_- + 2} & \text{if } y_i = -1, \end{cases}$$

where N+ = |D+ | and N− = |D− |. Our objective is to estimate the probability

density function from DT for different tubular 3D objects. Without loss of generality,

we have selected a Gaussian (RBF) covariance function as the kernel due to its isotropic properties:

$$K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2), \quad \gamma > 0, \quad (3.39)$$

where γ is the scaling (inverse-width) parameter that controls how abruptly the kernel decays with distance.

The vectors Tσ are mapped into a high dimensional space in which the parame-

ters A and B (Eq. 3.38) can be estimated and therefore a probability value p(x) is

calculated from the sigmoid function (Eq. 3.37).
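This train-then-calibrate pipeline can be sketched with scikit-learn, whose `SVC(probability=True)` fits Platt's sigmoid of Eqs. 3.37-3.38 on cross-validated decision values. The feature values below are synthetic stand-ins for real TFVs, not data from the dissertation:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training set: rows are tubular feature vectors (l1, l2, l3)
# with labels +1 (centerline) / -1 (non-centerline); values are synthetic.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0.0, -2.0, -2.0), 0.3, (200, 3)),   # tube-like
               rng.normal((0.0, 0.0, 0.0), 0.3, (200, 3))])    # background
y = np.array([1] * 200 + [-1] * 200)

# probability=True fits the sigmoid p(y=1|f) = 1/(1 + exp(A f + B))
# (Eqs. 3.37-3.38) on cross-validated decision values.
clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True).fit(X, y)
p = clf.predict_proba([[0.0, -2.0, -2.0]])[0][list(clf.classes_).index(1)]
```

For a voxel whose TFV sits at the tube-like mode, the calibrated posterior p is close to 1.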

We propose a training method using the local neighborhoods of the structure

of interest rather than performing training in the entire volume. We define a local

neighborhood for tubular structures for which training and prediction will be per-

formed. This local neighborhood reduces the cardinality of the voxels to be classified

and therefore allows training and prediction to be performed near the decision boundaries in feature space. Let I be a 3D volume and I_T the set of voxels which belong to a tube-like structure in I. Let I_{T,ε} be a local neighborhood of I_T such that I_T ⊂ I_{T,ε} ⊂ I. We define the set of training vectors for a tubular structure as:

$$\mathcal{T}_\sigma = \{T_\sigma(x) : T_\sigma(x) \text{ is a TFV and } x \in I_{T,\varepsilon}\}. \quad (3.40)$$

For training, a tube-like object I_T is selected and manually segmented, and its voxels are assigned labels indicating membership in the tubular object. Our objective is to estimate, using SVM probability outputs, the posterior probability that a given voxel belongs to I_T.

3.6.3 Tubular Shape Learning

Figure 3.14: Labels used to train a synthetic regular tubular model. Labels corresponding to the centerline are marked in white, while labels corresponding to non-centerline voxels are marked in gray (background is excluded from the training set).

In this Subsection we explain how we construct a statistical shape model for a dendrite segment, as well as how we choose the optimal dendrite model in terms of dendrite examples and SVM parameters.

As previously mentioned, dendrites do not follow perfectly cylindrical or elliptical shape patterns; instead they present highly irregular tubular-like patterns in addition to adjoining structures. Tubular measures of the following form have been proposed in the literature (Frangi et al. [71], Sato et al. [189]):

Vm (x) = max{fσ (λ1 (x), λ2 (x), λ3 (x))}, (3.41)

where |λ1| ≤ |λ2| ≤ |λ3| are the ordered eigenvalues of the Hessian matrix H(I ∗ Gσ)(x), approximated with a Gaussian function G at a given scale σ. These methods

define vesselness measures Vm by assuming ideal cylindrical or elliptical geometrical

shapes, discriminating structural features such as: plates, lines, and blob-like struc-

tures. However, these measures cannot be applied to irregular tubular structures

since the structural information they contain does not fulfill the hypothesis of the

assumed model.


Figure 3.15: Structural features. (a) Distribution of the normalized eigenvalues, (b)
estimation of the parametric sigmoid function at three different scales.

Thus, we hypothesize that regular and irregular cylindrical shape models lead to

different probability density functions in the space of functions fσ. We reformulate the problem of detecting tubular structures as that of learning a structural tubular model from the object of interest itself (as opposed to defining an ideal tubular measure Vm).

Deriving an isotropic regular tubular model: Let us consider a synthetic

tubular model (Fig. 3.13(a)) for which a rule that associates the eigenvalues and the

centerline (Fig. 3.14) will be constructed. Since the configuration of the eigenvalues

reveals structural information, our goal is to identify those eigenvalues whose underlying structure tensor is located on the centerline of the tubular object. Figure 3.15

depicts the class distribution of the eigenvalues with respect to the synthetic model

of Fig. 3.13(a) with centerline labels depicted in Fig. 3.14. Note that the probability

distribution of the centerline and non-centerline classes overlap with each other and

yet we want to estimate the probability of an element belonging to the centerline of

the tubular object. This motivates the idea of learning the association rule between

the eigenvalues and the tubular structure of the object.

To estimate the parameters needed to obtain the posterior probability of the centerline model, SVM parameter selection was performed with a grid search using three-fold cross-validation. The best performance was obtained using a penalty value of C = 10, a linear kernel, and σ = 0.5. The optimal SVM parameters correspond to b = 4.19 (Eq. 3.35) with 638 support vectors, while A and B (Eq. 3.38) were A = −1.7955 and B = −0.0539. Figure 3.15(b) depicts the estimated parametric sigmoid function for three different scales σ = {0.25, 0.5, 1.0}.

Deriving an anisotropic irregular dendritic tubular model: We derive a generic "irregular tubular measure" from a dendrite model. The two classes of parameters we consider are: i) generic dendrite shapes; and ii) optimal SVM parameters.

Our goal is to construct a dendrite shape model that captures both global and

local shape variations. Dendrite selection was performed by taking into account: i) dendritic global shape with some variations in diameter; ii) the inclusion of spines; iii) variations in intensity; and iv) dendrite segments with high and low curvature³. To construct such a model, we selected five dendrite segments, depicted in Fig. 3.16, Regions A-D. For each dendrite segment, SVM training was performed by grid search with different kernels (linear, polynomial, exponential), values of σxy (0.15, 0.3, 0.45, 0.9) µm and σz (0.5, 1.0, 1.5, 2.0) µm, and varying the slack penalty. Testing was performed on unseen data for each segment. Figure 3.17 depicts two out of 10 volumes on which testing was performed. For our specific application, performance is measured by the ability to perform "segmentation" of the unseen data. The model corresponding to Region E was selected as the generic measure of irregular objects; the estimated parameters were A = −1.94, B = −0.222 (Eq. 3.38), b = 8.269, σxy = 0.3 µm, σz = 1 µm, using a linear kernel and C = 50.

Figure 3.17 depicts the results of predicting the tubular shape with the model

trained from Region E, while Fig. 3.16(b), and Fig. 3.16(d) depicts the result of the

prediction in the different regions.

The importance of shape feature normalization:

³Note that the major drawback of models that rely on an ideal cylindrical or elliptical shape is that they cannot detect adjoining objects (spines) attached to the dendrites, which it is desirable to include in the learning algorithm.


Figure 3.16: Comparison of dendrite enhancement in different regions. (a),(c) selected regions in the denoised data; and (b),(d) selected regions in the probability volume obtained from the dendrite segment in Region E.

Figure 3.18(a) presents the results of applying two different shape models to a 3D image stack. Model A represents a 'smooth and regular' tubular model, whereas Model B represents an 'irregular' tubular model. Note the difference when predicting tubular structure: the prediction of Model A is considerably smoother than that of Model B; this is evident since spines are enhanced in Model B as opposed to Model A.


Figure 3.17: Comparison of dendrite enhancement in different cells. The probability volumes were obtained from the dendrite in Region E depicted in Fig. 3.16. (a),(e)-(c),(g): volume rendering of the denoised volumes of two different cells in the x−y and x−z axes, respectively; and (b),(f)-(d),(h): volume rendering of the probability volume in the x−y and x−z axes, respectively.


Figure 3.18: Schematic of shape learning given from two different models. Model A
was obtained from a synthetic example and is regular tubular model, whereas Model
B is an irregular tubular model. Note that the major difference of the result obtained
by these models is that Model B enhances “spines” compared with Model A.

3.7 Morphological Reconstruction

Morphological reconstruction of neuron cells expresses the cell anatomy in terms of a single tree representation⁴ and in terms of cylindrical lengths and diameters. The key idea of our approach is to evolve a 3D front obtained from the probability volume described in Sec. 3.6 such that the front moves considerably faster along the centerline of the irregular tubular object than at its border (anisotropic front propagation), inducing a new distance metric. Based on this distance metric, dendrite centerlines are precisely the paths with "optimal cost" when traveling along the dendrites (convergence to a global minimum is always guaranteed by construction). In addition, such paths are computed in an "optimal number of steps", leading to a fast and accurate computation of the centerline.

In this Section, we first review the basic concepts of level set theory needed for our application (Sec. 3.7.1), and then we present our proposed framework for rapid and accurate morphological reconstruction of neuron cells (Sec. 3.7.2).

3.7.1 Level Set Formulation

Let x be a point in Rⁿ and let Γ(t) : [0, ∞) → Rⁿ be a closed interface; Γ can be a curve (R²) or a surface (R³). We denote by Int(Γ) the interior of Γ, that is, the bounded connected component of (R²\Γ) or (R³\Γ), and by Ext(Γ) its exterior.

Definition 1. A relation H(x, y) = 0 is called an implicit function if H defines the function y = f(x) implicitly; that is, given x in the domain of f, H(x, f(x)) = 0.

⁴By construction, loops are prohibited when representing the tree structure.


Definition 2. A distance function d for the metric space (X, |·|) is defined as:

$$d(x, \partial S) = \inf_{y \in \partial S} |x - y|, \quad (3.42)$$

where x ∈ X and ∂S ⊂ X.

Definition 3. A signed distance function φ for the metric space is defined as an implicit function φ such that:

$$\phi(x) = \begin{cases} d(x,\Gamma) & \text{if } x \in \mathrm{Int}(\Gamma), \\ -d(x,\Gamma) & \text{if } x \in \mathrm{Ext}(\Gamma), \\ 0 & \text{if } x \in \Gamma. \end{cases} \quad (3.43)$$

Figure 3.19: Level set embedding.

Let x be a point in Rⁿ, let φ : Rⁿ → R be a distance function, and let Γ(t = 0) be a closed (n−1)-dimensional hypersurface defined as:

$$\Gamma(t=0) = \{x \,|\, \phi(x, t=0) = 0\}, \quad (3.44)$$

where the gradient of the implicit function φ can be written as:

$$\nabla\phi = \Big(\frac{\partial\phi}{\partial x}, \frac{\partial\phi}{\partial y}, \frac{\partial\phi}{\partial z}\Big). \quad (3.45)$$

A general result for the implicit function φ is that the gradient ∇φ is perpendicular to the isocontours of φ and points in the direction of increasing φ [159]. Hence the unit outward normal can be written in terms of the implicit function φ as:

$$\mathbf{N} = \frac{\nabla\phi}{|\nabla\phi|}. \quad (3.46)$$

Figure 3.20: Schematic of propagation forces normal to the curve.

Once a general representation of the implicit function φ is constructed, we can provide "motion" in the direction of the outward normal N of the implicit function (the same applies in 3D). The "magnitude" of the motion is represented by a function F, called the "speed function", which generally depends on geometrical properties such as the curvature κ and the normal direction N. Let x denote a point on the front Γ; we want to evolve Γ as a function of its embedded level set φ, where φ moves in the direction of its normal vectors with speed F. Given x on the front Γ, we can express F at a given time t as F(x(t)) = x′(t) · N. Applying the chain rule to φ(x(t), t) = 0 gives φt + ∇φ(x(t), t) · x′(t) = 0, and since the outward normal vector is N = ∇φ/|∇φ| (Eq. 3.46), we obtain:

$$\phi_t + F|\nabla\phi| = 0. \quad (3.47)$$

This last equation was introduced by Osher and Sethian [160]. The curvature can be written as:

$$\kappa = \nabla \cdot \frac{\nabla\phi}{|\nabla\phi|} = \frac{\phi_{xx}\phi_y^2 - 2\phi_x\phi_y\phi_{xy} + \phi_{yy}\phi_x^2}{(\phi_x^2 + \phi_y^2)^{3/2}}. \quad (3.48)$$

We consider the case when the speed function F depends only on position and not on time. This formulation requires that F be strictly positive, and it is the so-called "stationary level set equation".
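For reference, Eq. 3.48 can be evaluated numerically with central differences. The helper below is our own sketch; it is checked on the signed distance function of a circle, whose level curves at radius r have curvature 1/r:

```python
import numpy as np

def curvature_2d(phi, h=1.0):
    """Level-set curvature kappa = div(grad phi / |grad phi|) via the
    closed form of Eq. 3.48, using central differences. Sketch only."""
    py, px = np.gradient(phi, h)          # axis 0 ~ y, axis 1 ~ x
    pyy, _ = np.gradient(py, h)
    pxy, pxx = np.gradient(px, h)
    num = pxx * py ** 2 - 2.0 * px * py * pxy + pyy * px ** 2
    den = (px ** 2 + py ** 2) ** 1.5 + 1e-12   # guard against flat regions
    return num / den
```

On the signed distance field of a circle of radius 5, the computed curvature at a point on that circle is close to 1/5.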

Shortest Geodesic Paths

We introduce the concept of shortest geodesic paths in terms of level set propagation fronts; specifically, we consider the case of the stationary level set equation φt + F|∇φ| = 0, with F > 0, derived from Eq. 3.47.

Let T : R³ → R⁺ be a positive function and define the level set C of T as:

$$C(x, y, z, t) = \{(x, y, z) : T(x, y, z) = t\}. \quad (3.49)$$

Then the level set C(x, y, z, t) is a strictly monotonic front: it is the set of points that can be reached from p0 with minimum cost at time t. Assume that C evolves according to Ct = F N, where F > 0 is the speed of the front. To illustrate this concept, we can think of this type of propagation as a balloon that is "only" expanding as a function of the time t at a given speed F, where the direction of the expansion is the direction normal to the surface, N.

Let us consider the one-dimensional case. To compute the arrival time of a particle moving in 1D we can use the well-established relation distance = rate × time. From elementary calculus, the gradient ∇T is orthogonal to the level sets of T and the magnitude |∇T| is inversely proportional to the speed F; we can state that the magnitude |∇T| is directly proportional to the "cost" of moving the particle (Fig. 3.21):

$$|\nabla T|\, F = 1. \quad (3.50)$$

Figure 3.21: Schematic of the one dimensional case of the Eikonal Equation.

In the formulation expressed by Eq. 3.49 the embedded level set always moves outwards, and the relation between the arrival time T and the speed of propagation F is as in Eq. 3.51; this can be deduced from:

$$T(C(x,t)) = t \;\Rightarrow\; \nabla T \cdot C_t = 1 \;\Rightarrow\; \nabla T \cdot F\,\frac{\nabla T}{|\nabla T|} = 1 \;\Rightarrow\; F\,|\nabla T| = 1. \quad (3.51)$$

Therefore, evolving a strictly monotonic front in the normal direction can be expressed as:

$$\|\nabla T(x)\| = F(x), \quad \text{with } T(p_0) = 0 \text{ and } F(x) > 0, \quad (3.52)$$

where F and p0 are known.

The duality of the Eikonal Equation 3.52 with the shortest path problem can be

stated as follows: given two points p, q ∈ R3 , the optimal path between p and q is

defined by the weighted arclength du2 = F (x)ds2 , where ds2 = dx2 + dy 2 + dz 2 is

the Euclidean arclength and F (x) is the weight over a domain D. Then, the shortest

path c(t) = {x(t), y(t), z(t)} from the point p0 to p is the one attaining the minimal cumulative cost T(p), defined as:

$$T(p) = \min_c \int_{p_0}^{p} F(c(s))\, ds. \quad (3.53)$$

The arrival time T(p) is the minimal cost of reaching p from p0, and the minimal-cost paths are orthogonal to the level set curves. To illustrate this statement we consider the two-dimensional case; the three-dimensional case can be found in [159]. The proofs of the following Lemmas can be found in Bellman [23].

Lemma 1. If a path c(p) = (t(p), s(p)) satisfies the equation:

$$c'(p) = \nabla T, \quad (3.54)$$

where T(p) is the unit tangent vector to c(p), defined as T(p) = c′(p)/|c′(p)|, then c(p) achieves the minimum in:

$$\min_c \int_{p_0}^{p} F(c(p))\,|c'(p)|\, dp. \quad (3.55)$$

Lemma 2. The gradient descent curves c(p) = (t(p), s(p)) defined by the ordinary differential equation:

$$c'(p) = \nabla T, \quad (3.56)$$

satisfy the Euler-Lagrange equation of the functional:

$$\int F(c(p))\,|c'(p)|\, dp. \quad (3.57)$$

Lemma 3. The optimal paths between two points A and B are the gradient descent

contours of the function u that satisfies Eikonal equation:

|∇u| = g, u(A) = 0. (3.58)

Then, we can find explicitly the optimal path between the starting point p0 and a given point p by solving the ordinary differential equation:

$$X_t = -\nabla T, \qquad X(0) = p,$$

that is: given the arrival time T, the optimal path can be found by traveling from the point p along the negative of the gradient back to the starting point.
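This backtracking step can be sketched as a simple gradient descent on the discrete arrival-time map; the helper below is our own 2D illustration (nearest-neighbor gradients, hypothetical step size), not the dissertation's implementation:

```python
import numpy as np

def backtrack_path(T, start, seed, step=0.5, max_iter=1000):
    """Trace the optimal path by descending the negative gradient of the
    arrival-time map T from `start` toward its global minimum at `seed`
    (the ODE X_t = -grad T). 2D nearest-neighbor sketch."""
    gy, gx = np.gradient(T)
    p = np.array(start, dtype=float)
    path = [tuple(p)]
    for _ in range(max_iter):
        if np.hypot(*(p - np.asarray(seed))) < 1.0:
            break                                   # reached the source
        i, j = int(round(p[0])), int(round(p[1]))
        g = np.array([gy[i, j], gx[i, j]])
        n = np.linalg.norm(g)
        if n == 0:
            break
        p = p - step * g / n                        # descend -grad T
        p = np.clip(p, 0, np.array(T.shape) - 1)
        path.append(tuple(p))
    return path
```

Because T has a single global minimum at the seed, the descent cannot get trapped, which mirrors the convergence guarantee discussed above.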

To solve the Eikonal Equation 3.52 numerically we use a scheme based on [195]:

$$\big[\max(D^{-x}_{ijk}u,\, -D^{+x}_{ijk}u,\, 0)\big]^2 + \big[\max(D^{-y}_{ijk}u,\, -D^{+y}_{ijk}u,\, 0)\big]^2 + \big[\max(D^{-z}_{ijk}u,\, -D^{+z}_{ijk}u,\, 0)\big]^2 = F_{ijk}^2,$$

where F_{ijk} is the cost function and D⁺, D⁻ are the forward and backward difference operators, defined as:

$$D^{-x}_i\psi = \frac{\psi_i - \psi_{i-1}}{h}, \qquad D^{+x}_i\psi = \frac{\psi_{i+1} - \psi_i}{h},$$

where ψi is the value defined on a grid at the i-th position and h is the step size. This numerical solution solves the Eikonal Equation 3.52 in an optimal number of steps (O(N log N)).
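A simplified 2D stand-in for this scheme is fast sweeping, which applies the same Godunov upwind update in alternating Gauss-Seidel passes (fast marching reaches the same solution in O(N log N) with a heap). The sketch below is ours, not the dissertation's code:

```python
import numpy as np

def fast_sweep_eikonal(F, seed, h=1.0, n_sweeps=4):
    """Solve ||grad T|| = F on a 2D grid with T(seed) = 0 by Gauss-Seidel
    sweeps of the Godunov upwind update. Sketch only."""
    T = np.full(F.shape, np.inf)
    T[seed] = 0.0
    ni, nj = F.shape
    for _ in range(n_sweeps):
        for ri in (range(ni), range(ni - 1, -1, -1)):
            for rj in (range(nj), range(nj - 1, -1, -1)):
                for i in ri:
                    for j in rj:
                        if (i, j) == seed:
                            continue
                        a = min(T[i - 1, j] if i > 0 else np.inf,
                                T[i + 1, j] if i < ni - 1 else np.inf)
                        b = min(T[i, j - 1] if j > 0 else np.inf,
                                T[i, j + 1] if j < nj - 1 else np.inf)
                        if np.isinf(a) and np.isinf(b):
                            continue
                        f = F[i, j] * h
                        if abs(a - b) >= f:        # one-sided update
                            t = min(a, b) + f
                        else:                      # two-sided quadratic update
                            t = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                        if t < T[i, j]:
                            T[i, j] = t
    return T
```

With unit cost, axis-aligned arrival times are exact grid distances, while off-axis values slightly overestimate the Euclidean distance, as expected of a first-order upwind scheme.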

3.7.2 Neuron Morphological Reconstruction

Our approach for morphological reconstruction consists of several phases:

1. Soma and pipette segmentation;

2. Isotropic 3D front propagation;

3. Detection of terminal points;

4. Anisotropic 3D front propagation;

5. Centerline extraction and tree reconstruction;

6. Diameter estimation.

3.7.3 Soma-pipette segmentation

In order to represent a neuron cell morphologically as a tree starting from the soma, both soma and pipette segmentation must be performed. In the case where the pipette is absent, only soma segmentation is considered. We assume that the soma and pipette are by far the brightest objects⁵ and the longest objects along the z axis in the volume (Fig. 3.22(a)).

We represent the soma and pipette volume V_SP as the union of the soma volume V_S and the pipette volume V_P. Our goal is to remove the pipette volume V_P (when present) and segment the soma volume V_S. We consider the case where both the pipette and soma are present⁶.

Since we assume that the soma and pipette are the brightest objects in the 3D volume (Fig. 3.22(a)), we create two masks from the additive projection along the z axis.
The first mask E1 is found by fitting an ellipse to the soma only and the second

mask E2 is found by fitting an ellipse that encloses both the soma and pipette

(Fig. 3.22(b)). We use the ellipse E2 to segment the soma and pipette volume VSP .

The soma and pipette are segmented by applying a standard K-means algorithm

in the mask E2 (enclosing the soma and pipette). From the region of interest we

select the largest connected component (the soma and pipette attached), designated

as volume VSP . Our goal now is to remove the pipette. To that end, we construct

a cost function (speed image) for which we propagate a 3D front. The cost function
⁵This is a reasonable assumption since the pipette carries the fluorescent dye and therefore, if present, it will produce the highest illumination along with the soma.
⁶If the pipette is not present, a similar analysis is performed.




Figure 3.22: Soma and pipette segmentation. (a) Additive projection of a neuron cell along the z axis; the blue lines define a region of interest obtained from the green ellipse enclosing the soma and pipette. (b) Pipette removal; left: ellipse E2 enclosing the pipette from the additive projection along the z axis; right: ellipses E1 and E2 enclosing the soma and pipette, respectively, and 2D projection of the pipette inside the ellipse E2. Steps in pipette removal: (c) additive projection along the z axis, (d) additive projection inside the ellipse E2, (e) 3D front propagation in the pipette, and (g) extracted pipette medial axis along with the circular masks used for pipette segmentation.

is estimated from the distance transform D, where the starting point is the center of the soma, which corresponds to the point with maximum distance value in the soma region (second mask).

Next, we define an energy function such that the embedded level set C evolves

with higher curvature at the center of the volume containing the soma and pipette.

Such a level set is guided by the normalized distance transform Dn of the segmented volume (Hassouna et al. [86]):

[g(Dn (x))] k∇T (x)k = 1, (3.62)

with T(p0) = 0. The term g(Dn(x)) is the speed function (Fig. 3.22(e)) which guides the front along the 3D centerline of the pipette; g is defined as g(x) = eˣ, where x is a value between zero and one. Figure 3.22(f) depicts the estimated medial axis of the volume V_SP.

Pipette segmentation is performed by creating circles along the medial axis of the pipette (mask depicted in blue in Fig. 3.22(g)). The radius of such circles is estimated from the distance transform D (since it provides the distance from the center point to the boundary). The radius of each circle is defined as:
center point to the boundary). The radius of each circle is defined as:

ri = D(xi ) + K, (3.63)

where D is the distance transform of the object of interest at the voxel xi and K is

a constant value to ensure the circles cover the boundary of the object of interest.

The 3D segmentation of the pipette is performed by backprojecting the estimated

2D mask in the x − y plane. In the case where the pipette is not present, only the

soma is segmented and its center point is estimated.
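The circle construction of Eq. 3.63 can be sketched directly from the distance transform; the toy bar-shaped "pipette", helper name, and value of K below are our own illustration:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def circle_mask(shape, centers, D, K=2.0):
    """Union of circles along a medial axis (Eq. 3.63): at each axis point
    x_i the radius is r_i = D(x_i) + K, with D the distance transform and
    K a safety margin so the circles cover the object boundary. Sketch."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for (i, j) in centers:
        r = D[i, j] + K
        mask |= (yy - i) ** 2 + (xx - j) ** 2 <= r ** 2
    return mask

# Toy "pipette": a horizontal bar whose medial axis is its middle row.
obj = np.zeros((21, 21), dtype=bool)
obj[9:12, 2:19] = True
D = distance_transform_edt(obj)
axis = [(10, j) for j in range(4, 17)]
cover = circle_mask(obj.shape, axis, D, K=2.0)
```

The margin K guarantees that the union of circles fully covers the object, which is what makes the subsequent backprojection a valid segmentation mask.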

3.7.4 Isotropic 3D Front Propagation

Isotropic front propagation consists of evolving a front with low curvature from the center point of the soma. The volume used for this propagation is the binary volume obtained by H; the surface evolution produces a distance map that captures the distance of every voxel from the soma center point p0. Isotropic front propagation is performed by setting F = 1 in Eq. 3.52 and solving:

$$\|\nabla T(x)\| = 1, \quad (3.64)$$

where, as previously, T(p0) = 0 and p0 is the soma center point.
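Since F = 1, the solution of Eq. 3.64 restricted to the binary volume is the geodesic distance to p0 inside the segmented cell. A coarse grid-graph stand-in (Dijkstra in place of fast marching; our own 2D sketch) is:

```python
import heapq
import numpy as np

def geodesic_distance(mask, seed):
    """Distance of every pixel to `seed` inside a binary mask, via
    Dijkstra on the 4-connected grid graph; a coarse stand-in for
    solving ||grad T|| = 1 by front propagation. Sketch only."""
    T = np.full(mask.shape, np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i, j]:
            continue                      # stale queue entry
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                    and mask[ni, nj] and t + 1.0 < T[ni, nj]):
                T[ni, nj] = t + 1.0
                heapq.heappush(heap, (t + 1.0, (ni, nj)))
    return T
```

Unlike the Euclidean distance transform, this distance follows the mask, which is exactly the property exploited by the terminal-point detection below.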

3.7.5 Detection of Terminal Points

A point is considered a terminal point if it lies at the "tip" of a dendrite; the set of all tip points is the set of terminal points. Terminal points have the unique characteristic that they have the maximum distance to the soma within a given dendrite (compared with the other points in the same dendrite).

Let us denote the isotropic distance volume by V_ID. We construct a discrete distance volume, denoted by V_DD, by taking equally increasing distance steps di, 1 ≤ i ≤ N, from the soma p0. We have then partitioned the volume into N steps, and for each distance step i the discretized volume can be expressed as:

$$S_i = \{x \,|\, d_i \le d(x) < d_{i+1}\}. \quad (3.65)$$

Note that two adjacent regions Si, Si+1 do not share any point in common, and we observe that for each distance step i, multiple regions can be created. This is easy to see when we consider a volume with multiple bifurcations: each bifurcation can have a region for a distance step i.



Figure 3.23: Schematic of ending points detection. (a) common region to detect two
adjacent regions; and (b) visualization of the chain volume VChain , ending points are
marked with yellow color and common regions are marked with gray color.

We define a "chain volume" V_Chain as the discretized volume composed of the regions:

$$C_i = \{x \,|\, d_i - \varepsilon \le d(x) \le d_{i+1}\}, \quad (3.66)$$

where ε is a distance value⁷. We say that two regions Ci and Ci+1 are connected if, for pi the point attaining max{V_ID(x) : x ∈ Ci}, we have pi ∈ Ci+1; that is, the point with maximum distance value in region Ci lies in region Ci+1 (see Fig. 3.23(a)).

To detect terminal points, we march inwards starting from the last distance step dN. In case multiple regions MN are created (due to multiple branches), we find the maximum-distance point of each region and mark it as an ending point. Then we consider all the MN−1 regions for distance step N − 1 and check whether each region has a connecting region (Eq. 3.66). If a given region has no connecting region from distance step N, then we compute its point of maximum distance and flag it as a terminal point. This procedure is repeated until the regions reach the soma (clearly, the soma contains no bifurcation points). Figure 3.23(b) depicts the detected terminal points (marked in yellow) and the chained volume, where the gray regions are the regions with common points.

⁷Typically ε is selected to be 1.0 or 2.0 µm, approximately three or six voxels.
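The shell-partition step of this procedure can be sketched with connected-component labeling; the helper below (our own, simplified to finding per-region maxima, i.e., the terminal-point candidates) assumes a distance map that is infinite outside the cell:

```python
import numpy as np
from scipy.ndimage import label

def shell_maxima(dist, step=1.0):
    """Partition a distance map into shells S_i = {x : d_i <= d(x) < d_{i+1}}
    (Eq. 3.65) and return the maximum-distance voxel of every connected
    region of every shell: the terminal-point candidates. Sketch only."""
    finite = np.isfinite(dist)
    maxima, d, dmax = [], 0.0, dist[finite].max()
    while d < dmax:
        shell = finite & (dist >= d) & (dist < d + step)
        labels, n = label(shell)
        for k in range(1, n + 1):
            region = labels == k
            idx = np.argwhere(region)          # row-major, matches dist[region]
            maxima.append(tuple(idx[np.argmax(dist[region])]))
        d += step
    return maxima
```

The full algorithm would then keep only those maxima that are not continued by a connecting region in the next shell outward.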

3.7.6 Anisotropic 3D Front Propagation

Given a cost volume (denoted by F) and a starting point (the soma point p0), we propagate an anisotropic 3D front that has a very high speed at the center of the irregular tubular object compared with its border. The objective of propagating a 3D front that travels at high speed along the centerline of the tubular structure is to determine the optimal path from the set of ending points to the soma. Note that, by construction, a path is optimal if its cumulative cost is minimum, and it is unique.

The proposed energy functional to compute geodesic paths along the centerline

of the tubular object is given by:

[g(Dn (H(x)))] k∇T (x)k = 1 , (3.67)

with T (p0 ) = 0 and p0 is the voxel corresponding to the soma center point.

The term g(Dn(H(x))) induces a cost function for which a 3D front propagates


Figure 3.24: Visualization of the 3D front propagation along the centerline of the tubular object. (a)-(d) 3D front propagation in a tubular region with considerable diameter; note how the topology of branching dendrites is naturally modeled by the 3D front, always moving along the centerline. (e),(f) 3D front propagation in dendrite structures with small diameter.

with maximum curvature in regions of maximum distance to the boundary (centerlines), and is composed of a morphological operator H and the normalized distance transform operator Dn. The morphological operator H(x) is defined as H(x) = f1(P(x|f)) ∪ f2(V(x)), where the term f1(P(x|f)) is a probabilistic morphological operator, composed of the posterior probability P that a voxel belongs to the centerline (Eq. 3.37); it is equal to 1 in regions with probability greater than or equal to a given value. This function ensures that the great majority of the small dendrites are robustly segmented. The second term f2(V(x)) is a threshold operator that is equal to 1 in regions with intensity greater than or equal to a given value; this operator ensures that wider structures, mainly the soma (not a cylindrical object) and the largest dendrites, are segmented. Figure 3.24 depicts the front propagation in a 2D slice of a typical volume. Note that in Fig. 3.24(a) the front expands anisotropically in the center of the tubular structure and gradually travels along the centerline of the bifurcations (Figs. 3.24(b)-3.24(d)). This property has tremendous generalization power, since this type of front propagation naturally handles the complex topology of multiple branches. Figure 3.24(e) depicts the front propagation in a 2D section of a thin dendrite with two branches, and Fig. 3.24(f) depicts the front propagation in a single dendrite.

3.7.7 Centerline Extraction and Tree Reconstruction

In general, centerline points correspond to points where the curvature of the front at a given time t is maximal, and therefore they are located farthest away from the initial voxel p0 at a given time t (in the case of a single branch). Thus, in the case of one dendrite, the centerline is extracted by marching along the gradient of the 3D front from the ending voxel to the initial voxel.

In the case of the complete neuron cell, a 3D front starting from the initial voxel is initiated according to Eq. 3.67. Finally, centerlines are extracted by marching along

Figure 3.25: Schematic depicting the general principle of constructing a single connected tree component when tracing back the optimal paths from the ending points to the root point. Individual paths are marked with dashed lines and the common path is marked with a continuous line.

the gradient from the ending voxels to the initial voxel p0 (note that convergence is

always guaranteed since the global minimum corresponds to p0 ).

In order to represent the set of paths as a single connected tree structure, the

soma center point is the natural selection of the root. Branching points and dendrite

segments are constructed by tracing back the paths from the terminal points to the

root. Each centerline voxel along the paths is labeled according the number of times

that a point has been visited when traveling from the terminal points to the root

point. Such labeling induces a unique identification for paths segments. For example,

consider the Fig. 3.25, the path of non-continues lines correspond to paths that are

visited only once when traveling from the terminal voxels to the root point, while

voxels that have been visited twice correspond to continues lines. By considering

all the terminal voxels in a neuron cell, a single connected tree component is always

guaranteed to be constructed (in order to produce realistic computational simula-

tions, a single connected tree structure must be constructed). Therefore, the cell

morphology is uniquely expressed terms of dendrite segments, where each segment

is expressed as a parametric differentiable curve l(u), where u parameterizes that

particular segment in terms of the voxels. We ensure differentiability by smoothing

the curve from the path’s points.
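The visit-counting rule used to identify shared path segments can be sketched in a few lines. This is an illustrative example with hypothetical voxel labels, not the dissertation's data structure:

```python
from collections import Counter

def label_visits(paths):
    """Count how many terminal-to-root paths visit each voxel. Voxels
    visited once lie on a single dendrite path; voxels visited two or
    more times lie on a common path shared below a branching point."""
    counts = Counter()
    for path in paths:
        for voxel in path:
            counts[voxel] += 1
    return counts

# Two terminal-to-root paths that merge at the branching point 'C'.
path1 = ["t1", "a", "C", "b", "root"]
path2 = ["t2", "x", "C", "b", "root"]
counts = label_visits([path1, path2])
shared = sorted(v for v, c in counts.items() if c >= 2)
print(shared)   # ['C', 'b', 'root']
```

Splitting the paths wherever the visit count changes yields the branching points and dendrite segments of the connected tree.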

Figure 3.26: Parametrization of segments using a generalized cylinder representation.

Given a differentiable curve l(u), an FS frame at a given point of l(u) is a vector field consisting of a triplet of vectors (T(u), B(u), N(u))^T. The FS frame constitutes an orthogonal system of vectors, and this orthogonal system is obtained from the derivatives of the curve l with respect to the parameter u. The first derivative, l̇(u), is

the vector in the direction of the tangent to the curve at point l(u). The first l̇(u)

and second l̈(u) derivatives define the osculating plane of the curve at that point –

the limit of all the planes defined by the point l(u) and the ends of the infinitesimal

arcs of the curve near it [203]. These two vectors are not always orthogonal to each

other, but they can be used to define the binormal vector, the vector perpendicular

to the osculating plane. The binormal and tangent vectors, in turn, define the normal

vector and with it an orthogonal system of axes that constitutes the FS frame at the

point l(u):

$$
\begin{aligned}
\text{Tangent:}\quad & \mathbf{T}(u) = \frac{\dot{\mathbf{l}}(u)}{|\dot{\mathbf{l}}(u)|}\,,\\
\text{Binormal:}\quad & \mathbf{B}(u) = \frac{\dot{\mathbf{l}}(u)\times\ddot{\mathbf{l}}(u)}{|\dot{\mathbf{l}}(u)\times\ddot{\mathbf{l}}(u)|}\,,\ \text{and} \qquad(3.68)\\
\text{Normal:}\quad & \mathbf{N}(u) = \mathbf{B}(u)\times\mathbf{T}(u)\,.
\end{aligned}
$$

This frame is independent of the curve's coordinate system, and the parametrization depends only on the curve's local shape (Bronsvoort [31]).
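The FS frame can be approximated numerically from a sampled curve. The sketch below uses finite differences and assumes a smooth, non-degenerate curve; the helper name and the helix test curve are hypothetical:

```python
import numpy as np

def frenet_frame(l):
    """Approximate the Frenet-Serret frame (T, B, N) along a sampled
    curve l of shape (n, 3), using finite differences for the first
    and second derivatives. Assumes l'(u) x l''(u) != 0 everywhere."""
    d1 = np.gradient(l, axis=0)          # first derivative l'(u)
    d2 = np.gradient(d1, axis=0)         # second derivative l''(u)
    T = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    cross = np.cross(d1, d2)             # perpendicular to the osculating plane
    B = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    N = np.cross(B, T)                   # completes the orthogonal triad
    return T, B, N

# A helix: the frame is well defined at every sample.
u = np.linspace(0, 4 * np.pi, 200)
curve = np.stack([np.cos(u), np.sin(u), 0.1 * u], axis=1)
T, B, N = frenet_frame(curve)
err = float(np.abs(np.sum(T * B, axis=1)).max())
print(err < 1e-8)   # True: T and B are orthogonal everywhere
```

Since B is built from the cross product of the two derivatives, it is perpendicular to T by construction, matching the orthogonality stated above.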

The geometric model e of each segment is a tube-shaped model with a parametric curve l(u) as the dendrite centerline and FS-frame-oriented cross-sectional planes a(u, v):
 
$$
\mathbf{e}(u,v) = \begin{pmatrix} l_1(u) + a_1(u,v) \\ l_2(u) + a_2(u,v) \\ l_3(u) + a_3(u,v) \end{pmatrix} \qquad(3.69)
$$

where −π ≤ v ≤ π and the cross sectional planes can be represented in a matrix

form as follows:
   
$$
\begin{pmatrix} a_1(u,v) \\ a_2(u,v) \\ a_3(u,v) \end{pmatrix} = r(u) \begin{pmatrix} N_1(u) & B_1(u) \\ N_2(u) & B_2(u) \\ N_3(u) & B_3(u) \end{pmatrix} \begin{pmatrix} \cos(v) \\ \sin(v) \end{pmatrix}, \qquad(3.70)
$$

where l(u) = (l1(u), l2(u), l3(u))^T, a(u, v) = (a1(u, v), a2(u, v), a3(u, v))^T, and r(u) is defined as the radius of the dendrite cross section.
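Equations 3.69-3.70 can be sampled directly to produce the tube surface. This is a minimal sketch; the straight-line test centerline, its frame, and the sampling density are hypothetical:

```python
import numpy as np

def tube_surface(l, N, B, r, n_angles=16):
    """Sample the generalized-cylinder surface of Eqs. 3.69-3.70:
    e(u, v) = l(u) + r(u) * (cos(v) N(u) + sin(v) B(u)),
    with v in [-pi, pi). l, N, B have shape (n, 3); r has shape (n,)."""
    v = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    # a(u, v) = r(u) [N(u) B(u)] [cos v, sin v]^T, broadcast over u and v
    a = (r[:, None, None] *
         (np.cos(v)[None, :, None] * N[:, None, :] +
          np.sin(v)[None, :, None] * B[:, None, :]))
    return l[:, None, :] + a          # shape (n, n_angles, 3)

# Straight centerline along z with unit radius: cross sections are
# unit circles in the x-y plane.
n = 5
l = np.stack([np.zeros(n), np.zeros(n), np.arange(n, dtype=float)], axis=1)
N = np.tile([1.0, 0.0, 0.0], (n, 1))
B = np.tile([0.0, 1.0, 0.0], (n, 1))
surf = tube_surface(l, N, B, r=np.ones(n))
radii = np.linalg.norm(surf[..., :2], axis=-1)
print(bool(np.allclose(radii, 1.0)))   # True
```

Varying r(u) along the centerline gives the tapering tubes used for the dendrite models.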

3.7.8 Diameter estimation

Here we describe the use of the distance transform to estimate diameters of the connected tree data structure. The major challenge in the correct estimation of the dendritic diameters is not to overestimate them.

Dendritic diameters are estimated for each voxel along each segment Segment_i. A function to approximate the diameter is defined in terms of the distance transform volume as $r(u(t)) = 2\,k\,D_m(H(u(t)_i^j))$, where $k$ is a penalty value and the function $D_m$ is the average distance from voxel $i$, defined as:

$$
D_m\big(H(u(t)_i^j)\big) = \frac{1}{N}\sum_{z} D\big(H(u(t)_i^{j+z})\big), \qquad(3.71)
$$

where the sum runs over the $N$ samples of a local window along the segment.
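The diameter rule above, r = 2 k D_m with D_m an average of distance-transform values along the centerline, can be sketched as follows. The penalty k, the window half-width m, and the toy slab volume are assumptions for illustration, not the dissertation's settings:

```python
import numpy as np
from scipy import ndimage

def estimate_diameters(binary, centerline, k=1.0, m=2):
    """Estimate dendritic diameters as r = 2 * k * D_m, where D_m is
    the distance-transform value averaged over a window of +/- m
    centerline samples (cf. Eq. 3.71). k penalizes overestimation."""
    D = ndimage.distance_transform_edt(binary)
    vals = np.array([D[tuple(p)] for p in centerline])
    diam = np.empty(len(vals))
    for j in range(len(vals)):
        lo, hi = max(0, j - m), min(len(vals), j + m + 1)
        diam[j] = 2.0 * k * vals[lo:hi].mean()
    return diam

# Solid slab around the plane z = 5: at the central plane the EDT to
# the nearest background voxel is 4, so the estimate is 2 * 4 = 8.
vol = np.zeros((11, 11, 11), dtype=bool)
vol[2:9] = True
center = [(5, 5, x) for x in range(2, 9)]
print(estimate_diameters(vol, center))   # [8. 8. 8. 8. 8. 8. 8.]
```

A value of k below 1 shrinks the estimate, which is one simple way to counter the overestimation concern noted above.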

Figure 3.27 depicts different anatomical branches for which the morphological model

is extracted. Figure 3.27(a) depicts a bifurcation branch overlaid with the extracted centerline (red line); the blue sphere is the detected branching point. Figure 3.27(b) depicts the cylindrical representation of the dendrite in terms of Eq. 3.69. Figures 3.27(c)-3.27(f) present the estimated cylindrical

representation of two anatomical structures.

We emphasize that the proposed morphological reconstruction is always guaranteed to construct a single connected tree representation of the cell, where every path corresponds to a dendrite centerline.


Figure 3.27: Centerline extraction and diameter estimation. (a) Overlay of the maximum intensity projection of the denoised data with the detected centerline (red) and the detected bifurcation point (blue sphere); (b) the cylindrical representation of the corresponding dendrite segment. (c),(e) Maximum intensity projections in the x − y axis of typical branches of the denoised data; (d),(f) the morphological reconstructions represented as single connected tree components in cylindrical form.

Chapter 4

Results and Discussion

4.1 Results of Frame Shrinkage

In our experiments, we compare our algorithm to median filtering, nonlinear anisotropic diffusion filtering, and a threshold algorithm based on the 3D separable

Haar system. The median filter is a non-separable edge-preserving smoothing filter.

It is usually applied as a preprocessing step to reduce the amount of noise in some

visual processing tasks. The filter sorts pixels covered by an N × N × N mask accord-

ing to their intensity; the center pixel is then replaced by the median of these pixels.

We used a 3 × 3 × 3 mask in our experiments. The nonlinear anisotropic diffusion

filtering employs an iterative, ‘tunable’ filter introduced by Perona and Malik [166].

Perona and Malik formulated it as a diffusion process that encourages smoothing

while preserving the edges. Broser et al. [33] used the anisotropic diffusion filtering

to average noise along the local axis of the neuron’s tubular-like dendrites in order

to maintain morphological structure. In all the experiments, the anisotropic diffusion parameters were set to 50 iterations with a time step of 0.0625 and a conductance parameter of 3; the filter was implemented using the Insight Segmentation and Registration Toolkit (ITK) [99]. The most widely-used wavelet threshold

algorithm first estimates the noise level according to the median of absolute value of

coefficients in the high frequency subband, then determines the threshold based on

the estimated noise level. Surprisingly, we found that such an algorithm does not work for either synthetic or real volumes with the 3D separable Haar system. In particular,

we found the estimated threshold for our test data is about zero or a very small

number. The reason is that the neuron imaging data are very sparse and most noise components are signal-dependent (such as the Poisson noise component) in the 3D photon-limited case. As a result, most wavelet coefficients are near zero, and so is the estimated threshold. Instead of estimating the threshold using the median-value

method, we set the threshold to half of the maximum absolute value of the coefficients in a subband for the threshold algorithm based on the 3D separable Haar system. For the proposed algorithm, we employ the UH Lifted Spline Filterbank

(UH-LSF).
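The contrast between the two threshold rules discussed above can be shown on a sparse subband. This numpy sketch is illustrative: the synthetic coefficient vector is hypothetical, and the universal-threshold form sigma * sqrt(2 ln n) is the standard median-based choice rather than necessarily the exact variant used here:

```python
import numpy as np

def mad_threshold(coeffs):
    """Classical wavelet threshold: sigma estimated as the median
    absolute coefficient divided by 0.6745, then the universal
    threshold sigma * sqrt(2 ln n)."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

def halfmax_threshold(coeffs):
    """Alternative used for sparse photon-limited data: half of the
    maximum absolute coefficient in the subband."""
    return 0.5 * np.abs(coeffs).max()

# Sparse subband: most coefficients are exactly zero, as in neuron
# imaging data, so the median-based estimate collapses to zero.
rng = np.random.default_rng(0)
coeffs = np.zeros(10000)
coeffs[:50] = rng.normal(0, 10, 50)   # a few signal-bearing coefficients
print(mad_threshold(coeffs), halfmax_threshold(coeffs) > 0)  # 0.0 True
```

With a zero median the classical estimate vanishes, which is exactly why the half-maximum rule is used for these data.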

4.1.1 Denoising Results in Synthetic Data

We first test our algorithm on synthetic noisy volumes – the computational phan-

toms. The two synthetic noisy volumes used in our experiments are depicted in Figs. 4.1 and 4.2(b), with dimensions 374 × 158 × 57 and 180 × 66 × 40, respectively. The

first one is used here to simulate the neuron imaging data as a whole, while the

Figure 4.1: Synthetic phantom data.

second one provides us the opportunity to investigate the behavior of our denoising

algorithm on more detailed structures (basal dendrites).

To assess denoising we use local metrics computed in the tubular neighborhoods.

The denoising performance on the two synthetic data is presented in Table 4.1 and

Table 4.2, respectively.

Table 4.1: Performance Evaluation on the noisy volume depicted in Fig. 4.1.
Metric Noisy Anisotropic Diffusion Filtering Median Filtering 3D Haar Threshold Our algorithm
LN-MSE 67.13 63.49 16.17 20.28 12.97
LN-RMSE 8.19 7.96 4.02 4.50 3.60
LN-SNR (dB) -4.19 -3.95 1.98 1.00 2.94
LN-PSNR (dB) 45.25 45.49 51.43 50.45 52.39

Table 4.2: Performance Evaluation on the noisy volume depicted in Fig. 4.2.
Metric Noisy Anisotropic Diffusion Filtering Median Filtering 3D Haar Threshold Our algorithm
LN-MSE 749.06 720.16 448.82 349.23 261.28
LN-RMSE 27.36 26.83 21.18 18.68 16.16
LN-SNR (dB) 1.36 1.53 3.58 4.67 5.93
LN-PSNR (dB) 34.77 34.94 37.00 38.09 39.35
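The local-neighborhood metrics reported in these tables can be sketched as error measures restricted to a mask. The following Python illustration is a sketch under assumptions: the toy volumes, mask construction, and peak value are hypothetical, and the dissertation's exact tubular-neighborhood definition differs:

```python
import numpy as np

def local_metrics(clean, denoised, mask, peak=255.0):
    """Denoising metrics restricted to a local neighborhood given by a
    boolean mask: LN-MSE, LN-RMSE, LN-SNR (dB), LN-PSNR (dB)."""
    err = (clean[mask] - denoised[mask]) ** 2
    mse = err.mean()
    rmse = np.sqrt(mse)
    snr = 10.0 * np.log10((clean[mask] ** 2).mean() / mse)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mse, rmse, snr, psnr

clean = np.zeros((8, 8, 8)); clean[3:5, 3:5, :] = 100.0
noisy = clean + 2.0                     # constant error of 2
mask = clean > 0                        # stand-in tubular neighborhood
mse, rmse, snr, psnr = local_metrics(clean, noisy, mask)
print(round(mse, 2), round(rmse, 2))    # 4.0 2.0
```

Restricting the error to the neighborhood keeps the empty background from dominating the score, which is the point of the LN variants.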

From Tables 4.1 and 4.2, it is clear that all algorithms used in our experiments

can significantly suppress noise components in the data. Our algorithm produces the

Figure 4.2: Denoising results due to different algorithms on a synthetic noisy volumes.
(a) Original volume; (b) noisy volume; (c) result due to median filtering; (d) result
due to anisotropic diffusion; (e) result due to our algorithm; and (f) result due to
threshold algorithm based on the 3D Haar wavelet.

best results for both noisy volumes and in terms of all local metrics. For example,

for the noisy volume in Fig. 4.1(b), our algorithm is 6.89 dB better than the nonlinear anisotropic diffusion filtering in terms of LN-PSNR; for the noisy volume in Fig. 4.2(b), our algorithm's LN-MSE is 187.53 lower than that of median filtering.

For visual comparison, we present the denoising results of the four algorithms for the noisy volume depicted in Fig. 4.2(b).

In Fig. 4.2(c), we notice that the median filtering destroys fragile details, which

in fact include very important information needed for the morphology analysis of

neuron imaging data. The nonlinear anisotropic diffusion method and the threshold algorithm based on the 3D Haar system tend to remove most of the noise

(Figs. 4.2(d),(f)). However, by carefully inspecting the corresponding results, we find that some sections of the structure of interest are not preserved. By comparison, our

algorithm not only removes most background noise, but also preserves the tubular

structure — even in regions with very fine details (Fig. 4.2(e)).

Table 4.3 lists the running time of each denoising method. Among the four algorithms in our experiments, median filtering is the most computationally efficient. Our method runs slower than the median filtering and the threshold algorithm

based on the 3D separable Haar system but faster than the anisotropic diffusion

filtering. The platform we used for our experiments is listed as follows. Hardware

Architecture: PC; Operating System: Windows XP; Processor: 2.6 GHz; Memory

Size: 4GB.

Table 4.3: Performance evaluation - Time (UNIT: SECOND).

Anisotropic Diffusion Filtering Median Filtering 3D Haar Threshold Our Algorithm

Noisy Volume Fig. 4.1 687.95 58.59 71.97 374.76
Noisy Volume Fig. 4.2(b) 28.85 2.09 3.80 14.20

4.1.2 Denoising Results in Confocal and Multi-photon Microscopy Data

We have tested our method on both multi-photon and confocal microscopy data sets

and compared its performance quantitatively and qualitatively with respect to the

other three filtering methods.

We define a structural-quantitative measure as the length of the dendrites obtained from the largest connected component after applying a global threshold to the denoising results. This allows us to assess the sensitivity of each

algorithm to produce ‘gaps’ among dendrites (tubular structures) and how well the

overall connectivity is preserved. Figures 4.3(a),4.3(b) depict the x − y and x − z

maximum intensity projection from a typical image stack. Figure 4.4(a) depicts

a detail of the original volume, while Figs. 4.4(b)-(d) depict the projection of the

binary segmented volume with a threshold value set to 10 in the x − y axis. Notice

that fragile details are lost in the case of median filtering and anisotropic diffusion fil-

tering. We note that some background noise is present after denoising with wavelets

in the first level of decomposition (Fig. 4.4(e)), whereas in the second level the back-

ground noise is mostly removed (Fig. 4.4(f)). The effect of the separable filter can be

observed in Fig. 4.4(f), where a block effect is introduced; this effect is visible in the


Figure 4.3: (a),(b) Maximum intensity projections in the x − y and x − z axis of the
volume of interest respectively.

binarized volume. This block effect in the binary volume is undesirable since it does not allow capturing the local structures (spines) that populate the dendrites. In addition,

we observe that some line segments are broken in the results when applying median

filtering and anisotropic diffusion. By comparison, our algorithm can preserve more

edges, even the weak ones, as shown in Fig. 4.4(d) without producing an aliasing

effect (our transform is undecimated). Figure 4.5 depicts the estimated length from

the binarized largest connected component at different threshold values. We observe that at low threshold values our method preserves more structure than the other three filtering methods. At high threshold values, the performance of the anisotropic diffusion is better than that of our method and median filtering, and the largest difference with respect to our method and the median filtering occurs at the value of 40. This effect can be explained by the fact that anisotropic diffusion preserves only strong edges. In addition, it should be noted that, as the threshold value increases, the ability to capture details at very low energy levels relative to the energy of the signal (the volume of interest) decreases.
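The structural measure used above relies on binarizing the volume and keeping the largest connected component. A sketch using SciPy's labeling follows; the toy volume, threshold, and connectivity choice are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def largest_component(volume, threshold):
    """Binarize a volume at a global threshold and keep the largest
    26-connected component, as in the structural measure used to
    detect 'gaps' among dendrites. Illustrative sketch."""
    binary = volume > threshold
    structure = np.ones((3, 3, 3), dtype=bool)   # 26-connectivity
    labels, n = ndimage.label(binary, structure=structure)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# One long bright filament plus an isolated bright voxel: only the
# filament survives, so its length dominates the measure.
vol = np.zeros((5, 5, 20))
vol[2, 2, :] = 50.0        # filament, 20 voxels long
vol[0, 0, 0] = 50.0        # isolated speck
comp = largest_component(vol, threshold=10.0)
print(comp.sum())          # 20
```

A denoiser that breaks the filament into pieces shrinks the largest component, which is how the measure exposes gaps.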


Figure 4.4: Comparison of applying our denoising, anisotropic diffusion, median filtering, and the 3D Haar wavelet. (a) Detail of the selected region of interest (red square in Fig. 4.3). Results of applying a global threshold with value T = 10: (b) median filtering; (c) anisotropic diffusion filtering; (d) our method; and (e),(f) the threshold algorithm based on the 3D Haar wavelet with one and two levels of decomposition, respectively.

Figure 4.5: Performance evaluation of the length as a function of the detected largest component volume at a given threshold; the number of levels of decomposition of the wavelet transform was two.

Figure 4.6 depicts a confocal imaging volume with a selected region of interest.

For comparison, a manually segmented version of the structure in the selected region

is shown in Fig. 4.7. As can be observed, both the median filter and our algorithm

obtain satisfactory results in this case. On the contrary, the nonlinear anisotropic

diffusion tends to destroy fine details (Fig. 4.7(d)). Again, the block effect can be

easily observed in the result due to the 3D Haar wavelet (Fig. 4.7(f)).

In the experiments above, we have demonstrated the high efficiency of the con-

structed 3D non-separable system for noise removal of neuron photon-limited imaging

data. Compared with the other two algorithms, namely median filtering and nonlinear anisotropic diffusion, our algorithm has a significant advantage: it preserves edge information well. The main reason, we believe, is the high efficiency

Figure 4.6: A confocal imaging volume with selected region of interest.

of the new 3D non-separable system to deal with directional information. We con-

structed the 3D non-separable system by adding new filters into existing separable

systems. These new filters in fact correspond to new directions that the separable system cannot handle effectively. More precisely, these new filters correspond

to the main diagonals in 3D. To make this clearer, we have investigated the energy distribution of the different subbands of the new 3D system on neuron imaging data.

As usual, for the new system, most of the energy (i.e., the l2 -norm) is contained in

the first filter: the low-pass filter F0. The energy contribution of the remaining 30 filters

is presented in Fig. 4.8. Most of the energy is captured by the detectors of first

and second order singularities. Notice that the subbands due to newly-added filters

(F27, F28, F29, and F30) have significant energy, even more than some of the existing filters. This means the considered data have energy along the main diagonal directions, and this directional information has been captured by the new 3D non-separable system.

Figure 4.7: Results in selected region of the confocal imaging (Fig. 4.6). (a) Original
volume; (b) manually segmented result; (c) result due to median filtering; (d) result
due to anisotropic diffusion filtering; (e) result due to our algorithm; and (f) result
due to the 3D Haar wavelet threshold algorithm.

Figure 4.8: Energy distribution in each subband of the UH Lifted Spline Filterbank (UH-LSF), other than the low-pass filter, on the 3D neuron data of Fig. 4.3.

4.2 Dendrite Detection

We have applied our method to both synthetic and real data. We created synthetic

data to: i) learn a generic tubular shape model, and ii) detect tubular structures

in unseen examples from synthetic and CT data. In both synthetic and real data,

parameter selection was performed with a grid search using three-fold cross-validation

with different kernels, penalty, and sigma values.
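The parameter selection just described can be sketched with a standard grid search. This illustration uses scikit-learn's SVC rather than the LIBSVM interface used in the dissertation, and the toy feature vectors and grid values are assumptions:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Grid search over kernel, penalty C, and gamma with three-fold
# cross-validation, mirroring the procedure described in the text.
# Synthetic two-class features stand in for the tubularity vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 5)),       # background samples
               rng.normal(3, 1, (60, 5))])      # tubular samples
y = np.array([0] * 60 + [1] * 60)

grid = GridSearchCV(
    SVC(probability=True),                       # probability outputs
    param_grid={"kernel": ["rbf", "poly"],
                "C": [1, 10, 100],
                "gamma": [0.1, 1, 10]},
    cv=3)
grid.fit(X, y)
print(round(float(grid.best_score_), 2) > 0.9)   # True
```

Enabling `probability=True` fits the sigmoid mapping whose A and B parameters are reported later for the learned model.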

4.2.1 Validation

The model to be learned is depicted in Fig. 4.9(b). Its morphological properties

include: i) variation of intensity, ii) radius variation from 0.5 to 1.5 µm, iii) variety

of branching sections, and iv) high and low curvature segments. Voxel size was

isotropic and it was set to 1.0 µm.

Figure 4.9(b) depicts an unseen example to detect the centerline. Note that the

radius decreases gradually from the bottom (1.5 µm) to the top (0.5 µm). The

centerline is overlaid in white color. Figures 4.10(c),4.10(d) depict the predicted

centerline with the estimated model of Fig. 4.10(a), while Figure 4.10(b) depicts

the centerline according to Sato’s measure [189] (σ = 0.5 µm, α = 1, β = 1, and

γ = 1). Note the difference between these two models, especially at the bottom of the model.


To quantify the performance of our method, we have constructed synthetic volumetric data to: i) learn a generic tubular shape model, and ii) predict a new


Figure 4.9: Synthetic tubular model constructed from cubic splines. (a) control
points and spline lines; and (b) volumetric representation.

example from tubular models.

The tubular model to predict was a neuron cell model from the

Duke-Southampton database [61]. The intensity distribution in the model was not

homogeneous. Dendrites were constructed from cylindrical models with a variation

in radius ranging from 0.5 to 0.9 µm.

In both learning and prediction, the TFV vectors were computed by selecting a sigma value of 0.5 µm. The resulting A and B values that estimate the probability outputs were −2.4629 and 0.3869, respectively.

To accurately quantify structure preservation after processing, we compute the

confusion matrix in local tubular neighborhoods (LN-confusion matrix) rather than

computing it in the entire volume. LN-confusion matrix components are the true

positive rate (TPR), the false positive rate (FPR), the true negative rate (TNR), false


Figure 4.10: Comparison of tubularity measures in a volumetric example with variation in diameter. (a) Synthetic volumetric data; note how the diameter “increases” by a factor of 2X from top to bottom; (b) prediction according to Sato's measure [189]; and (c),(d) prediction according to the model constructed in Fig. 4.9.



Figure 4.11: Comparative results in synthetic data. (a) Comparison of different

methods to enhance tubular structures using the LN geometric mean as a quality
metric. (b) Maximum intensity projection in the x, y axis of the synthetic data. Prob-
ability volume estimated from: (c) the synthetic spline model, (d) the S-measure,
and (e) the F-measure.

negative rate (FNR), and the geometric mean (GM). These last metrics are defined

as follows: TPR is the proportion of voxels in LN that were correctly identified to

belong to the object of interest, FPR is the proportion of voxels that were incorrectly

classified as the object of interest, TNR refers to the proportion of background voxels

that were classified correctly, FNR is the proportion of the object's voxels that were incorrectly classified as background, and the GM is given by GM = √(TPR · TNR).
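These rates can be computed directly from boolean masks. The sketch below is illustrative: the toy masks are hypothetical, and the "neighborhood" here is simply the whole toy volume rather than the dissertation's tubular neighborhood:

```python
import numpy as np

def ln_confusion(pred, truth, neighborhood):
    """Confusion-matrix rates computed only inside a local
    neighborhood (all arguments are boolean masks): TPR, FPR, TNR,
    FNR and the geometric mean GM = sqrt(TPR * TNR)."""
    p, t = pred[neighborhood], truth[neighborhood]
    tpr = np.mean(p[t])            # object voxels correctly detected
    fnr = 1.0 - tpr
    tnr = np.mean(~p[~t])          # background voxels correctly rejected
    fpr = 1.0 - tnr
    gm = np.sqrt(tpr * tnr)
    return tpr, fpr, tnr, fnr, gm

truth = np.zeros((10, 10, 10), dtype=bool); truth[4:6, 4:6, :] = True
pred = truth.copy(); pred[4, 4, :5] = False          # miss 5 object voxels
hood = np.ones_like(truth)                           # whole volume here
tpr, fpr, tnr, fnr, gm = ln_confusion(pred, truth, hood)
print(round(tpr, 3), round(tnr, 3))   # 0.875 1.0
```

The geometric mean penalizes a method that scores well on only one of the two classes, which is why it is used as the summary quality metric.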

We compare the performance of our algorithm with the Frangi et al. [71] measure

(F-measure) and the Sato et al. [189] measure (S-measure). The ground truth is

considered to be the binary volume of the synthetic cell. We then evaluate structure

preservation from the LN-confusion matrix by segmenting the probability volume at

different threshold values. Geometric mean curves at different probability threshold

values are depicted in Figure 4.11(a). The performance with respect to the LN metric

at the best GM value for each method is depicted in Table 4.4. Note that for

any possible probability threshold value our method preserves more structure than

the F-measure and the S-measure, and the best possible segmentation

is obtained at a probability value of 0.1. Our method achieved a GM value of 99.32

percent, while the F-measure and S-measure achieved a GM of approximately 90 percent.


4.2.2 Real data

To demonstrate the robustness of our method, we present results in confocal imaging

data for two different cell types: spiny striatal and CA1 pyramidal neuron cells. Each

Table 4.4: Performance Evaluation

Tubularity Measure TPR FPR TNR FNR GM

SVM 99.18 0.543 99.45 0.81 99.32
Spline Model 2 51.06 3.04 99.95 48.9 70.36
Frangi 80.04 0.14 99.85 19.53 89.64
Sato 80.00 0.13 99.88 19.98 89.38

cell image was acquired from different confocal microscopes under different conditions

such as image resolution, dye concentration, and microscope optical parameters. In

all cases we used the SVM library LIBSVM [42]; comparing results for different values of C and γ, we found that the best probability map was obtained with C equal to 100 and γ equal to 10. All experiments were performed

on an AMD Opteron™ PC at 2.0 GHz.

Figure 4.12(a) depicts a region of interest for one of the stacks after denoising.

Figure 4.12(b) depicts the probability map estimated from a statistical dendrite shape

model. The scale value of σ was 0.5 µm. Time to perform training was approximately

7 min, while the time to estimate the probability map was 45 min.

Figure 4.13(a) depicts dendrite segments from the CA1 pyramidal

cell of Fig. 4.12(a). Figure 4.13(b) illustrates the probability map estimated from

our method while Fig. 4.13(c) illustrates the results of applying the S-measure and

Fig. 4.13(d) depicts the result of applying the F-measure. Notice how our method

can detect highly anisotropic dendrites and how it preserves the general dendrite

morphology as compared with methods based on circular or semi-elliptical shapes.

Figures 4.14(a)-(c) depict a dendrite segment, centerline enhancement using



Figure 4.12: Results in typical stack for the CA1 pyramidal cell type. (a) Original
volume denoised with our FAST algorithm; and (b) the estimated probability map.

Sato's measure (parameters: α = 1, β = 1, and γ = 0.5), and our measure, respectively.


The medium spiny striatal neuron cell is presented in Fig. 4.15. Figure 4.15(a)

depicts the denoised cell volume with our FAST denoising algorithm, Fig. 4.15(b)

illustrates the probability map estimated from a dendrite segment. Figure 4.15(c)

depicts the result of segmenting the cell's volume from its estimated probability map (notice how spines are present in the segmented volume), and Fig. 4.15(d) illustrates


Figure 4.13: Results of applying different methods to detect 3D tube-like objects.

(a) Detail from the original data, (b) result of applying the estimated SVM model,
(c) after applying the S-measure, and (d) after applying the F-measure.


Figure 4.14: Comparison of tubular measures in a dendrite segment. (a) a typical

dendrite segment; (b) after Sato's measure; and (c) after our measure.

the morphological model estimated from the segmented volume. The value of σ

was 0.3 µm, and the times to perform training and prediction were 5 and 35 min, respectively.


Results from synthetic data suggest that when learning a tubular shape model,

tubular morphology is an important factor for tubular shape prediction. For ex-

ample, to classify tube-like structures with different diameters and shapes, one can

select a fixed scale and perform training from samples of structures with different

diameters (to some extent). Then the learning process takes into account different

shape properties for different diameters. The selection of IT should be decided by the user based on the specific application, for both training and prediction. Results

from both synthetic data and confocal data suggest that a machine learning approach

to detect semi-tubular shapes is highly adaptable to a broad variety of semi-tubular structures.



Figure 4.15: Results for the spiny striatal neuron type. (a) Denoised volume with our
FAST algorithm, (b) probability map obtained by SVMs from the single dendrite
model, (c) segmentation from the probability map, and (d) cell reconstruction as
cylindrical models.


Figure 4.16: Generalization of the synthetic tubularity measure. (a),(c) CT angiography datasets; and (b),(d) tubular structures detected using the model depicted in Fig. 4.9.

4.3 Morphological Reconstruction

4.3.1 Qualitative and Quantitative Analysis

To assess the quality of our morphological reconstruction, we compare qualitatively

and quantitatively the morphological reconstructions from three human experts (E1,

E2, and E3), one tracing obtained using the Auto Neuron (AN) module from Neurolucida™, and one using our method.

The cells of interest consist of a database (Figs. 4.17, 4.18) of six neuron cells, five obtained from a multiphoton microscope (Cells A to E) and a typical cell obtained from a confocal microscope (Cell F). We categorize the quality of the data as: good (Cell A), medium (Cells B to E), and poor (Cell F).

Comparison of the reconstructions reveals how well the different methods represent the

branch lengths, diameters, and connectivity.

Visual comparison (non-quantitative) demonstrates the gross success of each

method in capturing the morphology. Figure 4.17 presents a visual comparison of

morphological reconstructions performed (from top to bottom: our method, AN, and

three human experts E1, E2 and E3), while Fig. 4.18 depicts a visual comparison

of morphological reconstructions performed by all the tracers (from top to bottom: our method, the one obtained from AN, and the three human experts).

To quantitatively compare (globally and locally) the reconstructed cell topology

among all reconstructions, we used a variant of Sholl analysis [175, 199, 231, 232, 65,

151, 184]. Global descriptors include: i) total dendritic length (Fig. 4.19(a)), ii) total


Figure 4.17: Visual comparison of morphological reconstructions. From top to

bottom, morphological reconstruction obtained by our method, the computer tracer
AN and human tracers E1, E2 and E3 respectively.

surface area, iii) diameter statistics per segment (Fig. 4.19(c)), and iv) length statistics

per segment (Fig. 4.19(b)).

Table 4.5 presents the total dendritic length and total surface area. Among all

the tracers, E3 reported the longest dendritic length, while AN reported the shortest.

Our method reported dendritic lengths close to the median values. With respect to

surface area, our method reported the smallest surface area, and AN reported the

largest surface area. Table 4.6 lists statistics for the estimated dendritic diameter.

(a) Reconstruction cell B (b) Cell B from MP

(c) Reconstruction cell C (d) Cell C from MP

(e) Reconstruction cell D (f) Cell D from MP

(g) Reconstruction cell E (h) Cell E from MP

(i) Reconstruction cell F (j) Cell F from Confocal

Figure 4.18: (a),(c),(e),(g),(i) Morphological reconstruction of CA1 pyramidal cells

and (b),(d),(f),(h),(j) maximum intensity projections of the denoised volumes.

The average minimum values ranged from 0.9 to 1.5 µm. The human tracers reported

diameters in the [0.1, 0.2] µm range, significantly below the optical resolution of the

imaging systems that were used to collect the data sets. AN reported the longest

average diameter, followed by the three human tracers and finally by our method.

Tables 4.7 and 4.8 present statistics of dendritic lengths and path distances from the soma.

We extracted a typical subtree (Fig. 4.20, bottom) and quantitatively compared the reconstructions performed by all the tracers on that subtree only. Table 4.5 (right column) presents the estimated total dendrite length and surface area. The human tracer H3 reported the maximum length of 468.52 µm, while the minimum length of 409.84 µm was reported by AN; our method reported an estimated length of 460.9 µm. AN reported the maximum dendrite surface area, while the human tracer

H3 reported the minimum dendrite surface area. Table 4.9 depicts the results of

quantitative analysis of the reconstruction obtained by all the tracers.


Figure 4.19: A variation of Sholl analysis as performance metrics. (a) Path from soma; (b) dendrite length; (c) dendrite diameter.

Table 4.5: Performance Evaluation - Total Dendrite Length and Surface Area
Cell A Cell B Cell F Subtree
Length Surface Length Surface Length Surface Length Surface
H1 6327 1430.22 4018 1437.65 4017.86 4017.86 415.76 1131.69
H2 6806 1530.25 3397 1103.44 3397.31 2530.81 456.28 1296.60
H3 7150 334.27 4144 961.54 4193.27 223.38 468.52 545.87
OR 7065 2209.57 6228 1794.20 3861.38 2089.54 460.90 1119.72
AN 5698 1374.36 5816 865.54 1607.61 891.120 409.84 1571.59

Table 4.6: Performance Evaluation - Diameter Statistics

Cell A Cell B Cell F
µ σ min max µ σ min max µ σ min max
H1 1.1 0.6 0.1 5.5 1.9 1.2 0.8 9.2 1.98 1.52 0.90 7.94
H2 1.1 0.7 0.5 7.2 2.1 2.1 0.5 15.2 2.29 1.93 0.46 12.42
H3 0.9 1.0 0.1 7.7 1.1 1.1 0.3 8.8 1.43 2.33 0.4 18.8
OR 1.5 0.9 0.5 6.8 0.7 0.5 0.3 4.7 3.17 0.87 1.84 6.74
AN 1.4 0.7 0.8 7.5 2.8 1.4 1.6 15.0 0.69 0.38 0.27 3.11

Table 4.7: Performance Evaluation - Length Statistics
Cell A Cell B Cell F
µ σ min max µ σ min max µ σ min max
H1 46.5 39.9 0.2 208.4 55.7 43.3 0.3 206 40.58 39.31 0.17 161.20
H2 40.3 35.5 1.1 158.7 57.8 56.6 0.2 427.6 50.70 47.26 1.17 232.16
H3 44.4 35.2 0.1 166.5 51.8 40.6 0.2 179.4 46.59 42.29 0.12 200.68
AN 39.8 32.2 0.3 178.0 39.0 35.7 0.4 168.4 30.91 32.05 2.3 140.01
OR 44.2 36.5 0.3 187.1 43.9 39.6 2.2 197.1 34.78 33.66 0.54 164.02

Table 4.8: Performance Evaluation - Path from Soma

Cell A Cell B Cell F
µ σ min max µ σ min max µ σ min max
H1 221.5 143.1 10.1 640.3 342.1 222.5 0.3 830.4 175.2 113.8 1.0 559.4
H2 212.8 132.9 1.0 637.9 257.9 187.0 1.1 882.0 161.9 106.1 1.2 563.2
H3 218.4 133.8 0.0 626.0 305.2 193.1 0.2 736.9 184.2 101.1 0.0 543.6
AN 219.3 131.5 2.1 628.8 262.3 186.0 0.8 796.5 193.6 97.3 35.1 444.7
OR 260.8 172.0 0.3 753.9 284.4 218.9 8.9 776.5 230.2 139.4 23.1 614.4

Figure 4.20: Selected subtree to perform quantitative analysis.

Table 4.9: Performance Evaluation - Subtree

Path from Soma Length Diameter
µ σ min max µ σ min max µ σ min max
H1 86.35 67.20 0 197.54 59.39 52.87 5.0 161.8 2.39 0.12 2.18 2.48
H2 94.36 71.64 0 207.36 65.18 55.58 0.53 167.79 1.54 0.27 1.22 1.96
H3 92.6 70.72 0 203.89 66.93 53.36 2.47 164.14 1.17 0.49 0.64 2.0
AN 81.21 59.74 0 199.78 45.53 45.12 1.61 139.45 2.29 0.23 2.03 2.84
OR 98.53 79.86 0 230.25 65.84 64.14 0.98 187.68 1.06 0.21 0.85 1.48

(a) Cell A - ED (b) Cell B - ED (c) Cell F - ED

(d) Cell A - GD (e) Cell B - GD (f) Cell F - GD

Figure 4.21: Sholl analysis. (a)-(c) Comparison of the number of branches as a function of the Euclidean Distance (ED) from the soma; (d)-(f) number of branches as a function of the Geodesic Distance (GD) of the branching point to the soma.

Local descriptors include [199, 226] the number of branching points as function

of: i) the Euclidean Distance (ED), ii) the Geodesic Distance (GD) to the soma,

the distribution of iii) Dendrite Lengths (DL) and iv) Diameters (D). Figure 4.22

presents a qualitative comparison with respect to the metrics ED, GD, and D among all tracers for Cells A, B, and E.
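The branch-counting variant of Sholl analysis used for these descriptors can be sketched in a few lines. The bin width and the distance values below are hypothetical illustrations, not data from the tables:

```python
import numpy as np

def sholl_counts(branch_distances, bin_width=25.0):
    """Count branching points in concentric distance bins from the
    soma (Euclidean or geodesic distance, in micrometers). The bin
    width is an assumption for this sketch."""
    d = np.asarray(branch_distances, dtype=float)
    edges = np.arange(0.0, d.max() + bin_width, bin_width)
    counts, _ = np.histogram(d, bins=edges)
    return edges, counts

# Hypothetical branching-point distances for one reconstruction.
dist = [12.0, 18.0, 40.0, 44.0, 47.0, 90.0, 130.0]
edges, counts = sholl_counts(dist)
print(counts.tolist())   # [2, 3, 0, 1, 0, 1]
```

Running the same binning with Euclidean and geodesic distances yields the ED and GD curves compared across tracers.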

Figure 4.25 depicts a visual comparison of the reconstruction obtained with our
method (bottom) and the minimum intensity projection (top) along the x-y plane
of the denoised volume. In the middle row, Region A shows a detail of the soma
region, while Region B shows a detail of typical dendrites of average diameter.
Figure 4.26 depicts a comparison of the volumetric rendering (green) overlaid
with our morphological model.

(a) Cell A - DL (b) Cell B - DL (c) Cell F - DL

(d) Cell A - D (e) Cell B - D (f) Cell F - D

Figure 4.22: Sholl analysis. (a)-(c) Distribution of the Dendritic Length (DL);
(d)-(f) distribution of the Diameter (D).

(a) Subtree - ED (b) Subtree - GD (c) Subtree - DL (d) Subtree - D

Figure 4.23: Sholl analysis. (a) Number of branches as a function of the
Euclidean Distance (ED) from the soma; (b) number of branches as a function of
the Geodesic Distance (GD) of the branching point to the soma; (c) distribution
of the Dendritic Length (DL); and (d) distribution of the Diameter (D).

(a) OR vs H1 (b) OR vs H2 (c) OR vs H3 (d) OR vs AN

Figure 4.24: Quantitative analysis of diameter estimation for Cell A. Diameters
estimated by OR compared with: (a) H1, (b) H2, (c) H3, and (d) AN.


In a “typical” CA1 neuron with 100 µm long branches, this represents 7 to 14
standard branches. It is interesting to note that the largest discrepancy is seen
for the “good” neuron, where we would expect the smallest variance due to good
dye-filling and thus a bright, consistent signal.

In terms of quantitative global descriptors, all the tracers reported longer
dendritic lengths for the higher quality data sets, due to the larger signals in
the distal and thin processes. Tabulation of the total dendritic lengths across
all five tracers (human and computer) reveals total spreads of 1451, 772, and
747 µm for the “good”, “average”, and “bad” data sets, respectively.

The AN tracer consistently reported the smallest dendritic length, perhaps
reflecting the fact that it does not automatically reconnect discontiguous
branches. Our method (OR) reported total dendritic lengths near the median
values. One human tracer, H3, reported the longest total lengths on two of the
three data sets. Another human tracer, H1, reported the median dendritic lengths
for all three data


Figure 4.25: Comparison of the minimum intensity projection and the morphological
model. Top: minimum intensity projection of the denoised volume corresponding to
cell A; middle: reconstruction (white) overlaid with the minimum intensity
projection in two regions of interest; bottom: morphological reconstruction of
cell A.

Figure 4.26: Comparison of the volumetric data (overlaid in green) and the
morphological model.

sets. In contrast, another human operator, H2, reported the longest dendritic
length on one data set and the shortest on another. Since each tracing was
performed over several hours, possibly spanning multiple days, and each data set
was addressed over the span of several weeks, this highlights the variability
inherent in manual human tracings.

For the “good” data set, all tracers reported an average dendritic length that

was within a 6.7 µm range (close agreement). The average dendritic lengths for the

“average” and “poor” data sets were spread over a larger range (18.8 and 16.9 µm,

respectively). No tracer was consistently at either extreme.

Total surface area highlights differences that are due to total dendritic length
and diameter estimation. Since our method (OR) systematically estimates smaller
diameters, OR generates models with smaller surface areas despite consistently
generating the longest total dendritic length. Likewise, because it consistently
over-estimated diameters, AutoNeuron reports surface areas that are at the upper
end of the distribution.
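The linear dependence of surface area on a systematic diameter bias can be seen directly from the cylindrical model: the lateral area of each segment is π·d·L, so scaling all diameters scales the total area by the same factor. A minimal sketch (the segment list and the 20% figure are hypothetical, not values from the data sets above):

```python
import math

def total_surface_area(segments):
    """Lateral surface area of a dendritic model given as (length, diameter)
    cylinder segments: sum of pi * d * L (end caps ignored)."""
    return sum(math.pi * d * L for L, d in segments)

# Hypothetical model: two 100-um branches of diameter 2 um and 1 um
segs = [(100.0, 2.0), (100.0, 1.0)]
area = total_surface_area(segs)

# A systematic 20% diameter under-estimate shrinks the area by exactly 20%
shrunk = total_surface_area([(L, 0.8 * d) for L, d in segs])
print(round(area, 1), round(shrunk / area, 2))  # → 942.5 0.8
```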

Local descriptors indicate the “completeness” of the morphological
reconstruction. With respect to ED (Figs. 4.21(a)-(c)), all the tracers had
similar performance in Cell A (good quality) and comparable performance in
Cell B (medium quality), but significant differences in Cell F (poor quality).
Consistent results were obtained with respect to the GD measure, depicted in
Figs. 4.21(d)-(f). These results suggest that the computer-based OR and AN
tracers detected the largest number of branch points across the different data
sets as compared with the human tracers H1, H2, and H3. Similar results were
obtained when computing the dendritic length (DL) (Figs. 4.22(a)-(c)) and the
dendritic diameter (Figs. 4.22(d)-(f)), with relative consistency among tracers
in Cell A but notable discrepancies in Cell F.

Figure 4.23 presents a comparative analysis performed on the subtree (Fig. 4.20),
and Fig. 4.24 presents a direct comparison of the diameters estimated for each
dendrite segment (dendrite-wise) by all tracers with respect to our method.
Clustering along the line with slope one indicates close agreement of the
corresponding diameters, while clusters away from the diagonal indicate
disagreement. This comparison allows us to assess how each method over- or
under-estimates the diameters in comparison to our method.
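The slope-one clustering in Fig. 4.24 can also be summarized by a single number, e.g., the median ratio of paired segment diameters. The sketch below uses hypothetical matched-segment diameters, not values from our data:

```python
import numpy as np

def diameter_bias(d_ref, d_other):
    """Median ratio of another tracer's segment diameters to a reference
    tracer's, over matched dendrite segments. A value > 1 indicates
    systematic over-estimation relative to the reference, < 1
    under-estimation."""
    d_ref = np.asarray(d_ref, float)
    d_other = np.asarray(d_other, float)
    return float(np.median(d_other / d_ref))

# Hypothetical matched segments: the second tracer reports ~25% larger diameters
ref = [1.0, 1.2, 0.8, 2.0]
other = [1.25, 1.5, 1.0, 2.5]
print(round(diameter_bias(ref, other), 2))  # → 1.25
```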

Figures 4.27(a) and 4.27(b) depict the extracted centerline (white line) and the
ground truth centerline (brown line). We created 20 synthetic phantoms with a
variety of tubular structures and computed the distance from the extracted
centerlines to the ground truth. The maximum and average distances from the
extracted centerlines using our method were 0.87 µm and 0.57 µm, while the
maximum and average distances using Sato’s method were 0.94 µm and 0.63 µm.
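A maximum/average centerline error of this kind can be sketched as follows; this is an illustrative nearest-point approximation (it assumes both curves are densely sampled, so point-to-point distance approximates point-to-curve distance), not the exact evaluation procedure used above:

```python
import numpy as np

def centerline_error(extracted, truth):
    """For each extracted centerline sample, the distance to the nearest
    ground-truth sample; returns (max, mean)."""
    extracted = np.asarray(extracted, float)
    truth = np.asarray(truth, float)
    # pairwise distances, shape (n_extracted, n_truth)
    diff = extracted[:, None, :] - truth[None, :, :]
    d = np.linalg.norm(diff, axis=2).min(axis=1)
    return d.max(), d.mean()

# Hypothetical case: an extracted line offset 0.5 um from a straight ground truth
t = np.linspace(0, 10, 101)
truth = np.c_[t, np.zeros_like(t), np.zeros_like(t)]
extr = np.c_[t, 0.5 * np.ones_like(t), np.zeros_like(t)]
mx, mean = centerline_error(extr, truth)
print(round(mx, 2), round(mean, 2))  # → 0.5 0.5
```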

(a) (b)

Figure 4.27: Visualization of the extracted centerline on the phantom data
depicted in Fig. 4.10(a). (a),(b) 3D view and x-y projection of the extracted
centerline (brown line), overlaid with the true centerline (white line).

Figure 4.28 presents a generalization of our method to Computed Tomography
Angiography (CTA) data, used to extract the coronary arteries of a human heart.
In this case, a seed point was manually selected, and the front propagation was
based only on the probability volume presented in Fig. 4.16(d).

(a) (b)

Figure 4.28: Visualization of the results of centerline extraction when applied to

CTA data depicted in Fig. 4.16(d).

Chapter 5


In this chapter, we present possible directions for future research based on our
work, as well as our conclusions.

5.1 Future Work

5.1.1 Denoising

We presented a framework for denoising 3D optical volumes, assuming that the
noise follows a Poisson distribution and is signal dependent. There are two areas
in which further research can be explored. The first concerns how to suppress the
noise in the frame coefficients, which could be done by: i) considering a dual
space for the noise distribution (e.g., approximating it as a mixture of
Gaussians); or ii) integrating prior knowledge from the specific application
(e.g., estimating the posterior probability that an element is noise).
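For context, a standard baseline for signal-dependent Poisson noise (distinct from the frame-shrinkage scheme proposed in this dissertation) is the Anscombe variance-stabilizing transform, which makes the noise approximately Gaussian with unit variance so that any Gaussian-noise denoiser, including frame shrinkage, can then be applied:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson(lam) counts -> approx. Gaussian, variance 1
    (the approximation is good for lam of roughly 4 and above)."""
    return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at very low counts)."""
    return (np.asarray(y, float) / 2.0) ** 2 - 3.0 / 8.0

# Stabilization check on synthetic Poisson data at two intensity levels:
# after the transform, both have (approximately) unit standard deviation.
rng = np.random.default_rng(0)
lo = rng.poisson(5.0, 100_000)
hi = rng.poisson(50.0, 100_000)
print(round(anscombe(lo).std(), 1), round(anscombe(hi).std(), 1))  # → 1.0 1.0
```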

The second area concerns designing Parseval frames for texture feature
extraction. Frames based on digital filters provide a characterization of the
image in terms of prescribed features detected by a set of multidirectional
digital filters. These textural features can be used to detect tubular-like
structures (for example, the coronary arteries in CTA imaging).

5.1.2 Tubular Shape Learning

Regarding tubular shape learning, we have presented a framework that allows
learning and predicting generic tubular shape models. Future directions in this
area include the following: investigating different probabilistic frameworks for
the tubular feature vectors (e.g., determining the optimal number of feature
vectors needed to generalize well over a considerable number of cases), and
creating a hybrid model that enhances tubular objects according to both shape
and texture. This approach has the potential to combine structural information
and specific texture features to perform segmentation of tubular objects in
medical imaging.
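For comparison with such learned models, the classical fixed-form alternative is the analytic vesselness of Frangi et al. [71], computed from the Hessian eigenvalues sorted by magnitude. The sketch below is that classical measure, not the learned predictor of this dissertation; the eigenvalue test points and the parameter values are illustrative:

```python
import numpy as np

def frangi_vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=15.0):
    """Frangi's vesselness at a 3D point, from Hessian eigenvalues sorted by
    magnitude, |l1| <= |l2| <= |l3|. Bright tubes on a dark background
    require l2, l3 << 0."""
    if l2 >= 0 or l3 >= 0:
        return 0.0
    Ra = abs(l2) / abs(l3)                 # distinguishes plates from lines
    Rb = abs(l1) / np.sqrt(abs(l2 * l3))   # distinguishes blobs from lines
    S = np.sqrt(l1**2 + l2**2 + l3**2)     # second-order structure strength
    return ((1 - np.exp(-Ra**2 / (2 * alpha**2)))
            * np.exp(-Rb**2 / (2 * beta**2))
            * (1 - np.exp(-S**2 / (2 * c**2))))

# A tube-like point (one small, two large negative eigenvalues) scores
# higher than a blob-like point (three comparable negative eigenvalues).
tube = frangi_vesselness(-0.5, -30.0, -32.0)
blob = frangi_vesselness(-30.0, -31.0, -32.0)
print(tube > blob)  # → True
```

A learning-based framework replaces this fixed analytic combination with a predictor trained on feature vectors of this kind.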

5.1.3 Morphological Reconstruction

We presented a framework for morphological reconstruction of neurons in terms of
cylindrical lengths and diameters. Based on these parameters, algorithms for
spine detection could be implemented. We have observed that the probability
volume greatly enhances spine structures in the dendrites; therefore, by
combining the cylindrical description (already estimated) with the probability
volume (in which spines are enhanced), algorithms for spine detection could be
explored further.
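One simple way to combine the two ingredients named above would be to flag high-probability voxels lying just outside the fitted dendrite cylinder as spine candidates. The sketch below is purely illustrative (no spine-detection algorithm is proposed in this work); the margin and probability threshold are hypothetical parameters:

```python
import numpy as np

def spine_candidates(points, prob, centerline, radii, margin=2.0, p_min=0.5):
    """Flag high-probability points outside the fitted dendrite cylinder but
    within `margin` um of its surface -- crude spine candidates.

    points     : (N, 3) voxel coordinates
    prob       : (N,) probability-volume values at those points
    centerline : (M, 3) dendrite centerline samples
    radii      : (M,) estimated cylinder radius at each centerline sample
    """
    points = np.asarray(points, float)
    d = np.linalg.norm(points[:, None, :] - centerline[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                     # closest centerline sample
    dist = d[np.arange(len(points)), nearest]      # distance to that sample
    r = radii[nearest]                             # local cylinder radius
    return (prob >= p_min) & (dist > r) & (dist <= r + margin)

# Hypothetical straight dendrite of radius 1 um along x, one protrusion at y = 2
cl = np.c_[np.linspace(0, 10, 11), np.zeros(11), np.zeros(11)]
radii = np.ones(11)
pts = np.array([[5.0, 0.5, 0.0],   # inside the cylinder: not a candidate
                [5.0, 2.0, 0.0],   # spine-like protrusion: candidate
                [5.0, 9.0, 0.0]])  # too far from the dendrite: background
prob = np.array([0.9, 0.9, 0.9])
print(spine_candidates(pts, prob, cl, radii).tolist())  # → [False, True, False]
```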

5.2 Conclusion

This dissertation has presented a general framework for automatic three
dimensional morphological reconstruction of neurons from optical images. We have
presented: i) a novel frame-shrinkage denoising algorithm for 3D optical images;
ii) a new method for enhancing irregular tubular structures without assuming a
particular tubular shape; and iii) an automatic algorithm for the three
dimensional reconstruction of neurons.

The methodology presented in this dissertation is designed to guide online
functional experiments and to automate the creation of neuron libraries.


[1] Neuroscience Research Group, School of Biological Sciences, Southampton Uni-

versity: Duke/Southampton archive of neuronal morphology.
[2] Deconvolution recipes. Help manual for Huygens Software, 2003.

[3] M. A. Abdul-Karim, B. Roysam, N. Dowell-Mesfin, A. Jeromin, M. Yuksel,

and S. Kalyanaraman. Automatic selection of parameters for vessel/neurite
segmentation algorithms. IEEE Transactions on Image Processing, 14(5):1338–
1350, September 2005.

[4] G. Agam, S. Armato III, and C. Wu. Vessel tree reconstruction in thoracic
CT scans with application to nodule detection. IEEE Transactions on Medical
Imaging, 24(4):486–499, 2005.

[5] G. Agam and C. Wu. Probabilistic modeling based vessel enhancement in

thoracic CT scans. In Proc. IEEE Conf. on Computer Vision and Pattern
Recognition, volume 2, pages 649–654, Washington, DC, USA, June 2005.

[6] Y. Ai and J. Jaffe. Design and preliminary tests of a family of adaptive wave-
forms to measure blood vessel diameter and wall thickness. IEEE Trans. Ul-
trasonics, Ferroelectrics, and Freq. Control, 52(2):250–260, Feb. 2005.

[7] K. Al-Kofahi, S. Lasek, D. Szarowski, C. Pace, and G. Nagy. Rapid automated

three-dimensional tracing of neurons from confocal image stacks. IEEE Trans.
Information Technology in Biomedicine, 6(2):171–187, June 2002.

[8] L. Ambrosio and H. M. Soner. Level set approach to mean curvature flow in
arbitrary codimension. J. Diff. Geom., 43:693–737, 1996.

[9] E. Anshelevich, S. Owens, F. Lamiraux, and L. E. Kavraki. Deformable vol-

umes in path planning applications. In Proc. IEEE International Conference
on Robotics and Automation, pages 2290–2295, San Fransisco, CA, April 2000.

[10] L. Antiga, B. Ene-Iordache, and A. Remuzzi. Computational geometry for
patient-specific reconstruction and meshing of blood vessels from MR and
CT angiography. IEEE Transactions on Medical Imaging, 22(5):674–684, May 2003.

[11] G. Ascoli. Progress and perspectives in computational neuroanatomy. Anatom-

ical Record, 257(6):195–207, 1999.

[12] G. Ascoli. Mobilizing the base of neuroscience data: the case of neuronal
morphologies. Nature Rev. Neurosci., 7:318–324, 2006.

[13] G. Ascoli and J. Atkeson. Incorporating anatomically realistic cellular-level

connectivity in neural network models of the rat hippocampus. Biosystems,
79:173–181, 2005.

[14] R. Ashino, S. Desjardins, C. Heil, M. Nagase, and R. Vaillancourt. Microlocal

analysis, smooth frames and denoising in Fourier space. pages 153–160, 2004.

[15] S. Aylward and E. Bullitt. Initialization, noise, singularities, and scale in height
ridge traversal for tubular object centerline extraction. IEEE Transactions on
Medical Imaging, 21(2):61–75, 2002.

[16] W. Bai, X. Zhou, L. Ji, J. Cheng, and S. Wong. Automatic dendritic spine anal-
ysis in two-photon laser scanning microscopy images. Cytometry A, (71A):818–
826, July 2007.

[17] R. Balan, P. Casazza, and D. Edidin. On signal reconstruction from absolute

value of frame coefficients. In SPIE Wavelets Applications in Signal and Image
Processing XI, volume 5914, pages 355–362, Jan. 2005.

[18] D. Barash. A fundamental relationship between Bilateral filtering, adaptive

smoothing and the nonlinear Diffusion Equation. IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, 24(6):844–847, Jun. 2002.

[19] A. Barbu, V. Athitsos, B. Georgescu, S. Boehm, P. Durlak, and D. Comani-

ciu. Hierarchical learning of curves: Application to guidewire localization in
fluoroscopy. In IEEE Proc. Computer Vision and Pattern Recognition, pages
1–8, Minneapolis, MN, June 2007.

[20] A. Barbu, L. Bogoni, and D. Comaniciu. Hierarchical part-based detection

of 3D flexible tubes: Application to CT colonoscopy. In Proc. Medical Im-
age Computing and Computer-Assisted Intervention, volume 2, pages 462–470,
Copenhagen, Denmark, Sep 2006.

[21] C. Beaman-Hall, J. Leahy, S. Benmansour, and M. Vallano. Glia modulate
NMDA-mediated signaling in primary cultures of cerebellar granule cells. J.
Neurochem., 71:1993–2005, 1998.

[22] T. Behrens, K. Rohr, and H. Stiehl. Robust segmentation of tubular structures

in 3-D medical images by parametric object detection and tracking. IEEE
Transactions on Systems, Man, and Cybernetics, 33(4):554–561, 2003.

[23] R. Bellman and R. Kalaba. Dynamic Programming and modern control theory.
London mathematical society monographs, London, 1965.

[24] R. Benavides-Piccione, I. Ballesteros-Yáñez, J. DeFelipe, and R. Yuste. Cortical area

and species differences in dendritic spine morphology. J. Neurocytol.,
31(3-5):337–346, 2002.

[25] J. J. Benedetto and S. Li. The theory of multiresolution analysis frames and
applications to filter banks. Appl. Comp. Harm. Anal., 5:389–427, 1998.

[26] G. Bertrand and M. Couprie. New 2D parallel thinning algorithms based on

critical kernels. In International Workshop on Combinatorial Image Analysis,
pages 45–59, Berlin, Germany, June 2006.

[27] P. Besbeas, I. D. Feis, and T. A. Sapatinas. Comparative simulation study

of wavelet shrinkage estimators for Poisson counts. International Statistical
Review, 72:209–237, 2004.

[28] I. Bitter, A. Kaufman, and M. Sato. Penalized-distance volumetric skele-

ton algorithm. IEEE Transactions on Visualization and Computer Graphics,
7(3):195–206, July-September 2001.

[29] J. Boutet de Monvel, S. L. Calvez, and M. Ufendahl. Image restoration for

confocal microscopy: improving the limits of deconvolution with application
to the visualization of the mammalian hearing organ. Biophysical Journal,
80:2455–2470, May 2005.

[30] J. Boutet de Monvel, S. Le Calvez, and M. Ulfendahl. Image restoration for

confocal microscopy: improving the limits of deconvolution, with application
to the visualization of the mammalian hearing organ. Journal of Biophysics,
80(5):2455–70, 2001.

[31] W. F. Bronsvoort. Direct Display Algorithms for Solid Modelling, page 79.
Delft University Press, 1990.

[32] P. Broser, R. Schulte, A. Roth, F. Helmchen, S. Lang, G. Wittum, and B. Sak-
mann. Nonlinear anisotropic diffusion filtering of three-dimensional image data
from two-photon microscopy. J. Biomedical Optics, 9(6):1253–1264, November 2004.

[33] P. Broser, R. Schulte, A. Roth, F. Helmchen, S. Lang, G. Wittum, and B. Sak-

mann. Nonlinear anisotropic diffusion filtering of three-dimensional image data
from two-photon microscopy. J Biomedical Optics, 9(6):1253–1264, 2004.

[34] T. Brown, I. Tran, D. Backos, and J. Esteban. NMDA receptor-dependent

activation of the small GTPase Rab5 drives the removal of synaptic AMPA
receptors during hippocampal LTD. Neuron, 45:81–94, 2005.

[35] R. H. Byrd, P. Lu, and J. Nocedal. A limited memory algorithm for bound con-
strained optimization. SIAM Journal on Scientific and Statistical Computing,
16:1190–1208, 1995.

[36] R. H. Byrd, J. Nocedal, and R. B. Schnabel. Representation of quasi-newton

matrices and their use in limited memory methods. Mathematical Program-
ming, 63:129–156, 1994.

[37] W. Cai and A. Chung. Multi-resolution vessel segmentation using normalized

cuts in retinal images. In Proc. Medical Image Computing and Computer-
Assisted Intervention, volume 2, pages 928–936, Copenhagen, Denmark, Sep 2006.

[38] E. J. Candes and D. Donoho. Curvelets: A surprisingly effective nonadaptive

representation for objects with edges. In A. Cohen, C. Rabut, and L. L.
Schumaker, editors, Curve and Surface Fitting. Vanderbilt University Press,
Nashville, 1999.

[39] J. Cardoso and L. Corte-Real. Toward a generic evaluation of image segmentation.


[40] N. Carnevale, K. Tsai, B. Claiborne, and T. Brown. Comparative electrotonic

analysis of three classes of rat hippocampal neurons. J. Neurophysiol, 78:703–
720, 1997.

[41] P. G. Casazza. The art of frame theory. Taiwanese J. of Math., 4:129–201, 2000.


[42] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines,
2001. Software available at

[43] G. Chen and B. Kegl. Image denoising with complex ridgelets. Pattern Recog-
nition, 40(2):578–585, 2007.
[44] S. Chen, J. Carroll, and J. Messenger. Quantitative analysis of reconstructed
3-D coronary arterial tree and intracoronary devices. IEEE Transactions on
Medical Imaging, 21(7):724–740, 2002.
[45] J. Cheng, X. Zhou, E. Miller, R. Witt, J. Zhu, B. Sabatini, and S. Wong.
A novel computational approach for automatic dendrite spines detection in
two-photon laser scan microscopy. J Neurosci Methods, 165(1):122–134, 2007.
[46] D. Chung and G. Sapiro. Segmentation-free skeletonization of gray-scale images
via PDE’s. In Proc. International Conference on Image Processing, 2000.
[47] C. Cortes and V. Vapnik. Support-vector networks. Mach. Learn., 20:237–297, 1995.
[48] M. Couprie, D. Coeurjolly, and R. Zrour. Discrete bisector function and eu-
clidean skeleton in 2D and 3D. Image and Vision Computing, 25(10):1543–
1556, 2007.
[49] H. Cuntz, A. Borst, and I. Segev. Optimization principles of dendritic structure.
Theoretical Biology and Medical Modelling, 4(21):1–8, 2007.
[50] M. de Bruijne, B. van Ginneken, M. Viergever, and W. Niessen. Adapting
active shape models for 3D segmentation of tubular structures in medical im-
ages. In Proc. Information Processing in Medical Imaging, volume 2732, pages
136–147, 2003.
[51] A. R. Depierro. A modified expectation maximization algorithm for penalized
likelihood estimation in emission tomography. IEEE Transactions on Medical
Imaging, 14:132–137, Jan. 1995.
[52] T. Deschamps and L. Cohen. Fast extraction of minimal paths in 3D images
and applications to virtual endoscopy. Medical Image Analysis, 5(4):281–299,
Dec 2001.
[53] M. Descoteaux, M. Audette, K. Chinzei, and K. Siddiqi. Bone enhancement
filtering: application to sinus bone segmentation and simulation of pituitary
surgery. In Proc. Medical Image Computing and Computer-Assisted Interven-
tion, pages 9–16, Palm Springs, CA, Oct 2005.
[54] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection
algorithm. IEEE Trans Signal Processing, 53(8):2961–2974, August 2005.

[55] T. Dey, J. Giesen, and S. Goswami. Shape segmentation and matching with
flow discretization. In Proc. Workshop on Algorithms Data Structures, LNCS
2748, pages 25–36, 2003.

[56] A. Dima, M. Scholz, and K. Obermayer. Semi-automatic quality determina-

tion of 3D confocal microscope scans of neuronal cells denoised by 3D-wavelet
shrinkage. In Wavelet Applications VI-Proceedings of the SPIE, volume 3723,
pages 446–457, 1999.

[57] A. Dima, M. Scholz, and K. Obermayer. Automatic segmentation and skele-

tonization of neurons from confocal microscopy images based on the 3-D
wavelet transform. IEEE Transactions on Image Processing, 11(7):790–801,
Jul 2002.

[58] P. Dimitrov, J. Damon, and K. Siddiqi. Flux invariants for shape. In IEEE
Conf. Computer Vision Pattern Recognition, volume 1, pages 835–841, Jun 2003.

[59] D. Donoho, S. Mallat, R. V. Sachs, and Y. Samuelides. Locally stationary

covariance and signal estimation with macrotiles. IEEE Transactions on Signal
Processing, 51(3):614–627, 2003.

[60] D. L. Donoho. Non-linear wavelet methods for recovery of signals, densities

and spectra from indirect and noisy data. In I. Daubechies, editor, Symposia
in Applied Mathematics: Different Perspectives on Wavelets, pages 173–205.
American Mathematical Society, 1993.

[61] Duke-Southampton Archive, 2006.

[62] A. El-Baz, A. Farag, G. Gimelfarb, M. El-Ghar, and T. Eldiasty. A new adaptive

probabilistic model of blood vessels for segmenting MRA images. In Proc.
Medical Image Computing and Computer-Assisted Intervention, volume 2,
pages 799–806, Copenhagen, Denmark, Sep 2006.

[63] Y. C. Eldar and G. D. Forney. Optimal tight frames and quantum measure-
ment. IEEE Trans. Inform. Theory, 48(3):599–610, Mar. 2002.

[64] J. Evers, S. Schmitt, M. Sibila, and C. Duch. Progress in functional

neuroanatomy: precise automatic geometric reconstruction of neuronal morphology
from confocal image stacks. Journal of Neurophysiology, 93:2331–2342, 2005.

[65] E. Famiglietti. New metrics for analysis of dendritic branching patterns demon-
strating similarities and differences in ON and ON-OFF directionally selective
retinal ganglion cells. J. Comparative Neurology, 324:295–321, 1992.

[66] A. Ferreira and S. Ubéda. Computing the medial axis transform in parallel with
eight scan operations. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 21(3):277–282, March 1999.

[67] J. A. Fessler and A. O. Hero. Penalized maximum-likelihood image reconstruc-

tion using space-alternating generalized EM algorithms. IEEE Transactions on
Medical Imaging, 4:1417–1429, Oct. 1995.

[68] J. Fiala. Reconstruct: A free editor for serial section microscopy. J. Microscopy,
218(1):52–61, April 2005.

[69] F. Fleuret and P. Fua. Dendrite tracking in microscopic images using minimum
spanning trees and localized E-M. Technical Report EPFL/CVLAB2006.02,
EPFL, March 2006.

[70] C. Florin, N. Paragios, and J. Williams. Globally optimal active contours,

sequential Monte Carlo and on-line learning for vessel segmentation. In Proc.
European Conference on Computer Vision, number 3, pages 476–489, Graz,
Austria, 2006.

[71] A. Frangi, W. Niessen, K. Vincken, and M. Viergever. Multiscale vessel en-

hancement filtering. In A. Colchester and S. Delp, editors, Proc. First Medical
Image Computing and Computer Assisted Intervention, volume 1496, pages
130–137, Cambridge, MA, Oct 1998. Springer Verlag.

[72] Funetics.

[73] R. Gayle, P. Segars, M. C. Lin, and D. Manocha. Path planning for deformable
robots in complex environments. In Proceedings of Robotics: Science and Sys-
tems, Cambridge, USA, June 2005.

[74] S. Ge, X. Lai, and A. Mamun. Boundary following and globally convergent
path planning using instant goals. In IEEE Transactions on Systems, Man and
Cybernetics, Part B: Cybernetics, volume 35, pages 240–254, April 2005.

[75] J. Glaser and E. Glaser. Neuron imaging with Neurolucida–a PC-based system
for image combining microscopy. Computerized Medical Imaging and Graphics,
14(5):307–317, 1990.

[76] P. J. Green. Bayesian reconstruction from emission tomography data using a
modified EM algorithm. IEEE Trans. Med. Imag., 9:84–93, Jan. 1990.

[77] H. Greenspan, M. Laifenfeld, S. Einav, and O. Barnea. Evaluation of center-

line extraction algorithms in quantitative coronary angiography. IEEE Trans-
actions on Medical Imaging, 20(9):928–941, September 2001.

[78] X. Gu, D. Yu, and L. Zhang. Image thinning using pulse coupled neural
network. Pattern Recognition Letters, 25(9):1075–1084, July 2004.

[79] X. Guanglei, Z. Xiaobo, J. Liang, A. Degterev, and S. Wong. Automated label-

ing of neurites in fluorescence microscopy images. In Proc. IEEE International
Symposium on Biomedical Imaging: Macro to Nano, pages 534–537, Arlington,
Virginia, USA, April 2006.

[80] S. Hadjidemetriou, D. Toomre, and J. Duncan. Segmentation and 3D recon-

struction of microtubules in total internal reflection fluorescence microscopy
(TIRFM). In S. Berlag, editor, Proc. 8th Medical Image Computing and
Computer-Assisted Intervention (MICCAI), Lecture Notes in Computer Sci-
ence, pages 761–769, Palm Springs, CA, October 2005.

[81] K. Hama, T. Arii, and T. Kosaka. Three-dimensional morphometrical study of

dendritic spines of the granule cell in the rat dentate gyrus with HVEM stereo
images. J Electron Microsc Tech., 12(2):80–87, 1989.

[82] C. Han, T. S. Hatsukami, J. N. Hwang, and C. Yuan. A fast minimal path

active contour model. IEEE Transactions on Medical Imaging, 10(6):865–873,
Jun 2001.

[83] C. Hanger, S. Haworth, R. Molthen, C. Dawson, and R. Johnson. Simple cone

beam backprojection reconstruction for robust skeletonization of 3D vascular
trees. In IEEE Nuclear Science Symposium Conference Record, volume 2, pages
1003–1007, 2002.

[84] K. Harris, F. Jensen, and B. Tsao. Three-dimensional structure of dendritic

spines and synapses in rat hippocampus (CA1) at postnatal day 15 and adult
ages: implications for the maturation of synaptic physiology and long-term
potentiation. J Neurosci, 12(7):2685–2705, 1992.

[85] M. Hassouna, A. Abdel-Hakim, and A. Farag. PDE-based robust robotic
navigation. Image and Vision Computing, in press,
doi:10.1016/j.imavis.2007.03.005, 2007.

[86] M. S. Hassouna, A. Farag, and R. Falk. Differential fly-throughs (DFT): A
general framework for computing flight paths. In Proc. Medical Image Com-
puting and Computer-Assisted Intervention, volume 1, pages 654–661, Palm
Springs, CA, 2005.

[87] W. He. Adaptive algorithms for skeletonizing 3-D noisy binary images: Ap-
plications to neurobiology. Master’s thesis, Rensselaer Polytechnic Institute,
Troy, NY., 1998.

[88] W. He, T. Hamilton, A. Cohen, T. Holmes, C. Pace, D. H. Szarowski, J. N.

Turner, and B. Roysam. Automated Three-Dimensional Tracing of Neurons in
Confocal and Brightfield Images. Microscopy and Microanalysis, 9(4):296–310,
August 2003.

[89] T. Hebert and R. Leahy. A generalized EM algorithm for 3D Bayesian recon-

struction from Poisson data using Gibbs priors. IEEE Transactions on Medical
Imaging, 8:194–202, 1989.

[90] A. Herzog, G. Krell, B. Michaelis, J. Wang, W. Zuschratter, and A. K. Braun.

Restoration of three-dimensional quasi-binary images from confocal microscopy
and its application to dendritic trees. In SPIE, Three-Dimensional Microscopy:
Image Acquisition and Processing IV, pages 146–157, 1997.

[91] M. Hines and N. Carnevale. NEURON: a tool for neuroscientists. The Neuro-
scientist, 7:123–135, 2001.

[92] D. Hoffman, J. Magee, C. Colbert, and D. Johnston. K+ channel regulation

of signal propagation in dendrites of hippocampal pyramidal neurons. Nature,
387(869-875), 1997.

[93] A. Holmes, K. Weedmark, and G. Gloor. Mutations in the extra sex combs
and enhancer of polycomb genes increase homologous recombination in somatic
cells of drosophila melanogaster. Genetics, 172(4):2367–2377, 2006.

[94] T. Holmes. Blind deconvolution of quantum-limited incoherent imagery:

maximum-likelihood approach. Journal of the Optical Society of America,
9(2):1052–1061, 1992.

[95] M. Holtzman-Gazit, D. Goldsher, and R. Kimmel. Hierarchical segmentation of

thin structures in volumetric medical images. In Proc. Medical Image Comput-
ing and Computer-Assisted Intervention, volume 2, pages 562–569, Montréal,
Canada, Nov 2003.

[96] T. Hoogland and P. Saggau. Facilitation of L-type Ca2+ channels in dendritic
spines by activation of β2 adrenergic receptors. Journal of Neuroscience,
24(39):8416–8427, 2004.
[97] L. Huang, G. Wan, and C. Liu. An improved parallel thinning algorithm.
In Seventh International Conference on Document Analysis and Recognition,
volume 2, pages 780–783, Los Alamitos, CA, USA, 2003.
[98] T. Huysmans, J. Sijbers, and B. Verdonk. Statistical shape models for tubular
objects. In The second annual IEEE BENELUX/DSP Valley Signal Processing
Symposium, Metropolis, Belgium, March 2006.
[99] L. Ibanez, W. Schroeder, L. Ng, and J. Cates. The ITK Software Guide.
Kitware Inc, 2004.
[100] C. Ingrassia, P. Windyga, and M. Shah. Segmentation and tracking of coronary
arteries. In Proceedings of the First Joint BMES/EMBS Conference, volume 1,
page 203, 1999.
[101] I. Isgum, B. Ginneken, and M. Prokop. A pattern recognition approach to
automated coronary calcium scoring. In Proc. of the 17th International Pattern
Recognition, volume 3, pages 746–749, Aug. 2004.
[102] P. A. Jansson. Deconvolution of Images and Spectra. Academic Press, second
edition, 1997.
[103] J. Jeong-Won, K. Tae-Seong, D. Shin, S. Do, M. Singh, and V. Marmarelis. Soft
tissue differentiation using multiband signatures of high resolution ultrasonic
transmission tomography. IEEE Transactions on Medical Imaging, 24(3):399–
408, March 2005.
[104] X. Ji and J. Feng. A new approach to thinning based on time-reversed heat
conduction model. In International Conference on Image Processing, volume 1,
pages 653–636, 2004.
[105] L. Jianfei, Z. Xiaopeng, and F. Blaise. Distance contained centerline for virtual
endoscopy. In Proc. IEEE International Symposium on Biomedical Imaging:
Macro to Nano, pages 261–264, 2004.
[106] H. Jiang and N. Alperin. A new automatic skeletonization algorithm for 3D
vascular volumes. In Proc. IEEE Engineering in Medicine and Biology Society,
pages 1565–1568, September 2004.

[107] M. Jiang, Q. Ji, and B. McEwen. Model-based automated extraction of mi-
crotubules from electron tomography volume. Transactions on Information
Technology in Biomedicine, 10(3):608–616, July 2006.

[108] S. Jiang, X. Zhou, T. Kirchhausen, and S. Wong. Detection of molecular

particles in live cells via machine learning. Cytometry Part A, 71A(8):563–575, 2007.

[109] I. A. Kakadiaris, A. Santamaria-Pang, B. Losavio, Y. Liang, P. Saggau, and

C. M. Colbert. Orion: Automated reconstruction of neuronal morphologies
from image stacks. In Proc. 24th Annual Houston Conference on Biomedical
Engineering Research (HSEMB), page 275, Houston, February 2007.

[110] D. G. Kang and J. B. Ra. A new path planning algorithm for maximizing vis-
ibility in computed tomography colonography. IEEE Transactions on Medical
Imaging, 24(8):957–968, 2005.

[111] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models.

International Journal of Computer Vision, 1(4):321–331, 1988.

[112] R. E. Kass, B. P. Carlin, A. Gelman, and R. M. Neal. Markov chain Monte

Carlo in practice: A roundtable discussion. Journal of The American Statistical
Association, 52(2):93–100, May 1998.

[113] B. Kegl and A. Krzyzak. Piecewise linear skeletonization using principal curves.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(1):59–74, 2002.

[114] C. Kirbas and F. Quek. A review of vessel extraction techniques and algorithms.
ACM Computing Surveys, 36(2):81–121, 2004.

[115] I. Y. Koh, W. Lindquist, K. Zito, E. A. Nimchinsky, and K. Svoboda. An

image analysis algorithm for dendritic spines. Neural Computation, 14:1283–
1310, 2002.

[116] Y. Y. Koh. Automated recognition algorithms for neural studies. PhD thesis,
State University of New York at Stony Brook, 2001.

[117] E. D. Kolaczyk. Bayesian multiscale models for Poisson processes. J. Amer.

Statist. Ass., 94:920–933, 1999.

[118] I. Konstantinidis, A. Santamarı́a-Pang, and I. A. Kakadiaris. Frames-based
denoising in 3D confocal microscopy imaging. In Proc. 27th Annual Interna-
tional Conference of the IEEE Engineering in Medicine and Biology Society,
Shanghai, China, Sept. 2005.
[119] E. Korkotian and M. Segal. Structure-function relations in dendritic spines: is
size important? Hippocampus, 10(5):587–596, 2000.
[120] K. Krissian. Flux-based anisotropic diffusion applied to enhancement of 3D
angiograms. IEEE Transactions on Medical Imaging, 21(11), Nov. 2002.
[121] K. Krissian, G. Malandain, N. Ayache, R. Vaillant, and Y. Trousset. Model
based detection of tubular structures in 3D images. Computer Vision and
Image Understanding, 80(2):130–171, 2000.
[122] A. Kuijper and O. Olsen. Geometric skeletonization using the symmetry set.
In Proc. IEEE International Conference on Image Processing, volume 1, pages
497–500, Sep 2005.
[123] M. W. Law and A. C. Chung. Weighted local variance-based edge detection and
its application to vascular segmentation in magnetic resonance angiography.
IEEE Transactions on Medical Imaging, 26(9):1224–1241, 2007.
[124] W. Law and A. Chung. Segmentation of vessels using weighted local variances
and an active contour model. In Proc. IEEE Conf. on Computer Vision and
Pattern Recognition, page 83, Jun 2006.
[125] J. Leandro, J. Soares, R. Cesar, and H. Jelinek. Blood vessels segmentation
in nonmydriatic images using wavelets and statistical classifiers. In Proc. XVI
Brazilian Symposium on Computer Graphics and Image Processing, pages 262–
269, Oct 2003.
[126] J. Lee, P. Beighley, E. Ritman, and N. Smith. Automatic segmentation of 3D
micro-CT coronary vascular images. Medical Image Analysis, doi:10.1016/,
2007. In press.
[127] T. C. Lee, R. L. Kashyap, and C. N. Chu. Building skeleton models via
3D medial surface/axis thinning algorithms. CVGIP: Graph. Models Image
Process., 56(6):462–478, 1994.
[128] K. Lekadir and G. Yang. Carotid artery segmentation using an outlier im-
mune 3D active shape models framework. In Proc. Medical Image Computing
and Computer-Assisted Intervention, volume 1, pages 289–296, Copenhagen,
Denmark, Sep 2006.

[129] H. Li and A. Yezzi. Vessels as 4D curves: global minimal 4D paths to extract
3D tubular surfaces and centerlines. IEEE Transactions on Medical Imaging,
26(9):1213–1223, 2007.

[130] Q. Li, S. Sone, and K. Doi. Selective enhancement filters for vessels and airway
walls in two-and three-dimensional CT scans. Medical Physics, 30(8):2040–
2051, 2003.

[131] S. P. Liao, H. T. Lin, and C. Lin. A note on the decomposition methods for
support vector regression. Neural Computation, 14(6):1267–1281, Jun 2002.

[132] J. Lien, J. Keyser, and N. Amato. Simultaneous shape decomposition and

skeletonization. In Proc. ACM Symposium on Solid and Physical Modeling,
pages 219–228, New York City, NY, July 2006.

[133] W. Lorensen and H. Cline. Marching cubes: A high resolution 3D surface

construction algorithm. Computer Graphics, 21(4):163–169, July 1987.

[134] C. Lorenz, I. Carlsen, T. Buzug, C. Fassnacht, and J. Weese. Multi-scale

line segmentation with automatic estimation of width, contrast and tangen-
tial direction in 2D and 3D medical images. In Proc. First Joint Conference
on Computer Vision, Virtual Reality and Robotics in Medicine and Medial
Robotics and Computer-Assisted Surgery, volume 1205, pages 233–244, 1997.

[135] L. Lorigo, O. Faugeras, W. Grimson, R. Keriven, R. Kikinis, A. Nabavi, and

C. Westin. CURVES: curve evolution for vessel segmentation. Medical Image
Analysis, 5(3):195–206, 2001.

[136] R. Lukac, B. Smolka, and K. Plataniotis. Sharpening vector median filters.

Signal Processing, 87(9):2085–2099, 2007.

[137] M. Maddah, A. Afzali-Kusha, and H. Soltanian-Zadeha. Fast center-line ex-

traction for quantification of vessels in confocal microscopy images. In Proc.
IEEE International Symposium on Biomedical Imaging: Macro to Nano, pages
461–464, Washington, DC, 2002.

[138] V. Mahadevan, H. Narasimha-Iyer, B. Roysam, and H. Tanenbaum. Robust

model-based vasculature detection in noisy biomedical images. IEEE Transac-
tions on Information Technology in Biomedicine, 8(3):360–376, 2004.

[139] Z. Mainen and T. Sejnowski. Reliability of spike timing in neocortical neurons.

Science, 268(5216):1503–1506, 1995.

[140] R. Manniesing and W. Niessen. Local speed functions in level set based ves-
sel segmentation. In Proc. Medical Image Computing and Computer-Assisted
Intervention, number 1, pages 475–482, Saint-Malo, France, Sep 2004.
[141] H. Marquering, J. Dijkstra, P. D. Koning, B. Stoel, and J. Reiber. Towards
quantitative analysis of coronary CTA. The International Journal of Cardio-
vascular Imaging, 21(1):73–84, 2005.
[142] T. McGraw, B. Vemuri, Y. Chen, M. Rao, and T. Mareci. DT-MRI denoising
and neuronal fiber tracking. Medical Image Analysis, 8(2):95–111, June 2004.
[143] C. McIntosh and G. Hamarneh. Vessel crawlers: 3D Physically-based de-
formable organisms for vasculature segmentation and analysis. In Proc. Confer-
ence on Computer Vision and Pattern Recognition, volume 1, pages 1084–1091,
June 2006.
[144] B. W. Mel. Synaptic integration in an excitable dendritic tree. Journal of
Neurophysiology, 70:1086–1101, 1993.
[145] A. M. Mendonca and A. Campilho. Segmentation of retinal blood vessels by
combining the detection of centerlines and morphological reconstruction. IEEE
Transactions on Medical Imaging, 25(9):1200–1213, 2006.
[146] J. Mercer. Functions of positive and negative type and their connection with
the theory of integral equations. Philosophical Transactions of the Royal
Society of London, A-209:415–446, 1909.
[147] D. Metaxas. Physics-based Modeling of Non-rigid Objects for Vision and
Graphics. PhD thesis, Graduate Department of Computer Science, University
of Toronto, 1992.
[148] D. Metaxas and D. Terzopoulos. Shape and nonrigid motion estimation
through physics-based synthesis. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 15(6):580–591, June 1993.
[149] MicroBrightfield Inc.
[150] M. Migliore, M. Ferrante, and G. Ascoli. Signal propagation in oblique
dendrites of CA1 pyramidal cells. Journal of Neurophysiology, 94:4145–4155,
2005.
[151] A. Mizrahi, E. Ben-Ner, M. Katz, K. Kedem, J. Glusman, and F. Libersat.
Comparative analysis of dendritic architecture of identified neurons using the
Hausdorff distance metric. The Journal of Comparative Neurology, 233(3):415–
428, 2000.

[152] M. Moll and L. E. Kavraki. Path planning for variable resolution minimal-
energy curves of constant length. In Proc. IEEE International Conference on
Robotics and Automation, pages 2142–2147, Barcelona, Spain, April 2005.

[153] P. Morrison and J. Zou. An effective skeletonization method based on adaptive

selection of contour points. In Proc. International Conference on Information
Technology and Applications, volume 2, pages 644–649, Washington, DC., 2005.

[154] P. Morrison and J. Zou. Skeletonization based on error reduction. Pattern

Recognition, 39(6):1099–1109, 2006.

[155] K. Mosaliganti, F. Janoos, X. Xu, R. Machiraju, K. Huang, and S. Wong.

Temporal matching of dendritic spines in confocal microscopy images of neu-
ronal tissue sections. In Proc. International Workshop in Microscopic Image
Analysis and Applications in Biology, Copenhagen, Denmark, 2006.

[156] A. Nedzved, Y. Ilyich, S. Ablameyko, and S. Kamata. Color thinning with

applications to biomedical images. In 9th International Conference on Com-
puter Analysis of Images and Patterns, pages 256–263, London, UK, 2001.

[157] M. Niethammer, W. Kalies, K. Mischaikow, and A. Tannenbaum. On the

detection of simple points in higher dimensions using cubical homology. IEEE
Transactions on Image Processing, 15(8):2462–2469, August 2006.

[158] T. O’Donnell, T. Boult, X. Fang, and A. Gupta. The extruded generalized

cylinder: a deformable model for object recovery. In Proc. IEEE Conf. on
Computer Vision and Pattern Recognition, pages 174–181, 1994.

[159] S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces.
Springer, 2002.

[160] S. Osher and J. Sethian. Fronts propagating with curvature dependent speed:
algorithms based on the Hamilton-Jacobi formulation. Journal of Computational
Physics, 79:12–49, 1988.

[161] K. Palágyi, J. Tschirren, E. Hoffman, and M. Sonka. Quantitative analysis

of pulmonary airway tree structures. Computers in Biology and Medicine,
36(9):974–996, 2006.

[162] R. Pantelic, G. Ericksson, N. Hamilton, and B. Hankamer. Bilateral edge

filter: Photometrically weighted, discontinuity based edge detection. Journal
of Structural Biology, 160(1):93–102, 2007.

[163] N. Passat, C. Ronse, J. Baruthio, J. Armspach, and J. Foucher. Using water-
shed and multimodal data for vessel segmentation: Application to the superior
sagittal sinus. In Proc. Conference in Mathematical Morphology: 40 years on,
pages 419–428, April 2005.

[164] J. Pawley. Handbook of Biological Confocal Microscopy. Plenum, 1995.

[165] E. Le Pennec and S. Mallat. Sparse geometric image representation with
bandelets. IEEE Transactions on Image Processing, 14(4):423–438, Apr. 2005.

[166] P. Perona and J. Malik. Scale-space and edge detection using anisotropic
diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence,
12(7):629–639, 1990.

[167] I. Pitas and C. Cotsaces. Memory efficient propagation-based watershed and

influence zone algorithms for large images. IEEE Transactions on Image Pro-
cessing, 9(7):1185–1199, July 2000.

[168] J. Platt. Probabilistic outputs for support vector machines and comparisons
to regularized likelihood methods. In Advances in Large Margin Classifiers, pages
61–74, 2000.

[169] P. Poirazi and B. Mel. Impact of active dendrites and structural plasticity on
the memory capacity of neural tissue. Neuron, 29(3):779–796, 2001.

[170] F. Pongrácz. The function of dendritic spines: a theoretical study. Neuro-

science, 15(4):933–946, Aug 1985.

[171] G. Pons-Bernad, L. Blanc-Féraud, and J. Zerubia. Restauration d'images
biologiques 3D en microscopie confocale par transformée en ondelettes complexes
(Restoration of 3D biological images in confocal microscopy using the complex
wavelet transform). Research Report 5507, INRIA, France, Feb. 2005.

[172] M. J. Potel, J. M. Rubin, S. A. MacKay, A. M. Aisen, and J. Al-Sadir. Methods

for evaluating cardiac wall motion in three-dimensions using bifurcation points
of the coronary arterial tree. Investigative Radiology, 18:47–57, 1983.

[173] M. Prasad and A. Sowmya. Detection of bronchovascular pairs on HRCT lung

images through relational learning. In Proc. IEEE International Symposium
on Biomedical Imaging: Macro to Nano, volume 2, pages 1135–1138, April
2004.

[174] F. Quek, C. Kirbas, and X. Gong. Simulated wave propagation and traceback
in vascular extraction. In Proc. IEEE Medical Imaging and Augmented Reality,
pages 229–234, Hong Kong, Jun. 2001.

[175] W. Rall. Handbook of Physiology: The Nervous System, volume 1, chapter Core
conductor theory and cable properties of neurons, pages 39–98. Baltimore,
1977.
[176] S. Rami and C. Vachier. H-thinning for gray-scale images. In Proc.
International Conference on Image Processing, volume 1, pages 287–290, Oct
2004.

[177] D. Reniers and A. Telea. Skeleton-based hierarchical shape segmentation. In

Proc. IEEE International Conference on Shape Modeling and Applications,
pages 179–188, Washington, DC, 2007.

[178] W. Richardson. Bayesian-based iterative method of image restoration. Journal
of the Optical Society of America, 62:55–59, 1972.

[179] A. Rodriguez, D. Ehlenberger, K. Kelliher, M. Einstein, S. Henderson, J. Mor-

rison, P. Hof, and S. Wearne. Automated reconstruction of three-dimensional
neuronal morphology from laser scanning microscopy images. Methods,
30(1):94–105, 2003.

[180] K. Rohr and S. Worz. High-precision localization and quantification of 3D

tubular structures. In Proc. IEEE International Symposium on Biomedical
Imaging: Macro to Nano, pages 1160–1163, 2006.

[181] A. Ron and Z. Shen. Affine systems in L2(Rd): the analysis of the analysis
operator. Journal of Functional Analysis, 148:408–447, 1997.

[182] M. Rumpf and A. Telea. A continuous skeletonization method based on level

sets. In Joint EUROGRAPHICS - IEEE TCVG Symposium on Visualization,
pages 151–159, Aire-la-Ville, Switzerland, 2002.

[183] D. A. Rusakov and M. G. Stewart. Quantification of dendritic spine populations

using image analysis and a tilting disector. Journal of Neuroscience Methods,
60:11–21, 1995.

[184] A. Samsonovich and G. Ascoli. Statistical determinants of dendritic morphol-

ogy in hippocampal pyramidal neurons: A hidden Markov model. Hippocampus,
15(2):166–183, 2004.

[185] A. Santamarı́a-Pang, T. S. Bı̂ldea, C. M. Colbert, P. Saggau, and I. A. Kaka-

diaris. Towards segmentation of irregular tubular structures in 3D confocal
microscope images. In Proc. MICCAI International Workshop in Microscopic
Image Analysis and Applications in Biology, pages 78–85, Copenhagen, Den-
mark, 2006.

[186] A. Santamarı́a-Pang, T. S. Bildea, I. Konstantinidis, and I. A. Kakadiaris.
Adaptive frames-based denoising of confocal microscopy data. In Proc. Eu-
ropean Conference on Computer Vision, pages 85–88, Toulouse, France, May

[187] A. Santamarı́a-Pang, C. M. Colbert, P. Saggau, and I. A. Kakadiaris. Au-

tomatic centerline extraction of irregular tubular structures using probabil-
ity volumes from multiphoton imaging. In Proc. Medical Image Computing
and Computer Assisted Intervention, number 2, pages 486–494, Brisbane, Aus-
tralia, October 2007.

[188] P. Sarder and A. Nehorai. Deconvolution methods for 3-D fluorescence mi-
croscopy images. IEEE Signal Processing Magazine, 23(3):32–45, May 2006.

[189] Y. Sato, S. Nakajima, H. Atsumi, T. Koller, G. Gerig, S. Yoshida, and R. Kiki-

nis. 3-D multi-scale line filter for segmentation and visualization of curvilinear
structures in medical images. Medical Image Analysis, 2(2):143–168, 1998.

[190] J. Schlecht, K. Barnard, E. Spriggs, and B. Pryor. Inferring grammar-based

structure models from 3D microscopy data. In Proc. IEEE Conf. on Computer
Vision and Pattern Recognition, pages 1–8, 2007.

[191] S. Schmidt, J. Kappes, M. Bergtholdt, V. Pekar, S. Dries, D. Bystrov, and

C. Schnörr. Spine detection and labeling using a parts-based graphical model.
In Proc. 20th International Conference on Information Processing in Medical
Imaging, LNCS 4584, pages 122–133, 2007.

[192] S. Schmitt, J. Evers, C. Duch, M. Scholz, and K. Obermayer. New methods

for the computer-assisted 3-D reconstruction of neurons from confocal image
stacks. Neuroimage, 23(4):1283–1298, December 2004.

[193] D. Selle, B. Preim, A. Schenk, and H. Peitgen. Analysis of vasculature for liver
surgical planning. IEEE Transactions on Medical Imaging, 21(11):1344–1357,
November 2002.

[194] L. Sendur and I. Selesnick. Bivariate shrinkage with local variance estimation.
IEEE Signal Processing Letters, 9(12):438–441, December 2002.

[195] J. Sethian. Level Set Methods: Evolving Interfaces in Geometry, Fluid
Mechanics, Computer Vision and Materials Sciences. Cambridge University Press,
1996.

[196] L. Shen, M. Papadakis, I. A. Kakadiaris, I. Konstantinidis, D. Kouri, and
D. Hoffman. Image denoising using a tight frame. IEEE Transactions on
Image Processing, 15(5):1254–1263, May 2006.


[199] D. Sholl. Dendritic organization in the neurons of the visual and motor cortices
of the cat. Journal of Anatomy, 87(4):387–406, 1953.

[200] J. Soares, J. Leandro, R. Cesar-Jr., H. Jelinek, and M. Cree. Retinal vessel

segmentation using the 2-D Gabor wavelet and supervised classification. IEEE
Transactions on Medical Imaging, 25(9):1214–1222, Sep 2006.

[201] H. Soltanian-Zadeh, A. Shahrokni, M. Khalighi, Z. Zhang, R. Zoroofi, M. Mad-

dah, and M. Chopp. 3-D quantification and visualization of vascular structures
from confocal microscopic images using skeletonization and voxel-coding. Com-
puters in Biology and Medicine, 35(9):791–813, 2005.

[202] D. J. Stevenson, I. Smith, and G. Robinson. Working towards the automated

detection of blood vessels in X-ray angiograms. Pattern Recognition Letters,
2(6):107–112, 1987.

[203] J. Stewart. Calculus. Brooks/Cole Publishing Company, Pacific Grove, Cali-

fornia, 1991.

[204] G. Streekstra, R. van den Boomgaard, and A. Smeulders. Scale dependency of

image derivatives for feature measurement in curvilinear structures. Interna-
tional Journal of Computer Vision, 42(3):177–189, 2001.

[205] G. Streekstra, R. van den Boomgaard, and A. W. Smeulders. Scale dependent

differential geometry for the measurement of center line and diameter in 3D
curvilinear structures. In Proc. European Conference on Computer Vision,
volume 1, pages 856–870, Dublin, Ireland, 2000.

[206] G. Streekstra and J. van Pelt. Analysis of tubular structures in three-

dimensional confocal images. Network: Comput. Neural Syst., 13:381–395,
July 2002.

[207] K. Svoboda. Do spines and dendrites distribute dye evenly? Trends in Neuro-
sciences, 27(8):445–446, 2004.

[208] T. Tada and M. Sheng. Molecular mechanisms of dendritic spine morphogen-

esis. Curr Opin Neurobiol, 16(1):95–101, 2006.

[209] S. Tan and L. C. Jiao. Ridgelet bi-frame. Applied and Computational
Harmonic Analysis, 20:391–402, 2006.

[210] A. Telea and J. van Wijk. An augmented fast marching method for computing
skeletons and centerlines. In Proc. Symposium on Data Visualisation, pages
251–ff, Aire-la-Ville, Switzerland, 2002. Eurographics Association.

[211] A. Telea and A. Vilanova. A robust level-set algorithm for centerline extraction.
In Joint EUROGRAPHICS - IEEE TCVG Symposium on Visualization, pages
185–194, Grenoble, France, 2003.

[212] D. Terzopoulos and D. Metaxas. Dynamic 3D models with local and global
deformations: Deformable superquadrics. IEEE Transactions on Pattern Anal-
ysis and Machine Intelligence, 13(7):703–714, 1991.

[213] K. E. Timmermann and R. D. Nowak. Multiscale modeling and estimation of

Poisson processes with application to Photon-limited imaging. IEEE Transac-
tions on Information Theory, 45(3):846 – 862, 1999.

[214] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In
Proceedings of the IEEE International Conference on Computer Vision, pages
839–846, Bombay, India, Jan 1998.

[215] A. Torsello and E. Hancock. Curvature correction of the Hamilton-Jacobi

skeleton. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition,
volume 1, pages 828–834, 2003.

[216] A. Torsello and E. Hancock. Curvature dependent skeletonization. In Proc.

International Conference on Image Processing, volume 1, pages 337–340, Sep
2003.

[217] A. Torsello and E. Hancock. Correcting curvature-density effects in the

Hamilton-Jacobi skeleton. IEEE Transactions on Image Processing, 15(4):877–
891, 2006.

[218] C. Toumoulin, C. Boldak, J. Dillenseger, J. Coatrieux, and Y. Rolland. Fast
detection and characterization of vessels in very large 3-D data sets using geo-
metrical moments. IEEE Transactions on Biomedical Engineering, 48(5):604–
606, May 2001.

[219] S. Tran and L. Shih. Efficient 3D binary image skeletonization. In Proc. IEEE
Computational Systems Bioinformatics Conference - Workshops, pages 364–
372, Aug 2005.

[220] C. Uehara, C. M. Colbert, P. Saggau, and I. Kakadiaris. Towards automatic

reconstruction of dendrite morphology from live neurons. In Proc. IEEE En-
gineering in Medicine and Biology Society, San Francisco, CA, Sep 2004.

[221] M. Ulfendahl, J. Boutet de Monvel, and S. Le Calvez. Exploring the living

cochlea using confocal microscopy. Audiology and Neuro-otology, 7(1):27–30,
2002.

[222] P. Umesh-Adiga and B. Chaudhuri. Some efficient methods to correct confocal

images for easy interpretation. Micron, 32:363–370, 2001.

[223] G. Unal, S. Bucher, S. Carlier, G. Slabaugh, T. Fang, and K. Tanaka. Shape-

driven segmentation of intravascular ultrasound images. In Proc. MICCAI
Workshop in Computer Vision for Intravascular and Intracardiac Imaging,
Copenhagen, Denmark, October 2006.

[224] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of

image segmentation algorithms. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 29(6):929–944, 2007.

[225] S. Urban, S. O’Malley, B. Walsh, A. Santamarı́a-Pang, P. Saggau, C. Col-

bert, and I. Kakadiaris. Automatic reconstruction of dendrite morphology
from optical section stacks. In Springer-Verlag, editor, Proc. 2nd International
Workshop on Computer Vision Approaches to Medical Image Analysis, Graz,
Austria, May 2006.

[226] H. Uylings and J. van Pelt. Measures for quantifying dendritic arborizations.
Network: Computation in Neural Systems, 13(3):397–414, 2002.

[227] F. Valverde, N. Guil, and J. Munoz. Segmentation of vessels from mam-

mograms using a deformable model. Computer Methods and Programs in
Biomedicine, 73(3):233–247, 2004.

[228] C. Van-Bemmel, L. Spreeuwers, M. Viergever, and W. Niessen. Level-set-
based artery-vein separation in blood pool agent CE-MR angiograms. IEEE
Transactions on Medical Imaging, 22(10):1224–1234, 2003.

[229] D. van de Ville, M. Seghier, F. Lazeyras, T. Blu, and M. Unser. WSPM:

wavelet-based statistical parametric mapping. NeuroImage, 37(4):1205–1217,
2007.

[230] G. M. van Kempen, H. T. van der Voort, J. G. Bauman, and K. C. Strasters.

Comparing maximum likelihood estimation and constrained Tikhonov-Miller
restoration. IEEE Eng. Med. Biol. Mag., 15:76–83, 1996.

[231] J. van Pelt and A. Schierwagen. Morphological analysis and modeling of neu-
ronal dendrites. Mathematical Biosciences, 188:147–155, March-April 2004.

[232] J. van Pelt and H. Uylings. Modeling the natural variability in the shape
of dendritic trees: Application to basal dendrites of small rat cortical layer 5
pyramidal neurons. Neurocomputing, 26-27:305–311, 1999.

[233] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin,

Germany, 1995.

[234] Y. Vardi, L. A. Shepp, and L. Kaufman. A statistical model for Positron emis-
sion tomography. Journal of the American Statistical Association, 80(389):8–
20, 1985.

[235] A. Vasilevskiy and K. Siddiqi. Flux maximizing geometric flows. IEEE Trans-
actions on Pattern Analysis and Machine Intelligence, 24(12):1565–1578, 2002.

[236] M. von Tiedemann, A. Fridberger, M. Ulfendahl, I. Tomo, and J. Boutet de

Monvel. Image adaptive point-spread function estimation and deconvolution
for in vivo confocal microscopy. Microscopy Research and Technique,
69(1):10–20, 2006.

[237] F. Von Wegner, M. Both, R. H. Fink, and O. Friedrich. Fast XYT imaging of
elementary calcium release events in muscle with multifocal multiphoton mi-
croscopy and wavelet denoising and detection. IEEE Transactions on Medical
Imaging, 26(7):925–934, 2007.

[238] U. Vovk, F. Pernus, and B. Likar. Multi-feature intensity inhomogeneity correc-

tion in MR images. In Proc. Medical Image Computing and Computer-Assisted
Intervention, volume 1, pages 283–290, 2004.

[239] M. Wan, Z. Liang, Q. Ke, L. Hong, I. Bitter, and A. Kaufman. Automatic
centerline extraction for virtual colonoscopy. IEEE Transactions on Medical
Imaging, 21(12):1450–1460, December 2002.

[240] M. Wan-Chun, W. Fu-Che, and M. Ouhyoung. Skeleton extraction of 3D

objects with radial basis functions. In Proc. Shape Modeling International,
pages 207–215, Los Alamitos, CA, USA, May 2003.

[241] L. Wang, J. Bai, and K. Ying. Adaptive approximation of the boundary

surface of a neuron in confocal microscopy. Medical and Biological Engineering
and Computing, 41(5):601–607, 2003.

[242] T. Wang and A. Basu. A note on a fully parallel 3D thinning algorithm and
its applications. Pattern Recognition Letters, 28(4):501–506, 2007.

[243] R. Watzel, K. Braun, A. Hess, H. Scheich, and W. Zuschratter. Detection

of dendritic spines in 3-dimensional images. In DAGM-Symposium Bielefeld,
pages 160–167, 1995.

[244] R. Watzel, K. Braun, A. Hess, W. Zuschratter, and H. Scheich. Restoration of

dendrites and spines with the objective of topologically correct segmentation.
In Proc. International Conference on Pattern Recognition, page 472, Washing-
ton, DC, 1996.

[245] S. Wearne, A. Rodriguez, D. Ehlenrger, A. Rocher, S. Henderson, and P. Hof.

New techniques for imaging, digitization and analysis of three-dimensional neu-
ral morphology on multiple scales. Neuroscience, 136(3):661–680, 2005.

[246] C. Weaver, P. Hof, S. Wearne, and W. Lindquist. Automated algorithms

for multiscale morphometry of neuronal dendrites. Neural Computation,
16(7):1353–1383, July 2004.

[247] S. Wesarg, M. Khan, and E. Firle. Localizing calcifications in cardiac CT data

sets using a new vessel segmentation approach. Journal of Digital Imaging,
19(3):249–257, Sep 2006.

[248] R. Willett and R. Nowak. Fast multiresolution photon-limited image recon-

struction. In IEEE International Symposium on Biomedical Imaging, pages
446–457, Arlington, VA, Apr 2004.

[249] R. Willett and R. Nowak. Platelets: A multiscale approach for recovering

edges and surfaces in photon-limited medical imaging. IEEE Transactions on
Medical Imaging, 22(3):332–350, 2003.

[250] O. Wink, W. Niessen, and M. Viergever. Multiscale vessel tracking. IEEE
Transactions on Medical Imaging, 23(1):130–133, Jan 2004.
[251] P. Wiseman, F. Capani, J. Squier, and M. Martone. Counting dendritic spines
in brain tissue slices by image correlation spectroscopy analysis. Journal of
Microscopy, 205(2):177–186, 2002.
[252] W. Wong and A. Chung. Probabilistic vessel axis tracing and its application
to vessel segmentation with stream surfaces and minimum cost paths. Medical
Image Analysis, doi:10.1016/, 2007. In press.
[253] W. C. Wong and A. C. Chung. Augmented vessels for quantitative analysis of
vascular abnormalities and endovascular treatment planning. IEEE Transac-
tions on Medical Imaging, 25(6):665–684, 2006.
[254] S. Worz and K. Rohr. Limits on estimating the width of thin tubular struc-
tures in 3D images. In Proc. Medical Image Computing and Computer-Assisted
Intervention, volume 1, pages 215–222, Copenhagen, Denmark, Sep 2006.
[255] S. Worz and K. Rohr. Segmentation and quantification of human vessels using
a 3D cylindrical intensity model. IEEE Transactions on Image Processing,
16(8):1994–2004, 2007.
[256] Y. Xiang, A. C. Chung, and J. Ye. An active contour model for image seg-
mentation based on elastic interaction. Journal of Computational Physics,
219:455–476, May 2006.
[257] G. Xiong, X. Zhou, L. Ji, A. Degterev, and S. Wong. Automated labeling
of neurites in fluorescence microscopy images. In Proc. IEEE Symposium on
Biomedical Imaging: Nano to Macro, pages 534–537, Arlington, Virginia, USA,
April 2006.
[258] P. Yan and A. Kassim. Segmentation of vessels from mammograms using a
deformable model. Medical Image Analysis, 10(3):317–329, 2006.
[259] F. Yang, G. Holzapfel, C. Schulze-Bauer, R. Stollberger, D. Thedens,
L. Bolinger, A. Stolpen, and M. Sonka. Segmentation of wall and plaque
in in vitro vascular MR images. The International Journal of Cardiovascular
Imaging, 19:419–428, 2003.
[260] Y. Yang, A. Tannenbaum, and D. Giddens. Knowledge-based 3-D segmenta-
tion and reconstruction of coronary arteries using CT images. In Proc. IEEE
Engineering in Medicine and Biology Society, pages 1664–1666, San Francisco,
CA, 2004.

[261] P. Yim, J. Cebral, R. Mullick, H. Marcos, and P. Choyke. Vessel surface re-
construction with a tubular deformable model. IEEE Transactions on Medical
Imaging, 20(12):1411–1421, Dec 2001.

[262] P. Yim, P. Choyke, and R. Summers. Gray-scale skeletonization of small vessels

in magnetic resonance angiography. IEEE Transactions on Medical Imaging,
19(6):568–576, June 2000.

[263] X. You, B. Fang, and Y. Y. Tang. Wavelet-based approach for skeleton extrac-
tion. In Proc. IEEE Workshops on Application of Computer Vision, volume 1,
pages 228–233, Los Alamitos, CA, USA, 2005.

[264] Z. Yu and C. L. Bajaj. A segmentation-free approach for skeletonization of

gray-scale images via anisotropic vector diffusion. In Proc. IEEE Conf. on
Computer Vision and Pattern Recognition, volume 1, pages 415–420, 2004.

[265] P. A. Yushkevich, P. T. Fletcher, S. C. Joshi, A. Thall, and S. M. Pizer.

Continuous medial representations for geometric object modeling in 2D and
3D. Image and Vision Computing, 21(1):17–27, 2003.

[266] R. Yuste and W. Denk. Dendritic spines as basic functional units of neuronal
integration. Nature, 375(6533):682–684, June 1995.

[267] G. Zeng, S. Birchfield, and C. Wells. Detecting and measuring fine roots in
minirhizotron images using matched filtering and local entropy thresholding.
Machine Vision and Applications, 17:265–278, Sep 2006.

[268] Y. Zhang, X. Zhou, R. Witt, B. Sabatini, D. Adjeroh, and S. Wong. Dendritic

spine detection using curvilinear structure detector and LDA classifier. Neu-
roimage, 36(2):346–360, 2007.

[269] Y. Zhang, X. Zhou, R. Witt, B. Sabatini, D. Adjeroh, and S. Wong. Automated

spine detection using curvilinear structure detector and LDA classifier. In Proc.
IEEE Symposium on Biomedical Imaging: Nano to Macro, pages 528–531, VA,
USA, April 2007.

[270] C. Zhu, R. H. Byrd, and J. Nocedal. Algorithm 778: L-BFGS-B, FORTRAN
routines for large scale bound constrained optimization. ACM Transactions on
Mathematical Software, 23(4):550–560, 1997.