
A Dissertation

Presented to

the Faculty of the Department of Computer Science

University of Houston

In Partial Fulfillment

of the Requirements for the Degree

Doctor of Philosophy

By

Alberto Santamaría-Pang

November 2007

AUTOMATIC THREE-DIMENSIONAL

Alberto Santamaría-Pang

APPROVED:

Dept. of Computer Science and

Dept. of Electrical and Computer Engineering

Dept. of Biology and Biochemistry

Dept. of Computer Science

Dept. of Computer Science and

Dept. of Biology and Biochemistry

Dept. of Neuroscience,

Baylor College of Medicine

Dept. of Computer Science


AUTOMATIC THREE-DIMENSIONAL

An Abstract of a Dissertation

Presented to

the Faculty of the Department of Computer Science

University of Houston

In Partial Fulfillment

of the Requirements for the Degree

Doctor of Philosophy

By

Alberto Santamaría-Pang

November 2007


Abstract

A central goal of modern neuroscience is to elucidate the computational principles

and cellular mechanisms that underlie brain function, in both normal and diseased

states. Notably, neuronal morphologies are broadly affected by age, genetic diseases

such as Down’s Syndrome, and degenerative diseases such as Alzheimer’s disease. A

major obstacle to this research is the lack of automated methods for reconstructions of

morphology to produce libraries of neurons with quantitative measurements suitable

for simulation.

This dissertation presents a novel framework for automatic three-dimensional

morphological reconstruction of nerve cells from optical images. More specifically,

we propose: i) a new algorithm for 3D noise removal in optical images; ii) a novel

method for detection of volumetric irregular tubular structures; and iii) a robust

algorithm for morphological reconstruction. Our results are comparable with those

obtained by human experts and outperform state-of-the-art computer algorithms.

Our novel methodology opens the way for the creation of neuron libraries and for

guiding online functional imaging experiments in live neurons.


Contents

1 Introduction 1

1.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.4 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2 Literature Review 8

2.1 Existing Morphological Reconstruction

Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.1.1 Semi-Automatic Methods . . . . . . . . . . . . . . . . . . . . 8

2.1.2 Computer-Aided Manual Methods . . . . . . . . . . . . . . . . 13

2.2 Previous Work in Denoising . . . . . . . . . . . . . . . . . . . . . . . 16

2.3 Previous Work in Segmentation Methods of Tubular Objects . . . . . 22

2.3.1 Deformable Models Methods . . . . . . . . . . . . . . . . . . . 24

2.3.2 Tubular Enhancing Filtering . . . . . . . . . . . . . . . . . . . 34

2.3.3 Medial Axis Extraction . . . . . . . . . . . . . . . . . . . . . . 41

2.3.4 Hybrid Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3 Methods 59

3.1 Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


3.2 Approach for Automatic Cell Reconstruction . . . . . . . . . . . . . . 61

3.3 Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.4 Denoising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

3.4.1 Construction of 3D Non-separable Parseval Frame . . . . . . . 67

3.4.2 Frame-based Denoising . . . . . . . . . . . . . . . . . . . . . . 75

3.5 Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

3.6 Dendrite Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . 84

3.6.1 Anisotropic Tubular Feature Extraction . . . . . . . . . . . . 87

3.6.2 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . 91

3.6.3 Tubular Shape Learning . . . . . . . . . . . . . . . . . . . . . 94

3.7 Morphological Reconstruction . . . . . . . . . . . . . . . . . . . . . . 101

3.7.1 Level Set Formulation . . . . . . . . . . . . . . . . . . . . . . 101

3.7.2 Neuron Morphological Reconstruction . . . . . . . . . . . . . . 108

3.7.3 Soma-pipette segmentation . . . . . . . . . . . . . . . . . . . . 109

3.7.4 Isotropic 3D Front Propagation . . . . . . . . . . . . . . . . . 111

3.7.5 Detection of Terminal Points . . . . . . . . . . . . . . . . . . . 112

3.7.6 Anisotropic 3D Front Propagation . . . . . . . . . . . . . . . . 114

3.7.7 Centerline extraction and tree reconstruction . . . . . . . . . . 116

3.7.8 Diameter estimation . . . . . . . . . . . . . . . . . . . . . . . 120

4 Results and Discussion 122

4.1 Results in Frames Shrinkage . . . . . . . . . . . . . . . . . . . . . . . 122

4.1.1 Denoising Results in Synthetic Data . . . . . . . . . . . . . . 123

4.1.2 Denoising Results in Confocal and Multi-photon Microscopy

data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

4.2 Dendrite Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

4.2.1 Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134


4.2.2 Real data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

4.3 Morphological Reconstruction . . . . . . . . . . . . . . . . . . . . . . 145

4.3.1 Qualitative and Quantitative Analysis . . . . . . . . . . . . . . 145

5 Conclusion 159

5.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

5.1.1 Denoising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

5.1.2 Tubular Shape Learning . . . . . . . . . . . . . . . . . . . . . 160

5.1.3 Morphological Reconstruction . . . . . . . . . . . . . . . . . . 160

5.2 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

Bibliography 162


List of Figures

3.2 Neuron Morphological Reconstruction System. . . . . . . . . . . . . . 61

3.3 Comparison of beads. . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.4 Deconvolution of the average bead. . . . . . . . . . . . . . . . . . . . 66

3.5 Depiction of the directional derivatives. . . . . . . . . . . . . . . . . . 74

3.6 Results of applying our denoising algorithm in confocal imaging in a

neuron cell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

3.7 Maximum intensity projection of volume data from Synthetic neuron

n120. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

3.8 Depiction of the local neighborhood associated with noise removal. . . 82

3.9 Registration of 3 volume stacks. . . . . . . . . . . . . . . . . . . . . . 83

3.10 A typical dendrite segment. . . . . . . . . . . . . . . . . . . . . . . . 84

3.11 Overview of our algorithm for dendrite detection. . . . . . . . . . . . 86

3.12 Anisotropic structural features. . . . . . . . . . . . . . . . . . . . . . 87

3.13 Synthetic volumetric data and isotropic structural features. . . . . . . 90

3.14 Labels used to train a synthetic regular tubular model. . . . . . . . . 94

3.15 Structural features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95


3.16 Comparison of dendrite enhancement in different regions. . . . . . . . 98

3.17 Comparison of dendrite enhancement in different cells. . . . . . . . . 99

3.18 Schematic of shape learning given from two different models. . . . . . 100

3.19 Level set embedding. . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

3.20 Schematic of propagation forces normal to the curve. . . . . . . . . . 103

3.21 Schematic of the one dimensional case of the Eikonal Equation. . . . 105

3.22 Soma and pipette segmentation. . . . . . . . . . . . . . . . . . . . . . 110

3.23 Schematic of ending points detection. . . . . . . . . . . . . . . . . . . 113

3.24 Visualization of the 3D front propagation along the centerline of the

tubular object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

3.25 Schematic depicting the general principle to construct a single con-

nected tree component. . . . . . . . . . . . . . . . . . . . . . . . . . . 117

3.26 Parametrization of segments as generalized cylinders representation. . 118

3.27 Centerline extraction and diameter estimation. . . . . . . . . . . . . . 121

4.2 Denoising results from different algorithms on synthetic noisy volumes. . 125

4.3 Maximum intensity projections in the x–y and x–z planes of the

volume of interest, respectively . . . . . . . . . . . . . . . . . . . . . 128

4.4 Comparison results of applying our denoising, anisotropic diffusion,

median filter and the 3D Haar wavelet. . . . . . . . . . . . . . . . . . 129

4.5 Performance evaluation of the length in function of the detected largest

component volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

4.6 A confocal imaging volume with selected region of interest. . . . . . . 131

4.7 Results in a selected region of the confocal imaging volume. . . . . . . 132

4.8 Energy distribution other than the low pass filter in each subband of

UH Lifted Spline Filterbank (UH-LSF). . . . . . . . . . . . . . . . . . 133

4.9 Synthetic tubular model constructed from cubic splines. . . . . . . . . 135


4.10 Comparison of tubularity measures in a volumetric example with vari-

ation in diameter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

4.11 Comparative results in synthetic data. . . . . . . . . . . . . . . . . . 137

4.12 Results in typical stack for the CA1 pyramidal cell type. . . . . . . . 140

4.13 Results of applying different methods to detect 3D tube-like objects. . 141

4.14 Comparison of tubular measures in a dendrite segment. . . . . . . . . 142

4.15 Results for the spiny striatal neuron type. . . . . . . . . . . . . . . . 143

4.16 Generalization of the synthetic tubularity measure. . . . . . . . . . . 144

4.17 Visual comparison of morphological reconstructions. . . . . . . . . . . 146

4.18 Comparison of the quality of reconstruction in different cells. . . . . . 147

4.19 A variation of Sholl analysis as performance metrics. . . . . . . . . . 149

4.20 Selected subtree to perform quantitative analysis. . . . . . . . . . . . 150

4.21 Comparison of performance metrics. . . . . . . . . . . . . . . . . . . . 151

4.22 Comparison of performance metrics. . . . . . . . . . . . . . . . . . . . 152

4.23 Comparison of performance metrics performed in Subtree. . . . . . . 152

4.24 Quantitative analysis of diameter estimation for Cell A. . . . . . . . . 153

4.25 Comparison of the minimum intensity projection and the morpholog-

ical model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

4.26 Comparison of the volumetric data and the morphological model. . . 155

4.27 Visualization of the extracted centerline in phantom data depicted in

Fig. 4.10(a) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

4.28 Visualization of the results of centerline extraction when applied to

CTA data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158


List of Tables

2.2 Denoising Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.3 Segmentation of Tubular Objects – Deformable Models . . . . . . . . 33

2.4 Segmentation of Tubular Objects – Tubular Enhancing . . . . . . . . 40

2.5 Segmentation of Tubular Objects – Tracking-based methods . . . . . 52

2.6 Segmentation of Tubular Objects – Hybrid Methods . . . . . . . . . . 58

4.1 Performance Evaluation on the noisy volume depicted in Fig. 4.1. . . 124

4.2 Performance Evaluation on the noisy volume depicted in Fig. 4.2. . . 124

4.3 Performance evaluation - Time (UNIT: SECOND). . . . . . . . . . . 127

4.4 Performance Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 139

4.5 Performance Evaluation - Total Dendrite Length and Surface Area . . 149

4.6 Performance Evaluation - Diameter Statistics . . . . . . . . . . . . . 149

4.7 Performance Evaluation - Length Statistics . . . . . . . . . . . . . . . 150

4.8 Performance Evaluation - Path from Soma . . . . . . . . . . . . . . . 150

4.9 Performance Evaluation - Subtree . . . . . . . . . . . . . . . . . . . . 150


Chapter 1

Introduction

Sejnowski [139], Hama et al. [81], Samsonovich et al. [184], Yuste et al. [266], Rall et

al. [175]). Long thin dendrites provide both a maximal surface-to-volume ratio for

Pongracz et al. [170], Tada et al. [208], Korkotian et al. [119], Benavides et al. [24]).

ships between the structure and specific functions of neuronal dendrites. Thus, model

building and computer simulation are essential to produce viable theories of neuronal

computation (Samsonovich et al. [40], Hoffman et al. [92], Poirazi et al. [169], As-

coli and Atkeson [13], Ascoli et al. [11], Mizrahi et al. [151]). Fortunately, it has


morphologies and ion channel kinetics due to the availability of powerful simulation

tools (Hines and Carnevale [91], Beaman et al. [21], Cuntz et al. [49]). However,

for a variety of reasons reflecting both the complexity and individual variability of

neurons and the relatively weak constraints of the models, one cannot simply present

an input and produce an output that necessarily reflects biological reality (Holmes et

al. [93]).

production of databases of neuronal morphologies that can be used for computer sim-

ulation is an important goal (Migliore et al. [150], Ascoli [12], Schmidt et al. [191],

Wiseman et al. [251]). Available databases have been limited in scope because of

the relatively large effort involved in the largely manual computer-aided methods

semi-automated and automated systems are under development (Evers et al. [64],

Wearne et al. [245], Brown et al. [34]) by reducing the need for the investigator

principle improve. Our own interests in developing a system for automated recon-

structions comes from the need to acquire electrophysiological data and morpho-

logical reconstructions from the same neurons. Advances in optical imaging have

from multiple sites and from fine structures (Hoogland et al. [96]). Despite this

capability, however, there is always a limit on the overall bandwidth. That is, the

acquisition methods allow either high temporal or high spatial resolution, but not


both simultaneously. Thus, there is a need to determine the optimal sites for func-

tional imaging. If the morphology of the neuron is known at the outset, quantitative

criteria can be used to decide where imaging has a high likelihood of yielding useful

(i.e., constraining) information. Even in cases where such precision is not necessary,

remains a goal.

sites. Functional imaging comprises the final phase. Such a scenario places a num-

1.1 Objectives

The goal of this dissertation is to develop the methodology and the computational

framework to allow automatic reconstruction of neuron cells from confocal and multi-photon microscopes, towards on-line functional imaging and the creation of libraries

imaging.


2. Developing a segmentation algorithm for irregular and regular tubular struc-

tures.

1.2 Challenges

There are two major challenges for automatic morphological reconstruction of neuron

cells. The first challenge involves the structural imaging of the biological specimen

2. Important structures are near the limit of imaging resolution, 0.2 µm (spines

3. Low signal-to-noise ratio due to different sources of noise [164], which generally

The second major challenge refers to a rapid and accurate shape modeling of the cell

as a single tree in terms of cylindrical lengths and diameters.[1] Then, the difficulties

[1] These two parameters (the lengths and diameters) are of decisive importance to constrain a realistic computational model.


1. Removal of external objects that do not belong to the cell structure (pipette)

4. Rapid reconstruction during the limited time frame for which the cell is alive.

challenges and yet provide an accurate reconstruction for further morphological anal-

ysis.

1.3 Contributions

multidirectional filter bank which preserves edge information and is not compu-

over the state of the art methods is that structural information of objects

lar structures) while removing different types of noise (Gaussian and Poisson).


(a)

(b) (c)

Figure 1.1: A CA1 hippocampal neuron cell acquired with a confocal microscope.

(a) Volume rendering of the original data, (b) a detail depicting the variability in

morphology, and (c) an example of the typical image artifacts created due to spilling

of the fluorescent dye in the volume of interest.

i) a smooth cylindrical shape (case of coronary arteries) and ii) a prior shape

step fashion. In the first step a model is trained to learn complex tubular shapes


from a tubular example. Shape descriptors are the eigenvalues derived from

the Hessian matrix, which is constructed from second-order

second step, the model that has been trained is used for predicting unseen

are used to create an intelligent tubular shape model. Under this formulation,
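As a rough illustration of these Hessian-eigenvalue shape descriptors, the sketch below computes per-voxel eigenvalues of the finite-difference Hessian of a 3D volume and sorts them by magnitude, the ordering commonly used for tubularity features. This is only an assumption-laden sketch, not the dissertation's implementation; in practice the volume would first be smoothed with a Gaussian at the scale of interest.

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel eigenvalues of the finite-difference Hessian of a 3D
    volume, sorted by magnitude (|l1| <= |l2| <= |l3|).  At the
    centerline of a bright tube one expects |l1| ~ 0 along the tube
    and l2, l3 < 0 across it.  In practice the volume would first be
    Gaussian-smoothed at the scale of interest."""
    grads = np.gradient(volume)                # three first-derivative volumes
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])         # derivatives of grad_i
        for j in range(3):
            H[..., i, j] = second[j]
    eigs = np.linalg.eigvalsh(H)               # ascending eigenvalues per voxel
    order = np.argsort(np.abs(eigs), axis=-1)  # re-order by absolute value
    return np.take_along_axis(eigs, order, axis=-1)
```

For a synthetic bright tube along one axis, the two large eigenvalues come out negative on the centerline while the one aligned with the tube stays near zero, which is the sign pattern a learned tubular shape model can be trained on.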

1.4 Outline

drical model of a neuron cell. Results and Discussion are presented in Chapter 4 and


Chapter 2

Literature Review

compare the most relevant methods according to key points demanded by our specific

Methods

the representation of the neuron cell as a single connected tree component in terms of


cylindrical lengths and diameters. These two parameters (the lengths and diameters)

experiments.

i) skeletonization-based methods [183, 243, 116, 115, 57, 192, 64, 220, 225] where

methods [90, 7, 179, 246, 245, 32], where morphology is reconstructed directly from

An algorithm for dendrite centerline extraction and spine detection only at selected

dendritic segments was proposed by Koh et al. [116, 115]. Dendrite medial axis is

extracted by applying the algorithm proposed by Lee et al. [127] only at dendrite

segments. To extract the dendrite medial axis, a denoising step is required, by applying a deblurring algorithm. Then, the medial axis is extracted from the segmented

neuron cell. Once medial axis is extracted, the algorithm performs spine detection.

Results are presented in a limited number of dendritic segments rather than in the

entire cell. One of the limitations of this method is that the deblurring algorithm
can remove small structures (especially spines, which are about 1.0 µm wide) if not
used properly, making centerline extraction and spine detection far from straightforward
operations.


method is based on the 3D discrete wavelet transform which is used for denoising,

ation of the orthogonal wavelet shrinkage approach as described by Donoho et al. [59].

In further steps, the method highly depends on the detection of the edges (gradient)

across different scales. However, the 3D discrete wavelet transform imposes deci-

information, the final skeleton reconstruction may have significant gaps in regions

where the gradient is not detected properly (specifically in regions with dendrites of

small diameter and different variations of intensity). Results are depicted in regions

of neurons with a relatively small signal to noise ratio depicting the medial axis

Schmitt et al. [192, 64] presented a semi-automated method towards neuron mor-

tion are based on a given active contour model (Kass et al. [112]), enforcing centerline

smoothness.

In order to maintain the medial axis close to the brightest voxels and possibly close

i) higher intensity values are in the center of the dendrite circular cross-section; and

ii) the magnitude of the gradient vector field estimated from the intensity image

assumptions may not hold, since in reality dendritic cross-sections do not exhibit

[1] Dye concentration can be constant in dendrite cross-sections, and therefore the centerline cannot be found as a function of the maximum intensity value.


a regular circular cross-section[2] and, since the medialness measure depends on the
magnitude of the gradient, it may not be adequate to detect the centerline, especially

Since the tracing algorithm is highly dependent on the gradient, centerline tracing
has severe limitations in small structures (two or three voxels wide). The proposed
approach requires initializing seed points close to the real dendrite center

multidirectional rays for each point. Medialness values above a user defined threshold

which actually do not belong to the dendritic centerline. This method does not

involve any technique to enhance the quality of the image (i.e., deconvolution or
noise removal); rather, it assumes that the input image is almost noise free. Results
are reported in images with dimensions of 512 × 480 × 301 in the x, y, and z axes

on a single stack.

[2] Due to the effects of the point spread function imposed by the microscope.


based on the following steps: i) performing blind deconvolution [102, 94], i.e. a

theoretical point spread function is used to enhance the quality of the 3D image, ii)

al. [133]). The major limitations of the proposed system can be listed as: i) blind

deconvolution does not necessarily reflect the non linear transformation of the optical

microscope (Sarder et al. [188]); and ii) the marching cubes segmentation algorithm
is not suitable for dendritic structures, due to the high variation in intensity and

Wearne et al. [246, 245] expanded the work of Rodriguez et al. [179] by incorpo-

the dendrite centerline at multiple scales. Then, segmentation highly depends on the

width of the dendrites, making it a challenging task, especially when the data
contains high variance of Poisson noise near the object of interest (Dima et al. and

Broser et al. [32] presented an algorithm for neuron skeletonization. The proposed

man intervention. Regarding the tracing method, first a number of "seed" points

are generated, then the centerline is extracted by connecting the seed points. We


observe two major limitations of this method: i) the morphological reconstruction is
not guaranteed to be a single connected tree (usually the user has to reconnect
individual subtrees); and ii) the accuracy of the centerline is compromised by
the number of seed points,[3] and therefore the extracted centerline may not represent a

and limitations [77]. Cylindrical approaches assume a tubular-like average shape, al-

approaches assume the object of interest has been already segmented, but in reality

morphology are currently available in the market. One of the most popular is

proposed by Glaser et al. [75] in the early 1990s. Neurolucida™ is a software system
designed to operate in conjunction with the optical microscope. Three-dimensional

[3] Centerline extraction connects the seed points.


of: i) the precise matching of the object of interest and the displayed image, (the

entire dendritic tree cannot be observed since the dendrites go out of focus); and ii)

With the previous considerations, the overall dendritic tree can be manually

are required, this system may not be suitable due to the accuracy of the mechanical
devices involved in the process (in particular, measurements in the z axis can be rather
difficult, as the user is required to adjust the Z-control along the centerline of the

human expertise, thus making the reconstruction process very time consuming (about

Neurotracer 3D™ [72] is another commercial system for "off-line" neuron mor-

is based on a single image stack. The major limitation of this method is the poor

performance in image segmentation and the high sensitivity of the software to the

rons an extremely time-consuming task for the user, and it is highly subjective since

Table 2.1 presents a comparative analysis of the previous methods. The properties that we compare are: i) image modality: Confocal (C) or Multiphoton (M); ii) use


of A priori Knowledge (AK); iii) preprocessing steps such as: registration of Mul-

tiple Stacks (MS), Deconvolution (DCV), Denoising (DNS); and iv) morphological

Table 2.1: Comparison of morphological reconstruction methods.

Skeletonization-based: Rusakov et al. [183] (1995); Watzel et al. [243, 244] (1996); Koh et al. [116, 115] (2002); Dima et al. [57] (2002); Evers et al. [64] (2005); Uehara et al. [220] (2004); Urban et al. [225] (2006); AutoNeuron [149] (2007); Santamaría-Pang et al. [187] (2007).

Cylindrical extraction-based: Herzog et al. [90] (1997); Al-Kofahi et al. [7] (2002); Wang et al. [241] (2003); Rodriguez et al. [179] (2003); Weaver et al. [246] (2004); Broser et al. [32] (2004); Wearne et al. [245] (2005).

Preprocessing: Multiple Stacks (MS), Deconvolution (DCV), Denoising (DNS).
Connected Tree Representation (CTR).


2.2 Previous Work in Denoising

Photon-limited imaging systems, such as confocal and multi-photon microscopes, obtain very high-resolution optical sections through relatively thick specimens (e.g., live brain tissue).

sources of noise can be identified for photon-limited imaging data (Pawley [164]),

typically, they include: i) thermal noise induced by the photomultiplier; ii) photon

shot noise which accounts for the part of the noise that varies locally with intensity

and can best be described using the Poisson noise model; iii) biological background

or autofluorescence noise which can be described using the additive Gaussian white

noise; and iv) non-uniform fluorophore noise. There are also other imaging artifacts

Denoising for photon-limited imaging data has been an active research field in the

last several decades and a number of effective methods have been proposed. Most

of the early methods employed statistical technologies in the spatial domain due to

the special properties of the Poisson noise [234, 178]. The Maximum Likelihood

Estimator (MLE) was the most popular one and was routinely applied in scientific

and clinical practice [171, 89, 230, 76, 67, 51]. Lately, with the fast development of the

theory of wavelets and their great successes in the field of signal estimation, wavelet-

based denoising methods for photon-limited imaging data have been developed [60,

117, 248, 27, 213]. These methods make full use of the excellent ability of the wavelet


transform to sparsely represent the underlying intensity function.

Dima et al. [56, 57] proposed an effective wavelet denoising method for 3D con-

detect edges and to suppress the responses from noise and variations of contrast.

The technique rests on the "à trous" pyramidal decomposition scheme. It includes

$$
E_s(x) =
\begin{cases}
1, & \text{if } M_s(x) > \max\big(M_s(x - \gamma_s(x)),\, M_s(x + \gamma_s(x))\big),\\
0, & \text{otherwise},
\end{cases}
\qquad (2.1)
$$

where $\gamma_s$ is the gradient direction and $M_s$ its magnitude at scale $s$. The downside

of this method is that the structure of fine neurons can be highly corrupted.
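In one dimension, the test in Eq. (2.1) reduces to comparing each gradient magnitude against its two neighbours along the line (γ = ±1). A minimal numpy sketch of that strict-local-maximum mask (an illustration only, not the authors' code):

```python
import numpy as np

def edge_mask_1d(M):
    """Eq. (2.1) in one dimension: E[x] = 1 where the gradient
    magnitude M is a strict local maximum along the line,
    M[x] > max(M[x-1], M[x+1]); boundary samples are non-edges."""
    E = np.zeros(len(M), dtype=int)
    E[1:-1] = (M[1:-1] > M[:-2]) & (M[1:-1] > M[2:])
    return E
```

In 3D the comparison is taken along the per-voxel gradient direction γs(x) at each scale s rather than along a fixed axis.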

data, by deriving the relationship between maximum penalized likelihood tree prun-

ing decisions and the undecimated wavelet transform coefficients. However, this

the technique is based on the discrete wavelet transform, where a "hard threshold"
operator is applied to the wavelet coefficients depending on the resolution scale. Although

this method is applied to optical imaging, the authors do not present results in

neuron cells.
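To make the hard-threshold operator mentioned here concrete, the sketch below runs one level of an orthonormal 1D Haar transform, zeroes detail coefficients at or below a threshold, and reconstructs. It is a generic illustration of wavelet shrinkage, not the specific scheme of the works cited.

```python
import numpy as np

def haar_hard_threshold(signal, thresh):
    """One level of the orthonormal 1D Haar transform, hard
    thresholding of the detail coefficients (coefficients with
    magnitude <= thresh are zeroed, the rest kept unchanged),
    followed by exact reconstruction.  Length must be even."""
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    detail = np.where(np.abs(detail) > thresh, detail, 0.0)  # hard threshold
    out = np.empty(signal.shape, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

Because the transform is orthonormal, a zero threshold reproduces the input exactly; a large threshold suppresses all high-frequency detail.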


al. [33]. Anisotropic diffusion was formulated by Perona et al. [166] in terms of a diffusion process,

$$
\frac{\partial I}{\partial t} = \operatorname{div}\big(g(|\nabla I|)\,\nabla I\big),
\qquad (2.2)
$$

where the function $g$ can be defined as $g(|\nabla I|) = e^{-(|\nabla I|/k)^2}$ or $g(|\nabla I|) = \frac{1}{1+(|\nabla I|/k)^2}$; the function $g$ controls the sensitivity to edges. In the application to optical
imaging, noise is averaged along the local axis of the neuron's tube-like dendrites
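A minimal explicit-scheme sketch of Perona–Malik diffusion with the exponential conductance above (2D, periodic boundaries via np.roll; parameter values are illustrative, not those used in the works discussed):

```python
import numpy as np

def perona_malik(img, iters=10, k=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion on a 2D image using the
    exponential conductance g(d) = exp(-(d/k)^2) and a 4-neighbour
    stencil.  np.roll gives periodic boundaries; dt <= 0.25 keeps
    the explicit scheme stable."""
    def g(d):
        return np.exp(-(d / k) ** 2)

    I = img.astype(float).copy()
    for _ in range(iters):
        # one-sided differences toward each of the four neighbours
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        I += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return I
```

Small k inhibits diffusion across strong gradients (edges survive), while large k makes g ≈ 1 and the scheme degenerates toward linear heat diffusion.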

function of the confocal microscope to perform denoising (Tiedemann et al. [236] and

the point spread function to “remove noise in the image”. Although, denoising and

deconvolution serve different (but similar) objectives, the proposed technique uses

that it tries to “calibrate” the optical apparatus of the microscope. However, the

spread function. Limiting factors are: i) the point spread function changes according

to “depth”, and an adaptive scheme must integrate the non-linear changes; ii) the

point spread function is greatly affected by the medium in which the specimen is

immersed, therefore different mediums lead to significant changes of the point spread

function. As pointed out by Monvel et al. [30], image restoration can be achieved by

using denoising algorithms. In this case, wavelet algorithms were implemented and


Tomasi et al. [214] presented a denoising method based on the bilateral filter. The filtered image is

$$
h(x) = k(x)^{-1} \int_{\omega} f(w)\, c(x, w)\, s(f(x), f(w))\, dw,
\qquad (2.3)
$$

$$
k(x) = \int_{\omega} c(x, w)\, s(f(x), f(w))\, dw,
\qquad (2.4)
$$

where $\sigma_c$ is a parameter and $s(f(x), f(w)) = e^{-\frac{(f(x)-f(w))^2}{2\sigma_s^2}}$. The major limitation of
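The continuous formulation in Eqs. (2.3)–(2.4) discretizes directly; the sketch below is a 1D bilateral filter with Gaussian closeness and similarity kernels. The parameter names σ_c, σ_s follow the text, but the implementation itself is only an illustration, not the reference method.

```python
import numpy as np

def bilateral_1d(f, sigma_c=2.0, sigma_s=0.1, radius=4):
    """1D bilateral filter: a normalized average of f over a window,
    weighted by spatial closeness c(x, w) and intensity similarity
    s(f(x), f(w)), both Gaussian kernels."""
    n = len(f)
    out = np.empty(n)
    for x in range(n):
        lo, hi = max(0, x - radius), min(n, x + radius + 1)
        w = np.arange(lo, hi)
        c = np.exp(-((w - x) ** 2) / (2 * sigma_c ** 2))
        s = np.exp(-((f[w] - f[x]) ** 2) / (2 * sigma_s ** 2))
        k = np.sum(c * s)              # normalization term k(x), Eq. (2.4)
        out[x] = np.sum(f[w] * c * s) / k
    return out
```

With a small σ_s, samples across a strong edge get near-zero similarity weight, so the edge is preserved while same-side values are averaged.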

In general, separable wavelet systems fail to deal efficiently with edges in
high dimensions, and several new systems have been developed [165, 38, 209]. In

contrast with the non-separable wavelet systems [118, 186, 187, 109], these systems

allow one to analyze a function in a multidirectional fashion. Hence, one can deal with

straight and curve singularities more effectively. Anisotropic diffusion methods [249,

214] depend on the edge information (weak edges tend to be lost) and tend to remove

Despite the advances mentioned above, the denoising problem for photon-

limited imaging data of neurons is considered one of the most challenging problems

due to the high complexity of the components of the different noise sources. Further-

more, a typical 3D image of a neuron has a low ratio of structure-occupied voxels to


is needed (where the structure of interest is less than 5% of the total volume) since

small but important structures (dendrites) must be denoised without corrupting their

structural information.


Table 2.2: Denoising Methods

PDE-based: Perona et al. [166] (1990); Barash et al. [18] (2002); Broser et al. [33] (2004); McGraw et al. [142] (2005); Pantelic et al. [162] (2007).

Orthogonal wavelets: Dima et al. [56] (1999); Timmermann et al. [213] (1999); Sendur et al. [194] (2002); Donoho et al. [59] (2003); Willett et al. [248] (2004); Pennec et al. [165] (2005); Ville et al. [229] (2007); Chen et al. [43] (2007).

Non-orthogonal wavelets: Ashino et al. [14] (2004); Shen et al. [196, 197] (2005); Konstantinidis et al. [118] (2005); Santamaría-Pang et al. [186] (2006).

Restoration-based methods: Kempen et al. [230] (1996); Vovk et al. [238] (2004); Bernad et al. [171] (2005); Lukac et al. [136] (2007).

The original table also marks, for each method, whether it is 1D-2D or 3D, and whether it uses a priori knowledge (AK) and addresses optical imaging (OI), Gaussian Noise (GN), Poisson Noise (PN), and Tubular Structures (TS).


2.3 Previous Work in Segmentation Methods of

Tubular Objects


areas not only in the computer vision community [224] but also in the biomedical

imaging community [108, 114, 39, 77, 202]. The importance of automatically detecting
tubular objects stems not only from the pure computer vision point of view, but
also from a diverse number of biomedical research areas [84, 6, 251, 191, 108, 68, 222, 207].

irregular and regular tubular structures with special attention to those methods suit-

able for dendrite segmentation. In general, the majority of these methods can be

categorized as:

1. Deformable models
   1.1 Parametric deformable models
   1.2 Geometric deformable models
2. Tubular-enhancing filtering methods
3. Medial axis extraction
   3.1 Skeletonization-based methods
   3.2 Tracking-based methods
4. Hybrid methods

Methods for the segmentation of tubular structures are based on the following approaches. Deformable-model-based methods [128, 143, 256, 124, 50, 98, 259, 228, 210, 261, 158, 105, 82] are, in general, based on geometric properties of structures, which are dynamically deformed under the influence of image-derived forces. Tubular-enhancing filtering methods [254, 255, 180, 124, 205, 204, 71, 189] are designed to capture the tubular morphology of the given object; regularly, they integrate strong shape priors⁴ (i.e., synthetic cylinder models). Medial axis extraction methods recover the centerline or skeleton of the tubular object of interest. There are approaches that mimic the actions of a robot in a given environment to extract a path as the robot navigates [253, 9, 152, 110, 74, 73], while others express the tubular morphology in terms of a skeleton [217, 215, 153, 219, 83, 122, 86, 264, 106, 240, 88, 58, 113, 211, 52, 265, 28, 247, 69]. Hybrid methods [20, 37, 62, 267, 79, 103, 125, 101, 173, 263, 46, 163] combine several of these techniques to recover the morphology of the object of interest [173, 86, 223]. A review of vessel extraction techniques is also available in the literature.

We compare all these approaches in Tables 2.3, 2.4, 2.5, and 2.6. The comparison is based on key points including: the dimension (2D or 3D), the type of tubular structure, whether a technique for shape learning is used, and the complete representation of a tree structure.

⁴ Relevant shape variations, such as curvature and irregular diameter variations, are not taken into account.


2.3.1 Deformable Models Methods

Parametric deformable models describe a shape explicitly in a parameter space, such that for a given shape, if the shape parameters are estimated, then the shape is known. We can categorize the family of parametric deformable models for segmentation of tubular structures into: i) 2D images (two parameters are needed); ii) 3D images (three parameters are needed); and iii) 3D plus time (four parameters).

Active contour models (Kass et al. [111]), also known as snake models, provided an energy-minimization problem that integrates three energy terms: $E_{internal}$, $E_{external}$, and $E_{constrain}$. These energy terms depend on a parameterized contour $C(\upsilon)$, $\upsilon \in [0,1]$, and the gradient information $\nabla I$ of the intensity image. The formulation can be written as:

$$E(C(\upsilon)) = \underbrace{\alpha \int_\Omega |C'(\upsilon)|^2 \, d\upsilon}_{E_{internal}} + \underbrace{\beta \int_\Omega |C''(\upsilon)|^2 \, d\upsilon}_{E_{external}} - \underbrace{\lambda \int_\Omega |\nabla I(C(\upsilon))|^2 \, d\upsilon}_{E_{constrain}} \tag{2.5}$$
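As a concrete illustration (a sketch, not from the dissertation), the snake energy of Eq. 2.5 can be evaluated in discrete form: the contour is a closed polygon of points, the derivatives are finite differences, and `grad_mag_sq` is a hypothetical precomputed image of squared gradient magnitude:

```python
import numpy as np

def snake_energy(pts, grad_mag_sq, alpha=1.0, beta=1.0, lam=1.0):
    """Discrete snake energy: alpha*|C'|^2 + beta*|C''|^2 - lam*|grad I(C)|^2."""
    d1 = np.roll(pts, -1, axis=0) - pts                                # C'
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # C''
    e_int = alpha * np.sum(d1 ** 2)
    e_ext = beta * np.sum(d2 ** 2)
    # sample squared gradient magnitude at (rounded) contour positions
    ij = np.clip(np.round(pts).astype(int), 0, np.array(grad_mag_sq.shape) - 1)
    e_con = lam * np.sum(grad_mag_sq[ij[:, 0], ij[:, 1]])
    return e_int + e_ext - e_con

# a 16-point circle of radius 5 in a flat image: only smoothness terms remain
img = np.zeros((32, 32))
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
circle = np.stack([16 + 5 * np.cos(t), 16 + 5 * np.sin(t)], axis=1)
E = snake_energy(circle, img)
```

Minimizing this energy (e.g., by gradient descent on the point coordinates) moves the contour toward high-gradient locations while keeping it smooth.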

Under this formulation, Han et al. [82] integrated a minimal-path active contour model for vessel segmentation. Such a formulation is expressed as the search for

$$\min_C \int_\Omega \left( \omega + g(|\nabla I(C(v))|) \right) |C'(v)| \, dv, \tag{2.6}$$

where $g(\nabla I(C)) = \frac{1}{1 + |\nabla I(C)|^p}$ is the edge potential function and $p$ is equal to 1 or 2. Once the energy is minimized, the path is extracted with a minimal-path technique. The method was applied to detect "semi-circular objects" in 2D. Similarly, Yang et al. [259] developed a method for the segmentation of semi-circular objects.

Valverde et al. [227] integrated an additional energy term into the general formulation of Eq. 2.5:

$$\int_\Omega \alpha(s) E_{internal} + \beta(s) E_{external} + \gamma(s) E_{constrain} + \delta(s) E_{noise} \, ds, \tag{2.7}$$

where the first three energy terms are defined as previously. This formulation makes the snake model attach to tubular objects, where the term $E_{noise}$ is a penalty value against noise.

A related class of approaches is known as physics-based deformable models [212, 148, 147]. This class of deformable models represents the surface as

$$C(u,v) = \begin{bmatrix} l_1(u) + a_1(u,v) \\ l_2(u) + a_2(u,v) \\ l_3(u) + a_3(u,v) \end{bmatrix}, \tag{2.8}$$

where $-\frac{\pi}{2} \le u \le \frac{\pi}{2}$, $-\pi \le v \le \pi$, $l(u) = [l_1(u), l_2(u), l_3(u)]^\top$, and $a(u,v) = [a_1(u,v), a_2(u,v), a_3(u,v)]^\top$. Here the main aim is to recover the parameters that describe the model.


A deformable volume that integrates a mass-spring energy model with the shape deformation $\gamma$ has also been proposed. The total energy of the system is defined as $\int_{\gamma(U)} \varphi(t)\, dt$, and the energy density $\varphi$ is

$$\varphi(v) = \frac{\lambda}{2} \left(\operatorname{trace}(e)\right)^2 + \mu \operatorname{trace}(e^2), \tag{2.9}$$

where the trace of a matrix $(a_{ij})$ is $\sum_i a_{ii}$, $e$ is the Green–Lagrange tensor of the deformation defined as $e = \frac{1}{2}(F^\top F - I)$, $F$ is the matrix of partial derivatives of $\gamma$, and $\lambda$, $\mu$ are constants related to the physical properties of the material. This approach creates a model of a deformable "cable" that is able to deform and navigate in an ideal environment.

Formulations for vessel shape representation were proposed in [172, 158, 100], where edge information played a crucial role. By adding smoothing constraints, Yim et al. [261] developed a vessel surface reconstruction method with a deformable tubular model. The position of every point in the tubular model is determined by the parametric radial function $R_i(a, \phi)$, where $a$ is the radius and $\phi$ is the circumferential location at the $i$-th iteration. The model is updated as

$$R_{i+1}(a,\phi) = R_i(a,\phi) + K_1 \sum_{a_n, \phi_n} \left( R_i(a_n,\phi_n) - R_i(a,\phi) \right) + K_2 \, \hat{\rho}_{a\phi} \cdot \nabla \left| \nabla I\!\left( \rho_{a\phi} R_i(a,\phi) \right) \right|, \tag{2.10}$$

where $a_n$ and $\phi_n$ are the axial and angular locations adjacent to $a$ and $\phi$, $\hat{\rho}_{a\phi}$ is the radial direction, $K_1$ and $K_2$ are constants representing the time step size and the weight of the forces, and $\nabla I$ represents the gradient vector after convolving with a Gaussian kernel. The forces acting on the model depend on the gradient vector field; deformations stop once the forces reach equilibrium.

A parametric deformable model that integrates the shape of the coronary arteries over time was proposed by Chen et al. [44] for quantitative analysis of the coronary tree. Bruijne et al. [50] presented an adapting active shape model framework for 3D segmentation, based on the Mahalanobis distance $f(g_s)$:

$$f(g_s) = (g_s - \bar{g})^\top S^{-1} (g_s - \bar{g}), \tag{2.11}$$

where $\bar{g}$ is the mean tubular shape and $S$ is the covariance matrix. Deformations are based on statistical shape variations of tubular structures, and results are presented for 3D data. A "deformable organism" has also been proposed, endowed with: i) sensors; ii) a cognitive center; and iii) a behavioral layer. The organism first detects tubular structures (by applying Frangi's vesselness measure [71]) and then deforms to segment vascular tree structures. One of the limitations of this method is that many parameters need to be estimated, making the implementation very specific to the problem itself.


A 3D active shape method for anatomical segmentation based on statistical shape modeling was presented by Lekadir et al. [128]. In this work, a shape metric invariant under different linear transformations (i.e., scaling, rotation, and translation) is defined. The main idea is to construct a "fitness" measure that integrates gray-level appearance along $h$ intensity profiles with the stationary phase $\varphi_s$. The intensity and phase values are normalized as

$$\hat{g}_i = \frac{g_i - g_{min}}{g_{max} - g_{min}}, \qquad \hat{\varphi}_i = \frac{\varphi_i - \varphi_{min}}{\varphi_{max} - \varphi_{min}}, \tag{2.13}$$

and combined to produce a profile $p$. This method combines intensity and phase for searching in a given feature space. Results are depicted for a single segment of a tubular structure. The method makes a strong assumption about shape priors (circular shapes), which in general may make it unsuitable for irregular tubular structures with considerable shape variations.

Geometric deformable models represent a shape implicitly as the level set of a higher-dimensional function, evolving this implicit function over time to match a given shape. Compared with parametric models, deformations, in 2D or 3D, are expressed in terms of the implicit function (a curve in 2D or a closed surface in 3D).

The surface is expressed as the zero level set $C(\upsilon, 0) = \{(x,y,z) : \varphi(x,y,z) = 0\}$, which evolves according to

$$C_t(p, t) = F\,\mathbf{N}, \tag{2.14}$$

where the function $F$ determines the magnitude of the deformations, which are normal to the surface of the level set. In general, curve evolution is guided by a PDE formulation.

Different formulations to segment smooth and regular tubular objects have been proposed (Lorigo et al. [135] and Bemmel et al. [228]). The key idea is to evolve a closed curve or surface over time, where the deformation forces are normal to the curve or surface. Variants include: i) localized approaches (Holtzman-Gazit et al. [95]), where local level sets drive the evolution locally (Manniesing et al. [140] and Law et al. [124]); and ii) symmetry evolution approaches, where deformations occur according to the 2D curve⁵ (Kuijper et al. [122]). Physics-based approaches integrate elastic properties via the active contour models proposed by Xiang et al. [256]. In Yan et al. [258], the evolution of the level set was inspired by a physical formulation and applied to segment human arteries. The integration of prior knowledge to guide the level set evolution was included in Unal et al. [223].

⁵ In three-dimensional space it is not straightforward to define an axis of symmetry.

The general formulation in Eq. 2.5 can be posed in a level set framework, where a function $g$ acts as the stopping criterion of the curve or surface evolution. A representative evolution equation is

$$\phi_t = \mu \nabla \cdot \left( \frac{\nabla \phi}{|\nabla \phi|} \right) + v\, |\nabla \phi|, \tag{2.16}$$

where $\phi$ is the level set function, the stopping function $g$ tends to zero close to edges, $\mu$ is a small positive constant, and $v = -\frac{1}{4} \int_\Omega \frac{\langle r, \nabla (G_\sigma * I + \alpha H(\phi)) \rangle}{r^3} \, dx' \, dy'$, with $H$ a smoothed Heaviside function defined as

$$H(\phi) = \begin{cases} 0, & \text{if } \phi \le -\sigma, \\ \frac{1}{2}\left( \sin\frac{\pi \phi}{2\sigma} + 1 \right), & \text{if } -\sigma < \phi < \sigma, \\ 1, & \text{if } \phi \ge \sigma, \end{cases} \tag{2.17}$$

where $\sigma$ is the same parameter used to smooth the image with a Gaussian kernel.
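A minimal numerical sketch (not from the dissertation) of a curvature-driven update in the spirit of Eq. 2.16, using first-order finite differences on a 2D grid and a constant speed term `v`; note that with this sign convention a positive `v` raises $\phi$, so the interior ($\phi < 0$) shrinks:

```python
import numpy as np

def level_set_step(phi, v=0.1, mu=0.2, dt=0.1, eps=1e-8):
    """One explicit Euler step of phi_t = mu*div(grad phi/|grad phi|) + v*|grad phi|."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    # divergence of the normalized gradient = mean-curvature term
    ky, _ = np.gradient(gy / norm)
    _, kx = np.gradient(gx / norm)
    curvature = kx + ky
    return phi + dt * (mu * curvature + v * norm)

# signed-distance circle of radius 8; the front moves and the interior shrinks
y, x = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 8.0
phi = phi0.copy()
for _ in range(20):
    phi = level_set_step(phi)
area0 = int(np.sum(phi0 < 0))
area = int(np.sum(phi < 0))
```

In practice, such schemes also periodically reinitialize $\phi$ to a signed distance function, as noted for the methods reviewed here.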

Lorigo et al. [135] proposed an elegant method for curve evolution in a variational level set framework. Let $C : [0,1] \to \mathbb{R}^3$ be a curve and $v : \mathbb{R}^3 \to [0, \infty)$ a function whose zero level set is $C$. Ambrosio and Soner [8] proved that evolving Eq. 2.14 ($C_t = F\mathbf{N}$) is equivalent to a level set evolution driven by $G(\nabla \phi, \nabla^2 \phi)$, defined from the matrix $P_{\nabla\phi} \nabla^2\phi\, P_{\nabla\phi}$, where $P_q = I - \frac{q \otimes q}{|q|^2}$, $q \ne 0$. Then curves evolve according to

$$C_t = k\mathbf{N} - H\!\left( \frac{\nabla g}{g} \cdot \frac{\nabla I}{|\nabla I|} \right), \tag{2.18}$$

where $H$ is the projection operator onto the normal space of $C$. From here, the corresponding level set evolution is

$$\phi_t = G(\nabla \phi, \nabla^2 \phi) + \frac{g'}{g}\, \rho(\langle \nabla \phi, \nabla I \rangle) \left\langle \nabla \phi,\; H \frac{\nabla I}{|\nabla I|} \right\rangle. \tag{2.19}$$

Yan et al. [258] proposed a method for segmentation of tubular objects inspired by the physical action of "capillary forces". When tubular objects are submerged in liquids, the energy formulation involves the surface $S_w$ of the capillary object, the adhesion coefficient $\beta$, the area $S_w^*$ in contact with the outer medium (air), and the corresponding adhesion coefficient $\beta^*$. The energy functional that drives the evolution is

$$E = \alpha \int_\Omega C(t,x)\, dx + \lambda \int_\Omega |C_x(t,x)|\, dx, \tag{2.20}$$

and the resulting curve evolution is

$$C_t = g(k + c)\mathbf{N} - \langle \nabla g, \mathbf{N} \rangle \mathbf{N} + \alpha (1 + \gamma \hat{k}^2) \left( \mathbf{N} - \cos\theta\, \frac{\nabla g}{|\nabla g|} \right), \tag{2.21}$$

where $\alpha$ controls the propagation, advection, and capillary forces, and the constants $c$ and $\lambda$ act like balloon forces. The zero level set $\phi$ embedded in $S$ evolves with $\mathbf{N} = -\frac{\nabla \Psi}{|\nabla \Psi|}$, $\cos\theta = \frac{\langle \nabla \Psi, \nabla g \rangle}{|\nabla \Psi|\, |\nabla g|}$, and $f$ defined as the parametric sigmoid function $f(x) = \frac{1}{1 + e^{-\frac{x-b}{a}}}$, where $a$ and $b$ control the "shape" of the sigmoid. The corresponding PDE is integrated with time step $\Delta t$; the 3D volume is first convolved with a Gaussian kernel, and the level set $\phi$ is periodically reinitialized. The method was applied to the segmentation of vascular surfaces.

In summary, geometric deformable models have important advantages: i) they naturally handle topology changes; and ii) they are suitable for irregular shapes. Their main limitations are: i) initialization; ii) the design of the stopping forces; iii) they are computationally very expensive; and iv) it is difficult to tune their parameters.

Table 2.3: Segmentation of Tubular Objects – Deformable Models

Parametric deformable models: Potel et al. [172] (1983, 3D); Donnell et al. [158] (1994, 3D); Ingrassia et al. [100] (1999, 2D); Anshelevich et al. [9] (2000, 3D); Yim et al. [261] (2001, 3D); Han et al. [82] (2001, 2D); Chen et al. [44] (2002, 3D); Behrens et al. [22] (2003, 3D); Bruijne et al. [50] (2003, 3D); Yang et al. [259] (2003, 2D); Valverde et al. [227] (2004, 2D); Lekadir et al. [128] (2006, 3D); McIntosh et al. [143] (2006, 3D).

Geometric deformable models: Lorigo et al. [135] (2001, 3D); Deschamps et al. [52] (2001, 3D); Telea et al. [210] (2002, 3D); Bemmel et al. [228] (2003, 3D); Holtzman-Gazit et al. [95] (2003, 3D); Dey et al. [55] (2003, 3D); Antiga et al. [10] (2003, 3D); Wink et al. [250] (2004, 2D); Manniesing et al. [140] (2004, 3D); Jianfei et al. [105] (2004, 3D); Kuijper et al. [122] (2005, 3D); Yan et al. [258] (2006, 3D); Unal et al. [223] (2006, 2D); Xiang et al. [256] (2006, 3D); Law et al. [124] (2006, 2D); Wong et al. [252] (2007, 3D).

2.3.2 Tubular enhancing filtering

Filtering is a fundamental operation in computer vision and has been extensively investigated over the past decades, leading to significant advances in the enhancement of vascular organs (regular and smooth tubular objects) or objects with tubular-like shape. In this subsection, we review the most relevant methods for enhancing tubular structures.

A number of methods [71, 189, 134, 53, 130] for vessel enhancement have been proposed, relying on structural features that could potentially discriminate tubular objects. Some of them [71, 189, 134] are model-based, as they create an ideal tubular model and assign a likelihood value to each voxel $x$ belonging to a tubular structure. More precisely, the multiscale response is taken as

$$V(x) = \max_{\sigma_{min} \le \sigma \le \sigma_{max}} V(x, \sigma).$$

Usually, the structural features are defined from the ordered eigenvalues $|\lambda_1| \le |\lambda_2| \le |\lambda_3|$ of the Hessian matrix constructed with the second-order derivatives, approximated at scale $\sigma$.

Frangi et al. [71] proposed one of the earliest and still one of the most used tubularity measures, constructed as a function of the Hessian eigenvalues:

$$V_{Frangi}(x, \sigma) = \begin{cases} 0, & \text{if } \lambda_2 < 0 \text{ or } \lambda_3 < 0, \\[4pt] \underbrace{\left(1 - e^{-\frac{R_A^2}{2\alpha^2}}\right)}_{\text{sheet structures}} \underbrace{e^{-\frac{R_B^2}{2\beta^2}}}_{\text{blob structures}} \underbrace{\left(1 - e^{-\frac{S^2}{2c^2}}\right)}_{\text{noise sensitivity}}, & \text{otherwise}, \end{cases} \tag{2.25}$$

where $R_A = \frac{|\lambda_2|}{|\lambda_3|}$ enhances sheet-like objects, $R_B = \frac{|\lambda_1|}{\sqrt{|\lambda_2 \lambda_3|}}$ enhances blob-like objects, and $S = \sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}$ accounts for sensitivity to noise.

Sato et al. [189] proposed a related line filter:

$$V_{Sato}(x, \sigma) = \begin{cases} \sigma^2 |\lambda_3| \left( \dfrac{|\lambda_2|}{|\lambda_3|} \right)^{\xi} \left( 1 + \dfrac{\lambda_1}{|\lambda_2|} \right)^{\tau}, & \text{if } \lambda_3 < \lambda_2 < \lambda_1 < 0, \\[6pt] \sigma^2 |\lambda_3| \left( \dfrac{|\lambda_2|}{|\lambda_3|} \right)^{\xi} \left( 1 - \rho \dfrac{\lambda_1}{|\lambda_2|} \right)^{\tau}, & \text{if } \lambda_3 < \lambda_2 < 0 < \lambda_1 < \dfrac{|\lambda_2|}{\rho}, \end{cases} \tag{2.26}$$

where $\xi > 0$ specifies the asymmetry of the ideal cylinder, $\tau \ge 0$ controls the sensitivity to blob-like structures, $0 < \rho \le 1$ controls the sensitivity to vessel curvature, and $\sigma^2$ normalizes across different scales. Lorenz et al. [134] followed the same approach.

Descoteaux et al. [53] proposed the variant

$$V_{Descoteaux}(x, \sigma) = \begin{cases} 0, & \text{if } \lambda_3 < 0, \\[4pt] \underbrace{e^{-\frac{R_A^2}{2\alpha^2}}}_{\text{sheet structures}} \underbrace{\left(1 - e^{-\frac{R_B^2}{2\beta^2}}\right)}_{\text{blob structures}} \underbrace{\left(1 - e^{-\frac{S^2}{2c^2}}\right)}_{\text{noise sensitivity}}, & \text{otherwise}. \end{cases} \tag{2.27}$$

A different set of filters was proposed by Li et al. [130]. The filtering is based on "ideal" measures of: i) a dot (or blob), $d(x,y,z) = e^{-\frac{x^2+y^2+z^2}{2\sigma^2}}$; ii) a line, $l(x,y,z) = e^{-\frac{y^2+z^2}{2\sigma^2}}$; and iii) a plane, $p(x,y,z) = e^{-\frac{x^2}{2\sigma^2}}$. The filters are then defined in terms of the magnitudes of the eigenvalues as

$$V_{Li}^{Dot}(x,\sigma) = \begin{cases} \dfrac{|\lambda_3|^2}{|\lambda_1|}, & \text{if } \lambda_1 < 0,\ \lambda_2 < 0,\ \lambda_3 < 0, \\ 0, & \text{otherwise}, \end{cases} \tag{2.28}$$

$$V_{Li}^{Line}(x,\sigma) = \begin{cases} \dfrac{|\lambda_2|\left(|\lambda_2| - |\lambda_3|\right)}{|\lambda_1|}, & \text{if } \lambda_1 < 0,\ \lambda_2 < 0, \\ 0, & \text{otherwise}, \end{cases} \tag{2.29}$$

$$V_{Li}^{Plane}(x,\sigma) = \begin{cases} |\lambda_1| - |\lambda_2|, & \text{if } \lambda_1 < 0, \\ 0, & \text{otherwise}. \end{cases} \tag{2.30}$$

Note that these measures are designed for pulmonary data in CT scans and are a variation of the Hessian-based filters above.

A probabilistic vessel model to enhance vessels and junctions has been proposed by Agam et al. [5, 4]. The basic idea is to find a direction $v$ orthogonal to the gradient vector field (in local windows) that minimizes the squared projection onto $v$ of all the gradients in a local window. The projection of all gradients is given by

$$E(v) = \frac{1}{n} \sum_{i=1}^{n} \left( g_i^\top v \right)^2 = v^\top G G^\top v, \tag{2.31}$$

where $G = \frac{1}{\sqrt{n}} [g_1, \ldots, g_n]$ and $G G^\top$ is a $3 \times 3$ matrix (the structure tensor). The idea is then to exploit the structure of this tensor via eigenvalue decomposition (similar to the case of the Hessian matrix). The probabilistic framework is based on the mixture model
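To illustrate the idea behind Eq. 2.31 (a generic sketch, not the authors' code), the local axis direction can be recovered as the eigenvector of the structure tensor $GG^\top$ with the smallest eigenvalue, since the gradients of an ideal tube are orthogonal to its axis:

```python
import numpy as np

def local_axis(gradients):
    """Direction minimizing E(v) = v^T (G G^T) v: eigenvector of the
    structure tensor with the smallest eigenvalue."""
    g = np.asarray(gradients, dtype=float)   # n x 3 gradient samples
    tensor = g.T @ g / len(g)                # 3x3 structure tensor
    w, vecs = np.linalg.eigh(tensor)         # eigenvalues in ascending order
    return vecs[:, 0]

# gradients of an ideal tube along z lie in the x-y plane
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
grads = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
axis = local_axis(grads)   # ~ (0, 0, +-1)
```

The recovered direction is defined only up to sign, which is why applications typically compare $|\langle \text{axis}, \cdot \rangle|$ rather than the signed vector.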

$$f_p(x \mid \hat{\Theta}) = \sum_{i=1}^{M-1} \frac{\hat{\alpha}_i \, \pi^{-d} \, |\hat{\Sigma}_i|^{-1/2}}{(x - \hat{\mu}_i)^\top \hat{\Sigma}_i^{-1} (x - \hat{\mu}_i) + 1} + \hat{\alpha}_M \, p_M(x), \tag{2.32}$$

where $d$ is the dimensionality of the data, $\hat{\mu}_i$, $\hat{\Sigma}_i$ are the estimated parameters with $\hat{\Theta} = [\hat{\alpha}_1, \ldots, \hat{\alpha}_M, \hat{\theta}_1, \ldots, \hat{\theta}_M]$, and $p_M(x)$ is a uniform density function; the parameters are estimated parametrically.

Yang et al. [260] used prior knowledge of regions to calculate posterior probabilities with Bayes' formula, and the vessel segmentation was obtained via maximum a posteriori classification. The method assumes three regions: vessel, myocardium, and lung. A manual segmentation is performed, and for each class $c$ the mean $\mu_c$ and standard deviation $\sigma_c$ are estimated. The likelihood and posterior are

$$\Pr(V(x) = v \mid x \in c) = \frac{1}{\sqrt{2\pi}\,\sigma_c}\, e^{-\frac{(v - \mu_c)^2}{2\sigma_c^2}}, \tag{2.33}$$

$$\Pr(x \in c \mid V(x) = v) = \frac{\Pr(V(x) = v \mid x \in c)\, \Pr(x \in c)}{\sum_{\gamma} \Pr(V(x) = v \mid x \in \gamma)\, \Pr(x \in \gamma)}, \tag{2.34}$$

and each voxel is assigned with the classification rule

$$C(x) = \arg\max_{c \in \{\text{vessel},\, \text{myocardium},\, \text{lung}\}} \Pr{}^{*}(x \in c \mid V(x) = v),$$

where $C(x)$ is the class that the voxel $x$ belongs to, and $\Pr^{*}$ is a smoothed version of the posterior. This method only applies to CT data, since the intensity values correspond to anatomical structures.
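A generic sketch of this Bayes-rule classification (with illustrative class statistics and priors, not the paper's data):

```python
import math

def posterior(v, params, priors):
    """Eqs. 2.33-2.34: Gaussian likelihood per class, normalized by Bayes' rule."""
    def likelihood(v, mu, sigma):
        return math.exp(-(v - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
    joint = {c: likelihood(v, mu, sigma) * priors[c] for c, (mu, sigma) in params.items()}
    z = sum(joint.values())          # evidence: denominator of Eq. 2.34
    return {c: p / z for c, p in joint.items()}

# hypothetical (mean, std) intensity statistics per class
params = {"vessel": (300.0, 50.0), "myocardium": (80.0, 30.0), "lung": (-700.0, 100.0)}
priors = {"vessel": 0.2, "myocardium": 0.3, "lung": 0.5}
post = posterior(310.0, params, priors)
label = max(post, key=post.get)      # MAP classification C(x)
```

In the reviewed method the per-voxel posteriors are additionally smoothed spatially before taking the maximum.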

A description-length approach selects between a model $L_M$ and the data given the model $L_{IM}$, where $|\cdot|$ represents the number of bits needed to describe the data using $L_M$ and $L_{IM}$. The key idea is to estimate the posterior probability of a pixel belonging to foreground or background, with description length $-\sum_{x \in \{F^i, B^i\}} \log_2 P(V_{Frangi}(x))$. This method was implemented on 2D confocal images, and it was intended to segment neurites without providing a morphological description. An ideal tubularity measure integrates not only shape constraints but also inherent properties of the image modality (e.g., variations in intensity, texture, tubular width, orientation, scale, and noise).

Rohr et al. [180] developed synthetic tubular models with analytical bounds, modeling intensity cross-sections as Gaussian functions with a given width $\sigma$; under these assumptions, the position and the diameter of the tubular structure can be estimated.


Jiang et al. [107] adapted Frangi's measure [71] to enhance microtubules in electron tomography images. The key idea is to incorporate gradient information; from this information, the eigenvalues of the Weingarten matrix are estimated, enhancing microtubules globally at a single given scale. This adaptation is the natural choice in this particular imaging modality, since microtubule objects have "weak" edges. Similarly, Worz et al. [255] and Law et al. [123] developed classes of filters tuned to tubular profiles.

Krissian et al. [121, 120] designed a non-linear filtering based on anisotropic diffusion, where the vector field $F$, defined as $F = \sum_{i=0}^{2} \phi_i(u_{e_i}) e_i$, incorporates multiple directions according to the orthonormal basis $\{e_0, e_1, e_2\}$ of $\mathbb{R}^3$, $\beta$ is a data attachment coefficient, and the function $\phi$ controls the diffusion process as proposed by [166]; the scheme is applied to vessel denoising.

All the approaches previously mentioned are mostly designed to enhance vessels in the human body, as the methods make the assumption that vessels are regular and smooth tubular structures. Most of the existing methods were developed from the need to enhance vessels in imaging modalities like CT, MRI, and PET; therefore, when considering different imaging modalities and different structures (not vessels), their assumptions may no longer hold.

Table 2.4: Segmentation of Tubular Objects – Tubular Enhancing

Lorenz et al. [134] (1997, 3D); Frangi et al. [71] (1998, 3D); Sato et al. [189] (1998, 3D); Krissian et al. [120] (2000, 3D); Streekstra et al. [206] (2002, 3D); Li et al. [130] (2003); Mahadevan et al. [138] (2004, 2D); Yang et al. [260] (2004, 3D); Agam et al. [4] (2005, 3D); Desobry et al. [54] (2005, 3D); Descoteaux et al. [53] (2005, 3D); Abdul-Karim et al. [3] (2005, 3D); Stathis et al. [80] (2005, 2D); Rohr et al. [180] (2006, 3D); Worz et al. [255] (2006, 3D); Palagyi et al. [161] (2006, 3D); Jiang et al. [107] (2006, 3D); Mendonca et al. [145] (2006, 3D); Xiong et al. [257] (2006, 3D); Law et al. [123] (2007, 3D).


2.3.3 Medial Axis Extraction

Medial axis extraction methods approach segmentation by first recovering the general shape of the tubular object, to subsequently perform diameter estimation. These methods are suitable for expressing tubular morphology in terms of cylindrical lengths and diameters, as they identify junction points. We classify this class of methods into skeleton-based methods and tracking-based methods.

2.3.3.1 Skeleton-based

Dey et al. [55] offer a general representation of shape in terms of a skeleton. A related representation was proposed by Yushkevich et al. [265], where shape is represented using: i) a medial axis, and ii) boundary information. Moll et al. [152] presented a path planning method with minimal-energy curves, where a curve is described in terms of its curvature $k$, torsion $\tau$, tangent vector $T$, and binormal vector $B$.

The optimal curve minimizes a cost defined over properties such as torsion and curvature:

$$\arg\min_q \; \underbrace{E(q)}_{\text{curvature and torsion term}} + \underbrace{K \cdot e^{\,err(q)^{-1}}}_{\text{error term}}, \tag{2.39}$$

where $K$ is a penalty cost, $err$ is an error function, $E(q) = \sum_{i=1}^{n} (k_i^2 + \tau_i^2) \cdot s_i$, and $q$ is an $n \times 3$ matrix whose $i$-th row contains the parameters $(k_i, \tau_i, s_i)$, $1 \le i \le n$.

Vasilevskiy et al. [235] proposed a flux-maximizing geometric flow. The key idea is to evolve a curve or a surface by incorporating not only the magnitude but also the direction of a vector field. Let $\mathbf{V}$ be a vector field defined in $\mathbb{R}^3$. The total inward flux through the evolving surface is

$$Flux(t) = \int_{0}^{A(t)} \langle \mathbf{V}, \mathbf{N} \rangle \, dS, \tag{2.40}$$

where $A(t)$ is the surface area of the evolving surface. Based on this result, Vasilevskiy et al. [235] proved that the direction in which the inward flux of the vector field $\mathbf{V}$ increases most rapidly is given by

$$C_t = \operatorname{div}(\mathbf{V})\, \mathbf{N}, \tag{2.41}$$

where the vector field is defined as $\mathbf{V} = \phi \frac{\nabla I}{|\nabla I|}$. This was later applied by Dimitrov et al. [58], where a flux invariant technique was developed to distinguish between medial and non-medial points.

Later, Torsello et al. [217, 215, 216] introduced a Hamilton–Jacobi method for skeletonization. The main idea relies on a variation of the work of Dimitrov et al. [58] and is based on the divergence theorem, which relates the flux $\Phi_A(\mathbf{F})$ of a vector field $\mathbf{F}$ to its divergence over a region $A$:

$$\int_A \nabla \cdot \mathbf{F}(x)\, dx = \int_{\partial A} \mathbf{F} \cdot n \, dl = \Phi_A(\mathbf{F}), \tag{2.42}$$

where $dl$ is the length differential on the boundary $\partial A$. Therefore,

$$\nabla \cdot \mathbf{F} = \lim_{|A| \to 0} \frac{\Phi_A(\mathbf{F})}{|A|}. \tag{2.43}$$

Under this definition, a "skeleton" or "shock" point is defined as a point of negative divergence, with a correction involving $k(p)$, the curvature of a front orthogonal to $\mathbf{F}$. This expression can be seen as a regularization term where the flux is not conservative. The approach takes into account a normalized flux with the inward evolution of the object boundary at non-skeletal points; depending on the sign associated with the flux, the skeleton is detected. Because the formulation depends on curvature, the image needs to be smoothed, and the method applies directly to 2D images.

Medial axes can also be computed via front propagation based on fast marching methods, where the medial axis is extracted by tracing back from the ending point to the source of the propagation. This approach is general in the sense that it is applied not only to binary images but also to gray-level images. The minimal cumulative cost is

$$U(p) = \inf_{A_{p_0,p}} \int_\Omega P(C(s)) \, ds, \tag{2.45}$$

where $A_{p_0,p}$ is the set of all 3D paths between $p_0$ and $p$, and the potential $P$ is built from the image smoothed with a Gaussian filter $G_\sigma$ and a model weight $w$. Under these considerations, a variational approach based on the distance transform and gradient vector flow was presented by Hassouna et al. [86]. The presented application is to estimate paths in virtual endoscopy from a binary object, with no shape priors; results are depicted for 3D volumes. The key idea is to express the cost function in terms of the Gradient Vector Flow (GVF), which is sensitive to concave regions. The GVF is defined as the vector field $\mathbf{V}$ minimizing

$$E(\mathbf{V}) = \int\!\!\int\!\!\int \mu\, |\nabla \mathbf{V}(x)|^2 + |\nabla f(x)|^2\, |\mathbf{V}(x) - \nabla f(x)|^2 \, dx, \tag{2.47}$$

where $\mu$ is a regularization parameter and $f(x)$ is an edge map. The cost function $F$ is then computed in terms of the GVF $\mathbf{V}$, so that both smooth and concave regions of the object of interest are taken into account.
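To make the minimal-path idea of Eq. 2.45 concrete, a discrete analogue (a generic sketch, not the cited implementation) computes the cheapest 4-connected path through a potential map with Dijkstra's algorithm:

```python
import heapq

def minimal_path_cost(potential, start, goal):
    """Discrete Eq. 2.45: cheapest cumulative potential from start to goal."""
    rows, cols = len(potential), len(potential[0])
    dist = {start: potential[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + potential[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# low potential along the middle row acts like a bright tube centerline
potential = [[9, 9, 9, 9],
             [1, 1, 1, 1],
             [9, 9, 9, 9]]
cost = minimal_path_cost(potential, (1, 0), (1, 3))   # follows the cheap row
```

Fast marching solves the continuous analogue of this shortest-path problem on the Eikonal equation, producing sub-pixel-accurate arrival times instead of grid-path costs.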


A segmentation-free approach for the skeletonization of gray-scale images relies on a diffused vector field: an edge strength map is calculated, and the vector field is obtained by minimizing an energy whose gradient-descent equations are

$$u_t = \underbrace{\mu \cdot \operatorname{div}\left( g(\alpha) \cdot \nabla u \right)}_{\text{diffusion term}} - (u - f_x)(f_x^2 + f_y^2),$$
$$v_t = \underbrace{\mu \cdot \operatorname{div}\left( g(\alpha) \cdot \nabla v \right)}_{\text{diffusion term}} - (v - f_y)(f_x^2 + f_y^2), \tag{2.48}$$

where $\alpha$ is the angle between the central vector and the vectors in a given neighborhood, $g$ is a monotonically decreasing function, and $f(x,y) = \|\nabla G_\sigma(x,y) * I(x,y)\|^2$ is an edge strength map of the original image. The minimization is performed by solving a PDE and relies mainly on the computation of the strength map $f$ and the selection of the neighborhood used to compute $\alpha$. Results are presented on 2D medical images where the boundaries of the structures are well defined.

Another approach reconstructs neurons from volumes of average size 383 × 328 × 150 voxels (17.9 MB). The authors do not perform any noise removal, which may have a major impact on the segmentation; another constraint, as the authors point out, is that the algorithm may not work in general due to the "softness" of the images. In addition, the sizes of the images to reconstruct are relatively small. In order to reconnect two three-dimensional points $(x_i, y_i, z_i)$ and $(x_j, y_j, z_j)$, the following heuristic function is proposed:

$$C(i,j) = \alpha\, \underbrace{\frac{\theta(i,j)}{\pi}}_{\text{direction}} + \beta\, \underbrace{\frac{d(i,j)}{\Delta}}_{\text{distance}}, \tag{2.49}$$

where $\theta(i,j)$ is the angle and $d(i,j)$ the distance between two disconnected points, and $\alpha$, $\beta$, $\Delta$ are user-specified parameters. Two points are reconnected when this cost is minimal. A limitation of this approach is that the reconnection of "dendrites" is performed taking into account only the 2D projection of points; this projection operator may lose important information in the 3D volume when two disconnections occur and one dendrite lies over the other.
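As a toy illustration of Eq. 2.49 (hypothetical endpoints and weights, not the cited system), candidate gap reconnections can be ranked by the weighted direction/distance cost:

```python
import math

def reconnection_cost(p, q, direction, alpha=1.0, beta=1.0, delta=10.0):
    """Eq. 2.49: cost = alpha*theta/pi + beta*d/delta for projected endpoints p, q."""
    vx, vy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(vx, vy)
    # angle between the local dendrite heading and the candidate gap vector
    dot = (vx * direction[0] + vy * direction[1]) / (d * math.hypot(*direction))
    theta = math.acos(max(-1.0, min(1.0, dot)))
    return alpha * theta / math.pi + beta * d / delta

# dendrite heading along +x: the collinear candidate should be cheaper
p, heading = (0.0, 0.0), (1.0, 0.0)
straight = reconnection_cost(p, (5.0, 0.0), heading)   # aligned, 5 px away
bent = reconnection_cost(p, (0.0, 5.0), heading)       # perpendicular, 5 px away
```

The 2D-projection limitation noted above shows up here directly: a cost computed on projected points cannot distinguish two dendrites that overlap in depth.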

A principal-graph approach was introduced by Kegl et al. [113]. This approach estimates smooth curves which pass through the "middle" of the data, by minimizing a cost of the form

$$\underbrace{\nabla(G)}_{\text{average squared distance term}} + \underbrace{\lambda_P\, P(G)}_{\text{penalty term}}, \tag{2.50}$$

where $\lambda_P$ is a penalty coefficient, and the distance term $\nabla(G)$ and the penalty term $P(G)$ are defined as

$$\nabla(G) = \frac{1}{n} \sum_{i=1}^{n} \underbrace{\Delta(x_i, G)}_{\text{Euclidean squared distance}}, \tag{2.51}$$

and

$$P(G) = \frac{1}{m} \sum_{i=1}^{m} \underbrace{P_v(v_i)}_{\text{curvature penalty}}, \tag{2.52}$$

where $\Delta(x_i, G)$ is the distance of the point $x_i$ to the nearest point of the graph $G$, and $P_v(v_i)$ is the curvature penalty at the vertex $v_i$. The method is suitable for 2D binary images.

Rumpf et al. [182, 211] proposed a skeletonization method based on level sets and the distance function from the boundary of the object; the method applies to 2D and 3D binary images. An augmented fast marching method for computing skeletons also relies on the propagation of a distance function from the boundary of the region of interest, with the 3D skeleton computed from the arrival times. A skeletonization approach using radial basis functions is proposed by Wan-Chun et al. [240]: it constructs a distance field from a binary object and connects local maxima regions to generate the skeleton. The key idea is to formulate the skeleton in terms of a surface $S$, defined as the set of points $M(S)$ such that each point $q$ on the medial surface satisfies: i) a symmetry property; ii) a uniformity property; and iii) a compactness property. These properties induce a well-defined medial representation. Lien et al. [132] present a general skeletonization method, but it is mostly designed for synthetic images. Other approaches are suitable for thin structures with well-defined boundaries and require 2D binary images. Tran et al. [219] define a 3D voxel coding algorithm based on discrete Euclidean distances; however, strong constraints are introduced by constructing medial points that depend on the voxel coding scheme.

2.3.3.2 Tracking-based

Kofahi et al. [7] described a morphological neuron reconstruction using an adaptive exploratory search at the voxel intensity level. Directional filters are used to describe the local orientation. The method is well suited for images with no significant noise or artifacts, which could otherwise lead to an improper reconstruction.

Wink et al. [250] presented a multiscale vessel tracking method for 2D images. The method propagates a wave between two user-selected points using a scale-selective cost to ignore irrelevant off-branches which might cause the path to deviate. The cost is defined as

$$C(\sigma) = \begin{cases} \dfrac{1}{R_l}, & \text{if } R(\sigma) = 0, \\[6pt] \dfrac{1}{R(\sigma)}, & \text{otherwise}, \end{cases} \tag{2.53}$$

where $R_l$ is the minimum value of the image and $R$ is the vesselness measure of Frangi [71].

Tracking in angiography data was first proposed by Wong et al. [253]. The proposed method applies to binary (already segmented) data, and the major focus of the analysis is to identify abnormalities in the vessel shape. Rather than achieving segmentation, this approach characterizes the vessel axis; later, a probabilistic approach for vessel axis tracing and segmentation [252] was integrated. The method is formulated in terms of stream surfaces with a minimum-cost path

formulation:

$$x_{i+1} = x_i + \left[ \hat{t}_i \;\; \hat{t}_i^{\perp} \;\; (\hat{t}_i \times \hat{t}_i^{\perp}) \right] \begin{bmatrix} d \cos\theta_x^{i+1} \sin\phi_x^{i+1} \\ d \sin\theta_x^{i+1} \sin\phi_x^{i+1} \\ d \cos\phi_x^{i+1} \end{bmatrix}, \tag{2.54}$$

$$p_{i+1} = \arg\max_{p \in \Omega} \underbrace{f(p \mid q_i, \hat{f})}_{\text{posterior}} = \arg\max_{p \in \Omega} \underbrace{f(\hat{f} \mid q_i, p)}_{\text{likelihood}}\, \underbrace{f(p \mid q_i)}_{\text{prior}}, \tag{2.55}$$

where $p_{i+1}$ is the solution vector of the axis tracing problem, $x_{i+1}$ and $\hat{t}_{i+1}$ are expressed in spherical coordinates $(\theta_x^{i+1}, \phi_x^{i+1})$ and $(\theta_t^{i+1}, \phi_t^{i+1})$, the initial solution vector is $p_{i+1} = [\theta_x^{i+1}, \phi_x^{i+1}, \theta_t^{i+1}, \phi_t^{i+1}, r_{i+1}]^\top$, and $q_i = [x_i^\top \; \hat{t}_i^\top]^\top$. Limitations of such tracking methods are that: i) they assume fairly regular shapes; and ii) it is not straightforward to represent the entire morphology of the tubular structure.

A robot-navigation formulation for centerline extraction was proposed by Kang et al. [110]. Centerline extraction of the colon is posed as a path planning problem in which a robot has to travel along the colon guided by a camera; the problem is to traverse the colon with the guidance of the camera. The camera position $p(t)$ at a given time $t$ is expressed as a function of: i) the direction $d(t)$; ii) the centerline $c(t)$; and iii) the thickness function $T(t)$, with

$$d(t) = \underbrace{k\, T(t)}_{\text{magnitude}} \; \underbrace{\frac{\partial c(t)/\partial t}{\left| \partial c(t)/\partial t \right|}}_{\text{direction}}, \tag{2.56}$$

where $k$ is a constant value. Under this formulation, the authors proposed a "double" propagation strategy.

A method for detecting dendrites from 2D images identifies dendrites from the "intensity" information in the image, without estimating structural features. The method relies on two steps: i) in the first step, candidate filaments are detected from the intensity information; and ii) in the second step, an optimal spanning tree is built from the detected filaments. The probability is estimated via a modified likelihood formulation: the goal is to find the tree $T$ that maximizes $\sum_{(x,y) \in T} \Psi(x,y)$, which is derived from

$$\log P(I \mid T) = \Psi_o + \sum_{(x,y) \in T} \Psi(x,y), \tag{2.57}$$

where $T$ is the set of pixels in the filament tree and the function $\Psi$ is the log-likelihood ratio

$$\Psi(x,y) = \log \frac{P(I(x,y) \mid Y(x,y) = 1)}{P(I(x,y) \mid Y(x,y) = 0)}. \tag{2.58}$$


Robust segmentation of tubular structures with an elliptical model was developed by Behrens et al. [22]. The method includes a vessel enhancement step, and the ellipse parameters are obtained from

$$U = \cos(2\theta)\frac{1 - e^2}{1 + e^2}, \quad V = \sin(2\theta)\frac{1 - e^2}{1 + e^2}, \quad R = 2x_c(1 - V) - 2y_c V, \quad S = \frac{2a^2 b^2}{a^2 + b^2} - \frac{x_c R}{2} - \frac{y_c R}{2}, \quad e = \frac{b}{a}, \tag{2.59}$$

and then a solution of the form $(x_c, y_c), a, b, \theta$ is found by solving a linear system of equations derived from the previous expressions. For tracking, a Kalman filter is used. Let $x_k$ be a point on the cylinder axis at time $k\Delta t$; the state estimate at the next time instant is

$$\begin{aligned} x_{k+1} &= x_k + \Delta t\, \dot{x}_k + \tfrac{1}{2} \Delta t^2\, \ddot{x}_k, \\ \dot{x}_{k+1} &= \dot{x}_k + \Delta t\, \ddot{x}_k, \\ \ddot{x}_{k+1} &= \ddot{x}_k. \end{aligned} \tag{2.60}$$

Experiments were performed to segment the aortic arch and the spinal cord in 3D MR angiography data. Similarly, Florin et al. [70] proposed a tracking method based on a sequential estimation framework. Aylward et al. [15] integrated a tracking system based on detecting and following intensity ridges.
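The constant-acceleration prediction of Eq. 2.60 corresponds to a standard state-transition matrix in a Kalman filter; a minimal sketch for one state dimension (generic, not the cited implementation):

```python
import numpy as np

def predict(state, dt):
    """Eq. 2.60 as a state transition: state = [x, x_dot, x_ddot]."""
    F = np.array([[1.0, dt, 0.5 * dt ** 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    return F @ state

state = np.array([0.0, 2.0, 1.0])   # position 0, velocity 2, acceleration 1
state = predict(state, dt=1.0)      # -> [2.5, 3.0, 1.0]
```

In the full filter, this prediction is combined with the ellipse-fit measurement at each step via the Kalman gain to keep the axis estimate on track.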


Table 2.5: Segmentation of Tubular Objects – Tracking-based methods

Skeleton-based: Ferreira et al. [66] (1999, 2D); Yim et al. [262] (2000, 2D); Bitter et al. [28] (2001, 3D); Quek et al. [174] (2001, 3D); Wan et al. [240] (2001, 3D); Hanger et al. [83] (2002, 3D); Wan et al. [239] (2002, 3D); Kegl et al. [113] (2002, 2D); Maddah et al. [137] (2002, 3D); Yushkevich et al. [265] (2003, 3D); Huang et al. [97] (2003, 2D); Dimitrov et al. [58] (2003, 2D); Rumpf et al. [182, 211], He et al. [88] (2003, 3D); Rami et al. [176] (2004, 2D); Ji et al. [104] (2004, 2D); Tran et al. [219] (2005, 3D); Kang et al. [110] (2005, 3D); Ge et al. [74] (2005, 2D); Gayle et al. [73] (2005, 3D); Soltanian et al. [201] (2005, 3D); Torsello et al. [217] (2006, 2D); Morrison et al. [154] (2006, 2D); Bertrand et al. [26] (2006, 2D); Moll et al. [152] (2006, 3D); Hassouna et al. [86, 85] (2007, 3D); Reniers et al. [177] (2007, 3D); Couprie et al. [48] (2007, 3D); Wang et al. [242] (2007, 3D); Schlecht et al. [190] (2007, 3D).

Tracking-based: Toumoulin et al. [218] (2001, 3D); Aylward et al. [15] (2002, 3D); Fleuret et al. [69] (2002, 2D); Wesarg et al. [247] (2006, 3D); Lee et al. [126] (2007, 3D).


2.3.4 Hybrid Methods

Chung et al. [46] proposed a method applied to 2D images, based on a centerline detection step. Morphological operators such as erosion and dilation are implemented as the front evolution:

∂Φ(x, y, t)/∂t = ± sup_{r(θ)∈β} ⟨r(θ), ∇Φ⟩,   (2.61)

where, given the initial condition, the plus sign corresponds to erosion while the minus sign refers to dilation.

Φ_t = (|Φ − η| / M) ‖∇Φ‖,   (2.62)

where η is the intensity value of the image and M is the maximum intensity value. This formulation produces slow propagation of the front near η. Results are

method.

Barbu et al. [20] developed a learning-based method for the detection and segmentation of tubular structures, modeled as tubes Ti = (X1, X2, R1, R2), where X1, X2 are the end points and R1, R2 the respective radii. The set of all tubes is described by: i) the set T = {T1, ..., Tn}; and ii) the graph G = (T, E), whose nodes are the tubes T. Two tubes Ti, Tj are connected depending on the orientation of their end points. The orientation measure is defined by Eij = |αij − π| tan(|αij − π|), and the unary cost is

c(T) = −ln(P(T)) + (R2 − R1)²/5,   (2.63)

Under a dynamic programming framework, the cost Cijk of the best chain is computed recursively (Eq. 2.64).

This method assumes high homogeneity in intensity values inside the structure of

interest (which is the case for CT colonoscopy images). However, this condition is

not applicable to different imaging modalities such as MRI and optical imaging.

of blood vessels in retinal images is presented by Cai et al. [37]. The optical flow equation

I_x u + I_y v + I_t = 0   (2.65)

is solved by a least squares method to find a solution in terms of [u, v]. By rewriting the system over a patch p:

( Σ_p I_x I_x   Σ_p I_x I_y )
( Σ_p I_y I_x   Σ_p I_y I_y ) · [u, v]ᵀ = − ( Σ_p I_x I_t ,  Σ_p I_y I_t )ᵀ,   (2.66)
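This least-squares solve (Eq. 2.66) fits in a few lines of numpy; the helper name is ours, and the synthetic patch below is constructed so that the true flow is known:

```python
import numpy as np

def lucas_kanade_patch(Ix, Iy, It):
    """Solve the normal equations of Eq. 2.66 for the flow [u, v] on one
    patch p, given spatial (Ix, Iy) and temporal (It) derivatives."""
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Iy * Ix), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    # lstsq also copes with a rank-deficient G (e.g. textureless patches)
    uv, *_ = np.linalg.lstsq(G, b, rcond=None)
    return uv

# Synthetic patch whose derivatives satisfy Ix*u + Iy*v + It = 0
# exactly for the flow (u, v) = (0.3, -0.7).
rng = np.random.default_rng(0)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
It = -(0.3 * Ix - 0.7 * Iy)
uv = lucas_kanade_patch(Ix, Iy, It)  # recovers approximately [0.3, -0.7]
```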


the eigenvalues of the gradient matrix ∇I provide structural information of lines, and this defines a feature space of "structural features". Then the problem of segmentation is formulated as a normalized cut on a graph:

NCut(A, B) = Σ_{p∈A, q∈B} w_pq ( 1/Σ_{p∈A} D_p + 1/Σ_{q∈B} D_q ),   (2.67)

where A, B are two segments of the original set V, w_pq is the similarity of the vertices p and q as a function of both intensity and distance, and D_p = Σ_{q∈V} w_pq is the degree of the vertex p. The weights are expressed as a function of the intensity and distance as:

w_ij = exp(−‖∇I(i, j)‖²/σ1²) · exp(−D(i, j)²/σ2²),

where σ1², σ2² are parameters set by the user. One limitation is that the

However, this assumption may not hold for the case of small vessels. An application

El-Baz et al. [62] proposed a probabilistic model for segmenting vessels. The method is based on separating blood vessels from other regions of interest by a probabilistic intensity model rather than using a mixture of Gaussian functions or Rician functions. Results are depicted in images where vessels are clearly visible and boundaries are well defined.

Zeng et al. [267] presented an approach for automatic extraction and measurement


implemented to identify each tubular structure in the image.

K_{θ,σ}(x, y) = { e^{−y_θ² / (2σ²)},  if |x_θ| ≤ L/2,
                  0,                  otherwise,        (2.68)

where K′_{θ,σ}(x, y) = K_{θ,σ}(x, y) − µ_{θ,σ}; and the method implements a version of AdaBoost as:

H(x) = sign( Σ_{n=1}^{5} α_n h_n(x) ),   (2.69)

where α_n = (1/2) ln((1 − ε_n)/ε_n).

A method based on statistical classifiers is presented by Soares et al. [200, 125]. The method is based on the 2D discrete wavelet transform, where the image is analyzed at different scales. This method provides results on 2D images where there is almost no noise present. However, when decomposing the image into different resolutions (i.e., applying the down-sampling operator required by the discrete wavelet transform), small and fine structures may be lost.

Another approach targets the coronary arteries in CTA data. The method is based on computing a number of features describing: i) size; ii) shape; iii) position; and iv) appearance. The study claims that a total of eight features are the most relevant ones. However, there is no strong evidence that the same set of features can characterize generalized tubular structures.


An algorithm for the detection of vascular structures in CT lung images was proposed by Prasad et al. [173]. This approach integrates elements of machine learning. Hanger et al. [83] proposed a method for skeletonization of vascular tree structures for medical images. The proposed reconstruction method works under the hypothesis that the minimum intensity occurs at the center of gravity of the orthogonal cross section of the vessels. Results are depicted for a limited number of vascular structures. A wavelet skeleton method for ribbon-like shapes is proposed by You et al. [263].

Table 2.6: Segmentation of Tubular Objects – Hybrid Methods

Method                        Year  Dim.
  Chung et al. [46]           2000  2D
  Pitas et al. [167]          2001  3D
  Nedzved et al. [156]        2001  2D
  Selle et al. [193]          2002  3D
  Passat et al. [163]         2005  3D
  Huysmans et al. [98]        2006
  Barbu et al. [20, 19]       2007  3D
  Cai et al. [37]             2006  2D
  El et al. [62]              2006  3D
  Zeng et al. [267]           2006  2D
  Soares et al. [200]         2006  2D
  Isgum et al. [101]          2004  3D
  Prasad et al. [173]         2004  3D
  Gu et al. [78]              2004  2D
  You et al. [263]            2005  2D
  Marquering et al. [141]     2005  3D
  Mosaliganti et al. [155]    2006  3D
  Zhang et al. [268, 269]     2007  3D
  Bai et al. [16]             2007  3D
  Cheng et al. [45]           2007  3D

Chapter 3

Methods

reconstruction of neuron cells from optical imaging. We remind the reader that among the major challenges for reliable automatic reconstruction are the poor quality of the data and the tree morphological representation in terms of cylindrical lengths and diameters.

Our database of neuron cell images consists of twelve CA1 pyramidal neurons from rat hippocampi. We have acquired data from a confocal and a multiphoton microscope, with cells loaded with Alexa Fluor 594 dye. We have collected twelve image


Figure 3.1: Neuron morphology.

size of 640 × 480 × 150 each, with voxel size 0.3 µm in the x-y axis and 1.0 µm in

the z axis. Excitation wavelength was set to 810 nm, while the lens and index of refraction correspond to water. Confocal data were acquired with an Olympus Fluoview™ confocal microscope, and each cell was loaded with Alexa Fluor 555 dye.

Confocal imaging datasets consist of three or more partially overlapping stacks with

resolution of 1024 × 1024 × 110 each; the resolution of each stack is 0.25 µm in the x and y axes and 0.5 µm in the z axis. The emission and excitation wavelength was

set to 543 nm and 567 nm respectively while the numerical aperture and pinhole

diameter was set to 0.9 and 150 µm, respectively. Both the lens and index of refraction correspond to water. The neurons consist of dendrites, which are highly irregular tubular structures. The challenges towards centerline extraction of these structures include: i) a poor signal-to-noise ratio, ii) the

objects of interest are at the limit of optical imaging resolution, iii) the intensity is not uniform throughout the cell, and, most importantly, iv) there is an extreme variation in shape among dendrites.

where P is the Point Spread Function (PSF) of the optical microscope, O is the true

Our approach consists of the steps described below. A detailed description of each step is given in the following sections:

1. Deconvolution;

2. Frames-shrinkage denoising;

3. Registration;

4. Dendrite segmentation;

5. Morphological reconstruction.

3.3 Deconvolution

This step addresses deconvolution in confocal imaging. For multiphoton imaging, this step is not required, since the apparatus of the microscope provides images of better quality than confocal.

distortions imposed by the microscope [179, 29]. The point spread function is the response of the optical device to an impulse, in the sense that it measures how an ideal point is actually imaged. In the case of 3D confocal images, the major effect is blurring and an elongation along the z axis of the images.

dye used in these experiments. In general, a model for a 3D fluorescence optical image I can be expressed in terms of a PSF¹ P and different noise sources² N as:

I(x, y, z) = ∫∫∫_{−∞}^{+∞} P(x − x₁, y − y₁, z − z₁) O(x₁, y₁, z₁) dx₁ dy₁ dz₁ + N(x, y, z)   (3.2)
           = (P ∗ O)(x, y, z) + N(x, y, z),   (3.3)

¹ Highly dependent on the microscope optics.
² Thermal, photon shot, and biological background noise.

where O is the original object. Then the problem of deconvolution consists in recovering the image O. By considering the dual problem in the Fourier space, Eq. 3.2 becomes

Î(u, v, z) = P̂(u, v, z) Ô(u, v, z) + N̂(u, v, z),   (3.4)

where Î, P̂, Ô, and N̂ denote Fourier transforms. Neglecting the noise term,

Ô(u, v, z) = Î(u, v, z) / P̂(u, v, z);   (3.5)

this is the so-called Fourier-quotient method. However, this method is very sensitive to noise.
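A minimal numpy sketch of the Fourier-quotient idea (Eq. 3.5); the small eps added to the denominator is our own regularization, and it marks exactly the frequencies where the noise sensitivity of the plain quotient shows up:

```python
import numpy as np

def fourier_quotient_deconv(image, psf, eps=1e-3):
    """Fourier-quotient deconvolution (Eq. 3.5). The psf array is assumed
    centered; eps guards against division by near-zero frequencies."""
    I_hat = np.fft.fftn(image)
    P_hat = np.fft.fftn(np.fft.ifftshift(psf))
    O_hat = I_hat * np.conj(P_hat) / (np.abs(P_hat) ** 2 + eps)
    return np.real(np.fft.ifftn(O_hat))

# Toy check: blur a point source with a Gaussian PSF, then recover it.
obj = np.zeros((16, 16, 16))
obj[8, 8, 8] = 1.0
z, y, x = np.mgrid[-8:8, -8:8, -8:8]
psf = np.exp(-(x**2 + y**2 + z**2) / (2 * 0.7**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifftn(np.fft.fftn(obj) *
                               np.fft.fftn(np.fft.ifftshift(psf))))
restored = fourier_quotient_deconv(blurred, psf, eps=1e-12)
```

With noise-free data and a tiny eps the point source is recovered almost exactly; with real noisy data the quotient amplifies high-frequency noise, which is why the dissertation relies on a Maximum Likelihood method instead.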

First, latex beads immersed in tissue (the medium) were imaged with an Olympus Fluoview™ confocal microscope. The diameter of the beads was 0.2 µm,

the resolution was set to 0.076 µm in the x and y axis, and 0.2 µm along the z-axis,

giving a voxel aspect ratio of about 1:1:3; the emission and excitation wavelength

was set to 520 nm and 490 nm respectively while the numerical aperture and pinhole

diameter was set to 0.9 and 150 µm respectively. The lens and index of refraction

correspond both to water. The same parameters of the microscope were used to


acquire the neuron data sets. Second, to robustly estimate a PSF, individual beads were averaged from a given 3D image stack; Fig. 3.3 compares beads at different depths in the same image stack. Third, deconvolution was performed with the Huygens™ software using a standard Maximum Likelihood Estimation method [2]. Fig. 3.4 depicts the effect of deconvolution on the average bead obtained


Figure 3.3: (a)-(c) x − y view of beads in tissue, at depths z = 69.4 µm, 48.4 µm, and 18.4 µm, respectively; (d)-(f) x − z view of beads in tissue.


Figure 3.4: Deconvolution of the average bead obtained from the beads in Fig. 3.3. (a),(b) Projection in the x − y axis of the average bead after and before deconvolution, respectively; (c),(d) projection in the x − z axis after and before deconvolution.


3.4 Denoising

We construct a 3D frame capable of robust edge detection from three 1D filters that generate a 1D Parseval frame [181]. This lifted frame incorporates robust edge detectors along the main diagonals of the filtered data. The denoising uses a hysteresis thresholding step and an affine thresholding function that takes full advantage of the filter-adaptive threshold bounds.

spaces using existing frames based on digital filters. More specifically, we define in

Section 3.4.1.1 the notion of Parseval frame and we briefly describe the mathematical

framework for constructing and lifting frames using filterbanks in Section 3.4.1.2. We

The theory of frames in Hilbert spaces plays a fundamental role in signal and image processing. A frame is a family of vectors spanning a Hilbert space, generalizing the notion of orthogonal basis. A frame satisfies the property of perfect reconstruction: any vector of the Hilbert space can be recovered from its inner products with the frame vectors. The linear frame transform, from the initial space to the space of coefficients, obtained by taking the inner product of a vector with the frame vectors, is injective and hence admits a left inverse [41]. Perfect reconstruction together with redundancy make the use of frames successful in a broad spectrum

Let us recall that a digital filter is a vector K ∈ ℓ²(Zᵈ) for which the Fourier transform is well defined; we will also consider the translation operator Tₙ, defined by Tₙ s(m) = s(m − n), for n, m ∈ Zᵈ. A family {vᵢ}_{i∈I} in a Hilbert space H is a frame if

A‖x‖² ≤ Σ_{i∈I} |⟨x, vᵢ⟩|² ≤ B‖x‖²,   for all x ∈ H,   (3.6)

where A ≤ B are positive constants called frame bounds. For our purposes, I is a

countable index set. A Parseval frame is a frame for which A = B = 1; for this frame

the inequality above becomes the well-known Parseval identity. Parseval frames

generalize orthogonal bases: the same vectors used in analysis (decomposition) can be used in synthesis (reconstruction). For a Parseval frame {vᵢ}_{i∈I} ⊂ H we have the following perfect reconstruction formula:

x = Σ_{i∈I} ⟨x, vᵢ⟩ vᵢ,   for all x ∈ H.   (3.7)


3.4.1.2 Augmenting a Frame

The power and efficiency of frames comes from their redundancy, a key ingredient

frame in a structurally stable way, thus obtaining new improved frames. We use our

A finite set {K0 , ..., Kl } of `2 (Zd ) generates a frame in `2 (Zd ) if the family

following result provides a characterization for the sets of digital filters that generate

frames.

such that for almost every w ∈ [−π, π)ᵈ the following inequality holds:

A ≤ Σ_{i=0}^{l} |K̂ᵢ(w)|² ≤ B.   (3.8)

rary that allow us to augment frames. For a given positive integer Q, let U be a

U(ω) (K̂₀(ω), K̂₁(ω), . . . , K̂_R(ω))ᵗ = (F̂₀(ω), F̂₁(ω), . . . , F̂_Q(ω))ᵗ,   (3.9)


Proposition 1, under certain assumptions on the matrix U , the new family of filters

Corollary 1. If there exists A > 0 such that for almost every ω ∈ [−π, π)d we have

Akxk ≤ kU (ω)xk for all x ∈ CR+1 , then the integer translates of the new family of

digital filters Fq , q = 0, 1, . . . , Q also form a frame for `2 (Zd ). If, in particular, U (ω)

is an isometry for almost every ω ∈ [−π, π)ᵈ, then the resulting frame is Parseval whenever the original one is.

Note that Corollary 1 is a general tool that can be used in constructing frames

in any dimension. We construct in Section 3.4.1.3 a separable Parseval frame for the

Hilbert space `2 (Z3 ), and then augment it with the lifting scheme from Corollary 1

We begin our construction with the 1D frame described by Ron and Shen [181] as

being the simplest example of a compactly supported tight spline frame. Consider

φ̂(w) = ( sin(w/2) / (w/2) )²,  ψ̂₁(w) = i (√2/2) sin(w/2) ( sin(w/4) / (w/4) )²,  and  ψ̂₂(w) = − sin⁴(w/4) / (w/4)².   (3.10)


The associated low-pass k₀, band-pass k₁, and high-pass k₂ filters are defined as follows:

k₀(ω) = cos²(ω/2),  k₁(ω) = i (√2/2) sin(ω),  and  k₂(ω) = sin²(ω/2).   (3.11)

The filters are normalized so that their impulse responses K₀ = (1/4)[1, 2, 1], K₁ = (1/4)[√2, 0, −√2], and K₂ = (1/4)[−1, 2, −1] form a 1D Parseval frame for ℓ²(Z). Note that K₁ is an edge detector.

To extend to 3D, we take the 3-fold tensor products of this frame with itself. By Fourier calculus, the perfect reconstruction condition (Eq. 3.7) and Proposition 1 are preserved: the products Kₚ ⊗ K_q ⊗ K_r, with p, q, r ∈ {0, 1, 2}, are digital filters that generate a separable 3D Parseval frame with 27 filters. The term separable refers to the fact that the 3D filters are obtained by direct multiplication of filters from lower dimensions, in our case 1D filters. This set of filters detects first-order singularities along the coordinate axes in 3D space. Next, we will


operators are tuned to detect edges along the three principal axes. We wish to augment our frame with non-separable filters capable of detecting edges along other desired directions (e.g., the main diagonals in 3D space). Let θ ∈ [0, 2π) be the angle in 3D measured counterclockwise from (0, 1, 0) towards (0, 0, 1) while on the positive x-axis

choices of pairs of angles in such a way that the resulting set of filters and their translates still form a Parseval frame. We apply Corollary 1 to the above frame, choosing U to be an isometry; since it preserves distances, the matrix U automatically satisfies the hypothesis of Corollary 1. The filters K₉, K₃, and K₁ are the last 3 elements, in this order. We will only use the last 3 columns of U to augment the frame, since only these columns will affect the last 3 elements of

U = ( I_{R−3}        0₃
      0_{N+3, R−3}   U₁ ),   (3.15)

where I_k is the k × k identity matrix, 0_k is the k × k zero matrix, 0_{k,l} is the k × l zero matrix, and


U₁ = (  a            0                   0
        0            a                   0
        0            0                   a
        b·cos ϕ₁     b·cos θ₁ sin ϕ₁     b·sin θ₁ sin ϕ₁
        ⋮            ⋮                   ⋮
        b·cos ϕ_N    b·cos θ_N sin ϕ_N   b·sin θ_N sin ϕ_N  ).   (3.16)

For U₁ to be an isometry, it suffices that the columns are orthogonal and of norm one:

b² Σ_{i=1}^{N} cos ϕᵢ sin θᵢ sin ϕᵢ = 0,    a² + b² Σ_{i=1}^{N} cos² ϕᵢ = 1,   (3.17)

b² Σ_{i=1}^{N} cos ϕᵢ cos θᵢ sin ϕᵢ = 0,    a² + b² Σ_{i=1}^{N} cos² θᵢ sin² ϕᵢ = 1,   (3.18)

and finally:

b² Σ_{i=1}^{N} cos θᵢ sin θᵢ sin² ϕᵢ = 0,    a² + b² Σ_{i=1}^{N} sin² θᵢ sin² ϕᵢ = 1.   (3.19)

We have augmented the frame with two choices of angles and constants a, b. One of our choices for the angles is N = 4, ϕᵢ = π/2, i = 1, ..., 4, and θᵢ = π/4 + (i − 1)·π/2. This leads to a = √3/2 and b = √2/4, and the filters obtained by applying U to (K̂₀(ω), K̂₁(ω), . . . , K̂_R(ω))ᵗ are edge detectors along the main diagonals in 3D.

Table 3.1 presents our choice of U by listing the result of applying the operations

associated with the augmentation process. We call the resulting filterbank the UH


Table 3.1: Lifted Spline Filterbank: Selected Frame Elements

F1 = (√3/2) K1
F3 = (√3/2) K3
F9 = (√3/2) K9
F27 = (1/4)(K9 + K3 + K1)
F28 = (1/4)(K9 − K3 + K1)
F29 = (1/4)(K9 + K3 − K1)
F30 = (1/4)(K9 − K3 − K1)

Lifted Spline Filterbank (UH-LSF). All the other 23 frame elements that are not listed remain unchanged. The new frame incorporates F1, F3, F9, which are scaled versions of the separable original filters. They are edge detectors capable of detecting edges parallel to the coordinate axes. It also defines a set of new directions by containing non-separable filters (F27, F28, F29, and F30) that are tuned along the main diagonals, as shown in Fig. 3.5. For example, F27 estimates the directional derivative in the direction of the vector (1, 1, 1)ᵗ while F30 estimates the directional derivative in the direction of (1, −1, −1)ᵗ.


Another choice of angles is N = 8, ϕᵢ = π/2, and θᵢ = (i − 1)·π/4 for all i = 1, ..., 8. For this choice one can easily verify that a = √2/2 and b = √2/4. This frame will contain, in addition to the edge detectors along the main diagonals, the detectors for edges

A similar construction can be performed starting from any set of 1D filters that generate a frame. For example, the filters corresponding to the Haar scaling and wavelet functions are (1/√2)[1, 1] and (1/√2)[1, −1], respectively. The first filter is the low-pass, averaging filter and retains most of the energy. The second filter is the detail filter, again an edge detector. Since they are created via a multi-resolution analysis,

original data while Fig. 3.6(b) depicts the maximum intensity projection along the

We present a simple but effective algorithm for noise removal in 3D photon-limited images in Section 3.4.2.1. The algorithm thresholds the noisy frame coefficients based on two adaptive threshold bounds that depend on subsets of the frame elements, which is very different from the traditional wavelet shrinkage algorithms in the literature. To determine the optimal thresholds and evaluate the performance of the presented algorithm, we use computational phantoms that resemble the real fluorescence microscopy data of neurons in Section 3.4.2.2.

Figure 3.6: Maximum intensity projection along the z axis of a neuron cell: (a) the original noisy confocal data and (b) the image after denoising with our algorithm.

Assume that {Fr}_{r=0,...,R} is a filterbank whose integer translates form a Parseval frame for ℓ²(Zᵈ). Let I = {(r, n) : 0 ≤ r ≤ R, n ∈ Zᵈ} be the index set and let v_{(r,n)} = Tₙ(Fr) for all (r, n) ∈ I. With this notation, our assumption is that {v_{(r,n)} | (r, n) ∈ I} is a Parseval frame for ℓ²(Zᵈ). By padding X in all directions with zeros, we can embed X in ℓ²(Zᵈ). We will always consider input signals as elements of ℓ²(Zᵈ). A simple computation shows that the


perfect reconstruction condition (Eq. 3.7) can be written as:

X = Σ_r (X ∗ Frᵗ) ∗ Fr,   (3.20)

where Frᵗ is a copy of Fr flipped about the origin: Frᵗ(n) = Fr(−n) for all n ∈ Z³. To see this, let Yr = X ∗ Frᵗ. Then {Yr}_r contains all the frame coefficients of X:

Yr(n) = Σ_{m∈Z³} X(m) Frᵗ(n − m) = Σ_{m∈Z³} X(m) Fr(m − n) = ⟨X, Tₙ(Fr)⟩ = c_{(r,n)}.   (3.21)

to Fr. For denoising purposes, we set a lower threshold bound B1(r) = a1·Λr and an upper threshold bound B2(r) = a2·Λr, where a1 and a2 are constants (we will determine the optimal values of a1 and a2 in Section 3.4.2.2). To take full advantage of the adaptive bounds, we process the frame coefficients as follows. If a coefficient's value |c_{(r,n)}| exceeds B2(r), the

coefficient remains unmodified. If this absolute value is less than B1 (r), then the

coefficient will be replaced by 0. If the absolute value is between B1 (r) and B2 (r), to

decide whether to retain this coefficient we check all other filters for coefficients that

correspond to the same spatial location (i.e. voxel in 3D) given by n. If there is at

least one more filter Fr̃ with r̃ ≠ r for which |c_{(r̃,n)}| is above the lower threshold bound B1(r̃), the coefficient c_{(r,n)} is retained but modified with the affine thresholding function:

ρ_{B1,B2}(x) = ( B2 / (B2 − B1) ) (x − sgn(x) B1).   (3.22)


In summary, the affine hysteresis thresholding is formulated as follows:

c̃_{(r,n)} = { c_{(r,n)},                      if |c_{(r,n)}| > B2(r),
              ρ_{B1(r),B2(r)}(c_{(r,n)}),     if B1(r) < |c_{(r,n)}| ≤ B2(r)
                                              and |c_{(r̃,n)}| > B1(r̃) for some r̃ ≠ r,
              0,                              otherwise.   (3.23)

The choice of the affine function was motivated by the fact that it enhances the retained coefficients; the denoised data is then reconstructed from the coefficients c̃_{(r,n)} containing the altered values.

In contrast with the classical wavelet thresholding approach, our method takes advantage of the redundancy of the frame: frames provide at each voxel location more detailed information than a standard separable wavelet decomposition, and the correction scheme uses this detailed information.

Algorithm 1. Input: the noisy data X and the number of decomposition levels L.

Step 1: Decompose X with the filterbank (Yr = X ∗ Frᵗ) to obtain {Yr}_r.

Step 2: Compute {Ỹr}_r by applying the approach described in Eq. (3.23).

Step 3: Reconstruct X̃ from {Ỹr}_r using the same filterbank.

The algorithm processes all high-frequency subbands but keeps the lowpass subband unchanged.
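The thresholding rule of Eqs. 3.22–3.23 (Step 2 of Algorithm 1) can be sketched in numpy as follows; the helper names are ours, and the per-filter bounds B1, B2 are passed in directly:

```python
import numpy as np

def affine_threshold(x, B1, B2):
    """Affine shrinkage rho_{B1,B2} of Eq. 3.22: maps (B1, B2] onto (0, B2]."""
    return (B2 / (B2 - B1)) * (x - np.sign(x) * B1)

def hysteresis_shrink(coeffs, B1, B2):
    """Affine hysteresis thresholding (Eq. 3.23). coeffs is a list of
    same-shape coefficient arrays, one per filter r; B1[r] < B2[r] are the
    lower/upper bounds for filter r."""
    out = []
    for r, c in enumerate(coeffs):
        a = np.abs(c)
        strong = a > B2[r]
        weak = (a > B1[r]) & ~strong
        # a weak coefficient survives only if some other filter is above
        # its own lower bound at the same voxel n
        support = np.zeros_like(a, dtype=bool)
        for s, cs in enumerate(coeffs):
            if s != r:
                support |= np.abs(cs) > B1[s]
        new = np.zeros_like(c)
        new[strong] = c[strong]
        keep = weak & support
        new[keep] = affine_threshold(c[keep], B1[r], B2[r])
        out.append(new)
    return out

# Two filters, four voxels: strong, supported-weak, unsupported-weak, small.
coeffs = [np.array([0.9, 0.6, 0.6, 0.3]), np.array([0.9, 0.6, 0.2, 0.9])]
den = hysteresis_shrink(coeffs, B1=[0.5, 0.5], B2=[0.8, 0.8])
```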


3.4.2.2 Computational Phantom and Validation Strategy

We construct computational phantoms that resemble real fluorescence microscopy data and can be used for the validation of the performance of denoising algorithms. The construction has the following steps: i) create binary volume; ii) simulate intensity decay; iii) create tubular neighborhoods; and iv) add noise.

Create binary volume: we create a volume sampled at the desired resolution where the voxels occupied by the cylinder are labeled 1 and background voxels are labeled 0 (note that such a binary volume can be used, for example, as the ground truth for dendrite morphology reconstruction tasks).

Simulate intensity decay: construct a volume in which the intensity decays linearly in the voxels that correspond to the neuron, simulating the diffusion of the dye in the dendritic tree with the root in the soma. The linear intensity decay is based on the tree-distance of the cylinders to the soma. Such a volume will be used as the original for the denoising

Create tubular neighborhoods: construct tubular neighborhoods around the neuron, based on a prescribed ratio of volumes and surfaces. We can create several neighborhoods in which the neuron occupies, for example, at least 5% of the total volume of the neighborhood. These neighborhoods will be used both to create several


Add noise: add to the tubular neighborhoods decaying speckle noise with randomly generated statistical parameters. To obtain the local Poisson noise, we apply Poisson noise with a variance of, for example, 0.1 of the local intensity. To compensate for the intensity gap between the layers of noise, we filter the image with zero-mean Gaussian noise with a local intensity-dependent variance. For more realistic effects, we convolve the volume with the theoretical point spread function derived using the

Figure 4.1(a) depicts a binary volume with dimensions of 374 × 158 × 57 and

threshold bounds for our denoising algorithm. As pointed out by Dima et al. [56],

for the 3D neuron denoising application, the Mean Square Error (MSE) computed at the level of the entire volume would not produce good results due to the small fraction of voxels occupied by the neuron; one may even not consider this metric at all, concluding that its value is meaningless for this class of volumes. We hence need to define several new metrics that account for sparse

above. Our approach permits a local neighborhood evaluation. Let I be the original

and Id be the denoised image, both with dimensions n1 × n2 × n3 . Let χLN be the

indicator function of the local neighborhood in which the metrics are evaluated. We


Figure 3.7: Maximum intensity projection of volume data from synthetic neuron n120. (a) Binary volume; (b) with added noise.

define new metrics, namely, local neighborhood MSE (LN-MSE), local neighborhood RMSE (LN-RMSE), local neighborhood SNR (LN-SNR), and local neighborhood PSNR (LN-PSNR):

LN-MSE(I, Id) = ‖(I − Id) · χ_LN‖₂² / (n₁ n₂ n₃),   (3.24)

LN-RMSE(I, Id) = √( LN-MSE(I, Id) ),   (3.25)

LN-SNR(I, Id) = 10 log₁₀ ( ‖I · χ_LN‖₂ / ‖(I − Id) · χ_LN‖₂ ),   (3.26)

LN-PSNR(I, Id) = 20 log₁₀ ( max I / LN-RMSE(I, Id) ).   (3.27)
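These metrics transcribe directly into numpy (function and argument names are ours):

```python
import numpy as np

def ln_metrics(I, Id, chi):
    """Local-neighborhood metrics of Eqs. 3.24-3.27. I: original volume,
    Id: denoised volume, chi: 0/1 indicator of the local neighborhood."""
    n1n2n3 = I.size
    err = (I - Id) * chi
    ln_mse = np.sum(err ** 2) / n1n2n3
    ln_rmse = np.sqrt(ln_mse)
    ln_snr = 10.0 * np.log10(np.linalg.norm(I * chi) / np.linalg.norm(err))
    ln_psnr = 20.0 * np.log10(I.max() / ln_rmse)
    return ln_mse, ln_rmse, ln_snr, ln_psnr

I = np.ones((2, 2, 2))
Id = I.copy()
Id[0, 0, 0] = 0.5          # one corrupted voxel
chi = np.ones_like(I)      # evaluate over the whole (tiny) volume
mse, rmse, snr, psnr = ln_metrics(I, Id, chi)  # mse = 0.25 / 8
```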


Similarly, for the evaluation of the preservation of structure, we can also define the

Figure 3.8: Depiction of the local neighborhood associated with noise removal.

Obviously, the lower and upper thresholds of our algorithm play a key role in its performance. The simplest and most straightforward criterion is to choose the thresholds that lead to a minimum of the LN-MSE. Experimentally, we found that the optimal value for the lower threshold is in the range [0.5–0.6]Λr, and for the upper threshold in the range [0.7–0.8]Λr. In addition, we also found that these optimal thresholds do not change significantly with the data and noise considered, which is indeed desirable for practical applications. We set the optimal lower and upper thresholds to 0.5Λr and 0.75Λr, respectively.


Figure 3.9: Registration of 3 volume stacks.

3.5 Registration

Many dendrites are larger than the typical field of view of typical laser-scanning

microscopes. Multiple image volumes are therefore necessary to fully capture the

neuron structure. We are thus required to merge and align the multiple data sets to

stack (which are obtainable when moving the microscope from one area of interest

to the next). To measure similarity during registration, we use the sum of mean-

squared differences for each voxel in the two images. This measure is then minimized
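A toy sketch of this similarity measure under a 1D integer offset search; the exhaustive strategy and the function names are ours (real stacks require a full 3D search with sub-voxel optimization):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences between two equally sized stacks."""
    return float(np.sum((a - b) ** 2))

def best_z_offset(fixed, moving, max_shift=5):
    """Exhaustively search the integer z-offset that minimizes the SSD."""
    best_dz, best_cost = 0, np.inf
    for dz in range(-max_shift, max_shift + 1):
        cost = ssd(fixed, np.roll(moving, dz, axis=0))
        if cost < best_cost:
            best_dz, best_cost = dz, cost
    return best_dz

rng = np.random.default_rng(0)
fixed = rng.random((12, 8, 8))
moving = np.roll(fixed, -3, axis=0)   # stack displaced by 3 slices
dz = best_z_offset(fixed, moving)     # recovers the displacement: dz == 3
```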


3.6 Dendrite Segmentation

Now we will derive a general structural measure to enhance regular and irregular volumetric structures. The key idea is to use prior knowledge of the topology of a tubular object to associate structural features (the eigenvalues of the structure tensor for a given tubular object) with the tubular structure itself. The association rule assigns high probability values inside the volumetric tubular object (maximum at the center); low probability values at the border (minimum at the border itself); and zero probability values elsewhere.


Existing algorithms typically associate an elliptical or semi-elliptical shape with the cross section of the object of interest (typically vessels). However, when detecting objects with extremely high irregularities in shape (i.e., not semi-elliptical), these algorithms may not perform well, since the assumptions of an ideal or elliptical cylinder no longer hold, due to:

2. adjoining structures: these are structures attached to the dendrites that play

detect them;

MRI.

morphological description [225, 220, 187, 185]. The key advantages of our method

are:

not only the medial axis of the tubular structure but the entire tubular object


Figure 3.11: Overview of our algorithm for dendrite detection.

2. learning structure and noise: using this machine learning approach, not only

are geometrical shapes being learned, but also the noise variations intrinsic to


sectional shape and considerable radius variations. Our method is based on statistical

learning theory. Specifically, Support Vector Machines (SVMs) are used to learn

tube-like shapes and to estimate the posterior probability distribution for a given

Figure 3.12: (a) x − y view of a typical dendrite segment; (b)-(d) eigenvalues λ1 to λ3.

tures for volumetric tubular objects. We consider the general case of tubular objects with anisotropic aspect ratio. The particular case of tubular objects with isotropic aspect

From the synthetic model depicted in Fig. 3.13(a), its morphological properties include: i) variation of intensity, ii) radius variation from 0.5 to 1.5 µm, iii) a variety of branching sections, and iv) high and low curvature segments. Voxel size was

To the best of our knowledge, only McIntosh et al. [143] have reported the extraction of structural tubular features based on the estimation of the Hessian matrix, and only applied to MRI data for the segmentation of the spinal cord. Rather than resampling the data, we compute the Hessian matrix by estimating second partial derivatives at the native resolution (especially in the z axis). Classically, structural features in 2D and 3D have been computed on isotropic data [71, 189, 134]. We should emphasize that confocal and multiphoton data is anisotropic by nature; the aspect ratio in the x, y, and z axes is 1:1:3, and it is from here that we construct the Hessian matrix based on the existing aspect ratio.

For a fixed σxy in the x, y axis and for a fixed σz in the z axis, the Hessian matrix


is computed as:

∇²I(x; σxy; σz) = ( Ixx(x; σxy; σxy)   Ixy(x; σxy; σxy)   Ixz(x; σxy; σxy)
                    Iyx(x; σxy; σxy)   Iyy(x; σxy; σxy)   Iyz(x; σxy; σxy)
                    Izx(x; σz; σxy)    Izy(x; σz; σxy)    Izz(x; σz; σxy) ),   (3.28)

where

Ixy(x; σxy; σxy) = { ∂²/(∂x∂y) G(x; σxy; σxy) } ∗ I(x),   (3.29)

and

Izx(x; σz; σxy) = { ∂²/(∂z∂x) G(x; σz; σxy) } ∗ I(x),   (3.30)

represent the approximations to the second partial derivatives after convolving the image I with an "anisotropic" Gaussian function with standard deviations σxy in the x-y plane and σz in the z axis. Let λ1(x; σxy; σz) ≤ λ2(x; σxy; σz) ≤ λ3(x; σxy; σz) be the eigenvalues, which are real since the matrix ∇²I(x; σxy; σz) is symmetric. The information derived from these eigenvalues encodes structural information in a local neighborhood: if λ1 ≈ λ2 ≪ 0 and λ3 ≈ 0, then the structure resembles that of an 'ideal tubular structure' (Frangi et al. [71]), and if λ1 > 0 and λ1 ≈ λ2 ≈ λ3, then the structure resembles a blob. From these


Figure 3.13: Synthetic volumetric data and isotropic structural features. (a) Synthetic tubular model based on a spline centerline model; (b) a 2D slice of the synthetic model; (c)-(e) estimated eigenvalues λ1, λ2, and λ3.

can be derived (Sato et al. [189]). However, analytical expressions are limited to ideal tubular structures. In the rest of this chapter we will denote a tubular feature vector (TFV) for a


fixed σ as:

T_{σxy;σz}(x) = ( λ1(x; σxy; σz), λ2(x; σxy; σz), λ3(x; σxy; σz) ).   (3.31)

For the case of isotropic data, we simply consider the case σxy = σz and express

SVMs were proposed by Vapnik and Cortes [47, 233] as a method for data classification. SVMs are based on statistical learning theory and estimate a decision function f(x) for any value x ∈ Rⁿ. The function f(x) is estimated from the set of training vectors xᵢ ∈ Rⁿ, i = 1, ..., l, with labels yᵢ ∈ {−1, 1}. Then SVMs provide a

solution to the following optimization problem:

min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1}^{l} ξᵢ

subject to: yᵢ(⟨w, φ(xᵢ)⟩ + b) ≥ 1 − ξᵢ,   ξᵢ ≥ 0,   (3.33)

where the training vectors {xᵢ}_{i=1,...,l} are mapped by the feature map φ, w is the normal vector to the hyperplane that represents the decision boundary, the constant C > 0 is the parameter for the hyperplane separation error, and ξᵢ is a slack variable used


The dual solution to the minimization problem posed in Eq. 3.33 is to maximize

max_α Σ_{i=1}^{l} αᵢ − (1/2) Σ_{i,j=1}^{l} αᵢ αⱼ yᵢ yⱼ ⟨φ(xᵢ), φ(xⱼ)⟩

subject to: Σ_{i=1}^{l} yᵢ αᵢ = 0,   0 ≤ αᵢ ≤ C.   (3.34)

The decision function is:

f(x) = sign( Σ_{i=1}^{l} yᵢ αᵢ ⟨φ(xᵢ), φ(x)⟩ + b ).   (3.35)

The points for which αi > 0 are called the support vectors and lie closest to the

hyperplane. In order to minimize the computational cost, kernels are used to replace

the inner product ⟨φ(xᵢ), φ(xⱼ)⟩. A kernel K : Rⁿ × Rⁿ → R that satisfies the Mercer condition [146] implements a dot product of some feature map φ, i.e.:

K(xᵢ, xⱼ) = ⟨φ(xᵢ), φ(xⱼ)⟩.   (3.36)

Conventional SVMs solve a binary classification problem (Eq. 3.35) from the training data. However, as proposed by Platt [168], SVMs can robustly estimate posterior probabilities:

p(y = 1 | f) = 1 / (1 + exp(A f(x) + B)).   (3.37)

Let Dᵀ = Dᵀ₊ ∪ Dᵀ₋ be a subset of l training vectors, where x ∈ Dᵀ₊ iff f(x) = y = 1 and x ∈ Dᵀ₋ iff f(x) = y = −1. The parameters A and B are


estimated as follows (see [131]):

min_{Z=(A,B)} F(Z),

F(Z) = − Σ_{i=1}^{l} ( tᵢ log(pᵢ) + (1 − tᵢ) log(1 − pᵢ) ),

pᵢ = 1 / (1 + exp(A fᵢ + B)),   fᵢ = f(xᵢ),   (3.38)

tᵢ = { (N₊ + 1)/(N₊ + 2),   if yᵢ = 1,
       1/(N₋ + 2),          if yᵢ = −1,

where N₊ = |Dᵀ₊| and N₋ = |Dᵀ₋|. Our objective is to estimate the probability

density function from DT for different tubular 3D objects. Without loss of generality,

we have selected a Gaussian (RBF) covariance function as kernel due to its isotropic

properties:

where γ is the variance or scaling parameter that determines the width in which the
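As a quick, self-contained illustration (the helper name and sample vectors are ours, not from the experiments), the RBF kernel equals 1 at zero distance and decays with the squared distance at a rate set by γ:

```python
import math

# Hypothetical sketch of the RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2).
def rbf_kernel(x, y, gamma=0.5):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))            # 1.0 (zero distance)
print(round(rbf_kernel([0.0, 0.0], [1.0, 1.0]), 4))  # 0.3679 (= exp(-1))
```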

The vectors T_σ are mapped into a high-dimensional space in which the parameters A and B (Eq. 3.38) can be estimated, and therefore a probability value p(x) is obtained for each voxel. Training and prediction are performed only in regions of interest rather than in the entire volume. We define a local neighborhood for tubular structures in which training and prediction will be performed. This local neighborhood reduces the cardinality of the voxels to be classified and therefore allows training and prediction to be performed near the decision boundaries of the feature vectors. Let I be a 3D volume and I_T be the set of voxels which


belong to a tube-like structure in I. Let I_{T,ε} be a local neighborhood of I_T such that I_T ⊂ I_{T,ε} ⊂ I. We define the set of training vectors for a tubular structure over this neighborhood, so that each feature vector computed around the tubular-like object of interest is associated with labels that belong to the tubular object. Our objective is to estimate the posterior probability that a given voxel belongs to the centerline.

Figure 3.14: Labels used to train a synthetic regular tubular model; labels corresponding to the centerline are marked in white, while labels corresponding to the non-centerline are marked in gray (background is excluded from the training).

In this Subsection we explain how we construct a statistical shape model for a dendrite segment, as well as how we choose the optimal dendrite model in terms of prediction performance. Dendrites do not exhibit regular cylindrical or elliptical shape patterns; instead, they present highly irregular tubular-like patterns in addition to adjoining structures. Different types of tubular measures of the following form have been mentioned in the literature (Frangi et al. [71], Sato et al. [189]): a measure f_σ computed at scale σ from |λ1| ≤ |λ2| ≤ |λ3|, the ordered eigenvalues of the Hessian matrix H(I ∗ G_σ) of the Gaussian-smoothed volume. Such measures characterize local shapes, discriminating structural features such as plates, lines, and blob-like structures. However, they may fail for irregular dendrites, since the structural information they contain does not fulfill the hypothesis of the assumed model.

Figure 3.15: Structural features. (a) Distribution of the normalized eigenvalues; (b) estimation of the parametric sigmoid function at three different scales.

Thus, we hypothesize that regular and irregular cylindrical shape models lead to different eigenvalue distributions, and we propose to learn the tubular measure from the object of interest itself (as opposed to defining an ideal tubular measure V_m).

We start with a synthetic tubular model (Fig. 3.13(a)) for which a rule that associates the eigenvalues with the centerline (Fig. 3.14) will be constructed. Since the configuration of the eigenvalues reveals structural information, our goal is to identify those eigenvalues for which the location of the structure tensor is on the centerline of the tubular object. Figure 3.15(a) depicts the class distribution of the eigenvalues with respect to the synthetic model of Fig. 3.13(a), with the centerline labels depicted in Fig. 3.14. Note that the probability distributions of the centerline and non-centerline classes overlap with each other, so a fixed rule cannot separate the centerline from the rest of the tubular object. This motivates the idea of learning the association rule between the eigenvalues and the centerline.

To estimate the parameters needed to obtain the posterior probability of the centerline model, SVM parameter selection was performed with a grid search using three-fold cross-validation. The best performance was obtained using a penalty value of C = 10, a linear kernel, and σ = 0.5. The optimal SVM parameters correspond to b = 4.19 (Eq. 3.35) with 638 support vectors, while A and B (Eq. 3.38) were A = −1.7955 and B = −0.0539. Figure 3.15(b) depicts the estimated parametric sigmoid function for three different scales σ = {0.25, 0.5, 1.0}.
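As a concrete illustration, the fitted sigmoid of Eq. 3.37 maps raw SVM decision values f(x) to centerline probabilities. The constants below are the A and B reported above; the helper function and the sample decision values are ours:

```python
import math

# Platt's sigmoid (Eq. 3.37) with the parameters reported for the synthetic
# centerline model; function name and sample decision values are hypothetical.
A, B = -1.7955, -0.0539

def platt_probability(f):
    return 1.0 / (1.0 + math.exp(A * f + B))

# Voxels far on the centerline side of the hyperplane get probability near 1;
# voxels far on the other side get probability near 0.
print(platt_probability(4.0) > 0.99)    # True
print(platt_probability(-4.0) < 0.01)   # True
```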


The two classes of parameters we consider are: i) generic dendrite shapes; and ii) the SVM training parameters. Our goal is to construct a dendrite shape model that captures both global and local shape variations. Dendrite selection was performed by taking into account i) the dendritic global shape with some variations in diameter, ii) the inclusion of spines, iii) variations in intensity, and iv) dendrite segments with high and low curvature³. To construct such a model, we selected five dendrite segments, depicted in Fig. 3.16, Regions A-E. For each dendrite segment, SVM training was performed by grid search with different kernels (linear, polynomial, exponential), values of σ_xy (0.15, 0.3, 0.45, 0.9) µm and σ_z (0.5, 1.0, 1.5, 2.0) µm, and varying the slack penalty. Testing was performed on unseen data for each segment. Figure 3.17 depicts two out of the 10 volumes on which testing was performed. For our specific application, performance is measured by the ability to perform “segmentation” in the unseen data. The model with the best performance was trained from the dendrite segment in Region E, where the estimated parameters were A = −1.94, B = −0.222 (Eq. 3.38), and b = 8.269.

Figure 3.17 depicts the results of predicting the tubular shape with the model trained from Region E, while Fig. 3.16(b) and Fig. 3.16(d) depict the result of the prediction in the selected regions.

³Note that the major drawback of models which rely on an ideal or elliptical cylindrical shape is that they do not include the ability to detect adjoining objects (spines) attached to the dendrites, which is desirable to include in the learning algorithm.


Figure 3.16: (a),(c) Selected regions in the denoised data; and (b),(d) selected regions in the probability volume obtained from the dendrite segment in Region E.

Figure 3.18(a) presents the results of applying two different shape models to a 3D image stack. Model A represents a ‘smooth and regular’ tubular model, whereas Model B represents an ‘irregular’ tubular model. Note the difference when predicting the tubular structure: the prediction of Model A is considerably smoother than the prediction of Model B; this is evident since spines are enhanced in Model B as opposed to Model A.


Figure 3.17: Prediction results; the probability volumes were obtained from the dendrite in Region E depicted in Fig. 3.16. (a),(e) and (c),(g): volume rendering of the denoised volumes of two different cells in the x − y axis and x − z axis respectively; (b),(f) and (d),(h): volume rendering of the probability volumes in the x − y axis and x − z axis respectively.


Figure 3.18: Schematic of shape learning from two different models. Model A was obtained from a synthetic example and is a regular tubular model, whereas Model B is an irregular tubular model. Note that the major difference between the results obtained by these models is that Model B enhances “spines” compared with Model A.


3.7 Morphological Reconstruction

The key idea of our approach is to evolve a 3D front obtained from the probability volume described in Sec. 3.6 such that the front moves considerably faster at the centerline of the irregular tubular object than at its border (anisotropic front propagation), inducing a new distance metric. Based on this distance metric, dendrite centerlines are precisely the paths with “optimal cost” when traveling along the tubular object.

In this Section we first review the basic concepts of Level Set theory needed for our application (Sec. 3.7.1) and then present our proposed framework for rapid morphological reconstruction.

Let Γ be a closed curve (R²) or surface (R³). We denote by Int(Γ) the interior of Γ, that is, the bounded connected component of (R²\Γ) or (R³\Γ), and by Ext(Γ) its exterior. An implicit function H represents a function y = f(x) implicitly; that is, given x in the domain of f, H(x, f(x)) = 0.

⁴By construction, loops are prohibited when representing the tree structure.

Definition 2. A distance function d for the metric space (X, | · |) is defined as:

    d(x, ∂S) = min_{y ∈ ∂S} |x − y|,

where x, y ∈ X.

The signed distance function φ associated with a front Γ is then defined as:

    φ(x) =  d(x, Γ)    if x ∈ Int(Γ),
           −d(x, Γ)    if x ∈ Ext(Γ),        (3.43)
            0          if x ∈ Γ.


Let x be a point in Rⁿ, let φ be a distance function φ : Rⁿ → R, and let Γ(t = 0) be the initial front. The gradient of φ is:

    ∇φ = (∂φ/∂x, ∂φ/∂y, ∂φ/∂z).        (3.45)

From this, the unit outward normal can be written in terms of the implicit function φ as:

    N = ∇φ / |∇φ|.        (3.46)

The idea is to provide “motion” in the direction of the outward normal N to the implicit function (the same applies to 3D). The “magnitude” of the motion is represented by a function


F, called the “speed function”, which generally depends on geometric properties such as the curvature κ and the normal direction N. Let x denote a point on the front Γ; we want to evolve Γ as a function of its embedded level set φ, where φ moves in the direction of its normal vectors with speed F. Given x on the front Γ, we can express F at a given time t as F(x(t)) = x′(t) · N. By applying the chain rule to φ(x(t), t) = 0 and substituting N = ∇φ/|∇φ| (Eq. 3.46), we obtain:

    φ_t + F |∇φ| = 0.        (3.47)

This last equation was introduced by Osher and Sethian [160]. The geometric quantity of curvature can also be written in terms of φ; in 2D:

    κ = ∇ · (∇φ/|∇φ|) = (φ_xx φ_y² − 2 φ_x φ_y φ_xy + φ_yy φ_x²) / (φ_x² + φ_y²)^{3/2}.        (3.48)

We consider the case in which the speed function F does not depend on time. This requires that F is a strictly positive function and leads to the stationary (Eikonal) form discussed next. We introduce the concept of shortest geodesic paths in terms of level set front propagation; specifically, we consider the stationary case of the level set equation φ_t + F|∇φ| = 0.

Let T : R³ → R⁺ be a positive function and define the level set C of T as:

    C(x, y, z, t) = {(x, y, z) : T(x, y, z) = t}.        (3.49)

The level set C is a strictly monotonic front: it is the set of points that can be reached from p0 with minimum cost at time t. Assume that C evolves according to C_t = F N, where F > 0 is the speed of the front. To illustrate this concept, we can think of this type of propagation as a balloon that is “only” expanding as a function of the time t at a given speed F.

Let us consider the one-dimensional case. To compute the arrival time of a particle moving in 1D, we can use the well-established relation distance = rate × time. From elementary calculus, the gradient ∇T is orthogonal to the level sets of T, and the magnitude |∇T| is inversely proportional to the speed F. We can state that the magnitude |∇T| is directly proportional to the “cost” of moving the particle (Fig. 3.21). In 1D this reads:

    F (dT/dx) = 1.        (3.50)

Figure 3.21: Schematic of the one dimensional case of the Eikonal Equation.

In the formulation expressed by Eq. 3.49, the embedded level set always moves outwards, and the relation between the arrival time T and the speed of propagation F follows from differentiating the arrival-time identity:

    T(C(x, t)) = t   ⇒   ∇T · C_t = 1   ⇒   ∇T · F (∇T/|∇T|) = 1   ⇒   F · |∇T| = 1.        (3.51)

Therefore, evolving a strictly monotonic front in the normal direction can be expressed by the Eikonal equation:

    F · |∇T| = 1,   T(p0) = 0.        (3.52)

The duality of the Eikonal Equation 3.52 with the shortest path problem can be stated as follows: given two points p, q ∈ R³, the optimal path between p and q is the path of minimal cumulative cost, where s is the Euclidean arclength and F(x) is the weight over a domain D. The shortest path c(t) = {x(t), y(t), z(t)} from the point p0 to p attains the minimal cumulative cost:

    T(p) = min_c ∫_{p0}^{p} F(c(s)) ds.        (3.53)

The level sets of the arrival time T are the points that are reached with equal minimal cost, and the minimal cost paths are orthogonal to the level set curves. To illustrate this statement we consider the two-dimensional case; the three-dimensional case can be found in [159].


Lemma 1. If a path c(p) = (t(p), s(p)) satisfies Eq. 3.54, where T(p) is the unit tangent vector to c(p), defined as T(p) = c′(p)/|c′(p)|, then c(p) minimizes

    min_c ∫_{p0}^{p} F(c(p)) |c′(p)| dp.        (3.55)

Lemma 2. The gradient descent curves c(p) = (t(p), s(p)) defined by the ordinary differential equation c′(p) = −∇T minimize the functional

    ∫ F(c(p)) |c′(p)| dp.        (3.57)

Lemma 3. The optimal paths between two points A and B are the gradient descent curves of the arrival time T.

Then we can explicitly find the optimal path between the starting point p0 and any point p by solving:

    X_t = −∇T,   X(0) = p,        (3.59)

that is, given the arrival time T, the optimal path can be found by traveling from the point p along the negative of the gradient back to the starting point p0.
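The discrete analogue of Eq. 3.59 is a steepest-descent walk on the arrival-time grid. The following sketch (the grid, the names, and the toy values are ours) backtracks from a terminal point to the seed:

```python
# Illustrative sketch (not the dissertation's implementation): discrete
# steepest-descent backtracking on an arrival-time grid T, the grid analogue
# of integrating X_t = -grad(T) from a terminal point back to the seed p0.
def backtrack(T, start):
    """Follow the lowest-valued neighbor of T from `start` until a local
    minimum (the seed point, where T = 0) is reached."""
    path = [start]
    r, c = start
    while True:
        best = (r, c)
        # examine the 8-connected neighborhood for the smallest arrival time
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(T) and 0 <= cc < len(T[0]) \
                        and T[rr][cc] < T[best[0]][best[1]]:
                    best = (rr, cc)
        if best == (r, c):      # no lower neighbor: seed reached
            return path
        r, c = best
        path.append(best)

# Toy arrival times: T grows with distance from the seed at (0, 0).
T = [[max(i, j) for j in range(4)] for i in range(4)]
path = backtrack(T, (3, 3))
print(path[0], path[-1])  # (3, 3) (0, 0)
```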


To solve the Eikonal Equation 3.52 numerically, we use an upwind scheme based on [195]:

    [ max(D⁻ˣ_{ijk}u, −D⁺ˣ_{ijk}u, 0)² + max(D⁻ʸ_{ijk}u, −D⁺ʸ_{ijk}u, 0)² + max(D⁻ᶻ_{ijk}u, −D⁺ᶻ_{ijk}u, 0)² ]^{1/2} = F_{ijk},        (3.60)

where F_{ijk} is the cost function and D⁺, D⁻ are the forward and backward difference operators defined as:

    D⁻ˣ_i ψ = (ψ_i − ψ_{i−1}) / h,
    D⁺ˣ_i ψ = (ψ_{i+1} − ψ_i) / h,        (3.61)

where ψ_i is the value defined on a grid at the i-th position and h is the step size. This numerical scheme solves the Eikonal Equation 3.52 in an optimal number (O(N log N)) of steps.
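A minimal way to see the O(N log N) behavior is the Dijkstra-like special case of fast marching below (our simplification: a 4-connected one-sided update instead of the full upwind stencil of Eq. 3.60):

```python
import heapq

# A minimal Dijkstra-like sketch of the fast-marching idea on a 2D grid:
# arrival times T are frozen in order of increasing cost from the seed,
# which is what gives the method its O(N log N) complexity.
def fast_march(cost, seed, h=1.0):
    rows, cols = len(cost), len(cost[0])
    T = [[float("inf")] * cols for _ in range(rows)]
    T[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > T[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                cand = t + h * cost[rr][cc]
                if cand < T[rr][cc]:
                    T[rr][cc] = cand
                    heapq.heappush(heap, (cand, (rr, cc)))
    return T

# Uniform cost: the arrival time equals the Manhattan distance from the seed.
T = fast_march([[1.0] * 5 for _ in range(5)], (0, 0))
print(T[0][4], T[4][4])  # 4.0 8.0
```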

The morphological reconstruction procedure consists of the following steps:

1. Soma-pipette segmentation.
2. Isotropic front propagation.
3. Terminal point detection.
4. Anisotropic front propagation.
5. Centerline extraction and tree representation.
6. Diameter estimation.


3.7.3 Soma-pipette segmentation

Since dendrites emanate from the soma, both soma and pipette segmentation must be performed. In the case where the pipette is absent, only soma segmentation is considered. We assume that the soma and pipette are by far the brightest objects⁵ and the longest objects along the z axis. We represent the soma and pipette volume V_SP as the union of the soma volume V_S and the pipette volume V_P. Our goal is to remove the pipette volume V_P (when present) and segment the soma volume V_S. We consider the case where both the pipette and soma are present⁶.

Since we assume that the soma and pipette are the brightest objects in the 3D volume (Fig. 3.22(a)), from the additive projection along the z axis we create two masks. The first mask E1 is found by fitting an ellipse to the soma only, and the second mask E2 is found by fitting an ellipse that encloses both the soma and pipette (Fig. 3.22(b)). We use the ellipse E2 to segment the soma and pipette volume V_SP. The soma and pipette are segmented by applying a standard K-means algorithm within the mask E2 (enclosing the soma and pipette). From the region of interest we select the largest connected component (the soma and pipette attached), designated as volume V_SP. Our goal now is to remove the pipette. To that end, we construct a cost function (speed image) over which we propagate a 3D front. The cost function

⁵This is a reasonable assumption since the pipette carries the fluorescent dye and therefore, if it is present, it will produce the highest illumination along with the soma.
⁶If the pipette is not present, a similar analysis is performed.


Figure 3.22: Soma and pipette segmentation. (a) Additive projection of a neuron cell along the z axis; the blue lines define a region of interest obtained from the green ellipse enclosing the soma and pipette. (b) Pipette removal, left: ellipse E2 enclosing the pipette from the additive projection along the z axis; right: ellipses E1 and E2 enclosing the soma and pipette respectively, and the 2D projection of the pipette inside the ellipse E2. Steps in pipette removal: (c) additive projection along the z axis, (d) additive projection inside the ellipse E2, (e) 3D front propagation in the pipette, and (f),(g) extracted pipette medial axis along the circular masks used for pipette segmentation.

is estimated from the distance transform D, where the starting point is the center of the soma and corresponds to the point with maximum distance value in the soma region (second mask).
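The intensity clustering step above can be sketched with a plain 1-D two-class K-means (Lloyd iteration); the dissertation does not specify its K-means variant, and the sample intensities below are invented:

```python
# Hypothetical sketch of the intensity clustering step: a plain 1-D K-means
# (K = 2) that splits voxel intensities inside the E2 mask into a dim
# background cluster and a bright (soma/pipette) cluster.
def two_means(values, iters=50):
    c0, c1 = min(values), max(values)  # initial centroids
    for _ in range(iters):
        lo = [v for v in values if abs(v - c0) <= abs(v - c1)]
        hi = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(lo) / len(lo)  # centroid of the dim cluster
        c1 = sum(hi) / len(hi)  # centroid of the bright cluster
    return c0, c1

# Dim background around 10; bright soma/pipette voxels around 200.
intensities = [8, 9, 10, 11, 12, 198, 200, 202]
c0, c1 = two_means(intensities)
print(round(c0), round(c1))  # 10 200
```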

Next, we define an energy function such that the embedded level set C evolves with higher curvature at the center of the volume containing the soma and pipette. Such a level set is guided by the normalized distance transform Dn of the segmented volume:

    g(Dn(x)) · |∇T(x)| = 1,        (3.62)

with T(p0) = 0. The term g(Dn(x)) is the speed function (Fig. 3.22(e)) which guides the front along the 3D centerline of the pipette; g is defined as g(x) = eˣ, where x is a value between zero and one. Figure 3.22(f) depicts the estimated medial axis of the pipette (mask depicted in blue in Fig. 3.22(g)). Circular masks are placed along this medial axis; the radius of the circles is estimated from the distance transform D (since it provides the distance from the center point to the boundary). The radius of each circle is defined as:

    r_i = D(x_i) + K,        (3.63)

where D is the distance transform of the object of interest at the voxel x_i and K is a constant value added to ensure the circles cover the boundary of the object of interest.

The pipette is then removed by deleting the voxels covered by each circular 2D mask in the x − y plane. In the case where the pipette is not present, only the soma segmentation is performed.

Isotropic front propagation consists of evolving a front with low curvature from the center point of the soma. The volume used for this propagation is the binary volume obtained by H; the surface evolution produces a distance map that captures the distance of every voxel from the soma center point p0. Isotropic front propagation is used to detect the dendrite tip points: all the tip points form the set of terminal points. Terminal points have the unique characteristic that they have the maximum distance to the soma in a given dendrite branch.

Let us denote the isotropic distance volume as V_ID. We construct discrete distance regions at increasing distances from the soma p0. We thus partition the volume into N steps, and for each distance step i a set of regions is defined. Note that two adjacent regions S_i, S_{i+1} do not share any point in common, and we observe that for each distance step i, multiple regions can be created. This is easy to see, since a single distance step can intersect several distinct dendrite branches.


Figure 3.23: Schematic of terminal point detection. (a) Common region used to detect two adjacent regions; and (b) visualization of the chain volume V_Chain; terminal points are marked in yellow and common regions are marked in gray.

where ε is a distance value⁷. We say that two regions C_i and C_{i+1} are connected if the point p_i = max{V_ID(x) : x ∈ C_i} satisfies p_i ∈ C_{i+1}; that is, the point with maximum distance in C_i belongs to the adjacent region C_{i+1}.

To detect terminal points, we march inward from the N-th distance step d_N. In case multiple regions M_N are created (due to the multiple branches), we find the points for each

⁷Typically ε is selected to be 1.0 or 2.0 µm, approximately three or six voxels.


M region and mark them as terminal points. Then we consider all the M_{N−1} regions at distance step N − 1 and check whether each region has a connecting region (Eq. 3.66). If a given region has no connecting region from the distance step N, then we compute its point with maximum distance and flag it as a terminal point. This procedure is repeated until the regions reach the soma (clearly, within the soma there are no bifurcation points). Figure 3.23(b) depicts the detected terminal points (marked in yellow) and the chained volume, where the regions in gray are the common (connecting) regions.
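Under simplifying assumptions (a 2-D grid, unit-width shells, an 8-neighborhood check between shells), the shell-chaining rule can be sketched as follows; the Y-shaped toy mask and all names are ours:

```python
from collections import deque

# Toy sketch of shell-based terminal-point detection: a shell component with
# no connected component in the next-farther shell contains a terminal point.
def components(cells):
    """4-connected components of a set of (r, c) cells."""
    cells, comps = set(cells), []
    while cells:
        seed = cells.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            r, c = queue.popleft()
            for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (rr, cc) in cells:
                    cells.remove((rr, cc))
                    comp.add((rr, cc))
                    queue.append((rr, cc))
        comps.append(comp)
    return comps

# Y-shaped structure: a stem from the root at (0, 2) splitting in two.
mask = [(0, 2), (1, 2), (2, 1), (2, 3), (3, 0), (3, 4)]
dist = {(0, 2): 0, (1, 2): 1, (2, 1): 2, (2, 3): 2, (3, 0): 3, (3, 4): 3}
N = max(dist.values())

terminals = []
for step in range(N, 0, -1):
    shell = [p for p in mask if dist[p] == step]
    nxt = {p for p in mask if dist[p] == step + 1}
    for comp in components(shell):
        # does this component connect to any region in the farther shell?
        linked = any((r + dr, c + dc) in nxt
                     for (r, c) in comp
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        if not linked:
            terminals.append(max(comp, key=lambda p: dist[p]))
print(sorted(terminals))  # [(3, 0), (3, 4)]
```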

Given a cost volume (denoted by F) and a starting point (the soma point p0), we propagate an anisotropic 3D front that has a very high speed at the center of the irregular tubular object compared with its border. The objective of propagating a 3D front that travels at high speed along the centerline of the tubular structure is to be able to determine the optimal path from the set of terminal points to the soma. Note that, by construction, a path is optimal if its cumulative cost is minimal.

The proposed energy functional to compute geodesic paths along the centerline is:

    g(Dn(H(x))) · |∇T(x)| = 1,

with T(p0) = 0, where p0 is the voxel corresponding to the soma center point. The term g(Dn(H(x))) induces a cost function for which a 3D front propagates


Figure 3.24: Visualization of the 3D front propagation along the centerline of the tubular object. (a)-(d) 3D front propagation in a tubular region of considerable diameter; note how the branching topology of the dendrites is naturally modeled by the 3D front, which always moves along the centerline. (e),(f) 3D front propagation in dendrite structures with small diameter.

The operator H is composed of two threshold terms. The first term, f1(P(x)), is based on the posterior probability P that a voxel belongs to the centerline (Eq. 3.37) and is equal to 1 in regions greater than or equal to a given probability value. This function ensures that the great majority of the small dendrites are robustly segmented. The second term, f2(V(x)), is a threshold operator that is equal to 1 in regions greater than or equal to a given intensity value; this operator ensures that the wider structures, mainly the soma (not a cylindrical object) and the largest dendrites, are segmented. Figure 3.24 depicts the front propagation in a 2D slice of a typical volume. Note that in Fig. 3.24(a) the front expands anisotropically in the center of the tubular structure and gradually travels along the centerline of the bifurcating branches; this type of front propagation naturally handles the complex topology of the dendrites. Figure 3.24(e) depicts the front propagation in a thin dendrite with two branches, and Fig. 3.24(f) depicts the front propagation in a single dendrite.

In general, centerline points correspond to points where the curvature of the front at a given time t is maximal, and therefore they are located the farthest away from the initial voxel p0 at a given time t (in the case of a single branch). Then, in the case of one dendrite, the centerline is extracted by marching along the gradient of the arrival time from the terminal point back to p0.

In the case of the complete neuron cell, a 3D front starting from the initial voxel p0 is initiated according to Eq. 3.71. Finally, centerlines are extracted by marching along


Figure 3.25: Schematic depicting the general principle used to construct a single connected tree component when tracing back the optimal path from the terminal point to the root point. Individual paths are marked with a dashed line and the common path is marked with a continuous line.

the gradient from the terminal voxels to the initial voxel p0 (note that convergence is guaranteed because the arrival time T decreases monotonically toward its global minimum at p0).

In order to represent the set of paths as a single connected tree structure, the soma center point is the natural selection for the root. Branching points and dendrite segments are constructed by tracing back the paths from the terminal points to the root. Each centerline voxel along the paths is labeled according to the number of times that it has been visited when traveling from the terminal points to the root point. Such labeling induces a unique identification of path segments. For example, consider Fig. 3.25: the dashed paths correspond to paths that are visited only once when traveling from the terminal voxels to the root point, while voxels that have been visited twice correspond to the continuous line. By considering all the terminal voxels in a neuron cell, a single connected tree component is always guaranteed to be constructed (in order to produce realistic computational simulations, a single connected tree structure must be constructed). Therefore, the cell
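The visit-counting rule can be sketched on two toy backtracked paths (the coordinates and names are ours): voxels visited more than once form the shared trunk, and each path's first voxel is a terminal point:

```python
from collections import Counter

# Hypothetical sketch: each backtracked path lists voxels from a terminal
# point to the root; counting visits per voxel identifies the shared trunk
# (count > 1), which guarantees a single connected tree.
paths = [  # two optimal paths sharing a common trunk to the root (0, 0)
    [(3, 1), (2, 1), (1, 0), (0, 0)],
    [(3, -1), (2, -1), (1, 0), (0, 0)],
]
visits = Counter(v for path in paths for v in path)

trunk = sorted(v for v, n in visits.items() if n > 1)  # shared segment
tips = [path[0] for path in paths]                     # terminal points
print(trunk)  # [(0, 0), (1, 0)]
print(tips)   # [(3, 1), (3, -1)]
```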

morphology is represented as a rooted tree of dendrite segments. Along each segment we define a Frenet-Serret (FS) frame, a moving vector field consisting of a triplet of vectors (T(u), B(u), N(u))ᵀ. The FS frame constitutes an orthogonal system of vectors, and this orthogonal system is obtained from the derivatives of the curve l with respect to the parameter u. The first derivative, l̇(u), is the vector in the direction of the tangent to the curve at the point l(u). The first, l̇(u), and second, l̈(u), derivatives define the osculating plane of the curve at that point, the limit of all the planes defined by the point l(u) and the ends of the infinitesimal arcs of the curve near it [203]. These two vectors are not always orthogonal to each other, but they can be used to define the binormal vector, the vector perpendicular to the osculating plane. The binormal and tangent vectors, in turn, define the normal vector and with it an orthogonal system of axes that constitutes the FS frame at the point l(u):

    Tangent:   T(u) = l̇(u) / |l̇(u)|,
    Binormal:  B(u) = (l̇(u) × l̈(u)) / |l̇(u) × l̈(u)|,        (3.68)
    Normal:    N(u) = B(u) × T(u).

This frame is independent of the curve's coordinate system and of the parametrization. We model the dendrite surface by considering the curve l(u) as the dendrite centerline, with FS-frame-oriented cross-sectional planes a(u, v):

    e(u, v) = ( l1(u) + a1(u, v),  l2(u) + a2(u, v),  l3(u) + a3(u, v) )ᵀ,        (3.69)

where the cross sections are written in matrix form as follows:

    ( a1(u, v) )          ( N1(u)  B1(u) )
    ( a2(u, v) )  = r(u)  ( N2(u)  B2(u) )  ( cos(v) )        (3.70)
    ( a3(u, v) )          ( N3(u)  B3(u) )  ( sin(v) )

where l(u) = (l1(u), l2(u), l3(u))ᵀ, a(u, v) = (a1(u, v), a2(u, v), a3(u, v))ᵀ, and r(u) is the radius of the cross section at u.
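Equations 3.68-3.70 can be exercised numerically as below (a sketch under our own choice of centerline, a sampled helix, and radius r; not the dissertation's code):

```python
import numpy as np

# Discrete Frenet-Serret frame of a sampled helix centerline l(u), followed
# by a circular cross-section sweep of radius r (Eqs. 3.68-3.70).
u = np.linspace(0, 4 * np.pi, 200)
l = np.stack([np.cos(u), np.sin(u), 0.1 * u], axis=1)  # centerline l(u)

dl = np.gradient(l, axis=0)    # first derivative (tangent direction)
ddl = np.gradient(dl, axis=0)  # second derivative

T = dl / np.linalg.norm(dl, axis=1, keepdims=True)  # tangent
B = np.cross(dl, ddl)
B /= np.linalg.norm(B, axis=1, keepdims=True)       # binormal
N = np.cross(B, T)                                  # normal

# Cross-section point e(u, v) = l(u) + r*(N*cos v + B*sin v)
def surface_point(i, v, r=0.2):
    return l[i] + r * (N[i] * np.cos(v) + B[i] * np.sin(v))

p = surface_point(50, 0.7)
# every surface point lies at distance r from its centerline point
print(round(float(np.linalg.norm(p - l[50])), 6))  # 0.2
```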

Here we describe the use of the distance transform to estimate the diameters along the connected tree data structure⁸. The major challenge is the correct estimation of diameters along highly irregular dendrite segments. Dendritic diameters are estimated for each voxel along each segment Segment_i. A radius is computed from the distance transform of the binary volume as r(u(t)) = 2 · k · Dm(H(u(t)ᵢʲ)), where k is a penalty value and Dm is the three-point moving average of the distance transform along the centerline:

    Dm(H(u(t)ᵢʲ)) = (1/3) Σ_{z ∈ {−1,0,1}} D(H(u(t)ᵢ^{j+z})).        (3.71)
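Equation 3.71 amounts to a 3-tap moving average of distance-transform samples followed by the 2k scaling; a minimal sketch with invented sample values:

```python
# Minimal sketch of Eq. 3.71 (function name and sample values are ours):
# smooth the distance-transform values sampled along a centerline with a
# 3-tap moving average, then scale by 2*k to obtain a diameter estimate.
def diameters(dist_along_centerline, k=1.0):
    d = dist_along_centerline
    out = []
    for j in range(1, len(d) - 1):                  # interior voxels only
        dm = (d[j - 1] + d[j] + d[j + 1]) / 3.0     # D_m, Eq. 3.71
        out.append(2.0 * k * dm)
    return out

# distance-transform samples (voxel units) along one dendrite segment,
# with a spurious spike at the middle voxel that the average smooths out
samples = [2.0, 2.0, 5.0, 2.0, 2.0]
print(diameters(samples))  # [6.0, 6.0, 6.0]
```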

Figure 3.27 depicts different anatomical branches for which the morphological model is extracted. Figure 3.27(a) depicts a bifurcation branch overlapped with the extracted centerline (red line); the blue sphere is the detected branching point, whereas Fig. 3.27(b) depicts the cylindrical representation of the dendrite in terms of Equation 3.69. Figures 3.27(c)-3.27(f) present the estimated cylindrical representations of additional branches.

⁸We emphasize that the proposed morphological reconstruction is always guaranteed to construct a single connected tree representation of the cell, where every path corresponds to a dendrite centerline.


Figure 3.27: Centerline extraction and diameter estimation. (a) Overlay of the maximum intensity projection of the denoised data with the detected centerline (red) and the detected bifurcation point (blue sphere); similarly, (b) corresponds to the cylindrical representation of the dendrite segment. (c),(e) depict the maximum intensity projection in the x − y axis of typical branches of the denoised data, while (d),(f) depict the morphological reconstruction represented as a single connected tree component in cylindrical representation.


Chapter 4

Median filtering is a classical nonlinear technique widely used in visual processing tasks. The filter sorts the pixels covered by an N × N × N mask according to their intensity; the center pixel is then replaced by the median of these pixels.
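For intuition, here is the 1-D analogue of that N × N × N median filter (the 3-D version applies the same rule over a cubic mask; the signal values below are invented):

```python
from statistics import median

# 1-D analogue of the median filter described above: each sample is replaced
# by the median of the window centered on it (borders left unfiltered for
# brevity).
def median_filter(signal, n=3):
    half = n // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = median(signal[i - half:i + half + 1])
    return out

# The impulsive noise spike (90) is removed; the step edge is preserved.
print(median_filter([1, 1, 90, 1, 1, 5, 5, 5]))  # [1, 1, 1, 1, 1, 5, 5, 5]
```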

Nonlinear anisotropic diffusion filtering employs an iterative, ‘tunable’ filter introduced by Perona and Malik [166], which smooths homogeneous regions while preserving the edges. Broser et al. [33] used anisotropic diffusion filtering to average noise along the local axis of the neuron's tubular-like dendrites in order to maintain morphological structure. In all the experiments, the parameters used for the anisotropic diffusion were set to 50 iterations with a time step of 0.0625 and a conductance parameter equal to 3; it was implemented using the Insight Segmentation and Registration Toolkit (ITK) [99]. The most widely-used wavelet threshold

algorithm first estimates the noise level from the median of the absolute values of the coefficients in the high-frequency subband, then determines the threshold based on the estimated noise level. Surprisingly, we found that such an algorithm works for neither synthetic nor real volumes with the 3D separable Haar system. In particular, we found that the estimated threshold for our test data is about zero or a very small number. The reason is that the neuron imaging data are very sparse, with most voxels near zero, as in the photon-limited case. As a result, most wavelet coefficients are near zero, and so is the estimated threshold. Instead of estimating the threshold using the median-value method, we set the threshold to half of the maximum of the absolute values of the coefficients in a subband for the threshold algorithm based on the 3D separable Haar system. For the proposed algorithm, we employ the UH Lifted Spline Filterbank (UH-LSF).
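The half-of-maximum rule can be sketched on a 1-D Haar decomposition (one level; the dissertation uses the 3-D separable Haar system, and the signal below is invented):

```python
# Sketch of the half-of-maximum hard-threshold rule on a 1-D orthonormal
# Haar transform (one decomposition level).
def haar_level(x):
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def inverse_haar(approx, detail):
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

# Sparse signal: a strong edge (10 -> 4) plus small fluctuations.
signal = [10.0, 4.0, 10.1, 10.0, 0.1, 0.0, 0.05, 0.0]
approx, detail = haar_level(signal)

thr = 0.5 * max(abs(d) for d in detail)  # threshold = half of max |coeff|
detail = [d if abs(d) > thr else 0.0 for d in detail]

denoised = inverse_haar(approx, detail)
# the strong edge survives; the small fluctuations are smoothed
print([round(v, 3) for v in denoised])
# [10.0, 4.0, 10.05, 10.05, 0.05, 0.05, 0.025, 0.025]
```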

We first test our algorithm on synthetic noisy volumes – the computational phantoms. The two synthetic noisy volumes used in our experiments are depicted in Fig. 4.1 and Fig. 4.2. The first one is used here to simulate the neuron imaging data as a whole, while the


Figure 4.1: Synthetic phantom data.

second one provides us the opportunity to investigate the behavior of our denoising algorithm on fine structures. The denoising performance on the two synthetic data sets is presented in Tables 4.1 and 4.2.

Table 4.1: Performance evaluation on the noisy volume depicted in Fig. 4.1.

Metric        | Noisy  | Anisotropic Diffusion | Median Filtering | 3D Haar Threshold | Our algorithm
LN-MSE        | 67.13  | 63.49                 | 16.17            | 20.28             | 12.97
LN-RMSE       | 8.19   | 7.96                  | 4.02             | 4.50              | 3.60
LN-SNR (dB)   | -4.19  | -3.95                 | 1.98             | 1.00              | 2.94
LN-PSNR (dB)  | 45.25  | 45.49                 | 51.43            | 50.45             | 52.39

Table 4.2: Performance evaluation on the noisy volume depicted in Fig. 4.2.

Metric        | Noisy   | Anisotropic Diffusion | Median Filtering | 3D Haar Threshold | Our algorithm
LN-MSE        | 749.06  | 720.16                | 448.82           | 349.23            | 261.28
LN-RMSE       | 27.36   | 26.83                 | 21.18            | 18.68             | 16.16
LN-SNR (dB)   | 1.36    | 1.53                  | 3.58             | 4.67              | 5.93
LN-PSNR (dB)  | 34.77   | 34.94                 | 37.00            | 38.09             | 39.35

From Tables 4.1 and 4.2, it is clear that all algorithms used in our experiments

can significantly suppress noise components in the data. Our algorithm produces the


Figure 4.2: Denoising results due to different algorithms on a synthetic noisy volume. (a) Original volume; (b) noisy volume; (c) result due to median filtering; (d) result due to anisotropic diffusion; (e) result due to our algorithm; and (f) result due to the threshold algorithm based on the 3D Haar wavelet.


best results for both noisy volumes in terms of all local metrics. For example, for the noisy volume of Fig. 4.1(b), our algorithm is 6.89 dB better than the nonlinear anisotropic diffusion filtering in terms of LN-PSNR; for the noisy volume of Fig. 4.2(b), our algorithm is 187.53 better than the median filtering in terms of LN-MSE.

For visual comparison, we present the denoising results of the four algorithms in Fig. 4.2. In Fig. 4.2(c), we notice that the median filtering destroys fragile details, which in fact include very important information needed for the morphology analysis of neuron imaging data. The nonlinear anisotropic diffusion method and the threshold algorithm based on the 3D Haar system tend to remove most of the noise, but we find that some sections of the structure of interest are not preserved. By comparison, our algorithm not only removes most background noise, but also preserves the tubular structures.

Table 4.3 depicts the running time of each denoising method. Among the four methods, the median filtering is the most computationally efficient. Our method runs slower than the median filtering and the threshold algorithm based on the 3D separable Haar system, but faster than the anisotropic diffusion filtering. The platform used for our experiments is as follows. Hardware Architecture: PC; Operating System: Windows XP; Processor: 2.6 GHz; Memory Size: 4 GB.


Table 4.3: Performance evaluation - running time (unit: seconds).

Volume                    | Anisotropic Diffusion | Median Filtering | 3D Haar Threshold | Our algorithm
Noisy Volume Fig. 4.1     | 687.95                | 58.59            | 71.97             | 374.76
Noisy Volume Fig. 4.2(b)  | 28.85                 | 2.09             | 3.80              | 14.20

Denoising of multi-photon and confocal microscopy data

We have tested our method on both multi-photon and confocal microscopy data sets and compared its performance quantitatively and qualitatively with respect to the other denoising methods. Binary volumes were obtained from the largest connected component after applying a global threshold to the denoising results. This allows us to assess the sensitivity of each algorithm to produce ‘gaps’ among dendrites (tubular structures) and how well the structures are preserved. Figure 4.3 depicts the maximum intensity projection from a typical image stack. Figure 4.4(a) depicts a detail of the original volume, while Figs. 4.4(b)-(d) depict the projection of the binary segmented volume in the x − y axis with a threshold value set to 10. Notice that fragile details are lost in the case of median filtering and anisotropic diffusion filtering. We note that some background noise is present after denoising with wavelets at the first level of decomposition (Fig. 4.4(e)), whereas at the second level the background noise is mostly removed (Fig. 4.4(f)). The effect of the separable filter can be observed in Fig. 4.4(f): here a block effect is introduced; this effect is visible in the


Figure 4.3: (a),(b) Maximum intensity projections in the x − y and x − z axes of the volume of interest, respectively.

binarized volume. This block effect in the binary volume is not desired since does not

allow to capture local structures (spines) which populate the dendrites. In addition,

we observe that some line segments are broken in the results when applying median

filtering and anisotropic diffusion. By comparison, our algorithm can preserve more

edges, even the weak ones, as shown in Fig. 4.4(d) without producing an aliasing

effect (our transform is undecimated). Figure 4.5 depicts the estimated length of the largest connected component as a function of the threshold value. Observe that at low threshold values our method preserves more structure than the other three filtering methods. At high threshold values we observe that the performance of anisotropic diffusion is better than that of our method and median filtering, with the largest difference occurring at a threshold value of 40. This effect can be explained by the fact that anisotropic diffusion only preserves

strong edges. In addition, it should be noted that, as we increase the threshold value, the relation between the energy of the signal (the volume of interest) and the


Figure 4.4: Comparison results of applying our denoising, anisotropic diffusion, median filtering, and the 3D Haar wavelet. (a) Detail of the selected region of interest (red square in Fig. 4.3). Results of applying a global threshold with value T = 10: (b) median filtering; (c) anisotropic diffusion filtering; (d) our method; and (e),(f) threshold algorithms based on the 3D Haar wavelet with one and two levels of decomposition, respectively.


Figure 4.5: Performance evaluation of the length as a function of the detected largest-component volume at a given threshold; the wavelet transform used two levels of decomposition.
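The evaluation just described, binarizing each denoised volume at a global threshold and keeping the largest connected component, can be sketched as follows. This is a minimal 6-connectivity illustration; the toy volume, threshold, and function name are ours, not the dissertation's code.

```python
import numpy as np
from collections import deque

def largest_component(volume, threshold):
    """Binarize `volume` at `threshold`, then keep only the largest
    6-connected foreground component (assumes some voxel is foreground)."""
    mask = volume >= threshold
    labels = np.zeros(mask.shape, dtype=int)
    best_label, best_size, next_label = 0, 0, 1
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        # breadth-first flood fill over the 6-neighborhood
        queue, size = deque([start]), 0
        labels[start] = next_label
        while queue:
            z, y, x = queue.popleft()
            size += 1
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = next_label
                    queue.append(n)
        if size > best_size:
            best_label, best_size = next_label, size
        next_label += 1
    return labels == best_label

# toy volume: an isolated bright voxel plus a 3x3x3 bright block;
# only the block survives as the largest component
vol = np.zeros((5, 5, 5))
vol[0, 0, 0] = 20
vol[2:5, 2:5, 2:5] = 20
seg = largest_component(vol, threshold=10)
```

In practice one would use a library routine such as scipy.ndimage.label for the labeling step; the explicit flood fill is shown only to keep the sketch dependency-free.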

Figure 4.6 depicts a confocal imaging volume with a selected region of interest.

For comparison, a manually segmented version of the structure in the selected region

is shown in Fig. 4.7. As can be observed, both the median filter and our algorithm

obtain satisfactory results in this case. On the contrary, the nonlinear anisotropic

diffusion tends to destroy fine details (Fig. 4.7(d)). Again, the block effect can be

easily observed in the result due to the 3D Haar wavelet (Fig. 4.7(f)).

In the experiments above, we have demonstrated the high efficiency of the con-

data. Compared with the other two algorithms, namely median filtering and the non-

preserving the edge information. The main reason, we believe, is the high efficiency


Figure 4.6: A confocal imaging volume with selected region of interest.

structed the 3D non-separable system by adding new filters into existing separable

systems. These new filters in fact correspond to new directions that the separable

system cannot deal with effectively. More precisely, these new filters correspond

to the main diagonals in 3D. To make it clearer, we have investigated the energy

distribution of the different subbands of the new 3D system on neuron imaging data.

As usual, for the new system, most of the energy (i.e., the l2-norm) is contained in the first filter: the low-pass filter F0. The energy contribution of the remaining 30 filters

is presented in Fig. 4.8. Most of the energy is captured by the detectors of first

and second order singularities. Notice that the subbands due to newly-added filters

(F27, F28, F29, and F30) have significant energy, even more than some of the existing filters.

This means that the considered data have energy in the main diagonal directions, and these


Figure 4.7: Results in selected region of the confocal imaging (Fig. 4.6). (a) Original

volume; (b) manually segmented result; (c) result due to median filtering; (d) result

due to anisotropic diffusion filtering; (e) result due to our algorithm; and (f) result

due to the 3D Haar wavelet.


Figure 4.8: Energy distribution in each subband, other than the low-pass filter, of the UH Lifted Spline Filterbank (UH-LSF) on the 3D neuron data of Fig. 4.3.
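The per-subband energy reported in Fig. 4.8 is the relative l2-energy of each set of frame coefficients; a minimal sketch, in which the toy coefficient arrays stand in for the actual subband outputs of the 31 filters F0-F30:

```python
import numpy as np

def subband_energy(coeffs):
    """Relative l2-energy of each subband: ||c_k||^2 / sum_j ||c_j||^2."""
    e = np.array([np.sum(np.abs(c) ** 2) for c in coeffs], dtype=float)
    return e / e.sum()

# toy coefficient arrays standing in for subband outputs; the first
# (low-pass-like) subband is given the largest scale, as in the text
rng = np.random.default_rng(0)
coeffs = [rng.standard_normal((4, 4, 4)) * s for s in (10.0, 2.0, 1.0, 0.5)]
frac = subband_energy(coeffs)
```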


4.2 Dendrite Detection

We have applied our method to both synthetic and real data. We created synthetic

data to: i) learn a generic tubular shape model, and ii) detect tubular structures

in unseen examples from synthetic and CT data. In both synthetic and real data,

parameter selection was performed with a grid search using three-fold cross-validation.
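A grid search with three-fold cross-validation of this kind can be sketched generically. The scorer, data, and parameter grid below are illustrative stand-ins, not the dissertation's code; in particular, the toy scorer ignores C and γ, which a real SVM fit would use.

```python
import numpy as np
from itertools import product

def three_fold_grid_search(X, y, grid, fit_score):
    """Return the (C, gamma) pair with the best mean score over 3 folds.
    `fit_score(Xtr, ytr, Xte, yte, C, gamma)` must return a test-fold score."""
    folds = np.array_split(np.arange(len(X)), 3)
    best_params, best_score = None, -np.inf
    for C, gamma in product(grid["C"], grid["gamma"]):
        scores = []
        for k in range(3):
            test_idx = folds[k]
            train_idx = np.concatenate([folds[j] for j in range(3) if j != k])
            scores.append(fit_score(X[train_idx], y[train_idx],
                                    X[test_idx], y[test_idx], C, gamma))
        mean = float(np.mean(scores))
        if mean > best_score:
            best_params, best_score = (C, gamma), mean
    return best_params, best_score

def toy_score(Xtr, ytr, Xte, yte, C, gamma):
    # placeholder model: nearest class centroid (ignores C and gamma)
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

X = np.vstack([np.zeros((9, 2)), np.ones((9, 2))])
y = np.array([0] * 9 + [1] * 9)
params, score = three_fold_grid_search(
    X, y, {"C": [1, 10, 100], "gamma": [0.1, 1, 10]}, toy_score)
```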

4.2.1 Validation

include: i) variation of intensity, ii) radius variation from 0.5 to 1.5 µm, iii) variety

of branching sections, and iv) high and low curvature segments. Voxel size was

Figure 4.9(b) depicts an unseen example to detect the centerline. Note that the

radius decreases gradually from the bottom (1.5 µm) to the top (0.5 µm). The

centerline with the estimated model is shown in Fig. 4.10(a), while Figure 4.10(b) depicts

γ = 1). Note the difference of these two models, especially at the bottom of the

structure.

umetric data to: i) learn a generic tubular shape model, and ii) predict a new


Figure 4.9: Synthetic tubular model constructed from cubic splines. (a) control

points and spline lines; and (b) volumetric representation.

The tubular model to predict was a neuron cell model from the

Duke-Southampton database [61]. The intensity distribution in the model was not

In both learning and prediction, the TFV vectors were computed by selecting a

sigma value of 0.5 µm. The resulting A and B values that estimate the probability

computing it in the entire volume. LN-confusion matrix components are the true

positive rate (TPR), the false positive rate (FPR), the true negative rate (TNR), false


tion in diameter. (a) Synthetic volume; note how the diameter "increases" by a factor of 2X from top to bottom; (b) prediction according to Sato's measure [189]; and (c),(d) prediction according to the model constructed in Fig. 4.9.


methods to enhance tubular structures using the LN geometric mean as a quality

metric. (b) Maximum intensity projection in the x, y axis of the synthetic data. Prob-

ability volume estimated from: (c) the synthetic spline model, (d) the S-measure,

and (e) the F-measure.


negative rate (FNR), and the geometric mean (GM). These last metrics are defined as follows: TPR is the proportion of correctly classified voxels that belong to the object of interest, FPR is the proportion of voxels that were incorrectly

classified as the object of interest, TNR refers to the proportion of background voxels

that were classified correctly, FNR is the proportion of object’s voxels that were

incorrectly classified as background, and the GM is given by GM = √(TPR · TNR).
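The confusion-matrix rates just defined can be computed directly from voxel counts; a small sketch, with invented counts:

```python
import math

def voxel_metrics(tp, fp, tn, fn):
    """Voxel-classification rates as defined in the text."""
    tpr = tp / (tp + fn)       # object voxels correctly detected
    fpr = fp / (fp + tn)       # background voxels wrongly labeled as object
    tnr = tn / (tn + fp)       # background voxels correctly rejected
    fnr = fn / (fn + tp)       # object voxels missed
    gm = math.sqrt(tpr * tnr)  # GM = sqrt(TPR * TNR)
    return tpr, fpr, tnr, fnr, gm

# invented counts for illustration
tpr, fpr, tnr, fnr, gm = voxel_metrics(tp=90, fp=5, tn=95, fn=10)
```

By construction TPR + FNR = 1 and FPR + TNR = 1, so the GM balances detection of the object against rejection of the background.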

We compare the performance of our algorithm with the Frangi et al. [71] measure

(F-measure) and the Sato et al. [189] measure (S-measure). The ground truth is

considered to be the binary volume of the synthetic cell. We then evaluate structure

values are depicted in Figure 4.11(a). The performance with respect to the LN metric

at the best GM value for each method is depicted in Table 4.4. Note that for

any possible probability threshold value our method preserves more structure than

the F-measure and the S-measure, respectively, and the best possible segmentation

percent.

data for two different cell types: spiny striatal and CA1 pyramidal neuron cells. Each


Table 4.4: Performance Evaluation

Method          TPR(%)  FPR(%)  TNR(%)  FNR(%)  GM(%)
SVM             99.18   0.543   99.45   0.81    99.32
Spline Model 2  51.06   3.04    99.95   48.9    70.36
Frangi          80.04   0.14    99.85   19.53   89.64
Sato            80.00   0.13    99.88   19.98   89.38

cell image was acquired from a different confocal microscope under different conditions, such as image resolution, dye concentration, and microscope optical parameters. In all cases we used the SVM library LIBSVM [42]; comparing results for different values of C and γ, we found that the best probability map was obtained with C = 100 and γ = 10. All experiments were performed

Figure 4.12(a) depicts a region of interest for one of the stacks after denoising.

Figure 4.12(b) depicts the probability map estimated from a statistical dendrite shape

model. The scale value of σ was 0.5 µm. Time to perform training was approximately

7 min. while the time to estimate the probability map was 45 min.
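LIBSVM's probability outputs are obtained by fitting a sigmoid to the SVM decision values (Platt scaling), which is presumably the role of the A and B parameters mentioned earlier; a minimal numpy sketch, with invented A and B values:

```python
import numpy as np

def platt_probability(decision_values, A, B):
    """Platt scaling: p(y=1 | f) = 1 / (1 + exp(A * f + B)),
    mapping SVM decision values f to class probabilities."""
    f = np.asarray(decision_values, dtype=float)
    return 1.0 / (1.0 + np.exp(A * f + B))

# invented A and B; A < 0 so larger decision values give higher probability
p = platt_probability([-2.0, 0.0, 2.0], A=-1.5, B=0.0)
```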

cell of Fig. 4.12(a). Figure 4.13(b) illustrates the probability map estimated from

our method while Fig. 4.13(c) illustrates the results of applying the S-measure and

Fig. 4.13(d) depicts the result of applying the F-measure. Notice how our method

can detect highly anisotropic dendrites and how it preserves the general dendrite


Figure 4.12: Results in typical stack for the CA1 pyramidal cell type. (a) Original

volume denoised with our FAST algorithm; and (b) the estimated probability map.

spectively.

The medium spiny striatal neuron cell is presented in Fig. 4.15. Figure 4.15(a)

depicts the denoised cell volume with our FAST denoising algorithm, Fig. 4.15(b)

illustrates the probability map estimated from a dendrite segment. Figure 4.15(c)

depicts the result of segmenting the cell’s volume from its estimated probability map,

(notice how spines are present in the segmented volume) and Fig. 4.15(d) illustrates


Figure 4.13: (a) Detail from the original data, (b) result of applying the estimated SVM model,

(c) after applying the S-measure, and (d) after applying the F-measure.


dendrite segment; (b) after Sato's measure; and (c) after our measure.

the morphological model estimated from the segmented volume. The value of σ

was 0.3 µm and the time to perform training and prediction was 5 and 35 min.

respectively.

Results from synthetic data suggest that when learning a tubular shape model,

tubular morphology is an important factor for tubular shape prediction. For ex-

ample, to classify tube-like structures with different diameters and shapes, one can

select a fixed scale and perform training from samples of structures with different

diameters (to some extent). Then the learning process takes into account different

shape properties for different diameters. The selection of IT should be decided by the user based on the specific application, for both training and prediction. Results

from both synthetic data and confocal data suggest that a machine learning approach

shapes.


Figure 4.15: Results for the spiny striatal neuron type. (a) Denoised volume with our

FAST algorithm, (b) probability map obtained by SVMs from the single dendrite

model, (c) segmentation from the probability map, and (d) cell reconstruction as

cylindrical models.


raphy datasets; and (a),(c) tubular structures shown using the model depicted in

Fig. 4.9.


4.3 Morphological Reconstruction

and quantitatively the morphological reconstructions from three human experts (E1,

E2, and E3), and one tracing obtained using the Auto Neuron (AN) module from

The cells of interest consist of a database (Figs. 4.17,4.18) of six neuron cells, five

from a confocal microscope. We categorize the quality of the data as: good (Cell A),

Comparison of the reconstructions reveals how well the different methods represent the

morphological reconstructions performed (from top to bottom: our method, AN, and

three human experts E1, E2 and E3), while Fig. 4.18 depicts a visual comparison

appears our method, the one obtained from AN, and three human experts).

among all reconstructions, we used a variant of Sholl analysis [175, 199, 231, 232, 65,

151, 184]. Global descriptors include: i) total dendritic length (Fig. 4.19(a)), ii) total


bottom, morphological reconstruction obtained by our method, the computer tracer

AN and human tracers E1, E2 and E3 respectively.

surface area, iii) diameter statistics per segment (Fig. 4.19(c)), and iv) length statistics

Table 4.5 presents the total dendritic length and total surface area. Among all the tracers, E3 reported the longest dendritic length, in contrast to the shortest one, reported by AN.

Our method reported dendritic lengths close to the median values. With respect to

surface area, our method reported the smallest surface area, and AN reported the

largest surface area. Table 4.6 lists statistics for the estimated dendritic diameter.
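Total dendritic length and surface area of the kind reported in Table 4.5 follow from a piecewise-cylindrical (frustum) model of the dendrites; a minimal sketch, assuming a simple node/parent/radius representation of the tree (the data layout and toy numbers are ours, not the dissertation's):

```python
import math

def total_length_and_surface(points, parent, radius):
    """Sum segment lengths and lateral frustum surface areas over a tree
    given per-node 3D coordinates, parent indices (-1 = root), and radii."""
    length = surface = 0.0
    for i, p in enumerate(parent):
        if p < 0:
            continue  # root node has no incoming segment
        h = math.dist(points[p], points[i])       # segment length
        r0, r1 = radius[p], radius[i]
        slant = math.hypot(h, r0 - r1)            # frustum slant height
        length += h
        surface += math.pi * (r0 + r1) * slant    # lateral frustum area
    return length, surface

# toy two-segment dendrite of constant radius 1
pts = [(0.0, 0.0, 0.0), (0.0, 0.0, 3.0), (0.0, 4.0, 3.0)]
par = [-1, 0, 1]
rad = [1.0, 1.0, 1.0]
L, S = total_length_and_surface(pts, par, rad)
```

With constant radii the frustum reduces to a cylinder, so the toy tree of total length 7 has lateral area 2π · 1 · 7.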


(a) Reconstruction cell B (b) Cell B from MP

and (b),(d),(f),(h),(j) maximum intensity projections of the denoised volumes.


The average minimum values ranged from 0.9 to 1.5 µm. The human tracers reported

diameters in the [0.1, 0.2] µm range, significantly below the optical resolution of the

imaging systems that were used to collect the data sets. AN reported the longest

average diameter, followed by the three human tracers and finally by our method.

Tables 4.7 and 4.8 present statistics for the dendritic lengths and distances from the soma.

tively the reconstruction performed by all the tracers in only that subtree. Table 4.5

(right column) presents the estimated total dendrite length and surface. The human

tracer H3 reported the maximum length of 468.52 µm while the minimum length

of 460.9. AN reported the maximum dendrite surface area, while the human tracer

H3 reported the minimum dendrite surface area. Table 4.9 depicts the results of


Figure 4.19: A variation of Sholl analysis as performance metrics. (a) Path from

soma; (b) dendrite length; and (c) dendrite diameter.

Table 4.5: Performance Evaluation - Total Dendrite Length and Surface Area

Cell A Cell B Cell F Subtree

Length Surface Length Surface Length Surface Length Surface

H1 6327 1430.22 4018 1437.65 4017.86 4017.86 415.76 1131.69

H2 6806 1530.25 3397 1103.44 3397.31 2530.81 456.28 1296.60

H3 7150 334.27 4144 961.54 4193.27 223.38 468.52 545.87

OR 7065 2209.57 6228 1794.20 3861.38 2089.54 460.90 1119.72

AN 5698 1374.36 5816 865.54 1607.61 891.120 409.84 1571.59

Table 4.6: Performance Evaluation - Diameter Statistics

Cell A Cell B Cell F

µ σ min max µ σ min max µ σ min max

H1 1.1 0.6 0.1 5.5 1.9 1.2 0.8 9.2 1.98 1.52 0.90 7.94

H2 1.1 0.7 0.5 7.2 2.1 2.1 0.5 15.2 2.29 1.93 0.46 12.42

H3 0.9 1.0 0.1 7.7 1.1 1.1 0.3 8.8 1.43 2.33 0.4 18.8

OR 1.5 0.9 0.5 6.8 0.7 0.5 0.3 4.7 3.17 0.87 1.84 6.74

AN 1.4 0.7 0.8 7.5 2.8 1.4 1.6 15.0 0.69 0.38 0.27 3.11


Table 4.7: Performance Evaluation - Length Statistics

Cell A Cell B Cell F

µ σ min max µ σ min max µ σ min max

H1 46.5 39.9 0.2 208.4 55.7 43.3 0.3 206 40.58 39.31 0.17 161.20

H2 40.3 35.5 1.1 158.7 57.8 56.6 0.2 427.6 50.70 47.26 1.17 232.16

H3 44.4 35.2 0.1 166.5 51.8 40.6 0.2 179.4 46.59 42.29 0.12 200.68

AN 39.8 32.2 0.3 178.0 39.0 35.7 0.4 168.4 30.91 32.05 2.3 140.01

OR 44.2 36.5 0.3 187.1 43.9 39.6 2.2 197.1 34.78 33.66 0.54 164.02

Table 4.8: Performance Evaluation - Path Distance from the Soma

Cell A Cell B Cell F

µ σ min max µ σ min max µ σ min max

H1 221.5 143.1 10.1 640.3 342.1 222.5 0.3 830.4 175.2 113.8 1.0 559.4

H2 212.8 132.9 1.0 637.9 257.9 187.0 1.1 882.0 161.9 106.1 1.2 563.2

H3 218.4 133.8 0.0 626.0 305.2 193.1 0.2 736.9 184.2 101.1 0.0 543.6

AN 219.3 131.5 2.1 628.8 262.3 186.0 0.8 796.5 193.6 97.3 35.1 444.7

OR 260.8 172.0 0.3 753.9 284.4 218.9 8.9 776.5 230.2 139.4 23.1 614.4

Table 4.9: Performance Evaluation - Subtree Statistics

Path from Soma Length Diameter

µ σ min max µ σ min max µ σ min max

H1 86.35 67.20 0 197.54 59.39 52.87 5.0 161.8 2.39 0.12 2.18 2.48

H2 94.36 71.64 0 207.36 65.18 55.58 0.53 167.79 1.54 0.27 1.22 1.96

H3 92.6 70.72 0 203.89 66.93 53.36 2.47 164.14 1.17 0.49 0.64 2.0

AN 81.21 59.74 0 199.78 45.53 45.12 1.61 139.45 2.29 0.23 2.03 2.84

OR 98.53 79.86 0 230.25 65.84 64.14 0.98 187.68 1.06 0.21 0.85 1.48


(a) Cell A - ED (b) Cell B - ED (c) Cell F - ED

Euclidean Distance (ED) from the soma; (d)-(f), number of branches as a function of the Geodesic Distance (GD) of the branching point to the soma.
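Branch counts as a function of distance from the soma, as in the figure above, can be histogrammed directly; a minimal sketch over Euclidean distance (the coordinates, soma position, and bin width are illustrative; a geodesic variant would use path length along the tree instead):

```python
import math

def branches_vs_distance(branch_points, soma, bin_width):
    """Count branch points per distance shell from the soma
    (Sholl-style histogram over Euclidean distance)."""
    counts = {}
    for bp in branch_points:
        shell = int(math.dist(bp, soma) // bin_width)
        counts[shell] = counts.get(shell, 0) + 1
    return counts

soma = (0.0, 0.0, 0.0)
bps = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 7.0, 0.0), (0.0, 0.0, 12.0)]
hist = branches_vs_distance(bps, soma, bin_width=5.0)
```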

Local descriptors include [199, 226] the number of branching points as a function of: i) the Euclidean Distance (ED), ii) the Geodesic Distance (GD) to the soma, and the distribution of iii) Dendrite Lengths (DL) and iv) Diameters (D). Figure 4.22

presents a qualitative comparison with respect to the metrics ED, GD and D among

Figure 4.25 depicts a visual comparison of the reconstruction obtained with our

method (bottom) and the minimum intensity projection (top) along the x − y axis

of the denoised volume. In the middle, Region A depicts a detail in the soma region,

while Region B depicts a detail of some typical dendrites of average diameter. Fig-

ure 4.26 depicts a comparison of the volumetric rendering (green) overlaid with our


(a) Cell A - DL (b) Cell B - DL (c) Cell F - DL

Figure 4.22: Sholl analysis. (a)-(c), Distribution of Dendritic Length (DL), and

(d)-(f), distribution of the Diameter (D).

Euclidean Distance (ED) from the soma, (b), number of branches as a function of the Geodesic Distance (GD) of the branching point to the soma, (c), distribution of

Dendritic Length (DL), and (d), distribution of the Diameter (D).


(a) OR vs E1 (b) OR vs E2 (c) OR vs E3 (d) OR vs AN

Figure 4.24: Quantitative analysis of diameter estimation for Cell A. Diameter esti-

mated by OR compared with respect to: (a) H1, (b) H2, (c) H3, (d) AN.

reconstruction.

standard branches. It is interesting to note that the largest discrepancy is seen for the "good" neuron, where we would expect the smallest variance due to good dye-filling

From quantitative global descriptors, all the tracers reported longer dendritic

lengths given higher quality data sets due to the larger signals in the distal and thin

processes. Tabulation of the total dendritic lengths across all five tracers (human and computer) reveals total spreads of 1451, 772, and 747 µm for the "good", "average",

The AN tracer consistently reported the smallest dendritic length, perhaps re-

flecting the fact that it does not automatically reconnect discontiguous branches.

Our method (OR) reported total dendritic lengths near the median values. One

human tracer, H3, reported the longest total lengths on two of the three data sets.

Another human tracer, H1, reported the median dendritic lengths for all three data


Figure 4.25: Comparison of the minimum intensity projection and the morphological

model. Top: Minimum intensity projection of the denoised volume corresponding to

the cell A, middle: reconstruction (white color) overlaid with the minimum intensity

projection in two regions of interest, and bottom: morphological reconstruction of

cell A.


Figure 4.26: Comparison of the volumetric (overlayed in green) data and the mor-

phological model.

sets. In contrast, another human operator, H2, reported the longest dendritic length

on one data set and the shortest on another. Since each tracing was performed over

several hours, possibly spanning multiple days, and each data set was addressed over

the span of several weeks, this highlights the variability inherent in manual, human tracings.

For the “good” data set, all tracers reported an average dendritic length that

was within a 6.7 µm range (close agreement). The average dendritic lengths for the

“average” and “poor” data sets were spread over a larger range (18.8 and 16.9 µm,

Total surface area highlights differences that are due to total dendritic length


and diameter estimation. Since our method (OR) systematically estimates smaller diameters, OR generates models with smaller surface areas despite consistently generating the longest total dendritic length. Likewise, because it consistently overestimated diameters, AutoNeuron reports surface areas that are in the upper end of

the distribution.

tion. With respect to the ED (Figs. (a)-(c)) all the tracers had similar performance

obtained with respect to GD measure, depicted in Figs. (d)-(f). These results sug-

gest that the computer based OR and AN tracers, detected the largest number of

branch points across the different data sets as compared with the humans tracers H1,

H2 and H3. Similar results were obtained when computing dendritic length (DL)

(Figs. (a)-(c)) and dendritic diameter (Figs. (d)-(f)), obtaining relative consistency

Figure 4.23 presents a comparative analysis performed in the subtree (Fig. 4.20), and Fig. 4.24 presents a direct comparison among all tracers, with respect to our method, of the estimated diameter of each dendrite segment (dendrite-wise). Clustering along the line with slope one indicates close agreement of the corresponding diameters, while clusters away from the diagonal indicate disagreement. This comparison allows us to assess how each method over- or under-estimates the diameters in

Figures 4.27(a) and 4.27(b) depict the extracted centerline (white line) and the


ground truth centerline (brown line). We have created 20 synthetic phantoms with a

variety of tubular structures and computed the distance from the extracted center-

lines to the ground truth. The maximum and average distances from the extracted

centerlines using our method were 0.87 µm and 0.57 µm, while the maximum and

average distances from the extracted centerlines using Sato’s method were 0.94 µm


Fig. 4.10(a). (a),(b) 3D and x − y projections of the extracted centerline (brown line),

overlaid with the true centerline (white line).
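The maximum and average centerline distances reported above reduce to nearest-point distances between two sampled curves; a brute-force sketch, with toy sample points and offset:

```python
import math

def centerline_distances(extracted, truth):
    """For each extracted centerline point, distance to the nearest
    ground-truth point; return (max, mean) over the extracted curve."""
    dists = [min(math.dist(p, q) for q in truth) for p in extracted]
    return max(dists), sum(dists) / len(dists)

# toy curves: extracted centerline offset 0.3 um from the true one
truth = [(0.0, 0.0, float(z)) for z in range(10)]
extracted = [(0.3, 0.0, float(z)) for z in range(10)]
dmax, dmean = centerline_distances(extracted, truth)
```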

Angiography (CTA) data to extract the coronary arteries of a human heart. In this

case, a seed point was manually selected, and the front propagation was based only


CTA data depicted in Fig. 4.16(d).


Chapter 5

Conclusion

In this chapter we present possible future research directions based on our work, as well as our conclusions.

5.1.1 Denoising

There are two areas in which further research can be explored. The first possible

area refers to how to suppress the noise from the Frame coefficients, and this could be

done by considering: i) a dual space for the noise distribution (e.g., approximating it as a mixture of Gaussians); and ii) integrating prior knowledge from the specific application


(e.g., estimating the posterior probability of an element being noise).

The second possible area refers to designing Parseval frames for texture feature

These textural features can be used to detect tubular-like structures (for example

learn and predict generic tubular shape models. Future directions in this area include

feature vectors (e.g., to determine the optimal number of feature vectors needed to

enhances tubular objects according to shape and texture. This approach has the

cylindrical lengths and diameters. Based on these parameters, algorithms for spines

greatly enhances spines structures in the dendrites, then by combining the cylindrical


description (already estimated) and the probability volume (spines enhanced) then

5.2 Conclusion

This dissertation has presented a general framework for automatic three-dimensional

method for enhancing regular and tubular structures without assuming a particular

tubular shape; and iii) an automatic algorithm for the three-dimensional reconstruction

of neuron cells.


Bibliography

versity: Duke/Southampton archive of neuronal morphology.

[2] Deconvolution recipes. Help manual for Huygens Software™, 2003.

and S. Kalyanaraman. Automatic selection of parameters for vessel/neurite

segmentation algorithms. IEEE Transactions on Image Processing, 14(5):1338–

1350, September 2005.

[4] G. Agam, S. Armato III, and C. Wu. Vessel tree reconstruction in thoracic

CT scans with application to nodule detection. IEEE Transactions on Medical

Imaging, 24(4):486–499, 2005.

thoracic CT scans. In Proc. IEEE Conf. on Computer Vision and Pattern

Recognition, volume 2, pages 649–654, Washington, DC, USA, June 2005.

[6] Y. Ai and J. Jaffe. Design and preliminary tests of a family of adaptive wave-

forms to measure blood vessel diameter and wall thickness. IEEE Trans. Ul-

trasonics, Ferroelectrics, and Freq. Control, 52(2):250–260, Feb. 2005.

three-dimensional tracing of neurons from confocal image stacks. IEEE Trans.

Information Technology in Biomedicine, 6(2):171–187, June 2002.

[8] L. Ambrosio and H. M. Soner. Level set approach to mean curvature flow in

arbitrary codimension. J. Diff. Geom., 43:693–737, 1996.

umes in path planning applications. In Proc. IEEE International Conference

on Robotics and Automation, pages 2290–2295, San Fransisco, CA, April 2000.


[10] L. Antiga, B. Ene-Iordache, and A. Remuzzi. Computational geometry for

patient-specific reconstruction and meshing of blood vessels from MR and

CT angiography. IEEE Transactions on Medical Imaging, 22(5):674–684, May

2003.

ical Record, 257(6):195–207, 1999.

[12] G. Ascoli. Mobilizing the base of neuroscience data: the case of neuronal

morphologies. Nature Rev. Neurosci., 7:318–324, 2006.

connectivity in neural network models of the rat hippocampus. Biosystems,

79:173–181, 2005.

analysis, smooth frames and denoising in Fourier space. pages 153–160, 2004.

[15] S. Aylward and E. Bullitt. Initialization, noise, singularities, and scale in height

ridge traversal for tubular object centerline extraction. IEEE Transactions on

Medical Imaging, 21(2):61–75, 2002.

[16] W. Bai, X. Zhou, L. Ji, J. Cheng, and S. Wong. Automatic dendritic spine anal-

ysis in two-photon laser scanning microscopy images. Cytometry A, (71A):818–

826, July 2007.

value of frame coefficients. In SPIE Wavelets Applications in Signal and Image

Processing XI, volume 5914, pages 355–362, Jan. 2005.

smoothing and the nonlinear Diffusion Equation. IEEE Transactions on Pat-

tern Analysis and Machine Intelligence, 24(6):844–847, Jun. 2002.

ciu. Hierarchical learning of curves: Application to guidewire localization in

fluoroscopy. In IEEE Proc. Computer Vision and Pattern Recognition, pages

1–8, Minneapolis, MN, June 2007.

of 3D flexible tubes: Application to CT colonoscopy. In Proc. Medical Im-

age Computing and Computer-Assisted Intervention, volume 2, pages 462–470,

Copenhagen, Denmark, Sep 2006.


[21] C. Beaman-Hall, J. Leahy, S. Benmansour, and M. Vallano. Glia modulate

NMDA-mediated signaling in primary cultures of cerebellar granule cells. J.

Neurochem., 71:1993–2005, 1998.

in 3-D medical images by parametric object detection and tracking. IEEE

Transactions on Systems, Man, and Cybernetics, 33(4):554–561, 2003.

[23] R. Bellman and R. Kalaba. Dynamic Programming and modern control theory.

London mathematical society monographs, London, 1965.

and species differences in dendritic spine morphology. J Neurocytol., 31(3-

5):337–346, 2002.

[25] J. J. Benedetto and S. Li. The theory of multiresolution analysis frames and

applications to filter banks. Appl. Comp. Harm. Anal., 5:389–427, 1998.

critical kernels. In International Workshop on Combinatorial Image Analysis,

pages 45–59, Berlin, Germany, June 2006.

of wavelet shrinkage estimators for Poisson counts. International Statistical

Review, 72:209–237, 2004.

ton algorithm. IEEE Transactions on Visualization and Computer Graphics,

7(3):195–206, July-September 2001.

confocal microscopy: improving the limits of deconvolution with application

to the visualization of the mammalian hearing organ. Biophysical Journal,

80:2455–2470, May 2005.

confocal microscopy: improving the limits of deconvolution, with application

to the visualization of the mammalian hearing organ. Journal of Biophysics,

80(5):2455–70, 2001.

[31] W. F. Bronsvoort. Direct Display Algorithms for Solid Modelling, page 79.

Delft University Press, 1990.


[32] P. Broser, R. Schulte, A. Roth, F. Helmchen, S. Lang, G. Wittum, and B. Sak-

mann. Nonlinear anisotropic diffusion filtering of three-dimensional image data

from two-photon microscopy. J. Biomedical Optics, 9(6):1253–1264, November

2004.

mann. Nonlinear anisotropic diffusion filtering of three-dimensional image data

from two-photon microscopy. J Biomedical Optics, 9(6):1253–1264, 2004.

activation of the small GTPase Rab5 drives the removal of synaptic AMPA

receptors during hippocampal LTD. Neuron, 45:81–94, 2005.

[35] R. H. Byrd, P. Lu, and J. Nocedal. A limited memory algorithm for bound con-

strained optimization. SIAM Journal on Scientific and Statistical Computing,

16:1190–1208, 1995.

matrices and their use in limited memory methods. Mathematical Program-

ming, 63:129–156, 1994.

cuts in retinal images. In Proc. Medical Image Computing and Computer-

Assisted Intervention, volume 2, pages 928–936, Copenhagen, Denmark, Sep

2006.

representation for objects with edges. In L. L. S. A. Cohen, C. Rabut, editor,

Curve and Surface Fitting. Vanderbilt University Press, Nashville, 1999.

tion. IEEE Transactions on Image Processing, 14(11):1773–1782,

2005.

analysis of three classes of rat hippocampal neurons. J. Neurophysiol, 78:703–

720, 1997.

2000.

[42] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines,

2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.


[43] G. Chen and B. Kegl. Image denoising with complex ridgelets. Pattern Recog-

nition, 40(2):578–585, 2007.

[44] S. Chen, J. Carroll, and J. Messenger. Quantitative analysis of reconstructed

3-D coronary arterial tree and intracoronary devices. IEEE Transactions on

Medical Imaging, 21(7):724–740, 2002.

[45] J. Cheng, X. Zhou, E. Miller, R. Witt, J. Zhu, B. Sabatini, and S. Wong.

A novel computational approach for automatic dendrite spines detection in

two-photon laser scan microscopy. J Neurosci Methods, 165(1):122–134, 2007.

[46] D. Chung and G. Sapiro. Segmentation-free skeletonization of gray-scale images

via PDE’s. In Proc. International Conference on Image Processing, 2000.

[47] C. Cortes and V. Vapnik. Support-vector networks. Mach. Learn., 20:237–297,

1995.

[48] M. Couprie, D. Coeurjolly, and R. Zrour. Discrete bisector function and eu-

clidean skeleton in 2D and 3D. Image and Vision Computing, 25(10):1543–

1556, 2007.

[49] H. Cuntz, A. Borst, and I. Segev. Optimization principles of dendritic structure.

Theoretical Biology and Medical Modelling, 4(21):1–8, 2007.

[50] M. de Bruijne, B. van Ginneken, M. Viergever, and W. Niessen. Adapting

active shape models for 3D segmentation of tubular structures in medical im-

ages. In Proc. Information Processing in Medical Imaging, volume 2732, pages

136–147, 2003.

[51] A. R. Depierro. A modified expectation maximization algorithm for penalized

likelihood estimation in emission tomography. IEEE Transactions on Medical

Image Processing, 14:132–137, Jan. 1995.

[52] T. Deschamps and L. Cohen. Fast extraction of minimal paths in 3D images

and applications to virtual endoscopy. IEEE Transactions on Medical Image

Analysis, 5(4):281–299, Dec 2001.

[53] M. Descoteaux, M. Audette, K. Chinzei, and K. Siddiqi. Bone enhancement

filtering: application to sinus bone segmentation and simulation of pituitary

surgery. In Proc. Medical Image Computing and Computer-Assisted Interven-

tion, pages 9–16, Palm Springs, CA, Oct 2005.

[54] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection

algorithm. IEEE Trans Signal Processing, 53(8):2961–2974, August 2005.


[55] T. Dey, J. Giesen, and S. Goswami. Shape segmentation and matching with

flow discretization. In Proc. Workshop on Algorithms Data Structures, LNCS

2748, pages 25–36, 2003.

tion of 3D confocal microscope scans of neuronal cells denoised by 3D-wavelet

shrinkage. In Wavelet Applications VI-Proceedings of the SPIE, volume 3723,

pages 446–457, 1999.

tonization of neurons from confocal microscopy images based on the 3-D

wavelet transform. IEEE Transactions on Image Processing, 11(7):790–801,

Jul 2002.

[58] P. Dimitrov, J. Damon, and K. Siddiqi. Flux invariants for shape. In IEEE

Conf. Computer Vision Pattern Recognition, volume 1, pages 835–841, Jun

2003.

covariance and signal estimation with macrotiles. IEEE Transactions on Signal

Processing, 51(3):614–627, 2003.

and spectra from indirect and noisy data. In I. Daubechies, editor, Symposia

in Applied Mathematics: Different Perspectives on Wavelets, pages 173–205.

American Mathematical Society, 1993.

tive probabilistic model of blood vessels for segmenting mra images. In Proc.

Medical Image Computing and Computer-Assisted Intervention, volume 2,

pages 799–806, Copenhagen, Denmark, Sep 2006.

[63] Y. C. Eldar and G. D. Forney. Optimal tight frames and quantum measure-

ment. IEEE Trans. Inform. Theory, 48(3):599–610, Mar. 2002.

roanatomy: precise automatic geometric reconstruction of neuronal morphol-

ogy from confocal image stacks. Journal of Neurophysiology, 93:2331–2342,

2005.


[65] E. Famiglietti. New metrics for analysis of dendritic branching patterns demon-

strating similarities and differences in ON and ON-OFF directionally selective

retinal ganglion cells. J. Comparative Neurology, 324:295–321, 1992.

[66] A. Ferreira and S. Ubéda. Computing the medial axis transform in parallel with

eight scan operations. IEEE Transactions on Pattern Analysis and Machine

Intelligence, 21(3):277–282, March 1999.

tion using space-alternating generalized EM algorithms. IEEE Transactions on

Medical Imaging, 4:1417–1429, Oct. 1995.

[68] J. Fiala. Reconstruct: A free editor for serial section microscopy. J. Microscopy,

218(1):52–61, April 2005.

[69] F. Fleuret and P. Fua. Dendrite tracking in microscopic images using minimum

spanning trees and localized E-M. Technical Report EPFL/CVLAB2006.02,

EPFL, March 2006.

sequential Monte Carlo and on-line learning for vessel segmentation. In Proc.

European Conference on Computer Vision, number 3, pages 476–489, Graz,

Austria, 2006.

hancement filtering. In A. Colchester and S. Delp, editors, Proc. First Medical

Image Computing and Computer Assisted Intervention, volume 1496, pages

130–137, Cambridge, MA, Oct 1998. Springer Verlag.

[73] R. Gayle, P. Segars, M. C. Lin, and D. Manocha. Path planning for deformable

robots in complex environments. In Proceedings of Robotics: Science and Sys-

tems, Cambridge, USA, June 2005.

[74] S. Ge, X. Lai, and A. Mamun. Boundary following and globally convergent

path planning using instant goals. IEEE Transactions on Systems, Man, and

Cybernetics, Part B: Cybernetics, 35:240–254, April 2005.

[75] J. Glaser and E. Glaser. Neuron imaging with Neurolucida–a PC-based system

for image combining microscopy. Computerized Medical Imaging and Graphics,

14(5):307–317, 1990.


[76] P. J. Green. Bayesian reconstruction from emission tomography data using a

modified EM algorithm. IEEE Trans. Med. Imag., 9:84–93, Jan. 1990.

line extraction algorithms in quantitative coronary angiography. IEEE Trans-

actions on Medical Imaging, 20(9):928–941, September 2001.

[78] X. Gu, D. Yu, and L. Zhang. Image thinning using pulse coupled neural

network. Pattern Recognition Letters, 25(9):1075–1084, July 2004.

ing of neurites in fluorescence microscopy images. In Proc. IEEE International

Symposium on Biomedical Imaging: Macro to Nano, pages 534–537, Arlington,

Virginia, USA, April 2006.

struction of microtubules in total internal reflection fluorescence microscopy

(TIRFM). In S. Berlag, editor, Proc. 8th Medical Image Computing and

Computer-Assisted Intervention (MICCAI), Lecture Notes in Computer Sci-

ence, pages 761–769, Palm Springs, CA, October 2005.

dendritic spines of the granule cell in the rat dentate gyrus with HVEM stereo

images. J Electron Microsc Tech., 12(2):80–87, 1989.

active contour model. IEEE Transactions on Medical Imaging, 10(6):865–873,

Jun 2001.

beam backprojection reconstruction for robust skeletonization of 3D vascular

trees. In IEEE Nuclear Science Symposium Conference Record, volume 2, pages

1003–1007, 2002.

spines and synapses in rat hippocampus (CA1) at postnatal day 15 and adult

ages: implications for the maturation of synaptic physiology and long-term

potentiation. J Neurosci, 12(7):2685–2705, 1992.

based robust robotic navigation. Image and Vision Computing,

doi:10.1016/j.imavis.2007.03.005, 2007.


[86] M. S. Hassouna, A. Farag, and R. Falk. Differential fly-throughs (DFT): A

general framework for computing flight paths. In Proc. Medical Image Com-

puting and Computer-Assisted Intervention, volume 1, pages 654–661, Palm

Springs, CA, 2005.

[87] W. He. Adaptive algorithms for skeletonizing 3-D noisy binary images: Ap-

plications to neurobiology. Master’s thesis, Rensselaer Polytechnic Institute,

Troy, NY., 1998.

Turner, and B. Roysam. Automated Three-Dimensional Tracing of Neurons in

Confocal and Brightfield Images. Microscopy and Microanalysis, 9(4):296–310,

August 2003.

struction from Poisson data using Gibbs priors. IEEE Transactions on Medical

Imaging, 8:194–202, 1989.

Restoration of three-dimensional quasi-binary images from confocal microscopy

and its application to dendritic trees. In SPIE, Three-Dimensional Microscopy:

Image Acquisition and Processing IV, pages 146–157, 1997.

[91] M. Hines and N. Carnevale. NEURON: a tool for neuroscientists. The Neuro-

scientist, 7:123–135, 2001.

of signal propagation in dendrites of hippocampal pyramidal neurons. Nature,

387:869–875, 1997.

[93] A. Holmes, K. Weedmark, and G. Gloor. Mutations in the extra sex combs

and enhancer of polycomb genes increase homologous recombination in somatic

cells of drosophila melanogaster. Genetics, 172(4):2367–2377, 2006.

maximum-likelihood approach. Journal of the Optical Society of America,

9(2):1052–1061, 1992.

thin structures in volumetric medical images. In Proc. Medical Image Comput-

ing and Computer-Assisted Intervention, volume 2, pages 562–569, Montréal,

Canada, Nov 2003.


[96] T. Hoogland and P. Saggau. Facilitation of L-type Ca2+ channels in dendritic spines by activation of β2-adrenergic receptors. Journal of Neuroscience,

24(39):8416–8427, 2004.

[97] L. Huang, G. Wan, and C. Liu. An improved parallel thinning algorithm.

In Seventh International Conference on Document Analysis and Recognition,

volume 2, pages 780–783, Los Alamitos, CA, USA, 2003.

[98] T. Huysmans, J. Sijbers, and B. Verdonk. Statistical shape models for tubular

objects. In The second annual IEEE BENELUX/DSP Valley Signal Processing

Symposium, Metropolis, Belgium, March 2006.

[99] L. Ibanez, W. Schroeder, L. Ng, and J. Cates. The ITK Software Guide.

Kitware Inc, 2004.

[100] C. Ingrassia, P. Windyga, and M. Shah. Segmentation and tracking of coronary

arteries. In Proceedings of the First Joint BMES/EMBS Conference, volume 1,

page 203, 1999.


[101] I. Isgum, B. Ginneken, and M. Prokop. A pattern recognition approach to

automated coronary calcium scoring. In Proc. of the 17th International Pattern

Recognition, volume 3, pages 746–749, Aug. 2004.

[102] P. A. Jansson. Deconvolution of Images and Spectra. Academic Press, second

edition, 1997.

[103] J. Jeong-Won, K. Tae-Seong, D. Shin, S. Do, M. Singh, and V. Marmarelis. Soft

tissue differentiation using multiband signatures of high resolution ultrasonic

transmission tomography. IEEE Transactions on Medical Imaging, 24(3):399–

408, March 2005.

[104] X. Ji and J. Feng. A new approach to thinning based on time-reversed heat

conduction model. In International Conference on Image Processing, volume 1,

pages 653–636, 2004.

[105] L. Jianfei, Z. Xiaopeng, and F. Blaise. Distance contained centerline for virtual

endoscopy. In Proc. IEEE International Symposium on Biomedical Imaging:

Macro to Nano, pages 261–264, 2004.

[106] H. Jiang and N. Alperin. A new automatic skeletonization algorithm for 3D

vascular volumes. In Proc. IEEE Engineering in Medicine and Biology Society,

pages 1565–1568, September 2004.


[107] M. Jiang, Q. Ji, and B. McEwen. Model-based automated extraction of mi-

crotubules from electron tomography volume. Transactions on Information

Technology in Biomedicine, 10(3):608–616, July 2006.

particles in live cells via machine learning. Cytometry Part A, 71A(8):563–575,

2007.

C. M. Colbert. Orion: Automated reconstruction of neuronal morphologies

from image stacks. In Proc. 24th Annual Houston Conference on Biomedical

Engineering Research (HSEMB), page 275, Houston, February 2007.

[110] D. G. Kang and J. B. Ra. A new path planning algorithm for maximizing vis-

ibility in computed tomography colonography. IEEE Transactions on Medical

Imaging, 24(8):957–968, 2005.

International Journal of Computer Vision, 1(4):321–331, 1988.

Carlo in practice: A roundtable discussion. The American Statistician,

52(2):93–100, May 1998.

[113] B. Kegl and A. Krzyzak. Piecewise linear skeletonization using principal curves.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(1):59–74,

2002.

[114] C. Kirbas and F. Quek. A review of vessel extraction techniques and algorithms.

ACM Computing Surveys, 36(2):81–121, 2004.

image analysis algorithm for dendritic spines. Neural Computation, 14:1283–

1310, 2002.

[116] Y. Y. Koh. Automated recognition algorithms for neural studies. PhD thesis,

State University of New York at Stony Brook, 2001.

Statist. Ass., 94:920–933, 1999.


[118] I. Konstantinidis, A. Santamarı́a-Pang, and I. A. Kakadiaris. Frames-based

denoising in 3D confocal microscopy imaging. In Proc. 27th Annual Interna-

tional Conference of the IEEE Engineering in Medicine and Biology Society,

Shanghai, China, Sept. 2005.

[119] E. Korkotian and M. Segal. Structure-function relations in dendritic spines: is

size important? Hippocampus, 10(5):587–596, 2000.

[120] K. Krissian. Flux-based anisotropic diffusion applied to enhancement of 3D

angiograms. IEEE Transactions on Medical Imaging, 21(11), Nov. 2002.

[121] K. Krissian, G. Malandain, N. Ayache, R. Vaillant, and Y. Trousset. Model

based detection of tubular structures in 3D images. Computer Vision and

Image Understanding, 80(2):130–171, 2000.

[122] A. Kuijper and O. Olsen. Geometric skeletonization using the symmetry set.

In Proc. IEEE International Conference on Image Processing, volume 1, pages

497–500, Sep 2005.

[123] M. W. Law and A. C. Chung. Weighted local variance-based edge detection and

its application to vascular segmentation in magnetic resonance angiography.

IEEE Transactions on Medical Imaging, 26(9):1224–1241, 2007.

[124] W. Law and A. Chung. Segmentation of vessels using weighted local variances

and an active contour model. In Proc. IEEE Conf. on Computer Vision and

Pattern Recognition, page 83, Jun 2006.

[125] J. Leandro, J. Soares, R. Cesar, and H. Jelinek. Blood vessels segmentation

in nonmydriatic images using wavelets and statistical classifiers. In Proc. XVI

Brazilian Symposium on Computer Graphics and Image Processing, pages 262–

269, Oct 2003.

[126] J. Lee, P. Beighley, E. Ritman, and N. Smith. In press: Automatic segmen-

tation of 3D micro-CT coronary vascular images. Medical Image Analysis,

doi:10.1016/j.media.2007.06.012, 2007.

[127] T. C. Lee, R. L. Kashyap, and C. N. Chu. Building skeleton models via

3D medial surface/axis thinning algorithms. CVGIP: Graph. Models Image

Process., 56(6):462–478, 1994.

[128] K. Lekadir and G. Yang. Carotid artery segmentation using an outlier im-

mune 3D active shape models framework. In Proc. Medical Image Computing

and Computer-Assisted Intervention, volume 1, pages 289–296, Copenhagen,

Denmark, Sep 2006.


[129] H. Li and A. Yezzi. Vessels as 4D curves: global minimal 4D paths to extract

3D tubular surfaces and centerlines. IEEE Transactions on Medical Imaging,

26(9):1213–1223, 2007.

[130] Q. Li, S. Sone, and K. Doi. Selective enhancement filters for vessels and airway

walls in two- and three-dimensional CT scans. Medical Physics, 30(8):2040–

2051, 2003.

[131] S. P. Liao, H. T. Lin, and C. Lin. A note on the decomposition methods for

support vector regression. Neural Computation, 14(6):1267–1281, Jun 2002.

skeletonization. In Proc. ACM Symposium on Solid and Physical Modeling,

pages 219–228, New York City, NY, July 2006.

construction algorithm. Computer Graphics, 21(4):163–169, July 1987.

line segmentation with automatic estimation of width, contrast and tangen-

tial direction in 2D and 3D medical images. In Proc. First Joint Conference

on Computer Vision, Virtual Reality and Robotics in Medicine and Medial

Robotics and Computer-Assisted Surgery, volume 1205, pages 233–244, 1997.

C. Westin. CURVES: curve evolution for vessel segmentation. Medical Image

Analysis, 5(3):195–206, 2001.

Signal Processing, 87(9):2085–2099, 2007.

traction for quantification of vessels in confocal microscopy images. In Proc.

IEEE International Symposium on Biomedical Imaging: Macro to Nano, pages

461–464, Washington, DC, 2002.

model-based vasculature detection in noisy biomedical images. IEEE Transac-

tions on Information Technology in Biomedicine, 8(3):360–376, 2004.

Science, 268(5216):1503–1506, 1995.


[140] R. Manniesing and W. Niessen. Local speed functions in level set based ves-

sel segmentation. In Proc. Medical Image Computing and Computer-Assisted

Intervention, number 1, pages 475–482, Saint-Malo, France, Sep 2004.

[141] H. Marquering, J. Dijkstra, P. D. Koning, B. Stoel, and J. Reiber. Towards

quantitative analysis of coronary CTA. The International Journal of Cardio-

vascular Imaging, 21(1):73–84, 2005.

[142] T. McGraw, B. Vemuri, Y. Chen, M. Rao, and T. Mareci. DT-MRI denoising

and neuronal fiber tracking. Medical Image Analysis, 8(2):95–111, June 2004.

[143] C. McIntosh and G. Hamarneh. Vessel crawlers: 3D Physically-based de-

formable organisms for vasculature segmentation and analysis. In Proc. Confer-

ence on Computer Vision and Pattern Recognition, volume 1, pages 1084–1091,

June 2006.

[144] B. W. Mel. Synaptic integration in an excitable dendritic tree. J Neurophysiol,

70:1086–110, 1993.

[145] A. M. Mendonca and A. Campilho. Segmentation of retinal blood vessels by

combining the detection of centerlines and morphological reconstruction. IEEE

Transactions on Medical Imaging, 25(9):1200–1213, 2006.

[146] J. Mercer. Functions of positive and negative type and their connection with

the theory of integral equations. Philos. Trans. R. Soc. London, A-209:415–446,

1909.

[147] D. Metaxas. Physics-based Modeling of Non-rigid Objects for Vision and

Graphics. Ph.d. Thesis, Graduate Department of Computer Science, University

of Toronto, 1992.

[148] D. Metaxas and D. Terzopoulos. Shape and nonrigid motion estimation

through physics-based synthesis. IEEE Transactions on Pattern Analysis and

Machine Intelligence, 15(6):580–591, June 1993.

[149] MicroBrightfield Inc. http://www.microbrightfield.com/.

[150] M. Migliore, M. Ferrante, and G. Ascoli. Signal propagation in oblique den-

drites of CA1 pyramidal cells. J Neurophysiol, 94:4145–4155, 2005.

[151] A. Mizrahi, E. Ben-Ner, M. Katz, K. Kedem, J. Glusman, and F. Libersat.

Comparative analysis of dendritic architecture of identified neurons using the

Hausdorff distance metric. The Journal of Comparative Neurology, 233(3):415–

428, 2000.


[152] M. Moll and L. E. Kavraki. Path planning for variable resolution minimal-

energy curves of constant length. In Proc. IEEE International Conference on

Robotics and Automation, pages 2142–2147, Barcelona, Spain, April 2005.

selection of contour points. In Proc. International Conference on Information

Technology and Applications, volume 2, pages 644–649, Washington, DC., 2005.

Recognition, 39(6):1099–1109, 2006.

Temporal matching of dendritic spines in confocal microscopy images of neu-

ronal tissue sections. In Proc. International Workshop in Microscopic Image

Analysis and Applications in Biology, Copenhagen, Denmark, 2006.

applications to biomedical images. In 9th International Conference on Com-

puter Analysis of Images and Patterns, pages 256–263, London, UK, 2001.

Springer-Verlag.

detection of simple points in higher dimensions using cubical homology. IEEE

Transactions on Image Processing, 15(8):2462–2469, August 2006.

cylinder: a deformable model for object recovery. In Proc. IEEE Conf. on

Computer Vision and Pattern Recognition, pages 174–181, 1994.

[159] S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces.

Springer, 2002.

[160] S. Osher and J. Sethian. Fronts propagating with curvature dependent speed:

algorithms based on the Hamilton-Jacobi formulation. Journal of Computational Physics, 79:12–49, 1988.

of pulmonary airway tree structures. Computers in Biology and Medicine,

36(9):974–996, 2006.

filter: Photometrically weighted, discontinuity based edge detection. Journal

of Structural Biology, 160(1):93–102, 2007.


[163] N. Passat, C. Ronse, J. Baruthio, J. Armspach, and J. Foucher. Using water-

shed and multimodal data for vessel segmentation: Application to the superior

sagittal sinus. In Proc. Conference in Mathematical Morphology: 40 years on,

pages 419–428, April 2005.

[165] E. L. Pennec and S. Mallat. Sparse geometric image representation with Ban-

delets. IEEE Trans. on Image Processing, 14(4):423–438, Apr. 2005.

[166] P. Perona and J. Malik. Scale-space and edge detection using anisotropic

diffusion. IEEE Trans. Pattern Anal. Mach. Intell., 12(7):629–639, 1990.

influence zone algorithms for large images. IEEE Transactions on Image Pro-

cessing, 9(7):1185–1199, July 2000.

[168] J. Platt. Probabilistic outputs for support vector machines and comparison to

regularize likelihood methods. In Advances in Large Margin Classifiers, pages

61–74, 2000.

[169] P. Poirazi and B. Mel. Impact of active dendrites and structural plasticity on

the memory capacity of neural tissue. Neuron, 29(3):779–796, 2001.

science, 15(4):933–946, Aug 1985.

ologiques 3D en microscopie confocale par transformée en ondelettes complexes.

Research Report 5507, INRIA, France, Feb. 2005.

for evaluating cardiac wall motion in three-dimensions using bifurcation points

of the coronary arterial tree. Investigative Radiology, 18:47–57, 1983.

images through relational learning. In Proc. IEEE International Symposium

on Biomedical Imaging: Macro to Nano, volume 2, pages 1135–1138, April

2004.

[174] F. Quek, C. Kirbas, and X. Gong. Simulated wave propagation and traceback

in vascular extraction. In Proc. IEEE Medical Imaging and Augmented Reality,

pages 229–234, Hong Kong, Jun. 2001.


[175] W. Rall. Handbook of Physiology: The Nervous System, volume 1, chapter Core

conductor theory and cable properties of neurons, pages 39–98. Baltimore,

1977.

on Image Processing, 1:287–290, Oct 2004.

Proc. IEEE International Conference on Shape Modeling and Applications,

pages 179–188, Washington, DC, 2007.

Soc. Amer., 62:55–59, 1972.

rison, P. Hof, and S. Wearne. Automated reconstruction of three-dimensional

neuronal morphology from laser scanning microscopy images. Methods,

30(1):94–105, 2003.

tubular structures. In Proc. IEEE International Symposium on Biomedical

Imaging: Macro to Nano, pages 1160–1163, 2006.

[181] A. Ron and Z. Shen. Affine system in L2 (Rd ): the analysis of the analysis

operator. Journal of Functional Analysis, 148:408–447, 1997.

sets. In Joint EUROGRAPHICS - IEEE TCVG Symposium on Visualization,

pages 151–159, Aire-la-Ville, Switzerland, Switzerland, 2002.

using image analysis and a tilting disector. Journal of Neuroscience Methods,

60:11–21, 1995.

ogy in hippocampal pyramidal neurons: A hidden markov model. Hippocampus,

15(2):166–183, 2004.

diaris. Towards segmentation of irregular tubular structures in 3D confocal

microscope images. In Proc. MICCAI International Workshop in Microscopic

Image Analysis and Applications in Biology, pages 78–85, Copenhagen, Denmark, 2006.


[186] A. Santamarı́a-Pang, T. S. Bildea, I. Konstantinidis, and I. A. Kakadiaris.

Adaptive frames-based denoising of confocal microscopy data. In Proc. Eu-

ropean Conference on Computer Vision, pages 85–88, Toulouse, France, May

2006.

tomatic centerline extraction of irregular tubular structures using probabil-

ity volumes from multiphoton imaging. In Proc. Medical Image Computing

and Computer Assisted Intervention, number 2, pages 486–494, Brisbane, Australia, October 2007.

[188] P. Sarder and A. Nehorai. Deconvolution methods for 3-D fluorescence mi-

croscopy images. IEEE Signal Processing Magazine, 23(3):32–45, May 2006.

nis. 3-D multi-scale line filter for segmentation and visualization of curvilinear

structures in medical images. Medical Image Analysis, 2(2):143–168, 1998.

structure models from 3D microscopy data. In Proc. IEEE Conf. on Computer

Vision and Pattern Recognition, pages 1–8, 2007.

C. Schnörr. Spine detection and labeling using a parts-based graphical model.

In Proc. 20th International Conference on Information Processing in Medical

Imaging, LNCS 4584, pages 122–133, 2007.

for the computer-assisted 3-D reconstruction of neurons from confocal image

stacks. Neuroimage, 23(4):1283–1298, December 2004.

[193] D. Selle, B. Preim, A. Schenk, and H. Peitgen. Analysis of vasculature for liver

surgical planning. IEEE Transactions on Medical Imaging, 21(11):1344–1357,

November 2002.

[194] L. Sendur and I. Selesnick. Bivariate shrinkage with local variance estimation.

IEEE Signal Processing Letters, 9(12):438–441, December 2002.

[195] J. Sethian. Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechan-

ics, Computer Vision and Materials Sciences. Cambridge University Press,

1996.


[196] L. Shen, M. Papadakis, I. A. Kakadiaris, I. Konstantinidis, D. Kouri, and

D. Hoffman. Image denoising using a tight frame. IEEE Transactions on

Image Processing, 15(5):1254–1263, May 2006.


[199] D. Sholl. Dendritic organization in the neurons of the visual and motor cortices

of the cat. Journal of Anatomy, 87(4):387–406, 1953.

segmentation using the 2-D Gabor wavelet and supervised classification. IEEE

Transactions on Medical Imaging, 25(9):1214–1222, Sep 2006.

dah, and M. Chopp. 3-D quantification and visualization of vascular structures

from confocal microscopic images using skeletonization and voxel-coding. Com-

puters in Biology and Medicine, 35(9):791–813, 2005.

detection of blood vessels in X-ray angiograms. Pattern Recognition Letters,

2(6):107–112, 1987.

fornia, 1991.

image derivatives for feature measurement in curvilinear structures. Interna-

tional Journal of Computer Vision, 42(3):177–189, 2001.

differential geometry for the measurement of center line and diameter in 3D

curvilinear structures. In Proc. European Conference on Computer Vision,

volume 1, pages 856–870, Dublin, Ireland, 2000.

dimensional confocal images. Network: Comput. Neural Syst., 13:381–395,

July 2002.


[207] K. Svoboda. Do spines and dendrites distribute dye evenly? Trends in Neuro-

sciences, 27(8):445–446, 2004.

esis. Curr Opin Neurobiol, 16(1):95–101, 2006.

[209] S. Tan and L. C. Jiao. Ridgelet bi-frame. Appl. Comput. Harmon. Anal.,

20:391–402, 2006.

[210] A. Telea and J. van Wijk. An augmented fast marching method for computing

skeletons and centerlines. In Proc. Symposium on Data Visualisation, pages

251–ff, Aire-la-Ville, Switzerland, 2002. Eurographics Association.

[211] A. Telea and A. Vilanova. A robust level-set algorithm for centerline extraction.

In Joint EUROGRAPHICS - IEEE TCVG Symposium on Visualization, pages

185–194, Grenoble, France, 2003.

[212] D. Terzopoulos and D. Metaxas. Dynamic 3D models with local and global

deformations: Deformable superquadrics. IEEE Transactions on Pattern Anal-

ysis and Machine Intelligence, 13(7):703–714, 1991.

Poisson processes with application to Photon-limited imaging. IEEE Transac-

tions on Information Theory, 45(3):846–862, 1999.

[214] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In

Proceedings of the IEEE International Conference on Computer Vision, pages

839–846, Bombay, India, Jan 1998.

skeleton. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition,

volume 1, pages 828–834, 2003.

International Conference on Image Processing, volume 1, pages 337–340, Sep

2003.

Hamilton-Jacobi skeleton. IEEE Transactions on Image Processing, 15(4):877–

891, 2006.


[218] C. Toumoulin, C. Boldak, J. Dillenseger, J. Coatrieux, and Y. Rolland. Fast

detection and characterization of vessels in very large 3-D data sets using geo-

metrical moments. IEEE Transactions on Biomedical Engineering, 48(5):604–

606, May 2001.

[219] S. Tran and L. Shih. Efficient 3D binary image skeletonization. In Proc. IEEE

Computational Systems Bioinformatics Conference - Workshops, pages 364–

372, Aug 2005.

reconstruction of dendrite morphology from live neurons. In Proc. IEEE En-

gineering in Medicine and Biology Society, San Francisco, CA, Sep 2004.

cochlea using confocal microscopy. Audiology and Neuro-otology, 7(1):27–30,

2002.

images for easy interpretation. Micron, 32:363–370, 2001.

driven segmentation of intravascular ultrasound images. In Proc. MICCAI

Workshop in Computer Vision for Intravascular and Intracardiac Imaging,

Copenhagen, Denmark, October 2006.

image segmentation algorithms. IEEE Transactions on Pattern Analysis and

Machine Intelligence, 29(6):929–944, 2007.

bert, and I. Kakadiaris. Automatic reconstruction of dendrite morphology

from optical section stacks. In Springer-Verlag, editor, Proc. 2nd International

Workshop on Computer Vision Approaches to Medical Image Analysis, Graz,

Austria, May 2006.

[226] H. Uylings and J. van Pelt. Measures for quantifying dendritic arborizations.

Network: Computation in Neural Systems, 13(3):397–414, 2002.

mograms using a deformable model. Computer Methods and Programs in

Biomedicine, 73(3):233–247, 2004.


[228] C. Van-Bemmel, L. Spreeuwers, M. Viergever, and W. Niessen. Level-set-

based artery-vein separation in blood pool agent CE-MR angiograms. IEEE

Transactions on Medical Imaging, 22(10):1224–1234, 2003.

wavelet-based statistical parametric mapping. NeuroImage, 37(4):1205–1217,

2007.

Comparing maximum likelihood estimation and constrained Tikhonov-Miller

restoration. IEEE Eng. Med. Biol. Mag., 15:76–83, 1996.

[231] J. van Pelt and A. Schierwagen. Morphological analysis and modeling of neu-

ronal dendrites. Mathematical Biosciences, 188:147–155, March-April 2004.

[232] J. van Pelt and H. Uylings. Modeling the natural variability in the shape

of dendritic trees: Application to basal dendrites of small rat cortical layer 5

pyramidal neurons. Neurocomputing, 26-27:305–311, 1999.

Germany, 1995.

[234] Y. Vardi, L. A. Shepp, and L. Kaufman. A statistical model for Positron emis-

sion tomography. Journal of the American Statistical Association, 80(389):8–

20, 1985.

[235] A. Vasilevskiy and K. Siddiqi. Flux maximizing geometric flows. IEEE Trans-

actions on Pattern Analysis and Machine Intelligence, 24(12):1565–1578, 2002.

Monvel. Image adaptive point-spread function estimation and deconvolution

for in vivo confocal microscopy. Microsc Res Tech, 69(1):10–20, 2006.

[237] F. Von Wegner, M. Both, R. H. Fink, and O. Friedrich. Fast XYT imaging of

elementary calcium release events in muscle with multifocal multiphoton mi-

croscopy and wavelet denoising and detection. IEEE Transactions on Medical

Imaging, 26(7):925–934, 2007.

tion in MR images. In Proc. Medical Image Computing and Computer-Assisted

Intervention, volume 1, pages 283–290, 2004.


[239] M. Wan, Z. Liang, Q. Ke, L. Hong, I. Bitter, and A. Kaufman. Automatic

centerline extraction for virtual colonoscopy. IEEE Transactions on Medical

Imaging, 21(12):1450–1460, December 2002.

objects with radial basis functions. In Proc. Shape Modeling International,

pages 207–215, Los Alamitos, CA, USA, May 2003.

surface of a neuron in confocal microscopy. Medical and Biological Engineering

and Computing, 41(5):601–607, 2003.

[242] T. Wang and A. Basu. A note on a fully parallel 3D thinning algorithm and

its applications. Pattern Recognition Letters, 28(4):501–506, 2007.

of dendritic spines in 3-dimensional images. In DAGM-Symposium Bielefeld,

pages 160–167, 1995.

dendrites and spines with the objective of topologically correct segmentation.

In Proc. International Conference on Pattern Recognition, page 472, Washing-

ton, DC, 1996.

New techniques for imaging, digitization and analysis of three-dimensional neu-

ral morphology on multiple scales. Neuroscience, 136(3):661–680, 2005.

for multiscale morphometry of neuronal dendrites. Neural Computation,

16(7):1353–1383, July 2004.

sets using a new vessel segmentation approach. Journal of Digital Imaging,

19(3):249–257, Sep 2006.

struction. In IEEE International Symposium on Biomedical Imaging, pages

446–457, Arlington, VA, Apr 2004.

edges and surfaces in photon-limited medical imaging. IEEE Transactions on

Medical Imaging, 22(3):332–350, 2003.


[250] O. Wink, W. Niessen, and M. Viergever. Multiscale vessel tracking. IEEE

Transactions on Medical Imaging, 23(1):130–133, Jan 2004.

[251] P. Wiseman, F. Capani, J. Squier, and M. Martone. Counting dendritic spines

in brain tissue slices by image correlation spectroscopy analysis. Journal of

Microscroscopy, 205(2):177–186, 2002.

[252] W. Wong and A. Chung. In press: Probabilistic vessel axis tracing and its

application to vessel segmentation with stream surfaces and minimum cost

paths. Medical Image Analysis, doi:10.1016/j.media.2007.05.003, 2007.

[253] W. C. Wong and A. C. Chung. Augmented vessels for quantitative analysis of

vascular abnormalities and endovascular treatment planning. IEEE Transac-

tions on Medical Imaging, 25(6):665–684, 2006.

[254] S. Worz and K. Rohr. Limits on estimating the width of thin tubular struc-

tures in 3D images. In Proc. Medical Image Computing and Computer-Assisted

Intervention, volume 1, pages 215–222, Copenhagen, Denmark, Sep 2006.

[255] S. Worz and K. Rohr. Segmentation and quantification of human vessels using

a 3D cylindrical intensity model. IEEE Transactions on Image Processing,

16(8):1994–2004, 2007.

[256] Y. Xiang, A. C. Chung, and J. Ye. An active contour model for image seg-

mentation based on elastic interaction. Journal of Computational Physics,

219:455–476, May 2006.

[257] G. Xiong, X. Zhou, L. Ji, A. Degterev, and S. Wong. Automated labeling

of neurites in fluorescence microscopy images. In Proc. IEEE Symposium on

Biomedical Imaging: Nano to Macro, pages 534–537, Arlington, Virginia, USA,

April 2006.

[258] P. Yan and A. Kassim. Segmentation of vessels from mammograms using a

deformable model. Medical Image Analysis, 10(3):317–329, 2006.

[259] F. Yang, G. Holzapfel, C. Schulze-Bauer, R. Stollberger, D. Thedens,

L. Bolinger, A. Stolpen, and M. Sonka. Segmentation of wall and plaque

in in vitro vascular MR images. The International Journal of Cardiovascular

Imaging, 19:419–428, 2003.

[260] Y. Yang, A. Tannenbaum, and D. Giddens. Knowledge-based 3-D segmenta-

tion and reconstruction of coronary arteries using CT images. In Proc. IEEE

Engineering in Medicine and Biology Society, pages 1664–1666, San Francisco,

CA, 2004.


[261] P. Yim, J. Cebral, R. Mullick, H. Marcos, and P. Choyke. Vessel surface re-

construction with a tubular deformable model. IEEE Transactions on Medical

Imaging, 20(12):1411–1421, Dec 2001.

in magnetic resonance angiography. IEEE Transactions on Medical Imaging,

19(6):568–576, June 2000.

[263] X. You, B. Fang, and Y. Y. Tang. Wavelet-based approach for skeleton extrac-

tion. In Proc. IEEE Workshops on Application of Computer Vision, volume 1,

pages 228–233, Los Alamitos, CA, USA, 2005.

gray-scale images via anisotropic vector diffusion. In Proc. IEEE Conf Com-

puter Vision and Pattern Recognition, volume 1, pages 415–420, 2004.

Continuous medial representations for geometric object modeling in 2D and

3D. Image and Vision Computing, 21(1):17–27, 2003.

[266] R. Yuste and W. Denk. Dendritic spines as basic functional units of neuronal

integration. Nature, 375(6533):682–684, June 1995.

[267] G. Zeng, S. Birchfield, and C. Wells. Detecting and measuring fine roots in

minirhizotron images using matched filtering and local entropy thresholding.

Machine Vision and Applications, 17:265–278, Sep 2006.

spine detection using curvilinear structure detector and LDA classifier. Neu-

roimage, 36(2):346–360, 2007.

spine detection using curvilinear structure detector and LDA classifier. In Proc.

IEEE Symposium on Biomedical Imaging: Nano to Macro, pages 528–531, VA,

USA, April 2007.

[270] C. Zhu, R. Byrd, P. Lu, and J. Nocedal. Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Transactions on Mathematical Software, 23(4):550–560, 1997.
