1558-254X © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: EPFL LAUSANNE. Downloaded on June 20,2023 at 20:03:58 UTC from IEEE Xplore. Restrictions apply.
1032 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 41, NO. 5, MAY 2022
Fig. 1. Workflow of the proposed method. (a) The proposed neuron reconstruction method. (b) Examples of synthetic training images.
Networks (CNNs) are increasingly used to segment or enhance neuronal structures in 3D microscopy images [8], [31]–[35], aiming to improve the performance of existing neuron reconstruction methods. DeepNeuron [36], for example, is able to detect neurites, connect neuronal signals, and refine reconstruction results. Other methods [14], [37] employ CNNs to detect neuron critical points as seed points for the subsequent neuron reconstruction process. To the best of our knowledge, deep-learning based neuron reconstruction strategies have barely been explored. In [51], deep reinforcement learning is employed for neural tracking in 2D neuronal microscopy images, but it is not extended to 3D. One of the main drawbacks of these methods is that network training relies on either extensive manual annotation of foreground voxels or pseudo labels generated from manual reconstructions. Obtaining such training samples is extremely time-consuming and labor-intensive [4], due to the rich hierarchy of neuronal structures and the low quality and ambiguity of the images. Hence, progressive-learning based methods [1], [38] have been proposed to iteratively train neuron segmentation networks and update the pseudo labels produced by automated neuron reconstruction methods. However, the iterative training scheme significantly increases the training time. Moreover, the performance of the trained model essentially depends on the quality improvement of the pseudo labels in each iteration, which in turn depends on the performance and robustness of the existing automated neuron reconstruction methods.

Currently, an entire 3D mouse brain imaged at sub-micrometer resolution is larger than 20000 × 40000 × 10000 voxels. While it seems natural to use 3D CNNs for neuron reconstruction in 3D microscopy images, this huge volume of 3D neuronal imaging data poses significant challenges to 3D CNNs. Due to their smaller number of parameters, 2D CNNs have higher efficiency and lower computational costs than 3D CNNs. These advantages not only allow faster processing, but also open up more possibilities for 2D CNNs in terms of functionality and architecture, given the same computational resources. Thus, 2D CNNs are preferable to 3D CNNs for neuronal image analysis. Moreover, although 3D CNNs are well developed by now, the development of 2D architectures, whether CNNs or not, is usually more advanced. Many novel deep learning techniques have appeared for 2D image analysis; for example, the recently popular vision transformer (ViT) [50] was proposed for 2D image recognition. Therefore, another advantage of using 2D CNNs is that we can easily keep up with the fast development of 2D architectures to improve 2D-CNN based neuronal image analysis methods, whereas extending state-of-the-art 2D architectures to 3D may be infeasible due to limited computational resources.

In this paper, we propose a spherical-patches extraction (SPE) [14] and deep-learning based neuron reconstructor (DNR), called SPE-DNR, for neuron reconstruction from 3D microscopy images. By employing the SPE method for feature extraction and transformation, we build SPE-DNR using 2D CNNs consisting of two functional heads (a neurite tracer and a classifier). The neurite tracer starts from a set of seed points and iteratively determines the tracing directions along the neurite centerlines. The classifier estimates the radii of the neuronal structures and automatically stops the tracing process when it steps into the background region. The overall pipeline of the neuron reconstruction procedure in this work is shown in Fig. 1(a). Given an input image stack, the neuron soma and a set of seed points are first detected. Then, starting from the seed points, SPE-DNR traces the neuronal centerlines by iteratively determining the tracing directions using 2D CNNs. In this process, a joint decision scheme is developed to determine the tracing direction based on the confidence scores given by the seed point detector and the neuron reconstructor. Finally, the graph representing the complete neuron circuit is reconstructed using a breadth-first search (BFS) algorithm [40]. Moreover, to avoid introducing erroneous training labels caused by imperfect manual annotations, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. Both the CNN-based seed point detector and SPE-DNR are trained on synthetic images and directly evaluated on the test images. Examples of the synthetic training images are shown in Fig. 1(b).

The main contributions of this paper are:
• We propose a novel 3D neuron reconstruction method integrating SPE with 2D CNNs. During tracing, the joint decision scheme helps to increase the robustness of
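The seed-to-graph pipeline described above (iterative direction-following with a classifier-based stopping rule, followed by BFS graph assembly) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict_direction` and `is_foreground` are hypothetical stand-ins for the trained SPE-DNR tracer and classifier heads.

```python
import numpy as np
from collections import deque

def trace_neurite(seed, predict_direction, is_foreground, step=1.0, max_steps=200):
    """Iteratively step along predicted centerline directions, stopping when
    the classifier head reports background (the stopping rule described above)."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(max_steps):
        p = path[-1]
        if not is_foreground(p):  # classifier head: stepped into background -> stop
            break
        d = np.asarray(predict_direction(p), dtype=float)  # tracer head: unit direction
        path.append(p + step * d)
    return path

def bfs_tree(adjacency, root):
    """Assemble the neuron graph from traced points by breadth-first search,
    starting from a root node (e.g., the detected soma)."""
    parent, order, queue = {root: None}, [root], deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency.get(u, ()):
            if v not in parent:
                parent[v] = u
                order.append(v)
                queue.append(v)
    return order, parent

# Toy run: trace along +x inside a region that is foreground only for x < 5.
path = trace_neurite((0.0, 0.0, 0.0),
                     predict_direction=lambda p: (1.0, 0.0, 0.0),
                     is_foreground=lambda p: p[0] < 5.0)
order, parent = bfs_tree({0: [1, 2], 1: [3]}, root=0)
```

With the toy callables above, the tracer takes unit steps from x = 0 until the classifier flags background at x = 5; in the paper, the direction instead comes from 2D CNNs applied to spherical patches, and the joint decision scheme weighs the reconstructor's confidence against the seed point detector's.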
CHEN et al.: DEEP-LEARNING-BASED AUTOMATED NEURON RECONSTRUCTION 1033
Fig. 3. Illustrations of training sample generation and common errors in manual reconstruction annotations. (a) Reference directions generated on reconstruction annotations that precisely match the neuronal centerlines. (a1)-(a2) Correct reference directions of a centerline point and an off-centerline point, respectively. (b) Reference directions made on imperfect reconstruction annotations. (b1)-(b2) Erroneous reference directions of a centerline point and an off-centerline point, respectively. (c) Examples of imperfect manual reconstructions in the BigNeuron project [30].

a) Centerline samples: Given a randomly selected centerline point yc from the reconstruction annotation, its child node is picked to generate one of two reference directions (Fig. 3(a1)) as,

d̃ = arg min_{d(n) ∈ D} ang(δ, d(n))    (3)

where d̃ is the reference direction, δ is the displacement vector from yc to its child node, and ang(δ, d(n)) is the angle between δ and d(n). Then, in p̃, the class probability corresponding to d̃ is set to 0.5. Another reference direction is generated and labeled in the same way, using the parent node of yc. This way, two elements in p̃ have probability 0.5, corresponding to the two reference directions, and the remaining probabilities are set to 0.0. The reference radius r̃ is obtained from the reconstruction annotation.

b) Off-centerline samples: If SPE-DNR is trained using only centerline samples, it may suffer from overfitting and fail to identify correct directions when it slightly deviates from the centerline. To improve the robustness of the neurite tracer, off-centerline samples are generated (Fig. 3(a2)). Specifically, an off-centerline point yo is randomly sampled around the reconstruction annotation. Its reference directions are determined by finding the centerline point yc nearest to it and calculating the directions from yo to the parent and child nodes of yc.

c) Background samples: These samples are generated from the background regions in the 3D training images and are only used to train the classifier, so reference directions and radii are not required. We set b̃ = [1, 0]^T for the centerline and off-centerline samples and b̃ = [0, 1]^T for the background samples.

After the locations of the training samples are determined, the SPE method is performed at each location in the training images to extract P for training.

2) Separate Training Strategy: During training, P is fed to the DNR and three losses are calculated. Here, the calculation of the losses is detailed. The cross-entropy loss Ltra from the direction determination task is calculated as,

Ltra = −(1/B) Σ_{k=1}^{B} p̃k^T log(pk)    (4)

where B is the batch size, k is the index of the training samples, pk is the output of the direction determination task, and p̃k is the reference for pk.

The cross-entropy loss Lcls from the voxel classification task is calculated as,

Lcls = −(1/B) Σ_{k=1}^{B} b̃k^T log(bk)    (5)

where bk is the output of the voxel classification task and b̃k is the reference for bk.

The squared error regression loss Lreg from the radius regression task is calculated as,

Lreg = (1/B) Σ_{k=1}^{B} (rk − r̃k)^2    (6)

where rk is the output of the radius regression task and r̃k is the reference for rk. Note that the background samples are excluded when calculating Ltra and Lreg, because it is unnecessary to determine the tracing directions and estimate the radii in background regions.

After calculating the losses, one way to train our SPE-DNR, mimicking other works [42], [43], would be minimizing the losses of all the tasks as,

L1 = Ltra + Lcls + Lreg    (7)

However, we found that the regression task negatively affects the two classification tasks if the regression and the classification tasks are jointly trained. Apparently, these tasks are too different to optimize jointly. Therefore, to improve the performance of the classification tasks, especially the direction determination task, we use a separate training strategy. First, the network is optimized by minimizing the losses from the two classification tasks,

L2 = Ltra + Lcls    (8)

After the network converges, we freeze the parameters of all the layers in the network, except the output layer of the classifier. Then, the parameters of this layer are optimized by minimizing the loss as,

L3 = Lcls + Lreg    (9)

This way, the side effect of the radius regression task on the voxel classification task is limited, because only the parameters of the output layer of the classifier are fine-tuned.

E. Synthetic Training Image Generation

Through the proposed training scheme, the SPE-DNR can learn to determine the centerline direction and estimate the radius of neuronal structures. However, it requires precise reconstruction annotations to generate accurate
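As a concrete reading of the labeling rule (3) and the losses (4)-(6), the following NumPy sketch builds the reference distribution p̃ from an assumed codebook D of discrete unit directions and evaluates the three batch-averaged losses. The function names are illustrative, not from the authors' code.

```python
import numpy as np

def reference_distribution(codebook, to_child, to_parent):
    """Eq. (3): assign probability 0.5 to the codebook direction closest in
    angle to each of the two displacement vectors (toward child and parent);
    all remaining entries of the reference distribution stay 0.0."""
    D = np.asarray(codebook, dtype=float)
    p_ref = np.zeros(len(D))
    for disp in (to_child, to_parent):
        v = np.asarray(disp, dtype=float)
        cosines = (D @ v) / (np.linalg.norm(D, axis=1) * np.linalg.norm(v))
        p_ref[np.argmax(cosines)] = 0.5  # smallest angle == largest cosine
    return p_ref

def spe_dnr_losses(p, p_ref, b, b_ref, r, r_ref):
    """Eqs. (4)-(6): batch-averaged cross-entropies Ltra and Lcls, and the
    squared-error radius loss Lreg. Background samples would be excluded
    from the Ltra and Lreg batches before calling this."""
    B = len(p)
    L_tra = -np.sum(p_ref * np.log(p)) / B
    L_cls = -np.sum(b_ref * np.log(b)) / B
    L_reg = np.sum((r - r_ref) ** 2) / B
    return L_tra, L_cls, L_reg

# Separate strategy (8)-(9): first minimize Ltra + Lcls; after convergence,
# freeze everything except the classifier's output layer and minimize Lcls + Lreg.
```

For example, with a codebook of the four axis-aligned unit vectors in the x-y plane, a point whose child displacement is roughly +x and whose parent displacement is −x yields p̃ = [0.5, 0, 0.5, 0].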
Fig. 7. Visual examples of the reconstruction results on a typical test image from the BigNeuron dataset. The image intensity is inverted and the
image contrast is adjusted for better visualization. The reconstructions are visualized using Vaa3D [49]. (a) Minimum intensity projection of the test
image. (b) Manual reconstruction. (c)-(h) Reconstruction results of the compared methods.
2 See Supplementary Material Section 2 for the details of hyper-parameter selection.
3 See Supplementary Material Section 3 for detailed parameters and operations for the synthetic training images.
Fig. 9. Visual examples of the reconstruction results on a typical WMBS image. The image intensity is inverted and the image contrast is adjusted for
better visualization. The reconstructions are visualized using Vaa3D. (a) Minimum intensity projection of the test image. (b) Manual reconstruction.
(c)-(h) Reconstruction results of the compared methods. Green ellipses indicate the challenging weak-signal and thin neurites.
Fig. 11. Visual examples of the reconstruction results on a typical NCL1A image. The image intensity is inverted and the image contrast is adjusted
for better visualization. The reconstructions are visualized using Vaa3D. (a) Minimum intensity projection of the test image. (b) Manual reconstruction.
(c)-(h) Reconstruction results of the compared methods. Yellow boxes are the zoomed-in views of an ambiguous structure and green arrows indicate the ambiguous structure.
they outperform other competitors by a large margin in SD and SSD scores, and SPE-DNR obtains the best SD and SSD scores. Since the images in this dataset are from the same species, the performance spreads of all the methods are smaller than those on the BigNeuron dataset. However, due to the influence of the weak-signal neuronal structures, the Recalls of all the methods decrease; SPE-DNR shows only a slight drop in Recall and remains relatively high. The experimental results demonstrate that SPE-DNR generalizes well on this dataset.

E. Experiments on Neocortical Layer-1 Axons Images

Different from the former two datasets, the images in this dataset mainly contain network-like neuron axons from various neuronal cells and have no clear somas. The 3D neuron reconstruction results of the compared methods on a typical NCL1A image are shown in Fig. 11. It can be observed that the performance of SPE-DNR is still robust. Moreover, some ambiguous structures are reconstructed by our method in the background region, because these structures show tubularity patterns in a local region. For example, in Fig. 11(a), it is hard to tell whether the structure indicated by the green arrow is background noise or a weak-signal neuronal structure. Our method recognizes this ambiguous structure as a neuronal structure, whereas there is no reconstruction reference for it; artifacts are therefore generated. The negative impact of such artifacts in practical application scenarios is that researchers may spend some time removing them from the background region. This can be done easily with a few mouse clicks using software tools such as Vaa3D [49], because the artifacts in background regions are usually isolated. Indeed, the sensitivity of our method to ambiguous structures helps researchers to identify the weak-signal and thin neuronal structures that are hard to find even by human eyes. This gives our method higher practical value than the compared neuron reconstruction methods, because identifying the weak-signal and thin neuronal structures is a time-consuming task for the researchers and a challenging task for the existing automatic neuron reconstruction methods.

The quantitative comparison results on the 16 images in this dataset are shown in Fig. 12. In general, all the compared methods show competitive results on this dataset. Note that on the former two datasets, the performances of SPE-DNR and R2 are close and outperform the other methods. On this dataset, however, SPE-DNR outperforms R2 in terms of Precision, Recall, F1-measure and SSD score, and is competitive with R2 in the other metrics. This is because R2 requires a soma region as a source point for neuron reconstruction. Since the images in this dataset have no clear somas, R2 alternatively considers the neuronal structure with the largest radius as the source point, and then traces a path between the source point and each geodesic furthest point. In this process, reconstruction errors may be generated because R2 may step into the background region to build a path between the source point and a neurite of another neuronal cell. Therefore, the performance of R2 is not as competitive as on the former two datasets.

Among all the methods, only SPE-DNR consistently achieved satisfactory performance in all the metrics on this dataset, which further demonstrates its robustness and generalizability.

F. Experiments on the Computational Efficiency of SPE-DNR

To evaluate the computational efficiency of the proposed method, we calculated the average running time and the other evaluation metrics of the compared methods on the 34 BigNeuron test images (Table II). It can be seen that the average running time of our method is acceptable: seed point extraction took 18.00 seconds and neurite tracing took
TABLE II
The Average Values for the Evaluation Metrics of the Compared Methods on the BigNeuron Dataset
[5] J. Xie, T. Zhao, T. Lee, E. Myers, and H. Peng, "Anisotropic path searching for automatic neuron reconstruction," Med. Image Anal., vol. 15, no. 5, pp. 680–689, Oct. 2011.
[6] Z. Zhou, X. Liu, B. Long, and H. Peng, "TReMAP: Automatic 3D neuron reconstruction based on tracing, reverse mapping and assembling of 2D projections," Neuroinformatics, vol. 14, no. 1, pp. 41–50, Jan. 2016.
[7] A.-A. Liu, J. Tang, W. Nie, and Y. Su, "Multi-grained random fields for mitosis identification in time-lapse phase contrast microscopy image sequences," IEEE Trans. Med. Imag., vol. 36, no. 8, pp. 1699–1710, Aug. 2017.
[8] Q. Li and L. Shen, "3D neuron reconstruction in tangled neuronal image with deep networks," IEEE Trans. Med. Imag., vol. 39, no. 2, pp. 425–435, Feb. 2020.
[9] Y. Qiao, B. P. F. Lelieveldt, and M. Staring, "An efficient preconditioner for stochastic gradient descent optimization of image registration," IEEE Trans. Med. Imag., vol. 38, no. 10, pp. 2314–2325, Oct. 2019.
[10] E. Meijering, "Neuron tracing in perspective," Cytometry A, vol. 77, no. 7, pp. 693–704, 2010.
[11] H. Zeng and J. R. Sanes, "Neuronal cell-type classification: Challenges, opportunities and the path forward," Nature Rev. Neurosci., vol. 18, no. 9, pp. 530–546, Sep. 2017.
[12] J. Winnubst et al., "Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain," Cell, vol. 179, no. 1, pp. 268–281, 2019.
[13] J. De et al., "A graph-theoretical approach for tracing filamentary structures in neuronal and retinal images," IEEE Trans. Med. Imag., vol. 35, no. 1, pp. 257–272, Jan. 2016.
[14] W. Chen et al., "Spherical-patches extraction for deep-learning-based critical points detection in 3D neuron microscopy images," IEEE Trans. Med. Imag., vol. 40, no. 2, pp. 527–538, Feb. 2021.
[15] Y. Wang et al., "TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain," Nature Commun., vol. 10, no. 1, pp. 1–9, Dec. 2019.
[16] H. Peng, F. Long, and G. Myers, "Automatic 3D neuron tracing using all-path pruning," Bioinformatics, vol. 27, no. 13, pp. i239–i247, Jul. 2011.
[17] H. Xiao and H. Peng, "APP2: Automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree," Bioinformatics, vol. 29, no. 11, pp. 1448–1454, Jun. 2013.
[18] T. Batabyal, B. Condron, and S. T. Acton, "NeuroPath2Path: Classification and elastic morphing between neuronal arbors using path-wise similarity," Neuroinformatics, vol. 18, no. 3, pp. 479–508, Jun. 2020.
[19] S. Liu, D. Zhang, S. Liu, D. Feng, H. Peng, and W. Cai, "Rivulet: 3D neuron morphology tracing with iterative back-tracking," Neuroinformatics, vol. 14, no. 4, pp. 387–401, 2016.
[20] S. Liu, D. Zhang, Y. Song, H. Peng, and W. Cai, "Automated 3-D neuron tracing with precise branch erasing and confidence controlled back tracking," IEEE Trans. Med. Imag., vol. 37, no. 11, pp. 2441–2452, Nov. 2018.
[21] J. Yang, M. Hao, X. Liu, Z. Wan, N. Zhong, and H. Peng, "FMST: An automatic neuron tracing method based on fast marching and minimum spanning tree," Neuroinformatics, vol. 17, no. 2, pp. 185–196, Apr. 2019.
[22] M. Radojević and E. Meijering, "Automated neuron tracing using probability hypothesis density filtering," Bioinformatics, vol. 33, no. 7, pp. 1073–1080, 2017.
[23] S. Basu, W. T. Ooi, and D. Racoceanu, "Neurite tracing with object process," IEEE Trans. Med. Imag., vol. 35, no. 6, pp. 1443–1451, Jun. 2016.
[24] S. Mukherjee, B. Condron, and S. T. Acton, "Tubularity flow field—A technique for automatic neuron segmentation," IEEE Trans. Image Process., vol. 24, no. 1, pp. 374–389, Jan. 2015.
[25] A. Rodriguez, D. B. Ehlenberger, P. R. Hof, and S. L. Wearne, "Three-dimensional neuron tracing by voxel scooping," J. Neurosci. Methods, vol. 184, no. 1, pp. 169–175, 2009.
[26] M. Radojević, I. Smal, and E. Meijering, "Fuzzy-logic based detection and characterization of junctions and terminations in fluorescence microscopy images of neurons," Neuroinformatics, vol. 14, no. 2, pp. 201–219, 2016.
[27] M. Liu, W. Chen, C. Wang, and H. Peng, "A multiscale ray-shooting model for termination detection of tree-like structures in biomedical images," IEEE Trans. Med. Imag., vol. 38, no. 8, pp. 1923–1934, Aug. 2019.
[28] L. Acciai, P. Soda, and G. Iannello, "Automated neuron tracing methods: An updated account," Neuroinformatics, vol. 14, no. 4, pp. 353–367, 2016.
[29] K. M. Brown et al., "The DIADEM data sets: Representative light microscopy images of neuronal morphology to advance automation of digital reconstructions," Neuroinformatics, vol. 9, nos. 2–3, pp. 143–157, 2011.
[30] H. Peng et al., "BigNeuron: Large-scale 3D neuron reconstruction from optical microscopy images," Neuron, vol. 87, no. 2, pp. 252–256, 2015.
[31] Y. Jiang, W. Chen, M. Liu, Y. Wang, and E. Meijering, "3D neuron microscopy image segmentation via the ray-shooting model and a DC-BLSTM network," IEEE Trans. Med. Imag., vol. 40, no. 1, pp. 26–37, Jan. 2021.
[32] B. Yang, W. Chen, H. Luo, Y. Tan, M. Liu, and Y. Wang, "Neuron image segmentation via learning deep features and enhancing weak neuronal structures," IEEE J. Biomed. Health Informat., vol. 25, no. 5, pp. 1634–1645, May 2021.
[33] M. Kozinski, A. Mosinska, M. Salzmann, and P. Fua, "Tracing in 2D to reduce the annotation effort for 3D deep delineation," Med. Image Anal., vol. 60, pp. 1–11, Feb. 2020.
[34] A. Mosinska, M. Kozinski, and P. Fua, "Joint segmentation and path classification of curvilinear structures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 6, pp. 1515–1521, Jun. 2020.
[35] R. Li, T. Zeng, H. Peng, and S. Ji, "Deep learning segmentation of optical microscopy images improves 3-D neuron reconstruction," IEEE Trans. Med. Imag., vol. 36, no. 7, pp. 1533–1541, Jul. 2017.
[36] Z. Zhou, H.-C. Kuo, H. Peng, and F. Long, "DeepNeuron: An open deep learning toolbox for neuron tracing," Brain Inf., vol. 5, no. 2, pp. 3–11, Dec. 2018.
[37] Y. Tan, M. Liu, W. Chen, X. Wang, H. Peng, and Y. Wang, "DeepBranch: Deep neural networks for branch point detection in biomedical images," IEEE Trans. Med. Imag., vol. 39, no. 4, pp. 1195–1205, Apr. 2020.
[38] Q. Huang et al., "Weakly supervised learning of 3D deep network for neuron reconstruction," Frontiers Neuroanatomy, vol. 14, pp. 1–15, Jul. 2020.
[39] M. Helmstaedter, K. L. Briggman, and W. Denk, "High-accuracy neurite reconstruction for high-throughput neuroanatomy," Nature Neurosci., vol. 14, no. 8, pp. 1081–1088, Aug. 2011.
[40] M. Radojević and E. Meijering, "Automated neuron reconstruction from 3D fluorescence microscopy images using sequential Monte Carlo estimation," Neuroinformatics, vol. 17, no. 3, pp. 423–442, Jul. 2019.
[41] D. Zhang, S. Liu, Y. Song, D. Feng, H. Peng, and W. Cai, "Automated 3D soma segmentation with morphological surface evolution for neuron reconstruction," Neuroinformatics, vol. 16, no. 2, pp. 153–166, Jan. 2018.
[42] J. M. Wolterink, R. W. van Hamersvelt, M. A. Viergever, T. Leiner, and I. Išgum, "Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier," Med. Image Anal., vol. 51, pp. 46–60, Jan. 2019.
[43] H. Yang, J. Chen, Y. Chi, X. Xie, and X. Hua, "Discriminative coronary artery tracking via 3D CNN in cardiac CT angiography," in Proc. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), 2019, pp. 468–476.
[44] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," in Proc. Int. Conf. Learn. Represent. (ICLR), 2016, pp. 1–13.
[45] X. Ming et al., "Rapid reconstruction of 3D neuronal morphology from light microscopy images with augmented rayburst sampling," PLoS ONE, vol. 8, no. 12, Dec. 2013, Art. no. e84557.
[46] A. Rodriguez, D. B. Ehlenberger, P. R. Hof, and S. L. Wearne, "Rayburst sampling, an algorithm for automated three-dimensional shape analysis from laser scanning microscopy images," Nature Protocols, vol. 1, no. 4, pp. 2152–2161, 2006.
[47] H. Peng, Z. Ruan, D. Atasoy, and S. Sternson, "Automatic reconstruction of 3D neuron structures using a graph-augmented deformable model," Bioinformatics, vol. 26, no. 12, pp. i38–i46, Jun. 2010.
[48] D. M. Powers, "Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation," J. Mach. Learn. Technol., vol. 2, no. 1, pp. 37–63, 2011.
[49] H. Peng, A. Bria, Z. Zhou, G. Iannello, and F. Long, "Extensible visualization and analysis for multidimensional images using Vaa3D," Nature Protocols, vol. 9, pp. 193–208, Jan. 2014.
[50] A. Dosovitskiy et al., "An image is worth 16×16 words: Transformers for image recognition at scale," in Proc. Int. Conf. Learn. Represent. (ICLR), 2021, pp. 1–22.
[51] T. Dai et al., "Deep reinforcement learning for subpixel neural tracking," in Proc. Int. Conf. Med. Imag. with Deep Learn., 2019, pp. 130–150.