Medical Image Understanding and Analysis 21st Annual Conference MIUA 2017 Edinburgh UK July 11 13 2017 Proceedings 1st Edition María Valdés Hernández
María Valdés Hernández
Víctor González-Castro (Eds.)

Medical Image Understanding and Analysis
21st Annual Conference, MIUA 2017
Edinburgh, UK, July 11–13, 2017
Proceedings
Communications in Computer and Information Science 723
Commenced Publication in 2007
Founding and Former Series Editors: Alfredo Cuzzocrea, Dominik Ślęzak, and Xiaokang Yang
Editorial Board
Simone Diniz Junqueira Barbosa
Pontifical Catholic University of Rio de Janeiro (PUC-Rio),
Rio de Janeiro, Brazil
Phoebe Chen
La Trobe University, Melbourne, Australia
Xiaoyong Du
Renmin University of China, Beijing, China
Joaquim Filipe
Polytechnic Institute of Setúbal, Setúbal, Portugal
Orhun Kara
TÜBİTAK BİLGEM and Middle East Technical University, Ankara, Turkey
Igor Kotenko
St. Petersburg Institute for Informatics and Automation of the Russian
Academy of Sciences, St. Petersburg, Russia
Ting Liu
Harbin Institute of Technology (HIT), Harbin, China
Krishna M. Sivalingam
Indian Institute of Technology Madras, Chennai, India
Takashi Washio
Osaka University, Osaka, Japan
More information about this series at http://www.springer.com/series/7899
Editors
María Valdés Hernández, Department of Neuroimaging Sciences, University of Edinburgh, Edinburgh, UK
Víctor González-Castro, Universidad de León, León, Spain
This volume comprises the proceedings of the 21st edition of the Medical Image
Understanding and Analysis (MIUA) Conference, an annual forum organized in the
United Kingdom for communicating research progress within the community interested
in biomedical image analysis. Its goals are to disseminate and discuss research in medical image analysis, to encourage growth, and to raise the profile of this multidisciplinary field, which has ever-increasing real-world applicability. The conference is an excellent opportunity to network, generate new ideas, establish new collaborations, learn about and discuss different topics, listen to speakers of international reputation, and present and demonstrate medical image analysis tools.
This year’s edition was organized by The Row Fogo Centre for Research in Ageing
and Dementia in conjunction with Edinburgh Imaging (http://www.ed.ac.uk/clinical-
sciences/edinburgh-imaging) at The University of Edinburgh, in partnership with the
Scottish Imaging Network A Platform for Scientific Excellence (SINAPSE) (http://
www.sinapse.ac.uk/), the Journal of Imaging (http://www.mdpi.com/journal/jimaging),
and the EPSRC Medical Image Analysis Network (MedIAN) (http://www.ibme.ox.ac.
uk/MedIAN) and MathWorks (https://uk.mathworks.com/); it was supported by the
British Machine Vision Association (BMVA), Toshiba Medical Visualisation Systems
Europe, General Electric, Bayer, Siemens, Edinburgh Imaging, Optos, Holoxica Ltd.,
AnalyzeDirect and NVIDIA.
The number and level of submissions to this year's edition were unprecedented.
In all, 105 technical papers and 22 abstracts showing clinical applications of image-processing techniques (the latter not considered for inclusion in this volume) were reviewed by an expert team of 93 reviewers from British (38), Spanish (25), French (6), Italian (11), Swedish (5), Greek (2), New Zealand (1), American (3), German (1), Dutch (1), Bangladeshi (1), and Swiss (1) institutions. Each of the 127 submissions was reviewed by two to four members of the Program Committee. Based on their rankings and recommendations, 18 of the 105 papers were accepted, 65 were provisionally accepted pending minor revisions and suggestions, and 22 were rejected for publication in this volume. Of the rejected papers, 15 were considered to need a major revision, and their authors were invited to address the reviewers' comments and submit their revised work to the Journal of Imaging (a conference partner), to which the reviews were provided to facilitate the new review process. After a second round of revision, we are including 82 full papers in this volume. We hope you agree with us that they all show high quality and represent a step forward in the field of medical image analysis.
We thank all members of the MIUA 2017 Organizing, Program, and Steering
Committees and, particularly, all those who supported MIUA 2017 by submitting
papers and attending the meeting. We thank Professor Joanna Wardlaw, Head of the
Academic Hub and Centre that organized the conference, for her welcome words.
We also thank our speakers, Professors Sir Michael Brady, Daniel Rueckert, Ingela Nyström, and Jinah Park, and Dr. Constantino Carlos Reyes Aldasoro and Konstantinos Kamnitsas, for sharing their success, knowledge, and experiences. We hope
you enjoy the proceedings of MIUA 2017.
Program Chairs
María Valdés Hernández, University of Edinburgh, UK
Víctor González-Castro, Universidad de León, Spain
Retinal Imaging
Ultrasound Imaging
Cardiovascular Imaging
Oncology Imaging
Brain Imaging
1 Introduction
Optical Coherence Tomography (OCT) images of the retina provide a cross-sectional view of the intra-retinal tissue, which is composed of 7 adjacent layers [2] separated by 8 boundaries, as depicted in Fig. 1a. Ordered from top to bottom, the boundaries are: (i) the Inner Limiting Membrane (ILM), separating the Nerve Fiber Layer (NFL) from the vitreous; (ii) NFL/GCL, separating the NFL from the Ganglion Cell and Inner Plexiform layer (GCL-IPL); (iii) IPL/INL, separating the GCL-IPL from the Inner Nuclear Layer (INL); (iv) INL/OPL, separating the INL from the Outer Plexiform Layer (OPL); (v) OPL/ONL, separating the OPL from the Outer Nuclear and Inner Segment (ONL-IS) layer; (vi) IS/OS, separating the ONL-IS from the Outer Segment (OS) layer; (vii) RPE_in, separating the OS from the Retinal Pigment Epithelium (RPE) layer; and finally (viii) the RPE_out boundary, separating the RPE from the choroid.
© Springer International Publishing AG 2017
M. Valdés Hernández and V. González-Castro (Eds.): MIUA 2017, CCIS 723, pp. 3–14, 2017.
DOI: 10.1007/978-3-319-60964-5_1
A. Chakravarty and J. Sivaswamy
Fig. 1. Retinal layer boundaries in OCT B-scans of (a) a healthy retina, listed from top to bottom: ILM (red), NFL/GCL (green), IPL/INL (blue), INL/OPL (yellow), OPL/ONL (cyan), IS/OS (magenta), RPE_in (pink) and RPE_out (purple); (b) a retina with AMD: ILM (red), RPE_in (green) and RPE_out (blue). (Color figure online)
the need for disease-specific modifications. We address this problem with a novel linearly parameterized Conditional Random Field (LP-CRF) formulation whose parameters are learnt in a data-driven, end-to-end manner with a Structured Support Vector Machine (SSVM) [3]. The convolution filters used to capture the appearance of each layer region and boundary, as well as the relative weights of the appearance and shape-prior based cost terms, are implicitly modelled within the LP-CRF. As a result, our method does not require any explicit feature extraction or tunable weights. The contributions of this paper are:
– We eliminate the need to handcraft the energy by learning it from a set of
training images in an end-to-end manner. The proposed method outperforms
the existing methods [4] with similar but handcrafted energy cost terms.
– Our CRF formulation seamlessly incorporates both hard and soft constraints on shape priors in a single pairwise edge between each pair of neighbours, which is difficult to implement in the MCCS approach [4,13] without additional directed edges in the graph construction.
– We jointly segment all the layers in a single optimization step in contrast to
the existing methods that need to handle each layer differently.
– Our method is able to learn a single energy function for both Normal and AMD cases for a three-layer boundary segmentation problem.
– The proposed method has been shown to be robust to data acquired from
multiple centres with different scanners and at different resolutions.
2 Method
An overview of the proposed method is presented in Fig. 2a. The input image
is standardized by extracting the region of interest (ROI), flattening the retinal
tissue and removing speckle noise (see Sect. 2.1). The task of joint extraction of
the multiple layer boundaries is posed as a CRF based Energy Minimization (see
Sect. 2.2). The total CRF energy is learnt in an end-to-end manner by employing
SSVM (see Sect. 2.3). This involves parameterization of the energy as a linear
function LP-CRF (see Sect. 2.4). During testing, a CRF is constructed from
the standardized image using the learnt parameters. Thereafter, the optimal
labelling of the CRF is evaluated and brought back to the original image space
by reversing the image flattening and ROI extraction operations.
Fig. 2. (a) Overview of the proposed method; training is represented with dotted lines. (b) Graphical model of the proposed CRF formulation.
The curvature of the retinal surface is flattened using the method reported in [2]. Each image column is shifted by an offset obtained by fitting a quadratic polynomial to a rough estimate of the RPE_out boundary. Next, the ROI is extracted by cropping out the dark (intensity values close to 0) background regions at the top and bottom of the image. The ROI is estimated as the rectangular region encompassing the largest connected component obtained by thresholding the input image at 0.3 after scaling the intensity to [0, 1]. To reduce holes due to the dark layers within the ROI, a large Gaussian filter with σ empirically set to 9 is applied prior to thresholding. Thereafter, speckle-reducing anisotropic diffusion [15] is applied. The ROI is resized to 190 × 600 to handle variations in image resolution, and an intensity standardization scheme based on [10] is applied to handle inter- and intra-scanner intensity variations.
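The standardization steps above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the rough RPE_out estimate (here simply the brightest row per column) and the speckle filter of [2]/[15] are simplified stand-ins, while the σ = 9 Gaussian and the 0.3 threshold follow the text.

```python
import numpy as np

def standardize_bscan(img):
    """Flatten the retinal curvature and crop the ROI (sketch).

    img: 2-D float array (one OCT B-scan), intensities in [0, 1].
    """
    rows, cols = img.shape
    c = np.arange(cols)

    # 1. Flatten: fit a quadratic to a rough RPE_out estimate
    #    (stand-in: brightest row per column) and shift each column.
    rpe = np.argmax(img, axis=0).astype(float)
    fit = np.polyval(np.polyfit(c, rpe, 2), c)
    flat = np.empty_like(img)
    for j in c:
        flat[:, j] = np.roll(img[:, j], int(round(fit.mean() - fit[j])))

    # 2. ROI: blur with a sigma = 9 Gaussian (1-D, per column),
    #    threshold at 0.3, and crop to the rows containing tissue.
    t = np.arange(-27, 28)                       # ~3 sigma support
    g = np.exp(-t**2 / (2 * 9.0**2))
    g /= g.sum()
    blur = np.apply_along_axis(lambda col: np.convolve(col, g, 'same'), 0, flat)
    keep = np.where((blur > 0.3).any(axis=1))[0]
    return flat[keep.min():keep.max() + 1] if keep.size else flat
```

In the full pipeline the cropped ROI would then be resized to 190 × 600 and intensity-standardized as described.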
$$E(\mathbf{x}, I) = \sum_{l=1}^{L}\sum_{n=1}^{N} \varepsilon^{l,n}_{bnd} + \sum_{l=1}^{L}\sum_{n=1}^{N-1} \varepsilon^{l,n}_{intra} + \sum_{l=1}^{L-1}\sum_{n=1}^{N} \varepsilon^{l,n}_{inter} = E_{bnd}(\mathbf{x}, I) + E_{intra}(\mathbf{x}, I) + E_{inter}(\mathbf{x}, I), \quad (1)$$
where Ebnd (x, I), Eintra (x, I) and Einter (x, I) are the sum of all the unary, intra-
layer and the inter-layer cost terms in the entire CRF respectively. The labelling
x that maximizes E(x, I) for an image I corresponds to the desired segmentation. During implementation, the max CRF inference in Eq. 1 is converted into a minimization problem by negating all the unary and pairwise cost terms, and is solved using the Sequential Tree-Reweighted Message Passing algorithm [6].
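For intuition, min-sum inference can be done exactly by dynamic programming when the model reduces to a chain (a single boundary with only intra-layer edges); the paper's CRF couples all L boundaries and therefore uses TRW-S [6]. A minimal sketch of the chain case, with negated (cost) terms as described above:

```python
import numpy as np

def chain_min_sum(unary, pairwise):
    """Viterbi-style min-sum inference for one boundary on a chain.

    unary:    (N, R) cost of assigning each of R candidate rows to
              each of N columns (negated unary terms).
    pairwise: (R, R) cost between row labels of adjacent columns.
    Returns the cost-minimizing label sequence.
    """
    N, R = unary.shape
    cost = unary[0].copy()
    back = np.zeros((N, R), dtype=int)
    for n in range(1, N):
        total = cost[:, None] + pairwise          # (prev label, cur label)
        back[n] = np.argmin(total, axis=0)        # best predecessor
        cost = total[back[n], np.arange(R)] + unary[n]
    x = np.zeros(N, dtype=int)
    x[-1] = np.argmin(cost)
    for n in range(N - 1, 0, -1):                 # backtrack
        x[n - 1] = back[n, x[n]]
    return x
```

A smoothness-favouring `pairwise` matrix plays the role of the shape prior: large costs on distant label pairs keep the recovered boundary from jumping between columns.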
Our objective is to parameterize E(x, I) by a set of parameters θ which can
be learnt from a set of training images. Next, we look at the problem of learning
θ in Sect. 2.3 followed by an appropriate definition of Eθ (x, I) in Sect. 2.4.
$$\min_{\theta,\, \xi \ge 0} \; \frac{\lambda}{2}\,\|\theta\|^2 + \frac{1}{M}\sum_{k=1}^{M} \xi_k \qquad (2)$$
constraint (known as the max oracle) is problem specific and involves solving the following optimization problem: $\mathbf{y}^*_k = \arg\max_{\mathbf{x}} \left( \Delta(\mathbf{x}^{(k)}, \bar{\mathbf{x}}^{(k)}) + E_\theta(\mathbf{x}, I) \right)$. Since, in our case, $\Delta(\mathbf{x}^{(k)}, \bar{\mathbf{x}}^{(k)})$ is separable at each $x_{l,n}$, the max oracle is defined similarly to the CRF inference problem in Eq. 1, with an additional term $|x_{l,n} - \bar{x}_{l,n}|$ added to the unary terms for each $x_{l,n}$.
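Because the loss decomposes over the nodes, loss-augmented inference reduces to ordinary CRF inference run on modified unaries. A minimal NumPy sketch (the flat node-by-label array layout is an assumption of this illustration):

```python
import numpy as np

def loss_augmented_unaries(unary, gt_labels):
    """Fold the separable loss |x_ln - gt| into the unary costs.

    unary:     (num_nodes, num_labels) unary term of E_theta for
               every node and every candidate row label.
    gt_labels: ground-truth row label for each node.
    Returns the augmented unaries; running the usual CRF inference
    on them solves the max oracle, as noted in the text.
    """
    num_nodes, num_labels = unary.shape
    labels = np.arange(num_labels)
    # |candidate row - ground-truth row|, added node-wise.
    loss = np.abs(labels[None, :] - gt_labels[:, None])
    return unary + loss
```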
We define the individual cost terms in Eq. 1 as linear functions: $E_{bnd}(\mathbf{x}, I) = \mathbf{w}_{bnd}^{\top} F_{bnd}(\mathbf{x}, I)$, $E_{intra}(\mathbf{x}, I) = \mathbf{w}_{intra}^{\top} F_{intra}(\mathbf{x}, I)$ and $E_{inter}(\mathbf{x}, I) = \mathbf{w}_{inter}^{\top} F_{inter}(\mathbf{x}, I)$ respectively. Then, the net energy is given by $E_\theta(\mathbf{x}, I) = \theta^{\top} F(\mathbf{x}, I)$, where $F(\mathbf{x}, I) = [F_{bnd}\; F_{intra}\; F_{inter}]$ and $\theta = [\mathbf{w}_{bnd}\; \mathbf{w}_{intra}\; \mathbf{w}_{inter}]$. The details of the individual cost terms are discussed below.
$$E_{bnd}(\mathbf{x}, I) = \sum_{l=1}^{L}\sum_{n=1}^{N} \mathbf{u}_l^{\top} I_{l,n} = \sum_{l=1}^{L} \mathbf{u}_l^{\top} \sum_{n=1}^{N} I_{l,n} = \mathbf{w}_{bnd}^{\top} F_{bnd}(\mathbf{x}, I), \quad (3)$$

where $\mathbf{w}_{bnd} = [\mathbf{u}_1 \ldots \mathbf{u}_L]^{\top}$ and $F_{bnd}(\mathbf{x}, I) = \left[ \sum_{n=1}^{N} I_{1,n} \; \ldots \; \sum_{n=1}^{N} I_{L,n} \right]^{\top}$.
Pairwise Intra-layer Energy: The interaction between adjacent points on the $l$-th boundary consists of a shape and an appearance term. The shape prior

$$d^{l,n}_{intra}(x_{l,n}, x_{l,n+1}) = \exp\left\{ \frac{-1}{2} \left( \frac{(x_{l,n+1} - x_{l,n}) - \mu^{l,n}_{intra}}{\sigma^{l,n}_{intra}} \right)^{2} \right\}$$

preserves the boundary smoothness by penalizing large deviations of the signed height gradient $(x_{l,n+1} - x_{l,n})$ from Gaussian distributions whose mean $\mu^{l,n}_{intra}$ and standard deviation $\sigma^{l,n}_{intra}$ are learnt separately from the GT of the training images for each layer $l$ and column $y_n$.
Emphasising the shape prior can lead to gross segmentation errors in the presence of abnormalities such as AMD, which affect the boundary smoothness. Hence, an additional appearance-based term is introduced to favour similarity in the appearance of adjacent boundary points:

$$S(x_{l,n}, x_{l,n+1}) = 1 - \sum_{k=1}^{255} \min\{h_k(x_{l,n}, y_n),\, h_k(x_{l,n+1}, y_{n+1})\}$$

is a histogram-intersection based similarity measure between the normalized histograms $h_k$ of $p \times p$ image patches centered at $(x_{l,n}, y_n)$ and $(x_{l,n+1}, y_{n+1})$ respectively.

The pairwise intra-layer energy is modelled as a linear combination of the shape and appearance terms, $\varepsilon^{l,n}_{intra} = \alpha_l\, d^{l,n}_{intra}(x_{l,n}, x_{l,n+1}) + \beta_l\, S(x_{l,n}, x_{l,n+1})$, where $\alpha_l$ and $\beta_l$ are relative weight coefficients learnt automatically in an end-to-end manner during training. Therefore, the total Intra-layer pairwise energy is
$$E_{intra}(\mathbf{x}, I) = \sum_{l=1}^{L}\sum_{n=1}^{N-1} \left( \alpha_l\, d^{l,n}_{intra}(x_{l,n}, x_{l,n+1}) + \beta_l\, S(x_{l,n}, x_{l,n+1}) \right)$$
$$= \sum_{l=1}^{L} \alpha_l \left( \sum_{n=1}^{N-1} d^{l,n}_{intra}(x_{l,n}, x_{l,n+1}) \right) + \sum_{l=1}^{L} \beta_l \left( \sum_{n=1}^{N-1} S(x_{l,n}, x_{l,n+1}) \right) = \mathbf{w}_{intra}^{\top} F_{intra}(\mathbf{x}, I). \quad (4)$$

Here, $E_{intra}(\mathbf{x}, I)$ is linearized by taking $\mathbf{w}_{intra} = [\alpha_1\, \alpha_2 \ldots \alpha_L\; \beta_1\, \beta_2 \ldots \beta_L]^{\top}$ and $F_{intra}(\mathbf{x}) = [d^1\, d^2 \ldots d^L\; S^1\, S^2 \ldots S^L]^{\top}$, where $d^i = \sum_{n=1}^{N-1} d^{i,n}_{intra}(x_{i,n}, x_{i,n+1})$ and $S^i = \sum_{n=1}^{N-1} S(x_{i,n}, x_{i,n+1})$ respectively.
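The per-layer sums d^l and S^l that make up F_intra can be computed directly from the boundary positions. A NumPy sketch, under the assumption (for this illustration) that the normalized 255-bin patch histograms have been precomputed for each sampled column:

```python
import numpy as np

def intra_layer_features(x, mu, sigma, hist):
    """Per-layer shape and appearance features of Eq. 4 (sketch).

    x:     boundary row positions x_{l,n} for one layer l, shape (N,)
    mu,
    sigma: column-wise Gaussian shape-prior parameters, shape (N-1,)
    hist:  normalized patch histograms h_k(x_{l,n}, y_n),
           shape (N, 255), one per sampled column
    Returns (d^l, S^l): the sums entering F_intra for this layer.
    """
    grad = np.diff(x)                                  # x_{l,n+1} - x_{l,n}
    d = np.exp(-0.5 * ((grad - mu) / sigma) ** 2)      # shape prior
    # Histogram-intersection based term between adjacent columns.
    s = 1.0 - np.minimum(hist[:-1], hist[1:]).sum(axis=1)
    return d.sum(), s.sum()
```

Stacking these per-layer sums for all L layers yields the feature vector whose inner product with [α_1 … α_L β_1 … β_L] gives E_intra.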
Pairwise Inter-layer Energy: This captures the interaction between adjacent layer boundaries by employing a shape- and an appearance-based cost. The shape prior $d^{l,n}_{inter}$ penalizes the deviation of the layer thickness $(x_{l+1,n} - x_{l,n})$ from a priori learnt Gaussian distributions with mean $\mu^{l,n}_{inter}$ and standard deviation $\sigma^{l,n}_{inter}$ for each column $y_n$. Moreover, hard constraints are also imposed on the minimum $T^{l}_{mn}$ and maximum $T^{l}_{mx}$ layer thickness of each layer $l$ by assigning $\infty$ to the infeasible labellings. Therefore,

$$d^{l,n}_{inter} = \begin{cases} \exp\left\{ \frac{-1}{2} \left( \frac{(x_{l+1,n} - x_{l,n}) - \mu^{l,n}_{inter}}{\sigma^{l,n}_{inter}} \right)^{2} \right\}, & \text{if } T^{l}_{mn} \le (x_{l+1,n} - x_{l,n}) \le T^{l}_{mx} \\ \infty, & \text{otherwise.} \end{cases} \quad (5)$$

$$E_{inter}(\mathbf{x}, I) = \mathbf{w}_{inter}^{\top} F_{inter}(\mathbf{x}, I) \quad (6)$$
3 Results
Dataset: The proposed method was evaluated on macular OCT B-scans of both Normal and AMD subjects using two datasets, the Normal dataset [2] and the AMD dataset [1], kindly provided by Chiu et al.
The Normal dataset consists of 107 B-scans obtained from 10 OCT volumes. The B-scans in 5 volumes are of size 300 × 800 pixels with a pixel resolution of (3.23, 6.7, 67) µm/pixel along the axial, lateral and azimuthal directions. The AMD dataset is characterized by the presence of drusen and geographic atrophy. It consists of 220 B-scans sampled from 20 OCT volumes collected from 4 different clinics. All B-scans were of size 512 × 1000 pixels with pixel resolution varying in the range of (3.06–3.24, 6.50–6.60, 65–69.8) µm/pixel.
In both datasets, eleven B-scans were linearly sampled from the OCT volumes such that the sixth B-scan was centered at the fovea. The GT comprises manual markings by a senior grader. While GT for all 8 layer boundaries (see Fig. 1a) is available for the Normal dataset, GT for only 3 boundaries (see Fig. 1b) is available for the AMD dataset.
Evaluation Metrics and Benchmarking: The accuracy of the proposed
method was analyzed in terms of the boundary localization error (BLE) for each
layer boundary, the Dice coefficient (DC) and the retinal Layer Thickness Error
(LTE) for the segmented tissue regions. BLE is defined as the average (signed or unsigned) distance in pixels between the extracted boundary and the GT along each column (A-scan) in the image. DC measures the extent of overlap between the extracted and GT tissue regions. LTE is defined as the average absolute difference in thickness (in pixels) between the extracted and GT tissue regions across each column in the image. While DC provides a global measure of accuracy, LTE is more sensitive to localized inaccuracies at each column. Ideally, BLE and LTE should be 0 and DC should be 1.
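Since all three metrics are defined column-wise, they can be computed directly from per-column boundary positions. A NumPy sketch for one tissue region; the interval-overlap form of Dice used here is an assumption of this illustration (it is equivalent to pixel-mask Dice when each column's region is the interval between its two boundaries):

```python
import numpy as np

def boundary_metrics(pred_top, pred_bot, gt_top, gt_bot):
    """Unsigned BLE (top boundary), LTE, and Dice for one region.

    All arguments are row positions per image column (A-scan):
    the region in each column is the interval [top, bot).
    """
    ble_top = np.mean(np.abs(pred_top - gt_top))      # unsigned BLE
    pred_t = pred_bot - pred_top                      # thickness
    gt_t = gt_bot - gt_top
    lte = np.mean(np.abs(pred_t - gt_t))              # column-wise LTE
    # Dice from the per-column overlap of the two row intervals.
    inter = np.maximum(
        0, np.minimum(pred_bot, gt_bot) - np.maximum(pred_top, gt_top))
    dice = 2 * inter.sum() / (pred_t.sum() + gt_t.sum())
    return ble_top, lte, dice
```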
The proposed method was benchmarked against the results obtained from three publicly available OCT segmentation software packages: (A) CASEREL¹, based on [2]; (B) the Iowa Reference Algorithm (IRA), based on [4]; and (C) OCTSEG², based on [8]. Each package extracts a different number of layers, and the RPE_in boundary, which is critical for AMD detection, is only available in IRA. Additionally, the INL/OPL boundary is unavailable in OCTSEG.
Experimental Setup: The experiments were carried out in two settings. In Experiment 1, the proposed method was evaluated on the task of the joint segmentation of all 8 layer boundaries on the Normal dataset. In Experiment 2, a single CRF model was learnt on the combined Normal and AMD datasets for the extraction of three layer boundaries, ILM, RPE_in and RPE_out, as the GT of only these three boundaries was available for the AMD dataset. In both experiments, $y_n$ was sampled 4 pixels apart for computational tractability, and the intermediate boundary points were obtained using B-spline interpolation. The size of both $\mathbf{u}_l$
¹ http://pangyuteng.github.io/caserel/
² https://www5.cs.fau.de/research/software/octseg/
Table 1. (mean/std.) of BLE (in pixels) for 8 layer boundaries on the Normal dataset
Table 2. (mean/std.) of LTE and Dice for 7 tissue regions on the Normal dataset.
Ground Truth mean layer thickness is reported to provide a context for the LTE.
All traces of the old cork on the joint can be removed with
sandpaper, leaving it as shown at the left. The cork comes in strips
of about the proper thickness, and wide and long enough to allow for
trimming. The ends of the strip should be beveled to make a ¹⁄₄-in.
lap joint.
A small quantity of the cement is heated over the lamp and six
drops poured on the joint; then with the end of the file, which should
be heated also, it is spread to give an even, thin coating. The
beveled ends of the strip are similarly treated. By working quickly
and carefully, the coating on the joint and strip are brought to a
plastic state by holding in the flame, and the strip is quickly laid in
place. Before the cement has time to harden, press the cork in,
forming a neat joint. Bind a rag around the cork, leaving it until the
cement is thoroughly set.
The corked joint will be too large to go into the joining section of
the instrument. File and sandpaper it to a twisting fit. Though the
cork should be truly cylindrical, it may be tapered a trifle smaller at
the forward end. A coating of tallow applied to the joint will make it
easy-fitting, but air-tight and moisture-proof.
The pads are disks of felt incased in thin sheepskin. After long
usage, they become too hard to make an air-tight fit. Repadding
should, therefore, be anticipated. Shellac will give good results in
putting on pads. It is heated until liquid and poured into the key
recess. The new pad is pressed into the liquid shellac, care being
taken to have it well centered. For different keys, it will be necessary
to use varying quantities of shellac to make the pad sit higher or
lower, as required.—Donald A. Hampson, Middletown, N. Y.
Anyone with a power boat can construct a blower for the whistle
very cheaply. The whistle is attached to a suitable length of pipe,
threaded on each end. The blower is made of two white-pine boards,
1 in. thick, cut as shown at A; a thin piece of leather is cut like the
pattern B, to form the bellows part, and after it is shaped, the edges
of the boards are glued and the leather placed in position, where it is
fastened with tacks driven in about 1 in. apart. The bellows are
fastened to the under side of a seat with screws, and a tension
spring is attached to the bottom of the bellows and the floor of the
boat. A cord is fastened to the lower board of the bellows and run up
through to the cabin roof over suitable pulleys to a handle within
convenient reach of the operator.—Contributed by John I. Somers,
Pleasantville, N. J.
Filling In Broken Places on Enamel
Ordinary putty will not do to fill in cracks or broken spots on an
enameled surface, such as a clockface. Fine sealing wax is much
better, as it hardens at once, takes color without absorbing the oil,
and does not shrink like putty. Use a wax of the proper color to
match the surface as closely as possible. Fit it in and smooth with a
warm, flexible piece of metal, such as a palette knife. Give it one or
two coats of thin color to exactly match the other surface, and
varnish. If the article has not a high polish, the gloss of the varnish
can be cut a little with pumice stone.
A Twisting Thriller Merry-Go-Round
By R. E. EDWARDS
“Step right up; three twisting thrillers for a penny—a tenth of a dime!” was the familiar invitation which attracted customers to the delights of a homemade merry-go-round of novel design. The
patrons were not disappointed, but came back for more. The power
for the whirling thriller is produced by the heavy, twisted rope,
suspended from the limb of a tree, or other suitable support. The
rope is cranked up by means of the notched disk A, grasped at the
handle B, the car being lifted off. The thriller is stopped when the
brakeplate I rests on the weighted box L.
The Supporting Ropes are Wound Up at the Disk A, the Car is Hooked into
Place, and the Passengers Take Their Seats for a Thrilling Ride, Until the
Brakeplate I Rests on the Box
Manila rope, ³⁄₄ in. or more in diameter, is used for the support,
and is rigged with a spreader, about 2 ft. long, at the top, as shown.
The disk is built up of wood, as detailed, and notches, C, provided
for the ropes. The rope is wound up and the car is suspended from it
by the hook, which should be strong, and deep enough so that it
cannot slip out, as indicated at H.
The car is made of a section of 2 by 4-in. stuff, D, 10 ft. long, to which braces, E, of 1 by 4-in. stuff are fastened with nails or screws.
The upper ends of the pieces E are blocked up with the centerpiece
F, nailed securely, and the wire link G is fastened through the joint.
The seats J are suspended at the ends of the 2 by 4-in. bar, with
their inner ends lower, as shown, to give a better seating when the
thriller is in action. The seats are supported by rope or strap-iron
brackets, K, set 15 in. apart. The box should be high enough so that
the seats do not strike the ground.