
Medical Image Processing Lectures

Medical Image Segmentation


Prof. Leo Joskowicz
School of Engineering and Computer Science
The Hebrew University of Jerusalem, ISRAEL

Segmentation
• In computer vision, segmentation is the process of dividing a digital image into several parts (groups of pixels) or objects. The goal of segmentation is to simplify and/or change the representation of an image into one that is more meaningful and easier to analyze.

Types of Segmentation
• Thresholding
• Edges
• Regions
• Mathematical Morphology
• Deformable Models
• Classification / Neural Networks

Image Segmentation
Spatial partitioning of an image into its constituent
parts -- isolating specific objects in an image in support
of a need or task.
Two main tasks

• Recognition: what objects appear. Humans are better!

• Delineation: object boundaries. Computers are better!

Many techniques, mostly from image and signal processing, must be adapted to anatomical structures, image types, and clinical needs.
IS: technical issues
Goal: high precision and accuracy with minimal or no
human help. Usually, not completely automatic!

Key concepts
• Precision: reliability -- reproducibility
• Accuracy: validity -- agreement with truth
• Efficiency: computing time, user time
• Interaction: type of user, type of interaction

MIS: clinical issues


Goal: to be clinically useful and relevant
– Improve a current procedure, or enable a new one
– Computer-Aided Radiology (CAR)
Wide variety of uses
• Diagnosis: one-time, follow-up evaluation
• Surgical planning
• Registration: multimodal, preop/intraop
• Intraoperative follow-up
• Postoperative evaluation
MIS clinical uses: mammography
Computer-Aided Diagnosis (CAD) -- mammography

microcalcifications

Microcalcification clusters
• Microcalcifications – tiny deposits of calcium in the
breast that cannot be felt but can be detected on a
mammogram. A cluster of these very small specks
of calcium may indicate that cancer is present.

• An early sign of disease in 30-50% of mammography-detected cases is the appearance of clusters of fine, granular microcalcifications whose individual grains are typically 0.05-1 mm in diameter.

Microcalcification clusters

Mammography -- clinical needs


• 10-30% of breast lesions are missed by radiologists
during routine screening.

• With nationwide screening programs in many Western countries, the number of mammograms to be analyzed by radiologists is enormous.

CAD goal: improving the detection performance and throughput of screening mammography.

MIS clinical uses: pathology analysis
liver tumor

2D measurements 3D measurements
abdominal aortic aneurysm


MIS clinical uses: Optic Pathway Glioma

(figure: DaCo, T1 and T2 MRI, 19/4/09)

Internal components classification
(red: solid, blue: cyst, green: enhancing)


MIS clinical uses: preoperative planning
Diagnosis, 3D visualization, volumetric measurements


MIS validation
• Algorithms without validation are clinically
worthless!
• Validation is with respect to a clinical task.
• Validation requires a ground truth for comparison.
• Physical anatomical models and/or phantoms are
typically not available (except sometimes for bones).
• Ground truth is usually obtained by manual
identification and/or segmentation by a user.
• Experts: in most cases, radiologists.
• Extrinsic comparison: compare vs. other methods.
MIS validation: issues
• Large inter- and intra-observer variability across experts and clinical sites.
• May not be representative of population variability.
• Main quantitative parameters:
– validation set size
– number and type of observers
– intra- and inter-observer manual segmentation
variability
– surface-based, volume-based, voxel-based measures


MIS validation: anatomical variability

MIS validation: anatomical variability
(figure: stenosis / narrowing, looping)

MIS validation: metrics


• Visual appreciation
• Surface-based error measurements
– Mean surface distance
– RMS surface distance
– Maximum surface distance
• Volume-based error measurements
– Dice coefficient
– Volumetric overlap error
• Voxel-based Sensitivity and Specificity
MIS validation: visualization -- carotid arteries
(figure: radiologist vs. algorithm segmentations)

MIS validation: surface metrics


Surface-based error measurements
• Mean surface distance
• RMS surface distance
• Maximum surface distance
(figure: reference vs. result surfaces)
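A minimal sketch of these three measures, assuming the reference and result surfaces are given as (N, 3) arrays of boundary-point coordinates in mm (an illustration, not the lecture's code):

```python
import numpy as np

def surface_distance_metrics(ref_points, res_points):
    """Mean, RMS, and maximum symmetric surface distance between two
    surfaces given as (N, 3) arrays of boundary-point coordinates (mm)."""
    # Distance from each reference point to its nearest result point, and vice versa
    d_ref = np.array([np.min(np.linalg.norm(res_points - p, axis=1)) for p in ref_points])
    d_res = np.array([np.min(np.linalg.norm(ref_points - p, axis=1)) for p in res_points])
    d = np.concatenate([d_ref, d_res])
    return {
        "mean": d.mean(),                  # mean surface distance
        "rms": np.sqrt((d ** 2).mean()),   # RMS surface distance
        "max": d.max(),                    # maximum surface distance
    }
```

In practice the nearest-point search is usually done with a distance transform of one mask rather than this brute-force loop.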

MIS validation: volume metrics
Volume-based error measurements
• Volumetric overlap error
• Dice similarity
• Jaccard coefficient
(figure: reference vs. result volumes)
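A minimal sketch of these overlap measures, assuming the reference and result segmentations are given as binary voxel masks (illustration only, not the lecture's code):

```python
import numpy as np

def overlap_metrics(ref_mask, res_mask):
    """Dice, Jaccard, and volumetric overlap error between two binary masks."""
    ref, res = ref_mask.astype(bool), res_mask.astype(bool)
    intersection = np.logical_and(ref, res).sum()
    union = np.logical_or(ref, res).sum()
    dice = 2.0 * intersection / (ref.sum() + res.sum())
    jaccard = intersection / union
    voe = 1.0 - jaccard  # volumetric overlap error
    return {"dice": dice, "jaccard": jaccard, "voe": voe}
```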


Volumetric Overlap Error / Dice coefficient
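On the original slide the formulas appear as a figure; the standard definitions, with R the reference (ground-truth) voxel set and A the algorithm's result, are:

$$\mathrm{Dice}(R,A) = \frac{2\,|R \cap A|}{|R| + |A|}, \qquad
\mathrm{Jaccard}(R,A) = \frac{|R \cap A|}{|R \cup A|}, \qquad
\mathrm{VOE}(R,A) = 1 - \frac{|R \cap A|}{|R \cup A|}$$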

Classification: Aesop's Fable
“The Boy Who Cried Wolf”

MIS validation: voxel sensitivity/specificity
Overlap measures based on the sum of voxels in each set
• S_R: reference voxel set
• S_A: algorithm voxel set
• Sensitivity
• Specificity
MIS validation: observer variability
• Intra-observer variability: determine how much variation there is when a single observer produced the ground-truth segmentation
  – e.g., repeat the segmentation 5 times
• Inter-observer variability: determine how much variation there is between multiple observers that produced the ground-truth segmentation
  – e.g., ask 3 radiologists to do the segmentation (see the sketch below)
• Observer expertise and frequency variability
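One simple way to summarize this variability (an assumed illustration, not the lecture's protocol) is the mean pairwise Dice over the repeated or multi-observer delineations:

```python
import itertools
import numpy as np

def mean_pairwise_dice(masks):
    """Mean pairwise Dice over a list of binary masks of the same structure --
    a simple agreement summary for repeated delineations."""
    scores = []
    for a, b in itertools.combinations(masks, 2):
        a, b = a.astype(bool), b.astype(bool)
        scores.append(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))
    return float(np.mean(scores))

# Intra-observer: the same observer repeats the segmentation 5 times
# intra_agreement = mean_pairwise_dice(repeats_by_one_observer)          # hypothetical list
# Inter-observer: 3 radiologists each segment the same case once
# inter_agreement = mean_pairwise_dice([seg_rad1, seg_rad2, seg_rad3])   # hypothetical masks
```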


Linear vs. volumetric measurements

LINEAR:
• Pre-therapy: long axis 25.0 mm, short axis 20.4 mm
• After 24 days: long axis 25.9 mm, short axis 19.8 mm
• Change in long axis: 4% -- linear evaluation: stable

VOLUMETRIC:
• Pre-therapy: volume 3420 mm3
• After 24 days: volume 4608 mm3
• Change in volume: 35% -- volumetric evaluation: progression

[Schwartz and Zhao 2010]
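As a quick check of the reported changes (assuming the 3420 mm3 volume is the pre-therapy one, consistent with the "progression" call):

$$\frac{25.9 - 25.0}{25.0} \approx 3.6\% \approx 4\%\ (\text{linear: stable}), \qquad
\frac{4608 - 3420}{3420} \approx 34.7\% \approx 35\%\ (\text{volumetric: progression})$$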
Volumetric measurements: inaccuracy
Study:
• Single axial MRI T1 slice
• 9 independent observers
• Repeated delineations
Evaluation:
• Observer 1: stable
• Observer 2: progression
Inter- and intra-observer variability: 30%
Key issue: fuzzy tumor boundaries
[Weltens 2001]


Size and density changes


(figure: pre-therapy vs. post-therapy)

Measurement characteristics
• Ground-truth: not known!

• Repeatability: intra-observer variability

• Reliability: inter-observer variability


• Measure: correlation between repeated
measurements.


Inaccuracy vs. Uncertainty


GROUND TRUTH

INACCURACY

Inaccuracy vs. Uncertainty
GROUND TRUTH

UNCERTAINTY


Inaccuracy vs. Uncertainty


GROUND TRUTH
INACCURACY

UNCERTAINTY

I am never wrong!
Inaccuracy vs. Uncertainty
GROUND TRUTH
INACCURACY

UNCERTAINTY

improving accuracy
matters!


Inaccuracy vs. Uncertainty
GROUND TRUTH
INACCURACY

UNCERTAINTY

I am always
precisely wrong!
Inaccuracy vs. Uncertainty
• There is intrinsic uncertainty in the volume measurement.
• The accuracy required for clinical significance is unknown.
• When Inaccuracy > Uncertainty, results can be meaningless!
• Uncertainty may not be improvable!
• Goal: improve accuracy to obtain Inaccuracy < Uncertainty


MIS: technical considerations


• Very wide variety of images, from many sources
– 2D: X-rays, US, microscope, video camera
– 3D: CT, CTA, MRI (many protocols), fMRI, SPECT..
– 2D+time: X-ray angiography, video
– 3D+time: cardiology, time sequences
• Multimodal datasets
• Varying resolutions, signal-to-noise ratios, etc.

MIS: technical considerations
• Very wide variety of anatomies and pathologies
– bones
– abdominal organs
– brain structures
– tumors, pathologies
• Different types of variability between people
• Variability from an atlas (the "average" anatomy)
• Adjacencies to other structures
• Imaging artifacts

MIS methods: overview


No UNIVERSAL segmentation algorithm
• Very large number of available algorithms!
• Possible classifications:
– Generic vs Task-oriented
– Bottom-up vs Top-down
– Boundary vs Region-based
– Explicit vs Implicit a priori knowledge
• We present a technical classification based on the characteristics of the methods.
