Deep Learning for NeuroImaging
Andrew Doyle
McGill Centre for Integrative Neuroscience
@crocodoyle
Outline
1. GET EXCITED
2. Artificial Neural Networks
3. Backpropagation
4. Convolutional Neural Networks
5. Neuroimaging Applications
ImageNet-1000 Results
StackGAN
CycleGAN
Wolterink, Jelmer M., et al. "Deep MR to CT synthesis using unpaired data." International Workshop on Simulation and Synthesis in Medical Imaging. Springer, Cham, 2017.
Synthetic Images
[Figure: pairs of real and synthesized images side by side]
Uncertainty
Cohen, Joseph Paul, Margaux Luck, and Sina Honari. "Distribution Matching Losses Can Hallucinate Features in Medical Image Translation." arXiv preprint arXiv:1805.08841 (2018).
Reinforcement Learning
[Video: OpenAI bot vs. Dendi (human) playing Dota 2, with mini-map shown]
Feedforward vs. Recurrent
[Figure: feedforward and recurrent network topologies]
Artificial Neurons
[Diagram: inputs i1, i2, i3 weighted by w1, w2, w3, plus bias b, summed into node x with output o]
$o = f(x) = f(\mathbf{w}^T \mathbf{i} + b)$
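A minimal sketch of a single artificial neuron in NumPy (names and values here are illustrative, not from the slides):

import numpy as np

def neuron(i, w, b, f):
    # o = f(w^T i + b): weighted sum of inputs plus bias, passed through activation f
    return f(np.dot(w, i) + b)

i = np.array([0.5, -1.0, 2.0])      # inputs i1, i2, i3
w = np.array([0.1, 0.4, -0.2])      # weights w1, w2, w3
b = 0.3                             # bias
o = neuron(i, w, b, f=lambda x: x)  # identity activation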
Artificial Neurons
Logistic Regression
[Diagram: inputs i1, i2, i3 weighted by w1, w2, w3, plus bias b, summed into node x with output o]
$o = \sigma(x) = \sigma(\mathbf{w}^T \mathbf{i} + b)$
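Setting the activation to the sigmoid turns the same neuron into logistic regression; a short sketch (reusing the illustrative neuron and inputs from the example above):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

o = neuron(i, w, b, f=sigmoid)  # output squashed into (0, 1)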
Neural Networks
[Diagram: a decision tree mapped to a neural network with inputs x1, x2, hidden units h1–h7, and output y]
Sethi, Ishwar Krishnan. "Entropy nets: From decision trees to neural networks." Proceedings of the IEEE 78.10 (1990): 1605-1613.
Neural Networks
[Diagram: a deeper entropy net with inputs x1, x2, hidden units h1–h15, and output y]
Sethi, Ishwar Krishnan. "Entropy nets: From decision trees to neural networks." Proceedings of the IEEE 78.10 (1990): 1605-1613.
Neural Networks
[Diagram: inputs i1, i2 → units x1, x2 → hidden units h1, h2 → output y with o]
$f(x_1) = \sigma(i_1 w_{x_1,i_1} + b_{x_1})$
$f(x_2) = \sigma(i_2 w_{x_2,i_2} + b_{x_2})$
$f(h_1) = \sigma(w_{h_1,x_1} f(x_1) + w_{h_1,x_2} f(x_2) + b_{h_1})$
$f(h_2) = \sigma(w_{h_2,x_1} f(x_1) + w_{h_2,x_2} f(x_2) + b_{h_2})$
$f(y) = \sigma(w_{y,h_1} f(h_1) + w_{y,h_2} f(h_2) + b_y)$
$\;\;\;\;\;= \sigma(w_{y,h_1} \sigma(w_{h_1,x_1} \sigma(i_1 w_{x_1,i_1} + b_{x_1}) + w_{h_1,x_2} \sigma(i_2 w_{x_2,i_2} + b_{x_2}) + b_{h_1}) + w_{y,h_2} \sigma(w_{h_2,x_1} \sigma(i_1 w_{x_1,i_1} + b_{x_1}) + w_{h_2,x_2} \sigma(i_2 w_{x_2,i_2} + b_{x_2}) + b_{h_2}) + b_y)$
17 parameters θ = {w, b}
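A hedged NumPy sketch of this forward pass, with parameter names taken from the slide's subscripts (the dictionary keys are an assumed naming scheme):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(i1, i2, p):
    # input-layer units
    x1 = sigmoid(i1 * p["w_x1_i1"] + p["b_x1"])
    x2 = sigmoid(i2 * p["w_x2_i2"] + p["b_x2"])
    # hidden units
    h1 = sigmoid(p["w_h1_x1"] * x1 + p["w_h1_x2"] * x2 + p["b_h1"])
    h2 = sigmoid(p["w_h2_x1"] * x1 + p["w_h2_x2"] * x2 + p["b_h2"])
    # output unit
    return sigmoid(p["w_y_h1"] * h1 + p["w_y_h2"] * h2 + p["b_y"])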
Backpropagation
1. Random θ initialization
2. Iterate:
   1. Forward pass: compute the loss
   2. Backward pass: compute the gradient of the loss with respect to every parameter
$\nabla_\theta L(o, \hat{y}) = \left[\frac{\partial L}{\partial w_{x_1,i_1}}, \frac{\partial L}{\partial b_{x_1}}, \frac{\partial L}{\partial w_{x_2,i_2}}, \frac{\partial L}{\partial b_{x_2}}, \dots, \frac{\partial L}{\partial w_{y,h_2}}\right]^T$
Backpropagation
[Figure: computational graph with gradients shown in blue]
Backpropagation
[Figure: loss J plotted against weight w; the forward pass evaluates the loss, the backward pass gives ∂L/∂w]
Initialize w, then step against the gradient:
$w' = w - \alpha \frac{\partial L}{\partial w}$
where α is the learning rate.
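A minimal gradient-descent loop on a toy one-dimensional loss, just to make the update rule concrete (the loss and constants are invented for illustration):

def grad(w):
    return 2.0 * (w - 3.0)       # dL/dw for the toy loss L(w) = (w - 3)^2

w = 0.0                          # initialize w
alpha = 0.1                      # learning rate
for _ in range(100):
    w = w - alpha * grad(w)      # w' = w - α ∂L/∂w
# w ends up near the minimum at w = 3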
Backpropagation
[Diagram: network i1, i2 → x1, x2 → h1, h2 → y, with ŷ ≈ o]
$\frac{\partial L}{\partial w_{y,h_1}} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w_{y,h_1}} = \dots = -\sigma(\hat{y})(1 - \sigma(\hat{y})) f(h_1)$
(the last step uses the sigmoid derivative identity $\sigma'(x) = \sigma(x)(1 - \sigma(x))$)
Backpropagation
[Diagram: network i1, i2 → x1, x2 → h1, h2 → y, with ŷ ≈ o]
$\frac{\partial L}{\partial w_{h_1,x_1}} = \frac{\partial L}{\partial y} \cdot \frac{\partial y}{\partial h_1} \cdot \frac{\partial h_1}{\partial w_{h_1,x_1}}$
$\frac{\partial L}{\partial w_{h_2,x_2}} = \frac{\partial L}{\partial y} \cdot \frac{\partial y}{\partial h_2} \cdot \frac{\partial h_2}{\partial w_{h_2,x_2}}$
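These chain-rule products can be written out directly for the tiny network; a hedged sketch assuming a squared-error loss (illustrative, not the slides' code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_w_h1_x1(x1, x2, h2, t, w_h1_x1, w_h1_x2, b_h1, w_y_h1, w_y_h2, b_y):
    # forward pass through h1 and y
    h1 = sigmoid(w_h1_x1 * x1 + w_h1_x2 * x2 + b_h1)
    y = sigmoid(w_y_h1 * h1 + w_y_h2 * h2 + b_y)
    # backward pass: dL/dw = (dL/dy) * (dy/dh1) * (dh1/dw), with L = (y - t)^2 / 2
    dL_dy = y - t
    dy_dh1 = y * (1.0 - y) * w_y_h1   # sigmoid derivative times outgoing weight
    dh1_dw = h1 * (1.0 - h1) * x1     # sigmoid derivative times incoming input
    return dL_dy * dy_dh1 * dh1_dw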
Backpropagation
[Diagram: network i1, i2 → x1, x2 → h1, h2 → y, with ŷ ≈ o]
[Figure: loss surface plotted over an X-Y grid of parameter (θ) space]
Convolutional Neural Networks
$f(t) * g(t) = \int_{\tau=-\infty}^{\infty} f(\tau) \cdot g(t - \tau)\, d\tau$
[Animation: f(t), g(t), and the convolution f(t) ∗ g(t) as g slides across f]
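In the discrete case the integral becomes a sliding dot product; a quick NumPy illustration (signal and kernel invented for the example):

import numpy as np

f = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])  # signal f(t)
g = np.ones(3) / 3.0                                # box-filter kernel g(t)
out = np.convolve(f, g, mode="same")                # slide g across f
# out is a smoothed copy of f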
Convolutional Neural Networks
[Animation: a convolution kernel sweeping across the input to produce a feature map]
[Animation: convolution followed by max pooling]
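A minimal Keras sketch of the convolution + max-pool pattern these slides build up (layer counts and sizes are arbitrary assumptions, not from the slides):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 1)),  # convolution
    layers.MaxPooling2D((2, 2)),                                            # max pool
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
model.summary()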
Dropout
AlexNet was trained using Dropout on its fully connected layers, which hold roughly 90% of its parameters.
[Animation: the same small network (x1, x2 → h1, h2 → y) with different units randomly dropped on each pass, P(dropout) = 0.5]
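A hedged NumPy sketch of (inverted) dropout at training time, showing the masking the animation depicts:

import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, training=True):
    if not training:
        return h                          # no dropout at test time
    mask = rng.random(h.shape) >= p_drop  # keep each unit with prob 1 - p_drop
    return h * mask / (1.0 - p_drop)      # rescale so the expected value is unchanged

h = np.array([0.2, 0.9, 0.5, 0.7])        # hidden activations
print(dropout(h))                         # some entries zeroed, the rest doubled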
Batch Normalization
[Figure: per-channel statistics μ, σ computed over an N × C × H × W activation tensor]
• N : training examples
• C : channels
• H, W : spatial dimensions
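A per-channel batch-norm sketch in NumPy over an (N, C, H, W) tensor (ε, γ, β values are illustrative):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # mean and variance over batch and spatial axes, one value per channel
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # learned scale and shift

x = np.random.randn(8, 3, 16, 16)          # N=8, C=3, H=W=16
gamma = np.ones((1, 3, 1, 1))
beta = np.zeros((1, 3, 1, 1))
y = batch_norm(x, gamma, beta)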
VGG16
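VGG16 ships with Keras; loading the ImageNet-pretrained model takes one call:

from tensorflow.keras.applications import VGG16

model = VGG16(weights="imagenet")  # downloads ImageNet-pretrained weights
model.summary()                    # 16 weight layers, ~138M parameters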
bit.ly/ohbmdl
keras.io
deeplearningbook.org
Thanks!