Low Power To High Power Translation
POWER TRANSLATION
Nick Tai, 2021/1/11
MODEL
A Noise-Robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions From CT Images (IEEE Journals & Magazine)
COPLE-Net: the general model architecture used to map from the source image to the target image (image translation)
The characteristics of COPLE-Net are: (1) channel attention is adopted; (2) ASPP captures multi-scale semantics;
(3) the shortcut connections are themselves encoder-decoder submodules
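As a minimal sketch of the channel-attention idea, here is a squeeze-and-excitation style module in NumPy (illustrative only; the weight shapes and exact formulation are assumptions, not COPLE-Net's actual layers):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention: squeeze -> excite -> rescale.

    x  : feature map of shape (C, H, W)
    w1 : (C // r, C) reduction weights; w2 : (C, C // r) expansion weights
    """
    s = x.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    h = np.maximum(0.0, w1 @ s)            # excitation bottleneck with ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # sigmoid gates, each in (0, 1)
    return x * a[:, None, None]            # rescale each channel by its gate

# Illustrative usage with random weights (reduction ratio r = 4)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
y = channel_attention(x, w1, w2)
```

Because the gates lie in (0, 1), attention can only attenuate channels here; learned weights decide which channels are kept near full strength.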
EXPERIMENT SETUPS
Dataset 20201215
Training: Areas 1~4 (some areas contain 60 slices, some 70)
We assume this is a more difficult setting, since the context of Area 5 is never included in the training set
A potential improvement is to involve depth information (I propose one later for reference)
Optimizer: AdaBelief with Lookahead and Gradient Centralization; gradient-norm clipping is set to 1
LR is 1e-3 for the first 72% of epochs; for the final 28% it switches to cosine decay down to 0
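The LR schedule above can be sketched as a per-epoch function (a hedged sketch; the actual training-loop hooks and optimizer wiring are not shown):

```python
import math

def lr_at(epoch, total_epochs, base_lr=1e-3, const_frac=0.72):
    """Constant LR for the first 72% of epochs, then cosine decay to 0."""
    cut = int(const_frac * total_epochs)
    if epoch < cut:
        return base_lr
    # Progress through the decay phase, t in [0, 1)
    t = (epoch - cut) / max(1, total_epochs - cut)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```

For a 100-epoch run this holds 1e-3 through epoch 71, then decays smoothly toward 0 over the last 28 epochs.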
TRAINING CURVE
Details will be in the attachment
VISUALIZE
Details will be in the attachment
(Figure: example output slices at Z=20, Z=30, Z=40, Z=50)
POTENTIAL IMPROVEMENT (1)
More meaningful loss function
Although L1/L2 losses directly optimize pixel intensities, humans do not perceive an image in a pixel-independent
manner
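One perceptually motivated alternative is SSIM, which compares luminance, contrast, and structure statistics rather than independent pixels. A minimal global-statistics sketch (my simplification; practical implementations use local sliding windows):

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM for two images in [0, 1] (dynamic range L assumed to be 1)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
a = rng.random((16, 16))
b = rng.random((16, 16))
```

A loss can then be formed as 1 - SSIM, possibly combined with L1 to keep intensity fidelity.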
POTENTIAL IMPROVEMENT (2)
Depth-aware model
Intuitively, we could train dedicated weights for each depth z, but it is
impractical and slow to hot-swap the model weights
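Instead of hot-swapping weights, one common alternative (my own sketch, not necessarily the proposal the author has in mind) is to condition a single shared model on z, e.g. FiLM-style modulation from a per-depth embedding table:

```python
import numpy as np

def film_by_depth(features, z, gamma_table, beta_table):
    """Modulate shared features with a per-depth scale and shift (FiLM-style).

    features    : (C, H, W) activations from the shared backbone
    z           : integer slice index
    gamma_table : (Z, C) learned per-depth scales (random stand-ins here)
    beta_table  : (Z, C) learned per-depth shifts
    """
    g = gamma_table[z][:, None, None]
    b = beta_table[z][:, None, None]
    return features * g + b

# Illustrative usage: identity modulation (gamma = 1, beta = 0)
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
gammas = np.ones((60, 8))
betas = np.zeros((60, 8))
out = film_by_depth(feat, 20, gammas, betas)
```

All depths then share one set of backbone weights, so no weight swapping is needed at inference time; only the small (Z, C) tables vary with z.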