
Optical Power-Enhancement Method for Overcoming Photodamage Based on Deep Learning

YI-JIUN SHEN1, EN-YU LIAO2, TSUNG-MING TAI3, YI-HUA LIAO4, CHI-KUANG SUN2, CHENG-KUANG LEE3, SIMON SEE3, AND HUNG-WEN CHEN1,5,6,*
1 International Intercollegiate Ph.D. Program, National Tsing Hua University, Hsinchu, Taiwan
2 Department of Electrical Engineering and Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan
3 NVIDIA AI Technology Center, NVIDIA
4 Department of Dermatology, National Taiwan University Hospital and College of Medicine, National Taiwan University, Taipei, Taiwan
5 Institute of Photonics Technologies, National Tsing Hua University, Hsinchu, Taiwan
6 Center for Measurement Standards, Industrial Technology Research Institute (ITRI), Hsinchu, Taiwan
*hungwen@mx.nthu.edu.tw

Abstract: Overcoming photodamage has been challenging in bio-imaging. We demonstrate a power-enhancement model for harmonic generation microscopy based on deep learning. Given an image acquired at low optical power, the model outputs an image close to the ground-truth image acquired at higher optical power. Consequently, the risk of photodamage can be reduced.

Keywords—Deep learning, phototoxicity, photodamage, nonlinear optics, harmonic generation microscopy (HGM).

1. Introduction
Overcoming phototoxicity and photodamage has been a longstanding issue in biomedical imaging [1-2]. High-quality images in fluorescence microscopy usually come at the cost of long exposure or high light dosages. Imaging under these conditions can cause irreversible photobleaching, increase the phototoxicity experienced by cells, or even destroy them. Harmonic generation microscopy (HGM), on the other hand, does not collect fluorescence signals, so no exogenous dyes are required and the issues they introduce are avoided. Moreover, because the signals arise from virtual transitions, there is no concern about waste-heat accumulation in biological samples, which further reduces phototoxicity. However, high-intensity optical pulses may still damage cells under long-term continuous irradiation. Photodamage can be alleviated in various ways through optimization of the hardware; for example, one can use a laser source with a higher repetition rate and lower pulse energy while maintaining comparable image quality [3].
In this work, we demonstrate a power-enhancement model for HGM based on deep learning. An image obtained at a low input optical power of 25 mW is fed to the model, which predicts an image close to the ground-truth image taken at a higher optical power of 100 mW. In addition, considering the nonlinear optical characteristics of harmonic generation microscopy, we include a hint image as prior knowledge at the model input and find that the performance of the model can be further improved.

2. Methods
HGM is a non-invasive optical imaging technology for biomedicine based on virtual transitions. Unlike traditional optical coherence tomography (OCT) and reflection confocal microscopy (RCM) [4], HGM exploits nonlinear phenomena rather than an aperture to obtain higher spatial resolution. Our home-built HGM system collects both second harmonic generation (SHG) and third harmonic generation (THG) signals [5]. The light source is a Cr:forsterite laser with a pulse width of 38 fs and a repetition rate of 105 MHz. Its center wavelength of 1262 nm penetrates deeper into the skin, avoids melanin absorption, and greatly reduces scattering in skin tissues. SHG signals are generated in biological tissues with non-centrosymmetric structures such as collagen fibers and muscle fibers, while THG signals can be obtained at the interfaces between different materials. HGM can thus supplement single-modality confocal reflection microscopy by providing dual contrasts between skin dermis and epidermis and improving image contrast.
As shown in Fig. 1, we demonstrate a power-enhancement model based on a modified version of U-Net, the COVID-19 pneumonia lesion segmentation network (COPLE-Net) [6]. To deal with complex cell tissues, COPLE-Net retains the overall CNN encoder-decoder architecture and adds several important modules. For example, a bridge layer is added between the encoder and the decoder to reduce the semantic gap between their features. In the down-sampling step, maximum pooling and average pooling are combined to reduce information loss. To adapt to the different sizes of cells, an atrous spatial pyramid pooling (ASPP) block is added at the bottleneck.
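To make this architecture concrete, the following is a minimal PyTorch sketch of an encoder-decoder with combined max/average pooling in the down-sampling path, bridge layers on the skip connections, and an ASPP bottleneck. It is not the authors' exact COPLE-Net implementation; the channel counts, number of scales, and fusion rules are illustrative assumptions.

# Minimal sketch of the power-enhancement network (NOT the exact COPLE-Net code).
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Downsample by combining max pooling and average pooling to reduce information loss."""
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.maxpool, self.avgpool = nn.MaxPool2d(2), nn.AvgPool2d(2)
        self.conv = nn.Sequential(
            nn.Conv2d(ch_in, ch_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch_out, ch_out, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        # element-wise sum of the two pooled feature maps (assumed fusion rule)
        return self.conv(self.maxpool(x) + self.avgpool(x))

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling at the bottleneck to capture cells of different sizes."""
    def __init__(self, ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(ch * len(rates), ch, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class PowerEnhanceNet(nn.Module):
    """Encoder-decoder with 1x1 bridge layers on the skip connections."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc0 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = DownBlock(base, base * 2)
        self.enc2 = DownBlock(base * 2, base * 4)
        self.aspp = ASPP(base * 4)
        # bridge layers narrow the semantic gap between encoder and decoder features
        self.bridge1 = nn.Conv2d(base * 2, base * 2, 1)
        self.bridge0 = nn.Conv2d(base, base, 1)
        self.up1 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 4, base * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up0 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec0 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(base, in_ch, 1)  # predicted high-power image, same channel
    def forward(self, x):
        e0 = self.enc0(x)
        e1 = self.enc1(e0)
        e2 = self.aspp(self.enc2(e1))
        d1 = self.dec1(torch.cat([self.up1(e2), self.bridge1(e1)], dim=1))
        d0 = self.dec0(torch.cat([self.up0(d1), self.bridge0(e0)], dim=1))
        return self.head(d0)

# Quick shape check on a dummy 256 x 256 single-channel HGM image
net = PowerEnhanceNet()
print(net(torch.randn(1, 1, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])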
The human skin samples used in the experiment came from National Taiwan University Hospital (NTUH). The entire protocol was approved by the National Taiwan University Hospital Research Ethics Committee, National Taiwan University Hospital, Taiwan (IRB Approval number-201309036DINC), and informed consent was obtained from each subject prior to study entry. The samples were sliced and then stained with hematoxylin and eosin (H&E). HGM images were captured at 50 different positions of the samples and at two different input optical powers, 25 mW and 100 mW. At every position, 40 slices of 2D-SHG and 2D-THG images were acquired with a depth pitch of 0.6 μm. For each optical power, a total of 2000 images was obtained, each with 256 × 256 pixels covering a field of view of 235 × 235 μm². The images acquired at the 25-mW optical power form the dataset for training and validating the model, and the images acquired at the 100-mW optical power serve as the ground-truth data. The ratio of the training set to the validation set is about 82% to 18%: after random selection, the training set contains 1640 HGM images and the validation set contains 360 HGM images.
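The random 82%/18% split could be reproduced with a short script along the following lines; the file naming and directory layout shown here are assumptions rather than the actual data pipeline.

# Illustrative 82%/18% random split of the 2000 low-power HGM slices.
import random
from pathlib import Path

random.seed(0)
# assumed layout: one image file per 2D slice under data/25mW/
images = sorted(Path("data/25mW").glob("*.png"))
random.shuffle(images)

n_train = int(round(0.82 * len(images)))                  # 1640 images for training
train_set, val_set = images[:n_train], images[n_train:]   # 360 images for validation
print(len(train_set), len(val_set))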

Fig. 1 Power-enhancement model based on the COPLE-Net model [6]

3. Results and Conclusion


As shown in Fig. 1, this model takes a single-channel HGM image, SHG or THG, acquired at the low optical power of 25 mW as its input and predicts the corresponding image of the same channel at the higher optical power of 100 mW. The parameters in the COPLE-Net for each channel are trained separately. We compared the predicted HGM images with the ground-truth HGM images and obtained a peak signal-to-noise ratio (PSNR) of 30.1408 and a structural similarity index (SSIM) of 0.6436. An example of a predicted HGM image and the corresponding ground-truth HGM image is shown in Fig. 1. Furthermore, according to nonlinear optical theory, the SHG signal intensity is proportional to the square of the excitation intensity of the input laser, and the THG intensity is proportional to its cube. Based on this theory, hint images can be generated for each channel and provided together with the original input image, which improves the performance of the model in preliminary experiments. More work is ongoing to quantify these results.
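As an illustration of how such hint images could follow from the scaling laws (SHG ∝ I², THG ∝ I³) and how the PSNR/SSIM figures above are commonly computed, the following sketch uses scikit-image metrics; the power-ratio scaling, clipping, and stand-in data are assumptions rather than the authors' exact procedure.

# Hedged sketch: hint images from the nonlinear scaling laws, plus PSNR/SSIM evaluation.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

P_LOW, P_HIGH = 25.0, 100.0   # input optical powers in mW

def hint_image(low_power_img, order):
    """Scale a low-power image by (P_high/P_low)**order; order=2 for SHG, 3 for THG (assumed rule)."""
    scale = (P_HIGH / P_LOW) ** order
    return np.clip(low_power_img.astype(np.float32) * scale, 0, 255)

# Stand-in data: a synthetic 25-mW THG slice and a synthetic 100-mW ground truth
low = np.random.randint(0, 16, (256, 256)).astype(np.uint8)
gt = np.clip(low.astype(np.float32) * 64 + np.random.randn(256, 256) * 2, 0, 255)
pred = hint_image(low, order=3)   # here the hint image stands in for a model prediction

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")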
In conclusion, we demonstrated a power-enhancement model based on COPLE-Net. With this model, the input power of the HGM system is reduced from 100 mW to 25 mW, which greatly mitigates the photodamage problem. We believe that this model is promising for in-vivo and ex-vivo biomedical imaging applications of HGM.
This work is sponsored by the Ministry of Science and Technology, Taiwan (MOST 110-2321-B-002-011-) and the Ministry of Education, Taiwan (Yushan Young Scholar Program).

4. References
[1] P. P. Laissue, R. A. Alghamdi, P. Tomancak, E. G. Reynaud, and H. Shroff, “Assessing phototoxicity in live fluorescence imaging,”
Nature Methods 14, 657 (2017).
[2] J. Icha, M. Weber, J. C. Waters, and C. Norden, “Phototoxicity in live fluorescence microscopy, and how to avoid it,” BioEssays 39
(2017).
[3] N. Ji, J. C. Magee, and E. Betzig, “High-speed, low-photodamage nonlinear imaging using passive pulse splitters,” Nature Methods 5,
197 (2008).
[4] A. P. Raphael, D. Tokarz, M. Ardigo, and T. W. Prow, “Reflectance Confocal Microscopy and Aging,” in Textbook of Aging Skin, 1
(2016).
[5] J.-H. Lai, E.-Y. Liao, Y.-H. Liao, and C.-K. Sun, “Investigating the optical clearing effects of 50% glycerol in ex vivo human skin by
harmonic generation microscopy,” Scientific Reports 11, 329 (2021).
[6] G. Wang, X. Liu, C. Li, Z. Xu, J. Ruan, H. Zhu, T. Meng, K. Li, N. Huang, and S. Zhang, “A Noise-Robust Framework for Automatic
Segmentation of COVID-19 Pneumonia Lesions From CT Images,” IEEE Transactions on Medical Imaging 39, 2653 (2020).
