Keywords: Interferometry; Convolutional neural network; Phase demodulation

Abstract

The high-precision dynamic measurement of optical surfaces in unstable environments is a critical problem in fabrication and application. To solve this problem, a high-precision, ultra-fast phase demodulation method based on a convolutional neural network is proposed in this paper. The wrapped phase and the corresponding wrap count map are obtained from one interferogram at the same time, so phase extraction and unwrapping are performed simultaneously. It takes only 0.02 seconds to demodulate a single-frame interferogram. Simulation and experimental results show that the root-mean-square error between this algorithm and the phase-shifting algorithm is better than 0.01λ (λ = 632.8 nm), indicating that the proposed method has excellent performance in both measurement accuracy and efficiency.
∗ Corresponding author.
E-mail address: edward_bayun@163.com (H. Shen).
https://doi.org/10.1016/j.optlaseng.2021.106941
Received 24 September 2021; Received in revised form 25 November 2021; Accepted 26 December 2021
Available online 3 January 2022
0143-8166/© 2021 Elsevier Ltd. All rights reserved.
Y. Sun, Y. Bian, H. Shen et al. Optics and Lasers in Engineering 151 (2022) 106941
Fig. 2. Network architecture. (a) The RU-Net architecture. Every blue box represents a feature map of size M × M × N, where M is the number on the left (downsampling path) or right (upsampling path) of the box, and N is the number on top of the box; (b) Structure of the resblock; (c) Structure of the convblock.
Kando et al. recovered the phase from a closed fringe pattern with a CNN, but the relative root-mean-square accuracy is only 0.027 [38]. Liu et al. proposed a demodulation method that extracts the Zernike coefficients of the measured element from a single interferogram [39]. However, this method loses the high-frequency information of the measured part and is not suitable for complex surfaces that cannot be fitted with high precision by Zernike polynomials. In this work, we propose a high-precision and ultrafast phase demodulation method based on a CNN. While extracting the wrapped phase from a single-frame interferogram, the corresponding wrap count distribution is obtained simultaneously, which means that the phase extraction and unwrapping steps are combined, thus avoiding the influence of traditional phase unwrapping algorithms on phase recovery efficiency and accuracy. It has been verified that the demodulation of a single-frame interferogram takes only 0.02 seconds. Simulation and experimental results demonstrate that the accuracy of the proposed algorithm is comparable to that of the phase-shifting algorithm.

2. Principle

2.1. The principle of CNN to restore surface from an interferogram

The flow chart of the proposed method for demodulating the interferogram is shown in Fig. 1. An interferogram is used as the input of the CNN, and the CNN outputs the corresponding wrapped phase and wrap count together. Then, the two outputs are added directly to obtain the unwrapped phase. Finally, the piston, tilt and defocus are eliminated from the unwrapped phase to get the measured surface.

2.2. Network architecture

The network architecture is illustrated in Fig. 2. We improve the conventional U-Net [40] by using resblocks instead of the original convolutional layers, deepening the network while enhancing its gradient propagation ability. For convenience, the revised network is named RU-Net. The input of RU-Net is an interferogram of size 256 × 256 × 1 (height × width × channel). The output has a size of 256 × 256 × 2 (the first channel is the wrapped phase, while the second channel is the corresponding wrap count). The basic units of RU-Net include the convolutional layer, rectified linear unit (ReLU), batch normalization (BN), max pooling layer, and upsampling layer. The convblock consists of a BN, a ReLU and a convolutional layer. Two convblocks and a shortcut form the resblock.

Like U-Net, RU-Net has a downsampling path (left part) and an upsampling path (right part). Each path includes four steps with different resolutions. In the downsampling path, each step consists of a convblock followed by two resblocks and a max pooling layer with a stride of 2. After each downsampling step, the number of feature channels is doubled. In the upsampling path, every step consists of an upsampling layer followed by a convolution that halves the number of feature channels, a concatenation with the corresponding features from the downsampling path, a convblock and two resblocks. Finally, a convolutional layer is used to map the features to the desired output. Considering numerical stability, the tanh function is used in the last layer to limit the output to [-1, 1]. The range [-1, 1] corresponds linearly to the range of the wrapped phase ([-π, π]).

Table 1. Range of Zernike coefficients.
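The building blocks described above can be sketched in pure NumPy. The paper's actual implementation uses TensorFlow 2.0/Keras with multi-channel feature maps and learned kernels; the single-channel maps, fixed kernels, and inference-style normalization below are simplifying assumptions of this sketch.

```python
import numpy as np

def conv2d(x, kernel):
    """'Same'-padded 2D sliding-window product on a single-channel map
    (cross-correlation, as in deep-learning 'convolution'; illustrative, not fast)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize a feature map to zero mean and unit variance (inference-style BN)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

def convblock(x, kernel):
    """BN -> ReLU -> convolution, as described for RU-Net."""
    return conv2d(relu(batch_norm(x)), kernel)

def resblock(x, k1, k2):
    """Two convblocks plus an identity shortcut."""
    return x + convblock(convblock(x, k1), k2)
```

Stacking such resblocks lets gradients flow through the identity shortcut, which is the motivation given above for replacing U-Net's plain convolutional layers.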
2.3. Dataset
Fig. 5. Detailed prediction results of RU-Net for two simulated interferograms. (a1)-(a2) Simulated interferogram; (b1)-(b2) Predicted wrapped phase by RU-Net;
(c1)-(c2) Predicted wrap count by RU-Net; (d1) Unwrapped phase obtained from (b1) and (c1); (d2) Unwrapped phase obtained from (b2) and (c2); (e1)-(e2) Surface
after removing piston, tilt and defocus from (d1) and (d2); (f1)-(f2) Real wrapped phase; (g1)-(g2) Real wrap count; (h1)-(h2) Real unwrapped phase; (i1)-(i2) Real
surface without piston, tilt and defocus; (j1) Residual between (e1) and (i1); (j2) Residual between (e2) and (i2).
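As a concrete illustration of the dataset simulation in this section, the Gaussian reference beam of Eq. (7) and the 30 dB additive noise can be sketched in NumPy. The normalization of the grid coordinates to [-1, 1] and the value of A0 are assumptions of this sketch; the paper additionally adds measured Zygo background noise, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def reference_intensity(n=256, A0=0.4):
    """Gaussian reference beam of Eq. (7) on an n x n grid.

    Grid coordinates are normalized to [-1, 1] (an assumption of this sketch);
    the spot offsets and beam width follow the ranges given in the text."""
    x0, y0 = rng.uniform(-0.2, 0.2, size=2)   # random spot offsets
    sigma = rng.uniform(0.9, 1.1)             # random beam width
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    return A0 * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def add_noise_snr(signal, snr_db=30.0):
    """Add Gaussian noise at the given signal-to-noise ratio (in dB)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

Ir = add_noise_snr(reference_intensity())
```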
The laser light source is usually used in the interferometer, and its intensity is Gaussian distributed. As a result, the reference light intensity can be expressed as

$$I_r(x, y) = A_0 \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right) \tag{7}$$

where $A_0$ denotes the intensity amplitude, and $x_0$ and $y_0$ denote the offsets of the spot in the x and y directions; both are generated randomly in the range [-0.2, 0.2]. $\sigma$ denotes the beam width and is generated randomly in the range [0.9, 1.1]. Combining Eq. (1) and Eq. (4)–Eq. (7), the normalized intensity at any point of the simulated interferogram should satisfy

$$I(x, y) < \left(1 + w + 2\sqrt{w}\right) A_0 \tag{8}$$

In order to give the interferogram good contrast and avoid overexposure, the range of $A_0$ satisfies the following condition:

$$\frac{0.5}{1 + w + 2\sqrt{w}} \le A_0 < \frac{0.95}{1 + w + 2\sqrt{w}} \tag{9}$$

Furthermore, we follow the method shown in Fig. 3 to simulate the background noise of the actual interferometer as closely as possible. First, the reference light intensity of the Zygo interferometer is collected and a Gaussian fit is made to it. Then, the fitting term is subtracted from the reference light intensity; the residual is the background noise of the Zygo interferometer. The noise acquired from the interferometer contains more pixels than the simulated interferogram, so each time we randomly select a 256 × 256 pixel area from it and add it to the simulated reference intensity. Finally, Gaussian noise with a signal-to-noise ratio of 30 dB is added to the simulated reference light to enhance the diversity of the data.

A total of 6144 groups of interferograms and the corresponding wrapped phase and wrap count maps were generated as the dataset, of which 4096 pairs were used as the training set. The validation set has 1024 pairs of data to evaluate the learning effect during training and avoid overfitting the network. The remaining 1024 pairs are used as the test set to verify the performance of the network after training.

2.4. Training

In this task, we use the mean squared error (MSE) as the loss function, defined as follows:

$$Loss = \frac{1}{C}\sum_{i=1}^{C}\left(x_i - y_i\right)^2 \tag{10}$$

where C is the number of samples, and $x_i$ and $y_i$ are the output and truth data, respectively. We use adaptive moment estimation (Adam) to optimize the weights during training. The learning rate (lr) is a variable value defined by Eq. (11):

$$lr = \begin{cases} 2 \times 10^{-4}, & epoch \le 60 \\ 2 \times 10^{-5}, & epoch > 60 \end{cases} \tag{11}$$

RU-Net is trained for a total of 120 epochs, with the batch size set to 8 and 512 steps per epoch. The network was implemented with the TensorFlow framework (version 2.0) and the Keras framework (version 2.3) on a desktop computer with a Core i7-7700K CPU @ 4.2 GHz (Intel) and 64 GB of RAM. The network training and testing were performed on a GeForce GTX 1080Ti GPU (NVIDIA). The training process takes about 28 hours. Fig. 4 shows the curves of the loss function. After about 100 epochs of training, the loss function values of the training set and the validation set converge.

3. Results and discussion

3.1. Simulation

After training, the network is used to predict the test set generated in Section 2.3, and the average prediction time per interferogram is 0.02 seconds. Fig. 5 illustrates the detailed prediction results for two of the simulated interferograms. The first column in Fig. 5 shows the input interferograms. The second to sixth columns in Fig. 5 respectively show the corresponding wrapped phases, wrap counts, unwrapped phases, and final surfaces after eliminating piston, tilt and defocus (the first and third rows are predicted results, while the second and fourth rows are truth data). Fig. 5(j1) and Fig. 5(j2) are the residual errors between the predicted surfaces (Fig. 5(e1), Fig. 5(e2)) and the real surfaces (Fig. 5(i1), Fig. 5(i2)). The PV value of the residual error of the first interferogram is
Fig. 6. Prediction residuals of RU-Net for the whole simulated test set. (a) PV of residuals; (b) RMS of residuals.
Fig. 7. Comparison between RU-Net and the phase-shifting algorithm. (a) Interferograms; (b)-(c) Output wrapped phase and wrap count of RU-Net; (d) Surfaces recovered by RU-Net; (e) Surfaces recovered by the phase-shifting algorithm; (f) Residuals between RU-Net and the phase-shifting algorithm.
0.0543λ and the RMS value is 0.0070λ. The PV and RMS values of the residual error of the second interferogram are 0.0695λ and 0.0096λ. All residuals of RU-Net's prediction results for the test set are shown in Fig. 6. The average PV and RMS values of the residuals are 0.0767λ and 0.0071λ.

3.2. Experiments

The surfaces recovered by the phase-shifting algorithm are shown in Fig. 7(e). The residual errors between RU-Net and the phase-shifting algorithm are shown in Fig. 7(f). The PV and RMS values of the first residual wavefront are 0.0672λ and 0.0100λ; those of the second residual wavefront are 0.0719λ and 0.0106λ. The results demonstrate that RU-Net is feasible for high-precision real-time measurement of optical surfaces.
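The evaluation above rests on two small pieces of arithmetic: adding the two network outputs to recover the unwrapped phase, and reducing a residual map to PV and RMS numbers. A minimal NumPy sketch follows; the ground-truth phase map is purely illustrative, and the 2π scaling of the wrap-count channel is our reading of the direct addition described in Section 2.1.

```python
import numpy as np

def reconstruct_phase(wrapped, wrap_count):
    """Unwrapped phase = wrapped phase + 2*pi * wrap count (direct addition of the two outputs)."""
    return wrapped + 2 * np.pi * np.round(wrap_count)

def pv_rms(residual):
    """Peak-to-valley and root-mean-square of a residual map."""
    return residual.max() - residual.min(), np.sqrt(np.mean(residual ** 2))

# Illustrative ground truth: a smooth phase map and its wrapped representation.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
true_phase = 6 * np.pi * (x ** 2 + y ** 2)           # hypothetical surface, in radians
wrapped = np.angle(np.exp(1j * true_phase))          # wrapped to (-pi, pi]
count = np.round((true_phase - wrapped) / (2 * np.pi))

recovered = reconstruct_phase(wrapped, count)
pv, rms = pv_rms((recovered - true_phase) / (2 * np.pi))  # residual in waves (units of lambda)
```

With error-free wrapped phase and wrap count, the reconstruction is exact up to floating-point precision; the residuals reported in this section come from the network's prediction errors.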
4. Conclusion

Finally, the experiments and data analysis also proved the robustness of our approach. Simulation and experimental results demonstrate that the accuracy of the proposed algorithm is comparable to that of the phase-shifting algorithm, and the RMS value of the residual between them is only 0.01λ. We believe that our CNN-based phase demodulation method will contribute to further improving the efficiency of optical measurement while maintaining accuracy.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

Yue Sun: Conceptualization, Methodology, Software, Investigation, Writing – original draft. Yinxu Bian: Validation, Writing – review & editing, Visualization. Hua Shen: Writing – review & editing. Rihong Zhu: Resources, Supervision.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 62005120), the Natural Science Foundation of Jiangsu Province (Grant No. BK20201305), the Basic Research Program of Jiangsu Province (Grant No. BK20190456), and the Chinese Academy of Sciences (Grant No. KLOMT190101).

References

[1] Malacara D, Servín M, Malacara Z. Interferogram Analysis for Optical Testing. 2nd ed.; 2005.
[2] Carré P. Installation et utilisation du comparateur photoélectrique et interférentiel du Bureau International des Poids et Mesures. Metrologia 1966;2:13.
[3] Wyant JC. Use of an ac heterodyne lateral shear interferometer with real-time wavefront correction systems. Appl. Opt. 1975;14:2622.
[4] de Groot PJ. Vibration in phase-shifting interferometry. J. Opt. Soc. Am. A 1995;12:354–65.
[5] Li J, Zhu R, Chen L, He Y. Phase-tilting interferometry for optical testing. Opt. Lett. 2013;38:2838–41.
[6] Servin M, Cuevas FJ. A novel technique for spatial phase-shifting interferometry. J. Mod. Opt. 1995;42:1853–62.
[7] Servin M, Estrada JC, Medina O. Fourier transform demodulation of pixelated phase-masked interferograms. Opt. Express 2010;18:16090.
[8] Li J, Song L, Chen L, H Z, Gu C. Quadratic polar coordinate transform technique for the demodulation of circular carrier interferogram. Opt. Commun. 2015;336:166–72.
[9] Servin M, Marroquin JL, Quiroga JA. Regularized quadrature and phase tracking from a single closed-fringe interferogram. J. Opt. Soc. Am. A 2004;21:411–19.
[10] Li K, Qian K. Improved generalized regularized phase tracker for demodulation of a single fringe pattern. Opt. Express 2013;21:24385–97.
[11] Vargas J, Quiroga JA, Belenguer T. Phase-shifting interferometry based on principal component analysis. Opt. Lett. 2011;36:1326–8.
[12] Wang Z, Han B. Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms. Opt. Lett. 2004;29:1671–3.
[13] Liu Q, Yuan D, He H. Enhancement of vibration desensitising capability of iterative algorithms for phase-shifting interferometers. Opt. Lasers Eng. 2017;98:31–6.
[14] Liu F, Wu Y, Wu F. Phase shifting interferometry from two normalized interferograms with random tilt phase-shift. Opt. Express 2015;23:19932–46.
[15] Tian C, Liu S. Demodulation of two-shot fringe patterns with random phase shifts by use of orthogonal polynomials and global optimization. Opt. Express 2016;24:3202–15.
[16] Sun Y, Shen H, Li X, Li J, Gao J, Zhu R. Wavelength-tuning point diffraction interferometer resisting inconsistent light intensity and environmental vibration: application to high-precision measurement of a large-aperture spherical surface. Appl. Opt. 2019;58:1253–60.
[17] Kihm H, Kim SW. Fiber-diffraction interferometer for vibration desensitization. Opt. Lett. 2005;30:2059–61.
[18] Hettwer A, Kranz J, Schwider J. Three channel phase-shifting interferometer using polarization-optics and a diffraction grating. Opt. Eng. 2000;39:960–6.
[19] Kwon OY. Multichannel phase-shifted interferometer. Opt. Lett. 1984;9:59–61.
[20] Zhang Z, Dong F, Qian K, Zhang Q, Chu W. Real-time phase measurement of optical vortices based on pixelated micropolarizer array. Opt. Express 2015;23:20521–8.
[21] Wang D, Liang R. Simultaneous polarization Mirau interferometer based on pixelated polarization camera. Opt. Lett. 2016;41:41.
[22] Zheng D, Chen L, Li J, Zhu W, Li J, Han Z. Position registration for interferograms and phase-shifting error calibration in dynamic interferometer. Acta Opt. Sin. 2016;36:0226002.
[23] Miao X, Ma J, Yu Y, Li J, Wei C, Zhu R, Chen L, Yuan C, Wang Q, Zheng D. Modelling and correction for polarization errors of a 600 mm aperture dynamic Fizeau interferometer. Opt. Express 2020;28:33355.
[24] Martinez-Carranza J, Falaggis K, Kozacki T. Fast and accurate phase-unwrapping algorithm based on the transport of intensity equation. Appl. Opt. 2017;56:7079–88.
[25] Guo Y, Chen X, Zhang T. Robust phase unwrapping algorithm based on least squares. Opt. Lasers Eng. 2014;63:25–9.
[26] Reyes-Figueroa A, Flores VH, Rivera M. Deep neural network for fringe pattern filtering and normalization. Appl. Opt. 2021;60:2022–36.
[27] Yan K, Yu Y, Huang C, Sui L, Qian K, Asundi A. Fringe pattern denoising based on deep learning. Opt. Commun. 2019;437:148–52.
[28] Rivenson Y, Zhang Y, Gunaydin H, Da T, Ozcan A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 2018;7:17141.
[29] Sinha A, Lee J, Li S, Barbastathis G. Lensless computational imaging through deep learning. Optica 2017;4:1117.
[30] Zhang J, Tian X, Shao J, Luo H, Liang R. Phase unwrapping in optical metrology via denoised and convolutional segmentation networks. Opt. Express 2019;27:14903.
[31] Zhao Z, Li B, Kang X, Lu J, Liu T. Phase unwrapping method for point diffraction interferometer based on residual auto encoder neural network. Opt. Lasers Eng. 2021;138:106405.
[32] Liu T, de Haan K, Rivenson Y, Wei Z, Zeng X, Zhang Y, Ozcan A. Deep learning-based super-resolution in coherent imaging systems. Sci. Rep. 2019;9.
[33] Meng L, Wang H, Li G, Zheng S, Situ G. Learning-based lensless imaging through optically thick scattering media. Adv. Photonics 2019;1:036002.
[34] Shen H, Gao J. Deep learning virtual colorful lens-free on-chip microscopy. Chin. Opt. Lett. 2020;18:121705.
[35] He T, Zhang Q, Zhou M, Shen J. CVNet: confidence voting convolutional neural network for camera spectral sensitivity estimation. Opt. Express 2021;29:19655–74.
[36] Zhang L, Zhou S, Li J, Yu B. Deep neural network based calibration for freeform surface misalignments in general interferometer. Opt. Express 2019;27:33709–29.
[37] Feng S, Chen Q, Gu G, Tao T, Zhang L, Hu Y, Yin W, Zuo C. Fringe pattern analysis using deep learning. Adv. Photonics 2019;1:025001.
[38] Kando D, Tomioka S, Miyamoto N, Ueda R. Phase extraction from single interferogram including closed-fringe using deep learning. Appl. Sci. 2019;9:3529.
[39] Liu X, Yang Z, Dou J, Liu Z. Fast demodulation of single-shot interferogram via convolutional neural network. Opt. Commun. 2021;487:126813.
[40] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. arXiv:1505.04597; 2015.