∗ Corresponding author.
E-mail addresses: wenqiren9801@163.com (W. Ren), yangtang@ecust.edu.cn (Y. Tang).
https://doi.org/10.1016/j.conengprac.2023.105438
Received 23 October 2022; Received in revised form 26 December 2022; Accepted 13 January 2023
Available online 24 January 2023
0967-0661/© 2023 Elsevier Ltd. All rights reserved.
W. Ren, Q. Sun, C. Zhao et al. Control Engineering Practice 133 (2023) 105438
et al., 2019b, 2020; Sakaridis et al., 2018; Zheng et al., 2021). Nevertheless, since synthetic hazy images fail to depict real-world hazy scenes reliably, there is a domain gap between synthetic and real-world hazy images. Consequently, a model trained on synthetic datasets (Li et al., 2019b, 2020; Sakaridis et al., 2018; Zheng et al., 2021) frequently suffers a performance drop on real hazy images, because the features learned from synthetic domains are sub-optimal for real domains. Although both our work and existing work (Guo et al., 2022; Lee et al., 2020; Wu et al., 2021; Zhang & Patel, 2018) rely on synthetic data, the motivations are different. The existing methods (Guo et al., 2022; Lee et al., 2020; Wu et al., 2021; Zhang & Patel, 2018) aim at exploring the internal information of synthetic domains. Instead, our meta-learning-based model attempts to learn how to extract internal information from diverse synthetic domains. Thus, when applied to real images, our model can quickly distill domain-specific information of real domains to improve the performance on real samples.

Symbolism — Meaning
𝑆_hazy — Domain set of hazy samples
𝑆^𝑖 — The 𝑖th domain
𝐷^𝑖 — The 𝑖th task
𝑥 — Hazy images
𝑦 — Haze-free images
𝜙^𝑖 — Task-specific parameter of 𝐷^𝑖
𝜙^{𝑖*} — Optimal domain-specific parameter of 𝑆^𝑖
𝜑 — Preliminary parameter
𝐸 — Adaptation network
𝐹 — Dehazing network
𝜔1 — Neural weights of the adaptation network
𝜔2 — Neural weights of the dehazing network
𝑀 — Number of sample pairs in each task
𝑁 — Number of sampled tasks
𝐼 — Number of domains in 𝑆_hazy
𝐾 — Number of preliminary parameters of a task
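As a rough, hypothetical illustration of this notation (toy stand-ins only, not the authors' networks; the dimensions and the bodies of E and F are invented for the example), the adaptation/dehazing split can be sketched in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
K, P = 4, 16   # K preliminary parameters per task, P-dimensional parameter (made up)

def E(x):
    """Toy stand-in for the adaptation network E_w1: maps one hazy image
    to a preliminary parameter vector (the real E_w1 is a learned CNN)."""
    return np.full(P, x.mean())

def F(x, phi):
    """Toy stand-in for the dehazing network F_w2(x, phi): the
    domain-specific parameter phi conditions the restoration of x."""
    return np.clip(x - phi.mean(), 0.0, 1.0)

# One task D^i: K hazy samples drawn from the same domain S^i.
task = [rng.random((8, 8)) for _ in range(K)]
prelim = np.stack([E(x) for x in task])   # {varphi_k}_{k=1}^K, shape (K, P)
phi_i = prelim.mean(axis=0)               # a plain-average aggregation
restored = F(task[0], phi_i)              # dehazed estimate for one sample
```

The key structural point the sketch captures is that 𝜔1 and 𝜔2 (here, the fixed bodies of E and F) are shared across domains, while 𝜙^𝑖 is recomputed from each task's own samples.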
2.1.3. Domain adaptation-based approaches

To narrow the domain gap, domain adaptation-based approaches attempt to adopt both synthetic and real hazy images (Chen et al., 2021; Li et al., 2019a; Shao et al., 2020). These approaches employ real hazy images for training (Li et al., 2019a; Shao et al., 2020) or fine-tuning (Chen et al., 2021) to capture the internal information of real domains. Although all of these approaches (Chen et al., 2021; Li et al., 2019a; Shao et al., 2020) have achieved performance gains on real hazy images, they additionally require a large number of real hazy images. In contrast, our meta-learning-based domain generalization framework enables internal learning and addresses the domain shift by exploiting synthetic samples alone, which is significant for the practical application of the model.

2.2. Meta-learning in image restoration

Meta-learning, also known as learning to learn, aims to adapt rapidly to a new scenario from a limited number of samples. It has been applied to diverse computer vision tasks and has achieved significant breakthroughs in recent years. Meta-learning can be categorized into metric-based, model-based, and optimization-based techniques (Huisman et al., 2021). Among them, optimization-based algorithms, especially model-agnostic meta-learning (MAML) (Finn et al., 2017) and its variants (Liu et al., 2019a; Sun et al., 2020), are widely employed in image restoration. Soh et al. (2020) apply MAML to image super-resolution to obtain an optimal model initialization, from which the model can adapt to unseen samples within several test-time training steps. Chi et al. (2021) adopt an auxiliary reconstruction task to optimize the model indirectly to deal with blurred images caused by unseen kernels. To tackle multi-domain learning in image dehazing, Liu et al. (2022) conduct test-time training to enable adaptation to specific domains.

However, test-time training depends heavily on manually designed hyperparameters (e.g., iteration steps and learning rates), which leads to under-fitting on unseen target images (Gao et al., 2022). In addition, test-time training increases the computational cost and runtime of the model (Huisman et al., 2021; Liu et al., 2022), which results in low efficiency in practical applications. To tackle these issues, this paper resorts to model-based meta-learning approaches (Garnelo et al., 2018a, 2018b; Zhang et al., 2021), especially ARM (Zhang et al., 2021), to adapt the model without sensitive and time-consuming test-time training. Compared with ARM (Zhang et al., 2021), we further present a distance-aware aggregator and a domain-relevant contrastive regularization, which encourage the model to extract more representative and discriminative internal information of the given domain.

3. Methodology

3.1. Framework overview

In this paper, we present a novel framework for single image dehazing, which can deal with out-of-distribution domain generalization and enable internal learning on real domains without test-time training. As shown in Fig. 2, our proposed framework includes an adaptation network 𝐸_𝜔1(⋅) and a dehazing network 𝐹_𝜔2(⋅, 𝜙), where 𝜔1 and 𝜔2 stand for the neural parameters of 𝐸_𝜔1(⋅) and 𝐹_𝜔2(⋅, 𝜙), respectively. 𝜙 denotes external variables and serves as an additional input of 𝐹_𝜔2(⋅, 𝜙). Among them, 𝜔1 and 𝜔2 are fixed after meta-training and are shared across domains, whereas 𝜙 varies with the domain properties of the input hazy images and
Fig. 5. Comparison between the average operation and our distance-aware aggregation. The blue ellipse represents the distribution of $\{\varphi_k^i\}_{k=1}^{M}$ estimated by the $M$ samples in $S^i$.
As exhibited in Fig. 2, 𝜙^𝑖 of 𝐷^𝑖 is obtained through our adaptation network 𝐸_𝜔1(⋅) as well as our aggregator. 𝐸_𝜔1(⋅) explores the domain properties from the samples in 𝐷^𝑖 and stores them in a series of preliminary parameters $\{\varphi_k^i\}_{k=1}^{K}$. The aggregator summarizes the internal information hidden in $\{\varphi_k^i\}_{k=1}^{K}$ to obtain 𝜙^𝑖. In this section, we focus on our designed 𝐸_𝜔1(⋅) and discuss our aggregator in the next section.

$$d_k^i = \frac{1}{K-1}\sum_{s=1,\, s\neq k}^{K}\left\|\varphi_k^i - \varphi_s^i\right\|_1, \tag{3}$$

where $\|\cdot\|_1$ denotes the L1 norm. Thus, we can obtain a set of distance values $\{d_k^i\}_{k=1}^{K}$ for the $K$ samples in 𝐷^𝑖. The larger the value of $d_k^i$, the higher the probability that $x_k^i$ is an outlier, and the more the weight coefficient of $\varphi_k^i$ needs to be reduced. Then, we reset the weights of $\{\varphi_k^i\}_{k=1}^{K}$ according to the computed $\{d_k^i\}_{k=1}^{K}$, and obtain 𝜙^𝑖 by summing the reweighted preliminary parameters:

$$\phi^i = \sum_{k=1}^{K}\frac{\exp(-d_k^i)}{\sum_{s=1}^{K}\exp(-d_s^i)}\,\varphi_k^i. \tag{4}$$
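Eqs. (3) and (4) amount to an outlier-damped softmax average. A minimal NumPy sketch (the array shapes and sample values are assumptions for illustration, not the released implementation):

```python
import numpy as np

def distance_aware_aggregate(prelim):
    """Aggregate K preliminary parameters {varphi_k^i} into phi^i.

    prelim: array of shape (K, P), one row per preliminary parameter.
    """
    K = prelim.shape[0]
    # Eq. (3): average L1 distance from varphi_k to the other K-1 vectors.
    d = np.array([
        np.abs(prelim[k] - np.delete(prelim, k, axis=0)).sum(axis=1).mean()
        for k in range(K)
    ])
    # Eq. (4): softmax over the negated distances, so likely outliers
    # (large d_k) receive small weights in the aggregated sum.
    w = np.exp(-d) / np.exp(-d).sum()
    return w @ prelim

# A clear outlier in the third row is strongly down-weighted:
prelim = np.array([[0.0, 0.0], [0.1, 0.1], [10.0, 10.0]])
phi = distance_aware_aggregate(prelim)
```

Here a plain average would give roughly [3.37, 3.37], whereas the distance-aware weights leave the outlier row contributing almost nothing to 𝜙^𝑖.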
where the picked parameter 𝜙^𝑝 has higher confidence scores than 𝜙^𝑖, and 𝜎 is a constant that prevents the denominator from becoming zero.

3.5. Loss function

In addition to the domain-relevant contrastive regularization 𝐿_𝐷𝐶𝑅, four other loss functions are employed in our experiments,

3.5.4. Cross entropy loss

The cross entropy loss 𝐿_𝐶𝐸 is employed to train our classifier and is defined as:

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N} P(\phi^i)\,\ln\!\big(Q(\phi^i)\big), \tag{12}$$

where $P(\phi^i)$ and $Q(\phi^i)$ are the given probability and the estimated probability of 𝜙^𝑖, respectively.
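Eq. (12) can be transcribed directly; the sketch below is illustrative only, and the array-of-probabilities inputs are an assumption:

```python
import numpy as np

def cross_entropy_loss(P, Q, eps=1e-12):
    """Eq. (12): L_CE = -(1/N) * sum_i P(phi^i) * ln Q(phi^i).

    P: given (target) probabilities for the N sampled tasks.
    Q: classifier-estimated probabilities for the same tasks.
    eps guards against taking ln(0).
    """
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    return -np.mean(P * np.log(Q + eps))
```

When the estimated probabilities match confident targets exactly (P = Q = 1), the loss is zero; it grows as Q drifts away from P.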
Fig. 7. Visual comparisons with conventional learning-based methods on real-world hazy images from the RTTS dataset (Li et al., 2019b).
Fig. 8. Visual comparisons with conventional learning-based methods on real-world hazy images from the URHI dataset (Li et al., 2019b).
Fig. 9. Visual comparisons with conventional learning-based methods on real hazy images from the ESPW dataset (Fattal, 2014; He et al., 2010).
Fig. 10. Visual comparisons with domain adaptation-based methods on real hazy images (Li et al., 2019b).
Thirdly, the problem of color distortion (e.g., Fig. 9, column 5) is one of the obstacles hindering the acquisition of high-quality images. These failure cases show that the internal information extracted from synthetic data alone is not sufficient to maintain robustness on real samples. In contrast to the competitors, our proposed model obtains higher-quality results from both global and local perspectives: the restored images exhibit less color distortion and more satisfactory visual perception. Our model is built upon MSBDN (Dong et al., 2020b), yet it is capable of restoring clear objects with less residual haze. The results demonstrate that utilizing the internal information of real domains contributes to boosting the model performance on real hazy images.

Moreover, our model is also compared with domain adaptation-based methods, as shown in Fig. 10. Although real-world hazy images are inaccessible during training, our model achieves visual effects similar to those of DAD (Shao et al., 2020). In addition, the restored images of our model have less residual haze than those of PSD (Chen et al., 2021). The results illustrate that our model obtains performance competitive with domain adaptation-based methods (Chen et al., 2021; Shao et al., 2020).

4.2.2. Quantitative comparison

We further leverage BRISQUE and PaQ-2-PiQ to compare the performance of our proposed model with state-of-the-art competitors. Table 2 reports the quantitative results of each participant on RTTS
Fig. 11. Ablation study of our proposed model with different settings on the real hazy dataset (Li et al., 2019b).
Table 2
Quantitative comparison of the state-of-the-art dehazing models on real hazy datasets (Li et al., 2019b).

| Methods | Year | Real | RTTS BRISQUE↓ | RTTS PaQ-2-PiQ↑ | URHI BRISQUE↓ | URHI PaQ-2-PiQ↑ | #Param | Runtime (s) | FLOPS |
|---|---|---|---|---|---|---|---|---|---|
| Hazy | – | – | 37.011* | 66.054 | 33.531 | 67.254 | – | – | – |
| DAD | 2020 | ✓ | 32.727* | 67.031 | – | – | 54.59M | 0.010 | 195.926G |
| PSD | 2021 | ✓ | 25.239* | 70.430 | – | – | 33.11M | 0.024 | 211.979G |
| AOD-Net | 2017 | | 35.466 | 66.435 | 34.077 | 67.273 | 0.002M | 0.004 | 536.371M |
| GDN | 2019 | | 28.086 | 66.061 | 27.941 | 66.585 | 0.96M | 0.014 | 100.447G |
| MSBDN | 2020 | | 28.743* | 66.197 | 26.617 | 67.851 | 31.35M | 0.021 | 194.550G |
| FFA-Net | 2020 | | 30.183 | 67.110 | 26.141 | 67.688 | 4.68M | 0.087 | 1.348T |
| AECR-Net | 2021 | | 28.594 | 66.197 | 25.879 | 67.946 | 2.61M | 0.028 | 201.665G |
| D4 | 2022 | | 29.536 | 66.677 | 27.429 | 67.588 | 10.70M | 0.032 | 10.445G |
| DeHamer | 2022 | | 30.986 | 66.573 | 28.202 | 67.739 | 4.63M | 0.066 | 219.887G |
| Ours | 2022 | | 27.021 | 67.495 | 24.987 | 68.509 | 31.58M | 0.062 | 198.227G |

"Real" denotes access to real hazy images during the training stage. "*" indicates results taken from the existing paper (Chen et al., 2021). "#Param" stands for the number of neural parameters. "FLOPS" refers to the floating-point operations.
Fig. 12. Visual comparisons on other similar tasks. The inputs of (a) and (d) are downloaded from the Internet, and the inputs of (b) and (c) are from the relevant existing
datasets of nighttime image dehazing (Li et al., 2015; Zhang et al., 2017, 2020a) and underwater image enhancement (Islam et al., 2020).
Table 4
Ablation study of our proposed model with different settings on the URHI dataset (Li et al., 2019b). The settings cover the baseline, the adaptation network (𝐴𝑁_𝐶𝑁𝑁 or 𝐴𝑁_𝐶𝐺−𝐶𝑜𝑛𝑣), the aggregator (𝐴𝑂_𝑚𝑒𝑎𝑛 or 𝐴𝑂_𝐷𝐴𝐴), and the DCR.

| Methods | Baseline | 𝐴𝑁_𝐶𝑁𝑁 | 𝐴𝑁_𝐶𝐺−𝐶𝑜𝑛𝑣 | 𝐴𝑂_𝑚𝑒𝑎𝑛 | 𝐴𝑂_𝐷𝐴𝐴 | DCR | BRISQUE↓ | PaQ-2-PiQ↑ |
|---|---|---|---|---|---|---|---|---|
| Baseline | ✓ | | | | | | 27.043 | 67.277 |
| Our model | ✓ | ✓ | | ✓ | | | 26.494 | 67.368 |
| Our model | ✓ | | ✓ | ✓ | | | 25.760 | 67.645 |
| Our model | ✓ | | ✓ | | ✓ | | 25.574 | 68.171 |
| Our model | ✓ | | ✓ | | ✓ | ✓ | 24.987 | 68.509 |
Table 5
Quantitative results with different 𝜆4 values on the URHI dataset (Li et al., 2019b).

| 𝜆4 | BRISQUE↓ | PaQ-2-PiQ↑ |
|---|---|---|
| 0 | 25.574 | 68.171 |
| 0.5 | 24.987 | 68.509 |
| 1 | 25.143 | 67.700 |
| 2 | 25.342 | 67.612 |

et al., 2020b) fails to recover the remote objects, and some pixels are mishandled (e.g., the road in the image generated by MSBDN (Dong et al., 2020b)). In contrast, the restored image of our model has higher visual perception with less distortion. The comparisons of the other three cases also demonstrate that our contributions to MSBDN (Dong et al., 2020b) not only improve the performance on real hazy samples but also enhance the generalization capability in other similar low-level tasks.
Acknowledgments

This work was supported by National Natural Science Foundation of China (62233005, 62293502), the Program of Shanghai Academic Research Leader, China under Grant 20XD1401300, the Sino-German Center for Research Promotion, China (Grant M-0066), the CNPC Innovation Fund, China under Grant 2021D002-0902, and Shanghai AI Lab, China.

References

Cai, B., Xu, X., Jia, K., Qing, C., & Tao, D. (2016). DehazeNet: An end-to-end system for single image haze removal. IEEE Transactions on Image Processing, 25(11), 5187–5198.
Chen, R., Gao, N., Vien, N., Ziesche, H., & Neumann, G. (2022). Meta-learning regrasping strategies for physical-agnostic objects. arXiv preprint arXiv:2205.11110.
Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In Proceedings of the international conference on machine learning (pp. 1597–1607).
Chen, Z., Wang, Y., Yang, Y., & Liu, D. (2021). PSD: Principled synthetic-to-real dehazing guided by physical priors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7180–7189).
Chi, Z., Wang, Y., Yu, Y., & Tang, J. (2021). Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9137–9146).
Dong, Y., Liu, Y., Zhang, H., Chen, S., & Qiao, Y. (2020). FD-GAN: Generative adversarial networks with fusion-discriminator for single image dehazing. In Proceedings of the AAAI conference on artificial intelligence. Vol. 34 (07), (pp. 10729–10736).
Dong, H., Pan, J., Xiang, L., Hu, Z., Zhang, X., Wang, F., & Yang, M. (2020). Multi-scale boosted dehazing network with dense feature fusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2157–2167).
van Dooren, S., Duhr, P., Amstutz, A., & Onder, C. (2022). Optimal control of real driving emissions. Control Engineering Practice, 127, Article 105269.
Fattal, R. (2014). Dehazing using color-lines. ACM Transactions on Graphics, 34(1), 1–14.
Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the international conference on machine learning (pp. 1126–1135).
Gao, N., Ziesche, H., Vien, N., Volpp, M., & Neumann, G. (2022). What matters for meta-learning vision regression tasks? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14776–14786).
Garnelo, M., Rosenbaum, D., Maddison, C., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y., Rezende, D., & Eslami, S. (2018). Conditional neural processes. In Proceedings of the international conference on machine learning (pp. 1704–1713).
Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D., Eslami, S., & Teh, Y. (2018). Neural processes. arXiv:1807.01622.
Gróf, T., Bauer, P., & Watanabe, Y. (2022). Positioning of aircraft relative to unknown runway with delayed image data, airdata and inertial measurement fusion. Control Engineering Practice, 125, Article 105211.
Guo, C., Yan, Q., Anwar, S., Cong, R., Ren, W., & Li, C. (2022). Image dehazing transformer with transmission-aware 3D position embedding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5812–5820).
He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9729–9738).
He, K., Sun, J., & Tang, X. (2010). Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), 2341–2353.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778).
Huisman, M., Van Rijn, J., & Plaat, A. (2021). A survey of deep meta-learning. Artificial Intelligence Review, 54(6), 4483–4541.
Ifqir, S., Combastel, C., Zolghadri, A., Alcalay, G., Goupil, P., & Merlet, S. (2022). Fault tolerant multi-sensor data fusion for autonomous navigation in future civil aviation operations. Control Engineering Practice, 123, Article 105132.
Islam, M., Xia, Y., & Sattar, J. (2020). Fast underwater image enhancement for improved visual perception. IEEE Robotics and Automation Letters, 5(2), 3227–3234.
Jo, E., & Sim, J. (2021). Multi-scale selective residual learning for non-homogeneous dehazing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 507–515).
Kaleli, A. (2020). Development of the predictive based control of an autonomous engine cooling system for variable engine operating conditions in SI engines: design, modeling and real-time application. Control Engineering Practice, 100, Article 104424.
Lee, B., Lee, K., Oh, J., & Kweon, I. (2020). CNN-based simultaneous dehazing and depth estimation. In Proceedings of the IEEE international conference on robotics and automation (pp. 9722–9728).
Lee, S., Son, T., & Kwak, S. (2022). FIFO: Learning fog-invariant features for foggy scene segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 18911–18921).
Li, L., Dong, Y., Ren, W., Pan, J., Gao, C., Sang, N., & Yang, M. (2019). Semi-supervised image dehazing. IEEE Transactions on Image Processing, 29, 2766–2779.
Li, B., Peng, X., Wang, Z., Xu, J., & Feng, D. (2017). AOD-Net: All-in-one dehazing network. In Proceedings of the IEEE international conference on computer vision (pp. 4770–4778).
Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., & Wang, Z. (2019). Benchmarking single-image dehazing and beyond. IEEE Transactions on Image Processing, 28(1), 492–505.
Li, Y., Tan, R., & Brown, M. (2015). Nighttime haze removal with glow and multiple light colors. In Proceedings of the IEEE international conference on computer vision (pp. 226–234).
Li, R., Zhang, X., You, S., & Li, Y. (2020). Learning to dehaze from realistic scene with a fast physics-based dehazing network. arXiv:2004.08554.
Lin, X., Ma, L., Liu, W., & Chang, S. (2020). Context-gated convolution. In Proceedings of the European conference on computer vision (pp. 701–718). Springer.
Liu, S., Davison, A., & Johns, E. (2019). Self-supervised generalisation with meta auxiliary learning. Advances in Neural Information Processing Systems, 32.
Liu, X., Ma, Y., Shi, Z., & Chen, J. (2019). GridDehazeNet: Attention-based multi-scale network for image dehazing. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 7314–7323).
Liu, H., Wu, Z., Li, L., Salehkalaibar, S., Chen, J., & Wang, K. (2022). Towards multi-domain single image dehazing via test-time training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5831–5840).
McCartney, E. (1976). Optics of the atmosphere: Scattering by molecules and particles. New York: John Wiley and Sons.
Mechrez, R., Talmi, I., & Zelnik-Manor, L. (2018). The contextual loss for image transformation with non-aligned data. In Proceedings of the European conference on computer vision (pp. 768–783).
Mittal, A., Moorthy, A., & Bovik, A. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12), 4695–4708.
Narasimhan, S., & Nayar, S. (2000). Chromatic framework for vision in bad weather. In Proceedings of the IEEE conference on computer vision and pattern recognition. Vol. 1 (pp. 598–605).
Narasimhan, S., & Nayar, S. (2002). Vision and the atmosphere. International Journal of Computer Vision, 48(3), 233–254.
Qin, X., Wang, Z., Bai, Y., Xie, X., & Jia, H. (2020). FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 7.
Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., & Yang, M. (2016). Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the European conference on computer vision (pp. 154–169). Springer.
Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9), 973–992.
Shao, Y., Li, L., Ren, W., Gao, C., & Sang, N. (2020). Domain adaptation for image dehazing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2808–2817).
Soh, J., Cho, S., & Cho, N. (2020). Meta-transfer learning for zero-shot super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3516–3525).
Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., & Hardt, M. (2020). Test-time training with self-supervision for generalization under distribution shifts. In Proceedings of the international conference on machine learning (pp. 9229–9248).
Sun, Q., Yen, G., Tang, Y., & Zhao, C. (2022). Learn to adapt for monocular depth estimation. arXiv preprint arXiv:2203.14005.
Tang, Z., Cunha, R., Cabecinhas, D., Hamel, T., & Silvestre, C. (2021). Quadrotor going through a window and landing: An image-based visual servo control approach. Control Engineering Practice, 112, Article 104827.
Tang, Y., Zhao, C., Wang, J., Zhang, C., Sun, Q., Zheng, W., Du, W., Qian, F., & Kurths, J. (2022). An overview of perception and decision-making in autonomous systems in the era of learning. IEEE Transactions on Neural Networks and Learning Systems.
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
Wu, J., Jin, Z., Liu, A., Yu, L., & Yang, F. (2022). A hybrid deep-Q-network and model predictive control for point stabilization of visual servoing systems. Control Engineering Practice, 128, Article 105314.
Wu, H., Qu, Y., Lin, S., Zhou, J., Qiao, R., Zhang, Z., Xie, Y., & Ma, L. (2021). Contrastive learning for compact single image dehazing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10551–10560).
Xu, L., Zhao, D., Yan, Y., Kwong, S., Chen, J., & Duan, L.-Y. (2019). IDeRs: Iterative dehazing method for single remote sensing image. Information Sciences, 489, 50–62.
Yang, Y., Wang, C., Liu, R., Zhang, L., Guo, X., & Tao, D. (2022). Self-augmented unpaired image dehazing via density and depth decomposition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2037–2046).
Ye, Z., & Yao, L. (2022). Contrastive conditional neural processes. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9687–9696).
Ying, Z., Niu, H., Gupta, P., Mahajan, D., Ghadiyaram, D., & Bovik, A. (2020). From patches to pictures (PaQ-2-PiQ): Mapping the perceptual space of picture quality. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3575–3585).
Zhan, F., Yu, Y., Cui, K., Zhang, G., Lu, S., Pan, J., Zhang, C., Ma, F., Xie, X., & Miao, C. (2021). Unbalanced feature transport for exemplar-based image translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 15028–15038).
Zhang, J., Cao, Y., Fang, S., Kang, Y., & Wen Chen, C. (2017). Fast haze removal for nighttime image using maximum reflectance prior. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7418–7426).
Zhang, J., Cao, Y., Zha, Z., & Tao, D. (2020). Nighttime dehazing with a synthetic benchmark. In Proceedings of the 28th ACM international conference on multimedia (pp. 2355–2363).
Zhang, T., Fu, Y., Wang, L., & Huang, H. (2019). Hyperspectral image reconstruction using deep external and internal learning. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 8559–8568).
Zhang, M., Marklund, H., Dhawan, N., Gupta, A., Levine, S., & Finn, C. (2021). Adaptive risk minimization: Learning to adapt to domain shift. Advances in Neural Information Processing Systems, 34.
Zhang, H., & Patel, V. (2018). Densely connected pyramid dehazing network. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3194–3203).
Zhang, C., Wang, J., Yen, G., Zhao, C., Sun, Q., Tang, Y., Qian, F., & Kurths, J. (2020). When autonomous systems meet accuracy and transferability through AI: A survey. Patterns, 1(4), Article 100050.
Zhang, P., Zhang, B., Chen, D., Yuan, L., & Wen, F. (2020). Cross-domain correspondence learning for exemplar-based image translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5143–5153).
Zhang, H., Zhao, C., & Ding, J. (2022). Online reinforcement learning with passivity-based stabilizing term for real time overhead crane control without knowledge of the system model. Control Engineering Practice, 127, Article 105302.
Zhao, C., Tang, Y., & Sun, Q. (2022). Unsupervised monocular depth estimation in highly complex environments. IEEE Transactions on Emerging Topics in Computational Intelligence, 1–10.
Zhao, C., Zhang, Y., Poggi, M., Tosi, F., Guo, X., Zhu, Z., Huang, G., Tang, Y., & Mattoccia, S. (2022). MonoViT: Self-supervised monocular depth estimation with a vision transformer. In Proceedings of the international conference on 3D vision.
Zheng, Z., Ren, W., Cao, X., Hu, X., Wang, T., Song, F., & Jia, X. (2021). Ultra-high-definition image dehazing via multi-guided bilateral learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16180–16189).