
Knowledge-Based Systems 223 (2021) 107022


A data hiding scheme based on U-Net and wavelet transform



Lianshan Liu a,∗, Lingzhuang Meng a, Yanjun Peng a, Xiaoli Wang b

a College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
b Office of Network Security and Informatization, Shandong University of Science and Technology, Qingdao 266590, China

Article history: Received 8 February 2021; Received in revised form 22 March 2021; Accepted 6 April 2021; Available online 8 April 2021.

Keywords: Deep learning; Neural network; U-Net; Wavelet transform; Data hiding

Abstract: As deep learning was introduced into the field of data hiding, the capacity of data hiding was greatly increased. In addition, the wavelet transform performed well in traditional data hiding, and the hidden image was less perceptible than in spatial-domain schemes. A data hiding scheme based on U-Net and wavelet transform was proposed in this paper, which innovatively combined the advantages of U-Net in processing image detail features with the ability of the wavelet transform to separate image details. Data hiding and extraction were realized through a jointly trained hiding network and extraction network, respectively. The wavelet coefficients of the secret data were hidden in the carrier image through the hiding network, and a hidden image with good visual quality could be obtained. The image features were subdivided into four wavelet coefficients in the extraction network, and finally the secret data image was obtained through the inverse wavelet transform. The experimental results showed that the proposed scheme had better invisibility of the hidden data than traditional schemes, and the data extraction was more accurate.
© 2021 Elsevier B.V. All rights reserved.

1. Introduction

With the advancement of science and technology, people pay more and more attention to information security. Image data hiding is a scheme that hides secret data in natural or medical images [1] and transmits the confidential data through public images. It is mostly used in information encryption and in special fields such as medical and judicial image transmission [2]. Because the confidential data involved is often of high value, data hiding is of great significance in practical applications. Increasing the data hiding capacity and the robustness of hiding schemes has therefore become the focus of research.

Traditional data hiding schemes included spatial domain and transform domain methods. The spatial domain schemes used the redundant information between pixels, including Least Significant Bits (LSB), histogram-based methods, difference expansion, etc. [3,4]. The amount of redundant information that could be used in such schemes was limited, so the capacity was low. The transform domain schemes used the Discrete Cosine Transform (DCT) [5], the Wavelet Transform (WT) [6,7], etc. to move the image into a transform domain, and then used the redundant information between the obtained coefficients, or other schemes, for data hiding.

The wavelet transform performed well in data hiding. Schemes using the wavelet transform [8-10] had good visual effects and invisibility, which gave them great advantages over spatial domain schemes. Meng et al. [10] proposed a reversible watermarking scheme based on the lifting Integer Wavelet Transform (IWT), which could achieve completely reversible extraction and recovery and guarantee the validity of the secret data.

As traditional schemes could hardly meet the increasing security and capacity requirements, the latest research focused on the use of neural networks [11] or augmented reality [12] for data hiding. Compared with traditional data hiding and watermarking schemes, neural network schemes tended to have higher capacity and robustness. Most of the previous watermarking schemes embedded binary data in gray-scale or color images [13,14], while neural network schemes more often hid a grayscale image in a color image, or even a color image in a color image. At the same time, the use of neural networks for data hiding provided good robustness and the ability to resist traditional geometric attacks [15,16].

Baluja [17] proposed a steganography scheme using deep learning: Deep Steganography. This scheme used a Preparation Network to normalize the image, then embedded the secret data through a Hiding Network, and finally recovered the secret image through a Reveal Network. Subsequently, CNNs were introduced into the field of data hiding [18-20]. Rehman et al. [18] used a CNN to realize data hiding and extraction; they proposed an encoder and a decoder and used a joint loss function for end-to-end training. Chen et al. [20] proposed a highly robust optimization of a scheme similar to [18], which not only had a large capacity but also resisted traditional image attacks.

∗ Corresponding author.
E-mail address: liulianshan@sdust.edu.cn (L. Liu).

https://doi.org/10.1016/j.knosys.2021.107022

Liu et al. [21,22] combined DenseNet with wavelet sequence mapping and proposed a coverless image steganography algorithm. Such a scheme could effectively resist existing steganalysis tools, and its robustness was high; however, its hiding capacity was very small. Data hiding schemes based on GAN networks [23,24] and Faster-RCNN [25] both had good visual effects, but they also faced the problem of small capacity. Duan et al. [26] proposed an information hiding scheme based on a Residual Network, which used residual block connections to supplement missing features and had a larger hiding capacity than similar solutions.

Ronneberger et al. [27] proposed U-Net for biomedical image segmentation in 2015. This network could not only extract and analyze image features, but also restore an image with a high degree of similarity to the original image from the underlying and high-level features. Due to this good recovery effect, U-Net performed well in applications such as image segmentation [28] and image generation [29]. In the latest research, Ouahabi et al. [30] proposed a network for semantic segmentation of medical images, FCdDNet. This scheme improved segmentation efficiency and robustness while ensuring high accuracy, which was a considerable improvement over U-Net. Gu et al. [31] proposed the CE-Net network structure for medical image segmentation, which had certain advantages over U-Net in some detail segmentation tasks.

Duan et al. [32] proposed a scheme that used U-Net for data hiding. This scheme realized hiding a color image in a color image, and had a better visual effect and a larger embedding capacity.

Based on the existing U-Net network and the application of neural networks to data hiding, a data hiding scheme based on U-Net and wavelet transform was proposed in this paper. The proposed scheme adopted a fully convolutional network structure similar to U-Net, and innovatively used the wavelet transform and the inverse wavelet transform in the hiding and extraction networks, respectively. The ability of the wavelet transform to separate image details and the ability of U-Net to extract and refine image features were combined in this scheme, so that a hiding network with a good hiding effect and an extraction network with a good recovery effect were obtained. Experimental results showed that the proposed scheme had better visual effects and performance than other schemes at a similar embedding rate.

2. Related works

2.1. U-Net

U-Net was proposed by Ronneberger et al. in 2015 and was widely used in medical image segmentation and natural image generation. Its network structure was shown in Fig. 1. The network used four downsampling and four upsampling stages, and cropped and connected the stages of the same level. The combination of convolutional and pooling layers realized the network's extraction and recognition of image features. The network input size was 572×572, but the output size was 388×388, which showed that after network processing the output did not completely correspond to the original image.

Fig. 1. U-Net network structure.

In U-Net, the scheme of replicating and connecting features from the bottom of the network to the high-level layers played a key role. This type of skip connection added semantic information at the high level and refined the contours at the bottom level, so the resulting image was more refined. Therefore, the image obtained by using U-Net for image segmentation or image generation was highly consistent with the detailed features of the original image; the network could capture some abstract features of the image and improve the final visual effect.

2.2. Wavelet transform

The wavelet transform (WT) was a true mathematical microscope, which allowed the exponent spectrum to be used to describe the complexity of ''monofractal'' and ''multifractal'' objects [33], and it was widely used in image processing. The image was transformed from the spatial domain to the wavelet domain through the WT. The coefficients obtained by the WT contained different detailed features of the original image. At this point, the operation on the image was no longer a simple LSB change, but an operation on the wavelet coefficients.

This article used the ''haar'' wavelet transform (HWT) [34] to process the image. The HWT could be expressed in the form A = H B H^T, where B was the N × N image matrix and H was the N × N ''haar'' transformation matrix. From this, the N × N haar basis functions h_k(Z) were generated, defined on the interval Z ∈ [0, 1] for k = 0, 1, 2, ..., N − 1, where N = 2^n:

h_0(Z) = h_{00}(Z) = \frac{1}{\sqrt{N}}    (1)

h_k(Z) = h_{pq}(Z) = \frac{1}{\sqrt{N}} \begin{cases} 2^{p/2}, & \frac{q-1}{2^{p}} \le Z < \frac{q-0.5}{2^{p}} \\ -2^{p/2}, & \frac{q-0.5}{2^{p}} \le Z < \frac{q}{2^{p}} \\ 0, & \text{otherwise} \end{cases}    (2)

where k = 2^p + q − 1 with 0 ≤ p ≤ n − 1; q = 0 or 1 when p = 0, and 1 ≤ q ≤ 2^p when p ≠ 0. When N = 4, the wavelet transform matrix H_4 could be obtained as follows:

H_4 = \frac{1}{\sqrt{4}} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ \sqrt{2} & -\sqrt{2} & 0 & 0 \\ 0 & 0 & \sqrt{2} & -\sqrt{2} \end{bmatrix}    (3)

The steps of the two-order ''haar'' wavelet transform of an image were as follows: first, the ''haar'' wavelet transform was performed on the rows of the pixel matrix to obtain two matrices LF and HF, and then the ''haar'' wavelet transform was performed on the columns of the two matrices to obtain the four components A, H, V and D. The transformation order of rows and columns did not affect the result. The schematic diagram of image decomposition and restoration by the two-order ''haar'' wavelet transform was shown in Fig. 2.

The ''haar'' wavelet transform divided the image into four kinds of details. Among the obtained components, A was the approximation component, which contained the most energy and was the closest to the original image. H and V were the horizontal and vertical components, which contained the detailed features in the horizontal and vertical directions. D was the diagonal component, containing only part of the contour, namely the features in the diagonal direction.

Fig. 2. Schematic diagram of ‘‘haar’’ wavelet transform (HWT) and inverse transform (iHWT).
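For readers who want to reproduce the decomposition and restoration sketched in Fig. 2, the snippet below performs a single-level 2-D ''haar'' transform and its inverse. It uses the PyWavelets package, which is an assumption made here for illustration; the paper does not state which implementation was used.

```python
import numpy as np
import pywt

# Load or create a grayscale image as a float array (a random stand-in here).
image = np.random.rand(256, 256).astype(np.float32)

# Single-level 2-D "haar" decomposition: A (approximation) and H, V, D (details).
A, (H, V, D) = pywt.dwt2(image, 'haar')
print(A.shape, H.shape, V.shape, D.shape)  # each component is 128x128

# The inverse transform restores the image up to floating-point error.
restored = pywt.idwt2((A, (H, V, D)), 'haar')
assert np.allclose(restored, image, atol=1e-5)
```

The four 128×128 components returned here correspond to the A, H, V and D components that the hiding and extraction networks operate on in Section 3.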

3. Proposed scheme

Since U-Net and the WT both had advantages in image processing, a data hiding network structure based on U-Net and wavelet transform was proposed in this paper. The proposed scheme innovatively introduced the wavelet transform into the neural network structure, and used layer skip connections similar to those of U-Net to improve the visual effect of the hidden image. At the same time, by connecting different wavelet components to different feature layers, the invisibility of the secret data in the carrier image was improved. The scheme included a data hiding network and a data extraction network. The loss functions of the two networks were weighted and summed to obtain the loss function used during training.

3.1. Hiding network

The hiding network used a network structure similar to U-Net to hide the secret data in the carrier image. The specific structure was shown in Fig. 3. An RGB carrier image with a size of 256×256×3 and a gray-scale secret image with a size of 256×256 were used as the network input, and an RGB image with a size of 256×256×3 was produced as the output. The network contained multiple layer skip connections and did not use cropping, which preserved the characteristics of the carrier image to the greatest extent and improved the visual effect of the resulting image. The secret image was first subjected to the WT, and then the four components were respectively passed through convolution operations and connected into the network. Since the energy contained in the four components was different, they were connected to different network layers after convolutional feature extraction. Such a connection scheme not only retained the characteristics of the secret image, but also minimized the impact on the carrier image.

The structure of the hiding network included 7 downsampling and 7 upsampling stages, which were implemented by convolution operations (Conv) and deconvolution operations (DeConv), respectively. Among them, downsampling used ReLU as the activation function, upsampling used LeakyReLU as the activation function, and the output layer used Sigmoid as the activation function. Except for the first layer, the last layer, and the convolution operation at the bottom of the ''U'' shape, all other layers used BN to speed up network training. All convolutions and deconvolutions used 4×4 convolution kernels with a step size of 2.

3.2. Extraction network

The extraction network was designed as a fully convolutional network. The input was an RGB image with a size of 256×256×3, and the output was a gray-scale secret image with a size of 256×256. The specific structure was shown in Fig. 4. In the extraction network, the bottom-layer features were divided into four parts (each part had the same parameters and was regarded as the same layer), and four components with a size of 128×128 were obtained respectively. The secret image was then obtained through the inverse WT.

The extraction network used three convolutional layers and four deconvolutional layers (including ten deconvolution operations). The convolution layers used ReLU as the activation function, the deconvolution layers used LeakyReLU as the activation function, and both the convolutions and deconvolutions used BN to accelerate training (except for the last deconvolution layer). The first five layers (convolution or deconvolution) used a 4×4 convolution kernel with a step size of 2. The convolution kernel used in the last two deconvolution layers was 3×3, with a step size of 1.

3.3. Loss function

The evaluation criteria of traditional image data hiding schemes included PSNR, MSE, etc., which were used to quantify the difference between the original carrier image and the hidden image, or the difference between the secret data and the extracted data. Therefore, the MSE was used as the model loss function in this paper. In the hiding network, the MSE was used to measure the difference between the carrier image and the hidden image, and in the extraction network, the MSE was used to measure the difference between the secret data and the extracted data. The MSE function was as follows:

MSE(I, I') = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} (I_{i,j} - I'_{i,j})^2    (4)

where I and I' were the two matrices compared by the MSE operation, and M and N were the length and width of the matrices. The loss function of the data hiding network was expressed as:

loss1 = MSE(I, I_a)    (5)

where I and I_a respectively represented the original carrier image and the hidden image. The loss function of the data extraction network was expressed as:

loss2 = MSE(A, A') + MSE(H, H') + MSE(V, V') + MSE(D, D')    (6)

where A and A' represented the approximation components obtained by performing the ''haar'' wavelet transform on the secret data image and on the extracted data, respectively. Similarly, H, H', V, V' and D, D' represented the horizontal, vertical and diagonal components of the secret data and the extracted data.

Fig. 3. Hiding network structure diagram.

Fig. 4. Extraction network structure diagram.
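To make the structure of Section 3.1 more concrete, the sketch below builds a much smaller U-Net-like hiding network in PyTorch and concatenates the wavelet components of the secret image into one encoder depth. The layer counts, channel widths and injection points are simplified assumptions chosen for readability; they do not reproduce the exact architecture of Fig. 3.

```python
import torch
import torch.nn as nn

def down(cin, cout):   # 4x4 convolution, stride 2: halves the spatial size
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def up(cin, cout):     # 4x4 deconvolution, stride 2: doubles the spatial size
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, inplace=True))

class HidingNetSketch(nn.Module):
    """Simplified hiding network: a carrier encoder-decoder with skip
    connections, plus the secret image's wavelet components (A, H, V, D)
    injected at one encoder depth (a simplification of Fig. 3)."""
    def __init__(self):
        super().__init__()
        self.d1 = down(3, 64)            # 256 -> 128
        self.d2 = down(64 + 4, 128)      # wavelet components concatenated here
        self.d3 = down(128, 256)         # 64 -> 32
        self.u3 = up(256, 128)           # 32 -> 64
        self.u2 = up(128 + 128, 64)      # skip connection from d2
        self.u1 = up(64 + 64, 32)        # skip connection from d1
        self.out = nn.Sequential(nn.Conv2d(32, 3, 3, 1, 1), nn.Sigmoid())

    def forward(self, carrier, wavelet_components):
        # carrier: (B, 3, 256, 256); wavelet_components: (B, 4, 128, 128)
        e1 = self.d1(carrier)                                      # (B, 64, 128, 128)
        e2 = self.d2(torch.cat([e1, wavelet_components], dim=1))   # (B, 128, 64, 64)
        e3 = self.d3(e2)                                           # (B, 256, 32, 32)
        x = self.u3(e3)                                            # (B, 128, 64, 64)
        x = self.u2(torch.cat([x, e2], dim=1))                     # (B, 64, 128, 128)
        x = self.u1(torch.cat([x, e1], dim=1))                     # (B, 32, 256, 256)
        return self.out(x)                                         # hidden image

# Quick shape check with random inputs.
net = HidingNetSketch()
hidden = net(torch.rand(1, 3, 256, 256), torch.rand(1, 4, 128, 128))
print(hidden.shape)  # torch.Size([1, 3, 256, 256])
```

The extraction network would follow the same building blocks in reverse, ending with four 128×128 outputs that are passed to the inverse wavelet transform, as described in Section 3.2.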

During the training process, it was necessary for each task to learn at a similar speed, that is, the loss levels of the different tasks needed to be close. Therefore, this was achieved by a weighted summation of the losses of the different tasks, as follows:

Loss = µ × loss1 + (1 − µ) × loss2    (7)

where µ ∈ (0, 1) represented the weight of each task. The loss function used the Adam optimization algorithm and backpropagation to obtain the optimal network parameters. At the same time, the network learning rate lr was attenuated according to the epoch, as follows:

lr = lr × λ^n,  n = epoch / epoch_size    (8)

where λ was the decay rate and epoch_size was the number of epochs in each learning rate decay interval. The data hiding and extraction networks were trained jointly, that is, the output of the hiding network was used as the input of the extraction network. In the process of continuously optimizing the learning and adjusting the learning rate, the two corresponding network models with the lowest image difference were obtained.

4. Experimental and results analysis

In this paper, the network used images from the VOC 2012 dataset and the ImageNet dataset for training. Among them, 60,000 images were used for training and 7000 images were used for testing. The secret images were selected from traditional data hiding experimental images and some images in ImageNet (the grayscale images were converted from the color images). The experimental environment was Python 3.6 + PyTorch, and the hardware used a Tesla V100 GPU. In the continuous training process, the following optimal parameters were obtained: the initial network learning rate lr = 0.001, the task weight µ = 0.75, and the number of iterations epoch = 2000.

In order to verify the algorithm, the visual effects, histograms, residual images, PSNR and other aspects were analyzed in this paper. Fig. 5 listed some of the carrier images and secret data images in the test set.
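Combining the joint loss of Eq. (7), the learning-rate decay of Eq. (8) and the settings listed above (lr = 0.001, µ = 0.75, epoch = 2000), one possible training step could be sketched as follows. Here hiding_net, extraction_net and loader are hypothetical placeholders for the models and data pipeline of Section 3, and the decay rate λ and epoch_size are illustrative values that the paper does not report.

```python
import torch
import torch.nn.functional as F

def train(hiding_net, extraction_net, loader, mu=0.75, lr0=1e-3,
          decay=0.9, epoch_size=100, epochs=2000):
    """Joint training sketch: Eq. (7) loss weighting and Eq. (8) lr decay.
    `loader` is assumed to yield (carrier, secret_wavelets, (A, H, V, D));
    `decay` and `epoch_size` are assumed values for illustration only."""
    params = list(hiding_net.parameters()) + list(extraction_net.parameters())
    optimizer = torch.optim.Adam(params, lr=lr0)
    for epoch in range(epochs):
        # Eq. (8), interpreted as a step decay: lr = lr0 * decay ** (epoch // epoch_size)
        for group in optimizer.param_groups:
            group['lr'] = lr0 * decay ** (epoch // epoch_size)
        for carrier, secret_wavelets, (A, H, V, D) in loader:
            hidden = hiding_net(carrier, secret_wavelets)       # stego image
            A2, H2, V2, D2 = extraction_net(hidden)             # recovered components
            loss1 = F.mse_loss(hidden, carrier)                 # Eq. (5)
            loss2 = (F.mse_loss(A2, A) + F.mse_loss(H2, H) +
                     F.mse_loss(V2, V) + F.mse_loss(D2, D))     # Eq. (6)
            loss = mu * loss1 + (1 - mu) * loss2                # Eq. (7)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```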

Fig. 5. Examples of partial test set images (upper row) and secret data images (lower row).

4.1. Visual effects

Fig. 6 showed the experimental images and the corresponding pixel histograms, including the original carrier image, the hidden image, the secret image and the extracted image. According to the images in Fig. 6, it could be seen that there was no major visual difference between the carrier image and the hidden image, and there was no visible difference between the secret data and the extracted data.

Generally speaking, the histogram represents the distribution of pixel values in an image, and the degree of modification of the image could be judged from the histogram. According to the histograms in Fig. 6, the pixel histograms of the original image and the hidden image were not very different, so the invisibility of the hiding scheme was good. At the same time, according to the histograms of the secret image and the extracted image, it could be seen that the difference between the secret image and the image extracted by the proposed scheme was very small. Therefore, the proposed scheme performed well in terms of visual effects.

4.2. Security analysis

The residual image could directly show the visual difference between two images, and it could also show whether the hidden image contained relevant information about the secret image. If the residual image contained such information, then the scheme was insecure. Fig. 7 showed the residual image between the original image and the hidden image, the residual image enlarged by 20 times, and the secret image.

According to Fig. 7, the residual images between the original carrier image and the hidden image showed only an afterimage of the carrier image, with no similarity to the secret image. Therefore, it was difficult to obtain useful semantic information from the residual image in this scheme, which avoided the leakage of secret data through residual images.

At the same time, combined with the histogram comparison in Fig. 6, the histogram of the hidden image had no correlation with the histogram of the secret image. It was impossible to judge from the histogram whether secret data was hidden in the image, and the secret image could not be extracted from it. Therefore, the secret hidden in the image had good security. It could be seen that this scheme belonged to the class of coverless image steganography schemes, and the security of the proposed scheme was relatively high.

4.3. Hidden capacity analysis

The scheme proposed in this paper hid a grayscale image in a color image, which was a certain improvement compared with some traditional spatial domain schemes. The effective capacity (EC) was used to indicate the embedding rate of the secret data, which could reflect the degree of data embedding. The effective capacity was calculated as follows:

EC = \frac{NS}{NC}    (9)

where NS was the size of the secret data (in bits) and NC was the number of pixels of the carrier image used for embedding. Table 1 showed the EC comparison between the proposed scheme and similar schemes. The schemes used for comparison included traditional schemes and neural network schemes.

According to Table 1, the maximum capacity of the traditional data hiding schemes was 2 bpp. Compared with the traditional data hiding schemes [1,10,13], the proposed scheme had a larger effective capacity. The maximum capacity of the data hiding schemes using neural networks was 8 bpp, which meant that each pixel hid 8 bits of data. The proposed scheme achieved the maximum capacity of the schemes [18,24] that used neural networks to hide grayscale images in color images, so the hiding capacity of the proposed scheme reached the theoretical maximum.

4.4. Image quality

Peak Signal to Noise Ratio (PSNR) was an objective evaluation index of image quality, which was widely used in fields such as data hiding. A higher PSNR value indicated that the image distortion was small and the image quality after hiding was better. It could also describe the difference between the extracted secret image and the original secret data. The PSNR was calculated as follows:

PSNR = 10 \times \log_{10} \frac{(2^{n} - 1)^2}{MSE(I, I_a)}    (10)

where I was the original carrier image or the original secret data image, and I_a was the hidden image or the extracted secret image.

The Structural Similarity (SSIM) index was a metric based on the Human Visual System (HVS) to quantify the degradation of structural information between two images.

Fig. 6. Comparison of experimental image and histogram.

Table 1
Comparison of effective capacity between the proposed scheme and similar schemes.
Category | Scheme | Carrier image size (pixels) | Secret data size (bits) | EC (bpp)
Traditional | Gao et al. [1] | 256×256 | 132,126 | 2
Traditional | Meng et al. [10] | 512×512 | <256×256×8 | <2
Traditional | Pakdaman et al. [13] | 512×512 | <128×128×8 | <1
Neural network | Rehman et al. [18] | 300×300 (RGB) | 300×300×8 | 8
Neural network | Yang et al. [23] | 256×256 | 256×256 | <1
Neural network | Zhang et al. [24] | 256×256 (RGB) | 256×256×8 | 8
Neural network | Zhou et al. [25] | 256×256 (RGB) | 25×8 | <1
 | Proposed | 256×256 (RGB) | 256×256×8 | 8
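As a quick check of Eq. (9) for the proposed scheme in Table 1: the 256×256 grayscale secret image contributes 256×256×8 bits, and the carrier offers 256×256 pixel positions (the convention implied by the table), which gives 8 bpp.

```python
NS = 256 * 256 * 8   # secret data size in bits (256x256 grayscale image)
NC = 256 * 256       # number of carrier pixel positions used
print(NS / NC)       # EC = 8.0 bpp, matching the last row of Table 1
```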

The SSIM index could be used as a quantitative measure of image quality, and it was composed of luminance l, contrast c and structure s [35]. The metric was defined as follows:

SSIM(I, I_a) = l(I, I_a) \times c(I, I_a) \times s(I, I_a)    (11)

The luminance comparison l(I, I_a) was calculated as follows:

l(I, I_a) = \frac{2\mu_I \mu_{I_a} + C_1}{\mu_I^2 + \mu_{I_a}^2 + C_1}    (12)

where µ_I and µ_{I_a} were the average values of I and I_a, respectively, and C_1 was a small constant that kept the denominator from being zero. The contrast comparison c(I, I_a) was calculated as follows:

c(I, I_a) = \frac{2\sigma_I \sigma_{I_a} + C_2}{\sigma_I^2 + \sigma_{I_a}^2 + C_2}    (13)

where σ_I² and σ_{I_a}² were the variances of I and I_a, respectively, σ_I and σ_{I_a} were the corresponding standard deviations, and C_2 was a stabilizing constant like C_1. The structure comparison s(I, I_a) was calculated as follows:

s(I, I_a) = \frac{\sigma_{I I_a} + C_3}{\sigma_I \sigma_{I_a} + C_3}    (14)

where σ_{I I_a} was the covariance of I and I_a, and C_3 was likewise a stabilizing constant.

Table 2 showed the average PSNR and SSIM values of the model on the test set, compared with the latest neural network data hiding schemes or data hiding schemes with a similar hiding capacity. The PSNR value reflected the degree of distortion, and it is usually considered that the visual effect of an image is poor when the PSNR is lower than 30 dB. The value of SSIM was between −1 and 1; if the SSIM value of two images was 1, it meant that the two images were exactly the same.

Fig. 7. The residual image between the original carrier image and the hidden image, and the secret data image.
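The PSNR and SSIM values reported in Table 2 can be computed with standard implementations; a small sketch using scikit-image (an assumed tool, not necessarily the one used by the authors, and channel_axis requires a recent version) is shown below.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# carrier/hidden stand in for the original image and the stego image (8-bit arrays).
carrier = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
noise = np.random.randint(-2, 3, carrier.shape)
hidden = np.clip(carrier.astype(int) + noise, 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(carrier, hidden, data_range=255)                  # Eq. (10)
ssim = structural_similarity(carrier, hidden, channel_axis=-1, data_range=255)   # Eq. (11)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```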

Table 2
Comparison of PSNR and SSIM between the proposed scheme and similar schemes.
Scheme | Metric | Hidden image | Extracted image
Rehman et al. [18] | PSNR (Average) | 32.5 | 34.7571
Rehman et al. [18] | SSIM (Average) | 0.9371 | 0.93
Li et al. [19] | PSNR (Average) | 42.3 | 38.45
Li et al. [19] | SSIM (Average) | 0.987 | 0.953
Chen et al. [20] | PSNR (Max) | 37.35 | 31.38
Chen et al. [20] | SSIM (Max) | – | –
Zhang et al. [24] | PSNR (Max) | 34.89 | 39.9
Zhang et al. [24] | SSIM (Max) | 0.9681 | 0.96
Duan et al. [32] | PSNR (Average) | 40.4716 | 40.6665
Duan et al. [32] | SSIM (Average) | 0.9794 | 0.9842
Proposed | PSNR (Average) | 39.7708 | 43.3571
Proposed | SSIM (Average) | 0.9828 | 0.9862

It could be seen from Table 2 that the PSNR values of the proposed scheme were all greater than 30 dB, so the hidden images and the extracted images had good visual effects. Among the comparison schemes, the PSNR and SSIM values of the hidden image produced by this scheme were slightly lower than those of the schemes [19,32], but higher than those of the schemes [18,20,24], and the PSNR and SSIM values of the extracted image were the highest. Therefore, compared with similar works, the quality of the hidden image produced by the proposed scheme was good, the pixel difference between the hidden image and the original carrier image was small, and the pixel difference between the extracted image and the original secret image was also small. Hence, the proposed scheme had advantages over the other schemes under the PSNR and SSIM criteria.

4.5. Robustness analysis

Experiments were performed for common non-geometric attacks, including cropping, salt-and-pepper noise, and Gaussian filtering. Among them, the cropping attacks removed 1/4 and 1/2 of the image, the salt-and-pepper noise attacks used noise densities of 0.001 and 0.005, and the Gaussian filtering used template sizes of 3×3 and 7×7. The hidden image was attacked and then used to extract the secret data. The example images and the average PSNR and SSIM were obtained and are shown in Table 3.

According to Table 3, it could be seen that the proposed scheme performed well against non-geometric attacks.

Table 3
Diagrams and data of attack experiments.
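The non-geometric attacks listed in Table 3 could be simulated, for example, as in the sketch below; the parameter choices mirror those of Section 4.5, and the use of OpenCV and scikit-image is an assumption made for illustration rather than the authors' tooling.

```python
import cv2
import numpy as np
from skimage.util import random_noise

def attack(stego, kind):
    """Apply one non-geometric attack from Section 4.5 to a stego image in [0, 1]."""
    out = stego.copy()
    if kind == "crop":                  # cropping attack: blank out 1/4 of the image
        h, w = out.shape[:2]
        out[:h // 2, :w // 2] = 0.0
    elif kind == "salt_pepper":         # salt-and-pepper noise with density 0.005
        out = random_noise(out, mode="s&p", amount=0.005)
    elif kind == "gaussian_blur":       # Gaussian filtering with a 3x3 template
        out = cv2.GaussianBlur(out, (3, 3), 0)
    return out

stego = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in for a hidden image
for kind in ("crop", "salt_pepper", "gaussian_blur"):
    print(kind, attack(stego, kind).shape)
```

The attacked stego image would then be passed to the extraction network to check whether the recovered secret image is still recognizable, as discussed next.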

When the hidden image was cropped, had noise added, or was filtered, the image extracted from it was still a recognizable secret image. In these attack experiments, the scheme performed best against the noise attack, which is common in the image transmission process. Therefore, the proposed scheme was able to deal with non-geometric attacks and had a certain robustness against them.

5. Conclusions

A data hiding scheme based on U-Net and wavelet transform was proposed in this paper. This scheme innovatively combined the wavelet transform with a U-Net-style network, and realized the data hiding and extraction networks through two fully convolutional networks. The image containing the secret data sent by the sender had no visual difference from the original image in the eyes of the receiver, but the secret data extracted through the extraction network was clear. Experimental results showed that the proposed algorithm had better visual effects and that the secret data was not reflected in the carrier image. Therefore, the proposed scheme had advantages compared with some current schemes. Follow-up work will further study robustness.

CRediT authorship contribution statement

Lianshan Liu: Methodology, Formal analysis, Resources, Writing - review & editing, Visualization, Data curation. Lingzhuang Meng: Conceptualization, Methodology, Software, Validation, Writing - original draft. Yanjun Peng: Funding acquisition, Supervision. Xiaoli Wang: Investigation, Project administration.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 61976126) and the Shandong Natural Science Foundation, China (No. ZR2019MF003).

References

[1] G.Y. Gao, S.K. Tong, Z.H. Xia, B. Wu, L.Y. Xu, Z.Q. Zhao, Reversible data hiding with automatic contrast enhancement for medical images, Signal Process. 178 (2021), http://dx.doi.org/10.1016/j.sigpro.2020.107817.
[2] J. Li, T. Zhong, X.F. Dai, C.X. Yang, R. Li, Z.L. Tang, Compressive optical image watermarking using joint Fresnel transform correlator architecture, Opt. Lasers Eng. 89 (2017) 29-33, http://dx.doi.org/10.1016/j.optlaseng.2016.02.024.
[3] C.C. Chen, Y.H. Tsai, H.C. Yeh, Difference-expansion based reversible and visible image watermarking scheme, Multimedia Tools Appl. 76 (6) (2017) 8497-8516, http://dx.doi.org/10.1007/s11042-016-3452-9.
[4] B.W. Feng, J. Weng, Improved algorithms for robust histogram shape-based image watermarking, in: C. Kraetzer, Y.Q. Shi, J. Dittmann, H.J. Kim (Eds.), Digital Forensics and Watermarking, Lecture Notes in Computer Science, vol. 10431, 2017, pp. 275-289, http://dx.doi.org/10.1007/978-3-319-64185-0_21.
[5] S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, third ed., Academic Press, 2008.
[6] Nasrullah, J. Sang, M.A. Akbar, B. Cai, H. Xiang, H.B. Hu, Joint image compression and encryption using IWT with SPIHT, Kd-tree and chaotic maps, Appl. Sci.-Basel 8 (10) (2018), http://dx.doi.org/10.3390/app8101963.

[7] W. Lin, S. Horng, T. Kao, P. Fan, C. Lee, Y. Pan, An efficient watermarking method based on significant difference of wavelet coefficient quantization, IEEE Trans. Multimedia 10 (5) (2008) 746-757, http://dx.doi.org/10.1109/TMM.2008.922795.
[8] A. Amsaveni, P.T. Vanathi, Reversible data hiding based on radon and integer lifting wavelet transform, Inf. Midem.-J. Microelectron. Electron. Compon. Mater. 47 (2) (2017) 91-99.
[9] G. Mamatha, A. Shaik, Reversible data hiding based on integer wavelet transforms and prediction error expansion, in: 2018 International Conference on Computational Science and Computational Intelligence (CSCI), 2018.
[10] L. Meng, L. Liu, G. Tian, X. Wang, An adaptive reversible watermarking in IWT domain, Multimedia Tools Appl. 80 (1) (2021) 711-735, http://dx.doi.org/10.1007/s11042-020-09686-9.
[11] J.E. Lee, Y.H. Seo, D.W. Kim, Convolutional neural network-based digital image watermarking adaptive to the resolution of image and watermark, Appl. Sci.-Basel 10 (19) (2020), http://dx.doi.org/10.3390/app10196854.
[12] C.L. Li, X.M. Sun, Y.Q. Li, Information hiding based on augmented reality, Math. Biosci. Eng. 16 (5) (2019) 4777-4787, http://dx.doi.org/10.3934/mbe.2019240.
[13] Z. Pakdaman, H. Nezamabadi-pour, S. Saryazdi, A new reversible data hiding in transform domain, Multimedia Tools Appl. (2020), http://dx.doi.org/10.1007/s11042-020-10058-6.
[14] Q.T. Su, Y.H. Liu, D.C. Liu, Z.H. Yuan, H.Y. Ning, A new watermarking scheme for colour image using QR decomposition and ternary coding, Multimedia Tools Appl. 78 (7) (2019) 8113-8132, http://dx.doi.org/10.1007/s11042-018-6632-y.
[15] S.M. Jiao, C.Y. Zhou, Y.S. Shi, W.B. Zou, X. Li, Review on optical image hiding and watermarking techniques, Opt. Laser Technol. 109 (2019) 370-380, http://dx.doi.org/10.1016/j.optlastec.2018.08.011.
[16] J.Z. Li, Robust image watermarking scheme against geometric attacks using a computer-generated hologram, Appl. Opt. 49 (32) (2010) 6302-6312, http://dx.doi.org/10.1364/ao.49.006302.
[17] S. Baluja, Hiding images in plain sight: Deep steganography, in: I. Guyon, U.V. Luxburg, S. Bengio, et al. (Eds.), Advances in Neural Information Processing Systems, vol. 30, 2017.
[18] A. ur Rehman, R. Rahim, S. Nadeem, S. ul Hussain, End-to-end trained CNN encoder-decoder networks for image steganography, in: L. Leal-Taixé, S. Roth (Eds.), Computer Vision - ECCV 2018 Workshops, Part IV, Lecture Notes in Computer Science, vol. 11132, 2019, pp. 723-729, http://dx.doi.org/10.1007/978-3-030-11018-5_64.
[19] Q. Li, X.Y. Wang, X.Y. Wang, B. Ma, C.P. Wang, Y.J. Xian, Y.Q. Shi, A novel grayscale image steganography scheme based on chaos encryption and generative adversarial networks, IEEE Access 8 (2020) 168166-168176, http://dx.doi.org/10.1109/access.2020.3021103.
[20] B.J. Chen, J.X. Wang, Y.Y. Chen, Z.L. Jin, H.J. Shim, Y.Q. Shi, High-capacity robust image steganography via adversarial network, KSII Trans. Internet Inf. Syst. 14 (1) (2020) 366-381, http://dx.doi.org/10.3837/tiis.2020.01.020.
[21] Q. Liu, X.Y. Xiang, J.H. Qin, Y. Tan, J.S. Tan, Y.J. Luo, Coverless steganography based on image retrieval of DenseNet features and DWT sequence mapping, Knowl.-Based Syst. 192 (2020), http://dx.doi.org/10.1016/j.knosys.2019.105375.
[22] Q. Liu, X.Y. Xiang, J.H. Qin, Y. Tan, Y. Qiu, Coverless image steganography based on DenseNet feature mapping, EURASIP J. Image Video Process. 2020 (1) (2020), http://dx.doi.org/10.1186/s13640-020-00521-7.
[23] J.H. Yang, D.Y. Ruan, J.W. Huang, X.G. Kang, Y.Q. Shi, An embedding cost learning framework using GAN, IEEE Trans. Inf. Forensic Secur. 15 (2020) 839-851, http://dx.doi.org/10.1109/tifs.2019.2922229.
[24] R. Zhang, S.Q. Dong, J.Y. Liu, Invisible steganography via generative adversarial networks, Multimedia Tools Appl. 78 (7) (2019) 8559-8575, http://dx.doi.org/10.1007/s11042-018-6951-z.
[25] Z.L. Zhou, Y. Cao, M.M. Wang, E.M. Fan, Q.M.J. Wu, Faster-RCNN based robust coverless information hiding system in cloud environment, IEEE Access 7 (2019) 179891-179897, http://dx.doi.org/10.1109/access.2019.2955990.
[26] X. Duan, B. Li, Z. Xie, D. Yue, Y. Ma, High-capacity information hiding based on residual network, IETE Tech. Rev. 38 (1) (2021) 172-183, http://dx.doi.org/10.1080/02564602.2020.1808097.
[27] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015.
[28] R. Kamiya, K. Hotta, K. Oda, S. Kakuta, Road detection from satellite images by improving U-Net with difference of features, in: Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods, 2018, http://dx.doi.org/10.5220/0006717506030607.
[29] M. Oda, K. Tanaka, H. Takabatake, M. Mori, H. Natori, K. Mori, Realistic endoscopic image generation method using virtual-to-real image-domain translation, Healthcare Technol. Lett. 6 (6) (2019) 214-219, http://dx.doi.org/10.1049/htl.2019.0071.
[30] A. Ouahabi, A. Taleb-Ahmed, Deep learning for real-time semantic segmentation: Application in ultrasound imaging, Pattern Recognit. Lett. 144 (2021) 27-34, http://dx.doi.org/10.1016/j.patrec.2021.01.010.
[31] Z. Gu, J. Cheng, H. Fu, K. Zhou, H. Hao, Y. Zhao, T. Zhang, S. Gao, J. Liu, CE-Net: Context encoder network for 2D medical image segmentation, IEEE Trans. Med. Imaging 38 (10) (2019) 2281-2292, http://dx.doi.org/10.1109/TMI.2019.2903562.
[32] X.T. Duan, K. Jia, B.X. Li, D.D. Guo, E. Zhang, C. Qin, Reversible image steganography scheme based on a U-Net structure, IEEE Access 7 (2019) 9314-9323, http://dx.doi.org/10.1109/access.2019.2891247.
[33] A. Ouahabi, Signal and Image Multiresolution Analysis, John Wiley & Sons, 2012.
[34] S. Tedmori, N. Al-Najdawi, Image cryptographic algorithm based on the Haar wavelet transform, Inform. Sci. 269 (2014) 21-34.
[35] M. Ferroukhi, A. Ouahabi, M. Attari, Y. Habchi, A. Taleb-Ahmed, Medical video coding based on 2nd-generation wavelets: Performance evaluation, Electronics 8 (1) (2019) 88.
