
Optical and Quantum Electronics (2021) 53:161

https://doi.org/10.1007/s11082-021-02793-3

A value parity combination based scheme for retinal images watermarking

Fares Kahlessenane1 · Amine Khaldi1 · Med Redouane Kafi1 · Narima Zermi2 · Salah Euschi1

Received: 29 July 2020 / Accepted: 9 February 2021 / Published online: 15 March 2021
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021

Abstract
In order to improve the security of images exchanged in telemedicine, we propose in this paper two watermarking schemes for retinal image protection; each scheme comes in two variants. The watermark is embedded by combining the parities of successive values, each variant representing a different combination. These approaches are implemented in three insertion domains: spatial, frequency and multi-resolution. In the spatial domain, the watermark is embedded in the R, G and B values of the image. In the frequency domain, the watermark bits are substituted for the least significant bits of the DCT coefficients. For multi-resolution insertion, a DWT is computed and the obtained LL sub-band coefficients are used for embedding. After comparing our approaches with recent works in the three domains, the results show that our first approach offers good imperceptibility for frequency- and spatial-domain insertion. However, using 512 × 512 images for our experiments considerably reduces the capacity of our approach in the frequency domain.

Keywords  Retinal image · Blind watermarking · Least significant bit · Discrete cosine
transform · Discrete wavelet transform · Spatial domain · Transform domain

* Amine Khaldi
Khaldi.Amine@univ-ouargla.dz
Fares Kahlessenane
Kahlessenane.Fares@univ-ouargla.dz
Med Redouane Kafi
Kafi.Redouane@univ-ouargla.dz
Narima Zermi
Zermi.Narima@univ-annaba.dz
Salah Euschi
Salah@univ-ouargla.dz
1 Artificial Intelligence and Information Technology Laboratory (LINATI), Computer Science Department, Faculty of Sciences and Technology, University of Kasdi Merbah, 30000 Ouargla, Algeria
2 Laboratories of Automation and Signals of Annaba (LASA), Electronics Department, Faculty of Engineering Sciences, Badji Mokhtar Annaba University, 23000 Annaba, Algeria


1 Introduction

The development of networks and the rapid evolution of computer technology make it possible to distribute any kind of text, sound or image information; this also facilitates illegal duplication (Wang et al. 2019). Cryptography was the first technique developed to guarantee safety during the transmission and dissemination of data (Boukela et al. 2020); however, the data are encrypted and protected only during transmission: after decryption, the file is no longer protected and can easily be distributed by unauthorized users (Khaldi 2018). Watermarking then appeared as one of the methods to protect copyright and ensure the security of multimedia (Roy et al. 2018). Watermarking consists of inserting a mark (a logo or the name of the owner) into an image, video or audio file, from which the inserted information can be extracted for different purposes (Jafari Barani et al. 2020). The inserted information represents a digital mark, usually a visible or invisible identification code, that may contain information about the recipient, the legitimate owner or the author, and their copyright, in the form of text or image data (Khaldi 2020a). An effective watermarking system must guarantee the imperceptibility of the hidden data: the inserted mark must not affect the visual quality of the image (Ahmadi et al. 2020a). A watermarking system is also evaluated for its robustness; this criterion represents its resistance to the changes caused by attacks (Yousefi Valandar et al. 2020). Capacity is another significant measure of the effectiveness of a watermark (Valandar et al. 2017). It represents the amount of information that can be inserted into the image; however, the larger the mark, the greater the degradation (Khaldi 2020b). In order to improve the security of images exchanged in telemedicine, we propose in this work two watermarking schemes for retinal image protection; each scheme comes in two variants. The goal is to find a compromise between capacity and imperceptibility: to hide as much data as possible while minimizing the degradation of the image. Our first experiment applies our watermarking schemes in the spatial domain; for this purpose, the data to hide are substituted for the colorimetric values of the image according to the proposed scheme. The second experiment applies the four watermarking schemes in the frequency domain; here, the DCT coefficients of the image are computed and the substitution is made in the values of these coefficients. Our last experiment applies our four insertion schemes in the multi-resolution domain; a DWT is applied to the cover image and the data are inserted by substituting the least significant bits of the obtained LL sub-band coefficients. For each insertion domain and each variant, the distortion measures are calculated between the original and watermarked images, to obtain an objective assessment of the generated distortions; this also determines the degree of imperceptibility obtained for each variant. In addition, the hiding capacity of the four schemes differs and varies with the insertion domain. A capacity/imperceptibility summary therefore lets us determine, for each insertion domain, the variant that offers the best compromise between capacity and imperceptibility. In this work we propose a substitution process that modifies little information in order to preserve a reasonable imperceptibility: modifying at most one coefficient out of three during the integration process generates less distortion and thus preserves the diagnostic quality of the image. Since the proposed approach is blind in all three insertion domains, the mark is extracted without the original image; the watermarked image alone is sufficient to identify the patient. The proposed schemes protect the patient's information and thus ensure the confidentiality of personal data. Therefore, the


proposed scheme could be integrated into a medical document authentication application where any user can check whether a document from the database is authentic or not. This paper is organized as follows. Section 2 presents recent work on image watermarking in the spatial and frequency domains. Section 3 is dedicated to the description of our two proposed schemes and of the two possible variants of each scheme. The experiments and the performance evaluation of each scheme are presented in Sect. 4. The conclusion of this paper and the perspectives are discussed in Sect. 5.

2 Related work

The watermarking techniques applied to images depend on the image representation used (the domain); each insertion domain has various marking schemes (Ahmadi et al. 2020b), and this domain can be either the spatial domain or a transform domain.

2.1 Spatial domain watermarking techniques

Spatial methods insert the mark directly into the image without any transformation (Mun et al. 2019): the insertion works directly on the image by modifying the intensity of the pixels that will contain the mark (Khaldi 2020c). These methods remain simple and inexpensive in computing time since they do not require a prior transformation; they are suited to the real-time marking needed in environments with low computing power (Sadeghi and Toosi 2019).

• Xiong et al. (2019) proposed a two-stage watermarking scheme: vertical integration followed by horizontal integration. In the horizontal direction, hiding is done by increasing the pixel values of the even lines and decreasing the pixel values of the odd lines; the reverse is performed in the vertical direction.
• Su et al. (2019) propose a blind watermarking method based on the principle of Schur decomposition in the spatial domain. In this approach, after computing the maximum eigenvalue of the Schur decomposition, the bits to be embedded are hidden in that maximum eigenvalue.
• Jobin and Varghese (2019) propose a watermarking approach in which two masks are used: one for embedding the watermark and a second for color compensation. After decomposing the image into blocks, the blue component of the image is used for substitution with the application of the first mask. The first mask ensures that the modified pixels are not distinguishable from the neighboring pixels; to guarantee a color distribution similar to the initial image, the second, compensation mask is applied.
• Kim et al. (2019) propose a watermarking scheme that uses multidirectional differentiation of the pixel values; this technique makes it possible to perform the substitution in two or three directions of the image. The image is divided into non-overlapping blocks, the pixels of each block are examined, and the minimum value in each block is selected. The data to be substituted can be inserted in two or three directions


depending on the minimum value of the block's pixels. The generated pixels are then placed separately in two different images. In the extraction process, the encoded data can be extracted using the difference between the two received images.

2.2 Transform domain watermarking techniques

Transform domains have been extensively studied in the context of image coding and compression (Euschi et al. 2021), and many of these research results can be applied to digital watermarking. Image coding theory holds that in most images the colors of neighboring pixels are strongly correlated (Haddad et al. 2017). Mapping into a specific transform domain (such as the DCT or DWT) serves two purposes: it decorrelates the original sample values and concentrates the signal energy into only a few coefficients (Ahmadi et al. 2019).

• Ernawan and Kabir (2019) present a watermarking technique based on Tchebichef moments to protect copyright. In this technique, the host image is divided into non-overlapping blocks and the Tchebichef moments of each block are calculated. The mark is embedded in the blocks with the lowest visual entropies: the watermark is scrambled by Arnold's transformation and integrated into the Tchebichef moments of the selected blocks.
• Liu et al. (2019) have proposed an approach that integrates image compression and data
concealment into a single procedure. For complex blocks, the data is embedded and the
blocks are compressed using Modified Block Truncation Coding (MBTC) while main-
taining reasonable visual distortion. For smooth blocks, the data is embedded, and the
blocks are compressed using Block Search Order Coding (BSOC) while maintaining
acceptable compression performance.
• Hu et al. (2016) propose a blind watermarking scheme based on the discrete cosine transform (DCT) to embed binary information into images. This scheme modulates the Partially Signed Average (PSA) of some DCT coefficients derived from an 8 × 8 image block. The variation margin of each coefficient is regulated by consulting a JPEG quantization table to reinforce the imperceptibility of the watermark.
• In the approach proposed by Moosazadeh et al. (2019), a DCT of the image is calculated; the parameters and positions of the coefficients to be modified are then determined automatically by Teaching–Learning-Based Optimization (TLBO), a method that has rarely been applied in watermarking and that allows the integration parameters and the appropriate position for the watermark insertion to be determined automatically.
• Sari et al. (2019) propose a combination of AES, Huffman and DWT coding to secure the message and conceal it in a cover image. The message is encrypted with the AES algorithm before integration. After calculating the DWT of the cover image, the concealment is done in the four sub-bands: LL, LH, HL and HH.
• Liu et al. (2018) proposed a dual watermarking scheme: a fragile watermark for image authentication and a robust watermark for copyright protection. The robust watermark is embedded using the discrete wavelet transform in the YCbCr color space; the fragile watermark is applied via an improved least-significant-bit replacement in the RGB components. Both watermarks are extracted without the original host image or watermark.


• Guo et al. (2017) implemented a new FA-optimized image watermarking method in the DWT-QR domain. The authors first apply a DWT to obtain the four sub-bands (LL, LH, HL and HH); the LL band is used for data integration. The LL band is then divided into non-overlapping blocks, and a QR decomposition is performed to obtain the two matrices Q and R; the first row of each R matrix is selected to embed the watermark.
• Makbol et al. (2016) proposed a block-based scheme that uses entropy and edge entropy as HVS characteristics to determine which blocks will be used for integration. Blocks with lower entropy and edge-entropy values are considered the best regions for data concealment. After DWT decomposition, an SVD is performed on the coefficients of the LL sub-band; the integration is then performed by examining the second and third entries of the first column of the U matrix.

3 Proposed approaches

3.1 Spatial domain

The proposed approach is a combination of matrix coding and LSB matching (LSBM). Matrix coding ensures that the secret message is embedded in the image with a minimum number of pixel changes, while LSB matching mitigates the Pairs-of-Values (PoV) effect of plain LSB replacement. The proposed schemes are based on the differences between the colorimetric values of the pixels: the red, green and blue values are extracted and compared with each other according to the variant used (Fig. 1).
The inserted watermark is a sequence of 0s and 1s. These data are substituted for the LSBs of the R, G and B values, depending on the variant used. In the first proposed scheme, three pixels are used to conceal six bits. Let R1, R2 and R3 be the values of the red components of the pixels P1, P2 and P3, and X, Y two bits of the secret message. For the second variant, the values A, B and C correspond to the least significant bits of the red components of the pixels P1, P2 and P3 (Table 1). In the second scheme, two pixels are used to hide three bits: the difference between the color values of the two pixels is calculated. Let R1 and R2 be the values of the red components of the pixels P1 and P2, and X the bit to be integrated. For the second variant, the values A and B correspond to the least significant bits of the red components of the pixels P1 and P2 (Table 2).
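The embedding rule of the first scheme (variant 1) can be sketched in a few lines. The following is a minimal Python illustration under our own naming, assuming 8-bit red components R1, R2, R3 and two secret bits X and Y; it is a sketch of the rule, not the authors' code:

```python
def embed_pair(r1, r2, r3, x, y):
    """Adjust at most one of r1, r2, r3 so that
    x == (r1 - r2) % 2 and y == (r1 - r3) % 2 (first scheme, variant 1)."""
    ok_x = (x == (r1 - r2) % 2)
    ok_y = (y == (r1 - r3) % 2)
    if ok_x and ok_y:
        return r1, r2, r3          # no change required
    if not ok_x and ok_y:
        return r1, r2 ^ 1, r3      # flip the LSB of R2
    if ok_x and not ok_y:
        return r1, r2, r3 ^ 1      # flip the LSB of R3
    return r1 ^ 1, r2, r3          # flipping R1 corrects both parities

def extract_pair(r1, r2, r3):
    """Blind extraction: recover the two embedded bits."""
    return (r1 - r2) % 2, (r1 - r3) % 2
```

Flipping the LSB of R1 changes both difference parities at once, which is why at most one of the three values is modified per pair of embedded bits.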

3.2 Frequency domain

In the proposed approach for the frequency domain, a DCT is applied to the image (Fig. 2); after the thresholding and quantization steps, the watermark is inserted into the DCT coefficients to obtain the watermarked image.

Fig. 1  Integration process in the spatial domain


Table 1  First proposed insertion scheme for the spatial domain

Variant 1:
X = (R1 – R2) % 2 and Y = (R1 – R3) % 2: no change required
X ≠ (R1 – R2) % 2 and Y = (R1 – R3) % 2: change R2 component
X = (R1 – R2) % 2 and Y ≠ (R1 – R3) % 2: change R3 component
X ≠ (R1 – R2) % 2 and Y ≠ (R1 – R3) % 2: change R1 component

Variant 2:
X = (A ⨁ B) % 2 and Y = (A ⨁ C) % 2: no change required
X ≠ (A ⨁ B) % 2 and Y = (A ⨁ C) % 2: change R2 component
X = (A ⨁ B) % 2 and Y ≠ (A ⨁ C) % 2: change R3 component
X ≠ (A ⨁ B) % 2 and Y ≠ (A ⨁ C) % 2: change R1 component

Table 2  Second proposed insertion scheme for the spatial domain

Variant 1:
X = (R1 – R2) % 2: no change required
X ≠ (R1 – R2) % 2: change R1 component

Variant 2:
X = (A ⨁ B) % 2: no change required
X ≠ (A ⨁ B) % 2: change R1 component

Fig. 2  Integration process in the frequency domain

3.2.1 The discrete cosine transform (DCT)

The image is divided into non-overlapping 8 × 8 blocks and a DCT is calculated for each block (Kahlessenane et al. 2020a). The DCT of an N × N block is computed as follows:

F_{u,v} = \frac{1}{\sqrt{2N}} C_u C_v \sum_{x=1}^{N} \sum_{y=1}^{N} f_{x,y} \cos\left(\frac{(2x+1)u\pi}{2N}\right) \cos\left(\frac{(2y+1)v\pi}{2N}\right)   (1)

where C_u = \frac{1}{\sqrt{2}} if u = 0 and C_u = 1 if u > 0 (and likewise for C_v).
Then, thresholding is performed to increase the efficiency of the compression: if the absolute value of a (non-zero) coefficient of the obtained DCT matrix is below a certain threshold (Ahmadi et al. 2020c), that coefficient is eliminated (set to zero).
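The block DCT and thresholding steps can be sketched as follows. This is a minimal numpy-only illustration: the orthonormal DCT-II used here may differ slightly in normalization from Eq. (1), and the block size and threshold are illustrative values of our own choosing:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block."""
    N = block.shape[0]
    n = np.arange(N)
    # Basis matrix: C[u, x] = sqrt(2/N) * cos(pi * (2x+1) * u / (2N))
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] /= np.sqrt(2.0)   # DC row scaling (the C_u factor)
    return C @ block @ C.T

def block_dct_threshold(img, block=8, thresh=1.0):
    """Per-block DCT; coefficients with magnitude below thresh are zeroed."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            c = dct2(img[i:i + block, j:j + block])
            c[np.abs(c) < thresh] = 0.0   # eliminate small coefficients
            out[i:i + block, j:j + block] = c
    return out
```

For a constant 8 × 8 block, only the DC coefficient survives the thresholding, as expected from the energy-compaction property mentioned above.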


Table 3  First proposed insertion scheme for the frequency domain

Variant 1:
X = (DCT1 – DCT2) % 2 and Y = (DCT1 – DCT3) % 2: no change required
X ≠ (DCT1 – DCT2) % 2 and Y = (DCT1 – DCT3) % 2: change DCT2 component
X = (DCT1 – DCT2) % 2 and Y ≠ (DCT1 – DCT3) % 2: change DCT3 component
X ≠ (DCT1 – DCT2) % 2 and Y ≠ (DCT1 – DCT3) % 2: change DCT1 component

Variant 2:
X = (A ⨁ B) % 2 and Y = (A ⨁ C) % 2: no change required
X ≠ (A ⨁ B) % 2 and Y = (A ⨁ C) % 2: change DCT2 component
X = (A ⨁ B) % 2 and Y ≠ (A ⨁ C) % 2: change DCT3 component
X ≠ (A ⨁ B) % 2 and Y ≠ (A ⨁ C) % 2: change DCT1 component

Table 4  Second proposed insertion scheme for the frequency domain

Variant 1:
X = (DCT1 – DCT2) % 2: no change required
X ≠ (DCT1 – DCT2) % 2: change DCT1 component

Variant 2:
X = (A ⨁ B) % 2: no change required
X ≠ (A ⨁ B) % 2: change DCT1 component

3.2.2 Quantization process

The obtained DCT coefficients are then quantized. The quantization of each block groups sets of close values together: a single quantum value represents a range of values (Fares et al. 2021), which minimizes the data required to represent the image. To compute the quantization matrix, a quantization value Q is chosen (Q represents the number of bits needed to code each element of the DCT matrix), and the maximum and minimum values of the DCT matrix (DCTmax, DCTmin) are determined. The DCTQ matrix is then calculated by the following formula:

DCTQ(i, j) = \frac{(2^Q - 1)\,(DCT(i, j) - DCTmin)}{DCTmax - DCTmin}   (2)
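Eq. (2) can be sketched as follows; rounding the scaled values to the nearest integer is our own assumption, since the paper only gives the scaling formula:

```python
import numpy as np

def quantize_dct(dct, q):
    """Map DCT coefficients onto the integer range [0, 2**q - 1], Eq. (2)."""
    dct_min, dct_max = dct.min(), dct.max()
    scaled = (2 ** q - 1) * (dct - dct_min) / (dct_max - dct_min)
    # Rounding is assumed; the paper does not specify it.
    return np.round(scaled).astype(int)
```

The least significant bits of these quantized values are what the insertion schemes of Tables 3 and 4 then manipulate.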

3.2.3 Integration process

In the first scheme, the parity of three successive coefficients is computed and then compared to the hidden bits. In case of inequality, the parity of one coefficient is modified to satisfy the equality according to the rules of Table 3. Consider, for the first variant, DCT1, DCT2 and DCT3 three successive coefficients and X, Y two bits of the message to insert. For the second variant, the values A, B and C represent the remainders of the integer division by 2 of the values DCT1, DCT2 and DCT3.
In the second scheme, the parity of two successive coefficients is calculated; the substi-
tution of a bit is performed by applying one of the rules of Table 4. Let us consider for the


first variant DCT1 and DCT2 two successive coefficients and X the bit of the message to be inserted. For the second variant, the values A and B represent the remainders of the integer division by 2 of the values DCT1 and DCT2.

3.3 Multi‑resolution domain

In the approach proposed for the multi-resolution domain (Fig. 3), a DWT of the image is calculated, the bits of the secret message are embedded in the obtained LL bands, and an IDWT is then applied to generate the watermarked image.

3.3.1 Discrete wavelet transform (DWT)

The implementation of the DWT on a 2D image divides it into four sub-bands (Kahlessenane et al. 2020b): LL, LH, HL and HH. Several filters can be used to obtain a DWT; the simplest and most often used is the Haar filter (Farri and Ayubi 2018), which we used in our implementation to process the signal. The coefficients of each sub-band are computed with the Haar filter as follows:

LL(x, y) = \frac{p(x, y) + p(x, y+1) + p(x+1, y) + p(x+1, y+1)}{2}   (3)

LH(x, y) = \frac{p(x, y) + p(x, y+1) - p(x+1, y) - p(x+1, y+1)}{2}   (4)

HL(x, y) = \frac{p(x, y) - p(x, y+1) + p(x+1, y) - p(x+1, y+1)}{2}   (5)

HH(x, y) = \frac{p(x, y) - p(x, y+1) - p(x+1, y) + p(x+1, y+1)}{2}   (6)
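Eqs. (3)–(6) amount to a one-level Haar decomposition, which can be sketched as follows (a minimal illustration using the paper's divide-by-2 normalization and assuming even image dimensions; the function name is ours):

```python
import numpy as np

def haar_dwt2(p):
    """One-level Haar decomposition of a 2D array into LL, LH, HL, HH."""
    a = p[0::2, 0::2]   # p(x, y)
    b = p[0::2, 1::2]   # p(x, y+1)
    c = p[1::2, 0::2]   # p(x+1, y)
    d = p[1::2, 1::2]   # p(x+1, y+1)
    ll = (a + b + c + d) / 2   # Eq. (3)
    lh = (a + b - c - d) / 2   # Eq. (4)
    hl = (a - b + c - d) / 2   # Eq. (5)
    hh = (a - b - c + d) / 2   # Eq. (6)
    return ll, lh, hl, hh
```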

3.3.2 Integration process

Since our approach applies to color images, we have three LL sub-bands: LLr for the red component, LLg for the green component and LLb for the blue component, which significantly increases the capacity of the proposed approach. After decomposing the image into four sub-bands, the substitution is carried out in the coefficients of the
Fig. 3  Integration process in the multi-resolution domain


Table 5  First proposed insertion scheme for the multi-resolution domain

Variant 1:
X = (LLR1 – LLR2) % 2 and Y = (LLR1 – LLR3) % 2: no change required
X ≠ (LLR1 – LLR2) % 2 and Y = (LLR1 – LLR3) % 2: change LLR2 component
X = (LLR1 – LLR2) % 2 and Y ≠ (LLR1 – LLR3) % 2: change LLR3 component
X ≠ (LLR1 – LLR2) % 2 and Y ≠ (LLR1 – LLR3) % 2: change LLR1 component

Variant 2:
X = (A ⨁ B) % 2 and Y = (A ⨁ C) % 2: no change required
X ≠ (A ⨁ B) % 2 and Y = (A ⨁ C) % 2: change LLR2 component
X = (A ⨁ B) % 2 and Y ≠ (A ⨁ C) % 2: change LLR3 component
X ≠ (A ⨁ B) % 2 and Y ≠ (A ⨁ C) % 2: change LLR1 component

Table 6  Second proposed insertion scheme for the multi-resolution domain

Variant 1:
X = (LLR1 – LLR2) % 2: no change required
X ≠ (LLR1 – LLR2) % 2: change LLR1 component

Variant 2:
X = (A ⨁ B) % 2: no change required
X ≠ (A ⨁ B) % 2: change LLR1 component

obtained LL sub-bands. In the first scheme, as in the frequency approach, the parity of three successive coefficients is computed and then compared to the hidden bits; in case of inequality, the parity of one coefficient is modified to satisfy the equality according to the rules in Table 5. Consider, for the first variant, LLR1, LLR2 and LLR3 three successive coefficients of the LL sub-band of the red component and X, Y two bits of the message to be inserted. For the second variant, the values A, B and C represent the remainders of the integer division by 2 of the values LLR1, LLR2 and LLR3. In the second scheme, the parity of two successive coefficients of the sub-bands is calculated; the substitution of a bit is carried out by applying one of the rules of Table 6. Consider for the first variant LLR1 and LLR2 two successive coefficients of the LL sub-band of the red component and X the bit of the message to be integrated. For the second variant, the values A and B represent the remainders of the integer division by 2 of the values LLR1 and LLR2.
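Since the scheme is blind, extraction mirrors the embedding rule using the watermarked coefficients alone. A minimal sketch for the first scheme (variant 1), assuming integer-valued coefficients and our own naming:

```python
def extract_bits(coeffs):
    """Recover the embedded bits from successive LL coefficients,
    three at a time, without the original image (first scheme, variant 1)."""
    bits = []
    for k in range(0, len(coeffs) - 2, 3):
        c1, c2, c3 = coeffs[k], coeffs[k + 1], coeffs[k + 2]
        bits.append((c1 - c2) % 2)   # bit X
        bits.append((c1 - c3) % 2)   # bit Y
    return bits
```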

4 Experiments and results

In this section, the proposed watermarking techniques are evaluated on a database of color images. For our experiments, we used the RITE (Retinal Images vessel Tree Extraction) medical image database. This database (Tan et al. 2009) contains 40 retinal images of size 565 × 584 (Fig. 4).
The watermark consists of the Electronic Patient Record and the image acquisition data. We implemented each approach in the three insertion domains (spatial, frequency and multi-resolution). In what follows, Sch1V1 and Sch1V2 refer to the first and second variants of


Fig. 4  Medical images used for the analysis of the proposed scheme

the first proposed approach, and Sch2V1 and Sch2V2 refer to the first and second variants of the second proposed approach.

4.1 Performance measurement metrics

In order to measure and evaluate the performance of the proposed approaches, the following measures (Narima et al. 2021) are used:

• PSNR is a metric that measures the amount of noise added to the cover image during the integration process. A high PSNR value is desirable and means that the level of distortion is low; a PSNR below 30 dB usually indicates that the image has perceptible impairments. PSNR is the most commonly used evaluation metric.
• SSIM measures the similarity between two images. The result lies in [− 1, 1], and the maximum value of 1 indicates that the two images are identical.
• NPCR indicates the rate of change in the number of pixels of the watermarked image compared to the original image.
• UACI (unified average changing intensity) measures the average intensity of the differences between the original and the watermarked image.
• NPC indicates the number of pixels modified by the watermarking process: if at least one value (red, green or blue) of a pixel is changed, the NPC is incremented.
• NVC counts the number of values (red, green or blue) modified by the hiding process.
• C/Img indicates the capacity of the approach (the number of bits that can be embedded in each image).
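As an illustration, two of these metrics (PSNR and NPCR) can be sketched for 8-bit images; the formulas are the standard definitions, with parameter names of our own choosing:

```python
import numpy as np

def psnr(original, marked):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def npcr(original, marked):
    """Fraction of pixel positions whose value changed."""
    return float(np.mean(original != marked))
```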

4.2 Application of our approaches in the spatial domain

In order to experiment with our approaches in the spatial domain, the images of our database were watermarked: a random sequence of 30,000 bits was substituted for the least

Fig. 5  Watermarking results
in spatial domain, a: Original
image, b: Watermarked image

Table 7  Imperceptibility results for the spatial domain insertion


PSNR MSE SSIM NPCR NPC UACI NVC C/Img

Sch1.V1 61.72 0.01125 1 0.011940 2958 0.000047 4148.6 524288


Sch1.V2 59.87 0.01739 1 0.015585 3959 0.000061 5612.8 524288
Sch2.V1 55.08 0.04569 0.99992 0.042977 6378 0.000171 10218.8 393216
Sch2.V2 55.31 0.04415 0.99979 0.043652 6851 0.000169 10197.6 393216
LSB1 52.04 0.23876 0.9997 0.04021 8869 0.000160 14540 786432

significant bits of the R, G and B values of the image pixels. As shown in Fig. 5, the generated distortions are visually undetectable. To analyze the obtained results, the similarity and distortion measures were calculated (Table 7) by comparing the original and watermarked images.
The results obtained by a simple LSB substitution were also calculated to show the difference between our approach and a plain LSB substitution. Indeed, an LSB substitution systematically modifies values in all three R, G and B components of an image during the integration process (14,540 values on average), which increases the distortion (52.04 dB on average). Our approach, on the other hand, modifies only one value out of three components (4148.6 for Sch1.V1), which reduces the distortion produced in the cover image. As the results in Table 7 show, all the proposed variants generate less distortion during the integration process than the LSB method. This is also explained by the number of modified pixels: a simple LSB substitution modifies 8869 pixels on average, whereas our approaches change no more than 6851 pixels. As the similarity measure (SSIM) also shows, these approaches generate a watermarked image that is more similar to the original image.

4.3 Application of our approaches in the frequency domain

In order to experiment with our approaches in the frequency domain, a DCT was calculated for each image of our database (after decomposition into blocks). Then, a random sequence of 4000 bits was substituted for the least significant bits of the quantized DCT coefficients obtained for each image.
As with the spatial domain results, the distortions introduced in the cover image are visually undetectable (Fig. 6); the distortion measurements were therefore calculated


Fig. 6  Watermarking results in
frequency domain, a: Original
image, b: Watermarked image

Table 8  Imperceptibility results for the frequency domain insertion

         PSNR   MSE        NPCR     NPC   UACI      NVC   C/Img
Sch1.V1  40.83  2.7727562  0.00019  504   0.000001  168   8192
Sch1.V2  40.62  2.80618    0.00156  480   0.000006  160   8192
Sch2.V1  40.21  3.229263   0.00306  1470  0.000012  490   6144
Sch2.V2  40.49  2.99588    0.0008   384   0.000003  128   6144
LSB1     40.47  3.09691    0.00582  2775  0.000097  925   12288

between the original and watermarked images (Table 8). As in the spatial domain, we applied a simple LSB substitution to the quantized DCT coefficients to conceal the secret message. As Table 8 shows, the number of values and pixels modified by a simple LSB substitution remains significantly higher than the number of values and pixels modified by our approaches.
Our approaches therefore logically achieve a higher PSNR. However, since we do not hide data directly in the image values but manipulate quantized DCT coefficients, the capacity of our approaches (8192 and 6144 bits) is much lower than that of the LSB method (12,288 bits). Indeed, our first scheme requires three DCT coefficients to hide two bits, unlike the LSB method, which requires only two coefficients to conceal two bits.

4.4 Application of our approaches in the multi‑resolution domain

In order to experiment with our approaches in the multi-resolution domain, each image of our database is decomposed into four sub-bands (LL, LH, HL and HH). The LL bands are used to embed the watermark; since we use color images, we have three LL sub-bands in which to perform the substitution (one sub-band for each of the R, G and B components). After hiding our message (a random sequence of 90,000 bits), the obtained images show no visually detectable distortion (Fig. 7). In order to measure the performance and evaluate the variants used, the similarity and distortion measurements between the original images and the watermarked images were calculated (Table 9).
In order to compare our substitution approaches with a direct LSB substitution, we also implemented and tested an LSB substitution in the three sub-bands (LLr, LLg and LLb, obtained by a DWT). As with the results obtained in the spatial and frequency

Fig. 7  Watermarking results
in multi-resolution domain, a:
Original image, b: Watermarked
image

domains, a simple LSB substitution always generates more distortion than the proposed approaches (Table 9). This is due to the number of modified values: in an LSB substitution each coefficient has a 50% chance of being modified, whereas in our approach a DWT coefficient has a 25% chance of being modified, because two bits are embedded by combining three coefficients of which at most one is modified.

4.5 Discussion and results comparison

In order to compare and evaluate our approaches against recent work, we compared, for each insertion domain, the distortion results obtained with those of the related works presented in the second section of this paper.
As we can see in Table 10, the PSNR values obtained by our approaches in the three insertion domains are reasonable compared to the related works. In the first approach, the hiding process combines three values to integrate two bits and only one value may be modified, which reduces the probability of modification, unlike the other approaches. This implies fewer modifications and consequently less distortion in the cover image, which explains the good PSNR rates obtained and therefore the reasonable imperceptibility. However, since our first approach requires three values to insert two bits, this impacts the approach's capacity. In the spatial domain this can be negligible, since all the pixels of the image are available for the integration, but it becomes more problematic in the frequency domain, where the number of quantized DCT coefficients is reduced. To evaluate the robustness of the proposed schemes (for each domain), several attacks were executed on the watermarked images, and the results obtained after extracting the watermark are compared to the related works. The normalized correlation (NC) is applied to evaluate the robustness of the watermarking scheme by comparing the original watermark and the extracted one (after attacking the watermarked image). Normally,
Table 9  Imperceptibility results for the multi-resolution domain insertion

          PSNR   MSE       SNR        IMMSE      C/Img
Sch1.V1   42.48  0.007985  −0.000001  0.003585   131072
Sch1.V2   40.50  0.013464  −0.000005  0.061644   131072
Sch2.V1   39.45  0.026455  −0.000005  0.075778    98304
Sch2.V2   37.84  0.028122  −0.000004  0.022842    98304
LSB1      30.93  0.035482  −0.000005  0.0066009  196608


Table 10  Imperceptibility comparison with related work (PSNR, dB)

Spatial domain           Xiong        Su et al.  Jobin and Varghese  Kim et al.  Sch1.V1  Sch1.V2  Sch2.V1  Sch2.V2
PSNR                     58.60        40.69      53.64               36.23       61.72    59.87    55.08    55.31

Frequency domain         Hu et al.    Moosazadeh et al.  Ernawan and Kabir  Liu et al.  Sch1.V1  Sch1.V2  Sch2.V1  Sch2.V2
PSNR                     40.48        40.76              40.86              38.11       40.83    40.62    40.21    40.49

Multi-resolution domain  Sari et al.  Guo et al.  Makbol et al.  Liu et al.  Sch1.V1  Sch1.V2  Sch2.V1  Sch2.V2
PSNR                     44.84        40.49       41.39          40.85       42.48    40.50    39.45    37.84

an NC > 0.85 implies a considerable similarity between the original and the extracted
watermark. As can be seen in Table 11, the proposed schemes remain reasonably robust
against various attacks while preserving a high-quality watermark. Usually, in the
spatial domain, if the LSBs are modified the extraction of the watermark becomes
impossible. In the proposed schemes, however, the embedded watermark can be recovered
even after LSB distortion. This is explained by the small number of modifications
performed: the parities of successive elements are combined to integrate the watermark,
which reduces the probability that any given value is modified. We can therefore
conclude from the obtained results that our approaches resist several malicious
watermark-removal attacks while guaranteeing a reasonable PSNR value.
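The NC used above compares the original and extracted watermark bit sequences; a minimal sketch (assuming nonzero binary watermarks) is:

```python
import numpy as np

def normalized_correlation(w_original, w_extracted):
    """Normalized correlation between two binary watermarks: close to 1 when
    the extracted mark matches the original; values above ~0.85 are taken
    here to indicate that the watermark survived the attack."""
    a = np.asarray(w_original, dtype=float).ravel()
    b = np.asarray(w_extracted, dtype=float).ravel()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```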

5 Conclusion

In this paper, two new approaches have been proposed for retinal image watermarking.
These two approaches have been applied in three insertion domains (spatial, frequency
and multi-resolution). In the spatial domain, a combination of the color values (R, G
and B) allows two bits to be hidden while modifying only one value. Changing as few
values as possible generates fewer distortions, which produces reasonable PSNR results
(over 55 dB). These approaches are simple and inexpensive in computing time (since they
do not require a prior processing step), and can therefore be devoted to the real-time
watermarking required in low-power environments. Our approaches were then applied in
the frequency domain: after the DCT calculation and quantization, the integration is
performed by modifying the LSBs of the quantized coefficients according to the parity
combination of successive coefficients. The use of transforms makes the message more
robust to compression, since it exploits the same space used to code the image. In
addition, these schemes can provide better robustness against filtering attacks because
the watermark's coefficients are spread throughout the cover image. The

Table 11  Robustness comparison with related work (NC)

Spatial domain           Su et al.   Jobin and Varghese  Sch1.V1  Sch1.V2  Sch2.V1  Sch2.V2
Average filtering        0.9752      0.9451              0.9863   0.9637   0.9729   0.9685
Cropping                 0.7108      0.5000              0.6835   0.7384   0.6319   0.7493
Salt and pepper          0.81        0.9710              0.8536   0.8556   0.8409   0.8256
Gaussian LPF             0.9946      0.9993              0.9985   0.9982   0.9979   0.9983
JPEG compression         0.9245      0.8648              0.8462   0.8672   0.8503   0.8493

Frequency domain         Moosazadeh et al.  Ernawan and Kabir  Sch1.V1  Sch1.V2  Sch2.V1  Sch2.V2
Salt and pepper          0.8672             0.915              0.8725   0.8862   0.8941   0.8124
Median filter            0.8866             0.914              0.9162   0.9018   0.9006   0.8991
Cropping                 0.8219             0.9541             0.9509   0.9376   0.9127   0.9291
Scaling                  0.9489             0.9042             0.9213   0.9192   0.8974   0.8929

Multi-resolution domain  Liu et al.  Makbol et al.  Sch1.V1  Sch1.V2  Sch2.V1  Sch2.V2
Gaussian noise           0.9171      0.8027         0.9276   0.9089   0.8711   0.8932
Salt and pepper          0.9893      0.9521         0.9248   0.9315   0.9183   0.9272
JPEG compression         0.9711      0.9521         0.9613   0.9421   0.9076   0.9208

results obtained remain satisfactory compared to some recent work, with a PSNR higher
than 40 dB on average. However, the use of 512 × 512 images (which reduces the number
of coefficients), combined with the fact that our approach requires three coefficients
to insert two bits, significantly reduces the capacity of our approaches in the
frequency domain (between 6144 and 8192 bits per image). Finally, the proposed
approaches have been implemented in the multi-resolution domain by substituting the
coefficients of the LL sub-bands of the three R, G and B components. As for the two
previous domains, the parities of three successive coefficients are combined to
integrate two bits of the secret message. Sub-band decomposition allowed us to target
the low-frequency components, which constitute a less sensitive insertion space while
preserving the spatial content of the image. The imperceptibility tests reveal
acceptable and satisfactory PSNR values compared to some recent work, greater than
40 dB on average for the first approach. Unlike in the frequency domain, the DWT
generates a larger number of coefficients, which increases the integration space of our
approaches. After comparing our approaches to other recent works in the three insertion
domains, we can see that our first approach offers good imperceptibility for insertion
in the frequency and spatial domains. For future work, however, improvements need to be
made to adapt our approach to multi-resolution integration by reducing the distortions
generated during substitution. Our frequency-domain integration scheme should also be
improved to increase the concealment capacity, which is an essential criterion for a
watermarking method.

Declarations 

Conflict of interest  The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.

Ethical approval  This article does not contain any studies with human participants or animals performed by
any of the authors.

References
Abraham, J., Paul, V.: An imperceptible spatial domain color image watermarking scheme. J. King Saud Univ. Comput. Inf. Sci. 31, 125–133 (2019)
Ahmadi, S.B.B., Zhang, G., Jelodar, H.: A robust hybrid SVD-based image watermarking scheme for color images. In: 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, pp. 0682–0688 (2019). https://doi.org/10.1109/IEMCON.2019.8936229
Ahmadi, S.B.B., Zhang, G., Rabbani, M., Boukela, L., Jelodar, H.: An intelligent and blind dual color image watermarking for authentication and copyright protection. Appl. Intell. (2020). https://doi.org/10.1007/s10489-020-01903-0
Ahmadi, S.B.B., Zhang, G., Wei, S.: Robust and hybrid SVD-based image watermarking schemes. Multimed. Tools Appl. 79(1), 1075–1117 (2020). https://doi.org/10.1007/s11042-019-08197-6
Ahmadi, S.B.B., Zhang, G., Wei, S., Boukela, L.: An intelligent and blind image watermarking scheme based on hybrid SVD transforms using human visual system characteristics. Vis. Comput. (2020a). https://doi.org/10.1007/s00371-020-01808-6
Barani, M.J., Ayubi, P., Valandar, M.Y., Irani, B.Y.: A blind video watermarking algorithm robust to lossy video compression attacks based on generalized Newton complex map and contourlet transform. Multimed. Tools Appl. 79(3), 2127–2159 (2020). https://doi.org/10.1007/s11042-019-08225-5
Boukela, L., Zhang, G., Yacoub, M., Bouzefrane, S., Ahmadi, S.B.B., Jelodar, H.: A modified LOF-based approach for outlier characterization in IoT. Ann. Telecommun. (2020). https://doi.org/10.1007/s12243-020-00780-5
Ernawan, F., Kabir, M.N.: An improved watermarking technique for copyright protection based on Tchebichef moments. IEEE Access 7, 151985–152003 (2019)
Euschi, S., Amine, K., Kafi, R., Kahlessenane, F.: A Fourier transform based audio watermarking algorithm. Appl. Acoust. 172, 107652 (2021). https://doi.org/10.1016/j.apacoust.2020.107652
Fares, K., Khaldi, A., Redouane, K., Salah, E.: DCT & DWT based watermarking scheme for medical information security. Biomed. Signal Process. Control 66, 102403 (2021). https://doi.org/10.1016/j.bspc.2020.102403
Farri, E., Ayubi, P.: A blind and robust video watermarking based on IWT and new 3D generalized chaotic sine map. Nonlinear Dyn. 93(4), 1875–1897 (2018). https://doi.org/10.1007/s11071-018-4295-x
Guo, Y., Li, B.-Z., Goel, N.: Optimized blind image watermarking method based on firefly algorithm in DWT-QR transform domain. IET Image Process. 11, 406–415 (2017)
Haddad, S., Coatrieux, G., Cozic, M., Bouslimi, D.: Joint watermarking and lossless JPEG-LS compression for medical image security. IRBM 38(4), 198–206 (2017)
Hu, H.-T., Chang, J.-R., Hsu, L.-Y.: Robust blind image watermarking by modulating the mean of partly sign-altered DCT coefficients guided by human visual perception. Int. J. Electron. Commun. 70, 1374–1381 (2016)
Kahlessenane, F., Khaldi, A., Euschi, S.: A robust blind color image watermarking based on Fourier transform domain. Optik 208, 164562 (2020a). https://doi.org/10.1016/j.ijleo.2020.164562
Kahlessenane, F., Khaldi, A., Kafi, R., Euschi, S.: A DWT based watermarking approach for medical image protection. J. Ambient Intell. Hum. Comput. (2020b). https://doi.org/10.1007/s12652-020-02450-9
Khaldi, A.: Diffie–Hellman key exchange through steganographied images. Rev. Dir. Est. Telecomunicacoes (2018). https://doi.org/10.26512/lstr.v10i1.21504
Khaldi, A.: Steganographic techniques classification according to image format. Int. Ann. Sci. (2019). https://doi.org/10.21467/ias.8.1.143-149
Khaldi, A.: A lossless blind image data hiding scheme for semi-fragile image watermark. IJCVR 10(5), 373–384 (2020a). https://doi.org/10.1504/IJCVR.2020.109389
Khaldi, A.: An invariable to rotation data hiding scheme for semi-fragile blind watermark. J. Telecommun. Electron. Comput. Eng. 12, 65–69 (2020)
Kim, P.H., Yoon, E.J., Ryu, K.W., Jung, K.H.: Data-hiding scheme using multidirectional pixel-value differencing on colour images. Secur. Commun. Netw. (2019). https://doi.org/10.1155/2019/9038650
Liu, X.-L., Lin, C.-C., Yuan, S.-M.: Blind dual watermarking for color images' authentication and copyright protection. IEEE Trans. Circuits Syst. Video Technol. 28, 1047–1055 (2018)
Liu, X., Lin, C.-C., Muhammad, K., Al-Turjman, F., Yuan, S.-M.: Joint data hiding and compression scheme based on modified BTC and image inpainting. IEEE Access 7, 116027–116037 (2019)
Makbol, N.M., Khoo, B.E., Rassem, T.H.: Block-based discrete wavelet transform singular value decomposition image watermarking scheme using human visual system characteristics. IET Image Process. 10, 34–52 (2016)
Moosazadeh, M., Ekbatanifard, G.: A new DCT-based robust image watermarking method using teaching-learning-based optimization. J. Inf. Secur. Appl. 47, 28–38 (2019)
Mun, S.-M., Nam, S.-H., Jang, H., Kim, D., Lee, H.-K.: Finding robust domain from attacks: a learning framework for blind watermarking. Neurocomputing 337(14), 191–202 (2019)
Narima, Z., Khaldi, A., Redouane, K., Fares, K., Salah, E.: A DWT-SVD based robust digital watermarking for medical image security. Forensic Sci. Int. (2021). https://doi.org/10.1016/j.forsciint.2021.110691
Su, Q., Yuan, Z., Liu, D.: An approximate Schur decomposition-based spatial domain color image watermarking method. IEEE Access 7, 4358–4370 (2019)
Roy, R., Ahmed, T., Changder, S.: Watermarking through image geometry change tracking. Vis. Inf. 2(2), 125–135 (2018)
Sadeghi, M., Toosi, R., Akhaee, M.A.: Blind gain invariant image watermarking using random projection approach. Signal Process. 163, 213–224 (2019)
Sari, C.A., Ardiansyah, G., Setiadi, D.R.I.M., Rachmawanto, E.H.: An improved security and message capacity using AES and Huffman coding on image steganography. TELKOMNIKA 17, 2400–2409 (2019)
Tan, N.M., Liu, J., Wong, D.W.K., Lim, J.H., Li, H., Patil, S.B., Yu, W., Wong, T.Y.: Automatic Detection of Left and Right Eye in Retinal Fundus Images, pp. 610–614. Springer, Berlin Heidelberg (2009)
Valandar, M.Y., Ayubi, P., Barani, M.J.: A new transform domain steganography based on modified logistic chaotic map for color images. J. Inf. Secur. Appl. 34, 142–151 (2017). https://doi.org/10.1016/j.jisa.2017.04.004
Valandar, M.Y., Barani, M.J., Ayubi, P.: A blind and robust color images watermarking method based on block transform and secured by modified 3-dimensional Hénon map. Soft Comput. 24(2), 771–794 (2020). https://doi.org/10.1007/s00500-019-04524-z
Wang, X.-Y., Zhang, S.-Y., Wen, T.-T., Yang, H.-Y., Niu, P.-P.: Coefficient difference based watermark detector in nonsubsampled contourlet transform domain. Inf. Sci. 503, 274–290 (2019)
Xiong, X.: Novel scheme of reversible watermarking with a complementary embedding strategy. IEEE Access 7, 136592–136603 (2019)

Publisher’s Note  Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
