

Improving the Visual Quality of Size-Invariant Visual Cryptography for Grayscale Images: An Analysis-by-Synthesis (AbS) Approach

Bin Yan, Member, IEEE, Yong Xiang, Senior Member, IEEE, and Guang Hua, Member, IEEE

Abstract—In visual cryptography (VC) for grayscale images, size reduction leads to poor perceptual quality in the reconstructed secret image. To improve the quality, current efforts are limited to the design of VC algorithms for binary images, and to measuring the quality with metrics that are not directly related to how the human visual system perceives halftone images. We propose an analysis-by-synthesis (AbS) framework to integrate the halftoning process and the VC encoding: the secret pixel/block is reconstructed from the shares in the encoder, and the error between the reconstructed secret and the original secret image is fed back and compensated concurrently by the error diffusion process. In doing so, the error between the reconstructed secret and the original secret is pushed to the high frequency band, thus producing a visually pleasing reconstructed secret image. This framework is simple and flexible in that it can be combined with many existing size-invariant VC algorithms, including probabilistic VC, random grid VC, and vector/block VC. More importantly, it is proved that this AbS framework is as secure as the traditional VC algorithms. Experimental results demonstrate the effectiveness of the proposed AbS framework.

Index Terms—Size-invariant visual cryptography, grayscale image, visual quality, analysis-by-synthesis (AbS), vector visual cryptography.

Manuscript received June 14, 2016; revised May 22, 2017, July 9, 2018, and September 29, 2018; accepted October 2, 2018. Date of publication October 8, 2018; date of current version October 22, 2018. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61272432, in part by the Shandong Provincial Natural Science Foundation under Grant ZR2014JL044, in part by the Australian Research Council under Grant LP170100458, and in part by the Hubei Provincial Natural Science Foundation of China under Grant 2018CFB225. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Patrick Le Callet. (Corresponding author: Yong Xiang.)
B. Yan is with the College of Electronics, Communication and Physics, Shandong University of Science and Technology, Qingdao 266590, China (e-mail: yanbinhit@hotmail.com).
Y. Xiang is with the School of Information Technology, Deakin University, Melbourne, VIC 3125, Australia (e-mail: yxiang@deakin.edu.au).
G. Hua is with the School of Electronic Information, Wuhan University, Wuhan 430072, China (e-mail: ghua@whu.edu.cn).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2018.2874378

I. INTRODUCTION

Visual cryptography (VC) is a secret sharing technique in which decoding is done by stacking shares [1]. The secret holder splits his secret image into n shares, prints them onto transparencies, and then distributes the transparencies to n participants. For some designated groups of participants (the qualified sets), if they superimpose their transparencies, the secret image can be reconstructed with sufficient fidelity and can be identified by the human visual system (HVS). But for other groups of participants, i.e., those in the forbidden sets, no information about the secret should be recoverable from the shares. The first requirement of VC is called the contrast condition, and the second one is called the security condition. In a (k, n)-threshold VC, any group with d < k participants belongs to the forbidden set, and any group with d ≥ k participants belongs to the qualified set.

As a cryptographic algorithm, the security condition is a hard constraint: if the security condition is violated, the system is considered unacceptable. So when designing a VC algorithm for grayscale images, one should optimize the visual quality subject to the security constraint. This is different from some information hiding systems for halftone images that use superposition for watermark decoding, where the fidelity is optimized with no security constraint [2].

Improving the quality of the reconstructed secret image (which will be referred to as the target image in this paper) and reducing the size expansion have been the focuses of VC research in recent years. VC with small or no size expansion is usually preferred since it imposes less burden on the processing, storage and transmission of the shares. Many algorithms have been proposed for size-invariant VC, including the random grid (RG) algorithms [3]–[5], probabilistic algorithms [6], [7], and block encoding algorithms [8], [9]. However, it is observed that reducing the size expansion brings low perceptual quality to the target image, especially for grayscale secret images.

Grayscale VC generates n binary share images from a grayscale secret image. This paper focuses on improving the visual quality of the recovered secret image under this setting. This focus is different from previous works on improving the visual quality of the share images in extended VC [10], [11], where the secret image is a simple binary logo image. It is also different from the work in [12], where the secret image is a binary image obtained by thresholding a grayscale image. Furthermore, we focus on strict-sense VC, where decoding is done by stacking, not by the XOR operation [13].

Current research on size-invariant grayscale VC can be categorized into two frameworks:
• Separate halftoning and VC encoding [8], [14]–[18]. The grayscale secret image is first halftoned or simply quantized by a two-level quantizer to generate a binary image (these two types of binary images will be called the halftone image and the thresholded image, respectively). Then the various size-invariant VC algorithms for binary images can be applied to this binary image. This category encompasses most of the existing algorithms.

• Direct block quantization and mapping [9], [19]. The average intensity of a small block is quantized to different levels, and these levels are in turn represented by different binary patterns in the target image to approximate the local intensity.

Two major problems exist in these frameworks. First, the inherent lossy mapping from the secret pixel/block to the reconstructed pixel/block (which will be referred to as the target pixel/block/vector) is not compensated by other pixels/blocks. Due to the security condition, the target image has limited ability to represent the binary secret image. So from the secret block to the target block, there is a many-to-one lossy mapping.

Second, the perceptual quality of the target image is not directly addressed, or the metrics used do not reflect how the HVS perceives a halftone image. In particular, the important spectral property of the target image is not considered in the design and evaluation of VC algorithms. The block variance and the global contrast are two commonly used quality metrics. The block variance is not a metric for measuring the quality of a halftone target image [8], [14], and the global contrast metric is designed for binary secret images. Recently, efforts have been made to produce blue noise patterns on the shares [17]. However, the stacking or random sampling of two blue noise shares is no longer blue noise, so this type of algorithm works well only on thresholded grayscale images.

To address the two problems outlined above, we propose an analysis-by-synthesis (AbS) framework: the encryption of the secret image into shares is considered an analysis step, while the stacking of the shares to reconstruct the secret image (i.e., the target image) is a synthesis step. In AbS, we let the target image be fed back to the analysis process, so that the difference between the current target block/pixel and the grayscale secret block/pixel is known to the encryption process. The halftoning in the encryption process can then adjust its strategy in producing the binary block/pixel for the VC encoder, in order to compensate for this difference. Specifically, we put the VC encryption and decoding stages into the loop of error diffusion-based halftoning. This error diffusion is different from the error diffusion in ordinary halftoning: it is not the error between the halftone image and the grayscale image that is fed back, but the error between the original secret image and the target image (whose quality is of concern).

This AbS framework is quite flexible in that it can be used in any (n, n)-threshold scheme, including probabilistic VC, random grid VC and block VC. The resulting system is perfectly secure if the basic VC algorithm is perfectly secure. Since a local or temporary stacking result needs to be fed back to the halftoning stage, this framework is suitable for error diffusion-based halftoning and halftoning using direct binary search (DBS), but is not suitable for halftoning using ordered dithering.

The contributions of this work can be summarized as follows:
• An AbS framework for (n, n)-threshold VC that significantly improves the quality of the target image while satisfying the security condition (in the sense of perfect secrecy). Using this framework, we also provide detailed algorithms for AbS-based probabilistic VC and AbS-based vector VC.
• The introduction and design of quality metrics for the target image that are relevant to the HVS. We introduce the radially averaged power spectral density (RAPSD), a quality metric that has traditionally not been used in evaluating the quality of the target image in VC. In addition, we design a new metric, the residual variance, to quantify the level of residual noise in the frequency band to which the HVS is sensitive.

This paper is organized as follows. We formulate the size-invariant grayscale VC problem in Section II. The AbS-based probabilistic VC and the AbS-based vector VC are presented in Sections III and IV, respectively, along with the corresponding experimental results. The conclusion is given in Section V.

II. PROBLEM FORMULATION

In this section, we first define the security condition and the contrast condition for size-invariant VC with a grayscale secret image. Then, we present our general AbS framework.

A. Representing Pixels on a Transparency by Light Transmittance

In VC, '1' ('0' resp.) is used to denote a black (white resp.) pixel, and the stacking operation is logic 'OR'. This is advantageous for binary VC, but it is not directly related to the physical quantities that characterize the light transmission through the transparencies or the intensity of light from a display device, so it is not suitable for joint halftoning and VC. This paper uses light transmittance to represent pixels on transparencies or display devices, which is also convenient for developing multitone VC.

Light transmittance is the ratio between the intensity of the outgoing light and the intensity of the incoming light. A region with light transmittance 0 ≤ β ≤ 1 lets only a proportion β of the incoming intensity appear in the outgoing light. A white pixel is represented by β = 1, while for a black pixel β = 0. This representation is convenient for joint halftoning and VC encryption. Furthermore, the extension to multitone printing is straightforward. If we stack two points having light transmittances β1 and β2, respectively, then the resulting transmittance is β1·β2.

Denote vectors and matrices by boldface letters. The grayscale secret image J[n] has M rows and N columns, where n = [nr, nc] is the two-dimensional index with row index nr and column index nc. It should be noted that the scalar n is used to denote the number of shares in VC.
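To make the transmittance convention of Section II-A concrete, the following minimal Python sketch (not part of the original paper) represents two small shares by their transmittances and stacks them by element-wise multiplication. The array values and shapes are arbitrary illustrative choices.

```python
import numpy as np

# Transmittance convention of Section II-A: a white pixel has beta = 1,
# a black pixel beta = 0, and stacking two transparencies multiplies
# their transmittances element-wise.
share1 = np.array([[1.0, 0.0],
                   [1.0, 1.0]])   # transmittances of share 1
share2 = np.array([[1.0, 1.0],
                   [0.0, 1.0]])   # transmittances of share 2

stacked = share1 * share2          # beta = beta1 * beta2 at every pixel
print(stacked)                     # [[1. 0.] [0. 1.]]
```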

Fig. 1. The general structure of the VC system for grayscale image.

B. Definition of Size-Invariant VC for Grayscale Images

The general structure of the VC system for grayscale images is shown in Fig. 1. The secret image J[n] is a grayscale image. It is divided into n share images s1[n], · · · , sn[n] by the VC encoder. These share images are binary images with meaningless appearance, and are printed on transparencies. The VC decoder can recover the content of the secret image by simply stacking the shares. The reconstructed secret image (i.e., the target image) Ĵ[n] is obtained from

Ĵ[n] = ∏_{i=1}^{n} s_i[n].    (1)

In this work, we focus on the design of the grayscale VC encoder.

Before designing the size-invariant VC for grayscale images, we need to identify and define the desired properties of this VC system. More specifically, we focus on such a system with halftone/multitone share images. The definition is motivated by the specialty of the halftone/multitone share images. Given a constant grayscale image J[n] = g, ∀n, and the corresponding halftone image I[n], suppose that the set of colors on the halftone image is D = {g1, · · · , gℓ}. The marginal distribution of color gi on I[n] is Pr{I[n] = gi} = pi, with ∑_{i=1}^{ℓ} pi = 1. Then the halftone image I[n] should satisfy E(I[n]) = g, where E is the mathematical expectation. Motivated by this random process model, we give the following definition of size-invariant VC for grayscale secret images with halftone/multitone shares (size-invariant grayscale multitone VC, or SIGM-VC).

Definition 1 ((k, n, g, c)-SIGM-VC): Let the set of colors/grayscales of the secret image be C, and those used by the VC encoder be E. Define g to be the number of colors in C and c to be the number of colors in E, i.e., g = |C| and c = |E|. Given two constant grayscale images J1[n] = g1 and J2[n] = g2 with 0 ≤ g1 < g2 ≤ 1, each image is divided into n shares, {s_1^1[n], · · · , s_n^1[n]} and {s_1^2[n], · · · , s_n^2[n]}, with s_i^ℓ ∈ E. If the following two conditions are satisfied, then the secret sharing algorithm is called a grayscale multitone VC.

1) Contrast condition: Let i1, · · · , ik be k participants; then the stacking of the k shares,

ŷ_ℓ[n] ≜ ∏_{i=i1}^{ik} s_i^ℓ[n],  ℓ = 1, 2,    (2)

satisfies E(ŷ1[n]) < E(ŷ2[n]).

2) Security condition: Let i1, · · · , id be d participants with d < k. Then, ∀n, the following two random vectors are equivalent:

f1 = [s_{i1}^1[n], · · · , s_{id}^1[n]]^T,  f2 = [s_{i1}^2[n], · · · , s_{id}^2[n]]^T,    (3)

i.e., they have the same sample space and probability distribution, and are independent of n. Equivalently, given the random vector f_ℓ ≜ [s_{i1}^ℓ[n], · · · , s_{id}^ℓ[n]], where ℓ = 1, 2, no additional information about J[n] can be derived, i.e., Pr{J[n] = a} = Pr{J[n] = a | f_ℓ}, ∀a ∈ C.

Remark 1: The requirement that the distributions of f1 and f2 be independent of n is to ensure that the pixels on the same share are independent of each other. This definition is also valid for multitone shares [20].

Fig. 2. The block diagram of the proposed framework.

C. The Proposed Framework

The proposed framework is illustrated in Fig. 2. The VC encryption and stacking are done on a pixel/block-by-pixel/block basis. The input image J[n] is mapped by gamut mapping to account for the difference between the input gamut and the output gamut. After that, the mapped image x[n] is halftoned and then encrypted by VC to generate the share images s1, · · · , sn. These share images are stacked in the encoder to obtain the target image ŷ[n]. This synthesis result is fed back to the digital halftoning process, so as to compensate for the difference between y[n] and ŷ[n]. This can be formulated as an optimization problem.

Let the mapping from y[n] to ŷ[n] be denoted as ŷ[n] = S(y[n]). Due to the security constraint of the basic VC algorithm, this mapping is stochastic. For example, in the (2, 2)-threshold probabilistic VC [21], we have Pr(ŷ[n] = 0 | y[n] = 0) = 1 and Pr(ŷ[n] = 0 | y[n] = 1) = Pr(ŷ[n] = 1 | y[n] = 1) = 1/2. In order to retain the security of the original binary VC, we keep the VC encryption algorithm intact and carefully change the input to the VC encryption, i.e., y[n]. By doing so, we intend to minimize the perceived difference between the target image ŷ[n] and the original secret image x[n] (after gamut mapping). So we have the following stochastic optimization problem:

min_y ∑_{∀n} ([w ⊗ (S(y) − x)][n])²,    (4)

where ⊗ denotes convolution and w is the filter model of the HVS, such as the alpha stable model [22]. Note that the problem in (4) is different from ordinary halftoning. In ordinary halftoning, one intends to minimize the perceived difference between y and x, i.e., min_y ∑_{∀n} ([w ⊗ (y − x)][n])², which is a deterministic optimization problem. Note also that our formulation is different from the halftone watermarking model in [23], where the security condition in VC is not considered.

As with ordinary halftoning, one could elaborate a DBS algorithm to solve this stochastic optimization problem, but DBS is computationally expensive.
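As an illustration of the stochastic synthesis mapping S(·), the sketch below implements the (2, 2)-threshold probabilistic VC for a single pixel in transmittance form and verifies empirically that a white pixel survives stacking only about half of the time. This is a toy implementation for intuition only; the function names and the random generator are our own choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt_pixel_2oo2(y):
    """(2,2)-threshold probabilistic VC for one binary pixel.

    y is the secret pixel in transmittance form (1 = white, 0 = black).
    Returns the two share pixels; stacking is their product.
    """
    t = rng.integers(0, 2)          # fair coin, uniform on {0, 1}
    if y == 1:                      # white: identical share pixels
        s1, s2 = t, t
    else:                           # black: complementary share pixels
        s1, s2 = t, 1 - t
    return s1, s2

def simulated_stacking(s1, s2):
    """The synthesis step S(y): stack the shares inside the encoder."""
    return s1 * s2

# The mapping y -> S(y) is stochastic: a black pixel is always reproduced,
# a white pixel survives stacking only with probability 1/2.
samples = [simulated_stacking(*encrypt_pixel_2oo2(1)) for _ in range(10000)]
print(np.mean(samples))             # close to 0.5
```

Note that each share pixel taken alone is uniform over {0, 1} regardless of the secret pixel, which is what the security condition requires.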

Fig. 3. The structure of AbS-based probabilistic VC system.

A good balance between performance and complexity is offered by the error diffusion algorithm, so we focus on an error diffusion-based solution to the problem in (4). The block diagram is shown in Fig. 3. In this framework, the HVS model w is implicitly incorporated into the diffusion filter and error diffusion model.

III. AbS-BASED PROBABILISTIC VC

In this section, we combine the AbS framework with an (n, n)-threshold probabilistic VC. Both the probabilistic VC and the random grid VC are pixel based, i.e., each pixel is encrypted independently. The layout and performance of the algorithm for AbS-based random grid are quite similar to those of the algorithm for AbS-based probabilistic VC, which can be explained by the equivalence between probabilistic VC and RG VC established by Yang et al. [21]. So AbS-based random grid is not discussed as a separate scheme.

The probabilistic VC utilizes the basic matrices from the deterministic VC [6], [7]. Instead of assigning all of the permuted columns of the basic matrix to the shares, only one column is selected randomly and assigned to one pixel on each share. It is proved by Wang et al. [24] that for each deterministic grayscale VC, there exists a corresponding probabilistic VC with flexible size expansion, which can be as small as 1.

A. Overall Structure

The structure of the AbS-based probabilistic VC is shown in Fig. 3. The corresponding algorithm, (n, n)-AbSProb (AbS-based probabilistic VC for grayscale images), is summarized in Algorithm 1. The construction of the basic matrices for (n, n)-threshold VC can be found in [6], [7]. In this work, we use the construction from [7], resulting in an average contrast of 1/2^{n−1}.

1) Gamut Mapping: The gamut mapping step ensures that the system is stable in the BIBS (bounded input, bounded state) sense, i.e., that the error e[n] is bounded. To achieve this, the dynamic range of the input should be bounded by the convex hull of the colors at the output of the loop [23], [25]. Let the dynamic range of the input image J[n] be [Jmin, Jmax], and the set of colors on the target image be D = {β1, · · · , βt : β1 < · · · < βt}. Then the convex hull of the output colors is simply [β1, βt]. A gamut mapping G : [Jmin, Jmax] → [β1, βt] is needed to limit the dynamic range of the input to the error diffusion loop. We consider two gamut mappings: an affine mapping G_A and a special nonlinear mapping G_N.

The affine mapping G_A can retain the relative contrast between adjacent gray levels. In addition, its parameters are easier to determine than those of nonlinear mappings. Let G_A be

G_A : x = k·J + b,    (5)

then the parameters k and b can be determined as

k = (βt − β1)/(Jmax − Jmin),  b = (β1·Jmax − βt·Jmin)/(Jmax − Jmin).    (6)

If the original secret image has poor contrast due to bad illumination during image acquisition, then histogram equalization may be needed to enhance the details. So the nonlinear gamut mapping G_N considered here is an affine mapping combined with histogram equalization. Let the histogram of the original secret image be p(j) with support on [Jmin, Jmax]. Then, after the affine transformation z = k·J + b, the histogram of z is p_z(z) = p((z − b)/k) with support on [β1, βt].
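A direct transcription of the affine gamut mapping (5)–(6) into Python is given below; the function name and the example values of β1 and βt (taken from the (2, 2)-threshold discussion in the text) are illustrative assumptions.

```python
import numpy as np

def affine_gamut_mapping(J, beta1, beta_t):
    """Affine gamut mapping G_A of (5)-(6): map [Jmin, Jmax] onto [beta1, beta_t].

    J is the grayscale secret image scaled to [0, 1]; beta1 and beta_t are the
    smallest and largest average transmittances the stacking loop can produce.
    """
    Jmin, Jmax = float(J.min()), float(J.max())
    k = (beta_t - beta1) / (Jmax - Jmin)
    b = (beta1 * Jmax - beta_t * Jmin) / (Jmax - Jmin)
    return k * J + b

# For (2,2)-threshold probabilistic VC the average output gamut is [0, 1/2]
# (a white input pixel is reproduced with probability 1/2), so beta1 = 0 and
# beta_t = rho * 1/2 with rho = 1 as suggested in the text.
J = np.linspace(0.0, 1.0, 5)
print(affine_gamut_mapping(J, beta1=0.0, beta_t=0.5))
```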

Finally, after the equalization G_E : z → x, we have

x = β1 + (βt − β1) ∫_{β1}^{z} p((ζ − b)/k) dζ.    (7)

So the nonlinear gamut mapping G_N can be written as the composition of two operators: G_N = G_E ∘ G_A.

Unfortunately, for probabilistic and RG VC, the output color is not deterministic. For example, for the (2, 2)-threshold probabilistic VC, even though D = {0, 1}, on the target image Pr{ŷ[n] = 1 | y[n] = 1} = 1/2. Thus, on average, a white pixel y[n] only has a 50% chance of being reproduced exactly by the decoder, resulting in an average light transmittance of 1/2. So the output gamut cannot be set to [0, 1]. Fortunately, from the stability analysis of the error diffusion loop [25], when the average intensity of the input is slightly outside the output gamut, the damage to the image is small and local. So in this case, we can set β1 = E{ŷ[n] | y[n] = 0} and βt = ρ · E{ŷ[n] | y[n] = 1}, where the parameter 0 < ρ ≤ 1 is used to control the upper bound of the input gamut. As ρ decreases, the probability of out-of-gamut pixels becomes smaller, but the dynamic range of the target image ŷ[n] also decreases. Therefore, there is a tradeoff in selecting ρ. From our experiments, we found that setting ρ = 1 produces satisfactory results while keeping the global dynamic range as high as possible.

2) Diffusion Kernel: The error diffusion kernel h[n] can be any stable blue noise diffusion kernel, such as the Floyd–Steinberg kernel [26], the Jarvis kernel [27], or the Stucki kernel [28]. Although the diffusion kernel can also be optimized or made adaptive, this issue is beyond the scope of the current paper.

B. Theoretical Validation

We show that Algorithm 1 is a valid construction of an (n, n, g, 2)-SIGM-VC. The proof of this result hinges on the contrast property and the security property of the basic (n, n)-ProbVC [7]. We summarize them as a lemma.

Lemma 1: Let the set C0 (C1 resp.) contain all the column vectors from the basic matrix B0 (B1 resp.) in deterministic VC [1]. When sharing a black (white resp.) pixel, the encoder randomly chooses one vector from C0 (C1 resp.), randomly permutes its rows, and then assigns one row to each share. Then the (n, n)-ProbVC scheme in [7] satisfies the following contrast and security conditions. i) Contrast condition: stacking all n shares, the stacking result ŷ[n] satisfies Pr{ŷ[n] = 1 | y[n] = 1} ≥ pTH and Pr{ŷ[n] = 1 | y[n] = 0} ≤ pTH − α, where pTH is a threshold and α > 0. ii) Security condition: for every subset {i1, · · · , iq} ⊂ {1, 2, · · · , n} with q < n, the two sets obtained by restricting each vector in C0 and C1 to the rows i1, · · · , iq contain the same vectors with the same frequencies.

Proof: The proof of Lemma 1 can be found in [7]. Furthermore, the pixels on one share are independent of each other, due to the randomization in the ProbVC encoding.

Theorem 1: The construction of the (n, n)-AbSProb algorithm in Algorithm 1 is a valid (n, n, g, 2)-SIGM-VC.

Proof: Referring to the symbols in Fig. 3, we show that the security condition and the contrast condition as defined in Definition 1 are satisfied.

Security condition: Suppose d < n and let C0d and C1d be the two sets of vectors obtained by restricting the vectors in C0 and C1 to the rows i1, · · · , id, respectively. Then, from the security condition of the (n, n)-ProbVC, the two sets C0d and C1d contain the same vectors with the same frequencies. So this security condition ensures perfect secrecy as defined by Shannon [29], i.e., no information about y[n] can be derived from the d shares. For simplicity, let sd ≜ [s_{i1}[n], · · · , s_{id}[n]]; then we have

Pr{y[n]} = Pr{y[n] | sd}.    (8)

Note that x[n] → y[n] → sd forms a Markov chain, hence sd → y[n] → x[n] is also a Markov chain. So, given any d shares, the a posteriori probability of x[n] is

Pr{x[n] | sd} = ∑_{y[n]∈E} Pr{x[n], y[n] | sd}
             = ∑_{y[n]∈E} Pr{y[n] | sd} Pr{x[n] | sd, y[n]}    (9)
             = ∑_{y[n]∈E} Pr{y[n]} Pr{x[n] | y[n]}    (10)
             = Pr{x[n]}.

From (9) to (10), we use the result in (8) and the one-step dependence property of the Markov chain. So we conclude that, if the probabilistic VC is perfectly secure, then our AbS-based probabilistic VC is also perfectly secure, i.e., given any d < n shares, no additional information about x[n] can be derived. Furthermore, even though y[n] may have spatial dependency, the pixels on one share are independent of each other, by the property of the basic ProbVC.

Contrast condition: To show the contrast condition in Definition 1, we need to find E(ŷ[n]). For this purpose, a statistical model for the binary quantizer is needed. For a binary quantizer, the quantization error y[n] − x̂[n] is correlated with the input x̂[n]. A linear model y[n] = γ·x̂[n] + q[n] can characterize this dependency [30]. The coefficient γ can be determined by least squares fitting between the model and the quantizer input/output data [31]. For natural images, γ may vary from image to image; for a constant input image, γ also depends on the input level g. The term q[n] is the part of the quantizer error that cannot be described by the linear correlation term, and is assumed to be independent of the input x̂[n]. Based on this quantizer model, a simplified and equivalent block diagram for Fig. 3 is derived, as shown in Fig. 4.

When n shares are stacked, the combined effect of VC encryption and simulated stacking in Fig. 3 can be modelled as a probabilistic non-symmetric binary channel. Here we only need E(ŷ[n]). From the contrast condition of the (n, n)-ProbVC, we have E(ŷ[n]) = κ · E(y[n]), where κ = 1/2^{n−1} (see [7, Th. 3]). This implies that, if we focus on the expected values of all the signals, then the equivalent system is linear time invariant (LTI).
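The following sketch puts the pieces of Fig. 3 together for the (2, 2)-threshold case: a raster-scan error-diffusion loop in which the binary quantizer output is encrypted by probabilistic VC, the shares are stacked inside the encoder, and the error between the compensated input x̂ and the stacking result ŷ is diffused with the Floyd–Steinberg kernel. It is a simplified stand-in for Algorithm 1, not the authors' implementation; the boundary handling and the quantizer threshold are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Floyd-Steinberg weights as (row offset, column offset, weight)
FS = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]

def abs_prob_vc_2oo2(x):
    """Sketch of the AbS loop of Fig. 3 for (2,2)-threshold probabilistic VC.

    x is the gamut-mapped secret image (values roughly in [0, 1/2]).
    Returns the two binary shares and the simulated stacking result y_hat.
    The error fed back is x_hat - y_hat, i.e. the error between the
    compensated secret and the reconstructed target image.
    """
    M, N = x.shape
    x_hat = x.astype(float).copy()
    s1 = np.zeros((M, N), dtype=int)
    s2 = np.zeros((M, N), dtype=int)
    y_hat = np.zeros((M, N), dtype=int)
    for r in range(M):
        for c in range(N):
            y = 1 if x_hat[r, c] >= 0.5 else 0      # binary quantizer
            t = rng.integers(0, 2)                  # probabilistic VC encryption
            s1[r, c], s2[r, c] = (t, t) if y == 1 else (t, 1 - t)
            y_hat[r, c] = s1[r, c] * s2[r, c]       # simulated stacking (synthesis)
            e = x_hat[r, c] - y_hat[r, c]           # analysis-by-synthesis error
            for dr, dc, w in FS:                    # diffuse the error forward
                rr, cc = r + dr, c + dc
                if 0 <= rr < M and 0 <= cc < N:
                    x_hat[rr, cc] += w * e
    return s1, s2, y_hat

x = np.full((64, 64), 0.25)                         # constant test image, g = 1/4
s1, s2, y_hat = abs_prob_vc_2oo2(x)
print(y_hat.mean())                                 # close to 0.25
```

The feedback keeps the accumulated error bounded, so the mean of the stacking result tracks the gamut-mapped input, which is exactly the contrast argument made above.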

Fig. 4. Equivalent block diagram for the analysis of E(ŷ[n]).

For the equivalent LTI system, we can find the transfer functions for the signal x[n] and the noise q[n], respectively [31]. For clarity, we use the subscript E to denote the expected value of a random variable, and the corresponding upper case letter for its Z-transform. For example, x_E[n] ≜ E{x[n]} and X_E(z) ≜ Z{x_E[n]}.

Let γs and γq be the values of γ for the signal path and the noise path, respectively. Then, for the signal path, we have

E_E(z) = (1 − γs·κ)·X̂_E(z),    (11)
X̂_E(z) = X_E(z) + H(z)·E_E(z),    (12)
Ŷ_E(z) = γs·κ·X̂_E(z).    (13)

So the signal transfer function (STF) is

H_s(z) = γs·κ / (1 + (γs·κ − 1)·H(z)),    (14)

where H(z) is the Z-transform of the error diffusion filter. Similarly, we can find the noise transfer function (NTF) H_n(z). So the output is

Ŷ_E(z) = [γs·κ / (1 + (γs·κ − 1)·H(z))]·X_E(z) + [κ·(1 − H(z)) / (1 + (γq·κ − 1)·H(z))]·Q_E(z),    (15)

where the first factor is the STF and the second is the NTF. Note that the error diffusion filter is low-pass with unit DC gain, so it is easy to verify that the STF is low-pass with unit DC gain and the NTF is high-pass with zero DC gain. Hence, if the input is x[n] = g, then E(ŷ[n]) = g.

Note that E(q[n]) can be nonzero. Since the noise transfer function (NTF) between ŷ[n] and q[n] is high-pass, this nonzero mean value is suppressed by the loop. Hence it does not affect the mean value of the output ŷ[n].

So, we have shown that the average value of the stacking result ŷ[n] is equal to the input gray level. If we have two constant images x1[n] = g1 and x2[n] = g2 with g1 < g2, then it follows that E(ŷ1[n]) < E(ŷ2[n]). So the contrast condition is satisfied.

C. Quality Metrics for the Target Image

In this subsection, we first briefly discuss the blue noise property that quantifies the perceptual quality of the target image. Then we design a new metric, the residual variance, to measure the residual noise power within the band of the HVS, and introduce two fidelity metrics between the original secret image and the target image.

The perceptual quality of a halftone image is related to the spectral properties of the point process of the halftone dots. To characterize the spectral properties, several constant images are usually halftoned by the halftoning algorithms, and then the spectral measures, such as the RAPSD, are calculated on the halftone images. The proposed residual variance can be thought of as a measure of the noise power in the low frequency bands to which the HVS is most sensitive.

1) Blue Noise and Its Spectral Characterization: A dither pattern from halftoning can be modeled as a spatial random point process, whose spectral features correlate closely with its perceptual quality. To produce visually pleasing dither patterns, the minority pixels must be scattered as far as possible from each other. This requirement is reflected as blue noise (or high frequency noise) on the spectrum [22].

From the 2D spatial point process, it is not difficult to estimate the power spectral density (PSD) P̂(f), where f is the two-dimensional frequency [22]. By partitioning the 2D frequency domain into a series of annular rings, we can obtain two convenient 1D statistics: the RAPSD P(fρ) and the anisotropy A(fρ). Let the radius and the width of the annular ring R(fρ) be fρ and Δρ, respectively; then the RAPSD is

P(fρ) = (1/|R(fρ)|) ∑_{∀f∈R(fρ)} P̂(f),    (16)

where |R(fρ)| denotes the number of elements in the set R(fρ). For a visually pleasing blue noise pattern, the RAPSD has a zero low-frequency component and a peak at the principal frequency, followed by a flat high frequency region [32].

2) Residual Variance: For most sample-based VC, such as probabilistic VC and RG, the target image is not a blue noise pattern. Hence, after processing by the HVS model, a significant amount of noise is left within the frequency band to which the HVS is sensitive. So if the testing image is a constant image, then we propose to use the residual noise power after applying the HVS model as a measure of perceptual quality. We call this the residual variance. The residual variance measures the power of the noise within the band of the HVS.

Let the target image be ŷ[n], and let the HVS be simply modeled as a Gaussian lowpass filter Gσ[n] with standard deviation σ. Then the noise that is 'visible' to the HVS is ȳ[n] = ŷ[n] ⊗ Gσ[n]. So the residual variance is calculated as

V_R = Var(ȳ[n]) = E[(ȳ[n] − E(ȳ[n]))²].    (17)

Obviously, if ŷ[n] is a blue noise pattern, then ȳ[n] is nearly constant and V_R is close to zero. So the smaller V_R is, the better the perceptual quality. Compared to the RAPSD, the residual variance is a summary of the quality at each gray level. So a 2D plot of the residual variance can help us explore the quality for all possible gray levels, while the RAPSD needs a 3D plot.

To validate the residual variance (RV) as a quality measure, we perform a small-scale subjective test. Thirty persons with normal vision and ages between 22 and 45 are involved in the experiment. Each person is asked to evaluate 50 printed target images according to a five-scale scoring [33].
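The two spectral quality measures of this subsection can be estimated as in the sketch below: rapsd() averages the 2D periodogram over annular rings as in (16), and residual_variance() implements (17) with a Gaussian HVS filter (σ = 2 with a ±3σ support, matching the setting used in the experiments that follow). The binning strategy and the white-noise test pattern are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rapsd(pattern, n_bins=64):
    """Radially averaged power spectral density (16) of a binary dither pattern."""
    p = pattern - pattern.mean()                      # remove the DC term
    psd = np.abs(np.fft.fftshift(np.fft.fft2(p))) ** 2 / p.size
    M, N = p.shape
    fy = np.fft.fftshift(np.fft.fftfreq(M))
    fx = np.fft.fftshift(np.fft.fftfreq(N))
    f_rho = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency
    bins = np.linspace(0, f_rho.max(), n_bins + 1)
    idx = np.digitize(f_rho.ravel(), bins) - 1
    # average the PSD over each annular ring R(f_rho)
    radial = np.array([psd.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                       for i in range(n_bins)])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, radial

def residual_variance(target, sigma=2.0):
    """Residual variance (17): variance left after the Gaussian HVS filter."""
    y_bar = gaussian_filter(target.astype(float), sigma=sigma, truncate=3.0)
    return y_bar.var()

# Example on a random (white-noise) pattern with average gray level 1/4;
# a blue-noise pattern of the same gray level would give a much smaller value.
rng = np.random.default_rng(2)
pattern = (rng.random((256, 256)) < 0.25).astype(float)
f, P = rapsd(pattern)
print(residual_variance(pattern))
```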

Fig. 5. Scatter plot of residual variance against MOS.

The target images are generated using the algorithm in [7], where the secret image is a constant image with integer intensity randomly chosen from [0, 255]. The target images are of size 512 × 512 and are printed at 600 dpi by a Pantum P3000 laser printer. Finally, the mean opinion score (MOS) over the thirty persons is calculated. The result is reported in Fig. 5, which shows a close correlation between RV and MOS. The Pearson correlation coefficient is −0.9715.

In all our experiments, the standard deviation of the Gaussian filter is σ = 2, so a 13 × 13 square neighborhood should be chosen in order to cover the ±3σ range [20]. The residual variance is calculated as the sample variance of the filtered image.

3) Fidelity Measures: The fidelity between the target image and the original grayscale image is characterized by tone similarity and structural similarity. The tone similarity is measured by the Human Peak Signal-to-Noise Ratio (HPSNR) [20], which is the PSNR between two smoothed images using the lowpass filter model of the HVS. To calculate the HPSNR, both the original image and the target image are smoothed by a Gaussian filter w[n], and then the PSNR is calculated between these two smoothed images.

The structural similarity is measured by the Mean Structural Similarity (MSSIM) [34]. This metric combines the local luminance comparison, local contrast comparison and local correlation between two images into one metric [34]. It is calculated between the smoothed original image and the smoothed target image, using the same Gaussian filter as for the residual variance and the HPSNR.

D. Experimental Results

In this section, we compare the proposed algorithm with typical probabilistic VCs reported in recent papers.

1) Reference Algorithms: Since the purpose of this work is to improve the visual quality of the target image for a grayscale secret image, the reference algorithms should be limited to the ones addressing the same problem. More specifically, the reference algorithms in this section and in Section IV-B should satisfy: (1) the secret image is a grayscale image, so we exclude from comparison algorithms designed for binary secret images (such as binary logos or binary cartoon images) or simply thresholded grayscale images; (2) the goal is to improve the visual quality of the target image, not the quality of the share images, so we exclude algorithms related to extended VC from comparison, such as [10], [11], and [35]–[38].

Two algorithms are chosen for comparison. The first one is the application of Yang's probabilistic VC to the halftone image [7]. The second one is a recent algorithm constructed by Wang et al. [24], which operates directly on the grayscale image, so halftoning is not needed. For a fair comparison, the pixel expansion is set to 1 in Wang's algorithm [24].

Fig. 6. Stacking results for probabilistic VC. Each column corresponds to one algorithm and each row corresponds to one gray level. From left to right: Yang [7], Wang et al. [24], AbS, and error diffusion-based halftoning of J[n]/2. From top to bottom, gray level g = 1/8, 1/4, 1/2, 3/4, 7/8.

2) Visual Quality of the Target Image: First, we perform experiments to explore the spectral properties of the AbS-based approach. As is usually done in testing halftoning algorithms, several constant images J[n] = g are used as input. We choose g ∈ {1/8, 1/4, 1/2, 3/4, 7/8}. Using each constant image as the secret image, share images are generated and then stacked to obtain the target image ŷ[n]. Then the RAPSD and the residual variance are calculated to characterize the perceptual quality of ŷ[n].

Fig. 6 shows a sample of the stacking results. The results for Yang's algorithm [7], Wang's algorithm [24] and the proposed AbS-based algorithm are in the first, second and third columns. The halftoning result (from error diffusion without VC) for J[n]/2 is shown in the last column, which serves as a reference. J[n]/2 is used instead of J[n] because the output range of the three algorithms is decreased to 1/2 of the input range. Each sample image is of size 128 × 128. Visual inspection reveals that the AbS algorithm produces better and more visually pleasing results than Yang's and Wang's algorithms: fewer clusters of minority pixels are observed in AbS than in the other two. When Yang's algorithm and Wang's algorithm are compared, the minority dots are more evenly distributed in Yang's algorithm. This is understandable since Yang's algorithm is applied to the halftone image, where the minority dots are already distributed evenly by halftoning before VC encoding. On the other hand, halftoning and dot distribution are not considered in Wang's algorithm.

As g increases above 1/2, the performance of all three algorithms deteriorates. For Yang's algorithm, more clusters appear. The reason is that VC can reproduce black pixels exactly but only partially reproduce white pixels. So when g is small, black pixels dominate and white pixels are scattered evenly.

Fig. 7. RAPSD for probabilistic VC. Each row corresponds to one gray level. From top to bottom, gray level g = 1/8, 1/4, 1/2, 3/4, 7/8. The symbol on the horizontal axes indicates the blue noise principal frequency. Blue dashed line: RAPSD of the halftone image for J[n]/2 = g/2 from error diffusion. (a) Yang 2004 [7]. (b) Wang et al. 2011 [24]. (c) AbS.

Fig. 8. Comparison of residual variance for various probabilistic VC algorithms: Yang [7], Wang et al. [24], AbS, and error diffusion-based halftoning. (a) (2, 2)-threshold algorithms. (b) (3, 3)-threshold algorithms.

In this case, even though half of the white pixels are randomly flipped to black pixels, the minority white pixels are still scattered apart, thus producing a near blue noise effect. However, as g increases above 1/2, white pixels dominate and black pixels become the minority. When half of the white pixels are flipped by VC encoding and stacking, these flips are random, and hence clusters of black pixels appear on the target image. The same trend is observed in the AbS result. As g increases above 1/2, the chance that the input is outside of the output gamut increases. However, AbS still produces better perceptual quality than Yang's algorithm when observed from a sufficient distance. The superior performance of AbS can also be verified by the RAPSD and the residual variance.

In Fig. 7, the blue dash-dot curve is the RAPSD of the halftone image from error diffusion, which is plotted as a reference. The principal frequency is indicated on the horizontal axes. A visually pleasing blue noise pattern should distribute most of its power above the principal frequency, so that the noise power within the frequency band of the HVS is small.

First, the proposed AbS approach produces an almost zero DC component, especially for small g. This is due to the feedback loop in the AbS approach, where the combined noise from quantization, VC encoding and stacking is pushed to the high frequency band. We also note that the periodic pattern known as the 'banding effect' in error diffusion-based halftoning does not appear. This can be explained by the random noise injected by VC encoding and stacking into the diffusion loop; this random noise breaks the annoying periodic pattern. Second, we note that the noise in Wang's algorithm (middle column of Fig. 7) is white noise, containing all frequency components with comparable magnitude. This can be explained by the clusters of dots in Wang's result in Fig. 6. In addition, Yang's method has lower magnitude than Wang's method at low frequencies, but it is still significantly larger than the AbS approach.

In terms of residual variance, the results for the (2, 2)-threshold and (3, 3)-threshold algorithms are compared in Fig. 8(a) and Fig. 8(b), respectively. Obviously, the halftone image from error diffusion has the lowest residual variance due to the blue noise property of error diffusion. Among the three algorithms, the residual variance from AbS is the smallest. Wang's algorithm has the largest residual variance since its PSD is the largest in the low frequency band. The residual variance from Yang's algorithm is lower than that of Wang's algorithm for most g. For the proposed AbS-based approach, the residual variance is small when g < 150. When g > 150, the residual variance increases significantly due to the increased probability of out-of-gamut pixels, but it is still lower than that of Yang's algorithm. The residual variance result is consistent with the RAPSD result in the low frequency band.

In summary, the perceptual quality of the proposed AbS probabilistic VC is superior to that of the raw VC algorithms in this experiment. This is verified by subjective evaluation of sample images, the RAPSD and the residual variance. In addition, the residual variance is convenient when comparing different algorithms and different gray levels.

Even though the AbS algorithm is designed for halftone images, it works for simple binary patterns or thresholded grayscale secret images as well. However, unlike the algorithm in [39], where the same quality is ensured for the black region and the white region, the AbS algorithm reproduces the dark region with better quality than the white region. This can be explained by the residual variance in Fig. 8. It is important to note that the algorithm in [39] is not secure, while the AbS algorithm offers full security.

3) Test of Fidelity: The above experiments are carried out on images with constant intensities, which aims at understanding the perceptual quality of the proposed AbS method. Next, we test the fidelity of the AbS method on natural images, including a test on a sample image and a test on an image database.

The test result on the Lena image is shown in Fig. 9, including the original Lena image, the shares and the target image. In Fig. 10, we compare the result with the reference algorithms. The AbS approach produces the result with better fidelity than Yang's and Wang's algorithms. As can be seen from Fig. 10(c), the quality of darker areas is better than that of the brighter areas, which is consistent with the residual variance plot in Fig. 8.

In the batch test, we use a set of 24 images from the Kodak database, as shown in Fig. 11. The images are converted to their grayscale counterparts before testing. The metrics HPSNR and MSSIM are used to measure the fidelity of the target images. For the three algorithms, the output range decreases to 1/2^{n−1} of the input dynamic range, so the original secret image should be multiplied by 1/2^{n−1} to obtain the reference image.

The testing results for the quality metrics are shown in Fig. 12. The AbS approach significantly outperforms Yang's algorithm and Wang's algorithm, both in HPSNR and MSSIM.
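For reference, a minimal sketch of the HPSNR used in these comparisons is given below: both images are smoothed by the Gaussian HVS model and the ordinary PSNR is computed between the smoothed results. The peak value, the σ of the filter and the stand-in target image are assumptions for illustration; MSSIM can be computed analogously on the smoothed images with any standard SSIM implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hpsnr(original, target, sigma=2.0, peak=1.0):
    """Human-weighted PSNR: PSNR between the two images after the Gaussian
    lowpass HVS model, as used for the tone-similarity test in Section III-C."""
    a = gaussian_filter(original.astype(float), sigma=sigma, truncate=3.0)
    b = gaussian_filter(target.astype(float), sigma=sigma, truncate=3.0)
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf

# The target image of an (n, n) scheme spans only 1/2**(n-1) of the input
# range, so the reference is the secret image scaled by the same factor.
rng = np.random.default_rng(3)
secret = rng.random((128, 128))
target = (rng.random((128, 128)) < secret / 2).astype(float)  # crude stand-in
print(hpsnr(secret / 2, target))
```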

Fig. 9. The result of AbS-based probabilistic VC on the Lena image. (a) The original image. (b) Share 1. (c) Share 2. (d) The target image.

Fig. 10. Comparison of probabilistic VC on the Lena image. (a) Yang [7], (b) Wang et al. [24], (c) AbS, and (d) halftone.

Fig. 11. The set of testing images from the Kodak database. The indices of the images are from 1 to 24, starting from the left to the right and from the top to the bottom. These images are converted to grayscale before the VC encryption. The original size of the images is 768 × 512 or 512 × 768.

Fig. 12. Testing results of Yang's algorithm [7], Wang's algorithm [24], and the AbS algorithm on the Kodak image database. (a) HPSNR, (b) MSSIM.

It is interesting to observe that the results of the three algorithms also show a strong correlation on a given image, indicating that the content of the image influences the performance of all three algorithms. Note that on each test image, the AbS probabilistic VC produces the highest HPSNR and MSSIM among the three algorithms, even though the metrics vary from image to image. To better understand this variation, it is helpful to figure out how the content of the image influences the performance metrics. The performance on image No. 20 (the plane image) is worse than on the others, both in terms of HPSNR and MSSIM. This is due to the large white areas (the sky) in the image, since white pixels are only partially reproduced. This result is also consistent with the residual variance in Fig. 8: as the brightness of the image increases, residual noise with larger power is left within the band of high HVS sensitivity, and hence the fidelity between the original secret image and the target image is lower. It is also observed that the complexity of the content influences the performance metrics.

4) Analysis of Complexity: First, we note that the time complexities of the three algorithms are linear in the size of the secret image, i.e., the time complexity is O(MN).¹ Let the complexity of random integer generation be denoted as Crnd. For Wang's algorithm [24], at each pixel location only a random integer between 0 and 255 needs to be generated, so the time complexity is MN·Crnd. For Yang's algorithm [7], at each pixel location, in addition to random integer generation, the quantization error is calculated and pushed forward to four neighboring pixels. This requires 4MN multiplications and 5MN additions/subtractions, so the time complexity is roughly 4MN·Cmul + 5MN·Cadd + MN·Crnd. For the proposed AbS algorithm, the linear gamut mapping needs MN multiplications and the simulated stacking needs MN multiplications, so the time complexity is 6MN·Cmul + 5MN·Cadd + MN·Crnd. So in terms of time complexity, Wang's algorithm is the lowest and the proposed AbS algorithm is the highest. When compared to Yang's algorithm, AbS needs only two additional multiplications per pixel. The actual machine times for Yang's algorithm and the AbS algorithm are quite close. For example, for the Lena image, the running time for Yang's algorithm is 3.02 seconds and the running time for AbS is 3.19 seconds, when tested in Matlab 2016a on a desktop computer with an Intel i5 CPU (running at 3.10 GHz) and 20 GB of memory.

¹An algorithm has a time complexity of O(MN) if there exists a positive number c such that the complexity is upper bounded by c·MN.

From the experiments, we observe that the quality of the target image deteriorates as the gray level increases. As the gray level increases, the quantizer produces more white pixels.

Fig. 13. The structure of the AbS-based vector VC system.

In a small neighborhood such as a 2 × 2 block, if all the pixels are white and all of them are flipped to black on the target image, then this produces a cluster of black pixels, and the feedback loop then forces the quantizer to produce more white pixels. Hence we also see clusters of white pixels on the target image. These clusters are the source of bad visual quality. To avoid clusters of pixels on the target image, we should guarantee a certain number of white pixels within each block of the target image. This can be accomplished by vector VC. So, in the next section, we extend our AbS-based probabilistic VC to AbS-based vector VC.

IV. AbS-BASED VECTOR VC

As shown in Section III, for probabilistic VC, the major quality degradation comes from the out-of-gamut pixels. This problem can be remedied by vector VC, where each image block of size B × B is encoded.

A. Overall Structure

The overall block diagram of the AbS-based vector VC is shown in Fig. 13. After the gamut mapping G, the input grayscale image x[n] is divided into non-overlapping blocks. The pixels in each block are organized into a vector x[n]. The modified vector x̂[n] is quantized by a vector quantizer Q_E, where E is the codebook of the vector quantizer. The quantized vector y[n] ∈ E is encoded by vector VC. Then, the generated share vectors s1[n], · · · , sn[n] are superimposed to synthesize the target image ŷ[n]. The discrepancy e[n] = x̂[n] − ŷ[n] is calculated and diffused to adjacent blocks, using a vector error diffusion filter H[n].

The design of the gamut mapping G, the vector quantizer Q_E and the diffusion filter H[n] depends on the vector VC encoder, so we describe the encoder first.

1) Vector VC: The input to the vector VC is a binary vector y[n] ∈ Z_2^{m×1}, where m = B² is the number of pixels in a block and Z_2 ≜ {0, 1}. For simplicity, we drop the index n for a moment and focus on one block/vector. Next we define the vector VC. Recall that E is the codebook of the quantizer and hence is also the codebook of the VC encoder. Besides, we defined D to be the codebook of the VC decoder, i.e., the set of vectors on the target image. In general, D ⊂ E due to the security condition, so the mapping E → D is inevitably a many-to-one mapping, which is similar to a stochastic quantizer. However, considering that this loss of contrast can be compensated by the error diffusion loop, and that the security condition should be faithfully fulfilled, we relax the contrast condition in the definition.

Definition 2: An (n, n, m) vector VC encodes a vector y ∈ Z_2^{m×1} into n share vectors s1, · · · , sn ∈ Z_2^{m×1}. The stacking of the n share vectors is the target vector ŷ = ⊙_{i=1}^{n} s_i, where a ⊙ b denotes the element-wise product between two vectors, i.e., the Hadamard product. If the following two conditions are satisfied, we call it an (n, n, m) vector VC.

a) Contrast condition: Let B(y) = ∑_{k=1}^{m} y_k be the brightness of the vector y, and let y1 and y2 be two vectors such that B(y1) < B(y2); then the corresponding target vectors should satisfy B(ŷ1) ≤ B(ŷ2).

b) Security condition: Let sd ≜ {s_{i1}, · · · , s_{id}} be any d share vectors with d < n; then no information about y can be derived from sd, i.e., Pr{y} = Pr{y | sd}.

Remark 2: The contrast condition only requires that B(ŷ1) ≤ B(ŷ2), instead of B(ŷ1) < B(ŷ2). The security condition is the same as in ordinary VC, which requires perfect secrecy given only d < n shares.

Next, we give a construction of an (n, n, m) vector VC that satisfies the above definition. Let E be the set of brightnesses of the vectors in E, E ≜ {B(y), ∀y ∈ E}, and let D be the set of brightnesses of the vectors in D, D ≜ {B(ŷ), ∀ŷ ∈ D}. The essence of designing the vector VC is the design of a non-invertible mapping M from E to D. This mapping is realized by using different basic matrices for different values in D. The basic matrices can be designed using Algorithm 2.

Algorithm 2 is based on Construction 2 in [1]. Let a_ℓ^i be the ℓ-th row of the basic matrix A_i constructed using this construction; then the brightnesses of the vectors obtained by stacking all the rows satisfy B(⊙_{ℓ=1}^{n} a_ℓ^0) = 0 and B(⊙_{ℓ=1}^{n} a_ℓ^1) = 1. Then it is straightforward to show that B(⊙_{ℓ=1}^{n} b_ℓ^i) = i, where b_ℓ^i is the ℓ-th row of the basic matrix B_i.

For example, taking m = 4 and D = {2, 1, 0}, we get

B0 = [1 0 1 0; 0 1 0 1],  B1 = [1 0 1 0; 0 1 1 0],  B2 = [1 0 1 0; 1 0 1 0].    (18)

For the (3, 3)-threshold case and m = 4, we have

B0 = [1 1 0 0; 1 0 1 0; 0 1 1 0],  B1 = [1 1 0 0; 1 0 1 0; 1 0 0 1],    (19)

but now D = {1, 0}. We will focus on the (2, 2)-threshold vector VC in the following discussion.

Next we need to determine the mapping M : E → D. Recall that E = {0, 1, · · · , m}. Using the basic matrices B_i from Algorithm 2, we have D = {0, · · · , |D| − 1}. To make full use of the output range of brightness, let 0 → 0 and m → |D| − 1. The mapping of the other elements should be evenly distributed in D and meet the following restriction: if x < y, then M(x) ≤ M(y). For example, for m = 4 and n = 2, we can use the mapping shown in Table I.

It is not difficult to verify the contrast condition from the construction of the mapping M. Furthermore, the rows of B_i are indistinguishable after a random column permutation, so from only one row, one cannot infer which B_i is used. Hence the security condition in Definition 2 is satisfied.
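The (2, 2, 4) construction just described can be sketched as follows: the basic matrices of (18) are selected through a monotone brightness mapping M, the columns are randomly permuted, and one row goes to each share. Since Table I is not reproduced here, the mapping M below is only an assumed monotone stand-in with 0 → 0 and m → |D| − 1, not necessarily the table's exact assignment.

```python
import numpy as np

rng = np.random.default_rng(4)

# Basic matrices of (18) for the (2, 2, 4) vector VC: stacking the two rows
# of B_i (element-wise product) yields a block of brightness i.
B = {
    0: np.array([[1, 0, 1, 0], [0, 1, 0, 1]]),
    1: np.array([[1, 0, 1, 0], [0, 1, 1, 0]]),
    2: np.array([[1, 0, 1, 0], [1, 0, 1, 0]]),
}

def M(brightness, m=4, d_levels=3):
    """A monotone many-to-one mapping E -> D with 0 -> 0 and m -> |D|-1.
    The intermediate levels are spread evenly (an assumed stand-in for Table I)."""
    return int(round(brightness * (d_levels - 1) / m))

def encode_block(y):
    """Encrypt one binary block vector y (length m) with the (2,2,4) vector VC."""
    i = M(int(y.sum()))                       # target brightness through M
    perm = rng.permutation(y.size)            # random column permutation
    rows = rng.permutation(2)                 # random row assignment
    Bi = B[i][:, perm]
    s1, s2 = Bi[rows[0]], Bi[rows[1]]
    return s1, s2, s1 * s2                    # shares and stacked target block

y = np.array([1, 1, 1, 0])                    # block with brightness 3
s1, s2, t = encode_block(y)
print(int(t.sum()))                           # M(3) = 2 white pixels survive
```

Because the column permutation is uniform and each single row of every B_i has the same composition of ones and zeros, one share block alone reveals nothing about the secret brightness, which is the security argument made above.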

Algorithm 2: Design Basic Matrices for (n, n, m) Vector VC.

TABLE I: Mapping and basic matrices for vector VC with m = 4 and n = 2.

Fig. 14. Intra-block error diffusion in the vector quantizer. The shaded blocks correspond to quantized pixels.

Fig. 15. Vector error diffusion process. The shaded block is the current block.

2) Gamut Mapping: From the construction of the vector VC algorithm, one can determine the gamut mapping. For example, for the (2, 2, 4) vector VC, the mapping between the block brightness of y and that of ŷ is {0, 1, · · · , m} → {0, · · · , m/2}. So in a local B × B block, the dynamic range is halved, and for each pixel the average output gamut is [0, 1/2]. This suggests using the affine gamut mapping G_A : x[n] = (1/2)·J[n]. The nonlinear gamut mapping G_N = G_E ∘ G_A is also valid here.

3) Vector Quantizer: The vector quantizer y[n] = Q_E(x̂[n]) is realized by a combination of scalar quantizers and intra-block error diffusion. Even though the codebook E could be optimized, this is not pursued here [40]. The pixels in a block are processed in raster scanning order. At each pixel location n, the intensity x̂[n] is quantized using a simple two-level quantizer, with the threshold set to 1/2. Then the quantization error is diffused to the other pixels in the block using the ordinary Floyd–Steinberg kernel. The purpose of this error diffusion is to let the average of the quantizer output approximate the average of the input block.

For example, referring to Fig. 14, given a 2 × 2 block of the secret image x[n], the vector quantizer outputs a binary 2 × 2 block as follows. First, the pixel x̂[nr, nc] is quantized, and the quantization error δ[nr, nc] = x̂[nr, nc] − y[nr, nc] is diffused to the remaining three un-quantized pixels:

x̂[nr, nc+1] ← x̂[nr, nc+1] + (7/16)·δ[nr, nc],
x̂[nr+1, nc+1] ← x̂[nr+1, nc+1] + (1/16)·δ[nr, nc],
x̂[nr+1, nc] ← x̂[nr+1, nc] + (5/16)·δ[nr, nc].

Similarly, we then sequentially quantize x̂[nr, nc+1] and x̂[nr+1, nc], and diffuse the quantization errors to the remaining un-quantized pixels.

4) Vector Error Diffusion: The purpose of the vector error diffusion is to let the average of the target image (from simulated stacking) approximate the average of the input secret image, so it diffuses different errors than the error diffusion in the vector quantization. Vector error diffusion (VED) diffuses the vector error e[n] to neighboring blocks. VED was originally proposed by Venkata and Evans in order to produce dither patterns with varying dot density and dot size [41]. Here we use similar block neighbors, but a different diffusion filter. The configuration of the blocks is shown in Fig. 15.

Denote by e the error vector from the current block. Let each column of X be a vector x[n] from one of the four neighboring blocks in Fig. 15, also following the raster scanning order. Then the vector error diffusion can be represented as

vec(X̂) = vec(X) + ê = vec(X) + H·e,    (20)

where H ∈ R^{4m×m} is the vector error diffusion filter and the operator vec(A) ∈ R^{4m×1} stacks the columns of A ∈ R^{m×4} into a vector. The entries of H determine how each element of e is distributed to the pixels in the neighboring blocks. This filter must be low-pass in order to push the errors to the high frequency band, and it must ensure that all quantization errors are diffused. So the entries of H must satisfy h_{i,j} ≥ 0 and ∑_{i=1}^{4m} h_{i,j} = 1, ∀j ∈ {1, · · · , m}. Let h ≜ [7/16, 3/16, 5/16, 1/16]^T and let I_m be an m × m matrix with all elements equal to 1. Then Venkata and Evans's vector error diffusion filter is H = h ⊗ ((1/m)·I_m), where ⊗ here denotes the Kronecker product between two matrices. This is equivalent to first distributing the total error among the blocks according to the Floyd–Steinberg diffusion, and then distributing the allocated error evenly within each block. Such an even distribution within a block is advantageous when one intends to produce clustered dots. In this work, however, the intention is to produce scattered dots for better visual quality, so a modified H is proposed. The proposed diffusion process is illustrated in Fig. 15.

filter can be derived. In fact, instead of producing an


explicit expression for H, one only needs to follow the steps
in Fig. 15 to push the errors forward. The pixels in the
current block are already quantized (after VQ, VC encoding
and stacking). The ‘quantization’ error is pushed forward to
their neighbors using the Floyd and Steinberg’s kernel. If the
error is diffused to another already quantized pixel, it will
not affect the quantization result at that position. Instead,
the error will be forwarded to other neighboring pixels. As
a result of the above error pushing, the error e is pushed to
seven pixels that are nearest to the current block. The other
nine pixels in the neighboring blocks are unchanged. The
entries of H are non-negative, since each coefficient from the
Floyd-Steinberg’s diffusion is non-negative. Furthermore, all
errors are diffused, as when the errors cannot be compensated
by the already quantized samples, they are forwarded to other
neighbors.
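As a concrete illustration of the filter structure described above, the following minimal NumPy sketch (not the authors' implementation) builds the Damera-Venkata/Evans-style block filter H = h ⊗ (1/m)I_m for 2 × 2 blocks (m = 4), checks that every column sums to one so that all quantization errors are diffused, and applies (20) to spread a toy error vector e of the current block over its four neighboring blocks. The array contents are placeholders chosen only for the demonstration.

```python
import numpy as np

m = 4                                            # pixels per 2x2 block
h = np.array([7.0, 3.0, 5.0, 1.0]) / 16.0        # Floyd-Steinberg weights, one per neighboring block

# Damera-Venkata/Evans-style block filter: send a share h[k] of the block error to
# neighboring block k, spread evenly over its m pixels.
H = np.kron(h.reshape(-1, 1), np.ones((m, m)) / m)          # shape (4m, m)

# "All quantization errors are diffused": every column of H sums to one.
assert np.allclose(H.sum(axis=0), 1.0)

# Apply Eq. (20): vec(X_hat) = vec(X) + H @ e.
X = np.full((m, 4), 0.5)                         # columns: the 4 neighboring blocks (toy values)
e = np.array([0.2, -0.1, 0.05, 0.0])             # error vector of the current block (toy values)
X_hat = (X.flatten(order="F") + H @ e).reshape((m, 4), order="F")

# The diffused error adds up to the total error of the current block.
print(np.isclose((X_hat - X).sum(), e.sum()))    # True
```

The modified H described in this section changes only how the entries are placed (errors are pushed to the seven pixels nearest to the current block, leaving the other nine unchanged); the representation in (20) itself is unchanged.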

B. Experimental Result
1) Reference Algorithms: Three relevant algorithms are
implemented and compared with the proposed AbS-based
Vector VC. Here we briefly outline these algorithms.
Fig. 16. The target images. Each column corresponds to one algorithm. From left to right: Hou's algorithm [42], Liu's algorithm [8], the AbS vector VC algorithm, and the halftone result using error diffusion. Each row corresponds to one gray level. From top to bottom, g = 1/8, 1/4, 1/2, 3/4, 7/8. The halftone image of g/2 serves as a reference since the first two algorithms are applied to the halftone image.

The first algorithm is Hou's block encoding algorithm [42], which was also summarized in [8]. This algorithm works on the halftone image. The image blocks are classified according to their blackness, i.e., the number of black pixels in each block. So for a block with m pixels, one gets m + 1 types of blocks D_i, i = 0, 1, · · · , m. Let B_0 and B_1 represent the basic matrices for black and white pixels in the basic deterministic VC [1]. When encoding a type-i block, the basic matrix B_0 is utilized i times and the basic matrix B_1 is utilized m − i times. So in a 'local' neighborhood of m blocks of the same type, the contrast between black and white is maintained.

The second algorithm is Liu's improvement to Hou's algorithm [8]. Liu's algorithm uses the same halftoning and the same block types and counters as Hou's algorithm, but the use of B_0 and B_1 is probabilistic. Each block has a chance to be encoded by B_0 or B_1, depending on the type of the block and the number of B_0 that has already been used for that type of block. A minimal sketch contrasting these two selection rules is given below.

Chen et al. use the histogram to guide the local mapping between the secret image block and the block on the target image, which was later improved by Chen et al. [9] and Lee et al. [18]. We call these algorithms the histogram directed mapping (HDM) algorithms.
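To make the difference between the first two selection rules concrete, the sketch below (Python, not the authors' code) implements one straightforward reading of them for m = 4: within every run of m same-type blocks, Hou's rule uses B_0 for the first i blocks deterministically, while the probabilistic variant spends the same quota of i uses of B_0 at random positions in the run. The exact probabilities used by Liu's published algorithm may differ; this only illustrates the counter-based bookkeeping.

```python
import random
from collections import defaultdict

M = 4  # pixels per block; block types are i = 0..M (number of black pixels)

def hou_choose(block_type, counters):
    """Deterministic rule: within each run of M same-type blocks,
    the first `block_type` blocks use B0, the rest use B1."""
    pos = counters[block_type] % M
    counters[block_type] += 1
    return "B0" if pos < block_type else "B1"

def liu_choose(block_type, counters, quotas):
    """Probabilistic variant: the same quota of B0 per run of M blocks,
    but the B0 positions are drawn at random (without replacement)."""
    pos = counters[block_type] % M
    if pos == 0:                                   # start of a new run: reset the B0 quota
        quotas[block_type] = block_type
    remaining = M - pos
    use_b0 = random.random() < quotas[block_type] / remaining
    if use_b0:
        quotas[block_type] -= 1
    counters[block_type] += 1
    return "B0" if use_b0 else "B1"

hou_counters = defaultdict(int)
liu_counters, liu_quotas = defaultdict(int), defaultdict(int)
blocks = [3] * 8                                   # eight consecutive type-3 blocks
print([hou_choose(b, hou_counters) for b in blocks])              # periodic: B0 B0 B0 B1, repeated
print([liu_choose(b, liu_counters, liu_quotas) for b in blocks])  # 3 of every 4 are B0, at random positions
```

The deterministic variant reproduces the same B0/B1 pattern in every run of same-type blocks, which is exactly the mechanism behind the periodic artifacts discussed next.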
2) Testing on Constant Images: First we explore the performance of the algorithms on constant images with g ∈ {1/8, 1/4, 1/2, 3/4, 7/8}, and study their spatial and spectral properties. The histogram directed mapping algorithms, i.e., Chen's algorithm and Lee's algorithm, need the histogram, which cannot be provided by a constant image, so they are excluded from the current experiments. We will test them on natural images later.

The target images from Hou's algorithm, Liu's algorithm and the proposed AbS vector VC are shown in Fig. 16. First, from the first column, we observe the periodic patterns produced by Hou's algorithm. Moreover, when there are periodic patterns on the halftone image (see the last column), there will be periodic patterns on the target image. This is due to the fact that, for a type-i block with i black pixels, Hou's algorithm uses B_0 for the first i type-i blocks. This deterministic rule maps the periodic patterns in the halftone image into periodic patterns on the target image. Also, one may notice that, along the vertical direction, the dots are clustered into bands, which may introduce low frequency components into the spectrum and the RAPSD. Second, from the second column, we observe that the periodic patterns are broken, since Liu's algorithm allows each pixel to have a chance to use B_0 or B_1, in a probabilistic manner. Last, from the third column, we observe that the target image from the proposed AbS vector VC algorithm has much better visual quality: minority dots are spread apart. This is due to the vector error diffusion loop in AbS vector VC, which pushes the error to the high frequency band.

The target image for g = 1/8 from AbS (the first row, third column in Fig. 16) contains slightly noticeable line artifacts at the top portion. The reason is that the AbS algorithm uses raster scanning order, starting from the top-left corner and ending at the bottom-right corner. At the beginning of the scanning (the top), when g is small, the vector quantizer produces blocks with either no white pixel or just one white pixel. According to Table I, these two types of blocks are mapped to all-black blocks on the target image. This mapping has two effects. First, it delays the appearance of white pixels on the target image, compared to the pure halftone result in the last column of Fig. 16. Second, the error between the secret block and the target block is accumulated until it is large enough that the vector quantizer produces at least two white pixels in the 2 × 2 block. When that happens, the target block contains one or two white pixels. Since the accumulated error is large, one observes consecutive blocks with white pixels. This produces a jump from all-black blocks to blocks with white pixels, hence the observed line artifacts. This line artifact is less visible when g is large. In addition, it is almost invisible when applied to natural images.
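For readers who wish to reproduce curves like those discussed next, the following sketch shows one common way to estimate an RAPSD from a binary pattern: take the periodogram of the zero-mean pattern and average it over annuli of radial frequency. The estimator, binning, and normalization used to produce Fig. 17 may differ in detail.

```python
import numpy as np

def rapsd(pattern, n_bins=64):
    """Radially averaged power spectral density of a 2-D binary (halftone) pattern."""
    p = np.asarray(pattern, dtype=float)
    p -= p.mean()                                     # remove the DC component
    spec = np.abs(np.fft.fftshift(np.fft.fft2(p))) ** 2 / p.size
    fy = np.fft.fftshift(np.fft.fftfreq(p.shape[0]))  # cycles per pixel
    fx = np.fft.fftshift(np.fft.fftfreq(p.shape[1]))
    f_rho = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    edges = np.linspace(0.0, f_rho.max(), n_bins + 1)
    idx = np.clip(np.digitize(f_rho.ravel(), edges) - 1, 0, n_bins - 1)
    power = np.bincount(idx, weights=spec.ravel(), minlength=n_bins)
    count = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, power / count

# Example: a random binary pattern with average gray level 1/2 has a flat spectrum,
# while a well error-diffused pattern would show the blue-noise shape instead.
rng = np.random.default_rng(0)
freq, power = rapsd(rng.random((256, 256)) < 0.5)
```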
Fig. 17. RAPSDs for the images in Fig. 16. From top to bottom, g = 1/8, 1/4, 1/2, 3/4, 7/8. The RAPSD for the halftone image of g/2 from error diffusion is plotted as a blue dashed curve in each sub-figure. The symbol indicates the blue noise principal frequency. (a) Hou and Tu [42] 2004. (b) Liu et al. [8] 2012. (c) AbS.

Fig. 18. Residual variance for Hou and Tu [42] 2004, Liu et al. [8] 2012, AbS and Halftone. (a) Average gray level of the target image after Gaussian smoothing. The results for Hou 2004 and Liu 2012 are overlapped by the result of AbS. (b) Residual variance.

The RAPSDs are plotted in Fig. 17. The first column corresponds to Hou's algorithm. One may notice the peaks in the RAPSD that are located at lower frequencies than the blue noise principal frequency. These peaks correspond to the low frequency periodic patterns on the target image (see the first column of Fig. 16). Such peaks disappear in Liu's RAPSD, since the periodic patterns are broken by the stochastic mapping. However, we may observe that the RAPSD in the low frequency range is larger than that of Hou's algorithm. Compared to Hou's algorithm and Liu's algorithm, the proposed AbS algorithm produces the lowest RAPSDs in the low radial frequency band. For example, the RAPSDs are almost zero for f_ρ < 0.1, irrespective of the gray level g. Since the low frequency region is where the HVS is most sensitive, the AbS algorithm leaves the least amount of noise in this region, compared to Hou's algorithm and Liu's algorithm. This conclusion can be further verified by the residual variance.

The result for the residual variance is shown in Fig. 18. In Fig. 18(a), the average values of the target images are plotted. They are exactly half of the input gray levels, reflecting the loss of contrast from VC encoding. Using g/2 as the reference, we calculate the noise and its power, i.e., the residual variance. The result is shown in Fig. 18(b). Obviously, the halftone image has the lowest residual variance since no modification from VC encoding is imposed. The AbS algorithm has the lowest residual variance among the three VC algorithms. Furthermore, the residual variance for AbS is low and stable over different gray levels g. Recall that in our AbS-based probabilistic VC, the residual variance increases with g (see Fig. 8).

3) Test of Fidelity: Fidelity is tested on natural images, including the typical Lena image and the Kodak database.

Fig. 19. The result of AbS-based vector VC on the Lena image. (a) The original image. (b) Share 1. (c) Share 2. (d) The target image.

The test result on the Lena image is shown in Fig. 19, including the original Lena image, the shares and the target image.

The reconstructed Lena images from the competing algorithms are shown in Fig. 20. We also include Lee's algorithm [18] in this comparison. The size of the blocks is 2 × 2 for all four algorithms. The halftone Lena image is also plotted in Fig. 20(f).

The first row presents Hou's algorithm and Liu's algorithm. Clearly, in Hou's result, the area above the hat and below the face contains periodic patterns. This can be understood by checking the halftone image. From the halftone image (see Fig. 10(d)), the region above the hat and the region below the face correspond to gray levels near 128. Hence, on the halftone image, periodic patterns are produced. These periodic patterns are then mapped to the target image due to the deterministic mapping in Hou's algorithm.

Lee's algorithm produces better global contrast than Hou's algorithm and Liu's algorithm, due to the implicit equalization from the local block mapping. For the Lena image, since the histogram is symmetric, Lee uses the mapping as in Table I. As a result, the blocks without a white pixel and the blocks with one white pixel are all mapped to blocks without white pixels. This helps to stretch the histogram of the corresponding
grayscale image (i.e., after inverse halftoning) to the left end. Similarly, the histogram is also stretched to the right end, thus producing an implicit equalization. However, this block mapping is global and lossy, which leads to a loss of local contrast and image details, and such loss is not compensated by the adjacent blocks. For example, in the Lena image, the gray level of the mirror frame and some details of the hair are lost. The loss of contrast is also reflected by the poor HPSNR performance that will be presented below.

Fig. 20. Comparing different vector VCs. (a) Hou's algorithm [42], (b) Liu's algorithm [8], (c) Lee's algorithm [18], (d) AbS with affine gamut mapping G_A, (e) AbS with nonlinear gamut mapping G_N, (f) Halftone image of (1/2)J[n].

The target image using the proposed AbS algorithm with affine gamut mapping (AbS(G_A)) is shown in Fig. 20(d). Visual inspection reveals that this image is the closest to the halftone image in Fig. 20(f), thus having the best fidelity. This is due to the linear mapping G_A: x[n] = (1/2)J[n], which preserves the relative contrast between adjacent gray levels. We also plot the result using the AbS algorithm with the nonlinear gamut mapping in Fig. 20(e). When AbS(G_N) is compared with Lee's algorithm, it produces not only similar global contrast, but also better local contrast, thanks to the explicit equalization G_E in G_N. The details lost by Lee's algorithm, such as the gray level of the mirror frame and the details of the hair, are now recovered using AbS(G_N). Thus, depending on the application scenario, if fidelity with the original secret is more important, then AbS(G_A) is better than the competing algorithms; if the global contrast also needs to be improved, then AbS(G_N) is better than the competing algorithms.

In addition to the subjective evaluation of visual quality, we also use HPSNR and MSSIM to measure the perceptual quality. The next experiment is done on the Kodak image database. Each image in the database is encoded and stacked using the competing algorithms. Then the HPSNR and MSSIM are calculated between the reference image and the target image. Considering the loss of global contrast from VC encoding and stacking (see Fig. 18(a)), the reference image is chosen as J_R[n] = (1/2)J[n]. For fair comparison with Hou's algorithm and Liu's algorithm, the reference image is not equalized.

Fig. 21. Comparing four vector VC algorithms on the Kodak database, including Hou and Tu [42] 2004, Liu et al. [8] 2012, Lee et al. [18] 2014 and AbS. (a) HPSNR. (b) MSSIM.

The experimental results are plotted in Fig. 21. Several observations can be made. First, the AbS vector VC produces the target image with the highest HPSNR and MSSIM among the four competing algorithms. In addition, the MSSIM is more stable than that of the AbS-based probabilistic VC across different secret images (see Fig. 12). The reason is that different gray levels in the secret image are better reproduced by the AbS-based vector VC. This can be understood by comparing the RAPSD and the residual variance. For the RAPSD results in Fig. 7(a) and Fig. 17(c), the AbS-based vector VC produces near zero RAPSD in low radial frequencies for all five gray levels, while the AbS-based probabilistic VC produces higher RAPSD in low radial frequencies for gray levels 3/4 and 7/8. From the plots of residual variance in Fig. 8 and Fig. 18(b), we see that all gray levels have low and stable residual variances from the AbS-based vector VC, while the residual variance increases with the gray level in the AbS-based probabilistic VC. To explain these observations further, we note that:

• For AbS-based probabilistic VC, each secret pixel is encoded independently. Each white pixel in the secret image has a 50% chance to be flipped to black on the target image due to VC encryption. For example, let us consider a small 2 × 2 block. If, after quantization, all of the four pixels are white, then there is a probability (1/2)^4 = 1/16 that the four pixels will be black on the target image. This cluster of black pixels not only introduces black clusters on the target image, but also forces the error diffusion process to produce more white pixels at the quantizer output, thus producing white clusters on the target
image. These clusters lead to the large residual variance and the large RAPSD at low radial frequencies (a short simulation of this flipping behavior is sketched after this list).

• For AbS-based vector VC, such as the one using the mapping in Table I, we can guarantee the number of black pixels on the target image, depending on the number of white pixels on the secret image. For example, if the secret image block has 4 white pixels, i.e., B(y) = 4, then after VC encoding and stacking, there are only two black pixels on the target block, i.e., B(ŷ) = 2 (see the first row of Table I). The probability of seeing four black pixels in this small block is zero on the target image. Thus, we can be sure that on a local 2 × 2 block, the input gray level (after gamut mapping) is within the output gamut. So the black pixels are more evenly distributed, thus producing a more stable residual variance across different gray levels, and producing near zero RAPSD at low radial frequencies.
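The 50% flipping and the (1/2)^4 = 1/16 clustering probability in the first item can be verified with a short simulation. The sketch below assumes the standard size-invariant (2, 2) probabilistic rule (a white secret pixel places the same random bit on both shares, a black pixel places complementary bits, and stacking is an OR of black sub-pixels); it is an illustration rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000

def stack_white_pixels(n):
    """Encode n white secret pixels with the (2, 2) probabilistic rule and stack them.
    Returns a boolean array that is True where the stacked pixel came out black."""
    share1 = rng.random(n) < 0.5          # random sub-pixel on share 1 (True = black)
    share2 = share1.copy()                # white secret pixel: share 2 copies share 1
    return share1 | share2                # stacking transparencies = OR of black

# A white secret pixel is flipped to black on the target image about half of the time.
print(stack_white_pixels(trials).mean())                    # ~0.5

# An all-white 2x2 secret block turns completely black with probability (1/2)^4 = 1/16.
blocks = stack_white_pixels(4 * trials).reshape(trials, 4)
print(blocks.all(axis=1).mean())                            # ~0.0625
```

A black secret pixel, encoded with complementary bits, always stacks to black, which is why only the white pixels contribute this extra randomness.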
Second, Hou's algorithm and Liu's algorithm produce similar results. Third, Lee's algorithm produces better MSSIM than Hou's algorithm and Liu's algorithm, but worse HPSNR. This is due to the implicit equalization in Lee's algorithm, which affects the intensity of the target image. Since HPSNR measures tone similarity, any change of intensity may decrease HPSNR. On the contrary, MSSIM also reflects the correlation between the reference image and the target image, and hence is less affected by equalization. We also include HPSNR and MSSIM between the reference image and its halftone image. Since there is no modification due to VC encoding and stacking, each secret block is perfectly reconstructed on the halftone image. This result serves as an upper bound for all VC algorithms based on the same halftoning method. One may observe that, for images with complicated contents, such as images No. 5, No. 8 and No. 13, the HPSNRs produced by AbS are very close to those from the halftone images.

Finally, we note that all of the four algorithms involved in this section have linear complexity O(MN). When AbS is compared to Lee's algorithm (the algorithm most closely related to AbS), the block error diffusion in AbS has the same complexity as the pixel-based error diffusion in Lee's algorithm. The AbS algorithm needs gamut mapping and simulated stacking for each block. So, in total, 2MN more multiplications are needed for AbS. Since this is linear in MN, it results in only a slight increase in time complexity. On the Lena image, the machine time for Lee's algorithm is 4.54 seconds, and the machine time for AbS is 5.41 seconds.

V. CONCLUSION

Aiming at improving the visual quality of the reconstructed secret image (i.e., target image) in size-invariant visual cryptography, we propose an AbS framework to push the reconstruction error to the high-frequency band. This framework is flexible in that it can be used in any (n, n)-threshold VC algorithm to improve the visual quality, without sacrificing security. Based on the AbS framework, two algorithms are designed, including AbS-based probabilistic VC and AbS-based vector VC. We introduce the measure RAPSD, which is traditionally used for evaluating the quality of halftone images, to characterize the quality of the target image. A new measure, the residual variance, is also designed to further characterize the level of noise relevant to the HVS.

The AbS framework can successfully push the noise on the target image to high frequency bands. Among the two AbS algorithms, the AbS vector VC produces an almost zero DC component on the RAPSD. Its residual variance is also the lowest amongst the competing algorithms. In terms of the fidelity of the target image, the vector VC consistently outperforms the existing algorithms.

One limitation of the current work is that it is limited to the error diffusion framework. Possible future works include the elaboration of a DBS-like algorithm to solve the optimization problem in (4) directly. For the ordered dithering based halftoning algorithm, if the dither pattern is made adaptive to the stacking results from neighboring blocks, then it may be possible to design AbS-based VC for ordered dithering.

ACKNOWLEDGEMENT

We would like to thank the editor and the anonymous reviewers for their valuable suggestions and comments.

REFERENCES

[1] M. Naor and A. Shamir, "Visual cryptography," in Proc. Workshop Theory Appl. Cryptograph. Techn. (EUROCRYPT), Perugia, Italy, May 1994, pp. 1–12.
[2] E. Myodo, K. Takagi, S. Miyaji, and Y. Takishima, "Halftone visual cryptography embedding a natural grayscale image based on error diffusion technique," in Proc. IEEE Int. Conf. Multimedia Expo, Beijing, China, Jul. 2007, pp. 2114–2117.
[3] O. Kafri and E. Keren, "Encryption of pictures and shapes by random grids," Opt. Lett., vol. 12, no. 6, pp. 377–379, Jun. 1987.
[4] S. J. Shyu, "Image encryption by random grids," Pattern Recognit., vol. 40, no. 3, pp. 1014–1031, Mar. 2007.
[5] R. De Prisco and A. De Santis, "On the relation of random grid and deterministic visual cryptography," IEEE Trans. Inf. Forensics Security, vol. 9, no. 4, pp. 653–665, Apr. 2014.
[6] R. Ito, H. Kuwakado, and H. Tanaka, "Image size invariant visual cryptography," IEICE Trans. Fundam., vol. E82-A, no. 10, pp. 2172–2177, Oct. 1999.
[7] C.-N. Yang, "New visual secret sharing schemes using probabilistic method," Pattern Recognit. Lett., vol. 25, no. 4, pp. 481–494, 2004.
[8] F. Liu, T. Guo, C. Wu, and L. Qian, "Improving the visual quality of size invariant visual cryptography scheme," J. Vis. Commun. Image Represent., vol. 23, no. 2, pp. 331–342, Feb. 2012.
[9] Y.-F. Chen, Y.-K. Chan, C.-C. Huang, M.-H. Tsai, and Y.-P. Chu, "A multiple-level visual secret-sharing scheme without image size expansion," Inf. Sci., vol. 177, no. 21, pp. 4696–4710, Nov. 2007.
[10] Z. Wang, G. R. Arce, and G. D. Crescenzo, "Halftone visual cryptography via error diffusion," IEEE Trans. Inf. Forensics Security, vol. 4, no. 3, pp. 383–396, Sep. 2009.
[11] F. Liu and C. Wu, "Embedded extended visual cryptography schemes," IEEE Trans. Inf. Forensics Security, vol. 6, no. 2, pp. 307–322, Jun. 2011.
[12] X. Wu and W. Sun, "Generalized random grid and its applications in visual cryptography," IEEE Trans. Inf. Forensics Security, vol. 8, no. 9, pp. 1541–1553, Sep. 2013.
[13] X. Wu and W. Sun, "Extended capabilities for XOR-based visual cryptography," IEEE Trans. Inf. Forensics Security, vol. 9, no. 10, pp. 1592–1605, Oct. 2014.
[14] Y.-C. Hou and S.-F. Tu, "A visual cryptographic technique for chromatic images using multi-pixel encoding method," J. Res. Pract. Inf. Technol., vol. 37, no. 2, pp. 179–191, May 2005.
[15] S. F. Tu and Y. C. Hou, "Design of visual cryptographic methods with smooth-looking decoded images of invariant size for grey-level images," Imag. Sci. J., vol. 55, no. 2, pp. 90–101, Jun. 2007.
[16] Y.-W. Chow, W. Susilo, and D. S. Wong, "Enhancing the perceived visual quality of a size invariant visual cryptography scheme," in Proc. 14th Int. Conf. Inf. Commun. Secur. (ICICS), Hong Kong, Oct. 2012, pp. 10–21.
[17] X. Wu and W. Sun, "Improving the visual quality of random grid-based visual secret sharing," Signal Process., vol. 93, no. 5, pp. 977–995, May 2013.
[18] C.-C. Lee, H.-H. Chen, H.-T. Liu, G.-W. Chen, and C.-S. Tsai, "A new visual cryptography with multi-level encoding," J. Vis. Lang. Comput., vol. 25, no. 3, pp. 243–250, Jun. 2014.
[19] H.-H. Chen, C.-C. Lee, C.-C. Lee, H.-C. Wu, and C.-S. Tsai, "Multi-level visual secret sharing scheme with smooth-looking," in Proc. 2nd Int. Conf. Interact. Sci. (ICIS), Seoul, South Korea, Nov. 2009, pp. 155–160.
[20] J.-M. Guo, J.-Y. Chang, Y.-F. Liu, G.-H. Lai, and J.-D. Lee, "Tone-replacement error diffusion for multitoning," IEEE Trans. Image Process., vol. 24, no. 11, pp. 4312–4321, Nov. 2015.
[21] C.-N. Yang, C.-C. Wu, and D.-S. Wang, "A discussion on the relationship between probabilistic visual cryptography and random grid," Inf. Sci., vol. 278, pp. 141–173, Sep. 2014.
[22] D. Lau and G. Arce, Modern Digital Halftoning, 2nd ed. Boca Raton, FL, USA: Taylor & Francis, 2001.
[23] C. W. Wu, G. R. Thompson, and M. J. Stanich, "Digital watermarking and steganography via overlays of halftone images," Proc. SPIE, vol. 5561, pp. 152–164, Oct. 2004.
[24] D. S. Wang, F. Yi, and X. B. Li, "Probabilistic visual secret sharing schemes for grey-scale images and color images," Inf. Sci., vol. 181, no. 11, pp. 2189–2208, 2011.
[25] R. Eschbach, Z. Fan, K. T. Knox, and G. Marcu, "Threshold modulation and stability in error diffusion," IEEE Signal Process. Mag., vol. 20, no. 4, pp. 39–50, Jul. 2003.
[26] R. W. Floyd and L. Steinberg, "An adaptive algorithm for spatial gray-scale," Proc. Soc. Inf. Display, vol. 17, no. 2, pp. 75–77, 1976.
[27] J. F. Jarvis, C. N. Judice, and W. Ninke, "A survey of techniques for the display of continuous tone pictures on bilevel displays," Comput. Graph. Image Process., vol. 5, no. 1, pp. 13–40, 1976.
[28] P. Stucki, "MECCA—A multiple-error correcting computation algorithm for bilevel image hardcopy reproduction," IBM Res. Lab., Zurich, Switzerland, Tech. Rep. RZ 1060, 1981.
[29] C. E. Shannon, "Communication theory of secrecy systems," Bell Labs Tech. J., vol. 28, no. 4, pp. 656–715, Oct. 1949.
[30] T. D. Kite, B. L. Evans, A. C. Bovik, and T. L. Sculley, "Digital halftoning as 2-D delta-sigma modulation," in Proc. Int. Conf. Image Process., vol. 1, Oct. 1997, pp. 799–802.
[31] T. D. Kite, B. L. Evans, and A. C. Bovik, "Modeling and quality assessment of halftoning by error diffusion," IEEE Trans. Image Process., vol. 9, no. 5, pp. 909–922, May 2000.
[32] R. A. Ulichney, "Dithering with blue noise," Proc. IEEE, vol. 76, no. 1, pp. 56–79, Jan. 1988.
[33] P. L. Callet, F. Autrusseau, and P. Campisi, "Visibility control and quality assessment of watermarking and data hiding algorithms," in Multimedia Forensics and Security. Hershey, PA, USA: IGI Global, 2008, pp. 164–192.
[34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[35] B. Yan, Y.-F. Wang, L.-Y. Song, and H.-M. Yang, "Size-invariant extended visual cryptography with embedded watermark based on error diffusion," Multimedia Tools Appl., vol. 75, no. 18, pp. 11157–11180, 2015.
[36] X. Yan, S. Wang, X. Niu, and C.-N. Yang, "Halftone visual cryptography with minimum auxiliary black pixels and uniform image quality," Digit. Signal Process., vol. 38, pp. 53–65, Mar. 2015.
[37] Y.-C. Hou, S.-C. Wei, and C.-Y. Lin, "Random-grid-based visual cryptography schemes," IEEE Trans. Circuits Syst. Video Technol., vol. 24, no. 5, pp. 733–744, May 2014.
[38] Z. Zhou, G. R. Arce, and G. D. Crescenzo, "Halftone visual cryptography," IEEE Trans. Image Process., vol. 15, no. 8, pp. 2441–2453, Aug. 2006.
[39] X. Wu, T. Liu, and W. Sun, "Improving the visual quality of random grid-based visual secret sharing via error diffusion," J. Vis. Commun. Image Represent., vol. 24, no. 5, pp. 552–566, 2013.
[40] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Norwell, MA, USA: Kluwer, 1991.
[41] N. Damera-Venkata and B. L. Evans, "FM halftoning via block error diffusion," in Proc. Int. Conf. Image Process., Thessaloniki, Greece, vol. 2, Oct. 2001, pp. 1081–1084.
[42] Y. C. Hou and S. F. Tu, "Visual cryptography techniques for color images without pixel expansion," (in Chinese), J. Inf., Technol. Soc., vol. 4, no. 1, pp. 95–110, 2004.

Bin Yan (M'15) received the Ph.D. degree in electrical engineering from the Harbin Institute of Technology, China, in 2007. From 1996 to 1999, he was an Engineer with the Goma Company Group. From 2007 to 2012, he was a Lecturer with the Shandong University of Science and Technology. From 2015 to 2016, he was a Visiting Scholar with Deakin University, Australia. Since 2013, he has been an Associate Professor with the College of Electronics, Communication and Physics, Shandong University of Science and Technology. His research interests include multimedia signal processing, multimedia security, statistical signal processing, and data forensic.

Yong Xiang (SM'12) received the Ph.D. degree in electrical and electronic engineering from The University of Melbourne, Australia. He is currently a Professor and the Director of the Artificial Intelligence and Image Processing Research Cluster, School of Information Technology, Deakin University, Australia. He has published four monographs, over 100 refereed journal articles, and numerous conference papers in these areas. His research interests include information security and privacy, signal and image processing, data analytics and machine intelligence, Internet of Things, and blockchain. He has served as a program chair, a TPC chair, a symposium chair, and a session chair for a number of international conferences. He is an Associate Editor of the IEEE SIGNAL PROCESSING LETTERS and the IEEE ACCESS.

Guang Hua (S'12–M'13) received the B.Eng. degree in communication engineering from Wuhan University, China, in 2009, and the M.Sc. degree in signal processing and the Ph.D. degree in information engineering from Nanyang Technological University, Singapore, in 2010 and 2014, respectively. From 2013 to 2015, he was a Research Scientist with the Department of Cyber Security and Intelligence, Institute for Infocomm Research, Singapore. After that, he joined the School of Electrical and Electronic Engineering, Nanyang Technological University, as a Research Fellow, until 2017. He is currently with the School of Electronic Information, Wuhan University. His research interests include multimedia forensics and security, applied convex optimization, applied machine learning, and general signal processing topics.