LGOH-Based Discriminant Centre-Surround Saliency Detection
Regular Paper
Lili Lin 1 and Wenhui Zhou 2,*
1 College of Information and Electronic Engineering, Zhejiang Gongshang University, China
2 School of Computer Science and Technology, Hangzhou Dianzi University, China
* Corresponding author. Email: zhouwenhui@hdu.edu.cn
Received 06 Jan 2013; Accepted 09 Oct 2013
DOI: 10.5772/57222
© 2013 Lin and Zhou; licensee InTech. This is an open access article distributed under the terms of the Creative
Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract Discriminant saliency is a recently proposed decision-theoretic saliency detection method. Based on the local gradient distribution, this paper proposes a simple but efficient discriminant centre-surround hypothesis, and builds local and global saliency models by combining multi-scale intensity contrast with colour and orientation features. This method makes three important contributions. First, a circular and multi-scale hierarchical centre-surround profile is designed for local saliency detection. Secondly, the dense local gradient orientation histogram (LGOH) of the centre-surround region is computed and used for the local saliency analysis. Thirdly, a new integration strategy for the local and global saliency is proposed and applied to the final visual saliency discrimination. Experiments demonstrate the effectiveness of the proposed method. Compared with 12 state-of-the-art saliency detection models, the proposed method outperforms the others in precision-recall, F-measure and mean absolute error (MAE), and can produce a more complete salient object.
Keywords Saliency Detection, Centre-Surround Profile, Visual Selective Attention, Local Gradient Orientation Histogram
1. Introduction
The mechanism of saliency plays an important role in
visual selective attention. It may provide a rapid and
effective strategy to reduce the computational complexity
of visual processing. Inspired by biological vision
systems, numerous computational saliency models of
visual attention have been proposed and widely used in
the fields of machine vision, image processing and
intelligent robotics, etc.
Most current biologically-inspired visual saliency models rely on bottom-up/spatial-based or top-down/object-based processing. The bottom-up model is based on feature integration theory and the centre-surround hypothesis [1-2]. The primal images are decomposed into several independent feature spaces, such as intensity, colour and orientation. The conspicuity map of each feature space is extracted individually, and the maps are then linearly combined to form the final saliency map. The top-down model is a kind of goal-directed saliency analysis, and it requires prior knowledge of the tasks in question [3-4]. Compared with the bottom-up model, it can realize more efficient and
Int. j. adv. robot. syst., 2013, Vol. 10, 385:2013 | www.intechopen.com
accurate visual searching, but at the cost of lower speed and higher computational complexity. Some alternative models that integrate top-down and bottom-up saliency attention have been proposed [5-6]. However, the main shortcomings of these models are low resolution, poorly defined object boundaries and high computational costs.
In recent years, many saliency models have been proposed. R. Achanta et al. presented a fast salient region detection method based on the low-level features of luminance and colour [7]. It can generate high-quality saliency maps of the same size and resolution as the input image. R. Achanta et al. also introduced a higher-precision salient region detector based on a frequency-tuned method [8]. V. Gopalakrishnan et al. presented a colour and orientation distribution-based salient region detection framework [9]. They introduced a novel orientation histogram of image regions that can be used to characterize the local and global orientations. D. Gao et al. proposed a discriminant saliency detector [10-11]. It is rooted in the decision-theoretic interpretation of perception, and can produce optimal saliency measures in the sense of classification. In the frequency domain, Q. Zhang et al. analysed and integrated local saliency, global saliency and rarity saliency into the same framework [12]. Distinct from the bio-inspired models, these models are usually purely computational. Although they are inspired by the biological concept of centre-surround contrast, they are not based on any biological model. In addition, a centre-surround hypothesis based on Weber's Law [13] has been proposed by us. It provides better saliency detection performance than those of Itti [1], Achanta [8] and Rahtu [14]. However, this method exhibits undesirable blur in the detected salient object, and tends to highlight objects' boundaries rather than the whole object.
In this paper, we introduce a new and efficient discriminant centre-surround hypothesis for saliency detection. It is inspired by the biological model of spatial receptive fields and the concept of the local descriptor in the field of computer vision. Specifically, the centre-surround differences are estimated by an LGOH, in the spirit of descriptors such as the gradient location and orientation histogram (GLOH) [15], the DAISY descriptor [16-17], the local binary pattern (LBP) [18] and the Weber local descriptor (WLD) [19]. Note that we do not strictly discuss the biological plausibility of this hypothesis in cognitive neuroscience; instead, we demonstrate by experiments its good performance, enhancement of salient areas and suppression of non-salient areas.
Compared with other saliency models, our model makes the following main contributions:
a) Inspired by the centre-surround pattern of biological vision, we design a circular and multi-scale hierarchical centre-surround profile for each pixel of the primal image.
b) We extract the LGOH of the centre-surround region to represent the centre-surround differences, and use a statistic of the LGOH as a decision value for the local saliency analysis.
c) We propose a new integration strategy for the local and global saliency to obtain the final visual saliency maps.
This paper is organized as follows. Section 2 describes the proposed discriminant centre-surround hypothesis. Section 3 discusses the local saliency analysis based on the LGOH, and gives a new integration strategy for the local and global saliency. Section 4 shows some experimental results and discussion. Finally, Section 5 presents our conclusions and prospects.
2. The Proposed Centre-Surround Hypothesis
The centre-surround hypothesis is an important mechanism for almost all saliency models, whether they are bio-inspired or purely computational. Its main function is self-excitation in the central excitatory regions combined with inhibition in the surrounding regions [2]. L. Itti et al. determined centre-surround contrast using a difference of Gaussians (DoG) [1-2]. S. Frintrop et al. used a square filter to compute centre-surround differences [20]. R. Achanta et al. used a centre-surround feature distance [7]. A frequency domain processing-based centre-surround contrast estimation method was presented in [8]. Meanwhile, D. Gao et al. proposed a discriminant centre-surround hypothesis by combining the hypothesis of decision-theoretic optimality with the traditional hypothesis. It maximizes the mutual information between the feature distributions of the central and surrounding regions [10-11].
In this section, we present a simple but efficient discriminant centre-surround hypothesis. We begin by describing the structure of our centre-surround organization and then discuss how to apply it to local saliency detection.
2.1 Circular and Multi-scale Hierarchical Centre-Surround Profile
Here, we are mainly inspired by two research achievements in the field of cognitive neuroscience.
The first is the compartmental model of the cone-H1 cell network, which has been used to simulate the synergistic centre-surround receptive field of monkey H1 horizontal cells [21]. The simplified compartmental model is shown in Figure 1, where the centre-surround receptive field (the grey region) is organized by the horizontal cell network. Each horizontal cell is modelled as a soma represented by a sphere and connected to all of the cones lying in a 120 μm diameter dendritic field. Horizontal cells are connected to their nearest neighbours with resistive gap junctions.
Figure 1. Illustration of the simplified compartmental model
The second is the resolution hypothesis in the visual attention mechanism [22]. Experimental evidence suggests that the attention mechanism can actively enhance the spatial resolution at the attended location. Moreover, the attention dynamics can be demonstrated by the processing of multiple spatial resolutions with a visual search of hierarchical patterns.
Based on the aforementioned neuropsychological evidence, we design a circular and multi-scale hierarchical centre-surround profile by extending the compartmental model to a multiple spatial resolution model. An illustration of the proposed centre-surround organization is shown in Figure 2.
Figure 2. Illustration of the proposed centre-surround organization: (a) structure of the proposed centre-surround organization (image plane, hypothetical H1 cell array and centre-surround region); (b) dendritic field of each hypothetical H1 cell with a different radius
The image plane can be regarded as the cone array in Figure 1. Figure 2(a) shows the structure of the centre-surround region (the grey region), which is composed of N concentric circles with different radii. There are M hypothetical H1 cells (solid circular points) on each concentric circle. The hypothetical H1 cells on different concentric circles have different spatial resolutions. Clearly, the hypothetical H1 cells on circles with a larger radius have a higher spatial resolution than those on circles with a smaller radius. The range of the dendritic field of each hypothetical H1 cell is illustrated by the identically coloured circles in Figure 2(b).
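As a concrete illustration, the sample locations of the hypothetical H1 cells in this profile can be sketched as follows (a minimal Python sketch; the circle spacing r_step and the evenly spaced angles are our illustrative assumptions, not values fixed by the paper):

```python
import math

def centre_surround_profile(u0, v0, N=3, M=8, r_step=5.0):
    """Sample locations of the hypothetical H1 cells: M cells on each of
    N concentric circles around the centre pixel (u0, v0).
    r_step (circle spacing in pixels) is an illustrative choice."""
    points = [(u0, v0)]  # the centre itself
    for n in range(1, N + 1):
        r = n * r_step
        for m in range(M):
            phi = 2.0 * math.pi * m / M
            points.append((u0 + r * math.cos(phi), v0 + r * math.sin(phi)))
    return points
```

With N = 3 and M = 8 (the values used in Section 4), the profile contributes 1 + N x M = 25 sample locations per pixel.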
2.2 Discriminant Centre-Surround Hypothesis for Local Saliency Detection
Based on the well-known fact that the human vision system is sensitive to gradient magnitude and orientation, we present a discriminant centre-surround hypothesis based on the statistical analysis of the LGOH. Firstly, we use the DAISY descriptor to extract the dense LGOH of the centre-surround region. Next, we propose the local saliency decision by the variance analysis of the LGOH.
Let $\mathbf{H}_{\sigma}(l_m(u_0,v_0,r))$ denote the normalized gradient orientation histogram of the dendritic field connected with the hypothetical H1 cell at $l_m(u_0,v_0,r)$, which is the location of the $m$-th hypothetical H1 cell on the concentric circle with a radius $r$. $(u_0,v_0)$ is the centre of the concentric circle and $\sigma$ is the Gaussian kernel scale parameter of the concentric circle with the radius of $r$. Therefore, the DAISY descriptor $\mathbf{D}(u_0,v_0)$ can be formulated as follows:
$$\mathbf{D}(u_0,v_0) = \big[\, \mathbf{H}_{\sigma_1}^{\mathrm T}(u_0,v_0),\ \mathbf{H}_{\sigma_1}^{\mathrm T}(l_1(u_0,v_0,r_1)), \ldots, \mathbf{H}_{\sigma_1}^{\mathrm T}(l_M(u_0,v_0,r_1)),\ \mathbf{H}_{\sigma_2}^{\mathrm T}(l_1(u_0,v_0,r_2)), \ldots, \mathbf{H}_{\sigma_2}^{\mathrm T}(l_M(u_0,v_0,r_2)),\ \ldots,\ \mathbf{H}_{\sigma_N}^{\mathrm T}(l_1(u_0,v_0,r_N)), \ldots, \mathbf{H}_{\sigma_N}^{\mathrm T}(l_M(u_0,v_0,r_N)) \,\big]^{\mathrm T} \quad (1)$$
More details about the DAISY descriptor can be found in the literature [16-17]. Here, we reorganize Equation (1) to form our LGOH for the local saliency analysis.
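For intuition, a single normalized gradient orientation histogram of the kind entering Equation (1) can be sketched as follows (a simplified Python sketch assuming a grayscale NumPy image; the real DAISY computes Gaussian-convolved orientation maps, and the square window and bin count here are our illustrative assumptions):

```python
import numpy as np

def orientation_histogram(img, u0, v0, radius=6, K=8):
    """Normalized gradient orientation histogram of a square neighbourhood
    around (u0, v0) -- a simplified stand-in for one H_sigma(...) entry of
    the DAISY descriptor (DAISY proper uses Gaussian-smoothed orientation
    maps; see Tola et al. [16-17])."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)                                   # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)            # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2.0 * np.pi) * K).astype(int), K - 1)
    r0, r1 = max(v0 - radius, 0), v0 + radius + 1
    c0, c1 = max(u0 - radius, 0), u0 + radius + 1
    hist = np.zeros(K)
    # accumulate gradient magnitude into the orientation bins of the window
    np.add.at(hist, bins[r0:r1, c0:c1].ravel(), mag[r0:r1, c0:c1].ravel())
    s = hist.sum()
    return hist / s if s > 0 else hist
```

On a pure horizontal intensity ramp, for example, all gradient energy falls into the first orientation bin.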
Assume that $K$ is the number of bins in the histogram. $h_{\sigma}(l_m(u_0,v_0,r),\theta_k)$ is the value of the $k$-th bin in the histogram $\mathbf{H}_{\sigma}(l_m(u_0,v_0,r))$. For conciseness, we denote $h_{\sigma}(l_m(u_0,v_0,r),\theta_k)$ as $h_{\sigma}(l_m(r),\theta_k)$. Let $\boldsymbol{\eta}_k$ be a vector consisting of the values of the $k$-th bin of all histograms. The LGOH of the centre-surround region $\mathbf{L}$ can be formulated as follows:
$$\mathbf{L}(u_0,v_0) = \big[\, \boldsymbol{\eta}_1^{\mathrm T}, \boldsymbol{\eta}_2^{\mathrm T}, \ldots, \boldsymbol{\eta}_K^{\mathrm T} \,\big]^{\mathrm T} \quad (2)$$
where:
$$\boldsymbol{\eta}_k = \big[\, h_{\sigma_1}(u_0,v_0,\theta_k),\ h_{\sigma_1}(l_1(r_1),\theta_k), \ldots, h_{\sigma_1}(l_M(r_1),\theta_k),\ h_{\sigma_2}(l_1(r_2),\theta_k), \ldots, h_{\sigma_2}(l_M(r_2),\theta_k),\ \ldots,\ h_{\sigma_N}(l_1(r_N),\theta_k), \ldots, h_{\sigma_N}(l_M(r_N),\theta_k) \,\big]^{\mathrm T} \quad (3)$$
According to the centre-surround mechanism, we suppose that there is a statistically significant difference between the responses of the central and surrounding regions at or near a salient location. Similar statistical suppositions have been discussed in many recent studies [7, 9, 11, 23]. To propose the local saliency decision by the variance analysis of the LGOH, we let $\nu_k$ be the variance of the vector $\boldsymbol{\eta}_k$; as such, the local saliency decision $S_{local}(u_0,v_0)$ can be defined as a linear combination of all $\nu_k$, i.e.:
$$S_{local}(u_0,v_0) = \frac{1}{K}\sum_{k=1}^{K} \nu_k \quad (4)$$
and:
$$\begin{cases} \nu_k = \displaystyle\sum_{1\le n\le N}\ \sum_{1\le m\le M} \big( h_{\sigma_n}(l_m(r_n),\theta_k) - \bar h_k \big)^2\, p(\sigma_n) \\[1ex] \bar h_k = \displaystyle\sum_{1\le n\le N}\ \sum_{1\le m\le M} h_{\sigma_n}(l_m(r_n),\theta_k)\, p(\sigma_n) \end{cases} \quad (5)$$
where $\bar h_k$ is the mean of the vector $\boldsymbol{\eta}_k$ and $p(\sigma_n)$ is the probability of the $n$-th concentric circle, representing the contribution of all hypothetical H1 cells on the $n$-th concentric circle to the whole centre-surround region:
$$p(\sigma_n) = \frac{\displaystyle\sum_{1\le m\le M}\ \sum_{1\le k\le K} h_{\sigma_n}(l_m(r_n),\theta_k)}{\displaystyle\sum_{1\le n\le N}\ \sum_{1\le m\le M}\ \sum_{1\le k\le K} h_{\sigma_n}(l_m(r_n),\theta_k)} \quad (6)$$
3. The Whole Saliency Decision
This section introduces our integration strategy for the
local and global saliency. We first describe the workflow
of our complete saliency decision algorithm. Next, a
global saliency detection method is introduced. Finally, a
new integration strategy is discussed.
3.1 Workflow of our Saliency Decision
The whole process of the proposed saliency decision algorithm is shown in Figure 3. First, the low-level colour features are extracted. Since perceptual differences in the CIELAB colour space approximate Euclidean distances [7, 24], we use the L*a*b* colour features.
Figure 3. Workflow of the proposed saliency detection algorithm
Next, we compute the local and global saliency of each colour channel, respectively. Local saliency represents the difference between a region and its surroundings. It can be computed by Equation (4). However, it is not enough to make a decision according to the local saliency alone, because high local saliency values may lie in some global texture regions, such as the skyline. It is therefore necessary to use global saliency to provide global constraints and further reduce the effects of backgrounds.
Finally, we generate the final saliency map by a new
integration strategy for the local and global saliency.
3.2 Global Saliency Decision
We apply the method of SF [25] to extract the multi-scale global saliency in each colour channel. Let $S^{c}_{global,i}(u_0,v_0)$, $c \in \{l,a,b\}$, be the global saliency of the pixel located at $(u_0,v_0)$ on scale $i$, where $1 \le i \le 6$. The first scale corresponds to the input image and the sixth scale corresponds to the coarsest scale. Accordingly, the global saliency of the pixel located at $(u_0,v_0)$ in colour channel $c$ can be expressed as:
$$S^{c}_{global}(u_0,v_0) = \left( \sum_{i=1}^{6} S^{c}_{global,i}(u_0,v_0) \right)^{2} \quad (7)$$
3.3 Integration Strategy for Local and Global Saliency
The purpose of the integration strategy for local and global saliency is to enhance the response of the local contrast while inhibiting that of the background. Therefore, we calculate the exponent of the local saliency and take the global saliency as a global constraint by using it to weight the exponential local saliency. Thus, the saliency in each colour channel can be expressed as:
$$S^{c}(u_0,v_0) = S^{c}_{global}(u_0,v_0) \cdot e^{\,S^{c}_{local}(u_0,v_0)} \quad (8)$$
where $S^{c}_{local}(u_0,v_0)$, $c \in \{l,a,b\}$, are calculated according to Equation (4). Lastly, to get the final saliency map, we combine the saliency maps of the $l$, $a$ and $b$ channels as follows:
$$S(u_0,v_0) = \sum_{c \in \{l,a,b\}} \big( S^{c}(u_0,v_0) \big)^{2} \quad (9)$$
Different from other combining strategies for local and global saliency, ours is a soft-decision integration strategy. As such, we do not perform a binary threshold operation during the integration process. Figure 4 illustrates the effects of our integration strategy for the local and global saliency. Clearly, the final saliency maps weighted by global saliency (shown in the fifth column) are much better than all the local saliency maps shown in the second through fourth columns.
Figure 4. From left to right: input images, local saliency maps of the l-channel, the a-channel and the b-channel, final weighted saliency maps and ground truths, respectively
4. Experiments and Discussion
We evaluate our method on a commonly-used database that includes 1,000 images and their ground truths [8]. We select 12 state-of-the-art methods for comparison, namely: SF [25], LR [26], HC [27], RC [27], FT [8], AC [7], CA [28], GB [29], IT [1], LC [30], SR [31] and MZ [32]. Some visual comparison results are shown in Figure 5.
Figure 5. From left to right: input images, the ground truths and the final saliency maps of our method, SF [25], LR [26], RC [27], HC [27], FT [8], CA [28] and AC [7], respectively
The original results of these 12 methods can be found at http://users.eecs.northwestern.edu/~xsh835/LowRankSaliency.html, http://cg.cs.tsinghua.edu.cn/people/~cmm/saliency/ and http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/.
Considering the computational cost, the proposed centre-surround organization shown in Figure 2 should not be too complex. So, we take medium values and set N = 3, M = 8 and K = 8. The binary threshold is 0.1 and the range of the saliency value is [0, 1]. Due to the combination of the local and global saliency decisions, our algorithm can extract more exact salient regions and, preferably, inhibit the saliency values in the non-salient regions.
In the following three subsections, we provide more concrete comparisons in terms of precision-recall, F-measure and mean absolute error (MAE).
4.1 Precision-recall Curve with Fixed Threshold
Precision and recall reflect the effectiveness and completeness of the saliency detection, respectively. The higher the recall ratio, the more complete the detected salient object. It is well-known that there is a trade-off between precision and recall.
We first compare the precision-recall curves with a fixed threshold, which takes a value within the range [0, 255]. For each fixed threshold, a binary salient map is generated from the saliency result and the corresponding precision and recall are calculated. The precision-recall curves are shown in Figure 6. Clearly, our method outperforms the other 12 methods in most cases.
4.2 Precision-recall Bar Chart with Adaptive Threshold
We set the adaptive threshold as twice the mean of all the pixels' saliency, i.e.:
$$Th = \frac{2}{w \times h} \sum_{x=1}^{w} \sum_{y=1}^{h} S(x,y) \quad (10)$$
where $w$ and $h$ are the width and height of the saliency map $S$, respectively. The binary salient map is obtained by comparing each pixel's saliency with the adaptive threshold. In the calculation of the F-measure, we use the following equation:
$$F_{\beta} = \frac{(1+\beta^{2}) \times precision \times recall}{\beta^{2} \times precision + recall} \quad (11)$$
where $\beta^{2} = 0.3$. Next, we draw the precision-recall bar chart with an adaptive threshold in Figure 7. Evidently, the precision, recall and F-measure of our method are the best among these methods.
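The adaptive threshold of Equation (10) and the F-measure of Equation (11) can be sketched together as follows (a Python sketch; the tie-handling at the threshold and the guards against empty masks are our assumptions):

```python
import numpy as np

def f_measure(saliency, gt, beta2=0.3):
    """Adaptive-threshold F-measure of Equations (10) and (11).
    saliency: continuous saliency map; gt: boolean ground-truth mask."""
    th = 2.0 * saliency.mean()                  # Equation (10)
    binary = saliency >= th
    tp = np.logical_and(binary, gt).sum()       # true positives
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall          # Equation (11)
    return (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
```

With beta squared set to 0.3, the measure weights precision more heavily than recall, as is customary in salient object detection benchmarks.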
4.3 Mean Absolute Error (MAE)
In the evaluation of the precision and recall, the selection of the threshold has a strong impact on the evaluation results. In particular, different calculation methods for the adaptive threshold may cause different evaluation results. From our viewpoint, it is difficult for the precision-recall evaluation to reflect the algorithm's performance comprehensively. Furthermore, it is even harder to evaluate the effects of uniform highlighting and background inhibition.
The MAE measures how closely the continuous saliency map $S$ approximates the ground truth $GT$, which provides us with an additional means of evaluation. We calculate the MAE according to the following definition:
$$MAE = \frac{1}{w \times h} \sum_{x=1}^{w} \sum_{y=1}^{h} \big| S(x,y) - GT(x,y) \big| \quad (12)$$
where w and h are the width and height of the saliency
map S, respectively. The comparison results in Figure 8
show that the MAE value of our method is the lowest.
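Equation (12) amounts to one line of array code (a sketch assuming both maps are scaled to [0, 1]):

```python
import numpy as np

def mean_absolute_error(saliency, gt):
    """MAE of Equation (12): mean absolute difference between the
    continuous saliency map and the ground truth."""
    return np.abs(saliency.astype(np.float64) - gt.astype(np.float64)).mean()
```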
Figure 6. Evaluation of the precision and recall with a fixed threshold
Figure 7. Evaluation of the precision, recall and F-measure with an adaptive threshold
Figure 8. Evaluation of the MAE
5. Conclusions
In this paper, a circular and multi-scale hierarchical centre-surround profile is designed. Afterwards, a simple but efficient discriminant centre-surround hypothesis and a local saliency decision based on the variance analysis of the LGOH are presented. Finally, we discuss a soft-decision integration strategy for the local and global saliency in the CIELAB colour space.
Extensive experiments have been conducted to verify the effectiveness of our method. In addition to the final saliency detection results, precision, recall and F-measure, we also provide a more objective evaluation by MAE. All our experiments demonstrate that our algorithm can produce more complete salient objects and that it has a stronger response in salient regions and better inhibition performance in non-salient regions.
Future work might focus on finding a more effective integration strategy to inhibit the effects of the global texture and backgrounds while enhancing the response of the attention regions.
6. Acknowledgments
This work has been funded by the National Natural Science Foundation of China (No. 60902077, No. 61102146) and the Zhejiang Provincial Natural Science Foundation of China (LY12F05004). The authors are grateful to the anonymous reviewers who made constructive comments.
7. References
[1] L. Itti, C. Koch, E. Niebur (1998) A Model of Saliency-based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 20: 1254-1259.
[2] L. Itti, C. Koch (2001) Feature Combination Strategies for Saliency-based Visual Attention Systems. Journal of Electronic Imaging. 10: 161-169.
[3] D. Gao, N. Vasconcelos (2004) Discriminant Saliency for Visual Recognition from Cluttered Scenes. Conference on Neural Information Processing Systems, pp. 481-488.
[4] S. Frintrop, G. Backer, E. Rome (2005) Goal-directed Search with a Top-down Modulated Computational Attention System. Proceedings of the Annual Meeting of the German Association for Pattern Recognition (DAGM '05), Wien, Austria.
[5] V. Navalpakkam, L. Itti (2006) An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2049-2056.
[6] T. Xu, H. Wu, T. Zhang, K. Kühnlenz, M. Buss (2009) Environment Adapted Active Multi-focal Vision System for Object Detection. ICRA 2009, 2418-2423.
[7] R. Achanta, F. Estrada, P. Wils, S. Süsstrunk (2008) Salient Region Detection and Segmentation. International Conference on Computer Vision Systems. Vol. 5008, 66-75.
[8] R. Achanta, S. Hemami, F. Estrada, S. Süsstrunk (2009) Frequency-tuned Salient Region Detection. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 1597-1604.
[9] V. Gopalakrishnan, Y. Hu, D. Rajan (2009) Salient Region Detection by Modeling Distributions of Color and Orientation. IEEE Transactions on Multimedia. 11(5): 892-905.
[10] D. Gao, V. Mahadevan, N. Vasconcelos (2009) Discriminant Saliency, the Detection of Suspicious Coincidences, and Applications to Visual Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 31(6): 989-1005.
[11] D. Gao, V. Mahadevan, N. Vasconcelos (2009) On the Plausibility of the Discriminant Center-surround Hypothesis for Visual Saliency. Journal of Vision. 8(7): 1-18.
7 Lili Lin and Wenhui Zhou: LGOHbased Discriminant CentreSurround Saliency Detection
www.intechopen.com
[12] Q. Zhang, H. Liu, J. Shen, G. Gu (2010) An Improved Computational Approach for Salient Region Detection. Journal of Computers, 5(7): 1011-1018.
[13] L. Lin, W. Zhou, H. Zhang (2011) Weber's Law Based Center-Surround Hypothesis for Bottom-Up Saliency Detection. 18th International Conference on Neural Information Processing, Part III, LNCS 7064: 592-600.
[14] E. Rahtu, J. Heikkilä (2009) A Simple and Efficient Saliency Detector for Background Subtraction. IEEE 12th International Conference on Computer Vision Workshops, 1137-1144.
[15] K. Mikolajczyk, C. Schmid (2005) A Performance Evaluation of Local Descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10): 1615-1630.
[16] E. Tola, V. Lepetit, P. Fua (2008) A Fast Local Descriptor for Dense Matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Alaska, USA, 1-8.
[17] E. Tola, V. Lepetit, P. Fua (2010) Daisy: an Efficient Dense Descriptor Applied to Wide-Baseline Stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5): 815-830.
[18] T. Ojala, M. Pietikainen, T. Maenpaa (2002) Multiresolution Gray-scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7): 971-987.
[19] J. Chen, S. Shan, C. He, G. Zhao, M. Pietikainen, et al. (2010) WLD: A Robust Local Image Descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9): 1705-1720.
[20] S. Frintrop, M. Klodt, E. Rome (2007) A Real-time Visual Attention System Using Integral Images. International Conference on Computer Vision Systems (ICVS), Bielefeld, Germany, 1-10.
[21] O. S. Packer, D. M. Dacey (2005) Synergistic Center-Surround Receptive Field Model of Monkey H1 Horizontal Cells. Journal of Vision, 5(11): 1038-1054.
[22] G. Deco, D. Heinke (2007) Attention and Spatial Resolution: a Theoretical and Experimental Study of Visual Search in Hierarchical Patterns. Perception, 36(3): 335-354.
[23] V. Yanulevskaya, J. Geusebroek (2008) Salient Region Detection from Natural Image Statistics. The 14th Annual Conference of the Advanced School for Computing and Imaging, Heijen, The Netherlands, 389-395.
[24] E. Vazquez, T. Gevers, M. Lucassen, J. V. D. Weijer, R. Baldrich (2010) Saliency of Color Image Derivatives: a Comparison between Computational Models and Human Perception. Journal of the Optical Society of America A, 27(3): 613-621.
[25] F. Perazzi, P. Krahenbuhl, Y. Pritch, A. Hornung (2012) Saliency Filters: Contrast-based Filtering for Salient Region Detection. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 733-740.
[26] X. Shen, Y. Wu (2012) A Unified Approach to Salient Object Detection via Low Rank Matrix Recovery. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 853-860.
[27] M. M. Cheng, G. X. Zhang, N. J. Mitra, X. Huang, S. M. Hu (2011) Global Contrast-based Salient Region Detection. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 409-416.
[28] S. Goferman, L. Zelnik-Manor, A. Tal (2010) Context-Aware Saliency Detection. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2376-2383.
[29] J. Harel, C. Koch, P. Perona (2006) Graph-based Visual Saliency. Conference on Neural Information Processing Systems, 545-552.
[30] Y. Zhai, M. Shah (2006) Visual Attention Detection in Video Sequences using Spatiotemporal Cues. ACM Multimedia, 815-824.
[31] X. Hou, L. Zhang (2007) Saliency Detection: a Spectral Residual Approach. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 1-8.
[32] Y. Ma, H. Zhang (2003) Contrast-based Image Attention Analysis by Using Fuzzy Growing. ACM Multimedia, 374-381.