An Improved Method for Evaluating Image Sharpness Based on
Edge Information
Zhaoyang Liu, Huajie Hong, Zihao Gan *, Jianhua Wang and Yaping Chen

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; zhaoyangliunudt@163.com (Z.L.); opalqq@163.com (H.H.); wangjh20a@163.com (J.W.); yaping_chen2021@163.com (Y.C.)
* Correspondence: ganzihaoh@sina.com

Abstract: In order to improve the subjective and objective consistency of image sharpness evaluation
while meeting the requirement of image content irrelevance, this paper proposes an improved
sharpness evaluation method without a reference image. First, the positions of the edge points are
obtained by a Canny edge detection algorithm based on the activation mechanism. Then, the edge
direction detection algorithm based on the grayscale information of the eight neighboring pixels is
used to acquire the edge direction of each edge point. Further, the edge width is solved to establish
the histogram of edge width. Finally, according to the performance of three distance factors based
on the histogram information, the type 3 distance factor is introduced into the weighted average
edge width solving model to obtain the sharpness evaluation index. The image sharpness evaluation
method proposed in this paper was tested on the LIVE database. The test results were as follows: the
Pearson linear correlation coefficient (CC) was 0.9346, the root mean square error (RMSE) was 5.78,
the mean absolute error (MAE) was 4.9383, the Spearman rank-order correlation coefficient (ROCC)
was 0.9373, and the outlier rate (OR) was 0. In addition, through a comparative analysis with two other
methods and a real shooting experiment, the superiority and effectiveness of the proposed method in
performance were verified.
Keywords: image sharpness; no-reference; eight-neighborhood algorithm; edge width; distance factor

1. Introduction
With the significant advantages of non-contact, flexibility, and high integration, computer vision measurement has broad application prospects in electronic semiconductors, automotive manufacturing, food packaging, film, and other industrial fields. Image sharpness is the core index to measure the quality of visual images; therefore, the research on the evaluation method of visual image sharpness is one of the key technologies to achieve visual detection [1–3]. Moreover, as people demand more and more sharpness in video chats, HDTV, etc., the research of a more efficient image sharpness evaluation method has become a pressing problem nowadays.

Generally, image sharpness evaluation methods can be divided into full-reference (FR) sharpness evaluation methods, reduced-reference (RR) sharpness evaluation methods, and no-reference (NR) sharpness evaluation methods. Among them, the FR sharpness evaluation methods are used to judge the degree of deviation of the measured image from the sharp reference image [4]. The RR sharpness evaluation methods evaluate the measured image by extracting only part of the information of the reference image [5]. However, in practical applications, undistorted sharp reference images are usually difficult to obtain. Therefore, the NR sharpness evaluation methods have higher research value and wider application capability. Existing NR sharpness evaluation methods are formulated either in the transform domain or in the spatial domain [6]. Transform domain-based methods [7–10] need to transform images from the spatial domain to other domains for
processing. However, their computational complexity is often too high; therefore, such methods perform poorly in real time and are of limited use in many applications. Spatial domain-based
methods [11–15] can be divided into two main types. One type is based on the fact that
clear images have higher contrast compared to blurred images. Typical evaluation methods
of this type are the various gradient function methods, such as the Tenegrad function
method and energy gradient function method [11]. The other type is based on the fact
that image blurring will lead to edge diffusion; a typical evaluation method of this type is
the average edge width method [12]. It should be noted that, although both the contrast-
based evaluation method and the edge information-based evaluation method have the
advantage of low computational complexity, the former is more dependent on the image
content compared to the latter, that is, the former method tends to fail when the contents of
measured images are different.
Li et al. [13] proposed a no-reference image sharpness evaluation method on scanning
electron microscopes. The method firstly extracts the edge of dark channel maps by a
Sobel operator. It then removes the noise effect but preserves the edge information by an
edge-preserving operator based on the weighted least squares (WLS) framework. Finally, it
combines the maximum gradient of each edge point with the average gradient to form the
sharpness evaluation index. Although this method extracts a part of the edge information
of the image by edge detection, it is still essentially an evaluation method based on the
contrast principle. Wang [14] proposed an image sharpness evaluation method based on
a strong edge width. She convolved the measured image by a Sobel operator to obtain
the horizontal and vertical gradient maps, respectively. By selecting the threshold, the
horizontal and vertical strong edge points of the measured image were obtained. Moreover,
the strong edge width was solved. Finally, the sharpness evaluation index was generated by
introducing the histogram information. In summary, most of the current image sharpness
evaluation methods based on edge information still extract edge points by a Sobel operator
and often only consider horizontal and vertical directions when determining the edge
direction of edge points, which largely limits the further improvement of the accuracy
of this type of evaluation methods. In addition, not all edge information is needed for
evaluation methods and few scholars distinguish the extracted edge information.
In this paper, we focus on the abovementioned problems. Firstly, a Canny edge detec-
tion algorithm with excellent comprehensive performance was improved to enhance the
edge detection effect of the measured images. Then, we proposed an eight-neighborhood
grayscale difference method to achieve a rapid and efficient determination of the edge
points’ four edge directions. Finally, by comparing three distance factors based on the
histogram of the edge width, the image sharpness evaluation method proposed in this
paper was obtained. With the abovementioned improvements, our proposed method has
excellent performance in terms of content irrelevance, subjective–objective consistency, and
computational speed, especially in the real-time evaluation of image sharpness, which has
great potential for application.

2. Principle and Design of the Sharpness Evaluation Method


2.1. Image Edge
The edge information of an image is crucial for vision and it is also one of the important
features of an image. Figure 1 simulates the situation when the ideal step edge is blurred
using a black and white image with drastically changing grayscale values. Additionally,
it can be seen that, when the image is blurred, the edges of the image will spread and the
grayscale curve slows down accordingly. Obviously, there is a positive correlation between
the degree of edge diffusion and the degree of image blurring.
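As a minimal numerical illustration of this relationship (our own toy example, not from the paper), the sketch below blurs an ideal one-dimensional step edge with box filters of growing radius and reports how far the grayscale transition spreads:

```cpp
// Toy illustration: blur an ideal 1-D step edge with a box filter and watch the
// grayscale transition widen as the blur radius grows (all values are illustrative).
#include <cstdio>
#include <vector>

int main() {
    // Ideal step edge: 32 dark pixels followed by 32 bright pixels.
    std::vector<double> step(64, 0.0);
    for (int i = 32; i < 64; ++i) step[i] = 255.0;

    const int radii[] = {0, 2, 4, 8};
    for (int radius : radii) {
        std::vector<double> blurred(step.size());
        for (int i = 0; i < (int)step.size(); ++i) {      // box blur of width 2*radius+1
            double sum = 0.0; int count = 0;
            for (int k = -radius; k <= radius; ++k) {
                int j = i + k;
                if (j >= 0 && j < (int)step.size()) { sum += step[j]; ++count; }
            }
            blurred[i] = sum / count;
        }
        // Rough stand-in for edge diffusion: number of pixels whose value lies
        // strictly between 10% and 90% of the dynamic range.
        int width = 0;
        for (double v : blurred) if (v > 25.5 && v < 229.5) ++width;
        std::printf("blur radius %d -> transition width %d pixels\n", radius, width);
    }
    return 0;
}
```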
Figure 1. Step edge image and its corresponding grayscale change curve of pixels on a row.

It should be noted that the edges in a clear image are not always step edges; there are also impulse edges and roof edges, depending on the variation of grayscale values, as shown in Figure 2. However, the clear image is gradually smoothed after blurring, which leads to the disappearance of impulse edges and roof edges; this is obviously different from the relationship of step edges with the degree of image blurring. Therefore, the approach used to extract the step or approximately step edges is directly related to the accuracy of the sharpness evaluation method. Section 2.4 of this paper gives a detailed solution to this problem, which will not be discussed here for now.

Figure 2. Impulse, roof edge images, and their corresponding grayscale change curves of pixels on a row: (a) impulse edge; (b) roof edge; (c) impulse edge grayscale change curve; (d) roof edge grayscale change curve.

2.2. Edge Detection

A Canny operator has superior overall performance compared to other edge detection operators, but the effect of Canny edge detection depends heavily on the choice of its threshold value. If the threshold value is set too high, it will lead to missed detection and edge discontinuity. If the threshold value is set too low, there will be over-detection problems, such as the noise in measured images being wrongly detected as an edge. Therefore, in order to improve the edge detection effect, an improved Canny edge detection algorithm based on the activation mechanism is proposed in this paper.
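A minimal sketch of this activation idea is given below. It assumes that two binary edge maps have already been produced by Canny detection at a high and a low threshold; the function and variable names (activateEdges, highEdges, lowEdges) are illustrative and not taken from the paper's implementation.

```cpp
// Sketch of the activation step (assumption: `highEdges` and `lowEdges` are binary
// maps already produced by Canny at a high and a low threshold; the result keeps
// only low-threshold edge pixels reachable from high-threshold ones).
#include <cstdint>
#include <queue>
#include <vector>

std::vector<uint8_t> activateEdges(const std::vector<uint8_t>& highEdges,
                                   const std::vector<uint8_t>& lowEdges,
                                   int width, int height) {
    std::vector<uint8_t> activated(width * height, 0);
    std::queue<int> frontier;

    // Initial activation: copy the reliable high-threshold edge points.
    for (int idx = 0; idx < width * height; ++idx) {
        if (highEdges[idx] && lowEdges[idx]) {
            activated[idx] = 1;
            frontier.push(idx);
        }
    }
    // Repeated activation: an activated point activates adjacent low-threshold
    // edge points in its 8-neighborhood.
    while (!frontier.empty()) {
        int idx = frontier.front(); frontier.pop();
        int x = idx % width, y = idx / width;
        for (int dy = -1; dy <= 1; ++dy) {
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int nIdx = ny * width + nx;
                if (lowEdges[nIdx] && !activated[nIdx]) {
                    activated[nIdx] = 1;
                    frontier.push(nIdx);
                }
            }
        }
    }
    // Low-threshold points that were never activated (isolated noise) are dropped.
    return activated;
}
```

Seeding from the high-threshold result and repeatedly propagating through the 8-neighborhood corresponds to the initial and repeated activation described next (plots (c) and (d) of Figure 3), while low-threshold points that are never reached, such as isolated noise, are discarded as in plot (e).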
Plot (a) in Figure 3 assumes that the edge detection result is obtained under a high threshold; it can be seen that there are fewer edge points on it. Plot (b) shows the edge detection result obtained in the low-threshold case, with more edge points and the appearance of a noise point marked in green. By replicating the edge information in plot (a) to plot (b), plot (c) can be procured. This process is called activation; the activated edge points are marked in red. After that, the activated edge points will activate all the other edge points adjacent to them, as shown in plot (d). Because noise tends to exist in isolation, the isolated noise point is filtered out after the activation process, as shown in plot (e). The edge information obtained after the detection of this improved algorithm has the characteristics of low noise and high accuracy.

Figure 3. The principle of an improved Canny edge detection algorithm: (a) edge detection result under a high threshold; (b) edge detection result under a low threshold; (c) edge detection result after initial activation; (d) edge detection result after multiple activations; (e) final edge detection result.

Figure 4 depicts the edge extraction process of the Lena test image using our improved algorithm. It can be clearly seen that the edge extraction result processed by the improved algorithm is less noisy than that under the low threshold and more accurate than that under the high threshold.

Figure 4. Improved Lena edge detection diagram.

2.3. Analysis of Edge Width

The essence of an edge is a collection of pixel points with drastically changing grayscale values. To calculate the edge width of an edge point, it is necessary to firstly determine the edge direction corresponding to the edge point and then calculate the edge width along the edge direction according to appropriate rules.

2.3.1. Determination of Edge Direction

Eeping et al. [16] calculated the gradient of each edge point in the measured image by a Sobel operator and defined the gradient direction (including the negative direction of the gradient) as the edge direction of that edge point. Different from the idea of using a gradient to determine the edge direction, this paper proposes a method based on the grayscale difference between the pixel points in the eight neighborhoods of the edge points. Compared with the gradient determining method, this method not only improves the accuracy of determining the edge direction but also saves twice the computational time.

Figure 5 illustrates the calculation principle of the eight-neighborhood grayscale difference method. For each edge point (GEdge denotes an edge point in the figure below), the four grayscale differences of its eight-neighborhood pixel points are calculated along the horizontal, vertical, 45°, and −45° directions, respectively, namely:

$$
\begin{aligned}
D_{\mathrm{horizontal}} &= \mathrm{ABS}(G_{12} - G_{32})\\
D_{\mathrm{vertical}}   &= \mathrm{ABS}(G_{21} - G_{23})\\
D_{45^{\circ}}          &= \mathrm{ABS}(G_{31} - G_{13})\\
D_{-45^{\circ}}         &= \mathrm{ABS}(G_{11} - G_{33})
\end{aligned}
\tag{1}
$$

Figure 5. Eight-neighborhood grayscale difference method.
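As a rough sketch (under our own pixel-indexing assumption, which may differ from the G_ij labels of Figure 5), the test of Equation (1) combined with the "opposite direction" rule described below can be implemented by comparing the two opposite neighbors along each of the four axes and keeping the axis along which they are most similar:

```cpp
// Sketch with our own indexing convention: for an edge point (x, y), compare the
// two opposite neighbors along each of the four axes; the axis along which the
// neighbors are most similar is taken as the edge direction.
#include <cstdint>
#include <cstdlib>
#include <vector>

enum class EdgeDirection { Horizontal, Vertical, Plus45, Minus45 };

EdgeDirection edgeDirection8(const std::vector<uint8_t>& gray, int width, int x, int y) {
    auto at = [&](int col, int row) { return static_cast<int>(gray[row * width + col]); };

    // Absolute grayscale differences of opposite neighbors along the four axes.
    int alongHorizontal = std::abs(at(x - 1, y) - at(x + 1, y));         // left vs right
    int alongVertical   = std::abs(at(x, y - 1) - at(x, y + 1));         // top vs bottom
    int alongPlus45     = std::abs(at(x + 1, y - 1) - at(x - 1, y + 1)); // up-right vs down-left
    int alongMinus45    = std::abs(at(x - 1, y - 1) - at(x + 1, y + 1)); // up-left vs down-right

    const int d[4] = { alongHorizontal, alongVertical, alongPlus45, alongMinus45 };
    const EdgeDirection dirs[4] = { EdgeDirection::Horizontal, EdgeDirection::Vertical,
                                    EdgeDirection::Plus45, EdgeDirection::Minus45 };
    int best = 0;
    for (int i = 1; i < 4; ++i)
        if (d[i] < d[best]) best = i;   // most similar pair -> edge runs along this axis
    return dirs[best];
}
```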
The opposite direction of the direction corresponding to the minimum value of the four differences in the above equations is the edge direction obtained by this method. For example, the opposite direction of the horizontal direction is the vertical direction, and the opposite direction of the 45° direction is the −45° direction.

The following figure shows the effect of determining the edge direction of the Lena test image using the gradient determining method in reference [14] and the eight-neighborhood grayscale difference method in this paper, respectively. The results of the gradient determining method for determining the edge directions of the edge points (where the pentagrams are located) in Figure 6a–d are the vertical direction, vertical direction, horizontal direction, and vertical direction, respectively. Accordingly, the edge direction determination results of the eight-neighborhood grayscale difference method are the −45° direction, horizontal direction, vertical direction, and −45° direction, respectively. It is thus clear that the determination results of the edge direction by the eight-neighborhood grayscale difference method are more realistic and accurate.

Figure 6. Determination of edge direction: (a) −45° edge direction; (b) horizontal edge direction; (c) vertical edge direction; (d) −45° edge direction.

In addition to comparing the accuracy of the above two methods, this paper further compares the computational time of the gradient determining method and the eight-neighborhood grayscale difference method by calculating four images with typical edge directions.

We performed the gradient determining operation on the four images in Figure 7 in Visual Studio 2019 using C++ under the Windows 10 operating system, and the average processing time was 10 ms per image, while the average processing time was 3 ms per image for the eight-neighborhood grayscale difference operation on the four images. It can be seen that the eight-neighborhood grayscale difference method proposed in this paper can quickly and efficiently determine the edge direction of each edge point, which creates the condition for accurate calculation of the edge width in the next step.

Figure 7. Typical edge direction diagram: (a) vertical edge direction; (b) 45° edge direction; (c) −45° edge direction; (d) horizontal edge direction.

2.3.2. Solution of Edge Width
To calculate the edge width of an edge point, it is necessary to find the grayscale extreme points at the two ends closest to the edge point in the edge direction [10]. When the grayscale values of one side are larger than those of the other side, the maximum value point of the side with the larger grayscale values and the minimal value point of the side with the smaller grayscale values are selected as the start and end points of the edge width; the distance between the two end points is the corresponding edge width of the edge point. Figure 8 shows the variation of the grayscale values in the 257th row along the horizontal direction of the reference image “parrots” in the LIVE database [17] after Gaussian blurring. As can be seen in the figure below, the edge widths of the edge points P1 and P3 are P2–P2′ and P4′–P4, respectively.

Figure 8. Parrots’ grayscale value change curve.
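The rule above can be sketched as a simple profile walk. The snippet below is our own illustration (integer pixel steps, so diagonal directions are counted in steps rather than Euclidean distance); it moves outwards from the edge point in both senses of the direction found in Section 2.3.1 and stops at the nearest grayscale extremum on each side:

```cpp
// Sketch of the edge-width rule: walk from the edge point along the sampling
// direction to the nearest grayscale extremum on each side; the number of steps
// between the two extrema is taken as the edge width. Names are illustrative.
#include <cstdint>
#include <vector>

// Walk from (x, y) in steps of (dx, dy) while the profile keeps rising (or keeps
// falling); stop at the nearest local extremum and return the number of steps taken.
static int stepsToExtremum(const std::vector<uint8_t>& gray, int width, int height,
                           int x, int y, int dx, int dy) {
    int steps = 0;
    int prev = gray[y * width + x];
    int trend = 0;                               // +1 rising, -1 falling, 0 unknown yet
    while (true) {
        int nx = x + dx, ny = y + dy;
        if (nx < 0 || ny < 0 || nx >= width || ny >= height) break;
        int cur = gray[ny * width + nx];
        if (cur == prev && trend == 0) {
            // flat start, keep walking
        } else if (cur > prev) {
            if (trend < 0) break;                // was falling, now rising -> extremum reached
            trend = +1;
        } else if (cur < prev) {
            if (trend > 0) break;                // was rising, now falling -> extremum reached
            trend = -1;
        } else {
            break;                               // plateau after a monotone run -> extremum
        }
        prev = cur; x = nx; y = ny; ++steps;
    }
    return steps;
}

// Edge width of one edge point: extrema are searched on both sides of the point.
int edgeWidth(const std::vector<uint8_t>& gray, int width, int height,
              int x, int y, int dx, int dy) {
    return stepsToExtremum(gray, width, height, x, y,  dx,  dy) +
           stepsToExtremum(gray, width, height, x, y, -dx, -dy);
}
```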
In this paper, the above rule is also followed when calculating the edge width. The upper and lower rows of the images in Figure 9 are “parrots” and “planes” in the Gaussian blurred images of the LIVE database, respectively. According to the method in this paper, the edge widths of the edge points in the upper and lower rows of images were calculated separately; the calculation results are shown in Figure 10.

Figure 9. Gaussian blurred images.

One hundred edge points of “parrots” and “planes” were randomly selected, and these 100 points were divided into equal parts by 360° so that each edge point corresponds to an angle within 0–360°, which is the polar angle corresponding to that edge point. Then, the edge width of the edge point is selected as the corresponding polar diameter; so, the edge point with a certain edge width can be mapped to the polar coordinate system. Plot (a) and plot (b) in Figure 10 correspond to the upper and lower rows of images in Figure 9, respectively (each row of images in Figure 9, from left to right, can be numbered as a, b, c, and d). According to the definition of points in the polar coordinate system, the more the line is located outside, the more edge points with large edge widths will be in the image corresponding to the line. It can be clearly seen from Figure 10 that the pink line is the most inward, the blue line is outward, the yellow line is further outward, and the green line is the outermost, which correspond to the fact that the two rows of images in Figure 9 are getting blurred from left to right, indicating that the edge width calculation method proposed in this paper can adequately reflect the blurring degree of images.

Figure 10. (a) “parrots” polar graph of edge information; (b) “planes” polar graph of edge information.

2.4. Histogram of Edge Width

For the obtained edge widths of different edge points, the probability P(ωi) that the edge width is ωi can be calculated by Equation (2).

$$
P(\omega_i) = \frac{n_i}{N}
\tag{2}
$$

In the above equation, ni is the number of edge points with edge width ωi and N is the total number of edge points.

Once the probabilities of different edge widths are obtained, the histogram of the edge width can be established. Take the Gaussian blurred image “womanhat” in the LIVE database as an example; its corresponding histogram is shown in Figure 11.

Figure 11. Histogram of edge width under different blurring degrees of “womanhat”.

It can be seen from Figure 11 that, as the degree of blurring deepens, two phenomena appear in the corresponding histogram. (1) The peak shifts to the right, that is, the probability of large edge widths increases. (2) The histogram spreads and the peak value decreases, which means that the probability of larger edge widths generally increases. Reference [14] states, for these phenomena, that the edge widths corresponding to the peak portion of the histogram are more likely to be generated after the step edges or approximate step edges are blurred, which can more accurately reflect blurriness. In this regard, a distance factor, as shown in Equation (3), was introduced to enhance the contribution of the edge widths of the peak portion to the sharpness evaluation. The distance factor variation relationship corresponding to Equation (3) is shown in Figure 12.

In this paper, based on the previous study, two distance factors, as shown in Equations (4) and (5), are proposed, and their respective relationships with the edge width are shown in Figures 13 and 14, respectively. For the convenience of later description, the distance factors corresponding to Equations (3)–(5) are named as type 1 distance factor, type 2 distance factor, and type 3 distance factor, respectively.

$$
d(\omega_i) =
\begin{cases}
\left(\dfrac{\omega_i}{\omega_{mp}}\right)^2 & \omega_i < \omega_{mp} \\[2mm]
1 & \omega_i = \omega_{mp} \\[2mm]
\left(\dfrac{\omega_{me} - \omega_i}{\omega_{me} - \omega_{mp}}\right)^2 & \omega_i > \omega_{mp}
\end{cases}
\tag{3}
$$

In the above equation, ωmp is the edge width with the highest probability, ωme is the longest edge width, ωi is the edge width, and d(ωi) is the distance factor of ωi.

$$
d(\omega_i) =
\begin{cases}
\dfrac{\omega_i}{\omega_{mp}} & \omega_i < \omega_{mp} \\[2mm]
1 & \omega_i = \omega_{mp} \\[2mm]
\dfrac{\omega_i}{\omega_{mp} - \omega_{me}} + \dfrac{\omega_{me}}{\omega_{me} - \omega_{mp}} & \omega_i > \omega_{mp}
\end{cases}
\tag{4}
$$

$$
d(\omega_i) =
\begin{cases}
\dfrac{\omega_i\,(2\omega_{mp} - \omega_i)}{\omega_{mp}^2} & \omega_i < \omega_{mp} \\[2mm]
1 & \omega_i = \omega_{mp} \\[2mm]
\dfrac{(\omega_{me} - \omega_i)(\omega_i - 2\omega_{mp} + \omega_{me})}{(\omega_{mp} - \omega_{me})^2} & \omega_i > \omega_{mp}
\end{cases}
\tag{5}
$$

Figure 12. The distance factor variation relationship in Equation (3).

Figure 13. The distance factor variation relationship in Equation (4).

Figure 14. The distance factor variation relationship in Equation (5).

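For reference, the three distance factors of Equations (3)–(5) can be evaluated with a few lines of code. The sketch below uses illustrative names (not the paper's code) and prints a toy table showing that all three factors equal 1 at the histogram peak ωmp and fall to 0 at the longest width ωme:

```cpp
// Sketch of the three distance factors of Equations (3)-(5).
// wMp: edge width with the highest probability; wMe: longest edge width.
#include <cstdio>

// type = 1, 2 or 3 selects Equation (3), (4) or (5), respectively.
double distanceFactor(double w, double wMp, double wMe, int type) {
    if (w == wMp) return 1.0;
    if (w < wMp) {
        switch (type) {
            case 1:  { double r = w / wMp; return r * r; }             // Eq. (3)
            case 2:  return w / wMp;                                    // Eq. (4)
            default: return w * (2.0 * wMp - w) / (wMp * wMp);          // Eq. (5)
        }
    }
    // w > wMp
    switch (type) {
        case 1:  { double r = (wMe - w) / (wMe - wMp); return r * r; }  // Eq. (3)
        case 2:  return (wMe - w) / (wMe - wMp);                        // Eq. (4), rewritten
        default: return (wMe - w) * (w - 2.0 * wMp + wMe)
                        / ((wMp - wMe) * (wMp - wMe));                  // Eq. (5)
    }
}

int main() {
    // Toy check with wMp = 4 and wMe = 12.
    for (double w = 1.0; w <= 12.0; w += 1.0)
        std::printf("w=%4.1f  d1=%.3f  d2=%.3f  d3=%.3f\n", w,
                    distanceFactor(w, 4.0, 12.0, 1),
                    distanceFactor(w, 4.0, 12.0, 2),
                    distanceFactor(w, 4.0, 12.0, 3));
    return 0;
}
```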
2.5. Sharpness Evaluation Model

After acquiring distance factors, the final sharpness evaluation value can be obtained by introducing them into Equation (6).

$$
\mathrm{Value} = \sum_{\omega_i=\omega_{minE}}^{\omega_{maxE}} d(\omega_i)\,P(\omega_i)\,\omega_i
\tag{6}
$$

In the above equation, ωminE and ωmaxE are the minimum and maximum edge widths, respectively.

Finally, we summarize the sharpness evaluation model proposed in this paper. The edge information of the measured image can be obtained after the edge detection. Then, the edge direction of the edge point can be determined by calculating the eight-neighborhood grayscale difference of the extracted edge point, and the edge width can be calculated along the edge direction of the edge point. With the edge width, the histogram of edge width can be established. Then, the distance factor of each edge width can be acquired according to the distance factor calculation equation. Afterwards, the distance factor is introduced into the evaluation index to obtain the sharpness evaluation model of this paper. The above process is shown in Figure 15.

Figure 15. Flow chart of sharpness evaluation model.
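The whole index of Equation (6) can then be sketched as follows, assuming the per-edge-point widths have already been computed as in Section 2.3. The histogram of Equation (2), the type 3 distance factor of Equation (5), and the weighted sum of Equation (6) are combined in one illustrative function (names are ours, not the paper's code):

```cpp
// Sketch of the final sharpness index of Equation (6), assuming integer edge widths.
#include <cstdio>
#include <map>
#include <vector>

double type3Factor(double w, double wMp, double wMe) {          // Equation (5)
    if (w == wMp) return 1.0;
    if (w < wMp)  return w * (2.0 * wMp - w) / (wMp * wMp);
    return (wMe - w) * (w - 2.0 * wMp + wMe) / ((wMp - wMe) * (wMp - wMe));
}

double sharpnessValue(const std::vector<int>& edgeWidths) {
    // Histogram n_i and probabilities P(w_i) = n_i / N, Equation (2).
    std::map<int, int> counts;
    for (int w : edgeWidths) ++counts[w];
    const double N = static_cast<double>(edgeWidths.size());

    // wMp: width with the highest probability; wMe: longest edge width.
    int wMp = counts.begin()->first, wMe = counts.rbegin()->first;
    for (const auto& kv : counts)
        if (kv.second > counts[wMp]) wMp = kv.first;

    // Value = sum over observed widths of d(w_i) * P(w_i) * w_i, Equation (6).
    double value = 0.0;
    for (const auto& kv : counts) {
        double p = kv.second / N;
        value += type3Factor(kv.first, wMp, wMe) * p * kv.first;
    }
    return value;
}

int main() {
    // Toy data: a "sharper" width set vs. a "blurrier" one (larger widths).
    std::vector<int> sharp  = {2, 2, 2, 3, 3, 4, 2, 3, 2, 5};
    std::vector<int> blurry = {5, 6, 7, 6, 8, 9, 6, 7, 10, 6};
    std::printf("sharp  image value: %.3f\n", sharpnessValue(sharp));
    std::printf("blurry image value: %.3f\n", sharpnessValue(blurry));
    return 0;
}
```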
3. Experimental Results and Analysis

3.1. Distance Factor Comparison Experiment
In order to fully compare the performances of the three distance factors and, thus, decide which distance factor should be introduced into the sharpness evaluation index, two experiments were conducted for this section based on whether the image contents were the same.

The first experiment was set up as follows. Firstly, 11 “cameraman” images with the same image contents but gradually increasing blur were selected, as shown in Figure 16. Then these images were evaluated by the sharpness evaluation model after introducing the three different distance factors, respectively. Finally, the obtained evaluation values were plotted as a scatter plot and least-squares fitted with polynomial functions, as shown in Figure 17.

Figure 16. “Cameraman” with increasing blur.

Figure 17. The relationship between the evaluation value and the degree of image blurring when the image contents are the same.

In Figure 17, the blue, green, and red lines are the fitted lines of the scatter points of the image sharpness evaluation values after the introduction of the three distance factors of type 1, type 2, and type 3, respectively. In Table 1, CC is the Pearson linear correlation coefficient and ROCC is the Spearman rank-order correlation coefficient. A higher CC value indicates that the evaluation method is more effective; a larger ROCC value indicates that the evaluation method is more monotonic. From the data in Table 1, it is clear that the evaluation method after introducing the type 3 distance factor performed better in terms of both accuracy and predicted monotonicity. Therefore, when the contents of measured images are the same, the type 3 distance factor performs better.

Table 1. Performance of three distance factors.

Experiment    Distance Factor    CC        ROCC
I             Type 1             0.8869    0.9545
I             Type 2             0.9369    0.9727
I             Type 3             0.9568    0.9909
II            Type 1             0.9509    0.9818
II            Type 2             0.9514    0.9909
II            Type 3             0.9502    0.9909

The highest performances are shown in boldface.

The second experiment was different from the first experiment. Its selected images were all from the Gaussian blurred images of the LIVE database. These images had different blurring degrees and were not correlated with each other, as shown in Figure 18.
Figure 18. Content-independent images with increasing blur.

Using the same processing method as the first experiment, Figure 19 was obtained. It should be noted that the abscissa in Figure 19 was not the order of the measured images but the subjective evaluation DMOS values of the corresponding measured images. The reason is that the measured images in the first experiment were generated by artificially applying Gaussian blur evenly. Therefore, it is reasonable to perform a linear fit to the scatter points of the evaluation values. However, the measured images selected in the second experiment were Gaussian blurred images in the LIVE database; their blurring degrees did not increase uniformly. At this time, it was obviously wrong to linearly fit the scatter points. Therefore, it was better to select the subjective evaluation DMOS values in the LIVE database as the abscissa and then to perform a linear fit to the scatter points.

Figure 19. The relationship between the evaluation value and the degree of image blurring when the image contents are not the same.

Observing the data of the second experiment in Table 1, it is easy to find that the CC values of the evaluation methods corresponding to the three distance factors were almost the same, while the ROCC value of the evaluation method corresponding to the type 1 distance factor was smaller compared to those of the evaluation methods corresponding to the type 2 and type 3 distance factors. This indicates that, when the contents of the measured images are not correlated, there is no significant difference in the accuracy of the evaluation methods with different distance factors. However, the evaluation methods corresponding to the type 2 and type 3 distance factors performed better in predicting monotonicity.

In conclusion, the evaluation method after introducing the type 3 distance factor has better accuracy and monotonicity prediction when evaluating images with the same contents or images with different contents. Therefore, the image sharpness evaluation index after the introduction of the type 3 distance factor will be chosen for subsequent experiments in this paper.

3.2. Content-Independent Experiment

The experiment was designed to show that the evaluation method proposed in this paper was superior compared to the evaluation method proposed in reference [14] and the traditional Tenegrad function evaluation method.

As in the second experiment in the previous subsection, we still selected images from the LIVE database in this experiment; the difference was that the selected images were 29 undistorted reference images with different contents. First, the order of these 29 images was disrupted and randomly ordered. After that, a Gaussian blur was added to these images sequentially according to the order of measured images, with a Gaussian blur standard deviation from 0.1 to 2.9 in steps of 0.1, resulting in 29 blurred measured images. Finally, the measured images were evaluated by the method in this paper, the method in reference [14], and the Tenegrad function method, respectively; the evaluation results are shown in Figure 20.

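Before turning to the results, the comparison statistics used in this and the previous subsection (the Pearson CC reported in Tables 1 and 2, together with RMSE and MAE) can be sketched as follows; the inputs are a method's predicted scores and the corresponding reference values, and all names are our own illustration, not code from the paper:

```cpp
// Sketch of the comparison statistics: Pearson linear correlation coefficient (CC),
// root mean square error (RMSE) and mean absolute error (MAE) between predicted
// scores and reference scores.
#include <cmath>
#include <cstdio>
#include <vector>

struct FitStats { double cc, rmse, mae; };

FitStats compareScores(const std::vector<double>& predicted,
                       const std::vector<double>& reference) {
    const int n = static_cast<int>(predicted.size());
    double meanP = 0.0, meanR = 0.0;
    for (int i = 0; i < n; ++i) { meanP += predicted[i]; meanR += reference[i]; }
    meanP /= n; meanR /= n;

    double cov = 0.0, varP = 0.0, varR = 0.0, se = 0.0, ae = 0.0;
    for (int i = 0; i < n; ++i) {
        double dp = predicted[i] - meanP, dr = reference[i] - meanR;
        cov += dp * dr; varP += dp * dp; varR += dr * dr;
        double err = predicted[i] - reference[i];
        se += err * err; ae += std::fabs(err);
    }
    FitStats s;
    s.cc   = cov / std::sqrt(varP * varR);   // Pearson CC
    s.rmse = std::sqrt(se / n);              // root mean square error
    s.mae  = ae / n;                         // mean absolute error
    return s;
}

int main() {
    // Toy example with five image scores.
    std::vector<double> pred = {20.1, 35.4, 48.0, 60.2, 72.9};
    std::vector<double> dmos = {22.0, 33.0, 50.0, 59.0, 75.0};
    FitStats s = compareScores(pred, dmos);
    std::printf("CC=%.4f  RMSE=%.3f  MAE=%.3f\n", s.cc, s.rmse, s.mae);
    return 0;
}
```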
Figure 20. Comparative experiment of image content irrelevance (red represents the method of this
Figure 20. Comparative experiment of image content irrelevance (red represents the method of
paper, blue represents the method of reference [14], green represents the Tenegrad method).
this paper, blue represents the method of reference [14], green represents the Tenegrad method).
As can be seen in Figure 20, with the increase in the Gaussian blur standard deviation of the images, both the evaluation methods in this paper and reference [14] show an increasing trend while the Tenegrad function evaluation method shows a decreasing trend, which are caused by their respective calculation principles. In order to compare the accuracy of the three evaluation methods more comprehensively, this paper adds a comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the three evaluation methods in addition to the calculation of the Pearson linear correlation coefficient (CC); the final results are shown in Table 2.

Table 2. Performance of the three evaluation methods.

Method CC RMSE MAE
Proposed 0.9684 0.6534 0.4746
Reference [14] 0.9627 0.7158 0.5275
Tenegrad −0.7817 1.944 2.9083

Larger CC values and smaller RMSE and MAE values in the above table indicate the better validity of the method. Therefore, it is clear from the above table that the evaluation method proposed in this paper had significant superiority compared to the other two methods.

3.3. Subjective and Objective Consistency Experiment
The above experiments fully proved that the sharpness evaluation method proposed in this paper can well determine the blurring degrees of images with different contents. However, the consistency between the evaluation method in this paper and the subjective evaluation has not been verified. Therefore, we verified the subjective and objective consistency of the proposed method by evaluating the sharpness of all 145 Gaussian blurred images in the LIVE database. In addition, it is important to note that all 145 images above have subjective evaluation DMOS values.

The sharpness evaluation values and DMOS values of these 145 measured images were fitted using Equation (7) [18], where Value_i is the sharpness evaluation value, DMOS_i is the corresponding subjective evaluation value, and β1~β4 are the model parameters that need to be fitted.

DMOS_i = β2 + (β1 − β2) / (1 + exp(−(Value_i − |β3|) / |β4|))    (7)
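As an illustration of this fitting step, the following is a minimal sketch using scipy.optimize.curve_fit, assuming the 145 sharpness indexes and DMOS scores are available as plain arrays; the file names and the initial parameter guess are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def logistic(value, b1, b2, b3, b4):
    # Four-parameter logistic function of Equation (7).
    return b2 + (b1 - b2) / (1.0 + np.exp(-(value - abs(b3)) / abs(b4)))

values = np.loadtxt("sharpness_values.txt")  # evaluation values of the 145 images
dmos = np.loadtxt("dmos.txt")                # corresponding subjective DMOS scores

p0 = [dmos.max(), dmos.min(), np.median(values), values.std()]  # rough initial guess
betas, _ = curve_fit(logistic, values, dmos, p0=p0, maxfev=20000)
dmos_pred = logistic(values, *betas)  # predicted DMOS used in Figures 21 and 22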
Figure 21 shows the fitting curves between the evaluation values of the three evaluation methods and DMOS values. The model parameters of the fitting curve of the proposed method are β1 = 73.14, β2 = 17.12, β3 = 10,150, and β4 = 1406. The model parameters of the fitting curve of the reference [14] method are β1 = 64.56, β2 = −1.817, β3 = 1.857, and β4 = −0.3279. The model parameters of the fitting curve of the Tenegrad method are β1 = 29.22, β2 = 227.7, β3 = −1716, and β4 = −1811.

Figure 21. Fitting curves of three methods: (a) the fitting curve of proposed method; (b) the fitting curve of reference [14] method; (c) the fitting curve of Tenegrad method.
After obtaining the fitted model parameters of the above three methods, the subjective evaluation DMOS values of the Gaussian blurred images in the LIVE database could be predicted by Equation (7). The relationship between the predicted subjective evaluation value DMOS_pred and the subjective evaluation value DMOS is shown in Figure 22.

Figure 22. The relationship between predicted subjective evaluation values and subjective evaluation values: (a) proposed method; (b) reference [14] method; (c) Tenegrad method.

As can be seen in Figure 22, the method of this paper was in better agreement with the subjective evaluation. To illustrate this point more fully, five additional technical indicators, which are widely used to measure the performance of evaluation methods, were calculated in this paper. These five indicators are the root mean square error (RMSE), Pearson linear correlation coefficient (CC), mean absolute error (MAE), Spearman rank-order correlation coefficient (ROCC), and outlier ratio (OR). The higher the ROCC value, the more obvious is the trend that the predicted evaluation value of the model increases with the increase in the subjective evaluation value. Thus, the ROCC value is used to show the predictive monotonicity of the model; the larger the value is, the better the model is. A smaller OR value means that the model has better prediction ability for images with different contents, which is used to show the predictive consistency of the model; the smaller the value is, the better the model is. The performance evaluation of the three methods is shown in Table 3. It is obvious from the table that the evaluation method proposed in this paper is better than the other two methods in the above technical indicators. This proves that the evaluation method proposed in this paper can well satisfy the requirement that image sharpness evaluation should be independent of image content and has high consistency with subjective evaluation.

Table 3. Performance evaluation of the three methods.

Method RMSE CC MAE ROCC OR


Proposed 5.78 0.9346 4.9383 0.9373 0
Reference [14] 7.393 0.8906 5.9560 0.8754 0.0276
Tenegrad 8.297 0.8599 6.7808 0.8301 0.0456
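The five indicators in Table 3 can be computed as in the sketch below, assuming dmos_pred comes from Equation (7); the outlier ratio is implemented with the common convention of counting errors larger than twice the standard deviation of the subjective scores, which is an assumption since the exact threshold is not restated in this section.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def fidelity_metrics(dmos_pred, dmos, dmos_std=None):
    err = dmos_pred - dmos
    rmse = float(np.sqrt(np.mean(err ** 2)))        # root mean square error
    mae = float(np.mean(np.abs(err)))               # mean absolute error
    cc = float(pearsonr(dmos_pred, dmos)[0])        # Pearson linear correlation
    rocc = float(spearmanr(dmos_pred, dmos)[0])     # Spearman rank-order correlation
    if dmos_std is None:                            # fall back to a global std
        dmos_std = np.full_like(dmos, np.std(dmos))
    outlier_ratio = float(np.mean(np.abs(err) > 2.0 * dmos_std))
    return {"RMSE": rmse, "CC": cc, "MAE": mae, "ROCC": rocc, "OR": outlier_ratio}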

3.4. Running Time Evaluation


In the previous subsections, we experimentally verified that our proposed method was
superior to the other two methods in terms of content irrelevance and subjective–objective
consistency. In the following, we evaluated 29 clear reference images in the LIVE database
using each of the above three evaluation methods and then obtained the time required to
process one image for each method by averaging, as shown in Table 4. It should be noted
that the experiment in this section was conducted on a PC with a 2.9 GHz CPU, 16 GB RAM,
and an NVIDIA GeForce RTX 2060.
Table 4. Running time comparison.

Method Proposed Reference [14] Tenegrad


Time (s) 0.00697 0.0207 0.0247

From the experimental data in Table 4, it can be seen that the proposed method had
more obvious advantages in terms of running time compared to the other two methods.
This is actually due to the fact that we adopted the less computationally intensive eight-
neighborhood grayscale difference method to determine the edge directions of the edge
points, while reference [14] requires at least two Sobel convolution operations on the measured image, making its running time significantly longer than that of our method. Additionally,
we noticed that both the methods in this paper and in reference [14] were faster than the
Tenegrad method. This fully illustrates that the edge information-based image sharpness
evaluation methods have the advantage of being faster than the Tenegrad method based
on the contrast principle. Therefore, our method is better suited to applications with strict real-time requirements.
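A simple way to reproduce this kind of timing comparison is sketched below; proposed_sharpness stands for any callable implementing a sharpness index, and the function name, image paths, and the extra repeats used for timing stability are assumptions, since the paper only averages a single pass over the 29 images.

import glob
import time

import cv2

def mean_runtime(evaluate, image_paths, repeats=5):
    # Average per-image runtime of a sharpness metric `evaluate(img) -> float`.
    images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in image_paths]
    start = time.perf_counter()
    for _ in range(repeats):
        for img in images:
            evaluate(img)
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * len(images))

# Example (hypothetical):
# print(mean_runtime(proposed_sharpness, sorted(glob.glob("live_refimgs/*.bmp"))))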

3.5. Real Shooting Experiment


Most of the measured images in the above experiments are from the LIVE database.
In order to verify that the sharpness evaluation method proposed in this paper is also
effective for real shooting images, we conducted a real shooting experiment with a HUAWEI P40 Pro, taking a total of six images with different contents, as shown in Figure 23. The
blurring degrees of these six images are increasing sequentially; the sharpness indexes
after applying the proposed evaluation method to these images are shown in Figure 24.
The experimental results show that the evaluation method proposed in this paper also achieves image content irrelevance and subjective–objective consistency on real shooting images.

Figure 23. Real shooting images: (a) toy; (b) vehicle; (c) building; (d) cup; (e) indoor; (f) door.

Figure 24. Evaluation values of real shooting images.


4. Conclusions

This paper proposes an improved method based on edge information for evaluating image sharpness. Firstly, the Canny edge detection algorithm based on the activation mechanism was used to obtain the edge positions. Then, the edge direction of each edge point was determined by the eight-neighborhood grayscale difference method; the histogram of edge width was established afterwards. Finally, a distance factor was introduced into the weighted average edge width solving model to obtain the sharpness evaluation index. By comparing the image evaluation performance when three distance factors were applied, a comprehensive analysis showed that the type 3 distance factor possessed better accuracy and predictive monotonicity. In addition, to verify the superiority of the evaluation method proposed in this paper, three evaluation methods were compared on the LIVE database. The experimental results showed that, compared with the traditional Tenegrad function evaluation method, the method proposed in this paper was greatly improved in performance and can meet the requirements of image content irrelevance and subjective–objective consistency.

Author Contributions: Conceptualization, Z.L. and Z.G.; methodology, Z.L.; software, Z.L.; vali-
dation, J.W.; formal analysis, Y.C.; investigation, Z.L.; resources, Z.L.; data curation, Z.L.; writing—
original draft preparation, Z.L.; writing—review and editing, Z.G.; supervision, H.H.; project admin-
istration, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version
of the manuscript.
Funding: This research was funded by National Key Laboratory Fund Project (Grant No. 6142003190302)
and University Scientific Research Plan Project (Grant No. ZK22-19).
Data Availability Statement: The data presented in this study are available on request from the author.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Ke, F.; Liu, H.; Zhao, D.; Sun, G.; Xu, W.; Feng, W. A high precision image registration method for measurement based on the
stereo camera system. Optik 2020, 204, 164186. [CrossRef]
2. Varga, D. No-Reference Image Quality Assessment with Convolutional Neural Networks and Decision Fusion. Appl. Sci. 2022,
12, 101. [CrossRef]
3. Liu, T.J.; Liu, H.H.; Pei, S.C.; Liu, K.H. A high-definition diversity-scene database for image quality assessment. IEEE Access 2018,
6, 45427–45438. [CrossRef]
4. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE
Trans. Image Process 2004, 13, 600–612. [CrossRef] [PubMed]
5. Liu, Y.; Zhai, G.; Gu, K.; Liu, X.; Zhao, D.; Gao, W. Reduced-reference image quality assessment in free-energy principle and
sparse representation. IEEE Trans. Multimed. 2018, 20, 379–391. [CrossRef]
6. Liu, H.T.; Heynderickx, I. Issues in the design of a no-reference metric for perceived blur. In Proceedings of the SPIE Conference
on Image Quality and System Performance, San Francisco, CA, USA, 24 January 2011; p. 78670C.
7. Marichal, X.; Ma, W.Y.; Zhang, H.J. Blur determination in the compressed domain using DCT information. In Proceedings of the
International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; pp. 386–390.
8. Caviedes, J.; Gurbuz, S. No-reference sharpness metric based on local edge kurtosis. In Proceedings of the International
Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; pp. 53–56.
9. Caviedes, J.; Oberti, F. A new sharpness metric based on local kurtosis, edge and energy information. Signal Process. Image
Commun. 2004, 19, 147–161. [CrossRef]
10. Hassen, R.; Wang, Z.; Salama, M. No-reference image sharpness assessment based on local phase coherence measurement. In
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010;
pp. 2434–2437.
11. Zhan, Y.B.; Zhang, R. No-reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE
Trans. Multimed. 2018, 20, 1796–1808. [CrossRef]
12. Pina, M.; Frederic, D.; Stefan, W.; Touradj, E. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Process. Image
Commun. 2004, 19, 163–172.
13. Li, Q.Y.; Li, L.D.; Lu, Z.L.; Zhou, Y.; Zhu, H.C. No-reference Sharpness Index for Scanning Electron Microscopy Images Based on
Dark Channel Prior. KSII Trans. Int. Inf. Systems. 2019, 13, 2529–2543.
14. Wang, Y.R. Research on Auto-Focus Methods Based on Digital Imaging Processing. Ph.D. Thesis, Zhejiang University, Hangzhou,
China, 2018.
15. Ferzli, R.; Karam, L.J. A No-Reference Objective Image Sharpness Metric Based on the Notion of Just Noticeable Blur (JNB). IEEE
Trans. Image. Process. 2009, 18, 717–728. [CrossRef] [PubMed]
16. Ong, E.; Lin, W.; Lu, Z.; Yang, X.; Yao, S.; Pan, F.; Jiang, L.; Moschetti, F. A no-reference quality metric for measuring image blur.
In Proceedings of the International Symposium on Signal Processing and Its Applications(IASTED), Paris, France, 4 July 2003;
pp. 469–472.
17. LIVE Image Quality Assessment Database Release 2. Available online: http://live.ece.utexas.edu/research/quality/ (accessed
on 1 February 2022).
18. Kalpana, S.; Rajiv, S.; Alan, C.B.; Lawrence, K.C. Study of Subjective and Objective Quality Assessment of Video. IEEE Trans.
Image. Process. 2010, 19, 1427–1440.
