
A NO-REFERENCE EVALUATION METRIC FOR LOW-LIGHT IMAGE ENHANCEMENT

Zicheng Zhang¹, Wei Sun¹, Xiongkuo Min¹, Wenhan Zhu², Tao Wang¹, Wei Lu¹, and Guangtao Zhai¹

¹Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China
²MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
zzc1998@sjtu.edu.cn

2021 IEEE International Conference on Multimedia and Expo (ICME) | 978-1-6654-3864-3/21/$31.00 ©2021 IEEE | DOI: 10.1109/ICME51207.2021.9428312

ABSTRACT

Low-light images, which are usually taken in dark or back-lighting conditions, are hard to perceive due to their low visibility and low contrast. To improve viewers' Quality of Experience (QoE) and support the application of vision-based systems, various low-light image enhancement algorithms (LIEAs) have been proposed to lighten low-light images. However, some LIEAs may amplify distortions hidden in the dark, such as noise, and may even introduce new distortions such as structural damage and color shift, which severely affect the quality of light-enhanced images and need to be evaluated quantitatively. However, few measures have been proposed in the literature to assess the quality of light-enhanced images. Therefore, in this paper, we develop a no-reference low-light image enhancement evaluation (NLIEE) metric to predict the quality of light-enhanced images. The image quality is mainly assessed from four key aspects: light enhancement, color comparison, noise measurement, and structure evaluation. The experiment results show that NLIEE achieves the best performance among the general no-reference image quality assessment (NR IQA) models and quality descriptors for light enhancement.

Index Terms— Low-light image enhancement, no-reference image quality assessment (NR IQA), structure similarity

1. INTRODUCTION

Low-light images are captured in scenes with an insufficient amount of light, which inevitably makes the image content difficult to perceive. Some vision-based intelligent systems, such as automatic driving, remote sensing, and homeland security, are severely affected by the low visibility of low-light images. Therefore, many low-light image enhancement algorithms (LIEAs) have been proposed to lighten low-light images. However, driven by different strategies such as denoising and contrast enhancement, different LIEAs may produce light-enhanced images of different quality, which needs to be evaluated quantitatively.

In the current literature, the evaluation of LIEAs falls behind. As far as we know, there is no image quality assessment (IQA) algorithm specifically designed for light-enhanced images, and only a few quality descriptors are used in [1][2][3][4] for evaluating LIEAs. However, these descriptors have a low correlation with the quality of light-enhanced images, as demonstrated in Section 3. Generally speaking, existing objective IQA methods are divided into three categories: full-reference (FR), reduced-reference (RR), and no-reference (NR). Although FR and RR IQA [5] usually perform well, the reference image is not available in most cases, especially for low-light images, which is why we focus on NR IQA in this paper. In the past two decades, a series of NR IQA methods [6][7] have been proposed. For example, BRISQUE [8] assesses image quality based on scene statistics of locally normalized luminance coefficients. NFERM [9] uses free-energy-based brain theory as well as classical human visual system (HVS)-inspired features to predict image quality. NIQE [10] employs measurable deviations from statistical regularities observed in natural images for image quality evaluation. These methods are capable of extracting effective image quality features and can accurately predict image quality on general IQA databases, which are concerned with a limited set of distortion types, such as JPEG compression, Gaussian noise, and Gaussian blur. However, light-enhanced images usually contain more than one type of distortion and are thus much more complicated than the synthetically distorted images in general IQA databases. Therefore, traditional NR IQA methods have a low correlation with the quality of light-enhanced images.

To deal with the challenge of low-light image enhancement evaluation, we propose a novel no-reference low-light image enhancement evaluation (NLIEE) metric. First, we extract the shape and variance parameters of the illumination distribution of light-enhanced images to reflect the effect of light enhancement. Second, we compute the hue and saturation relative entropy between a light-enhanced image and its corresponding low-light image as color features. Third, we extract the noise features by estimating the Gaussian noise and salt-and-pepper noise levels in noise-sensitive areas. Fourth, we calculate the structure features using structure similarities at different scales. Finally, we obtain a total of 18 features as the representation of the quality of light-enhanced images. The features are regressed into a single quality score using the support vector regression (SVR) model. To analyze the performance of the proposed method, we compare NLIEE with other state-of-the-art NR IQA methods as well as the quality descriptors designed for evaluating LIEAs on the low-light image enhancement quality (LIEQ) database [11]. Experiment results show that the proposed method performs the best on the LIEQ database among all compared methods.

This work was supported in part by the National Science Foundation of China (No. 61831015, 61771305, 61927809 and U1908210), and in part by the National Key R&D Program of China (No. 2019YFB1405900). The code for this paper is available at: https://github.com/zzc-1998/NLIEE

Fig. 1. Example of a natural image and its corresponding low-light image from the LIEQ database [11]. (a) and (b) are the natural image and its corresponding low-light image, (c) and (d) are the luminance probability distribution and the normalized luminance probability distribution of (a) and (b) respectively.

2. PROPOSED METHOD

2.1. Pre-Processing

To better quantify the distortions of light-enhanced images, we adopt the local normalization process as pre-processing, which has been proved effective in many IQA problems. Given an image I, the illumination map can be computed using the maximum of the RGB channels:

L(i, j) = max_{c∈{r,g,b}} I_c(i, j),  (1)

where L represents the illumination map, and i and j are the pixel indexes of the image. The normalized illumination map can be derived as

L̃(i, j) = (L(i, j) − µ(i, j)) / (σ(i, j) + 1),  (2)

where w is a local Gaussian weighting window, the local mean µ(i, j) = Σ_{k,l} w_{k,l} L(i+k, j+l), and the local variance σ(i, j) = sqrt( Σ_{k,l} w_{k,l} [L(i+k, j+l) − µ(i, j)]² ). Following these processes, the local mean, variance, and normalized illumination maps of the low-light image and the light-enhanced image can be computed as µ_l, µ_e, σ_l, σ_e, L̃_l, L̃_e.

2.2. Light Enhancement

The main purpose of a LIEA is to light up low-light images, thus luminance is the most important aspect for LIEA evaluation. An effective LIEA should recover the hidden image content in low-light areas to an appropriate extent [12], thus we adopt the normalized luminance distribution for light evaluation. As discussed in [8], the normalized luminance distribution of natural images tends to be Gaussian-like. However, suffering from an insufficient amount of light and low contrast, the normalized distribution of low-light images decays faster and is more concentrated around the center than that of natural images, as illustrated in Fig. 1. Hence, to measure the luminance enhancement between low-light and light-enhanced images, we utilize the zero-mean Generalized Gaussian Distribution (GGD) [13] parameters estimated from the luminance distribution of light-enhanced images as light enhancement features, which is given by

GGD(x; α, β²) = (α / (2βΓ(1/α))) · exp(−(|x|/β)^α),  (3)

where β = σ · sqrt(Γ(1/α)/Γ(3/α)), Γ(α) = ∫₀^∞ t^{α−1} e^{−t} dt, α > 0, is the gamma function, and the two estimated parameters (α, β²) indicate the shape and variance of the distribution.

2.3. Color Comparison

When recovering the original information of the image, some LIEAs may violate the correctness of color transfer and perception. Since the RGB color space does not precisely reflect the human eye's perception of color, the image I is first transformed from the RGB color space to the HSV color space, which represents hue, saturation, and value. Through the transformation, the hue and saturation maps of the low-light and light-enhanced images can be obtained as H_l, H_e, S_l, S_e.

Affected by various types of distortions, low-light images may have a greater disorder in color spaces than natural images, as revealed in [14]. Further, in many previous IQA studies [15][16], entropy is adopted as an effective indicator for measuring disorder. Hence, to describe the difference of color disorder between low-light and light-enhanced images, we propose to utilize the relative entropy (Kullback-Leibler (KL) divergence), which has been demonstrated as a good indicator for measuring disorder difference.
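As a concrete sketch of the pre-processing in Sec. 2.1 (Eqs. (1)-(2)), the snippet below computes the illumination map and its locally normalized version. The Gaussian window width (`win_sigma`) is an assumption, since the paper does not specify the window parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_illumination(img_rgb, win_sigma=7 / 6):
    """Eqs.(1)-(2): illumination map and its local normalization.

    img_rgb: H x W x 3 array; win_sigma is an assumed Gaussian window width.
    """
    # Eq.(1): per-pixel maximum over the R, G, B channels
    L = img_rgb.astype(np.float64).max(axis=2)
    # Local mean and local deviation under a Gaussian weighting window
    mu = gaussian_filter(L, win_sigma)
    var = gaussian_filter(L * L, win_sigma) - mu * mu
    sigma = np.sqrt(np.clip(var, 0.0, None))
    # Eq.(2): local normalization with the +1 stabilizer
    L_tilde = (L - mu) / (sigma + 1.0)
    return L, mu, sigma, L_tilde
```

Running this on both the low-light and the light-enhanced image yields the six maps µ_l, µ_e, σ_l, σ_e, L̃_l, L̃_e used throughout the method.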

Authorized licensed use limited to: IEEE Xplore. Downloaded on June 30,2021 at 07:17:25 UTC from IEEE Xplore. Restrictions apply.
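For the light enhancement features of Sec. 2.2, the GGD parameters (α, β²) of Eq. (3) are typically estimated by moment matching in the spirit of [13]; the sketch below uses the classic variance-to-mean-absolute-deviation ratio, and the α search grid is an assumption.

```python
import numpy as np
from scipy.special import gamma

def fit_ggd(x, alphas=np.arange(0.2, 10.0, 0.001)):
    """Moment-matching estimate of zero-mean GGD parameters (Eq.(3)).

    Returns (alpha, beta_sq): the shape and variance-like scale of the
    samples x, found by matching the ratio E[x^2] / E[|x|]^2 against its
    closed form over a grid of candidate shapes.
    """
    x = np.ravel(x).astype(np.float64)
    sigma_sq = np.mean(x ** 2)               # second moment (zero-mean assumption)
    e_abs = np.mean(np.abs(x))               # mean absolute deviation
    rho = sigma_sq / (e_abs ** 2 + 1e-12)    # empirical generalized ratio
    # Theoretical ratio r(a) = Gamma(1/a) * Gamma(3/a) / Gamma(2/a)^2
    r = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]
    # beta from the paper's relation: beta = sigma * sqrt(Gamma(1/a)/Gamma(3/a))
    beta = np.sqrt(sigma_sq) * np.sqrt(gamma(1.0 / alpha) / gamma(3.0 / alpha))
    return alpha, beta ** 2
```

For standard normal samples the fit recovers α ≈ 2 and β² ≈ 2, the Gaussian special case of the GGD.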
We denote the relative entropy (KL) of the hue and saturation maps as

KL(p(x) ‖ q(x)) = ∫ p(x) log(p(x)/q(x)) dx,
C_χ = KL(χ_e ‖ χ_l),  (4)

where KL(·) represents the relative entropy function, p(x) and q(x) are the distribution likelihoods, the subscript χ ∈ {H, S} indicates the distributions for the hue and saturation maps, and χ_e and χ_l represent the distributions for the light-enhanced and low-light images respectively.

2.4. Noise Measurement

Noise can significantly affect image quality, especially in the case of an insufficient amount of light. As shown in Fig. 2, the noise in light-enhanced images can generally be divided into Gaussian-like noise and salt-and-pepper-like noise. Therefore, we measure the noise level by computing two specific noise estimation maps through the Gaussian filter and the median filter, which are sensitive to Gaussian noise [17] and salt-and-pepper noise [18] respectively. The noise estimation maps are then derived as the difference of the images before and after filtering:

M_γ = F_γ(L_e) − L_e,  (5)

where M_γ indicates the noise estimation map, γ ∈ {gaussian, median}, and F_γ indicates the filtering function: Gaussian filtering with a kernel size of 7×7 or median filtering with a kernel size of 3×3. In most cases, the dark and flat areas are more susceptible to noise, so we estimate the noise level by pooling the noise estimation maps over the flat and dark areas:

N_γ = (1/T_Ω) Σ_{(i,j)∈Ω} M_γ(i, j),
Ω = {(i, j) | µ_l(i, j) < E(µ_l), σ_l(i, j) < E(σ_l)},  (6)

where N_γ represents the noise measurement for Gaussian noise and salt-and-pepper noise, E(·) is the average operator, T_Ω is the number of pixels in the flat and dark area set Ω, and the set Ω includes all pixels of the low-light image whose local mean and variance are lower than the average local mean and variance.

Fig. 2. An example for noise measurement. (a) is the noisy image, (b) shows salt-and-pepper noise in the blue rectangle while (c) shows Gaussian noise in the red rectangle.

2.5. Structure Evaluation

The structure information can effectively reflect the degradations of light-enhanced images. In this section, the image structure is mainly evaluated through structural restoration and over-enhancement. Fig. 3 illustrates two typical examples of structure evaluation. Structural restoration generally focuses on the enhancement in dark and sharp areas that may contain rich structure information, while over-enhancement usually occurs in low contrast and flat areas, because some meaningless details are taken as image structure and then enhanced. In FR IQA, it is common to utilize the structure similarity between the distorted and reference images to evaluate structural restoration. However, perfect reference images are not available for light-enhanced image evaluation. Since we only care about the changes in structure information that influence the image quality, we simply take low-light images as approximate references. Specifically, the overall structural restoration can be described with four parameters: variance similarity, normalized variance similarity, normalized illumination map similarity, and edge similarity, which can be given by

S_θ = (2θ_l · θ_e + c) / (θ_l² + θ_e² + c),  (7)

where θ ∈ {σ, η, L̃, ρ}, σ denotes the variance map, η denotes the normalized variance map, L̃ denotes the normalized illumination map, ρ denotes the log edge map, and c is a small constant used to avoid instability. For light-enhanced images, more attention is paid to the structure enhancement effect in the dark and sharp areas that are more likely to contain hidden structure information, thus we calculate the similarities in the areas with lower luminance and greater contrast. Specifically, the luminance and contrast can be characterized by the local mean and the local variance. The structure enhancement can then be described by pooling the structure similarity maps over the areas mentioned above, which can be derived as

E_θ = (1/T_φ) Σ_{(i,j)∈φ} S_θ(i, j),
φ = {(i, j) | µ_l(i, j) < E(µ_l), σ_l(i, j) > E(σ_l)},  (8)
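A sketch of the noise features of Sec. 2.4 (Eqs. (5)-(6)): two filter residuals are pooled over the dark, flat region of the low-light image. The Gaussian-kernel σ chosen to approximate the 7×7 window is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def noise_features(L_e, mu_l, sigma_l):
    """Eqs.(5)-(6): pooled noise estimates in dark, flat areas.

    L_e: illumination map of the light-enhanced image;
    mu_l, sigma_l: local mean / variance maps of the low-light image.
    """
    # Eq.(5): residuals of Gaussian (~7x7) and median (3x3) filtering
    M_gauss = gaussian_filter(L_e, sigma=7 / 6, truncate=3.0) - L_e
    M_median = median_filter(L_e, size=3) - L_e
    # Eq.(6): dark, flat region = below-average local mean and variance
    omega = (mu_l < mu_l.mean()) & (sigma_l < sigma_l.mean())
    T = max(int(omega.sum()), 1)  # guard against an empty region set
    N_g = M_gauss[omega].sum() / T
    N_m = M_median[omega].sum() / T
    return N_g, N_m
```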

Authorized licensed use limited to: IEEE Xplore. Downloaded on June 30,2021 at 07:17:25 UTC from IEEE Xplore. Restrictions apply.
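The structure similarity of Eq. (7) and its pooling over dark, sharp areas in Eq. (8) can be sketched as follows; the stability constant `c = 1e-3` is an assumption, as the paper does not give its value.

```python
import numpy as np

def structure_features(theta_l, theta_e, mu_l, sigma_l, c=1e-3):
    """Eqs.(7)-(8): SSIM-style similarity pooled over dark, sharp areas.

    theta_l / theta_e: one of the four maps (variance, normalized variance,
    normalized illumination, log edge) for the low-light and light-enhanced
    image; mu_l, sigma_l: local mean / variance maps of the low-light image.
    """
    # Eq.(7): pointwise similarity of the two maps
    S = (2.0 * theta_l * theta_e + c) / (theta_l ** 2 + theta_e ** 2 + c)
    # Eq.(8): pool over dark (low local mean) and sharp (high variance) pixels
    phi = (mu_l < mu_l.mean()) & (sigma_l > sigma_l.mean())
    E = S[phi].sum() / max(int(phi.sum()), 1)
    return S, E
```

The over-enhancement features of Eq. (9) reuse the same similarity map S, only swapping the pooling mask for ψ (below-average σ_l and above-average variance enhancement σ_e − σ_l).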
Fig. 3. An illustration for structure evaluation. (a) and (b) show the structure evaluation area in low-light and light-enhanced images. (c) and (d) show the low contrast area with the over-enhancement effect in the blue rectangle, (e) and (f) show the dark and sharp area with structure enhancement in the red rectangle respectively.

Fig. 4. Framework of the proposed method.

where φ indicates the set including pixels whose local mean is lower than the average value of µ_l and whose local variance is larger than the average value of σ_l. Specifically, µ_l and σ_l represent the local mean map and local contrast map of the low-light image, and T_φ indicates the number of pixels in the set φ.

The main factor causing structure over-enhancement is the enhancement of meaningless details, which usually occurs in low contrast areas. Therefore, we simply utilize the average pooling result of the structure similarity maps over the low contrast areas as the structure over-enhancement features, which is given by

O_θ = (1/T_ψ) Σ_{(i,j)∈ψ} S_θ(i, j),
ψ = {(i, j) | σ_l(i, j) < E(σ_l), σ_e(i, j) − σ_l(i, j) > E(σ_e − σ_l)},  (9)

where ψ indicates the set including pixels whose local variance is lower than the average value of σ_l and whose variance enhancement is larger than the average value of the enhancement map, defined as σ_e − σ_l. T_ψ indicates the number of pixels in the set ψ.

2.6. Feature Regression

As illustrated in Fig. 4, NLIEE mainly consists of two modules: feature extraction, described above, and feature regression, described in this section. Following the extraction processes, a feature vector consisting of 18 features can be obtained, as illustrated in Table 1. In this paper, we utilize support vector regression (SVR) as the regression model. Labeled low-light and light-enhanced image pairs are used to train the regressor; the trained regressor can then predict the image quality through the same framework.

3. EXPERIMENTAL PERFORMANCE

3.1. Dataset

The proposed NLIEE method is validated on the low-light image enhancement quality (LIEQ) database [11]. The LIEQ database selects 100 typical scene images from the multi-exposure image database in [19], which is constructed especially to provide low-light images and their corresponding well-lighted counterparts. Further, 10 representative low-light image enhancement algorithms are employed to generate a total of 1,000 light-enhanced images. A subjective quality assessment study has been performed on the LIEQ database, and Mean Opinion Scores (MOSs) are provided for each light-enhanced image. To the best of our knowledge, the LIEQ database is the first database designed for low-light image enhancement quality assessment.

3.2. Evaluation Competitors

To demonstrate the effectiveness of the NLIEE metric, some IQA measures that may be effective for the quality of light-enhanced images are selected for evaluation. Specifically, the selected measures are divided into two categories:
• Descriptors designed for light enhancement, including lightness order error (LOE) [3], color constancy (CC) [1], discrete entropy (DE) [1], and contrast enhancement (CE) [2].
Table 1. Overview of the features for the proposed method

Feature group        | Feature ID | Feature description                      | Symbol                 | Computation
Light enhancement    | f1-f2      | GGD shape and scale                      | (α, β)                 | Eq.(3)
Color comparison     | f3-f4      | Hue and saturation entropy difference    | (C_H, C_S)             | Eq.(4)
Noise measurement    | f5-f6      | Noise measurement in dark and flat areas | (N_g, N_m)             | Eq.(6)
Structure evaluation | f7-f10     | Overall structure similarity             | (S_σ, S_η, S_L̃, S_ρ)  | Eq.(7)
                     | f11-f14    | Enhancement in dark and sharp areas      | (E_σ, E_η, E_L̃, E_ρ)  | Eq.(8)
                     | f15-f18    | Over-enhancement in low contrast areas   | (O_σ, O_η, O_L̃, O_ρ)  | Eq.(9)

Table 2. Performance comparison with other NR IQAs

Criteria | LOE     | CC      | DE      | CE      | BRISQUE | ILNIQE  | HIGRADE | NIQE    | NFERM   | Proposed
SRCC     | 0.1372  | 0.5044  | 0.4680  | 0.2932  | 0.6521  | 0.5621  | 0.6541  | 0.6136  | 0.6499  | 0.8503
PLCC     | 0.1662  | 0.5136  | 0.4913  | 0.2647  | 0.5832  | 0.6526  | 0.6581  | 0.6420  | 0.6551  | 0.8391
KRCC     | 0.0913  | 0.3486  | 0.3210  | 0.1218  | 0.4615  | 0.44598 | 0.4761  | 0.4327  | 0.4654  | 0.6596
RMSE     | 13.7514 | 12.1392 | 12.3229 | 14.0145 | 13.3091 | 11.6145 | 11.8109 | 10.6529 | 10.3819 | 7.5344

• General NR IQA methods, including BRISQUE [8], ILNIQE [20], HIGRADE [21], NIQE [10], NFERM [9], and NFSDM [22].

3.3. Evaluation Criteria and Setup

Four mainstream consistency evaluation criteria are utilized to measure the correlation between the predicted scores and MOSs: Spearman Rank Correlation Coefficient (SRCC), Kendall's Rank Correlation Coefficient (KRCC), Pearson Linear Correlation Coefficient (PLCC), and Root Mean Squared Error (RMSE).

As described in Section 2.6, the proposed method has a regression module that requires training. Therefore, the whole dataset is split into an 80 percent training set and a 20 percent testing set. To avoid the influence of different image content, all the light-enhanced images of the same scene can only be in either the training set or the testing set. This process is repeated 1,000 times, and the median results of SRCC, RMSE, PLCC, and KRCC are recorded as the final performance. To make the comparison fair, we adopt the same splitting sets and training principles for the other NR IQA models.

3.4. Comparison Discussion

The final performance is shown in Table 2, from which several observations can be made. First, the proposed method performs the best among all compared IQA models, which demonstrates the effectiveness of the NLIEE metric. Second, the descriptors designed especially for light enhancement perform mediocrely, because such descriptors consider only one aspect of image distortion while light-enhanced images contain more than one type of distortion. Third, some general NR IQA measures have a certain ability to predict the quality of light-enhanced images. This may be because such NR IQAs can capture features that are in line with the judging principles for light-enhanced images; however, they are not as effective as the proposed method.

Table 3. Performance of different feature groups

Criteria | G1     | G2     | G3      | G4      | G5
SRCC     | 0.8503 | 0.7504 | 0.5960  | 0.6383  | 0.4410
PLCC     | 0.8391 | 0.7718 | 0.6021  | 0.6465  | 0.5246
KRCC     | 0.6596 | 0.5595 | 0.4216  | 0.4577  | 0.3088
RMSE     | 7.5344 | 9.0927 | 11.1021 | 10.9029 | 12.3610

3.5. Ablation Experiment

In this section, we conduct an ablation experiment to further test the contribution of the different feature groups. According to Table 1 and Table 3, the feature groups are listed by the corresponding category, specifically:
• G1: With all features.
• G2: Without light enhancement features.
• G3: Without color comparison features.
• G4: Without noise measurement features.
• G5: Without structure evaluation features.
The results are listed in Table 3. It is observed that the performances of G2-G5 are all inferior to the complete feature set. Further, we find that G5 obtains the lowest performance, which indicates that the structure evaluation features contribute most to NLIEE. Besides, the color comparison and noise measurement features are of great significance for assessing the quality of light-enhanced images. Among G2-G5, G2 performs best, which means that the light enhancement features contribute relatively little to NLIEE compared with the other features. This may be explained by the other feature groups containing some information redundant with the light enhancement features.

4. CONCLUSION

This paper proposes a no-reference low-light image enhancement evaluation (NLIEE) metric to evaluate the effect of LIEAs. The proposed method extracts 18 features mainly from four aspects: light enhancement, color comparison, noise measurement, and structure evaluation. Then, the features are regressed to a single quality score through SVR. Experiment results show that the proposed method performs the best on the LIEQ database among other NR IQAs, which suggests that our method can be utilized to evaluate LIEAs quantitatively and to optimize viewer experience in practical vision-based systems. The ablation experiment shows that all four feature groups contribute to quality assessment, among which the structure features contribute the most.
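The split-and-median protocol of Sec. 3.3 can be sketched as follows. This is a simplification: it correlates a fixed set of predictions against MOSs on each scene-level test split, whereas the full protocol retrains the SVR on every split; the function and variable names are illustrative.

```python
import random
import statistics
from scipy import stats

def median_srcc(scores, mos, scene_ids, n_splits=1000, train_ratio=0.8, seed=0):
    """Median SRCC over repeated scene-level 80/20 splits (Sec. 3.3 sketch).

    scores: per-image predictions; mos: per-image subjective scores;
    scene_ids: scene label per image, so images of one scene never
    straddle the train/test boundary.
    """
    rng = random.Random(seed)
    scenes = sorted(set(scene_ids))
    srccs = []
    for _ in range(n_splits):
        # Hold out 20% of the *scenes* (not images) as the test set
        test_scenes = set(rng.sample(scenes, int(len(scenes) * (1 - train_ratio))))
        idx = [i for i, s in enumerate(scene_ids) if s in test_scenes]
        rho, _ = stats.spearmanr([scores[i] for i in idx], [mos[i] for i in idx])
        srccs.append(rho)
    return statistics.median(srccs)
```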

5. REFERENCES

[1] L. Shen, Z. Yue, F. Feng, Q. Chen, S. Liu, and J. Ma, "Msr-net: Low-light image enhancement using deep convolutional network," CoRR, vol. abs/1711.02488, 2017.
[2] X. Fu, Y. Liao, D. Zeng, Y. Huang, X. Zhang, and X. Ding, "A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation," IEEE TIP, vol. 24, no. 12, pp. 4965–4977, 2015.
[3] H. Yeganeh and Z. Wang, "Objective assessment of tone mapping algorithms," in IEEE ICIP, 2010, pp. 2477–2480.
[4] W. Sun, X. Min, G. Zhai, K. Gu, S. Ma, and X. Yang, "Dynamic backlight scaling considering ambient luminance for mobile videos on lcd displays," IEEE Transactions on Mobile Computing, 2020.
[5] W. Zhu, G. Zhai, X. Min, M. Hu, J. Liu, G. Guo, and X. Yang, "Multi-channel decomposition in tandem with free-energy principle for reduced-reference image quality assessment," IEEE TMM, vol. 21, no. 9, pp. 2334–2346, 2019.
[6] W. Zhu, W. Sun, X. Min, G. Zhai, and X. Yang, "Structured computational modeling of human visual system for no-reference image quality assessment," International Journal of Automation and Computing, vol. 18, no. 2, pp. 1–15, 2021.
[7] W. Sun, X. Min, G. Zhai, K. Gu, H. Duan, and S. Ma, "Mc360iqa: A multi-channel cnn for blind 360-degree image quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 1, pp. 64–77, 2020.
[8] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE TIP, vol. 21, no. 12, pp. 4695–4708, 2012.
[9] K. Gu, G. Zhai, X. Yang, and W. Zhang, "Using free energy principle for blind image quality assessment," IEEE TMM, vol. 17, no. 1, pp. 50–63, 2015.
[10] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a 'completely blind' image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2013.
[11] G. Zhai, W. Sun, X. Min, and J. Zhou, "Perceptual quality assessment of low-light image enhancement," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2021.
[12] L. Shi, W. Zhou, Z. Chen, and J. Zhang, "No-reference light field image quality assessment based on spatial-angular measurement," IEEE TCSVT, vol. 30, no. 11, pp. 4114–4128, 2020.
[13] K. Sharifi and A. Leon-Garcia, "Estimation of shape parameter for generalized gaussian distributions in subband decompositions of video," IEEE TCSVT, vol. 5, no. 1, pp. 52–56, 1995.
[14] M. Baisantry, D. S. Negi, O. P. Manocha, B. Pratap Singh, and S. Kishore, "Object-based image fusion for minimization of color distortion in high-resolution satellite imagery," in International Conference on Image Information Processing, 2011, pp. 1–6.
[15] X. Min, G. Zhai, K. Gu, Y. Zhu, J. Zhou, G. Guo, X. Yang, X. Guan, and W. Zhang, "Quality evaluation of image dehazing methods using synthetic hazy images," IEEE TMM, vol. 21, no. 9, pp. 2319–2333, 2019.
[16] Z. Chen, W. Zhou, and W. Li, "Blind stereoscopic video quality assessment: From depth perception to overall experience," IEEE TIP, vol. 27, no. 2, pp. 721–734, 2018.
[17] D. Chowdhury, S. K. Das, S. Nandy, A. Chakraborty, R. Goswami, and A. Chakraborty, "An atomic technique for removal of gaussian noise from a noisy gray scale image using lowpass-convoluted gaussian filter," in International Conference on Opto-Electronics and Applied Optics, 2019, pp. 1–6.
[18] C. Chang, J. Hsiao, and C. Hsieh, "An adaptive median filter for image denoising," in International Symposium on Intelligent Information Technology Application, 2008, vol. 2, pp. 346–350.
[19] J. Cai, S. Gu, and L. Zhang, "Learning a deep single image contrast enhancer from multi-exposure images," IEEE TIP, vol. 27, no. 4, pp. 2049–2062, 2018.
[20] L. Zhang, L. Zhang, and A. C. Bovik, "A feature-enriched completely blind image quality evaluator," IEEE TIP, vol. 24, no. 8, pp. 2579–2591, 2015.
[21] D. Kundu, D. Ghadiyaram, A. C. Bovik, and B. L. Evans, "No-reference image quality assessment for high dynamic range images," in Asilomar Conference on Signals, Systems and Computers, 2016, pp. 1847–1852.
[22] K. Gu, G. Zhai, X. Yang, W. Zhang, and L. Liang, "No-reference image quality assessment metric by combining free energy theory and structural degradation model," in IEEE ICME, 2013, pp. 1–6.

