
Journal of Ambient Intelligence and Humanized Computing

https://doi.org/10.1007/s12652-020-01782-w

ORIGINAL RESEARCH

An efficient codebook generation using firefly algorithm for optimum medical image compression
M. Laxmi Prasanna Rani^1 (corresponding author: prassugowtham@gmail.com; laxmirani.mvgr@gmail.com) · Gottapu Sasibhushana Rao^2 (sasigps@gmail.com) · B. Prabhakara Rao^3 (drbprjntu@gmail.com)

^1 Electronics and Communication Engineering, MVGR College of Engineering, Vizianagaram, Andhra Pradesh, India
^2 Department of Electronics and Communication Engineering, Andhra University College of Engineering, Visakhapatnam, India
^3 Department of Electronics and Communication Engineering, JNT University, Kakinada, Andhra Pradesh, India

Received: 14 November 2019 / Accepted: 11 February 2020


© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
In recent times, medical imaging has become an indispensable tool in clinical practice. Due to the large volume of medical images, compression is needed to reduce the redundancy in the images and to represent them in a more compact form for effective transmission. In this paper, the Linde–Buzo–Gray (LBG) algorithm was developed with vector quantization (VQ) for compressing the images, which results in decent image quality. To further increase the image quality, optimization techniques [particle swarm optimization (PSO) and the firefly algorithm (FA)] were used with the LBG method to optimize the codebook and generate a global codebook. In the proposed work, the LBG method was used to obtain the local codebooks, and the obtained local codebooks were optimized using PSO. The codebooks optimized by PSO were then optimized again using FA, which results in good image quality. In the experimental phase, the performance of the proposed work was compared with the individual optimization techniques PSO and FA. The experimental study showed that the proposed work achieves a 1.2–6 dB improvement in image compression over other existing approaches.

Keywords  Firefly algorithm · Linde–Buzo–Gray · Medical image compression · Particle swarm optimization · Vector quantization

1 Introduction

Compression of digital images is one of the fundamental steps in the analysis of digital images. Abundant memory space and high bandwidth are required to store and transmit these digital images. Image compression is the process of attaining a compressed form of an image with reasonable information quality and little loss. Image compression is used in medical science, TV broadcasting, internet browsing, navigation applications, satellite systems, etc. for storage and transmission. Therefore, reducing the memory space and transmission time of digital images is necessary across science and technology. Reduced bandwidth, however, leads to some problems, so the selection of a better image compression technique is the primary constraint in overcoming the problems of reduced bandwidth. Compression is the process of reducing the amount of data required to represent a digital image; it makes it possible to bring file sizes down to storable and transmittable dimensions. Compression is attained by reducing redundant and irrelevant data before storage and transmission.

Nowadays, the use of medical images such as computerized tomography (CT), magnetic resonance imaging (MRI) and X-ray has become crucial, because these are necessary for doctors to diagnose abnormal diseases efficiently. These images are generated in large volumes every day for diagnosis, and they need to be stored for future reference to patients and their findings. Hence, compression became necessary before storing the medical images and transmitting them through the internet for consultation with other doctors (Reddy et al. 2018).


A smaller memory size of medical images means a lower transmission time; hence, medical image compression became essential for effective storage and transmission. The techniques for the compression of images are classified as lossy and lossless (Gonzalez and Woods 2008; Jayaraman et al. 2012). Lossless techniques yield better-quality decompressed images and are used in biomedical applications, satellite communications, etc. The process of providing better compression with a tolerable loss is referred to as lossy compression and is used in internet applications, mobile phones, etc. Among the various kinds of compression methods, vector quantization (VQ) is one of the widespread lossy compression techniques. VQ with the LBG algorithm is used in this paper for the compression of brain MRI images (Suguna and Senthilkumaran 2011); the LBG algorithm, however, is highly sensitive to the initial codebook. Using VQ, local codebooks are generated to decrease the mean square error (MSE), but with a low peak signal to noise ratio (PSNR). The codebooks obtained from LBG-VQ are therefore optimized using PSO to get optimal codebooks, and the codebooks generated by the LBG-PSO algorithm are further enhanced by FA. Owing to the exploration characteristics of FA, efficient optimal solutions are obtained from the search space. Hence, the output image is reconstructed with the enhanced codebooks obtained by the proposed LBG-PSO-FA for the detection of disease. This optimal compression algorithm produces efficient codebooks, generating a visually better-quality output image with reduced computational time and excellent PSNR.

This paper is organized as follows: Sect. 1 gives the importance of medical image compression and an introduction to compression techniques. Sect. 2 surveys image compression using vector quantization with the LBG, LBG-PSO and LBG-PSO-FA techniques. The proposed methodology is explained in Sect. 3. The performance metrics are compared and the results discussed in Sect. 4. Finally, conclusions are drawn in Sect. 5.

2 Literature survey

Ammah and Owusu (2019) developed a DWT-VQ (discrete wavelet transform-vector quantization) technique for compressing images while retaining their quality in medical settings. The developed hybrid methodology extracted the medical images from the DICOM dataset; these images contained speckle and salt-and-pepper noise that was significantly reduced during the execution of the process. The graphs obtained from the results indicated the compression ratios per window size and per codebook size, while the peak signal to noise ratio (PSNR) and the root mean square error (RMSE) gave a substantial measure of the quality of the image formed. The developed hybrid technique showed extraordinary performance when compared with existing methods. However, improvement of the compression using a multiwavelet decomposition technique and improvement in the design of the filters were still needed.

Yang (2009) developed the Cuckoo search metaheuristic optimization algorithm, which optimized the Linde–Buzo–Gray (LBG) codebook, usually designed as a locally optimal codebook for image compression. The images LENA, BABOON, PEPPERS, BARB and GOLDHILL were obtained from the publicly available dataset. The Cuckoo search algorithm optimized the LBG codebook using a Lévy flight distribution function implemented with Mantegna's algorithm. The developed Cuckoo search algorithm improved the PSNR values by around 0.2 dB at a low bit rate and 0.3 dB at a higher bit rate. The experimental results produced graphs for different codebook sizes, and the PSNR values obtained were better than those of existing methods. Slow convergence is the major drawback of the developed method.

Rani et al. (2019) developed a back propagation neural network with the Levenberg–Marquardt training algorithm (BPNNLM) and singular value decomposition (SVD) for the compression of medical images. The images were collected from CT and MRI of the brain and X-ray of the chest. The original image can be reconstructed from the singular values of its matrix, so the image can be represented with only its required features during compression. The experimental results showed that the storage space for the compressed image was less than for the original image. For acceptable compression and reconstruction of the image, the selection of the singular values was critical. The results proved that the compression technique based on the singular values obtained better PSNR values, lower MSE and good Structural Similarity Index Measurement (SSIM) values.

Nag (2019) developed an improved differential evolution Linde–Buzo–Gray (IDE-LBG) algorithm, which was coupled with LBG for generating optimum vector quantization codebooks. The images LENA, BABOON, PEPPERS, BARB and GOLDHILL were obtained from the publicly available dataset. The members of the initial population of IDE were chosen at random from each group, and the best codebook obtained by the IDE was eventually used as the initial codebook for the LBG algorithm. The bits per pixel (BPP) measured the data size of the compressed image for different codebook sizes, and the peak signal to noise ratio (PSNR) values were calculated for the individual codebook sizes. The developed algorithm obtained better PSNR values, and the quality of the reconstructed image was much better than that obtained from the other algorithms in comparison for eight different codebook sizes. However, a reduction of the computation time was needed: searching the redundant packet structure consumes more time and increases the computational time.


Horng (2012) developed a structure reference selection process for collecting redundant frames of structure for compression. The proposed algorithm was applied to a medical CTA image as well as to a sequence of images; the sequence used consisted of gray-scale MRI images taken from local hospitals. The developed algorithm is a combination of an integer wavelet packet transform function, particle swarm optimization and an HCC matrix. Similar and dissimilar packets are collected in two different units and passed through the HCC matrix, after which the image is compressed. The empirical evaluation of PSNR and compression ratio showed better performance than the other methods used in the experimental process. However, searching the redundant packet structure consumes more time and increases the computational time; reducing the computational time of the compression remains future work.

Nowakova et al. (2017) developed a hybrid enhanced system with fast fuzzy-clustering-based vector quantizers. The images LENA, BABOON, PEPPERS, BARB and GOLDHILL were obtained from the publicly available dataset. The system has three different modules: the first reduces the number of code words affected by noise, the second reduces the number of training patterns, which significantly reduces the cost of the quantizers, and the third increases the size of small clusters by relocating the corresponding code words close to large ones. This strategy enhanced the competition between clusters, yielding better local minima. The performance of the algorithm was evaluated in terms of the computational demands and the quality of the reconstructed image; the developed approach executed with high speed and was competitive with existing methods.

3 Proposed methodology

3.1 Vector quantization (VQ)

Quantization for the compression of images is of two types: scalar and vector quantization. In scalar quantization, each input symbol is processed individually to produce the output. In vector quantization (VQ), by contrast, the input symbols are clubbed together to form groups of vectors using different clustering techniques (Kim and Rao 1993). VQ is widely used in image compression to get better output. Image compression using VQ proceeds in the steps of encoding, transmitting through a channel, and decoding (Pandey et al. 2013). The block diagram of data compression using VQ is shown in Fig. 1 below.

(a) Image compression using VQ

In VQ, the image is divided into blocks, and each block is represented by a vector of the image, referred to as a code word, produced by the clustering techniques. The set of code words forms the codebook for VQ.

The process of VQ is described in the following steps.

1. Design of codebook
2. Encoding of an image
3. Image decoding

Compression is applicable where redundancy and irrelevancy are present within an image. VQ is used in applications where high compression ratios are required (Nag 2019).

Fig. 1  Block diagram representing the steps of VQ for data compression
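To make the three steps above concrete, the following minimal Python/NumPy sketch partitions an image into non-overlapping blocks, encodes each block as the index of its nearest code word, and decodes the index table back into an image. The function names and the 4 × 4 block size (which matches the experimental setup in Sect. 4) are our own illustrative assumptions, not code from the original method.

```python
import numpy as np

def image_to_vectors(img, k=4):
    # Partition a (H, W) grayscale image into non-overlapping k x k blocks
    # and flatten each block into a k*k-dimensional vector (Sect. 4 uses
    # 4 x 4 blocks, i.e. 16-element vectors).
    h, w = img.shape
    img = img[:h - h % k, :w - w % k]
    blocks = img.reshape(img.shape[0] // k, k, img.shape[1] // k, k)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, k * k).astype(np.float64)

def vq_encode(vectors, codebook):
    # Nearest-neighbour rule: transmit, for each input vector, the index of
    # the code word at minimum Euclidean distance (the index table).
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook, shape, k=4):
    # Replace each received index by its code word and reassemble the image.
    h, w = shape
    blocks = codebook[indices].reshape(h // k, w // k, k, k)
    return blocks.transpose(0, 2, 1, 3).reshape(h, w)
```

With 4 × 4 blocks and a codebook of 256 code words, for example, each block costs 8 index bits, i.e. 0.5 bits per pixel instead of 8, which is why only the index table needs to be transmitted.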


The image is partitioned into blocks, and each block is coded into a vector. The set of code words/code vectors is called the codebook. Compression of the image is obtained by transmitting the address of each code word, called the index, instead of the codebook itself: the useful information about the input image is stored in the index into the codebook, so by sending codebook indices the transmission bit rate is minimized. The main aim of codebook design is to reduce the bit rate of the image encoding/decoding process; in the traditional method, the LBG algorithm is used to generate a locally optimum codebook (Sasazaki et al. 2008).

Image compression using VQ with an encoder, channel and decoder (Hu et al. 2008) is shown in Fig. 2. The block diagram consists of three blocks, which are explained below (Sanyal et al. 2013).

Encoder: this block consists of image vector generation, codebook generation and the codebook index.

• The input image is divided into blocks, and each block is converted into a row/column vector called a code word.
• A codebook is a group of code words for the blocks of the input image.
• The major part of VQ is the generation of an efficient codebook; a better algorithm generates a more effective codebook.
• Each vector of the input image is indexed with a number.

Channel: the indexed numbers are transmitted through the channel instead of the codebooks.

Decoder: this block consists of the indexed numbers and the reconstruction of the decompressed image.

• The indexed values are received and decoded with the index table.
• The indexed numbers are allotted to their respective code words.
• All the code words are organized such that the dimensions of the decompressed image at the receiver match the input image.

3.2 Code book generation

The most frequently used VQ algorithm is the generalized Lloyd algorithm (GLA), also called the Linde–Buzo–Gray (LBG) algorithm. LBG uses a mapping function to partition the training vectors into Nc clusters. The mapping function is defined as Q: R^K → CB, where R^K is the K-dimensional input vector space and CB is the codebook. It generates a local codebook with minimum distortion. The steps of the LBG algorithm are given below (Tsolakis et al. 2012).

Step 1  Initially, a random codebook of size Nc and a distortion D1 = 1 are taken
Step 2  Partition the input image training set/code vectors into clusters using K-means clustering with the nearest-neighborhood condition

• Compute the Euclidean distance between the first row vector of the input image and all row vectors in the codebook.
• Each row vector of the input image is represented as xi = {xi1, xi2, …, xiL}, where i = 1, 2, 3, …, and a code word of the codebook is denoted as cj = {cj1, cj2, …, cjL}, where j = 1, 2, …, Nc.
• Note down these distances, find the minimum of all the distances, and place the index of the code word at minimum distance in the first index entry for the input image. This means that the corresponding code word in the codebook is the nearest to that input image vector.
• This is repeated for all the remaining rows in the image.

Fig. 2  Block diagram of VQ for image compression


Step 3  Once all the input image vectors have been processed, the centroids of the partition regions are computed as in Step 2

The distortion d(x, cj) between the input image vector x and the code words cj, j = 1, 2, 3, …, Nc, is calculated at the encoder block. If the distortion is small, the index of the code word selected by the nearest-neighbor rule is transmitted to the decoder; the index table of all vectors of the input image is transmitted to the receiver.

3.2.1 Updating the code book

Step 4  This can be done using K-means clustering. If several input image row vectors have the same index value, the corresponding rows are averaged and the averaged/updated row is placed at that index of the codebook. This is repeated for all matching index values among the input vectors of the image, giving an updated codebook (Kumar et al. 2018)
Step 5  Each vector of the input image is assigned to a corresponding code word, and that code word index replaces the associated input vectors to achieve the aim of compression

To calculate the distance between the pixels of any two images, different distance measures can be used, namely the city block distance, the Euclidean distance and the chessboard distance. The Euclidean distance is the straight-line distance between two pixels; the city block distance measures the path between pixels based on a 4-connected neighborhood; the chessboard distance measures the path between pixels based on an 8-connected neighborhood. In this clustering process, the Euclidean distance is sufficient to measure the distances between pixels of the input image and the corresponding pixels of the codebook.

All the Euclidean distances between the input and the codebook are averaged to calculate the distortion Dm+1. If the distortion is minimal, the current codebook is final; otherwise the process is repeated:

    Distortion D = (1/Nb) Σ_{i=1}^{Nb} Σ_{j=1}^{Nc} μij · ||xi − cj||²    (1)

where xi is the ith vector of the input image, Nb is the number of input vectors, cj is the jth vector of the codebook of size Nc, and μij is 1 if xi is in the jth cluster of the codebook and 0 otherwise.

If Dm − Dm+1 < T, where T is a predefined threshold, the algorithm stops; otherwise m is incremented by one and the above Steps 2 to 4 are repeated (Qinghai 2010; Chen 2005).

3.2.2 Image compression using optimization algorithms

(a) Image compression using PSO

PSO is an evolutionary algorithm attributed to Kennedy and Eberhart in 1995; it was inspired by the social behavior of movement in a bird flock or fish school. Each individual particle represents a probable solution to the problem, and the fitness function is evaluated at each particle's position. Every particle keeps two reference positions, the global best (gbest) and the personal best (pbest): the position with the highest fitness value among all particles is called gbest, and the best position found by the particle itself is termed pbest. The particles move with their own velocities in the search space and change their positions with respect to the gbest and pbest positions by keeping their velocities updated (Patane and Russo 2002).

3.2.3 LBG-PSO optimisation for image compression

For the compression of images, a codebook is taken as a particle in PSO. To get a better codebook, the fitness/distortion function, which minimizes the error, is taken as the objective function; the LBG-PSO algorithm proceeds in the following steps (Feng et al. 2007):

Step 1  Initially, execute the LBG algorithm to generate a codebook, and take that codebook as gbest
Step 2  Initialize the remaining codebooks from LBG with random positions and random velocities
Step 3  Compute the fitness/distortion value of each codebook using the same distortion function as used in LBG, Eq. (1) (Yang 2008)
Step 4  If the distortion function value of the present codebook is lower than the fitness function value (pbest) of the previous one, the new fitness value of the present codebook is taken as pbest. This process continues, evaluating the fitness value until all the vectors of the input image have been processed
Step 5  From all the fitness values of the codebooks, choose the lowest fitness value; if this value is better than gbest, the chosen minimum fitness value is taken as the new gbest
Step 6  Using the PSO algorithm, the velocity and position of each particle are updated using Eqs. (2) and (3) to get a new position:

    V_ik^(n+1) = V_ik^n + c1 · rand1^n · (pbest_ik^n − x_ik^n) + c2 · rand2^n · (gbest_k^n − x_ik^n)    (2)

    x_ik^(n+1) = x_ik^n + V_ik^(n+1)    (3)

where k indexes the solutions in the search space, i is the particle position, c1 is the cognitive coefficient, c2 is the social coefficient with 0 ≤ c1, c2 ≤ 2, rand1 and rand2 are random values (0 ≤ rand1, rand2 ≤ 1), V_ik is the velocity of the particle at position i, x_ik is the position of the particle, pbest is the particle's individual best solution, and gbest is the swarm's best solution
Step 7  Repeat Steps 3 to 6 until the maximum number of iterations is reached

The parameters used in the PSO-LBG algorithm (Qinghai 2010) are given in Table 1.

(b) Firefly algorithm (FA)

This algorithm was proposed in 2008 by Xin-She Yang and was stimulated by the characteristics and flashing behavior of fireflies. In this paper's use of FA, the brightness of a firefly is taken as the value of the objective function. Generally, a firefly of low brightness, i.e. low fitness/objective function value, moves toward a firefly of higher brightness, i.e. higher objective function value (Vijayvargiya et al. 2014). The process of optimization using the firefly algorithm is shown in the flow chart of Fig. 3; the time complexities of the algorithms are compared in Table 2.

3.2.4 LBG-FA optimisation for image compression

In this method of image compression, the codebooks are taken as fireflies. For a minimization problem, the value of the fitness function is taken as the objective function (Horng 2012). The process of image compression using FA is described in the following steps (Karri and Jena 2016).

Step 1  Initially, the LBG algorithm is applied to get the codebooks
Step 2  From all the codebooks obtained from LBG, the best codebook is taken and supplied to FA
Step 3  For the LBG-FA method, the following parameters are assumed: maximum number of iterations = 15, population = 20, mutation coefficient (α) = 0.01, mutation coefficient damping ratio (αdamp) = 0.99, attractiveness (β0) = 2, light absorption coefficient (γ) = 1
Step 4  Set the rest of the fireflies to random codebooks
Step 5  From these codebooks, select a codebook randomly and compute its fitness/objective value
Step 6  Update the intensities and positions of the fireflies: a firefly of low brightness, i.e. low fitness/objective function value, moves toward a firefly of higher brightness, i.e. higher objective function value
Step 7  The updated codebook from the firefly algorithm is the optimum codebook; use that codebook for compression using vector quantization

Table 1  Parameters of PSO

Parameter   Description             Value
S           Number of particles     30
N           Number of iterations    20
C1          Cognitive coefficient   0–1 random value
C2          Social coefficient      0–1 random value

Table 2  Time complexity values obtained by the algorithms (time in ms)

Samples   Genetic algorithm (O(n))   PSO (O(n))   Cuckoo search algorithm (O(nc))   FA (O(nc))   Proposed PSO-FA (O(nc))
1         78                         85           150                               350          380
2         81                         88           180                               370          390
3         83                         90           188                               382          392
4         85                         93           192                               387          400
5         89                         96           195                               393          406
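The LBG loop of Steps 1–5 in Sect. 3.2 and the distortion measure of Eq. (1) can be summarised in a short Python/NumPy sketch. This is a minimal illustration under our own assumptions (random initial codebook, empty clusters re-seeded at random), not the authors' exact implementation.

```python
import numpy as np

def lbg_codebook(vectors, nc, threshold=1e-3, rng=np.random.default_rng(0)):
    # Step 1: random initial codebook of nc code words drawn from the data.
    codebook = vectors[rng.choice(len(vectors), nc, replace=False)].copy()
    prev_d = np.inf
    while True:
        # Step 2: squared Euclidean distances via ||x||^2 - 2 x.c + ||c||^2,
        # then nearest-neighbour partition of the training vectors.
        dists = ((vectors ** 2).sum(1)[:, None]
                 - 2.0 * vectors @ codebook.T
                 + (codebook ** 2).sum(1)[None, :])
        idx = dists.argmin(axis=1)
        # Steps 3-4: replace each code word by the centroid of its cluster.
        for j in range(nc):
            members = vectors[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
            else:
                # Re-seeding an empty cluster is our assumption.
                codebook[j] = vectors[rng.integers(len(vectors))]
        # Eq. (1): average distortion over the Nb input vectors.
        d = dists[np.arange(len(vectors)), idx].mean()
        # Stop when D_m - D_{m+1} < T (Step 5 / threshold rule).
        if prev_d - d < threshold:
            return codebook, d
        prev_d = d
```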

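Similarly, the PSO update of Eqs. (2)–(3) and the firefly attraction move can be sketched as follows. The firefly move uses the standard form x ← x + β0·exp(−γ·r²)·(x_brighter − x) + α·ε from Yang's algorithm with the parameter values assumed in Sect. 3.2.4; treating a whole codebook as one particle/firefly follows Sects. 3.2.3–3.2.4, but the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_update(x, v, pbest, gbest, c1=2.0, c2=2.0):
    # Eqs. (2)-(3) for one particle; x, v, pbest, gbest are codebooks
    # flattened to 1-D arrays. Table 1 draws c1, c2 at random in [0, 1];
    # the defaults here are just the upper bound 0 <= c1, c2 <= 2.
    r1 = rng.random(x.size)
    r2 = rng.random(x.size)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

def firefly_move(x_dim, x_bright, beta0=2.0, gamma=1.0, alpha=0.01):
    # A dimmer firefly (worse distortion) moves toward a brighter one;
    # attractiveness decays with squared distance, plus a small random
    # mutation. In practice the distance is often normalised first.
    r2 = ((x_dim - x_bright) ** 2).sum()
    beta = beta0 * np.exp(-gamma * r2)
    return x_dim + beta * (x_bright - x_dim) + alpha * rng.standard_normal(x_dim.size)
```

In the hybrid proposed in Sect. 3.2.5 below, codebooks first refined by the LBG-PSO loop would then be passed through such firefly moves, with the distortion of Eq. (1) serving as the (inverse) brightness.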

3.2.5 Proposed hybrid LBG-PSO-FA for image compression

The Linde–Buzo–Gray (LBG) algorithm is used in VQ to generate local codebooks for the compression of images, but it results in lower image quality with a low peak signal to noise ratio (PSNR). To increase the quality of the image, optimization techniques are applied after the LBG method, optimizing the codebooks to generate a global codebook. Particle swarm optimization (PSO) and the firefly algorithm (FA) (Chiranjeevi and Jena 2018) were used after the LBG method to generate global codebooks; the image quality and PSNR are improved over LBG alone. To enhance the image quality further, a new hybrid technique, LBG-PSO-FA, is proposed, generating enhanced global codebooks after LBG. In this technique, FA is used to increase the particle velocity in PSO, thereby updating the positions (codebooks). Through this hybridization of PSO with FA, efficient codebooks are generated, increasing the quality of the image with a higher PSNR (Fig. 3).

The procedure of image compression using LBG-PSO-FA (Horng 2012; Ali et al. 2014) is explained in Fig. 4.

4 Results and discussion

In this paper, experiments were carried out to design an enhanced codebook for the efficient compression of images. The experiments were performed on gray-scale medical brain MRI images with a pixel amplitude resolution of 8 bits and a size of 256 × 256, collected from the BraTS 2018 dataset. For evaluation of the results, the original data is compared with the compressed data from the BraTS dataset. Initially, the image is divided into non-overlapping blocks of 4 × 4 pixels, and each non-overlapping block of 16 elements is taken as an input vector. Hence, the total 256 × 256 image is converted into 4096 input vectors to encode and compress the image. PSO and FA are the search algorithms and help LBG to decrease the complexity of the system.

The various quality metrics that describe the performance of the compression techniques are given below.

4.1 Performance evaluation metrics

The excellence of the proposed LBG-PSO-FA for image compression is compared with the other optimization techniques using the quality parameters PSNR, MSE, SSIM, mean absolute error (MAE), structural content (SC) and entropy.

1. MSE measures the degradation of the reconstructed image quality compared to the initial image. It is defined in the equation below:

    MSE = (1/MN) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} [I(i, j) − I′(i, j)]²    (4)

Fig. 3  Firefly optimization technique flow chart

Fig. 4  Block diagram of image compression using hybrid PSO-FA: input image → division into non-overlapping blocks → LBG-PSO algorithm → codebooks optimized using the PSO-FA technique → codebook index → channel → codebook index at the decoder → decompressed image


where M × N is the total number of pixels in the image, I(i, j) is the input image and I′(i, j) is the output decompressed image.

2. The PSNR is defined as follows:

    PSNR = 10 log10((255 × 255)/MSE)    (5)

3. SSIM is a quality assessment metric predicated on the computation of luminance, contrast and structural terms:

    l(i, j) = (2 μi μj + c1)/(μi² + μj² + c1)    (6)

    c(i, j) = (2 σi σj + c2)/(σi² + σj² + c2)    (7)

    s(i, j) = (σij + c3)/(σi σj + c3)    (8)

    SSIM = ((2 μi μj + c1)(2 σij + c2))/((μi² + μj² + c1)(σi² + σj² + c2))    (9)

Here c1, c2, c3 are the regularization constants for the luminance, contrast and structural terms; μi, μj are the mean intensity values and σi, σj the standard deviation values of the two images; σij is the covariance of the two images.

4. MAE is a quality measure of the absolute differences between two images. MAE is given by

    MAE = (1/N) Σ_{i=0}^{N−1} |yi − xi| = (1/N) Σ_{i=0}^{N−1} |ei|    (10)

where yi is the predicted value, xi is the true value, N is the total number of elements and ei is the error.

5. Entropy (En) is a measure of randomness used to characterize the texture of an image. Entropy is defined as

    En = − Σ_{i=0}^{N−1} pi · log2(pi)    (11)

where pi are the histogram counts of the image.

6. Structural content (SC): SC is used to find the quality of an image. It is a correlation-based measurement parameter and gives the similarity between two images. SC is given by the equation below:

    SC = (Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} P(i, j)²)/(Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} Q(i, j)²)    (12)

where Q(i, j) represents the original (reference) image and P(i, j) represents the reconstructed image. A lower value of SC indicates a better-quality image.

7. Computational time (CT) is the time taken to perform a computational process.

The experiments were carried out with different codebook sizes of 4, 8, 16, 32, 64, 128, 256, 512 and 1024, and the metrics PSNR, MSE, MAE, entropy, SC, SSIM and computational time were computed. The performance of the proposed firefly optimization technique is compared to traditional LBG and to optimization using PSO, i.e. LBG-PSO. Tables 3, 4 and 5 show the quality evaluation metrics PSNR, MSE, SSIM, MAE, entropy, SC and computational time for traditional LBG, LBG-PSO and LBG-PSO-FA, evaluated and compared over the different code book sizes. Figures 5 and 6 show the original MRI image and the corresponding decompressed images of LBG, LBG-PSO and LBG-PSO-FA for brain MRI images 1 and 2, respectively. Figures 7 and 8 give the graphical representation of the variation of the PSNR values with the bits per pixel (BPP), where bpp = log2(Nc)/k² for a codebook of size Nc and k × k blocks. From the results, the proposed hybrid LBG-PSO-FA technique provides better results, i.e. higher PSNR, lower MAE and better entropy. Nevertheless, the computational time is higher for LBG-PSO-FA compared to LBG, LBG-PSO and LBG-FA.
terize the texture of image. Entropy is defined as
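SSIM in Eq. (9) combines the three terms of Eqs. (6)–(8). Computed globally over two images, it can be sketched as follows; the usual constants c1 = (0.01·255)² and c2 = (0.03·255)² are our assumption, since the paper does not state them.

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Eq. (9): single-scale SSIM over whole images (no sliding window).
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```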

Table 3  Quality evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG vector quantization of brain MRI image1

Code book size   PSNR        MSE        SSIM       Entropy    SC         MAE        Time (s)
4                20.116204   633.9819   0.623814   3.038537   1.119676   8.006912   1.810726
8                20.484436   582.21     0.63835    3.567117   1.096271   7.402344   2.455646
16               21.152981   498.976    0.678668   4.131138   1.081915   6.364288   1.890815
32               22.79279    342.042    0.713361   4.299359   1.054376   5.569794   2.064042
64               23.199833   311.9466   0.71934    4.453125   1.050142   5.212479   2.158178
128              23.542492   287.793    0.751904   5.0417     1.036112   4.688126   3.086827
256              24.019024   258.273    0.769598   5.597836   1.024657   4.324036   3.424933
512              24.471067   232.316    0.773454   5.841739   1.021237   4.107986   6.235416
1024             26.066393   161.943    0.790247   5.873002   1.021270   3.78183    10.025146


Table 4  Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG-PSO of brain MRI image1

Code book size   PSNR        MSE       SSIM       Entropy    SC         MAE        Time (s)
4                20.226098   618.29    0.630063   3.358189   1.116542   7.915192   100.62798
8                21.281386   393.622   0.675527   3.836849   1.088804   6.405502   127.24702
16               22.32468    381.136   0.71188    4.277947   1.072562   5.668732   148.54
32               23.377908   272.948   0.729161   4.380785   1.037423   5.373474   198.255
64               24.002987   258.869   0.752919   4.922394   1.040475   4.840118   221.34597
128              24.445844   233.388   0.76038    5.333835   1.034379   4.453659   251.50975
256              25.068761   202.805   0.773376   5.522709   1.023678   4.293304   290.67878
512              25.485063   192.788   0.778161   5.875153   1.021947   3.973694   327.24
1024             27.486427   116.166   0.80493    6.521489   1.019255   3.633087   523.638

Table 5  Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG-FA of brain MRI image1

Codebook size (FA)   PSNR        MSE       SSIM       Entropy    SC         MAE        Time (s)
4                    20.54151    574.222   0.631326   3.59263    1.116092   7.884125   192.517273
8                    21.58579    451.939   0.66221    4.361624   1.104869   6.828613   246.625484
16                   22.651201   353.248   0.70366    4.342443   1.075303   5.23312    827.48547
32                   23.67103    279.306   0.721033   4.559232   1.047071   5.12766    950.042469
64                   24.329192   240.48    0.747823   5.300376   1.039117   4.59183    1086.101667
128                  24.949918   208.487   0.760678   5.628348   1.028355   4.116873   1178.508556
256                  25.542421   181.585   0.777572   6.263633   1.024916   4.074014   1661.514247
512                  25.96621    164.846   0.800343   6.9216     1.028620   3.36249    1875.086057
1024                 27.991233   103.295   0.810396   7.630911   1.019997   3.004883   2022.000762

Fig. 5  Graph representing the computational time complexity of the LBG-PSO, LBG-FA and LBG-PSO-FA fitness functions (fitness/distance values over iterations 1–10)

4.2 Discussion of computational complexity

The time complexity of the fitness function of LBG-FA is given as O(nc). Similarly, the time complexity of the fitness function of LBG-PSO is evaluated as O(n). In our research work, the proposed LBG-PSO-FA likewise has a time complexity of O(nc). The time complexity is lower in LBG-FA and the proposed method, and owing to this the fitness function also attains better distance values. A comparison graph of the fitness values obtained by LBG-FA, LBG-PSO and LBG-PSO-FA is shown in Fig. 5.

The genetic algorithm and PSO have linear fitness functions, so their time complexity is evaluated as O(n). The Cuckoo search algorithm, FA and PSO-FA have conditional loops, so their time complexity increases with each iteration; these algorithms, including our proposed hybrid PSO-FA, have a time complexity of O(nc) (Fig. 9).

4.3 Discussion of quantitative analysis

The results obtained are evaluated for code book sizes ranging from 4 to 1024. In Table 3, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using LBG vector quantization of brain MRI image1. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 26.06 dB, the MSE is 161.943, the SSIM is 0.790247 and the entropy is 5.87, while more time, 10.02 s, is consumed than for the lower code book sizes.

In Table 4, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using LBG-PSO of brain MRI image1. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 27.48 dB, the MSE is 116.166, the SSIM is 0.80 and the entropy is 6.521, while more time, 523.63 s, is consumed than for the lower code book sizes.

Fig. 6  (i) Input image; (ii), (iii), (iv) decompressed images of LBG-PSO, LBG-FA and LBG-PSO-FA of brain MRI image1, respectively

Fig. 7  (i) Original image; (ii), (iii), (iv) decompressed images of LBG, LBG-PSO and LBG-FA of brain MRI image2, respectively

In Table 5, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using LBG-FA of brain MRI image1. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 27.99 dB, the MSE is 103.295, the SSIM is 0.81 and the entropy is 7.63, while more time, 2022 s, is consumed than for the lower code book sizes.

In Table 6, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using the proposed LBG-PSO-FA of brain MRI image1. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 33.5432 dB, the MSE is 28.56, the SSIM is 0.945 and the entropy is 7.91, while more time, 2145.45 s, is consumed than for the lower code book sizes.

Fig. 8  Graph representing the variations of the PSNR (in dB) of LBG, LBG-PSO, LBG-FA and LBG-PSO-FA across the nine code book sizes for brain MRI image1

Fig. 9  Graph representing the variations of the PSNR (in dB) with bits per pixel of LBG, LBG-PSO and LBG-firefly for brain MRI image2

In Table 7, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using LBG of brain MRI image2. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 28.14 dB, the MSE is 99.78, the SSIM is 0.70 and the entropy is 6.26, while more time, 10.06 s, is consumed than for the lower code book sizes.

In Table 8, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using LBG-PSO of brain MRI image2. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 28.41 dB, the MSE is 93.719, the SSIM is 0.73 and the entropy is 6.91, while more time, 891.699 s, is consumed than for the lower code book sizes.

In Table 9, the quality metrics PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for these code book sizes using LBG-FA of brain MRI image2. From the table, it is clear that for the highest code book size, PSNR attains better values than for the lower code book sizes: the PSNR value obtained is 28.98 dB, the MSE is 82.239, the SSIM is 0.75 and the entropy is 7.89, while more time, 4359.39 s, is consumed than for the lower code book sizes.

Table 10 shows the performance measures of the existing network models compared with the proposed model. All the neural networks are connected directly to the fully connected output layer. The performance measures PSNR, MSE and time usage generated by each network model are tabulated in the table.

Table 11 compares the performance measures (PSNR, MSE and time) of the proposed LBG-PSO-FA method with the existing methods of Karri and Jena (2016) and Kumar et al. (2018). The values obtained are the average values of each performance measure at the high-resolution code book size of 1024. Image types such as CT and MRI are used in the comparison of the existing methods and the proposed method. Table 11 shows that the proposed method attains better PSNR and MSE values, while requiring more time for the compressed size of the image, when compared with the existing methods.
Table 6  Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using the proposed LBG-PSO-FA of brain MRI image1

Codebook size (PSO-FA)   PSNR        MSE       SSIM       Entropy    SC         MAE        Time (s)
4                        24.022813   257.679   0.72266    3.658331   1.082474   6.684784   594.8134
8                        25.577514   180.335   0.777572   4.452005   1.007779   6.268768   649.1922
16                       26.067963   160.834   0.80143    4.596651   1.000872   5.111008   954.1569
32                       26.522056   144.837   0.80945    4.775273   1.011872   4.599869   1090.521
64                       27.224643   123.333   0.839563   5.679104   1.024123   4.395432   1143.732
128                      28.698992   87.9185   0.863125   5.876631   1.028906   3.813327   1287.455
256                      29.129645   79.63     0.8945     7.092261   1.016520   3.612073   17341.28
512                      31.876523   42.2163   0.916576   7.616867   1.011395   3.13523    1987.87
1024                     33.5432     28.56     0.945841   7.916088   1.012133   2.911499   2145.45


Table 7  Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG of brain MRI image2

Codebook size   PSNR        MSE       SSIM        Entropy    SC         MAE        Time (s)
4               21.838854   426.658   0.621305    2.988422   1.019854   8.269699   1.279674
8               22.871      335.722   0.636542    3.131723   1.018463   7.963654   1.727146
16              23.242004   308.375   0.641361    3.6002     1.020062   6.018372   1.844802
32              23.629272   282.540   0.651528    3.673754   1.003153   6.152772   2.366126
64              25.4198     186.758   0.659755    4.03705    1.003642   5.434601   2.394593
128             26.378134   149.996   0.667144    4.3675     1.003096   4.8629     2.613611
256             26.9097     132.467   0.680929    4.913372   1.009103   4.063004   3.565143
512             27.67067    111.193   0.703252    5.673273   1.00647    3.93042    6.02487
1024            28.147189   99.788    0.7105174   6.26       1.005678   3.698013   10.0644

Table 8  Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG-PSO of brain MRI image2

Codebook size   PSNR        MSE       SSIM        Entropy    SC         MAE        Time (s)
4               22.933696   331.192   0.5919137   3.152839   1.017119   7.279388   224.320973
8               23.16574    314.108   0.617405    3.203545   1.014971   7.007004   315.526806
16              23.63       281.890   0.624377    4.151657   1.020399   5.696808   315.736005
32              24.589057   226.506   0.624842    4.134273   1.016413   5.817688   305.687586
64              25.688086   175.824   0.6558      4.2456     1.001967   5.36756    305.557828
128             26.95233    131.244   0.6898      4.450055   1.003870   4.508911   241.412837
256             27.705987   110.428   0.721545    5.282238   1.006592   3.7928     326.636017
512             28.06931    101.643   0.719766    5.7832     1.004146   3.73211    542.156714
1024            28.4125     93.719    0.731532    6.91       1.3365     3.6627     891.699397

Table 9  Quality metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes using LBG-FA of brain MRI image2

Code book size   PSNR        MSE       SSIM       Entropy    SC          MAE        Time (s)
4                23.024681   324.399   0.616744   3.392008   1.017265    7.161774   1027.84457
8                23.62866    282.020   0.62234    3.786627   1.015187    6.830887   1105.19047
16               24.626091   224.429   0.642351   4.76442    1.0068      5.2765     12267.965
32               25.254764   194.124   0.649563   4.210303   1.0044385   5.485458   1488.5247
64               26.4728     146.487   0.669698   4.405967   1.004906    4.742783   1593.369
128              27.105557   126.642   0.693132   4.81855    1.0029      4.23055    2091.32
256              27.82975    107.196   0.74321    5.7431     1.008796    3.386121   2521.193
512              28.5678     90.59     0.725015   6.70237    1.005662    3.680237   3583.103
1024             28.9867     82.239    0.7543     7.8964     1.003459    3.1987     4359.39755

Table 10  Comparison of the performance measures of the existing network models and the proposed method

Model                          Codebook size   PSNR (dB)   MSE      Time (s)
Artificial neural network      512             27.5932     32.001   2832.37
                               1024            28.945      31.75    2700.64
Neural network                 512             29.956      30.834   2745.73
                               1024            30.124      30.134   2643.54
Deep neural network (lossy)    512             31.076      30.003   2601.08
                               1024            31.53       29.603   2536.86
Convolutional neural network   512             32.09       29.05    2456.98
                               1024            32.536      29.002   2354
Proposed                       512             32.104      28.45    2178.64
                               1024            33.5432     28.56    2145.45


Table 11  Comparison of the performance measures of the methods obtained from the literature and the proposed method

Authors                 Codebook size   Type   PSNR (dB)   MSE      Time (s)
Karri and Jena (2016)   1024            CT     29.5        72.95    1736.11
Kumar et al. (2018)     1024            CT     32.625      0.0004   2587.64
Proposed method         1024            MRI    33.5432     28.56    2145.45

5 Conclusions

This paper presents a proposed optimization using the firefly algorithm and PSO, applied after VQ with LBG to boost the codebooks; it increases the quality of the reconstructed images compared to traditional vector quantization using the LBG algorithm and LBG-PSO optimization. The proposed method produces a higher PSNR, a lower MSE and an improved SSIM with a lower MAE compared to the other methods. However, the proposed LBG-PSO-FA needs more computation time than LBG, LBG-FA and LBG-PSO. From the results, it is concluded that the LBG-PSO-FA optimization enhances the performance of the LBG method and of LBG-PSO, and that the proposed LBG-PSO-FA algorithm is more reliable than the LBG, LBG-FA and LBG-PSO algorithms.

Funding  The authors have not received any funding from any sources.

Compliance with ethical standards

Conflict of interest  M. Laxmi Prasanna Rani declares that there is no conflict of interest. Gottapu Sasibhushana Rao declares that there is no conflict of interest.

Ethical approval  This article does not contain any studies with human participants or animals performed by any of the authors.

References

Ali N, Othman MA, Husain MN, Misran MH (2014) A review of firefly algorithm. ARPN J Eng Appl Sci 9:1732–1736
Ammah PN, Owusu E (2019) Robust medical image compression based on wavelet transform and vector quantization. J Inform Med Unlocked 20:15
Chen Q (2005) Image compression method using improved PSO vector quantization. Lect Notes Comput Sci 3612:20
Chiranjeevi K, Jena UR (2018) Image compression based on vector quantization using cuckoo search optimization technique. Ain Shams Eng J 9:1417–1431
Feng HM, Chen CY, Ye F (2007) Evolutionary fuzzy particle swarm optimization vector quantization learning scheme in image compression. Expert Syst Appl 32:213–222
Gonzalez RC, Woods RE (2008) Digital image processing, 3rd edn. Prentice-Hall, USA
Horng M-H (2012) Vector quantization using the firefly algorithm for image compression. Expert Syst Appl 39(1):1078–1091
Hu YC, Su BH, Tsou CC (2008) Fast VQ codebook search algorithm for grayscale image coding. Image Vis Comput 20:20
Jayaraman S, Esakirajan S, Veerakumar T (2012) Digital image processing. Tata McGraw Hill Education, USA
Karri C, Jena U (2016) Fast vector quantization using a Bat algorithm for image compression. Eng Sci Technol Int J 19:769–781
Kim JK, Rao SW (1993) A fast mean-distance-ordered partial codebook search algorithm for image vector quantization. IEEE Trans Circ Syst II Analog Digit Signal Process 40:576–579
Kumar SN, Fred AL, Varghese PS (2018) Compression of CT images using contextual vector quantization with simulated annealing for telemedicine application. J Med Syst 42:218
Nag S (2019) Vector quantization using the improved differential evolution algorithm for image compression. Genet Program Evolv Mach 2019:20
Nowakova J, Prilepok M, Snasel V (2017) Medical image retrieval using vector quantization and fuzzy S-tree. J Med Syst 41:18
Pandey R, Vijayvargiya G, Silakari S (2013) A survey: various techniques of image compression. IJCSIS 11:1–6
Patane G, Russo M (2002) The enhanced LBG algorithm. Neural Netw 14:1219–1237
Qinghai B (2010) Analysis of particle swarm optimization algorithm. Comput Inf Sci 3:180–184
Rani ML, Rao GS, Rao BP (2019) Performance analysis of compression techniques using LM algorithm and SVD for medical images. In: 6th international conference on signal processing and integrated networks (SPIN)
Reddy RM, Ravichandran KS, Venkatraman B, Suganya SD (2018) A new approach for the image compression to the medical images using PCASPIHT. Biomed Res
Sanyal N, Chatterjee A, Munshi S (2013) Modified bacterial foraging optimization technique for vector quantization-based image compression. Computational intelligence in image processing. Springer, Berlin, pp 131–152
Sasazaki K, Saga S, Maeda J, Suzuki Y (2008) Vector quantization of images with variable block size. Appl Soft Comput 8:634–645
Suguna J, Senthilkumaran N (2011) Neural network technique for lossless image compression using X-ray images. Int J Comput Electr Eng 3:17–23
Tsolakis D, Tsekouras GE, Niros AD, Rigos A (2012) On the systematic development of fast fuzzy vector quantization for grayscale image compression. Neural Netw 36:83–96
Vijayvargiya G, Silakari S, Pandey R (2014) A novel medical image compression technique based on structure reference selection using integer wavelet transform function and PSO algorithm. Int J Comput Appl 91:20
Yang XS (2008) Nature-inspired metaheuristic algorithms. Luniver Press, York
Yang XS (2009) Firefly algorithms for multimodal optimization. In: Stochastic algorithms, foundation and applications, SAGA, lecture notes in computer sciences, vol 5792, pp 169–178

Publisher's Note  Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
