An efficient codebook generation using firefly algorithm for optimum medical image compression
https://doi.org/10.1007/s12652-020-01782-w
ORIGINAL RESEARCH
Abstract
In recent times, medical imaging has become an indispensable tool in clinical practice. Owing to the large volume of medical images, compression is needed to lessen the redundancy in the images and to represent them more compactly for effective transmission. In this paper, the Linde–Buzo–Gray (LBG) algorithm was developed with vector quantization (VQ) for compressing the images, and it results in decent image quality. To further increase the image quality, optimization techniques [particle swarm optimization (PSO) and the firefly algorithm (FA)] were used with the LBG method to optimize the codebook and generate a global codebook. In the proposed work, the LBG method was used to obtain local codebooks, and the obtained local codebooks were optimized by utilizing PSO. The codebooks optimized by PSO were optimized again using FA, which results in good image quality. In the experimental phase, the performance of the proposed work was compared with the individual optimization techniques PSO and FA. The experimental study showed that the proposed work achieves a 1.2–6 dB improvement in image compression over other existing approaches.
Keywords Firefly algorithm · Linde–Buzo–Gray · Medical image compression · Particle swarm optimization · Vector
quantization
M. L. P. Rani et al.
of medical images consumes low transmission time during transmission. Hence, the role of medical image compression became essential for effective storage and transmission. The techniques for the compression of images are classified as lossy and lossless (Gonzalez and Woods 2008; Jayaraman et al. 2012). Better-quality decompressed images are obtained from lossless techniques, which are used in biomedical applications, satellite communications, etc. The process of providing better compression with considerable loss is referred to as lossy compression and is used in internet applications, mobile phones, etc. Among the various kinds of compression methods, vector quantization (VQ) is one of the most widespread lossy compression techniques, although the LBG algorithm used to build its codebooks is highly sensitive to the initial codebook. VQ with the LBG algorithm is used in this paper for the compression of brain MRI images (Suguna and Senthilkumaran 2011). Using VQ, local codebooks are generated to decrease the mean square error (MSE), though with a low peak signal-to-noise ratio (PSNR). The codebooks obtained from LBG-VQ are optimized using PSO to obtain optimal codebooks, and the codebooks generated by the LBG-PSO algorithm are further enhanced by FA. Owing to the exploration characteristics of FA, efficient optimal solutions are obtained from the search space. Hence, the output image is reconstructed with the enhanced codebooks obtained by the proposed LBG-PSO-FA for the detection of disease. This optimal compression algorithm produces efficient codebooks, generating visually better-quality output images with reduced computational time and excellent PSNR.

This paper is organized as follows. Sect. 1 gives the importance of medical image compression and an introduction to compression techniques. A survey of image compression using vector quantization with the techniques of LBG, LBG-PSO and LBG-PSO-FA is given in Sect. 2. The proposed methodology is explained in Sect. 3. The comparison of performance metrics is tabulated and the results are discussed in Sect. 4. Finally, conclusions are drawn in Sect. 5.

2 Literature survey

Ammah and Owusu (2019) developed a DWT-VQ (discrete wavelet transform-vector quantization) technique for compressing images while retaining their quality in any sort of medical setting. The developed hybrid methodology extracted the medical images from the DICOM dataset; these images contained speckle and salt-and-pepper noise, which was significantly reduced during the execution of the process. The graphs obtained from the results indicated the compression ratios per window size per codebook size. The peak signal-to-noise ratio (PSNR) and the root mean square error (RMSE) provided a substantial measure of the quality of the image formed. The developed hybrid technique showed extraordinary performance when compared with existing methods. However, improvements in compression using the multiwavelet decomposition technique and in designing the filters were still needed.

Yang (2009) developed the Cuckoo search metaheuristic optimization algorithm, which optimized the Linde–Buzo–Gray (LBG) codebook, usually designed as a locally optimal codebook for image compression. The images LENA, BABOON, PEPPERS, BARB and GOLDHILL were obtained from the publicly available dataset. The Cuckoo search algorithm optimized the LBG codebook using a Lévy-flight distribution function implemented with Mantegna's algorithm. The developed Cuckoo search algorithm improved the PSNR values by around 0.2 dB at a low bit rate and 0.3 dB at a higher bit rate. From the experimental results, graphs for different codebook sizes were obtained, and the PSNR values observed were better than those of existing methods. Slower convergence is the major drawback of the developed method.

Rani et al. (2019) developed a back-propagation neural network with the Levenberg–Marquardt training algorithm (BPNNLM) and singular value decomposition (SVD) for the compression of medical images. The images were collected from CT and MRI scans of the brain and X-ray images of the chest. The original image can be reconstructed from the singular values of its matrix, so the image can be represented with only its required features during compression. The experimental results showed that the storage space for the compressed image was less than that of the original image. For acceptable compression and reconstruction of the image, the selection of the singular values was critical. The results proved that the compression technique based on the singular values obtained better PSNR values, lower MSE and good Structural Similarity Index Measure (SSIM) values.

Nag (2019) developed an improved differential evolution Linde–Buzo–Gray (IDE-LBG) algorithm, which was coupled with LBG for generating optimum vector quantization codebooks. The images LENA, BABOON, PEPPERS, BARB and GOLDHILL were obtained from the publicly available dataset. The members of the initial population of IDE were chosen at random from each group. Eventually, the best codebook obtained by the IDE was used as the initial codebook for the LBG algorithm. The bits per pixel (BPP) measure evaluated the data size of the compressed image for different codebook sizes, and the peak signal-to-noise ratio (PSNR) values were calculated for the individual codebook sizes. The developed algorithm obtained better PSNR values, and the quality of the reconstructed images was much better than that obtained from the other algorithms in comparison for eight different codebook sizes. However, a reduction in computation time was still needed.

Horng (2012) developed a structure-reference selection process for collecting redundant frames of structure for compression. The proposed algorithm was applied to medical CTA images as well as to a sequence of images; the sequence used consisted of grayscale MRI images taken from local hospitals. The developed algorithm is a combination of an integer wavelet packet transform function, particle swarm optimization and an HCC matrix. Similar and dissimilar packets are collected in two different units and passed through the HCC matrix, after which the image is compressed. The empirical evaluation of PSNR and compression ratio showed better performance than the other methods used in the experimental process. However, the search for redundant packet structures consumes considerable time and increases the computational time, so a decrease in computation time was still needed.

Nowakova et al. (2017) developed a hybrid enhanced system with fast fuzzy-clustering-based vector quantizers. The images LENA, BABOON, PEPPERS, BARB and GOLDHILL were obtained from the publicly available dataset. Three different modules were used: the first reduces the number of code words affected by noise, the second reduces the number of training patterns, which significantly reduced the cost of the quantizers, and the third increases the size of small clusters by relocating the corresponding code words close to large ones. This strategy enhanced the competition between clusters, yielding better local minima. The performance of the algorithm was evaluated in terms of the computational demands and the quality of the reconstructed image. The developed approach executed with high speed and was competitive with existing methods.

3 Proposed methodology

3.1 Vector quantization (VQ)

Quantization for the compression of images is of two types: scalar and vector quantization. In scalar quantization, each input symbol is processed individually to obtain the output. In vector quantization (VQ), the input symbols are grouped together to form vectors using different clustering techniques (Kim and Rao 1993). VQ is widely used in image compression to obtain better output. Image compression using VQ proceeds in the steps of encoding, transmission through a channel and decoding (Pandey et al. 2013). The block diagram of data compression using VQ is shown in Fig. 1.

(a) Image compression using VQ

In VQ, the image is divided into blocks, and each block is represented by vectors of the image, referred to as code words, obtained by clustering techniques. The set of code words forms the codebook for VQ.
The process of VQ is described in the following steps:

1. Design of codebook
2. Encoding of an image
3. Image decoding

Compression is required where redundancy and irrelevancy are present within an image. VQ is used in applications where high compression ratios are required (Nag 2019).
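The encode/transmit/decode flow of the three steps above can be made concrete with a minimal sketch. The block size, the random codebook contents and the helper names here are illustrative assumptions for this sketch, not the authors' implementation:

```python
import random

def nearest(vec, codebook):
    """Index of the code word with minimum squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(vec, codebook[j])))

def encode(vectors, codebook):
    """Replace each image vector by the index of its nearest code word."""
    return [nearest(v, codebook) for v in vectors]

def decode(indices, codebook):
    """Rebuild the image vectors from the transmitted index table."""
    return [codebook[i] for i in indices]

random.seed(0)
# 16 image vectors of length 16 (4 x 4 blocks flattened), pixel values 0..255
vectors = [[random.randrange(256) for _ in range(16)] for _ in range(16)]
codebook = [[random.randrange(256) for _ in range(16)] for _ in range(4)]
indices = encode(vectors, codebook)   # 16 small indices are sent, not 256 pixels
restored = decode(indices, codebook)
```

Only the index table crosses the channel; the receiver holds the same codebook, which is why the transmitted bit rate drops with the index size rather than the pixel count.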
The image is partitioned into blocks, and each block has to be coded into vectors. The set of code words/code vectors is called the codebook. Compression of images is achieved by transmitting the address of the code word, called the index, instead of the codebook. The useful information of the input image is stored in the index of the codebook; thus, by sending the index of the codebook, the transmission bit rate is minimized. The main aim of the codebook design is to reduce the bit rate in the image encoding/decoding process for compression. Traditionally, the LBG algorithm is used to generate a locally optimum codebook (Sasazaki et al. 2008).
Image compression using VQ with an encoder, channel and decoder (Hu et al. 2008) is shown in Fig. 2. This block diagram consists of three blocks, which are explained below (Sanyal et al. 2013).
Encoder: This block consists of image vector generation, codebook generation and codebook indexing.

• The input image is divided into blocks, and each block is converted into a row/column vector called a code word.
• A codebook is a group of code words of the blocks of the input image.
• The major part of VQ is the generation of an efficient codebook; a better algorithm generates a more effective codebook.
• Each vector of the input image is indexed with a number.

Channel: The indexed numbers are transmitted through the channel instead of the codebooks.
Decoder: This block consists of the indexed numbers and the reconstruction of the decompressed image.

• The indexed values are received, and these values are decoded with the index table.
• The indexed numbers are allotted to their respective code words.
• All the code words are organized such that the dimensions of the decompressed image at the receiver match those of the input image.

3.2 Code book generation

The most frequently used VQ algorithm is the generalized Lloyd algorithm (GLA), also called the Linde–Buzo–Gray (LBG) algorithm. LBG uses a mapping function to partition the training vectors into N clusters. The mapping function is defined as R^K → CB, where R^K is the randomly generated codebook and CB is the codebook size. It generates a local codebook with minimum distortion.
The steps in the LBG algorithm are given below (Tsolakis et al. 2012).

Step 1 Initially, a random codebook of size Nc and a distortion D1 = 1 are taken
Step 2 Partition the input image training set/code vectors into clusters using K-means clustering with the nearest-neighborhood condition

• Compute the Euclidean distance between the first row vector of the input image and all row vectors in the codebook.
• Each row vector of the input image is represented as xi = {xi1, xi2, …, xiL}, where i = 1, 2, 3, …, and a code word of the codebook is denoted as cj = {cj1, cj2, …, cjL}, where j = 1, 2, …, Nc.
• Note down these distances, find the minimum of all the distances, and place the index of the code word with minimum distance in the first index position of the input image. This means that the corresponding code word of the codebook is nearest to that input image vector.
• This repeats for all the remaining rows in the image.
Step 3 Once all the input image vectors are processed, the centroids of the partition regions are computed as in Step 2

The distortion d(x, Cj) between the input image vector x and the codebook Cj, j = 1, 2, 3, …, Nc, is calculated at the encoder block. The index of the code word/codebook vector under the nearest-neighbor rule is transmitted to the decoder if the distortion is small. The index table of all vectors of the input image is transmitted to the receiver.

3.2.1 Updating the code book

Step 4 This can be done using K-means clustering. If the index values of input image row vectors are the same, then the rows of the corresponding indexes are averaged, and that averaged/updated row is placed at the same index position of the codebook. This repeats for all similar index values in the input vectors of the image and gives an updated codebook (Kumar et al. 2018)
Step 5 Each vector of the input image is assigned to a corresponding code word, and that code word index replaces the associated input vectors to achieve the aim of compression

To calculate the distance between pixels of any two images, different distance measures, namely the city block distance, the Euclidean distance and the chessboard distance, are used. The Euclidean distance is the straight-line distance between two pixels. The city block distance metric measures the path between the pixels based on a 4-connected neighborhood, while the chessboard distance metric measures the path based on an 8-connected neighborhood. In this clustering process, the Euclidean distance is sufficient to obtain the distances between the pixels of the input image and the corresponding pixels of the codebook.
All the Euclidean distances between the input and the codebook are averaged to calculate the distortion Dm+1. If the distortion is minimal, that is the final codebook; otherwise the process is repeated:

Distortion D = (1/Nb) ∑_{i=1..Nb} ∑_{j=1..Nc} 𝜇ij · ‖xi − Cj‖²   (1)

where xi is the ith vector of the input image with block size Nb, cj is the jth vector of the codebook with size Nc, and 𝜇ij is '1' if xi is in the jth cluster of the codebook, else '0'.
If Dm − Dm+1 < T, where T is a predefined threshold, the algorithm stops; otherwise m is incremented by one and the above Steps 2 to 4 are repeated (Qinghai 2010; Chen 2005).

3.2.2 Image compression using optimization algorithms

(a) Image compression using PSO

PSO is an evolutionary algorithm attributed to Kennedy and Eberhart in 1995; it was inspired by the social movement behavior of a bird flock or fish school. Each individual particle represents a probable solution to the problem. The fitness function is evaluated based on the fitness value of the particle's position. Every particle has two reference positions, named the global best (gbest) and the personal best (pbest). The position of the particle with the highest fitness value among all particles is called gbest, and the position of the particle itself with its highest fitness value is termed pbest. The particles move with their own velocities in the search space and change their positions with respect to the gbest and pbest positions by updating their velocities (Patane and Russo 2002).

3.2.3 LBG-PSO optimisation for image compression

For the compression of images, each codebook is taken as a particle in PSO. To obtain a better codebook, the fitness/distortion function obtained by minimizing the error is taken as the objective function. The LBG-PSO algorithm is explained in the following steps (Feng et al. 2007):

Step 1: Initially, execute the LBG algorithm to generate a codebook, and take that codebook as gbest
Step 2: The remaining codebooks from LBG are initialized with random positions and random velocities
Step 3: Compute the fitness/distortion value for each codebook using Eq. (1) (Yang 2008), which is the same distortion measure used in LBG
Step 4: If the distortion function value of the present codebook is lower than the fitness function value (pbest) of the previous one, then the new fitness value of the present codebook is taken as pbest. This process continues, computing the fitness value, until all the vectors of the input image are processed
Step 5: From all the fitness values of the codebooks, choose the lowest fitness value; if this value is better than gbest, the chosen minimum fitness value is taken as the new gbest
Step 6: Using the PSO algorithm, the velocity and position of each particle are updated using Eqs. (2) and (3) to obtain a new position:

V_ik^(n+1) = V_ik^n + C1 · rand1^n · (pbest_ik^n − x_ik^n) + C2 · rand2^n · (gbest_k^n − x_ik^n)   (2)

x_ik^(n+1) = x_ik^n + V_ik^(n+1)   (3)

where k is the number of solutions in the search space, i is the particle position, C1 is the cognitive coefficient, C2 is the social coefficient (0 ≤ C1, C2 ≤ 2), rand1 and rand2 are random values (0 ≤ rand1, rand2 ≤ 1), V_ik is the velocity of the particle at position i, x_ik is the position of the particle, pbest is the particle's individual best solution, and gbest is the swarm's best solution.

Step 7: Repeat Steps 3 to 6 until the maximum number of iterations is reached

The parameters used in the PSO-LBG algorithm (Qinghai 2010) are given in Table 1.

Table 1 Parameters of PSO

Parameter | Description | Value
S | Number of particles | 30
N | Number of iterations | 20
C1 | Cognitive coefficient | 0–1 random value
C2 | Social coefficient | 0–1 random value

(b) Firefly algorithm (FA)

This algorithm was proposed in 2008 by Xin-She Yang, and it was stimulated by the characteristics and flashing behavior of fireflies. In this paper, using FA, the brightness of a firefly is taken as the value of the objective function. Generally, a firefly of low brightness, i.e. low fitness/objective function value, moves toward a firefly of higher brightness, i.e. higher objective function value (Vijayvargiya et al. 2014). The process of optimization using the firefly algorithm is explained in the flow chart given below (Table 2).

3.2.4 LBG-FA optimisation for image compression

In this method of image compression, the codebooks are taken as fireflies. For a minimization problem, the quality or value of the fitness function is taken as the objective function (Horng 2012). The process of image compression using FA and PSO is described in the following steps (Karri and Jena 2016).

Step 1: Initially, the LBG algorithm is applied to obtain codebooks
Step 2: From all the codebooks obtained from LBG, the best codebook is taken and applied to FA
Step 3: In the LBG-FA method, the following parameters are assumed: maximum number of iterations = 15, population = 20, mutation coefficient (α) = 0.01, mutation coefficient damping ratio (αdamp) = 0.99, attractiveness (β0) = 2, light absorption coefficient (γ) = 1
Step 4: The rest of the fireflies are set with random codebooks
Step 5: From those codebooks, a codebook is selected randomly and its fitness/objective value is computed
Step 6: The intensity and position of the fireflies are updated; a firefly of low brightness, i.e. low fitness/objective function value, moves toward a firefly of higher brightness, i.e. higher objective function value
Step 7: The updated codebook from the firefly algorithm is the optimum codebook, which is used for compression using vector quantization

3.2.5 Proposed hybrid LBG-PSO-FA for image compression

The Linde–Buzo–Gray (LBG) algorithm is used in VQ to generate local codebooks for the compression of images, but it results in low image quality with a low peak signal-to-noise ratio (PSNR). To increase the quality of the image, optimization techniques are used after the LBG method, optimizing the codebooks to generate a global codebook. Particle swarm optimization (PSO) and the firefly algorithm (FA) (Chiranjeevi and Jena 2018) were used after the LBG method to generate global codebooks, improving the image quality and PSNR over LBG alone. To further enhance the quality of the image, a new hybrid technique, LBG-PSO-FA, is proposed, generating enhanced global codebooks after LBG. In this technique, FA is used to increase the particle velocity in PSO, thereby updating the position (codebooks). Through this hybridization of PSO with FA, efficient codebooks are generated, increasing the quality of the image with higher PSNR (Fig. 3).
The procedure of image compression using LBG-PSO-FA (Horng 2012; Ali et al. 2014) is explained in Fig. 4.

4 Results and discussion

In this paper, experiments were conducted to design an enhanced codebook for the efficient compression of images. The experiments were performed on grayscale brain MRI medical images with a pixel amplitude resolution of 8 bits and a size of 256 × 256, collected from the BraTS 2018 dataset. For evaluation of the results, the original data are compared with the compressed data from the BraTS dataset. Initially, each image is divided into non-overlapping blocks of 4 × 4 pixels, and each non-overlapping block of 16 elements is taken as an input vector. Hence, a total image of size 256 × 256 is converted into 4096 input vectors to encode and compress the image. PSO and FA are the search algorithms and help LBG to decrease the complexity of the system.
The various quality metrics that describe the performance of the compression techniques are given below.
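Before turning to the metrics, one iteration of the search described in Sects. 3.2.3–3.2.5 can be summarized in code. This is a minimal sketch under stated assumptions (codebooks flattened into particle vectors, a simplified firefly attraction step, tiny illustrative sizes and constants), not the authors' implementation:

```python
import math
import random

random.seed(1)

L = 4    # code-word length (a flattened 2 x 2 block, for illustration)
NC = 2   # code words per codebook; one codebook = one particle/firefly

def split(particle):
    """View a flat particle of NC*L values as a codebook of NC code words."""
    return [particle[k * L:(k + 1) * L] for k in range(NC)]

def distortion(particle, vectors):
    """Eq. (1): mean squared distance from each training vector to its
    nearest code word; this is the quantity the swarm minimizes."""
    cb = split(particle)
    return sum(min(sum((a - b) ** 2 for a, b in zip(v, c)) for c in cb)
               for v in vectors) / len(vectors)

def hybrid_step(swarm, vel, pbest, gbest, vectors,
                c1=1.5, c2=1.5, beta0=2.0, gamma=1.0, alpha=0.01):
    """One LBG-PSO-FA iteration: PSO update (Eqs. 2-3), then a
    firefly-style move of each particle toward brighter (fitter) ones."""
    for i in range(len(swarm)):
        r1, r2 = random.random(), random.random()
        for d in range(NC * L):
            vel[i][d] += (c1 * r1 * (pbest[i][d] - swarm[i][d])
                          + c2 * r2 * (gbest[d] - swarm[i][d]))   # Eq. (2)
            swarm[i][d] += vel[i][d]                               # Eq. (3)
    fit = [distortion(p, vectors) for p in swarm]
    for i in range(len(swarm)):
        for j in range(len(swarm)):
            if fit[j] < fit[i]:                   # particle j is "brighter"
                r_sq = sum((a - b) ** 2 for a, b in zip(swarm[i], swarm[j]))
                beta = beta0 * math.exp(-gamma * r_sq)
                for d in range(NC * L):
                    swarm[i][d] += (beta * (swarm[j][d] - swarm[i][d])
                                    + alpha * (random.random() - 0.5))
    return swarm, vel

# Tiny demo: training vectors and a 3-particle swarm standing in for the
# LBG-initialized codebooks
vectors = [[random.uniform(0, 255) for _ in range(L)] for _ in range(8)]
swarm = [[random.uniform(0, 255) for _ in range(NC * L)] for _ in range(3)]
vel = [[0.0] * (NC * L) for _ in range(3)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=lambda p: distortion(p, vectors))[:]
before = distortion(gbest, vectors)
swarm, vel = hybrid_step(swarm, vel, pbest, gbest, vectors)
gbest = min(swarm + [gbest], key=lambda p: distortion(p, vectors))[:]
after = distortion(gbest, vectors)
```

Because the new gbest is chosen over the moved swarm together with the old gbest, the best distortion found can only stay the same or improve from one iteration to the next.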
Table 3 Quality evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG-vector quantization of brain MRI image1

Code book size | PSNR (dB) | MSE | SSIM | Entropy | SC | MAE | Time (s)
4 | 20.116204 | 633.9819 | 0.623814 | 3.038537 | 1.119676 | 8.006912 | 1.810726
8 | 20.484436 | 582.21 | 0.63835 | 3.567117 | 1.096271 | 7.402344 | 2.455646
16 | 21.152981 | 498.976 | 0.678668 | 4.131138 | 1.081915 | 6.364288 | 1.890815
32 | 22.79279 | 342.042 | 0.713361 | 4.299359 | 1.054376 | 5.569794 | 2.064042
64 | 23.199833 | 311.9466 | 0.71934 | 4.453125 | 1.050142 | 5.212479 | 2.158178
128 | 23.542492 | 287.793 | 0.751904 | 5.0417 | 1.036112 | 4.688126 | 3.086827
256 | 24.019024 | 258.273 | 0.769598 | 5.597836 | 1.024657 | 4.324036 | 3.424933
512 | 24.471067 | 232.316 | 0.773454 | 5.841739 | 1.021237 | 4.107986 | 6.235416
1024 | 26.066393 | 161.943 | 0.790247 | 5.873002 | 1.021270 | 3.78183 | 10.025146
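The MSE and PSNR columns in Tables 3–9 follow their standard definitions for 8-bit images (peak value 255); a minimal computation sketch, independent of the authors' code:

```python
import math

def mse(original, reconstructed):
    """Mean squared error between two equal-sized pixel sequences."""
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak = 255 for 8-bit images."""
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)

orig = [10, 20, 30, 40]
rec = [12, 18, 33, 40]
print(round(mse(orig, rec), 3))    # 4.25
print(round(psnr(orig, rec), 2))   # 41.85
```

A halving of MSE raises PSNR by about 3 dB, which is why the roughly four-fold MSE drop from LBG to the proposed method in the later tables corresponds to a several-dB PSNR gain.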
Table 4 Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG-PSO of brain MRI image1

Code book size | PSNR (dB) | MSE | SSIM | Entropy | SC | MAE | Time (s)
4 | 20.226098 | 618.29 | 0.630063 | 3.358189 | 1.116542 | 7.915192 | 100.62798
8 | 21.281386 | 393.622 | 0.675527 | 3.836849 | 1.088804 | 6.405502 | 127.24702
16 | 22.32468 | 381.136 | 0.71188 | 4.277947 | 1.072562 | 5.668732 | 148.54
32 | 23.377908 | 272.948 | 0.729161 | 4.380785 | 1.037423 | 5.373474 | 198.255
64 | 24.002987 | 258.869 | 0.752919 | 4.922394 | 1.040475 | 4.840118 | 221.34597
128 | 24.445844 | 233.388 | 0.76038 | 5.333835 | 1.034379 | 4.453659 | 251.50975
256 | 25.068761 | 202.805 | 0.773376 | 5.522709 | 1.023678 | 4.293304 | 290.67878
512 | 25.485063 | 192.788 | 0.778161 | 5.875153 | 1.021947 | 3.973694 | 327.24
1024 | 27.486427 | 116.166 | 0.80493 | 6.521489 | 1.019255 | 3.633087 | 523.638
Computational complexity: the fitness of a codebook is evaluated as

Fitness = 1 / (distance)

The genetic algorithm and PSO have linear fitness functions, so their time complexity is evaluated as O(n).
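A sketch of this linear-time fitness evaluation, assuming the squared-distance term of Eq. (1) as the distance (an illustrative helper, not the authors' code):

```python
def fitness(codeword, vectors):
    """Linear-time fitness: one pass over the n training vectors,
    scoring a code word as 1 / (total squared distance to it)."""
    dist = sum(sum((a - b) ** 2 for a, b in zip(v, codeword)) for v in vectors)
    return 1.0 / dist if dist else float("inf")

print(fitness([1, 1], [[0, 0], [2, 2]]))   # 0.25
```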
From Table 4, quality metrics such as PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for those code book sizes using LBG-PSO of brain MRI image1. From the table, it is clear that for the highest codebook size, PSNR attains better values when compared with the lower code book sizes. The PSNR value obtained is 27.48 dB, the MSE is 116.166, the SSIM is 0.80 and the entropy is 6.521, while more time, 523.63 s, is consumed compared to the lower code book sizes.

From Table 5, quality metrics such as PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for those code book sizes using LBG-FA of brain MRI image1. From the table, it is clear that for the highest codebook size, PSNR attains better values when compared with the lower code book sizes. The PSNR value obtained is 27.99 dB, the MSE is 103.295, the SSIM is 0.81 and the entropy is 7.63, while more time, 2022 s, is consumed compared to the lower code book sizes.

From Table 6, quality metrics such as PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for those code book sizes using the proposed LBG-PSO-FA of brain MRI image1. From the table, it is clear that for the highest codebook size, PSNR attains better values when compared with the lower code book sizes. The PSNR value obtained is 33.5432 dB, the MSE is 28.56, the SSIM is 0.945 and the entropy is 7.91, while more time, 2145.45 s, is consumed compared to the lower code book sizes.
Fig. 9 Graph represents the variations of PSNR of LBG, LBG-PSO and LBG-firefly of MRI of brain image2 (PSNR in dB, plotted against bits per pixel over the range 0.1–0.7)

From Table 7, quality metrics such as PSNR, MSE, SSIM, SC, MAE and computation time were evaluated for those code book sizes using LBG of brain MRI image2. From the table, it is clear that for the highest codebook size, PSNR attains better values when compared with the lower code book sizes. The PSNR value obtained is 28.14 dB, the MSE is 99.78, the SSIM is 0.70 and the entropy is 6.26, while more time, 10.06 s, is consumed compared to the lower code book sizes.

Table 10 shows the performance measures of the existing network models compared with the proposed model. All the neural networks are connected directly to the fully connected output layer. The performance measures such as PSNR, MSE and time usage generated from the network models are tabulated in the table.

Table 11 shows the comparison table for the performance measures PSNR, RMSE, SSIM and sizes. The proposed method, LBG-PSO-FA, is compared with the existing methods of Karri and Jena (2016) and Kumar et al. (2018). The values obtained are the average values of each performance measure with respect to the high-resolution code size of 1024. Image types such as CT and MRI are utilized for the comparison of the existing methods and the proposed method. Table 10 shows that the proposed method attains better PSNR and MSE values, though with a higher time requirement for the compressed image size, when compared with the existing methods.
Table 6 Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using the proposed LBG-PSO-FA of brain MRI

Codebook size | PSNR (dB) | MSE | SSIM | Entropy | SC | MAE | Time (s)
4 | 24.022813 | 257.679 | 0.72266 | 3.658331 | 1.082474 | 6.684784 | 594.8134
8 | 25.577514 | 180.335 | 0.777572 | 4.452005 | 1.007779 | 6.268768 | 649.1922
16 | 26.067963 | 160.834 | 0.80143 | 4.596651 | 1.000872 | 5.111008 | 954.1569
32 | 26.522056 | 144.837 | 0.80945 | 4.775273 | 1.011872 | 4.599869 | 1090.521
64 | 27.224643 | 123.333 | 0.839563 | 5.679104 | 1.024123 | 4.395432 | 1143.732
128 | 28.698992 | 87.9185 | 0.863125 | 5.876631 | 1.028906 | 3.813327 | 1287.455
256 | 29.129645 | 79.63 | 0.8945 | 7.092261 | 1.016520 | 3.612073 | 17341.28
512 | 31.876523 | 42.2163 | 0.916576 | 7.616867 | 1.011395 | 3.13523 | 1987.87
1024 | 33.5432 | 28.56 | 0.945841 | 7.916088 | 1.012133 | 2.911499 | 2145.45
Table 7 Performance evaluation metrics of PSNR, MSE, SSIM, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG of brain MRI image2

Codebook size | PSNR (dB) | MSE | SSIM | Entropy | SC | MAE | Time (s)
4 | 21.838854 | 426.658 | 0.621305 | 2.988422 | 1.019854 | 8.269699 | 1.279674
8 | 22.871 | 335.722 | 0.636542 | 3.131723 | 1.018463 | 7.963654 | 1.727146
16 | 23.242004 | 308.375 | 0.641361 | 3.6002 | 1.020062 | 6.018372 | 1.844802
32 | 23.629272 | 282.540 | 0.651528 | 3.673754 | 1.003153 | 6.152772 | 2.366126
64 | 25.4198 | 186.758 | 0.659755 | 4.03705 | 1.003642 | 5.434601 | 2.394593
128 | 26.378134 | 149.996 | 0.667144 | 4.3675 | 1.003096 | 4.8629 | 2.613611
256 | 26.9097 | 132.467 | 0.680929 | 4.913372 | 1.009103 | 4.063004 | 3.565143
512 | 27.67067 | 111.193 | 0.703252 | 5.673273 | 1.00647 | 3.93042 | 6.02487
1024 | 28.147189 | 99.788 | 0.7105174 | 6.26 | 1.005678 | 3.698013 | 10.0644
Table 8 Performance evaluation metrics of PSNR, MSE, SSIM, SC, MAE, entropy and computation time with different code book sizes of 4, 8, 16, 32, 64, 128, 256, 512, 1024 using LBG-PSO of brain MRI image2

Codebook size | PSNR (dB) | MSE | SSIM | Entropy | SC | MAE | Time (s)
4 | 22.933696 | 331.192 | 0.5919137 | 3.152839 | 1.017119 | 7.279388 | 224.320973
8 | 23.16574 | 314.108 | 0.617405 | 3.203545 | 1.014971 | 7.007004 | 315.526806
16 | 23.63 | 281.890 | 0.624377 | 4.151657 | 1.020399 | 5.696808 | 315.736005
32 | 24.589057 | 226.506 | 0.624842 | 4.134273 | 1.016413 | 5.817688 | 305.687586
64 | 25.688086 | 175.824 | 0.6558 | 4.2456 | 1.001967 | 5.36756 | 305.557828
128 | 26.95233 | 131.244 | 0.6898 | 4.450055 | 1.003870 | 4.508911 | 241.412837
256 | 27.705987 | 110.428 | 0.721545 | 5.282238 | 1.006592 | 3.7928 | 326.636017
512 | 28.06931 | 101.643 | 0.719766 | 5.7832 | 1.004146 | 3.73211 | 542.156714
1024 | 28.4125 | 93.719 | 0.731532 | 6.91 | 1.3365 | 3.6627 | 891.699397
Table 9 Quality metrics of PSNR, MSE, SSIM, SC, MAE and computation time with different code book sizes using LBG-FA of brain MRI image2

Code book size | PSNR (dB) | MSE | SSIM | Entropy | SC | MAE | Time (s)
4 | 23.024681 | 324.399 | 0.616744 | 3.392008 | 1.017265 | 7.161774 | 1027.84457
8 | 23.62866 | 282.020 | 0.62234 | 3.786627 | 1.015187 | 6.830887 | 1105.19047
16 | 24.626091 | 224.429 | 0.642351 | 4.76442 | 1.0068 | 5.2765 | 12267.965
32 | 25.254764 | 194.124 | 0.649563 | 4.210303 | 1.0044385 | 5.485458 | 1488.5247
64 | 26.4728 | 146.487 | 0.669698 | 4.405967 | 1.004906 | 4.742783 | 1593.369
128 | 27.105557 | 126.642 | 0.693132 | 4.81855 | 1.0029 | 4.23055 | 2091.32
256 | 27.82975 | 107.196 | 0.74321 | 5.7431 | 1.008796 | 3.386121 | 2521.193
512 | 28.5678 | 90.59 | 0.725015 | 6.70237 | 1.005662 | 3.680237 | 3583.103
1024 | 28.9867 | 82.239 | 0.7543 | 7.8964 | 1.003459 | 3.1987 | 4359.39755
Table 10 The comparison table for performance measures of the existing network models and the proposed method

Models | Codebook size | PSNR (dB) | MSE | Time (s)
Artificial neural network | 512 | 27.5932 | 32.001 | 2832.37
Artificial neural network | 1024 | 28.945 | 31.75 | 2700.64
Neural network | 512 | 29.956 | 30.834 | 2745.73
Neural network | 1024 | 30.124 | 30.134 | 2643.54
Deep neural network (lossy) | 512 | 31.076 | 30.003 | 2601.08
Deep neural network (lossy) | 1024 | 31.53 | 29.603 | 2536.86
Convolutional neural network | 512 | 32.09 | 29.05 | 2456.98
Convolutional neural network | 1024 | 32.536 | 29.002 | 2354
Proposed | 512 | 32.104 | 28.45 | 2178.64
Proposed | 1024 | 33.5432 | 28.56 | 2145.45
Table 11 The comparison table for performance measures of the methods obtained from the literature and the proposed method

Authors | Codebook size | Type | PSNR (dB) | MSE | Time (s)
Karri and Jena (2016) | 1024 | CT | 29.5 | 72.95 | 1736.11
Kumar et al. (2018) | 1024 | CT | 32.625 | 0.0004 | 2587.64
Proposed method | 1024 | MRI | 33.5432 | 28.56 | 2145.45