5th Annual International Conference on Advances in Biotechnology (BioTech 2015)
Copyright © GSTF 2015, ISSN 2251-2489, doi: 10.5176/2251-2489_BioTech15.46

Multifocus Medical Image Fusion based on Fractional Lower Order Moments and Modified Spatial Frequency

T. Tirupal¹, B. Chandra Mohan², and S. Srinivas Kumar³
¹ JNTUK, Kakinada, India
² Dept. of ECE, Bapatla Engineering College, Bapatla, India
³ Dept. of ECE, JNTUK, Kakinada, India
E-mail: irutalari@email.com, chandrabhuma@email.com, samay_ssk2@yahoo.com

Abstract— The objective of image fusion for medical images is to combine multiple images obtained from various sources into a single image suitable for better diagnosis. An algorithm is proposed that develops a novel fusion rule for multifocus medical images based on fractional lower order moments. First, the multifocus medical images are decomposed using the Undecimated Discrete Wavelet Transform (UDWT). Most state-of-the-art image fusion techniques are based on the Discrete Wavelet Transform (DWT), and a slight blurring is noticed when the DWT is used for image fusion; with UDWT, this blurring is significantly reduced. Then, the low frequency subbands are fused using fractional lower order moments and the high frequency subbands are fused using the modified spatial frequency (MSF). Finally, the fused image is reconstructed using the inverse UDWT. Experimental results on several pairs of multifocus medical images indicate that the proposed algorithm is superior to existing algorithms in terms of spatial frequency, image quality index, edge information preservation, and various other image quality metrics.

Keywords— image fusion; moments; DWT; UDWT; MSF

I. INTRODUCTION

Image processing methods such as image fusion are growing in importance in modern medicine and health care for fusing multimodal medical images such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), and positron emission tomography (PET). Over the past decade, research in the processing and analysis of medical data has begun to flourish. Fusing CT and MRI provides abundant information that is useful for diagnosis: the CT image depicts dense structures like bones and implants with little distortion but cannot detect physiological changes, while the MR image provides information on normal and pathological soft tissues. The need to extract detailed information from images has provided major impetus for developing new algorithms in signal and image processing. The overall objective of computer-aided diagnostic systems is to enable early diagnosis, disease monitoring, and better treatment. Medical images from different modalities provide complementary information, which is integrated for better analysis. Fusion of multimodal medical images provides a single composite image that is dependable for better and improved analysis and diagnosis. In medical image processing, the most important research issue is to extract maximum information by fusing multimodal images. Existing research methods include the pixel averaging method [1], the gradient pyramid [2], the Laplacian pyramid [3], the contrast pyramid [4], the ratio-of-low-pass pyramid [5], the morphological pyramid [6], the ripplet transform [7], and the DWT method [8-10]. A thorough survey of medical image fusion techniques is given in [11].
The simplest method of image fusion is to take the pixel-by-pixel average of two images, which leads to undesirable side effects such as contrast reduction. All pyramid-based methods fail to introduce spatial orientation selectivity into the decomposition process, which causes blocking effects and undesired edges in the fused image. Another multiresolution fusion technique is the wavelet-based method, which uses the discrete wavelet transform (DWT); the DWT preserves different frequency information in stable form and allows good localization in both the time and frequency domains. The major drawback of the DWT is that it is not shift invariant, so minor changes in the input image cause major changes in the wavelet coefficients. In medical image analysis it is important to know and preserve the exact location of this information, and this is achieved with the undecimated discrete wavelet transform [12]. Moments have extensive applications in the field of image processing [13-17] due to their ability to represent global features. Moment invariants have also been used for the recognition of ships, aircraft, and edges of images; Zernike and Legendre moments can reconstruct an image easily using a set of orthogonal moments. Fusion of multimodal medical images based on fractional lower order moments relies on two measures: the dispersion, estimated in a neighborhood around the reference wavelet coefficients as a salience measure, and the symmetric coefficient of covariation, introduced to account for the similarities between corresponding patterns in the two subbands to be fused. In this paper, a new method based on the undecimated discrete wavelet transform and fractional lower order moments is proposed for multimodal medical images, in which the low frequency subbands are fused using lower order moments and the high frequency subbands are fused using the modified spatial frequency. Finally, the inverse UDWT is applied to obtain the fused image.

II. FUSION ALGORITHMS

A. Image Fusion by Simple Average

This is a basic and straightforward technique in which fusion is achieved by taking the simple average [1] of corresponding pixels in the input images:

I_f(x, y) = \frac{I_1(x, y) + I_2(x, y)}{2}   (1)
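As a minimal illustration of eq. (1), the following NumPy sketch averages two registered source images; the function name and the floating-point conversion are our choices, not part of the paper.

```python
import numpy as np

def average_fusion(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Pixel-by-pixel average of two registered source images, eq. (1)."""
    assert img1.shape == img2.shape, "source images must be registered and equal-sized"
    # Compute in float to avoid integer overflow when summing pixel values
    return 0.5 * (img1.astype(np.float64) + img2.astype(np.float64))
```

As noted above, this rule trades simplicity for a loss of contrast in the fused result.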
B. Principal Component Analysis (PCA)

PCA [18] involves a mathematical procedure that transforms a number of correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variance in the data as possible, and each succeeding component accounts for as much of the remaining variance as possible. The first principal component is taken along the direction of maximum variance; the second is constrained to lie in the subspace perpendicular to the first. The third principal component is taken along the maximum-variance direction in the subspace perpendicular to the first two, and so on.

1) PCA Algorithm: The two source images are arranged in two column vectors. The steps followed to project this data onto a 2-D subspace are:

Step 1: Arrange the data into column vectors. The resulting matrix Z is of dimension 2 × n.
Step 2: Compute the empirical mean along each row of Z; the empirical mean vector M has dimension 2 × 1.
Step 3: Subtract the empirical mean vector M from each column of the data matrix Z. The resulting matrix X is of dimension 2 × n.
Step 4: Compute the covariance matrix C of X as C = X X^T.
Step 5: Compute the eigenvectors V and eigenvalues D of C and sort them by decreasing eigenvalue. Both V and D are of dimension 2 × 2.
Step 6: Consider the column of V corresponding to the larger eigenvalue and compute P_1 and P_2 as

P_1 = \frac{V(1)}{V(1) + V(2)}, \qquad P_2 = \frac{V(2)}{V(1) + V(2)}   (4)

2) Image Fusion by PCA: The input images (images to be fused) I_1(x, y) and I_2(x, y) are arranged in two column vectors and their empirical means are subtracted. The eigenvector corresponding to the larger eigenvalue of the covariance of the resulting data is obtained, and the normalized components P_1 and P_2 (i.e., P_1 + P_2 = 1) are computed from it using eq. (4). The fused image is

I_f(x, y) = P_1 I_1(x, y) + P_2 I_2(x, y)   (5)
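The steps above map directly onto a few lines of NumPy. The sketch below is our illustration of the procedure in [18], not code from the paper; it assumes the dominant eigenvector has same-sign components, which holds for typical, positively correlated source images.

```python
import numpy as np

def pca_fusion(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """PCA-weighted image fusion following Steps 1-6 and eq. (5)."""
    # Step 1: each source image becomes one row of the 2 x n data matrix Z
    Z = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    # Steps 2-3: subtract the empirical mean of each image
    X = Z - Z.mean(axis=1, keepdims=True)
    # Step 4: 2 x 2 covariance matrix of the centred data
    C = X @ X.T
    # Step 5: eigen-decomposition (eigh returns eigenvalues in ascending order)
    D, V = np.linalg.eigh(C)
    v = V[:, np.argmax(D)]                    # eigenvector of the largest eigenvalue
    # Step 6 / eq. (4): normalize the components so that P1 + P2 = 1
    P1, P2 = v[0] / (v[0] + v[1]), v[1] / (v[0] + v[1])
    # Eq. (5): weighted combination of the source images
    return P1 * img1.astype(np.float64) + P2 * img2.astype(np.float64)
```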

C. Fractional Lower Order Moments

Fractional lower order moments are employed through two measures. The first is the dispersion γ [19], given by

\gamma = \left( \frac{E|X|^p}{C(p,\alpha)} \right)^{\alpha/p}   (6)

where

C(p,\alpha) = \frac{2^{p+1}\, \Gamma\!\left(\frac{p+1}{2}\right) \Gamma\!\left(-\frac{p}{\alpha}\right)}{\alpha \sqrt{\pi}\, \Gamma\!\left(-\frac{p}{2}\right)}   (7)

and Γ(·) is the Gamma function. The other measure [20] captures the similarity between the corresponding patterns in the two source images that are to be fused. Let X_1 and X_2 be two jointly symmetric alpha-stable random variables; the covariation of X_1 and X_2 is estimated as

[X_1, X_2]_\alpha = \frac{E\!\left(X_1 X_2^{\langle p-1 \rangle}\right)}{E|X_2|^p}\, \gamma_2, \quad 1 \le p < \alpha   (8)

where z^{\langle p \rangle} = |z|^p \operatorname{sign}(z) and γ_2 is the dispersion of X_2. The symmetric covariation coefficient is then defined as

\operatorname{Corr}_\alpha(X_1, X_2) = \frac{[X_1, X_2]_\alpha\, [X_2, X_1]_\alpha}{[X_1, X_1]_\alpha\, [X_2, X_2]_\alpha}   (9)

and it takes values between -1 and 1.

D. Undecimated Discrete Wavelet Transform (UDWT)

The Discrete Wavelet Transform (DWT) is referred to as Mallat's algorithm [21], which is based on an orthogonal decomposition of the image onto a wavelet basis in order to avoid redundancy of information in the pyramid at each level of resolution. The Undecimated Discrete Wavelet Transform (UDWT), by contrast, avoids image decimation and has been developed for image processing applications such as denoising [22], texture classification [23], pattern recognition, and fusion. The discrete implementation of the UDWT can be accomplished with the 'à trous' (with holes) algorithm, which has the following interesting properties:

* The evolution of the wavelet decomposition can be followed from level to level: a single wavelet coefficient plane is produced at each level of decomposition.
* The wavelet coefficients are computed at every location, allowing better detection of dominant features.
* It is easily implemented.

The 'à trous' wavelet transform is a non-orthogonal multiresolution decomposition which separates the low-frequency information (approximation) from the high-frequency information (detail coefficients). Such a separation uses a low-pass filter h(n), associated with the scaling function φ(x), to obtain successive approximations of a signal through the scales:

a_{j+1}(k) = \sum_n h(n)\, a_j(k + n 2^j), \quad j = 0, \ldots, N-1   (10)

where a_0(k) corresponds to the original discrete signal s(k), and j and N are the scale index and the number of scales, respectively. The wavelet coefficients are extracted with a high-pass filter g(n), associated with the wavelet function ψ(x), through the filtering operation

w_{j+1}(k) = \sum_n g(n)\, a_j(k + n 2^j)   (11)

Perfect reconstruction of the data is obtained by introducing two dual filters hr(n) and gr(n) that satisfy the quadrature mirror filter condition

\sum_n \left[ hr(n) h(l - n) + gr(n) g(l - n) \right] = \delta(l)   (12)

where δ(l) is the Dirac function. A simple choice consists in taking the filters hr(n) and gr(n) equal to the Dirac function (hr(n) = gr(n) = δ(n)). Then g(n) is deduced from (12) as

g(n) = \delta(n) - h(n)   (13)

Hence, the wavelet coefficients are obtained as a simple difference between two successive approximations:

w_{j+1}(k) = a_j(k) - a_{j+1}(k)   (14)

To construct the sequence, this algorithm performs successive convolutions with a filter obtained from an auxiliary function known as the scaling function. Given an image I, the sequence of approximations is constructed as

A_1 = F(I), \quad A_2 = F(A_1), \quad A_3 = F(A_2), \ldots   (15)

where F is the scaling function. A B_3 cubic spline is often used to characterize the scaling function [24], and its use leads to a convolution with the 5 × 5 mask

\frac{1}{256} \begin{pmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{pmatrix}

III. PROPOSED METHOD

1) The source images X and Y are decomposed by UDWT into low frequency subbands (LFSs) and high frequency subbands (HFSs).

2) The coefficients of the LFSs are fused using the fractional lower order moments of section II-C. If the match measure Corr_α between corresponding coefficient neighborhoods exceeds a threshold T, the coefficients are combined with the weights

W_{min} = 0.5 - 0.5 \left( \frac{1 - \operatorname{Corr}_\alpha}{1 - T} \right), \qquad W_{max} = 1 - W_{min}

where, if γ_X > γ_Y, then W_X = W_max and W_Y = W_min; else W_X = W_min and W_Y = W_max. In this paper the threshold T = 0.1 is taken as the reference value.
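The following sketch shows one way the low frequency fusion rule could be realized, combining the dispersion estimator of eqs. (6)-(7), the symmetric covariation coefficient of eqs. (8)-(9), and the weights above. The values p = 1 and α = 1.5, the fallback to pure selection when the match measure does not exceed T, and all names are our assumptions for illustration; the paper fixes only T = 0.1.

```python
import numpy as np
from scipy.special import gamma as G

def flom_dispersion(x: np.ndarray, p: float = 1.0, alpha: float = 1.5) -> float:
    """Dispersion estimate of eqs. (6)-(7) over a coefficient neighborhood."""
    C = 2 ** (p + 1) * G((p + 1) / 2) * G(-p / alpha) / (alpha * np.sqrt(np.pi) * G(-p / 2))
    return (np.mean(np.abs(x) ** p) / C) ** (alpha / p)

def sym_covariation_coeff(x: np.ndarray, y: np.ndarray, p: float = 1.0) -> float:
    """Symmetric covariation coefficient of eq. (9), via the estimator of eq. (8)."""
    def cov(a, b):  # fractional lower order covariation (dispersion factors cancel in the ratio)
        return np.mean(a * np.sign(b) * np.abs(b) ** (p - 1)) / np.mean(np.abs(b) ** p)
    return cov(x, y) * cov(y, x) / (cov(x, x) * cov(y, y))

def lfs_weights(corr: float, gamma_x: float, gamma_y: float, T: float = 0.1):
    """Weights (W_X, W_Y) for one low frequency coefficient neighborhood."""
    if corr > T:
        w_min = 0.5 - 0.5 * (1.0 - corr) / (1.0 - T)
    else:
        w_min = 0.0  # assumed: pure selection when the patterns are dissimilar
    w_max = 1.0 - w_min
    return (w_max, w_min) if gamma_x > gamma_y else (w_min, w_max)
```

For each neighborhood, the fused low frequency coefficient would then be W_X · LFS_X + W_Y · LFS_Y.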
3) The coefficients of the HFSs are fused using the modified spatial frequency described in section II-E, and the fusion rule followed to obtain the fused High Frequency Subband (HFS) is

HFS_i^Z = \begin{cases} HFS_i^X, & \text{if } MSF_i^X > MSF_i^Y \\ HFS_i^Y, & \text{otherwise} \end{cases}   (28)

where HFS_i^Z is the fused High Frequency Subband image, and MSF_i^X and MSF_i^Y are the modified spatial frequencies of the i-th region of the HFS for images X and Y, respectively. Here, the size of each region is taken as 64 × 64.

4) The fused image is reconstructed from the processed low frequency subbands and high frequency subbands using the inverse UDWT.

Figure 1. Schematic diagram of the proposed medical image fusion method.

IV. EXPERIMENTAL RESULTS AND COMPARISONS

The performance of the proposed medical image fusion method on various medical images is presented here. The first example is shown in Fig. 2, which contains the output images of the different methods: Figs. 2a and 2b are two medical images, and Figs. 2c-f are the images fused by the pixel averaging method [1], the gradient pyramid method [2], the conventional DWT method [27], and the proposed method, respectively. Experimental results on several pairs of medical images, such as Figs. 3a and 3b, which are T1-weighted MR and MRA images with pathology, show that the proposed method produces better medical information than the other three approaches. Figures 4a and 4b are bottom- and top-focused T2-weighted MR images. We use the biorthogonal wavelet bior6.8, with a decomposition level of 1, as the wavelet basis for the proposed method. For further comparison of the fusion results, the following objective fidelity criteria are used; their values are listed in Table 1 for Figs. 2 and 3, Table 2 for Figs. 4 and 5, Table 3 for Figs. 6 and 7, and Table 4 for Fig. 8. Figure 9 presents the reference fused images for all the input source images. From Tables 1-4 we can observe that the values of several quality indices are larger for the proposed method than for the pixel averaging, gradient pyramid, and DWT methods.

A. Mutual Information (MI)

MI is a metric defined as the sum of the mutual information [28] between each source image and the fused image. Consider two source images X and Y and a fused image Z:

I_{ZX}(z, x) = \sum_{z,x} P_{ZX}(z, x) \log \frac{P_{ZX}(z, x)}{P_Z(z) P_X(x)}   (29)

I_{ZY}(z, y) = \sum_{z,y} P_{ZY}(z, y) \log \frac{P_{ZY}(z, y)}{P_Z(z) P_Y(y)}   (30)

where P_X, P_Y, and P_Z are the probability density functions of the images X, Y, and Z, respectively, and P_{ZX} and P_{ZY} are the joint probability density functions. The fusion performance measure is then defined as

MI = I_{ZX}(z, x) + I_{ZY}(z, y)   (31)

The larger the value of MI, the better the fusion result.

B. Entropy (H)

Entropy measures the information content of an image; an image with high information content has high entropy. It is defined as

H = -\sum_{i=0}^{L-1} h_f(i) \log_2 h_f(i)   (32)

where h_f(i) is the normalized histogram of the fused image and L is the number of frequency bins in the histogram.
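Both criteria follow directly from image histograms. The sketch below is a straightforward NumPy rendering of eqs. (29)-(32); the choice of 256 histogram bins and the function names are ours.

```python
import numpy as np

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Entropy H of eq. (32) from the normalized histogram of the fused image."""
    h, _ = np.histogram(img, bins=bins)
    h = h / h.sum()
    h = h[h > 0]                       # convention: 0 * log2(0) = 0
    return float(-np.sum(h * np.log2(h)))

def mutual_information(src: np.ndarray, fused: np.ndarray, bins: int = 256) -> float:
    """I_ZX of eq. (29): MI between one source image and the fused image."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_src = p_joint.sum(axis=1, keepdims=True)    # marginal of the source image
    p_fus = p_joint.sum(axis=0, keepdims=True)    # marginal of the fused image
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_src @ p_fus)[mask])))

def fusion_mi(x: np.ndarray, y: np.ndarray, z: np.ndarray, bins: int = 256) -> float:
    """Eq. (31): MI = I_ZX + I_ZY; larger values indicate a better fusion result."""
    return mutual_information(x, z, bins) + mutual_information(y, z, bins)
```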
Figure 2. Fusion results for CT and MR images: (a) CT image (b) MR image (c) fused image by pixel averaging (d) fused image by gradient pyramid (e) fused image by DWT (f) fused image by the proposed method.

Figure 3. Fusion results for T1-weighted MR and MRA images: (a) T1-weighted MR image (b) MRA image (c) fused image by pixel averaging (d) fused image by gradient pyramid (e) fused image by DWT (f) fused image by the proposed method.

TABLE 1. PERFORMANCE EVALUATION RESULTS FOR DIFFERENT FUSION METHODS FOR FIGURE 2 AND FIGURE 3 (MI and entropy in bits/symbol, SF, and PSNR in dB for the pixel averaging, gradient pyramid, DWT, and proposed methods).

Figure 4. Fusion results for T2-weighted MR images: (a) image blurred on the top (b) image blurred on the bottom (c) fused image by pixel averaging (d) fused image by gradient pyramid (e) fused image by DWT (f) fused image by the proposed method.

Figure 5. Fusion results for multimodality images: (a) MRI image (b) PET image (c) fused image by pixel averaging (d) fused image by gradient pyramid (e) fused image by DWT (f) fused image by the proposed method.

TABLE 2. PERFORMANCE EVALUATION RESULTS FOR DIFFERENT FUSION METHODS FOR FIGURE 4 AND FIGURE 5.

Figure 8. Fusion results for multimodality images: (a) PET image (b) CT image (c) fused image by pixel averaging (d) fused image by gradient pyramid (e) fused image by DWT (f) fused image by the proposed method.

Figure 9. Reference images: (a) CT-MRI (b) T1-weighted MR and MRA (c) T2-weighted MR (d) MRI-PET (e) MRI-SPECT (f) X-ray-VA (g) PET-CT.

TABLE 4. PERFORMANCE EVALUATION RESULTS FOR DIFFERENT FUSION METHODS FOR FIGURE 8.

C. Standard Deviation (SD)

This metric measures the contrast in the fused image. It is composed of signal and noise parts, and the standard deviation is most meaningful in the absence of noise. An image with high contrast has a high standard deviation, given by

SD = \sqrt{\sum_{i=0}^{L-1} \left(i - \bar{i}\right)^2 h_f(i)}, \qquad \bar{i} = \sum_{i=0}^{L-1} i\, h_f(i)   (33)

where h_f(i) is the normalized histogram of the fused image and L is the number of frequency bins in the histogram.
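A direct histogram-based rendering of eq. (33) is sketched below; the bin count and names are our illustrative choices, with bin centers standing in for the gray-level index i.

```python
import numpy as np

def std_metric(img: np.ndarray, bins: int = 256) -> float:
    """Contrast measure of eq. (33): standard deviation over the normalized histogram."""
    h, edges = np.histogram(img, bins=bins)
    h = h / h.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])      # representative gray level per bin
    mean = np.sum(centers * h)                    # histogram mean (i-bar)
    return float(np.sqrt(np.sum((centers - mean) ** 2 * h)))
```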
V. CONCLUSIONS

In this paper we have discussed a new method based on fractional lower order moments, UDWT, and modified spatial frequency for fusing multifocus medical images. The fusion of medical images plays an important role in many clinical applications, since fused images can provide more accurate information than any individual image. Some of the existing problems in pixel-level fusion methods, such as blurring effects, have been effectively overcome.

This paper presents the use of UDWT for medical image fusion, which consists of three steps. In the first step, the medical images to be fused are decomposed into subimages by UDWT. In the second step, the coefficients of the low-frequency band are fused using fractional lower order moments and the coefficients of the high-frequency band are fused using modified spatial frequency. In the last step, the fused image is constructed by the inverse UDWT with the composite coefficients. The performance of the proposed method proves its accuracy and strength over different image fusion techniques with the help of visual and quantitative measures.

REFERENCES

[1] N. Mitianoudis and T. Stathaki, "Pixel-based and region-based image fusion schemes using ICA bases," Information Fusion, vol. 8, no. 2, pp. 131-142, 2007.
[2] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," IEEE International Conference on Computer Vision, pp. 173-182, 1993.
[3] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532-540, 1983.
[4] A. Toet, J. van Ruyven, and J. M. Valeton, "Merging thermal and visual images by a contrast pyramid," Optical Engineering, vol. 28, no. 7, pp. 789-792, 1989.
[5] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.
[6] A. Toet, "A morphological pyramidal image decomposition," Pattern Recognition Letters, vol. 9, no. 4, pp. 255-261, 1989.
[7] S. Das, M. Chowdhury, and M. K. Kundu, "Medical image fusion based on ripplet transform type-I," Progress in Electromagnetics Research B, vol. 30, pp. 355-370, 2011.
[8] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[9] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, article ID 579341, 2010.
[10] Q. Guihong, Z. Dali, and Y. Pingfan, "Medical image fusion by wavelet transform modulus maxima," Optics Express, vol. 9, no. 4, pp. 184-190, 2001.
[11] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, 2014.
[12] J. E. Fowler, "The redundant discrete wavelet transform and additive noise," IEEE Signal Processing Letters, vol. 12, no. 9, pp. 629-632, 2005.
[13] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inform. Theory, vol. IT-8, pp. 179-187, Feb. 1962.
[14] M. R. Teague, "Image analysis via the general theory of moments," J. Opt. Soc. Amer., vol. 70, pp. 920-930, Aug. 1980.
[15] S. Dudani, K. Breeding, and R. McGhee, "Aircraft identification by moment invariants," IEEE Trans. Comput., vol. C-26, pp. 39-45, Feb. 1977.
[16] D. Casasent and R. Cheatham, "Image segmentation and real-image tests for an optical moment-based feature extractor," Opt. Commun., vol. 51, pp. 227-230, Sept. 1984.
[17] S. O. Belkasim, M. Sridhar, and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, no. 12, pp. 1117-1138, 1991.
[18] V. P. S. Naidu and J. R. Raol, "Pixel-level image fusion using wavelets and principal component analysis," Defence Science Journal, vol. 58, no. 3, pp. 338-352, May 2008.
[19] C. L. Nikias and M. Shao, Signal Processing with Alpha-Stable Distributions and Applications, New York: John Wiley and Sons, 1995.
[20] G. Samorodnitsky and M. S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, New York: Chapman and Hall, 1994.
[21] S. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.
[22] M. Malfait and D. Roose, "Wavelet-based image denoising using a Markov random field a priori model," IEEE Transactions on Image Processing, vol. 6, no. 4, pp. 549-565, 1997.
[23] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549-1560, 1995.
[24] J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1204-1211, 1999.
[25] A. Eskicioglu and P. Fisher, "Image quality measures and their performance," IEEE Transactions on Communications, vol. 43, no. 12, pp. 2959-2965, 1995.
[26] S. Das and M. K. Kundu, "Ripplet based multimodality medical image fusion using pulse-coupled neural network and modified spatial frequency," IEEE International Conference on Recent Trends in Information Systems, pp. 229-234, 2011.
[27] H. O. Shanker Mishra and S. Bhatnagar, "MRI and CT image fusion based on wavelet transform," International Journal of Information and Computation Technology, vol. 4, no. 1, pp. 47-52, 2014.
[28] Shutao Li and Bin Yang, "Multifocus image fusion using region segmentation and spatial frequency," Image and Vision Computing, vol. 26, no. 7, pp. 971-979, 2008.
