
Unified Fast Algorithm for Most Commonly used Transforms using Mixed Radix and Kronecker product

Dr. H.B. Kekre

Senior Professor, Department of Computer Science, Mukesh Patel School of Technology Management and Engineering Mumbai, India

Dr. Tanuja Sarode

Associate Professor, Department of Computer Science, Thadomal Shahani College of Engineering Mumbai, India

Rekha Vig

Asst. Prof. and Research Scholar, Dept. of Elec. and Telecom. Mukesh Patel School of Technology Management and Engineering Mumbai, India

Abstract— In this paper we present a unified algorithm, with some minor modifications, applicable to most of the commonly used transforms. There are many transforms used in signal and image processing for data compression and many other applications, and many authors have given different algorithms, developed at different points of time, for reducing the complexity and increasing the speed of computation. The paper shows how the mixed radix system of counting can be used along with the Kronecker product of matrices, leading to a fast algorithm that reduces the complexity to logarithmic order. The results of using such transforms are shown for both 1-D and 2-D (image) signals, and considerable compression is observed in each case.

Keywords- Orthogonal transforms, Data compression, Fast algorithm, Kronecker product, Decimation in Time, Decimation in Frequency, Mixed radix system of counting

The precursor of the transforms was the Fourier series, used to express functions on finite intervals. It was given by Joseph Fourier, the French mathematician and physicist who initiated the Fourier series and its applications to problems of heat transfer and vibrations [8]. Using the Fourier series, just about any practical function of time can be represented as a sum of sines and cosines, each suitably scaled, shifted and "squeezed" or "stretched". Later the Fourier transform was developed to remove the requirement of finite intervals and to accommodate all types of signals [3]. The Laplace transform technique followed, which converts the frequency representation into a two-dimensional s-plane, termed the "complex frequency" domain. The DFT is a transform for Fourier analysis of finite-domain discrete-time functions, which only evaluates enough frequency components to reconstruct the finite segment that is analyzed. Variants of the discrete Fourier transform were used by Alexis Clairaut [30] in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange [30], in computing the coefficients of a trigonometric series for a vibrating string. The data which both considered had periodic patterns and consisted of discrete samples of an unknown function, and since the approximating functions were finite sums of trigonometric functions, their work led to some of the earliest expressions of the discrete Fourier transform [8]. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform [10]); a true cosine+sine DFT was used by Gauss [7] in 1805 for trigonometric interpolation of asteroid orbits.
Equally significant is a small calculation buried in Gauss' treatise on interpolation, which appeared posthumously in 1866 as an unpublished paper and shows the first clear and indisputable use of the fast Fourier transform (FFT) [5][6], generally attributed to Cooley and Tukey [4] in 1965. The FFT is a very efficient algorithm for calculating the discrete Fourier transform, before which the use of the DFT, though useful in many applications, was very limited.

I. INTRODUCTION

Image transforms play an important role in digital image processing as a theoretical and implementation tool in numerous tasks, notably in digital image filtering, restoration, encoding, compression and analysis [1]. Image transforms are often linear. If a transform is represented by the transform matrix T then (1) gives the transformation,

F = [T] f   (1)

where f and F are the original and transformed image respectively. Unitary transforms are also energy-conserving transforms, so that

∑i |fi|² = ∑k |Fk|²

thus they are used for data compression using energy compaction in the transformed elements. In most cases the transform matrices are unitary, i.e.

T⁻¹ = Tᵗ   (2)

The columns of T are the basis vectors of the transform. In the case of 2-D transforms, the basis vectors correspond to basis images. Thus a transform decomposes a digital image into a weighted sum of basis images.
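As a quick numerical check (a sketch, not from the paper; the variable names are mine), the orthonormal 2×2 Hadamard/Walsh matrix illustrates the energy-conservation property ∑|fi|² = ∑|Fk|²:

```python
import math

# Orthonormal 2x2 Hadamard/Walsh matrix (illustrative; any unitary T works).
T = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def matvec(M, v):
    """F = [M] v, the transform of equation (1)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

f = [3.0, 1.0]                      # original signal
F = matvec(T, f)                    # transformed signal
energy_in = sum(x * x for x in f)   # sum |f_i|^2
energy_out = sum(x * x for x in F)  # sum |F_k|^2
assert abs(energy_in - energy_out) < 1e-12  # energy is conserved
```

Note how the energy concentrates in the first coefficient, which is the basis of compression by energy compaction.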


http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 6, 2011

Digital applications becoming more popular with the advent of computers led to the use of square waves as basis functions to represent digital waveforms. Rademacher and J. L. Walsh [11] independently presented the first uses of square functions, which led to the development of more transforms based on square functions, e.g. Haar [13] and Walsh. All of these have fast algorithms for their calculation and hence are used extensively. Hadamard [12, 27] matrices, having elements +1 and -1, are also used as transforms. The other most commonly used transforms are Group Theoretic transforms [29], the Slant transform [14], and the KLT and fast KLT [9, 15]. These transforms are used in various applications, and different transforms may be more suitable in different applications. The applications include image analysis [1, 22, 23], image filtering [1], image segmentation [21], image reconstruction [1, 16], image restoration [1], image compression [1, 16, 17-20, 24-26], scaling operations [2], pattern analysis and recognition [28], etc. In this paper we present a general fast transform algorithm for the mixed radix system, from which not only all other fast transform algorithms can be derived but from which one can also generate composite transforms with fast algorithms. Key to this fast algorithm is the Kronecker product of matrices. Image transforms such as the DFT, sine, cosine, Hadamard, Haar and Slant can be factored as Kronecker products of several smaller matrices. This makes it possible to develop fast algorithms for their implementation. The next section describes the Kronecker product and its properties.

II. KRONECKER PRODUCT OF MATRICES

then,

f = Cᵗ μC⁻¹ F   (8)

where μC⁻¹ = [CCᵗ]⁻¹.

B. Properties of Kronecker Product
1. (A + B) ⊗ C = A ⊗ C + B ⊗ C
2. (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
3. a(A ⊗ B) = (aA) ⊗ B = A ⊗ (aB), where a is a scalar
4. (A ⊗ B)ᵗ = Aᵗ ⊗ Bᵗ
5. (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹
6. ∏k=1..L (Ak ⊗ Bk) = (∏k=1..L Ak) ⊗ (∏k=1..L Bk)
7. det(A ⊗ B) = (det A)ⁿ (det B)ᵐ, where A is an m×m matrix and B is an n×n matrix
8. If A and B are unitary matrices then A ⊗ B is also a unitary matrix.

III. KRONECKER PRODUCT LEADS TO FAST ALGORITHM

Let C = A ⊗ B where A is m×m and B is n×n, hence C is mn×mn. Thus F = [C]f can be written in an expanded form as given below:

A. Kronecker Product
The Kronecker product of two matrices A and B is defined as

C = A ⊗ B = [aij B]   (3)

where C is m1n1 × m2n2, A is m1 × m2 and B is n1 × n2. The matrix [C] is given by

        | a11 B   a12 B   ……   a1m B |
        | a21 B   a22 B   ……   a2m B |
[C] =   |   .       .      .      .  |   (4)
        | am1 B   am2 B   ……   amm B |

For the matrix C to be orthogonal, the matrices A and B both have to be orthogonal. Now if AAᵗ = μA (a diagonal matrix) and BBᵗ = μB (a diagonal matrix), then

CCᵗ = μA ⊗ μB = μC   (5)

is also a diagonal matrix. To get this result, use

(A ⊗ B)(C ⊗ D) = AC ⊗ BD   (6)

Thus if

F = [C] f   (7)
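These identities are easy to verify numerically. The sketch below (helper names are mine, not the paper's) builds the Kronecker product of (3)-(4) and checks the mixed-product rule (6) as well as the diagonality of CCᵗ in (5), using two row-orthogonal matrices of the kind the paper employs later:

```python
def kron(A, B):
    """Kronecker product C = A (x) B, per equations (3)-(4)."""
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

T2 = [[1, 1], [1, -1]]                       # rows mutually orthogonal
T3 = [[1, 1, 1], [-2, 1, 1], [0, -1, 1]]     # rows mutually orthogonal

# Mixed-product rule (6): (A (x) B)(C (x) D) = AC (x) BD
lhs = matmul(kron(T2, T3), kron(T2, T3))
rhs = kron(matmul(T2, T2), matmul(T3, T3))
assert lhs == rhs

# (5): C C^t = mu_A (x) mu_B is diagonal when A A^t and B B^t are diagonal
C = kron(T2, T3)
Ct = [list(row) for row in zip(*C)]
CCt = matmul(C, Ct)
assert all(CCt[i][j] == 0 for i in range(6) for j in range(6) if i != j)
```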

Let us partition the input and output sequences into m partitions of n elements each, and the matrix into n×n blocks, as shown above. In compact form the above matrix equation can be written as

F0 = a0,0[B] f0 + a0,1[B] f1 + …… + a0,m-1[B] fm-1   (9.1)
F1 = a1,0[B] f0 + a1,1[B] f1 + …… + a1,m-1[B] fm-1   (9.2)
.
.
Fm-1 = am-1,0[B] f0 + am-1,1[B] f1 + …… + am-1,m-1[B] fm-1   (9.m)


It is seen that the coefficients computed by the operation of matrix [B] on the vectors f0, f1, ……, fm-1 in (9.1) are directly reused in (9.2) to (9.m), thus reducing the number of computations. By applying matrix [B] to the vectors fi we compute intermediate coefficient vectors G0, G1, …, Gm-1, so that

Gi = [B] fi , for i = 0, 1, 2, …, m-1   (10)

Hence we get

F0 = a0,0G0 + a0,1G1 + ……… + a0,m-1Gm-1   (11.1)
F1 = a1,0G0 + a1,1G1 + ……… + a1,m-1Gm-1   (11.2)
.
.
Fm-1 = am-1,0G0 + am-1,1G1 + ……… + am-1,m-1Gm-1   (11.m)

For the calculation of F0, F1, ……, Fm-1 the coefficients G0, G1, ……, Gm-1 can be reused, thus reducing computations considerably. This algorithm can be made elegant as follows. Let G0, G1, ……, Gm-1 be written as,
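A small sketch of this two-stage scheme (function names are mine, not the paper's): stage one applies [B] to each block fi to obtain the Gi of (10), and stage two combines them with the entries of [A] as in (11); the result must match multiplication by the full Kronecker product matrix.

```python
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def fast_apply(A, B, f):
    """Compute F = (A (x) B) f via intermediate vectors G_i = [B] f_i."""
    m, n = len(A), len(B)
    G = [matvec(B, f[i * n:(i + 1) * n]) for i in range(m)]    # eq. (10)
    F = []
    for i in range(m):                                          # eq. (11.x)
        F.extend(sum(A[i][j] * G[j][k] for j in range(m)) for k in range(n))
    return F

T2 = [[1, 1], [1, -1]]
T3 = [[1, 1, 1], [-2, 1, 1], [0, -1, 1]]
f = list(range(6))
assert fast_apply(T2, T3, f) == matvec(kron(T2, T3), f)
```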

G0 = [g0,0, g0,1, …, g0,n-1]ᵗ ,  G1 = [g1,0, g1,1, …, g1,n-1]ᵗ , …… ,  Gm-1 = [gm-1,0, gm-1,1, …, gm-1,n-1]ᵗ   (12)

other. The algorithm thus obtained is given in pictorial form in Fig. 1.

III. DECIMATION IN FREQUENCY


In this algorithm, the input sequence (f0, f1, ……, fmn-1) appears in order whereas the output sequence appears in a shuffled form, hence the name Decimation in Frequency (DIF), as shown in Fig. 1. For the number of computations required, let M be the total multiplications required; then

M = n²m + m²n = nm(n + m)   (16)

Without this algorithm we require (nm)² multiplications. Since (n + m) < nm for all values of m and n except m = n = 2, there is a reduction in the number of multiplications. Similarly for additions,

A = nm(n − 1) + mn(m − 1) = nm(n + m − 2)   (17)

In general, if the sequence length is N and N = n1n2n3……nr, then we get

M = N(n1 + n2 + n3 + …… + nr)   (18)

and,


A = N(n1 + n2 + n3 + …… + nr − r)   (19)

Let N = 2^r; then

M = N(2 + 2 + 2 + …… r times) = N(2r) = 2N log2 N   (20)
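The counts in (16)-(20) are easy to tabulate; a sketch (function names are mine):

```python
import math

def mult_count(factors):
    """M = N(n1 + n2 + ... + nr) multiplications for N = n1 n2 ... nr."""
    N = math.prod(factors)
    return N * sum(factors)

def add_count(factors):
    """A = N(n1 + n2 + ... + nr - r) additions."""
    N = math.prod(factors)
    return N * (sum(factors) - len(factors))

# N = 30 = 5*3*2: 300 multiplications instead of 30^2 = 900
assert mult_count([5, 3, 2]) == 300
assert add_count([5, 3, 2]) == 210

# N = 2^r: M = N * 2r = 2 N log2 N, the familiar FFT order
r, N = 10, 2 ** 10
assert mult_count([2] * r) == 2 * N * r
```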

Now collecting the first elements of the output vectors F0, F1, ……, Fm-1, we form a new vector as given by

| F0       |   | a00     a01     ……  a0,m-1   | | g0,0    |
| Fn       |   | a10     a11     ……  a1,m-1   | | g1,0    |
| .        | = | .       .           .        | | .       |   (13)
| .        |   | .       .           .        | | .       |
| F(m-1)n  |   | am-1,0  am-1,1  ……  am-1,m-1 | | gm-1,0  |


Normally, without this algorithm, we require M = N² multiplications, i.e. M = N(n1n2n3……nr) multiplications, and with this algorithm we require M = N(n1 + n2 + n3 + …. + nr). Thus the product is replaced by the sum of the factors; the reduction is of logarithmic order.

A. Relation between A ⊗ B and B ⊗ A
Consider the sequence f(n), represented by the vector f, transformed to the vector F given by F = [C] f, where C = A ⊗ B. Now:
1. If we shuffle the input sequence f and the output F has to remain the same, it is necessary to shuffle the columns of [C] by the same shuffle.
2. If we shuffle the elements of the output vector F and want their values to remain the same, the rows of matrix [C] must be shuffled by the same shuffle.
Let fs = [Sn] f and Fs = [Sn] F

Thus by taking the second elements of the vectors F0, F1, …, Fm-1, we get these by operating matrix [A] on the second elements of G0, G1, …, Gm-1. In general, taking the ith elements of the vectors F0, F1, …, Fm-1, we get these by operating matrix [A] on the ith elements of G0, G1, …, Gm-1. This algorithm has been obtained by shuffling the output elements of the B matrix with a perfect shuffle matrix, so the output F comes in a shuffled form Fs, where

Fs = [Sm] F   (14)

Fs is obtained from F by dividing F sequentially into n groups of m elements each, then picking up the first element of each group, then the second element, and so on. To obtain F from Fs we use

F = [Sm]⁻¹ Fs = [Sn] Fs   (15)

Here [Sm] and [Sn] are known as PERFECT SHUFFLE MATRICES, where m × n = N. They are also inverses of each

⇒ f = [Sn]-1 fs ⇒ F = [Sn]-1 Fs


Substituting in the above equation we get

[Sn]⁻¹ Fs = [C] [Sn]⁻¹ fs
Fs = [Sn] [C] [Sn]⁻¹ fs = [Sn] [C] [Sn]ᵗ fs = [Sn] [C] [Sm] fs = [B ⊗ A] fs   (21)
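The relation between A ⊗ B and B ⊗ A can be checked numerically. In the sketch below (names are mine, not the paper's), the perfect shuffle sends the element at index an + b to index bm + a; shuffling the input and output converts multiplication by A ⊗ B into multiplication by B ⊗ A:

```python
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def shuffle(x, m, n):
    """Perfect shuffle of a length-mn vector: index a*n + b -> b*m + a."""
    out = [0] * (m * n)
    for a in range(m):
        for b in range(n):
            out[b * m + a] = x[a * n + b]
    return out

A = [[1, 1], [1, -1]]                     # m x m
B = [[1, 1, 1], [-2, 1, 1], [0, -1, 1]]   # n x n
m, n = 2, 3
f = list(range(m * n))

F = matvec(kron(A, B), f)
# Shuffled input fed to B (x) A yields the shuffled output, as in (21)
assert shuffle(F, m, n) == matvec(kron(B, A), shuffle(f, m, n))
```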

The process can be continued till we obtain mn-1. Thus the n-tuple (mn-1, mn-2, ……, m0) is obtained, representing the number N. As an example of a mixed radix system application, consider the Kronecker product of the three orthogonal matrices given below.

T3 = | 1   1   1 |
     | -2  1   1 |
     | 0  -1   1 |

T5 = | 1   1   1   1   1 |
     | -4  1   1   1   1 |
     | 0  -3   1   1   1 |
     | 0   0  -2   1   1 |
     | 0   0   0  -1   1 |


Thus interchanging B and A to implement [B ⊗ A] as in (21), and giving the input in the shuffled form fs, results in the output coming out in natural order, giving us a new algorithm which is named decimation in time. This is given in pictorial form in Fig. 2.

IV. DECIMATION IN TIME

T2 = | 1   1 |
     | 1  -1 |

In this algorithm the input sequence appears in a shuffled form, using [Sm] as the shuffling matrix. The output is in natural order, as shown in Fig. 2.

V. MERGING OF DIT AND DIF ALGORITHMS

Using a rectangular array of size m×n and filling it as shown in Fig. 3, we can unify the DIF and DIT algorithms. Using a two-dimensional array [f] filled with the input sequence column-wise as shown in Fig. 3, and operating with matrix [B] on all columns of the f-array, we get the g-array. Operating on its rows with matrix [A], we get the [F] array. The operations of [B] and [A] can be interchanged, as shown in the DIT path, where the intermediate array is named the p-array. It may be noted that the g- and p-arrays will have different elements, but finally we obtain the same F-array. The algorithm is very simple to understand. It works very well for the Walsh, Hadamard and Haar transforms. With small modifications it also gives fast algorithms for the DFT and the rest of the Fourier transform family, like the DCT and DST, and also the Group Theoretic Transforms. If N has more than two factors then we have to consider a multidimensional array for filling and reading. This is easily achieved by using the mixed radix system of counting for indexing the input and output sequences.

VI. MIXED RADIX SYSTEM

Let N be any integer in radix r; then N can be written as

N = mn-1 r^(n-1) + ………… + m2 r² + m1 r + m0

In mixed radix form, N can be written as

N = mn-1 r1r2…rn-1 + ……. + m2 r1r2 + m1 r1 + m0

where r1, r2, ……, rn-1 are different radices. When r1 = r2 = …… = rn-1 = r, the mixed radix system reduces to the fixed radix system. Thus the mixed radix system is general, and the fixed radix system is a special case of it. In the fixed radix case we can decompose N by dividing N by r successively, obtaining the coefficients m0, m1, ……, mn-1 as remainders. In the mixed radix case, N can be decomposed by dividing N by r1 to obtain m0 as the remainder, and the quotient can then be divided by r2 to obtain m1 as the remainder, and so on.

The corresponding diagonal matrices of (5) have diagonal entries

μm0 = diag(2, 2) ,  μm1 = diag(3, 6, 2) ,  μm2 = diag(5, 20, 12, 6, 2)

giving a transformation matrix

T = T5 ⊗ T3 ⊗ T2   (22)

Table 1 shows the fast algorithm using the mixed radix system. Column one is the input sequence subscript, i.e. the n of fn. Columns 2, 3 and 4 show the values of m2, m1, m0 representing n in a mixed radix system with radices r3 = 5, r2 = 3 and r1 = 2. The next column is the input sequence f = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29}. The intermediate stage g1 is computed from the input sequence f by operating T2 on pairs of input numbers such that m2, m1 are constant and m0 varies; thus the fifteen pairs of the input sequence (0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11), … and so on are obtained. Intermediate stage 2 is computed from intermediate stage 1 by operating T3 on stage 1 such that m2 and m0 are constant and m1 varies; thus we get the ten 3-tuples (15, 25, 35), (-15, -15, -15), (17, 27, 37), (-15, -15, -15), … and so on. The final output F = g3 is computed from intermediate stage 2 by operating T5 such that m1 and m0 are constant and m2 varies, giving the six 5-tuples (75, 81, 87, 93, 99), (30, 30, 30, 30, 30), (10, 10, 10, 10, 10), (-45, -45, -45, -45, -45), (0, 0, 0, 0, 0) and (0, 0, 0, 0, 0). We get the final output sequence F = {435, 60, 36, 18, 6, 150, 0, 0, 0, 0, 50, 0, 0, 0, 0, -225, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} as shown in (23).
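The 30-point example can be reproduced in a few lines (a sketch; helper names are mine). The staged evaluation recurses on the Kronecker factors and must agree with multiplying by the full 30×30 matrix; since the printed table lists the stages in the paper's shuffled (mixed radix) order, only the first coefficient, F0 = 0 + 1 + … + 29 = 435, is compared against (23) here.

```python
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def fast_apply(mats, f):
    """Apply (mats[0] (x) mats[1] (x) ...) to f stage by stage."""
    if not mats:
        return f
    A, m = mats[0], len(mats[0])
    n = len(f) // m
    G = [fast_apply(mats[1:], f[i * n:(i + 1) * n]) for i in range(m)]
    return [sum(A[i][j] * G[j][k] for j in range(m))
            for i in range(m) for k in range(n)]

T2 = [[1, 1], [1, -1]]
T3 = [[1, 1, 1], [-2, 1, 1], [0, -1, 1]]
T5 = [[1, 1, 1, 1, 1], [-4, 1, 1, 1, 1], [0, -3, 1, 1, 1],
      [0, 0, -2, 1, 1], [0, 0, 0, -1, 1]]

f = list(range(30))
T = kron(T5, kron(T3, T2))          # 30 x 30 composite transform, eq. (22)
F = fast_apply([T5, T3, T2], f)
assert F == matvec(T, f)            # staged result matches the full product
assert F[0] == sum(f) == 435        # all-ones first row gives the DC term
```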


Figure 1. Decimation in Frequency (perfect shuffle [Sm]). Note that there are two perfect shuffles, [Sm] and [Sn], with [Sm][Sn] = I, where mn = N, and also [Sm] = [Sn]ᵗ.

Figure 2. Decimation in Time (perfect shuffle [Sn])


Figure 3. Merging of decimation in time and frequency algorithms.

[435 60 36 18 6 150 0 0 0 0 50 0 0 0 0 -225 0 0 0 0 0 0 0 0 0 0 0 0 0 0]ᵗ = [T] [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29]ᵗ   (23)

where T = T5 ⊗ T3 ⊗ T2 as in (22).


TABLE I. FAST ALGORITHM USING MIXED RADIX SYSTEM

n  | m2 (r=5) | m1 (r=3) | m0 (r=2) | Input f | Stage 1 g1 | Stage 2 g2 | Output g3 = F
0  | 0 | 0 | 0 | 0  | 15  | 75  | 435
1  | 0 | 0 | 1 | 1  | 25  | 30  | 60
2  | 0 | 1 | 0 | 2  | 35  | 10  | 36
3  | 0 | 1 | 1 | 3  | -15 | -45 | 18
4  | 0 | 2 | 0 | 4  | -15 | 0   | 6
5  | 0 | 2 | 1 | 5  | -15 | 0   | 150
6  | 1 | 0 | 0 | 6  | 17  | 81  | 0
7  | 1 | 0 | 1 | 7  | 27  | 30  | 0
8  | 1 | 1 | 0 | 8  | 37  | 10  | 0
9  | 1 | 1 | 1 | 9  | -15 | -45 | 0
10 | 1 | 2 | 0 | 10 | -15 | 0   | 50
11 | 1 | 2 | 1 | 11 | -15 | 0   | 0
12 | 2 | 0 | 0 | 12 | 19  | 87  | 0
13 | 2 | 0 | 1 | 13 | 29  | 30  | 0
14 | 2 | 1 | 0 | 14 | 39  | 10  | 0
15 | 2 | 1 | 1 | 15 | -15 | -45 | -225
16 | 2 | 2 | 0 | 16 | -15 | 0   | 0
17 | 2 | 2 | 1 | 17 | -15 | 0   | 0
18 | 3 | 0 | 0 | 18 | 21  | 93  | 0
19 | 3 | 0 | 1 | 19 | 31  | 30  | 0
20 | 3 | 1 | 0 | 20 | 41  | 10  | 0
21 | 3 | 1 | 1 | 21 | -15 | -45 | 0
22 | 3 | 2 | 0 | 22 | -15 | 0   | 0
23 | 3 | 2 | 1 | 23 | -15 | 0   | 0
24 | 4 | 0 | 0 | 24 | 23  | 99  | 0
25 | 4 | 0 | 1 | 25 | 33  | 30  | 0
26 | 4 | 1 | 0 | 26 | 43  | 10  | 0
27 | 4 | 1 | 1 | 27 | -15 | -45 | 0
28 | 4 | 2 | 0 | 28 | -15 | 0   | 0
29 | 4 | 2 | 1 | 29 | -15 | 0   | 0


A. Inverse Transform
Let

μm2 μm1 μm0 = μm2 ⊗ μm1 ⊗ μm0 = [5, 20, 12, 6, 2]ᵗ ⊗ [3, 6, 2]ᵗ ⊗ [2, 2]ᵗ

(written here as vectors of the diagonal entries)

where m0, m1 and m2 are the suffixes given in columns 2, 3 and 4 of Table 1. To obtain the original sequence f back from the transformed sequence F, first divide each Fk by μm2 μm1 μm0, where m2m1m0 is the mixed radix representation of the subscript k of F, and then multiply the sequence by T5ᵗ ⊗ T3ᵗ ⊗ T2ᵗ (refer to property 4 of the Kronecker product given in Section II). For the fast inverse mixed radix algorithm the same scheme given in Table 1 is valid, with the respective matrices replaced by their transposes. Without the fast algorithm, the number of multiplications required is N² and the number of additions N(N-1), where N is the input sequence length; i.e. the total multiplications required for this problem are 30² = 900 and the additions 30 × 29 = 870. With the proposed mixed radix fast algorithm, the total multiplications required are 30 × (5 + 3 + 2) = 300 and the total additions 30 × (5 + 3 + 2 − 3) = 210, thus reducing the number of computations by a factor of three or more. In the case of two-dimensional signals like images, these composite transforms generated using mixed radix systems can be used for compression. Given below is an example, where an image is transformed using one such composite transform generated from one 4×4 Walsh matrix (matrix A), one 3×3 Kekre's transform matrix (matrix B), one 5×5 DCT (Discrete Cosine Transform) matrix (matrix C) and one 5×5 Kekre's transform matrix (matrix D). The Kronecker product of these matrices, taken in the order D ⊗ C ⊗ B ⊗ A, produces a 300×300 composite transform, which has been used on a 300×300 fingerprint image. In the transform domain, coefficients are selected such that their total energy equals some percentage of the total energy of the image, and the image is reconstructed from these selected components. The reconstructed images, compression ratios and percentage errors for 98%, 98.5% and 99% of the total energy components are shown in Fig. 4.
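The inverse relation f = Cᵗ μC⁻¹ F of (8) can be verified numerically for a small composite transform (a sketch; the helper names and test signal are mine):

```python
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

T2 = [[1, 1], [1, -1]]
T3 = [[1, 1, 1], [-2, 1, 1], [0, -1, 1]]
C = kron(T3, T2)                    # 6 x 6 composite transform

f = [7.0, -2.0, 4.0, 0.0, 3.0, 5.0]
F = matvec(C, f)                    # forward transform

# mu_C = diag(C C^t): squared norms of the (mutually orthogonal) rows of C
mu = [sum(x * x for x in row) for row in C]

# Inverse per (8): divide each F_k by mu_k, then multiply by C^t
Ct = [list(row) for row in zip(*C)]
f_rec = matvec(Ct, [F[k] / mu[k] for k in range(len(F))])
assert all(abs(a - b) < 1e-9 for a, b in zip(f, f_rec))
```

Dividing by the mixed radix product of the μ entries, as described above, is exactly this per-coefficient normalization.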

Figure 4. a) Original Fingerprint image b) Reconstructed image with 98% energy components gives 62.24% compression and 7.86% error c) Reconstructed image with 98.5% energy components gives 54.32% compression and 6.8% error d) Reconstructed image with 99% energy components gives 44.32% compression and 5.53% error

VII. CONCLUSION

In this paper we propose a generalized fast algorithm using the Kronecker product. The algorithm is very simple to understand and works very well for the Walsh, Hadamard and Haar transforms. With small modifications it also gives fast algorithms for the DFT and the rest of the Fourier transform family, like the DCT and DST, and also the Group Theoretic Transforms. If N has more than two factors then we have to consider a multidimensional array for filling and reading. It has also been shown how this algorithm can easily be applied using the mixed radix system of counting. The application of the proposed method to a one-dimensional number sequence and a two-dimensional image shows that the method can generate a considerable amount of compression.

REFERENCES

[1] Ioannis Pitas, "Digital Image Processing Algorithms and Applications", Wiley-IEEE, Feb. 2000, ISBN 0471377392.
[2] M. J. Kieman, L. M. Linnett, R. J. Clarke, "The Design and Application of Four-Tap Wavelet Filters", IEE Colloquium on Applications of Wavelet Transforms in Image Processing, 20 Jan 1993.
[3] I. J. Good, "The Interaction Algorithm and Practical Fourier Analysis", J. Royal Stat. Soc. (London) B20 (1958): 361.


[4] J. W. Cooley and J. W. Tukey, "An Algorithm for the Machine Calculation of Complex Fourier Series", Math. Comput. 19, no. 90, April 1965, pp. 297-301.
[5] G. D. Bergland, "A Guided Tour of the Fast Fourier Transform," IEEE Spectrum 6 (July 1969): 41-52.
[6] E. O. Brigham, The Fast Fourier Transform, Englewood Cliffs, N.J.: Prentice-Hall, 1974.
[7] M. Heideman, D. Johnson, and C. S. Burrus, "Gauss and the history of the FFT," IEEE Signal Processing Magazine, vol. 1, pp. 14-21, Oct. 1984.
[8] Joseph Fourier, The Analytical Theory of Heat, Cambridge University Press, 1878 (reissued by Cambridge University Press, 2009).
[9] A. K. Jain, "A Fast Karhunen-Loeve Transform for a Class of Random Processes", IEEE Trans. Communications, vol. COM-24, pp. 1023-1029, Sept. 1976.
[10] P. Yip and K. R. Rao, "A Fast Computational Algorithm for the Discrete Sine Transform," IEEE Trans. Commun. COM-28, no. 2 (February 1980): 304-307.
[11] J. L. Walsh, "A Closed Set of Orthogonal Functions," American J. of Mathematics 45 (1923): 5-24.
[12] H. Kitajima, "Energy Packing Efficiency of the Hadamard Transform," IEEE Trans. Comm. (correspondence) COM-24 (November 1976): 1256-1258.
[13] J. E. Shore, "On the Applications of Haar Functions," IEEE Trans. Communications COM-21 (March 1973): 209-216.
[14] W. K. Pratt, W. H. Chen and L. R. Welch, "Slant Transform Image Coding," IEEE Trans. Comm. COM-22 (August 1974): 1075-1093.
[15] H. Hotelling, "Analysis of a Complex of Statistical Variables into Principal Components," J. Educ. Psychology 24 (1933): 417-441 and 498-520.
[16] Anil K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, 1997.
[17] A. Habibi and P. A. Wintz, "Image Coding by Linear Transformation and Block Quantization," IEEE Trans. Commun. Tech. COM-19, no. 1 (February 1971): 50-63.
[18] P. A. Wintz, "Transform Picture Coding," Proc. IEEE 60, no. 7 (July 1972): 809-823.
[19] W. K. Pratt, W. H. Chen and L. R. Welch, "Slant Transform Image Coding," IEEE Trans. Commun. COM-22, no. 8 (August 1974): 1075-1093.
[20] K. R. Rao, M. A. Narsimhan and K. Revuluri, "Image Data Processing by Hadamard-Haar Transforms," IEEE Trans. Computers C-23, no. 9 (September 1975): 888-896.
[21] Chang-Tsun Li, Roland Wilson, "Image Segmentation Using Multiresolution Fourier Transform," Technical report, Department of Computer Science, University of Warwick, September 1995.
[22] Andrew R. Davies, Image Feature Analysis using the Multiresolution Fourier Transform, PhD thesis, Department of Computer Science, The University of Warwick, UK, 1993.
[23] A. Calway, The Multiresolution Fourier Transform: A General Purpose Tool for Image Analysis, PhD thesis, Department of Computer Science, The University of Warwick, UK, September 1989.
[24] Dorin Comaniciu, Richard Grisel, "Image coding using transform vector quantization with training set synthesis", Signal Processing, vol. 82, no. 11 (November 2002): 1649-1663.
[25] Data compression using orthogonal transform and vector quantization, United States Patent 4851906.
[26] Robert Y. Li, Jung Kim and N. Al-Shamakhi, "Image compression using transformed vector quantization", Image and Vision Computing 20, 2002, pp. 37-45.
[27] M. H. Lee and M. Kaveh, "Fast Hadamard Transform Based on a Simple Matrix Factorization," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-34, no. 6, December 1986, pp. 1666-1667.
[28] R. K. Rao Yarlagadda, John E. Hershey, Hadamard Matrix Analysis and Synthesis with Applications to Communications and Signal/Image Processing, Kluwer Academic Publishers, 1997.
[29] S. V. Kanetkar, "Group Theoretic Transforms", Ph.D. Thesis, Department of Electrical Engineering, Indian Institute of Technology, Bombay, 1979.
[30] William L. Briggs, Van Emden Henson, The DFT: An Owner's Manual for the Discrete Fourier Transform, SIAM, 1995.

AUTHORS PROFILE

Dr. H. B. Kekre received a B.E. (Hons.) in Telecomm. Engg. from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S.Engg. (Electrical Engg.) from the University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He worked for over 35 years as Faculty of Electrical Engineering and then HOD of Computer Science and Engg. at IIT Bombay. For the following 13 years he worked as a Professor in the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai. He is currently Senior Professor with Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS University, Vile Parle (W), Mumbai, India. He has guided 17 Ph.D.s, 150 M.E./M.Tech projects and several B.E./B.Tech projects. His areas of interest are digital signal processing, image processing and computer networks. He has more than 350 papers in national/international conferences/journals to his credit. Recently thirteen students working under his guidance have received best paper awards, and four of his students have been awarded Ph.D.s of NMIMS University. Currently he is guiding eight Ph.D. students. He is a Fellow of IETE and a life member of ISTE.

Dr. Tanuja K. Sarode received an M.E. (Computer Engineering) degree from Mumbai University in 2004 and a Ph.D. from Mukesh Patel School of Technology Management and Engg., SVKM's NMIMS University, Vile Parle (W), Mumbai, India. She has more than 11 years of teaching experience and is currently working as Assistant Professor in the Dept. of Computer Engineering at Thadomal Shahani Engineering College, Mumbai. She is a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT). Her areas of interest are image processing, signal processing and computer graphics. She has 90 papers in national/international conferences/journals to her credit.


Rekha Vig received a B.E. (Hons.) in Telecomm. Engg. from Jabalpur University in 1994 and M.Tech (Telecom) from MPSTME, NMIMS University in 2010. She is working as Assistant Professor in the Department of Electronics and Telecommunications at Mukesh Patel School of Technology Management and Engineering, NMIMS University, Mumbai. She has more than 12 years of teaching and approximately 2 years of industry experience, and is currently pursuing her Ph.D. at NMIMS University, Mumbai. Her areas of specialization are image processing, digital signal processing and wireless communication. Her publications include more than 15 papers in IEEE international conferences, international journals and national conferences and journals.

