(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 6, 2011
Unified Fast Algorithm for Most Commonly Used Transforms Using Mixed Radix and Kronecker Product
Dr. H.B. Kekre
Senior Professor, Department of Computer Science, Mukesh Patel School of Technology Management and Engineering, Mumbai, India
Dr. Tanuja Sarode
Associate Professor, Department of Computer Science, Thadomal Shahani College of Engineering, Mumbai, India
Rekha Vig
Asst. Prof. and Research Scholar, Dept. of Elec. and Telecom., Mukesh Patel School of Technology Management and Engineering, Mumbai, India
Abstract— In this paper we present a unified algorithm, with some minor modifications, applicable to most of the commonly used transforms. Many transforms are used in signal and image processing for data compression and many other applications, and many authors have given different algorithms for reducing the complexity to increase the speed of computation, developed at different points in time. This paper shows how the mixed radix system of counting can be used along with the Kronecker product of matrices, leading to a fast algorithm that reduces the complexity to logarithmic order. The results of using such transforms are shown for both 1-D and 2-D (image) signals, and considerable compression is observed in each case.
Keywords— Orthogonal transforms, data compression, fast algorithm, Kronecker product, decimation in time, decimation in frequency, mixed radix system of counting
I. INTRODUCTION
Image transforms play an important role in digital image processing as a theoretical and implementation tool in numerous tasks, notably in digital image filtering, restoration, encoding, compression and analysis [1]. Image transforms are often linear. If a transform is represented by a transform matrix T, then (1) represents the transformation,

F = [T] f (1)

where f and F are the original and transformed image respectively. Unitary transforms are also energy conserving, so that

Σi |f(i)|^2 = Σi |F(i)|^2

thus they are used for data compression through energy compaction in the transformed elements. In most cases the transform matrices are unitary, i.e.

T^-1 = T^t (2)

The columns of T^t are the basis vectors of the transform. In the case of 2-D transforms, the basis vectors correspond to basis images. Thus a transform decomposes a digital image into a weighted sum of basis images.

The precursor of these transforms was the Fourier series, used to express functions on finite intervals. It was introduced by Joseph Fourier, the French mathematician and physicist who initiated the Fourier series and its applications to problems of heat transfer and vibrations [8]. Using the Fourier series, just about any practical function of time can be represented as a sum of sines and cosines, each suitably scaled, shifted and "squeezed" or "stretched". Later the Fourier transform was developed to remove the requirement of finite intervals and to accommodate all types of signals [3]. The Laplace transform technique followed, which converted the frequency representation into a two-dimensional s-plane, termed the "complex frequency" domain.

The DFT is a transform for Fourier analysis of finite-domain discrete-time functions, which only evaluates enough frequency components to reconstruct the finite segment that is analyzed. Variants of the discrete Fourier transform were used by Alexis Clairaut [30] in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange [30], in computing the coefficients of a trigonometric series for a vibrating string. The data both considered had periodic patterns and consisted of discrete samples of an unknown function, and since the approximating functions were finite sums of trigonometric functions, their work led to some of the earliest expressions of the discrete Fourier transform [8].
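The energy-conservation property of unitary transforms described above can be checked numerically. The sketch below is illustrative only (not from the paper), using the unitary DFT matrix as T; note that for a complex T the transpose in (2) becomes a conjugate transpose.

```python
import numpy as np

# Illustrative sketch: a unitary transform conserves energy,
# so sum |f(i)|^2 == sum |F(i)|^2.
N = 8
T = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix as an example T

f = np.random.default_rng(0).standard_normal(N)
F = T @ f                                # F = [T] f, eq. (1)

# Energy conservation (Parseval)
assert np.isclose(np.sum(np.abs(f)**2), np.sum(np.abs(F)**2))
# Eq. (2): T^-1 = T^t (conjugate transpose, since this T is complex)
assert np.allclose(np.linalg.inv(T), T.conj().T)
```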
Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform [10]); a true cosine+sine DFT was used by Gauss [7] in 1805 for trigonometric interpolation of asteroid orbits. Equally significant is a small calculation buried in Gauss' treatise on interpolation, which appeared posthumously in 1866 as an unpublished paper and shows the first clear and indisputable use of the fast Fourier transform (FFT) [5][6], generally attributed to Cooley and Tukey [4] in 1965. The FFT is a very efficient algorithm for calculating the discrete Fourier transform, before which the use of the DFT, though useful in many applications, was very limited.
194 http://sites.google.com/site/ijcsis/ ISSN 1947-5500
 
Digital applications becoming more popular with the advent of computers led to the use of square waves as basis functions to represent digital waveforms. Rademacher and J.L. Walsh [11] independently presented the first use of square functions, which led to the development of more transforms based on square functions, e.g. the Haar [13] and Walsh transforms. All of these have fast algorithms for their calculation and hence are used extensively. Hadamard [12, 27] matrices, having elements +1 and -1, are also used as transforms. The other most commonly used transforms are Group Theoretic transforms [29], the Slant transform [14], the KLT and the fast KLT [9, 15].

These transforms are used in various applications, and different transforms may be more suitable for different applications. The applications include image analysis [1, 22, 23], image filtering [1], image segmentation [21], image reconstruction [1, 16], image restoration [1], image compression [1, 16, 17-20, 24-26], scaling operations [2], and pattern analysis and recognition [28].

In this paper we present a general fast transform algorithm for the mixed radix system, from which not only can all other fast transform algorithms be derived, but one can also generate composite transforms with fast algorithms. Key to this fast algorithm is the Kronecker product of matrices. Image transforms such as the DFT, sine, cosine, Hadamard, Haar and slant transforms can be factored as Kronecker products of several smaller sized matrices. This makes it possible to develop fast algorithms for their implementation. The next section describes the Kronecker product and its properties.

II. KRONECKER PRODUCT OF MATRICES

A. Kronecker Product
The Kronecker product of two matrices A and B is defined as

C = A ⊗ B = [aij B] (3)

where C is m1n1 x m2n2, A is m1 x m2 and B is n1 x n2. The matrix [C] is given by

[C] = [ a1,1 B    a1,2 B    ...   a1,m2 B
        a2,1 B    a2,2 B    ...   a2,m2 B
        ...       ...             ...
        am1,1 B   am1,2 B   ...   am1,m2 B ]   (4)

For the matrix C to be orthogonal, matrices A and B both have to be orthogonal. Now if AA^t = µA (a diagonal matrix) and BB^t = µB (a diagonal matrix), then

CC^t = µA ⊗ µB = µC (5)

is also a diagonal matrix. To get this result, use

(A ⊗ B)(C ⊗ D) = AC ⊗ BD (6)

Thus if

F = [C] f (7)

then

f = C^t µC^-1 F (8)

where µC^-1 = [CC^t]^-1.
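As a numeric sketch of (3)-(6) (the matrices A and B below are our own small examples, not the paper's), NumPy's `kron` can be used to confirm that CC^t is diagonal whenever AA^t and BB^t are:

```python
import numpy as np

# Illustrative sketch of eqs. (3)-(6); A and B are our own examples.
A = np.array([[1., 1.], [1., -1.]])        # rows mutually orthogonal
B = np.array([[1., 1., 1.],
              [1., -0.5, -0.5],
              [0., 1., -1.]])              # rows mutually orthogonal

C = np.kron(A, B)                          # C = A (x) B, eq. (3)

# Eq. (5): CC^t = (AA^t) (x) (BB^t), diagonal when both factors are
mu_A = A @ A.T
mu_B = B @ B.T
assert np.allclose(C @ C.T, np.kron(mu_A, mu_B))
# The product CC^t is indeed diagonal
assert np.allclose(C @ C.T, np.diag(np.diag(C @ C.T)))
```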
B. Properties of the Kronecker Product

1. (A + B) ⊗ C = A ⊗ C + B ⊗ C
2. (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
3. a(A ⊗ B) = (aA) ⊗ B = A ⊗ (aB), where a is a scalar
4. (A ⊗ B)^t = A^t ⊗ B^t
5. (A ⊗ B)^-1 = A^-1 ⊗ B^-1
6. ∏_{l=1}^{L} (A_l ⊗ B_l) = (∏_{l=1}^{L} A_l) ⊗ (∏_{l=1}^{L} B_l)
7. det(A ⊗ B) = (det A)^n (det B)^m, where A is an m x m matrix and B is an n x n matrix
8. If A and B are unitary matrices then A ⊗ B is also a unitary matrix.
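A quick numeric check of properties 4, 5 and 7 on small random matrices (an illustrative sketch; the matrices and seed are ours):

```python
import numpy as np

# Sketch checking Kronecker-product properties 4, 5 and 7.
rng = np.random.default_rng(1)
m, n = 2, 3
A = rng.standard_normal((m, m))            # m x m
B = rng.standard_normal((n, n))            # n x n

AB = np.kron(A, B)
# Property 4: (A (x) B)^t = A^t (x) B^t
assert np.allclose(AB.T, np.kron(A.T, B.T))
# Property 5: (A (x) B)^-1 = A^-1 (x) B^-1
assert np.allclose(np.linalg.inv(AB),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
# Property 7: det(A (x) B) = (det A)^n (det B)^m
assert np.isclose(np.linalg.det(AB),
                  np.linalg.det(A)**n * np.linalg.det(B)**m)
```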
III. KRONECKER PRODUCT LEADS TO FAST ALGORITHM
Let C = A ⊗ B, where A is m x m and B is n x n; hence C is mn x mn. Thus F = [C] f can be written in an expanded form. Let us partition the input and output sequences into m partitions of n elements each, and also partition the matrix into n x n blocks. In compact form the resulting matrix equation can be written as

F0 = a0,0 [B] f0 + a0,1 [B] f1 + ... + a0,m-1 [B] fm-1 (9.1)
F1 = a1,0 [B] f0 + a1,1 [B] f1 + ... + a1,m-1 [B] fm-1 (9.2)
...
Fm-1 = am-1,0 [B] f0 + am-1,1 [B] f1 + ... + am-1,m-1 [B] fm-1 (9.m)
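The block form (9.1)-(9.m) can be verified numerically against the direct product [C] f. This is an illustrative sketch with our own variable names; each product [B] fj is computed once and reused across all m output blocks:

```python
import numpy as np

# Numeric sketch of the block form (9.1)-(9.m); all names are ours.
rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
f = rng.standard_normal(m * n)

# Direct transform with C = A (x) B
F = np.kron(A, B) @ f

# Block form: F_i = sum_j a_{i,j} [B] f_j, reusing each product [B] f_j
Bf = [B @ f[j*n:(j+1)*n] for j in range(m)]      # computed once each
F_blocks = [sum(A[i, j] * Bf[j] for j in range(m)) for i in range(m)]

assert np.allclose(F, np.concatenate(F_blocks))
```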
It is seen that the coefficients computed by the operation of matrix [B] on the vectors f0, f1, ..., fm-1 in (9.1) are directly reused in (9.2) to (9.m), thus reducing the number of computations. By applying matrix [B] to the vectors fi we can compute intermediate coefficient vectors G0, G1, ..., Gm-1, so that

Gi = [B] fi    for i = 0, 1, 2, ..., m-1 (10)

Hence we get

F0 = a0,0 G0 + a0,1 G1 + ... + a0,m-1 Gm-1 (11.1)
F1 = a1,0 G0 + a1,1 G1 + ... + a1,m-1 Gm-1 (11.2)
...
Fm-1 = am-1,0 G0 + am-1,1 G1 + ... + am-1,m-1 Gm-1 (11.m)

For the calculation of F0, F1, ..., Fm-1 the coefficient vectors G0, G1, ..., Gm-1 can thus be reused, reducing the computations considerably. This algorithm can be made elegant as follows. Let G0, G1, ..., Gm-1 be written as
G0 = [g0,0  g0,1  ...  g0,n-1]^t ,  G1 = [g1,0  g1,1  ...  g1,n-1]^t ,  ... ,  Gm-1 = [gm-1,0  gm-1,1  ...  gm-1,n-1]^t (12)

Now collecting the first elements of the output vectors F0, F1, ..., Fm-1 and forming a new vector, we get

[F0  Fn  ...  F(m-1)n]^t = [A] [g0,0  g1,0  ...  gm-1,0]^t (13)

Thus, taking the second elements of the vectors F0, F1, ..., Fm-1, we obtain them by operating matrix [A] on the second elements of G0, G1, ..., Gm-1. In general, taking the i-th elements of the vectors F0, F1, ..., Fm-1, we obtain them by operating matrix [A] on the i-th elements of G0, G1, ..., Gm-1. This algorithm has been obtained by shuffling the output elements of the B matrix by a perfect shuffle matrix Sn, so the output F comes in a shuffled form Fs, where

Fs = [Sm] F (14)

Fs is obtained from F by dividing F into n groups of m elements each sequentially, then picking up the first element of each group, then the second element, and so on. To obtain F from Fs we use

F = [Sm]^-1 Fs = [Sn] Fs (15)

Here [Sm] and [Sn] are known as PERFECT SHUFFLE MATRICES, where m x n = N. They are also inverses of each other. The algorithm thus obtained is given in pictorial form in Fig. 1.
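A possible construction of the perfect shuffle matrices of (14)-(15) is sketched below; `perfect_shuffle` is our own helper name, and the (m, n) indexing convention is our assumption:

```python
import numpy as np

# Sketch of perfect shuffle matrices (helper name and convention ours):
# perfect_shuffle(m, n) groups an N = m*n vector into m groups of n and
# emits the first element of each group, then the second, and so on.
def perfect_shuffle(m, n):
    N = m * n
    S = np.zeros((N, N))
    for i in range(m):
        for j in range(n):
            # element j of group i moves to position j*m + i
            S[j * m + i, i * n + j] = 1.0
    return S

m, n = 3, 4
Sm = perfect_shuffle(m, n)
Sn = perfect_shuffle(n, m)
# S_m and S_n are inverses of each other, as stated after (15)
assert np.allclose(Sm @ Sn, np.eye(m * n))
```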
 
IV. DECIMATION IN FREQUENCY

In this algorithm the input sequence (f0, f1, ..., fmn-1) appears in order whereas the output sequence appears in a shuffled form, hence the name Decimation in Frequency (DIF), as shown in Fig. 1. For the number of computations required, let M be the total number of multiplications; then

M = n^2 m + m^2 n = nm(n + m) (16)

Without this algorithm we require (nm)^2 multiplications. Since (n + m) < nm for all values of m and n except m = n = 2, there is a reduction in the number of multiplications. Similarly, for additions,

A = nm(n - 1) + mn(m - 1) = nm(n + m - 2) (17)

In general, if the sequence length is N and N = n1 n2 n3 ... nr, then we get

M = N(n1 + n2 + n3 + ... + nr) (18)

and

A = N(n1 + n2 + n3 + ... + nr - r) (19)

Let N = 2^r; then

M = N(2 + 2 + 2 + ... r times) = N(2r) = 2N log2 N (20)

Normally, without this algorithm, we require M = N^2 multiplications, i.e. M = N(n1 n2 n3 ... nr), whereas with this algorithm we require M = N(n1 + n2 + n3 + ... + nr). Thus the product of the factors is replaced by their sum, and the reduction is of logarithmic order.
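The multiplication counts (16)-(20) can be tabulated for any factorization of N. A small sketch using only the standard library (function names ours):

```python
# Sketch of the multiplication counts (18) vs. the direct N^2 cost
# for a mixed-radix length N = n1*n2*...*nr (function names ours).
from math import prod

def mults_fast(factors):
    # Eq. (18): M = N * (n1 + n2 + ... + nr)
    N = prod(factors)
    return N * sum(factors)

def mults_direct(factors):
    # Direct matrix-vector product needs N^2 multiplications
    return prod(factors) ** 2

factors = [2] * 10                # N = 1024, the radix-2 case of eq. (20)
assert mults_fast(factors) == 2 * 1024 * 10       # 2 N log2 N
assert mults_direct(factors) == 1024 ** 2
```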
A. Relation between A ⊗ B and B ⊗ A

Consider the sequence f(n), represented by the vector f, transformed to the vector F given by F = [C] f, where C = A ⊗ B. Now:

1. If we shuffle the input sequence f and the output F has to remain the same, it is necessary to shuffle the columns of [C] by the same shuffle.
2. If we shuffle the output elements of the vector F and want their values to remain the same, the rows of matrix [C] are to be shuffled by the same shuffle.

Let fs = [Sn] f, so that f = [Sn]^-1 fs, and Fs = [Sn] F, so that F = [Sn]^-1 Fs.
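Although the section is cut off here, the shuffle relations above lead to the standard identity B ⊗ A = [S](A ⊗ B)[S]^t for a perfect shuffle [S]. The numeric sketch below checks that identity (the helper name is ours, and reading the truncated derivation this way is our assumption):

```python
import numpy as np

# Sketch of the A (x) B vs B (x) A relation via a perfect shuffle
# (helper name ours; the identity is the standard commutation-matrix
# relation, stated here as an assumption about the truncated text).
def perfect_shuffle(m, n):
    S = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            S[j * m + i, i * n + j] = 1.0
    return S

rng = np.random.default_rng(3)
m, n = 2, 3
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
S = perfect_shuffle(m, n)

# Shuffling rows and columns of A (x) B yields B (x) A
assert np.allclose(S @ np.kron(A, B) @ S.T, np.kron(B, A))
```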
 
