
# SOME MORE RESULTS ON TSALLIS’S ENTROPY

Satish Kumar
Department of Mathematics, G.I.M.T. (Kanipla), Kurukshetra, Haryana (India).
E-mail: drsatish74@rediffmail.com

Rajesh Kumar
Department of Mathematics, Hindu College, University of Delhi, Delhi-7.
E-mail: rajeshhctm@rediffmail.com

ABSTRACT

A parametric mean length is defined as the quantity
$$L_\alpha^\beta(U;P) = \frac{1}{\alpha-1}\left[1-\left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{1/\alpha} D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha}\right],$$
where $\alpha > 0\,(\neq 1)$, $\beta > 0$, $u_i > 0$ and $\sum p_i = 1$. This is the 'useful' mean length of code words weighted by the utilities $u_i$. Lower and upper bounds for $L_\alpha^\beta(U;P)$ are derived in terms of a 'useful' information measure for the incomplete power distribution $p^\beta$.

AMS Subject classification: 94A15, 94A17, 94A24, 26D15.

Key words: Tsallis’s Entropy, ‘Useful’ Information measure, Utilities and Power
probabilities.
1. INTRODUCTION
Consider the following model for a random experiment $S$:
$$S_N = [E; P; U],$$
where $E = (E_1, E_2, \ldots, E_N)$ is a finite system of events happening with respective probabilities $P = (p_1, p_2, \ldots, p_N)$, $p_i \geq 0$, $\sum_{i=1}^{N} p_i = 1$, and credited with utilities $U = (u_1, u_2, \ldots, u_N)$, $u_i > 0$, $i = 1, 2, \ldots, N$. Denote the model by $S_N$, where
$$S_N = \begin{bmatrix} E_1 & E_2 & \ldots & E_N \\ p_1 & p_2 & \ldots & p_N \\ u_1 & u_2 & \ldots & u_N \end{bmatrix}. \qquad (1.1)$$
We call (1.1) a Utility Information Scheme (UIS). Belis and Guiasu [6] proposed a measure of information for this scheme, called 'useful' information, given by
$$H(U;P) = -\sum_{i=1}^{N} u_i p_i \log p_i, \qquad (1.2)$$
where $H(U;P)$ reduces to Shannon's [26] entropy when the utility aspect of the scheme is ignored, i.e., when $u_i = 1$ for each $i$. Throughout the paper, $\sum$ will stand for $\sum_{i=1}^{N}$ unless otherwise stated, and logarithms are taken to base $D$ $(D > 1)$.
Guiasu and Picard [10] considered the problem of encoding the outcomes in (1.1) by means of a prefix code with codewords $w_1, w_2, \ldots, w_N$ having lengths $n_1, n_2, \ldots, n_N$ and satisfying Kraft's [18] inequality
$$\sum_{i=1}^{N} D^{-n_i} \leq 1, \qquad (1.3)$$
where $D$ is the size of the code alphabet. The 'useful' mean length $L(U;P)$ of the code was defined as
$$L(U;P) = \frac{\sum u_i n_i p_i}{\sum u_i p_i}, \qquad (1.4)$$
and the authors obtained bounds for it in terms of $H(U;P)$. Longo [19], Gurdial and Pessoa [12], Autar and Khan [4], and Hooda [14] have studied generalized coding theorems by considering different generalizations of (1.2) and (1.4) under the condition (1.3) of unique decipherability.
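As a quick numerical illustration (the scheme below is made up for this sketch, not taken from the paper), the following computes the 'useful' information (1.2) and the 'useful' mean length (1.4), using Shannon-style integer code lengths, which automatically satisfy Kraft's inequality (1.3):

```python
import math

# Hypothetical 3-event scheme: probabilities and utilities are example values.
p = [0.5, 0.3, 0.2]   # probabilities, sum to 1
u = [4.0, 2.0, 1.0]   # positive utilities
D = 2                 # size of the code alphabet

# 'Useful' information (1.2), logarithms taken to base D:
H = -sum(ui * pi * math.log(pi, D) for ui, pi in zip(u, p))

# Shannon-style lengths n_i = ceil(-log_D p_i) satisfy Kraft's inequality (1.3):
n = [math.ceil(-math.log(pi, D)) for pi in p]
assert sum(D ** -ni for ni in n) <= 1

# 'Useful' mean length (1.4): utility-weighted average codeword length.
L = sum(ui * ni * pi for ui, ni, pi in zip(u, n, p)) / \
    sum(ui * pi for ui, pi in zip(u, p))
print(round(H, 4), round(L, 4))
```

With these example values the lengths come out as $n = (1, 2, 3)$ and the Kraft sum is $0.875 \leq 1$.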
In this paper, we study some coding theorems by considering a new information measure depending on the parameters $\alpha$ and $\beta$ and a utility function. Our motivation for studying this new measure is that it generalizes both the 'useful' information measures already existing in the literature and Tsallis's [29] entropy.
2. CODING THEOREMS
In this section, we define a generalized 'useful' information measure as
$$H_\alpha^\beta(U;P) = \frac{1}{\alpha-1}\left[1 - \frac{\sum u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\beta}}\right], \qquad (2.1)$$
where $\alpha > 0\,(\neq 1)$, $\beta > 0$ and $\sum p_i = 1$.

(i) When $\beta = 1$, (2.1) reduces to the 'useful' information measure studied by Hooda and Ram [15], i.e.,
$$H_\alpha(U;P) = \frac{1}{\alpha-1}\left[1 - \frac{\sum u_i p_i^{\alpha}}{\sum u_i p_i}\right]. \qquad (2.2)$$
(ii) When $u_i = 1$, (2.1) reduces to a new generalized information measure of order $\alpha$ and type $\beta$, i.e.,
$$H_\alpha^\beta(P) = \frac{1}{\alpha-1}\left[1 - \frac{\sum p_i^{\alpha\beta}}{\sum p_i^{\beta}}\right]. \qquad (2.3)$$
(iii) When $u_i = 1$ and $\beta = 1$, (2.1) reduces to the entropy considered by Tsallis [29], i.e.,
$$H_\alpha(P) = \frac{1}{\alpha-1}\left[1 - \sum p_i^{\alpha}\right]. \qquad (2.4)$$
(iv) When $\beta = 1$ and $\alpha \to 1$, (2.1) reduces to a measure of 'useful' information for the incomplete distribution due to Hooda and Bhaker [16], i.e.,
$$H(U;P) = -\frac{\sum u_i p_i \log p_i}{\sum u_i p_i}. \qquad (2.5)$$

(v) When $u_i = 1$ for each $i$, i.e., when the utility aspect is ignored, and $\alpha \to 1$, the measure (2.1) reduces to the entropy considered by Mathur and Mitter [21] for the $\beta$-power distribution, i.e.,
$$H^\beta(P) = -\frac{\sum p_i^{\beta} \log p_i^{\beta}}{\sum p_i^{\beta}}. \qquad (2.6)$$

(vi) When $\alpha \to 1$, $u_i = 1$ and $\beta = 1$, (2.1) reduces to Shannon's [26] entropy, i.e.,
$$H(P) = -\sum p_i \log p_i. \qquad (2.7)$$
(vii) When $\alpha \to 1$, (2.1) becomes the generalized 'useful' information measure of the $\beta$-power distribution, i.e.,
$$H^\beta(U;P) = -\frac{\sum u_i p_i^{\beta} \log p_i^{\beta}}{\sum u_i p_i^{\beta}}. \qquad (2.8)$$
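The special cases above can be checked numerically. The sketch below (with made-up probabilities and utilities) implements (2.1), verifies case (iii) exactly, and checks case (vii) by letting $\alpha$ approach 1; note that the $\alpha \to 1$ limit of the log-free form (2.1) comes out in natural logarithms, so we compare against (2.8) with $\ln$:

```python
import math

def H_ab(u, p, alpha, beta):
    """Generalized 'useful' information measure (2.1)."""
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    num = sum(ui * pi ** (alpha * beta) for ui, pi in zip(u, p))
    return (1.0 - num / den) / (alpha - 1.0)

p = [0.5, 0.3, 0.2]   # example probabilities (sum to 1)
u = [4.0, 2.0, 1.0]   # example utilities

# Case (iii): u_i = 1 and beta = 1 recover Tsallis's entropy (2.4).
alpha = 2.0
tsallis = (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)
assert abs(H_ab([1.0] * 3, p, alpha, 1.0) - tsallis) < 1e-12

# Case (vii): alpha -> 1 approaches the beta-power measure (2.8), natural log.
beta = 1.5
limit = -sum(ui * pi**beta * math.log(pi**beta) for ui, pi in zip(u, p)) / \
        sum(ui * pi**beta for ui, pi in zip(u, p))
assert abs(H_ab(u, p, 1.0 + 1e-7, beta) - limit) < 1e-4
print(tsallis, limit)
```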

We further consider the following.

Definition: The generalized 'useful' mean length $L_\alpha^\beta(U;P)$ with respect to the 'useful' information measure is defined as
$$L_\alpha^\beta(U;P) = \frac{1}{\alpha-1}\left[1 - \left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{1/\alpha} D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha}\right], \qquad (2.9)$$
where $\alpha > 0\,(\neq 1)$, $\beta > 0$, $p_i > 0$, $\sum p_i = 1$, $i = 1, 2, \ldots, N$.
(i) For $\beta = 1$, (2.9) reduces to the new 'useful' mean length
$$L_\alpha(U;P) = \frac{1}{\alpha-1}\left[1 - \left\{\sum p_i\left(\frac{u_i}{\sum u_i p_i}\right)^{1/\alpha} D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha}\right]. \qquad (2.10)$$
(ii) For $u_i = 1$ for each $i$, (2.9) becomes a new optimal code length
$$L_\alpha^\beta(P) = \frac{1}{\alpha-1}\left[1 - \left\{\sum p_i^{\beta}\left(\frac{1}{\sum p_i^{\beta}}\right)^{1/\alpha} D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha}\right]. \qquad (2.11)$$
(iii) For $\beta = 1$, $u_i = 1$ and $\alpha \to 1$, (2.9) reduces to the mean length $L$ considered by Shannon [26], i.e.,
$$L = \sum n_i p_i. \qquad (2.12)$$
(iv) For $\beta = 1$ and $u_i = 1$ for each $i$, (2.9) becomes a new optimal code length
$$L_\alpha(P) = \frac{1}{\alpha-1}\left[1 - \left\{\sum p_i D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha}\right]. \qquad (2.13)$$
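A direct implementation of (2.9) may help fix the notation; the probabilities and code lengths below are made-up example values. As a sanity check on case (iii), for $\beta = 1$, $u_i = 1$ and $\alpha$ near 1 the value approaches $(\ln D)\,\sum n_i p_i$, i.e., Shannon's mean length (2.12) measured in natural rather than base-$D$ units, since (2.9) is log-free:

```python
import math

def L_ab(u, p, n, alpha, beta, D=2):
    """Generalized 'useful' mean codeword length (2.9)."""
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    s = sum(pi ** beta * (ui / den) ** (1.0 / alpha)
            * D ** (-ni * (alpha - 1.0) / alpha)
            for ui, pi, ni in zip(u, p, n))
    return (1.0 - s ** alpha) / (alpha - 1.0)

p = [0.5, 0.3, 0.2]   # example probabilities
n = [1, 2, 3]         # example code lengths (these satisfy Kraft for D = 2)

# beta = 1, u_i = 1, alpha -> 1 gives Shannon's mean length (2.12),
# up to the factor ln D (natural units).
shannon = sum(ni * pi for ni, pi in zip(n, p))   # = 1.7
got = L_ab([1.0] * 3, p, n, 1.0 + 1e-7, 1.0, D=2)
assert abs(got - math.log(2) * shannon) < 1e-4
print(shannon, got)
```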

We now establish a result that, in a sense, provides a characterization of $H_\alpha^\beta(U;P)$ under the condition of unique decipherability.

Theorem 2.1. For all integers $D > 1$,
$$L_\alpha^\beta(U;P) \geq H_\alpha^\beta(U;P) \qquad (2.14)$$
under the condition (1.3). Equality holds if and only if
$$n_i = -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right). \qquad (2.15)$$
Proof: We use Hölder's [27] inequality
$$\sum x_i y_i \geq \left(\sum x_i^{p}\right)^{1/p}\left(\sum y_i^{q}\right)^{1/q} \qquad (2.16)$$
for all $x_i \geq 0$, $y_i \geq 0$, $i = 1, 2, \ldots, N$, when $p < 1\,(\neq 0)$ and $p^{-1} + q^{-1} = 1$, with equality if and only if there exists a positive number $c$ such that
$$x_i^{p} = c\, y_i^{q}. \qquad (2.17)$$
Setting
$$x_i = p_i^{\frac{\alpha\beta}{\alpha-1}}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha-1}} D^{-n_i}, \qquad y_i = p_i^{\frac{\alpha\beta}{1-\alpha}}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{1-\alpha}},$$
and $p = 1 - \frac{1}{\alpha}$, $q = 1 - \alpha$ in (2.16), and noting that $\sum x_i y_i = \sum D^{-n_i} \leq 1$ by (1.3), we obtain the result (2.14) after simplification, since $\frac{1}{\alpha-1} > 0$ for $\alpha > 1$.
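Theorem 2.1 and its equality condition (2.15) can be checked numerically; the scheme below is made up for illustration. With the (generally non-integer) lengths from (2.15) the two sides of (2.14) coincide, while rounding up to integer lengths preserves Kraft's inequality (1.3) and leaves $L \geq H$, as the theorem asserts for $\alpha > 1$:

```python
import math

def H_ab(u, p, a, b):
    """Generalized 'useful' information measure (2.1)."""
    den = sum(ui * pi ** b for ui, pi in zip(u, p))
    return (1.0 - sum(ui * pi ** (a * b) for ui, pi in zip(u, p)) / den) / (a - 1.0)

def L_ab(u, p, n, a, b, D=2):
    """Generalized 'useful' mean length (2.9)."""
    den = sum(ui * pi ** b for ui, pi in zip(u, p))
    s = sum(pi ** b * (ui / den) ** (1.0 / a) * D ** (-ni * (a - 1.0) / a)
            for ui, pi, ni in zip(u, p, n))
    return (1.0 - s ** a) / (a - 1.0)

p, u, D, a, b = [0.5, 0.3, 0.2], [4.0, 2.0, 1.0], 2, 2.0, 1.5

# Real-valued lengths from the equality condition (2.15):
tot = sum(ui * pi ** (a * b) for ui, pi in zip(u, p))
n_opt = [-math.log(ui * pi ** (a * b) / tot, D) for ui, pi in zip(u, p)]
assert abs(L_ab(u, p, n_opt, a, b, D) - H_ab(u, p, a, b)) < 1e-9   # equality

# Rounded-up integer lengths still satisfy Kraft (1.3); then (2.14) holds:
n_int = [math.ceil(x) for x in n_opt]
assert sum(D ** -ni for ni in n_int) <= 1
assert L_ab(u, p, n_int, a, b, D) >= H_ab(u, p, a, b)
print(H_ab(u, p, a, b), L_ab(u, p, n_int, a, b, D))
```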
Theorem 2.2. For every code with lengths $\{n_i\}$, $i = 1, 2, \ldots, N$, $L_\alpha^\beta(U;P)$ can be made to satisfy
$$L_\alpha^\beta(U;P) < H_\alpha^\beta(U;P)\, D^{1-\alpha} + \frac{1 - D^{1-\alpha}}{\alpha-1}. \qquad (2.18)$$
α −1
Proof: Let $n_i$ be the positive integer satisfying the inequality
$$-\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) \leq n_i < -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) + 1. \qquad (2.19)$$
Consider the intervals
$$\delta_i = \left[-\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right),\; -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) + 1\right] \qquad (2.20)$$
of length 1. In every $\delta_i$ there lies exactly one positive number $n_i$ such that

$$0 < -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) \leq n_i < -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) + 1. \qquad (2.21)$$
It can be shown that the sequence $\{n_i\}$, $i = 1, 2, \ldots, N$, thus defined satisfies (1.3). From (2.21) we have
$$n_i < -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) + 1$$
$$\Rightarrow\; D^{-n_i} > \left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right) D^{-1}$$
$$\Rightarrow\; D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)} > \left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right)^{\frac{\alpha-1}{\alpha}} D^{\frac{1-\alpha}{\alpha}}. \qquad (2.22)$$
Multiplying both sides of (2.22) by $p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{1/\alpha}$, summing over $i = 1, 2, \ldots, N$, and simplifying, since $\frac{1}{\alpha-1} > 0$ for $\alpha > 1$, gives (2.18).
Theorem 2.3. For every code with lengths $\{n_i\}$, $i = 1, 2, \ldots, N$, of Theorem 2.1, $L_\alpha^\beta(U;P)$ satisfies
$$H_\alpha^\beta(U;P) \leq L_\alpha^\beta(U;P) < H_\alpha^\beta(U;P)\, D^{1-\alpha} + \frac{1 - D^{1-\alpha}}{\alpha-1}. \qquad (2.23)$$
Proof: Suppose
$$n_i = -\log_D\left(\frac{u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\alpha\beta}}\right). \qquad (2.24)$$
Clearly $n_i$ and $n_i + 1$ satisfy 'equality' in Hölder's inequality (2.16). Moreover, $n_i$ satisfies Kraft's inequality (1.3).

Suppose $\bar{n}_i$ is the unique integer in $[n_i, n_i + 1)$; then obviously $\{\bar{n}_i\}$ also satisfies (1.3).
Since $\alpha > 0\,(\neq 1)$, we have
$$\left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha}} D^{-\bar{n}_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha} \leq \left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha}} D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha} < D^{\alpha-1}\left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha}} D^{-\bar{n}_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha}. \qquad (2.25)$$
Since, with $n_i$ given by (2.24),
$$\left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha}} D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha} = \frac{\sum u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\beta}},$$
(2.25) becomes
$$\left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha}} D^{-\bar{n}_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha} \leq \frac{\sum u_i p_i^{\alpha\beta}}{\sum u_i p_i^{\beta}} < D^{\alpha-1}\left\{\sum p_i^{\beta}\left(\frac{u_i}{\sum u_i p_i^{\beta}}\right)^{\frac{1}{\alpha}} D^{-\bar{n}_i\left(\frac{\alpha-1}{\alpha}\right)}\right\}^{\alpha},$$
which gives the result (2.23).
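The two-sided bound can be illustrated numerically (again with a made-up scheme): taking the integer lengths of (2.19), i.e., the ceilings of the optimal real lengths (2.24), the mean length (2.9) falls between the lower bound of (2.14) and the upper bound of (2.18):

```python
import math

def H_ab(u, p, a, b):
    """Generalized 'useful' information measure (2.1)."""
    den = sum(ui * pi ** b for ui, pi in zip(u, p))
    return (1.0 - sum(ui * pi ** (a * b) for ui, pi in zip(u, p)) / den) / (a - 1.0)

def L_ab(u, p, n, a, b, D=2):
    """Generalized 'useful' mean length (2.9)."""
    den = sum(ui * pi ** b for ui, pi in zip(u, p))
    s = sum(pi ** b * (ui / den) ** (1.0 / a) * D ** (-ni * (a - 1.0) / a)
            for ui, pi, ni in zip(u, p, n))
    return (1.0 - s ** a) / (a - 1.0)

p, u, D, a, b = [0.5, 0.3, 0.2], [4.0, 2.0, 1.0], 2, 2.0, 1.5

# Integer lengths from (2.19): ceilings of the optimal real lengths (2.24).
tot = sum(ui * pi ** (a * b) for ui, pi in zip(u, p))
n = [math.ceil(-math.log(ui * pi ** (a * b) / tot, D)) for ui, pi in zip(u, p)]
assert sum(D ** -ni for ni in n) <= 1          # Kraft (1.3)

H = H_ab(u, p, a, b)
L = L_ab(u, p, n, a, b, D)
upper = H * D ** (1 - a) + (1 - D ** (1 - a)) / (a - 1)
assert H <= L < upper                          # the chain (2.23)
print(H, L, upper)
```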

References
[1] RAYEES AHMAD DAR and M.A.K. BAIG, Coding Theorems on Generalized Cost Measure, Journal of Inequalities in Pure and Applied Mathematics, Vol. 9, Iss. 1, Art. 8, (2008).
[2] J. ACZEL and Z. DAROCZY, Über verallgemeinerte quasilineare Mittelwerte, die mit Gewichtsfunktionen gebildet sind, Pub. Math. Debrecen, vol. 10 (1963), 171-190.
[3] C. ARNDT, Information Measures: Information and its Description in Science and Engineering, Springer, Berlin, (2001).
[4] R. AUTAR and A.B. KHAN, On Generalized 'Useful' Information for Incomplete Distribution, J. of Comb. Information and Syst. Sci., 14(4) (1989), 187-191.
[5] M.A.K. BAIG and RAYEES AHMAD DAR, Some noiseless coding theorems of inaccuracy measure of order α and type β, Sarajevo Journal of Mathematics, Volume 3(15) (2007), 137-143.
[6] M. BELIS and S. GUIASU, A Quantitative-Qualitative Measure of Information in Cybernetic Systems, IEEE Transactions on Information Theory, IT-14 (1968), 593-594.
[7] L.L.CAMPBELL, A coding theorem and Renyi’s entropy, Information
and Control, Vol. 8 (1965), 423-429.
[8] Z.DAROCZY, Generalized Information function, Information and
Control, Vol.16 (1970), 36-51.
[9] A. FEINSTEIN, Foundations of Information Theory, McGraw-Hill, New York, (1956).
[10] S. GUIASU and C.F. PICARD, Borne inférieure de la longueur utile de certains codes, C.R. Acad. Sci. Paris, Ser. A-B 273 (1971), 248-251.
[11] S.GUIASU, “Weighted Entropy”. Reports on Math Physics, 2, (1971),
165-179
[12] GURDIAL and F. PESSOA, On Useful Information of Order α, J. Comb. Information and Syst. Sci., 2 (1977), 30-35.

[13] J. HAVRDA and F. CHARVAT, Quantification Method of Classification Processes: the Concept of Structural α-Entropy, Kybernetika, vol. 3 (1967), 30-35.
[14] D.S. HOODA, A Coding Theorem on Generalized R-Norm Entropy,
Korean J. Comput. & Applied Math. Vol. 8 (2001), No. 3, 657-664.
[15] D.S. HOODA and A. RAM, Characterization of the generalized R-norm
entropy, Caribbean Journal of Mathematical & Computer Science, Vol.
8, (1998).
[16] D.S.HOODA, and U.S.BHAKER, Mean Value Characterization of Useful
Information Measure, Tamkang Journals of Mathematics, vol. 24(1993),
383-394.
[17] A.B. KHAN and HASSEEN AHMAD, Some noiseless coding theorems of entropy of order α of the power distribution P^β, Metron, 39(3-4) (1981), 87-94.
[18] L.G.KRAFT, A device for quantizing, grouping and coding amplitude
modulated pulses, MS Thesis, Electrical Engineering Department, MIT,
1949.
[19] G. LONGO, A Noiseless Coding Theorem for Sources Having Utilities,
SIAM J. Appl. Math., 30(4) , (1976) ,739-748.
[20] B. McMILLAN, Two inequalities implied by unique decipherability, IRE Trans. Inform. Theory, IT-2 (1956), 115-116.
[21] J. MITTER and Y.D. MATHUR, Comparison of entropies of power distributions, ZAMM, 52 (1972), 239-240.
[22] OM PARKASH and P.K. SHARMA, Noiseless Coding Theorems Corresponding to Fuzzy Entropies, Southeast Asian Bulletin of Mathematics, 27 (2004), 1073-1080.
[23] S. PIRZADA and B.A. BHAT, Some more results in coding theory, J. KSIAM, vol. 10, No. 2, (2006), 123-131.
[24] A. RENYI, On Measures of Entropy and Information, Proc. 4th Berkeley Symp. Math. Stat. Prob., Vol. 1 (1961), 547-561.
[25] L. K. ROY, Comparison of Renyi’s entropy of power distribution,
ZAMM, 56(1976), 217-218.
[26] C.E. SHANNON, A Mathematical Theory of Communication, Bell System Tech. J., 27 (1948), 379-423, 623-656.
[27] O. SHISHA, Inequalities, Academic Press, New York, (1967).
[28] I.J. TANEJA and P. KUMAR, Relative information of type s, Csiszar's f-divergence, and information inequalities, Information Sciences, 116 (2004), 105-125.
[29] C. TSALLIS, Possible Generalization of Boltzmann-Gibbs Statistics, J. Stat. Phys., 52 (1988), 479-487.
[30] R.K. TUTEJA and P. GUPTA, On a Coding Theorem Connected with 'Useful' Entropy of Order α and Type β, Int. J. Math. and Mathematical Sci., Vol. 12 (1989), 193-198.
