
National University of San Agustin

School of Systems Engineering

EDAT - Lab 03 - Matrix and Haralick Textures

Christian E. Portugal-Zambrano

April 30, 2019

1 Introduction

This practice is about learning how to read a paper and implement it. You will also need all of the code you have already written, so be sure you have completed the previous practices before attempting this one. The paper to implement describes a technique extensively applied in image processing and pattern recognition; here we revisit the fundamentals of the technique from an array perspective. The original paper is [1], which is attached to this handout (yes, it is written in English). Although that paper is the original source of the idea, we were working with the paper of [2], which explains it in more detail. In addition to these papers you can read the attached report called Hall-Beyer3.0. All of this material will give you references and explain how the features are calculated.

2 Objectives

• Apply knowledge of matrices to applications with real-world data.

• Introduce some advanced concepts of matrix operations.

• Introduce Haralick features and the reading and implementation of papers.

3 Pre-requisites

Basic use of 1D, 2D, and higher-dimensional arrays is required; additionally, Practice 02 must be completely solved.

4 Algorithms and code presentation

Since we are working with LaTeX, presenting algorithms will be easy; you just need to add this to the preamble of your document:

\usepackage{algorithm}
\usepackage{algorithmicx}
\usepackage[noend]{algpseudocode}

Also, if you need to include some code, you can use:

\usepackage{listings}
\usepackage{color}

\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}

\lstset{frame=tb,
language=Java,
aboveskip=3mm,
belowskip=3mm,
showstringspaces=false,
columns=flexible,
basicstyle={\small\ttfamily},
numbers=none,
numberstyle=\tiny\color{gray},
keywordstyle=\color{blue},
commentstyle=\color{dkgreen},
stringstyle=\color{mauve},
breaklines=true,
breakatwhitespace=true,
tabsize=3
}

More information about listings can be found at https://en.wikibooks.org/wiki/LaTeX/Source_Code_Listings.

5 To Do

For this practice you must find the following results, according to the following notation (a short code sketch after this list illustrates how these quantities can be computed):

• p(i, j) is the entry at position (i, j) of the normalized GLCM.

• Ng is the dimension of the GLCM.

• px(i) and py(j) are the marginal probabilities, defined as:

\[
p_x(i) = \sum_{j=0}^{N_g-1} p(i,j), \qquad p_y(j) = \sum_{i=0}^{N_g-1} p(i,j)
\]

• µ is the mean of µx and µy, defined as:

\[
\mu_x = \sum_{i=0}^{N_g-1} i\, p_x(i), \qquad \mu_y = \sum_{j=0}^{N_g-1} j\, p_y(j)
\]

• σx and σy are the standard deviations of px and py, defined as:

\[
\sigma_x = \left[ \sum_{i=0}^{N_g-1} p_x(i)\,(i-\mu_x)^2 \right]^{\frac{1}{2}}, \qquad
\sigma_y = \left[ \sum_{j=0}^{N_g-1} p_y(j)\,(j-\mu_y)^2 \right]^{\frac{1}{2}}
\]

• additionally:

\[
p_{x+y}(k) = \sum_{i=0}^{N_g-1} \sum_{\substack{j=0 \\ i+j=k}}^{N_g-1} p(i,j), \qquad k = 2, 3, 4, \ldots, 2N_g
\]

\[
p_{x-y}(k) = \sum_{i=0}^{N_g-1} \sum_{\substack{j=0 \\ |i-j|=k}}^{N_g-1} p(i,j), \qquad k = 0, 1, \ldots, N_g - 1
\]

\[
HXY1 = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j) \log\{p_x(i)\, p_y(j)\}
\]

\[
HXY2 = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p_x(i)\, p_y(j) \log\{p_x(i)\, p_y(j)\}
\]

\[
Q(i,j) = \sum_{k=0}^{N_g-1} \frac{p(i,k)\, p(j,k)}{p_x(i)\, p_y(k)}
\]
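A minimal computational sketch of these auxiliary quantities is given below (in Java, matching the listings configuration above; the class and method names are only suggestions, and the GLCM p is assumed to be already normalized). Note that with the 0-based indices used here the sum i + j runs from 0 to 2Ng − 2; the range k = 2, 3, ..., 2Ng in the definition above corresponds to the 1-based gray levels of Haralick's original paper.

// Sketch: auxiliary quantities derived from a normalized Ng x Ng GLCM p.
public class GlcmMarginals {

    // p_x(i) = sum over j of p(i, j); p_y(j) is analogous with the roles of i and j swapped.
    static double[] marginalPx(double[][] p) {
        double[] px = new double[p.length];
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p.length; j++)
                px[i] += p[i][j];
        return px;
    }

    // p_{x+y}: distribution of the index sum i + j (0-based, so k = 0 .. 2Ng-2).
    static double[] sumDistribution(double[][] p) {
        double[] pSum = new double[2 * p.length - 1];
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p.length; j++)
                pSum[i + j] += p[i][j];
        return pSum;
    }

    // p_{x-y}: distribution of the absolute index difference |i - j| (k = 0 .. Ng-1).
    static double[] diffDistribution(double[][] p) {
        double[] pDiff = new double[p.length];
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p.length; j++)
                pDiff[Math.abs(i - j)] += p[i][j];
        return pDiff;
    }
}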

Implement the features below, according to Haralick (a short code sketch after this list illustrates the general computational pattern):

• Angular Second Moment (f1)

\[
f_1 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j)^2
\]

• Contrast (f2)
\[
f_2 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} (i-j)^2\, p(i,j)
\]

• Correlation (f3)
\[
f_3 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} \frac{(i-\mu_x)(j-\mu_y)\, p(i,j)}{\sigma_x \sigma_y}
\]

• Sum of Squares (Variance) (f4)

\[
f_4 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} (i-\mu)^2\, p(i,j)
\]

• Inverse Difference Moment (Homogeneity) (f5)

\[
f_5 = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} \frac{1}{1+(i-j)^2}\, p(i,j)
\]

• Sum Average (f6)


\[
f_6 = \sum_{k=2}^{2N_g} k\, p_{x+y}(k)
\]

• Sum Entropy (f7)


\[
f_7 = -\sum_{k=2}^{2N_g} p_{x+y}(k) \log\{p_{x+y}(k)\}
\]

• Sum Variance (f8)


\[
f_8 = \sum_{k=2}^{2N_g} (k - f_7)\, p_{x+y}(k)
\]

• Entropy (f9)
\[
f_9 = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j) \log\{p(i,j)\}
\]

• Difference Variance (f10)

NOTE: According to Haralick (1973) this is the variance of p_{x−y}; it is not f_4. You have to take as a reference the theory of population variance and sample variance, where the population variance of a finite population of size N is defined as:

\[
\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2 = \left(\frac{1}{N} \sum_{i=1}^{N} x_i^2\right) - \mu^2
\]

where:

\[
\mu = \frac{1}{N} \sum_{i=1}^{N} x_i
\]

so, treating the Ng values p_{x−y}(0), ..., p_{x−y}(Ng − 1) as the data x_i and using the sample form:

\[
f_{10} = \frac{1}{N_g - 1} \left[ \sum_{k=0}^{N_g-1} p_{x-y}(k)^2 - \frac{1}{N_g}\left(\sum_{k=0}^{N_g-1} p_{x-y}(k)\right)^2 \right]
\]

• Difference Entropy (f11)

\[
f_{11} = -\sum_{k=0}^{N_g-1} p_{x-y}(k) \log\{p_{x-y}(k)\}
\]

• Information Measures of Correlation I (f12)

\[
f_{12} = \frac{f_9 - HXY1}{\max(HX, HY)}
\]

where HX and HY are the entropies of px and py, respectively.

• Information Measures of Correlation II (f13)

\[
f_{13} = 1 - \exp[-2(HXY2 - f_9)]
\]

• Maximal Correlation Coefficient (f14)

\[
f_{14} = (\text{second largest eigenvalue of } Q)^{\frac{1}{2}}
\]

where:

\[
Q(i,j) = \sum_{k=0}^{N_g-1} \frac{p(i,k)\, p(j,k)}{p_x(i)\, p_y(k)}
\]
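To make the computational pattern concrete, here is a minimal sketch (again in Java; the names are only illustrative, and the GLCM is assumed to be already normalized) of how the first features can be computed. The reference values given below for the test matrices appear to be consistent with using the natural logarithm in the entropy-based features.

// Sketch: a few Haralick features computed directly from a normalized GLCM p.
public class HaralickSketch {

    // f1: Angular Second Moment.
    static double angularSecondMoment(double[][] p) {
        double f1 = 0.0;
        for (double[] row : p)
            for (double v : row)
                f1 += v * v;
        return f1;
    }

    // f2: Contrast.
    static double contrast(double[][] p) {
        double f2 = 0.0;
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p.length; j++)
                f2 += (i - j) * (i - j) * p[i][j];
        return f2;
    }

    // f9: Entropy (natural logarithm; zero entries are skipped so log(0) never occurs).
    static double entropy(double[][] p) {
        double f9 = 0.0;
        for (double[] row : p)
            for (double v : row)
                if (v > 0.0)
                    f9 -= v * Math.log(v);
        return f9;
    }
}

The remaining features follow the same pattern: compute the auxiliary quantities px, py, p_{x+y} and p_{x−y} first, and then each feature is a single pass over the GLCM or over k.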

Compute Haralick's features for the following test matrices:

Test Matrix 01:

\[
\begin{bmatrix}
63 & 62 & 70 & 110 \\
50 & 40 & 120 & 110 \\
12 & 135 & 150 & 175 \\
180 & 152 & 200 & 220
\end{bmatrix}
\]
The GLCMs for 0°, 45°, 90° and 135° are:

\[
\begin{bmatrix}
0.16666667 & 0.083333336 & 0.041666668 & 0 \\
0.083333336 & 0.16666667 & 0 & 0 \\
0.041666668 & 0 & 0.25 & 0.041666668 \\
0 & 0 & 0.041666668 & 0.083333336
\end{bmatrix}
\]

\[
\begin{bmatrix}
0.22222222 & 0.055555556 & 0 & 0 \\
0.055555556 & 0.11111111 & 0.11111111 & 0 \\
0 & 0.11111111 & 0.22222222 & 0.055555556 \\
0 & 0 & 0.055555556 & 0
\end{bmatrix}
\]

\[
\begin{bmatrix}
0.25 & 0 & 0.083333336 & 0 \\
0 & 0.16666667 & 0.083333336 & 0 \\
0.083333336 & 0.083333336 & 0.083333336 & 0.083333336 \\
0 & 0 & 0.083333336 & 0
\end{bmatrix}
\]

\[
\begin{bmatrix}
0.11111111 & 0.055555556 & 0.16666667 & 0 \\
0.055555556 & 0.11111111 & 0.055555556 & 0 \\
0.16666667 & 0.055555556 & 0 & 0.11111111 \\
0 & 0 & 0.11111111 & 0
\end{bmatrix}
\]

Features for the GLCM at angle 0° (the first two values are verified by hand after this list):

• (1) Angular Second Moment: 0.145833

• (2) Contrast: 0.583333

• (3) Correlation: 0.719533

• (4) Sum of squares (Variance): 1.03993

• (5) Inverse Difference Moment (Homogeneity): 0.808333

• (6) Sum Average: 4.58333

• (7) Sum Entropy: 1.70455

• (8) Sum Variance: 2.87878

• (9) Entropy: 2.09473

• (10) Difference Variance: 0.087963

• (11) Difference Entropy: 0.823959

• (12) Information Measures of Correlation I: -0.427479

• (13) Information Measures of Correlation II: 0.679821

• (14) Maximal Correlation Coefficient: 0.747951
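As a quick sanity check, the first two values can be reproduced by hand from the 0° GLCM above, whose entries are multiples of 1/24:

\[
f_1 = 2\left(\tfrac{4}{24}\right)^2 + 3\left(\tfrac{2}{24}\right)^2 + 4\left(\tfrac{1}{24}\right)^2 + \left(\tfrac{6}{24}\right)^2 = \tfrac{84}{576} = 0.145833
\]

\[
f_2 = 1 \cdot \left(\tfrac{2}{24} + \tfrac{2}{24} + \tfrac{1}{24} + \tfrac{1}{24}\right) + 4 \cdot \left(\tfrac{1}{24} + \tfrac{1}{24}\right) = \tfrac{14}{24} = 0.583333
\]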

Features for the GLCM at angle 45°:

• (1) Angular Second Moment: 0.148148

• (2) Contrast: 0.444444

• (3) Correlation: 0.735294

• (4) Sum of squares (Variance): 0.839506

• (5) Inverse Difference Moment (Homogeneity): 0.777778

• (6) Sum Average: 4.44444

• (7) Sum Entropy: 1.73513

• (8) Sum Variance: 2.70932

• (9) Entropy: 2.04319

• (10) Difference Variance: 0.0853909

• (11) Difference Entropy: 0.686962

• (12) Information Measures of Correlation I: -0.351596

• (13) Information Measures of Correlation II: 0.58172

• (14) Maximal Correlation Coefficient: 0.618892

Features for the GLCM at angle 90°:

• (1) Angular Second Moment: 0.138889

• (2) Contrast: 1

• (3) Correlation: 0.485714

• (4) Sum of squares (Variance): 0.972222

• (5) Inverse Difference Moment (Homogeneity): 0.7

• (6) Sum Average: 4.33333

• (7) Sum Entropy: 1.51711

• (8) Sum Variance: 2.81623

• (9) Entropy: 2.09473

• (10) Difference Variance: 0.0462963

• (11) Difference Entropy: 1.0114

• (12) Information Measures of Correlation I: -0.371201

• (13) Information Measures of Correlation II: 0.6151

• (14) Maximal Correlation Coefficient: 0.508319

Features for the GLCM at angle 135°:

• (1) Angular Second Moment: 0.117284

• (2) Contrast: 1.77778

• (3) Correlation: 0.162791

• (4) Sum of squares (Variance): 1.06173

• (5) Inverse Difference Moment (Homogeneity): 0.511111

• (6) Sum Average: 4.44444

• (7) Sum Entropy: 1.42706

• (8) Sum Variance: 3.01738

• (9) Entropy: 2.2161

• (10) Difference Variance: 0.0360082

• (11) Difference Entropy: 1.06086

• (12) Information Measures of Correlation I: -0.30933

• (13) Information Measures of Correlation II: 0.555556

• (14) Maximal Correlation Coefficient: 0.510747

Test Matrix 02:

\[
\begin{bmatrix}
255 & 125 & 135 & 68 & 235 \\
12 & 36 & 36 & 96 & 56 \\
89 & 253 & 195 & 175 & 165 \\
13 & 125 & 154 & 48 & 69 \\
156 & 23 & 66 & 159 & 125
\end{bmatrix}
\]
Its features are:

(1) Angular Second Moment: 0.0425

(2) Contrast: 6.35

(3) Correlation: 0.256331

(4) Sum of squares (Variance): 4.26937

(5) Inverse Difference Moment (Homogeneity): 0.344729

(6) Sum Average: 8.15

(7) Sum Entropy: 2.31957

(8) Sum Variance: 5.83043

(9) Entropy: 3.24683

(10) Difference Variance: 0.0142857

(11) Difference Entropy: 1.63505

(12) Information Measures of Correlation I: -0.379988

(13) Information Measures of Correlation II: 0.781975

(14) Maximal Correlation Coefficient: 0.762343

Which level of quantization is used? Remember that the selected quantization level is the same as the dimension of the GLCM. For this Test Matrix 2 you have to try different levels such as 4, 8, 16, etc.; one of them will give you the correct features. To be sure about your results, check that your quantization works correctly on the previous test.
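As a reference for that check, here is a minimal sketch of uniform quantization and symmetric GLCM construction (in Java; names are only suggestions, gray values are assumed to lie in 0–255, and the GLCM is normalized so that its entries sum to 1). With levels = 4 and displacement (0, 1) this reproduces the 0° GLCM shown above for Test Matrix 01; the displacements (−1, 1), (−1, 0) and (−1, −1) give the 45°, 90° and 135° matrices.

// Sketch: uniform gray-level quantization and a symmetric, normalized GLCM for one offset.
public class GlcmBuilder {

    // Map gray values 0..255 onto 0..levels-1 using equally sized bins.
    static int[][] quantize(int[][] image, int levels) {
        int[][] q = new int[image.length][image[0].length];
        for (int r = 0; r < image.length; r++)
            for (int c = 0; c < image[0].length; c++)
                q[r][c] = image[r][c] * levels / 256;
        return q;
    }

    // Symmetric GLCM for the displacement (dr, dc), normalized to sum to 1.
    static double[][] glcm(int[][] q, int levels, int dr, int dc) {
        double[][] p = new double[levels][levels];
        double pairs = 0;
        for (int r = 0; r < q.length; r++)
            for (int c = 0; c < q[0].length; c++) {
                int r2 = r + dr, c2 = c + dc;
                if (r2 < 0 || r2 >= q.length || c2 < 0 || c2 >= q[0].length)
                    continue;
                p[q[r][c]][q[r2][c2]]++;   // the pair in the given direction
                p[q[r2][c2]][q[r][c]]++;   // and its symmetric counterpart
                pairs += 2;
            }
        for (int i = 0; i < levels; i++)
            for (int j = 0; j < levels; j++)
                p[i][j] /= pairs;
        return p;
    }
}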

6 Deadline

This practice will be considered for the first evaluation of the course. You must present and defend your work on May 7 during class hours; there are no other dates to present your work. Remember that plagiarism must be avoided; if it is detected, the grade will be zero and the case will be reported to the higher authorities. A PDF with your CUI must be submitted at the next class; this PDF must be a report on your implementation and results, and you can use the LaTeX template linked here. All questions and doubts should be sent by email.

References

[1] R. M. Haralick, K. Shanmugam, et al., "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, no. 6, pp. 610–621, 1973.

[2] M. Presutti, "La matriz de co-ocurrencia en la clasificación multiespectral: tutorial para la enseñanza de medidas texturales en cursos de grado universitario," 4ª Jornada de Educação em Sensoriamento Remoto no Âmbito do Mercosul, 2004.
