Ali Kadhem
1 Filtering
Filtering can be done by convolving filters with an image. The filters
$$f_1 = \begin{pmatrix} 1 & -1 \end{pmatrix}, \qquad f_2 = \begin{pmatrix} -1 & -1 & -1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad f_3 = \frac{1}{25}\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix},$$
$$f_4 = \begin{pmatrix} -1 & -1 & -1 \\ -1 & 9 & -1 \\ -1 & -1 & -1 \end{pmatrix}, \qquad f_5 = \begin{pmatrix} 1 & -2 & 1 \end{pmatrix}, \tag{1}$$
are convolved with the image Original in figure 1 and produce images A–E.
Figure 1: Images A–E, the results of convolving f1–f5 with the original image.
The first filter f1 is a derivative filter in the x–direction and therefore results in image C. Image A is a further derivative in the x–direction, so it was produced by f5, the second-derivative filter. Filter f2 is a derivative filter in the y–direction at multiple points and therefore results in image D. Filter f3 is an averaging filter over 25 points and results in the blurry image B.
The last filter f4 is a kind of sharpening filter and can be split up as
$$f_4 = \begin{pmatrix} -1 & -1 & -1 \\ -1 & 9 & -1 \\ -1 & -1 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.$$
It would have been a true sharpening filter if the average of the surrounding values had been subtracted rather than their sum. This filter corresponds to image E, since the white areas turn grey due to the subtraction of the surrounding pixels.
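As a minimal sketch (assuming the original image is available as a grayscale matrix Original of class double; the variable names are illustrative), the filters can be applied in MATLAB with conv2:

% Apply the five filters in (1) to the original image.
f1 = [1 -1];
f2 = [-1 -1 -1; 1 1 1];
f3 = ones(5)/25;
f4 = [-1 -1 -1; -1 9 -1; -1 -1 -1];
f5 = [1 -2 1];
C = conv2(Original, f1, 'same');   % first derivative in x
D = conv2(Original, f2, 'same');   % derivative in y
B = conv2(Original, f3, 'same');   % 5x5 average (blur)
E = conv2(Original, f4, 'same');   % 10*impulse minus 3x3 sum
A = conv2(Original, f5, 'same');   % second derivative in x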
2 Interpolation
Interpolation is a method for obtaining values between known points. Linear interpolation means that the interpolating function is linear between the data points. For the image
$$f = \begin{pmatrix} 1 & 4 & 6 & 8 & 7 & 5 & 3 \end{pmatrix} \quad \text{(at points } i = 1, \ldots, 7\text{)},$$
the interpolation $F_{\text{lin}}(x)$ is the interpolated function shown in figure 2.
Figure 2: The data f and its linear interpolation $F_{\text{lin}}(x)$.
The interpolation function $F_{\text{lin}}(x)$ is continuous but not differentiable, since it is piecewise linear with corners at the data points. Interpolation of f can be expressed as
$$F_g(x) = \sum_{i=1}^{7} g(x-i)\,f(i), \tag{2}$$
where g(x) is different for different types of interpolation. For linear interpolation, g(x) can be found by considering the first two terms of the sum:
$$\sum_{i=1}^{2} g(x-i)\,f(i) = g(x-1)\,f(1) + g(x-2)\,f(2). \tag{3}$$
For linear interpolation with step length 1 between points, we have that
$$F(x) = f(i) + (x-i)\big(f(i+1) - f(i)\big), \qquad i \le x \le i+1. \tag{4}$$
For the first interval, between the points i = 1 and i = 2, this gives F(x) = f(1) + (x − 1)(f(2) − f(1)), which is equal to (3). So
$$g(x-1) = 2-x, \qquad g(x-2) = x-1, \qquad 1 < x < 2. \tag{5}$$
Substituting t = x − 1 and s = x − 2 gives
$$g(t) = 1-t, \quad 0 < t < 1, \qquad g(s) = 1+s, \quad -1 < s < 0, \tag{6}$$
that is, g(x) = 1 − |x| for |x| ≤ 1 and g(x) = 0 otherwise.
A smoother interpolation is obtained with the function
$$g(x) = \begin{cases} 1-|x|^2, & |x| \le 1,\\ -2|x|^3+10|x|^2-16|x|+8, & 1 < |x| \le 2,\\ 0, & \text{otherwise}, \end{cases} \tag{7}$$
shown in figure 3.

Figure 3: The interpolation function g(x) in (7) for −3 ≤ x ≤ 3.
This function g is continuous since the pieces agree at the endpoints: $1 - |x|^2 = -2|x|^3 + 10|x|^2 - 16|x| + 8$ at x = 1, and $-2|x|^3 + 10|x|^2 - 16|x| + 8 = 0$ at x = 2. The derivative of $1 - |x|^2$ is $-(x^2 + |x|^2)/x$, and that of $-2|x|^3 + 10|x|^2 - 16|x| + 8$ is $-2x(3|x|^2 - 10|x| + 8)/|x|$. At x = 1 these derivatives are both equal to −2. At x = 2 the derivative of $-2|x|^3 + 10|x|^2 - 16|x| + 8$ is 0. So g is differentiable, since it is differentiable inside each piece and the derivatives match at the endpoints.
The interpolation $F_g$ of f with the function g given in (7) is shown in figure 4.

Figure 4: The interpolation $F_g(x)$ of f using the function g in (7).
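A minimal MATLAB sketch of evaluating the sum (2) with the kernel (7):

% Interpolate f with the kernel g from (7), following eq. (2).
f = [1 4 6 8 7 5 3];                   % data at i = 1..7
g = @(t) (abs(t)<=1).*(1-abs(t).^2) + ...
    (abs(t)>1 & abs(t)<=2).*(-2*abs(t).^3+10*abs(t).^2-16*abs(t)+8);
x = 0:0.01:8;
Fg = zeros(size(x));
for i = 1:7
    Fg = Fg + g(x - i)*f(i);           % sum of shifted, scaled kernels
end
plot(x, Fg, 1:7, f, 'o')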
3 Classification

3.1 Nearest–Neighbour

Measurements from three classes are given:

Class 1:
0.4003, 0.3985, 0.3998, 0.3997, 0.4015, 0.3995, 0.3991
Class 2:
0.2554, 0.3139, 0.2627, 0.3802, 0.3247, 0.3360, 0.2974
Class 3:
0.5632, 0.7687, 0.0524, 0.7586, 0.4443, 0.5505, 0.6469.
Assume that the first four measurements in each class are for training and the last three are for testing. Using Nearest–Neighbour classification, which assigns a test measurement to the class of the training measurement at the smallest distance, all test measurements ended up in their correct class except 0.4443 from Class 3, which was classified as Class 1.
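A minimal MATLAB sketch of this nearest-neighbour rule, using the measurements listed above:

% 1-NN classification: assign each test value to the class of the
% closest training value.
train = [0.4003 0.3985 0.3998 0.3997;    % Class 1
         0.2554 0.3139 0.2627 0.3802;    % Class 2
         0.5632 0.7687 0.0524 0.7586];   % Class 3
test  = [0.4015 0.3995 0.3991;           % Class 1
         0.3247 0.3360 0.2974;           % Class 2
         0.4443 0.5505 0.6469];          % Class 3
for c = 1:3
    for k = 1:3
        d = abs(train - test(c,k));      % distances to all training values
        [~, idx] = min(d(:));
        [assigned, ~] = ind2sub(size(train), idx);
        fprintf('%.4f (Class %d) -> Class %d\n', test(c,k), c, assigned);
    end
end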
3.2 Gaussian distributions
Bayes' classifier for a continuous probability distribution $f_X$ can be formulated as
$$P(Y=y \mid X=x) = \frac{f_X(x \mid Y=y)\,P(Y=y)}{f_X(x)}. \tag{8}$$
The class-conditional densities are assumed to be Gaussian: for Class 1 the mean is $\mu_1 = 0.4$ and the standard deviation $\sigma_1 = 0.01$, for Class 2 $\mu_2 = 0.32$ and $\sigma_2 = 0.05$, and for Class 3 $\mu_3 = 0.55$ and $\sigma_3 = 0.2$. The prior probability of each class is the same. For classification, the denominator $f_X(x)$ can be neglected, and so can the class prior, since they are constant for all measurements. The Bayes classifier with Gaussian distributions resulted in the classification:
Class 1:
0.4003, 0.3985, 0.3998, 0.3997, 0.4015, 0.3995, 0.3991, 0.3802
Class 2:
0.2554, 0.3139, 0.2627, 0.3247, 0.3360, 0.2974
Class 3:
0.5632, 0.7687, 0.0524, 0.7586, 0.4443, 0.5505, 0.6469.
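A minimal MATLAB sketch of this classifier for a single measurement (the constant factor $1/\sqrt{2\pi}$ and the equal priors are dropped, as argued above):

% Pick the class whose Gaussian density is largest at x.
mu    = [0.40 0.32 0.55];
sigma = [0.01 0.05 0.20];
x = 0.3802;                                        % one of the measurements
lik = exp(-(x - mu).^2 ./ (2*sigma.^2)) ./ sigma;  % unnormalised densities
[~, class] = max(lik)                              % -> class 1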
4 Classification
Probabilities can also be used to classify images. The possible images are
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}. \tag{9}$$
There is a probability ϵ that an observed pixel has been distorted and shows the wrong value:
$$P(\text{observing } 0 \mid \text{correct value is } 1) = P(\text{observing } 1 \mid \text{correct value is } 0) = \epsilon.$$
Given the observed image
$$\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix},$$
the posterior probability of each image is, by Bayes' rule,
$$P\!\left(\text{Image} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = \frac{P\!\left(\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} \,\middle|\, \text{Image}\right) P(\text{Image})}{P\!\left(\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right)},$$
where Image is one of the images in (9). Since the probabilities will only be compared to each other, the normalisation $P\!\left(\begin{smallmatrix} 0 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ can be ignored. Assuming the pixels are independent, the likelihood can be written as the product of the likelihoods of each pixel:
$$P\!\left(\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} \,\middle|\, \text{Image}\right) = P(0 \mid \text{Image pixel } 1)\,P(1 \mid \text{Image pixel } 2)\,P(1 \mid \text{Image pixel } 3)\,P(1 \mid \text{Image pixel } 4).$$
So the posterior probabilities, with the normalisation written as a constant k (considered later) and with prior probabilities 1/4, 1/2 and 1/4 for the three images, are
$$P\!\left(\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = k \,(1-\epsilon)^3\,\epsilon \cdot \frac{1}{4},$$
$$P\!\left(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = k \,(1-\epsilon)\,\epsilon^3 \cdot \frac{1}{2},$$
$$P\!\left(\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = k \,(1-\epsilon)^2\,\epsilon^2 \cdot \frac{1}{4},$$
resulting in the image being interpreted as $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. For ϵ = 0.4 the probabilities become
$$P\!\left(\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = 0.3913, \quad P\!\left(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = 0.3478, \quad P\!\left(\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \,\middle|\, \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right) = 0.2609,$$
again giving the image $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ as the MAP estimate. By increasing the noise, the other images obtain a higher probability.
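A minimal MATLAB sketch of the computation (the priors 1/4, 1/2 and 1/4 are those appearing in the posterior expressions above):

% Posterior of each image in (9) given the observed image, eps = 0.4.
images = cat(3, [0 1; 1 0], [1 0; 0 1], [1 1; 1 0]);
prior  = [1/4 1/2 1/4];
obs    = [0 1; 1 1];
ep     = 0.4;                                         % pixel error probability
post = zeros(1,3);
for k = 1:3
    m = sum(obs(:) == reshape(images(:,:,k), [], 1)); % matching pixels
    post(k) = (1-ep)^m * ep^(4-m) * prior(k);
end
post = post / sum(post)                               % -> 0.3913 0.3478 0.2609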
5 Classification (problem 5)
The image
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$$
is observed.
In a correct image all pixels are white except one column, which is black. The errors for different pixels are independent, with P(white | line) = P(black | no line) = ϵ. The prior probability that the line is located in column 1 or 4 is 0.3, and that it is located in column 2 or 3 is 0.2.
The likelihoods of the observed image x given each correct image are
$$P(x \mid \text{Image}_1) = (1-\epsilon)^{10}\,\epsilon^{6}, \qquad P(x \mid \text{Image}_2) = (1-\epsilon)^{12}\,\epsilon^{4},$$
$$P(x \mid \text{Image}_3) = (1-\epsilon)^{10}\,\epsilon^{6}, \qquad P(x \mid \text{Image}_4) = (1-\epsilon)^{8}\,\epsilon^{8}, \tag{10}$$
where $\text{Image}_c$ is the correct image with the line in column c. For error ϵ = 0.2 the normalised posterior probabilities (likelihood times prior) become 0.0807, 0.8605, 0.0538 and 0.0050 for columns 1–4, so the line is most probably located in column 2.
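A minimal MATLAB sketch reproducing these posteriors:

% Posterior for the line position; priors [0.3 0.2 0.2 0.3], eps = 0.2.
obs   = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 1 0 0];
prior = [0.3 0.2 0.2 0.3];
ep    = 0.2;
post  = zeros(1,4);
for c = 1:4
    ideal = zeros(4); ideal(:,c) = 1;   % correct image: line in column c
    m = sum(obs(:) == ideal(:));        % matching pixels (out of 16)
    post(c) = (1-ep)^m * ep^(16-m) * prior(c);
end
post = post / sum(post)                 % -> 0.0807 0.8605 0.0538 0.0050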
6 Classification (problem 6)
The letters 'B', '0' and '8' are represented by the binary images
$$\text{'B'} = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \qquad \text{'0'} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad \text{'8'} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
The prior probability of the image 'B' is 0.35, of '0' it is 0.4 and of '8' it is 0.25. A white pixel has probability 0.3 of being wrongly measured and a black pixel has probability 0.2. An observed image x is
$$x = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$
Assuming the pixels are independent, the posterior probabilities are
$$P(\text{'B'} \mid x) = 0.2251, \qquad P(\text{'0'} \mid x) = 0.0362, \qquad P(\text{'8'} \mid x) = 0.7387,$$
so the observed image is classified as an '8'.
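A minimal MATLAB sketch of the computation (it assumes the 1-pixels are the black strokes, so their error probability is 0.2, which reproduces the numbers above):

% Posteriors P(letter | x) with priors [0.35 0.4 0.25].
letters = {[1 1 0;1 0 1;1 1 0;1 0 1;1 1 0], ...   % 'B'
           [0 1 0;1 0 1;1 0 1;1 0 1;0 1 0], ...   % '0'
           [0 1 0;1 0 1;0 1 0;1 0 1;0 1 0]};      % '8'
prior = [0.35 0.4 0.25];
x = [0 0 0; 1 0 0; 0 1 0; 0 0 1; 1 1 0];
post = zeros(1,3);
for k = 1:3
    t = letters{k};
    e = (t(:)==1)*0.2 + (t(:)==0)*0.3;   % per-pixel error probability
    wrong = t(:) ~= x(:);                % which pixels were measured wrongly
    post(k) = prod(e.^wrong .* (1-e).^(~wrong)) * prior(k);
end
post = post / sum(post)                  % -> 0.2251 0.0362 0.7387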
To count the number of holes in a letter, this implementation was used:

function holes=noHoles(B)
% Invert the image so background and holes become foreground, then count
% the connected background components; subtracting the outer background
% component (and the label 0 of the letter itself) leaves the holes.
B(B==0) = 2;
B(B==1) = 0;
B2=bwlabel(B);
holes=length(unique(B2))-2;
To find the dimensions of the letter this implementation was used:

width=find(sum(B,1),1,'last')-find(sum(B,1),1);   % first to last nonzero column
height=find(sum(B,2),1,'last')-find(sum(B,2),1);  % first to last nonzero row
• The standard deviations of the projections of the letter onto the x and y axes were used.
– Motivation: a small standard deviation is expected for narrow letters like 1.
The code used for the position of the centre of mass was:

function pos=massPos(B)
% Returns 0.6 if the centre of mass lies in the upper half of the image,
% 0.4 if it lies in the lower half and 0.5 otherwise. centerPoints is a
% helper that computes the centre of mass (cx,cy) of B.
[r c]=size(B);
[cx cy]=centerPoints(B);
cy=round(cy);
pos=0.5;
if (r/2-cy>=1)
    pos=0.6;
end
if (r/2-cy<=-1)
    pos=0.4;
end
• Perimeter
– Motivation: letters like 1 will have a smaller perimeter.
Implementation:

function p=perimeterIm(B)
% Sum, over all foreground pixels, the number of exposed edges
% (4 minus the number of foreground neighbours).
p = 0;
for i = 1:size(B,1)
    for k=1:size(B,2)
        if (B(i,k)==1)
            p=p+(4-noNeigh(B,i,k));
        end
    end
end
% Number of foreground neighbours of pixel (i,k), with bounds checks so
% that border pixels do not index outside the image.
function number=noNeigh(B,i,k)
number = 0;
if (i > 1 && B(i-1,k)==1)
    number=number+1;
end
if (k > 1 && B(i,k-1)==1)
    number=number+1;
end
if (i < size(B,1) && B(i+1,k)==1)
    number=number+1;
end
if (k < size(B,2) && B(i,k+1)==1)
    number=number+1;
end
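As a usage sketch, a 2×2 block of ones has perimeter 8 with this measure, since each of its four pixels has two foreground neighbours:

B = zeros(4); B(2:3,2:3) = 1;
p = perimeterIm(B)    % -> 4*(4-2) = 8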
The features are collected in the function segment2features:

function f = segment2features(B)
B=resizeIm(B);              % normalise the segment size
area=sum(sum(B));           % number of foreground pixels
[width height]=numDim(B);   % letter dimensions
[r c]=size(B);
[stdW stdH]=stdNum(B);      % std of the x- and y-projections
Figure 5: Visualisation of feature-extraction accuracy.

The more tightly the same number is grouped in a certain place, the better the feature extraction is. Examples of feature vectors and their images are shown in figure 6.
Figure 6: (a) Feature vector: [0.0901 0.96 1 0.5 0.5911 0.5109 0.78]. (b) Feature vector: [0.0821 0.8 1 0.5 0.5913 0.4599 0.9].