
Recognition patterns

Jean Carlo Grandas Franco


March 2020

Exercise E2.1

For the first exercise, a weight value of 1.3 and a bias of 3 are given. The purpose of the exercise is to identify which transfer functions are possible for a given output, considering a single neuron with a single input.

1. a = 1.6
For this case, given an output greater than 1, only the linear transfer function is possible.
In order to obtain an output of 1.6 from it, the necessary input p is:

a = f(wp + b) = wp + b
p = (a − b)/w = (1.6 − 3)/1.3
p = −1.08
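As a quick numerical check, the linear neuron can be inverted directly (a minimal Python sketch using the given w and b):

```python
# Invert a linear neuron: a = w*p + b  =>  p = (a - b) / w
w, b, a = 1.3, 3.0, 1.6
p = (a - b) / w
print(round(p, 2))  # -1.08
```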

2. a = 1
Here, both the hard limit and the symmetric hard limit work, as long as n is greater than or equal to 0:

wp + b ≥ 0
p ≥ −b/w = −2.31
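The threshold on p can be verified with the same numbers (a small sketch):

```python
# Hard limit gives a = 1 whenever n = w*p + b >= 0, i.e. p >= -b/w
w, b = 1.3, 3.0
p_min = -b / w
print(round(p_min, 2))  # -2.31
```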
3. For a = 0.9963, any transfer function whose range includes the values between 0 and 1 might work. For instance, all the linear variants (linear, saturating linear, symmetric saturating linear, positive linear) can be used, considering an input of

p = (a − b)/w = −1.54

However, the log-sigmoid can be used as well. Solving for n,

a = 1/(1 + e^(−n))
n = −ln((1 − a)/a) = 5.60

and the corresponding input is then p = (n − b)/w = 2.0.
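A quick numerical check of the log-sigmoid inversion (a minimal Python sketch):

```python
import math

# Invert the log-sigmoid: a = 1 / (1 + exp(-n))  =>  n = ln(a / (1 - a))
w, b, a = 1.3, 3.0, 0.9963
n = math.log(a / (1 - a))
p = (n - b) / w          # recover the input from n = w*p + b
print(round(n, 2), round(p, 2))  # 5.6 2.0
```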

4. For a = −1, the symmetric hard limit might be used, as long as

p < −b/w = −2.31,

the threshold obtained in the previous item.


The symmetric saturating linear function can be applied too, provided that n ≤ −1, that is, p ≤ (−1 − b)/w = −3.08.

Exercise E2.2

Let’s now consider a single-input neuron with a bias. We would like:

a = −1 for p < 3
a = 1 for p ≥ 3

For this case, the symmetric hard limit is the necessary transfer function, with p = 3 being the exact point where the output switches from −1 to 1. Here, n must be greater than or equal to 0 in order to display a value of 1, and the switch must occur exactly at p = 3, thus

w(3) + b = 0

The above equation shows that a weight needs to be specified in order to choose the proper bias. For w = 1, b = −3; however, for w = 1.5 we have b = −4.5. In general,

b = −3w

is the bias for the symmetric hard limit transfer function with the transition at p = 3.
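A minimal check of the b = −3w rule, using w = 1.5 as in the text:

```python
# Symmetric hard limit neuron that switches at p = 3: need w > 0 and b = -3*w
def hardlims(n):
    return 1 if n >= 0 else -1

w = 1.5
b = -3 * w               # -4.5, matching the b = -3w rule
outputs = [hardlims(w * p + b) for p in (2.9, 3.0, 3.1)]
print(outputs)  # [-1, 1, 1]
```

The output flips from −1 to 1 exactly at p = 3, as required.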
Exercise E2.3

For this exercise, a two-input neuron is given with the following weight matrix and input vector: W = [3 2] and p = [−5 2]^T. We would like to have an output of 0.5. The following questions ask whether some combination of bias and transfer function allows this.

1. Is there a transfer function that will do the job if the bias is zero?
With b = 0, the net input is fixed at n = Wp = (3)(−5) + (2)(2) = −11. None of the standard transfer functions produces 0.5 from n = −11: the linear function gives −11, the log-sigmoid gives a value near 0, and the hard limits give 0 or −1. So no transfer function will do the job with a zero bias.

2. Is there a bias that will do the job if the linear transfer function is used? If yes, what is it?

In order to obtain an output of 0.5 with a linear function, it is necessary that n = 0.5. Thus, we just need to find the bias satisfying this condition:

Wp + b = (3)(−5) + (2)(2) + b = −11 + b = 0.5
b = 11.5
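The bias can be computed directly from Wp (a minimal Python sketch):

```python
# Two-input linear neuron: a = W.p + b; solve for the bias giving a = 0.5
W = [3.0, 2.0]
p = [-5.0, 2.0]
n_no_bias = sum(wi * pi for wi, pi in zip(W, p))  # (3)(-5) + (2)(2) = -11
b = 0.5 - n_no_bias
print(b)  # 11.5
```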

3. Is there a bias that will do the job if a log-sigmoid transfer function is used? If yes, what is it?
For the log-sigmoid, the following condition must be satisfied:

a = 1/(1 + e^(−n)) = 0.5
n = −ln((1 − a)/a) = 0

Notice that for n equal to zero, the bias must be b = 11, so that Wp + b = −11 + 11 = 0.
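Checking that b = 11 indeed drives the log-sigmoid to 0.5 (a small sketch):

```python
import math

# Log-sigmoid needs n = 0 for a = 0.5, so the bias must cancel W.p = -11
W, p = [3.0, 2.0], [-5.0, 2.0]
b = 11.0
n = sum(wi * pi for wi, pi in zip(W, p)) + b  # -11 + 11 = 0
a = 1 / (1 + math.exp(-n))
print(a)  # 0.5
```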


4. Is there a bias that will do the job for a symmetric hard limit transfer function?

Since the symmetric hard limit function only gives outputs of 1 and −1, there is no bias that yields an output of 0.5.

Exercise E2.4

A two-layer neural network is to have four inputs and six outputs. The range of the outputs lies between 0 and 1 and is continuous.

Considering that two layers are to be used and six outputs must be displayed, it is necessary to have 6 neurons in the second layer, one for each output. On the other hand, the number of neurons in the first layer is not fixed by this condition, since its outputs simply serve as inputs to the second layer. Hence, it can be set to 1 or more.

The first-layer weight matrix has dimensions S1 × 4, where S1 is the chosen number of first-layer neurons and 4 is the number of inputs. The second-layer weight matrix then has dimensions 6 × S1, since its number of columns must match the number of outputs coming from the first layer.

In the second layer, a log-sigmoid or saturating linear transfer function is a natural choice, since the saturation at 0 and 1 makes sure that no output will lie outside the desired range; a pure linear function can also display values within the limits, but does not guarantee them by itself. For the first layer, there are more options depending on how many neurons are to be used; for instance, if there are many of them, even a hard limit is possible, and the second layer would then compute its output from those binary signals.

Biases are normally optional, but in some cases they would help the network fit the desired mapping better.
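The dimensioning above can be sketched in Python; the layer sizes and the log-sigmoid choice follow the discussion, while S1 = 3 and the random weights are arbitrary placeholders:

```python
import math
import random

# Sketch of the 4-input, 6-output two-layer network. S1 (the number of
# first-layer neurons) is a free choice; S1 = 3 here is just an example.
R, S1, S2 = 4, 3, 6

def logsig(n):
    # the log-sigmoid keeps every output strictly inside (0, 1)
    return 1 / (1 + math.exp(-n))

W1 = [[random.uniform(-1, 1) for _ in range(R)] for _ in range(S1)]   # S1 x 4
W2 = [[random.uniform(-1, 1) for _ in range(S1)] for _ in range(S2)]  # 6 x S1

def layer(W, p):
    return [logsig(sum(w * x for w, x in zip(row, p))) for row in W]

p = [0.5, -1.0, 2.0, 0.0]          # one example input vector
a1 = layer(W1, p)                  # first-layer output, length S1
a2 = layer(W2, a1)                 # network output, length 6, each in (0, 1)
print(len(a2), all(0 < a < 1 for a in a2))  # 6 True
```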

The rest of the exercises can be found in the attached file.
