Figure 1-17.  Summation

The next step is to pass this sum through an activation function. Let’s
consider using a sigmoid function, which returns a value between 0 and 1
for any input. The sigmoid function would calculate the value as shown
here and in Figure 1-18:

f(x) = 1 / (1 + e^{-x})
f(sum) = 1 / (1 + e^{-sum})
f(0.031) = 1 / (1 + e^{-0.031})
f(0.031) = 0.5077

Figure 1-18.  Activation

So, the output of this single neuron is equal to 0.5077.
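
To make the calculation concrete, here is a minimal Python sketch of this single neuron. The inputs and weights are hypothetical, chosen only so that the weighted sum comes out to 0.031 as in Figure 1-17; the sigmoid step then reproduces the 0.5077 output:

import math

def sigmoid(x):
    # Sigmoid activation: squashes any real-valued input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical inputs and weights, picked so the weighted sum equals 0.031
inputs = [0.2, 0.5, 0.3]
weights = [0.3, -0.1, 0.07]

weighted_sum = sum(i * w for i, w in zip(inputs, weights))
print(round(weighted_sum, 3))              # 0.031
print(round(sigmoid(weighted_sum), 4))     # 0.5077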

Neural Network
When we combine multiple neurons, we end up with a neural network.
The simplest and most basic neural network can be built using just the
input and output neurons, as shown in Figure 1-19.

Figure 1-19.  Simple network
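
As a rough sketch, a network like this computes nothing more than a weighted sum of the inputs passed through the activation, which makes it essentially logistic regression. The weights and bias below are arbitrary placeholders:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Input neurons wired directly to a single output neuron:
# output = sigmoid(w . x + b), i.e., logistic regression in disguise
def forward(x, w, b):
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)

print(forward([1.0, 0.5], [0.4, -0.2], 0.1))   # ~0.5987

Because the output is a monotonic function of a single weighted sum, the decision boundary such a network can draw is always a straight line or plane, which is exactly the limitation discussed next.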

The challenge with a network like this is that it can learn only linear
relationships and performs poorly when the relationship between the
input and output is nonlinear. As we have already seen, in real-world
scenarios the relationship is rarely simple and linear. Hence, we need
to introduce an additional layer of neurons between the input and output
layers to increase the network’s capability to learn nonlinear
relationships as well. This additional layer of neurons is known as the
hidden layer, as shown in Figure 1-20; it is responsible for introducing
nonlinearity into the learning process of the network. Neural networks
are also known as universal approximators because they can approximate
any relationship between the input and output variables, no matter how
complex and nonlinear it is in nature. How well they do so depends on
the number of hidden layers in the network and the number of neurons in
each hidden layer. Given enough hidden layers and neurons, a network can
map such a relationship remarkably well.

Figure 1-20.  Neural network with hidden layer
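
To illustrate what a hidden layer buys us, here is a small sketch using XOR, a classic relationship that no input-to-output-only network can capture. The weights are hand-picked for illustration rather than learned through training:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum plus bias, passed through the sigmoid activation
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h1 = neuron(x, [20, 20], -10)           # hidden neuron acts like OR
    h2 = neuron(x, [20, 20], -30)           # hidden neuron acts like AND
    y = neuron([h1, h2], [20, -20], -10)    # OR and not AND = XOR
    print(x, round(y))                      # prints 0, 1, 1, 0

The two hidden neurons carve out intermediate features (roughly OR and AND), and the output neuron combines them, something no direct weighting of the raw inputs could achieve.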
