Reliability Theory
Reliability is defined as the probability that a product, system, or service will perform its intended function
adequately, for a specified period of time, in a defined operating environment, without failure.
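For the common constant-failure-rate (exponential) model, this definition gives R(t) = e^(−λt); a minimal sketch (the failure-rate value is illustrative, not from the source):

```python
import math

def reliability(t, failure_rate):
    """R(t) = exp(-lambda * t): probability of surviving to time t
    without failure, assuming a constant failure rate."""
    return math.exp(-failure_rate * t)

# A component with failure rate 0.001 failures/hour, over 1000 hours:
r = reliability(1000, 0.001)  # e^(-1), roughly 0.37
```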
Failure of linear activation:
● Every neuron would only perform a linear transformation on its inputs, using the weights and biases.
● It does not matter how many hidden layers we attach to the neural network; all layers will behave in the
same way, because the composition of two linear functions is itself a linear function.
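This collapse can be checked numerically; a small NumPy sketch (layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)

# Two "hidden layers" of purely linear neurons (weights and biases, no activation).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

deep = W2 @ (W1 @ x + b1) + b2

# The same map collapses into one equivalent linear layer.
W, b = W2 @ W1, W2 @ b1 + b2
shallow = W @ x + b

assert np.allclose(deep, shallow)
```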
Example Activations
● Sigmoid: commonly used for models where we have to predict the probability as an output.
● ReLU: does not activate all the neurons at the same time.
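The two behaviours described match the sigmoid (probability-like outputs) and ReLU (inactive units for negative inputs); a minimal sketch, assuming those are the intended activations:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1), so it can be read as a probability.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Outputs 0 for negative inputs: those units stay inactive,
    # so not all neurons activate at the same time.
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 3.0])
sigmoid(z)  # values strictly between 0 and 1
relu(z)     # only the positive unit "fires": [0., 0., 3.]
```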
Activation for Multiclass Classification
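Softmax is the usual activation here: it turns raw class scores into a probability distribution over the classes. A minimal sketch:

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability, exponentiate, and
    # normalise so the outputs sum to 1 (a distribution over classes).
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)  # sums to 1; the largest score gets the largest probability
```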
Distribution
● One of the features of neural networks' computation that is a major incentive for their
application in solving problems is distribution.
● Two distinct forms of this property can be considered to exist:
○ Distributed Information Storage
○ Distributed Processing.
● Distributed processing refers to how every unit performs its own function independently
of any other unit in the neural network.
● However, the correctness of a unit's inputs may be dependent on other units.
● The global recall or functional evaluation performed by the entire neural network results
from the joint (parallel) operation of all the units.
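A small sketch of this property: each unit of a layer can be evaluated entirely on its own, and the layer's global result is just the joint evaluation of all units (the tanh units and layer size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
x = rng.normal(size=3)

# Each unit i computes its output from its own weights,
# independently of every other unit in the layer...
per_unit = np.array([np.tanh(W[i] @ x + b[i]) for i in range(4)])

# ...and the layer's "global" result is their joint (parallelisable) evaluation.
layer = np.tanh(W @ x + b)

assert np.allclose(per_unit, layer)
```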
Drawback
● Faults cannot be located easily.
● In an implementation, each component would require extra circuitry to detect
and signal the occurrence of a fault.
● The added cost, and the resulting reduction in the overall reliability of the system,
might render this approach unsatisfactory.
Advantage
● However, neural networks also have another important property: they can learn.
● This feature will allow a faulty system, once detected, to be retrained either to
remove or to compensate for the faults without requiring them to be located.
● The re-learning process will be relatively fast compared to the original learning time
since the neural network will only be distorted by the faults, not completely
randomised.
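A toy illustration of this re-learning argument, using a linear model trained by gradient descent (the model, the injected fault, and the step counts are all illustrative, not from the source): retraining starts from the distorted weights rather than from scratch, so few steps are needed.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def train(w, steps, lr=0.1):
    # Plain gradient descent on mean squared error.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

loss = lambda w: np.mean((X @ w - y) ** 2)

# Original training, starting from random weights.
w = train(rng.normal(size=3), steps=200)

# Inject a "fault": corrupt one weight.
w_faulty = w.copy()
w_faulty[0] = 0.0

# Re-learning starts from the distorted (not randomised) weights,
# so relatively few steps restore performance.
w_repaired = train(w_faulty, steps=50)

assert loss(w_repaired) < loss(w_faulty)
```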
Generalisation
This refers to the ability of a neural network which has been trained using a limited set of
training data to supply a reasonable output for an input which it did not encounter during
training.
The output of the network is then a scalar function of the input vector:

f(x) = Σᵢ wᵢ φ(‖x − cᵢ‖)

where wᵢ are the output-layer weights, cᵢ are the centres of the basis units, and φ is the radial basis function.
Why local generalization
● The radial basis function is commonly taken to be Gaussian:

φ(r) = exp(−r² / 2σ²)
● If generalisation is local: a fault degrades the output only for input patterns near the
affected unit's centre.
● If generalisation is global: a fault will cause a small loss of generalisation for any input
pattern.
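A minimal sketch of a Gaussian RBF network's local behaviour (the centres, weights, and σ are illustrative): an input near one centre is essentially unaffected by a distant basis unit.

```python
import numpy as np

def rbf_output(x, centres, weights, sigma=1.0):
    # f(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 sigma^2))
    d2 = np.sum((centres - x) ** 2, axis=1)
    return weights @ np.exp(-d2 / (2 * sigma**2))

centres = np.array([[0.0, 0.0], [5.0, 5.0]])
weights = np.array([1.0, 1.0])

# An input near the first centre: the distant second unit contributes
# almost nothing, so a fault there would barely change this output.
near_first = rbf_output(np.array([0.1, 0.0]), centres, weights)
```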
Assignment
Compare different activation functions in terms of: