
SIMAD UNIVERSITY

Chapter 6: Artificial Neural Networks for Data Mining


Explain briefly neural networks
Neural networks represent a brain metaphor for information processing. These models are
biologically inspired rather than exact replicas of how the brain actually functions. An artificial
neural network (ANN) can also be regarded as a pattern recognition methodology.
What are the uses of ANN?
Neural networks have been used in many business applications for pattern recognition,
forecasting, prediction, and classification. Neural network computing is a key component of any
data mining toolkit.
List application areas of neural networks
Applications of neural networks abound in finance, marketing, manufacturing, operations,
information systems, and so on.
Define neuron and soma
A neuron, also called a soma, is the special type of cell of which the human brain is composed.
Explain biological neural networks, including dendrites, axons, synapses and the nucleus
Biological neural networks are composed of many massively interconnected neurons. Each
neuron possesses axons and dendrites, fingerlike projections that enable the neuron to
communicate with its neighboring neurons by transmitting and receiving electrical and chemical
signals.
[Figure: two biological neurons, showing the soma, dendrites, axon, axon terminals, and the synapses that connect them]
Dendrites provide the input signals to the cell. The axon sends output signals to the neighboring cell via the axon
terminals. The synapse can increase or decrease the strength of the connection between
neurons and cause excitation or inhibition of the subsequent neuron. The nucleus is the central
processing portion of the neuron.


The relationship between biological and artificial neural networks
In an ANN, the soma corresponds to the processing element (node), the dendrites correspond to the inputs, the axon corresponds to the output, and the synapses correspond to the connection weights.
Explain information processing in an artificial neuron

The main processing elements of a neural network are individual neurons, analogous to the brain's
neurons. These artificial neurons receive information from other neurons or from external input
stimuli, perform a transformation on the inputs, and then pass the transformed information on to
other neurons or as external output. In short, information is processed in a neural network by
applying weights to the input signals, summing the weighted inputs, and transforming the sum to
produce the output.
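The sketch below (in Python, with illustrative input values and weights that are not from the chapter) shows this flow for a single artificial neuron: weight the inputs, sum them, and transform the sum into an output.

```python
import math

def artificial_neuron(inputs, weights):
    """Process information the way a single artificial neuron does:
    weight the inputs, sum them, and transform the sum into an output."""
    # 1. Apply the connection weights to the input signals and sum them
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    # 2. Transform the summed input (sigmoid activation, output in [0, 1])
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Illustrative input stimuli and connection weights (assumed values)
print(artificial_neuron([0.5, 0.9], [0.3, 0.7]))   # approximately 0.686
```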

Describe processing elements

A neural network is composed of processing elements that can be organized in different ways to form
the network's structure. The basic processing unit is the neuron. Neurons can be organized in a
number of different ways, but in one of the most popular approaches, known as feedforward-
backpropagation, the outputs of the neurons in one layer are linked to the inputs of the neurons in the
next layer, and no feedback linkage is allowed.
Explain the elements of ANN
1. Processing elements: the processing elements (PEs) of an ANN are artificial neurons. Each neuron
receives inputs, processes them, and delivers a single output.

DSS Chapter 6 ANN Class: BIT14B Page 2


SIMAD UNIVERSITY

2. Network architecture: each ANN is composed of a collection of neurons that are grouped
into layers. There are typically three layers: an input layer, an intermediate (hidden) layer, and an output
layer. The hidden layer is a layer of neurons that takes input from the previous layer and converts
those inputs into outputs for further processing. This parallel processing resembles the way the
brain works, and it differs from the serial processing of conventional computing.
3. Network information processing: the third element of an ANN, which involves the following components:
• Input: each input corresponds to a single attribute.
• Output: contains the solution to a problem.
• Connection weights: the key elements of an ANN.
• Summation function: computes the weighted sum of all the input elements entering
each processing element. This function multiplies each input value by its weight and
totals the values for the neuron; its formula is Y = X1W1 + X2W2 + … + XnWn.

• Transformation function: computes the internal stimulation, or activation level, of the
neuron. This function combines the inputs coming into a neuron from other neurons/sources
and then produces an output based on the chosen transformation function. The
transformation is computed in one of three ways: a linear function, a sigmoid
(logistic activation) function with range [0, 1], or a hyperbolic tangent function with range [-1, 1].
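A small Python sketch of the summation and the three transformation functions; the input values X1 = 3, W1 = 0.2, X2 = 1, W2 = 0.4 are illustrative assumptions.

```python
import math

def linear(s):
    return s                            # unbounded output

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))   # output in [0, 1]

def tanh(s):
    return math.tanh(s)                 # output in [-1, 1]

# Summation of the weighted inputs: Y = X1*W1 + X2*W2
Y = 3 * 0.2 + 1 * 0.4                   # = 1.0
for f in (linear, sigmoid, tanh):
    print(f.__name__, round(f(Y), 3))   # 1.0, ~0.731, ~0.762
```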
List neural network architectures and briefly discuss them
There are several neural network architectures, but the most common ones include feedforward
(with backpropagation), associative memory, recurrent networks, probabilistic networks, self-organizing
feature maps, Hopfield networks, and many others. The architecture of a neural network
model is driven by the task it is intended to address. For example, neural network models have
been used as classification, regression, clustering, association, forecasting, and general
optimization tools. The most popular architecture is the feedforward, multi-layered perceptron with
backpropagation learning, which is used for both classification and regression type problems.
Explain briefly learning in ANN
Learning is the process by which a neural network learns the underlying relationship between inputs
and outputs, or just among the inputs. In supervised learning, a training set is used to teach
the network about its problem domain. The learning algorithm is the training procedure that an
ANN uses. Unsupervised learning does not try to learn a target answer.


Taxonomy of ANN Learning Algorithms

Learning algorithms
• Discrete/binary input
  - Supervised: Simple Hopfield, Outerproduct AM, Hamming Net
  - Unsupervised: ART-1, Carpenter/Grossberg
• Continuous input
  - Supervised: Delta rule, Gradient descent, Competitive learning, Neocognitron, Perceptron
  - Unsupervised: ART-3, SOFM (or SOM), other clustering algorithms

Architectures
• Supervised
  - Recurrent: Hopfield
  - Feedforward: Nonlinear vs. linear, Backpropagation, ML perceptron, Boltzmann
• Unsupervised
  - Estimator: SOFM (or SOM)
  - Extractor: ART-1, ART-2

List the supervised learning process

Supervised learning involves three tasks or processes:
1. Compute temporary outputs
2. Compare outputs with desired targets
3. Adjust the weights and repeat the process
Describe how a network learns
Consider a single neuron that learns the inclusive OR operation, a classic problem in symbolic
logic. The two input elements are X1 and X2. The neuron computes a temporary output, compares it
with the desired target, and adjusts its weights until it produces the correct output for every input
pattern (see the sketch below).
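A minimal Python sketch of a single neuron learning the inclusive OR function with the classic perceptron (delta) rule; the learning rate and number of epochs are assumed values, not from the chapter.

```python
# Truth table for inclusive OR: inputs (X1, X2) and desired output
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]      # connection weights
b = 0.0             # bias (threshold) term
alpha = 0.1         # learning rate (assumed value)

for epoch in range(20):
    for (x1, x2), target in data:
        # 1. Compute the temporary output with a step activation
        y = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        # 2. Compare the output with the desired target
        error = target - y
        # 3. Adjust the weights in proportion to the error and repeat
        w[0] += alpha * error * x1
        w[1] += alpha * error * x2
        b    += alpha * error

print(w, b)   # the learned weights correctly separate the OR patterns
```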

Explain, with a diagram, backpropagation learning

Backpropagation (short for back-error propagation) is the most widely used supervised learning
algorithm in neural computing. It is relatively easy to implement and uses one or more hidden
layers. This type of network is considered feedforward because there are no interconnections
between the output of a processing element and the input of a node in the same layer or in a
preceding layer.


List the learning algorithm procedure


1. Initialize weights with random values and set other network parameters
2. Read in the inputs and the desired outputs
3. Compute the actual output (by working forward through the layers)
4. Compute the error (difference between the actual and desired output)
5. Change the weights by working backward through the hidden layers
6. Repeat steps 2-5 until weights stabilize
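A compact Python/NumPy sketch of the six steps above; the XOR training data, network size, and learning rate are assumptions for illustration, and the exact outputs depend on the random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 2: read in the inputs and the desired outputs (XOR, for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def add_bias(A):
    # append a constant column so each layer has a trainable bias weight
    return np.hstack([A, np.ones((A.shape[0], 1))])

# Step 1: initialize weights with random values
W1 = rng.normal(scale=0.5, size=(3, 4))   # (2 inputs + bias) -> 4 hidden units
W2 = rng.normal(scale=0.5, size=(5, 1))   # (4 hidden + bias) -> 1 output

alpha = 0.5                               # learning rate (assumed)
for step in range(10000):                 # Step 6: repeat until weights stabilize
    # Step 3: compute the actual output (work forward through the layers)
    H = sigmoid(add_bias(X) @ W1)
    Y = sigmoid(add_bias(H) @ W2)
    # Step 4: compute the error (difference between actual and desired output)
    E = T - Y
    # Step 5: change the weights by working backward through the hidden layer
    dY = E * Y * (1 - Y)
    dH = (dY @ W2[:-1].T) * H * (1 - H)
    W2 += alpha * add_bias(H).T @ dY
    W1 += alpha * add_bias(X).T @ dH

print(np.round(Y, 2))   # outputs approach the desired targets 0, 1, 1, 0
```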
List the nine-step development process of an ANN
1. Collect, organize, and format the data.
2. Separate the data into training, validation, and testing sets.
3. Decide on a network architecture and structure.
4. Select a learning algorithm.
5. Set network parameters and initialize their values.
6. Initialize weights and start training.
7. Stop training and freeze the network weights.
8. Test the trained network.
9. Deploy the network for use on unknown new cases.
Testing a Trained ANN Model
Data is split into three parts
1. Training (~60%)
2. Validation (~20%)
3. Testing (~20%)
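A small Python sketch of the 60/20/20 split; the shuffling step and the toy dataset are illustrative assumptions.

```python
import random

def split_dataset(records, seed=42):
    """Split data into ~60% training, ~20% validation, and ~20% testing sets."""
    records = list(records)
    random.Random(seed).shuffle(records)       # shuffle before splitting
    n = len(records)
    train_end = int(0.6 * n)
    valid_end = int(0.8 * n)
    return records[:train_end], records[train_end:valid_end], records[valid_end:]

train, valid, test = split_dataset(range(100))
print(len(train), len(valid), len(test))       # 60 20 20
```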
Sensitivity Analysis on ANN Models
Sensitivity analysis is a method for extracting the cause-and-effect relationships between the
inputs and the outputs of a trained neural network model. The basic procedure behind sensitivity
analysis is that the inputs to the network are systematically perturbed within their allowable value
ranges, and the corresponding change in the output is recorded for each output variable.
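A hedged Python sketch of this procedure; the `model` function below is a stand-in for a trained network, and the value ranges are assumed, not taken from the chapter.

```python
import numpy as np

def sensitivity_analysis(model, baseline, input_ranges, delta=0.05):
    """Perturb each input within its allowable range (one at a time) and
    record the corresponding change in the model's output."""
    base_output = model(baseline)
    effects = {}
    for i, (low, high) in enumerate(input_ranges):
        perturbed = baseline.copy()
        # nudge input i by a fraction of its allowable range, staying inside it
        perturbed[i] = min(high, baseline[i] + delta * (high - low))
        effects[i] = model(perturbed) - base_output
    return effects

# Stand-in "trained network": a fixed weighted sum squashed by a sigmoid
model = lambda x: 1.0 / (1.0 + np.exp(-(0.8 * x[0] - 0.3 * x[1] + 0.1 * x[2])))
baseline = np.array([0.5, 0.5, 0.5])
print(sensitivity_analysis(model, baseline, [(0, 1)] * 3))
```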
Bankruptcy Prediction inputs
A comparative analysis of ANN versus logistic regression (a statistical method). The inputs are:
• X1: Working capital/total assets
• X2: Retained earnings/total assets
• X3: Earnings before interest and taxes/total assets
• X4: Market value of equity/total debt
• X5: Sales/total assets
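A sketch of how such a comparison could be set up, assuming scikit-learn is available; the data here are randomly generated stand-ins for the five ratios, not real financial figures or the chapter's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))        # stand-ins for the five ratios X1..X5
# synthetic bankrupt/healthy labels driven by a few of the ratios
y = (X[:, 0] + X[:, 1] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1).fit(X, y)
logit = LogisticRegression().fit(X, y)
# for a proper comparison, score both models on a held-out test set
print("ANN accuracy:", ann.score(X, y), "Logistic regression accuracy:", logit.score(X, y))
```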
Self-Organizing Map (SOM) Algorithm
1. Initialize each node's weights.
2. Present a randomly selected input vector to the network.
3. Determine the most resembling (winning) node.
4. Determine the neighboring nodes.
5. Adjust the winning and neighboring nodes.
6. Repeat steps 2-5 until a stopping criterion is reached.
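A compact Python/NumPy sketch of these six steps; the map size, learning rate, neighborhood radius, and input data are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 5                                   # 5x5 map of nodes
data = rng.random((200, 3))                # illustrative 3-dimensional inputs

# Step 1: initialize each node's weight vector
W = rng.random((grid, grid, 3))
rows, cols = np.indices((grid, grid))

alpha, radius = 0.5, 2.0                   # assumed learning rate / neighborhood
for step in range(1000):                   # Step 6: repeat until stopping criterion
    x = data[rng.integers(len(data))]      # Step 2: present a random input vector
    # Step 3: determine the most resembling (winning) node
    dist = np.linalg.norm(W - x, axis=2)
    r, c = np.unravel_index(np.argmin(dist), dist.shape)
    # Step 4: determine the neighboring nodes (Gaussian neighborhood)
    grid_dist = np.sqrt((rows - r) ** 2 + (cols - c) ** 2)
    influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    # Step 5: adjust the winning and neighboring nodes toward the input
    W += alpha * influence[..., None] * (x - W)
    alpha *= 0.995                         # decay the learning rate
    radius = max(0.5, radius * 0.995)      # shrink the neighborhood

print(W.shape)                             # trained 5x5 map of 3-D weight vectors
```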
Applications of SOM
• Customer segmentation
• Bibliographic classification
• Image-browsing systems
• Medical diagnosis
• Interpretation of seismic activity
• Speech recognition
• Data compression
• Environmental modeling, and many more
Hopfield network
The Hopfield network is an interesting neural network architecture, first introduced by John
Hopfield (1982). A major reason Hopfield neural networks are gaining popularity is their
usefulness in solving optimization and mathematical programming problems.


Application types of ANN

1. Classification: a neural network that can be trained to predict a categorical output variable.
Types of ANN used for this task include feedforward networks (such as MLP with
backpropagation learning), radial basis functions, and probabilistic networks.
2. Regression: a neural network that can be trained to predict an output variable whose values
are of numeric (i.e., real or integer) type. Types of ANN used for this task include
feedforward networks (such as MLP with backpropagation learning) and radial basis functions.
3. Clustering: sometimes a dataset is so complicated that there is no obvious way to classify the data
into different categories. Types of ANN used for this task include Adaptive Resonance Theory (ART)
and SOM networks.
4. Association: a neural network that can be trained to remember a number of unique patterns,
so that when a distorted version of a particular pattern is presented, the network associates it
with the closest one in its memory and returns the original version of that pattern. The type of
ANN used for this task is the Hopfield network (see the sketch below).
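A minimal Python sketch of the associative-memory behavior described in point 4, using a Hopfield network with Hebbian (outer-product) weights; the stored pattern and the distortion are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Store bipolar (+1/-1) patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no self-connections
    return W

def recall(W, state, steps=10):
    """Repeatedly update the units until the network settles on a stored pattern."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])      # one illustrative pattern
W = train_hopfield(stored)
distorted = np.array([1, -1, 1, 1, 1, -1, -1, -1])     # two units flipped
print(recall(W, distorted))                             # recovers the stored pattern
```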
List advantages of ANN
1. Able to deal with (identify/model) highly nonlinear relationships.
2. Not prone to restrictive normality and/or independence assumptions.
3. Can handle a variety of problem types.
4. Usually provides better results than its statistical counterparts.
5. Handles both numerical and categorical variables (transformation needed).
List disadvantages of ANN
1. They are deemed to be black-box solutions, lacking explainability.
2. It is hard to find optimal values for the large number of network parameters.
3. It is hard to handle a large number of variables.
4. Training may take a long time for large datasets, which may require case sampling.
ANN Software
1. Standalone ANN software tools include NeuroSolutions, BrainMaker, NeuralWare, and
NeuroShell.
2. ANN modules are also part of data mining software suites, including PASW (formerly SPSS
Clementine), SAS Enterprise Miner, and Statistica Data Miner.
END OF THE CHAPTER
