MATLAB
Outline
1. REVISION: neural networks
2. CASE STUDY: wine classification
3. CASE STUDY: body fat
1. REVISION
Let's review some concepts.
1. REVISION
How does a Neural Network work?
Output layer: this layer delivers the final result.
1. REVISION
How does a Neural Network work?
[Figure: inputs x1–x3 connected to hidden neurons 1–4 through weighted links W11…W43]
The interconnections are assigned weights. The weights are multiplied with the input signals and a bias is added to all of them. The weighted sum of inputs to the j-th neuron is fed as input to the activation function to decide which nodes to fire for feature extraction:
f_j = sum_{i=0}^{n} w_ji * x_i + b
1. REVISION
How does a Neural Network work?
[Figure: hidden-layer neurons connected to the next layer through weighted links W'11…W'24]
The weights are multiplied with the input signals and a bias is added to all of them. The weighted sum of inputs is fed as input to the activation function to decide the impact of the neuron units in the succeeding layer.
1. REVISION
How does a Neural Network work?
[Figure: full network, inputs x1–x3 → hidden layer (weights W11…W43) → output layer (weights W'11…W'24)]
Input features are mapped to the corresponding class labels using an iterative process; each full pass over the training data is called an epoch. The respective weights of the interconnections are optimised based on the estimated error (backpropagation).
1. REVISION
Mathematical Operations behind ANNs
Let the input, hidden, and output units be represented as i1, h1, and o1.
Let the weight pointing from node i (input layer) to node j (hidden layer) be w_ji, and from node j (hidden layer) to node k (output layer) be w'_kj. Further, the weights associated with the bias terms are equal to 1.

Hidden Layer Units calculation:
h1 = σ( w11*i1 + w12*i2 + w13*i3 + w14*b1 )
h2 = σ( w21*i1 + w22*i2 + w23*i3 + w24*b2 )
h3 = σ( w31*i1 + w32*i2 + w33*i3 + w34*b3 )
h4 = σ( w41*i1 + w42*i2 + w43*i3 + w44*b4 )

Output Layer Units calculation:
o1 = σ( w'11*h1 + w'12*h2 + w'13*h3 + w'14*h4 + w'15*b'1 )
o2 = σ( w'21*h1 + w'22*h2 + w'23*h3 + w'24*h4 + w'25*b'2 )
1. REVISION
Mathematical Operations behind ANNs
With the same notation, the calculations can be written in matrix form.

Hidden Layer Units calculation (Matrix Form):
[h1; h2; h3; h4] = σ( [w11 w12 w13; w21 w22 w23; w31 w32 w33; w41 w42 w43] * [i1; i2; i3] + [b1; b2; b3; b4] )

Output Layer Units calculation (Matrix Form):
[o1; o2] = σ( [w'11 w'12 w'13 w'14; w'21 w'22 w'23 w'24] * [h1; h2; h3; h4] + [b'1; b'2] )
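The matrix-form calculation above can be sketched in MATLAB. This is an illustrative sketch only: the weight and input values below are made up, and a logistic sigmoid is assumed as the activation function σ.

```matlab
% Forward pass in matrix form (illustrative values; sigmoid assumed for sigma).
sigma = @(z) 1 ./ (1 + exp(-z));   % logistic activation function

i = [0.5; 0.1; 0.9];               % input units i1..i3
W = rand(4, 3);                    % hidden-layer weights w_ji (4 hidden units, 3 inputs)
b = rand(4, 1);                    % hidden-layer biases b1..b4
h = sigma(W * i + b);              % hidden units h1..h4

Wp = rand(2, 4);                   % output-layer weights w'_kj (2 output units)
bp = rand(2, 1);                   % output-layer biases b'1, b'2
o = sigma(Wp * h + bp)             % output units o1, o2
```

Note that the matrix dimensions must chain: a 4x3 weight matrix maps 3 inputs to 4 hidden units, and a 2x4 matrix maps those to 2 outputs.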
1. REVISION
How does a Neural Network work?
Training error and validation error are functions of the number of epochs.
Neural Network Training Basic Workflow
• Data Collection: collecting the features along with the corresponding labels.
• Network Creation: deciding the number of hidden units and creating a basic Neural Network architecture.
• Network Configuration: selecting the error function, optimization algorithm, convergence condition, stopping criteria, and train-test data splitting criteria.
• Weights and Biases Initialization: done for faster convergence; depends on experience.
• Network Training: multiple passes for mapping inputs (features) to outputs (labels).
• Network Validation: predicting the validation data samples and then calculating various performance metrics, e.g. the confusion matrix.
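In MATLAB, these steps map roughly onto the Deep Learning Toolbox functions used in the case studies that follow; a minimal sketch, assuming the toolbox and its sample datasets are installed:

```matlab
[x, t] = wine_dataset;                 % 1. data collection (toolbox sample dataset)
net = patternnet(10);                  % 2. network creation: 10 hidden units
net.trainFcn = 'trainlm';              % 3. configuration: optimization algorithm
net.divideParam.trainRatio = 0.7;      %    train/validation/test splitting criteria
net.divideParam.valRatio = 0.15;
net.divideParam.testRatio = 0.15;
net = init(net);                       % 4. (re)initialize weights and biases
[net, tr] = train(net, x, t);          % 5. network training
y = net(x(:, tr.testInd));             % 6. validation: predict held-out samples
plotconfusion(t(:, tr.testInd), y)     %    performance metrics, e.g. confusion matrix
```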
2. CASE STUDY: wine classification
Let's turn to our first case.
Case study: wine classification
Each wine sample is described by 13 attributes:
1. Alcohol
2. Malic acid
3. Ash
4. Alcalinity of ash
5. Magnesium
6. Total phenols
7. Flavanoids
8. Nonflavanoid phenols
9. Proanthocyanins
10. Color intensity
11. Hue
12. OD280/OD315 of diluted wines
13. Proline
2. CASE STUDY: wine classification
Preparing the Data
[x,t] = wine_dataset;
- The data is organized into 2 matrices: the input matrix x and the target matrix t.
- Both matrices have 178 columns, one per wine sample: attribute vectors (inputs) and the associated winery class vectors (targets).
- The input matrix x has 13 rows, representing the 13 attributes.
- The target matrix t has 3 rows, representing the 3 wineries.
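A quick way to confirm this layout in MATLAB (assuming the toolbox sample datasets are installed):

```matlab
[x, t] = wine_dataset;   % load inputs and one-hot targets
size(x)                  % 13 x 178: 13 attributes, 178 samples
size(t)                  % 3 x 178: 3 wineries, one sample per column
sum(t, 2)'               % number of samples belonging to each winery
```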
2. CASE STUDY: wine classification
Creating Neural Network for Pattern Recognition
net = patternnet(10);   % pattern recognition network with 10 hidden neurons
view(net);              % display the network architecture
2. CASE STUDY: wine classification
Configuring the Neural Network
net.trainFcn = 'trainlm';            % Levenberg-Marquardt training
net.performFcn = 'mse';              % mean squared error performance function
net.trainParam.min_grad = 0;         % minimum gradient (stopping criterion)
net.trainParam.epochs = 50;          % maximum number of epochs
net.trainParam.max_fail = 50;        % maximum validation failures
net.divideParam.trainRatio = 0.7;    % 70% of samples for training
net.divideParam.valRatio = 0.15;     % 15% for validation
net.divideParam.testRatio = 0.15;    % 15% for testing
2. CASE STUDY: wine classification
Training the Neural Network
[net,tr] = train(net,x,t);
2. CASE STUDY: wine classification
Testing the Neural Network
testX = x(:,tr.testInd);       % test-set inputs selected via the training record
testT = t(:,tr.testInd);       % corresponding test-set targets
testY = net(testX);            % network predictions on the test set
testIndices = vec2ind(testY)   % convert one-hot outputs to class indices
2. CASE STUDY: wine classification
Checking the performance
5. Plot confusion
plotconfusion(testT,testY)
Or
[c,cm] = confusion(testT,testY);
fprintf('Correct Classification : %f%%\n', 100*(1-c));
fprintf('Incorrect Classification : %f%%\n', 100*c);
2. CASE STUDY: wine classification
Confusion Matrix
TP - True Positive: true class (+) = predicted class (+)
TN - True Negative: true class (-) = predicted class (-)
FP - False Positive: true class (-) = predicted class (+)
FN - False Negative: true class (+) = predicted class (-)
2. CASE STUDY: wine classification
Checking the performance
5. Plot confusion
1. The rows correspond to the predicted class (Output Class).
2. The columns correspond to the true class (Target Class).
3. The diagonal cells correspond to observations that are correctly classified (True Positives/True Negatives).
4. The off-diagonal cells correspond to incorrectly classified observations (False Negatives/False Positives).
5. The cell in the bottom right of the plot shows the overall accuracy.
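The same quantities can be computed directly from the count matrix cm returned by confusion. Note that in cm the rows are the true classes and the columns the predicted classes, the transpose of the plot's layout. The counts below are made up for illustration:

```matlab
% Per-class metrics from a confusion count matrix (cm(i,j): samples of
% true class i predicted as class j). Example counts are illustrative only.
cm = [10 1 0; 2 12 1; 0 0 9];
accuracy  = sum(diag(cm)) / sum(cm(:));    % overall fraction correctly classified
recall    = diag(cm) ./ sum(cm, 2);        % true positive rate, per true class
precision = diag(cm)' ./ sum(cm, 1);       % precision, per predicted class
```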
2. CASE STUDY: wine classification
Checking the performance
5. Plot confusion
7. Recall (or true positive rate) and false negative rate: the row at the bottom of the plot shows the percentages of all the examples belonging to each class that are correctly and incorrectly classified.
Let's take a break.
3. CASE STUDY: body fat
Let's turn to our next case.
A fitting neural network is used to estimate the percentage of someone's body fat from various measurements:
1. Age (years)
2. Weight (lbs)
3. Height (inches)
4. Neck circumference (cm)
5. Chest circumference (cm)
6. Abdomen 2 circumference (cm)
7. Hip circumference (cm)
8. Thigh circumference (cm)
9. Knee circumference (cm)
10. Ankle circumference (cm)
11. Biceps (extended) circumference (cm)
12. Forearm circumference (cm)
13. Wrist circumference (cm)
3. CASE STUDY: body fat
Pattern recognition neural network & Fitting neural network
Instead of using:
net = patternnet(10);   % classification (pattern recognition)
we use:
net = fitnet(10);       % regression (function fitting)
3. CASE STUDY: body fat
Neural Network Start GUI
[bodyfatInputs,bodyfatTargets] = bodyfat_dataset;
Open the Neural Network Start GUI with this command: nnstart
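The fitting workflow mirrors the wine classification steps; a minimal script-based sketch, assuming the toolbox sample data:

```matlab
[bodyfatInputs, bodyfatTargets] = bodyfat_dataset;   % 13 measures x 252 samples
net = fitnet(10);                                    % fitting network, 10 hidden neurons
net.divideParam.trainRatio = 0.7;                    % same 70/15/15 split as before
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, bodyfatInputs, bodyfatTargets);
testY = net(bodyfatInputs(:, tr.testInd));           % predict held-out samples
perf  = mse(net, bodyfatTargets(tr.testInd), testY)  % test-set mean squared error
```

Since this is regression rather than classification, performance is judged with mse (or a regression plot) instead of a confusion matrix.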
NOTES AND TIPS
For practice
Datasets: Deep Learning Toolbox Sample Data Sets
Use the Neural Network Start GUI. Use the Script button to reproduce the neural network, and then adapt it to solve similar problems.
Thanks!