4 Perceptron Learning Rule

P4.3 We have a classification problem with four classes of input vector. The four classes are

class 1: $\left\{ \mathbf{p}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \mathbf{p}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\}$, class 2: $\left\{ \mathbf{p}_3 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}, \mathbf{p}_4 = \begin{bmatrix} 2 \\ 0 \end{bmatrix} \right\}$, class 3: $\left\{ \mathbf{p}_5 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, \mathbf{p}_6 = \begin{bmatrix} -2 \\ 1 \end{bmatrix} \right\}$, class 4: $\left\{ \mathbf{p}_7 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, \mathbf{p}_8 = \begin{bmatrix} -2 \\ -2 \end{bmatrix} \right\}$.

Design a perceptron network to solve this problem.

To solve a problem with four classes of input vector we will need a perceptron with at least two neurons, since an $S$-neuron perceptron can categorize $2^S$ classes. The two-neuron perceptron is shown in Figure P4.2.

Figure P4.2 Two-Neuron Perceptron ($\mathbf{a} = \mathrm{hardlim}(\mathbf{Wp} + \mathbf{b})$)

Let's begin by displaying the input vectors, as in Figure P4.3. The light circles ○ indicate class 1 vectors, the light squares □ indicate class 2 vectors, the dark circles ● indicate class 3 vectors, and the dark squares ■ indicate class 4 vectors.

Figure P4.3 Input Vectors for Problem P4.3

A two-neuron perceptron creates two decision boundaries. Therefore, to divide the input space into the four categories, we need to have one decision boundary divide the four classes into two sets of two. The remaining boundary must then isolate each class. Two such boundaries are illustrated in Figure P4.4. We now know that our patterns are linearly separable.

Figure P4.4 Tentative Decision Boundaries for Problem P4.3

The weight vectors should be orthogonal to the decision boundaries and should point toward the regions where the neuron outputs are 1. The next step is to decide which side of each boundary should produce a 1. One choice is illustrated in Figure P4.5, where the shaded areas represent outputs of 1. The darkest shading indicates that both neuron outputs are 1. Note that this solution corresponds to target values of

class 1: $\left\{ \mathbf{t}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \mathbf{t}_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}$, class 2: $\left\{ \mathbf{t}_3 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \mathbf{t}_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$, class 3: $\left\{ \mathbf{t}_5 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \mathbf{t}_6 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}$, class 4: $\left\{ \mathbf{t}_7 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \mathbf{t}_8 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$.

We can now select the weight vectors:

$$ {}_1\mathbf{w} = \begin{bmatrix} -3 \\ -1 \end{bmatrix} \quad \text{and} \quad {}_2\mathbf{w} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}. $$

Note that the lengths of the weight vectors are not important, only their directions. They must be orthogonal to the decision boundaries. Now we can calculate the bias by picking a point on a boundary and satisfying Eq. (4.15). Using the point $[0\;\;1]^T$ on the first boundary and the origin on the second,

$$ b_1 = -{}_1\mathbf{w}^T \mathbf{p} = -\begin{bmatrix} -3 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1, \qquad b_2 = -{}_2\mathbf{w}^T \mathbf{p} = -\begin{bmatrix} 1 & -2 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \end{bmatrix} = 0. $$

Figure P4.5 Decision Regions for Problem P4.3

In matrix form we have

$$ \mathbf{W} = \begin{bmatrix} {}_1\mathbf{w}^T \\ {}_2\mathbf{w}^T \end{bmatrix} = \begin{bmatrix} -3 & -1 \\ 1 & -2 \end{bmatrix} \quad \text{and} \quad \mathbf{b} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, $$

which completes our design.
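As a quick sanity check of this design, here is a minimal NumPy sketch (ours, not the book's) that pushes all eight input vectors through $\mathbf{a} = \mathrm{hardlim}(\mathbf{Wp} + \mathbf{b})$ with the weights and bias derived above. The `hardlim` helper follows the text's convention that an input of 0 produces an output of 1.

```python
import numpy as np

# Design from Problem P4.3: rows of W are 1w' and 2w', b holds the biases.
W = np.array([[-3, -1],
              [ 1, -2]])
b = np.array([1, 0])

def hardlim(n):
    """Hard limit transfer function: 1 if n >= 0, else 0."""
    return (n >= 0).astype(int)

# The eight input vectors and their two-bit class codes (targets).
inputs  = [( 1,  1), ( 1,  2), ( 2, -1), ( 2,  0),
           (-1,  2), (-2,  1), (-1, -1), (-2, -2)]
targets = [(0, 0), (0, 0), (0, 1), (0, 1),
           (1, 0), (1, 0), (1, 1), (1, 1)]

for p, t in zip(inputs, targets):
    a = hardlim(W @ np.array(p) + b)
    assert tuple(a) == t, f"misclassified {p}"
print("All eight vectors are classified correctly.")
```

Note that if you pick weight vectors of a different length (but the same direction), the biases must be recomputed from points on the boundaries as in Eq. (4.15).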
P4.4 Solve the following classification problem with the perceptron rule. Apply each input vector in order, for as many repetitions as it takes to ensure that the problem is solved. Draw a graph of the problem only after you have found a solution.

$$ \left\{ \mathbf{p}_1 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}, t_1 = 0 \right\} \; \left\{ \mathbf{p}_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, t_2 = 1 \right\} \; \left\{ \mathbf{p}_3 = \begin{bmatrix} -2 \\ 2 \end{bmatrix}, t_3 = 0 \right\} \; \left\{ \mathbf{p}_4 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, t_4 = 1 \right\} $$

Use the initial weights and bias:

$$ \mathbf{W}(0) = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad b(0) = 0. $$

We start by calculating the perceptron's output $a$ for the first input vector $\mathbf{p}_1$, using the initial weights and bias:

$$ a = \mathrm{hardlim}(\mathbf{W}(0)\mathbf{p}_1 + b(0)) = \mathrm{hardlim}\left( \begin{bmatrix} 0 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ 2 \end{bmatrix} + 0 \right) = \mathrm{hardlim}(0) = 1. $$

The output $a$ does not equal the target value $t_1$, so we use the perceptron rule to find new weights and biases based on the error:

$$ e = t_1 - a = 0 - 1 = -1, $$
$$ \mathbf{W}(1) = \mathbf{W}(0) + e\mathbf{p}_1^T = \begin{bmatrix} 0 & 0 \end{bmatrix} + (-1)\begin{bmatrix} 2 & 2 \end{bmatrix} = \begin{bmatrix} -2 & -2 \end{bmatrix}, $$
$$ b(1) = b(0) + e = 0 + (-1) = -1. $$

We now apply the second input vector $\mathbf{p}_2$, using the updated weights and bias:

$$ a = \mathrm{hardlim}(\mathbf{W}(1)\mathbf{p}_2 + b(1)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -2 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} - 1 \right) = \mathrm{hardlim}(1) = 1. $$

This time the output $a$ is equal to the target $t_2$. Application of the perceptron rule will not result in any changes:

$$ \mathbf{W}(2) = \mathbf{W}(1), \quad b(2) = b(1). $$

We now apply the third input vector:

$$ a = \mathrm{hardlim}(\mathbf{W}(2)\mathbf{p}_3 + b(2)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -2 \end{bmatrix} \begin{bmatrix} -2 \\ 2 \end{bmatrix} - 1 \right) = \mathrm{hardlim}(-1) = 0. $$

The output in response to input vector $\mathbf{p}_3$ is equal to the target $t_3$, so there will be no changes:

$$ \mathbf{W}(3) = \mathbf{W}(2), \quad b(3) = b(2). $$

We now move on to the last input vector $\mathbf{p}_4$:

$$ a = \mathrm{hardlim}(\mathbf{W}(3)\mathbf{p}_4 + b(3)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -2 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \end{bmatrix} - 1 \right) = \mathrm{hardlim}(-1) = 0. $$

This time the output $a$ does not equal the appropriate target $t_4$. The perceptron rule will result in a new set of values for $\mathbf{W}$ and $b$:

$$ e = t_4 - a = 1 - 0 = 1, $$
$$ \mathbf{W}(4) = \mathbf{W}(3) + e\mathbf{p}_4^T = \begin{bmatrix} -2 & -2 \end{bmatrix} + (1)\begin{bmatrix} -1 & 1 \end{bmatrix} = \begin{bmatrix} -3 & -1 \end{bmatrix}, $$
$$ b(4) = b(3) + e = -1 + 1 = 0. $$

We now must check the first vector $\mathbf{p}_1$ again. This time the output $a$ is equal to the associated target $t_1$:

$$ a = \mathrm{hardlim}(\mathbf{W}(4)\mathbf{p}_1 + b(4)) = \mathrm{hardlim}\left( \begin{bmatrix} -3 & -1 \end{bmatrix} \begin{bmatrix} 2 \\ 2 \end{bmatrix} + 0 \right) = \mathrm{hardlim}(-8) = 0. $$

Therefore there are no changes:

$$ \mathbf{W}(5) = \mathbf{W}(4), \quad b(5) = b(4). $$

The second presentation of $\mathbf{p}_2$ results in an error and therefore a new set of weight and bias values:

$$ a = \mathrm{hardlim}(\mathbf{W}(5)\mathbf{p}_2 + b(5)) = \mathrm{hardlim}\left( \begin{bmatrix} -3 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} + 0 \right) = \mathrm{hardlim}(-1) = 0. $$

Here are those new values:

$$ e = t_2 - a = 1 - 0 = 1, $$
$$ \mathbf{W}(6) = \mathbf{W}(5) + e\mathbf{p}_2^T = \begin{bmatrix} -3 & -1 \end{bmatrix} + (1)\begin{bmatrix} 1 & -2 \end{bmatrix} = \begin{bmatrix} -2 & -3 \end{bmatrix}, $$
$$ b(6) = b(5) + e = 0 + 1 = 1. $$

Cycling through each input vector once more results in no errors:

$$ a = \mathrm{hardlim}(\mathbf{W}(6)\mathbf{p}_3 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} -2 \\ 2 \end{bmatrix} + 1 \right) = 0 = t_3, $$
$$ a = \mathrm{hardlim}(\mathbf{W}(6)\mathbf{p}_4 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \end{bmatrix} + 1 \right) = 1 = t_4, $$
$$ a = \mathrm{hardlim}(\mathbf{W}(6)\mathbf{p}_1 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} 2 \\ 2 \end{bmatrix} + 1 \right) = 0 = t_1, $$
$$ a = \mathrm{hardlim}(\mathbf{W}(6)\mathbf{p}_2 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} + 1 \right) = 1 = t_2. $$

Therefore the algorithm has converged. The final solution is:

$$ \mathbf{W} = \begin{bmatrix} -2 & -3 \end{bmatrix}, \quad b = 1. $$

Now we can graph the training data and the decision boundary of the solution. The decision boundary is given by

$$ n = \mathbf{Wp} + b = w_{1,1}p_1 + w_{1,2}p_2 + b = -2p_1 - 3p_2 + 1 = 0. $$

To find the $p_2$ intercept of the decision boundary, set $p_1 = 0$:

$$ p_2 = -\frac{b}{w_{1,2}} = -\frac{1}{-3} = \frac{1}{3} \quad \text{if } p_1 = 0. $$

To find the $p_1$ intercept, set $p_2 = 0$:

$$ p_1 = -\frac{b}{w_{1,1}} = -\frac{1}{-2} = \frac{1}{2} \quad \text{if } p_2 = 0. $$

The resulting decision boundary is illustrated in Figure P4.6.

Figure P4.6 Decision Boundary for Problem P4.4

Note that the decision boundary falls across one of the training vectors. This is acceptable, given the problem definition, since the hard limit function returns 1 when given an input of 0, and the target for the vector in question is indeed 1.

P4.5 Consider again the four-class decision problem that we introduced in Problem P4.3. Train a perceptron network to solve this problem using the perceptron learning rule.

If we use the same target vectors that we introduced in Problem P4.3, the training set will be:

$$ \left\{ \mathbf{p}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \mathbf{t}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\} \left\{ \mathbf{p}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \mathbf{t}_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\} \left\{ \mathbf{p}_3 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}, \mathbf{t}_3 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\} \left\{ \mathbf{p}_4 = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \mathbf{t}_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\} $$
$$ \left\{ \mathbf{p}_5 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, \mathbf{t}_5 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\} \left\{ \mathbf{p}_6 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}, \mathbf{t}_6 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\} \left\{ \mathbf{p}_7 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, \mathbf{t}_7 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\} \left\{ \mathbf{p}_8 = \begin{bmatrix} -2 \\ -2 \end{bmatrix}, \mathbf{t}_8 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\} $$

Let's begin the algorithm with the following initial weights and biases:

$$ \mathbf{W}(0) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{b}(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. $$

The first iteration is

$$ \mathbf{a} = \mathrm{hardlim}(\mathbf{W}(0)\mathbf{p}_1 + \mathbf{b}(0)) = \mathrm{hardlim}\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, $$
$$ \mathbf{e} = \mathbf{t}_1 - \mathbf{a} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, $$
$$ \mathbf{W}(1) = \mathbf{W}(0) + \mathbf{e}\mathbf{p}_1^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} -1 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}, $$
$$ \mathbf{b}(1) = \mathbf{b}(0) + \mathbf{e} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} -1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. $$

The second iteration is

$$ \mathbf{a} = \mathrm{hardlim}(\mathbf{W}(1)\mathbf{p}_2 + \mathbf{b}(1)) = \mathrm{hardlim}\left( \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, $$
$$ \mathbf{e} = \mathbf{t}_2 - \mathbf{a} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \mathbf{W}(2) = \mathbf{W}(1) + \mathbf{e}\mathbf{p}_2^T = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}, \quad \mathbf{b}(2) = \mathbf{b}(1) + \mathbf{e} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. $$

The third iteration is

$$ \mathbf{a} = \mathrm{hardlim}(\mathbf{W}(2)\mathbf{p}_3 + \mathbf{b}(2)) = \mathrm{hardlim}\left( \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ -1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, $$
$$ \mathbf{e} = \mathbf{t}_3 - \mathbf{a} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, $$
$$ \mathbf{W}(3) = \mathbf{W}(2) + \mathbf{e}\mathbf{p}_3^T = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \end{bmatrix} \begin{bmatrix} 2 & -1 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 1 & -1 \end{bmatrix}, $$
$$ \mathbf{b}(3) = \mathbf{b}(2) + \mathbf{e} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}. $$

Iterations four through eight produce no changes in the weights:

$$ \mathbf{W}(8) = \mathbf{W}(7) = \mathbf{W}(6) = \mathbf{W}(5) = \mathbf{W}(4) = \mathbf{W}(3), $$
$$ \mathbf{b}(8) = \mathbf{b}(7) = \mathbf{b}(6) = \mathbf{b}(5) = \mathbf{b}(4) = \mathbf{b}(3). $$

The ninth iteration produces

$$ \mathbf{a} = \mathrm{hardlim}(\mathbf{W}(8)\mathbf{p}_1 + \mathbf{b}(8)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & 0 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, $$
$$ \mathbf{e} = \mathbf{t}_1 - \mathbf{a} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, $$
$$ \mathbf{W}(9) = \mathbf{W}(8) + \mathbf{e}\mathbf{p}_1^T = \begin{bmatrix} -2 & 0 \\ 1 & -1 \end{bmatrix} + \begin{bmatrix} 0 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix}, $$
$$ \mathbf{b}(9) = \mathbf{b}(8) + \mathbf{e} = \begin{bmatrix} -1 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}. $$

At this point the algorithm has converged, since all input patterns will be correctly classified. The final decision boundaries are displayed in Figure P4.7. Compare this result with the network we designed in Problem P4.3.

Figure P4.7 Final Decision Boundaries for Problem P4.5
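The hand iterations in Problems P4.4 and P4.5 follow a mechanical recipe, so they are easy to automate. Below is a minimal sketch (our code, not the book's; the function name `train_perceptron` and the stopping rule of one error-free pass are our choices) that applies $\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{e}\mathbf{p}^T$ and $\mathbf{b}^{new} = \mathbf{b}^{old} + \mathbf{e}$ in presentation order. Seeded with the P4.4 initial conditions, it should return $\mathbf{W} = [-2\;\;{-3}]$, $b = 1$, matching the result above.

```python
import numpy as np

def hardlim(n):
    """Hard limit transfer function: 1 if n >= 0, else 0."""
    return (n >= 0).astype(int)

def train_perceptron(P, T, W, b, max_passes=100):
    """Perceptron rule: W <- W + e p^T, b <- b + e, presenting the
    inputs in order until a full pass produces no updates."""
    for _ in range(max_passes):
        errors = 0
        for p, t in zip(P, T):
            e = t - hardlim(W @ p + b)   # error for this presentation
            if np.any(e != 0):
                W = W + np.outer(e, p)   # e p^T (rank-one update)
                b = b + e
                errors += 1
        if errors == 0:                  # converged: one clean pass
            return W, b
    raise RuntimeError("did not converge")

# Problem P4.4: single neuron (1x2 weight matrix, one bias element).
P = [np.array(p) for p in [(2, 2), (1, -2), (-2, 2), (-1, 1)]]
T = [np.array([t]) for t in (0, 1, 0, 1)]
W, b = train_perceptron(P, T,
                        np.zeros((1, 2), dtype=int),
                        np.zeros(1, dtype=int))
print(W, b)   # expected: [[-2 -3]] [1]
```

The same loop, seeded with $\mathbf{W}(0) = \mathbf{I}$, $\mathbf{b}(0) = [1\;\;1]^T$ and the eight training pairs of Problem P4.5, should likewise stop at $\mathbf{W} = \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix}$, $\mathbf{b} = [-1\;\;0]^T$, reproducing the nine iterations worked out above.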
