November 22, 2022    Name: __________    ID#: __________
COE 292-15: Introduction to Artificial Intelligence, Term 221 (Fall 2022)
Quiz 5 - Key Solutions

Question 1: The table below provides a training data set containing six examples with their class labels. Using a k-NN classifier, we wish to use this training data set to make a prediction for a new test point, T.

(a) (2 points) Fill the table with the Euclidean distance between the test point, T, and all training points.
(The coordinate table for points A-E and the handwritten distances are only partly legible in the scan.)

(b) (2 points) Using the distance values you found in (a), what is the predicted class of the test point, T, when k = 1 and when k = 3, respectively?
(a) -1, -1    (b) -1, +1    (c) +1, +1    (d) +1, -1

Question 2: (4 points) Consider the set of training points listed in the following table.

| x0  | x1   | x2   | class y |
| 1.0 | 1.0  | 2.0  | 1       |
| 1.0 | -2.0 | 5.0  | 1       |
| 1.0 | 3.0  | -1.0 | 1       |
| 1.0 | 3.0  | 3.0  | 0       |

Apply the Perceptron algorithm to find a boundary decision line. Assume that the initial weights are w0 = -3.0, w1 = -1.0, w2 = 3.0, and the learning rate α = 1. Also assume that the perceptron uses the threshold function ŷ = 1 if w0·x0 + w1·x1 + w2·x2 ≥ 0 and ŷ = 0 otherwise, and the weight update wi = wi + α(actual − predicted)·xi. Apply the algorithm for one round, in the given points' order, and document your steps by filling the following table.

| x0  | x1   | x2   | Actual class (y) | Predicted class (ŷ) | w0   | w1   | w2   |
| 1.0 | 1.0  | 2.0  | 1                | 1                   | -3.0 | -1.0 | 3.0  |
| 1.0 | -2.0 | 5.0  | 1                | 1                   | -3.0 | -1.0 | 3.0  |
| 1.0 | 3.0  | -1.0 | 1                | 0                   | -2.0 | 2.0  | 2.0  |
| 1.0 | 3.0  | 3.0  | 0                | 1                   | -3.0 | -1.0 | -1.0 |
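As a quick sanity check of the one-round trace in Question 2, the sketch below (an illustrative Python snippet, not part of the original key) re-runs the stated update rule wi = wi + α(y − ŷ)·xi with the given starting weights and learning rate.

```python
# Minimal perceptron check for Question 2, assuming the data and rule exactly as stated above.
data = [  # (x0, x1, x2, actual class y)
    (1.0, 1.0, 2.0, 1),
    (1.0, -2.0, 5.0, 1),
    (1.0, 3.0, -1.0, 1),
    (1.0, 3.0, 3.0, 0),
]
w = [-3.0, -1.0, 3.0]   # initial weights w0, w1, w2
alpha = 1.0             # learning rate

for x0, x1, x2, y in data:                      # one round, in the given order
    s = w[0] * x0 + w[1] * x1 + w[2] * x2       # weighted sum
    y_hat = 1 if s >= 0 else 0                  # threshold (step) activation
    for i, xi in enumerate((x0, x1, x2)):       # the update changes w only when y != y_hat
        w[i] += alpha * (y - y_hat) * xi
    print(f"x=({x0}, {x1}, {x2})  y={y}  y_hat={y_hat}  w={w}")
```

Running it prints the predicted class and the updated weights after each point; the per-row results should match the solution table above.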
Question 3: (3 points) Given the following confusion matrix of a binary classifier which predicts two classes:

|          | Predicted + | Predicted - |
| Actual + | 12          | 2           |
| Actual - | 8           | 28          |

The accuracy of this binary classifier is (12 + 28) / (12 + 28 + 8 + 2) = 80%, while the precision is 12 / (12 + 8) = 60%.

Question 4: (2 points) Which of the following statements is true about the k-Means clustering algorithm?
(a) Increasing the value of k always leads to a better result.
(b) It is an example of an unsupervised machine learning algorithm.
(c) It is equivalent to the k-Nearest Neighbor algorithm.
(d) It can be used for reinforcement learning.

Question 5: (2 points) Suppose that, in order to train and test a classifier model, you split the dataset into a training set and a testing set. The classifier then yields a 5% error rate on the training set and a 20% error rate on the testing set. Which of the following statements is TRUE about this model?
(a) The model is overfitting.
(b) The model is underfitting.
(c) The model has low variance.
(d) The model achieves the best fit possible, like all machine learning models.

Question 6: (2 points) In the following, select all statements that are TRUE about SVM.
(a) Support vectors are the data points near the decision boundary.
(b) There are at least two support vectors.
(c) SVM can handle non-linearly separable data by using the kernel trick.
(d) A soft-margin SVM does not allow any error.

Question 7: (3 points) Consider a game that has 6 states {A, B, C, D, E, F}. Suppose the game can start at any state but ends when we reach state F. Assume that α = 1 and γ = 0.9. Assume that the Q-learning algorithm was applied, and that the initial Q matrix and reward matrix are shown below. A dash (-) in the reward matrix means that it is not possible to go from that state to the other state. Update the Q-matrix if the action (E, D) is performed.

(The initial Q matrix and the reward matrix over states A-F are only partly legible in the scan.)

Solution:
Q(E, D) = R(E, D) + 0.9 · max[Q(D, B), Q(D, C), Q(D, E)]
        = 0 + 0.9 · max[50, 65, 80] = 0.9 · 80 = 72

(The updated Q matrix, with Q(E, D) = 72, is shown in the key but is not legible in the scan.)

King Fahd University of Petroleum & Minerals
College of Computing and Mathematics
Computer Engineering Department
COE 292: Introduction to Artificial Intelligence
Quiz #5    Section: __    Code: B    Duration: 20 min
Name: __________    KFUPM ID: __________

Question 1: (1 point) Given a scatter plot that shows two classes of data, namely a triangle class and a circle class, if you want to predict the class of the new data point x = 1 and y = 1 using Euclidean distance with 3-NN, 5-NN and 7-NN, the class of the new point is:
(The scatter plot and option A are not legible in the scan.)
B. 3-NN -> triangle, 5-NN -> circle, 7-NN -> circle
C. 3-NN -> circle, 5-NN -> triangle, 7-NN -> circle
D. All are triangle
E. None of the above

Question 2: (1 point) Consider the perceptron algorithm in its training phase with α = 0.1, W0 = 2, W1 = 1 and W2 = -1. Moreover, assume we are training with the point (2, 3), which is of actual class 0 but was misclassified as 1. After the perceptron updates its weights, what will be the values of the new weights?
A. W0 = 1.9, W1 = -1.2, W2 = 0.96
B. W0 = 1.9, W1 = 0.8, W2 = -1.3
(The remaining options are not legible in the scan.)

Solution: W0 = 2 + (0 - 1)(0.1)(1) = 1.9,  W1 = 1 + (0 - 1)(0.1)(2) = 0.8,  W2 = -1 + (0 - 1)(0.1)(3) = -1.3.

Question 3: (1 point) Consider the SVM algorithm. If the weight vector found normal to the best line separating the two classes is W = [3, 4, 0], what will be the maximal margin for this SVM?
(Options A-D are not legible in the scan.)
E. None of the above

Solution: maximal margin = 2 / |W|, where |W| = sqrt(w1^2 + w2^2) = sqrt(3^2 + 4^2) = 5, so the margin is 2/5 = 0.4.

Question 4: (1 point) Consider the SVM algorithm and the 2D plot of the data being used for classification. Identify the support vectors in the following figure.
(The figure, showing circle points c1, c2, ... and triangle points t1, t2, ..., is not legible in the scan.)
A. c1 and t1
B. (not legible in the scan)
D. c1 and t2
E. All of the above

Question 5: (1 point) Consider two different classifiers for the same data shown below (Classifier 1 and Classifier 2; the plots are not legible in the scan). What is the 0-1 loss function associated with each classifier?
A. Classifier 1: 3, Classifier 2: 7
B. Classifier 1: 5, Classifier 2: 3
(The remaining options are not legible in the scan.)

Solution note: the 0-1 loss function is the number of misclassified points.

Question 6: (1 point) Consider Q-learning in reinforcement learning. You play the same game discussed in class, where there are 6 states, namely {A, B, C, D, E, F}, and you win if you reach state F. The Q-matrix your algorithm has reached is given below (only partly legible in the scan). What will be the value of the new Q(C, D) if the computer starts the game from node C, the value of α = 1, γ = 0.8, and the reward matrix is as shown below (also only partly legible)?
A. (not legible in the scan)
B. Q(C, D) = 64
C. Q(C, D) = 100
D. Q(C, D) = 80
E. None of the above

Solution:
Q(C, D) = Q(C, D) + α [ R(C, D) + γ · max Q(D, a) - Q(C, D) ]
        = 64 + 1 · [0 + 0.8 · 80 - 64] = 64
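The Q-learning questions above (Question 7 of the first quiz and Question 6 here) both apply the same tabular update. Below is a small illustrative Python sketch of that update; it is not part of the original key, and the numeric values plugged in are the ones that are legible in the handwritten solutions.

```python
# Tabular Q-learning update: Q(s,a) <- Q(s,a) + alpha * (R(s,a) + gamma * max_a' Q(s',a') - Q(s,a)).
def q_update(q_sa, reward, gamma, next_q_values, alpha=1.0):
    """Return the updated Q(s, a) after taking action a in state s and landing in state s'."""
    return q_sa + alpha * (reward + gamma * max(next_q_values) - q_sa)

# Question 7 (alpha = 1, gamma = 0.9): action (E, D), with the Q(D, .) values read as 50, 65, 80.
print(q_update(q_sa=0, reward=0, gamma=0.9, next_q_values=[50, 65, 80]))   # -> 72.0

# Question 6 above (alpha = 1, gamma = 0.8): action (C, D), with Q(C, D) = 64 and max Q(D, .) = 80.
print(q_update(q_sa=64, reward=0, gamma=0.8, next_q_values=[80]))          # -> 64.0
```

With α = 1 and a current Q value of zero, the update reduces to R + γ·max Q(s', ·), which is the short form used in Question 7's solution.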
Version A
COE 292: Introduction to AI (Term 221)    Dec 5, 2022    Quiz 5
Name: __________    ID: __________
Every question has ONE correct answer and is worth 1.5 points.

Q1 - Q4: Consider the shown (3x2) game world that has 6 states {A, B, C, D, E, F} and four actions (right, left, up, down). In every new episode, the game starts by choosing a random state and ends when state F is reached, for which the player receives a reward of +10. For all other actions that do not lead to state F, the reward is -1. Shown below, Q0 is the Q function after initial training using the Q-learning algorithm. Assume α = 0.2 and γ = 0.5.

Q0 (entries as legible in the scan):
| S\A | right | left | up | down |
| A   | 2     | -    | -  | 3    |
| B   | -     | 5    | -  | 4    |
| C   | 6     | -    | 7  | 3    |
| D   | -     | 3    | 6  | 8    |
| E   | 8     | -    | 6  | -    |
| F   | -     | 10   | 9  | -    |

The Q-learning update used in the solutions is:
Q(s, a) <- Q(s, a) + α [ (r + γ · max_a' Q(s', a')) - Q(s, a) ]

Q1. Using Q0 as a starting point, what is the updated Q value after taking the action (C, right)?
A. 5.4
B. (not legible in the scan)
C. 6.4
D. 6.5
E. None of the above

Solution: Q(C, right) = 6 + 0.2[(-1 + 0.5 · 8) - 6] = 6 + 0.2(-3) = 5.4  (answer A).

Q2. Using Q0 as a starting point, what is the updated Q value after taking the action (B, left)?
A. 5.8
B. 5.2
C. (not legible in the scan)
D. 4.7
E. None of the above
(A handwritten note on the key reads: "There might be a mistake in this question.")

Q3. Using Q0 as a starting point and a greedy decision policy, which action is supposed to be taken from state C?
A. right
B. left
C. up
D. down
E. Any of the possible actions.

Q4. Using Q0 as a starting point and an ε-greedy decision policy, which action is supposed to be taken from state D?
A. right
B. left
C. up
D. down
E. Any of the possible actions from D.

Q5 - Q7: Given the shown data points comprising two classes, represented as diamonds and circles, two different classification algorithms produced the two shown boundary lines, "Boundary 1" and "Boundary 2", for which the diamond class is above the line and the circle class is below the line. (The plot is not legible in the scan.)

Q5. What is the total "0-1 loss function" for boundary 1?
A. 2
B. 3
C. 4
D. 5
E. 6

Q6. What is the total "0-1 loss function" for boundary 2?
(The options are not legible in the scan.)

Q7. Based on the "0-1 loss function", which boundary line is considered to be better?
A. Boundary line 1
B. Boundary line 2
C. The two boundary lines are the same
D. The two boundary lines cannot be compared
E. None of the above
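Q5-Q7 use the 0-1 loss, which (as noted in the earlier quiz) simply counts misclassified points. The sketch below is an illustrative Python snippet, not part of the key; the sample points and the boundary line are made up for demonstration, since the original plot is not legible.

```python
# 0-1 loss for a linear boundary a*x + b*y + c = 0: count the points that land on the wrong side.
# Convention, as in Q5-Q7: one class ("diamond") lies above the line, the other ("circle") below.
def zero_one_loss(points, a, b, c):
    """points: list of (x, y, label) with label 'diamond' or 'circle'."""
    loss = 0
    for x, y, label in points:
        predicted = "diamond" if a * x + b * y + c > 0 else "circle"
        loss += int(predicted != label)          # add 1 for every misclassified point
    return loss

# Hypothetical data and boundary, for illustration only.
sample = [(0, 2, "diamond"), (1, 3, "diamond"), (2, -1, "circle"), (3, 0, "circle"), (1, -2, "diamond")]
print(zero_one_loss(sample, a=0, b=1, c=-0.5))   # boundary y = 0.5 -> only the last point is misclassified -> 1
```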
Name: __________    ID: __________
COE 292, Term 221 - Introduction to Artificial Intelligence
Quiz #5 Solution    Date: Tuesday, Nov. 15, 2022

Question 1: (4 points) Given the following 6 labeled data points, classify the query point (X1, X2) = (2, 5) using a k-nearest-neighbor (k-NN) classifier that uses Euclidean distance for distance calculations, with k = 3.
(The table of the six points, their classes, and the handwritten distance calculations is only partly legible in the scan.)

Based on the classification of the 3 nearest neighbor points, the query point will be classified as -1.

Question 2: (3 points) Consider the table given below, which shows the predicted class vs. the actual class for all samples:

|                 | Predicted Positive | Predicted Negative |
| Actual Positive | 10                 | 2                  |
| Actual Negative | 3                  | 5                  |

Determine the number of True Positives and False Negatives, and compute the accuracy.
True Positives = 10, False Negatives = 2.
Accuracy = (10 + 5) / (10 + 5 + 2 + 3) = 15/20 = 75%.

Question 3: (5 points) Consider the set of training points listed in the following table.
(The training-point table is not legible in the scan.)

Apply the Perceptron algorithm to find a boundary decision line. Assume that the initial weights are w0 = 2.0, w1 = -2.0, w2 = 2.0, and the learning rate α = 1. Apply the algorithm for two rounds, in the given points' order. Document your steps by filling the following table. Then determine whether a linear classifier has been found after the two rounds or not, and determine the equation of the resulting boundary decision line.

(The two-round solution table is only partly legible in the scan.)

Yes, all points have been classified correctly after the two rounds. The equation of the resulting boundary decision line is: 3·x2 - x1 + 1 = 0.

Question 4: (2 points) Consider a game that has 6 states {A, B, C, D, E, F}. We can start at any state, but the game ends when we reach state F. Assume that α = 0.7 and γ = 0.8. Using Q-learning, compute the updated Q value for going from state D to state E, assuming the reward (R) and Q matrices given below (only partly legible in the scan).

Q(state, action) = Q(state, action) + α [ R(state, action) + γ · max Q(next state, all actions) - Q(state, action) ]

Q(D, E) = (1 - 0.7) · 80 + 0.7 · [0 + 0.8 · max[Q(E, A), Q(E, D), Q(E, F)]]
        = 0.3 · 80 + 0.7 · [0.8 · 100] = 24 + 56 = 80

Equivalently, Q(D, E) = Q(D, E) + α [ R(D, E) + γ · max Q(E, ·) - Q(D, E) ] = 80 + 0.7 · [0 + 0.8 · 100 - 80] = 80.

Quiz #3 - COE 292: Introduction to Artificial Intelligence - Term 213 - Sec-01
Student Name: KEY SOLUTION    ID: __________

Question 1: Fill in the blanks (4 points)
- Supervised learning uses labeled data, whereas unsupervised learning uses unlabeled data.
- An example of the first type of learning is k-NN or SVM, and an example of the latter is K-Means.
- The parameter K in the K-Means algorithm refers to the number of clusters, whereas in the k-NN classifier, k refers to the number of nearest neighbor samples.
- In overfitting, the training error is much lower than the testing error, whereas in underfitting both errors are high.

Question 2: (5 points) Given the following data points and labels: class 0: (0, -2), (3, -2), (2, 0), (2, 2); the other class: (0, 0), (0, 3), (2, 5), plus one point that is not legible in the scan.

a) Draw on the graph the decision boundaries for 1-Nearest Neighbor. Your drawing should be accurate enough that we can tell in which region the integer-valued coordinate points in the diagram lie. (3 points)
(The graph is not reproduced here.)

b) What class does the above 1-NN predict for the new point (1, -1.01)? Explain why. (1 point)
Class: 0. Why: the closest data point is (0, -2), which is of class 0.

c) What class does the above 3-NN predict for the new point (1, -1.01)? Explain why. (1 point)
Class: 0. Why: the majority of the closest data points, including (0, -2) and (2, 0), are of class 0.

Question 3: (4 points) Given the data points (see plot) Negative: (-1, 0), (2, -2); Positive: (1, 0), and using the standard perceptron algorithm discussed in class, find the weights of the coefficients a, b, and c obtained using a learning rate (λ) of 1.0 and zero initial weights. Assume that the points are examined in the order given in the table below.
(The solution table is only partly legible in the scan.)

Question 4: (3 points) Given the following confusion matrix (its values are only partly legible in the scan):

a) Fill in the following table with the right values from the confusion matrix above. (1.5 points)

b) Find the precision and the recall using Precision = TP / (TP + FP) and Recall = TP / (TP + FN). (1.5 points)
Precision = 0.78    Recall = 0.90

Question 3 (alternate version): (3 points) Given the following confusion matrix (partly legible in the scan; the visible counts are 4, 2, and 18):

b) Fill in the following table with the right values from the confusion matrix.
c) Find the specificity and the recall using Specificity = TN / (TN + FP) and Recall = TP / (TP + FN).
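Several of the questions above (Question 2 of the Nov. 15 quiz and Questions 3-4 of this quiz) ask for accuracy, precision, recall, or specificity from a confusion matrix. The helper below is an illustrative Python snippet, not part of the key, that computes all four from the counts TP, FP, FN, TN.

```python
# Standard binary-classification metrics from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),    # fraction of all samples classified correctly
        "precision":   tp / (tp + fp),                     # of the predicted positives, how many are truly positive
        "recall":      tp / (tp + fn),                     # of the actual positives, how many were found
        "specificity": tn / (tn + fp),                     # of the actual negatives, how many were found
    }

# Question 2 of the Nov. 15 quiz: TP=10, FP=3, FN=2, TN=5 -> accuracy 0.75, as in the key.
print(metrics(tp=10, fp=3, fn=2, tn=5))
```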
Quiz #4 - COE 292: Introduction to Artificial Intelligence - Term 213 - Sec-01
Student Name: KEY SOLUTION    ID: __________

Question 1: (7 points)
A. Fill in the gap with the appropriate words: A non-linearly separable training set in a given feature dimension can be made linearly separable in another dimension, using a suitable kernel transformation. (3)

B. Three different classifiers are trained on the same data. Their decision boundaries are shown below (Model 1, Model 2, Model 3; the figures are not reproduced here). Fill in the gaps with the appropriate words (high, low, training, testing): (3)
a) Model 1 has a high error rate on the training set and a high error rate on the testing set.
b) Model 2 has a low error rate on the training set and a low error rate on the testing set.
c) Model 3 has a low error rate on the training set and a high error rate on the testing set.

C. Which of the above classification models would you pick as the most suitable? (1)

Question 2: (4 points) The figure shows a training set with 8 samples used to train a Support Vector Classifier (SVC). (The figure is not reproduced here.)
(a) Mark on the graph the points that are the support vectors. (2)
(b) If the sample point (5, 1) is removed, would the maximal margin classifier change? What would be the new support vectors? (2)

Question 3: (4 points) Given the 8 training samples of unlabeled data (see graph), we intend to use the K-Means algorithm with the Euclidean distance to separate the samples into two clusters. Assume we start with points A and D as the initial cluster centers, for clusters 1 and 2 respectively.

a) After running the first iteration, specify the samples belonging to each of the two clusters, 1 and 2. (2)
Cluster 1: {A, B, E}    Cluster 2: (not legible in the scan)

b) After the first iteration is finished, what would be the new clusters? (2)
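For Question 3, one K-Means iteration consists of assigning every sample to its nearest center and then recomputing each center as the mean of its assigned samples. The sketch below is an illustrative Python snippet, not part of the key; the point coordinates are made up, because the quiz's graph is not legible.

```python
from math import dist   # Euclidean distance (Python 3.8+)

def kmeans_iteration(points, centers):
    """One K-Means step: assign each sample to its nearest center, then return the new centers."""
    clusters = [[] for _ in centers]
    for name, p in points.items():
        nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
        clusters[nearest].append(name)
    new_centers = []
    for members in clusters:
        coords = [points[m] for m in members]
        new_centers.append(tuple(sum(c) / len(coords) for c in zip(*coords)))
    return clusters, new_centers

# Hypothetical coordinates for samples A..H, with A and D as the initial centers (as in Question 3).
points = {"A": (1, 1), "B": (2, 1), "C": (5, 4), "D": (6, 5),
          "E": (1, 2), "F": (6, 4), "G": (5, 5), "H": (2, 2)}
clusters, new_centers = kmeans_iteration(points, centers=[points["A"], points["D"]])
print(clusters)      # sample names assigned to cluster 1 and cluster 2 after the first iteration
print(new_centers)   # the recomputed cluster centers (means of the assigned samples)
```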
Name: __________    ID: __________    Section: __________
COE 292: Introduction to AI (T212)    Quiz 4 (Apr 3, 2022)

Q1. (6 pts) Consider the training data shown below, in which there are two classes represented as squares and circles. The Perceptron algorithm is to be used to obtain a linear classifier (ax + by + c = 0). Assume the following:
- Initial line: a = 1, b = 1, c = -3 (the dotted line)
- Above the initial line is the square class
- Below the initial line is the circle class
- Learning rate λ = 0.1

In the table below, fill in the updated parameters for the first 2 steps of the algorithm, given that the indicated points are picked.

| Step | Point  | a   | b   | c    |
| 1    | (2, 4) | 0.8 | 0.6 | -3.1 |
| 2    | (1, 3) | 0.9 | 0.9 | -3.0 |

Q2. (4 pts) Consider the data shown below, in which there are two classes represented as squares and circles. Suppose that the k-NN algorithm is used to classify a new data point X. Assuming Euclidean distances, what will be the class of the new data point in each of the following cases? (The data plot is not reproduced here.)
a) 1-NN: circle
b) 3-NN: square
c) 5-NN: circle
d) 7-NN: circle

King Fahd University of Petroleum and Minerals
College of Computing and Mathematics
Department of Computer Engineering
COE 292: Introduction to Artificial Intelligence, Term 212 (Spring 2022)
Quiz 4 - Solutions
Name: __________    ID#: __________

Q1. Indicate whether the following statement is TRUE (T) or FALSE (F). (1 pt)
TRUE: A non-linearly separable training set in a given feature dimension can be made linearly separable in another dimension.

Q2. Three different classifiers are trained on the same data. Their decision boundaries are shown below (Model 1, Model 2, Model 3; figures not reproduced here).

(a) (2 points) Based on the figures above, which of the following statements are TRUE?
i. Model 1 has a low error rate on the training set and a low error rate on the testing set.
ii. Model 1 has a high error rate on the training set and a low error rate on the testing set.
iii. Model 3 has a low error rate on the training set and a low error rate on the testing set.
iv. Model 3 has a low error rate on the training set and a high error rate on the testing set.

(b) (1 point) Which of the above classification models would you pick as the most suitable?
Model 2. Even though it results in some errors on the training data, it would give the least errors on the testing data over the long term.

Q3. The figure below shows a training set with 8 point samples used to train a Support Vector Classifier (SVC). (The figure is not reproduced here.)
(a) (2 points) Which points are the support vectors? Mark them in the figure above.
(b) (1 point) If we remove the sample at (5, 1), the triangle, would the maximal margin classifier change? What would be the new support vectors?
Yes; the support vectors become (2, 2), (3, 1) and (4, 5).

Q4. (3 points) The following graph represents a scaled data set consisting of 2 classes, shown in the plot as red circles and blue triangles. Suppose we intend to use the perceptron algorithm to find the line (ax + by + c) that best separates the two classes. Suppose the initial values of a, b, c, at epoch i, are -1, 3, 2, respectively, which gives us the line shown in the graph. Show how the perceptron algorithm will adjust these weights, at epoch i+1, to correctly classify the training sample at (-4, -3), assuming a learning rate λ = 0.5. Also assume that the perceptron uses the threshold function Z = 1 if ax + by + c >= 0 and Z = 0 otherwise, such that when Z = 1 the sample is classified as a red circle and when Z = 0 it is classified as a blue triangle.

| epoch | a                 | b                 | c             | ax + by + c                   | Z |
| i     | -1                | 3                 | 2             | (-1)(-4) + 3(-3) + 2 = -3     | 0 |
| i+1   | -1 + 0.5(-4) = -3 | 3 + 0.5(-3) = 1.5 | 2 + 0.5 = 2.5 | (-3)(-4) + 1.5(-3) + 2.5 = 10 | 1 |

Observations:
1. The sample at (-4, -3) should be classified as a red circle, i.e., it should be above the line. Therefore, in order to update the line, we set the weights a = a + λp, b = b + λq and c = c + λ.
2. Note that the new weights result in a correct classification of this sample point but may result in the misclassification of others. This is due to the selected value of λ.

King Fahd University of Petroleum & Minerals
College of Computing and Mathematics
Computer Engineering Department
COE 292: Introduction to Artificial Intelligence
Quiz #3    Duration: 15 min
Name: __________    Student ID: __________    Section: __________

Question 1: Given the following data points and labels, Negative: (-1, 0), (2, 1), (2, -2); Positive: (0, 0), (1, 0), as shown below (plot not reproduced here):

a) Draw the decision boundaries for 1-Nearest Neighbor on the graph above. Your drawing should be accurate enough that we can tell in which region the integer-valued coordinate points in the diagram lie. (6 points)

b) What class does 1-NN predict for the new point (1, -1.01)? Explain why. (3 points)
Class: Positive (+). Why: this is the class of the closest data point, (1, 0).

c) What class does 3-NN predict for the new point (1, -1.01)? Explain why. (6 points)
Class: Positive (+). Why: it is the majority class of the three closest data points, (0, 0), (1, 0) and (2, -2).
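The 1-NN and 3-NN predictions in Question 1 can be checked mechanically: compute the Euclidean distance from the query to every labeled point, sort, and take a majority vote among the k nearest. A small illustrative Python sketch (not part of the key) follows.

```python
from collections import Counter
from math import dist   # Euclidean distance (Python 3.8+)

def knn_predict(query, labeled_points, k):
    """labeled_points: list of ((x, y), label). Returns the majority label of the k nearest points."""
    nearest = sorted(labeled_points, key=lambda item: dist(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Data from Question 1 of the Quiz #3 above.
data = [((-1, 0), "-"), ((2, 1), "-"), ((2, -2), "-"), ((0, 0), "+"), ((1, 0), "+")]
print(knn_predict((1, -1.01), data, k=1))   # -> '+', the nearest point is (1, 0)
print(knn_predict((1, -1.01), data, k=3))   # -> '+', majority of (1, 0), (2, -2), (0, 0)
```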
Question 2: (8 points) The data points are: Negative: (-1, 0), (2, -2); Positive: (1, 0). Using the standard perceptron algorithm discussed in class, find the weights of the linear separator obtained using a learning rate (λ) of 1.0 and zero initial weights. Assume that the points are examined in the order given in the table below:

| Test Point (p, q) | Label | Misclassified? | a  | b | c  |
| -                 | -     | -              | 0  | 0 | 0  |
| (1, 0)            | -     | Yes            | -1 | 0 | -1 |
| (2, -2)           | -     | No             | -1 | 0 | -1 |
| (1, 0)            | +     | Yes            | 0  | 0 | 0  |

King Fahd University of Petroleum & Minerals
College of Computing and Mathematics
Computer Engineering Department
COE 292: Introduction to Artificial Intelligence
Quiz #3    Section: __    Duration: 15 min
Name: __________    Student ID: __________

Question 1: (10 points) Given the following data points and labels:

| Point | Coordinates | Class |
| A     | (1, 0)      | Black |
| B     | (0, -2)     | White |
| C     | (2, 0)      | White |
| D     | (2, 2)      | White |
| E     | (1, 2)      | Black |
| New   | (0.5, -2)   | ?     |

a) Ignoring the class of the point labeled "New", draw on the graph above the decision boundaries for 1-Nearest Neighbor. (5 points)
(The solution graph is not reproduced here.)

b) Identify which class a k-NN classifier will predict if the point labeled "New" is classified using the values of k shown in the table below. (5 points)

| k | Predicted Class | Labels of the k considered points |
| 1 | White [1 point] | B [0.25 point]                    |
| 3 | White [1 point] | A, B, C [0.25 point each]         |
| 5 | White [1 point] | A, B, C, D, E [0.25 point each]   |

Question 2: (2 points) The curve below was obtained after using a maximal margin classifier. (The figure is not reproduced here.)
a. Identify how many support vectors there are. (1 point)  Answer: 4.
b. Circle the support vectors in the figure above. (1 point)

Question 3: (8 points) The table below shows the evolution of the weights of a perceptron as it was trained on some data. The perceptron uses a line of the form ax + by + c = 0 to classify the data points, as discussed in class. Using the weight evolution, identify whether each chosen training point was classified correctly or misclassified.

| Training Point | a    | b    | c    | Miss/Correct |
| Initial        | 1    | 1    | 0    | -            |
| A              | 1    | 1    | 0    | Correct      |
| B              | 2    | 1    | -1   | Miss         |
| C              | 1    | 2    | 1    | Miss         |
| D              | 1    | 2    | 2    | Miss         |
| E              | 1    | 1.5  | 1.5  | Miss         |
| F              | 1.2  | 1.3  | 1.8  | Miss         |
| G              | 1.25 | 1.36 | 1.77 | Miss         |
| H              | 1.25 | 1.36 | 1.77 | Correct      |

Student Name: __________    ID: __________

Question 1: (6 x 2 = 12 points) Choose the correct answer.

1. In what type of learning is labeled training data used?
a) unsupervised learning
b) supervised learning
c) reinforcement learning
d) active learning

2. Data used to build a data mining model is called:
a) training data
b) validation data
c) test data
d) hidden data

3. Of the following examples, which one would you address using a supervised learning algorithm?
a) Given email labeled as spam or not spam, learn a spam filter.
b) Given a set of news articles found on the web, group them into sets of articles about the same story.
c) Given a database of customer data, automatically discover market segments and group customers into different market segments.
d) Find the patterns in market basket analysis.

4. In simple terms, machine learning is:
a) training based on historical data
b) prediction to answer a query
c) both a and b
d) automation of complex tasks

5. Which of the following approaches is an example of unsupervised learning?
a) Support vector machine
b) KNN algorithm
c) K-means algorithm
d) ANN

6. The parameter K in the K-Means algorithm refers to:
a) the algorithm complexity
b) nothing, it is just an arbitrary fixed value
c) the number of iterations
d) the number of clusters

Question 2: (2 points) Given the hyperplane defined by the line y = x1 - 2x2, are the following points correctly classified, and why?
1) x = (1, 0), y = 1
2) x = (1, 1), y = 1

The equation of the hyperplane (a line in the 2D space) may be written as ax1 + bx2 + c = 0, where a = 1, b = -2 and c = 0. Therefore, a point is correctly classified if ax1 + bx2 + c > 0, i.e. x1 - 2x2 > 0:
1) x = (1, 0):  x1 - 2x2 = 1 - 2(0) = 1 > 0, so the point (1, 0), having y = 1, is correctly classified.
2) x = (1, 1):  x1 - 2x2 = 1 - 2(1) = -1 < 0, hence the point (1, 1), having y = 1, is not correctly classified.
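Question 2's check reduces to evaluating the sign of ax1 + bx2 + c for each point and comparing it with the given label (the label convention assumed here is y = 1 on the positive side of the line). A tiny illustrative Python snippet, not part of the key:

```python
# Sign test for the linear classifier x1 - 2*x2 > 0 (a = 1, b = -2, c = 0), as in Question 2.
def correctly_classified(x1, x2, y, a=1.0, b=-2.0, c=0.0):
    predicted = 1 if a * x1 + b * x2 + c > 0 else 0   # predict class 1 on the positive side of the line
    return predicted == y

print(correctly_classified(1, 0, y=1))   # True:  1 - 2*0 =  1 > 0
print(correctly_classified(1, 1, y=1))   # False: 1 - 2*1 = -1 < 0
```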
Question 3: (6 points) Given the following labeled data points, belonging to 2 classes A and B (see figure below), assume a k-NN classifier with the Manhattan distance used as the metric.
a) With K = 1, find to which classes the points C1 and C2 belong. (2)
b) Now with K = 3, find to which classes the points C1 and C2 belong. (4)

Note: The Manhattan distance between two points p1 = (x1, y1) and p2 = (x2, y2) is defined as d = |x1 - x2| + |y1 - y2|.

(The figure and the table of points with their distances to C1 and C2 are only partly legible in the scan.)
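Question 3 is the same k-NN procedure as before, but with the Manhattan (L1) distance d = |x1 - x2| + |y1 - y2|. Below is an illustrative Python sketch, not part of the key; the labeled points and the query locations are hypothetical, since the quiz's table is only partly legible.

```python
from collections import Counter

def manhattan(p, q):
    """Manhattan (L1) distance between two 2D points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def knn_manhattan(query, labeled_points, k):
    """Majority label among the k nearest points under the Manhattan distance."""
    nearest = sorted(labeled_points, key=lambda item: manhattan(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical class-A and class-B points (the quiz's actual coordinates are not legible).
data = [((0, 2), "A"), ((2, 4), "A"), ((0, 3), "A"), ((4, 0), "B"), ((3, 1), "B"), ((5, 2), "B")]
for query in [(1, 3), (4, 1)]:                       # stand-ins for the quiz's points C1 and C2
    print(query, knn_manhattan(query, data, k=1), knn_manhattan(query, data, k=3))
```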
