
CS 229 - Machine Learning                                     https://stanford.edu/~shervine/l/ar/

Machine Learning tips and tricks cheatsheet

Afshine Amidi and Shervine Amidi

Rabi' al-Thani 14, 1441 AH

Translated by Fares Al-Qunaieer. Reviewed by Zaid Alyafeai.


Classification metrics

In the context of binary classification, these are the main metrics that are important to track in order to assess the performance of the model.

❒ Confusion matrix – The confusion matrix is used to get a more complete picture when assessing the performance of a model. It is defined as follows:

                                  Predicted class
                          +                              -
Actual class   +    TP (True Positives)        FN (False Negatives)
                                                Type II error
               -    FP (False Positives)       TN (True Negatives)
                     Type I error
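These counts can be computed directly from paired label/prediction vectors. Below is a minimal illustrative sketch in Python (not part of the original cheatsheet; the y_true and y_pred arrays are assumed example values):

```python
import numpy as np

# Hypothetical example vectors (assumed, for illustration only): 1 = positive, 0 = negative
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# Each cell of the confusion matrix counts one (actual, predicted) combination
TP = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
FN = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives (Type II error)
FP = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives (Type I error)
TN = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives

print(f"TP={TP} FN={FN}")
print(f"FP={FP} TN={TN}")
```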

❒ Main metrics – The following metrics are commonly used to assess the performance of classification models:

Metric                   Formula                            Interpretation
Accuracy                 (TP + TN)/(TP + TN + FP + FN)      Overall performance of the model
Precision                TP/(TP + FP)                       How accurate the positive predictions are
Recall (Sensitivity)     TP/(TP + FN)                       Coverage of actual positive samples
Specificity              TN/(TN + FP)                       Coverage of actual negative samples
F1 score                 2TP/(2TP + FP + FN)                Hybrid metric, useful for unbalanced classes
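These formulas translate directly into code. A minimal sketch (not part of the original cheatsheet; the confusion-matrix counts below are assumed example values):

```python
# Hypothetical confusion-matrix counts (assumed, for illustration only)
TP, FP, FN, TN = 40, 10, 5, 45

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # overall performance of the model
precision   = TP / (TP + FP)                    # accuracy of the positive predictions
recall      = TP / (TP + FN)                    # coverage of actual positives (sensitivity)
specificity = TN / (TN + FP)                    # coverage of actual negatives
f1_score    = 2 * TP / (2 * TP + FP + FN)       # hybrid metric for unbalanced classes

print(accuracy, precision, recall, specificity, f1_score)
```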

❒ ROC – The receiver operating curve, also noted ROC, is the plot of the true positive rate (TPR) against the false positive rate (FPR) obtained by varying the decision threshold. These rates are summed up in the table below:

Metric                        Formula            Equivalent
True Positive Rate (TPR)      TP/(TP + FN)       Recall, sensitivity
False Positive Rate (FPR)     FP/(TN + FP)       1 - specificity

❒ AUC – The area under the receiver operating curve, also noted AUC or AUROC, is the area below the ROC curve (illustrated by a figure in the original cheatsheet).
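A minimal NumPy sketch of this threshold sweep, with the AUC approximated by the trapezoidal rule (illustrative only; the labels and scores are assumed example values, and a library routine such as scikit-learn's roc_curve would normally be used instead):

```python
import numpy as np

# Hypothetical labels and predicted scores (assumed, for illustration only)
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.5])

# Sweep the decision threshold from high to low and record (FPR, TPR) at each value
thresholds = np.sort(np.unique(np.concatenate(([0.0, 1.0], scores))))[::-1]
tpr, fpr = [], []
for t in thresholds:
    y_pred = (scores >= t).astype(int)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    tpr.append(tp / (tp + fn))
    fpr.append(fp / (fp + tn))

# AUC: area under the (FPR, TPR) curve
auc = np.trapz(tpr, fpr)
print(auc)
```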

Regression metrics

❒ Basic metrics – Given a regression model f, the following metrics are commonly used to assess the performance of the model:




Total sum of squares:      $SS_{tot} = \sum_{i=1}^{m} (y_i - \bar{y})^2$
Explained sum of squares:  $SS_{reg} = \sum_{i=1}^{m} (f(x_i) - \bar{y})^2$
Residual sum of squares:   $SS_{res} = \sum_{i=1}^{m} (y_i - f(x_i))^2$

❒ Coefficient of determination – The coefficient of determination, often noted R² or r², provides a measure of how well the model matches the observed outcomes, and is defined as follows:

$R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$
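A short NumPy sketch of these quantities (illustrative only; the observed values y and predictions y_hat are assumed example arrays):

```python
import numpy as np

# Hypothetical observed values and model predictions (assumed, for illustration only)
y     = np.array([3.0, 4.5, 6.1, 8.0, 9.2])
y_hat = np.array([2.8, 4.9, 5.8, 8.3, 9.0])

ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
ss_reg = np.sum((y_hat - y.mean()) ** 2)   # explained sum of squares
ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares

r2 = 1 - ss_res / ss_tot                   # coefficient of determination
print(r2)
```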

❒ Main metrics – The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they consider:

Mallow's Cp:    $C_p = \frac{SS_{res} + 2(n+1)\hat{\sigma}^2}{m}$
AIC:            $\mathrm{AIC} = 2\left[(n+2) - \log(L)\right]$
BIC:            $\mathrm{BIC} = \log(m)(n+2) - 2\log(L)$
Adjusted R²:    $R^2_{adj} = 1 - \frac{(1-R^2)(m-1)}{m-n-1}$

where L is the likelihood and $\hat{\sigma}^2$ is an estimate of the variance associated with each response.
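For instance, the adjusted R² penalizes the plain R² for the number of variables used relative to the sample size. A minimal sketch (the values of r2, m and n below are assumptions for illustration):

```python
# Hypothetical values: plain R-squared, number of samples m, number of variables n
r2, m, n = 0.90, 100, 8

# Adjusted R-squared penalizes model complexity relative to the sample size
r2_adjusted = 1 - (1 - r2) * (m - 1) / (m - n - 1)
print(r2_adjusted)  # slightly lower than r2
```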
Model selection

❒ Vocabulary – When selecting a model, we distinguish 3 different parts of the data that we have, as follows:

- Training set: the model is trained on it; usually 80% of the dataset.
- Validation set: the model is assessed on it; usually 20% of the dataset; also called the hold-out or development set.
- Testing set: unseen data on which the model gives predictions.

Once the model has been chosen, it is trained on the entire dataset and tested on the unseen testing set (illustrated by a figure in the original cheatsheet).
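A common way to carve out these splits is a random shuffle of indices. A minimal NumPy sketch under the 80%/20% convention above (the dataset itself is an assumed example, not from the cheatsheet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset of m samples (assumed, for illustration only)
m = 1000
X = rng.normal(size=(m, 5))
y = rng.normal(size=m)

# Shuffle indices, then take ~80% for training and ~20% for validation
indices = rng.permutation(m)
split = int(0.8 * m)
train_idx, val_idx = indices[:split], indices[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
print(len(X_train), len(X_val))  # 800 200
```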

❒ Cross-validation – Cross-validation, also noted CV, is a method used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:

k-fold:
- Training on k - 1 folds and assessment on the remaining one
- Generally k = 5 or 10

Leave-p-out:
- Training on n - p observations and assessment on the p remaining ones
- The case p = 1 is called leave-one-out

The most commonly used method is k-fold cross-validation: the training data is split into k folds, the model is trained on k - 1 folds and validated on the remaining one, and this is repeated k times. The error is then averaged over the k folds and is called the cross-validation error.
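A bare-bones k-fold loop looks like the sketch below (NumPy only, k = 5; the data and the toy least-squares model are assumptions standing in for whatever model is being selected):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression data (assumed, for illustration only)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

k = 5
indices = rng.permutation(len(y))
folds = np.array_split(indices, k)

errors = []
for i in range(k):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # Toy model: ordinary least squares fit on the k-1 training folds
    theta, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    y_hat = X[val_idx] @ theta
    errors.append(np.mean((y[val_idx] - y_hat) ** 2))  # validation error on the held-out fold

print(np.mean(errors))  # cross-validation error: average over the k folds
```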
❒ Regularization – The regularization procedure aims at preventing the model from overfitting the data, and thus deals with high-variance issues. The following table sums up the most commonly used regularization techniques:

LASSO: penalty $\lambda\|\theta\|_1$ added to the cost, with $\lambda \in \mathbb{R}$
- Shrinks coefficients to 0
- Good for variable selection

Ridge: penalty $\lambda\|\theta\|_2^2$ added to the cost, with $\lambda \in \mathbb{R}$
- Makes coefficients smaller

Elastic Net: penalty $\lambda\left[(1-\alpha)\|\theta\|_1 + \alpha\|\theta\|_2^2\right]$ added to the cost, with $\lambda \in \mathbb{R}$, $\alpha \in [0,1]$
- Tradeoff between variable selection and small coefficients
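As one concrete illustration, ridge regression with squared loss has a closed-form solution in which the $\lambda\|\theta\|_2^2$ penalty simply adds $\lambda$ to the diagonal of the normal equations. A minimal NumPy sketch (the data and the value of lam are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data (assumed, for illustration only)
X = rng.normal(size=(50, 4))
y = X @ np.array([2.0, 0.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=50)

lam = 1.0  # regularization strength lambda

# Ordinary least squares vs. ridge: the penalty shrinks the coefficients
theta_ols   = np.linalg.solve(X.T @ X, X.T @ y)
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print(theta_ols)
print(theta_ridge)  # smaller in magnitude than the OLS coefficients
```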




Diagnostics

❒ Bias – The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.

❒ Variance – The variance of a model is the variability of the model prediction for given data points.

❒ Bias/variance tradeoff – The simpler the model, the higher the bias; the more complex the model, the higher the variance. The different regimes are summed up below:

Underfitting
- Symptoms: high training error; training error close to test error; high bias
- Possible remedies: complexify the model, add more features, train longer

Just right
- Symptoms: training error slightly lower than test error

Overfitting
- Symptoms: very low training error; training error much lower than test error; high variance
- Possible remedies: perform regularization, get more data

(The regression, classification and deep learning illustrations of these regimes appear as figures in the original cheatsheet.)

❒ Error analysis – Error analysis is analyzing the root cause of the difference in performance between the current model and the perfect model.

❒ Ablative analysis – Ablative analysis is analyzing the root cause of the difference in performance between the current model and the baseline model.
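A quick way to spot these regimes in practice is to compare training and validation error as model complexity grows. A minimal NumPy sketch using polynomial degree as the complexity knob (the data and degrees are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D dataset with a nonlinear trend, split into train/validation halves
x = np.linspace(-3, 3, 60)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)
x_tr, y_tr, x_va, y_va = x[::2], y[::2], x[1::2], y[1::2]

for degree in [1, 4, 12]:  # low, moderate, high model complexity
    coeffs = np.polyfit(x_tr, y_tr, degree)
    err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    # High training error -> underfitting; training error much lower than validation error -> overfitting
    print(degree, round(err_tr, 3), round(err_va, 3))
```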

Stanford University                                                              Fall 2018
