In the name of God, the Most Gracious, the Most Merciful

Al-Neelain University

Faculty of Computer Science and Information Technology

Master of Information Technology

Report (Intelligent Systems)

Prepared by the students:

- Umm Ayman Ahmed Babiker
- Safia Najih Nouri Al-Badri
- Afrah Ali Siddiq
- Ashraf Hassan Abdel Aziz Mustanad

Supervised by:

Dr. Jaafar Zain Al-Abideen

Contents

- Expert Systems
- The Simple Perceptron
- The Back-Propagation Algorithm
- The Hopfield Network
- Hebbian Learning
- Kohonen Learning
- Fuzzy Logic and Fuzzy Inference
- The Genetic Algorithm

Expert Systems:

An expert system for diagnosing the six childhood diseases
Expert Systems
Expert systems are one of the most powerful branches of artificial intelligence, which is in turn one of the most powerful branches of computer science.
So what are expert systems?
They are programs that emulate the performance of a human expert in a particular domain, by collecting and using the knowledge and experience of one or more experts in that domain.
In short, these systems were created to capture the expertise of experts (especially in rare specialities) and embed it in an expert system that stands in for the human expert, helps transfer this expertise to other people, and can solve problems faster than the human expert.
Among the advantages of these systems:
1. They are easy to use for any user, whether an ordinary user or a developer.
2. They are demonstrably useful in their field of application.
3. They can learn from experts both directly and indirectly.
4. They can teach non-specialists.
5. They can explain any solution they reach and show how it was reached.
6. They can respond to simple as well as complex questions within the limits of the application.
7. They are a useful means of providing high levels of expertise when no expert is available.
8. They can improve the performance of specialists with limited experience.
Reasons why expert systems have not spread widely:
- They are costly compared with conventional applications.
- Their scope of application is limited in administrative systems and integrated information retrieval.
Despite these problems, however, there are strong reasons that lead some companies to overcome them, including:

- Preserving expertise and knowledge from being lost, especially in important specialities that are heavily used or rare.
- Solving problems, which saves time, money, and effort.
- Increasing the number of experts in the field where the expert system is applied.

One of the most important application areas of expert systems is classification, where the system is required to determine the class to which the object being classified belongs. Expert systems have also entered many other fields, such as medicine, agriculture, prospecting, electronics, computing, geology, engineering, education, religious and civil law, commerce, economics, and many more.
Producing an expert system requires two essential elements:
1. The programmer, who analyses the problem and writes the program in the field of artificial intelligence.
2. The domain expert, a person specialised in a particular field who does not necessarily know anything about artificial intelligence; what matters is the depth of his experience and his command of the finer points of his speciality.
An expert system passes through several stages before it appears in the required form:
1. Defining the application: determining what we want from the system and the domain of expertise.
2. Designing the system.
3. Programming the system.
4. Testing and documenting the system.
Each of these steps has designated people assigned to carry it out.
Examples of expert systems include:
The Eliza psychotherapy system: a system that holds a dialogue with the user and answers queries like an expert psychiatrist.

As an example of an expert system we took an expert system for diagnosing the six childhood diseases, implemented in the Prolog language.

An Expert System for Diagnosing Children's Diseases

The system starts by asking about symptoms. For example, it begins by asking whether fever is present; if the patient answers yes, it starts searching among the diseases that have fever as a symptom, then keeps asking about one symptom after another until it narrows itself down to a particular disease, at which point it asks about the symptoms specific to that disease.
If the patient answers no from the start, backtracking occurs and the system moves on to diagnosing another disease that does not have fever among its symptoms, and so on until it reaches a particular disease. If it cannot identify the disease, this means the patient suffers from some other illness unrelated to children's diseases.

The Knowledge Base
- If the patient's symptoms are confined to:
  - fever
  - cough
  - red eyes (conjuctvitis)
  - runny nose (runny_nose)
  - skin rash (rash)
  then the child suffers from measles.
- If the patient's symptoms are confined to:
  - fever
  - headache
  - runny nose (runny_nose)
  - skin rash (rash)
  then the child suffers from German measles (german_measles).

- If the patient's symptoms are confined to:
  - fever
  - headache
  - body ache (body_ache)
  - red eyes (conjuctvitis)
  - chills
  - sore throat (sore_throat)
  - cough
  - runny nose
  then the child suffers from flu.
- If the patient's symptoms are confined to:
  - headache
  - sneezing
  - sore throat (sore_throat)
  - chills
  - runny nose (runny_nose)
  then the child suffers from a common cold (common_cold).
- If the patient's symptoms are confined to:
  - fever
  - swollen glands (swollen_glands)
  then the child suffers from mumps.

- If the patient's symptoms are confined to:
  - fever
  - skin rash (rash)
  - body ache (body_ache)
  - chills
  then the child suffers from chickenpox (chiken_pox).

The Inference Engine

[Figure: the inference tree linking the symptoms (fever, cough, runny nose, rash, headache, body ache, sore throat, chills, sneezing, red eyes, swollen glands) to the six diagnoses: measles, German measles, flu, common cold, mumps, chickenpox.]
The expert system in the Prolog language
%MEDICAL DIAGNOSTIC SYSTEM
%CHILDHOOD DISEASES
%EXAMPLE ONLY NOT FOR MEDICAL USE

domains
disease=symbol
symptom=symbol
query=symbol
replay=symbol
database
xpositive(symptom)
xnegative(symptom)
predicates
nondeterm hypothesis(disease)
nondeterm symptom(disease)
go
positive(query,symptom)
clear_facts
remember(symptom,replay)
ask(query,symptom,replay)
clauses
go:-
%clearwindow,
hypothesis(Disease),!,
write("the patient probably has ",Disease),
clear_facts.
positive(_,Symptom):-
xpositive(Symptom),!.
positive(Query,Symptom):-
not(xnegative(Symptom)),
ask(Query,Symptom,Replay),
Replay="y".

ask(Query,Symptom,Replay):-
write(Query),
readln(Replay),
remember(Symptom,Replay).
remember(Symptom,"y"):-
asserta(xpositive(Symptom)).
remember(Symptom,"n"):-
asserta(xnegative(Symptom)).
clear_facts:-
retract(xpositive(_)),fail.
clear_facts:-
retract(xnegative(_)),fail.
clear_facts.
symptom(fever):-
positive("Does the patient have fever (y/n)",fever).
symptom(headache):-
positive("Does the patient have a headache (y/n)",headache).
symptom(body_ache):-
positive("Does the patient have body ache (y/n)",body_ache).
symptom(conjuctvitis):-
positive("Does the patient have conjuctvitis (y/n)",conjuctvitis).
symptom(chills):-
positive("Does the patient have chills (y/n)",chills).
symptom(sore_throat):-
positive("Does the patient have a sore throat (y/n)",sore_throat).
symptom(cough):-
positive("Does the patient have a cough (y/n)",cough).
symptom(runny_nose):-
positive("Does the patient have a runny nose (y/n)",runny_nose).
symptom(rash):-
positive("Does the patient have a rash (y/n)",rash).

symptom(sneezing):-
positive("Does the patient have sneezing (y/n)",sneezing).
%************************************************************
hypothesis(measles):-
symptom(fever),
symptom(cough),
symptom(conjuctvitis),
symptom(runny_nose),
symptom(rash).
hypothesis(german_measles):-
symptom(fever),
symptom(headache),
symptom(runny_nose),
symptom(rash).
hypothesis(flu):-
symptom(fever),
symptom(headache),
symptom(body_ache),
symptom(conjuctvitis),
symptom(chills),
symptom(sore_throat),
symptom(cough),
symptom(runny_nose).
hypothesis(common_cold):-
symptom(headache),
symptom(sneezing),
symptom(sore_throat),
symptom(chills),
symptom(runny_nose).
hypothesis(mumps):-
symptom(fever),
symptom(swollen_glands).

hypothesis(chiken_pox):-
symptom(fever),
symptom(rash),
symptom(body_ache),
symptom(chills).
goal
go.

13
‫اﻟﻤﺪرك اﻟﺒﺴﻴﻂ ‪:‬‬

‫ﺗﺪرﻳﺐ اﻟﻤﺪرك اﻟﺒﺴﻴﻂ ﻟﻴﺆدي ﻋﻤﻠﻴﺔ‬

‫‪OR & NOR‬‬

‫‪14‬‬
Artificial Neural Networks
Artificial neural networks are a relatively recent computing technique, inspired by the way the human brain and the central nervous system work. Although the origin of artificial neural networks predates the conventional computer (the first artificial neuron was produced in 1943 by Warren McCulloch and Walter Pitts), the technology available at the time did not allow them to develop it or put it to use.
Neural networks consist of a large number of neural processing units (metaphorically called neurons) that are highly interconnected, so that they are able to handle certain kinds of problems. As with living nerve cells, artificial neurons need training, during which the connections between them are adjusted. After training, the neural network can be considered an "expert" in the class of information it was trained on.
Neural networks are distinguished by their ability to infer results from complex or imprecise inputs. They can also extract patterns and detect trends too complex to be noticed by humans or by other computing techniques. A trained (expert) neural network can predict the outcomes of new situations and answer "what if" questions.
Neural networks differ from conventional computers in that the latter handle problems through fixed, programmed steps and instructions (an algorithm), so they cannot solve problems that have not been programmed in advance; consequently a conventional computer can only solve problems the programmer himself can solve. Neural networks, by contrast, learn from examples rather than from explicit instructions, and they can deal with problems they were not previously programmed for; they are thus closer to the way the human brain works, in that the brain draws on its past experience to solve new problems.
An artificial neuron is a device with several inputs and a single output, used in one of two modes: training mode or operating mode. In training mode the neuron is taught to fire (or not to fire) for a particular input pattern. Firing here means the produced output value. In operating mode, when a learned pattern is detected at the input, the associated output value is produced; when a new, unlearned input arrives, a firing rule is applied. For example, the firing rule might be to output the value associated with the known pattern most similar to the input pattern.
Among the best-known applications of neural networks are recognising patterns and shapes such as text, finding approximate solutions, classifying satellite images, forecasting sales and future outcomes, risk management, detecting particular patterns in large quantities of data, diagnosing certain diseases, and controlling the movement of robots, in addition to many other applications and fields.
It is important to stress that artificial neural networks are neither a replacement for nor a competitor to conventional computers; rather, they complement them. Each has its own uses, and many tasks require the two to work together, with the conventional computer usually supervising the neural network to achieve the greatest possible efficiency.

How does an artificial neural network model the brain?

An artificial neural network consists of a number of very simple, highly interconnected processors, also called neurons, which are analogous to the neurons in the brain. The neurons are connected to one another by weighted links that pass a signal from one neuron to another. A neuron receives a set of inputs but has only one output (the output passes through the axon in a biological network). The outgoing link can carry the same output signal to a number of branches, and the outgoing branches terminate at the incoming links of other neurons in the network.
How does an artificial neural network learn?
By repeatedly readjusting the weights until the correct weights are reached.
To build a neural network we need to determine the network architecture by specifying:
• the number of neurons to be used;
• how these neurons are interconnected;
• the learning algorithm;
• training the neural network: we set the initial values of the network weights.

The neuron as a simple computing element:

A neuron receives several signals over its input links, computes a new activation level, and sends it as an output signal over its output links. The inputs can be raw data or the outputs of other neurons, and the output can be the final solution to a problem or an input to other neurons.

How does a neuron determine its output?

By means of activation functions, namely:
the sign function
the step function
(these two are known as hard-limit functions)
the sigmoid function
the linear function
(these two are known as soft-limit functions)
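As an illustration, the four activation functions above can be written directly in MATLAB. The following is a minimal sketch; the function handles and the plot are our own, not toolbox calls:

% Sketch of the four activation functions (hand-rolled, no toolbox).
X = -3:0.1:3;                        % range of net inputs for plotting
step_f = @(x) double(x >= 0);        % step function (hard limit): 0 or 1
sign_f = @(x) 2*double(x >= 0) - 1;  % sign function (hard limit): -1 or +1
sigm_f = @(x) 1./(1 + exp(-x));      % sigmoid function (soft limit)
lin_f  = @(x) x;                     % linear function (soft limit)
plot(X, step_f(X), X, sign_f(X), X, sigm_f(X), X, lin_f(X));
legend('step','sign','sigmoid','linear');
xlabel('net input'); ylabel('neuron output');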

The Simple Perceptron
How does the perceptron learn to classify its tasks?
• This happens by making small adjustments to the weights in order to reduce the difference between the desired outputs of the perceptron and the actual outputs. The initial weights are set randomly, usually in the range [-0.5, 0.5], and are then updated to obtain outputs consistent with the training examples. For the perceptron, the weight-update process is very simple. If the actual output is Y(p) and the desired output is Yd(p) at the same iteration, the following equation computes the error:
• e(p) = Yd(p) - Y(p), where p = 1, 2, 3, ...
• The iteration p here refers to training example number p presented to the perceptron.
• If the error e(p) is positive we need to increase the perceptron output Y(p), and if it is negative we need to decrease Y(p). Taking into account that each perceptron input contributes xi(p) × wi(p) to the total input X(p), we find that when the input value xi(p) is positive, an increase in the weight wi(p) tends to increase the perceptron output Y(p), whereas when xi(p) is negative, an increase in the weight wi(p) tends to decrease the perceptron output. We can therefore state the perceptron learning rule:
• Perceptron learning rule:
wi(p+1) = wi(p) + α × xi(p) × e(p)
where α is the learning rate, a positive constant less than 1.

The training algorithm for the simple perceptron:

• Step 1: initialisation
Set the initial weights w1, w2, ..., wn and the threshold θ to random numbers in the range [-0.5, 0.5].
• Step 2: activation
Activate the perceptron by applying the inputs x1(p), x2(p), ..., xn(p) and the desired output Yd(p), and compute the actual output at iteration p = 1:

Y(p) = step[ Σ_{i=1..n} xi(p) wi(p) - θ ]

where n is the number of perceptron inputs and step is the step activation function.

• Step 3: weight training
Update the perceptron weights:
wi(p+1) = wi(p) + ∆wi(p)
∆wi(p) = α × xi(p) × e(p)
where ∆wi(p) is the weight correction at iteration p.
• Step 4: iteration
Increase iteration p by one, go back to Step 2, and repeat the process until convergence.
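As a worked illustration of these four steps, here is a minimal from-scratch MATLAB sketch that trains a single perceptron on the OR operation. The variable names, the 50-epoch cap, and the threshold update (treating θ as a weight on a fixed input of -1) are our own choices; the initial ranges and the learning rule follow the text:

% From-scratch perceptron training on OR (steps 1-4 above).
X  = [0 0; 0 1; 1 0; 1 1];          % the four training inputs
Yd = [0; 1; 1; 1];                  % desired outputs for OR
rng(1);
w = rand(1,2) - 0.5;                % Step 1: weights in [-0.5, 0.5]
theta = rand - 0.5;                 % threshold in [-0.5, 0.5]
alpha = 0.1;                        % learning rate, 0 < alpha < 1
for epoch = 1:50                    % Step 4: repeat until convergence
    nerr = 0;
    for p = 1:4
        y = double(X(p,:)*w' - theta >= 0);  % Step 2: step activation
        e = Yd(p) - y;                       % error e(p) = Yd(p) - Y(p)
        w = w + alpha*X(p,:)*e;              % Step 3: weight update
        theta = theta - alpha*e;             % threshold update (our addition)
        nerr = nerr + abs(e);
    end
    if nerr == 0, break; end        % all examples classified correctly
end
fprintf('converged at epoch %d: w = [%g %g], theta = %g\n', epoch, w, theta);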

Here the simple perceptron was able to learn the OR operation, because OR is linearly separable.

Here the simple perceptron cannot learn the XOR operation, because XOR is not linearly separable; instead, multilayer neural networks are used, trained with the back-propagation algorithm.
In other words, the simple perceptron can learn the OR operation, but a single-layer perceptron cannot be trained to perform the exclusive OR.

The Back-Propagation Algorithm:

Training a back-propagation network to perform

the XOR operation
Multilayer Neural Networks
There are many kinds of artificial neural network, one of the most important being the multilayer perceptron (Perceptron multicouche). This network is divided into layers of artificial neurons: an input layer, an output layer, and one or more hidden layers located between the input layer and the output layer. Every neuron in one of these layers is connected to all the neurons in the following layer and all the neurons in the preceding layer.
Each connection between one neuron and another carries a value called the weight, which expresses the importance of the link between the two elements. A neuron multiplies each input value arriving from the neurons of the previous layer by the weight of its connection with those neurons, sums all the products, and then passes the result through a transfer function that differs according to the neuron type; the result is the neuron's output, which is passed on to the neurons of the next layer.
An artificial neuron can realise simple, specific models of mathematical functions, but the power and secret of neural computation come from the connections between the neurons. The basic building block of a neural network is the neuron, and by changing and adjusting how the cells are connected to one another, the behaviour, effect, and results of the network change.
By stacking and connecting more than one layer we can obtain larger, more complex networks with enormous computational capacity, and it has been proven that multilayer networks possess greater capabilities than single-layer networks.
Changing the number of layers in the network, the number of neurons in each layer, and how the layers are connected depends on the kind, size, and complexity of the application for which the network is used. For a given application, for example, using a multilayer network, despite its formidable computational power, may be no more effective than using a single-layer network with its small computational power. This science thus remains an experimental one, despite the algorithms and hypotheses that have been put forward; that is, for a given application we must subject the network to several experiments, changing and adjusting its structure until we obtain the results best suited to our application.

How does a multilayer network learn?

There are many algorithms, the most important of which is the back-propagation algorithm.

The back-propagation training algorithm

• Step 1: initialisation
Set the initial weights to uniformly distributed random numbers within a small range, namely [-2.4/Fi, 2.4/Fi], where Fi is the total number of inputs of neuron i in the network. The weights are initialised neuron by neuron.
• Step 2: activation
Activate the back-propagation neural network by applying the inputs x1(p), x2(p), ..., xn(p) and the desired outputs Yd,1(p), Yd,2(p), ..., Yd,n(p).
Compute the actual outputs of the neurons in the hidden layer:

y_j(p) = sigmoid[ Σ_{i=1..n} x_i(p) w_ij(p) - θ_j ]

where n is the number of inputs of neuron j in the hidden layer and sigmoid is the sigmoid activation function.
Compute the actual outputs of the neurons in the output layer:

y_k(p) = sigmoid[ Σ_{j=1..m} x_jk(p) w_jk(p) - θ_k ]

where m is the number of inputs of neuron k in the output layer and sigmoid is the sigmoid activation function.
• Step 3: weight training
Update the weights in the back-propagation network by propagating backwards the errors associated with the output neurons.
Compute the error gradient for the neurons in the output layer:
δ_k(p) = y_k(p) × [1 - y_k(p)] × e_k(p)
where:
e_k(p) = Yd,k(p) - Y_k(p), p = 1, 2, 3, ...
Compute the weight corrections:
∆w_jk(p) = α × y_j(p) × δ_k(p)
and update the weights at the output neurons:
w_jk(p+1) = w_jk(p) + α × y_j(p) × δ_k(p)
Compute the error gradient for the neurons in the hidden layer:

δ_j(p) = y_j(p) × [1 - y_j(p)] × Σ_{k=1..l} δ_k(p) w_jk(p)

Then update the hidden-layer weights:
w_ij(p+1) = w_ij(p) + ∆w_ij(p)
∆w_ij(p) = α × x_i(p) × δ_j(p)
• Step 4: iteration
Increase iteration p by one, go back to Step 2, and repeat the process until the chosen error criterion is satisfied.
We repeat the training process until the sum of the squared errors becomes less than 0.001.
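The XOR_bp.m listing that follows relies on the Neural Network Toolbox (newff/train), which hides these equations. As a complement, here is a minimal from-scratch sketch of the same update rules for a 2-3-1 sigmoid network on XOR; all names, the learning rate of 0.5, and the epoch cap are our own choices, while the initial range ±2.4/Fi and the 0.001 sum-squared-error criterion follow the text:

% From-scratch back-propagation for XOR (one hidden layer, sigmoid units).
X  = [0 0; 0 1; 1 0; 1 1];  Yd = [0; 1; 1; 0];      % XOR training set
rng(1);  nh = 3;                                     % three hidden neurons
Fi = 2;  W1 = (2*rand(nh,2)-1)*2.4/Fi;  th1 = (2*rand(nh,1)-1)*2.4/Fi;  % Step 1
Fi = nh; W2 = (2*rand(1,nh)-1)*2.4/Fi;  th2 = (2*rand-1)*2.4/Fi;
sig = @(v) 1./(1+exp(-v));
alpha = 0.5;
for epoch = 1:20000
    sse = 0;
    for p = 1:4
        x  = X(p,:)';
        yh = sig(W1*x - th1);               % Step 2: hidden-layer outputs
        y  = sig(W2*yh - th2);              % output-layer output
        e  = Yd(p) - y;   sse = sse + e^2;
        dk = y*(1-y)*e;                     % Step 3: output error gradient
        dj = yh.*(1-yh).*(W2'*dk);          % hidden error gradients
        W2 = W2 + alpha*dk*yh';   th2 = th2 - alpha*dk;
        W1 = W1 + alpha*dj*x';    th1 = th1 - alpha*dj;
    end
    if sse < 0.001, break; end              % Step 4: error criterion met
end
fprintf('SSE = %g after %d epochs\n', sse, epoch);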

Code :
% ==================
% Filename: XOR_bp.m
% ==================

echo on;

% Hit any key to define four 2-element input vectors denoted by "p".
pause

p=[1 0 1 0;1 1 0 0]

% Hit any key to define four 1-element target vectors denoted by "t".
pause

t=[0 1 1 0]

% Hit any key to define the network architecture.


pause

s1=3; %Three neurons in the hidden layer


s2=1; %One neuron in the output layer

% Hit any key to create the network and initialise its weights and biases.
pause

net=newff([0 1;0 1],[s1,s2],{'tansig','purelin'},'traingd');

% Hit any key to set up the frequency of the training progress to be displayed,
% maximum number of epochs, acceptable error, and learning rate.
pause

net.trainParam.show=1; % Number of epochs between showing the progress
net.trainParam.epochs=1000; % Maximum number of epochs
net.trainParam.goal=0.001; % Performance goal
net.trainParam.lr=0.1; % Learning rate

% Hit any key to train the back-propagation network.


pause

net=train(net,p,t);

% Hit any key to see whether the network has learned the XOR operation.
pause

p=[1;1]
a=sim(net,p)

% Hit any key to continue.
pause

p=[0;1]
a=sim(net,p)

% Hit any key to continue.


pause

p=[1;0]
a=sim(net,p)

% Hit any key to continue.


pause

p=[0;0]
a=sim(net,p)

echo off
disp('end of XOR_bp')

Output :

The Hopfield Network:

This network was solved in two ways;

the second way uses a GUI,

training the network on images and recognising them
Neural networks were designed by analogy with the brain, but the brain's memory works by association. For example, we can recognise a familiar face, even in unfamiliar surroundings, within 100-200 ms, and we can also recall a complete sensory experience, including sounds and scenes, on hearing only a few bars of music. The brain routinely associates one thing with another.
Can a neural network emulate the associative characteristics of human memory?
Multilayer neural networks trained with the back-propagation algorithm are used in pattern-recognition problems, but, as we noted earlier, such networks are not truly intelligent. To emulate the associative characteristics of human memory we need a different kind of network: a recurrent neural network.
The Hopfield network
A recurrent neural network has feedback loops from its outputs to its inputs, and the presence of such loops has a profound effect on the network's learning capability.

[Figure: single-layer recurrent Hopfield network. Input signals x1, x2, ..., xi, ..., xn feed neurons 1...n, whose output signals y1, y2, ..., yi, ..., yn are fed back to the inputs.]

Does the recurrent network learn?

After new inputs are applied, the network outputs are computed and fed back to adjust the inputs; the outputs are then computed again, and this process is repeated until the outputs become constant.

The Hopfield network training algorithm
• Step 1: storage
An n-neuron Hopfield network is required to store a set of M fundamental memories Y1, Y2, ..., YM. The synaptic weight between neuron i and neuron j is computed as:

w_ij = Σ_{m=1..M} y_{m,i} y_{m,j}   for i ≠ j
w_ij = 0                            for i = j

where y_{m,i} and y_{m,j} are the i-th and j-th elements of fundamental memory m, respectively. In matrix form, the synaptic weights between the neurons are represented as:

W = Σ_{m=1..M} Y_m Y_m^T - M I

A Hopfield network can store a set of fundamental memories if the weight matrix is symmetric with zeros on its main diagonal. Once the weights have been computed, they remain fixed.
• Step 2: testing
We need to confirm that the Hopfield network is capable of recalling all its fundamental memories; in other words, the network must recall any fundamental memory Ym when presented with it as input. That is:

x_{m,i} = y_{m,i},   i = 1, 2, ..., n;  m = 1, 2, ..., M
y_{m,i} = sign( Σ_{j=1..n} w_ij x_{m,j} - θ_i )

or, in matrix form:

X_m = Y_m,   m = 1, 2, ..., M
Y_m = sign( W X_m - θ ),   m = 1, 2, ..., M

where y_{m,i} is the i-th element of the actual output vector Ym, and x_{m,j} is the j-th element of the input vector Xm; the second formulation expresses the same thing in matrix form.

Note that if all the fundamental memories are recalled perfectly, we may proceed to the next step.
• Step 3: retrieval
Present an unknown n-dimensional vector (a probe) x to the network and retrieve a stable state. The probe represents a corrupted or incomplete version of a fundamental memory, that is:

x ≠ Ym,   m = 1, 2, ..., M

(a) Initialise the Hopfield retrieval algorithm by setting:

x_j(0) = x_j,   j = 1, 2, ..., n

and compute the initial state of each neuron:

y_i(0) = sign( Σ_{j=1..n} w_ij x_j(0) - θ_i ),   i = 1, 2, ..., n

where x_j(0) is the j-th element of the probe vector x at iteration p = 0 and y_i(0) is the state of neuron i at iteration p = 0. In matrix form, the state vector at iteration p = 0 is:

y(0) = sign( W x(0) - θ )

(b) Update the elements of the state vector y(p) according to the rule:

y_i(p+1) = sign( Σ_{j=1..n} w_ij y_j(p) - θ_i )

The neurons to be updated are chosen asynchronously, that is, randomly and one at a time. Repeat the iteration until the state vector no longer changes; in other words, until a stable state is reached. The stability condition can be defined as:

y_i(p+1) = sign( Σ_{j=1..n} w_ij y_j(p) - θ_i ),   i = 1, 2, ..., n

or, in matrix form:

y(p+1) = sign( W y(p) - θ )

A Hopfield network always converges to a stable state when retrieval is asynchronous, but the stable state does not necessarily represent one of the fundamental memories, and if it is a fundamental memory it is not necessarily the closest one.
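The listing below builds the network with the toolbox function newhop, which hides the storage rule. As a complement, here is a minimal from-scratch sketch of the three steps above for bipolar (+1/-1) patterns, with all thresholds assumed to be zero (our simplification):

% From-scratch Hopfield sketch: store two bipolar memories, retrieve a probe.
Y = [ 1  1  1  1  1 ;
     -1 -1 -1 -1 -1 ]';              % fundamental memories as columns
[n, M] = size(Y);
W = Y*Y' - M*eye(n);                 % Step 1: W = sum(Ym*Ym') - M*I
x = [1; -1; 1; 1; 1];                % probe: corrupted version of memory 1
y = sign(W*x);  y(y==0) = 1;         % initial state (thresholds assumed 0)
for it = 1:10*n                      % Step 3: asynchronous updates
    i = randi(n);                    % pick one neuron at random
    s = sign(W(i,:)*y);  if s == 0, s = 1; end
    y(i) = s;                        % update that neuron only
end
disp(y')                             % settles on the nearest stored memory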

‫‪34‬‬
Code (1):
% ====================

% The Hopfield network

% ====================

% ===========================================================================

% Reference: Negnevitsky, M., "Artificial Intelligence: A Guide to Intelligent

% Systems", Addison Wesley, Harlow, England, 2002.

% Sec. 6.6 The Hopfield network

% ===========================================================================

% Hit any key to define two target fundamental memories to be stored

% in the network as the two columns of the matrix T.

pause

T=[1 1 1;-1 -1 -1]'

% T =
%      1    -1
%      1    -1
%      1    -1

% Hit any key to plot the Hopfield state space with the two fundamental memories

% identified by red markers.

pause

plot3(T(1,:),T(2,:),T(3,:),'r.','markersize',20)

axis([-1 1 -1 1 -1 1]); axis manual; hold on;

title('Representation of the possible states for the three-neuron Hopfield network')

set(gca,'box','on'); view([37.5 30]);

% Hit any key to obtain weights and biases of the Hopfield network.

pause

net=newhop(T);

% Hit any key to test the network with six unstable states represented as the

% six-column matrix P. Unstable states are identified by blue markers.

pause

P=[-1 1 1;1 -1 1;1 1 -1;-1 -1 1;-1 1 -1;1 -1 -1]'

% P =
%     -1     1     1    -1    -1     1
%      1    -1     1    -1     1    -1
%      1     1    -1     1    -1    -1

for i=1:6

a = {P(:,i)};

[y,Pf,Af]=sim(net,{1 10},{},a);

record=[cell2mat(a) cell2mat(y)];

start=cell2mat(a);

plot3(start(1,1),start(2,1),start(3,1),'b*',record(1,:),record(2,:),record(3,:))

drawnow;

% Hit any key to continue.

pause

end

Output(1) :

Code(2) :
function varargout = hopfieldNetwork(varargin)
gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @hopfieldNetwork_OpeningFcn, ...
'gui_OutputFcn', @hopfieldNetwork_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before hopfieldNetwork is made visible.


function hopfieldNetwork_OpeningFcn(hObject, eventdata, handles, varargin)

% Choose default command line output for hopfieldNetwork


handles.output = hObject;
N = str2num(get(handles.imageSize,'string'));
handles.W = [];
handles.hPatternsDisplay = [];

% Update handles structure


guidata(hObject, handles);

% --- Outputs from this function are returned to the command line.
function varargout = hopfieldNetwork_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes on button press in reset.


function reset_Callback(hObject, eventdata, handles)
% cleans all data and enables the change of the number of neurons used
for n=1 : length(handles.hPatternsDisplay)
delete(handles.hPatternsDisplay(n));

end
handles.hPatternsDisplay = [];
set(handles.imageSize,'enable','on');
handles.W = [];
guidata(hObject, handles);

function imageSize_Callback(hObject, eventdata, handles)


% hObject handle to imageSize (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
num = get(hObject,'string');
n = str2num(num);
if isempty(n)
num = '32';
set(hObject,'string',num);
end
if n > 32
warndlg('It is strongly recommended NOT to work with networks with more than 32^2 neurons!','!! Warning !!')
end

% --- Executes during object creation, after setting all properties.


function imageSize_CreateFcn(hObject, eventdata, handles)
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end

% --- Executes on button press in loadIm.


function loadIm_Callback(hObject, eventdata, handles)
[fName dirName] = uigetfile('*.bmp;*.tif;*.jpg;*.tiff');
if fName
set(handles.imageSize,'enable','off');
cd(dirName);
im = imread(fName);
N = str2num(get(handles.imageSize,'string'));
im = fixImage(im,N);
imagesc(im,'Parent',handles.neurons);
colormap('gray');
end

% --- Executes on button press in train.


function train_Callback(hObject, eventdata, handles)

Npattern = length(handles.hPatternsDisplay);
if Npattern > 9
msgbox('more then 10 paterns isn''t supported!','error');
return
end

im = getimage(handles.neurons);
N = get(handles.imageSize,'string');
N = str2num(N);
W = handles.W; %weights vector
avg = mean(im(:)); %removing the cross talk part
if ~isempty(W)
%W = W +( kron(im,im))/(N^2);
W = W + ( kron(im-avg,im-avg))/(N^2)/avg/(1-avg);
else
% W = kron(im,im)/(N^2);
W = ( kron(im-avg,im-avg))/(N^2)/avg/(1-avg);
end
% Erasing self weight
ind = 1:N^2;
f = find(mod(ind,N+1)==1);
W(ind(f),ind(f)) = 0;

handles.W = W;

% Placing the new pattern in the figure...


xStart = 0.01;
xEnd = 0.99;
height = 0.65;
width = 0.09;
xLength = xEnd-xStart;
xStep = xLength/10;
offset = 4-ceil(Npattern/2);
offset = max(offset,0);
y = 0.1;

if Npattern > 0
for n=1 : Npattern
x = xStart+(n+offset-1)*xStep;
h = handles.hPatternsDisplay(n);
set(h,'units','normalized');
set(h,'position',[x y width height]);
end
x = xStart+(n+offset)*xStep;
h = axes('units','normalized','position',[x y width height]);
handles.hPatternsDisplay(n+1) = h;
imagesc(im,'Parent',h);
else
x = xStart+(offset)*xStep;
h = axes('units','normalized','position',[x y width height]);
handles.hPatternsDisplay = h;
end

imagesc(im,'Parent',h);
set(h,'YTick',[],'XTick',[],'XTickMode','manual','Parent',handles.learnedPaterns);

guidata(hObject, handles);

% --- Executes on button press in addNoise.


function addNoise_Callback(hObject, eventdata, handles)
im = getimage(handles.neurons);
% N = get(handles.imageSize,'string');
% N = floor(str2num(N)/2)+1;
noisePercent = get( handles.noiseAmount, 'value' );
N = round( length(im(:))* noisePercent );
N = max(N,1); %minimum change one neuron
ind = ceil(rand(N,1)*length(im(:)));
% im(ind) = -1*im(ind); %!!!!
im(ind) = ~im(ind);
imagesc(im,'Parent',handles.neurons);
colormap('gray');

% --- Executes on button press in run.


function run_Callback(hObject, eventdata, handles)
im = getimage(handles.neurons);
[rows cols] = size(im);
if rows ~= cols
msgbox('I don''t support non square images','error');
return;
end
N = rows;
W = handles.W;
if isempty(W)
msgbox('No train data - doing nothing!','error');
return;
end
%figure; imagesc(W)
mat = repmat(im,N,N);
mat = mat.*W;
mat = im2col(mat,[N,N],'distinct');
networkResult = sum(mat);
networkResult = reshape(networkResult,N,N);
im = fixImage(networkResult,N);
imagesc(im,'Parent',handles.neurons);

function im = fixImage(im,N)
% if isrgb(im)
if length( size(im) ) == 3
im = rgb2gray(im);

end
im = double(im);
m = min(im(:));
M = max(im(:));
im = (im-m)/(M-m); %normelizing the image
im = imresize(im,[N N],'bilinear');
%im = (im > 0.5)*2-1; %changing image values to -1 & 1
im = (im > 0.5); %changing image values to 0 & 1

% --- Executes on slider movement.


function noiseAmount_Callback(hObject, eventdata, handles)
% hObject handle to noiseAmount (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
percent = get(hObject,'value');
percent = round(percent*100);
set(handles.noisePercent,'string',num2str(percent));

% --- Executes during object creation, after setting all properties.


function noiseAmount_CreateFcn(hObject, eventdata, handles)
if ispc
set(hObject,'BackgroundColor',[.9 .9 .9]);
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end

Output(2) :

Hebbian Learning:

Training on digits and recognising them
Self-Organising Neural Networks

Self-organising neural networks are effective at dealing with unexpected and changing conditions. In this part we cover Hebbian learning, which is built on self-organising networks.

Hebbian learning

Hebb's law states that if neuron i is close enough to the excited neuron j and repeatedly participates in activating it, the synapse between these two neurons is strengthened, and neuron j becomes more sensitive to stimulation from neuron i.
We can express Hebb's law in two rules, as follows:
1. If two neurons on either side of a connection are activated synchronously, the weight of that connection increases.
2. If two neurons on either side of a connection are activated asynchronously, the weight of that connection decreases.
Hebb's law provides the basis for learning without a teacher. Learning here is a local phenomenon that occurs without feedback from the environment. The following figure shows Hebbian learning in a neural network.

The generalised Hebbian learning algorithm
• Step 1: initialisation
Set the synaptic weights and thresholds to small random values, say in the interval [0, 1]. Also assign small positive values to the learning-rate parameter α and the forgetting factor φ.
• Step 2: activation
Compute the neuron output at iteration p:

y_j(p) = Σ_{i=1..n} x_i(p) w_ij(p) - θ_j

where n is the number of neuron inputs and θ_j is the threshold value of neuron j.
• Step 3: learning
Update the weights in the network:

w_ij(p+1) = w_ij(p) + ∆w_ij(p)

where ∆w_ij(p) is the weight correction at iteration p, determined by the generalised activity product rule:

∆w_ij(p) = φ y_j(p) [ λ x_i(p) - w_ij(p) ]

where λ = α/φ.
• Step 4: iteration
Increase iteration p by one, go back to Step 2, and continue until the synaptic weights reach their steady-state values.
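Note that the digit-recognition listing at the end of this section actually trains a back-propagation network, so as a direct illustration of the generalised rule above, here is a small from-scratch sketch. The sizes, rates, and random binary inputs are our own choices, and binarising the output with a hard threshold is our own addition to keep the outputs 0/1:

% From-scratch generalised Hebbian learning (activity product rule).
rng(1);
X = double(rand(5,200) > 0.5);       % 200 random binary 5-element inputs
W = 0.1*rand(5,5);                   % Step 1: small random weights
theta = 0.1*rand(5,1);
alpha = 0.1;  phi = 0.05;  lambda = alpha/phi;
for p = 1:size(X,2)
    x = X(:,p);
    y = double(W'*x - theta >= 0);   % Step 2: outputs at iteration p (thresholded)
    dW = phi * (lambda*x*ones(1,5) - W) .* (ones(5,1)*y');  % Step 3: dW = phi*y*(lambda*x - w)
    W = W + dW;                      % Step 4: repeat until weights settle
end
disp(W)                              % weights of co-active pairs are strengthened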

An example of Hebbian learning
[Figure: Hebbian learning in a five-input, five-output network, shown before (left) and after (right) training; the connections between co-activated input and output neurons are strengthened, while the others decay.]

Code :
function digit_recognition

disp(' =====================================')
disp(' Character recognition neural networks')
disp(' =====================================')

disp(' ===========================================================================')
disp(' Reference: Negnevitsky, M., "Artificial Intelligence: A Guide to Intelligent')
disp('            Systems", Addison Wesley, Harlow, England, 2002.')
disp('            Sec. 9.4 Will a neural network work for my problem?')
disp(' ===========================================================================')

disp(' =============================================================================')
disp(' Problem: A multilayer feedforward network is used for the recognition of digits')
disp('          from 0 to 9. Each digit is represented by a 5 x 9 bit map.')
disp(' =============================================================================')

disp(' Hit any key to visualise bit maps of the digits.')


disp(' ')
pause

[digit1,digit2,digit3,digit4,digit5,digit6,digit7,digit8,digit9,digit0] = bit_maps;

disp(' Hit any key to obtain ten 45-element input vectors denoted by "p".')
pause

p=[digit1(:),digit2(:),digit3(:),digit4(:),digit5(:),digit6(:),digit7(:),digit8(:),digit9(:),digit0(:)]

disp(' Hit any key to define ten 10-element target vectors denoted by "t".')
pause

t = eye(10)

disp(' Hit any key to define the network architecture.')
pause

s1=12; % Number of neurons in the hidden layer


s2=10; % Number of neurons in the output layer

disp(' ')
fprintf(1,' s1=%.0f; Number of neurons in the hidden layer\n',s1);
fprintf(1,' s2=%.0f; Number of neurons in the output layer\n',s2);
disp(' ')

disp(' Hit any key to create the network, initialise its weights and biases,')
disp(' and set up training parameters.')
pause

net = newff(minmax(p),[s1 s2],{'logsig' 'purelin'},'traingdx');

net.trainParam.show = 20; % Number of epochs between showing the progress
net.trainParam.epochs = 1000; % Maximum number of epochs
net.trainParam.goal = 0.001; % Performance goal
net.trainParam.lr=0.1; % Learning rate

disp(' ')
fprintf(1,' net.trainParam.show=%.0f; Number of epochs between showing the progress\n',net.trainParam.show);
fprintf(1,' net.trainParam.epochs=%.0f; Maximum number of epochs\n',net.trainParam.epochs);
fprintf(1,' net.trainParam.goal=%.3f; Performance goal\n',net.trainParam.goal);
fprintf(1,' net.trainParam.lr=%.2f; Learning rate\n',net.trainParam.lr);
disp(' ')
disp(' ')

disp(' Hit any key to train the back-propagation network.')


pause

disp(' ')
disp(' net = train(net,p,t)')
disp(' ')

net = train(net,p,t);

disp(' Hit any key to see how the network recognises a digit, for example digit 3.')
pause

digit3
probe=digit3(:);
a=sim(net,probe);
disp(' a=sim(net,probe)')

a=round(a)

disp(' Hit any key to see how "noise" distorts the bit map of a digit,
for example digit 5.')
disp(' ')
pause

probe=digit5;
figure('name','"Noisy" bit maps')
subplot(1,2,1)
probe_plot(probe)
title('Noise level: 0%')

disp(' Hit any key to continue.')


disp(' ')
pause

probe=digit5+randn(size(probe))*0.1;
subplot(1,2,2)
probe_plot(probe)
title('Noise level: 10%')

disp(' Hit any key to continue.')


disp(' ')
pause

probe=digit5+randn(size(probe))*0.2;
figure('name','"Noisy" bit maps')
subplot(1,2,1)
probe_plot(probe)
title('Noise level: 20%')

disp(' Hit any key to continue.')


disp(' ')
pause

probe=digit5+randn(size(probe))*0.5;
subplot(1,2,2)
probe_plot(probe)
title('Noise level: 50%')

disp(' Hit any key to evaluate the digit recognition neural network.')
disp(' ')
pause

% Set parameters for the test.


noise_range = 0:.05:.50; % Range of noise
max_test = 100; % Number of test examples for each digit to be recognised
average_error = []; % Average recognition error for a particular noise level

% Evaluate the digit recognition network.
for noise_level=noise_range
error=0;

for i=1:max_test
probe=p+randn(size(p))*noise_level;
a=compet(sim(net,probe));
error=error+sum(sum(abs(a-t)))/2;
end

average_error = [average_error error/10/max_test];


fprintf('Noise level: %.0f percent; Average error: %.0f percent\n',noise_level*100,error/10/max_test*100);
end

disp(' ')
disp(' Hit any key to plot the test results.')
disp(' ')
pause

h = figure;
plot(noise_range*100,average_error*100,'b-');
title('Performance of the digit recognition network')
xlabel('Noise level, %');
ylabel('Recognition error, %');

disp(' ')
disp(' Hit any key to train the digit recognition network with "noisy" examples.')
disp(' ')
pause

figure
net.trainParam.epochs = 1000; % Maximum number of epochs to train.

t_noise = [t t t t];
for pass = 1:10
fprintf('Pass = %.0f\n',pass);
p_noise=[p p (p+randn(size(p))*0.1) (p+randn(size(p))*0.2)];
net= train(net,p_noise,t_noise);
end

disp(' Hit any key to evaluate the digit recognition network trained with "noisy" examples.')
disp(' ')
pause

average_error = [];

for noise_level = noise_range


error = 0;

for i=1:max_test
probe=p+randn(size(p))*noise_level;
a=compet(sim(net,probe));
error=error+sum(sum(abs(a-t)))/2;
end

average_error = [average_error error/10/max_test];


fprintf('Noise level: %.0f percent; Average error: %.0f percent\n',noise_level*100,error/10/max_test*100);
end

disp(' ')
disp(' Hit any key to plot the test results.')
disp(' ')
pause

figure(h)
hold on
plot(noise_range*100,average_error*100,'r-');
legend('Network trained with "perfect" examples','Network trained with "noisy" examples',2);
hold off

disp('end of digit_recognition')

function probe_plot(probe);

[m n]=size(probe);
probe_plot=[probe probe(:,[n])]';
probe_plot=[probe_plot probe_plot(:,[m])]';
pcolor(probe_plot)
colormap(gray)
axis('ij')
axis image

Output :

Kohonen Learning:
Competitive Learning
The other common type of unsupervised learning is competitive learning, in which the neurons compete with one another to become active. Whereas in Hebbian learning several output neurons can be activated at the same time, in competitive learning only a single neuron is active at any one time. The output neuron that wins the competition is called the winner-takes-all neuron.

Kohonen learning: the self-organizing map (SOM)

A Kohonen network consists of a single layer of computation neurons and has two different kinds of connections: forward connections from the neurons in the input layer to the neurons in the output layer, and lateral connections between the neurons in the output layer.
The lateral connections are used to create competition between the neurons. The neuron with the greatest activation level among all the neurons in the output layer becomes the winner (the winner-takes-all neuron); this is the only neuron that produces an output signal, and the activity of all the other neurons in the competition is suppressed.
When an input pattern is presented to the network, each neuron in the Kohonen layer receives a complete copy of the input pattern, modified on its way through the synaptic weights between the input layer and the Kohonen layer, and the lateral feedback connections produce excitatory or inhibitory effects depending on the distance from the winning neuron. This is achieved by using a Mexican-hat function.

The Mexican-hat function
This function expresses the relationship between the distance from the winner-takes-all neuron and the strength of the connections in the Kohonen layer. According to this function, the nearest neighbours (the short-range lateral excitation area) have a strong excitatory effect, more distant neighbours (the inhibitory penumbra) have a mild inhibitory effect, and very distant neighbours (the area surrounding the inhibitory penumbra) have a weak excitatory effect, which is usually neglected.
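The Mexican-hat shape can be sketched with the Ricker-wavelet form; this is an illustrative choice of formula on our part, not one prescribed by the text:

% Illustrative Mexican-hat neighbourhood function (Ricker-wavelet form).
d = -5:0.05:5;                       % lateral distance from the winning neuron
h = (1 - d.^2).*exp(-d.^2/2);        % excitation near 0, inhibition mid-range,
plot(d, h); grid on;                 % weak (negligible) excitation far away
xlabel('distance from winner'); ylabel('connection strength');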
In a Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbours are allowed to learn; if a neuron does not respond to a given input pattern, no learning can take place in that particular neuron.
The output signal y_j of the winner-takes-all neuron j is set to 1, and the output signals of the other neurons (those that lost the competition) are set to 0.
The standard competitive learning rule defines the change applied to the synaptic weight as follows:

∆w_ij = α (x_i - w_ij)   if neuron j wins the competition
∆w_ij = 0                if neuron j loses the competition

where x_i is the input signal and α is the learning-rate parameter, which lies between 0 and 1.
The overall effect of competitive learning is to move the synaptic weight vector w_j of the winning neuron j towards the input pattern x. The matching criterion is equivalent to the minimum Euclidean distance between the two vectors.

The Euclidean distance
The Euclidean distance between a pair of n-by-1 vectors x and w_j is defined by:

d = ||x - w_j|| = [ Σ_{i=1..n} (x_i - w_ij)² ]^(1/2)

where x_i and w_ij are the i-th elements of the vectors x and w_j, respectively.
The similarity between the vectors x and w_j is defined as the reciprocal of the Euclidean distance d. In the figure, the Euclidean distance between the vectors x and w_j is represented by the length of the line joining the tips of the two vectors, and it is quite clear from the figure that the smaller the Euclidean distance, the greater the similarity between x and w_j.

Which neuron is the winner?

To identify the winning neuron j_x, the one whose weight vector best matches the input vector x, we can apply the condition:

j_x = min_j ||x - w_j||,   j = 1, 2, ..., m

where m is the number of neurons in the Kohonen layer.
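The listing that follows uses the toolbox (newsom); as a bare-bones sketch of the winner-takes-all rule with the Euclidean-distance match above, here is a from-scratch version (the sizes and learning rate are our own choices):

% From-scratch competitive (winner-takes-all) learning sketch.
rng(1);
X = 2*rand(2,500) - 1;               % 500 random 2-D inputs in [-1, 1]
m = 6;                               % six competing neurons
W = 2*rand(2,m) - 1;                 % random initial weight vectors (columns)
alpha = 0.1;                         % learning rate, between 0 and 1
for p = 1:size(X,2)
    x = X(:,p);
    [~, jx] = min(sum((W - x*ones(1,m)).^2));   % winner: minimum Euclidean distance
    W(:,jx) = W(:,jx) + alpha*(x - W(:,jx));    % move only the winner towards x
end
plot(X(1,:),X(2,:),'.', W(1,:),W(2,:),'ro','markersize',10);
legend('inputs','learned weight vectors');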

‫‪72‬‬
Code :
echo on;

pause

rand('seed',1234);
p=rands(2,1000);
plot(p(1,:),p(2,:),'r.')
title('Input vectors');
xlabel('p(1)');
ylabel('p(2)');

pause

s1=6; s2=6;
net=newsom([-1 1; -1 1],[s1 s2]);
plotsom(net.iw{1,1},net.layers{1}.distances)
pause

net.trainParam.show=100; % Number of epochs between showing the progress

for i=1:10
hold on;
net.trainParam.epochs=i*net.trainParam.show;
net=train(net,p);
delete(findobj(gcf,'color',[0 0 1]));
delete(findobj(gcf,'color',[1 0 0]));
plotsom(net.IW{1,1},net.layers{1}.distances);
hold off;
pause(0.001)
end

echo off;

for i=1:(s1*s2);
text(net.iw{1,1}(i,1)+0.02,net.iw{1,1}(i,2),sprintf('%g',i));
end

echo on;

pause

for i=1:3
probe=rands(2,1);
hold on;
plot(probe(1,1),probe(2,1),'.g','markersize',25);
a=sim(net,probe);
a=find(a)
text(probe(1,1)+0.03,probe(2,1),sprintf('%g',(a)));
hold off

% Hit any key to continue.
if i<3
pause
end
end

echo off
disp('end of Kohonen')

Output :

Fuzzy Logic and Fuzzy Inference:
Fuzzy Logic & Fuzzy Inference

Knowledge-based systems are systems developed specifically to emulate human thinking in solving problems and giving advice. One type of knowledge-based system is the expert system, used in many fields such as medical diagnosis and stock trading. Despite the success of those applications, they are unsuitable for domains that are hard to specify precisely, where their effectiveness drops sharply. Highly effective expert systems can, however, be obtained using "fuzzy logic", developed by Lotfi Zadeh when he put forward his theory in 1965 in a research paper entitled "Fuzzy Sets". At first the theory was rejected, but over time it managed to capture the attention of the scientific community. Fuzzy logic theory is based on emulating human thinking, taking into account that things are not classified merely as true or false, but recognising that there are other values between these two that can be taken into consideration.
Fuzzy logic theory is built on the concept of "fuzzy sets", an extension of the familiar concept of a set. A fuzzy set is a set that cannot be defined precisely; that is, its definition varies with the point of view. For example, the set of beautiful women is a fuzzy set, because the concept of beauty is hard to pin down precisely. Likewise, the set of numbers much greater than one is also a fuzzy set, for the same reason: "much greater" is an expression that cannot be described exactly. The difference between fuzzy and classical sets therefore lies in how elements belong to the set. In a classical set there are only two cases: either an element belongs to the set or it does not (we can denote membership by one and non-membership by zero). Fuzzy sets, by contrast, allow their elements different degrees of membership, ranging between full membership (one) and non-membership (zero). For example, if we have three fuzzy sets (tall women, short women, and women of medium height), then a girl who is 165 cm tall belongs to all the fuzzy sets with different degrees of membership: say, to the set of tall women with degree 0.5, to the set of medium-height women with degree 0.8, and to the set of short women with degree 0.3. The heights and their corresponding membership degrees are determined by an expert in the field.
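As a sketch of such expert-defined membership curves, the following uses triangular functions; the breakpoints here are invented purely for illustration, and a real system would take them from the domain expert:

% Illustrative triangular membership functions for the height example.
trimf_ = @(x,a,b,c) max(min((x-a)/(b-a), (c-x)/(c-b)), 0);  % triangular MF
h = 100:200;                          % height in cm
short  = trimf_(h, 100, 140, 170);    % invented breakpoints
medium = trimf_(h, 140, 165, 190);
tall   = trimf_(h, 160, 200, 240);
plot(h, short, h, medium, h, tall);
legend('short','medium','tall'); xlabel('height, cm'); ylabel('membership degree');
fprintf('165 cm: short %.2f, medium %.2f, tall %.2f\n', ...
        trimf_(165,100,140,170), trimf_(165,140,165,190), trimf_(165,160,200,240));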

How do fuzzy-logic-based expert systems work?
Fuzzy logic can be used in building expert systems to make them emulate human thinking more closely. The system works using the following components:
1. The part responsible for computing the membership degrees of the input data in the fuzzy sets, known as the fuzzifier.
2. The inference engine, which reaches conclusions by using the rule base, which contains rules of the form (IF ... THEN ...).
3. The part that converts the fuzzy result into a crisp result, called the defuzzifier.
There are many examples of expert systems built on fuzzy logic, among them: the SIGMAR system, which helps predict wind speed; the MEDEX system, which helps predict ice waves and floods in Canada; another expert system in the hotel and tourism field that helps tourists search for hotels according to their preferences, allowing them to express themselves in imprecise terms (for example, a tourist can describe the hotel price with one of the following words: cheap, moderate, or expensive); and, in the medical field, the ABVAB system, which helps physicians detect the causes of bleeding.

Fuzzy inference can be defined as mapping given inputs to outputs using the theory of fuzzy sets.

One method of fuzzy inference is Mamdani-style inference.

Mamdani-style inference is performed in four steps:

1. fuzzification of the input variables;
2. rule evaluation;
3. aggregation of the rule outputs (consequents);
4. defuzzification.
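The listing that follows drives a prebuilt FIS file (centre_3.fis) through the Fuzzy Logic Toolbox. As a bare-bones illustration of the four Mamdani steps themselves, here is a from-scratch sketch for a toy one-input, one-output system with two invented rules (IF x is LOW THEN y is HIGH; IF x is HIGH THEN y is LOW):

% Bare-bones Mamdani inference on a toy system (all sets and rules invented).
tri = @(z,a,b,c) max(min((z-a)/(b-a), (c-z)/(c-b)), 0);   % triangular MF
x = 3.5;                                  % crisp input value
muLow  = tri(x, 0, 0.01, 5);              % Step 1: fuzzification of x
muHigh = tri(x, 5, 9.99, 10);
y = 0:0.01:10;                            % output universe of discourse
out1 = min(muLow,  tri(y, 5, 9.99, 10));  % Step 2: rule evaluation (clipping)
out2 = min(muHigh, tri(y, 0, 0.01, 5));
agg = max(out1, out2);                    % Step 3: aggregate the rule outputs
yc = sum(y.*agg)/sum(agg);                % Step 4: defuzzify (centroid)
fprintf('crisp output y = %.2f\n', yc);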

‫‪80‬‬
Code :
% ===========================
% Filename : fuzzy_centre_3.m
% ===========================

echo on;

pause

a=readfis('centre_3.fis');

% Hit any key to display the whole system as a block diagram.


pause

figure('name','Block diagram of the fuzzy system');


plotfis(a);

% Hit any key to display fuzzy sets for the linguistic variable "Mean delay".
pause

figure('name','Mean delay (normalised)');


plotmf(a,'input',1);

% Hit any key to display fuzzy sets for the linguistic variable "Number of servers".
pause

figure('name','Number of servers (normalised)');


plotmf(a,'input',2);

% Hit any key to display fuzzy sets for the linguistic variable "Repair utilisation factor".
pause

figure('name','Repair utilisation factor');


plotmf(a,'input',3);

% Hit any key to display fuzzy sets for the linguistic variable "Number of spares".
pause

figure('name','Number of spares (normalised)');


plotmf(a,'output',1);

% Hit any key to bring up the Rule Editor.


pause

ruleedit(a);

% Hit any key to generate three-dimensional plots for Rule Base.

pause

figure('name','Three-dimensional surface (a)');


gensurf(a,[1 2],1); view([49 36]);

% Hit any key to continue.


pause

figure('name','Three-dimensional surface (b)');


gensurf(a,[1 3],1); view([49 36]);

% Hit any key to bring up the Rule Viewer.


pause

ruleview(a);

% Hit any key to continue.


pause

% CASE STUDY
%
======================================================================
==================
% Suppose, a service centre is required to supply its customers with
spare parts within
% 24 hours. The service centre employs 8 servers and the repair
utilisation factor is 60%.
% The inventory capacities of the centre are limited by 100 spares.
The values for the
% mean delay, number of servers and repair utilisation factor are 0.6,
0.8 and 0.6,
% respectively.
%
======================================================================
==================

% Hit any key to obtain the required number of spares.


pause

n=round((evalfis([0.7 0.8 0.6],a))*100)

% Hit any key to continue.


pause

% ======================================================================
% Suppose, now a manager of the service centre wants to reduce the
% customer's average waiting time to 12 hours.
% ======================================================================

% Hit any key to see how this will affect the required number of spares.
pause

n=round((evalfis([0.35 0.8 0.6],a))*100)

echo off
disp('End of fuzzy_centre_3.m')

Output :

The Genetic Algorithm:
Genetic Algorithms

A genetic algorithm is a method of optimisation and search. It can be classified as one of the evolutionary algorithms, which rely on imitating the workings of nature from a Darwinian perspective.
The genetic algorithm is a search technique used to find exact or approximate solutions that achieve optimality. Genetic algorithms are classified as global search heuristics; they are also a particular class of evolutionary algorithms (also known as evolutionary computation) that use techniques inspired by evolutionary biology, such as inheritance, mutation, selection, and crossover.
Genetic algorithms are an important technique for searching for the best option among a set of available solutions to a given design. They adopt Darwin's principle of selection: this genetic processing passes on the best traits through successive generations of reproduction and reinforces them, and these traits have the greatest chance of entering the reproduction process and producing better offspring; by repeating the genetic cycle, the quality of the offspring gradually improves.
Genetic algorithms are implemented as computer simulations in which chromosomes act as the individuals in the operations performed to find the best solutions. In general the solutions are represented in binary, as strings of 0s and 1s, although other encodings can also be used.
The evolution process usually starts by selecting the chromosomes (the population) at random, and the same happens in subsequent generations. In each generation, the fitness function is computed for every chromosome individually, the best chromosomes are selected on the basis of the best fitness, and crossover (recombination) and mutation are then performed. The algorithm stops when the maximum number of generations has been produced or when the best value of the fitness function has been attained; if it stops because the maximum number of generations was reached, the optimal solution may not have been attained.
Genetic algorithms are found in applications in bioinformatics, computer science, engineering, economics, chemistry, manufacturing, mathematics, physics, and other fields.
The genetic algorithm method generates new solutions from encoded possibilities in the form known as a "chromosome" or genotype; chromosomes are combined or changed to produce new individuals. It is useful for finding the optimal solution to multidimensional problems in which the values of the different variables can be encoded in the form of a chromosome.
To apply the genetic algorithm we must first find a suitable representation of the problem under study in terms of chromosome operations. The best-known representation uses binary strings to encode the values of the variables that express a solution to the given problem in the form of chromosomes. Once these chromosomes have been produced, there must be ways to manipulate them; there are four operations (copying, crossover, mutation, and inversion).
The genetic algorithm is thus built on an optimisation technique that mimics natural evolution, by encoding the possible solutions as strings analogous to chromosome strings and then applying some biological operations (copying, crossover, mutation) and the artificial operation (inversion) to produce the optimal solution.
The most important feature of the genetic algorithm is its adaptive nature, which makes it less dependent on knowing the equation in order to solve it.
Genetic algorithms, then, are a way of simulating what nature does in the reproduction of living organisms, and of using that method to solve complex problems in order to reach the best solution, or the closest possible solution to the best. So we have a problem with a very large number of solutions, most of them wrong and some of them correct, and there is always a best solution, which is usually hard to reach.
The idea of genetic algorithms is to generate some solutions to the problem at random, then examine these solutions and compare them against criteria set by the algorithm's designer; only the best solutions survive, while the less fit solutions are discarded, following the biological rule of "survival of the fittest".
The next step is to mate, or mix, the remaining solutions (the fitter ones) to produce new solutions, much as happens in living organisms when their genes are mixed so that the new organism carries traits that are a blend of its parents' traits.
The solutions produced by mating are in turn examined and screened to determine their fitness and closeness to the optimal solution; if a new solution proves fit it survives, otherwise it is discarded. Mating and selection continue in this way until the process either reaches a set number of iterations (decided by the system user), or the resulting solutions, or one of them, reach a required fitness level or a sufficiently small error rate (also set by the user), or even the best solution.

Code :
function GA_1

disp('=============================================================')
disp('Genetic algorithms: the fitness function of a single variable')
disp('=============================================================')

disp('============================================================================')
disp('Reference: Negnevitsky, M., "Artificial Intelligence: A Guide to Intelligent')
disp('           Systems", Addison Wesley, Harlow, England, 2002.')
disp('           Sec. 7.3 Genetic algorithms')
disp('============================================================================')

disp('=============================================================================')
disp('Problem: It is desired to find the maximum value of the function (15*x - x*x)')
disp('         where parameter "x" varies between 0 and 15. Assume that "x" takes')
disp('         only integer values.')
disp('=============================================================================')

disp('Hit any key to define the objective function.')


pause

ObjFun='15*x -x.*x';

disp(' ')
disp('ObjFun=15*x -x.*x')
disp(' ')

disp('Hit any key to define the size of a chromosome population, the number of genes')
disp('in a chromosome, the crossover probability, the mutation probability, possible')
disp('minimum and maximum values of parameter "x", and the number of generations.')
pause

nind=6; % Size of a chromosome population


ngenes=4; % Number of genes in a chromosome
Pc=0.9; % Crossover probability
Pm=0.001; % Mutation probability
xmin=0; % Possible minimum value of parameter "x"

xmax=15; % Possible maximum value of parameter "x"
ngener=20; % Number of generations

disp(' ')
fprintf(1,'nind=%.0f; Size of a chromosome population\n',nind);
fprintf(1,'ngenes=%.0f; Number of genes in a chromosome\n',ngenes);
fprintf(1,'Pc=%.1f; Crossover probability\n',Pc);
fprintf(1,'Pm=%.3f; Mutation probability\n',Pm);
fprintf(1,'xmin=%.0f; Possible minimum value of parameter "x"\n',xmin);
fprintf(1,'xmax=%.0f; Possible maximum value of parameter "x"\n',xmax);
fprintf(1,'ngener=%.0f; Number of generations\n',ngener);
disp(' ')

fprintf(1,'Hit any key to generate a population of %.0f chromosomes.\n',nind);
pause

chrom=round(rand(nind,ngenes))

disp('Hit any key to obtain decoded integers from the chromosome strings.')
pause

x=chrom*[2.^(ngenes-1:-1:0)]'

% Calculation of the chromosome fitness


ObjV=evalObjFun(ObjFun,x);
best(1)=max(ObjV);
ave(1)=mean(ObjV);

disp('Hit any key to display initial locations of the chromosomes on the objective function.')
pause

figure('name','Chromosome locations on the objective function');


fplot(ObjFun,[xmin,xmax])
hold;
plot(x,ObjV,'r.','markersize',15)
legend(['The objective function: ',ObjFun],'Initial chromosome population',4);
title('Hit any key to continue');
xlabel('Parameter "x"');
ylabel('Chromosome fitness');
hold;

disp(' ')
disp('Hit any key to run the genetic algorithm.')
pause

for i=1:ngener,

% Fitness evaluation
fitness=ObjV;
if min(ObjV)<0
fitness=fitness-min(ObjV);
end

% Roulette wheel selection


numsel=round(nind*0.9); % The number of chromosomes to be selected for reproduction
cumfit=repmat(cumsum(fitness),1,numsel);
chance=repmat(rand(1,numsel),nind,1)*cumfit(nind,1);
[selind,j]=find(chance < cumfit & chance >= [zeros(1,numsel);cumfit(1:nind-1,:)]);
newchrom=chrom(selind,:);

% Crossover
points=round(rand(floor(numsel/2),1).*(ngenes-2))+1;
points=points.*(rand(floor(numsel/2),1)<Pc);
for j=1:length(points),
if points(j),
newchrom(2*j-1:2*j,:)=[newchrom(2*j-1:2*j,1:points(j)),...
flipud(newchrom(2*j-1:2*j,points(j)+1:ngenes))];
end
end

% Mutation
mut=find(rand(numsel,ngenes)<Pm);
newchrom(mut)=round(rand(length(mut),1));

% Fitness calculation
newx=newchrom*[2.^(ngenes-1:-1:0)]';
newx=xmin+(newx+1)*(xmax-xmin)/(2^ngenes-1);
newObjV=evalObjFun(ObjFun,newx);

% Creating a new population of chromosomes


if nind-numsel, % Preserving a part of the parent chromosome population
[ans,Index]=sort(fitness);
chrom=[chrom(Index(numsel+1:nind),:);newchrom];
x=[x(Index(numsel+1:nind));newx];
ObjV=[ObjV(Index(numsel+1:nind));newObjV];
else % Replacing the entire parent chromosome population with a new one
chrom=newchrom;
x=newx;
ObjV=newObjV;
end

% Plotting current locations of the chromosomes on the objective function
fplot(ObjFun,[xmin,xmax])

hold;
plot(x,ObjV,'r.','markersize',15)
legend(['The objective function: ',ObjFun],'Current chromosome population',4);
title(['Generation # ',num2str(i)]);
xlabel('Parameter "x"');
ylabel('Chromosome fitness');
pause(0.2)
hold;

best(1+i)=max(ObjV);
ave(1+i)=mean(ObjV);
end

disp(' ')
disp('Hit any key to display the performance graph.')
pause

figure('name','Performance graph');
plot(0:ngener,best,0:ngener,ave);
legend('Best','Average',0);
title(['Pc = ',num2str(Pc),', Pm = ',num2str(Pm)]);
xlabel('Generations');
ylabel('Fitness')

function y=evalObjFun(ObjFun,x)
y=eval(ObjFun);

Output :

End …

