AI Projects
Al-Neelain University
Contents

Expert Systems
The Simple Perceptron
The Backpropagation Algorithm
The Hopfield Network
Hebbian Learning
Kohonen Learning
Fuzzy Logic and Fuzzy Inference
Genetic Algorithms
Expert Systems:
Expert Systems

Expert systems are one of the most powerful branches of artificial intelligence, which is itself among the most powerful branches of computer science.

What are expert systems?

They are programs that emulate the performance of a human expert in a specific domain of expertise, by collecting and applying the knowledge and experience of one or more experts in that domain.

In short, these systems were created to capture the expertise of human experts (especially in rare specialties) and embed it in an expert system that can stand in for the human expert, help transfer that expertise to other people, and solve problems faster than the human expert can.
Advantages of these systems:

1. They are easy to use for any user, whether an ordinary user or a developer.
2. They are clearly useful within their application domain.
3. They can learn from experts both directly and indirectly.
4. They can teach non-specialists.
5. They can explain any solution they reach, showing how it was obtained.
6. They can answer both simple and complex questions within the scope of the application.
7. They are a useful way to provide high levels of expertise when no expert is available.
8. They can improve the performance of specialists with limited experience.
Why expert systems are not more widespread:

They are expensive compared with conventional applications.

Despite these problems, there are strong reasons that lead some companies to overcome them, including:

Preserving expertise and knowledge from being lost, especially in important specialties that are either heavily used or rare.

Solving problems, which saves time, money, and effort.
One of the most important application areas of expert systems is classification, where the system is asked to determine the class to which a given object belongs. Expert systems have also entered many other fields, such as medicine, agriculture, prospecting, electronics, computing, geology, engineering, education, religious and civil law, commerce, economics, and many more.
Producing an expert system requires two essential participants:

1. The programmer, who analyses the problem and writes the program, working in the field of artificial intelligence.
2. The domain expert, a person specialised in a particular field who does not necessarily know anything about artificial intelligence; what matters is the depth of their experience and their intimate knowledge of their speciality.
An expert system goes through several stages before it reaches its final form:

1. Defining the application: determining what we want from the system and the domain of expertise.
2. Designing the system.
3. Programming the system.
4. Testing and documenting the system.

Each of these steps has people assigned to carry it out.
Examples of expert systems include:

The Eliza psychotherapy system: a system that holds a dialogue with the user and answers queries like an expert psychotherapist.

As a worked example of an expert system, we consider an expert system for diagnosing the six childhood diseases, implemented in the Prolog language.
An Expert System for Diagnosing Children's Diseases
The system starts by asking about symptoms. For example, it begins by asking whether fever is present; if the patient answers yes, it searches among the diseases that have fever as a symptom, then keeps asking about one symptom after another until it narrows itself down to a particular disease, at which point it asks about the symptoms specific to that disease.

If the patient answers no from the start, backtracking occurs and the system moves on to diagnosing another disease that does not have fever among its symptoms, and so on until it reaches a specific disease. If it cannot identify the disease, this means the patient is suffering from some other illness unrelated to childhood diseases.
The Knowledge Base

If the patient's symptoms are exactly:
    fever
    cough
    conjuctvitis (red eyes)
    runny_nose
    rash
then the child is suffering from measles.
If the patient's symptoms are exactly:
    fever
    headache
    runny_nose
    rash
then the child is suffering from german_measles.

If the patient's symptoms are exactly:
    fever
    headache
    body_ache
    conjuctvitis (red eyes)
    chills
    sore_throat
    cough
    runny_nose
then the child is suffering from flu.
If the patient's symptoms are exactly:
    headache
    sneezing
    sore_throat
    chills
    runny_nose
then the child is suffering from a common_cold.

If the patient's symptoms are exactly:
    fever
    swollen_glands
then the child is suffering from mumps.

If the patient's symptoms are exactly:
    fever
    rash
    body_ache
    chills
then the child is suffering from chiken_pox (chickenpox).
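The if-then rules above can be sketched as a small rule-matching program. This is an illustrative Python sketch (the chapter's actual implementation, given later, is in Prolog); the disease names and symptom sets are taken directly from the rules above.

```python
# Minimal forward-matching sketch of the knowledge base above.
# Each disease maps to the set of symptoms that must all be present.
RULES = {
    "measles": {"fever", "cough", "conjuctvitis", "runny_nose", "rash"},
    "german_measles": {"fever", "headache", "runny_nose", "rash"},
    "flu": {"fever", "headache", "body_ache", "conjuctvitis",
            "chills", "sore_throat", "cough", "runny_nose"},
    "common_cold": {"headache", "sneezing", "sore_throat", "chills", "runny_nose"},
    "mumps": {"fever", "swollen_glands"},
    "chiken_pox": {"fever", "rash", "body_ache", "chills"},
}

def diagnose(symptoms):
    """Return the first disease whose required symptoms are all present."""
    for disease, required in RULES.items():
        if required <= set(symptoms):
            return disease
    return None  # the symptoms match no childhood disease
```

Unlike the Prolog version, this sketch receives all symptoms up front instead of asking about them one by one; the "no matching disease" case corresponds to the system concluding the patient has some unrelated illness.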
The Inference Engine

[Diagram: a network linking the symptoms (fever, cough, runny nose, rash, headache, body ache, sore throat, chills, sneezing, swollen glands) to the diseases: measles, German measles, flu, mumps, and chickenpox.]
The expert system in the Prolog language:
%MEDICAL DIAGNOSTIC SYSTEM
%CHILDHOOD DISEASES
%EXAMPLE ONLY NOT FOR MEDICAL USE
domains
disease=symbol
symptom=symbol
query=symbol
replay=symbol
database
xpositive(symptom)
xnegative(symptom)
predicates
nondeterm hypothesis(disease)
nondeterm symptom(symptom)
go
positive(query,symptom)
clear_facts
remember(symptom,replay)
ask(query,symptom,replay)
clauses
go:-
%clearwindow,
hypothesis(Disease),!,
write("the patient probably has ",Disease),
clear_facts.
positive(_,Symptom):-
xpositive(Symptom),!.
positive(Query,Symptom):-
not(xnegative(Symptom)),
ask(Query,Symptom,Replay),
Replay="y".
ask(Query,Symptom,Replay):-
write(Query),
readln(Replay),
remember(Symptom,Replay).
remember(Symptom,"y"):-
asserta(xpositive(Symptom)).
remember(Symptom,"n"):-
asserta(xnegative(Symptom)).
clear_facts:-
retract(xpositive(_)),fail.
clear_facts:-
retract(xnegative(_)),fail.
clear_facts.
symptom(fever):-
positive("Does the patient have a fever (y/n)",fever).
symptom(headache):-
positive("Does the patient have a headache (y/n)",headache).
symptom(body_ache):-
positive("Does the patient have body ache (y/n)",body_ache).
symptom(conjuctvitis):-
positive("Does the patient have conjuctvitis (y/n)",conjuctvitis).
symptom(chills):-
positive("Does the patient have chills (y/n)",chills).
symptom(sore_throat):-
positive("Does the patient have a sore throat (y/n)",sore_throat).
symptom(cough):-
positive("Does the patient have a cough (y/n)",cough).
symptom(runny_nose):-
positive("Does the patient have a runny nose (y/n)",runny_nose).
symptom(rash):-
positive("Does the patient have a rash (y/n)",rash).
symptom(swollen_glands):-
positive("Does the patient have swollen glands (y/n)",swollen_glands).
symptom(sneezing):-
positive("Does the patient have sneezing (y/n)",sneezing).
%************************************************************
hypothesis(measles):-
symptom(fever),
symptom(cough),
symptom(conjuctvitis),
symptom(runny_nose),
symptom(rash).
hypothesis(german_measles):-
symptom(fever),
symptom(headache),
symptom(runny_nose),
symptom(rash).
hypothesis(flu):-
symptom(fever),
symptom(headache),
symptom(body_ache),
symptom(conjuctvitis),
symptom(chills),
symptom(sore_throat),
symptom(cough),
symptom(runny_nose).
hypothesis(common_cold):-
symptom(headache),
symptom(sneezing),
symptom(sore_throat),
symptom(chills),
symptom(runny_nose).
hypothesis(mumps):-
symptom(fever),
symptom(swollen_glands).
hypothesis(chiken_pox):-
symptom(fever),
symptom(rash),
symptom(body_ache),
symptom(chills).
goal
go.
The Simple Perceptron:
Artificial Neural Networks

Artificial neural networks are a relatively recent computing technique, inspired by the way the human brain and the central nervous system work. The origins of artificial neural networks actually predate the conventional computer: the first artificial neuron was produced in 1943 by Warren McCulloch and Walter Pitts, but the technology available at the time did not allow them to develop it or put it to use.
Neural networks consist of a large number of neural processing units (loosely called "neurons") that are densely interconnected so that they can handle particular kinds of problems. As with living neurons, artificial neurons need training, during which the connections between them are adjusted. After training, the neural network can be regarded as an "expert" in the class of information it was trained on.

Neural networks are distinguished by their ability to derive results from complex or imprecise inputs. They can also extract patterns and detect trends too complex to be noticed by a human or by other computing techniques. A trained (expert) neural network can predict the outcomes of new situations and answer "what if" questions.
Neural networks differ from conventional computers in that the latter tackle problems through fixed, programmed steps and instructions (an algorithm), and therefore cannot solve problems that were not programmed in advance; a conventional computer can only solve problems that the programmer himself can solve. Neural networks are the opposite: they learn from examples rather than from explicit instructions, and they can handle problems they were never programmed for. In this respect they are closer to the way the human brain works, drawing on past experience to solve new problems.
An artificial neuron is a device with several inputs and one output, used in one of two modes: training mode or operating mode. In training mode, the neuron is taught to fire (or not to fire) when a particular input pattern is presented; "firing" here refers to the value produced at the output. In operating mode, when a learned pattern is detected at the input, the output value associated with it is produced; when a new, previously unlearned input arrives, a firing rule is applied. For example, the firing rule might be to output the value associated with the known pattern most similar to the input pattern.
Among the best-known applications of neural networks are recognising patterns and shapes such as text, finding approximate solutions, classifying satellite images, forecasting sales and future results, risk management, detecting particular patterns in large quantities of data, diagnosing certain diseases, and controlling the movement of robots, in addition to many other applications and fields.

It is important to stress that artificial neural networks are neither a replacement for nor a competitor to conventional computers; rather, they complement them. Each has its own uses, and many tasks require the two to work together, with the conventional computer usually supervising the neural network to achieve the greatest possible efficiency.
• How these neurons are interconnected
• The learning algorithm
• Training the neural network: we set the initial values of the network's weights.

The Simple Perceptron

How does the perceptron learn its classification tasks?
• This is done by making small adjustments to the weights to reduce the difference between the desired outputs of the perceptron and its actual outputs. The initial weights are assigned randomly, usually in the range [-0.5, 0.5], and are then updated to obtain outputs consistent with the training examples. For the perceptron, the weight-update process is very simple. If at iteration p the actual output is Y(p) and the desired output is Yd(p), the following equation gives the error:

e(p) = Yd(p) - Y(p)    where p = 1, 2, 3, ...

• The iteration p here refers to training example number p presented to the perceptron.

• If the error e(p) is positive we need to increase the perceptron output Y(p), and if it is negative we need to decrease Y(p). Taking into account that each perceptron input contributes xi(p) * wi(p) to the total input x(p), we find that when the input value xi(p) is positive, an increase in the weight wi(p) tends to increase the perceptron output Y(p), whereas when xi(p) is negative, an increase in the weight wi(p) tends to decrease it. We can therefore state the rule for training the perceptron:

• Perceptron learning rule:

wi(p+1) = wi(p) + α * xi(p) * e(p)

where α is the learning rate, a positive constant less than one.
• Step 3: Weight training: update the perceptron's weights

wi(p+1) = wi(p) + Δwi(p)
Δwi(p) = α * xi(p) * e(p)

where Δwi(p) is the weight correction at iteration p.

• Step 4: Iteration

Increase the iteration p by one, go back to Step 2, and repeat the process until convergence.
Here the simple perceptron was able to learn the OR operation, because OR is linearly separable.

Here the simple perceptron cannot learn the XOR operation, because XOR is not linearly separable; multilayer neural networks are used instead, trained with the backpropagation algorithm.

In other words, the simple perceptron can learn the OR operation, but a single-layer perceptron cannot be trained to perform exclusive OR.
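The training procedure above can be checked on the linearly separable OR example. This is an illustrative Python sketch; the step threshold (0.2) and learning rate (0.1) are assumed values, not taken from the notes.

```python
# Train a single perceptron on OR with the rule
#     w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p),   e(p) = Yd(p) - Y(p).
def step(s, theta=0.2):
    # hard-limit activation: fire when the weighted sum reaches the threshold
    return 1 if s >= theta else 0

def train_or(alpha=0.1, epochs=50):
    w = [0.3, -0.1]                       # small initial weights in [-0.5, 0.5]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    for _ in range(epochs):
        for x, yd in data:
            y = step(w[0]*x[0] + w[1]*x[1])
            e = yd - y                    # error at this iteration
            w = [wi + alpha*xi*e for wi, xi in zip(w, x)]
    return w

w = train_or()
outputs = [step(w[0]*a + w[1]*b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs reproduces the OR truth table: [0, 1, 1, 1]
```

Running the same loop on XOR targets never converges, which is exactly the limitation the text describes.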
The Backpropagation Algorithm:
Multilayer neural networks

There are many kinds of artificial neural network, and one of the most important is the multilayer perceptron (Perceptron multicouche). This network is divided into layers of artificial neurons: an input layer, an output layer, and one or more hidden layers located between the input layer and the output layer. Every neuron in one of these layers is connected to all the neurons in the layer that follows it and all the neurons in the layer that precedes it.

Each connection between one neuron and another carries a value called the weight, which expresses the importance of the link between the two. A neuron multiplies each input value coming from the neurons of the previous layer by the weight of its connection to that neuron, sums all the products, and then passes the result through a transfer function that depends on the neuron type; the result is the neuron's output, which is passed on to the neurons of the next layer.

A single artificial neuron can realise only simple, specific models of mathematical functions; the power and secret of neural computation comes from the connections between the neurons. The basic building block of a neural network is the neuron, and by changing and adjusting how the neurons connect to one another, the behaviour, effect, and results of the network change.

By stacking and connecting more than one layer we obtain larger, more complex networks with enormous computational capability, and it has been proven that multilayer networks possess greater capabilities than single-layer ones.

Choosing the number of layers in the network, the number of neurons in each layer, and how the layers are connected depends on the kind, size, and complexity of the application the network is used for. For a given application, a multilayer network, despite its great computational power, may prove less effective than a single-layer network with its comparatively small computational power. This field thus remains an experimental science, despite the algorithms and hypotheses that have been put forward: for any given application we must subject the network to several experiments, changing and adjusting its structure until we obtain the results best suited to our application.
The actual outputs of the neurons in the hidden layer are computed as

y_j(p) = sigmoid[ Σ_{i=1..n} x_i(p) * w_ij(p) - θ_j ]

where n is the number of inputs of neuron j in the hidden layer and sigmoid is the sigmoidal activation function. The actual outputs of the neurons in the output layer are then computed as

y_k(p) = sigmoid[ Σ_{j=1..m} x_jk(p) * w_jk(p) - θ_k ]

where m is the number of inputs of neuron k in the output layer, and sigmoid is the sigmoidal activation function.
Step 3: Weight training: the weights in the backpropagation network are updated by propagating backwards the errors associated with the output neurons.

Compute the error gradient for the neurons in the output layer:

δ_k(p) = y_k(p) * [1 - y_k(p)] * e_k(p)

where

e_k(p) = y_dk(p) - y_k(p),    p = 1, 2, 3, ...

Compute the weight corrections:

ΔW_jk(p) = α * y_j(p) * δ_k(p)
W_jk(p+1) = W_jk(p) + ΔW_jk(p)

and update the weights at the output neurons:

W_jk(p+1) = W_jk(p) + α * y_j(p) * δ_k(p)
Compute the error gradient for the neurons in the hidden layer:

δ_j(p) = y_j(p) * [1 - y_j(p)] * Σ_{k=1..l} δ_k(p) * w_jk(p)
Code :
% ==================
% Filename: XOR_bp.m
% ==================
echo on;
% Hit any key to define four 2-element input vectors denoted by "p".
pause
p=[1 0 1 0;1 1 0 0]
% Hit any key to define four 1-element target vectors denoted by "t".
pause
t=[0 1 1 0]
% Hit any key to create the network and initialise its weights and biases.
pause
net=newff(minmax(p),[2 1],{'tansig','purelin'});   % assumed architecture: two hidden neurons (this line was missing from the listing)
net=train(net,p,t);
% Hit any key to see whether the network has learned the XOR operation.
pause
p=[1;1]
a=sim(net,p)
% Hit any key to continue.
pause
p=[0;1]
a=sim(net,p)
p=[1;0]
a=sim(net,p)
p=[0;0]
a=sim(net,p)
echo off
disp('end of XOR_bp')
Output :
The Hopfield Network:
Neural networks were designed by analogy with the brain. The brain's memory, however, works by association. For example, we can recognise a familiar face even in unfamiliar surroundings within 100-200 ms, and we can recall a complete sensory experience, including sounds and scenes, after hearing only a few bars of music: the brain routinely associates one thing with another.

Can a neural network emulate the associative characteristics of human memory?

Multilayer neural networks trained with the backpropagation algorithm are used in pattern recognition problems, but as we noted earlier, such networks are not truly intelligent. To emulate the associative characteristics of human memory we need a different kind of network: a recurrent neural network.
The Hopfield Network

A recurrent neural network has feedback loops from its outputs back to its inputs. The presence of such loops has a profound effect on the network's learning capability.
The Hopfield network training algorithm

Step 1: Storage

An n-neuron Hopfield network is required to store a set of M fundamental memories Y1, Y2, ..., YM. The synaptic weight from neuron i to neuron j is computed as

w_ij = Σ_{m=1..M} y_mi * y_mj    for i ≠ j
w_ij = 0                         for i = j

where y_mi and y_mj are the i-th and j-th elements of the fundamental memory Ym, respectively. In matrix form, the synaptic weights between the neurons are represented as

W = Σ_{m=1..M} Y_m (Y_m)^T - M*I

A Hopfield network can store a set of fundamental memories if its weight matrix is symmetric with zeros on its main diagonal. Once the weights are computed, they remain fixed.
Step 2: Testing

We need to confirm that the Hopfield network is capable of recalling all of its fundamental memories; in other words, the network must recall any fundamental memory Ym when presented with it as input. That is, with

x_mi = y_mi,    i = 1, 2, ..., n;  m = 1, 2, ..., M

the outputs satisfy

y_mi = sign( Σ_{j=1..n} w_ij * x_mj - θ_i )
If all the fundamental memories are recalled perfectly, we can proceed to the next step.

Step 3: Retrieval

Present an unknown n-dimensional vector (a probe) x to the network and retrieve a stable state. The probe represents a corrupted or incomplete version of a fundamental memory; that is,

x ≠ ym,    m = 1, 2, ..., M

(a) Initialise the Hopfield retrieval algorithm by setting

xj(0) = xj,    j = 1, 2, ..., n

and compute the initial state of each neuron

yi(0) = sign( Σ_{j=1..n} w_ij * xj(0) - θ_i ),    i = 1, 2, ..., n

where xj(0) is the j-th element of the probe vector x at iteration p = 0, and yi(0) is the state of neuron i at iteration p = 0. In matrix form, the state vector at iteration p = 0 is

y(0) = sign( W x(0) - θ )
The Hopfield network always converges to a stable state when retrieval is asynchronous, but the stable state does not necessarily represent one of the fundamental memories, and if it does, it is not necessarily the closest one.
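The storage and retrieval rules above can be tried directly on the same three-neuron example used in the listing below, with fundamental memories (1, 1, 1) and (-1, -1, -1). This is an illustrative Python sketch with all thresholds set to zero and synchronous updating.

```python
# Three-neuron Hopfield network storing M = 2 fundamental memories.
memories = [(1, 1, 1), (-1, -1, -1)]
n = 3

# Storage rule: w_ij = sum_m y_mi * y_mj for i != j, and w_ii = 0.
W = [[0 if i == j else sum(y[i]*y[j] for y in memories)
      for j in range(n)] for i in range(n)]

def sign(v):
    return 1 if v >= 0 else -1

def recall(x, steps=10):
    """Synchronous retrieval: y(p+1) = sign(W y(p)), thresholds zero."""
    y = list(x)
    for _ in range(steps):
        y = [sign(sum(W[i][j]*y[j] for j in range(n))) for i in range(n)]
    return tuple(y)
```

Both fundamental memories are stable states, and a probe with one corrupted bit, such as (-1, 1, 1), is pulled back to (1, 1, 1), which is the associative behaviour the text describes.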
Code (1):
% ====================
% ====================
pause
T=[1 -1; 1 -1; 1 -1]
% Hit any key to plot the Hopfield state space with the two fundamental memories
pause
plot3(T(1,:),T(2,:),T(3,:),'r.','markersize',20)
% Hit any key to obtain weights and biases of the Hopfield network.
pause
net=newhop(T);
% Hit any key to test the network with six unstable states represented as the probe vectors
pause
P=[-1 1 1 -1 -1 1; 1 -1 1 -1 1 -1; 1 1 -1 1 -1 -1]
for i=1:6
a = {P(:,i)};
[y,Pf,Af]=sim(net,{1 10},{},a);
record=[cell2mat(a) cell2mat(y)];
start=cell2mat(a);
plot3(start(1,1),start(2,1),start(3,1),'b*',record(1,:),record(2,:),record(3,:))
drawnow;
pause
end
Output(1) :
Code(2) :
function varargout = hopfieldNetwork(varargin)
gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @hopfieldNetwork_OpeningFcn, ...
'gui_OutputFcn', @hopfieldNetwork_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Outputs from this function are returned to the command line.
function varargout = hopfieldNetwork_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
end
handles.hPatternsDisplay = [];
set(handles.imageSize,'enable','on');
handles.W = [];
guidata(hObject, handles);
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
Npattern = length(handles.hPatternsDisplay);
if Npattern > 9
msgbox('more than 9 patterns are not supported!','error');
return
end
im = getimage(handles.neurons);
N = get(handles.imageSize,'string');
N = str2num(N);
W = handles.W; %weights vector
avg = mean(im(:)); %removing the cross talk part
if ~isempty(W)
%W = W +( kron(im,im))/(N^2);
W = W + ( kron(im-avg,im-avg))/(N^2)/avg/(1-avg);
else
% W = kron(im,im)/(N^2);
W = ( kron(im-avg,im-avg))/(N^2)/avg/(1-avg);
end
% Erasing self weight
ind = 1:N^2;
f = find(mod(ind,N+1)==1);
W(ind(f),ind(f)) = 0;
handles.W = W;
if Npattern > 0
for n=1 : Npattern
x = xStart+(n+offset-1)*xStep;
h = handles.hPatternsDisplay(n);
set(h,'units','normalized');
set(h,'position',[x y width height]);
end
x = xStart+(n+offset)*xStep;
h = axes('units','normalized','position',[x y width height]);
handles.hPatternsDisplay(n+1) = h;
imagesc(im,'Parent',h);
else
x = xStart+(offset)*xStep;
h = axes('units','normalized','position',[x y width height]);
handles.hPatternsDisplay = h;
end
imagesc(im,'Parent',h);
set(h,'YTick',[],'XTick',[],'XTickMode','manual','Parent',handles.learnedPaterns);
guidata(hObject, handles);
function im = fixImage(im,N)
% if isrgb(im)
if length( size(im) ) == 3
im = rgb2gray(im);
end
im = double(im);
m = min(im(:));
M = max(im(:));
im = (im-m)/(M-m); %normalizing the image
im = imresize(im,[N N],'bilinear');
%im = (im > 0.5)*2-1; %changing image values to -1 & 1
im = (im > 0.5); %changing image values to 0 & 1
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
Output(2) :
46
47
48
49
Hebbian Learning:
Self-organizing neural networks

Self-organizing neural networks are effective at dealing with unexpected and changing conditions. In this part we consider Hebbian learning, which is built on self-organizing networks.

Hebb's law states that if neuron i is close enough to the excited neuron j and repeatedly participates in its activation, the synaptic connection between these two neurons is strengthened, and neuron j becomes more sensitive to stimuli from neuron i.

We can represent Hebb's law in the form of two rules:

1. If two neurons on either side of a connection are activated synchronously, then the weight of that connection increases.
2. If two neurons on either side of a connection are activated asynchronously, then the weight of that connection decreases.

Hebb's law provides the basis for learning without a teacher: learning here is a local phenomenon occurring without feedback from the environment. The following figure shows Hebbian learning in a neural network.
The generalized Hebbian learning algorithm

Step 1: Initialisation

Set the initial synaptic weights and thresholds to small random values, say in the interval [0, 1]. Also assign small positive values to the learning-rate parameter α and the forgetting factor φ.

Step 2: Activation

Compute the neuron outputs at iteration p:

y_j(p) = Σ_{i=1..n} x_i(p) * w_ij(p) - θ_j
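The generalized Hebbian rule (weight growth for a coactive input and output, decay via the forgetting factor φ) can be sketched in a few lines. This is an illustrative linear-output sketch with the threshold θ_j taken as zero; α = 0.1 and φ = 0.02 are invented values, which give a steady-state weight of α/φ = 5.

```python
# A linear-output sketch of the generalized Hebbian rule:
#     dw_ij = alpha * y_j * x_i  -  phi * y_j * w_ij
# The first term grows the weight when input and output are active together;
# the forgetting term keeps the weight bounded at alpha/phi.
alpha, phi = 0.1, 0.02
x = [1.0, 1.0]          # a repeatedly presented input pattern
w = [0.2, 0.1]          # small initial weights in [0, 1]

for _ in range(500):
    y = sum(xi * wi for xi, wi in zip(x, w))    # y_j(p) = sum_i x_i(p) w_ij(p)
    w = [wi + alpha * y * xi - phi * y * wi for wi, xi in zip(w, x)]
# both weights converge to the equilibrium alpha/phi = 5
```

Without the forgetting term the weights would grow without bound, which is why the generalized form of the rule includes φ.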
An example of Hebbian learning

[Figure: a five-neuron input layer fully connected to a five-neuron output layer, shown before and after training; inputs x1 = 1, x2 = 0, x3 = 0, x4 = 0, x5 = 1 drive the corresponding outputs y1 ... y5.]
Code :
function digit_recognition
disp(' =====================================')
disp(' Character recognition neural networks')
disp(' =====================================')
disp(' ============================================================================')
disp(' Reference: Negnevitsky, M., "Artificial Intelligence: A Guide to Intelligent')
disp('            Systems", Addison Wesley, Harlow, England, 2002.')
disp(' Sec. 9.4 Will a neural network work for my problem?')
disp(' ============================================================================')
disp(' ===============================================================================')
disp(' Problem: A multilayer feedforward network is used for the recognition of digits')
disp('          from 0 to 9. Each digit is represented by a 5 x 9 bit map.')
disp(' ===============================================================================')
[digit1,digit2,digit3,digit4,digit5,digit6,digit7,digit8,digit9,digit0] = bit_maps;
disp(' Hit any key to obtain ten 45-element input vectors denoted by "p".')
pause
p=[digit1(:),digit2(:),digit3(:),digit4(:),digit5(:),digit6(:),digit7(:),digit8(:),digit9(:),digit0(:)]
disp(' Hit any key to define ten 10-element target vectors denoted by "t".')
pause
t = eye(10)
disp(' Hit any key to define the network architecture.')
pause
disp(' ')
fprintf(1,' s1=%.0f;  Number of neurons in the hidden layer\n',s1);
fprintf(1,' s2=%.0f;  Number of neurons in the output layer\n',s2);
disp(' ')
disp(' Hit any key to create the network, initialise its weights and biases,')
disp(' and set up training parameters.')
pause
disp(' ')
fprintf(1,' net.trainParam.show=%.0f;  Number of epochs between showing the progress\n',net.trainParam.show);
fprintf(1,' net.trainParam.epochs=%.0f;  Maximum number of epochs\n',net.trainParam.epochs);
fprintf(1,' net.trainParam.goal=%.3f;  Performance goal\n',net.trainParam.goal);
fprintf(1,' net.trainParam.lr=%.2f;  Learning rate\n',net.trainParam.lr);
disp(' ')
disp(' ')
disp(' net = train(net,p,t)')
disp(' ')
net = train(net,p,t);
disp(' Hit any key to see how the network recognises a digit, for example digit 3.')
pause
digit3
probe=digit3(:);
a=sim(net,probe);
disp(' a=sim(net,probe)')
a=round(a)
disp(' Hit any key to see how "noise" distorts the bit map of a digit, for example digit 5.')
disp(' ')
pause
probe=digit5;
figure('name','"Noisy" bit maps')
subplot(1,2,1)
probe_plot(probe)
title('Noise level: 0%')
probe=digit5+randn(size(probe))*0.1;
subplot(1,2,2)
probe_plot(probe)
title('Noise level: 10%')
probe=digit5+randn(size(probe))*0.2;
figure('name','"Noisy" bit maps')
subplot(1,2,1)
probe_plot(probe)
title('Noise level: 20%')
probe=digit5+randn(size(probe))*0.5;
subplot(1,2,2)
probe_plot(probe)
title('Noise level: 50%')
disp(' Hit any key to evaluate the digit recognition neural network.')
disp(' ')
pause
% Evaluate the digit recognition network.
for noise_level=noise_range
error=0;
for i=1:max_test
probe=p+randn(size(p))*noise_level;
a=compet(sim(net,probe));
error=error+sum(sum(abs(a-t)))/2;
end
disp(' ')
disp(' Hit any key to plot the test results.')
disp(' ')
pause
h = figure;
plot(noise_range*100,average_error*100,'b-');
title('Performance of the digit recognition network')
xlabel('Noise level, %');
ylabel('Recognition error, %');
disp(' ')
disp(' Hit any key to train the digit recognition network with "noisy" examples.')
disp(' ')
pause
figure
net.trainParam.epochs = 1000; % Maximum number of epochs to train.
t_noise = [t t t t];
for pass = 1:10
fprintf('Pass = %.0f\n',pass);
p_noise=[p p (p+randn(size(p))*0.1) (p+randn(size(p))*0.2)];
net= train(net,p_noise,t_noise);
end
disp(' Hit any key to evaluate the digit recognition network trained with "noisy" examples.')
disp(' ')
pause
average_error = [];
for i=1:max_test
probe=p+randn(size(p))*noise_level;
a=compet(sim(net,probe));
error=error+sum(sum(abs(a-t)))/2;
end
disp(' ')
disp(' Hit any key to plot the test results.')
disp(' ')
pause
figure(h)
hold on
plot(noise_range*100,average_error*100,'r-');
legend('Network trained with "perfect" examples','Network trained with "noisy" examples',2);
hold off
disp('end of digit_recognition')
function probe_plot(probe);
[m n]=size(probe);
probe_plot=[probe probe(:,[n])]';
probe_plot=[probe_plot probe_plot(:,[m])]';
pcolor(probe_plot)
colormap(gray)
axis('ij')
axis image
Output :
Kohonen Learning:
Competitive learning

The other common type of unsupervised learning is competitive learning, in which the neurons compete with one another to be activated. Whereas in Hebbian learning several output neurons may be activated at the same time, in competitive learning only one neuron is activated at any one time. The output neuron that wins the competition is called the winner-takes-all neuron.
The Mexican hat function

This function represents the relationship between the distance from the winner-takes-all neuron and the strength of the connections within the Kohonen layer. According to this function, the nearest neighbours (in the short-range lateral excitation area) have a strong excitatory effect, a distant neighbour (in the inhibitory penumbra) has a mild inhibitory effect, and a very distant neighbour (the area surrounding the inhibitory penumbra) has a weak excitatory effect, which is usually neglected.

In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbours are allowed to learn; if a neuron does not respond to a given input pattern, no learning can take place in that particular neuron.

The output signal y_j of the winner-takes-all neuron j is set equal to 1, and the output signals of the other neurons (those that lost the competition) are set equal to 0.

The standard competitive learning rule defines the change applied to the synaptic weight w_ij as

Δw_ij = α * (x_i - w_ij)    if neuron j wins the competition
Δw_ij = 0                   if neuron j loses the competition
Euclidean distance

The Euclidean distance between a pair of n-by-1 vectors x and w_j is defined by

d = ||x - w_j|| = [ Σ_{i=1..n} (x_i - w_ij)^2 ]^(1/2)
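Combining the Euclidean distance with the winner-takes-all rule gives a minimal competitive-learning sketch. This is illustrative Python (the listing below uses the MATLAB SOM toolbox instead); the learning rate, epoch count, and the two-cluster data are invented values.

```python
import math

def euclid(x, w):
    """d = ||x - w|| = sqrt(sum_i (x_i - w_i)^2)"""
    return math.sqrt(sum((xi - wi)**2 for xi, wi in zip(x, w)))

def train_competitive(data, weights, alpha=0.5, epochs=20):
    """Winner-takes-all: only the closest neuron moves its weights toward x,
    by delta_w = alpha * (x_i - w_ij); losing neurons are left unchanged."""
    for _ in range(epochs):
        for x in data:
            j = min(range(len(weights)), key=lambda k: euclid(x, weights[k]))
            weights[j] = [w + alpha*(xi - w) for w, xi in zip(weights[j], x)]
    return weights

# Two clusters of 2-D inputs and two output neurons (values are illustrative).
data = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
weights = train_competitive(data, [[0.2, 0.2], [0.8, 0.8]])
# each neuron's weight vector ends up near the centre of one cluster
```

After training, each output neuron has specialised on one cluster of inputs, which is the clustering behaviour a Kohonen layer exhibits.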
Code :
echo on;
pause
rand('seed',1234);
p=rands(2,1000);
plot(p(1,:),p(2,:),'r.')
title('Input vectors');
xlabel('p(1)');
ylabel('p(2)');
pause
s1=6; s2=6;
net=newsom([-1 1; -1 1],[s1 s2]);
plotsom(net.iw{1,1},net.layers{1}.distances)
pause
for i=1:10
hold on;
net.trainParam.epochs=i*net.trainParam.show;
net=train(net,p);
delete(findobj(gcf,'color',[0 0 1]));
delete(findobj(gcf,'color',[1 0 0]));
plotsom(net.IW{1,1},net.layers{1}.distances);
hold off;
pause(0.001)
end
echo off;
for i=1:(s1*s2);
text(net.iw{1,1}(i,1)+0.02,net.iw{1,1}(i,2),sprintf('%g',i));
end
echo on;
pause
for i=1:3
probe=rands(2,1);
hold on;
plot(probe(1,1),probe(2,1),'.g','markersize',25);
a=sim(net,probe);
a=find(a)
text(probe(1,1)+0.03,probe(2,1),sprintf('%g',(a)));
hold off
% Hit any key to continue.
if i<3
pause
end
end
echo off
disp('end of Kohonen')
Output :
74
75
76
77
Fuzzy Logic and Fuzzy Inference:
Fuzzy logic & fuzzy inference

Knowledge-based systems are systems developed specifically to emulate human thinking in solving problems and giving advice. One type of knowledge-based system is the expert system, used in many fields such as medical diagnosis and stock trading. Despite the success of these applications, they are not suited to domains that are hard to specify precisely, where their effectiveness drops sharply. Highly effective expert systems can, however, be obtained using "fuzzy logic", developed by Lotfi Zadeh when he put forward his theory in a 1965 research paper entitled "Fuzzy sets". At first the theory was rejected, but over time it managed to capture the attention of the scientific community. Fuzzy logic is based on emulating human thinking while recognising that things are not classified only as true or false: there are other values between these two that can be taken into account.
Fuzzy logic is built on the notion of "fuzzy sets", an extension of the familiar concept of a set. A fuzzy set is a set that cannot be defined precisely; its definition varies with the point of view. For example, the set of beautiful women is a fuzzy set, because the concept of beauty is hard to pin down precisely, and the set of numbers much greater than one is also a fuzzy set for the same reason: "much greater" is a phrase that cannot be described precisely. The difference between fuzzy and conventional sets thus lies in how elements belong to the set. In a conventional set there are only two cases: an element either belongs to the set or does not (we can denote membership by one and non-membership by zero). Fuzzy sets, by contrast, allow their elements different degrees of membership ranging between full membership (one) and non-membership (zero). For example, given three fuzzy sets of women, tall, short, and of average height, a girl who is 165 cm tall belongs to all of these fuzzy sets with different degrees of membership: say, to the set of tall women with degree 0.5, to the set of average-height women with degree 0.8, and to the set of short women with degree 0.3. The heights and their corresponding degrees of membership are determined by an expert in the field.
How do expert systems based on fuzzy logic work?

Fuzzy logic can be used in building expert systems to make them emulate human thinking more closely. Such a system works using the following components:

1. The part responsible for computing the degrees of membership of the input data in the fuzzy sets, known as the fuzzifier.
2. The inference engine, which reaches conclusions by using the rule base, which contains rules of the form (if ... then ...).
3. The part that converts the fuzzy result into a precise result, called the defuzzifier.
There are many examples of expert systems built on fuzzy logic. Among them: the SIGMAR system, which helps forecast wind speed; the MEDEX system, which helps predict ice waves and floods in Canada; another expert system in the hotel and tourism field, which helps tourists search for hotels according to their preferences by letting them express those preferences in imprecise terms (for example, a tourist can describe the hotel price with one of the words: cheap, moderate or expensive); and, in the medical field, the ABVAB system, which helps physicians identify the causes of bleeding.
Fuzzy inference can be defined as mapping the representation of given inputs to outputs using fuzzy set theory.
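As a sketch of how the three components cooperate, here is a toy single-input fuzzy system in Python. The membership functions, the two rules and the centroid values are invented for illustration; the document's real example, the service-centre system, is the MATLAB script that follows.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(delay):
    # 1. Fuzzifier: degrees of membership of the crisp input (delay in [0, 1])
    mu_short = tri(delay, -0.3, 0.0, 0.5)
    mu_long  = tri(delay,  0.3, 1.0, 1.3)
    # 2. Inference engine over a rule base of IF ... THEN rules:
    #      IF delay is short THEN spares is small (output set centred at 20)
    #      IF delay is long  THEN spares is large (output set centred at 80)
    rules = [(mu_short, 20.0), (mu_long, 80.0)]
    # 3. Defuzzifier: a weighted centroid average turns the fuzzy
    #    conclusion back into a crisp number of spares
    num = sum(mu * centre for mu, centre in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0

print(infer(0.0), infer(0.4), infer(1.0))
```

The crisp output moves smoothly between the rule centroids as the input's degrees of membership shift, which is exactly what a hard true/false rule system cannot do.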
Code :
% ===========================
% Filename : fuzzy_centre_3.m
% ===========================
echo on;
pause
a=readfis('centre_3.fis');
% Hit any key to display fuzzy sets for the linguistic variable "Mean delay".
pause
% Hit any key to display fuzzy sets for the linguistic variable "Number of servers".
pause
% Hit any key to display fuzzy sets for the linguistic variable "Repair utilisation factor".
pause
% Hit any key to display fuzzy sets for the linguistic variable "Number of spares".
pause
ruleedit(a);
pause
ruleview(a);
% CASE STUDY
% ======================================================================
% Suppose, a service centre is required to supply its customers with
% spare parts within 24 hours. The service centre employs 8 servers and
% the repair utilisation factor is 60%. The inventory capacities of the
% centre are limited by 100 spares. The values for the mean delay,
% number of servers and repair utilisation factor are 0.6, 0.8 and 0.6,
% respectively.
% ======================================================================
% Suppose, now a manager of the service centre wants to reduce the
% customer's average waiting time to 12 hours.
% ======================================================================
% Hit any key to see how this will affect the required number of spares.
pause
echo off
disp('End of fuzzy_centre_3.m')
Output: (screenshots of the fuzzy sets for each linguistic variable, the rule editor and the rule viewer; the figures are not reproduced here)
The Genetic Algorithm:
Genetic Algorithms (Genetic Algorithm)
The genetic algorithm (genetic algorithms) is a method of optimisation and search. It can be classified as one of the evolutionary algorithms (evolutionary algorithms), which are based on imitating the workings of nature from a Darwinian perspective.
The genetic algorithm is a search technique used to find exact or approximate solutions that achieve optimality. Genetic algorithms are classified as global search heuristics (Global search heuristics); they are also a particular class of evolutionary algorithms, known as evolutionary computation (evolutionary computation), that use techniques inspired by evolutionary biology (evolutionary biology) such as inheritance, mutation, selection and crossover (crossover).
Genetic algorithms are among the important techniques for searching for the best option among a set of available solutions for a given design. They adopt Darwin's principle of natural selection: this genetic processing passes the best traits on through successive generations of reproduction and reinforces them; these traits then have a greater chance of entering the reproduction process and producing fitter offspring, and by repeating the genetic cycle the quality of the offspring gradually improves.
Genetic algorithms are implemented as computer simulations in which chromosomes act as the individuals in the operations performed to find the best solutions. In general the solutions are represented in binary (binary), as strings of 0s and 1s, although other encodings can also be used.
The evolution process (evolution) usually starts by choosing a population of chromosomes (population) at random, and this happens in subsequent generations too. In each generation the fitness function (fitness function) is computed for each chromosome individually, the best chromosomes are selected on the basis of the best fitness values, and crossover (recombination) and mutation are then applied. The algorithm stops either when the maximum number of generations has been produced or when the best value of the fitness function has been achieved; if it stops because the maximum number of generations was reached, the optimal solution may not have been attained.
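The generation loop just described (random initial population, fitness evaluation, fitness-based selection, crossover, mutation, stop after a fixed number of generations) can be sketched as follows. This is a minimal Python illustration of the loop, not the document's MATLAB program; the population size, probabilities and 4-bit encoding are assumptions matching the "maximum of 15*x - x*x" example used in the code later in this chapter.

```python
import random

def fitness(x):
    return 15 * x - x * x          # objective function, maximal at x = 7 or x = 8

def decode(bits):
    return int("".join(map(str, bits)), 2)   # 4-bit chromosome -> integer 0..15

def ga(pop_size=6, ngenes=4, pc=0.7, pm=0.01, ngener=50, seed=1):
    rng = random.Random(seed)
    # random initial population of binary chromosomes
    pop = [[rng.randint(0, 1) for _ in range(ngenes)] for _ in range(pop_size)]
    for _ in range(ngener):
        # selection: parents chosen with probability proportional to fitness
        weights = [fitness(decode(c)) + 1 for c in pop]   # shift keeps weights positive
        parents = rng.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            if rng.random() < pc:                          # single-point crossover
                p = rng.randint(1, ngenes - 1)
                a, b = a[:p] + b[p:], b[:p] + a[p:]
            nxt += [a[:], b[:]]
        for c in nxt:                                      # mutation: flip genes
            for i in range(ngenes):
                if rng.random() < pm:
                    c[i] ^= 1
        pop = nxt                                          # next generation
    return decode(max(pop, key=lambda c: fitness(decode(c))))

best_x = ga()
print(best_x, fitness(best_x))
```

Because the loop stops after `ngener` generations regardless of the result, the returned solution is only guaranteed to be good, not optimal, which is exactly the caveat noted above.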
Genetic algorithms are found in applications in bioinformatics (bioinformatics), computer science, engineering, economics, chemistry, manufacturing (manufacturing), mathematics, physics and other fields.
The genetic-algorithm method generates new solutions from candidate solutions encoded in the form known as a "chromosome", or genotype. Chromosomes are combined or altered to produce the new individuals. The method is useful for finding the optimal solution to multi-dimensional problems in which the values of the different variables can be encoded in the form of a chromosome.
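One common concrete encoding, assumed here purely for illustration, maps a binary string to a value in a variable's range; for a multi-variable problem, one slice of the chromosome is decoded per variable:

```python
def decode(bits, xmin, xmax):
    """Map a binary string to a real value in [xmin, xmax]."""
    value = int("".join(map(str, bits)), 2)
    return xmin + value * (xmax - xmin) / (2 ** len(bits) - 1)

def decode_multi(bits, ranges, nbits):
    """Decode a concatenated chromosome: one nbits-long slice per variable."""
    return [decode(bits[k * nbits:(k + 1) * nbits], lo, hi)
            for k, (lo, hi) in enumerate(ranges)]

print(decode([0, 1, 1, 1], 0, 15))                                    # -> 7.0
print(decode_multi([0, 1, 1, 1, 1, 1, 1, 1], [(0, 15), (-1, 1)], 4))  # -> [7.0, 1.0]
```

With more bits per variable the grid of representable values becomes finer, so the chromosome length controls the precision of the search.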
To apply a genetic algorithm we must first find a suitable chromosome representation of the problem under study. The best-known representation is to use binary strings to encode, as chromosomes, the values of the variables that express a solution to the given problem. Once these chromosomes are produced, methods are needed to manipulate them; there are four operations: copying, crossover, mutation and inversion.
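The four chromosome operations listed above can each be written in a line or two. These bit-string versions are illustrative sketches; the function names and parameters are mine, not from any library:

```python
import random

def reproduce(chrom):
    """Copying: the chromosome passes to the next generation unchanged."""
    return chrom[:]

def crossover(a, b, point):
    """Single-point crossover: swap the tails of two chromosomes."""
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, pm, rng=random):
    """Mutation: flip each gene independently with probability pm."""
    return [g ^ 1 if rng.random() < pm else g for g in chrom]

def invert(chrom, i, j):
    """Inversion: reverse the order of the genes in the segment [i, j)."""
    return chrom[:i] + chrom[i:j][::-1] + chrom[j:]

print(crossover([1, 1, 1, 1], [0, 0, 0, 0], 2))   # -> ([1, 1, 0, 0], [0, 0, 1, 1])
print(invert([1, 0, 1, 1, 0], 1, 4))              # -> [1, 1, 1, 0, 0]
```

Copying, crossover and mutation have direct biological counterparts; inversion is the purely artificial operation mentioned below.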
The genetic algorithm, then, is built on an optimisation technique that imitates natural evolution: the candidate solutions are encoded as strings analogous to chromosome strings, and then some biological operations (copying, crossover, mutation), together with the artificial operation (inversion), are applied to produce the optimal solution.
The most important feature of the genetic algorithm is its adaptive nature, which means it needs very little knowledge of the equation being solved in order to solve it.
Genetic algorithms are thus a way of simulating what nature does in the reproduction of living organisms, and of using that method to solve complex problems so as to reach the best solution, or the closest possible solution to it. So we have a problem with a very large number of candidate solutions, most of them wrong and some of them correct, and there is always a best solution, which is usually difficult to reach.
The idea of genetic algorithms is to generate some solutions to the problem at random, then examine these solutions and compare them against criteria set by the designer of the algorithm; only the best solutions survive, while the less fit ones are discarded, following the biological rule of "survival of the fittest".
The next step is to mate, or mix, the remaining solutions (the fittest ones) to produce new solutions, in the same way as happens in living organisms when their genes are mixed so that the new organism carries traits that are a blend of its parents' traits.
The solutions produced by this mating are themselves subjected to examination and screening to determine how fit they are and how close they come to the optimal solution: if a new solution proves fit it is kept, otherwise it is discarded. Mating and selection continue in this way until the process either completes a set number of iterations (decided by the user of the system), or the resulting solutions, or one of them, reach a target fitness level or a sufficiently small error rate (also set by the user), or even the best solution itself.
Code :
function GA_1
disp('=============================================================')
disp('Genetic algorithms: the fitness function of a single variable')
disp('=============================================================')
disp('Reference: Negnevitsky, M., "Artificial Intelligence: A Guide to Intelligent')
disp('           Systems", Addison Wesley, Harlow, England, 2002.')
disp('           Sec. 7.3 Genetic algorithms')
disp('=============================================================')
disp('Problem: It is desired to find the maximum value of the function (15*x - x*x)')
disp('         where parameter "x" varies between 0 and 15. Assume that "x" takes')
disp('         only integer values.')
disp('=============================================================')
ObjFun='15*x - x.*x';
disp(' ')
disp('ObjFun = 15*x - x.*x')
disp(' ')
% Algorithm parameters. The definitions of nind, ngenes, Pc, Pm and xmin
% were lost in the text extraction; the values below are restored to match
% the fprintf formats and the 0..15 integer range of "x".
nind=6;      % Size of a chromosome population
ngenes=4;    % Number of genes in a chromosome
Pc=0.7;      % Crossover probability
Pm=0.001;    % Mutation probability
xmin=0;      % Possible minimum value of parameter "x"
xmax=15;     % Possible maximum value of parameter "x"
ngener=20;   % Number of generations
disp(' ')
fprintf(1,'nind=%.0f; Size of a chromosome population\n',nind);
fprintf(1,'ngenes=%.0f; Number of genes in a chromosome\n',ngenes);
fprintf(1,'Pc=%.1f; Crossover probability\n',Pc);
fprintf(1,'Pm=%.3f; Mutation probability\n',Pm);
fprintf(1,'xmin=%.0f; Possible minimum value of parameter "x"\n',xmin);
fprintf(1,'xmax=%.0f; Possible maximum value of parameter "x"\n',xmax);
fprintf(1,'ngener=%.0f; Number of generations\n',ngener);
disp(' ')
% Initial population: random binary chromosomes, decoded into integers
chrom=round(rand(nind,ngenes))
x=chrom*[2.^(ngenes-1:-1:0)]'
ObjV=evalObjFun(ObjFun,x);
best(1)=max(ObjV);
ave(1)=mean(ObjV);
disp(' ')
disp('Hit any key to run the genetic algorithm.')
pause
figure('name','Genetic algorithm run');
xplot=(xmin:0.1:xmax)';
for i=1:ngener,
    % Fitness evaluation (shifted to be non-negative for selection)
    fitness=ObjV;
    if min(ObjV)<0
        fitness=fitness-min(ObjV);
    end
    % Roulette-wheel selection (restored step: parents are chosen with
    % probability proportional to their fitness)
    numsel=nind;
    cumfit=cumsum(fitness)/sum(fitness);
    newchrom=zeros(numsel,ngenes);
    for j=1:numsel,
        newchrom(j,:)=chrom(find(cumfit>=rand,1),:);
    end
    % Crossover: swap the tails of consecutive pairs with probability Pc
    points=round(rand(floor(numsel/2),1).*(ngenes-2))+1;
    points=points.*(rand(floor(numsel/2),1)<Pc);
    for j=1:length(points),
        if points(j),
            newchrom(2*j-1:2*j,:)=[newchrom(2*j-1:2*j,1:points(j)),...
                flipud(newchrom(2*j-1:2*j,points(j)+1:ngenes))];
        end
    end
    % Mutation: flip randomly chosen genes with probability Pm
    mut=find(rand(numsel,ngenes)<Pm);
    newchrom(mut)=round(rand(length(mut),1));
    % Fitness calculation: decode the new chromosomes into [xmin, xmax]
    newx=newchrom*[2.^(ngenes-1:-1:0)]';
    newx=xmin+newx*(xmax-xmin)/(2^ngenes-1);
    newObjV=evalObjFun(ObjFun,newx);
    chrom=newchrom; x=newx; ObjV=newObjV;
    % Plot the objective function and the current population
    plot(xplot,evalObjFun(ObjFun,xplot));
    hold on;
    plot(x,ObjV,'r.','markersize',15)
    legend(['The objective function: ',ObjFun],'Current chromosome population');
    title(['Generation # ',num2str(i)]);
    xlabel('Parameter "x"');
    ylabel('Chromosome fitness');
    pause(0.2)
    hold off;
    best(1+i)=max(ObjV);
    ave(1+i)=mean(ObjV);
end
disp(' ')
disp('Hit any key to display the performance graph.')
pause
figure('name','Performance graph');
plot(0:ngener,best,0:ngener,ave);
legend('Best','Average');
title(['Pc = ',num2str(Pc),', Pm = ',num2str(Pm)]);
xlabel('Generations');
ylabel('Fitness')

function y=evalObjFun(ObjFun,x)
y=eval(ObjFun);
Output: (plots of the chromosome population over the generations and the final performance graph; the figures are not reproduced here)
End …