
JOURNAL OF COMPUTING, VOLUME 5, ISSUE 4, APRIL 2013, ISSN (Online) 2151-9617 https://sites.google.com/site/journalofcomputing WWW.JOURNALOFCOMPUTING.ORG

Applying a natural intelligence pattern in cognitive robots


N. Jafari, J. Jafari Amirbandi, A. Rahmani and M. Pedram
Abstract — The human brain has always been a mysterious subject to explore: much about it remains undiscovered, and it is studied from many angles by different branches of science. On the other hand, one of the biggest ambitions of the next generation of Artificial Intelligence (AI) is to build robots that can think like humans. To achieve this, AI engineers have drawn on theories of human intelligence proposed by well-known psychologists to improve intelligent systems, and the study of how the human mind controls such a complicated system offers them many benefits. In this article, cognitive robots equipped with a system built on the functioning of the human brain searched a virtual environment and tried to survive for as long as possible. The cognitive system of these robots was based on Sigmund Freud's psychoanalytic theory (id, ego, and super-ego). Finally, the survival times of cognitive robots and normal robots in similar environments were compared. The simulation results showed that the cognitive robots had a better chance of surviving.

Index Terms — Cognitive robotics, Artificial Intelligence, Natural intelligence

1 INTRODUCTION

To benefit from technology in our lives, we need to understand and learn all aspects of human life, specifically its needs and problems. Adapting technology to human needs requires the study of a wide range of branches of science. Cognitive science is a body of knowledge that aims to describe the cognitive abilities of living things and the mechanisms of their brain functions. To solve a problem and make a decision, a human must first understand his surroundings; then, using this information together with previously gained experience and skills, he can act properly in the situation. It is assumed that the structure of an automated system must be designed with thousands of sensors to be able to process new data and make a suitable decision [1]. Cognitive science, one of the scientific approaches of this generation and very useful for human needs, combines epistemology, cognitive neuroscience, cognitive psychology, and artificial intelligence. Robotics, in turn, as a technological dependant of AI, is a field that brings together mechanics, computer science, and electronic control. Robotics and cognitive science together form a new branch of science called cognitive robotics. Cognitive robotics uses living organisms and

Seyedeh Negar Jafari is an MS student of Artificial Intelligence at Islamic Azad University, Science and Research Branch, Tehran, Iran. • Jafar Jafari Amirbandi is an MS student of Artificial Intelligence at Islamic Azad University, Science and Research Branch, Tehran, Iran. • Amir Masoud Rahmani is a lecturer at the Department of Computer Engineering, Islamic Azad University, Science and Research Branch, Tehran, Iran. • Mir Mohsen Pedram is a lecturer at the Department of Computer Engineering, Tarbiat Moallem University, Tehran, Iran.

the functioning of the human brain in its computational algorithms. One of the main applications of cognitive science is in AI (building human-like computers). From this point of view, the human mind is a kind of computer that sends information received from its sensors (e.g. vision) to its processing centre (the mind), and as the result of this processing we talk, walk, and so on. Behavioural patterns inspired by human intelligence are a great help to engineers in improving such systems; in particular, the way the human mind controls complicated matters suggests new approaches to scientists. Articles [2], [3] show that a wide range of sciences can help engineers understand how the human brain works, such as psychology, psychotherapy, cognitive science, and neuroscience. Many AI researchers are investigating patterns of natural intelligence in order to apply them to automated robots [5]. In [4], a memory-based theory is used to describe brain functions and to argue that memory is the base and root of any kind of intelligence, natural or artificial. The authors also believe that most of the unsolved problems in computing, software engineering, informatics, and AI arise from not understanding the mechanisms of natural intelligence and the cognitive functions of the brain. In [6], theoretical improvements in the mechanisms of AI and Natural Intelligence (NI) are presented; the classification of intelligence and the role of information in the development of brain functions are also studied, and a general intelligence model is expanded to help describe the developed mechanisms of natural intelligence.

2 TOPIC PLAN
The simulation of the brain patterns and behaviour models of humans and other living organisms in a virtual environment has always been a topic of interest for scientists, so much so that some of the results can be seen in the modernisation of our industries and their effects on our lives. To follow the human cognitive structure, our research was based on a psychological framework. Pursuing this, we chose a suitable model of NI among those suggested, and designed our cognitive robots in a virtual environment accordingly. Our NI model comes from the cognitive field of psychology, in which the different aspects of a person's personality, perception, excitement, intention, and physical reactions, and his adaptation to the environment, are studied. Sigmund Freud, the founder of psychoanalysis, held that personality consists of three elements which together control human behaviour; his theory is discussed in later sections of this article. The plan of this article was to simulate the behaviour of cognitive robots, with an NI pattern, in a virtual environment. The designed cognitive robots searched their surroundings and, in order to survive longer, needed to use the energy supplies in their virtual environment; otherwise their energy level dropped too far and they were eliminated. The cognitive robots were equipped with Learning Automata (LA), which helped them make decisions by recognizing their situation in their surroundings in each episode. Normal robots, on the other hand, had no LA available, so they moved randomly while trying to survive. The virtual environment, the LAs, and the decision-making algorithms (based on Freud's theory) are discussed in detail below.

2.1 THE DESIGN OF VIRTUAL ECOSYSTEM

The simulated surroundings were divided into two zones, a Rocky zone (the dangerous zone) and a Flat zone (the safe zone). The two zones were exactly the same size and were separated by a hazardous border. Since the Rocky zone was dangerous, the energy used in it was twice that used in the safe zone. The other difference between the two zones was the number of energy packs spread in them, which was higher in the dangerous zone. The robots are created at random positions in the Safe zone, all with the same (maximum) level of energy. They must search the zone and try to survive in the environment for as long as possible. With each movement, a robot loses one energy unit in the Safe zone and two energy units in the Rocky (dangerous) zone. When a robot picks up an energy pack, its energy level returns to its maximum; if its energy level drops to zero, it is eliminated. Two groups of robots (cognitive and normal) search the virtual environment under identical conditions, and at the end the average survival times of the two groups are compared. The first group consists of normal robots with no decision-making ability; they move randomly in the virtual environment and never enter the Rocky zone. This means that they consistently use less energy per episode, but they also have less chance of finding energy packs, since more of the packs lie in the dangerous Rocky zone. The other group consists of cognitive robots with a decision-making mechanism (based on Freud's theory) that act according to their surroundings. Unlike the first group, these robots use their decision-making ability to learn about their surroundings and which action is best to apply in each situation, and so can survive longer.

Fig. 1. Virtual ecosystem

2.2 LEARNING AUTOMATA (LA)

Learning Automata are abstract models. They act randomly in the environment and are able to update their actions based on the information they receive from the environment; this feature helps them improve their performance. An LA can perform a limited number of actions. Each chosen action is evaluated by the environment and answered with a reward (for a correct action) or a fine (for a wrong action). The LA uses this response to choose its next action [7], [8]. In simpler words, a learning automaton is an abstract model that randomly chooses one of its limited set of actions and applies it to the environment; the environment evaluates this action and sends the result back to the LA as a feedback signal; with this data, the LA updates its information and chooses its next action. The following diagram shows this relationship between the LA and the environment.

Fig. 2. Relationship between LA and environment

There are two types of LA: fixed structure and variable structure. We have used the variable type in our research. For more detailed information about these two types, please refer to [9], [10], [11].
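A variable-structure LA keeps one probability per action and samples its next action from that distribution. A minimal Python sketch of this idea (the class and action names are illustrative, not from the paper):

```python
import random

class LearningAutomaton:
    """Variable-structure LA: holds a probability for each action
    and samples the next action from that distribution."""

    def __init__(self, actions):
        self.actions = list(actions)
        # equal probabilities before any feedback from the environment
        self.probs = [1.0 / len(self.actions)] * len(self.actions)

    def choose(self):
        # roulette-wheel selection: a higher probability means a
        # higher chance of being chosen next
        return random.choices(self.actions, weights=self.probs, k=1)[0]

la = LearningAutomaton(["up", "down", "left", "right", "fix"])
print(la.choose())  # one of the five actions
```

The probability vector is later adjusted by the environment's reward/fine feedback, which is what distinguishes the variable-structure type from the fixed-structure one.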

2.3 Robots' Decision Making, Based on Freud's Theory


A Natural Intelligence (NI) model is a pattern of the natural behaviours of humans or other living organisms, and the consequences of cognitive behaviour follow this model as well. As mentioned before, Freud's psychoanalysis theory was used as the outline for our designed cognitive robots' actions. According to this theory, the structure of human personality is built from three elements: the id, the ego, and the super-ego. These three aspects of personality, interacting together, create complicated human behaviours (refer to fig. 3).

Fig. 3. Information flow between world modules [12]

The id is present from birth; the ego is responsible for dealing with reality; and the super-ego holds all the inner moral standards and ideals that we acquire from our parents and society. The id is driven by the pleasure principle, which strives for immediate gratification of all desires, wants, and needs; if these needs are not satisfied immediately, the result is a state of anxiety or tension. The id tries to resolve the tension created by the pleasure principle through the primary process, which means learning to find a suitable method of satisfying the needs. Following this description, the cognitive robots start by moving through their surroundings; as their energy level drops with each movement, and as they are fined by the environment from time to time, they learn to move in such a way that, by using the available energy packs when necessary, they can survive longer. In this article, a robot's need is defined as "wanting to use the energy packs", and if this need is not satisfied the robot enters its tense state. The super-ego is the civilising aspect of personality; it holds all the ideals and moral standards gained from parents and society, and provides guidelines for making judgements (our sense of right and wrong). Based on this description, the robots, through their consecutive episodes and by means of the Learning Automata, learn that their energy level should not drop drastically, otherwise they will be eliminated. However, to follow the moral standards, they should not use an energy pack as soon as they receive one; they should keep it for when they are in the tense/excited state (it is not ideal for robots to use energy packs in their normal/steady state). Ideally, robots should also avoid risky movements while in their normal state.
According to the description of the super-ego in Freud's theory, the robots can also benefit from other robots' experiences. Freud further explained that the ego results from personal experience and is the executive as well as the decision-making centre. The ego strives to satisfy the id's desires while ensuring that the id's impulses are expressed in a manner acceptable in the real world (the super-ego); it plays the role of a mediator and makes the final decision. Following this pattern, our designed robots learn from their experiences of their surroundings to choose suitable actions; this is done by the LAs. Thus, based on the available LAs and the probability ratios of their actions, the robots decide which action to take next. A cognitive robot, over its survival period (from the highest energy level down to zero), experiences two conditions, in line with Freud's theory, which we call the Normal and the Excited/Tensed conditions. When a robot's energy level drops below a certain amount, its normal condition changes to the tensed condition; this is marked by a cognitive index. In the tense condition a robot takes more risks, and is able to make better decisions in critical situations.

3 PROPOSED SOLUTION
In this article, the robots start searching and evaluating their surroundings synchronously, and in each episode they can perform one of 5 actions. They may choose to stay fixed in any situation where they think that gives a better result, but they still lose one or two units of energy depending on which zone they are in. There is a noticeable difference between the functioning of a cognitive robot and a normal robot: a normal robot will never risk its energy level in order to survive longer, so it never enters the unsafe zone. A cognitive robot, on the other hand, being equipped with an LA (which follows the NI pattern and Freud's theory), risks its life to reduce the tension arising in its system, and enters the Rocky (dangerous) zone. This means that, instead of choosing the ideals of the super-ego, the decision-making element (the ego) leans towards whatever the id element is asking for. With the help of the LA, the cognitive robots can update the probabilities of their actions (up, down, right, left, fix), gradually improving their functioning and increasing their chance of surviving in the virtual environment. Based on Freud's theory, the robots are rewarded or fined depending on the chosen action; the details are given later. When the robots are created, all 5 actions have the same probability, because the robots have not yet experienced any reaction (reward or fine) from the environment. As the following table shows, the probabilities of the 5 actions sum to 1 at the beginning of the search; see (1).

Table 1. Probability of each action at the beginning

Action   Probability
Up       0.2
Down     0.2
Right    0.2
Left     0.2
Fix      0.2


Sum of Probability (action) = P(up) + P(right) + P(down) + P(left) + P(fix) = 1  (1)

A cognitive robot chooses an action randomly; an action with a higher probability has a higher chance of being chosen next, but at the beginning of the search it picks an action at random and investigates its surroundings. As a result, the robot receives a reward or a fine for each chosen action, and this information, together with the new probabilities (whose sum remains equal to 1 at all times), is recorded for later use. This continues in each iteration until all the robots have been eliminated from the virtual environment.

3.1 The Algorithm for Updating the Action Probabilities in a Learning Automaton
- If an action results in a reward (β = 0), its probability ratio in that LA is increased; in other words, the ratios of the remaining actions are decreased.
- If the result is a fine (β = 1), the probability ratio of that action in that LA is decreased, and vice versa for the remaining actions.

Please refer to (2), the linear reward-penalty update of [7], applied to the chosen action i:

Reward (β = 0):  P_i(n+1) = P_i(n) + a[1 − P_i(n)],   P_j(n+1) = (1 − a)P_j(n)  for all j ≠ i
Fine (β = 1):    P_i(n+1) = (1 − b)P_i(n),   P_j(n+1) = b/(r − 1) + (1 − b)P_j(n)  for all j ≠ i   (2)

In this algorithm, the variables are as follows:
P(n): the probability of an action at step n
P(n+1): the probability of an action at step n + 1 (i.e. after updating)
β: the response of the environment to the robot's action* (β = 0 means reward, β = 1 means fine)
a: reward index
b: fine index
r: number of actions defined for a robot
* The environment rewards or fines based on rules which follow Freud's theory of psychoanalysis.
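The update in (2) can be sketched in Python, assuming the standard linear reward-penalty scheme described in [7]; the defaults a = 0.8 and b = 0.7 mirror the reward and fine indices used in the simulation of Section 3.5:

```python
def update(probs, chosen, beta, a=0.8, b=0.7):
    """Linear reward-penalty update for a variable-structure LA.
    beta = 0: reward -> raise the chosen action's probability.
    beta = 1: fine   -> lower it and redistribute to the others."""
    r = len(probs)
    new = probs[:]
    for j in range(r):
        if beta == 0:  # reward
            if j == chosen:
                new[j] = probs[j] + a * (1 - probs[j])
            else:
                new[j] = (1 - a) * probs[j]
        else:          # fine
            if j == chosen:
                new[j] = (1 - b) * probs[j]
            else:
                new[j] = b / (r - 1) + (1 - b) * probs[j]
    return new

p = update([0.2] * 5, chosen=0, beta=0)
print(round(sum(p), 6))  # -> 1.0: the probabilities still sum to 1
```

Both branches preserve the normalization required by (1), so the probability vector remains a valid distribution after every episode.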

3.2 How Cognitive Robots Acquire Information from Their Environment

Each robot, depending on its vision boundary (radius), was equipped with sensors that enabled it to recognize the contents of its neighbouring surroundings. Thus each robot, at any point in the virtual environment, could recognise:
- The zone it was in (Safe/Rocky)
- Its state based on its energy level (normal/excited)
- The zones its neighbours were in (Safe/Rocky)
- The contents of its neighbourhood (a robot, an energy pack, a barrier, or nothing)
Together, these features defined a Learning Automaton for a robot. Robots built many LAs over their survival time and saved them in their memories, updating their LAs in each episode. Each LA had 5 available actions (up, down, right, left, and fix), and each action was allocated a probability ratio. When a robot faced a situation it had experienced before, it referred to its list of LAs, chose the matching LA, and followed that LA's actions according to their probabilities. In this way the robots made suitable movements and gradually learned how to increase their chance of surviving. In each episode, whatever action the robot chose, it experienced a situation in its surroundings, and so one LA was produced. If the robot had not experienced that situation before, a new LA was built and added to the list of LAs in the robot's memory; the environment's response to the experience (based on the rules introduced) then drove the updating of the LA's action probabilities. But if the robot already had that specific LA in its list, it was not added again; instead, its accumulated experience was used in deciding which action to take, and in this way the robot increased its experience (its "knowledge of decision making") and, as a result, its survival time. Similarly, Freud believed that the feedback a person receives from society affects his later actions and helps him adapt better to society; he also believed that parents' rewards or punishments have the same effect on a child's behaviour. For example, a child who has been praised by his parents for good behaviour has recorded it in his super-ego as an ideal reaction, so in a similar situation it is much easier for him to decide what to do based on his experience (ego). Exactly the same holds for our designed cognitive robots in the virtual environment.

3.3 How LAs Are Built in Robots

There are many different ways to build LAs in robots. In this article, three methods were discussed and tested. All three methods used the same zones and the same actions. Based on their functioning, they were called: state based, model based, and state based with vision radius.
Method 1: State based. The LA was designed based on the robot's state and situation and the four surrounding neighbours of the robot; the robot sensed its surroundings with its sensors.
Method 2: Model based. In this method, in addition to the state of the robot and its neighbourhood information, the coordinate position of the robot in each episode was recorded as one of the fields of the LA. So even when the robot did not know much about its surroundings, it had its coordinates and gradually collected further information about the environment, which resulted in better decision making and actions. One defect of this method was its slowness, due to the massive number of LAs.


Method 3: State based with vision radius. In this method the proposed algorithm was improved by increasing the vision radius of the robots. Here, as in the first method, the state of the robot and its neighbouring information (but not the coordinates) were available to the robots, with the difference that the four neighbours were given points that helped the robots choose an action. This method gave the best results in the end. The pointing system worked as follows: the highest point was given to the energy supply packs, and the rest were given points according to how far they were from an energy pack; that is, the points decreased with distance from the pack. The points were calculated by the following three equations (3, 4, 5):

Max Point = 2^(r+1)  (3)
Distance = |NodeX − ResourceX| + |NodeY − ResourceY|  (4)
Node Point = Max Point / 2^Distance  (5)

in which:
r: vision radius
Max Point: highest point
Node Point: the point given to each neighbouring side
Distance: the distance of the side from the energy pack
(NodeX, NodeY): the coordinates of each side that earned a point
(ResourceX, ResourceY): the coordinates of an energy pack

Notably, each side's point was the sum of all the points it earned from the different supplies, and no points were given to sides beyond the vision radius; in this way the robot could find an energy pack faster. In this method, a Learning Automaton with the above features was built in each episode. Its advantage over the other two methods was that the robots had a wider view of their surroundings, which helped them in decision making (ego) and gave better results. With a vision radius of 1, 2, or 3, the robots could recognise the contents of their neighbouring units with the help of their sensors (fig. 4).
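Equations (3), (4), and (5) can be combined into a small scoring routine. A sketch in Python; cutting packs off exactly at the vision radius is an assumption based on the statement that sides beyond the radius receive no points:

```python
def node_points(nodes, resources, r):
    """Score candidate neighbouring nodes: each visible energy pack
    contributes Max Point halved once per unit of Manhattan distance
    (equations 3-5); a node's score is the sum over all visible packs."""
    max_point = 2 ** (r + 1)                        # equation (3)
    points = {}
    for (nx, ny) in nodes:
        total = 0.0
        for (rx, ry) in resources:
            distance = abs(nx - rx) + abs(ny - ry)  # equation (4)
            if distance <= r:                       # assumed cutoff: beyond the vision radius, no points
                total += max_point / 2 ** distance  # equation (5)
        points[(nx, ny)] = total
    return points

# a robot with vision radius 2 and one energy pack at (1, 1):
print(node_points([(0, 1), (1, 0)], [(1, 1)], r=2))  # both sides score 8 / 2**1 = 4.0
```

The robot would then favour the neighbouring side with the highest total, which biases its movement towards the nearest energy packs.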

Fig. 4. Vision radius of 1, 2, and 3

3.4 The Rules for Giving Rewards or Fines to a Robot by Its Surroundings

Rule 1: If a wrong action was chosen, the robot was fined, e.g. it moved up where there was a barrier. (The super-ego, the social aspect of behaviour, was built up, and the ego recorded the experience gained from the surroundings.)
Rule 2: If an energy supply was used while the robot was in its normal state, a fine was applied. (The id was trying to fulfil its pleasure where it contradicted the ideals of the super-ego, so the ego took control over the id and balance was achieved.)
Rule 3: If the robot chose a movement that led to an energy pack, it was rewarded. (The robot learned that being next to an energy pack could satisfy its needs faster at the time of tension, which meant the ego was collecting information from the environment.)
Rule 4: If a robot in its normal state went from the Safe zone to the Rocky zone, it was fined. (Taking unnecessary risks led to a fine; the ego was learning about the motives.)
Rule 5: If a robot in its normal state chose to move to the Safe zone, it earned a reward. (In the Safe zone less energy is lost, so the ego learned how to adapt or compromise.)
Rule 6: If a robot was in its tense/excited state and used its energy supply pack, it was rewarded. (The ego reduced the tension of the id by using the available resources.)
Rule 7: If a robot was in the tense state and, despite having an energy supply available, did not use it, it was fined. (The ego neglected the id's needs and, as a result, recorded the information about this experience.)

3.5 The Results of the Simulation

To study the suggested algorithms, five robots with the maximum level of energy (200 units) were put in an environment with 25 energy supply packs, with a ratio of 80% (energy packs in the Rocky zone) to 20% (energy packs in the Safe zone), a reward index of 0.8, a fine index of 0.7, and an excitement index of 0.5. The average survival time of the robots was calculated over 4 methods and 60 iterations. In the fourth method, a vision radius of 3 was provided for the robots, which meant a better view of the environment and of the energy supply packs, and also better decision making. The results of these 60 continuous iterations and the robots' lifetimes in the environment are shown in fig. 5.

Fig. 5. Total robot lifetime in the test
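The seven rules of Section 3.4 determine the feedback signal β fed back to the LA. A minimal sketch; the event names (`hit_barrier`, `used_pack`, and so on) and the default response for moves the rules do not cover are illustrative assumptions, not definitions from the paper:

```python
REWARD, FINE = 0, 1

def respond(robot, event):
    """Map a robot's state and the outcome of its action to the
    environment's response beta, following rules 1-7 of Section 3.4.
    Field and event names are illustrative."""
    if event == "hit_barrier":
        return FINE                      # rule 1: wrong move
    if event == "used_pack":
        # rules 2 and 6: using a pack is rewarded only under tension
        return REWARD if robot["state"] == "tense" else FINE
    if event == "kept_pack" and robot["state"] == "tense":
        return FINE                      # rule 7: ignoring a need under tension
    if event == "reached_pack":
        return REWARD                    # rule 3: moving next to energy
    if robot["state"] == "normal" and event == "entered_rocky":
        return FINE                      # rule 4: unnecessary risk
    if robot["state"] == "normal" and event == "entered_safe":
        return REWARD                    # rule 5: conserving energy
    return REWARD                        # assumption: uncovered moves are not fined
```

Each returned β would then drive the probability update of Section 3.1 for the LA matching the robot's current situation.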

As shown in the graph, after 60 iterations the cognitive robots had a better chance of surviving than the normal robots, which means that a robot with the privilege of a decision-making ability (an LA) acts more suitably and acceptably towards its surroundings. Among the cognitive robots, those using the third method had the best results, because their wider vision radius led to better decisions and better actions. In this test, cognitive robots using method one (state based) showed less efficiency than those using method two (model based) when more robots were available in the environment, since more information was then recorded and more experience gained to help the decision-making process.

CONCLUSION

This article applied Freud's psychoanalysis theory (id, ego, super-ego) to suggest a suitable behaviour model for cognitive robots. The theory was simulated in such a way that robots could decide which action to choose (the one that looked better and closer to reality) based on their experiences of the environment. The cognitive robots used these algorithms to search their surroundings and gather information so that they could survive longer. To acquire this knowledge and make decisions, the cognitive robots used LAs and the responses (fine or reward) of their environment, based on Freud's theory. For the simulation, three methods were suggested, and the robots' decision-making power was studied and compared. The results showed that the cognitive robots (which were given the faculties of sensing and decision making, to be closer to humans) adapted more easily to the environment and, after a few iterations, learned (by making better decisions at critical moments) how to survive longer than the normal robots. Although the cognitive robots' risk-taking approach to critical situations sometimes caused trouble for them and did not end well, overall, compared with the normal robots, they produced much better outcomes according to the observations made.

REFERENCES
[1] T. Deutsch, H. Zeilinger, R. Lang, "Simulation Results for the ARS-PA Model", IEEE, pp. 995-1000, 2011.
[2] R. Lang, S. Kohlhauser, G. Zucker, T. Deutsch, "Integration of internal performance measures into the decision making process of autonomous agents", 3rd International Conference on Human System Interactions (HSI), Rzeszow, pp. 715-721, 2010.
[3] Y. Wang, W. Kinsner, and D. Zhang, "Contemporary Cybernetics and Its Facets of Cognitive Informatics and Computational Intelligence", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 39, No. 4, pp. 823-833, 2009.
[4] Y. Wang, "Cognitive Informatics Models of the Brain", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, pp. 203-207, 2006.
[5] S.I. Ahson, A. Buller, "Toward Machines that Can Daydream", IEEE Conference on Human System Interactions, Krakow, Poland, pp. 609-614, May 25-27, 2008.
[6] Y. Wang, "Cognitive Informatics: foundations of nature and machine intelligence", Proc. 6th IEEE International Conference on Cognitive Informatics (ICCI'07), 2007.
[7] K.S. Narendra and M.A.L. Thathachar, "Learning Automata: An Introduction", Prentice-Hall Inc., 1989.
[8] P. Mars, J.R. Chen, and R. Nambir, "Learning Algorithms: Theory and Applications in Signal Processing, Control and Communications", CRC Press, Inc., pp. 5-24, 1996.
[9] S. Lakshmivarahan, "Learning Algorithms: Theory and Applications", New York, Springer-Verlag, 1981.
[10] M.R. Meybodi and S. Lakshmivarahan, "Optimality of a Generalized Class of Learning Algorithms", Information Science, pp. 1-20, 1982.
[11] M.R. Meybodi and S. Lakshmivarahan, "On a Class of Learning Algorithms which have a Symmetric Behavior under Success and Failure", Lecture Notes in Statistics, Springer-Verlag, pp. 145-155, 1984.
[12] H. Zeilinger, T. Deutsch, B. Muller, R. Lang, "Bionic Inspired Decision Making Unit Model for Autonomous Agents", IEEE, Vienna University of Technology / Institute of Computer Technology, Vienna, Austria, pp. 259-264, 2008.