1 Introduction
The success of autonomous software platforms as a ubiquitous enabling capability, delivering significant value across a range of industries, depends on the ability of autonomous technologies to reach a high level of cognition that mimics the human mind. This implies the design and implementation of computational routines able to combine cognitive abilities in an integrated manner, which is referred to as the creation of general intelligent systems [1, 2]. The first challenge is presented by the definition of “intelligence” within an autonomous system. In order to design a system, the definition of what the system is needs to be clear and associated with mathematical and physical representations, and the system needs to be adaptable to its environment. Therefore, the author suggests the following design definition for an autonomous system: intelligence in autonomous platforms can be defined as the ability of a system to adapt to its environment and survive. Artificial Intelligence (AI) can be considered a technology field that aims to embed self-modulated responses into machines, enabling them to be self-driven towards a primary goal: the survival of the system itself. AI can be thought of as the combination of various technologies that provide machines with a sense of survival, hence what we described as adaptation capabilities. Adaptation requires the machines to:
Cognition implies the ability to interpret reality and to understand how things could be in a future state, in order to support the decision-making process. It also highlights the need to “remember” past events to support forecasting the outcomes of future events. Therefore, as reported by Vernon et al. [3], cognition “allows the system to act effectively, to adapt, and to improve”.
568 I. Panella et al.
From the analysis in Table 1 it is possible to observe that, whilst significant achievements have been accomplished over time by the various cognitive architecture developments, there are still open research questions and challenges related to:
Table 1. (continued)
to dynamically and efficiently store and retrieve knowledge based on the experiences or events encountered by the system within the environment. Knowledge size refers to the dimension of the knowledge base available to the agents. Knowledge typology refers to theories on how humans organize, reason about, and retrieve conceptual information. In the past, it was believed that concept representation in the human brain was homogeneous and that concepts were categorized according to a single view: classical, prototype, exemplar, or theory-theoretical. However, it is now believed that humans may use different representations in different instances to categorize concepts, which led to the Heterogeneous Hypothesis about the nature of concepts. The heterogeneous hypothesis claims that different types of representation may exist, and perhaps co-exist, within the human brain. All such representations constitute different bodies of knowledge and contain different types of information associated with the same type of entity. Moreover, the heterogeneous hypothesis claims that each body of conceptual knowledge is distinguished by the specific processes in which such representations are involved, in tasks such as recognition, learning, categorization, and planning. The heterogeneous hypothesis, which assumes the availability of different types of knowledge encoded in a conceptual structure, is not implemented in any CA [20].
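To make the heterogeneous hypothesis concrete, the following sketch stores one entity under several co-existing representations, each serving a different cognitive process. All names, feature sets, and the matching logic are illustrative assumptions, not an implementation from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch: one entity ("bird") held under prototype, exemplar,
# and theory-like representations at the same time, each used by a
# different process, as the heterogeneous hypothesis suggests.

@dataclass
class ConceptEntry:
    prototype: dict   # typical feature values, used for fast categorization
    exemplars: list   # stored instances, used for recognition
    theory: str       # causal/explanatory knowledge, used for reasoning

store = {
    "bird": ConceptEntry(
        prototype={"can_fly": True, "has_feathers": True},
        exemplars=[{"name": "robin", "can_fly": True},
                   {"name": "penguin", "can_fly": False}],
        theory="feathered vertebrates that lay eggs",
    )
}

def categorize(features: dict, store: dict) -> str:
    """Prototype-based categorization: pick the concept whose prototype
    shares the most feature values with the observed features."""
    def overlap(proto: dict) -> int:
        return sum(1 for k, v in features.items() if proto.get(k) == v)
    return max(store, key=lambda name: overlap(store[name].prototype))

def recognize(instance_name: str, store: dict) -> bool:
    """Exemplar-based recognition: has this exact instance been seen before?"""
    return any(ex.get("name") == instance_name
               for entry in store.values() for ex in entry.exemplars)

print(categorize({"has_feathers": True}, store))  # -> bird
print(recognize("penguin", store))                # -> True
```

Note how the same entity answers a categorization query through its prototype but a recognition query through its exemplars; it is this coupling of each body of knowledge to a specific process that the hypothesis describes.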
Table 2. Newell’s functional criteria for human cognitive architectures, reported in Table 1 of [14], with associated AI technologies and metrics for CA evaluation generated by the author.
2. The modification of the weights in the network can be represented through the implementation principles of a spiking neural network (SNN), enabling bio-inspired learning through weight modification based on the temporal and performance information provided by each modified activation function [27, 32–41]. By computing the weights through the temporal contribution of each activation function, we can overcome the difficulty of training SNNs, which are implemented through sums of Dirac delta functions that have no derivatives to support a backpropagation algorithm for training the network.
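The timing-based weight update described in point 2 can be sketched with a standard pair-based spike-timing-dependent plasticity (STDP) rule, which sidesteps backpropagation entirely: the weight change depends only on the relative timing of pre- and post-synaptic spikes. The constants and clipping range below are assumptions for illustration, not values from the paper.

```python
import math

# Pair-based STDP sketch: because spike trains are sums of Dirac deltas
# (no usable derivative), the weight is updated from spike *timing* rather
# than from a backpropagated gradient. All constants are assumed.

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant (ms)

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change from one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post: strengthen (causal pairing)
        return A_PLUS * math.exp(-dt / TAU)
    else:        # pre fired at or after post: weaken (anti-causal pairing)
        return -A_MINUS * math.exp(dt / TAU)

def update_weight(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate STDP updates over all spike pairs, clipped to [w_min, w_max]."""
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_delta_w(tp, tq)
    return min(max(w, w_min), w_max)

w = update_weight(0.5, pre_spikes=[10.0, 30.0], post_spikes=[12.0, 29.0])
```

Causal pairings (pre before post) increase the weight and anti-causal pairings decrease it, so the temporal contribution of each spike drives learning without any derivative being taken.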
3. The activation function in the neural network is now considered a multi-agent software function, whereby computations such as information extraction, learning (for instance through reinforcement learning (RL)), and decision making are performed. By doing so, the software cognitive functionalities do not need to be executed sequentially but can be processed in parallel, and the propagation function will output a multi-dimensional array to determine which functions need to be enabled.
A Deep Learning Cognitive Architecture 579
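The gating idea in point 3 can be sketched as a propagation function that emits an enable mask over a set of agent functions, which then run concurrently instead of in sequence. The agent names and gating logic below are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: each node's activation is a small multi-agent function, and the
# propagation step emits an enable mask (the paper's multi-dimensional
# array) selecting which cognitive agents run, in parallel.

def extract_information(x):  return {"features": x}
def reinforcement_learn(x):  return {"policy_update": x * 0.1}
def decide(x):               return {"action": "go" if x > 0 else "wait"}

AGENTS = [extract_information, reinforcement_learn, decide]

def propagate(inputs):
    """Toy propagation function: returns one enable flag per agent.
    The gating conditions here are assumed for illustration."""
    return [inputs > 0, True, inputs != 0]

def run_enabled(inputs):
    mask = propagate(inputs)
    enabled = [agent for agent, on in zip(AGENTS, mask) if on]
    with ThreadPoolExecutor() as pool:   # enabled agents run concurrently
        return list(pool.map(lambda agent: agent(inputs), enabled))

print(run_enabled(2.0))
```

With input 2.0 all three agents are enabled and execute in parallel; with input 0 only the learning agent runs, showing how the mask contracts the active computation to what the current input requires.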
4. The biases in the neural network are now considered inputs from knowledge databases, goals, human input, and the world model created within the system.
5. As reported in [46], subjective estimation of the environment makes it possible to create meaning within the system. However, subjective biases cannot be constants; they need to evolve as a function of the increased knowledge of the system. Therefore, as the databases are updated, the meaning of the semantic knowledge of the system will change, and the biases will support the improvement of the overall inference of the system.
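Points 4 and 5 together suggest a bias term that is computed from evolving knowledge sources rather than held constant. The sketch below illustrates one way this could look; the class name, knowledge keys, and weighting scheme are all assumptions.

```python
# Sketch: the node's "bias" is recomputed from knowledge-database entries
# and a goal contribution, so it evolves as the databases are updated.

class EvolvingBias:
    def __init__(self):
        self.knowledge = {}    # knowledge entry -> relevance score
        self.goal_weight = 0.0

    def update_knowledge(self, key, score):
        """As the database changes, the bias the node sees changes with it."""
        self.knowledge[key] = score

    def value(self):
        """Bias = goal contribution + mean relevance of current knowledge."""
        if not self.knowledge:
            return self.goal_weight
        return self.goal_weight + sum(self.knowledge.values()) / len(self.knowledge)

b = EvolvingBias()
b.goal_weight = 0.2
b.update_knowledge("obstacle_ahead", 0.6)
print(round(b.value(), 6))   # -> 0.8
b.update_knowledge("obstacle_ahead", 0.1)  # revised estimate: bias evolves
print(round(b.value(), 6))   # -> 0.3
```

The same node thus produces a different inference after the database update, which is the behaviour the text asks for: subjective bias as a function of accumulated knowledge rather than a fixed parameter.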
6. Learning, represented by the adaptation of the network to better handle the task at hand within the given environment, is no longer solely represented by the adaptation of the weights within the system, but also by the expansion and contraction of the network: through the inhibition of specific agents within the activation function when they are not relevant for the task, for instance, or through the addition of new nodes to include additional information or inputs. This enables the implementation of plasticity as an embedded property of the system, allowing it to learn the neural network structure and enabling adaptive structures. In Russell and Norvig [30], one of the major problems in modelling a neural network is identified as overfitting, which occurs when there are too many parameters in the model. When dealing with deep neural networks, one of the challenges in modelling the problem is the number of hidden layers needed to support its solution, and their sizes. A potential solution to plasticity is provided by Russell and Norvig ([30], p. 748) with the introduction of the optimal brain damage algorithm, which starts with a fully connected neural network and then removes connections from it, and with the tiling algorithm, which, on the other hand, starts with a single-node network (perceptron) that tries to produce the correct output for as many training examples as possible, and adds subsequent nodes to handle the examples that the single perceptron could not.
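The two structural-plasticity directions in point 6, pruning connections and growing nodes, can be sketched as follows. Note that this is a crude illustration: real optimal brain damage ranks connections by a second-derivative saliency measure, whereas here weight magnitude is used as a stand-in, and the growth step only gestures at the tiling idea.

```python
import random

# Sketch of structural plasticity: contraction by pruning low-saliency
# connections (magnitude used as a stand-in for OBD saliency) and
# expansion by wiring a new node to the cases the network failed on
# (a loose nod to the tiling algorithm).

def prune(weights: dict, fraction: float) -> dict:
    """Remove the given fraction of connections with the smallest |weight|."""
    n_keep = len(weights) - int(len(weights) * fraction)
    ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return dict(ranked[:n_keep])

def grow(weights: dict, new_node: str, targets: list) -> dict:
    """Add a node wired to the targets the current network could not
    handle, with small random initial weights."""
    for t in targets:
        weights[(new_node, t)] = random.uniform(-0.1, 0.1)
    return weights

w = {("a", "b"): 0.9, ("a", "c"): 0.01, ("b", "c"): -0.5, ("b", "d"): 0.002}
w = prune(w, fraction=0.5)            # drops the two smallest-magnitude links
w = grow(w, "h1", targets=["c", "d"]) # expands the structure with node "h1"
```

Alternating such contraction and expansion steps is one way the network structure itself, not just the weights, becomes the object of learning.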
The deep neural network is a recurrent neural network, with the feedback loops supporting the implementation of the cognition cycle.
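The closing remark can be illustrated with a minimal recurrent loop in which the previous cycle's output re-enters as input, so that each pass implements one step of a perceive-decide-act cognition cycle. The update rule and threshold are placeholders assumed for illustration, not the paper's actual network.

```python
# Minimal recurrent sketch: the feedback loop carries the previous cycle's
# output back into perception, so each pass is one cognition-cycle step.

def perceive(observation, feedback):
    return observation + 0.5 * feedback      # feedback re-enters here

def decide(state):
    return 1.0 if state > 1.0 else 0.0       # assumed threshold decision

def cognition_cycle(observations):
    feedback = 0.0
    actions = []
    for obs in observations:
        state = perceive(obs, feedback)
        action = decide(state)
        feedback = action                    # output fed back to next cycle
        actions.append(action)
    return actions

print(cognition_cycle([1.2, 0.8, 0.3]))      # -> [1.0, 1.0, 0.0]
```

The second observation (0.8) only crosses the decision threshold because of the feedback from the first cycle, which is exactly the role the feedback loops play in closing the cognition cycle.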
References
1. Langley, P.: Cognitive architectures and general intelligent systems. AI Mag. 27(2), 33–44
(2006)
2. Langley, P.: Information-processing psychology, artificial intelligence, and the cognitive
systems paradigm. In: AAAI (2017)
3. Vernon, D., Metta, G., Sandini, G.: A survey of artificial cognitive systems: implications for
the autonomous development of mental capabilities in computational agents. IEEE Trans.
Evol. Comput. 11(2), 151–180 (2007). https://doi.org/10.1109/TEVC.2006.890274
4. Unified Theories of Cognition: Newell’s Vision after 25 Years, pp. 250–251 (2012)
5. Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., Qin, Y.: An integrated
theory of the mind. Psychol. Rev. 111(4), 1036–1060 (2004). https://doi.org/10.1037/0033295x.111.4.1036
6. Sun, R., Langley, P., Laird, J.E., Rogers, S.: Cognitive architectures: research issues and
challenges. Cogn. Syst. Res. 10(2), 141–160 (2009). https://doi.org/10.1016/j.cogsys.2006.
07.004
7. Profanter, S.: Cognitive architectures (2012)
8. Lieto, A., Bhatt, M., Oltramari, A., Vernon, D.: The role of cognitive architectures in general
artificial intelligence. Cogn. Syst. Res. 48, 1–3 (2018). https://doi.org/10.1016/j.cogsys.2017.
08.003
9. Duch, W., Oentaryo, R.J., Pasquier, M.: Cognitive architectures: where do we go from here?
Front. Artif. Intell. Appl. 171, 122–136 (2008)
10. Thagard, P.W.: Cognitive architectures. In: The Cambridge Handbook of Cognitive Science.
Cambridge University Press, pp. 50–70 (2012)
11. Ritter, F.E.: Two cognitive modeling frontiers. Trans. Jpn. Soc. Artif. Intell. 24, 241–249
(2009). https://doi.org/10.1527/tjsai.24.241
12. Kotseruba, I., Tsotsos, J.K.: A Review of 40 Years of Cognitive Architecture Research: Core
Cognitive Abilities and Practical Applications (2016)
13. Ye, P., Wang, T., Wang, F.Y.: A survey of cognitive architectures in the past 20 years. IEEE
Trans. Cybern. 48(12), 3280–3290 (2018). https://doi.org/10.1109/TCYB.2018.2857704
14. Anderson, J.R., Lebiere, C.: The Newell test for a theory of cognition. Behav. Brain Sci. 26(5), 587–601 (2003)
15. Samsonovich, A.: Comparative Table of Cognitive Architectures (started on October 27, 2009;
last update: June 18, 2012)
16. Samsonovich, A.V.: Comparative analysis of implemented cognitive architectures. Front.
Artif. Intell. Appl. 233, 469–479 (2011). https://doi.org/10.3233/978-1-60750-959-2-469
17. Kingdon, R.: A review of cognitive architectures. ISO Project report (2008)
18. Franklin, S., Madl, T., D’Mello, S., Snaider, J.: LIDA: a systems-level architecture for cogni-
tion, emotion, and learning. IEEE Trans. Auton. Ment. Dev. 6(1), 19–41 (2014). https://doi.
org/10.1109/TAMD.2013.2277589
19. The Mind According to LIDA - A Brief Account: The “LIDA Model” and its Cognitive
Cycle, pp. 1–20 (2013)
20. Lieto, A., Lebiere, C., Oltramari, A.: The knowledge level in cognitive architectures: current
limitations and possible developments. Cogn. Syst. Res. 48, 39–55 (2018). https://doi.org/10.
1016/j.cogsys.2017.05.001
21. Li, D.: A tutorial survey of architectures, algorithms, and applications for deep learning.
APSIPA Trans. Signal Inf. Process. 3(2014), 1–29 (2014)
22. Lieto, A.: Representational limits in cognitive architectures. CEUR Workshop Proceedings,
vol. 1855, pp. 16–20 (2017)
23. Lecun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
24. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: Neural networks architectures
review. 1–31 (2017)
25. Liu, Y., Xiang, C.: Hybrid learning network: a novel architecture for fast learning. Procedia
Comput. Sci. 122, 622–628 (2017)
26. Luo, X., Shen, R., Hu, J., Deng, J., Hu, L., Guan, Q.: A deep convolution neural network model
for vehicle recognition and face recognition. Procedia Comput. Sci. 107(ICICT), 715–720
(2017)
27. Petersen, S.E., Sporns, O.: Brain networks and cognitive architectures. Neuron 88(1), 207–219
(2015)
28. Qi, X., Luo, Y., Wu, G., Boriboonsomsin, K., Barth, M.: Deep reinforcement learning enabled
self-learning control for energy efficient driving. Transp. Res. Part C Emerging Technol. 99,
67–81 (2019)
29. Rizk, Y., Hajj, N., Mitri, N., Awad, M.: Deep belief networks and cortical algorithms: a
comparative study for supervised classification. Appl. Comput. Inf. 15(2), 81–93 (2019)
30. Russell, S.J., Norvig, P.: Artificial intelligence: a modern approach, vol. 9 (1995)
31. Behere, S., Törngren, M., Chen, D.: A reference architecture for cooperative driving. J. Syst.
Architect. 59(10), 1095–1112 (2013)
32. Brehmer, B.: The dynamic OODA loop: amalgamating Boyd’s OODA loop and the cybernetic
approach to command and control. In: 10th International Command and Control Research
and Technology Symposium The Future of C2 (2005)
33. Huyck, C.R.: A neural cognitive architecture. Cogn. Syst. Res. 59, 171–178 (2020)
34. Kim, J., Kim, H., Huh, S., Lee, J., Choi, K.: Deep neural networks with weighted spikes.
Neurocomputing 311, 373–386 (2018)
35. Sboev, A., Vlasov, D., Rybka, R., Serenko, A.: Spiking neural network reinforcement learning
method based on temporal coding and STDP. Procedia Comput. Sci. 145, 458–463 (2018)
36. Tavanaei, A., Ghodrati, M., Kheradpisheh, S.R., Masquelier, T., Maida, A.: Deep learning in
spiking neural networks. Neural Netw. 111, 47–63 (2019)
37. Wu, X., Wang, Y., Tang, H., Yan, R.: A structure-time parallel implementation of spike-based
deep learning. Neural Netw. 113, 72–78 (2019)
38. Wang, B., Chen, L.L., Zhang, Z.Y.: A novel method on the edge detection of infrared image.
Optik 180, 610–614 (2019)
39. Stief, P., Dantan, J.-Y., Etienne, A., Siadat, A.: A New Methodology to Analyze the Functional
and Physical Architecture of Existing Products for an Assembly Oriented Product Family
Identification (2018)
40. van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., Tsang, J.: Hybrid reward
architecture for reinforcement learning. In: Advances in Neural Information Processing Systems
2017 (NIPS 2017), pp. 5393–5403 (2017)
41. Qi, X., Luo, Y., Wu, G., Boriboonsomsin, K., Barth, M.: Deep reinforcement learning enabled
self-learning control for energy efficient driving. Transp. Res. Part C Emerging Technol. 99,
67–81 (2019)