
REASONING LEARNING: A NEW APPROACH TO COGNITIVE ARCHITECTURE AND MACHINE LEARNING VIA AUTOMATIC AND EVOLUTIONARY CODING MODELS

Rogério Figurelli

Trajecta, Porto Alegre, BR


figurelli@gmail.com

October 20, 2020

ABSTRACT

Advances in machine learning and related technologies have enabled artificial intelligence algorithms based on non-programmatic models, which perform tasks increasingly close to human performance, especially with regard to the representation of information. In theory, everything would be simple if the same evolution could be scaled to even greater capabilities, such as reasoning and intelligence for general applications, but building truly effective models for this through the current approach is very difficult. In this scenario, the search for cognitive architectures using the most varied models is considered fundamental. Accordingly, a new approach to machine learning is proposed, different from current paradigms such as artificial neural networks or deep learning technologies, and focused on building cognitive architectures that learn to reason in the same way that people learn to program, creating evolutionary systems for this purpose. In theory, we are looking for a new way to open the doors to discovering the functionalities of our mind at more abstract layers, reserving artificial neurons and the operational processing of our brain only for representation activities, since these models may not be the ideal ones to materialize in machines some of the most intelligent capabilities of our mind. In practice, the programmatic functions, pushed to the background by current models that generate algorithms only from the processing of labeled data, as in supervised learning, regain strength in the search for a synthesis of artificial reasoning that can solve problems the way programmers do when creating their own code, while still drawing on the most advanced known machine learning abilities and techniques.

1. INTRODUCTION

To what extent will machine learning, even at advanced levels of abstraction of neural networks and their structures, lead to a human-level cognitive architecture, especially if we consider its distinctive potential for generalization?
This is the key question and problem explored in this article, which proposes a different kind of machine learning solution, one in which other solutions, such as deep learning, are abstracted away, and in which the same process humans use to create code for problem-solving is adopted as the main tool for cognitive modeling and reasoning.
In this sense, many architectures seek human-level reasoning capabilities on machines through the most varied approaches, such as modeling key structures in the brain [1][3][4], learning problems using neural networks [2], or learning and communicating through natural language [5].
But the reality is that this is an open field for research and model building, since the real effectiveness of results in terms of cognitive processing, especially in challenges that require a broader understanding of the general context, is still far from what can already be done in terms of modeling object representations, one of the greatest advances in Artificial Intelligence in recent years, notably due to progress in the field of deep learning and the whole structure that supports this scenario.

2. THE CHALLENGE OF COGNITIVE MODELING THROUGH MACHINE LEARNING

A complementary approach to modeling the structural patterns of the human brain in search of the science, and, why not, the art, of reasoning, is to model the techniques that humans themselves use to solve problems through reasoning.
In practice, we treat the problem as a challenge of modeling the cognitive process, without limiting the possible solutions, since this is undoubtedly a frontier of knowledge in the field of Artificial Intelligence.
Thus, in the architecture proposed in this article, all the current evolution in modeling brain structures is considered only an advance in the representation of objects or information, accessible through code functions. That is, one may or may not use functions that create these representations, providing input parameters for their execution. The definition of the code itself, line by line, and of its entire structure, in the same way it is performed by people's minds, is the challenge to be overcome by the machine learning process, using as its structure only the automatic generation of lines of code and an evolutionary quality assessment of the resulting code, through supervised learning, as an initial reference and to facilitate the understanding of the proposed approach.

2.1. The relevance of brain and mind models

In practice, there is a paradox in the proposed approach, as the programming process is precisely what was abstracted away by machine learning technologies and disciplines such as data science. That is, models and programs can now be created directly from data. But to what extent can machine learning technology itself not take the opposite path, that is, create source code directly to solve problems?
This is considered here a challenge very similar to that of separating brain and mental capabilities at different levels of abstraction, in order to create solutions that start from the level of our mind, and not initially from our brain and its basic structure, the neural network. From the point of view of the mind and its solutions, an existing technique was chosen, namely the coding process carried out by programmers, together with an external structure, the lines of code, modeled in some way by people specialized and competent in this task, so that the generation of this code can be used to solve general problems, or even to generalize the applicability of the proposed approach to the most varied demands.

2.2. The difference between models for cognitive representation and cognitive reasoning

In order to separate the approaches to creating models of brain and mind, and their functionalities, it is considered fundamental to separate the problem of creating models with the cognitive ability to represent structures and information from that of creating models with the cognitive ability to reason.
In practice, representation and the architectures for it have already evolved considerably, as in the example of Figure 1, but, in theory, reasoning still depends on a step-by-step evolution until human levels of cognitive ability are reached.

Figure 1 - Example of the architecture of a multilayer feed-forward network, from [6]
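As a concrete illustration of the representation side of this separation, the sketch below implements a minimal multilayer feed-forward pass in Python with NumPy. It is only a toy example of the kind of representation model that, in the proposed approach, would be exposed to generated code as an ordinary callable function; the layer sizes and names are illustrative assumptions, not part of the original architecture.

```python
import numpy as np

def feed_forward(x, weights, biases):
    """Toy multilayer feed-forward pass: ReLU on hidden layers, linear output.

    In the proposed approach such a network would act only as a
    representation function that generated code lines may call.
    """
    activation = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = activation @ W + b
        activation = np.maximum(z, 0.0) if i < len(weights) - 1 else z
    return activation

# Illustrative 4-8-3 network applied to one random input vector.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
biases = [np.zeros(8), np.zeros(3)]
print(feed_forward(rng.normal(size=(1, 4)), weights, biases))
```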

In this sense, it is considered relevant to seek approaches to machine learning different from the existing ones, even approaches independent of the human brain, to open the way for a discovery process focused more on capabilities than on perceived structures.
At the same time, persevering with machine learning technologies, but with different problem-solving structures, as proposed here through their direct use for the automatic generation of lines of code, opens the door to leveraging years of evolution in this field, which are, at the very least, extremely relevant in the area of representation.

2.3. The limits of deep learning for cognitive learning

In practice, the approach and solution proposed in the pursuit of cognitive learning seek to join forces across the various disciplines and scientific approaches that already exist for creating cognitive architectures, while moving beyond the paradigm and limits of the deep learning approach, which becomes just a function within this process and no longer its main structure.
More than that, this new structure seeks to model how programmers reason to solve problems, in a way that can be applied to any problem, even considering the challenges typically characterized as requirements for AGI (Artificial General Intelligence) technologies.

3. REACHING MACHINE COGNITIVE LEARNING FROM CODING LEARNING

The first step in the proposed cognitive architecture is to define the inputs and outputs (I/O) from a systemic point of view. Considering a set of I/O used in the same way as in other problems addressed by machine learning, internal and external variables can be separated into I/O, facilitating the modeling of automatically generated source code (Figure 2).
Another important point is that any internal code structure, regardless of layer or mesh, must allow input and output parameters to be connected as part of the cognitive programming learning and, directly, of the automatic generation of source code.

Figure 2 - Example of I/O setup
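To make the I/O definition concrete, the following sketch declares the external inputs, external outputs, and internal (intermediate) variables of a problem as a small Python data structure. All names and fields here are hypothetical illustrations, not a specification from this article.

```python
from dataclasses import dataclass, field

@dataclass
class IOSpec:
    """Hypothetical systemic I/O declaration for one problem.

    External inputs and outputs are the variables the problem exposes;
    internals are intermediate variables that generated lines may use.
    """
    inputs: list          # external input variable names
    outputs: list         # external output variable names
    internals: list = field(default_factory=list)  # intermediate variables

    def readable(self):   # variables a generated line may read
        return self.inputs + self.internals

    def writable(self):   # variables a generated line may write
        return self.internals + self.outputs

# Toy example: two inputs, one output, one scratch variable.
spec = IOSpec(inputs=["x1", "x2"], outputs=["y"], internals=["tmp"])
print(spec.readable(), spec.writable())
```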

In this sense, a basic line of programming can be considered to consist of a set of output variables produced by a function that handles a set of input variables, as in the example pictured in Figure 3.

Figure 3 - Example of a general code setup syntax
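A minimal way to represent such a line in code, using hypothetical names, is as a record holding its output variables, its function, and its input variables, executed against a shared state dictionary:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class CodeLine:
    """One generated line: output variables = function(input variables)."""
    outputs: Tuple[str, ...]
    function: Callable
    inputs: Tuple[str, ...]

    def execute(self, state: dict) -> None:
        result = self.function(*(state[name] for name in self.inputs))
        if len(self.outputs) == 1:
            result = (result,)
        state.update(zip(self.outputs, result))

# Toy usage: the line "y = add(x1, x2)" applied to a state.
state = {"x1": 2, "x2": 3}
line = CodeLine(("y",), lambda a, b: a + b, ("x1", "x2"))
line.execute(state)
print(state["y"])  # 5
```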

In the same way, it is possible to add conditions on variables (if, while, etc.) as logic to be learned, together with the connection of input and output variables and the functions that transform input parameters, or even to use remote functions, based on remote representations or APIs, as pictured in Figure 4.

Figure 4 - Example of function attributes
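Extending the sketch above, a line may also carry a learnable guard condition, and its function may just as well wrap a remote representation or API call. Again, the structure and names below are assumptions made for illustration only.

```python
import math
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class GuardedLine:
    """A code line with an optional condition (if/while-style logic)."""
    outputs: Tuple[str, ...]
    function: Callable        # local function, or a wrapper around a remote API
    inputs: Tuple[str, ...]
    condition: Optional[Callable] = None  # e.g. lambda state: state["x"] > 0

    def execute(self, state: dict) -> None:
        if self.condition is not None and not self.condition(state):
            return            # condition not met: the line is skipped
        result = self.function(*(state[name] for name in self.inputs))
        state[self.outputs[0]] = result

# Toy usage: "if x > 0: y = sqrt(x)", where sqrt could equally be a remote call.
state = {"x": 9.0}
line = GuardedLine(("y",), math.sqrt, ("x",), condition=lambda s: s["x"] > 0)
line.execute(state)
print(state.get("y"))  # 3.0
```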

And, in theory, the machine learning process, starting from labeled data, must be able to produce, in an evolutionary way, combinations of inputs, connections, functions, and so on, modeled as lines of code and capable of efficiently minimizing a cost function, especially if the optimization time is considered infinite, as pictured in Figure 5.

Figure 5 - Example of function modeling
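One plausible reading of this cost function, under the supervised setting used here as the initial reference, is sketched below: a candidate program (a list of line-like callables that mutate a state dictionary) is run on each labeled example and scored by its squared error on the declared outputs. The function and example are hypothetical.

```python
def program_cost(lines, dataset, output_names):
    """Hedged sketch of a supervised cost for a generated program.

    lines: callables that read/write a state dict (one per generated line)
    dataset: list of (inputs_dict, expected_outputs_dict) labeled examples
    output_names: the external output variables to score
    """
    total = 0.0
    for inputs, expected in dataset:
        state = dict(inputs)
        for line in lines:
            line(state)
        for name in output_names:
            total += (state.get(name, 0.0) - expected[name]) ** 2
    return total / max(len(dataset), 1)

# Toy usage: a one-line program that happens to solve y = x1 + x2 exactly.
candidate = [lambda s: s.update(y=s["x1"] + s["x2"])]
data = [({"x1": 1, "x2": 2}, {"y": 3}), ({"x1": 4, "x2": 1}, {"y": 5})]
print(program_cost(candidate, data, ["y"]))  # 0.0
```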

This is the theory and logic proposed; however, infinite time is not available, and reducing it to practical timescales requires a series of simplifications, which can be summarized as:

- Start with simpler problems and with fewer lines of code to be learned and generated, so that the approach can later be scaled to more complex problems with a greater number of resources;
- Increase the number of available functions, with varied and diversified capacities, aligned as closely as possible with the class of problems to be solved, including the use of deep learning to represent structures and information;
- Start with simpler reasoning models, like Monolithic ones, before migrating to more advanced structures like Multilayered and Mesh.

It is considered that there are no limitations to scaling a cognitive architecture based on this pattern of automatic code generation, since, in practice, this is the same process carried out by programmers once the specification is understood in terms of functional and non-functional requirements. But, without a doubt, this definition is also a complex process, and only time can validate its efficiency, at least in comparison with other cognitive architectures.

4. MACHINE LEARNING VIA AUTOMATIC CODING MODELS

The conceptual architecture diagram in Figure 6 shows the described components more clearly, highlighting the main training logic based on machine learning, which should produce a sequence of lines of code (Model Code) with parameters adjusted to reduce the cost function in a progressive and evolutionary way.

Figure 6 - Reasoning Learning Optimization Algorithm and Model

It can be considered that, with each learning cycle, the lines of code will be modified in some way: existing lines will be eliminated, new lines will be added, or the parameters of existing lines will be modified.
In fact, although this process happens within a cycle of optimization and training typical of machine learning technology, it is not very different from what an experienced programmer must do to generate source code that solves a given problem.
And that is exactly the objective of the proposed architecture: to learn, in some way, the human competence for reasoning, provided, of course, that there are resources for this and a labeled basis that allows the learning to lead to the solution of the problem, something that does not always happen even when the problem is addressed by human programmers.

5. A NEW APPROACH TO COGNITIVE ARCHITECTURE AND MACHINE LEARNING

Although the creation of monolithic code is the most direct solution, even with basic internal functions, there are no limits on evolving the architecture toward approaches more aligned with hierarchical code structures, meshes, or even services and microservices, as drawn in Figure 7, or, in theory, other hybrid models, as long as the principle of code generation is not changed within the state of the art of programming technology and source code production.

Figure 7 - Examples of reasoning models (Monolithic, Multilayered, and Mesh)

However, as stated before, starting with simpler reasoning models, like Monolithic ones, before migrating to more advanced structures like Multilayered and Mesh, is a way to address the limitation of machine learning resources, especially for more complex problems.
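Under the same assumptions as the sketches above, the difference between the Monolithic and Multilayered models can be expressed as follows (a Mesh model would instead connect blocks through an explicit dependency graph rather than a fixed order). This is an illustrative reading, not a prescribed implementation.

```python
def run_monolithic(lines, state):
    """Monolithic model: a single flat sequence of generated lines."""
    for line in lines:
        line(state)          # each line reads and writes the shared state
    return state

def run_multilayered(layers, state):
    """Multilayered model: each layer is a monolithic block whose resulting
    state becomes the working state of the next layer."""
    for layer in layers:
        state = run_monolithic(layer, state)
    return state

# Toy usage: two layers, each with one generated line.
layers = [[lambda s: s.update(tmp=s["x"] * 2)],
          [lambda s: s.update(y=s["tmp"] + 1)]]
print(run_multilayered(layers, {"x": 3})["y"])  # 7
```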

6. CONCLUSION

A new approach to machine learning was proposed, different from current paradigms such as artificial neural networks or deep learning technologies, and focused on building cognitive architectures that learn to reason in the same way that people learn to program, creating evolutionary systems for this purpose. The objective of the proposed architecture is to learn, in some way, the human competence for reasoning, provided, of course, that there are resources for this and a labeled basis that allows the learning to lead to the solution of the problem, something that does not always happen even when the problem is addressed by human programmers. In addition, a simplification approach was presented to make it feasible to start generating code automatically, so that the architectures built in this way can scale with the complexity of the problems to be solved, reaching increasingly general problems, which is a known limitation of current Artificial Intelligence technologies.

REFERENCES

[1] L. Remmelzwaal, A. Mishra, G. Ellis, "Brain-inspired Distributed Cognitive Architecture", https://arxiv.org/pdf/2005.08603.pdf.

[2] A. Betti, M. Gori, S. Marullo, S. Melacci, "Developing Constrained Neural Units Over Time", https://arxiv.org/pdf/2009.00296.pdf.

[3] S. Scardapane, M. Scarpiniti, E. Baccarelli, A. Uncini, "Why should we add early exits to neural networks?", https://arxiv.org/pdf/2004.12814.pdf.

[4] P. Langley, "Progress and Challenges in Research on Cognitive Architectures", Institute for the Study of Learning and Expertise, 2164 Staunton Court, Palo Alto, CA 94306.

[5] B. Golosio, A. Cangelosi, O. Gamotina, G. L. Masala, "A cognitive neural architecture able to learn and communicate through natural language", https://arxiv.org/pdf/1506.03229.pdf.

[6] S. Tian, F. Tao, H. Du, W. Yu, "How machine learning can help the design and analysis of composite materials and structures?", https://arxiv.org/pdf/2010.09438.pdf.
