6.xxx The Human Intelligence Enterprise. Professor: P. H. Winston. Rodanthi Vardouli [varo@mit.edu]. Assignment #9.

03/13/13 Stephen David Larson, Intrinsic Representation: Bootstrapping Symbols from Experience

For Immediate Release
STUDENT PROPOSES MODEL OF INTRINSIC REPRESENTATION TO EXPLAIN HUMAN INTELLIGENCE

CAMBRIDGE, MA, March 13, 2013. MIT President L. Rafael Reif announced that, in his graduate thesis at the MIT Department of Electrical Engineering and Computer Science, Stephen David Larson designed and implemented a model of Intrinsic Representation that allows a computer system to build up descriptions of its environment from sensory data without explicit instruction. President Reif noted that Larson has been faithfully committed to the pursuit of understanding human intelligence from a computational perspective, deeply motivated by the student group Genesis of the Human Intelligence Enterprise, formed in 1998 at the MIT AI Lab.

Larson designed and implemented a new way of forming a self-organizing representation capable of capturing the statistically salient features in the information that an organism receives from its sensory cells. He proposed a model of an adaptive knowledge representation scheme whose meanings are intrinsic to the model itself and not parasitic on meanings captured in some external system, such as the human mind. Apart from the highly original idea that natural environments contain patterns that become identifiable once the right processing has been carried out, Larson's thesis is an excellent piece of academic writing. The salient idea is complemented by the clear slogan of “Intrinsic Representation” and by the symbol of the hierarchy of association, which demonstrates how regularities in the streams of data received from the outside world are stored and grouped into clusters of high similarity, resulting in a trained system that treats incoming patterns as members of a class.

The implementation of Larson's model is exciting news. The second of Larson's experiments illustrates how associations can allow the arm of the computer program to move into a region near the eye's view, almost imitating human behavior!
It comes as a great surprise that machines can be made to extract meaning from the world without any instruction from an external system. The communication between sensory systems that enables action without involving any linguistic descriptions is truly striking. Larson's vision was to understand how we can make sense of the world without explicit instruction. In his thesis, he aimed to provide an answer to the Symbol Grounding Problem: how we can design representations for intelligent systems that associate symbols with sub-symbolic descriptions.

His first step was to outline the features of the model of Intrinsic Representation in terms of its differences from various previous approaches to the Symbol Grounding Problem. In his model, symbols' descriptions are discovered through unsupervised statistical processing of sensory data and correspond to statistical regularities in information space, that is, to classes of things in the world. Larson identified the strategy that endows simple organisms with the ability to represent the world and raised it to the goal of his model of representation. He elucidated the hierarchical, self-organizing system of Intrinsic Representation, which acquires symbols by extracting statistically salient information from the low-level sensory data flowing into it from its environment, in an unsupervised manner, i.e. without relying on human-designed assumptions of similarity.

Second, Larson described the architecture of Intrinsic Representation through its three key levels of representation, arranged in a hierarchy: the map cell, the cluster, and the association. He then described its implementation in a computer program that discovers regularities in the information flowing into it from a 2D-block world in an unsupervised manner, and then associates those regularities with symbolic tags in a supervised manner.
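To give a flavor of the unsupervised layer of such an architecture, here is a minimal self-organizing map sketch in Python with NumPy. This is an illustration of the general technique, not Larson's actual implementation; all names, grid sizes, and learning-rate schedules below are assumptions chosen for brevity. Each map cell learns a prototype vector, so nearby cells come to represent statistically similar inputs.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny self-organizing map over rows of `data` (shape: n x dim).

    Returns a (grid_h, grid_w, dim) array of learned map-cell prototypes.
    Hyperparameters are illustrative, not tuned.
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates of every map cell, used for the neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for x in data:
            # Best-matching unit: the cell whose prototype is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbors toward the input.
            dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            nbhd = np.exp(-dist2 / (2 * sigma ** 2))
            weights += lr * nbhd[..., None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    """Return the grid coordinates of the cell that best represents x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

Trained on, say, RGB color vectors drawn from two clusters, distinct regions of the map end up responding to distinct colors, which is the sense in which incoming patterns come to be treated as members of a class.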
According to his description, the program is divided into four major areas: a 2D blocks-world simulation, a self-organizing map, a cluster set, and cluster associations. Larson's third step was to present the results of two experiments that demonstrate how his representational system discovers implicit representations of blocks in its world in an unsupervised manner, and then assigns explicit symbols to those representations in a supervised manner. More precisely, the first experiment demonstrated how his implemented system associates colored blocks with linguistic symbols representing utterances, while the second showed how his system learns to associate the movement of its eye with the movement of its arm.

By introducing the model of Intrinsic Representation, Larson contributes greatly to the field of Artificial Intelligence. He takes decisive steps in the pursuit of understanding human intelligence and implementing it in intelligent machines. His main contributions are summarized in the steps he took to answer the Symbol Grounding Problem and to understand how meanings can be learned without explicit instruction. First, he designed the model of Intrinsic Representation. Second, he implemented the model in a computer system. Third, he conducted experiments using this representational system in a computer program. His hierarchy of map cells, clusters, and associations provides highly fertile ground for future research.
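The supervised step, in which discovered clusters are tagged with symbols such as utterances, can be sketched very loosely as a co-occurrence table between cluster identities and symbolic tags. Again, this is a hypothetical illustration of the general idea, not Larson's code: the cluster assignment here is a simple nearest-centroid rule, and all function names are invented for this sketch.

```python
import numpy as np
from collections import Counter, defaultdict

def assign_cluster(centroids, x):
    """Index of the cluster whose centroid is nearest to input vector x."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def learn_associations(centroids, samples, symbols):
    """Count how often each cluster co-occurs with each symbolic tag.

    `samples` are input vectors; `symbols` are the tags (e.g. utterances)
    presented alongside them during the supervised phase.
    """
    table = defaultdict(Counter)
    for x, sym in zip(samples, symbols):
        table[assign_cluster(centroids, x)][sym] += 1
    return table

def name_of(table, centroids, x):
    """Return the symbol most strongly associated with x's cluster."""
    counts = table[assign_cluster(centroids, x)]
    return counts.most_common(1)[0][0] if counts else None
```

Once the table is learned, a novel input is classified by its cluster and answered with that cluster's dominant symbol, which is one simple way a grounded symbol can be attached to a sub-symbolic description.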
