The Evitability of the Emergence of Omohundro's Basic AI Drives

Abstract

In Omohundro (2008), it is argued that a sufficiently advanced artificial intelligence of any design will develop a specified set of sub-goals. Here the ambiguity of the definition of sufficient advancement is addressed, and a paradigm of algorithmically fixed, recursively improving artificial intelligence is offered as a counterexample to Omohundro’s claims.

In the paper The Basic AI Drives (Omohundro 2008), it is argued that a specified array of drives will arise in sufficiently advanced AI systems of any design. The referenced drives are those toward self-improvement, rationality, the preservation of initial utility functions, the prevention of counterfeit utility, self-protection, and efficient resource use and acquisition. It is argued here that the logical necessity of these basic AI drives applies to a subset of program architectures, and is not universally applicable to the domain of all sufficiently advanced intelligences, unless sufficient advancement is defined so as to be synonymous with the emergence of these very drives.

Conventional and tautological uses of the word “sufficiency”

The notion of sufficiency conventionally assumes an agreed-upon objective, by whose purposes a given subject’s degree of sufficiency may be assessed. In common discourse, the objectives referenced with regard to a subject’s sufficiency can be either explicit or implied. For example, in the sentence “He isn’t sufficiently fast to win the race,” the objective by which sufficiency is assessed is explicit: namely, winning a race. A contrasting example of an implied objective is the sentence “This meat isn’t cooked sufficiently.” Here the objective by which the degree of the sufficiency of the cooking is assessed is not specified, though in most contexts the modern speaker of English would be likely to assume that this objective involved safe and enjoyable ingestion.

Literarily, reference to sufficiency without an implied or specific objective has been used to create subtle logical tautologies. A famous instance of this technique, and perhaps its first use, can be found in novelist Arthur C. Clarke’s “third law” (1962): “Any sufficiently advanced technology is indistinguishable from magic.” Because no explicit objective is referred to in the quoted maxim, and the only framework available with which the reader may ascertain any implied objectives is the sentence’s predicate, the only objective possessed by the reader with which she might assess technological sufficiency is distinguishability from magic. This creates a logical tautology of the form P → P. Other tautologies one might construct using this literary technique are: “Any sufficiently tall man can touch the moon” (a man tall enough to touch the moon is tall enough to touch the moon) and “Any sufficiently beautiful sunset can make me cry” (a sunset beautiful enough to make me cry is beautiful enough to make me cry).

Discussion of sufficient advancement in The Basic AI Drives

In The Basic AI Drives, the “sufficiency” or “sophistication” of hypothetical intelligences is referred to without explicit objectives with which the reader may interpret these qualities. Omohundro writes, “We identify a number of ‘drives’ that will appear in sufficiently advanced AI systems of any design,” and:

“The arguments are simple, but the style of reasoning may take some getting used to. Researchers have explored a wide variety of architectures for building intelligent systems [2]: neural networks, genetic algorithms, theorem provers, expert systems, Bayesian networks, fuzzy logic, evolutionary programming, etc. Our arguments apply to any of these kinds of system [sic] as long as they are sufficiently powerful. To say that a system of any design is an ‘artificial intelligence’, we mean that it has goals which it tries to accomplish by acting in the world. If an AI is at all sophisticated, it will have at least some ability to look ahead and envision the consequences of its actions. And it will choose to take the actions which it believes are most likely to meet its goals.” (Emphasis added.)

Because of the existence of these tautological interpretations of the words “sufficiency” and “sophistication” in the above uses, the thesis of The Basic AI Drives is not strictly refutable. However, under the assumption that The Basic AI Drives refers to an implied sufficiency, such as recursively improving superhuman general intelligence, one can attempt to refute its claims.

The prerequisites for the development of sub-goals

The basic AI drives are described as sub-goals which might be developed by a hypothetical artificial intelligence, and which would not be “programmed in at the start.” The development of these specific sub-goals is logically predicated on an ability to generate sub-goals in general. This capacity, ubiquitous in the evolved biological cognitive architectures which are presently agreed to possess intelligence, is not a defining feature of general intelligence. Ben Goertzel (n.d.) defines general intelligence as “the ability to solve a variety of complex problems in a variety of complex environments.” Wang (1995) defines it as “the capacity to adapt under insufficient knowledge and resources.” Biomorphic intelligences adapt to complex and various problems and environments in part by means of the development of sub-goals.
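The distinction at issue can be made concrete with a toy sketch, in which all names and rules are invented for illustration: the first agent below maps states to actions by a rule fixed at write time and has no representation of intermediate goals at all, while the second derives sub-goals from its top-level goal at runtime, the general capacity on which Omohundro’s argument is predicated.

```python
# Toy contrast (all names and rules invented): a fixed-policy agent with no
# notion of intermediate goals, versus an agent that derives sub-goals at
# runtime from its top-level goal.

def fixed_policy_agent(state):
    """Maps states directly to actions by a rule fixed at write time."""
    return "advance" if state < 10 else "stop"

def subgoal_generating_agent(goal, state):
    """Derives intermediate sub-goals from the top-level goal at runtime."""
    # These sub-goals are computed, not "programmed in at the start".
    return list(range(state + 1, goal + 1))

print(fixed_policy_agent(3))           # advance
print(subgoal_generating_agent(5, 2))  # [3, 4, 5]
```

Only the second architecture possesses the generative capacity that the basic AI drives presuppose; the first can be arbitrarily competent at its task while remaining structurally incapable of developing them.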
Intelligences with non-biomorphic architectures sometimes leverage recursively improving non-algorithmic data to effect this adaptation.

KriegExpert: An existing algorithmically fixed narrow AI capable of recursive improvement

KriegExpert is a narrow game-playing AI which was awarded the silver medal for its performance in the 2009 International Computer Games Olympiad ({AUTHOR} 2009). The program is written to play a variant of chess called kriegspiel, a game of incomplete information wherein the contestants possess limited knowledge about the positions of their opponents’ pieces. While KriegExpert’s composite goal might be articulated in anthropomorphic terms as “to play excellent kriegspiel,” the emergence of what may be perceived as a well-played game is the result of disparate algorithms which evaluate the board in several different ways, making calculations concerning the geometry of the chess board and the liabilities and advantages of possible moves. These algorithms are fixed and non-adaptive.
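This paper does not detail KriegExpert’s internals beyond the description above, but the kind of architecture it describes can be sketched as follows, with heuristics, weights, and board encoding all invented for illustration: several fixed evaluation functions are combined into move selection, with no machinery for modifying the rules or spawning new goals.

```python
# A minimal sketch of move selection by fixed, non-adaptive evaluation
# algorithms (heuristics, weights, and board encoding invented; this is
# not KriegExpert's actual code).

def mobility(board, move):
    """Fixed heuristic: prefer moves that leave more legal follow-ups."""
    return len(board["legal_after"].get(move, []))

def exposure_penalty(board, move):
    """Fixed heuristic: penalize moves flagged as exposing the king."""
    return -5 if move in board["risky"] else 0

WEIGHTS = (1.0, 2.0)  # set at write time; never modified by the program

def choose_move(board, candidates):
    """Pick the candidate maximizing a fixed weighted sum of heuristics."""
    def score(move):
        return (WEIGHTS[0] * mobility(board, move)
                + WEIGHTS[1] * exposure_penalty(board, move))
    return max(candidates, key=score)

board = {"legal_after": {"e4": ["a", "b", "c"], "h4": ["a"]}, "risky": {"h4"}}
print(choose_move(board, ["e4", "h4"]))  # e4
```

Nothing in such a program represents a goal as a manipulable object, so there is nothing from which a sub-goal could be generated.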

KriegExpert’s decisions are influenced, in part, by a data archive of kriegspiel games that have been played. Because the program generates more archivable data with every game played, it is capable of recursive self-improvement by means of the growth and increasing sophistication of its dataset, even given its inability to develop sub-goals, which follows from the initial and consequently permanent fixity of its algorithms.

Some systems which are algorithmically fixed initially are capable of being algorithmically modified by means of interactions between non-algorithmic data and algorithmic data which have been inadvertently permitted by the initial code, as is the case when systems vulnerable to SQL injections are compromised to the ultimate effect that a perpetrator is able to change the algorithmic constitution of a system (Dzulfakar 2009). While these systems do furnish an example of the existence of fixed-goal systems which might develop the capacity for the generation of sub-goals, the locus of any such adaptation is exogenous, and the resultant goals cannot be considered to be a logically necessary outgrowth of such a system’s initial architecture.

Conclusion: Natural language interactions by means of immutable algorithms

Norvig (2009) has observed that greater processing speeds and the greater availability of data have increased the accuracy of natural language processing systems while significantly reducing their algorithmic complexity. Whereas older natural language processing systems were required to encode grammatical and syntactic rules algorithmically, modern systems are able to perform at a greater level of sophistication with less code by extrapolating knowledge from large data sets which do not interact with the algorithmic constitution of the software.
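Both the archive mechanism attributed to KriegExpert above and Norvig’s observation can be illustrated with a toy count-based predictor, with all data invented for the example: the algorithm below is frozen at write time, yet its outputs improve as its archive of past games grows, and the data never touches the code.

```python
from collections import Counter

# Toy illustration (all data invented): a fixed algorithm whose competence
# grows only through its archive. The code never changes; only `archive` does.

def predict(archive, position):
    """Return the most commonly recorded reply to `position`, if any."""
    replies = Counter(reply for pos, reply in archive if pos == position)
    return replies.most_common(1)[0][0] if replies else None

archive = []                   # the recursively growing, non-algorithmic data
print(predict(archive, "P1"))  # None: no experience yet

archive += [("P1", "Nf3"), ("P1", "Nf3"), ("P1", "d4")]  # games played
print(predict(archive, "P1"))  # Nf3: behavior improved, code unchanged
```

The improvement here is entirely a property of the dataset; inspecting the program before and after the games would reveal an identical algorithm.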
Like Norvig’s algorithms, the constituent elements of proposed “Oracle AI” systems, wherein a greater-than-human intelligence’s inputs and outputs would be limited to the domain of natural language interactions, do not depend on the existence of mutable code. Such an intelligence might derive assertions of fact and logical relationships from stores of natural language data, construct deductive or probabilistic models of superhuman sophistication, and create natural language outputs which could in turn be added to a recursively improving data set. A system of this architecture, while being recursively improving and possessing greater-than-human intelligence, would lack the algorithmic mutability required to engender susceptibility to the development of sub-goals. Returning to the aforementioned identifications of general intelligence with the ability to adapt to complex and various problems and situations: the execution of human-level interactions in natural language, with its ability to render solutions to problems as disparate as fox hunting and theoretical physics, is a proposed indicator of general intelligence (Turing 1950).

Acknowledgments

I’m grateful to Levi Self, Michael Anissimov, and Steven Omohundro for the insights that led to this paper.

References

Clarke, A. 1962. Profiles of the Future: An Inquiry into the Limits of the Possible. New York: Harper & Row.

Dzulfakar, M. 2009. "Advanced MySQL Exploitation." Proceedings of the DEFCON Conference. Las Vegas.

Goertzel, B. (n.d.) Artificial General Intelligence Research Institute. http://www.agiri.org/wiki/index.php?title=Artificial_General_Intelligence. Accessed 29 June 2011.

{Author}. 2009. "Zapata Native Creates Winning Artificial Intelligence." Zapata County News. 25 May.

Norvig, P. 2009. "Innovation in Search and Artificial Intelligence." CITRIS. Berkeley.

Omohundro, S. 2008. "The Basic AI Drives." Proceedings of the First Conference on Artificial General Intelligence. Memphis, 171.

Turing, A. 1950. "Computing Machinery and Intelligence." Mind 59: 433–460.

Wang, P. 1995. Non-Axiomatic Reasoning System: Exploring the Essence of Intelligence. PhD diss., Indiana University.
