already been determined, all other nodes will eventually enter the state in which they await a reply.
Complexity Analysis
A processor, x, initiates messages along paths of length 2^i only if it is not defeated by a processor within distance 2^(i-1) (in either direction) from x. Within any group of 2^(i-1) + 1 consecutive processors, at most one can initiate messages along paths of length 2^i. Although possibly all n processors will initiate paths of length 1, at most ⌈n/2⌉ (read "ceiling of n/2") processors will initiate paths of length 2, at most ⌈n/3⌉ of length 4, at most ⌈n/5⌉ of length 8, etc.

A processor initiating messages along paths of length 2^i causes messages to emanate in both directions, and return. At most 4·2^i messages will be passed as a result of that initiation. The sum total of all messages passed is therefore at most

4·(1·n + 2·⌈n/2⌉ + 4·⌈n/3⌉ + 8·⌈n/5⌉ + ... + 2^i·⌈n/(2^(i-1) + 1)⌉ + ...).

Each of the terms within the parentheses is less than 2n. There are no more than 1 + ⌈log n⌉ terms. (No processor will pass messages along paths of length 2n or greater since, once a processor initiates paths of length at least n and the message is acceptable all the way around the circle, the processor wins and stops initiating messages.) Thus, the total number of messages passed is less than 8n + 8n⌈log n⌉ = O(n log n).

If one defines the time complexity to be the minimum time required for the completion of an election, assuming as much message transmission overlap as possible, the worst-case time complexity can easily be shown to be linear in the number of processors. The exact formula is

Time = 2·(1 + 2 + 4 + ... + 2^i + ... + n).

When n is an exact power of 2 (the best case), Time = 4n - 2; when n is one more than an exact power of 2 (the worst case), Time = 6n - 6. Thus, the savings in worst-case message passages is paid for by an increase in the wall time. We conjecture that models in which message passing is unidirectional must, in the worst case, have quadratic behavior, and that bidirectional capability is necessary in order to achieve O(n log n) performance. Recently, Burns has shown [3] that n log n is asymptotically optimal.

Received 6/79; revised 5/80; accepted 7/80
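As a numerical sanity check on the series above (a sketch, not part of the paper; Python is used and the function name is invented), one can sum the terms directly, cutting off once path lengths reach 2n, and compare against the 8n + 8n⌈log n⌉ closed form:

```python
import math

def message_bound(n):
    # Sum 4 * (1*n + 2*ceil(n/2) + 4*ceil(n/3) + 8*ceil(n/5) + ...):
    # the i-th term is 2^i * ceil(n / (2^(i-1) + 1)), stopping once
    # path lengths reach 2n, since no processor initiates longer paths.
    total = n  # i = 0 term: all n processors may initiate paths of length 1
    i = 1
    while 2 ** i < 2 * n:
        total += 2 ** i * math.ceil(n / (2 ** (i - 1) + 1))
        i += 1
    return 4 * total

# The bound stays below 8n + 8n*ceil(log2 n) for each ring size tried.
for n in [4, 16, 100, 1000]:
    assert message_bound(n) <= 8 * n + 8 * n * math.ceil(math.log2(n))
```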
1. Chang, E., and Roberts, R. An improved algorithm for decentralized extrema-finding in circular configurations of processes. Comm. ACM 22, 5 (May 1979), 281-283.
2. LeLann, G. Distributed systems--Towards a formal approach. Inform. Proc. 77, North-Holland Pub. Co., Amsterdam, 1977, pp. 155-160.
3. Burns, J.E. A formal model for message passing systems. Tech. Rep. No. 91, Comptr. Sci. Dept., Indiana Univ., May 1980.
Computer Architecture and Systems

J.P. Hayes, Editor
Design of a LISP-Based Microprocessor

Guy Lewis Steele Jr. and Gerald Jay Sussman
Massachusetts Institute of Technology
We present a design for a class of computers whose "instruction sets" are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures.

We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.

Key Words and Phrases: microprocessors, large-scale integration, integrated circuits, VLSI, list structure, linked lists, garbage collection, storage management, direct execution, high-level language architectures, interpreters, tail recursion, LISP, SCHEME

CR Categories: 4.13, 4.21, 4.22, 4.34, 6.21, 6.22, 6.33
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

This work was supported in part by the National Science Foundation under Grant MCS77-04828, and in part by Air Force Office of Scientific Research Grant AFOSR-78-3593.

Authors' present addresses: G.L. Steele, Jr., Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213; G.J. Sussman, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139.

© 1980 ACM 0001-0782/80/1100-0628 $00.75.

Communications of the ACM, November 1980, Volume 23, Number 11
1. Introduction
An idea which has increasingly gained attention is that computer architectures should reflect specific language structures to be supported. This is an old idea; one can see features in the machines of the 1960s intended to support Cobol, Fortran, Algol, and PL/I. More recently research has been conducted into architectures to support string or array processing as in SNOBOL or APL.

An older and by now well-accepted idea is that of the stored-program computer. In such a computer the program and the data reside in the same memory; that is, the program is itself data which can be manipulated as any other data by the processor. It is this idea which allows the implementation of such powerful and incestuous software as program editors, compilers, interpreters, linking loaders, debugging systems, etc.

One of the great failings of most high-level languages is that they have abandoned this idea. It is extremely difficult, for example, for a PL/I (Pascal, Fortran, Cobol ...) program to manipulate PL/I (Pascal, Fortran, Cobol ...) programs.

On the other hand, many of these high-level languages have introduced other powerful ideas not present in standard machine languages. Among these are (1) recursively defined, nested data structures; and (2) the use of functional composition to allow programs to contain expressions as well as (or instead of) statements. The LISP language, in fact, has both of these features. It is unusual among high-level languages in that it also explicitly supports the stored-program idea: LISP programs are represented in a standardized way as recursively defined, nested LISP data structures. By contrast with some APL implementations, for example, which allow programs to be represented as arrays of characters, LISP also reflects the structure of program expressions in the structure of the data which represents the program. (An array of APL characters must be parsed to determine the logical structure of the APL expressions represented by the array. Similar remarks apply to SNOBOL statements represented as SNOBOL strings.)

It is for this reason that LISP is often referred to as a "high-level machine language." As with standard stored-program machine languages, programs and data are made of the same stuff. In a standard machine, however, the "stuff" is a homogeneous, linear (ordered) vector of fixed-size bit fields; a program is represented as an ordered sequence of bit fields (instructions) within the overall vector. In LISP, the "stuff" is a heterogeneous, unordered set of records linked to form lists, trees, and graphs; a program is represented as a tree (a "parse tree" or "expression tree") of linked records (a subset of the overall set of records). Standard machines usually exploit the linear nature of the "stuff" through such mechanisms as indexing by additive offset and linearly advancing program counters. A computer based on LISP can similarly exploit tree structures. The counterpart of indexing is component selection; the counterpart of linear instruction execution is evaluation of expressions by recursive tree-walk.

Just as the "linear vector" stored-program-computer model leads to a variety of specific architectures, so does the "linked record" model. For concreteness we present here one specific architecture based on the linked record model which has actually been constructed.
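The recursive tree-walk just described can be made concrete with a toy evaluator. This is a sketch in Python rather than LISP: nested tuples stand in for linked records, and the three-operator table is invented for the example.

```python
# Expression trees as nested tuples: ("+", ("*", 2, 3), 4).
# A recursive tree-walk plays the role that a linearly advancing
# program counter plays on a conventional vector-of-words machine.

OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       "-": lambda a, b: a - b}

def tree_eval(expr):
    if not isinstance(expr, tuple):   # leaf record: a constant
        return expr
    op, left, right = expr            # interior record: operator + components
    # "Component selection" replaces indexing by additive offset;
    # recursion replaces sequential instruction fetch.
    return OPS[op](tree_eval(left), tree_eval(right))
```

Evaluating `("+", ("*", 2, 3), 4)` walks the tree bottom-up, exactly as the machine described later walks its linked program records.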
2. List Structure and Programs
One of the central ideas of the LISP language is that storage management should be completely invisible to the programmer, so that he need not concern himself with the issues involved. LISP is an object-oriented language, rather than a value-oriented language. The LISP programmer does not think of variables as the objects of interest, bins in which values can be held. Instead, each data item is itself an object, which can be examined and modified, and which has an identity independent of the variable(s) used to name it.

In this section we discuss LISP data structures at the conceptual level; the precise form of LISP data objects is not of concern here. Later we will discuss specific representations within the machine. LISP data is collectively referred to as "S-expressions" ("S" for "symbolic"). For our purposes we will need only the special cases of S-expressions called atoms and lists. An atom is an "indivisible" data object, which we denote by writing a string of letters and digits; if only digits are used, then the atom is considered to be a number. Many special characters such as "-", "+", "@", and "*" are considered to be letters; we will see below that it is not necessary to specially reserve them for use as operator symbols. A list is a (possibly empty) sequence of LISP data objects, notated by (recursively) notating the objects in order, between a set of parentheses and separated by blank space. A list of the atoms "FOO", "43", and "BAR" would be written "(FOO 43 BAR)". Notice that the definition of a list is recursive. For example,

(DEFINE SECOND (LAMBDA (X) (CAR (CDR X))))

is a list of three things: the atomic symbol DEFINE, the atomic symbol SECOND, and another list of three things: LAMBDA, (X), and (CAR (CDR X)).

A convenient way to use lists to represent algebraic expressions is to use "Cambridge Polish" notation, essentially a parenthesized version of prefix Polish notation. Numeric constants are encoded as numeric atoms; variables are encoded as nonnumeric atoms (which henceforth we will call symbols); and procedure invocations (function calls) are encoded as lists, where the first element of the list represents the procedure and the rest represent the arguments. For example, the algebraic expression "a * b + c * d" can be represented as "(+ (* a b) (* c d))". Notice that LISP does not need the usual precedence rules concerning whether multiplication or addition is performed first; the parentheses (or, rather,
the structure of the lists) explicitly define the order. Also, all procedure invocations have a uniform syntax, no matter how many arguments are involved. Infix, superscript, and subscript notations are not used; thus, the expression "Jp(x^2 + 1)" would be written "(J p (+ (↑ x 2) 1))".

To encode a conditional expression "if p then x else y" we write:

(IF p x y)

Expressions are made into procedures (functions) by the use of Church's lambda-notation [5]. For example,

(LAMBDA (X Y) (+ (* 3 Y) X))

evaluates to a function of two arguments X and Y which computes 3 * Y + X. The list of names of variables after the LAMBDA indicates how the names of variables in the expression are to be matched positionally to supplied arguments when the function is applied.

We can also encode recursive LISP programs as list data. For example, to compute N factorial (N!):

(DEFINE FACTORIAL
  (LAMBDA (N)
    (IF (= N 0) 1
        (* N (FACTORIAL (- N 1))))))
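The program-as-data point can be made concrete with a small sketch. This is Python, not the paper's machine: Python lists stand in for LISP lists, strings for atomic symbols, and the evaluator `ev` (an invented name) handles only the forms this one example needs.

```python
# The FACTORIAL definition above, transcribed as nested Python lists --
# the program really is just list data that an interpreter walks.
FACTORIAL = ["LAMBDA", ["N"],
             ["IF", ["=", "N", 0],
              1,
              ["*", "N", ["FACTORIAL", ["-", "N", 1]]]]]

def ev(expr, env):
    """A bare-bones recursive evaluator for this one example."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):              # a symbol: look up its value
        return env[expr]
    op = expr[0]
    if op == "LAMBDA":                     # (LAMBDA (vars) body)
        return ("closure", expr[1], expr[2], env)
    if op == "IF":                         # (IF p x y)
        return ev(expr[2], env) if ev(expr[1], env) else ev(expr[3], env)
    if op == "=":
        return ev(expr[1], env) == ev(expr[2], env)
    if op == "*":
        return ev(expr[1], env) * ev(expr[2], env)
    if op == "-":
        return ev(expr[1], env) - ev(expr[2], env)
    # otherwise: a procedure invocation (fn arg ...)
    tag, params, body, defenv = ev(op, env)
    args = [ev(a, env) for a in expr[1:]]
    return ev(body, dict(defenv, **dict(zip(params, args))))

env = {}
env["FACTORIAL"] = ev(FACTORIAL, env)  # env is shared, so the recursion works
```

With this in place, applying the list-encoded program to an argument, `ev(["FACTORIAL", 5], env)`, runs the recursion defined purely as data.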
Suppose that we now want to write a LISP program which will take such a data structure and perform some useful operation on it, such as determining the value of an algebraic expression represented as a list structure. We need some procedures for categorizing, decomposing, and constructing LISP data.

The predicate ATOM, when applied to a LISP datum, produces true when given an atom and false otherwise. The empty list ("()") is considered to be an atom. The predicate NULL is true of only the empty list; its argument need not be a list, but may be any LISP datum. The predicate NUMBERP is true of numbers and false of symbols and lists. The predicate EQ, when applied to two symbols, is true if the two atomic symbols are identical. It is false when applied to two distinct symbols, or to a symbol and any other datum.

The decomposition operators for lists are traditionally called CAR and CDR for historical reasons. CAR extracts the first element of a list, while CDR produces a list containing all elements but the first. Because compositions of CAR and CDR are commonly used in LISP, an abbreviation is provided: all the Cs and Rs in the middle can be squeezed out. For example, "(CDR (CDR (CAR (CDR X))))" can be written as "(CDDADR X)".

The construction operator CONS, given any datum and a list, produces a new list whose car is the datum and whose cdr is the given list; that is, CONS adds a new element to the front of a list. The operator LIST can take any number of arguments (a special feature) and produces a list of its arguments.

Notice that CONS (and LIST) conceptually create new data structures. As far as the LISP programmer is concerned, new data objects are available in endless supply. They can be conveniently called forth to serve some immediate purpose and discarded when they are no longer of use. While creation is explicit, discarding is not; a data object simply disappears into limbo when the program throws away all references (direct or indirect) to that object.

The immense freedom this gives the programmer may be seen by an example taken from current experience. A sort of error message familiar to most programmers is "too many nested DO loops" or "more than 200 declared arrays" or "symbol table overflow". Such messages typically arise within compilers or assemblers which were written in languages requiring data tables to be preallocated to some fixed length. The author of a compiler, for example, might well guess, "No one will ever use more than, say, ten nested DO loops; I'll double that for good measure and make the nested DO loop table 20 long." Inevitably, someone eventually finds some reason to write 21 nested DO loops and finds that the compiler overflows its fixed table and issues an error message (or, worse yet, does not issue an error message!).

On the other hand, had the compiler writer made the table 100 long or 1,000 long, most of the time most of the memory space devoted to that table would be wasted. A compiler written in LISP would be much more likely to keep a linked list of records describing each DO loop. Such a list could be grown at any time by creating a new record on demand and adding it to the list. In this way, as many or as few records as needed could be accommodated.

Now one could certainly write a compiler in any language and provide such dynamic storage management with enough programming. The point is that LISP provides automatic storage management from the outset and encourages its use (in much the same way that Fortran provides floating-point numbers and encourages their use, even though the particular processor on which a Fortran program runs may or may not have floating-point hardware).

Using CAR, CDR, and CONS, we can now write some interesting programs in LISP to deal with LISP data. For example, we can write a program APPEND, which when given two lists produces their concatenation as a new list:

(DEFINE APPEND
  (LAMBDA (X Y)
    (IF (NULL X) Y
        (CONS (CAR X) (APPEND (CDR X) Y)))))
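The cons-cell discipline just described can be sketched in Python. This is a hypothetical model, not the paper's representation: each cell is a two-field record, the empty list is modeled as None, and `append` mirrors the recursive concatenation of two lists via CAR, CDR, and CONS.

```python
# A cons cell is a two-field linked record; a list is a chain of cells
# ending in the empty list, modeled here as None.
class Cons:
    __slots__ = ("car", "cdr")
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def cons(d, rest):          # add a new element to the front of a list
    return Cons(d, rest)

def car(c): return c.car    # first element
def cdr(c): return c.cdr    # all elements but the first
def null(c): return c is None

def append(x, y):
    # Concatenation allocates fresh cells for x's elements;
    # y is shared by reference, not copied.
    return y if null(x) else cons(car(x), append(cdr(x), y))

def to_py(c):               # inspection helper: chain -> Python list
    out = []
    while not null(c):
        out.append(car(c))
        c = cdr(c)
    return out
```

Note that `append` never mutates its arguments: the "endless supply" of fresh cells comes from the allocator, and cells no longer referenced simply become garbage, exactly the storage discipline the text describes.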
Because LISP programs are represented as LISP data structures, there is a difficulty with representing constants. For example, suppose we want to determine whether or not the value of the variable X is the symbol "FOO". We might try writing

(EQ X FOO)

This does not work. The occurrence of "FOO" does not