A study on fundamental principles of programming languages


By
tadanki.ramakrishna@yahoo.co.in
INDEX
1. Introduction
2. Procedural Paradigm
3. Functional Paradigm
4. Syntax and Semantics
5. Types and Structures
6. Logic and Techniques
7. Conclusion
1. Introduction about principles of programming languages
Programming languages are one of the most important and direct tools for the
construction of a computing system. In modern computing different languages are
routinely used for different levels of abstraction. Aspects related to writing an operating
system or a device driver are generally very different from those of writing high level
applications. Moreover, in some typical complex applications, different levels (and thus
languages) coexist and inter-operate, from the "core" logic written e.g. in C++ or Java,
to the level of scripting, or to the level of "gluing" different applications, usually defined
by an interpreted high-level language (e.g. Python). Like natural languages,
programming languages are a fundamental means of expression. Algorithms
implemented using different programming languages may exhibit very different
characteristics. These can be aesthetic, as higher-level languages tend to be
concise and expressive, or related to performance, as relatively lower-level
languages allow more direct management of memory and, in general, of the
performance of the generated code.

2. Procedural Paradigm
The introduction of the von Neumann architecture was a crucial step in the development of
electronic computers. The basic idea is that instructions can be encoded as data and
stored in the memory of the computer. The first consequence of this idea is that changing
and modifying the stored program is simple and efficient. In fact, changes can take place
at electronic speeds, a very different situation from earlier computers that were
programmed by plugging wires into panels.
The second, and ultimately more far-reaching, consequence is that computers can
process programs themselves, under program control. In particular, a computer can
translate a program from one notation to another. Thus the stored-program concept led
to the development of programming languages. The history of PLs, like the history of
any technology, is complex. There are advances and setbacks; ideas that enter the
mainstream and ideas that end up in a backwater; even ideas that are submerged for a
while and later surface in an unexpected place. With the benefit of hindsight, we can
identify several strands in the evolution of PLs. These strands are commonly called
"paradigms" and, in this course, we survey the paradigms separately although their
development was interleaved. Sources for this section include (Wexelblat 1981; Williams
and Campbell-Kelly 1989; Bergin and Gibson 1996).
Early Days
The first PLs evolved from machine code. The first programs used numbers to refer to
machine addresses. One of the first additions to programming notation was the use of
symbolic names rather than numbers to represent addresses. Briefly, it enables the
programmer to refer to any word in a programme by means of a label or tag attached to
it arbitrarily by the programmer, instead of by its address in the store. Thus, for example,
a number appearing in the calculation might be labeled 'a3'. The programmer could then
write simply 'A a3' to denote the operation of adding this number into the accumulator,
without having to specify just where the number is located in the machine. (Mutch and
Gill 1954) The key point in this quotation is the phrase "instead of by its address in the
store". Instead of writing
Location Order
100 A 104
101 A 2
102 T 104
103 H 24
104 C 50
105 T 104
the programmer would write
A a3
A 2
T a3
H 24
a3) C 50
T a3
systematically replacing the address 104 by the symbol a3 and omitting the explicit
addresses. This establishes the principle that a variable name stands for a memory
location, a principle that influenced the subsequent development of PLs and is now
known - perhaps inappropriately - as value semantics.
The importance of subroutines and subroutine libraries was recognized before high-
level programming languages had been developed, as the following quotation shows.
The following advantages arise from the use of such a library:
1. It simplifies the task of preparing problems for the machine;
2. It enables routines to be more readily understood by other users, as conventions
are standardized and the units of a routine are much larger, being subroutines
instead of individual orders;
3. Library subroutines may be used directly, without detailed coding and punching;
4. Library subroutines are known to be correct, thus greatly reducing the overall
chance of error in a complete routine, and making it much easier to locate errors.
. . . . Another difficulty arises from the fact that, although it is desirable to have
subroutines available to cover all possible requirements, it is also undesirable to allow
the size of the resulting library to increase unduly. However, a subroutine can be made
more versatile by the use of parameters associated with it, thus reducing the total size
of the library.
We may divide the parameters associated with subroutines into two classes.
EXTERNAL parameters, i.e. parameters which are fixed throughout the solution of a
problem and arise solely from the use of the library;
INTERNAL parameters, i.e. parameters which vary during the solution of the problem.
. . . . Subroutines may be divided into two types, which we have called OPEN and
CLOSED. An open subroutine is one which is included in the routine as it stands
whereas a closed subroutine is placed in an arbitrary position in the store and can be
called into use by any part of the main routine. (Wheeler 1951)
Machine code is a sequence of "orders" or "instructions" that the computer is expected to
execute. The style of programming that this viewpoint developed became known as the
"imperative" or "procedural" programming paradigm. In these notes, we use the term
"procedural" rather than "imperative" because programs resemble "procedures" (in the
English, non-technical sense) or recipes rather than "commands". Confusingly, the
individual steps of procedural PLs, such as Pascal and C, are often called "statements",
although in logic a "statement" is a sentence that is either true or false.
By default, the commands of a procedural program are executed sequentially. Procedural
PLs provide various ways of escaping from the sequence. The earliest mechanisms were
the "jump" command, which transferred control to another part of the program, and the
"jump and store link" command, which transferred control but also stored a "link" to
which control would be returned after executing a subroutine. The data structures of
these early languages were usually rather simple: typically primitive values (integers and
floats) were provided, along with single- and multi-dimensioned arrays of primitive
values.
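The jump and jump-and-store-link mechanisms can be sketched on a toy accumulator machine. This is an illustrative model, not any real instruction set: the link register records where to resume after a subroutine finishes.

```python
def run(program):
    """Execute a list of (op, arg) pairs on a tiny accumulator machine."""
    acc, link, pc = 0, None, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":         # add a constant to the accumulator
            acc += arg
            pc += 1
        elif op == "JMP":       # plain jump: transfer control
            pc = arg
        elif op == "JSL":       # jump and store link: remember pc + 1
            link = pc + 1
            pc = arg
        elif op == "RET":       # return through the stored link
            pc = link
        elif op == "HALT":
            break
    return acc

# A subroutine at address 4 adds 10; the main routine calls it once.
prog = [
    ("ADD", 1),     # 0
    ("JSL", 4),     # 1  call the subroutine; link = 2
    ("HALT", None), # 2
    ("ADD", 99),    # 3  never executed
    ("ADD", 10),    # 4  subroutine body
    ("RET", None),  # 5  jump back through the link
]
print(run(prog))    # → 11
```

Note that a single link register allows only one subroutine activation at a time, which foreshadows the FORTRAN recursion restriction discussed later.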
FORTRAN
FORTRAN was introduced in 1957 at IBM by a team led by John Backus. The "Preliminary
Report" describes the goal of the FORTRAN project: The IBM Mathematical Formula
Translation System or briefly, FORTRAN, will comprise a large set of programs to enable
the IBM 704 to accept a concise formulation of a problem in terms of a mathematical
notation and to produce automatically a high-speed 704 program for the solution of the
problem. (Quoted in (Sammet 1969).)
This suggests that the IBM team's goal was to eliminate programming! The following
quotation seems to confirm this: If it were possible for the 704 to code problems for itself
and produce as good programs as human coders (but without the errors), it was clear
that large benefits could be achieved. (Backus 1957) It is interesting to note that, 20
years later, Backus (1978) criticized FORTRAN and similar languages as "lacking useful
mathematical properties". He saw the assignment statement as a source of inefficiency:
"the von Neumann bottleneck". The solution, however, was very similar to the solution he
advocated in 1957 - programming must become more like mathematics: "we should be
focusing on the form and content of the overall result". Although FORTRAN did not
eliminate programming, it was a major step towards the elimination of assembly
language coding. The designers focused on efficient implementation rather than elegant
language design, knowing that acceptance depended on the high performance of
compiled programs.
FORTRAN has value semantics. Variable names stand for memory addresses that are
determined when the program is loaded. The major achievements of FORTRAN are:
. efficient compilation;
. separate compilation (programs can be presented to the compiler as separate
subroutines, but the compiler does not check for consistency between components);
. demonstration that high-level programming, with automatic translation to machine
code, is feasible. The principal limitations of FORTRAN are:
Flat uniform structure. There is no concept of nesting in FORTRAN. A program
consists of a sequence of subroutines and a main program. Variables are either global or
local to subroutines. In other words, FORTRAN programs are rather similar to assembly
language programs: the main difference is that a typical line of FORTRAN describes
evaluating an expression and storing its value in memory whereas a typical line of
assembly language specifies a machine instruction (or a small group of instructions in the
case of a macro).
Limited control structures. The control structures of FORTRAN are IF, DO, and GOTO.
Since there are no compound statements, labels provide the only indication that a
sequence of statements forms a group.
Unsafe memory allocation. FORTRAN borrows the concept of COMMON storage from
assembly language programs. This enables different parts of a program to share regions
of memory, but the compiler does not check for consistent usage of these regions. One
program component might use a region of memory to store an array of integers, and
another might assume that the same region contains reals. To conserve precious
memory, FORTRAN also provides the EQUIVALENCE statement, which allows variables
with different names and types to share a region of memory.
No recursion. FORTRAN allocates all data, including the parameters and local variables
of subroutines, statically. Recursion is forbidden because only one instance of a
subroutine can be active at one time.
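Why static allocation forbids recursion can be sketched in Python. This is a hypothetical model, not real FORTRAN semantics: each subroutine gets exactly one fixed storage cell for its locals, so a recursive call overwrites the caller's data.

```python
static_frame = {"n": None}   # the single, fixed frame for fact_static

def fact_static(n):
    static_frame["n"] = n    # store the argument in the static frame
    if static_frame["n"] <= 1:
        return 1
    r = fact_static(static_frame["n"] - 1)
    # On return, the frame holds the innermost activation's n, not
    # this activation's: every level multiplies by the wrong value.
    return static_frame["n"] * r

def fact_stack(n):
    # With one frame per activation (what stacked ARs later provide),
    # recursion works as expected.
    return 1 if n <= 1 else n * fact_stack(n - 1)

print(fact_static(5))   # → 1, not 120: the static frame was clobbered
print(fact_stack(5))    # → 120
```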
Algol 60
During the late fifties, most of the development of PLs was coming from industry. IBM
dominated, with COBOL, FORTRAN, and FLPL (FORTRAN List Processing Language), all
designed for the IBM 704. Algol 60 (Naur et al. 1960; Naur 1978; Perlis 1978) was
designed by an international committee, partly to provide a PL that was independent of
any particular company and its computers. The committee included both John Backus
(chief designer of FORTRAN) and John McCarthy (designer of LISP). The goal was a
"universal programming language". In one sense, Algol was a failure: few complete,
high-quality compilers were written and the language was not widely used (although it
was used more in Europe than in North America). In another sense, Algol was a huge
success: it became the
Listing 3: An Algol Block
begin
integer x;
begin
function f(x) begin ... end;
integer x;
real y;
x := 2;
y := 3.14159;
end;
x := 1;
end
standard language for describing algorithms. For the better part of 30 years, the ACM
required submissions to the algorithm collection to be written in Algol. The major
innovations of Algol are discussed below.
Block Structure. Algol programs are recursively structured. A program is a block. A
block consists of declarations and statements. There are various kinds of statement; in
particular, one kind of statement is a block. A variable or function name declared in a
block can be accessed only within the block: thus Algol introduced nested scopes. The
recursive structure of programs means that large programs can be constructed from
small programs. In the Algol block shown in Listing 3, the two assignments to x refer to
two different variables.
The run-time entity corresponding to a block is called an activation record (AR). The AR
is created on entry to the block and destroyed after the statements of the block have
been executed. The syntax of Algol ensures that blocks are fully nested; this in turn
means that ARs can be allocated on a stack. Block structure and stacked ARs have been
incorporated into almost every language since Algol.
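Block structure and the stack of activation records can be sketched as follows. This is a minimal model: entering a block pushes an AR, leaving pops it, and a name refers to its declaration in the innermost enclosing block.

```python
stack = []                      # stack of ARs, innermost last

def enter_block(**declarations):
    stack.append(dict(declarations))

def leave_block():
    stack.pop()

def lookup(name):
    for ar in reversed(stack):  # search from innermost block outward
        if name in ar:
            return ar[name]
    raise NameError(name)

def assign(name, value):
    for ar in reversed(stack):
        if name in ar:
            ar[name] = value
            return
    raise NameError(name)

# Mirror of the Algol block in Listing 3: two distinct variables x.
enter_block(x=None)             # outer block declares x
enter_block(x=None, y=None)     # inner block redeclares x
assign("x", 2)                  # the inner x
assign("y", 3.14159)
inner_x = lookup("x")           # → 2
leave_block()                   # inner AR destroyed
assign("x", 1)                  # the outer x, untouched by the inner block
outer_x = lookup("x")           # → 1
print(inner_x, outer_x)
```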
Dynamic Arrays. The designers of Algol realized that it was relatively simple to allow
the size of an array to be determined at run-time. The compiler statically allocates space
for a pointer and an integer (collectively called a "dope vector") on the stack. At run-
time, when the size of the array is known, the appropriate amount of space is allocated
on the stack and the components of the "dope vector" are initialized. The following code
works fine in Algol 60.
procedure average (n); integer n;
begin
real array a[1:n];
. . . .
end;
Despite the simplicity of the implementation, successor PLs such as C and Pascal dropped
this useful feature.
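The dope-vector idea can be sketched as a small descriptor class: the bounds become known only at run-time, storage is allocated then, and every index is mapped through the stored origin. This is an illustration of the mechanism, not a faithful Algol implementation.

```python
class DopeVector:
    """Descriptor (origin, bounds) plus run-time-allocated storage."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # bounds known only at run-time
        self.data = [0.0] * (hi - lo + 1)  # storage allocated now

    def __getitem__(self, i):
        if not self.lo <= i <= self.hi:
            raise IndexError(i)
        return self.data[i - self.lo]      # map the index via the origin

    def __setitem__(self, i, v):
        if not self.lo <= i <= self.hi:
            raise IndexError(i)
        self.data[i - self.lo] = v

def average(n):
    a = DopeVector(1, n)                   # like: real array a[1:n]
    for i in range(1, n + 1):
        a[i] = float(i)
    return sum(a[i] for i in range(1, n + 1)) / n

print(average(4))   # → 2.5
```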
Call By Name. The default method of passing parameters in Algol was "call by name"
and it was described by a rather complicated "copy rule". The essence of the copy rule is
that the program behaves as if the text of the formal parameter in the function is
replaced by the text of the actual parameter. The complications arise because it may be
necessary to rename some of the variables during the textual substitution. The usual
implementation strategy was to translate the actual parameter into a procedure with no
arguments (called a "thunk");
Listing 4: Call by name
procedure count (n); integer n;
begin
n := n + 1
end
Listing 5: A General Sum Function
integer procedure sum (max, i, val); integer max, i, val;
begin
integer s;
s := 0;
for i := 1 until max do
s := s + val;
sum := s
end
each occurrence of the formal parameter inside the function was translated into a call to
this function. The mechanism seems strange today because few modern languages use
it. However, the Algol committee had several valid reasons for introducing it.
Call by name enables procedures to alter their actual parameters. If the procedure count
is defined as in Listing 4, the statement
count(widgets)
has the same effect as the statement
begin
widgets := widgets + 1
end
The other parameter passing mechanism provided by Algol, call by value, does not allow
a procedure to alter the value of its actual parameters in the calling environment: the
parameter behaves like an initialized local variable. Call by name also provides control
structure abstraction. The procedure in Listing 5 provides a form of abstraction of a for
loop. The first parameter specifies the number of iterations, the second is the loop index,
and the third is the loop body. The statement
sum(3, i, a[i])
computes a[1]+a[2]+a[3].
. Call by name evaluates the actual parameter exactly as often as it is accessed. (This is
in contrast with call by value, where the parameter is usually evaluated exactly once, on
entry to the procedure.) For example, if we declare the procedure try as in Listing 6,
it is safe to call try(x > 0, 1.0/x), because, if x ≤ 0, the expression 1.0/x will not be
evaluated.
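The thunk implementation of call by name described above can be sketched in Python: each actual parameter is wrapped in a zero-argument function, and every use of the formal parameter calls it. The helper names here are illustrative; the first example reproduces the sum function of Listing 5 (the idiom known as Jensen's device), the second the try procedure of Listing 6.

```python
def sum_by_name(max_, set_i, val):
    # max_ and val are read-thunks; set_i is a write-thunk, needed
    # because a call-by-name formal parameter can also be assigned to.
    s = 0
    for k in range(1, max_() + 1):
        set_i(k)        # assigning to the formal parameter i
        s += val()      # re-evaluates a[i] with the current i
    return s

a = {1: 10, 2: 20, 3: 30}
env = {"i": 0}
total = sum_by_name(lambda: 3,
                    lambda k: env.__setitem__("i", k),
                    lambda: a[env["i"]])
print(total)    # → 60, i.e. a[1] + a[2] + a[3]

def try_(b, x):
    # Listing 6: the thunk x is evaluated only if b() is true.
    return x() if b() else 0.0

x = 0
print(try_(lambda: x > 0, lambda: 1.0 / x))   # → 0.0, no division by zero
```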
Own Variables. A variable in an Algol procedure can be declared own. The effect is that
the variable has local scope (it can be accessed only by the statements within the
procedure) but global extent (its lifetime is the execution of the entire program).
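The combination of local scope and global extent can be sketched with a Python closure: the counter variable is visible only inside the procedure, yet its value survives between calls.

```python
def make_counter():
    count = 0               # the "own" variable: initialized once
    def next_value():
        nonlocal count
        count += 1          # persists across calls to next_value
        return count
    return next_value

counter = make_counter()
print(counter(), counter(), counter())   # → 1 2 3
```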
Listing 6: Using call by name
real procedure try (b, x); boolean b; real x;
begin
try := if b then x else 0.0
end
Algol 60 and most of its successors, like FORTRAN, have value semantics. A variable
name stands for a memory address that is determined when the block containing the
variable declaration is entered at run-time. With hindsight, we can see that Algol made
important contributions but also missed some very interesting opportunities.
. An Algol block without statements is, in effect, a record. Yet Algol 60 does not provide
records.
. The local data of an AR is destroyed after the statements of the AR have been executed.
If the data was retained rather than destroyed, Algol would be a language with modules.
. An Algol block consists of declarations followed by statements. Suppose that
declarations and statements could be interleaved in a block. In the following block, D
denotes a sequence of declarations and S denotes a sequence of statements.
begin
D1
S1
D2
S2
end
A natural interpretation would be that S1 and S2 are executed concurrently.
. Own variables were in fact rather problematic in Algol, for various reasons including the
difficulty of reliably initializing them. But the concept was important: it is the separation
of scope and extent that ultimately leads to objects.
. The call by name mechanism was a first step towards the important idea that functions
can be treated as values. The actual parameter in an Algol call, assuming the default
calling mechanism, is actually a parameterless procedure, as mentioned above. Applying
this idea consistently throughout the language would have led to high order functions
and paved the way to functional programming.
The Algol committee knew what they were doing, however. They knew that incorporating
the "missed opportunities" described above would have led to significant implementation
problems. In particular, since they believed that the stack discipline obtained with nested
blocks was crucial for efficiency, anything that jeopardized it was not acceptable. Algol 60
was simple and powerful, but not quite powerful enough.
COBOL
COBOL (Sammet 1978) introduced structured data and implicit type conversion. When
COBOL was introduced, "programming" was more or less synonymous with "numerical
computation". COBOL introduced "data processing", where data meant large numbers of
characters. The data division of a COBOL program contained descriptions of the data to
be processed. Another important innovation of COBOL was a new approach to data types.
The problem of type conversion had not arisen previously because only a small number
of types were provided by the PL. COBOL introduced many new types, in the sense that
data could have various degrees of precision, and different representations as text. The
choice made by the designers of COBOL was radical: type conversion should be
automatic. The assignment statement in COBOL has several forms, including
MOVE X TO Y.
If X and Y have different types, the COBOL compiler will attempt to find a conversion
from one type to the other. In most PLs of the time, a single statement translated into a
small number of machine instructions. In COBOL, a single statement could generate a
large amount of machine code.
PL/I
During the early 60s, the dominant languages were Algol, COBOL, FORTRAN. The
continuing desire for a "universal language that would be applicable to a wide variety of
problem domains led IBM to propose a new programming language (originally called NPL
but changed, after objections from the UK's National Physical Laboratory, to PL/I) that
would combine the best features of these three languages. Insiders at the time referred
to the new language as "CobAlgoltran". The design principles of PL/I (Radin 1978)
included:
. the language should contain the features necessary for all kinds of programming;
. a programmer could learn a subset of the language, suitable for a particular application,
without having to learn the entire language.
An important lesson of PL/I is that these design goals are doomed to failure. A
programmer who has learned a "subset" of PL/I is likely, like all programmers, to make a
mistake. With luck, the compiler will detect the error and provide a diagnostic message
that is incomprehensible to the programmer because it refers to a part of the language
outside the learned subset. More probably, the compiler will not detect the error and the
program will behave in a way that is inexplicable to the programmer, again because it is
outside the learned subset.
PL/I extends the automatic type conversion facilities of COBOL to an extreme degree. For
example,
the expression (Gelernter and Jagannathan 1990)
(57 || 8) + 17
is evaluated as follows:
1. Convert the integer 8 to the string '8'.
2. Concatenate the strings '57' and '8', obtaining '578'.
3. Convert the string '578' to the integer 578.
4. Add 17 to 578, obtaining 595.
5. Convert the integer 595 to the string '595'.
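The coercion sequence above can be sketched with two hypothetical helpers: `||` coerces both operands to strings before concatenating, `+` coerces both to integers before adding, and the result context here is a string.

```python
def pl1_concat(a, b):
    return str(a) + str(b)    # coerce both operands to strings

def pl1_add(a, b):
    return int(a) + int(b)    # coerce both operands to integers

# (57 || 8) + 17, assigned to a string variable:
result = str(pl1_add(pl1_concat(57, 8), 17))
print(result)   # → '595'
```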
The compiler's policy, on encountering an assignment x = E, might be paraphrased as:
"Do everything possible to compile this statement; as far as possible, avoid issuing any
diagnostic message that would tell the programmer what is happening." PL/I did
introduce some important new features into PLs. They were not all well-designed, but
their existence encouraged others to produce better designs.
. Every variable has a storage class: static, automatic, based, or controlled. Some of
these were later incorporated into C.
. An object associated with a based variable x requires explicit allocation and is placed on
the heap rather than the stack. Since we can execute the statement allocate x as often
as necessary, based variables provide a form of template.
. PL/I provides a wide range of programmer-defined types. Types, however, could not be
named.
. PL/I provided a simple, and not very safe, form of exception handling. Statements of
the following form are allowed anywhere in the program:
ON condition
BEGIN;
. . . .
END;
If the condition (which might be OVERFLOW, PRINTER OUT OF PAPER, etc.) becomes
TRUE, control is transferred to whichever ON statement for that condition was most
recently executed. After the statements between BEGIN and END (the handler) have
been executed, control returns to the statement that raised the exception or, if the
handler contains a GOTO statement, to the target of that statement.
Algol 68
Whereas Algol 60 is a simple and expressive language, its successor Algol 68 (van
Wijngaarden et al. 1975; Lindsey 1996) is much more complex. The main design
principle of Algol 68 was orthogonality: the language was to be defined using a number
of basic concepts that could be combined in arbitrary ways. Although it is true that lack
of orthogonality can be a nuisance in PLs, it does not necessarily follow that orthogonality
is always a good thing.
The important features introduced by Algol 68 include the following.
. The language was described in a formal notation that specified the complete syntax and
semantics of the language (van Wijngaarden et al. 1975). The fact that the Report was
very hard to understand may have contributed to the slow acceptance of the language.
. Operator overloading: programmers can provide new definitions for standard operators
such as "+". Even the priority of these operators can be altered.
. Algol 68 has a very uniform notation for declarations and other entities. For example,
Algol 68 uses the same syntax (mode name = expression) for types, constants,
variables, and functions. This implies that, for all these entities, there must be forms of
expression that yield appropriate values.
. In a collateral clause of the form (x, y, z), the expressions x, y, and z can be
evaluated in any order, or concurrently. In a function call f(x, y, z), the argument list is a
collateral clause. Collateral clauses provide a good, and early, example of the idea that a
PL specification should intentionally leave some implementation details undefined. In this
example, the Algol 68 report does not specify the order of evaluation of the expressions
in a collateral clause. This gives the implementor freedom to use any order of evaluation
and hence, perhaps, to optimize.
. The operator ref stands for "reference" and means, roughly, "use the address rather
than the value". This single keyword introduces call by reference, pointers, dynamic data
structures, and other features to the language. It appears in C in the form of the
operators "*" and "&".
. A large vocabulary of PL terms, some of which have become part of the culture (cast,
coercion, narrowing, . . . .) and some of which have not (mode, weak context, voiding,. .
. .).
Like Algol 60, Algol 68 was not widely used, although it was popular for a while in various
parts of Europe. The ideas that Algol 68 introduced, however, have been widely imitated.
Pascal
Pascal was designed by Wirth (1996) as a reaction to the complexity of Algol 68, PL/I,
and other languages that were becoming popular in the late 60s. Wirth made extensive
use of the ideas of Dijkstra and Hoare (later published as (Dahl, Dijkstra, and Hoare
1972)), especially Hoare's ideas of data structuring. The important contributions of Pascal
included the following.
. Pascal demonstrated that a PL could be simple yet powerful.
. The type system of Pascal was based on primitives (integer, real, bool, . . . .) and
mechanisms for building structured types (array, record, file, set, . . . .). Thus data types
in Pascal form a recursive hierarchy just as blocks do in Algol 60.
. Pascal provides no implicit type conversions other than subrange to integer and integer
to real. All other type conversions are explicit (even when no action is required) and the
compiler checks type correctness.
. Pascal was designed to match Wirth's (1971) ideas of program development by
stepwise refinement. Pascal is a kind of "fill in the blanks" language in which all programs
have a similar structure, determined by the relatively strict syntax. Programmers are
expected to start with a complete but skeletal "program" and flesh it out in a series of
refinement steps, each of which makes certain decisions and adds new details. The
monolithic structure that this idea imposes on programs is a drawback of Pascal because
it prevents independent compilation of components.
Pascal was a failure because it was too simple. Because of the perceived missing
features, supersets were developed and, inevitably, these became incompatible. The first
version of "Standard Pascal" was almost useless as a practical programming language
and the Revised Standard described a usable language but appeared only after most
people had lost interest in Pascal. Like Algol 60, Pascal missed important opportunities.
The record type was a useful innovation (although very similar to the Algol 68 struct) but
allowed data only. Allowing functions in a record declaration would have paved the way to
modular and even object oriented programming. Nevertheless, Pascal had a strong
influence on many later languages. Its most important innovations were probably the
combination of simplicity, data type declarations, and static type checking.
Modula-2
Wirth (1982) followed Pascal with Modula-2, which inherits Pascal's strengths and, to
some extent, removes Pascal's weaknesses. The important contribution of Modula-2 was,
of course, the introduction of modules. (Wirth's first design, Modula, was never
completed. Modula-2 was the product of a sabbatical year in California, where Wirth
worked with the designers of Mesa, another early modular language.)
A module in Modula-2 has an interface and an implementation. The interface provides
information about the use of the module to both the programmer and the compiler. The
implementation contains the "secret" information about the module. This design has the
unfortunate consequence that some information that should be secret must be put into
the interface. For example, the compiler must know the size of the object in order to
declare an instance of it. This implies that the size must be deducible from the interface
which implies, in turn, that the interface must contain the representation of the object.
(The same problem appears again in C++.)
Modula-2 provides a limited escape from this dilemma: a programmer can define an
"opaque" type with a hidden representation. In this case, the interface contains only a
pointer to the instance and the representation can be placed in the implementation
module. The important features of Modula-2 are:
. Modules with separated interface and implementation descriptions (based on Mesa).
. Coroutines.
C
C is a very pragmatic PL. Ritchie (Ritchie 1996) designed it for a particular task -
systems programming - for which it has been widely used. The enormous success of C is
partly accidental. UNIX, after Bell released it to universities, became popular, with good
reason. Since UNIX depended heavily on C, the spread of UNIX inevitably led to the
spread of C. C is based on a small number of primitive concepts. For example, arrays are
defined in terms of pointers and pointer arithmetic. This is both the strength and
weakness of C. The number of concepts is small, but C does not provide real support for
arrays, strings, or boolean operations. C is a low-level language by comparison with the
other PLs discussed in this section. It is designed to be easy to compile and to produce
efficient object code. The compiler is assumed to be rather unsophisticated (a reasonable
assumption for a compiler running on a PDP-11 in the late sixties) and in need of hints
such as register. C is notable for its concise syntax. Some syntactic features are inherited
from Algol 68 (for example, += and other assignment operators) and others are unique
to C and C++ (for example, postfix and prefix ++ and --).
Ada
Ada (Whitaker 1996) represents the last major effort in procedural language design. It is
a large and complex language that combines then-known programming features with
little attempt at consolidation. It was the first widely-used language to provide full
support for concurrency, with interactions checked by the compiler, but this aspect of the
language proved hard to implement.
Ada provides templates for procedures, record types, generic packages, and task types.
The corresponding objects are: blocks and records (representable in the language); and
packages and tasks (not representable in the language). It is not clear why four distinct
mechanisms are required (Gelernter and Jagannathan 1990).
The syntactic differences suggest that the designers did not look for similarities between
these constructs. A procedure definition looks like this:
procedure procname ( parameters ) is
body
A record type looks like this:
type recordtype ( parameters ) is
body
The parameters of a record type are optional. If present, they have a different form than
the parameters of procedures.
A generic package looks like this:
generic ( parameters ) package packagename is
package description
The parameters can be types or values. For example, the template
generic
max: integer;
type element is private;
package Stack is
. . . .
might be instantiated by a declaration such as
package intStack is new Stack(20, integer)
Finally, a task template looks like this (no parameters are allowed):
task type templatename is
task description
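The generic Stack package above can be sketched in Python: the generic parameters (a size and an element type) become ordinary arguments to a factory that builds the instantiated package. The names here are illustrative, not part of any Ada mapping.

```python
def make_stack(max_size, element_type):
    """Analogue of instantiating the generic Stack package."""
    items = []
    def push(x):
        if not isinstance(x, element_type):
            raise TypeError("expected " + element_type.__name__)
        if len(items) >= max_size:
            raise OverflowError("stack full")
        items.append(x)
    def pop():
        return items.pop()
    return push, pop

# Analogue of: package intStack is new Stack(20, integer)
push, pop = make_stack(20, int)
push(1)
push(2)
print(pop())   # → 2
```

Seen this way, a procedure template, a record template, and a generic package are all parameterized definitions, which is exactly the similarity the Ada designers did not exploit.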
Of course, programmers hardly notice syntactic differences of this kind: they learn the
correct incantation and recite it without thinking. But it is disturbing that the language
designers apparently did not consider possible relationships between these four kinds of
declaration. Changing the syntax would be a minor improvement, but uncovering deep
semantic similarities might have a significant impact on the language as a whole, just as
the identity declaration of Algol 68 suggested new and interesting possibilities.
3. Functional Paradigm
Procedural programming is based on instructions ("do something") but, inevitably,
procedural PLs also provide expressions ("calculate something"). The key insight of
functional programming (FP) is that everything can be done with expressions: the
commands are unnecessary.
This point of view has a solid foundation in theory. Turing (1936) introduced an abstract
model of "programming", now known as the Turing machine. Kleene (1936) and Church
(1941) introduced the theory of recursive functions. The two theories were later shown
(by Kleene) to be equivalent: each had the same computational power. Other theories,
such as Post production systems, were shown to have the same power. This important
theoretical result shows that FP is not a complete waste of time but it does not tell us
whether FP is useful or practical. To decide that, we must look at the functional
programming languages (FPLs) that have actually been implemented.
Most functional languages support higher order functions. Roughly, a higher order function
is a function that takes another function as a parameter or returns a function. More
precisely:
. A zeroth order expression contains only variables and constants.
. A first order expression may also contain function invocations, but the results and
parameters of functions are variables and constants (that is, zeroth order expressions).
. In general, in an n-th order expression, the results and parameters of functions are
(n-1)-th order expressions.
. A higher order expression is an n-th order expression with n >= 2.
The same conventions apply in logic, with "function" replaced by "function or predicate".
In first-order logic, quantifiers can bind variables only; in a higher order logic, quantifiers
can bind predicates.
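The order hierarchy can be made concrete with a short sketch. Python is used here purely for illustration; the function names (twice, succ) are mine, not from the text.

```python
# twice is higher order: it takes a function and returns a new function.
def twice(f):
    return lambda x: f(f(x))

# succ is first order: its parameter and result are plain values.
def succ(n):
    return n + 1

add_two = twice(succ)   # a function built from a function
print(add_two(5))       # 7
```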
LISP
Anyone could learn LISP in one day, except that if they already knew FORTRAN, it
would take three days. - Marvin Minsky
Functional programming was introduced in 1958 in the form of LISP by John McCarthy.
The following account of the development of LISP is based on McCarthy's (1978) history.
The important early decisions in the design of LISP were:
. to provide list processing (which already existed in languages such as Information
Processing Language (IPL) and FORTRAN List Processing Language (FLPL));
. to use a prefix notation (emphasizing the operator rather than the operands of an
expression);
. to use the concept of "function" as widely as possible (cons for list construction; car and
cdr for extracting list components; cond for conditional, etc.);
. to provide higher order functions and hence a notation for functions (based on Church's
(1941) λ-notation);
. to avoid the need for explicit erasure of unused list structures.
McCarthy (1960) wanted a language with a solid mathematical foundation and decided
that recursive function theory was more appropriate for this purpose than the then-
popular Turing machine model. He considered it important that LISP expressions should
obey the usual mathematical laws allowing replacement of expressions and:
Another way to show that LISP was neater than Turing machines was to write a universal
LISP function and show that it is briefer and more comprehensible than the description of
a universal Turing machine. This was the LISP function eval[e, a], which computes the
value of a LISP expression e, the second argument a being a list of assignments of values
to variables. . . . Writing eval required inventing a notation for representing LISP
functions as LISP data, and such a notation was devised for the purpose of the paper
with no thought that it would be used to express LISP programs in practice. (McCarthy
1978)
After the paper was written, McCarthy's graduate student S. R. Russell noticed that eval
could be used as an interpreter for LISP and hand-coded it, thereby producing the first
LISP interpreter. Soon afterwards, Timothy Hart and Michael Levin wrote a LISP compiler
in LISP; this is probably the first instance of a compiler written in the language that it
compiled.
The function application f(x, y) is written in LISP as (f x y). The function name always
comes first: a + b is written in LISP as (+ a b). All expressions are enclosed in
parentheses and can be nested to arbitrary depth.
There is a simple relationship between the text of an expression and its representation in
memory. An atom is a simple object such as a name or a number. A list is a data
structure composed of cons-cells (so called because they are constructed by the function
cons); each cons-cell has two pointers and each pointer points either to another cons-cell
or to an atom. Figure 1 shows the list structure corresponding to the expression (A (B
C)). Each box represents a cons-cell. There are two lists, each with two elements, and
each terminated with NIL. The diagram is simplified in that the atoms A, B, C, and NIL
would themselves be list structures in an actual LISP system.
The function cons constructs a list from its head and tail: (cons head tail). The value of
(car list) is the head of the list and the value of (cdr list) is the tail of the list. Thus:
(car (cons head tail)) = head
(cdr (cons head tail)) = tail
The names car and cdr originated in IBM 704 hardware; they are abbreviations for
"contents of address register" (the top 18 bits of a 36-bit word) and "contents of
decrement register" (the bottom 18 bits). It is easy to translate between list expressions
and the corresponding data structures. There is a function eval (mentioned in the
quotation above) that evaluates a stored list expression. Consequently, it is
straightforward to build languages and systems "on top of" LISP and LISP is often used in
this way.
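A cons-cell can be sketched in a few lines. This is an illustrative model, not an actual LISP implementation: the names cons, car, and cdr mirror LISP, but the tuple representation is my own.

```python
# A cons-cell is modelled as a two-slot pair; NIL terminates a list.
NIL = None

def cons(head, tail):
    return (head, tail)

def car(cell):
    return cell[0]

def cdr(cell):
    return cell[1]

# The list (A (B C)) from Figure 1: two cons chains, each ending in NIL.
inner = cons("B", cons("C", NIL))
lst = cons("A", cons(inner, NIL))

assert car(cons("h", "t")) == "h"   # (car (cons head tail)) = head
assert cdr(cons("h", "t")) == "t"   # (cdr (cons head tail)) = tail
assert car(car(cdr(lst))) == "B"
```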
It is interesting to note that the close relationship between code and data in LISP mimics
the von Neumann architecture at a higher level of abstraction. LISP was the first in a long
line of functional programming (FP) languages. Its principal contributions are listed
below.
Names. In procedural PLs, a name denotes a storage location (value semantics). In LISP,
a name is a reference to an object, not a location (reference semantics). In the Algol
sequence
int n;
n := 2;
n := 3;
the declaration int n; assigns a name to a location, or "box", that can contain an integer.
The next two statements put different values, first 2 then 3, into that box. In the LISP
sequence
(progn
(setq x (car structure))
(setq x (cdr structure)))
x becomes a reference first to (car structure) and then to (cdr structure). The two
objects have different memory addresses. A consequence of the use of names as
references to objects is that eventually there will be objects for which there are no
references: these objects are "garbage" and must be automatically reclaimed if the
interpreter is not to run out of memory. The alternative - requiring the programmer to
explicitly deallocate old cells - would add considerable complexity to the task of writing
LISP programs. Nevertheless, the decision to include automatic garbage collection (in
1958!) was courageous and influential. A PL in which variable names are references to
objects in memory is said to have reference semantics. All FPLs and most OOPLs have
reference semantics. Note that reference semantics is not the same as "pointers" in
languages such as Pascal and C. A pointer variable stands for a location in memory and
therefore has value semantics; it just so happens that the location is used to store the
address of another object.
Lambda. LISP uses "lambda expressions", based on Church's λ-calculus, to denote
functions. For example, the function that squares its argument is written
(lambda (x) (* x x))
by analogy to Church's f = λx . x². We can apply a lambda expression to an argument to
obtain the value of a function application. For example, the expression
((lambda (x) (* x x)) 4)
yields the value 16.
However, the lambda expression itself cannot be evaluated. Consequently, LISP had to
resort to programming tricks to make higher order functions work. For example, if we
want to pass the squaring function as an argument to another function, we must wrap it
up in a "special form" called function:
(f (function (lambda (x) (* x x))) . . . .)
Similar complexities arise when a function returns another function as a result.
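The contrast is easy to see in a language where functions are ordinary values. In this Python sketch (names mine), no special form is needed to pass a lambda around.

```python
square = lambda x: x * x

# A lambda can be applied directly, as in ((lambda (x) (* x x)) 4):
print((lambda x: x * x)(4))   # 16

# ...and passed as an argument with no wrapper such as LISP's `function`:
def apply_to_three(f):
    return f(3)

assert apply_to_three(square) == 9
```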
Dynamic Scoping. Dynamic scoping was an "accidental" feature of LISP: it arose as a
side-effect of the implementation of the look-up table for variable values used by the
interpreter. The C-like program in Listing 7 illustrates the difference between static and
dynamic scoping.
Static and Dynamic Binding
int x = 4; // 1
void f()
{
printf("%d", x);
}
void main()
{
int x = 7; // 2
f();
}
In C, the variable x in the body of the function f is a use of the global variable x defined
in the first line of the program. Since the value of this variable is 4, the program prints 4.
(Do not confuse dynamic scoping with dynamic binding!)
A LISP interpreter constructs its environment as it interprets. The environment behaves
like a stack (last in, first out). The initial environment is empty, which we denote by ⟨⟩.
After interpreting the LISP equivalent of the line commented with "1", the environment
contains the global binding for x: ⟨x = 4⟩. When the interpreter evaluates the function
main, it inserts the local x into the environment, obtaining ⟨x = 7, x = 4⟩. The interpreter
then evaluates the call f(); when it encounters x in the body of f, it uses the first value of
x in the environment and prints 7.
Although dynamic scoping is natural for an interpreter, it is inefficient for a compiler.
Interpreters are slow anyway, and the overhead of searching a linear list for a variable
value just makes them slightly slower still. A compiler, however, has more efficient ways
of accessing variables, and forcing it to maintain a linear list would be unacceptably
inefficient. Consequently, early LISP systems had an unfortunate discrepancy: the
interpreters used dynamic scoping and the compilers used static scoping. Some
programs gave one answer when interpreted and another answer when compiled!
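The interpreter's behaviour can be mimicked with an explicit binding stack. This sketch (all names mine) shows why the dynamically scoped lookup prints 7 where C prints 4.

```python
# A dynamically scoped environment: a stack of (name, value) bindings
# searched from the most recent end, as the LISP interpreter did.
env = []

def bind(name, value):
    env.append((name, value))

def unbind():
    env.pop()

def lookup(name):
    for n, v in reversed(env):   # most recent binding wins
        if n == name:
            return v
    raise NameError(name)

bind("x", 4)              # the global x = 4

def f():
    return lookup("x")

def main():
    bind("x", 7)          # main's local x dynamically shadows the global
    try:
        return f()        # dynamic scoping: f sees main's x
    finally:
        unbind()

assert main() == 7        # a statically scoped language would give 4
assert f() == 4           # with only the global binding left, f sees 4
```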
Interpretation. LISP was the first major language to be interpreted. Originally, the LISP
interpreter behaved as a calculator: it evaluated expressions entered by the user, but its
internal state did not change. It was not long before a form for defining functions was
introduced (originally called define, later changed to defun) to enable users to add their
own functions to the list of built-in functions.
A LISP program has no real structure. On paper, a program is a list of function
definitions; the functions may invoke one another with either direct or indirect recursion.
At run-time, a program is the same list of functions, translated into internal form, added
to the interpreter.
The current dialect of LISP is called Common LISP (Steele et al. 1990). It is a much
larger and more complex language than the original LISP and includes many features of
Scheme (described below). Common LISP provides static scoping with dynamic scoping
as an option.
Basic LISP Functions
The principal - and only - data structure in classical LISP is the list. Lists are written (a
b c). Lists also represent programs: the list (f x y) stands for the function f applied to the
arguments x and y. The following functions are provided for lists.
cons builds a list, given its head and tail.
first (originally called car) returns the head (first component) of a list.
rest (originally called cdr) returns the tail of a list.
second (originally called cadr, short for car cdr) returns the second element of a list.
null is a predicate that returns true if its argument is the empty list.
atom is a predicate that returns true if its argument is not a list - and must therefore be
an "atom", that is, a variable name or other primitive object.
A form looks like a function but does not evaluate its argument. The important forms
are:
quote takes a single argument and does not evaluate it.
cond is the conditional construct. It has the form
(cond (p1 e1) (p2 e2) ... (t e))
and works like this: if p1 is true, return e1; if p2 is true, return e2; . . . ; else return e.
def is used to define functions:
(def name (lambda parameters body ))
A LISP Interpreter
The following interpreter is based on McCarthy's LISP 1.5 Evaluator from LISP 1.5
Programmer's Manual by McCarthy et al., MIT Press 1962.
An environment is a list of name/value pairs. The function pairs builds an environment
from a list of names and a list of expressions.
(def pairs (lambda (names exps env)
(cond
((null names) env)
(t (cons
(cons (first names) (first exps))
(pairs (rest names) (rest exps) env) )) ) ))
The function lookup finds the value of a name in a table. The table is represented as a list
of pairs.
(def lookup (lambda (name table)
(cond
((eq name (first (first table))) (rest (first table)))
(t (lookup name (rest table))) ) ))
The heart of the interpreter is the function eval which evaluates an expression exp in an
environment env.
(def eval (lambda (exp env)
(cond
((null exp) nil)
((atom exp) (lookup exp env))
((eq (first exp) (quote quote)) (second exp))
((eq (first exp) (quote cond)) (evcon (rest exp) env) )
(t (apply (first exp) (evlist (rest exp) env) env)) ) ))
The function evcon is used to evaluate cond forms; it takes a list of pairs of the form (cnd
exp) and returns the value of the expression corresponding to the first true cnd.
(def evcon (lambda (tests env)
(cond
((null tests) nil)
(t
(cond
((eval (first (first tests)) env) (eval (second (first tests)) env))
(t (evcon (rest tests) env)) ) ) ) ))
The function evlist evaluates a list of expressions in an environment. It is a
straightforward recursion.
(def evlist (lambda (exps env)
(cond
((null exps) nil)
(t (cons (eval (first exps) env)
(evlist (rest exps) env) )) ) ))
The function apply applies a function fun to arguments args in the environment env. The
cases it considers are:
Built-in functions: first, second, cons, and other built-in functions not considered here.
Any other function with an atomic name: this is assumed to be a user-defined function,
and lookup is used to find the lambda form corresponding to the name.
A function which is a lambda form.
(def apply (lambda (fun args env)
(cond
((eq fun (quote first)) (first (first args)))
((eq fun (quote second)) (second (first args)))
((eq fun (quote cons)) (cons (first args) (second args)))
((atom fun) (apply (eval fun env) args env))
((eq (first fun) (quote lambda))
(eval (third fun) (pairs (second fun) args env)) ) ) ))
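The interpreter above can be transcribed almost line for line into Python. This is a sketch with my own helper names (l_eval and l_apply, chosen to avoid Python's built-in eval), covering only the built-ins the example needs; environments are lists of pairs, so it inherits the original's dynamic binding.

```python
# Atoms are strings; lists are Python lists; an environment is a list of
# (name, value) pairs, searched front to back as in the LISP original.

def pairs(names, exps, env):
    return env if not names else [(names[0], exps[0])] + pairs(names[1:], exps[1:], env)

def lookup(name, table):
    return table[0][1] if table[0][0] == name else lookup(name, table[1:])

def is_atom(e):
    return not isinstance(e, list)

def l_eval(exp, env):
    if exp is None:
        return None
    if is_atom(exp):
        return lookup(exp, env)
    if exp[0] == "quote":
        return exp[1]
    if exp[0] == "cond":
        return evcon(exp[1:], env)
    return l_apply(exp[0], evlist(exp[1:], env), env)

def evcon(tests, env):
    if not tests:
        return None
    if l_eval(tests[0][0], env):
        return l_eval(tests[0][1], env)
    return evcon(tests[1:], env)

def evlist(exps, env):
    return [] if not exps else [l_eval(exps[0], env)] + evlist(exps[1:], env)

def l_apply(fun, args, env):
    if fun == "first":
        return args[0][0]
    if fun == "cons":
        return [args[0]] + args[1]
    if is_atom(fun):                      # user-defined function: look it up
        return l_apply(l_eval(fun, env), args, env)
    if fun[0] == "lambda":                # (lambda (params) body)
        return l_eval(fun[2], pairs(fun[1], args, env))

# ((lambda (x) (cons x (quote (b)))) (quote a))  evaluates to  (a b)
prog = [["lambda", ["x"], ["cons", "x", ["quote", ["b"]]]], ["quote", "a"]]
assert l_eval(prog, []) == ["a", "b"]
```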
Dynamic Binding
The classical LISP interpreter was implemented while McCarthy was still designing the
language. It contains an important defect that was not discovered for some time. James
Slagle defined a function like this
(def testr (lambda (x p f u)
(cond
((p x) (f x))
((atom x) (u))
(t (testr (rest x) p f (lambda () (testr (first x) p f u)))) ) ))
and it did not give the correct result.
We use a simpler example to explain the problem. Suppose we define:
(def show (lambda ()
x ))
Calling this function generates an error because x is not defined. (The interpreter above
does not incorporate error detection and would simply fail with this function.) However,
we can wrap show inside another function:
(def try (lambda (x)
(show) ))
We then find that (try t) evaluates to t. When show is called, the binding (x.t) is in the
environment, and so that is the value that show returns.
In other words, the value of a variable in LISP depends on the dynamic behaviour of the
program (which functions have been called) rather than on the static text of the
program.
We say that LISP uses dynamic binding whereas most other languages, including
Scheme, Haskell, and C++, use static binding.
Correcting the LISP interpreter to provide static binding is not difficult: it requires a
slightly more complicated data structure for environments (instead of having a simple list
of name/value pairs, we effectively build a list of activation records). However, the
dynamic binding problem was not discovered until LISP was well-entrenched: it would be
almost 20 years before Guy Steele introduced Scheme, with static binding, in 1978.
People wrote LISP compilers, but it is hard to write a compiler with dynamic binding.
Consequently, there were many LISP systems that provided dynamic binding during
interpretation and static binding for compiled programs!
Scheme
Scheme was designed by Guy L. Steele Jr. and Gerald Jay Sussman (1975). It is very
similar to LISP in both syntax and semantics, but it corrects some of the errors of LISP
and is both simpler and more consistent. The starting point of Scheme was an attempt
by Steele and Sussman to understand Carl Hewitt's theory of actors as a model of
computation. The model was object oriented and influenced by Smalltalk (see Section
6.2). Steele and Sussman implemented the actor model using a small LISP
Factorial with functions
(define factorial
(lambda (n) (if (= n 0)
1
(* n (factorial (- n 1))))))
Factorial with actors
(define actorial
(alpha (n c) (if (= n 0)
(c 1)
(actorial (- n 1) (alpha (f) (c (* f n)))))))
interpreter. The interpreter provided lexical scoping, a lambda operation for creating
functions, and an alpha operation for creating actors. For example, the factorial function
could be represented either as a function, as in Listing 9, or as an actor, as in Listing 10.
Implementing the interpreter brought an odd fact to light: the interpreter's code for
handling alpha was identical to the code for handling lambda! This indicated that closures
- the objects created by evaluating lambda - were useful for both higher order functional
programming and object oriented programming (Steele 1996).
LISP ducks the question "what is a function?" It provides lambda notation for functions,
but a lambda expression can only be applied to arguments, not evaluated itself. Scheme
provides an answer to this question: the value of a function is a closure. Thus in Scheme
we can write both
(define num 6)
which binds the value 6 to the name num and
(define square (lambda (x) (* x x)))
which binds the squaring function to the name square. (Scheme actually provides an
abbreviated form of this definition, to spare programmers the trouble of writing lambda
all the time, but the form shown is accepted by the Scheme compiler and we use it in
these notes.) Since the Scheme interpreter accepts a series of definitions, as in LISP, it is
important to understand the effect of the following sequence:
(define n 4)
(define f (lambda () n))
(define n 7)
(f)
The final expression calls the function f that has just been defined. The value returned is
the value of n, but which value, 4 or 7? If this sequence could be written in LISP, the
result would be 7, which is the value of n in the environment when (f) is evaluated.
Scheme, however, uses static scoping. The closure created by evaluating the definition
of f includes all name bindings in effect at the time of definition. Consequently, a
Scheme interpreter yields 4 as the value of (f). The answer to the question "what is a
function?" is: a function is an expression (the body of the function) and an environment
containing the values of all variables accessible at the point of definition.
Closures in Scheme are ordinary values. They can be passed as arguments and returned
by functions. (Both are possible in LISP, but awkward because they require special
forms.)
Differentiating in Scheme
(define derive (lambda (f dx)
(lambda (x)
(/ (- (f (+ x dx)) (f x)) dx))))
(define square (lambda (x) (* x x)))
(define Dsq (derive square 0.001))
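The same construction in Python makes the closure explicit: derive returns a new function that captures f and dx. (A sketch; the tolerance check is mine.)

```python
def derive(f, dx):
    # The returned lambda is a closure over f and dx.
    return lambda x: (f(x + dx) - f(x)) / dx

square = lambda x: x * x
d_sq = derive(square, 0.001)

# The derivative of x^2 at 3 is 6; the finite difference comes out close.
assert abs(d_sq(3.0) - 6.0) < 0.01
```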
Banking in Scheme
(define (make-account balance)
(define (withdraw amount)
(if (>= balance amount)
(sequence (set! balance (- balance amount))
balance)
("Insufficient funds")))
(define (deposit amount)
(set! balance (+ balance amount))
balance)
(define (dispatch m)
(cond
((eq? m withdraw) withdraw)
((eq? m deposit) deposit)
(else (error "Unrecognized transaction" m))))
dispatch)
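The dispatch idiom carries over directly to Python closures. In this sketch the messages are strings (a choice of mine, where the Scheme version compares procedure objects); balance is private state shared by withdraw and deposit.

```python
def make_account(balance):
    def withdraw(amount):
        nonlocal balance
        if balance >= amount:
            balance -= amount
            return balance
        return "Insufficient funds"

    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance

    def dispatch(m):              # the account *is* this dispatcher
        if m == "withdraw":
            return withdraw
        if m == "deposit":
            return deposit
        raise ValueError("Unrecognized transaction " + m)

    return dispatch

acct = make_account(100)
assert acct("withdraw")(30) == 70
assert acct("deposit")(50) == 120
```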
SASL
SASL (St. Andrews Symbolic Language) was introduced by David Turner (1976). It has
an Algol-like syntax and is of interest because the compiler translates the source code into
a combinator expression which is then processed by graph reduction (Turner 1979).
Turner subsequently designed KRC (Kent Recursive Calculator) (1981) and Miranda
(1985), all of which are implemented with combinator reduction.
Combinator reduction implements call by name (the default method for passing
parameters in Algol 60) but with an optimization. If the parameter is not needed in the
function, it is not evaluated, as in Algol 60. If it is needed one or more times, it is
evaluated exactly once. Since SASL expressions do not have side-effects, evaluating an
expression more than once will always give the same result. Thus combinator reduction
is (in this sense) the most efficient way to pass parameters to functions. Evaluating an
expression only when it is needed, and never more than once, is called call by need or
lazy evaluation.
The following examples use SASL notation. The expression x::xs denotes a list with first
element (head) x and remaining elements (tail) xs. The definition
nums(n) = n::nums(n+1)
apparently defines an infinite list:
nums(0) = 0::nums(1) = 0::1::nums(2) = . . . .
The function second returns the second element of a list. In SASL, we can define it like
this:
second (x::y::xs) = y
Although nums(0) is an "infinite list, we can find its second element in SASL:
second(nums(0)) = second(0::nums(1)) = second(0::1::nums(2)) = 1
This works because SASL evaluates a parameter only when its value is needed for the
calculation to proceed. In this example, as soon as the argument of second is in the form
0::1::. . . ., the required result is known.
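Python generators give the same effect: nums(0) below is an "infinite list" whose elements are computed only on demand. (A sketch; the names follow the SASL example.)

```python
# nums(n) = n::nums(n+1), but each element is produced lazily.
def nums(n):
    while True:
        yield n
        n += 1

def second(xs):
    next(xs)          # discard the head
    return next(xs)   # forcing the second element takes only two steps

# The list is conceptually infinite, yet this terminates immediately:
assert second(nums(0)) == 1
```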
Call by need is the only method of passing arguments in SASL but it occurs as a special
case in other languages. If we consider if as a function, so that the expression
if P then X else Y
is a fancy way of writing if(P,X,Y), then we see that if must use call by need for its second
and third arguments. If it did not, we would not be able to write expressions such as
if x = 0 then 1 else 1/x
In C, the operators && (AND) and || (OR) are defined as follows:
X && Y ≡ if X then Y else false
X || Y ≡ if X then true else Y
These definitions provide the effect of lazy evaluation and allow us to write expressions
such as
if (p != NULL && p->f > 0) . . . .
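Python behaves the same way: `and` and `or` evaluate their right operand only when needed, so guards like the C expression above are safe. (Illustrative names are mine.)

```python
def safe_inverse(x):
    # mirrors: if x = 0 then 1 else 1/x -- the division is never reached for 0
    return 1 if x == 0 else 1 / x

assert safe_inverse(0) == 1
assert safe_inverse(4) == 0.25

p = None
# The attribute access p.f is never evaluated, because `and` short-circuits.
assert not (p is not None and p.f > 0)
```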
SML
SML (Milner, Tofte, and Harper 1990; Milner and Tofte 1991) was designed as a
"metalanguage" (ML) for reasoning about programs as part of the Edinburgh Logic for
Computable Functions (LCF) project. The language survived after the rest of the project
was abandoned and became "standard ML", or SML. The distinguishing feature of SML is
that it is statically typed in the sense of Section B.3 and that most types can be inferred
by the compiler.
In the following example, the programmer defines the factorial function and SML
responds with its type. The programmer then tests the factorial function with argument
6; SML assigns the result to the variable it, which can be used in the next interaction if
desired. SML is run interactively, and prompts with "-".
- fun fac n = if n = 0 then 1 else n * fac(n-1);
val fac = fn : int -> int
- fac 6;
val it = 720 : int
SML also allows function declaration by cases, as in the following alternative declaration
of the factorial function:
- fun fac 0 = 1
= | fac n = n * fac(n-1);
val fac = fn : int -> int
- fac 6;
val it = 720 : int
Since SML recognizes that the first line of this declaration is incomplete, it changes the
prompt to "=" on the second line. The vertical bar "|" indicates that we are declaring
another "case" of the declaration. Each case of a declaration by cases includes a pattern.
In the declaration of fac, there are two patterns. The first, 0, is a constant pattern, and
matches only itself. The second, n,
Function composition
- infix o;
- fun (f o g) x = g (f x);
val o = fn : (a -> b) * (b -> c) -> a -> c
- val quad = sq o sq;
val quad = fn : real -> real
- quad 3.0;
val it = 81.0 : real
Finding factors
- fun hasfactor f n = n mod f = 0;
val hasfactor = fn : int -> int -> bool
- hasfactor 3 9;
val it = true : bool
is a variable pattern, and matches any value of the appropriate type. Note that the
definition fun sq x = x * x; would fail because SML cannot decide whether the type of x is
int or real.
- fun sq x:real = x * x;
val sq = fn : real -> real
- sq 17.0;
val it = 289.0 : real
The symbols a, b, and c are type names; they indicate that SML has recognized o as a
polymorphic function.
The function hasfactor defined in above returns true if its first argument is a factor of
its second argument. All functions in SML have exactly one argument. It might appear
that hasfactor has two arguments, but this is not the case. The declaration of hasfactor
introduces two functions, as shown in Listing 16. Functions like hasfactor take their
arguments one at a time. Applying the first argument, as in hasfactor 2, yields a new
function. The trick of applying one argument at a time is called "currying", after the
American logician Haskell Curry. It may be helpful to consider the types involved:
hasfactor : int -> int -> bool
hasfactor 2 : int -> bool
hasfactor 2 6 : bool
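Currying can be simulated in Python by returning a function from a function. This sketch mirrors the SML hasfactor and even (names from the text; the Python encoding is mine).

```python
# hasfactor takes its arguments one at a time, so a partial application
# such as hasfactor(2) is itself a usable one-argument predicate.
def hasfactor(f):
    return lambda n: n % f == 0

assert hasfactor(3)(9) is True     # hasfactor 3 9

even = hasfactor(2)                # a new function, as in the SML example
assert even(6) is True
assert even(7) is False
```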
The following brief discussion, adapted from (Åke Wikström 1987), shows how
functions can be used to build a programmer's toolkit. The functions here are for list
manipulation, which is a widely used example but not the only way in which a FPL can be
used. We start with a list generator, defined as an infix operator.
A function with one argument
- val even = hasfactor 2;
val even = fn : int -> bool;
- even 6;
val it = true : bool
Sums and products
- fun sum [] = 0
= | sum (x::xs) = x + sum xs;
val sum = fn : int list -> int
- fun prod [] = 1
= | prod (x::xs) = x * prod xs;
val prod = fn : int list -> int
- sum (1 -- 5);
val it = 15 : int
- prod (1 -- 5);
val it = 120 : int
Using reduce
- fun sum xs = reduce add 0 xs;
val sum = fn : int list -> int
- fun prod xs = reduce mul 1 xs;
val prod = fn : int list -> int
- infix --;
- fun (m -- n) = if m < n then m :: (m+1 -- n) else [];
val -- = fn : int * int -> int list
- 1 -- 5;
val it = [1,2,3,4,5] : int list
The functions sum and prod in Listing 17 compute the sum and product, respectively, of
a list of integers. We note that sum and prod have a similar form. This suggests that we
can abstract the common features into a function reduce that takes a binary function, a
value for the empty list, and a list. We can use reduce to obtain one-line definitions of
sum and prod, as in above. The idea of processing a list by recursion has been captured
in the definition of reduce.
- fun reduce f u [] = u
= | reduce f u (x::xs) = f x (reduce f u xs);
val reduce = fn : (a -> b -> b) -> b -> a list -> b
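The fold captured by reduce can be written in Python with curried arguments, matching the SML type (a -> b -> b) -> b -> a list -> b. (A sketch; add and mul are my helper names.)

```python
# reduce f u [x1,...,xn] = f x1 (f x2 (... (f xn u))), a right fold.
def reduce(f, u, xs):
    return u if not xs else f(xs[0])(reduce(f, u, xs[1:]))

add = lambda x: lambda y: x + y
mul = lambda x: lambda y: x * y

assert reduce(add, 0, [1, 2, 3, 4, 5]) == 15    # sum (1 -- 5)
assert reduce(mul, 1, [1, 2, 3, 4, 5]) == 120   # prod (1 -- 5)
```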
We can also define a sorting function as
- val sort = reduce insert nil;
val sort = fn : int list -> int list
where insert is the function that inserts a number into an ordered list, preserving the
ordering:
- fun insert x:int [] = x::[]
| insert x (y::ys) = if x <= y then x::y::ys else y::insert x ys;
val insert = fn : int -> int list -> int list
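Putting reduce and insert together gives the one-line insertion sort from the text. This Python sketch redefines both so the block is self-contained (the encoding is mine).

```python
def reduce(f, u, xs):
    return u if not xs else f(xs[0])(reduce(f, u, xs[1:]))

# insert x keeps an already-ordered list ordered.
def insert(x):
    def go(ys):
        if not ys:
            return [x]
        y, rest = ys[0], ys[1:]
        return [x] + ys if x <= y else [y] + go(rest)
    return go

# Folding insert over a list is insertion sort.
sort = lambda xs: reduce(insert, [], xs)

assert sort([3, 1, 4, 1, 5]) == [1, 1, 3, 4, 5]
```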
4. Syntax and Semantics
"Syntax" is often used as a synonym to "grammar." It deals with what a language looks
like and how that language is structured. It examines the pieces of a sentence, such as
nouns, adjectives and verbs, and how to order them to create a coherent language.
Syntax refers to the ways symbols may be combined to create well-formed sentences
(or programs) in the language. Syntax defines the formal relations between the
constituents of a language, thereby providing a structural description of the various
expressions that make up legal strings in the language. Syntax deals solely with the form
and structure of symbols in a language without any consideration given to their meaning.
"Semantics" looks at the meaning of a language. That can include the basic,
straightforward definitions of words or fine distinctions between similar words. When
people refer to groups disagreeing over "semantics," it typically means that they cannot
agree on how to refer to a term or concept. For example, different groups of people may
disagree over whether to call a political position "pro-choice" or "pro-abortion" based on
the subtle semantic differences between the two terms.
Semantics reveals the meaning of syntactically valid strings in a language. For natural
languages, this means correlating sentences and phrases with the objects, thoughts, and
feelings of our experiences. For programming languages, semantics describes the
behavior that a computer follows when executing a program in the language. We might
disclose this behavior by describing the relationship between the input and output of a
program or by a step-by-step explanation of how a program will execute on a real or an
abstract machine. The syntax of a programming language is commonly divided into two
parts, the lexical syntax that describes the smallest units with significance, called
tokens, and the phrase-structure syntax that explains how tokens are arranged
into programs. The lexical syntax recognizes identifiers, numerals, special symbols, and
reserved words as if a syntactic category <token> had the definition:
<token> ::= <identifier> | <numeral> | <reserved word> | <relation>
| <weak op> | <strong op> | := | ( | ) | , | ; | :
where
<reserved word> ::= program | is | begin | end | var | integer
| boolean | read | write | skip | while | do | if
| then | else | and | or | true | false | not
BNF for Wren
<program> ::= program <identifier> is <block>
<block> ::= <declaration seq> begin <command seq> end
<declaration seq> ::= ε | <declaration> <declaration seq>
<declaration> ::= var <variable list> : <type> ;
<type> ::= integer | boolean
<variable list> ::= <variable> | <variable> , <variable list>
<command seq> ::= <command> | <command> ; <command seq>
<command> ::= <variable> := <expr> | skip
| read <variable> | write <integer expr>
| while <boolean expr> do <command seq> end while
| if <boolean expr> then <command seq> end if
| if <boolean expr> then <command seq> else <command seq> end if
<expr> ::= <integer expr> | <boolean expr>
<integer expr> ::= <term> | <integer expr> <weak op> <term>
<term> ::= <element> | <term> <strong op> <element>
<element> ::= <numeral> | <variable> | ( <integer expr> ) | - <element>
<boolean expr> ::= <boolean term> | <boolean expr> or <boolean term>
<boolean term> ::= <boolean element>
| <boolean term> and <boolean element>
<boolean element> ::= true | false | <variable> | <comparison>
| not ( <boolean expr> ) | ( <boolean expr> )
<comparison> ::= <integer expr> <relation> <integer expr>
<variable> ::= <identifier>
<relation> ::= <= | < | = | > | >= | <>
<weak op> ::= + | -
<strong op> ::= * | /
<identifier> ::= <letter> | <identifier> <letter> | <identifier> <digit>
<letter> ::= a | b | c | d | e | f | g | h | i | j | k | l | m
| n | o | p | q | r | s | t | u | v | w | x | y | z
<numeral> ::= <digit> | <digit> <numeral>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Such a division of syntax into lexical issues and the structure of programs in terms of
tokens corresponds to the way programming languages are normally implemented.
Programs as text are presented to a lexical analyzer or scanner that reads characters
and produces a list of tokens taken from the lexicon, the collection of possible tokens of
the language. Since semantics ascribes meaning to programs in terms of the structure of
their phrases, the details of lexical syntax are irrelevant: the internal structure of tokens
is immaterial, and only intelligible tokens take part in providing semantics to a program.
In the grammar above, the productions defining <relation>, <weak op>,
<strong op>, <identifier>, <letter>, <numeral>, and <digit> form the lexical syntax of
Wren, although the first three rules may be used as abbreviations in the phrase-structure
syntax of the language.
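The split between lexical syntax and phrase-structure syntax can be illustrated with a small scanner for Wren. The following Python sketch is ours, not part of any standard implementation; the token categories and regular expressions follow the lexical productions above:

```python
import re

# Reserved words of Wren, taken from the BNF above.
RESERVED = {"program", "is", "begin", "end", "var", "integer", "boolean",
            "read", "write", "skip", "while", "do", "if", "then", "else",
            "and", "or", "true", "false", "not"}

# One named group per lexical category; ':=' must precede ':' in the
# symbol alternation so the longer lexeme wins.
TOKEN_RE = re.compile(r"""
      (?P<identifier>[a-z][a-z0-9]*)
    | (?P<numeral>[0-9]+)
    | (?P<relation><=|<>|>=|<|=|>)
    | (?P<symbol>:=|[:;,()+\-*/])
    | (?P<skip>\s+)
""", re.VERBOSE)

def scan(text):
    """Return a list of (kind, lexeme) tokens for a Wren program text."""
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"bad character at position {pos}: {text[pos]!r}")
        pos = m.end()
        if m.lastgroup == "skip":          # whitespace separates tokens
            continue
        kind, lexeme = m.lastgroup, m.group()
        if kind == "identifier" and lexeme in RESERVED:
            kind = "reserved"
        tokens.append((kind, lexeme))
    return tokens
```

A parser would then consume this token list; the internal spelling of each token no longer matters, only its category.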
ABSTRACT SYNTAX
The BNF definition of a programming language is sometimes referred to as the concrete
syntax of the language since it tells how to recognize the physical text of a program.
Software utilities take a program as a file of characters, recognize that it satisfies the
context-free syntax for the language, and produce a derivation tree exhibiting its
structure. This software usually decomposes into two parts: a scanner or lexical
analyzer that reads the text and creates a list of tokens, and a parser or syntactic
analyzer that forms a derivation tree from the token list based on the BNF definition.
A TWO-LEVEL GRAMMAR FOR WREN
The two-level grammar that we construct for Wren performs all necessary context
checking. The primary focus is on ensuring that identifiers are not multiply declared, that
variables used in the program have been declared, and that their usage is consistent with
their types. All declaration information is present in a metanotion called DECLSEQ, which
is associated with the context-sensitive portions of commands. We use the following
Wren program for illustration as we develop the two-level grammar:

program p is
  var x, y : integer;
  var a : boolean;
begin
  read x; read y;
  a := x < y;
  if a then write x else write y end if
end
The program is syntactically correct if we can build a complete derivation tree
for it. Recall that a tree is complete if every leaf is a terminal symbol or empty and that a
preorder traversal of the leaf nodes matches the target program once the symbols have
been replaced with the corresponding tokens from the representation table. We introduce
metarules and hyper-rules on an "as needed" basis while discussing the sample program;
in the complete two-level grammar, all rules are identified by number.
Declarations
A declaration associates a name with a type. Suppose that a Wren program contains the
following declarations:
var sum1 : integer ;
var done : boolean ;
These declarations will be represented in our two-level grammar derivation tree as
letter s letter u letter m digit 1 type integer
letter d letter o letter n letter e type boolean
The following metanotions define a declaration and a declaration sequence. Since a valid
Wren program may have no declarations (for example, it may be a program that writes
only constant values), we need to allow for an empty declaration sequence.

(m9)  DECL :: NAME type TYPE.
(m10) TYPE :: integer; boolean; program.
(m11) DECLSEQ :: DECL; DECLSEQ DECL.
(m12) DECLSEQETY :: DECLSEQ; EMPTY.
(m13) EMPTY :: .

These metanotions are sufficient to begin construction of the declaration
information for a Wren program. The most difficult aspect of gathering together the
declaration information is the use of variable lists, such as

var w, x, y, z : integer;

which should produce the following DECLSEQ:
letter w type integer  letter x type integer
letter y type integer  letter z type integer
The difficulty is that integer appears only once as a terminal symbol and has to be
"shared" with all the variables in the list. The following program fragments should
produce this same DECLSEQ, despite their different syntactic form:

var w : integer;
var x : integer;
var y : integer;
var z : integer;

and

var w, x : integer;
var y, z : integer;
A DECLSEQ permits three alternatives: (1) a sequence followed by a single declaration,
(2) a single declaration, or (3) an empty declaration.

(h3) DECLSEQ DECL declaration seq :
       DECLSEQ declaration seq, DECL declaration.
(h4) DECL declaration seq : DECL declaration.
(h5) EMPTY declaration seq : EMPTY.

It should be noted that these three hyper-rules can be expressed as two rules (h4 is
redundant), but we retain the three alternatives for the sake of clarity. If all variables are
declared in a separate declaration, we will require a single hyper-rule for a declaration:
(h6) NAME type TYPE declaration :
       var symbol, NAME symbol, colon symbol, TYPE symbol, semicolon symbol.
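The context conditions that the two-level grammar enforces can also be stated operationally. The following Python sketch is not a two-level grammar; it is a hypothetical symbol-table check (names and data layout are our own) that enforces the same conditions: no multiple declarations, declaration before use, and type-consistent use.

```python
def check_declarations(decls, uses):
    """decls: list of (name, type) pairs from the declaration section.
    uses: list of (name, expected_type) pairs gathered from the commands.
    Returns a list of context-error messages (empty for a valid program)."""
    errors, table = [], {}
    for name, typ in decls:
        if name in table:                      # identifier multiply declared
            errors.append(f"{name} is multiply declared")
        else:
            table[name] = typ
    for name, expected in uses:
        if name not in table:                  # used but never declared
            errors.append(f"{name} is used but not declared")
        elif table[name] != expected:          # inconsistent typed use
            errors.append(f"{name} has type {table[name]}, used as {expected}")
    return errors
```

Applied to the sample program p, the declarations x, y : integer and a : boolean check cleanly against the uses in its commands.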
SELF-DEFINITION OF LISP
Lisp, initially developed by John McCarthy in 1958, is the second oldest programming
language (after Fortran) in common use today. In the early 1960s McCarthy realized that
the semantics of Lisp can be defined in terms of a few Lisp primitives and that an
interpreter for Lisp can be written as a very small, concise Lisp program. Such an
interpreter, referred to as a metacircular interpreter, can handle function definitions,
parameter passing, and recursion as well as simple S-expressions of Lisp. The small size
of the interpreter is striking, considering the thousands of lines of code needed
to write a compiler for an imperative language. We have elected to construct the
interpreter in Scheme, a popular dialect of Lisp. Although we implement a subset of
Scheme in Scheme, the interpreter is similar to the original self-definition given by
McCarthy. The basic operations to decompose a list, car and cdr, and the list constructor
cons are described in Figure 6.1. Combined with a predicate null? to test for an empty
list, a conditional expression cond, and a method to define functions using define, it is
possible to write useful Scheme functions.
A function for concatenating lists is usually predefined in Lisp systems and goes by the
name "append".
(define (concat lst1 lst2)
(cond ((null? lst1) lst2)
(#t (cons (car lst1) (concat (cdr lst1) lst2)))))
List Operations
(car <list>) return the first item in <list>
(cdr <list>) return <list> with the first item removed
(cons <item> <list>) add <item> as first element of <list>
Arithmetic Operations
(+ <e1> <e2>) return sum of the values of <e1> and <e2>
(- <e1> <e2>) return difference of the values of <e1> and <e2>
(* <e1><e2>) return product of the values of <e1> and <e2>
(/ <e1> <e2>) return quotient of the values of <e1> and <e2>
Predicates
(null? <list>) test if <list> is empty
(equal? <s1> <s2>) test the equality of S-expressions <s1> and <s2>
(atom? <s>) test if <s> is an atom
Conditional
(cond (<p1> <e1>)    sequentially evaluate predicates <p1>, <p2>, ... until
      (<p2> <e2>)    one of them, say <pi>, returns a non-false (not #f)
        ...          result; then the corresponding expression <ei> is
      (<pn> <en>))   evaluated and its value returned from the cond
Function Definition and Anonymous Functions
(define (<name> allow user to define function <name> with formal
<formals>) parameters <formals> and function body <body>
<body>)
(lambda (<formals>) create an anonymous function
<body>)
(let (<var-bindings>) an alternative to function application;
<body>) <var-bindings> is a list of (variable S-expression)
pairs and the body is a list of S-expressions; let
returns the value of last S-expression in <body>
Other
(quote <item>) return <item> without evaluating it
(display <expr>) print the value of <expr> and return that value
(newline) print a carriage return and return ( )
The symbols #t and #f represent the constant values true and false. Anonymous
functions can be defined as lambda expressions. The let expression is a variant of
function application. If we add an equality predicate equal? and an atom-testing
predicate atom?, we can write other useful list processing functions with this small set of
built-in functions. In the replace function below, all occurrences of the item s are
replaced with the item r at the top level in the list lst.
(define (replace s r lst)
(cond ((null? lst) lst)
((equal? (car lst) s) (cons r (replace s r (cdr lst))))
(#t (cons (car lst) (replace s r (cdr lst))))))
In order to test the metacircular interpreter, it is necessary to have a function quote that
returns its argument unevaluated and a function display that prints the value of an S-
expression.
We have elected to expand the basic interpreter by adding four arithmetic
operations, +, -, *, and /, so that we can execute some recursive arithmetic
functions that are familiar from imperative programming.
(define (fibonacci n)
(cond ((equal? n 0) 1)
((equal? n 1) 1)
(#t (+ (fibonacci (- n 1)) (fibonacci (- n 2))))))
(define (factorial n)
(cond ((equal? n 0) 1)
(#t (* n (factorial (- n 1))))))
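The metacircular idea (a small interpreter for a language written using a few primitives of that language) can be illustrated outside Lisp as well. The sketch below is a hypothetical Python evaluator for S-expressions written as nested Python lists; it is not McCarthy's interpreter, only a compact analogue covering quote, cond, lambda, define, and application over primitives like those of Figure 6.1.

```python
def s_eval(expr, env):
    """Evaluate an S-expression represented as nested Python lists.
    A sketch of the metacircular idea, not McCarthy's original interpreter."""
    if isinstance(expr, (int, float)):
        return expr                                 # numbers are self-evaluating
    if isinstance(expr, str):
        return env[expr]                            # a symbol: look it up
    op, *args = expr
    if op == "quote":                               # (quote item)
        return args[0]
    if op == "cond":                                # (cond (p1 e1) (p2 e2) ...)
        for test, result in args:
            if s_eval(test, env):
                return s_eval(result, env)
        return None
    if op == "lambda":                              # (lambda (formals) body)
        formals, body = args
        return lambda *actual: s_eval(body, {**env, **dict(zip(formals, actual))})
    if op == "define":                              # (define name expr)
        name, value = args
        env[name] = s_eval(value, env)
        return name
    fn = s_eval(op, env)                            # application
    return fn(*[s_eval(a, env) for a in args])

# A handful of primitives mirroring Figure 6.1.
GLOBAL = {
    "car": lambda l: l[0],          "cdr": lambda l: l[1:],
    "cons": lambda x, l: [x] + l,   "null?": lambda l: l == [],
    "equal?": lambda a, b: a == b,  "#t": True, "#f": False,
    "+": lambda a, b: a + b, "-": lambda a, b: a - b,
    "*": lambda a, b: a * b, "/": lambda a, b: a / b,
}
```

With define installing names into the shared global environment, the recursive factorial above can be defined and run inside this evaluator.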
We first build a very simple meta-interpreter in Prolog that handles only the
conjunction of goals and the chaining of goals. A goal succeeds for one of three
reasons:
1. The goal is true.
2. The goal is a conjunction and both conjuncts are true.
3. The goal is the head of a clause whose body is true.
SELF-DEFINITION OF PROLOG
All other goals fail. A predefined Prolog predicate clause searches the user database for a
clause whose head matches the first argument; the body of the clause is returned as the
second argument.
prove(true).
prove((Goal1, Goal2)) :- prove(Goal1), prove(Goal2).
prove(Goal) :- clause(Goal, Body), prove(Body).
prove(Goal) :- fail.
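The three success cases of the vanilla meta-interpreter can be mimicked outside Prolog for ground (variable-free) goals. In this Python sketch, clause(Goal, Body) becomes a dictionary lookup; the facts are invented purely for illustration, and real Prolog resolution additionally requires unification of variables.

```python
# Clause database: head -> list of bodies. A body is a tuple of goals,
# and the empty tuple plays the role of `true`. Ground goals only
# (a sketch: these example facts are hypothetical).
CLAUSES = {
    "light_on": [("switch_up", "power_ok")],
    "switch_up": [()],
    "power_ok": [()],
}

def prove(goal):
    """Succeed for the three reasons listed above: the goal is the empty
    conjunction (true), a conjunction whose conjuncts are all provable,
    or the head of some clause whose body is provable."""
    if isinstance(goal, tuple):                # conjunction of goals
        return all(prove(g) for g in goal)
    for body in CLAUSES.get(goal, []):         # clause(Goal, Body)
        if prove(body):
            return True
    return False                               # all other goals fail
```

As in the Prolog version, a goal with no matching clause simply fails.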
We define a membership function, called memb, so it will not conflict with any
built-in membership operation.
memb(X,[X|Rest]).
memb(X,[Y|Rest]) :- memb(X,Rest).
Here is the result of the testing:

:- prove((memb(X,[a,b,c]), memb(X,[b,c,d]))).
X = b ;     % semicolon requests the next answer, if any
X = c ;
no

:- prove((memb(X,[a,b,c]), memb(X,[d,e,f]))).
no

:- prove(((memb(X,[a,b,c]), memb(X,[b,c,d])), memb(X,[c,d,e]))).
X = c ;
no
These results are correct, but they provide little insight into how they are obtained. We
can overcome this problem by returning a "proof tree" for each clause that succeeds. The
proof for true is simply true, the proof of a conjunction of goals is a conjunction of the
individual proofs, and a proof of a clause whose head is true because the body is true will
be represented as Goal <== Proof. We introduce a new infix binary operator <== for this
purpose. The proof tree for failure is simply fail.
:- op(500,xfy,<==).
prove(true, true).
prove((Goal1, Goal2),(Proof1, Proof2)) :- prove(Goal1,Proof1),
prove(Goal2,Proof2).
prove(Goal, Goal<==Proof) :- clause(Goal, Body), prove(Body, Proof).
prove(Goal,fail) :- fail.
Here are the results of our test cases:
:- prove((memb(X,[a,b,c]), memb(X,[b,c,d])), Proof).
X = b
Proof = memb(b,[a,b,c])<==memb(b,[b,c])<==true,
memb(b,[b,c,d])<==true
:- prove((memb(X,[a,b,c]), memb(X,[d,e,f])), Proof).
no
:- prove(((memb(X,[a,b,c]), memb(X,[b,c,d])),
          memb(X,[c,d,e])), Proof).
X = c
Proof =
(memb(c,[a,b,c])<==memb(c,[b,c])<==memb(c,[c])<==true,
memb(c,[b,c,d])<==memb(c,[c,d])<==true),
memb(c,[c,d,e])<==true
TRANSLATIONAL SEMANTICS
We assume that the Wren program being translated obeys the context-sensitive
conditions for the language as well as the context-free grammar. We parse the
declaration section to ensure that the BNF is correct, but no attributes are associated
with the declaration section. Context checking can be combined with code generation in a
single attribute grammar, but we leave this task unfinished at this time.
The machine code is based on a primitive architecture with a single accumulator
(Acc) and a memory addressable with symbolic labels and capable of holding integer
values. In this translation, Boolean values are simulated by integers. We use names to
indicate symbolic locations. The hypothetical machine has a load/store architecture:
- The LOAD instruction copies a value from a named location, whose value is not
changed, to the accumulator, whose old value is overwritten, or transfers an integer
constant into the accumulator.
- The STO instruction copies the value of the accumulator, whose value is
not changed, to a named location, whose previous value is overwritten.
The target language has two input/output commands:
- GET transfers an integer value from the input device to the named location.
- PUT transfers the value stored at the named location to the output device.

There are four arithmetic operations (ADD, SUB, MULT, and DIV) and three logical
operations (AND, OR, and NOT). For the binary operations, the first operand is the
current accumulator value and the second operand is specified in the instruction itself.
The second operand can be either the contents of a named location or an integer
constant. For Boolean values, the integer 1 is used to represent true and the integer 0 to
represent false. The result of an operation is placed in the accumulator. The NOT
operation has no argument; it simply inverts the 0 or 1 in the accumulator.
The target language contains one unconditional jump J and one conditional jump JF
where the conditional jump is executed if the value in the accumulator is false (equal to
zero). The argument of a jump instruction is a label instruction. For example, J L3 means
to jump unconditionally to label L3, which appears in an instruction of the form L3 LABEL.
The label instruction has no operand.
There are six test instructions; they test the value of the accumulator relative to zero.
For example, TSTEQ tests whether the accumulator is equal to zero. The test instructions
are destructive in the sense that the value in the accumulator is replaced by a 1 if the
test is true and a 0 if the test is false.
We will find this approach to be convenient when processing Boolean expressions. The
five other test instructions are: TSTLT (less than zero), TSTLE (less than or equal zero),
TSTNE (not equal zero), TSTGE (greater than or equal zero), and TSTGT (greater than
zero). The NO-OP instruction performs no operation. Finally, the target language includes
a HALT instruction.
LOAD <name> or <const> Load accumulator from named
location or load constant value
STO <name> Store accumulator to named location
GET <name> Input value to named location
PUT <name> Output value from named location
ADD <name> or <const>    Acc <- Acc + <operand>
SUB <name> or <const>    Acc <- Acc - <operand>
MULT <name> or <const>   Acc <- Acc * <operand>
DIV <name> or <const>    Acc <- Acc / <operand>
AND <name> or 0 or 1     Acc <- Acc and <operand>
OR <name> or 0 or 1      Acc <- Acc or <operand>
NOT                      Acc <- not Acc
J <label> Jump unconditionally
JF <label> Jump on false (Acc = 0)
LABEL Label instruction
TSTLT    Test if Acc Less Than zero
TSTLE    Test if Acc Less than or Equal zero
TSTNE    Test if Acc Not Equal zero
TSTEQ    Test if Acc Equal zero
TSTGE    Test if Acc Greater than or Equal zero
TSTGT    Test if Acc Greater Than zero
NO-OP No operation
HALT Halt execution
A Program Translation
Consider a greatest common divisor (gcd) program:
program gcd is
var m,n : integer;
begin
read m; read n;
while m <> n do
if m < n then n := n - m
else m := m - n
end if
end while;
write m
end
This program translates into the following object code:
GET M
GET N
L1 LABEL
LOAD M
SUB N
TSTNE
JF L2
LOAD M
SUB N
TSTLT
JF L3
LOAD N
SUB M
STO N
J L4
L3 LABEL
LOAD M
SUB N
STO M
L4 LABEL
J L1
L2 LABEL
LOAD M
STO T1
PUT T1
HALT
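The object code above can be executed by a small simulator for the hypothetical accumulator machine. The Python sketch below implements only the instructions the gcd program uses; the instruction encoding as (label, opcode, operand) triples is our own.

```python
def run(program, inputs):
    """Execute (label, opcode, operand) triples on the single-accumulator
    machine described above. A sketch: only the instructions needed by
    the gcd object code are implemented."""
    labels = {lab: i for i, (lab, op, arg) in enumerate(program) if op == "LABEL"}
    mem, out, acc, pc = {}, [], 0, 0
    inputs = iter(inputs)
    while pc < len(program):
        _, op, arg = program[pc]
        pc += 1
        if op == "GET":     mem[arg] = next(inputs)       # input device
        elif op == "PUT":   out.append(mem[arg])          # output device
        elif op == "LOAD":  acc = mem.get(arg, arg)       # location or constant
        elif op == "STO":   mem[arg] = acc
        elif op == "SUB":   acc -= mem.get(arg, arg)
        elif op == "TSTNE": acc = 1 if acc != 0 else 0    # destructive test
        elif op == "TSTLT": acc = 1 if acc < 0 else 0
        elif op == "J":     pc = labels[arg]              # unconditional jump
        elif op == "JF":    pc = labels[arg] if acc == 0 else pc
        elif op == "LABEL": pass                          # no operation
        elif op == "HALT":  break
    return out

# The gcd object code above, transcribed into triples.
GCD = [
    (None, "GET", "M"), (None, "GET", "N"),
    ("L1", "LABEL", None),
    (None, "LOAD", "M"), (None, "SUB", "N"), (None, "TSTNE", None),
    (None, "JF", "L2"),
    (None, "LOAD", "M"), (None, "SUB", "N"), (None, "TSTLT", None),
    (None, "JF", "L3"),
    (None, "LOAD", "N"), (None, "SUB", "M"), (None, "STO", "N"),
    (None, "J", "L4"),
    ("L3", "LABEL", None),
    (None, "LOAD", "M"), (None, "SUB", "N"), (None, "STO", "M"),
    ("L4", "LABEL", None),
    (None, "J", "L1"),
    ("L2", "LABEL", None),
    (None, "LOAD", "M"), (None, "STO", "T1"), (None, "PUT", "T1"),
    (None, "HALT", None),
]
```

Running the code on the inputs 12 and 18 outputs 6, their greatest common divisor, confirming the translation.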
5. Data Types and Structures
Data types are one of the fundamental pillars of a program and of a programming
language. There are basically six types:
1. Numbers  2. Strings  3. Constants  4. Lists  5. Arrays  6. Structures.
1. Numbers: A complex expression is specified in Maxima by adding the real part of the
expression to %i times the imaginary part. Thus the roots of the equation x^2 - 4*x +
13 = 0 are 2 + 3*%i and 2 - 3*%i. Note that simplification of products of complex
expressions can be effected by expanding the product. Simplification of quotients, roots,
and other functions of complex expressions can usually be accomplished by using
the realpart, imagpart, rectform, polarform, abs, and carg functions.
Functions and Variables for Numbers
Function: bfloat (expr)
Converts all numbers and functions of numbers in expr to bigfloat numbers. The
number of significant digits in the resulting bigfloats is specified by the global
variable fpprec.
When float2bf is false a warning message is printed when a floating point number is
converted into a bigfloat number (since this may lead to loss of precision).
Option variable: bftorat
Default value: false
bftorat controls the conversion of bfloats to rational numbers.
When bftorat is false, ratepsilon will be used to control the conversion (this results in
relatively small rational numbers). When bftorat is true, the rational number generated
will accurately represent the bfloat.
Note: bftorat has no effect on the transformation to rational numbers with the
function rationalize.
Example:
(%i1) ratepsilon:1e-4;
(%o1) 1.e-4
(%i2) rat(bfloat(11111/111111)), bftorat:false;
`rat' replaced 9.99990999991B-2 by 1/10 = 1.0B-1
1
(%o2)/R/ --
10
(%i3) rat(bfloat(11111/111111)), bftorat:true;
`rat' replaced 9.99990999991B-2 by 11111/111111 = 9.99990999991B-2
11111
(%o3)/R/ ------
111111
2. Strings
Strings (quoted character sequences) are enclosed in double quote marks " for input, and
displayed with or without the quote marks, depending on the global variable stringdisp.
Strings may contain any characters, including embedded tab, newline, and carriage
return characters.
The sequence \" is recognized as a literal double quote, and \\ as a literal backslash.
When backslash appears at the end of a line, the backslash and the line termination
(either newline or carriage return and newline) are ignored, so that the string continues
with the next line. No other special combinations of backslash with another character are
recognized; when backslash appears before any character other than ", \, or a line
termination, the backslash is ignored. There is no way to represent a special character
(such as tab, newline, or carriage return) except by embedding the literal character in
the string. There is no character type in Maxima; a single character is represented as a
one-character string. The stringproc add-on package contains many functions for working
with strings.
Examples:
(%i1) s_1 : "This is a string.";
(%o1) This is a string.
(%i2) s_2 : "Embedded \"double quotes\" and backslash \\ characters.";
(%o2) Embedded "double quotes" and backslash \ characters.
(%i3) s_3 : "Embedded line termination
in this string.";
(%o3) Embedded line termination
in this string.
(%i4) s_4 : "Ignore the \
line termination \
characters in \
this string.";
(%o4) Ignore the line termination characters in this string.
(%i5) stringdisp : false;
(%o5) false
(%i6) s_1;
(%o6) This is a string.
(%i7) stringdisp : true;
(%o7) true
(%i8) s_1;
(%o8) "This is a string."
Functions and Variables for Strings
Function: concat (arg_1, arg_2, ...)
Concatenates its arguments. The arguments must evaluate to atoms. The return
value is a symbol if the first argument is a symbol and a string otherwise.
concat evaluates its arguments. The single quote ' prevents evaluation.
(%i1) y: 7$
(%i2) z: 88$
(%i3) concat (y, z/2);
(%o3) 744
(%i4) concat ('y, z/2);
(%o4) y44
A symbol constructed by concat may be assigned a value and appear in expressions.
The :: (double colon) assignment operator evaluates its left-hand side.
(%i5) a: concat ('y, z/2);
(%o5) y44
(%i6) a:: 123;
(%o6) 123
(%i7) y44;
(%o7) 123
(%i8) b^a;
y44
(%o8) b
(%i9) %, numer;
123
(%o9) b
3. Constants
Functions and Variables for Constants
Constant: %e
%e represents the base of the natural logarithm, also known as Euler's number. The
numeric value of %e is the double-precision floating-point value
2.718281828459045d0.
Constant: %i
%i represents the imaginary unit, sqrt(- 1).
Constant: false
false represents the Boolean constant of the same name. Maxima implements false by
the value NIL in Lisp.
Constant: %gamma
The Euler-Mascheroni constant, 0.5772156649015329 ....
Constant: ind
ind represents a bounded, indefinite result.
See also limit.
(%i1) limit (sin(1/x), x, 0);
(%o1) ind
Constant: %phi
%phi represents the so-called golden mean, (1 + sqrt(5))/2. The numeric value
of %phi is the double-precision floating-point value 1.618033988749895d0.
fibtophi expresses Fibonacci numbers fib(n) in terms of %phi.
By default, Maxima does not know the algebraic properties of %phi. After
evaluating tellrat(%phi^2 - %phi - 1) and algebraic: true, ratsimp can simplify some
expressions containing %phi.
Examples:
fibtophi expresses Fibonacci numbers fib(n) in terms of %phi.
(%i1) fibtophi (fib (n));
n n
%phi - (1 - %phi)
(%o1) -------------------
2 %phi - 1
(%i2) fib (n-1) + fib (n) - fib (n+1);
(%o2) - fib(n + 1) + fib(n) + fib(n - 1)
(%i3) fibtophi (%);
n + 1 n + 1 n n
%phi - (1 - %phi) %phi - (1 - %phi)
(%o3) - --------------------------- + -------------------
2 %phi - 1 2 %phi - 1
n - 1 n - 1
%phi - (1 - %phi)
+ ---------------------------
2 %phi - 1
(%i4) ratsimp (%);
(%o4) 0
4. Lists
Lists are the basic building block for Maxima and Lisp. All data types other than arrays,
hash tables, and numbers are represented as Lisp lists. These Lisp lists have the form
((MPLUS) $A 2)
to indicate an expression a+2. At Maxima level one would see the infix notation a+2.
Maxima also has lists which are printed as
[1, 2, 7, x+y]
for a list with 4 elements. Internally this corresponds to a Lisp list of the form
((MLIST) 1 2 7 ((MPLUS) $X $Y ))
The flag which denotes the type field of the Maxima expression is a list itself, since after
it has been through the simplifier the list would become
((MLIST SIMP) 1 2 7 ((MPLUS SIMP) $X $Y))
Variables for Lists
Operator: [
Operator: ]
[ and ] mark the beginning and end, respectively, of a list.
[ and ] also enclose the subscripts of a list, array, hash array, or array function.
Examples:
(%i1) x: [a, b, c];
(%o1) [a, b, c]
(%i2) x[3];
(%o2) c
(%i3) array (y, fixnum, 3);
(%o3) y
(%i4) y[2]: %pi;
(%o4) %pi
(%i5) y[2];
(%o5) %pi
(%i6) z['foo]: 'bar;
(%o6) bar
(%i7) z['foo];
(%o7) bar
(%i8) g[k] := 1/(k^2+1);
1
(%o8) g := ------
k 2
k + 1
(%i9) g[10];
1
(%o9) ---
101
Function: append (list_1, ..., list_n)
Returns a single list of the elements of list_1 followed by the elements of list_2,
.... append also works on general expressions, e.g. append (f(a,b),
f(c,d,e)); yields f(a,b,c,d,e).
Do example(append); for an example.
Function: assoc (key, list, default)
Function: assoc (key, list)
This function searches for key in the left-hand side of the input list of the
form [x,y,z,...] where each of the list elements is an expression of a binary operand and 2
elements, for example x=1, 2^3, [a,b], etc. The key is checked against the first
operand. assoc returns the second operand if the key is found. If the key is not found, it
returns the default value. default is optional and defaults to false.
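The lookup that assoc performs can be sketched in Python by modeling each list element as a pair of its two operands. This is a sketch of the documented behavior, not Maxima's implementation:

```python
def assoc(key, pairs, default=False):
    """Return the second operand of the first pair whose first operand
    equals key; otherwise return default (False by default, as in Maxima)."""
    for first, second in pairs:
        if first == key:
            return second
    return default
```

For instance, looking up x in a list of bindings like [x=1, y=2] returns 1, while a missing key falls back to the default.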
Function: cons (expr, list)
Returns a new list constructed of the element expr as its first element, followed by the
elements of list. cons also works on other expressions, e.g. cons(x, f(a,b,c)); -
> f(x,a,b,c).
Function: delete (expr_1, expr_2)
Function: delete (expr_1, expr_2, n)
delete(expr_1, expr_2) removes from expr_2 any arguments of its top-level operator
which are the same (as determined by "=") as expr_1. Note that "=" tests for formal
equality, not equivalence. Note also that arguments of subexpressions are not affected.
expr_1 may be an atom or a non-atomic expression. expr_2 may be any non-atomic
expression. delete returns a new expression; it does not modify expr_2.
delete(expr_1, expr_2, n) removes from expr_2 the first n arguments of the top-level
operator which are the same as expr_1. If there are fewer than n such arguments, then
all such arguments are removed.
Examples:
Removing elements from a list.
(%i1) delete (y, [w, x, y, z, z, y, x, w]);
(%o1) [w, x, z, z, x, w]
Removing terms from a sum.
(%i1) delete (sin(x), x + sin(x) + y);
(%o1) y + x
Removing factors from a product.
(%i1) delete (u - x, (u - w)*(u - x)*(u - y)*(u - z));
(%o1) (u - w) (u - y) (u - z)
Removing arguments from an arbitrary expression.
(%i1) delete (a, foo (a, b, c, d, a));
(%o1) foo(b, c, d)
Limit the number of removed arguments.
(%i1) delete (a, foo (a, b, a, c, d, a), 2);
(%o1) foo(b, c, d, a)
Whether arguments are the same as expr_1 is determined by "=". Arguments which
are equal but not "=" are not removed.
(%i1) [is (equal (0, 0)), is (equal (0, 0.0)), is (equal (0, 0b0))];
rat: replaced 0.0 by 0/1 = 0.0
`rat' replaced 0.0B0 by 0/1 = 0.0B0
(%o1) [true, true, true]
(%i2) [is (0 = 0), is (0 = 0.0), is (0 = 0b0)];
(%o2) [true, false, false]
(%i3) delete (0, [0, 0.0, 0b0]);
(%o3) [0.0, 0.0b0]
(%i4) is (equal ((x + y)*(x - y), x^2 - y^2));
(%o4) true
(%i5) is ((x + y)*(x - y) = x^2 - y^2);
(%o5) false
(%i6) delete ((x + y)*(x - y), [(x + y)*(x - y), x^2 - y^2]);
2 2
(%o6) [x - y ]
5. Arrays
Functions and Variables for Arrays
Function: array (name, dim_1, ..., dim_n)
Function: array (name, type, dim_1, ..., dim_n)
Function: array ([name_1, ..., name_m], dim_1, ..., dim_n)
Creates an n-dimensional array. n may be less than or equal to 5. The subscripts for
the i'th dimension are the integers running from 0 to dim_i.
array (name, dim_1, ..., dim_n) creates a general array.
array (name, type, dim_1, ..., dim_n) creates an array, with elements of a specified
type. type can be fixnum for integers of limited size or flonum for floating-point
numbers.
array ([name_1, ..., name_m], dim_1, ..., dim_n) creates m arrays, all of the same
dimensions.
If the user assigns to a subscripted variable before declaring the corresponding
array, an undeclared array is created. Undeclared arrays, otherwise known as
hashed arrays (because hash coding is done on the subscripts), are more general
than declared arrays. The user does not declare their maximum size, and they grow
dynamically by hashing as more elements are assigned values. The subscripts of
undeclared arrays need not even be numbers. However, unless an array is rather
sparse, it is probably more efficient to declare it when possible than to leave it
undeclared. The array function can be used to transform an undeclared array into a
declared array.
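The contrast between declared and undeclared (hashed) arrays can be sketched with two small Python classes. The class names and layout are our own, purely illustrative; the bounds rule follows the 0-to-dim_i convention stated above.

```python
class HashedArray:
    """Sketch of an undeclared (hashed) Maxima array: no declared bounds,
    arbitrary subscripts, storage grows as elements are assigned."""
    def __init__(self):
        self.cells = {}                              # subscript(s) -> value
    def __setitem__(self, key, value):
        self.cells[key] = value
    def __getitem__(self, key):
        return self.cells[key]

class DeclaredArray(HashedArray):
    """Sketch of a declared array: subscripts for dimension i run from
    0 to dim_i, checked on every assignment."""
    def __init__(self, *dims):
        super().__init__()
        self.dims = dims
    def __setitem__(self, key, value):
        subs = key if isinstance(key, tuple) else (key,)
        if len(subs) != len(self.dims) or any(
                not 0 <= s <= d for s, d in zip(subs, self.dims)):
            raise IndexError("subscript outside declared bounds")
        self.cells[key] = value
```

The hashed variant accepts any subscript, including non-numeric ones, mirroring bb[FOO] in the examples below, while the declared variant rejects out-of-bounds assignments.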
Function: arrayapply (A, [i_1, ..., i_n])
Evaluates A[i_1, ..., i_n], where A is an array and i_1, ..., i_n are integers.
This is reminiscent of apply, except the first argument is an array instead of a function.
Function: arrayinfo (A)
Returns information about the array A. The argument A may be a declared array, an
undeclared (hashed) array, an array function, or a subscripted function.
For declared arrays, arrayinfo returns a list comprising the atom declared, the number of
dimensions, and the size of each dimension. The elements of the array, both bound and
unbound, are returned by listarray.
For undeclared arrays (hashed arrays), arrayinfo returns a list comprising the
atom hashed, the number of subscripts, and the subscripts of every element which has a
value. The values are returned by listarray.
For array functions, arrayinfo returns a list comprising the atom hashed, the number of
subscripts, and any subscript values for which there are stored function values. The
stored function values are returned by listarray.
For subscripted functions, arrayinfo returns a list comprising the atom hashed, the
number of subscripts, and any subscript values for which there are lambda expressions.
The lambda expressions are returned by listarray.
Examples:
arrayinfo and listarray applied to a declared array.
(%i1) array (aa, 2, 3);
(%o1) aa
(%i2) aa [2, 3] : %pi;
(%o2) %pi
(%i3) aa [1, 2] : %e;
(%o3) %e
(%i4) arrayinfo (aa);
(%o4) [declared, 2, [2, 3]]
(%i5) listarray (aa);
(%o5) [#####, #####, #####, #####, #####, #####, %e, #####,
#####, #####, #####, %pi]
arrayinfo and listarray applied to an undeclared (hashed) array.
(%i1) bb [FOO] : (a + b)^2;
2
(%o1) (b + a)
(%i2) bb [BAR] : (c - d)^3;
3
(%o2) (c - d)
(%i3) arrayinfo (bb);
(%o3) [hashed, 1, [BAR], [FOO]]
(%i4) listarray (bb);
3 2
(%o4) [(c - d) , (b + a) ]
arrayinfo and listarray applied to an array function.
(%i1) cc [x, y] := y / x;
y
(%o1) cc := -
x, y x
(%i2) cc [u, v];
v
(%o2) -
u
(%i3) cc [4, z];
z
(%o3) -
4
(%i4) arrayinfo (cc);
(%o4) [hashed, 2, [4, z], [u, v]]
(%i5) listarray (cc);
z v
(%o5) [-, -]
4 u
arrayinfo and listarray applied to a subscripted function.
(%i1) dd [x] (y) := y ^ x;
x
(%o1) dd (y) := y
x
(%i2) dd [a + b];
b + a
(%o2) lambda([y], y )
(%i3) dd [v - u];
v - u
(%o3) lambda([y], y )
(%i4) arrayinfo (dd);
(%o4) [hashed, 1, [b + a], [v - u]]
(%i5) listarray (dd);
b + a v - u
(%o5) [lambda([y], y ), lambda([y], y )]
System variable: arrays
Default value: []
arrays is a list of arrays that have been allocated. These comprise arrays declared
by array, hashed arrays constructed by implicit definition (assigning something to an
array element), and array functions defined by := and define. Arrays defined
by make_array are not included.
Examples:
(%i1) array (aa, 5, 7);
(%o1) aa
(%i2) bb [FOO] : (a + b)^2;
2
(%o2) (b + a)
(%i3) cc [x] := x/100;
x
(%o3) cc := ---
x 100
(%i4) dd : make_array ('any, 7);
(%o4) {Array: #(NIL NIL NIL NIL NIL NIL NIL)}
(%i5) arrays;
(%o5) [aa, bb, cc]
6. Structures
Maxima provides a simple data aggregate called a structure. A structure is an expression
in which arguments are identified by name (the field name) and the expression as a
whole is identified by its operator (the structure name). A field value can be any
expression.
A structure is defined by the defstruct function; the global variable structures is the list of
user-defined structures. The function new creates instances of structures. The @ operator
refers to fields. kill(S) removes the structure definition S, and kill(x@a) unbinds the
field a of the structure instance x.
In the pretty-printing console display (with display2d equal to true), structure instances
are displayed with the value of each field represented as an equation, with the field name
on the left-hand side and the value on the right-hand side. (The equation is only a
display construct; only the value is actually stored.) In 1-dimensional display
(via grind or with display2d equal to false), structure instances are displayed without the
field names.
There is no way to use a field name as a function name, although a field value can be a
lambda expression. Nor can the values of fields be restricted to certain types; any field
can be assigned any kind of expression. There is no way to make some fields accessible
or inaccessible in different contexts; all fields are always visible.
Global variable: structures
structures is the list of user-defined structures defined by defstruct.
Function: defstruct (S(a_1, ..., a_n))
Function: defstruct (S(a_1 = v_1, ..., a_n = v_n))
Define a structure, which is a list of named fields a_1, ..., a_n associated with a
symbol S. An instance of a structure is just an expression which has operator S and
exactly n arguments. new(S) creates a new instance of structure S.
An argument which is just a symbol a specifies the name of a field. An argument which is
an equation a = v specifies the field name a and its default value v. The default value can
be any expression.
defstruct puts S on the list of user-defined structures, structures.
kill(S) removes S from the list of user-defined structures, and removes the structure
definition.
Examples:
(%i1) defstruct (foo (a, b, c));
(%o1) [foo(a, b, c)]
(%i2) structures;
(%o2) [foo(a, b, c)]
(%i3) new (foo);
(%o3) foo(a, b, c)
(%i4) defstruct (bar (v, w, x = 123, y = %pi));
(%o4) [bar(v, w, x = 123, y = %pi)]
(%i5) structures;
(%o5) [foo(a, b, c), bar(v, w, x = 123, y = %pi)]
(%i6) new (bar);
(%o6) bar(v, w, x = 123, y = %pi)
(%i7) kill (foo);
(%o7) done
(%i8) structures;
(%o8) [bar(v, w, x = 123, y = %pi)]
Function: new (S)
Function: new (S(v_1, ..., v_n))
new creates new instances of structures.
new(S) creates a new instance of structure S in which each field is assigned its default
value, if any, or no value at all if no default was specified in the structure definition.
new(S(v_1, ..., v_n)) creates a new instance of S in which fields are assigned the
values v_1, ..., v_n.
Examples:
(%i1) defstruct (foo (w, x = %e, y = 42, z));
(%o1) [foo(w, x = %e, y = 42, z)]
(%i2) new (foo);
(%o2) foo(w, x = %e, y = 42, z)
(%i3) new (foo (1, 2, 4, 8));
(%o3) foo(w = 1, x = 2, y = 4, z = 8)
Operator: @
@ is the structure field access operator. The expression x@a refers to the value of
field a of the structure instance x. The field name is not evaluated.
If the field a in x has not been assigned a value, x@a evaluates to itself.
kill(x@a) removes the value of field a in x.
Examples:
(%i1) defstruct (foo (x, y, z));
(%o1) [foo(x, y, z)]
(%i2) u : new (foo (123, a - b, %pi));
(%o2) foo(x = 123, y = a - b, z = %pi)
(%i3) u@z;
(%o3) %pi
(%i4) u@z : %e;
(%o4) %e
(%i5) u;
(%o5) foo(x = 123, y = a - b, z = %e)
(%i6) kill (u@z);
(%o6) done
(%i7) u;
(%o7) foo(x = 123, y = a - b, z)
(%i8) u@z;
(%o8) u@z
The field name is not evaluated.
(%i1) defstruct (bar (g, h));
(%o1) [bar(g, h)]
(%i2) x : new (bar);
(%o2) bar(g, h)
(%i3) x@h : 42;
(%o3) 42
(%i4) h : 123;
(%o4) 123
(%i5) x@h;
(%o5) 42
(%i6) x@h : 19;
(%o6) 19
(%i7) x;
(%o7) bar(g, h = 19)
(%i8) h;
(%o8) 123
6. Logic and Techniques
Program = data structure + algorithm
Algorithm = logic + control
A program is a theory (in some logic) and computation is deduction from the theory.
Logic programming is characterized by programming with relations and inference.
Keywords and phrases: Horn clause, logic programming, inference, modus ponens,
modus tollens, logic variable, unification, unifier, most general unifier, occurs-check,
backtracking, closed world assumption, meta programming, pattern matching, set,
relation, tuple, atom, constant, variable, predicate, functor, arity, term, compound term,
ground, nonground, substitution, instance, instantiation, existential quantification,
universal quantification, unification, modus ponens, proof tree, goal, resolvent.
A logic program consists of a set of axioms and a goal statement. The rules of inference
are applied to determine whether the axioms are sufficient to ensure the truth of the goal
statement. The execution of a logic program corresponds to the construction of a proof of
the goal statement from the axioms.
In the logic programming model the programmer is responsible for specifying the basic
logical relationships and does not specify the manner in which the inference rules are
applied. Thus
Logic + Control = Algorithms
Logic programming is based on tuples. Predicates are abstractions and generalization of
the data type of tuples. Recall, a tuple is an element of
S0 x S1 x ... x Sn
The squaring function for natural numbers may be written as a set of tuples as follows:
{(0,0), (1,1), (2,4) ...}
Such a set of tuples is called a relation and in this case the tuples define the squaring
relation.
sqr = {(0,0), (1,1), (2,4) ...}
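This set-of-tuples view can be sketched directly in Python (a hypothetical illustration: the finite set stands in for the infinite relation, and the function names are my own):

```python
# A finite fragment of the squaring relation, represented as a set of tuples.
sqr = {(0, 0), (1, 1), (2, 4), (3, 9)}

def holds(relation, *args):
    """A ground query is simply a membership test against the relation."""
    return args in relation

def solutions(relation, x=None, y=None):
    """Query the relation in any mode: None marks an unbound argument."""
    return {(a, b) for (a, b) in relation
            if (x is None or a == x) and (y is None or b == y)}
```

Unlike a function, the same relation answers solutions(sqr, x=2) and solutions(sqr, y=4) equally well, which foreshadows the multi-mode queries discussed later in this chapter.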
Abstracting to the name sqr and generalizing an individual tuple we can define the
squaring relation as:
sqr = {(x, x*x)}
Parameterizing the name gives:
sqr(X,Y) <-- Y is X*X
In the logic programming language Prolog this would be written as:
sqr(X,Y) :- Y is X*X.
Note that the set of tuples is named sqr and that the parameters are X and Y. Prolog
does not evaluate its arguments unless required, so the expression Y is X*X forces the
evaluation of X*X and unifies the answer with Y. The Prolog code
P <-- Q.
may be read in a number of ways; it could be read P where Q or P if Q. In this latter form
it is a variant of the first-order predicate calculus known as Horn clause logic. A complete
reading of the sqr predicate from the point of view of logic is: for every X and Y, Y is the
sqr of X if Y is X*X. From the point of view of logic, we say that the variables are universally
quantified. Horn clause logic has a particularly simple inference rule which permits its use
as a model of computation. This computational paradigm is called Logic programming
and deals with relations rather than functions or assignments. It uses facts and rules to
represent information and deduction to answer queries. Prolog is the most widely
available programming language to implement this computational paradigm.
Relations may be composed. For example, suppose we have the predicates, male(X),
siblingof(X,Y), and parentof(Y,Z) which define the obvious relations, then we can define
the predicate uncleof(X,Z) which implements the obvious relation as follows:
uncleof(X,Z) <-- male(X), siblingof(X,Y), parentof(Y,Z).
The logical reading of this rule is as follows: ``for every X, Y and Z, X is the uncle of Z, if
X is a male who has a sibling Y which is the parent of Z.'' Alternately, ``X is the uncle of
Z, if X is a male and X is a sibling of Y and Y is a parent of Z.''
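The composition of relations can be mimicked in Python over sets of tuples (a sketch using made-up family facts, not data from the text):

```python
# Hypothetical facts, one set of tuples per relation.
male = {"tom"}
siblingof = {("tom", "ann")}    # tom is a sibling of ann
parentof = {("ann", "joe")}     # ann is a parent of joe

def uncleof(x, z):
    """x is an uncle of z if x is male, x has a sibling y, and y is a parent of z."""
    siblings_of_x = {y for (a, y) in siblingof if a == x}
    return x in male and any((y, z) in parentof for y in siblings_of_x)
```

The conjunction of goals in the rule body becomes a conjunction of membership tests; the shared variable Y becomes the iteration variable joining the two relations.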
fatherof(X,Y), fatherof(Y,Z) defines paternalgrandfather(X,Z)
The difference between logic programming and functional programming may be
illustrated as follows. The logic program
f(X,Y) <-- Y = X*3+4
is an abbreviation for
forall X,Y (f(X,Y) <-- Y = X*3+4)
which asserts a condition that must hold between the corresponding domain and range
elements of the function. In contrast, a functional definition introduces a functional object
to which functional operations such as functional composition may be applied.
Logic programming has many application areas:
Relational Data Bases
Natural Language Interfaces
Expert Systems
Symbolic Equation solving
Planning
Prototyping
Simulation
Programming Language Implementation
Syntax
There are just four constructs: constants, variables, function symbols, predicate symbols,
and two logical connectives, the comma (and) and the implication symbol.
Core Prolog
P in Programs
C in Clauses
Q in Queries
A in Atoms
T in Terms
X in Variables
Program ::= Clause... Query | Query
Clause ::= Predicate . | Predicate :- PredicateList .
PredicateList ::= Predicate | PredicateList , Predicate
Predicate ::= Atom | Atom( TermList )
TermList ::= Term | TermList , Term
Term ::= Numeral | Atom | Variable | Structure
Structure ::= Atom ( TermList )
Query ::= ?- PredicateList .
Numeral ::= an integer or real number
Atom ::= string of characters beginning with a lowercase letter or enclosed in
apostrophes.
Variable ::= string of characters beginning with an uppercase letter or
underscore
Terminals = {Numeral, Atom, Variable, :-, ?-, comma, period, left and right
parentheses }
While there is no standard syntax for Prolog, most implementations recognize
the grammar in Figure M.N.
Figure M.N: Prolog grammar
P in Programs
C in Clauses
Q in Query
H in Head
B in Body
A in Atoms
T in Terms
X in Variable
P ::= C... Q...
C ::= H [ :- B ] .
H ::= A [ ( T [,T]... ) ]
B ::= G [, G]...
G ::= A [ ( [ X | T ]... ) ]
T ::= X | A [ ( T... ) ]
Q ::= ?- B .
In logic, relations are named by predicate symbols chosen from a prescribed vocabulary.
Knowledge about the relations is then expressed by sentences constructed from
predicates, connectives, and formulas. An n-ary predicate is constructed from prefixing
an n-tuple with an n-ary predicate symbol.
A logic program is a set of axioms, or rules, defining relationships between objects. A
computation of a logic program is a deduction of consequences of the program. A
program defines a set of consequences, which is its meaning. The art of logic
programming is constructing concise and elegant programs that have the desired
meaning.
The basic constructs of logic programming, terms and statements are inherited from
logic. There are three basic statements: facts, rules and queries. There is a single data
structure: the logical term.
Facts, Predicates and Atoms
Facts are a means of stating that a relationship holds between objects.
father(bill,mary).
plus(2,3,5).
...
This fact states that the relation father holds between bill and mary. Another name for a
relationship is predicate.
Semantics
The operational semantics of logic programs correspond to logical inference. The
declarative semantics of logic programs are derived from the term model commonly
referred to as the Herbrand base. The denotational semantics of logic programs are
defined in terms of a function which assigns meaning to the program.
There is a close relation between the axiomatic semantics of imperative programs and
logic programs. A logic program to sum the elements of a list could be written as follows.
sum([Nth],Nth).
sum([Ith|Rest],Ith + Sum_Rest) <-- sum(Rest,Sum_Rest).
A proof of its correctness is trivial since the logic program is but a statement of the
mathematical properties of the sum.
A[N] = sum_{i=N}^{N} A[i]
sum([A[N]], A[N]).
sum_{i=I}^{N} A[i] = A[I] + S  if  sum_{i=I+1}^{N} A[i] = S
sum([A[I],...,A[N]], A[I]+S) <-- sum([A[I+1],...,A[N]],S).
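The two clauses translate almost verbatim into a recursive function; here is a Python rendering (assuming a non-empty list, as the logic program does):

```python
def list_sum(xs):
    # Base clause: sum([N], N).
    if len(xs) == 1:
        return xs[0]
    # Recursive clause: sum([I|Rest], I + S) <-- sum(Rest, S).
    head, *rest = xs
    return head + list_sum(rest)
```

The correctness argument is the same as for the logic program: each clause is a direct statement of a mathematical property of the sum.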
Operational Semantics
Definition: The meaning of a logic program P, M(P), is the set of unit goals deducible from
P.
Logic Program A logic program is a finite set of facts and rules.
Interpretation and meaning of logic programs.
The rule of instantiation (from P(X) deduce P(c)). The rule of deduction is modus
ponens: from A :- B1, B2, ..., Bn. and B1', B2', ..., Bn' infer A'. Primes indicate
instances of the corresponding term.
The meaning M(P) of a logical program P is the set of unit goals deducible from the
program.
A program P is correct with respect to some intended meaning M iff the meaning of
P, M(P), is a subset of M (the program does not say things that were not intended).
A program P is complete with respect to some intended meaning M iff M is a subset
of M(P) (the program says everything that was intended).
A program P is correct and complete with respect to some intended meaning M iff
M = M(P).
The operational semantics of a logic program can be described in terms of logical
inference using unification and the inference rule resolution. The following logic program
illustrates logical inference.
a.
b <-- a.
b?
We can conclude b by modus ponens given that b <-- a and a. Alternatively, if b is
assumed to be false, then from b <-- a and modus tollens we infer ¬a; but since a is given
we have a contradiction, and b must hold. The following program illustrates unification.
parent_of(a,b).
parent_of(b,c).
ancestor_of(Anc,Desc) <-- parent_of(Anc,Desc).
ancestor_of(Anc,Desc) <-- parent_of(Anc,Interm),
ancestor_of(Interm,Desc).
parent_of(a,b)?
ancestor_of(a,b)?
ancestor_of(a,c)?
ancestor_of(X,Y)?
Consider the query `ancestor_of(a,c)?'. To answer the question ``is a an ancestor of c'',
we must select the second rule for the ancestor relation and unify a with Anc and c with
Desc. Interm then unifies with b through the relation parent_of(a,b). The resulting goal,
ancestor_of(b,c)?, is answered by the first rule for the ancestor_of relation. The last query
is asking the question, ``Are there two persons such that the first is an ancestor of the
second.'' The variables in queries are said to be existentially quantified. In this case the X
unifies with a and the Y unifies with b through the parent_of relation. Formally,
Definition M.N:
A unifier of two terms is a substitution making the terms identical. If two terms
have a unifier, we say they unify.
For example, two identical terms unify with the identity substitution. concat([1,2,3],
[3,4],List) and concat([X|Xs],Ys,[X|Zs]) unify with the substitutions {X = 1, Xs = [2,3],
Ys = [3,4], List = [1|Zs]}
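A minimal Python sketch of unification with the occurs-check: terms are represented as tuples ('functor', arg1, ...), Prolog-style capitalized strings are variables, and everything else is a constant (the representation and names are my own, not from the text):

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Dereference term t through the substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs-check: does variable v appear inside term t?"""
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, s) for arg in t[1:])
    return False

def unify(a, b, s=None):
    """Return a unifying substitution extending s, or None if none exists."""
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None
```

For example, unify(('p', 'a', 'X'), ('p', 'Y', 'm')) yields the substitution {'Y': 'a', 'X': 'm'}, while unify('X', ('f', 'X')) fails by the occurs-check.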
There is just one rule of inference which is resolution. Resolution is much like proof by
contradiction. An instance of a relation is ``computed'' by constructing a refutation.
During the course of the proof, a tree is constructed with the statement to be proved at
the root. When we construct proofs we will use the symbol ¬ to mark formulas which we
either assume are false or infer are false, and the symbol [] for contradiction. Resolution
is based on the inference rule modus tollens and unification. This is the modus tollens
inference rule.
From ¬B
and B <-- A0,...,An
infer ¬A0 or ... or ¬An
Notice that as a result of the inference there are several choices. Each ¬Ai is a
formula marking a new branch in the proof tree. A contradiction occurs when both a
formula and its negation appear on the same path through the proof tree. A path is said
to be closed when it contains a contradiction otherwise a path is said to be open. A
formula has a proof if and only if each path in the proof tree is closed. The following is a
proof tree for the formula B under the hypotheses A0 and B <-- A0,A1.
1 From ¬B
2 and A0
3 and B <-- A0,A1
4 infer ¬A0 or ¬A1
5 choose ¬A0
6 contradiction []
7 choose ¬A1
8 no further possibilities: open
There are two paths through the proof tree, 1-4, 5, 6 and 1-4, 7, 8. The first path
contains a contradiction while the second does not. The contradiction is marked with [].
As an example of computing in this system of logic suppose we have defined the
relations parent and ancestor as follows:
1. parent_of(ogden,anthony).
2. parent_of(anthony,mikko).
3. parent_of(anthony,andra).
4. ancestor_of(A,D) <-- parent_of(A,D).
5. ancestor_of(A,D) <-- parent_of(A,X), ancestor_of(X,D).
where identifiers beginning with lower case letters designate constants and identifiers
beginning with an upper case letter designate variables. We can infer that ogden is an
ancestor of mikko as follows.
¬ancestor(ogden,mikko)                  the assumption
¬parent(ogden,X) or ¬ancestor(X,mikko)  resolution
¬parent(ogden,X)                        first choice
parent(ogden,anthony)                   unification with first entry
[]                                      produces a contradiction
¬ancestor(anthony,mikko)                second choice
¬parent(anthony,mikko)                  resolution
[]                                      a contradiction of a fact
Notice that all choices result in contradictions and so this proof tree is a proof of the
proposition that ogden is an ancestor of mikko. In a proof, when unification occurs, the
result is a substitution. In the first branch of the previous example, the term anthony is
unified with the variable X and anthony is substituted for all occurrences of the variable X.
The unification algorithm can be defined in Prolog. The figure below contains a
formal definition of unification in Prolog.
Figure M.N: Unification Algorithm
unify(X,Y) <-- X == Y.
unify(X,Y) <-- var(X), var(Y), X=Y.
unify(X,Y) <-- var(X), nonvar(Y), \+ occurs(X,Y), X=Y.
unify(X,Y) <-- var(Y), nonvar(X), \+ occurs(Y,X), Y=X.
unify(X,Y) <-- nonvar(X), nonvar(Y), functor(X,F,N),
functor(Y,F,N),
X =..[F|R], Y =..[F|T], unify_lists(R,T).
unify_lists([ ],[ ]).
unify_lists([X|R],[H|T]) <-- unify(X,H), unify_lists(R,T).
occurs(X,Y) <-- X==Y.
occurs(X,T) <-- functor(T,F,N), T =..[F|Ts],
occurs_list(X,Ts).
occurs_list(X,[Y|R]) <-- occurs(X,Y).
occurs_list(X,[Y|R]) <-- occurs_list(X,R).
Unification subsumes:
single assignment
parameter passing
record allocation
read/write-once field-access in records
Figure M.N: Inference Rules
From the clause A1 <-- B. and the query ?- A1, A2, ..., An.
infer the query ?- B, A2, ..., An.
From the query ?- true, A1, A2, ..., An.
infer the query ?- A1, A2, ..., An.
To illustrate the inference rules, consider the following program consisting of three rules,
two facts and a query:
a <-- b, c.   b <-- d.   b <-- e.   d.   c.   ?- a.
By applying the inference rules to the program we derive the following additional queries:
?- b, c.   ?- d, c.   ?- e, c.   ?- c.   ?-
Among the queries is an empty query. The presence of the empty query indicates that
the original query is satisfiable, that is, the answer to the query is yes. Alternatively, the
query is a theorem, provable from the given facts and rules.
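The derivation above is exactly SLD resolution, which can be sketched for the propositional case in a few lines of Python (the dictionary encodes a small program in the style of the example, with d and c taken as facts so the query succeeds; all names are my own):

```python
# Each predicate maps to its alternative clause bodies; a fact has an empty body.
# Encodes: a <-- b, c.   b <-- d.   b <-- e.   d.   c.
program = {
    "a": [["b", "c"]],
    "b": [["d"], ["e"]],
    "d": [[]],
    "c": [[]],
}

def solve(goals):
    """Replace the first goal by the body of a matching clause; the empty
    query signals success (the answer yes)."""
    if not goals:
        return True
    first, rest = goals[0], goals[1:]
    return any(solve(body + rest) for body in program.get(first, []))
```

solve(["a"]) follows the same chain of queries as the text: ?- b, c, then ?- d, c, then ?- c, then the empty query.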
Inference and Unification
Definition: The law of universal modus ponens says that from
R = (A :- B1,...,Bn) and
B'1.
...
B'n.
A' can be deduced, if A' :- B'1,...,B'n is an instance of R.
Definition: A logic program is a finite set of rules.
Definition: An existentially quantified goal G is a logical consequence of a program P iff
there is a clause in P with an instance A :- B1,...,Bn, n \geq 0 such that B1,...,Bn are
logical consequences of P and A is an instance of G.
The control portion of the equation is provided by an inference engine whose role is to
derive theorems based on the set of axioms provided by the programmer. The inference
engine uses the operations of resolution and unification to construct proofs.
Resolution says that given the axioms
f if a0, ..., am.
g if f, b0, ..., bn.
the fact
g if a0, ..., am, b0, ..., bn.
can be derived.
Unification is the binding of variables; for example, unifying f(X,b) with f(a,Y) binds X to a and Y to b.
A query containing a variable asks whether there is a value for the variable that makes
the query a logical consequence of the program.
?- father(bill,X).
?- father(X,mary).
?- father(X,Y).
Note that variables do not denote a specified storage location, but denote an unspecified
but single entity.
Definition: Constants and variables are terms. A compound term is comprised of a
functor and a sequence of terms. A functor is characterized by its name, which is an
atom and its arity, or number of arguments.
X, 3, mary, fatherof(F,mary), ...
Definition: Queries, facts and terms which do not contain variables are called ground.
Where variables do occur they are called nonground.
Definition: A substitution is a finite set (possibly empty) of pairs of the form Xi=ti, where
Xi is a variable and ti is a term, and Xi is distinct from Xj for every i distinct from j, and Xi
does not occur in tj, for any i and j.
p(a,X,t(Z)), p(Y,m,Q); theta = { X=m, Y=a, Q=t(Z) }
Definition: A is an instance of B if there is a substitution theta such that A = B theta.
Definition: Two terms A and B are said to have a common instance C iff there are
substitutions theta1 and theta2 such that C = A theta1 and C = B theta2.
A = plus(0,3,Y), B = plus(0,X,X), C = plus(0,3,3)
since C = A{ Y=3 } and C = B{ X=3 }.
Definition: A unifier of two terms A and B is a substitution making the two terms
identical. If two terms have a unifier they are said to unify.
p(a,X,t(Z)) theta = p(Y,m,Q) theta where theta = { X=m, Y=a, Q=t(Z) }
Definition: A most general unifier or mgu of two terms is a unifier such that the
associated common instance is most general.
unify(A,B) :- unify1(A,B).
unify1(X,Y) :- X == Y.
unify1(X,Y) :- var(X), var(Y), X=Y. % The substitution
unify1(X,Y) :- var(X), nonvar(Y), \+ occurs(X,Y), X=Y. % The substitution
unify1(X,Y) :- var(Y), nonvar(X), \+ occurs(Y,X), Y=X. % The substitution
unify1(X,Y) :- nonvar(X), nonvar(Y), functor(X,F,N), functor(Y,F,N),
X =..[F|R], Y =..[F|T], match_list(R,T).
match_list([],[]).
match_list([X|R],[H|T]) :- unify(X,H), match_list(R,T).
occurs(A,B) :- A == B.
occurs(A,B) :- nonvar(B), functor(B,F,N), occurs(A,B,N).
occurs(A,B,N) :- N > 0, arg(N,B,AN), occurs(A,AN),!. % RED
occurs(A,B,M) :- M > 0, N is M - 1, occurs(A,B,N).
A Simple Interpreter for Pure Prolog
An interpreter for pure Prolog can be written in Prolog.
A simple interpreter for pure Prolog
is_true( Goals ) <-- resolved( Goals ).
is_true( Goals ) <-- write( no ), nl.
resolved([]).
resolved(Goals) <-- select(Goal,Goals,RestofGoals),
% Goal unifies with head of some rule
clause(Head,Body), unify( Goal, Head ),
add(Body,RestofGoals,NewGoals),
resolved(NewGoals).
prove(true).
prove((A,B)) <-- prove(A), prove(B). % select first goal
prove(A) <-- clause(A,B), prove(B). % select only goal and find a rule
This is the Prolog code for an interpreter.
The interpreter can be used as the starting point for the construction of a debugger for
Prolog programs and a starting point for the construction of an inference engine for an
expert system.
The operational semantics for Prolog are given below.
Logic Programming (Horn Clause Logic) -- Operational Semantics
Abstract Syntax:
P in Programs
C in Clauses
Q in Queries
T in Terms
A in Atoms
X in Variables
P ::= (C | Q)...
C ::= G [ <-- G1 [, G2]... ] .
G ::= A [ ( T [,T]... ) ]
T ::= X | A [ ( T [,T]... ) ]
Q ::= G [,G]... ?
Semantic Domains:
beta in B = Bindings
epsilon in E = Environment
Semantic Functions:
R in Q --> B --> (B + (B x {yes}) + {no})
U in C x C --> B --> B
Semantic Equations:
R[ ? ] beta, epsilon = (beta, yes)
R[ G ] beta, epsilon = beta'
    where G' in epsilon, U[ G, G' ] beta = beta'
R[ G ] beta, epsilon = R[ B ] beta', epsilon
    where (G' <-- B) in epsilon, U[ G, G' ] beta = beta'
R[ G1,G2 ] beta, epsilon = R[ B,G2 ] (R[ G1 ] beta, epsilon), epsilon
R[ G ] beta, epsilon = no
    where no other rule applies
Declarative Semantics
The declarative semantics of logic programs is based on the standard model-theoretic
semantics of first-order logic.
Definition M.N:
Let P be a logic program. The Herbrand universe of P, denoted by U(P), is the
set of ground terms that can be formed from the constants and function
symbols appearing in P.
Definition M.N:
The Herbrand base, denoted by B(P), is the set of all ground goals that
can be formed from the predicates in P and the terms in the Herbrand
universe.
The Herbrand base is infinite if the Herbrand universe is.
Definition M.N:
An interpretation for a logic program is a subset of the Herbrand base.
An interpretation assigns truth and falsity to the elements of the Herbrand base. A goal
in the Herbrand base is true with respect to an interpretation if it is a member of it, false
otherwise.
Definition M.N:
An interpretation I is a model for a logic program if, for each ground instance of
a clause in the program A <-- B1, ... , Bn, A is in I whenever B1, ... , Bn are in I.
This approach to the semantics is often called the term model.
Denotational Semantics
Denotational semantics assigns meanings to programs based on associating with the
program a function over the domain computed by the program. The meaning of the
program is defined as the least fixed point of the function, if it exists.
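For a ground program, the least fixed point can be computed by iterating the immediate-consequence operator T_P upward from the empty interpretation; the following Python sketch uses a made-up ground program, not notation from the text:

```python
# Ground clauses as (head, body) pairs; facts have an empty body.
clauses = [
    (("parent", "a", "b"), []),
    (("parent", "b", "c"), []),
    (("anc", "a", "b"), [("parent", "a", "b")]),
    (("anc", "b", "c"), [("parent", "b", "c")]),
    (("anc", "a", "c"), [("parent", "a", "b"), ("anc", "b", "c")]),
]

def tp(i):
    """Immediate-consequence operator: heads whose bodies are true in i."""
    return {head for (head, body) in clauses if all(b in i for b in body)}

def least_model():
    """Iterate T_P from the empty set until a fixed point is reached."""
    i, nxt = set(), tp(set())
    while nxt != i:
        i, nxt = nxt, tp(nxt)
    return i
```

The fixed point reached is the least Herbrand model of the program; here it contains all five ground goals, including ('anc', 'a', 'c').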
Pragmatics
Logic Programming and Software Engineering
Programs are theories and computation is deduction from the theory. Thus the process of
software engineering becomes:
obtain a problem description
define the intended model of interpretation (domains, symbols etc)
devise a suitable theory (the logic component) suitably restricted so as to have an
efficient proof procedure.
describe the control component of the program
use declarative debugging to isolate errors in definitions
Pros and Cons
Pro
Closer to problem domain thus higher programmer productivity
Separation of logic and control (focuses on the logical structure of the
problem rather than control of execution)
Simple declarative semantics and referential transparency
Suitable for prototyping and exploratory programming
Strong support for meta-programming
Transparent support for parallel execution
Con
Operational implementation is not faithful to the declarative semantics
Unsuited for state based programming
Often inefficient
The Logical Variable
The logical variable, terms and lists are the basic data structures in logic programming.
Here is a definition of the relation between the prefixes and suffixes of a list. The relation is
named concat because it may be viewed as defining the result of appending two lists to
get the third list.
concat([ ], L, L).
concat([H|T], L, [H|TL]) <-- concat(T, L, TL).
Logical variables operate in a way much different than variables in traditional
programming languages. By way of illustration, consider the following instances of the
concat relation.
1. ?- concat([a,b,c],[d,e],L).
   L = [a,b,c,d,e], the expected use of the concat operation.
2. ?- concat([a,b,c],S,[a,b,c,d,e]).
   S = [d,e], the suffix of L.
3. ?- concat(P,[d,e],[a,b,c,d,e]).
   P = [a,b,c], the prefix of L.
4. ?- concat(P,S,[a,b,c,d,e]).
   P = [ ], S = [a,b,c,d,e];  P = [a], S = [b,c,d,e];  P = [a,b], S = [c,d,e];
   P = [a,b,c], S = [d,e];  P = [a,b,c,d], S = [e];  P = [a,b,c,d,e], S = [ ];
   the prefixes and suffixes of L.
5. ?- concat(_,[c|_],[a,b,c,d,e]).
   answers Yes since c is the first element of some suffix of L.
Thus concat gives us 5 predicates for the price of one.
concat(L1,L2,L)
prefix(Pre,L) <-- concat(Pre,_,L).
suffix(Suf,L) <-- concat(_,Suf,L).
split(L,Pre,Suf) <-- concat(Pre,Suf,L).
member(X,L) <-- concat(_,[X|_],L).
The underscore _ designates an anonymous variable; it matches anything.
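The multi-mode behaviour of concat has a direct, if less elegant, Python counterpart: enumerating all splits of a list recovers the derived predicates (a sketch; the function names are my own):

```python
def splits(l):
    """All (prefix, suffix) pairs with prefix + suffix == l, like the
    query concat(P, S, L) with P and S unbound."""
    return [(l[:i], l[i:]) for i in range(len(l) + 1)]

def prefixes(l):
    return [p for (p, _) in splits(l)]

def member(x, l):
    # member(X, L) <-- concat(_, [X|_], L): x heads some suffix of l.
    return any(s[:1] == [x] for (_, s) in splits(l))
```

Where the Prolog relation enumerates answers on backtracking, the Python version must materialize every split explicitly.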
There are two simple types of constants, string and numeric. Arrays may be represented
as a relation. For example, the two-dimensional matrix
data = ( mary  18.47
         john  34.6
         jane  64.4 )
may be written as
data(1,1,mary).   data(1,2,18.47).
data(2,1,john).   data(2,2,34.6).
data(3,1,jane).   data(3,2,64.4).
Records may be represented as terms and the fields accessed through pattern matching.
book(author( last(aaby), first(anthony), mi(a)),
title('programming language concepts'),
pub(wadsworth),
date(1991))
book(A,T,pub(W),D)
Lists are written between brackets [ and ], so [ ] is the empty list and [b, c] is the list of
two symbols b and c. If H is a symbol and T is a list then [H|T] is a list with head H and
tail T. Stacks may then be represented as a list. Trees may be represented as lists of lists
or as terms.
Lists may be used to simulate stacks, queues and trees. In addition, the logical variable
may be used to implement incomplete data structures.
Incomplete Data Structures
The following code implements a binary search tree as an incomplete data structure. It
may be used both to construct the tree by inserting items into the tree and to search the
tree for a particular key and associated data.
lookup(Key,Data,bt(Key,Data,LT,RT)).
lookup(Key,Data,bt(Key0,Data0,LT,RT)) <-- Key @< Key0,
lookup(Key,Data,LT).
lookup(Key,Data,bt(Key0,Data0,LT,RT)) <-- Key @> Key0,
lookup(Key,Data,RT).
The following is a sequence of calls; note that the initial call is with the unbound variable BT.
lookup(john,46,BT), lookup(jane,35,BT), lookup(allen,49,BT),
lookup(jane,Age,BT).
The first three calls initialize the dictionary to contain those entries while the last call
extracts jane's age from the dictionary.
The logical variable and the incomplete data structure can be used to append lists in
constant time. The programming technique is known as difference lists. The empty
difference list is X/X. The concat relation for difference lists is defined as follows:
concat_dl(Xs/Ys, Ys/Zs, Xs/Zs).
Here is an example of a use of the definition.
?- concat_dl([1,2,3|X]/X, [4,5,6|Y]/Y, Z).

X = [4,5,6|_1]
Y = _1
Z = [1,2,3,4,5,6|_1] / _1
Yes
The relation between ordinary lists and difference lists is defined as follows:
ol_dl([ ], X/X) <-- var(X).
ol_dl([F|R], [F|DL]/Y) <-- ol_dl(R, DL/Y).
Arithmetic
Terms are simply patterns; they may not have a value in and of themselves. For example,
here is a definition of the relation between two numbers and their product.
times(X,Y,X*Y)
However, the product is a pattern rather than a value. In order to force the evaluation of
an expression, a Prolog definition of the same relation would be written
times(X,Y,Z) <-- Z is X*Y
7. Conclusion
From the above study of the principles of programming languages, it is clear that there
are six important basic principles. These principles are the basis, roots, or sources of a
language, and in themselves they are not flexible. In computing, however, the basics also
take on flexibility, adapting their components to the stated objectives; object-oriented
programming, for instance, was introduced to meet such stated targets, and it promotes
flexibility of the general programming principles in composite computing.
As discussed above, a language should be built on the basic principles of its procedural
and functional structures. Syntax, semantics, logic, and data types are its ingredients;
indeed, these ingredients are the founding or building materials of programming
languages. A programming language should be neither rigid nor overly flexible: it should
transform, and transport its ingredients, toward the programmer's targets or objectives
in a complex computing environment. This objective is what the above chapters have
tried to discuss.