
Course Faculty

Dr. D. Indumathi
Associate Professor
Dept. of CSE
PSG Tech
The students will be able
 To learn formal syntax and semantics of programming languages.
 To learn the fundamental concepts of different programming languages.
Upon completion of this course, the students will be able to
 CO1: Understand the need, syntax and semantics of programming languages.
 CO2: Describe the imperative and object-oriented languages.
 CO3: Understand and apply functional languages and their elements.
 CO4: Describe the logic programming languages.
 CO5: Understand concurrent programming.
INTRODUCTION (PROGRAMMING LANGUAGE DEFINITION AND DESCRIPTION): Reason for studying
concepts of Programming Languages – Programming Domains – Language Evaluation Criteria –
Influences on Language Design – Language Categories – Language Design Trade-offs – Implementation
methods – Programming Environment - Language Description: Syntactic Structure: Expression Notations
- Abstract Syntax Trees – Lexical Syntax - Semantic Structure: Synthesized Attributes - Attribute
Grammars - Variants of Grammars - Natural Semantics - Denotational Semantics (9+3)

IMPERATIVE AND OBJECT-ORIENTED LANGUAGES: Names, their scope, life and binding. Control flow, control abstraction in subprograms, and exception handling. Primitive and constructed data types, data abstraction, inheritance, type checking and polymorphism.
(9+3)

FUNCTIONAL LANGUAGES: Elements of Functional Programming - Types: Values and Operations - Function Declarations - Approaches to Expression Evaluation - Lexical Scope. Type Checking - Functional Programming in a typed calculus - Functional Programming with Lists: The Structure of Lists, List Manipulation - Storage Allocation for Lists. Case study.
(9+3)

LOGIC PROGRAMMING LANGUAGES: Computing with relations, first-order logic, SLD-resolution, unification, sequencing of control, negation, implementation, case study.
(9+3)

CONCURRENT PROGRAMMING: Communication and synchronization, shared memory and message passing, safety and liveness properties, multithreaded programs.
(9+3)
TEXT BOOKS:
1. Sebesta R W, "Concepts of Programming Languages", Eleventh Edition, Pearson, USA, 2019.
2. Sethi R, "Programming Languages: Concepts and Constructs", Second Edition, Pearson India, 2007.

REFERENCES:
1. Friedman D P, Wand M, "Essentials of Programming Languages", 3rd Edition, The MIT Press, 2008.
2. Harper R, "Practical Foundations for Programming Languages", 2nd Edition, Cambridge University Press, 2016.
3. Scott M L, "Programming Language Pragmatics", 4th Edition, Morgan Kaufmann, 2015.
4. Turbak F A, Gifford D K, Sheldon M A, "Design Concepts in Programming Languages", The MIT Press, Massachusetts, 2008.
5. Pierce B C, "Types and Programming Languages", MIT Press, 2002.
 Benefits of studying PL concepts
◦ Increased capability to express ideas
◦ Improved background for choosing
appropriate languages
◦ Increased ability to learn new languages
◦ Better understanding of significance of
implementation
◦ Better use of languages that are already
known
◦ Overall advancement of computing
 Scientific applications
– Large number of floating point computations
– Fortran
 Business applications
– Produce reports, use decimal numbers and characters
– COBOL
 Artificial intelligence
– Symbols rather than numbers manipulated
– LISP
 Systems programming
– Need efficiency because of continuous use
– C
 Web Software
– Eclectic collection of languages: markup (e.g., XHTML),
scripting (e.g., PHP), general-purpose (e.g., Java)
• Readability : the ease with which programs can be
read and understood
• Writability : the ease with which a language can be
used to create programs
• Reliability : conformance to specifications (i.e.,
performs to its specifications)
• Cost : the ultimate total cost
 Overall simplicity
– A manageable set of features and constructs
– Minimal feature multiplicity (multiple means of doing the same operation; see the C sketch after this list)
– Minimal operator overloading

 Orthogonality
– A relatively small set of primitive constructs can be combined in a relatively
small number of ways to build the control and data structures of the
language
– Every possible combination is legal

 Control statements
– The presence of well-known control structures (e.g., while statement)

 Data types and structures
– The presence of adequate facilities for defining data structures

 Syntax considerations
– Identifier forms: flexible composition
– Special words and methods of forming compound statements
– Form and meaning: self-descriptive constructs, meaningful keywords
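As a rough illustration of feature multiplicity (my own sketch, not from the slides), C offers several equivalent ways to add 1 to an integer; a design aiming for overall simplicity would keep such redundancy small:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        count = count + 1;   /* plain assignment             */
        count += 1;          /* compound assignment operator */
        count++;             /* postfix increment            */
        ++count;             /* prefix increment             */
        printf("count = %d\n", count);   /* prints count = 4 */
        return 0;
    }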
 Simplicity and Orthogonality
◦ Few constructs, a small number of primitives, a
small set of rules for combining them
 Support for abstraction
◦ The ability to define and use complex structures or
operations in ways that allow details to be ignored
 Expressivity
◦ A set of relatively convenient ways of specifying
operations
◦ Example: the inclusion of the for statement in many modern languages
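A minimal sketch (assumed example, not from the slides) of the expressivity point: the C for statement packs initialization, test, and step into one header, whereas the equivalent while form scatters them:

    #include <stdio.h>

    int main(void) {
        int i, sum = 0;
        /* for: initialization, test and step in one readable header */
        for (i = 1; i <= 10; i++)
            sum += i;

        /* equivalent while loop: the same pieces are spread out */
        int j = 1, sum2 = 0;
        while (j <= 10) {
            sum2 += j;
            j++;
        }
        printf("%d %d\n", sum, sum2);   /* prints 55 55 */
        return 0;
    }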
 Type checking
– Testing for type errors
 Exception handling
– Intercept run-time errors and take corrective measures
 Aliasing
– Presence of two or more distinct referencing methods for the same memory location (see the pointer sketch after this list)
 Readability and writability
– A language that does not support "natural" ways of expressing an algorithm will necessarily use "unnatural" approaches, and hence reduce reliability
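A minimal C sketch of aliasing (an assumed example, not part of the original notes): a pointer and a variable name the same memory cell, so a write through one is visible through the other, which hurts reliability when unintended:

    #include <stdio.h>

    int main(void) {
        int total = 10;
        int *alias = &total;      /* alias and total refer to the same cell  */

        *alias = 99;              /* an update through the pointer ...       */
        printf("%d\n", total);    /* ... is visible through total: prints 99 */
        return 0;
    }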
 Computer Architecture
– Languages are developed around the
prevalent computer architecture, known as
the von Neumann architecture
 Programming Methodologies
– New software development methodologies
(e.g., object-oriented software development)
led to new programming paradigms and by
extension, new programming languages
 Well-known computer architecture: Von
Neumann
 Imperative languages, most dominant, because
of von Neumann computers
– Data and programs stored in memory
– Memory is separate from CPU
– Instructions and data are piped from memory to CPU
– Basis for imperative languages
 Variables model memory cells
 Assignment statements model piping
 Iteration is efficient
 1950s and early 1960s: Simple applications;
worry about machine efficiency
 Late 1960s: people efficiency became important;
readability, better control structures
– structured programming
– top-down design and step-wise refinement
 Late 1970s: Process-oriented to data-oriented
– data abstraction
 Middle 1980s: Object-oriented programming
– Data abstraction + inheritance + polymorphism
 Imperative
◦ Central features are variables, assignment statements, and
iteration
◦ Examples: C, Pascal
 Functional
◦ Main means of making computations is by applying
functions to given parameters
◦ Examples: LISP, Scheme
 Logic
◦ Rule-based (rules are specified in no particular order)
◦ Example: Prolog
 Object-oriented
◦ Data abstraction, inheritance, late binding
◦ Examples: Java, C++
 Markup
◦ New; not a programming language per se, but used to specify the layout of information in Web documents
◦ Examples: XHTML, XML
 Reliability vs. cost of execution
◦ Conflicting criteria
◦ Example: Java demands all references to array
elements be checked for proper indexing but that
leads to increased execution costs
 Readability vs. writability
◦ Another pair of conflicting criteria
◦ Example: APL provides many powerful operators
(and a large number of new symbols), allowing
complex computations to be written in a compact
program but at the cost of poor readability
 Writability (flexibility) vs. reliability
◦ Another pair of conflicting criteria
◦ Example: C++ pointers are powerful and very
flexible but not reliably used
 Connection speed between a computer's
memory and its processor determines the
speed of a computer
 Program instructions often can be executed a
lot faster than the above connection speed;
the connection speed thus results in a
bottleneck
 Known as von Neumann bottleneck; it is the
primary limiting factor in the speed of
computers
 Fetch-execute-cycle (on a von Neumann
architecture)
◦ initialize the program counter
◦ repeat forever
 fetch the instruction pointed by the counter
 increment the counter
 decode the instruction
 execute the instruction
◦ end repeat
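To make the loop above concrete, here is a small, assumed C sketch that mimics the fetch-execute cycle for a toy instruction set (not real hardware; the opcodes are invented for illustration):

    #include <stdio.h>

    enum { LOAD, ADD, PRINT, HALT };   /* invented toy opcodes */

    int main(void) {
        int memory[] = { LOAD, 5, ADD, 3, PRINT, HALT };  /* toy program in memory */
        int pc  = 0;                   /* program counter: initialized to 0        */
        int acc = 0;                   /* a single accumulator register            */

        for (;;) {                     /* repeat forever                           */
            int instr = memory[pc];    /* fetch the instruction pointed by pc      */
            pc++;                      /* increment the counter                    */
            switch (instr) {           /* decode the instruction ...               */
            case LOAD:  acc = memory[pc++];  break;   /* ... and execute it        */
            case ADD:   acc += memory[pc++]; break;
            case PRINT: printf("%d\n", acc); break;   /* prints 8                  */
            case HALT:  return 0;
            }
        }
    }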
 A translator is a programming language processor that converts a computer program from one language to another. It takes a program written in source code and converts it into object or machine code. It detects and reports errors found during translation.
 The source program is written in a source language
 The object program belongs to an object language
Purpose of Translator
 It translates a high-level language program into a machine language program that the Central Processing Unit (CPU) can understand. It also detects errors in the program.

 Different Types of Translators: Assembler, Compiler, Interpreter

Assembler:
source program (in assembly language) → Assembler → object program (in machine language)
 A compiler is a program that can read a program in one language - the source language - and translate it into an equivalent program in another language - the target language;
 An important role of the compiler is to report any errors in the source
program that it detects during the translation process

 If the target program is an executable machine-language program, it can then be called by the user to process inputs and produce outputs;
• An interpreter is another common kind of language processor. Instead of producing a target program as a translation, an interpreter appears to directly execute the operations specified in the source program on inputs supplied by the user.

• The machine-language target program produced by a compiler is usually much faster than an interpreter at mapping inputs to outputs. An interpreter, however, can usually give better error diagnostics than a compiler, because it executes the source program statement by statement.
Compiler
 A compiler is a translator used to convert high-level programming
language to low-level programming language. It converts the whole
program in one session and reports errors detected after the
conversion. Compiler takes time to do its work as it translates high-
level code to lower-level code all at once and then saves it to memory.
A compiler is processor-dependent and platform-dependent.
Interpreter
 Just like a compiler, an interpreter is a translator used to convert high-level programming language to low-level programming language. It converts the program one statement at a time and reports errors as soon as they are detected, while doing the conversion.
 With this, it is easier to detect errors than in a compiler.
It is often used as a debugging tool for software development as it can
execute a single line of code at a time. An interpreter is also more
portable than a compiler as it is not processor-dependent.
Assembler
 An assembler is a translator used to translate assembly language to
machine language. It is like a compiler for the assembly language but
interactive like an interpreter. Assembly language is difficult to
understand as it is a low-level programming language. An assembler
translates a low-level language, an assembly language to an even
lower-level language, which is the machine code. The machine code
can be directly understood by the CPU.
 Compilation
◦ Programs are translated into machine
language
 Pure Interpretation
◦ Programs are interpreted by another program
known as an interpreter
 Hybrid Implementation Systems
◦ A compromise between compilers and pure
interpreters
 Translate high-level program (source
language) into machine code (machine
language)
 Slow translation, fast execution
 Compilation process has several phases:
– lexical analysis: converts characters in the source program into lexical units (see the sketch after this list)
– syntax analysis: transforms lexical units into parse trees which represent the syntactic structure of the program
– semantic analysis: generates intermediate code
– code generation: machine code is generated
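As a rough, assumed sketch of what the lexical-analysis phase does (far simpler than a real compiler), the following C program groups the characters of the statement sum = sum + 42 into lexical units:

    #include <stdio.h>
    #include <ctype.h>

    /* Print each lexical unit of a tiny assignment statement on its own line. */
    int main(void) {
        const char *p = "sum = sum + 42";

        while (*p) {
            if (isspace((unsigned char)*p)) {            /* skip white space       */
                p++;
            } else if (isalpha((unsigned char)*p)) {     /* identifier             */
                printf("IDENTIFIER: ");
                while (isalpha((unsigned char)*p)) putchar(*p++);
                putchar('\n');
            } else if (isdigit((unsigned char)*p)) {     /* integer literal        */
                printf("NUMBER    : ");
                while (isdigit((unsigned char)*p)) putchar(*p++);
                putchar('\n');
            } else {                                     /* one-character operator */
                printf("OPERATOR  : %c\n", *p++);
            }
        }
        return 0;
    }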
 Load module (executable image): the user and system code
together
 Linking and loading: the process of collecting system program
and linking them to user program
 Relocatable machine code: can be loaded at any memory location and run; addresses within the program are adjusted so that the code still works wherever it is placed.
COMPILER vs INTERPRETER
 A compiler is a program which converts the entire source code of a programming language into executable machine code for a CPU; an interpreter takes a source program and runs it line by line, translating each line as it comes to it.
 A compiler takes a large amount of time to analyze the entire source code, but the overall execution time of the program is comparatively faster; an interpreter takes less time to analyze the source code, but the overall execution time of the program is slower.
 A compiler generates error messages only after scanning the whole program, so debugging is comparatively hard as the error can be present anywhere in the program; with an interpreter, debugging is easier as it continues translating the program until the error is met.
 A compiler generates intermediate object code, which further requires linking; an interpreter generates no intermediate object code.
 Compiler examples: C, C++, Java. Interpreter examples: Python, Perl.


 No translation
 Easier implementation of programs (run-time errors can be easily and immediately displayed)
 Slower execution (10 to 100 times slower than compiled programs)
 Often requires more space
 Becoming rare for high-level languages
 Significant comeback with some recent web scripting languages (e.g., JavaScript)
 A compromise between compilers and pure
interpreters
 A high-level language program is translated to an
intermediate language that allows easy interpretation
 Faster than pure interpretation
 Examples
– Perl programs are partially compiled to detect errors before interpretation
– Initial implementations of Java were hybrid; the intermediate form, byte code, provides portability to any machine that has a byte code interpreter and a runtime system (together, these are called the Java Virtual Machine)
 Initially translate programs to an intermediate
language
 Then compile intermediate language into
machine code
 Machine code version is kept for subsequent
calls
 JIT systems are widely used for Java programs
 .NET languages are implemented with a JIT
system
 Preprocessor macros (instructions) are
commonly used to specify that code from
another file is to be included
 A preprocessor processes a program
immediately before the program is compiled
to expand embedded preprocessor macros
 A well-known example: C preprocessor
– expands #include, #define, and similar
macros
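A small C example of the preprocessor directives named above; #include and #define are expanded before compilation (the macro names PI and SQUARE are just illustrative):

    #include <stdio.h>               /* replaced by the preprocessor with the
                                        contents of the stdio.h header file   */
    #define PI 3.14159               /* object-like macro (illustrative name) */
    #define SQUARE(x) ((x) * (x))    /* function-like macro (illustrative)    */

    int main(void) {
        /* After macro expansion the next line becomes:
           double area = 3.14159 * ((2.0) * (2.0));      */
        double area = PI * SQUARE(2.0);
        printf("area = %f\n", area);
        return 0;
    }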
