
World University of Bangladesh

Assignment
Subject Name : Compiler Design
Subject Code : CSE 905

Submitted To
Mithun Kumar PK
Senior Lecturer, Department of CSE.
World University of Bangladesh

Submitted By
Ummay Sumaiya
Roll - 2204
Batch - 38(A)
Dept : CSE.
Assignment- Spring Semester- 2020
Compiler Design
Batch: CSE-38(A)
Course Teacher: Mithun Kumar PK
Submission Date: 03.05.2020

Answer all questions

1. a) Draw the phases of a compiler and briefly explain all phases.

   b) i) Discuss the application of regular expressions in a compiler.

      ii) Draw the transition diagram using a regular expression for a signed number.

2. a) Why are the lexical analyzer and syntax analyzer implemented separately in a compiler? Explain elaborately.

   b) Explain elaborately, step by step, the language processing system with a figure.

3. a) total_cost = product*(fare + packing/2.5) + 15

      Translate this assignment statement by following the compiler phases.

   b) Differentiate between Environment and State according to static and dynamic situations.

4. a) Depict the application of postfix Polish notation for compiler design with an example.

   b) Explain lexemes, patterns, tokens, symbol table, lexical error, syntax error, semantic error, and logical error with examples.

5. a) Draw and explain the non-recursive predictive parsing model.

   b) Why is left-recursion elimination needed for top-down parsing? Explain with an example.
Ans. to the que. no. (01)
(a) Phases of a compiler: The compilation process is a sequence of phases. Each phase takes its input from the previous phase, has its own representation of the source program, and feeds its output to the next phase of the compiler. The phases of a compiler are:

1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Intermediate code generation
5. Code optimization
6. Code generation

Fig. Phases of a Compiler (the source code flows through the six phases above to produce the target code; the symbol table and the error handler interact with every phase)


Lexical Analysis
The first phase of the compiler works as a text scanner. This phase scans the source code as a stream of characters and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens as:
<token-name, attribute-value>

Syntax Analysis
The next phase is called the syntax analysis or parsing. It takes the token produced by lexical
analysis as input and generates a parse tree (or syntax tree). In this phase, token arrangements
are checked against the source code grammar, i.e. the parser checks if the expression made by
the tokens is syntactically correct.
Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the language. For example, it checks that values are assigned between compatible data types and flags operations such as adding a string to an integer. The semantic analyzer also keeps track of identifiers, their types, and expressions, and checks whether identifiers are declared before use. It produces an annotated syntax tree as its output.
Intermediate Code Generation
After semantic analysis the compiler generates an intermediate code of the source code for
the target machine. It represents a program for some abstract machine. It is in between the
high-level language and the machine language. This intermediate code should be generated in
such a way that it makes it easier to be translated into the target machine code.
Code Optimization
The next phase does code optimization of the intermediate code. Optimization can be
assumed as something that removes unnecessary code lines, and arranges the sequence of
statements in order to speed up the program execution without wasting resources (CPU,
memory).
Code Generation
In this phase, the code generator takes the optimized representation of the intermediate code
and maps it to the target machine language. The code generator translates the intermediate
code into a sequence of (generally) re-locatable machine code. Sequence of instructions of
machine code performs the task as the intermediate code would do.
(b).
(i). Ans: Regular Expressions:
Regular expressions are used to denote regular languages. An expression is regular if:
• ɸ is a regular expression for the regular language ɸ (the empty language).
• ɛ is a regular expression for the regular language {ɛ}.
• If a ∈ Σ (Σ represents the input alphabet), a is a regular expression with language {a}.
• If a and b are regular expressions, a + b (union) is also a regular expression, with language {a, b}.
• If a and b are regular expressions, ab (the concatenation of a and b) is also regular.
• If a is a regular expression, a* (0 or more repetitions of a) is also regular.

Regular grammar: A grammar is regular if it has rules of the form A -> a, A -> aB, or A -> ɛ, where ɛ is a special symbol called NULL.
 
Regular Languages: A language is regular if it can be expressed in terms of regular expression.

Closure Properties of Regular Languages


Union: If L1 and L2 are two regular languages, their union L1 ∪ L2 will also be regular. For example,
L1 = {a^n | n ≥ 0} and L2 = {b^n | n ≥ 0}
L3 = L1 ∪ L2 = {a^n ∪ b^n | n ≥ 0} is also regular.

Intersection: If L1 and L2 are two regular languages, their intersection L1 ∩ L2 will also be regular. For example,
L1 = {a^m b^n | n ≥ 0 and m ≥ 0} and L2 = {a^m b^n ∪ b^n a^m | n ≥ 0 and m ≥ 0}
L3 = L1 ∩ L2 = {a^m b^n | n ≥ 0 and m ≥ 0} is also regular.

Concatenation: If L1 and L2 are two regular languages, their concatenation L1.L2 will also be regular. For example,
L1 = {a^m | m ≥ 0} and L2 = {b^n | n ≥ 0}
L3 = L1.L2 = {a^m b^n | m ≥ 0 and n ≥ 0} is also regular.

Kleene Closure: If L1 is a regular language, its Kleene closure L1* will also be regular. For
example,
L1 = (a ∪ b)
L1* = (a ∪ b)*

Complement: If L(G) is a regular language, its complement L'(G) will also be regular. The complement of a language can be found by subtracting the strings that are in L(G) from the set of all possible strings. For example,
L(G) = {a^n | n > 3}
L'(G) = {a^n | n ≤ 3}
Note: Two regular expressions are equivalent if languages generated by them are same. For
example, (a+b*)* and (a+b)* generate same language. Every string which is generated by
(a+b*)* is also generated by (a+b)* and vice versa.

(b). (ii). Ans:
Drawing the transition diagram using a regular expression for a signed number.
A signed number (taking the signed-integer case) can be described by the regular expression (+ | -)? digit digit*: an optional sign followed by one or more digits. One standard transition diagram for this expression is:

start --(+ | -)--> q1 --digit--> q2 (accepting)
start --digit--> q2,   q2 --digit--> q2

The start state moves to q1 on a sign, both the start state and q1 move to the accepting state q2 on a digit, and q2 loops on further digits.

Ans. to the que. no. (02)


(a) Ans: The lexical analyzer and the syntax analyzer are implemented separately in a compiler because a lexical analyzer is a pattern matcher, while syntax analysis involves building a syntax tree to detect errors in the structure of the program.

Both of these steps are performed during compilation.

Lexical analysis is separated from syntax analysis because lexical analysis is simpler and easier to perform on its own.
Another reason for implementing them separately is that lexical analysis benefits more from optimization than syntax analysis does. Lexical analysis typically takes the majority of the time in the compilation process, so it pays to optimize this step. Separating the two simplifies the task of optimizing the lexical analyzer.

(b) Ans:
The language processing system, explained step by step with a figure:

Fig: language processing system


A compiler is a program that converts high-level language to assembly language. Similarly,
an assembler is a program that converts the assembly language to machine-level language.
Let us first understand how a program, using a C compiler, is executed on a host machine.
• The user writes a program in C language (a high-level language).
• The C compiler compiles the program and translates it into an assembly program (a low-level language).
• An assembler then translates the assembly program into machine code (an object file).
• A linker tool is used to link all the parts of the program together for execution (executable machine code).
• A loader loads all of them into memory, and then the program is executed.
Preprocessor
A preprocessor, generally considered as a part of compiler, is a tool that produces input for
compilers. It deals with macro-processing, augmentation, file inclusion, language extension, etc.

Interpreter
An interpreter, like a compiler, translates high-level language into low-level machine language. The difference lies in the way they read the source code. A compiler reads the whole source code at once, creates tokens, checks semantics, and generates intermediate code; translating the whole program may involve many passes. In contrast, an interpreter reads a statement from the input, converts it to intermediate code, executes it, then takes the next statement in sequence. If an error occurs, an interpreter stops execution and reports it, whereas a compiler reads the whole program even if it encounters several errors.
Assembler
An assembler translates assembly language programs into machine code. The output of an assembler is called an object file, which contains a combination of machine instructions as well as the data required to place these instructions in memory.
Linker
A linker is a computer program that links and merges various object files together in order to make an executable file. All these files might have been compiled by separate assemblers. The major task of a linker is to search for and locate referenced modules/routines in a program and to determine the memory locations where these codes will be loaded, so that the program instructions have absolute references.
Loader
A loader is a part of the operating system and is responsible for loading executable files into memory and executing them. It calculates the size of a program (instructions and data) and creates memory space for it. It initializes various registers to initiate execution.
Cross-compiler
A compiler that runs on platform (A) and is capable of generating executable code for platform
(B) is called a cross-compiler.

Source-to-source Compiler

A compiler that takes the source code of one programming language and translates it into the
source code of another programming language is called a source-to-source compiler.
Ans. to the que. no. (03)
(a) Ans:
total_cost = product*(fare + packing/2.5) + 15
Translate this assignment statement by following the compiler phases.
In syntax-directed translation, an assignment statement mainly deals with expressions. The expressions can be of type real, integer, array, and record.

total_cost = product*(fare + packing/2.5) + 15
(Here id1 = total_cost, id2 = product, id3 = fare, id4 = packing.)

Lexical analyzer

<id1> <=> <id2> <*> <(> <id3> <+> <id4> </> <2.5> <)> <+> <15>

Syntax analyzer

            =
          /   \
       id1     +
             /   \
            *     15
          /   \
       id2     +
             /   \
          id3     /
                /   \
             id4     2.5

Semantic analyzer (the integer 15 is converted to float to match the float expression)

            =
          /   \
       id1     +
             /   \
            *     inttofloat
          /   \        |
       id2     +       15
             /   \
          id3     /
                /   \
             id4     2.5

Intermediate code generator

t1 = id4 / 2.5
t2 = id3 + t1
t3 = id2 * t2
t4 = t3 + 15.0
id1 = t4

Code optimizer

t1 = id4 / 2.5
t2 = id3 + t1
id1 = id2 * t2 + 15.0

Code generator

LDF  R2, id4
DIVF R2, R2, #2.5
LDF  R1, id3
ADDF R1, R1, R2
LDF  R3, id2
MULF R1, R3, R1
ADDF R1, R1, #15.0
STF  id1, R1

Ans. to the que. no. (04)

(a) Ans:
Polish notation: Polish notation is a notation form for expressing arithmetic, logic, and algebraic equations. Its most basic distinguishing feature is that operators are placed to the left of their operands. If each operator has a defined, fixed number of operands, the syntax does not require brackets or parentheses to remove ambiguity.

Postfix notation: The postfix notation for a+b places the operator at the right end, as ab+. In general, if e1 and e2 are any postfix expressions and + is any binary operator, the result of applying + to the values denoted by e1 and e2 is indicated in postfix notation by e1e2+. No parentheses are needed in postfix notation because the position and arity (number of arguments) of the operators permit only one way to decode a postfix expression. In postfix notation the operator follows the operands.
Examples:
a*b+c is converted to postfix: ab*c+
(a - b) * (c + d) + (a - b) is converted to postfix: ab-cd+*ab-+
4-1*2 is converted to postfix: 412*-


Postfix notation, also known as RPN, is very easy to process left-to-right. An operand is
pushed onto a stack; an operator pops its operand(s) from the stack and pushes the result.
Little or no parsing is necessary. It's used by Forth and by some calculators (HP calculators
are noted for using RPN).
Prefix notation is nearly as easy to process; it's used in Lisp.

(b) Ans:
Lexemes: A lexeme is a sequence of characters in the source program that is matched by the pattern for a token.
Example: const, if, <, <=, =, <>, >=, >, "core"
Patterns: A pattern is a rule describing the set of strings in the input for which the same token is produced as output.
Example: const; one of <, <=, =, <>, >=, >; a letter followed by letters and digits; any character between " and " except "

Tokens: Token is a sequence of characters that can be treated as a single logical entity.
Example: keywords, operators, constants, identifiers.
Symbol table: Symbol table is an important data structure created and maintained by
compilers in order to store information about the occurrence of various entities such as
variable names, function names, objects, classes, interfaces, etc. Symbol table is used by
both the analysis and the synthesis parts of a compiler.
Example :
extern double bar(double x);
double foo(int count)
{
double sum = 0.0;
for (int i = 1; i <= count; i++)
sum += bar((double) i);
return sum;
}
Lexical error: Lexical errors are the errors which occur during the lexical analysis phase of the compiler. A lexical error occurs when the compiler does not recognize a valid token string while scanning the code.
Example:
void main()
{
    int x = 10, y = 20;
    char *a;
    a = &x;
    x = 1xab;
}
Here 1xab causes an error, as it is not a valid lexeme in C.
Syntax error: A syntax error is an error in the syntax of a sequence of characters or tokens, detected at compile time. A program will not compile until all syntax errors are corrected.
Example: int a = 5 // semicolon is missing

Semantic error: A semantic error is an error that is recognized by the semantic analyzer, such as a type mismatch or the use of an undeclared variable.

Example: int x;
x = "hello"; // assigning a string to an int variable

Logical error: A logical error occurs due to a poor understanding of the problem: the program compiles and runs, but produces a wrong result.
Example: public static int sum(int a, int b) {
    return a - b;
}
// this method returns the wrong value with respect to the specification,
// which requires summing the two integers

Ans. to the que. no. (05)


(a)Ans:

Fig: non-recursive predictive parsing model


Algorithm for non-recursive predictive parsing:
The main concept: with the help of the FIRST() and FOLLOW() sets, this parsing can be done using just a stack, which avoids recursive calls.
For each rule A -> x in grammar G:
• For each terminal 'a' in FIRST(x), add A -> x to M[A, a] in the parsing table.
• If x can derive the null string, then for each terminal 'b' in FOLLOW(A), add A -> x to M[A, b] in the parsing table.
The components of the non-recursive predictive parsing model are explained below:
Buffer: an input buffer which contains the string to be parsed.
Stack: a pushdown stack which contains a sequence of grammar symbols.
Parsing table: a 2D array M[A, a], where A is a non-terminal and a is a terminal or $.
Output stream: the sequence of productions applied. The bottom of the stack and the end of the input are both denoted with $.
The procedure:
• In the beginning, the pushdown stack holds the start symbol of the grammar G.
• At each step a symbol X is popped from the stack:
  - if X is a terminal, it is matched with the lookahead, and the lookahead is advanced one step;
  - if X is a non-terminal, then using the lookahead and the parsing table (implementing the FIRST sets) a production is chosen, and its right-hand side is pushed onto the stack.
• This process repeats until the stack and the input string both become empty.
Table-driven parsing algorithm:
Input: a string w and a parsing table M for grammar G.
Output: if w is in L(G), a leftmost derivation of w; otherwise an error indication.

tos <- top of the stack
Stack[tos++] <- $
Stack[tos++] <- Start Symbol
token <- next_token()
X <- Stack[tos]
repeat
    if X is a terminal or $ then
        if X = token then
            pop X
            token <- next_token()
        else error()
    else /* X is a non-terminal */
        if M[X, token] = X -> Y1 Y2 ... Yk then
            pop X
            push Yk, Yk-1, ..., Y1
        else error()
    X <- Stack[tos]
until X = $
(b) Ans:
Left-recursion elimination is needed for top-down parsing.
Top-down parsers cannot handle left recursion: if the grammar has left recursion, the parser goes into infinite recursion. With right recursion, by contrast, the parser consumes a prefix of the input before recursing, so it can check whether the derivation has gone "too far".
Left recursion often poses problems for parsers, either because it leads them into infinite recursion (as in the case of most top-down parsers) or because they expect rules in a normal form that forbids it (as in the case of some bottom-up parsers, including the CYK algorithm).

Example:

The following code is an example of left recursion (expr calls itself before consuming any input):

// expr ::= expr '+' term
expr() {
    expr();           // recursive call with the same token: no progress
    if (token == '+') {
        getNextToken();
    }
    term();
}

In the left recursive case, expr() continually calls itself with the same token and no progress is
made. In the right recursive case, it consumes some of the input in the call to term() and the
PLUS token before reaching the call to expr(). So, at this point, the recursive call may call term
and then terminate before reaching the if test again.

You might also like