• Compiler
• Interpreter
• Assembler
Language processing systems (using Compiler) –Cousins of Compilers
High Level Language – A program that contains pre-processor directives such as #include or
#define is written in a high-level language (HLL). HLLs are closer to humans but far from
machines. These (#) tags are called pre-processor directives; they tell the pre-processor
what to do.
Pre-Processor – removes all the #include directives by including the referenced files
(file inclusion) and expands all the #define directives (macro expansion). It deals with
macro processing, augmentation, file inclusion, language extension, etc.
Loader/Linker – It converts the relocatable code into absolute code and tries to run the
program, resulting in a running program or an error message (or sometimes both).
–The linker merges a variety of object files into a single file to make it executable; the
loader then loads it into memory and executes it. A linker is a computer program that links
and merges various object files together in order to make an executable file. All these
files might have been compiled by separate assemblers. The major tasks of a linker are to
search and locate the referenced modules/routines in a program and to determine the memory
locations where these codes will be loaded, so that the program instructions have absolute
references. The loader is part of the operating system and is responsible for loading
executable files into memory and executing them. It calculates the size of a program
(instructions and data), creates memory space for it, and initializes various registers to
initiate execution.
COMPILER vs ASSEMBLER
• A compiler converts the source code written by the programmer into machine-level language; an assembler converts assembly code into machine code.
• A compiler takes source code as input; an assembler takes assembly-language code as input.
• A compiler converts the whole code into machine language at one time; an assembler cannot do this at once.
• A compiler is more intelligent than an assembler.
• The compilation phases are the lexical analyzer, syntax analyzer, semantic analyzer, intermediate code generator, code optimizer, code generator, and error handler; an assembler makes two passes over the given input (a first phase and a second phase).
• The output of a compiler may be a mnemonic version of machine code (assembly); the output of an assembler is binary code.
• C, C++, Java, and C# are examples of compiled languages; the GNU Assembler (GAS) is an example of an assembler.
Introduction
• A compiler is a program that reads a program written in one language and translates it into an
equivalent program in another language.
• It is software which converts a program written in a high-level language (source language) to a
low-level language (object/target/machine language).
• The compiler also reports errors present in the source program as a part of its translation.
• There is an agreed format for object (or assembly) code.
Compilation Error
Compiler also reports errors present in the source program as part of its translation process
Introduction
• Cross Compiler:
• A compiler that runs on a machine ‘A’ and produces code for another machine ‘B’.
• It is capable of creating code for a platform other than the one on which the compiler is
running.
• Source-to-source Compiler (transcompiler or transpiler):
• A compiler that translates source code written in one programming language into source
code of another programming language.
Single pass Compiler
Processes the input exactly once
Command interpreters such as bash/sh/tcsh can be considered single-pass compilers
All phases (lexical analysis, syntax analysis, semantic analysis,
intermediate code generation, code optimization, target code
generation) are in a single module
Faster and smaller than a multi-pass compiler
Less efficient in comparison with a multi-pass compiler
Cannot optimize very well, because only limited context is available in a single pass
Two pass Compiler
First pass
o Referred to as the front end
o The analysis part of the compiler
o Produces platform-independent code
o The output of the first pass is three-address code
o First-pass phases: lexical analysis, syntax analysis, semantic analysis,
intermediate code generation
Second pass:
o The back end of the compiler
o The synthesis part – takes three-address code as input and converts it into
low-level/assembly language
o Platform dependent – depends on the target machine/system
Two pass Compiler
With the multi-pass compiler, we can solve these two basic problems
If we want to design compilers for different programming languages for the same machine
Two pass Compiler
With the multi-pass compiler, we can solve these two basic problems
If we want to design a compiler for the same programming language for different machines/systems
Type            | Single Pass Compiler | Multi-pass Compiler
Speed           | Fast                 | Slow
Memory required | More                 | Less
Time            | Less                 | More
Portability     | No                   | Yes
Major Parts of Compilers (Two Parts of the Compilation Process): Analysis and Synthesis
Analysis Part:
Input: source program
Output: intermediate code/representation
Synthesis Part:
Input: intermediate code
Output: final target machine program
Front-end / Back-end division: the front end maps legal code into IR; the back end maps IR
onto the target machine.
This simplifies retargeting and allows multiple front ends.
Multiple passes -> better code
Structure / Steps / Components / Phases / Architecture of a Compiler
Error Recovery
• A parser should be able to detect and report any error in the program.
• It is expected that when an error is encountered, the parser should be able to
handle it and carry on parsing the rest of the input.
• Mostly it is expected from the parser to check for errors but errors may be
encountered at various stages of the compilation process.
• A program may have the following kinds of errors at various stages:
• Lexical : name of some identifier typed incorrectly
• Syntactical : missing semicolon or unbalanced parenthesis
• Semantical : incompatible value assignment
• Logical : code not reachable, infinite loop
Symbol Table:
• Data structure created and maintained by the compiler in order to
record the occurrence of various entities such as:
• Variables and function names
• Objects
• Classes
• Interfaces
• Information is gathered from the analysis phase and used in the synthesis
phase
Lexical Analyzer (Scanner, Tokenizer)
Consider a statement
count = count + temp ;
id = id + id ;
Tokens: id, operator, punctuation
Lexemes: count, temp, =, +, ;
E.g. 31+28-59
Tokens: Number: [0-9]*, operator
Lexemes: 31, 28, 59 (numbers); +, - (operators)
For example, in the C language, the variable declaration line
int value = 100;
contains the tokens:
int (keyword), value (identifier), = (operator), 100 (constant) and ; (symbol).
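The token/lexeme split above can be sketched as a tiny scanner. The token names and the regular expressions below are illustrative assumptions, not the lexical specification of any real compiler:

```python
import re

# Ordered token specification: first matching pattern wins.
TOKEN_SPEC = [
    ("KEYWORD",  r"\bint\b"),
    ("ID",       r"[A-Za-z_]\w*"),
    ("NUMBER",   r"[0-9]+"),
    ("OPERATOR", r"[+\-*/=]"),
    ("SYMBOL",   r";"),
    ("SKIP",     r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    """Return (token_name, lexeme) pairs, skipping whitespace."""
    out = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            out.append((m.lastgroup, m.group()))
    return out

print(tokenize("int value = 100;"))
# [('KEYWORD', 'int'), ('ID', 'value'), ('OPERATOR', '='), ('NUMBER', '100'), ('SYMBOL', ';')]
```

The same scanner handles the 31+28-59 example, yielding NUMBER and OPERATOR tokens.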
(Figure: the string and its two candidate parse trees – option 1 and option 2.)
Here we have not taken care of the associativity and precedence of the operators, so the grammar generates ambiguity.
Parse tree 2 violates the associativity rule, so it is wrong:
+, -, * generally follow left associativity, but in case 2 the evaluation of the parse
tree yields right associativity.
e.g.
• +, -, * are left associative, so the grammar should be written in left-recursive
form to remove the ambiguity
• exponentiation is right associative, so the grammar should be in
right-recursive form
Associativity-rule Violation and Solution in CFGs
This is an ambiguous grammar.
Modified grammar – this grammar is not ambiguous.
Since + and * are left associative in nature,
we design the grammar in the same way, in
left-recursive form, which solves the
problem of ambiguity.
Infinite loop problem: A() is called recursively, continuously, without checking any condition.
No infinite loop problem: here we can check the condition using the value of α before recursing.
Non-Deterministic CFG vs Deterministic CFG
Deterministic CFG
Non-Deterministic CFG
A backtracking problem may arise here:
from A there is a non-deterministic move on the
terminal α. If we have to derive αβ3, there
are three possible moves.
Parser
• A parser is a program that generates a parse tree for a given string,
if the string can be generated from the underlying grammar
Parser
Top-Down Parser:
• Construction of the parse tree starts from the root and proceeds to the children (the string is derived from non-terminals)
• Decision: what to use? which production should be applied?
• Uses LMD (leftmost derivation)
• Less powerful parser
• If there are multiple choices, a problem may arise (backtracking needs to be done, which is a problem)
• Scans the string left to right, one symbol at a time
Bottom-Up Parser:
• Construction of the parse tree starts from the bottom and proceeds to the root (starts from terminals)
• Shift-Reduce Parser (SR parser)
• Shift (push), Reduce (pop) – a stack is used
• Decision: when to reduce?
• Follows RMD (rightmost derivation) in reverse order
Top down parser
• In order to construct TDP the CFG should not have
• Left Recursion
• Non-determinism
• Ambiguity
Top Down Parsing : Recursive Descent Parsing
• Recursive descent is a top-down parsing technique that constructs the
parse tree from the top and the input is read from left to right.
• Built from a set of mutually recursive procedures; it uses a
procedure/function for every non-terminal entity.
• Each procedure implements one non-terminal of the grammar.
• This parsing technique recursively parses the input to make a parse
tree, which may or may not require backtracking.
• But the grammar associated with it (if not left-factored) cannot avoid
backtracking.
• This parsing technique is regarded as recursive because it uses a context-free
grammar, which is recursive in nature.
• A form of recursive-descent parsing that does not require any backtracking
is known as predictive parsing.
Top Down Parsing : Recursive Descent Parsing
The grammar shown is in left-recursive form; since a TDP cannot parse a
left-recursive grammar, it must first be transformed.
For E → iE’
Back-tracking in Top down parsing
• Top-down parsers start from the root node (start symbol) and match the
input string against the production rules to replace them (if matched).
• To understand this, take the following example of CFG:
• S → rXd | rZd , X → oa | ea, Z → ai
• For an input string: “read”, a top-down parser, will behave like this:
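The behaviour can be approximated in code as a minimal backtracking recogniser for this grammar. This sketch tries each alternative in order and backs up on failure (it returns only the first successful match per non-terminal, which suffices here):

```python
# Grammar from the example: S -> rXd | rZd,  X -> oa | ea,  Z -> ai
GRAMMAR = {
    "S": [["r", "X", "d"], ["r", "Z", "d"]],
    "X": [["o", "a"], ["e", "a"]],
    "Z": [["a", "i"]],
}

def parse(symbol, s, pos):
    """Try to match `symbol` at s[pos:]; return the new position or None."""
    if symbol not in GRAMMAR:                 # terminal: must match one char
        if pos < len(s) and s[pos] == symbol:
            return pos + 1
        return None
    for production in GRAMMAR[symbol]:        # try alternatives, backtracking
        p = pos
        for sym in production:
            p = parse(sym, s, p)
            if p is None:
                break
        else:
            return p
    return None

print(parse("S", "read", 0) == len("read"))   # True: S => rXd => r(ea)d
```

For "read", the first alternative S → rXd is tried; within X, the alternative "oa" fails on 'e', so the parser backtracks and succeeds with "ea".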
Predictive Parser - LL(1)
• Predictive parser is a recursive descent parser, which has
the capability to predict which production is to be used
to replace the input string.
• The predictive parser does not suffer from backtracking.
• To accomplish its tasks, the predictive parser uses a look-
ahead pointer, which points to the next input symbol.
• To make the parser back-tracking free, the predictive
parser puts some constraints on the grammar and
accepts only a class of grammar known as LL(k) grammar.
• Predictive parsing uses a stack and a parsing table to
parse the input and generate a parse tree. Both the stack
and the input contain an end symbol $ to denote that
the stack is empty and the input is consumed. The parser
refers to the parsing table to take any decision on the
input and stack-element combination.
• In recursive descent parsing, the parser may have more
than one production to choose from for a single instance
of input, whereas in predictive parser, each step has at
most one production to choose
Predictive Parser –LL(1)
LL(1) Parser –Predictive Parser
• Non-Recursive Descent – LL(1)
Grammar: S → AB, A → aA | ε, B → b | ε

      a        b        $
S    S→AB     S→AB     S→AB
A    A→aA     A→ε      A→ε
B     --      B→b      B→ε

Since there is a single entry in each parse-table cell, the given grammar can be parsed by an LL(1) parser
Identify whether the given grammar can be parsed by a predictive LL(1) parser or not:

Grammar        FIRST      FOLLOW
S → aSA | ε    {a, ε}     {$, c}
A → c | ε      {c, ε}     {$, c}

      a        c           $
S    S→aSA    S→ε         S→ε
A    ---      A→ε, A→c    A→ε

Since there are multiple entries in one parse-table cell, the given grammar cannot be parsed by an LL(1) parser
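The table-driven LL(1) procedure can be sketched for the single-entry example shown earlier (grammar S → AB, A → aA | ε, B → b | ε, reconstructed from its parse table):

```python
# Table-driven LL(1) parsing; EPS marks an epsilon-production.
EPS = ()
TABLE = {
    ("S", "a"): ("A", "B"), ("S", "b"): ("A", "B"), ("S", "$"): ("A", "B"),
    ("A", "a"): ("a", "A"), ("A", "b"): EPS,        ("A", "$"): EPS,
    ("B", "b"): ("b",),     ("B", "$"): EPS,
}

def ll1_parse(tokens):
    """Return True iff the token string (ending in '$') is accepted."""
    stack = ["$", "S"]
    i = 0
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top in ("a", "b", "$"):            # terminal: must match input
            if top != look:
                return False
            i += 1
        else:                                  # nonterminal: consult table
            prod = TABLE.get((top, look))
            if prod is None:
                return False
            stack.extend(reversed(prod))       # push RHS, leftmost on top
    return i == len(tokens)

print(ll1_parse(list("aab$")))   # True
print(ll1_parse(list("ba$")))    # False
```

Each step either matches a terminal or replaces the stack-top non-terminal by the unique table entry, which is exactly why a single-entry table is required.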
Parsing in LL(1) - Example
Bottom-Up Parser (SR Parser)
• Bottom-up parsing starts from the leaf nodes of a tree and works in upward
direction till it reaches the root node. Here, we start from a sentence and then
apply production rules in reverse manner in order to reach the start symbol.
• Shift-Reduce Parsing
• Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps
are known as shift-step and reduce-step.
• Shift step:
• The shift step refers to the advancement of the input pointer to the next input symbol, which is
called the shifted symbol.
• This symbol is pushed onto the stack. The shifted symbol is treated as a single node of the parse
tree.
• Reduce step :
• When the parser finds a complete grammar rule (RHS) on the stack and replaces it with its (LHS), it is known as a
reduce-step.
• This occurs when the top of the stack contains a handle. To reduce, a POP function is performed
on the stack which pops off the handle and replaces it with LHS non-terminal symbol.
Types of Bottom-up (SR) Parsers
1. Operator Precedence Parser:
• Simple, but handles only a small class of grammars
Note: an ambiguous grammar can be parsed only by this parser
2. LR Parser (scans the string from left to right / reverse of the rightmost
derivation):
• LR(0)
• SLR(1) – Simple LR
• CLR(1) – Canonical LR
(The grammar classes nest as CFG ⊃ CLR ⊃ SLR, as shown in the slide’s diagram.)
Bottom-Up Parser : LR Parser
• The LR parser is a non-recursive, shift-reduce, bottom-up parser.
• It uses a wide class of context-free grammar which makes it the most
efficient syntax analysis technique.
• LR parsers are also known as LR(k) parsers, where
• L stands for left-to-right scanning of the input stream;
• R stands for the construction of right-most derivation in reverse, and
• k denotes the number of lookahead symbols to make decisions.
Bottom-Up Parser : LR Parser
• There are three widely used algorithms available for constructing an LR
parser:
• SLR(1) – Simple LR Parser:
• Works on smallest class of grammar
• Few number of states, hence very small table
• Simple and fast construction
• LR(1) – LR Parser:
• Works on complete set of LR(1) Grammar
• Generates large table and large number of states
• Slow construction
• LALR(1) – Look-Ahead LR Parser:
• Works on an intermediate size of grammar
• The number of states is the same as in SLR(1)
Bottom-Up Parser : Operator Precedence Parser
• For a small class of CFGs, the principles of operator precedence can be
used to build a simple bottom-up parser.
• An operator grammar is a CFG in which:
• no RHS of any production is empty (ε), and
• no two non-terminals are adjacent.
• E.g. a RHS of the form AB is not allowed (two adjacent non-terminals),
while A+B, A*B and A/B are valid operator-grammar forms.
• Such a grammar is called an operator precedence grammar, or simply an
operator grammar.
• A shift-reduce parser can easily be constructed for this kind of grammar,
and it is called an operator precedence parser.
Operator Precedence Parser
1. E → E+E | E*E | id – an operator precedence grammar
2. E → EAE | id, A → + | * – not an operator grammar (the non-terminals E and A are adjacent)
E.g. a RHS of the form AB is not an operator grammar; A+B, A*B and A/B are operator-precedence forms.
Input String + Operator Grammar → Operator Parser → Parse Tree
Operator Precedence Parser: Parsing Actions
• Add the $ symbol at both ends of the given input string
• Scan the input string from left to right until a > is encountered
• Scan towards the left over all equal precedences until the leftmost < is encountered
• Everything between the leftmost < and the rightmost > is the handle
• $ on $ means parsing is successful
Operator Precedence Parser
E.g. With the help of the following grammar, parse the input string “id+id*id”
T → T+T | T*T | id
Steps to solve:
1. Check whether the grammar is an operator grammar or not
2. Build the operator precedence relation table
3. Parse the given string

Precedence relation table (A means Accepted):

      +    *    id    $
+     >    <    <     >
*     >    >    <     >
id    >    >    -     >
$     <    <    <     A

Basics: id (and operands a, b, c) have the highest precedence, $ the lowest;
+ > + and * > * (left associativity); id ≠ id (two ids are never adjacent).
Stack     Relation  Input      Comment
$         <         id+id*id$  Shift id
$id       >         +id*id$    Reduce T→id
$T        <         +id*id$    Shift +
$T+       <         id*id$     Shift id
$T+id     >         *id$       Reduce T→id
$T+T      <         *id$       Shift *
$T+T*     <         id$        Shift id
$T+T*id   >         $          Reduce T→id
$T+T*T    >         $          Reduce T→T*T
$T+T      >         $          Reduce T→T+T
$T        A         $          Accepted
If the relation is < then shift; if the relation is > then reduce.
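The shift/reduce loop of this trace can be sketched in code for T → T+T | T*T | id. The `prec` function is an assumed encoding of the precedence table (with $ lowest, id highest, and equal precedence reducing, i.e. left associativity):

```python
def prec(a, b):
    """Relation between stack-top terminal a and lookahead b: '<', '>' or 'A'."""
    if a == "$" and b == "$":
        return "A"
    order = {"$": 0, "+": 1, "*": 2, "id": 3}
    if a == "$":
        return "<"
    if b == "$":
        return ">"
    if a == "id" or order[a] >= order[b]:   # equal precedence -> reduce (left assoc.)
        return ">"
    return "<"

def op_parse(tokens):
    """Operator-precedence parse; True iff the input reduces to a single T."""
    stack = ["$"]
    tokens = tokens + ["$"]
    i = 0
    while True:
        top = next(s for s in reversed(stack) if s != "T")  # topmost terminal
        rel = prec(top, tokens[i])
        if rel == "A":
            return stack == ["$", "T"]
        if rel == "<":                     # shift
            stack.append(tokens[i]); i += 1
        elif stack[-1] == "id":            # reduce handle: T -> id
            stack[-1] = "T"
        elif stack[-3:] in (["T", "+", "T"], ["T", "*", "T"]):
            del stack[-2:]                 # reduce handle: T -> T+T | T*T
        else:
            return False                   # no handle: reject

print(op_parse(["id", "+", "id", "*", "id"]))   # True
```

Running it on id+id*id reproduces the stack contents of the trace above, step for step.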
LL Vs LR Parser
• LL does a leftmost derivation; LR does a rightmost derivation in reverse.
• LL starts with the root nonterminal on the stack; LR ends with the root nonterminal on the stack.
• LL ends when the stack is empty; LR starts with an empty stack.
• LL uses the stack for designating what is still to be expected; LR uses the stack for designating what is already seen.
• LL builds the parse tree top-down; LR builds the parse tree bottom-up.
• LL continuously pops a nonterminal off the stack and pushes the corresponding right-hand side; LR tries to recognize a right-hand side on the stack, pops it, and pushes the corresponding nonterminal.
• LL expands the non-terminals; LR reduces the non-terminals.
• LL reads the terminals when it pops one off the stack; LR reads the terminals while it pushes them on the stack.
• LL performs a pre-order traversal of the parse tree; LR performs a post-order traversal.
Semantic Analyzer
• It checks whether the constructed parse tree follows the rules of the
language, e.g. whether an assignment of values is between compatible data
types or not.
• It also keeps track of identifiers, their types and expressions.
• A semantic analyzer checks the source program for semantic errors
and collects the type information for the code generation.
• Semantics of a language provide meaning to its constructs, like tokens
and syntax structure. Semantics help interpret symbols, their types,
and their relations with each other.
• Semantic analysis judges whether the syntax structure constructed in
the source program derives any meaning or not.
Semantic Analyzer
• Type-checking is an important part of semantic analyzer.
• Normally semantic information cannot be represented by a context-
free language used in syntax analyzers.
• Context-free grammars used in syntax analysis are integrated with
attributes (semantic rules);
• the result is a syntax-directed translation,
• i.e. attribute grammars.
Semantic Analyzer
E.g. a: int
int a ; sum: double
double sum b: char
char b
sum=a+b This is incorrect : data type mis match
This is syntactically correct but semantically incorrect
For example:
int a = "value";
• should not issue an error in the lexical or syntax analysis phase, as it is lexically and
structurally correct, but it should generate a semantic error as the types in the assignment
differ.
• These rules are set by the grammar of the language and evaluated in semantic analysis.
The following tasks should be performed in semantic analysis:
•Scope resolution
•Type checking
•Array-bound checking
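The assignment rule above can be sketched as a toy static check, in the spirit of the int a = "value" example. The type names and the widening table are illustrative assumptions, not any particular language's type system:

```python
# Allowed implicit widenings: (from_type, to_type)
NUMERIC_WIDENING = {("int", "double"), ("char", "int"), ("char", "double")}

def check_assign(lhs_type, rhs_type):
    """Allow exact matches and safe numeric widenings; flag everything else."""
    if lhs_type == rhs_type or (rhs_type, lhs_type) in NUMERIC_WIDENING:
        return "ok"
    return f"semantic error: cannot assign {rhs_type} to {lhs_type}"

print(check_assign("double", "int"))     # ok (widening)
print(check_assign("int", "string"))     # semantic error: cannot assign string to int
```

A real semantic analyzer would look the operand types up in the symbol table before applying such a rule.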
Type checking and its types
• Type checking is the process of verifying that each operation executed
in a program respects the type system of the language.
• This generally means that all operands in any expression are of
appropriate types and number.
• Semantic Checks
• Static – done during compilation
• Dynamic – done during run-time
• Process of designing a type checker
• Identify the types that are available in the language
• Identify the language constructs that have types associated with them
• Identify the semantic rules for the language
Type checking and its types
Static type checking:
• Static type checking is done at compile-time.
• Type information is obtained via declarations and stored in a master symbol table.
• After this information is collected, the types involved in each operation are checked.
• Examples of static checks: type checks, flow-of-control checks, uniqueness checks, name-related checks.
Dynamic type checking:
o Implemented by including type information for each data location at runtime.
o For example, a variable of type double would contain both the actual double value and some kind of tag indicating "double type".
Type Systems
• Strongly Typed Vs Weakly Typed System
• Collection of rules for assigning type expressions
• A sound type system eliminates run-time type checking for type errors.
• A programming language is strongly-typed, if every program its compiler accepts will
execute without type errors.
• Here the first one is an L-attributed SDD but the second one is not, because in
Q.i = R.s, Q takes its value from its right sibling.
S-attributed and L-attributed SDT.
• S-attributed SDT :
• If an SDT uses only synthesized attributes, it is called as S-attributed SDT.
• S-attributed SDTs are evaluated in bottom-up parsing, as the values of the
parent nodes depend upon the values of the child nodes.
• Semantic actions are placed at the rightmost end of the RHS.
• L-attributed SDT:
• If an SDT uses both synthesized attributes and inherited attributes with a
restriction that inherited attribute can inherit values from left siblings only, it
is called as L-attributed SDT.
• Attributes in L-attributed SDTs are evaluated by depth-first and left-to-right
parsing manner.
• Semantic actions are placed anywhere in RHS.
S-attributed and L-attributed SDT.
• For example,
A → XYZ {Y.S = A.S, Y.S = X.S, Y.S = Z.S}
is not an L-attributed grammar, since Y.S = A.S and Y.S = X.S are allowed,
but Y.S = Z.S violates the L-attributed SDT definition, as the attribute
inherits its value from a right sibling.
If a definition is S-attributed, then it is also L-attributed, but NOT vice
versa.
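The left-sibling restriction can be checked mechanically. A small sketch (the function name and the (target, source) dependency encoding are assumptions) that mechanises the A → XYZ example:

```python
def is_l_attributed(rhs, deps):
    """rhs: RHS symbols in order. deps: (target_symbol, source) pairs, where
    source is either 'parent' or another RHS symbol. An inherited attribute
    may depend only on the parent or on LEFT siblings."""
    pos = {sym: i for i, sym in enumerate(rhs)}
    for target, source in deps:
        if source == "parent":
            continue                         # inheriting from A is allowed
        if pos[source] >= pos[target]:       # right sibling (or self): illegal
            return False
    return True

# Y.S = A.S and Y.S = X.S are fine; Y.S = Z.S inherits from a right sibling.
print(is_l_attributed(["X", "Y", "Z"],
                      [("Y", "parent"), ("Y", "X"), ("Y", "Z")]))   # False
```

Dropping the ("Y", "Z") dependency makes the check succeed, matching the definition above.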
S-attributed and L-attributed SDT.
• Example – Consider the SDT given below.
P1: S → MN {S.val = M.val + N.val}
P2: M → PQ {M.val = P.val * Q.val and P.val = Q.val}
• Explanation –
In P1, S.val is a synthesized attribute, and synthesized attributes are
allowed in an L-attributed definition, so P1 follows the L-attributed definition.
• But P2 does not follow the L-attributed definition, as P depends on Q,
which is to its right in the RHS.
Example of S-attributed SDD
Syntax Directed Translation
• Syntax Directed Translation are augmented rules to the grammar that
facilitate semantic analysis.
• SDT involves passing information bottom-up and/or top-down the
parse tree in the form of attributes attached to the nodes.
• Syntax directed translation rules use
• 1) lexical values of nodes,
• 2) constants &
• 3) attributes associated to the non-terminals in their definitions.
• The general approach to Syntax-Directed Translation is to construct a
parse tree or syntax tree and compute the values of attributes at the
nodes of the tree by visiting them in some order.
• In many cases, translation can be done during parsing without
building an explicit tree.
Applications of SDT(Syntax Directed Translation)
• Executing Arithmetic Expression
• Conversion from infix to postfix
• Conversion from infix to prefix
• Conversion from binary to decimal
• Counting number of reductions
• Creating syntax tree
• Generating intermediate code
• Type Checking
• Storing type info into symbol table
Syntax Directed Translation
Example
CFG:
E → E+T | T
T → T*F | F
F → INTLIT
(The SDD with its semantic rules is shown alongside the CFG in the slide.)
Syntax Directed Translation
• Let’s take a string to see how semantic
analysis happens –
• S = 2+3*4. The parse tree corresponding to
S would be:
SDT
E → E + T  { printf("+"); }       -1
  | T      { }                    -2
T → T * F  { printf("*"); }       -3
  | F      { }                    -4
F → num    { printf(num.lval); }  -5
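The print-based SDT above emits postfix notation while parsing. The same effect can be sketched as a recursive-descent translator (left recursion replaced by iteration, an assumed but standard transformation), with the print actions at the positions the SDT places them:

```python
def to_postfix(tokens):
    """Translate a tokenized infix expression over + and * into postfix."""
    out, pos = [], 0

    def factor():
        nonlocal pos
        out.append(tokens[pos])     # F -> num   { printf(num.lval); }
        pos += 1

    def term():
        nonlocal pos
        factor()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            factor()
            out.append("*")         # T -> T * F { printf("*"); }

    def expr():
        nonlocal pos
        term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            term()
            out.append("+")         # E -> E + T { printf("+"); }

    expr()
    return " ".join(out)

print(to_postfix(["2", "+", "3", "*", "4"]))   # 2 3 4 * +
```

For 2+3*4 the actions fire in the order 2, 3, 4, *, +, matching a bottom-up evaluation of the parse tree.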
• Triples face the problem of code immovability during optimization, as the results are positional, and
changing the order or position of an expression may cause problems.
Three-Address Code
• Indirect Triples
• This representation is an enhancement over triples representation.
• It uses pointers instead of position to store results.
• This enables optimizers to freely reposition sub-expressions to
produce optimized code.
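The difference can be sketched with plain tuples; the instruction shapes used here are illustrative assumptions:

```python
# Triples for: t1 = a + b; t2 = t1 * c.  Results are referred to by POSITION,
# so reordering the list breaks the "(0)" reference.
triples = [
    ("+", "a", "b"),        # (0)
    ("*", "(0)", "c"),      # (1) refers to position 0
]

# Indirect triples: a separate statement list holds indices into the triple
# store, so the optimizer may reorder `order` without touching `triples`.
order = [0, 1]
for i in order:
    print(i, triples[i])
```

An optimizer working on indirect triples permutes only the `order` list; every "(0)"-style reference inside `triples` stays valid.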
Code Optimizer
It is a technique which tries to improve the code by eliminating unnecessary code lines/blocks
and arranging the statements in a sequence that speeds up program execution without
wasting resources.
The code optimizer optimizes the code produced by the intermediate code generator in the
terms of time and space.
Optimization is a program transformation technique, which tries to improve the code by making
it consume less resources (i.e. CPU, Memory) and deliver high speed
Advantages : Executes faster, Efficient memory usage, Yields better performance
A code optimizing process must follow the three rules given below:
o The output code must not, in any way, change the meaning of the program.
o Optimization should increase the speed of the program and, if possible, the program should
demand fewer resources.
o Optimization should itself be fast and should not delay the overall compiling process.
Code Optimizer
Common steps in optimization may be
o Data Flow Analysis – Examine the program to find out certain properties of interest
o Code Optimization – changes the code based on data flow analysis information in a
way that improves performance
• Efforts toward optimized code can be made at various levels of the compilation process.
At the beginning, users can change/rearrange the code or use better algorithms to
write the code.
After generating intermediate code, the compiler can modify the intermediate code by
address calculations and improving loops.
While producing the target machine code, the compiler can make use of memory
hierarchy and CPU registers.
Code Optimizer
• Optimization can be categorized broadly into two types:
machine independent and
machine dependent.
• Platform Dependent Techniques:
• Peephole optimization
• Instruction-level parallelism
• Data-level parallelism
• Cache optimization
• Redundant resources
• Platform Independent Techniques:
• Loop optimization: loop unrolling, code movement, frequency reduction, loop jamming/fusion
• Constant folding
• Constant propagation
• Common subexpression elimination
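Constant folding, one of the platform-independent techniques listed above, can be sketched as a toy pass over three-address tuples. The (op, arg1, arg2, result) shape is an assumption for illustration:

```python
def fold_constants(code):
    """Replace ops whose operands are both literal ints with a plain copy."""
    out = []
    for op, a1, a2, res in code:
        if isinstance(a1, int) and isinstance(a2, int):
            val = {"+": a1 + a2, "*": a1 * a2, "-": a1 - a2}[op]
            out.append(("=", val, None, res))     # folded at compile time
        else:
            out.append((op, a1, a2, res))         # left for runtime
    return out

print(fold_constants([("+", 2, 3, "t1"), ("*", "t1", 4, "t2")]))
# [('=', 5, None, 't1'), ('*', 't1', 4, 't2')]
```

A follow-up constant-propagation pass could then substitute 5 for t1 and fold the multiplication as well; real compilers iterate such passes to a fixed point.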
Machine-independent Optimization
the compiler takes in the intermediate code and transforms a part of the code that does not involve any
CPU registers and/or absolute memory locations. For example:
do
{
item = 10;
value = value + item;
} while(value<100);
• This code involves repeated assignment of the identifier item; if we rewrite it this way:
item = 10;
do
{
value = value + item;
} while(value<100);
• it not only saves CPU cycles but can be used on any processor.
Machine-dependent Optimization
Machine-dependent optimization is done after the target code has
been generated and when the code is transformed according to the
target machine architecture.
It involves CPU registers and may have absolute memory references
rather than relative references.
Machine-dependent optimizers put efforts to take maximum
advantage of memory hierarchy.
Must have knowledge about machine architecture
Basic Blocks
Source codes generally have a number of instructions, which are always executed in sequence and are
considered as the basic blocks of the code.
These basic blocks do not have any jump statements among them, i.e., when the first instruction is
executed, all the instructions in the same basic block will be executed in their sequence of appearance
without losing the flow control of the program.
A program can have various constructs as basic blocks, like IF-THEN-ELSE, SWITCH-CASE conditional
statements and loops such as DO-WHILE, FOR, and REPEAT-UNTIL, etc.
Basic blocks are important concepts from both code generation and optimization point of view.
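The standard way to find basic blocks is the leader algorithm: the first instruction, every jump target, and every instruction following a jump start a new block. A sketch (the (instruction, jump_target_or_None) encoding is an assumption):

```python
def basic_blocks(code):
    """code: list of (instr_text, jump_target_index_or_None).
    Returns blocks as lists of instruction indices."""
    leaders = {0}                            # first instruction is a leader
    for i, (_, target) in enumerate(code):
        if target is not None:
            leaders.add(target)              # the jump target starts a block
            if i + 1 < len(code):
                leaders.add(i + 1)           # so does the fall-through
    cuts = sorted(leaders) + [len(code)]
    return [list(range(cuts[k], cuts[k + 1])) for k in range(len(cuts) - 1)]

# i0: t=0   i1: t=t+1   i2: if t<10 goto 1   i3: print t
code = [("t=0", None), ("t=t+1", None), ("if t<10 goto 1", 1), ("print t", None)]
print(basic_blocks(code))   # [[0], [1, 2], [3]]
```

Inside each resulting block, control enters only at the top and leaves only at the bottom, which is exactly the property optimizers rely on.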
int main()
{
int a=10,b=20;
printf("sum is =%d",a+b);
return 0;
}
How many tokens:
a) 33
b) 27
c) 35
d) 23

int main()
{
// two variables are declared
int a,b;
a=10;
return 0;
}
How many tokens:
a) 18
b) 19
c) 23
d) 24
Ans- b
• Consider the grammar defined by the following rules with two operators * and +
S → T*P
T → U | T*U
P → Q+P | Q
Q → id
U → id
Which of the following is TRUE
a) + is left associative while * is right associative
b) + is right associative while * is left associative
c) Both are right associative
d) Both are left associative
From the grammar we can find the associativity by looking at the productions. Consider the 2nd production, T → T*U:
T generates T*U recursively (left recursion), so * is left associative.
Similarly, P → Q+P is right recursion, so + is right associative.
So option B is correct.
Ans -C
• Consider line number 3 of the following C- program.
Identify the compiler’s response about this line while creating the object-module
(A) No compilation error
(B) Only a lexical error
(C) Only syntactic errors
(D) Both lexical and syntactic errors
Ans -b
• Which of the following rules violates the requirement of an operator
grammar? P, Q, R are non-terminals and r, s, t are terminals.
• i) P → QR   ii) P → QsR   iii) P → ε   iv) P → QtRr
a) (i) only
b) (i) and (iii) only
c) (ii) and (iii) only
d) (iii) and (iv) only
• Which of the following statement is true?
A. SLR parser is more powerful than LALR
B. LALR parser is more powerful than Canonical LR Parser
C. Canonical LR parser is more powerful than LALR
D. The parsers SLR , Canonical LR and LALR have the same power
Q2. The lexical analysis for a modern computer language such as Java needs the power of
which one of the following machine models in a necessary and sufficient sense?
• (A) Finite state automata
• (B) Deterministic pushdown automata
• (C) Non-Deterministic pushdown automata
• (D) Turing Machine
• Ans :(A) Finite state automata
• An intermediate code form is suitable for
• Reading
• Debugging
• Testing
• Optimization
• An intermediate code form is
• Postfix Notation
• Syntax trees
• Three Address codes
• All of these
• Intermediate code generator gets input from
• Lexical Analyzer
• Syntax Analyzer
• Semantic Analyzer
• Error Handling
• A byte code is the intermediate language for the
• C++
• Java
• Java Virtual Machine
• C
• Which one of the following is NOT performed during compilation?
(A) Dynamic memory allocation
(B) Type checking
(C) Symbol table management
(D) Inline expansion
• In the following grammar, which of the following is true?
a) + is left associative while * is right associative
b) Both + and * are left associative
c) + is right associative while * is left associative
d) None of the above