
1. EXPLAIN THE WORKING OF DIRECT LINKING LOADER WITH EXAMPLE.

SHOW ENTRIES IN THE DIFFERENT DATABASES BUILT BY THE DLL.

2. EXPLAIN THE FUNCTION OF LOADER.


Assemblers and compilers convert source code into object code. The loader accepts that
object code, makes it ready for execution, and helps execute it. The loader performs its
task through four functions:

a) Allocation:

The loader allocates space for the program in main memory, based on the size of the
program; this is known as allocation. The loader thus determines where in memory the
object program will be placed for execution.

b) Linking:

It combines two or more separate object programs or modules and supplies the
necessary information. The linker resolves symbolic references to code or data
between the object modules by assigning addresses to all user and library
subroutines. This process is known as linking. A program written in any language
contains functions, which may be user-defined or library functions. For example, in C
we have the printf() function. When program control reaches the line where printf()
is called, the linker comes into the picture and links that call to the module
containing the actual implementation of printf().

c) Relocation:

It modifies the object program so that it can be loaded at an address different from
the one originally specified. Some locations in the program are address-dependent,
and these address constants must be modified to fit the available memory; this task,
performed by the loader, is known as relocation. To allow the object program to be
loaded at an address other than the one initially supplied, the loader modifies
specific instructions in the object program.
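The relocation step can be sketched as a toy model (the object format and field names here are invented for illustration, not any real loader's):

```python
# A toy relocation pass: the object program records which words hold
# address constants; the loader adds the actual load address to each.

def relocate(code, relocation_bits, load_address):
    """Return a copy of `code` with address words rebased to load_address.

    code            -- list of integer words, assembled as if loaded at 0
    relocation_bits -- one flag per word: True if the word is an address
    load_address    -- where the loader actually placed the program
    """
    return [word + load_address if is_addr else word
            for word, is_addr in zip(code, relocation_bits)]

# Object code assembled at origin 0: word 1 is an address constant.
code = [0x10, 0x0004, 0x20]          # e.g. LOAD, addr 4, HALT
fixed = relocate(code, [False, True, False], load_address=0x1000)
print(fixed)   # [16, 4100, 32] -- only the address word was rebased
```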

d) Loading:

It brings the object program into main memory for execution. The loader loads the
machine instructions and data of the program and its related subroutines into main
memory; this process is known as loading. Since the loader performs loading, the
assembler must provide it with the object program.
e.g., the absolute loader.
3. EXPLAIN DIFFERENT TYPES OF LOADER.
In compiler design, a loader is a program that is responsible for loading executable
programs into memory for execution. The loader reads the object code of a program, which
is usually in binary form, and copies it into memory. The loader is typically part of the
operating system and is invoked by the system’s bootstrap program or by a command from
a user. Loaders can be of seven types:
1. Absolute Loader:
- An absolute loader is one of the simplest types of loaders.
- It loads a program into memory at a specific location, known as an absolute
address.
- The addresses of the program's instructions and data are fixed and specified in the
executable file.
- Absolute loaders are not very flexible and are rarely used in modern systems.
2. Relocating Loader:
- A relocating loader is more flexible than an absolute loader.
- It loads a program into memory, allowing it to be loaded at any location.
- The loader adjusts the addresses specified in the program's instructions and data to
reflect the actual location where the program is loaded.
- Relocating loaders are commonly used in modern systems to support programs
that can be loaded into different memory locations.
3. Direct Linking Loader:
- With direct linking, the linking process occurs before execution rather than at runtime.
- The loader combines object files (compiled source code) into an executable by
directly merging their code and data sections.
- The linking is static, meaning that references to external symbols are resolved
before the program runs.
- Direct linking gives faster loading times but lacks the flexibility of dynamic
linking.
4. Dynamic Linking Loader:
- Dynamic linking loaders link executable files with shared libraries (DLLs in
Windows, shared objects in Unix-like systems) at runtime.
- Shared libraries contain code that multiple programs can use, reducing memory
usage and facilitating updates.
- The loader locates the required shared libraries, loads them into memory, resolves
references to symbols in these libraries, and updates the program's memory space
accordingly.
- Dynamic linking allows for more efficient memory usage and easier updates but
may incur a small performance overhead during runtime due to the dynamic linking
process.
5. Compile and Go Loader:
- The assembler itself places the assembled machine code directly into memory and
transfers control to it; no object file is written out.
6. General Loader:
- The object program is written to a file; a separate loader then loads it into
memory, so the assembler need not stay resident.
7. Program Linking Loader:
- A loader that both links separately assembled object modules and loads them, such
as the direct linking loader described above.

4. EXPLAIN DYNAMIC LINKING LOADER.


Dynamic Linking Loader:
• The dynamic linking loader is a general relocatable loader.
• It allows the programmer multiple procedure segments and multiple data
segments, and gives the programmer complete freedom in referencing data or
instructions contained in other segments.
• The assembler must give the loader certain information with each procedure
or data segment (listed below).
• Dynamic linking defers much of the linking process until a program starts
running. It provides a variety of benefits that are hard to get otherwise.
• Dynamically linked shared libraries are easier to create than statically linked
shared libraries.
• Dynamically linked shared libraries are easier to update than statically linked
shared libraries.
• The semantics of dynamically linked shared libraries can be much closer to
those of unshared libraries.
• Dynamic linking permits a program to load and unload routines at runtime, a
facility that can otherwise be very difficult to provide.

The information the assembler must provide with each segment:
1. The length of the segment.
2. A list of all symbols in the segment that may be referenced by other segments.
3. A list of all symbols not defined in the segment but referenced in it.
4. Information about where address constants are located in the segment.

Here's how it works:

1. Compilation: When you compile a program that uses shared libraries,


references to functions or symbols in those libraries are left unresolved. Instead,
the compiler inserts placeholders (often called stubs) for these unresolved
references.

2. Loading: When you execute the program, the dynamic loader is invoked. It
locates the required shared libraries based on the paths specified during
compilation or through environment variables. Then, it loads these libraries into
memory alongside the main program.

3. Symbol Resolution: Once the libraries are loaded, the dynamic loader resolves
the unresolved references in the main program's code by linking them to the
appropriate symbols or functions in the shared libraries. It updates the
placeholders with the correct memory addresses.

4. Relocation: If necessary, the dynamic loader adjusts memory addresses within


the shared libraries to ensure they do not conflict with other libraries or the main
program.

5. Execution: Finally, the program starts executing, now with all its dependencies
loaded and linked dynamically. Any calls to functions or symbols from the shared
libraries are resolved at runtime.
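The five steps above can be simulated in miniature (every name here is invented for illustration; this is a sketch of the idea, not a real loader API):

```python
# A toy simulation of dynamic linking: the "program" calls symbols
# through a stub table; the loader fills the table at load time.

# Step 1 (compile time): references to library symbols are left as stubs.
stubs = {"printf": None, "sqrt": None}

# A stand-in for a shared library: symbol name -> implementation.
shared_lib = {
    "printf": lambda s: f"printed: {s}",
    "sqrt":   lambda x: x ** 0.5,
}

# Steps 2-3 (load time): locate the library and resolve each stub.
def dynamic_load(stubs, library):
    for name in stubs:
        if stubs[name] is None:          # unresolved reference
            stubs[name] = library[name]  # bind to the library symbol

dynamic_load(stubs, shared_lib)

# Step 5 (run time): calls now go through the resolved entries.
print(stubs["printf"]("hello"))  # printed: hello
print(stubs["sqrt"](16.0))       # 4.0
```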

5. EXPLAIN DIFFERENT PHASES OF COMPILER. ILLUSTRATE ALL THE OUTPUT AFTER EACH
PHASE FOR THE FOLLOWING STATEMENT: a = b + c - d * 5.
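The notes give no worked answer here; as a partial sketch of one phase, intermediate code generation, the following emits three-address code for a = b + c - d * 5 by walking Python's own AST (emit and the temporary names are invented helpers):

```python
import ast

temp_count = 0
lines = []

def emit(node):
    """Recursively emit three-address code; return the name holding node's value."""
    global temp_count
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    if isinstance(node, ast.BinOp):
        op = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}[type(node.op)]
        left, right = emit(node.left), emit(node.right)
        temp_count += 1
        name = f"t{temp_count}"
        lines.append(f"{name} = {left} {op} {right}")
        return name
    raise NotImplementedError(type(node))

stmt = ast.parse("a = b + c - d * 5").body[0]
lines.append(f"a = {emit(stmt.value)}")
print("\n".join(lines))
# t1 = b + c
# t2 = d * 5
# t3 = t1 - t2
# a = t3
```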

6. Enlist the different types of attributes in SDD. Explain with example.


Syntax Directed Definition (SDD) is a kind of abstract specification. It is a
generalization of context-free grammar in which each grammar production X --> a has
associated with it a set of semantic rules of the form s = f(b1, b2, …, bk), where s is
an attribute obtained from the function f. An attribute can be a string, a number, a
type, or a memory location.

Types of attributes – There are two types of attributes:


1. Synthesized Attributes – These are attributes that derive their values from their
children nodes, i.e. the value of a synthesized attribute at a node is computed from the
values of attributes at its children in the parse tree.
Example:
E --> E1 + T { E.val = E1.val + T.val }
Here, E.val derives its value from E1.val and T.val.
Computation of Synthesized Attributes –
 Write the SDD using appropriate semantic rules for each production in given
grammar.
 The annotated parse tree is generated and attribute values are computed in
bottom up manner.
 The value obtained at root node is the final output.
Example: Consider the following grammar
S --> E
E --> E1 + T
E --> T
T --> T1 * F
T --> F
F --> digit
The SDD for the above grammar can be written as follows:
S --> E        { S.val = E.val }
E --> E1 + T   { E.val = E1.val + T.val }
E --> T        { E.val = T.val }
T --> T1 * F   { T.val = T1.val * F.val }
T --> F        { T.val = F.val }
F --> digit    { F.val = digit.lexval }

Let us assume the input string 4 * 5 + 6 for computing synthesized attributes. Evaluating
the annotated parse tree bottom-up gives T.val = 4 * 5 = 20 at the subtree for 4 * 5, and
E.val = 20 + 6 = 26 at the root.
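The annotated parse tree itself was a figure not reproduced in these notes; this sketch performs the same bottom-up computation over a hand-built tree for 4 * 5 + 6 (the tuple encoding is invented for illustration):

```python
# Each node is (production kind, children); leaves carry the digit's lexval.
# Semantic rules: E -> E1 + T {E.val = E1.val + T.val},
# T -> T1 * F {T.val = T1.val * F.val}, F -> digit {F.val = digit.lexval}.

def val(node):
    kind = node[0]
    if kind == "digit":
        return node[1]                       # F -> digit
    if kind == "+":
        return val(node[1]) + val(node[2])   # E -> E1 + T
    if kind == "*":
        return val(node[1]) * val(node[2])   # T -> T1 * F
    return val(node[1])                      # unit productions E -> T, T -> F

# Parse tree for 4 * 5 + 6: E -> E1 + T, where E1 derives 4 * 5.
tree = ("+", ("*", ("digit", 4), ("digit", 5)), ("digit", 6))
print(val(tree))   # 26, the synthesized value at the root
```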

2. Inherited Attributes – These are attributes that derive their values from their
parent or sibling nodes, i.e. the value of an inherited attribute is computed from the
values of attributes at the parent and/or siblings.
Example:
A --> BCD { C.in = A.in, C.type = B.type }
Here, C inherits C.in from its parent A and C.type from its sibling B.
Computation of Inherited Attributes –
 Construct the SDD using semantic actions.
 The annotated parse tree is generated and attribute values are computed in top
down manner.
Example: Consider the following grammar
S --> T L
T --> int
T --> float
T --> double
L --> L1, id
L --> id
The SDD for the above grammar can be written as follows:
S --> T L      { L.in = T.type }
T --> int      { T.type = int }
T --> float    { T.type = float }
T --> double   { T.type = double }
L --> L1 , id  { L1.in = L.in; addtype(id.entry, L.in) }
L --> id       { addtype(id.entry, L.in) }

Let us assume the input string int a, c for computing inherited attributes. In the
annotated parse tree, T.type = int is passed down the L-subtree as L.in, so both
identifiers a and c receive the type int.
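The annotated parse tree here was also a figure; this sketch performs the same top-down propagation for int a, c, filling a symbol table as a side effect (decorate_L and symbol_table are invented helper names):

```python
# Semantic rules assumed: S -> T L {L.in = T.type},
# L -> L1 , id {L1.in = L.in; addtype(id, L.in)}, L -> id {addtype(id, L.in)}.

symbol_table = {}

def decorate_L(ids, inherited_type):
    """Walk the L-subtree: every id inherits the declared type L.in."""
    for name in ids:
        symbol_table[name] = inherited_type   # addtype(id, L.in)

# S -> T L with T -> int, and L deriving "a , c"
t_type = "int"                   # T.type, synthesized from T -> int
decorate_L(["a", "c"], t_type)   # L.in = T.type, inherited top-down
print(symbol_table)              # {'a': 'int', 'c': 'int'}
```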

7. Explain LL(1) grammar in details.


LL(1) Grammars

A context-free grammar G = (VT, VN, S, P) whose parsing table has no multiple entries is said
to be LL(1). In the name LL(1),

 the first L stands for scanning the input from left to right,
 the second L stands for producing a leftmost derivation,
 and the 1 stands for using one input symbol of lookahead at each step to make parsing
action decision.

A language is said to be LL(1) if it can be generated by an LL(1) grammar. It can be
shown that LL(1) grammars are:

 not ambiguous, and
 not left-recursive.

Example: the following grammar:

E → T E’

E’ → + T E’ | λ

T → F T’

T’ → * F T’ | λ

F → (E) | id

whose parsing table M is

N/I | id     | +        | *        | (      | )      | $
E   | E→TE’  |          |          | E→TE’  |        |
E’  |        | E’→+TE’  |          |        | E’→λ   | E’→λ
T   | T→FT’  |          |          | T→FT’  |        |
T’  |        | T’→λ     | T’→*FT’  |        | T’→λ   | T’→λ
F   | F→id   |          |          | F→(E)  |        |

is an LL(1) grammar

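As a sketch, the parsing table M above can drive the standard LL(1) stack algorithm (λ productions are empty right-hand sides here; ll1_parse is an invented helper, not a library function):

```python
# Table M above, as a dict: (nonterminal, lookahead) -> right-hand side.
table = {
    ("E",  "id"): ["T", "E'"], ("E",  "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T",  "id"): ["F", "T'"], ("T",  "("): ["F", "T'"],
    ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"): [], ("T'", "$"): [],
    ("F",  "id"): ["id"], ("F",  "("): ["(", "E", ")"],
}
terminals = {"id", "+", "*", "(", ")", "$"}

def ll1_parse(tokens):
    """Return True if tokens (ending in $) derive from E."""
    stack = ["$", "E"]
    i = 0
    while stack:
        top = stack.pop()
        if top in terminals:
            if top != tokens[i]:
                return False
            i += 1                      # match terminal, advance input
        else:
            rhs = table.get((top, tokens[i]))
            if rhs is None:
                return False            # empty table entry: syntax error
            stack.extend(reversed(rhs)) # push production right-to-left
    return i == len(tokens)

print(ll1_parse(["id", "+", "id", "*", "id", "$"]))  # True
print(ll1_parse(["id", "+", "+", "$"]))              # False
```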

8. Explain recursive descent parsing technique with suitable example.
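The notes leave this question unanswered; as a sketch, a recursive descent parser for the LL(1) expression grammar of question 7 writes one function per nonterminal, each consuming tokens left to right (class and helper names are invented):

```python
# Grammar: E -> T E', E' -> + T E' | λ, T -> F T', T' -> * F T' | λ,
# F -> ( E ) | id.  Each nonterminal becomes one method.

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "$"

    def match(self, expected):
        if self.peek() != expected:
            raise SyntaxError(f"expected {expected}, got {self.peek()}")
        self.pos += 1

    def E(self):               # E -> T E'
        self.T(); self.E_()

    def E_(self):              # E' -> + T E' | λ
        if self.peek() == "+":
            self.match("+"); self.T(); self.E_()

    def T(self):               # T -> F T'
        self.F(); self.T_()

    def T_(self):              # T' -> * F T' | λ
        if self.peek() == "*":
            self.match("*"); self.F(); self.T_()

    def F(self):               # F -> ( E ) | id
        if self.peek() == "(":
            self.match("("); self.E(); self.match(")")
        else:
            self.match("id")

def accepts(tokens):
    p = Parser(tokens)
    try:
        p.E()
        return p.pos == len(tokens)   # all input must be consumed
    except SyntaxError:
        return False

print(accepts(["id", "+", "id", "*", "id"]))  # True
print(accepts(["id", "+"]))                   # False
```

Note how the λ alternatives need no code of their own: a function simply returns without consuming input when its lookahead does not match.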


9. Draw the syntax tree and directed acyclic graph for the expression:
(a*b)+(c-d)*(a*b)+b.
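The drawings themselves are not reproduced in these notes; the DAG construction can be sketched by hash-consing, which makes the sharing of the repeated (a*b) subexpression explicit (the node helper and tuple keys are invented for illustration):

```python
# Build the DAG for (a*b) + (c-d)*(a*b) + b: an identical subexpression
# such as (a*b) is created once and then shared.

nodes = {}            # (op, left_id, right_id) -> node id

def node(op, left=None, right=None):
    key = (op, left, right)
    if key not in nodes:              # reuse an existing identical node
        nodes[key] = len(nodes)
    return nodes[key]

a, b, c, d = (node(x) for x in "abcd")
ab   = node("*", a, b)        # first (a*b)
cd   = node("-", c, d)
ab2  = node("*", a, b)        # second (a*b): the same node is reused
prod = node("*", cd, ab2)
root = node("+", node("+", ab, prod), b)

print(ab == ab2)    # True: the DAG shares the common subexpression
print(len(nodes))   # 9 nodes, versus 13 in the syntax tree
```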
10.Explain three address code its statement, type and also implementation
of three address statement.

Three Address Code

 Three-address code is considered an intermediate code and is utilised by
optimising compilers.
 In three-address code, the given expression is broken down into multiple
instructions, which translate to assembly language with ease.
 Each three-address instruction has at most three operands; it is typically a
combination of a binary operator and an assignment.

Implementation of Three Address Code

There are 2 representations of three address codes, namely

1. Quadruple
2. Triples

1. Quadruple

To implement the three address codes, the quadruples have four fields. The name of the
operator, the first source operand, the second source operand, and the result are all contained
in the quadruple field.

Quadruple fields:

Operator | Source 1 | Source 2 | Destination
Example:

p := -q * r + s

With standard precedence this is ((-q) * r) + s, so the three-address code is as follows:

t1 := -q

t2 := t1 * r

t3 := t2 + s

p := t3

2. Triples

To implement the three address codes, the triples have three fields. The name of the operator,
the first source operand, and the second source operand are all contained in the field of
triples.

Triples fields:

Operator | Source 1 | Source 2

Example – p := -q * r + s

The three-address code is as follows:

t1 := -q
t2 := t1 * r
t3 := t2 + s
p := t3
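The two representations can be sketched as plain tuples, using the three-address code for standard precedence, ((-q) * r) + s (the tuple layout is an illustration, not a fixed format):

```python
# Quadruples: an explicit destination field, so temporaries are named.
quadruples = [
    ("uminus", "q",  None, "t1"),
    ("*",      "t1", "r",  "t2"),
    ("+",      "t2", "s",  "t3"),
    (":=",     "t3", None, "p"),
]

# Triples: no destination field; a result is referred to by the index
# of the instruction that computes it (written here as a plain int).
triples = [
    ("uminus", "q", None),   # (0)
    ("*",      0,   "r"),    # (1) uses the result of (0)
    ("+",      1,   "s"),    # (2) uses the result of (1)
    (":=",     "p", 2),      # (3)
]

print(quadruples[1])  # ('*', 't1', 'r', 't2')
print(triples[1])     # ('*', 0, 'r')
```

The trade-off is visible in the data: quadruples are easier to reorder during optimization because results have names, while triples are more compact but break if instructions move, since other triples refer to them by position.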

11.Explain different code optimization technique along with example.
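The notes leave this question unanswered; as one representative technique, constant folding evaluates constant subexpressions at compile time. A minimal sketch over (op, left, right) expression tuples (fold is an invented helper):

```python
def fold(expr):
    """Recursively replace all-constant subexpressions with their value."""
    if not isinstance(expr, tuple):
        return expr                      # leaf: a constant or variable name
    op, l, r = expr
    l, r = fold(l), fold(r)              # fold children first
    if isinstance(l, int) and isinstance(r, int):
        return {"+": l + r, "-": l - r, "*": l * r}[op]
    return (op, l, r)

# x * (2 + 3)  ==>  x * 5
print(fold(("*", "x", ("+", 2, 3))))          # ('*', 'x', 5)
# (4 - 1) * (2 + 3)  ==>  15
print(fold(("*", ("-", 4, 1), ("+", 2, 3))))  # 15
```

Other standard techniques the question covers include common subexpression elimination, dead code elimination, loop-invariant code motion, and strength reduction.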


12.Discuss different issues in design of code generation.
The following issues arise during the code generation phase:
 Input to code generator – The input to the code generator is the intermediate
code generated by the front end, along with information in the symbol table
that determines the run-time addresses of the data objects denoted by the
names in the intermediate representation. Intermediate codes may be
represented mostly in quadruples, triples, indirect triples, Postfix notation,
syntax trees, DAGs, etc.
 Target program: The target program is the output of the code generator. The
output may be absolute machine language, relocatable machine language, or
assembly language.
 Absolute machine language as output has the advantages that it
can be placed in a fixed memory location and can be
immediately executed.
 Relocatable machine language as an output allows subprograms
and subroutines to be compiled separately.
 Assembly language as output makes code generation easier: we can
generate symbolic instructions and use the macro facilities of the
assembler. However, an additional assembly step is needed after
code generation.

 Memory Management – Mapping the names in the source program to the
addresses of data objects is done cooperatively by the front end and the code
generator. A name in a three-address statement refers to the symbol table entry
for that name, from which a relative address can be determined.
 Instruction selection – Selecting the best instructions improves the
efficiency of the program. The target instruction set should be complete
and uniform. Instruction speeds and machine idioms also play a major role
when efficiency is considered; if we do not care about the efficiency of the
target program, instruction selection is straightforward.

 Register allocation issues – Using registers makes computations faster than
using memory, so efficient utilization of registers is important.
The use of registers is subdivided into two subproblems:
1. During register allocation, we select the set of variables that
will reside in registers at each point in the program.
2. During the subsequent register assignment phase, a specific register is
picked for each such variable.

 Evaluation order – The code generator decides the order in which the
instructions will be executed. The order of computations affects the efficiency
of the target code: among the many possible orders, some require fewer
registers to hold intermediate results. However, picking the best order in
the general case is an NP-complete problem.
 Approaches to code generation issues: The code generator must always generate
correct code. This is essential because of the number of special cases a
code generator might face. Some of the design goals of a code generator are:
1. Correct
2. Easily maintainable
3. Testable
4. Efficient
