
1. What is binding? When does it take place? Explain briefly.

Answer
A memory binding is an association between the 'memory address' attribute of a data item and the address of a memory area. Memory allocation is the procedure used to perform memory binding. Memory bindings can be static or dynamic in nature, giving rise to the static and dynamic memory allocation models. In static memory allocation, memory is allotted to a variable before the execution of a program begins. Static memory allocation is typically performed during compilation, and no memory allocation or de-allocation is performed during the execution of the program. Thus, variables are permanently allocated: the allocation for a variable exists even when the program unit in which it is defined is not active. In dynamic memory allocation, memory bindings are established and destroyed during the execution of a program. Typical examples of these memory allocation models are FORTRAN for static allocation, and block-structured languages such as PL/I, Pascal, and Ada for dynamic allocation.

2. Differentiate between static and dynamic storage allocation.

Answer


Static Memory Allocation

Definition: Static memory allocation refers to the process of allocating memory at compile time, before the associated program is executed, unlike dynamic memory allocation or automatic memory allocation, where memory is allocated as required at run time.

Static Allocation of Variables

The first type of memory allocation is known as static memory allocation, which corresponds to file-scope variables and local static variables. Not all variables are automatically allocated. The following kinds of variables are statically allocated: all global variables, regardless of whether or not they have been declared as static, and local variables explicitly declared to be static.

Statically allocated variables have their storage allocated and initialized before main starts running, and they are not deallocated until main has terminated. Statically allocated local variables are not re-initialized on every call to the function in which they are declared. A statically allocated variable thus has the occasionally useful property of maintaining its value even when none of the functions that access the variable are active. The addresses and sizes of these allocations are fixed at compile time, so they can be placed in a fixed-size data area which then corresponds to a section within the final linked executable file. Such memory allocations are called static because they do not vary in location or size during the lifetime of the program. There can be many types of data sections within an executable file; the three most common are normal data, BSS data and read-only data. BSS data contains variables and arrays which are to be initialized to zero at run time, and so it is treated as a special case, since the actual contents of the section need not be stored in the executable file. Read-only data consists of constant variables and arrays whose contents are guaranteed not to change while a program is running.

Dynamic Memory Allocation

The second type of memory allocation is known as dynamic memory allocation, which corresponds to memory allocated via malloc() or operator new. The sizes, addresses and contents of such memory vary at run time, and so they can cause many problems when trying to diagnose a fault in a program. These allocations are called dynamic because their location and size can vary throughout the lifetime of a program. Such memory allocations are placed in a system memory area called the heap, which is allocated per process on some systems, but on others may be allocated directly from the system in scattered blocks.
Unlike memory allocated on the stack, memory allocated on the heap is not freed when a function or scope is exited, and so it must be explicitly freed by the programmer. The pattern of allocations and deallocations is not guaranteed to be (and is not really expected to be) linear, so the functions that allocate memory from the heap must be able to efficiently reuse freed memory and resize existing allocations on request. Some programming languages support a garbage collector, which attempts to automatically free memory that has had all references to it removed; this has traditionally not been very popular for languages such as C and C++, and has been more widely used in functional languages. Because dynamic memory allocations are performed at run time rather than compile time, they are outside the domain of the compiler and must be implemented in a run-time package, usually as a set of functions within a linker library. Such a package manages the heap in a way that abstracts its underlying structure from the programmer, providing a common interface to heap management on different systems. However, this malloc library must decide whether to implement a fast memory allocator, a space-conserving memory allocator, or a bit of both. It must also try to keep its own internal tables to a minimum so as to conserve memory, but this means that it has very little capability to diagnose errors if any occur. Some compiler implementations provide a built-in function called alloca(). This is a dynamic memory allocation function that allocates memory from the stack rather than the heap, so the memory is automatically freed when the function that called it returns. This is a non-standard feature that is not guaranteed to be present in a compiler, and indeed may not be possible to implement on some systems. However, the mpatrol library provides a debugging version of this function (and a few other related functions) on all systems, implemented so that they make use of the heap instead of the stack.

3. What is a hash table? Why do we need it for symbol table implementation?

Answer
Hash Tables

Hash tables are good for doing quick searches. For instance, suppose we have an array full of data (say 100 items). If we knew the position at which a specific item is stored in the array, then we could access it quickly. For example, if we happen to know that the item we want is at position 3, we can simply apply myitem = myarray[3]. With this, we don't have to search through each element in the array; we just access position 3. Hash tables are good in situations where you have enormous amounts of data from which you would like to quickly search and retrieve information, as with compiler symbol tables. The compiler uses a symbol table to keep track of the user-defined symbols in a C++ program. This allows the compiler to quickly look up attributes associated with symbols (for example, variable names).

4. Explain briefly any two types of errors that a compiler detects.

Answer
1) Reporting Errors

The manner in which a compiler reports errors can greatly affect how pleasant and economical it is to use its language on a given machine. Good error diagnostics should possess a number of properties:

1. The messages should pinpoint the errors in terms of the original source program, rather than in terms of some internal representation that is totally mysterious to the user.
2. The error messages should be tasteful and understandable by the user (e.g., "missing right parenthesis in line 5" rather than a cryptic error code such as OH17).
3. The messages should be specific and should localize the problem (e.g., "ZAP not declared in procedure BLAH" rather than "missing declaration").
4. The messages should not be redundant. If a variable ZAP is undeclared, that should be said once, not every time ZAP appears in the program.

2) Sources of Error

It is difficult to give a precise classification scheme for programming errors. One way to classify errors is according to how they are introduced. If we look at the entire process of designing and implementing a program, we see that errors can arise at every stage. At the very outset, the design specifications for the program may be inconsistent or faulty. The algorithms used to meet the design may be inadequate or incorrect (algorithmic errors). The programmer may introduce errors in implementing the algorithms, either by introducing logical errors or by using the programming language constructs improperly (coding errors). Keypunching or transcription errors can occur when the program is typed onto cards or into a file. The program may exceed a compiler or machine limit not implied by the definition of the programming language. For example, an array may be declared with too many dimensions to fit in the symbol table, or an array may be too large to be allocated space at run time. Finally, although it should not happen, a compiler can insert errors as it translates the source program into an object program (compiler errors).

From the point of view of the compiler writer, it is convenient to classify errors as being either syntactic or semantic. We define a syntactic error to be an error detectable by the lexical or syntactic phase of the compiler. Other errors detectable by the compiler are termed semantic errors.

5. What is an Operating System? Explain its components.

Answer


An Operating System, or OS, is a software program that enables the computer hardware to communicate and operate with the computer software. Without an Operating System, a computer would be useless.

Examples of operating systems are Windows Vista, Windows XP, Windows ME, Windows 2000, Windows NT, Windows 95, and all other members of the Windows family; UNIX, Linux, Solaris, Irix, and all other members of the UNIX family; and MacOS 10 (OSX), MacOS 8, MacOS 7, and all other members of the MacOS family.

The nucleus (the OS kernel is also called the nucleus) deals with the following:

1. Interrupt / Trap Handling: The OS contains interrupt service routines (interrupt handlers), typically one for each possible type of interrupt from the hardware. Example: the clock handler handles the clock device, which ticks 60 (or more) times per second. The OS also contains trap service routines (trap handlers), typically one for each possible type of trap from the processor.
2. Short-Term Scheduling: Choosing which process to run next.
3. Process Management: Creating processes, assigning privileges and resources to processes, and deleting processes.

4. Interprocess Communication (IPC): Exchanging information between processes.

Within the OS there are routines for

- Managing Registers
- Managing Time
- Handling Device Interrupts


The OS provides the environment in which processes exist. Every process depends on services provided by the OS.

Components of Computer Systems

The following are the major components of computer systems:

1. Hardware: Provides basic computing resources (CPU, memory, I/O devices).
2. Operating System: Controls and coordinates the use of the hardware among the various application programs for the various users.
3. Application Programs: Define the ways in which the system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs).
4. Users: People, machines, other computers.

6. What is a process? Differentiate between a process and a program.

Answer
A process is a program in execution, or the unit of work in a system. The process has been given many definitions, for instance: a program in execution; an asynchronous activity; the 'animated spirit' of a procedure in execution; the entity to which processors are assigned; the 'dispatchable' unit. The operating system handles everything in terms of processes.

A process is not the same as a program. A process is more than the program code: a process is an active entity, as opposed to a program, which is considered a passive entity. Being passive, a program is only a part of a process. A process, on the other hand, includes:

1. The current value of the Program Counter (PC)
2. The contents of the processor's registers
3. The values of the variables
4. The process stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables
5. A data section that contains global variables

In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes (this rapid switching back and forth is called multiprogramming).
