Computer Hardware and System Software Concepts

Introduction to concepts of System Software/Operating System

Welcome to this course on Computer Hardware and System Software Concepts


RoadMap
Introduction to the Concepts of System Software
Introduction to Operating System/Memory Management

Copyright © 2004, Infosys Technologies Ltd


ER/CORP/CRS/OS09/003 Version No: 2.0

•Day 2
•Recap of Day 1
•Introduce system software
•Discuss the following:
•Assemblers
•Loaders
•Linkers
•Compilers
•Introduce operating systems/memory management
•Discuss the following:
•Operating system
•Functions of an operating system
•Memory management
•Memory management schemes


System Software
System programs which provide a more convenient environment for program development and execution. Examples:
– Compilers
– Assemblers
– Loaders
– Linkers
– Operating System


Motivate, once again, the difference between application software and system software.


Translators
A program which converts a user's program written in some language to another language. The language in which the user's program is written is called the source language. The language to which the source program is converted is called the target language.


Motivation for translators: if there is a processor that can directly execute programs written in the source language, then there is no need to translate the source program into the target language. Translation is thus used only when a processor is available for the target language but not for the source language. Running the translated program gives exactly the same result as executing the original program would have given, had a processor for it been available (assuming, of course, that the translation was done correctly). Examples: compilers, assemblers etc.

There is an important difference between translation and interpretation. In the former, the original program is first converted to an equivalent program called an object program, and this object program is executed only after the translation has been completed. Hence, the original program in the source language is not directly executed. Thus, translation comprises two steps:
1. Generation of an equivalent program in the target language
2. Execution of the generated program

Interpretation consists of only one step i.e. executing the original source program
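The two-step translate-then-execute flow and the one-step interpret flow can be sketched in Python (an illustrative sketch, not from the slides; the tiny expression "language" and the `interpret` helper are invented for the example):

```python
# Sketch: contrasting translation with interpretation for a tiny
# source "language" of arithmetic on named variables.

env = {"I": 3, "J": 3}
source = "I + J"

# Translation: step 1 - generate an equivalent target program (here, a
# Python code object); step 2 - execute the generated program.
target_program = compile(source, "<generated>", "eval")  # translate once
result_translated = eval(target_program, {}, env)        # then execute

# Interpretation: a single step - walk the source text and execute it
# directly, without producing a stored target program.
def interpret(expr, env):
    left, op, right = expr.split()
    a, b = env[left], env[right]
    return a + b if op == "+" else a - b

result_interpreted = interpret(source, env)

print(result_translated, result_interpreted)  # both compute I + J = 6
```

Either path yields the same result; the difference is whether an object program is produced and stored before execution.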


Translators

Source Program

Target Program


Translator

Source Program (High level language)

Compiler

Target Program (Object /Exe code )


When the source program is in a high level language such as COBOL and the target program is in a numerical machine language, the translator is called a compiler.


Translator
Source Program (Assembly language)

Assembler

Target Program (Machine language )


When the source program is in assembly language and the target program is in a numerical machine language, the translator is called an assembler. Assembly language is basically a symbolic representation of a numerical machine language.


Assembly language
A convenient language using mnemonics (symbolic names and symbolic addresses) for coding machine instructions The assembly programmer has access to all the features and instructions available on the target machine and thus can execute every instruction in the instruction set of the target machine


Some other characteristics:
• Assembly language programming is difficult
• Takes longer to write
• Takes longer to debug
• Difficult to maintain

Why go for assembly language programming? 1. Performance issues – For some applications, speed and size of the code are critical. An expert assembly language programmer can often produce code that is much smaller and much faster than a high level programmer can. Example – embedded applications such as the code on a smart card, the code in a cellular telephone, BIOS routines, inner loops of performance critical applications etc. 2. Access to the machine – Some procedures need complete access to the hardware, something which is impossible in high level languages. Example – low level interrupt and trap handlers in an operating system etc.


Assembly language (Example)
Consider the computation of the formula N = I + J, illustrated using instructions from the Motorola 68030:

Label    OPCODE    OPERANDS    COMMENT
TEMP:    MOVE.L    I, D0       ; Load I into Reg D0
         ADD.L     J, D0       ; Add J to D0
         MOVE.L    D0, N       ; Store I+J in N
I:       DC.L      3           ; Reserve 4 bytes initialized to 3
J:       DC.L      3           ; Reserve 4 bytes initialized to 3
N:       DC.L      0           ; Reserve 4 bytes initialized to 0


Assembly language format: a typical assembly language statement consists of:
•Label field – provides symbolic names for memory addresses. A label is needed on executable statements so that the statements can be jumped to; it also permits the data stored there to be accessed by a symbolic name. Example: TEMP, FORMUL etc.
•Operation field – contains a symbolic abbreviation for the opcode or a pseudo-instruction. Example: MOVE, ADD etc.
•Operands field – specifies the addresses or registers used as operands of the machine instruction. Example: D0, R1, R2 etc.
•Comment field – is used for documentation purposes.

Explanation of the example in the slide above:
TEMP : a label
MOVE : an instruction that moves the first argument to the second argument
ADD : adds the contents of the first argument to the second argument and stores the result in the second argument
I : yet another label
DC (Define Constant) : a pseudo-instruction, i.e. a command to the assembler itself. The suffix .L denotes long (i.e. 4 bytes) for that opcode.

One important point is worth noting in the above example: how does the assembler know what is stored in location N? Such a reference, which is used even before it is defined, is called a forward reference (next slide).


Assembly language (The Forward Reference)
A forward reference is a reference to a symbol before it is defined. It can be handled in two ways:
– Two-pass assembler
– One-pass assembler


Each reading of the source program is called a pass. Any translator which reads the input program once is called a one-pass assembler, and one which reads it twice is called a two-pass assembler.

Two-pass assembler:
1. In pass one, the definitions of symbols, statement labels etc. are collected and stored in a table known as the symbol table.
2. In pass two, each statement can be read, assembled and output, as the values of all symbols are now known.
This approach is quite simple, though it requires an additional pass.

One-pass assembler: the assembly program is read once, converted to an intermediate form and stored in a table in memory.
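The two passes can be sketched for a toy assembly language (a hedged sketch: the mnemonics, opcode numbers and tuple encoding are all invented for illustration, not a real instruction set):

```python
# A minimal two-pass assembler sketch for a toy instruction set.
# The program contains a forward reference: JMP NEXT appears before
# the label NEXT is defined.

program = [
    ("",     "JMP",  "NEXT"),   # forward reference to NEXT
    ("TEMP", "NOP",  ""),
    ("NEXT", "HALT", ""),
]

# Pass 1: collect label definitions into the symbol table.
symbol_table = {}
for address, (label, opcode, operand) in enumerate(program):
    if label:
        symbol_table[label] = address

# Pass 2: every symbol is now known, so each statement can be
# assembled and output.
OPCODES = {"JMP": 1, "NOP": 2, "HALT": 3}
object_code = []
for label, opcode, operand in program:
    operand_value = symbol_table.get(operand, 0)
    object_code.append((OPCODES[opcode], operand_value))

print(symbol_table)   # {'TEMP': 1, 'NEXT': 2}
print(object_code)    # [(1, 2), (2, 0), (3, 0)]
```

Note how the forward reference to NEXT is resolved only because pass 1 had already recorded its address before pass 2 assembled the JMP.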


Loaders
Are system programs
Load the binary code into (main) memory
Transfer control to the first instruction


There are various loading schemes: 1. Assemble-and-go loader: The assembler simply places the code into memory and the loader executes a single instruction that transfers control to the starting instruction of the assembled program. In this scheme, some portion of the memory is used by the assembler itself which would otherwise have been available for the object program. 2. Absolute loader: Object code must be loaded into the absolute addresses in the memory to run. If there are multiple subroutines, then each absolute address has to be specified explicitly. 3. Relocating loader: This loader modifies the actual instructions of the program during the process of loading a program so that the effect of the load address is taken into account.
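The relocating-loader idea can be sketched as follows (assumed object format: a flat list of words plus a relocation map listing which words hold addresses; both are invented for the example):

```python
# Relocating-loader sketch: object code assembled as if loaded at
# address 0 is patched at load time. The relocation map marks which
# words of the object code hold addresses.

object_code = [10, 0, 20, 2, 30, 4]   # invented words; some are addresses
relocation_map = [1, 3, 5]            # indices of words that are addresses

def load(code, reloc, load_address):
    memory_image = list(code)
    for index in reloc:
        memory_image[index] += load_address  # account for the load address
    return memory_image

print(load(object_code, relocation_map, 1000))
# [10, 1000, 20, 1002, 30, 1004]
```

An absolute loader, by contrast, would copy the words unchanged and therefore only work at one fixed load address.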


Linkers
Are system programs that account for and reconcile all address references within and among modules. Example

Large Program

main

sort

search

count


In actual practice, a complete program is built from many smaller routines possibly by many people. All these routines have to be connected logically and linked to form a single program. A linker is a systems program that accounts for and reconciles all address references within and among modules and replaces those references with a single consistent scheme of relative addresses. Linking is done after the code is generated and is closely associated with a loader.

Compilers and translators basically translate one procedure at a time and put the translated output on the disk. All the translated procedures have to be located and linked together to be run as a unit called an executable binary program. In MS-DOS, Windows 95/98 etc object modules have extension .obj and the executable binary programs have .exe extension. In UNIX, object modules have .o extension and executable programs have no extension.

Linking is of two main types: 1. Static Linking: All references are resolved during loading at linkage time 2. Dynamic Linking: References made to the code in the external module are resolved during run time. Takes advantage of the full capabilities of virtual memory. The disadvantage is the considerable overhead and complexity incurred due to postponement of actions till run time.
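How a linker assigns bases and reconciles references among modules might be sketched like this (the module names echo the slide's example; the sizes, offsets and dictionary format are invented for illustration):

```python
# Linker sketch: each module exports symbols at module-relative
# addresses and records unresolved references; the linker lays modules
# out consecutively and replaces every reference with one consistent
# absolute address.

modules = {
    "main":  {"size": 4, "exports": {"main": 0},  "refs": {"sort": 2}},
    "sort":  {"size": 6, "exports": {"sort": 0},  "refs": {"count": 3}},
    "count": {"size": 2, "exports": {"count": 0}, "refs": {}},
}

# Lay modules out one after another and build a global symbol table.
base, bases, symbols = 0, {}, {}
for name, mod in modules.items():
    bases[name] = base
    for sym, offset in mod["exports"].items():
        symbols[sym] = base + offset
    base += mod["size"]

# Resolve: (module, offset where the reference sits) -> absolute target.
resolved = {(name, off): symbols[sym]
            for name, mod in modules.items()
            for sym, off in mod["refs"].items()}

print(symbols)   # {'main': 0, 'sort': 4, 'count': 10}
print(resolved)  # {('main', 2): 4, ('sort', 3): 10}
```

Doing this once before execution corresponds to static linking; deferring the `resolved` step until a module is first called corresponds to dynamic linking.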


Compiler
Are system programs that translate an input program in a high level language into its machine language equivalent


Ask the participants: what is an HLL (High Level Language)?
Features of an HLL:
•High degree of machine independence
•Good data structures
•Improved debugging capability
•Good documentation
Examples: COBOL, PASCAL, FORTRAN etc.


Phases in a compiler
Lexical analysis
Syntactic analysis
Semantic analysis
Intermediate code generation
Code optimization
Code generation


Compilers are complex system programs. Hence, they are often broken into several phases to accomplish the task. The phases of a compiler are listed in the slide above. We shall be interested in the functionality of each phase rather than the algorithms used in implementing it. Each phase is an independent task in the compilation process.


Compiler (Front - End )
Largely dependent on the source language
Independent of the target machine
Comprises the first three phases viz.,
– Lexical Analysis
– Syntactic Analysis
– Semantic Analysis
Sometimes the intermediate code generation phase is also included


Most of the time, the phases of a compiler are collected into a front-end and a back-end. The front-end comprises those phases, or parts of phases, which depend on the source language and are independent of the target machine. These include lexical analysis, syntactic analysis, creation of the symbol table, semantic analysis and generation of intermediate code. It also includes some amount of error handling and code optimization that goes along with these phases.


Back-End
Dependent on the target machine
Independent of the source language
Includes the last two phases viz.,
– Code Optimization
– Code Generation


The back-end generally includes those phases of the compiler which depend on the target machine. They do not depend on the source language, just on the intermediate language. The back-end includes code optimization and code generation, along with the necessary error handling and symbol-table operations. Taking the front-end of a compiler and redoing its associated back-end to produce a compiler for the same source language on a different machine is quite common these days.


Lexical Analysis
Scans the source program into basic elements called tokens
Prepares the symbol table, which maintains information about tokens
Eliminates comments and whitespace characters such as blanks and tabs


The lexical analyser is also called a linear analyser or a scanner. The input program, which consists of a stream of characters, is read from left to right and grouped into tokens. Tokens are mainly of two kinds viz.,
1. Fixed elements of the language such as keywords, the vocabulary of the language, operators, signs etc.
2. Identifiers and constants

Example: IF ( x < 5.0 ) THEN x = x + 2 ELSE x = x - 3 ;
Tokens:
Keywords : IF, THEN, ELSE
Identifier(s) : x
Constants : 2, 3, 5.0
The blanks separating these tokens would normally be eliminated during lexical analysis. Nowadays, there are tools to do this phase efficiently; for example, in Unix systems, a standard tool called lex is available for this purpose.
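A minimal scanner for the example statement might look like this in Python (a sketch only: the token classes and regular expressions are simplifications for this one statement, not a real lex specification):

```python
import re

# Scanner sketch: groups the character stream into tokens and discards
# the whitespace that separates them.

TOKEN_SPEC = [
    ("KEYWORD",  r"\b(IF|THEN|ELSE)\b"),
    ("CONSTANT", r"\d+\.\d+|\d+"),
    ("IDENT",    r"[A-Za-z_]\w*"),
    ("OP",       r"[<>=+\-*/();]"),
    ("SKIP",     r"\s+"),
]
SCANNER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for match in SCANNER.finditer(text):
        if match.lastgroup != "SKIP":        # blanks are eliminated here
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(tokenize("IF ( x < 5.0 ) THEN x = x + 2 ELSE x = x - 3 ;"))
```

The output is the stream of (kind, text) pairs that the parser consumes next; note that 5.0 comes out as one CONSTANT token, not three characters.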


Syntax Analysis
Recognizes major constructs and groups them
Calls the appropriate actions corresponding to each construct
Ascertains the legality of every statement
The output of this phase is a parse tree


Syntax analysis is also known as parsing or hierarchical analysis. It basically involves grouping the tokens into grammatical phrases that are used by the compiler to generate the final output. The grammatical phrases of the source program are usually represented by a parse tree (a structural representation of the input being parsed), as shown below for the statement x = 2 * 3.0 + y:

            assignment statement
           /         |          \
    identifier       =        expression
        |                    /     |     \
        x            expression    +    expression
                    /    |    \             |
            expression   *   expression  identifier
                |                |           |
             integer           real          y
                |                |
                2               3.0

Nowadays, there are tools to generate parsers; for example, in Unix systems, a tool called YACC (Yet Another Compiler Compiler) is available for this purpose.
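A small recursive-descent parser, sketched under the assumption that tokens are whitespace-separated, shows how the statement is grouped into a tree with the usual precedence ('*' binds tighter than '+'):

```python
# Recursive-descent sketch that builds a tree for x = 2 * 3.0 + y.

def parse_assignment(tokens):
    target = tokens.pop(0)
    assert tokens.pop(0) == "="
    return ("=", target, parse_expr(tokens))

def parse_expr(tokens):            # expr := term { '+' term }
    node = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        node = ("+", node, parse_term(tokens))
    return node

def parse_term(tokens):            # term := factor { '*' factor }
    node = tokens.pop(0)
    while tokens and tokens[0] == "*":
        tokens.pop(0)
        node = ("*", node, tokens.pop(0))
    return node

tree = parse_assignment("x = 2 * 3.0 + y".split())
print(tree)   # ('=', 'x', ('+', ('*', '2', '3.0'), 'y'))
```

Each grammar rule becomes one function; the nesting of the returned tuples mirrors the parse tree drawn above.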


Semantic Analysis
Looks into the static meaning of the program
Gathers type information for the subsequent code generation phase


Checks whether the type of each variable used in the program is consistent with the type given in its declaration. Example: if a variable is declared as type char, then it is not permitted to do arithmetic operations on that variable.
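The char-arithmetic check described above can be sketched as follows (the declaration table and variable names are invented for illustration):

```python
# Semantic-analysis sketch: check that an operation agrees with the
# declared type, e.g. reject arithmetic on a variable declared as char.

declarations = {"count": "int", "initial": "char"}  # invented names

def check_arithmetic(var, declarations):
    var_type = declarations[var]
    if var_type not in ("int", "float"):
        raise TypeError(f"arithmetic on '{var}' of type {var_type} not permitted")
    return var_type

print(check_arithmetic("count", declarations))   # int: arithmetic allowed

try:
    check_arithmetic("initial", declarations)    # char: rejected
except TypeError as err:
    print(err)
```

A real semantic analyser performs this check on every node of the tree produced by the parser, using the symbol table as its declaration record.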


Intermediate code generation
Transforms the parse tree into an intermediate language representation of the source program. The output is some structured representation viz.,
– AST (Abstract Syntax Trees)
– Quadruples


Some compilers generate an explicit intermediate representation of the source program after syntax and semantic analysis. This intermediate representation of the source program can be thought of as a program for an abstract machine and should have two main properties viz., 1. It should be easy to produce 2. It should be easy to translate into the target program


Necessity for intermediate code generation

m languages

n machines


Let us consider the situation given in the slide above. Suppose we have to write compilers for m languages targeted at n machines. The obvious approach would be to write m*n compilers.


Intermediate code generation

m languages

...................

INTERMEDIATE CODE

n machines

....................

An intermediate language avoids most of these problems. It allows a logical separation between the machine-independent and machine-dependent phases and facilitates optimization. All we have to do is choose a rich intermediate language that can bridge the source programs and the target programs. Find out how many front-ends and back-ends would be required in the example shown in the slide. Intermediate representations come in a variety of forms, and there are many algorithms for generating intermediate code for typical programming language constructs.


Code Optimization
Transforms the intermediate code to improve execution time and memory space usage
Examples
– Common sub-expression elimination
– Dead Code Elimination
– Loop optimization


Common Sub-Expression Elimination:
•Avoid re-computation of expressions
•Make use of previously computed values
Example: x = 2*i ; y = 2*i ; z = i*2
Transform to y = x and z = x at appropriate points, or detect i*2 as a common expression.

Dead Code Elimination:
Consider the following fragment:
x = 10 ;
y = .... ;
if ( x < 100 ) then y = y + 5 else y = y - 5
The else branch will never get executed, since the value of 'x' cannot be greater than or equal to 100.

Loop Optimization:
When a program is in execution, a lot of time is spent in loops. There are various ways to perform optimizations inside a loop. For example, if there is a statement such as TEMP = 5 inside a loop which is not affected by other statements, then it can be moved outside the loop. Such an optimization is called code motion.
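The common sub-expression case (x = 2*i, y = 2*i, z = i*2) can be sketched as a pass over three-address statements (the tuple encoding and the 'copy' pseudo-op are invented for the example):

```python
# Common-subexpression-elimination sketch over three-address statements:
# a repeated right-hand side reuses the earlier result instead of being
# recomputed. Commutative operands are normalized so i*2 matches 2*i.

code = [("x", "*", "2", "i"), ("y", "*", "2", "i"), ("z", "*", "i", "2")]

def eliminate_cse(code):
    seen, out = {}, []
    for target, op, a, b in code:
        key = (op,) + tuple(sorted((a, b)))   # normalize 2*i and i*2
        if key in seen:
            out.append((target, "copy", seen[key], None))  # reuse result
        else:
            seen[key] = target
            out.append((target, op, a, b))
    return out

print(eliminate_cse(code))
# [('x', '*', '2', 'i'), ('y', 'copy', 'x', None), ('z', 'copy', 'x', None)]
```

Only one multiplication survives; the other two statements become copies of the previously computed value, exactly the transformation described above.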


Code Generation
Generates the code for the target machine
Translates intermediate code to a sequence of machine instructions that perform the desired task
The code generator has knowledge of the target machine:
– the number of registers
– special registers
– addressing modes etc.


The final phase of the compiler is generation of the target code, which normally consists of relocatable machine code or assembly code. Memory locations are selected for all the variables used by the program. The intermediate instructions are then translated into a sequence of machine instructions that perform the same task.
Example: Expression : x = 2 * 3.0 + y
Generated code:
LOAD R1, 2
MUL R1, 3.0
ADD R1, y
STORE x
In the first instruction, 2 is loaded into register R1. The second instruction multiplies the value in R1 by 3.0. The third instruction adds the value of y to the previous result. The final result is stored in x.
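A toy code generator for this expression might be sketched like this (the tree encoding and register-style mnemonics are illustrative; the sketch assumes the right operand of each node is a leaf, as it is in this expression):

```python
# Code-generation sketch: walk the tree for x = 2 * 3.0 + y and emit a
# register-style instruction sequence like the one shown above.

OPS = {"*": "MUL", "+": "ADD"}

def generate(node, out, reg="R1"):
    op, left, right = node
    if isinstance(left, tuple):
        generate(left, out, reg)          # evaluate the subtree into reg
    else:
        out.append(f"LOAD {reg}, {left}")
    out.append(f"{OPS[op]} {reg}, {right}")  # right operand assumed a leaf

tree = ("+", ("*", "2", "3.0"), "y")      # right-hand side of x = 2*3.0+y
code = []
generate(tree, code)
code.append("STORE R1, x")
print(code)   # ['LOAD R1, 2', 'MUL R1, 3.0', 'ADD R1, y', 'STORE R1, x']
```

The left-to-right walk of the tree is what turns the nested expression into a flat sequence of instructions that keep the running result in one register.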


Support modules
Provide the additional services of storage allocation and error indication which are required by a compiler. Examples
– Symbol table
– Error processing


The support modules interact with all the six phases of a compiler.


Symbol Table
A table which contains information about identifiers encountered during lexical analysis
Keeps track of the attributes of the symbols, such as:
– name
– type (int, char etc.)
– size (in bytes)
– address of the label


A symbol table is a data structure which contains a record for each identifier. The fields of the record contain the attributes of the identifier. This basically helps us to locate the record for each identifier easily and to store or retrieve data from that record quickly. When an identifier in the source program is detected during lexical analysis, its information is stored in the symbol table. The remaining phases of the compiler enter the information about the attributes of the identifier.


Error Processing
Error reporting
– Gives the error number and pinpoints the place in the source where the error has been detected

Error recovery
– After reporting an error, continues translation


Error processing is required at almost every stage. Once an error is detected, the error processing module generates an error message for the user. Error processing is of two types viz.,
1. Error Reporting
2. Error Recovery
Error Reporting involves getting the line number and pinpointing the place in the source where the error has been detected.
Error Recovery: after reporting an error, error processing attempts to either correct or skip certain lexemes up to a point from where it can pretend that nothing has gone wrong and continue translation.


Interpreter
Is a systems program that examines and executes a program on a line-by-line basis rather than producing object code
Slower than compiled code
Used in test environments, as the overhead of compilation is not there
Generally not recommended for production programs


Each line is scanned, parsed and executed before moving to the next line.


OPERATING SYSTEMS
Memory Management

Introduction to Operating Systems Introduction to the Concepts of Memory Management


Operating systems
A program which acts as an interface between the user and the computer, providing an environment in which a user can execute programs
Viewed as a Resource Allocator or Resource Manager for:
– Memory
– Processors
– Peripherals
– Information


The primary goal of an operating system is convenience for the user; the secondary goal is efficient operation of the computer system.

Resource        Examples                    Manager
Memory          Primary, Secondary          Memory Management
Processors      CPU, I/O                    Process Management
Peripherals     Terminal, Printer, Tape     Device Management
Information     Files, Data                 File Management

Examples of operating systems:
•MS-DOS
•OS/2
•WINDOWS 3.X
•WINDOWS 95
•WINDOWS NT
•UNIX


Memory management
Plays an important role as utilization of memory is crucial to the performance of the system
– Allow as many user jobs as possible to be active
– Respond to changes in memory demand by individual user jobs
– Prevent unauthorized changes to a user job's memory region
– Implement allocation and addressing as efficiently as possible


The computer must keep several processes in memory at the same time to improve the CPU utilization and the speed of the response of the computer to its users. Memory management discusses various ways to manage memory.


Memory management SCHEMES
Single Contiguous allocation
Partitioned allocation
Relocatable Partitioned allocation
Simple Paged allocation
Demand Paging
Segmentation


There are various memory management schemes as mentioned in the slide above. Each scheme has its own advantage and disadvantage. Selection of a particular technique depends on various factors such as hardware support, extent of memory available etc.


Single Contiguous Allocation

MEMORY
+------------+
| OS         |
| User's job |
| Wasted     |
+------------+

In single contiguous allocation, the user program is given complete control of the CPU until it completes or an error occurs.
Advantage: very simple to implement
Disadvantages:
•Leads to uniprogramming
•Leads to wastage of space
•Leads to wastage of time (during any I/O operation the CPU has to wait till the I/O is finished)


Partitioned Allocation
Fixed Partitioned allocation
Variable Partitioned allocation


To solve the problems of space and time usage, the memory is broken into partitions, which allows several user jobs to reside in memory. There are two main kinds of partitioning viz., fixed and variable.


Fixed Partitioned Allocation
MEMORY (A)
+-------+
| OS    |
| JOB 1 | 20 K
| JOB 2 | 10 K
| JOB 3 | 30 K
| FREE  | 10 K
+-------+

Here, the memory is divided into fixed partitions as shown in the slide above.
Advantage: leads to multiprogramming (CPU utilization is increased).
Disadvantage: leads to internal fragmentation (explained in the next slide).
Solutions:
•Relocatable partitions
•Paged allocation


Fixed Partitioned Allocation
MEMORY (A)            MEMORY (B)
+-------+             +-------+
| OS    |             | OS    |
| JOB 1 | 20 K        | FREE  | 20 K
| JOB 2 | 10 K        | JOB 2 | 10 K
| JOB 3 | 30 K        | FREE  | 30 K
| FREE  | 10 K        | FREE  | 10 K
+-------+             +-------+
Disadvantages:
1. Consider the situation where JOB 1 (20K) and JOB 3 (30K) are over. Now suppose there is a new job of 50K which has to be executed. In the present scenario, as we see in the slide, even though 60K is available, we cannot run a job of 50K because the available memory is not contiguous.
2. If a job of 25K has to be executed, it has to go into a 30K slot, resulting in a wastage of 5K. This occurrence of free space within the active process space is called Internal Fragmentation.
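The 25K-job-in-a-30K-slot case can be worked through with a short sketch (best-fit placement is an assumption made for the example; the slide does not name a placement policy):

```python
# Internal-fragmentation sketch for fixed partitions: a job is placed
# in the smallest free partition that fits, and the leftover space
# inside that partition is wasted.

partitions = [20, 10, 30, 10]   # partition sizes in K, as in the slide

def place(job_size, free_partitions):
    candidates = [p for p in free_partitions if p >= job_size]
    if not candidates:
        return None, 0
    chosen = min(candidates)                 # best fit
    free_partitions.remove(chosen)
    return chosen, chosen - job_size         # internal fragmentation

free = list(partitions)
slot, wasted = place(25, free)
print(slot, wasted)   # 30 5 -> a 25K job in a 30K slot wastes 5K inside it
```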


Variable Partitioned allocation
No predetermined partitioning of memory: allocates the exact amount of memory space needed for each process as and when required
Processes are loaded into consecutive areas until the memory is filled or the remaining space is too small to accommodate a new process
Disadvantage
– External Fragmentation


Self-explanatory


Variable Partitioned Allocation
MEMORY (A)            MEMORY (B)
+-------+             +-------+
| OS    |             | OS    |
| JOB 1 | 20 K        | FREE  | 20 K
| JOB 2 | 10 K        | JOB 2 | 10 K
| JOB 3 | 30 K        | FREE  | 30 K
| FREE  | 10 K        | FREE  | 10 K
+-------+             +-------+

Here, the partitions are not fixed. As and when jobs come, they take up consecutive space in memory.
Disadvantage: when a process terminates, the space that it occupied is freed; these free spaces are called 'holes'. When holes form between active (running) processes, even though the total free space may be sufficient to hold a new process, there may not be a single hole large enough to accommodate it. This kind of wastage, which occurs outside the space allocated to active processes, is called External Fragmentation. When adjacent holes combine to form one big hole, it is known as 'coalescence of holes'.
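Coalescence of holes can be sketched as merging adjacent (start, size) intervals (the interval representation is invented for the example):

```python
# Coalescence-of-holes sketch: when a process terminates, its region
# becomes a hole, and holes that touch each other merge into one
# larger hole.

def coalesce(holes):
    """Merge (start, size) holes that are adjacent in memory."""
    merged = []
    for start, size in sorted(holes):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1] = (merged[-1][0], merged[-1][1] + size)  # adjacent
        else:
            merged.append((start, size))
    return merged

# Holes at 0K (20K) and 20K (30K) touch; the hole at 60K (10K) does not.
print(coalesce([(0, 20), (20, 30), (60, 10)]))   # [(0, 50), (60, 10)]
```

Note that only adjacent holes merge; the non-adjacent 10K hole stays separate, which is exactly why external fragmentation can persist without compaction.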


Relocatable Partitioned Allocation
Space wasted due to fragmentation can be used by doing compaction – running jobs are squeezed together by relocating them and clubbing the rest of the free space into one large block. Simple to implement


In this scheme, the active processes are shifted to one end of memory, leaving the holes to combine into a much larger free space than before.


Relocatable partitioned allocation
MEMORY                MEMORY (After Compaction)
+-------+             +-------+
| OS    |             | OS    |
| FREE  | 20 K        | JOB 2 | 10 K
| JOB 2 | 10 K        | FREE  | 60 K
| FREE  | 30 K        +-------+
| FREE  | 10 K
+-------+

The diagram above shows that JOB 2 has been moved upward, leaving 60 K of contiguous free space. A new job of 50 K can be run now.
Disadvantage:
•Relocating the running jobs afresh leads to problems that are address dependent
Solutions:
•Reload and restart from the beginning every program that needs to be relocated, which is very expensive and at times an irreversible action
•A relative addressing mechanism, wherein the job runs independent of its location in memory. The disadvantage of relative addressing is the extra overhead incurred for a separate index register and addressing through index registers.
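Compaction can be sketched as sliding the active jobs to one end and clubbing the free space (the (name, size) layout format is invented for the example):

```python
# Compaction sketch: active jobs are slid to one end of memory and the
# free space is clubbed into one block; every moved job is relocated
# to its new address.

def compact(regions):
    """regions: list of (name, size); 'FREE' entries are holes.
    Returns (name, new_start, size) with one large FREE block at the end."""
    new_layout, address, free_total = [], 0, 0
    for name, size in regions:
        if name == "FREE":
            free_total += size
        else:
            new_layout.append((name, address, size))  # relocated job
            address += size
    new_layout.append(("FREE", address, free_total))  # one large block
    return new_layout

layout = [("FREE", 20), ("JOB2", 10), ("FREE", 30), ("FREE", 10)]
print(compact(layout))   # [('JOB2', 0, 10), ('FREE', 10, 60)]
```

The sketch also shows why compaction forces relocation: JOB2's start address changes, so every address inside it must be adjusted or expressed relative to an index register.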


Simple paged allocation
Divides the job's address space into pages of the same size (e.g. 4K)
Divides the main memory address space into blocks/frames of the same size (4K)
Pages are units of memory that are swapped in and out of primary memory
The page-to-block mappings for a given user job are kept in a page table


Simple paged allocation is a solution to fragmentation.
Advantage: as each page is separately allocated, the user's job need not be contiguous in memory.
Disadvantages:
•Extra memory is required for storing page tables
•A considerable amount of hardware support is required for address transformation etc.
•All pages of the entire job must be in memory


Simple paged allocation

Page map table for JOB 1        Page map table for JOB 2
page number   block number      page number   block number
0             2                 0             3
1             4
2             7

MEMORY (block boundaries at 1000, 2000, 3000, 4000, ..., 7000):
OS
block 2  <- JOB 1, page 0
block 3  <- JOB 2, page 0
block 4  <- JOB 1, page 1
block 7  <- JOB 1, page 2

The example in the slide above shows a page map table with two columns viz., page number and block number, which records the mapping between pages and blocks. JOB 1 has 3 pages viz., 0, 1 and 2: page 0 maps to block 2 of main memory, page 1 to block 4 and page 2 to block 7. JOB 2 has 1 page, i.e. page 0, which maps to block 3. Thus, we can see that the pages of a job need not be located contiguously in memory.
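The page-to-block mapping above can be turned into an address-translation sketch (the 1000-unit page size is an assumption taken from the block boundaries drawn in the slide; real systems use sizes such as 4K):

```python
# Address-translation sketch for simple paging with JOB 1's page map
# table: page 0 -> block 2, page 1 -> block 4, page 2 -> block 7.

PAGE_SIZE = 1000                 # assumed from the slide's 1000-unit blocks
page_table = {0: 2, 1: 4, 2: 7}

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)  # split the address
    block = page_table[page]                           # look up the block
    return block * PAGE_SIZE + offset                  # physical address

print(translate(0))      # 2000: page 0, offset 0   -> block 2
print(translate(1500))   # 4500: page 1, offset 500 -> block 4
print(translate(2001))   # 7001: page 2, offset 1   -> block 7
```

The hardware performs exactly this split-lookup-recombine on every memory reference, which is why paging needs considerable hardware support.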


Demand paging
Illusion of infinite memory available to the user (Virtual Memory)
A job operates under demand paging
A page is brought into main memory only when there is a demand for it


This is an enhancement of simple paging wherein the pages are brought into the primary memory from the secondary memory (disk) only on demand. Thus, the entire process is not loaded into the memory at one stretch. It is loaded part by part and executed and then swapped back to the disk so that some other blocks of the same process can be loaded in that place. This gives an illusion to the user that the memory can accommodate and execute a process of any size. Since the full process is not loaded at one stretch, the process size can exceed the total memory size and still be executed. This is known as the virtual memory concept.


Demand paging

Page map table for JOB 1, now with two additional columns:

Page number | Block number | Status (A/NA) | Judgement

(The memory map shows the OS followed by blocks at 1000, 2000, 3000, 4000 and 7000, as in the previous slide.)

The diagram in the slide above shows the page map table for demand paging. The page map table (PMT) has two additional columns viz., status and judgement. Initially, all the pages have status field as NA (Not Available) implying that all the pages are in the secondary device (disk). As and when a page is loaded from the secondary to primary, the status is updated to A (Available) from NA. Now, if the same page is required again in the main memory, the status bit will indicate the presence of it in the primary memory. The judgement field decides if a page has to be moved back to the secondary memory or not.


Page replacement algorithms
Algorithms based on which pages are selected for replacement
Examples
– Least Frequently Used (LFU)
– Least Recently Used (LRU)
– Not Recently Used (NRU)
– First In First Out (FIFO)


LFU: If the algorithm moves a page from main memory to secondary memory because it is not used often, it is called LFU. For every page, a reference counter is maintained in the judgement field.
LRU: If the algorithm moves a page out because it has not been used in recent times as much as the others, it is called LRU. For every page, a timestamp is maintained in the judgement field.
NRU: If the algorithm moves a page out because it has not been used at all in recent times, it is called NRU. A reference bit is associated with each page.
FIFO: If the algorithm decides that the page which was moved into memory first should be moved out first, it is using FIFO.
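FIFO replacement can be sketched as a short simulation that counts page faults (the reference string is a standard textbook example, not from the slides):

```python
from collections import deque

# FIFO page-replacement sketch: when all frames are full, the page that
# was brought into memory first is the one moved out to make room.

def fifo_faults(reference_string, frame_count):
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                      # page fault
            if len(frames) == frame_count:
                frames.popleft()             # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 9 faults
```

Swapping the deque for a structure ordered by reference count or by timestamp would turn the same loop into LFU or LRU respectively.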


Thrashing
Most of the time is spent swapping pages in and out between main memory and secondary memory, instead of doing useful work
This high paging activity, when it happens frequently, is called THRASHING
To overcome thrashing, the system schedules some pages to be removed from memory in the background (page stealing) and continues until a certain number of page frames are free


Page Fault: If a user job accesses a page and the page is not available in the main memory, a page fault is said to occur Page Replacement: If the memory is full then the inactive pages which are not needed currently for execution are removed and are replaced by those pages from the secondary device which are to be executed. This is called Page Replacement.


Segmentation
A segment is a grouping of information that is treated as a logical entity
A process is divided into different segments, each with its own length; for example, one segment can correspond to a single subroutine, or to a group of closely related subroutines
Segmentation uses unequal chunks
Chunk size is determined by the programmer
Each individual segment can be protected
Requires two-level translation: segment tables to page tables and then to main memory


Paging brought about a separation between the user's view of memory and the actual physical memory; segmentation takes care of this. The user prefers to view memory as a collection of different-sized segments with no necessary ordering among them. For example, when a user writes a program, he thinks of it as a main program with a set of subroutines, procedures, functions etc. Each of these modules is referred to by a name, and each of these segments is of variable length. The length of a segment is defined by its purpose in the program. Segmentation is thus a memory management technique which supports the user's view of memory. The programmer has a say in the number of segments in a process, and this division depends on the logical structure of the process.


Summary
System Software Translators Operating System Memory Management



Thank You!

