
Operating System Module 3

1. Memory management

The part of the OS that manages memory is called the Memory Manager. It is responsible
for the following:
 Keeping track of which parts of memory are in use and which parts are free.
 Allocating memory to processes when they need it and deallocating it when they are done.
 Managing swapping between main memory and disk (auxiliary memory) when main
memory is not large enough to hold all the processes.

1.1 Mono-programming without Swapping or Paging

Memory management systems can be divided into two classes: those that move processes
back and forth between main memory and disk during execution (swapping and paging), and
those that do not.
The simplest possible memory management scheme is to have just one process in
memory at a time, and to allow that process to use all the memory.

[Fig. 1: Three simple ways of organising memory with one user process: the OS in RAM at the
bottom of memory with the user program above it; the OS in ROM at the top of memory with the
user program below it; and device drivers in ROM at the top, the user program in the middle, and
the OS in RAM at the bottom. In each case addressing starts at 0.]
1.2 Multiprogramming with fixed partitions


To take maximum advantage of multiprogramming, several jobs must reside in the
computer's main memory at a time. Then, when one job requests I/O, the CPU can be switched
immediately to another job and carry on computing without delay. When this new job gives up
the CPU, yet another job may be ready to use it. The earliest multiprogramming systems used
fixed partition multiprogramming, in which main memory was divided into a number of fixed-size
partitions. Each partition could hold a single job. The CPU was switched rapidly between jobs to
create the illusion of multiprogramming. When a job arrives, it can be put into the input queue for
the smallest partition large enough to hold it.

Multiple Input Queues

Here memory is divided up into n (possibly unequal) partitions.

[Fig. 2a: Fixed memory partitions (the OS plus partitions 1-4), each partition with its own input
queue of waiting jobs.]
If a job was ready to run but its partition was occupied, that job had to wait, even if other
partitions were available. This results in wasted storage.
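The multiple-queue scheme above can be sketched as follows. This is an illustrative model, not part of the original text; the partition sizes are invented for the example.

```python
# Sketch of fixed-partition allocation with multiple input queues:
# an arriving job joins the queue of the smallest partition that can
# hold it. Partition sizes (in KB) are illustrative.
from collections import deque

partitions = {1: 100, 2: 200, 3: 400, 4: 700}   # partition number -> size
queues = {p: deque() for p in partitions}

def enqueue_job(job_name, job_size):
    """Queue the job at the smallest partition large enough to hold it."""
    fitting = [p for p, size in partitions.items() if size >= job_size]
    if not fitting:
        return None                              # too big for any partition
    best = min(fitting, key=lambda p: partitions[p])
    queues[best].append(job_name)
    return best

print(enqueue_job("J1", 150))   # -> 2 (smallest partition of at least 150 KB)
print(enqueue_job("J2", 90))    # -> 1
```

Note the weakness the text describes: J2 waits in queue 1 even if partitions 3 and 4 sit empty.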

Single Input Queue

[Fig. 2b: The same fixed partitions (P1-P4) served by one single input queue of jobs.]
An alternative organisation is to maintain a single queue. Whenever a partition becomes free, the
job closest to the front of the queue that fits in it can be loaded into the empty partition and run.
Since it is undesirable to waste a large partition on a small job, a different strategy can be
followed instead: whenever a partition becomes free, the whole input queue is searched and the
largest job that fits into the empty partition is picked.
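The two single-queue strategies can be contrasted in a short sketch (the queue contents and sizes are invented for illustration):

```python
# Two single-input-queue strategies for a freed partition:
# (a) take the front-most job that fits; (b) scan the whole queue
# and take the largest job that fits. Sizes are in KB.
def first_from_front(queue, partition_size):
    """Pick the job closest to the front that fits the free partition."""
    for job, size in queue:
        if size <= partition_size:
            return job
    return None

def largest_that_fits(queue, partition_size):
    """Search the whole queue and pick the largest job that fits."""
    fitting = [(job, size) for job, size in queue if size <= partition_size]
    return max(fitting, key=lambda js: js[1])[0] if fitting else None

queue = [("J1", 50), ("J2", 300), ("J3", 180)]
print(first_from_front(queue, 200))   # -> J1 (front-most job that fits)
print(largest_that_fits(queue, 200))  # -> J3 (largest fit, wastes less space)
```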

Relocation and protection


Multiprogramming introduces two essential problems that must be solved: relocation and
protection. In a multiprogramming system it is clear that different jobs will be running at different
addresses. When a user program is linked (i.e., the main program, user-written procedures and
library procedures are combined into a single address space), the linker must know at which
address the program will begin in memory.
For example, suppose that the first instruction is a call to a procedure at relative address
100 within the binary file produced by the linker. If this program is loaded into partition 1 (P1, see
figure below), that instruction will jump to absolute address 100, which is inside the OS.

[Fig. 3a: Memory divided into fixed partitions: the OS occupies 0K-100K, P1 100K-200K,
P2 200K-400K, P3 400K-700K, and P4 from 700K upwards.]
What is needed is a call to 100K+100. If the program is loaded into partition 2 (P2), it
must be carried out as a call to 200K+100, and so on. This is known as the relocation
problem.
A solution to both the relocation and protection problems is to equip the machine with two
special hardware registers, called the base and limit registers.
When a process is scheduled, the base register is loaded with the address of the start of its
partition, and the limit register is loaded with the length of the partition.

[Fig. 3b: The base register (here 200) holds the address of the start of the process's partition; the
limit register (here 400) holds the maximum address of the partition.]
Every memory address generated automatically has the base register contents added to it
before being sent to memory. Thus, if the base register holds 200K, a call 100 instruction is
turned into a call 200K+100 instruction without modifying the instruction itself. Addresses are
also checked against the limit register to make sure that no attempt is made to address memory
outside the current partition.
The hardware protects the base and limit registers to prevent user programs from
modifying them. The IBM PC uses a weaker version of this scheme; it has a base register (the
segment register) but no limit register. An additional advantage of using a base register for
relocation is that a program can be moved in memory after it has started execution; after it has
been moved, only the value of the base register needs to be changed to make it ready to run.
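A minimal sketch of this base/limit translation, using the partition boundaries of fig. 3a (the function name is ours, not from the text):

```python
# Base/limit address translation: every address the program generates is
# checked against the limit and then offset by the base register.
K = 1024

def translate(virtual_addr, base, limit):
    """Return the physical address, or raise on a protection violation."""
    if virtual_addr >= limit:
        raise MemoryError("address outside current partition")
    return base + virtual_addr

# A 'call 100' executed in partition P2 (base 200K, length 200K):
print(translate(100, 200 * K, 200 * K))  # -> 204900, i.e. 200K + 100
```

Moving the process to another partition changes only the base value passed in; the instruction stream itself is untouched, which is exactly the relocation property described above.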

Swapping
With timesharing, there are normally more users than there is memory to hold all their
processes, so it is necessary to keep the excess processes on disk. To run, these processes must be
brought into main memory. Moving processes from main memory to disk and back is called
swapping.

1.3 Multiprogramming with variable partitions


In principle, a swapping system could be based on fixed partitions. Whenever a process is
blocked, it could be moved to the disk and another process brought into its partition from the
disk. In practice, fixed partitions are unattractive when memory is scarce because too much of it
is wasted by programs that are smaller than their partitions. Instead, a different memory
management algorithm, known as variable partitions, is used.
With variable partitions, the number and size of the partitions in memory vary
dynamically throughout the day. The figure below shows how variable partitions work.

[Fig. 4 (a)-(g): Memory allocation changing over time as processes A, B, C, D and E come and
go; the OS sits at the bottom of memory and the remaining space is divided among the resident
processes, leaving holes.]
 Initially only process A is in memory.
 Processes B and C are created or swapped in from disk.
 Process A terminates or is swapped out to disk (fig. d).
 Process D comes in (fig. e), and B goes out (fig. f).
 Finally E comes in (fig. g).
The main difference between fixed and variable partitions is that the number, location and
size of the partitions vary dynamically in the latter as processes come and go, whereas they are
fixed in the former. Every storage organisation scheme involves some degree of waste. In
variable partition multiprogramming the waste does not become obvious until jobs start to finish
and leave holes in main storage. It is possible to combine all the holes into one big hole by
moving processes downward as far as possible. This technique is known as memory compaction.
It is usually not done, because it requires a lot of CPU time.

1.4 Variable Partition Allocation with Compaction.


The fragmentation problem encountered in the previous method can be tackled by
physically moving resident processes about the memory in order to close up the holes and hence
bring the free space into a single large block. This process is referred to as compaction.

[Fig. 6a (before compaction): in a 1000K memory the OS occupies 0K-100K, process A
100K-250K and process C 450K-550K, leaving the free space split into separate holes.
Fig. 6b (after compaction): process C has been moved down next to process A, leaving a single
free block at the top of memory.]
It is clear that the compaction will have the desired effect of making the total free space
more usable by incoming processes, but this is achieved at the expense of large-scale movement
of current processes. All processes would need to be suspended while the re-shuffle takes place,
with attendant updating of process context information, such as the load address. Such activity
would not be feasible in a time critical system and would be a major overhead in any system.
In practice, the compaction scheme has seldom been used, because its overheads and
added complexity tend to cancel out its advantage over the non-compacted scheme. So we are
still in pursuit of a technique that will make better use of the memory and hence enhance the
throughput of the system. Our current problem is that we create holes in available memory which
can only be consolidated at the considerable expense of moving active processes. The residual
size of these free-space holes is the essential problem; they are frequently too small to
accommodate a full process.
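The compaction step itself can be sketched as a simple downward slide (the layout below is illustrative and only loosely follows fig. 6a; in a real system each move is a large memory copy, which is where the overhead comes from):

```python
# Compaction sketch: resident processes are slid downward over the holes
# so that all free space ends up as one block at the top of memory.
def compact(segments, memory_size):
    """segments: list of (name, base, size) in KB. Returns (layout, hole)."""
    next_free = 0
    compacted = []
    for name, _base, size in sorted(segments, key=lambda s: s[1]):
        compacted.append((name, next_free, size))  # move down to next_free
        next_free += size
    hole = memory_size - next_free                 # single free block on top
    return compacted, hole

layout = [("OS", 0, 100), ("A", 100, 150), ("C", 450, 100)]
print(compact(layout, 1000))
# -> ([('OS', 0, 100), ('A', 100, 150), ('C', 250, 100)], 650)
```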
Free storage can be assigned to an incoming job by one of several storage placement algorithms:
 First fit: allocate the first hole that is big enough.
 Best fit: allocate the smallest hole that is big enough.
 Worst fit: allocate the largest available hole.
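The three placement algorithms can be compared over a free-hole list (hole sizes are made up for the example):

```python
# First fit, best fit and worst fit over a list of free-hole sizes (KB).
# Each function returns the index of the chosen hole, or None.
def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fitting = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fitting)[1] if fitting else None   # smallest adequate hole

def worst_fit(holes, size):
    fitting = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fitting)[1] if fitting else None   # largest hole

holes = [200, 80, 500, 120]
print(first_fit(holes, 100))  # -> 0 (200 KB, first that fits)
print(best_fit(holes, 100))   # -> 3 (120 KB, tightest fit)
print(worst_fit(holes, 100))  # -> 2 (500 KB, biggest hole)
```

First fit is fast; best fit minimises the leftover in the chosen hole but tends to litter memory with tiny unusable holes; worst fit tries to keep the leftovers large.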

2. Virtual Memory

If the combined size of the program, data, and stack exceeds the amount of physical
memory available for it, then part of the program is kept on disk. The physical memory is thus
virtually enlarged by adding part of the disk space.

2.1 Paging

In a paged system, each process is notionally divided into a number of fixed-size 'chunks'
called pages, typically 4KB in length. The memory space is also viewed as a set of page frames
of the same size. Loading a process now involves transferring each of its pages to some memory
page frame.
The figure below shows an example of three processes that have been loaded into
contiguous page frames in memory.

[Figs. 7a-7c: Process A consists of five pages, process B of three pages and process C of four
pages.]

Figure 7d shows that after loading these processes there remain two free page frames in memory,
which are available for use.
Suppose now that process B terminates and releases its allocation of pages, giving us the
situation illustrated in figure 7e.
We now have two disconnected regions of free space, reminiscent of the holes in the variable
allocation scheme. However, this is less of a problem in a paging system, because allocation is
done on a page-by-page basis; the pages of a process, as held in memory page frames, need not
be contiguous or even in the correct order.
Let us now assume that two more processes, D and E, need to be loaded; process D requires two
pages and E requires three pages. These are allocated to whichever page frames are free,
producing figure 7f.

[Fig. 7d: main memory holding pages A1-A5, B1-B3 and C1-C4, with two free frames.
Fig. 7e: B's three frames freed after process B terminates. Fig. 7f: the free frames reallocated to
hold D1, D2, E1, E2 and E3.]
Paging alleviates the problem of fragmented free space, since a process can be distributed over a
number of separate holes. After a period of operation, the pages of active processes could become
extensively intermixed.

There is still the residual problem of there being a number of free spaces available which are
insufficient in total to accommodate a new process; such space would be wasted. However, in
general, the space utilization and consequently the system throughput are improved.
Paging requires relocation of multiple parts of each process. Clearly, the needs of paging in this
respect are more elaborate. The key to the solution of this problem lies in the way a specific
memory location is addressed in a paging environment.

An address is considered to have the form (p, d),

where p is the number of the page containing the location and d is the displacement of the
location from the start of the page. These parameters are derived from the actual memory address
by subdividing it into two portions, representing the respective page and displacement values.
E.g., with a 16-bit address, bits 15-11 hold the page number and bits 10-0 hold the displacement.

The page number uses the top 5 bits and therefore has a value range of 0-31. The displacement
uses the remaining 11 bits and therefore has a range of 0-2047. This means that a system based
on this scheme would have 32 pages, each of 2048 locations.
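The split for this 16-bit scheme is two bit operations (a sketch; the function name is ours):

```python
# Split a 16-bit address into (p, d): top 5 bits are the page number,
# low 11 bits are the displacement within the page.
PAGE_BITS = 11                         # 2**11 = 2048 locations per page

def split_address(addr):
    p = addr >> PAGE_BITS              # page number, range 0-31
    d = addr & ((1 << PAGE_BITS) - 1)  # displacement, range 0-2047
    return p, d

print(split_address(0x1234))  # -> (2, 564): 0x1234 = 2*2048 + 564
```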
To solve the relocation problem, we observe that when a process page is positioned in some
memory page frame, the page number portion of the address changes but the displacement
remains constant. Hence, relocation reduces simply to converting a process page number to a
memory page frame number. This is accomplished using a page table, which has one entry for
each possible page number and contains the corresponding memory page frame number. The
overall conversion process is shown below in figure 7h.

[Fig. 7h: The page number p indexes the page table to obtain a memory page frame number,
which is combined with the displacement d to form the converted address.]
Note that the process page number is used to index the page table, which in effect is an array of
memory page frame numbers.
Paging is impressed upon the physical form of the process and is independent of the programme's
structure.
Paging is used to improve memory utilisation by avoiding fragmentation.
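The full page-table conversion can be sketched as follows; the table contents are invented for illustration, and a real MMU does this in hardware on every reference:

```python
# Page-table translation: the page number indexes the table (an array of
# frame numbers, as in fig. 7h) and the displacement is carried unchanged.
PAGE_SIZE = 2048                       # 11-bit displacement, as above

page_table = [7, 3, 0, 9]              # page_table[p] = memory frame number

def translate(addr):
    p, d = divmod(addr, PAGE_SIZE)     # split into page number, displacement
    frame = page_table[p]              # relocate: process page -> frame
    return frame * PAGE_SIZE + d

print(translate(1 * PAGE_SIZE + 100))  # page 1 -> frame 3: 3*2048+100 = 6244
```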

2.2 Simple Segmentation


Paging achieves its objective by subdividing a process into a number of fixed-size chunks.
Segmentation presents an alternative method of dividing a process, using variable-length chunks.
Segmentation is similar in some ways to the variable partition method of allocation, except that a
process can be loaded into several partitions, called segments. Because these can be
independently positioned in memory, segmentation can provide more efficient utilisation of
free-space areas.

[Figs. 8a-8c: Process A consists of three segments and process B of four segments; in main
memory the segments of A and B are placed independently, interleaved with each other and with
free space.]

Segment Addressing
The segment address consists of two parts, namely the segment reference s and the displacement
d within that segment, which are derived from a subdivision of the bits of the logical address.
The segment reference indexes a process segment table whose entries specify the base address
and the size of each segment.

A segmented address reference requires the following steps:

 Extract the segment number and the displacement from the logical address.
 Use the segment number to index the segment table, to obtain the segment base address
and length.
 Check that the displacement is not greater than the segment length; if it is, an invalid
address is signalled.
 Generate the required physical address by adding the displacement to the base address.
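The four steps above can be sketched directly; the bit split and segment table contents below are invented assumptions for the example:

```python
# Segmented address translation, following the four steps above.
# Assumption for the sketch: low 12 bits of the logical address are the
# displacement, the remaining high bits are the segment number.
SEG_BITS = 12

segment_table = [(0, 1000), (4096, 2000), (8192, 500)]  # (base, length)

def translate(logical_addr):
    s = logical_addr >> SEG_BITS                 # step 1: extract s and d
    d = logical_addr & ((1 << SEG_BITS) - 1)
    base, length = segment_table[s]              # step 2: index the table
    if d >= length:                              # step 3: bounds check
        raise MemoryError("invalid address: displacement exceeds length")
    return base + d                              # step 4: base + displacement

print(translate((1 << SEG_BITS) + 300))  # segment 1 -> 4096 + 300 = 4396
```

Unlike the paging case, the displacement must be checked against a per-segment length, because segments vary in size.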
[Fig. 8d: The segment reference in the logical address indexes the segment table to obtain the
segment's length and base address; the displacement is checked against the length and then
added to the base address to form the physical address.]

Segmentation reflects the logical structure of the process.


Segmentation improves allocation while preserving the process structure.
