
OPERATING SYSTEMS HANDOUT 9

CPU Scheduling Continuation…

Round Robin Algorithm (RR)

It is a preemptive version of FCFS algorithm. The one that enters the Ready queue first gets to be
executed by the CPU first but is given a time limit. This limit is called time quantum or time slice. The
process will enter the rear of the queue after its time slice expires.

Example:

A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.

Process ID Arrival Time CPU Burst

A 0 8
B 3 4
C 4 5
D 6 3
E 10 2

Assume that the time slice is 3.

All values are in milliseconds.

Solution:

a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. At t = 3, process B
arrives and process A consumes its first time slice. Process A goes back to the ready queue.

b. Processes B (CPU burst of 4) and A (CPU burst of 5) are inside the ready queue (in that order) by
the time process A finishes executing at t = 3. The CPU scheduler will select process B to execute next.
Process B will start executing at t = 3. Process C arrives at the ready queue at t = 4. At t = 6, process D
arrives and process B consumes its first time slice. Process B goes back to the ready queue.

c. Processes A (CPU burst of 5), C (CPU burst of 5), D (CPU burst of 3), and B (CPU burst of 1) are
inside the ready queue (in that order) by the time process B finishes executing at t = 6. The CPU
scheduler will select process A to execute next. Process A will start executing at t = 6 and ends after its
second time slice at t = 9. Process A goes back to the ready queue.

pg. 1 Prepared by: Sir Jerome Saturno



d. Processes C (CPU burst of 5), D (CPU burst of 3), B(CPU burst of 1), and A (CPU burst of 2) are
inside the ready queue (in that order) by the time process A finishes executing at t = 9. The CPU
scheduler will select process C to execute next. Process E arrives at the ready queue at t = 10. At t = 12,
process C consumes its first time slice. Process C goes back to the ready queue.

e. Processes D (CPU burst of 3), B (CPU burst of 1), A (CPU burst of 2), E (CPU burst of 2), and C
(CPU burst of 2) are inside the ready queue (in that order) by the time process C finishes executing at t =
12. The CPU scheduler will select process D to execute next. Process D will start executing at t = 12 and
ends at t = 15.

f. Processes B (CPU burst of 1), A (CPU burst of 2), E (CPU burst of 2), and C (CPU burst of 2) are
inside the ready queue (in that order) by the time process D finishes executing at t = 15. The CPU
scheduler will select process B to execute next. Process B will start executing at t = 15 and ends at t = 16.

g. Processes A (CPU burst of 2), E (CPU burst of 2), and C (CPU burst of 2) are inside the ready
queue (in that order) by the time process B finishes executing at t = 16. The CPU scheduler will select
process A to execute next. Process A will start executing at t = 16 and ends at t = 18.

h. Processes E (CPU burst of 2) and C (CPU burst of 2) are inside the ready queue (in that order) by
the time process A finishes executing at t = 18. The CPU scheduler will select process E to execute next.
Process E will start executing at t = 18 and ends at t = 20.


i. Process C (CPU burst of 2) is the only process remaining inside the ready queue at t = 20. The
CPU scheduler will select process C to execute next. Process C will start executing at t = 20 and ends at t
= 22.

The waiting times for each of the five processes are:

WTA = (0 - 0) + (6 - 3) + (16 - 9) = 10 ms

WTB = (3 - 3) + (15 - 6) = 9 ms

WTC = (9 - 4) + (20 - 12) = 13 ms

WTD = 12 - 6 = 6 ms

WTE = 18 - 10 = 8 ms

average waiting time

= (10 + 9 + 13 + 6 + 8)/5 = 46/5 = 9.2 ms

The turnaround times for each of the five processes are:

TAA = 18 - 0 = 18 ms

TAB = 16 - 3 = 13 ms

TAC = 22 - 4 = 18 ms

TAD = 15 - 6 = 9 ms

TAE = 20 - 10 = 10 ms

average turnaround time

= (18 + 13 + 18 + 9 + 10)/5 = 68/5 = 13.6 ms
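The walkthrough above can be reproduced with a short simulation. The sketch below is illustrative, not from the handout; it follows the convention the example uses: when a process's slice expires at the same instant a new process arrives, the arrival enters the ready queue first, and the preempted process goes to the rear behind it.

```python
from collections import deque

def round_robin(procs, quantum):
    # procs: list of (pid, arrival, burst), in any order
    procs = sorted(procs, key=lambda p: p[1])
    burst = {pid: b for pid, _, b in procs}
    arrive = {pid: a for pid, a, _ in procs}
    left = dict(burst)
    finish, ready, t, i = {}, deque(), 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= t:   # admit arrivals
            ready.append(procs[i][0]); i += 1
        if not ready:                                # CPU idle until next arrival
            t = procs[i][1]
            continue
        pid = ready.popleft()
        run = min(quantum, left[pid])
        t += run
        left[pid] -= run
        # arrivals during (and at the end of) this slice enter the queue first
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0]); i += 1
        if left[pid]:
            ready.append(pid)                        # slice expired: rear of queue
        else:
            finish[pid] = t
    wt = {p: finish[p] - arrive[p] - burst[p] for p in burst}
    ta = {p: finish[p] - arrive[p] for p in burst}
    return wt, ta

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
wt, ta = round_robin(procs, quantum=3)   # averages: 9.2 ms and 13.6 ms
```

Changing the tie-breaking convention (preempted process ahead of a simultaneous arrival) would change the schedule, which is why textbooks state it explicitly.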


The RR scheduling algorithm guarantees that each process gets a fair share of CPU time. It prioritizes fast
response time rather than waiting time and turnaround time. This algorithm is ideal for multitasking
systems.

Caution must be observed in deciding the time slice since it directly affects the number of context
switches that will occur. Time slices that are too small will result in many context switches, while time
slices that are too large will make RR perform similarly to FCFS. It is recommended that the time slice be
longer than 80% of all the CPU burst times.

5. Priority Scheduling

This algorithm may be non-preemptive or preemptive. The CPU scheduler chooses the process with the
highest priority to be executed next. Each process is assigned a priority which is usually expressed as an
integer. In this discussion, the lower the value of the integer, the higher its priority.

In preemptive priority scheduling, the currently executing process is preempted if a higher priority
process arrives in the ready queue. The preempted process will enter the rear of the queue. If two or
more processes have the same priority, the FCFS algorithm may be used.

Example:

A set of processes with their respective arrival times at the ready queue and the length of their next CPU
burst are given below.

Solution using non-preemptive algorithm:

a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst
of 8 so it will end at t = 8.

b. Processes B, C, and D are inside the ready queue (in that order) by the time process A finishes
executing at t = 8. The CPU scheduler will select process C to execute next since it has the highest
priority among the three. It will start executing at t = 8. It has a CPU burst of 5 so it will end at t = 13.


c. Processes B, D, and E are inside the ready queue (in that order) by the time process C finishes
executing at t = 13. The CPU scheduler will select between D and E because both have the same priority.

Since process D arrived first, it will be executed next. It will start executing at t = 13. It has a CPU burst of
3 so it will end at t = 16.

d. Processes B and E remain inside the ready queue (in that order) at t = 16. The CPU scheduler will
select process E to execute next. It will start executing at t = 16. It has a CPU burst of 2 so it will end at t
= 18.

e. Process B is the only process remaining inside the ready queue at t = 18. The CPU scheduler will
select process B to execute next. It will start executing at t = 18. It has a CPU burst of 4 so it will end at t
= 22.

The waiting times for each of the five processes are:

WTA = 0 - 0 = 0 ms

WTB = 18 - 3 = 15 ms

WTC = 8 - 4 = 4 ms

WTD = 13 - 6 = 7 ms

WTE = 16 - 10 = 6 ms

average waiting time

= (0 + 15 + 4 + 7 + 6)/5 = 32/5 = 6.4 ms


The turnaround times for each of the five processes are:

TAA = 8 - 0 = 8 ms

TAB = 22 - 3 = 19 ms

TAC = 13 - 4 = 9 ms

TAD = 16 - 6 = 10 ms

TAE = 18 - 10 = 8 ms

average turnaround time

= (8 + 19 + 9 + 10 + 8)/5 = 54/5 = 10.8 ms

Solution using preemptive algorithm:

a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst
of 8 but it cannot be assumed that it will end at t = 8.

b. Process B arrives at the ready queue at t = 3. Since process B has a higher priority, it will
preempt process A. Process A goes back to the ready queue. Process B will start executing at t = 3 but it
cannot be assumed that it will end at t = 7.

c. Process C arrives at the ready queue at t = 4. Since process C has a higher priority, it will
preempt process B. Process B goes back to the ready queue. Process C will start executing at t = 4. Since
it has the highest priority, it is safe to assume that it will end at t = 9.

d. Processes A (CPU burst of 5), B (CPU burst of 3), and D (CPU burst of 3) are inside the ready
queue (in that order) by the time process C finishes executing at t = 9. The CPU scheduler will select
process D to execute next. Process D will start executing at t = 9 but it cannot be assumed that it will end
at t = 12.

e. Process E arrives at the ready queue at t = 10. Since E has the same priority as D, it cannot
preempt D. Process D will continue to execute and it will end at t = 12.


f. Processes A (CPU burst of 5), B (CPU burst of 3), and E (CPU burst of 2) are inside the ready
queue (in that order) by the time process D finishes executing at t = 12. These will be executed
according to their priorities: process E from t = 12 to 14, process B from t = 14 to 17, and process A
from t = 17 to 22.

The waiting times for each of the five processes are:

WTA = (0 - 0) + (17 - 3) = 14 ms

WTB = (3 - 3) + (14 - 4) = 10 ms

WTC = 4 - 4 = 0 ms

WTD = (9 - 6) = 3 ms

WTE = (12 - 10) = 2 ms

average waiting time

= (14 + 10 + 0 + 3 + 2)/5 = 29/5 = 5.8 ms

The following are the turnaround times for each of the five processes:

TAA = 22 - 0 = 22 ms

TAB = 17 - 3 = 14 ms

TAC = 9 - 4 = 5 ms

TAD = 12 - 6 = 6 ms

TAE = 14 - 10 = 4 ms

average turnaround time

= (22 + 14 + 5 + 6 + 4)/5 = 51/5 = 10.2 ms
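The preemptive walkthrough can be checked with a small event-driven simulation. Since the example's priority table did not survive extraction, the sketch below assumes priority values consistent with the steps above — C = 1, D = E = 2, B = 3, A = 4, with a lower value meaning a higher priority — so the specific numbers are an assumption, not from the handout. An arrival with a merely equal priority does not preempt the running process.

```python
import heapq

def preemptive_priority(procs):
    # procs: list of (pid, arrival, burst, priority); lower value = higher priority.
    # The heap key (priority, admission order, pid) breaks priority ties FCFS and
    # lets the running process keep its seniority over an equal-priority arrival.
    procs = sorted(procs, key=lambda p: p[1])
    left = {pid: b for pid, _, b, _ in procs}
    finish, ready, t, i = {}, [], 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= t:   # admit arrivals
            pid, _, _, pr = procs[i]
            heapq.heappush(ready, (pr, i, pid))
            i += 1
        if not ready:                                # CPU idle until next arrival
            t = procs[i][1]
            continue
        pr, order, pid = heapq.heappop(ready)
        next_arrival = procs[i][1] if i < len(procs) else float("inf")
        run = min(left[pid], next_arrival - t)       # run until done or next arrival
        t += run
        left[pid] -= run
        if left[pid] == 0:
            finish[pid] = t
        else:
            heapq.heappush(ready, (pr, order, pid))  # re-queue, keeping seniority
    return finish

# assumed priorities: C=1, D=E=2, B=3, A=4 (lower = higher priority)
procs = [("A", 0, 8, 4), ("B", 3, 4, 3), ("C", 4, 5, 1), ("D", 6, 3, 2), ("E", 10, 2, 2)]
finish = preemptive_priority(procs)   # completion times match the walkthrough
```

With these assumed priorities the completion times come out as C = 9, D = 12, E = 14, B = 17, and A = 22, matching the turnaround times computed above.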


In priority scheduling, the waiting time and turnaround time of higher-priority processes are minimized.

However, this may lead to starvation, wherein low-priority processes may wait indefinitely in the ready
queue. To solve this, a method called aging can be used, wherein the priority of a process gradually
increases the longer it stays in the ready queue.

6. Multilevel Feedback Queues

This may be used by the operating system to determine the behavior pattern of a process. It uses
several ready queues. Each of these queues will have different priorities. The figure below shows an
example of a multilevel feedback queue.


Based on the given figure, queue Q0 has the highest priority while Qn has the lowest. The queues
follow the FCFS algorithm except for the lowest queue, which follows the RR algorithm. Each queue is
assigned a time slice that increases as its priority decreases.

All processes enter at the rear of the queue with the highest priority. A process will be moved to the
next lower-priority queue if it does not complete its execution or request an I/O operation within the
given time slice. Processes in a lower-priority queue can only execute if there are no processes in the
higher-priority queues. An executing process in a lower-priority queue can also be preempted if a new
process arrives at a higher-priority queue.

Once the highest-priority queue is empty, processes in the next lower-priority queue will be scheduled
for execution by the operating system. Since the time slice is increased, this gives them more time to
complete their execution. This repeats until processes reach the lowest-priority queue.

When processes reach the lowest-priority queue, they will use the RR scheduling algorithm.
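The demotion rule above can be sketched with a toy simulator. Everything in this sketch is illustrative rather than from the handout: a three-level setup with assumed quanta of 2, 4, and 8 ms, all jobs arriving at t = 0, and no preemption of a running job by new arrivals — a deliberate simplification of the figure's full behavior.

```python
from collections import deque

def mlfq(jobs, quanta=(2, 4, 8)):
    # jobs: dict pid -> CPU burst; all jobs are assumed to arrive at t = 0.
    # A job that uses its whole slice without finishing is demoted one level;
    # the scheduler always serves the highest non-empty queue; the bottom
    # queue behaves as plain round robin.
    queues = [deque() for _ in quanta]
    left = dict(jobs)
    for pid in jobs:
        queues[0].append(pid)          # everyone starts in the top queue
    t, order = 0, []
    while any(queues):
        lvl = next(q for q in range(len(queues)) if queues[q])
        pid = queues[lvl].popleft()
        run = min(quanta[lvl], left[pid])
        t += run
        left[pid] -= run
        if left[pid] == 0:
            order.append((pid, t))     # record completion time
        elif lvl + 1 < len(queues):
            queues[lvl + 1].append(pid)  # used the full slice: demote
        else:
            queues[lvl].append(pid)      # bottom queue: rear, round robin
    return order

order = mlfq({"io": 1, "cpu": 20})     # short job finishes at t = 1;
                                        # the CPU-bound job sinks to the bottom
```

The short burst finishes inside the top queue's slice, while the 20 ms job is demoted level by level — which is exactly how the feedback network separates I/O-bound from CPU-bound behavior.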

ITP109 (Platform Technologies)

Multilevel feedback queues favor I/O-bound processes. They can isolate CPU-bound processes from
I/O-bound processes since the former tend to stay in the queuing network.

CPU Scheduling

Non-preemptive and Preemptive CPU Scheduling

CPU scheduling may be classified as:

1. Non-preemptive Scheduling

In non-preemptive scheduling, the CPU cannot be taken away from its currently executing process. The only time the CPU scheduler can assign
the CPU to another process in the ready queue is when the currently executing process terminates or enters into a Blocked state.

2. Preemptive Scheduling

In preemptive scheduling, the CPU can be taken away from its currently executing process. The currently executing process is sent back to the
ready queue and the CPU scheduler assigns the CPU to another process. This will happen if:

a. An interrupt occurred, so the current process has to stop executing. The CPU is then assigned to execute the ISR of the requesting device
or process.

b. The priority of the process that enters the ready queue is higher than that of the currently executing process.

The CPU is then assigned to execute the higher-priority process.

c. The time limit of a process for using the CPU has been exceeded. The CPU is then assigned to execute another process even though the
running process has not yet completed its current CPU burst.


Preemptive scheduling is ideally used for interactive or real-time computing systems. Non-preemptive scheduling is good for batch processing
systems only. However, non-preemptive scheduling has fewer context switches than preemptive scheduling; therefore, the former has less
overhead.

CPU Scheduling Algorithms

The following performance criteria should be optimized by a good scheduler:

1. CPU Utilization

The CPU must be busy doing useful work at all times.

2. Throughput

The amount of work done by the CPU should be maximized.

3. Turnaround Time

The time between the point a process is submitted and the time it finishes executing is minimized.

4. Response Time

The time between the submission of a request and the start of the system’s first response is minimized.

5. Waiting Time

The time a process has to spend inside the ready queue waiting to be executed by the CPU is minimized. Many consider this a measure of
how good a scheduling algorithm is since waiting time is directly affected by the order of process execution.


Note that a scheduling algorithm cannot optimize all the performance criteria because of conflicts. Some criteria might be optimized while
others are compromised. The scheduling algorithm may select to optimize only one or two criteria, which depends on the type of computer
system being used.

Fairness, meaning all processes will be given equal opportunity to use the CPU, is another performance criterion.

It is hard to measure, so it was not included in the above list.

The different CPU scheduling algorithms are:

1. First-Come, First-Served Algorithm (FCFS)

It is a non-preemptive scheduling algorithm wherein the one that enters the Ready queue first gets to be executed by the CPU first. In choosing
the next process to be executed, the CPU scheduler selects the process at the front of the Ready queue. Processes enter at the rear of the Ready
queue.

Example:

A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.

All values are in milliseconds.

In the succeeding solutions, Gantt charts are used to illustrate the time each process starts and ends executing.


Solution:

a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 so it will end at t = 8.

b. Processes B, C, and D are inside the ready queue (in that order) by the time process A finishes executing at t = 8. The CPU scheduler will
select process B to execute next. It will start executing at t = 8. It has a CPU burst of 4 so it will end at t = 12.

c. Processes C, D, and E are inside the ready queue (in that order) by the time process B finishes executing at t = 12. The CPU scheduler will
select process C to execute next. It will start executing at t = 12.
It has a CPU burst of 5 so it will end at t = 17.


d. Processes D and E remain inside the ready queue (in that order) at t = 17. The CPU scheduler will select process D to execute next. It will
start executing at t = 17. It has a CPU burst of 3 so it will end at t = 20.

e. Process E is the only process remaining inside the ready queue at t = 20. The CPU scheduler will select process E to execute next. It will
start executing at t = 20. It has a CPU burst of 2 so it will end at t = 22.

The waiting time of each process is computed as

WT = time left queue - time entered queue

The waiting times for each of the five processes are:

WTA = 0 - 0 = 0 ms

WTB = 8 - 3 = 5 ms

WTC = 12 - 4 = 8 ms

WTD = 17 - 6 = 11 ms

WTE = 20 - 10 = 10 ms

average waiting time

= (0 + 5 + 8 + 11 + 10)/5 = 34/5 = 6.8 ms


In the FCFS algorithm, if processes with longer CPU bursts arrive at the ready queue ahead of shorter processes, the waiting times of the
shorter processes will be large. This increases the average waiting time of the system.

The turnaround time for each process is computed as

TA = time of completion - arrival time

The turnaround times for each of the five processes are:

TAA = 8 - 0 = 8 ms

TAB = 12 - 3 = 9 ms

TAC = 17 - 4 = 13 ms

TAD = 20 - 6 = 14 ms

TAE = 22 - 10 = 12 ms

average turnaround time

= (8 + 9 + 13 + 14 + 12)/5 = 56/5 = 11.2 ms
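Because FCFS never preempts, each process simply starts as soon as it has arrived and the CPU is free, which makes the computation above easy to sketch in code. The function below is illustrative, not part of the handout, and reproduces the waiting and turnaround times of the example.

```python
def fcfs(procs):
    # procs: list of (pid, arrival, burst); run strictly in arrival order
    t, wt, ta = 0, {}, {}
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(t, arrival)        # CPU may sit idle until the next arrival
        wt[pid] = start - arrival      # time spent waiting in the ready queue
        t = start + burst              # runs to completion: no preemption
        ta[pid] = t - arrival          # completion time minus arrival time
    return wt, ta

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
wt, ta = fcfs(procs)   # averages: 6.8 ms and 11.2 ms
```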

The FCFS scheduling algorithm is simple. Since choosing the next process to be executed is straightforward, the execution of the CPU scheduler
can be very fast.


However, FCFS favors CPU-bound processes, which in effect generally yields a high average waiting time. CPU-bound processes tend to enter
the ready queue often and have long CPU burst times. I/O-bound processes spend most of their time in the Blocked state. When I/O-bound
processes enter the ready queue, CPU-bound processes are already there. This can cause I/O-bound processes to wait for a long period of time
and stack up at the rear of the ready queue. This is called the convoy effect. Because of this, the FCFS scheduling algorithm is not suitable for
time-sharing and interactive computer systems.

2. Shortest Process First Algorithm (SPF)

This algorithm is also called Shortest Job First (SJF). It is a non-preemptive scheduling algorithm wherein the process with the shortest CPU burst
time is the one that will be executed first. If two or more processes have the same CPU burst time, the FCFS algorithm may be used.

Example:

A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.

Process ID Arrival Time CPU Burst


A 0 8
B 3 4
C 4 5
D 6 3
E 10 2
All values are in milliseconds.

Solution:

a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 and, since SPF is non-preemptive, it
will run to completion and end at t = 8.

b. Processes B, C, and D are inside the ready queue (in that order) by the time process A finishes executing at t = 8. The CPU scheduler will
select process D to execute next. It will start executing at t = 8. It has a CPU burst of 3 so it will end at t = 11.

c. Processes B, C, and E are inside the ready queue (in that order) by the time process D finishes executing at t = 11. The CPU scheduler will
select process E to execute next. It will start executing at t = 11. It has a CPU burst of 2 so it will end at t = 13.

d. Processes B and C remain inside the ready queue (in that order) at t = 13. The CPU scheduler will select process B to execute next. It will
start executing at t = 13. It has a CPU burst of 4 so it will end at t = 17.


e. Process C is the only process remaining inside the ready queue at t = 17. The CPU scheduler will select process C to execute next. It will
start executing at t = 17. It has a CPU burst of 5 so it will end at t = 22.

The waiting times for each of the five processes are:

WTA = 0 - 0 = 0 ms

WTB = 13 - 3 = 10 ms

WTC = 17 - 4 = 13 ms

WTD = 8 - 6 = 2 ms

WTE = 11 - 10 = 1 ms

average waiting time

= (0 + 10 + 13 + 2 + 1)/5 = 26/5 = 5.2 ms

The turnaround times for each of the five processes are:

TAA = 8 - 0 = 8 ms

TAB = 17 - 3 = 14 ms

TAC = 22 - 4 = 18 ms

TAD = 11 - 6 = 5 ms

TAE = 13 - 10 = 3 ms


average turnaround time

= (8 + 14 + 18 + 5 + 3)/5 = 48/5 = 9.6 ms
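The selection rule used above — shortest next CPU burst, with FCFS breaking ties — can be sketched as follows. This is illustrative code, not from the handout; it reproduces the example's waiting and turnaround times.

```python
def spf(procs):
    # procs: list of (pid, arrival, burst); non-preemptive shortest job next
    procs = sorted(procs, key=lambda p: p[1])
    done, t, wt, ta = set(), 0, {}, {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[1] <= t and p[0] not in done]
        if not ready:                  # CPU idle until something arrives
            t = min(p[1] for p in procs if p[0] not in done)
            continue
        # shortest CPU burst first; earlier arrival (FCFS) breaks ties
        pid, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        wt[pid] = t - arrival
        t += burst                     # runs to completion: no preemption
        ta[pid] = t - arrival
        done.add(pid)
    return wt, ta

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
wt, ta = spf(procs)   # averages: 5.2 ms and 9.6 ms
```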

SPF favors I/O-bound processes. It is optimal in terms of waiting time and turnaround time since shorter processes are executed ahead of
longer ones.

However, the exact CPU burst of each process is practically impossible to determine. A common workaround is exponential averaging, in which
the next CPU burst of a process is predicted based on its past CPU bursts. Therefore, the operating system cannot predict the CPU burst time if
there is no historical data. A solution to this can be found in Multilevel Feedback Queues. This will be discussed in succeeding
topics.
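Exponential averaging takes only a few lines. The recurrence is tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the most recent actual burst and tau_n the previous prediction. In the sketch below, the smoothing factor alpha and the initial guess tau0 are assumed example values, not numbers from the handout.

```python
def predict_next_burst(history, alpha=0.5, tau0=10):
    # exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    # alpha and tau0 are assumed example values, not from the handout
    tau = tau0                 # initial guess used when there is no history
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# with observed bursts of 6 then 4 ms, the prediction moves 10 -> 8.0 -> 6.0
prediction = predict_next_burst([6, 4])
```

A larger alpha weights recent behavior more heavily; alpha = 0 ignores history entirely, while alpha = 1 uses only the last burst.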

Since SPF favors I/O-bound processes, it may take time for CPU-bound processes to be executed. And if a very large CPU-bound process
finally gets executed, it will monopolize the CPU since SPF is non-preemptive. Because of this, the SPF scheduling algorithm may not be suitable for
time-sharing and interactive computer systems since fast response times are not always guaranteed.

3. Shortest Remaining Time First Algorithm (SRTF)

It is a preemptive version of SPF algorithm. The currently executing process is preempted if a process with shorter CPU burst time than the
remaining CPU burst time of the running process arrives in the ready queue. The preempted process will enter the rear of the queue.

Example:

A set of processes with their respective arrival times at the ready queue and the length of their next CPU burst are given below.

All values are in milliseconds.

Solution:

a. Process A arrives at the ready queue at t = 0 and will start executing at t = 0. It has a CPU burst of 8 but it cannot be assumed that it will
end at t = 8.

b. Process B arrives at the ready queue at t = 3. Process B has a CPU burst of 4 while process A has a remaining CPU burst of 5. Since
process B is shorter, it will preempt process A. Process A goes back to the ready queue. Process B will start executing at t = 3 but it cannot be
assumed that it will end at t = 7.

c. Process C arrives at the ready queue at t = 4. Process C has a CPU burst of 5 while process B has a remaining CPU burst of 3. Since
process B is shorter, it will continue its execution. Process D arrives at the ready queue at t = 6. Process D has a CPU burst of 3 while process B
has a remaining CPU burst of 1. Since process B is shorter, it will continue its execution. Process B will end at t = 7.

d. Processes A (CPU burst of 5), C (CPU burst of 5), and D (CPU burst of 3) are inside the ready queue (in that order) by the time process B
finishes executing at t = 7. The CPU scheduler will select process D to execute next. Process D will start executing at t = 7 but it cannot be
assumed that it will end at t = 10.

e. Process E arrives at the ready queue at t = 10, at the same moment process D finishes executing.

f. Processes A (CPU burst of 5), C (CPU burst of 5), and E (CPU burst of 2) are inside the ready queue (in that order) by the time process D
finishes executing at t = 10. The CPU scheduler will select process E to execute next. Process E will start executing at t = 10. Since no new process
arrives, it will end at t = 12.

g. Processes A and C remain inside the ready queue (in that order) at t = 12. Process A and C have the same CPU burst time. Since process
A is ahead of process C, the CPU scheduler will select process A to execute next. It will start executing at t = 12. It has a CPU burst of 5 so it will
end at t = 17.


h. Process C is the only process remaining inside the ready queue at t = 17. The CPU scheduler will select process C to execute next. It will
start executing at t = 17. It has a CPU burst of 5 so it will end at t = 22.

The waiting times for each of the five processes are:

WTA = (0 - 0) + (12 - 3) = 9 ms

WTB = 3 - 3 = 0 ms

WTC = 17 - 4 = 13 ms

WTD = 7 - 6 = 1 ms

WTE = 10 - 10 = 0 ms

average waiting time

= (9 + 0 + 13 + 1 + 0)/5 = 23/5 = 4.6 ms

The turnaround times for each of the five processes are:



TAA = 17 - 0 = 17 ms

TAB = 7 - 3 = 4 ms

TAC = 22 - 4 = 18 ms

TAD = 10 - 6 = 4 ms

TAE = 12 - 10 = 2 ms

average turnaround time

= (17 + 4 + 18 + 4 + 2)/5 = 45/5 = 9.0 ms
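The SRTF schedule above can be checked with a small simulation — an illustrative sketch, not from the handout. The scheduler re-evaluates the shortest remaining time at every arrival and breaks ties by earlier arrival, as in step g.

```python
def srtf(procs):
    # procs: list of (pid, arrival, burst); preemptive shortest remaining time
    procs = sorted(procs, key=lambda p: p[1])
    arrive = {pid: a for pid, a, _ in procs}
    burst = {pid: b for pid, _, b in procs}
    left = dict(burst)
    finish, ready, t, i = {}, [], 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= t:   # admit arrivals
            ready.append(procs[i][0]); i += 1
        if not ready:                                # CPU idle until next arrival
            t = procs[i][1]
            continue
        # shortest remaining time next; earlier arrival breaks ties
        pid = min(ready, key=lambda p: (left[p], arrive[p]))
        next_arrival = procs[i][1] if i < len(procs) else float("inf")
        run = min(left[pid], next_arrival - t)       # re-decide at each arrival
        t += run
        left[pid] -= run
        if left[pid] == 0:
            ready.remove(pid)
            finish[pid] = t
    wt = {p: finish[p] - arrive[p] - burst[p] for p in finish}
    ta = {p: finish[p] - arrive[p] for p in finish}
    return wt, ta

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
wt, ta = srtf(procs)   # averages: 4.6 ms and 9.0 ms
```

Pausing the running process at every arrival and re-selecting is what makes SRTF preemptive; the extra bookkeeping of `left` is the overhead the next paragraph describes.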

SRTF scheduling algorithm is optimal in terms of minimizing waiting time. It is ideal for interactive systems.

In addition to the problem of determining the exact CPU burst time of processes, another burden is tracking the remaining CPU burst time of
running processes. It is also expected that SRTF will have more context switches compared to FCFS and SPF. For these reasons, the CPU
scheduler will have additional overhead.


Topic 6 - 7: File Management (Part I & II)

Objectives
You will be able to describe:
• The fundamentals of file management and the structure of the file management system
• File-naming conventions, including the role of extensions
• The difference between fixed-length and variable-length record format
• The advantages and disadvantages of contiguous, noncontiguous, and indexed file storage techniques
• Comparisons of sequential and direct file access


Objectives (continued)
You will be able to describe:
• The security ramifications of access control techniques and how they compare
• The role of data compression in file storage

File Management
• File Manager controls every file in system
• Efficiency of File Manager depends on:
– How system's files are organized (sequential, direct, or indexed sequential)
– How they're stored (contiguously, noncontiguously, or indexed)
– How each file's records are structured (fixed-length or variable-length)
– How access to these files is controlled


The File Manager
• File Manager is the software responsible for creating, deleting, modifying, and controlling access to files
– Manages the resources used by files
• Responsibilities of File Managers:
– Keep track of where each file is stored
– Use a policy to determine where and how files will be stored
• Efficiently use available storage space
• Provide efficient access to files

The File Manager (continued)
• Responsibilities of File Managers (continued):
– Allocate each file when a user has been cleared for access to it, then record its use
– Deallocate file when it is returned to storage and communicate its availability to others waiting for it

The File Manager (continued)
• Definitions:
– Field: Group of related bytes that can be identified by user with name, type, and size
– Record: Group of related fields
– File: Group of related records that contains information used by specific application programs to generate reports
• Sometimes called flat file; has no connections to other files
– Database: Groups of related files that are interconnected at various levels to give users flexibility of access to the data stored

The File Manager (continued)
• Program files: Contain instructions
• Data files: Contain data
• Directories: Listings of filenames and their attributes
• Every program and data file accessed by computer system, and every piece of computer software, is treated as a file
• File Manager treats all files exactly the same way as far as storage is concerned

Interacting with the File Manager
• User communicates with File Manager via specific commands that may be:
– Embedded in the user's program
• OPEN, CLOSE, READ, WRITE, and MODIFY
– Submitted interactively by the user
• CREATE, DELETE, RENAME, and COPY
• Commands are device independent
– User doesn't need to know its exact physical location on disk pack or storage medium to access a file

Interacting with the File Manager (continued)
• Each logical command is broken down into sequence of low-level signals that
– Trigger step-by-step actions performed by device
– Supervise progress of operation by testing status
• Users don't need to include in each program the low-level instructions for every device to be used
• Users can manipulate their files by using a simple set of commands (e.g., OPEN, CLOSE, READ, WRITE, and MODIFY)


Typical Volume Configuration
• Volume: Each secondary storage unit (removable or non-removable)
– Each volume can contain many files called multifile volumes
– Extremely large files are contained in many volumes called multivolume files
• Each volume in system is given a name
– File Manager writes name & other descriptive info on an easy-to-access place on each unit

Typical Volume Configuration (continued)
Figure 8.1: Volume descriptor, stored at the beginning of each volume


Typical Volume Configuration (continued)
• Master file directory (MFD): Stored immediately after volume descriptor and lists:
– Names and characteristics of every file in volume
• File names can refer to program files, data files, and/or system files
– Subdirectories, if supported by File Manager
– Remainder of the volume used for file storage

Typical Volume Configuration (continued)
• Disadvantages of a single directory per volume as supported by early operating systems:
– Long time to search for an individual file
– Directory space would fill up before the disk storage space filled up
– Users couldn't create subdirectories
– Users couldn't safeguard their files from other users
– Each program in the directory needed a unique name, even those directories serving many users


About Subdirectories
Subdirectories:
• Semi-sophisticated File Managers create MFD for each volume with entries for files and subdirectories
• Subdirectory created when user opens account to access computer
• Improvement from single directory scheme
• Still can't group files in a logical order to improve accessibility and efficiency of system

About Subdirectories (continued)
Subdirectories:
• Today's File Managers allow users to create subdirectories (Folders)
– Allows related files to be grouped together
• Implemented as an upside-down tree
– Allows system to efficiently search individual directories
• Path to the requested file may lead through several directories

About Subdirectories (continued)
Figure 8.2: File directory tree structure

About Subdirectories (continued)
• File descriptor includes the following information:
– Filename
– File type
– File size
– File location
– Date and time of creation
– Owner
– Protection information
– Record size

File Naming Conventions


• Absolute filename (complete filename): Long name File Organization
that includes all path info
• Relative filename: Short name seen in directory • All files composed of records that are of two types:
listings and selected by user when file is created – Fixed-length records: Easiest to access directly
• Length of relative name and types of characters • Ideal for data files
allowed is OS dependent • Record size critical
• Extension: Identifies type of file or its contents – Variable-length records: Difficult to access directly
– e.g., BAT, COB, EXE, TXT, DOC • Components • Don’t leave empty storage space and don’t truncate any characters
required for a file’s complete name depend on the • Used in files accessed sequentially (e.g., text files, program files) or files using index to access records
operating system • File descriptor stores record format

ITP109 (Platform Technologies) ITP109 (Platform Technologies)

ITP109 (Platform Technologies) ITP109 (Platform Technologies)


File Organization (continued)

Figure 8.4: When data is stored in fixed-length fields (a), data that extends beyond the fixed size is truncated. When data is stored in a variable-length record format (b), the size expands to fit the contents, but it takes more time to access.

Physical File Organization

• The way records are arranged and the characteristics of the medium used to store them
• On magnetic disks, files can be organized as: sequential, direct, or indexed sequential
• Considerations in selecting a file organization scheme:
– Volatility of the data
– Activity of the file
– Size of the file
– Response time

Physical File Organization (continued)

• Sequential record organization: Records are stored and retrieved serially (one after the other)
– Easiest to implement
– The file is searched from its beginning until the requested record is found
– Optimization features may be built into the system to speed the search process
• Select a key field from the record
• Complicates maintenance algorithms
• Original order must be preserved every time records are added or deleted

Physical File Organization (continued)

• Direct record organization: Uses direct access files; can be implemented only on direct access storage devices
– Allows accessing of any record in any order without having to begin the search from the beginning of the file
– Records are identified by their relative addresses (addresses relative to the beginning of the file)
• These logical addresses are computed (using hashing algorithms) when records are stored and again when records are retrieved
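The hashing step above can be sketched in Python. This is a hypothetical digit-sum hash for illustration, not the handout's specific algorithm; the keys and slot count are made-up values. It shows how a key maps to a relative address, and why two different keys may land in the same slot (a collision the File Manager must resolve):

```python
def relative_address(key: str, total_slots: int) -> int:
    """Hash a record key into a relative address (0 .. total_slots - 1)."""
    # Illustrative hash: sum the character codes, then take the remainder.
    return sum(ord(ch) for ch in key) % total_slots

# Different keys usually map to different slots, but nothing guarantees it:
slot_a = relative_address("ADAMS", 100)   # 58
slot_b = relative_address("BAKER", 100)   # 57
```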


Physical File Organization (continued)

• Advantages of direct record organization:
– Fast access to records
– Can be accessed sequentially by starting at the first relative address and incrementing to get to the next record
– Can be updated more quickly than sequential files
– No need to preserve the order of the records, so adding or deleting them takes very little time
• Disadvantages of direct record organization:
– Collisions in the case of similar keys

Physical File Organization (continued)

• Indexed sequential record organization: Generates an index file for record retrieval
– Combines the best of sequential and direct access
– Divides an ordered sequential file into blocks of equal size
– Each entry in the index file contains the highest record key and the physical location of its data block
– Created and maintained through ISAM software
– Advantage: Doesn't create collisions
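The index lookup described above can be sketched as follows. The index entries, key ranges, and block names are hypothetical; each index entry holds the highest record key in a data block plus that block's location, so finding a record means finding the first block whose highest key covers the requested key:

```python
import bisect

# Hypothetical index: (highest key in data block, block location on disk).
# Entries are kept in ascending key order, mirroring the ordered file.
index = [(199, "block-0"), (424, "block-1"), (799, "block-2"), (999, "block-3")]

def find_block(record_key: int) -> str:
    """Return the data block whose key range covers record_key."""
    highest_keys = [entry[0] for entry in index]
    # First block whose highest key is >= the requested key:
    i = bisect.bisect_left(highest_keys, record_key)
    if i == len(index):
        raise KeyError(record_key)     # key beyond the last block
    return index[i][1]
```

Only the small index is searched; the sequential scan is then confined to a single block, which is what makes the scheme a compromise between sequential and direct access.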

Physical Storage Allocation

• The File Manager must work with files not just as whole units but also as logical units, or records
• Records within a file must have the same format, but they can vary in length
• Records are subdivided into fields
• A record's structure is usually managed by application programs, not the OS
• File storage actually refers to record storage

Physical Storage Allocation (continued)

Figure 8.6: Types of records in a file


Contiguous Storage

• Records are stored one after another
– Advantages:
• Any record can be found once the starting address and size are known
• Direct access is easy, as every part of the file is stored in the same compact area
– Disadvantages:
• Files can't be expanded easily, and fragmentation results

Figure 8.7: Contiguous storage

Noncontiguous Storage

• Allows files to use any available disk storage space
• A file's records are stored in a contiguous manner if there is enough empty space
• Any remaining records, and all other additions to the file, are stored in other sections of the disk (extents)
– Linked together with pointers
– Physical size of each extent is determined by the OS (usually 256 bytes)


Noncontiguous Storage (continued)

• File extents are linked in the following ways:
• Linking at the storage level:
– Each extent points to the next one in the sequence
– The directory entry consists of the filename, the storage location of the first extent, the location of the last extent, and the total number of extents, not counting the first
• Linking at the directory level:
– Each extent is listed with its physical address, its size, and a pointer to the next extent
– A null pointer indicates that it is the last one

Figure 8.8: Noncontiguous file storage with linking taking place at the storage level

Figure 8.9: Noncontiguous file storage with linking taking place at the directory level

Noncontiguous Storage (continued)

• Advantage of noncontiguous storage:
– Eliminates external storage fragmentation and the need for compaction
• However:
– Does not support direct access, because there is no easy way to determine the exact location of a specific record


Indexed Storage

• Allows direct record access by bringing the pointers linking every extent of a file into an index block
• Every file has its own index block
– Consists of the addresses of each disk sector that makes up the file
– Lists each entry in the same order in which the sectors are linked
• Supports both sequential and direct access
• Doesn't necessarily improve use of storage space
• Larger files may have several levels of indexes

Indexed Storage (continued)

Figure 8.10: Indexed storage

Access Methods

• Dictated by a file's organization
• The most flexibility is allowed with indexed sequential files and the least with sequential
• A file organized in sequential fashion can support only sequential access to its records
– Records can be of fixed or variable length
• The File Manager uses the address of the last byte read to access the next sequential record
• The current byte address (CBA) must be updated every time a record is accessed

Access Methods (continued)

Figure 8.11: (a) Fixed-length records; (b) Variable-length records

Access Methods (continued)

• Sequential access:
– Fixed-length records:
• CBA = CBA + RL
– Variable-length records:
• CBA = CBA + N + RLk (N is the size of the record-length field; RLk is the length of record k)
• Direct access:
– Fixed-length records:
• CBA = (RN - 1) * RL, where RN is the desired record number
– Variable-length records:
• Virtually impossible, because the address of the desired record can't be easily computed
• The File Manager must do a sequential search through the records
• The File Manager can keep a table of record numbers and their CBAs
• Indexed sequential file:
– Can be accessed either sequentially or directly
– The index file must be searched for the pointer to the block where the data is stored
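The fixed-length formulas above amount to simple arithmetic. A quick check, assuming a record length of 64 bytes (an illustrative value, not from the handout):

```python
RL = 64          # assumed record length in bytes

# Sequential access, fixed-length: next CBA = CBA + RL
cba = 0          # start of the file
cba = cba + RL   # after reading record 1, CBA points at record 2

def cba_of(rn: int, rl: int) -> int:
    """Direct access, fixed-length: CBA of record RN = (RN - 1) * RL."""
    return (rn - 1) * rl

# Record 6 starts 5 full records past the beginning of the file:
start_of_record_6 = cba_of(6, RL)   # 320
```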


Levels in a File Management System

Figure 8.12: File Management System

Levels in a File Management System (continued)

• Each level of the file management system is implemented using structured and modular programming techniques
• Each of the modules can be further subdivided into more specific tasks
• Using the information of the basic file system, the logical file system transforms a record number into its byte address

Levels in a File Management System (continued)

• Verification occurs at every level of the file management system:
– Directory level: the file system checks whether the requested file exists
– The access control verification module determines whether access is allowed
– The logical file system checks whether the requested byte address is within the file's limits
– The device interface module checks whether the storage device exists

Access Control Verification Module

• Each file management system has its own method to control file access
• Types:
– Access control matrix
– Access control lists
– Capability lists
– Lockword control

Access Control Matrix

• Easy to implement
• Works well for systems with few files and few users
• Results in wasted space because of null entries

Table 8.1: Access Control Matrix

Access Control Lists

• A modification of the access control matrix technique
• Each file is entered in the list with the names of the users who are allowed access to it and the type of access permitted

Table 8.2: Access Control List
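An access control list can be sketched as a per-file table that names only the users with access and their permitted access types. The filenames, users, and permission strings below are hypothetical:

```python
# Hypothetical ACL: one entry per file, listing only permitted users.
acl = {
    "payroll.dat": {"ann": "rw", "bob": "r"},
    "report.txt":  {"ann": "rwx"},
}

def may_access(user: str, filename: str, mode: str) -> bool:
    """True only if every requested mode letter appears in the user's entry."""
    allowed = acl.get(filename, {}).get(user, "")   # absent user/file => no access
    return all(m in allowed for m in mode)
```

Unlike a full access control matrix, users with no access simply have no entry, so no space is wasted on null cells.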


Access Control Lists (continued)

• Contains the names of only those users who may use the file; those denied any access are grouped under "WORLD"
• The list is shortened by putting users into categories:
– SYSTEM: Personnel with unlimited access to all files
– OWNER: Absolute control over all files created in own account
– GROUP: All users belonging to the appropriate group have access
– WORLD: All other users in the system

Capability Lists

• Lists every user and the files to which each has access
• Can control access to devices as well as to files

Table 8.3: Capability Lists

Lockwords

• Lockword: Similar to a password, but protects a single file
• Advantages:
– Requires the smallest amount of storage for file protection
• Disadvantages:
– Can be guessed by hackers or passed on to unauthorized users
– Generally doesn't control the type of access to the file
• Anyone who knows the lockword can read, write, execute, or delete the file

Data Compression

• A technique used to save space in files
• Methods for data compression include:
– Records with repeated characters: Repeated characters are replaced with the character and a repetition count
• e.g., ADAMSbbbbbbbbbb => ADAMSb10
• e.g., 300000000 => 3#
– Repeated terms: Compressed by using symbols to represent the most commonly used words
• e.g., in a university's student database, common words like student, course, and grade can be represented by symbols
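The repeated-character method can be sketched as a simple run-length encoder. The minimum run length of 3 is an assumption, and 'b' stands in for the blank padding character in the handout's ADAMS example:

```python
def compress_repeats(text: str, min_run: int = 3) -> str:
    """Replace any run of >= min_run identical characters with char + count."""
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1                           # extend the current run
        run = j - i
        if run >= min_run:
            out.append(text[i] + str(run))   # e.g., 'bbbbbbbbbb' -> 'b10'
        else:
            out.append(text[i] * run)        # short runs are kept as-is
        i = j
    return "".join(out)

compress_repeats("ADAMSbbbbbbbbbb")   # -> 'ADAMSb10'
```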

Data Compression (continued)

• Front-end compression: Each entry takes a given number of characters from the previous entry that they have in common

Table 8.4: Front-end compression

Case Study: File Management in Linux

• All Linux files are organized in directories that are connected to each other in a treelike structure
• Linux specifies five types of files used by the system to determine what the file is to be used for
• Filenames can be up to 255 characters long and contain alphabetic characters, underscores, and numbers
• A filename can't start with a number or a period and can't contain slashes or quotes
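Front-end compression, described earlier in this section, can be sketched as follows. The directory entries are hypothetical (not those of Table 8.4); each entry is stored as the count of characters it shares with the previous entry plus only the differing suffix:

```python
def front_compress(sorted_entries):
    """Encode each entry as (shared-prefix length with previous entry, suffix)."""
    compressed, prev = [], ""
    for entry in sorted_entries:
        n = 0
        while n < min(len(prev), len(entry)) and prev[n] == entry[n]:
            n += 1                           # count the common front characters
        compressed.append((n, entry[n:]))    # store only what differs
        prev = entry
    return compressed

entries = ["smith, betty", "smith, donald", "smith, gino"]
front_compress(entries)
# -> [(0, 'smith, betty'), (7, 'donald'), (7, 'gino')]
```

The technique pays off only when the list is sorted, since adjacent entries then tend to share long prefixes.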


Case Study: File Management in Linux (continued)

• Linux users can obtain file directories:
– By opening the appropriate folder on their desktops
– By using the command shell interpreter and typing commands after the prompt
• Linux allows three types of file permissions: read (r), write (w), and execute (x)
• The Virtual File System (VFS) maintains an interface between file-related system calls and the file management code

Table 8.5: Types of Linux files


Summary

• The File Manager controls every file in the system
• Processes user commands (read, write, modify, create, delete, etc.) to interact with any file
• Manages access control procedures to maintain the integrity and security of the files under its control
• The File Manager must accommodate a variety of file organizations, physical storage allocation schemes, record types, and access methods

Summary (continued)

• Each level of the file management system is implemented with structured and modular programming techniques
• Verification occurs at every level of the file management system
• Data compression saves space in files
• Linux specifies five types of files used by the system
• The VFS maintains an interface between file-related system calls and the file management code

Topic 5
Main Memory Management (Part II)

Objectives
You will be able to describe:
• The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
• The influence that these page allocation methods have had on virtual memory
• The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy


Objectives (continued)

You will be able to describe:
• The mechanics of paging and how a memory allocation scheme determines which pages should be swapped out of memory
• The concept of the working set and how it is used in memory allocation schemes
• The impact that virtual memory had on multiprogramming
• Cache memory and its role in improving system response time

Memory Management: Virtual Memory

• Disadvantages of early schemes:
– Required storing the entire program in memory
– Fragmentation
– Overhead due to relocation
• The evolution of virtual memory helps to:
– Remove the restriction of storing programs contiguously
– Eliminate the need for the entire program to reside in memory during execution

Paged Memory Allocation

• Divides each incoming job into pages of equal size
• Works well if the page size, memory block size (page frames), and size of a disk section (sector, block) are all equal
• Before executing a program, the Memory Manager:
– Determines the number of pages in the program
– Locates enough empty page frames in main memory
– Loads all of the program's pages into them

Paged Memory Allocation (continued)

Figure 3.1: Paged memory allocation scheme for a job of 350 lines


Paged Memory Allocation (continued)

• The Memory Manager requires three tables to keep track of the job's pages:
– Job Table (JT) contains information about:
• Size of the job
• Memory location where its PMT is stored
– Page Map Table (PMT) contains information about:
• Page number and its corresponding page frame memory address
– Memory Map Table (MMT) contains:
• Location of each page frame
• Free/busy status

Table 3.1: A typical Job Table (a) initially has three entries, one for each job in process. When the second job (b) ends, its entry in the table is released and it is replaced by (c), information about the next job that is processed

Paged Memory Allocation (continued)

Job 1 is 350 lines long and is divided into four pages of 100 lines each.

Figure 3.2: Paged memory allocation scheme

Paged Memory Allocation (continued)

• Displacement (offset) of a line: Determines how far away a line is from the beginning of its page
– Used to locate that line within its page frame
• How to determine the page number and displacement of a line:
– Page number = the integer quotient from the division of the job space address by the page size
– Displacement = the remainder of that same division


Paged Memory Allocation (continued)

• Steps to determine the exact location of a line in memory:
– Determine the page number and displacement of the line
– Refer to the job's PMT to find out which page frame contains the required page
– Get the address of the beginning of the page frame by multiplying the page frame number by the page frame size
– Add the displacement (calculated in step 1) to the starting address of the page frame

Paged Memory Allocation (continued)

• Advantages:
– Allows jobs to be allocated in noncontiguous memory locations
• Memory is used more efficiently; more jobs can fit
• Disadvantages:
– Address resolution causes increased overhead
– Internal fragmentation still exists, though only in the last page
– Requires the entire job to be stored in memory
– Size of the page is crucial (not too small, not too large)
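The resolution steps above can be sketched in Python, using the handout's 100-line pages and a hypothetical PMT (the frame numbers are made up for illustration):

```python
PAGE_SIZE = 100  # the handout's example uses pages of 100 lines

# Hypothetical PMT for one job: page number -> page frame number
pmt = {0: 8, 1: 10, 2: 5, 3: 11}

def resolve(job_address: int) -> int:
    """Translate a job-space address into an absolute memory address."""
    page, displacement = divmod(job_address, PAGE_SIZE)   # step 1
    frame = pmt[page]                                     # step 2: PMT lookup
    frame_start = frame * PAGE_SIZE                       # step 3
    return frame_start + displacement                     # step 4

# Line 214 is on page 2 (214 // 100) at displacement 14 (214 % 100);
# page 2 lives in frame 5, so its absolute address is 5*100 + 14 = 514.
resolve(214)
```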

Demand Paging

• Demand paging: Pages are brought into memory only as they are needed, allowing jobs to be run with less main memory
• Takes advantage of the fact that programs are written sequentially, so not all pages are necessary at once. For example:
– User-written error handling modules are processed only when a specific error is detected
– Mutually exclusive modules
– Certain program options are not always accessible

Demand Paging (continued)

• Demand paging made virtual memory widely available
– Can give the appearance of an almost infinite amount of physical memory
• Allows the user to run jobs with less main memory than required in paged memory allocation
• Requires the use of a high-speed direct access storage device that can work directly with the CPU
• How and when the pages are passed (or "swapped") depends on predefined policies


Demand Paging (continued)

• The OS depends on the following tables:
– Job Table
– Page Map Table, with three new fields to determine:
• If the requested page is already in memory
• If the page contents have been modified
• If the page has been referenced recently
– Used to determine which pages should remain in main memory and which should be swapped out
– Memory Map Table

Demand Paging (continued)

Figure 3.5: A typical demand paging scheme. Total job pages are 15, and the number of total page frames is 12.

Demand Paging (continued)

• Swapping process:
– To move in a new page, a resident page must be swapped back into secondary storage; this involves:
• Copying the resident page to the disk (if it was modified)
• Writing the new page into the empty page frame
– Requires close interaction between hardware components, software algorithms, and policy schemes

Demand Paging (continued)

• Page fault handler: The section of the operating system that determines:
– Whether there are empty page frames in memory
• If so, the requested page is copied from secondary storage
– Which page will be swapped out if all page frames are busy
• The decision is directly dependent on the predefined policy for page removal


Demand Paging (continued)

• Thrashing: An excessive amount of page swapping between main memory and secondary storage
– Operation becomes inefficient
– Caused when a page is removed from memory but is called back shortly thereafter
– Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages
– Can happen within a job (e.g., in loops that cross page boundaries)
• Page fault: A failure to find a page in memory

Demand Paging (continued)

• Advantages:
– A job is no longer constrained by the size of physical memory (concept of virtual memory)
– Utilizes memory more efficiently than the previous schemes
• Disadvantages:
– Increased overhead caused by the tables and the page interrupts

Page Replacement Policies and Concepts

• The policy that selects the page to be removed is crucial to system efficiency. Types include:
– First-in first-out (FIFO) policy: Removes the page that has been in memory the longest
– Least recently used (LRU) policy: Removes the page that has been least recently accessed
– Most recently used (MRU) policy
– Least frequently used (LFU) policy

Page Replacement Policies and Concepts (continued)

Figure 3.7: FIFO policy

Figure 3.8: Working of a FIFO algorithm for a job with four pages (A, B, C, D) as it's processed by a system with only two available page frames

Figure 3.9: Working of an LRU algorithm for a job with four pages (A, B, C, D) as it's processed by a system with only two available page frames
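The FIFO and LRU policies can be simulated for a two-frame system like the one in Figures 3.8 and 3.9. The reference string below is hypothetical, not necessarily the one in the figures; with it, LRU produces one fewer fault than FIFO:

```python
from collections import OrderedDict

def count_faults(requests, frames, policy):
    """Count page faults for 'fifo' or 'lru' with a given number of frames."""
    memory = OrderedDict()   # insertion order = age (FIFO) or recency (LRU)
    faults = 0
    for page in requests:
        if page in memory:
            if policy == "lru":
                memory.move_to_end(page)   # refresh recency; FIFO ignores hits
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False) # evict oldest / least recently used
            memory[page] = None
    return faults

requests = ["A", "B", "A", "C", "A", "B", "D", "B", "A", "C", "D"]
count_faults(requests, 2, "fifo")   # 9 faults
count_faults(requests, 2, "lru")    # 8 faults
```

The only difference between the two policies is the `move_to_end` on a hit: FIFO evicts by arrival time regardless of use, while LRU lets hits push a page back from the eviction end.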


Page Replacement Policies and Concepts (continued)

• Efficiency (the ratio of page interrupts to page requests) is slightly better for LRU than for FIFO
• FIFO anomaly: No guarantee that buying more memory will always result in better performance
• In the LRU case, increasing main memory will cause either a decrease in or the same number of interrupts
• LRU can use an 8-bit reference byte and a bit-shifting technique to track the usage of each page currently in memory

Page Replacement Policies and Concepts (continued)

• Initially, the leftmost bit of a page's reference byte is set to 1 and all bits to the right are set to 0
• Each time the page is referenced, the leftmost bit is set to 1
• The reference byte for each page is updated with every time tick

Figure 3.11: Bit-shifting technique in LRU policy
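One common formulation of this bit-shifting technique (often called aging) can be sketched as follows; the pages and reference patterns are hypothetical. On each tick, every reference byte is shifted right so older history fades, and the leftmost bit is set for pages referenced during that tick:

```python
def tick(ref_bytes, referenced):
    """One time tick: shift each page's 8-bit reference byte right,
    then set the leftmost bit for pages referenced during this tick."""
    for page in ref_bytes:
        ref_bytes[page] >>= 1                 # older history moves right
        if page in referenced:
            ref_bytes[page] |= 0b10000000     # leftmost bit = referenced now
    return ref_bytes

pages = {"A": 0, "B": 0, "C": 0}
tick(pages, {"A", "B"})
tick(pages, {"A"})
tick(pages, {"C"})

# Interpreted as unsigned integers, the smallest value marks the page
# that has gone longest without a reference: the LRU candidate.
victim = min(pages, key=pages.get)   # 'B'
```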

The Mechanics of Paging

• Status bit: Indicates whether the page is currently in memory
• Referenced bit: Indicates whether the page has been referenced recently
– Used by LRU to determine which pages should be swapped out
• Modified bit: Indicates whether the page contents have been altered
– Used to determine whether the page must be rewritten to secondary storage when it's swapped out

The Mechanics of Paging (continued)

Table 3.3: Page Map Table for Job 1 shown in Figure 3.5

The Mechanics of Paging (continued)

Table 3.4: Meanings of bits used in the PMT

Table 3.5: Possible combinations of modified and referenced bits

The Working Set

• Working set: The set of pages residing in memory that can be accessed directly without incurring a page fault
– Improves performance of demand paging schemes
– Requires the concept of "locality of reference"


The Working Set (continued)

• The system must decide:
– How many pages compose the working set
– The maximum number of pages the operating system will allow for a working set

Figure 3.12: An example of a time line showing the amount of time required to process page faults

Segmented Memory Allocation

• Each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions
• Main memory is no longer divided into page frames; rather, it is allocated in a dynamic manner
• Segments are set up according to the program's structural modules when the program is compiled or assembled
– Each segment is numbered
– A Segment Map Table (SMT) is generated

Segmented Memory Allocation (continued)

Figure 3.13: Segmented memory allocation. Job 1 includes a main program, Subroutine A, and Subroutine B. It's one job divided into three segments.

Segmented Memory Allocation (continued)

Figure 3.14: The Segment Map Table tracks each segment for Job 1


Segmented Memory Allocation (continued)

• The Memory Manager tracks segments in memory using the following three tables:
– Job Table: Lists every job in process (one for the whole system)
– Segment Map Table: Lists details about each segment (one for each job)
– Memory Map Table: Monitors the allocation of main memory (one for the whole system)
• Segments don't need to be stored contiguously
• The addressing scheme requires a segment number and a displacement

Segmented Memory Allocation (continued)

• Advantages:
– Internal fragmentation is removed
• Disadvantages:
– Difficulty managing variable-length segments in secondary storage
– External fragmentation


Segmented/Demand Paged Memory Allocation

• Subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments. It offers:
– The logical benefits of segmentation
– The physical benefits of paging
• Removes the problems of compaction, external fragmentation, and secondary storage handling
• The addressing scheme requires a segment number, a page number within that segment, and a displacement within that page

Segmented/Demand Paged Memory Allocation (continued)

• This scheme requires the following four tables:
– Job Table: Lists every job in process (one for the whole system)
– Segment Map Table: Lists details about each segment (one for each job)
– Page Map Table: Lists details about every page (one for each segment)
– Memory Map Table: Monitors the allocation of the page frames in main memory (one for the whole system)
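The three-part addressing scheme above can be sketched with hypothetical tables; the 100-unit page size echoes the earlier paging example, and the SMT/PMT contents are made up. Each address walks through two table lookups instead of one:

```python
PAGE_SIZE = 100  # assumed, matching the earlier paging example

# Hypothetical tables: the SMT maps each segment to that segment's PMT;
# each PMT maps a page number to a page frame number.
smt = {
    0: {0: 3, 1: 7},   # segment 0 spans two pages
    1: {0: 2},         # segment 1 fits in one page
}

def translate(segment: int, page: int, displacement: int) -> int:
    """Resolve a (segment, page, displacement) address to a memory address."""
    frame = smt[segment][page]            # SMT lookup, then PMT lookup
    return frame * PAGE_SIZE + displacement

translate(0, 1, 25)   # frame 7 * 100 + 25 = 725
```

The extra lookup per reference is exactly the table-handling overhead the next slide lists as a disadvantage, and the reason associative memory is used to speed it up.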

Segmented/Demand Paged Memory Allocation (continued)

Figure 3.16: Interaction of JT, SMT, PMT, and main memory in a segment/paging scheme

Segmented/Demand Paged Memory Allocation (continued)

• Advantages:
– Large virtual memory
– Segments are loaded on demand
• Disadvantages:
– Table handling overhead
– Memory needed for page and segment tables
• To minimize the number of references, many systems use associative memory to speed up the process
– Its disadvantage is the high cost of the complex hardware required to perform the parallel searches


Virtual Memory

• Allows programs to be executed even though they are not stored entirely in memory
• Requires cooperation between the Memory Manager and the processor hardware
• Advantages of virtual memory management:
– Job size is not restricted to the size of main memory
– Memory is used more efficiently
– Allows an unlimited amount of multiprogramming

Virtual Memory (continued)

• Advantages (continued):
– Eliminates external fragmentation and minimizes internal fragmentation
– Allows the sharing of code and data
– Facilitates dynamic linking of program segments
• Disadvantages:
– Increased processor hardware costs
– Increased overhead for handling paging interrupts
– Increased software complexity to prevent thrashing

Virtual Memory (continued)

Table 3.6: Comparison of virtual memory with paging and segmentation

Cache Memory

• A small, high-speed memory unit that a processor can access more rapidly than main memory
• Used to store frequently used data or instructions
• Movement of data or instructions from main memory to cache memory uses a method similar to that used in paging algorithms
• Factors to consider in designing cache memory:
– Cache size, block size, block replacement algorithm, and rewrite policy


Cache Memory (continued)

Figure 3.17: Comparison of (a) the traditional path used by early computers and (b) the path used by modern computers to connect main memory and the CPU via cache memory

Cache Memory (continued)

Table 3.7: A list of relative speeds and sizes for all types of memory. A clock cycle is the smallest unit of time for a processor.

Case Study: Memory Management in Linux

• Virtual memory in Linux is managed using a three-level table hierarchy

Figure 3.18: Virtual memory management in Linux

Case Study: Memory Management in Linux (continued)

• Case: Main memory consists of 64 page frames; Job 1 requests 15 page frames and Job 2 requests 8 page frames

Figure 3.19: An example of the Buddy algorithm in Linux
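The block sizing behind Figure 3.19 can be sketched as follows. Rounding each request up to the next power of two is general buddy-allocator behavior, not necessarily the exact Linux implementation; applied to the case above, Job 1's request for 15 frames would get a 16-frame block, while Job 2's request for 8 frames fits a block exactly:

```python
def buddy_block(request: int) -> int:
    """Round a page-frame request up to the next power of two,
    as a buddy allocator does when carving memory into blocks."""
    size = 1
    while size < request:
        size *= 2      # block sizes come only in powers of two
    return size

buddy_block(15)   # -> 16 (one frame of internal waste for Job 1)
buddy_block(8)    # -> 8  (Job 2's request is already a power of two)
```

The power-of-two restriction is what lets a freed block be merged cheaply with its "buddy" of the same size.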


Summary

• Paged memory allocation allows efficient use of memory by allocating jobs in noncontiguous memory locations
• Increased overhead and internal fragmentation are problems in paged memory allocation
• A job is no longer constrained by the size of physical memory in the demand paging scheme
• The LRU scheme results in slightly better efficiency than the FIFO scheme
• The segmented memory allocation scheme solves the internal fragmentation problem

Summary (continued)

• Segmented/demand paged memory allocation removes the problems of compaction, external fragmentation, and secondary storage handling
• Associative memory can be used to speed up the process
• Virtual memory allows programs to be executed even though they are not stored entirely in memory
• A job's size is no longer restricted to the size of main memory with the concept of virtual memory
• The CPU can execute instructions faster with the use of cache memory
