
Introduction To Operating System

[Figure: Computer System. Hardware: Input Units, CPU (Control Unit, Processor, Cache Memory), Output Units, Memory (RAM, ROM), Secondary Storage (Magnetic Tapes, Magnetic Disks [Floppy Disks, Hard Disks], Compact Disks). Software: Operating System, Application Programs, Compilers & Assemblers, Programming Languages (High Level, Low Level). All serving the User.]

Q: Explain what is meant by Application Programs?


Sol:
[Figure: software layers, from top to bottom: User, Application Programs, Operating System, Hardware.]
Application programs are programs produced by software companies to help computer users perform useful tasks.

Q: Draw a block diagram showing the computer's internal structure.
Sol:

[Block diagram: the Input Units send data and instructions into Memory (RAM); inside the CPU, the Control Unit fetches instructions and issues control signals, and the ALU (with the Cache Memory) loads data, computes, and stores results back to Memory; results go to the Output Units; Secondary Storage and ROM are also connected to Memory.]

Q: Define the following: RAM, ROM, Cache Memory.
Sol:
RAM:
Random Access Memory.
Volatile (loses its contents when power is off).
Can be modified by the user.

ROM:
Read Only Memory.
Non-volatile.
Contains fixed programs used by the computer.

Cache Memory:

A memory inside the processor chip.


Used to store frequently used data.
Increases the processor speed.

[Figure: the processor chip contains the ALU and the cache memory.]

Q: Write short notes about: Secondary Storage Units.
Sol:

Definition:
Secondary storage units are the units used to store data
permanently.

[Figure: Secondary Storage: Magnetic Tapes, Magnetic Disks (Floppy Disks, Hard Disks), and Compact Disks.]

Magnetic Tapes:
Used to store large volumes of data in large computers (like mainframes) for a long time.
Consist of a plastic film coated with magnetic material (iron oxide).
Advantages:
Compact (can store a huge amount of data).
Economical (low cost).
No loss of data.
Disadvantages:
Sequential storage (data must be accessed in order).

Magnetic Disks:
A surface of metal (in the case of hard disks) or plastic (in floppy disks) coated with magnetic material.
It rotates at a high speed.
Divided into tracks and sectors.

Compact Disks:
CD-ROM (Compact Disk / Read Only Memory).
A plastic surface coated with a reflective material.
A laser beam is used to write on the CD-ROM.
Can store up to 600 megabytes.

Q: Define OS, then what are the different OS goals?
Sol:
Operating System (OS):
Is the program running at all times on the computer to coordinate all computer components (usually called the kernel).
The OS does not perform a useful task by itself, but it creates a suitable environment in which other programs can operate efficiently.
OS Goals:
Convenience for the user.
Efficiency for the system components.

Q: What are the different types of OS?


Sol:

Different Types of OS are:


1. Batch System.
2. Multi-Programming System.
3. Multi-Tasking (Time Sharing) System.
4. Multi-Processor (Parallel) System.
5. Network Operating Systems.
6. Real Time Systems.

Batch System:
Users send their jobs to the computer operator.
The operator organizes the jobs into a set of batches (each containing similar jobs).
Each batch is run separately as a set of jobs.
[Figure: users submit jobs to the operator, who groups them into Batch 1, Batch 2, ...; the CPU runs one batch at a time, then takes the next batch.]

Multi-Programming System:
A number of processes are in memory inside the ready queue waiting for the CPU (there is one user).
Windows uses this concept.
[Figure: Process 1, Process 2 and Process 3 wait in the ready queue in memory; the CPU switches between the processes.]

Multi-Tasking (Time Sharing) System:
Allows a number of users to share the CPU at the same time.
This concept is used in mainframe computers.
[Figure: each of User 1, User 2 and User 3 has its own ready queue of processes in memory; the CPU switches between the users' processes.]

Multi-Processor System:
A system with more than one processor, to maximize the system speed.
[Figure: one ready queue of processes in memory is served by several processors (CPU 1, CPU 2, CPU 3).]

Network System:
Systems that operate networks in order to achieve:

Resource sharing.
Computation speedup.
Load balancing.
Communication between hosts.

[Figure: common network topologies: Star, Bus, Ring.]

Real Time System:
Systems that perform critical tasks.
Real-time systems are divided into hard real-time systems and soft real-time systems.
Hard real-time systems: critical tasks must be performed on time.
Soft real-time systems: critical tasks get priority over other tasks, and keep that priority until they complete.

Q: What are the different activities supported by a modern OS?
Sol:
The different OS functions are:


1. Process Management.
2. Memory Management.
3. File Management.
4. Storage Management.
5. I/O Management.
6. Protection Management.
7. Networking Management.

Process Management

Q: Define the following: Process, Resource.
Sol:
Process: is a program in execution; it is an active entity in memory, while the program is the passive copy on the hard disk.
[Figure: the program is stored on the hard disk; Process 1 and Process 2 are active copies of it executing in memory.]

Resource: anything in the system that may be used by active processes. Resources may be hardware (printers, tape drives, memory) or software (files, databases).

Resource Types:
Preemptive resource: a resource that can be taken away from the current process when a higher priority process comes. Ex: memory.
Non-preemptive resource: a resource that cannot be taken away from the current process when a higher priority process comes. Ex: CD recorder.

Q: Explain why? Memory is a preemptive resource.
Sol:
Because it allows a low priority process to be swapped out of memory to the disk when a higher priority process arrives and needs a larger amount of memory than is available. The low priority process can resume execution later (swapped in again).

Example: assume a system with 32 KB memory size.
5 KB are used for the OS and 10 KB for the low priority process.
Hence, the available space is 17 KB.
A higher priority process arrives and needs 20 KB.
[Figure: the 10 KB low priority process is swapped out to disk, leaving 27 KB available; the 20 KB high priority process is swapped in (7 KB remain available); when it finishes execution, the low priority process is swapped in again to resume execution with 17 KB available.]

Q: Explain why? The CD recorder is a non-preemptive resource.
Sol:
Because: if a process has begun to burn a CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result in a bad CD.

Q: What are the different steps to utilize a resource by a process?
Sol:
A process may utilize a resource only in the following sequence:
Request -> Use -> Release.

Q: Explain the main differences between an input (job) queue and a ready queue?
Sol:
- Input (job) queue: stores the programs that will be opened soon (located on the disk).
- Ready queue: contains the active processes that are ready to be executed (located in memory).

Q: Explain what is meant by process states, and then draw the process state diagram.
Sol:
As a process executes, it changes its state. The process state may be:

New: The process is being created.


Ready: The process is waiting for the processor.
Running: The process instructions are being executed.
Waiting: The process is waiting for an event to happen
(such as I/O completion).
Terminated: The process has finished execution.

The process state diagram is:
[Figure: New -> (admitted) -> Ready -> (dispatcher) -> Running; Running -> (interrupt) -> Ready; Running -> (I/O or event wait) -> Waiting; Waiting -> (I/O or event completion) -> Ready; Running -> (exit) -> Terminated.]
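The transitions in the diagram can also be written as a small lookup table. The following Python sketch is illustrative only (it is not part of the notes); the event names (admit, dispatch, interrupt, io_wait, io_done, exit) are assumed labels for the arrows described above.

```python
# Allowed process state transitions, following the state diagram above.
# The event labels are illustrative assumptions, not fixed OS terminology.
TRANSITIONS = {
    ("new", "admit"): "ready",          # process created and admitted to the ready queue
    ("ready", "dispatch"): "running",   # dispatcher gives it the CPU
    ("running", "interrupt"): "ready",  # preempted (e.g. time slice expired)
    ("running", "io_wait"): "waiting",  # waiting for an event such as I/O completion
    ("waiting", "io_done"): "ready",    # event happened, back to the ready queue
    ("running", "exit"): "terminated",  # finished execution
}

def next_state(state: str, event: str) -> str:
    """Return the next process state, or raise if the transition is illegal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} --{event}-->")

# Example walk: new -> ready -> running -> waiting -> ready -> running -> terminated
state = "new"
for event in ["admit", "dispatch", "io_wait", "io_done", "dispatch", "exit"]:
    state = next_state(state, event)
    print(event, "->", state)
```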

Q: Explain why? Each process must have a process control block (PCB) in memory.
Sol:
Because:
The process control block (PCB) is the way used to represent the process in the operating system.
It contains information about the process such as:
Process state (new, ready, ...).
Address of the next instruction to be executed in the process.
Process priority.
Process assigned memory.
I/O information (such as the I/O devices allocated to the process and its open files).

Q: Show graphically how the OS uses the PCB to switch between active processes.
Sol:
When an interrupt happens, the OS saves the state of the current active process into its PCB so that it can continue correctly when it resumes execution again.
To resume execution, the OS reloads the process state from its PCB and continues execution.

Q: Give a suitable definition for the "context switch", and then explain why the context switch is a pure overhead?
Sol:
Context switch is the time needed by the OS to switch the processor from one process to another.

Context switch (Px -> Py) = Tx + Ty, where
Tx : the time needed to save the state of Px into PCBx.
Ty : the time needed to load the state of Py from PCBy.

Context switch is a pure overhead because the system does no useful work while switching.

Q: What is meant by schedulers? Then discuss the different types of them.
Sol:
Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue from the job queue (it determines the degree of multiprogramming).
Short-term scheduler (or CPU scheduler): selects which process should be executed next and allocates the CPU to it.
Medium-term scheduler: swaps a process out of memory (the ready queue) and swaps it in again later (it decreases the degree of multiprogramming).

[Figure: passive programs on the disk are opened into the job (input) queue; the long-term scheduler selects processes from the job queue into the ready queue in memory; the medium-term scheduler swaps processes from the ready queue back to the job queue on disk; the short-term scheduler assigns the CPU to a process from the ready queue.]

Q: Explain why?
The long-term scheduler increases the degree of multiprogramming.
The medium-term scheduler decreases the degree of multiprogramming.
Sol:
The degree of multiprogramming is the number of processes that are placed in the ready queue waiting for execution by the CPU.
[Figure: five processes (Process 1 to Process 5) in memory; the degree of multiprogramming is 5.]

Since the long-term scheduler selects which processes are brought from the job queue into the ready queue, it increases the degree of multiprogramming.
[Figure: the long-term scheduler moves processes from the job queue on disk into memory, raising the degree of multiprogramming.]

Since the medium-term scheduler picks some processes from the ready queue and swaps them out of memory, it decreases the degree of multiprogramming.
[Figure: the medium-term scheduler moves processes from memory back to the disk, lowering the degree of multiprogramming.]

CPU Scheduling (Ready Queue)

Q: Explain what is meant by CPU scheduling, and then discuss the difference between the loader and the dispatcher?
Sol:
CPU scheduling is the method used to select a process from the ready queue to be executed by the CPU whenever the CPU becomes idle.
Difference between loader and dispatcher:
- Loader: loads a program from the disk into memory, where it becomes a process in the ready queue.
- Dispatcher: gives control of the CPU to the process selected by the CPU scheduling algorithm.
Some examples of CPU scheduling algorithms:
First Come First Served (FCFS) scheduling.
Shortest Job First (SJF) scheduling.
Priority scheduling.
[Figure: a CPU scheduling algorithm (FCFS, SJF, Priority) picks a process from the ready queue in memory, and the dispatcher hands it the CPU.]

Q: Explain the main differences between preemptive and non-preemptive scheduling?
Sol:
Preemptive scheduling: the currently executing process can be released from the CPU when another process with a higher priority arrives and needs execution.
Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it (by terminating or by requesting I/O).

Q: Discuss in detail what is meant by the following parameters:
CPU utilization.
System throughput.
Turnaround time.
Waiting time.
Response time.
Then discuss which parameters to maximize and which ones to minimize.

Sol:
CPU Utilization:
The percentage of time during which the CPU is busy, out of the total time (time the CPU is busy + time it is idle). Hence, it measures the benefit obtained from the CPU.

CPU Utilization = (Time CPU is busy / Total time) * 100

To maximize utilization, keep the CPU as busy as possible.
CPU utilization ranges from about 40% (for lightly loaded systems) to 90% (for heavily loaded systems). (Explain why CPU utilization cannot reach 100%: because of the context switches between active processes.)

System Throughput:
The number of processes that are completed per time unit (e.g. per hour).

Turnaround time:
For a particular process, it is the total time needed for process execution (from the time of submission to the time of completion).
It is the sum of the process execution time and its waiting times (to get memory, perform I/O, ...).

Waiting time:
The waiting time for a specific process is the sum of all the periods it spends waiting in the ready queue.

Response time:
It is the time from the submission of a process until the first response is produced (the time the process takes to start responding).

It is desirable to:
Maximize:
CPU utilization.
System throughput.

Minimize:
Turnaround time.
Waiting time.
Response time.

First Come First Served (FCFS) algorithm
The process that comes first will be executed first.
Not preemptive.
[Figure: processes enter the ready queue and are given the CPU in arrival order.]

Consider the following set of processes, with the length of the CPU burst (execution) time given in milliseconds. The processes arrive in the order P1, P2, P3, all at time 0.

Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

Gantt chart: | P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 0 | 24
P2 | 24 | 27
P3 | 27 | 30

Hence, average waiting time = (0 + 24 + 27) / 3 = 17 milliseconds.
Note: TAT = WT + execution (burst) time.
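The FCFS accounting above can be reproduced with a few lines of code. The sketch below (in Python, not part of the original notes) simply runs the processes in arrival order and records each waiting and turnaround time; it reuses the burst times from the example.

```python
def fcfs(bursts):
    """FCFS: run processes in the given order; return (waiting, turnaround) per process."""
    time, waiting, turnaround = 0, {}, {}
    for name, burst in bursts:
        waiting[name] = time            # all processes arrive at time 0
        time += burst                   # the process runs to completion (non-preemptive)
        turnaround[name] = time         # TAT = WT + burst time
    return waiting, turnaround

wt, tat = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(wt)                               # {'P1': 0, 'P2': 24, 'P3': 27}
print(tat)                              # {'P1': 24, 'P2': 27, 'P3': 30}
print(sum(wt.values()) / len(wt))       # average waiting time = 17.0
```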

Repeat the previous example, assuming that the processes arrive in the order P2, P3, P1, all at time 0.

Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

Gantt chart: | P2 (0-3) | P3 (3-6) | P1 (6-30) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 6 | 30
P2 | 0 | 3
P3 | 3 | 6

Hence, average waiting time = (6 + 0 + 3) / 3 = 3 milliseconds.

Q: Explain why? The FCFS CPU scheduling algorithm introduces a long average waiting time.
Sol:
Because it suffers from the convoy effect: all other processes must wait for the big process to execute if this big process comes first.
This results in a long waiting time for the small processes, and accordingly increases the average waiting time.

Shortest-Job-First (SJF) scheduling
When the CPU is available, it will be assigned to the process with the smallest CPU burst (non-preemptive).
If two processes have the same next CPU burst, FCFS is used to break the tie.
[Figure: SJF picks the process with the smallest execution time from the ready queue; the numbers on the processes indicate their execution times.]

Consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, P4, all at time 0.

Process | Burst Time
P1 | 6
P2 | 8
P3 | 7
P4 | 3

1. Using FCFS
Gantt chart: | P1 (0-6) | P2 (6-14) | P3 (14-21) | P4 (21-24) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 0 | 6
P2 | 6 | 14
P3 | 14 | 21
P4 | 21 | 24

Hence, average waiting time = (0 + 6 + 14 + 21) / 4 = 10.25 milliseconds.

2. Using SJF
Gantt chart: | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 3 | 9
P2 | 16 | 24
P3 | 9 | 16
P4 | 0 | 3

Hence, average waiting time = (3 + 16 + 9 + 0) / 4 = 7 milliseconds.
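Since all processes arrive at time 0, non-preemptive SJF is just FCFS applied to the processes sorted by burst time. A minimal sketch (not part of the notes), using the burst times of this example:

```python
def sjf(bursts):
    """Non-preemptive SJF, all arrivals at time 0: shortest burst first (FCFS on ties)."""
    order = sorted(bursts, key=lambda p: p[1])   # stable sort keeps FCFS order on ties
    time, waiting = 0, {}
    for name, burst in order:
        waiting[name] = time                     # time spent waiting in the ready queue
        time += burst
    return waiting

wt = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(wt)                                   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(wt.values()) / len(wt))           # average waiting time = 7.0
```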

Q: Explain why? The SJF CPU scheduling algorithm introduces the minimum average waiting time for a set of processes. Give an example.
Sol:
Because: by moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases. Hence, the average waiting time decreases.

Example: assume two processes P1 and P2.

Process | Burst Time
P1 | 30
P2 | 2

Using FCFS: | P1 (0-30) | P2 (30-32) |
Waiting time (P1) = 0, waiting time (P2) = 30.
Average waiting time = (0 + 30) / 2 = 15.

Using SJF: | P2 (0-2) | P1 (2-32) |
Waiting time (P2) = 0, waiting time (P1) = 2.
Average waiting time = (0 + 2) / 2 = 1.

Shortest-Remaining-Time-First (SRTF)
It is a preemptive version of Shortest Job First.
It allows a new process to gain the processor if its execution time is less than the remaining time of the currently executing one.
[Figure: SRTF compares the execution time of an arriving process with the remaining time of the running process.]

Consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, P4, as shown in the table.

Process | Burst Time | Arrival Time
P1 | 7 | 0
P2 | 4 | 2
P3 | 1 | 4
P4 | 4 | 5

1. Using SJF
Gantt chart: | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 0 | 7
P2 | 6 | 10
P3 | 3 | 4
P4 | 7 | 11

Hence, average waiting time = (0 + 6 + 3 + 7) / 4 = 4 milliseconds.

2. Using SRTF
Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 9 | 16
P2 | 1 | 5
P3 | 0 | 1
P4 | 2 | 6

Hence, average waiting time = (9 + 1 + 0 + 2) / 4 = 3 milliseconds.
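SRTF can be checked with a simple millisecond-by-millisecond simulation: at every tick, run the arrived process with the smallest remaining time. A rough sketch (illustrative, not part of the notes) using the arrival and burst times above:

```python
def srtf(procs):
    """Preemptive SJF (SRTF), simulated one millisecond at a time.

    procs: list of (name, arrival, burst). Returns the waiting time per process.
    """
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    finish, time = {}, 0
    while remaining:
        # among arrived, unfinished processes, pick the one with the smallest remaining time
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    # WT = completion time - arrival time - burst time
    return {n: finish[n] - arr - burst for n, arr, burst in procs}

wt = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(wt)                              # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
print(sum(wt.values()) / len(wt))      # average waiting time = 3.0
```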

Priority scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer). There are two types:
Preemptive.
Non-preemptive.
[Figure: priority scheduling picks the process with the highest priority from the ready queue; the numbers on the processes indicate their priorities.]

Problems with priority scheduling
Problem: Starvation (infinite blocking). Low priority processes may never execute.
Solution: Aging. As time progresses, increase the priority of the waiting process.
[Figure: a very low priority process starves while higher priority processes keep arriving; with aging, its priority gradually rises until it runs.]

Consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, P4, P5, all at time 0.

Process | Burst Time | Priority
P1 | 10 | 3
P2 | 1 | 1
P3 | 2 | 4
P4 | 1 | 5
P5 | 5 | 2

1. Using priority scheduling
Gantt chart: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 6 | 16
P2 | 0 | 1
P3 | 16 | 18
P4 | 18 | 19
P5 | 1 | 6

Hence, average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 milliseconds.
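For the non-preemptive case with all arrivals at time 0, priority scheduling is again FCFS applied to the processes sorted by their priority numbers. A short sketch (not part of the notes) reproducing the table above:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all arrivals at time 0.

    procs: list of (name, burst, priority); the smallest priority number runs first.
    """
    order = sorted(procs, key=lambda p: p[2])    # highest priority = smallest integer
    time, waiting = 0, {}
    for name, burst, _ in order:
        waiting[name] = time
        time += burst
    return waiting

wt = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                        ("P4", 1, 5), ("P5", 5, 2)])
print(wt)                            # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(wt.values()) / len(wt))    # average waiting time = 8.2
```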

Round Robin (RR) scheduling
Allocate the CPU for one quantum time (also called a time slice) Q to each process in the ready queue.
This scheme is repeated until all processes are finished.
A new process is added to the end of the ready queue.
[Figure: the CPU cycles through the ready queue, giving each process at most one quantum per turn.]

Consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, all at time 0. Use RR scheduling with Q = 4 and Q = 2.

Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

RR with Q = 4
Gantt chart: | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-30) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 6 | 30
P2 | 4 | 7
P3 | 7 | 10

Hence, average waiting time = (6 + 4 + 7) / 3 = 5.66 milliseconds.

RR with Q = 2
Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-6) | P1 (6-8) | P2 (8-9) | P3 (9-10) | P1 (10-30) |

The waiting times and turnaround times for each process are:

Process | Waiting Time (WT) | Turnaround Time (TAT)
P1 | 6 | 30
P2 | 6 | 9
P3 | 7 | 10

Hence, average waiting time = (6 + 6 + 7) / 3 = 6.33 milliseconds.
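Both Gantt charts can be generated by a small Round Robin simulation that keeps the ready queue as a FIFO and gives each process at most one quantum per turn. A sketch (illustrative only, not part of the notes):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin: give each ready process at most `quantum` ms per turn."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)    # all processes arrive at time 0
    time, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time                  # process finished
        else:
            queue.append(name)                   # back to the end of the ready queue
    # WT = TAT - burst, and TAT = completion time since all arrivals are at time 0
    return {name: finish[name] - burst for name, burst in bursts}

for q in (4, 2):
    wt = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q)
    print(q, wt, sum(wt.values()) / len(wt))
# Q=4 -> {'P1': 6, 'P2': 4, 'P3': 7}, average 17/3 (about 5.66)
# Q=2 -> {'P1': 6, 'P2': 6, 'P3': 7}, average 19/3 (about 6.33)
```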

Q: Explain why? If the quantum time decreases, this will slow down the execution of the processes.
Sol:
Because decreasing the quantum time will increase the number of context switches (the time needed by the processor to switch between the processes in the ready queue), which will increase the time needed to finish the execution of the active processes; hence, this slows down the system.

Multi-level queue scheduling
There are two types:
Without feedback: processes cannot move between queues.
With feedback: processes can move between queues.

Multi-level queuing without feedback:
Divide the ready queue into several queues.
Each queue has a specific priority and its own scheduling algorithm (FCFS, ...).
[Figure: several queues ordered from the high priority queue down to the low priority queue.]

Multi-level queuing with feedback:
Divide the ready queue into several queues.
Each queue has a specific quantum time.
Allow processes to move between queues.
[Figure: Queue 0, Queue 1 and Queue 2, each with its own quantum; processes move between the queues.]

Deadlock

Q: Give a suitable definition for the deadlock problem.
Sol:
Deadlock: a set of blocked processes, each:
1. holding a resource, and
2. waiting to use a resource held by another process in the set.
[Figure: Process A holds a resource and asks Process B for its resource; Process B holds a resource and asks Process A for its resource first: deadlock.]
Hence, the blocked processes will never change state (Explain why? Because the resource each process has requested is held by another waiting process).
Deadlock leads to system breakdown.

Q: Discuss briefly the different deadlock conditions.
Sol:
Deadlock arises if four conditions hold simultaneously:
1. Mutual exclusion: only one process can use a resource at a time.
2. Hold and wait: a process holding at least one resource is waiting for additional resources held by other processes.
3. No preemption: a resource is released only by the process holding it, after it has completed its task.
4. Circular wait: a set of processes each waits for another one in a circular fashion.
Note: all four conditions must occur to have a deadlock. If one condition is absent, a deadlock cannot occur.

Circular wait in detail:
There exists a set {P0, P1, P2, ..., Pn} of waiting processes such that:
P0 is waiting for a resource held by P1.
P1 is waiting for a resource held by P2.
P2 is waiting for a resource held by P3.
...
Pn is waiting for a resource held by P0.
[Figure: P0, P1, P2, ..., Pn arranged in a circle, each waiting for the next one.]

Deadlock Modeling (Resource Allocation Graph)
In order to solve the deadlock problem, we must find a method to express it.
This can be achieved using the resource allocation graph.

Resource Allocation Graph
It is a graph expressing:
- All active processes in the system.
- Available system resources.
- Interconnections between active processes and system resources.

Contents of the resource allocation graph:
Nodes: processes (drawn as circles) and resources (drawn as rectangles, with a dot for each instance).
P (process set): the set of all processes in the system, P = {P1, P2, P3, ..., Pn}.
R (resource set): the set of all resource types in the system, R = {R1, R2, R3, ..., Rm}.
Request edge (Pi -> Rj): process Pi requests an instance of resource Rj.
Assignment edge (Rj -> Pi): an instance of resource Rj is assigned to process Pi.
E (edge set): the set of all edges in the resource allocation graph, E = {Pn -> Rm, Rx -> Py, ...}.

Example:
1. Resource instances:
One instance of R1 and R3.
Two instances of R2.
Three instances of R4.
The sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}
2. Process states:

Process | Holding | Waiting for
P1 | an instance of R2 | R1
P2 | an instance of R1 and an instance of R2 | R3
P3 | an instance of R3 | ------

Q: Explain why: although the graph contains a cycle, the system may not be in a deadlock state.
Sol:
Case 1: the system has one instance per resource type.
If a cycle exists, the system is in a deadlock state.
Each process involved in the cycle is deadlocked.
Cycle: P1 -> R1 -> P2 -> R2 -> P1 (deadlock).
As shown, there is no chance to break the cycle because:
No process can finish execution.
There is no way to free a resource.

Case 2: the system has more than one instance per resource type.
If a cycle exists, the system may or may not be in a deadlock state.

Ex 1: a cycle with deadlock.
Two cycles exist:
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2
As shown, P1, P2, and P3 are deadlocked.

Ex 2: a cycle without deadlock.
Cycle: P1 -> R1 -> P3 -> R2 -> P1.
There is no deadlock because:
P4 may release its instance of R2.
This resource can then be allocated to P3, which breaks the cycle.
Also, P2 may release its instance of R1, and this resource can then be allocated to P1.
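For the single-instance-per-resource case, checking for deadlock amounts to looking for a cycle in the directed graph formed by the request and assignment edges. The sketch below (not from the notes) uses a standard depth-first search for this:

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as a list of (from, to) edges."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY                     # node is on the current DFS path
        for nxt in graph[node]:
            if color[nxt] == GRAY:             # back edge found -> cycle
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Case 1 above: P1 -> R1 -> P2 -> R2 -> P1 (one instance per resource => deadlock)
print(has_cycle([("P1", "R1"), ("R1", "P2"), ("P2", "R2"), ("R2", "P1")]))  # True
# Remove P2's request for R2 and the cycle (and hence the deadlock) disappears
print(has_cycle([("P1", "R1"), ("R1", "P2"), ("R2", "P1")]))                # False
```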

Deadlock Avoidance
Q: Describe in detail the basic rules used for deadlock avoidance.
Sol:
Avoidance examines the system state:
Safe state: no deadlock.
Unsafe state: possibility of deadlock.
Avoidance ensures that the system will never enter an unsafe state.

The system is in a safe state if there exists a safe sequence of all processes <P1, P2, P3, P4, ..., Pn-1, Pn> such that:
P1 can be executed using the resources it holds plus the system's available resources.
P2 can be executed using the resources it holds plus the system's available resources plus the resources held by P1.
P3 can be executed using the resources it holds plus the system's available resources plus the resources held by P1 and P2.
... and so on for the rest of the sequence.

Avoidance Algorithms
Resource Allocation Graph Algorithm
1. A claim edge Pi --> Rj indicates that process Pi may request resource Rj in the future; it is represented by a dashed line.
2. A claim edge is converted to a request edge when the process requests the resource.
3. A request edge is converted to an assignment edge when the resource is allocated to the process.
Claim edge -> Request edge -> Assignment edge.

Suppose that process Pi requests resource Rj.
The request can be granted only if converting the request edge to an assignment edge does not result in a cycle in the resource allocation graph.

Case study: consider the following resource allocation graph.
Suppose that P2 requests R2.
Although R2 is free, we cannot allocate it to P2, since this would create a cycle in the graph.

Consider the following resource allocation graph (processes P1, P2, P3 and resources R1, R2, R3, R4):
Is the system in a safe state? What is the safe sequence (if one exists)?
Assume at time T1, P3 requests R2; is it reasonable to grant this request?

Steps:
1. Find the available resources.
2. Construct a table of each process's maximum need and the resources it holds.
3. Find the safe sequence.

Available resources: R2

Process | Max. Need | Hold
P1 | R1, R2, R3 | R1, R3
P2 | R3, R4 | ----
P3 | R1, R2, R3, R4 | R4

Safe sequence: <P1, P3, P2>, so the system is in a safe state.

At T1 (P3 requests R2):
It is not reasonable to grant the request, because the system would then be in an unsafe state (granting it may lead to a future cycle if P1 later requests R2).
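The safe-sequence search used in this example can be sketched as a greedy loop: repeatedly pick any process whose remaining needs are covered by the currently free resources, let it finish, and release what it holds. This simplified version works on resource names (one instance per type), which is an assumption of the sketch rather than the full Banker's algorithm:

```python
def find_safe_sequence(available, processes):
    """Greedy safe-sequence search over resource types (one instance of each).

    available: set of free resource names.
    processes: dict name -> {"max": set of all resources it may need,
                             "hold": set of resources it currently holds}.
    """
    free = set(available)
    remaining = dict(processes)
    sequence = []
    while remaining:
        runnable = next((n for n, p in remaining.items()
                         if p["max"] - p["hold"] <= free), None)  # still-needed is a subset of free
        if runnable is None:
            return None                      # no process can finish -> unsafe state
        free |= remaining[runnable]["hold"]  # it finishes and releases what it holds
        sequence.append(runnable)
        del remaining[runnable]
    return sequence

procs = {
    "P1": {"max": {"R1", "R2", "R3"}, "hold": {"R1", "R3"}},
    "P2": {"max": {"R3", "R4"}, "hold": set()},
    "P3": {"max": {"R1", "R2", "R3", "R4"}, "hold": {"R4"}},
}
print(find_safe_sequence({"R2"}, procs))     # ['P1', 'P3', 'P2'] -> safe state
```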

Recovery From Deadlock
1. Terminate deadlocked processes:
Terminate all deadlocked processes, or
terminate them one by one until the deadlock is eliminated.
2. Free some resources:
Free (preempt) some resources from the deadlocked processes and give those resources to other processes until the deadlock is eliminated.

Memory Management Techniques

Q: Explain what is meant by memory management, then discuss why we manage memory.
Sol:
Memory management is how to organize the active processes (the processes currently in the ready queue) so that:
1. Processes can be easily reached.
2. Memory space utilization is maximized.

Q: Show how a loader stores an executable file into memory, assuming:
A file of size = 20 memory words (instructions and data).
The contiguous allocation method is used.
Repeat the problem three different times using: First fit, Best fit, Worst fit.
Sol:
The loader searches the free memory holes for one that is large enough for the 20-word file (First Fit: the first hole that fits; Best Fit: the smallest hole that fits; Worst Fit: the largest hole). The base register holds the start address of the loaded process and the limit register holds its size (20 words); together they define the legal address range and are used to map logical addresses to physical addresses.

Using First Fit
[Figure: the 20-word file is loaded into the first hole large enough for it; base register = 1040 (start address of the process), limit register = 20, so the legal range is 1040 to 1060.]

Using Best Fit
[Figure: the 20-word file is loaded into the smallest hole that is large enough; base register = 1100, limit register = 20.]

Using Worst Fit
[Figure: the 20-word file is loaded into the largest hole; base register = 1150, limit register = 20.]

Q: Explain what is meant by bootstrapping, then what is the difference between a loader and the bootstrap loader?
Sol:

Bootstrapping:
The actions taken when a computer is first powered on until it is ready to be used.
The computer reads a program from ROM (Read Only Memory) which:
Is installed by the manufacturer.
Contains the bootstrap program and some other routines that control the hardware (BIOS).

Loader:
A part of the OS.
Performs loading and relocation for user programs.

Bootstrap Loader:
An absolute loader.
Executed when the computer is turned on or restarted.
Loads the first program to be run by the computer (usually the OS).
Loads it at address 0 in memory (so it is an absolute loader).

Q: Explain why? The bootstrap loader is an absolute (simple) loader.
Sol:
Assume the OS requires 1000 words (from address 0 to 999).
As shown, the range of logical addresses (in the executable copy of the OS on the disk) is the same as the range of physical addresses in memory.
So, no relocation is needed.
[Figure: the bootstrap loader copies the OS from disk into memory addresses 0 to 999; the logical and physical address ranges are identical, so no relocation is performed.]

Q: Explain in detail how to manage memory in a multi-programming environment.
Sol:
Memory management in a multi-programming environment uses:
Swapping.
Contiguous allocation.
Paging.

Swapping:
Q: Explain what is meant by swapping, and give some examples of swapping.
Sol:
A process can be swapped out of memory to the disk, and then brought back into memory for continued execution.

Ex 1: a multi-programming environment with priority scheduling.
Assume a system with 32 KB memory size.
5 KB are used for the OS and 10 KB for the low priority process.
Hence, the available space is 17 KB.
A higher priority process arrives and needs 20 KB.
[Figure: the low priority process is swapped out to disk (27 KB available); the high priority process is swapped in (7 KB available); when it finishes execution, the low priority process is swapped in again to resume execution.]

Contiguous Allocation
Q: Explain what is meant by contiguous allocation, and what are its different types?
Sol:
In contiguous allocation, each process is contained in a single contiguous section of memory.
Methods for contiguous allocation: the simple method and the general method.

Simple Method
Divide memory into several fixed-sized partitions.
Each partition contains one process.
The degree of multiprogramming is bound by the number of partitions.
When a partition is free, a process is selected from the input queue and is loaded into the free partition.
When the process terminates, the partition becomes available for another process.

This method is no longer used (Explain why?) because it has various drawbacks, such as:
1. The degree of multiprogramming is bounded by the number of partitions.
2. Internal fragmentation.

Assume a memory of 30 KB divided into three partitions of 10 KB each, and an input queue (on the disk) containing processes of 4 KB, 8 KB, 9 KB and 7 KB.
[Figure: Process 1 (7 KB), Process 2 (9 KB) and Process 3 (8 KB) each occupy one 10 KB partition; the unused space inside each partition is internal fragmentation.]
As shown:
This method suffers from internal fragmentation.
The degree of multiprogramming is bounded to 3, although it could be 4.

General Method
Initially, all memory is available for user processes and is considered as one large block of available memory.
When a process arrives and needs memory, we search for a hole large enough for this process using First Fit, Best Fit, or Worst Fit.
If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.

There are three different methods to find a suitable hole for a process:
First fit: allocate the first hole that is big enough (the fastest method).
Best fit: allocate the smallest hole that is big enough (produces the smallest leftover hole).
Worst fit: allocate the largest hole (produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach). A sketch of the three strategies follows.
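A minimal sketch of the three placement strategies over a list of free holes; the hole addresses and sizes below are hypothetical values chosen only for illustration, and the hole list is assumed to be kept in address order:

```python
def choose_hole(holes, size, strategy):
    """Pick a hole (start, length) for a request of `size` words.

    holes: list of (start_address, length) free blocks, in address order.
    strategy: "first", "best" or "worst" fit.
    """
    fitting = [h for h in holes if h[1] >= size]
    if not fitting:
        return None                               # no hole is large enough
    if strategy == "first":
        return fitting[0]                         # first fitting hole in address order
    if strategy == "best":
        return min(fitting, key=lambda h: h[1])   # smallest hole that fits
    if strategy == "worst":
        return max(fitting, key=lambda h: h[1])   # largest hole
    raise ValueError(strategy)

# Hypothetical free holes (start, length) in memory words:
holes = [(1010, 30), (1070, 30), (1125, 25), (1200, 50)]
for s in ("first", "best", "worst"):
    print(s, choose_hole(holes, 20, s))
# first -> (1010, 30), best -> (1125, 25), worst -> (1200, 50)
```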

[Figure: with the general method, processes of different sizes (Process 1: 7 KB, Process 2: 9 KB, Process 3: 8 KB, Process 4: 4 KB, Process 5: 9 KB) are placed into holes of exactly the needed size below the OS.]
As shown:
The degree of multiprogramming changes according to the number of processes in memory (in the ready queue).
After a period of time, external fragmentation appears.

[Figure: after some processes terminate, small scattered holes remain between the processes still in memory; these holes are external fragmentation.]
As shown:
This method suffers from external fragmentation.

Compaction
[Figure: the memory contents are moved so that all allocated processes are adjacent; the free memory forms one large hole that can store a new process.]
Compaction:
Is the movement of the memory contents to place all free memory in one large block, sufficient to store a new process.
It is a solution for external fragmentation, but it is expensive and is not always possible.

Paging
Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous.
It is commonly used in most operating systems.
Divide physical memory into fixed-sized blocks called frames.
Divide the process into blocks of the same size called pages.
Use a page table, which contains the base frame of each page in physical memory.

Example
[Figure: a process P is divided into 4 pages (0 to 3); the page table maps each page number to a frame number in physical memory.]
A process P is divided into 4 pages.
The process will be loaded into 4 frames.
The page number is used as an index into the page table.
The page table contains the frame number for each page.

Paging Example
A 32-byte memory, each memory word of size 1 byte (can store only one character); the size of a page (and also of a frame) = 4 bytes.
Show how to store a 4-page process into memory using a page table.
According to your page table, what are the physical addresses corresponding to the logical addresses 4 and 13?

Sol:
Memory size = 32 bytes = 32 words.
Page size = frame size = 4 bytes = 4 words.
Number of frames = 32 / 4 = 8 frames (addressed from 0 to 7).
[Figure: the 4 pages of the process are placed into 4 of the 8 frames according to the page table; a logical address is split into a page number and an offset, and the page number is replaced by the corresponding frame number to form the physical address.]
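The address translation itself can be sketched as follows. Since the original figure with the page table is not reproduced here, the page-to-frame mapping used below is an assumption for illustration only; with a different page table the resulting physical addresses would differ.

```python
PAGE_SIZE = 4                      # words per page / frame (from the example)

# Hypothetical page table: page number -> frame number. The original figure is lost,
# so this particular mapping is assumed only for illustration.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical):
    """Translate a logical address into a physical address via the page table."""
    page, offset = divmod(logical, PAGE_SIZE)   # split into page number and offset
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

for la in (4, 13):
    print(la, "->", translate(la))
# With this assumed table: 4  -> 24 (page 1, offset 0 -> frame 6)
#                          13 -> 9  (page 3, offset 1 -> frame 2)
```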

Advantages of paging:
1. No external fragmentation.
2. Allows the process components to be noncontiguous in physical memory.

Problems with paging:
1. A possibility of internal fragmentation that cannot be used.

Q: Explain when you have internal fragmentation when using the paging technique.
Sol:
When the content of the last page of the process is smaller than the frame size, the remaining part of that frame is internal fragmentation that cannot be used. A small sketch of this calculation follows.
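A tiny sketch of that calculation (the process and frame sizes below are illustrative values only):

```python
def internal_fragmentation(process_size, frame_size):
    """Unused space in the last frame allocated to a process (0 if it fits exactly)."""
    leftover = process_size % frame_size
    return 0 if leftover == 0 else frame_size - leftover

print(internal_fragmentation(13, 4))   # 13-word process, 4-word frames -> 3 words wasted
print(internal_fragmentation(16, 4))   # exact multiple of the frame size -> 0 words wasted
```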

Any Questions?
