
ECT426 REAL TIME OPERATING SYSTEMS MODULE II

Syllabus

Process Scheduling: FCFS, SJF, Priority, Round-Robin, Multilevel Queue and Multilevel Feedback
Queue Scheduling. Thread: Structure. User and kernel level threads, multi-threading models,
multiprocessor scheduling.

Q. Scheduling Queues:-

1. Job queue:- As processes enter the system, they are put into a job queue. This queue
consists of all processes in the system.
2. Ready queue:-The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list. We extend each
PCB to include a pointer field that points to the next PCB in the ready queue.

3. Device queue:- The list of processes waiting for a particular I/O device is called a device queue.
Each device has its own device queue.

Q. Queuing diagram:-

 A common representation of process scheduling is a queuing diagram.


 Each rectangular box represents a queue.
 Two types of queues are present: the ready queue and a set of device queues.
 The circles represent the resources that serve the queues, and the arrows indicate
the flow of processes in the system.
 A new process is initially put in the ready queue. It waits in the ready queue until it is
selected for execution.
 Once the process is assigned to the CPU and is executing, one of several events could
occur:
o The process could issue an I/O request, and then be placed in an I/O queue.
o The process could create a new sub process and wait for its termination.

o The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue.
 In the first two cases, the process eventually switches from the waiting state to the ready
state, and is then put back in the ready queue. A process continues this cycle until it
terminates.

Q. Types of Processor Scheduling:-


 The aim of processor scheduling is to assign processes to be executed by the processor or processors over time, in a way that meets system objectives such as response time, throughput, and processor efficiency.
 A process migrates between the various scheduling queues throughout its lifetime.
 The operating system must select, for scheduling purposes, processes from these queues
in some fashion.
 The selection process is carried out by the appropriate scheduler.
 The key to multiprogramming is scheduling.
 Three types:
 Long term scheduling
 Medium term scheduling
 Short term scheduling
 Long term scheduling is performed when a new process is created.
 Medium term scheduling is a part of the swapping function.
 Short term scheduling is the actual decision of which ready process to execute next.


Long-term scheduler (job scheduler):-


 It determines which programs are admitted to the system for processing.
 It controls the degree of multi programming.
 Once admitted, a job or user program becomes a process and is added to the queue for the short-term scheduler.
 In a batch system, newly submitted jobs are routed to disk and held in a batch queue.
 The long term scheduler creates process from the queue when it can.
 There are two decisions involved.
 The scheduler must decide
 When the OS can take on one or more additional processes.
 Which job or jobs to accept and turn into processes.
 Decision 1 is driven by the desired degree of multiprogramming: the more processes that are created, the smaller the percentage of time for which each process can execute.
 Long term scheduling limits the degree of multiprogramming to provide satisfactory service to the current set of processes.
 Each time a job terminates, the scheduler may decide to add one or more new processes.
 Decision 2 can be made on a simple FCFS basis, or it can be a tool to manage system performance.
 The criteria used may include priority, expected execution time, and I/O requirements.

 The scheduler may attempt to keep a good mix of processor-bound and I/O-bound processes.
 For interactive processes in a time-sharing system, the OS will accept process-creation requests from users until the system is saturated.
Short-term scheduler or CPU scheduler:-
 The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute, and allocates the CPU to one of them.
 Assigning the CPU to a process is called “Process Dispatching”.
 Also known as dispatcher.
 It executes most frequently
 The short-term scheduler is invoked whenever an event occurs that may lead to the blocking of the current process, or that may require choosing another process for execution:
 Clock interrupts
 I/O interrupts
 Operating system calls and traps
 Signals (e.g., semaphores)
Medium-term scheduler:-
 When a process is suspended after making an I/O request, it cannot make any progress until the requested I/O is completed by the OS.
 Sometimes a suspended process is removed from main memory and shifted to secondary storage (swapping out) to make space for another process.
 The medium-term scheduler is charged with the handling of swapped-out processes.
 It determines when a swapped-out process can be moved back into memory (swapping in) and resumed from the same point where it was swapped out.
 Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted the available memory.

Comparison among Schedulers


Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
--- | --- | ---
It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
It puts jobs from the New state into the ready queue; it decides which jobs should reside in main memory, i.e., be in the ready queue. | It puts jobs from the ready state into the active (running) state; CPU bursts are very short, so it cannot take much time to decide which job to activate. |
Speed is lesser than that of the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short-term and long-term schedulers.
It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and its execution can be continued.

Q. CPU-I/O Burst Cycle:-

 The execution of a process consists of a cycle of CPU execution and I/O wait.
o A process begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, and so on. The last CPU burst ends with a system request to terminate execution.
 The CPU burst durations vary from process to process and computer to computer.
o An I/O bound program has many very short CPU bursts.
o A CPU bound program might have a few very long CPU bursts.


Q. Context Switching:-

 When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process - this is called context switch.
 The time it takes is dependent on hardware support.
 Context-switch time is overhead; the system does no useful work while switching.
Steps in Context Switching
 Save context of processor including program counter and other registers.
 Update the PCB of the running process with its new state and other associate
information.
 Move the PCB to the appropriate queue (ready or blocked).
 Select another process for execution.
 Update PCB of the selected process.
 Restore CPU context from that of the selected process.
 When the process is switched, the following information is stored for later use.
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used registers
• Changed state
• I/O State information
• Accounting information
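
To make these steps concrete, here is a minimal, illustrative Python sketch (not part of the original notes) of a PCB and a context switch, with the CPU modeled as a plain dictionary; all field and function names are hypothetical, not from any real kernel.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block; the fields mirror the list above."""
    pid: int
    state: str = "ready"                               # ready / running / blocked
    program_counter: int = 0
    registers: dict = field(default_factory=dict)      # currently used registers
    base_register: int = 0                             # base and limit values
    limit_register: int = 0
    io_state: dict = field(default_factory=dict)       # I/O state information
    accounting: dict = field(default_factory=dict)     # accounting information

def context_switch(cpu, old, new, ready_queue):
    # Save the processor context into the old process's PCB and update its state.
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    ready_queue.append(old)            # move the PCB to the appropriate queue
    # Restore the CPU context from the selected process's PCB.
    new.state = "running"
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)

# Example: switch a toy CPU from P1 to P2.
cpu = {"pc": 100, "regs": {"r0": 7}}
p1 = PCB(pid=1, state="running", program_counter=100)
p2 = PCB(pid=2, program_counter=200)
ready = []
context_switch(cpu, p1, p2, ready)
print(cpu["pc"], p1.state, p2.state)   # 200 ready running
```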


Q. Preemptive and non preemptive Scheduling

 CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
 Under circumstances 1 and 4, the scheduling scheme is non-preemptive, i.e.:
o When a process switches from the running state to the waiting state.
o When a process terminates
 Otherwise, the scheduling scheme is preemptive, i.e.:
o When a process switches from the running state to the ready state.
o When a process switches from the waiting state to the ready state.
 Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
 This scheduling method was used by early Microsoft Windows environments.
Preemptive:
 Currently running process may be interrupted and moved to the Ready state by the OS.
 Allows for better service since any one process cannot monopolize the processor for very long.

Q. Dispatcher:-The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler. This function involves:


 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program

Q. Scheduling Criteria:

Many criteria have been suggested for comparing CPU-scheduling algorithms.


The criteria include the following:
 CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a
lightly loaded system) to 90 percent (for a heavily used system).
 Throughput: If the CPU is busy executing processes, then work is being done. One measure
of work is the number of processes completed per time unit, called throughput. For long
processes, this rate may be 1 process per hour; for short transactions, throughput might be 10
processes per second.
 Turnaround time: The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
Turnaround Time= Completion Time- Arrival Time
 Waiting time: Waiting time is the sum of the periods spent waiting in the ready
queue.
Waiting Time= Turnaround Time- Burst Time
 Response time: In an interactive system, turnaround time may not be the best criterion.
Another measure is the time from the submission of a request until the first response is
produced. This measure, called response time, is the amount of time it takes to start
responding, but not the time that it takes to output that response.
“In the case of non-preemptive scheduling, waiting time and response time are the same.”
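
As a quick check of the two formulas above, the Python sketch below computes turnaround and waiting time from given completion times; the data are illustrative (they happen to match the FCFS example given later).

```python
# pid: (arrival, burst, completion), all in ms
jobs = {"P1": (0, 24, 24), "P2": (0, 3, 27), "P3": (0, 3, 30)}

for pid, (arrival, burst, completion) in jobs.items():
    turnaround = completion - arrival   # Turnaround = Completion - Arrival
    waiting = turnaround - burst        # Waiting = Turnaround - Burst
    print(pid, "turnaround:", turnaround, "waiting:", waiting)
```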
Classification of Scheduling Criteria
• User-oriented:
– Response Time: Elapsed time from the submission of a request to the beginning of the response.
– Turnaround Time: Elapsed time from the submission of a process to its completion.
• System-oriented:
– CPU utilization
– Throughput: number of processes completed per unit time.
– Fairness
 Optimization Criteria
– Max CPU utilization
– Max throughput
– Min turnaround time
– Min waiting time


– Min response time

Q. Scheduling Algorithms
 CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU-scheduling algorithms; the important ones are given below.
a) First-Come, First-Served Scheduling:-
 The process that requests the CPU first is allocated the CPU first. It is a non-preemptive algorithm.
 Can easily be implemented with a FIFO queue.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue.
 When CPU is free, it is allocated to the process at the head of the queue.
 Example:

Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

 If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes:

Average waiting time = (0 + 24 + 27) / 3 = 17 ms
Average turnaround time = (24 + 27 + 30) / 3 = 27 ms
 If the processes arrive in the order P2, P3, P1, the Gantt chart is:

Waiting time for P1 = 6ms

Waiting time for P2 = 0ms


Waiting time for P3 = 3ms


Average waiting time = (6 + 0 + 3) / 3 = 3 ms

 Thus, the average waiting time under an FCFS policy is generally not minimal, and it may vary substantially if the processes' CPU burst times vary greatly.
 The FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
 Whenever two processes have the same arrival time, we pick the one with the smaller process ID.
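
A minimal FCFS simulation in Python (my sketch, not from the notes), assuming all processes arrive at time 0; it reproduces the 17 ms average waiting time of the first ordering above.

```python
def fcfs(processes):
    """processes: list of (pid, burst_ms) in arrival (FIFO) order.
    Returns each process's waiting time."""
    waiting, clock = {}, 0
    for pid, burst in processes:
        waiting[pid] = clock        # time spent in the ready queue so far
        clock += burst              # the CPU runs this burst to completion
    return waiting

w = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(w)                            # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(w.values()) / len(w))     # 17.0 ms
```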

Advantages
 Very simple
Disadvantages
 Long average and worst-case waiting times
 Poor dynamic behavior (convoy effect - short process behind long process).
Convoy effect in FCFS:

 Assume one CPU-bound process and many I/O-bound processes. As the processes flow around
the system, the following scenario may result.
 The CPU-bound process will get and hold the CPU. During this time, all the other processes will
finish their I/O and will move into the ready queue, waiting for the CPU. While the processes
wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its
CPU burst and moves to an I/O device.
 All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to
the I/O queues. At this point, the CPU sits idle. The CPU-bound process will then move back to
the ready queue and be allocated the CPU. Again, all the I/O processes end up waiting in the
ready queue until the CPU-bound process is done.
 There is a convoy effect as all the other processes wait for the one big process to get off the
CPU.
 The convoy effect results in lower CPU and device utilization.
 The FCFS scheduling algorithm is non-preemptive.
 Once the CPU has been allocated to a process, that process keeps the CPU until it releases the
CPU, either by terminating or by requesting I/O. The FCFS algorithm is thus particularly
troublesome for time-sharing systems.

b) Shortest-Job-First Scheduling:-


 This algorithm associates with each process the length of the latter's next CPU burst.
 When the CPU is available, it is assigned to the process that has the smallest next CPU
burst.
 If two processes have the same length next CPU burst, FCFS scheduling is used to


break the tie. In its basic form it is a non-preemptive policy.


 Both preemptive and non-preemptive versions of SJF are used; the preemptive version is discussed later.

Example:

Process | Burst Time
P1 | 6
P2 | 8
P3 | 7
P4 | 3
Gantt Chart:-

Average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.


Average turnaround time = (3 + 9 + 16 + 24) / 4 = 13 ms
Example:

Process | Arrival Time | Burst Time
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 9
P4 | 3 | 5

Preemptive SJF is known as Shortest Remaining Time First (SRTF)


Q: Non-Preemptive SJF Scheduling:

Gantt chart: P1 (0–8) | P2 (8–12) | P4 (12–17) | P3 (17–26)

AWT = [0 + (8 – 1) + (12 – 3) + (17 – 2)] / 4 = 31 / 4 = 7.75 ms
 In SJF, the time durations specified are actually the execution times of each process's next CPU burst.
 In FCFS there is no such problem: the only criterion is that the job which arrived in the ready queue first is executed first, without regard to the length of its next CPU burst.
 In SJF, we must have an idea of the duration of the next CPU burst: the job whose next CPU burst is minimum is executed first, and the job whose next burst is maximum is executed last.
 The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. We can define the exponential average with the following formula.
 Let tn be the length of the nth CPU burst, and let τn+1 be our predicted value for the next CPU burst. Then, for 0 ≤ α ≤ 1, define

τn+1 = α·tn + (1 - α)·τn

 The value of tn contains our most recent information; τn stores the past history.
 The parameter α controls the relative weight of recent and past history in our prediction.
 If α = 0, then τn+1 = τn, and recent history has no effect (current conditions are assumed to be transient).
 If α = 1, then τn+1 = tn, and only the most recent CPU burst matters (history is assumed to be old and irrelevant).
 More commonly, α = 1/2, so recent history and past history are equally weighted.
 The initial τ0 can be defined as a constant or as an overall system average.
 To understand the behavior of the exponential average, we can expand the formula for τn+1 by substituting for τn:

τn+1 = α·tn + (1 - α)·α·tn-1 + · · · + (1 - α)^i·α·tn-i + · · · + (1 - α)^(n+1)·τ0

 Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor.
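
Transcribed directly into Python (an illustrative sketch; the measured burst values and the initial guess τ0 = 10 are invented for the demonstration):

```python
def predict_next_burst(measured_bursts, alpha=0.5, tau0=10.0):
    """Exponential average: tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n)."""
    tau = tau0                      # initial guess, e.g. an overall system average
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# alpha = 1/2 weights recent and past history equally.
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # 12.0
```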
c) Preemptive SJF – Shortest Remaining Time First:-
 Preemptive version of SJF.
 If a new process arrives with CPU burst length less than remaining time of current
executing process, preempt the currently executing process and allocate the CPU to the
new process.
 Advantages:
 Minimizes average waiting times.
 Problems:
 How to determine length of next CPU burst?
 Problem: starvation of jobs with long CPU bursts.
Example:

Process | Arrival Time | Burst Time
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 9
P4 | 3 | 5

SRTF schedule: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26).

Waiting times:
P1: 10 – 1 = 9
P2: 1 – 1 = 0
P3: 17 – 2 = 15
P4: 5 – 3 = 2

AWT = (9 + 0 + 15 + 2) / 4 = 6.5 ms
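
The schedule above can be reproduced with a small tick-by-tick SRTF simulation in Python (my sketch; ties are broken by arrival order):

```python
def srtf(processes):
    """processes: list of (pid, arrival, burst). Simulates one time unit per
    step, always running the ready process with the shortest remaining time.
    Returns each process's waiting time."""
    arrival = {p: a for p, a, _ in processes}
    burst = {p: b for p, _, b in processes}
    remaining = dict(burst)
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:                   # CPU idles until the next arrival
            clock += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            finish[pid] = clock
            del remaining[pid]
    # waiting = turnaround - burst = (finish - arrival) - burst
    return {p: finish[p] - arrival[p] - burst[p] for p in finish}

w = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w, sum(w.values()) / len(w))      # average waiting time = 6.5 ms
```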

Q: What is starvation?
In the case of a preemptive priority algorithm:
 A job which is currently in the active state also has some priority level.
 While that job is active, if another job arrives in the ready queue with a priority higher than that, we preempt the currently active job and give the higher-priority job to the CPU for execution.
 The situation is dynamic: every time a new job arrives in the queue, its priority is compared with the priority of the active job.
 If new jobs with higher priorities keep arriving, a job already in the ready queue with the lowest priority may never be executed: it will starve for CPU time.
Q. What is aging?
 While a job remains in the ready queue, its priority level is incremented at regular intervals of time.
 That means the priority of the job is no longer decided only by the user/administrator.
 The priority of the job is also decided by the duration of time it has been waiting in the ready queue.
 Because we increment the priority of the job at regular intervals, at some point the job will reach the maximum priority level.
 When it reaches the maximum priority level, it will be executed by the CPU.
 This is one way in which starvation can be solved.


d) Round Robin Scheduling (preemptive):-

 This scheduling algorithm is designed especially for time sharing systems. It is similar
to FCFS scheduling, but preemption is added to switch between processes.
 A small unit of time, called a time quantum, or time slice, is assigned to each process.
Usually a quantum is 10 to 100 ms.
 This scheduler allocates the CPU to each process in the ready queue for a time interval
of up to 1 time quantum in FIFO (circular) fashion.
 If the process is still running at the end of the quantum, it will be preempted from the CPU.
 A context switch will be executed, and the process will be put at the tail of the ready
queue.
 Then the scheduler will select the next process in the ready queue.
 However, if the process has blocked or finished before the quantum has elapsed, the
scheduler will then proceed with the next process in the ready queue.
 The ready queue is treated as a circular queue.
 Example:-
Process | Burst Time
P1 | 24
P2 | 3
P3 | 3
Time quantum = 4 ms
Gantt chart


Waiting time for P1 = 10 – 4 = 6 ms
Waiting time for P2 = 4 ms
Waiting time for P3 = 7 ms

The average waiting time = (6 + 4 + 7) / 3 = 17/3 ≈ 5.66 ms
 The performance of the RR algorithm depends heavily on the size of the time quantum.
 If the time quantum is very large (effectively infinite), the RR policy is the same as the FCFS policy.
 If the time quantum is very small, the RR approach is called processor sharing and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.
 If the time slice is chosen to be very small (close to the context-switching period), then context-switching overheads will be very high, thus affecting the system throughput adversely.
 So the time slice has to be carefully chosen: it should be small enough to give a good response to the interactive users, and at the same time large enough to keep the context-switching overhead low.
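
A compact RR simulation in Python (my sketch; processes are assumed to arrive at time 0). With the example above and a 4 ms quantum it reproduces the 5.66 ms average.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (pid, burst_ms). Returns waiting times
    (turnaround minus burst) under round-robin with the given quantum."""
    remaining = dict(processes)
    ready = deque(pid for pid, _ in processes)   # circular ready queue
    clock, finish = 0, {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append(pid)                    # preempted: back to the tail
        else:
            finish[pid] = clock
    bursts = dict(processes)
    return {pid: finish[pid] - bursts[pid] for pid in finish}

w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(w, sum(w.values()) / len(w))               # average = 17/3 = 5.66 ms
```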

e) Priority scheduling (preemptive and non-preemptive):-


 The basic principle of the priority scheduling algorithm is to assign a priority order to
each ready process.
 The ready process with the highest priority is allowed to run and the CPU is allocated to
the process with the highest priority.
 Equal priority processes are scheduled in FCFS order.
 Priority scheduling can be of :-
o Non-preemptive :- A process runs up to the end of its CPU burst time. A non-
preemptive priority-scheduling algorithm will simply put the new process at the
head of the ready queue.
o Preemptive :-When a process arrives at the ready queue ,its priority is compared
with the priority of the currently running process. A preemptive priority scheduling
algorithm will preempt the CPU if the priority of the newly arrived process is higher
than the priority of the currently running process.
Example for Non-preemptive and preemptive scheduling

Process | Arrival time (ms) | Burst time (ms) | Priority
P1 | 0 | 10 | 2
P2 | 1 | 5 | 2
P3 | 3 | 6 | 4
P4 | 5 | 4 | 0


 Problem with priority scheduling algorithms is indefinite blocking or starvation.


 A solution to the problem of indefinite blockage of low priority process is aging.
 Aging is a technique of gradually increasing the priority of processes that wait in the
system for a long time.
 The main focus is to service the more critical processes earlier than the less critical
ones.
 The average context-switching overhead of a preemptive algorithm will be higher than that of the corresponding non-preemptive algorithm.
 Thus, the throughput of a preemptive algorithm will be lower than that of its non-preemptive counterpart.
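
The sketch below combines preemptive priority scheduling with aging, using the example table above. It assumes (as is common, though the notes do not state it) that a lower number means a higher priority; the aging interval of 5 ticks is an arbitrary, illustrative choice.

```python
def priority_with_aging(processes, age_every=5):
    """processes: list of (pid, arrival, burst, priority); lower number =
    higher priority (assumed). A process that has waited `age_every` ticks
    gains one priority level, so nothing starves. Returns finish times."""
    arrival = {p: a for p, a, _, _ in processes}
    remaining = {p: b for p, _, b, _ in processes}
    prio = {p: pr for p, _, _, pr in processes}
    waited = {p: 0 for p, _, _, _ in processes}
    clock, finish = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1
            continue
        pid = min(ready, key=lambda p: prio[p])   # highest-priority ready job
        remaining[pid] -= 1
        clock += 1
        for other in ready:                       # age the jobs left waiting
            if other != pid:
                waited[other] += 1
                if waited[other] % age_every == 0 and prio[other] > 0:
                    prio[other] -= 1              # move toward higher priority
        if remaining[pid] == 0:
            finish[pid] = clock
            del remaining[pid]
    return finish

print(priority_with_aging([("P1", 0, 10, 2), ("P2", 1, 5, 2),
                           ("P3", 3, 6, 4), ("P4", 5, 4, 0)]))
```

Note that ties (P1 and P2 share priority 2) fall back to arrival order, matching the FCFS rule stated above.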

Q. Multilevel Queue Scheduling


 A multilevel queue-scheduling algorithm partitions the ready queue into several
separate queues.
 The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type.
 There must be scheduling between the queues, which is commonly implemented as a fixed-
priority preemptive scheduling.
 For example the foreground queue may have absolute priority over the background
queue.
Example of a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queue.


Multilevel queue scheduling


Q. Multilevel Feedback Queue Scheduling:-
 Multilevel feedback queue scheduling, allows a process to move between queues.
 The idea is to separate processes with different CPU-burst characteristics.
 If a process uses too much CPU time, it will be moved to a lower-priority queue.
 This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
 Similarly, a process that waits too long in a lower priority queue may be moved to a
higher-priority queue. This form of aging prevents starvation.
 For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
 The scheduler first executes all processes in queue 0.
 Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in
queue 2 will be executed only if queues 0 and 1 are empty.
 A process that arrives for queue 1 will preempt a process in queue 2.
 A process that arrives for queue 0 will, in turn, preempt a process in queue 1.

Multilevel feedback queues


A multilevel feedback queue scheduler is defined by the following parameters:
o The number of queues
o The scheduling algorithm for each queue
o The method used to determine when to upgrade a process to a higher priority queue
o The method used to determine when to demote a process to a lower-priority queue
o The method used to determine which queue a process will enter when that process needs
service.
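
A minimal sketch of a three-queue feedback scheduler in Python (mine; all processes are assumed to arrive at time 0, and the aging/upgrade path is omitted for brevity). Q0 and Q1 are round-robin with assumed quanta of 8 and 16 ms, and Q2 is FCFS; a process that uses its full quantum is demoted one level.

```python
from collections import deque

def mlfq(processes, quanta=(8, 16)):
    """Three-level feedback queue: Q0 (RR, quanta[0]), Q1 (RR, quanta[1]),
    Q2 (FCFS). processes: list of (pid, burst). Returns finish times."""
    remaining = dict(processes)
    queues = [deque(pid for pid, _ in processes), deque(), deque()]
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty
        pid = queues[level].popleft()
        run = remaining[pid] if level == 2 else min(quanta[level], remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queues[min(level + 1, 2)].append(pid)  # used whole quantum: demote
        else:
            finish[pid] = clock
    return finish

print(mlfq([("P1", 30), ("P2", 6), ("P3", 20)]))
# P2 finishes within its Q0 quantum; P3 needs Q1; P1 drains last in Q2.
```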
Q. Multiple-Processor Scheduling
 We can classify multiprocessor systems as follows:
 Loosely coupled or distributed multiprocessor, or cluster:
• Consists of a collection of relatively autonomous systems, each processor
having its own main memory and I/O channels.

 Functionally specialized processors:


• An example is an I/O processor.
• In this case, there is a master, general-purpose processor; specialized
processors are controlled by the master processor and provide services to it.
 Tightly coupled multiprocessing:
• Consists of a set of processors that share a common main memory and are
under the integrated control of an operating system.
• If multiple CPUs are available, the scheduling problem is correspondingly more complex. We concentrate on homogeneous multiprocessors.
• Even with those systems, we have certain limitations.

 If several identical processors are available, then load-sharing can occur.


 It is possible to provide a separate queue for each processor.
 In this case, however, one processor could be idle, with an empty queue, while another processor is very busy.
 To prevent this situation, we use a common ready queue.
 All processes go into one queue and are scheduled onto any available processor.
 In such a scheme, one of two scheduling approaches may be used.
 In one approach, each processor is self-scheduling.
 Each processor examines the common ready queue and selects a process to execute.
 If we have multiple processors trying to access and update a common data structure, each processor must be programmed very carefully (a sketch of one way to guard the shared queue is given after this discussion).
 We must ensure that two processors do not choose the same process, and that processes
are not lost from the queue.
 The other approach avoids this problem by appointing one processor as scheduler for
the other processors, thus creating a master-slave structure.
 This asymmetric multiprocessing is far simpler than symmetric multiprocessing,
because only one processor accesses the system data structure, alleviating the need
for data sharing. However, it is also not as efficient.
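
A toy illustration of self-scheduling processors sharing one ready queue, with a lock providing the mutual exclusion discussed above so that no two "CPUs" dispatch the same process. Python threads stand in for processors here; this is entirely illustrative, not how a real kernel is written.

```python
import threading
from collections import deque

ready_queue = deque(f"P{i}" for i in range(10))
queue_lock = threading.Lock()

def self_schedule(cpu_id):
    while True:
        with queue_lock:            # no two CPUs can pop the same process
            if not ready_queue:
                return
            process = ready_queue.popleft()
        print(f"CPU {cpu_id} dispatches {process}")   # "run" it outside the lock

cpus = [threading.Thread(target=self_schedule, args=(i,)) for i in range(4)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()
```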
 A good way of characterizing multiprocessors and placing them in context with other
architectures is to consider the synchronization granularity, or frequency of synchronization,
between processes in a system.
 We can distinguish five categories of parallelism that differ in the degree of granularity.
• Independent Parallelism
• Coarse Grained Parallelism
• Very Coarse-Grained Parallelism
• Medium-Grained Parallelism
• Fine-Grained Parallelism


INDEPENDENT PARALLELISM
 With independent parallelism, there is no explicit synchronization among processes.
 Each represents a separate, independent application or job.
 A typical use of this type of parallelism is in a time-sharing system.
• Each user is performing a particular application, such as word processing or
using a spreadsheet.
• The multiprocessor provides the same service as a multiprogrammed
uniprocessor.
• Because more than one processor is available, average response time to the
users will be less.

COARSE GRAINED PARALLELISM


 With coarse and very-coarse grained parallelism, there is synchronization among
processes, but at a very gross level.
 This kind of situation is easily handled as a set of concurrent processes running on a
multiprogrammed uniprocessor and can be supported on a multiprocessor with little or no
change to user software.

MEDIUM-GRAINED PARALLELISM
 A single application can be effectively implemented as a collection of threads within a
single process.
 In this case, the programmer must explicitly specify the potential parallelism of an
application.
 Typically, there will need to be rather a high degree of coordination and interaction
among the threads of an application, leading to a medium-grain level of synchronization.
 Because the various threads of an application interact so frequently, scheduling decisions
concerning one thread may affect the performance of the entire application.

FINE-GRAINED PARALLELISM
 Fine-grained parallelism represents a much more complex use of parallelism than is
found in the use of threads.
 Although much work has been done on highly parallel applications, this is so far a
specialized and fragmented area, with many different approaches


SCHEDULING DESIGN ISSUES


 Scheduling on a multiprocessor involves three interrelated issues:
• Assignment of processes to processors
• Use of multiprogramming on individual processors
• Actual dispatching of a process
 The approach taken will depend on the degree of granularity of applications and the
number of processors available
1. Assignment of processes to processors
 Assume that the architecture of the multiprocessor is uniform, in the sense that no processor has a particular physical advantage with respect to access to main memory or to I/O devices.
 Then the simplest scheduling approach is to treat the processors as a pooled resource and assign processes to processors on demand.
 Two types of assignment are possible: static and dynamic.
Static Assignment
 A process is permanently assigned to one processor from activation until its completion, and a dedicated short-term queue is maintained for each processor.
 An advantage of this approach is that there may be less overhead in the scheduling
function, because the processor assignment is made once and for all.
 Also, the use of dedicated processors allows a strategy known as group or gang
scheduling, as discussed later.
 A disadvantage is that one processor can be idle, with an empty queue, while another processor is very busy.
 To prevent this situation, a common queue can be used.


 All processes go into one global queue and are scheduled to any available processor.
Dynamic Assignment
 With dynamic assignment, threads are moved from a queue for one processor to a queue for another processor.
 Linux uses this approach.
Both dynamic and static methods require some way of assigning a process to a processor
 Two methods:
• Master/Slave
• Peer
I. Master/Slave Architecture
 Key kernel functions of the operating system always run on a particular processor. The
other processors may only execute user programs.
 The master is responsible for scheduling jobs.
 Once a process is active, if the slave needs service (e.g., an I/O call), it must send a
request to the master and wait for the service to be performed.

 There are two disadvantages to this approach:


o A failure of the master brings down the whole system,
o The master can become a performance bottleneck.
II. Peer Architecture
 The kernel can execute on any processor,
 and each processor does self-scheduling from the pool of available processes.
 This approach complicates the operating system.
 The operating system must ensure that two processors do not choose the same process
and that the processes are not somehow lost from the queue.
 Techniques must be employed to resolve and synchronize competing claims to resources.
2. Use of multiprogramming on individual processors
 In a traditional multiprocessor, dealing with coarse-grained or independent synchronization granularity, each individual processor should be able to switch among a number of processes to achieve high utilization and therefore better performance.
 However, for medium-grained applications running on a multiprocessor with many processors, it is no longer paramount that every single processor be as busy as possible.
3. Process Dispatching
 Final design issue related to multiprocessor scheduling is the actual selection of a process
to run.
Process Scheduling
 Usually, processes are not dedicated to processors.
 Instead, a single queue is used for all processors,
 or if some sort of priority scheme is used, there are multiple queues based on priority, all
feeding into the common pool of processors.

THREADS
 A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack.
 It shares with other threads belonging to the same process its code section, data section, and other
OS resources, such as open files and signals.

 A conventional (or heavyweight) process has a single thread of control. If a process has multiple threads of control (lightweight processes), it can perform more than one task at a time.
 A thread plays an important role in IPC.

 The difference between a single-threaded process and a multiple-threaded process is shown below.

 Most OS kernels are multithreaded, i.e., several threads operate in the kernel, and each thread performs a specific task, such as managing devices or handling interrupts.

Advantages of multi-threading

 Increasing responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.


 Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.

 Default resource sharing: Processes can share resources only through explicit techniques such as IPC, whereas threads share the memory and resources of their process by default.

 Scalability: The benefits of multithreading can be greatly increased in a multiprocessor


architecture. Multithreading on a multi-CPU machine increases parallelism.

THREAD SCHEDULING
 With threads, the concept of execution is separated from the rest of the definition of a process.
 An application can be implemented as a set of threads, which cooperate and execute concurrently
in the same address space.
 Threads on a uniprocessor system can be used as a program structuring aid and to overlap I/O with
processing.
 However, the full power of threads becomes evident in a multiprocessor system.
 Threads can be used to exploit true parallelism in an application.
 If the various threads of an application are simultaneously run on separate processors, dramatic
gains in performance are possible.

Approaches to Thread Scheduling


 Four general approaches stand out:
o Load Sharing
o Gang Scheduling
o Dedicated processor assignment
o Dynamic scheduling
1. Load Sharing:
 Processes are not assigned to a particular processor.
 A global queue of ready threads is maintained, and each processor, when idle, selects a thread
from the queue.
 The term load sharing is used to distinguish this strategy from load-balancing schemes in which
work is allocated on a more permanent basis
 Advantages:
 The load is distributed evenly across the processors, assuring that no processor is idle while work
is available to do.
 No centralized scheduler is required; when a processor is available, the scheduling routine of the
operating system is run on that processor to select the next thread.
 The global queue can be organized and accessed using any of the scheduling schemes explained earlier.


 Disadvantages:
 The central queue occupies a region of memory that must be accessed in a manner that
enforces mutual exclusion.
 Thus, it may become a bottleneck if many processors look for work at the same time.
 Preempted threads are unlikely to resume execution on the same processor.
o If each processor is equipped with a local cache, caching becomes less
efficient.
 If all threads are treated as a common pool of threads, it is unlikely that all of the threads
of a program will gain access to processors at the same time.
 If a high degree of coordination is required between the threads of a program, the process
switches involved may seriously compromise performance.
2. Gang Scheduling
 A set of related threads is scheduled to run on a set of processors at the same time, on a
one-to-one basis.
 If closely related processes execute in parallel, synchronization blocking may be reduced,
less process switching may be necessary, and performance will increase.
 Scheduling overhead may be reduced because a single decision affects a number of
processors and processes at one time.
 The use of gang scheduling creates a requirement for processor allocation.


 Consider an example in which there are two applications, one with four threads and one
with one thread.
 Using uniform time allocation wastes 37.5% of the processing resource, because when the single-thread application runs, three of the four processors are left idle.
 If there are several one-thread applications, these could all be fitted together to increase processor utilization.
 If that option is not available, an alternative to uniform scheduling is scheduling that is weighted by the number of threads.
 Thus, the four-thread application could be given four-fifths of the time and the one-thread application only one-fifth of the time, reducing the processor waste to 15%.
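
The 37.5% and 15% figures follow from a one-line calculation: three of the four processors sit idle whenever the single-thread application holds its share of the time. A tiny sketch:

```python
PROCESSORS = 4   # one 4-thread application plus one 1-thread application

def wasted_fraction(single_thread_share):
    idle = (PROCESSORS - 1) / PROCESSORS   # 3 of 4 CPUs idle while it runs
    return idle * single_thread_share

print(wasted_fraction(1 / 2))   # uniform split:            0.375 -> 37.5%
print(wasted_fraction(1 / 5))   # weighted by thread count: 0.15  -> 15%
```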
3. Dedicated Processor Assignment
 Each program is allocated a number of processors equal to the number of threads in the
program, for the duration of the program execution.
 When the program terminates, the processors return to the general pool for possible
allocation to another program.
 This approach would appear to be extremely wasteful of processor time.
 If a thread of an application is blocked waiting for I/O or for synchronization with another thread, then that thread's processor remains idle: there is no multiprogramming of processors. But:
o In a highly parallel system, with tens or hundreds of processors, each of which
represents a small fraction of the cost of the system, processor utilization is no
longer so important as a metric for effectiveness or performance.
o The total avoidance of process switching during the lifetime of a program should
result in a substantial speedup of that program.
4. Dynamic Scheduling
 The number of threads in a process can be altered during the course of execution, allowing the operating system to adjust the load to improve utilization.
 In this both operating systems and application are involved in making scheduling
decisions.
 The operating system is responsible for partitioning the processors among the jobs.
 Each job uses the processors in its partition to execute some subset of its runnable tasks by mapping these tasks to threads.
 Decisions about which subset to run, and which thread to suspend when a process is preempted, are left to the individual application.
 In this approach the scheduling responsibility of the operating system is primarily limited
to processor allocation and proceeds according to the following policy.


Q. Real-Time Scheduling:-
o Real-time computing is divided into two types.
o Hard real-time systems are required to complete a critical task within a guaranteed
amount of time
o Generally, a process is submitted along with a statement of the amount of time in which
it needs to complete or perform I/O.
o The scheduler then either admits the process, guaranteeing that the process will
complete on time, or rejects the request as impossible.
o This is known as resource reservation.
o Soft real-time computing is less restrictive.
o It requires that critical processes receive priority over less fortunate ones.
o Implementing soft real-time functionality requires careful design of the scheduler and
related aspects of the operating system.
o First, the system must have priority scheduling, and real-time processes must have the
highest priority.
o The priority of real-time processes must not degrade over time, even though the priority
of non-real-time processes may.
o Second, the dispatch latency must be small. The smaller the latency, the faster a real-
time process can start executing once it is runnable.
o A high-priority process may need to read or modify kernel data that are currently being accessed by a lower-priority process; the high-priority process would then be waiting for the lower-priority one to finish.
o This situation is known as priority inversion. In fact, a chain of processes could all be accessing resources that the high-priority process needs.
o This problem can be solved via the priority-inheritance protocol, in which all these
processes (the ones accessing resources that the high-priority process needs) inherit the
high priority until they are done with the resource in question.
o When they are finished, their priority reverts to its original value.
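
A toy sketch of the priority-inheritance idea (illustrative only; real kernels implement this inside the mutex code). Here a higher number means a higher priority, and processes are plain dictionaries.

```python
class Resource:
    """Minimal priority-inheritance lock."""
    def __init__(self):
        self.holder = None

    def acquire(self, proc):
        if self.holder is None:
            self.holder = proc
            return True
        # Blocked by a lower-priority holder: the holder inherits our priority.
        if proc["priority"] > self.holder["priority"]:
            self.holder["priority"] = proc["priority"]
        return False                 # caller must wait for the resource

    def release(self):
        # When finished, the holder's priority reverts to its original value.
        self.holder["priority"] = self.holder["base_priority"]
        self.holder = None

low = {"name": "low", "priority": 1, "base_priority": 1}
high = {"name": "high", "priority": 10, "base_priority": 10}
res = Resource()
res.acquire(low)
res.acquire(high)            # high blocks; low now runs at priority 10
print(low["priority"])       # 10 (inherited)
res.release()
print(low["priority"])       # 1 (reverted)
```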
