
CPU Scheduling

Arvind Krishnamurthy
Spring 2001

Main points

- Question: the dispatcher can choose any thread on the ready queue to run;
  how do we decide which one to choose?
- Scheduling algorithms:
  - FCFS (or FIFO)
  - SJF (or STCF or SRTCF)
  - Priority scheduling
  - Lottery scheduling
  - Round Robin
  - Multi-level feedback queues

Alternating CPU and I/O Bursts

- Maximum CPU utilization obtained with multiprogramming
- CPU–I/O burst cycle
- CPU burst distribution

CPU scheduler

- Selects from among the processes in memory that are ready to execute, and
  allocates the CPU to one of them
- CPU scheduling decisions may take place when a process:
  1. Switches from running to waiting state.
  2. Switches from running to ready state.
  3. Switches from waiting to ready state.
  4. Terminates.
- Scheduling under 1 and 4 is nonpreemptive.
- All other scheduling is preemptive.

Scheduling policy goals

- Minimize response time: elapsed time to do an operation (or job)
  - Response time is what the user sees: elapsed time to
    - echo a keystroke in the editor
    - compile a program
    - run a large scientific problem
- Maximize throughput: operations (jobs) per second
  - Two parts to maximizing throughput:
    - minimize overhead (for example, context switching)
    - efficient use of system resources (not only CPU, but disk, memory, etc.)
- Fair: share the CPU among users in some equitable way

First Come First Served

- Example:

      Process   Burst Time
      P1        24
      P2        3
      P3        3

- Suppose that the processes arrive in the order: P1, P2, P3
- The schedule (Gantt chart) is: P1 [0-24], P2 [24-27], P3 [27-30]
- Waiting time for P1 = 0; P2 = 24; P3 = 27
- Average waiting time: (0 + 24 + 27)/3 = 17
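The waiting-time arithmetic above is easy to check mechanically. Below is a small sketch (not part of the original slides) that replays FCFS for the example's burst times, assuming all three processes arrive at time 0 in the order P1, P2, P3.

```java
// Sketch of the FCFS example above: all jobs arrive at time 0 and run to
// completion in arrival order, so a job's waiting time is simply the sum of
// the bursts of the jobs ahead of it.
public class FcfsDemo {
    public static void main(String[] args) {
        int[] bursts = {24, 3, 3};              // P1, P2, P3 in arrival order
        int clock = 0;
        int totalWait = 0;
        for (int i = 0; i < bursts.length; i++) {
            System.out.printf("P%d waits %d%n", i + 1, clock);
            totalWait += clock;                 // waiting time = time spent behind earlier jobs
            clock += bursts[i];                 // the job then runs its full burst
        }
        // prints 17.0, matching (0 + 24 + 27)/3 on the slide
        System.out.println("Average waiting time: " + (double) totalWait / bursts.length);
    }
}
```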

1
FCFS scheduling (cont'd)

- Suppose that the processes arrive in the order: P2, P3, P1
- The Gantt chart for the schedule is: P2 [0-3], P3 [3-6], P1 [6-30]
- Waiting time for P1 = 6; P2 = 0; P3 = 3
- Average waiting time: (6 + 0 + 3)/3 = 3
- Much better than the previous case
- FCFS pros: simple; cons: short jobs get stuck behind long jobs

Shortest-Job-First (SJF)

- Associate with each process the length of its next CPU burst
- Use these lengths to schedule the process with the shortest time
- Two schemes:
  - nonpreemptive: once given the CPU, a process cannot be preempted until it
    completes its CPU burst
  - preemptive: if a new process arrives with a CPU burst length less than the
    remaining time of the currently executing process, preempt
- SJF is optimal but unfair
  - pros: gives minimum average response time
  - cons: long-running jobs may starve if there are too many short jobs;
    difficult to implement (how do you know how long it will take?)

Non-preemptive SJF

- Example:

      Process   Arrival Time   Burst Time
      P1        0.0            7
      P2        2.0            4
      P3        4.0            1
      P4        5.0            4

- SJF (non-preemptive): P1 [0-7], P3 [7-8], P2 [8-12], P4 [12-16]
- Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Example of preemptive SJF

- Same arrival and burst times as above
- SJF (preemptive): P1 [0-2], P2 [2-4], P3 [4-5], P2 [5-7], P4 [7-11], P1 [11-16]
- Average waiting time = (9 + 1 + 0 + 2)/4 = 3
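As an illustration (not from the original slides), here is a minimal sketch of the preemptive scheme on the same arrival and burst data; it steps time one unit at a time and always runs the ready job with the least remaining work. A real scheduler would have to predict burst lengths rather than know them in advance.

```java
// Sketch of preemptive SJF (shortest remaining time first) on the data above.
// At every time step the ready job with the least remaining work runs, so a
// newly arrived shorter job preempts the one currently executing.
public class SrtfDemo {
    public static void main(String[] args) {
        int[] arrival = {0, 2, 4, 5};
        int[] burst = {7, 4, 1, 4};
        int[] remaining = burst.clone();
        int[] finish = new int[burst.length];
        int done = 0;
        for (int t = 0; done < burst.length; t++) {
            int pick = -1;                       // index of the job to run this time unit
            for (int i = 0; i < burst.length; i++) {
                if (arrival[i] <= t && remaining[i] > 0
                        && (pick < 0 || remaining[i] < remaining[pick])) {
                    pick = i;
                }
            }
            if (pick < 0) continue;              // nothing has arrived yet: CPU idles
            if (--remaining[pick] == 0) {        // run the chosen job for one time unit
                finish[pick] = t + 1;
                done++;
            }
        }
        int totalWait = 0;
        for (int i = 0; i < burst.length; i++) {
            totalWait += finish[i] - arrival[i] - burst[i];   // waiting = turnaround - burst
        }
        // prints 3.0, matching (9 + 1 + 0 + 2)/4 on the slide
        System.out.println("Average waiting time: " + (double) totalWait / burst.length);
    }
}
```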

Priority scheduling

- A priority number is associated with each process
- The CPU is allocated to the process with the highest priority
  (smallest integer ≡ highest priority)
  - preemptive
  - nonpreemptive
- SJF is priority scheduling where the priority is the predicted next CPU burst time
- Problem: low-priority processes may never execute
- Solution: aging – as time progresses, increase the priority of the process

Handling dependencies

- Scheduling = deciding who should make progress
- Obvious: a thread's importance should increase with the importance of those
  that depend on it
- Naïve priority schemes violate this ("priority inversion")
- Example: T1 at high priority, T2 at low
  - T2 acquires lock L.
  - Scene 1: T1 tries to acquire L, fails, spins. T2 never gets to run.
  - Scene 2: T1 tries to acquire L, fails, blocks. T3 enters the system at high
    priority. T2 never gets to run.
- "Priority donation"
  - A thread's priority scales with the priority of the threads that depend on it
  - Works well with explicit dependencies
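To make the aging idea concrete, here is a small illustrative sketch (the process names, initial priorities, and the exact boost rule are all invented for this example): smaller numbers mean higher priority, and every process that loses the CPU this tick has its priority number decreased so it cannot wait forever.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy sketch of priority scheduling with aging. Smaller number = higher
// priority. A process that does not get the CPU this tick is aged (its
// priority number is lowered), so the low-priority job eventually runs.
public class AgingDemo {
    static class Proc {
        final String name;
        final int basePriority;
        int priority;
        Proc(String name, int priority) {
            this.name = name;
            this.basePriority = priority;
            this.priority = priority;
        }
    }

    public static void main(String[] args) {
        List<Proc> ready = new ArrayList<>();
        ready.add(new Proc("interactive", 1));   // high priority
        ready.add(new Proc("batch", 4));         // low priority, would starve without aging
        for (int tick = 0; tick < 6; tick++) {
            // dispatch the highest-priority (lowest number) ready process
            Proc next = ready.stream()
                    .min(Comparator.comparingInt(p -> p.priority))
                    .orElseThrow();
            System.out.println("tick " + tick + ": run " + next.name);
            next.priority = next.basePriority;   // the running process returns to its base
            for (Proc p : ready) {
                if (p != next && p.priority > 0) p.priority--;   // age the waiters
            }
        }
    }
}
```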

Lottery Scheduling

- Problem: this whole priority thing is really ad hoc. How do we ensure that
  processes are penalized equally under load? That the system doesn't have a
  bad screw case?
- Lottery scheduling:
  - give each process some number of tickets
  - at each scheduling event, randomly pick a ticket
  - run the winning process
  - to give a process n% of the CPU, give it n% of the tickets
- How to use?
  - Approximate priority: give low-priority processes few tickets and
    high-priority processes many
  - Approximate SJF: give short jobs more tickets, long jobs fewer. If a job
    has at least 1 ticket, it will not starve.
  - Easy priority inversion: donate tickets to the process you're waiting on;
    its CPU% scales with the tickets of all waiters

Lottery Scheduling Example

- Adding or deleting jobs (and their tickets) affects all jobs proportionately
- Example: each short job gets 10 tickets, each long job gets 1 ticket

      #short jobs /   % of CPU each     % of CPU each
      #long jobs      short job gets    long job gets
      1 / 1           91%               9%
      0 / 2           NA                50%
      2 / 0           50%               NA
      10 / 1          10%               1%
      1 / 10          50%               5%
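A single lottery decision is just a random ticket draw. The sketch below (illustrative, using the 10-ticket/1-ticket split from the table above) repeats the draw many times to show that each job's share of wins approaches its share of the tickets.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Sketch of lottery scheduling: each job holds some tickets, one ticket is
// drawn uniformly at random, and the holder of that ticket runs next. Over
// many draws a job's CPU share approaches its fraction of the total tickets.
public class LotteryDemo {
    public static void main(String[] args) {
        Map<String, Integer> tickets = new LinkedHashMap<>();
        tickets.put("shortJob", 10);     // as in the table: a short job holds 10 tickets
        tickets.put("longJob", 1);       // a long job holds 1 ticket
        int total = tickets.values().stream().mapToInt(Integer::intValue).sum();

        Random rng = new Random();
        Map<String, Integer> wins = new LinkedHashMap<>();
        int draws = 100_000;
        for (int d = 0; d < draws; d++) {
            int winningTicket = rng.nextInt(total);       // pick a ticket in [0, total)
            int seen = 0;
            for (Map.Entry<String, Integer> e : tickets.entrySet()) {
                seen += e.getValue();
                if (winningTicket < seen) {               // this job holds the winning ticket
                    wins.merge(e.getKey(), 1, Integer::sum);
                    break;
                }
            }
        }
        // prints roughly 91% and 9%, matching the "1 short / 1 long" row above
        wins.forEach((job, count) ->
                System.out.printf("%s won %.1f%% of the draws%n", job, 100.0 * count / draws));
    }
}
```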

Round Robin (RR)

- Each process gets a small unit of CPU time (a time quantum). After the time
  slice, it is moved to the end of the ready queue.
- Time quantum = 10 - 100 milliseconds on most OSes
- With n processes in the ready queue and time quantum q:
  - each process gets 1/n of the CPU time, in chunks of at most q time units at a time
  - no process waits more than (n-1)q time units
  - each job gets an equal shot at the CPU
- Performance:
  - q large ⇒ FCFS
  - q too small ⇒ throughput suffers: you spend all your time context
    switching, not getting any real work done

RR with time quantum = 20

- Example:

      Process   Burst Time
      P1        53
      P2        17
      P3        68
      P4        24

- The Gantt chart is: P1 [0-20], P2 [20-37], P3 [37-57], P4 [57-77], P1 [77-97],
  P3 [97-117], P4 [117-121], P1 [121-134], P3 [134-154], P3 [154-162]
- Typically, RR gives higher average turnaround time than SJF, but better response time
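The quantum-20 Gantt chart above can be replayed with a few lines of code. This is a sketch (not from the slides): the ready queue is a simple FIFO of {process id, remaining burst} pairs, and an unfinished job goes to the back after each slice.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of round robin with quantum 20 on the slide's four processes. The job
// at the head of the queue runs for at most one quantum; if it has work left,
// it is moved to the end of the ready queue.
public class RoundRobinDemo {
    public static void main(String[] args) {
        final int quantum = 20;
        Deque<int[]> ready = new ArrayDeque<>();   // each entry: {process id, remaining burst}
        ready.add(new int[]{1, 53});
        ready.add(new int[]{2, 17});
        ready.add(new int[]{3, 68});
        ready.add(new int[]{4, 24});

        int clock = 0;
        while (!ready.isEmpty()) {
            int[] job = ready.poll();
            int slice = Math.min(quantum, job[1]); // run one quantum, or less if the burst ends
            System.out.printf("t=%3d: P%d runs for %d%n", clock, job[0], slice);
            clock += slice;
            job[1] -= slice;
            if (job[1] > 0) ready.add(job);        // unfinished: back of the ready queue
        }
        System.out.println("all jobs done at t=" + clock);   // 162, as in the Gantt chart
    }
}
```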

RR vs. FCFS vs. SJF

- Assuming a zero-cost time slice, is RR always better than FCFS?
  - 10 jobs, each takes 100 secs, RR time slice 1 sec
  - what would be the average response time under RR and FCFS?
- Another example: three tasks A, B, C
  - A and B are both CPU bound and run for a week
  - C is I/O bound: a loop of 1 ms CPU followed by 10 ms disk I/O
  - running C by itself gets 90% disk utilization
  - with FIFO?
  - with RR (100 ms time slice)?
  - with RR (1 ms time slice)?

Multilevel queue

- The ready queue is partitioned into separate queues:
  - foreground (interactive)
  - background (batch)
- Each queue has its own scheduling algorithm:
  - foreground – RR
  - background – FCFS
- Scheduling must also be done between the queues:
  - Fixed priority scheduling, i.e., serve all from foreground, then from
    background. Possibility of starvation.
  - Time slice – each queue gets a certain amount of CPU time which it can
    schedule amongst its processes, e.g., 80% to foreground in RR and 20% to
    background in FCFS (see the sketch below)
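Here is a toy sketch of the time-sliced variant just described (the job names, the 10-quantum cycle, and the simplified per-queue behavior are invented for illustration): out of every ten quanta, eight go to the foreground RR queue and two to the background FCFS queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy sketch of scheduling *between* queues with an 80%/20% split: 8 of every
// 10 quanta go to the foreground (RR) queue and 2 to the background (FCFS)
// queue. Both queues stay non-empty and jobs never finish, to keep it short.
public class MultilevelQueueDemo {
    public static void main(String[] args) {
        Deque<String> foreground = new ArrayDeque<>(List.of("editor", "shell"));
        Deque<String> background = new ArrayDeque<>(List.of("backup", "indexer"));
        for (int quantum = 0; quantum < 10; quantum++) {
            boolean useForeground = quantum % 5 != 4;      // quanta 4 and 9 go to background
            Deque<String> queue = useForeground ? foreground : background;
            String job = queue.poll();
            System.out.printf("quantum %d (%s): %s%n",
                    quantum, useForeground ? "foreground/RR" : "background/FCFS", job);
            if (useForeground) {
                queue.addLast(job);      // RR: rotate to the back of the foreground queue
            } else {
                queue.addFirst(job);     // FCFS: the same job keeps the CPU next time
            }
        }
    }
}
```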

Multilevel feedback queue

- Multilevel-feedback-queue scheduler defined by the following parameters:
  - number of queues
  - scheduling algorithms for each queue
  - when to upgrade/downgrade a process
  - which queue a process will enter

Example of Multilevel Feedback

- Three queues:
  - Q0 – quantum 8 ms
  - Q1 – quantum 16 ms
  - Q2 – FCFS
- Scheduling:
  - A new job enters queue Q0, which is served FCFS
  - When it gains the CPU, the job receives 8 milliseconds
  - If it does not finish in 8 milliseconds, the job is moved to queue Q1
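A sketch of the demotion rule in this example (the 40 ms job length is invented for illustration): the job starts in Q0 and gets an 8 ms slice, moves to Q1 for a 16 ms slice if it is not done, and finally runs to completion in the FCFS queue Q2.

```java
// Sketch of the three-queue example above. A job enters Q0 and receives an
// 8 ms quantum; if it does not finish it is demoted to Q1 (16 ms quantum),
// and after that to Q2, where it simply runs FCFS until it completes.
public class MlfqDemo {
    // quantum per level; Q2 is FCFS, modeled here as an unbounded slice
    static final int[] QUANTUM = {8, 16, Integer.MAX_VALUE};

    public static void main(String[] args) {
        int remaining = 40;        // hypothetical CPU-bound job needing 40 ms in total
        int level = 0;             // every new job starts in Q0
        while (remaining > 0) {
            int slice = Math.min(QUANTUM[level], remaining);
            remaining -= slice;
            System.out.printf("ran %d ms in Q%d, %d ms left%n", slice, level, remaining);
            if (remaining > 0 && level < QUANTUM.length - 1) {
                level++;           // did not finish within the quantum: demote one level
            }
        }
    }
}
```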

Solaris 2 scheduling

- Local scheduling – the threads library decides which thread to put onto an
  available LWP
- Global scheduling – the kernel decides which kernel thread to run next

Java thread scheduling

- JVM uses a preemptive, priority-based scheduling algorithm
- FIFO queue is used if there are multiple threads with the same priority
- JVM schedules a thread to run when:
  - the currently running thread exits the runnable state
  - a higher-priority thread enters the runnable state
- Note: the JVM does not specify whether threads are time-sliced or not
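As a concrete illustration of the priority API that the next slide lists, here is a minimal runnable example. Because the JVM does not specify time slicing, the priorities are only hints to the scheduler; the output order is not guaranteed.

```java
// Minimal example of java.lang.Thread priorities. Priorities are hints, so the
// higher-priority thread is merely likely to be scheduled first, not certain.
public class PriorityDemo {
    public static void main(String[] args) {
        Runnable work = () -> System.out.println(
                Thread.currentThread().getName() + " running at priority "
                        + Thread.currentThread().getPriority());

        Thread background = new Thread(work, "background");
        Thread interactive = new Thread(work, "interactive");
        background.setPriority(Thread.MIN_PRIORITY);          // 1
        interactive.setPriority(Thread.NORM_PRIORITY + 2);    // 5 + 2 = 7, as on the next slide
        background.start();
        interactive.start();
    }
}
```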

Java thread priorities

- Thread priorities:

      Priority                Comment
      Thread.MIN_PRIORITY     Minimum Thread Priority
      Thread.MAX_PRIORITY     Maximum Thread Priority
      Thread.NORM_PRIORITY    Default Thread Priority

- Priorities may be set using the setPriority() method:

      setPriority(Thread.NORM_PRIORITY + 2);

Summary

FCFS:
  + simple
  - short jobs can get stuck behind long ones; poor I/O utilization
SJF:
  + optimal (average response time, average time-to-completion)
  - hard to predict the future
  - unfair
RR:
  + better for short jobs; fast response
  - poor when jobs are the same length
Multi-level feedback:
  + approximates SJF
  - unfair to long-running jobs
