
Chapter 3

Task Assignment and Scheduling
3.1 Introduction
3.2 Rate monotonic analysis
3.3 Other uniprocessor scheduling algorithms
3.4 Task assignment
3.5 Fault-tolerant scheduling
Objective: Look at techniques for allocating & scheduling tasks to ensure that deadlines are met

Introduction
 Real-time computing objective :
– Execute its control tasks by their appropriate deadlines

 Objective of Chapter:
– Techniques for allocating & scheduling tasks on processors to
ensure that deadlines are met.


Scheduling

[Figure: growth of scheduling research over the years, from 1970 onward]

The allocation/scheduling problem can be
stated as follows:

 Given a set of factors affecting
allocation/scheduling
– Tasks (consume resources)
• number of tasks, priorities

– task characteristics
• periodicity
• timing constraints

– task precedence constraints
(best described using precedence graph)
– resource requirements
– inter-task interactions
We are asked to devise a feasible allocation/schedule on a given computer.

Precedence Graph

[Figure: precedence graph over tasks T1–T8]

Precedence Graph
 The arrows indicate which task has precedence over which other task.
 We denote the precedence task set of task T by <(T); that is, <(T) indicates which tasks must be completed before T can begin.

Precedence Graph (Cont.)
 The precedence sets are read off directly from the graph, e.g.:
– <(1) = ∅
– <(2) = {1}
– <(3) = {1}
– <(4) = {1}
– <(5) = {1,2,3}
– and similarly for <(6), <(7), and <(8)

Precedence Graph (Cont.)
 We can also write i < j to indicate that task Ti must precede task Tj; equivalently, j > i.
 The precedence operator is transitive: i < j and j < k imply i < k.
 For an economical representation, list only the immediate ancestors in the precedence set:
– e.g., <(5) = {2,3}, since <(2) = {1} already implies that T1 precedes T5
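A minimal Python sketch (an illustration added here, not part of the original slides) of how the full precedence set can be recovered from the economical, immediate-ancestor representation by transitive expansion; the task numbering follows the example above:

def full_precedence(immediate, task):
    # Return the full precedence set of `task`, given a dict mapping each
    # task to its set of immediate ancestors (the economical representation).
    result = set()
    stack = list(immediate.get(task, ()))
    while stack:
        t = stack.pop()
        if t not in result:
            result.add(t)
            stack.extend(immediate.get(t, ()))   # ancestors of ancestors, transitively
    return result

immediate = {2: {1}, 3: {1}, 4: {1}, 5: {2, 3}}  # economical form; task 1 has no ancestors
print(full_precedence(immediate, 5))             # {1, 2, 3}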

Each task has:
 Resource requirements: each task requires resources, e.g. processor execution time, memory, or access to a bus.
– Depending on its usage, a resource may be exclusively held by a task.
 Release time: the time at which all the data that are required to begin executing the task are available.
 Deadline: the time at which the task must complete its execution (the deadline may be hard or soft).
 Relative deadline = absolute deadline – release time

Each task has … (cont.)
 Task types:
– Periodic: invoked every Pi seconds; the constraint is generally that it has to run exactly once every period, with the deadline at the end of the period.
– Sporadic: not periodic and invoked at irregular intervals, but with an upper bound on the rate at which it has to be invoked.
– Aperiodic: not periodic, and with no upper bound on the invocation rate.

 Precedence constraints
– inter-task relationship, described by a precedence graph
– <(T): precedence task set of task T
– i < j: task Ti precedes task Tj
 Resource requirements
– exclusive
– nonexclusive

Characteristics of task assignment/scheduling
– feasible schedule
• a valid schedule by which every task completes by its deadline
– task assignment
• needed in the case of multiple processors
• for a set of processors P, a set of tasks Τ, and time t, the schedule S is a function such that
  S: P × t → Τ
  S(i, t): the task scheduled to run on processor i at time t
– online (dynamic) vs offline (precomputed) scheduling
– static (priorities do not change within a mode) vs dynamic priority algorithms
– preemptive vs nonpreemptive scheduling

 Inter-task interactions
– inter-task communication
• synchronous
• asynchronous
– mutual exclusion problem (synchronization)
• priority inversion
• chained blocking
• deadlock

Assignment / scheduling problems
 Most scheduling problems involving more than two processors must make do with heuristics. The heuristics are motivated by the fact that uniprocessor scheduling problems are tractable.
 Thus, multiprocessor scheduling is divided into two (2) steps:
– 1) assign tasks to processors
– 2) run a uniprocessor scheduling algorithm to schedule the tasks allocated to each processor
 If one or more of the resulting schedules is not feasible, then we must either return to the allocation step and change the allocation, or declare that a schedule cannot be found.

Developing a multiprocessor schedule
[Flowchart: make an allocation → schedule each processor based on the allocation → are all these schedules feasible? If yes, output the schedule; if no, check the stopping criterion and either continue with a changed allocation or stop and declare failure.]

Overview
 Uniprocessor scheduling algorithms
– traditional rate-monotonic (RM)
– rate-monotonic deferred server (DS)
– earliest deadline first (EDF)
– precedence and exclusion conditions
– multiple task versions
– IRIS tasks
• increased reward with increased service
– mode changes

 Multiprocessor scheduling
– utilization balancing algorithm
– next-fit algorithm
– bin-packing algorithm
– myopic offline scheduling algorithm
– focused addressing and bidding algorithm
– assignment with precedence constraints
 Critical sections
 Fault-tolerant scheduling

 Notation
– n : number of tasks in the task set
– ci : execution time of task τi
– Ti : period of periodic task τi
– Ii : phase of periodic task τi
– di : relative deadline of task τi
– Di : absolute deadline of task τi
– ri : release time of task τi
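As a small illustration of this notation (a sketch added here, not part of the original slides), the task parameters can be captured in a Python record; the per-instance formulas assume the usual conventions ri = Ii + (k−1)Ti and Di = ri + di:

from dataclasses import dataclass

@dataclass
class PeriodicTask:
    c: float   # execution time c_i
    T: float   # period T_i
    I: float   # phase I_i
    d: float   # relative deadline d_i

    def release_time(self, k: int) -> float:
        # Release time r_i of the k-th instance (k = 1, 2, ...).
        return self.I + (k - 1) * self.T

    def absolute_deadline(self, k: int) -> float:
        # Absolute deadline D_i of the k-th instance: release time plus relative deadline.
        return self.release_time(k) + self.d

t1 = PeriodicTask(c=2, T=10, I=0, d=10)
print(t1.release_time(3), t1.absolute_deadline(3))   # 20 30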

Commonly Used Approaches
 Weighted round-robin approach
– tasks wait in a FIFO queue
– a task with weight wt gets wt time slices every round
– suitable for scheduling real-time traffic in high-speed switched networks
• a switch downstream can begin to transmit an earlier portion of a message upon its receipt, without having to wait for the arrival of the later portions
– no need for a sorted priority queue → speeds up scheduling
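A minimal Python sketch of the weighted round-robin idea (the task names and weights are illustrative; a real switch would interleave at packet granularity):

from collections import deque

def weighted_round_robin(tasks, rounds):
    # tasks: FIFO queue of (name, weight); each task gets `weight` slices per round.
    queue = deque(tasks)
    schedule = []
    for _ in range(rounds):
        for name, weight in queue:
            schedule.extend([name] * weight)
    return schedule

print(weighted_round_robin([("A", 2), ("B", 1)], rounds=2))
# ['A', 'A', 'B', 'A', 'A', 'B']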

 Priority-driven approach
– never leaves any resource idle intentionally
– greedy scheduling, list scheduling, work-conserving scheduling
– most scheduling algorithms used in non-real-time systems are priority-driven
– preemptive vs. nonpreemptive

 Clock-driven (time-driven) approach
– tasks and their timing constraints are known a priori, except for aperiodic tasks
– relies on hardware timers
– a static schedule
• constructed off-line
• cyclic schedule: periodic static schedule
• clock-driven schedule: cyclic schedule for hard real-time tasks
– foreground/background approach
• foreground: interrupt-driven scheduling
• background: cyclic executive ("big loop")

Foreground/Background Systems
[Figure: a background loop of code blocks, repeatedly interrupted by foreground ISRs (ISR: interrupt service routine)]

A clock-driven scheduler
Input: stored schedule (tk, τ(tk)) for k = 0, 1, …, N-1

Task SCHEDULER:
    set the next decision point i and table entry k to 0;
    set the timer to expire at tk;
    do forever:
        accept timer interrupt;
        if an aperiodic job is executing, preempt the job;
        current task τ = τ(tk);
        increment i by 1;
        compute the next table entry k = i mod N;
        set the timer to expire at floor(i/N) × H + tk
            { floor: floor function, H: hyperperiod, N: number of table entries in H }
        if the current task τ is an idle interval (or idle task),
            let the job at the head of the aperiodic job queue execute;
        else, let the task τ execute;
        sleep;
end SCHEDULER
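A minimal Python simulation sketch of the table-driven idea above (the table, hyperperiod, and aperiodic queue are illustrative assumptions; timer interrupts and preemption of aperiodic jobs are not modelled):

def clock_driven(schedule, H, hyperperiods, aperiodic_queue):
    # schedule: list of (t_k, task) decision points over one hyperperiod H;
    # a task of None marks an idle interval that may serve aperiodic work.
    timeline = []
    for j in range(hyperperiods):
        for k, (t_k, task) in enumerate(schedule):
            end = schedule[k + 1][0] if k + 1 < len(schedule) else H
            if task is None and aperiodic_queue:
                task = aperiodic_queue.pop(0)        # run an aperiodic job in the idle slot
            timeline.append((j * H + t_k, j * H + end, task))
    return timeline

table = [(0, "T1"), (2, "T2"), (5, None), (7, "T3")]
print(clock_driven(table, H=10, hyperperiods=2, aperiodic_queue=["A1"]))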

3.2 Rate Monotonic Analysis
 Assumptions
– A1. No nonpreemptible parts in a task, and negligible preemption cost
– A2. Resource constraint on CPU time only
– A3. No precedence constraints among tasks
– A4. All tasks periodic
– A5. Relative deadline = period

Rate-Monotonic Scheduling (RMS)
 Overview
– rate-monotonic priority
• the higher the rate, the higher the priority
– schedulability guaranteed if the utilization is below a certain limit
– necessary condition for any feasible schedule:
• fi = 1/Ti : frequency (= rate)
• ci (or Ci) : execution time
• Σ (i = 1 to n) ci · fi ≤ 1
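A minimal Python sketch of two standard checks: the feasibility condition above (sum of ci·fi ≤ 1) and the well-known Liu & Layland RM utilization bound n(2^(1/n) − 1); the example task set is illustrative:

def rm_check(tasks):
    # tasks: list of (c_i, T_i) pairs.
    # Returns (utilization, feasible_at_all, passes_RM_bound).
    n = len(tasks)
    U = sum(c / T for c, T in tasks)
    bound = n * (2 ** (1.0 / n) - 1)          # approaches ln 2 ~ 0.693 for large n
    return U, U <= 1.0, U <= bound

print(rm_check([(1, 4), (2, 8), (1, 16)]))    # U = 0.5625: feasible and within the RM bound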

3.3 Other Uniprocessor Scheduling Algorithms
 Period transformation for transient overload
– a modified form of RM scheduling
 Dynamic scheduling
– earliest deadline first scheduling
– least laxity first scheduling
 Scheduling of IRIS tasks
– imprecise computation
 Scheduling of aperiodic tasks
 Mode changes

Period Transformation
 Period transformation for transient overload
– changes the period to cope with transient overloads (in terms of RM scheduling)
– actually, to cope with semantic criticality in RM scheduling
– example
• tasks (Ci+: worst case):
  T1: T1 = 12, C1 = 4, C1+ = 7
  T2: T2 = 22, C2 = 10, C2+ = 14
• utilization rates: average = 0.79, worst case = 1.22
• problem: if T2 is hard real-time and T1 is soft (or not real-time), how can we guarantee T2's deadline in case of transient overload, and T1's deadline in the average case?

(continued)
• solution: boost the priority of T2 by reducing its period
  replace T2 by T2': T2' = T2 / 2, C2' = C2 / 2, C2'+ = C2+ / 2
• an alternative: lower the priority of T1 by lengthening its period
– in this case, double the values of its parameters
– the new deadline must be ok
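A minimal Python sketch of the transformation on the example above (halving the period and execution times leaves T2's utilization unchanged but gives it a shorter period, hence a higher RM priority than T1):

def period_transform(period, c_avg, c_worst, factor=2):
    # Split each invocation into `factor` smaller slices with a proportionally shorter period.
    return period / factor, c_avg / factor, c_worst / factor

T2 = (22, 10, 14)                 # (period, average C, worst-case C+) from the example
print(period_transform(*T2))      # (11.0, 5.0, 7.0): 11 < 12 = T1's period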

Earliest Deadline First (EDF) Scheduling
 Also known as deadline-driven scheduling
 EDF scheduling
– dynamic priority based: the task instance with the earliest absolute deadline has the highest priority
– allows preemptions
 Properties
– EDF is optimal for uniprocessors
– for periodic tasks with relative deadlines equal to their periods: if the total utilization of the task set is no greater than 1, the task set can be feasibly scheduled on a single processor by EDF

– Procedure
1. Initialize the deadline of the kth instance of task Ti to (k-1)Ti + di.
2. Sort the task instances that require execution in the time interval [0, L] in reverse topological order.
3. Revise the deadlines in reverse topological order, if necessary.
4. Select the task with the earliest deadline to execute.
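A minimal Python sketch of the final step, plain preemptive EDF over unit time slots (precedence-based deadline revision is omitted; job parameters are illustrative):

def edf(jobs, horizon):
    # jobs: list of (name, release, execution, absolute_deadline).
    remaining = {name: c for name, r, c, d in jobs}
    timeline = []
    for t in range(horizon):
        ready = [(d, name) for name, r, c, d in jobs if r <= t and remaining[name] > 0]
        if ready:
            _, name = min(ready)          # earliest absolute deadline first
            remaining[name] -= 1
            timeline.append((t, name))
        else:
            timeline.append((t, None))    # processor idle
    return timeline

jobs = [("J1", 0, 3, 7), ("J2", 1, 2, 4), ("J3", 2, 1, 10)]
print(edf(jobs, horizon=8))               # all three jobs finish before their deadlines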

Uniprocessor Scheduling of IRIS Tasks
 Introduction
– Iterative algorithms: not necessary to run to completion.
– Tasks of this type are known as increased reward with increased service (IRIS) tasks.
– reward function R(x), typically:
  R(x) = 0          if x < m
  R(x) = r(x)       if m ≤ x ≤ o + m
  R(x) = r(o + m)   if x > o + m
• where r(x) is monotonically nondecreasing in x
• m, o : execution times of the mandatory and optional parts, respectively
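A minimal Python sketch of this reward function, with an illustrative (assumed) linear r(x):

def reward(x, m, o, r=lambda x: x):
    # m, o: execution times of the mandatory and optional parts; r must be nondecreasing.
    if x < m:
        return 0                 # mandatory part not finished: no reward
    if x <= m + o:
        return r(x)              # reward grows with extra optional service
    return r(m + o)              # no further reward beyond m + o

print([reward(x, m=2, o=3) for x in range(7)])   # [0, 0, 2, 3, 4, 5, 5]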

3.4 Task Assignment
 Assignment of tasks to processors
– uses heuristics: we cannot guarantee that an allocation will be found that permits all tasks to be feasibly scheduled
– considers communication costs and precedence of task completion
– sometimes an allocation algorithm uses communication costs as part of its allocation criterion

Utilization-balancing algorithm
– Objective: to balance processor utilization; proceeds by allocating the tasks one by one, each time selecting the least utilized processors.
– Considers running multiple copies (ri copies of task Ti) for fault-tolerant systems.

for each task Ti, do
    allocate one copy of Ti to each of the ri least utilized processors
    update the processor utilizations to account for the allocation of Ti
end do
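A minimal Python sketch of this allocation loop (task utilizations, copy counts, and processor count are illustrative; a heap keeps the processors ordered by current utilization):

import heapq

def utilization_balance(tasks, n_processors):
    # tasks: list of (name, utilization u_i, number of copies r_i).
    heap = [(0.0, p) for p in range(n_processors)]        # (utilization, processor id)
    heapq.heapify(heap)
    allocation = {p: [] for p in range(n_processors)}
    for name, u, copies in tasks:
        chosen = [heapq.heappop(heap) for _ in range(copies)]   # r_i least utilized processors
        for util, p in chosen:
            allocation[p].append(name)
            heapq.heappush(heap, (util + u, p))            # account for the new copy
    return allocation

print(utilization_balance([("T1", 0.5, 2), ("T2", 0.3, 1), ("T3", 0.4, 1)], n_processors=3))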

A next-fit algorithm for RM scheduling
– used in conjunction with RM scheduling
– separation of allocation and scheduling
• simplifies the scheduler to a local one
• allocation: centralized; scheduler: distributed
– objectives:
• to partition a task set so that each partition is scheduled later for execution on a processor by RM scheduling
• to use as few processors as possible
– task characteristics
• each task has a constant period and deadline constraints
• independent, no precedence constraints

– allocation algorithm (n tasks)
– ui : utilization factor of Ti
– Pi,j : the j-th class-i processor (set of tasks assigned to a processor)
– Nk : number of class-k processors used so far
• tasks are divided into M classes (M > 3) such that
  task Ti ∈ class k   if 2^(1/(k+1)) − 1 < ui ≤ 2^(1/k) − 1, where 1 ≤ k < M
  task Ti ∈ class M   if 0 < ui ≤ 2^(1/M) − 1
• the algorithm assigns k class-k tasks to each class-k processor, keeping the utilization factor of each class-M processor less than ln 2

– Algorithm Next-Fit-M
for k = 1 to M do set Nk = 1; set i = 1
while i <= n do
    if Ti is a task from class-k, 1 <= k < M, then
        assign Ti to Pk,Nk
        if Pk,Nk has currently k tasks assigned to it then set Nk = Nk + 1 endif
    else (Ti is a task from class-M)
        if the total utilization factor of all the tasks assigned to PM,NM is greater than ln2 − ui then
            set NM = NM + 1
        endif
        assign Ti to PM,NM
    endif
    set i = i + 1
endwhile
if Pk,Nk has no task assigned to it then set Nk = Nk − 1
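A minimal Python sketch of the class-assignment rule used by Next-Fit-M (M and the utilization values are illustrative):

def task_class(u, M):
    # Class k (1 <= k < M) if 2^(1/(k+1)) - 1 < u <= 2^(1/k) - 1, else class M.
    for k in range(1, M):
        if 2 ** (1.0 / (k + 1)) - 1 < u <= 2 ** (1.0 / k) - 1:
            return k
    return M                                   # small-utilization tasks fall into class M

M = 4
for u in (0.9, 0.35, 0.2, 0.05):
    print(u, "-> class", task_class(u, M))     # classes 1, 2, 3 and 4 respectively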

Bin-packing assignment algorithm for EDF
– periodic, independent, preemptible tasks
– bin-packing problem: assign tasks such that the sum of the utilization factors on each processor does not exceed 1, and minimize the number of processors needed
– first-fit decreasing algorithm
  (L : a list of the tasks, sorted so that their utilizations are in non-increasing (descending) order; nT : number of tasks; U(k) : total utilization assigned to processor pk)
  Set U(j) = 0 for all j. Initialize i to 1.
  while i <= nT do
      Let j = min{k | U(k) + u(i) <= 1}
      Assign the i-th task in L to pj and set U(j) = U(j) + u(i)
      Set i = i + 1
  end while
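A minimal Python sketch of this first-fit decreasing assignment (the utilization values are illustrative):

def first_fit_decreasing(utilizations):
    # Sort tasks by utilization (non-increasing) and place each on the first
    # processor whose total utilization stays within the EDF bound of 1.
    bins = []                                   # per-processor utilization totals
    assignment = []
    for i, u in sorted(enumerate(utilizations), key=lambda x: -x[1]):
        for j, total in enumerate(bins):
            if total + u <= 1.0:
                bins[j] += u
                assignment.append((i, j))
                break
        else:                                   # no existing processor fits: open a new one
            bins.append(u)
            assignment.append((i, len(bins) - 1))
    return bins, assignment

print(first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.2]))   # two processors suffice here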

Myopic Offline Scheduling (MOS) Algorithm
– Offline algorithm: arrival times, execution times and deadlines are given in advance.
– Non-preemptive tasks.
– Considers not only processor resources but also other resources such as memory, etc.

Schedule Tree
– MOS proceeds by building up a schedule tree.
– Each node represents an assignment and scheduling of a subset of the tasks.
– Each child of a node consists of the schedule of its parent node, extended by one task.
– The root of the schedule tree is an empty schedule.
– A leaf of this tree consists of a schedule of the entire task set.

Myopic Offline Scheduling (MOS) Algorithm
– algorithm
i) start with an empty partial schedule
ii) determine if the current partial schedule is strongly feasible; if so, proceed, else backtrack
iii) extend the current partial schedule by one task

– 2 questions: which one task & when to stop
(1) apply the heuristic function to the first Nk tasks in the task set
(2) choose the task with the smallest heuristic value to extend the current schedule

– Develop a node only if it is strongly feasible. If it is not feasible, we backtrack: we mark that node as hopeless and then go back to its parent.

Focused addressing and bidding (FAB)
– Introduction
• online scheduling in a loosely coupled distributed environment
• both critical and noncritical tasks
• local scheduler: handles (critical) tasks arriving at a given node
• global scheduler: schedules noncritical tasks across processor boundaries, using global state information

FAB cont.
– algorithms for global scheduling: to which node should the task be sent?
• noncooperative algorithm: if there are enough resources for a critical task, the answer is yes; else no for non-critical tasks
• random scheduling algorithm: if a processor's load exceeds its threshold, another processor is chosen randomly
• focused addressing algorithm: an overloaded processor checks its surplus info and selects a processor which it feels is able to process the task within its deadline (problem: the surplus info may be outdated)
• bidding algorithm: lightly loaded nodes are asked simultaneously to bid (Request For Bids)
• flexible algorithm ← focused addressing + bidding

– focused addressing algorithm
• FAS: focused addressing surplus, a tunable parameter
• locally unschedulable tasks are sent to the node with the highest surplus ( > FAS)
• if no such node is found, the task is rejected

– bidding algorithm
• first, select k nodes with sufficient surplus (k: chosen to maximize the chances of finding a node)
• a request-for-bid (RFB) message is sent to these nodes
• those nodes that receive the RFB message
– calculate a bid (= the likelihood that the task can be guaranteed)
– send the bid to the bidder node if the bid > minimum bid required
• the bidder sends the task to the node that offers the best bid
• if no good bid is available, reject the task

– symbols
• pi: a processor node with a newly arriving task that is not locally guaranteed
• ps: a node that is selected by the FA algorithm
• pt: a node that receives an RFB message
– the flexible algorithm (FAB algorithm)
• pi selects k nodes with sufficient surplus
– if the largest value of the surplus > FAS
» the node with that surplus is chosen as the focused node (ps)
» pi sends the task to ps immediately
» also, pi sends in parallel an RFB message to the remaining k-1 nodes; the RFB contains info on ps
• when a node receives the RFB message
– it calculates a bid, and sends the bid to ps if ps exists

(continued)
• when the task reaches ps
– it first invokes the local scheduler and checks the feasibility
– if it succeeds, all the bids for the task will be ignored
– if it fails, ps evaluates the bids, sends the task to the node responding with the highest bid, and sends this info to pi
• in case there is no focused node, pi will handle the bidding
• if ps cannot guarantee the task and if there is no good bid available, then corrective actions follow

[Figure: the original node pi sends the task to the focused node ps over the network and conducts bidding in parallel]

The Buddy Strategy
 Same as FAB in the sense that if a processor is overloaded it will try to offload some tasks to a lightly loaded processor.
 However, it differs in the manner in which it finds the lightly loaded processors:
 Each processor has 3 thresholds of loading:
– U: under (TU), F: full (TF), and V: over (TV)
 If a processor has a transition from F/V to U, it broadcasts an announcement to this effect. This broadcast is not to all processors but to a subset, and this is known as the buddy effect.

3.5 Fault-Tolerant Scheduling
 Introduction
– in case of hardware failure
– systems must have sufficient reserve capacity and a sufficiently fast failure-response mechanism
– multiple processors with a set of periodic tasks
– multiple copies of each version of a task executed in parallel
– the approach taken: ghost copies of tasks
• embedded into the schedule
• need not be identical to the primary copies
• the tasks concerned are those that were to have been run by the failing processor

 Fault-tolerant schedule
– should be able to run one or more copies of each version (or iteration) of a task despite the failure of up to nsust processors
– output of each fault-tolerant processor
• has a ghost schedule + one or more primary schedules
• makes room for ghosts by shifting primary copies
– feasible pair of a ghost schedule and a primary schedule
• if both schedules can be merged/adjusted to be feasible

 Ghosts
– each version of a task must have ghost copies scheduled on nsust distinct processors
– ghosts are conditionally transparent:
• two ghost copies may overlap in the schedule of a processor only if no other processor carries the copies of both tasks (that is, if the primary copies of both tasks are not assigned to the same processor)
• primary copies may overlap the ghosts only if there is sufficient slack time in the schedule to continue to meet all the deadlines

Algorithm FA1
Ha: assignment procedure, Hs: EDF scheduling procedure
1. Run Ha to obtain a candidate allocation of copies to processors.
2. Run Hs for the ghost and primary copies on each processor i.
• if the resulting schedule is found infeasible, return to step 1
• otherwise, record the position of the ghost copies in ghost schedule Gi, and the position of the primary copies in schedule Si
(the primary copies will always be scheduled according to Si, regardless of whether any ghost is activated or not)
Limitation: primary tasks are needlessly delayed when the ghosts do not have to be executed. While all tasks will meet their deadlines, it is frequently best to complete execution of a task early to provide slack time.

Algorithm FA2
1. Run Ha to obtain a candidate allocation of copies to processors.
2. Run Hs for the ghost and primary copies on each processor i.
• if the resulting schedule is found infeasible, return to step 1
• otherwise, record the position of the ghost copies in ghost schedule Gi
3. Assign static priorities to the primary tasks in the order in which they finish executing. Run a static-priority preemptive scheduler for the primary copies with these priorities to obtain Si.

g5. Example ghosts: g4. h2. g6 release time execution time deadline primaries: h1. h3 h1 h2 h3 g4 g5 g6 2 5 3 0 0 9 2 2 4 2 2 2 6 8 15 5 6 12 for the primary copies of g4 and g5 case 1: they are allocated to the same processor case 2: they are on different processors Real-Time Systems 54 .

[Timeline: ghost schedule of p if g4 and g5 cannot overlap — g4, then g5, then g6 (time axis 0 to 15)]
[Timeline: ghost schedule of p if g4 and g5 can overlap — g4 and g5 share a slot, then g6 (time axis 0 to 15)]
[Timeline: feasible primary schedule of p if g4 and g5 can overlap — h1, h3, h2, h3 (time axis 0 to 15)]