# Clock-driven Scheduling

OUTLINE
• Notations and assumptions
• Static, timer-driven scheduler
• Cyclic scheduling
  - Job slices
• Cyclic executive algorithm
• Improving the average response time of aperiodic jobs
  - Slack stealing
• Scheduling sporadic jobs
  - Acceptance test
  - Optimality of the EDF algorithm
• Practical considerations
  - Handling frame overruns
  - Mode changes
  - General workloads
  - Multiprocessor scheduling
• Constructing static schedules
  - Network-flow graph

Ref: [Liu] • Ch. 5 (pg. 85 – 114)


Notations and assumptions
• Notation
  - A periodic task Ti is denoted by (Φi, pi, ei, Di).
  - Example: Ti = (1,10,3,6): jobs Ji,1, Ji,2, … are released at times 1, 11, 21, … with deadlines at 7, 17, … (in the figure, red lines mark a possible execution schedule for the jobs of Ti).
  - The utilization of this task is ui = ei/pi = 3/10 = 0.3.
  - Default values Φi = 0 and Di = pi can be omitted: (0,8,3,8), (8,3,8) and (8,3) indicate the same periodic task.

• Assumptions
  - n periodic tasks; n is fixed in each mode of operation;
  - each job Ji,j in Ti is released pi units of time after the previous job Ji,j-1;
  - Ji,k is ready for execution at its release time ri,k;
  - aperiodic jobs may be released at unexpected times;
  - no sporadic jobs (for the moment).

Static scheduling
• When the parameters of jobs with hard deadlines are known in advance, a static schedule (containing all scheduling decision times) can be constructed off-line.
• Example with the following 4 tasks:
  - T1 = (4,1); T2 = (5,1.8); T3 = (20,1); T4 = (20,2);
  - Φ1 = 0, D1 = 4; Φ2 = 0, D2 = 5; Φ3 = 0, D3 = 20; Φ4 = 0, D4 = 20.
• Let's verify their utilization factors (ui = ei/pi):
  - u1 = 0.25; u2 = 0.36; u3 = 0.05; u4 = 0.1 ⇒ U = 0.76;
  - the 4 tasks don't require more processor time than is available.
• The hyperperiod is defined as the lcm of all periods: H = 20.
  - The entire schedule consists of replicated segments of length 20.
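The hyperperiod and utilization computations above can be sketched in a few lines of Python (a sketch; the function names are illustrative):

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    """Hyperperiod H = lcm of all task periods; the schedule repeats every H."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def total_utilization(tasks):
    """tasks: list of (p_i, e_i) pairs; U = sum of e_i / p_i."""
    return sum(e / p for p, e in tasks)

tasks = [(4, 1), (5, 1.8), (20, 1), (20, 2)]
print(hyperperiod([p for p, _ in tasks]))   # → 20
print(total_utilization(tasks))             # ≈ 0.76: fits on one processor
```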

Static schedules
- T1 = (4,1); T2 = (5,1.8); T3 = (20,1); T4 = (20,2); hyperperiod H = 20.
- Different static schedules are possible:

[Figure: three alternative static schedules of the four tasks over one hyperperiod (0–20), each placing the jobs of T1, T2, T3 and T4 on the timeline in a different order.]

• Processor time not used by periodic tasks (4.8 time units in H) may be used for aperiodic jobs:
  - The OS maintains a queue of aperiodic jobs: the aperiodic job at the head of the queue executes when the processor is idle (in the last example, during the time intervals [3.8,4], [5,6], [10.8,12], [14.8,16], [19.8,20]).

A clock-driven scheduler
• Input: the stored schedule (tk, T(tk)) for k = 0, 1, …, N−1, kept in a TABLE; tk is a decision time (decision times need not be periodic) and T(tk) is the name of the task to execute from tk on, or I (idle).

  Task SCHEDULER:
    set the next decision point i and table entry k to 0;
    set the timer to expire at tk;
    do forever:
      accept timer interrupt;
      if an aperiodic job is executing, preempt the job;
      current task T = T(tk);
      increment i by 1;
      compute the next table entry k = i mod N;
      set the timer to expire at ⌊i/N⌋H + tk;
      if the current task T is I,
        let the job at the head of the aperiodic job queue execute;
      else,
        let the task T execute;
      sleep;
  end SCHEDULER

Cyclic scheduling: frame size
• Decision points occur at regular intervals (frames);
• within a frame the processor may be idle, to accommodate aperiodic jobs;
• the first job of every task is released at the beginning of some frame.
• How to determine the frame size f ? The following 3 constraints should be satisfied:

1. f ≥ max(ei) (for 1 ≤ i ≤ n, with n tasks)
   • each job may start and complete within one frame: no job is preempted.
2. ⌊pi/f⌋ − pi/f = 0 (for at least one i)
   • to keep the cyclic schedule short, f must divide the hyperperiod H; this is true if f divides at least one pi.
3. 2f − gcd(pi, f) ≤ Di (for 1 ≤ i ≤ n)
   • to have at least one whole frame between the release time and the deadline of every job, so that the job can be feasibly scheduled in that frame (see the next 2 slides for a demonstration of this third constraint).
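For integer periods, the three constraints can be checked mechanically. A minimal sketch (the function name is illustrative):

```python
from math import gcd

def candidate_frame_sizes(tasks):
    """tasks: list of (p_i, e_i, D_i) with integer periods.
    Return every integer frame size f that satisfies:
      1. f >= max(e_i)                (no job is preempted within a frame)
      2. f divides at least one p_i   (so f divides the hyperperiod H)
      3. 2f - gcd(p_i, f) <= D_i for every task
    """
    periods = [p for p, _, _ in tasks]
    e_max = max(e for _, e, _ in tasks)
    sizes = []
    for f in range(1, max(periods) + 1):
        if f < e_max:
            continue                                         # violates 1
        if all(p % f != 0 for p in periods):
            continue                                         # violates 2
        if any(2 * f - gcd(p, f) > D for p, _, D in tasks):
            continue                                         # violates 3
        sizes.append(f)
    return sizes

# T1=(4,1), T2=(5,1.8), T3=(20,1), T4=(20,2): only f = 2 works
print(candidate_frame_sizes([(4, 1, 4), (5, 1.8, 5), (20, 1, 20), (20, 2, 20)]))
# → [2]
```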

Demonstration of 3rd constraint
Let t = beginning of the k-th frame, and t′ = release time, within the k-th frame, of a job of task Ti with period pi and relative deadline Di (so t′ − t < f ).

• To have at least one complete frame between the release time t′ and the deadline t′ + Di of the job:
  - in the special case t′ = t, it suffices to have f ≤ Di;
  - in the general case t < t′ < t + f, the job's deadline t′ + Di should occur no earlier than t + 2f; that is: 2f − Di ≤ t′ − t (equivalently f ≤ Di/2 + (t′ − t)/2);
  - if this last constraint is satisfied then, since t′ − t < f, it is also f ≤ Di; so the condition for the special case t′ = t is satisfied as well;
• we may then consider t′ > t and choose f such that 2f − Di ≤ t′ − t (for 1 ≤ i ≤ n).

Demonstration of 3rd constraint (2)
Let t = beginning of the k-th frame, and t′ = release time, within the k-th frame, of a job of task Ti with period pi and relative deadline Di (t′ − t < f ).

• We consider t′ > t and choose f such that 2f − Di ≤ t′ − t (for 1 ≤ i ≤ n):
  - t, t′, f, pi, Di are all integers;
  - the first job of every task is released at the beginning of some frame (t′ = t for some frame); if we set time = 0 at the beginning of this frame, then:
    - the beginning of the k-th frame is at t = kf (k integer ≥ 0);
    - the release time of the job in the k-th frame is at t′ = jpi (j integer ≥ 0);
  - if G = gcd(pi, f ), then we can write f = mG and pi = nG (m and n integers > 0), and also t′ − t = jnG − kmG = hG (h integer ≥ 0; h > 0 if t′ ≠ t );
  - since t′ − t ≥ G, the constraint t′ − t ≥ 2f − Di is met if G ≥ 2f − Di (i.e. if 2f − G ≤ Di ).
• As a result, there is at least one frame between the release time and the deadline of every job if f satisfies the following constraint: 2f − gcd(pi, f ) ≤ Di (for 1 ≤ i ≤ n).

Cyclic scheduling: example 1
• Example: T1 = (4,1); T2 = (5,1.8); T3 = (20,1); T4 = (20,2).
• The hyperperiod is H = 20.
• The total utilization of the 4 tasks is U = (1/4) + (1.8/5) + (1/20) + (2/20) = 0.76.
• The choice of f should meet the 3 constraints:
  1. f ≥ max(ei) ⇒ f ≥ 2;
  2. ⌊pi/f⌋ − pi/f = 0 (for at least one i) ⇒ f ∈ {2, 4, 5, 10, 20};
  3. 2f − gcd(pi, f) ≤ Di (for 1 ≤ i ≤ n); let's try the 5 candidates found so far:
     - f = 2: T1: 4 − 2 ≤ 4 (OK); T2: 4 − 1 ≤ 5 (OK); T3, T4: 4 − 2 ≤ 20 (OK) ⇒ f = 2 is OK
     - f = 4: T1: 8 − 4 ≤ 4 (OK); T2: 8 − 1 ≤ 5 (NO)
     - f = 5: T1: 10 − 1 ≤ 4 (NO)
     - f = 10: T1: 20 − 2 ≤ 4 (NO)
     - f = 20: T1: 40 − 4 ≤ 4 (NO)
• The only choice that meets all 3 constraints is f = 2.

This is not the only solution: jobs could be fit into frames in other ways.


Cyclic scheduling: example 2
• Example: T1 = (15,1,14); T2 = (20,2,26); T3 = (22,3); H = 660; U ≈ 0.303.
• The choice of f should meet the 3 constraints:
  1. f ≥ max(ei) ⇒ f ≥ 3;
  2. ⌊pi/f⌋ − pi/f = 0 (for at least one i) ⇒ f ∈ {3, 4, 5, 10, 11, 15, 20, 22};
  3. 2f − gcd(pi, f) ≤ Di (for 1 ≤ i ≤ n); let's try the 8 candidates found so far:
     - f = 3: T1: 6 − 3 ≤ 14 (OK); T2: 6 − 1 ≤ 26 (OK); T3: 6 − 1 ≤ 22 (OK) ⇒ f = 3 is OK
     - f = 4: T1: 8 − 1 ≤ 14 (OK); T2: 8 − 4 ≤ 26 (OK); T3: 8 − 2 ≤ 22 (OK) ⇒ f = 4 is OK
     - f = 5: T1: 10 − 5 ≤ 14 (OK); T2: 10 − 5 ≤ 26 (OK); T3: 10 − 1 ≤ 22 (OK) ⇒ f = 5 is OK
     - f = 10: T1: 20 − 5 ≤ 14 (NO)
     - f = 11: T1: 22 − 1 ≤ 14 (NO)
     - f = 15: T1: 30 − 15 ≤ 14 (NO)
     - f = 20: T1: 40 − 5 ≤ 14 (NO)
     - f = 22: T1: 44 − 1 ≤ 14 (NO)
• The 3 constraints are met for f = 3, f = 4 and f = 5; the most convenient choice is the largest, f = 5.

Cyclic scheduling: problem
• Frame size constraints 1, 2, 3 sometimes cannot all be met.
• Example: T = {T1 = (4,1), T2 = (5,2,7), T3 = (20,5)} (H = 20; U = 0.9):
  1. f ≥ max(ei) (for 1 ≤ i ≤ n) ⇒ f ≥ 5;
  2. ⌊pi/f⌋ − pi/f = 0 (for at least one i) ⇒ f ∈ {2, 4, 5, 10, 20};
  3. 2f − gcd(pi, f) ≤ Di (for 1 ≤ i ≤ n).
• Let's apply constraint 3 to T1: 2f − gcd(4, f) ≤ 4; since gcd(4, f) ≤ 4, we have 2f − 4 ≤ 2f − gcd(4, f) ≤ 4, so 2f − 4 ≤ 4 ⇒ f ≤ 4.
• It is not possible to satisfy both constraints 1 and 3.

Cyclic scheduling: job slices
• When frame size constraints 1, 2, 3 cannot all be met, a possible way to find a frame size with the desired properties is to slice large jobs into sub-jobs.
• Example: T = {T1 = (4,1), T2 = (5,2,7), T3 = (20,5)} (constraint 1: f ≥ 5; constraint 3: f ≤ 4):
  - we can divide T3 into 3 slices: T1 = (4,1), T2 = (5,2,7), T3,1 = (20,1), T3,2 = (20,3), T3,3 = (20,1);
  - constraint 1 now gives f ≥ 3; constraint 2 gives f ∈ {4, 5, 10, 20};
  - constraint 3, trying f = 4: T1: 8 − 4 ≤ 4 OK; T2: 8 − 1 ≤ 7 OK; T3: 8 − 4 ≤ 20 OK ⇒ f = 4.

• Why was T3 not divided into just 2 slices?
  - T1 uses 1 time unit in each of the 5 frames;
  - T2 uses 2 time units in 4 of the 5 frames;
  - this leaves 3 time units available in 1 frame and just 1 in each of the other 4, so no slice of T3 can be larger than 3.
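The effect of slicing can be verified against the three constraints; a small sketch, assuming the slices inherit the period and deadline of T3:

```python
from math import gcd

# sliced set: T1=(4,1), T2=(5,2,7), T3=(20,5) cut into (20,1), (20,3), (20,1)
tasks = [(4, 1, 4), (5, 2, 7), (20, 1, 20), (20, 3, 20), (20, 1, 20)]
periods = [4, 5, 20]

ok = []
for f in range(max(e for _, e, _ in tasks), 21):             # constraint 1: f >= 3
    divides = any(p % f == 0 for p in periods)               # constraint 2
    fits = all(2 * f - gcd(p, f) <= D for p, _, D in tasks)  # constraint 3
    if divides and fits:
        ok.append(f)
print(ok)   # → [4]: slicing makes f = 4 satisfy all three constraints
```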

Constructing a cyclic schedule
• Design steps and decisions in the process of constructing a cyclic schedule:
  1. determine the hyperperiod H;
  2. determine the total utilization U (if U > 1 the schedule is infeasible);
  3. choose a frame size that meets the constraints;
  4. partition jobs into slices, if necessary;
  5. place the slices in the frames.


A table-driven cyclic executive
• The precomputed cyclic schedule is stored in a table.
• The table has F entries L(k) (k = 0, …, F−1), where F is the number of frames in a hyperperiod.
• Each entry L(k) lists the job slices that are scheduled in frame k; L(k) is called a scheduling block.

(see next slide)


• Assumption: the job slices in each scheduling block L(k) correspond to precisely defined code segments, such as procedures (job slicing has been performed by reviewing the jobs' design, in order to meet the frame size constraints);
• the periodic-task server then executes all the job slices in L(k) by calling them one after the other (no context switches take place).

• In a system where periodic job slices never overrun (no frame overruns), the periodic-task server is not needed: job slices may be executed directly by the cyclic executive.

Response time of aperiodic jobs
SLACK STEALING
• There is no advantage to completing a job with a hard deadline early;
• minimizing the response time of each aperiodic job is an important design goal;
• a natural way to improve the response times of aperiodic jobs is to execute them ahead of the periodic jobs whenever possible: this approach is called slack stealing.
• Cyclic executive operation:
  - as long as there is slack, the cyclic executive returns to examine the aperiodic job queue after each slice completes;
  - if the queue is empty, it goes on to the next slice;
  - if the queue is not empty, it executes an aperiodic job for y units.

• Let xk be the total amount of time allocated to all the slices scheduled in frame k;
• the slack time initially available in the frame is: initial_slack = f − xk;
• after y units of slack have been used by aperiodic jobs, the remaining slack is f − xk − y;
• the cyclic executive can let aperiodic jobs execute as long as there is slack left in the frame (as long as f − xk − y > 0).
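The slack bookkeeping described above might be sketched as follows (class and method names are hypothetical; a real cyclic executive would drive this from an interval timer):

```python
class FrameSlack:
    """Per-frame slack accounting for slack stealing (illustrative names)."""
    def __init__(self, f, x_k):
        # initial slack = frame size minus time allocated to the frame's slices
        self.slack = f - x_k

    def steal(self, y):
        """Grant up to y time units to aperiodic jobs; return the time granted."""
        granted = min(y, self.slack)
        self.slack -= granted
        return granted

frame = FrameSlack(f=4, x_k=3)      # one unit of slack in this frame
print(frame.steal(0.5))             # → 0.5 granted
print(frame.steal(2))               # → 0.5: only the remaining slack is granted
```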

Slack stealing: example
Consider the first major cycle of the cyclic schedule, with 3 aperiodic jobs A1, A2, A3 released during it.

With no slack stealing:
- job A1: released at t = 4, starts at t = 7, completes at t = 10.5; response time = 6.5
- job A2: released at t = 9.5, starts at t = 10.5, completes at t = 11; response time = 1.5
- job A3: released at t = 10.5, starts at t = 11, completes at t = 16; response time = 5.5
- average response time = 4.5

With slack stealing:
- job A1: released at t = 4, starts at t = 4, completes at t = 8.5; response time = 4.5
- job A2: released at t = 9.5, starts at t = 9.5, completes at t = 10; response time = 0.5
- job A3: released at t = 10.5, starts at t = 11, completes at t = 13; response time = 2.5
- average response time = 2.5

Slack stealing: implementation
• Whenever the slack is > 0 and there is an aperiodic job ready, the scheduler lets the aperiodic job execute;
• the initial slack in each frame is pre-computed and stored in the scheduling table;
• to keep track of the slack and update it while aperiodic jobs execute, the cyclic scheduler uses an interval timer:
  - when a frame starts, the timer is set to the frame's initial slack value,
  - while an aperiodic job is executing, the timer counts down,
  - when the timer expires (no more slack), the scheduler preempts the aperiodic job and executes the next job slice of the current frame;
• most operating systems do not offer interval timers with ticks finer than 1 ms, so the method is practical only when the temporal parameters of the workload are on the order of 10⁻¹ s or larger.

Scheduling sporadic jobs

• Sporadic jobs have hard deadlines;
• their minimum release times and maximum execution times are unknown a priori: it is impossible to guarantee a priori that sporadic jobs complete in time.
• Assumptions:
  - the maximum execution time becomes known upon release;
  - all sporadic jobs are preemptable.
• Before scheduling a sporadic job the scheduler performs an acceptance test:
  - sporadic jobs released during a frame are tested when the next frame starts: a newly released sporadic job is accepted if there is a sufficient amount of time in the frames before its deadline to complete it without causing any job in the system to complete too late;
  - a sporadic job is denoted S(d, e): d = deadline, e = maximum execution time;
  - the current total slack time σc(t, l) in the frames t, …, l before d must be ≥ e.


Scheduling of accepted jobs
[Figure: static scheduling of four sporadic jobs S1, …, S4 with deadlines d1, …, d4; S1 and S4 are rejected by the acceptance test.]

• What if S2 had been scheduled to use up the slack available in the time intervals (10,12), (15,16) and (19,20)?
  - S2 would complete earlier (at t = 20),
  - but S3 would have been rejected!
• Problem: early commit.


Scheduling of accepted jobs
• Static scheduling of sporadic jobs:
  - schedule as large a slice as possible of the accepted sporadic job in the current frame;
  - schedule the remaining portions as late as possible;
  - append the slices of the accepted job to the list of periodic-task slices in the frames where they are scheduled.
• Problem (early commit):
  - leaving unused slack in frames may prevent the acceptance of sporadic jobs that are released later; in the previous example: what if S3 were released at t = 14 (with the same e3 = 1.5 and D3 = 11, so d3 = 25), and S4 at t = 17 (with e4 = 5 and D4 = 30, so d4 = 47)? Both would be rejected.
• Alternative 1: rescheduling upon arrival:
  - more sporadic jobs may be accepted (S1 in the previous example);
  - however, an unpredictable number of context switches and acceptance tests may cause periodic jobs to complete late.
• Alternative 2: dynamic scheduling of the accepted sporadic jobs: EDF.

EDF Scheduling of Accepted Jobs
[Figure: sporadic jobs from tasks T1, T2, T3, …, TN pass an acceptance test; rejected jobs are dropped, accepted jobs enter an EDF priority queue served ahead of the aperiodic job queue by the processor.]

EDF scheduling of accepted jobs
[Figure: the same four sporadic jobs S1, …, S4 scheduled with the cyclic EDF algorithm; S1 and S4 are rejected.]

• The EDF algorithm is a good way to schedule accepted sporadic jobs; acceptance tests are done at the beginning of each frame (frame size f = 4 in the example);
• at the beginning of the frame, the scheduler inserts the accepted sporadic jobs into a queue in non-decreasing order of their deadlines;
• whenever all the slices of periodic tasks scheduled in the frame are completed, the cyclic executive lets the sporadic jobs execute in EDF order.
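A deadline-ordered queue of accepted sporadic jobs can be kept with a binary heap; a sketch using Python's heapq (the job tuples are illustrative):

```python
import heapq

# Accepted sporadic jobs in an EDF priority queue, ordered by deadline.
# Job tuples are (deadline, name, remaining_time).
edf_queue = []
for job in [(29, "S2", 4.0), (22, "S3", 1.5)]:
    heapq.heappush(edf_queue, job)

# once the frame's periodic slices are done, jobs come out in
# non-decreasing deadline order
order = [heapq.heappop(edf_queue)[1] for _ in range(len(edf_queue))]
print(order)   # → ['S3', 'S2']
```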


Acceptance test
• The acceptance test at the beginning of frame t for a sporadic job S(d, e) (with deadline d and execution time e) is performed in 2 steps:
  1. compute σ(t, l), the total slack time available in frames t to l before d; if σ(t, l) < e, then S is rejected;
  2. else, if the acceptance of S will not cause any sporadic job already accepted to complete late, then S is accepted; otherwise S is rejected.
• Sk(dk, ek) ∈ {S0, S1, …, Sns} denotes the sporadic jobs already accepted.
• For the first step of the acceptance test, the scheduler needs σ(i, h), the total (initial) amount of slack time in frames i to h, for i, h = 1, …, F:
  - all the values σ(i, h) left over by the periodic jobs from any frame i to any frame h (i ≤ h) in the first (and in any) major cycle can be pre-computed and stored in a slack table with (F + 1)·F/2 elements;
  - from the σ(i, h) of the first major cycle, the initial slack from frame i of any major cycle j to frame h of any later major cycle j′ (> j) is computed when required as: σ((j − 1)F + i, (j′ − 1)F + h) = σ(i, F) + σ(1, h) + (j′ − j − 1)σ(1, F) (see the example in the next slide).

Acceptance test (2)
Example (F = 5):
• Values stored in the slack table ((5+1)·5/2 = 15 values), σ(i, h) with row index i and column index h:

| i \ h | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 | 0.5 | 1.5 | 3.5 | 4.5 | 5.5 |
| 2 | | 1 | 3 | 4 | 5 |
| 3 | | | 2 | 3 | 4 |
| 4 | | | | 1 | 2 |
| 5 | | | | | 1 |

• Initial slack from the i-th frame of the j-th major cycle to the h-th frame of the j′-th major cycle: σ((j − 1)F + i, (j′ − 1)F + h) = σ(i, F) + σ(1, h) + (j′ − j − 1)σ(1, F).
• To compute σ(5, 11):
  - frame 5 is in major cycle j = 1 (i = 5);
  - frame 11 is in major cycle j′ = 3 (h = 1);
  - σ(5, 11) = σ(5, 5) + σ(1, 1) + (3 − 1 − 1)·σ(1, 5) = 1 + 0.5 + 1·5.5 = 7.
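The table lookup and the major-cycle formula can be sketched together (slack values taken from the example; the helper name is illustrative):

```python
F = 5                      # frames per major cycle, as in the example
# sigma[(i, h)]: initial slack in frames i..h of the first major cycle
sigma = {(1, 1): 0.5, (1, 2): 1.5, (1, 3): 3.5, (1, 4): 4.5, (1, 5): 5.5,
         (2, 2): 1, (2, 3): 3, (2, 4): 4, (2, 5): 5,
         (3, 3): 2, (3, 4): 3, (3, 5): 4,
         (4, 4): 1, (4, 5): 2,
         (5, 5): 1}

def slack(a, b):
    """Initial slack from global frame a to global frame b, using
    sigma((j-1)F+i, (j'-1)F+h) = sigma(i,F) + sigma(1,h) + (j'-j-1)*sigma(1,F).
    j and jp below are zero-based major-cycle indices, which leaves the
    (j' - j - 1) factor unchanged."""
    j, i = divmod(a - 1, F)
    jp, h = divmod(b - 1, F)
    i, h = i + 1, h + 1
    if j == jp:
        return sigma[(i, h)]
    return sigma[(i, F)] + sigma[(1, h)] + (jp - j - 1) * sigma[(1, F)]

print(slack(5, 11))   # → 7.0, as computed in the example
```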


Acceptance test (3)
• For the second step of the acceptance test of a new sporadic job S(d, e), the scheduler needs to compute, for every sporadic job Sk already accepted:
  - ξk, the execution time of the portion of Sk that has been completed at the beginning of the current frame t;
  - the current slack σc(t, l) available for S in frames t to l (before d):

    σc(t, l) = σ(t, l) − Σ_{dk ≤ d} (ek − ξk)

    (σ(t, l) is the initial slack left over by the periodic jobs in frames t to l; the sum removes the still-unfinished work of every accepted sporadic job due no later than d).
• If σc(t, l) ≥ e, the acceptance of S(d, e) is compatible with all the jobs Sk with deadline dk ≤ d already accepted; otherwise S(d, e) is rejected.
• If S(d, e) is accepted, its initial slack σ before its deadline d is σ = σc(t, l) − e (σ ≥ 0); this value of σ is stored for later use.
• When a sporadic job Sk executes in a frame, the scheduler updates ξk at the end of the frame.
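As a sketch, the current-slack formula σc(t, l) = σ(t, l) − Σ_{dk ≤ d}(ek − ξk) translates directly (names are illustrative; the numbers below are from the worked example later in the section):

```python
def current_slack(sigma_tl, accepted, d):
    """sigma_tl: initial slack left by the periodic jobs in frames t..l;
    accepted: (d_k, e_k, xi_k) for every sporadic job already accepted;
    subtract the unfinished work of each accepted job due no later than d."""
    return sigma_tl - sum(e_k - xi_k for d_k, e_k, xi_k in accepted if d_k <= d)

# time-16 test of S4(44, 5.0): sigma(5,11) = 7, with S2(29,4), xi2 = 2
# and S3(22,1.5), xi3 = 1 already accepted
print(current_slack(7, [(29, 4, 2), (22, 1.5, 1)], 44))   # → 4.5 < 5.0: reject S4
```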


Acceptance test (4)
• For the second step of the acceptance test, the scheduler also needs to evaluate whether the acceptance of the new sporadic job S(d, e) (already found compatible with the accepted jobs Sk having deadline dk ≤ d) would cause any accepted sporadic job Sl with deadline dl > d to miss its deadline:
  - if S(d, e) is accepted, the slack σl of each such job Sl is reduced by the execution time e of S;
  - S(d, e) is accepted only if the reduced slack σl is not less than zero for every accepted Sl with deadline dl > d.
• In summary, the data maintained by the scheduler to perform acceptance tests are:
  - the precomputed slack table;
  - the execution time ξk of the completed portion of every sporadic job Sk in the system, as of the beginning of the current frame t;
  - the current slack σk of every sporadic job Sk in the system.

Acceptance test (5)
CYCLIC EDF ALGORITHM — acceptance tests are performed at the beginning of frames (f = 4, slack table as in the previous example):

• Time 4: test of S1(17, 4.5):
  - σ(2, 4) = 4 < 4.5 ⇒ reject S1.
• Time 8: test of S2(29, 4):
  - σc(3, 7) = σ(3, 5) + σ(1, 2) = 4 + 1.5 = 5.5 > 4 ⇒ accept S2; σ2 = 5.5 − 4 = 1.5.
• Time 12: test of S3(22, 1.5):
  - d3 = 22 < d2 = 29 ⇒ σc(4, 5) = σ(4, 5) − 0 = 2 > 1.5 ⇒ first step OK;
  - second step: σ2 = 1.5 − 1.5 ≥ 0 ⇒ accept S3; σ3 = 2 − 1.5 = 0.5; σ2 = 0.
• Time 16: test of S4(44, 5.0):
  - d3 < d2 < d4; ξ2 = 2; ξ3 = 1; e2 − ξ2 = 4 − 2 = 2; e3 − ξ3 = 1.5 − 1 = 0.5;
  - σc(5, 11) = 7 − 2 − 0.5 = 4.5 < 5.0 ⇒ reject S4.


Optimality of cyclic EDF algorithm
• When compared with the class of algorithms that perform acceptance tests at the beginning of frames, the cyclic EDF algorithm is optimal, as long as the set of sporadic jobs is schedulable.
• Cyclic EDF is not optimal when compared with algorithms that perform acceptance tests at arbitrary times: if tests were performed upon the release of each sporadic job, the results would be better (e.g. S1(17, 4.5) of the previous example would be accepted);
  - however, interrupt-driven acceptance tests would increase the risk that periodic job slices complete late (delayed by an unpredictable number of context switches and acceptance tests during their execution);
  - for this reason it is better to use cyclic EDF and perform acceptance tests only at the beginning of each frame.
• When prior knowledge of future jobs' parameters is not available at decision times, it is not always possible for the scheduler to make an optimal decision: e.g. in the previous slide, rejecting S3 and accepting S4 would have raised the "value" of the schedule (the total execution time of all accepted jobs) from 5.5 to 9; but what if S4 had never been released, or had had an execution time < 1.5?

Handling Frame Overruns
• Frame overrun: a job slice scheduled in a frame has not completed execution when the frame ends.
• Causes of frame overruns:
  - the execution time of a job is input-data dependent,
  - a transient hardware fault,
  - a software flaw undetected during debugging.
• Methods to handle an overrunning job at the end of the frame:
  - abort the job ⇒ generate a fault,
  - preempt the job ⇒ treat the remaining part as an aperiodic job,
  - continue the execution ⇒ delay future frames;
  - the choice of method depends on the application.


Mode changes
• During the mode change the system is reconfigured: - new periodic tasks are created; - some old periodic tasks stay, some are deleted; - pre-computed schedule table for the new mode is brought into memory; - the code of new tasks is brought into memory; - memory space for data is allocated; - execution can proceed. • The work to reconfigure the system is a mode-change job; its deadline can be: - soft : aperiodic mode change job; - hard: sporadic mode change job.

Aperiodic mode change
• During the mode change the cyclic scheduler keeps using the old table;
• the mode-change job (or the scheduler) marks the tasks to be deleted;
• before executing a periodic job the scheduler checks whether its task is marked; in that case the scheduler returns immediately: the time allocated to the deleted tasks can be used to execute the mode-change job, which will:
  - create the new periodic tasks,
  - bring into memory the code and data of the new tasks,
  - bring into memory the new schedule table.
• What to do with (other) aperiodic and sporadic jobs during the mode change?
  - all aperiodic jobs may be delayed until after the mode change;
  - for sporadic jobs that have already been accepted it is reasonable to:
    • delay the switchover to the new mode, or
    • repeat the acceptance test against the new schedule: if the sporadic job passes the test, execute it; if the test fails, decide whether to delay the mode change, or to let the sporadic job complete late.


Sporadic mode change

• A sporadic mode-change job has to be completed by a hard deadline.
• Possible approaches:
  1. treat the mode-change job like any other sporadic job (possible rejection of the mode-change job must be acceptable to the application and properly handled);
  2. in case the mode-change job is rejected, the scheduler can:
     - delay the mode change, or
     - take some alternate action (if possible) and treat the mode-change job like an aperiodic job;
  3. if the mode-change job cannot be rejected, it may be scheduled as a periodic task, with period ≤ (maximum allowed response time)/2:
     - processor time is made available for the job in each period;
     - most of the time the mode-change job will not use its scheduled processor time (which may then be used by sporadic and aperiodic jobs);
     - a significant amount of processor bandwidth may be wasted (the capacity to handle periodic tasks is reduced).

Mode changes

[Figure: the mode changer scheduled as a periodic Mode Change task.]

Mode changer
task MODE_CHANGER (oldMode, newMode):
  fetch the deleteList of periodic tasks to be deleted;
  mark each periodic task in the deleteList;
  inform the cyclic executive that a mode change has commenced;
  fetch the newTaskList of periodic tasks to be executed in newMode;
  allocate memory space for each task in newTaskList and create the tasks;
  fetch the newSchedule;
  perform an acceptance test on each sporadic job in the system according to the newSchedule;
  if every sporadic job in the system can complete on time based on the newSchedule,
    inform the cyclic executive to use the newSchedule;
  else,
    compute the latestCompletionTime of all sporadic jobs in the system;
    inform the cyclic executive to use the newSchedule at max(latestCompletionTime, thresholdTime);
end MODE_CHANGER

General workloads

• The clock-driven approach is applicable also to other types of workload:
  - jobs not executed on a CPU (e.g. a bus arbitrator),
  - jobs not characterized by the periodic task model.
• Whenever job parameters are known a priori, a static schedule can be computed off-line (taking into account also other types of constraints besides release times and deadlines, such as precedence and resource contention).
• The static schedule can be stored as a table and used by the same static scheduler used for periodic tasks.


Multiprocessor scheduling

• Construct (off-line) a global schedule that specifies on which processor and at what time each job executes;
  - a global clock is required.
• Sometimes a precomputed multiprocessor schedule can be derived from a precomputed uniprocessor schedule:
  - when the system bus is the bottleneck, a feasible schedule for the data-transfer activities on the bus gives a feasible schedule for the related jobs on the corresponding processors;
  - if clock drift among the processors is small, uniprocessor schedulers may be used.
• In general, searching for a feasible multiprocessor schedule is more complex than in the uniprocessor case.

Constructing static schedules
• The general problem of choosing a frame length for a given set of periodic tasks, segmenting the tasks if necessary, and scheduling the tasks so that they meet all their deadlines is NP-hard.
• In the special case of independent, preemptable tasks, a polynomial-time solution is based on the iterative network-flow (INF) algorithm.
• A system of independent, preemptable periodic tasks whose relative deadlines are not less than their respective periods is schedulable iff the total utilization of the tasks is ≤ 1;
  - if some relative deadlines are shorter than the periods, a feasible schedule may not exist even when U ≤ 1.

INF algorithm
• The INF algorithm is performed in 2 steps:
  - step 1: find all the possible frame sizes of the system, i.e. those that meet frame-size constraints 2 and 3 above, but not necessarily constraint 1;
  - step 2: apply the INF algorithm, starting with the largest possible frame size.
• Example (the task set used earlier for job slicing):
  - T1 = (4,1); T2 = (5,2,7); T3 = (20,5);
  - frame sizes 2 and 4 meet constraints 2 and 3, but not 1.
• The INF algorithm iteratively tries to find a feasible cyclic schedule of the system for one possible frame size at a time, starting with the largest value.

Network-flow graph
• The algorithm used in each iteration is based on a network-flow formulation of the preemptive scheduling problem. In this formulation:
  - we ignore the tasks to which the jobs belong;
  - we name the jobs in a major cycle of F frames J1, J2, …, JN;
  - the constraints are represented by a graph.
• The graph contains the following vertices and edges:
  - a job vertex Ji, i = 1, …, N;
  - a frame vertex j, j = 1, …, F;
  - 2 additional vertices: source and sink;
  - an edge (Ji, j) from job vertex Ji to frame vertex j if job Ji can be scheduled in frame j; edge capacity = f (the amount of time available in frame j);
  - an edge from the source to every job vertex Ji; edge capacity = ei (the amount of time required by job Ji);
  - an edge from every frame vertex to the sink; edge capacity = f (the amount of time available in a frame).
• Edges are labeled with 2 numbers, (c), f: c = capacity, f = flow.


Network-flow graph (2)
• The flow of an edge (Ji, j) gives the amount of time in frame j allocated to job Ji.
• The flow of an edge is a non-negative number that satisfies the following constraints:
  - it is ≤ the edge capacity;
  - at every vertex other than the source and the sink, Σ(flows in) = Σ(flows out).
• The flow of a network-flow graph is Σ(all flows into the sink).
• Problem: find the maximum flow of the network-flow graph; the time complexity of the algorithm is O((N + F)³).
• The maximum flow is ≤ Σ ei; if the flows of the job→frame edges give a maximum flow equal to Σ ei, then they represent a feasible preemptive schedule.


Network-flow graph example 1
• Example:
  - T1 = (4,1); T2 = (5,2,7); T3 = (20,5); H = 20; U = 0.9;
  - frame sizes 2 and 4 meet constraints 2 and 3 (but not 1).
• Try frame size 4 first: in H there are:
  - 5 frames,
  - 5 jobs of T1, 4 jobs of T2, 1 job of T3.
• Each job (or job slice) is schedulable in a frame that is contained in its feasible interval, i.e. a frame that:
  - begins no sooner than the job's release time,
  - ends no later than its deadline.

| job | feasible interval | schedulable in frame(s) |
|---|---|---|
| J1,1 | 0 – 4 | F1 (0 – 4) |
| J1,2 | 4 – 8 | F2 (4 – 8) |
| J1,3 | 8 – 12 | F3 (8 – 12) |
| J1,4 | 12 – 16 | F4 (12 – 16) |
| J1,5 | 16 – 20 | F5 (16 – 20) |
| J2,1 | 0 – 7 | F1 (0 – 4) |
| J2,2 | 5 – 12 | F3 (8 – 12) |
| J2,3 | 10 – 17 | F4 (12 – 16) |
| J2,4 | 15 – 22 | F5 (16 – 20) |
| J3,1 | 0 – 20 | F1, F2, F3, F4, F5 |

• Place the jobs in the frames following this rule (table above);
• then draw the network-flow graph and apply the INF algorithm (next slide).
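The max-flow step can be sketched for this example with Edmonds–Karp (a simple max-flow method, not the O((N + F)³) algorithm cited above; the graph encoding and names are illustrative):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow. capacity: {u: {v: cap}}, mutated into
    residual capacities as augmenting paths are found."""
    flow = 0
    while True:
        parent = {source: None}            # BFS for a shortest augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in capacity.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink                 # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:                  # update residual capacities
            capacity[u][v] -= bottleneck
            capacity.setdefault(v, {}).setdefault(u, 0)
            capacity[v][u] += bottleneck
        flow += bottleneck

# the example with f = 4: each job maps to (e_i, frames it may run in)
f = 4
jobs = {"J1,1": (1, ["F1"]), "J1,2": (1, ["F2"]), "J1,3": (1, ["F3"]),
        "J1,4": (1, ["F4"]), "J1,5": (1, ["F5"]),
        "J2,1": (2, ["F1"]), "J2,2": (2, ["F3"]),
        "J2,3": (2, ["F4"]), "J2,4": (2, ["F5"]),
        "J3,1": (5, ["F1", "F2", "F3", "F4", "F5"])}
cap = {"s": {j: e for j, (e, _) in jobs.items()}}   # source -> job, capacity e_i
for j, (e, frames) in jobs.items():
    cap[j] = {fr: f for fr in frames}               # job -> frame, capacity f
for fr in ["F1", "F2", "F3", "F4", "F5"]:
    cap.setdefault(fr, {})["t"] = f                 # frame -> sink, capacity f
print(max_flow(cap, "s", "t"))   # → 18 = sum of e_i: a feasible schedule exists
```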

Network-flow graph example 1
[Figure: network-flow graph for f = 4, with job vertices J1,1 … J3,1, frame vertices for the 5 frames (0–4, 4–8, 8–12, 12–16, 16–20), and edges labeled (capacity), flow.]

- T1 = (4,1); T2 = (5,2,7); T3 = (20,5)
• An edge (Ji,j, k) from job vertex Ji,j to frame vertex k is drawn if job Ji,j is schedulable in frame k;
• its flow gives the amount of time in frame k allocated to job Ji,j.
• INF algorithm:
  - step 1: the possible frame sizes are 4 and 2;
  - step 2: try f = 4 first (figure): the maximum flow is 18 = Σ ei ⇒ feasible schedule.
• The flows of the feasible schedule indicate that T3 is to be partitioned into 3 slices, and give their sizes.
• The time diagram with the job-slice schedule in each frame may now be drawn (the same schedule found earlier with job slices).

Network-flow graph example 2
• Second example:
  - T1 = (4,3); T2 = (6,1.5); H = 12; U = 1;
  - frame sizes 2 and 4 meet constraints 2 and 3 (but not 1).
• Try frame size 4 first: in H there are:
  - 3 frames,
  - 3 jobs of T1, 2 jobs of T2; with the notation J = e(r, d]:
    J1,1 = 3(0, 4]; J1,2 = 3(4, 8]; J1,3 = 3(8, 12]; J2,1 = 1.5(0, 6]; J2,2 = 1.5(6, 12].
• Place the jobs in frames contained in their feasible intervals (table below);
• draw the network-flow graph and find the maximum flow (next slide).

| job | feasible interval | schedulable in frame(s) |
|---|---|---|
| J1,1 | 0 – 4 | F1 (0 – 4) |
| J1,2 | 4 – 8 | F2 (4 – 8) |
| J1,3 | 8 – 12 | F3 (8 – 12) |
| J2,1 | 0 – 6 | F1 (0 – 4) |
| J2,2 | 6 – 12 | F3 (8 – 12) |

Network-flow graph example 2
[Figure: network-flow graph for f = 4, with job vertices J1,1, J1,2, J1,3, J2,1, J2,2 and frame vertices F1, F2, F3; edges labeled (capacity), flow.]

• The maximum flow is 11 < Σ ei = 12 ⇒ no feasible schedule with f = 4.

Network-flow graph example 2
• T1 = (4,3); T2 = (6,1.5); no feasible schedule with f = 4.
• Try frame size 2: in H = 12 there are:
  - 6 frames,
  - 3 jobs of T1, 2 jobs of T2;
  - e1 = 3 > f = 2: the jobs of T1 must be sliced.
• Place the jobs in frames contained in their feasible intervals (table below).

| job | feasible interval | schedulable in frame(s) |
|---|---|---|
| J1,1 | 0 – 4 | F1 (0 – 2), F2 (2 – 4) |
| J1,2 | 4 – 8 | F3 (4 – 6), F4 (6 – 8) |
| J1,3 | 8 – 12 | F5 (8 – 10), F6 (10 – 12) |
| J2,1 | 0 – 6 | F1, F2, F3 |
| J2,2 | 6 – 12 | F4, F5, F6 |

• How to slice the jobs? An additional constraint applies:
  - every job of the same task should be sliced in the same way: the cyclic executive (or the periodic-task server) must execute (call) the same sequence of code segments every time;
  - this suggests slicing each J1,i into 2 sub-jobs of equal execution time 1.5;
  - the jobs of T1 then leave 0.5 time units free in each frame, so the jobs of T2 (e2 = 1.5) must be sliced into 3 sub-jobs, scheduled in 3 frames;
  - draw the network-flow graph and find the maximum flow (next slide).

Network-flow graph example 2
[Figure: network-flow graph for f = 2, with the sliced jobs and frame vertices F1 … F6; each frame receives 1.5 units of a T1 slice and 0.5 units of a T2 slice.]

• The maximum flow is 12 = Σ ei ⇒ feasible schedule;
• the jobs of T1 are sliced into 2 sub-jobs each: J1,iA, J1,iB;
• the jobs of T2 are sliced into 3 sub-jobs each: J2,iA, J2,iB, J2,iC;
• the time diagram with the job-slice schedule in each frame may now be drawn (next slide).


Network-flow graph example 2

[Figure: the network-flow graph for f = 2 with the resulting cyclic schedule drawn on a timeline from 0 to 12; the deadlines d1,1 = 4, d2,1 = 6, d1,2 = 8 and d1,3 = d2,2 = 12 are all met.]

| frame (interval) | job slices |
|---|---|
| F1 (0 – 2) | J1,1A, J2,1A |
| F2 (2 – 4) | J1,1B, J2,1B |
| F3 (4 – 6) | J1,2A, J2,1C |
| F4 (6 – 8) | J1,2B, J2,2A |
| F5 (8 – 10) | J1,3A, J2,2B |
| F6 (10 – 12) | J1,3B, J2,2C |

Clock-driven scheduling: pros
• The clock-driven approach has many advantages: - conceptual simplicity; - we can take into account complex dependencies, communication delays, and resource contentions among jobs in the choice and construction of the static schedule; - static schedule stored in a table; change table to change operation mode; - no need for concurrency control and synchronization mechanisms; - context switch overhead can be kept low with large frame sizes. • It is possible to further simplify clock-driven scheduling: - sporadic and aperiodic jobs may also be time-triggered (interrupts in response to external events are queued and polled periodically); - the periods may be chosen to be multiples of the frame size. • Easy to validate, test and certify (by exhaustive simulation and testing). • Many traditional real-time applications use clock-driven schedules. • This approach is suited for systems (e.g. small embedded controllers) which are rarely modified once built.

Clock-driven scheduling: cons
• The clock-driven approach also has many disadvantages:
  - brittle: a change in execution time, or the addition of a task, often requires a new schedule to be constructed;
  - release times must be fixed (this is not required in priority-driven systems);
  - all combinations of periodic tasks that might execute at the same time must be known a priori: it is not possible to reconfigure the system on-line (priority-driven systems do not have this restriction);
  - not suitable for many systems that contain both hard and soft real-time applications: in the clock-driven systems previously discussed, aperiodic and sporadic jobs were scheduled in a priority-driven manner (EDF).
