
10/14/2019

Problem formulation

We consider a computing system that has to execute a set Γ of n periodic real-time tasks:

    Γ = {τ1, τ2, …, τn}

Each task τi is characterized by:
- Ci: worst-case computation time
- Ti: activation period
- Di: relative deadline
- Φi: initial arrival time (phase)

Problem formulation (cont.)

A task τi = (Φi, Ci, Ti, Di) generates an infinite sequence of jobs τik, each with arrival time aik and absolute deadline dik. For each periodic task τi we must guarantee that:
- each job τik is activated at aik = Φi + (k−1)Ti
- each job τik completes within dik = aik + Di

There are several wrong ways to achieve this goal.

Proportional share algorithm

Basic idea:
- Divide the timeline into slots of equal length.
- Within each slot, serve each task for a time proportional to its utilization.

Example (feeding a pig and a cow):
- Pig utilization factor = 4/16 = 1/4
- Cow utilization factor = 20/40 = 1/2

(Figure: in each slot of length 8 over [0, 40], the Pig is served for 2 units and the Cow for 4 units.)

Proportional share algorithm (cont.)

In general, let Ui = Ci/Ti be the required feeding fraction and let Δ = GCD(T1, T2) = 8 be the slot length; then execute each task for δi = Ui·Δ in each slot of length Δ.

(Figure: Pig served 2 units and Cow 4 units in every slot of length 8 over [0, 40].)

NOTE: δi = Ui·Δ guarantees Ci units of service every Ti; in fact, δi·(Ti/Δ) = Ci.

Feasibility test:  Σi δi ≤ Δ,  i.e.,  Σi Ui ≤ 1.

This method approximates a fluid system, where execution progresses proportionally to Ui. The major problem is that, if the periods are not harmonic, Δ = GCD(T1, …, Tn) is small and each task is fragmented into many chunks (Ti/Δ) of small duration δi = Ui·Δ, which causes too much overhead.
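The slot length and per-slot budgets above can be computed mechanically. A minimal C sketch (function names such as slot_budget are illustrative, not from the slides):

```c
#include <assert.h>

/* greatest common divisor of two periods */
long gcd(long a, long b) {
    while (b != 0) { long r = a % b; a = b; b = r; }
    return a;
}

/* slot length Delta = GCD of all periods */
long slot_length(const long T[], int n) {
    long d = T[0];
    for (int i = 1; i < n; i++) d = gcd(d, T[i]);
    return d;
}

/* per-slot budget delta_i = Ui * Delta = Ci * Delta / Ti
   (exact in integer arithmetic, since Delta divides Ti) */
long slot_budget(long C, long T, long delta) {
    return C * delta / T;
}
```

For the pig-and-cow example, slot_length gives Δ = 8 and the budgets are 2 and 4 units per slot.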

Work and Sleep

According to this method, a task executes for Ci units and then suspends itself for Ti − Ci units:

    task  Ci  Ti  Sleep time
    A      1   5   4
    B      2  10   8
    C      3  20  17

    functionA();    functionB();    functionC();
    sleep(4);       sleep(8);       sleep(17);

It works well for small computation times.

Work and Sleep: Example 1

Same task set as above.

(Figure: execution timelines of A (1/5), B (2/10), and C (3/20); each task sleeps for its Sleep time between executions, and all tasks keep their nominal rate.)

Work and Sleep: Example 2

    task  Ci  Ti  Sleep time
    A      2   5   3
    B      2   8   6
    C      6  20  12

Problem: low priority tasks experience long delays.

(Figure: A (2/5) and B (2/8) run at their nominal rates, but C (6/20) is delayed by the others and completes with an effective period of 30 instead of 20.)

Loop Scheduling

It is a simple trick to schedule periodic activities at different rates using a single loop (often used in Arduino):

    int count = 0;    // relative time
    int T1 = 20;      // period 1 in ms
    int T2 = 50;      // period 2 in ms
    int T3 = 80;      // period 3 in ms

    while (1) {
        if (count % T1 == 0) function1();
        if (count % T2 == 0) function2();
        if (count % T3 == 0) function3();
        count++;
        if (count == T1*T2*T3) count = 0;
        delay(1);     // wait for 1 ms
    }

Loop Scheduling (cont.)

Note that the counter must be reset at a common multiple of the periods; the smallest one is the least common multiple, called the hyperperiod (H). The code above uses the product of the periods, which is always a common multiple:

    count++;
    if (count == T1*T2*T3) count = 0;

Q: How many bits are needed to represent that reset value? With periods

    T1 = 10, T2 = 40, T3 = 50, T4 = 100, T5 = 500, T6 = 1000

the product T1·…·T6 = 10^12, so bits = ⌈log2(10^12)⌉ = ⌈39.86⌉ = 40. It does not fit in a 32-bit long integer: we are in trouble on a small microcontroller!

A better way is to rely on a system call that returns the system time.

Initialization:

    t = get_time();
    a1 = t - T1;
    a2 = t - T2;

Loop body (for each task, read the time and check whether a period has elapsed since the last activation):

    t = get_time();
    if (t >= a1 + T1) { a1 = t; function1(); }

    t = get_time();
    if (t >= a2 + T2) { a2 = t; function2(); }
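The overflow problem shrinks considerably if the counter is reset at the true hyperperiod rather than at the raw product, since the lcm grows much more slowly. A hedged sketch (hyperperiod is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t gcd_u64(uint64_t a, uint64_t b) {
    while (b != 0) { uint64_t r = a % b; a = b; b = r; }
    return a;
}

/* H = lcm(T1, ..., Tn), computed pairwise to limit intermediate growth */
uint64_t hyperperiod(const uint64_t T[], int n) {
    uint64_t h = T[0];
    for (int i = 1; i < n; i++)
        h = h / gcd_u64(h, T[i]) * T[i];   /* lcm(h, Ti) */
    return h;
}
```

For the six periods above, the lcm is only 1000 ms (10 bits), while the product is 10^12 (40 bits).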

Loop Scheduling: Implementation

    #define N 5                      // number of tasks

    time t;                          // current time
    time a[N], T[N];                 // activation times, periods

    initialize_periods(T);           // e.g., read from file
    t = get_time();
    for (i=0; i<N; i++) a[i] = t - T[i];

    while (1) {
        for (i=0; i<N; i++) {
            t = get_time();
            if (t >= a[i] + T[i]) {
                a[i] = t;
                function(i);
            }
        }
    }

Loop Scheduling: Example 1

    task  Ci  Ti
    A      1    5
    B      3   10
    C      5   20

(Figure: A (1/5) is activated at 0, 5, 9, 14, 19, 24, 29, 34; B (3/10) at 1, 11, 21, 31; C (5/20) at 0, 2, 22.)

Loop Scheduling: Example 2

    task  Ci  Ti
    A      1    5
    B      3   10
    C      5   20

Problem: tasks with short periods are delayed by the other tasks.

(Figure: A (1/5) is activated at 0, 5, 9, 14, 19, 24, 29, 34; B (3/10) at 1, 11, 21, 31; C (5/20) at 0, 4, 9, 24.)

Loop Scheduling (cont.)

If the scheduler is not the only thread, a sleep must be inserted:

    #define N 5                      // number of tasks
    #define DELTA 3                  // milliseconds

    time t;                          // current time
    time a[N], T[N];                 // activation times, periods

    initialize_periods(T);           // e.g., read from file
    t = get_time();
    for (i=0; i<N; i++) a[i] = t - T[i];

    while (1) {
        for (i=0; i<N; i++) {
            t = get_time();
            if (t - a[i] >= T[i]) {
                a[i] = t;
                function(i);
            }
        }
        sleep(DELTA);                // suspend for 3 ms
    }
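The polling scheduler above can be exercised off-target with a simulated clock. A minimal sketch; the fake get_time() advancing one tick per loop iteration is an assumption standing in for delay(1) on real hardware:

```c
#include <assert.h>

#define N 3

static long now = 0;                       /* simulated clock */
static long get_time(void) { return now; }

static long T[N] = {5, 10, 20};            /* periods */
static long a[N];                          /* last activation times */
static int activations[N];                 /* released jobs per task */

static void function(int i) { activations[i]++; }  /* job bodies take zero time here */

/* run the polling loop for `ticks` time units */
void run(long ticks) {
    long t = get_time();
    for (int i = 0; i < N; i++) a[i] = t - T[i];
    while (now < ticks) {
        for (int i = 0; i < N; i++) {
            t = get_time();
            if (t - a[i] >= T[i]) { a[i] = t; function(i); }
        }
        now++;                             /* stands in for delay(1) */
    }
}
```

Since the job bodies are instantaneous here, each task is released exactly at its nominal rate; with nonzero computation times the delays of Examples 2 and 3 would appear.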

Loop Scheduling: Example 3

    task  Ci  Ti        (DELTA = 3)
    A      1    5
    B      3   10
    C      5   20

Problem: the experienced delay is given by the other tasks plus the suspension.

(Figure: A (1/5) is activated at 0, 5, 9, 14, 19, 24, 29, 34; B (3/10) at 1, 11, 13, 16, 23, 26, 31; C (5/20) at 0, 4, 9, 12, 24.)

NOTE: the suspension time can be higher due to other tasks.

Warning!

The scheduling methods shown in the previous slides are not precise and should be avoided. More predictable methods are presented in the following slides.

Timeline Scheduling

Also known as cyclic scheduling, it has been used for 30 years in military, navigation, and monitoring systems.

Examples:
- Air traffic control systems
- Space Shuttle
- Boeing 777
- Airbus navigation system

Method:
- The time axis is divided into intervals of equal length (time slots).
- Each task is statically allocated in a slot so as to meet the desired request rate.
- The execution in each slot is activated by a timer.

Timeline Scheduling: Example

    task  Ci      Ti
    A     10 ms    25 ms
    B     10 ms    50 ms
    C     10 ms   100 ms

    Δ = GCD(25, 50, 100) = 25 ms    (minor cycle)
    T = lcm(25, 50, 100) = 100 ms   (major cycle)

(Figure: a timer interrupt starts each minor cycle; over one major cycle the slots contain A+B, A+C, A+B, and A, repeating at 0, 25, 50, 75, 100, 125, 150, 175, 200.)

Guarantee:

    CA + CB ≤ Δ
    CA + CC ≤ Δ

Timeline Scheduling: Coding

    #define MINOR 25                 // minor cycle = 25 ms

    initialize_timer(MINOR);         // interrupt every 25 ms
    while (1) {
        sync();                      // block until interrupt
        function_A();
        function_B();
        sync();                      // block until interrupt
        function_A();
        function_C();
        sync();                      // block until interrupt
        function_A();
        function_B();
        sync();                      // block until interrupt
        function_A();
    }

Timeline Scheduling: pros and cons

Advantages:
- Simple implementation (no RTOS is required).
- Low run-time overhead.
- All tasks run with very low jitter.

Disadvantages:
- It is not robust during overloads.
- It is difficult to expand the schedule.
- It is not easy to handle aperiodic activities.
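The coding pattern above can be generalized with a static slot table. A minimal simulation sketch (the table layout and run counters are illustrative, not from the slides):

```c
#include <assert.h>

#define SLOTS 4        /* major cycle = 4 minor cycles of 25 ms */
#define PER_SLOT 2
enum { A, B, C, IDLE = -1 };

/* static allocation of tasks to the slots of one major cycle */
static const int table[SLOTS][PER_SLOT] = {
    { A, B }, { A, C }, { A, B }, { A, IDLE }
};

static int runs[3];
static void run_task(int id) { runs[id]++; }

/* execute `cycles` major cycles; on real hardware each slot
   would begin with sync() blocking on the 25 ms timer */
void cyclic_executive(int cycles) {
    for (int c = 0; c < cycles; c++)
        for (int s = 0; s < SLOTS; s++)
            for (int k = 0; k < PER_SLOT; k++)
                if (table[s][k] != IDLE) run_task(table[s][k]);
}
```

With the table above, A runs every minor cycle (period 25), B in slots 1 and 3 (period 50), and C in slot 2 (period 100), matching the example.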

Problems during overloads

What do we do during task overruns?
- Let the task continue: we can have a domino effect on all the other tasks (timeline break).
- Abort the task: the system can remain in inconsistent states.

Expandability

If one or more tasks need to be upgraded, we may have to re-design the whole schedule again.

Example: B is updated so that CB = 20 ms; now CA + CB > Δ, so A and B no longer fit in one slot of 25 ms.

We have to split task B into two subtasks (B1, B2) and re-build the schedule:

    A B1 | A B2 C | A B1 | A B2 | ...    (slots at 0, 25, 50, 75, 100)

Guarantee:

    CA + CB1 ≤ Δ
    CA + CB2 + CC ≤ Δ

If the frequency of some task is changed, the impact can be even more significant:

    task  Told     Tnew
    A      25 ms    25 ms
    B      50 ms    40 ms
    C     100 ms   100 ms

    minor cycle:  Δ = 25  →  Δ = 5
    major cycle:  T = 100 →  T = 200

That means 40 synchronization points per major cycle!

Example

(Figure: the new schedule with minor cycle Δ = 5 and major cycle T = 200, drawn over [0, 200].)

Priority Scheduling

Method:
1. Assign priorities to each task based on its timing constraints.
2. Verify the feasibility of the schedule using analytical techniques.
3. Execute tasks on a priority-based kernel.

How to assign priorities?

- Typically, task priorities are assigned based on their relative importance.
- However, different priority assignments can lead to different processor utilization bounds.

Priority vs. importance

If τ2 is more important than τ1 and is assigned a higher priority, the schedule may not be feasible:

(Figure: with P2 > P1, τ1 misses a deadline; with P1 > P2, both tasks complete in time.)

Priority vs. importance (cont.)

If priorities are not properly assigned, the utilization bound can be arbitrarily small: an application can be unfeasible even when the processor is almost empty!

(Figure: τ1 has a very small C1 and a short period T1; τ2 has a long computation time C2. With P2 > P1, τ1 misses its deadline even though U = C1/T1 + C2/T2 can be made arbitrarily close to 0.)

Optimal priority assignments

- Rate Monotonic (RM): Pi ∝ 1/Ti (static); optimal among fixed-priority algorithms for Di = Ti.
- Deadline Monotonic (DM): Pi ∝ 1/Di (static); optimal among fixed-priority algorithms for Di ≤ Ti.
- Earliest Deadline First (EDF): Pi ∝ 1/di,k (dynamic), with di,k = ri,k + Di; optimal among all algorithms.

Rate Monotonic is optimal

RM is optimal among all fixed-priority algorithms (if Di = Ti):
- If there exists a fixed priority assignment that leads to a feasible schedule, then the RM schedule is feasible.
- If a task set is not schedulable by RM, then it cannot be scheduled by any fixed priority assignment.

Deadline Monotonic is optimal

If Di ≤ Ti, then the optimal priority assignment is given by Deadline Monotonic (DM):

(Figure: DM (P2 > P1) produces a feasible schedule; RM (P1 > P2) does not.)

EDF Optimality

EDF is optimal among all algorithms:
- If there exists a feasible schedule for a task set, then EDF will generate a feasible schedule.
- If a task set is not schedulable by EDF, then it cannot be scheduled by any algorithm.

Optimality

(Diagram: EDF, dynamic priority with D ≤ T, contains DM, fixed priority with D ≤ T, which in turn contains RM, fixed priority with D = T.)

Rate Monotonic (RM)

Each task is assigned a fixed priority proportional to its rate [Liu & Layland '73].

(Figure: tasks A, B, C with periods 25, 40, and 100 scheduled by RM.)

Note that small parameter variations are automatically handled by the scheduler without any intervention.

An unfeasible RM schedule

    Up = 3/6 + 4/9 ≈ 0.944

(Figure: τ1 = (3, 6) and τ2 = (4, 9) under RM over [0, 18]; τ2 misses a deadline.)

EDF Schedule

    Up = 3/6 + 4/9 ≈ 0.944,    Di = Ti

(Figure: the same task set under EDF over [0, 18]: all deadlines are met.)

How can we verify feasibility?

- Each task uses the processor for a fraction of time:

      Ui = Ci / Ti

- Hence the total processor utilization is:

      Up = Σ_{i=1}^{n} Ci / Ti

- Up is a measure of the processor load.

Identifying the worst case

Feasibility may depend on the initial activations (phases):

    Up = 3/6 + 4/9 ≈ 0.944

(Figure: the same task set with two different phasings; the response time R2 of τ2 changes, and a deadline is missed only in one of the two cases.)

Critical Instant

For any task τi, the longest response time occurs when it arrives together with all higher priority tasks.

More precisely: for independent preemptive tasks under fixed priorities, the critical instant of τi occurs when it arrives together with all higher priority tasks.

(Figure: tasks τ1 = 2/6, τ2 = 2/8, τ3 = 2/12; the idle time they leave determines the response time of τi = 2/14.)

A necessary condition

A necessary condition for having a feasible schedule is that Up ≤ 1. In fact, if Up > 1 the processor is overloaded, hence the task set cannot be schedulable.

However, there are cases in which Up ≤ 1 but the task set is not schedulable by RM.

An unfeasible RM schedule

    Up = 3/6 + 4/9 ≈ 0.944    →  τ2 misses a deadline

Given this task set (period configuration), what is the highest utilization that guarantees feasibility?

Utilization upper bound

    Up = 3/6 + 3/9 ≈ 0.833

(Figure: τ1 = (3, 6) and τ2 = (3, 9) are feasible under RM over [0, 18].)

NOTE: if C1 or C2 is increased, τ2 will miss its deadline!

A different upper bound

    Uub = 2/4 + 4/10 = 0.9

(Figure: τ1 = (2, 4) and τ2 = (4, 10) under RM over [0, 20]: feasible, and any increase makes the set unfeasible.)

And another:

    Up = 2/4 + 4/8 = 1

(Figure: τ1 = (2, 4) and τ2 = (4, 8) under RM over [0, 16]: feasible at full utilization.)

NOTE: the upper bound Uub depends on the specific task set.

The least upper bound

(Figure: each task set has its own upper bound Uub ≤ 1; the least upper bound Ulub is the minimum over all possible task sets.)

A sufficient condition

If Up ≤ Ulub, the task set is certainly schedulable with the RM algorithm.

NOTE: if Ulub < Up ≤ 1, we cannot say anything about the feasibility of that task set.

RM Least Upper Bound

In 1973, Liu and Layland proved that for a set of n periodic tasks:

    Ulub^RM = n (2^(1/n) − 1)

For n → ∞, Ulub → ln 2 ≈ 0.69, i.e., about 69% of the CPU.

A special case

If tasks have harmonic periods, Ulub = 1. Example:

    Up = 2/4 + 4/8 = 1

(Figure: τ1 = (2, 4) and τ2 = (4, 8) scheduled by RM over [0, 16]: feasible at full utilization.)

RM Guarantee Test

- Compute the processor utilization:

      Up = Σ_{i=1}^{n} Ci / Ti

- Guarantee test (only sufficient):

      Up ≤ n (2^(1/n) − 1)

Basic Assumptions

A1. Ci is constant for every job of τi.
A2. Ti is constant for every job of τi.
A3. For each task, Di = Ti.
A4. Tasks are independent:
    - no precedence relations
    - no resource constraints
    - no blocking on I/O operations

Computing Ulub

- Assume the worst-case scenario for the task set (simultaneous arrivals).
- Increase all the Ci's to fully utilize the processor.
- Compute the upper bound Uub.
- Minimize Uub with respect to all remaining variables.

Computing Ulub for 2 tasks

Let F = ⌊T2/T1⌋ be the number of whole periods of τ1 contained in T2.

Case 1: C1 ≤ T2 − F·T1.

(Figure: the last of the F+1 activations of τ1 within T2 fits entirely before T2.)

The largest possible C2 is:

    C2max = T2 − (F+1)·C1

so

    Uub = C1/T1 + [T2 − (F+1)·C1]/T2 = 1 + (C1/T2)·[(T2/T1) − (F+1)]

Since T2/T1 < F+1, Uub decreases with C1; its minimum is at the largest C1, i.e., C1 = T2 − F·T1.

Case 2: C1 ≥ T2 − F·T1.

(Figure: the last activation of τ1 within T2 is cut by T2.)

    C2max = F·(T1 − C1)

    Uub = C1/T1 + F·(T1 − C1)/T2 = F·(T1/T2) + (C1/T2)·[(T2/T1) − F]

Since T2/T1 > F, Uub increases with C1; its minimum is at the smallest C1, again C1 = T2 − F·T1.

(Figure: Uub as a function of C1 reaches its minimum, below 1, exactly at C1 = T2 − F·T1.)

Computing Ulub for 2 tasks (cont.)

Substituting C1 = T2 − F·T1:

    Ulub = Uub |_{C1 = T2 − F·T1} = (T1/T2)·[F + (T2/T1 − F)²]

This function increases with F, hence we set F = 1. Minimizing with respect to k = T2/T1:

    Ulub = [1 + (k − 1)²] / k,        dUlub/dk = (k² − 2) / k²

dUlub/dk = 0 for k = √2, hence:

    Ulub = 2(√2 − 1) ≈ 0.83

Worst case for 2 tasks

    F = 1  →  T1 < T2 < 2T1
    C1 = T2 − F·T1 = T2 − T1

(Figure: with T2 = √2·T1 and C1 = T2 − T1, the remaining capacity is C2 = 2T1 − T2; any increase in C1 or C2 makes the set unschedulable.)

Worst case for n tasks

The worst-case configuration has T1 < Ti < 2T1 and:

    C1 = T2 − T1
    C2 = T3 − T2
    C3 = T4 − T3
    …
    Cn−1 = Tn − Tn−1
    Cn = T1 − Σ_{k=1}^{n−1} Ck   (= 2T1 − Tn)

Computing Ulub for n tasks

    Uub = (T2 − T1)/T1 + (T3 − T2)/T2 + … + (Tn − Tn−1)/Tn−1 + (2T1 − Tn)/Tn
        = T2/T1 + T3/T2 + … + Tn/Tn−1 + 2T1/Tn − n

Defining Ri = Ti+1/Ti and noting that P = Π_{i=1}^{n−1} Ri = Tn/T1, we can write:

    Uub = Σ_{i=1}^{n−1} Ri + 2/P − n

Computing Ulub for n tasks (cont.)

    Uub = Σ_{i=1}^{n−1} Ri + 2/P − n

Minimizing with respect to each Ri:

    ∂Uub/∂Ri = 1 − 2 / (Ri² · Π_{j≠i} Rj) = 0    for    Ri = 2^(1/n)

Hence:

    Ulub = n (2^(1/n) − 1)

The Hyperbolic Bound

In 2000, Bini et al. proved that a set of n periodic tasks is schedulable by RM if:

    Π_{i=1}^{n} (Ui + 1) ≤ 2
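The hyperbolic test is as cheap as the Liu & Layland test. A minimal C sketch (hyperbolic_test is an illustrative name):

```c
#include <assert.h>

/* Hyperbolic Bound (Bini et al., 2000):
   the set is schedulable by RM if prod(Ui + 1) <= 2 */
int hyperbolic_test(const double U[], int n) {
    double p = 1.0;
    for (int i = 0; i < n; i++) p *= U[i] + 1.0;
    return p <= 2.0;
}
```

For example, U = {0.6, 0.25} is accepted by the hyperbolic bound (the product is 2) but rejected by the LL bound (the sum 0.85 exceeds 0.828), showing that HB dominates LL.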

Proof sketch

In the worst case, T1 < Ti < 2T1 and:

    Ci = Ti+1 − Ti   (for i < n),      Cn = T1 − Σ_{k=1}^{n−1} Ck   (= 2T1 − Tn)

so that, for i < n:

    Ui = Ri − 1,   i.e.,   Ri = Ui + 1

and, for the last task:

    Cn = 2T1 − Tn    ⇒    Un + 1 = 2T1/Tn

Therefore:

    Π_{i=1}^{n} (Ui + 1) = [Π_{i=1}^{n−1} Ri] · (Un + 1) = (Tn/T1) · (2T1/Tn) = 2

That is, the worst-case task sets lie exactly on the hyperbolic bound.

HB vs. LL

    LL:  Σ_{i=1}^{n} Ui ≤ n (2^(1/n) − 1)
    HB:  Π_{i=1}^{n} (Ui + 1) ≤ 2

(Figure: in the U1–U2 plane, the LL region is bounded by the line U1 + U2 = 2(√2 − 1) ≈ 0.83, while the HB region is bounded by the hyperbola (U1 + 1)(U2 + 1) = 2, which strictly contains it.)

Extension to tasks with D < T

(Figure: task τi with computation time Ci and relative deadline Di ≤ Ti; job k is released at ri,k, has absolute deadline di,k, and the next release occurs at ri,k+1.)

Scheduling algorithms:
- Deadline Monotonic: pi ∝ 1/Di (static)
- Earliest Deadline First: pi ∝ 1/di,k (dynamic)

Deadline Monotonic

(Figure: two tasks with Di < Ti scheduled by DM over [0, 28].)

Problem with the utilization bound: computing utilizations over the deadlines gives

    Up = Σ_{i=1}^{n} Ci/Di = 2/3 + 3/6 ≈ 1.16 > 1

but the task set is schedulable.

Response Time Analysis [Audsley '90]

- For each task τi, compute the interference Ii due to higher priority tasks (those with Dk < Di).
- Compute its response time as Ri = Ci + Ii.
- Verify that Ri ≤ Di.

Computing Interference

Interference of τk on τi in the interval [0, Ri]:

    Iik = ⌈Ri/Tk⌉ · Ck

Interference on τi by all higher-priority tasks:

    Ii = Σ_{k=1}^{i−1} ⌈Ri/Tk⌉ · Ck

Computing Response Times

    Ri = Ci + Σ_{k=1}^{i−1} ⌈Ri/Tk⌉ · Ck

Iterative solution:

    Ri(0) = Ci
    Ri(s) = Ci + Σ_{k=1}^{i−1} ⌈Ri(s−1)/Tk⌉ · Ck

iterate while Ri(s) ≠ Ri(s−1).
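The fixed-point iteration above translates directly into C. A sketch, assuming tasks are indexed 0..n−1 by decreasing priority; the bound argument (e.g., Di) cuts off divergence when the set is overloaded:

```c
#include <assert.h>

/* iterative response time of task i under fixed priorities;
   returns -1 if Ri grows beyond `bound` */
long response_time(const long C[], const long T[], int i, long bound) {
    long r = C[i], prev = -1;
    while (r != prev && r <= bound) {
        prev = r;
        long interference = 0;
        for (int k = 0; k < i; k++)                          /* higher priority tasks */
            interference += ((prev + T[k] - 1) / T[k]) * C[k];  /* ceil(prev/Tk)*Ck */
        r = C[i] + interference;
    }
    return (r <= bound) ? r : -1;
}
```

For the earlier task sets: with (3, 6) and (3, 9), R2 converges to 6 ≤ 9; with (3, 6) and (4, 9), the iteration exceeds 9, confirming the deadline miss.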

Earliest Deadline First (EDF)

- Each job receives an absolute deadline:

      di,k = ri,k + Di

- At any time, the processor is assigned to the job with the earliest absolute deadline.
- Under EDF, any task set can utilize the processor up to 100%.

EDF Example

    Up = 3/6 + 4/9 ≈ 0.944,    Di = Ti

(Figure: τ1 = (3, 6) and τ2 = (4, 9) under EDF over [0, 18]: all deadlines are met.)

Unfeasible under RM

(Figure: the same task set under RM: τ2 misses a deadline.)

EDF Optimality [Dertouzos '74]

EDF is optimal among all algorithms:
- If there exists a feasible schedule for a task set Γ, then EDF will generate a feasible schedule.
- If Γ is not schedulable by EDF, then it cannot be scheduled by any algorithm.

Proof idea: transform a feasible schedule σ into an EDF schedule σ' by swapping execution slices. If at time t the running job τk is not the pending job E with the earliest deadline dE, exchange the slice of E executed at a later time tE with the slice of τk executed at t:

    σ'(t) = σ(tE),    σ'(tE) = σ(t)

Feasibility is preserved, since fk' = fE ≤ dE ≤ dk.

EDF schedulability

In 1973, Liu and Layland proved that for a set of n periodic tasks:

    Ulub^EDF = 1

This means that a task set Γ is schedulable by EDF if and only if Up ≤ 1.

Proving sufficiency

    Up ≤ 1  ⇒  schedulable

Proof by contradiction:
- Assume Up ≤ 1 and Γ is not schedulable.
- Show that this implies Up > 1.

Equivalent to showing:

    Γ unschedulable  ⇒  Up > 1

Proving sufficiency (cont.)

If Γ is not schedulable, there is a deadline miss.

- Let t2 be the first instant at which a deadline is missed.
- Let [t1, t2] be the longest interval of continuous utilization before t2 such that only jobs with deadline d ≤ t2 are executed in [t1, t2].
- Let Cp(t1, t2) be the total computational demand in [t1, t2].

(Figure: the processor is continuously busy in [t1, t2], which follows an idle instant; the deadline miss occurs at t2.)

Then:

    Cp(t1, t2) = Σ_{rk ≥ t1, dk ≤ t2} Ck
               ≤ Σ_{i=1}^{n} ⌊(t2 − t1)/Ti⌋ · Ci
               ≤ Σ_{i=1}^{n} (t2 − t1) · Ci/Ti
               = (t2 − t1) · Up

Since a deadline is missed, we must have:

    (t2 − t1) < Cp(t1, t2)

and since Cp(t1, t2) ≤ (t2 − t1)·Up, we get (t2 − t1) < (t2 − t1)·Up.

Hence Up > 1 (contradiction).

An alternative proof by optimality:
- Find any algorithm for which Up ≤ 1 implies schedulability;
- Then, by the optimality of EDF, the same condition also holds for EDF.

Proving sufficiency (cont.)

Consider the Proportional Share algorithm: in every interval of length Δ it schedules each task τi for a fraction

    δi = Ui · Δ

With this algorithm, a task executes in each period for:

    (Ti/Δ) · δi = (Ti/Δ) · Ui · Δ = Ti · Ui = Ci

Feasibility is ensured if Σ_{i=1}^{n} δi ≤ Δ, that is, if

    Σ_{i=1}^{n} Ui = Up ≤ 1

By the optimality of EDF, the same condition is then sufficient for EDF.

EDF with D ≤ T

Schedulability analysis: Processor Demand Criterion [Baruah '90].

In any interval of length L, the computational demand g(0, L) of the task set must be no greater than the available time in that interval:

    ∀ L ≥ 0,    g(0, L) ≤ L

Processor Demand

The demand in [t1, t2] is the computation time of those jobs started at or after t1 with deadline less than or equal to t2:

    g(t1, t2) = Σ_{ri ≥ t1, di ≤ t2} Ci

Demand of a periodic task

For a single periodic task:

    gi(0, L) = ⌊(L − Di + Ti) / Ti⌋ · Ci

and for the whole set:

    g(0, L) = Σ_{i=1}^{n} ⌊(L − Di + Ti) / Ti⌋ · Ci

Example

(Figure: two tasks; g(0, L) is a staircase that increases by Ci at each absolute deadline, taking the values 2, 4, 6, 8 over [0, 16] and staying below the line g = L.)
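The demand bound function translates to a few lines of C. A sketch (demand is an illustrative name); note that for L ≥ Di the identity ⌊(L − Di + Ti)/Ti⌋ = ⌊(L − Di)/Ti⌋ + 1 holds, which is what the code uses:

```c
#include <assert.h>

/* g(0, L) = sum over tasks of floor((L - Di + Ti)/Ti) * Ci, with Di <= Ti */
long demand(const long C[], const long T[], const long D[], int n, long L) {
    long g = 0;
    for (int i = 0; i < n; i++)
        if (L >= D[i])                        /* no deadline of task i in [0, L] otherwise */
            g += ((L - D[i]) / T[i] + 1) * C[i];
    return g;
}
```
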

Bounding complexity

- Since g(0, L) is a step function, we can check feasibility only at deadline points.
- If tasks are synchronous and Up < 1, we can check feasibility up to the hyperperiod:

      H = lcm(T1, …, Tn)

- Moreover, we note that g(0, L) ≤ G(0, L), where

      G(0, L) = Σ_{i=1}^{n} [(L − Di + Ti)/Ti] · Ci
              = Σ_{i=1}^{n} [L·Ci/Ti + (Ti − Di)·Ci/Ti]
              = L·U + Σ_{i=1}^{n} (Ti − Di)·Ui

Limiting L

Since G(0, L) = L·U + Σ (Ti − Di)·Ui is a line of slope U < 1, define

    L* = [Σ_{i=1}^{n} (Ti − Di)·Ui] / (1 − U)

For L > L* we have g(0, L) ≤ G(0, L) < L, so no deadline can be missed beyond L*.

Processor Demand Test

    ∀ L ∈ D,    g(0, L) ≤ L

where

    D = {dk | dk ≤ min(H, L*)}
    H = lcm(T1, …, Tn)
    L* = [Σ_{i=1}^{n} (Ti − Di)·Ui] / (1 − U)
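The full test can be sketched as follows. This is a deliberately conservative variant: it checks every integer L up to L* instead of only the deadline points, and it rejects U ≥ 1 outright even though U = 1 can be feasible (that case would require checking up to the hyperperiod):

```c
#include <assert.h>

/* sufficient Processor Demand Test for EDF with Di <= Ti,
   synchronous arrivals, integer parameters; requires U < 1 */
int edf_pdc(const long C[], const long T[], const long D[], int n) {
    double U = 0.0, num = 0.0;
    for (int i = 0; i < n; i++) {
        U   += (double)C[i] / T[i];
        num += (double)(T[i] - D[i]) * C[i] / T[i];   /* (Ti - Di) * Ui */
    }
    if (U >= 1.0) return 0;               /* U = 1 would need checking up to H */
    long Lstar = (long)(num / (1.0 - U)) + 1;
    for (long L = 1; L <= Lstar; L++) {   /* every L, not only deadline points */
        long g = 0;
        for (int i = 0; i < n; i++)
            if (L >= D[i]) g += ((L - D[i]) / T[i] + 1) * C[i];
        if (g > L) return 0;
    }
    return 1;
}
```
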

Summary

- Three scheduling approaches:
  - Off-line construction (Timeline)
  - Fixed priority (RM, DM)
  - Dynamic priority (EDF)
- Three analysis techniques:
  - Processor utilization bound: U ≤ Ulub
  - Response time analysis: ∀i, Ri ≤ Di
  - Processor demand criterion: ∀L, g(0, L) ≤ L

Complexity Issues

- Utilization based analysis (U ≤ Ulub): O(n) complexity.
- Response time analysis (∀i, Ri ≤ Di): pseudo-polynomial complexity.
- Processor demand analysis (∀L, g(0, L) ≤ L): pseudo-polynomial complexity.

RM vs. EDF

Metrics for the comparison:
- Implementation complexity
- Efficiency
- Schedulability analysis
- Runtime overhead
- Overload conditions
- Jitter
- Aperiodic task handling

Context switches

(Figure: two tasks with T1 = 5 and T2 = 7 over [0, 35]: under RM, τ2 misses a deadline; under EDF, the same set is feasible and incurs fewer context switches.)

Schedulability Analysis

          Di = Ti                                  Di ≤ Ti
    RM    Sufficient, polynomial O(n):             Response Time Analysis:
            LL:  Σ Ui ≤ n(2^(1/n) − 1)               ∀i, Ri ≤ Di, with
            HB:  Π (Ui + 1) ≤ 2                       Ri = Ci + Σ_{k=1}^{i−1} ⌈Ri/Tk⌉ Ck
          Exact (RTA): pseudo-polynomial             pseudo-polynomial

    EDF   Exact, polynomial O(n):                  Processor Demand Analysis:
            Σ Ui ≤ 1                                 ∀L ≥ 0, g(0, L) ≤ L
                                                     pseudo-polynomial

Questions

If EDF is more efficient than RM, why are commercial RT operating systems still based on RM?

Main reason:
- RM is simpler to implement on top of commercial (fixed priority) kernels.
- EDF requires explicit kernel support for deadline scheduling, but gives other advantages.

RM: harmonic periods

Harmonic task sets are schedulable by RM if and only if U ≤ 1.

A set of tasks is harmonic if every pair of periods is in harmonic relation (i.e., one divides the other).

A common misconception: "The RM schedulability bound is 1 if every period is a multiple of the shortest period."

Non harmonic periods

(Figure: τ1 with T = 4, τ2 with T = 8, τ3 with T = 12 over [0, 24]: every period is a multiple of the shortest one, but 8 and 12 are not in harmonic relation.)

    U = 2/4 + 2/8 + 2/12 ≈ 0.917

Any increase in the Ci's makes the system unschedulable.
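Checking the pairwise condition is straightforward. A C sketch (harmonic is an illustrative name):

```c
#include <assert.h>

/* every pair of periods must be in harmonic relation:
   for any two periods, the smaller must divide the larger */
int harmonic(const long T[], int n) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            long lo = T[i] < T[j] ? T[i] : T[j];
            long hi = T[i] < T[j] ? T[j] : T[i];
            if (hi % lo != 0) return 0;
        }
    return 1;
}
```

The set {4, 8, 12} fails (8 does not divide 12), while {4, 8, 16} passes, matching the two examples.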

Harmonic task set

(Figure: τ1 with T = 4, τ2 with T = 8, τ3 with T = 16 over [0, 24].)

    U = 2/4 + 2/8 + 4/16 = 1

The set is schedulable by RM at full utilization.

Robustness under overloads

Two situations are considered:
1. Permanent overload: this occurs when U > 1.
2. Transient overload: this occurs when some job executes more than expected.

RM under permanent overload

    U = 4/8 + 6/12 + 5/20 = 1.25

(Figure: τ1 = (4, 8), τ2 = (6, 12), τ3 = (5, 20) under RM over [0, 80].)

- High priority tasks execute at the proper rate.
- Low priority tasks are completely blocked.

EDF under permanent overload

Same task set under EDF:
- All tasks execute at a slower rate.
- No task is blocked.

EDF is predictable in overloads

Theorem (Cervin '03): if U > 1, EDF executes the tasks with an average period T'i = Ti·U.

Example (U = 1.25):

    task  Ti   T'i
    τ1     8   10
    τ2    12   15
    τ3    20   25

RM during transient overruns

    Uavg = 0.817,   C1avg = 2,   C1max = 4

(Figure: τ1 (2/5), τ2 (3/9), τ3 (1/20), τ4 (1/30) under RM over [0, 30], with τ1 occasionally executing for its maximum time.)

RM during transient overruns (cont.)

    Uavg = 0.817,   C1avg = 2,   C1max = 4

(Figure: the same set; when τ1 overruns, τ3 (1/20) misses a deadline.)

Note that the task missing its deadline is not the lowest priority one.

Advantages of EDF

EDF offers the following advantages with respect to RM:
- Better processor utilization (100%);
- Less overhead due to preemptions;
- More flexible behavior in overload situations;
- More uniform jitter control;
- Better aperiodic responsiveness.
