
11/5/2019

Handling aperiodic tasks

 Aperiodic tasks are typically activated by the arrival of external events (notified by interrupts).
 On the one hand, one objective of the kernel is to reduce the response time of aperiodic tasks (interrupt latency).
 On the other hand, aperiodic task execution should not jeopardize the schedulability of the periodic tasks.

Aperiodic Scheduling

Consider a simple example with 2 periodic tasks (scheduled by RM) and a single aperiodic job with Ca = 2 arriving at time t = 2:

[Figure: RM schedule of τ1 (C1 = 1, T1 = 4) and τ2 (C2 = 3, T2 = 6) over [0, 12], with the aperiodic job released at t = 2]

Background service

If aperiodic jobs are scheduled in background (i.e., during the idle times left by periodic tasks), their response times are too long:

[Figure: the same task set with the aperiodic job executed only in background idle times; Response Time = 9]

Immediate service

On the other hand, if interrupt service routines are scheduled at the highest priority, the other tasks can miss their deadlines:

[Figure: the aperiodic job executed at the highest priority in [2, 4]; Response Time = 2, but τ2 misses its deadline at t = 6]

A problem related to a wrong service of aperiodic tasks occurred in the Apollo 11 computing system, during the historic mission on the Moon, in the landing phase.

How can we manage aperiodic requests to reduce response times, but preserve periodic tasks?


HARD aperiodic tasks

 Aperiodic tasks with HARD deadlines must be guaranteed under worst-case conditions.
 Off-line guarantee is only possible if we can bound interarrival times (sporadic tasks).
 Hence, sporadic tasks can be guaranteed as periodic tasks with Ci = WCETi and Ti = MITi.

WCET = Worst-Case Execution Time
MIT = Minimum Interarrival Time

SOFT aperiodic tasks

 Aperiodic tasks with SOFT deadlines should be executed as soon as possible, but without jeopardizing HARD tasks.
 We may be interested in
 minimizing the response time of each aperiodic request
 performing an on-line guarantee

How can we achieve these goals?

Aperiodic Servers

 A server is a kernel activity aimed at controlling the execution of aperiodic tasks.
 Normally, a server is a periodic task having two parameters:
Cs  capacity (or budget)
Ts  server period
 The server is scheduled as any periodic task.
 Priority ties are broken in favor of the server.

To preserve periodic tasks, no more than Cs units must be executed every period Ts.

Aperiodic service queue

[Figure: aperiodic SOFT tasks enter the server's service queue; periodic/sporadic HARD tasks enter the READY queue; both feed the CPU]

 Aperiodic tasks can be selected from the service queue using an arbitrary queueing discipline.

Aperiodic Servers

Fixed priority servers:
 Polling Server
 Deferrable Server
 Sporadic Server
 Slack Stealer

Dynamic priority servers:
 Dynamic Polling Server
 Dynamic Sporadic Server
 Total Bandwidth Server
 Tunable Bandwidth Server
 Constant Bandwidth Server

Polling Server (PS)

 At the beginning of each period, the budget is recharged to its maximum value.
 Budget is consumed during job execution.
 When the server becomes active and there are no pending jobs, Cs is discharged to zero.
 When the server becomes active and there are pending jobs, they are served as long as Cs > 0.


Background service

Let's take the previous example:

[Figure: τ1 (C1 = 1, T1 = 4) and τ2 (C2 = 3, T2 = 6) with the aperiodic job served in background; Response Time = 9]

RM + Polling Server

[Figure: the same task set served by a Polling Server with Cs = 1, Ts = 4; the job is served one unit per server period; Response Time = 7]

Computing Ulub for RM + PS

Consider the worst-case configuration with Ts < T1 < ... < Tn < 2Ts (so T1 < Ti < 2T1), in which each computation time exactly fills the gap to the next period:

$$C_s = T_1 - T_s, \quad C_1 = T_2 - T_1, \quad C_2 = T_3 - T_2, \quad \dots, \quad C_{n-1} = T_n - T_{n-1}$$

$$C_n = T_s - C_s - \sum_{k=1}^{n-1} C_k = 2T_s - T_n$$

The total utilization is then

$$U_{ub} = U_s + \frac{T_2 - T_1}{T_1} + \dots + \frac{T_n - T_{n-1}}{T_{n-1}} + \frac{2T_s - T_n}{T_n}
        = U_s + \frac{T_2}{T_1} + \dots + \frac{T_n}{T_{n-1}} + \frac{2T_s}{T_n} - n$$

Defining $R_i = \dfrac{T_{i+1}}{T_i}$ we can write:

$$U_{ub} = U_s + \sum_{i=1}^{n-1} R_i + \frac{2T_s}{T_n} - n$$

Computing Ulub for RM + PS

Note that $\prod_{i=1}^{n-1} R_i = \frac{T_n}{T_1}$, and from $C_s = T_1 - T_s$:

$$U_s = \frac{C_s}{T_s} = \frac{T_1 - T_s}{T_s} \;\Rightarrow\; \frac{T_1}{T_s} = U_s + 1$$

hence:

$$\frac{2T_s}{T_n} = \frac{2T_s}{T_1}\,\frac{T_1}{T_n} = \frac{2}{U_s + 1}\,\frac{1}{P} = \frac{K}{P},
\qquad K = \frac{2}{U_s + 1}, \quad P = \prod_{i=1}^{n-1} R_i$$

so that

$$U_{ub} = U_s + \sum_{i=1}^{n-1} R_i + \frac{K}{P} - n$$

Minimizing with respect to $R_i$ we have:

$$\frac{\partial U_{ub}}{\partial R_i} = 1 - \frac{K}{P\,R_i} = 0 \quad \text{for} \quad R_i = K^{1/n}$$

(at the minimum all the $R_i$ are equal, and $R_i^n = K$). Hence:

$$U_{lub}^{RM+PS} = U_s + n\,(K^{1/n} - 1)$$


PS properties

 In the worst case, the PS behaves as a periodic task with utilization Us = Cs/Ts.
 Aperiodic tasks execute at the highest priority if Ts = min(T1, …, Tn).
 Liu & Layland-style analysis gives:

$$U_{lub}^{RM+PS}(n) = U_s + n\,(K^{1/n} - 1), \qquad K = \frac{2}{U_s + 1}$$

RM + PS schedulability

For n → ∞:

$$U_{lub}^{RM+PS}(n \to \infty) = U_s + \ln\!\left(\frac{2}{U_s + 1}\right)$$

[Figure: U_lub as a function of Us, increasing from ln 2 at Us = 0 to 1 at Us = 1]
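As a quick numerical check, the bound above is easy to evaluate directly. The sketch below uses our own function name; it reduces to the Liu & Layland bound for Us = 0:

```python
def ulub_rm_ps(n: int, us: float) -> float:
    """Least upper bound U_s + n(K^(1/n) - 1) for RM plus a Polling
    Server of utilization us, with n periodic tasks."""
    k = 2.0 / (us + 1.0)          # K = 2 / (Us + 1)
    return us + n * (k ** (1.0 / n) - 1.0)

# With no server (us = 0) the formula is the classic RM bound:
print(ulub_rm_ps(1, 0.0))         # 1.0
print(round(ulub_rm_ps(1000, 0.0), 3))   # 0.693, i.e. close to ln 2
```

For large n the term n(K^(1/n) − 1) tends to ln K, which gives the asymptotic formula on the next slide.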

Analysis with Hyperbolic Bound

A set of periodic tasks is schedulable by Rate Monotonic in the presence of a Polling Server with utilization Us if

$$\prod_{i=1}^{n} (U_i + 1) \le \frac{2}{U_s + 1}$$

Defining $P = \prod_{i=1}^{n} (U_i + 1)$, the maximum server utilization that guarantees the schedulability of the periodic task set is

$$U_s^{max} = \frac{2}{P} - 1$$

Response time under PS

Consider a PS running at the highest priority and an aperiodic job of length Ca arriving at ra, when the server is idle:

[Figure: the job waits an initial delay until the next server period, is served in chunks of at most Cs per period, and finishes with a final chunk]

$$\Delta_a = T_s \left\lceil \frac{r_a}{T_s} \right\rceil - r_a \;\text{(initial delay)}, \qquad
F_a = \left\lceil \frac{C_a}{C_s} \right\rceil - 1 \;\text{(full service periods)}, \qquad
C_a - F_a C_s \;\text{(final chunk)}$$

$$R_a = \Delta_a + C_a + F_a (T_s - C_s)$$
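Both formulas can be checked in a few lines of code (a minimal sketch; the function names are ours). The response-time formula reproduces the earlier example, where the job with Ca = 2 arriving at ra = 2 under a PS with Cs = 1, Ts = 4 finished with Ra = 7:

```python
import math

def max_ps_utilization(task_utils):
    """Largest Us such that prod(Ui + 1) <= 2 / (Us + 1)."""
    p = 1.0
    for u in task_utils:
        p *= (u + 1.0)
    return 2.0 / p - 1.0

def ps_response_time(ra, ca, cs, ts):
    """Response time of an aperiodic job (arrival ra, length ca) under a
    highest-priority PS (budget cs, period ts), assuming the job arrives
    while the server is idle."""
    delta = ts * math.ceil(ra / ts) - ra    # initial delay
    fa = math.ceil(ca / cs) - 1             # number of full service periods
    return delta + ca + fa * (ts - cs)

print(ps_response_time(2, 2, 1, 4))   # 7, as in the RM + PS example
```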

Deferrable Server (DS)

 It is similar to the PS, but the budget is not discharged if there are no pending requests.
 Keeping the budget improves responsiveness, since jobs can be served within the current period.

[Figure: an aperiodic job served by a DS with Cs = 1, Ts = 4 as soon as it arrives, without waiting for the next server period]

RM + Deferrable Server

[Figure: RM schedule of two periodic tasks (C1 = 2, C2 = 1) with a DS (Cs = 1, Ts = 5); the aperiodic job is served immediately; Response Time = 3]


Deferrable Server (DS)

 However, the DS does not behave like a periodic task, and it is more invasive than the PS.
 Keeping the budget decreases the utilization bound.

There can be two server executions back to back: the budget can be spent at the very end of one server period and again immediately at the beginning of the next.

[Figure: DS with Cs = 1, Ts = 4 executing at the end of one period and at the start of the next]
[Figure: with Cs = 2, Ts = 4, the back-to-back effect causes a periodic task with period 5 to miss its deadline at t = 10]

Analysis of RM + DS

Because of the back-to-back effect, the worst-case configuration has the server executing 2Cs in front of the first periodic task:

$$2C_s = T_1 - T_s, \quad C_1 = T_2 - T_1, \quad C_2 = T_3 - T_2, \quad \dots$$

Proceeding as for the PS, we obtain:

$$U_{lub}^{RM+DS}(n) = U_s + n\,(K_{DS}^{1/n} - 1),
\qquad K_{PS} = \frac{2}{U_s + 1}, \quad K_{DS} = \frac{U_s + 2}{2U_s + 1}$$

RM + DS schedulability

For n → ∞:

$$U_{lub}^{RM+DS}(n \to \infty) = U_s + \ln\!\left(\frac{U_s + 2}{2U_s + 1}\right)$$

[Figure: U_lub vs Us for the PS and the DS; both curves start at ln 2 for Us = 0, but the DS curve stays below the PS curve]
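The comparison between the two bounds can also be checked numerically (a sketch with our own naming). Since K_DS < K_PS for any Us > 0, the DS bound is always the lower of the two:

```python
def ulub_rm_ps(n, us):
    """RM + Polling Server least upper bound."""
    k_ps = 2.0 / (us + 1.0)
    return us + n * (k_ps ** (1.0 / n) - 1.0)

def ulub_rm_ds(n, us):
    """RM + Deferrable Server least upper bound."""
    k_ds = (us + 2.0) / (2.0 * us + 1.0)
    return us + n * (k_ds ** (1.0 / n) - 1.0)

# For us = 0 both K's equal 2, so the bounds coincide (plain RM);
# for any us > 0 the DS bound is strictly lower than the PS bound.
```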

Analysis with Hyperbolic Bound

A set of periodic tasks is schedulable by Rate Monotonic in the presence of a Deferrable Server with utilization Us if

$$\prod_{i=1}^{n} (U_i + 1) \le \frac{U_s + 2}{2U_s + 1}$$

Defining $P = \prod_{i=1}^{n} (U_i + 1)$, the maximum server utilization that guarantees the schedulability of the periodic task set is

$$U_s^{max} = \frac{2 - P}{2P - 1}$$

Response time under DS

Consider a DS running at the highest priority and an aperiodic job arriving when the server is idle. The residual budget qs is consumed immediately, so the remaining computation is

$$C_a^{rem} = C_a - q_s$$

[Figure: the job arrives at ra and executes qs at once; the remainder is served in Fa full service periods plus a final chunk δa]

$$\Delta_a = T_s \left\lceil \frac{r_a}{T_s} \right\rceil - r_a, \qquad
F_a = \left\lceil \frac{C_a^{rem}}{C_s} \right\rceil - 1, \qquad
\delta_a = C_a^{rem} - F_a C_s$$

$$R_a = \Delta_a + F_a T_s + \delta_a$$
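The DS response-time formula can be sketched as follows (our naming; it assumes, as in the slide, that the whole residual budget qs is usable immediately, i.e. the job arrives far enough from the end of the server period):

```python
import math

def ds_response_time(ra, ca, qs, cs, ts):
    """Aperiodic response time under a highest-priority DS with residual
    budget qs at the arrival time ra (job arrives while the server is
    idle, with the full qs usable at once)."""
    ca_rem = ca - qs                       # what is left after immediate service
    if ca_rem <= 0:
        return ca                          # the job fits in the residual budget
    delta = ts * math.ceil(ra / ts) - ra   # delay to the next replenishment
    fa = math.ceil(ca_rem / cs) - 1        # full service periods
    final_chunk = ca_rem - fa * cs
    return delta + fa * ts + final_chunk
```

For example, with ra = 2, Ca = 2, qs = 1, Cs = 1, Ts = 4 the job executes one unit in [2, 3] and the last unit in [4, 5], giving Ra = 3, which the function reproduces.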

5
11/5/2019

Response time under DS

If a job arrives close to the next server period, only a portion δin < Cs of the budget is executed immediately. In general, the initial execution is

$$\delta_{in} = \min(\Delta_a, q_s), \qquad C_a^{rem} = C_a - \delta_{in}$$

[Figure: the job of length Ca arrives at ra; δin units execute immediately, the rest in later server periods]

with, as before,

$$\Delta_a = T_s \left\lceil \frac{r_a}{T_s} \right\rceil - r_a, \qquad
F_a = \left\lceil \frac{C_a^{rem}}{C_s} \right\rceil - 1, \qquad
R_a = \Delta_a + F_a T_s + \delta_a$$

Designing server parameters

1. Determine $U_s^{max}$ using $\prod_{i=1}^{n} (U_i + 1) \le K_{server}$
2. Define Us ≤ Us^max
3. Define Ts = min(T1, …, Tn)
4. Compute Cs = Us Ts
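The four-step recipe can be sketched in code for the Polling Server case, where K_server = 2/(Us + 1) makes the hyperbolic bound solvable in closed form (function name and the choice Us = Us^max are ours):

```python
def design_ps(tasks):
    """tasks: list of (Ci, Ti). Returns (Cs, Ts, Us) for a Polling
    Server sized with the hyperbolic bound."""
    p = 1.0
    for c, t in tasks:
        p *= (c / t + 1.0)
    us_max = 2.0 / p - 1.0            # step 1: hyperbolic bound for the PS
    us = us_max                        # step 2: pick Us <= Us_max (here: equal)
    ts = min(t for _, t in tasks)      # step 3: server at the highest priority
    cs = us * ts                       # step 4: budget
    return cs, ts, us
```

With the running example τ1 = (1, 4), τ2 = (3, 6): P = 1.25 · 1.5 = 1.875, so Us^max = 2/1.875 − 1 = 1/15 and Ts = 4; note this sufficient bound is more pessimistic than the Us = 1/4 used in the schedule figures.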

Sporadic Server (SS)

 It preserves the budget like the DS, but it is less aggressive than the DS, since the budget is replenished only Ts units after its consumption.
 The SS is not activated periodically, but from the analysis point of view it behaves like a periodic task with computation time Cs and period Ts.

[Figure: SS with Cs = 1, Ts = 4 serving aperiodic jobs; each consumed unit is replenished (+1) Ts units after its consumption started]

Sporadic Server rules

qs = current server budget

Assumption: the SS has the highest priority, i.e. Ts ≤ min(T1, …, Tn).

Rule 1 (replenishment time)
At the time tA at which the following event occurs:
    (qs > 0) AND (∃ pending aperiodic requests)
set the replenishment time in the future at RT = tA + Ts.

Rule 2 (replenishment amount)
At the time tI at which the following event occurs:
    (qs = 0) OR (∄ pending aperiodic requests)
set the replenishment amount equal to the budget Cape(tA, tI) consumed in the interval [tA, tI].
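The two rules can be condensed into a small bookkeeping function (a simplified sketch, our naming: it assumes the server actually executes aperiodic work for the whole of each interval [tA, tI), so the consumed budget equals the interval length):

```python
def ss_replenishments(consumptions, ts):
    """consumptions: list of intervals (tA, tI) in which the SS served
    aperiodic requests. Returns the replenishment events (RT, amount):
    Rule 1 fixes RT = tA + Ts, Rule 2 fixes amount = budget used in
    [tA, tI]."""
    return [(t_a + ts, t_i - t_a) for (t_a, t_i) in consumptions]

# One unit consumed in [2, 3) with Ts = 4 is given back at time 6:
print(ss_replenishments([(2, 3)], 4))   # [(6, 1)]
```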

SS example

[Figure: SS with Cs = 2, Ts = 4 serving aperiodic jobs of lengths 1 and 2; replenishments of +1 occur at RT1, RT2 and RT3, each Ts units after the corresponding consumption started]

Slack Stealer

 It is a more aggressive method that serves aperiodic tasks by stealing all the available slack left by periodic tasks.

[Figure: two periodic tasks (periods 6 and 10) whose jobs leave slack before their deadlines]


Slack Stealer

[Figure: the aperiodic job is executed inside the slack of the periodic tasks, delaying periodic work as much as the slack allows]

However, computing the available slack is complex and requires either a high runtime overhead or large memory tables (to store pre-computed values).

Optimality

 The Slack Stealer was thought to be optimal (i.e., able to minimize the response time of aperiodic tasks).

However, in 1996, Tia, Liu and Shankar proved that, under fixed priority scheduling, no optimal server exists. That is:
 No service algorithm can minimize the response time of each aperiodic request.
 No service algorithm can minimize the average response time of aperiodic requests.

This can be proved by a counter example.

Counter example

 If we minimize R1 we cannot minimize R2, and vice versa.
 To minimize the average response time Ravg we should be clairvoyant!

[Figure: two schedules of the same scenario; in the first, R1 = 1, R2 = 4, Ravg = 2.5; in the second, R1 = 2, R2 = 2, Ravg = 2]

Total Bandwidth Server (TBS)

 It is a dynamic priority server, used along with EDF.
 Each aperiodic request is assigned a deadline so that the server demand does not exceed a given bandwidth Us.
 Aperiodic jobs are inserted in the ready queue and scheduled together with the HARD tasks.


The TBS mechanism

[Figure: aperiodic tasks pass through a deadline assignment stage and join the periodic/sporadic tasks in the EDF READY queue feeding the CPU]

 Deadline ties are broken in favor of the server.
 Periodic tasks are guaranteed if and only if Up + Us ≤ 1.

Deadline assignment rule

 The deadline has to be assigned so as not to jeopardize periodic tasks.
 A safe relative deadline is equal to the minimum period that can be assigned to a new periodic task with utilization Us:

$$U_s = \frac{C_k}{T_k} \;\Rightarrow\; T_k = d_k - r_k = \frac{C_k}{U_s}$$

 Hence, the absolute deadline can be set as:

$$d_k = r_k + \frac{C_k}{U_s}$$

Deadline assignment rule

[Figure: two aperiodic jobs of lengths C1 and C2; each is assigned a window of length Ck/Us starting at its release (or at the previous deadline)]

 To keep track of the bandwidth assigned to previous jobs, dk must be computed as:

$$d_k = \max(r_k, d_{k-1}) + \frac{C_k}{U_s}$$

EDF + TBS schedule

[Figure: τ1 (C1 = 1, T1 = 4) and τ2 (C2 = 3, T2 = 6) scheduled by EDF with a TBS serving two aperiodic jobs of lengths 2 and 1]

Us = 1 − Up = 1/4
d1 = r1 + C1/Us = 1 + 2 · 4 = 9
d2 = max(r2, d1) + C2/Us = 9 + 1 · 4 = 13
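The deadline assignment rule takes only a few lines (a sketch with our naming). The test below reproduces the slide's numbers with r1 = 1; we take r2 = 4, read approximately from the figure, but any r2 ≤ d1 gives the same d2 because of the max:

```python
def tbs_deadlines(jobs, us):
    """jobs: list of (rk, Ck) in arrival order; returns the absolute TBS
    deadlines dk = max(rk, d_{k-1}) + Ck / Us."""
    deadlines = []
    d_prev = 0.0
    for rk, ck in jobs:
        d_prev = max(rk, d_prev) + ck / us   # bandwidth already granted counts
        deadlines.append(d_prev)
    return deadlines

print(tbs_deadlines([(1, 2), (4, 1)], 0.25))   # [9.0, 13.0]
```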

Improving TBS

 What's the minimum deadline that can be assigned to an aperiodic job?

[Figure: τ1 (C1 = 1), τ2 (C2 = 3) and an aperiodic job of length 2 with its TBS deadline d1]

 If we freeze the schedule and advance d1 to 7, no task misses its deadline, but the schedule is no longer EDF:

[Figure: the same schedule with d1 advanced to 7; feasible schedule ≠ EDF]


Improving TBS

 However, since EDF is optimal, the schedule produced by EDF with the advanced deadline is also feasible:

[Figure: EDF schedule with d1 = 7; the aperiodic job now finishes earlier]

 We can now apply the same argument, and advance the deadline to t = 6:

[Figure: EDF schedule with d1 = 6]

Improving TBS

 With d1 = 6, the aperiodic job finishes exactly at its deadline.
 Clearly, advancing the deadline now does not produce any further improvement:

[Figure: the schedule is unchanged; the deadline has reached the job's finishing time]

Computing the deadline

 In general, the new deadline has to be set to the finishing time of the current job:

$$d_k^0 = \max(r_k, d_{k-1}^0) + \frac{C_k}{U_s}$$

$$d_k^{s+1} = f_k^s = f_k(d_k^s)$$

 The actual finishing time can be estimated based on the periodic interference:

$$f_k^s = r_k + C_k + I_p(r_k, d_k^s)$$

[Figure: at each step the deadline d_k^s is moved back to the estimated finishing time f_k^s]


Periodic Interference

Example with two periodic tasks, τ1 (C1 = 2, T1 = 4) and τ2 (C2 = 2, T2 = 6):

Up = 1/2 + 1/3 = 5/6
Us = 1 − Up = 1/6
Ck = 2,  dk = 3 + 2/Us = 15

[Figure: the aperiodic job arrives at rk = 3 and gets TBS deadline dk = 15; the periodic jobs of τ1 and τ2 with deadline within dk interfere with it]

Computing interference

ei(t) = time executed by τi before t
ci(t) = remaining WCET of τi at time t
nexti(t) = next release time of τi after t

The interference has two components, $I_p(t, d_k^s) = I_a(t, d_k^s) + I_f(t, d_k^s)$:

 interference of the currently active jobs:

$$I_a(t, d_k^s) = \sum_{\substack{\tau_i\ \text{active} \\ d_i \le d_k^s}} c_i(t)
             = \sum_{\substack{\tau_i\ \text{active} \\ d_i \le d_k^s}} [C_i - e_i(t)]$$

 interference of future jobs:

$$I_f(t, d_k^s) = \sum_{i=1}^{n} \left\lceil \frac{d_k^s - next_i(t)}{T_i} \right\rceil C_i$$

(Actually this expression for $I_f$ is wrong for integer ratios.)
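The two components can be sketched directly from the definitions (our naming; we assume all periodic tasks are synchronously released at time 0, and we implement the slide's approximate formula for the future interference, including its overcounting caveat):

```python
import math

def interference_active(active_jobs, d):
    """active_jobs: list of (di, remaining_ci) for the currently active
    periodic jobs; only jobs with deadline within d interfere."""
    return sum(c for di, c in active_jobs if di <= d)

def interference_future(tasks, t, d):
    """tasks: list of (Ci, Ti), synchronously started at 0.
    Sums ceil((d - next_i(t)) / Ti) * Ci over all tasks."""
    total = 0
    for c, ti in tasks:
        next_release = math.floor(t / ti + 1) * ti   # next release after t
        total += math.ceil((d - next_release) / ti) * c
    return total
```

In the example above (t = 3, d = 15): τ2 is still active with one unit left and deadline 6 ≤ 15, so Ia = 1; the future releases give ⌈(15 − 4)/4⌉·2 + ⌈(15 − 6)/6⌉·2 = 6 + 4 = 10.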

Computing interference

[Figure: in the example, at t = rk = 3 task τ2 is still active and contributes its remaining ci(t) = Ci − ei(t); the future jobs of τ1 and τ2 released before dk contribute their full Ci]

$$I_a(t, d_k^s) = \sum_{\substack{\tau_i\ \text{active} \\ d_i \le d_k^s}} [C_i - e_i(t)]
\qquad
I_f(t, d_k^s) = \sum_{i=1}^{n} \left\lceil \frac{d_k^s - next_i(t)}{T_i} \right\rceil C_i$$

The Optimal Server (TBS*)

1. Compute the initial deadline with the TBS rule: $d_k^0 = \max(r_k, d_{k-1}^0) + C_k/U_s$; set s = 0.
2. Estimate the finishing time and shorten the deadline:
   $f_k^s = r_k + C_k + I_p(r_k, d_k^s)$,   $d_k^{s+1} = f_k^s$.
3. If $d_k^{s+1} = d_k^s$, EXIT; otherwise set s = s + 1 and repeat from step 2.
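The TBS* loop above can be sketched as follows (our naming; the periodic interference is passed in as a function `ip(t, d)`, which in a real implementation would be the Ia + If computation of the previous slide):

```python
def tbs_star_deadline(rk, ck, us, d_prev, ip):
    """Iterative deadline shortening of TBS* (sketch).
    ip(t, d) must return the periodic interference in [t, d]."""
    d = max(rk, d_prev) + ck / us       # step 1: initial TBS deadline
    while True:
        f = rk + ck + ip(rk, d)         # step 2: estimated finishing time
        if f == d:                      # step 3: fixpoint reached
            return d
        d = f                           # shorten the deadline and iterate
```

For instance, with rk = 3, Ck = 2, Us = 1/6 and no periodic interference, the initial deadline 15 is shortened to the actual finishing time rk + Ck = 5 in one step.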

Tunable Bandwidth Server

TBS(N): make at most N deadline-shortening steps (N = max number of steps).
TBS(0) = TBS
TBS(∞) = TBS*

1. $d_k^0 = \max(r_k, d_{k-1}^0) + C_k/U_s$; set s = 0.
2. If s = N, EXIT.
3. $f_k^s = r_k + C_k + I_p(r_k, d_k^s)$,   $d_k^{s+1} = f_k^s$.
4. If $d_k^{s+1} = d_k^s$, EXIT; otherwise set s = s + 1 and go to step 2.

Performance vs. overhead

[Figure: performance vs. overhead; the plain TBS has O(n) overhead, TBS(N) has O(Nn) (polynomial) overhead, and the optimal server TBS* (N = ∞) sits at the top]


Aperiodic responsiveness

[Figure: average aperiodic response time vs. relative aperiodic load ρa/(1 − Up), with Up = 0.85, for TB(0), TB(1), TB(3), TB(5) and TB*; more shortening steps yield shorter response times]

Problems with the TBS

 Without a budget management, there is no protection against execution overruns.
 If a job executes more than expected, hard tasks could miss their deadlines.

[Figure: τ1 (C1 = 1, T1 = 4) and an aperiodic job served with Us = 1/4; the job overruns its declared execution time and τ1 misses a deadline]

Overrun handling

 If a job executes more than expected (i.e., consumes its budget), it must be delayed by decreasing its priority or postponing its deadline.

[Figure: the overrunning job (Us = 1/4) is delayed so that τ1 meets its deadlines]

Solution: task isolation

 In the presence of overruns, only the faulty task should be delayed.
 Each task τi should not consume more than its declared utilization (Ui = Ci/Ti).
 If a task executes more than expected, its priority should be decreased (or its deadline postponed).

Achieving isolation

 Isolation among tasks can be achieved through bandwidth reservation.
 Each task is managed by a dedicated server having bandwidth Usi.

[Figure: four tasks reserved 10%, 20%, 25% and 45% of the CPU]

Implementation

[Figure: each task τi feeds its own server with bandwidth Usi; the servers insert jobs into a single EDF ready queue, with Us1 + Us2 + Us3 ≤ 1]

 The servers assign priorities (or deadlines) to tasks so that they do not exceed the reserved bandwidth.


Constant Bandwidth Server

 It assigns deadlines to tasks as the TBS, but keeps track of job executions through a budget mechanism.
 When the budget is exhausted, it is immediately replenished, but the deadline is postponed to keep the demand constant.

CBS parameters (assigned by the user):
Maximum budget: Qs
Server period: Ts
Server bandwidth: Us = Qs/Ts

CBS variables (maintained by the server):
Current budget: qs (initialized to 0)
Server deadline: ds (initialized to 0)

Basic CBS rules

Arrival of job Jk at time rk (assign ds):
    if (∃ pending aperiodic requests) then enqueue Jk
    else if (qs ≥ (ds − rk) Us) then { qs = Qs; ds = rk + Ts }
    (otherwise Jk is served with the current budget and deadline)

Budget exhausted (postpone ds):
    qs = Qs
    ds = ds + Ts
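The two rules amount to very little state. A minimal bookkeeping sketch (our naming; we use ≥ in the arrival test as in the standard CBS formulation, and leave the actual EDF scheduling to an external queue):

```python
class CBS:
    """Minimal bookkeeping for the basic CBS rules."""
    def __init__(self, qs_max, ts):
        self.qs_max = qs_max
        self.ts = ts
        self.us = qs_max / ts
        self.qs = 0.0      # current budget
        self.ds = 0.0      # current server deadline

    def on_arrival(self, rk, pending=False):
        # Arrival rule: with no pending requests, recharge and move the
        # deadline only if the residual budget exceeds the bandwidth
        # allowed up to the current deadline.
        if not pending and self.qs >= (self.ds - rk) * self.us:
            self.qs = self.qs_max
            self.ds = rk + self.ts
        return self.ds

    def on_budget_exhausted(self):
        # Exhaustion rule: immediate recharge, deadline postponed by Ts.
        self.qs = self.qs_max
        self.ds += self.ts
        return self.ds
```

For example, a CBS with Qs = 2, Ts = 6 receiving a job at rk = 2 assigns ds = 8 with a full budget; if the budget then runs out, the deadline is postponed to 14.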

Deadline assignment

[Figure: a CBS with Qs = 6, Ts = 12 assigning a deadline to an arriving job; since qs ≥ (ds − rk)Us, the budget is recharged to Qs and the deadline set to ds = rk + Ts]

Budget exhausted

[Figure: a CBS with Qs = 3, Ts = 6 serving a long job; when the budget reaches zero it is recharged to Qs and the deadline postponed by Ts]

EDF + CBS schedule

[Figure: two periodic tasks (periods 6 and 9) scheduled by EDF together with a CBS (Qs = 2, Ts = 6) serving aperiodic jobs of lengths 3, 3 and 1 arriving at r1, r2 and r3; the server deadlines d0 … d4 and the budget qs evolve according to the CBS rules]

Server comparison

[Figure: performance vs. overhead for the servers: Background service, PS, DS, SS, Slack Stealer, TBS, CBS, TBS(N), up to the optimal server TBS*]

The best choice depends on the price (overhead) we can pay to reduce task response times.
