6. MULTI-TASKING AND TASK SCHEDULING
6.1 Introduction
In Section 3.6 of Chapter 3, we studied how to implement concurrency control
among several tasks (or processes) which are dependent. The control is built
into these individual tasks so that they can synchronize and communicate to
satisfy certain constraints on the interleaving of their actions. Mutual exclu-
sion, studied therein, is an important example of such constraints. Communication
and synchronization are related, since some forms of communication
require synchronization, and synchronization can be considered as
communication without information exchange.
Going a step further, we now include some other significant features of
tasks, namely, each task (1) will have its own deadline to meet, (2) requires
allocation of a finite amount of processor time to execute, and (3) can be
delayed from completing its execution due to suspension (during the shared
accesses) arising from synchronization and data sharing.
To satisfactorily handle these features inherent in many practical tasks,
scheduling among multiple tasks is necessary. One can think of task scheduling
as a precedence ordering or interleaving of concurrent tasks in a system, done
in an attempt to meet the deadlines of all tasks, or as many of them as feasibly
possible. A task is said to be schedulable (with respect to a scheduling strategy
or means) if its deadline can be met using the strategy.
In this chapter, we will study how to schedule concurrent tasks on a single
processor so that each task meets its own deadline. Basically, this involves
assigning priorities to tasks and using an appropriate scheduling strategy. We
will also study some means to evaluate if every task in a system is schedulable
in the worst-case when a scheduling strategy is applied. Specifically, we will:
1. Understand the temporal scope of a task in terms of some characteristic
times, and learn how to determine their values for some specific tasks. Every
task has a temporal scope; whether a task is schedulable or not depends
on its temporal scope and the scheduling strategy used.
2. Examine some common scheduling strategies.
3. Learn how to deal with aperiodic or sporadic tasks.
4. Understand what task priority allocation is and how it is done in Ada.
5. Learn what rate monotonic preemptive scheduling (RMS) and deadline
monotonic scheduling (DMS) are.
6. Learn how to perform schedulability tests for a system of independent
tasks.
7. Understand the problem of blocking and priority inversion in a system with
dependent tasks, and appreciate the need for dynamic priority assignment
(by inheritance).
8. Learn about Immediate Ceiling Priority Protocol (ICPP), a protocol that
minimizes blocking among concurrent dependent tasks, and how to com-
pute each task's worst-case blocking time in an ICPP-based system.
9. Learn how to perform a full schedulability test for an ICPP-based system
that uses RMS or DMS.
10. Understand the implicit assumptions made in all the timing analysis in
order to more objectively assess or judge the outcome of a schedulability
test.
6.2 Temporal Scope of a Task
All tasks are either periodic or aperiodic. A periodic task is ready to run (i.e.,
runnable or ‘available for execution’) on a regular repetitive basis and has an
explicit deadline which must be met. A task deadline is measured from the
next time instant it is ready to run. An aperiodic or sporadic task also has
an explicit response time (deadline), but the next time instant at which it is
ready to run is generally not known.
6.2.1 Characteristic Times
We characterize the temporal scope of a task in terms of five times, as depicted
in Fig. 6.1.
Referring to Fig. 6.1:
1. Min_D is the minimum delay before a task can start; it is usually zero but
may be > 0.
2. Max_D is the maximum delay measured from the instant a task is ready
to run, to beyond which the task must have started execution.
3. Max_E is the maximum time between the start and the end of a task
execution.
[Fig. 6.1 depicts the temporal scope (TS) of a task along a time axis, starting from 'Now' (the time at which a task becomes available to execute and available for scheduling): Min_D (minimum delay from start of TS), Max_D (maximum delay from start of TS), Max_E (maximum elapsed time) and Max_CPU (maximum CPU time = A + B + C, the sum of possibly non-contiguous processing intervals A, B and C).]
Fig. 6.1. Temporal scope (TS) of a task
4. Max_CPU is the total amount of processor time needed to execute a task,
and clearly must be ≤ Max_E. Note that it is not necessarily contiguous.
5. DL is the deadline of a task; it is the maximum time specified by which
the task must have been completed.
These five times are not arbitrarily specified, but are dependent on the oper-
ational aspects of the system, including its hardware and interface software.
For instance, a delay Δt ∈ [Min_D, Max_D] in task execution is usually due to
transience in the system. Thus, (the values of) these times need to be deter-
mined, which the next section will elaborate on. Then, given a system of
concurrent tasks with their temporal scopes known, the most frequently quoted
requirement for a real-time system is that the maximum completion time of
every task must not exceed its deadline.
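The five characteristic times can be captured in a small record for basic consistency checking. The following Python sketch is our own illustration, not from the text; the constraint it encodes follows the discussion above: Max_CPU can never exceed Max_E, and the worst-case completion time (latest start plus longest execution) must not exceed DL.

```python
from dataclasses import dataclass

@dataclass
class TemporalScope:
    min_d: float    # minimum delay before the task can start
    max_d: float    # maximum delay by which the task must have started
    max_e: float    # maximum elapsed time from start to end of execution
    max_cpu: float  # total processor time needed (not necessarily contiguous)
    dl: float       # deadline, measured from the instant the task is ready

    def consistent(self) -> bool:
        # Max_CPU cannot exceed Max_E, and the worst-case completion
        # time (latest start plus longest execution) must not exceed DL.
        return self.max_cpu <= self.max_e and self.max_d + self.max_e <= self.dl

# e.g. a task that may start up to 1 ms late and take at most 8 ms to finish
scope = TemporalScope(min_d=0, max_d=1, max_e=8, max_cpu=8, dl=20)
```

The hypothetical `scope` above satisfies the constraints; a scope whose Max_CPU exceeded its Max_E would not.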
6.2.2 Determining the Scope: Some Practical Considerations
As mentioned earlier, a system must be examined to determine each of the five
times for the temporal scope of each task. This can be done through analysis
of the physical hardware and interface. This requires, where possible:
1. Measuring times using timer/counters, logic analyzers and oscilloscopes
during the various stages of operation in a real-time system.
2. Performing calculations and analysis of a model of the system.
It is necessary to determine all the critical times through analyzing the se-
quence of events for the activities of each task in a real-time system to ensure
correct timing operation. In particular, we need to clearly identify all time-
critical tasks and determine all critical and important deadlines.
Below, we illustrate through practical examples on how to determine the
temporal scope of a task.
1. Min_D (minimum delay before a task can start)
e.g. Consider that data must be read from the system. Some values
to be read may be indicated by relays which have a settling time of 60 ms.
Therefore a minimum delay of 60 ms is required. Or, there are mechanical
changes which must 'ripple through' and so the task cannot run until these
mechanical processes have settled.
2. Max_D (maximum delay beyond which the task must have started)
e.g. Reading ADCs (analog-to-digital converters). If the sampling rate is
to be 100 Hz, we have a 10 ms repetition rate. The periodic task of reading
the values may need to be done within a certain time (e.g. 1 ms) of the task
repeating its temporal scope, and so Max_D = 1 ms (see Fig. 6.2).
[Fig. 6.2 shows consecutive 10 ms periods (0 ms, 10 ms, 20 ms); the reading must occur within 1 ms of the start of each 10 ms period.]
Fig. 6.2. Reading an ADC
3. Max_E (maximum time between start and end of a task execution)
e.g. In loop control, the time between reading the current plant output,
calculating the error signal, calculating the new control signal and writing
it out must be completed within a certain amount of time for adequate
control.
4. Max_CPU (processor time needed to execute a task)
In order to estimate the worst and typical processing times we need to use
profiling tools (software tools) and/or logic analyzers by
a) assuming worst case inter-task communication times;
b) assuming loops run to their maximum amount;
c) determining the input which consumes the most CPU time; and
d) simulating or using the actual I/O system.
The same code will run at different times on different platforms. De-
pending on the safety-critical nature of the system, the above can be a
time-consuming but necessary activity.
5. DL (task deadline)
It is a time set by the system designer but is constrained by DL ≥
Max_CPU. For a periodic task, DL is ≤ the task period.
There is a need to ensure that each task deadline is met, subject to scheduling
and these inherent operational times: Min_D, Max_D, Max_E and Max_CPU.
6.3 Static and Dynamic Task Priority Allocation
Task priority simply refers to the ranking of tasks; the higher the rank of a
task, the higher its priority in being allocated the processor for execution.
Priority allocation (or assignment) may be static or dynamic. The former
means the priority level (or rank) is fixed (cannot be changed) while the latter
means it may be changed during the system program execution.
Static priority allocation enables straightforward overall system timing
analysis, and is adequate if processing resources are high.
Dynamic priority allocation flexibly allows higher priority to be given to
selected tasks in some critical run-time situations. It also allows a task with
low priority to increase priority with time to ensure it does eventually run.
But the general trade-offs are:
1. Time analysis can become much more difficult.
2. Additional processing overhead is incurred due to the time needed to dynami-
cally assign/reassign task priorities.
3. Risk of locking out a task for a long time; this can occur if a task is
constantly being assigned and re-assigned a low priority.
Thus, although dynamic priority allocation is an appealing idea in concurrent
task scheduling, it must be performed in a controlled manner so that it is
predictable and analyzable.
6.4 Scheduling Strategies
For an overview of simple scheduling strategies and analysis, we shall begin
with cyclic and preemptive scheduling under the following initial basic assump-
tions:
1. The system has a fixed number of tasks and tasks cannot be created dy-
namically.
2. All tasks have known temporal scopes.
3. All tasks are considered periodic.
4. All tasks are independent.
5. System overheads in task context-switching are assumed to be zero.
6. All tasks have a completion deadline equal to their own period.
7. All task periods must be a multiple of the shortest task period.
In this course, we will focus on scheduling strategies to support concurrency
among multiple tasks on a single processor (simply referred to as a CPU). We
will also study how to analyze whether or not a task is schedulable in a system,
namely, can its deadline be met under a given strategy? In this regard, the
task temporal scopes must be assumed known (Assumption 2). This means
that recursion in a task is not allowed unless there is a known maximum depth,
and neither is the dynamic creation of processes (Assumption 1) – these make
the timings indeterminate. For the same reason, the loops must have upper
bounds on the number of iterations and interprocess communications must
have timeouts. Under these assumptions, we can then estimate the processor
utilization. Processor utilization by a task refers to the percentage of time (on
the average) that the processor spends on the task.
We first recall some terminology for the task temporal scope, illustrated
graphically in Table 6.1. We will refer to it in our subsequent discussions.
Table 6.1. Task scope: Some terminology
[The table depicts, along a time line from the instant a task is ready: the start delay (from ready time to start of execution), the elapse time (from start of execution to end of execution), the completion time (from ready time to end of execution), and the deadline (from ready time to the 'set to finish' instant).]
6.4.1 Cyclic Scheduling
Simple cyclic scheduling is analogous to the single task cyclic executive ap-
proach discussed previously. The scheduler works as follows:
1. It allocates the CPU in turn to each task.
a) The task uses the CPU as long as it wishes (until the task has finished
or suspends).
b) The task then gives up the CPU.
2. When the task gives up the CPU, it then allocates it to another task and
repeats the cycle. Note therefore that a task execution time on each run
should be short and defined to work effectively; otherwise it can lock out
all other tasks.
The scheduler repeats the cycle at the rate of the fastest task. Only those tasks
which are ready at the start of a new cycle are run within the cycle.
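The tick-driven cyclic scheme just described can be sketched in a few lines of Python; this is our own minimal model (the task representation is an assumption, not from the text): at each tick, every task that is ready at the start of the cycle runs to completion in priority order.

```python
def cyclic_schedule(tasks, tick, horizon):
    """tasks: list of (name, period, cpu) in priority order, times in ms.
    Returns a list of (start, end, name) execution slots."""
    slots = []
    for cycle_start in range(0, horizon, tick):
        t = cycle_start
        for name, period, cpu in tasks:
            # only tasks ready at the start of this cycle are run within it
            if cycle_start % period == 0:
                slots.append((t, t + cpu, name))
                t += cpu
    return slots

# Example 6.4.1's task set: A(20 ms, 8 ms), B(40 ms, 4 ms), C(80 ms, 4 ms)
slots = cyclic_schedule([("A", 20, 8), ("B", 40, 4), ("C", 80, 4)], 20, 80)
```

Running this over one 80 ms hyperperiod reproduces the slot pattern of Fig. 6.3: A at the start of every cycle, B after A every other cycle, C once.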
To illustrate, consider the following example.
Example 6.4.1. Assume a system has a tick period of 20 ms to keep track of
time and consists of three independent tasks, A, B and C. Task A must run
every 20 ms and requires 8 ms of CPU time to run. Task B must run every 40
ms and requires 4 ms of CPU time to run. Task C must run every 80 ms and
also requires 4 ms of CPU time to run. If Task A has the highest priority and
Task C has the lowest priority, then we have the time schedule as depicted in
the task activation diagram of Fig. 6.3.
[Fig. 6.3 shows Tasks A, B and C on a time axis from 0 ms to 100 ms. Task A runs at the start of every 20 ms cycle; Task B runs after A in every other cycle; Task C runs after A and B once every 80 ms.]
Fig. 6.3. Cyclic scheduling: Task activation diagram for Example 6.4.1
The ‘cross’ x in the diagram denotes the instant at which the respective
task is ready to run. The cycle repeats once per tick, which is the rate of
repetition of Task A.
Take another example.
Example 6.4.2. Same as Example 6.4.1, except that now Task C has the high-
est priority and Task A has the lowest priority. Then we have the time schedule
as depicted in the task activation diagram of Fig. 6.4.
[Fig. 6.4 shows Tasks C, B and A on a time axis from 0 ms to 100 ms, with Task C now having the highest priority and Task A the lowest.]

Task            | C    | B          | C (cont.) A
start delay     | 0ms  | 0ms or 4ms | 0ms, 4ms or 8ms
elapse time     | 4ms  | 4ms        | 8ms
deadline        | 80ms | 40ms       | 20ms
completion time | 4ms  | 4ms or 8ms | 8ms, 12ms or 16ms

Fig. 6.4. Cyclic scheduling: Task activation diagram for Example 6.4.2
6.4.2 Preemptive Scheduling
As defined, a system is a set of tasks. At any time instant, each task is con-
sidered to be in one of two states:
1. runnable;
2. suspended, waiting for an event to happen.
In general, a preemptive scheduler works as follows:
1. It allocates CPU time to each task in turn.
2. The task uses the CPU, but if
a) the task's allocated time expires or
b) a higher-priority task becomes runnable,
then it
a) interrupts (or preempts) the task before it has completed and
b) suspends the task.
3. When the task has finished or is suspended, it then
a) returns a suspended task to runnable if the task has the highest priority,
b) allocates the CPU to another runnable task (of the highest priority)
and repeats the process.
Note that the scheduler would need to save the current working environment of
each task before suspending it so as to be able to restore it. Time is lost in sav-
ing the working environment of one task and restoring that of the other, which
is often needed. This is called context switching, involving saving and restor-
ing registers, as well as saving and restoring the active memory image (memory
swapping).
To illustrate how a preemptive scheduler works, consider the following ex-
ample.
Example 6.4.3. Assume a system has a tick period of 20 ms to keep track of
time and consists of three independent tasks, A, B and C. Task C takes 30 ms
to execute and must run every 80 ms. Task A requires 8 ms to run and must
run every 20 ms and Task B requires 4 ms to run and must run every 40 ms.
Task A has the highest priority and Task C, the lowest priority. (Note: Task C
is a low-frequency cyclic task requiring more than the tick period of 20 ms to
execute). Then we have the preemptive task schedule as depicted in the task
activation diagram of Fig. 6.5.
Task C is preempted three times each time it is ready to run (e.g. at 20ms,
40ms and 60ms). This is what happens:
1. Task A starts and completes, followed by Task B.
2. Task C then starts and runs for 8ms before Task A, of a higher priority,
becomes ready and preempts Task C.
3. When Task A finishes at 28ms, the only task suspended is Task C, so Task
C continues to run, but at 40ms, Task A and Task B become ready again.
The higher priority Task A preempts Task C.
4. When Task A finishes, the next task that runs is Task B since it has higher
priority than Task C.
5. When Task B finishes at 52ms, Task C, the only task suspended, continues
to run, but at 60ms, Task A becomes ready, and preempts Task C.
6. When Task A finishes, Task C, the only task suspended, runs to completion
as no other higher priority task becomes ready to preempt it yet again until
80ms. And so on.
For Task A and Task B, their elapse time equals the CPU time (time required
to run) because their execution is never preempted. For Task C, its being
[Fig. 6.5 shows Tasks A, B and C on a time axis from 0 ms to 100 ms under preemptive scheduling.]

Task            | A    | B    | C
start delay     | 0ms  | 8ms  | 12ms
elapse time     | 8ms  | 4ms  | 58ms
deadline        | 20ms | 40ms | 80ms
completion time | 8ms  | 12ms | 70ms
Fig. 6.5. Preemptive scheduling: Task activation diagram for Example 6.4.3
preempted three times following every start of execution results in its elapse
time being much greater than its CPU time.
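The trace above can be reproduced with a simple millisecond-by-millisecond simulation. The sketch below is our own illustration, not from the text: at each tick it releases new jobs and runs the highest-priority task that still has work remaining, recording first completion times.

```python
def simulate_preemptive(tasks, horizon):
    """tasks: list of (name, period, cpu) in decreasing priority order,
    times in ms. Returns {name: first completion time in ms}."""
    remaining = {name: 0 for name, _, _ in tasks}
    finish = {}
    for t in range(horizon):
        # release a new job of every task whose period starts now
        for name, period, cpu in tasks:
            if t % period == 0:
                remaining[name] = cpu
        # run one ms of the highest-priority task with work left
        for name, _, _ in tasks:
            if remaining[name] > 0:
                remaining[name] -= 1
                if remaining[name] == 0 and name not in finish:
                    finish[name] = t + 1
                break
    return finish

# Example 6.4.3: A(20 ms, 8 ms) > B(40 ms, 4 ms) > C(80 ms, 30 ms)
finish = simulate_preemptive([("A", 20, 8), ("B", 40, 4), ("C", 80, 30)], 80)
```

The simulation yields the completion times of the table above: 8 ms for A, 12 ms for B, and 70 ms for C after its three preemptions.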
Notice that all task deadlines are met. But suppose Task C, which has the
longest period, is now set with the highest priority and Task B, the lowest
priority. Then we have the preemptive task schedule as depicted in the task
activation diagram of Fig. 6.6.
[Fig. 6.6 shows the schedule from 0 ms to 100 ms with the priorities reversed; Task A's missed deadlines are marked.]
Fig. 6.6. Preemptive scheduling when task priorities are different: Task activation diagram for Example 6.4.3
This time round, Task A sometimes misses its deadline, so there is usually
no justification for such a priority assignment. Generally, to schedule all
tasks in an attempt to get every task to meet its deadline, as in Fig. 6.5,
the task with the shortest period is assigned the highest priority (because it
has the most stringent timings), and so on. This is called rate monotonic
scheduling (RMS). More will be discussed about RMS later.
In the following, we examine four specific preemptive scheduling schemes.
Time Slice (Round-Robin). This preemptive scheduling scheme uses time
slicing, where each task is allocated a fixed length of time to execute continu-
ously. The time slice is usually defined by a number of clock ticks.
All tasks are kept in some ‘queue’, with priorities determined such that the
task at the head of the queue has the highest priority and the task at the tail
has the lowest. A task that is suspended is removed from the head and added
to the tail of the queue.
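The queue discipline just described maps naturally onto a double-ended queue. A minimal Python sketch of our own (the task names are illustrative):

```python
from collections import deque

def run_one_slice(queue):
    """queue: deque of task names, head = highest priority.
    Runs the head task for one time slice, then moves it to the tail."""
    task = queue[0]      # the task at the head has the highest priority
    # ... the task executes for one fixed time slice here ...
    queue.rotate(-1)     # suspended task: removed from head, added to tail
    return task

q = deque(["T1", "T2", "T3"])
ran = run_one_slice(q)   # runs T1; the queue becomes T2, T3, T1
```

Repeated calls cycle through the tasks, giving each one time slice per round.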
To illustrate, consider the following example.
Example 6.4.4. Assume a system has three independent tasks, 1, 2 and 3. If
Task 1 has the highest priority and Task 3 has the lowest priority, then we
have the time schedule as depicted in the task activation diagram of Fig. 6.7.
At the end of each time slice (i.e., its allocated time expires), the next task
allocated to the CPU is the one with the highest priority that is ready to run
(i.e., runnable). Or, if of equal priority, the task that has been delayed the
longest is run.
[Fig. 6.7 shows Tasks 1, 2 and 3 each running for one fixed time slice in turn, repeatedly, along a time axis.]
Fig. 6.7. Round-robin scheduling: Task activation diagram for Example 6.4.4
Preference. This preemptive scheduling scheme uses the relative preference
or importance of tasks meeting their deadlines. Such importance is specified by
the system designer. The tasks are ranked based on their relative importance,
without necessarily considering their technical characteristics such as their
rates of repetition.
As an illustration, consider the following example; this example highlights
a limitation of preference scheduling.
Example 6.4.5. Task P1 is deemed to be more important than Task P2 (in
meeting their deadlines). The task periods, CPU (execution) times and uti-
lizations are tabulated below:

Task | Period (Pi) | CPU Time (Ci) | Utilization (Ci/Pi)
P1   | 50ms        | 10ms          | 20%
P2   | 10ms        | 2ms           | 20%
The time schedule is depicted in the task activation diagram of Fig. 6.8.
[Fig. 6.8 shows the schedule from 0 ms to 60 ms; the deadlines missed by Task P2 are marked.]
Fig. 6.8. Preference scheduling: Task activation diagram for Example 6.4.5
Note that for Tasks P1 and P2, the processor utilizations are 10/50 or 20%,
and 2/10 or 20%, respectively. The total processor utilization is therefore 40%.
One may inspect the task activation diagram over the longer period of Task
P1 to verify these values.
The periodic deadlines for Task P2 are at 10ms, 20ms, ..., 60ms, .... But
the diagram (over the displayed time range) shows that Task P2 only com-
pletes execution at 12ms and 62ms, corresponding respectively to those task
repetitions runnable at 0ms and 50ms. In other words, these two repetitions
miss their deadlines.
One attempt to avoid missing deadlines is to split a task with a longer
period, as the next example illustrates.

Example 6.4.6. Referring to Example 6.4.5, we split Task P1 into smaller parts
by changing its period of 50 ms and execution time of 10 ms to a period of
5 ms and execution time of 1 ms. The resulting time schedule is depicted in
the task activation diagram of Fig. 6.9. Clearly, the deadlines are now all met.
[Fig. 6.9 shows the schedule from 0 ms to 30 ms, with both tasks meeting all their deadlines.]
Fig. 6.9. Preference scheduling: Task activation diagram for Example 6.4.6
Note however that we cannot always split a task into equal timing parts.
But this attempt implies that by making the period of Task P1 shorter
than that of Task P2 (or equivalently, increasing the frequency of Task P1 to
be higher than that of Task P2), we can schedule the two tasks concurrently.
Or, if we assign priority to each task such that the shorter its period, the higher
its priority, we might be able to schedule these tasks without their missing the
respective deadlines. This is the key idea of rate monotonic scheduling (RMS).
Rate Monotonic and Deadline Monotonic. In RMS, each task is assigned
its own unique priority as follows:
1. The priority is fixed and based on the repetition period of each task.
2. The shorter the period, the higher the task priority.
Deadline monotonic scheduling (DMS) is similar to RMS, except that the
priority is based on the task deadline of each task; the shorter the deadline,
the higher the task priority.
RMS may be remembered by the ‘shortest period first’ rule, whereas DMS,
the ‘shortest deadline first’ rule.
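Both rules reduce to a simple sort. The following Python sketch is our own illustration (the task representation is an assumption): priority number 1 is the highest, assigned by 'shortest period first' for RMS and 'shortest deadline first' for DMS.

```python
def assign_priorities(tasks, key):
    """tasks: list of dicts with 'name', 'period' and 'deadline' (ms).
    key: 'period' for RMS, 'deadline' for DMS.
    Returns {task name: priority number}, 1 = highest priority."""
    ordered = sorted(tasks, key=lambda t: t[key])
    return {t["name"]: i + 1 for i, t in enumerate(ordered)}

tasks = [{"name": "P1", "period": 50, "deadline": 50},
         {"name": "P2", "period": 10, "deadline": 10}]
rms = assign_priorities(tasks, "period")   # P2 (shortest period) is highest
```

With deadlines equal to periods, as in Assumption 6, the two rules give the same ordering; they differ only when some deadline is shorter than its period.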
The following example illustrates RMS.
Example 6.4.7. Assume Task P1 and Task P2 are defined as in Example 6.4.5,
i.e.,

Task | Period (Pi) | CPU Time (Ci) | Utilization (Ci/Pi)
P1   | 50ms        | 10ms          | 20%
P2   | 10ms        | 2ms           | 20%
Using RMS, Task P2 has a higher priority than Task P1. This means that
whenever both are ready, Task P2 will be scheduled for execution first. Thus,
the resulting schedule is as depicted in the task activation diagram of Fig.
6.10.
Notice that Task P2 preempts Task P1 whenever the former becomes
runnable (e.g. at 10ms and 60ms), but both tasks meet their own deadlines,
assumed to be equal to their respective periods (Assumption 6, see p. 157).

[Fig. 6.10 shows the schedule from 0 ms to 50 ms; Task P2 runs first whenever both tasks are ready.]
Fig. 6.10. RMS: Task activation diagram for Example 6.4.7
Specifically, for Task P1, by observing its execution within the first period,
start time = 2ms, elapse time = (14-2)ms = 12ms, CPU time = (8+2)ms =
10ms and completion time = (14-0)ms = 14ms. Its task deadline is the period
of 50ms. The deadline is met since the completion time is less than or equal
to the deadline.
6.5 Aperiodic Tasks: How to Deal with Them?
Realistically, not all tasks are periodic. So we need to deal with aperiodicity or
sporadicity in a task in order that it is also amenable to scheduling analysis,
such as by the way we have been doing through task activation diagrams. We
can do so by rendering the task ‘periodic’ using the following two options:
1. Option 1 (Conservative)
Assume the worst case that the sporadic tasks occur at their maximum
possible rates (i.e., shortest periods).
Such an option is vital for a hard real-time system because it ensures that,
if a sporadic task is analyzed to be schedulable in the system, it will indeed
meet its (varying) deadline. But the actual utilization of the processor may
be lower.
2. Option 2 (More Optimistic)
Assume that the sporadic tasks occur at their average possible rates.
This can result in a task not meeting its deadline although the scheduling
analysis says otherwise. This condition is called transient overload, which
may be acceptable for a task with a soft deadline.
In general, which option to use is an engineering decision.
For either option, we need to estimate the rate at which the sporadic task may
occur, either maximum or average. This is achieved by careful examination
and study of the system. Take the following example.
Example 6.5.1. A sporadic 'replenishing' task may be generated each time a
supply tube of parts on a production line becomes low and needs replenishing.
Then suppose an analysis of the system shows this occurs on average once
every 5 seconds when the production line is operating normally and the tube is
filled each time it is replenished. The analysis also shows that the (worst-case)
highest rate of occurrence of the sporadic task is observed to occur once every
1.5 seconds when the tube is only partially refilled each time it is replenished.
Here, the decision on which option to use depends on whether this
sporadic task has a hard deadline or a soft deadline. If it is a hard
deadline, then schedule it with the worst-case periodic rate (1.5 s). If it is a
soft deadline, then consider using the average rate (5 s), or lower (> 5 s) if tight
deadlines of the other tasks are causing scheduling problems elsewhere in the
system.
6.6 A Note on Assigning Priorities in Ada
In Ada, a priority can be associated with a task using a pragma. For example,
Fig. 6.11 shows an Ada pseudocode assigning a static priority N, where N is
an integer, to a task, ACCESS_DATA, where preemptive scheduling is used.
In Ada, the higher the value of N, the higher the task priority.
Fig. 6.11. Assigning task priority in Ada
The default is that all Ada tasks have the same priority. All Ada tasks not
assigned a priority are given the same priority, lower than any assigned priority.
In general, we need to check the context in which priority values are defined
and assigned to tasks. In one case, such as in assigning priorities in Ada, the
higher the assigned value, the higher the task priority. In another, it could be
that the lower the value, the higher the priority.
6.7 Preemptive Scheduling for Independent Tasks
Up to now, we have been using task activation diagrams for schedulability
analysis of tasks on a single processor under some basic assumptions (see
p. 157). By this, we simply draw and inspect the diagrams to see if all task
deadlines are met.
We now examine formal tests for task schedulability in a single-processor
system that uses RMS. The tests assume the worst-case, where ini-
tially, all tasks are simultaneously runnable. If schedulability holds in this case,
then it will hold under whatever combinations of task 'runnable' times.
6.7.1 Rate Monotonic Schedulability: A Simple Test
This simple test uses the following sufficiency result proved by Liu and Layland
(1973), as reported in [1, p. 471]:
Consider a system with n concurrent independent tasks, each of period P_i
and requiring CPU time C_i, 1 ≤ i ≤ n. All n tasks are schedulable under
RMS if

C_1/P_1 + C_2/P_2 + ... + C_n/P_n ≤ n(2^(1/n) − 1).

1. Order the tasks according to RMS, so that task T_1 has the shortest period
(highest priority) and task T_n the longest.
2. For each task T_i, compute its completion times iteratively:
a) S_i0 = C_1 + C_2 + ... + C_i is the completion time for task T_i if no task will preempt T_i during its execution;
b) S_i(z+1) = R_i(S_iz), z ≥ 0, is the completion time for T_i if higher priority tasks preempt it during its execution, where
i. R_i(t) = C_i + Σ_{j=1}^{i−1} C_j × ⌈t/P_j⌉ is the cumulative demand for processing in the duration t, and
ii. the ceiling function ⌈·⌉ for a number a.b (e.g. 5.4, a = 5, b = 4) is defined by ⌈a.b⌉ = a if b = 0, and a + 1 otherwise. (6.2)
3. If there exists a value S_iz = S_i(z+1) ≤ D_i (D_i is the deadline for task T_i), then T_i is schedulable with
worst-case completion time S_iz.
Fig. 6.14. A full schedulability test for meeting task deadlines using RMS for independent tasks
If the S_i(z+1) formula returns the same value as S_iz, it means there is no new
release of any higher priority task j in the duration S_iz, so S_i(z+1) = S_iz, which
clearly is the worst-case completion time of task i.
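Both the simple utilization test and the completion-time iteration of Fig. 6.14 are short enough to code directly. The sketch below is our own Python rendering (function names are ours); tasks are listed in decreasing priority (RMS) order, and it is applied to the task set of Example 6.7.3.

```python
import math

def rms_utilization_ok(C, P):
    """Liu-Layland sufficiency test: sum(Ci/Pi) <= n(2^(1/n) - 1)."""
    n = len(C)
    return sum(c / p for c, p in zip(C, P)) <= n * (2 ** (1 / n) - 1)

def completion_time(C, P, i):
    """Worst-case completion time of task i (0-based), tasks in decreasing
    priority order, by iterating S_i(z+1) = R_i(S_iz) to a fixed point."""
    s = sum(C[:i + 1])                        # S_i0: no preemption
    while True:
        s_next = C[i] + sum(C[j] * math.ceil(s / P[j]) for j in range(i))
        if s_next == s:                       # no new higher-priority release
            return s
        s = s_next

# Example 6.7.3's tasks P3, P2, P1: (C, P) = (10, 30), (10, 40), (12, 50)
C, P = [10, 10, 12], [30, 40, 50]
ok = rms_utilization_ok(C, P)    # 82% exceeds 3(2^(1/3)-1), about 78%
times = [completion_time(C, P, i) for i in range(3)]
```

Here the simple test is inconclusive (utilization 82% exceeds the bound), and the full test gives completion times 10, 20 and 52 ms, so the third task misses its 50 ms deadline, as worked out below.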
To explain the method, reconsider the system of Example 6.7.2 in the fol-
lowing.
Example 6.7.3. Consider three tasks with repetition rates of 30ms, 40ms and
50ms and processing requirements of 10ms, 10ms and 12ms respectively. Recall
that it is assumed task deadline D; = task period P; (see p. 157, Assumption
6). Do not confuse Pi with P_i; the former is a task name, the latter refers to
the period of the task assigned priority number i.
Following the steps detailed in Fig. 6.14, we have:
Step 1: Order the tasks according to RMS.
Task | i | CPU Time (Ci) | Period (Pi) | Utilization (Ci/Pi)
P3   | 1 | 10ms          | 30ms        | 33%
P2   | 2 | 10ms          | 40ms        | 25%
P1   | 3 | 12ms          | 50ms        | 24%
                                 Total | 82%
Steps 2 and 3: Compute the completion times S_iz, and check if S_iz ≤ D_i.

S_10 = C_1 = 10,
S_11 = C_1 = 10 = S_10 ≤ 30 (= P_1), so Task 1 is schedulable.

S_20 = C_2 + C_1 = 10 + 10 = 20,
S_21 = C_2 + C_1 × ⌈S_20/P_1⌉ = 10 + 10⌈20/30⌉
     = 20 = S_20 ≤ 40 (= P_2), so Task 2 is schedulable.

S_30 = C_3 + C_2 + C_1 = 12 + 10 + 10 = 32,
S_31 = C_3 + C_2 × ⌈S_30/P_2⌉ + C_1 × ⌈S_30/P_1⌉
     = 12 + 10⌈32/40⌉ + 10⌈32/30⌉
     = 12 + 10 + 10 × 2
     = 42,
S_32 = C_3 + C_2 × ⌈S_31/P_2⌉ + C_1 × ⌈S_31/P_1⌉
     = 12 + 10⌈42/40⌉ + 10⌈42/30⌉
     = 12 + 10 × 2 + 10 × 2
     = 52,
S_33 = C_3 + C_2 × ⌈S_32/P_2⌉ + C_1 × ⌈S_32/P_1⌉
     = 12 + 10⌈52/40⌉ + 10⌈52/30⌉
     = 12 + 10 × 2 + 10 × 2
     = 52 = S_32 > 50 (= P_3), so Task 3 does not meet its deadline.
Instead of sketching and inspecting the task activation diagram (see Fig. 6.13
for this example system), we can formally (and more easily) analyze task
schedulability using this three-step method.
Consider another example.
Example 6.7.4. A factory control system has three independent control tasks.
One must be repeated every 11 ms and requires 6 ms of processing, one must
repeat every 42 ms and requires 5 ms of processing, whilst the third must repeat
every 30 ms and requires 10 ms of processing time.
Again, following the steps detailed in Fig. 6.14, we have:
Step 1: Order the tasks according to RMS.
Task i | Pi   | Ci
1      | 11ms | 06ms
2      | 30ms | 10ms
3      | 42ms | 05ms
Steps 2 and 3: Compute the completion times S_iz, and check if S_iz ≤ D_i,
with D_i = P_i (assumed).

S_10 = C_1 = 6,
S_11 = C_1 = 6 = S_10 ≤ 11 (= P_1), so Task 1 is schedulable.

S_20 = C_2 + C_1 = 10 + 6 = 16,
S_21 = C_2 + C_1 × ⌈S_20/P_1⌉ = 10 + 6⌈16/11⌉ = 10 + 6 × 2 = 22,
S_22 = C_2 + C_1 × ⌈S_21/P_1⌉ = 10 + 6⌈22/11⌉
     = 22 = S_21 ≤ 30 (= P_2), so Task 2 is schedulable.

S_30 = C_3 + C_2 + C_1 = 5 + 10 + 6 = 21,
S_31 = C_3 + C_2 × ⌈S_30/P_2⌉ + C_1 × ⌈S_30/P_1⌉
     = 5 + 10⌈21/30⌉ + 6⌈21/11⌉
     = 5 + 10 + 6 × 2
     = 27,
S_32 = 5 + 10⌈27/30⌉ + 6⌈27/11⌉
     = 5 + 10 + 6 × 3
     = 33,
S_33 = 5 + 10⌈33/30⌉ + 6⌈33/11⌉
     = 5 + 10 × 2 + 6 × 3
     = 43,
S_34 = 5 + 10⌈43/30⌉ + 6⌈43/11⌉
     = 5 + 10 × 2 + 6 × 4
     = 49,
S_35 = 5 + 10⌈49/30⌉ + 6⌈49/11⌉
     = 5 + 10 × 2 + 6 × 5
     = 55,
S_36 = 5 + 10⌈55/30⌉ + 6⌈55/11⌉
     = 5 + 10 × 2 + 6 × 5
     = 55 = S_35 > 42 (= P_3), so Task 3 does not meet its deadline.
6.7.3 Deadline Monotonic Schedulability
In general, the deadline of a task D may not exceed its period P, i.e., D ≤ P.
So, one might decide to assign a higher priority to a task with a shorter task
deadline instead of task period as in RMS. This is called deadline monotonic
scheduling (DMS), as mentioned in Section 6.4.2.
In testing schedulability for DMS, the method shown in Fig. 6.14 applies
except that in Step 1, the independent tasks are now prioritized according to
DMS.
DMS often leads to a different schedulability result, i.e., it is possible that
given a system, DMS might lead to a (more) schedulable result, but the RMS
scheme might not, and conversely. Take the following example.
Example 6.7.5. Consider the system of Example 6.7.4, but with tasks of pe-
riods 11ms, 30ms and 42ms also assigned deadlines 10ms, 23ms and 6ms re-
spectively. Then applying DMS, the tasks are ordered as tabulated below.

Task i | Pi   | Ci   | Di
1      | 42ms | 05ms | 06ms
2      | 11ms | 06ms | 10ms
3      | 30ms | 10ms | 23ms
Following Steps 2 and 3 detailed in Fig. 6.14, we get the schedulability
results tabulated for comparison with the results using RMS, as shown below.
Task (Pi, Ci, Di)  | DMS: i | SiF  | RMS: i | SiF
42ms, 05ms, 06ms   | 1      | 05ms | 3      | 55ms
11ms, 06ms, 10ms   | 2      | 11ms | 1      | 06ms
30ms, 10ms, 23ms   | 3      | 33ms | 2      | 22ms
In this example, the DMS results are arguably worse: comparing with the
RMS results with respect to the task deadlines, only the 42ms-period task is
schedulable (i.e., S_1F ≤ D_1) under DMS; whereas under RMS, only that same
task is not schedulable (S_3F > D_3).
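The comparison table can be reproduced by applying the completion-time iteration of Fig. 6.14 under both orderings. The sketch below is our own Python rendering (the task names are illustrative, not from the text):

```python
import math

def completion_times(tasks, key):
    """tasks: list of (name, period, cpu, deadline) in ms; key orders
    priorities: 'period' for RMS, 'deadline' for DMS.
    Returns {name: (SiF, meets deadline?)}."""
    idx = {"period": 1, "deadline": 3}[key]
    ordered = sorted(tasks, key=lambda t: t[idx])
    out = {}
    for i, (name, _, c, d) in enumerate(ordered):
        s = sum(t[2] for t in ordered[:i + 1])            # S_i0
        while True:                                       # iterate R_i
            s_next = c + sum(t[2] * math.ceil(s / t[1]) for t in ordered[:i])
            if s_next == s:
                break
            s = s_next
        out[name] = (s, s <= d)
    return out

tasks = [("T11", 11, 6, 10), ("T30", 30, 10, 23), ("T42", 42, 5, 6)]
dms = completion_times(tasks, "deadline")
rms = completion_times(tasks, "period")
```

Under DMS the completion times come out as 5, 11 and 33 ms (only the 42ms-period task meets its deadline); under RMS they are 6, 22 and 55 ms (only that task misses it), matching the table above.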
Note that the last example only illustrates the different schedulability re-
sults that one might get from RMS and DMS, and should not be read as one
scheme being superior to the other.
With this, we conclude the study on preemptive scheduling for independent
tasks.