Real-Time Systems: Stefan M. Petters
Lecture Content
Definition
Real-Time Systems
Is there a pattern?
What’s needed of an RTOS
[Figure: a triggering event releasing a task on a timeline]
© NICTA 2007/2008 No: 10
Soft Real-Time
[Figure: a triggering event releasing a task on a timeline]
[Figure: example gain function — the gain drops after the deadline following the triggering event]
Weakly Hard Real-Time Systems
Requirement, Specification, Verification
Scheduling in Real-Time Systems
Overview
Requirements
Specification
Task Model
• Periodic tasks
– Time-driven. Characteristics are known a priori
– Task τi is characterized by (Ti, Ci)
– E.g.: Task monitoring temperature of a patient in an ICU.
• Aperiodic tasks
– Event-driven. Characteristics are not known a priori
– Task τi is characterized by (Ci, Di) and some probabilistic profile
for arrival patterns (e.g. Poisson model)
– E.g.: Task activated upon detecting change in patient’s condition.
• Sporadic Tasks
– Aperiodic tasks with known minimum inter-arrival time (Ti, Ci)
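The three task classes above can be sketched as minimal records; the Python names and fields here are my own illustration, not the lecture's notation:

```python
# Sketch of the task model: periodic, sporadic, and aperiodic tasks.
# Field names (T, C, D, rate) mirror the slide's parameters; the classes
# themselves are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    T: float      # period (time-driven, known a priori)
    C: float      # worst-case execution time

@dataclass
class SporadicTask:
    T: float      # minimum inter-arrival time
    C: float      # worst-case execution time

@dataclass
class AperiodicTask:
    C: float      # worst-case execution time
    D: float      # relative deadline
    rate: float   # probabilistic arrival profile, e.g. a Poisson rate

# E.g. the ICU temperature monitor: sample every 100 ms, 5 ms of work
icu_monitor = PeriodicTask(T=100.0, C=5.0)
```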
Task Constraints
• Deadline constraint
• Resource constraints
– Shared access (read-read), Exclusive access (write-x)
– Energy
• Precedence constraints
– τ1 ⇒ τ2: Task τ2 can start executing only after τ1
finishes its execution
• Fault-tolerant requirements
– To achieve higher reliability for task execution
– Redundancy in execution
Preemption
• Cooperative preemption?
– Applications allow preemption only at given points
– Reduces the number of preemptions
– Increases latency for high-priority tasks
Fixed Priority Scheduling
[Figure: FPS schedule of tasks τ1–τ3]

          Priority   C    T    D
Task τ1      1       5   20   20
Task τ2      2       8   30   20
Task τ3      3      15   50   50
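Fixed-priority dispatch for a task set like the one above can be sketched as a small simulation (my own illustration, ignoring release jitter and deadline checks):

```python
# Sketch: fixed-priority preemptive scheduling (lower number = higher
# priority). Simplified and illustrative, not the lecture's code.

def fps_schedule(tasks, horizon):
    """tasks: list of (priority, C, T), sorted highest priority first.
    Returns a timeline: the index of the task running in each time unit
    (None = idle)."""
    remaining = [0] * len(tasks)   # work left in each task's current job
    timeline = []
    for t in range(horizon):
        for i, (_, C, T) in enumerate(tasks):
            if t % T == 0:         # new job released at each period start
                remaining[i] = C
        # dispatch the highest-priority task with pending work
        running = next((i for i in range(len(tasks)) if remaining[i] > 0), None)
        if running is not None:
            remaining[running] -= 1
        timeline.append(running)
    return timeline

# Task set from the table: τ1=(1,5,20), τ2=(2,8,30), τ3=(3,15,50)
tl = fps_schedule([(1, 5, 20), (2, 8, 30), (3, 15, 50)], 20)
```

In the first 20 time units τ1 runs for 5, then τ2 for 8, then τ3 gets the remaining 7.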
Earliest Deadline First (EDF)
• Dynamic priorities
• Scheduler picks the task whose deadline is due next
• Advantages:
– Optimal if the system is not overloaded
– Reduces the number of task switches
• Drawbacks:
– Deteriorates badly under overload
– Needs a smarter scheduler
– Scheduling decisions are more expensive
[Figure: EDF schedules of tasks τ1–τ3]
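The EDF dispatch rule can be sketched in a few lines; the job representation (name, absolute deadline) is an illustrative assumption:

```python
# Minimal sketch of EDF dispatching: among the ready jobs, run the one
# whose absolute deadline is due next.

def edf_pick(ready_jobs):
    """ready_jobs: list of (name, absolute_deadline).
    Returns the name of the job to dispatch, or None if the queue is empty."""
    if not ready_jobs:
        return None
    return min(ready_jobs, key=lambda job: job[1])[0]

# With these three ready jobs, the job due at time 20 is dispatched first.
picked = edf_pick([("tau1", 20), ("tau2", 30), ("tau3", 50)])
```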
FPS vs. EDF
[Figure: FPS vs. EDF schedules of tasks τ1–τ3]

          Priority   C    T    D
Task τ1      1       5   20   20
Task τ2      2       8   30   20
Task τ3      3      15   40   40
Time Triggered/Driven Scheduling
• Advantages:
– Very simple to implement
– Very efficient / little overhead (in suitable cases)
• Disadvantages:
– Big latency if event rate does not match base rate
– Inflexible
– Potentially big base rate (many scheduling decisions) or
hyperperiod
[Example cyclic schedule: τ1 τ2 τ1 τ3 τ4 τ1 τ2 τ1 τ3]
Message Based Synchronisation
[Figure: tasks τ1–τ4 synchronised via messages]
Overload Situations
Overload Situations in FPS
[Figure: FPS schedules of tasks τ1–τ3 before and after overload]

          Priority   C    T    D
Task τ1      1       5   20   20
Task τ2      2      12   20   20
Task τ3      3      15   50   50
Overload Situations in EDF
[Figure: EDF schedules of tasks τ1–τ3 under overload]
Priority Inversion
• Pathfinder example
[Figure: priority inversion among tasks τ1–τ5]
• 2 shared resources
• One shared by 3 (nested by one)
• One shared by 2
Non-Preemptable Critical Section
• GOOD
– Simple
– No deadlock
– No unbounded priority inversion
– No prior knowledge about resources needed
– Each task is blocked by at most one lower-priority task
– Works with fixed and dynamic priorities (especially good for short critical sections with high contention)
• BAD
– Tasks blocked even when no contention exists.
Priority Inheritance
[Figure: priority inheritance among tasks τ1–τ5]
Basic Priority Ceiling Protocol
[Figure: basic priority ceiling protocol among tasks τ1–τ5]
Immediate Priority Ceiling Protocol
Implementation Comparison
• Non-preemptable critical sections
– Easy to implement: either block interrupts, or use a syscall that does so on behalf of the task
• Priority Inheritance
– Fairly straightforward, however requires various references (e.g.
which thread is holding a resource)
• Basic Priority Ceiling
– On top of those references, requires the application designer to explicitly identify, at the first request of a nested sequence, which resources will be requested later
• Immediate priority ceiling
– Very easy to implement: only requires a ceiling associated with each resource mutex (something that may be automated if all tasks are known)
– Alternatively, a server task encapsulating the critical section
Reflective/Feedback-based Scheduling
• Adaptive systems
• By definition soft real-time
• Adjusts scheduling based on information about change
• Capable of better coping with “the unknown”
• Connects quite well with adaptive applications
Schedulability Analysis
Critical Instant
Response Time Analysis
[Figure: response times of tasks τ1–τ3]

          Priority   C    T    D
Task τ1      1       5   20   20
Task τ2      2       8   30   20
Task τ3      3      15   50   50
Formal RTA
• Blocking time
• Jitter
• Pre-emption delay
w_i^{n+1} = C_i + B_i + Σ_{∀j<i} ⌈(J_j + w_i^n) / T_j⌉ · (C_j + δ_{i,j})
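The response-time recurrence can be iterated to a fixed point; below is a minimal sketch with jitter, blocking, and preemption delay set to zero (J = B = δ = 0):

```python
# Sketch of response-time analysis: iterate
#   w^{n+1} = C_i + sum_{j<i} ceil(w^n / T_j) * C_j
# to a fixed point. Jitter, blocking, and preemption delay are omitted
# for simplicity.
from math import ceil

def response_time(i, tasks):
    """tasks: list of (C, T) in priority order (index 0 = highest priority).
    Returns the worst-case response time of task i."""
    C_i = tasks[i][0]
    w = C_i
    while True:
        w_next = C_i + sum(ceil(w / T_j) * C_j for C_j, T_j in tasks[:i])
        if w_next == w:       # fixed point reached
            return w
        w = w_next

# Task set from the earlier FPS example: (C, T) = (5,20), (8,30), (15,50)
tasks = [(5, 20), (8, 30), (15, 50)]
R = [response_time(i, tasks) for i in range(3)]  # → [5, 13, 46]
```

All three response times are below the respective deadlines (20, 20, 50), so the set is schedulable under FPS.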
Rate Monotonic Analysis
μ = Σ_{∀i} C_i / T_i ≤ n · (2^{1/n} − 1)
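The Liu & Layland utilisation bound above gives a quick, sufficient (but not necessary) schedulability test:

```python
# Sketch of the rate-monotonic utilisation test: a task set is schedulable
# under RM if mu = sum(C_i / T_i) <= n * (2**(1/n) - 1).

def rm_bound(n):
    """Utilisation bound for n tasks; tends to ln 2 ≈ 0.693 as n grows."""
    return n * (2 ** (1 / n) - 1)

def rm_schedulable(tasks):
    """tasks: list of (C, T). Sufficient, not necessary."""
    mu = sum(C / T for C, T in tasks)
    return mu <= rm_bound(len(tasks))

ok = rm_schedulable([(5, 20), (8, 30), (15, 50)])
```

For the earlier example set, μ ≈ 0.82 exceeds the n = 3 bound of ≈ 0.78, so this test fails even though response-time analysis shows all deadlines are met — illustrating that the bound is only sufficient.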
Graphical EDF Analysis
[Figure: graphical EDF analysis — computation requests, events, and deadlines of tasks 1–3 over time]
Worst Case Execution Time Analysis
Problem Definition
[Figure: distribution of execution times]
Problem Definition contd
[Figure: execution-time distributions]
Is it a Problem?
Is it a Problem? contd
[Diagram: WCET tool flow — the program feeds structural analysis, low-level analysis, and constraint/flow analysis, which feed the WCET computation]
Structural Analysis
• Object Code
– Pros:
– All compiler optimisations are done
– This is what really is running, no trouble with
macros, preprocessors
– Cons:
– Needs to second guess what all the variables
meant
– A lot of the original information is lost
• E.g. multiple conditions, indirect function calls,
object member functions
Structural Analysis contd
• Assembly Code
– Pros:
– All compiler optimisations done
– Cons:
– Same as Object Code +
– Potentially still some macros
• Source Code
– Pros:
– All information the user has put there is there
– Structure in pure form (e.g. multiple loop
continue conditions, object member functions,
indirect calls)
– Cons:
– Trouble with macros, preprocessors etc.
– Needs to second guess what the compiler will
do
Multiple Continue Conditions Explained

A loop whose continue condition has two parts (first, second) may look like:

for (; first condition; ) {
    for (; other cond; ) {
        Func();
    }
}
Flow information
Flow Info Characteristics

do {                     // J: loop bound max = 10
    if (...)             // A
        do {             // H: loop bound max = 20
            if (...)     // B
                ...      // C
            else
                ...      // D
            if (...)     // E
                ...      // F
            else
                ...      // G
        } while (...);   // H
    else
        ...              // I
} while (...);           // J

Flow information forms a hierarchy of path sets:
• Structurally possible flows (infinite)
• Basic finiteness (loop bounds: max = 10, max = 20)
• Statically allowed (flow facts such as samepath(D,G))
• Actual feasible paths
A WCET found on one of the larger sets is an overestimation; a WCET found on the actual feasible paths is the desired result.
Constraints Generated

For the control-flow graph of Foo() (blocks A–G, execution counts X):
• Start and end condition: Xfoo = 1, Xend = 1
• Program structure:
– XA = XfooA + XGA
– XAB = XA
– XBC + XBD = XB
– XE = XCE + XDE
• Loop bounds: XA <= 100
• Other flow information: XC + XF <= XA
Hardware
Static Analysis
Measurement Based Analysis
Path Based Computation
Tree Representation

[Figure: syntax-tree representation — a Sequence node with Alt and Loop children covering basic blocks 1–9]
IPET Calculation

• WCET = max Σ (x_entity · t_entity)
– where each x_entity satisfies all constraints
• Block times: tA = 7, tB = 5, tC = 12, tD = 2, tE = 4, tF = 8, tG = 20
• Constraints: Xfoo = 1, XA = XfooA + XGA, XAB = XA, XBC + XBD = XB, XE = XCE + XDE, XA <= 100, XC + XF = 100
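For a graph this small, the IPET maximisation can be brute-forced; the sketch below is an illustrative stand-in for the ILP solver, using the block times from the slide and the earlier slide's flow fact XC + XF <= XA:

```python
# Illustrative brute-force IPET: maximise sum(x_entity * t_entity) subject
# to the flow constraints. A real tool uses ILP; here the loop bound
# (XA <= 100) is small enough to enumerate. Uses the flow fact
# XC + XF <= XA from the constraints slide.

def ipet_wcet():
    t = {"A": 7, "B": 5, "C": 12, "D": 2, "E": 4, "F": 8, "G": 20}
    best = 0
    for xA in range(1, 101):              # loop bound: XA <= 100
        for xC in range(xA + 1):          # XBC + XBD = XB = XA
            xD = xA - xC
            for xF in range(xA - xC + 1): # flow fact: XC + XF <= XA
                # A, B, E and G execute once per loop iteration
                cost = (xA * (t["A"] + t["B"] + t["E"] + t["G"])
                        + xC * t["C"] + xD * t["D"] + xF * t["F"])
                best = max(best, cost)
    return best

wcet = ipet_wcet()
```

The maximum is reached with 100 iterations, each taking the A, B, C, E, G path.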
Calculation methods
Multiprocessor/Multithreaded Real-Time Systems
WHY
• Performance
– Responsiveness in the presence of many
external events
• Throughput
– Managing continuous load
• Fault tolerance
– Managing bugs, HW faults
• Reliability
– Ensuring uptime, HW/SW upgrades …
Hardware
• Distributed System
– Latency in communication, loosely coupled
SMT

Almost an SMT: [image taken from http://www.tommesani.com/images/P3Architecture.jpg]
SMP

[Figure: SMP — CPUs with private caches sharing memory]

Distributed System

[Figure: distributed system — nodes of CPU, caches, memory, and NIC connected by a network]
Issues
• Resource contention
– Execution units
– Caches
– Memory
– Network
• Adding a CPU does not necessarily help
– Example: doubling the load while going from 1 to 2 CPUs
Solutions??
• Partitioning
– Resource contention still there!
– Assignment using heuristics
• Non partitioning
– mostly theoretical so far
– Assumptions:
• Zero preemption cost
• Zero migration cost
• Infinite time slicing
– These assumptions don’t translate into reality
– An acceptance test combined with no task migration is one way to make it work
Solutions??
Non-Preemptive
• But!!!
– Less efficient processor use
– Anomalies: response time can increase with
• Changing the priority list
• Increasing number of CPUs
• Reducing execution times
• Weakening the precedence constraints
– Bin-packing problem (NP-hard)
– Theoretically: time slicing into small quantums (PFAIR),
but practically useless, as preemption and task
migration overhead outweigh gains of Multiprocessors.
And now?
• No global solution.
• Partitioning and reducing it to single CPU problem
good, but still contention of resources.
• Next step: After figuring out how to do the
scheduling, what about preemption delay?
• Industry works with SMP/SMT, but most often on
a very ad hoc basis.
• Active and unsolved research area
• Why does it work on non-RT?
– Running the “wrong” task is not critical.
Real-Time vs. General-Purpose OS
Why?
How?
Separate Resource Allocation and Dispatching

• Observation: scheduling consists of two distinct questions:
– Resource allocation: how much of the resources to allocate to each process
– Dispatching: when to give each process the resources it has been allocated
• Existing schedulers integrate their management
– Real-time schedulers implicitly separate them somewhat via job admission

[Figure: taxonomy spanned by resource allocation and dispatching, each ranging from unconstrained to constrained — Hard Real-Time (both constrained), Missed-Deadline SRT, Rate-Based SRT, Soft Real-Time, and CPU-/I/O-bound Best Effort (both unconstrained)]
Rate-Based Earliest Deadline Scheduler

• Basic idea
– EDF provides hard guarantees
– Varying rates and periods provide flexibility
– Programmable timer interrupts guarantee isolation between processes
• RBED policy (a RAD scheduler using rate and period to control resource allocation and dispatching)
– Resource allocation: target rate-of-progress for each process (Σ ≤ 100%)
– Dispatching: period based on process timeliness needs
• RBED mechanism
– Rate-enforcing EDF: EDF + programmable timer interrupts
– rate = utilization; WCET = rate × period

[Figure: the scheduling policy answers “how much?” (rate) and “when?” (period); the runtime system dispatches with timers, blocking, etc.]

[Figure: cumulative CPU time of an HRT process and a BE process over time; a new BE process enters]
Adjusting Rates at Runtime

[Figure: cumulative CPU time of an HRT process and BE processes 1 and 2 over time; a new BE process enters]

EDF vs. RBED:
• EDF: period and WCET are specified per task
– Ti has sequential jobs Ji,k
– Ji,k has release time ri,k, period pi, deadline di,k
– ri,k = di,k-1, and di,k = ri,k + pi
– ui = ei/pi and U = Σ ui
• RBED: period and WCET are specified per job
– Ti has sequential jobs Ji,k
– Ji,k has release time ri,k, period pi,k, deadline di,k
– ri,k = di,k-1, and di,k = ri,k + pi,k
– ui,k = ei,k/pi,k and U = Σ ui,k
Two Observations 1
[Figure: progress vs. time, with the deadline marked]
Increasing Rate (= increasing WCET)
[Figure: progress vs. time — a task can never be in the marked region if resources are available]
RBED Theory Summary
When To Allocate Slack?
• Solution: allocate slack as early as possible
• Solution: allocate slack to the task with the earliest deadline
How To Use Future Slack?
• Solution: borrow resources (potential slack) from the next job to meet the current deadline
• Solution: back-donate slack to tasks that borrowed from the future
Principles
BACKSLASH Conclusions
Bandwidth Enforcement in RBED and Power Management

[Figure: task model — job k is released at release time_k, executes within its reserved budget before its deadline, leaving dynamic slack until release time_k+1; a job overrunning its budget is cut off (X) at the deadline]
Modelling Time

[Figure: performance vs. fCPU — a CPU-bound application scales with frequency, a memory-bound application levels off]

Modelling Energy

E_tot = P_stat · T + ∫_0^T P_dyn dt

with dynamic terms such as χ_mem · f_mem · Δt + γ_1 · P_MC1 + γ_2 · P_MC2 + … + V² · (φ_1 · P_MC1 + φ_2 · P_MC2 + …)

E_dyn ∝ f · V² · Δt ∝ cycles · V²
Integration of DVFS
[Figure: two options for using slack — finish early and idle (wasted cycles), or stretch the job at a lower frequency until its deadline]
Algorithm
newEnergy = energyAtCurrentFrequency
newFrequency = currentFrequency
for frequency in frequencySetPoints
    if WCETAtSwitchedFrequency + switching.WCET < remainingBudget
       && switching.Energy + energyAtSwitchedFrequency < newEnergy
        newEnergy = switching.Energy + energyAtSwitchedFrequency
        newFrequency = frequency
if newFrequency != currentFrequency
    switchFrequency(newFrequency)
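A runnable sketch of the frequency-selection loop above; the per-frequency WCET and energy tables are hypothetical platform data, not the lecture's numbers:

```python
# Sketch of DVFS set-point selection: pick the lowest-energy frequency
# whose WCET (plus switching overhead) still fits the remaining budget.

def pick_frequency(current_f, set_points, wcet_at, energy_at,
                   switch_wcet, switch_energy, remaining_budget):
    """wcet_at / energy_at: dicts mapping frequency -> WCET / energy.
    Returns the frequency to run at (may be the current one)."""
    best_energy = energy_at[current_f]
    best_f = current_f
    for f in set_points:
        if (wcet_at[f] + switch_wcet < remaining_budget and
                switch_energy + energy_at[f] < best_energy):
            best_energy = switch_energy + energy_at[f]
            best_f = f
    return best_f

# Hypothetical platform: lower frequencies use less energy but more time.
wcet = {1000: 10, 800: 13, 600: 17}
energy = {1000: 50, 800: 38, 600: 30}
f = pick_frequency(1000, [800, 600], wcet, energy,
                   switch_wcet=1, switch_energy=2, remaining_budget=16)
```

With a budget of 16, 800 MHz fits (13 + 1 < 16) and saves energy, while 600 MHz would overrun the budget.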
Real World

[Figure: in practice the frequency alternates between set points f1 and f2 over time]
Switching Time Accounting
[Figure: timeline from release time_k to release time_k+1 — a t_switch interval before the execution time and another before the dynamic slack]
Basic Priority Ceiling Protocol
• Scheduling:
– Highest priority task in ready queue gets scheduled. Priority
exceptions as below
• Each resource has a ceiling priority equal to the highest priority of any task that uses it
• Allocation
– If the resource is locked, block
– If the (potentially inherited) priority of the task is higher than the ceilings of all resources currently locked by other tasks, allocate
– Else, block
• Priorities:
– If task τ1 blocked at resource held by task τ2:
• τ2 is lifted in priority to task τ1
• revert to original priority once all resources are released
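The allocation rule can be sketched as a check; the data layout (a dict of locked resources with their ceilings) is an illustrative assumption:

```python
# Sketch of the basic priority ceiling allocation rule: a task may lock a
# free resource only if its priority is strictly higher than the ceilings
# of all resources currently locked by other tasks (lower number = higher
# priority).

def can_allocate(task_priority, resource, locked):
    """locked: dict resource -> (holder, ceiling_priority)."""
    if resource in locked:
        return False                      # resource busy: block
    return all(task_priority < ceiling    # must beat every active ceiling
               for r, (holder, ceiling) in locked.items())

locked = {"R1": ("tau3", 2)}   # tau3 holds R1, whose ceiling priority is 2
ok_hi = can_allocate(1, "R2", locked)  # priority 1 beats ceiling 2: allocate
ok_lo = can_allocate(2, "R2", locked)  # priority 2 does not: block
```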