always be a positive number smaller than unity. On the other hand, relation (3-1-3) states that Q will always increase with s, and this will help us achieve the goals mentioned above. This requires a careful note at the first instance of each process. One optimistic approach is to assume that the instance will take a short time to execute. An opposite, pessimistic approach is to assume a large time for its execution. The impact of each of these assumptions is discussed more completely in section 3.2.

The solution to the second problem

The traditional MLF algorithm gives higher priorities to processes with smaller laxities. The laxity of a process is defined by the following relation:

L = D - E    (3-2-1)

where L is the laxity of a process, D is the remaining time to its deadline and E is its remaining execution time. The laxity of a process can be interpreted as the maximum amount of time it can stay in the ready queue of the operating system without the risk of missing its deadline, provided that it will not yield the CPU (after being dispatched) before it is completed. Assuming that the laxity of a process at time t1 is equal to L1, that its laxity at time t2 is equal to L2, and that the process has not received the CPU in the time interval between t1 and t2, we will find that L2 = L1 - (t2 - t1); that is, the laxity of a waiting process decreases as time passes.

To see how process priorities change with time, we use the partial calculus of the priority variable P = D/(E - D):

∂P/∂E = -D/(E - D)²    (3-2-4)

The above relation states that P will be smaller for processes with larger values of E. In other words, longer processes will get higher priorities. On the other hand we have:

∂P/∂D = E/(E - D)²    (3-2-5)

The above relation states that P increases with D; that is, as D (the remaining time to the deadline) decreases for a ready process, P decreases as well. In other words, the priority of each process increases during its residence in the ready queue. But in order to see the impact of E on this effect, we use another partial calculus:

∂²P/∂D∂E = -(E + D)/(E - D)³    (3-2-6)

Since E - D < 0 for any process with positive laxity, the above mixed derivative is positive; that is, ∂P/∂D is larger for larger values of E, so P decreases faster during the wait in the ready queue. In other words, relations (3-2-5) and (3-2-6) together show that the priority of a process grows faster if it has a longer remaining computation time.
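The claims made by relations (3-2-4) through (3-2-8) can be spot-checked numerically. The sketch below assumes the priority variable P = D/(E - D) used throughout this section (smaller, i.e. more negative, values mean higher priority); the numbers are illustrative, not taken from the paper.

```python
def priority(E, D):
    """Priority variable P = D / (E - D) for a process with remaining
    execution time E and remaining time to deadline D.  Assumes positive
    laxity (D > E), so the denominator is negative and P < -1."""
    return D / (E - D)

# (3-2-4): with a common deadline, the longer process gets a smaller P,
# i.e. a higher priority.
assert priority(6, 10) < priority(4, 10)

# (3-2-5): as the deadline approaches (D shrinks, E fixed), P decreases,
# i.e. the priority of a waiting process rises.
assert priority(4, 8) < priority(4, 10)

# (3-2-6): the drop in P per unit of waiting is steeper for the longer
# process, so its priority grows faster in the ready queue.
drop_short = priority(4, 10) - priority(4, 9)
drop_long = priority(6, 10) - priority(6, 9)
assert drop_long > drop_short

# (3-2-8): a very short process approaches the lowest priority, P = -1.
assert abs(priority(1e-9, 10) + 1.0) < 1e-9
```

The sign convention matters: P is always negative for a feasible process, and the running process is the one whose P is currently smallest.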
According to the above reasoning, we can achieve the following goals by making use of the proposed priority variable instead of the pure laxity.

- Among different processes with the same amount of remaining execution time, the highest priority will be assigned to the one with the closest deadline, just like the case of traditional MLF.

- Among different processes with simultaneous deadlines, the ones with longer remaining execution times will get higher priorities, similar to the case of traditional MLF.

- Among processes with simultaneous deadlines residing in the ready queue, the priority of the one with the longest execution time increases more rapidly, unlike the case of traditional MLF. This can easily be proved through some simple algebraic reasoning.

An important point to note is that it is the algebraic value of P that determines the priority of the corresponding process. In other words, among processes with identical deadlines, the one with a calculated P approximately equal to -1 has the lowest priority (this means that the process has a very small remaining execution time).

Here again, it is very important to consider a value for the execution time of the first instance of each process. One pessimistic way is to assume that the first instance will take a very large amount of time to execute (nearly equal to the remaining time to its deadline). Such an assumption causes the system to assign a high priority to the mentioned instance. The following relation gives the formal description:

lim(E→D) P = lim(E→D) D/(E - D) = -∞    (3-2-7)

The above relation states that the pessimistic assumption approximately assigns the highest possible priority to the first instance of each process among all processes with identical deadlines. In relation (3-2-7), E → D means that the only way for the process to meet its deadline is to get the CPU and run (with no interrupts) until completion.

Another way is to think optimistically and assume a very short execution time for each first instance. This leads to a low priority for the instance. The following relation formally states this:

lim(E→0) P = lim(E→0) D/(E - D) = -1    (3-2-8)

Thus, the first instance of each process gets the smallest possible priority among all processes with the same deadline. This may cause two or more instances of a frequently occurring process to simultaneously reside in the ready queue of the operating system. In such a case, we cannot use the total execution time of the first instance to determine those of the next instances. Therefore, we have to treat each of them as a first instance, until the real first instance is completed and its total execution time is determined. This means that we can treat each individual instance in an optimistic or a pessimistic way.

Another important point to note is the runtime overhead of calculating P on each process completion or preemption. This causes some extra overhead even though the proposed approach reduces the number of context switches in contrast to the traditional MLF (note that calculating P takes longer than calculating the laxity). One way to reduce this overhead is to predict the process that will take the CPU after the currently running process. To this end, we have to determine which process will get a priority larger than that of the running process in the shortest time.

As the first step, let us calculate the length of time a typical process requires to get a priority equal to that of the running process. This time can be calculated from the following relation:

(D - t_p)/(E - (D - t_p)) = (D_R - t_p)/((E_R - t_p) - (D_R - t_p))    (3-2-9)

In the above relation, t_p is the required time, E and D respectively show the remaining execution time of the considered process and the remaining time to its deadline, and E_R and D_R are the corresponding values for the running process. This relation can be simplified as follows:

(D - t_p)/(E - D + t_p) = (D_R - t_p)/(E_R - D_R)    (3-2-10)

Or

t_p² + β·t_p + γ = 0, where β = -L - E_R and γ = L·D_R - L_R·D    (3-2-11)

where L and L_R are respectively the laxities of the considered process and the running one. It is obvious that the running process has the highest priority (the smallest value of P) at the beginning of its execution. Thus, if we show the priority variable of the running process by
P_R, we will have P > P_R for every ready process. The immediate result is that

D/(E - D) > D_R/(E_R - D_R)    (3-2-13)

The above inequality is converted by some simple algebraic operations to

L·D_R - L_R·D > 0    (3-2-14)

or simply γ > 0. On the other hand, it is obvious that we will have β < 0 (assuming that L > 0). Thus we can conclude that equation (3-2-11) will normally have two positive roots. The smaller one is acceptable, because it shows the first time at which the running process should be substituted by one of the ready processes. Thus, t_p is given by the following relation:

t_p = (-β - √(β² - 4γ))/2    (3-2-15)

The process having the smallest t_p will be the one that takes the CPU after the running process, unless a process with a smaller t_p arrives before the determined time.

The simulation program runs both the Optimized MLF algorithm and the traditional MLF and counts the number of required context switches in each case. It also checks the ability of each of the two algorithms to guarantee all deadlines of a schedulable set of processes.

To compare the performance of the proposed algorithm to that of traditional MLF, the following procedure was taken. For each n ∈ [5, 20], 500 schedulable sets of random processes were produced, each having a length equal to n. Then the abilities of the two algorithms to schedule the sets of processes were compared, as well as the numbers of context switches required by each algorithm. (The exploited Optimized MLF algorithm is the pessimistic variation; observations show that this variation performs a little better, because the optimistic variation may cause some processes to miss their deadlines, as mentioned before.)

It is obvious that such a methodology does not evaluate the efficiency of the technique presented in section 3-1, but the mentioned technique has been proposed just to make MLF more applicable to highly dynamic environments, not to improve its performance. Therefore, there is no need to evaluate that technique.
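The next-process prediction given by relations (3-2-11) and (3-2-15) can be sketched as follows. The function names are illustrative, not from the paper; E and D denote the ready process and E_R and D_R the running one, as above.

```python
import math

def time_to_overtake(E, D, E_R, D_R):
    """Smaller root of t^2 + beta*t + gamma = 0 (relation 3-2-11): the
    time after which the waiting process (E, D) reaches the priority of
    the running process (E_R, D_R).  Returns infinity if no real root
    exists, i.e. the waiting process never catches up."""
    L, L_R = D - E, D_R - E_R              # laxities, relation (3-2-1)
    beta = -L - E_R
    gamma = L * D_R - L_R * D
    disc = beta * beta - 4.0 * gamma
    if disc < 0:
        return math.inf
    return (-beta - math.sqrt(disc)) / 2.0  # smaller root, relation (3-2-15)

def next_process(ready, running):
    """Predict which ready process takes the CPU after the running one:
    the process with the smallest t_p.  `ready` holds (E, D) pairs."""
    E_R, D_R = running
    return min(ready, key=lambda p: time_to_overtake(p[0], p[1], E_R, D_R))

# With a running process (E_R, D_R) = (3, 4), the waiting process (4, 10)
# reaches its priority after t_p = 2 time units: both then have P = -2.
```

For instance, `time_to_overtake(4, 10, 3, 4)` evaluates to 2.0, so only at that moment does the scheduler need to re-examine priorities rather than on every tick.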
As table 1 shows, the Optimized MLF outperforms MLF in most cases, and this superiority becomes more and more evident as the number of active processes increases. The reason for this effect is the smaller number of context switches required by Optimized MLF. Optimized MLF requires fewer context switches for two main reasons.

- In traditional MLF, the priority of the running process remains constant during its execution time. Therefore it will soon give its place to one of the ready processes, whose priorities increase during this interval. But in Optimized MLF, the priority of the running process slowly increases during its execution time; this reduces the number of context switches, because running processes do not release the CPU as soon as in the case of traditional MLF.

- Long processes have their priorities increased faster than those of short processes; this causes long processes to lose the CPU fewer times during their residence in the system.

As the number of processes increases, the impact of the context switches shows itself more and more clearly. However, there are some exceptions, which can be justified by the random nature of the simulations.
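The first of these two reasons can be verified directly from relation (3-2-1). A minimal sketch, with illustrative values not taken from the paper:

```python
def laxity(E, D):
    """Traditional MLF priority: laxity L = D - E (smaller = higher priority)."""
    return D - E

# While a process runs, E and D shrink together, so its laxity -- and
# hence its traditional-MLF priority -- never changes:
run_E, run_D = 3.0, 4.0            # illustrative running process
for t in (0, 1, 2):
    assert laxity(run_E - t, run_D - t) == laxity(run_E, run_D)

# A waiting process, by contrast, loses one unit of laxity per unit of
# waiting time, so it eventually undercuts the constant laxity of the
# running process and forces a context switch:
wait_E, wait_D = 4.0, 10.0
assert laxity(wait_E, wait_D - 2) == laxity(wait_E, wait_D) - 2
```

Under the proposed priority variable the running process's priority is not pinned in this way, which is what delays the preemption point.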
5. CONCLUSIONS AND FURTHER WORKS

The Optimized MLF process-scheduling algorithm proposed in this paper solves two main problems of the traditional MLF algorithm. First, it is more applicable to highly dynamic environments such as spacecraft avionics systems because of its ability to approximate the execution times of non-deterministic processes. Second, it increases the priorities of long processes faster than those of short processes during their residence in the ready queue of the operating system, and consequently requires fewer context switches than the traditional MLF. Mathematical modeling and simulation results demonstrate that this algorithm can improve the dependability of the RTOS which controls the spacecraft avionics system. This work can be continued by deriving analytical models to evaluate Optimized MLF and similar algorithms, by evaluating the impact of various memory system technologies and operating-system ready-queue configurations on the performance of the algorithm, and by applying the algorithm to different parallel and distributed processing architectures.